Quality by Design

Scope: QTPP definition, CQA identification and criticality assessment, CPP/CMA mapping, risk management (FMEA/HAZOP), DoE and model-building, PAT strategy, control-strategy construction (NOR/PAR/edge-of-failure), validation alignment (Stage 1–3), continued process verification (CPV), and comparability—fully integrated with eBMR/eBR and a unified digital QMS (ALCOA+).

Quality by Design is not a slogan; it’s the operating system for everything we run. At MycoVista, QbD begins before a shake flask is inoculated and remains in force after PPQ, through CPV and lifecycle change control. We write the product’s QTPP in plain language, convert it into measurable CQAs, identify the CPPs/CMAs that govern those attributes, then design experiments and controls that operators can hold and auditors can follow. The result is a process that scales without improvisation and a dossier that reads like the process actually runs.

Why QbD matters here

A design that lives only in slides fails under production physics. A dossier that lives only in prose fails under inspection. QbD is how we align the two:

  • Predictable scale-up. Designs built on mechanistic levers (oxygen transfer, residence time, shear, ionic strength) survive the jump from bench to pilot to GMP.
  • Audit-ready narratives. Each CQA has a method, an acceptance criterion, and a controlled lever (CPP/CMA) with documented ranges.
  • Resilience to change. When a site, scale, or raw material changes, a prespecified comparability plan and statistical acceptance windows keep the story coherent.
  • Shorter time to lock. Focused DoE on the two or three parameters that actually govern risk (not twenty that don’t) reduces cycles to a process description we can place into eBMR/eBR.

Vocabulary & artifacts (consistent across both hubs)

  • QTPP (Quality Target Product Profile): route/presentation, dose, potency, purity profile, safety limits, stability, shipping, and device constraints when applicable.
  • CQAs (Critical Quality Attributes): measurable attributes that must meet acceptance criteria (e.g., identity, glycan/charge windows, aggregate levels, vg titer & empty/full, Enc%, endotoxin, sterility, dsRNA, topology, residuals).
  • CPPs/CMAs (Critical Process Parameters / Material Attributes): controllable levers or inputs that affect CQAs (e.g., pH/DO/temperature, FRR/TFR/N:P, capture residence time, gradient/ionic strength, TMP/cross-flow, excipient levels).
  • NOR / PAR / Design Space: normal operating range; proven acceptable range; multi-parameter region where CQAs remain in spec with stated probability.
  • Edge-of-failure: the data-established boundary at which a CQA fails—documented before PPQ.
  • Control strategy: the package that links QTPP → CQAs → CPPs/CMAs, methods, acceptance criteria, interlocks, alarms, and investigations.
  • Risk tools: FMEA (severity/occurrence/detectability), HAZOP-style prompts for mode discovery, bow-ties where consequence modeling helps.

All artifacts are versioned in the QMS; relevant extracts are embedded in batch records.


MycoVista’s QbD framework (ten steps)

  1. State the QTPP in one page. Make the end-use constraints explicit (dose/route/presentation/stability/shipping; for SC: viscosity/device).
  2. List candidate CQAs then narrow by risk ranking (impact on safety/efficacy and regulatory sensitivity).
  3. Map CPPs/CMAs by science and prior data; draft hypothesized linkages in a simple matrix.
  4. Plan measurement fitness (MSA): specificity, range, linearity, accuracy, precision; Gage R&R for instrumented readouts; LOQ/LOD where needed.
  5. Design DoE to estimate main effects and key interactions; define replication/randomization/blocking; pre-declare models and power.
  6. Run DoE and fit models (ANOVA; when appropriate, RSM or D-optimal/mixtures); retain only effects that reduce uncertainty about CQAs.
  7. Probe edge-of-failure for the few levers that matter; document NOR/PAR and the guardband to the edge.
  8. Build the control strategy (IPCs, parametric controls, specifications, interlocks/alarms) and draft change-control triggers.
  9. Draft Stage-2 PPQ plan from the Stage-1 design space (recipe ranges, sampling, acceptance criteria, investigations).
  10. Define Stage-3 CPV (control charts, capability, periodic review) and the lifecycle plan for methods, resins/filters, raw materials.

Each step produces a named document and a data object. Nothing is only verbal.
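The QTPP→CQA→CPP traceability that steps 1–3 produce can be captured as a simple data object as well as a document. A minimal sketch, assuming illustrative CQA names, acceptance criteria, and CPP levers (none of these are actual program values):

```python
# Minimal QTPP -> CQA -> CPP traceability matrix (illustrative entries only).
# Each CQA carries its acceptance criterion, its risk-ranked criticality,
# and the CPPs hypothesized to drive it (step 3 of the framework).

matrix = {
    "aggregates": {
        "acceptance": "<= 2.0 % HMW by SEC",
        "criticality": "high",
        "cpps": ["low-pH hold time", "UF/DF TMP", "DP buffer pH"],
    },
    "charge_profile": {
        "acceptance": "main peak 55-75 % by icIEF",
        "criticality": "medium",
        "cpps": ["culture temperature", "polishing gradient"],
    },
}

def cpps_for(cqa: str) -> list[str]:
    """Return the hypothesized CPP levers for a named CQA."""
    return matrix[cqa]["cpps"]

def high_criticality() -> list[str]:
    """CQAs to prioritize for DoE and edge-of-failure work."""
    return [name for name, row in matrix.items() if row["criticality"] == "high"]
```

Kept as a data object, the matrix can be queried, versioned, and embedded in batch records rather than retyped.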


Risk management: from intuition to tables you can defend

We start with FMEA to prioritize hazards. Scores are not decoration; they inform experiments and controls.

  • Severity: tied to patient/animal risk and regulatory criticality, not sponsor preference.
  • Occurrence: estimated from platform/precedent and updated with run data.
  • Detectability: grounded in method capability (MSA and LOQ), not optimism.

Actions are assigned to reduce risk score: design changes (preferred), detection/monitoring (if physics cannot change), or procedural mitigations (last resort). We revisit the FMEA after DoE and again after PPQ.
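The severity/occurrence/detectability scoring above reduces to a risk priority number (RPN = S × O × D on 1–10 scales). A minimal sketch; the failure modes and scores below are illustrative assumptions, not a real FMEA:

```python
# FMEA risk-priority sketch: RPN = severity x occurrence x detectability,
# each scored 1 (best) to 10 (worst). Line items are illustrative only.

def rpn(severity: int, occurrence: int, detectability: int) -> int:
    for score in (severity, occurrence, detectability):
        if not 1 <= score <= 10:
            raise ValueError("FMEA scores run from 1 (best) to 10 (worst)")
    return severity * occurrence * detectability

failure_modes = [
    ("low-pH hold overshoot", 9, 3, 2),
    ("viral filter fouling", 7, 4, 3),
    ("buffer prep error", 5, 2, 2),
]

# Rank worst-first so actions target the highest-risk modes.
ranked = sorted(failure_modes, key=lambda m: rpn(*m[1:]), reverse=True)
```

Note that a high-severity mode can still rank below a moderate one if detection is strong—which is why detectability must be grounded in MSA, not optimism.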


Measurement Systems Analysis (MSA): prove the yardstick first

A design is only as good as the assays that read it.

  • Specificity/selectivity: for overlapping species (charge variants, empty/full), we show interference limits.
  • Linearity & range: anchored to planned acceptance windows; for potency, we defend the model form.
  • Precision: repeatability and intermediate precision; Gage R&R for instrument-based readouts (<10% contribution preferred; 10–20% conditionally acceptable with justification).
  • Accuracy & bias: against reference standards; for quantitative PCR/ddPCR, include inhibition checks and control curves.

MSA is not optional; if a signal cannot detect drift, it cannot guard a CQA.
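The Gage R&R acceptance bands above (<10% preferred; 10–20% conditionally acceptable) apply to the measurement system's share of total variance. A minimal sketch of that calculation, assuming variance components have already been estimated from a crossed study (the inputs are illustrative):

```python
# Gage R&R %contribution sketch: share of total observed variance that comes
# from the measurement system (repeatability + reproducibility). The decision
# bands mirror the text; the variance inputs are illustrative assumptions.

def gauge_rr_contribution(var_repeat: float, var_reprod: float,
                          var_part: float) -> float:
    """Percent of total variance attributable to the gauge."""
    var_gauge = var_repeat + var_reprod
    return 100.0 * var_gauge / (var_gauge + var_part)

def verdict(pct: float) -> str:
    if pct < 10.0:
        return "acceptable"
    if pct <= 20.0:
        return "conditionally acceptable (justify)"
    return "not acceptable"
```

For example, repeatability 0.02, reproducibility 0.01, and part-to-part 0.50 give a gauge contribution under 6%—a yardstick fit to guard a CQA.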


DoE & modeling: efficient, focused, defensible

We pick designs that minimize runs while maximizing information.

  • Screening: 2^(k-p) fractional factorials or Plackett–Burman to narrow candidate CPPs.
  • Optimization: central composite/RSM for curvature; mixture designs for lipids/buffer components; D-optimal when space is constrained.
  • Blocking/randomization: account for day/lot effects and operator changes.
  • Power & replication: pre-declared to avoid overfitting; replicate center points to estimate pure error.
  • Model selection: ANOVA and lack-of-fit tests; residual diagnostics; cross-validation where needed.
  • Effect translation: significant terms become recipe guidance; interactions become interlocks or combined specifications.

We keep equations in the file; batch recipes carry the ranges, not the polynomials.
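A screening design like the 2^(k-p) fractional factorials above can be sketched in a few lines. This is a half-fraction of three factors with generator C = A·B and main effects estimated by contrasts; the response values are illustrative, not run data:

```python
from itertools import product

# 2^(3-1) fractional factorial sketch (generator C = A*B) with main-effect
# estimation by contrasts. Responses are illustrative, not real data.

def half_fraction():
    """Four runs at coded levels -1/+1; the third factor is aliased C = A*B."""
    return [(a, b, a * b) for a, b in product((-1, 1), repeat=2)]

def main_effect(runs, responses, factor_index):
    """Average response at +1 minus average response at -1 for one factor."""
    hi = [y for run, y in zip(runs, responses) if run[factor_index] == 1]
    lo = [y for run, y in zip(runs, responses) if run[factor_index] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

runs = half_fraction()
responses = [62.0, 60.0, 71.0, 73.0]        # e.g. % step yield per run
effect_A = main_effect(runs, responses, 0)  # large effect -> keep factor A
```

The aliasing (C confounded with A·B) is the price of the half fraction—exactly why significant screening effects are confirmed before they become recipe guidance.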


PAT & sensor strategy (when data-in-time reduces risk)

We adopt PAT where it genuinely improves in-process decision-making and reduces failure rate:

  • USP: capacitance/spectroscopy soft sensors for biomass, metabolites, and titer proxies calibrated to reference analytics.
  • DSP: UV280, conductivity, pH, pressure/flow with recipe-level tolerances; pressure–time models for viral filters; TMP/cross-flow for TFF stability.
  • DP: in-line weight checks, temperature/hold timers, nitrogen overlays; headspace oxygen for oxidation-sensitive products.

PAT signals go into eBMR/eBR with interlocks and alarm logic; alarms are actionable, not informational.


Control strategy: from design space to recipe, alarms, and investigations

A control strategy is only real when it can be executed by trained staff under time pressure. We codify:

  • Recipe setpoints & ranges (NOR, PAR) for each CPP; combined limits when interactions matter.
  • IPCs & release tests with sampling plans, acceptance criteria, and OOS/OOT procedures.
  • Interlocks & alarms: what halts a step; what triggers an investigation; what requires QA disposition.
  • Materials controls: identity, CoA ranges, alternates, and comparability protocol for changes (e.g., resin lot, lipid lot).
  • Documentation hooks: references to methods, validation status, and change-control IDs within the batch record.
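Interlock and combined-limit logic of the kind listed above is code, not prose, once it reaches the recipe. A minimal sketch for a TFF step; the parameter names, thresholds, and action tiers are illustrative assumptions, not MycoVista recipe values:

```python
# Interlock/alarm sketch for interacting CPPs: a combined limit on TMP and
# cross-flow becomes an executable action tier. All values are illustrative.

def tff_interlock(tmp_psi: float, crossflow_lpm: float) -> str:
    """Return the action tier for a TFF step given TMP and cross-flow."""
    if tmp_psi > 25.0:
        return "halt step; QA disposition required"
    if tmp_psi > 20.0 and crossflow_lpm < 3.0:
        return "alarm; open investigation"
    return "continue"
```

The middle tier is the combined specification: neither parameter alone is out of range, but the interaction identified in DoE makes the pairing actionable.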

Edge-of-failure & worst-case exercises

We deliberately run near failure for the few parameters that matter most, so PPQ isn’t the first time you see it.

  • Design: one-at-a-time or small factorials around the suspected edge; include realistic disturbances (utility shifts, feed lot changes).
  • Outcome: distance-to-edge documented (guardband); corrective actions defined.
  • File: plots and acceptance calls live in the control-strategy report; recipes inherit the guardbands.

Validation alignment (FDA/EMA): Stage-1 → Stage-2 (PPQ) → Stage-3 (CPV)

  • Stage-1 (Process Design): QTPP→CQA→CPP mapping done; DoE, edge-of-failure, and control strategy documented; recipe/ranges ready.
  • Stage-2 (PPQ): protocol with lots, sampling, acceptance criteria, and investigations; execution under routine controls; summary with mass balance, capability, and deviations.
  • Stage-3 (CPV): control charts (I-MR, X-bar/R) for key CQAs/CPPs; capability indices (Cp, Cpk) where specs are two-sided; periodic review cadence; change-control triggers.

Continued Process Verification is not a binder—it is a live dashboard that QA reviews on a schedule.


Comparability (planned before change)

Site, scale, raw-material, or process changes are managed by prespecified protocols:

  • Design: equivalence windows, lot numbers, sample sizes, statistical tests (e.g., TOST for equivalence), and orthogonal confirmations where they reduce uncertainty.
  • Execution: run side-by-side when feasible; otherwise, matched historical controls with risk-based justification.
  • Outcome: documented sameness (or managed difference) with regulatory text pre-drafted.
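The TOST logic referenced above declares equivalence only when both one-sided tests reject: the difference is shown to be above −margin and below +margin. A minimal sketch using a large-sample z approximation (a t-based version would be used in practice; inputs are illustrative):

```python
from statistics import NormalDist

# TOST equivalence sketch (large-sample z approximation). Equivalence is
# claimed only when BOTH one-sided tests reject at alpha.

def tost_equivalent(diff: float, se: float, margin: float,
                    alpha: float = 0.05) -> bool:
    """True if diff is statistically within (-margin, +margin)."""
    z_lower = (diff + margin) / se            # H0: diff <= -margin
    z_upper = (diff - margin) / se            # H0: diff >= +margin
    p_lower = 1.0 - NormalDist().cdf(z_lower)
    p_upper = NormalDist().cdf(z_upper)
    return max(p_lower, p_upper) < alpha
```

Note the asymmetry with a standard significance test: a noisy comparison that "fails to find a difference" does not pass TOST—wide uncertainty defeats the equivalence claim, which is the behavior a comparability protocol needs.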

Dual-hub harmonization (San Diego & Montréal)

  • Shared methods and standards with scheduled cross-site checks.
  • Method transfer protocols with predefined equivalence metrics (bias/precision/linearity/specificity).
  • Mirrored batch records and identical alarm/interlock logic.
  • Cross-site CPV views so drift or divergence is visible early.

One truth follows the molecule across both hubs.


Technical annexes (examples by modality)

A) mAbs & Fc-fusions

  • CQAs: glycan window (e.g., afucosylation band), charge profile (icIEF), aggregates (SEC-MALS), potency (cell-based), residuals (HCP/DNA/Protein A), viral safety.
  • CPPs: feed/temperature/osmolality; perfusion residence time where used; capture loading/residence; polishing gradient/ionic strength; low-pH inactivation time/temperature; UF/DF TMP/cross-flow; DP buffer and filtration ΔP/T.
  • DoE highlights: two-level factorial on feed/temperature/osmolality → RSM; residence time versus glycan window; Protein A loading vs leachables/cleanability; polishing selectivity for charge “tails.”
  • Edge-of-failure: low-pH hold and neutralization timing; viral filter pressure collapse modeling; UF/DF shear thresholds.

B) Microbial enzyme

  • CQAs: specific activity, purity, endotoxin, residual DNA/protein, stability at use pH/temperature.
  • CPPs: carbon uptake (overflow metabolism guard), induction temperature, pH, oxygen transfer (kLa), lysis & clarification chemistry, capture/polish ionic strength, UF/DF recipe, formulation pH/excipients.
  • DoE highlights: uptake vs acetate; induction temperature × pH; clarification chemistry vs filter capacity; HIC/IEX polishing interactions.
  • Edge-of-failure: filter blinding at scaled flux; refold kinetics (if IB route) with mixing profiles.

C) AAV & pDNA

  • CQAs: vg titer (qPCR/ddPCR), empty/full distribution (primary + orthogonal), potency, residuals (host DNA/protein/nuclease/detergents), sterility/endotoxin/mycoplasma; for pDNA: topology (SC/OC/L), residual RNA/gDNA, endotoxin.
  • CPPs: production mode (adherent/suspension/fixed-bed), transfection conditions, pH/DO/mixing, nuclease time/units, capture binding/residence, polishing gradient, TFF TMP/cross-flow, DP buffer and filtration ΔP/T.
  • DoE highlights: transfection stoichiometry × osmolality; nuclease effectiveness; charge-based empty/full; TFF recovery vs TMP/cross-flow.
  • Edge-of-failure: filtration feasibility limits; nuclease residuals; AEX capture breakthrough.

D) LNPs (mRNA/siRNA/DNA)

  • CQAs: size/PDI, Enc%, nucleic-acid integrity (and dsRNA for mRNA), residual solvent, osmolality/pH, potency, sterility/endotoxin; for DNA: topology.
  • CPPs: FRR/TFR, N:P, temperature at mixing, quench/dilution chemistry, TFF TMP/cross-flow/diavols, filtration ΔP/T, DP buffer.
  • DoE highlights: FRR × TFR × temperature; TFF shear vs Enc%/size; filtration recovery vs ΔP/T and pore chemistry.
  • Edge-of-failure: Enc% loss at high shear or temperature; size drift at high TFR; dsRNA growth under poor solvent removal.

Statistics and acceptance logic (how we call pass/fail)

  • Capability: Cp/Cpk where two-sided limits exist; guardbanding when uncertainty eats margin.
  • Control charts: I-MR for individual CQAs; X-bar/R for grouped data; pre-declared rules (e.g., Western Electric).
  • Equivalence testing: TOST for comparability when “no relevant difference” is the claim.
  • Sampling: lot-linked sampling plans sized to the decision, not tradition; for PPQ, power to detect relevant drifts.
  • Investigations: prewritten decision trees for OOS/OOT; root cause tied to CPPs/CMAs; CAPA with effectiveness checks.
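The Cp/Cpk indices above separate spread from centering: Cp compares the spec width to 6σ, while Cpk penalizes an off-center mean. A minimal sketch with illustrative spec limits and data:

```python
from statistics import mean, stdev

# Process-capability sketch: Cp ignores centering; Cpk takes the nearer
# spec limit. Spec limits and data values are illustrative.

def capability(values, lsl: float, usl: float):
    """Return (Cp, Cpk) for two-sided specs from sample mean and stdev."""
    mu, sigma = mean(values), stdev(values)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk
```

When the process is perfectly centered Cpk equals Cp; any shift toward a limit drops Cpk below Cp—which is why guardbanding matters when measurement uncertainty eats into the margin.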

Digital implementation (QMS + eBMR/eBR + LIMS/ELN)

  • eBMR/eBR: recipes, setpoints, interlocks, alarms, IPC sampling, and sign-offs embedded; deviations link to CAPA automatically.
  • LIMS & ELN: barcoded chain-of-custody, instrument interfaces, audit trails; method versions and validation status visible at run time.
  • ALCOA+: attributable, legible, contemporaneous, original, accurate; plus complete, consistent, enduring, and available—enforced by system design, not memory.

Program Onboarding (first 30 days)

  1. QTPP & CQA baseline (1–2 pages) with initial criticality assessment and a draft acceptance table.
  2. Risk & measurement plan: FMEA v0; MSA/fitness plan for each assay, including Gage R&R where relevant.
  3. Focused DoE plan: factors, ranges, replication, power; predefined models and success criteria.
  4. Draft control strategy: early hypothesis of CPPs, IPCs, release specs, alarm/interlock logic; edge-of-failure plan for the top two risks.
  5. Validation alignment: Stage-1→Stage-2 mapping, PPQ lot count rationale, and CPV chart set; comparability skeleton for likely changes.

We request existing data (design, prior runs, release panels, deviations/stability). We return a written plan with dates, experiments, and pass criteria that QA can sign.


Typical timelines (indicative; program-dependent)

  • Weeks 0–2: QTPP/CQA table; MSA plan; FMEA v0; DoE protocols drafted.
  • Weeks 3–8: DoE execution and model fitting; MSA execution on first-tier assays; edge-of-failure on 1–2 levers.
  • Weeks 9–12: Control-strategy freeze (NOR/PAR; interlocks/alarms); validation/PPQ protocol drafting; CPV dashboard frames.
  • Post-lock: engineering runs under recipe; PPQ when evidence supports.

We publish gates and pass criteria; biology and physics set the pace.


Deliverables (what you receive)

  • QTPP ↔ CQA ↔ CPP/CMA matrix with criticality and rationale.
  • MSA package (method fitness; Gage R&R where relevant).
  • DoE reports with models, diagnostics, and factor effect tables.
  • Edge-of-failure dossier and NOR/PAR/design space declaration.
  • Control strategy (IPCs, specs, interlocks/alarms, investigation trees).
  • Validation alignment (Stage-1 summary, PPQ protocol, CPV plan).
  • Comparability protocol templates for site/scale/material changes.
  • Regulatory text stubs for IND/IMPD/BLA CMC sections.
  • eBMR/eBR recipes linked to QMS documents; CPV dashboards.

Frequently asked (direct answers)

Do you do “full” design spaces? Where models are stable and risk reduction is clear. Otherwise, we publish NOR/PAR with guardbands and keep the story honest.
Will you use our sponsor assays as-is? If fitness is proven by MSA and phase-appropriate qual/val, yes. Gaps are documented and closed before PPQ.
How many PPQ lots? Risk-based. We justify lot count with Stage-1 evidence, control-strategy maturity, and intended market scope.
What if a critical raw material changes? A prespecified comparability protocol (with equivalence windows and orthogonal checks) runs before adoption.
How do you ensure cross-site consistency? Method transfer protocols with equivalence metrics, mirrored recipes and alarm logic, shared standards, cross-site CPV.


Summary

QbD at MycoVista is concrete: a one-page QTPP, a defendable CQA list, a short list of CPPs/CMAs that truly matter, focused DoE, edge-of-failure evidence, and a control strategy that lives in the batch record—with validation and CPV already aligned. That’s how we move from design to data to decision—consistently, in San Diego and Montréal.

MycoVista | San Diego, CA & Montréal, Canada
Start Program Onboarding → Share current design, assays, and your top two process risks. We’ll return a QbD plan (QTPP/CQA/CPP map, DoE, and control-strategy outline) with dates and pass criteria.

EN / FR support available.