PPQ & Process Validation — From Stage 1 Design to Stage 3 Control

Scope: End-to-end process validation aligned to FDA/EMA expectations: Stage 1 (Process Design), Stage 2 (Process Performance Qualification—PPQ), and Stage 3 (Continued Process Verification—CPV). Coverage includes protocol strategy, batch definitions, sampling plans and statistics, boundary/worst-case conditions, unit-operation validations (viral safety, sterile filtration, hold-time, cleaning, leachable risk), aseptic simulations/media fills, equipment & utilities qualification (URS/DQ/IQ/OQ/PQ), computerized systems (CSV/CSA) interfaces, raw-material and supplier control, resin/membrane lifecycle, dual-site comparability, deviation/CAPA governance, and submission-ready reporting—executed in a unified digital QMS (ALCOA+) with eBMR/eBR and LIMS/ELN integration.

A validated process is not a single event. It is a demonstrated chain: a design that explains itself, execution that behaves under stress, and monitoring that catches drift. We run PPQ only after ranges, interlocks, and alarms are real, and we prove that routine operations can hold those ranges without improvisation—day shift or night.

MycoVista Biotech Services

Why teams choose MycoVista for PPQ

  • Design → Recipe → Evidence. We translate Stage-1 design spaces into Stage-2 recipes with NOR/PAR, interlocks, and alarm logic embedded in eBMR/eBR—then collect Stage-3 signals by default.
  • Operator-holdable ranges. We do not “lock” tight, fragile settings. We publish guardbands to the edge-of-failure so execution has headroom.
  • Unit-operation validation that matters. Viral safety, sterilizing filtration, hold-times, cleaning validation, AEX/DSP lifecycles, and aseptic simulations are sized to the actual risk—documented and reproducible.
  • Dual-hub sameness. San Diego and Montréal run mirrored SOPs, recipes, and method versions; cross-site CPV shows drift early and forces convergence.
  • Submission-ready outputs. Protocols, executed records, statistics, deviation/CAPA narratives, and PPQ summary reports align to FDA/EMA expectations and slot directly into CMC.

Background: Why PPQ fails

Predictable failure modes are well known: (1) Stage-1 design never became an operator-holdable recipe, (2) sampling plans were symbolic rather than informative, (3) boundary conditions were never proven, (4) unit-operation validations (sterile filter, holds, cleaning) were either missing or unrealistic, (5) deviations during PPQ were closed with patchwork rather than root cause. Our approach removes these failure points before the first PPQ lot is scheduled.

Program spine: Stage 1 → Stage 2 → Stage 3 (and the documents that prove it)

Stage 1 — Process Design

  • QTPP → CQA list with criticality rationale.
  • CPP/CMA map, DoE results, models (if used), edge-of-failure evidence, and declared NOR/PAR/design space.
  • Draft control strategy with IPCs, release specifications, interlocks, and alarms.

Stage 2 — Process Performance Qualification (PPQ)

  • PPQ Protocol: scope, batch count and justification, equipment & line configurations, raw-material lots and alternates, sampling plans & statistics, acceptance criteria, boundary/worst-case challenges, and predeclared investigations.
  • Executed PPQ: lots run under routine controls (not technician heroics), full traceability, deviations linked to CAPA, and contemporaneous data integrity.
  • PPQ Report: capability and reproducibility narrative with mass balance, recovery/purity statistics, residuals and safety outcomes, and validation addenda (filtration, viral safety, holds, cleaning, media fills).

Stage 3 — Continued Process Verification (CPV)

  • Control charts and capability indices for key CQAs/CPPs, resin/membrane lifecycle trending, EM/utilities trending, and periodic review cadence—cross-site by default.
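The control charting above can be made concrete with a minimal sketch: individuals-chart (I-MR) limits for a CPV-trended CQA. The purity values are hypothetical, and the 2.66 constant (3/d2, with d2 = 1.128) assumes span-2 moving ranges on an individuals chart.

```python
# Sketch: individuals (I-MR) control limits for a CPV-trended CQA.
# Illustrative only; a production CPV system would also apply run rules
# and recompute limits under change control.
def imr_limits(values):
    """Return (center, lcl, ucl) for an individuals control chart."""
    n = len(values)
    center = sum(values) / n
    # Average moving range between consecutive lots
    mr_bar = sum(abs(values[i] - values[i - 1]) for i in range(1, n)) / (n - 1)
    return center, center - 2.66 * mr_bar, center + 2.66 * mr_bar

# Hypothetical purity results (%) from sequential lots
purity = [98.1, 98.3, 97.9, 98.2, 98.0, 98.4, 98.1]
center, lcl, ucl = imr_limits(purity)
# A lot outside (lcl, ucl) would trigger the CPV investigation SOP.
```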

All three stages live in the QMS, and references are embedded in eBMR/eBR so reviewers can follow the breadcrumb trail in minutes.

PPQ Protocol: what we write before we run

  1. Products & sites covered — molecule(s), presentation(s), and which hub(s) execute.
  2. Batch count & rationale — risk-based lot count tied to Stage-1 maturity, demand, and regulatory region (we state why the count is sufficient).
  3. Equipment & configuration — reactors, skids, filters, columns; version/serials; any shared assemblies; clean/dirty trains; URS references.
  4. Materials & vendors — critical raw materials with lot plans; alternates and triggers for comparability; resin/membrane lot plans and lifespan assumptions.
  5. Sampling plan & statistics — IPC and release sampling densities with statistical intent: confidence, detection limits, and capability targets (Cp/Cpk).
  6. Boundary/worst-case — elevated bioburden challenges (where appropriate), highest practical loadings, low-end residence times, tightest filter flux, longest qualified holds, high-viscosity DP, etc.
  7. Acceptance criteria — numeric pass/fail tied to QTPP/CQAs; which metrics are per-lot and which are campaign-level.
  8. Deviations & investigations — predeclared decision trees (halt vs continue vs rework), OOS/OOT governance, and CAPA expectations; what constitutes a failed PPQ.
  9. Deliverables list — executed batch records, certificates, raw data, statistical analyses, and validation reports (sterile filtration integrity, viral safety, holds, cleaning, media fills).

Sampling plans and statistics (make the data speak)

  • IPC samples sized to detect trend and drift (e.g., at-line titer and metabolites in USP; pressure-throughput for filtration; UV/conductivity for DSP).
  • Release testing powered to demonstrate capability; we use lot-by-lot metrics and pooled summaries carefully, with rationale.
  • Capability indices (Cp/Cpk) reported where two-sided specs apply; for one-sided limits (e.g., endotoxin), we use appropriate estimators and confidence.
  • Equivalence and comparability (TOST) when moving lots across hubs or materials across vendors.
  • Graphing & diagnostics beyond pass/fail tables: run charts, control charts, and residual plots (for model-based elements) are included in the PPQ report.
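To make the capability math concrete, a minimal sketch of Cp/Cpk with hypothetical potency data; a real PPQ report would also state the estimator choice (overall vs. within-lot) and report confidence intervals.

```python
import statistics

def cpk(values, lsl=None, usl=None):
    """Capability from lot results.

    Two-sided specs return (Cp, Cpk); a single limit returns the
    one-sided index (e.g., Cpu for an upper-only limit like endotoxin).
    """
    mean = statistics.mean(values)
    sd = statistics.stdev(values)  # sample (overall) standard deviation
    cpu = (usl - mean) / (3 * sd) if usl is not None else None
    cpl = (mean - lsl) / (3 * sd) if lsl is not None else None
    if cpu is not None and cpl is not None:
        cp = (usl - lsl) / (6 * sd)
        return cp, min(cpu, cpl)
    return cpu if cpu is not None else cpl

# Hypothetical potency results against a 90-110% specification
potency = [99.2, 101.5, 100.1, 98.7, 100.9, 99.8]
cp, cpk_val = cpk(potency, lsl=90.0, usl=110.0)
```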

Boundary and worst-case confirmation (before PPQ completion)

We prove that guardbands are real by testing them according to plan:

  • USP: lower/upper feed rates and viable ranges, low DO boundary, high antifoam scenario; for perfusion, shortest viable residence time.
  • DSP: maximum planned capture loading and lowest practical residence time; polishing at high/low ionic strength; prefilter loading at edge flux; viral filter at validated pressure-time.
  • LNPs/vectors: highest feasible TFR/FRR/N:P within design, TFF at upper TMP/cross-flow; empty/full separation at narrowest gradient window to be used in production.
  • DP: sterile filtration at upper ∆P and lowest product temperature; lyo cycle at narrowest shelf-temp margin; for PFS, highest concentration/viscosity within design and device limits.

We document performance, investigations (if any), and any re-centered ranges.

Unit-operation validations that matter

Sterilizing filtration (solution & DP)

  • Feasibility and recovery on real bulk, not surrogates.
  • Integrity testing: pre- and post-use (and PUPSIT where applicable) with acceptance logic in the batch record.
  • Parameter guardbands: ∆P, temperature, and flow; prefilters justified; hold rules enumerated.

Viral safety (biologics)

  • Low-pH inactivation: kinetics on true intermediate with neutralization windows defined.
  • Viral filtration: filter choice, sizing, and pressure–time modeling to avoid flux collapse; prefilter loads characterized.
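One common sizing approach consistent with the pressure-time characterization above is the Vmax (gradual pore-plugging) model, t/V = t/Vmax + 1/Qi, fitted to constant-pressure throughput data. The sketch below is illustrative, not our specific model, and the data points are hypothetical.

```python
# Sketch: Vmax-style filter sizing from small-scale throughput data,
# assuming the gradual pore-plugging model t/V = t/Vmax + 1/Qi.
def vmax_fit(times_min, volumes_ml):
    """Least-squares fit of t/V vs t; returns (Vmax_ml, Qi_ml_per_min)."""
    x = times_min
    y = [t / v for t, v in zip(times_min, volumes_ml)]
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return 1 / slope, 1 / intercept  # Vmax = 1/slope, Qi = 1/intercept

# Hypothetical constant-pressure run: cumulative volume (mL) at each time (min)
vmax, qi = vmax_fit([1, 2, 5, 10], [90.91, 166.67, 333.33, 500.0])
```

A filter area would then be sized from batch volume divided by Vmax per unit area, with a safety factor; the flux-collapse risk shows up as a small Vmax relative to batch volume.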

Hold-time studies

  • Intermediates across USP/DSP/DP: time, temp, and container defined; acceptance linked to CQAs; re-qualification triggers stated.

Cleaning validation

  • Worst-case soils and MACO/PDE math; swab/rinse recoveries validated; periodic verification schedule defined; change-control hooks (new soils, solvent changes).
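The MACO/PDE math can be sketched as follows. All numbers (PDE, batch size, daily dose, shared surface area, swab area) are hypothetical placeholders, not program values.

```python
# Sketch: PDE-based MACO and a per-swab acceptance limit.
def maco_mg(pde_mg_per_day, min_batch_size_mg_next, max_daily_dose_mg_next):
    """Maximum Allowable Carryover (mg) into the next product."""
    return pde_mg_per_day * min_batch_size_mg_next / max_daily_dose_mg_next

def swab_limit_ug(maco, shared_area_cm2, swab_area_cm2=25.0):
    """Acceptance limit per swab (ug), assuming uniform carryover distribution."""
    return maco * 1000.0 * swab_area_cm2 / shared_area_cm2

m = maco_mg(pde_mg_per_day=0.005,
            min_batch_size_mg_next=50_000_000,   # 50 kg next-product batch
            max_daily_dose_mg_next=500)
limit = swab_limit_ug(m, shared_area_cm2=150_000)  # 15 m2 shared train
```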

Extractables/Leachables

  • Assessment for single-use flow paths and DP container closure; where risk warrants, simulation studies aligned to contact conditions.

Aseptic simulations

  • Design reflects interventions and maximum holds; replicates by line and configuration; trend across time.

Equipment and utilities (qualification you can defend)

  • URS/DQ/IQ/OQ/PQ documented for reactors, skids, lyophilizers, fillers, and analytical systems; calibration/PM schedules live in QMS.
  • Utilities: HPW/WFI-equivalent (as applicable), clean steam, compressed air, gases—mapping, alert/action limits, trending, and investigation SOPs.
  • Environmental monitoring: viable/non-viable plans by room/class; trending in CPV; links to deviation/CAPA.

Materials, vendors, and lifecycle

  • Supplier qualification: audits where risk warrants; quality agreements; incoming testing & CoA verification plans; barcode chain-of-custody.
  • Critical raw materials: alternates defined with comparability triggers; lipid/resin/filter lot plans; enzyme/nuclease control for vector/LNP programs.
  • Resin/membrane lifecycle: cycle count, cleaning recipe validation, performance trending, and end-of-life criteria; PPQ includes multiple cycles if lifecycle claims are made.

Dual-hub comparability (San Diego ↔ Montréal)

  • Method transfers with predefined equivalence metrics; shared standards and cross-checks scheduled.
  • Recipe parity: identical setpoints, ranges, interlocks, and alarms in eBMR/eBR; differences documented and justified if unavoidable.
  • CPV visibility: site-by-site charts; drift detection and convergence actions agreed with QA.
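The TOST equivalence testing used for cross-hub comparisons can be sketched via its confidence-interval formulation: sites are equivalent if the (1 - 2*alpha) CI of the mean difference sits inside the margin. The titers and the ±0.5 g/L margin below are illustrative, and the normal approximation stands in for a t-based interval.

```python
from statistics import NormalDist, mean, stdev

def tost_equivalent(a, b, delta, alpha=0.05):
    """TOST via the (1 - 2*alpha) CI of the mean difference.

    Equivalent if the 90% CI (alpha=0.05) of mean(a) - mean(b) lies
    entirely within +/-delta. Normal approximation for brevity.
    """
    diff = mean(a) - mean(b)
    se = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
    z = NormalDist().inv_cdf(1 - alpha)
    lo, hi = diff - z * se, diff + z * se
    return -delta < lo and hi < delta

# Hypothetical titers (g/L) from San Diego vs Montréal lots
sd_lots = [5.1, 5.3, 5.0, 5.2, 5.1, 5.2]
mtl_lots = [5.0, 5.2, 5.1, 5.3, 5.0, 5.1]
result = tost_equivalent(sd_lots, mtl_lots, delta=0.5)  # True here
```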

Deviation, CAPA, and investigations during PPQ

  • Deviations use predeclared decision trees; containment, root cause, and risk assessed against CQA protection.
  • CAPA includes effectiveness checks with scheduled verification.
  • Failed PPQ is defined up front (what fails a lot; what fails the campaign). We prefer clarity over negotiation.

Computerized systems (CSV/CSA) at the PV interface

  • Part 11/Annex 11 controls: audit trails, e-signatures with meaning, access control, and time synchronization.
  • CSA risk-based validation for configured workflows; vendor docs leveraged; we test what we configure.
  • Data flow: instrument → middleware → LIMS → QMS/eBMR; reconciliation and exception handling validated.
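A minimal sketch of the reconciliation step at that interface; the field names (sample_id, analyte, value) are assumptions, and a validated implementation would also compare units, timestamps, and audit-trail metadata.

```python
# Sketch: instrument -> LIMS reconciliation with exception flagging.
def reconcile(instrument_rows, lims_rows):
    """Return exceptions: missing in LIMS, missing at source, value mismatch."""
    src = {(r["sample_id"], r["analyte"]): r["value"] for r in instrument_rows}
    dst = {(r["sample_id"], r["analyte"]): r["value"] for r in lims_rows}
    exceptions = []
    for key, val in src.items():
        if key not in dst:
            exceptions.append(("missing_in_lims", key))
        elif dst[key] != val:
            exceptions.append(("value_mismatch", key))
    exceptions += [("missing_at_source", k) for k in dst if k not in src]
    return exceptions
```

Each exception would be dispositioned through the QMS rather than silently corrected, preserving the contemporaneous record.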

What your PPQ Report contains

  • Scope & rationale (including batch count justification).
  • Executed protocols with lot-by-lot results, IPC trends, mass balance, yields, purity/residuals, and safety.
  • Statistics: capability indices, run charts/control charts, boundary-condition outcomes, and equivalence (if site/material changes).
  • Validation annexes: sterilizing filtration, viral safety, hold-time, cleaning, E/L risk, media fills, equipment/utilities summaries.
  • Deviations/CAPA: narratives with root cause and effectiveness; impact on validation conclusion.
  • Conclusion that mirrors the control strategy and states residual risks and lifecycle monitoring.

QA signs; the document is submission-ready.

Program Onboarding (first 30 days)

  1. Validation Master Plan (VMP) excerpt for your program: scope, ownership, timelines, and change-management links.
  2. PPQ protocol shell tailored to modality (biologic, microbial enzyme, AAV/pDNA, LNP, DP, or small molecule).
  3. Sampling & statistics plan: IPC density, release panel power, capability targets, and boundary tests.
  4. Unit-op validation matrix: filtration, viral safety, holds, cleaning, media fills, E/L, and lifecycle studies with responsibilities and dates.
  5. Equipment/utilities status: URS through PQ status; gaps and close plans.
  6. Materials plan: critical raw materials, alternates, comparability triggers, resin/membrane lifecycle.
  7. CPV dashboard frame: CQAs/CPPs to chart from first engineering lot.

You provide: latest process description, control strategy, recent lots, deviations/stability history, and target filing dates.
We deliver: a signed plan with protocol IDs, sampling tables, and a calendar QA can approve.

Indicative timelines

  • Weeks 0–2: VMP excerpt; protocol shell; sampling/statistics finalized; unit-op validation plans issued; equipment/utilities gap list.
  • Weeks 3–6: Boundary tests, filtration/hold/cleaning studies, media-fill cycle (if applicable); eBMR recipes and alarms locked.
  • Weeks 6–12+: PPQ lot execution (sequential or overlapping as justified); real-time trending; deviation/CAPA closeouts.
  • +2–4 weeks post-lots: PPQ report with statistics, annexes, and regulatory summary text.

Deliverables

  • VMP excerpt and PPQ protocol(s) with sampling & statistics tables.
  • Executed batch records (eBMR/eBR), IPC trends, mass-balance and recovery/purity tables.
  • Validation reports: sterilizing filtration, viral safety, hold-time, cleaning, E/L (as applicable), media fills.
  • Equipment & utilities qualification summaries; calibration & PM records.
  • Materials qualification & lifecycle files; resin/membrane logs.
  • Deviation/CAPA dossiers with effectiveness checks.
  • PPQ Summary Report and Regulatory text for IND/IMPD/BLA updates.
  • CPV dashboards initialized and handed to QA.

Frequently asked questions

How many PPQ lots? Risk-based—often three, but justified by Stage-1 maturity, process complexity, and market scope. We state the rationale explicitly.

Can PPQ lots overlap? Yes, if Stage-1 evidence and risk allow; we document the rationale and maintain independence of data.

Do boundary tests have to be in PPQ lots? Not always—pre-PPQ engineering or targeted validation runs may suffice; we document the linkage.

What fails a PPQ campaign? Predefined: critical CQA failures without justified investigations, systemic deviations, or inability to hold the control strategy. We prefer clarity to debate.

How do you demonstrate cross-site sameness? Transfer protocols with equivalence metrics, matched recipes/alarms, shared standards, and CPV comparisons with corrective actions for divergence.

Can we change a resin or lipid post-PPQ? Yes—with comparability and, if needed, limited re-validation; protocolized and justified before adoption.

Conclusion

PPQ is credible when Stage-1 design becomes an operator-holdable recipe and Stage-3 control proves the process stays in family. Our role is to make those links explicit: numeric ranges with guardbands, unit-operation validations that reflect risk, sampling plans that speak statistically, and reports that read cleanly. That is how we bring a process from design to data to decision—consistently.

Start Program Onboarding → Share your process description, control strategy, and target filing dates. We’ll return a PPQ plan, sampling/statistics, and a validation calendar.