
Why "Validated" Is a State You Have to Keep Earning

  • 1 day ago
  • 5 min read

There's a version of process validation that most organisations are very familiar with. Protocols are written, process performance qualification (PPQ) batches are executed, data is reviewed, and a report is produced that formally declares the process validated. The file goes into the quality system, the process moves into routine manufacturing, and validation is considered done.


The word "done" is where the problem starts.


Validation isn't a certificate you receive at the end of a qualification exercise. It's a description of the current relationship between your process and its state of control. And that relationship changes, gradually, continuously, and often without anyone making a single decision that looks obviously wrong.


Process Validation Review.

What Drift Actually Looks Like


The reason process drift is difficult to manage is that it doesn't announce itself. It accumulates through a series of small, individually unremarkable changes that each sit comfortably within accepted limits.


A raw material supplier ships a batch with slightly different particle size distribution, within specification but toward the edge of the historical range. An operator adjusts their technique slightly to compensate for a piece of equipment that's running a little differently than it used to. A process parameter gets nudged during a batch to hit yield targets, staying within its validated range but moving away from the centre. A cleaning cycle runs slightly longer than typical because of a scheduling gap.


None of these events trigger a deviation. None of them fail a specification. Individually, each one is defensible and probably correct. But each one also moves the process incrementally away from the conditions under which it was originally validated. The margin between where the process is operating and where it starts to produce unacceptable outcomes narrows, and nobody has made a single decision that anyone could point to as the moment things went wrong.


This is what process drift looks like in practice. Not a failure, not a deviation, just a slow accumulation of small shifts that the system wasn't designed to see as connected.
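The arithmetic of this accumulation is worth seeing concretely. The sketch below, with entirely invented numbers (a hypothetical assay spec of 95.0 to 105.0 with a validated centre of 100.0), simulates a series of individually defensible nudges to the operating mean. Every batch passes specification, yet the cumulative shift quietly eats into the margin.

```python
import random

random.seed(7)

# Hypothetical assay specification (% label claim) and validated centre.
# These numbers are illustrative, not from any real process.
SPEC_LO, SPEC_HI = 95.0, 105.0
TARGET = 100.0

# Each event nudges the operating mean by a small, individually
# defensible amount: a supplier change, a technique tweak, a parameter
# adjustment. None would trigger a deviation on its own.
nudges = [0.3, 0.4, 0.2, 0.5, 0.3, 0.4]

mean = TARGET
for nudge in nudges:
    mean += nudge                       # each shift is tiny and within limits
    batch = random.gauss(mean, 0.5)     # batch result around the drifted mean
    assert SPEC_LO <= batch <= SPEC_HI  # every batch still passes spec

print(f"Cumulative shift from validated centre: {mean - TARGET:+.1f}")
print(f"Remaining margin to upper spec limit:   {SPEC_HI - mean:.1f}")
```

Six small shifts of a few tenths each move the process more than two units off its validated centre, with nothing in a pass/fail record to show for it.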


The PPQ Gap


Part of what makes this difficult is the structural disconnect between how processes are run during PPQ and how they're run during routine manufacturing, and it's worth being specific about what that disconnect looks like.


During PPQ, the intensity of scrutiny is high by design. Sampling plans are extensive, monitoring is detailed, and variability is examined carefully across multiple batches under conditions designed to stress-test consistency. The whole point is to understand how the process behaves at its edges as well as its centre.


Once routine manufacturing begins, that intensity recedes. Monitoring becomes more selective and more focused on confirming that specifications are met rather than understanding the shape of the data within those specifications. Variability that sits within predefined limits tends to get accepted rather than examined. The analytical depth that characterised PPQ doesn't carry over into day-to-day operation, and there's a logic to that from an efficiency perspective.


The consequence is that the process is most thoroughly understood at the moment it enters routine manufacturing and progressively less well understood as time passes and conditions evolve. Continued process verification (CPV) is supposed to bridge that gap, but in practice it often functions as a monitoring programme that confirms specifications are being met rather than as a genuine ongoing assessment of whether the validated state is being maintained.


Why "Within Specification" Isn't the Same as "In Control"


This is probably the distinction that matters most and gets made least often.


A process can be meeting every specification on every batch while simultaneously drifting away from its validated state. Specifications define the boundary between acceptable and unacceptable product. They don't define the conditions under which the process reliably and consistently produces product well within that boundary. Those conditions are what validation actually characterises, and they're not the same thing as the specification limits.


When a process starts producing results that cluster toward one end of the specification range, or when batch-to-batch variability starts increasing even though every batch still passes, those are signals that something about the process is changing. A monitoring programme that's designed to confirm pass or fail won't necessarily detect them, because the batches are still passing. But the process is telling you something, and if the system isn't set up to hear it, that information gets lost.
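Standard control-chart run rules are one way a monitoring programme can be set up to hear that signal. The sketch below applies one Western Electric-style rule, flagging a shift when eight consecutive results fall on the same side of the validated centre, to illustrative data in which every batch comfortably passes a hypothetical 95.0 to 105.0 specification. The spec check sees nothing; the run rule does.

```python
# Hypothetical specification limits and validated centre (illustrative only).
SPEC_LO, SPEC_HI = 95.0, 105.0
CENTRE = 100.0

# Recent batch results: all pass specification, but all cluster above centre.
results = [100.9, 101.2, 100.7, 101.5, 101.1, 100.8, 101.3, 101.0]

passes_spec = all(SPEC_LO <= r <= SPEC_HI for r in results)

def run_rule(values, centre, run_length=8):
    """True if `run_length` consecutive values sit on one side of centre."""
    run = 0
    last_side = 0
    for v in values:
        side = 1 if v > centre else -1 if v < centre else 0
        run = run + 1 if side == last_side and side != 0 else 1
        last_side = side
        if run >= run_length:
            return True
    return False

shift_detected = run_rule(results, CENTRE)
print(f"All within specification: {passes_spec}")    # True
print(f"Run rule flags a shift:   {shift_detected}") # True
```

The point is not this particular rule; it's that pass/fail monitoring and shift detection are different questions, and a programme designed only for the first will answer "fine" to data that should be answering "moving".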


This is the gap that tends to become visible during inspection. Not a specification failure, not a validation protocol that wasn't followed, but a pattern in the ongoing data that suggests the process is operating differently from how it was when validation was completed, and an absence of evidence that the organisation noticed or acted on it.


What Ongoing Validation Actually Requires


The shift in expectation from regulators is sometimes framed as moving goalposts, but it's more useful to think of it as a clarification of what validation was always supposed to mean.


A validated process was never intended to mean a process that passed a qualification exercise at a point in time. It was intended to mean a process that is understood well enough to be reliably controlled. Maintaining that state requires actively monitoring whether that understanding remains current as conditions evolve, and being willing to reassess validation assumptions when the data suggests they may no longer hold.


In practice that means treating CPV data as something to be interpreted rather than just collected. It means asking not only whether batches are passing but whether the process capability is stable, whether variability is trending in any direction, and whether the relationship between process parameters and product quality looks the same as it did during PPQ. It means connecting change control decisions to validation assumptions rather than treating them as separate workstreams. And it means being willing to trigger a reassessment when the data suggests the process has moved, rather than waiting for a specification failure to make the case.


When the Validated State Starts to Look Historical


The difficult inspection conversation is the one where an organisation is asked to demonstrate that a process is currently validated and can only really point to documentation showing it was validated several years ago under conditions that have since evolved. At that point the question of whether the process is in a validated state becomes genuinely hard to answer, and that uncertainty tends to expand the scope of inspection scrutiny considerably.


Pharmalliance Consulting Ltd works with organisations when the gap between original validation assumptions and current operating conditions starts to become difficult to defend. If your CPV programme isn't generating the kind of insight that would let you answer that question confidently, or if you're concerned about how your validation lifecycle would hold up under scrutiny, that's worth addressing before an inspector starts asking.
