Monday, July 16, 2012

Validation Roles and Responsibilities


Who is responsible for validating a computerized system?


Validating a computerized system is not a single event. It involves many activities and deliverables (see my previous post Validation is a Parallel Process, Not a Parallel Universe). A system is validated when the activities relevant to demonstrating that the system meets its requirements and is fit for its intended purpose have been performed, reviewed, and approved by the management of the executing organization.

While some might think the test analysts or the validation lead are responsible for validating the system, just as there isn't a single validation event, there is no one person solely responsible for validating a computerized system. It involves many people within the organization, including IT, Quality, Validation, and the Business. Some of these people create validation deliverables. Others review and approve them. And others file and retrieve the records of the validated system. All participate in the activities of validation.

The following list is an example drawn from the Validation Plan Template in the Toolbox (in the sidebar). The actual allocation of responsibilities will vary depending on the roles within your organization, but it does illustrate the point that many people are involved in validating a computerized system. (A minimal sketch of how such a responsibility matrix might be captured as data follows the list.)

Example of Validation Roles and Responsibilities

Application Developer
  • Create and execute IQ test cases for installation and configuration of software (all GAMP® 5 software categories)
  • Create and execute IQ test cases for each specific customization for GAMP® 5 software category 5 (Custom Software), e.g., RICE – Reports, Interfaces, Data Conversion, and Enhancements
Application Lead
  • Create Functional Requirements Specification
  • Create Configuration Specification for GAMP® 5 software category 4 (Configurable Software)
  • Create System Design Specification describing the technical details of specific customizations for GAMP® 5 software category 5
  • Create IQ protocol
  • Create IQ Summary Report
  • Create Data Conversion Report
  • Approve IQ test cases
  • Determine required SOPs, provide developmental support for SOPs, review and approve SOPs
Application Manager
  • Review and approve SOPs
Business Analyst
  • Create the User Requirements Specification
Operations Manager
  • Determine required SOPs, provide developmental support for SOPs, review and approve SOPs
Quality Manager
  • Perform Vendor Audit
  • Review and approve SOPs
Subject Matter Expert (SME)
  • Create the User Requirements Specification
  • Create the Functional Requirements Specification
  • Create the Configuration Specification
  • Develop SOPs
  • Develop Training Materials
Technical Lead
  • Review and approve GMP Assessment
  • Review and approve 21 CFR Part 11 Assessment
  • Review and approve Functional Risk Assessment
  • Review and approve Functional Requirements Specification
  • Review and approve Configuration Specification for GAMP® 5 software category 4 (Configurable Software)
  • Review and approve System Design Specification describing the technical details of specific customizations for GAMP® 5 software category 5
  • Review and approve IQ protocol
  • Review and approve IQ Summary Report
  • Review and approve Data Conversion Report
  • Determine required SOPs, provide developmental support for SOPs, review and approve SOPs
Test Analyst
  • Create OQ Test cases
  • Create PQ Test cases
Validation Lead
  • Perform compliance assessments
  • Determine whether or not to test requirements with a medium or low risk priority
  • Create the Requirements Traceability Matrix
  • Create the Operational Qualification Summary Reports
  • Create the Performance Qualification Summary Reports
  • Create the Validation Summary Report
  • Determine required SOPs, provide developmental support for SOPs, review and approve SOPs
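
If you maintain this allocation in a tool rather than in a document, the same information can be kept as simple structured data. The following is a minimal, hypothetical sketch in Python; the role and deliverable names are taken from the list above, but the data structure and helper function are illustrative only and are not part of the Validation Plan Template.

# Hypothetical responsibility matrix: roles mapped to the deliverables they
# create or approve. Names follow the example list above; the structure and
# helper are illustrative only.
RESPONSIBILITIES = {
    "Business Analyst": {"create": ["User Requirements Specification"]},
    "Application Lead": {
        "create": ["Functional Requirements Specification", "IQ Protocol",
                   "IQ Summary Report"],
        "approve": ["IQ Test Cases"],
    },
    "Technical Lead": {
        "approve": ["Functional Requirements Specification", "IQ Protocol",
                    "IQ Summary Report", "Data Conversion Report"],
    },
    "Test Analyst": {"create": ["OQ Test Cases", "PQ Test Cases"]},
    "Validation Lead": {
        "create": ["Requirements Traceability Matrix", "OQ Summary Report",
                   "PQ Summary Report", "Validation Summary Report"],
    },
}

def roles_for(deliverable, action):
    # Return every role responsible for the given action on a deliverable.
    return [role for role, actions in RESPONSIBILITIES.items()
            if deliverable in actions.get(action, [])]

print(roles_for("IQ Summary Report", "approve"))  # ['Technical Lead']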

Wednesday, July 11, 2012

Software Validation Plan

To be prepared is half the victory
-Miguel de Cervantes

What is a Validation Plan?

A validation plan describes the approach and responsible parties for validating a system within its operating environment. The "system" could be a piece of manufacturing equipment, a manufacturing process, or a computerized system. The key elements of a validation plan include:

  • The scope of work - the Validation Plan identifies both the system and its component elements to be validated and the required documentation
  • The responsibilities - the Validation Plan identifies the roles and specific responsibilities of the parties responsible for validating the system
  • The approach - the Validation Plan outlines the strategy for validating the system within its operating environment. The approach is generally bounded by the standards, procedures, and methodologies of the executing organization.

A Validation Plan is required whenever significant changes are made to facilities, equipment, or processes that affect the quality of a product.

How is a Validation Plan different from the Validation Master Plan?

A Validation Master Plan (VMP), sometimes called a Master Validation Plan (MVP), is the basis for the validation program within an organization. It should include process validation, facility and utility qualification and validation, equipment qualification, cleaning validation, and computerized system validation.

A Validation Plan is specific to one of the elements within the Validation Master Plan. For instance, a new biotechnology manufacturing plant will require a Validation Master Plan that addresses everything from the facility and utilities to the manufacturing equipment and processes, including the computerized systems that measure, record, process, or store quality data related to the product. Each element will require its own detailed Validation Plan, i.e., every piece of manufacturing equipment, every process, and every computerized system will be covered by a Validation Plan.

How is a Validation Plan different from the Project Plan?

In a previous post (Validation is a Parallel Process, Not a Parallel Universe), I mapped the validation activities and deliverables associated with computerized system validation (sometimes referred to as software validation) to the system development life cycle (SDLC). With the exception of a few items, most of the deliverables are already part of the SDLC. These deliverables define the scope of a validated SDLC project, and scope is one of the three parts of the project management triple constraint of cost, time, and scope.
[Figure: The triple constraint of cost, time, and scope]
(For a nice discussion of how to manage the triple constraint, please read Tom Tsongas' blog entry Scope, Time and Cost – Managing the Triple Constraint.) It is important to note that when scope is equated with quality (as in discussions of the triple constraint), quality is used in the sense of the NUMBER of requirements rather than in the sense of MEETING requirements. In other words, more requirements means higher quality (with the implication that all requirements are met).

With regard to the other two concerns of project management, the validation plan addresses them only tangentially. Cost is not a direct concern of the validation plan. However, since a significant part of the cost of a project may come from the labor of the resources involved, the roles identified in the validation plan should also be addressed in the project plan. But the validation plan doesn't go so far as to articulate an organizational breakdown structure or labor rates.

Time is also not a direct concern of the validation plan. It does not include a work breakdown structure or any estimates of task or activity duration. In fact, the schedule of validation activities is not specifically addressed in the validation plan; as I pointed out in the post above, these activities follow the SDLC.

Why is a Validation Plan required?

So if a Validation Plan overlaps the scope of a project plan, but does not address the cost or the schedule of validation, what good is it? Some of the value of a validation plan comes simply from its form as a narrative version of the project deliverables. While most of the "form" of a validation plan is captured in a template (for example, the Validation Plan Template in the Toolbox in the sidebar), this form explains the purpose of each deliverable required for validation and places them in the context of the methodology, SOPs, Document Management, and Change Control of the executing organization's Quality Management System (QMS).

The validation plan also functions as a catalog of all the documents required to demonstrate that the system was validated and how it will be maintained in a validated state. As such, it forms the basis for the Validation Summary Report (VSR), which records the actual results of the validation against the approved plan.

However, the primary value of the validation plan is as a record of management's intention to validate the system. Once the document is approved and signed it becomes a controlled quality record that is placed in the record management system of the executing organization and is available for audits.

How is a Validation Plan created?

A Validation Plan is not so much created as it is tailored from a template provided by the executing organization. The template itself should be consistent with the QMS and methodologies of the executing organization. Generally, the organization responsible for Quality or Validation will provide this level of tailoring.

The next step of tailoring is based on the scope of validation - in other words, the boundaries between the system being validated and its operating environment. (See my previous post What is a computerized system? for an explanation of this boundary.) The boundaries and responsibilities for the various elements will drive the content of the validation plan. Elements may be combined in a single validation plan or segregated into separate validation plans. As an example, in the Toolbox in the sidebar I have a link to the International Society for Pharmaceutical Engineering (ISPE) that provides an example of a validation template for equipment. And on a recent engagement, I developed a Validation Plan Template that allocated responsibilities for validating software as a service in the cloud among the cloud tenants, the cloud service provider, and the infrastructure hosting vendor.

Finally, the last step in tailoring is to identify by name every document that will be required for validating the system.
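
Because the plan functions as a catalog of every named document (and later feeds the Validation Summary Report), that final list of deliverables lends itself to a simple checklist. The sketch below is a hypothetical illustration in Python; the document names are examples drawn from this post, and the function is not part of any template.

# Hypothetical deliverables catalog from a tailored Validation Plan,
# cross-checked against approved records when drafting the Validation
# Summary Report. Document names are examples only.
PLANNED_DELIVERABLES = [
    "Validation Plan",
    "User Requirements Specification",
    "Functional Requirements Specification",
    "Configuration Specification",
    "IQ Protocol",
    "IQ Summary Report",
    "OQ Summary Report",
    "PQ Summary Report",
    "Requirements Traceability Matrix",
    "Validation Summary Report",
]

def outstanding(planned, approved_records):
    # Return the planned deliverables that do not yet have an approved record.
    return [doc for doc in planned if doc not in approved_records]

approved = {"Validation Plan", "User Requirements Specification"}
print(outstanding(PLANNED_DELIVERABLES, approved))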

Friday, July 6, 2012

Now on Technorati

Now you can find A Piece of My Mind on Technorati.

Monday, July 2, 2012

Are We There Yet? How to Tell When Testing is Complete With Simple Metrics


"When will testing be complete?" is one of the most-asked questions of a testing manager. And, just as often, the answer is given in terms of "% Complete". But what does that really mean?

Measures and Metrics

% Complete is a valuable testing metric that reports on how many of the planned tests have been executed and shows your progress versus your plan:
  • % Complete = (Number of tests executed) / (Number of tests planned)
So, if you've planned 15 tests and executed 8 tests, you are (8) / (15) = 53% complete.
Simple enough, if all you want to know is how many tests you've executed versus your plan. But it doesn't tell the whole story.

What about the quality of the product you're testing? Easy enough, that is the number of executed tests that have passed:
  • % Passed = (Number of tests passed) / (Number of tests planned)
So, if you've planned and executed 15 tests and 10 have passed, then (10) / (15) = 67% passed reflects how many of the requirements you are testing for have been met.

Obviously, things have not gone as well as planned if we're only meeting 67% of the requirements. There's a simple metric you can use to express the cost of poor quality.

  • % Rework = (Number of tests re-executed) / (Number of tests executed + Number of tests re-executed)
So, if you planned 15 tests, executed 15 tests, and had to re-execute 8 of those tests, then (8) / (15 + 8) = 35% rework, a metric reflecting the additional effort caused by any number of problems, from poor requirements, design, coding, or testing. (But that's a discussion about defect management and root cause analysis for another day.)

One thing % Complete, % Passed, and % Rework all have in common is that they're metrics. In other words, they're calculated. And the things they are calculated from - tests planned, tests executed, tests re-executed, and tests passed - are measures. In other words, they're counted. The good news is that the data for these measures and metrics should be found in your test log.

I have included a simple test execution log (available in the Toolbox in the sidebar) that automatically calculates and graphs the measures and metrics. (I have populated the log only with the information necessary to drive the calculation of metrics and measures.)
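
As a rough illustration of how the measures roll up into the metrics, here is a minimal sketch in Python. The log format is hypothetical (the spreadsheet in the Toolbox is organized differently); only the formulas match the definitions above.

# Minimal sketch: computing the three metrics from a simple test log.
# The log format is hypothetical; the formulas follow the definitions above.
test_log = [
    # (test id, number of executions, latest result)
    ("TC-01", 1, "pass"),
    ("TC-02", 2, "pass"),   # re-executed once
    ("TC-03", 1, "fail"),
    ("TC-04", 0, None),     # planned but not yet executed
]

planned     = len(test_log)                                        # tests planned
executed    = sum(1 for _, runs, _res in test_log if runs > 0)     # tests executed
passed      = sum(1 for _, _runs, res in test_log if res == "pass")
re_executed = sum(1 for _, runs, _res in test_log if runs > 1)     # tests run more than once

pct_complete = executed / planned
pct_passed   = passed / planned
pct_rework   = re_executed / (executed + re_executed)

print(f"% Complete: {pct_complete:.0%}")  # 75%
print(f"% Passed:   {pct_passed:.0%}")    # 50%
print(f"% Rework:   {pct_rework:.0%}")    # 25%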

Are We There Yet?

But none of these measures and metrics directly answers the question "When will testing be complete?". This is because each measure and metric is a snapshot in time. The answer to the question can be found by graphing the measures over time to produce a trend chart that should look something like this:

[Figure: Test measure trend chart]

If you just go by % Complete versus plan, testing is done on Day 2, when all tests have been executed once. But the quality of the product is less than adequate, with only 67% of the requirements passing their tests. So you keep on testing. But when do you stop? The answer is in the "Total Passed" trend line. It should form an S-shaped curve, with the top of the curve approaching the number of tests planned. The point at which the "Total Passed" S-curve levels off (Day 5 in the chart above) is when you should stop testing. At that point you should assess whether you've met your exit criteria for the testing phase, e.g., no open critical or major defects, and whether to release the product.
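
If you want to make that judgment a bit more mechanical, you can track the cumulative Total Passed count by day and look for the point where it stops growing. The sketch below is a hypothetical illustration of that stopping rule in Python; the daily numbers are made up, not read from the chart or the Toolbox log.

# Hypothetical sketch of the stopping rule: stop testing once the cumulative
# "Total Passed" count levels off day over day. Daily counts are illustrative.
tests_planned = 15
total_passed_by_day = [4, 10, 12, 14, 15, 15, 15]   # cumulative passes, Day 1 onward

def plateau_day(series):
    # Return the first day (1-based) after which the series stops increasing.
    for i in range(1, len(series)):
        if series[i] <= series[i - 1]:
            return i          # series[i-1] is the value where the curve levels off
    return len(series)

day = plateau_day(total_passed_by_day)
print(f"Total Passed levels off on Day {day}: "
      f"{total_passed_by_day[day - 1]} of {tests_planned} planned tests passing.")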