Monday, July 16, 2012

Validation Roles and Responsibilities

Who is responsible for validating a computerized system?


Validating a computerized system is not a single event. It involves many activities and deliverables (see my previous post Validation is a Parallel Process, Not a Parallel Universe). A system is validated when the activities that demonstrate the system meets its requirements and is fit for its intended purpose have been performed, reviewed, and approved by the management of the executing organization.

While some would assume the test analysts or the validation lead are responsible for validating the system, just as there is no single validation event, there is no single person solely responsible for validating a computerized system. It involves many people within the organization, including IT, Quality, Validation, and the Business. Some of these people create validation deliverables. Others review and approve them. And others file and retrieve the records of the validated system. All participate in the activities of validation.

The following list is an example drawn from the Validation Plan Template in the Toolbox (in the sidebar). The actual allocation of the responsibilities will vary depending on the roles within your organization. But it does illustrate the point that many people are involved in validating a computerized system.

Example of Validation Roles and Responsibilities

Application Developer
  • Create and execute IQ test cases for installation and configuration of software (all GAMP® 5 software categories)
  • Create and execute IQ test cases for each specific customization for GAMP® 5 software category 5 (Custom Software), e.g., RICE – Reports, Interfaces, Data Conversion, and Enhancements
Application Lead
  • Create Functional Requirements Specification
  • Create Configuration Specification for GAMP® 5 software category 4 (Configurable Software)
  • Create System Design Specification describing the technical details of specific customizations for GAMP® 5 software category 5
  • Create IQ protocol
  • Create IQ Summary Report
  • Create Data Conversion Report
  • Approve IQ test cases
  • Determine required SOPs, provide developmental support for SOPs, review and approve SOPs
Application Manager
  • Review and approve SOPs
Business Analyst
  • Create the User Requirements Specification
Operations Manager
  • Determine required SOPs, provide developmental support for SOPs, review and approve SOPs
Quality Manager
  • Perform Vendor Audit
  • Review and approve SOPs
Subject Matter Expert (SME)
  • Create the User Requirements Specification
  • Create the Functional Requirements Specification
  • Create the Configuration Specification
  • Develop SOPs
  • Develop Training Materials
Technical Lead
  • Review and approve GMP Assessment
  • Review and approve 21 CFR Part 11 Assessment
  • Review and approve Functional Risk Assessment
  • Review and approve Functional Requirements Specification
  • Review and approve Configuration Specification for GAMP® 5 software category 4 (Configurable Software)
  • Review and approve System Design Specification describing the technical details of specific customizations for GAMP® 5 software category 5
  • Review and approve IQ protocol
  • Review and approve IQ Summary Report
  • Review and approve Data Conversion Report
  • Determine required SOPs, provide developmental support for SOPs, review and approve SOPs
Test Analyst
  • Create OQ Test cases
  • Create PQ Test cases
Validation Lead
  • Perform compliance assessments
  • Determine whether or not to test requirements with a medium or low risk priority
  • Create the Requirements Traceability Matrix
  • Create the Operational Qualification Summary Reports
  • Create the Performance Qualification Summary Reports
  • Create the Validation Summary Report
  • Determine required SOPs, provide developmental support for SOPs, review and approve SOPs
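To make the point concrete, an allocation like the one above can be treated as data and queried to see who touches each deliverable. Below is a minimal Python sketch using a small subset of the roles and deliverables listed above; the structure itself is illustrative and is not part of the Validation Plan Template.

```python
# Minimal sketch: a subset of the responsibility allocation above, as data.
ROLES = {
    "Business Analyst": {"create": ["User Requirements Specification"]},
    "Application Lead": {"create": ["Functional Requirements Specification", "IQ Protocol"]},
    "Technical Lead":   {"review_approve": ["Functional Requirements Specification", "IQ Protocol"]},
    "Validation Lead":  {"create": ["Requirements Traceability Matrix", "Validation Summary Report"]},
}

def who_touches(document):
    """Return every (role, duty) pair involved with a given deliverable."""
    return [(role, duty) for role, duties in ROLES.items()
            for duty, docs in duties.items() if document in docs]

print(who_touches("IQ Protocol"))
# -> [('Application Lead', 'create'), ('Technical Lead', 'review_approve')]
```

Even this tiny subset shows that no single role owns validation: every deliverable has at least one creator and, usually, a separate reviewer or approver.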

Wednesday, July 11, 2012

Software Validation Plan

To be prepared is half the victory
-Miguel de Cervantes

What is a Validation Plan?

A validation plan describes the approach and responsible parties for validating a system within its operating environment. The "system" could be a piece of manufacturing equipment, a manufacturing process, or a computerized system. The key elements of a validation plan include:

  • The scope of work - the Validation Plan identifies both the system and its component elements to be validated and the required documentation
  • The responsibilities - the Validation Plan identifies the roles and specific responsibilities of the parties responsible for validating the system
  • The approach - the Validation Plan outlines the strategy for validating the system within its operating environment. The approach is generally bounded by the standards, procedures, and methodologies of the executing organization.

A Validation Plan is required whenever significant changes are made to facilities, equipment, or processes that affect the quality of a product.

How is a Validation Plan different from the Validation Master Plan?

A Validation Master Plan (VMP), sometimes called a Master Validation Plan (MVP), is the basis for the validation program within an organization. It should include process validation, facility and utility qualification and validation, equipment qualification, cleaning validation, and computerized system validation.

A Validation Plan is specific to one of the elements within the Validation Master Plan. For instance, a new biotechnology manufacturing plant will require a Validation Master Plan that addresses everything from the facility and utilities to the manufacturing equipment and processes, including the computerized systems that measure, record, process, or store quality data related to the product. Each element will require its own detailed Validation Plan, i.e., every piece of manufacturing equipment, every process, and every computerized system will be covered by a Validation Plan.

How is a Validation Plan different from the Project Plan?

In a previous post (Validation is a Parallel Process, Not a Parallel Universe), I mapped the validation activities and deliverables associated with computerized system validation (sometimes referred to as software validation) to the system development life cycle (SDLC). With the exception of a few items, most of the deliverables are already part of the SDLC. These deliverables define the scope of a validated SDLC project, and scope is one of the three parts of the project management triple constraint of cost, time, and scope.
(For a nice discussion of how to manage the triple constraint, please read Tom Tsongas' blog entry Scope, Time and Cost – Managing the Triple Constraint.) It is important to note that when scope is equated with quality (as in discussions of the triple constraint), quality is used in the sense of the NUMBER of requirements rather than in the sense of MEETING requirements. In other words, more requirements means higher quality (with the implication that all requirements are met).

With regard to the other two concerns of project management, the validation plan addresses them only tangentially. Cost is not a direct concern of the validation plan. However, because a significant part of the cost of a project may come from the labor of the resources involved, the roles identified in the validation plan should also be addressed in the project plan. But the validation plan doesn't go so far as to articulate an organizational breakdown structure or labor rates.

Time is also not a direct concern of the validation plan. It does not include a work breakdown structure or any estimates of task/activity duration. In fact, the schedule of validation activities is not specifically addressed in the validation plan and as I pointed out in my blog post, these activities follow the SDLC.

Why is a Validation Plan required?

So if a Validation Plan overlaps the scope of a project plan but does not address the cost or the schedule of validation, what good is it? Some of the value of a validation plan comes simply from its form as a narrative version of the project deliverables. While most of the "form" of a validation plan is captured in a template (for example, the Validation Plan Template in the Toolbox in the sidebar), this form explains the purpose of each deliverable required for validation and places it in the context of the methodology, SOPs, Document Management, and Change Control of the executing organization's Quality Management System (QMS).

The validation plan also functions as a catalog of all the documents required to demonstrate the system was validated and how it will be maintained in a validated state. As such, it forms the basis for the Validation Summary Report (VSR), which records the actual results of the validation versus the approved plan.

However, the primary value of the validation plan is as a record of management's intention to validate the system. Once the document is approved and signed it becomes a controlled quality record that is placed in the record management system of the executing organization and is available for audits.

How is a Validation Plan created?

A Validation Plan is not so much created as it is tailored from a template provided by the executing organization. The template itself should be consistent with the QMS and methodologies of the executing organization. Generally, the organization responsible for Quality or Validation will provide this level of tailoring.

The next step of tailoring is based on the scope of validation - in other words, the boundaries between the system being validated and its operating environment. (See my previous post What is a computerized system? for an explanation of this boundary.) The boundaries and responsibilities for the various elements will drive the content of the validation plan. Elements may be combined in a single validation plan or segregated into separate validation plans. As an example, the Toolbox in the sidebar includes a link to the International Society for Pharmaceutical Engineering (ISPE), which provides an example of a validation template for equipment. And on a recent engagement, I developed a Validation Plan Template that allocated responsibilities for validating software as a service in the cloud among the cloud tenants, the cloud service provider, and the infrastructure hosting vendor.

And finally, the last step in tailoring is to identify by name every document that will be required for validating the system.
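That last step, naming every required document, lends itself to a simple inventory that can later be checked off against the Validation Summary Report. Here is a minimal Python sketch of such an inventory; the structure, document IDs, and the LIMS example are hypothetical and are not taken from the Toolbox template.

```python
# Minimal sketch: a named document inventory for one validation plan.
# Document IDs, titles, and owners are illustrative only.
from dataclasses import dataclass, field

@dataclass
class PlannedDocument:
    doc_id: str        # e.g., "URS-001"
    title: str
    owner: str         # role responsible for creating it
    approved: bool = False

@dataclass
class ValidationPlan:
    system: str
    documents: list = field(default_factory=list)

    def outstanding(self):
        """Documents still needed before the Validation Summary Report can be closed out."""
        return [d.doc_id for d in self.documents if not d.approved]

plan = ValidationPlan(system="LIMS")
plan.documents += [
    PlannedDocument("URS-001", "User Requirements Specification", "Business Analyst"),
    PlannedDocument("IQP-001", "IQ Protocol", "Application Lead"),
    PlannedDocument("VSR-001", "Validation Summary Report", "Validation Lead"),
]
print(plan.outstanding())   # -> ['URS-001', 'IQP-001', 'VSR-001']
```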

Monday, July 2, 2012

Are We There Yet? How to Tell When Testing is Complete With Simple Metrics

"When will testing be complete?" is one of the most-asked questions of a testing manager. And, just as often, the answer is given in terms of "% Complete". But what does that really mean?

Measures and Metrics

% Complete is a valuable testing metric that reports on how many of the planned tests have been executed and shows your progress versus your plan:
  • % Complete = (Number of tests executed) / (Number of tests planned)
So, if you've planned 15 tests and executed 8 tests, you are (8) / (15) = 53% complete.
Simple enough, if all you want to know is how many tests you've executed versus your plan. But it doesn't tell the whole story.

What about the quality of the product you're testing? Easy enough, that is the number of executed tests that have passed:
  • % Passed = (Number of tests passed) / (Number of tests planned)
So, if you've planned and executed 15 tests and 10 have passed, then (10) / (15) = 67% passed reflects how many of the requirements you are testing for have been met.

Obviously, things have not gone as well as planned if we're only meeting 67% of the requirements. There's a simple metric you can use to express the cost of poor quality.

  • % Rework = (Number of tests re-executed) / (Number of tests executed + Number of tests re-executed)
So, if you planned 15 tests, executed all 15, and had to re-execute 8 of them, then (8) / (15 + 8) = 35% rework, a metric reflecting the additional effort spent for any number of reasons, from poor requirements, design, coding, or testing. (But that's a discussion about defect management and root cause analysis for another day.)

One thing % Complete, % Passed, and % Rework all have in common is that they're metrics. In other words, they're calculated. And the things they are calculated from - tests planned, tests executed, tests re-executed, and tests passed - are measures. In other words, they're counted. The good news is that the data for these measures and metrics should be found in your test log.

I have included a simple test execution log (available in the Toolbox in the sidebar) that automatically calculates and graphs the measures and metrics. (I have populated the log only with the information necessary to drive the calculation of metrics and measures.)
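If you would rather compute these numbers in code than in a spreadsheet, the three metrics reduce to a few lines. Here is a minimal Python sketch; the function and field names are mine and are not those of the Toolbox log.

```python
# Minimal sketch: compute % Complete, % Passed, and % Rework from raw counts
# taken from a test execution log.
def test_metrics(planned, executed, re_executed, passed):
    return {
        "% Complete": executed / planned,
        "% Passed": passed / planned,
        "% Rework": re_executed / (executed + re_executed),
    }

# The worked example from this post: 15 planned, 15 executed, 8 re-executed, 10 passed.
print(test_metrics(planned=15, executed=15, re_executed=8, passed=10))
# -> {'% Complete': 1.0, '% Passed': 0.667, '% Rework': 0.348} (rounded)
```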

Are We There Yet?

But none of these measures and metrics directly answers the question "When will testing be complete?". This is because each measure and metric is a snapshot in time. The answer to the question can be found by graphing the measures over time to produce a trend chart that should look something like this:

Figure: Test Measure Trend Chart

If you go by % Complete versus plan alone, testing is done on Day 2, when all tests have been executed once. But the quality of the product is less than adequate, with only 67% of the requirements passing their tests. So you keep on testing. But when do you stop? The answer is in the "Total Passed" trend line. It should always form an S-shaped curve, with the top of the curve approaching the number of tests planned. The point where the "Total Passed" S-curve levels off (Day 5 in the chart above) is when you should stop testing. At that point you should assess whether you've met your exit criteria for the testing phase, e.g., no open critical or major defects, and whether to release the product.
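If you would rather not eyeball the chart, the leveling-off point can also be flagged mechanically. One simple heuristic (my own rule of thumb, not an industry standard) is to stop when the cumulative "Total Passed" count has not grown for a couple of consecutive days:

```python
# Minimal sketch: detect when the cumulative "Total Passed" curve has leveled off.
# Heuristic only: no new passes for `window` consecutive days.
def has_leveled_off(total_passed_by_day, window=2):
    if len(total_passed_by_day) < window + 1:
        return False
    recent = total_passed_by_day[-(window + 1):]
    daily_gains = [b - a for a, b in zip(recent, recent[1:])]
    return all(gain == 0 for gain in daily_gains)

# Cumulative passes per day (illustrative numbers, Day 1 through Day 6):
history = [6, 10, 13, 14, 14, 14]
print(has_leveled_off(history))   # -> True: time to assess your exit criteria
```

Passing this check is not a release decision by itself; it simply tells you when to stop executing tests and start assessing the exit criteria described above.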

Monday, June 25, 2012

Cracking the Code: 21 CFR Part 11 Authentication and Authorization

Recently, I was asked about the 21 CFR Part 11 compliance implications of the following scenario:

The user (an Approver) used his id/password to login and continued his session and arrived at the "APPROVE NOW" screen. The software does not challenge the user (again) at the time of this critical event (e.g., APPROVAL) for his password again.

21 CFR Part 11: Authentication and Authorization

First, a little background on what the US Code of Federal Regulations has to say about electronic signatures. 21 CFR Part 11 uses "authority checks" to address two different aspects of information security - authentication and authorization.

Authentication concerns identifying the user:

11.10(d) Limiting system access to authorized individuals

Authentication also involves a procedural control, i.e., a controlled, documented process for granting access to a new user (before the userid and password are issued) and for deleting a user account.

Authorization concerns the level of access a particular authenticated user should have to secured resources controlled by the system:

11.10(g) Use of authority checks to ensure that only authorized individuals can use the system, electronically sign a record, access the operation or computer system input or output device, alter a record, or perform the operation at hand.

(The clause "can use the system" concerns authentication and the remaining clauses concern authorization. I can understand the confusion caused by using "authorized" to describe authentication and combining both concepts into a single clause.)

Authorization also involves:
 

  • Definitions of user privileges - different levels of access based on user responsibilities (if appropriate) that are documented and controlled (e.g., user access levels are approved by management prior to granting access to the user).
  • Procedures - a controlled, documented process for granting privileges to a new user, changing privileges for an existing user, and removing privileges.
  • Software controls - the application verifies that an individual has the appropriate privileges to perform specific functions within the application before allowing them to do so (e.g., access the operation or computer system input or output device, alter a record, or perform the operation at hand). Specifically with regard to electronic signatures, the application verifies that an individual has the authority to electronically sign a record before allowing them to do so.

Cracking the Code

Now, with some background in 21 CFR Part 11, let's apply it to the scenario described above:

The user (an Approver) used his id/password to login [<=authentication] and continued his session and arrived at the "APPROVE NOW" screen. The software does not challenge the user (again) at the time of this critical event (e.g., APPROVAL) [<=authorization] for his password again.

Assuming the user has been properly authenticated with a userid/password login, the question now becomes whether the person clicking "APPROVE" has the privilege to electronically sign the record. The risk is that between the time of authenticating the user and electronically signing the record, there is no way for the software to know whether it is really the same user.

For example, an authorized user is authenticated (logs on) and then walks away from the device without locking it. A second person then uses the device to approve records (before a screen saver time out). Software that doesn't require some verification for an approval would have no way to know if the user is authorized. The only way the software would "know" at the time of the approval is to challenge the user to provide a shared secret, re-entry of the password.

For this reason, every electronic signature requires not only the meaning associated with the signature (such as review, approval, responsibility, or authorship), but also the entry of the shared secret (password):

11.200(a)(1)(i) When an individual executes a series of signings during a single, continuous period of controlled system access, the first signing shall be executed using all electronic signature components [<=initial authorization, not authentication]; subsequent signings shall be executed using at least one electronic signature component that is only executable by, and designed to be used only by, the individual [<=subsequent authorizations].

AND there should be a policy in place regarding NOT sharing passwords:

11.200(a)(3) Be administered and executed to ensure that attempted use of an individual's electronic signature by anyone other than its genuine owner requires collaboration of two or more individuals.
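In software terms, the control at the "APPROVE NOW" screen is straightforward: the signing action itself re-collects the password and performs the authority check before the signature is applied. Here is a minimal Python sketch; the credential and privilege stores are illustrative stand-ins for whatever mechanism the application actually uses.

```python
# Minimal sketch of an "APPROVE NOW" action that re-collects the shared secret.
import hashlib

USERS = {"jdoe": hashlib.sha256(b"s3cret").hexdigest()}   # userid -> password hash (illustrative)
PRIVILEGES = {"jdoe": {"sign"}}                           # userid -> granted privileges (illustrative)

def approve_record(session_user, record_id, entered_password, meaning="approval"):
    # 11.200(a)(1)(i): each signing re-collects at least one signature component (the password).
    if USERS.get(session_user) != hashlib.sha256(entered_password.encode()).hexdigest():
        raise PermissionError("Password does not match the logged-in user")
    # 11.10(g): authority check - only authorized individuals may electronically sign the record.
    if "sign" not in PRIVILEGES.get(session_user, set()):
        raise PermissionError("User is not authorized to electronically sign")
    # A real system would also record the signature manifestation (printed name,
    # date/time, and meaning per 11.50) and write the event to the audit trail.
    return {"record": record_id, "signed_by": session_user, "meaning": meaning}

print(approve_record("jdoe", "BATCH-042", "s3cret"))
```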

Conclusion

The use of a userid and password to login to software is an issue of authentication and is performed once to identify the user and permit access to the software. This authentication does NOT address the requirement for authorization to electronically sign and approve electronic records, which must be done for each record. In short, since the software in question doesn't require the entry of a password for the electronic signature when approving electronic records, it doesn't comply with the requirements of 21 CFR Part 11. 

Monday, June 18, 2012

Plan Your Work – Work Your Plan

The first time I studied a Quality Management System (QMS), I thought it was a business operations playbook. It seemed so obvious to me that what was described in the Quality Manual was, at a high level, how a business should run.
Strictly speaking (according to the ASQ, the American Society for Quality), a quality management system is:
A formalized system that documents the structure, responsibilities and procedures required to achieve effective quality management.
In other words, it is a blueprint for how quality will be achieved and maintained within an organization.
Recently I had the opportunity to develop the Quality Manual for a project I was working on. I used ISO 9000 as a reference standard / checklist to ensure I included all the appropriate elements. (You can find a link to ISO in the sidebar.) As I was mapping the interrelated processes described in the QMS (and shown in the figure below), it occurred to me that I was creating a supersized PDCA cycle. The PDCA cycle (which stands for Plan-Do-Check-Act, also known as the Shewhart cycle or Deming cycle) is a four-step process for quality improvement.
Figure 1. Quality Management System in a regulated environment
Step 1: Plan
In this step, you develop a way to build quality into the product or service you intend to produce and also a way to improve that product or service. In a QMS, these are the Quality Management System Planning processes. These plans include not only planning the realization of the product/service, but also the management of the resources to produce the product/service, and the QMS processes to ensure and improve the quality of the product/service.
Step 2: Do
In this step, you carry out your plans from Step 1: product/service realization, resource management, and the processes of the QMS.
Step 3: Check
The third step is to compare what you produced with what you planned. These are the Measurement, Analysis, and Improvement processes of the QMS. These activities break down into two areas: quality assurance and quality control. While the two terms are often used interchangeably because they both concern ensuring the quality of the product/service, quality assurance is about the processes you use to ensure quality and quality control is about the product itself.
Step 4: Act
The last step is to review the results of the operations of the QMS and decide whether your objectives have been met and whether changes should be made to meet them. Based on the results of the Management Review processes within the QMS, if changes are required, your next move is back to Step 1: Plan, and the continuous improvement cycle begins again.

Tuesday, June 12, 2012

Validation is a Parallel Process, Not a Parallel Universe

I hope by now I’ve convinced you from my previous blog posts that computerized systems validation is more than testing. It has its own terminology that comes from process manufacturing validation and covers a lot of the same ground as a system development life cycle (SDLC). It is a process that parallels the development and operation of a computer system, ensuring the system meets its requirements when placed in production and continues to meet its requirements during its operational life.
In the table below, I’ve mapped validation activities and deliverables to the SDLC, highlighting those documents that are not typically the concern of business analysts or application developers. In fact, most of the “validation” deliverables are already in the SDLC and produced by the relevant subject matter experts (business analysts, developers, test analysts, and business representatives). Accounting for the differences in terminology between IT and validation (e.g., a plan is equivalent to a protocol), then it should be clear that a validated computer system is a by-product of good practice.
Table 1. Comparison of Validation Steps and Documents to the SDLC
Step 1: Determine validation activities
  • Validation Documents: Validation Plan (VP)
  • SDLC Phase: Project Planning
Step 2: Specify the system development details
  • User Requirements Specification (URS) and Functional Requirements Specification (FRS) (SDLC Phase: Specification)
  • Configuration Specification (CON), System Design Specification (SDS), Data Conversion Plan (DCP), and Infrastructure Requirements Specification (IRS) (SDLC Phase: Design)
Step 3: Perform Qualification Activities
  • Design Qualification (DQ): Requirements Traceability Matrix (RTM), Vendor Audit (VA), Design Review (DSR)
  • Installation Qualification (IQ): IQ Protocol (IQP), IQ Test Case/Scripts (IQT), IQ Summary Report (IQR) (SDLC Phase: Construction - build physical infrastructure, build virtual infrastructure, build virtual machine, develop custom RICE, application installation, application configuration; Testing - infrastructure verification, virtual machine verification, application verification, unit tests, string tests, integration tests)
  • Data Conversion Qualification (DCQ): Data Conversion Protocol (DCL), Data Conversion Test Case/Scripts (DCT), Data Conversion Summary Report (DCR) (SDLC Phase: verify data conversion)
  • Operational Qualification (OQ): OQ Protocol (OQP), OQ Test Case/Scripts (OQT), OQ Summary Report (OQR) (SDLC Phase: System Acceptance Test (SAT))
  • Performance Qualification (PQ): PQ Protocol (PQP), PQ Test Case/Scripts (PQT), PQ Summary Report (PQR) (SDLC Phase: User Acceptance Test (UAT))
  • Exception Management (ERF, ERL) (SDLC Phase: Construction and Testing)
Step 4: Develop / Review Controls and Procedures
  • Standard Operating Procedures (SOP), Training Procedures, Training Records
Step 5: Certify the System
  • Validation Summary Report (VSR), Release to Production Note (RN) (SDLC Phase: Release)
Step 6: On-going Operations
  • Configuration Control, Change Control, Document Control, Record Retention, Security, Backup and Recovery, Disaster Recovery, Business Continuity (SDLC Phase: Operation & Maintenance)
Step 7: Periodic Review
  • Periodic Review Report
In fact, only three documents are produced exclusively by the validation team:
  • Validation Plan
  • Validation Summary Report
  • Periodic Review Report
The balance of the validation-specific documents is what differentiates CSV from the SDLC:
Validation Scope
  • Vendor assessment involving additional SMEs outside of IT, e.g., QA and procurement professionals to assist in vendor audits.
  • Standard operating procedures relating to the functions controlled by the computerized systems (and the Quality Management System) are developed or revised by business process owners and/or business analysts.
  • Having evidence that people are trained to perform their roles.
Maintaining the Validated State and Recordkeeping
Demonstrating the validated state is maintained by creating documented evidence (records) from standard operating procedures of the Quality Management System:
  • Configuration Control
  • Change Control
  • Document Control
  • Record Retention
  • Security
  • Backup and Recovery
  • Disaster Recovery
  • Business Continuity
As I’ve said before, validation is more than testing. In fact it’s more than the SDLC. But if you appreciate how it overlaps with good practices you can quickly see how to leverage existing capabilities to achieve and maintain a validated computerized system.
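One way to see that overlap is to treat Table 1 as data: tag each deliverable with its SDLC phase and whether it is produced exclusively by the validation team. The Python sketch below uses an illustrative subset of the table; the phase assignments follow my reading of Table 1.

```python
# Minimal sketch: an illustrative subset of Table 1 as data.
DELIVERABLES = [
    # (document, sdlc_phase, produced_only_by_validation_team)
    ("Validation Plan (VP)", "Project Planning", True),
    ("User Requirements Specification (URS)", "Specification", False),
    ("System Design Specification (SDS)", "Design", False),
    ("IQ Summary Report (IQR)", "Construction and Testing", False),
    ("Validation Summary Report (VSR)", "Release", True),
    ("Periodic Review Report", "Operation & Maintenance", True),
]

validation_only = [doc for doc, _, only in DELIVERABLES if only]
shared_with_sdlc = [doc for doc, _, only in DELIVERABLES if not only]
print(f"Validation-only documents: {validation_only}")
print(f"Already produced by the SDLC: {shared_with_sdlc}")
```

Run over the full table, the split looks the same: a handful of validation-specific documents sit on top of deliverables a well-run SDLC project produces anyway.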