Monday, July 16, 2012

Validation Roles and Responsibilities


Who is responsible for validating a computerized system?


Validating a computerized system is not a single event. It involves many activities and deliverables (see my previous post Validation is a Parallel Process, Not a Parallel Universe). A system is validated when the activities that demonstrate the system meets its requirements and is fit for purpose have been performed, reviewed, and approved by the management of the executing organization.

While some would think the test analysts or the validation lead are responsible for validating the system, just as there isn't a single validation event, there is no single person responsible for validating a computerized system. It involves many people within the organization, including IT, Quality, Validation, and the Business. Some of these people create validation deliverables. Others review and approve them. And others file and retrieve the records of the validated system. All participate in the activities of validation.

The following list is an example drawn from the Validation Plan Template in the Toolbox (in the sidebar). The actual allocation of the responsibilities will vary depending on the roles within your organization. But it does illustrate the point that many people are involved in validating a computerized system.

Example of Validation Roles and Responsibilities

Application Developer
  • Create and execute IQ test cases for installation and configuration of software (all GAMP® 5 software categories)
  • Create and execute IQ test cases for each specific customization for GAMP® 5 software category 5 (Custom Software), e.g., RICE – Reports, Interfaces, Data Conversion, and Enhancements
Application Lead
  • Create Functional Requirements Specification
  • Create Configuration Specification for GAMP® 5 software category 4 (Configurable Software)
  • Create System Design Specification describing the technical details of specific customizations for GAMP® 5 software category 5
  • Create IQ protocol
  • Create IQ Summary Report
  • Create Data Conversion Report
  • Approve IQ test cases
  • Determine required SOPs, provide developmental support for SOPs, review and approve SOPs
Application Manager
  • Review and approve SOPs
Business Analyst
  • Create the User Requirements Specification
Operations Manager
  • Determine required SOPs, provide developmental support for SOPs, review and approve SOPs
Quality Manager
  • Perform Vendor Audit
  • Review and approve SOPs
Subject Matter Expert (SME)
  • Create the User Requirements Specification
  • Create the Functional Requirements Specification
  • Create the Configuration Specification
  • Develop SOPs
  • Develop Training Materials
Technical Lead
  • Review and approve GMP Assessment
  • Review and approve 21 CFR Part 11 Assessment
  • Review and approve Functional Risk Assessment
  • Review and approve Functional Requirements Specification
  • Review and approve Configuration Specification for GAMP® 5 software category 4 (Configurable Software)
  • Review and approve System Design Specification describing the technical details of specific customizations for GAMP® 5 software category 5
  • Review and approve IQ protocol
  • Review and approve IQ Summary Report
  • Review and approve Data Conversion Report
  • Determine required SOPs, provide developmental support for SOPs, review and approve SOPs
Test Analyst
  • Create OQ Test cases
  • Create PQ Test cases
Validation Lead
  • Perform compliance assessments
  • Determine whether or not to test requirements with a medium or low risk priority
  • Create the Requirements Traceability Matrix
  • Create the Operational Qualification Summary Reports
  • Create the Performance Qualification Summary Reports
  • Create the Validation Summary Report
  • Determine required SOPs, provide developmental support for SOPs, review and approve SOPs

Wednesday, July 11, 2012

Software Validation Plan

To be prepared is half the victory
-Miguel de Cervantes

What is a Validation Plan?

A validation plan describes the approach and responsible parties for validating a system within its operating environment. The "system" could be a piece of manufacturing equipment, a manufacturing process, or a computerized system. The key elements of a validation plan include:

  • The scope of work - the Validation Plan identifies both the system and its component elements to be validated and the required documentation
  • The responsibilities - the Validation Plan identifies the roles and specific responsibilities of the parties responsible for validating the system
  • The approach - the Validation Plan outlines the strategy for validating the system within its operating environment. The approach is generally bounded by the standards, procedures, and methodologies of the executing organization.
A Validation Plan is required whenever significant changes are made to facilities, equipment, or processes that affect the quality of a product.

How is a Validation Plan different from the Validation Master Plan?

A Validation Master Plan (VMP), sometimes called a Master Validation Plan (MVP), is the basis for the validation program within an organization. It should include process validation; facility and utility qualification and validation; equipment qualification; cleaning validation; and computerized system validation.

A Validation Plan is specific to one of the elements within the Validation Master Plan. For instance, a new biotechnology manufacturing plant will require a Validation Master Plan that addresses everything from the facility and utilities to the manufacturing equipment and processes, including the computerized systems that measure, record, process, or store quality data related to the product. Each element will require its own detailed Validation Plan, i.e., every piece of manufacturing equipment, every process, and every computerized system will be covered by a Validation Plan.

How is a Validation Plan different from the Project Plan?

In a previous post (Validation is a Parallel Process, Not a Parallel Universe), I mapped the validation activities and deliverables associated with computerized system validation (sometimes referred to as software validation) to the system development life cycle (SDLC). With the exception of a few items, most of the deliverables are already part of the SDLC. These deliverables define the scope of a validated SDLC project, which is one of the three parts of the project management triple constraint of cost, time, and scope.
Figure: The project management triple constraint
(For a nice discussion of how to manage the triple constraint, please read Tom Tsongas' blog entry Scope, Time and Cost – Managing the Triple Constraint.) It is important to note that when scope is equated with quality (as in discussions of the triple constraint), quality is used in the sense of the NUMBER of requirements rather than in the sense of MEETING requirements. In other words, more requirements means higher quality (with the implication that all requirements are met).

With regard to the other two concerns of project management, the validation plan only addresses them tangentially. Cost is not a direct concern of the validation plan. However, in that a significant part of the cost of a project may come from the labor of the resources involved, the roles identified in the validation plan should also be addressed in the project plan. But the validation plan doesn't go so far as to articulate an organizational breakdown structure or labor rates.

Time is also not a direct concern of the validation plan. It does not include a work breakdown structure or any estimates of task/activity duration. In fact, the schedule of validation activities is not specifically addressed in the validation plan and as I pointed out in my blog post, these activities follow the SDLC.

Why is a Validation Plan required?

So if a Validation Plan overlaps the scope of a project plan, but does not address the cost or the schedule of validation, what good is it? Some of the value of a validation plan simply comes from its form as a narrative version of the project deliverables. While most of the "form" of a validation plan is captured in a template (for example, the Validation Plan Template in the Toolbox in the sidebar), this form explains the purpose of each deliverable required for validation and places it in the context of the methodology, SOPs, Document Management, and Change Control of the executing organization's Quality Management System (QMS).

The validation plan also functions as a catalog of all the documents required to demonstrate the system was validated and how it will be maintained in a validated state. As such, it forms the basis for the Validation Summary Report (VSR), which records the actual results of the validation versus the approved plan.

However, the primary value of the validation plan is as a record of management's intention to validate the system. Once the document is approved and signed it becomes a controlled quality record that is placed in the record management system of the executing organization and is available for audits.

How is a Validation Plan created?

A Validation Plan is not so much created as it is tailored from a template provided by the executing organization. The template itself should be consistent with the QMS and methodologies of the executing organization. Generally, the organization responsible for Quality or Validation will provide this level of tailoring.

The next step of tailoring is based on the scope of validation - in other words, the boundaries between the system being validated and its operating environment. (See my previous post What is a computerized system? for an explanation of this boundary.) The boundaries and responsibilities for the various elements will drive the content of the validation plan. Elements may be combined in a single validation plan or segregated into separate validation plans. As an example, in the Toolbox in the sidebar I have a link to the International Society for Pharmaceutical Engineering (ISPE) that provides an example of a validation template for equipment. And on a recent engagement, I developed a Validation Plan Template that allocated responsibilities for validating software as a service in the cloud among the cloud tenants, the cloud service provider, and the infrastructure hosting vendor.

And finally, the last step in tailoring is to identify by name every document that will be required for validating the system.

Friday, July 6, 2012

Now on Technorati


Now you can find A Piece of My Mind on Technorati.


Monday, July 2, 2012

Are We There Yet? How to Tell When Testing is Complete With Simple Metrics


"When will testing be complete?" is one of the most-asked questions of a testing manager. And, just as often, the answer is given in terms of "% Complete". But what does that really mean?

Measures and Metrics

% Complete is a valuable testing metric that reports on how many of the planned tests have been executed and shows your progress versus your plan:
  • % Complete = (Number of tests executed) / (Number of tests planned)
So, if you've planned 15 tests and executed 8 tests, you are (8) / (15) = 53% complete.
Simple enough, if all you want to know is how many tests you've executed versus your plan. But it doesn't tell the whole story.

What about the quality of the product you're testing? Easy enough, that is the number of executed tests that have passed:
  • % Passed = (Number of tests passed) / (Number of tests planned)
So, if you've planned and executed 15 tests and 10 have passed, then (10) / (15) = 67% passed reflects how many of the requirements you are testing for have been met.

Obviously, things have not gone as well as planned if we're only meeting 67% of the requirements. There's a simple metric you can use to express the cost of poor quality.

  • % Rework = (Number of tests re-executed) / (Number of tests executed + Number of tests re-executed)
So, if you planned 15 tests, executed all 15, and had to re-execute 8 of them, then (8) / (15 + 8) = 35% rework, a metric reflecting the additional effort caused by any number of reasons, from poor requirements and design to coding and testing errors. (But that's a discussion about defect management and root cause analysis for another day.)

One thing % Complete, % Passed, and % Rework all have in common is that they're metrics. In other words, they're calculated. And the things they are calculated from - tests planned, tests executed, tests re-executed, and tests passed - are measures. In other words, they're counted. The good news is that the data for these measures and metrics should be found in your test log.

I have included a simple test execution log (available in the Toolbox in the sidebar) that automatically calculates and graphs the measures and metrics. (I have populated the log only with the information necessary to drive the calculation of metrics and measures.)
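
To make the arithmetic concrete, here is a minimal Python sketch of how these metrics could be derived from such a log. The log layout, test IDs, and counts below are hypothetical examples, not the contents of the Toolbox spreadsheet; the formulas are the ones given above.

```python
from collections import Counter

# Hypothetical execution log: one (test_id, result) record per run, in order.
execution_log = [
    ("TC-01", "Pass"), ("TC-02", "Fail"), ("TC-03", "Pass"),
    ("TC-02", "Pass"),                    # TC-02 re-executed after a fix
]
tests_planned = 15

runs_per_test = Counter(test_id for test_id, _ in execution_log)
tests_executed = len(runs_per_test)                            # distinct tests run at least once
tests_reexecuted = sum(n - 1 for n in runs_per_test.values())  # runs beyond each test's first
latest_result = {test_id: result for test_id, result in execution_log}  # last result wins
tests_passed = sum(1 for result in latest_result.values() if result == "Pass")

pct_complete = tests_executed / tests_planned
pct_passed = tests_passed / tests_planned
pct_rework = tests_reexecuted / (tests_executed + tests_reexecuted)

print(f"% Complete: {pct_complete:.0%}")   # 3 of 15 planned tests executed -> 20%
print(f"% Passed:   {pct_passed:.0%}")     # 3 of 15 planned tests passing  -> 20%
print(f"% Rework:   {pct_rework:.0%}")     # 1 re-execution out of 3 + 1    -> 25%
```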

Are We There Yet?

But none of these measures and metrics directly answers the question "When will testing be complete?". This is because each measure and metric is a snapshot in time. The answer to the question can be found by graphing the measures over time to produce a trend chart that should look something like this:

Figure: Test Measure Trend Chart

If you just go by the % Complete versus plan, testing is done on Day 2 when all tests have been executed once. But the quality of the product is less than adequate with only 67% of the requirements passing their tests. So you keep on testing. But when do you stop? The answer to that is in the "Total Passed" trend line. It should always form an S-shaped curve, with the top of the curve approaching the number of tests planned. The point at which the "Total Passed" S-curve levels off (Day 5 in the chart above) is when you should stop testing. At this point you should assess whether you've met your exit criteria for the testing phase, e.g., no open critical or major defects, and whether to release the product.
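
For illustration, here is a rough Python sketch of that stopping rule. The daily totals are invented for the example (they are not the data behind the chart), and the exit criteria are reduced to a single defect count.

```python
# Hypothetical cumulative "Total Passed" counts, one value per day of testing.
daily_total_passed = [4, 8, 10, 12, 14, 14, 14]
tests_planned = 15
open_critical_or_major_defects = 0

def plateau_day(cumulative):
    """Return the 1-based day on which the cumulative passed count first reaches
    its final plateau, or None if it is still climbing on the last observed day
    (i.e., the leveling-off has not been confirmed yet)."""
    if len(cumulative) < 2 or cumulative[-1] > cumulative[-2]:
        return None
    return cumulative.index(cumulative[-1]) + 1

day = plateau_day(daily_total_passed)
if day is None:
    print("Total Passed is still climbing; keep testing.")
else:
    exit_criteria_met = open_critical_or_major_defects == 0
    print(f"Total Passed leveled off on day {day} at "
          f"{daily_total_passed[-1]}/{tests_planned} tests passing; "
          f"exit criteria {'met' if exit_criteria_met else 'not yet met'}.")
```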

Monday, June 25, 2012

Cracking the Code: 21 CFR Part 11 Authentication and Authorization


 

Recently, I was asked about the 21 CFR Part 11 compliance implications of the following scenario:

The user (an Approver) used his id/password to login and continued his session and arrived at the "APPROVE NOW" screen. The software does not challenge the user (again) at the time of this critical event (e.g., APPROVAL) for his password again.

21 CFR Part 11: Authentication and Authorization

First, a little background on what the US Code of Federal Regulations has to say about electronic signatures. 21 CFR Part 11 uses "authority checks" to address two different aspects of information security - authentication and authorization.

Authentication concerns identifying the user:

11.10(d) Limiting system access to authorized individuals

Authentication also involves a procedural control, i.e., a controlled, documented process for granting access to a new user (before the userid and password are issued) and for deleting a user account.

Authorization concerns the level of access a particular authenticated user should have to secured resources controlled by the system:

11.10(g) Use of authority checks to ensure that only authorized individuals can use the system, electronically sign a record, access the operation or computer system input or output device, alter a record, or perform the operation at hand.

(The first clause, ensuring that only authorized individuals can use the system, concerns authentication; the rest of the list, covering signing records, accessing devices, altering records, and performing operations, concerns authorization. I can understand the confusion caused by using "authorized" to describe authentication and combining both concepts into a single clause.)

Authorization also involves:
 

  • Definitions of user privileges - different levels of access based on user responsibilities (if appropriate) that are documented and controlled (e.g., user access levels are approved by management prior to granting access to the user).
  • Procedures - a controlled, documented process for granting privileges to a new user, changing privileges for an existing user, and removing privileges.
  • Software controls - the application verifies that an individual has the appropriate privileges to perform specific functions within the application before allowing them to do so (e.g., access the operation or computer system input or output device, alter a record, or perform the operation at hand). Specifically with regard to electronic signatures, the application verifies that an individual has the authority to electronically sign a record before allowing them to do so.

Cracking the Code

Now, with some background in 21 CFR Part 11, let's apply it to the scenario described above:

The user (an Approver) used his id/password to login [<=authentication] and continued his session and arrived at the "APPROVE NOW" screen. The software does not challenge the user (again) at the time of this critical event (e.g., APPROVAL) [<=authorization] for his password again.

Assuming the user has been properly authenticated with a userid/password login, the question now becomes: does the person clicking "APPROVE" have the privilege to electronically sign the record? The risk is that between the time of authenticating the user and electronically signing the record, there is no way for the software to know if it's really the same user.

For example, an authorized user is authenticated (logs on) and then walks away from the device without locking it. A second person then uses the device to approve records (before a screen saver time out). Software that doesn't require some verification for an approval would have no way to know if the user is authorized. The only way the software would "know" at the time of the approval is to challenge the user to provide a shared secret, re-entry of the password.

For this reason, every electronic signature requires not only the meaning associated with the signature (such as review, approval, responsibility, or authorship), but also entry of the shared secret (the password):

11.200(a)(1)(i) When an individual executes a series of signings during a single, continuous period of controlled system access, the first signing shall be executed using all electronic signature components [<=initial authorization, not authentication]; subsequent signings shall be executed using at least one electronic signature component that is only executable by, and designed to be used only by, the individual [<=subsequent authorizations].

AND there should be a policy in place regarding NOT sharing passwords:

11.200(a)(3) Be administered and executed to ensure that attempted use of an individual's electronic signature by anyone other than its genuine owner requires collaboration of two or more individuals.
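
Putting the two requirements together, here is a minimal Python sketch of a signing flow that reflects them. Everything in it (the User class, the approve_record function, the privilege name) is hypothetical and greatly simplified; it illustrates the control itself, not any particular product's implementation.

```python
import hashlib
import hmac

class User:
    """A hypothetical authenticated user. Only a hash of the password is kept."""
    def __init__(self, userid, password, privileges):
        self.userid = userid
        self._pw_hash = hashlib.sha256(password.encode()).hexdigest()
        self.privileges = privileges          # e.g., {"approve_batch_record"}

    def verify_password(self, entered_password):
        entered_hash = hashlib.sha256(entered_password.encode()).hexdigest()
        return hmac.compare_digest(self._pw_hash, entered_hash)

def approve_record(record, user, entered_password, meaning="Approved"):
    """Apply an electronic signature to a record.

    Authentication happened at login; this function repeats the two controls
    that must also occur at signing time:
      1. Authorization - does this user have the privilege to sign?
      2. Re-entry of at least one signature component (the password), so the
         system knows the person signing is still the authenticated user.
    """
    if "approve_batch_record" not in user.privileges:
        raise PermissionError(f"{user.userid} is not authorized to sign this record")
    if not user.verify_password(entered_password):
        raise PermissionError("Password re-entry failed; signature not applied")
    record["signature"] = {"signed_by": user.userid, "meaning": meaning}
    return record

# Usage: the signature is applied only when the password is re-entered correctly.
approver = User("jdoe", "s3cret", privileges={"approve_batch_record"})
record = {"id": "BR-1234", "status": "ready for approval"}
approve_record(record, approver, entered_password="s3cret")
print(record["signature"])    # {'signed_by': 'jdoe', 'meaning': 'Approved'}
```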

Conclusion

The use of a userid and password to login to software is an issue of authentication and is performed once to identify the user and permit access to the software. This authentication does NOT address the requirement for authorization to electronically sign and approve electronic records, which must be done for each record. In short, since the software in question doesn't require the entry of a password for the electronic signature when approving electronic records, it doesn't comply with the requirements of 21 CFR Part 11. 

Monday, June 18, 2012

Plan Your Work – Work Your Plan


The first time I studied a Quality Management System (QMS), I thought it was a business operations playbook. It seemed so obvious to me that what was described in the Quality Manual was, at a high level, how a business should run.
Strictly speaking (according to ASQ, the American Society for Quality), a quality management system is:
A formalized system that documents the structure, responsibilities and procedures required to achieve effective quality management.
In other words, it is a blueprint for how quality will be achieved and maintained within an organization.
Recently I had the opportunity to develop the Quality Manual for a project I was working on. I used ISO 9000 as a reference standard / checklist to ensure I included all the appropriate elements. (You can find a link to ISO in the sidebar.) As I was mapping the interrelated processes described in the QMS (and shown in the figure below), it occurred to me that I was creating a supersized PDCA cycle. The PDCA cycle (which stands for Plan-Do-Check-Act, also known as the Shewhart cycle or Deming cycle) is a four-step process for quality improvement.
Figure 1. Quality Management System in a regulated environment
Step 1: Plan
In this step, you develop a way to build quality into the product or service you intend to produce and also a way to improve that product or service. In a QMS, these are the Quality Management System Planning processes. These plans cover not only the realization of the product/service, but also the management of the resources to produce the product/service and the QMS processes to ensure and improve the quality of the product/service.
Step 2: Do
In this step, you carry out your plans from Step 1: product/service realization, resource management, and the processes of the QMS.
Step 3: Check
The third step is to compare what you produced with what you planned. These are the Measurement, Analysis, and Improvement processes of the QMS. These activities break down into two areas: quality assurance and quality control. While the two terms are often used interchangeably because they both concern ensuring the quality of the product/service, quality assurance is about the processes you use to ensure quality and quality control is about the product itself.
Step 4: Act
The last step is to review the results of the operations of the QMS and decide whether your objectives have been met and whether changes should be made to meet them. Based on the results of these Management Review processes within the QMS, if changes are required, your next move is back to Step 1: Plan, and the continuous improvement cycle begins again.

Tuesday, June 12, 2012

Validation is a Parallel Process, Not a Parallel Universe


I hope by now I’ve convinced you from my previous blog posts that computerized systems validation is more than testing. It has its own terminology that comes from process manufacturing validation and covers a lot of the same ground as a system development life cycle (SDLC). It is a process, parallel to the development and operation of a computer system, that ensures the system meets its requirements when placed in production and then continues to meet them during its operational life.
In the table below, I’ve mapped validation activities and deliverables to the SDLC, highlighting those documents that are not typically the concern of business analysts or application developers. In fact, most of the “validation” deliverables are already in the SDLC and produced by the relevant subject matter experts (business analysts, developers, test analysts, and business representatives). Once you account for the differences in terminology between IT and validation (e.g., a plan is equivalent to a protocol), it should be clear that a validated computer system is a by-product of good practice.
Table 1. Comparison of Validation Steps and Documents to the SDLC
Step 1: Determine validation activities
  • Validation documents: Validation Plan (VP)
  • SDLC phase: Project Planning
Step 2: Specify the system development details
  • Validation documents: User Requirements Specification (URS), Functional Requirements Specification (FRS); SDLC phase: Specification
  • Validation documents: Configuration Specification (CON), System Design Specification (SDS), Data Conversion Plan (DCP), Infrastructure Requirements Specification (IRS); SDLC phase: Design
Step 3: Perform Qualification Activities
  • Design Qualification (DQ): Requirements Traceability Matrix (RTM), Vendor Audit (VA), Design Review (DSR)
  • Installation Qualification (IQ): IQ Protocol (IQP), IQ Test Case/Scripts (IQT), IQ Summary Report (IQR); SDLC phases: Construction (build physical infrastructure, build virtual infrastructure, build virtual machine, develop custom RICE, application installation, application configuration) and Testing (infrastructure verification, virtual machine verification, application verification, unit tests, string tests, integration tests)
  • Data Conversion Qualification (DCQ): Data Conversion Protocol (DCL), Data Conversion Test Case/Scripts (DCT), Data Conversion Summary Report (DCR); SDLC activity: verify data conversion
  • Operational Qualification (OQ): OQ Protocol (OQP), OQ Test Case/Scripts (OQT), OQ Summary Report (OQR); SDLC activity: System Acceptance Test (SAT)
  • Performance Qualification (PQ): PQ Protocol (PQP), PQ Test Case/Scripts (PQT), PQ Summary Report (PQR); SDLC activity: User Acceptance Test (UAT)
  • Exception Management (ERF, ERL); SDLC phases: Construction and Testing
Step 4: Develop / Review Controls and Procedures
  • Validation documents: Standard Operating Procedures (SOP), Training Procedures, Training Records
Step 5: Certify the System
  • Validation documents: Validation Summary Report (VSR), Release to Production Note (RN); SDLC phase: Release
Step 6: On-going Operations
  • Validation documents: Configuration Control, Change Control, Document Control, Record Retention, Security, Backup and Recovery, Disaster Recovery, Business Continuity; SDLC phase: Operation & Maintenance
Step 7: Periodic Review
  • Validation documents: Periodic Review Report
In fact, only three documents are produced exclusively by the validation team:
  • Validation Plan
  • Validation Summary Report
  • Periodic Review Report
The balance of the highlighted documents is what differentiates CSV from the SDLC:
Validation Scope
  • Vendor assessment involving additional SMEs outside of IT, e.g., QA and procurement professionals who assist in vendor audits.
  • Standard operating procedures relating to the functions controlled by the computerized systems (and the Quality Management System) are developed or revised by business process owners and/or business analysts.
  • Having evidence that people are trained to perform their roles.
Maintaining the Validated State and Recordkeeping
Maintaining the validated state is demonstrated by creating documented evidence (records) from the standard operating procedures of the Quality Management System:
  • Configuration Control
  • Change Control
  • Document Control
  • Record Retention
  • Security
  • Backup and Recovery
  • Disaster Recovery
  • Business Continuity
As I’ve said before, validation is more than testing. In fact it’s more than the SDLC. But if you appreciate how it overlaps with good practices you can quickly see how to leverage existing capabilities to achieve and maintain a validated computerized system.

Friday, June 8, 2012

Validation - It's More Than Just Testing (Part 4: Maintaining the Validated State)


The first part of this series introduced the Validation V-Model, an overview of a validation methodology, and compared it to the Waterfall SDLC methodology. Next, I discussed the terminology used by validation professionals to describe those SDLC activities that cause so much agita among IT professionals. And in the last post I explored the scope of computerized systems validation. This final post about why validation is more than just testing focuses on maintaining the validated state in production and demonstrating that the computerized system is in control.
 

Emphasis on maintaining the validated state

Validation is not a one-time event. Validation is part of the complete life cycle of a computer system that includes planning, specification, programming, testing, commissioning, documentation, operation, monitoring and modifying. Once testing is complete and the computerized system is known to be controlled, it’s important to maintain its validated state during the operational life span of the computerized system. This, of course, is achieved by correctly following the written procedures and maintaining the system.
However, if maintenance or a change is required to a validated system, it must be subject to change control. A change control system should be in place to document all changes to facilities, equipment, processes, or procedures that may have an impact on the product quality. The impact of every change should be evaluated and the extent of re-validation defined and performed.
For example, if you make changes to a computerized system after it has been validated, how do you know whether it is still operating in a controlled and consistent manner? Without a formal evaluation of the impact of the change and re-validation of the system, you have no way of knowing the answer to this question.
Change control and re-validation only address changes to the computerized system. Another key element of maintaining the validated state of a computerized system is to provide evidence that the written procedures are being followed. In order to do this, you must conduct a periodic review (aka an audit). It is a good practice to undertake an internal audit several times a year, targeting different processes and procedures each time. While periodic reviews are necessary, they aren’t sufficient: you must also have a Corrective Action and Preventive Action (CAPA) system to manage and fix anything found during an audit.
 

“If it’s not written down, it didn’t happen.”

The importance of good documentation and record keeping cannot be over emphasized. There is much to say about record keeping and good documentation practices, but I’ll save that for a future blog post.
For now, you must recognize that documentation and records demonstrate compliance with requirements, standards, procedures, and regulations. The word I like to use is “evidence” because it puts me in a legal frame of mind months after the activities occurred. In other words, what evidence can you show me that the system implemented two years ago was in control at the time of its implementation and maintained in control thereafter? Good records enable you to track all activities and provide a history of those activities.
 

In Conclusion

While the terminology of computerized systems validation may cause confusion, many aspects of it are “good practice” and incorporated into current methodologies. Its emphasis on demonstrating the system will consistently produce information that meets its specifications is balanced between the development of the system and the operation and maintenance of the system. The concerns of validation are not limited to a technical demonstration that design specifications are met, but include the assessment of vendors, development of procedures, training, change control, and audit – in short, more than just testing.

Thursday, June 7, 2012

Validation - It's More Than Just Testing (Part 3: Validation Scope)


The first part of this series introduced the Validation V-Model, an overview of a validation methodology, and compared it to the Waterfall SDLC methodology. And the second part discussed the terminology used by validation professionals to describe those SDLC activities that cause so much agita among IT professionals. This post is about the scope of computerized systems validation – specifically the activities that differentiate it from the SDLC.
 

Scope of validation

Whether you call it validation, verification, or testing, the activities performed in the testing phase of the SDLC generally refer to the dynamic testing of a system by exercising it against its specifications – the system design, functional requirements, and user requirements. However, validation takes a broader view that includes static testing, vendor assessment, development of procedures, training, and the operation of the system in production.
Static testing goes by many names – design review, peer review, code review, inspection, walkthrough. An essential part of the SDLC, static testing is a means of verifying the requirements and design of software without actually exercising the software. It involves a manual review of documents for errors to ensure they are complete, appropriate, and consistent (both internally and with related documents). It is a cost-effective way of building quality into a system because bugs discovered at the early stages of development are less expensive to fix.
When key software, computer systems, or services impacting the user’s product quality are purchased from vendors, the user is still responsible for the overall validation. This aspect of validation involves assessing the vendor to establish assurance that the vendor’s development and delivery processes meet the requirements of the user’s company for quality. Vendor assessment as part of the RFP process may be included in certain methodologies and is an essential part of computerized systems validation when vendor products and services are used.
The development of operational procedures is another key differentiator between the SDLC and CSV. The fundamental purpose of procedures is to ensure operational production processes are properly guided by management, performed in a consistent way, and capture and communicate important related information and data. However, procedures are also part of the internal control system and are used not only to ensure key activities are performed consistently, but also to manage risk and demonstrate compliance.
A well written procedure will manage risk in two ways. First, procedures mitigate risk by describing reasonable measures to prevent foreseeable risks from occurring and how to recover if such a risk occurs. And second, they reduce risk by capturing organizational knowledge to mitigate the loss of key personnel.
Procedures themselves do not demonstrate compliance, but a well-defined and documented process will generate records that demonstrate process capability and demonstrate an effective internal control system and compliance with regulations and standards.
And finally, training is also in the scope of validation. Operating a system in production requires not only that the right equipment be installed correctly and that procedures be written and followed to ensure consistent performance, but also that employees have the skills and knowledge to operate the equipment and follow the procedures. Training involves both basic training on the theory and practice of GMP and specific training relative to each person’s role. (For application developers and other IT staff, this requires specific training relative to computerized systems validation.) Training should also be highlighted as part of the change control system. If new equipment or systems are installed or procedures change, then the employees must know how to use them.
 

Next

Validation is not a one-time event. In the final part of this series, I’ll discuss the maintenance of the validated state of a computerized system and the importance of good record keeping.

Wednesday, June 6, 2012

Validation - It's More Than Just Testing (Part 2: Validation Terminology)


The first part of this series introduced the Validation V-Model, an overview of a validation methodology, and compared it to the Waterfall SDLC methodology.
This post is about the terminology used by validation professionals to describe those SDLC activities that cause so much agita among IT professionals.

Validation Terminology

The terminology used in computerized system validation (CSV) has its roots in pre-computerized manufacturing processes. First, manufacturing equipment was installed and examined to ensure it was compliant with appropriate codes and approved design intentions, and that the manufacturer's recommendations were suitably considered; this was called the installation qualification (IQ). Next, the equipment was exercised to see if it was capable of consistently operating within its established limits and tolerances; this was called the operational qualification (OQ). And finally, the equipment was used to produce a product to establish confidence that the finished product met all its requirements; this was called the performance qualification (PQ).
If you were to install a gas stove, you would have a qualified plumber perform the installation per the manufacturer’s directions (the IQ) and then turn on each burner to verify it came on and could produce high, medium, and low flames and that the oven could reach and maintain a temperature (the OQ). You would then use a burner to boil water in a pot and the oven to bake a cake to ensure the stove could be used to prepare dinner (the PQ).
The development of information systems performs similar testing, but the activities go by different names, e.g., unit/string/integration testing (aka IQ for custom development), system testing (aka OQ), and user acceptance testing (aka PQ).
It’s important to note that the performance of the equipment (or information system) is specified by the user’s requirements for the equipment (or information system) which may not be the entire operational range of the equipment (or information system).
For COTS software purchased for a specific purpose, the implication is that you do not have to qualify all the functions of the software, only those you intend to use and for which you have written requirements. So, the IQ would consist of the installation of the COTS software per the manufacturer’s instructions. The OQ might consist of any vendor-provided scripts to verify the installation or some rudimentary tests to ensure the COTS software operated in your environment. And the PQ would be custom scripts to demonstrate the functions of the system you intend to use meet the requirements you have previously written.
For custom developed software built for a specific purpose, that is, to the “established limits and tolerances” of the user requirements, there is less distinction between OQ and PQ. The IQ would consist of the installation of the operating system, databases, utilities, tools, etc. per the manufacturer’s instructions. The OQ would consist of custom scripts to demonstrate the requirements of the functional specification. And the PQ would consist of custom scripts to verify the requirements of the user requirements specification.
As I discussed in my previous blog entry “Tower of Babel”, adding to the confusion of terms with similar and overlapping meanings, different organizations mix the terms and definitions. Some organizations refer to verification as validation. Some define verification as dynamic testing and validation as static testing (i.e., peer review). Others refer to testing as verification or qualification. And others refer to qualification as validation. My advice is to adopt the lingua franca of the organization in which you are undertaking the validation effort and focus on the validation activities rather than the terms used to describe them.

Next

In the third part of this four part series, I’ll discuss the scope of computerized systems validation.

Tuesday, June 5, 2012

Validation – It’s More Than Just Testing (Part 1: The V-Model)


Validation is more than just testing. Its scope is broader than testing and it has an emphasis on not only achieving a validated state for a computerized system, but also maintaining that validated state. While the terminology may be different, many aspects of computerized systems validation (CSV) are “good practice” and occur as part of the development, implementation, and maintenance of a system. However, if you do not exercise discipline in your document and record management you may not be able to prove that a system is validated.

The V-Model is similar to the Waterfall SDLC Methodology

The figure below shows a high level overview of a validation methodology, called the V-Model, based on GAMP® 5, and applied to a cloud environment. It shows the deliverables of the V-Model with the validation activities across the top, the project SDLC activities across the bottom, and the allocation of responsibilities between the tenant, the cloud, and the physical/virtual infrastructure hosting vendor along the left.
Figure 1. V-Model applied to a cloud environment

The methodology shown in the figure is called the V-Model because while it has many aspects of a waterfall SDLC methodology, the depiction of the deliverables as a “V” illustrates the symmetry between the testing and verification of a computerized system against the specification on the same level. It starts on the left-hand side of the diagram with specifying both what the users want from the system and also what capabilities and functions the system must have.
From the specification phase on the left-hand side of the diagram, there should be enough information to configure, design, and/or build the system and its supporting physical and virtual infrastructure, which occurs at the base of the V. This is where the system is installed, customized, and tested by the developer.
The dynamic testing and verification of the configured and customized system occurs on the right-hand side of the V. Dynamic testing is exercising the application against a specification while verification is confirming that a requirement is met outside of the system by developing controls and procedures.
Typically, there is a symmetry in the V as the dynamic testing and verification of the application is against the specification on the same level, so there is more detailed testing at the lower levels of the V (white box testing focused on the inputs and outputs of the modules of software) and higher level testing at the top of the V (black box testing looking at the overall functions of the application).
Once all validation deliverables are complete and approved, the application is released to the operation and maintenance life cycle phase.

So what’s different about validation?

Been there. Done that. So what else is new? As I see it, there are four things that differentiate CSV from the SDLC. I’ll be covering these in more detail in subsequent blog posts:
  1. Validation terminology
  2. Validation scope
  3. Emphasis on maintaining the validated state
  4. Record keeping

Monday, June 4, 2012

Tower of Babel - Validation, Verification, Qualification, and Testing


Validation, verification, qualification, and testing – don’t they really mean the same thing? Well, yes. And no.
In Life Sciences, we use these terms to refer to different activities associated with computerized systems validation as shown in the figure below. To simplify the discussion, let’s start with these terms: validation, testing, and verification.

Validation

The FDA provides the following definition of validation that is used in the Life Sciences industry:
Establishing documented evidence which provides a high degree of assurance that a specific process will consistently produce a product meeting its predetermined specifications and quality attributes.
This is a general definition of validation that has its roots in pre-computerized manufacturing processes, so it needs a little deconstruction in order to understand it in the context of information systems.
In manufacturing, a process is a sequence of actions or operations, performed by a person with or without equipment, to produce a product. In information systems, the sequence of operations is the application program, there may be equipment controlled by the computer system or the computer system itself may be the only equipment needed, and the product is information or data. So, one application of the FDA definition of validation to information systems is:
Establishing documented evidence which provides a high degree of assurance that a specific application program will consistently produce information or data meeting its predetermined specifications and quality attributes.
The two main activities that follow from this definition of validation are (1) testing of the computer system and (2) verification that the documentation associated with the computer system to control its operation and ensure its maintenance is created, followed, and regularly reviewed for appropriateness.
Figure 1. Activities associated with computerized systems validation

Testing the Computer System (aka Verification and Qualification)

Testing is confirmation that a computer system meets its requirements and specifications. This is achieved through variously detailed levels of testing beginning with the installation of the hardware and system software; however, the term “qualification” is used because of the pre-computerized manufacturing roots of validation:
  • Installation Qualification (IQ) – (FDA) Establishing confidence that process equipment and ancillary systems are compliant with appropriate codes and approved design intentions, and that manufacturer's recommendations are suitably considered. In other words: (1) installation of hardware and system software per the manufacturer’s instructions, or (2) in the cloud, the provisioning of a virtual machine per an approved procedure and the installation of system software per the manufacturer’s instructions
  • Operational Qualification (OQ) – (FDA) Establishing confidence that process equipment and sub-systems are capable of consistently operating within established limits and tolerances. In other words: testing against the documented and approved requirements and specifications (unit, string, and integration testing per the documented and approved system design specifications; and system testing per the documented and approved functional requirements).
  • Performance Qualification (PQ) – (1) (FDA) process performance qualification: establishing confidence that the process is effective and reproducible, or (2) (FDA) product performance qualification: establishing confidence through appropriate testing that the finished product produced by a specified process meets all release requirements for functionality and safety. In other words: user acceptance testing (UAT) against documented and approved user requirements.

Verification of Other Functional Controls (aka Validation and Qualification)

Verification concerns the confirmation that the documentation associated with the computer system to control its operation and ensure its maintenance is created, followed, and regularly reviewed for appropriateness. This is achieved through the creation, review, and approval of documents and records and the auditing of operational and QMS (quality management system) processes:
  • URS, FRS, SDS, et al. – approved user, functional, and design documents to be used for testing computer systems and equipment
  • SOPs (standard operating procedures) – documented and approved instructions to be followed for carrying out an operation or in a given situation
  • Records produced by SOPs – provide objective evidence that procedures are being followed
  • Training records – provide objective evidence that people are qualified to perform their roles, to operate computer systems and equipment, and to understand the procedures to be followed
  • Periodic Review and Audits – to ensure documentation remains appropriate, to verify the existence of objective evidence (records) showing conformance to required procedures, and to assess how successfully procedures have been implemented

What’s the difference between qualification and validation?

Because of the manufacturing process roots of validation and its terminology, a distinction is sometimes made between qualification and validation.  Within the context of information systems the difference is that equipment and computer systems are qualified and computerized systems (including the controlled process) are validated.

Confused? There’s More

Adding to the confusion caused by these terms with similar and overlapping meanings, different organizations mix the terms and definitions.  Some organizations refer to verification as validation. Some define verification as dynamic testing and validation as static testing (i.e., peer review). Others refer to testing as verification or qualification. And others refer to qualification as validation. And because of the nature of custom software development, unit/string/integration testing is often referred to as IQ rather than OQ.
What’s important is not that we agree on terms, but that we understand all the activities associated with the validation of computerized systems and ensure that they are performed.
My strategy has been to understand the activities that are associated with computerized systems validation and then, with each consulting engagement, learn the terms the client organization assigns to those meanings.

Friday, June 1, 2012

What is a computerized system?


What is a computerized system? How is it different from a computer system?
A computerized system is a computer system with a purpose. When we talk about a computer system, we are simply referring to the hardware and software that comprise the computer system. But when we talk about a computerized system, we are referring to a function (process or operation) integrated with a computer system and performed by trained people.
There are two major components to a computerized system:
  1. The computer system that controls the function (process or operation)
  2. The function (process or operation) that is controlled by the computer system
The controlling computer system is comprised of hardware and software, but in the case of cloud computing the computer system is comprised of virtual hardware (the virtual machine) and software (system software and application software).
The controlled function is comprised of equipment to be controlled and operating procedures performed by trained people. In some cases, the controlled function may not require equipment other than the operation of the computer system in which case the controlled function consists entirely of operating procedures (of the computer system) performed by trained people.
Why is this distinction important?
The distinction between computer systems and computerized systems is important when validating computerized systems. Validation requires ensuring the computerized system is “fit for purpose” which involves not only the verification (static and dynamic testing) that occurs when building a computer system, but also the development of procedures for the operation of the computerized system, training of the people who will perform the controlled functions of the computerized system, and periodic review and maintenance of the validated state of the computerized system through configuration management and change control until the system is retired.
It’s also important to recognize that computerized systems do not exist in a vacuum. They exist in an operating environment potentially linking them to a multitude of other computers and applications. So it is important when validating a computerized system to describe its boundaries so that you don’t undertake validation of the entire operating environment.
What does a computerized system look like when cloud computing is the operating environment?
Within the context of a computerized system, the computer system is composed of a cloud application or support service and its supporting virtual machine and system software.
Figure 1. The relationship of the various components of a computerized system in a cloud operating environment.
The controlled function does not require equipment other than the operation of the computer system, i.e., the application or support service and its supporting virtual machine and system software. Controlled functions are performed on computer systems by trained people according to operating procedures.
The physical infrastructure, virtual infrastructure, interfaces and networked functions through LAN and WAN are aspects of both the computerized system and operating environment potentially linking a multitude of computers and applications.

Introduction

I work at the intersection of project and quality management leading the way from good practice to best practices in information systems development and computerized systems validation.