The first part of this series introduced the Validation V-Model, gave an overview of the validation methodology, and compared it to the Waterfall SDLC methodology.
This post is about the terminology validation professionals use to describe SDLC activities, terminology that causes so much agita among IT professionals.
The terminology used in computerized system validation (CSV) has its roots in pre-computerized manufacturing processes. First, manufacturing equipment was installed and examined to ensure it complied with appropriate codes and approved design intentions, and that the manufacturer's recommendations were suitably considered; this was called the installation qualification (IQ). Next, the equipment was exercised to see if it was capable of consistently operating within its established limits and tolerances; this was called the operational qualification (OQ). And finally, the equipment was used to produce a product to establish confidence that the finished product met all its requirements; this was called the performance qualification (PQ).
If you were to install a gas stove, you would have a qualified plumber perform the installation per the manufacturer’s directions (the IQ), and then turn on each burner to verify that it came on and could produce high, medium, and low flames, and that the oven could reach and maintain a set temperature (the OQ). You would then use the burners to boil water in a pot and the oven to bake a cake, to ensure the stove could be used to prepare dinner (the PQ).
Information systems development involves similar testing, but the activities go by different names: unit/string/integration testing (aka IQ for custom development), system testing (aka OQ), and user acceptance testing (aka PQ).
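That correspondence between qualification terms and SDLC test activities can be sketched as a simple lookup table (a minimal illustration of the mapping described above; the function name is mine, not an industry convention):

```python
# Illustrative mapping of SDLC test activities to their
# qualification counterparts, per the equivalences above.
QUALIFICATION_EQUIVALENTS = {
    "unit/string/integration testing": "IQ",  # for custom development
    "system testing": "OQ",
    "user acceptance testing": "PQ",
}

def qualification_for(activity: str) -> str:
    """Return the qualification term for a given SDLC testing activity."""
    return QUALIFICATION_EQUIVALENTS[activity]

print(qualification_for("system testing"))  # → OQ
```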
It’s important to note that the performance of the equipment (or information system) is specified by the user’s requirements for it, which may not cover its entire operational range.
For COTS software purchased for a specific purpose, the implication is that you do not have to qualify all the functions of the software, only those you intend to use and for which you have written requirements. So, the IQ would consist of the installation of the COTS software per the manufacturer’s instructions. The OQ might consist of any vendor-provided scripts to verify the installation or some rudimentary tests to ensure the COTS software operated in your environment. And the PQ would be custom scripts to demonstrate the functions of the system you intend to use meet the requirements you have previously written.
For custom developed software built for a specific purpose, that is, to the “established limits and tolerances” of the user requirements, there is less distinction between OQ and PQ. The IQ would consist of the installation of the operating system, databases, utilities, tools, etc. per the manufacturer’s instructions. The OQ would consist of custom scripts to demonstrate the requirements of the functional specification. And the PQ would consist of custom scripts to verify the requirements of the user requirements specification.
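As a rough sketch of how the two approaches above differ, the qualification activities might be recorded like this (all class and field names are hypothetical, invented for illustration, and not drawn from any regulatory standard):

```python
from dataclasses import dataclass, field

# Hypothetical record of qualification activities; the structure and
# names are illustrative only, not from any validation standard.
@dataclass
class QualificationPlan:
    system_type: str                                   # "COTS" or "custom"
    iq_steps: list[str] = field(default_factory=list)  # installation qualification
    oq_steps: list[str] = field(default_factory=list)  # operational qualification
    pq_steps: list[str] = field(default_factory=list)  # performance qualification

cots_plan = QualificationPlan(
    system_type="COTS",
    iq_steps=["Install the COTS software per the manufacturer's instructions"],
    oq_steps=["Run vendor-provided scripts to verify the installation",
              "Rudimentary tests of operation in the target environment"],
    pq_steps=["Custom scripts demonstrating the intended functions "
              "meet the previously written requirements"],
)

custom_plan = QualificationPlan(
    system_type="custom",
    iq_steps=["Install OS, databases, utilities, tools, etc. "
              "per the manufacturer's instructions"],
    oq_steps=["Custom scripts demonstrating the functional specification"],
    pq_steps=["Custom scripts verifying the user requirements specification"],
)
```

Note how the plans differ where the text says they should: for COTS, the custom scripting effort lands in the PQ; for custom-built software, custom scripts appear in both the OQ (functional specification) and the PQ (user requirements specification), which is why the OQ/PQ distinction blurs.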
As I discussed in my previous blog entry, “Tower of Babel”, different organizations add to the confusion by mixing terms with similar and overlapping meanings. Some organizations refer to verification as validation. Some define verification as dynamic testing and validation as static testing (i.e., peer review). Others refer to testing as verification or qualification. And still others refer to qualification as validation. My advice is to adopt the lingua franca of the organization in which you are undertaking the validation effort and focus on the validation activities rather than the terms used to describe them.
In the third part of this four-part series, I’ll discuss the scope of computerized systems validation.