Tuesday, June 6, 2017

Agile Project-Release Management Using Burn Charts


Agility and Predictability


“Some businesses value agility, but many businesses value predictability more than they value the ability to change direction quickly. For those businesses, becoming more Agile is a second level consideration; the first level consideration is how to become more predictable. … [While] Agile does allow you to change directions more easily […it] doesn’t require you to change directions and it certainly does expect you to have a direction. [So] how does Agile give you a predictable schedule? By giving you early, frequent, and fine-grained feedback and allowing you to steer the project to that schedule.”

We’ve been selling the benefits of Agile for some time now – great ROI with incremental return, speed to market with greater adaptability, greater visibility, and lower cost. However, what a large enterprise also needs is predictability. This need is often expressed in requests for measures and metrics.

So, how is Agile more predictable than traditional Waterfall?
  • First, the Agile Project-Release Plan is based on the empirical velocity of the Agile team. The initial average velocity of an Agile team is based on the actual value (in story points) completed during the first three sprints. In other words, the initial Agile Release Plan is based on the actual performance of the team performing the work.
  • With the empirical average velocity and the total Agile Project effort (determined by grooming the backlog, also expressed in story points, and established by the end of the first three sprints), you can determine:

Number of Sprints = (Total Release Effort) / (Average Velocity)
  • The Agile Release Plan can be refined at the end of every sprint based on the actual performance of the team.
  • Finally, Agile delivers fully tested stories with every sprint. In practice, this should include automated regression testing so you know the software continues to work. The traditional approach leaves testing in general, and regression testing in particular, to the very end, when there's little time to fix the problems that are discovered. This late discovery is what makes the traditional approach unpredictable.

Agile Release Planning IS Project Planning


The traditional approach is to create a serial, phased project plan for review at a Planning and Requirements Gate. While the project plan will provide a prediction of the delivery date, in practice the delivery date is often known even before planning starts, as many projects begin with all three sides of the iron triangle (scope, schedule, and cost) already constrained.

For Agile projects, a Release Plan is created comprising the sprints required to complete the project. It is based on the total release effort estimated in the project backlog and the team velocity established during the first three sprints. Roughly, if you divide the total release effort by the team velocity, you get the number of sprints necessary to complete the project.
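To make that arithmetic concrete, here is a minimal Python sketch; the backlog size and velocity are hypothetical figures, and the result is rounded up because you can't schedule a partial sprint:

```python
from math import ceil

def sprints_needed(total_release_effort: int, avg_velocity: float) -> int:
    """Estimate the number of sprints required to complete the release."""
    # Round up: a fraction of a sprint still requires a whole sprint.
    return ceil(total_release_effort / avg_velocity)

# Hypothetical figures: a 600-point backlog and an average velocity of 50 points/sprint
print(sprints_needed(600, 50))  # -> 12 sprints
```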

[Figure: Agile Project-Release Plan]

An Agile Release Plan is initially based on estimates, making it equivalent to a traditional serial, phased project plan. Unlike a traditional project plan, however, it can be refined at the end of every sprint based on the actual performance of the team. If an Agile Release Plan cannot meet the required delivery date, an Agile PM has the same options as a traditional PM:

  • Add resources (and cost) to increase team velocity or
  • Reduce scope (the total project/release effort)

However, the Agile PM can also manage scope at a more fine-grained level than the traditional PM often can; this is referred to as grooming the backlog. In Agile, teams do the most important things first: instead of leaving whole features to the end, only the "nice to have" refinements are left to the end.

Scope Management


At the end of every sprint, you know exactly where the project stands and the effort required to complete the project on schedule.

[Figure: Agile Project-Release Burn Up chart]

A Project-Release Burn Up chart displays scope (expressed in story points) over time (expressed in sprints). For every sprint you have a complete view of the project and a snapshot of the current project status information (a small sketch computing these measures follows the list):

  • Total Release Effort = size of the project/release backlog
  • Avg Earn Needed = the cumulative work (in story points) needed to be completed to stay on schedule
  • Forecasted Earn = the cumulative work (in story points) forecasted to be completed
  • Actual Earn = the cumulative work (in story points) completed
  • Forecast Velocity = the work (in story points) planned to be completed in the sprint
  • Actual Velocity = the work (in story points) completed in the sprint
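Here is a minimal Python sketch of how these measures relate to one another; all of the per-sprint numbers are hypothetical, and a real implementation would pull them from your Agile tool of record:

```python
from itertools import accumulate

# Hypothetical per-sprint story points (plan vs. delivery)
forecast_velocity = [60, 60, 60, 60, 60]
actual_velocity   = [55, 70, 48, 62, 58]

forecast_earn = list(accumulate(forecast_velocity))  # cumulative planned work
actual_earn   = list(accumulate(actual_velocity))    # cumulative completed work

total_release_effort = 600  # size of the release backlog, in story points
planned_sprints = 10        # planned length of the release
# Straight-line pace required to finish the whole backlog on schedule
avg_earn_needed = [total_release_effort * (s + 1) / planned_sprints
                   for s in range(planned_sprints)]

for sprint, (plan, done) in enumerate(zip(forecast_earn, actual_earn), start=1):
    print(f"Sprint {sprint:02d}: forecast earn {plan}, actual earn {done}, "
          f"avg earn needed {avg_earn_needed[sprint - 1]:.0f}")
```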

Project Scope Status Measures and Metrics


  • Actual Earn vs. Forecasted Earn vs. Average Earn Needed = 369 vs. 578 vs. 438 story points completed

By Sprint 10, the project has completed 369 story points, which is less than both the forecast of 578 story points and the 438 story points needed to complete the project on time. While the project is behind plan, it is recoverable (see below).
% Scope Complete = (Actual Earn) / (Total Release Effort)
By Sprint 10, the project has completed 56% of its planned scope.

Project Scope Performance Measures and Metrics


  • Actual Velocity vs. Forecast Velocity = 11 vs. 212 story points per sprint

Completing only 11 of 212 planned story points could mean the team has encountered a serious issue. It's also possible that the team simply isn't reporting its work. More likely, though, this snapshot was taken before the end of the sprint (perhaps in the first few days), so the actual velocity to date for the release is acceptable and no action is necessary as long as the individual Sprint burn down chart does not indicate an issue.

Velocity to Complete = (Total Release Effort - Actual Earn) / (Number of Remaining Sprints)

At the end of Sprint 10, the project needs to complete 96 story points per sprint to finish on time. This is achievable: the maximum velocity to date was 162 story points, in Sprint 08. So, while the project is behind plan, it is recoverable because the team has shown it can deliver as many as 162 story points in a sprint and only needs to deliver 96 story points in each remaining sprint.
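Here is a minimal sketch of that recoverability check in Python. The 369 points completed, the 96 points needed per sprint, and the 162-point maximum come from the example above; the 657-point total release effort and the 3 remaining sprints are inferred figures consistent with those numbers:

```python
def velocity_to_complete(total_release_effort: int, actual_earn: int,
                         remaining_sprints: int) -> float:
    """Story points per sprint needed to finish the release on time."""
    return (total_release_effort - actual_earn) / remaining_sprints

needed = velocity_to_complete(657, 369, 3)  # inferred totals; see lead-in
max_observed = 162  # best sprint so far (Sprint 08 in the example)
print(f"Need {needed:.0f} pts/sprint; recoverable: {needed <= max_observed}")
```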

% Sprint Scope Complete = (Actual Velocity) / (Forecast Velocity)

This metric is a measure of the value delivered by the team within the sprint. If this really is the end of Sprint 10, only 5% of the planned stories have been accepted for the sprint. It's also possible that the team isn't reporting its work. More likely, though, this snapshot was taken before the end of the sprint (perhaps in the first few days), so the actual velocity to date for the release is acceptable and no action is necessary as long as the individual Sprint burn down chart does not indicate an issue.

Schedule Management


The Release Burn Up chart can also provide information about your project's schedule performance. Here, the schedule information is in the relationships between the lines.

Project Schedule Performance Measures and Metrics


  • The Avg Earn Needed line represents the work progress needed over time to develop the entire project backlog (aka the Total Release Effort).
  • Any line ABOVE the Avg Earn Needed line shows more work (in story points) than planned and is therefore AHEAD of schedule.
  • Any line BELOW the Avg Earn Needed line shows less work than planned and is therefore BEHIND schedule.

[Figure: Agile Project-Release Schedule Burn Up chart]

In the chart above, the project is forecast to be ahead of schedule throughout its life. However, in Sprint 10 the Actual Earn line dips below the Avg Earn Needed line, showing that the project has fallen behind schedule. It's possible that the team isn't reporting its work; more likely, this snapshot was taken before the end of the sprint (perhaps in the first few days), so the Actual Earn is acceptable and no action is necessary as long as the individual Sprint burn down chart does not indicate an issue.
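The ahead/behind rule is simple enough to state in code. Here is a small Python sketch applied to a single point on the chart, using the Sprint 10 numbers from the example:

```python
def schedule_status(earn: int, avg_earn_needed: int) -> str:
    """Classify a point on the burn up chart against the pace needed to finish on time."""
    if earn > avg_earn_needed:
        return "AHEAD of schedule"
    if earn < avg_earn_needed:
        return "BEHIND schedule"
    return "ON schedule"

# Sprint 10 from the example: Actual Earn of 369 vs. Avg Earn Needed of 438
print(schedule_status(369, 438))  # -> BEHIND schedule
```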

Timelines vs. Burn Charts


It's easy enough to convert the project status information from a release burn up chart to something more familiar like a project timeline. This may be necessary when communicating project status to stakeholders who are used to receiving project information in the form of a timeline rather than a burn up chart.

[Figure: Key Milestones Timeline]

But when you collapse the two-dimensional burn chart into a one-dimensional timeline, you lose valuable information about the scope and schedule performance of the project. One of the first questions a stakeholder might ask when seeing the timeline above is "Is it recoverable?" or "Will you make your delivery date?" While the PM might answer yes, there are no facts or data in the timeline to support that response. With a burn chart, you could point out that the velocity to complete is 96 story points and that the team has delivered a maximum of 162 story points in the past (Sprint 08). So if you have to report status using timelines, be sure to have a release burn up chart as supporting documentation.

Cost Management


Burn up charts can also be used for cost management and budget performance. You can create a financial burn chart that displays cost/budget (expressed in dollars) over time (expressed in months).

  • Revised Plan = Total budget
  • Forecast = Cumulative planned spending
  • Actuals = Cumulative budget spent
  • Monthly Actuals = Budget spent per month
  • Monthly L/E = Latest estimate or forecast per month

[Figure: Agile Financial Burn Up chart]

Note: The time period of the financial burn up chart is monthly. The time period for release burn up charts is sprints (which are generally two to three weeks long).

Project Budget Status Measures and Metrics


% Budget Spent = (Actuals) / (Total Budget)

By July, the project has spent 55% of its planned budget. As shown in the next section, this is slightly under budget.

Project Budget Performance Measures and Metrics


The Forecast line represents the planned spending over time to consume the Total Budget (aka the Revised Plan).

  • An Actuals line ABOVE the Forecast line shows more budget spent than planned and is therefore OVER budget.
  • An Actuals line BELOW the Forecast line shows less budget spent than planned and is therefore UNDER budget (a small sketch of both checks follows the list).
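Here is a minimal Python sketch of both budget checks; the dollar figures are hypothetical, chosen to match the 55%-spent, slightly-under-budget snapshot described above:

```python
def budget_status(actuals: float, forecast: float, total_budget: float) -> str:
    """Report percent of total budget spent and position against planned spending."""
    pct_spent = actuals / total_budget
    trend = ("UNDER" if actuals < forecast
             else "OVER" if actuals > forecast else "ON")
    return f"{pct_spent:.0%} of total budget spent; {trend} budget vs. plan"

# Hypothetical July snapshot: $550k spent of a $1M budget, with $560k planned to date
print(budget_status(550_000, 560_000, 1_000_000))  # -> 55% spent; UNDER budget
```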

Project Management FAQs


Traditional project management starts with the question, "What do you want me to build?" which is quickly followed by "How much will it cost?" and "How long will it take?" In other words, the project starts with scope, and you estimate cost and time. This approach can give the illusion of certainty: I asked for what I wanted and got a cost estimate and a timeline. The problem is that we rarely know enough about what we want to provide good estimates. And even if we knew exactly what we wanted, we don't have perfect knowledge of how we would build it or how long it would take.

This relationship of scope, schedule, and cost is captured in the Iron Triangle which models the constraints of project management:

  • Scope – what must be built
  • Schedule – when it must be finished
  • Resources – who builds it and how much it must cost

In traditional project management, the scope is fixed and the quality, schedule, and cost are variable. The rule is “Good, Fast, or Cheap – Pick any Two”.

[Figure: The inverted iron triangle]


Agile flips the triangle upside down and asks the questions, "How much time do we have?" and "How much do we have to spend?" After the time and cost are established, the scope is varied to deliver within those constraints. Think of this in terms of product development: we have to be in a market at a certain time, and we only have this team of people to do the work (our time and resources are fixed). What can we build in that amount of time, for that amount of money, that will yield the most value for our stakeholders? (Our scope is variable.)

When will the project run out of money?


As part of Release Planning, you determine the total budget required:

Total Budget Required = (Number of Sprints) x (Cost per Sprint)

If the available budget is less than the total budget required, then there isn't enough money for the entire project. In that case, you can determine how many sprints the available budget will fund:

Number of Funded Sprints = (Available Budget) / (Cost per Sprint)


How much of the project can I complete with the available budget?


You can also determine how many story points you can complete with the available budget:

Fundable Story Points = (Number of Funded Sprints) x (Average Velocity)
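Taken together, these three formulas make a small funding calculator. Here is a minimal Python sketch; the cost per sprint, average velocity, and available budget are all hypothetical figures:

```python
from math import floor

COST_PER_SPRINT = 50_000  # hypothetical fully loaded team cost per sprint
AVG_VELOCITY = 50         # hypothetical story points completed per sprint

def funded_sprints(available_budget: float) -> int:
    """How many whole sprints the available budget will pay for."""
    return floor(available_budget / COST_PER_SPRINT)

def fundable_story_points(available_budget: float) -> int:
    """How much scope (in story points) the available budget can deliver."""
    return funded_sprints(available_budget) * AVG_VELOCITY

print(funded_sprints(400_000))           # -> 8 sprints
print(fundable_story_points(400_000))    # -> 400 story points
```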

The Bottom Line


Agile is not only adaptable, it’s also very predictable. And burn up charts give you most of the measures and metrics you need to manage the iron triangle of project management: scope, schedule, and cost.

Monday, May 29, 2017

Harmonizing Agile and CMMI Verification and Validation


Harmonizing CMMI and Agile is challenging but not impossible. First, you must find the balance between CMMI requirements for documentation and the Agile Manifesto declaration of valuing "working software over comprehensive documentation". Then you must convince the Agile practitioners of the value of any additional documentation. This presentation is a case study of integrating CMMI Level 3 Verification and Validation engineering practices with Agile software development in a large enterprise.

Harmonizing Agile and CMMI Verification and Validation

CMMI Speaker Bio

Beyond Compliance


Here's one from the archives comparing the language of regulatory compliance with the language of project and quality management. Conformance to requirements is only one aspect of quality. Quality is also reflected in the number of features and whether a product is "fit for purpose" - meaning no missing or incorrect requirements. Quality, including regulatory requirements, is built in, not inspected in.

Beyond Compliance - Product Quality and Organizational Compliance

Monday, July 16, 2012

Validation Roles and Responsibilities


Who is responsible for validating a computerized system?


Validating a computerized system is not a single event. It involves many activities and deliverables (see my previous post Validation is a Parallel Process, Not a Parallel Universe). A system is validated when the activities relevant to demonstrating that the system meets its requirements and is fit for purpose have been performed, reviewed, and approved by the management of the executing organization.

While some would think the test analysts or the validation lead are responsible for validating the system, just as there isn't a single validation event, there is no one person solely responsible for validating a computerized system. It involves many people within the organization, including IT, Quality, Validation, and the Business. Some of these people create validation deliverables. Others review and approve them. And still others file and retrieve the records of the validated system. All participate in the activities of validation.

The following list is an example drawn from the Validation Plan Template in the Toolbox (in the sidebar). The actual allocation of the responsibilities will vary depending on the roles within your organization. But it does illustrate the point that many people are involved in validating a computerized system.

Example of Validation Roles and Responsibilities

Application Developer
  • Create and execute IQ test cases for installation and configuration of software (all GAMP® 5 software categories)
  • Create and execute IQ test cases for each specific customization for GAMP® 5 software category 5 (Custom Software), e.g., RICE – Reports, Interfaces, Data Conversion, and Enhancements
Application Lead
  • Create Functional Requirements Specification
  • Create Configuration Specification for GAMP® 5 software category 4 (Configurable Software)
  • Create System Design Specification describing the technical details of specific customizations for GAMP® 5 software category 5
  • Create IQ protocol
  • Create IQ Summary Report
  • Create Data Conversion Report
  • Approve IQ test cases
  • Determine required SOPs, provide developmental support for SOPs, review and approve SOPs
Application Manager
  • Review and approve SOPs
Business Analyst
  • Create the User Requirements Specification
Operations Manager
  • Determine required SOPs, provide developmental support for SOPs, review and approve SOPs
Quality Manager
  • Perform Vendor Audit
  • Review and approve SOPs
Subject Matter Expert (SME)
  • Create the User Requirements Specification
  • Create the Functional Requirements Specification
  • Create the Configuration Specification
  • Develop SOPs
  • Develop Training Materials
Technical Lead
  • Review and approve GMP Assessment
  • Review and approve 21 CFR Part 11 Assessment
  • Review and approve Functional Risk Assessment
  • Review and approve Functional Requirements Specification
  • Review and approve Configuration Specification for GAMP® 5 software category 4 (Configurable Software)
  • Review and approve System Design Specification describing the technical details of specific customizations for GAMP® 5 software category 5
  • Review and approve IQ protocol
  • Review and approve IQ Summary Report
  • Review and approve Data Conversion Report
  • Determine required SOPs, provide developmental support for SOPs, review and approve SOPs
Test Analyst
  • Create OQ Test cases
  • Create PQ Test cases
Validation Lead
  • Perform compliance assessments
  • Determine whether or not to test requirements with a medium or low risk priority
  • Create the Requirements Traceability Matrix
  • Create the Operational Qualification Summary Reports
  • Create the Performance Qualification Summary Reports
  • Create the Validation Summary Report
  • Determine required SOPs, provide developmental support for SOPs, review and approve SOPs

Wednesday, July 11, 2012

Software Validation Plan

To be prepared is half the victory
-Miguel de Cervantes

What is a Validation Plan?

A validation plan describes the approach and responsible parties for validating a system within its operating environment. The "system" could be a piece of manufacturing equipment, a manufacturing process, or a computerized system. The key elements of a validation plan include:

  • The scope of work - the Validation Plan identifies both the system and its component elements to be validated and the required documentation
  • The responsibilities - the Validation Plan identifies the roles and specific responsibilities of the parties responsible for validating the system
  • The approach - the Validation Plan outlines the strategy for validating the system within its operating environment. The approach is generally bounded by the standards, procedures, and methodologies of the executing organization.

A Validation Plan is required whenever significant changes that affect the quality of a product are made to facilities, equipment, or processes.

How is a Validation Plan different from the Validation Master Plan?

A Validation Master Plan (VMP), sometimes called a Master Validation Plan (MVP), is the basis for the validation program within an organization. It should include process validation, facility and utility qualification and validation, equipment qualification, cleaning validation, and computerized system validation.

A Validation Plan is specific to one of the elements within the Validation Master Plan. For instance, a new biotechnology manufacturing plant will require a Validation Master Plan that addresses everything from the facility and utilities to the manufacturing equipment and processes, including the computerized systems that measure, record, process, or store quality data related to the product. Each element will require its own detailed Validation Plan, i.e., every piece of manufacturing equipment, every process, and every computerized system will be covered by a Validation Plan.

How is a Validation Plan different from the Project Plan?

In a previous post (Validation is a Parallel Process, Not a Parallel Universe), I mapped the validation activities and deliverables associated with computerized system validation (sometimes referred to as software validation) to the system development life cycle (SDLC). With the exception of a few items, most of the deliverables are already part of the SDLC. These deliverables define the scope of a validated SDLC project, and scope is one of the three parts of the project management triple constraint of cost, time, and scope.
[Figure: The triple constraint]
(For a nice discussion of how to manage the triple constraint, please read Tom Tsongas' blog entry Scope, Time and Cost – Managing the Triple Constraint.) It is important to note that when scope is equated with quality (as in discussions of the triple constraint), quality is used in the sense of the NUMBER of requirements rather than in the sense of MEETING requirements. In other words, more requirements means higher quality (with the implication that all requirements are met).

The validation plan addresses the other two concerns of project management only tangentially. Cost is not a direct concern of the validation plan. However, since a significant part of the cost of a project may come from the labor of the resources involved, the roles identified in the validation plan should also be addressed in the project plan. But the validation plan doesn't go so far as to articulate an organizational breakdown structure or labor rates.

Time is also not a direct concern of the validation plan. It does not include a work breakdown structure or any estimates of task/activity duration. In fact, the schedule of validation activities is not specifically addressed in the validation plan and as I pointed out in my blog post, these activities follow the SDLC.

Why is a Validation Plan required?

So if a Validation Plan overlaps the scope of a project plan, but does not address the cost or the schedule of validation, what good is it? Some of the value of a validation plan simply comes from its form as a narrative version of the project deliverables. While most of the "form" of a validation plan is captured in a template (for example, the Validation Plan Template in the Toolbox in the sidebar), this form explains the purpose of each deliverable required for validation and places them in the context of the methodology, SOPs, Document Management, and Change Control of the executing organization's Quality Management System (QMS).

The validation plan also functions as a catalog of all the documents required to demonstrate that the system was validated and how it will be maintained in a validated state. As such, it forms the basis for the Validation Summary Report (VSR), which records the actual results of the validation against the approved plan.

However, the primary value of the validation plan is as a record of management's intention to validate the system. Once the document is approved and signed it becomes a controlled quality record that is placed in the record management system of the executing organization and is available for audits.

How is a Validation Plan created?

A Validation Plan is not so much created as it is tailored from a template provided by the executing organization. The template itself should be consistent with the QMS and methodologies of the executing organization. Generally, the organization responsible for Quality or Validation will provide this level of tailoring.

The next step of tailoring is based on the scope of validation - in other words, the boundaries between the system being validated and its operating environment. (See my previous post What is a computerized system? for an explanation of this boundary.) The boundaries and responsibilities for the various elements will drive the content of the validation plan. Elements may be combined in a single validation plan or segregated into separate validation plans. As an example, in the Toolbox in the sidebar I have a link to the International Society for Pharmaceutical Engineering (ISPE) that provides an example of a validation template for equipment. And on a recent engagement, I developed a Validation Plan Template that allocated the responsibilities for validating software as a service in the cloud among the cloud tenants, the cloud service provider, and the infrastructure hosting vendor.

And finally, the last step in tailoring is to identify by name every document that will be required for validating the system.

Monday, July 2, 2012

Are We There Yet? How to Tell When Testing is Complete With Simple Metrics


"When will testing be complete?" is one of the most-asked questions of a testing manager. And, just as often, the answer is given in terms of "% Complete". But what does that really mean?

Measures and Metrics

% Complete is a valuable testing metric that reports how many of the planned tests have been executed, showing your progress against your plan:

  • % Complete = (Number of tests executed) / (Number of tests planned)

So, if you've planned 15 tests and executed 8 tests, you are 8 / 15 = 53% complete. Simple enough, if all you want to know is how many tests you've executed versus your plan. But it doesn't tell the whole story.

What about the quality of the product you're testing? Easy enough: that is the number of executed tests that have passed:

  • % Passed = (Number of tests passed) / (Number of tests planned)

So, if you've planned and executed 15 tests and 10 have passed, then 10 / 15 = 67% passed, which reflects how many of the requirements you are testing have been met.

Obviously, things have not gone as well as planned if we're only meeting 67% of the requirements. There's a simple metric you can use to express the cost of poor quality.

  • % Rework = (Number of tests re-executed) / (Number of tests executed + Number of tests re-executed)

So, if you planned 15 tests, executed 15 tests, and had to re-execute 8 tests, then 8 / (15 + 8) = 35% rework, a metric reflecting the additional effort expended for any number of reasons, from poor requirements and design to poor coding or testing. (But that's a discussion about defect management and root cause analysis for another day.)

One thing % Complete, % Passed, and % Rework have in common is that they're metrics. In other words, they're calculated. And the things they are calculated from - tests planned, tests executed, tests re-executed, and tests passed - are measures. In other words, they're counted. The good news is that the data for these measures and metrics should be found in your test log.
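These three metrics are easy to compute once the counts are taken off the test log. Here is a minimal Python sketch using the worked numbers from this post:

```python
def testing_metrics(planned: int, executed: int, re_executed: int,
                    passed: int) -> dict:
    """Calculate the three testing metrics from test log counts."""
    return {
        "% Complete": executed / planned,
        "% Passed":   passed / planned,
        "% Rework":   re_executed / (executed + re_executed),
    }

# This post's example: 15 planned, 15 executed, 8 re-executed, 10 passed
for name, value in testing_metrics(15, 15, 8, 10).items():
    print(f"{name}: {value:.0%}")  # -> 100%, 67%, 35%
```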

I have included a simple test execution log (available in the Toolbox in the sidebar) that automatically calculates and graphs the measures and metrics. (I have populated the log only with the information necessary to drive the calculation of metrics and measures.)

Are We There Yet?

But none of these measures and metrics directly answers the question "When will testing be complete?". This is because each measure and metric is a snapshot in time. The answer to the question can be found by graphing the measures over time to produce a trend chart that should look something like this:

[Figure: Test Measure Trend Chart]

If you go just by % Complete versus plan, testing is done on Day 2, when all tests have been executed once. But the quality of the product is less than adequate, with only 67% of the requirements passing their tests. So you keep on testing. But when do you stop? The answer is in the "Total Passed" trend line. It should always form an S-shaped curve with the top of the curve approaching the number of tests planned. The point at which the "Total Passed" S-curve levels off (Day 5 in the chart above) is when you should stop testing. At this point you should assess whether you've met your exit criteria for the testing phase, e.g., no open critical or major defects, and whether to release the product.
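If you want to detect that leveling-off programmatically, a simple plateau check on the "Total Passed" series will do. This is a sketch of one possible heuristic, not a method from this post; the stall threshold is an assumption you should tune to your reporting cadence and exit criteria:

```python
def testing_done(total_passed: list[int], stall_periods: int = 2) -> bool:
    """True when the 'Total Passed' S-curve has leveled off.

    Plateau heuristic (an assumption, not from the post): no new passing
    tests over the last `stall_periods` reporting intervals.
    """
    if len(total_passed) <= stall_periods:
        return False
    recent = total_passed[-(stall_periods + 1):]
    return max(recent) == min(recent)

# Hypothetical daily totals: growth stops after Day 5
print(testing_done([4, 8, 11, 13, 14, 14, 14]))  # -> True
```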