Defect forecasting: Turn past mistakes into future gains
Recognizing that defects are a part of the quality process can build trust between users, testers and developers.
Everyone makes mistakes, but what’s learned from these mistakes builds strength. This mantra can be applied to many aspects of life, including software development. Whether these mistakes are in the form of software defects or inconclusive testing, it’s important to learn from them to avoid recurrence. Plus, past mistakes can be used to predict future defects in software development, especially when upgrading or maintaining existing software.
Developing a maintainable defect management plan gives teams a handy tool for predicting project defects. Teams can use the plan to circumvent potential pitfalls, saving significant time and avoiding the need for retroactive development.
Before creating a defect management plan, consider the customer’s needs. If the project is an upgrade or maintenance release, try to build from prior lessons learned. Find out what worked particularly well in the last release as well as what didn’t work. Identifying what is most important to the customer will inform a process that works for everyone and leads to efficient root-cause analysis and quick defect removal time.
A defect management plan should include a quality-focused defect prediction plan that features a detailed estimate of the number of defects found per iteration of work. Iterations may vary between teams and projects and may impact team velocity, or the amount of work a team can complete in an iteration or set period of time.
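Velocity, as defined above, is simply the work a team completes per iteration. A minimal Python sketch, using hypothetical sprint figures rather than real project data, shows how a team might track it:

```python
# Hypothetical story points completed in each finished sprint.
completed_points = [21, 18, 24, 20]

# Velocity: the average amount of work the team completes per iteration.
velocity = sum(completed_points) / len(completed_points)
print(f"Average velocity: {velocity:.2f} points per sprint")
```

Early in a project this average will swing widely; it becomes a useful forecasting input only once several sprints of data accumulate.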
There are several steps that should be taken to create the prediction plan:
Gather data from previous releases. This is a good initial step toward predicting defects in the early stages. Start with the level of effort, sizing and velocity tracked in a previous release. These measurable attributes can provide substantial data to develop a template to use moving forward.
Use prior defect reports to develop baseline predictions. Refer to similar code development work with a proportional defect density (defects found per unit of work size). Calibrate the baseline by comparing previously predicted defect counts with actual defect counts.
Organize projected defect identification instances according to sprint cycle and testing functions. It may be helpful to identify areas where a greater number of defects were found during user acceptance testing by a subject matter expert. Predicting defects based on tester role may help to identify environments where defects are commonly found.
Re-evaluate predictions as velocity becomes more consistent. Forecasting at the beginning of a project may be challenging because the team may still be working through sizing and identifying efforts for the work to be completed. As the project progresses, the team will gain perspective, and its velocity will become more consistent.
Use root-cause analyses to help the team understand why there were more defects than expected or, in some cases, fewer. Sometimes sizing and level of effort are underestimated or overestimated. Forecasting should directly correlate with these attributes throughout development, so continue to track sizing and level of effort as the project progresses.
Pinpoint complexities or areas of logic in code that tend to be more difficult to understand or test. While forecasting may be difficult for one-of-a-kind releases, it can be very advantageous for maintenance or upgrade releases with fixed time, size and velocity. If a project touches existing or legacy code, forecasting may surface areas of logic or complexity that would otherwise be difficult to recognize, understand or test. As testers continue to log defects, trends will emerge, and the process will improve with each release as more historical data becomes available to analyze and review.
Review defect reports and lessons learned to create baselines for future projects. Mapping the number of defects to the level of effort, sizing and testing environment can provide enough information to improve forecasting in the future.
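The steps above can be sketched in a short Python example. All figures here are hypothetical; a real plan would pull them from the team's own defect reports and sizing data, and the simple defects-per-point density is just one reasonable baseline:

```python
# Hypothetical figures from a previous, similar release.
prior_defects = 48   # defects found in the prior release
prior_size = 120     # work delivered, in story points

# Baseline defect density: defects per unit of work size.
density = prior_defects / prior_size   # 0.4 defects per point

# Forecast defects for each planned sprint of the upcoming release.
planned_sprint_sizes = [20, 25, 22]
predicted = [round(density * size) for size in planned_sprint_sizes]

# As sprints complete, compare actual counts against the forecast...
actual = [9, 12, 8]
for sprint, (p, a) in enumerate(zip(predicted, actual), start=1):
    print(f"Sprint {sprint}: predicted {p}, actual {a}, delta {a - p}")

# ...then fold the new data back in to re-baseline the density,
# so the next release's forecast reflects lessons learned.
revised_density = (prior_defects + sum(actual)) / (prior_size + sum(planned_sprint_sizes))
print(f"Revised density: {revised_density:.3f} defects per point")
```

The re-baselining step at the end is the programmatic version of the lessons-learned review: each release's actuals tighten the baseline used for the next forecast.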
The science of forecasting will likely never lead to defect-free code, but it can help teams set a realistic schedule and gauge the thoroughness of the test plan. Providing detailed defect prediction plans gives the user insight into what to expect during acceptance testing. Recognizing that defects are a part of the quality process, and not always a negative thing, can build trust between users, testers and developers. The more accurately a team can forecast the number of defects and where to expect them, the better it can adhere to schedules and confidently deploy quality code.