One of our biggest complaints about the way Quality Assurance is typically performed on large systems projects is that the process is reactive rather than predictive. Most QA consultants wait for a failure to happen (for example, a project deliverable has defects), report that the defect occurred, and then figure out what to do about the problem after the fact. It’s an approach that evolved from financial auditing, and it is a bit like driving a car while looking only in the rear-view mirror: all we can say is either “Everything has been OK so far” or “Oops, we hit a tree back there.” The issues this raises are pretty obvious:
- You have some crisis or failure to deal with;
- Dealing with this takes time that could be better spent elsewhere;
- Relationships can be strained and trust lost; and
- You have a quality problem.
So what do we do differently? Over the years PK has developed a “toolkit” of techniques we use to help identify potential issues and avoid them. We call it “Predictive Management”. The techniques range from developing mathematical models that predict project performance and schedule delays to conducting facilitated workshops that proactively identify, prioritize, and avoid potential risks. Some of these we have developed from our own experience (declining balance work graphing); others are industry-standard tools (Failure Mode and Effects Analysis). Combined, they form a powerful toolkit for avoiding quality issues. Do they identify and avoid every issue? Of course not; you’ll never predict everything. What we have found, though, is that rigorous application of these techniques identifies most of the major issues that significantly delay projects and raise costs ahead of time, so they can be avoided or at least mitigated to optimize project performance.
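To make one of the industry-standard tools concrete: Failure Mode and Effects Analysis scores each potential failure mode on severity, occurrence, and detection (each conventionally rated 1–10) and multiplies them into a Risk Priority Number used to rank where mitigation effort goes first. The sketch below illustrates the mechanics only; the failure modes and scores are hypothetical, not drawn from any actual project.

```python
# Minimal sketch of an FMEA worksheet. Standard 1-10 scales:
#   severity   - impact if the failure occurs (10 = catastrophic)
#   occurrence - likelihood of the failure (10 = near-certain)
#   detection  - difficulty of catching it in time (10 = undetectable)
# RPN (Risk Priority Number) = severity * occurrence * detection;
# a higher RPN means the risk deserves earlier mitigation.
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int
    occurrence: int
    detection: int

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detection

# Hypothetical risks for a systems project (illustrative scores only).
modes = [
    FailureMode("Data migration drops records", severity=9, occurrence=4, detection=6),
    FailureMode("Vendor interface spec changes late", severity=7, occurrence=6, detection=3),
    FailureMode("Key resource leaves mid-build", severity=6, occurrence=3, detection=2),
]

# Rank by RPN, highest first, to decide which risks get mitigation plans.
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{m.name}: RPN = {m.rpn}")
```

Run in a facilitated workshop, a table like this turns a vague worry list into an ordered agenda: here the hypothetical migration risk (RPN 216) would be tackled before the staffing risk (RPN 36).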