A software project team is strictly adhering to the Cleanroom Software Engineering methodology, emphasizing statistically certified reliability. They are in the final acceptance testing phase of the third product increment. The independent Certification Team has just revealed that, based on statistical usage testing, the calculated Mean Time To Failure (MTTF) for the current increment falls significantly below the minimum certified reliability target established in the contract specification.
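For context only, a minimal sketch of how a Certification Team might compute such an MTTF point estimate from statistical usage testing: the mean of the observed times between failures, compared against the contractual target. All data values and the target here are hypothetical, invented for illustration.

```python
# Hypothetical illustration of Cleanroom-style certification arithmetic:
# MTTF estimated as the mean of observed interfailure times.
# The data and target below are invented, not from the scenario.

def estimate_mttf(interfailure_times):
    """Point estimate of MTTF: mean of observed times between failures."""
    if not interfailure_times:
        raise ValueError("need at least one observed interfailure time")
    return sum(interfailure_times) / len(interfailure_times)

# Hours of simulated usage between successive failures (hypothetical data).
observed = [12.0, 8.5, 15.0, 6.5, 10.0]
target_mttf = 20.0  # contractual minimum MTTF in hours (hypothetical)

mttf = estimate_mttf(observed)
print(f"Estimated MTTF: {mttf:.1f} h (target: {target_mttf} h)")
print("Certified" if mttf >= target_mttf else "Below certified reliability target")
```

In practice, Cleanroom certification uses reliability-growth models fitted to interfailure data rather than a raw mean, but the comparison against a contractual target follows the same pattern.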
The implementation team is arguing they should be allowed to run full-coverage unit tests on suspect modules to quickly pinpoint the likely errors before the next build. The Verification Team leader insists on simply increasing the size and diversity of the random-usage test suite to gather more data.
Which action should you take next that is both methodologically sound and most consistent with Cleanroom principles?
A. Postpone the increment release, formally document the current achieved MTTF, and defer the required corrective action and refactoring to the next planned incremental build cycle to stabilize the current process.
B. Permit the implementation team to perform targeted unit testing on the suspect modules to quickly diagnose the root cause, provided all fixes and tests are fully documented and reviewed by the Verification Team before inclusion in the final build.
C. Immediately halt the implementation team's work, conduct a formal design and code walk-through by the independent Inspection Team, and utilize the formal specification to mathematically prove the correction before any code modification is committed.
D. Reject the unit testing proposal, require the Certification Team to focus their next testing cycle exclusively on high-risk, unverified use-case profiles to isolate the faults, and then apply minimal, verified changes.