In October 2005, "Getting it Right" loosely defined levels of rigor for commissioning tests, with the intent of helping building owners understand the wide variety of testing approaches available from commissioning providers. Last month (November 2005) and this month, I am presenting a qualitative discussion of the advantages and disadvantages associated with the various levels of rigor. The intent is to help building owners decide what is most appropriate for their projects prior to soliciting commissioning services.

To help this column stand on its own, I'll repeat the definitions for each level of rigor. Last month I covered the three lowest levels, and this month I'll complete the discussion with the two highest levels of rigor.

Medium-High Rigor Testing

Preparation: Generic test procedures with generic acceptance criteria customized for the systems to be commissioned.

Execution: The Cx provider decides in the field exactly how to execute the generic test procedures. The contractor performs tests under Cx provider direction.

Documentation: Pass/fail of each acceptance criterion documented on test procedure form and recommendation for acceptance or rejection.

The inclusion of acceptance criteria for each step of the test procedure has two major benefits. First, it significantly reduces the amount of time spent, both in the field during testing and afterward, debating whether the test passed or failed. In fact, experience has shown that a lack of acceptance criteria can leave an owner no better off with commissioning than without it. If the project team cannot agree on what is required, there is little chance that the owner will receive what the owner expects. Second, the test becomes much more repeatable for future use by system operators.

The generic aspect of this level of rigor's procedures and acceptance criteria leaves a fair amount of latitude and decision making for the Cx provider in the field. As such, testing personnel need to be high-level, technically experienced individuals. Sometimes, generic acceptance criteria include language such as, "The system performs as specified." It is unreasonable to expect anyone to memorize all of the words of a specified system's operation, so this level of rigor may also require that the design specification and/or approved system shop drawings be on hand for reference during the field tests. If disputes arise regarding a pass/fail determination, valuable time will be spent looking things up and debating their interpretation. Consequently, the design engineers need to be more closely involved in the testing process in order to arbitrate disputes or clarify their intent.

High Rigor Testing

Preparation: Customized detailed step-by-step test procedures with pass/fail acceptance criteria for each step.

Execution: The Cx provider directs contractor through all steps, from start to finish in a single testing session.

Documentation: Pass/fail of each step documented on test procedure form and separate summary test report and action list.
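To make the high-rigor structure concrete, the three elements above - step-by-step actions, an unambiguous pass/fail criterion for each step, and dual documentation (a filled-out form plus a summary verdict and action list) - can be sketched in code. This is purely a hypothetical illustration, not an industry-standard format; the step names, readings, and limits are invented examples.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class TestStep:
    action: str                      # instruction the Cx provider gives the contractor
    criterion: str                   # human-readable acceptance criterion
    passes: Callable[[float], bool]  # unambiguous pass/fail check for the field reading

def run_procedure(steps: List[TestStep], readings: List[float]):
    """Execute each step against its field reading; return the filled-out
    test form, a summary recommendation, and an action list of failures."""
    form: List[Tuple[str, str, float, str]] = []
    actions: List[str] = []
    for step, reading in zip(steps, readings):
        ok = step.passes(reading)
        form.append((step.action, step.criterion, reading, "PASS" if ok else "FAIL"))
        if not ok:
            actions.append(f"Correct and retest: {step.action} "
                           f"(criterion: {step.criterion}; observed {reading})")
    verdict = "recommend acceptance" if not actions else "recommend rejection"
    return form, verdict, actions

# Hypothetical supply-fan test; limits are invented for illustration only.
steps = [
    TestStep("Command AHU-1 supply fan to start",
             "fan status ON within 30 seconds",
             lambda seconds: seconds <= 30.0),
    TestStep("Verify duct static pressure holds setpoint",
             "1.5 in. w.c. plus or minus 0.1",
             lambda psi: abs(psi - 1.5) <= 0.1),
]
form, verdict, actions = run_procedure(steps, [12.0, 1.55])
```

Because every step carries its own criterion, the same procedure produces the same verdict no matter who directs the test, which is the repeatability benefit discussed below.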

Under this scenario, the test procedure and associated acceptance criteria become another mechanism for communicating the expected system operation to the contractors. They convey the same information as the specified control sequences in a different form, and some controls programmers reference the test procedures as another source of information about what the systems are expected to do. This increases the probability that the system will pass the first time through the test.

The customized test procedure also becomes the most repeatable, regardless of who performs it - contractors, engineers, building operators, etc. - as there is little left to the imagination and judgment of the person directing the test. With this level of rigor, everyone on the commissioning team has reviewed and agreed to both the test steps and the unambiguous acceptance criteria for each step prior to testing. This saves valuable time when time matters the most, usually at the end of the project.

The dual level of test documentation is also beneficial but more costly. The filled-out test procedures are a useful record of what happened, but they contain a level of technical detail that most people on the project team don't need. The summary test report and action list, however, are referenced immediately by team members responsible for correcting problems or making recommendations for acceptance. In addition, the test report can be used to summarize the benefits of commissioning realized by the owner as a result of that particular test - an important aspect of sustaining a commissioning program.

Of course, costs increase along with increased levels of testing rigor. Only the owner can decide which level represents the best value for each project. ES