We intend to study the relationship between the Bayesian approach and software testing. This relationship is important not only because testing should have significant implications for belief and confidence levels, but also because a Bayesian model should have ramifications for human judgment and decision making during testing. For example, one important question in software testing is ``How much testing is enough?''. This question may be addressed by explicit modeling of uncertainty, if sufficient testing is defined in terms of levels of confidence in selected system entities, for example, code modules. As testing progresses, confidence levels increase as long as test execution is successful. Testing is guided and monitored by continuously updating confidence levels and comparing them against predefined thresholds. Testers are notified and may take appropriate action whenever thresholds are exceeded. This approach may be especially useful in safety-critical systems, where confidence requirements and constraints are often specified numerically. With regard to CEquencer, we intend to enhance our Bayesian model to include testing information, as it becomes available at Beckman, and to monitor the effects of the Bayesian approach on testing decisions.
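The threshold-guided scheme above can be sketched with a simple conjugate model. This is only an illustrative sketch, not the paper's actual model: it assumes each module's test outcomes are independent Bernoulli trials with a uniform Beta(1, 1) prior on the per-test success probability, so confidence after observing passes and failures is a Beta posterior tail probability. The module names, counts, and threshold values are invented for the example.

```python
from math import lgamma, exp

def beta_tail(a, b, target, n=20000):
    """P(theta > target) for a Beta(a, b) distribution, computed by
    midpoint-rule numerical integration (standard library only)."""
    norm = exp(lgamma(a + b) - lgamma(a) - lgamma(b))  # 1 / B(a, b)
    step = (1.0 - target) / n
    total = 0.0
    for i in range(n):
        x = target + (i + 0.5) * step
        total += norm * x ** (a - 1) * (1.0 - x) ** (b - 1) * step
    return total

def module_confidence(passes, fails, target=0.90):
    """Confidence that a module's per-test success probability exceeds
    `target`, under a Beta(1, 1) prior: each pass raises confidence,
    each failure lowers it."""
    return beta_tail(1 + passes, 1 + fails, target)

# Monitor hypothetical modules against a predefined confidence threshold:
# testing may stop for a module once its confidence clears the threshold.
THRESHOLD = 0.95
for module, (passes, fails) in {"parser": (50, 0), "scheduler": (12, 1)}.items():
    conf = module_confidence(passes, fails)
    status = "enough testing" if conf >= THRESHOLD else "keep testing"
    print(f"{module}: confidence {conf:.3f} -> {status}")
```

With these invented counts, the heavily exercised module clears the threshold while the lightly tested one does not, mirroring how thresholds would notify testers where effort is still needed.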