Calculating sensitivity and specificity requires selection of a decision value for the test to define the threshold value at or above which the test is considered "positive." For any given test, as this cut point is moved to improve sensitivity, specificity typically falls, and vice versa. This dynamic tradeoff between more accurate identification of subjects with disease versus those without disease is often displayed graphically as a receiver operating characteristic (ROC) curve (Fig. 3-1). An ROC curve plots sensitivity (y-axis) versus 1 – specificity (x-axis). Each point on the curve represents a potential cut point with an associated sensitivity and specificity value.

The area under the ROC curve is often used as a quantitative measure of the information content of a test. Values range from 0.5 (no diagnostic information at all; the test is equivalent to flipping a coin) to 1.0 (a perfect test). In the testing literature, ROC areas are often used to compare alternative tests that can be used for a particular diagnostic problem (Fig. 3-1); the test with the highest area (i.e., closest to 1.0) is presumed to be the most accurate (a brief computational sketch of this construction appears at the end of this section). However, ROC curves are not a panacea for evaluation of diagnostic test utility. Like Bayes' theorem, discussed below, they are typically focused on only one possible test parameter (e.g., the ST-segment response in a treadmill exercise test) to the exclusion of other potentially relevant data. In addition, ROC area comparisons do not simulate the way test information is actually used in clinical practice. Finally, biases in the underlying population used to generate the ROC curves (e.g., related to an unrepresentative test sample) can bias the ROC area and the validity of a comparison among tests.

Measures of Disease Probability and Bayes' Theorem

Unfortunately, there are no perfect tests; after every test is completed, the true disease state of the patient remains uncertain. Quantitating this residual uncertainty can be …
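To make the ROC construction described above concrete, the following is a minimal Python sketch, not taken from the chapter. It sweeps a cut point across a small set of hypothetical test results, records sensitivity and 1 – specificity at each candidate threshold, and estimates the ROC area by the trapezoidal rule. The data, function names, and threshold convention (a result at or above the cut point is called positive) are illustrative assumptions, not part of the original text.

```python
# Hypothetical test results: (test value, true disease state).
results = [
    (1.2, False), (2.3, False), (2.8, True), (3.1, False),
    (3.9, True), (4.4, True), (4.8, False), (5.6, True),
    (6.1, True), (7.0, True),
]

def roc_points(results):
    """Sensitivity and 1 - specificity for every candidate cut point.

    A result is called "positive" when its value is at or above the cut point.
    """
    n_pos = sum(1 for _, diseased in results if diseased)
    n_neg = len(results) - n_pos
    # Sweep the cut point across every observed value, plus one beyond the
    # maximum (where no result is called positive).
    cuts = sorted({v for v, _ in results}) + [max(v for v, _ in results) + 1]
    points = []
    for cut in cuts:
        tp = sum(1 for v, diseased in results if diseased and v >= cut)
        fp = sum(1 for v, diseased in results if not diseased and v >= cut)
        points.append((fp / n_neg, tp / n_pos))  # (1 - specificity, sensitivity)
    # Sort by the x coordinate so the curve runs from (0, 0) to (1, 1).
    return sorted(points)

def auc(points):
    """Area under the ROC curve by the trapezoidal rule."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2
    return area

pts = roc_points(results)
print("ROC points (1 - specificity, sensitivity):", pts)
print("Area under the ROC curve:", round(auc(pts), 3))
```

Run as written, the sketch reports an ROC area of about 0.83 for these hypothetical data; moving the cut point along the listed values reproduces the sensitivity–specificity tradeoff the text describes.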
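The sketch below, also not from the chapter, applies Bayes' theorem in its standard form for a dichotomous test: it converts a pretest probability of disease into a posttest probability using the test's sensitivity and specificity. The function name and the numerical values (90% sensitivity, 85% specificity, 30% pretest probability) are hypothetical illustration choices.

```python
def posttest_probability(pretest_prob, sensitivity, specificity, test_positive=True):
    """Posttest probability of disease after a positive or negative test result.

    Bayes' theorem for a positive result:
        P(D | +) = sens * p / (sens * p + (1 - spec) * (1 - p))
    and for a negative result:
        P(D | -) = (1 - sens) * p / ((1 - sens) * p + spec * (1 - p))
    """
    p = pretest_prob
    if test_positive:
        true_pos = sensitivity * p
        false_pos = (1 - specificity) * (1 - p)
        return true_pos / (true_pos + false_pos)
    false_neg = (1 - sensitivity) * p
    true_neg = specificity * (1 - p)
    return false_neg / (false_neg + true_neg)

# Hypothetical example: 90% sensitivity, 85% specificity, 30% pretest probability.
print(round(posttest_probability(0.30, 0.90, 0.85, test_positive=True), 3))   # ~0.72
print(round(posttest_probability(0.30, 0.90, 0.85, test_positive=False), 3))  # ~0.048
```

For these assumed values, a positive result raises the probability of disease from 30% to about 72%, while a negative result lowers it to about 5%, which is the sense in which the residual uncertainty after testing can be quantified.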