# ASME PTC 19.1-2013 pdf download

Test Uncertainty.

The statistics found by combining these multiple data sets may be used to estimate the variations in the result that might be due to the control of test operating conditions, or to the use of different test rigs, instrumentation, or test locations. Whereas these influences might normally be considered systematic errors during repeated tests, duplicated tests can randomize these systematic errors, providing error estimates from the statistical variations in the combined data pool [6]. The overall reported result is usually the mean of the multiple results, R.
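As a concrete illustration, pooling duplicated test results into a grand mean and the random standard uncertainty of that mean can be sketched as follows (the numerical values are hypothetical, not taken from the standard):

```python
import math

def combine_results(results):
    """Combine M duplicated test results into their grand mean and the
    random standard uncertainty of the mean, s / sqrt(M)."""
    m = len(results)
    mean = sum(results) / m
    # sample standard deviation of the individual results
    s = math.sqrt(sum((r - mean) ** 2 for r in results) / (m - 1))
    return mean, s / math.sqrt(m)

# four hypothetical duplicated-test results (arbitrary units)
mean, s_mean = combine_results([101.2, 99.8, 100.5, 100.9])
```

Because the duplicated tests randomize rig-to-rig and day-to-day effects, the scatter of the pooled results already reflects those sources, and the standard uncertainty of the mean shrinks with the number of duplications.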

Careful consideration should be given to designing the test series to average as many causes of variation as possible within cost constraints. The test design should be tailored to the specific situation. For example, if experience indicates that time-to-time and apparatus-to-apparatus variations are significant, a test design that averages multiple test results on one rig, or over only one day, may produce optimistic random uncertainty estimates compared to testing several rigs, each monitored several times over a period of several days. The causes of test variation are many and may include the above plus environmental and test-crew variations. Historic data are invaluable for studying these effects. A statistical technique called analysis of variance (ANOVA) is useful for partitioning the total variance by source [7].
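A minimal one-way ANOVA of this kind, partitioning total variability into a between-rig and a within-rig component, might look like the sketch below. The data and the single grouping factor are hypothetical; reference [7] covers the general multi-factor case:

```python
def one_way_anova(groups):
    """Partition the total sum of squares into a between-group component
    (e.g. rig-to-rig) and a within-group component (repeat-to-repeat)."""
    all_vals = [x for g in groups for x in g]
    n = len(all_vals)
    grand = sum(all_vals) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return {
        "SS_between": ss_between,
        "SS_within": ss_within,
        "MS_between": ss_between / (len(groups) - 1),   # variance per source
        "MS_within": ss_within / (n - len(groups)),
    }

# hypothetical results: three rigs, three repeats each
stats = one_way_anova([[10.1, 10.3, 10.2],
                       [10.6, 10.8, 10.7],
                       [10.0, 10.2, 10.1]])
```

A between-group mean square much larger than the within-group mean square would indicate that the rig-to-rig source dominates and should be averaged over in the test design.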

7-2 SENSITIVITY

Sensitivity is the rate of change of a result with respect to a change in a variable, evaluated at the desired test operating point. Two approaches to estimating the sensitivity coefficient of a parameter are discussed below.
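When the result cannot be differentiated analytically, the sensitivity coefficient can be estimated by perturbing the variable numerically about the operating point. A minimal sketch, with a hypothetical electrical-power result R = V·I:

```python
def sensitivity(f, x, i, rel_step=1e-6):
    """Estimate theta_i = dR/dx_i at operating point x by a central
    finite difference, perturbing only the i-th variable."""
    h = abs(x[i]) * rel_step or rel_step   # scale the step to the variable
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    return (f(xp) - f(xm)) / (2 * h)

# hypothetical result: power R = V * I at V = 120 V, I = 5 A
power = lambda x: x[0] * x[1]
theta_V = sensitivity(power, [120.0, 5.0], 0)   # analytically equals I
```

For this product form the analytical partial derivative is simply the other variable, which provides a convenient check on the numerical estimate.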

Note that in this case the signs for all the correlated terms are positive because all of the derivatives of z with respect to m1, m2, and m3 are negative. If flowmeters 1, 2, and 3 are calibrated against the same standard, and flowmeter 4 is calibrated against a different standard, the systematic standard uncertainty for z is larger than if all the meters had been calibrated against different standards (Case 1).
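This combination can be checked numerically. The sketch below assumes z = m4 − (m1 + m2 + m3) and perfect correlation between meters sharing a standard (b_ik = b_i·b_k), using the ±1.5 kg/h and ±4.5 kg/h values from the example:

```python
import math

def bz_systematic(thetas, b, corr_pairs):
    """Systematic standard uncertainty of a result with correlated terms:
    b_R^2 = sum(theta_i^2 * b_i^2) + 2 * sum_{i<k} theta_i * theta_k * b_ik,
    taking b_ik = b_i * b_k for each pair sharing a calibration standard."""
    bz2 = sum(t * t * bi * bi for t, bi in zip(thetas, b))
    for i, k in corr_pairs:
        bz2 += 2 * thetas[i] * thetas[k] * b[i] * b[k]
    return math.sqrt(bz2)

thetas = [-1.0, -1.0, -1.0, 1.0]   # z = m4 - (m1 + m2 + m3)
b = [1.5, 1.5, 1.5, 4.5]           # kg/h

case1 = bz_systematic(thetas, b, [])                        # all standards differ
case2 = bz_systematic(thetas, b, [(0, 1), (0, 2), (1, 2)])  # meters 1-3 share one
```

Because the correlated terms among meters 1, 2, and 3 are all positive, the shared standard increases b_z relative to the fully independent case, as the text states.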

Case 3: Flowmeters 1, 2, 3, and 4 Are Calibrated Against the Same Standard That Has an Uncertainty Expressed as % of Reading. This example began by stating that the calibration standard systematic standard uncertainty was ±1.5 kg/h for each of the three small meters and ±4.5 kg/h for the large meter. In this case, each of the four flowmeters is calibrated against the same standard, which has a specified uncertainty expressed as a percent of the flow rate.

The sketch of the flowmeter arrangement for this example showed that meters 1, 2, and 3 are in parallel and sum to the flow that is sensed by meter 4. This suggests that, ideally, meters 1, 2, and 3 should each carry about one-third of the total flow sensed by meter 4. Notice that the common systematic source of uncertainty for meters 1, 2, and 3 was given as 1.5 kg/h, which is exactly one-third of the common systematic source of uncertainty for meter 4. This proportionality in the systematic uncertainties of the four meters arises because the systematic uncertainty of the common standard used for all four meters is expressed as a percent of reading.
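Under the same perfect-correlation assumption (b_ik = b_i·b_k), having all four meters share one percent-of-reading standard makes every pair correlated, and the positive products among meters 1, 2, and 3 are exactly offset by the negative products with meter 4. A numerical sketch, again taking z = m4 − (m1 + m2 + m3):

```python
import math
from itertools import combinations

thetas = [-1.0, -1.0, -1.0, 1.0]   # z = m4 - (m1 + m2 + m3)
b = [1.5, 1.5, 1.5, 4.5]           # kg/h; proportional because the shared
                                   # standard is specified as % of reading

bz2 = sum(t * t * bi * bi for t, bi in zip(thetas, b))
for i, k in combinations(range(4), 2):
    bz2 += 2 * thetas[i] * thetas[k] * b[i] * b[k]  # every pair correlated

bz = math.sqrt(bz2)
```

Under these assumptions the correlated systematic errors cancel completely in z, since the error of the common standard propagates into m4 with the same magnitude as into the sum m1 + m2 + m3.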

Case 4: Flowmeters 1, 2, 3, and 4 Are Calibrated Against the Same Standard That Has an Uncertainty Expressed as Percent of Full Scale. In this case, each of the four flowmeters is calibrated against the same standard; however, the systematic standard uncertainty from the standard is a fixed value of ±4.5 kg/h across all flow rates. This implies that the calibration standard systematic standard uncertainty is expressed as a percent of full scale.
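With perfect correlation across all four meters, the pairwise combination collapses algebraically to b_z = |Σ θ_i·b_i|. A quick check with the fixed ±4.5 kg/h value and the assumed z = m4 − (m1 + m2 + m3) shows that the cancellation is now only partial:

```python
thetas = [-1.0, -1.0, -1.0, 1.0]   # z = m4 - (m1 + m2 + m3)
b = [4.5, 4.5, 4.5, 4.5]           # kg/h, the same fixed value at every flow

# fully correlated case: b_z = |sum(theta_i * b_i)|
bz = abs(sum(t * bi for t, bi in zip(thetas, b)))
```

Because the fixed full-scale value no longer scales with each meter's reading, the errors in m1 + m2 + m3 overshoot the error in m4 and a net systematic uncertainty remains.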

The use of these equations is illustrated by example in subsection 10-2.

8-2 NONSYMMETRIC SYSTEMATIC UNCERTAINTY