Sensor and I/O Drift

Calibration Procedures and Instrumental Accuracy Estimates of TAO Temperature, Relative Humidity and Radiation Measurements

H. Paul Freitag, Yue Feng, Linda J. Mangum, Michael J. McPhaden, LT Julia Neander, and Linda D. Stratton


When instrumentation was recovered in working condition it was returned to PMEL for post-deployment calibration before being reused on a future deployment. Damage to some instruments by electronic component failure, vandalism, harsh environmental conditions, loss of mooring, or seal failure prevented post-deployment calibration in some cases. When post-deployment calibrations were made, the resultant coefficients were compared to the pre-deployment coefficients in the following manner. A set of output values was computed by applying the calibration equation with pre-deployment coefficients to a set of input values. Input values were chosen so that the output values would span normal environmental conditions. A second set of output values was generated by applying the calibration equation with post-deployment coefficients to the same input values. The first set of output values was then subtracted from the second. Mean and RMS differences over the full output range and for all calibration pairs are given in Table 4. Similar statistics for I/O boards are given in Table 5. Plots of individual sensor and I/O board calibration differences are in Appendix B.

Table 4. Differences between pre- and post-deployment sensor calibration.

Table 5. Differences between pre- and post-deployment I/O board calibrations.
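The comparison procedure described above can be expressed compactly. The sketch below is illustrative only: the linear calibration equation and the coefficient values are hypothetical stand-ins for the actual sensor-specific equations and coefficients.

```python
import numpy as np

def calibration_differences(cal_eq, pre_coeffs, post_coeffs, inputs):
    """Mean and RMS of (post - pre) calibration outputs over the input range."""
    pre_out = cal_eq(inputs, pre_coeffs)
    post_out = cal_eq(inputs, post_coeffs)
    diff = post_out - pre_out            # second output set minus first, as in the text
    return diff.mean(), np.sqrt((diff ** 2).mean())

# Hypothetical linear calibration equation and coefficient pairs, for illustration
linear = lambda x, c: c[0] + c[1] * x
inputs = np.linspace(20.0, 30.0, 11)     # inputs chosen to span normal conditions
mean_d, rms_d = calibration_differences(linear, (0.00, 1.000), (0.05, 1.001), inputs)
```

In practice the mean and RMS in Tables 4 and 5 pool such differences over the full output range and over all calibration pairs for a sensor or board type.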

Air Temperature Sensor

RMS AT sensor calibration differences (Table 4) were similar for the PROTEUS and ATLAS groups, with values of 0.154°C (PROTEUS) and 0.168°C (ATLAS). These values were roughly four times larger than the RMS maximum residual for AT sensors (Table 2), indicating that the differences were significant. Mean differences were roughly four times smaller than RMS differences and of opposite sign for the two groups, indicating that there was no preferred direction for sensor drift.

Air Temperature I/O Board

PROTEUS AT I/O boards had an RMS difference between calibration pairs of 0.046°C (Table 5), about 2.4 times the RMS maximum residual of single calibrations (Table 3), indicating that the boards drifted measurably over time. The mean drift of 0.021°C was smaller than the single-bit resolution, thus it cannot be said that the boards had a preferred drift direction. Statistics for ATLAS I/O boards were larger than for PROTEUS boards, presumably because of the inclusion of the 0 V calibration point in their calibration procedure (see discussion in the AT board individual calibration section above). In fact, the RMS difference between calibration pairs, 0.113°C, was smaller than the RMS maximum residual of single calibrations, 0.136°C, indicating no significant change between calibrations. However, PROTEUS calibrations (which do not include the 0 V calibration point) should be regarded as a more accurate indication of board performance.

Sea Surface Temperature Sensor

RMS SST sensor calibration differences (Table 4) were significantly larger than the individual calibration residuals, indicating that measurable drift occurred between calibrations. RMS PROTEUS SST sensor differences (0.014°C) were almost 5 times larger than the RMS maximum residual (Table 2), while ATLAS sensor differences (0.030°C) were 10 times larger than the RMS maximum residual. Although ATLAS RMS calibration differences were twice as large as those for PROTEUS sensors, they equaled the manufacturer's specified drift of 0.03°C per year. Mean differences were roughly 4 to 7 times smaller than RMS differences, indicating that there was little or no preferred direction for sensor drift. The larger differences for ATLAS sensors may be due in part to the fact that they were deployed, on average, 1.7 times longer than PROTEUS sensors, which is roughly the same as the ratio between the ATLAS and PROTEUS RMS differences. There was a significant, yet small, correlation (r = 0.27) between the absolute drift and the time between calibrations for ATLAS SST sensors (Fig. 4). Application of the regression slope to the mean difference in deployment days would account for about one third of the 0.016°C difference. Another source of the larger drift for ATLAS sensors could be errors in the calibration coefficient database. The large number of sensors involved may increase the likelihood of a sensor ID number being entered in error, or of a sensor being modified without being noted in the database.

Fig. 4. Absolute value of ATLAS SST sensor calibration differences at 25°C vs. the number of days between calibrations. The dashed line is a least squares fit to the data. The correlation coefficient for the fit is 0.27.
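The regression underlying Fig. 4 can be sketched as a simple least-squares fit. The drift and deployment-interval values below are hypothetical; the actual data come from the ATLAS SST calibration records, and the 335-day and 181-day mean deployment lengths are those quoted for ATLAS and PROTEUS sensors in the relative humidity section.

```python
import numpy as np

# Hypothetical drift-vs-time data in the style of Fig. 4
days = np.array([200.0, 250.0, 300.0, 350.0, 400.0, 450.0])   # days between calibrations
drift = np.array([0.010, 0.030, 0.015, 0.040, 0.025, 0.050])  # |difference| at 25 deg C

slope, intercept = np.polyfit(days, drift, 1)   # least-squares line (dashed line in Fig. 4)
r = np.corrcoef(days, drift)[0, 1]              # correlation coefficient

# Drift attributable to the longer ATLAS deployments, via the regression slope:
# mean ATLAS deployment (335 days) minus mean PROTEUS deployment (181 days)
attributable = slope * (335.0 - 181.0)
```

Comparing `attributable` to the observed 0.016°C gap between ATLAS and PROTEUS RMS differences is the calculation that, with the real data, accounts for about one third of that gap.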

Sea Surface Temperature I/O Board

RMS SST I/O board calibration differences were relatively small and only 0.001°C larger than the RMS maximum residual of the individual calibrations, indicating that SST board drift, if present, was small relative to calibration uncertainty. SST board drifts were likewise small compared to SST sensor drifts.

Relative Humidity Sensor

RMS RH sensor calibration differences (Table 4) for both PROTEUS and ATLAS sensors were twice as large as their respective individual calibration residuals, indicating that measurable differences occurred between calibrations. ATLAS sensor RMS differences (4.04% RH) were about twice as large as those for PROTEUS sensors (1.77% RH). Mean differences were positive (0.94% RH) for ATLAS sensors, but negative (−0.32% RH) for PROTEUS sensors. The cause of these differences is not readily apparent. Positive differences would result if post-deployment calibrations were not allowed to equilibrate before readings were taken. PROTEUS RH sensors were calibrated with the filters removed, while ATLAS RH sensors were calibrated with the filters installed. As noted above, clogged filters could significantly increase the sensor response time. Since ATLAS RH sensors were on average deployed for relatively long periods (335 days as opposed to 181 days for PROTEUS sensors), they may have become more fouled (e.g., from sea-spray-induced salt incrustation) over the longer deployments. While large calibration differences did occur on ATLAS sensors which had been deployed longer than 300 days, these large differences were both negative and positive (Fig. 5).

Fig. 5. PROTEUS and ATLAS relative humidity sensor calibration differences at 90% RH vs. the number of days the sensors were deployed at sea.

Relative Humidity I/O Board

The RMS RH I/O board calibration difference of 0.07% RH for PROTEUS boards was less than half the RMS maximum residual for individual board calibrations, indicating that no measurable drift in RH board calibration occurred between calibrations. On the other hand, the ATLAS RH board RMS difference between calibration pairs was much larger (0.48% RH) and nearly double its RMS maximum residual for individual board calibrations. The reason for the difference between PROTEUS and ATLAS boards is unclear, but may be related to the fact that calibration coefficients for ATLAS boards include the voltage = 0 calibration point, while those for PROTEUS boards do not. Nevertheless, RH I/O board drift for both PROTEUS and ATLAS boards was an order of magnitude smaller than the drift of the RH sensors.

Shortwave Radiation Sensors

As noted above, shortwave radiation sensor calibrations were performed by the manufacturer. Eight sensors have been calibrated more than once. The RMS SWR sensor calibration difference was 13.7 W m-2 (Table 4) at 700 W m-2, implying a relative accuracy of about 2%. With the exception of one sensor, all drifts were in the same direction, resulting in a mean difference (11.8 W m-2) comparable to the RMS. The sense of the drift is such that pre-deployment calibration coefficients would underestimate radiation towards the end of the record. Eppley does not specify drift characteristics for this sensor, but informally the manufacturer suggests that in a tropical marine environment the drift could be as much as 2.5% per year (George Kirk, personal communication). Drift is the result of fading of the black lacquer coating on the sensor. Our mean drift of 1.7% (11.8 W m-2 at 700 W m-2) over a mean deployment of 157 days would exceed 2.5% per year when annualized, but this could be because the drift rate is maximum when the sensor is new and decreases as the sensor ages (ibid.).
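The annualization implied above is straightforward arithmetic on the values quoted in the text:

```python
# Annualizing the observed mean SWR drift, using the values quoted above
mean_drift_wm2 = 11.8        # mean pre/post calibration difference (W m-2)
full_scale_wm2 = 700.0       # reference irradiance for the comparison (W m-2)
mean_deploy_days = 157.0     # mean deployment length (days)

drift_pct = 100.0 * mean_drift_wm2 / full_scale_wm2    # fractional drift, ~1.7%
annual_rate = drift_pct * 365.0 / mean_deploy_days     # annualized rate, % per year
```

The annualized rate comes to roughly 3.9% per year, exceeding the manufacturer's informal 2.5% per year estimate, consistent with the discussion of a drift rate that is largest when the sensor is new.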

Shortwave Radiation I/O Board

The RMS SWR I/O board calibration difference was 7.7 W m-2, four times the individual calibration RMS maximum residual, indicating that measurable differences occurred between calibrations. It should be noted, though, that the SWR board measures relatively small voltages (order 10 µV resolution, 10 mV full scale). The RMS calibration difference of 7.7 W m-2 is roughly 6 bits, or 60 µV, which may be beyond the accuracy of the PMEL test equipment as presently used. At this level of accuracy it would be necessary to calibrate the PMEL voltage sources against a more accurate standard on a periodic basis and to monitor their output during each board calibration.
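The conversion from W m-2 to counts and volts can be made explicit. The per-bit resolution is from the text; the pyranometer sensitivity of 8 µV per W m-2 is an assumed, plausible value used only to show the arithmetic.

```python
# Converting the SWR board's RMS calibration difference into counts and volts.
sensitivity_uv_per_wm2 = 8.0    # assumed sensor sensitivity (uV per W m-2)
resolution_uv_per_bit = 10.0    # board resolution (uV per bit, from the text)

rms_diff_wm2 = 7.7
rms_diff_uv = rms_diff_wm2 * sensitivity_uv_per_wm2    # roughly 60 uV
rms_diff_bits = rms_diff_uv / resolution_uv_per_bit    # roughly 6 bits
```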

Subsurface Temperature Sensors

Subsurface temperature (SBT) sensor calibration differences (Table 4) are significantly larger than individual calibration uncertainties, indicating that measurable drift has occurred between calibrations. The RMS difference of 0.094°C was 3 times that for the ATLAS SST sensor and 3 times the manufacturer's specified annual drift. Some of the difference could be due to outliers (Appendix B) in the calibration database. Additionally, since the SBT sensors and I/O boards are calibrated as a unit, some of the increased drift could be due to the I/O boards. While the SBT V/F converters are the same ones used in the SST I/O boards (which exhibited RMS differences of 0.005°C), they are used in a different fashion: they are physically mounted differently, exposed to pressures of up to 500 dbar, and their output is converted with different logic. The SBT circuitry includes a precision Vishay resistor which is read periodically to check circuit stability. Preliminary evaluation of the Vishay records indicates that measured resistances begin to drift a few weeks after deployment, rising typically to (a temperature equivalent of) 0.1°C, but on occasion to as large as 0.3°C. These drifts decrease on recovery, but can be as large as 0.1°C at the time of post-deployment calibration. This apparent drift in the SBT V/F boards has little effect on first-deployment data quality, as the drift in Vishay resistance is used to correct the output SBT values in the TAO database. Vishay resistances were not used during calibration, however, and the drift therefore does affect calibration differences. Thus the apparent SBT drift of 0.094°C in Table 4 is probably too large. Second-deployment (and later) data would be adversely affected by this overestimation of drift, as their calibration coefficients would include it.
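The Vishay-based correction described above can be sketched as follows. This is a minimal illustration, not the TAO database implementation: the function name, the sample record, and the use of linear interpolation between Vishay checks are all assumptions.

```python
import numpy as np

def correct_sbt(sbt_temp, sample_days, vishay_days, vishay_equiv_c):
    """Subtract circuit drift inferred from periodic Vishay resistor checks.

    The drift, expressed as an equivalent temperature, is interpolated to
    each sample time and removed from the SBT record.
    """
    drift = np.interp(sample_days, vishay_days, vishay_equiv_c)
    return sbt_temp - drift

# Hypothetical record: drift grows to a 0.10 deg C equivalent over 60 days
temps = np.array([25.00, 25.10, 25.20])
corrected = correct_sbt(temps,
                        np.array([0.0, 30.0, 60.0]),    # sample times (days)
                        np.array([0.0, 60.0]),          # Vishay check times (days)
                        np.array([0.00, 0.10]))         # equivalent drift (deg C)
```

Because no equivalent correction was applied during bench calibration, the same circuit drift appears directly in the pre/post calibration differences of Table 4.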