The outdoor temperature sensor was calibrated by comparing it to a thermometer with a NIST-traceable calibration. The reference thermometer was an ERTCO ASTM 63F mercury total-immersion thermometer, serial number 6374. It can be used to measure temperatures between 18 and 89 deg F and is marked at 0.2 deg F intervals.
Since the modified temperature/humidity sensor could not be immersed in liquid, the calibration was done in air. A crude environmental chamber was constructed to help maintain set temperatures during calibration. The chamber was made by modifying an Avanti EWC12 thermoelectric wine chiller, which uses a Peltier module to chill the inside air to between 51 and 62 deg F. The main modification was the rewiring of the thermostat circuitry. Because of the lag involved in using a Peltier module to cool such a large volume of air, the modification bypassed the internal thermistor so that the current to the Peltier was set by a simple resistor network. For a given resistance setting the unit would thus reach a constant temperature after several hours, without short-term cycling of the cooling unit. The resistance was set by an externally mounted potentiometer. This modification also allowed temperatures higher than 62 deg F, but lower than room temperature, to be set. The only other modification was the drilling of holes in the top of the unit so that temperature probes could be inserted.
The thermometer was mounted to two of the five wire shelves supplied with the unit. At the bulb end the thermometer rested in a loose-weave cloth bag sewn to the bottom shelf, as shown in the next photo. The top of the thermometer was held to the other shelf by a simple rubber-band sling, shown at the bottom of the second photo. This arrangement kept the thermometer vertical without placing it under any strain.
The final configuration used for the calibration is shown in the photo below. The yellow thermometer can be seen through the glass door of the chamber. The Lacrosse temperature/humidity sensor is standing at the bottom of the thermometer.
Initial tests showed the chamber had a temperature gradient across the height of the mercury column of approximately 2 deg F. Applying standard stem corrections showed this would result in a temperature error of only 0.006 deg F. Even so, the goal was a more uniform temperature, so two other modifications were made to reduce temperature gradients in the vicinity of the thermometer and test article. First, a right-angle-shaped piece of styrofoam as tall as the inside of the cooler was mounted to the shelves to act as a "wind break" for the air being expelled by the cooling fan near the top of the mercury column. This break prevented the fan from blowing directly on the thermometer, creating a separated flow region over the thermometer's length. A 60 mm computer cooling fan was then mounted to the top rack just behind the wind break, pointing toward the bottom of the cooler. This fan mixed the air in the separated region, producing a column of air with a more uniform temperature distribution. Both of these modifications are visible in the earlier photo of the thermometer sling. With these changes the vertical temperature gradient was reduced to approximately 0.5 deg F. To assess horizontal gradients, a RadioShack pool/spa probe was placed at several locations in the vicinity of the thermometer bulb. No temperature gradients were measured to the 0.1 deg F resolution of the unit. Finally, calibrations were done with the Lacrosse sensor in several locations close to the thermometer bulb. All of the calibration points fell on a single line independent of location, further showing that significant gradients did not exist.
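The stem-correction estimate above can be reproduced with the standard mercury-in-glass relation, correction = k × (column span in scale degrees) × (mean temperature difference). This is a sketch, not the author's exact calculation: the coefficient k = 0.00009 per deg F is the standard value for mercury in glass, and the assumed column span of about 33 scale degrees is illustrative, not a value from the write-up.

```python
# Rough check of the stem-correction estimate (a sketch, not the author's
# exact calculation). Assumptions: differential-expansion coefficient
# k = 0.00009 per deg F for mercury in glass, and a mercury column spanning
# about 33 scale degrees (illustrative; the actual span was not recorded).
K_MERCURY_GLASS = 0.00009   # per deg F, standard mercury-in-glass value
column_span_degF = 33.0     # assumed length of the column in scale degrees
mean_gradient_degF = 2.0    # measured top-to-bottom gradient in the chamber

correction = K_MERCURY_GLASS * column_span_degF * mean_gradient_degF
print(f"stem correction = {correction:.3f} deg F")
```

With these assumed numbers the correction works out to roughly 0.006 deg F, matching the figure quoted above.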
Since the wine cooler was not able to chill the air below about 50 deg F, a different method was used to obtain a calibration at lower temperatures. The RadioShack pool/spa sensor mentioned above was calibrated in the same manner as the Lacrosse instrument, and also by immersion in an ice bath. It was then used as a "standard" for comparison with the Lacrosse instrument when placed in the refrigerator and freezer. This yielded several calibration points near 40 and 0 deg F respectively.
The results of the temperature calibration are shown in the graph below. Here the calibration has been applied to the measured temperature values, shown by the symbols, which are then compared to the exact result shown by the line. The good linearity of the temperature sensor is apparent. The slope error is slightly more than 1%.
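A linear calibration of this sort amounts to a least-squares straight-line fit that is then inverted to correct raw readings. The sketch below uses synthetic reading pairs with a built-in 1.2% slope error purely for illustration; the actual calibration data appear only in the graph and are not reproduced here.

```python
import numpy as np

# Synthetic calibration data (illustrative only, not the actual readings):
# reference temperatures in deg F, and sensor readings generated with a
# deliberate ~1% slope error plus a fixed offset.
reference = np.array([33.0, 40.0, 48.0, 55.0, 62.0, 70.0])
sensor = 1.012 * reference + 0.5

# Fit sensor = slope * reference + offset, then invert to correct readings.
slope, offset = np.polyfit(reference, sensor, 1)

def correct(reading):
    """Map a raw sensor reading back to the reference scale."""
    return (reading - offset) / slope

print(f"slope = {slope:.4f} (slope error = {abs(slope - 1) * 100:.1f}%)")
```

Applying `correct()` to each raw reading produces the calibrated values plotted as symbols in graphs like the one above.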
The final graph shows the scatter in the measurements after the calibration was applied. For all the test points the largest error was -0.7 deg F with all the other points falling within +/- 0.5 deg F of the actual value.
The setup for this calibration is shown below. The circuit board was sealed in a Pelican 1030 Micro Case. This polypropylene case has inside dimensions of 6.37"x2.62"x2.06". The case contains a rubber liner that also seals the lid to the body, making the case airtight. The pressure relief valve was removed to allow probes to be inserted into the closed case. For the humidity calibration this opening was sealed with cellophane tape.
The salt solution was held in a 1/2 cup plastic measuring cup. Since the time required for the relative humidity to reach equilibrium is a function of the air volume in the container, most of the open volume was filled with pieces of 1/2" thick Dow "Ethafoam" closed-cell static-dissipative foam. Closed-cell foam was chosen to reduce air infiltration into the foam. Static-dissipative foam was chosen to prevent ESD damage to the exposed circuit board while not shorting the circuits the way conductive foam could. The humidity sensor is visible in the photo as the small black rectangle in the lower left corner of the circuit board. The orientation of the board and salt container was chosen such that the sensor was directly over the salt solution. The humidity data was transmitted wirelessly to the base station, where the values were read by the computer over the serial port.
Although the saturated salts produce relative humidities that are a weak function of temperature, differences in temperature between the salt solution and the air above it can lead to relatively large errors. The foam used to fill the voids helps to insulate the chamber from outside temperature fluctuations. To reduce them further, the test article was wrapped in towels during the tests.
Using the above method, data was obtained using the seven salts listed in the table below. In addition, distilled water was used to obtain a point at 100% relative humidity. The second column in the table shows the theoretical relative humidity produced by the salt mixture and the third column shows the value as measured by the humidity sensor. The theoretical values have all been adjusted for the measured test temperature at each point.
[Table: theoretical and measured relative humidity (%) for each of the seven salts and distilled water]
The test time with each salt varied depending on how long it took the humidity in the case to reach equilibrium. The lowest and highest humidity values took the longest to obtain. Test times ranged from 25 to 300 hours.
A second-order polynomial was used to define the correction to the humidity. This form was chosen as it is the form suggested by the manufacturers of similar humidity sensors, and the result is visually more satisfying than a linear fit. The plot below shows the effect of the calibration. The symbols represent the measured values with the calibration applied. The line represents a perfect reading of measured value = actual value.
The percent error in the calibrated results is shown in the last plot. Here "percent error" is defined as (measured humidity - actual humidity) / actual humidity * 100.
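The quadratic correction and the percent-error measure can be sketched as follows. The (sensor reading, actual humidity) pairs below are invented stand-ins for the salt-solution data; they are not the values from the table.

```python
import numpy as np

# Hypothetical (sensor reading, actual RH) pairs in percent -- illustrative
# stand-ins for the saturated-salt data, not the values from the table.
measured = np.array([14.0, 25.0, 38.0, 55.0, 70.0, 83.0, 92.0, 97.0])
actual = np.array([11.3, 23.1, 33.1, 52.9, 68.9, 84.3, 94.6, 100.0])

# Second-order polynomial mapping raw readings to corrected humidity,
# as suggested by manufacturers of similar humidity sensors.
coeffs = np.polyfit(measured, actual, 2)
corrected = np.polyval(coeffs, measured)

# Percent error as defined in the text:
# (measured - actual) / actual * 100, applied to the corrected readings.
percent_error = (corrected - actual) / actual * 100.0
print(np.round(percent_error, 2))
```

Once the three coefficients are known, the same `np.polyval` call corrects any subsequent raw reading from the sensor.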
The default setting for the instrument is 0.0204" of rain per tip. This value was checked by slowly dripping a known amount of distilled water into the device to measure the quantity each tip corresponds to. The test setup is shown in the next photo. The rain gauge was located under the tip of the buret, offset slightly so the water would not drip directly into the opening over the tipping buckets. The gauge was placed inside a water-tight box to catch the water that is dumped out the bottom of the gauge with each tip.
Distilled water was used for the test under the assumption it would have a density similar to rain water. The water was released from a 50 ml Class A buret to measure the amount required to tip the bucket. As the bucket neared the tip point, the rate of release was reduced to approximately one drop every 5 seconds. This prevented an extra drop from entering an already full bucket in the finite time required for it to tip to the next position. This was continued until most of the water had been drained from the buret and an even number of tips had occurred. An even number was chosen so both sides of the see-saw would be tested an equal number of times, minimizing the effects of asymmetries between the buckets. A total of three tests were made. In each case the buckets tipped 14 times before the water level dropped too low to read. The average of the three runs yielded a value of 0.0196 inches of rain per tip, only 4% different from the stated value. The measured rainfall amounts are thus reduced by the ratio 0.0196/0.0204.
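Applying the measured tip calibration is a one-line scaling of the reported total. The helper below is a sketch of that correction; the 14-tip example total is taken from the test description above.

```python
# Rescale rainfall reported at the instrument's default 0.0204 in/tip to the
# measured 0.0196 in/tip (a sketch of the correction described above).
STATED_IN_PER_TIP = 0.0204    # instrument default setting
MEASURED_IN_PER_TIP = 0.0196  # average of the three buret tests

def corrected_rainfall(reported_inches):
    """Rescale a reported rainfall total by the calibration ratio."""
    return reported_inches * MEASURED_IN_PER_TIP / STATED_IN_PER_TIP

# Example: 14 tips reported at the default setting.
reported = 14 * STATED_IN_PER_TIP
print(f"{reported:.4f} in reported -> {corrected_rainfall(reported):.4f} in")
```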
One problem with tipping bucket rain gauges is they can underpredict rainfall in heavy rains. This happens when the rain is entering faster than the bucket can tip and water enters a bucket that is already full but that has not fully tipped due to its inertia. No attempt was made to quantify this effect in this study.
Latest update: November 26, 2006