Obviously, this depends on what kind of meteorological station you have in mind. A fully automatic weather station measuring the basic variables (air temperature and humidity, total radiation, wind speed and direction, pressure and precipitation), with a regular data logger and GPRS communication, should be in the range of 10K - 20K €, depending on the quality of the equipment. Many factors affect the final cost. One is the quality of the sensors and accessories. Another is the height of the tower: for wind, 10 meters is the standard, and this increases the cost. A taller tower requires a fence for security; fence and tower mean civil work, and probably a couple of extra days for installation. Other factors that affect cost are the location of the site and the integration of the data into the client's system.
This depends on the temporal resolution you require and what you want to know about precipitation. If total daily precipitation is all you need and you have someone who can empty a bucket every day, then that is the best option for you; the Hellmann rain gauge is the WMO standard here. If you cannot afford to have personnel doing that daily, or you want to know more about the type of precipitation, rain intensity, etc., then you need something more sophisticated. Among automatic methods, the tipping-bucket rain gauge is the most common option. These gauges are relatively robust and accurate, but they have trouble with snow and when poorly maintained. Depending on your site or application, you might need a gravimetric or vibrating-wire rain gauge. Optical disdrometers tell you a lot about precipitation, such as the size and velocity of the drops.
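To make the tipping-bucket idea concrete, here is a minimal sketch, assuming a hypothetical gauge with 0.2 mm per tip (a common resolution, but check your model's datasheet), that turns logged tip timestamps into rainfall depth per time window:

```python
from datetime import datetime, timedelta

# Resolution per tip; 0.2 mm is common, but this depends on the gauge model.
MM_PER_TIP = 0.2

def rainfall_totals(tip_times, window=timedelta(minutes=10)):
    """Accumulate tip timestamps into rainfall depth (mm) per time window."""
    if not tip_times:
        return {}
    totals = {}
    start = min(tip_times)
    for t in sorted(tip_times):
        # Index of the window this tip falls into, relative to the first tip.
        bucket = int((t - start) / window)
        totals[bucket] = totals.get(bucket, 0.0) + MM_PER_TIP
    return totals

# Synthetic example: 12 tips within ten minutes, i.e. about 2.4 mm in window 0.
tips = [datetime(2024, 5, 1, 12, 0) + timedelta(seconds=45 * i) for i in range(12)]
print(rainfall_totals(tips))
```

Rain intensity follows directly: depth per window divided by the window length.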
If we define sampling time as the time between two consecutive samples, a correct choice is crucial to capture the phenomena you are looking for. Too short a sampling time will keep your CPU busy and consume a lot of power; too long a sampling time might filter out what you are looking for. The same applies to the averaging time, the period over which average, max, min, etc. values are calculated. These are the values that your station will store and report.
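As an illustration of the difference between the two times, here is a hypothetical sketch: raw samples taken every second are reduced to one set of average/min/max values per averaging period, which is what the logger would store and report. The 1 s / 60 s choice and the synthetic signal are invented for the example:

```python
import math
import statistics

SAMPLING_S = 1     # time between consecutive samples (example value)
AVERAGING_S = 60   # period over which avg/min/max are computed (example value)

def reduce_to_stats(samples, per_window=AVERAGING_S // SAMPLING_S):
    """Collapse raw samples into (mean, min, max) tuples, one per window."""
    out = []
    for i in range(0, len(samples), per_window):
        w = samples[i:i + per_window]
        out.append((statistics.mean(w), min(w), max(w)))
    return out

# Synthetic temperature signal: a slow trend plus short fluctuations.
raw = [15 + 0.001 * t + 0.5 * math.sin(t) for t in range(180)]
stats_1min = reduce_to_stats(raw)  # three windows from three minutes of data
```

Note that the short fluctuations survive only in the min/max columns; a longer averaging time would smooth them away entirely.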
You can always refer to WMO standards: 1.5 m for air temperature and humidity, 10 m for wind, 0.5 m for precipitation and a couple of meters for total radiation. These can vary with the specific requirements of each application.
Like any other sensor, automatic meteorological sensors suffer from drift, which is sometimes too large to be ignored. It depends on the environment they are exposed to, but manufacturers recommend calibration or replacement every two to three years. This interval can be extended with a quality assurance program that uses transfer standards in the field, although that is not calibration.
This is something you need if you want your station to live longer, your data to be reliable and your data losses minimal. Ironically, the better we do this, the stronger the impression that it is not necessary.
Your weather station tells you the past and real-time conditions. Well, sometimes it does not tell the truth. But clearly it does not predict the weather. Nevertheless, understanding the past is the best way to predict the future, isn't it?
This is the toughest environment. There are some sensors specifically designed for mountain observation, but they are not miracle workers. Power, data transmission, maintenance, etc. are things you should consider too.
Sometimes you do not have the time or money for a 100 m anemometric tower. Or you are not allowed to build one, or it is almost impossible (offshore). These sensors give you very accurate wind profiles in less than an hour. Do not forget that they measure over a larger volume of air than an anemometer does; this can be good for your application.
Definitely yes. You will have data almost online, and it will tell you right away when something goes wrong. Still, do not forget to visit the site often for preventive maintenance.
There are other factors at play here. Your sensor might suffer interference from solar radiation, heat sources or evaporation; correct exposure is crucial. Moreover, you might have errors in the data logger, in transmission or in the manipulation of the data.
Well, if you are interested in the microclimate of a roof, then yes. But if you are looking for a location representative of environmental conditions at ground level, then the answer is: NO, please!
This depends on your objectives. First you should define your prediction horizon; depending on it, certain techniques have been shown to work better than others. The availability of local data also determines the efficiency of the different techniques. Finally, the variable to be predicted and the acceptable error are important. Maybe for your application, predictions from official centers like NOAA, ECMWF or the local Met Agency are the best solution. But if you want something very specific, like the energy production of a wind farm or a solar plant, or the potential evapotranspiration of an agricultural field, then you might need a specific development. It would take long to explain fully, but if you have a long data series of the variable to be predicted, the most cost-effective approach is probably statistical forecasting. If you know very little about the behaviour of that variable, then you should perhaps consider physical downscaling.
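As a toy example of the statistical route, the sketch below fits a one-step AR(1) model by least squares to a short, invented series of daily means and compares its forecast with simple persistence (tomorrow equals today). Real systems use far longer series and much richer models and predictors:

```python
def ar1_fit(series):
    """Estimate y[t+1] = a * y[t] + b by ordinary least squares."""
    x = series[:-1]
    y = series[1:]
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    a = cov / var
    b = my - a * mx
    return a, b

def ar1_forecast(series):
    """One-step-ahead forecast from the fitted AR(1) model."""
    a, b = ar1_fit(series)
    return a * series[-1] + b

# Invented daily-mean temperatures; persistence is the naive baseline.
history = [12.0, 13.1, 12.7, 14.0, 13.5, 13.9, 14.2, 13.8]
print(ar1_forecast(history), "vs persistence:", history[-1])
```

If such a trivial model cannot beat persistence on your own validation data, more data or a different technique is needed before anything operational.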
It is normally assumed that the data coming out of a 20K € met station is right and represents the real state of the atmosphere at a certain time and location. Unfortunately, this is not always true. Obviously, the better the station has been designed and installed, the more representative its data is. But sometimes situations arise that make the data unrealistic, or simply wrong. Sometimes this "wrong" data is more or less random; sometimes it is not, and it biases the climatology. Sometimes it can be fixed and prevented; sometimes it is impossible to correct. Data validation is a more scientific name for a quality assurance procedure applied to data. Some automatic algorithms help, but in our experience an expert decision is still necessary.
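As an illustration of what such automatic algorithms do, here is a minimal sketch of three classic checks (gross range, step and persistence). All thresholds are invented for the example and must be tuned per variable, site and climatology; flagged values still go to an expert, they are not deleted:

```python
def validate(values, lo=-40.0, hi=60.0, max_step=5.0, flat_n=6):
    """Flag each sample as 'ok' or with the name of the failed check."""
    flags = []
    for i, v in enumerate(values):
        if not (lo <= v <= hi):
            flags.append("range")          # physically implausible value
        elif i > 0 and abs(v - values[i - 1]) > max_step:
            flags.append("step")           # jump too large between samples
        elif i >= flat_n and len(set(values[i - flat_n:i + 1])) == 1:
            flags.append("flatline")       # stuck-sensor suspicion
        else:
            flags.append("ok")
    return flags

temps = [12.1, 12.3, 25.0, 12.4, 99.9, 12.5]
print(validate(temps))  # -> ['ok', 'ok', 'step', 'step', 'range', 'step']
```

Note how a single spike triggers two step flags (on the way up and back down): exactly the kind of ambiguity where an expert decision is needed.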
Calibration is a complex issue. Field calibration has the advantage of not interrupting data collection much if you want to keep the same sensor, but it is not recommended. In principle, only accredited institutions can perform official sensor calibration, in their laboratories. If you do not need an official calibration and for some reason want field calibration, we can bring transfer standards to the field and perform a parallel measurement that gives some information about the performance of your sensors. Whether or not you use that data to change the calibration of your sensor is not an easy decision. In principle, calibration should cover all possible situations (winter, summer, rain, snow, high/low wind, etc.) so that the parallel data is representative of all possible interferences. We recommend parallel measurement with the new sensors, keeping the old ones running as long as possible.
We recommend parallel point measurements with the new sensors during maintenance, so that after some decades you have good data for drift detection. This may help you correct your long-term drift.
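As a sketch of the idea, with invented numbers: paired old/new readings from the overlap period give a mean bias, which can then be removed from the old series. Real drift corrections are usually condition-dependent, not a single constant:

```python
def mean_bias(old, new):
    """Average difference old - new over paired parallel readings;
    a crude estimate of the accumulated drift of the old sensor."""
    assert old and len(old) == len(new)
    return sum(o - n for o, n in zip(old, new)) / len(old)

def correct(value, bias):
    """Remove the estimated drift from an old-sensor reading."""
    return value - bias

# Invented overlap-period data: the old sensor reads about 0.4 °C high.
old_sensor = [20.4, 18.9, 21.1, 19.7]
reference  = [20.0, 18.5, 20.8, 19.2]
bias = mean_bias(old_sensor, reference)  # positive -> old sensor reads high
```

Keeping the overlap long (covering seasons and weather types, as noted above) is what makes such an estimate trustworthy.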
If you have only one monitoring station, you can manage with the manufacturer's software or regular spreadsheet software. If you want your data plugged into an application or web page, or sent to users, then you might need an ad-hoc development. If you run a network, then having your own development is the best option: backups, inventory management, documentation, calibration dates, maintenance control and many other things are better managed with ad-hoc software. We recommend open-source developments and Linux servers.