The Geek's way of checking what the outside weather is like
Prologue
When I bought my house in 2004 I went shopping for an outside thermometer - and ended up with a full weather-station instead (a WS2300). When I unpacked it I found a serial cable inside...
Long story short - I was still in the process of recabling the house (running ethernet to every room) and added a serial cable from the machine room to the WS2300, and then did some pkgsrc work and got misc/open2300 and misc/open2300-mysql. I used those to log the data from the weather-station to a mysql database, and later moved that (via misc/open2300-pgsql) to a postgres database.
Now sometime this year the machine running that database had to be replaced (should have done that earlier, it was power hungry and wasteful). The replacement was an aarch64 SoC (a Pine64 Quartz64 model A) - and it had no real com ports (of course) any more. I had experimented with USB serial adapters and the WS2300 before, but for unclear reasons this time I had no luck and couldn't get it to work. Since some of the outdoor sensors of the old weather-station had started failing, I decided to replace it.
New Weather-Station, new Sensors
I picked a WS3500 because it comes with a nice remote sensor arrangement:
I attached it to a satellite dish mount about 1.2 m above my garage and ran a two-wire cable through the mount to supply it with 3 V and get rid of any batteries. It does not have a connector for that, but the battery compartment had enough space for a 330 µF electrolytic capacitor, and soldering that and the cable directly to the battery contacts was easy.
The sensors report to the weather-station via a proprietary protocol in the 868 MHz band.
New Weather-Station, new Reporting
The weather-station can connect to a wifi network but does not offer any services itself. The app used to configure the station offers several predefined weather collection services.
I found the idea of having my local weather data logged to some server somewhere else in the cloud, only to get it back via my browser, a bit strange - but for others this is a good thing. I found this article that describes exactly such a remote-only, no machines required on-site setup. I used that article as inspiration for the data collection (but that part turned out to be quite trivial, see below) and copied a lot of the presentation site from it (also more details below).
So in my setup I created web servers on two dedicated ports of my tiny machine running the postgres server. One is used by the weather-station for reporting the data, the other is used to query the database.
The configuration of the weather-station for a custom server was easy:
I tested the ecowitt protocol first. It uses a POST to a fixed URL, and the posted form data is nearly identical to what we get with the solution I ended up with - only a few form field names are slightly different.
The blacked-out items "StationID" and "StationKey" appear verbatim in the reported data; you can set them to whatever you want - the scripts below do not check them.
The weather underground protocol does a simple http GET and provides all data as query parameters (I had to add the trailing question mark in the configuration). This makes it very easy to extract the data in a script on the server side.
But let's get there step by step. NetBSD comes with an http/https server in base, originally called "bozohttpd". It is very lightweight, but it can run various types of scripts - I picked the plain old simple CGI and /bin/sh as language, using a bit of awk to convert units.
First I added two users, so I could separate file access rights. This is how they look in vipw:
weatherupdate:*************:1004:1004::0:0:Weather Update Service:/weather/home:/sbin/nologin
weatherquery:*************:1005:1004::0:0:Weather Query Service:/weather/query:/sbin/nologin
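If you prefer not to edit the password file directly, useradd(8) can create equivalent accounts. This is just an untested sketch - the group name "weather" for gid 1004 is my assumption, the rest matches the vipw lines above:

# sketch only: create the group and the two service accounts shown above
# (-m creates the home directories; the group name "weather" is an assumption)
groupadd -g 1004 weather
useradd -m -u 1004 -g 1004 -c "Weather Update Service" -d /weather/home -s /sbin/nologin weatherupdate
useradd -m -u 1005 -g 1004 -c "Weather Query Service" -d /weather/query -s /sbin/nologin weatherquery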
and two httpd instances for them. The /etc/inetd.conf entries (one to collect the incoming data, one to answer the queries and serve the web pages) look like this:
88 stream tcp nowait:600 weatherupdate /usr/libexec/httpd httpd -q -c /weather/cgi /weather/files
89 stream tcp nowait:600 weatherquery /usr/libexec/httpd httpd -q -c /weather/cgi -M .js "text/javascript" - - /weather/files
The document root (/weather/files) would not be used for the instance on port 88, but httpd needs one.
Note that these lines use the quiet flag ("-q") which is only available in netbsd-current. You can replace it with "-s" for older versions.
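After adding these entries, the running inetd(8) needs to reread its configuration; a SIGHUP is enough:

# make inetd pick up the new entries (a restart via /etc/rc.d/inetd works as well)
pkill -HUP inetd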
The home directories of both users are mostly empty, besides a .pgpass file that contains the password for this user's connection to the postgres server. They look like this:
127.0.0.1:5432:weatherhistory:open2300:xxxxxxxxxxxxxx
where "weatherhistory" is the datebase and "open2300" is the name of the postgres user for the update script and the password is x-ed out. The other file looks very similar:
127.0.0.1:5432:weatherhistory:weatherquery:xxxxxxxxxxx
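One detail that is easy to forget: libpq only uses a .pgpass file if it is private to its owner, so make sure ownership and permissions are strict:

# .pgpass is ignored by libpq unless it is only accessible by its owner
chown weatherupdate /weather/home/.pgpass
chown weatherquery /weather/query/.pgpass
chmod 600 /weather/home/.pgpass /weather/query/.pgpass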
At the postgres level the user "weatherquery" needs to have SELECT privilege on the table "weather", and "open2300" needs to have INSERT privilege. The table schema (output of "pg_dump -s") looks like this:
--
-- Name: weather; Type: TABLE; Schema: public; Owner: weathermaster
--

CREATE TABLE public.weather (
    "timestamp" timestamp without time zone DEFAULT '1970-01-01 00:00:00'::timestamp without time zone NOT NULL,
    temp_in double precision DEFAULT '0'::double precision NOT NULL,
    temp_out double precision DEFAULT '0'::double precision NOT NULL,
    dewpoint double precision DEFAULT '0'::double precision NOT NULL,
    rel_hum_in integer DEFAULT 0 NOT NULL,
    rel_hum_out integer DEFAULT 0 NOT NULL,
    windspeed double precision DEFAULT '0'::double precision NOT NULL,
    wind_angle double precision DEFAULT '0'::double precision NOT NULL,
    wind_chill double precision DEFAULT '0'::double precision NOT NULL,
    rain_1h double precision DEFAULT '0'::double precision NOT NULL,
    rain_24h double precision DEFAULT '0'::double precision NOT NULL,
    rain_total double precision DEFAULT '0'::double precision NOT NULL,
    rel_pressure double precision DEFAULT '0'::double precision NOT NULL,
    wind_gust double precision DEFAULT 0 NOT NULL,
    light double precision DEFAULT 0 NOT NULL,
    uvi double precision DEFAULT 0 NOT NULL
);

ALTER TABLE public.weather OWNER TO weathermaster;

--
-- Name: weather weather_pkey; Type: CONSTRAINT; Schema: public; Owner: weathermaster
--

ALTER TABLE ONLY public.weather
    ADD CONSTRAINT weather_pkey PRIMARY KEY ("timestamp");

--
-- Name: TABLE weather; Type: ACL; Schema: public; Owner: weathermaster
--

GRANT INSERT ON TABLE public.weather TO open2300;
GRANT SELECT ON TABLE public.weather TO weatherquery;
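The dump does not contain the two roles themselves. Creating them could look roughly like this sketch (run as a role that is allowed to create roles, e.g. the postgres superuser; the passwords are placeholders, and the GRANTs are the ones already shown in the dump above):

psql -U postgres weatherhistory <<'EOF'
CREATE ROLE open2300 LOGIN PASSWORD 'xxxxxxxxxxxxxx';
CREATE ROLE weatherquery LOGIN PASSWORD 'xxxxxxxxxxx';
GRANT INSERT ON TABLE public.weather TO open2300;
GRANT SELECT ON TABLE public.weather TO weatherquery;
EOF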
As noted above, I carried this database over (with minor modifications) from previous instances of the whole setup - so it may not be optimal or elegant. One thing that needs special attention is the "timestamp" column - it carries the date/time in UTC but has no timezone associated with it. This looked like a natural choice, but it has some unexpected consequences: when querying data in JSON format, "timestamp" will not get the "Z" suffix that JavaScript uses to mark UTC. So in the JavaScript code in the web pages you will find quite a few places that cover up for this.
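If you set up the table from scratch (and are willing to adjust the JavaScript that currently compensates for the missing "Z"), one way around this would be to store the column with a time zone, so that row_to_json() emits an explicit offset that JavaScript's Date() parses directly. This is just a sketch of that alternative, not what my setup does:

# sketch only: convert the column, assuming the stored values are UTC
printf "
ALTER TABLE public.weather
    ALTER COLUMN \"timestamp\" TYPE timestamp with time zone
    USING \"timestamp\" AT TIME ZONE 'UTC';
" | psql "hostaddr='127.0.0.1'dbname='weatherhistory'user='weathermaster'"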
Now when the weather station sends data to the configured server, inetd(8) runs httpd(8), which invokes the shell script /weather/cgi/update.cgi as the "weatherupdate" user. This script uses awk(1) to do a few unit conversions and output a SQL command to insert the data into the "weather" table. This SQL command is then piped to psql(1) with the connection string passed on the command line. The corresponding password is found in ~/.pgpass of the "weatherupdate" user. The script looks like this:
#! /bin/sh

# Invoked by httpd(8) as a CGI; the station's report arrives as "field=value"
# pairs in QUERY_STRING (weather underground protocol).
TZ=UTC; export TZ

# Turn the query string into "-v field=value" arguments for awk, convert the
# imperial units to metric and emit one INSERT statement for the new record.
awk -v $( echo "$QUERY_STRING" | sed 's/\&/ -v /g' ) 'BEGIN {
	temp=(tempf-32)/1.8;
	indoortemp=(indoortempf-32)/1.8;
	dewpt=(dewptf-32)/1.8;
	windchill=(windchillf-32)/1.8;
	windspeed=windspeedmph*1.609344;
	windgust=windgustmph*1.609344;
	rain=rainin*25.4;
	dailyrain=dailyrainin*25.4;
	totalrain=totalrainin*25.4;
	rel_pressure=baromin/0.029529980164712;
	printf("INSERT INTO weather VALUES ('"'"'%s'"'"', %f, %f, %f, %d, %d, %f, %d, %f, %f, %f, %f, %f, %f, %f, %f);\n",
	    strftime("%F %T"), indoortemp, temp, dewpt, indoorhumidity, humidity,
	    windspeed, winddir, windchill, rain, dailyrain, totalrain, rel_pressure,
	    windgust, solarradiation, UV);
}' | psql "hostaddr='127.0.0.1'dbname='weatherhistory'user='open2300'" > /dev/null 2>&1
Note that it explicitly sets the timezone to UTC. The input data comes (as defined by CGI) via the QUERY_STRING environment variable, as a set of "field=value" items, separated by &. They are converted to sets of "-v" args for the awk invocation via a simple sed script.
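To illustrate the mechanism, here is what that conversion does to a shortened, made-up query string (the field names are the ones the station sends with the weather underground protocol, the values are invented):

#! /bin/sh
# a made-up, shortened example of what arrives in QUERY_STRING
QUERY_STRING="tempf=68.2&humidity=54&baromin=29.92&windspeedmph=3.4"
# the sed from update.cgi turns the '&' separators into "-v" options:
echo "$QUERY_STRING" | sed 's/\&/ -v /g'
# prints: tempf=68.2 -v humidity=54 -v baromin=29.92 -v windspeedmph=3.4
# so awk effectively gets called as:
#   awk -v tempf=68.2 -v humidity=54 -v baromin=29.92 -v windspeedmph=3.4 '...'

The same trick makes it easy to test update.cgi by hand: set QUERY_STRING (with all the fields the awk program expects) in the environment and run the script as the "weatherupdate" user.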
With this in place, the weather-station adds a record every five minutes to the database, and it was fun to check it via SQL, but for reasons not quite clear to me most of the rest of the family did not like that kind of access very much.
psql (14.5)
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
Type "help" for help.

weatherhistory=> select min(temp_out), max(temp_out) from weather;
  min  | max
-------+------
 -18.1 | 80.9
(1 row)
I initially thought the 80.9°C were measured while I was soldering the power cable, but apparently they were fallout from the sometimes failing sensors of the old station. The database has 2840 rows with temp_out > 40°C and all of them are 80.something. I should replace them with an average of the neighbor records.
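Cleaning that up is a small exercise in window functions. Something like the following sketch (untested, and only sensible where the neighboring records are themselves plausible) should do it; it has to run as a role that may UPDATE the table, e.g. the owner "weathermaster":

#! /bin/sh
# sketch: overwrite implausible outdoor temperatures with the mean of the
# neighboring records (by timestamp order)
printf '
UPDATE weather w
   SET temp_out = n.neighbor_avg
  FROM ( SELECT "timestamp",
                ( lag(temp_out)  OVER (ORDER BY "timestamp") +
                  lead(temp_out) OVER (ORDER BY "timestamp") ) / 2 AS neighbor_avg
           FROM weather ) n
 WHERE w."timestamp" = n."timestamp" AND w.temp_out > 40;
' | psql "hostaddr='127.0.0.1'dbname='weatherhistory'user='weathermaster'"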
Presenting the data
So I needed an internal web site, which needs access to the data.
The above setup already paved the way for that, via the second port I set up.
I wanted to show all the current data in one page, and variable history data on another - which meant two CGI scripts to query the data.
The /weather/cgi/latest.cgi script just fetches the last record logged and creates a JSON from it, and also uses pom(6) and the sunwait(1) program from pkgsrc to supply some site and date specific data:
#! /bin/sh

PATH=/usr/games:/usr/pkg/bin:$PATH

GEOPOS="51.505554N 0.075278W"	# geographic position of this weather station
UPDATE=300			# seconds between updates

# This script uses psql(1) from pkgsrc/databases/postgresql14-client,
# pom(6) from the NetBSD games set and pkgsrc/misc/sunwait.

# collect global site data: sunrise and friends
eval $( sunwait report ${GEOPOS} | awk -F": " '
	/Sun directly north/	{ printf("zenith=\"%s\"\n", $2); }
	/Daylight:/		{ split($2,v," to "); printf("sunrise=\"%s\"\nsunset=\"%s\"\n", v[1], v[2]); }
	/with Civil twilight:/	{ split($2,v," to "); printf("dawn=\"%s\"\ndusk=\"%s\"\n", v[1], v[2]); }
	/It is: Day/		{ printf("day=true\n"); }
	/It is: Night/		{ printf("day=false\n"); }
' )

# moon phase
eval $( pom | awk '-F(' '
	/The Moon is Full/	{ printf("moontrend=\"-\"\nmoon=100\n"); }
	/The Moon is New/	{ printf("moontrend=\"+\"\nmoon=0\n"); }
	/First Quarter/		{ printf("moontrend=\"+\"\nmoon=50\n"); }
	/Last Quarter/		{ printf("moontrend=\"-\"\nmoon=50\n"); }
	/Waxing/		{ a=$0; sub(/^.*\(/, "", a); sub(/%.*$/, "", a); printf("moontrend=\"+\"\nmoon=%d\n", a+0); }
	/Waning/		{ a=$0; sub(/^.*\(/, "", a); sub(/%.*$/, "", a); printf("moontrend=\"-\"\nmoon=%d\n", a+0); }
' )

# start the json output
printf "\n\n{ \"site\": { \"updates\": ${UPDATE}, \"dawn\": \"${dawn}\", \"sunrise\": \"${sunrise}\", \"zenith\": \"${zenith}\", \"day\": ${day}, \"sunset\": \"${sunset}\", \"dusk\": \"${dusk}\", \"moon\": { \"trend\": \"${moontrend}\", \"percent\": ${moon} }\n}, \"weather\":\n"

# fill database results
printf "WITH t AS ( SELECT * FROM weather ORDER BY timestamp DESC LIMIT 1 ) SELECT row_to_json(t) FROM t;\n" \
	| psql --tuples-only --no-align "hostaddr='127.0.0.1'dbname='weatherhistory'user='weatherquery'"

# terminate json
printf "\n}\n"
As you can see, if you restricted the output to plain data from the database, the script would be only four or five lines long. But I like the additional spicing.
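In case the extra tools are not installed yet: pom(6) comes with the NetBSD games set, and the other two can be built from pkgsrc using the categories named in the script's comment:

# build the helpers used by latest.cgi from pkgsrc
cd /usr/pkgsrc/misc/sunwait && make install clean
cd /usr/pkgsrc/databases/postgresql14-client && make install clean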
The /weather/cgi/history.cgi script fetches rows between two timestamps passed to it (in JSON timestamp format) and answers with a JSON containing an array of all the data in the requested time window:
#! /bin/sh

COND=$( echo "${QUERY_STRING}" | tr '&' '\n' | sed -e 's/%22/\"/g' -e 's/%3A/:/g' | awk '
	/from=/	{ v=$0; sub(/^[^"]*\"/, "", v); sub(/\".*$/, "", v); arg_from=v; }
	/to=/	{ v=$0; sub(/^[^"]*\"/, "", v); sub(/\".*$/, "", v); arg_to=v; }
	END {
		if (arg_from && arg_to) {
			printf("timestamp >= '"'"'%s'"'"' AND timestamp <= '"'"'%s'"'"'\n",
			    arg_from, arg_to);
		}
	}
' )

if [ -z "${COND}" ]; then
	# printf "could not parse: ${QUERY_STRING}\n" >> /tmp/sql.log
	exit 0;
fi

# start output
printf "\n\n"
# printf "${COND}\n" >> /tmp/sql.log

# fill database results
printf "WITH t AS ( SELECT * FROM weather WHERE ${COND} ORDER by timestamp ASC ) SELECT json_agg(t) FROM t;\n" \
	| psql --tuples-only --no-align "hostaddr='127.0.0.1'dbname='weatherhistory'user='weatherquery'"
# 2&>> /tmp/sql.err
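A quick way to test this CGI without any JavaScript is to fetch it with ftp(1) from the NetBSD base system (or curl, if you prefer). The host name and port are the ones from my setup, the dates are just examples, and %22 is the URL-encoded double quote the script expects around the timestamps:

# fetch one day of history by hand and print the JSON answer to stdout
ftp -V -o - 'http://weatherhost.duskware.de:89/cgi-bin/history.cgi?from=%222023-05-01T00:00:00.000Z%22&to=%222023-05-02T00:00:00.000Z%22'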
Fetching this data now is easy in JavaScript.
We have a request URL defined as a const, like this:
const queryURL = 'http://weatherhost.duskware.de:89/cgi-bin/history.cgi?';
and then add (if needed) the parameters for the query, like in this example function that gets passed a from-date and a to-date:
function showData(fromD, toD) {
    var url = new URL(queryURL);
    url.searchParams.append("from", '"'+fromD.toJSON()+'"');
    url.searchParams.append("to", '"'+toD.toJSON()+'"');
    fetch(url).then(function(response) {
        return response.json();
    }).then(function(data) {
        makeGraphs(data);
        updateButtons();
    }).catch(function(error) {
        console.error(error)
    });
}
When the answer from the server arrives, it is decoded as JSON and passed as input data to the next function, which makes some graphs from the data array. Finally a few buttons are updated (in this example the time window is put into a start and an end date control).
Inspired by the post mentioned above I used canvas gauges for the display of the latest data and dygraphs for the display of historic data.
Here is an example of how the latest display looks:
And here is how the history display looks:
I have put an archive of the cgi scripts and web pages here, and also, for the curious who just want to peek at the full glory of my web design skills, the start page (showing the latest weather data) and the history page.
Besides those files, you will need to
- add a /weather/files/favicon.ico if you like,
- download gauge.min.js from canvas gauges and put it into /weather/files/,
- download dygraph.css and dygraph.min.js from dygraphs, plus synchronizer.js from the dygraphs extras/ directory, and put them into /weather/files/ as well.
Then you should be ready to go - easy, isn't it? And no heavyweight dependencies or pkgs needed.
What about other weather stations?
There are quite a few similar weather stations out there now that seem to run "related" firmware and have similar capabilities. Most likely the update script (and details in the presentation pages) will need adjustments for other types.
If you start with a different device, just log all the data it sends and adjust the cgi scripts/database/JavaScript accordingly. For protocol analysis there are several easy options:
- Remove the "-q" flag in the httpd command (in
/etc/inetd.conf
) and check/var/log/xferlog
for the quey paramaters sent by the weather station (when using the weather underground protocol). - Make the station log to a
debug.cgi
first to capture everything (including form data posted). This works for the ecowitt protocoll. - All this stations seem to use http only (not https), so you can sniff the traffic. Use
tcpdump -w
on the server to capture the data and analyze it with net/wireshark from pkgsrc.
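For the sniffing variant, something like this captures the station's reports on the update port for later inspection in wireshark (port 88 as configured above; replace the interface name with whatever your server actually uses):

# capture the station's http reports; "re0" is just an example interface name
tcpdump -n -i re0 -w /tmp/weather.pcap tcp port 88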
Here is what a debug.cgi script could look like:
#! /bin/sh

env > /tmp/debug.env
printf "\n\nOK\n"
cat > /tmp/debug.input &
This allows you to see the form input in /tmp/debug.input and the CGI environment in /tmp/debug.env.