Wednesday, October 29, 2014

First steps with Node-RED

In my ongoing quest to find a good home automation software framework to run the hardware I built, I discovered Node-RED.  Node-RED is the self-professed "visual tool for wiring the Internet of Things".  Sounds promising.

Node-RED is a graphical programming environment based on Node.js.  (Node.js is basically server-side JavaScript.)  Since it's built on Node.js, Node-RED can run anywhere Node.js runs.  It's all open source, and was started by IBM.



The interface to Node-RED runs completely in a web browser, so there's no need to install anything on your development machine, and it is even usable from a mobile browser.  It works by connecting wires between different functional nodes, kind of like the picture above.  There are lots of different nodes that do various things, from interfacing with a WeMo light switch to retrieving email from a Gmail account.

There is no built-in UI in Node-RED, but I think that might actually be an advantage.  It has numerous ways of connecting to a custom UI, and this way, users aren't restricted to a default UI that might not suit their application.

I got interested in Node-RED because I saw several different people using it to do home automation with JeeNodes.

Installation
I mostly followed the instructions here to install Node-RED on an old laptop running Xubuntu.

First I installed the Node.js package manager and Node.js itself:
sudo apt-get install npm
sudo apt-get install node

Then I downloaded the latest release zip file from nodered.org, unzipped it to ~/dev, cd'ed into that directory, and ran 'npm install --production'.

Next up I launched Node-RED by running 'nodejs red.js'.  Running 'node red.js' doesn't work since the Debian packages install the binary as nodejs to avoid a naming conflict.  Eventually I ended up symlinking node to nodejs to make things easier:
sudo ln -s /usr/bin/nodejs /usr/bin/node

And with that Node-RED seems to be up and running.

Node-RED ships with a pretty good set of basic nodes, but there is also a git repo with even more nodes, so I thought I'd try that out.  I installed the extra nodes by running 'cd nodes; git clone https://github.com/node-red/node-red-nodes.git'.

It should be noted that many of these additional nodes rely on Node.js libraries, and until those libraries are installed, the new node will not appear in the Node-RED palette.  As an example, if I want to use the suncalc node to generate an event at sunrise every day, I would first need to install the suncalc library by doing 'npm install suncalc'.
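For the curious, here's roughly what that library provides (a sketch assuming the standard suncalc npm API; the latitude/longitude values are just placeholders):

// what the suncalc node builds on: compute today's sun times for a location
var SunCalc = require('suncalc');

var times = SunCalc.getTimes(new Date(), 51.5, -0.1);  // date, latitude, longitude
console.log('sunrise at ' + times.sunrise);            // plain Date objects
console.log('sunset at ' + times.sunset);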

Hello World
I ran through the initial tutorial and was successful.  Programs in Node-RED are called "flows".  The basic design pattern in Node-RED is: drop and configure nodes, wire them together, hit deploy.  Pretty simple, and even better, it actually works.

Debugging is a little simplistic: you're limited to printing values to a debug log.  But again, it's simple, and it works, so I can't complain too much.

Flows can be exported as a JSON string, and imported by the same mechanism.  For the rest of this post I'll include the exported JSON for the flows I've used.

Controlling execution
The Node-RED documentation has a good "hello world" tutorial and a tutorial on creating new plugin nodes, but it's a little sparse beyond that.  I did some experiments to figure out how the nodes in Node-RED actually execute.  Here's what I learned.

First off, Node-RED is completely event-based.  Nothing executes unless an external event triggers it.  The "delay" and "trigger" nodes also allow timing-based events to be generated.

The wires between the nodes symbolize a data connection to the previous node.  More specifically, the wires are a graphical representation of how JavaScript objects get passed between nodes.  These are usually referred to as msg objects, and their contents can be somewhat arbitrary but most often contain a data member called payload.
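For example, the code inside a function node receives the incoming msg and returns whatever should be passed downstream.  A minimal sketch:

// inside a function node: msg is the incoming object
msg.topic = "demo";                    // msg can carry arbitrary properties...
msg.payload = "got: " + msg.payload;   // ...but payload is the usual data member
return msg;                            // the returned object flows to the next node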

So how can you control whether a node executes, like you can with an "if" statement in most text-based languages?  If you return null for any of the outputs, the subsequent nodes attached to that output don't execute.  A way to demonstrate that is to create a function node with multiple outputs, which means returning an array of msg objects.  Any element in the array with a value of null will prevent the connected node from executing.  By the way, this is exactly how a "switch" node works.  Check out the "demo switch" node in the example flow below.
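Here's the code inside that "demo switch" function node (pulled straight from the flow below).  It has two outputs, so it returns a two-element array; uncomment the first return line to suppress output 1:

msg.payload = "hello";
//return [null, msg];   // only "demo out 2" would fire
return [msg, msg];      // both debug nodes fire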



Here's the example flow I'll be using for the rest of the post:
[{"id":"f64d219e.53a208","type":"inject","name":"","topic":"","payload":"","payloadType":"date","repeat":"","crontab":"","once":false,"x":439,"y":454,"z":"d5a6cdcf.6203b","wires":[["5863bf74.888a3"]]},{"id":"5863bf74.888a3","type":"function","name":"demo switch","func":"msg.payload = \"hello\";\n//return [null, msg];\nreturn [msg, msg];","outputs":"2","x":628,"y":452,"z":"d5a6cdcf.6203b","wires":[["6d0fe6ba.2d2e88"],["d6a0f7c1.e949e8"]]},{"id":"6d0fe6ba.2d2e88","type":"debug","name":"demo out 1","active":true,"console":"false","complete":"false","x":821,"y":407,"z":"d5a6cdcf.6203b","wires":[]},{"id":"d6a0f7c1.e949e8","type":"debug","name":"demo out 2","active":true,"console":"false","complete":"false","x":820,"y":486,"z":"d5a6cdcf.6203b","wires":[]},{"id":"1ec662d2.3b7c25","type":"http in","name":"","url":"/test","method":"get","x":450,"y":277,"z":"d5a6cdcf.6203b","wires":[["dd8cd3d3.17b08"]]},{"id":"9c30b932.4da1e","type":"debug","name":"test URL","active":true,"console":"false","complete":"false","x":850,"y":189,"z":"d5a6cdcf.6203b","wires":[]},{"id":"6b3281a7.0a29c8","type":"http response","name":"","x":841,"y":276,"z":"d5a6cdcf.6203b","wires":[]},{"id":"dd8cd3d3.17b08","type":"function","name":"Gen response","func":"var resp = \"Hello \" + msg.payload.name;\nmsg.payload = resp;\nreturn msg;","outputs":1,"x":665,"y":276,"z":"d5a6cdcf.6203b","wires":[["6b3281a7.0a29c8","9c30b932.4da1e"]]},{"id":"7a4aeee7.85b51","type":"inject","name":"","topic":"","payload":"","payloadType":"date","repeat":"","crontab":"","once":false,"x":424.5182189941406,"y":573.5229034423828,"z":"d5a6cdcf.6203b","wires":[["fbd25db9.042da"]]},{"id":"fbd25db9.042da","type":"function","name":"count","func":"if (context.hasOwnProperty(\"counter\"))\n\tcontext.counter += 1;\nelse\n\tcontext.counter = 0;\n\ncontext.global.counter = context.counter + 1;\n\nmsg.payload = \"local counter = \" + parseInt(context.counter, 10);\n\nreturn msg;","outputs":1,"x":602.5182647705078,"y":572.522876739502,"z":"d5a6cdcf.6203b","wires":[["caaa07a5.3555f8"]]},{"id":"caaa07a5.3555f8","type":"debug","name":"","active":true,"console":false,"complete":false,"x":782.5182189941406,"y":577.5229034423828,"z":"d5a6cdcf.6203b","wires":[]},{"id":"36a0278c.c95fd8","type":"inject","name":"","topic":"","payload":"","payloadType":"date","repeat":"","crontab":"","once":false,"x":424.5182189941406,"y":638.5228900909424,"z":"d5a6cdcf.6203b","wires":[["4b22fcbc.b4dd04"]]},{"id":"4b22fcbc.b4dd04","type":"function","name":"get global count","func":"msg.payload = \"global counter = \" + parseInt(context.global.counter, 10);\n\nreturn msg;","outputs":1,"x":603.5182189941406,"y":634.5229034423828,"z":"d5a6cdcf.6203b","wires":[["f51d7f6a.0ae28"]]},{"id":"f51d7f6a.0ae28","type":"debug","name":"","active":true,"console":false,"complete":false,"x":782.5182762145996,"y":633.5228900909424,"z":"d5a6cdcf.6203b","wires":[]}]

Local and Global variables
Within a function node there are ways to persist data between calls to that node, either locally to that node or globally across all nodes.  It's documented pretty well here.  I created a quick demonstration of globals using a couple of function nodes.
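Here's the code from those two function nodes (both are in the flow above).  The "count" node keeps a counter in its node-local context and publishes a copy into context.global:

// "count" node: context persists between calls and is private to this node
if (context.hasOwnProperty("counter"))
    context.counter += 1;
else
    context.counter = 0;

context.global.counter = context.counter + 1;  // visible to every function node

msg.payload = "local counter = " + parseInt(context.counter, 10);
return msg;

And the "get global count" node reads it back from anywhere:

// "get global count" node: reads the value the "count" node published
msg.payload = "global counter = " + parseInt(context.global.counter, 10);
return msg;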



First web service
Just to try out what I'd learned, I used some techniques from here and got a working test URL using the http in and http response nodes!  Just pass a "name" parameter to the URL and it will print a Hello message.  Something like this: "http://site/test?name=Ken"
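The flow is just http in -> function -> http response.  Here's the "Gen response" function node from the flow above; for a GET request, the query parameters show up as properties of msg.payload:

// "Gen response" node: ?name=Ken arrives as msg.payload.name
var resp = "Hello " + msg.payload.name;
msg.payload = resp;    // the payload becomes the HTTP response body
return msg;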




Conclusions
I really liked working with Node-RED, and I think I'll probably continue using it.

Pros
  • Open source and actively developed by IBM
  • Web-based development
  • Graphical programming/rule development
  • HTTPS and basic HTTP authentication are supported.  See here for more info.
  • Mobile-friendly interface
Cons
  • No built-in UI
  • No built-in database for sensor readings, though there are nodes to interface with various databases

Friday, October 24, 2014

Trying out HouseAgent

For the last couple of years I've been looking for some home automation software to run the hardware that I'm slowly building to automate my house.  There are so many choices that I'm not sure where to start.  I discovered HouseAgent mostly because it's written in Python, and I really like Python, so I thought I'd give it a test drive.

HouseAgent is a Python-based home automation tool with a promising architecture.  It appears that plugins are separate processes that communicate via AMQP, which means they can be written in (almost) any programming language.  It has a web interface and a simple rules engine.

I installed the latest daily build of HouseAgent on a Windows 7 machine and ran into some problems.  The main problem was that it wouldn't launch.

After poking around for a while, I figured out that you can run the HouseAgent daemon at the command line to get more info.  Just run 'houseagent.exe debug' to display debug messages.  I quickly discovered that there were files missing, namely houseagent.conf.  I grabbed the missing files from the git repo and put them in the places the error messages referred to.  The next error was that the port it was trying to use (8080) was already in use, so I changed that in houseagent.conf.

At this point HouseAgent successfully launched, and I could get to the main web interface.  Then I went to the git repo for the JeeLabs plugin and grabbed all of the files there.  There wasn't any documentation, so I made a directory called JeeLabs in the plugins folder of the HouseAgent install directory and copied all the files from the git repo there.  I then restarted the HouseAgent daemon and was able to successfully add a JeeLabs device.

I stopped at this point since I'll need to actually tie in some JeeNode hardware to continue my investigation.  There is a .conf file in the JeeLabs plugin directory that can be used to set the COM port to use and various other things.

There is also a fork of the JeeLabs plugin here that appears to be a little more up-to-date.  Even with those changes, when compared to another more actively maintained plugin, like ZWave, it looks like the JeeLabs plugin has been somewhat abandoned.

The web UI is also pretty plain, and if I decide to use HouseAgent I'd like to modify it to look a little snazzier.  Right now it looks like an "admin" interface rather than one that a user would really appreciate.  There does seem to be a branch here where someone is working on a mobile interface.

At this point I kind of gave up, since getting HouseAgent to run seemed like fixing one bug after another.  HouseAgent has a promising architecture, but it would probably take too much time and effort to make it usable.

Pros:
  • Written in Python (yeah!)
  • Good plugin architecture
  • Web-based UI
Cons: 
  • It doesn't look like HouseAgent has been under active development in several years.  Has the project been abandoned?
  • In general, it appears to be a little rough around the edges
  • Web UI is a little ugly and not mobile-friendly

Wednesday, October 22, 2014

embeddedWorld 2014

I was lucky enough to get chosen to speak at embeddedWorld 2014 in Nuremberg, Germany.  I gave a short presentation on using Yocto to develop a commercial product.  If you're not sure what Yocto is, don't worry, most people don't; it's a tool that helps Linux run on embedded hardware (like your WiFi router, for instance).  Rather than talk about a bunch of technical stuff, I'll mostly stick to the trip itself, impressions of Germany, etc.

General Observations
As I was flying from Düsseldorf to Nuremberg, I could see at least 4 nuclear reactors, quite a few windmills, and lots of solar panel installations (many on the roofs of houses and businesses, and some empty fields with solar panels in rows).  I also noticed lots of large-scale greenhouses in the farm fields; I couldn't tell if these were permanent installations or just temporary structures built over top of the fields.  At least from the air, Germany looks very "green".

Once I got to Germany, the language barrier was not a problem at all.  I'd been told that there were a lot of English speakers in Germany, but it really seemed that everyone spoke English.  It was funny too, because most people would say "my English is not very good" and then go on to speak nearly flawless English.

Germany just felt very comfortable to me, for some reason that's hard to articulate.  And I guess the Germans thought so too, because they almost always would initially speak German to me, until I asked "Sprechen Sie Englisch?".  The other people who came over with me from the States would usually get English right off the bat from the Germans they met.

We had several people also attending the conference from NI's Munich Office, and they were outstanding hosts, taking us to dinner each night and generally helping us out in any way.

I also found that business etiquette was a bit different:
  1. It's expected that you wear a suit and tie; the US standard of a button-up shirt and khakis doesn't cut it.
  2. Don't be late.  You'll get some very disapproving stares if you aren't punctual.
And this should surprise no one: Germans know beer.  Every beer I had there was excellent (and I had quite a few).  Don't worry about getting something you won't like; just drink it and it will be good.

I didn't have any time for sight-seeing, so this is a crappy pic I took in downtown Nuremberg.


Embedded World Exhibition Floor
The Exhibition part of Embedded World was crazy-huge, with 856 exhibitors and 26,000+ visitors.  It took me a long time just to do a relatively quick walkthrough of the floor.

I kept my eyes open for any LabVIEW front panels and I saw quite a few.  Of particular interest was a cool-looking demo from Intel: a slightly creepy-looking insectile robot that plays the Chinese mandolin (not sure what the real name of the instrument is).  You can see the LabVIEW front panel on the monitor in the picture below.  The Intel guy running the demo couldn't say enough good things about developing in LabVIEW.



Presentation
My presentation went really well.  Almost every seat was filled; probably around 50 people.  I got some good questions and some compliments after it was over.  I also talked for a while with a couple of Intel guys who are heavily involved in the Yocto project.

Flight Back
The flight back was the only negative part of the whole trip. Coming in through Customs Stateside is no fun.  'Nuf said.

Wednesday, October 15, 2014

First open source patch accepted

I was about to post about my first patch accepted to an upstream open source project, but then I started digging around and discovered this one was actually my first!

I'm not trying to toot my own horn (ok, I am a little); I was just proud that I actually have some small, insignificant bits of code contributed to the open source community.

If anyone is remotely interested, the first patch is to the Busybox project, fixing a bug whereby a link-local network interface would get a different IP address on every reboot when it should get the same IP on subsequent boots.  I can see your eyes glazing over...

The second patch is to the Yocto project and has to do with the entropy generated on an embedded Linux system.  Basically, if there's not enough entropy in the system, then random numbers become much less random, which can be a security risk if those random numbers are used for anything related to cryptography.

And I guess to add to that I have lots of commits submitted to repos here and here, but those are repos owned by the company I work for so I wasn't counting those.




Wednesday, October 30, 2013

Wolverine claws for Halloween

A friend is wearing a Wolverine costume for Halloween, and he asked me to help him make some wooden claws.  He wanted them made out of thin (brittle) wood so no one would mistake them for actual weapons.  Although we haven't tested them, we're pretty sure the claws would break very easily if anyone tried to use them as a weapon.

I didn't think to take any pictures while we were making them but here are the final results:



Sunday, February 03, 2013

Scale results, round 2

I made a second run on my scale tester.  This time I used a 10 kΩ thermistor as my temperature sensor.  This gave me around 0.15 degrees (F) of resolution, compared with the 1 degree resolution of the temperature sensor I used in the first test run.



Figure 1: Weight and Temperature vs. Time

The first graph this time is the usual weight and temperature over time.  You can see that weight and temperature appear to be highly correlated, just like in the first test run.  The other thing to note is that the temperature appears much less noisy than in the previous run.

Figure 2: Nominal weight minus weight vs. Average temperature
In Figure 2, I graphed the delta between the nominal and measured weight against a moving average of the temperature.  I also threw down a best-fit linear trend line.  I can immediately see that the point cloud is much tighter than in the previous test run, which is good.

Figure 3: Adjusted weight

The last graph is a plot of the weight reading adjusted using the trend line equation from Figure 2 (i.e., the measured weight plus the delta the fit predicts at that temperature).  The ideal result is a straight line at 55 lb.  Clearly the line is not straight, but the adjustment certainly improves the data: the weight reading varies by +/- 0.2 lb instead of the +/- 0.75 lb of the unadjusted weight readings.

Conclusions

The adjusted weight did not turn out much different from the previous test run, but it still seems useful.  I think I can now go on to implement the data logger for this project, which will convert weight and temperature sensor readings to specific gravity.

There are still some unknowns here.  One is that the test runs have only been done with a constant 55 lb test weight.  It's certainly possible that the numbers will change if a different weight is used.  It might be worth re-testing with 45, 65, and 75 lb weights.

Additionally, there sometimes seems to be a lag between temperature changes and weight changes.  This could be attributable to the temperature sensor and scale having different thermal masses, which is almost certainly true.  I could account for this by attaching the temperature sensor to the scale with thermal paste to ensure that they stay at roughly the same temperature.

Tuesday, January 22, 2013

First scale test results

I took some time to analyze the results from my scale test.  If you like graphs, then keep reading.  If you don't like graphs then skip to my conclusion at the bottom.  I happen to like graphs...

To summarize the previous post: I wanted to see if there were two effects acting on my fancy new RS232 scale, 1) temperature and 2) sensor drift (or creep).  The easiest way to track this was to measure the weight of an object of constant mass over time, while also measuring temperature over the same time span.  I did that, and now have a week's worth of measurement data to sift through.

Figure 1: Temperature and Weight vs. Time
First I graphed weight and temperature, each versus time; Figure 1 shows this graph.  You can immediately see that the measured weight correlates closely with the temperature, so clearly I was right that there is a temperature effect.  The second thing to note is that the weight line shows clear quantization, but this is to be expected since the scale is only specified to 0.1 lb resolution (it's still nice to see it clearly in the graph).  Another thing to note is that the temperature is pretty noisy.
Figure 2: Average Temperature and Weight vs. Time
Figure 2 is the same graph but with a moving-averaged temperature.  Perhaps "moving average" is not the correct term for this; I never did much with formal statistics.  I basically just calculated each point by taking the mean of the previous 6 readings.  You can see the temperature line looks much less noisy now, and the correlation between temperature and weight remains.
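In code terms (the actual smoothing was done in Excel), it's just a trailing mean.  A sketch of the same calculation:

// trailing mean: average each reading with the readings just before it
// (windowSize = 6 to match what I did in the spreadsheet; the window is
// shorter at the start of the data where fewer readings exist)
function smooth(readings, windowSize) {
    return readings.map(function (value, i) {
        var start = Math.max(0, i - windowSize + 1);
        var window = readings.slice(start, i + 1);
        var sum = window.reduce(function (a, b) { return a + b; }, 0);
        return sum / window.length;
    });
}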

Figure 3
My next task was to try to figure out the mathematical relationship between weight and temperature, so that I can then (hopefully) correct for temperature and get a more accurate weight reading.  The graph of weight versus temperature is shown in Figure 3.  There are several interesting things here.  The first is that you can see that both the weight and the temperature are quantized, since all the blue points in the graph land on grid lines.  The next thing to note is that the point cloud is pretty widely distributed (which isn't good).  I had Excel generate a best-fit linear trend line.
Figure 4
Next up I repeated that graph (Figure 4) but with the averaged temperature data from Figure 2.  Now you can see that the quantization in the temperature data has disappeared but remains for the weight, which is what we'd expect.  The trend line has changed a little based on this adjustment.  I also threw a 6th-order polynomial fit trend line on the data to see if that would fit the data better, but as you can see it comes out nearly a straight line, so we'll stick with the linear fit line for future calculations since it seems to be a pretty good approximation (at least compared with a polynomial fit).
Figure 5
Now from this we can use the calculated trend line to mathematically cancel out the temperature effect in the scale readings.  Figure 5 is a graph of the temperature-adjusted weight plotted over time.  The ideal result would be a line that is constant at 55 lb.  The adjusted weight line is clearly not straight, but if you compare it to Figure 2, we have certainly made an improvement over the raw weight readings.
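In code form, the correction is just the trend line turned around.  A sketch, with made-up coefficients (the real slope and intercept come from the linear trend line in Figure 4):

// hypothetical fit coefficients -- read the real ones off the Figure 4 trend line
var slope = 0.05;       // lb per degree F (placeholder value)
var intercept = 51.5;   // lb (placeholder value)
var nominal = 55;       // lb, the known test weight

// subtract what the fit predicts at this temperature, keep the nominal weight
function adjustWeight(rawWeight, avgTempF) {
    var predicted = slope * avgTempF + intercept;
    return rawWeight - predicted + nominal;
}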

As it stands, this temperature correction is probably useful enough that I could start using the scale for continuous weight measurements; however, I'd like to make it better if I can.  I think one improvement I can make (which I mentioned in my previous post) is that my temperature sensor has pretty low resolution (about 1 degree Fahrenheit).  I suspect that a sensor with 0.1 degree resolution would tighten up the point cloud in Figure 3, which would make the generated trend line more accurate.  So now I'm going to start looking around for a more suitable temperature sensor (I might already have one on hand in my parts library) and re-run the test to gather fresh data.

Zeo sleep monitor




My wife got me a new toy for Christmas this year: a Zeo Sleep Monitor.  It's a device that fits on your forehead and measures your brain waves while you sleep.  Based on your brain waves, it logs what phase of sleep you're in (light, REM, deep, or awake).  I got some time to try it out a few nights ago and I was pretty impressed!

Above, you can see a screenshot from the app on my phone.  You can see for yourself what each of the colors means on the graph.  The fun thing for me is that it lines up perfectly with what happened that night.  At around 1AM (the second set of red lines) my dog woke me up to be let outside.  Then around 2AM my wife woke up, which woke me up for a little while.  Then at around 6:30AM my wife got up to get ready for work, which again woke me up.  So that correlates the data with reality nicely.

The interesting things in the graph are what I wouldn't ordinarily be able to determine for myself, namely differentiating between light sleep, deep sleep, and REM sleep.  I find this pretty fascinating.  I think this also means that I don't have sleep apnea (or at least didn't that night), since I would expect many more red lines throughout the night if I did.

I'm going to keep using this as much as possible for the next few weeks to get an idea of what my sleep habits look like over time.  The Zeo Android app also has another feature whereby you can set an alarm and the app will try to wake you when you're in an optimal sleep state.  This works on the theory that you feel better and more alert if you're awakened when you're in a light sleep state.  I haven't tried that feature yet, but I will soon.

When I find some time, I'd like to find a way to use the Zeo sensor to monitor my sleep cycles and, when I'm in a REM cycle, trigger subtle lights or sounds that might allow me to experience lucid dreaming.  I found this Android API, which may help me do that.

Update: I tried the alarm feature last night and it is amazing!  It delayed my wakeup time by about 15 minutes to wait for the REM cycle I was in to end.  I woke up feeling fantastic!  I may have to start using it every night.