I recently helped create a toolkit to allow running
LabVIEW VIs on a BeagleBone Black (BBB) or Raspberry Pi 2 (RPi2). In this post I'd like to get into some of the technical challenges we encountered along the way.
SamK, the author of the LINX toolkit, took over creating
the LabVIEW I/O library. LINX was originally written to give LabVIEW easy access to the I/O on Arduinos and similar devices, so he naturally chose to expose the same
LabVIEW API for the I/O on the BBB and RPi2. The underlying implementation on the BBB/RPi2, however, is
fundamentally different from that of the normal LINX toolkit. Details on the
architecture are available
here. The source code for the library is available
here.
Issues encountered
I worked on getting the
LabVIEW run-time engine running on the BBB/RPi2. The LV run-time had
previously been ported to the ARMv7a CPU architecture and the Linux
operating system in order to enable the latest generation of NI's
CompactRIO industrial controllers. This made things way easier, but
there were still some issues that had to be overcome:
- The run-time was already built around the assumption that it was
running on an armv7a architecture. This is fine for the CPUs in the
BeagleBone Black and the Raspberry Pi 2 and 3, but the Raspberry Pi Zero
and 1 use an older CPU that only supports the armv6 instruction set, so
for this reason we decided not to support the older Raspberry Pis.
- The LV run-time usually runs on NI's own embedded Linux
distribution, but for the BBB and RPi we wanted LabVIEW to run on
the recommended Linux distro for each of these targets, which in both
cases happens to be Debian-based.
- The ARM Linux versions of the LabVIEW run-time and the rest of the
NI software stack are compiled with gcc's softfp flag, while most ARM
Linux distros use the hardfp flag. Binaries built with one flag are
incompatible with binaries built with the other. This means that the
LabVIEW run-time cannot use the hardfp libc that is present on
Raspbian. That's a problem.
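As an aside, you can see which float ABI an ARM binary was built with by reading the e_flags word in its ELF header; this is the same information readelf reports. A quick illustrative sketch, not part of the toolkit:

```python
# Illustrative sketch: determine the float ABI of a 32-bit ARM ELF
# binary by reading the e_flags word of its ELF header. The flag
# values come from the ARM ELF ABI (the same bits readelf decodes).
import struct

EF_ARM_ABI_FLOAT_SOFT = 0x00000200
EF_ARM_ABI_FLOAT_HARD = 0x00000400

def arm_float_abi(path):
    """Return 'hard', 'soft', or 'unknown' for a 32-bit ARM ELF file."""
    with open(path, "rb") as f:
        header = f.read(0x28)
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    # e_flags is the little-endian 32-bit word at offset 0x24
    # of the 32-bit ELF header.
    (e_flags,) = struct.unpack_from("<I", header, 0x24)
    if e_flags & EF_ARM_ABI_FLOAT_HARD:
        return "hard"
    if e_flags & EF_ARM_ABI_FLOAT_SOFT:
        return "soft"
    return "unknown"
```

Run against Raspbian's libc this should report "hard", while the LabVIEW run-time's libraries should report "soft", which is exactly the mismatch described above.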
Solutions found
The first issue is really a limitation of the CPU in the Raspberry Pi 1, and since most people are using the
newer Raspberry Pi 2 or 3, we decided the easiest way to resolve it was to not support the RPi 1. I know that's unfortunate for people who want to use the RPi1 they have lying in a drawer somewhere, but considering how cheap RPis are in general, it doesn't seem like such a big deal.
To solve the rest of the issues, I decided to use a
Linux chroot. At its core, a chroot, or root jail, runs a process with a different root directory. That's really all there is to it, but the implications are far-reaching. When a process runs, it loads its libraries from the /lib directory (among others), so if we provide a softfp set of libraries in a different directory and chroot the LabVIEW process into that directory, we solve issue 3. And if, in that same chroot directory, we also provide a simple Linux distro that is custom-tailored to LabVIEW, we solve issue 2.
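The mechanics are simple enough to sketch in a few lines. The paths below are illustrative stand-ins, and actually running this requires root:

```python
# Sketch of what "running LabVIEW in a chroot" amounts to: after
# chroot(), the process sees new_root as "/", so the dynamic loader
# finds the softfp libraries under new_root/lib instead of the
# host's hardfp /lib. Paths are illustrative; chroot() needs root.
import os

def resolve_in_root(new_root, path):
    """Where a path seen inside the chroot really lives on the host."""
    return os.path.join(new_root, path.lstrip("/"))

def launch_in_chroot(new_root, argv):
    """Fork, chroot into new_root, and exec argv there (root only)."""
    pid = os.fork()
    if pid == 0:  # child: jail ourselves, then become the target process
        os.chroot(new_root)
        os.chdir("/")
        os.execv(argv[0], argv)
    _, status = os.waitpid(pid, 0)
    return status

# e.g. launch_in_chroot("/srv/lvchroot", ["/usr/local/natinst/labview/lvrt"])
```

So a libc request for /lib/libc.so.6 inside the jail resolves to /srv/lvchroot/lib/libc.so.6 on the host, which is where the softfp copy lives.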
Building chroots in Yocto
To make this little chroot Linux distro, I used the
Yocto project, which I was already familiar with because I use it at work on a daily basis. I created a
Yocto layer that has everything needed to create the chroot. The image recipe includes the
LabVIEW run-time engine and the
VISA and
LINX I/O libraries.
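For a rough idea of what such an image recipe looks like, here is a minimal BitBake sketch; the recipe contents and package names are made up for illustration and are not the actual layer:

```
SUMMARY = "Softfp rootfs for the LabVIEW run-time (illustrative)"
LICENSE = "MIT"

# Build a standard Yocto image and add our packages on top.
inherit core-image

# Package names below are placeholders for the real recipes.
IMAGE_INSTALL += "labview-rt nivisa labview-linx"

# Ship the rootfs as a tarball so the installer can unpack it
# into the chroot directory on the target.
IMAGE_FSTYPES = "tar.gz"
```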
Installing the chroot
With the chroot image created, I needed some way to install it on a target. Since both the BBB and RPi use Debian-based Linux distros, I created a
deb package installer. The installer includes the
chroot image created using Yocto, a
systemd unit file to start the chroot and the LabVIEW run-time at boot, a dependency on the schroot utility, and a small daemon that I'll discuss below.
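A systemd unit along these lines is enough to bring everything up at boot; the chroot name and run-time path here are placeholders, not the actual unit:

```
# /lib/systemd/system/labview.service (names and paths illustrative)
[Unit]
Description=LabVIEW run-time engine inside the softfp chroot
After=network.target

[Service]
# schroot enters the chroot defined in /etc/schroot/schroot.conf
# under the (hypothetical) name "labview" and runs the run-time there.
ExecStart=/usr/bin/schroot -c labview -- /usr/local/natinst/labview/lvrt
Restart=on-failure

[Install]
WantedBy=multi-user.target
```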
Emulating the NI System Web Server
On traditional NI Linux RT targets, there is a system web server that provides various system configuration services, like changing the network settings. We don't support most of these services, but we do need to support restarting the LabVIEW run-time, because deploying a LabVIEW startup app to the target requires a run-time restart.
Since we only needed this one service, rather than wading through the sizable codebase of the System Web Server, I took the tack of reverse-engineering the reboot web service with Wireshark. Basically, I ran a Wireshark capture while remotely restarting a traditional LabVIEW Real-Time target. This worked far better than I would have guessed, and soon I had
a small Python script which implements a small subset of the NI System Web Server's functionality. I created
another systemd unit file to start the script at boot time and put it all into the Debian installer.
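The shape of such a script is roughly the following. The endpoint path, response body, and restart command here are stand-ins for whatever the Wireshark capture actually showed, not the real protocol:

```python
# Minimal sketch of a restart web service using only the standard
# library. The /restart path, "rebooting" body, and RESTART_CMD are
# illustrative assumptions, not the NI System Web Server's protocol.
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

# Command used to restart the LabVIEW run-time; adjust to the target.
RESTART_CMD = ["systemctl", "restart", "labview.service"]

class RestartHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path == "/restart":  # hypothetical endpoint name
            subprocess.call(RESTART_CMD)
            body, code = b"rebooting", 200
        else:
            body, code = b"not found", 404
        self.send_response(code)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def serve(port=8080):
    """Serve the restart endpoint forever (run under systemd)."""
    HTTPServer(("", port), RestartHandler).serve_forever()
```

On the target, the systemd unit would simply call serve() at boot, and LabVIEW's deploy step would POST to the endpoint to bounce the run-time.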
Security
One important note is that the NI System Web Server does lots of work to ensure that all of its operations are handled securely, but for my quick-and-dirty Python script I did not implement any of these security features. The LabVIEW run-time also runs as the root user so that it has access to the I/O resources it needs. On other LabVIEW Real-Time targets, and on Linux systems in general, daemons like the LabVIEW run-time do not run as root, due to security concerns.
The implication of these choices is that anyone with access to your local network can restart the LabVIEW daemon remotely and can run VIs on the BBB/RPi2. This probably isn't a big deal if your network is private and sits behind a firewall, but it's still something to be aware of.