
Laser speckle contrast imaging

When you shine a laser on a wall, the laser light seems to sparkle. A weird, shifting pattern of random dots appears in the illuminated spot. When you move your head, the pattern shifts around, seeming to follow you. The random dots seem to get larger as you move further away from them. This weird effect is called laser speckle. It is caused by interference patterns created when the coherent light strikes a finely textured surface — particularly textures of approximately the same size as the wavelength of light. It is a little window into the microscopic world.

[Animation: hand-30f-ir8]

A loop of 30 infrared image frames captured using a Kinect. Look closely at the hand and you can see the laser speckle pattern change.

If you shine a laser on your hand, the places where more blood is flowing beneath your skin will seem to sparkle more vigorously — changing more often, with a finer-grained pattern. This is because the light is bouncing off your blood cells, and when they move, they cause the speckle pattern to shift around.

You can use a camera to record this, and with computer assistance, quantitatively determine blood flow. This is an established technique in medical diagnosis and research, and is called laser speckle contrast imaging, or LSCI.

An awesome paper by Richards et al written in 2013 1 demonstrated that LSCI could be done with $90 worth of equipment and an ordinary computer, not thousands of dollars as researchers had previously been led to believe. All you need to do is shine a laser pointer, record with a webcam, and compute the local standard deviation either in time or space. I assumed when I first read this about a year ago that hobbyists would be falling over each other to try this out, but this appears not to be the case. I haven’t been able to find any account of amateur LSCI.

The University of Texas team, led by Andrew Dunn, previously provided software for the interpretation of captured images, but this software has been withdrawn and was presumably not open source. Thus, I aim to develop open source tools to capture and analyse laser speckle images. You can see my progress in the GitHub project I have created for this work.

The Wikimedia Foundation have agreed to reimburse reasonable expenses incurred in this project out of their Wellness Program, which also covers employee education and fitness.

Inspiration

My sister Melissa is a proper scientist. I helped her with technical aspects of her PhD project, which involved measuring cognitive bias in dogs using an automated training method.2 I built the electronics and wrote the microcontroller code.

She asked me if I had any ideas for cheap equipment to measure sympathetic nervous system response in dogs. They were already using thermal cameras to measure eye blood flow, but such cameras are very expensive, and she was wondering if a cheaper technique could be used to provide dog owners with insight into the animal’s behaviour. I came across LSCI while researching this topic. I’m not sure if LSCI is feasible for her application, but it has been used to measure the sympathetic vasomotor reflex in humans. 3

Hardware

Initially, I planned to use visible light, with one or two lenses to expand the beam to a safe diameter. Visible light is not ideal for imaging the human body through the skin, since the absorption coefficient of skin falls rapidly through the red and into the near-infrared region — the further towards the infrared you go, the deeper the light penetrates. But visible light has significant advantages for safety and convenience. The retina has no pain receptors, and the blink reflex is triggered by visible light alone; yet light from a near-infrared laser is focused by the eye into a tiny spot on the retina just as visible light is. A 60mW IR laser will burn a hole in your retina in a fraction of a second, and the only sign will be the retina starting to bleed into the vitreous humour.

I ordered a 100mW red laser on eBay, and then started shopping for cameras, thinking (based on Richards et al) that cameras capable of capturing video in the raw Bayer mode would be easy to come by. In fact, the Logitech utility used by Richards et al is no longer available, and recent Logitech cameras do not appear to support raw image capture.

I’ll briefly explain why raw Bayer mode capture is useful. Camera manufacturers are lying to you about your camera’s resolution. When you buy a 1680×1050 monitor, you expect it to have some 1.7 million pixels each of red, green and blue — 5.3 million in total. But a 1680×1050 camera only has 1.7 million pixels in total, a quarter of them red, a quarter blue, and half green. Then, the camera chipset interpolates this Bayer image data to produce RGB data at the “full” resolution. This is called demosaicing.

Cameras use all sorts of different algorithms for demosaicing, and while we don’t know exactly what algorithm is used in what camera, they all make assumptions about the source image which do not hold for laser speckle data — assumptions like smooth variation between neighbouring pixels. Demosaicing throws away exactly what we care about: for speckle, the “noise” is the signal. Our image is not smoothly varying, and we want to know about sharp variations on the finest possible scale.
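
To make that concrete, here is a minimal sketch of pulling the red-filtered photosites out of a raw Bayer frame at their native positions. It assumes an RGGB layout and 8-bit samples, which varies between sensors, and is purely illustrative — it is not my capture code.

#include <cstdint>
#include <vector>

// Extract the red plane of an RGGB Bayer frame. The red photosites sit on
// even rows and even columns, so the result is a quarter of the sensor's
// pixel count — the true "red resolution" of the camera.
std::vector<uint8_t> extract_red_plane(const std::vector<uint8_t>& bayer,
                                       int width, int height)
{
    std::vector<uint8_t> red((width / 2) * (height / 2));
    for (int y = 0; y < height; y += 2)
        for (int x = 0; x < width; x += 2)
            red[(y / 2) * (width / 2) + (x / 2)] = bayer[y * width + x];
    return red;
}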

Ideally, you would like to use a monochrome camera, but at retail, such cameras are perversely much more expensive than colour cameras. I asked the manufacturer of one cheap “B/W” camera about its technical details. They said it is actually a colour image sensor with the saturation turned down to zero in post-processing firmware!

Enter the Microsoft Kinect. This excellent device is sold with the Microsoft Xbox. I bought one intended for the Xbox 360 (an obsolete model) second hand for $25 AUD plus postage, then replaced the proprietary plug with a standard USB plug and DC power jack.

This device has an IR laser dot pattern projector, an IR camera with a filter matched to the laser, and an RGB camera.  Following successful reverse engineering of the USB protocol in 2010-11, it is now possible to extract IR and raw Bayer image streams from the Kinect’s cameras.

The nice thing about the Kinect’s IR laser is that although it provides about 60mW of optical power output, it has integrated beam expansion, which means the product as a whole is eye-safe. To homogenize the dot pattern, you don’t need lenses — a static diffuser will do.

When you capture an IR video stream at the maximum resolution, as far as I know, the firmware does not allow you to adjust the gain or exposure settings. The IR laser turns out to be too bright for near-field work. So it’s best to use a static diffuser with an integrated absorber to reduce the brightness. Specifically, masking tape.

[Photo: 20160926_115622]

The optical rig used to capture the IR video at the top of this post.

Mathematics

My implementation of the mathematics mostly follows a paper by W. James Tom, from the same University of Texas research group 4. This paper is behind a paywall, but I can summarize it here. Speckle contrast can be computed spatially (from the variance within a single image), temporally (from the variance of one location through time in the video stream), or as a combination of the two. I started with spatial variance.

You calculate the mean and variance of a rolling window, say 7×7 pixels. This can be done with the usual estimator for sample variance of small samples, with Bessel’s correction:

\(s^2_I = \frac{N \sum\limits_{i=1}^N I_i^2 - \left( \sum\limits_{i=1}^N I_i \right)^2}{N \left( N - 1 \right)}\)

where \(I_i\) is the image intensity.

To find the sum and sum of squares in a given window, you iterate through all pixels in the image once, adding pixels to a stored block sum as they move into the window, and subtracting pixels as they fall out of the window. This is efficient if you store “vertical” sums of each column within the block. I think it says something about the state of scientific computing that to implement this simple moving average filter, convolution by FFT multiplication was tried first, and found to be inefficient, before integer addition and subtraction was attempted.
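
Here is a sketch of that computation (not the code from my repository): Bessel-corrected variance over a w×w window, maintained with per-column sums so the window total slides across each row with one addition and one subtraction per step. For brevity the column sums are rebuilt for each row rather than updated incrementally.

#include <cstdint>
#include <vector>

// Per-pixel sample variance over a w×w window (w odd, e.g. 7), using the
// Bessel-corrected estimator given above.
std::vector<double> window_variance(const std::vector<uint8_t>& img,
                                    int width, int height, int w)
{
    const int r = w / 2;                  // window "radius"
    const double N = double(w) * w;       // number of pixels in the window
    std::vector<double> var(img.size(), 0.0);
    std::vector<double> colSum(width), colSumSq(width);

    for (int y = r; y < height - r; ++y) {
        // Column sums and sums of squares for the rows under this window row.
        for (int x = 0; x < width; ++x) {
            double s = 0, s2 = 0;
            for (int yy = y - r; yy <= y + r; ++yy) {
                double v = img[yy * width + x];
                s += v;
                s2 += v * v;
            }
            colSum[x] = s;
            colSumSq[x] = s2;
        }
        // Slide the window across the row: add the incoming column,
        // subtract the outgoing one.
        double sum = 0, sumSq = 0;
        for (int x = 0; x < w; ++x) {
            sum += colSum[x];
            sumSq += colSumSq[x];
        }
        for (int x = r; x < width - r; ++x) {
            var[y * width + x] = (N * sumSq - sum * sum) / (N * (N - 1));
            if (x + r + 1 < width) {
                sum   += colSum[x + r + 1] - colSum[x - r];
                sumSq += colSumSq[x + r + 1] - colSumSq[x - r];
            }
        }
    }
    return var;
}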

The variance is normalized, to produce speckle contrast \(k\):

\(k = \frac{\sqrt{s^2_I}}{\left\langle I \right\rangle}\)

where \(\left\langle I \right\rangle\) is the sample mean. From this, \(x\), the camera exposure time expressed as a multiple of the speckle correlation time, can be found by numerically solving:

\(k^2 = \beta \frac{e^{-2x} – 1 + 2x}{2x^2}\)

For small k, use

\(x \sim \frac{1}{k^2}\)

For large k, precompute a table of solutions and then apply a single iteration of the Newton-Raphson method for each new value of k.

Finally, plot 1/x.
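
Putting the last few formulas together, here is a sketch of the contrast and correlation-time step (again not the repository code): solve_x() seeds Newton–Raphson with the small-k approximation and iterates to convergence, rather than using the precomputed table and single iteration described above, and β is simply taken as 1 by default.

#include <cmath>

// Speckle contrast from the window statistics of the previous step.
double contrast(double variance, double mean)
{
    return mean > 0.0 ? std::sqrt(variance) / mean : 0.0;   // k = s / <I>
}

// Solve k^2 = beta * (e^(-2x) - 1 + 2x) / (2x^2) for x.
double solve_x(double k, double beta = 1.0)
{
    double k2 = k * k;
    if (k2 >= beta)
        return 0.0;                        // out of range for this model
    double x = beta / k2;                  // small-k approximation as a seed
    for (int i = 0; i < 50; ++i) {
        double e = std::exp(-2.0 * x);
        double f = beta * (e - 1.0 + 2.0 * x) / (2.0 * x * x) - k2;
        double h = x * 1e-6;               // numerical derivative, for brevity
        double xh = x + h;
        double eh = std::exp(-2.0 * xh);
        double fh = beta * (eh - 1.0 + 2.0 * xh) / (2.0 * xh * xh) - k2;
        double xnew = x - f * h / (fh - f);
        if (xnew <= 0.0)
            xnew = x / 2.0;                // damp steps that overshoot zero
        if (std::fabs(xnew - x) < 1e-9 * x) {
            x = xnew;
            break;
        }
        x = xnew;
    }
    return x;                              // the map above plots 1/x per pixel
}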

Results

[Image: hand-30f-ir8-vis]

It’s early days. We get a big signal from static surfaces which scatter the light heavily. Ideally we would filter that out and provide an image proportional to dynamic scattering. There is a model for this in Parthasarathy et al 5. Alternatively we can do temporal variance, sometimes called TLSCI, since this should be insensitive to static scattering. After all, you can see the blood flow effect with unaided eyes in the video. The disadvantage is that it will require at least 1-2 seconds to form an image.
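
For illustration, the temporal variant is even simpler to sketch: the same Bessel-corrected estimator as above, applied per pixel across a stack of frames (the 1–2 seconds of video just mentioned) rather than across a spatial window. This is not the repository code; frames are assumed to be same-sized grayscale images in row-major order.

#include <cmath>
#include <cstdint>
#include <vector>

// Per-pixel temporal speckle contrast over a stack of frames.
std::vector<double> temporal_contrast(const std::vector<std::vector<uint16_t>>& frames)
{
    const int N = static_cast<int>(frames.size());
    const std::size_t pixels = frames.empty() ? 0 : frames[0].size();
    std::vector<double> k(pixels, 0.0);
    if (N < 2)
        return k;
    for (std::size_t p = 0; p < pixels; ++p) {
        double sum = 0.0, sumSq = 0.0;
        for (int f = 0; f < N; ++f) {
            double v = frames[f][p];
            sum += v;
            sumSq += v * v;
        }
        double var = (N * sumSq - sum * sum) / (double(N) * (N - 1));
        double mean = sum / N;
        k[p] = mean > 0.0 ? std::sqrt(var) / mean : 0.0;   // k = s / <I>
    }
    return k;
}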

One of the first things I did after I connected my Kinect to my computer was to wrap a rubber band around one finger and have a look at the video. The reduction in temporal variance due to the reduced blood flow was very obvious. So I’m pretty sure I’m on the right track.

Future work

So far, I have written a tool which captures frames from the Kinect and writes them to a TIFF file, and a tool which processes the TIFF files and produces visualisations like the one above. This is a nice workflow for testing and debugging. But to make a great demo (definitely a high-priority goal), I need a GUI which will show live visualized LSCI video. I’m considering writing one in Qt. Everything is in C++ already, so Qt seems like a nice fit.

The eBay seller sent me the wrong red laser, and I still haven’t received a replacement after 20 days. But eventually, when I get my hands on a suitable red laser, I plan on gathering visible light speckle images using raw Bayer data from the Kinect’s RGB camera.

References

  1. Richards, L. M., Kazmi, S. M. S., Davis, J. L., Olin, K. E., & Dunn, A. K. (2013). Low-cost laser speckle contrast imaging of blood flow using a webcam. Biomedical Optics Express, 4(10), 2269–2283. http://doi.org/10.1364/BOE.4.002269
  2. Starling, M. J., Branson, N., Cody, D., Starling, T. R., & McGreevy, P. D. (2014). Canine sense and sensibility: Tipping points and response latency variability as an optimism index in a canine judgement bias assessment. PLoS ONE, 9(9), e107794. http://doi.org/10.1371/journal.pone.0107794
  3. Tew, G. A., Klonizakis, M., Crank, H., Briers, J. D., & Hodges, G. J. (2011). Comparison of laser speckle contrast imaging with laser Doppler for assessing microvascular function. Microvascular Research, 82(3), 326–332. http://dx.doi.org/10.1016/j.mvr.2011.07.007
  4. Tom, W. J., Ponticorvo, A., & Dunn, A. K. (2008). Efficient processing of laser speckle contrast images. IEEE Transactions on Medical Imaging, 27(12). http://dx.doi.org/10.1109/TMI.2008.925081
  5. Parthasarathy, A. B., Tom, W. J., Gopal, A., Zhang, X., & Dunn, A. K. (2008). Robust flow measurement with multi-exposure speckle imaging. Optics Express, 16, 1975–1989. http://dx.doi.org/10.1364/OE.16.001975

X11 security isolation

I previously wrote about methods for running untrusted code on a Linux workstation, with bare-metal performance and convenient local access to the build tree. Probably the best method for doing this is to use schroot. But by default, processes running under schroot still have access to the host’s X server, and can do things like keystroke logging and screenshot capture.

This is quite a nasty error on my part: it means the system I’ve been using for the last two years doesn’t actually meet one of my main security goals. So I think a post-mortem is in order.

Linux provides a concept of “abstract sockets”: named UNIX-domain sockets that exist outside the filesystem. Because the names are not filesystem paths, the set of abstract sockets is shared between processes with different root filesystems — a chroot does not hide them.
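
To make the namespace concrete, here is a minimal, purely illustrative sketch of binding an abstract socket — the leading NUL byte in sun_path is what places the name in the abstract namespace instead of the filesystem. The name used below is the X server’s, to show what the Firejail quote further down is referring to.

#include <cstddef>
#include <cstring>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main()
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    sockaddr_un addr{};
    addr.sun_family = AF_UNIX;
    // sun_path[0] == '\0' marks the name as abstract; socat and ss display it
    // as "@/tmp/.X11-unix/X0". No file ever appears under /tmp/.X11-unix, and
    // a different root filesystem does not hide it.
    const char name[] = "/tmp/.X11-unix/X0";
    std::memcpy(addr.sun_path + 1, name, sizeof(name) - 1);
    socklen_t len = offsetof(sockaddr_un, sun_path) + 1 + (sizeof(name) - 1);
    // Fails with EADDRINUSE if a real X server already owns the abstract name.
    if (bind(fd, reinterpret_cast<sockaddr*>(&addr), len) == 0)
        pause();
    close(fd);
    return 0;
}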

In March 2008, Adam Jackson added abstract socket support to the X.org server, based on a patch by Bill Crawford. The rationale was unclear at the time, but in September of that year, when client support was added, Adam Jackson explained to the XCB mailing list that “the main advantages [of abstract sockets] are that they work without needing access to /tmp”.

So from the original introduction of the feature, it was acknowledged that the rationale was to bypass security controls.

In 2010, Jan Chadima applied the same rationale when he requested that the feature be added to OpenSSH’s X forwarding (bug #1789). He explained that “this is useful when the selinux rules prevents the /tmp directory”. Here we had the first critical evaluation of the security of the feature, from Damien Miller who wrote:

Isn’t the solution for SELinux rules breaking /tmp to fix the SELinux rules? Abstract sockets look like a complete trainwreck waiting to happen: a brand new, completely unstructured but shared namespace, with zero intrinsic security protections (not even filesystem permissions) where every consumer application must implement security controls correctly, rather than letting the kernel do it.

Well said.

In 2014, Keith Packard proposed to have the X.org server stop listening on regular UNIX sockets by default, relying entirely on abstract sockets. The problem of OpenSSH’s non-compliance was raised, so he suggested:

Perhaps someone with a clue about the security implications of using abstract sockets vs file system sockets might chime in and explain why using abstract sockets is safer than file system sockets…

Cue cricket chirping noise.

The issue is known to other people who have tried to sandbox processes on hosts with an X server. The developers of a browser-sandboxing system called Firejail wrote in February 2016:

The only way to disable the abstract socket @/tmp/.X11-unix/X0 is by using a network namespace. If for any reasons you cannot use a network namespace, the abstract socket will still be visible inside the sandbox. Hackers can attach keylogger and screenshot programs to this socket.

This, thankfully, does not appear to be true. You can use the X server command line parameter -nolisten local to prevent your X server from listening on the abstract socket. The UNIX socket transport (called “unix” in the X command line) will still be enabled, and all applications will use it instead.

For plain xinit, this means having a /etc/X11/xinit/xserverrc containing something like

#!/bin/sh

exec /usr/bin/X -nolisten tcp -nolisten local "$@"

For LightDM, you can create a file called e.g. /etc/lightdm/lightdm.conf.d/50-no-abstract.conf with contents:

[Seat:*]
xserver-command=X -nolisten local

At least, it works for me. You can test it within the chroot using:

socat ABSTRACT-CONNECT:/tmp/.X11-unix/X0 -

This should report “Connection refused”.

Dimmable night light

I’m not a big fan of using technology just for the sake of it. We used to have a wireless battery-powered door bell, but it was unreliable: once, a heavy-handed delivery driver pushed in the rubber button so far that it got stuck under the case. Then every caller for a week after that just pressed it anyway, and assumed it was working and that the lack of sound must be because the buzzer couldn’t be heard from outside the house. So I replaced it with a ship bell, which has the advantage of providing instant ear-splitting feedback to the user as to whether it is working or not. It has had 100% uptime over several years of operation.

So it is with dismay that I observe the trend towards push-button or touch controls on everything. My wife needs a dimmable bedside lamp: bright enough to help her breastfeed in bed at night, but not so bright as to interfere with sleeping. I went shopping and found a wide variety of inappropriate designs. For example, some have only a single button and need to be cycled through the brightest setting in order to turn them off. How can a designer make such a thing and still take pride in their work? I know potentiometers are expensive, costing $1 or so, but even at the top end of the market, with lamps costing $200, the best they can do is put half a dozen touch switches on them, giving you bidirectional control over brightness and cycling through colour temperature settings. They are still harder to use than an old-fashioned knob, and the minimum brightness is inappropriate for my application.

So I made my own.

Hazel threw an LED torch down the stairs and broke it. Angela asked me to fix it. Well, it only cost $4 from the Reject Shop in the first place, so my repairs weren’t very careful. I opened up the aluminium case lengthwise with a Dremel cut-off wheel and found the problem — the circuit board had been soldered to the case, and that solder joint had broken. Oh well, every failure is an opportunity, right?

For days, as I walked around the house, I looked at every item for its potential as a lamp. Eventually I settled on the acrylic case for an old iPod Shuffle. I put the iPod itself in the bin.

[Photo: DSC02604]

It provides indirect lighting: the upward-facing LEDs put a spot on the ceiling, lighting up the room without excessive glare for people lying in bed.

Bill of materials:

  • Salvaged LED array
  • 10kΩ linear potentiometer
  • 100Ω resistor
  • BC548 NPN transistor
  • 1N914 small-signal diode
  • iPod shuffle case
  • 2.1mm DC barrel jack
  • Universal switch-mode power supply set to 9V

All items were from my stock or salvaged.

[Image: dimmable lamp]

To dim an LED with a potentiometer, you need to control the current rather than the voltage. If you control the voltage, then you’ll get nothing at all until it reaches a certain threshold, and then the brightness will rise exponentially until something overheats. So I adapted a simple voltage-controlled current source from Horowitz & Hill (2nd edition) to provide roughly linear current control as you turn the knob. Biasing the lower end with a diode brings the zero current point to approximately the zero position of the potentiometer. In practice, at the lowest setting, there is a very slight glow from the LED array which is only visible in a very dark room. I’ll call that a feature, to help you find the knob at night.
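
For the record — and assuming the circuit follows the textbook form of that current source, with the pot wiper driving the base of the BC548, the 100Ω resistor (R2) in the emitter leg, and the LED array in the collector; that reading is mine, so check it against the schematic above — the LED current is set by the emitter resistor:

\(I_{LED} \approx \frac{V_{base} - V_{BE}}{R_2}\)

The diode at the bottom of the pot holds that end about one diode drop (roughly 0.6–0.7V) above ground, so at the knob’s zero position \(V_{base} \approx V_{BE}\) and the current is approximately zero; turning the knob up raises \(V_{base}\), and the current rises roughly linearly with it.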

Battery power?

Update: A friend asked me about battery power. The torch the LED array came from used 3 AAA batteries, so about 4.5V, but with this inefficient current source the supply voltage needs to be doubled, since at full brightness, the voltage drop across the resistor R2 is as much as across the LED. And even if you used a current source that could go from rail to rail, you would still waste up to half the power.

A better solution is to power it from its original 4.5V, but with PWM. No doubt something could be cooked up with 555 timers, but they’re not really my style and I don’t have any in stock. I do have microcontrollers, and a microcontroller solution for this would have some nice advantages.

So I would use an ATtiny44, with a circuit very similar to my season clock (which, by the way, is still running on its original AA batteries, almost 3 years later). I would measure the potentiometer voltage with the microcontroller’s ADC, and when it drops below a certain voltage, go into sleep mode, waking say once every 100ms. I would power the potentiometer from a digital output, saving 450µA in sleep mode, just turning it on long enough to measure it. Maximum DC output for this chip is 40mA, so an outboard transistor may be needed, depending on choice of the LED array.
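
For what it’s worth, here is a rough, untested sketch of that firmware (avr-gcc). Everything specific in it — the pin assignments (pot wiper on PA0/ADC0, pot powered from PA1, LED or driver transistor on PB2/OC0A), the off threshold, and the 0.125s watchdog period standing in for “every 100ms” — is my own assumption, not a finished design.

#define F_CPU 1000000UL
#include <avr/io.h>
#include <avr/interrupt.h>
#include <avr/sleep.h>
#include <util/delay.h>

ISR(WDT_vect) {}                       // watchdog interrupt: just wake up

static uint16_t read_pot(void)
{
    PORTA |= _BV(PA1);                 // power the pot only while measuring
    _delay_us(100);                    // settling time
    ADCSRA |= _BV(ADEN) | _BV(ADSC);   // enable ADC, start conversion on ADC0
    while (ADCSRA & _BV(ADSC)) {}
    ADCSRA &= ~_BV(ADEN);              // ADC off again
    PORTA &= ~_BV(PA1);
    return ADC;
}

int main(void)
{
    DDRA = _BV(PA1);                   // PA1 drives the pot
    DDRB = _BV(PB2);                   // PB2 = OC0A, PWM to the LED (or driver)
    ADMUX = 0;                         // ADC0, Vcc reference
    ADCSRA = _BV(ADPS2);               // ADC clock = F_CPU/16

    TCCR0A = _BV(COM0A1) | _BV(WGM01) | _BV(WGM00);  // fast PWM on OC0A
    TCCR0B = _BV(CS01);                // timer clock = F_CPU/8

    // Watchdog interrupt every ~0.125 s; the WDCE timed sequence is needed
    // to change the prescaler.
    cli();
    WDTCSR = _BV(WDCE) | _BV(WDE);
    WDTCSR = _BV(WDIE) | _BV(WDP1) | _BV(WDP0);
    sei();

    for (;;) {
        uint16_t pot = read_pot();     // 0..1023
        if (pot < 8) {                 // knob at minimum: LED off, sleep
            OCR0A = 0;
            set_sleep_mode(SLEEP_MODE_PWR_DOWN);
            sleep_mode();              // woken by the watchdog
            WDTCSR |= _BV(WDIE);       // re-arm, hardware clears it on wake
        } else {
            OCR0A = pot >> 2;          // 10-bit reading -> 8-bit duty cycle
        }
    }
}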

Lightweight isolation for software build and test

About two years ago, I decided that it’s not a good idea to constantly download unreviewed software written by untrusted individuals and to run that software with full privileges on my laptop. If I was telling a child how to avoid getting viruses on their Windows machine, this would be obvious and normal. But because I am talking about developing open source software on Linux, I find myself some distance from the beaten track.

I like things to be fast and cheap, and so I like the idea of a local build system. But my laptop is used for all sorts of things that should not necessarily be shared with the developers of the software I am patching. I want bare metal performance, but I want to restrict access to sensitive files, such as:

  • The SSH agent socket
  • The X display, which allows keyboard logging and a variety of other attacks
  • Password manager databases
  • Emails (such as password reset emails)
  • Write access to /tmp, which allows race attacks on various services
  • The application’s own source tree…

At first, I used a Debian package called schroot, which automates a lot of the work involved in setting up a permanent chroot. And in the last few weeks, I have been trialling LXC, a very similar technology which has recently matured substantially.

At first, I used a read-only source directory, but I kept encountering cases where build systems want to write to their own source trees. MediaWiki now uses Composer extensively, Parsoid uses npm, and HHVM’s build system subtly depends on the build directory being the same as the source directory. So in all these cases, it’s convenient to give the build system its own writeable source tree. It’s possible to do this without giving the application the ability to write to the copy of the source tree which is destined for a git commit.

The solution I’ve settled on for this is aufs, which is a kind of union filesystem. It is really a joy to use. I can edit source files in my GUI editor, and as soon as I hit “save”, the changes are visible in the build environment. The build system can edit or delete its own source files, but those modifications are not visible in the host environment. And if the build system screws up its own source tree beyond easy rectification (which happens surprisingly often), I can just wipe the whole overlay, instantly reverting the build tree to the state seen in the host environment. It is like git clean except that you can put unversioned files into the host source tree without any risk of accidental deletion.

If I need to generate a source file with the build system and then transfer it to the host system for commit, it is just a simple file move:

mv /srv/chroot/build-overlay/mw/core/autoload.php .

Comparison of LXC and schroot

                                      schroot                       LXC
localhost                             Shared                        Isolated
Comprehensible error messages         Yes                           No
Automatic mount point setup           Yes                           No
Automatic user ID sharing             Yes (setup.nssdatabases)      No
Unprivileged start and login          Yes (schroot.conf “users”)    No (lxc-start, lxc-attach must run as root)
systemd inside container?             No                            Yes
SysV init scripts inside container?   Yes                           Yes
Root filesystem storage options       Diverse                       Limited

The lack of network isolation in schroot could be a problem if you have sensitive services bound to TCP on localhost. It’s possible to bring up network services inside the container — even ones that are duplicated in the host, as long as you use a different port number or IP address. It’s a little-known fact that 127.0.0.1 is only one IP address in a subnet of 16.8 million — you can bind local services in the container to, say, 127.0.0.2.

schroot is generally less buggy and easier to use than LXC. That’s partly maturity and partly the greater level of difficulty involved in the implementation of LXC. For example, schroot is able to process fstab files by just running mount(8), whereas LXC is forced to reimplement significant amounts of code from mount(8). It parses fstab-like syntax itself and has special-case support for many different filesystems.

In LXC it’s normal to run the whole system as a daemon, starting with /bin/init — this is the default behaviour of lxc-start. There are lots of components that make this work, each with its own log file. Often, when I made a configuration error, lxc-start would print no error but the container would fail to start; I then had to hunt around in the logs, turning on logging where necessary, to figure out what went wrong.

In schroot, by contrast, a persistent session is simply a collection of mounts; there does not need to be any process running inside the container for it to exist. So session start is synchronous and error propagation is trivial. Session termination is implemented as a shell script that iterates through /proc/*/root, killing all processes that appear to be running under the session in question.

schroot has some great options for root filesystem storage. For example, you can store the whole root filesystem in a .tar.gz file. When schroot starts a session, it will unpack the archive for you, which only takes a couple of seconds for a base system. By default such a root filesystem operates as a snapshot, but it can optionally update the tar file for you on session shutdown.
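
For example, a hypothetical /etc/schroot/chroot.d entry for a tarball-backed chroot might look like the following — the name and paths are made up, but type=file and file= are the relevant schroot.conf(5) keys:

[buildbox]
type=file
description=Tarball-backed build chroot
file=/srv/chroot/buildbox.tar.gz
users=tstarling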

How to do it

I previously wrote some notes about how to set up MediaWiki under schroot.

The procedure to set up an aufs-based build system is almost identical in schroot and LXC. Let’s say we’re making a container called “parsoid” with a union mount for the Parsoid source tree. My host source tree is in ~tstarling/src/wmf/mediawiki/services/parsoid, and I create an empty overlay directory writeable by tstarling in /srv/chroot/build-overlay/parsoid.

For schroot, you would have /etc/schroot/chroot.d/parsoid containing:

[parsoid]
type=directory
description=Parsoid build and test
directory=/srv/chroot/parsoid
setup.fstab=parsoid/fstab

And in /etc/schroot/parsoid/fstab:

none /srv/parsoid aufs br=/srv/chroot/build-overlay/parsoid=rw:/home/tstarling/src/wmf/mediawiki/services/parsoid=ro

The container root (/srv/chroot/parsoid) is created by directly invoking debootstrap.

For LXC, you create the container with lxc-create, and then add the mount to /var/lib/lxc/parsoid/config:

lxc.mount.entry = none srv/parsoid aufs br=/srv/chroot/build-overlay/parsoid=rw:/home/tstarling/src/wmf/mediawiki/services/parsoid=ro

X11 isolation

Updated June 6: When using schroot, you need to configure your X server to not use “abstract sockets”, which have a global namespace (within each netns) independent of the current root filesystem. If you are using lightdm, create a file called /etc/lightdm/lightdm.conf.d/50-no-abstract.conf with contents:

[Seat:*]
xserver-command=X -nolisten local

If your Linux distribution runs xinit directly, you would need a /etc/X11/xinit/xserverrc file containing something like:

#!/bin/sh
exec /usr/bin/X -nolisten tcp -nolisten local "$@"

For more information on X isolation, see my follow-up blog post.