A few of my recent projects have involved combining a single-board computer (like a Raspberry Pi) with some kind of display. I used the Linux framebuffer API to render simple UIs in software, and that worked perfectly fine for my applications. The biggest issue was turnaround time during development - it was really annoying to have to build the root filesystem, flash it to an SD card, and boot up the Pi just to iterate on my interface. I was able to save some effort by tunnelling a virtual framebuffer over VNC, but I eventually landed on a workflow I like a lot more, with some great side benefits.
Both of my projects use a Rust daemon to drive all of their functionality, including the UI. Rust makes it really easy to split your project into smaller units (called crates), so it was natural to break the UI out into its own crate, separate from the rest of the daemon. That means you can compile it on its own and depend on it from other places, which enables mocking and unit tests. The best part, though, is combining this separated UI crate with wasm-bindgen and wasm-pack, tools that help you compile your Rust code to WebAssembly and use it from Javascript.
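To make that concrete, here's a rough sketch of what such a UI crate's public surface might look like (all names, dimensions, and behaviour here are hypothetical, not the actual code from my projects). The key property is that the crate does no I/O of its own: inputs arrive through method calls and output goes into whatever RGBA buffer the caller provides.

```rust
// ui/src/lib.rs - a hypothetical, pared-down UI crate.

pub const WIDTH: usize = 800;
pub const HEIGHT: usize = 480;

/// Events the UI reacts to, regardless of whether they come from real
/// hardware or from a browser-based simulator.
pub enum Input {
    FootSwitch(u8),
    Dmx(u8),
}

pub struct Ui {
    current_slide: usize,
}

impl Ui {
    pub fn new() -> Self {
        Ui { current_slide: 0 }
    }

    /// Apply one input event to the UI state.
    pub fn handle_input(&mut self, input: Input) {
        match input {
            Input::FootSwitch(_) => self.current_slide += 1,
            Input::Dmx(value) => self.current_slide = value as usize,
        }
    }

    /// Draw the current state into a caller-supplied RGBA framebuffer
    /// (WIDTH * HEIGHT * 4 bytes). Real drawing code would go here;
    /// this placeholder just clears the buffer to opaque black.
    pub fn render(&self, framebuffer: &mut [u8]) {
        for pixel in framebuffer.chunks_exact_mut(4) {
            pixel.copy_from_slice(&[0, 0, 0, 255]);
        }
    }
}
```

The daemon depends on this crate and feeds it the real DMX and foot-switch events plus the real framebuffer; the simulator depends on it through a thin WebAssembly wrapper.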
The first project I did this with was a DIY "confidence monitor" - it's a device that bands can use to help them remember the lyrics to a song when they're performing. It looks like a speaker from the front, but it has a screen in the back that the band can look at if they forget anything. Mine was controlled either programmatically via a DMX signal or manually via a Bluetooth foot switch. Both of these inputs were part of the public interface to the UI module, allowing it to be driven either by the real signals/foot switch or by Javascript. The daemon would also load configuration and the text/images it needed to display from a USB flash drive, and this could also be supplied from Javascript. The result is a pixel-perfect simulation of the embedded UI hosted entirely in the browser, giving me a way to test my code and the user a way to test their configuration and slides without needing to bother with the physical device.
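A wasm-bindgen wrapper over that same crate might look something like the sketch below (again with made-up names, building on the hypothetical `ui` crate above; the wrapper would be built with wasm-pack and needs `crate-type = ["cdylib"]` in its Cargo.toml).

```rust
// ui-web/src/lib.rs - hypothetical wasm wrapper around the shared UI crate.
use wasm_bindgen::prelude::*;

#[wasm_bindgen]
pub struct Simulator {
    ui: ui::Ui,
    framebuffer: Vec<u8>,
}

#[wasm_bindgen]
impl Simulator {
    #[wasm_bindgen(constructor)]
    pub fn new() -> Simulator {
        Simulator {
            ui: ui::Ui::new(),
            framebuffer: vec![0; ui::WIDTH * ui::HEIGHT * 4],
        }
    }

    /// On the device this comes from the Bluetooth foot switch;
    /// in the simulator it's wired to a button on the page.
    pub fn press_foot_switch(&mut self, key: u8) {
        self.ui.handle_input(ui::Input::FootSwitch(key));
    }

    /// On the device this comes from the DMX receiver.
    pub fn send_dmx(&mut self, value: u8) {
        self.ui.handle_input(ui::Input::Dmx(value));
    }

    /// Stand-in for the USB flash drive: configuration and slides
    /// uploaded from the browser as raw bytes.
    pub fn load_config(&mut self, _bytes: &[u8]) {
        // Parse and apply the configuration here.
    }

    /// Render one frame and hand the RGBA pixels back to Javascript.
    pub fn frame(&mut self) -> Vec<u8> {
        self.ui.render(&mut self.framebuffer);
        self.framebuffer.clone()
    }
}
```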
While you could certainly hook up a more complicated UI framework like egui to a Javascript canvas with WebGL, my use case didn't require complex interaction or layout (and I couldn't figure out how to open a fullscreen OpenGL context without X11 anyway). Instead, I rendered to an in-memory buffer and drew it to an HTML5 canvas using putImageData. This meant that nothing needed to change about the actual rendering code - as long as you can supply a framebuffer to the UI module as a render target (or retrieve the one it uses internally), you can pass that buffer back to Javascript to be drawn. Checkboxes and text boxes can serve as configuration inputs, and buttons can simulate the keys on the foot switch.
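The blit itself is only a few lines. You can do it in plain Javascript with `new ImageData(...)` and `putImageData`, or from Rust through web-sys, as in this sketch (the function name is mine, and the relevant web-sys features - Window, Document, Element, HtmlCanvasElement, CanvasRenderingContext2d, ImageData - need to be enabled in Cargo.toml):

```rust
use wasm_bindgen::prelude::*;
use wasm_bindgen::{Clamped, JsCast};
use web_sys::{CanvasRenderingContext2d, HtmlCanvasElement, ImageData};

/// Copy an RGBA framebuffer onto a <canvas> element, looked up by id.
#[wasm_bindgen]
pub fn blit(canvas_id: &str, pixels: &[u8], width: u32, height: u32) -> Result<(), JsValue> {
    let document = web_sys::window().unwrap().document().unwrap();
    let canvas: HtmlCanvasElement = document
        .get_element_by_id(canvas_id)
        .ok_or_else(|| JsValue::from_str("canvas not found"))?
        .dyn_into()?;
    let ctx: CanvasRenderingContext2d = canvas
        .get_context("2d")?
        .ok_or_else(|| JsValue::from_str("no 2d context"))?
        .dyn_into()?;

    // Wrap the raw bytes in an ImageData and draw it at the origin.
    let image = ImageData::new_with_u8_clamped_array_and_sh(Clamped(pixels), width, height)?;
    ctx.put_image_data(&image, 0.0, 0.0)
}
```

Calling something like this once per frame (e.g. from requestAnimationFrame) with the pixels the wasm module hands back is enough to keep the simulated display live; you could just as easily fold the same logic into a method on the wrapper itself.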
Overall, having this simulator sped up my development process immensely. It also makes the confidence monitor much easier to actually use - because the simulator and the physical device run the exact same code, you never have to worry about discrepancies or keeping the two in sync when you make changes. Feature requests are easier to handle, too - before jumping into the complex firmware upgrade process, I can prototype the change in the simulator and ask for feedback.
There's another blog post that includes some example code for this type of framebuffer-to-canvas rendering using WASM.