ARTICLE / Part 1

Journey to the centre of the nrf52840

The Prologue

Our journey begins like all great stories in medias res, much like Star Wars. I had had some great success using Zigbee2mqtt and Node Red to automate my IKEA Trådfri lights at home, but it was working so well it was getting boring. So I decided the logical next step was clearly to build a light sensor using a low-power microcontroller, and probably a soldering iron. While I was not particularly new to embedded programming, I had never used Rust for it before so I was really excited to see how much fun it would be to write Rust in the more constrained no_std environment where careful memory management really matters.

The embedded ecosystem in Rust is very exciting these days, and it has a very active community behind it, but some chips and architectures are still rather poorly supported. My initial plan was to use an ESP8266, which is quite popular and, despite being very low-power, still has wifi support. However the Rust story for the ESP chips is unfortunately still a bit sub-par because of limited upstream LLVM support for their architectures, which means you have to use a forked LLVM compiler.

So I instead opted for a Nordic Semi nrf52840, which is really such a cool little chip. Seriously, the feature list for it is a mile long. Of particular interest to me, and the main reason I ended up choosing this (specifically Nordic Semi's own nrf52840 DK), is that the cortex-m and nrf52 families are extremely well supported by the embedded Rust community. It also has built-in support for Bluetooth Low Energy and IEEE 802.15.4 (zigbee to you and me) radios, which means I could integrate it super easily into my existing zigbee network.

Gentle introduction to embedded programming

Even if you're familiar with programming in general, and Rust in particular (which is probably why you're here), embedded programming is a whole different thing. There are some new concepts we need to learn and some old concepts we might as well forget.

Constrained environment

First and foremost, the hardware we're talking about here is so constrained that, unlike even a Raspberry Pi for example, there is no hope of running our program within anything approaching a linux environment. In fact, it's very common to not even run anything that qualifies as an operating system at all, and the code running is more usefully thought of as "firmware" rather than "a kernel". One immediate consequence of this constrained "no OS" environment is that the Rust we write has to be no_std: without an OS like linux underneath, we can't use much of Rust's standard library, and there's generally no heap either. That means no allocation and no Vecs; everything has to be fixed-size arrays with lengths known at compile time.
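
To make that concrete, here's a rough sketch of the kind of fixed-capacity buffer you end up writing (or pulling in from a crate like heapless) instead of reaching for a Vec; the Readings type and its capacity of 16 are made up purely for illustration.

// A fixed-capacity buffer instead of a Vec: the storage is a plain array whose
// length is baked in at compile time, so no allocator is needed.
struct Readings {
    buf: [u16; 16],
    len: usize,
}

impl Readings {
    const fn new() -> Self {
        Readings { buf: [0; 16], len: 0 }
    }

    // We can't grow when full, so we report failure instead of allocating.
    fn push(&mut self, value: u16) -> bool {
        if self.len < self.buf.len() {
            self.buf[self.len] = value;
            self.len += 1;
            true
        } else {
            false
        }
    }
}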

We are writing our code much, much closer to the hardware than we're used to, so we need to be very aware of the specific capabilities of our target devices; for most microcontrollers that means a single-core CPU. So if you've ever written async Rust before that's good, but don't worry if you haven't, because while concurrency in the embedded space is very common, it is slightly different from what you might be used to.

Real-time systems for fun and profit

This brings us to a core concept of embedded programming: real-time systems. It's not an inherent requirement of embedded systems themselves that they be real-time, but it's often a requirement of the task they're used for. Think of any electronic device you've used that isn't your laptop or phone: it probably has buttons and some kind of display and performs a few different functions. One thing it doesn't do is show you a spinner when you press a button while you wait for it to finish some other task that's running in the background first. Embedded devices almost always require strict guarantees about when tasks must start and how long they will take. This is what we mean by real-time systems: they aren't always user-facing, but they are defined by having strict time allocations for each task and are thus deterministic. In real-time systems a late answer is considered a wrong answer.

How can we do this when we only have one CPU, no threads, and no OS at all? Good question! In their most basic form, all embedded systems are essentially an endless while loop in which some things occasionally happen. Any embedded system of any complexity whatsoever will need to do more than one thing, and so it will have something which performs the role of a scheduler in some shape or form. This can be as simple as a round-robin scheduler which just divides the time of each loop into slices and naïvely assigns each task some milliseconds, then suspends it and moves on to the next task, and so on. This can often be enough, especially if the system doesn't need to respond to external events very often. But it should be immediately obvious that some tasks are going to need more time than others, and it's not fair to give each task the same amount of time.
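
To make that a bit more concrete, here is a minimal sketch of that most basic form; the three task functions are hypothetical placeholders.

// Hypothetical tasks; in real firmware each would do some useful work.
fn poll_sensor() {}
fn update_display() {}
fn check_buttons() {}

// The endless loop at the heart of the system: every iteration each task
// gets its turn, in a fixed order, whether it has anything to do or not.
fn run() -> ! {
    loop {
        poll_sensor();
        update_display();
        check_buttons();
    }
}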

That concurrency I mentioned

A common alternative to round-robin scheduling is to use interrupts and exceptions (not the exceptions you're likely familiar with). These are hardware-supported mechanisms by which an external device (via a GPIO pin, say) or the processor itself can signal an event, and that signal generally causes the processor to pre-empt the currently running task in favour of a task triggered by the interrupt. In this way, and with the help of some concept of priority, tasks are more likely to get to run when they need to and can take as much time as they need. When no tasks are running the system can run a special idle task which does nothing except sit and wait for the next interrupt.
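
As a rough sketch of what that looks like in Rust, here's an interrupt handler plus an idle loop; GPIOTE is the nRF peripheral that turns pin changes into interrupts, the handler body is left empty, and the interrupt attribute comes from the device-specific PAC crate we'll meet properly in a moment.

use cortex_m::asm::wfi;
use nrf52840_pac::interrupt;

// Runs whenever the GPIOTE interrupt fires (a button press on a GPIO pin, say),
// pre-empting whatever the main code was doing at the time.
#[interrupt]
fn GPIOTE() {
    // handle the pin event here, then clear it so it doesn't immediately re-fire
}

// The "idle task": nothing to do but sleep until the next interrupt arrives.
fn idle() -> ! {
    loop {
        wfi();
    }
}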

Abstractions are always an answer

I said earlier that you need to be much more aware of the particular hardware you're targeting when writing embedded code. Most programming you're probably familiar with runs on some kind of x86_64 architecture, on something easily described as "a computer", and in a linux or windows environment. You've probably never even really thought about that, you've just written code and expected it to more or less work on any platform, because all x86-based computers are pretty similar and the OS papers over a lot of the remaining differences at a lower level before your code executes. But there are thousands of different SoC architectures and platforms, all with differences and idiosyncrasies that an embedded programmer has to be aware of and work directly with. You can't just write your code once and expect it to work everywhere anymore.

Do I really need to know what a memory register is?

It's obviously unreasonable to seriously expect embedded developers to know much about the inner workings of each individual microcontroller their code might run on. This is where the first level of libraries steps in. It's very common to have a "peripheral access layer" (in Rust terminology this is usually a PAC, or peripheral access crate), which is written for each individual chip or board and deals with chip-specific things such as which memory registers correspond to GPIO pin 7, for example. These crates generally don't include much logic, just mappings and consts and definitions.

In the case of the nrf52840 that would be the nrf52840-pac crate. This exposes loads of types such as EGU0, which contains a const PTR = 0x40014000. Now you don't need to know the register address yourself, you only need to access my_architecture_pac::EGU0, and something with a similar name should exist in many different PACs.
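
To give a feel for what code at this level looks like, here's a rough sketch of driving a GPIO pin directly through the PAC, which is more or less what a HAL will do for us later. Treat it as illustrative only: the exact register and field names depend on the PAC version.

use nrf52840_pac as pac;

// Set GPIO pin 17 on port 0 high by writing to P0's OUTSET register.
// The PAC supplies the peripheral addresses, so no magic numbers appear here.
fn pin17_high_via_pac() {
    let p = pac::Peripherals::take().unwrap();
    p.P0.outset.write(|w| w.pin17().set_bit());
}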

Open the pod bay doors, HAL

The second layer is the Hardware Abstraction Layer, and this is where the interesting stuff starts to happen, as the job of a HAL is to abstract away much of what would be chip-specific concerns in favour of a more common and generic interface. In embedded Rust there is a crate called embedded-hal which defines a lot of traits that a board-specific HAL will implement, such as embedded_hal::blocking::i2c::Read, allowing you to use the HAL to read and write I2C data without needing to know any particular implementation details. You will still need at least a passing knowledge of embedded hardware in general (and I2C in this case), but the HAL definitely saves you a lot of boilerplate. And of course it serves its main purpose, which is to present a more common and unified interface so that code can be written that might potentially run on several different devices without having to write multiple special cases.
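
As a small sketch of what that buys us, the function below can read from any I2C bus whose HAL implements the embedded-hal trait; the 0x23 device address and the two-byte read are made-up values for illustration.

use embedded_hal::blocking::i2c::Read;

// Generic over any I2C implementation: this code never mentions the nrf52840.
fn read_sensor<I2C, E>(i2c: &mut I2C) -> Result<[u8; 2], E>
where
    I2C: Read<Error = E>,
{
    let mut buf = [0u8; 2];
    i2c.read(0x23, &mut buf)?; // read two bytes from the device at address 0x23
    Ok(buf)
}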

Let's actually do something

The first step is to make sure our board works. So let's write a very simple program and flash it to the board.

Cargo.toml

[dependencies]
cortex-m = "0.7.7"
cortex-m-rt = "0.7.3"
embedded-hal = "0.2.7"
nrf52840-hal = "0.16.0"

The HAL crates we've talked about before, but the cortex-m crates here fill a similar role for the CPU architecture itself (the nrf52840 is built around an Arm Cortex-M4 core). We don't need to dwell too much on them at this stage, but one thing they provide us is the entry point mechanism, which is used instead of the more traditional "main function" that you're probably used to.

.cargo/config.toml

[build]
target = "thumbv7em-none-eabihf"

[target.'cfg(all(target_arch = "arm", target_os = "none"))']
rustflags = [
  "-C", "link-arg=-Tlink.x",
]

This file basically just tells cargo that we want to cross-compile by default for the thumbv7em-none-eabihf target rather than the one our laptop is running. The second section tells rustc to pass an extra flag to the linker, pointing it at the link.x linker script provided by cortex-m-rt. So far, so good, right?
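
(If you don't already have that target installed, a quick rustup target add thumbv7em-none-eabihf will sort it out.)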

Then you want to replace the contents of src/main.rs with the following:

#![no_main]
#![no_std]

use embedded_hal::digital::v2::OutputPin;

use nrf52840_hal as hal;

use nrf52840_hal::gpio::Level;

// panicking behavior
#[panic_handler]
fn panic(_: &core::panic::PanicInfo) -> ! {
    loop {
        cortex_m::asm::bkpt();
    }
}

#[cortex_m_rt::entry]
fn main() -> ! {
    let p = hal::pac::Peripherals::take().unwrap();
    let port0 = hal::gpio::p0::Parts::new(p.P0);
    let mut led = port0.p0_17.into_push_pull_output(Level::High);
    // set led low
    led.set_low().unwrap();
    loop {
        cortex_m::asm::nop();
    }
}

Okay, now we're finally writing embedded code. Right away you will notice some small differences from a normal main.rs, but let's go through it step by step.

#![no_main]
#![no_std]

We need to tell the compiler that we aren't using the standard library or the usual main function entry point, since they depend on an OS environment we won't have. This means we will need to provide our own entry point; more on that soon.

#[panic_handler]
fn panic(_: &core::panic::PanicInfo) -> ! {
    loop {
        cortex_m::asm::bkpt();
    }
}

One more consequence of the no_std environment is that panics are no longer handled for us by default, so we need to do that part ourselves. Luckily it isn't hard: in this example we basically just create an infinite loop that hits a native cortex-m breakpoint instruction.
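
(In a real project you'd probably pull in a ready-made handler such as the panic-halt crate rather than write this by hand, but it's nice to see that there's no magic involved.)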

#[cortex_m_rt::entry]
fn main() -> ! {

Right, here we go. The main function entry point I mentioned earlier. I won't go into too much of the specifics about why this is necessary; suffice it to say that a normal Rust program has some OS-related bootstrapping going on before it reaches your main function. Since we don't have any OS, we need to do that ourselves, in this case by using the entry macro from cortex-m-rt, which does it for us.

Of particular note here, and possibly unfamiliar, is the -> ! syntax. This is something you will start to see a lot in embedded programming, and it means a function which never returns.

let p = hal::pac::Peripherals::take().unwrap();
let port0 = hal::gpio::p0::Parts::new(p.P0);

The next embedded concept that is a little different from more "traditional" programming is how much everything revolves around memory registers. I will cover that a lot more in the next post, but you begin to see some of it here.

In an embedded context, "peripherals" refers to functions and components of the microcontroller, such as GPIO pins or RTCs. Each peripheral is interacted with via some dedicated memory registers, whether that's writing values to go out on a serial interface, reading values from a sensor, or even configuring features. Each peripheral's registers reside at a dedicated address, and you will need to look up the relevant datasheet to find that value, or use a PAC or HAL crate.

To use a peripheral, you need to take ownership of that memory. Which, since this is Rust, means you can't own it twice. This is what we see happening in the two lines above: Peripherals::take() hands us ownership of the chip's peripherals exactly once, and p.P0 is the singleton from the PAC representing the P0 GPIO port (and the registers behind it), which we then consume with Parts::new() to split the port into its individual pins. This is a bit different from how we might be used to doing things: we can't copy this struct, or create a second one, which can have wide-ranging consequences for how we might architect our program.
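
Just to drive the point home, here's a quick sketch of that singleton behaviour (reusing the hal alias from our main.rs): asking for the peripherals a second time doesn't hand out a second owner.

// take() only hands the peripherals out once; a second call returns None,
// so there can never be two owners of the same hardware.
let p = hal::pac::Peripherals::take().unwrap();
let second = hal::pac::Peripherals::take();
assert!(second.is_none());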

    let mut led = port0.p0_17.into_push_pull_output(Level::High);
    // set led low
    led.set_low().unwrap();
    loop {
        cortex_m::asm::nop();
    }
}

Finally, we're doing something! We use the port0 interface we created to turn pin 17, which is the pin the built-in LED is connected to, into a push-pull output that starts out at Level::High. I won't go into what push-pull outputs (or pull-up and pull-down resistors) are, it's a bit out of scope, but there are plenty of good explanations online. All we need to know is that, slightly counter-intuitively, the LED is wired active low: driving the pin high leaves it off, and driving it low, as we do on the next line, turns it on.

Then, finally, we create a loop so that the function never exits (remember that -> !), and in the loop we run a cortex-m built-in "no-op" instruction. Strictly speaking this doesn't put the CPU to sleep, it just burns cycles doing nothing, but it keeps us safely inside the loop.
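
If we wanted the chip to genuinely sleep between events we could wait for interrupts instead, something like the sketch below (it behaves the same in this program, since we haven't enabled any interrupts).

loop {
    cortex_m::asm::wfi(); // sleep until the next interrupt arrives
}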

Then we will need to actually flash it to the board and run it. I'm a fan of probe-rs, which provides two tools: cargo-flash and cargo-embed. We'll be using cargo-flash in this example. Once it's installed, add the following to .cargo/config.toml:

[target.thumbv7em-none-eabihf]
runner = "cargo flash --chip nRF52840_xxAA"

And lastly, run it:

cargo run

And we should see one of the LEDs light up! Great work :)

In part 2 we will try connecting up the light sensor.