Tock OS Book
Getting Started
The book includes a quick start guide.
Tock Workshop Courses
For a more in-depth, walkthrough-style lesson, look here.
Development Guides
The book also has walkthroughs on how to implement different features in Tock OS.
Hands-on Guides
This portion of the book includes workshops and tutorials to teach how to use and develop with Tock, and is divided into two sections: the course and a series of mini tutorials. The course is a good place to start, and provides a structured introduction to Tock that should take a few hours to complete (it was designed for a half day workshop). The tutorials are smaller examples that highlight specific features.
Tock Course
In this hands-on guide, we will look at some of the high-level services provided by Tock. We will start with an understanding of the OS and its programming environment. Then we'll look at how a process management application can help with remotely debugging, diagnosing, and fixing a resource-intensive app over the network. The last part of the tutorial is a bit more free-form, inviting attendees to further explore the networking and application features of Tock, or to dig into the kernel a bit and explore how to enhance and extend it.
This course assumes some experience programming embedded devices and fluency in C. It assumes no knowledge of Rust, although knowing Rust will allow you to be more creative during the kernel exploration at the end.
Tock Mini Tutorials
These tutorials feature specific examples of Tock applications. They can be completed after the course to learn about different capabilities of Tock apps.
Getting Started
This getting started guide covers how to get started using Tock.
Hardware
To really be able to use Tock and get a feel for the operating system, you will need a hardware platform that Tock supports. The TockOS Hardware page includes a list of supported hardware boards. You can also view the boards folder to see which platforms are supported.
As of February 2021, this getting started guide is based around five hardware
platforms. Steps for each of these platforms are explicitly described here.
Other platforms will work for Tock, but you may need to reference the README
files in tock/boards/
for specific setup information. The five boards are:
- Hail
- imix
- nRF52840dk (PCA10056)
- Arduino Nano 33 BLE (regular or Sense version)
- BBC Micro:bit v2
These boards are reasonably well supported, but note that other boards in Tock may have some "quirks" around what is implemented (or not), and exactly how to load code and test that it is working. This guide tries to be general, and Tock generally tries to follow certain conventions, but the project is under active development and new boards are added rapidly. You should definitely consult the board-specific README to see if there are any board-specific details you should be aware of.
Host Machine Setup
You can either download a virtual machine with the development environment pre-installed, or, if you have a Linux or OS X workstation, you may install the development environment natively. Using a virtual machine is quicker and easier to set up, while installing natively will yield the most comfortable development environment and is better for long term use.
Virtual Machine
If you're comfortable working inside a Debian virtual machine, you can download
an image with all of the dependencies already installed
here or
here. Using curl to download the image is recommended, but your browser should be able to download it as well:
$ curl -O <url>
With the virtual machine image downloaded, you can run it with VirtualBox or VMWare:
- VirtualBox users: File → Import Appliance...,
- VMWare users: File → Open...
The VM account is "tock" with password "tock". Feel free to customize it with whichever editors, window managers, etc. you like.
If the Host OS is Linux, you may need to add your user to the vboxusers group on your machine in order to connect the hardware boards to the virtual machine.
Native Installation
If you choose to install the development environment natively on an existing operating system install, you will need the following software:
- Command line utilities: curl, make, git, python (version 3) and pip3.
- Clone the Tock kernel repository.
$ git clone https://github.com/tock/tock
- rustup. This tool helps manage installations of the Rust compiler and related tools.
$ curl https://sh.rustup.rs -sSf | sh
- arm-none-eabi toolchain (version >= 5.2). This enables you to compile apps written in C for Cortex-M boards.
# mac
$ brew tap ARMmbed/homebrew-formulae && brew update && brew install ARMmbed/homebrew-formulae/arm-none-eabi-gcc
# linux
$ sudo apt install gcc-arm-none-eabi
- Optional: riscv64-unknown-elf toolchain for compiling C apps for RISC-V platforms. Getting this toolchain varies platform-to-platform.
# mac
$ brew tap riscv/riscv && brew install riscv-gnu-toolchain --with-multilib
# linux
$ sudo apt install gcc-riscv64-unknown-elf
- tockloader. This is an all-in-one tool for programming boards and using Tock.
$ pip3 install -U --user tockloader
Note: On MacOS, you may need to add tockloader to your path. If you cannot run it after installation, run the following:
$ export PATH=$HOME/Library/Python/3.9/bin/:$PATH
Similarly, on Linux distributions, this will typically install to $HOME/.local/bin, and you may need to add that to your $PATH if not already present:
$ PATH=$HOME/.local/bin:$PATH
Testing You Can Compile the Kernel
To test if your environment is working enough to compile Tock, go to the
tock/boards/
directory and then to the board folder for the hardware you have
(e.g. tock/boards/imix
for imix). Then run make
in that directory. This
should compile the kernel. It may need to compile several supporting libraries
first (so may take 30 seconds or so the first time). You should see output like
this:
$ cd tock/boards/imix
$ make
Compiling tock-cells v0.1.0 (/Users/bradjc/git/tock/libraries/tock-cells)
Compiling tock-registers v0.5.0 (/Users/bradjc/git/tock/libraries/tock-register-interface)
Compiling enum_primitive v0.1.0 (/Users/bradjc/git/tock/libraries/enum_primitive)
Compiling tock-rt0 v0.1.0 (/Users/bradjc/git/tock/libraries/tock-rt0)
Compiling imix v0.1.0 (/Users/bradjc/git/tock/boards/imix)
Compiling kernel v0.1.0 (/Users/bradjc/git/tock/kernel)
Compiling cortexm v0.1.0 (/Users/bradjc/git/tock/arch/cortex-m)
Compiling capsules v0.1.0 (/Users/bradjc/git/tock/capsules)
Compiling cortexm4 v0.1.0 (/Users/bradjc/git/tock/arch/cortex-m4)
Compiling sam4l v0.1.0 (/Users/bradjc/git/tock/chips/sam4l)
Compiling components v0.1.0 (/Users/bradjc/git/tock/boards/components)
Finished release [optimized + debuginfo] target(s) in 28.67s
text data bss dec hex filename
165376 3272 54072 222720 36600 /Users/bradjc/git/tock/target/thumbv7em-none-eabi/release/imix
Compiling typenum v1.11.2
Compiling byteorder v1.3.4
Compiling byte-tools v0.3.1
Compiling fake-simd v0.1.2
Compiling opaque-debug v0.2.3
Compiling block-padding v0.1.5
Compiling generic-array v0.12.3
Compiling block-buffer v0.7.3
Compiling digest v0.8.1
Compiling sha2 v0.8.1
Compiling sha256sum v0.1.0 (/Users/bradjc/git/tock/tools/sha256sum)
6fa1b0d8e224e775d08e8b58c6c521c7b51fb0332b0ab5031fdec2bd612c907f /Users/bradjc/git/tock/target/thumbv7em-none-eabi/release/imix.bin
You can check that tockloader is installed by running:
$ tockloader --help
If either of these steps fails, please double-check that you followed the environment setup instructions above.
Getting the Hardware Connected and Setup
Plug your hardware board into your computer. Generally this requires a micro USB cable, but your board may be different.
Note! Some boards have multiple USB ports.
Some boards have two USB ports, where one is generally for debugging, and the other allows the board to act as any USB peripheral. You will want to connect using the "debug" port.
Some example boards:
- imix: Use the port labeled DEBUG.
- nRF52 development boards: Use the port on the left, on the skinny side of the board.
The board should appear as a regular serial device (e.g. /dev/tty.usbserial-c098e5130006 on my Mac or /dev/ttyUSB0 on my Linux box). This may require some setup; see the "one-time fixups" box.
One-Time Fixups
On Linux, you might need to give your user access to the serial port used by the board. If you get permission errors or you cannot access the serial port, this is likely the issue.
You can fix this by setting up a udev rule to set the permissions correctly for the serial device when it is attached. You only need to run the command below for your specific board, but if you don't know which one to use, running both is totally fine, and will set things up in case you get a different hardware board!
$ sudo bash -c "echo 'ATTRS{idVendor}==\"0403\", ATTRS{idProduct}==\"6015\", MODE=\"0666\"' > /etc/udev/rules.d/99-ftdi.rules"
$ sudo bash -c "echo 'ATTRS{idVendor}==\"2341\", ATTRS{idProduct}==\"005a\", MODE=\"0666\"' > /etc/udev/rules.d/98-arduino.rules"
Afterwards, detach and re-attach the board to reload the rule.
With a virtual machine, you might need to attach the USB device to the VM. To do so, after plugging in the board, select in the VirtualBox/VMWare menu bar:
Devices -> USB Devices -> [The name of your board]
If you aren't sure which board to select, it is often easiest to unplug and re-plug the board and see which entry is removed and then added.
If this generates an error, often unplugging/replugging fixes it. You can also create a rule in the VM USB settings which will auto-attach the board to the VM.
With Windows Subsystem for Linux (WSL), the serial device parameters stored in the FTDI chip do not seem to get passed to Ubuntu. Plus, WSL enumerates every possible serial device. Therefore, tockloader cannot automatically guess which serial port is the correct one, and there are a lot to choose from.
You will need to open Device Manager on Windows, and find which COM port the Tock board is using. It will likely be called "USB Serial Port" and be listed as an FTDI device. The COM number will match what is used in WSL. For example, COM9 is /dev/ttyS9 on WSL.
To use tockloader you should be able to specify the port manually. For example: tockloader --port /dev/ttyS9 list.
One Time Board Setup
If you have a Hail, imix, or nRF52840dk, please skip to the next section.
If you have an Arduino Nano 33 BLE (Sense or regular), you need to update the bootloader on the board to the Tock bootloader. Please follow the bootloader update instructions.
If you have a Micro:bit v2, then you need to load the Tock bootloader. Please follow the bootloader installation instructions.
Test The Board
With the board connected, you should be able to use tockloader to interact with the board. For example, to retrieve serial UART data from the board, run tockloader listen, and you should see something like:
$ tockloader listen
No device name specified. Using default "tock"
Using "/dev/ttyUSB0 - Imix - TockOS"
Listening for serial output.
Initialization complete. Entering main loop
You may also need to reset the board (by pressing the reset button on the board) to see the message. You may also not see any output if the Tock kernel has not been flashed yet.
You can also see if any applications are installed with tockloader list:
$ tockloader list
[INFO ] No device name specified. Using default name "tock".
[INFO ] Using "/dev/cu.usbmodem14101 - Nano 33 BLE - TockOS".
[INFO ] Paused an active tockloader listen in another session.
[INFO ] Waiting for the bootloader to start
[INFO ] No found apps.
[INFO ] Finished in 2.928 seconds
[INFO ] Resumed other tockloader listen session
If these commands fail you may not have installed Tockloader, or you may need to update to a later version of Tockloader. There may be other issues as well, and you can ask on Slack if you need help.
Flash the kernel
Now that the board is connected and you have verified that the kernel compiles (from the steps above), we can flash the board with the latest Tock kernel:
$ cd boards/<your board>
$ make
Boards provide the target make install
as the recommended way to load the
kernel.
$ make install
You can also look at the board's README for more details.
Install Some Applications
We have the kernel flashed, but the kernel doesn't actually do anything. Applications do! To load applications, we are going to use tockloader.
Loading Pre-built Applications
We're going to install some pre-built applications, but first, let's make sure we're in a clean state, in case your board already has some applications installed. This command removes any processes that may have already been installed.
$ tockloader erase-apps
Now, let's install two pre-compiled example apps. Remember, you may need to specify which board you are using and how to communicate with it for all of these commands. If you are using Hail or imix you will not have to.
$ tockloader install https://www.tockos.org/assets/tabs/blink.tab
The install subcommand takes a path or URL to a TAB (Tock Application Binary) file to install.
The board should restart and the user LED should start blinking. Let's also install a simple "Hello World" application:
$ tockloader install https://www.tockos.org/assets/tabs/c_hello.tab
If you now run tockloader listen
you should be able to see the output of the
Hello World! application. You may need to manually reset the board for this to
happen.
$ tockloader listen
[INFO ] No device name specified. Using default name "tock".
[INFO ] Using "/dev/cu.usbserial-c098e513000a - Hail IoT Module - TockOS".
[INFO ] Listening for serial output.
Initialization complete. Entering main loop.
Hello World!
␀
Uninstalling and Installing More Apps
Let's check what's on the board right now:
$ tockloader list
...
┌──────────────────────────────────────────────────┐
│ App 0 |
└──────────────────────────────────────────────────┘
Name: blink
Enabled: True
Sticky: False
Total Size in Flash: 2048 bytes
┌──────────────────────────────────────────────────┐
│ App 1 |
└──────────────────────────────────────────────────┘
Name: c_hello
Enabled: True
Sticky: False
Total Size in Flash: 1024 bytes
[INFO ] Finished in 2.939 seconds
As you can see, the apps are still installed on the board. We can remove apps with the following command:
$ tockloader uninstall
Following the prompt, if you remove the blink app, the LED will stop blinking; however, the console will still print Hello World.
Now let's try adding a more interesting app:
$ tockloader install https://www.tockos.org/assets/tabs/sensors.tab
The sensors
app will automatically discover all available sensors, sample them
once a second, and print the results.
$ tockloader listen
[INFO ] No device name specified. Using default name "tock".
[INFO ] Using "/dev/cu.usbserial-c098e513000a - Hail IoT Module - TockOS".
[INFO ] Listening for serial output.
Initialization complete. Entering main loop.
[Sensors] Starting Sensors App.
Hello World!
␀[Sensors] All available sensors on the platform will be sampled.
ISL29035: Light Intensity: 218
Temperature: 28 deg C
Humidity: 42%
FXOS8700CQ: X: -112
FXOS8700CQ: Y: 23
FXOS8700CQ: Z: 987
Compiling and Loading Applications
There are many more example applications in the libtock-c
repository that you
can use. Let's try installing the ROT13 cipher pair. These two applications use
inter-process communication (IPC) to implement a
ROT13 cipher.
Start by uninstalling any applications:
$ tockloader uninstall
Get the libtock-c repository:
$ git clone https://github.com/tock/libtock-c
Build the rot13_client application and install it:
$ cd libtock-c/examples/rot13_client
$ make
$ tockloader install
Then make and install the rot13_service application:
$ cd ../rot13_service
$ tockloader install --make
Then you should be able to see the output:
$ tockloader listen
[INFO ] No device name specified. Using default name "tock".
[INFO ] Using "/dev/cu.usbserial-c098e5130152 - Hail IoT Module - TockOS".
[INFO ] Listening for serial output.
Initialization complete. Entering main loop.
12: Uryyb Jbeyq!
12: Hello World!
12: Uryyb Jbeyq!
12: Hello World!
12: Uryyb Jbeyq!
12: Hello World!
12: Uryyb Jbeyq!
12: Hello World!
Note: Tock platforms are limited in the number of apps they can load and run. However, it is possible to install more apps than this limit, since tockloader is (currently) unaware of this limitation and will allow you to load additional apps. The kernel, however, will only load the first apps until the limit is reached.
Note about Identifying Boards
Tockloader tries to automatically identify which board is attached to make this process simple. This means for many boards (particularly the ones listed at the top of this guide) tockloader should "just work".
However, for some boards tockloader does not have a good way to identify which
board is attached, and requires that you manually specify which board you are
trying to program. This can be done with the --board
argument. For example, if
you have an nrf52dk or nrf52840dk, you would run Tockloader like:
$ tockloader <command> --board nrf52dk --jlink
The --jlink
flag tells tockloader to use the JLink JTAG tool to communicate
with the board (this mirrors using make flash
above). Some boards support
OpenOCD, in which case you would pass --openocd
instead.
To see a list of boards that tockloader supports, you can run
tockloader list-known-boards
. If you have an imix or Hail board, you should
not need to specify the board.
Note: a board listed in tockloader list-known-boards means there are default settings hardcoded into tockloader's source on how to support those boards. However, all of those settings can be passed in via command-line parameters for boards that tockloader does not know about. See tockloader --help for more information.
Familiarize Yourself with tockloader Commands
The tockloader
tool is a useful and versatile tool for managing and installing
applications on Tock. It supports a number of commands, and a more complete list
can be found in the tockloader repository, located at
github.com/tock/tockloader. Below is
a list of the more useful and important commands for programming and querying a
board.
tockloader install
This is the main tockloader command, used to load Tock applications onto a
board. By default, tockloader install
adds the new application, but does not
erase any others, replacing any already existing application with the same name.
Use the --no-replace
flag to install multiple copies of the same app. To
install an app, either specify the tab
file as an argument, or navigate to the
app's source directory, build it (probably using make
), then issue the install
command:
$ tockloader install
Tip: You can add the --make flag to have tockloader automatically run make before installing, i.e. tockloader install --make.
Tip: You can add the --erase flag to have tockloader automatically remove other applications when installing a new one.
tockloader uninstall [application name(s)]
Removes one or more applications from the board by name.
tockloader erase-apps
Removes all applications from the board.
tockloader list
Prints basic information about the apps currently loaded onto the board.
tockloader info
Shows all properties of the board, including information about currently loaded applications, their sizes and versions, and any set attributes.
tockloader listen
This command prints output from Tock apps to the terminal. It listens via UART, and will print out anything written to stdout/stderr from a board.
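For example, anything an app writes to stdout shows up in the listen output. A minimal libtock-c app, essentially what the c_hello example does (the exact message here is illustrative), looks like:
#include <stdio.h>

int main(void) {
  // printf() output goes to the board's console UART and appears in
  // tockloader listen.
  printf("Hello from my app!\r\n");
  return 0;
}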
Tip: As a long-running command, listen interacts with other tockloader sessions. You can leave a terminal window open and listening. If another tockloader process needs access to the board (e.g. to install an app update), tockloader will automatically pause and resume listening.
tockloader flash
Loads binaries onto hardware platforms that are running a compatible bootloader.
This is used by the Tock Make system when kernel binaries are programmed to the
board with make program
.
Tock Course
The Tock course includes several different modules that guide you through various aspects of Tock and Tock applications. Each module is designed to be fairly standalone such that a full course can be composed of different modules depending on the interests and backgrounds of those doing the course. You should be able to do the lessons that are of interest to you.
Each module begins with a description of the lesson, and then includes steps to follow. The modules cover both programming in the kernel as well as applications.
Setup and Preparation
You should follow the getting started guide to get your development setup and ensure you can communicate with the hardware.
Compile the Kernel
All of the hands-on exercises will be done within the main Tock repository and
the libtock-c
or libtock-rs
userspace repositories. To work on the kernel,
pop open a terminal, and navigate to the repository. If you're using the VM,
that'll be:
$ cd ~/tock
Make sure your Tock repository is up to date:
$ git pull
This will fetch the latest commit from the Tock kernel repository. Individual modules may ask you to check out specific commits or branches. In this case, be sure to have those revisions checked out instead.
Build the kernel
To build the kernel for your board, navigate to the boards/$YOUR_BOARD
subdirectory. From within this subdirectory, a simple make
should be
sufficient to build a kernel. For instance, for the Nordic nRF52840DK board, run
the following:
$ cd boards/nordic/nrf52840dk
$ make
Compiling nrf52840 v0.1.0 (/home/tock/tock/chips/nrf52840)
Compiling components v0.1.0 (/home/tock/tock/boards/components)
Compiling nrf52_components v0.1.0 (/home/tock/tock/boards/nordic/nrf52_components)
Finished release [optimized + debuginfo] target(s) in 24.07s
text data bss dec hex filename
167940 4 28592 196536 2ffb8 /home/tock/tock/target/thumbv7em-none-eabi/release/nrf52840dk
88302039a5698ab37d159ec494524cc466a0da2e9938940d2930d582404dc67a /home/tock/tock/target/thumbv7em-none-eabi/release/nrf52840dk.bin
If this is the first time you are trying to make the kernel, the build system will use cargo and rustup to install various Tock dependencies.
Programming the kernel and interfacing with your board
Boards may require slightly different procedures for programming the Tock kernel.
If you are following along with the provided VM, do not forget to pass your
board's USB interface(s) to the VM. In VirtualBox, this should work by selecting
"Devices > USB" and then enabling the respective device (for example
SEGGER J-Link [0001]
).
Nordic nRF52840DK
The Nordic nRF52840DK development board contains an integrated SEGGER J-Link JTAG debugger, which can be used to program and debug the nRF52840 microcontroller. It is also connected to the nRF's UART console and exposes this as a console device.
To flash the Tock kernel and applications through this interface, you will need to have the SEGGER J-Link tools installed. If you are using a VM, we provide a script you can execute to install these tools. TODO!
With the J-Link software installed, we can use Tockloader to flash the Tock kernel onto this board.
$ make install
[INFO ] Using settings from KNOWN_BOARDS["nrf52dk"]
[STATUS ] Flashing binary to board...
[INFO ] Finished in 7.645 seconds
Congrats! Tock should be running on your board now.
To verify that Tock runs, try to connect to your nRF's serial console.
Tockloader provides a tockloader listen
command for opening a serial
connection. In case you have multiple serial devices attached to your computer,
you may need to select the appropriate J-Link device:
$ tockloader listen
[INFO ] No device name specified. Using default name "tock".
[INFO ] No serial port with device name "tock" found.
[INFO ] Found 2 serial ports.
Multiple serial port options found. Which would you like to use?
[0] /dev/ttyACM1 - J-Link - CDC
[1] /dev/ttyACM0 - L830-EB - Fibocom L830-EB
Which option? [0] 0
[INFO ] Using "/dev/ttyACM1 - J-Link - CDC".
[INFO ] Listening for serial output.
Initialization complete. Entering main loop
NRF52 HW INFO: Variant: AAC0, Part: N52840, Package: QI, Ram: K256, Flash: K1024
tock$
In case you don't see any text printed after "Listening for serial output", try
hitting [ENTER]
a few times. You should be greeted with a tock$
shell
prompt. You can use the reset
command to restart your nRF chip and see the
above greeting.
In case you want to use a different serial console monitor, you may need to identify the serial console device created for your board. On Linux, you can identify the J-Link debugger's serial port by running:
$ dmesg -Hw | grep tty
< ... some output ... >
< plug in the nRF52840DKs front USB (not "nRF USB") >
[ +0.003233] cdc_acm 1-3:1.0: ttyACM1: USB ACM device
In this case, the nRF's serial console can be accessed as /dev/ttyACM1.
Security USB Key with Tock
This module and submodules will walk you through how to create a USB security key using Tock.
Hardware Notes
To fully follow this guide you will need a hardware board that supports a peripheral USB port (i.e. where the microcontroller has USB hardware support). We recommend using the nRF52840dk.
Compatible boards:
- nRF52840dk
- imix
You'll also need two USB cables, one for programming the board and the other for attaching it as a USB device.
Goal
Our goal is to create a standards-compliant HOTP USB key that we can use with a demo website. The key will support enrolling new URL domains and providing secure authentication.
The main logic of the key will be implemented as a userspace program. That userspace app will use the kernel to decrypt the shared key for each domain, send the HMAC output as a USB keyboard device, and store each encrypted key in a nonvolatile key-value storage.
nRF52840dk Hardware Setup
If you are using the nRF52840dk, there are a couple of configurations on the board that you should double-check:
- The "Power" switch on the top left should be set to "On".
- The "nRF power source" switch in the top middle of the board should be set to "VDD".
- The "nRF ONLY | DEFAULT" switch on the bottom right should be set to "DEFAULT".
For now, you should plug one USB cable into the top of the board for programming (NOT into the "nRF USB" port on the side). We'll attach the other USB cable later.
Stages
This module is broken into four stages:
- Configuring the kernel to provide necessary syscall drivers:
- Creating an HOTP userspace application.
- Creating an in-kernel encryption oracle.
- Enforcing access control restrictions to the oracle.
Implementing a USB Keyboard Device
The Tock kernel supports implementing a USB device, and we can set up our kernel so that it is recognized as a USB keyboard device. This is necessary to enable the HOTP key to send the generated code to the computer when logging in.
Configuring the Kernel
We need to set up our kernel to include USB support, and particularly the USB HID (keyboard) profile. This requires modifying the board's main.rs file. You should add the following setup near the end of main.rs, just before creating the Platform struct.
You first need to create three strings that will represent this device to the USB host.
// Create the strings we include in the USB descriptor.
let strings = static_init!(
    [&str; 3],
    [
        "Nordic Semiconductor", // Manufacturer
        "nRF52840dk - TockOS",  // Product
        "serial0001",           // Serial number
    ]
);
Then we need to create the keyboard USB capsule in the board. This example works for the nRF52840dk. You will need to modify the types if you are using a different microcontroller.
let (keyboard_hid, keyboard_hid_driver) = components::keyboard_hid::KeyboardHidComponent::new(
    board_kernel,
    capsules_core::driver::NUM::KeyboardHid as usize,
    &nrf52840_peripherals.usbd,
    0x1915, // Nordic Semiconductor
    0x503a,
    strings,
)
.finalize(components::keyboard_hid_component_static!(
    nrf52840::usbd::Usbd
));
Towards the end of the main.rs, you need to enable the USB HID driver:
keyboard_hid.enable();
keyboard_hid.attach();
Finally, we need to add the driver to the Platform
struct:
pub struct Platform {
    ...
    keyboard_hid_driver: &'static capsules_extra::usb_hid_driver::UsbHidDriver<
        'static,
        capsules_extra::usb::keyboard_hid::KeyboardHid<'static, nrf52840::usbd::Usbd<'static>>,
    >,
    ...
}

let platform = Platform {
    ...
    keyboard_hid_driver,
    ...
};
and map syscalls from userspace to our kernel driver:
// Keyboard HID Driver Num:
const KEYBOARD_HID_DRIVER_NUM: usize = capsules_core::driver::NUM::KeyboardHid as usize;

impl SyscallDriverLookup for Platform {
    fn with_driver<F, R>(&self, driver_num: usize, f: F) -> R
    where
        F: FnOnce(Option<&dyn kernel::syscall::SyscallDriver>) -> R,
    {
        match driver_num {
            ...
            KEYBOARD_HID_DRIVER_NUM => f(Some(self.keyboard_hid_driver)),
            ...
        }
    }
}
Now you should be able to compile the kernel and load it on to your board.
cd tock/boards/...
make install
Connecting the USB Device
We will use both USB cables on our hardware. The main USB header is for debugging and programming. The USB header connected directly to the microcontroller will be the USB device. Ensure both USB devices are connected to your computer.
Testing the USB Keyboard
To test the USB keyboard device, we will use a simple userspace application. libtock-c includes an example app which simply prints a string via USB keyboard when a button is pressed.
cd libtock-c/examples/tests/keyboard_hid
make
tockloader install
Position your cursor somewhere benign, like a new terminal. Then press a button on the board.
Checkpoint: You should see a welcome message from your hardware!
Using HMAC-SHA256 in Userspace
Our next task is to provide an HMAC engine for our HOTP application to use. Tock already includes HMAC-SHA256 as a capsule within the kernel; we just need to expose it to userspace.
Configuring the Kernel
First we need to use components to instantiate a software implementation of SHA256 and HMAC-SHA256. Add this to your main.rs file.
//--------------------------------------------------------------------------
// HMAC-SHA256
//--------------------------------------------------------------------------

let sha256_sw = components::sha::ShaSoftware256Component::new()
    .finalize(components::sha_software_256_component_static!());

let hmac_sha256_sw = components::hmac::HmacSha256SoftwareComponent::new(sha256_sw).finalize(
    components::hmac_sha256_software_component_static!(capsules_extra::sha256::Sha256Software),
);

let hmac = components::hmac::HmacComponent::new(
    board_kernel,
    capsules_extra::hmac::DRIVER_NUM,
    hmac_sha256_sw,
)
.finalize(components::hmac_component_static!(
    capsules_extra::hmac_sha256::HmacSha256Software<capsules_extra::sha256::Sha256Software>,
    32
));
Then add these capsules to the Platform
struct:
pub struct Platform {
    ...
    hmac: &'static capsules_extra::hmac::HmacDriver<
        'static,
        capsules_extra::hmac_sha256::HmacSha256Software<
            'static,
            capsules_extra::sha256::Sha256Software<'static>,
        >,
        32,
    >,
    ...
}

let platform = Platform {
    ...
    hmac,
    ...
};
And make them accessible to userspace by adding to the with_driver
function:
impl SyscallDriverLookup for Platform {
    fn with_driver<F, R>(&self, driver_num: usize, f: F) -> R
    where
        F: FnOnce(Option<&dyn kernel::syscall::SyscallDriver>) -> R,
    {
        match driver_num {
            ...
            capsules_extra::hmac::DRIVER_NUM => f(Some(self.hmac)),
            ...
        }
    }
}
Testing
You should be able to install the libtock-c/examples/tests/hmac
app and run
it:
cd libtock-c/examples/tests/hmac
make
tockloader install
Checkpoint: HMAC is now accessible to userspace!
Using Nonvolatile Application State in Userspace
When we use the HOTP application to store new keys, we want those keys to be persistent across reboots. That is, if we unplug the USB key, we would like our saved keys to still be accessible when we plug the key back in.
To enable this, we are using the app_state
capsule. This allows userspace
applications to edit their own flash region. We will use that flash region to
save our known keys.
Configuring the Kernel
Again we will use components to add app_state to the kernel. To add the proper drivers, include this in the main.rs file:
//--------------------------------------------------------------------------
// APP FLASH
//--------------------------------------------------------------------------

let mux_flash = components::flash::FlashMuxComponent::new(&base_peripherals.nvmc).finalize(
    components::flash_mux_component_static!(nrf52840::nvmc::Nvmc),
);

let virtual_app_flash = components::flash::FlashUserComponent::new(mux_flash).finalize(
    components::flash_user_component_static!(nrf52840::nvmc::Nvmc),
);

let app_flash = components::app_flash_driver::AppFlashComponent::new(
    board_kernel,
    capsules_extra::app_flash_driver::DRIVER_NUM,
    virtual_app_flash,
)
.finalize(components::app_flash_component_static!(
    capsules_core::virtualizers::virtual_flash::FlashUser<'static, nrf52840::nvmc::Nvmc>,
    512
));
Then add these capsules to the Platform
struct:
pub struct Platform {
    ...
    app_flash: &'static capsules_extra::app_flash_driver::AppFlash<'static>,
    ...
}

let platform = Platform {
    ...
    app_flash,
    ...
};
And make them accessible to userspace by adding to the with_driver
function:
impl SyscallDriverLookup for Platform {
    fn with_driver<F, R>(&self, driver_num: usize, f: F) -> R
    where
        F: FnOnce(Option<&dyn kernel::syscall::SyscallDriver>) -> R,
    {
        match driver_num {
            ...
            capsules_extra::app_flash_driver::DRIVER_NUM => f(Some(self.app_flash)),
            ...
        }
    }
}
Checkpoint: App Flash is now accessible to userspace!
HOTP Application
The motivating example for this entire tutorial is the creation of a USB security key: a USB device that can be connected to your computer and authenticate you to some service. One open standard for implementing such keys is HMAC-based One-Time Password (HOTP). It generates the 6 to 8 digit numeric codes which are used as a second-factor for some websites.
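To make the algorithm concrete, here is a rough sketch of the RFC 4226 "dynamic truncation" step that turns an HMAC digest into a short numeric code. The function name and parameters are illustrative and not part of the starter code; the starter app computes the HMAC over the big-endian counter using the kernel's HMAC-SHA256 driver, so the digest is 32 bytes.
#include <stdint.h>

// Illustrative sketch of RFC 4226 dynamic truncation: turn an HMAC digest
// into a `digits`-long decimal code.
static uint32_t hotp_truncate(const uint8_t* hmac, int hmac_len, int digits) {
  // The low 4 bits of the last byte pick an offset into the digest.
  int offset = hmac[hmac_len - 1] & 0x0f;

  // Take 4 bytes starting at that offset and clear the top bit.
  uint32_t binary = ((uint32_t)(hmac[offset] & 0x7f) << 24)
                  | ((uint32_t)hmac[offset + 1] << 16)
                  | ((uint32_t)hmac[offset + 2] << 8)
                  | ((uint32_t)hmac[offset + 3]);

  // Keep only the requested number of decimal digits (6 to 8).
  uint32_t modulus = 1;
  for (int i = 0; i < digits; i++) modulus *= 10;
  return binary % modulus;
}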
The crypto for implementing HOTP has already been created (HMAC and SHA256), so you certainly don't have to be an expert in cryptography to make this application work. We have actually implemented the software for generating HOTP codes as well. Instead, you will focus on improving that code as a demonstration of Tock and its features.
On the application side, we'll start with a basic HOTP application which has a pre-compiled HOTP secret key. Milestone one will be improving that application to take user input to reconfigure the HOTP secret. Milestone two will be adding the ability to persistently store the HOTP information so it is remembered across resets and power cycles. Finally, milestone three will be adding the ability to handle multiple HOTP secrets simultaneously.
The application doesn't just calculate HOTP codes, it implements a USB HID device as well. This means that when plugged in through the proper USB port, it appears as an additional keyboard to your computer and is capable of entering text.
We have provided starter code as well as completed code for each of the milestones. If you're facing some bugs which are limiting your progress, you can reference or even wholesale copy a milestone in order to advance to the next parts of the tutorial.
Applications in Tock
A few quick details on applications in Tock.
Applications in Tock look much closer to applications on traditional OSes than to normal embedded software. They are compiled separately from the kernel and loaded separately onto the hardware. They can be started or stopped individually and can be removed from the hardware individually. Importantly for later in this tutorial, the kernel is really in full control here and can decide which applications to run and what permissions they should be given.
Applications make requests from the OS kernel through system calls, but for the
purposes of this part of the tutorial, those system calls are wrapped in calls
to driver libraries. The most important aspect though is that results from
system calls never interrupt a running application. The application must yield
to receive callbacks. Again, this is frequently hidden within synchronous
drivers, but our application code will have a yield
in the main loop as well,
where it waits for button presses.
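As a minimal illustration of this model, a button-driven main loop in a libtock-c app might look like the following. It assumes the classic libtock-c helpers (button_subscribe, button_enable_interrupt, yield); exact names and signatures vary between libtock-c versions.
#include <stdbool.h>
#include <button.h>
#include <tock.h>

static bool button_pressed = false;

// Upcall: runs only after the app yields; it just records the event.
static void button_upcall(int btn_num, int pressed, int arg2, void* ud) {
  if (pressed) button_pressed = true;
}

int main(void) {
  button_subscribe(button_upcall, NULL);
  button_enable_interrupt(0);   // "Button 1" on the board is index 0.

  while (true) {
    yield();                    // Block until some upcall fires.
    if (button_pressed) {
      button_pressed = false;
      // ... handle the button press (e.g. type the next HOTP code) ...
    }
  }
}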
The tool for interacting with Tock applications is called Tockloader
. It is a
python package capable of loading applications onto a board, inspecting
applications on a board, modifying application binaries before they are loaded
on a board, and opening a console to communicate with running applications.
We'll reference various Tockloader
commands which you'll run throughout the
tutorial.
Starter Code
We'll start by playing around with the starter code which implements a basic HOTP key.
-
Within the
libtock-c
checkout, navigate tolibtock-c/examples/tutorials/hotp/hotp_starter/
.This contains the starter code for the HOTP application. It has a hardcoded HOTP secret and generates an HOTP code from it each time the Button 1 on the board is pressed.
- To compile the application and load it onto your board, run make flash in the terminal (running just make will compile but not upload).
  - You likely want to remove other applications that are running on your board, if there are any. You can see which applications are installed with tockloader list and you can remove an app with tockloader uninstall (it will let you choose which app(s) to remove). Bonus information: make flash is just a shortcut for make && tockloader install.
- To see console output from the application, run tockloader listen in a separate terminal.
TIP: You can leave the console running, even when compiling and uploading new applications. It's worth opening a second terminal and leaving tockloader listen always running.
- Since this application creates a USB HID device to enter HOTP codes, you'll need a second USB cable which will connect directly to the microcontroller. Plug it into the port on the left-hand side of the nRF52840DK labeled "nRF USB".
- After attaching the USB cable, you should restart the application by hitting the reset button on the nRF52840DK labeled "IF BOOT/RESET".
- To generate an HOTP code, press "Button 1" on the nRF52840DK. You should see a message printed to the console output that says Counter: 0. Typed "750359" on the USB HID the keyboard. The HOTP code will also be written out over the USB HID device. The six-digit number should appear wherever your cursor is.
- You can verify the HOTP values with https://www.verifyr.com/en/otp/check#hotp. Go to section "#2 Generate HOTP Code". Enter "test" as the HOTP Code to auth, the current counter value from the console as the Counter, "sha256" as the Algorithm, and 6 as the Digits. Click "Generate" and you'll see a six-digit HOTP code that should match the output of the Tock HOTP app.
- The source code for this application is in the file main.c. This is roughly 300 lines of code and includes button handling, HMAC use, and the HOTP state machine. Execution starts at the main() function at the bottom of the file.
- Play around with the app and take a look through the code to make sure it makes sense. Don't worry too much about the generation of the next HOTP code, as it already works and you won't have to modify it.
Checkpoint: You should be able to run the application and have it output HOTP codes over USB to your computer when Button 1 is pressed.
Milestone One: Configuring Secrets
The first milestone is to modify the HOTP application to allow the user to set a
secret, rather than having a pre-compiled default secret. Completed code is
available in hotp_milestone_one/
in case you run into issues.
-
First, modify the code in main() to detect when a user wants to change the HOTP secret rather than get the next code.
The simplest way to do this is to sense how long the button is held for. You can delay a short period, roughly 500 ms would work well, and then read the button again and check if it's still being pressed. You can wait synchronously with the
delay_ms()
function and you can read a button with thebutton_read()
function.-
Note that buttons are indexed from 0 in Tock. So "Button 1" on the hardware is button number 0 in the application code. All four of the buttons on the nRF52840DK are accessible, although the
initialize_buttons()
helper function in main.c only initializes interrupts for button number 0. (You can change this if you want!) -
An alternative design would be to use different buttons for different purposes. We'll focus on the first method, but feel free to implement this however you think would work best.
-
-
For now, just print out a message when you detect the user's intent. Be sure to compile and upload your modified application to test it.
-
Next, create a new helper function to allow for programming new secrets (a rough sketch of this flow appears at the end of this list). This function will have three parts:
-
The function should print a message about wanting input from the user.
- Let them know that they've entered this mode and that they should type a new HOTP secret.
-
The function should read input from the user to get the base32-encoded secret.
-
You'll want to use the Console functions
getch()
andputnstr()
.getch()
can read characters of user input whileputnstr()
can be used to echo each character the user types. Make a loop that reads the characters into a buffer. -
Since the secret is in base32, special characters are not valid. The easiest way to handle this is to check the input character with
isalnum()
and ignore it if it isn't alphanumeric. -
When the user hits the enter key, a
\n
character will be received. This can be used to break from the loop.
-
-
The function should decode the secret and save it in the
hotp-key
.- Use the
program_default_secret()
implementation for guidance here. Thedefault_secret
takes the place of the string you read from the user, but otherwise the steps are the same.
- Use the
-
-
Connect the two pieces of code you created to allow the user to enter a new key. Then upload your code to test it!
- You can test that the new secret works with https://www.verifyr.com/en/otp/check#hotp as described previously.
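If you get stuck, here is a rough sketch of the secret-entry loop described above. It uses the classic libtock-c console helpers getch() and putnstr(); the function name, buffer size, and prompt text are illustrative and not part of the starter code. For the long-press check, the button upcall can record the press, after which delay_ms(500) followed by button_read() tells you whether the button is still held.
#include <ctype.h>
#include <console.h>

// Illustrative helper: read a base32-encoded secret typed over the console.
static void read_new_secret(char* secret, int max_len) {
  const char prompt[] = "Type a new base32 HOTP secret, then press enter:\r\n";
  putnstr(prompt, sizeof(prompt) - 1);

  int len = 0;
  while (len < max_len - 1) {
    int c = getch();
    if (c < 0) break;                    // Console error; give up.
    if (c == '\n' || c == '\r') break;   // Enter finishes the secret.
    if (!isalnum(c)) continue;           // Base32: ignore everything else.
    char ch = (char)c;
    putnstr(&ch, 1);                     // Echo the typed character.
    secret[len++] = ch;
  }
  secret[len] = '\0';

  // Next: decode the base32 string and store it in the HOTP key, mirroring
  // what program_default_secret() does with default_secret.
}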
Checkpoint: Your HOTP application should now take in user-entered secrets and generate HOTP codes for them based on button presses.
Milestone Two: Persistent Secrets
The second milestone is to save the HOTP struct in persistent Flash rather than
in volatile memory. After doing so, the secret and current counter values will
persist after resets and power outages. We'll do the saving to flash with the
App State driver, which allows an application to save some information to its
own Flash region in memory. Completed code is available in hotp_milestone_two/
in case you run into issues.
-
First, understand how the App State driver works by playing with some example code. The App State test application is available in
libtock-c/examples/tests/app_state/main.c
-
Compile it and load it onto your board to try it out.
-
If you want to uninstall the HOTP application from the board, you can do so with
tockloader uninstall
. When you're done, you can use that same command to remove this application.
-
-
Next, we'll go back to the HOTP application code and add our own App State implementation (a rough sketch appears at the end of this milestone).
Start by creating a new struct that holds both a
magic
field and the HOTP key struct.- The value in the
magic
field can be any unique number that is unlikely to occur by accident. A 32-bit value (that is neither all zeros nor all ones) of your choosing is sufficient.
- The value in the
-
Create an App State initialization function that can be called from the start of
main()
which will load the struct from Flash if it exists, or initialize it and store it if it doesn't.- Be sure to call the initialization function after the one-second delay at
the start of
main()
so that it doesn't attempt to modify Flash during resets while uploading code.
- Be sure to call the initialization function after the one-second delay at
the start of
-
Update code throughout your application to use the HOTP key inside of the App State struct.
You'll also need to synchronize the App State whenever part of the HOTP key is modified: when programming a new secret or updating the counter.
-
Upload your code to test it. You should be able to keep the same secret and counter value on resets and also on power cycles.
-
There is an on/off switch on the top left of the nRF52840DK you can use for power cycling.
-
Note that uploading a modified version of the application will overwrite the App State and lose the existing values inside of it.
-
Checkpoint: Your application should now both allow for the configuring of HOTP secrets and the HOTP secret and counter should be persistent across reboots.
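For reference, the persistent state for this milestone might be shaped like the sketch below. It assumes the App State helpers exercised by the libtock-c app_state test (APP_STATE_DECLARE, app_state_load_sync, app_state_save_sync); the struct layout, hotp_key_t type, magic value, and function name are all illustrative.
#include <stdint.h>
#include <app_state.h>

// Illustrative placeholder; match your app's existing key struct.
typedef struct {
  uint8_t key[64];
  int len;
  uint64_t counter;
} hotp_key_t;

// Persistent state: a magic word plus the HOTP key material.
struct hotp_persistent_state {
  uint32_t magic;      // Arbitrary marker; avoid all-zeros and all-ones.
  hotp_key_t key;
};

APP_STATE_DECLARE(struct hotp_persistent_state, stored_state);

#define HOTP_STATE_MAGIC 0x54431703UL

// Load the saved state, or initialize and save it on first boot.
static void initialize_app_state(void) {
  app_state_load_sync();
  if (stored_state.magic != HOTP_STATE_MAGIC) {
    stored_state.magic = HOTP_STATE_MAGIC;
    // ... fill stored_state.key, e.g. with the default or a user-entered secret ...
    app_state_save_sync();
  }
}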
Milestone Three: Multiple HOTP Keys
The third and final application milestone is to add multiple HOTP keys and a
method for choosing between them. This milestone is optional, as the rest of
the tutorial will work without it. If you're short on time, you can skip it
without issue. Completed code is available in hotp_milestone_three/
in case
you run into issues.
-
The recommended implementation of multiple HOTP keys is to assign one key per button (so four total for the nRF52840DK). A short press will advance the counter and output the HOTP code while a long press will allow for reprogramming of the HOTP secret.
-
The implementation here is totally up to you. Here are some suggestions to consider:
-
Select which key you are using based on the button number of the most recent press. You'll also need to enable interrupts for all of the buttons instead of just Button 1.
-
Make the HOTP key in the App State struct into an array with up to four slots.
-
Having multiple key slots allows for different numbers of digits for the HOTP code on different slots, which you could experiment with.
-
Checkpoint: Your application should now hold multiple HOTP keys, each of which can be configured and is persistent across reboots.
Encryption Oracle Capsule
Our HOTP security key works by storing a number of secrets on the device, and using these secrets together with some moving factor (e.g., a counter value or the current time) in an HMAC operation. This implies that our device needs some way to store these secrets, for instance in its internal flash.
However, storing such secrets in plaintext in ordinary flash is not particularly secure. For instance, many microcontrollers offer debug ports which can be used to gain read and write access to flash. Even if these ports can be locked down, such protection mechanisms have been broken in the past. Apart from that, disallowing external flash access makes debugging and updating our device much more difficult.
To circumvent these issues, we will build an encryption oracle capsule: this Tock kernel module will allow applications to request decryption of some ciphertext, using a kernel-internal key not exposed to applications themselves. By only storing an encrypted version of their secrets, applications are free to use unprotected flash storage, or store them even external to the device itself. This is a commonly used paradigm in root of trust systems such as TPMs or OpenTitan, which feature hardware-embedded keys that are unique to a chip and hardened against key-readout attacks.
Our kernel module will use a hard-coded symmetric encryption key (AES-128 CTR-mode), embedded in the kernel binary. While this does not actually meaningfully increase the security of our example application, it demonstrates some important concepts in Tock:
- How custom userspace drivers are implemented, and the different types of system calls supported.
- How Tock implements asynchronous APIs in the kernel.
- Tock's hardware-interface layers (HILs), which provide abstract interfaces for hardware or software implementations of algorithms, devices and protocols.
Capsules – Tock's Kernel Modules
Most of Tock's functionality is implemented in the form of capsules – Tock's
equivalent to kernel modules. Capsules are Rust modules contained in Rust crates
under the capsules/
directory within the Tock kernel repository. They can be
used to implement userspace drivers, hardware drivers (for example, a driver for
an I²C-connected sensor), or generic reusable code snippets.
What makes capsules special is that they are semi-trusted: they are not
allowed to contain any unsafe
Rust code, and thus can never violate Tock's
memory safety guarantees. They are only trusted with respect to liveness and
correctness – meaning that they must not block the kernel execution for long
periods of time, and should behave correctly according to their specifications
and API contracts.
We start our encryption oracle driver by creating a new capsule called
encryption_oracle
. Create a file under
capsules/extra/src/tutorials/encryption_oracle.rs
in the Tock kernel
repository with the following contents:
// Licensed under the Apache License, Version 2.0 or the MIT License.
// SPDX-License-Identifier: Apache-2.0 OR MIT
// Copyright Tock Contributors 2022.

pub static KEY: &'static [u8; kernel::hil::symmetric_encryption::AES128_KEY_SIZE] =
    b"InsecureAESKey12";

pub struct EncryptionOracleDriver {}

impl EncryptionOracleDriver {
    /// Create a new instance of our encryption oracle userspace driver:
    pub fn new() -> Self {
        EncryptionOracleDriver {}
    }
}
We will be filling this module with more interesting contents soon. To make this
capsule accessible to other Rust modules and crates, add it to
capsules/extra/src/tutorials/mod.rs
:
#[allow(dead_code)]
pub mod encryption_oracle_chkpt5;
+ pub mod encryption_oracle;
EXERCISE: Make sure your new capsule compiles by running cargo check in the capsules/extra/ folder.
The capsules/tutorial
crate already contains checkpoints of the encryption
oracle capsule we'll be writing here. Feel free to use them if you're stuck. We
indicate that your capsule should have reached an equivalent state to one of our
checkpoints through blocks such as the following:
CHECKPOINT:
encryption_oracle_chkpt0.rs
BACKGROUND: While a single "capsule" is generally self-contained in a Rust module (.rs file), these modules are again grouped into Rust crates such as capsules/core and capsules/extra, depending on certain policies. For instance, capsules in core have stricter requirements regarding their code quality and API stability. Neither the core nor the extra capsules crates allow for external dependencies (outside of the Tock repository). The document on external dependencies further explains these policies.
Userspace Drivers
Now that we have a basic capsule skeleton, we can think about how this code is
going to interact with userspace applications. Not every capsule needs to offer
a userspace API, but those that do must implement
the SyscallDriver
trait.
Tock supports different types of application-issued system calls, four of which are relevant to userspace drivers:
-
subscribe: An application can issue a subscribe system call to register upcalls, which are functions invoked in response to certain events. These upcalls are similar in concept to UNIX signal handlers. A driver can request an application-provided upcall to be invoked. Every system call driver can provide multiple "subscribe slots", each of which the application can register an upcall to.
-
read-only allow: An application may expose some data for drivers to read. Tock provides the read-only allow system call for this purpose: an application invokes this system call passing a buffer, the contents of which are then made accessible to the requested driver. Every driver can have multiple "allow slots", each of which the application can place a buffer in.
-
read-write allow: Works similarly to read-only allow, but enables drivers to also mutate the application-provided buffer.
-
command: Applications can use command-type system calls to signal arbitrary events or send requests to the userspace driver. A common use-case for command-style system calls is, for instance, to request that a driver start some long-running operation.
All Tock system calls are synchronous, which means that they should immediately return to the application. In fact, subscribe and allow-type system calls are transparently handled by the kernel, as we will see below. Capsules must not implement long-running operations by blocking on a command system call, as this prevents other applications or kernel routines from running – kernel code is never preempted.
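To make the userspace side of this concrete, an application can reach a driver like our oracle through libtock-c's low-level syscall wrappers. The sketch below assumes the command() wrapper and syscall_return_t type from libtock-c's tock.h (names and return handling differ between libtock-c versions), and a made-up driver number:
#include <tock.h>

// Hypothetical driver number; use whatever number the board assigns to the
// encryption oracle driver.
#define ORACLE_DRIVER_NUM 0x99999

int main(void) {
  // Command 0: by convention, check whether the driver is present.
  syscall_return_t exists = command(ORACLE_DRIVER_NUM, 0, 0, 0);
  if (exists.type != TOCK_SYSCALL_SUCCESS) return -1;

  // Command 1: ask the driver to start a decryption request.
  syscall_return_t res = command(ORACLE_DRIVER_NUM, 1, 0, 0);
  if (res.type != TOCK_SYSCALL_SUCCESS) return -1;

  // A real client would also allow buffers, subscribe an upcall, and yield()
  // until the operation-complete upcall fires.
  return 0;
}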
Application Grants
Now there's just one key part missing to understanding Tock's system calls: how drivers store application-specific data. Tock differs significantly from other operating systems in this regard, which typically simply allocate some memory on demand through a heap allocator.
However, on resource-constrained platforms such as microcontrollers, allocating from a pool of (limited) memory can inevitably become a prominent source of resource exhaustion errors: once there is no more memory available, Tock would not be able to service new allocation requests without revoking some prior allocations. This is especially bad when this memory pool is shared between kernel resources belonging to multiple processes, as one process could then potentially starve another.
To avoid these issues, Tock uses grants. A grant is a memory allocation belonging to a process: it is located within the process's own memory region, but is reserved for use by the kernel. Whenever a kernel component must keep track of some process-related information, it can use a grant to hold it. Because memory is allocated from a process-specific memory region, it is impossible for one process to starve another's memory allocations, independent of whether those allocations are made in the process itself or in the kernel. As a consequence, Tock can avoid implementing a kernel heap allocator entirely.
Ultimately, our encryption oracle driver will need to keep track of some per-process state. Thus we extend the above driver with a Rust struct to be stored within a grant, called ProcessState. For now, we just keep track of whether a process has requested a decryption operation. Add the following code snippet to your capsule:
#[derive(Default)]
pub struct ProcessState {
    request_pending: bool,
}
By implementing Default, grant types can be allocated and initialized on demand. We integrate this type into our EncryptionOracleDriver by adding a special process_grants variable of type Grant. This Grant struct takes a generic type parameter T (which we set to our ProcessState struct above) next to some constants: as a driver's subscribe upcall and allow buffer slots also consume some memory, we store them in the process-specific grant as well. Thus, UpcallCount, AllowRoCount, and AllowRwCount indicate how many of these slots should be allocated respectively. For now we don't use any of these slots, so we set their counts to zero. Add the process_grants variable to your EncryptionOracleDriver:
use kernel::grant::{Grant, UpcallCount, AllowRoCount, AllowRwCount};

pub struct EncryptionOracleDriver {
    process_grants: Grant<
        ProcessState,
        UpcallCount<0>,
        AllowRoCount<0>,
        AllowRwCount<0>,
    >,
}
EXERCISE: The Grant struct will be provided as an argument to the constructor of the EncryptionOracleDriver. Extend new to accept it as an argument. Afterwards, make sure your code compiles by running cargo check in the capsules/extra/ directory.
Implementing a System Call
Now that we know about grants we can start to implement a proper system call. We start with the basics and implement a simple command-type system call: upon request by the application, the Tock kernel will call a method in our capsule.
For this, we implement the following SyscallDriver trait for our EncryptionOracleDriver struct. This trait contains two important methods:
- command: this method is called whenever an application issues a command-type system call towards this driver, and
- allocate_grant: this is a method required by Tock to allocate some space in the process' memory region. The implementation of this method always looks the same, and while it must be implemented by every userspace driver, its exact purpose is not important right now.
use kernel::{ErrorCode, ProcessId};
use kernel::syscall::{SyscallDriver, CommandReturn};

impl SyscallDriver for EncryptionOracleDriver {
    fn command(
        &self,
        command_num: usize,
        _data1: usize,
        _data2: usize,
        processid: ProcessId,
    ) -> CommandReturn {
        // Syscall handling code here!
        unimplemented!()
    }

    // Required by Tock for grant memory allocation.
    fn allocate_grant(&self, processid: ProcessId) -> Result<(), kernel::process::Error> {
        self.process_grants.enter(processid, |_, _| {})
    }
}
The function signature of command tells us a lot about what we can do with this type of system call:
- Applications can provide a command_num, which indicates what type of command they are requesting to be handled by a driver, and
- they can optionally pass up to two usize data arguments.
- The kernel further provides us with a unique identifier of the calling process, through a type called ProcessId.
Our driver can respond to this system call using a CommandReturn
struct. This
struct allows for returning either a success or a failure indication, along
with some data (at most four usize
return values). For more details, you can
look at its definition and API
here.
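For instance, the following sketch shows a few of the CommandReturn constructors a driver might use (the exact set of constructors is best confirmed in that documentation; the values here are made up for illustration):

// Success with no data:
CommandReturn::success()
// Success carrying one u32 value (for example, a length):
CommandReturn::success_u32(64)
// Failure with an error code:
CommandReturn::failure(ErrorCode::BUSY)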
In our encryption oracle driver we only need to handle a single application request: to decrypt some ciphertext into its corresponding plaintext. Since we have not yet implemented the actual cryptographic operations, let's simply record that a process has made such a request. Because this is per-process state, we store it in the request_pending field of the process' grant region. To obtain a reference to this memory, we can conveniently use the ProcessId type provided to us by the kernel. The following code snippet shows how an implementation of command might look. Replace your command method body with this snippet:
match command_num {
    // Check whether the driver is present:
    0 => CommandReturn::success(),

    // Request the decryption operation:
    1 => {
        self
            .process_grants
            .enter(processid, |app, _kernel_data| {
                kernel::debug!("Received request from process {:?}", processid);
                app.request_pending = true;
                CommandReturn::success()
            })
            .unwrap_or_else(|err| err.into())
    },

    // Unknown command number, return a NOSUPPORT error
    _ => CommandReturn::failure(ErrorCode::NOSUPPORT),
}
There's a lot to unpack here: first, we match on the passed command_num
. By
convention, command number 0
is reserved to check whether a driver is loaded
on a kernel. If our code is executing, then this must be the case, and thus we
simply return success
. For all other unknown command numbers, we must instead
return a NOSUPPORT
error.
Command number 1
is assigned to start the decryption operation. To get a
reference to our process-local state stored in its grant region, we can use the
enter
method: it takes a ProcessId
, and in return will call a provided Rust
closure that provides us access to the process' own ProcessState
instance.
Because entering a grant can fail (for instance when the process does not have
sufficient memory available), we handle any errors by converting them into a
CommandReturn
.
EXERCISE: Make sure that your EncryptionOracleDriver implements the SyscallDriver trait as shown above. Then, verify that your code compiles by running cargo check in the capsules/extra/ folder.
CHECKPOINT:
encryption_oracle_chkpt1.rs
Congratulations, you have implemented your first Tock system call! Next, we will look into how to integrate this driver into a kernel build.
Adding a Capsule to a Tock Kernel
To actually make our driver available in a given build of the kernel, we need to
add it to a board crate. Board crates tie the kernel, a given chip, and a
set of drivers together to create a binary build of the Tock operating system,
which can then be loaded into a physical board. For the purposes of this
section, we assume to be targeting the Nordic Semiconductor nRF52840DK board,
and thus will be working in the boards/nordic/nrf52840dk/
directory.
EXERCISE: Enter the boards/nordic/nrf52840dk/ directory and compile a kernel by typing make. A successful build should end with a message that looks like the following:

    Finished release [optimized + debuginfo] target(s) in 20.34s
   text    data     bss     dec     hex filename
 176132       4   33284  209420   3320c /home/tock/tock/target/thumbv7em-none-eabi/release/nrf52840dk
[Hash omitted]  /home/tock/tock/target/thumbv7em-none-eabi/release/nrf52840dk.bin
Applications interact with our driver by passing a "driver number" alongside
their system calls. The capsules/core/src/driver.rs
module acts as a registry
for driver numbers. For the purposes of this tutorial we'll use an unassigned
driver number in the misc range, 0x99999
, and add a constant to our capsule
accordingly:
pub const DRIVER_NUM: usize = 0x99999;
Accepting an AES Engine in the Driver
Before we start adding our driver to the board crate, we'll modify it slightly
to accept an instance of an AES128
cryptography engine. This is to avoid
modifying our driver's instantiation later on. We provide the
encryption_oracle_chkpt2.rs
checkpoint which has these changes integrated,
feel free to use this code. We make the following mechanical changes to our
types and constructor – don't worry about them too much right now.
First, we change our EncryptionOracleDriver
struct to hold a reference to some
generic type A
, which must implement the AES128
and the AES128Ctr
traits:
+ use kernel::hil::symmetric_encryption::{AES128Ctr, AES128};
- pub struct EncryptionOracleDriver {
+ pub struct EncryptionOracleDriver<'a, A: AES128<'a> + AES128Ctr> {
+ aes: &'a A,
process_grants: Grant<
ProcessState,
UpcallCount<0>,
Then, we change our constructor to accept this aes
member as a new argument:
- impl EncryptionOracleDriver {
+ impl<'a, A: AES128<'a> + AES128Ctr> EncryptionOracleDriver<'a, A> {
/// Create a new instance of our encryption oracle userspace driver:
pub fn new(
+ aes: &'a A,
+ _source_buffer: &'static mut [u8],
+ _dest_buffer: &'static mut [u8],
process_grants: Grant<ProcessState, UpcallCount<0>, AllowRoCount<0>, AllowRwCount<0>>,
) -> Self {
EncryptionOracleDriver {
process_grants: process_grants,
+ aes: aes,
}
}
}
And finally we update our implementation of SyscallDriver
to match these new
types:
- impl SyscallDriver for EncryptionOracleDriver {
+ impl<'a, A: AES128<'a> + AES128Ctr> SyscallDriver for EncryptionOracleDriver<'a, A> {
fn command(
&self,
Finally, make sure that your modified capsule still compiles.
CHECKPOINT:
encryption_oracle_chkpt2.rs
Instantiating the System Call Driver
Now, open the board's main file (boards/nordic/nrf52840dk/src/main.rs
) and
scroll down to the line that reads "PLATFORM SETUP, SCHEDULER, AND START KERNEL
LOOP". We'll instantiate our encryption oracle driver right above that, with
the following snippet:
const CRYPT_SIZE: usize = 7 * kernel::hil::symmetric_encryption::AES128_BLOCK_SIZE;

let aes_src_buffer = kernel::static_init!([u8; 16], [0; 16]);
let aes_dst_buffer = kernel::static_init!([u8; CRYPT_SIZE], [0; CRYPT_SIZE]);

let oracle = static_init!(
    capsules_extra::tutorials::encryption_oracle::EncryptionOracleDriver<
        'static,
        nrf52840::aes::AesECB<'static>,
    >,
    // Call our constructor:
    capsules_extra::tutorials::encryption_oracle::EncryptionOracleDriver::new(
        &base_peripherals.ecb,
        aes_src_buffer,
        aes_dst_buffer,
        // Magic incantation to create our `Grant` struct:
        board_kernel.create_grant(
            capsules_extra::tutorials::encryption_oracle::DRIVER_NUM, // our driver number
            &create_capability!(capabilities::MemoryAllocationCapability)
        ),
    ),
);

// Leave commented out for now:
// kernel::hil::symmetric_encryption::AES128::set_client(&base_peripherals.ecb, oracle);
Now that we instantiated our capsule, we need to wire it up to Tock's system
call handling facilities. This involves two steps: first, we need to store our
instance in our Platform
struct. That way, we can refer to our instance while
the kernel is running. Then, we need to route system calls to our driver number
(0x99999
) to be handled by this driver.
Add the following line to the very bottom of the pub struct Platform {
declaration:
pub struct Platform {
[...],
systick: cortexm4::systick::SysTick,
+ oracle: &'static capsules_extra::tutorials::encryption_oracle::EncryptionOracleDriver<
+ 'static,
+ nrf52840::aes::AesECB<'static>,
+ >,
}
Furthermore, add our instantiated oracle to the let platform = Platform {
instantiation:
let platform = Platform {
[...],
systick: cortexm4::systick::SysTick::new_with_calibration(64000000),
+ oracle,
};
Finally, to handle received system calls in our driver, add the following line
to the match
block in the with_driver
method of the SyscallDriverLookup
trait implementation:
impl SyscallDriverLookup for Platform {
fn with_driver<F, R>(&self, driver_num: usize, f: F) -> R
where
F: FnOnce(Option<&dyn kernel::syscall::SyscallDriver>) -> R,
{
match driver_num {
capsules_core::console::DRIVER_NUM => f(Some(self.console)),
[...],
capsules_extra::app_flash_driver::DRIVER_NUM => f(Some(self.app_flash)),
+ capsules_extra::tutorials::encryption_oracle::DRIVER_NUM => f(Some(self.oracle)),
_ => f(None),
}
}
}
That's it! We have just added a new driver to the nRF52840DK's Tock kernel build.
EXERCISE: Make sure your board compiles by running make. If you want, you can test your driver with a libtock-c application which executes the following:

command(
    0x99999, // driver number
    1,       // command number
    0, 0     // optional data arguments
);
Upon receiving this system call, the capsule should print the "Received request from process" message.
Interacting with HILs
The Tock operating system supports different hardware platforms, each featuring
an individual set of integrated peripherals. At the same time, a driver such as
our encryption oracle should be portable between different systems running Tock.
To achieve this, Tock uses the concept of Hardware-Interface Layers (HILs), the
design paradigms of which are described in
this document.
HILs are organized as Rust modules, and can be found under the
kernel/src/hil/
directory. We will be working with the
symmetric_encryption.rs
HIL.
HILs capture another important concept of the Tock kernel: asynchronous
operations. As mentioned above, Tock system calls must never block for extended
periods of time, as kernel code is not preempted. Blocking in the kernel
prevents other useful work from being done. Instead, long-running operations in the Tock
kernel are implemented as asynchronous two-phase operations: one function call
on the underlying implementation (e.g., of our AES engine) starts an operation,
and another function call (issued by the underlying implementation, hence named
callback) informs the driver that the operation has completed. You can see
this paradigm embedded in all of Tock's HILs, including the
symmetric_encryption
HIL: the
crypt()
method
is specified to return immediately (and return a Some(_)
in case of an error).
When the requested operation is finished, the implementor of AES128
will call
the
crypt_done()
callback,
on the client registered with
set_client()
.
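For orientation, the relevant parts of this HIL look roughly as follows (paraphrased here as a sketch; consult kernel/src/hil/symmetric_encryption.rs for the authoritative definitions):

pub trait AES128<'a> {
    /// Register the client that will receive `crypt_done()` callbacks.
    fn set_client(&'a self, client: &'a dyn Client<'a>);

    /// Start an encryption/decryption operation. Returns `None` if the
    /// operation was accepted; returns `Some(...)` (handing the buffers back)
    /// if the operation could not be started.
    fn crypt(
        &self,
        source: Option<&'static mut [u8]>,
        dest: &'static mut [u8],
        start_index: usize,
        stop_index: usize,
    ) -> Option<(Result<(), ErrorCode>, Option<&'static mut [u8]>, &'static mut [u8])>;
}

pub trait Client<'a> {
    /// Called by the AES implementation once the requested operation is done.
    fn crypt_done(&'a self, source: Option<&'static mut [u8]>, destination: &'static mut [u8]);
}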
The below figure illustrates the way asynchronous operations are handled in Tock, using our encryption oracle capsule as an example. One further detail illustrated in this figure is the fact that providers of a given interface (e.g., AES128) may not always be able to perform a large userspace operation in a single call; this may be because of hardware limitations, limited buffer allocations, or to avoid blocking the kernel for too long in software implementations. In this case, a userspace operation is broken up into multiple smaller operations on the underlying provider, and the next sub-operation is scheduled once a callback has been received:
To allow our capsule to receive crypt_done
callbacks, add the following trait
implementation:
use kernel::hil::symmetric_encryption::Client;

impl<'a, A: AES128<'a> + AES128Ctr> Client<'a> for EncryptionOracleDriver<'a, A> {
    fn crypt_done(&'a self, mut source: Option<&'static mut [u8]>, destination: &'static mut [u8]) {
        unimplemented!()
    }
}
With this trait implemented, we can wire up the oracle
driver instance to
receive callbacks from the AES engine (base_peripherals.ecb
) by uncommenting
the following line in boards/nordic/nrf52840dk/src/main.rs
:
- // Leave commented out for now:
- // kernel::hil::symmetric_encryption::AES128::set_client(&base_peripherals.ecb, oracle);
+ kernel::hil::symmetric_encryption::AES128::set_client(&base_peripherals.ecb, oracle);
If this is missing, our capsule will not be able to receive feedback from the AES hardware that an operation has finished, and it will thus refuse to start any new operation. This is an easy mistake to make – you should check whether all callbacks are set up correctly when the kernel is in such a stuck state.
Multiplexing Between Processes
While our underlying AES128
implementation can only handle one request at a
time, multiple processes may wish to use this driver. Thus our capsule
implements a queueing system: even while one process is already using our
capsule to decrypt some ciphertext, another process can still initiate such a
request. We remember these requests through the request_pending
flag in our
ProcessState
grant, and we've already implemented the logic to set this flag!
Now, to actually implement our asynchronous decryption operation, it is further
important to keep track of which process' request we are currently working on.
We add an additional state field to our EncryptionOracleDriver
holding an
OptionalCell
:
this is a container whose stored value can be modified even if we only hold an
immutable Rust reference to it. As the name indicates, it behaves similarly to an Option – it can either hold a value, or be empty.
use kernel::utilities::cells::OptionalCell;
pub struct EncryptionOracleDriver<'a, A: AES128<'a> + AES128Ctr> {
aes: &'a A,
process_grants: Grant<ProcessState, UpcallCount<0>, AllowRoCount<0>, AllowRwCount<0>>,
+ current_process: OptionalCell<ProcessId>,
}
We need to add it to the constructor as well:
pub fn new(
aes: &'a A,
_source_buffer: &'static mut [u8],
_dest_buffer: &'static mut [u8],
process_grants: Grant<ProcessState, UpcallCount<0>, AllowRoCount<0>, AllowRwCount<0>>,
) -> Self {
EncryptionOracleDriver {
process_grants,
aes,
+ current_process: OptionalCell::empty(),
}
}
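As a point of reference, the following standalone sketch shows the OptionalCell operations we will rely on (the stored value here is made up for illustration):

use kernel::utilities::cells::OptionalCell;

// Hypothetical, standalone illustration:
let current: OptionalCell<usize> = OptionalCell::empty();
assert!(current.is_none());      // starts out empty
current.set(42);                 // store a value through a shared (&) reference
let copy = current.extract();    // copy the value out (Some(42)), leaving it in place
let taken = current.take();      // remove the value (Some(42)), leaving the cell empty
assert!(copy == Some(42) && taken == Some(42) && current.is_none());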
In practice, we simply want to find the next process request to work on. For
this, we add a helper method to the impl
of our EncryptionOracleDriver
:
/// Return a `ProcessId` which has `request_pending` set, if there is some:
fn next_pending(&self) -> Option<ProcessId> {
    unimplemented!()
}
EXERCISE: Try to implement this method according to its specification. If you're stuck, see whether the documentation of the OptionalCell and Grant types helps. Hint: to interact with the ProcessState of every process, you can use the iter method on a Grant: the returned Iter type then has an enter method to access the contents of an individual process' grant.
CHECKPOINT:
encryption_oracle_chkpt3.rs
Interacting with Process Buffers and Scheduling Upcalls
For our encryption oracle, it is important to allow users to provide buffers containing the encryption initialization vector (to prevent an attacker from inferring relationships between messages encrypted with the same key), and the plaintext or ciphertext to encrypt or decrypt respectively. Furthermore, userspace must provide a mutable buffer for our capsule to write the operation's output to. These buffers are placed into read-only and read-write allow slots by applications accordingly. We allocate fixed IDs for those buffers:
/// Ids for read-only allow buffers
mod ro_allow {
    pub const IV: usize = 0;
    pub const SOURCE: usize = 1;
    /// The number of allow buffers the kernel stores for this grant
    pub const COUNT: u8 = 2;
}

/// Ids for read-write allow buffers
mod rw_allow {
    pub const DEST: usize = 0;
    /// The number of allow buffers the kernel stores for this grant
    pub const COUNT: u8 = 1;
}
To deliver upcalls to the application, we further allocate a subscribe (upcall) slot for the DONE callback:
/// Ids for subscribe upcalls
mod upcall {
    pub const DONE: usize = 0;
    /// The number of subscribe upcalls the kernel stores for this grant
    pub const COUNT: u8 = 1;
}
Now, we need to update our Grant
type to actually reserve these new allow and
subscribe slots:
pub struct EncryptionOracleDriver<'a, A: AES128<'a> + AES128Ctr> {
aes: &'a A,
process_grants: Grant<
ProcessState,
- UpcallCount<0>,
- AllowRoCount<0>,
- AllowRwCount<0>,
+ UpcallCount<{ upcall::COUNT }>,
+ AllowRoCount<{ ro_allow::COUNT }>,
+ AllowRwCount<{ rw_allow::COUNT }>,
>,
Update this type signature in your constructor as well.
While Tock applications can expose certain sections of their memory as buffers to the kernel, the kernel may only access these buffers while the process' grant region is entered (implemented through a Rust closure). Unfortunately, this implies that asynchronous operations cannot keep hold of these buffers and use them while other code (or potentially the application itself) is executing.
For this reason, Tock uses static mutable slices (&'static mut [u8]
) in
HILs. These Rust types have the distinct advantage that they can be passed
around the kernel as "persistent references": when borrowing a 'static
reference into another 'static
reference, the original reference becomes
inaccessible. Tock features a special container to hold such mutable references,
called TakeCell
. We add such a container for each of our source and
destination buffers:
use core::cell::Cell;
use kernel::utilities::cells::TakeCell;
pub struct EncryptionOracleDriver<'a, A: AES128<'a> + AES128Ctr> {
[...],
current_process: OptionalCell<ProcessId>,
+ source_buffer: TakeCell<'static, [u8]>,
+ dest_buffer: TakeCell<'static, [u8]>,
+ crypt_len: Cell<usize>,
}
) -> Self {
EncryptionOracleDriver {
process_grants: process_grants,
aes: aes,
current_process: OptionalCell::empty(),
+ source_buffer: TakeCell::new(source_buffer),
+ dest_buffer: TakeCell::new(dest_buffer),
+ crypt_len: Cell::new(0),
}
}
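As a point of reference, here is a hypothetical sketch of the TakeCell operations we will use (the illustrate_takecell function and its buffer argument exist only for this example):

use kernel::utilities::cells::TakeCell;

// Hypothetical illustration: `buffer` is some &'static mut [u8] handed to us,
// for example by the board's main.rs through our constructor.
fn illustrate_takecell(buffer: &'static mut [u8]) {
    let cell: TakeCell<'static, [u8]> = TakeCell::new(buffer);

    // take() moves the buffer out, leaving the cell empty...
    if let Some(buf) = cell.take() {
        // ...e.g. to hand it to the AES engine. When the buffer is handed
        // back to us in a callback, replace() stores it again:
        cell.replace(buf);
    }

    // map() grants temporary access without manually taking and replacing:
    cell.map(|buf| buf[0] = 0xff);
}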
Now we have all pieces in place to actually drive the AES implementation. As
this is a rather lengthy implementation containing a lot of specifics relating
to the AES128
trait, this logic is provided to you in the form of a single
run()
method. Fill in this implementation from encryption_oracle_chkpt4.rs
:
use kernel::processbuffer::ReadableProcessBuffer;
use kernel::hil::symmetric_encryption::AES128_BLOCK_SIZE;

/// The run method initiates a new decryption operation or
/// continues an existing two-phase (asynchronous) decryption in
/// the context of a process.
///
/// If the process-state `offset` is `0`, we will initialize the
/// AES engine with an initialization vector (IV) provided by the
/// application, and configure it to perform an AES128-CTR
/// operation.
///
/// If the process-state `offset` is larger or equal to the
/// process-provided source or destination buffer size, we return
/// an error of `ErrorCode::NOMEM`. A caller can use this as a
/// method to check whether the decryption operation has
/// finished.
fn run(&self, processid: ProcessId) -> Result<(), ErrorCode> {
    // Copy in the provided code from `encryption_oracle_chkpt4.rs`
    unimplemented!()
}
A core part still missing is actually invoking this run()
method, namely for
each process that has its request_pending
flag set. As we need to do this each
time an application requests an operation, as well as each time we finish an
operation (to work on the next enqueued one), this is implemented in a helper
method called run_next_pending
.
/// Try to run another decryption operation.
///
/// If `self.current_process` contains a `ProcessId`, this
/// indicates that an operation is still in progress. In this
/// case, do nothing.
///
/// If `self.current_process` is vacant, use your implementation
/// of `next_pending` to find a process with an active request. If
/// one is found, remove its `request_pending` indication and start
/// a new decryption operation with the following call:
///
///     self.run(processid)
///
/// If this method returns an error, return the error to the
/// process in the registered upcall. Try this until either an
/// operation was started successfully, or no more processes have
/// pending requests.
///
/// Beware: you will need to enter a process' grant both to set the
/// `request_pending = false` and to (potentially) schedule an error
/// upcall. `self.run()` will itself also enter the grant region.
/// However, *Tock's grants are non-reentrant*. This means that trying
/// to enter a grant while it is already entered will fail!
fn run_next_pending(&self) {
    unimplemented!()
}
EXERCISE: Implement the run_next_pending method according to its specification. To schedule a process upcall, you can use the second argument passed into the grant.enter() method (kernel_data):

kernel_data.schedule_upcall(
    <upcall slot>,
    (<arg0>, <arg1>, <arg2>)
)
By convention, errors are reported in the first upcall argument (arg0). You can convert an ErrorCode into a usize with the following method:

kernel::errorcode::into_statuscode(<error code>)
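Putting these together, scheduling an error upcall from within a grant closure might look roughly like the following sketch (upcall::DONE is the slot defined earlier; ErrorCode::FAIL is only an example value):

let _ = self.process_grants.enter(processid, |_app, kernel_data| {
    // Report a failure to the process in the DONE upcall. By convention,
    // the status code goes in the first upcall argument.
    let _ = kernel_data.schedule_upcall(
        upcall::DONE,
        (kernel::errorcode::into_statuscode(Err(ErrorCode::FAIL)), 0, 0),
    );
});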
run_next_pending
should be invoked whenever we receive a new encryption /
decryption request from a process, so add it to the command()
method
implementation:
// Request the decryption operation:
- 1 => self
- .process_grants
- .enter(processid, |grant, _kernel_data| {
- grant.request_pending = true;
- CommandReturn::success()
- })
- .unwrap_or_else(|err| err.into()),
+ 1 => {
+ let res = self
+ .process_grants
+ .enter(processid, |grant, _kernel_data| {
+ grant.request_pending = true;
+ CommandReturn::success()
+ })
+ .unwrap_or_else(|err| err.into());
+
+ self.run_next_pending();
+
+ res
+ }
We store res
temporarily, as Tock's grant regions are non-reentrant: we can't
invoke run_next_pending
(which will attempt to enter grant regions), while
we're in a grant already.
CHECKPOINT:
encryption_oracle_chkpt4.rs
Now, to complete our encryption oracle capsule, we need to implement the
crypt_done()
callback. This callback performs the following actions:
- copies the in-kernel destination buffer (&'static mut [u8]), as passed to crypt(), into the process' destination buffer through its grant, and
- attempts to invoke another encryption / decryption round by calling run().
  - If calling run() succeeds, another crypt_done() callback will be scheduled in the future.
  - If calling run() fails with an error of ErrorCode::NOMEM, this indicates that the current operation has been completed. Invoke the process' upcall to signal this event, and use our run_next_pending() method to schedule the next operation.
Similar to the run()
method, we provide this snippet to you in
encryption_oracle_chkpt5.rs
:
use kernel::processbuffer::WriteableProcessBuffer;

impl<'a, A: AES128<'a> + AES128Ctr> Client<'a> for EncryptionOracleDriver<'a, A> {
    fn crypt_done(&'a self, mut source: Option<&'static mut [u8]>, destination: &'static mut [u8]) {
        // Copy in the provided code from `encryption_oracle_chkpt5.rs`
        unimplemented!()
    }
}
CHECKPOINT:
encryption_oracle_chkpt5.rs
Congratulations! You have written your first Tock capsule and userspace driver, and interfaced with Tock's asynchronous HILs. Your capsule should be ready to go now; go ahead and integrate it into your HOTP application! Don't forget to recompile your kernel so that it includes the latest changes.
Integrating the Encryption Oracle Capsule into your libtock-c
App
The encryption oracle capsule is compatible with the oracle.c
and oracle.h
implementation in the libtock-c
part of the tutorial, under
examples/tutorials/hotp/hotp_oracle_complete/
.
You can try to integrate this with your application by using the interfaces
provided in oracle.h
. The main.c
file in this repository contains an example
of how these interfaces can be integrated into a fully-featured HOTP
application.
Security Key Application Access Control
With security-focused and privileged system resources, a board may wish to restrict which applications can access which system call resources. In this stage we will extend the Tock kernel to restrict access to the encryption capsule to only trusted (credentialed) apps.
Background
We need two Tock mechanisms to implement this feature. First, we need a way to identify the trusted app that we will give access to the encryption engine. We will do this by adding credentials to the app's TBF (Tock Binary Format file) and verifying those credentials when the application is loaded. This mechanism allows developers to sign apps, and then the kernel can verify those signatures.
The second mechanism is a way to permit syscall access to only specific applications. The Tock kernel already has a hook that runs on each syscall to check if the syscall should be permitted. By default this just approves every syscall. We will need to implement a custom policy which permits access to the encryption capsule to only the trusted HOTP apps.
Module Overview
Our goal is to add credentials to Tock apps, verify those credentials in the kernel, and then permit only verified apps to use the encryption oracle API. To keep things simple we will use a SHA-256 hash as our credential, and verify that the hash is valid within the kernel.
Step 1: Credentialed Apps
To implement our access control policy we need to include an offline-computed SHA256 hash with the app TBF, and then check it when running the app. The SHA256 credential is simple to create, and serves as a stand-in for more useful credentials such as cryptographic signatures.
This will require a couple pieces:
- We need to actually include the hash in our app.
- We need a mechanism in the kernel to check the hash exists and is valid.
Signing Apps
We can use Tockloader to add a hash to a compiled app. This will require Tockloader version 1.10.0 or newer.
First, compile the app:
$ cd libtock-c/examples/blink
$ make
Now, add the hash credential:
$ tockloader tbf credential add sha256
It's fine to add the credential to all architectures, or you can specify which TBF to add it to.
To check that the credential was added, we can inspect the TAB:
$ tockloader inspect-tab
You should see output like the following:
$ tockloader inspect-tab
[INFO ] No TABs passed to tockloader.
[STATUS ] Searching for TABs in subdirectories.
[INFO ] Using: ['./build/blink.tab']
[STATUS ] Inspecting TABs...
TAB: blink
build-date: 2023-06-09 21:52:59+00:00
minimum-tock-kernel-version: 2.0
tab-version: 1
included architectures: cortex-m0, cortex-m3, cortex-m4, cortex-m7
Which TBF to inspect further? cortex-m4
cortex-m4:
version : 2
header_size : 104 0x68
total_size : 16384 0x4000
checksum : 0x722e64be
flags : 1 0x1
enabled : Yes
sticky : No
TLV: Main (1) [0x10 ]
init_fn_offset : 41 0x29
protected_size : 0 0x0
minimum_ram_size : 5068 0x13cc
TLV: Program (9) [0x20 ]
init_fn_offset : 41 0x29
protected_size : 0 0x0
minimum_ram_size : 5068 0x13cc
binary_end_offset : 8360 0x20a8
app_version : 0 0x0
TLV: Package Name (3) [0x38 ]
package_name : blink
TLV: Kernel Version (8) [0x4c ]
kernel_major : 2
kernel_minor : 0
kernel version : ^2.0
TLV: Persistent ACL (7) [0x54 ]
Write ID : 11 0xb
Read IDs (1) : 11
Access IDs (1) : 11
TBF Footers
Footer
footer_size : 8024 0x1f58
Footer TLV: Credentials (128)
Type: SHA256 (3) ✓ verified
Length: 32
Footer TLV: Credentials (128)
Type: Reserved (0)
Length: 7976
Note at the bottom, there is a Footer TLV
with SHA256 credentials! Because
tockloader was able to double-check the hash was correct there is ✓ verified
next to it.
SUCCESS: We now have an app with a hash credential!
Verifying Credentials in the Kernel
To have the kernel check that our hash credential is present and valid, we need to add a credential checker before the kernel starts each process.
To create the app checker, we'll edit the board's main.rs
file in the kernel.
Tock includes a basic SHA256 credential checker, so we can use that. The
following code should be added to the main.rs
file somewhere before the
platform setup occurs (probably right after the encryption oracle capsule from
the last module!).
//--------------------------------------------------------------------------
// CREDENTIALS CHECKING POLICY
//--------------------------------------------------------------------------

// Create the software-based SHA engine.
let sha = static_init!(
    capsules_extra::sha256::Sha256Software<'static>,
    capsules_extra::sha256::Sha256Software::new()
);
kernel::deferred_call::DeferredCallClient::register(sha);

// Create the credential checker.
static mut SHA256_CHECKER_BUF: [u8; 32] = [0; 32];
let checker = static_init!(
    kernel::process_checker::basic::AppCheckerSha256,
    kernel::process_checker::basic::AppCheckerSha256::new(sha, &mut SHA256_CHECKER_BUF)
);

kernel::hil::digest::Digest::set_client(sha, checker);
That code creates a checker
object. We now need to modify the board so it
hangs on to that checker
struct. To do so, we need to add this to our
Platform
struct type definition near the top of the file:
struct Platform {
    ...
    credentials_checking_policy: &'static kernel::process_checker::basic::AppCheckerSha256,
}
Then when we create the platform object near the end of main()
, we can add our
checker
:
let platform = Platform {
    ...
    credentials_checking_policy: checker,
}
And we need the platform to provide access to that checker when requested by the
kernel for credentials-checking purposes. This goes in the KernelResources
implementation for the Platform
type:
impl KernelResources for Platform {
    ...
    type CredentialsCheckingPolicy = kernel::process_checker::basic::AppCheckerSha256;
    ...
    fn credentials_checking_policy(&self) -> &'static Self::CredentialsCheckingPolicy {
        self.credentials_checking_policy
    }
    ...
}
Finally, we need to use the function that checks credentials when loading processes
(rather than the one that just loads and executes them unconditionally). This should go at the
end of main()
, replacing the existing call to
kernel::process::load_processes
:
kernel::process::load_and_check_processes(
    board_kernel,
    &platform, // note this function requires providing the platform.
    chip,
    core::slice::from_raw_parts(
        &_sapps as *const u8,
        &_eapps as *const u8 as usize - &_sapps as *const u8 as usize,
    ),
    core::slice::from_raw_parts_mut(
        &mut _sappmem as *mut u8,
        &_eappmem as *const u8 as usize - &_sappmem as *const u8 as usize,
    ),
    &mut PROCESSES,
    &FAULT_RESPONSE,
    &process_management_capability,
)
.unwrap_or_else(|err| {
    debug!("Error loading processes!");
    debug!("{:?}", err);
});
Compile and install the updated kernel.
SUCCESS: We now have a kernel that can check credentials!
Installing Apps and Verifying Credentials
Now, our kernel will only run an app if it has a valid SHA256 credential. To verify this, recompile and install the blink app but do not add credentials:
cd libtock-c/examples/blink
touch main.c
make
tockloader install --erase
Now, let's list the processes on the board using the process console. Note that we
need to run the console-start command to activate the Tock process console.
$ tockloader listen
Initialization complete. Entering main loop
NRF52 HW INFO: Variant: AAF0, Part: N52840, Package: QI, Ram: K256, Flash: K1024
console-start
tock$
Now we can list the processes:
tock$ list
PID Name Quanta Syscalls Restarts Grants State
0 blink 0 0 0 0/16 CredentialsFailed
tock$
Tip: You can re-disable the process console by using the
console-stop
command.
You can see our app is in the state CredentialsFailed
meaning it will not
execute (and the LEDs are not blinking).
To fix this, we can add the SHA256 credential.
cd libtock-c/examples/blink
tockloader tbf credential add sha256
tockloader install
Now when we list the processes, we see:
tock$ list
PID ShortID Name Quanta Syscalls Restarts Grants State
0 0x3be6efaa blink 0 323 0 1/16 Yielded
And we can verify the app is both running and now has a specifically assigned short ID.
Permitting Both Credentialed and Non-Credentialed Apps
The default operation is not quite what we want. We want all apps to run, but only credentialed apps to have access to the syscalls.
To allow all apps to run, even if they don't pass the credential check, we need to configure our checker. Doing that is actually quite simple. We just need to modify the credential checker we are using to not require credentials.
In tock/kernel/src/process_checker/basic.rs
, modify the
require_credentials()
function to not require credentials:
impl AppCredentialsChecker<'static> for AppCheckerSha256 {
    fn require_credentials(&self) -> bool {
        false // change from true to false
    }
    ...
}
Then recompile and install. Now even a non-credentialed process should run:
tock$ list
PID ShortID Name Quanta Syscalls Restarts Grants State
0 Unique c_hello 0 8 0 1/16 Yielded
SUCCESS: We now can determine if an app is credentialed or not!
Step 2: Permitting Syscalls for only Credentialed Apps
Our second step is to implement a policy that permits syscall access to the encryption capsule only for credentialed apps. All other syscalls should be permitted.
Tock provides the SyscallFilter
trait to do this. An object that implements
this trait is used on every syscall to check if that syscall should be executed
or not. By default all syscalls are permitted.
The interface looks like this:
pub trait SyscallFilter {
    // Return Ok(()) to permit the syscall, and any Err() to deny.
    fn filter_syscall(
        &self,
        process: &dyn process::Process,
        syscall: &syscall::Syscall,
    ) -> Result<(), errorcode::ErrorCode> {
        Ok(())
    }
}
We need to implement the single filter_syscall() function with our desired behavior.
To do this, create a new file called syscall_filter.rs
in the board's src/
directory. Then insert the code below as a starting point:
use kernel::errorcode;
use kernel::platform::SyscallFilter;
use kernel::process;
use kernel::syscall;

pub struct TrustedSyscallFilter {}

impl SyscallFilter for TrustedSyscallFilter {
    fn filter_syscall(
        &self,
        process: &dyn process::Process,
        syscall: &syscall::Syscall,
    ) -> Result<(), errorcode::ErrorCode> {
        // To determine if the process has credentials we can use the
        // `process.get_credentials()` function.

        // Now inspect the `syscall` the app is calling. If the `driver_number`
        // is not XXXXXX, then return `Ok(())` to permit the call. Otherwise, if
        // the process is not credentialed, return `Err(ErrorCode::NOSUPPORT)`. If
        // the process is credentialed return `Ok(())`.
    }
}
Documentation for the Syscall
type is
here.
Save this file and include it from the board's main.rs:
mod syscall_filter;
Now to put our new policy into effect we need to use it when we configure the
kernel via the KernelResources
trait.
impl KernelResources for Platform {
    ...
    type SyscallFilter = syscall_filter::TrustedSyscallFilter;
    ...
    fn syscall_filter(&self) -> &'static Self::SyscallFilter {
        self.sysfilter
    }
    ...
}
Also you need to instantiate the TrustedSyscallFilter
:
let sysfilter = static_init!(
    syscall_filter::TrustedSyscallFilter,
    syscall_filter::TrustedSyscallFilter {}
);
and add it to the Platform
struct:
struct Platform {
    ...
    sysfilter: &'static syscall_filter::TrustedSyscallFilter,
}
Then when we create the platform object near the end of main()
, we can add our
sysfilter
:
let platform = Platform {
    ...
    sysfilter,
}
SUCCESS: We now have a custom syscall filter based on app credentials.
Verifying HOTP Now Needs Credentials
Now you should be able to install your HOTP app to the board without adding the SHA256 credential and verify that it is no longer able to access the encryption capsule. You should see output like this:
$ tockloader listen
Tock HOTP App Started. Usage:
* Press a button to get the next HOTP code for that slot.
* Hold a button to enter a new HOTP secret for that slot.
Flash read
Initialized state
ERROR cannot encrypt key
If you use tockloader to add credentials
(tockloader tbf credential add sha256
) and then re-install your app it should
run as expected.
Wrap-up
You now have implemented access control on important kernel resources and enabled your app to use it. This provides platform builders robust flexibility in architecting the security framework for their devices.
Kernel Boot and Setup
The goal of this module is to make you comfortable with the Tock kernel, how it is structured, how the kernel is setup at boot, and how capsules provide additional kernel functionality.
During this you will:
- Learn how Tock uses Rust's memory safety to provide isolation for free
- Read the Tock boot sequence, seeing how Tock uses static allocation
- Learn about Tock's event-driven programming
The Tock Boot Sequence
The very first thing that runs on a Tock board is an assembly function called
initialize_ram_jump_to_main()
. Rust requires that memory is configured before
any Rust code executes, so this must run first. As the function name implies,
control is then transferred to the main()
function in the board's main.rs
file. Tock intentionally tries to give the board as much control over the
operation of the system as possible, hence why there is very little between
reset and the board's main function being called.
Open the main.rs
file for your board in your favorite editor. This file
defines the board's platform: how it boots, what capsules it uses, and what
system calls it supports for userland applications.
How is everything organized?
Find the declaration of the platform struct. This is typically called
struct Platform or is named after the board (it's pretty early in
the file). This declares the structure representing the platform. It has many
the file). This declares the structure representing the platform. It has many
fields, many of which are capsules that make up the board's platform. These
fields are resources that the board needs to maintain a reference to for future
use, for example for handling system calls or implementing kernel policies.
Recall that everything in the kernel is statically allocated. We can see that
here. Every field in the platform struct
is a reference to an object with a
static lifetime.
Many capsules themselves take a lifetime as a parameter, which is currently
always 'static
.
The boot process is primarily the construction of this platform structure. Once
everything is set up, the board will pass the constructed platform object to
kernel::kernel_loop
and we're off to the races.
How do things get started?
After RAM initialization, the reset handler invokes the main()
function in the
board main.rs file. main()
is typically rather long as it must setup and
configure all of the drivers and capsules the board needs. Many capsules depend
on other, lower layer abstractions that need to be created and initialized as
well.
Take a look at the first few lines of main()
. The boot sequence generally sets
up any low-level microcontroller configuration, initializes the MCU peripherals,
and sets up debugging capabilities.
How do capsules get created?
The bulk of main() creates and initializes the capsules which provide the main
functionality of the Tock system. For example, to provide userspace applications
with the ability to print serial data, boards typically create a console
capsule. An example of this looks like:
pub unsafe fn main() {
    ...
    // Create a virtualizer on top of an underlying UART device. Use 115200 as
    // the baud rate.
    let uart_mux = components::console::UartMuxComponent::new(channel, 115200)
        .finalize(components::uart_mux_component_static!());

    // Instantiate the console capsule. This uses the virtualized UART provided
    // by the uart_mux.
    let console = components::console::ConsoleComponent::new(
        board_kernel,
        capsules_core::console::DRIVER_NUM,
        uart_mux,
    )
    .finalize(components::console_component_static!());
    ...
}
Eventually, once all of the capsules have been created, we will populate the platform structure with them:
pub unsafe fn main() {
    ...
    let platform = Platform {
        console: console,
        gpio: gpio,
        ...
    }
}
What Are Components?
When setting up the capsules (such as console
), we used objects in the
components
crate to help. In Tock, components are helper objects that make it
easier to correctly create and initialize capsules.
For example, if we look under the hood of the console
component, the main
initialization of console looks like:
impl Component for ConsoleComponent {
    fn finalize(self, s: Self::StaticInput) -> Console {
        let grant_cap = create_capability!(capabilities::MemoryAllocationCapability);

        let write_buffer = static_init!([u8; DEFAULT_BUF_SIZE], [0; DEFAULT_BUF_SIZE]);
        let read_buffer = static_init!([u8; DEFAULT_BUF_SIZE], [0; DEFAULT_BUF_SIZE]);

        let console_uart = static_init!(
            UartDevice,
            UartDevice::new(self.uart_mux, true)
        );
        // Don't forget to call setup() to register our new UartDevice with the
        // mux!
        console_uart.setup();

        let console = static_init!(
            Console<'static>,
            console::Console::new(
                console_uart,
                write_buffer,
                read_buffer,
                self.board_kernel.create_grant(self.driver_num, &grant_cap),
            )
        );
        // Very easy to forget to set the client reference for callbacks!
        hil::uart::Transmit::set_transmit_client(console_uart, console);
        hil::uart::Receive::set_receive_client(console_uart, console);

        console
    }
}
Much of the code within components is boilerplate that would otherwise be copied for each board, making it easy to subtly miss an important setup step. Components encapsulate the setup complexity and can be reused on each board Tock supports.
The static_init!
macro is simply an easy way to allocate a static variable
with a call to new
. The first parameter is the type, the second is the
expression to produce an instance of the type.
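For example, a minimal (made-up) use of the macro looks like this:

// Hypothetical: statically allocate a Cell<u32> initialized to 0. The macro
// evaluates to a &'static mut reference to the newly allocated value.
let counter = static_init!(core::cell::Cell<u32>, core::cell::Cell::new(0));
counter.set(1);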
Components end up looking somewhat complex because they can be re-used across multiple boards and different microcontrollers. More detail here.
A brief aside on buffers:
Notice that the console needs both a read and a write buffer for it to use. These buffers have to have a 'static lifetime. This is because low-level hardware drivers, especially those that use DMA, require 'static buffers. Since we don't know exactly when the underlying operation will complete, and we must promise that the buffer outlives the operation, we use the one lifetime that is assured to be alive at the end of an operation: 'static. Other code with buffers without a 'static lifetime, such as userspace processes, uses capsules like Console by copying data into internal 'static buffers before passing them to the console. The buffer passing architecture looks like this:
Let's Make a Tock Board!
The code continues on, creating all of the other capsules that are needed by the
platform. Towards the end of main()
, we've created all of the capsules we
need, and it's time to create the actual platform structure
(let platform = Platform {...}
).
Boards must implement two traits to successfully run the Tock kernel:
SyscallDriverLookup
and KernelResources
.
SyscallDriverLookup
The first, SyscallDriverLookup
, is how the kernel maps system calls from
userspace to the correct capsule within the kernel. The trait requires one
function:
trait SyscallDriverLookup {
    /// Mapping of syscall numbers to capsules.
    fn with_driver<F, R>(&self, driver_num: usize, f: F) -> R
    where
        F: FnOnce(Option<&dyn SyscallDriver>) -> R;
}
The with_driver()
function executes the provided function f()
by passing it
the correct capsule based on the provided driver_num
. A brief example of an
implementation of SyscallDriverLookup
looks like:
impl SyscallDriverLookup for Platform {
    fn with_driver<F, R>(&self, driver_num: usize, f: F) -> R
    where
        F: FnOnce(Option<&dyn kernel::syscall::SyscallDriver>) -> R,
    {
        match driver_num {
            capsules_core::console::DRIVER_NUM => f(Some(self.console)),
            capsules_core::gpio::DRIVER_NUM => f(Some(self.gpio)),
            ...
            _ => f(None),
        }
    }
}
Why require each board to provide this mapping? Why not implement this mapping centrally in the kernel? Tock requires boards to implement this mapping because we consider the assignment of driver numbers to specific capsules a platform-specific decision. While Tock does have a default mapping of driver numbers, boards are not obligated to use it. This flexibility allows boards to expose multiple copies of the same capsule to userspace, for example.
KernelResources
The KernelResources
trait is the main method for configuring the operation of
the core Tock kernel. Policies such as the syscall mapping described above,
syscall filtering, and watchdog timers are configured through this trait. More
information is contained in a separate course module.
Loading processes
Once the platform is all set up, the board is responsible for loading processes into memory:
pub unsafe fn main() {
    ...
    kernel::process::load_processes(
        board_kernel,
        chip,
        core::slice::from_raw_parts(
            &_sapps as *const u8,
            &_eapps as *const u8 as usize - &_sapps as *const u8 as usize,
        ),
        core::slice::from_raw_parts_mut(
            &mut _sappmem as *mut u8,
            &_eappmem as *const u8 as usize - &_sappmem as *const u8 as usize,
        ),
        &mut PROCESSES,
        &FAULT_RESPONSE,
        &process_management_capability,
    )
    .unwrap_or_else(|err| {
        debug!("Error loading processes!");
        debug!("{:?}", err);
    });
    ...
}
A Tock process is represented by a kernel::Process
struct. In principle, a
platform could load processes by any means. In practice, all existing platforms
write an array of Tock Binary Format (TBF) entries to flash. The kernel provides
the load_processes
helper function that takes in a flash address and begins
iteratively parsing TBF entries and making Process
es.
A brief aside on capabilities:
To call load_processes(), the board had to provide a reference to a &process_management_capability. The load_processes() function internally requires significant direct access to memory, and it should only be called in very specific places. To prevent its misuse (for example from within a capsule), calling it requires a capability to be passed in with the arguments. To create a capability, the calling code must be able to call unsafe. Code (i.e. capsules) which cannot use unsafe therefore has no way to create a capability and therefore cannot call the restricted function.
Starting the kernel
Finally, the board passes a reference to the current platform, the chip the platform is built on (used for interrupt and power handling), and optionally an IPC capsule to start the main kernel loop:
board_kernel.kernel_loop(&platform, chip, Some(&platform.ipc), &main_loop_capability);
From here, Tock is initialized, the kernel event loop takes over, and the system enters steady state operation.
Tock Kernel Policies
As a kernel for a security-focused operating system, the Tock kernel is responsible for implementing various policies on how the kernel should handle processes. Examples of the types of questions these policies help answer are: What happens when a process has a hardfault? Is the process restarted? What syscalls are individual processes allowed to call? Which process should run next? Different systems may need to answer these questions differently, and Tock includes a robust platform for configuring each of these policies.
Background on Relevant Tock Design Details
If you are new to this aspect of Tock, this section provides a quick primer on the key aspects of Tock which make it possible to implement process policies.
The KernelResources
Trait
The central mechanism for configuring the Tock kernel is through the
KernelResources
trait. Each board must implement KernelResources
and provide
the implementation when starting the main kernel loop.
The general structure of the KernelResources
trait looks like this:
/// This is the primary method for configuring the kernel for a specific board.
pub trait KernelResources<C: Chip> {
    /// How driver numbers are matched to drivers for system calls.
    type SyscallDriverLookup: SyscallDriverLookup;

    /// System call filtering mechanism.
    type SyscallFilter: SyscallFilter;

    /// Process fault handling mechanism.
    type ProcessFault: ProcessFault;

    /// Credentials checking policy.
    type CredentialsCheckingPolicy: CredentialsCheckingPolicy<'static> + 'static;

    /// Context switch callback handler.
    type ContextSwitchCallback: ContextSwitchCallback;

    /// Scheduling algorithm for the kernel.
    type Scheduler: Scheduler<C>;

    /// Timer used to create the timeslices provided to processes.
    type SchedulerTimer: scheduler_timer::SchedulerTimer;

    /// WatchDog timer used to monitor the running of the kernel.
    type WatchDog: watchdog::WatchDog;

    // Getters for each policy/mechanism.
    fn syscall_driver_lookup(&self) -> &Self::SyscallDriverLookup;
    fn syscall_filter(&self) -> &Self::SyscallFilter;
    fn process_fault(&self) -> &Self::ProcessFault;
    fn credentials_checking_policy(&self) -> &'static Self::CredentialsCheckingPolicy;
    fn context_switch_callback(&self) -> &Self::ContextSwitchCallback;
    fn scheduler(&self) -> &Self::Scheduler;
    fn scheduler_timer(&self) -> &Self::SchedulerTimer;
    fn watchdog(&self) -> &Self::WatchDog;
}
Many of these resources can be effectively no-ops by defining them to use the () type (a sketch of this follows the list below). Every board that wants to support processes must provide:
- A SyscallDriverLookup, which maps the DRIVER_NUM in system calls to the appropriate driver in the kernel.
- A Scheduler, which selects the next process to execute. The kernel provides several common schedulers a board can use, or boards can create their own.
Application Identifiers
The Tock kernel can implement different policies based on different levels of trust for a given app. For example, a trusted core app written by the board owner may be granted full privileges, while a third-party app may be limited in which system calls it can use or how many times it can fail and be restarted.
To implement per-process policies, however, the kernel must be able to establish a persistent identifier for a given process. To do this, Tock supports process credentials which are hashes, signatures, or other credentials attached to the end of a process's binary image. With these credentials, the kernel can cryptographically verify that a particular app is trusted. The kernel can then establish a persistent identifier for the app based on its credentials.
A specific process binary can be appended with zero or more credentials. The
per-board KernelResources::CredentialsCheckingPolicy
then uses these
credentials to establish if the kernel should run this process and what
identifier it should have. The Tock kernel design does not impose any
restrictions on how applications or processes are identified. For example, it is
possible to use a SHA256 hash of the binary as an identifier, or an RSA4096
signature as the identifier. As different use cases will want to use different
identifiers, Tock avoids specifying any constraints.
However, long identifiers are difficult to use in software. To enable more
efficient handling of application identifiers, Tock also includes mechanisms
for a per-process ShortID
which is stored in 32 bits. This can be used
internally by the kernel to differentiate processes. As with long identifiers,
ShortIDs are set by KernelResources::CredentialsCheckingPolicy
and are chosen
on a per-board basis. The only property the kernel enforces is that ShortIDs
must be unique among processes installed on the board. For boards that do not
need to use ShortIDs, the ShortID type includes a LocallyUnique
option which
ensures the uniqueness invariant is upheld without the overhead of choosing
distinct, unique numbers for each process.
pub enum ShortID {
    LocallyUnique,
    Fixed(core::num::NonZeroU32),
}
Module Overview
In this module, we are going to experiment with using the KernelResources
trait to implement per-process restart policies. We will create our own
ProcessFaultPolicy
that implements different fault handling behavior based on
whether the process included a hash in its credentials footer.
Custom Process Fault Policy
A process fault policy decides what the kernel does with a process when it crashes (i.e. hardfaults). The policy is implemented as a Rust module that implements the following trait:
pub trait ProcessFaultPolicy {
    /// `process` faulted, now decide what to do.
    fn action(&self, process: &dyn Process) -> process::FaultAction;
}
When a process faults, the kernel will call the action()
function and then
take the returned action on the faulted process. The available actions are:
pub enum FaultAction {
    /// Generate a `panic!()` with debugging information.
    Panic,
    /// Attempt to restart the process.
    Restart,
    /// Stop the process.
    Stop,
}
Let's create a custom process fault policy that restarts signed processes up to a configurable maximum number of times, and immediately stops unsigned processes.
We start by defining a struct
for this policy:
pub struct RestartTrustedAppsFaultPolicy {
    /// Number of times to restart trusted apps.
    threshold: usize,
}
We then create a constructor:
impl RestartTrustedAppsFaultPolicy {
    pub const fn new(threshold: usize) -> RestartTrustedAppsFaultPolicy {
        RestartTrustedAppsFaultPolicy { threshold }
    }
}
Now we can add a template implementation for the ProcessFaultPolicy
trait:
impl ProcessFaultPolicy for RestartTrustedAppsFaultPolicy {
    fn action(&self, process: &dyn Process) -> process::FaultAction {
        process::FaultAction::Stop
    }
}
To determine if a process is trusted, we will use its ShortID. A ShortID is an enum defined as follows:
pub enum ShortID {
    /// No specific ID, just an abstract value we know is unique.
    LocallyUnique,
    /// Specific 32 bit ID number guaranteed to be unique.
    Fixed(core::num::NonZeroU32),
}
If the app has a short ID of ShortID::LocallyUnique
then it is untrusted (i.e.
the kernel could not validate its signature or it was not signed). If the app
has a concrete number as its short ID (i.e. ShortID::Fixed(u32)
), then we
consider the app to be trusted.
To determine how many times the process has already been restarted we can use
process.get_restart_count()
.
Putting this together, we have an outline for our custom policy:
use kernel::process;
use kernel::process::Process;
use kernel::process::ProcessFaultPolicy;

pub struct RestartTrustedAppsFaultPolicy {
    /// Number of times to restart trusted apps.
    threshold: usize,
}

impl RestartTrustedAppsFaultPolicy {
    pub const fn new(threshold: usize) -> RestartTrustedAppsFaultPolicy {
        RestartTrustedAppsFaultPolicy { threshold }
    }
}

impl ProcessFaultPolicy for RestartTrustedAppsFaultPolicy {
    fn action(&self, process: &dyn Process) -> process::FaultAction {
        let restart_count = process.get_restart_count();
        let short_id = process.short_app_id();

        // Check if the process is trusted. If so, return the restart action
        // if the restart count is below the threshold. Otherwise return stop.
        // If the process is not trusted, return stop.
        process::FaultAction::Stop
    }
}
TASK: Finish implementing the custom process fault policy.
Save your completed custom fault policy in your board's src/
directory as
trusted_fault_policy.rs
. Then add mod trusted_fault_policy;
to the top of
the board's main.rs
file.
Testing Your Custom Fault Policy
First we need to configure your kernel to use your new fault policy.
-
Find where your
fault_policy
was already defined. Update it to use your new policy:

let fault_policy = static_init!(
    trusted_fault_policy::RestartTrustedAppsFaultPolicy,
    trusted_fault_policy::RestartTrustedAppsFaultPolicy::new(3)
);
-
Now we need to configure the process loading mechanism to use this policy for each app.
kernel::process::load_processes(
    board_kernel,
    chip,
    flash,
    memory,
    &mut PROCESSES,
    fault_policy, // this is where we provide our chosen policy
    &process_management_capability,
)
-
Now we can compile the updated kernel and flash it to the board:
# in your board directory:
make install
Now we need an app to actually crash so we can observe its behavior. Tock has a
test app called crash_dummy
that causes a hardfault when a button is pressed.
Compile that and load it on to the board:
-
Compile the app:
cd libtock-c/examples/tests/crash_dummy
make
-
Install it on the board:
tockloader install
With the new kernel installed and the test app loaded, we can inspect the status of the board. Use tockloader to connect to the serial port:
tockloader listen
Note: if multiple serial port options appear, generally the lower numbered port is what you want to use.
Now we can use the onboard console to inspect which processes we have on the board. Run the list command:
tock$ list
PID Name Quanta Syscalls Restarts Grants State
0 crash_dummy 0 6 0 1/15 Yielded
Note that crash_dummy
is in the Yielded
state. This means it is just waiting
for a button press.
Press the first button on your board (it is "Button 1" on the nRF52840-dk). This will cause the process to fault. You won't see any output, and since the app was not signed it was just stopped. Now run the list command again:
tock$ list
PID Name Quanta Syscalls Restarts Grants State
0 crash_dummy 0 6 0 0/15 Faulted
Now the process is in the Faulted state! This means the kernel will not try to run it. Our policy is working! Next we have to verify signed apps so that we can restart trusted apps.
App Credentials
With our custom fault policy, we can implement different responses based on whether an app is trusted or not. Now we need to configure the kernel to verify apps, and check if we trust them or not. For this example we will use a simple credential: a sha256 hash. This credential is simple to create, and serves as a stand-in for more useful credentials such as cryptographic signatures.
This will require a couple pieces:
- We need to actually include the hash in our app.
- We need a mechanism in the kernel to check the hash exists and is valid.
Signing Apps
We can use Tockloader to add a hash to a compiled app.
First, compile the app:
$ cd libtock-c/examples/blink
$ make
Now, add the hash credential:
$ tockloader tbf credential add sha256
It's fine to add the credential to all architectures, or you can specify which TBF to add it to.
To check that the credential was added, we can inspect the TAB:
$ tockloader inspect-tab
You should see output like the following:
$ tockloader inspect-tab
[INFO ] No TABs passed to tockloader.
[STATUS ] Searching for TABs in subdirectories.
[INFO ] Using: ['./build/blink.tab']
[STATUS ] Inspecting TABs...
TAB: blink
build-date: 2023-06-09 21:52:59+00:00
minimum-tock-kernel-version: 2.0
tab-version: 1
included architectures: cortex-m0, cortex-m3, cortex-m4, cortex-m7
Which TBF to inspect further? cortex-m4
cortex-m4:
version : 2
header_size : 104 0x68
total_size : 16384 0x4000
checksum : 0x722e64be
flags : 1 0x1
enabled : Yes
sticky : No
TLV: Main (1) [0x10 ]
init_fn_offset : 41 0x29
protected_size : 0 0x0
minimum_ram_size : 5068 0x13cc
TLV: Program (9) [0x20 ]
init_fn_offset : 41 0x29
protected_size : 0 0x0
minimum_ram_size : 5068 0x13cc
binary_end_offset : 8360 0x20a8
app_version : 0 0x0
TLV: Package Name (3) [0x38 ]
package_name : kv_interactive
TLV: Kernel Version (8) [0x4c ]
kernel_major : 2
kernel_minor : 0
kernel version : ^2.0
TLV: Persistent ACL (7) [0x54 ]
Write ID : 11 0xb
Read IDs (1) : 11
Access IDs (1) : 11
TBF Footers
Footer
footer_size : 8024 0x1f58
Footer TLV: Credentials (128)
Type: SHA256 (3) ✓ verified
Length: 32
Footer TLV: Credentials (128)
Type: Reserved (0)
Length: 7976
Note at the bottom, there is a Footer TLV with SHA256 credentials! Because tockloader was able to double-check the hash was correct there is ✓ verified next to it.
SUCCESS: We now have an app with a hash credential!
Verifying Credentials in the Kernel
To have the kernel check that our hash credential is present and valid, we need to add a credential checker before the kernel starts each process.
In main.rs, we need to create the app checker. Tock includes a basic SHA256 credential checker, so we can use that:
use capsules_extra::sha256::Sha256Software;
use kernel::process_checker::basic::AppCheckerSha256;

// Create the software-based SHA engine.
let sha = static_init!(Sha256Software<'static>, Sha256Software::new());
kernel::deferred_call::DeferredCallClient::register(sha);

// Create the credential checker.
static mut SHA256_CHECKER_BUF: [u8; 32] = [0; 32];
let checker = static_init!(
    AppCheckerSha256,
    AppCheckerSha256::new(sha, &mut SHA256_CHECKER_BUF)
);
sha.set_client(checker);
Then we need to add this to our Platform struct:
struct Platform {
    ...
    credentials_checking_policy: &'static AppCheckerSha256,
}
Add it when creating the platform object:
let platform = Platform {
    ...
    credentials_checking_policy: checker,
};
And configure our kernel to use it:
impl KernelResources for Platform {
    ...
    type CredentialsCheckingPolicy = AppCheckerSha256;
    ...
    fn credentials_checking_policy(&self) -> &'static Self::CredentialsCheckingPolicy {
        self.credentials_checking_policy
    }
    ...
}
Finally, we need to use the function that checks credentials when loading processes (rather than the one that just loads and executes them unconditionally):
kernel::process::load_and_check_processes(
    board_kernel,
    &platform, // note this function requires providing the platform.
    chip,
    core::slice::from_raw_parts(
        &_sapps as *const u8,
        &_eapps as *const u8 as usize - &_sapps as *const u8 as usize,
    ),
    core::slice::from_raw_parts_mut(
        &mut _sappmem as *mut u8,
        &_eappmem as *const u8 as usize - &_sappmem as *const u8 as usize,
    ),
    &mut PROCESSES,
    &FAULT_RESPONSE,
    &process_management_capability,
)
.unwrap_or_else(|err| {
    debug!("Error loading processes!");
    debug!("{:?}", err);
});
(Instead of just kernel::process::load_processes(...).)
Compile and install the updated kernel.
SUCCESS: We now have a kernel that can check credentials!
Installing Apps and Verifying Credentials
Now, our kernel will only run an app if it has a valid SHA256 credential. To verify this, recompile and install the blink app but do not add credentials:
cd libtock-c/examples/blink
touch main.c
make
tockloader install --erase
Now, if we list the processes on the board with the process console:
$ tockloader listen
Initialization complete. Entering main loop
NRF52 HW INFO: Variant: AAF0, Part: N52840, Package: QI, Ram: K256, Flash: K1024
tock$ list
PID Name Quanta Syscalls Restarts Grants State
0 blink 0 0 0 0/16 CredentialsFailed
tock$
You can see our app is in the state CredentialsFailed, meaning it will not execute (and the LEDs are not blinking).
To fix this, we can add the SHA256 credential.
cd libtock-c/examples/blink
tockloader tbf credential add sha256
tockloader install
Now when we list the processes, we see:
tock$ list
PID ShortID Name Quanta Syscalls Restarts Grants State
0 0x3be6efaa blink 0 323 0 1/16 Yielded
And we can verify the app is both running and now has a specifically assigned short ID.
Implementing the Privileged Behavior
The default operation is not quite what we want. We want all apps to run, but only credentialed apps to be restarted.
First, we need to allow all apps to run, even if they don't pass the credential check. Doing that is actually quite simple. We just need to modify the credential checker we are using to not require credentials.
In tock/kernel/src/process_checker/basic.rs, modify the require_credentials() function to not require credentials:

impl AppCredentialsChecker<'static> for AppCheckerSha256 {
    fn require_credentials(&self) -> bool {
        false // change from true to false
    }
    ...
}
Then recompile and install. Now both processes should run:
tock$ list
PID ShortID Name Quanta Syscalls Restarts Grants State
0 0x3be6efaa blink 0 193 0 1/16 Yielded
1 Unique c_hello 0 8 0 1/16 Yielded
But note, only the credentialed app (blink) has a specific short ID.
Second, we need to use the presence of a specific short ID in our fault policy to only restart credentialed apps. We just need to check if the short ID is fixed or not:
impl ProcessFaultPolicy for RestartTrustedAppsFaultPolicy {
    fn action(&self, process: &dyn Process) -> process::FaultAction {
        let restart_count = process.get_restart_count();
        let short_id = process.short_app_id();

        // Check if the process is trusted based on whether it has a fixed short
        // ID. If so, return the restart action if the restart count is below
        // the threshold. Otherwise return stop.
        match short_id {
            kernel::process::ShortID::LocallyUnique => process::FaultAction::Stop,
            kernel::process::ShortID::Fixed(_) => {
                if restart_count < self.threshold {
                    process::FaultAction::Restart
                } else {
                    process::FaultAction::Stop
                }
            }
        }
    }
}
That's it! Now we have the full policy: we verify application credentials, and handle process faults accordingly.
Task
Compile and install multiple applications, including the crash dummy app, and verify that only credentialed apps are successfully restarted.
SUCCESS: We now have implemented an end-to-end security policy in Tock!
TicKV Key-Value Store
TicKV is a flash-optimized key-value store written in Rust. Tock supports using TicKV within the OS to enable the kernel and processes to store and retrieve key-value objects in local flash memory.
TicKV and Key-Value Design
This section provides a quick overview of the TicKV and Key-Value stack in Tock.
TicKV Structure and Format
TicKV can store 8 byte keys and values up to 2037 bytes. TicKV is page-based, meaning that each object is stored entirely on a single page in flash.
Note: for familiarity, we use the term "page", but in actuality TicKV uses the size of the smallest erasable region, not necessarily the actual size of a page in the flash memory.
Each object is assigned to a page based on the lowest 16 bits of the key:
object_page_index = (key & 0xFFFF) % <number of pages>
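In code, the assignment might look like the following (an illustrative sketch, not TicKV's actual implementation; here key is the 64-bit hashed key and num_regions is the number of erasable regions TicKV manages):

fn object_page_index(key: u64, num_regions: usize) -> usize {
    // Use the low 16 bits of the hashed key to pick a region.
    ((key & 0xFFFF) as usize) % num_regions
}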
Each object in TicKV has the following structure:
0 3 11 (bytes)
---------------------------------- ... -
| Header | Key | Value |
---------------------------------- ... -
The header has this structure:
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 (bits)
-------------------------------------------------
| Version=1 |V| res | Length |
-------------------------------------------------
- Version: Format of the object; currently this is always 1.
- Valid (V): 1 if this object is valid, 0 otherwise. This is set to 0 to delete an object.
- Length (Len): The total length of the object, including the length of the header (3 bytes), key (8 bytes), and value.
Subsequent keys either start at the first byte of a page or immediately after another object. If a key cannot fit on the page assigned by the object_page_index, it is stored on the next page with sufficient room.
Objects are updated in TicKV by invalidating the existing object (setting the V flag to 0) and then writing the new value as a new object. This removes the need to erase and re-write an entire page of flash to update a specific value.
TicKV on Tock Format
The previous section describes the generic format of TicKV. Tock builds upon this format by adding a header to the value buffer to add additional features.
The full object format for TicKV objects in Tock has the following structure:
0 3 11 12 16 20 (bytes)
------------------------------------------------ ... ----
| TicKV | Key |Ver| Length | Write | Value |
| Header | | | | ID | |
------------------------------------------------ ... ----
<--TicKV Header+Key--><--Tock TicKV Header+Value-...---->
- Version (Ver): One byte version of the Tock header. Currently 0.
- Length: Four byte length of the value.
- Write ID: Four byte identifier for restricting access to this object.
The central addition is the Write ID, which is a u32 indicating the identifier of the writer that added the key-value object. The write ID of 0 is reserved for the kernel to use. Each process can be assigned its own write ID via TBF headers to use for storing state, such as in a TicKV database. Each process and the kernel can then be granted specific read and update permissions based on the stored write ID. If a process has read permissions for the specific ID stored in the Write ID field, then it can access that key-value object. If a process has update permissions for the specific ID stored in the Write ID field, then it can change the value of that key-value object.
Tock Key-Value APIs
Tock supports two key-value oriented APIs: an upper and a lower API. The lower API expects hashed keys and is designed with flash as the underlying storage in mind. The upper API is a more traditional K-V interface.
The lower interface looks like this. Note: this version is simplified for illustration; the actual version is complete Rust.
pub trait KVSystem {
    /// The type of the hashed key. For example `[u8; 8]`.
    type K: KeyType;

    /// Create the hashed key.
    fn generate_key(&self, unhashed_key: [u8], key: K) -> Result<(), (K, buffer, ErrorCode)>;

    /// Add a K-V object to the store. Error on collision.
    fn append_key(&self, key: K, value: [u8]) -> Result<(), (K, buffer, ErrorCode)>;

    /// Retrieve a value from the store.
    fn get_value(&self, key: K, value: [u8]) -> Result<(), (K, buffer, ErrorCode)>;

    /// Mark a K-V object as deleted.
    fn invalidate_key(&self, key: K) -> Result<(), (K, ErrorCode)>;

    /// Cleanup the store.
    fn garbage_collect(&self) -> Result<(), ErrorCode>;
}
(You can find the full definition in tock/kernel/src/hil/kv_system.rs.)
In terms of TicKV, the KVSystem interface only uses the TicKV header. The Tock header is only used in the upper level API.
pub trait KVStore {
    /// Get a key-value object.
    fn get(&self, key: [u8], value: [u8], perms: StoragePermissions) -> Result<(), (buffer, buffer, ErrorCode)>;

    /// Set or update a key-value object.
    fn set(&self, key: [u8], value: [u8], perms: StoragePermissions) -> Result<(), (buffer, buffer, ErrorCode)>;

    /// Delete a key-value object.
    fn delete(&self, key: [u8], perms: StoragePermissions) -> Result<(), (buffer, ErrorCode)>;
}
As you can see, each of these APIs requires a StoragePermissions so the capsule can verify that the requestor has access to the given K-V object.
Key-Value in Userspace
Userspace applications have access to the K-V store via the kv_driver.rs capsule. This capsule provides an interface for applications to use the upper layer get-set-delete API.
However, applications need permission to use persistent storage. This is granted via headers in the TBF header for the application.
Applications have three fields for permissions: a write ID, multiple read IDs, and multiple modify IDs.
- write_id: u32: This u32 specifies the ID used when the application creates a new K-V object. If this is 0, then the application does not have write access. (A write_id of 0 is reserved for the kernel.)
- read_ids: [u32]: These read IDs specify which k-v objects the application can call get() on. If this is empty or does not include the application's write_id, then the application will not be able to retrieve its own objects.
- modify_ids: [u32]: These modify IDs specify which k-v objects the application can edit, either by replacing or deleting. Again, if this is empty or does not include the application's write_id, then the application will not be able to update or delete its own objects.
These headers can be added at compilation time with elf2tab or after the TAB has been created using Tockloader.
To have elf2tab add the header, it needs to be run with additional flags:
elf2tab ... --write_id 10 --read_ids 10,11,12 --access_ids 10,11,12 <list of ELFs>
To add it with tockloader (run in the app directory):
tockloader tbf tlv add persistent_acl 10 10,11,12 10,11,12
Using K-V Storage
To use the K-V storage, load the kv-interactive app:
cd libtock-c/examples/tests/kv_interactive
make
tockloader tbf tlv add persistent_acl 10 10,11,12 10,11,12
tockloader install
Now via the terminal, you can create and view k-v objects by typing set, get, or delete.
$ tockloader listen
set mykey hello
Setting mykey=hello
Set key-value
get mykey
Getting mykey
Got value: hello
delete mykey
Deleting mykey
Managing TicKV Database on your Host Computer
You can interact with a board's k-v store via tockloader on your host computer.
View the Contents
To view the entire DB:
tockloader tickv dump
Which should give something like:
[INFO ] Using jlink channel to communicate with the board.
[INFO ] Using settings from KNOWN_BOARDS["nrf52dk"]
[STATUS ] Dumping entire TicKV database...
[INFO ] Using settings from KNOWN_BOARDS["nrf52dk"]
[INFO ] Dumping entire contents of Tock-style TicKV database.
REGION 0
TicKV Object hash=0xbbba2623865c92c0 version=1 flags=8 length=24 valid=True checksum=0xe83988e0
Value: 00000000000b000000
TockTicKV Object version=0 write_id=11 length=0
Value:
REGION 1
TicKV Object hash=0x57b15d172140dec1 version=1 flags=8 length=28 valid=True checksum=0x32542292
Value: 00040000000700000038313931
TockTicKV Object version=0 write_id=7 length=4
Value: 38313931
REGION 2
TicKV Object hash=0x71a99997e4830ae2 version=1 flags=8 length=28 valid=True checksum=0xbdc01378
Value: 000400000000000000000000ca
TockTicKV Object version=0 write_id=0 length=4
Value: 000000ca
REGION 3
TicKV Object hash=0x3df8e4a919ddb323 version=1 flags=8 length=30 valid=True checksum=0x70121c6a
Value: 0006000000070000006b6579313233
TockTicKV Object version=0 write_id=7 length=6
Value: 6b6579313233
REGION 4
TicKV Object hash=0x7bc9f7ff4f76f244 version=1 flags=8 length=15 valid=True checksum=0x1d7432bb
Value:
TicKV Object hash=0x9efe426e86d82864 version=1 flags=8 length=79 valid=True checksum=0xd2ac393f
Value: 001000000000000000a2a4a6a6a8aaacaec2c4c6c6c8caccce000000000000000000000000000000000000000000000000000000000000000000000000000000
TockTicKV Object version=0 write_id=0 length=16
Value: a2a4a6a6a8aaacaec2c4c6c6c8caccce
REGION 5
TicKV Object hash=0xa64cf33980ee8805 version=1 flags=8 length=29 valid=True checksum=0xa472da90
Value: 0005000000070000006d796b6579
TockTicKV Object version=0 write_id=7 length=5
Value: 6d796b6579
REGION 6
TicKV Object hash=0xf17b4d392287c6e6 version=1 flags=8 length=79 valid=True checksum=0x854d8de0
Value: 00030000000700000033343500000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
TockTicKV Object version=0 write_id=7 length=3
Value: 333435
...
[INFO ] Finished in 3.468 seconds
You can see all of the hashed keys and stored values, as well as their headers.
Add a Key-Value Object
You can add a k-v object using tockloader:
tockloader tickv append newkey newvalue
Note that by default tockloader uses a write_id of 0, so that k-v object will only be accessible to the kernel. To specify a specific write_id so an app can access it:
tockloader tickv append appkey appvalue --write-id 10
Wrap-Up
You now know how to use a Key-Value store in your Tock apps as well as in the kernel. Tock's K-V stack supports access control on stored objects, and can be used simultaneously by both the kernel and userspace applications.
Write an environment sensing application
Process overview, relocation model and system call API
In this section, we're going to learn about processes (a.k.a. applications) in Tock, and build our own applications in C.
Get a C application running on your board
You'll find the outline of a C application in the directory exercises/app. Take a look at the code in main.c. So far, this application merely prints "Hello, World!".
The code uses the standard C library routine printf to compose a message using a format string and print it to the console. Let's break down what the code layers are here:
- printf is provided by the C standard library (implemented by newlib). It takes the format string and arguments, and generates an output string from them. To actually write the string to standard out, printf calls _write.
- _write (in libtock-c's sys.c) is a wrapper for actually writing to output streams (in this case, standard out, a.k.a. the console). It calls the Tock-specific console writing function putnstr.
- putnstr (in libtock-c's console.c) buffers the data to be written, calls putnstr_async, and acts as a synchronous wrapper, yielding until the operation is complete.
- Finally, putnstr_async (in libtock-c's console.c) performs the actual system calls, calling allow, subscribe, and command to enable the kernel to access the buffer, request a callback when the write is complete, and begin the write operation respectively.
The application could accomplish all of this by invoking Tock system calls directly, but using libraries makes for a much cleaner interface and allows users to not need to know the inner workings of the OS.
Loading an application
Okay, let's build and load this simple program.
- Erase all other applications from the development board:

  $ tockloader erase-apps

- Build the application and load it. (Note: tockloader install automatically searches the current working directory and its subdirectories for Tock binaries.)

  $ tockloader install --make

- Check that it worked:

  $ tockloader listen

The output should look something like:

$ tockloader listen
No device name specified. Using default "tock"
Using "/dev/cu.usbserial-c098e5130012 - Hail IoT Module - TockOS"
Listening for serial output.
Hello, World!
Creating your own application
Now that you've got a basic app working, modify it so that it continuously prints out Hello World twice per second. You'll want to use the user library's timer facilities to manage this:
Timer
You'll find the interface for timers in libtock/timer.h. The function you'll find useful today is:
#include <timer.h>
void delay_ms(uint32_t ms);
This function sleeps until the specified number of milliseconds have passed, and then returns. So we call this function "synchronous": no further code will run until the delay is complete.
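One possible solution to the exercise above uses delay_ms directly (this is just a sketch; other structures built on the timer library also work):

#include <stdio.h>
#include <timer.h>

int main(void) {
  while (1) {
    printf("Hello World\n");
    delay_ms(500);  // 500 ms between prints, i.e. twice per second
  }
  return 0;
}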
Write an app that periodically samples the on-board sensors
Now that we have the ability to write applications, let's do something a little more complex. The development board you are using has several sensors on it. These sensors include a light sensor, a humidity sensor, and a temperature sensor. Each sensing medium can be accessed separately via the Tock user library. We'll just be using the light and temperature for this exercise.
Light
The interface in libtock/ambient_light.h is used to measure ambient light conditions in lux. imix uses the ISL29035 sensor, but the userland library is abstracted from the details of particular sensors. It contains the function:
#include <ambient_light.h>
int ambient_light_read_intensity_sync(int* lux);
Note that the light reading is written to the location passed as an argument, and the function returns non-zero in the case of an error.
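For example, a minimal (hypothetical) app using this function might look like:

#include <stdio.h>
#include <ambient_light.h>

int main(void) {
  int lux = 0;
  int rc = ambient_light_read_intensity_sync(&lux);
  if (rc != 0) {
    printf("Error reading light sensor: %d\n", rc);
  } else {
    printf("Ambient light: %d lux\n", lux);
  }
  return 0;
}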
Temperature
The interface in libtock/temperature.h is used to measure ambient temperature in degrees Celsius, times 100. imix uses the SI7021 sensor. It contains the function:
#include <temperature.h>
int temperature_read_sync(int* temperature);
Again, this function returns non-zero in the case of an error.
Read sensors in a Tock application
Using the example program you're working on, write an application that reads all of the sensors on your development board and reports their readings over the serial port.
As a bonus, experiment with toggling an LED when readings are above or below a certain threshold:
LED
The interface in libtock/led.h is used to control lights on Tock boards. On the Hail board, there are three LEDs which can be controlled: Red, Blue, and Green. The functions in the LED module are:
#include <led.h>
int led_count(void);
Which returns the number of LEDs available on the board.
int led_on(int led_num);
Which turns an LED on, accessed by its number.
int led_off(int led_num);
Which turns an LED off, accessed by its number.
int led_toggle(int led_num);
Which toggles the state of an LED, accessed by its number.
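Putting the sensor and LED interfaces above together, one possible sketch of the sensing exercise looks like the following (the 30 lux threshold is arbitrary and purely for illustration):

#include <stdio.h>
#include <ambient_light.h>
#include <temperature.h>
#include <led.h>
#include <timer.h>

int main(void) {
  while (1) {
    int lux = 0;
    int temp = 0;

    if (ambient_light_read_intensity_sync(&lux) != 0) {
      printf("Error reading light sensor\n");
    }
    if (temperature_read_sync(&temp) != 0) {
      printf("Error reading temperature sensor\n");
    }

    // The temperature is reported in hundredths of a degree Celsius.
    printf("Light: %d lux, Temperature: %d (0.01 C)\n", lux, temp);

    // Bonus: light the first LED when it is dark.
    if (lux < 30) {
      led_on(0);
    } else {
      led_off(0);
    }

    delay_ms(1000);
  }
  return 0;
}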
Graduation
Now that you have the basics of Tock down, we encourage you to continue to explore and develop with Tock! This book includes a "slimmed down" version of Tock to make it easy to get started, but you will likely want to set up a more complete development environment to continue. Luckily, this shouldn't be too difficult since you have the tools installed already.
Using the latest kernel
The Tock kernel is actively developed, and you likely want to build upon the latest features. To do this, you should get the Tock source from the repository:
$ git clone https://github.com/tock/tock
While the master branch tends to be relatively stable, you may want to use the latest release instead. Tock is thoroughly tested before a release, so this should be a reliable place to start. To select a release, you should check out the correct tag. For example, for the 1.4 release this looks like:
$ cd tock
$ git checkout release-1.4
You should use the latest release. Check the releases page for the name of the latest release.
Now, you can compile the board-specific kernel in the Tock repository. For example, to compile the kernel for imix:
$ cd boards/imix
$ make
All of the operations described in the course should work the same way on the main repository.
Using the full selection of apps
The book includes some very minimal apps, and many more can be found in the libtock-c repository. To use this, you should start by cloning the repository:
$ git clone https://github.com/tock/libtock-c
Now you can compile and run apps inside of the examples folder. For instance, you can install the basic "Hello World!" app:
$ cd libtock-c/examples/c_hello
$ make
$ tockloader install
With the libtock-c repository you have access to the full suite of Tock apps, as well as additional libraries including BLE and Lua support.
Deprecated Course Modules
These modules were previously developed but may not quite match the current Tock code at this point. That is, the general ideas are still relevant and correct, but the specific code might be somewhat outdated.
We keep these for interested readers, but want to note that it might take a bit more problem solving/updating to follow these steps than originally intended.
Keep the client happy
You, an engineer newly added to a top-secret project in your organization, have been directed to commission a new imix node for your most important client. The directions you receive are terse, but helpful:
On Sunday, Nov 4, 2018, Director Hines wrote:
Welcome to the team, need you to get started right away. The client needs an
imix setup with their two apps -- ASAP. Make sure it is working, we need to keep
this client happy.
- DH
Hmm, ok, not a lot to go on, but luckily in orientation you learned how to flash a kernel and apps on to the imix board, so you are all set for your first assignment.
Poking around, you notice a folder called "important-client". While that is a good start, you also notice that it has two apps inside of it! "Alright!" you are thinking, "My first day is shaping up to go pretty smoothly."
After installing those two apps, which are a little mysterious still, you decide that it would also be a good idea to install an app you are more familiar with: the "blink" app. After doing all of that, you run tockloader list and see the following:
$ tockloader list
No device name specified. Using default "tock"
Using "/dev/ttyUSB1 - imix IoT Module - TockOS"
[App 0]
Name: app2
Enabled: True
Sticky: False
Total Size in Flash: 16384 bytes
[App 1]
Name: app1
Enabled: True
Sticky: False
Total Size in Flash: 8192 bytes
[App 2]
Name: blink
Enabled: True
Sticky: False
Total Size in Flash: 2048 bytes
Finished in 1.959 seconds
Checkpoint
Make sure you have these apps installed correctly and tockloader list produces output similar to that shown here.
Great! Now you check that the LED is blinking, and sure enough, no problems there. The blink app was just for testing, so you run tockloader uninstall blink to remove it. So far, so good, Tock! But, before you prepare to head home after a successful day, you start to wonder if maybe this was a little too easy. Also, if you get this wrong, it's not going to look good as the new person on the team.
Looking in the folders for the two applications, you notice a brief description of the apps, and a URL. Ok, maybe you can check if everything is working. After trying things for a little bit, everything seems to be in order. You tell the director the board is ready and head home a little early—you did just successfully complete your first project for a major client after all.
Back at Work the Next Day
Expecting a more challenging project after how well things went yesterday, you are instead greeted by this email:
On Monday, Nov 5, 2018, Director Hines wrote:
I know you are new, but what did you do?? I've been getting calls all morning
from the client, the imix board you gave them ran out battery already!! Are you
sure you set up the board correctly? Fix it, and get it back to me later today.
- DH
Well, that's not good. You already removed the blink app, so it can't be that. What you need is some way to inspect the board and see if something looks like it is going awry. You first try:
$ tockloader listen
to see if any debugging information is being printed. A little, but nothing helpful. Before trying to look around the code, you decided to try sending the board a plea for help:
help
and, surprisingly, it responded!
Welcome to the process console.
Valid commands are: help status list stop start
Ok! Maybe the process console can help. Try the status command:
Total processes: 2
Active processes: 2
Timeslice expirations: 4277
It seems this tool is actually able to inspect the current system and the active processes! But hmmm, it seems there are a lot of "timeslice expirations". From orientation, you remember that processes are allocated only a certain quantum of time to execute, and if they exceed that the kernel forces a context switch back to the kernel. If that is happening a lot, then the board is likely unable to go to sleep! That could explain why the battery is draining so fast!
But which process is at fault? Perhaps we should try another command. Maybe list:
PID Name Quanta Syscalls Dropped Callbacks State
00 app2 0 336 0 Yielded
01 app1 8556 1439951 0 Running
Ok! Now we have the status of individual applications. And aha! We can clearly see the faulty application. From our testing we know that one app detects button presses and one app is transmitting sensor data. Let's see if we can disable the faulty app somehow and see which data packets we are still getting. Going back to the help command, the stop command seems promising:
stop <app name>
Time to Fix the App
After debugging, we now know a couple things about the issue:
- The name of the faulty app.
- That it is functionally correct but is for some reason consuming excess CPU cycles.
Using this information, dig into the faulty app.
A Quick Fix
To get the director off your back, you should be able to introduce a simple fix that will reduce wakeups by waiting a bit between samples.
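For example, if the app is polling in a tight loop, simply sleeping between iterations lets the kernel put the chip into a low-power state most of the time. A sketch (sample_once() is a hypothetical stand-in for whatever work the faulty app already does each iteration):

while (1) {
  sample_once();  // hypothetical placeholder for the app's existing work
  delay_ms(500);  // wait between samples so the kernel can sleep the chip
}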
A Better Way
While the quick fix will slow the number of wakeups, you know that you can do better than polling for something like a button press! Tock supports asynchronous operations allowing user processes to subscribe to interrupts.
Looking at the button interface (in button.h), it looks like we'll first have to enable interrupts and then sign up to listen to them.
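A sketch of that structure, using the button functions from libtock-c (the callback body is illustrative):

#include <button.h>

// Called by the kernel when the button state changes.
static void button_callback(int btn_num, int val, int arg2, void* ud) {
  if (val == 1) {
    // Button pressed: do the work the app previously did by polling.
  }
}

int main(void) {
  button_subscribe(button_callback, NULL);  // register for button callbacks
  button_enable_interrupt(0);               // enable interrupts for button 0
  // No polling loop needed; the process sleeps until a button event arrives.
  return 0;
}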
Once this energy-optimal patch is in place, it'll be time to kick off a triumphant e-mail to the director, and then off to celebrate!
Create a "Hello World" capsule
Now that you've seen how Tock initializes and uses capsules, you're going to write a new one. At the end of this section, your capsule will sample the humidity sensor once a second and print the results as serial output. But you'll start with something simpler: printing "Hello World" to the debug console once on boot.
The imix board configuration you've looked through has a capsule for this tutorial already set up. The capsule is a separate Rust crate located in exercises/capsule. You'll complete this exercise by filling it in.
In addition to a constructor, our capsule has a start function defined that is currently empty. The board configuration calls this function once it has initialized the capsule.
Eventually, the start method will kick off a state machine for periodic humidity readings, but for now, let's just print something to the debug console and return:
debug!("Hello from the kernel!");
$ cd [PATH_TO_BOOK]/imix
$ make program
$ tockloader listen
No device name specified.
Using default "tock"
Using "/dev/ttyUSB0 - Imix IoT Module - TockOS"
Listening for serial output.
Hello from the kernel!
Extend your capsule to print "Hello World" every second
In order for your capsule to keep track of time, it will need to depend on another capsule that implements the Alarm interface. We'll have to do something similar for reading the accelerometer, so this is good practice.
The Alarm HIL includes several traits, Alarm, Client, and Frequency, all in the kernel::hil::time module. You'll use the set_alarm and now methods from the Alarm trait to set an alarm for a particular value of the clock. Note that both methods accept arguments in the alarm's native clock frequency, which is available using the Alarm trait's associated Frequency type:
// Native clock frequency in Hertz.
let frequency = <A::Frequency>::frequency();
Your capsule already implements the alarm::Client trait so it can receive alarm events. The alarm::Client trait has a single method:
fn fired(&self)
Your capsule should now set an alarm in the start method, print the debug message, and set an alarm again when the alarm fires.
Compile and program your new kernel:
$ make program
$ tockloader listen
No device name specified. Using default "tock"
Using "/dev/ttyUSB0 - Imix IoT Module - TockOS"
Listening for serial output.
TOCK_DEBUG(0): /home/alevy/hack/helena/rustconf/tock/boards/imix/src/accelerate.rs:31: Hello World
TOCK_DEBUG(0): /home/alevy/hack/helena/rustconf/tock/boards/imix/src/accelerate.rs:31: Hello World
TOCK_DEBUG(0): /home/alevy/hack/helena/rustconf/tock/boards/imix/src/accelerate.rs:31: Hello World
TOCK_DEBUG(0): /home/alevy/hack/helena/rustconf/tock/boards/imix/src/accelerate.rs:31: Hello World
Extend your capsule to sample the humidity once a second
The steps for reading the humidity sensor from your capsule are similar to using the alarm. You'll use a capsule that implements the humidity HIL, which includes the HumidityDriver and HumidityClient traits, both in kernel::hil::sensors.
The HumidityDriver trait includes the method read_humidity, which initiates a humidity reading. The HumidityClient trait has a single method for receiving readings:
fn callback(&self, humidity: usize);
Implement logic to initiate a humidity reading every second and report the results.
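A sketch of the two pieces (illustrative only; assumes the capsule holds the humidity driver in `self.humidity` and keeps the alarm logic from the previous step):

// When the alarm fires, start a humidity reading instead of printing directly.
fn fired(&self) {
    self.humidity.read_humidity();
    // Re-arm the alarm as before ...
}

// When the reading completes, the humidity client callback reports it.
fn callback(&self, humidity: usize) {
    debug!("Humidity {}", humidity);
}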
Compile and program your kernel:
$ make program
$ tockloader listen
No device name specified. Using default "tock"
Using "/dev/ttyUSB0 - Imix IoT Module - TockOS"
Listening for serial output.
Humidity 2731
Humidity 2732
Some further questions and directions to explore
Your capsule used the si7021 and virtual alarm. Take a look at the code behind each of these services:
-
Is the humidity sensor on-chip or a separate chip connected over a bus?
-
What happens if you request two humidity readings back-to-back?
-
Is there a limit on how many virtual alarms can be created?
-
How many virtual alarms does the imix boot sequence create?
Extra credit: Write a virtualization capsule for humidity sensor (∞)
If you have extra time, try writing a virtualization capsule for the Humidity HIL that will allow multiple clients to use it. This is a fairly open ended task, but you might find inspiration in the virtual_alarm and virtual_i2c capsules.
Tock Mini Tutorials
These tutorials walk through how to use some various features of Tock. They are narrower in scope than the course, but try to explain in detail how various Tock apps work.
You will need the libtock-c repository to run these tutorials. You should check out a copy of libtock-c by running:
$ git clone https://github.com/tock/libtock-c
libtock-c contains many example Tock applications as well as the library support code for running C and C++ apps on Tock. If you are looking to develop Tock applications you will likely want to start with an existing app in libtock-c and modify it.
Setup
You need to be able to compile and load the Tock kernel and Tock applications. See the getting started guide for how to get set up.
You also need hardware that supports Tock.
The tutorials assume you have a Tock kernel loaded on your hardware board. To get a kernel installed, follow these steps.
-
Obtain the Tock Source. You can clone a copy of the Tock repository to get the kernel source:
$ git clone https://github.com/tock/tock
$ cd tock
-
Compile Tock. In the root of the Tock directory, compile the kernel for your hardware platform. You can find a list of boards by running
make list
. For example if your board isimix
then:$ make list $ cd boards/imix $ make
If you have another board just replace "imix" with
<your-board>
This will create binaries of the Tock kernel. Tock is compiled with Cargo, a package manager for Rust applications. The first time Tock is built all of the crates must be compiled. On subsequent builds, crates that haven't changed will not have to be rebuilt and the compilation will be faster.
-
Load the Tock Kernel. The next step is to program the Tock kernel onto your hardware. To load the kernel, run:
$ make install
in the board directory. Now you have the kernel loaded onto the hardware. The kernel configures the hardware and provides drivers for many hardware resources, but does not actually include any application logic. For that, we need to load an application.
Note, you only need to program the kernel once. Loading applications does not alter the kernel, and applications can be re-programed without re-programming the kernel.
With the kernel setup, you are ready to try the mini tutorials.
Tutorials
- Blink an LED: Get your first Tock app running.
- Button to Printf(): Print to terminal in response to button presses.
- BLE Advertisement Scanning: Sense nearby BLE packets.
- Sample Sensors and Use Drivers: Use syscalls to interact with kernel drivers.
- Inter-process Communication: Tock's IPC mechanism.
Board compatibility matrix
Tutorial # | Supported boards |
---|---|
1 | All |
2 | All Cortex-M based boards |
3 | Hail and imix |
4 | Hail and imix |
5 | All that support IPC |
Blink: Running Your First App
This guide will help you get the blink app running on top of the Tock kernel.
Instructions
-
Erase any existing applications. First, we need to remove any applications already on the board. Note that Tockloader by default will install any application in addition to whatever is already installed on the board.
$ tockloader erase-apps
-
Install Blink. Tock supports an "app store" of sorts. That is, tockloader can install apps from a remote repository, including Blink. To do this:
$ tockloader install blink
You will have to tell Tockloader that you are OK with fetching the app from the Internet.
Your specific board may require additional arguments, please see the readme in the
boards/
folder for more details. -
Compile and Install Blink. We can also compile the blink app and load our compiled version. The basic C version of blink is located in the libtock-c repository.
-
Clone that repository:
$ cd tock-book
$ git clone https://github.com/tock/libtock-c
-
Then navigate to
examples/blink
:$ cd libtock-c/examples/blink
-
From there, you should be able to compile it and install it by:
$ make
$ tockloader install
When the blink app is installed you should see the LEDs on the board blinking. Congratulations! You have just programmed your first Tock application.
-
Say "Hello!" On Every Button Press
This tutorial will walk you through calling printf()
in response to a button
press.
-
Start a new application. A Tock application in C looks like a typical C application. Let's start with the basics:
#include <stdio.h> int main(void) { return 0; }
You also need a makefile. Copying a makefile from an existing app is the easiest way to get started.
-
Set up a button callback handler. A button press in Tock is treated as an interrupt, and in an application this translates to a function being called, much like in any other event-driven system. To listen for button presses, we first need to define a callback function, then tell the kernel that the callback exists.
#include <stdio.h> #include <button.h> // Callback for button presses. // btn_num: The index of the button associated with the callback // val: 1 if pressed, 0 if depressed static void button_callback(int btn_num, int val, int arg2 __attribute__ ((unused)), void *user_data __attribute__ ((unused)) ) { } int main(void) { button_subscribe(button_callback, NULL); return 0; }
All callbacks from the kernel are passed four arguments, and the meaning of the four arguments depends on the driver. The first three are integers, and can be used to represent buffer lengths, pin numbers, button numbers, and other simple data. The fourth argument is a pointer to user defined object. This pointer is set in the subscribe call (in this example it is set to
NULL
), and returned when the callback fires. -
Enable the button interrupts. By default, the interrupts for the buttons are not enabled. To enable them, we make a syscall. Buttons, like other drivers in Tock, follow the convention that applications can ask the kernel how many there are. This is done by calling
button_count()
.#include <stdio.h> #include <button.h> // Callback for button presses. // btn_num: The index of the button associated with the callback // val: 1 if pressed, 0 if depressed static void button_callback(int btn_num, int val, int arg2 __attribute__ ((unused)), void *user_data __attribute__ ((unused)) ) { } int main(void) { button_subscribe(button_callback, NULL); // Ensure there is a button to use. int count = button_count(); if (count < 1) { // There are no buttons on this platform. printf("Error! No buttons on this platform."); } else { // Enable an interrupt on the first button. button_enable_interrupt(0); } // Can just return here. The application will continue to execute. return 0; }
The button count is checked, and the app only continues if there exists at least one button. To enable the button interrupt,
button_enable_interrupt()
is called with the index of the button to use. In this example we just use the first button. -
Call
printf()
on button press. To print a message, we callprintf()
in the callback.#include <stdio.h> #include <button.h> // Callback for button presses. // btn_num: The index of the button associated with the callback // val: 1 if pressed, 0 if depressed static void button_callback(int btn_num, int val, int arg2 __attribute__ ((unused)), void *user_data __attribute__ ((unused)) ) { // Only print on the down press. if (val == 1) { printf("Hello!\n"); } } int main(void) { button_subscribe(button_callback, NULL); // Ensure there is a button to use. int count = button_count(); if (count < 1) { // There are no buttons on this platform. printf("Error! No buttons on this platform.\n"); } else { // Enable an interrupt on the first button. button_enable_interrupt(0); } // Can just return here. The application will continue to execute. return 0; }
-
Run the application. To try this tutorial application, you can find it in the tutorials app folder. See the first tutorial for details on how to compile and install a C application.
Once installed, when you press the button, you should see "Hello!" printed to the terminal!
Look! A Wild BLE Packet Appeared!
Note! This tutorial will only work on Hail and imix boards.
This tutorial will walk you through getting an app running that scans for BLE advertisements. Most BLE devices typically broadcast advertisements periodically (usually once a second) to allow smartphones and other devices to discover them. The advertisements typically contain the BLE device's ID and name, as well as which services the device provides, and sometimes raw data as well.
To provide BLE connectivity, several Tock boards use the Nordic nRF51822 as a BLE co-processor. In this configuration, the nRF51822 runs all of the BLE operations and exposes a command interface over a UART bus. Luckily for us, Nordic has defined and implemented the entire interface. Better yet, they made it interoperable with their nRF51 SDK. What this means is any BLE app that would run on the nRF51822 directly can be compiled to run on a different microcontroller, and any function calls that would have interacted with the BLE hardware are instead packaged and sent to the nRF51822 co-processor. Nordic calls this tool "BLE Serialization", and Tock has a port of the serialization libraries that Tock applications can use.
So, with that introduction, lets get going.
-
Initialize the BLE co-processor. The first step a BLE serialization app must do is initialize the BLE stack on the co-processor. This can be done with Nordic's SDK, but to simplify things Tock supports the Simple BLE library. The goal of
simple_ble.c
is to wrap the details of the nRF5 SDK and the intricacies of BLE in an easy-to-use library so you can get going with creating BLE devices and not learning the entire spec.#include <simple_ble.h> // Intervals for advertising and connections. // These are some basic settings for BLE devices. However, since we are // only interesting in scanning, these are not particularly relevant. simple_ble_config_t ble_config = { .platform_id = 0x00, // used as 4th octet in device BLE address .device_id = DEVICE_ID_DEFAULT, .adv_name = "Tock", .adv_interval = MSEC_TO_UNITS(500, UNIT_0_625_MS), .min_conn_interval = MSEC_TO_UNITS(1000, UNIT_1_25_MS), .max_conn_interval = MSEC_TO_UNITS(1250, UNIT_1_25_MS) }; int main () { printf("[Tutorial] BLE Scanning\n"); // Setup BLE. simple_ble_init(&ble_config); }
-
Scan for advertisements. With
simple_ble
this is pretty straightforward.int main () { printf("[Tutorial] BLE Scanning\n"); // Setup BLE. simple_ble_init(&ble_config); // Scan for advertisements. simple_ble_scan_start(); }
-
Handle the advertisement received event. Just as the main Tock microcontroller can send commands to the nRF co-processor, the co-processor can send events back. When these occur, a variety of callbacks are generated in
simple_ble
and then passed to users of the library. In this case, we only care aboutble_evt_adv_report()
which is called on each advertisement reception.// Called when each advertisement is received. void ble_evt_adv_report (ble_evt_t* p_ble_evt) { ble_gap_evt_adv_report_t* adv = (ble_gap_evt_adv_report_t*) &p_ble_evt->evt.gap_evt.params.adv_report; }
The
ble_evt_adv_report()
function is passed a pointer to able_evt_t
struct. This is a type from the Nordic nRF51 SDK, and more information can be found in the SDK documentation. -
Display a message for each advertisement. Once we have the advertisement callback, we can use
printf()
like normal.#include <stdio.h> #include <led.h> // Called when each advertisement is received. void ble_evt_adv_report (ble_evt_t* p_ble_evt) { ble_gap_evt_adv_report_t* adv = (ble_gap_evt_adv_report_t*) &p_ble_evt->evt.gap_evt.params.adv_report; // Print some details about the discovered advertisement. printf("Recv Advertisement: [%02x:%02x:%02x:%02x:%02x:%02x] RSSI: %d, Len: %d\n", adv->peer_addr.addr[5], adv->peer_addr.addr[4], adv->peer_addr.addr[3], adv->peer_addr.addr[2], adv->peer_addr.addr[1], adv->peer_addr.addr[0], adv->rssi, adv->dlen); // Also toggle the first LED. led_toggle(0); }
-
Handle some BLE annoyances. The last step to getting a working app is to handle some annoyances using BLE serialization with the
simple_ble
library. Typically errors generated by the nRF51 SDK are severe and mean there is a significant bug in the code. With serialization, however, messages between the two processors can be corrupted or misframed, causing parsing errors. We can ignore these errors safely and just drop the corrupted packet.Additionally, the
simple_ble
library makes it easy to set the address of a BLE device. However, this functionality only works when running on an actual nRF51822. To disable this, we override the weakly definedble_address_set()
function with an empty function.void app_error_fault_handler(uint32_t error_code, uint32_t line_num, uint32_t info) { } void ble_address_set () { }
-
Run the app and see the packets! To try this tutorial application, you can find it in the tutorials app folder.
For any new applications, ensure that the following is in the makefile so that the BLE serialization library is included.
include $(TOCK_USERLAND_BASE_DIR)/libnrfserialization/Makefile.app
Details
This section contains a few notes about the specific versions of BLE serialization used.
Tock currently supports the S130 softdevice version 2.0.0 and SDK 11.0.0.
Reading Sensors From Scratch
Note! This tutorial will only work on Hail and imix boards.
In this tutorial we will cover how to use the syscall interface from applications to kernel drivers, and guide things based on reading the ISL29035 digital light sensor and printing the readings over UART.
OK, let's get started.
-
Setup a generic app for handling asynchronous events. As with most sensors, the ISL29035 is read asynchronously, and a callback is generated from the kernel to userspace when the reading is ready. Therefore, to use this sensor, our application needs to do two things: 1) setup a callback the kernel driver can call when the reading is ready, and 2) instruct the kernel driver to start the measurement. Lets first sketch this out:
#include <tock.h> #define DRIVER_NUM 0x60002 // Callback when the ISL29035 has a light intensity measurement ready. static void isl29035_callback(int intensity, int unused1, int unused2, void* ud) { } int main() { // Tell the kernel about the callback. // Instruct the ISL29035 driver to begin a reading. // Wait until the reading is complete. // Print the resulting value. return 0; }
-
Fill in the application with syscalls. The standard Tock syscalls can be used to actually implement the sketch we made above. We first use the
subscribe
syscall to inform the kernel about the callback in our application. We then use thecommand
syscall to start the measurement. To wait, we use theyield
call to wait for the callback to actually fire. We do not need to useallow
for this application, and typically it is not required for reading sensors.For all syscalls that interact with drivers, the major number is set by the platform. In the case of the ISL29035, it is
0x60002
. The minor numbers are set by the driver and are specific to the particular driver.To save the value from the callback to use in the print statement, we will store it in a global variable.
#include <stdio.h> #include <tock.h> #define DRIVER_NUM 0x60002 static int isl29035_reading; // Callback when the ISL29035 has a light intensity measurement ready. static void isl29035_callback(int intensity, int unused1, int unused2, void* ud) { // Save the reading when the callback fires. isl29035_reading = intensity; } int main() { // Tell the kernel about the callback. subscribe(DRIVER_NUM, 0, isl29035_callback, NULL); // Instruct the ISL29035 driver to begin a reading. command(DRIVER_NUM, 1, 0); // Wait until the reading is complete. yield(); // Print the resulting value. printf("Light intensity reading: %d\n", isl29035_reading); return 0; }
-
Be smarter about waiting for the callback. While the above application works, it's really relying on the fact that we are only sampling a single sensor. In the current setup, if instead we had two sensors with outstanding commands, the first callback that fired would trigger the
yield()
call to return and then theprintf()
would execute. If, for example, sampling the ISL29035 takes 100 ms, and the new sensor only needs 10 ms, the new sensor's callback would fire first and theprintf()
would execute with an incorrect value.To handle this, we can instead use the
yield_for()
call, which takes a flag and only returns when that flag has been set. We can then set this flag in the callback to make sure that ourprintf()
only occurs when the light reading has completed.#include <stdio.h> #include <stdbool.h> #include <tock.h> #define DRIVER_NUM 0x60002 static int isl29035_reading; static bool isl29035_done = false; // Callback when the ISL29035 has a light intensity measurement ready. static void isl29035_callback(int intensity, int unused1, int unused2, void* ud) { // Save the reading when the callback fires. isl29035_reading = intensity; // Mark our flag true so that the `yield_for()` returns. isl29035_done = true; } int main() { // Tell the kernel about the callback. subscribe(DRIVER_NUM, 0, isl29035_callback, NULL); // Instruct the ISL29035 driver to begin a reading. command(DRIVER_NUM, 1, 0); // Wait until the reading is complete. yield_for(&isl29035_done); // Print the resulting value. printf("Light intensity reading: %d\n", isl29035_reading); return 0; }
-
Use the
libtock
library functions. Normally, applications don't use the barecommand
andsubscribe
syscalls. Typically, these are wrapped together into helpful commands inside oflibtock
and come with a function that hides theyield_for()
to a make a synchronous function which is useful for developing applications quickly. Lets port the ISL29035 sensing app to use the Tock Standard Library:#include <stdio.h> #include <isl29035.h> int main() { // Take the ISL29035 measurement synchronously. int isl29035_reading = isl29035_read_light_intensity(); // Print the resulting value. printf("Light intensity reading: %d\n", isl29035_reading); return 0; }
-
Explore more sensors. This tutorial highlights only one sensor. See the sensors app for a more complete sensing application.
Friendly Apps Share Data
This tutorial covers how to use Tock's IPC mechanism to allow applications to communicate and share memory.
Tock IPC Basics
IPC in Tock uses a client-server model. Applications can provide a service by telling the Tock kernel that they provide a service. Each application can only provide a single service, and that service's name is set to the name of the application. Other applications can then discover that service and explicitly share a buffer with the server. Once a client shares a buffer, it can then notify the server to instruct the server to somehow interact with the shared buffer. The protocol for what the server should do with the buffer is service specific and not specified by Tock. Servers can also notify clients, but when and why servers notify clients is service specific.
Example Application
To provide an overview of IPC, we will build an example system consisting of three apps: a random number service, a LED control service, and a main application that uses the two services. While simple, this example both demonstrates the core aspects of the IPC mechanism and should run on any hardware platform.
LED Service
Lets start with the LED service. The goal of this service is to allow other applications to use the shared buffer as a command message to instruct the LED service on how to turn on or off the system's LEDs.
-
We must tell the kernel that our app wishes to provide a service. All that we have to do is call
ipc_register_svc()
.#include "ipc.h" int main(void) { ipc_register_svc(ipc_callback, NULL); return 0; }
-
We also need that callback (
ipc_callback
) to handle IPC requests from other applications. This callback will be called when the client app notifies the service.static void ipc_callback(int pid, int len, int buf, void* ud) { // pid: An identifier for the app that notified us. // len: How long the buffer is that the client shared with us. // buf: Pointer to the shared buffer. }
-
Now lets fill in the callback for the LED application. This is a simplified version for illustration. The full example can be found in the
examples/tutorials
folder.#include "led.h" static void ipc_callback(int pid, int len, int buf, void* ud) { uint8_t* buffer = (uint8_t*) buf; // First byte is the command, second byte is the LED index to set, // and the third byte is whether the LED should be on or off. uint8_t command = buffer[0]; if (command == 1) { uint8_t led_id = buffer[1]; uint8_t led_state = buffer[2] > 0; if (led_state == 0) { led_off(led_id); } else { led_on(led_id); } // Tell the client that we have finished setting the specified LED. ipc_notify_client(pid); break; } }
RNG Service
The RNG service returns the requested number of random bytes in the shared buffer.
-
Again, register that this service exists.
int main(void) { ipc_register_svc(ipc_callback, NULL); return 0; }
-
Also need a callback function for when the client signals the service. The client specifies how many random bytes it wants by setting the first byte of the shared buffer before calling notify.
#include <rng.h> static void ipc_callback(int pid, int len, int buf, void* ud) { uint8_t* buffer = (uint8_t*) buf; uint8_t rng[len]; uint8_t number_of_bytes = buffer[0]; // Fill the buffer with random bytes. int number_of_bytes_received = rng_sync(rng, len, number_of_bytes); memcpy(buffer, rng, number_of_bytes_received); // Signal the client that we have the number of random bytes requested. ipc_notify_client(pid); }
This is again not a complete example but illustrates the key aspects.
Main Logic Client Application
The third application uses the two services to randomly control the LEDs on the board. This application is not a server but instead is a client of the two service applications.
- When using an IPC service, the first step is to discover the service and record its identifier. This will allow the application to share memory with it and notify it. Services are discovered by the name of the application that provides them. Currently these are set in the application Makefile or by default based on the folder name of the application. The examples in Tock commonly use a Java-style naming format.

  int main(void) {
    int led_service = ipc_discover("org.tockos.tutorials.ipc.led");
    int rng_service = ipc_discover("org.tockos.tutorials.ipc.rng");

    return 0;
  }
If the services requested are valid and exist, the return value from ipc_discover is the identifier of the found service. If the service cannot be found, an error is returned.
- Next we must share a buffer with each service (a shared buffer is the only way to share memory between processes), and set up a callback that is called when the server notifies us as a client. Once shared, the kernel will permit both applications to read and modify that memory.

  char led_buf[64] __attribute__((aligned(64)));
  char rng_buf[64] __attribute__((aligned(64)));

  int main(void) {
    int led_service = ipc_discover("org.tockos.tutorials.ipc.led");
    int rng_service = ipc_discover("org.tockos.tutorials.ipc.rng");

    // Setup IPC for LED service
    ipc_register_client_cb(led_service, ipc_callback, NULL);
    ipc_share(led_service, led_buf, 64);

    // Setup IPC for RNG service
    ipc_register_client_cb(rng_service, ipc_callback, NULL);
    ipc_share(rng_service, rng_buf, 64);

    return 0;
  }
- We of course need the callback too. For this app we use the yield_for function to implement the logic synchronously, so all the callback needs to do is set a flag to signal the end of the yield_for.

  bool done = false;

  static void ipc_callback(int pid, int len, int arg2, void* ud) {
    done = true;
  }
- Now we use the two services to implement our application.

  #include <timer.h>

  void app() {
    while (1) {
      // Get two random bytes from the RNG service.
      done = false;
      rng_buf[0] = 2; // Request two bytes.
      ipc_notify_svc(rng_service);
      yield_for(&done);

      // Control the LEDs based on those two bytes.
      done = false;
      led_buf[0] = 1;                      // Control LED command.
      led_buf[1] = rng_buf[0] % NUM_LEDS;  // Choose the LED index.
      led_buf[2] = rng_buf[1] & 0x01;      // On or off.
      ipc_notify_svc(led_service);         // Notify to signal LED service.
      yield_for(&done);

      delay_ms(500);
    }
  }
Try It Out
To test this out, see the complete apps in the IPC tutorial example folder.
To install all of the apps on a board:
$ cd examples/tutorials/05_ipc
$ tockloader erase-apps
$ pushd led && make && tockloader install && popd
$ pushd rng && make && tockloader install && popd
$ pushd logic && make && tockloader install && popd
You should see the LEDs randomly turning on and off!
Kernel Development Guides
These guides provide walkthroughs for specific kernel development tasks. For example, there is a guide on how to add a new syscall interface for userspace applications. The guides are intended to be general and provide high-level instructions which will have to be adapted for the specific functionality to be added.
Over time, these guides will inevitably become out-of-date in that the specific code examples will fail to compile. However, the general design aspects and considerations should still be relevant even if the specific code details have changed. You are encouraged to use these guides as just that, general guides, and to copy from up-to-date examples contained in the Tock repository.
Implementing a Chip Peripheral Driver
This guide covers how to implement a peripheral driver for a particular microcontroller (MCU). For example, if you wanted to add an analog to digital converter (ADC) driver for the Nordic nRF52840 MCU, you would follow the general steps described in this guide.
Overview
The general steps you will follow are:
- Determine the HIL you will implement.
- Create a register mapping for the peripheral.
- Create a struct for the peripheral.
- Implement the HIL interface for the peripheral.
- Create the peripheral driver object and cast the registers to the correct memory location.
The guide will walk through how to do each of these steps.
Background
Implementing a chip peripheral driver increases Tock's support for a particular microcontroller and allows capsules and userspace apps to take more advantage of the hardware provided by the MCU. Peripheral drivers for an MCU are generally implemented on an as-needed basis to support a particular use case, and as such the chips in Tock generally do not have all of the peripheral drivers implemented already.
Peripheral drivers are included in Tock as "trusted code" in the kernel. This means that they can use the unsafe keyword (in fact, they must). However, it also means more care must be taken to ensure they are correct. The use of unsafe should be kept to an absolute minimum and only used where absolutely necessary. This guide explains the one use of unsafe that is required. All other uses of unsafe in a peripheral driver will likely be heavily scrutinized during the pull request review period.
Step-by-Step Guide
The steps from the overview are elaborated on here.
-
Determine the HIL you will implement.
The HILs in Tock are the contract between the MCU-specific hardware and the more generic capsules which use the hardware resources. They provide a common interface that is consistent between different microcontrollers, enabling code higher in the stack to use the interfaces without needing to know any details about the underlying hardware. This common interface also allows the same higher-level code to be portable across different microcontrollers. HILs are implemented as traits in Rust.
All HILs are defined in the kernel/src/hil directory. You should find a HIL that exposes the interface the peripheral you are writing a driver for can provide. There should only be one HIL that matches your peripheral.

Note: As of Dec 2019, the hil directory also contains interfaces that are only provided by capsules for other capsules. For example, the ambient light HIL interface is likely not something an MCU would implement.

It is possible Tock does not currently include a HIL that matches the peripheral you are implementing a driver for. In that case you will also need to create a HIL, which is explained in a different development guide.
Checkpoint: You have identified the HIL your driver will implement.
-
Create a register mapping for the peripheral.
To start implementing the peripheral driver, you must create a new source file within the MCU-specific directory inside of the chips/ directory (for example, chips/<mcu>/src/). The name of this file generally should match the name of the peripheral in the MCU's datasheet.

Include the name of this file inside of the lib.rs (or potentially mod.rs) file in the same directory. This should look like:

pub mod ast;
Inside of the new file, you will first need to define the memory-mapped input/output (MMIO) registers that correspond to the peripheral. Different embedded code ecosystems have devised different methods for doing this, and Tock is no different. Tock has a special library and set of Rust macros to make defining the register map straightforward and using the registers intuitive.
The full register library is here, but to get started, you will first create a structure like this:
use tock_registers::registers::{ReadOnly, ReadWrite, WriteOnly};

register_structs! {
    XyzPeripheralRegisters {
        /// Control register.
        /// The 'Control' parameter constrains this register to only use
        /// fields from a certain group (defined below in the bitfields
        /// section).
        (0x000 => cr: ReadWrite<u32, Control::Register>),
        // Status register.
        (0x004 => s: ReadOnly<u8, Status::Register>),
        /// Spacing between registers in memory.
        (0x008 => _reserved),
        /// Another register with no meaningful fields.
        (0x014 => word: ReadWrite<u32>),
        // Etc.

        // The end of the struct is marked as follows.
        (0x100 => @END),
    }
}
You should replace XyzPeripheral with the name of the peripheral you are writing a driver for. Then, for each register defined in the datasheet, you must specify an entry in the macro. For example, a register is defined like:

(0x000 => cr: ReadWrite<u32, Control::Register>),

where:
- 0x000 is the offset (in bytes) of the register from the beginning of the register map.
- cr is the name of the register in the datasheet.
- ReadWrite is the access control of the register as defined in the datasheet.
- u32 is the size of the register.
- Control::Register maps to the actual bitfields used in the register. You will create this type for this particular peripheral, so you can name this whatever makes sense at this point. Note that it will always end with ::Register due to how Rust macros work. If it doesn't make sense to define the specific bitfields for this register, you can omit this field. For example, an esoteric field in the register map that the implementation does not use likely does not need its bitfields mapped.
Once the register map is defined, you must specify the bitfields for any registers that you gave a specific type to. This looks like the following:
register_bitfields! [
    // First parameter is the register width for the bitfields. Can be u8,
    // u16, u32, or u64.
    u32,

    // Each subsequent parameter is a register abbreviation, its descriptive
    // name, and its associated bitfields. The descriptive name defines this
    // 'group' of bitfields. Only registers defined as
    // ReadWrite<_, Control::Register> can use these bitfields.
    Control [
        // Bitfields are defined as:
        // name OFFSET(shift) NUMBITS(num) [ /* optional values */ ]

        // This is a three-bit field spanning bits 4-6.
        RANGE OFFSET(4) NUMBITS(3) [
            // Each of these defines a name for a value that the bitfield
            // can be written with or matched against. Note that this set is
            // not exclusive--the field can still be written with arbitrary
            // constants.
            VeryHigh = 0,
            High = 1,
            Low = 2
        ],

        // A common case is single-bit bitfields, which usually just mean
        // 'enable' or 'disable' something.
        EN  OFFSET(3) NUMBITS(1) [],
        INT OFFSET(2) NUMBITS(1) []
    ],

    // Another example:
    // Status register
    Status [
        TXCOMPLETE  OFFSET(0) NUMBITS(1) [],
        TXINTERRUPT OFFSET(1) NUMBITS(1) [],
        RXCOMPLETE  OFFSET(2) NUMBITS(1) [],
        RXINTERRUPT OFFSET(3) NUMBITS(1) [],
        MODE        OFFSET(4) NUMBITS(3) [
            FullDuplex = 0,
            HalfDuplex = 1,
            Loopback = 2,
            Disabled = 3
        ],
        ERRORCOUNT OFFSET(6) NUMBITS(3) []
    ],
]
The name in each entry of the register_bitfields! [] list must match the register type provided in the register map declaration. Each register that is used in the driver implementation should have its bitfields declared.

Checkpoint: The register map is correctly described in the driver source file.
-
Create a struct for the peripheral.
Each peripheral driver is implemented with a struct which is later used to create an object that can be passed to code that will use this peripheral driver. The actual fields of the struct are very peripheral specific, but should contain any state that the driver needs to correctly function.
An example struct for a timer peripheral (called the AST in the MCU datasheet) looks like:

pub struct Ast<'a> {
    registers: StaticRef<AstRegisters>,
    callback: OptionalCell<&'a dyn hil::time::AlarmClient>,
}
The struct should contain a reference to the registers defined above (we will explain the StaticRef later). Typically, many drivers respond to certain events (like in this case a timer firing) and therefore need a reference to a client to notify when that event occurs. Notice that the type of the callback handler is specified in the HIL interface.

Peripheral structs typically need a lifetime for references like the callback client reference. By convention Tock peripheral structs use 'a for this lifetime, and you likely want to copy that as well.

Think of what state your driver might need to keep around. This could include a direct memory access (DMA) reference, some configuration flags like the baud rate, or buffer indices. See other Tock peripheral drivers for more examples.
Note: you will most likely need to update this struct as you implement the driver, so to start with this just has to be a best guess.
Hint: you should avoid keeping any state in the peripheral driver struct that is already stored by the hardware itself. For example, if there is an "enabled" bit in a register, then you do not need an "enabled" flag in the struct. Replicating this state leads to bugs when those values get out of sync, and makes it difficult to update the driver in the future.
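For illustration only, here is a hedged sketch of the kind of state a hypothetical UART-style peripheral driver might hold. The ExampleUart name, the ExampleUartRegisters and DmaChannel types, and the specific fields are invented for this example and are not from an existing Tock chip crate:

use core::cell::Cell;
use kernel::common::cells::{OptionalCell, TakeCell};
use kernel::common::StaticRef;
use kernel::hil;

// Hypothetical peripheral driver struct. The exact fields depend entirely on
// the hardware, but runtime-mutable state lives in cells and owned buffers in
// TakeCells, following the conventions described above.
pub struct ExampleUart<'a> {
    // MMIO registers for this peripheral (ExampleUartRegisters is assumed to
    // be defined with register_structs! as shown earlier).
    registers: StaticRef<ExampleUartRegisters>,
    // Client to notify when a transmission completes.
    tx_client: OptionalCell<&'a dyn hil::uart::TransmitClient>,
    // Configuration cached between operations.
    baud_rate: Cell<u32>,
    // Buffer currently owned by the driver during a transfer.
    tx_buffer: TakeCell<'static, [u8]>,
    // Optional reference to a DMA channel, if the MCU uses DMA for the UART.
    dma_channel: OptionalCell<&'a DmaChannel<'a>>,
}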
Peripheral driver structs make extensive use of different "cell" types to hold references to various shared state. The general wisdom is that if the value will ever need to be updated, then it needs to be contained in a cell. See the Tock cell documentation for more details on the cell types and when to use which one. In this example, the callback is stored in an OptionalCell, which can contain a value or not (if the callback is not set), and can be updated if the callback needs to change.

With the struct defined, you should next create a new() function for that struct. This will look like:

impl<'a> Ast<'a> {
    const fn new(registers: StaticRef<AstRegisters>) -> Ast<'a> {
        Ast {
            registers: registers,
            callback: OptionalCell::empty(),
        }
    }
}
Checkpoint: There is a struct for the peripheral that can be created.
-
Implement the HIL interface for the peripheral.
With the peripheral driver struct created, now the main work begins. You can now write the actual logic for the peripheral driver that implements the HIL interface you identified earlier. Implementing the HIL interface is done just like implementing a trait in Rust. For example, to implement the Time HIL for the AST:

impl<'a> hil::time::Time for Ast<'a> {
    type Frequency = Freq16KHz;

    fn now(&self) -> u32 {
        self.get_counter()
    }

    fn max_tics(&self) -> u32 {
        core::u32::MAX
    }
}
You should include all of the functions from the HIL and decide how to implement them.
Some operations will be shared among multiple HIL functions. These should be implemented as functions for the original struct. For example, in the Ast example the HIL function now() uses the get_counter() function. This should be implemented on the main Ast struct:

impl<'a> Ast<'a> {
    const fn new(registers: StaticRef<AstRegisters>) -> Ast<'a> {
        Ast {
            registers: registers,
            callback: OptionalCell::empty(),
        }
    }

    fn get_counter(&self) -> u32 {
        let regs = &*self.registers;
        while self.busy() {}
        regs.cv.read(Value::VALUE)
    }
}
Note the get_counter() function also illustrates how to use the register reference and the Tock register library. The register library includes much more detail on the various register operations enabled by the library.

Checkpoint: All of the functions in the HIL interface have MCU peripheral-specific implementations.
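As a quick, hedged illustration of the register library in use (reusing the example Control and Status bitfields from earlier in this guide; the helper function itself is hypothetical and not part of the AST driver), common operations look like this:

// Hypothetical helper showing common tock-registers operations on the example
// register map defined earlier (cr, s, Control, Status).
fn example_register_ops(&self) {
    let regs = &*self.registers;

    // Read-modify-write a single-bit field, leaving other bits unchanged.
    regs.cr.modify(Control::EN::SET);

    // Write a multi-bit field using one of its named values.
    regs.cr.modify(Control::RANGE::High);

    // Read a field back out as an integer.
    let _range = regs.cr.read(Control::RANGE);

    // Check whether a status bit is set.
    if regs.s.is_set(Status::TXCOMPLETE) {
        // ... handle a completed transmission ...
    }
}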
-
Create the peripheral driver object and cast the registers to the correct memory location.
The last step is to actually create the object so that the peripheral driver can be used by other code. Start by casting the register map to the correct memory address where the registers are actually mapped to. For example:
use kernel::common::StaticRef;

const AST_BASE: StaticRef<AstRegisters> =
    unsafe { StaticRef::new(0x400F0800 as *const AstRegisters) };
StaticRef is a type in Tock designed explicitly for this operation of casting register maps to the correct location in memory. The 0x400F0800 is the address in memory of the start of the registers, and this location will be specified by the datasheet.

Note that creating the StaticRef requires using the unsafe keyword. This is because doing this cast is a fundamentally memory-unsafe operation: this allows whatever is at that address in memory to be accessed through the register interface (which is exposed as a safe interface). In the normal case where the correct memory address is provided there is no concern for system safety as the register interface faithfully represents the underlying hardware. However, suppose an incorrect address was used, and that address actually points to live memory used by the Tock kernel. Now kernel data structures could be altered through the register interface, and this would violate memory safety.

With the address reference created, we can now create the actual driver object:

pub static mut AST: Ast = Ast::new(AST_BASE);
This object will be used by a board's main.rs file to pass, in this case, the driver for the timer hardware to various capsules and other code that needs the underlying timer hardware.
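As a rough sketch of what that looks like (the exact types, component helpers, and method names vary by board and Tock version; this is illustrative, not copied from a real main.rs):

// Hypothetical excerpt from a board's main.rs. The AST object created above is
// handed to the alarm virtualization layer, which capsules then use.
let mux_alarm = static_init!(
    capsules::virtual_alarm::MuxAlarm<'static, sam4l::ast::Ast>,
    capsules::virtual_alarm::MuxAlarm::new(&sam4l::ast::AST)
);
// Register the mux as the AST's single client (the exact method for setting
// the client may differ between Tock versions).
sam4l::ast::AST.set_client(mux_alarm);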
Wrap-Up
Congratulations! You have implemented a peripheral driver for a microcontroller in Tock! We encourage you to submit a pull request to upstream this to the Tock repository.
Implementing a Sensor Driver
This guide describes the steps necessary to implement a capsule in Tock that interfaces with an external IC, like a sensor, memory chip, or display. These are devices which are not part of the same chip as the main microcontroller (MCU), but are on the same board and connected via some physical connection.
Note: to attempt to be generic, this guide will use the term "IC" to refer to the device the driver is for.
Note: "driver" is a bit of an overloaded term in Tock. In this guide, "driver" is used in the generic sense to mean code that interfaces with the external IC.
To illustrate the steps, this guide will use a generic light sensor as the running example. You will need to adapt the generic steps for your particular use case.
Often the goal of an IC driver is to expose an interface to that sensor or other IC to userspace applications. This guide does not cover creating that userspace interface as that is covered in a different guide.
Background
As mentioned, this guide describes creating a capsule. Capsules in Tock are
units of Rust code that extend the kernel to add interesting features, like
interfacing with new sensors. Capsules are "untrusted", meaning they cannot call
unsafe code in Rust and cannot use the unsafe
keyword.
Overview
The high-level steps required are:
- Create a struct for the IC driver.
- Implement the logic to interface with the IC.
Optional:
- Provide a HIL interface for the IC driver.
- Provide a userspace interface for the IC driver.
Step-by-Step Guide
The steps from the overview are elaborated on here.
-
Create a struct for the IC driver.
The driver will be implemented as a capsule, so the first step is to create a new file in the capsules/src directory. The name of this file should be [chipname].rs, where [chipname] is the part number of the IC you are writing the driver for. There are several other examples in the capsules folder.

For our example we will assume the part number is ls1234.

You then need to add the filename to capsules/src/lib.rs like:

pub mod ls1234;
Now inside of the new file you should create a struct with the fields necessary to implement the driver for the IC. In our example we will assume the IC is connected to the MCU with an I2C bus. Your IC might use SPI, UART, or some other standard interface. You will need to adjust how you create the struct based on the interface. You should be able to find examples in the capsules directory to copy from.
The struct will look something like:
pub struct Ls1234<'a> {
    i2c: &'a dyn I2CDevice,
    state: Cell<State>,
    buffer: TakeCell<'static, [u8]>,
    client: OptionalCell<&'a dyn Ls1234Client>,
}
You can see the resources this driver requires to successfully interface with the light sensor:
- i2c: This is a reference to the I2C bus that the driver will use to communicate with the IC. Notice in Tock the type is I2CDevice, and no address is provided. This is because the I2CDevice type wraps the address internally, so that the driver code can only communicate with the correct address.
- state: Often drivers will iterate through various states as they communicate with the IC, and it is common for drivers to keep some state variable to manage this. Our State is defined as an enum, like so:

  #[derive(Copy, Clone, PartialEq)]
  enum State {
      Disabled,
      Enabling,
      ReadingLight,
  }

  Also note that the state variable uses a Cell. This is so that the driver can update the state.
- buffer: This holds a reference to a buffer of memory the driver will use to send messages over the I2C bus. By convention, these buffers are defined statically in the same file as the driver, but then passed to the driver when the board boots. This provides the board flexibility on the buffer to use, while still allowing the driver to hint at the size required for successful operation. In our case the static buffer is defined as:

  pub static mut BUF: [u8; 3] = [0; 3];

  Note the buffer is wrapped in a TakeCell such that it can be passed to the I2C hardware when necessary, and re-stored in the driver struct when the I2C code returns the buffer.
- client: This is the callback that will be called after the driver has received a reading from the sensor. All execution is event-based in Tock, so the caller will not block waiting for a sample, but instead will expect a callback via the client when the sample is ready. The driver has to define the type of the callback by defining the Ls1234Client trait in this case:

  pub trait Ls1234Client {
      fn callback(&self, light_reading: usize);
  }

  Note that the client is stored in an OptionalCell. This allows the callback to not be set initially, and configured at bootup.
Your driver may require other state to be stored as well. You can update this struct as needed for the state required to successfully implement the driver. Note that if the state needs to be updated at runtime it will need to be stored in a cell type. See the cell documentation for more information on the various cell types in Tock.
Note: your driver should not keep any state in the struct that is also stored by the hardware. This easily leads to bugs when that state becomes out of sync, and makes further development on the driver difficult.
The last step is to write a function that enables creating an instance of your driver. By convention, the function is called new() and looks something like:

impl<'a> Ls1234<'a> {
    pub fn new(i2c: &'a dyn I2CDevice, buffer: &'static mut [u8]) -> Ls1234<'a> {
        Ls1234 {
            i2c: i2c,
            state: Cell::new(State::Disabled),
            buffer: TakeCell::new(buffer),
            client: OptionalCell::empty(),
        }
    }
}
This function will get called by the board's main.rs file when the driver is instantiated. All of the static objects or configuration that the driver requires must be passed in here. In this example, a reference to the I2C device and the static buffer for passing messages must be provided.

Checkpoint: You have defined the struct which will become the driver for the IC.
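For context, here is a hedged sketch of what that instantiation might look like in a board's main.rs. It assumes the board already has an I2C mux (mux_i2c) and that 0x29 is the sensor's I2C address; both names and the exact types are invented for this example and vary by Tock version:

// Hypothetical board setup code for the ls1234 capsule.
let ls1234_i2c = static_init!(
    capsules::virtual_i2c::I2CDevice,
    capsules::virtual_i2c::I2CDevice::new(mux_i2c, 0x29)
);
let ls1234 = static_init!(
    capsules::ls1234::Ls1234<'static>,
    capsules::ls1234::Ls1234::new(ls1234_i2c, &mut capsules::ls1234::BUF)
);
// Let the I2C device deliver command_complete() callbacks to our driver.
ls1234_i2c.set_client(ls1234);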
-
Implement the logic to interface with the IC.
Now, you will actually write the code that interfaces with the IC. This requires extending the impl of the driver struct with additional functions appropriate for your particular IC.

With our light sensor example, we likely want to write a sample function for reading a light sensor value:

impl<'a> Ls1234<'a> {
    pub fn new(...) -> Ls1234<'a> {...}

    pub fn start_light_reading(&self) {...}
}
Note that the function name is "start light reading", which is appropriate because of the event-driven, non-blocking nature of the Tock kernel. Actually communicating with the sensor will take some time, and likely requires multiple messages to be sent to and received from the sensor. Therefore, our sample function will not be able to return the result directly. Instead, the reading will be provided in the callback function described earlier.
The start reading function will likely prepare the message buffer in a way that is IC-specific, then send the command to the IC. A rough example of that operation looks like:
impl<'a> Ls1234<'a> {
    pub fn new(...) -> Ls1234<'a> {...}

    pub fn start_light_reading(&self) {
        if self.state.get() == State::Disabled {
            self.buffer.take().map(|buf| {
                self.i2c.enable();

                // Set the first byte of the buffer to the "on" command.
                // This is IC-specific and will be described in the IC
                // datasheet.
                buf[0] = 0b10100000;

                // Send the command to the chip and update our state
                // variable.
                self.i2c.write(buf, 1);
                self.state.set(State::Enabling);
            });
        }
    }
}
The start_light_reading() function kicks off reading the light value from the IC and updates our internal state machine state to mark that we are waiting for the IC to turn on. Now the Ls1234 code is finished for the time being and we now wait for the I2C message to finish being sent. We will know when this has completed based on a callback from the I2C hardware.

impl<'a> I2CClient for Ls1234<'a> {
    fn command_complete(&self, buffer: &'static mut [u8], error: Error) {
        // Handle what happens when the I2C send is complete here.
    }
}
In our example, we have to send a new command after turning on the light sensor to actually read a sampled value. We use our state machine here to organize the code as in this example:
impl<'a> I2CClient for Ls1234<'a> {
    fn command_complete(&self, buffer: &'static mut [u8], _error: Error) {
        match self.state.get() {
            State::Enabling => {
                // Put the read command in the buffer and send it back to
                // the sensor.
                buffer[0] = 0b10100001;
                self.i2c.write_read(buffer, 1, 2);
                // Update our state machine state.
                self.state.set(State::ReadingLight);
            }
            _ => {}
        }
    }
}
This will send another command to the sensor to read the actual light measurement. We also update our self.state variable because when this I2C transaction finishes the exact same command_complete callback will be called, and we must be able to remember where we are in the process of communicating with the sensor.

When the read finishes, the command_complete() callback will fire again, and we must handle the result. Since we now have the reading we can call our client's callback after updating our state machine.

impl<'a> I2CClient for Ls1234<'a> {
    fn command_complete(&self, buffer: &'static mut [u8], _error: Error) {
        match self.state.get() {
            State::Enabling => {
                // Put the read command in the buffer and send it back to
                // the sensor.
                buffer[0] = 0b10100001;
                self.i2c.write_read(buffer, 1, 2);
                // Update our state machine state.
                self.state.set(State::ReadingLight);
            }
            State::ReadingLight => {
                // Extract the light reading value.
                let mut reading: u16 = buffer[0] as u16;
                reading |= (buffer[1] as u16) << 8;

                // Update our state machine state.
                self.state.set(State::Disabled);

                // Trigger our callback with the result.
                self.client.map(|client| client.callback(reading as usize));
            }
            _ => {}
        }
    }
}
Note: likely the sensor would need to be disabled and returned to a low power state.
At this point your driver can read the IC and return the information from the IC. For your IC you will likely need to expand on this general template. You can add additional functions to the main struct implementation, and then expand the state machine to implement those functions. You may also need additional resources, like GPIO pins or timer alarms to implement the state machine for the IC. There are examples in the
capsules/src
folder with drivers that need different resources.
Optional Steps
-
Provide a HIL interface for the IC driver.
The driver so far has a very IC-specific interface. That is, any code that uses the driver must be written specifically with that IC in mind. In some cases that may be reasonable, for example if the IC is very unusual or has a very unique set of features. However, many ICs provide similar functionality, and higher-level code can be written without knowing what specific IC is being used on a particular hardware platform.
To enable this, some IC types have HILs in the kernel/src/hil folder in the sensors.rs file. Drivers can implement one of these HILs and then higher-level code can use the HIL interface rather than a specific IC.

To implement the HIL, you must implement the HIL trait functions:

impl<'a> AmbientLight for Ls1234<'a> {
    fn set_client(&self, client: &'static dyn AmbientLightClient) {
    }

    fn read_light_intensity(&self) -> ReturnCode {
    }
}

The user of the AmbientLight HIL will implement the AmbientLightClient and provide the client through the set_client() function. A minimal sketch of one possible implementation is shown after this list.
-
Provide a userspace interface for the IC driver.
Sometimes the IC is needed by userspace, and therefore needs a syscall interface so that userspace applications can use the IC. Please refer to a separate guide on how to implement a userspace interface for a capsule.
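Referring back to the HIL step above, this is a minimal sketch of how those trait functions might be filled in. It assumes the Ls1234 struct gains a hypothetical al_client field (an OptionalCell holding the AmbientLightClient) and reuses start_light_reading() from earlier:

impl<'a> AmbientLight for Ls1234<'a> {
    fn set_client(&self, client: &'static dyn AmbientLightClient) {
        // Remember the client so the reading can be delivered later.
        self.al_client.set(client);
    }

    fn read_light_intensity(&self) -> ReturnCode {
        // Kick off the asynchronous read; the result is delivered through
        // the AmbientLightClient callback when the I2C transactions finish.
        self.start_light_reading();
        ReturnCode::SUCCESS
    }
}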
Wrap-Up
Congratulations! You have implemented an IC driver as a capsule in Tock! We encourage you to submit a pull request to upstream this to the Tock repository. Tock is happy to accept capsule drivers even if no boards in the Tock repository currently use the driver.
Implementing a System Call Interface for Userspace
This guide provides an overview and walkthrough on how to add a system call interface for userspace applications in Tock. The system call interface exposes some kernel functionality to applications. For example, this could be the ability to sample a new sensor, or use some service like doing AES encryption.
In this guide we will use a running example of providing a userspace interface for a hypothetical water level sensor (the "WS00123" water level sensor). This interface will allow applications to query the current water level, as well as get notified when the water level exceeds a certain threshold.
Setup
This guide assumes you already have existing kernel code that needs a userspace interface. Likely that means there is already a capsule implemented. Please see the other guides if you also need to implement the capsule.
We will assume there is a struct WS00123 {...}
object already implemented that
includes all of the logic needed to interface with this particular water sensor.
Overview
The high-level steps required are:
- Decide on the interface to expose to userspace.
- Map the interface to the existing syscalls in Tock.
- Create grant space for the application.
- Implement the SyscallDriver trait.
- Document the interface.
- Expose the interface to userspace.
- Implement the syscall library in userspace.
Step-by-Step Guide
The steps from the overview are elaborated on here.
-
Decide on the interface to expose to userspace.
Creating the interface for userspace means making design decisions on how applications should be able to interface with the kernel capsule. This can have a lasting impact, and is worth spending some time on up-front to avoid implementing an interface that is difficult to use or does not match the needs of applications.
While there is not a fixed algorithm on how to create such an interface, there are a couple tips that can help with creating the interface:
- Consider the interface for the same or similar functionality in other systems (e.g. Linux, Contiki, TinyOS, RIOT, etc.). These may have iterated on the design and include useful features.
- Ignore the specific details of the capsule that exists or how the particular sensor the syscall interface is for works, and instead consider what a user of that capsule might want. That is, if you were writing an application, how would you expect to use the interface? This might be different from how the sensor or other hardware exposes features.
- Consider other chips that provide similar functionality to the specific one you have. For example, imagine there is a competing water level sensor the "OWlS789". What features do both provide? How would a single interface be usable if a hardware board swapped one out for the other?
The interface should include both actions (called "commands" in Tock) that the application can take (for example, "sample this sensor now"), as well as events (called subscribe upcalls in Tock) that the kernel can trigger inside of an application (for example, when the sensed value is ready).
The interface can also include memory sharing between the application and the kernel. For example, if the application wants to receive a number of samples at once, or if the kernel needs to operate on many bytes (say for example encrypting a buffer), then the interface should allow the application to share some of its memory with the kernel to enable that functionality.
-
Map the interface to the existing syscalls in Tock.
With a sketch of the interface created, the next step is to map that interface to the specific syscalls that the Tock kernel supports. Tock has four main relevant syscall operations that applications can use when interfacing with the kernel:
- allow_readwrite: This lets an application share some of its memory with the kernel, which the kernel can read or write to.
- allow_readonly: This lets an application share some of its memory with the kernel, which the kernel can only read.
- subscribe: This provides a function pointer that the kernel can use to invoke an upcall on the application.
- command: This enables the application to direct the kernel to take some action.
All four also include a couple other parameters to differentiate different commands, subscriptions, or allows. Refer to the more detailed documentation on the Tock syscalls for more information.
As the Tock kernel only supports these syscalls, each feature in the design you created in the first step must be mapped to one or more of them. To help, consider these hypothetical interfaces that an application might have for our water sensor:
- What is the maximum water level? This can be a simple command, where the return value of the command is the maximum water level.
- What is the current water level? This will require two steps. First, there needs to be a subscribe call where the application can setup an upcall function. The kernel will call this when the water level value has been acquired. Second, there will need to be a command to instruct the kernel to take the water level reading.
- Take ten water level samples. This will require three steps. First, the application must use a readwrite allow syscall to share a buffer with the kernel large enough to hold 10 water level readings. Then it must setup a subscribe upcall that the kernel will call when the 10 readings are ready (note this upcall function can be the same as in the single sample case). Finally it will use a command to tell the kernel to start sampling.
- Notify me when the water level exceeds a threshold. A likely way to implement this would be to first require a subscribe syscall for the application to set the function that will get called when the high water level event occurs. Then the application will need to use a command to enable the high water level detection and to optionally set the threshold.
As you do this, remember that kernel operations, and the above system calls, cannot execute for a long period of time. All of the four system calls are non-blocking. Long-running operations should involve an application starting the operation with a command, then having the kernel signal completion with an upcall.
Checkpoint: You have defined how many allow, subscribe, and command syscalls you need, and what each will do.
-
Create grant space for the application.
Grants are regions in a process's memory space that are shared with the kernel. The kernel uses these to store state on behalf of the process. To provide our syscall interface for the water level sensor, we need to setup a grant so that we can store state for all of the requests we may get from processes that want to use the sensor.
The first step to do this is to create a struct that contains fields for all of the state we want to store for each process that uses our syscall interface. By convention in Tock, this struct is named App, but it could have a different name.

In our grant we need to store two things: the high water alert threshold and the upcall function pointer the app provided us when it called subscribe. We, however, only have to handle the threshold. As of Tock 2.0, the upcall is stored internally in the kernel. All we have to do is tell the kernel how many different upcall function pointers per app we need to store. In our case we only need to store one. This is provided as a parameter to Grant.

We can now create an App struct which represents what will be stored in our grant:

pub struct App {
    threshold: usize,
}
Now that we have the type we want to store in the grant region we can create the grant type for it by extending our WS00123 struct:

pub struct WS00123 {
    ...
    apps: Grant<App, 1>,
}
Grant<App, 1> tells the kernel that we want to store the App struct in the grant, as well as one upcall function pointer.

We will also need the grant region to be created by the board and passed in to us by adding it to the capsule's new() function:

impl WS00123 {
    pub fn new(
        ...
        grant: Grant<App, 1>,
    ) -> WS00123 {
        WS00123 {
            ...,
            apps: grant,
        }
    }
}
Now we have somewhere to store values on a per-process basis.
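As a small sketch of how this grant will be used later (the full version appears in the command implementation below; app_id and new_threshold are placeholder names), entering the grant region for a process lets us read or update that process's stored state:

// Sketch: enter the grant region for one process and update its saved
// threshold. enter() fails if the grant cannot be allocated for the process.
let _ = self.apps.enter(app_id, |app, _upcalls| {
    app.threshold = new_threshold;
});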
-
Implement the SyscallDriver trait.

The SyscallDriver trait is how a capsule provides implementations for the various syscalls an application might call. The basic framework looks like:

impl SyscallDriver for WS00123 {
    fn allow_readwrite(
        &self,
        appid: AppId,
        which: usize,
        slice: ReadWriteAppSlice,
    ) -> Result<ReadWriteAppSlice, (ReadWriteAppSlice, ErrorCode)> {
    }

    fn allow_readonly(
        &self,
        app: AppId,
        which: usize,
        slice: ReadOnlyAppSlice,
    ) -> Result<ReadOnlyAppSlice, (ReadOnlyAppSlice, ErrorCode)> {
    }

    fn command(
        &self,
        command_num: usize,
        r2: usize,
        r3: usize,
        caller_id: AppId,
    ) -> CommandReturn {
    }

    fn allocate_grant(&self, process_id: ProcessId) -> Result<(), crate::process::Error>;
}
For details on exactly how these methods work and their return values, TRD104 is their reference document. Notice that there is no subscribe() call, as that is handled entirely in the core kernel. However, the kernel will use the upcall slots passed as the second parameter to Grant<_, UPCALLS> to implement subscribe() on your behalf.

Note: there are default implementations for each of these, so in our water level sensor case we can simply omit the allow_readwrite and allow_readonly calls.

By Tock convention, every syscall interface must at least support the command call with command_num == 0. This allows applications to check if the syscall interface is supported on the current platform. The command must return a CommandReturn::success(). If the command is not present, then the kernel automatically has it return a failure with an error code of ErrorCode::NOSUPPORT. For our example, we use the simple case:

impl SyscallDriver for WS00123 {
    fn command(
        &self,
        command_num: usize,
        r2: usize,
        r3: usize,
        caller_id: AppId,
    ) -> CommandReturn {
        match command_num {
            0 => CommandReturn::success(),
            _ => CommandReturn::failure(ErrorCode::NOSUPPORT),
        }
    }
}
We also want to ensure that we implement the allocate_grant() call. This allows the kernel to ask us to set up our grant region since we know what the type App is and how large it is. We just need the standard implementation that we can directly copy in.

impl SyscallDriver for WS00123 {
    fn allocate_grant(&self, process_id: ProcessId) -> Result<(), kernel::process::Error> {
        // Allocation is performed implicitly when the grant region is entered.
        self.apps.enter(process_id, |_, _| {})
    }
}
Next we can implement more commands so that the application can direct our capsule as to what the application wants us to do. We need two commands, one to sample and one to enable the alert. In both cases the commands must return a CommandReturn, and call functions that likely already exist in the original implementation of the WS00123 sensor. If the functions don't quite exist, then they will need to be added as well.

impl SyscallDriver for WS00123 {
    /// Command interface.
    ///
    /// ### `command_num`
    ///
    /// - `0`: Return SUCCESS if this driver is included on the platform.
    /// - `1`: Start a water level measurement.
    /// - `2`: Enable the water level detection alert. `data` is used as the
    ///   height to set as the threshold for detection.
    fn command(
        &self,
        command_num: usize,
        data: usize,
        _r3: usize,
        app_id: AppId,
    ) -> CommandReturn {
        match command_num {
            0 => CommandReturn::success(),
            1 => self.start_measurement(app_id),
            2 => {
                // Save the threshold for this app.
                self.apps
                    .enter(app_id, |app, _| {
                        app.threshold = data;
                    })
                    .map_or_else(
                        |err| CommandReturn::failure(err.into()),
                        |()| self.set_high_level_detection(),
                    )
            }
            _ => CommandReturn::failure(ErrorCode::NOSUPPORT),
        }
    }
}
The last item that needs to be added is to actually use the upcall when the sensor has been sampled or the alert has been triggered. Actually issuing the upcall will need to be added to the existing implementation of the capsule. As an example, if our water sensor was attached to the board over I2C, then we might trigger the upcall in response to a finished I2C command:
impl i2c::I2CClient for WS00123 {
    fn command_complete(&self, buffer: &'static mut [u8], _error: i2c::Error) {
        ...
        let app_id = <get saved appid for the app that issued the command>;
        let measurement = <calculate water level based on returned I2C data>;
        self.apps.enter(app_id, |app, upcalls| {
            upcalls.schedule_upcall(0, (0, measurement, 0)).ok();
        });
    }
}
Note: the first argument to schedule_upcall() is the index of the upcall to use. Since we only have one upcall we use 0.

There may be other cleanup code required to reset state or prepare the sensor for another sample by a different application, but these are the essential elements for implementing the syscall interface.
Finally, we need to assign our new SyscallDriver implementation a number so that the kernel (and userspace apps) can differentiate this syscall interface from all others that a board supports. By convention this is specified by a global value at the top of the capsule file:

pub const DRIVER_NUM: usize = 0x80000A;
The value cannot conflict with other capsules in use, but can be set arbitrarily, particularly for testing. Tock has a procedure for assigning numbers, and you may need to change this number if the capsule is to be merged into the main Tock repository.
Checkpoint: You have the syscall interface translated from a design to code that can run inside the Tock kernel.
-
Document the interface.
A syscall interface is a contract between the kernel and any number of userspace processes, and processes should be able to be developed independently of the kernel. Therefore, it is helpful to document the new syscall interface you made so applications know how to use the various command, subscribe, and allow calls.
An example markdown file documenting our water level syscall interface is as follows:
---
driver number: 0x80000A
---

# Water Level Sensor WS00123

## Overview

The WS00123 water level sensor can sample the depth of water as well as trigger an event if the water level gets too high.

## Command

- ### Command number: `0`

  **Description**: Does the driver exist?

  **Argument 1**: unused

  **Argument 2**: unused

  **Returns**: SUCCESS if it exists, otherwise ENODEVICE

- ### Command number: `1`

  **Description**: Initiate a sensor reading. When a reading is ready, a callback will be delivered if the process has `subscribed`.

  **Argument 1**: unused

  **Argument 2**: unused

  **Returns**: `EBUSY` if a reading is already pending, `ENOMEM` if there isn't sufficient grant memory available, or `SUCCESS` if the sensor reading was initiated successfully.

- ### Command number: `2`

  **Description**: Enable the high water detection. The alert callback will be delivered if the process has `subscribed`.

  **Argument 1**: The water depth to alert for.

  **Argument 2**: unused

  **Returns**: `EBUSY` if a reading is already pending, `ENOMEM` if there isn't sufficient grant memory available, or `SUCCESS` if the sensor reading was initiated successfully.

## Subscribe

- ### Subscribe number: `0`

  **Description**: Subscribe an upcall for sensor readings and alerts.

  **Upcall signature**: The upcall's first argument is `0` if this is a measurement, and `1` if the callback is an alert. If it is a measurement the second value will be the water level.

  **Returns**: SUCCESS if the subscribe was successful or ENOMEM if the driver failed to allocate memory to store the upcall.
This file should be named <driver_num>_<sensor>.md, or in this case: 80000A_ws00123.md.
-
Expose the interface to userspace.
The last kernel implementation step is to let the main kernel know about this new syscall interface so that if an application tries to use it the kernel knows which implementation of SyscallDriver to call. In each board's main.rs file (e.g. boards/hail/src/main.rs) there is an implementation of the SyscallDriverLookup trait where the board can set up which syscall interfaces it supports. To enable our water sensor interface we add a new entry to the match statement there:

impl SyscallDriverLookup for Hail {
    fn with_driver<F, R>(&self, driver_num: usize, f: F) -> R
    where
        F: FnOnce(Option<&dyn kernel::Driver>) -> R,
    {
        match driver_num {
            ...
            capsules::ws00123::DRIVER_NUM => f(Some(self.ws00123)),
            ...
            _ => f(None),
        }
    }
}
-
Implement the syscall library in userspace.
At this point userspace applications can use our new syscall interface and interact with the water sensor. However, applications would have to call all of the syscalls directly, and that is fairly difficult to get right and not user friendly. Therefore, we typically implement a small library layer in userspace to make using the interface easier.
In this guide we will be setting up a C library, and to do so we will create libtock-c/libtock/ws00123.h and libtock-c/libtock/ws00123.c, both of which will be added to the libtock-c repository. The .h file defines the public interface and constants:

#pragma once

#include "tock.h"

#ifdef __cplusplus
extern "C" {
#endif

#define DRIVER_NUM_WS00123 0x80000A

int ws00123_set_callback(subscribe_cb callback, void* callback_args);
int ws00123_read_water_level();
int ws00123_enable_alerts(uint32_t threshold);

#ifdef __cplusplus
}
#endif
While the .c file provides the implementations:
#include "ws00123.h" #include "tock.h" int ws00123_set_callback(subscribe_cb callback, void* callback_args) { return subscribe(DRIVER_NUM_WS00123, 0, callback, callback_args); } int ws00123_read_water_level() { return command(DRIVER_NUM_WS00123, 1, 0, 0); } int ws00123_enable_alerts(uint32_t threshold) { return command(DRIVER_NUM_WS00123, 2, threshold, 0); }
This is a very basic implementation of the interface, but it provides some more readable names to the numbers that make up the syscall interface. See other examples in libtock for how to make synchronous versions of asynchronous operations (like reading the sensor).
Wrap-Up
Congratulations! You have added a new API for userspace applications using the Tock syscall interface! We encourage you to submit a pull request to upstream this to the Tock repository.
Implementing a HIL Interface
This guide describes the process of creating a new HIL interface in Tock. "HIL"s are one or more Rust traits that provide a standard and shared interface between pieces of the Tock kernel.
Background
The most canonical use for a HIL is to provide an interface to hardware peripherals to capsules. For example, a HIL for SPI provides an interface between the SPI hardware peripheral in a microcontroller and a capsule that needs a SPI bus for its operation. The HIL is a generic interface, so that same capsule can work on different microcontrollers, as long as each microcontroller implements the SPI HIL.
HILs are also used for other generic kernel interfaces that are relevant to capsules. For example, Tock defines a HIL for a "temperature sensor". While a temperature sensor is not generally a hardware peripheral, a capsule may want to use a generic temperature sensor interface and not be restricted to using a particular temperature sensor driver. Having a HIL allows the capsule to use a generic interface. For consistency, these HILs are also specified in the kernel crate.
Note: In the future Tock will likely split these interface types into separate groups.
HIL development often significantly differs from other development in Tock. In particular, HILs can often be written quickly, but tend to take numerous iterations over relatively long periods of time to refine. This happens for three general reasons:
- HILs are intended to be generic, and therefore implementable by a range of different hardware platforms. Designing an interface that works for a range of different hardware takes time and experience with various MCUs, and often incompatibilities aren't discovered until an implementation proves to be difficult (or impossible).
- HILs are Rust traits, and Rust traits are reasonably complex and offer a fair bit of flexibility. Balancing both leveraging the flexibility Rust provides and avoiding undue complexity takes time. Again, often trial-and-error is required to settle on how traits should be composed to best capture the interface.
- HILs are intended to be generic, and therefore will be used in a variety of different use cases. Ensuring that the HIL is expressive enough for a diverse set of uses takes time. Again, often the set of uses is not known initially, and HILs often have to be revised as new use cases are discovered.
Therefore, we consider HILs to be evolving interfaces.
Tips on HIL Development
As getting a HIL interface "correct" is difficult, Tock tends to prefer starting with simple HIL interfaces that are typically inspired by the hardware used when the HIL is initially created. Trying to generalize a HIL too early can lead to complexity that is never actually warranted, or complexity that didn't actually address a problem.
Also, Tock prefers to only include code (or in this case HIL interface functions) that are actually in use by the Tock code base. This ensures that there is at least some method of using or testing various components of Tock. This also suggests that initial HIL development should only focus on an interface that is needed by the initial use case.
Overview
The high-level steps required are:
- Determine that a new HIL interface is needed.
- Create the new HIL in the kernel crate.
- Ensure the HIL file includes sufficient documentation.
Step-by-Step Guide
The steps from the overview are elaborated on here.
-
Determine that a new HIL interface is needed.
Tock includes a number of existing HIL interfaces, and modifying an existing HIL is preferred to creating a new HIL that is similar to an existing interface. Therefore, you should start by verifying an existing HIL does not already meet your need or could be modified to meet your need.
This may seem to be a straightforward step, but it can be complicated by microcontrollers calling similar functionality by different names, and the existing HIL using a standard name or a different name from another microcontroller.
Also, you can reach out via the email list or slack if you have questions about whether a new HIL is needed or an existing one should suffice.
-
Create the new HIL in the kernel crate.
Once you have determined a new HIL is required, you should create the appropriate file in kernel/src/hil. Often the best way to start is to copy an existing HIL that is similar in nature to the interface you are trying to create.

As noted above, HILs evolve over time, and HILs will be periodically updated as issues are discovered or best practices for HIL design are learned. Unfortunately, this means that copying an existing HIL might lead to "mistakes" that must be remedied before the new HIL can be merged.
Likely, it is helpful to open a pull request relatively early in the HIL creation process so that any substantial issues can be detected and corrected quickly.
Tock has a reference guide for dos and don'ts when creating a HIL. Following this guide can help avoid many of the pitfalls that we have run into when creating HILs in the past.
Tock only uses non-blocking interfaces in the kernel, and HILs should reflect that as well. Therefore, for any operation that will take more than a couple cycles to complete, or would require waiting on a hardware flag, a split interface design should be used with a Client trait that receives a callback when the operation has completed. A sketch of this split-phase pattern is shown after this list.
-
Ensure the HIL file includes sufficient documentation.
HIL files should be well commented with Rustdoc style (i.e. ///) comments. These comments are the main source of documentation for HILs.

As HILs grow in complexity or stability, they will be documented separately to fully explain their design and intended use cases.
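To make the split-phase point from the list above concrete, here is a minimal sketch of what such a pair of traits could look like for a hypothetical hardware checksum engine. The trait and method names are invented for illustration and are not an existing Tock HIL:

/// Hypothetical split-phase HIL for a hardware checksum engine. The operation
/// is started with compute() and the result is delivered later through the
/// client callback, rather than by blocking in the kernel.
pub trait Checksum<'a> {
    /// Set the client that will receive the completion callback.
    fn set_client(&self, client: &'a dyn ChecksumClient);

    /// Start computing a checksum over the first `len` bytes of `data`.
    /// Returns the buffer along with an error if the engine is busy.
    fn compute(
        &self,
        data: &'static mut [u8],
        len: usize,
    ) -> Result<(), (ErrorCode, &'static mut [u8])>;
}

/// Callback interface implemented by users of the hypothetical Checksum HIL.
pub trait ChecksumClient {
    /// Called when the checksum computation finishes. The buffer passed to
    /// compute() is returned along with the computed value.
    fn checksum_done(&self, data: &'static mut [u8], checksum: u32);
}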
Wrap-Up
Congratulations! You have implemented a new HIL in Tock! We encourage you to submit a pull request to upstream this to the Tock repository.
Implementing an in-kernel Virtualization Layer
This guide provides an overview and walkthrough on how to add an in-kernel virtualization layer, such that a given hardware interface can be used simultaneously by multiple kernel capsules, or used simultaneously by a single kernel capsule and userspace. Ideally, virtual interfaces will be available for all hardware interfaces in Tock. Some example interfaces which have already been virtualized include Alarm, SPI, Flash, UART, I2C, ADC, and others.
In this guide we will use a running example of virtualizing a single hardware SPI peripheral and bus for use as a SPI Master.
Setup
This guide assumes you already have existing kernel code that needs to be virtualized. There should be an existing HIL for the resource you are virtualizing.
We will assume there is a trait SpiMaster {...}
already defined and
implemented that includes all of the logic needed to interface with the
underlying SPI. We also assume there is a trait SpiMasterClient
that
determines the interface a client of the SPI exposes to the underlying resource.
In most cases, equivalent traits will represent a necessary precursor to
virtualization.
Overview
The high-level steps required are:
- Create a capsule file for your virtualizer
- Determine what portions of this interface should be virtualized.
- Create a MuxXXX struct, which will serve as the lone client of the underlying resource.
- Create a VirtualXXXDevice which will implement the underlying HIL trait, allowing for the appearance of multiple copies of the lone resource.
- Implement the logic for queuing requests from capsules.
- Implement the logic for dispatching callbacks from the underlying resource to the appropriate client.
- Document the interface.
- (Optional) Write tests for the virtualization logic.
Step-by-Step Guide
The steps from the overview are elaborated on here.
-
Create a capsule file for your virtualizer
This step is easy. Navigate to the capsules/src/ directory and create a new file named virtual_xxx.rs, where xxx is the name of the underlying resource being virtualized. All of the code you will write while following this guide belongs in that file. Additionally, open capsules/src/lib.rs and add pub mod virtual_xxx; to the list of modules.
-
Determine what portions of this interface should be virtualized
Generally, this step requires looking at the HIL being virtualized, and determining what portions of the HIL require additional logic to handle multiple concurrent clients. Let's take a look at the SpiMaster HIL:

pub trait SpiMaster {
    type ChipSelect: Copy;

    fn set_client(&self, client: &'static dyn SpiMasterClient);

    fn init(&self);
    fn is_busy(&self) -> bool;

    /// Perform an asynchronous read/write operation, whose
    /// completion is signaled by invoking SpiMasterClient on
    /// the initialized client.
    fn read_write_bytes(
        &self,
        write_buffer: &'static mut [u8],
        read_buffer: Option<&'static mut [u8]>,
        len: usize,
    ) -> ReturnCode;

    fn write_byte(&self, val: u8);
    fn read_byte(&self) -> u8;
    fn read_write_byte(&self, val: u8) -> u8;

    /// Tell the SPI peripheral what to use as a chip select pin.
    fn specify_chip_select(&self, cs: Self::ChipSelect);

    /// Returns the actual rate set.
    fn set_rate(&self, rate: u32) -> u32;
    fn get_rate(&self) -> u32;
    fn set_clock(&self, polarity: ClockPolarity);
    fn get_clock(&self) -> ClockPolarity;
    fn set_phase(&self, phase: ClockPhase);
    fn get_phase(&self) -> ClockPhase;

    // These two functions determine what happens to the chip
    // select line between transfers. If hold_low() is called,
    // then the chip select line is held low after transfers
    // complete. If release_low() is called, then the chip select
    // line is brought high after a transfer completes. A "transfer"
    // is any of the read/read_write calls. These functions
    // allow an application to manually control when the
    // CS line is high or low, such that it can issue multi-byte
    // requests with single byte operations.
    fn hold_low(&self);
    fn release_low(&self);
}
For some of these functions, it is clear that no virtualization is required. For example,
get_rate(), get_phase(), and get_polarity() simply request information on the current configuration of the underlying hardware. Implementations of these can simply pass the call straight through the mux.
Some other functions are not appropriate to expose to virtual clients at all. For example,
hold_low()
, release_low(), and specify_chip_select() are not suitable for use when the underlying bus is shared. init() does not make sense when it is unclear which client should call it. The mux should queue operations, so clients should not need access to is_busy().
For other functions, it is clear that virtualization is necessary. For example, if multiple clients are using the mux, they cannot all be allowed to set the rate of the underlying hardware at arbitrary times, as doing so could break an ongoing operation initiated by another client. However, it is important to expose this functionality to clients. Thus
set_rate()
, set_clock(), and set_phase() need to be virtualized and provided to virtual clients. set_client() needs to be adapted to support multiple simultaneous clients.
Finally, virtual clients need a way to send and receive on the bus. Single-byte writes and reads are typically only used under the assumption that a single client is going to make multiple single-byte reads/writes consecutively, and thus are inappropriate to virtualize. Instead, the virtual interface should only include
read_write_bytes()
, as that encapsulates the entire transaction that would be desired by a virtual client.
Given that not all parts of the original HIL trait (
SpiMaster
) are appropriate for virtualization, we should create a new trait in the SPI HIL that will represent the interface provided to clients of the Virtual SPI:#![allow(unused)] fn main() { //! kernel/src/hil/spi.rs ... /// SPIMasterDevice provides a chip-specific interface to the SPI Master /// hardware. The interface wraps the chip select line so that chip drivers /// cannot communicate with different SPI devices. pub trait SpiMasterDevice { /// Perform an asynchronous read/write operation, whose /// completion is signaled by invoking SpiMasterClient.read_write_done on /// the provided client. fn read_write_bytes( &self, write_buffer: &'static mut [u8], read_buffer: Option<&'static mut [u8]>, len: usize, ) -> ReturnCode; /// Helper function to set polarity, clock phase, and rate all at once. fn configure(&self, cpol: ClockPolarity, cpal: ClockPhase, rate: u32); fn set_polarity(&self, cpol: ClockPolarity); fn set_phase(&self, cpal: ClockPhase); fn set_rate(&self, rate: u32); fn get_polarity(&self) -> ClockPolarity; fn get_phase(&self) -> ClockPhase; fn get_rate(&self) -> u32; } }
Not all virtualizers will require a new trait to provide virtualization! For example,
VirtualMuxDigest
exposes the same Digest HIL as the underlying hardware. The same is true for VirtualAlarm, VirtualUart, and MuxFlash. VirtualI2C does use a different trait, similar to SPI, and VirtualADC introduces an AdcChannel trait to enable virtualization that is not possible with the ADC interface implemented by the hardware.
There is no fixed algorithm for deciding exactly how to virtualize a given interface; doing so requires thinking carefully about the requirements of the clients and the nature of the underlying resource. Tock's threat model describes several requirements for virtualizers in its virtualization section.
Note: You should read these requirements!! They discuss things like the confidentiality and fairness requirements for virtualizers.
Beyond the threat model, you should think carefully about how virtual clients will use the interface, the overhead (in cycles / code size / RAM use) of different approaches, and how the interface will work in the face of multiple concurrent requests. It is also important to consider the potential for two layers of virtualization, when one of the clients of the virtualization capsule is a userspace driver that will also be virtualizing that same resource. In some cases (see: UDP port reservations) special casing the userspace driver may be valuable.
Frequently the best approach will involve looking for an already virtualized resource that is qualitatively similar to the resource you are working with, and using its virtualization as a template.
-
Create a
MuxXXX
struct, which will serve as the lone client of the underlying resource.
In order to virtualize a hardware resource, we need to create some object that holds a reference to the underlying hardware resource and the multiple "virtual" devices with which clients will interact. For the SPI interface, we call this struct
MuxSpiMaster
:#![allow(unused)] fn main() { /// The Mux struct manages multiple Spi clients. Each client may have /// at most one outstanding Spi request. pub struct MuxSpiMaster<'a, Spi: hil::spi::SpiMaster> { // The underlying resource being virtualized spi: &'a Spi, // A list of virtual devices which clients will interact with. // (See next step for details) devices: List<'a, VirtualSpiMasterDevice<'a, Spi>>, // Additional data storage needed to implement virtualization logic inflight: OptionalCell<&'a VirtualSpiMasterDevice<'a, Spi>>, } }
Here we use Tock's built-in
List
type, which is a linked list of statically allocated structures that implement a given trait. This type is required because Tock does not allow heap allocation in the kernel.
Typically, this struct will implement some number of private helper functions used as part of virtualization, and provide a public constructor. For now we will just implement the constructor:
#![allow(unused)] fn main() { impl<'a, Spi: hil::spi::SpiMaster> MuxSpiMaster<'a, Spi> { pub const fn new(spi: &'a Spi) -> MuxSpiMaster<'a, Spi> { MuxSpiMaster { spi: spi, devices: List::new(), inflight: OptionalCell::empty(), } } // TODO: Implement virtualization logic helper functions } }
-
Create a
VirtualXXXDevice
which will implement the underlying HIL trait
In the previous step you probably noticed the list of virtual devices referencing a
VirtualSpiMasterDevice
, which we had not created yet. We will define and implement that struct here. In practice, both must be defined simultaneously because each type references the other. The VirtualSpiMasterDevice should have a reference to the mux, a ListLink field (required so that lists of VirtualSpiMasterDevice
s can be constructed), and other fields for data that needs to be stored for each client of the virtualizer.#![allow(unused)] fn main() { pub struct VirtualSpiMasterDevice<'a, Spi: hil::spi::SpiMaster> { //reference to the mux mux: &'a MuxSpiMaster<'a, Spi>, // Pointer to next element in the list of devices next: ListLink<'a, VirtualSpiMasterDevice<'a, Spi>>, // Per client data that must be stored across calls chip_select: Cell<Spi::ChipSelect>, txbuffer: TakeCell<'static, [u8]>, rxbuffer: TakeCell<'static, [u8]>, operation: Cell<Op>, client: OptionalCell<&'a dyn hil::spi::SpiMasterClient>, } impl<'a, Spi: hil::spi::SpiMaster> VirtualSpiMasterDevice<'a, Spi> { pub const fn new( mux: &'a MuxSpiMaster<'a, Spi>, chip_select: Spi::ChipSelect, ) -> VirtualSpiMasterDevice<'a, Spi> { VirtualSpiMasterDevice { mux: mux, chip_select: Cell::new(chip_select), txbuffer: TakeCell::empty(), rxbuffer: TakeCell::empty(), operation: Cell::new(Op::Idle), next: ListLink::empty(), client: OptionalCell::empty(), } } // Most virtualizers will use a set_client method that looks exactly like this pub fn set_client(&'a self, client: &'a dyn hil::spi::SpiMasterClient) { self.mux.devices.push_head(self); self.client.set(client); } } }
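One detail elided above: because the mux stores its devices in a List, each VirtualSpiMasterDevice also needs to implement the kernel's ListNode trait over its next field. A minimal sketch, following the pattern used by existing Tock virtualizers (the trait's module path has moved between kernel versions), looks like:

// Tell the List how to reach the next element in the chain.
// In recent kernels the trait lives in kernel::collections::list
// (older kernels: kernel::common::list).
use kernel::collections::list::{ListLink, ListNode};

impl<'a, Spi: hil::spi::SpiMaster> ListNode<'a, VirtualSpiMasterDevice<'a, Spi>>
    for VirtualSpiMasterDevice<'a, Spi>
{
    fn next(&'a self) -> &'a ListLink<'a, VirtualSpiMasterDevice<'a, Spi>> {
        &self.next
    }
}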
This is the struct that will implement whatever HIL trait we decided on in step 2. In our case, this is the
SpiMasterDevice
trait:#![allow(unused)] fn main() { // Given that there are multiple types of operations we might need to queue, // create an enum that can represent each operation and the data that operation // needs to store. #[derive(Copy, Clone, PartialEq)] enum Op { Idle, Configure(hil::spi::ClockPolarity, hil::spi::ClockPhase, u32), ReadWriteBytes(usize), SetPolarity(hil::spi::ClockPolarity), SetPhase(hil::spi::ClockPhase), SetRate(u32), } impl<Spi: hil::spi::SpiMaster> hil::spi::SpiMasterDevice for VirtualSpiMasterDevice<'_, Spi> { fn configure(&self, cpol: hil::spi::ClockPolarity, cpal: hil::spi::ClockPhase, rate: u32) { self.operation.set(Op::Configure(cpol, cpal, rate)); self.mux.do_next_op(); } fn read_write_bytes( &self, write_buffer: &'static mut [u8], read_buffer: Option<&'static mut [u8]>, len: usize, ) -> ReturnCode { self.txbuffer.replace(write_buffer); self.rxbuffer.put(read_buffer); self.operation.set(Op::ReadWriteBytes(len)); self.mux.do_next_op(); ReturnCode::SUCCESS } fn set_polarity(&self, cpol: hil::spi::ClockPolarity) { self.operation.set(Op::SetPolarity(cpol)); self.mux.do_next_op(); } fn set_phase(&self, cpal: hil::spi::ClockPhase) { self.operation.set(Op::SetPhase(cpal)); self.mux.do_next_op(); } fn set_rate(&self, rate: u32) { self.operation.set(Op::SetRate(rate)); self.mux.do_next_op(); } fn get_polarity(&self) -> hil::spi::ClockPolarity { self.mux.spi.get_clock() } fn get_phase(&self) -> hil::spi::ClockPhase { self.mux.spi.get_phase() } fn get_rate(&self) -> u32 { self.mux.spi.get_rate() } } }
Now we can begin to see the virtualization logic. Each
get_x()
method just forwards calls directly to the underlying hardware driver, as these operations are synchronous and non-blocking. But the set() calls and the read/write calls are queued as operations. Each client can have only a single outstanding operation (a common requirement for virtualizers in Tock given the lack of dynamic allocation). These operations are "queued" by each client simply setting the operation field of its VirtualSpiMasterDevice to whatever operation it would like to perform next. The mux can iterate through the list of devices to choose a pending operation. Clients learn about the completion of operations via callbacks, informing them that they can begin new operations.
-
Implement the logic for queuing requests from capsules.
So far, we have sketched out a skeleton for how we will queue requests from capsules, but not yet implemented the
do_next_op()
function that will handle the order in which operations are performed, or how operations are translated into calls by the actual hardware driver.
We know that all operations in Tock are asynchronous, so it is always possible that the underlying hardware device is busy when
do_next_op()
is called -- accordingly, we need some mechanism for tracking if the underlying device is currently busy. We also need to restore the state expected by the device performing a given operation (e.g. the chip select pin in use). Beyond that, we just forward calls to the hardware driver:#![allow(unused)] fn main() { fn do_next_op(&self) { if self.inflight.is_none() { let mnode = self .devices .iter() .find(|node| node.operation.get() != Op::Idle); mnode.map(|node| { self.spi.specify_chip_select(node.chip_select.get()); let op = node.operation.get(); // Need to set idle here in case callback changes state node.operation.set(Op::Idle); match op { Op::Configure(cpol, cpal, rate) => { // The `chip_select` type will be correct based on // what implemented `SpiMaster`. self.spi.set_clock(cpol); self.spi.set_phase(cpal); self.spi.set_rate(rate); } Op::ReadWriteBytes(len) => { // Only async operations want to block by setting // the devices as inflight. self.inflight.set(node); node.txbuffer.take().map(|txbuffer| { let rxbuffer = node.rxbuffer.take(); self.spi.read_write_bytes(txbuffer, rxbuffer, len); }); } Op::SetPolarity(pol) => { self.spi.set_clock(pol); } Op::SetPhase(pal) => { self.spi.set_phase(pal); } Op::SetRate(rate) => { self.spi.set_rate(rate); } Op::Idle => {} // Can't get here... } }); } } }
Notably, the SPI driver does not implement any fairness schemes, despite the requirements of the threat model. As of this writing, the threat model is still aspirational, and not followed for all virtualizers. Eventually, this driver should be updated to use round robin queueing of clients, rather than always giving priority to whichever client was added to the List first.
-
Implement the logic for dispatching callbacks from the underlying resource to the appropriate client.
We are getting close! At this point, we have a mechanism for adding clients to the virtualizer, and for queueing and making calls. However, we have not yet addressed how to handle callbacks from the underlying resource (usually used to forward interrupts up to the appropriate client). Additionally, our queueing logic is still incomplete, as we have not yet seen when subsequent operations are triggered if an operation is requested while the underlying device is in use.
Handling callbacks in virtualizers requires two layers of dispatch. First, the MuxXXX device must implement the appropriate XXXClient trait such that it can subscribe to callbacks from the underlying resource, and dispatch them to the appropriate VirtualXXXDevice
:#![allow(unused)] fn main() { impl<Spi: hil::spi::SpiMaster> hil::spi::SpiMasterClient for MuxSpiMaster<'_, Spi> { fn read_write_done( &self, write_buffer: &'static mut [u8], read_buffer: Option<&'static mut [u8]>, len: usize, ) { self.inflight.take().map(move |device| { self.do_next_op(); device.read_write_done(write_buffer, read_buffer, len); }); } } }
This takes advantage of the fact that we stored a reference to the device that initiated the inflight operation, so we can dispatch the callback directly to that device. One thing to note is that the call to take() sets inflight to None, and then the callback calls do_next_op()
, triggering any still queued operations. This ensures that all queued operations will take place. This all requires that the device also has implemented the callback:#![allow(unused)] fn main() { impl<Spi: hil::spi::SpiMaster> hil::spi::SpiMasterClient for VirtualSpiMasterDevice<'_, Spi> { fn read_write_done( &self, write_buffer: &'static mut [u8], read_buffer: Option<&'static mut [u8]>, len: usize, ) { self.client.map(move |client| { client.read_write_done(write_buffer, read_buffer, len); }); } }
Finally, we have dispatched the callback all the way up to the client of the virtualizer, completing the round trip process.
-
Document the interface.
Finally, you need to document the interface. Do so by placing a comment at the top of the file describing what the file does:
#![allow(unused)] fn main() { //! Virtualize a SPI master bus to enable multiple users of the SPI bus. }
and add doc comments (
/// doc comment example
) to any new traits created in kernel/src/hil.
-
(Optional) Write tests for the virtualization logic.
Some virtualizers provide additional stress tests of virtualization logic, which can be run on hardware to verify correct operation in edge cases. For examples of such tests, look at
capsules/src/test/virtual_uart.rs
or capsules/src/test/random_alarm.rs
.
Wrap-Up
Congratulations! You have virtualized a resource in the Tock kernel! We encourage you to submit a pull request to upstream this to the Tock repository.
Implementing a Kernel Test
This guide covers how to write in-kernel tests of hardware functionality. For example, if you have implemented a chip peripheral, you may want to write in-kernel tests of that peripheral to test peripheral-specific functionality that will not be exposed via the HIL for that peripheral. This guide outlines the general steps for implementing kernel tests.
Setup
This guide assumes you have existing chip, board, or architecture specific code that you wish to test from within the kernel.
Note: If you wish to test kernel code with no hardware dependencies at all, such as a ring buffer implementation, you can use cargo's test framework instead. These tests can be run by simply calling
cargo test
within the crate where the test is located, and will be executed during CI for all tests merged into upstream Tock. An example of this approach can be found in
.
Overview
The general steps you will follow are:
- Determine the board(s) you want to run your tests on
- Add a test file in
boards/{board}/src/
- Determine where to write actual tests -- in the test file or a capsule test
- Write your tests
- Call the test from
main.rs
- Document the expected output from the test at the top of the test file
This guide will walk through how to do each of these steps.
Background
Kernel tests allow for testing of hardware-specific functionality that is not
exposed to userspace, and allow for fail-fast tests at boot that otherwise
would not surface until apps are loaded. Kernel tests can be useful to test
chip peripherals prior to exposing these peripherals outside the Kernel. Kernel
tests can also be included as required tests run prior to releases, to ensure
there have been no regressions for a particular component. Additionally, kernel
tests can be useful for testing capsule functionality from within the kernel,
such as when unsafe
is required to verify the results of tests, or for testing
virtualization capsules in a controlled environment.
Kernel tests are generally implemented on an as-needed basis, and are not
required for all chip peripherals in Tock. In general, they are not expected to
be run in the default case, though they should always be included from main.rs
so they are compiled. These tests are allowed to use unsafe
as needed, and are
permitted to conflict with normal operation, by stealing callbacks from drivers
or modifying global state.
Notably, your specific use case may differ somewhat from the one outlined here. It is
always recommended to attempt to copy from existing Tock code when developing
your own solutions. A good collection of kernel tests can be found in
boards/imix/src/tests/
for that purpose.
Step-by-Step Guide
The steps from the overview are elaborated on here.
-
Determine the board(s) you want to run your test on.
If you are testing chip- or architecture-specific functionality, you simply need to choose a board that uses that chip or architecture. For board-specific functionality you of course need to choose that board. If you are testing a virtualization capsule, then any board that implements the underlying resource being virtualized is acceptable. Currently, most kernel tests are implemented for the Imix platform, and can be found in
boards/imix/src/tests/
Checkpoint: You have identified the board you will implement your test for.
-
Add a test file in
boards/{board}/src/
To start implementing the test, you should create a new source file inside the
boards/{board}/src
directory. For boards with lots of tests, like the Imix board, there may be a tests subdirectory -- if so, the test should go in tests instead, and be added to the tests/mod.rs file. The name of this test file generally should indicate the functionality being tested.
Note: If the board you select is one of the nrf52dk variants (nrf52840_dongle, nrf52840dk, or nrf52dk), tests should be moved into the nrf52dk_base/src/ folder, and called from lib.rs.
Checkpoint: You have chosen a board for your test and created a test file.
-
Determine where to write actual tests -- in the test file or a capsule test.
Depending on what you are testing, it may be best practice to write a capsule test that you call from the test file you created in the previous step.
Writing a capsule test is best practice if your test meets the following criteria:
- Test does not require
unsafe
- The test is for a peripheral available on multiple boards
- A HIL or capsule exists for that peripheral, so it is accessible from the capsules crate
- The test relies only on functionality exposed via the HIL or a capsule
- You care about being able to call this test from multiple boards
Examples:
- UART Virtualization (all boards support UART, there is a HIL for UART
devices and a capsule for the
virtual_uart
)
- Alarm test (all boards will have some form of hardware alarm, there is an Alarm HIL)
- Other examples: see
capsules/src/test
If your test meets the criteria for writing a capsule test, follow these steps:
Add a file in
capsules/src/test/
, and then add the filename to capsules/src/test/mod.rs
like this:#![allow(unused)] fn main() { pub mod virtual_uart; }
Next, create a test struct in this file that can be instantiated by any board using this test capsule. This struct should implement a
new()
function so it can be instantiated from the test file in boards, and a run()
function that will run the actual tests. An example for UART follows:#![allow(unused)] fn main() { //! capsules/src/test/virtual_uart.rs pub struct TestVirtualUartReceive { device: &'static UartDevice<'static>, buffer: TakeCell<'static, [u8]>, } impl TestVirtualUartReceive { pub fn new(device: &'static UartDevice<'static>, buffer: &'static mut [u8]) -> Self { TestVirtualUartReceive { device: device, buffer: TakeCell::new(buffer), } } pub fn run(&self) { // TODO: See Next Step } } }
If your test does not meet the above requirements, you can simply implement your tests in the file that you created in step 2. This can involve creating a test structure with test methods. The UDP test file takes this approach, by defining a number of self-contained tests. One such example follows:
#![allow(unused)] fn main() { //! boards/imix/src/test/udp_lowpan_test.rs pub struct LowpanTest { port_table: &'static UdpPortManager, // ... } impl LowpanTest { // This test ensures that an app and capsule cant bind to the same port // but can bind to different ports fn bind_test(&self) { let create_cap = create_capability!(NetworkCapabilityCreationCapability); let net_cap = unsafe { static_init!( NetworkCapability, NetworkCapability::new(AddrRange::Any, PortRange::Any, PortRange::Any, &create_cap) ) }; let mut socket1 = self.port_table.create_socket().unwrap(); // Attempt to bind to a port that has already been bound by an app. let result = self.port_table.bind(socket1, 1000, net_cap); assert!(result.is_err()); socket1 = result.unwrap_err(); // Get the socket back //now bind to an open port let (_send_bind, _recv_bind) = self .port_table .bind(socket1, 1001, net_cap) .expect("UDP Bind fail"); debug!("bind_test passed"); } // ... } }
Checkpoint: There is a test capsule with
new()
and run() implementations.
-
Write your tests
The first part of this step takes place in the test file you just created -- writing the actual tests. This part is highly dependent on the functionality being verified. If you are writing your tests in a test capsule, this should all be triggered from the
run()
function.
Depending on the specifics of your test, you may need to implement additional functions or traits in this file to make your test functional. One example is implementing a client trait on the test struct so that the test can receive the results of asynchronous operations. Our UART example requires implementing the
uart::ReceiveClient
on the test struct.#![allow(unused)] fn main() { //! boards/imix/src/test/virtual_uart_rx_test.rs impl TestVirtualUartReceive { // ... pub fn run(&self) { let buf = self.buffer.take().unwrap(); let len = buf.len(); debug!("Starting receive of length {}", len); let (err, _opt) = self.device.receive_buffer(buf, len); if err != ReturnCode::SUCCESS { panic!( "Calling receive_buffer() in virtual_uart test failed: {:?}", err ); } } } impl uart::ReceiveClient for TestVirtualUartReceive { fn received_buffer( &self, rx_buffer: &'static mut [u8], rx_len: usize, rcode: ReturnCode, _error: uart::Error, ) { debug!("Virtual uart read complete: {:?}: ", rcode); for i in 0..rx_len { debug!("{:02x} ", rx_buffer[i]); } debug!("Starting receive of length {}", rx_len); let (err, _opt) = self.device.receive_buffer(rx_buffer, rx_len); if err != ReturnCode::SUCCESS { panic!( "Calling receive_buffer() in virtual_uart test failed: {:?}", err ); } } } }
Note that the above test calls
panic!()
in the case of failure. This pattern, or the similar use of assert!()
statements, is the preferred way to communicate test failures. If communicating errors in this way is not possible, tests can indicate success/failure by printing different results to the console in each case and asking users to verify the actual output matches the expected output.
The next step in this process is determining all of the parameters that need to be passed to the test. It is preferred that all logically related tests be called from a single
pub unsafe fn run(/* optional args */)
to maintain convention. This ensures that all tests can be run by adding a single line to main.rs. Many tests require a reference to an alarm in order to separate tests in time, or a reference to a virtualization capsule that is being tested. Notably, the run() function should itself initialize any components that would not already have been created in main.rs
. As an example, the below function is a starting point for the virtual_uart_receive
test for Imix:#![allow(unused)] fn main() { pub unsafe fn run_virtual_uart_receive(mux: &'static MuxUart<'static>) { debug!("Starting virtual reads."); } }
Next, a test function should initialize any objects required to run tests. This is best split out into subfunctions, like the following:
#![allow(unused)] fn main() { unsafe fn static_init_test_receive_small( mux: &'static MuxUart<'static>, ) -> &'static TestVirtualUartReceive { static mut SMALL: [u8; 3] = [0; 3]; let device = static_init!(UartDevice<'static>, UartDevice::new(mux, true)); device.setup(); let test = static_init!( TestVirtualUartReceive, TestVirtualUartReceive::new(device, &mut SMALL) ); device.set_receive_client(test); test } }
This initializes an instance of the test capsule we constructed earlier. Simpler tests (such as those not relying on capsule tests) might simply use
static_init!()
to initialize normal capsules directly and test them. The log test does this, for example:#![allow(unused)] fn main() { //! boards/imix/src/test/log_test.rs pub unsafe fn run( mux_alarm: &'static MuxAlarm<'static, Ast>, deferred_caller: &'static DynamicDeferredCall, ) { // Set up flash controller. flashcalw::FLASH_CONTROLLER.configure(); static mut PAGEBUFFER: flashcalw::Sam4lPage = flashcalw::Sam4lPage::new(); // Create actual log storage abstraction on top of flash. let log = static_init!( Log, log::Log::new( &TEST_LOG, &mut flashcalw::FLASH_CONTROLLER, &mut PAGEBUFFER, deferred_caller, true ) ); flash::HasClient::set_client(&flashcalw::FLASH_CONTROLLER, log); log.initialize_callback_handle( deferred_caller .register(log) .expect("no deferred call slot available for log storage"), ); // ... } }
Finally, your
run()
function should call the actual tests. This may involve simply calling a run() function on a capsule test, or may involve calling test functions written in the board-specific test file. The virtual UART test run()
looks like this:#![allow(unused)] fn main() { pub unsafe fn run_virtual_uart_receive(mux: &'static MuxUart<'static>) { debug!("Starting virtual reads."); let small = static_init_test_receive_small(mux); let large = static_init_test_receive_large(mux); small.run(); large.run(); } }
As you develop your kernel tests, you may not immediately know what functions are required in your test capsule -- this is okay! It is often easiest to start with a basic test and expand this file to test additional functionality once basic tests are working.
Checkpoint: Your tests are written, and can be called from a single
run()
function.
-
Call the test from
main.rs
, and iterate on it until it works
Next, you should run your test by calling it from the
reset_handler()
in main.rs
. In order to do so, you will also need to import it into the file by adding a line like this:#![allow(unused)] fn main() { #[allow(dead_code)] mod virtual_uart_test; }
However, if your test is located inside a
test
module this is not needed -- your test will already be included.
Typically, tests are called after completing setup of the board, immediately before the call to
load_processes()
:#![allow(unused)] fn main() { virtual_uart_rx_test::run_virtual_uart_receive(uart_mux); debug!("Initialization complete. Entering main loop"); extern "C" { /// Beginning of the ROM region containing app images. static _sapps: u8; /// End of the ROM region containing app images. /// /// This symbol is defined in the linker script. static _eapps: u8; } kernel::procs::load_processes( // ... }
Observe your results, and tune or add tests as needed.
Before you submit a PR including any kernel tests, however, please remove or comment out any lines of code that call these tests.
Checkpoint: You have a functional test that can be called in a single line from
main.rs
-
Document the expected output from the test at the top of the test file
For tests that will be merged to upstream, it is good practice to document how to run a test and what the expected output of a test is. This is best done using document-level comments (
//!
) at the top of the test file. The documentation for the virtual UART test follows:#![allow(unused)] fn main() { //! Test reception on the virtualized UART by creating two readers that //! read in parallel. To add this test, include the line //! ``` //! virtual_uart_rx_test::run_virtual_uart_receive(uart_mux); //! ``` //! to the imix boot sequence, where `uart_mux` is a //! `capsules::virtual_uart::MuxUart`. There is a 3-byte and a 7-byte //! read running in parallel. Test that they are both working by typing //! and seeing that they both get all characters. If you repeatedly //! type 'a', for example (0x61), you should see something like: //! ``` //! Starting receive of length 3 //! Virtual uart read complete: CommandComplete: //! 61 //! 61 //! 61 //! 61 //! 61 //! 61 //! 61 //! Starting receive of length 7 //! Virtual uart read complete: CommandComplete: //! 61 //! 61 //! 61 //! ``` }
Checkpoint: You have documented your tests
Wrap-Up
Congratulations! You have written a kernel test for Tock! We encourage you to submit a pull request to upstream this to the Tock repository.
Implementing a Component
Each Tock board defines the peripherals, capsules, kernel settings, and syscall drivers to customize Tock for that board. Often, instantiating different resources (particularly capsules and drivers) requires subtle setup steps that are easy to get wrong. These setup steps are often shared from board to board. Together, this makes configuring a board both redundant and error-prone.
Components are the Tock mechanism to help address this. Each component includes the static memory allocations and setup steps required to implement a particular piece of kernel functionality (i.e. a capsule). You can read more technical documentation here.
In this guide we will create a component for a hypothetical system call driver
called Notifier
. Our system call driver is going to use an alarm as a resource
and requires just one other parameter: a delay value in milliseconds. The steps
should be the same for any capsule you want to create a component for.
Setup
This guide assumes you already have the capsule created, and ideally that you have set it up with a board to test. Making a component then just makes it easier to include on a new board and share among boards.
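For concreteness, the rest of this guide assumes the hypothetical capsule looks roughly like the sketch below; the field types and grant parameters are illustrative assumptions, and only the shape of new() matters for the component:

// capsules_extra/src/notifier.rs -- hypothetical capsule assumed by this guide.
// Field and grant types are illustrative only.
use core::cell::Cell;
use kernel::grant::{AllowRoCount, AllowRwCount, Grant, UpcallCount};
use kernel::hil::time::Alarm;
use kernel::utilities::cells::TakeCell;

#[derive(Default)]
pub struct App;

pub struct NotifierDriver<'a, A: Alarm<'a>> {
    alarm: &'a A,
    apps: Grant<App, UpcallCount<1>, AllowRoCount<0>, AllowRwCount<0>>,
    buffer: TakeCell<'static, [u8]>,
    delay_ms: Cell<usize>,
}

impl<'a, A: Alarm<'a>> NotifierDriver<'a, A> {
    pub fn new(
        alarm: &'a A,
        apps: Grant<App, UpcallCount<1>, AllowRoCount<0>, AllowRwCount<0>>,
        buffer: &'static mut [u8],
        delay_ms: usize,
    ) -> Self {
        NotifierDriver {
            alarm,
            apps,
            buffer: TakeCell::new(buffer),
            delay_ms: Cell::new(delay_ms),
        }
    }
}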
Overview
The high-level steps required are:
- Define the static memory required for all objects used.
- Create a struct that holds all of the resources and configuration necessary for the capsules.
- Implement
finalize()
to initialize memory and perform setup.
Step-by-Step Guide
The steps from the overview are elaborated on here.
-
Define the static memory required for all objects used.
All objects in the kernel are statically allocated, so we need to statically allocate memory for the objects to live in. Due to constraints on the macros Tock provides for statically allocating memory, we must contain all calls to allocate this memory within another macro.
Create a file in
boards/components/src
to hold the component.
We need to define a macro to set up our state. We will use the
static_buf!()
macro to help with this. In the file, create a macro with the name <your capsule>_component_static. This naming convention must be followed.
In our hypothetical case, we need to allocate room for the notifier capsule and a buffer. Each capsule might need slightly different resources.
#![allow(unused)] fn main() { #[macro_export] macro_rules! notifier_driver_component_static { ($A:ty $(,)?) => {{ let notifier_buffer = kernel::static_buf!([u8; 16]); let notifier_driver = kernel::static_buf!( capsules_extra::notifier::NotifierDriver<'static, $A> ); (notifier_buffer, notifier_driver) };}; } }
Notice how the macro uses the type
$A
which is the type of the underlying alarm. We also use full paths to avoid errors when the macro is used. The macro then "returns" the two statically allocated resources.
-
Create a struct that holds all of the resources and configuration necessary for the capsules.
Now we create the actual component object, which collects all of the resources and any configuration needed to successfully set up this capsule.
#![allow(unused)] fn main() { pub struct NotifierDriverComponent<A: 'static + time::Alarm<'static>> { board_kernel: &'static kernel::Kernel, driver_num: usize, alarm: &'static A, delay_ms: usize, } }
The component needs a reference to the kernel (board_kernel) as well as the driver number to be used for this driver. This is to set up the grant, as we will see. If you are not setting up a syscall driver you will not need this. Finally, we also need to keep track of the delay the kernel wants to use with this capsule.
Next we can create a constructor for this component object:
#![allow(unused)] fn main() { impl<A: 'static + time::Alarm<'static>> NotifierDriverComponent<A> { pub fn new( board_kernel: &'static kernel::Kernel, driver_num: usize, alarm: &'static A, delay_ms: usize, ) -> NotifierDriverComponent<A> { NotifierDriverComponent { board_kernel, driver_num, alarm, delay_ms, } } } }
Note, all configuration that is required must be passed in to this
new()
constructor. -
Implement
finalize()
to initialize memory and perform setup.
The last step is to implement the
Component
trait and the finalize()
method to actually set up the capsule.
The general format looks like:
#![allow(unused)] fn main() { impl<A: 'static + time::Alarm<'static>> Component for NotifierDriverComponent<A> { type StaticInput = (...); type Output = ...; fn finalize(self, static_buffer: Self::StaticInput) -> Self::Output {} } }
We need to define what statically allocated types we need, and what this method will produce:
#![allow(unused)] fn main() { impl<A: 'static + time::Alarm<'static>> Component for NotifierDriverComponent<A> { type StaticInput = ( &'static mut MaybeUninit<[u8; 16]>, &'static mut MaybeUninit<NotifierDriver<'static, A>>, ); type Output = &'static NotifierDriver<'static, A>; fn finalize(self, static_buffer: Self::StaticInput) -> Self::Output {} } }
Notice that the static input types must match the output of the macro. The output type is what we are actually creating.
Inside the
finalize()
method we need to initialize the static memory and configure/setup the capsules:#![allow(unused)] fn main() { impl<A: 'static + time::Alarm<'static>> Component for NotifierDriverComponent<A> { type StaticInput = ( &'static mut MaybeUninit<[u8; 16]>, &'static mut MaybeUninit<NotifierDriver<'static, A>>, ); type Output = &'static NotifierDriver<'static, A>; fn finalize(self, static_buffer: Self::StaticInput) -> Self::Output { let grant_cap = create_capability!(capabilities::MemoryAllocationCapability); let buf = static_buffer.0.write([0; 16]); let notifier = static_buffer.1.write(NotifierDriver::new( self.alarm, self.board_kernel.create_grant(self.driver_num, &grant_cap), buf, self.delay_ms, )); // Very important we set the callback client correctly. self.alarm.set_client(notifier); notifier } } }
We initialize the memory for the static buffer, create the grant for the syscall driver to use, provide the driver with the alarm resource, and pass in the delay value to use. Lastly, we return a reference to the actual notifier driver object.
Summary
Our full component looks like:
#![allow(unused)] fn main() { use core::mem::MaybeUninit; use capsules_extra::notifier::NotifierDriver; use kernel::capabilities; use kernel::component::Component; use kernel::create_capability; use kernel::hil::time::{self, Alarm}; #[macro_export] macro_rules! notifier_driver_component_static { ($A:ty $(,)?) => {{ let notifier_buffer = kernel::static_buf!([u8; 16]); let notifier_driver = kernel::static_buf!( capsules_extra::notifier::NotifierDriver<'static, $A> ); (notifier_buffer, notifier_driver) };}; } pub struct NotifierDriverComponent<A: 'static + time::Alarm<'static>> { board_kernel: &'static kernel::Kernel, driver_num: usize, alarm: &'static A, delay_ms: usize, } impl<A: 'static + time::Alarm<'static>> NotifierDriverComponent<A> { pub fn new( board_kernel: &'static kernel::Kernel, driver_num: usize, alarm: &'static A, delay_ms: usize, ) -> NotifierDriverComponent<A> { NotifierDriverComponent { board_kernel, driver_num, alarm, delay_ms, } } } impl<A: 'static + time::Alarm<'static>> Component for NotifierDriverComponent<A> { type StaticInput = ( &'static mut MaybeUninit<[u8; 16]>, &'static mut MaybeUninit<NotifierDriver<'static, A>>, ); type Output = &'static NotifierDriver<'static, A>; fn finalize(self, static_buffer: Self::StaticInput) -> Self::Output { let grant_cap = create_capability!(capabilities::MemoryAllocationCapability); let buf = static_buffer.0.write([0; 16]); let notifier = static_buffer.1.write(NotifierDriver::new( self.alarm, self.board_kernel.create_grant(self.driver_num, &grant_cap), buf, self.delay_ms, )); // Very important we set the callback client correctly. self.alarm.set_client(notifier); notifier } } }
Usage
To use the component in a board's main.rs file:
#![allow(unused)] fn main() { let notifier = components::notifier::NotifierDriverComponent::new( board_kernel, capsules_extra::notifier::DRIVER_NUM, alarm, 100, ) .finalize(components::notifier_driver_component_static!(nrf52840::rtc::Rtc)); }
Wrap-Up
Congratulations! You have created a component to easily create a resource in the Tock kernel! We encourage you to submit a pull request to upstream this to the Tock repository.