Tock OS Book

This book introduces you to Tock, a secure embedded operating system for sensor networks and the Internet of Things. Tock is the first operating system to allow multiple untrusted applications to run concurrently on a microcontroller-based computer. The Tock kernel is written in Rust, a memory-safe systems language that does not rely on a garbage collector. Userspace applications are run in single-threaded processes that can be written in any language.

Getting Started

The book includes a quick start guide.

Tock Workshop Courses

For a more in-depth, walkthrough-style lesson, look here.

Development Guides

The book also has walkthroughs on how to implement different features in Tock OS.

Getting Started

This guide covers how to get started using Tock.

Hardware

To really be able to use Tock and get a feel for the operating system, you will need a hardware platform that Tock supports. The TockOS Hardware page includes a list of supported hardware boards. You can also view the boards folder to see which platforms are supported.

As of February 2021, this getting started guide is based around five hardware platforms. Steps for each of these platforms are explicitly described here. Other platforms will work for Tock, but you may need to reference the README files in tock/boards/ for specific setup information. The five boards are:

  • Hail
  • imix
  • nRF52840dk (PCA10056)
  • Arduino Nano 33 BLE (regular or Sense version)
  • BBC Micro:bit v2

These boards are reasonably well supported, but note that others in Tock may have some "quirks" around what is implemented (or not), and exactly how to load code and test that it is working. This guide tries to be general, and Tock generally tries to follow a certain convention, but the project is under active development and new boards are added rapidly. You should definitely consult the board-specific README to see if there are any board-specific details you should be aware of.

When you are ready to use your board, see the hardware setup guide for information on any needed setup to get the board working with your machine.

Software

Tock, like many computing systems, is split between a kernel and userspace apps. These are developed, compiled, and loaded separately.

First, complete the quickstart guide to get all of the necessary tools installed.

The kernel is available in the Tock repository. See here for information on getting started.

Userspace apps are compiled and loaded separately from the kernel. You can install one or more apps without having to update or re-flash the kernel. See here for information on getting started.

Quickstart

Get started with Tock quickly! The general requirements are:

  • Rustup
  • Tockloader
  • GCC toolchains for ARM and RISC-V
  • Code loading tool

Choose the guide for your platform:

Quickstart: Mac

This guide assumes you have the Homebrew package manager installed.

Install the following:

  1. Command line utilities.

    $ brew install wget pipx git coreutils
    
  2. Clone the Tock kernel repository.

    $ git clone https://github.com/tock/tock
    
  3. rustup. This tool helps manage installations of the Rust compiler and related tools.

    $ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
    
  4. arm-none-eabi and riscv64-unknown-elf toolchains. These enable you to compile apps written in C.

    $ brew install arm-none-eabi-gcc riscv64-elf-gcc
    
  5. tockloader. This is an all-in-one tool for programming boards and using Tock.

    $ pipx install tockloader
    

    Note: You may need to add tockloader to your path. If you cannot run it after installation, run the following:

    $ pipx ensurepath
    
  6. JLinkExe to load code onto your board. JLink is available from the Segger website. You want to install the "J-Link Software and Documentation Pack". There are various packages available depending on operating system.

  7. OpenOCD. Another tool to load code. You can install through package managers.

    $ brew install open-ocd
    

Quickstart: Linux

Install the following:

  1. Command line utilities.

    $ sudo apt install git wget zip curl python3 python3-pip python3-venv
    
  2. Clone the Tock kernel repository.

    $ git clone https://github.com/tock/tock
    
  3. rustup. This tool helps manage installations of the Rust compiler and related tools.

    $ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
    
  4. arm-none-eabi and riscv64-unknown-elf toolchains. These enable you to compile apps written in C.

    $ sudo apt install gcc-arm-none-eabi gcc-riscv64-unknown-elf
    
  5. tockloader. This is an all-in-one tool for programming boards and using Tock.

    $ pipx install tockloader
    

    Note: You may need to add tockloader to your path. If you cannot run it after installation, run the following:

    $ pipx ensurepath
    
  6. JLinkExe to load code onto your board. JLink is available from the Segger website. You want to install the "J-Link Software and Documentation Pack". There are various packages available depending on operating system.

  7. OpenOCD. Another tool to load code. You can install through package managers.

    $ sudo apt install openocd
    

One-Time Fixups

On Linux, you might need to give your user access to the serial port used by the board. If you get permission errors or you cannot access the serial port, this is likely the issue.

You can fix this by setting up a udev rule to set the permissions correctly for the serial device when it is attached. You only need to run the command below for your specific board, but if you don't know which one to use, running both is totally fine, and will set things up in case you get a different hardware board!

$ sudo bash -c "echo 'ATTRS{idVendor}==\"0403\", ATTRS{idProduct}==\"6015\", MODE=\"0666\"' > /etc/udev/rules.d/99-ftdi.rules"
$ sudo bash -c "echo 'ATTRS{idVendor}==\"2341\", ATTRS{idProduct}==\"005a\", MODE=\"0666\"' > /etc/udev/rules.d/98-arduino.rules"

Afterwards, detach and re-attach the board to reload the rule.

Quickstart: Windows

Note: This is a work in progress. Any contributions are welcome!

We use WSL on Windows for Tock.

Install Tools

Configure WSL To Use USB

On Windows Subsystem for Linux (WSL)

Programming JLink devices with Tock in WSL:

Trying to program an nRF52840DK with WSL can be a little tricky because WSL abstracts away low-level access to USB devices. WSL1 does not offer access to physical hardware at all, just an environment for running Linux on Windows. WSL2, on the other hand, is unable to find JLink devices even if you have JLink installed, because of the USB abstraction. To get around this limitation, we use USBIP, a tool that connects the USB device over a TCP tunnel.

This guide might apply for any device programmed via JLink.

Steps to connect to nRF52840DK with WSL:

  1. Get Ubuntu 22.04 from the Microsoft Store. Install it as a WSL distro with wsl --install -d Ubuntu-22.04 using Windows PowerShell or a Command Prompt with admin privileges.

  2. Once Ubuntu 22.04 is installed, the Ubuntu 20.04 distro that ships as the default with WSL must be uninstalled. Set the 22.04 distro as the WSL default with the wsl --setdefault Ubuntu-22.04 command.

  3. Install JLink's Linux package from the Segger website on your WSL Linux distro. You may need to modify the JLink udev rules to allow JLink to access the nRF52840DK. This can be done with sudo nano /etc/udev/rules.d/99-jlink.rules and adding SUBSYSTEM=="tty", ATTRS{idVendor}=="1051", MODE="0666", GROUP="dialout" to the file.

  4. Next, the udev rules have to be reloaded and triggered with sudo udevadm control --reload-rules && udevadm trigger. Doing this should apply the new rules.

  5. On the Windows side, make sure WSL is set to version 2. Check the WSL version with wsl -l -v. If it is version 1, change it to WSL2 with wsl --set-version Ubuntu-22.04 2 (USBIP only works with WSL2).

  6. Install USBIP from here. Version 4.x onwards removes the USBIP tooling requirement from the client side, so you don't have to install anything on the Linux subsystem.

  7. On Windows, open PowerShell or Cmd in admin mode and run usbipd wsl list. That should give you the list of devices. Note the Bus ID of your J-Link device.

  8. The first time you want to attach your device, you need to bind the bus between the host OS and WSL using usbipd bind -b <bus-id>.

  9. Once bound, you can attach your device to WSL by running usbipd attach --wsl -b <busid> in PowerShell or Cmd (when attaching a device for the first time, this must be done with admin privileges).

  10. To check if the attach worked, run lsusb on WSL. If it worked, the device should be listed as SEGGER JLink.

  11. The kernel can now be flashed with make install and other tockloader commands should work.

Note:

  • A machine with an x64 processor is required. (x86 and Arm64 are currently not supported with USBIP).
  • Make sure your firewall is not blocking port 3240, as USBIP uses that port to interface Windows and WSL. (Windows Defender is usually the culprit if you don't have a third-party firewall.)
  • Add an inbound rule to Windows Defender or your third-party firewall allowing USBIP to use port 3240 if you see a port-blocked error.

One-Time Fixups

The serial device parameters stored in the FTDI chip do not seem to get passed to Ubuntu. Plus, WSL enumerates every possible serial device. Therefore, tockloader cannot automatically guess which serial port is the correct one, and there are a lot to choose from.

You will need to open Device Manager on Windows, and find which COM port the Tock board is using. It will likely be called "USB Serial Port" and be listed as an FTDI device. The COM number will match what is used in WSL. For example, COM9 is /dev/ttyS9 in WSL.

To use tockloader you should be able to specify the port manually. For example: tockloader --port /dev/ttyS9 list.

Getting the Hardware Connected and Setup

Plug your hardware board into your computer. Generally this requires a micro USB cable, but your board may be different.

Note! Some boards have multiple USB ports.

Some boards have two USB ports, where one is generally for debugging, and the other allows the board to act as any USB peripheral. You will want to connect using the "debug" port.

Some example boards:

  • imix: Use the port labeled DEBUG.
  • nRF52 development boards: Use the port on the skinny side of the board (do NOT use the port labeled "nRF USB").

The board should appear as a regular serial device (e.g. /dev/tty.usbserial-c098e5130006 on my Mac or /dev/ttyUSB0 on my Linux box). On Linux, this may require some setup; see the "one-time fixups" box on the quickstart page for your platform (Linux or Windows).

One Time Board Setup

If you have a Hail, imix, or nRF52840dk please skip to the next section.

If you have an Arduino Nano 33 BLE (sense or regular), you need to update the bootloader on the board to the Tock bootloader. Please follow the bootloader update instructions.

If you have a Micro:bit v2 then you need to load the Tock bootloader. Please follow the bootloader installation instructions.

Test The Board

With the board connected, you should be able to use tockloader to interact with the board. For example, to retrieve serial UART data from the board, run tockloader listen, and you should see something like:

$ tockloader listen
No device name specified. Using default "tock"
Using "/dev/ttyUSB0 - Imix - TockOS"

Listening for serial output.
Initialization complete. Entering main loop

You may need to reset the board (by pressing its reset button) to see the message. You may also not see any output if the Tock kernel has not been flashed yet.

In case you have multiple serial devices attached to your computer, you may need to select the appropriate J-Link device:

$ tockloader listen
[INFO   ] No device name specified. Using default name "tock".
[INFO   ] No serial port with device name "tock" found.
[INFO   ] Found 2 serial ports.
Multiple serial port options found. Which would you like to use?
[0]     /dev/ttyACM1 - J-Link - CDC
[1]     /dev/ttyACM0 - L830-EB - Fibocom L830-EB

Which option? [0] 0
[INFO   ] Using "/dev/ttyACM1 - J-Link - CDC".
[INFO   ] Listening for serial output.
Initialization complete. Entering main loop
NRF52 HW INFO: Variant: AAC0, Part: N52840, Package: QI, Ram: K256, Flash: K1024
tock$

In case you don't see any text printed after "Listening for serial output", try hitting [ENTER] a few times. You should be greeted with a tock$ shell prompt. You can use the reset command to restart your nRF chip and see the above greeting.

In case you want to use a different serial console monitor, you may need to identify the serial console device created for your board. On Linux, you can identify the J-Link debugger's serial port by running:

$ dmesg -Hw | grep tty
< ... some output ... >
< plug in the nRF52840DKs front USB (not "nRF USB") >
[  +0.003233] cdc_acm 1-3:1.0: ttyACM1: USB ACM device

In this case, the serial console can be accessed as /dev/ttyACM1.

You can also see if any applications are installed with tockloader list:

$ tockloader list
[INFO   ] No device name specified. Using default name "tock".
[INFO   ] Using "/dev/cu.usbmodem14101 - Nano 33 BLE - TockOS".
[INFO   ] Paused an active tockloader listen in another session.
[INFO   ] Waiting for the bootloader to start
[INFO   ] No found apps.
[INFO   ] Finished in 2.928 seconds
[INFO   ] Resumed other tockloader listen session

If these commands fail you may not have installed Tockloader, or you may need to update to a later version of Tockloader. There may be other issues as well, and you can ask on Slack if you need help.

Testing You Can Compile the Kernel

To test if your environment is working enough to compile Tock, go to the tock/boards/ directory and then to the board folder for the hardware you have (e.g. tock/boards/imix for imix). Then run make in that directory. This should compile the kernel. It may need to compile several supporting libraries first (so it may take 30 seconds or so the first time). You should see output like this:

$ cd tock/boards/imix
$ make
   Compiling tock-cells v0.1.0 (/Users/bradjc/git/tock/libraries/tock-cells)
   Compiling tock-registers v0.5.0 (/Users/bradjc/git/tock/libraries/tock-register-interface)
   Compiling enum_primitive v0.1.0 (/Users/bradjc/git/tock/libraries/enum_primitive)
   Compiling tock-rt0 v0.1.0 (/Users/bradjc/git/tock/libraries/tock-rt0)
   Compiling imix v0.1.0 (/Users/bradjc/git/tock/boards/imix)
   Compiling kernel v0.1.0 (/Users/bradjc/git/tock/kernel)
   Compiling cortexm v0.1.0 (/Users/bradjc/git/tock/arch/cortex-m)
   Compiling capsules v0.1.0 (/Users/bradjc/git/tock/capsules)
   Compiling cortexm4 v0.1.0 (/Users/bradjc/git/tock/arch/cortex-m4)
   Compiling sam4l v0.1.0 (/Users/bradjc/git/tock/chips/sam4l)
   Compiling components v0.1.0 (/Users/bradjc/git/tock/boards/components)
    Finished release [optimized + debuginfo] target(s) in 28.67s
   text    data     bss     dec     hex filename
 165376    3272   54072  222720   36600 /Users/bradjc/git/tock/target/thumbv7em-none-eabi/release/imix
   Compiling typenum v1.11.2
   Compiling byteorder v1.3.4
   Compiling byte-tools v0.3.1
   Compiling fake-simd v0.1.2
   Compiling opaque-debug v0.2.3
   Compiling block-padding v0.1.5
   Compiling generic-array v0.12.3
   Compiling block-buffer v0.7.3
   Compiling digest v0.8.1
   Compiling sha2 v0.8.1
   Compiling sha256sum v0.1.0 (/Users/bradjc/git/tock/tools/sha256sum)
6fa1b0d8e224e775d08e8b58c6c521c7b51fb0332b0ab5031fdec2bd612c907f  /Users/bradjc/git/tock/target/thumbv7em-none-eabi/release/imix.bin

You can check that tockloader is installed by running:

$ tockloader --help

If either of these steps fails, please double-check that you followed the environment setup instructions above.

Flash the kernel

Now that the board is connected and you have verified that the kernel compiles (from the steps above), we can flash the board with the latest Tock kernel:

$ cd boards/<your board>
$ make

Boards provide the target make install as the recommended way to load the kernel.

$ make install

You can also look at the board's README for more details.

Installing Tock Applications

We have the kernel flashed, but the kernel doesn't actually do anything. Applications do! To load applications, we are going to use tockloader.

Loading Pre-built Applications

We're going to install some pre-built applications, but first, let's make sure we're in a clean state, in case your board already has some applications installed. This command removes any processes that may have already been installed.

$ tockloader erase-apps

Now, let's install two pre-compiled example apps. Remember, you may need to specify which board you are using and how to communicate with it for all of these commands. If you are using Hail or imix you will not have to.

$ tockloader install https://www.tockos.org/assets/tabs/blink.tab

The install subcommand takes a path or URL to a TAB (Tock Application Binary) file to install.

The board should restart and the user LED should start blinking. Let's also install a simple "Hello World" application:

$ tockloader install https://www.tockos.org/assets/tabs/c_hello.tab

If you now run tockloader listen you should be able to see the output of the Hello World! application. You may need to manually reset the board for this to happen.

$ tockloader listen
[INFO   ] No device name specified. Using default name "tock".
[INFO   ] Using "/dev/cu.usbserial-c098e513000a - Hail IoT Module - TockOS".

[INFO   ] Listening for serial output.
Initialization complete. Entering main loop.
Hello World!
␀

Uninstalling and Installing More Apps

Let's check what's on the board right now:

$ tockloader list
...
┌──────────────────────────────────────────────────┐
│ App 0                                            │
└──────────────────────────────────────────────────┘
  Name:                  blink
  Enabled:               True
  Sticky:                False
  Total Size in Flash:   2048 bytes


┌──────────────────────────────────────────────────┐
│ App 1                                            │
└──────────────────────────────────────────────────┘
  Name:                  c_hello
  Enabled:               True
  Sticky:                False
  Total Size in Flash:   1024 bytes


[INFO   ] Finished in 2.939 seconds

As you can see, the apps are still installed on the board. We can remove apps with the following command:

$ tockloader uninstall

Following the prompt, if you remove the blink app, the LED will stop blinking; however, the console will still print Hello World.

Now let's try adding a more interesting app:

$ tockloader install https://www.tockos.org/assets/tabs/sensors.tab

The sensors app will automatically discover all available sensors, sample them once a second, and print the results.

$ tockloader listen
[INFO   ] No device name specified. Using default name "tock".
[INFO   ] Using "/dev/cu.usbserial-c098e513000a - Hail IoT Module - TockOS".

[INFO   ] Listening for serial output.
Initialization complete. Entering main loop.
[Sensors] Starting Sensors App.
Hello World!
␀[Sensors] All available sensors on the platform will be sampled.
ISL29035:   Light Intensity: 218
Temperature:                 28 deg C
Humidity:                    42%
FXOS8700CQ: X:               -112
FXOS8700CQ: Y:               23
FXOS8700CQ: Z:               987

Compiling and Loading Applications

There are many more example applications in the libtock-c repository that you can use. Let's try installing the ROT13 cipher pair. These two applications use inter-process communication (IPC) to implement a ROT13 cipher.

Start by uninstalling any applications:

$ tockloader uninstall

Get the libtock-c repository:

$ git clone https://github.com/tock/libtock-c

Build the rot13_client application and install it:

$ cd libtock-c/examples/rot13_client
$ make
$ tockloader install

Then make and install the rot13_service application:

$ cd ../rot13_service
$ tockloader install --make

Then you should be able to see the output:

$ tockloader listen
[INFO   ] No device name specified. Using default name "tock".
[INFO   ] Using "/dev/cu.usbserial-c098e5130152 - Hail IoT Module - TockOS".
[INFO   ] Listening for serial output.
Initialization complete. Entering main loop.
12: Uryyb Jbeyq!
12: Hello World!
12: Uryyb Jbeyq!
12: Hello World!
12: Uryyb Jbeyq!
12: Hello World!
12: Uryyb Jbeyq!
12: Hello World!

Note: Tock platforms are limited in the number of apps they can load and run. However, it is possible to install more apps than this limit, since tockloader is (currently) unaware of the limitation and will allow you to load additional apps. The kernel, however, will only load the first apps, up to the limit.

Note about Identifying Boards

Tockloader tries to automatically identify which board is attached to make this process simple. This means for many boards (particularly the ones listed at the top of this guide) tockloader should "just work".

However, for some boards tockloader does not have a good way to identify which board is attached, and requires that you manually specify which board you are trying to program. This can be done with the --board argument. For example, if you have an nrf52dk or nrf52840dk, you would run Tockloader like:

$ tockloader <command> --board nrf52dk --jlink

The --jlink flag tells tockloader to use the JLink JTAG tool to communicate with the board (this mirrors using make flash above). Some boards support OpenOCD, in which case you would pass --openocd instead.

To see a list of boards that tockloader supports, you can run tockloader list-known-boards. If you have an imix or Hail board, you should not need to specify the board.

Note: a board appearing in tockloader list-known-boards means that default settings for how to support that board are hardcoded into tockloader's source. However, all of those settings can also be passed in via command-line parameters for boards that tockloader does not know about. See tockloader --help for more information.

Familiarize Yourself with tockloader Commands

The tockloader tool is a useful and versatile tool for managing and installing applications on Tock. It supports a number of commands, and a more complete list can be found in the tockloader repository, located at github.com/tock/tockloader. Below is a list of the more useful and important commands for programming and querying a board.

tockloader install

This is the main tockloader command, used to load Tock applications onto a board. By default, tockloader install adds the new application without erasing any others, but it replaces an already-installed application with the same name. Use the --no-replace flag to install multiple copies of the same app. To install an app, either specify the tab file as an argument, or navigate to the app's source directory, build it (probably using make), then issue the install command:

$ tockloader install

Tip: You can add the --make flag to have tockloader automatically run make before installing, i.e. tockloader install --make

Tip: You can add the --erase flag to have tockloader automatically remove other applications when installing a new one.

tockloader uninstall [application name(s)]

Removes one or more applications from the board by name.

tockloader erase-apps

Removes all applications from the board.

tockloader list

Prints basic information about the apps currently loaded onto the board.

tockloader info

Shows all properties of the board, including information about currently loaded applications, their sizes and versions, and any set attributes.

tockloader listen

This command prints output from Tock apps to the terminal. It listens via UART, and will print out anything written to stdout/stderr from a board.

Tip: As a long-running command, listen interacts with other tockloader sessions. You can leave a terminal window open and listening. If another tockloader process needs access to the board (e.g. to install an app update), tockloader will automatically pause and resume listening.

tockloader flash

Loads binaries onto hardware platforms that are running a compatible bootloader. This is used by the Tock Make system when kernel binaries are programmed to the board with make program.

Tock Course

The Tock course includes several different modules that guide you through various aspects of Tock and Tock applications. Each module is designed to be fairly standalone such that a full course can be composed of different modules depending on the interests and backgrounds of those doing the course. You should be able to do the lessons that are of interest to you.

Each module begins with a description of the lesson, and then includes steps to follow. The modules cover both programming in the kernel as well as applications.

Setup and Preparation

You should follow the getting started guide to set up your development environment and ensure you can communicate with the hardware.

Compile the Kernel

All of the hands-on exercises will be done within the main Tock repository and the libtock-c or libtock-rs userspace repositories. To work on the kernel, pop open a terminal, and navigate to the repository. If you're using the VM, that'll be:

$ cd ~/tock

Make sure your Tock repository is up to date

$ git pull

This will fetch the latest commit from the Tock kernel repository. Individual modules may ask you to check out specific commits or branches. In this case, be sure to have those revisions checked out instead.

Build the kernel

To build the kernel for your board, navigate to the boards/$YOUR_BOARD subdirectory. From within this subdirectory, a simple make should be sufficient to build a kernel. For instance, for the Nordic nRF52840DK board, run the following:

$ cd boards/nordic/nrf52840dk
$ make
   Compiling nrf52840 v0.1.0 (/home/tock/tock/chips/nrf52840)
   Compiling components v0.1.0 (/home/tock/tock/boards/components)
   Compiling nrf52_components v0.1.0 (/home/tock/tock/boards/nordic/nrf52_components)
    Finished release [optimized + debuginfo] target(s) in 24.07s
   text    data     bss     dec     hex filename
 167940       4   28592  196536   2ffb8 /home/tock/tock/target/thumbv7em-none-eabi/release/nrf52840dk
88302039a5698ab37d159ec494524cc466a0da2e9938940d2930d582404dc67a  /home/tock/tock/target/thumbv7em-none-eabi/release/nrf52840dk.bin

If this is the first time you are trying to make the kernel, the build system will use cargo and rustup to install various Tock dependencies.

Programming the kernel and interfacing with your board

Boards may require slightly different procedures for programming the Tock kernel.

If you are following along with the provided VM, do not forget to pass your board's USB interface(s) to the VM. In VirtualBox, this should work by selecting "Devices > USB" and then enabling the respective device (for example SEGGER J-Link [0001]).

Security USB Key with Tock

This module and its submodules will walk you through how to create a USB security key using Tock.

Security Key

Hardware Notes

To fully follow this guide you will need a hardware board that supports a peripheral USB port (i.e. where the microcontroller has USB hardware support). We recommend using the nRF52840dk.

Compatible boards:

  • nRF52840dk
  • imix

You'll also need two USB cables, one for programming the board and the other for attaching it as a USB device.

Goal

Our goal is to create a standards-compliant HOTP USB key that we can use with a demo website. The key will support enrolling new URL domains and providing secure authentication.

The main logic of the key will be implemented as a userspace program. That userspace app will use the kernel to decrypt the shared key for each domain, send the HMAC output as a USB keyboard device, and store each encrypted key in nonvolatile key-value storage.

nRF52840dk Hardware Setup

nRF52840dk

If you are using the nRF52840dk, there are a couple of configurations on the board that you should double-check:

  1. The "Power" switch on the top left should be set to "On".
  2. The "nRF power source" switch in the top middle of the board should be set to "VDD".
  3. The "nRF ONLY | DEFAULT" switch on the bottom right should be set to "DEFAULT".

For now, you should plug one USB cable into the top of the board for programming (NOT into the "nRF USB" port on the side). We'll attach the other USB cable later.

Organization and Getting Oriented to Tock

This module will refer to various Tock components. This section briefly describes the general structure of Tock that you will need to be somewhat familiar with to follow the module.

Using Tock consists of two main building blocks:

  1. The Tock kernel, which runs as the operating system on the board. This is compiled from the Tock repository.
  2. Userspace applications, which run as processes and are compiled and loaded separately from the kernel.

The Tock kernel is compiled specifically for a particular hardware device, termed a "board". The location of the top-level file for the kernel on a specific board is in the Tock repository, under /tock/boards/<board name>. Any time you need to compile the kernel or edit the board file, you will go to that folder. You also install the kernel on the hardware board from that directory.

Userspace applications are stored in a separate repository, either libtock-c or libtock-rs (for C and Rust applications, respectively). Those applications are compiled within those repositories.

Stages

This module is broken into four stages:

  1. Configuring the kernel to provide necessary syscall drivers:
    1. USB Keyboard Device.
    2. HMAC
    3. Key-Value
  2. Creating an HOTP userspace application.
  3. Creating an in-kernel encryption oracle.
  4. Enforcing access control restrictions to the oracle.

Implementing a USB Keyboard Device

The Tock kernel supports implementing a USB device, and we can set up our kernel so that it is recognized as a USB keyboard. This is necessary to enable the HOTP key to send the generated code to the computer when logging in.

Background

This module configures your hardware board to be a USB HID device. From Wikipedia:

The USB human interface device class (USB HID class) is a part of the USB specification for computer peripherals: it specifies a device class (a type of computer hardware) for human interface devices such as keyboards, mice, game controllers and alphanumeric display devices.

The USB HID class describes devices used with nearly every modern computer. Many predefined functions exist in the USB HID class. These functions allow hardware manufacturers to design a product to USB HID class specifications and expect it to work with any software that also meets these specifications.

Enabling USB HID will allow your board to operate as a normal keyboard. As far as your computer is concerned, you plugged in a USB keyboard. This means your board and microcontroller can "type" to your computer.

Configuring the Kernel

We need to set up our kernel to include USB support, and particularly the USB HID (keyboard) profile. This requires modifying the board's lib.rs file. These steps will guide you through adding the USB HID device as a new resource provided by the Tock kernel on your hardware board. You will also expose this resource to userspace via the syscall interface.

1. USB Strings

You first need to create three strings that will represent this device to the USB host.

You should add the following setup near the end of lib.rs, just before creating the Platform struct.

#![allow(unused)]
fn main() {
// Create the strings we include in the USB descriptor.
let strings = static_init!(
    [&str; 3],
    [
        "Nordic Semiconductor", // Manufacturer
        "nRF52840dk - TockOS",  // Product
        "serial0001",           // Serial number
    ]
);
}

2. Include USB HID Capsule Type

Now we need to instantiate the keyboard USB capsule in the board. This capsule provides the USB Keyboard HID stack needed to interface with the USB hardware and provide an interface to communicate as a HID device.

In general, adding a capsule to a Tock kernel can be somewhat cumbersome. To simplify this, we use what we call a "component" to bundle all of the setup. We can use the pre-made KeyboardHidComponent component.

First we define a type for the capsule, which is board-specific as it refers to the specific microcontroller on the board. This type can become unwieldy and redundant, so specifying a type makes adding the same capsule and component to multiple boards more consistent.

Near the top of the lib.rs file, include the correct definitions based on your board. In particular, the UsbHw definition must match the type of the USB hardware driver for your specific microcontroller.

#![allow(unused)]
fn main() {
// USB Keyboard HID - for nRF52840dk
type UsbHw = nrf52840::usbd::Usbd<'static>; // For any nRF52840 board.
type KeyboardHidDriver = components::keyboard_hid::KeyboardHidComponentType<UsbHw>;

// ------------------------------

// USB Keyboard HID - for imix
type UsbHw = sam4l::usbc::Usbc<'static>; // For any SAM4L board.
type KeyboardHidDriver = components::keyboard_hid::KeyboardHidComponentType<UsbHw>;
}

3. Include USB HID Capsule Component

Once we have the type we can include the actual component. This should go below the strings object declared earlier.

Again, the usb_device variable must match your specific board. Choose the correct option from the examples in the code snippet.

#![allow(unused)]
fn main() {
// For nRF52840dk
let usb_device = &nrf52840_peripherals.usbd;

// For imix
let usb_device = &peripherals.usbc;

// Generic HID Keyboard component usage
let (keyboard_hid, keyboard_hid_driver) = components::keyboard_hid::KeyboardHidComponent::new(
    board_kernel,
    capsules_core::driver::NUM::KeyboardHid as usize,
    usb_device,
    0x1915, // Nordic Semiconductor
    0x503a,
    strings,
)
.finalize(components::keyboard_hid_component_static!(UsbHw));
}

4. Activate USB HID Support

Include the USB client trait:

#![allow(unused)]
fn main() {
use kernel::hil::usb::Client;
}

Towards the end of the lib.rs, you need to enable the USB HID driver:

#![allow(unused)]
fn main() {
keyboard_hid.enable();
keyboard_hid.attach();
}

5. Expose USB HID to Userspace

Finally, we need to make sure that userspace applications can use the USB HID interface.

First, we need to keep track of a reference to our USB HID stack by adding the driver to the Platform struct:

#![allow(unused)]
fn main() {
pub struct Platform {
	...
	keyboard_hid_driver: &'static KeyboardHidDriver,
    ...
}
}

and then adding the object to where Platform is constructed:

#![allow(unused)]
fn main() {
let platform = Platform {
    ...
    keyboard_hid_driver,
    ...
};
}

Next we need to map syscalls from userspace to our kernel driver by editing the SyscallDriverLookup implementation for the board:

#![allow(unused)]
fn main() {
// Keyboard HID Driver Num:
const KEYBOARD_HID_DRIVER_NUM: usize = capsules_core::driver::NUM::KeyboardHid as usize;

impl SyscallDriverLookup for Platform {
    fn with_driver<F, R>(&self, driver_num: usize, f: F) -> R
    where
        F: FnOnce(Option<&dyn kernel::syscall::SyscallDriver>) -> R,
    {
        match driver_num {
            ...
            KEYBOARD_HID_DRIVER_NUM => f(Some(self.keyboard_hid_driver)),
            ...
        }
    }
}
}

Compiling and Installing the Kernel

Now you should be able to compile the kernel and load it on to your board.

cd tock/boards/<board name>
make install

Connecting the USB Device

We will use both USB cables on our hardware. The main USB header is for debugging and programming. The USB header connected directly to the microcontroller will be the USB device. Ensure both USB cables are connected to your computer.

Testing the USB Keyboard

To test the USB keyboard device we will use a simple userspace application. libtock-c includes an example app which just prints a string via USB keyboard when a button is pressed.

cd libtock-c/examples/tests/keyboard_hid
make
tockloader install

Position your cursor somewhere benign, like a new terminal. Then press a button on the board.

Checkpoint: You should see a welcome message from your hardware!

Using HMAC-SHA256 in Userspace

Our next task is to provide an HMAC engine for our HOTP application to use. Tock already includes HMAC-SHA256 as a capsule within the kernel; we just need to expose it to userspace.

Background

An HMAC engine is a necessary tool for a HOTP security key. From Wikipedia:

An HMAC...is a specific type of message authentication code (MAC) involving a cryptographic hash function and a secret cryptographic key. As with any MAC, it may be used to simultaneously verify both the data integrity and authenticity of a message. An HMAC is a type of keyed hash function that can also be used in a key derivation scheme or a key stretching scheme.

An HMAC is computed roughly using the following equation (for simplicity this omits details on padding):

HMAC = Hash(Key + Hash(Key + Message))

The result is an output the same length as the output of the hash function used. Because the key is used inside the hash operation, only someone who knows the secret key can compute the correct HMAC (authenticity). And because the message is used inside the hash operation, if the message is altered the HMAC will no longer match (integrity).

HMAC supports any hash function, but the specific hash function used affects the resulting HMAC, so we must specify which one we are using. In this example, we will use the SHA-256 hash algorithm, which means the resulting HMAC will be 32 bytes long.
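
For completeness, the full HMAC construction standardized in RFC 2104 (which the simplified equation above glosses over) is:

HMAC(Key, Message) = Hash((Key' XOR opad) || Hash((Key' XOR ipad) || Message))

where Key' is the key padded (or first hashed, if it is longer than the hash function's block size) out to the block size, ipad and opad are the bytes 0x36 and 0x5c repeated to the block size, and || denotes concatenation.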

Configuring the Kernel

1. Define Types for HMAC

For convenience, we declare the component types for the HMAC capsules at the top of main.rs.

As we are using a software implementation of the SHA-256 algorithm, we do not need to customize any types for our specific microcontroller.

Include this near the top of main.rs (above the Platform struct):

#![allow(unused)]
fn main() {
// HMAC
type HmacSha256Software = components::hmac::HmacSha256SoftwareComponentType<
    capsules_extra::sha256::Sha256Software<'static>,
>;
type HmacDriver = components::hmac::HmacComponentType<HmacSha256Software, 32>;
}

2. Instantiate the Components

Next we need to use components to instantiate a software implementation of SHA256 and HMAC-SHA256. Add this towards the bottom of your main.rs file.

#![allow(unused)]
fn main() {
//--------------------------------------------------------------------------
// HMAC-SHA256
//--------------------------------------------------------------------------

let sha256_sw = components::sha::ShaSoftware256Component::new()
    .finalize(components::sha_software_256_component_static!());

let hmac_sha256_sw = components::hmac::HmacSha256SoftwareComponent::new(sha256_sw).finalize(
    components::hmac_sha256_software_component_static!(capsules_extra::sha256::Sha256Software),
);

let hmac = components::hmac::HmacComponent::new(
    board_kernel,
    capsules_extra::hmac::DRIVER_NUM,
    hmac_sha256_sw,
)
.finalize(components::hmac_component_static!(HmacSha256Software, 32));
}

3. Expose HMAC to Userspace

Next add these capsules to the Platform struct:

#![allow(unused)]
fn main() {
pub struct Platform {
	...
	hmac: &'static HmacDriver,
    ...
}

let platform = Platform {
    ...
    hmac,
    ...
};
}

And make them accessible to userspace by adding to the with_driver function:

#![allow(unused)]
fn main() {
impl SyscallDriverLookup for Platform {
    fn with_driver<F, R>(&self, driver_num: usize, f: F) -> R
    where
        F: FnOnce(Option<&dyn kernel::syscall::SyscallDriver>) -> R,
    {
        match driver_num {
        	...
            capsules_extra::hmac::DRIVER_NUM => f(Some(self.hmac)),
            ...
        }
    }
}
}

Testing

You should be able to install the libtock-c/examples/tests/hmac app and run it:

cd libtock-c/examples/tests/hmac
make
tockloader install

Checkpoint: HMAC is now accessible to userspace!

Using Key-Value Storage in Userspace

When we use the HOTP application to store new keys, we want those keys to be persistent across reboots. That is, if we unplug the USB key, we would like our saved keys to still be accessible when we plug the key back in.

To enable this, we are using Tock's Key-Value (KV) interface. This allows userspace applications to store data in the form of key-value pairs. Applications can retrieve data by querying for the given key.

Checking if Key-Value Support Already Exists

Having key-value support is useful for more cases than just implementing an HOTP key, and so it is possible that your board already has key-value support enabled.

To check this, load the kv_check test app onto your board:

cd libtock-c/examples/tests/kv_check
make
tockloader install

Run tockloader listen and reset the board. You should see the following output if KV support exists:

[KV] Check for Key-Value Support
Key-Value support is enabled.

If KV support already exists, you can skip this module!

Configuring the Kernel

Again we will use components to add key-value support to the kernel.

1. Include the Key-Value Stack Types

The KV stack includes many layers which leads to rather complex types. For more information about the KV stack in Tock, see the TicKV reference. To simplify somewhat, we define a series of types used at each layer of the stack. Include these towards the top of main.rs:

#![allow(unused)]
fn main() {
// TicKV
type Mx25r6435f = components::mx25r6435f::Mx25r6435fComponentType<
    nrf52840::spi::SPIM<'static>,
    nrf52840::gpio::GPIOPin<'static>,
    nrf52840::rtc::Rtc<'static>,
>;
const TICKV_PAGE_SIZE: usize =
    core::mem::size_of::<<Mx25r6435f as kernel::hil::flash::Flash>::Page>();
type Siphasher24 = components::siphash::Siphasher24ComponentType;
type TicKVDedicatedFlash =
    components::tickv::TicKVDedicatedFlashComponentType<Mx25r6435f, Siphasher24, TICKV_PAGE_SIZE>;
type TicKVKVStore = components::kv::TicKVKVStoreComponentType<
    TicKVDedicatedFlash,
    capsules_extra::tickv::TicKVKeyType,
>;
type KVStorePermissions = components::kv::KVStorePermissionsComponentType<TicKVKVStore>;
type VirtualKVPermissions = components::kv::VirtualKVPermissionsComponentType<KVStorePermissions>;
type KVDriver = components::kv::KVDriverComponentType<VirtualKVPermissions>;
}

Note the first type is the underlying flash driver where the KV database is actually stored. This will need to be customized for your specific board and flash device.

2. Include the KV Components

Now we can use those types to instantiate the components for each layer of the KV stack:

#![allow(unused)]
fn main() {
//--------------------------------------------------------------------------
// TICKV
//--------------------------------------------------------------------------

// Static buffer to use when reading/writing flash for TicKV.
let page_buffer = static_init!(
    <Mx25r6435f as kernel::hil::flash::Flash>::Page,
    <Mx25r6435f as kernel::hil::flash::Flash>::Page::default()
);

// SipHash for creating TicKV hashed keys.
let sip_hash = components::siphash::Siphasher24Component::new()
    .finalize(components::siphasher24_component_static!());

// TicKV with Tock wrapper/interface.
let tickv = components::tickv::TicKVDedicatedFlashComponent::new(
    sip_hash,
    mx25r6435f,
    0, // start at the beginning of the flash chip
    (capsules_extra::mx25r6435f::SECTOR_SIZE as usize) * 32, // arbitrary size of 32 pages
    page_buffer,
)
.finalize(components::tickv_dedicated_flash_component_static!(
    Mx25r6435f,
    Siphasher24,
    TICKV_PAGE_SIZE,
));

// KVSystem interface to KV (built on TicKV).
let tickv_kv_store = components::kv::TicKVKVStoreComponent::new(tickv).finalize(
    components::tickv_kv_store_component_static!(
        TicKVDedicatedFlash,
        capsules_extra::tickv::TicKVKeyType,
    ),
);

let kv_store_permissions = components::kv::KVStorePermissionsComponent::new(tickv_kv_store)
    .finalize(components::kv_store_permissions_component_static!(
        TicKVKVStore
    ));

// Share the KV stack with a mux.
let mux_kv = components::kv::KVPermissionsMuxComponent::new(kv_store_permissions).finalize(
    components::kv_permissions_mux_component_static!(KVStorePermissions),
);

// Create a virtual component for the userspace driver.
let virtual_kv_driver = components::kv::VirtualKVPermissionsComponent::new(mux_kv).finalize(
    components::virtual_kv_permissions_component_static!(KVStorePermissions),
);

// Userspace driver for KV.
let kv_driver = components::kv::KVDriverComponent::new(
    virtual_kv_driver,
    board_kernel,
    capsules_extra::kv_driver::DRIVER_NUM,
)
.finalize(components::kv_driver_component_static!(
    VirtualKVPermissions
));
}

This example is for the nRF52840dk board. You will likely need to change the mx25r6435f flash driver to the flash driver appropriate for your board.

3. Update the Platform Struct and Expose KV to Userspace

We need to include the kv_driver in the board's Platform struct:

#![allow(unused)]
fn main() {
pub struct Platform {
    ...
    kv_driver: &'static KVDriver,
    ...
}

let platform = Platform {
    ...
    kv_driver,
    ...
};
}

And make the syscall interface available to userspace:

#![allow(unused)]
fn main() {
impl SyscallDriverLookup for Platform {
    fn with_driver<F, R>(&self, driver_num: usize, f: F) -> R
    where
        F: FnOnce(Option<&dyn kernel::syscall::SyscallDriver>) -> R,
    {
        match driver_num {
            ...
            capsules_extra::kv_driver::DRIVER_NUM => f(Some(self.kv_driver)),
            ...
        }
    }
}
}

Checkpoint: Key-Value is now accessible to userspace!

Testing and Trying Out KV Storage

With KV support in your Tock kernel, you can use the applications in libtock-c/examples/tests/kv* to experiment with KV storage. In particular, the kv_interactive app allows you to get and set key-value pairs.

HOTP Userspace Application

As a reminder, this module guides you through creating a USB security key: a USB device that can be connected to your computer and authenticate you to some service.

At this point, we have configured the Tock kernel to provide the baseline resources necessary to implement the USB security key and use it with real services. However, we still need to actually implement the security key's operational logic. This submodule will guide you through creating a userspace application that follows the HOTP protocol.

Background

HOTP USB Security Keys

One open standard for implementing USB security keys is HMAC-based One-Time Password (HOTP). It generates the 6- to 8-digit numeric codes that are used as a second factor for some websites.

These security keys typically do not just calculate HOTP codes, but can also enter them into your computer automatically. We will enable that functionality by having our device also function as a USB HID keyboard. This means that when plugged in through the proper USB port, it appears as an additional keyboard to your computer and is capable of entering text.
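
To make the relationship between an HMAC result and the short numeric code concrete, below is a minimal sketch of the dynamic truncation step from RFC 4226. This is illustration only: the starter application used later in this module contains its own implementation, and the function name here is invented for the example.

#include <stddef.h>
#include <stdint.h>

// RFC 4226 dynamic truncation: reduce a full HMAC output to a short
// decimal code with `digits` digits. RFC 4226 specifies HMAC-SHA1, but
// the same truncation is commonly applied to HMAC-SHA256 output too.
static uint32_t hotp_truncate(const uint8_t *hmac, size_t hmac_len, int digits) {
  // The low 4 bits of the last byte select an offset into the HMAC output.
  uint8_t offset = hmac[hmac_len - 1] & 0x0f;

  // Take 31 bits starting at that offset (the top bit is masked off).
  uint32_t code = ((uint32_t)(hmac[offset] & 0x7f) << 24) |
                  ((uint32_t)hmac[offset + 1] << 16) |
                  ((uint32_t)hmac[offset + 2] << 8) |
                  (uint32_t)hmac[offset + 3];

  // Keep only the requested number of decimal digits (6 to 8).
  uint32_t modulus = 1;
  for (int i = 0; i < digits; i++) {
    modulus *= 10;
  }
  return code % modulus;
}

The HMAC itself is computed over the secret and the current counter value using the HMAC engine exposed by the kernel, as configured in the earlier submodule.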

Applications in Tock

Tock applications look much closer to applications on traditional OSes than to normal embedded software. They are compiled separately from the kernel and loaded separately onto the hardware. They can be started or stopped individually and can be removed from the hardware individually. Moreover, the kernel decides which applications to run and what permissions they should be given.

Applications make requests to the OS kernel through system calls. Applications instruct the kernel using "command" system calls, and the kernel notifies applications with "upcalls". Importantly, upcalls never interrupt a running application. The application must yield to receive upcalls (i.e. callbacks).

The userspace library ("libtock") wraps system calls in easier-to-use functions. The libtock library is completely asynchronous. Synchronous APIs to the system calls are in "libtock-sync". These functions include the call to yield and expose a synchronous driver interface. Application code can use either.
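
As a rough sketch of this model, an asynchronous request typically looks like the following. Only yield() is a real libtock function; everything named "hypothetical_*" and the simplified upcall signature are invented for this illustration, and the libtock-sync wrappers bundle exactly this kind of wait loop for you.

#include <stdbool.h>

// yield() is provided by libtock (declared here so the sketch is
// self-contained). Real Tock upcalls carry more arguments than shown.
void yield(void);

static bool done = false;
static int result = 0;

// Upcall delivered by the kernel when the operation finishes. Upcalls
// are only delivered while the application is yielding.
static void read_done_upcall(int value) {
  result = value;
  done = true;
}

// Hypothetical asynchronous driver call: starts the operation and
// returns immediately; the upcall arrives later.
int hypothetical_sensor_read(void (*upcall)(int));

// Synchronous wrapper built from the asynchronous pieces.
int read_sensor_sync(void) {
  done = false;
  hypothetical_sensor_read(read_done_upcall); // start the operation
  while (!done) {
    yield(); // block until an upcall (callback) is delivered
  }
  return result;
}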

Submodule Overview

This stage builds up to a full-featured HOTP key application. We'll start with a basic HOTP application which has a pre-compiled HOTP secret key. Then, each milestone will add additional functionality:

  1. Milestone one adds user input to reconfigure the HOTP secret.
  2. Milestone two adds persistent storage for the HOTP information so it is remembered across resets and power cycles.
  3. Milestone three adds support for multiple HOTP secrets simultaneously.

We have provided starter code as well as completed code for each of the milestones. If you're facing some bugs which are limiting your progress, you can reference or even wholesale copy a milestone in order to advance to the next parts of the tutorial.

Setup

There are two steps to check before you begin:

  1. Make sure you have compiled and installed the Tock kernel with the USB HID, HMAC, and KV drivers on to your board.

  2. Make sure you have no testing apps installed. To remove all apps:

    tockloader erase-apps
    

Starter Code

We'll start with the starter code which implements a basic HOTP key.

  1. Within libtock-c, navigate to libtock-c/examples/tutorials/hotp/hotp_starter/.

    This contains the starter code for the HOTP application. It has a hardcoded HOTP secret and generates an HOTP code from it each time Button 1 on the board is pressed.

  2. Compile the application and load it onto your board. In the app directory, run:

    make
    tockloader install
    
  3. To see console output from the application, run tockloader listen in a separate terminal.

    TIP: You can leave the console running, even when compiling and uploading new applications. It's worth opening a second terminal and leaving tockloader listen always running.

  4. Since this application creates a USB HID device to enter HOTP codes, you'll need a second USB cable which will connect directly to the microcontroller. If you are using the nRF52840dk, plug the USB cable into the port on the left-hand side of the nRF52840DK labeled "nRF USB".

    After attaching the USB cable, you should restart the application by hitting the reset button (on the nRF52840DK it is labeled "IF BOOT/RESET").

  5. To generate an HOTP code, press the first button ("Button 1" on the nRF52840DK). You should see a message printed to the console output that says Counter: 0. Typed "750359" on the USB HID keyboard.

    The HOTP code will also be written out over the USB HID device. The six-digit number should appear wherever your cursor is.

  6. Verify the HOTP values with https://www.verifyr.com/en/otp/check#hotp. Go to section "#2 Generate HOTP Code". Once there, enter:

    • "test" as the HOTP Code to auth
    • The current counter value from console as the Counter
    • "sha256" as the Algorithm
    • 6 as the Digits

    Click "Generate" and you'll see a six-digit HOTP code that should match the output of the Tock HOTP app.

The source code for this application is in the file main.c.

This is roughly 300 lines of code and includes button handling, HMAC use, and the HOTP state machine. Execution starts at the main() function at the bottom of the file.

Play around with the app and take a look through the code to make sure it makes sense. Don't worry too much about the HOTP code generation itself, as it already works and you won't have to modify it.

Checkpoint: You should be able to run the application and have it output HOTP codes over USB to your computer when Button 1 is pressed.

Milestone One: Configuring Secrets

The first milestone is to modify the HOTP application to allow the user to set a secret, rather than having a pre-compiled default secret. Completed code is available in the hotp_milestone_one/ folder in case you run into issues.

  1. Modify the code in main() to detect when a user wants to change the HOTP secret rather than get the next code.

    The simplest way to do this is to sense how long the button is held for. You can delay for a short period (roughly 500 ms works well), then read the button again and check whether it is still being pressed. You can wait synchronously with the libtocksync_alarm_delay_ms() function and you can read a button with the libtock_button_read() function.

    • Note that buttons are indexed from 0 in Tock. So "Button 1" on the hardware is button number 0 in the application code. All four of the buttons on the nRF52840DK are accessible, although the initialize_buttons() helper function in main.c only initializes interrupts for button number 0. (You can change this if you want!)

    • An alternative design would be to use different buttons for different purposes. We'll focus on the first method, but feel free to implement this however you think would work best.

  2. For now, just print out a message when you detect the user's intent. Be sure to compile and upload your modified application to test it.

  3. Next, create a new helper function to allow for programming new secrets. This function will have three parts:

    1. The function should print a message about wanting input from the user.

      • Let them know that they've entered this mode and that they should type a new HOTP secret.
    2. The function should read input from the user to get the base32-encoded secret (see the sketch after this list).

      • You'll want to use the Console functions libtocksync_console_write() and libtocksync_console_read(). libtocksync_console_read() can read characters of user input while libtocksync_console_write() can be used to echo each character the user types. Make a loop that reads the characters into a buffer.

      • Since the secret is in base32, special characters are not valid. The easiest way to handle this is to check the input character with isalnum() and ignore it if it isn't alphanumeric.

      • When the user hits the enter key, a \n or \r character will be received. This can be used to break from the loop.

    3. The function should decode the secret and save it in the HOTP key struct.

      • Use the program_default_secret() implementation for guidance here. The default_secret takes the place of the string you read from the user, but otherwise the steps are the same.
  4. Connect the two pieces of code you created to allow the user to enter a new key. Then upload your code to test it!
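
If it helps to see the shape of the read loop from step 3 before writing it, below is a minimal sketch. It uses standard getchar()/putchar() purely as stand-ins for the libtocksync_console_read()/libtocksync_console_write() calls named above (check the libtock-c headers for their exact signatures), so it only illustrates the logic.

#include <ctype.h>
#include <stdio.h>

// Read a base32 secret typed by the user, echoing characters back and
// stopping on Enter. getchar()/putchar() stand in for the libtock-c
// console calls named in the tutorial.
static int read_secret(char *buf, int max_len) {
  int len = 0;
  printf("Enter a new HOTP secret (base32), then press Enter:\n");
  while (len < max_len - 1) {
    int c = getchar();
    if (c == '\n' || c == '\r') break;   // Enter ends the secret
    if (!isalnum(c)) continue;           // base32: ignore non-alphanumeric input
    putchar(c);                          // echo what the user typed
    buf[len++] = (char)c;
  }
  buf[len] = '\0';
  printf("\n");
  return len;
}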

Checkpoint: Your HOTP application should now take in user-entered secrets and generate HOTP codes for them based on button presses.

Milestone Two: Persistent Secrets

The second milestone is to save the HOTP struct in persistent flash rather than in volatile memory. After doing so, the secret and current counter values will persist after resets and when the USB device is unplugged. We'll do the saving to flash with the Key-Value driver, which allows an application to save information as key-value pairs. Completed code is available in hotp_milestone_two/ in case you run into issues.

  1. In the HOTP application code we will store the persistent key data as the "value" in a key-value pair.

    Start by writing a function which saves the hotp_key_t object to a specific key (perhaps "hotp"). Use the libtocksync_kv_set() function. (See the sketch after this list.)

  2. Now write a matching function which reads the same key to load the key data from persistent storage. Use the libtocksync_kv_get() function.

  3. Make sure to update the key-value pair whenever part of the HOTP key is modified, i.e. when programming a new secret or updating the counter.

  4. Make sure your app has permissions to use storage in the kernel. The app needs a TBF header to grant it permission. You can have the app automatically include this when compiling by adding these flags to the app Makefile:

    # Make sure we have storage permissions.
    ELF2TAB_ARGS += --write_id 0x4016 --read_ids 0x4016 --access_ids 0x4016
    
  5. Upload your code to test it. You should be able to keep the same secret and counter value on resets and also on power cycles.

  • There is an on/off switch on the top left of the nRF52840DK you can use for power cycling.
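
As a sketch of steps 1 and 2, one way to treat the HOTP state as the "value" in a key-value pair is to copy the struct into a byte buffer and store it under a fixed key. The struct layout below is hypothetical (use the hotp_key_t from your starter code), and the libtocksync_kv_set()/libtocksync_kv_get() call sites are left as comments because their exact signatures should be taken from the libtock-c headers.

#include <stdint.h>
#include <string.h>

// Hypothetical layout; use the hotp_key_t your starter code defines.
typedef struct {
  uint8_t  secret[64];
  uint32_t secret_len;
  uint64_t counter;
} hotp_key_t;

// Fixed key under which the HOTP state is stored.
static const char KV_KEY[] = "hotp";

static void save_hotp_key(const hotp_key_t *key) {
  uint8_t value[sizeof(hotp_key_t)];
  memcpy(value, key, sizeof(hotp_key_t));
  // Store `value` (sizeof(hotp_key_t) bytes) under KV_KEY with
  // libtocksync_kv_set(); check the libtock-c header for the exact
  // argument list.
}

static void load_hotp_key(hotp_key_t *key) {
  uint8_t value[sizeof(hotp_key_t)] = {0};
  // Fetch the value stored under KV_KEY into `value` with
  // libtocksync_kv_get(); check the libtock-c header for the exact
  // argument list.
  memcpy(key, value, sizeof(hotp_key_t));
}

Call save_hotp_key() whenever the secret or counter changes, and load_hotp_key() once at startup.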

Checkpoint: Your application should now both allow for the configuring of HOTP secrets and the HOTP secret and counter should be persistent across reboots.

Milestone Three: Multiple HOTP Keys

The third and final application milestone is to add multiple HOTP keys and a method for choosing between them. This milestone is optional, as the rest of the tutorial will work without it. If you're short on time, you can skip it without issue. Completed code is available in hotp_milestone_three/ in case you run into issues.

  • The recommended implementation of multiple HOTP keys is to assign one key per button (so four total for the nRF52840DK). A short press will advance the counter and output the HOTP code while a long press will allow for reprogramming of the HOTP secret.

  • The implementation here is totally up to you. Here are some suggestions to consider:

    • Select which key you are using based on the button number of the most recent press. You'll also need to enable interrupts for all of the buttons instead of just Button 1.

    • Make the HOTP key into an array with up to four slots. Choose different key names for storage.

    • Having multiple key slots allows for different numbers of digits for the HOTP code on different slots, which you could experiment with.

Checkpoint: Your application should now hold multiple HOTP keys, each of which can be configured and is persistent across reboots.

Encryption Oracle Capsule

Our HOTP security key works by storing a number of secrets on the device, and using these secrets together with some moving factor (e.g., a counter value or the current time) in an HMAC operation. To be useful, our device needs some way to store these secrets, for instance in its internal flash.

However, storing such secrets in plaintext as we did in the previous submodule is not particularly secure. For instance, many microcontrollers offer debug ports which can be used to gain read and write access to flash. Even if these ports can be locked down, such protection mechanisms have been broken in the past. Apart from that, disallowing external flash access makes debugging and updating our device much more difficult.

To circumvent these issues, we will build an encryption oracle capsule: this Tock kernel module will allow applications to request decryption of some ciphertext, using a kernel-internal key not exposed to applications themselves. By only storing an encrypted version of their secrets, applications are free to use unprotected flash storage, or even store them external to the device itself. This is a commonly used paradigm in root of trust systems such as TPMs or OpenTitan, which feature hardware-embedded keys that are unique to a chip and hardened against key-readout attacks.

Our kernel module will use a hard-coded symmetric encryption key (AES-128 CTR-mode), embedded in the kernel binary. While this does not actually meaningfully increase the security of our example application, it demonstrates some important concepts in Tock:

  • How custom userspace drivers are implemented, and the different types of system calls supported.
  • How Tock implements asynchronous APIs in the kernel.
  • Tock's hardware-interface layers (HILs), which provide abstract interfaces for hardware or software implementations of algorithms, devices and protocols.

Background

Capsules – Tock's Kernel Modules

Most of Tock's functionality is implemented in the form of capsules – Tock's equivalent to kernel modules. Capsules are Rust modules contained in Rust crates under the capsules/ directory within the Tock kernel repository. They can be used to implement userspace drivers, hardware drivers (for example, a driver for an I²C-connected sensor), or generic reusable code snippets.

What makes capsules special is that they are semi-trusted: they are not allowed to contain any unsafe Rust code, and thus can never violate Tock's memory safety guarantees. They are only trusted with respect to liveness and correctness – meaning that they must not block the kernel execution for long periods of time, and should behave correctly according to their specifications and API contracts.

Capsule Directory in the Tock Repository

While a single "capsule" is generally self-contained in a Rust module (.rs file), these modules are again grouped into Rust crates such as capsules/core and capsules/extra, depending on certain policies. For instance, capsules in core have stricter requirements regarding their code quality and API stability. Neither core nor the extra extra capsules crates allow for external dependencies (outside of the Tock repository). The document on external dependencies further explains these policies.

Developing the Encryption Oracle

We start our encryption oracle driver by creating a new capsule called encryption_oracle. Create a file under capsules/extra/src/tutorials/encryption_oracle.rs in the Tock kernel repository with the following contents:

#![allow(unused)]
fn main() {
// Licensed under the Apache License, Version 2.0 or the MIT License.
// SPDX-License-Identifier: Apache-2.0 OR MIT
// Copyright Tock Contributors 2024.

pub static KEY: &'static [u8; kernel::hil::symmetric_encryption::AES128_KEY_SIZE] =
    b"InsecureAESKey12";

pub struct EncryptionOracleDriver {}

impl EncryptionOracleDriver {
    /// Create a new instance of our encryption oracle userspace driver:
    pub fn new() -> Self {
        EncryptionOracleDriver {}
    }
}

}

This is the basic skeleton for a Tock capsule.

To make this capsule accessible to other Rust modules and crates, add it to capsules/extra/src/tutorials/mod.rs:

  #[allow(dead_code)]
  pub mod encryption_oracle_chkpt5;

+ pub mod encryption_oracle;

EXERCISE: Make sure your new capsule compiles by running cargo check in the capsules/extra/ folder.

The capsules/extra/src/tutorials/ directory already contains checkpoints of the encryption oracle capsule we'll be writing here. Feel free to use them if you're stuck. We indicate that your capsule should have reached an equivalent state to one of our checkpoints through blocks such as the following:

CHECKPOINT: encryption_oracle_chkpt0.rs

Userspace Drivers

Now that we have a basic capsule skeleton, we can think about how this code is going to interact with userspace applications. Not every capsule needs to offer a userspace API, but those that do must implement the SyscallDriver trait.

Tock supports different types of application-issued system calls, four of which are relevant to userspace drivers:

  • subscribe: Allows an application to register upcalls, which are functions being invoked in response to certain events.

  • read-only allow: Allows an application to share a buffer with a kernel module. The kernel only has read access to the buffer.

  • read-write allow: Same as the read-only allow, but kernel modules can also mutate the application-provided buffer.

  • command: Allows applications to signal arbitrary events or send requests to the kernel module.

All Tock system calls are synchronous, which means that they should immediately return to the application. Capsules must not implement long-running operations by blocking on a command system call.

More information can be found in the syscalls documentation.

Application Grants

Now there's just one key piece missing for understanding Tock's system calls: how kernel modules store application-specific data. To avoid using a standard heap, which could be exhausted and leave the kernel in an unrecoverable state, Tock uses grants. Grants are essentially regions of the application's allocated memory space that the kernel uses to store state on behalf of the process. This is distinct from an allow as the application never has access to grant data. More information can be found in the grants documentation.

Our encryption oracle driver will need to keep track of some per-process state. Thus we extend the above driver with a Rust struct to be stored within a grant, called ProcessState. For now, we just keep track of whether a process has requested a decryption operation. Add the following code snippet to your capsule:

#![allow(unused)]
fn main() {
#[derive(Default)]
pub struct ProcessState {
    request_pending: bool,
}
}

By implementing Default, grant types can be allocated and initialized on demand. We integrate this type into our EncryptionOracleDriver by adding a special process_grants variable of type Grant. This Grant struct takes a generic type parameter T (which we set to our ProcessState struct above) alongside some constants: because a driver's subscribe upcall and allow buffer slots also consume some memory, we store them in the process-specific grant as well. Thus, UpcallCount, AllowRoCount, and AllowRwCount indicate how many of these slots should be allocated respectively. For now we don't use any of these slots, so we set their counts to zero. Add the process_grants variable to your EncryptionOracleDriver:

#![allow(unused)]
fn main() {
use kernel::grant::{Grant, UpcallCount, AllowRoCount, AllowRwCount};

pub struct EncryptionOracleDriver {
    process_grants: Grant<
        ProcessState,
        UpcallCount<0>,
        AllowRoCount<0>,
        AllowRwCount<0>,
    >,
}
}

EXERCISE: The Grant struct will be provided as an argument to the constructor of the EncryptionOracleDriver. Extend new to accept it as an argument. Afterwards, make sure your code compiles by running cargo check in the capsules/extra/ directory.

Implementing a System Call

Next we can start to implement a proper system call. We start with the basics and implement a simple command-type system call: upon request by the application, the Tock kernel will call a method in our capsule.

For this, we implement the following SyscallDriver trait for our EncryptionOracleDriver struct. This trait contains two important methods:

  • command: this method is called whenever an application issues a command-type system call towards this driver, and
  • allocate_grant: this is a method required by Tock to allocate some space in the process' memory region. The implementation of this method always looks the same, and while it must be implemented by every userspace driver, its exact purpose is not important right now.

#![allow(unused)]
fn main() {
use kernel::{ErrorCode, ProcessId};
use kernel::syscall::{SyscallDriver, CommandReturn};

impl SyscallDriver for EncryptionOracleDriver {
    fn command(
        &self,
        command_num: usize,
        _data1: usize,
        _data2: usize,
        processid: ProcessId,
    ) -> CommandReturn {
        // Syscall handling code here!
        unimplemented!()
    }

    // Required by Tock for grant memory allocation.
    fn allocate_grant(&self, processid: ProcessId) -> Result<(), kernel::process::Error> {
        self.process_grants.enter(processid, |_, _| {})
    }
}
}

The function signature of command tells us a lot about what we can do with this type of system call:

  • Applications can provide a command_num, which indicates what type of command they are requesting to be handled by a driver, and
  • they can optionally pass up to two usize data arguments.
  • The kernel further provides us with a unique identifier of the calling process, through a type called ProcessId.

Our driver can respond to this system call using a CommandReturn struct. This struct allows for returning either a success or a failure indication, along with some data (at most four usize return values). For more details, you can look at its definition and API here.
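
For illustration, the following sketch (a hypothetical helper, not part of the tutorial code) shows two of the return shapes a command handler can produce with this struct:

#![allow(unused)]
fn main() {
use kernel::ErrorCode;
use kernel::syscall::CommandReturn;

// Hypothetical example: a handler can return plain success, success with
// data (here a single u32), or a failure carrying an ErrorCode.
fn example_return(counter: u32, ok: bool) -> CommandReturn {
    if ok {
        CommandReturn::success_u32(counter)
    } else {
        CommandReturn::failure(ErrorCode::FAIL)
    }
}
}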

In our encryption oracle driver we only need to handle a single application request: to decrypt some ciphertext into its corresponding plaintext. Since we are still missing the actual cryptographic operations, let's simply record that a process has made such a request. Because this is per-process state, we store it in the request_pending field of the process' grant region. To obtain a reference to this memory, we can conveniently use the ProcessId type provided to us by the kernel. The following code snippet shows what an implementation of the command method could look like. Replace your command method body with this snippet:

#![allow(unused)]
fn main() {
match command_num {
    // Check whether the driver is present:
    0 => CommandReturn::success(),

    // Request the decryption operation:
    1 => {
        self
            .process_grants
            .enter(processid, |app, _kernel_data| {
                kernel::debug!("Received request from process {:?}", processid);
                app.request_pending = true;
                CommandReturn::success()
            })
            .unwrap_or_else(|err| err.into())
    },

    // Unknown command number, return a NOSUPPORT error
    _ => CommandReturn::failure(ErrorCode::NOSUPPORT),
}
}

There's a lot to unpack here: first, we match on the passed command_num. By convention, command number 0 is reserved to check whether a driver is loaded on a kernel. If our code is executing, then this must be the case, and thus we simply return success. For all other unknown command numbers, we must instead return a NOSUPPORT error.

Command number 1 is assigned to start the decryption operation. To get a reference to our process-local state stored in its grant region, we can use the enter method: it takes a ProcessId, and in return will call a provided Rust closure that provides us access to the process' own ProcessState instance. Because entering a grant can fail (for instance when the process does not have sufficient memory available), we handle any errors by converting them into a CommandReturn.

EXERCISE: Make sure that your EncryptionOracleDriver implements the SyscallDriver trait as shown above. Then, verify that your code compiles by running cargo check in the capsules/extra/ folder.

CHECKPOINT: encryption_oracle_chkpt1.rs

Congratulations, you have implemented your first Tock system call! Next we will add a resource to the kernel module.

Including an AES Engine in the Driver

Our encryption oracle will encrypt the HOTP keys before we store them to flash. Therefore it needs access to an encryption engine. We will use AES.

We provide the encryption_oracle_chkpt2.rs checkpoint which has these changes integrated; feel free to use this code. We make the following mechanical changes to our types and constructor – don't worry about them too much right now.

First, we change our EncryptionOracleDriver struct to hold a reference to some generic type A, which must implement the AES128 and the AESCtr traits:

+ use kernel::hil::symmetric_encryption::{AES128Ctr, AES128};

- pub struct EncryptionOracleDriver {
+ pub struct EncryptionOracleDriver<'a, A: AES128<'a> + AES128Ctr> {
+     aes: &'a A,
      process_grants: Grant<
          ProcessState,
          UpcallCount<0>,

Then, we change our constructor to accept this aes member as a new argument:

- impl EncryptionOracleDriver {
+ impl<'a, A: AES128<'a> + AES128Ctr> EncryptionOracleDriver<'a, A> {
      /// Create a new instance of our encryption oracle userspace driver:
      pub fn new(
+         aes: &'a A,
+         _source_buffer: &'static mut [u8],
+         _dest_buffer: &'static mut [u8],
          process_grants: Grant<ProcessState, UpcallCount<0>, AllowRoCount<0>, AllowRwCount<0>>,
      ) -> Self {
          EncryptionOracleDriver {
              process_grants: process_grants,
+             aes: aes,
          }
      }
  }

And finally we update our implementation of SyscallDriver to match these new types:

- impl SyscallDriver for EncryptionOracleDriver {
+ impl<'a, A: AES128<'a> + AES128Ctr> SyscallDriver for EncryptionOracleDriver<'a, A> {
      fn command(
          &self,

Make sure that your modified capsule still compiles. We will actually use the AES engine later. Next, we will look into how to integrate this driver into a kernel build.

CHECKPOINT: encryption_oracle_chkpt2.rs

Adding a Capsule to a Tock Kernel

To actually make our driver available in a given build of the kernel, we need to assign it a number and add it to our board's main.rs.

Specifying the Driver Number

Applications interact with our driver by passing a "driver number" alongside their system calls. The capsules/core/src/driver.rs module acts as a registry for driver numbers. For the purposes of this tutorial we'll use an unassigned driver number in the misc range, 0x99999, and add a constant to our capsule accordingly:

#![allow(unused)]
fn main() {
pub const DRIVER_NUM: usize = 0x99999;
}

Instantiating the System Call Driver

Now, open the board's main file (for example boards/tutorials/nrf52840dk-hotp-tutorial/src/main.rs) and scroll down to the line that reads "PLATFORM SETUP, SCHEDULER, AND START KERNEL LOOP". We'll instantiate our encryption oracle driver right above that, with the following snippet:

#![allow(unused)]
fn main() {
const CRYPT_SIZE: usize = 7 * kernel::hil::symmetric_encryption::AES128_BLOCK_SIZE;
let aes_src_buffer = kernel::static_init!([u8; 16], [0; 16]);
let aes_dst_buffer = kernel::static_init!([u8; CRYPT_SIZE], [0; CRYPT_SIZE]);

let oracle = static_init!(
    capsules_extra::tutorials::encryption_oracle::EncryptionOracleDriver<
        'static,
        nrf52840::aes::AesECB<'static>,
    >,
    // Call our constructor:
    capsules_extra::tutorials::encryption_oracle::EncryptionOracleDriver::new(
        &nrf52840_peripherals.nrf52.ecb,
        aes_src_buffer,
        aes_dst_buffer,
        // Magic incantation to create our `Grant` struct:
        board_kernel.create_grant(
            capsules_extra::tutorials::encryption_oracle::DRIVER_NUM, // our driver number
            &create_capability!(capabilities::MemoryAllocationCapability)
        ),
    ),
);

// Leave commented out for now:
// kernel::hil::symmetric_encryption::AES128::set_client(&nrf52840_peripherals.nrf52.ecb, oracle);
}

If you are using a microcontroller other than the nRF52840, you will need to modify the types slightly and provide the correct reference to the AES hardware engine.

Now that we instantiated our capsule, we need to wire it up to Tock's system call handling facilities. This involves two steps: first, we need to store our instance in our Platform struct. That way, we can refer to our instance while the kernel is running. Then, we need to route system calls to our driver number (0x99999) to be handled by this driver.

Add the following line to the very bottom of the pub struct Platform { declaration:

  struct Platform {
      [...],
      base: nrf52840dk_lib::Platform,
+     oracle: &'static capsules_extra::tutorials::encryption_oracle::EncryptionOracleDriver<
+         'static,
+         nrf52840::aes::AesECB<'static>,
+     >,
  }

Furthermore, add our instantiated oracle to the let platform = Platform { instantiation:

  let platform = Platform {
      [...],
      screen,
+     oracle,
  };

Finally, to handle received system calls in our driver, add the following line to the match block in the with_driver method of the SyscallDriverLookup trait implementation:

  impl SyscallDriverLookup for Platform {
      fn with_driver<F, R>(&self, driver_num: usize, f: F) -> R
      where
          F: FnOnce(Option<&dyn kernel::syscall::SyscallDriver>) -> R,
      {
          match driver_num {
              capsules_extra::hmac::DRIVER_NUM => f(Some(self.hmac)),
              [...],
              capsules_extra::app_flash_driver::DRIVER_NUM => f(Some(self.app_flash)),
+             capsules_extra::tutorials::encryption_oracle::DRIVER_NUM => f(Some(self.oracle)),
              _ => self.base.with_driver(driver_num, f),
          }
      }
  }

That's it! We have just added a new driver to the nRF52840DK's Tock kernel build.

EXERCISE: Make sure your board compiles by running make. If you want, you can test your driver with a libtock-c application which executes the following:

command(
    0x99999, // driver number
    1,       // command number
    0, 0     // optional data arguments
);

Upon receiving this system call, the capsule should print the "Received request from process" message.
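
For reference, a minimal libtock-c test application issuing this call might look like the sketch below (treat the include path, the raw command() wrapper, and the syscall_return_t/TOCK_SYSCALL_SUCCESS names as assumptions to double-check against your libtock-c version):

#include <stdio.h>

#include <libtock/tock.h>

// Driver and command numbers matching the encryption oracle capsule above.
#define ORACLE_DRIVER_NUM  0x99999
#define ORACLE_CMD_REQUEST 1

int main(void) {
  printf("Asking the encryption oracle for a decryption...\n");

  // Raw command syscall: driver number, command number, two data arguments.
  syscall_return_t ret = command(ORACLE_DRIVER_NUM, ORACLE_CMD_REQUEST, 0, 0);

  if (ret.type == TOCK_SYSCALL_SUCCESS) {
    printf("Command accepted by the kernel.\n");
  } else {
    printf("Command failed; is the driver registered in main.rs?\n");
  }
  return 0;
}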

Interacting with HILs

The Tock operating system supports multiple hardware platforms, each with different implementations and hardware peripherals. To provide consistent interfaces to kernel modules, Tock uses hardware-interface layers (HILs). HILs can be found under the kernel/src/hil/ directory. We will be working with the symmetric_encryption.rs HIL. You can read more about the design paradigms of HILs in this document.

HILs capture another important concept of the Tock kernel: asynchronous operations. Operations in the Tock kernel are implemented as asynchronous two-phase operations: one function call on the underlying implementation (e.g., of our AES engine) starts an operation, and another function call (issued by the underlying implementation) informs the driver that the operation has completed. You can see this paradigm embedded in all of Tock's HILs, including the symmetric_encryption HIL: the crypt() method is specified to return immediately (and return a Some(_) in case of an error). When the requested operation is finished, the implementor of AES128 will call the crypt_done() callback, on the client registered with set_client().

The figure below illustrates the way asynchronous operations are handled in Tock, using our encryption oracle capsule as an example. One further detail illustrated in this figure is the fact that providers of a given interface (e.g., AES128) may not always be able to perform a large userspace operation in a single call; this may be because of hardware limitations, limited buffer allocations, or to avoid blocking the kernel for too long in software implementations. In this case, a userspace operation is broken up into multiple smaller operations on the underlying provider, and the next sub-operation is scheduled once a callback has been received:

An Illustration of Tock's Asynchronous Driver Model

To allow our capsule to receive crypt_done callbacks, add the following trait implementation:

#![allow(unused)]
fn main() {
use kernel::hil::symmetric_encryption::Client;

impl<'a, A: AES128<'a> + AES128Ctr> Client<'a> for EncryptionOracleDriver<'a, A> {
    fn crypt_done(&'a self, mut source: Option<&'static mut [u8]>, destination: &'static mut [u8]) {
        unimplemented!()
    }
}
}

With this trait implemented, we can wire up the oracle driver instance to receive callbacks from the AES engine (nrf52840_peripherals.nrf52.ecb) by uncommenting the following line in main.rs:

- // Leave commented out for now:
- // kernel::hil::symmetric_encryption::AES128::set_client(&nrf52840_peripherals.nrf52.ecb, oracle);
+ kernel::hil::symmetric_encryption::AES128::set_client(&nrf52840_peripherals.nrf52.ecb, oracle);

If this is missing, our capsule will not be able to receive feedback from the AES hardware that an operation has finished, and it will thus refuse to start any new operation. This is an easy mistake to make – you should check whether all callbacks are set up correctly when the kernel is in such a stuck state.

Multiplexing Between Processes

While our underlying AES128 implementation can only handle one request at a time, multiple processes may wish to use this driver. Thus our capsule implements a queueing system: if our capsule is busy, another process can still mark a request which will set the request_pending flag in our ProcessState grant. Even better, we've already implemented the logic to set this flag!

Now, to actually implement our asynchronous decryption operation, it is important to keep track of which process' request we are currently working on. We add an additional state field to our EncryptionOracleDriver holding an OptionalCell: this is a container whose stored value can be modified even if we only hold an immutable Rust reference to it. The Optional in its name indicates that it behaves similarly to an Option – it can either hold a value, or be empty.
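
As a small, purely illustrative sketch of how such a cell is used through a shared reference (the function and variable names here are only for demonstration):

#![allow(unused)]
fn main() {
use kernel::ProcessId;
use kernel::utilities::cells::OptionalCell;

// Illustrative only: mark a process as "in flight" and clear it later.
fn optional_cell_demo(current_process: &OptionalCell<ProcessId>, processid: ProcessId) {
    if current_process.is_none() {
        // Record which process we are currently serving:
        current_process.set(processid);
    }
    // ...and once the operation completes, empty the cell again:
    let _finished: Option<ProcessId> = current_process.take();
}
}

With that container in mind, add the new field to the driver struct: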

  use kernel::utilities::cells::OptionalCell;

  pub struct EncryptionOracleDriver<'a, A: AES128<'a> + AES128Ctr> {
      aes: &'a A,
      process_grants: Grant<ProcessState, UpcallCount<0>, AllowRoCount<0>, AllowRwCount<0>>,
+     current_process: OptionalCell<ProcessId>,
  }

We need to add it to the constructor as well:

  pub fn new(
      aes: &'a A,
      _source_buffer: &'static mut [u8],
      _dest_buffer: &'static mut [u8],
      process_grants: Grant<ProcessState, UpcallCount<0>, AllowRoCount<0>, AllowRwCount<0>>,
  ) -> Self {
      EncryptionOracleDriver {
          process_grants,
          aes,
+         current_process: OptionalCell::empty(),
      }
  }

In practice, we simply want to find the next process request to work on. For this, we add a helper method to the impl of our EncryptionOracleDriver:

#![allow(unused)]
fn main() {
/// Return a `ProcessId` which has `request_pending` set, if there is some:
fn next_pending(&self) -> Option<ProcessId> {
    unimplemented!()
}
}

EXERCISE: Try to implement this method according to its specification. If you're stuck, see whether the documentation of the OptionalCell and Grant types helps. Hint: to interact with the ProcessState of every process, you can use the iter method on a Grant: the returned Iter type then has an enter method to access the contents of an individual process' grant.
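
If you want a reference point, one possible shape for this method is sketched below; it uses exactly the iter, processid, and enter calls hinted at above and should end up roughly equivalent to the checkpoint code:

#![allow(unused)]
fn main() {
// One possible implementation sketch; compare with encryption_oracle_chkpt3.rs.
fn next_pending(&self) -> Option<ProcessId> {
    // Walk over every process that has state in our grant and return the
    // first one whose `request_pending` flag is currently set.
    self.process_grants.iter().find_map(|process_grant| {
        let processid = process_grant.processid();
        process_grant.enter(|process_state, _kernel_data| {
            if process_state.request_pending {
                Some(processid)
            } else {
                None
            }
        })
    })
}
}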

CHECKPOINT: encryption_oracle_chkpt3.rs

Interacting with Process Buffers and Scheduling Upcalls

For our encryption oracle, it is important to allow users to provide buffers containing the encryption initialization vector (to prevent an attacker from inferring relationships between messages encrypted with the same key), and the plaintext or ciphertext to encrypt and decrypt respectively. Furthermore, userspace must provide a mutable buffer for our capsule to write the operation's output to. These buffers are placed into read-only and read-write allow slots by applications accordingly. We allocate fixed IDs for those buffers:

#![allow(unused)]
fn main() {
/// Ids for read-only allow buffers
mod ro_allow {
    pub const IV: usize = 0;
    pub const SOURCE: usize = 1;
    /// The number of allow buffers the kernel stores for this grant
    pub const COUNT: u8 = 2;
}

/// Ids for read-write allow buffers
mod rw_allow {
    pub const DEST: usize = 0;
    /// The number of allow buffers the kernel stores for this grant
    pub const COUNT: u8 = 1;
}
}

To deliver upcalls to the application, we further allocate a subscribe (upcall) slot for the DONE callback:

#![allow(unused)]
fn main() {
/// Ids for subscribe upcalls
mod upcall {
    pub const DONE: usize = 0;
    /// The number of subscribe upcalls the kernel stores for this grant
    pub const COUNT: u8 = 1;
}
}

Now, we need to update our Grant type to actually reserve these new allow and subscribe slots:

  pub struct EncryptionOracleDriver<'a, A: AES128<'a> + AES128Ctr> {
      aes: &'a A,
      process_grants: Grant<
          ProcessState,
-         UpcallCount<0>,
-         AllowRoCount<0>,
-         AllowRwCount<0>,
+         UpcallCount<{ upcall::COUNT }>,
+         AllowRoCount<{ ro_allow::COUNT }>,
+         AllowRwCount<{ rw_allow::COUNT }>,
      >,

Update this type signature in your constructor as well.

While Tock applications can expose certain sections of their memory as buffers to the kernel, access to these buffers is only possible while their grant region is entered (implemented through a Rust closure). Unfortunately, this implies that asynchronous operations cannot keep a hold of these buffers and use them while other code (or potentially the application itself) is executing.

For this reason, Tock uses static mutable slices (&'static mut [u8]) in HILs. These Rust types have the distinct advantage that they can be passed around the kernel as "persistent references": when borrowing a 'static reference into another 'static reference, the original reference becomes inaccessible. Tock features a special container to hold such mutable references, called TakeCell. We add such a container for each of our source and destination buffers:

  use core::cell::Cell;
  use kernel::utilities::cells::TakeCell;

  pub struct EncryptionOracleDriver<'a, A: AES128<'a> + AES128Ctr> {
      [...],
      current_process: OptionalCell<ProcessId>,
+     source_buffer: TakeCell<'static, [u8]>,
+     dest_buffer: TakeCell<'static, [u8]>,
+     crypt_len: Cell<usize>,
  }

      [...]
  ) -> Self {
      EncryptionOracleDriver {
          process_grants: process_grants,
          aes: aes,
          current_process: OptionalCell::empty(),
+         source_buffer: TakeCell::new(source_buffer),
+         dest_buffer: TakeCell::new(dest_buffer),
+         crypt_len: Cell::new(0),
      }
  }
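
Before moving on, here is a small, purely illustrative sketch of the take-and-replace pattern used with a TakeCell holding a static buffer (names are only for demonstration):

#![allow(unused)]
fn main() {
use kernel::utilities::cells::TakeCell;

// Illustrative only: move the buffer out of the cell to hand it to hardware,
// and put it back once a completion callback returns it to us.
fn takecell_demo(source_buffer: &TakeCell<'static, [u8]>) {
    if let Some(buffer) = source_buffer.take() {
        // `buffer: &'static mut [u8]` is exclusively ours while it is out of
        // the cell; after the operation completes, store it again:
        source_buffer.replace(buffer);
    }
}
}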

Now we have all pieces in place to actually drive the AES implementation. As this is a rather lengthy implementation containing a lot of specifics relating to the AES128 trait, this logic is provided to you in the form of a single run() method. Fill in this implementation from encryption_oracle_chkpt4.rs:

#![allow(unused)]
fn main() {
use kernel::processbuffer::ReadableProcessBuffer;
use kernel::hil::symmetric_encryption::AES128_BLOCK_SIZE;

/// The run method initiates a new decryption operation or
/// continues an existing two-phase (asynchronous) decryption in
/// the context of a process.
///
/// If the process-state `offset` is `0`, we will initialize the
/// AES engine with an initialization vector (IV) provided by the
/// application, and configure it to perform an AES128-CTR
/// operation.
///
/// If the process-state `offset` is larger or equal to the
/// process-provided source or destination buffer size, we return
/// an error of `ErrorCode::NOMEM`. A caller can use this as a
/// method to check whether the decryption operation has
/// finished.
fn run(&self, processid: ProcessId) -> Result<(), ErrorCode> {
    // Copy in the provided code from `encryption_oracle_chkpt4.rs`
    unimplemented!()
}
}

A core part still missing is actually invoking this run() method, namely for each process that has its request_pending flag set. As we need to do this each time an application requests an operation, as well as each time we finish an operation (to work on the next enqueued one), this is implemented in a helper method called run_next_pending:

#![allow(unused)]
fn main() {
/// Try to run another decryption operation.
///
/// If `self.current_process` contains a `ProcessId`, this
/// indicates that an operation is still in progress. In this
/// case, do nothing.
///
/// If `self.current_process` is vacant, use your implementation
/// of `next_pending` to find a process with an active request. If
/// one is found, remove its `request_pending` indication and start
/// a new decryption operation with the following call:
///
///    self.run(processid)
///
/// If this method returns an error, return the error to the
/// process in the registered upcall. Try this until either an
/// operation was started successfully, or no more processes have
/// pending requests.
///
/// Beware: you will need to enter a process' grant both to set the
/// `request_pending = false` and to (potentially) schedule an error
/// upcall. `self.run()` will itself also enter the grant region.
/// However, *Tock's grants are non-reentrant*. This means that trying
/// to enter a grant while it is already entered will fail!
fn run_next_pending(&self) {
    unimplemented!()
}
}

EXERCISE: Implement the run_next_pending method according to its specification. To schedule a process upcall, you can use the second argument passed into the grant.enter() method (kernel_data):

kernel_data.schedule_upcall(
    <upcall slot>,
    (<arg0>, <arg1>, <arg2>)
)

By convention, errors are reported in the first upcall argument (arg0). You can convert an ErrorCode into a usize with the following method:

kernel::errorcode::into_statuscode(<error code>)

run_next_pending should be invoked whenever we receive a new encryption / decryption request from a process, so add it to the command() method implementation:

  // Request the decryption operation:
- 1 => self
-     .process_grants
-     .enter(processid, |grant, _kernel_data| {
-         grant.request_pending = true;
-         CommandReturn::success()
-     })
-     .unwrap_or_else(|err| err.into()),
+ 1 => {
+     let res = self
+         .process_grants
+         .enter(processid, |grant, _kernel_data| {
+             grant.request_pending = true;
+             CommandReturn::success()
+         })
+         .unwrap_or_else(|err| err.into());
+
+     self.run_next_pending();
+
+     res
+ }

We store res temporarily, as Tock's grant regions are non-reentrant: we can't invoke run_next_pending (which will attempt to enter grant regions) while we're already in a grant.

CHECKPOINT: encryption_oracle_chkpt4.rs

Now, to complete our encryption oracle capsule, we need to implement the crypt_done() callback. This callback performs the following actions:

  • copies the in-kernel destination buffer (&'static mut [u8]) as passed to crypt() into the process' destination buffer through its grant, and
  • attempts to invoke another encryption / decryption round by calling run().
    • If calling run() succeeds, another crypt_done() callback will be scheduled in the future.
    • If calling run() fails with an error of ErrorCode::NOMEM, this indicates that the current operation has been completed. Invoke the process' upcall to signal this event, and use our run_next_pending() method to schedule the next operation.

Similar to the run() method, we provide this snippet to you in encryption_oracle_chkpt5.rs:

#![allow(unused)]
fn main() {
use kernel::processbuffer::WriteableProcessBuffer;

impl<'a, A: AES128<'a> + AES128Ctr> Client<'a> for EncryptionOracleDriver<'a, A> {
    fn crypt_done(&'a self, mut source: Option<&'static mut [u8]>, destination: &'static mut [u8]) {
        // Copy in the provided code from `encryption_oracle_chkpt5.rs`
        unimplemented!()
    }
}
}

CHECKPOINT: encryption_oracle_chkpt5.rs

Congratulations! You have written your first Tock capsule and userspace driver, and interfaced with Tock's asynchronous HILs. Your capsule should be ready to go now; go ahead and integrate it into your HOTP application! Don't forget to recompile your kernel so that it includes the latest changes.

Integrating the Encryption Oracle Capsule into your libtock-c App

The encryption oracle capsule is compatible with the oracle.c and oracle.h implementation in the libtock-c part of the tutorial, under examples/tutorials/hotp/hotp_oracle_complete/.

You can try to integrate this with your application by using the interfaces provided in oracle.h. The main.c file in this repository contains an example of how these interfaces can be integrated into a fully-featured HOTP application.

Security Key Application Access Control

At this point we have a fully-featured HOTP USB security key implementation. However, the kernel APIs that enable this are exposed to any application running on the system. In this submodule, we will use additional features of the Tock kernel to restrict access to the encryption capsule to only trusted (credentialed) apps.

Background

We need two Tock mechanisms to implement this feature. First, we need a way to identify the trusted app that we will give access to the encryption engine. We will do this by adding credentials to the app's TBF (Tock Binary Format file) and verifying those credentials when the application is loaded. This mechanism allows developers to sign apps, and then the kernel can verify those signatures.

The second mechanism is a way to permit syscall access to only specific applications. The Tock kernel already has a hook that runs on each syscall to check if the syscall should be permitted. By default this just approves every syscall. We will need to implement a custom policy which permits access to the encryption capsule to only the trusted HOTP apps.

Module Overview

Our goal is to add credentials to Tock apps, verify those credentials in the kernel, and then permit only verified apps to use the encryption oracle API. To keep this simple we will use a SHA-256 hash as our credential, and verify that the hash is valid within the kernel.

Step 1: Credentialed Apps

To implement our access control policy we need to include an offline-computed SHA256 hash with the app TBF, and then check it when running the app. The SHA256 credential is simple to create, and serves as a stand-in for more useful credentials such as cryptographic signatures.

This will require a couple of pieces:

  • We need to actually include the hash in our app.
  • We need a mechanism in the kernel to check the hash exists and is valid.

Signing Apps

We can use Tockloader to add a hash to a compiled app. This will require Tockloader version 1.10.0 or newer.

First, compile the app:

$ cd libtock-c/examples/blink
$ make

Now, add the hash credential:

$ tockloader tbf credential add sha256

It's fine to add the credential to all architectures, or you can specify which TBF to add it to.

To check that the credential was added, we can inspect the TAB:

$ tockloader inspect-tab

You should see output like the following:

$ tockloader inspect-tab
[INFO   ] No TABs passed to tockloader.
[STATUS ] Searching for TABs in subdirectories.
[INFO   ] Using: ['./build/blink.tab']
[STATUS ] Inspecting TABs...
TAB: blink
  build-date: 2023-06-09 21:52:59+00:00
  minimum-tock-kernel-version: 2.0
  tab-version: 1
  included architectures: cortex-m0, cortex-m3, cortex-m4, cortex-m7

 Which TBF to inspect further? cortex-m4

cortex-m4:
  version               : 2
  header_size           :        104         0x68
  total_size            :      16384       0x4000
  checksum              :              0x722e64be
  flags                 :          1          0x1
    enabled             : Yes
    sticky              : No
  TLV: Main (1)                                   [0x10 ]
    init_fn_offset      :         41         0x29
    protected_size      :          0          0x0
    minimum_ram_size    :       5068       0x13cc
  TLV: Program (9)                                [0x20 ]
    init_fn_offset      :         41         0x29
    protected_size      :          0          0x0
    minimum_ram_size    :       5068       0x13cc
    binary_end_offset   :       8360       0x20a8
    app_version         :          0          0x0
  TLV: Package Name (3)                           [0x38 ]
    package_name        : blink
  TLV: Kernel Version (8)                         [0x4c ]
    kernel_major        : 2
    kernel_minor        : 0
    kernel version      : ^2.0
  TLV: Persistent ACL (7)                         [0x54 ]
    Write ID            :          11          0xb
    Read IDs (1)        : 11
    Access IDs (1)      : 11

TBF Footers
  Footer
    footer_size         :       8024       0x1f58
  Footer TLV: Credentials (128)
    Type: SHA256 (3) ✓ verified
    Length: 32
  Footer TLV: Credentials (128)
    Type: Reserved (0)
    Length: 7976

Note at the bottom, there is a Footer TLV with SHA256 credentials! Because tockloader was able to double-check that the hash was correct, there is "✓ verified" next to it.

SUCCESS: We now have an app with a hash credential!

Verifying Credentials in the Kernel

To have the kernel check that our hash credential is present and valid, we need to add a credential checker before the kernel starts each process. For Tock's credential checking architecture, this actually requires three pieces:

  1. The app checking policy that verifies SHA256 credentials.
  2. An AppID assignment policy that assigns identifiers to applications with verified credentials.
  3. A credential checking engine that iterates over each process binary and checks all provided credentials.

To create these, we'll edit the board's main.rs file in the kernel. Tock includes a basic SHA256 credential checker, so we can use that. We will also use an AppID assigner that creates the ID based on the process's name.

The following code should be added to the main.rs file somewhere before the platform setup occurs (probably right after the encryption oracle capsule from the last module!).

#![allow(unused)]
fn main() {
//--------------------------------------------------------------------------
// CREDENTIALS CHECKING POLICY
//--------------------------------------------------------------------------

// Create the software-based SHA engine.
let sha = components::sha::ShaSoftware256Component::new()
    .finalize(components::sha_software_256_component_static!());

// Create the credential checker.
let checking_policy = components::appid::checker_sha::AppCheckerSha256Component::new(sha)
    .finalize(components::app_checker_sha256_component_static!());

// Create the AppID assigner.
let assigner = components::appid::assigner_name::AppIdAssignerNamesComponent::new()
    .finalize(components::appid_assigner_names_component_static!());

// Create the process checking machine.
let checker = components::appid::checker::ProcessCheckerMachineComponent::new(checking_policy)
    .finalize(components::process_checker_machine_component_static!());
}

That code creates a checker object. We will use that checker when processes are loaded. Now we set up the process loader, which uses the process checker. This should go at the end of main(), replacing the existing call to kernel::process::load_processes:

#![allow(unused)]
fn main() {
let process_binary_array = static_init!(
    [Option<kernel::process::ProcessBinary>; NUM_PROCS],
    [None, None, None, None, None, None, None, None]
);

let loader = static_init!(
    kernel::process::SequentialProcessLoaderMachine<
        nrf52840::chip::NRF52<Nrf52840DefaultPeripherals>,
    >,
    kernel::process::SequentialProcessLoaderMachine::new(
        checker,
        &mut *addr_of_mut!(PROCESSES),
        process_binary_array,
        board_kernel,
        chip,
        core::slice::from_raw_parts(
            core::ptr::addr_of!(_sapps),
            core::ptr::addr_of!(_eapps) as usize - core::ptr::addr_of!(_sapps) as usize,
        ),
        core::slice::from_raw_parts_mut(
            core::ptr::addr_of_mut!(_sappmem),
            core::ptr::addr_of!(_eappmem) as usize - core::ptr::addr_of!(_sappmem) as usize,
        ),
        &FAULT_RESPONSE,
        assigner,
        &process_management_capability
    )
);
checker.set_client(loader);

loader.register();
loader.start();
}

Compile and install the updated kernel.

SUCCESS: We now have a kernel that can check credentials!

Installing Apps and Verifying Credentials

Now, our kernel will only run an app if it has a valid SHA256 credential. To verify this, recompile and install the blink app but do not add credentials:

cd libtock-c/examples/blink
touch main.c
make
tockloader install --erase

Now, we can list the processes on the board with the process console. Note we need to run the console-start command to activate the Tock process console.

$ tockloader listen
Initialization complete. Entering main loop
NRF52 HW INFO: Variant: AAF0, Part: N52840, Package: QI, Ram: K256, Flash: K1024
console-start
tock$

Now we can list the processes:

tock$ list
 PID    Name                Quanta  Syscalls  Restarts  Grants  State
tock$

Tip: You can re-disable the process console by using the console-stop command.

You can see our app is not there because it failed to load due to lack of proper credentials.

To fix this, we can add the SHA256 credential.

cd libtock-c/examples/blink
tockloader tbf credential add sha256
tockloader install

Now when we list the processes, we see:

tock$ list
 PID    ShortID    Name                Quanta  Syscalls  Restarts  Grants  State
 0      0x3be6efaa blink                    0       323         0   1/16   Yielded

And we can verify the app is both running and now has a specifically assigned short ID.

Permitting Both Credentialed and Non-Credentialed Apps

The default operation is not quite what we want. We want all apps to run, but only credentialed apps to have access to the syscalls.

To allow all apps to run, even if they don't pass the credential check, we need to configure our checker. Doing that is actually quite simple. We just need to modify the credential checker we are using to not require credentials.

In tock/capsules/system/src/process_checker/basic.rs, modify the require_credentials() function to not require credentials:

#![allow(unused)]
fn main() {
impl AppCredentialsChecker<'static> for AppCheckerSha256 {
    fn require_credentials(&self) -> bool {
        false // change from true to false
    }
    ...
}
}

Then recompile and install. Now even a non-credentialed process should run:

tock$ list
 PID    ShortID    Name                Quanta  Syscalls  Restarts  Grants  State
 0      Unique     c_hello                  0         8         0   1/16   Yielded

SUCCESS: We now can determine if an app is credentialed or not!

Step 2: Permitting Syscalls for only Credentialed Apps

Our second step is to implement a policy that permits syscall access to the encryption capsule only for credentialed apps. All other syscalls should be permitted.

Tock provides the SyscallFilter trait to do this. An object that implements this trait is used on every syscall to check if that syscall should be executed or not. By default all syscalls are permitted.

The interface looks like this:

#![allow(unused)]
fn main() {
pub trait SyscallFilter {
    // Return Ok(()) to permit the syscall, and any Err() to deny.
    fn filter_syscall(
        &self, process: &dyn process::Process, syscall: &syscall::Syscall,
    ) -> Result<(), errorcode::ErrorCode> {
        Ok(())
    }
}
}

We need to implement the single filter_syscall() function with our desired behavior.

To do this, create a new file called syscall_filter.rs in the board's src/ directory. Then insert the code below as a starting point:

#![allow(unused)]
fn main() {
use kernel::errorcode;
use kernel::platform::SyscallFilter;
use kernel::process;
use kernel::syscall;

pub struct TrustedSyscallFilter {}

impl SyscallFilter for TrustedSyscallFilter {
    fn filter_syscall(
        &self,
        process: &dyn process::Process,
        syscall: &syscall::Syscall,
    ) -> Result<(), errorcode::ErrorCode> {

        // To determine if the process has credentials we can use the
        // `process.short_app_id()` function.

        // Now inspect the `syscall` the app is calling. If the `driver_number`
        // is not XXXXXX, then return `Ok(())` to permit the call. Otherwise, if
        // the process is not credentialed, return `Err(ErrorCode::NOSUPPORT)`. If
        // the process is credentialed return `Ok(())`.
    }
}
}

Documentation for the Syscall type is here.
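
If you want a reference point for this exercise, one possible completed filter could look roughly like the sketch below. It assumes that credentialed apps receive a fixed ShortId from the name-based assigner while uncredentialed apps fall back to ShortId::LocallyUnique, and it uses the driver_number() helper on Syscall (if your kernel version lacks that helper, match on the Syscall variants directly):

#![allow(unused)]
fn main() {
use kernel::errorcode;
use kernel::platform::SyscallFilter;
use kernel::process;
use kernel::syscall;

pub struct TrustedSyscallFilter {}

impl SyscallFilter for TrustedSyscallFilter {
    fn filter_syscall(
        &self,
        process: &dyn process::Process,
        syscall: &syscall::Syscall,
    ) -> Result<(), errorcode::ErrorCode> {
        // Syscalls that do not target the encryption oracle are always allowed.
        if syscall.driver_number()
            != Some(capsules_extra::tutorials::encryption_oracle::DRIVER_NUM)
        {
            return Ok(());
        }

        // Credentialed apps were assigned a fixed ShortId by the name-based
        // assigner; uncredentialed apps only have a locally unique one.
        match process.short_app_id() {
            kernel::process::ShortId::Fixed(_) => Ok(()),
            kernel::process::ShortId::LocallyUnique => Err(errorcode::ErrorCode::NOSUPPORT),
        }
    }
}
}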

Save this file and include it from the board's main.rs:

#![allow(unused)]
fn main() {
mod syscall_filter;
}

Now to put our new policy into effect we need to use it when we configure the kernel via the KernelResources trait.

#![allow(unused)]
fn main() {
impl KernelResources for Platform {
    ...
    type SyscallFilter = syscall_filter::TrustedSyscallFilter;
    ...
    fn syscall_filter(&self) -> &'static Self::SyscallFilter {
        self.sysfilter
    }
    ...
}
}

Also you need to instantiate the TrustedSyscallFilter:

#![allow(unused)]
fn main() {
let sysfilter = static_init!(
    syscall_filter::TrustedSyscallFilter,
    syscall_filter::TrustedSyscallFilter {}
);
}

and add it to the Platform struct:

#![allow(unused)]
fn main() {
struct Platform {
    ...
    sysfilter: &'static syscall_filter::TrustedSyscallFilter,
}
}

Then when we create the platform object near the end of main(), we can add our checker:

#![allow(unused)]
fn main() {
let platform = Platform {
    ...
    sysfilter,
};
}

SUCCESS: We now have a custom syscall filter based on app credentials.

Verifying HOTP Now Needs Credentials

Now you should be able to install your HOTP app to the board without adding the SHA256 credential and verify that it is no longer able to access the encryption capsule. You should see output like this:

$ tockloader listen
Tock HOTP App Started. Usage:
* Press a button to get the next HOTP code for that slot.
* Hold a button to enter a new HOTP secret for that slot.
Flash read
Initialized state
ERROR cannot encrypt key

If you use tockloader to add credentials (tockloader tbf credential add sha256) and then re-install your app it should run as expected.

Wrap-up

You now have implemented access control on important kernel resources and enabled your app to use it. This provides platform builders robust flexibility in architecting the security framework for their devices.

Using the HOTP USB Security Key

With our fully functional USB security key, we can now put it to use.

OpenThread Temperature Sensor Network with Tock

This module and submodules will walk you through how to create a Tock temperature sensor network mote that communicates over a Thread network.

Hardware Notes

This tutorial requires a Tock-supported board that has an IEEE 802.15.4-compatible radio and supports Thread. While any such board should work, we recommend the nRF52840DK and assume this board is used throughout this tutorial.

Compatible boards:

Project Setting

In this project we want to demonstrate how the Tock operating system can function as a flexible and reliable platform to build integrated systems. In particular, we demonstrate Tock's ability to run multiple, mutually-distrustful applications on a single microcontroller, and its IEEE 802.15.4 / Thread communications stack.

To demonstrate these features, we will build an HVAC control system for a shared office environment. Each employee will have access to their own HVAC control unit, connected to the central HVAC system through a Thread network. As the temperature set point can be a contentious subject, we allow each employee to enter their desired temperature. In turn, their control unit will display the average temperature set across all controllers, in addition to the current temperature at the control unit. We use Tock's OpenThread-based communications stack and its ability to run multiple concurrent applications to build this control unit (mote).

thread_net_figure

We divide the mote's functionality into three separate applications:

  • The control application is responsible for interacting with the user. It drives the connected screen to display the current temperature and the local and global-average set points.
  • The sensor application gathers readings from the nRF52840's internal temperature sensor and exposes them to the control application.
  • Last but not least, the communication application is responsible for exchanging data with other participants using the Thread network.

thread_net_tutorial_apps

By decoupling the sensor and communication applications, the Tock kernel ensures that the mote can remain responsive even in the case of failures in either application. In this tutorial we demonstrate this by injecting a bug into the communication application and deliberately faulting it with a malicious packet.

Software Prerequisites

  • Getting Started Guide
  • Rust
  • Make
  • GCC for ARM and RISC-V
  • Tockloader Python Package

nRF52840dk Hardware Setup

nRF52840dk

Make sure the switches and jumpers are properly configured on your board:

  1. The "Power" switch on the top left should be set to "On".
  2. The "nRF power source" switch in the top middle of the board should be set to "VDD".
  3. The "nRF ONLY | DEFAULT" switch on the bottom right should be set to "DEFAULT".

You should plug one USB cable into the top of the board for programming (NOT into the "nRF USB" port on the side).

If you have a SSD1306-based screen with I2C pins, you should attach it to pins P1.10 (SDA) and P1.11 (SCL).

See this diagram for the full configuration:

     ┌────────────────┬───┬─────────────────┐
     │┌POWER┐         │USB│← PROG/DEBUG     │
     ││ ON ▓│         └───┘                 │
     ││OFF ░│                               │
     │└─────┘          ┌──DEBUG──┐          │
     │                 │VDD nRF ▪│  P0.27 □ │
     │                 │VDD nRF ▪│  P0.26 □ │
     │ □ VDD   ┌SOURCE┐│SWD SEL ▪│  P0.02 □ │
     │ □ VDD   │LiPo ░││ SWD IO ▪│    GND □ │
     │ □ RESET │ VDD ▓││SWD CLK ▪│  P1.15 □ │
VCC →│ ▣ VDD   │ USB ░││    SWO ▪│  P1.14 □ │
     │ □ 5V    └──────┘│  RESET ▪│  P1.13 □ │
     │ □ GND           │        ▪│  P1.12 □ │
GND →│ ▣ GND     ┌────┐│    VIN ▪│  P1.11 ▣ │← I2C SCL
     │ □ NC      │JTAG││  VDDHV ▪│  P1.10 ▣ │← I2C SDA
     │         ┐ │    ││  VDDHV ▪│          │
     │ □ P0.03 │ └────┘│ VIOREF ▪│  P1.08 □ │
     │ □ P0.04 A       │        ▪│  P1.07 □ │
     │ □ P0.28 D       └─────────┘  P1.06 □ │
     │ □ P0.29 C                    P1.05 □ │
     │ □ P0.30 │                    P1.04 □ │
     │ □ P0.31 │                    P1.03 □ │
     │         ┘                    P1.02 □ │
     │                              P1.01 □ │
     │                                      │
     │                              P0.10 □ │
     │                              P0.09 □ │
     │                              P0.08 □ │
     │ ☉ RESET                      P0.07 □ │
     │   BTN                        P0.06 □ │
     ├───┐                          P0.05 □ │
     │USB│  nRF                     P0.01 □ │
     │   │← PERIPHERAL              P0.00 □ │
     ├───┘                           ┌─────┐│
     │                               │░ nRF││
     │BTN3 BTN1                      │▓ DEF││
     │ ☉    ☉                        └─────┘│
     │BTN4 BTN2                  LED3 LED1  │
     │ ☉    ☉                     □    □    │
     │                ┌───┐      LED4 LED2  │
     │ ┌─┐            │nRF│       □    □    │
     │ │ │NFC         └───┘                 │
     │ └─┘                                  │
     └───                    ───────────────┘
         ╲                  ╱
          ──────────────────

Organization and Getting Oriented to Tock

Tock consists of multiple inter-working components. We briefly describe the general structure of Tock and will deep-dive into these components throughout the tutorial:

A Tock system contains primarily two components:

  1. The Tock kernel, which runs as the operating system on the board. This is compiled from the Tock repository.
  2. Userspace applications, which run as processes and are compiled and loaded separately from the kernel.

The Tock kernel is compiled specifically for a particular hardware device, termed a "board". Tock provides a set of reference board files under /boards/<board name>. Any time you need to compile the kernel or edit the board file, you will go to that folder. You also install the kernel on the hardware board from that directory.

While the Tock kernel is written entirely in Rust, it supports userspace applications written in multiple languages. In particular, we provide two userspace libraries for application development in C and Rust respectively:

  • libtock-c for C applications (https://github.com/tock/libtock-c)
  • libtock-rs for Rust applications (https://github.com/tock/libtock-rs)

We will use libtock-c in this tutorial. Its example applications are located in the /examples directory of the libtock-c repository.

Thread Router

For this tutorial, we assume that one nRF52840DK is dedicated to be a Thread router board. As a participant in a hosted tutorial, you will likely not need to set this up yourself. However, we do provide a pre-built image and some instructions for how to set up this router as well.

Stages

We divide this tutorial into the following stages, with checkpoints that you can use to skip ahead. Each stage contains information on how to obtain all checkpoint-code required for it.

  1. Sensor Application: We start by creating a simple application that reads the nRF52840DK's internal temperature sensor and prints the current temperature to the console.

    This demonstrates how you can flash a Tock kernel and applications onto your development board, and introduces some key Tock concepts.

  2. We continue by extending this application into an "IPC service". This will make the current temperature accessible to other applications that request it.

  3. Our controller application takes this information and displays it onto an attached OLED screen. It provides a basic user interface, wiring up a screen driver, buttons, and an "IPC client".

  4. Following this, we develop the communication application. This application will let our mote join the Thread network and exchange messages.

  5. Finally, we demonstrate how Tock's mutually distrustful application model can protect the system from misbehavior in any given app.

Sound good? Let's get started.

Thread Router Setup

The thread network tutorial requires a Thread router to be present, which is able to accept certain messages from participant boards, average the supplied values, and broadcast them back. We provide a pre-built flash image that performs this task here: ot-central-controller.hex.

Flashing the Binary

You can flash this binary with any tool that can program hex files, such as JLinkExe or probe-rs. You may need to reset the board after flashing, for example by pressing the physical RESET button.

$ probe-rs download --chip nRF52840_xxAA --format hex ot-central-controller.hex
      Erasing ✔ [00:00:13] [##########] 516.00 KiB/516.00 KiB @ 37.43 KiB/s (eta 0s )
  Programming ✔ [00:00:11] [########] 516.00 KiB/516.00 KiB @ 46.68 KiB/s (eta 0s )
     Finished in 24.86s
$ JLinkExe
SEGGER J-Link Commander V7.94a (Compiled Dec  6 2023 16:07:30)
DLL version V7.94a, compiled Dec  6 2023 16:07:07

Connecting to J-Link via USB...O.K.
Firmware: J-Link OB-SAM3U128-V2-NordicSemi compiled Oct 30 2023 12:12:17
Hardware version: V1.00
J-Link uptime (since boot): 0d 00h 08m 40s
S/N: 683487279
License(s): RDI, FlashBP, FlashDL, JFlash, GDB
USB speed mode: High speed (480 MBit/s)
VTref=3.300V


Type "connect" to establish a target connection, '?' for help
J-Link>connect
Please specify device / core. <Default>: NRF52840_XXAA
Type '?' for selection dialog
Device>NRF52840_XXAA
Please specify target interface:
  J) JTAG (Default)
  S) SWD
  T) cJTAG
TIF>S
Specify target interface speed [kHz]. <Default>: 4000 kHz
Speed>
Device "NRF52840_XXAA" selected.


Connecting to target via SWD
InitTarget() start
InitTarget() end - Took 2.79ms
Found SW-DP with ID 0x2BA01477
DPIDR: 0x2BA01477
CoreSight SoC-400 or earlier
Scanning AP map to find all available APs
AP[2]: Stopped AP scan as end of AP map has been reached
AP[0]: AHB-AP (IDR: 0x24770011)
AP[1]: JTAG-AP (IDR: 0x02880000)
Iterating through AP map to find AHB-AP to use
AP[0]: Core found
AP[0]: AHB-AP ROM base: 0xE00FF000
CPUID register: 0x410FC241. Implementer code: 0x41 (ARM)
Found Cortex-M4 r0p1, Little endian.
Cortex-M: The connected J-Link (S/N 683487279) uses an old firmware module: V1 (current is 2)
FPUnit: 6 code (BP) slots and 2 literal slots
CoreSight components:
ROMTbl[0] @ E00FF000
[0][0]: E000E000 CID B105E00D PID 000BB00C SCS-M7
[0][1]: E0001000 CID B105E00D PID 003BB002 DWT
[0][2]: E0002000 CID B105E00D PID 002BB003 FPB
[0][3]: E0000000 CID B105E00D PID 003BB001 ITM
[0][4]: E0040000 CID B105900D PID 000BB9A1 TPIU
[0][5]: E0041000 CID B105900D PID 000BB925 ETM
Memory zones:
  Zone: "Default" Description: Default access mode
Cortex-M4 identified.
J-Link>loadfile ot-central-controller.hex
'loadfile': Performing implicit reset & halt of MCU.
Reset: Halt core after reset via DEMCR.VC_CORERESET.
Reset: Reset device via AIRCR.SYSRESETREQ.
Downloading file [ot-central-controller.hex]...
J-Link: Flash download: Bank 0 @ 0x00000000: 1 range affected (528384 bytes)
J-Link: Flash download: Total: 17.932s (Prepare: 0.161s, Compare: 0.045s, Erase: 10.840s, Program & Verify: 6.757s, Restore: 0.128s)
J-Link: Flash download: Program & Verify speed: 76 KB/s
O.K.
J-Link>quit

Interacting with the Router

Once you have flashed this image, it should provide you with an OpenThread CLI on the serial console. You can use this to inspect the state of the device, such as through tockloader listen:

$ tockloader listen
[INFO   ] Using "/dev/ttyACM0 - J-Link - CDC".
[INFO   ] Listening for serial output.

> state
leader
Done

You can get a list of attached devices through the child table command:

> child table
| ID  | RLOC16 | Timeout    | Age        | LQ In | C_VN |R|D|N|Ver|CSL|QMsgCnt|Suprvsn| Extended MAC     |
+-----+--------+------------+------------+-------+------+-+-+-+---+---+-------+-------+------------------+
|   1 | 0x1801 |        240 |         67 |     3 |  107 |1|1|1|  4| 0 |     0 |   129 | 0a5e0b97af0631ae |

Done

Writing a Temperature-Sensor App on Tock

In this stage, we write a simple application that will ask the Tock kernel for our chip's current temperature and then print this value to the serial console. By the end of this submodule, you will know how to:

  1. Compile and flash the Tock kernel.
  2. Compile and flash a libtock-c application.
  3. Interact with the Tock process console.
  4. Interact with Tock syscalls, callbacks, and Inter-Process Communication.

Compiling and Installing the Kernel

For this tutorial, we provide a Tock kernel configuration that exposes all required peripherals to userspace applications. It is based on the nrf52840dk base board definition and adds a driver instantiation for the SSD1306 1.3" OLED screen we are using in this tutorial.

You can compile this board configuration by entering its directory and typing make:

$ cd tock/boards/tutorials/nrf52840dk-thread-tutorial
$ make
   [...]
   Compiling nrf52_components v0.1.0 (/home/leons/proj/tock/kernel/boards/nordic/nrf52_components)
   Compiling nrf52840dk v0.1.0 (/home/leons/proj/tock/kernel/boards/nordic/nrf52840dk)
    Finished `release` profile [optimized + debuginfo] target(s) in 11.09s
   text    data     bss     dec     hex filename
 233474      36   41448  274958   4320e tock/target/thumbv7em-none-eabi/release/nrf52840dk-thread-tutorial
cb0df7abb1...d47b383aaf  tock/target/thumbv7em-none-eabi/release/nrf52840dk-thread-tutorial.bin

To flash the kernel onto your nRF52840DK development board, make sure that you use the debug USB port (top-side, not "nRF USB"). Then type

$ make install
tockloader  flash --address 0x00000 --board nrf52dk --jlink tock/kernel/target/thumbv7em-none-eabi/release/nrf52840dk-thread-tutorial.bin
[INFO   ] Using settings from KNOWN_BOARDS["nrf52dk"]
[STATUS ] Flashing binary to board...
[INFO   ] Finished in 9.901 seconds

If these commands fail, ensure that you have all of rustup, tockloader, and the SEGGER J-Link software installed. You can test your connection with the integrated J-Link debug probe by running:

$ JLinkExe
SEGGER J-Link Commander V7.94a (Compiled Dec  6 2023 16:07:30)
DLL version V7.94a, compiled Dec  6 2023 16:07:07

Connecting to J-Link via USB...O.K.
Firmware: J-Link OB-SAM3U128-V2-NordicSemi compiled Oct 30 2023 12:12:17
Hardware version: V1.00
J-Link uptime (since boot): 0d 00h 39m 40s
S/N: 683487279
License(s): RDI, FlashBP, FlashDL, JFlash, GDB
USB speed mode: High speed (480 MBit/s)
VTref=3.300V

Connecting to the Tock Kernel

You can connect to your board's serial console using tockloader or any other serial console application. If your development board presents two console devices, the lower-numbered one is usually correct. Select 115200 baud, 1 stop bit, no parity, no flow control. The following command should also do the trick:

$ tockloader listen

By default, a Tock board without any applications will respond with a message similar to:

Initialization complete. Entering main loop
NRF52 HW INFO: Variant: AAC0, Part: N52840, Package: QI, Ram: K256, Flash: K1024
tock$

If you don't see this prompt, try hitting ENTER or pressing the RESET button on your board (near the left-hand side USB port). If you see the following selection dialog, note that the nRF52840DK exposes the chip's serial console on the first UART port (e.g., ttyACM0 instead of ttyACM1). If that does not work, simply try the other available ports:

$ tockloader listen
[INFO   ] No device name specified. Using default name "tock".
[INFO   ] No serial port with device name "tock" found.
[INFO   ] Found 2 serial ports.
Multiple serial port options found. Which would you like to use?
[0]     /dev/ttyACM1 - J-Link - CDC
[1]     /dev/ttyACM0 - J-Link - CDC

Which option? [0] 1
[INFO   ] Using "/dev/ttyACM0 - J-Link - CDC".
Initialization complete. Entering main loop
NRF52 HW INFO: Variant: AAC0, Part: N52840, Package: QI, Ram: K256, Flash: K1024
tock$

The small shell above is called the "process console". It allows you to start and stop applications, and control other parts of the Tock kernel. For instance, the reset command will completely reset your chip, re-printing the above greeting. Use help to obtain a list of commands.

CHECKPOINT: You can interact with the process console.

Compiling and Installing an Application

With the kernel running we can now load applications onto our board. Tock applications are compiled and loaded separately from the kernel. For this tutorial we will use the libtock-c userspace library, whose source is located outside of the kernel repository here.

We provide some scaffolding for this tutorial. Make sure to enter the following directory:

$ cd libtock-c/examples/tutorials/thread_network
$ ls
00_sensor_hello
01_sensor_ipc
[...]

These applications represent checkpoints for different milestones of this tutorial. If you are ever stuck on something, you can try running or looking at the subsequent checkpoint. We'll start the tutorial off at checkpoint 00_sensor_hello. Whenever we reach a checkpoint, we indicate this through a message like the following:

CHECKPOINT: 00_sensor_hello

To compile and flash this application, we enter into its directory and run the following command:

$ cd 00_sensor_hello
$ make -j install
[...]
Application size report for arch family cortex-m:
Application size report for arch family rv32i:
   text    data     bss     dec     hex filename
   3708     204    2716    6628    19e4 build/cortex-m0/cortex-m0.elf
[...]
  13944     816   10864   25624    6418 (TOTALS)
   text    data     bss     dec     hex filename
   4248     100    2716    7064    1b98 build/rv32imac/rv32imac.0x20040080.0x80002800.elf
[...]
  51432    1000   27160   79592   136e8 (TOTALS)
[INFO   ] Using openocd channel to communicate with the board.
[INFO   ] Using settings from KNOWN_BOARDS["nrf52dk"]
[STATUS ] Installing app on the board...
[INFO   ] Flashing app org.tockos.thread-tutorial.sensor binary to board.
[INFO   ] Finished in 1.737 seconds

Once the binary is flashed to the board, you can connect to its serial port using tockloader listen. Upon reset, the board should now greet you:

$ tockloader listen
Initialization complete. Entering main loop
NRF52 HW INFO: Variant: AAC0, Part: N52840, Package: QI, Ram: K256, Flash: K1024
Hello World!
tock$

Congratulations, you have successfully installed and run your first Tock application! You can manage apps installed on a board with Tockloader. For instance, use the following commands to install, list, and erase applications:

$ tockloader install            # Installs an app
$ tockloader list               # Lists installed apps
$ tockloader erase-apps         # Erases all apps

Making Your First System Call

The goal of the sensor application is to sample the chip's internal temperature sensor and to provide this value to other applications using Tock's Inter-Process Communication facility.

However, an application in Tock runs as an unprivileged process, and as such it does not have direct access to any chip peripherals. Instead, the application needs to ask the Tock kernel to perform this operation. For this, libtock-c provides system call wrappers that our application can use; these are defined in the libtock and libtock-sync folders of the libtock-c repository. For this particular application, we are mainly interested in talking to Tock's sensor driver subsystem. The header libtock-sync/sensors/temperature.h provides convenient userspace wrapper functions, such as libtocksync_temperature_read:

#include <libtock-sync/sensors/temperature.h>

// Read the temperature sensor synchronously.
//
// ## Arguments
//
// - `temperature`: Set to the temperature value in hundredths of degrees
//   centigrade.
//
// ## Return Value
//
// A returncode indicating whether the temperature read was completed
// successfully.
returncode_t libtocksync_temperature_read(int* temperature);

For now, let's focus on using the API to make a system call to read the temperature. For this, we can extend the provided 00_sensor_hello application's main.c file with a call to that function. Your code should invoke this function and pass it a reference into which the temperature value will be written. You can then extend the printf call to print this number.
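
For reference, a minimal sketch of such an extended main.c might look as follows (error handling is omitted and the exact structure of the 00_sensor_hello checkpoint may differ slightly):

#include <stdio.h>

#include <libtock-sync/sensors/temperature.h>

int main(void) {
  int temperature = 0;

  // Ask the kernel for a temperature reading (in hundredths of degrees
  // centigrade). This call blocks until the reading has completed.
  libtocksync_temperature_read(&temperature);

  printf("Hello World, the temperature is: %d\n", temperature);

  return 0;
}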

With these changes, compile and re-install your application by running make install again. Once that is done, you should see output similar to the following:

$ tockloader listen
Initialization complete. Entering main loop
NRF52 HW INFO: Variant: AAC0, Part: N52840, Package: QI, Ram: K256, Flash: K1024
Hello World, the temperature is: 2600
tock$

CHECKPOINT: 01_sensor_ipc

Implementing an IPC Service

In our next step, we want to extend this application into an IPC service, such that we can provide the most recent temperature reading to other applications as well.

Because we do not want to make a system call every time we get such an IPC request, we instead change the main function to run a loop and query the temperature periodically, such as once every 250 milliseconds. For this, we can use the libtocksync_alarm_delay_ms function:

#include <libtock-sync/services/alarm.h>

int main(void) {
  // Perform initialization, declare variables

  for (;;) {
    // Read temperature into global variable

    // Wait for 250ms
    libtocksync_alarm_delay_ms(250);
  }

  return 0;
}

It is worth noting at this point that the libtocksync_alarm_delay_ms function does not perform busy-waiting. It instead blocks this application from executing for some time, and unblocks it by notifying it after 250 ms. This notification comes in the form of a callback. A callback is a kernel-scheduled task in the userspace application that can run at specific, pre-determined points in the application: so-called yield-points. In contrast to, e.g., signal handlers on Linux, an application will not receive a callback between any arbitrary instructions. libtocksync_alarm_delay_ms is such a yield-point and allows any number of callbacks to be invoked until the 250ms wait-time has expired. When an application has no work to be done, the kernel is free to schedule other applications or place the chip into a low-power state.

In the above example, libtocksync_alarm_delay_ms internally configures an appropriate handler for the callback that is invoked when its wait-time has expired. However, other types of events require a developer to write and register a callback manually -- for instance, for IPC service requests. We do so by invoking the ipc_register_service_callback, defined in ipc.h:

#include <libtock/kernel/ipc.h>

// Registers a service callback for this process.
//
// Service callbacks are called in response to `notify`s from clients and take
// the following arguments in order:
//
//   pkg_name  - the package name of this service
//   callback  - the address callback function to execute when clients notify
//   void* ud  - `userdata`. data passed to callback function
int ipc_register_service_callback(const char *pkg_name,
                                  subscribe_upcall callback, void *ud);

In the above, ipc_register_service_callback takes a "package name" under which the IPC service will be reachable by clients. By convention this should be the same name that the application uses -- in our example, it should be org.tockos.thread-tutorial.sensor as defined in the Makefile. When a client sends an IPC request to a service, the provided callback will be invoked in the service application. This callback is invoked with some parameters provided by the IPC client, and is passed the ud pointer provided in the call to ipc_register_service_callback. This callback has a function signature as follows:

static void sensor_ipc_callback(int pid, int len, int buf, void *ud) {
  // Callback handler code
}

Here, pid is an identifier that can be used to send a notification back to the requesting client, using the following call:

ipc_notify_client(pid);

IPC clients and services communicate through memory sharing. In particular, an IPC client can share a region of its own memory with the IPC service, provided some constraints on buffer size and alignment. This shared buffer is then provided to the IPC service callback through the len and buf parameters, where buf should be cast to the appropriate pointer type.

EXERCISE: Implement an IPC service callback for your sensor application that writes the current temperature value into the provided buffer.

You should write the temperature value into a global variable in the main loop, and read this variable in the callback handler. You may use something along the lines of:

memcpy((uint8_t*) buf, (uint8_t*) &current_temperature, sizeof(current_temperature))

After copying the value, notify the calling client using the ipc_notify_client call.
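
Putting these pieces together, the service callback might look roughly like the following sketch (current_temperature is assumed to be the global variable that the main loop updates; the 02_sensor_final checkpoint contains a complete solution):

#include <string.h>

#include <libtock/kernel/ipc.h>

// Global temperature value, updated periodically by the main loop:
int current_temperature = 0;

static void sensor_ipc_callback(int pid, int len, int buf, void* ud) {
  // Only answer if the client shared a buffer large enough for the value:
  if (len < (int) sizeof(current_temperature)) return;

  // Copy the most recent reading into the buffer shared by the client:
  memcpy((uint8_t*) buf, (uint8_t*) &current_temperature,
         sizeof(current_temperature));

  // Let the requesting client know that its request has been serviced:
  ipc_notify_client(pid);
}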

Install the application.

CHECKPOINT: 02_sensor_final

Testing your IPC Service

To test whether this IPC service works we also need an appropriate IPC client. For this, we provide a client application that also forms the basis of our control application.

CHECKPOINT: 03_controller_screen

EXERCISE: Install the provided 03_controller_screen application next to the sensor IPC service. A tockloader list command should show both applications as being installed:

$ tockloader list
[INFO   ] Using jlink channel to communicate with the board.
[INFO   ] Using settings from KNOWN_BOARDS["nrf52dk"]
┌──────────────────────────────────────────────────┐
│ App 0                                            |
└──────────────────────────────────────────────────┘
  Name:                  org.tockos.thread-tutorial.controller
  Version:               0
  Enabled:               True
  Sticky:                False
  Total Size in Flash:   16384 bytes


┌──────────────────────────────────────────────────┐
│ App 1                                            |
└──────────────────────────────────────────────────┘
  Name:                  org.tockos.thread-tutorial.sensor
  Version:               0
  Enabled:               True
  Sticky:                False
  Total Size in Flash:   8192 bytes

[INFO   ] Finished in 4.381 seconds

When both applications are flashed onto a Tock board, the provided 03_controller_screen application should indicate that it is making repeated IPC calls to the sensor and retrieving a temperature value, which can look like the following. You can also trigger these prints by pressing button 1 or 2 on the board.

$ tockloader listen
NRF52 HW INFO: Variant: AAC0, Part: N52840, Package: QI, Ram: K256, Flash: K1024
[controller] Discovered sensor service: 1
[controller] TODO: update screen! Measured temperature: 2500
tock$ [controller] TODO: update screen! Measured temperature: 2600
[controller] TODO: update screen! Measured temperature: 2700
[controller] TODO: update screen! Measured temperature: 2600
[controller] TODO: update screen! Measured temperature: 2500
[controller] TODO: update screen! Measured temperature: 2500

Take a moment to look at the 03_controller_screen/main.c implementation. It implements the IPC client logic by defining a sensor_callback, quite similar to the service callback we defined above. This callback is fired whenever the service notifies the client. This app also defines some logic to handle button presses and change a "set-point temperature", which it displays on the console. This part will be relevant in the next stage of the tutorial.
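
For reference, the client side roughly follows the pattern sketched below, using the IPC client helpers from ipc.h (the buffer size, variable names, and omission of error handling are simplifications; the checkpoint's actual code differs in its details):

#include <libtock/kernel/ipc.h>

// Buffer shared with the sensor service; IPC buffers must be aligned and
// sized according to the kernel's IPC constraints.
static uint8_t sensor_buffer[64] __attribute__((aligned(64)));

static void sensor_callback(int pid, int len, int buf, void* ud) {
  // The sensor service has written the temperature into sensor_buffer.
}

static void query_sensor(void) {
  size_t sensor_svc_id;

  // Look up the sensor service by its package name:
  ipc_discover("org.tockos.thread-tutorial.sensor", &sensor_svc_id);
  // Register our callback and share our buffer with the service:
  ipc_register_client_callback(sensor_svc_id, sensor_callback, NULL);
  ipc_share(sensor_svc_id, sensor_buffer, sizeof(sensor_buffer));
  // Ask the service to handle a request; sensor_callback fires when done:
  ipc_notify_service(sensor_svc_id);
}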

This concludes the first stage of this tutorial. In the next step, we will extend the controller application to utilize a more involved peripheral: an attached OLED screen. This screen, alongside the four buttons present on the nRF52840DK development board, will serve as the user interface for our HVAC control system.

Continue here.

Building the User Interface

In the previous stage we built a sensor application that queries the Tock kernel for the current temperature and exposes this value as an IPC service. We also provided a minimal controller application which uses this service and prints the temperature value onto the console.

However, this is not a great user interface. In this stage of the tutorial, we will extend this application to display information on an OLED screen attached to the board. For this we use the Tock kernel's screen driver support, in addition to the u8g2 graphics library.

CHECKPOINT: 02_sensor_final + 03_controller_screen

We assume that the sensor application is already loaded onto the board, and that the provided control application is able to print the temperature retrieved via IPC.

Adding the u8g2 Library

Tock is able to run arbitrary code in its userspace applications, including existing C libraries. For this stage in particular, we are interested in displaying information on a screen. Without a library to render text or symbols, this can be quite cumbersome. Instead, we will use the u8g2 library with libtock-c bindings.

To add this library to our application we add the following two lines to our application's Makefile, before the AppMakefile.mk include:

STACK_SIZE  = 4096
EXTERN_LIBS += $(TOCK_USERLAND_BASE_DIR)/u8g2

We increase the size of the stack that is pre-allocated for the application, as libtock-c by default allocates a stack of 2 kB which is insufficient for u8g2. We then specify that our application depends on the u8g2 library, by adding the libtock-c/u8g2 directory to EXTERN_LIBS. This directory contains a wrapper that allows the u8g2 library to communicate with Tock's screen driver system calls and ensures that the library can be used from within our application.

Once this is done, we can add some initialization code to our controller application:

#include <u8g2.h>
#include <u8g2-tock.h>

// Global reference to the u8g2 context:
u8g2_t u8g2;

int main(void) {
  // Required initialization code:
  u8g2_tock_init(&u8g2);
  u8g2_SetFont(&u8g2, u8g2_font_profont12_tr);
  u8g2_SetFontPosTop(&u8g2);

  // Clear the screen:
  u8g2_ClearBuffer(&u8g2);
  u8g2_SendBuffer(&u8g2);

  [...]
}

When we now build and install this app, it should still display the temperature readouts on the serial console. However, it should also clear the screen, and you may see it flicker briefly when installing applications or resetting your board.

EXERCISE: Extend the above app to print a simple message on the screen. You can use the u8g2_SetDrawColor(&u8g2, 1); method to draw in either the 0 or 1 color (i.e., foreground or background). u8g2_DrawStr(&u8g2, $XCOORD, $YCOORD, $YOUR_STRING); can be used to print a string to the display. Make sure you update the display contents with a final call to u8g2_SendBuffer(&u8g2);.
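
As a hint, drawing a message could look roughly like the following (the coordinates and message string are arbitrary placeholders):

// Draw a short message and push the buffer to the display:
u8g2_ClearBuffer(&u8g2);
u8g2_SetDrawColor(&u8g2, 1);
u8g2_DrawStr(&u8g2, 0, 0, "Hello from Tock!");
u8g2_SendBuffer(&u8g2);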

Displaying the Current Temperature

As a first step to building our HVAC control user interface, we want the screen to display the current temperature. For this, we consult the sensor application, which exposes this data via IPC.

The controller should regularly sample data from the sensor application. A naive way to implement this is shown in the pseudo-code example below:

void ipc_callback(int temperature) {
  // Print temperature onto screen.
}

int main(void) {
  for (;;) {
    // Issue IPC request...

    // Wait for 250ms between requests:
    libtocksync_alarm_delay_ms(250);
  }
}

This architecture has a few issues, though. For instance, during the call to delay_ms, the application is effectively prevented from doing other useful work. While delay_ms does not spin and allows the kernel, other applications, or even callbacks into the same application to run, it does block the application's main loop.

Another issue with this design is that the ipc_callback function executes complex application code which may, in turn, wait on asynchronous events (callbacks) by inserting a yield point. This means that during the execution of the ipc_callback, other callbacks -- including ipc_callback itself -- may be scheduled again. Consider the following example:

void ipc_callback() {
  // The call to yield allows other callbacks to be scheduled,
  // including `ipc_callback` itself!
  yield();
}

void main() {
  send_ipc_request();

  // This call allows the initial `ipc_callback` to be scheduled:
  yield();
}

While Tock applications are single-threaded and this type of reentrancy is less dangerous than, e.g., UNIX signal handlers, it can still cause issues. For instance, when a function called from within a callback performs a yield internally, the callback can unexpectedly be invoked again in the middle of that function's execution, which can in turn break the function's semantics. Thus, it is good practice to restrict callback handler code to only non-blocking operations.

As such, we instead architect our controller and sensor application interactions using two callbacks and an asynchronous timer. It will work as follows:

  1. The main function will request the sensor app to provide a temperature reading, and thus issue an IPC callback.
  2. The IPC client callback will save the temperature value, and request a timer callback in 250 ms.
  3. The timer callback will request an IPC service call from the sensor app, going back to step 2.

With this architecture, no callback executes any blocking or yielding operations. It also moves all timing and scheduling logic out of the application's main loop, which can instead look like this:

int main(void) {
  // Send initial IPC request

  // Yield in a loop, allowing callbacks to be run:
  for (;;) {
    yield();
  }
}

The final piece of the puzzle is to run blocking code in response to these callbacks, but outside of the callback handlers themselves. For this, Tock provides the yield_for function: it yields the application until a certain condition is met. For instance, the controller application sets the callback_event boolean variable to true every time a callback is run. When we want to wait on this event in our main function, we can use the following logic:

// Shared variable to signal whether a callback has fired:
bool callback_event = false;

void ipc_callback() {
  // Indicate that a callback has fired:
  callback_event = true;
}

int main(void) {
  // Send initial IPC request

  // Yield in a loop, allowing callbacks to be run:
  for (;;) {
    // Wait for callback_event to be true:
    yield_for(&callback_event);
    // Reset callback_event to false for the next iteration:
    callback_event = false;

    // This code is executed whenever one or more callbacks have
    // fired. It can be long running and yield and will not be
    // re-entered:
    // ...
  }
}

EXERCISE: The 03_controller_screen checkpoint already contains the logic outlined above. Extend the main function to, in response to a callback, write the current temperature on a screen. You can do this by extending the update_screen function. You might find it useful to split this code out into a different function.

Finally, we will wire up this application to the OpenThread network to send the current temperature setpoint to all other control units, and retrieve an average value back. We provide some useful scaffolding for this in the next checkpoint, so it is advisable to either switch to that, or copy the commented-out function signatures for OpenThread communication and integration at this point:

CHECKPOINT: 04_controller_thread

We continue here.

Wireless Networking

We have created a device capable of sensing temperature, accepting user input, and displaying data. We now set out to utilize Tock's network capabilities to connect our temperature controller to a central node.

Background

IEEE 802.15.4

To facilitate wireless communication, Tock provides an IEEE 802.15.4 network stack. IEEE 802.15.4 (henceforth abbreviated 15.4) is a physical (PHY) and media access control (MAC) specification purpose-built for low-rate wireless personal area networks. As such, 15.4 fits Tock's use case as an embedded operating system for resource-constrained devices.

Notable examples of popular wireless network technologies utilizing 15.4 include:

  • Thread
  • Zigbee
  • 6LoWPAN
  • ISA100.11a

Tock exposes 15.4 functionality to userspace through a series of command syscalls. Within the kernel, a 15.4 capsule and 15.4 radio driver serve to virtualize radio resources across other kernel endpoints and applications. To provide platform-agnostic 15.4 logic, Tock prescribes a 15.4 radio Hardware Interface Layer (HIL) that must be implemented for each 15.4 radio supported by Tock.

Thread

Thread networking is a low-power and low-latency wireless mesh networking protocol built on a 15.4, 6LoWPAN, UDP network stack. Notably, each Thread node possesses a globally addressable IPv6 address given Thread's adoption of 6LoWPAN (an IPv6 compression scheme). Although we will not exhaustively describe Thread here, we provide a brief overview and pointers to more in-depth resources that further describe Thread.

Thread devices fit into two broadly generalized device types: routers and children. Routers often possess an unconstrained power supply (i.e. they are "plugged in"), while children are often power-constrained, battery-powered devices. Children form a star topology around their respective parent router, while routers maintain a mesh network amongst themselves. This division of responsibilities provides the robustness and self-healing capabilities of a mesh network while not being prohibitive to power-constrained devices.

Further resources on Thread networking can be found here.

Tock and OpenThread

OpenThread is an open-source implementation of the Thread standard and is the de facto standard Thread implementation.

In order for a given platform to support OpenThread, the platform must provide:

  • IEEE 802.15.4 radio
  • Random Number Generator
  • Alarm
  • Nonvolatile Storage

These functionalities are provided to OpenThread using OpenThread's platform abstraction layer (PAL) that a given platform implements as the "glue" between the OpenThread stack and the platform's hardware.

OpenThread is a popular network stack supported by other embedded platforms (e.g. Zephyr). On those platforms, the OpenThread PAL is either exposed directly to hardware or linked directly to the kernel. Tock faces a unique design challenge in supporting OpenThread, as the Tock kernel's threat model explicitly bans external dependencies. Consequently, Tock provides an OpenThread port that runs as an application. This has the added benefit that a bug in OpenThread will not cause the entire system to crash, and that a faulting OpenThread app can be recovered and restarted by the Tock kernel. The libtock-c OpenThread port can be found in the libopenthread directory; see there for further details. libopenthread directly checks out the upstream OpenThread repository and as such possesses the entire set of OpenThread APIs.

Libopenthread

We assume that a single nRF52840DK board is used as a Thread router that also performs certain logic (such as averaging temperature setpoints). In a hosted tutorial setting you will likely be provided with such a board; we do provide instructions for this here.

We now begin implementing an OpenThread app using libopenthread. Because Tock is able to run arbitrary code in userspace, we can make use of this existing library and tie it into the Tock ecosystem. As such, this part of our application works quite similarly to how it would on other platforms.

For the purposes of this tutorial, we provide a hardcoded network key (commissioned joining would be a more secure authentication method). The major steps to join a Thread network include:

  1. Initializing the IP interface (ifconfig up)
  2. Creating a dataset (dataset init new)
  3. Adding the network key, panid, and channel to the dataset
  4. Committing the active dataset (dataset commit active)
  5. Beginning Thread network attachment (thread start)

To send and receive UDP packets, we must also correctly configure UDP. Because these steps are mostly OpenThread-specific, we provide an application that performs the vast majority of them.
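
For orientation, these steps map onto the OpenThread C API roughly as sketched below (the provided checkpoint app performs this for you; instance refers to the otInstance the app initializes, and the dataset contents are placeholders):

#include <string.h>

#include <openthread/dataset.h>
#include <openthread/ip6.h>
#include <openthread/thread.h>

static void join_thread_network(otInstance* instance) {
  // 1. Bring up the IP interface ("ifconfig up"):
  otIp6SetEnabled(instance, true);

  // 2.-4. Build a dataset containing the hardcoded network key, PAN ID, and
  //       channel, then commit it as the active dataset:
  otOperationalDataset dataset;
  memset(&dataset, 0, sizeof(dataset));
  // ... fill in the dataset fields with the tutorial's network parameters ...
  otDatasetSetActive(instance, &dataset);

  // 5. Begin Thread network attachment ("thread start"):
  otThreadSetEnabled(instance, true);
}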

CHECKPOINT: 06_openthread

EXERCISE: Build and flash the openthread app, located under examples/tutorials/thread_network/06_openthread.

Upon successfully flashing the app, launch tockloader listen. Once in the tockloader console, reset the board using:

tock$ reset

If you have successfully compiled and flashed the app, you will see:

tock$ [THREAD] Device IPv6 Addresses: fe80:0:0:0:b4ef:e680:d8ef:475e
[State Change] - Detached.
[State Change] - Child.
Successfully attached to Thread network as a child.

TROUBLESHOOTING

  1. Thread output not printed to the console.

    Run the list command in the process console (via tockloader listen) and you should see:

    tock$ list
     PID    ShortID    Name                Quanta  Syscalls  Restarts  Grants  State
     0      Unique     org.tockos.thread-tutorial.openthread   125      1586         0   6/18   Running
     1      Unique     thread_controller        2       187         0   5/18   Yielded
     2      Unique     org.tockos.thread-tutorial.sensor     0       132         0   3/18   Yielded
    

    If you do not see this, you have not successfully flashed the app.

  2. Thread output does not say successfully joined.

    • First confirm that you have flashed the router with the provided instructions.
    • Attempt resetting your board again.

Congratulations! We now have a networked mote. Next, we must modify the provided implementation to integrate it with the controller app.

EXERCISE: We provide a list of the features and expected behaviors of this app. We leave the implementation of this logic to you. This will utilize a similar IPC framework as between the controller and sensor apps. The specified behavior is as follows.

  1. The openthread app will receive an IPC request (the specified local setpoint will be contained in the first byte of the shared buffer).
  2. The openthread app will multicast this value to all router devices.
  3. The router will average this value against all other received requests and then multicast the averaged value to all children.
  4. Upon receiving the multicasted response, our openthread app will place the received global average into the first byte of the shared IPC buffer. We then must notify the client that the requested service is completed.

More specifically, here is a todo list of things to implement. If you become stuck, we provide a checkpoint with the completed OpenThread app (07_openthread_final). To be implemented:

  1. Add an IPC callback (mirroring the structure of the sensor IPC) and register the service.
  2. Within this callback, copy the local setpoint found in the shared IPC buffer to the variable local_temperature_setpoint.
  3. Send a UDP packet with the local temperature setpoint. You can use the udpSend() method, which multicasts the value stored in the variable local_temperature_setpoint to all routers (see the sketch after this list).
  4. We should ONLY copy the global setpoint into the shared IPC buffer and notify the controller client IF the mote is connected to a Thread network. If we are not connected to a network, we have no way of knowing the global setpoint. (HINT: we can use the statechangedcallback to track when we are attached to a network.)
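
As a rough sketch of items 1 through 3 (names follow the checkpoint's description; notifying the client for item 4 happens later, once a multicast response has arrived and only while we are attached to a network):

static void openthread_ipc_callback(int pid, int len, int buf, void* ud) {
  uint8_t* shared_buffer = (uint8_t*) buf;

  // 1./2. Copy the client's local setpoint out of the shared buffer:
  local_temperature_setpoint = shared_buffer[0];

  // 3. Multicast the local setpoint to all routers:
  udpSend();
}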

CHECKPOINT: 07_openthread_final

We now have a completed OpenThread app that provides an IPC service capable of broadcasting the given mote's desired setpoint, receiving the global average setpoint, and notifying the IPC client.

Tock Robustness

What sets Tock apart from many other embedded operating systems is its security design: applications are generally mutually distrustful. In practice, this means that any misbehavior in one application should not affect other applications. This includes both faults (such as invalid pointer dereferences) and excessive resource utilization.

Take for instance a standard network application that implements all logic within one application unit (i.e. links OpenThread directly to the platform implementation). We consider two illustrative scenarios below of what may go wrong and how Tock guards against such outcomes.

Scenario 1 - Faulting Application

OpenThread is a large code base and interacts with a number of buffers. Furthermore, our OpenThread app adds the complexity of sharing buffers across IPC. Given the challenges in writing C code, it is likely that some aspect of the application will fault at some point in the future.

In a traditional embedded platform, a fault in the OpenThread app or OpenThread code base would in turn result in the platform itself faulting. Tock guards against this by isolating different applications and the kernel using memory protection. Subsequently, a faulting app can be handled by the kernel and the broader system is left unharmed.

In practice, developers have the option to specify how the kernel should handle such faults through a fault policy. Such policies can be user-defined, and Tock includes some by default, such as:

  • StopFaultPolicy: stops the process upon a fault,
  • PanicFaultPolicy: causes the entire platform to panic upon any process fault (useful for debugging), or
  • RestartWithDebugFaultPolicy: restarts a process after it has faulted, and prints a message to the console informing users of this restart.

In this tutorial, our board definition comes pre-configured with the PanicFaultPolicy.

Scenario 2 - Buggy Behavior

Alternatively, suppose a bug in the controller app results in it entering some form of infinite loop (be it deadlock or busy-waiting). On a non-preemptive platform, the whole system would be disabled by this bug. However, because Tock preempts applications, such a buggy application will no longer function, but the broader system remains unharmed.

Tock Kernel

Up to this juncture, we have exclusively worked within userspace. To demonstrate Tock's ability to recover from faulting applications, we will first modify our application and deliberately introduce a fault -- this will cause the kernel to panic and print useful debug information. We then modify the kernel's fault policy to instead restart the application.

We can see the fault policy that is in use with the kernel by looking at the tock/boards/tutorials/nrf52840dk-thread-tutorial/src/main.rs file. It defines a FAULT_RESPONSE variable with an instance of the fault policy that we want to use:

// How should the kernel respond when a process faults.
const FAULT_RESPONSE: kernel::process::PanicFaultPolicy =
    kernel::process::PanicFaultPolicy {};

We can also artificially fault a process through Tock's process console. For instance, when faulting our controller app, this can look like:

tock$ list
 PID    ShortID    Name                Quanta  Syscalls  Restarts  Grants  State
 0      Unique     thread_controller        1       130         0   5/18   Yielded
 1      Unique     org.tockos.thread-tutorial.sensor     0        91         0   3/18   Yielded
tock$ fault thread_controller

---| No debug queue found. You can set it with the DebugQueue component.

panicked at kernel/src/process_standard.rs:362:17:
Process thread_controller had a fault
        Kernel version release-2.1-2908-g9d9b87d83

---| Cortex-M Fault Status |---
No Cortex-M faults detected.

---| App Status |---
𝐀𝐩𝐩: thread_controller   -   [Faulted]
 Events Queued: 0   Syscall Count: 262   Dropped Upcall Count: 0
 Restart Count: 0
 Last Syscall: Yield { which: 1, address: 0x0 }
 Completion Code: None


 ╔═══════════╤══════════════════════════════════════════╗
 ║  Address  │ Region Name    Used | Allocated (bytes)  ║
 ╚0x2000E000═╪══════════════════════════════════════════╝
             │ Grant Ptrs      144
             │ Upcalls         320

Injecting a Fault into the Application

For the purposes of this tutorial, we will dedicate one button (Button 4) to inject an artificial fault into the control application. We can do this, for instance, by simply dereferencing the NULL pointer: even on chips where this is a valid memory location, Tock's memory protection will never expose this address to an application.

EXERCISE: Implement a button callback handler that dereferences the null pointer. You can do so with, for example:

*((char*) NULL) = 42;

Do not forget to register a callback handler for Button 4, too!
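
For instance, inside the existing button callback handler of the controller app, the fault injection itself can be as simple as the following sketch (the button index 3 for Button 4 and the button_num parameter name are assumptions; check how the checkpoint names and numbers its buttons):

// Hypothetical addition to the existing button callback handler:
if (button_num == 3) {
  // Deliberately dereference the NULL pointer; the MPU configuration
  // guarantees that this faults the process.
  *((volatile char*) NULL) = 42;
}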

Now, whenever you press Button 4, your board should print output similar to the above. Because the kernel panics, it will loop forever and blink LED1 in a recognizable pattern. You will need to reset the board to restart the Tock kernel and all applications.

Switching the Fault Handler

With this application fault implemented, we can now switch the kernel's fault policy to restart the offending application, instead of panicking the overall kernel:

  // How should the kernel respond when a process faults.
- const FAULT_RESPONSE: kernel::process::PanicFaultPolicy = kernel::process::PanicFaultPolicy {};
+ const FAULT_RESPONSE: kernel::process::RestartWithDebugFaultPolicy =
+     kernel::process::RestartWithDebugFaultPolicy {};

After making this change, you will need to recompile the kernel, like so:

$ cd tock/boards/tutorials/nrf52840dk-thread-tutorial
$ make
   [...]
   Compiling nrf52_components v0.1.0 (/home/leons/proj/tock/kernel/boards/nordic/nrf52_components)
   Compiling nrf52840dk v0.1.0 (/home/leons/proj/tock/kernel/boards/nordic/nrf52840dk)
    Finished `release` profile [optimized + debuginfo] target(s) in 11.09s
   text    data     bss     dec     hex filename
 233474      36   41448  274958   4320e tock/target/thumbv7em-none-eabi/release/nrf52840dk-thread-tutorial
cb0df7abb1...d47b383aaf  tock/target/thumbv7em-none-eabi/release/nrf52840dk-thread-tutorial.bin

Finally, flash the new kernel using make install:

$ make install
tockloader  flash --address 0x00000 --board nrf52dk --jlink tock/kernel/target/thumbv7em-none-eabi/release/nrf52840dk-thread-tutorial.bin
[INFO   ] Using settings from KNOWN_BOARDS["nrf52dk"]
[STATUS ] Flashing binary to board...
[INFO   ] Finished in 9.901 seconds

Now, when you re-connect to the board, you should see that the application is automatically being restarted every time it encounters a fault:

$ tockloader listen
tock$
Process thread_controller faulted and will be restarted.
[controller] Discovered sensor service: 1
Process thread_controller faulted and will be restarted.
[controller] Discovered sensor service: 1
Process thread_controller faulted and will be restarted.
[controller] Discovered sensor service: 1

Conclusion

This concludes our tutorial on using Tock to build a Thread-connected HVAC control system. We hope you enjoyed it!

We covered the following topics:

  • how to build the Tock kernel, applications, and use Tockloader to install both onto a development board,
  • placing system calls to interact with hardware peripherals, such as the temperature sensor, buttons, a screen, etc.,
  • using existing C-based libraries in the libtock-c userspace library,
  • programming asynchronously and interacting between applications with IPC,
  • and communicating between boards using the OpenThread library running within a Tock process.

Tock is an operating system applicable to a broad set of application domains, such as low-power and security-critical systems. We provide a broad set of guides and documentation:

We also provide some community resources, which you can find here: https://tockos.org/community/

Kernel Boot and Setup

The goal of this module is to make you comfortable with the Tock kernel, how it is structured, how the kernel is set up at boot, and how capsules provide additional kernel functionality.

During this you will:

  1. Learn how Tock uses Rust's memory safety to provide isolation for free
  2. Read the Tock boot sequence, seeing how Tock uses static allocation
  3. Learn about Tock's event-driven programming

The Tock Boot Sequence

The very first thing that runs on a Tock board is an assembly function called initialize_ram_jump_to_main(). Rust requires that memory is configured before any Rust code executes, so this must run first. As the function name implies, control is then transferred to the main() function in the board's main.rs file. Tock intentionally tries to give the board as much control over the operation of the system as possible, hence why there is very little between reset and the board's main function being called.

Open the main.rs file for your board in your favorite editor. This file defines the board's platform: how it boots, what capsules it uses, and what system calls it supports for userland applications.

How is everything organized?

Find the declaration of the platform struct. This is typically called struct Platform or named after the board (it's pretty early in the file). This declares the structure representing the platform. It has many fields, many of which are capsules that make up the board's platform. These fields are resources that the board needs to maintain a reference to for future use, for example for handling system calls or implementing kernel policies.

Recall that everything in the kernel is statically allocated. We can see that here. Every field in the platform struct is a reference to an object with a static lifetime.

Many capsules themselves take a lifetime as a parameter, which is currently always 'static.

The boot process is primarily the construction of this platform structure. Once everything is set up, the board will pass the constructed platform object to kernel::kernel_loop and we're off to the races.

How do things get started?

After RAM initialization, the reset handler invokes the main() function in the board main.rs file. main() is typically rather long as it must set up and configure all of the drivers and capsules the board needs. Many capsules depend on other, lower-layer abstractions that need to be created and initialized as well.

Take a look at the first few lines of main(). The boot sequence generally sets up any low-level microcontroller configuration, initializes the MCU peripherals, and sets up debugging capabilities.

How do capsules get created?

The bulk of main() creates and initializes the capsules which provide the main functionality of the Tock system. For example, to provide userspace applications with the ability to print serial data, boards typically create a console capsule. An example of this looks like:


pub unsafe fn main() {
    ...

    // Create a virtualizer on top of an underlying UART device. Use 115200 as
    // the baud rate.
    let uart_mux = components::console::UartMuxComponent::new(channel, 115200)
        .finalize(components::uart_mux_component_static!());

    // Instantiate the console capsule. This uses the virtualized UART provided
    // by the uart_mux.
    let console = components::console::ConsoleComponent::new(
        board_kernel,
        capsules_core::console::DRIVER_NUM,
        uart_mux,
    )
    .finalize(components::console_component_static!());

    ...
}

Eventually, once all of the capsules have been created, we will populate the platform structure with them:

pub unsafe fn main() {
    ...

    let platform = Platform {
        console: console,
        gpio: gpio,
        ...
    }

}

What Are Components?

When setting up the capsules (such as console), we used objects in the components crate to help. In Tock, components are helper objects that make it easier to correctly create and initialize capsules.

For example, if we look under the hood of the console component, the main initialization of console looks like:

impl Component for ConsoleComponent {
    fn finalize(self, s: Self::StaticInput) -> Console {
        let grant_cap = create_capability!(capabilities::MemoryAllocationCapability);

        let write_buffer = static_init!([u8; DEFAULT_BUF_SIZE], [0; DEFAULT_BUF_SIZE]);
        let read_buffer = static_init!([u8; DEFAULT_BUF_SIZE], [0; DEFAULT_BUF_SIZE]);

        let console_uart = static_init!(
            UartDevice,
            UartDevice::new(self.uart_mux, true)
        );
        // Don't forget to call setup() to register our new UartDevice with the
        // mux!
        console_uart.setup();

        let console = static_init!(
            Console<'static>,
            console::Console::new(
                console_uart,
                write_buffer,
                read_buffer,
                self.board_kernel.create_grant(self.driver_num, &grant_cap),
            )
        );
        // Very easy to forget to set the client reference for callbacks!
        hil::uart::Transmit::set_transmit_client(console_uart, console);
        hil::uart::Receive::set_receive_client(console_uart, console);

        console
    }
}

Much of the code within components is boilerplate that would otherwise be copied for each board, where it is easy to subtly miss an important setup step. Components encapsulate this setup complexity and can be reused on each board Tock supports.

The static_init! macro is simply an easy way to allocate a static variable with a call to new. The first parameter is the type, the second is the expression to produce an instance of the type.

Components end up looking somewhat complex because they can be re-used across multiple boards and different microcontrollers. More detail here.

A brief aside on buffers:

Notice that the console needs both a read and write buffer for it to use. These buffers have to have a 'static lifetime. This is because low-level hardware drivers, especially those that use DMA, require 'static buffers. Since we don't know exactly when the underlying operation will complete, and we must promise that the buffer outlives the operation, we use the one lifetime that is assured to be alive at the end of an operation: 'static. Code whose buffers do not have a 'static lifetime, such as userspace processes, uses capsules like Console by copying data into the console's internal 'static buffers before they are passed on. The buffer passing architecture looks like this:

(Figure: Console/UART buffer lifetimes)

Let's Make a Tock Board!

The code continues on, creating all of the other capsules that are needed by the platform. Towards the end of main(), we've created all of the capsules we need, and it's time to create the actual platform structure (let platform = Platform {...}).

Boards must implement two traits to successfully run the Tock kernel: SyscallDriverLookup and KernelResources.

SyscallDriverLookup

The first, SyscallDriverLookup, is how the kernel maps system calls from userspace to the correct capsule within the kernel. The trait requires one function:

trait SyscallDriverLookup {
    /// Mapping of syscall numbers to capsules.
    fn with_driver<F, R>(&self, driver_num: usize, f: F) -> R
    where
        F: FnOnce(Option<&dyn SyscallDriver>) -> R;
}

The with_driver() function executes the provided function f() by passing it the correct capsule based on the provided driver_num. A brief example of an implementation of SyscallDriverLookup looks like:

impl SyscallDriverLookup for Platform {
    fn with_driver<F, R>(&self, driver_num: usize, f: F) -> R
    where
        F: FnOnce(Option<&dyn kernel::syscall::SyscallDriver>) -> R,
    {
        match driver_num {
            capsules_core::console::DRIVER_NUM => f(Some(self.console)),
            capsules_core::gpio::DRIVER_NUM => f(Some(self.gpio)),
            ...
            _ => f(None),
        }
    }
}

Why require each board to provide this mapping? Why not implement this mapping centrally in the kernel? Tock requires boards to implement this mapping because we consider the assignment of driver numbers to specific capsules a platform-specific decision. While Tock does have a default mapping of driver numbers, boards are not obligated to use it. This flexibility allows boards to expose multiple copies of the same capsule to userspace, for example.

KernelResources

The KernelResources trait is the main method for configuring the operation of the core Tock kernel. Policies such as the syscall mapping described above, syscall filtering, and watchdog timers are configured through this trait. More information is contained in a separate course module.

Loading processes

Once the platform is all set up, the board is responsible for loading processes into memory:

pub unsafe fn main() {
    ...

    kernel::process::load_processes(
        board_kernel,
        chip,
        core::slice::from_raw_parts(
            &_sapps as *const u8,
            &_eapps as *const u8 as usize - &_sapps as *const u8 as usize,
        ),
        core::slice::from_raw_parts_mut(
            &mut _sappmem as *mut u8,
            &_eappmem as *const u8 as usize - &_sappmem as *const u8 as usize,
        ),
        &mut PROCESSES,
        &FAULT_RESPONSE,
        &process_management_capability,
    )
    .unwrap_or_else(|err| {
        debug!("Error loading processes!");
        debug!("{:?}", err);
    });

    ...
}

A Tock process is represented by a kernel::Process struct. In principle, a platform could load processes by any means. In practice, all existing platforms write an array of Tock Binary Format (TBF) entries to flash. The kernel provides the load_processes helper function that takes in a flash address and begins iteratively parsing TBF entries and making Processes.

A brief aside on capabilities:

To call load_processes(), the board had to provide a reference to a &process_management_capability. The load_processes() function internally requires significant direct access to memory, and it should only be called in very specific places. To prevent its misuse (for example from within a capsule), calling it requires a capability to be passed in with the arguments. To create a capability, the calling code must be able to use unsafe. Code (i.e. capsules) which cannot use unsafe therefore has no way to create a capability and thus cannot call the restricted function.

Starting the kernel

Finally, the board passes a reference to the current platform, the chip the platform is built on (used for interrupt and power handling), and optionally an IPC capsule to start the main kernel loop:

board_kernel.kernel_loop(&platform, chip, Some(&platform.ipc), &main_loop_capability);


From here, Tock is initialized, the kernel event loop takes over, and the system enters steady state operation.

Tock Kernel Policies

As a kernel for a security-focused operating system, the Tock kernel is responsible for implementing various policies on how the kernel should handle processes. Examples of the types of questions these policies help answer are: What happens when a process has a hardfault? Is the process restarted? What syscalls are individual processes allowed to call? Which process should run next? Different systems may need to answer these questions differently, and Tock includes a robust platform for configuring each of these policies.

Background on Relevant Tock Design Details

If you are new to this aspect of Tock, this section provides a quick primer on the key aspects of Tock which make it possible to implement process policies.

The KernelResources Trait

The central mechanism for configuring the Tock kernel is through the KernelResources trait. Each board must implement KernelResources and provide the implementation when starting the main kernel loop.

The general structure of the KernelResources trait looks like this:

/// This is the primary method for configuring the kernel for a specific board.
pub trait KernelResources<C: Chip> {
    /// How driver numbers are matched to drivers for system calls.
    type SyscallDriverLookup: SyscallDriverLookup;

    /// System call filtering mechanism.
    type SyscallFilter: SyscallFilter;

    /// Process fault handling mechanism.
    type ProcessFault: ProcessFault;

    /// Context switch callback handler.
    type ContextSwitchCallback: ContextSwitchCallback;

    /// Scheduling algorithm for the kernel.
    type Scheduler: Scheduler<C>;

    /// Timer used to create the timeslices provided to processes.
    type SchedulerTimer: scheduler_timer::SchedulerTimer;

    /// WatchDog timer used to monitor the running of the kernel.
    type WatchDog: watchdog::WatchDog;

    // Getters for each policy/mechanism.

    fn syscall_driver_lookup(&self) -> &Self::SyscallDriverLookup;
    fn syscall_filter(&self) -> &Self::SyscallFilter;
    fn process_fault(&self) -> &Self::ProcessFault;
    fn context_switch_callback(&self) -> &Self::ContextSwitchCallback;
    fn scheduler(&self) -> &Self::Scheduler;
    fn scheduler_timer(&self) -> &Self::SchedulerTimer;
    fn watchdog(&self) -> &Self::WatchDog;
}

Many of these resources can be effectively no-ops by defining them to use the () type. Every board that wants to support processes must provide:

  1. A SyscallDriverLookup, which maps the driver number specified in system calls to the appropriate driver in the kernel.
  2. A Scheduler, which selects the next process to execute. The kernel provides several common schedulers a board can use, or boards can create their own.

Application Identifiers

The Tock kernel can implement different policies based on different levels of trust for a given app. For example, a trusted core app written by the board owner may be granted full privileges, while a third-party app may be limited in which system calls it can use or how many times it can fail and be restarted.

To implement per-process policies, however, the kernel must be able to establish a persistent identifier for a given process. To do this, Tock supports process credentials which are hashes, signatures, or other credentials attached to the end of a process's binary image. With these credentials, the kernel can cryptographically verify that a particular app is trusted. The kernel can then establish a persistent identifier for the app based on its credentials.

A specific process binary can be appended with zero or more credentials. The AppCredentialsPolicy then uses these credentials to establish whether the kernel should run this process. If the credentials policy approves the process, the AppIdPolicy determines what identifier it should have. The Tock kernel design does not impose any restrictions on how applications or processes are identified. For example, it is possible to use a SHA256 hash of the binary as an identifier, or an RSA4096 signature as the identifier. As different use cases will want to use different identifiers, Tock avoids specifying any constraints.

However, long identifiers are difficult to use in software. To enable more efficient handling of application identifiers, Tock also includes mechanisms for a per-process ShortId which is stored in 32 bits. This can be used internally by the kernel to differentiate processes. As with long identifiers, ShortIds are set by the AppIdPolicy (specifically the Compress trait) and are chosen on a per-board basis. The only property the kernel enforces is that ShortIds must be unique among processes installed on the board. For boards that do not need to use ShortIds, the ShortId type includes a LocallyUnique option which upholds the uniqueness invariant without the overhead of choosing distinct numbers for each process.

#![allow(unused)]
fn main() {
pub enum ShortId {
    LocallyUnique,
    Fixed(core::num::NonZeroU32),
}
}

Module Overview

In this module, we are going to experiment with using the KernelResources trait to implement per-process restart policies. We will create our own ProcessFaultPolicy that implements different fault handling behavior based on whether the process included a hash in its credentials footer.

Custom Process Fault Policy

A process fault policy decides what the kernel does with a process when it crashes (i.e. hardfaults). The policy is implemented as a Rust module that implements the following trait:

#![allow(unused)]
fn main() {
pub trait ProcessFaultPolicy {
    /// `process` faulted, now decide what to do.
    fn action(&self, process: &dyn Process) -> process::FaultAction;
}
}

When a process faults, the kernel will call the action() function and then take the returned action on the faulted process. The available actions are:

#![allow(unused)]
fn main() {
pub enum FaultAction {
    /// Generate a `panic!()` with debugging information.
    Panic,
    /// Attempt to restart the process.
    Restart,
    /// Stop the process.
    Stop,
}
}

Let's create a custom process fault policy that restarts signed processes up to a configurable maximum number of times, and immediately stops unsigned processes.

We start by defining a struct for this policy:

#![allow(unused)]
fn main() {
pub struct RestartTrustedAppsFaultPolicy {
    /// Number of times to restart trusted apps.
    threshold: usize,
}
}

We then create a constructor:

#![allow(unused)]
fn main() {
impl RestartTrustedAppsFaultPolicy {
    pub const fn new(threshold: usize) -> RestartTrustedAppsFaultPolicy {
        RestartTrustedAppsFaultPolicy { threshold }
    }
}
}

Now we can add a template implementation for the ProcessFaultPolicy trait:

#![allow(unused)]
fn main() {
impl ProcessFaultPolicy for RestartTrustedAppsFaultPolicy {
    fn action(&self, process: &dyn Process) -> process::FaultAction {
        process::FaultAction::Stop
    }
}
}

To determine whether a process is trusted, we will use its ShortId. The ShortId type is defined as follows:

#![allow(unused)]
fn main() {
pub enum ShortId {
    /// No specific ID, just an abstract value we know is unique.
    LocallyUnique,
    /// Specific 32 bit ID number guaranteed to be unique.
    Fixed(core::num::NonZeroU32),
}
}

If the app has a short ID of ShortId::LocallyUnique then it is untrusted (i.e. the kernel could not validate its signature or it was not signed). If the app has a concrete number as its short ID (i.e. ShortId::Fixed(u32)), then we consider the app to be trusted.

To determine how many times the process has already been restarted we can use process.get_restart_count().

Putting this together, we have an outline for our custom policy:

#![allow(unused)]
fn main() {
use kernel::process;
use kernel::process::Process;
use kernel::process::ProcessFaultPolicy;

pub struct RestartTrustedAppsFaultPolicy {
    /// Number of times to restart trusted apps.
    threshold: usize,
}

impl RestartTrustedAppsFaultPolicy {
    pub const fn new(threshold: usize) -> RestartTrustedAppsFaultPolicy {
        RestartTrustedAppsFaultPolicy { threshold }
    }
}

impl ProcessFaultPolicy for RestartTrustedAppsFaultPolicy {
    fn action(&self, process: &dyn Process) -> process::FaultAction {
        let restart_count = process.get_restart_count();
        let short_id = process.short_app_id();

        // Check if the process is trusted. If so, return the restart action
        // if the restart count is below the threshold. Otherwise return stop.

        // If the process is not trusted, return stop.
        process::FaultAction::Stop
    }
}
}

TASK: Finish implementing the custom process fault policy.

Save your completed custom fault policy in your board's src/ directory as trusted_fault_policy.rs. Then add mod trusted_fault_policy; to the top of the board's main.rs file.

Testing Your Custom Fault Policy

First we need to configure your kernel to use your new fault policy.

  1. Find where your fault_policy was already defined. Update it to use your new policy:

    #![allow(unused)]
    fn main() {
    let fault_policy = static_init!(
        trusted_fault_policy::RestartTrustedAppsFaultPolicy,
        trusted_fault_policy::RestartTrustedAppsFaultPolicy::new(3)
    );
    }
  2. Now we need to configure the process loading mechanism to use this policy for each app.

    #![allow(unused)]
    fn main() {
    kernel::process::load_processes(
        board_kernel,
        chip,
        flash,
        memory,
        &mut PROCESSES,
        fault_policy, // this is where we provide our chosen policy
        &process_management_capability,
    )
    }
  3. Now we can compile the updated kernel and flash it to the board:

    # in your board directory:
    make install
    

Now we need an app to actually crash so we can observe its behavior. Tock has a test app called crash_dummy that causes a hardfault when a button is pressed. Compile that and load it on to the board:

  1. Compile the app:

    cd libtock-c/examples/tests/crash_dummy
    make
    
  2. Install it on the board:

    tockloader install
    

With the new kernel installed and the test app loaded, we can inspect the status of the board. Use tockloader to connect to the serial port:

tockloader listen

Note: if multiple serial port options appear, generally the lower numbered port is what you want to use.

Now we can use the onboard console to inspect which processes we have on the board. Run the list command:

tock$ list
 PID    Name                Quanta  Syscalls  Restarts  Grants  State
 0      crash_dummy              0         6         0   1/15   Yielded

Note that crash_dummy is in the Yielded state. This means it is just waiting for a button press.

Press the first button on your board (it is "Button 1" on the nRF52840-dk). This will cause the process to fault. You won't see any output; because the app was not signed, it was simply stopped. Now run the list command again:

tock$ list
 PID    Name                Quanta  Syscalls  Restarts  Grants  State
 0      crash_dummy              0         6         0   0/15   Faulted

Now the process is in the Faulted state! This means the kernel will not try to run it. Our policy is working! Next we have to verify signed apps so that we can restart trusted apps.

App Credentials

With our custom fault policy, we can implement different responses based on whether an app is trusted or not. Now we need to configure the kernel to verify apps, and check if we trust them or not. For this example we will use a simple credential: a sha256 hash. This credential is simple to create, and serves as a stand-in for more useful credentials such as cryptographic signatures.

This will require a couple of pieces:

  • We need to actually include the hash in our app.
  • We need a mechanism in the kernel to check the hash exists and is valid.

Signing Apps

We can use Tockloader to add a hash to a compiled app.

First, compile the app:

$ cd libtock-c/examples/blink
$ make

Now, add the hash credential:

$ tockloader tbf credential add sha256

It's fine to add the credential to all architectures, or you can specify which TBF to add it to.

To check that the credential was added, we can inspect the TAB:

$ tockloader inspect-tab

You should see output like the following:

$ tockloader inspect-tab
[INFO   ] No TABs passed to tockloader.
[STATUS ] Searching for TABs in subdirectories.
[INFO   ] Using: ['./build/blink.tab']
[STATUS ] Inspecting TABs...
TAB: blink
  build-date: 2023-06-09 21:52:59+00:00
  minimum-tock-kernel-version: 2.0
  tab-version: 1
  included architectures: cortex-m0, cortex-m3, cortex-m4, cortex-m7

 Which TBF to inspect further? cortex-m4

cortex-m4:
  version               : 2
  header_size           :        104         0x68
  total_size            :      16384       0x4000
  checksum              :              0x722e64be
  flags                 :          1          0x1
    enabled             : Yes
    sticky              : No
  TLV: Main (1)                                   [0x10 ]
    init_fn_offset      :         41         0x29
    protected_size      :          0          0x0
    minimum_ram_size    :       5068       0x13cc
  TLV: Program (9)                                [0x20 ]
    init_fn_offset      :         41         0x29
    protected_size      :          0          0x0
    minimum_ram_size    :       5068       0x13cc
    binary_end_offset   :       8360       0x20a8
    app_version         :          0          0x0
  TLV: Package Name (3)                           [0x38 ]
    package_name        : kv_interactive
  TLV: Kernel Version (8)                         [0x4c ]
    kernel_major        : 2
    kernel_minor        : 0
    kernel version      : ^2.0
  TLV: Persistent ACL (7)                         [0x54 ]
    Write ID            :          11          0xb
    Read IDs (1)        : 11
    Access IDs (1)      : 11

TBF Footers
  Footer
    footer_size         :       8024       0x1f58
  Footer TLV: Credentials (128)
    Type: SHA256 (3) ✓ verified
    Length: 32
  Footer TLV: Credentials (128)
    Type: Reserved (0)
    Length: 7976

Note at the bottom, there is a Footer TLV with SHA256 credentials! Because tockloader was able to verify that the hash is correct, it is marked ✓ verified.

SUCCESS: We now have an app with a hash credential!

Verifying Credentials in the Kernel

To have the kernel check that our hash credential is present and valid, we need to add a credential checker before the kernel starts each process.

In main.rs, we need to create the app checker. Tock includes a basic SHA256 credential checker, so we can use that:

#![allow(unused)]
fn main() {
// Create the software-based SHA engine.
let sha = components::sha::ShaSoftware256Component::new()
    .finalize(components::sha_software_256_component_static!());

// Create the credential checker.
let checking_policy = components::appid::checker_sha::AppCheckerSha256Component::new(sha)
    .finalize(components::app_checker_sha256_component_static!());

// Create the AppID assigner.
let assigner = components::appid::assigner_name::AppIdAssignerNamesComponent::new()
    .finalize(components::appid_assigner_names_component_static!());

// Create the process checking machine.
let checker = components::appid::checker::ProcessCheckerMachineComponent::new(checking_policy)
    .finalize(components::process_checker_machine_component_static!());
}

To use the checker, we must switch to asynchronous process loading. Many boards by default use a synchronous loader which iterates through flash discovering processes. However, to verify credentials, we need asynchronous operations during loading and therefore need an asynchronous process loader.

#![allow(unused)]
fn main() {
let process_binary_array = static_init!(
    [Option<kernel::process::ProcessBinary>; NUM_PROCS],
    [None, None, None, None, None, None, None, None]
);

let loader = static_init!(
    kernel::process::SequentialProcessLoaderMachine<
        nrf52840::chip::NRF52<Nrf52840DefaultPeripherals>,
    >,
    kernel::process::SequentialProcessLoaderMachine::new(
        checker,
        &mut *addr_of_mut!(PROCESSES),
        process_binary_array,
        board_kernel,
        chip,
        core::slice::from_raw_parts(
            core::ptr::addr_of!(_sapps),
            core::ptr::addr_of!(_eapps) as usize - core::ptr::addr_of!(_sapps) as usize,
        ),
        core::slice::from_raw_parts_mut(
            core::ptr::addr_of_mut!(_sappmem),
            core::ptr::addr_of!(_eappmem) as usize - core::ptr::addr_of!(_sappmem) as usize,
        ),
        &FAULT_RESPONSE,
        assigner,
        &process_management_capability
    )
);

checker.set_client(loader);

loader.register();
loader.start();
}

(This replaces the kernel::process::load_processes(...) function.)

Compile and install the updated kernel.

SUCCESS: We now have a kernel that can check credentials!

Installing Apps and Verifying Credentials

Now, our kernel will only run an app if it has a valid SHA256 credential. To verify this, recompile and install the blink app but do not add credentials:

cd libtock-c/examples/blink
touch main.c
make
tockloader install --erase

Now, if we list the processes on the board with the process console:

$ tockloader listen
Initialization complete. Entering main loop
NRF52 HW INFO: Variant: AAF0, Part: N52840, Package: QI, Ram: K256, Flash: K1024
tock$ list
 PID    Name                Quanta  Syscalls  Restarts  Grants  State
tock$

You can see our app does not show up. That is because it did not pass the credential check.

We can see this more clearly by updating the kernel to use the ProcessLoadingAsyncClient client. We can implement this client for Platform:

#![allow(unused)]
fn main() {
impl kernel::process::ProcessLoadingAsyncClient for Platform {
    fn process_loaded(&self, result: Result<(), kernel::process::ProcessLoadError>) {
        match result {
            Ok(()) => {},
            Err(e) => {
                kernel::debug!("Process failed to load: {:?}", e);
            }
        }
    }

    fn process_loading_finished(&self) { }
}
}

And then configure it with the loader:

#![allow(unused)]
fn main() {
loader.set_client(platform);
}

Now, re-compile and flash the kernel, and you will see the process load error when the kernel boots.

To fix this, we can add the SHA256 credential.

cd libtock-c/examples/blink
tockloader tbf credential add sha256
tockloader install

Now when we list the processes, we see:

tock$ list
 PID    ShortID    Name                Quanta  Syscalls  Restarts  Grants  State
 0      0x3be6efaa blink                    0       323         0   1/16   Yielded

And we can verify the app is both running and now has a specifically assigned short ID.

Implementing the Privileged Behavior

The default operation is not quite what we want. We want all apps to run, but only credentialed apps to be restarted.

First, we need to allow all apps to run, even if they don't pass the credential check. Doing that is actually quite simple. We just need to modify the credential checker we are using to not require credentials.

In tock/capsules/system/src/process_checker/basic.rs, modify the require_credentials() function to not require credentials:

#![allow(unused)]
fn main() {
impl AppCredentialsChecker<'static> for AppCheckerSha256 {
    fn require_credentials(&self) -> bool {
        false // change from true to false
    }
    ...
}
}

Then recompile and install. Now both processes should run:

tock$ list
 PID    ShortID    Name                Quanta  Syscalls  Restarts  Grants  State
 0      0x3be6efaa blink                    0       193         0   1/16   Yielded
 1      Unique     c_hello                  0         8         0   1/16   Yielded

But note, only the credentialed app (blink) has a specific short ID.

Second, we need to use the presence of a specific short ID in our fault policy to restart only credentialed apps. We just need to check if the short ID is fixed or not:

#![allow(unused)]
fn main() {
impl ProcessFaultPolicy for RestartTrustedAppsFaultPolicy {
    fn action(&self, process: &dyn Process) -> process::FaultAction {
        let restart_count = process.get_restart_count();
        let short_id = process.short_app_id();

        // Check if the process is trusted based on whether it has a fixed short
        // ID. If so, return the restart action if the restart count is below
        // the threshold. Otherwise return stop.
        match short_id {
            kernel::process::ShortId::LocallyUnique => process::FaultAction::Stop,
            kernel::process::ShortId::Fixed(_) => {
                if restart_count < self.threshold {
                    process::FaultAction::Restart
                } else {
                    process::FaultAction::Stop
                }
            }
        }
    }
}
}

That's it! Now we have the full policy: we verify application credentials, and handle process faults accordingly.

Task

Compile and install multiple applications, including the crash dummy app, and verify that only credentialed apps are successfully restarted.

SUCCESS: We now have implemented an end-to-end security policy in Tock!

TicKV Key-Value Store

TicKV is a flash-optimized key-value store written in Rust. Tock supports using TicKV within the OS to enable the kernel and processes to store and retrieve key-value objects in local flash memory.

TicKV and Key-Value Design

This section provides a quick overview of the TicKV and Key-Value stack in Tock.

TicKV Structure and Format

TicKV can store 8-byte keys and values up to 2037 bytes. TicKV is page-based, meaning that each object is stored entirely on a single page in flash.

Note: for familiarity, we use the term "page", but in actuality TicKV uses the size of the smallest erasable region, not necessarily the actual size of a page in the flash memory.

Each object is assigned to a page based on the lowest 16 bits of the key:

object_page_index = (key & 0xFFFF) % <number of pages>
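
As a concrete illustration (a standalone sketch, not code from the TicKV library), the page assignment can be computed directly from the key:

/// Illustrative helper: assign an object to a flash page from its 8-byte key.
fn object_page_index(key: u64, number_of_pages: u64) -> u64 {
    (key & 0xFFFF) % number_of_pages
}

fn main() {
    // A key whose low 16 bits are 0x0007 lands on page 7 of a 64-page region.
    assert_eq!(object_page_index(0xbbba_2623_865c_0007, 64), 7);
}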

Each object in TicKV has the following structure:

0        3            11                  (bytes)
---------------------------------- ... -
| Header | Key        | Value          |
---------------------------------- ... -

The header has this structure:

0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4    (bits)
-------------------------------------------------
| Version=1     |V| res | Length                |
-------------------------------------------------
  • Version: Format of the object, currently this is always 1.
  • Valid (V): 1 if this object is valid, 0 otherwise. This is set to 0 to delete an object.
  • Length (Len): The total length of the object, including the length of the header (3 bytes), key (8 bytes), and value.
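
To make the layout concrete, here is a small sketch that unpacks the 3-byte header based only on the field description above (an illustration, not the library's actual parsing code):

/// Illustration: one version byte, then a valid bit, 3 reserved bits, and a
/// 12-bit total length.
fn parse_header(bytes: [u8; 3]) -> (u8, bool, u16) {
    let version = bytes[0];
    let valid = (bytes[1] & 0x80) != 0;
    let length = (((bytes[1] & 0x0F) as u16) << 8) | bytes[2] as u16;
    (version, valid, length)
}

fn main() {
    // Version 1, valid, total length 24 (3-byte header + 8-byte key + 13-byte value).
    assert_eq!(parse_header([0x01, 0x80, 24]), (1, true, 24));
}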

Subsequent objects either start at the first byte of a page or immediately after another object. If an object cannot fit on the page assigned by its object_page_index, it is stored on the next page with sufficient room.

Objects are updated in TicKV by invalidating the existing object (setting the V flag to 0) and then writing the new value as a new object. This removes the need to erase and re-write an entire page of flash to update a specific value.

TicKV on Tock Format

The previous section describes the generic format of TicKV. Tock builds upon this format by adding a header to the value buffer to add additional features.

The full object format for TicKV objects in Tock has the following structure:

0        3            11  12       16       20              (bytes)
------------------------------------------------ ... ----
| TicKV  | Key        |Ver| Length | Write  |   Value   |
| Header |            |   |        |  ID    |           |
------------------------------------------------ ... ----
<--TicKV Header+Key--><--Tock TicKV Header+Value-...---->
  • Version (Ver): One byte version of the Tock header. Currently 0.
  • Length: Four byte length of the value.
  • Write ID: Four byte identifier for restricting access to this object.

The central addition is the Write ID, a u32 identifying the writer that added the key-value object. The write ID of 0 is reserved for the kernel to use. Each process can be assigned its own write ID (via TBF headers) to use when storing state, such as in a TicKV database. Each process, and the kernel, can then be granted specific read and update permissions based on stored write IDs. If a process has read permission for the ID stored in an object's Write ID field, it can access that key-value object. If a process has update permission for the ID stored in an object's Write ID field, it can change the value of that key-value object.
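
As a plain-logic illustration of that rule (not actual Tock code), the checks amount to the following:

/// Illustration only: a requestor may read an object if its read permissions
/// include the object's stored write ID, and may update it if its update
/// permissions do.
fn can_read(read_ids: &[u32], stored_write_id: u32) -> bool {
    read_ids.contains(&stored_write_id)
}

fn can_update(update_ids: &[u32], stored_write_id: u32) -> bool {
    update_ids.contains(&stored_write_id)
}

fn main() {
    // A process granted read IDs [10, 11] can read objects written with ID 11,
    // but cannot modify them unless 11 is also among its update IDs.
    assert!(can_read(&[10, 11], 11));
    assert!(!can_update(&[10], 11));
}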

Tock Key-Value APIs

Tock supports two key-value oriented APIs: one that uses key-value objects directly and one that requires permissions.

The base interface looks like this. Note: this version is simplified for illustration; the actual version is complete Rust.

#![allow(unused)]
fn main() {
pub trait KV {
    /// Retrieve a value from the store.
    fn get(&self, key: [u8], value: [u8]) -> Result<(), ([u8], [u8], ErrorCode)>;

    /// Insert a key-value object into the store. Overwrite if needed.
    fn set(&self, key: [u8], value: [u8]) -> Result<(), ([u8], [u8], ErrorCode)>;

    /// Insert a key-value object into the store if it doesn't exist.
    fn add(&self, key: [u8], value: [u8]) -> Result<(), ([u8], [u8], ErrorCode)>;

    /// Modify a key-value object in the store if it already exists.
    fn update(&self, key: [u8], value: [u8]) -> Result<(), ([u8], [u8], ErrorCode)>;

    /// Remove a key-value object from the store.
    fn delete(&self, key: [u8]) -> Result<(), ([u8], ErrorCode)>;
}
}

(You can find the full definition in tock/kernel/src/hil/kv.rs.)

To enable access control, we layer an additional permissioned interface, KVPermissions, on top of the base KV interface.

#![allow(unused)]
fn main() {
pub trait KVPermissions {
    /// Retrieve a value from the store.
    fn get(&self, key: [u8], value: [u8], permissions: Perm) -> Result<(), ([u8], [u8], ErrorCode)>;

    /// Insert a key-value object into the store. Overwrite if needed.
    fn set(&self, key: [u8], value: [u8], permissions: Perm) -> Result<(), ([u8], [u8], ErrorCode)>;

    /// Insert a key-value object into the store if it doesn't exist.
    fn add(&self, key: [u8], value: [u8], permissions: Perm) -> Result<(), ([u8], [u8], ErrorCode)>;

    /// Modify a key-value object in the store if it already exists.
    fn update(&self, key: [u8], value: [u8], permissions: Perm) -> Result<(), ([u8], [u8], ErrorCode)>;

    /// Remove a key-value object from the store.
    fn delete(&self, key: [u8], permissions: Perm) -> Result<(), ([u8], ErrorCode)>;
}
}

As you can see, each of these APIs requires a permissions argument so the capsule can verify that the requestor has access to the given K-V object. The write ID stored with each object in the TicKV on Tock format is what these permissions are checked against on later queries.

Key-Value Stack in Tock

The KV stack is structured as follows:

+============================================================+
||                        Userspace                         ||
+============================================================+

----------------------Syscall Interface-----------------------

+------------------------------------------------------------+
|  KV Driver                         (capsules/kv_driver.rs) |
+------------------------------------------------------------+

  hil::kv::KVPermissions

+------------------------------------------------------------+
| Virtualizer                       (capsules/virtual_kv.rs) |
+------------------------------------------------------------+

  hil::kv::KVPermissions

+------------------------------------------------------------+
|  K-V store Permissions  (capsules/kv_store_permissions.rs) |
+------------------------------------------------------------+

  hil::kv::KV

+------------------------------------------------------------+
|  TickVKVStore                 (capsules/tickv_kv_store.rs) |
+------------------------------------------------------------+

  capsules::tickv::KVSystem

+------------------------------------------------------------+
|  TicKV                                 (capsules/tickv.rs) |
+------------------------------------------------------------+
     |             |
 hil::flash        |
             +-----------------+
             | libraries/tickv |
             +-----------------+

Key-Value in Userspace

Userspace applications have access to the K-V store via the kv_driver.rs capsule. This capsule provides an interface for applications to use the upper layer get-set-add-update-delete API.

However, applications need permission to use persistent storage. This is granted via the application's TBF headers.

Applications have three fields for permissions: a write ID, multiple read IDs, and multiple modify IDs.

  • write_id: u32: This u32 specifies the ID used when the application creates a new K-V object. If this is 0, then the application does not have write access. (A write_id of 0 is reserved for the kernel.)
  • read_ids: [u32]: These read IDs specify which k-v objects the application can call get() on. If this is empty or does not include the application's write_id, then the application will not be able to retrieve its own objects.
  • modify_ids: [u32]: These modify IDs specify which k-v objects the application can edit, either by replacing or deleting. Again, if this is empty or does not include the application's write_id, then the application will not be able to update or delete its own objects.

These headers can be added at compilation time with elf2tab or after the TAB has been created using Tockloader.

To have elf2tab add the header, it needs to be run with additional flags:

elf2tab ... --write_id 10 --read_ids 10,11,12 --access_ids 10,11,12 <list of ELFs>

To add it with tockloader (run in the app directory):

tockloader tbf tlv add persistent_acl 10 10,11,12 10,11,12

Using K-V Storage

To use the K-V storage, load the kv-interactive app:

cd libtock-c/examples/tests/kv_interactive
make
tockloader tbf tlv add persistent_acl 10 10,11,12 10,11,12
tockloader install

Now via the terminal, you can create and view k-v objects by typing set, get, or delete.

$ tockloader listen
set mykey hello
Setting mykey=hello
Set key-value
get mykey
Getting mykey
Got value: hello
delete mykey
Deleting mykey

Managing TicKV Database on your Host Computer

You can interact with a board's k-v store via tockloader on your host computer.

View the Contents

To view the entire DB:

tockloader tickv dump

Which should give something like:

[INFO   ] Using jlink channel to communicate with the board.
[INFO   ] Using settings from KNOWN_BOARDS["nrf52dk"]
[STATUS ] Dumping entire TicKV database...
[INFO   ] Using settings from KNOWN_BOARDS["nrf52dk"]
[INFO   ] Dumping entire contents of Tock-style TicKV database.
REGION 0
TicKV Object hash=0xbbba2623865c92c0 version=1 flags=8 length=24 valid=True checksum=0xe83988e0
  Value: 00000000000b000000
  TockTicKV Object version=0 write_id=11 length=0
    Value:

REGION 1
TicKV Object hash=0x57b15d172140dec1 version=1 flags=8 length=28 valid=True checksum=0x32542292
  Value: 00040000000700000038313931
  TockTicKV Object version=0 write_id=7 length=4
    Value: 38313931

REGION 2
TicKV Object hash=0x71a99997e4830ae2 version=1 flags=8 length=28 valid=True checksum=0xbdc01378
  Value: 000400000000000000000000ca
  TockTicKV Object version=0 write_id=0 length=4
    Value: 000000ca

REGION 3
TicKV Object hash=0x3df8e4a919ddb323 version=1 flags=8 length=30 valid=True checksum=0x70121c6a
  Value: 0006000000070000006b6579313233
  TockTicKV Object version=0 write_id=7 length=6
    Value: 6b6579313233

REGION 4
TicKV Object hash=0x7bc9f7ff4f76f244 version=1 flags=8 length=15 valid=True checksum=0x1d7432bb
  Value:
TicKV Object hash=0x9efe426e86d82864 version=1 flags=8 length=79 valid=True checksum=0xd2ac393f
  Value: 001000000000000000a2a4a6a6a8aaacaec2c4c6c6c8caccce000000000000000000000000000000000000000000000000000000000000000000000000000000
  TockTicKV Object version=0 write_id=0 length=16
    Value: a2a4a6a6a8aaacaec2c4c6c6c8caccce

REGION 5
TicKV Object hash=0xa64cf33980ee8805 version=1 flags=8 length=29 valid=True checksum=0xa472da90
  Value: 0005000000070000006d796b6579
  TockTicKV Object version=0 write_id=7 length=5
    Value: 6d796b6579

REGION 6
TicKV Object hash=0xf17b4d392287c6e6 version=1 flags=8 length=79 valid=True checksum=0x854d8de0
  Value: 00030000000700000033343500000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
  TockTicKV Object version=0 write_id=7 length=3
    Value: 333435

...

[INFO   ] Finished in 3.468 seconds

You can see all of the hashed keys and stored values, as well as their headers.

Add a Key-Value Object

You can add a k-v object using tockloader:

tockloader tickv append newkey newvalue

Note that by default tockloader uses a write_id of 0, so that k-v object will only be accessible to the kernel. To specify a specific write_id so an app can access it:

tockloader tickv append appkey appvalue --write-id 10

Wrap-Up

You now know how to use a Key-Value store in your Tock apps as well as in the kernel. Tock's K-V stack supports access control on stored objects, and can be used simultaneously by both the kernel and userspace applications.

Write an Environmental Sensing Application

To start we will focus on creating a sensing application that can collect data by reading sensors.

Setup

You will need the libtock library, which provides the functions for making system calls to the Tock kernel. We will use libtock-c, which you can clone:

git clone https://github.com/tock/libtock-c

Make sure you can compile an application:

cd libtock-c/examples/blink
make

Create a Hello World Application

Create a new folder in the libtock-c/examples folder called simsense. Copy the Makefile from the blink app.

cd examples
mkdir simsense
cp blink/Makefile simsense

Now create main.c in simsense/ and create a basic hello world application:

#include <stdio.h>

int main(void) {
  printf("Hello, World!\n");
}

Background on printf()

The code uses the standard C library routine printf to compose a message using a format string and print it to the console. Let's break down what the code layers are here:

  1. printf() is provided by the C standard library (implemented by newlib). It takes the format string and arguments, and generates an output string from them. To actually write the string to standard out, printf calls _write.
  2. _write (in libtock-c's sys.c) is a wrapper for actually writing to output streams (in this case, standard out a.k.a. the console). It calls the Tock-specific console writing function putnstr.
  3. putnstr (in libtock-c's console.c) buffers the data to be written, calls putnstr_async, and acts as a synchronous wrapper, yielding until the operation is complete.
  4. Finally, putnstr_async (in libtock-c's console.c) performs the actual system calls, calling allow, subscribe, and command to enable the kernel to access the buffer, request a callback when the write is complete, and begin the write operation, respectively.

The application could accomplish all of this by invoking Tock system calls directly, but using libraries makes for a much cleaner interface and allows users to not need to know the inner workings of the OS.

Loading the Application

Okay, let's build and load this simple program.

  1. Erase all other applications from the development board:

    tockloader erase-apps
    
  2. Build the application and load it (Note: tockloader install automatically searches the current working directory and its subdirectories for Tock binaries.)

    make
    tockloader install
    
  3. Check that it worked with a separate terminal:

    tockloader listen
    

    The output should look something like:

    $ tockloader listen
    No device name specified. Using default "tock"
    Using "/dev/cu.usbserial-c098e5130012 - Hail IoT Module - TockOS"
    
    Listening for serial output.
    Hello, World!
    

Checkpoint: You can compile and run your own Hello World app.

Discovering Sensors

Now we want to go beyond printing fixed strings and sample onboard sensors. Because Tock separates apps from the kernel, an application doesn't necessarily know which sensors are available. To start, we will test for various sensors and see which are available.

Background

Tock apps use system calls to communicate with the kernel. Kernel drivers (e.g. for accessing sensors, controlling LEDs, or printing serial messages) are identified by a DRIVER_NUM. Apps can then call Commands for each driver, where commands are identified by a COMMAND_NUM.

To aid with discovery, COMMAND_NUM == 0 is reserved as an existence check. Userspace apps can call a Command syscall with the COMMAND_NUM of 0 and check the return value. If SUCCESS, that driver exists.

Check for Ambient Light Sensor

Let's start by checking if our board has an ambient light sensor. The library interface for ambient light is in the libtock-c/libtock folder.

We can use the ambient_light_exists() function. In main.c of our simsense app:

#include <stdio.h>
#include <ambient_light.h>

int main(void) {
  printf("Checking for ambient light sensor.\n");

  printf("Ambient Light: ");
  if (ambient_light_exists()) {
    printf("Exists!\n");
  } else {
    printf("Does not exist.\n");
  }
}

Compile and run your updated app.

Tip: To see which apps are loaded on a board, run tockloader list.

Checkpoint: You can check if you have an ambient light sensor. What is the result for your hardware?

Check for Additional Sensors

The next step is to check for other sensor types (you might not have a light sensor). Expand your application to check for other sensors, such as temperature and humidity.

Checkpoint: Your app now checks for the presence of several sensors. Which are available on your board?

Sampling Data from Available Sensors

Now that we know which sensors are available, we want to get data from the sensors that exist.

Use the libtock libraries to sample the sensors. For simplicity, you want to use the functions which end in _sync so you can avoid writing the asynchronous code.

Print the readings to the serial console. As a starting point, consider the following code:

int take_measurement(void) {
  int val;
  int ret;

  ret = sensor_sample_sync(&val);
  if (ret == RETURNCODE_SUCCESS) {
    printf("Sensor Reading: %d\n", val);
  }

  return ret;
}

Example: Ambient Light

The interface in libtock/ambient_light.h is used to measure ambient light conditions in lux. imix uses the ISL29035 sensor, but the userland library is abstracted from the details of particular sensors. It contains the function:

#include <ambient_light.h>
int ambient_light_read_intensity_sync(int* lux);

Note that the light reading is written to the location passed as an argument, and the function returns non-zero in the case of an error.

Example: Temperature

The interface in libtock/temperature.h is used to measure ambient temperature in degrees Celsius, times 100. imix uses the SI7021 sensor. It contains the function:

#include <temperature.h>
int temperature_read_sync(int* temperature);

Again, this function returns non-zero in the case of an error.

Checkpoint: Your app prints readings from all available sensors.

Take Multiple Readings

Finally, to complete our sensing application, we want to take multiple sensor readings. Put your sampling code in a loop, and use the delay_ms() function to sample only periodically.

You'll find the interface for timers in libtock/timer.h. The function you'll find useful today is:

#include <timer.h>
void delay_ms(uint32_t ms);

This function sleeps until the specified number of milliseconds have passed, and then returns. So we call this function "synchronous": no further code will run until the delay is complete.

An example loop structure:

int main(void) {
  while (1) {
    take_measurement();
    delay_ms(2000);
  }
}

Checkpoint: Your app prints readings from each sensor multiple times.

To be able to see if our device is sampling periodically without observing the console output, we will add an LED toggle on each sample. This is straightforward in Tock:

#include <led.h>

int main(void) {
  while (1) {
    take_measurement();
    led_toggle(0);
    delay_ms(2000);
  }
}

Checkpoint: You have an environmental sensing application!

Graduation

Now that you have the basics of Tock down, we encourage you to continue to explore and develop with Tock! This book includes a "slimmed down" version of Tock to make it easy to get started, but you will likely want to get a more complete development environment setup to continue. Luckily, this shouldn't be too difficult since you have the tools installed already.

Using the latest kernel

The Tock kernel is actively developed, and you likely want to build upon the latest features. To do this, you should get the Tock source from the repository:

$ git clone https://github.com/tock/tock

While the master branch tends to be relatively stable, you may want to use the latest release instead. Tock is thoroughly tested before a release, so this should be a reliable place to start. To select a release, check out the correct tag. For example, for the 1.4 release this looks like:

$ cd tock
$ git checkout release-1.4

You should use the latest release. Check the releases page for the name of the latest release.

Now, you can compile the board-specific kernel in the Tock repository. For example, to compile the kernel for imix:

$ cd boards/imix
$ make

All of the operations described in the course should work the same way on the main repository.

Using the full selection of apps

The book includes some very minimal apps, and many more can be found in the libtock-c repository. To use this, you should start by cloning the repository:

$ git clone https://github.com/tock/libtock-c

Now you can compile and run apps inside of the examples folder. For instance, you can install the basic "Hello World!" app:

$ cd libtock-c/examples/c_hello
$ make
$ tockloader install

With the libtock-c repository you have access to the full suite of Tock apps, plus additional libraries including BLE and Lua support.

Deprecated Course Modules

These modules were previously developed but may not quite match the current Tock code at this point. That is, the general ideas are still relevant and correct, but the specific code might be somewhat outdated.

We keep these for interested readers, but want to note that it might take a bit more problem solving/updating to follow these steps than originally intended.

Keep the client happy

You, an engineer newly added to a top-secret project in your organization, have been directed to commission a new imix node for your most important client. The directions you receive are terse, but helpful:

On Sunday, Nov 4, 2018, Director Hines wrote:

Welcome to the team, need you to get started right away. The client needs an
imix setup with their two apps -- ASAP. Make sure it is working, we need to keep
this client happy.

- DH

Hmm, ok, not a lot to go on, but luckily in orientation you learned how to flash a kernel and apps on to the imix board, so you are all set for your first assignment.

Poking around, you notice a folder called "important-client". While that is a good start, you also notice that it has two apps inside of it! "Alright!" you are thinking, "My first day is shaping up to go pretty smoothly."

After installing those two apps, which are a little mysterious still, you decide that it would also be a good idea to install an app you are more familiar with: the "blink" app. After doing all of that, you run tockloader list and see the following:

$ tockloader list

No device name specified. Using default "tock"
Using "/dev/ttyUSB1 - imix IoT Module - TockOS"

[App 0]
  Name:                  app2
  Enabled:               True
  Sticky:                False
  Total Size in Flash:   16384 bytes


[App 1]
  Name:                  app1
  Enabled:               True
  Sticky:                False
  Total Size in Flash:   8192 bytes


[App 2]
  Name:                  blink
  Enabled:               True
  Sticky:                False
  Total Size in Flash:   2048 bytes


Finished in 1.959 seconds

Checkpoint

Make sure you have these apps installed correctly and tockloader list produces similar output as shown here.


Great! Now you check that the LED is blinking, and sure enough, no problems there. The blink app was just for testing, so you run tockloader uninstall blink to remove it. So far, so good, Tock! But, before you prepare to head home after a successful day, you start to wonder if maybe this was a little too easy. Also, if you get this wrong, it's not going to look good as the new person on the team.

Looking in the folders for the two applications, you notice a brief description of the apps, and a URL. Ok, maybe you can check if everything is working. After trying things for a little bit, everything seems to be in order. You tell the director the board is ready and head home a little early—you did just successfully complete your first project for a major client after all.

Back at Work the Next Day

Expecting a more challenging project after how well things went yesterday, you are instead greeted by this email:

On Monday, Nov 5, 2018, Director Hines wrote:

I know you are new, but what did you do?? I've been getting calls all morning
from the client, the imix board you gave them ran out battery already!! Are you
sure you set up the board correctly? Fix it, and get it back to me later today.

- DH

Well, that's not good. You already removed the blink app, so it can't be that. What you need is some way to inspect the board and see if something looks like it is going awry. You first try:

$ tockloader listen

to see if any debugging information is being printed. A little, but nothing helpful. Before trying to look around the code, you decided to try sending the board a plea for help:

help

and, surprisingly, it responded!

Welcome to the process console.
Valid commands are: help status list stop start

Ok! Maybe the process console can help. Try the status command:

Total processes: 2
Active processes: 2
Timeslice expirations: 4277

It seems this tool is actually able to inspect the current system and the active processes! But hmmm, it seems there are a lot of "timeslice expirations". From orientation, you remember that processes are allocated only a certain quantum of time to execute, and if they exceed that the kernel forces a context switch back to the kernel. If that is happening a lot, then the board is likely unable to go to sleep! That could explain why the battery is draining so fast!

But which process is at fault? Perhaps we should try another command. Maybe list:

 PID    Name                Quanta  Syscalls  Dropped Callbacks    State
  00	app2                     0       336                  0  Yielded
  01	app1                  8556   1439951                  0  Running

Ok! Now we have the status of individual applications. And aha! We can clearly see the faulty application. From our testing we know that one app detects button presses and one app is transmitting sensor data. Let's see if we can disable the faulty app somehow and see which data packets we are still getting. Going back to the help command, the stop command seems promising:

stop <app name>

Time to Fix the App

After debugging, we now know a couple things about the issue:

  • The name of the faulty app.
  • That it is functionally correct but is for some reason consuming excess CPU cycles.

Using this information, dig into the faulty app.

A Quick Fix

To get the director off your back, you should be able to introduce a simple fix that will reduce wakeups by waiting a bit between samples.

A Better Way

While the quick fix will slow the number of wakeups, you know that you can do better than polling for something like a button press! Tock supports asynchronous operations allowing user processes to subscribe to interrupts.

Looking at the button interface (in button.h), it looks like we'll first have to enable interrupts and then sign up to listen to them.

Once this energy-optimal patch is in place, it'll be time to kick off a triumphant e-mail to the director, and then off to celebrate!

Create a "Hello World" capsule

Now that you've seen how Tock initializes and uses capsules, you're going to write a new one. At the end of this section, your capsule will sample the humidity sensor once a second and print the results as serial output. But you'll start with something simpler: printing "Hello World" to the debug console once on boot.

The imix board configuration you've looked through has a capsule for this tutorial already set up. The capsule is a separate Rust crate located in exercises/capsule. You'll complete this exercise by filling it in.

In addition to a constructor, our capsule has a start function defined that is currently empty. The board configuration calls this function once it has initialized the capsule.

Eventually, the start method will kick off a state machine for periodic humidity readings, but for now, let's just print something to the debug console and return:

#![allow(unused)]
fn main() {
debug!("Hello from the kernel!");
}
Program the kernel and listen for console output:

$ cd [PATH_TO_BOOK]/imix
$ make program
$ tockloader listen
No device name specified.
Using default "tock"
Using "/dev/ttyUSB0 - Imix IoT Module - TockOS"
Listening for serial output.
Hello from the kernel!

Extend your capsule to print "Hello World" every second

In order for your capsule to keep track of time, it will need to depend on another capsule that implements the Alarm interface. We'll have to do something similar for reading the humidity sensor, so this is good practice.

The Alarm HIL includes several traits, Alarm, Client, and Frequency, all in the kernel::hil::time module. You'll use the set_alarm and now methods from the Alarm trait to set an alarm for a particular value of the clock. Note that both methods accept arguments in the alarm's native clock frequency, which is available using the Alarm trait's associated Frequency type:

#![allow(unused)]
fn main() {
// native clock frequency in Hertz
let frequency = <A::Frequency>::frequency();
}

Your capsule already implements the alarm::Client trait so it can receive alarm events. The alarm::Client trait has a single method:

#![allow(unused)]
fn main() {
fn fired(&self)
}

Your capsule should now set an alarm in the start method, print the debug message and set an alarm again when the alarm fires.
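
Below is a minimal sketch of that structure, assuming a capsule struct (named HelloAlarm here as a placeholder) that holds a reference to its alarm and uses the set_alarm, now, and fired interface described above:

// Sketch only: `HelloAlarm` and its `alarm` field are placeholders; the
// Alarm, Client, and Frequency traits are the ones from kernel::hil::time.
use kernel::debug;
use kernel::hil::time::{self, Alarm, Frequency};

pub struct HelloAlarm<'a, A: Alarm> {
    alarm: &'a A,
}

impl<'a, A: Alarm> HelloAlarm<'a, A> {
    pub fn start(&self) {
        // Fire one second from now, in the alarm's native frequency.
        self.alarm
            .set_alarm(self.alarm.now().wrapping_add(<A::Frequency>::frequency()));
    }
}

impl<'a, A: Alarm> time::Client for HelloAlarm<'a, A> {
    fn fired(&self) {
        debug!("Hello World");
        // Re-arm the alarm so the message repeats every second.
        self.alarm
            .set_alarm(self.alarm.now().wrapping_add(<A::Frequency>::frequency()));
    }
}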

Compile and program your new kernel:

$ make program
$ tockloader listen
No device name specified. Using default "tock"
Using "/dev/ttyUSB0 - Imix IoT Module - TockOS"
Listening for serial output.
TOCK_DEBUG(0): /home/alevy/hack/helena/rustconf/tock/boards/imix/src/accelerate.rs:31: Hello World
TOCK_DEBUG(0): /home/alevy/hack/helena/rustconf/tock/boards/imix/src/accelerate.rs:31: Hello World
TOCK_DEBUG(0): /home/alevy/hack/helena/rustconf/tock/boards/imix/src/accelerate.rs:31: Hello World
TOCK_DEBUG(0): /home/alevy/hack/helena/rustconf/tock/boards/imix/src/accelerate.rs:31: Hello World

Sample Solution

Extend your capsule to sample the humidity once a second

The steps for reading the humidity sensor from your capsule are similar to using the alarm. You'll use a capsule that implements the humidity HIL, which includes the HumidityDriver and HumidityClient traits, both in kernel::hil::sensors.

The HumidityDriver trait includes the method read_humidity, which initiates a humidity reading. The HumidityClient trait has a single method for receiving readings:

#![allow(unused)]
fn main() {
fn callback(&self, humidity: usize);
}

Implement logic to initiate a humidity reading every second and report the results.

(Figure: structure of the rustconf capsule.)
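
For the humidity client side, a minimal sketch might look like the following, again using a placeholder struct and the HumidityDriver and HumidityClient traits as described above:

// Sketch only: `HumiditySampler` and its field are placeholders for whatever
// your capsule is named; the traits are the ones from kernel::hil::sensors.
use kernel::debug;
use kernel::hil::sensors::{HumidityClient, HumidityDriver};

pub struct HumiditySampler<'a> {
    humidity: &'a dyn HumidityDriver,
}

impl<'a> HumiditySampler<'a> {
    // Call this from the alarm's `fired()` handler to start a reading.
    fn sample(&self) {
        let _ = self.humidity.read_humidity();
    }
}

impl<'a> HumidityClient for HumiditySampler<'a> {
    fn callback(&self, humidity: usize) {
        debug!("Humidity {}", humidity);
    }
}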

Compile and program your kernel:

$ make program
$ tockloader listen
No device name specified. Using default "tock"
Using "/dev/ttyUSB0 - Imix IoT Module - TockOS"
Listening for serial output.
Humidity 2731
Humidity 2732

Sample solution

Some further questions and directions to explore

Your capsule used the si7021 and virtual alarm. Take a look at the code behind each of these services:

  1. Is the humidity sensor on-chip or a separate chip connected over a bus?

  2. What happens if you request two humidity readings back-to-back?

  3. Is there a limit on how many virtual alarms can be created?

  4. How many virtual alarms does the imix boot sequence create?

Extra credit: Write a virtualization capsule for humidity sensor (∞)

If you have extra time, try writing a virtualization capsule for the Humidity HIL that will allow multiple clients to use it. This is a fairly open ended task, but you might find inspiration in the virtual_alarm and virtual_i2c capsules.

Tock Mini Tutorials

These tutorials walk through how to use some various features of Tock. They are narrower in scope than the course, but try to explain in detail how various Tock apps work.

You will need the libtock-c repository to run these tutorials. You should check out a copy of libtock-c by running:

$ git clone https://github.com/tock/libtock-c

libtock-c contains many example Tock applications as well as the library support code for running C and C++ apps on Tock. If you are looking to develop Tock applications you will likely want to start with an existing app in libtock-c and modify it.

Setup

You need to be able to compile and load the Tock kernel and Tock applications. See the getting started guide for how to get set up.

You also need hardware that supports Tock.

The tutorials assume you have a Tock kernel loaded on your hardware board. To get a kernel installed, follow these steps.

  1. Obtain the Tock Source. You can clone a copy of the Tock repository to get the kernel source:

    $ git clone https://github.com/tock/tock
    $ cd tock
    
  2. Compile Tock. In the root of the Tock directory, compile the kernel for your hardware platform. You can find a list of boards by running make list. For example if your board is imix then:

    $ make list
    $ cd boards/imix
    $ make
    

    If you have another board just replace "imix" with <your-board>

    This will create binaries of the Tock kernel. Tock is compiled with Cargo, a package manager for Rust applications. The first time Tock is built all of the crates must be compiled. On subsequent builds, crates that haven't changed will not have to be rebuilt and the compilation will be faster.

  3. Load the Tock Kernel. The next step is to program the Tock kernel onto your hardware. To load the kernel, run:

    $ make install
    

    in the board directory. Now you have the kernel loaded onto the hardware. The kernel configures the hardware and provides drivers for many hardware resources, but does not actually include any application logic. For that, we need to load an application.

    Note, you only need to program the kernel once. Loading applications does not alter the kernel, and applications can be re-programmed without re-programming the kernel.

With the kernel setup, you are ready to try the mini tutorials.

Tutorials

  1. Blink an LED: Get your first Tock app running.
  2. Button to Printf(): Print to terminal in response to button presses.
  3. BLE Advertisement Scanning: Sense nearby BLE packets.
  4. Sample Sensors and Use Drivers: Use syscalls to interact with kernel drivers.
  5. Inter-process Communication: Tock's IPC mechanism.

Board compatibility matrix

Tutorial #   Supported boards
1            All
2            All with a button
3            Hail and imix
4            All with a light sensor
5            All that support IPC

Blink: Running Your First App

This guide will help you get the blink app running on top of the Tock kernel.

Instructions

  1. Erase any existing applications. First, we need to remove any applications already on the board. Note that Tockloader by default will install any application in addition to whatever is already installed on the board.

    $ tockloader erase-apps
    
  2. Install Blink. Tock supports an "app store" of sorts. That is, tockloader can install apps from a remote repository, including Blink. To do this:

    $ tockloader install blink
    

    You will have to tell Tockloader that you are OK with fetching the app from the Internet.

    Your specific board may require additional arguments, please see the readme in the boards/ folder for more details.

  3. Compile and Install Blink. We can also compile the blink app and load our compiled version. The basic C version of blink is located in the libtock-c repository.

    1. Clone that repository:

      $ cd tock-book
      $ git clone https://github.com/tock/libtock-c
      
    2. Then navigate to examples/blink:

      $ cd libtock-c/examples/blink
      
    3. From there, you should be able to compile it and install it by:

      $ make
      $ tockloader install
      

    When the blink app is installed you should see the LEDs on the board blinking. Congratulations! You have just programmed your first Tock application.

Say "Hello!" On Every Button Press

This tutorial will walk you through calling printf() in response to a button press.

  1. Start a new application. A Tock application in C looks like a typical C application. Let's start with the basics:

    #include <stdio.h>
    
    int main(void) {
      return 0;
    }
    

    You also need a makefile. Copying a makefile from an existing app is the easiest way to get started.

  2. Setup a button callback handler. A button press in Tock is treated as an interrupt, and in an application this translates to a function being called, much like in any other event-driven system. To listen for button presses, we first need to define a callback function.

    #include <stdio.h>
    #include <libtock/interface/button.h>
    
    // Callback for button presses.
    //   btn_num: The index of the button associated with the callback
    //   val: true if pressed, false if released
    static void button_callback(
      returncode_t ret,
      int          btn_num,
      bool         val) {
    }
    
    int main(void) {
      return 0;
    }
    

    All callbacks in libtock are specific to the individual driver, and the values provided depend on how the individual drivers work.

  3. Enable the button interrupts. By default, the interrupts for the buttons are not enabled. To enable them, we make a syscall. Buttons, like other drivers in Tock, follow the convention that applications can ask the kernel how many there are. This is done by calling libtock_button_count().

    #include <stdio.h>
    #include <libtock/interface/button.h>
    
    // Callback for button presses.
    //   btn_num: The index of the button associated with the callback
    //   val: true if pressed, false if released
    static void button_callback(
      returncode_t ret,
      int          btn_num,
      bool         val) {
    }
    
    int main(void) {
      // Ensure there is a button to use.
      int count;
      libtock_button_count(&count);
      if (count < 1) {
        // There are no buttons on this platform.
        printf("Error! No buttons on this platform.\n");
      } else {
        // Enable an interrupt on the first button.
        libtock_button_notify_on_press(0, button_callback);
      }
    
      // Loop forever waiting on button presses.
      while (1) {
        yield();
      }
    }
    

    The button count is checked, and the app only continues if there exists at least one button. To enable the button interrupt, libtock_button_notify_on_press() is called with the index of the button to use. In this example we just use the first button.

    We then need to wait in a loop calling yield() to continue to receive button presses.

  4. Call printf() on button press. To print a message, we call printf() in the callback.

    #include <stdio.h>
    #include <libtock/interface/button.h>
    
    // Callback for button presses.
    //   btn_num: The index of the button associated with the callback
    //   val: true if pressed, false if released
    static void button_callback(
      __attribute__ ((unused)) returncode_t ret,
      __attribute__ ((unused)) int          btn_num,
      bool                                  val) {
      // Only print on the down press.
      if (val) {
        printf("Hello!\n");
      }
    }
    
    int main(void) {
      // Ensure there is a button to use.
      int count;
      libtock_button_count(&count);
      if (count < 1) {
        // There are no buttons on this platform.
        printf("Error! No buttons on this platform.\n");
      } else {
        // Enable an interrupt on the first button.
        libtock_button_notify_on_press(0, button_callback);
      }
    
      // Loop forever waiting on button presses.
      while (1) {
        yield();
      }
    }
    
  5. Run the application. To try this tutorial application, you can find it in the tutorials app folder. See the first tutorial for details on how to compile and install a C application.

    Once installed, when you press the button, you should see "Hello!" printed to the terminal!

Look! A Wild BLE Packet Appeared!

Note! This tutorial will only work on Hail and imix boards.

This tutorial will walk you through getting an app running that scans for BLE advertisements. Most BLE devices broadcast advertisements periodically (usually once a second) to allow smartphones and other devices to discover them. The advertisements typically contain the BLE device's ID and name, as well as which services the device provides, and sometimes raw data as well.

To provide BLE connectivity, several Tock boards use the Nordic nRF51822 as a BLE co-processor. In this configuration, the nRF51822 runs all of the BLE operations and exposes a command interface over a UART bus. Luckily for us, Nordic has defined and implemented the entire interface. Better yet, they made it interoperable with their nRF51 SDK. What this means is any BLE app that would run on the nRF51822 directly can be compiled to run on a different microcontroller, and any function calls that would have interacted with the BLE hardware are instead packaged and sent to the nRF51822 co-processor. Nordic calls this tool "BLE Serialization", and Tock has a port of the serialization libraries that Tock applications can use.

So, with that introduction, let's get going.

  1. Initialize the BLE co-processor. The first step a BLE serialization app must do is initialize the BLE stack on the co-processor. This can be done with Nordic's SDK, but to simplify things Tock supports the Simple BLE library. The goal of simple_ble.c is to wrap the details of the nRF5 SDK and the intricacies of BLE in an easy-to-use library so you can get started creating BLE devices without having to learn the entire spec.

    #include <stdio.h>
    
    #include <simple_ble.h>
    
    // Intervals for advertising and connections.
    // These are some basic settings for BLE devices. However, since we are
    // only interested in scanning, these are not particularly relevant.
    simple_ble_config_t ble_config = {
      .platform_id       = 0x00, // used as 4th octet in device BLE address
      .device_id         = DEVICE_ID_DEFAULT,
      .adv_name          = "Tock",
      .adv_interval      = MSEC_TO_UNITS(500, UNIT_0_625_MS),
      .min_conn_interval = MSEC_TO_UNITS(1000, UNIT_1_25_MS),
      .max_conn_interval = MSEC_TO_UNITS(1250, UNIT_1_25_MS)
    };
    
    int main () {
        printf("[Tutorial] BLE Scanning\n");
    
        // Setup BLE.
        simple_ble_init(&ble_config);
    }
    
  2. Scan for advertisements. With simple_ble this is pretty straightforward.

    int main () {
        printf("[Tutorial] BLE Scanning\n");
    
        // Setup BLE.
        simple_ble_init(&ble_config);
    
        // Scan for advertisements.
        simple_ble_scan_start();
    }
    
  3. Handle the advertisement received event. Just as the main Tock microcontroller can send commands to the nRF co-processor, the co-processor can send events back. When these occur, a variety of callbacks are generated in simple_ble and then passed to users of the library. In this case, we only care about ble_evt_adv_report() which is called on each advertisement reception.

    // Called when each advertisement is received.
    void ble_evt_adv_report (ble_evt_t* p_ble_evt) {
      ble_gap_evt_adv_report_t* adv = (ble_gap_evt_adv_report_t*) &p_ble_evt->evt.gap_evt.params.adv_report;
    }
    

    The ble_evt_adv_report() function is passed a pointer to a ble_evt_t struct. This is a type from the Nordic nRF51 SDK, and more information can be found in the SDK documentation.

  4. Display a message for each advertisement. Once we have the advertisement callback, we can use printf() like normal.

    #include <stdio.h>
    
    #include <libtock/interface/led.h>
    
    // Called when each advertisement is received.
    void ble_evt_adv_report (ble_evt_t* p_ble_evt) {
      ble_gap_evt_adv_report_t* adv = (ble_gap_evt_adv_report_t*) &p_ble_evt->evt.gap_evt.params.adv_report;
    
      // Print some details about the discovered advertisement.
      printf("Recv Advertisement: [%02x:%02x:%02x:%02x:%02x:%02x] RSSI: %d, Len: %d\n",
        adv->peer_addr.addr[5], adv->peer_addr.addr[4], adv->peer_addr.addr[3],
        adv->peer_addr.addr[2], adv->peer_addr.addr[1], adv->peer_addr.addr[0],
        adv->rssi, adv->dlen);
    
      // Also toggle the first LED.
      libtock_led_toggle(0);
    }
    
  5. Handle some BLE annoyances. The last step to getting a working app is to handle some annoyances that come with using BLE serialization and the simple_ble library. Typically, errors generated by the nRF51 SDK are severe and mean there is a significant bug in the code. With serialization, however, messages between the two processors can be corrupted or misframed, causing parsing errors. We can safely ignore these errors and just drop the corrupted packet.

    Additionally, the simple_ble library makes it easy to set the address of a BLE device. However, this functionality only works when running on an actual nRF51822. To disable this, we override the weakly defined ble_address_set() function with an empty function.

    void app_error_fault_handler(uint32_t error_code, uint32_t line_num, uint32_t info) { }
    void ble_address_set () { }
    
  6. Run the app and see the packets! To try this tutorial application, you can find it in the tutorials app folder.

    For any new applications, ensure that the following is in the makefile so that the BLE serialization library is included.

    EXTERN_LIBS += $(TOCK_USERLAND_BASE_DIR)/libnrfserialization
    

Details

This section contains a few notes about the specific versions of BLE serialization used.

Tock currently supports the S130 softdevice version 2.0.0 and SDK 11.0.0.

Reading Sensors From Scratch

Note! This tutorial will only work on boards with a light sensor.

In this tutorial we will cover how to use the syscall interface from applications to kernel drivers, using the running example of reading a light sensor and printing the readings over UART.

Note: This example demonstrates using the low-level system call interface directly to read a sensor. In general, we would not write applications this way. However, this tutorial serves as an illustrative guide for learning more about the Tock system call interface. See the fourth step for the conventional approach.

OK, let's get started.

  1. Setup a generic app for handling asynchronous events. As with most sensors, the light sensor is read asynchronously, and a callback is generated from the kernel to userspace when the reading is ready. Therefore, to use this sensor, our application needs to do two things: 1) setup a callback the kernel driver can call when the reading is ready, and 2) instruct the kernel driver to start the measurement. Let's first sketch this out:

    #include <libtock/tock.h>
    
    #define DRIVER_NUM 0x60002
    
    // Callback when the light sensor has a light intensity measurement ready.
    static void light_callback(int intensity, int unused1, int unused2, void* ud) {
    
    }
    
    int main() {
        // Tell the kernel about the callback.
    
        // Instruct the light sensor driver to begin a reading.
    
        // Wait until the reading is complete.
    
        // Print the resulting value.
    
        return 0;
    }
    
  2. Fill in the application with syscalls. The standard Tock syscalls can be used to actually implement the sketch we made above. We first use the subscribe syscall to inform the kernel about the callback in our application. We then use the command syscall to start the measurement. Finally, we use the yield syscall to wait for the callback to actually fire. We do not need to use allow for this application, and typically it is not required for reading sensors.

    For all syscalls that interact with drivers, the major number is set by the platform. In the case of the light sensor, it is 0x60002. The minor numbers are set by the driver and are specific to the particular driver.

    To save the value from the callback to use in the print statement, we will store it in a global variable.

    #include <stdio.h>
    
    #include <libtock/tock.h>
    
    #define DRIVER_NUM 0x60002
    
    static int sensor_reading;
    
    // Callback when the light sensor has a light intensity measurement ready.
    static void light_callback(int intensity, int unused1, int unused2, void* ud) {
        // Save the reading when the callback fires.
        sensor_reading = intensity;
    }
    
    int main() {
        // Tell the kernel about the callback.
        subscribe(DRIVER_NUM, 0, light_callback, NULL);
    
        // Instruct the light sensor driver to begin a reading.
        command(DRIVER_NUM, 1, 0, 0);
    
        // Wait until the reading is complete.
        yield();
    
        // Print the resulting value.
        printf("Light sensor reading: %d\n", sensor_reading);
    
        return 0;
    }
    
  3. Be smarter about waiting for the callback. While the above application works, it's really relying on the fact that we are only sampling a single sensor. In the current setup, if instead we had two sensors with outstanding commands, the first callback that fired would trigger the yield() call to return and then the printf() would execute. If, for example, sampling the light sensor takes 100 ms, and the new sensor only needs 10 ms, the new sensor's callback would fire first and the printf() would execute with an incorrect value.

    To handle this, we can instead use the yield_for() call, which takes a flag and only returns when that flag has been set. We can then set this flag in the callback to make sure that our printf() only occurs when the light reading has completed.

    #include <stdio.h>
    #include <stdbool.h>
    
    #include <libtock/tock.h>
    
    #define DRIVER_NUM 0x60002
    
    static int sensor_reading;
    static bool sensor_done = false;
    
    // Callback when the light sensor has a light intensity measurement ready.
    static void light_callback(int intensity, int unused1, int unused2, void* ud) {
        // Save the reading when the callback fires.
        sensor_reading = intensity;
    
        // Mark our flag true so that the `yield_for()` returns.
        sensor_done = true;
    }
    
    int main() {
        // Tell the kernel about the callback.
        subscribe(DRIVER_NUM, 0, light_callback, NULL);
    
        // Instruct the light sensor driver to begin a reading.
        command(DRIVER_NUM, 1, 0, 0);
    
        // Wait until the reading is complete.
        yield_for(&sensor_done);
    
        // Print the resulting value.
        printf("Light sensor reading: %d\n", sensor_reading);
    
        return 0;
    }
    
  4. Use the libtock library functions. Normally, applications don't use the bare command and subscribe syscalls. Typically, these are wrapped together into helpful functions inside of libtock and libtock-sync, along with a wrapper that hides the yield_for() to make a synchronous call, which is useful for developing applications quickly. Let's port the light sensing app to use the Tock Standard Library:

    #include <stdio.h>
    
    #include <libtock-sync/sensors/ambient_light.h>
    
    int main() {
        // Take the light sensor measurement synchronously.
        int sensor_reading;
        libtocksync_ambient_light_read_intensity(&sensor_reading);
    
        // Print the resulting value.
        printf("Light sensor reading: %d\n", sensor_reading);
    
        return 0;
    }
    
  5. Explore more sensors. This tutorial highlights only one sensor. See the sensors app for a more complete sensing application.

Friendly Apps Share Data

This tutorial covers how to use Tock's IPC mechanism to allow applications to communicate and share memory.

Tock IPC Basics

IPC in Tock uses a client-server model. An application can provide a service by registering it with the Tock kernel. Each application can only provide a single service, and that service's name is set to the name of the application. Other applications can then discover that service and explicitly share a buffer with the server. Once a client shares a buffer, it can then notify the server to instruct the server to somehow interact with the shared buffer. The protocol for what the server should do with the buffer is service specific and not specified by Tock. Servers can also notify clients, but when and why servers notify clients is service specific.

Example Application

To provide an overview of IPC, we will build an example system consisting of three apps: a random number service, an LED control service, and a main application that uses the two services. While simple, this example demonstrates the core aspects of the IPC mechanism and should run on any hardware platform.

LED Service

Let's start with the LED service. The goal of this service is to allow other applications to use the shared buffer as a command message that instructs the LED service to turn the system's LEDs on or off.

  1. We must tell the kernel that our app wishes to provide a service. All that we have to do is call ipc_register_svc().

    #include "ipc.h"
    
    int main(void) {
      ipc_register_svc(ipc_callback, NULL);
      return 0;
    }
    
  2. We also need that callback (ipc_callback) to handle IPC requests from other applications. This callback will be called when the client app notifies the service.

    static void ipc_callback(int pid, int len, int buf, void* ud) {
      // pid: An identifier for the app that notified us.
      // len: How long the buffer is that the client shared with us.
      // buf: Pointer to the shared buffer.
    }
    
  3. Now let's fill in the callback for the LED application. This is a simplified version for illustration. The full example can be found in the examples/tutorials folder.

    #include <libtock/interface/led.h>
    
    static void ipc_callback(int pid, int len, int buf, void* ud) {
      uint8_t* buffer = (uint8_t*) buf;
    
      // First byte is the command, second byte is the LED index to set,
      // and the third byte is whether the LED should be on or off.
      uint8_t command = buffer[0];
      if (command == 1) {
          uint8_t led_id = buffer[1];
          uint8_t led_state = buffer[2] > 0;
    
          if (led_state == 0) {
            libtock_led_off(led_id);
          } else {
            libtock_led_on(led_id);
          }
    
          // Tell the client that we have finished setting the specified LED.
          ipc_notify_client(pid);
      }
    }
    

RNG Service

The RNG service returns the requested number of random bytes in the shared buffer.

  1. Again, register that this service exists.

    int main(void) {
      ipc_register_svc(ipc_callback, NULL);
      return 0;
    }
    
  2. We also need a callback function for when the client signals the service. The client specifies how many random bytes it wants by setting the first byte of the shared buffer before calling notify.

    #include <string.h>
    
    #include <libtock-sync/peripherals/rng.h>
    
    static void ipc_callback(int pid, int len, int buf, void* ud) {
      uint8_t* buffer = (uint8_t*) buf;
      uint8_t rng[len];
    
      uint8_t number_of_bytes = buffer[0];
    
      // Fill the buffer with random bytes.
      int number_of_bytes_received;
      libtocksync_rng_get_random_bytes(rng, len, number_of_bytes, &number_of_bytes_received);
      memcpy(buffer, rng, number_of_bytes_received);
    
      // Signal the client that we have the number of random bytes requested.
      ipc_notify_client(pid);
    }
    

    This is again not a complete example but illustrates the key aspects.

Main Logic Client Application

The third application uses the two services to randomly control the LEDs on the board. This application is not a server but instead is a client of the two service applications.

  1. When using an IPC service, the first step is to discover the service and record its identifier. This will allow the application to share memory with it and notify it. Services are discovered by the name of the application that provides them. Currently these are set in the application Makefile or by default based on the folder name of the application. The examples in Tock commonly use a Java style naming format.

    int main(void) {
      int led_service = ipc_discover("org.tockos.tutorials.ipc.led");
      int rng_service = ipc_discover("org.tockos.tutorials.ipc.rng");
    
      return 0;
    }
    

    If the requested service is valid and exists, the return value from ipc_discover is the identifier of the found service. If the service cannot be found, an error is returned.
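    For example, a client might guard against a missing service as sketched below. This assumes a failed discovery is reported as a negative return value; check the libtock-c headers for the exact error convention.

    int led_service = ipc_discover("org.tockos.tutorials.ipc.led");
    if (led_service < 0) {
      printf("Error! Could not discover the LED service.\n");
      return -1;
    }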

  2. Next we must share a buffer with each service (the buffer is the only way to share data between processes), and set up a callback that is called when the server notifies us as a client. Once shared, the kernel will permit both applications to read and modify that memory.

    char led_buf[64] __attribute__((aligned(64)));
    char rng_buf[64] __attribute__((aligned(64)));
    
    int main(void) {
      int led_service = ipc_discover("org.tockos.tutorials.ipc.led");
      int rng_service = ipc_discover("org.tockos.tutorials.ipc.rng");
    
      // Setup IPC for LED service
      ipc_register_client_cb(led_service, ipc_callback, NULL);
      ipc_share(led_service, led_buf, 64);
    
      // Setup IPC for RNG service
      ipc_register_client_cb(rng_service, ipc_callback, NULL);
      ipc_share(rng_service, rng_buf, 64);
    
      return 0;
    }
    
  3. We of course need the callback too. For this app we use the yield_for() function to implement the logic synchronously, so all the callback needs to do is set a flag to signal the end of the yield_for().

    bool done = false;
    
    static void ipc_callback(int pid, int len, int arg2, void* ud) {
      done = true;
    }
    
  4. Now we use the two services to implement our application.

    #include <timer.h>
    
    void app() {
      while (1) {
        // Get two random bytes from the RNG service
        done = false;
        rng_buf[0] = 2; // Request two bytes.
        ipc_notify_svc(rng_service);
        yield_for(&done);
    
        // Control the LEDs based on those two bytes.
        done = false;
        led_buf[0] = 1;                     // Control LED command.
        led_buf[1] = rng_buf[0] % NUM_LEDS; // Choose the LED index.
        led_buf[2] = rng_buf[1] & 0x01;     // On or off.
        ipc_notify_svc(led_service);        // Notify to signal LED service.
        yield_for(&done);
    
        delay_ms(500);
      }
    }
    
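    For completeness, one way the pieces above might fit together is sketched below; the full program is in the tutorial folder. The service identifiers and buffers are kept as globals here so that app() can use them.

    int led_service;
    int rng_service;
    
    int main(void) {
      // Discover each service by the name of the app that provides it.
      led_service = ipc_discover("org.tockos.tutorials.ipc.led");
      rng_service = ipc_discover("org.tockos.tutorials.ipc.rng");
    
      // Register the client callback and share a buffer with each service.
      ipc_register_client_cb(led_service, ipc_callback, NULL);
      ipc_share(led_service, led_buf, 64);
      ipc_register_client_cb(rng_service, ipc_callback, NULL);
      ipc_share(rng_service, rng_buf, 64);
    
      // Run the main application loop.
      app();
      return 0;
    }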

Try It Out

To test this out, see the complete apps in the IPC tutorial example folder.

To install all of the apps on a board:

$ cd examples/tutorials/05_ipc
$ tockloader erase-apps
$ pushd led && make && tockloader install && popd
$ pushd rng && make && tockloader install && popd
$ pushd logic && make && tockloader install && popd

You should see the LEDs randomly turning on and off!

Kernel Development Guides

These guides provide walkthroughs for specific kernel development tasks. For example, there is a guide on how to add a new syscall interface for userspace applications. The guides are intended to be general and provide high-level instructions which will have to be adapted for the specific functionality to be added.

Over time, these guides will inevitably become out-of-date in that the specific code examples will fail to compile. However, the general design aspects and considerations should still be relevant even if the specific code details have changed. You are encouraged to use these guides as just that, a general guide, and to copy from up-to-date examples contained in the Tock repository.

List of Guides:

  1. Chip Peripheral Driver
  2. Sensor Driver
  3. System Call Interface
  4. HIL
  5. Virtualizers
  6. Kernel Tests
  7. Component
  8. Optimize Code Size
  9. Porting Tock
  10. Porting From 1.x to 2.x
  11. VSCode Debugging

Implementing a Chip Peripheral Driver

This guide covers how to implement a peripheral driver for a particular microcontroller (MCU). For example, if you wanted to add an analog to digital converter (ADC) driver for the Nordic nRF52840 MCU, you would follow the general steps described in this guide.

Overview

The general steps you will follow are:

  1. Determine the HIL you will implement.
  2. Create a register mapping for the peripheral.
  3. Create a struct for the peripheral.
  4. Implement the HIL interface for the peripheral.
  5. Create the peripheral driver object and cast the registers to the correct memory location.

The guide will walk through how to do each of these steps.

Background

Implementing a chip peripheral driver increases Tock's support for a particular microcontroller and allows capsules and userspace apps to take more advantage of the hardware provided by the MCU. Peripheral drivers for an MCU are generally implemented on an as-needed basis to support a particular use case, and as such the chips in Tock generally do not have all of the peripheral drivers implemented already.

Peripheral drivers are included in Tock as "trusted code" in the kernel. This means that they can use the unsafe keyword (in fact, they must). However, it also means more care must be taken to ensure they are correct. The use of unsafe should be kept to an absolute minimum and only used where absolutely necessary. This guide explains the one use of unsafe that is required. All other uses of unsafe in a peripheral driver will likely be very scrutinized during the pull request review period.

Step-by-Step Guide

The steps from the overview are elaborated on here.

  1. Determine the HIL you will implement.

    The HILs in Tock are the contract between the MCU-specific hardware and the more generic capsules which use the hardware resources. They provide a common interface that is consistent between different microcontrollers, enabling code higher in the stack to use the interfaces without needing to know any details about the underlying hardware. This common interface also allows the same higher-level code to be portable across different microcontrollers. HILs are implemented as traits in Rust.

    All HILs are defined in the kernel/src/hil directory. You should find the HIL that exposes the interface your peripheral can provide. There should only be one HIL that matches your peripheral.

    Note: As of Dec 2019, the hil directory also contains interfaces that are only provided by capsules for other capsules. For example, the ambient light HIL interface is likely not something an MCU would implement.

    It is possible Tock does not currently include a HIL that matches the peripheral you are implementing a driver for. In that case you will also need to create a HIL, which is explained in a different development guide.

    Checkpoint: You have identified the HIL your driver will implement.

  2. Create a register mapping for the peripheral.

    To start implementing the peripheral driver, you must create a new source file within the MCU-specific directory inside of the chips directory (i.e., chips/<chipname>/src). The name of this file generally should match the name of the peripheral in the MCU's datasheet.

    Include the name of this file inside of the lib.rs (or potentially mod.rs) file inside the same directory. This should look like:

    pub mod ast;

    Inside of the new file, you will first need to define the memory-mapped input/output (MMIO) registers that correspond to the peripheral. Different embedded code ecosystems have devised different methods for doing this, and Tock is no different. Tock has a special library and set of Rust macros to make defining the register map straightforward and using the registers intuitive.

    The full register library is here, but to get started, you will first create a structure like this:

    use tock_registers::registers::{ReadOnly, ReadWrite, WriteOnly};
    
    register_structs! {
        XyzPeripheralRegisters {
            /// Control register.
            /// The 'Control' parameter constrains this register to only use
            /// fields from a certain group (defined below in the bitfields
            /// section).
            (0x000 => cr: ReadWrite<u32, Control::Register>),
            // Status register.
            (0x004 => s: ReadOnly<u8, Status::Register>),
            /// spacing between registers in memory
            (0x008 => _reserved),
            /// Another register with no meaningful fields.
            (0x014 => word: ReadWrite<u32>),
    
            // Etc.
    
            // The end of the struct is marked as follows.
            (0x100 => @END),
        }
    }

    You should replace XyzPeripheral with the name of the peripheral you are writing a driver for. Then, for each register defined in the datasheet, you must specify an entry in the macro. For example, a register is defined like:

    (0x000 => cr: ReadWrite<u32, Control::Register>),

    where:

    • 0x000 is the offset (in bytes) of the register from the beginning of the register map.
    • cr is the name of the register in the datasheet.
    • ReadWrite is the access control of the register as defined in the datasheet.
    • u32 is the size of the register.
    • Control::Register maps to the actual bitfields used in the register. You will create this type for this particular peripheral, so you can name this whatever makes sense at this point. Note that it will always end with ::Register due to how Rust macros work. If it doesn't make sense to define the specific bitfields in this register, you can omit this field. For example, an esoteric field in the register map that the implementation does not use likely does not need its bitfields mapped.

    Once the register map is defined, you must specify the bitfields for any registers that you gave a specific type to. This looks like the following:

    register_bitfields! [
        // First parameter is the register width for the bitfields. Can be u8,
        // u16, u32, or u64.
        u32,
    
        // Each subsequent parameter is a register abbreviation, its descriptive
        // name, and its associated bitfields. The descriptive name defines this
        // 'group' of bitfields. Only registers defined as
        // ReadWrite<_, Control::Register> can use these bitfields.
        Control [
            // Bitfields are defined as:
            // name OFFSET(shift) NUMBITS(num) [ /* optional values */ ]
    
            // This is a three-bit field which includes bits 4, 5, and 6
            RANGE OFFSET(4) NUMBITS(3) [
                // Each of these defines a name for a value that the bitfield
                // can be written with or matched against. Note that this set is
                // not exclusive--the field can still be written with arbitrary
                // constants.
                VeryHigh = 0,
                High = 1,
                Low = 2
            ],
    
            // A common case is single-bit bitfields, which usually just mean
            // 'enable' or 'disable' something.
            EN  OFFSET(3) NUMBITS(1) [],
            INT OFFSET(2) NUMBITS(1) []
        ],
    
        // Another example:
        // Status register
        Status [
            TXCOMPLETE  OFFSET(0) NUMBITS(1) [],
            TXINTERRUPT OFFSET(1) NUMBITS(1) [],
            RXCOMPLETE  OFFSET(2) NUMBITS(1) [],
            RXINTERRUPT OFFSET(3) NUMBITS(1) [],
            MODE        OFFSET(4) NUMBITS(3) [
                FullDuplex = 0,
                HalfDuplex = 1,
                Loopback = 2,
                Disabled = 3
            ],
            ERRORCOUNT OFFSET(6) NUMBITS(3) []
        ],
    ]

    The name in each entry of the register_bitfields! [] list must match the register type provided in the register map declaration. Each register that is used in the driver implementation should have its bitfields declared.

    Checkpoint: The register map is correctly described in the driver source file.

  3. Create a struct for the peripheral.

    Each peripheral driver is implemented with a struct which is later used to create an object that can be passed to code that will use this peripheral driver. The actual fields of the struct are very peripheral specific, but should contain any state that the driver needs to correctly function.

    An example struct for a timer peripheral, called the AST in the MCU datasheet, looks like:

    pub struct Ast<'a> {
        registers: StaticRef<AstRegisters>,
        callback: OptionalCell<&'a dyn hil::time::AlarmClient>,
    }

    The struct should contain a reference to the registers defined above (we will explain the StaticRef later). Typically, many drivers respond to certain events (like in this case a timer firing) and therefore need a reference to a client to notify when that event occurs. Notice that the type of the callback handler is specified in the HIL interface.

    Peripheral structs typically need a lifetime for references like the callback client reference. By convention Tock peripheral structs use 'a for this lifetime, and you likely want to copy that as well.

    Think of what state your driver might need to keep around. This could include a direct memory access (DMA) reference, some configuration flags like the baud rate, or buffer indices. See other Tock peripheral drivers for more examples.

    Note: you will most likely need to update this struct as you implement the driver, so to start with this just has to be a best guess.

    Hint: you should avoid keeping any state in the peripheral driver struct that is already stored by the hardware itself. For example, if there is an "enabled" bit in a register, then you do not need an "enabled" flag in the struct. Replicating this state leads to bugs when those values get out of sync, and makes it difficult to update the driver in the future.

    Peripheral driver structs make extensive use of different "cell" types to hold references to various shared state. The general wisdom is that if the value will ever need to be updated, then it needs to be contained in a cell. See the Tock cell documentation for more details on the cell types and when to use which one. In this example, the callback is stored in an OptionalCell, which can contain a value or not (if the callback is not set), and can be updated if the callback needs to change.

    With the struct defined, you should next create a new() function for that struct. This will look like:

    impl Ast {
        const fn new(registers: StaticRef<AstRegisters>) -> Ast {
            Ast {
                registers: registers,
                callback: OptionalCell::empty(),
            }
        }
    }

    Checkpoint: There is a struct for the peripheral that can be created.

  4. Implement the HIL interface for the peripheral.

    With the peripheral driver struct created, now the main work begins. You can now write the actual logic for the peripheral driver that implements the HIL interface you identified earlier. Implementing the HIL interface is done just like implementing a trait in Rust. For example, to implement the Time HIL for the AST:

    impl hil::time::Time for Ast<'a> {
        type Frequency = Freq16KHz;
    
        fn now(&self) -> u32 {
            self.get_counter()
        }
    
        fn max_tics(&self) -> u32 {
            core::u32::MAX
        }
    }

    You should include all of the functions from the HIL and decide how to implement them.

    Some operations will be shared among multiple HIL functions. These should be implemented as functions for the original struct. For example, in the Ast example the HIL function now() uses the get_counter() function. This should be implemented on the main Ast struct:

    impl Ast {
        const fn new(registers: StaticRef<AstRegisters>) -> Ast {
            Ast {
                registers: registers,
                callback: OptionalCell::empty(),
            }
        }
    
        fn get_counter(&self) -> u32 {
            let regs = &*self.registers;
            while self.busy() {}
            regs.cv.read(Value::VALUE)
        }
    }

    Note the get_counter() function also illustrates how to use the register reference and the Tock register library. The register library includes much more detail on the various register operations enabled by the library.

    Checkpoint: All of the functions in the HIL interface have MCU peripheral-specific implementations.

  5. Create the peripheral driver object and cast the registers to the correct memory location.

    The last step is to actually create the object so that the peripheral driver can be used by other code. Start by casting the register map to the memory address where the registers are actually mapped. For example:

    use kernel::common::StaticRef;
    
    const AST_BASE: StaticRef<AstRegisters> =
        unsafe { StaticRef::new(0x400F0800 as *const AstRegisters) };

    StaticRef is a type in Tock designed explicitly for this operation of casting register maps to the correct location in memory. The 0x400F0800 is the address in memory of the start of the registers and this location will be specified by the datasheet.

    Note that creating the StaticRef requires using the unsafe keyword. This is because doing this cast is a fundamentally memory-unsafe operation: this allows whatever is at that address in memory to be accessed through the register interface (which is exposed as a safe interface). In the normal case where the correct memory address is provided there is no concern for system safety as the register interface faithfully represents the underlying hardware. However, suppose an incorrect address was used, and that address actually points to live memory used by the Tock kernel. Now kernel data structures could be altered through the register interface, and this would violate memory safety.

    With the address reference created, we can now create the actual driver object:

    pub static mut AST: Ast = Ast::new(AST_BASE);

    This object will be used by a board's main.rs file to pass, in this case, the driver for the timer hardware to various capsules and other code that needs the underlying timer hardware.

Wrap-Up

Congratulations! You have implemented a peripheral driver for a microcontroller in Tock! We encourage you to submit a pull request to upstream this to the Tock repository.

Implementing a Sensor Driver

This guide describes the steps necessary to implement a capsule in Tock that interfaces with an external IC, like a sensor, memory chip, or display. These are devices which are not part of the same chip as the main microcontroller (MCU), but are on the same board and connected via some physical connection.

Note: to attempt to be generic, this guide will use the term "IC" to refer to the device the driver is for.

Note: "driver" is a bit of an overloaded term in Tock. In this guide, "driver" is used in the generic sense to mean code that interfaces with the external IC.

To illustrate the steps, this guide will use a generic light sensor as the running example. You will need to adapt the generic steps for your particular use case.

Often the goal of an IC driver is to expose an interface to that sensor or other IC to userspace applications. This guide does not cover creating that userspace interface as that is covered in a different guide.

Background

As mentioned, this guide describes creating a capsule. Capsules in Tock are units of Rust code that extend the kernel to add interesting features, like interfacing with new sensors. Capsules are "untrusted", meaning they cannot call unsafe code in Rust and cannot use the unsafe keyword.

Overview

The high-level steps required are:

  1. Create a struct for the IC driver.
  2. Implement the logic to interface with the IC.

Optional:

  1. Provide a HIL interface for the IC driver.
  2. Provide a userspace interface for the IC driver.

Step-by-Step Guide

The steps from the overview are elaborated on here.

  1. Create a struct for the IC driver.

    The driver will be implemented as a capsule, so the first step is to create a new file in the capsules/src directory. The name of this file should be [chipname].rs where [chipname] is the part number of the IC you are writing the driver for. There are several other examples in the capsules folder.

    For our example we will assume the part number is ls1234.

    You then need to add the filename to capsules/src/lib.rs like:

    pub mod ls1234;

    Now inside of the new file you should create a struct with the fields necessary to implement the driver for the IC. In our example we will assume the IC is connected to the MCU with an I2C bus. Your IC might use SPI, UART, or some other standard interface. You will need to adjust how you create the struct based on the interface. You should be able to find examples in the capsules directory to copy from.

    The struct will look something like:

    pub struct Ls1234<'a> {
        i2c: &'a dyn I2CDevice,
        state: Cell<State>,
        buffer: TakeCell<'static, [u8]>,
        client: OptionalCell<&'a dyn Ls1234Client>,
    }

    You can see the resources this driver requires to successfully interface with the light sensor:

    • i2c: This is a reference to the I2C bus that the driver will use to communicate with the IC. Notice in Tock the type is I2CDevice, and no address is provided. This is because the I2CDevice type wraps the address internally, so that the driver code can only communicate with the correct address.

    • state: Often drivers will iterate through various states as they communicate with the IC, and it is common for drivers to keep some state variable to manage this. Our State is defined as an enum, like so:

      #[derive(Copy, Clone, PartialEq)]
      enum State {
          Disabled,
          Enabling,
          ReadingLight,
      }

      Also note that the state variable uses a Cell. This is so that the driver can update the state.

    • buffer: This holds a reference to a buffer of memory the driver will use to send messages over the I2C bus. By convention, these buffers are defined statically in the same file as the driver, but then passed to the driver when the board boots. This provides the board flexibility on the buffer to use, while still allowing the driver to hint at the size required for successful operation. In our case the static buffer is defined as:

      pub static mut BUF: [u8; 3] = [0; 3];

      Note the buffer is wrapped in a TakeCell such that it can be passed to the I2C hardware when necessary, and re-stored in the driver struct when the I2C code returns the buffer.

    • client: This is the callback that will be called after the driver has received a reading from the sensor. All execution is event-based in Tock, so the caller will not block waiting for a sample, but instead will expect a callback via the client when the sample is ready. The driver has to define the type of the callback by defining the Ls1234Client trait in this case:

      pub trait Ls1234Client {
          fn callback(&self, light_reading: usize);
      }

      Note that the client is stored in an OptionalCell. This allows the callback to not be set initially, and configured at bootup.

    Your driver may require other state to be stored as well. You can update this struct as needed for any state required to successfully implement the driver. Note that if the state needs to be updated at runtime it will need to be stored in a cell type. See the cell documentation for more information on the various cell types in Tock.

    Note: your driver should not keep any state in the struct that is also stored by the hardware. This easily leads to bugs when that state becomes out of sync, and makes further development on the driver difficult.

    The last step is to write a function that enables creating an instance of your driver. By convention, the function is called new() and looks something like:

    impl Ls1234<'a> {
        pub fn new(i2c: &'a dyn I2CDevice, buffer: &'static mut [u8]) -> Ls1234<'a> {
            Ls1234 {
                i2c: i2c,
                state: Cell::new(State::Disabled),
                buffer: TakeCell::new(buffer),
                client: OptionalCell::empty(),
            }
        }
    }

    This function will get called by the board's main.rs file when the driver is instantiated. All of the static objects or configuration that the driver requires must be passed in here. In this example, a reference to the I2C device and the static buffer for passing messages must be provided.

    Checkpoint: You have defined the struct which will become the driver for the IC.

  2. Implement the logic to interface with the IC.

    Now, you will actually write the code that interfaces with the IC. This requires extending the impl of the driver struct with additional functions appropriate for your particular IC.

    With our light sensor example, we likely want to write a sample function for reading a light sensor value:

    impl Ls1234<'a> {
        pub fn new(...) -> Ls1234<'a> {...}
    
        pub fn start_light_reading(&self) {...}
    }

    Note that the function name is "start light reading", which is appropriate because of the event-driven, non-blocking nature of the Tock kernel. Actually communicating with the sensor will take some time, and likely requires multiple messages to be sent to and received from the sensor. Therefore, our sample function will not be able to return the result directly. Instead, the reading will be provided in the callback function described earlier.

    The start reading function will likely prepare the message buffer in a way that is IC-specific, then send the command to the IC. A rough example of that operation looks like:

    impl Ls1234<'a> {
        pub fn new(...) -> Ls1234<'a> {...}
    
        pub fn start_light_reading(&self) {
            if self.state.get() == State::Disabled {
                self.buffer.take().map(|buf| {
                    self.i2c.enable();
    
                    // Set the first byte of the buffer to the "on" command.
                    // This is IC-specific and will be described in the IC
                    // datasheet.
                    buf[0] = 0b10100000;
    
                    // Send the command to the chip and update our state
                    // variable.
                    self.i2c.write(buf, 1);
                    self.state.set(State::Enabling);
                });
            }
        }
    }

    The start_light_reading() function kicks off reading the light value from the IC and updates our internal state machine state to mark that we are waiting for the IC to turn on. Now the Ls1234 code is finished for the time being and we now wait for the I2C message to finish being sent. We will know when this has completed based on a callback from the I2C hardware.

    impl I2CClient for Ls1234<'a> {
        fn command_complete(&self, buffer: &'static mut [u8], error: Error) {
            // Handle what happens when the I2C send is complete here.
        }
    }

    In our example, we have to send a new command after turning on the light sensor to actually read a sampled value. We use our state machine here to organize the code as in this example:

    impl I2CClient for Ls1234<'a> {
        fn command_complete(&self, buffer: &'static mut [u8], _error: Error) {
            match self.state.get() {
                State::Enabling => {
                    // Put the read command in the buffer and send it back to
                    // the sensor.
                    buffer[0] = 0b10100001;
                    self.i2c.write_read(buffer, 1, 2);
                    // Update our state machine state.
                    self.state.set(State::ReadingLight);
                }
                _ => {}
            }
        }
    }

    This will send another command to the sensor to read the actual light measurement. We also update our self.state variable because when this I2C transaction finishes the exact same command_complete callback will be called, and we must be able to remember where we are in the process of communicating with the sensor.

    When the read finishes, the command_complete() callback will fire again, and we must handle the result. Since we now have the reading, we can call our client's callback after updating our state machine.

    impl I2CClient for Ls1234<'a> {
        fn command_complete(&self, buffer: &'static mut [u8], _error: Error) {
            match self.state.get() {
                State::Enabling => {
                    // Put the read command in the buffer and send it back to
                    // the sensor.
                    buffer[0] = 0b10100001;
                    self.i2c.write_read(buffer, 1, 2);
                    // Update our state machine state.
                    self.state.set(State::ReadingLight);
                }
                State::ReadingLight => {
                    // Extract the light reading value.
                    let mut reading: u16 = buffer[0] as u16;
                    reading |= (buffer[1] as u16) << 8;
    
                    // Update our state machine state.
                    self.state.set(State::Disabled);
    
                    // Trigger our callback with the result.
                    self.client.map(|client| client.callback(reading));
                }
                _ => {}
            }
        }
    }

    Note: likely the sensor would need to be disabled and returned to a low power state.

    At this point your driver can read the IC and return the information from the IC. For your IC you will likely need to expand on this general template. You can add additional functions to the main struct implementation, and then expand the state machine to implement those functions. You may also need additional resources, like GPIO pins or timer alarms to implement the state machine for the IC. There are examples in the capsules/src folder with drivers that need different resources.

Optional Steps

  1. Provide a HIL interface for the IC driver.

    The driver so far has a very IC-specific interface. That is, any code that uses the driver must be written specifically with that IC in mind. In some cases that may be reasonable, for example if the IC is very unusual or has a very unique set of features. However, many ICs provide similar functionality, and higher-level code can be written without knowing what specific IC is being used on a particular hardware platform.

    To enable this, some IC types have HILs in the kernel/src/hil folder in the sensors.rs file. Drivers can implement one of these HILs and then higher-level code can use the HIL interface rather than a specific IC.

    To implement the HIL, you must implement the HIL trait functions:

    impl AmbientLight for Ls1234<'a> {
        fn set_client(&self, client: &'static dyn AmbientLightClient) {
    
        }
    
        fn read_light_intensity(&self) -> ReturnCode {
    
        }
    }

    The user of the AmbientLight HIL will implement the AmbientLightClient and provide the client through the set_client() function.

  2. Provide a userspace interface for the IC driver.

    Sometimes the IC is needed by userspace, and therefore needs a syscall interface so that userspace applications can use the IC. Please refer to a separate guide on how to implement a userspace interface for a capsule.

Wrap-Up

Congratulations! You have implemented an IC driver as a capsule in Tock! We encourage you to submit a pull request to upstream this to the Tock repository. Tock is happy to accept capsule drivers even if no boards in the Tock repository currently use the driver.

Implementing a System Call Interface for Userspace

This guide provides an overview and walkthrough on how to add a system call interface for userspace applications in Tock. The system call interface exposes some kernel functionality to applications. For example, this could be the ability to sample a new sensor, or use some service like doing AES encryption.

In this guide we will use a running example of providing a userspace interface for a hypothetical water level sensor (the "WS00123" water level sensor). This interface will allow applications to query the current water level, as well as get notified when the water level exceeds a certain threshold.

Setup

This guide assumes you already have existing kernel code that needs a userspace interface. Likely that means there is already a capsule implemented. Please see the other guides if you also need to implement the capsule.

We will assume there is a struct WS00123 {...} object already implemented that includes all of the logic needed to interface with this particular water sensor.

Overview

The high-level steps required are:

  1. Decide on the interface to expose to userspace.
  2. Map the interface to the existing syscalls in Tock.
  3. Create grant space for the application.
  4. Implement the SyscallDriver trait.
  5. Document the interface.
  6. Expose the interface to userspace.
  7. Implement the syscall library in userspace.

Step-by-Step Guide

The steps from the overview are elaborated on here.

  1. Decide on the interface to expose to userspace.

    Creating the interface for userspace means making design decisions on how applications should be able to interface with the kernel capsule. This can have a lasting impact, and is worth spending some time on up-front to avoid implementing an interface that is difficult to use or does not match the needs of applications.

    While there is not a fixed algorithm on how to create such an interface, there are a couple tips that can help with creating the interface:

    • Consider the interface for the same or similar functionality in other systems (e.g. Linux, Contiki, TinyOS, RIOT, etc.). These may have iterated on the design and include useful features.
    • Ignore the specific details of the capsule that exists or how the particular sensor the syscall interface is for works, and instead consider what a user of that capsule might want. That is, if you were writing an application, how would you expect to use the interface? This might be different from how the sensor or other hardware exposes features.
    • Consider other chips that provide similar functionality to the specific one you have. For example, imagine there is a competing water level sensor the "OWlS789". What features do both provide? How would a single interface be usable if a hardware board swapped one out for the other?

    The interface should include both actions (called "commands" in Tock) that the application can take (for example, "sample this sensor now"), as well as events (called subscribe upcalls in Tock) that the kernel can trigger inside of an application (for example, when the sensed value is ready).

    The interface can also include memory sharing between the application and the kernel. For example, if the application wants to receive a number of samples at once, or if the kernel needs to operate on many bytes (say for example encrypting a buffer), then the interface should allow the application to share some of its memory with the kernel to enable that functionality.

  2. Map the interface to the existing syscalls in Tock.

    With a sketch of the interface created, the next step is to map that interface to the specific syscalls that the Tock kernel supports. Tock has four main relevant syscall operations that applications can use when interfacing with the kernel:

    1. allow_readwrite: This lets an application share some of its memory with the kernel, which the kernel can read or write to.

    2. allow_readonly: This lets an application share some of its memory with the kernel, which the kernel can only read.

    3. subscribe: This provides a function pointer that the kernel can use to invoke an upcall on the application.

    4. command: This enables the application to direct the kernel to take some action.

    All four also include a couple other parameters to differentiate different commands, subscriptions, or allows. Refer to the more detailed documentation on the Tock syscalls for more information.

    As the Tock kernel only supports these syscalls, each feature in the design you created in the first step must be mapped to one or more of them. To help, consider these hypothetical interfaces that an application might have for our water sensor:

    • What is the maximum water level? This can be a simple command, where the return value of the command is the maximum water level.
    • What is the current water level? This will require two steps. First, there needs to be a subscribe call where the application can setup an upcall function. The kernel will call this when the water level value has been acquired. Second, there will need to be a command to instruct the kernel to take the water level reading.
    • Take ten water level samples. This will require three steps. First, the application must use a readwrite allow syscall to share a buffer with the kernel large enough to hold 10 water level readings. Then it must setup a subscribe upcall that the kernel will call when the 10 readings are ready (note this upcall function can be the same as in the single sample case). Finally it will use a command to tell the kernel to start sampling.
    • Notify me when the water level exceeds a threshold. A likely way to implement this would be to first require a subscribe syscall for the application to set the function that will get called when the high water level event occurs. Then the application will need to use a command to enable the high water level detection and to optionally set the threshold.

    As you do this, remember that kernel operations, and the above system calls, cannot execute for a long period of time. All of the four system calls are non-blocking. Long-running operations should involve an application starting the operation with a command, then having the kernel signal completion with an upcall.
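    To make the mapping concrete, here is a rough sketch of what the single-sample case might look like from userspace, written in the same raw-syscall style as the light sensor tutorial earlier in this book. The driver number, command number, and subscribe number used here are hypothetical placeholders; the real values are whatever you assign when registering the capsule with the board.

    #include <stdio.h>
    #include <stdbool.h>
    
    #include <libtock/tock.h>
    
    // Hypothetical driver number for the WS00123 capsule.
    #define DRIVER_NUM 0xA0000
    
    static bool done = false;
    static int water_level;
    
    // Upcall invoked by the kernel when a water level reading is ready.
    static void water_level_callback(int level, int unused1, int unused2, void* ud) {
      water_level = level;
      done = true;
    }
    
    int main(void) {
      // Register the upcall with the kernel (subscribe number 0 assumed).
      subscribe(DRIVER_NUM, 0, water_level_callback, NULL);
    
      // Ask the kernel to start a single reading (command number 1 assumed).
      command(DRIVER_NUM, 1, 0, 0);
    
      // Wait for the upcall to fire, then print the result.
      yield_for(&done);
      printf("Water level: %d\n", water_level);
    
      return 0;
    }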

    Checkpoint: You have defined how many allow, subscribe, and command syscalls you need, and what each will do.

  3. Create grant space for the application.

    Grants are regions in a process's memory space that are shared with the kernel. The kernel uses these to store state on behalf of the process. To provide our syscall interface for the water level sensor, we need to setup a grant so that we can store state for all of the requests we may get from processes that want to use the sensor.

    The first step to do this is to create a struct that contains fields for all of the state we want to store for each process that uses our syscall interface. By convention in Tock, this struct is named App, but it could have a different name.

    In our grant we need to store two things: the high water alert threshold and the upcall function pointer the app provided us when it called subscribe. We, however, only have to handle the threshold. As of Tock 2.0, the upcall is stored internally in the kernel. All we have to do is tell the kernel how many different upcall function pointers per app we need to store. In our case we only need to store one. This is provided as a parameter to Grant.

    We can now create an App struct which represents what will be stored in our grant:

    pub struct App {
        threshold: usize,
    }
    }

    Now that we have the type we want to store in the grant region we can create the grant type for it by extending our WS00123 struct:

    #![allow(unused)]
    fn main() {
    pub struct WS00123 {
    	...
        apps: Grant<App, 1>,
    }
    }

    Grant<App, 1> tells the kernel that we want to store the App struct in the grant, as well as one upcall function pointer.

    We will also need the grant region to be created by the board and passed in to us by adding it to the capsule's new() function:

    #![allow(unused)]
    fn main() {
    impl WS00123 {
        pub fn new(
            ...
            grant: Grant<App, 1>,
        ) -> WS00123 {
            WS00123 {
                ...,
                apps: grant,
            }
        }
    }
    }

    Now we have somewhere to store values on a per-process basis.
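
    The guide focuses on the capsule side, but as a rough, hedged sketch of the board side (the capability name and the exact `create_grant()` signature vary across kernel versions, and the other constructor arguments are elided), main.rs would create the grant and hand it to the capsule roughly like this:

    #![allow(unused)]
    fn main() {
    // Hypothetical board-side setup: create the grant region and pass it to
    // the capsule's constructor. `board_kernel` and
    // `memory_allocation_capability` are the usual board setup values.
    let ws00123 = static_init!(
        capsules::ws00123::WS00123,
        capsules::ws00123::WS00123::new(
            ...,
            board_kernel.create_grant(&memory_allocation_capability),
        )
    );
    }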

  4. Implement the SyscallDriver trait.

    The SyscallDriver trait is how a capsule provides implementations for the various syscalls an application might call. The basic framework looks like:

    #![allow(unused)]
    fn main() {
    impl SyscallDriver for WS00123 {
        fn allow_readwrite(
            &self,
            appid: AppId,
            which: usize,
            slice: ReadWriteAppSlice,
        ) -> Result<ReadWriteAppSlice, (ReadWriteAppSlice, ErrorCode)> { }

        fn allow_readonly(
            &self,
            app: AppId,
            which: usize,
            slice: ReadOnlyAppSlice,
        ) -> Result<ReadOnlyAppSlice, (ReadOnlyAppSlice, ErrorCode)> { }

        fn command(
            &self,
            which: usize,
            r2: usize,
            r3: usize,
            caller_id: AppId) -> CommandReturn { }
    
        fn allocate_grant(
            &self,
            process_id: ProcessId) -> Result<(), crate::process::Error>;
    }
    }

    For details on exactly how these methods work and their return values, TRD104 is their reference document. Notice that there is no subscribe() call, as that is handled entirely in the core kernel. However, the kernel will use the upcall slots passed as the second parameter to Grant<_, UPCALLS> to implement subscribe() on your behalf.

    Note: there are default implementations for each of these, so in our water level sensor case we can simply omit the allow_readwrite and allow_readonly calls.

    By Tock convention, every syscall interface must at least support the command call with which == 0. This allows applications to check if the syscall interface is supported on the current platform. The command must return a CommandReturn::success(). If the command is not present, then the kernel automatically has it return a failure with an error code of ErrorCode::NOSUPPORT. For our example, we use the simple case:

    #![allow(unused)]
    fn main() {
    impl SyscallDriver for WS00123 {
        fn command(
            &self,
            which: usize,
            r2: usize,
            r3: usize,
            caller_id: AppId) -> CommandReturn {
                match which {
                    0 => CommandReturn::success(),
                    _ => CommandReturn::failure(ErrorCode::NOSUPPORT),
                }
            }
    }
    }

    We also want to ensure that we implement the allocate_grant() call. This allows the kernel to ask us to set up our grant region, since we know what the App type is and how large it is. We just need the standard implementation that we can directly copy in.

    #![allow(unused)]
    fn main() {
    impl SyscallDriver for WS00123 {
        fn allocate_grant(
            &self,
            process_id: ProcessId) -> Result<(), kernel::process::Error> {
                // Allocation is performed implicitly when the grant region is entered.
                self.apps.enter(process_id, |_, _| {})
        }
    }
    }

    Next we can implement more commands so that the application can direct our capsule to do what it wants. We need two commands: one to sample and one to enable the alert. In both cases the commands must return a CommandReturn, and call functions that likely already exist in the original implementation of the WS00123 sensor. If the functions don't quite exist, then they will need to be added as well.

    #![allow(unused)]
    fn main() {
    impl SyscallDriver for WS00123 {
        /// Command interface.
        ///
        /// ### `command_num`
        ///
        /// - `0`: Return SUCCESS if this driver is included on the platform.
        /// - `1`: Start a water level measurement.
        /// - `2`: Enable the water level detection alert. `r2` is used as the
        ///        height to set as the threshold for detection.
        fn command(
            &self,
            which: usize,
            r2: usize,
            r3: usize,
            caller_id: AppId) -> CommandReturn {
            match which {
                0 => CommandReturn::success(),
                1 => self.start_measurement(caller_id),
                2 => {
                    // Save the threshold for this app.
                    self.apps
                        .enter(caller_id, |app, _| {
                            app.threshold = r2;
                            CommandReturn::success()
                        })
                        .map_or_else(
                            |err| CommandReturn::failure(ErrorCode::from(err)),
                            |_ok| self.set_high_level_detection(),
                        )
                }

                _ => CommandReturn::failure(ErrorCode::NOSUPPORT),
            }
        }
    }
    }

    The last item to add is actually issuing the upcall when the sensor has been sampled or the alert has been triggered. This will need to be added to the existing implementation of the capsule. As an example, if our water sensor were attached to the board over I2C, then we might trigger the upcall in response to a finished I2C command:

    #![allow(unused)]
    fn main() {
    impl i2c::I2CClient for WS00123 {
        fn command_complete(&self, buffer: &'static mut [u8], _error: i2c::Error) {
        	...
        	let app_id = <get saved appid for the app that issued the command>;
        	let measurement = <calculate water level based on returned I2C data>;
    
        	self.apps.enter(app_id, |app, upcalls| {
        	    upcalls.schedule_upcall(0, (0, measurement, 0)).ok();
        	});
        }
    }
    }

    Note: the first argument to schedule_upcall() is the index of the upcall to use. Since we only have one upcall we use 0.

    There may be other cleanup code required to reset state or prepare the sensor for another sample by a different application, but these are the essential elements for implementing the syscall interface.

    Finally, we need to assign our new SyscallDriver implementation a number so that the kernel (and userspace apps) can differentiate this syscall interface from all others that a board supports. By convention this is specified by a global value at the top of the capsule file:

    #![allow(unused)]
    fn main() {
    pub const DRIVER_NUM: usize = 0x80000A;
    }

    The value cannot conflict with other capsules in use, but can be set arbitrarily, particularly for testing. Tock has a procedure for assigning numbers, and you may need to change this number if the capsule is to be merged into the main Tock repository.

    Checkpoint: You have the syscall interface translated from a design to code that can run inside the Tock kernel.

  5. Document the interface.

    A syscall interface is a contract between the kernel and any number of userspace processes, and processes should be able to be developed independently of the kernel. Therefore, it is helpful to document the new syscall interface you made so applications know how to use the various command, subscribe, and allow calls.

    An example markdown file documenting our water level syscall interface is as follows:

    ---
    driver number: 0x80000A
    ---
    
    # Water Level Sensor WS00123
    
    ## Overview
    
    The WS00123 water level sensor can sample the depth of water as well as
    trigger an event if the water level gets too high.
    
    ## Command
    
    - ### Command number: `0`
    
      **Description**: Does the driver exist?
    
      **Argument 1**: unused
    
      **Argument 2**: unused
    
      **Returns**: SUCCESS if it exists, otherwise ENODEVICE
    
    - ### Command number: `1`
    
      **Description**: Initiate a sensor reading. When a reading is ready, a
      callback will be delivered if the process has `subscribed`.
    
      **Argument 1**: unused
    
      **Argument 2**: unused
    
      **Returns**: `EBUSY` if a reading is already pending, `ENOMEM` if there
      isn't sufficient grant memory available, or `SUCCESS` if the sensor reading
      was initiated successfully.
    
    - ### Command number: `2`
    
      **Description**: Enable the high water detection. The callback for the
      alert will be delivered if the process has `subscribed`.
    
      **Argument 1**: The water depth to alert for.
    
      **Argument 2**: unused
    
      **Returns**: `EBUSY` if a reading is already pending, `ENOMEM` if there
      isn't sufficient grant memory available, or `SUCCESS` if the alert was
      enabled successfully.
    
    ## Subscribe
    
    - ### Subscribe number: `0`
    
      **Description**: Subscribe an upcall for sensor readings and alerts.
    
      **Upcall signature**: The upcall's first argument is `0` if this is a
      measurement, and `1` if the callback is an alert. If it is a measurement
      the second value will be the water level.
    
      **Returns**: SUCCESS if the subscribe was successful or ENOMEM if the
      driver failed to allocate memory to store the upcall.
    

    This file should be named <driver_num>_<sensor>.md, or in this case: 80000A_ws00123.md.

  6. Expose the interface to userspace.

    The last kernel implementation step is to let the main kernel know about this new syscall interface so that if an application tries to use it the kernel knows which implementation of SyscallDriver to call. In each board's main.rs file (e.g. boards/hail/src/main.rs) there is an implementation of the SyscallDriverLookup trait where the board can set up which syscall interfaces it supports. To enable our water sensor interface we add a new entry to the match statement there:

    #![allow(unused)]
    fn main() {
    impl SyscallDriverLookup for Hail {
        fn with_driver<F, R>(&self, driver_num: usize, f: F) -> R
        where
            F: FnOnce(Option<&dyn kernel::syscall::SyscallDriver>) -> R,
        {
            match driver_num {
            	...
                capsules::ws00123::DRIVER_NUM => f(Some(self.ws00123)),
                ...
                _ => f(None),
            }
        }
    }
    }

  7. Implement the syscall library in userspace.

    At this point userspace applications can use our new syscall interface and interact with the water sensor. However, applications would have to call all of the syscalls directly, and that is fairly difficult to get right and not user friendly. Therefore, we typically implement a small library layer in userspace to make using the interface easier.

    In this guide we will be setting up a C library, and to do so we will create libtock-c/libtock/ws00123.h and libtock-c/libtock/ws00123.c, both of which will be added to the libtock-c repository. The .h file defines the public interface and constants:

    #pragma once
    
    #include "tock.h"
    
    #ifdef __cplusplus
    extern "C" {
    #endif
    
    #define DRIVER_NUM_WS00123 0x80000A
    
    int ws00123_set_callback(subscribe_cb callback, void* callback_args);
    int ws00123_read_water_level();
    int ws00123_enable_alerts(uint32_t threshold);
    
    #ifdef __cplusplus
    }
    #endif
    

    While the .c file provides the implementations:

    #include "ws00123.h"
    #include "tock.h"
    
    int ws00123_set_callback(subscribe_cb callback, void* callback_args) {
      return subscribe(DRIVER_NUM_WS00123, 0, callback, callback_args);
    }
    
    int ws00123_read_water_level() {
      return command(DRIVER_NUM_WS00123, 1, 0, 0);
    }
    
    int ws00123_enable_alerts(uint32_t threshold) {
      return command(DRIVER_NUM_WS00123, 2, threshold, 0);
    }
    

    This is a very basic implementation of the interface, but it provides some more readable names to the numbers that make up the syscall interface. See other examples in libtock for how to make synchronous versions of asynchronous operations (like reading the sensor).

Wrap-Up

Congratulations! You have added a new API for userspace applications using the Tock syscall interface! We encourage you to submit a pull request to upstream this to the Tock repository.

Implementing a HIL Interface

This guide describes the process of creating a new HIL interface in Tock. "HIL"s are one or more Rust traits that provide a standard and shared interface between pieces of the Tock kernel.

Background

The most canonical use for a HIL is to provide an interface to hardware peripherals to capsules. For example, a HIL for SPI provides an interface between the SPI hardware peripheral in a microcontroller and a capsule that needs a SPI bus for its operation. The HIL is a generic interface, so that same capsule can work on different microcontrollers, as long as each microcontroller implements the SPI HIL.

HILs are also used for other generic kernel interfaces that are relevant to capsules. For example, Tock defines a HIL for a "temperature sensor". While a temperature sensor is not generally a hardware peripheral, a capsule may want to use a generic temperature sensor interface and not be restricted to using a particular temperature sensor driver. Having a HIL allows the capsule to use a generic interface. For consistency, these HILs are also specified in the kernel crate.

Note: In the future Tock will likely split these interface types into separate groups.

HIL development often significantly differs from other development in Tock. In particular, HILs can often be written quickly, but tend to take numerous iterations over relatively long periods of time to refine. This happens for three general reasons:

  1. HILs are intended to be generic, and therefore implementable by a range of different hardware platforms. Designing an interface that works for a range of different hardware takes time and experience with various MCUs, and often incompatibilities aren't discovered until an implementation proves to be difficult (or impossible).
  2. HILs are Rust traits, and Rust traits are reasonably complex and offer a fair bit of flexibility. Balancing both leveraging the flexibility Rust provides and avoiding undue complexity takes time. Again, often trial-and-error is required to settle on how traits should be composed to best capture the interface.
  3. HILs are intended to be generic, and therefore will be used in a variety of different use cases. Ensuring that the HIL is expressive enough for a diverse set of uses takes time. Again, often the set of uses is not known initially, and HILs often have to be revised as new use cases are discovered.

Therefore, we consider HILs to be evolving interfaces.

Tips on HIL Development

As getting a HIL interface "correct" is difficult, Tock tends to prefer starting with simple HIL interfaces that are typically inspired by the hardware used when the HIL is initially created. Trying to generalize a HIL too early can lead to complexity that is never actually warranted, or complexity that didn't actually address a problem.

Also, Tock prefers to only include code (or in this case, HIL interface functions) that is actually in use by the Tock code base. This ensures that there is at least some method of using or testing various components of Tock. This also suggests that initial HIL development should only focus on an interface that is needed by the initial use case.

Overview

The high-level steps required are:

  1. Determine that a new HIL interface is needed.
  2. Create the new HIL in the kernel crate.
  3. Ensure the HIL file includes sufficient documentation.

Step-by-Step Guide

The steps from the overview are elaborated on here.

  1. Determine that a new HIL interface is needed.

    Tock includes a number of existing HIL interfaces, and modifying an existing HIL is preferred to creating a new HIL that is similar to an existing interface. Therefore, you should start by verifying an existing HIL does not already meet your need or could be modified to meet your need.

    This may seem like a straightforward step, but it can be complicated by the fact that different microcontrollers often use different names for similar functionality, and the existing HIL may use a standard name or a name taken from a different microcontroller.

    Also, you can reach out via the email list or slack if you have questions about whether a new HIL is needed or an existing one should suffice.

  2. Create the new HIL in the kernel crate.

    Once you have determined a new HIL is required, you should create the appropriate file in kernel/src/hil. Often the best way to start is to copy an existing HIL that is similar in nature to the interface you are trying to create.

    As noted above, HILs evolve over time, and HILs will be periodically updated as issues are discovered or best practices for HIL design are learned. Unfortunately, this means that copying an existing HIL might lead to "mistakes" that must be remedied before the new HIL can be merged.

    It is often helpful to open a pull request relatively early in the HIL creation process so that any substantial issues can be detected and corrected quickly.

    Tock has a reference guide for dos and don'ts when creating a HIL. Following this guide can help avoid many of the pitfalls that we have run into when creating HILs in the past.

    Tock only uses non-blocking interfaces in the kernel, and HILs should reflect that as well. Therefore, for any operation that will take more than a couple cycles to complete, or would require waiting on a hardware flag, a split interface design should be used with a Client trait that receives a callback when the operation has completed.
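
    As an illustration only (this is not an existing Tock HIL; all names here are hypothetical), a split-phase HIL for something like our earlier water level sensor might pair a start function with a client callback:

    #![allow(unused)]
    fn main() {
    use kernel::ErrorCode;

    /// Hypothetical split-phase HIL: `read_level()` only starts the
    /// operation; the result is delivered later through the client.
    pub trait WaterLevel<'a> {
        /// Set the client that receives the completion callback.
        fn set_client(&self, client: &'a dyn WaterLevelClient);

        /// Start a water level measurement. Returns Err if the sensor is busy.
        fn read_level(&self) -> Result<(), ErrorCode>;
    }

    /// Callback trait implemented by users of the `WaterLevel` HIL.
    pub trait WaterLevelClient {
        /// Called when a measurement started with `read_level()` completes.
        fn level_ready(&self, level: Result<u32, ErrorCode>);
    }
    }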

  3. Ensure the HIL file includes sufficient documentation.

    HIL files should be well commented with Rustdoc style (i.e. ///) comments. These comments are the main source of documentation for HILs.

    As HILs grow in complexity or stability, they will be documented separately to fully explain their design and intended use cases.

Wrap-Up

Congratulations! You have implemented a new HIL in Tock! We encourage you to submit a pull request to upstream this to the Tock repository.

Implementing an in-kernel Virtualization Layer

This guide provides an overview and walkthrough on how to add an in-kernel virtualization layer, such that a given hardware interface can be used simultaneously by multiple kernel capsules, or used simultaneously by a single kernel capsule and userspace. Ideally, virtual interfaces will be available for all hardware interfaces in Tock. Some example interfaces which have already been virtualized include Alarm, SPI, Flash, UART, I2C, ADC, and others.

In this guide we will use a running example of virtualizing a single hardware SPI peripheral and bus for use as a SPI Master.

Setup

This guide assumes you already have existing kernel code that needs to be virtualized. There should be an existing HIL for the resource you are virtualizing.

We will assume there is a trait SpiMaster {...} already defined and implemented that includes all of the logic needed to interface with the underlying SPI. We also assume there is a trait SpiMasterClient that determines the interface a client of the SPI exposes to the underlying resource. In most cases, equivalent traits will represent a necessary precursor to virtualization.

Overview

The high-level steps required are:

  1. Create a capsule file for your virtualizer
  2. Determine what portions of this interface should be virtualized.
  3. Create a MuxXXX struct, which will serve as the lone client of the underlying resource.
  4. Create a VirtualXXXDevice which will implement the underlying HIL trait, so that each client appears to have its own copy of the lone resource.
  5. Implement the logic for queuing requests from capsules.
  6. Implement the logic for dispatching callbacks from the underlying resource to the appropriate client.
  7. Document the interface.
  8. (Optional) Write tests for the virtualization logic.

Step-by-Step Guide

The steps from the overview are elaborated on here.

  1. Create a capsule file for your virtualizer

    This step is easy. Navigate to the capsules/src/ directory and create a new file named virtual_xxx.rs, where xxx is the name of the underlying resource being virtualized. All of the code you will write while following this guide belongs in that file. Additionally, open capsules/src/lib.rs and add pub mod virtual_xxx; to the list of modules.

  2. Determine what portions of this interface should be virtualized

    Generally, this step requires looking at the HIL being virtualized, and determining what portions of the HIL require additional logic to handle multiple concurrent clients. Let's take a look at the SpiMaster HIL:

    #![allow(unused)]
    fn main() {
    pub trait SpiMaster {
        fn set_client(&self, client: &'static dyn SpiMasterClient);
    
        fn init(&self);
        fn is_busy(&self) -> bool;
    
        /// Perform an asynchronous read/write operation, whose
        /// completion is signaled by invoking SpiMasterClient on
        /// the initialized client.
        fn read_write_bytes(
            &self,
            write_buffer: &'static mut [u8],
            read_buffer: Option<&'static mut [u8]>,
            len: usize,
        ) -> ReturnCode;
        fn write_byte(&self, val: u8);
        fn read_byte(&self) -> u8;
        fn read_write_byte(&self, val: u8) -> u8;
    
        /// Tell the SPI peripheral what to use as a chip select pin.
        fn specify_chip_select(&self, cs: Self::ChipSelect);
    
        /// Returns the actual rate set
        fn set_rate(&self, rate: u32) -> u32;
        fn get_rate(&self) -> u32;
        fn set_clock(&self, polarity: ClockPolarity);
        fn get_clock(&self) -> ClockPolarity;
        fn set_phase(&self, phase: ClockPhase);
        fn get_phase(&self) -> ClockPhase;
    
        // These two functions determine what happens to the chip
        // select line between transfers. If hold_low() is called,
        // then the chip select line is held low after transfers
        // complete. If release_low() is called, then the chip select
        // line is brought high after a transfer completes. A "transfer"
        // is any of the read/read_write calls. These functions
        // allow an application to manually control when the
        // CS line is high or low, such that it can issue multi-byte
        // requests with single byte operations.
        fn hold_low(&self);
        fn release_low(&self);
    }
    }

    For some of these functions, it is clear that no virtualization is required. For example, get_rate(), get_phase(), and get_clock() simply request information on the current configuration of the underlying hardware. Implementations of these can simply pass the call straight through the mux.

    Some other functions are not appropriate to expose to virtual clients at all. For example, hold_low(), release_low(), and specify_chip_select() are not suitable for use when the underlying bus is shared. init() does not make sense when it is unclear which client should call it. The mux should queue operations, so clients should not need access to is_busy().

    For other functions, it is clear that virtualization is necessary. For example, it is clear that if multiple clients are using the Mux, they cannot all be allowed to set the rate of the underlying hardware at arbitrary times, as doing so could break an ongoing operation initiated by another client. However, it is important to expose this functionality to clients. Thus set_rate(), set_clock() and set_phase() need to be virtualized, and provided to virtual clients. set_client() needs to be adapted to support multiple simultaneous clients.

    Finally, virtual clients need a way to send and receive on the bus. Single byte writes and reads are typically only used under the assumption that a single client is going to make multiple single byte reads/writes consecutively, and thus are inappropriate to virtualize. Instead, the virtual interface should only include read_write_bytes(), as that encapsulates the entire transaction that would be desired by a virtual client.

    Given that not all parts of the original HIL trait (SpiMaster) are appropriate for virtualization, we should create a new trait in the SPI HIL that will represent the interface provided to clients of the Virtual SPI:

    #![allow(unused)]
    fn main() {
    //! kernel/src/hil/spi.rs
    ...
    /// SPIMasterDevice provides a chip-specific interface to the SPI Master
    /// hardware. The interface wraps the chip select line so that chip drivers
    /// cannot communicate with different SPI devices.
    pub trait SpiMasterDevice {
        /// Perform an asynchronous read/write operation, whose
        /// completion is signaled by invoking SpiMasterClient.read_write_done on
        /// the provided client.
        fn read_write_bytes(
            &self,
            write_buffer: &'static mut [u8],
            read_buffer: Option<&'static mut [u8]>,
            len: usize,
        ) -> ReturnCode;
    
        /// Helper function to set polarity, clock phase, and rate all at once.
        fn configure(&self, cpol: ClockPolarity, cpal: ClockPhase, rate: u32);
        fn set_polarity(&self, cpol: ClockPolarity);
        fn set_phase(&self, cpal: ClockPhase);
        fn set_rate(&self, rate: u32);
    
        fn get_polarity(&self) -> ClockPolarity;
        fn get_phase(&self) -> ClockPhase;
        fn get_rate(&self) -> u32;
    }
    }

    Not all virtualizers will require a new trait to provide virtualization! For example, VirtualMuxDigest exposes the same Digest HIL as the underlying hardware. Same for VirtualAlarm, VirtualUart, and MuxFlash. VirtualI2C does use a different trait, similarly to SPI, and VirtualADC introduces an AdcChannel trait to enable virtualization that is not possible with the ADC interface implemented by hardware.

    There is no fixed algorithm for deciding exactly how to virtualize a given interface, and doing so will require thinking carefully about the requirements of the clients and nature of the underlying resource. Tock's threat model describes several requirements for virtualizers in its virtualization section.

    Note: You should read these requirements!! They discuss things like the confidentiality and fairness requirements for virtualizers.

    Beyond the threat model, you should think carefully about how virtual clients will use the interface, the overhead (in cycles / code size / RAM use) of different approaches, and how the interface will work in the face of multiple concurrent requests. It is also important to consider the potential for two layers of virtualization, when one of the clients of the virtualization capsule is a userspace driver that will also be virtualizing that same resource. In some cases (see: UDP port reservations) special casing the userspace driver may be valuable.

    Frequently the best approach will involve looking for an already virtualized resource that is qualitatively similar to the resource you are working with, and using its virtualization as a template.

  3. Create a MuxXXX struct, which will serve as the lone client of the underlying resource.

    In order to virtualize a hardware resource, we need to create some object that has a reference to the underlying hardware resource and that will hold the multiple "virtual" devices which clients will interact with. For the SPI interface, we call this struct MuxSpiMaster:

    #![allow(unused)]
    fn main() {
    /// The Mux struct manages multiple Spi clients. Each client may have
    /// at most one outstanding Spi request.
    pub struct MuxSpiMaster<'a, Spi: hil::spi::SpiMaster> {
        // The underlying resource being virtualized
        spi: &'a Spi,
    
        // A list of virtual devices which clients will interact with.
        // (See next step for details)
        devices: List<'a, VirtualSpiMasterDevice<'a, Spi>>,
    
        // Additional data storage needed to implement virtualization logic
        inflight: OptionalCell<&'a VirtualSpiMasterDevice<'a, Spi>>,
    }
    }

    Here we use Tock's built-in List type, which is a linked list of statically allocated structures that implement a given trait. This type is required because Tock does not allow heap allocation in the kernel.

    Typically, this struct will implement some number of private helper functions used as part of virtualization, and provide a public constructor. For now we will just implement the constructor:

    #![allow(unused)]
    fn main() {
    impl<'a, Spi: hil::spi::SpiMaster> MuxSpiMaster<'a, Spi> {
        pub const fn new(spi: &'a Spi) -> MuxSpiMaster<'a, Spi> {
            MuxSpiMaster {
                spi: spi,
                devices: List::new(),
                inflight: OptionalCell::empty(),
            }
        }
    
        // TODO: Implement virtualization logic helper functions
    }
    }

  4. Create a VirtualXXXDevice which will implement the underlying HIL trait

    In the previous step you probably noticed the list of virtual devices referencing a VirtualSpiMasterDevice, which we had not created yet. We will define and implement that struct here. In practice, both must be defined simultaneously because each type references the other. The VirtualSpiMasterDevice should have a reference to the mux, a ListLink field (required so that lists of VirtualSpiMasterDevices can be constructed), and other fields for data that needs to be stored for each client of the virtualizer.

    #![allow(unused)]
    fn main() {
    pub struct VirtualSpiMasterDevice<'a, Spi: hil::spi::SpiMaster> {
        //reference to the mux
        mux: &'a MuxSpiMaster<'a, Spi>,
    
        // Pointer to next element in the list of devices
        next: ListLink<'a, VirtualSpiMasterDevice<'a, Spi>>,
    
        // Per client data that must be stored across calls
        chip_select: Cell<Spi::ChipSelect>,
        txbuffer: TakeCell<'static, [u8]>,
        rxbuffer: TakeCell<'static, [u8]>,
        operation: Cell<Op>,
        client: OptionalCell<&'a dyn hil::spi::SpiMasterClient>,
    }
    
    impl<'a, Spi: hil::spi::SpiMaster> VirtualSpiMasterDevice<'a, Spi> {
        pub const fn new(
            mux: &'a MuxSpiMaster<'a, Spi>,
            chip_select: Spi::ChipSelect,
        ) -> VirtualSpiMasterDevice<'a, Spi> {
            VirtualSpiMasterDevice {
                mux: mux,
                chip_select: Cell::new(chip_select),
                txbuffer: TakeCell::empty(),
                rxbuffer: TakeCell::empty(),
                operation: Cell::new(Op::Idle),
                next: ListLink::empty(),
                client: OptionalCell::empty(),
            }
        }
    
        // Most virtualizers will use a set_client method that looks exactly like this
        pub fn set_client(&'a self, client: &'a dyn hil::spi::SpiMasterClient) {
            self.mux.devices.push_head(self);
            self.client.set(client);
        }
    }
    }
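
    For the mux's List to be able to walk these devices, each VirtualSpiMasterDevice also implements the kernel's ListNode trait by exposing its next link. A rough sketch (the exact module path of List/ListNode depends on the kernel version):

    #![allow(unused)]
    fn main() {
    impl<'a, Spi: hil::spi::SpiMaster> ListNode<'a, VirtualSpiMasterDevice<'a, Spi>>
        for VirtualSpiMasterDevice<'a, Spi>
    {
        // Return the link used to chain this device into the mux's list.
        fn next(&'a self) -> &'a ListLink<'a, VirtualSpiMasterDevice<'a, Spi>> {
            &self.next
        }
    }
    }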

    This is the struct that will implement whatever HIL trait we decided on in step 1. In our case, this is the SpiMasterDevice trait:

    #![allow(unused)]
    fn main() {
    // Given that there are multiple types of operations we might need to queue,
    // create an enum that can represent each operation and the data that operation
    // needs to store.
    #[derive(Copy, Clone, PartialEq)]
    enum Op {
        Idle,
        Configure(hil::spi::ClockPolarity, hil::spi::ClockPhase, u32),
        ReadWriteBytes(usize),
        SetPolarity(hil::spi::ClockPolarity),
        SetPhase(hil::spi::ClockPhase),
        SetRate(u32),
    }
    
    impl<Spi: hil::spi::SpiMaster> hil::spi::SpiMasterDevice for VirtualSpiMasterDevice<'_, Spi> {
        fn configure(&self, cpol: hil::spi::ClockPolarity, cpal: hil::spi::ClockPhase, rate: u32) {
            self.operation.set(Op::Configure(cpol, cpal, rate));
            self.mux.do_next_op();
        }
    
        fn read_write_bytes(
            &self,
            write_buffer: &'static mut [u8],
            read_buffer: Option<&'static mut [u8]>,
            len: usize,
        ) -> ReturnCode {
            self.txbuffer.replace(write_buffer);
            self.rxbuffer.put(read_buffer);
            self.operation.set(Op::ReadWriteBytes(len));
            self.mux.do_next_op();
            ReturnCode::SUCCESS
        }
    
        fn set_polarity(&self, cpol: hil::spi::ClockPolarity) {
            self.operation.set(Op::SetPolarity(cpol));
            self.mux.do_next_op();
        }
    
        fn set_phase(&self, cpal: hil::spi::ClockPhase) {
            self.operation.set(Op::SetPhase(cpal));
            self.mux.do_next_op();
        }
    
        fn set_rate(&self, rate: u32) {
            self.operation.set(Op::SetRate(rate));
            self.mux.do_next_op();
        }
    
        fn get_polarity(&self) -> hil::spi::ClockPolarity {
            self.mux.spi.get_clock()
        }
    
        fn get_phase(&self) -> hil::spi::ClockPhase {
            self.mux.spi.get_phase()
        }
    
        fn get_rate(&self) -> u32 {
            self.mux.spi.get_rate()
        }
    }
    }

    Now we can begin to see the virtualization logic. Each get_x() method just forwards calls directly to the underlying hardware driver, as these operations are synchronous and non-blocking. But the set() calls and the read/write calls are queued as operations. Each client can have only a single outstanding operation (a common requirement for virtualizers in Tock given the lack of dynamic allocation). These operations are "queued" by each client simply setting the operation field of its VirtualSpiMasterDevice to whatever operation it would like to perform next. The Mux can iterate through the list of devices to choose a pending operation. Clients learn about the completion of operations via callbacks, informing them that they can begin new operations.

  5. Implement the logic for queuing requests from capsules.

    So far, we have sketched out a skeleton for how we will queue requests from capsules, but not yet implemented the do_next_op() function that will handle the order in which operations are performed, or how operations are translated into calls by the actual hardware driver.

    We know that all operations in Tock are asynchronous, so it is always possible that the underlying hardware device is busy when do_next_op() is called -- accordingly, we need some mechanism for tracking if the underlying device is currently busy. We also need to restore the state expected by the device performing a given operation (e.g. the chip select pin in use). Beyond that, we just forward calls to the hardware driver:

    #![allow(unused)]
    fn main() {
    fn do_next_op(&self) {
        if self.inflight.is_none() {
            let mnode = self
                .devices
                .iter()
                .find(|node| node.operation.get() != Op::Idle);
            mnode.map(|node| {
                self.spi.specify_chip_select(node.chip_select.get());
                let op = node.operation.get();
                // Need to set idle here in case callback changes state
                node.operation.set(Op::Idle);
                match op {
                    Op::Configure(cpol, cpal, rate) => {
                        // The `chip_select` type will be correct based on
                        // what implemented `SpiMaster`.
                        self.spi.set_clock(cpol);
                        self.spi.set_phase(cpal);
                        self.spi.set_rate(rate);
                    }
                    Op::ReadWriteBytes(len) => {
                        // Only async operations want to block by setting
                        // the devices as inflight.
                        self.inflight.set(node);
                        node.txbuffer.take().map(|txbuffer| {
                            let rxbuffer = node.rxbuffer.take();
                            self.spi.read_write_bytes(txbuffer, rxbuffer, len);
                        });
                    }
                    Op::SetPolarity(pol) => {
                        self.spi.set_clock(pol);
                    }
                    Op::SetPhase(pal) => {
                        self.spi.set_phase(pal);
                    }
                    Op::SetRate(rate) => {
                        self.spi.set_rate(rate);
                    }
                    Op::Idle => {} // Can't get here...
                }
            });
        }
    }
    }

    Notably, the SPI driver does not implement any fairness schemes, despite the requirements of the threat model. As of this writing, the threat model is still aspirational, and not followed for all virtualizers. Eventually, this driver should be updated to use round robin queueing of clients, rather than always giving priority to whichever client was added to the List first.
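
    As a hedged sketch of what that might look like (this is not existing Tock code; `robin` would be a new `Cell<usize>` field on MuxSpiMaster remembering where the previous search stopped), selection could rotate through the device list instead of always starting from the head:

    #![allow(unused)]
    fn main() {
    impl<'a, Spi: hil::spi::SpiMaster> MuxSpiMaster<'a, Spi> {
        // Pick the next device with a pending operation, starting just after
        // the device that was serviced most recently (round robin).
        fn next_pending(&self) -> Option<&'a VirtualSpiMasterDevice<'a, Spi>> {
            let count = self.devices.iter().count();
            if count == 0 {
                return None;
            }
            let start = self.robin.get() % count;
            (0..count).map(|i| (start + i) % count).find_map(|idx| {
                self.devices
                    .iter()
                    .nth(idx)
                    .filter(|node| node.operation.get() != Op::Idle)
                    .map(|node| {
                        self.robin.set(idx + 1);
                        node
                    })
            })
        }
    }
    }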

  6. Implement the logic for dispatching callbacks from the underlying resource to the appropriate client.

    We are getting close! At this point, we have a mechanism for adding clients to the virtualizer, and for queueing and making calls. However, we have not yet addressed how to handle callbacks from the underlying resource (usually used to forward interrupts up to the appropriate client). Additionally, our queueing logic is still incomplete, as we have not yet seen how subsequent operations are triggered when an operation is requested while the underlying device is in use.

    Handling callbacks in virtualizers requires two layers of handling. First, the MuxXXX device must implement the appropriate XXXClient trait such that it can subscribe to callbacks from the underlying resource, and dispatch them to the appropriate VirtualXXXDevice:

    #![allow(unused)]
    fn main() {
    impl<Spi: hil::spi::SpiMaster> hil::spi::SpiMasterClient for MuxSpiMaster<'_, Spi> {
        fn read_write_done(
            &self,
            write_buffer: &'static mut [u8],
            read_buffer: Option<&'static mut [u8]>,
            len: usize,
        ) {
            self.inflight.take().map(move |device| {
                self.do_next_op();
                device.read_write_done(write_buffer, read_buffer, len);
            });
        }
    }
    }

    This takes advantage of the fact that we stored a reference to the device that initiated the in-flight operation, so we can dispatch the callback directly to that device. One thing to note is that the call to take() sets inflight to None, and then the callback calls do_next_op(), triggering any still queued operations. This ensures that all queued operations will take place. This all requires that the device also has implemented the callback:

    #![allow(unused)]
    fn main() {
    impl<Spi: hil::spi::SpiMaster> hil::spi::SpiMasterClient for VirtualSpiMasterDevice<'_, Spi> {
        fn read_write_done(
            &self,
            write_buffer: &'static mut [u8],
            read_buffer: Option<&'static mut [u8]>,
            len: usize,
        ) {
            self.client.map(move |client| {
                client.read_write_done(write_buffer, read_buffer, len);
            });
        }
    }
    }

    Finally, we have dispatched the callback all the way up to the client of the virtualizer, completing the round trip process.

  7. Document the interface.

    Finally, you need to document the interface. Do so by placing a comment at the top of the file describing what the file does:

    #![allow(unused)]
    fn main() {
    //! Virtualize a SPI master bus to enable multiple users of the SPI bus.
    
    }

    and add doc comments (/// doc comment example) to any new traits created in kernel/src/hil.

  8. (Optional) Write tests for the virtualization logic.

    Some virtualizers provide additional stress tests of virtualization logic, which can be run on hardware to verify correct operation in edge cases. For examples of such tests, look at capsules/src/test/virtual_uart.rs or capsules/src/test/random_alarm.rs.

Wrap-Up

Congratulations! You have virtualized a resource in the Tock kernel! We encourage you to submit a pull request to upstream this to the Tock repository.

Implementing a Kernel Test

This guide covers how to write in-kernel tests of hardware functionality. For example, if you have implemented a chip peripheral, you may want to write in-kernel tests of that peripheral to test peripheral-specific functionality that will not be exposed via the HIL for that peripheral. This guide outlines the general steps for implementing kernel tests.

Setup

This guide assumes you have existing chip, board, or architecture specific code that you wish to test from within the kernel.

Note: If you wish to test kernel code with no hardware dependencies at all, such as a ring buffer implementation, you can use cargo's test framework instead. These tests can be run by simply calling cargo test within the crate that the test is located, and will be executed during CI for all tests merged into upstream Tock. An example of this approach can be found in kernel/src/collections/ring_buffer.rs.
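
Such tests follow standard Rust conventions: a #[cfg(test)] module containing #[test] functions that cargo test runs on the host. A minimal, generic sketch (the helper function here is purely illustrative and not the actual ring buffer API):

#[cfg(test)]
mod tests {
    // Hypothetical pure helper under test; any host-runnable logic in the
    // crate can be exercised the same way with `cargo test`.
    fn wrap_index(i: usize, len: usize) -> usize {
        i % len
    }

    #[test]
    fn wraps_around() {
        assert_eq!(wrap_index(5, 4), 1);
        assert_eq!(wrap_index(3, 4), 3);
    }
}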

Overview

The general steps you will follow are:

  1. Determine the board(s) you want to run your tests on
  2. Add a test file in boards/{board}/src/tests/
  3. Determine where to write actual tests -- in the test file or a capsule test
  4. Write your tests
  5. Call the test from main.rs
  6. Document the expected output from the test at the top of the test file

This guide will walk through how to do each of these steps.

Background

Kernel tests allow for testing of hardware-specific functionality that is not exposed to userspace, and allows for fail-fast tests at boot that otherwise would not be exposed until apps are loaded. Kernel tests can be useful to test chip peripherals prior to exposing these peripherals outside the Kernel. Kernel tests can also be included as required tests run prior to releases, to ensure there have been no regressions for a particular component. Additionally, kernel tests can be useful for testing capsule functionality from within the kernel, such as when unsafe is required to verify the results of tests, or for testing virtualization capsules in a controlled environment.

Kernel tests are generally implemented on an as-needed basis, and are not required for all chip peripherals in Tock. In general, they are not expected to be run in the default case, though they should always be included from main.rs so they are compiled. These tests are allowed to use unsafe as needed, and are permitted to conflict with normal operation, by stealing callbacks from drivers or modifying global state.

Notably, your specific use case may differ somewhat from the one outlined here. It is always recommended to attempt to copy from existing Tock code when developing your own solutions. A good collection of kernel tests can be found in boards/imix/src/tests/ for that purpose.

Step-by-Step Guide

The steps from the overview are elaborated on here.

  1. Determine the board(s) you want to run your test on.

    If you are testing chip or architecture specific functionality, you simply need to choose a board that uses that chip or architecture. For board specific functionality you of course need to choose that board. If you are testing a virtualization capsule, then any board that implements the underlying resource being virtualized is acceptable. Currently, most kernel tests are implemented for the Imix platform, and can be found in boards/imix/src/tests/

    Checkpoint: You have identified the board you will implement your test for.

  2. Add a test file in boards/{board}/src/tests/

    To start implementing the test, you should create a new source file inside the boards/{board}/src/tests directory and add the file to the tests/mod.rs file. The name of this test file generally should indicate the functionality being tested.

    Note: If the board you select is one of the nrf52dk variants (nrf52840_dongle, nrf52840dk, or nrf52dk), tests should be moved into the nrf52dk_base/src/ folder, and called from lib.rs.

    Checkpoint: You have chosen a board for your test and created a test file.

  3. Determine where to write actual tests -- in the test file or a capsule test.

    Depending on what you are testing, it may be best practice to write a capsule test that you call from the test file you created in the previous step.

    Writing a capsule test is best practice if your test meets the following criteria:

    1. Test does not require unsafe
    2. The test is for a peripheral available on multiple boards
    3. A HIL or capsule exists for that peripheral, so it is accessible from the capsules crate
    4. The test relies only on functionality exposed via the HIL or a capsule
    5. You care about being able to call this test from multiple boards

    Examples:

    • UART Virtualization (all boards support UART, there is a HIL for UART devices and a capsule for the virtual_uart)
    • Alarm test (all boards will have some form of hardware alarm, there is an Alarm HIL)
    • Other examples: see capsules/core/src/test

    If your test meets the criteria for writing a capsule test, follow these steps:

    Add a file in capsules/extra/src/test/, and then add the filename to capsules/extra/src/mod.rs like this:

    #![allow(unused)]
    fn main() {
    pub mod virtual_uart;
    }

    Next, create a test struct in this file that can be instantiated by any board using this test capsule. This struct should implement a new() function so it can be instantiated from the test file in boards, and a run() function that will run the actual tests. The test should implement CapsuleTest and hold a CapsuleTestClient to notify when the test has finished.

    An example for UART follows:

    #![allow(unused)]
    fn main() {
    //! capsules/src/test/virtual_uart.rs
    
    pub struct TestVirtualUartReceive {
        device: &'static UartDevice<'static>,
        buffer: TakeCell<'static, [u8]>,
        client: OptionalCell<&'static dyn CapsuleTestClient>,
    }
    
    impl TestVirtualUartReceive {
        pub fn new(device: &'static UartDevice<'static>, buffer: &'static mut [u8]) -> Self {
            TestVirtualUartReceive {
                device: device,
                buffer: TakeCell::new(buffer),
                client: OptionalCell::empty(),
            }
        }
    
        pub fn run(&self) {
            // TODO: See Next Step
        }
    }
    
    impl CapsuleTest for TestVirtualUartReceive {
        fn set_client(&self, client: &'static dyn CapsuleTestClient) {
            self.client.set(client);
        }
    }
    }

If your test does not meet the above requirements, you can simply implement your tests in the file that you created in step 2. This can involve creating a test structure with test methods. The UDP test file takes this approach, by defining a number of self-contained tests. One such example follows:

#![allow(unused)]
fn main() {
//! boards/imix/src/test/udp_lowpan_test.rs

pub struct LowpanTest {
    port_table: &'static UdpPortManager,
    // ...
}

impl LowpanTest {

    // This test ensures that an app and capsule can't bind to the same port
    // but can bind to different ports
    fn bind_test(&self) {
        let create_cap = create_capability!(NetworkCapabilityCreationCapability);
        let net_cap = unsafe {
            static_init!(
                NetworkCapability,
                NetworkCapability::new(AddrRange::Any, PortRange::Any, PortRange::Any, &create_cap)
            )
        };
        let mut socket1 = self.port_table.create_socket().unwrap();
        // Attempt to bind to a port that has already been bound by an app.
        let result = self.port_table.bind(socket1, 1000, net_cap);
        assert!(result.is_err());
        socket1 = result.unwrap_err(); // Get the socket back

        //now bind to an open port
        let (_send_bind, _recv_bind) = self
            .port_table
            .bind(socket1, 1001, net_cap)
            .expect("UDP Bind fail");

        debug!("bind_test passed");
    }
    // ...
}
}

Checkpoint: There is a test capsule with new() and run() implementations.

  4. Write your tests

    The first part of this step takes place in the test file you just created -- writing the actual tests. This part is highly dependent on the functionality being verified. If you are writing your tests in a test capsule, this should all be triggered from the run() function.

    Depending on the specifics of your test, you may need to implement additional functions or traits in this file to make your test functional. One example is implementing a client trait on the test struct so that the test can receive the results of asynchronous operations. Our UART example requires implementing the uart::ReceiveClient trait on the test struct.

    When finished, the test should call the CapsuleTestClient with the result (pass/fail) of the test. If the test succeeds, the callback should be passed Ok(()). If the test fails, the callback should be called with Err(CapsuleTestError).

    #![allow(unused)]
    fn main() {
    //! boards/imix/src/test/virtual_uart_rx_test.rs
    
    impl TestVirtualUartReceive {
        // ...
    
        pub fn run(&self) {
            let buf = self.buffer.take().unwrap();
            let len = buf.len();
            debug!("Starting receive of length {}", len);
            let (err, _opt) = self.device.receive_buffer(buf, len);
            if err != ReturnCode::SUCCESS {
                debug!(
                    "Calling receive_buffer() in virtual_uart test failed: {:?}",
                    err
                );
                self.client.map(|client| {
                    client.done(Err(CapsuleTestError::ErrorCode(ErrorCode::FAIL)));
                });
            }
        }
    }
    
    impl uart::ReceiveClient for TestVirtualUartReceive {
        fn received_buffer(
            &self,
            rx_buffer: &'static mut [u8],
            rx_len: usize,
            rcode: ReturnCode,
            _error: uart::Error,
        ) {
            debug!("Virtual uart read complete: {:?}: ", rcode);
            for i in 0..rx_len {
                debug!("{:02x} ", rx_buffer[i]);
            }
            debug!("Starting receive of length {}", rx_len);
            let (err, _opt) = self.device.receive_buffer(rx_buffer, rx_len);
            if err == ReturnCode::SUCCESS {
                self.client.map(|client| {
                    client.done(Ok(()));
                });
            } else {
                debug!(
                    "Calling receive_buffer() in virtual_uart test failed: {:?}",
                    err
                );
                self.client.map(|client| {
                    client.done(Err(CapsuleTestError::ErrorCode(ErrorCode::FAIL)));
                });
            }
        }
    }
    }

    The next step in this process is determining all of the parameters that need to be passed to the test. It is preferred that all logically related tests be called from a single pub unsafe fn run(/* optional args */) to maintain convention. This ensures that all tests can be run by adding a single line to main.rs. Many tests require a reference to an alarm in order to separate tests in time, or a reference to a virtualization capsule that is being tested. Notably, the run() function should initialize any components itself that would not have already been created in main.rs. As an example, the below function is a starting point for the virtual_uart_receive test for Imix:

    #![allow(unused)]
    fn main() {
    pub unsafe fn run_virtual_uart_receive(mux: &'static MuxUart<'static>) {
        debug!("Starting virtual reads.");
    }
    }

    Next, a test function should initialize any objects required to run tests. This is best split out into subfunctions, like the following:

    #![allow(unused)]
    fn main() {
    unsafe fn static_init_test_receive_small(
        mux: &'static MuxUart<'static>,
    ) -> &'static TestVirtualUartReceive {
        static mut SMALL: [u8; 3] = [0; 3];
        let device = static_init!(UartDevice<'static>, UartDevice::new(mux, true));
        device.setup();
        let test = static_init!(
            TestVirtualUartReceive,
            TestVirtualUartReceive::new(device, &mut SMALL)
        );
        device.set_receive_client(test);
        test
    }
    }

    This initializes an instance of the test capsule we constructed earlier. Simpler tests (such as those not relying on capsule tests) might simply use static_init!() to initialize normal capsules directly and test them. The log test does this, for example:

    #![allow(unused)]
    fn main() {
    //! boards/imix/src/test/log_test.rs
    
    pub unsafe fn run(
        mux_alarm: &'static MuxAlarm<'static, Ast>,
        deferred_caller: &'static DynamicDeferredCall,
    ) {
        // Set up flash controller.
        flashcalw::FLASH_CONTROLLER.configure();
        static mut PAGEBUFFER: flashcalw::Sam4lPage = flashcalw::Sam4lPage::new();
    
        // Create actual log storage abstraction on top of flash.
        let log = static_init!(
            Log,
            log::Log::new(
                &TEST_LOG,
                &mut flashcalw::FLASH_CONTROLLER,
                &mut PAGEBUFFER,
                deferred_caller,
                true
            )
        );
        flash::HasClient::set_client(&flashcalw::FLASH_CONTROLLER, log);
        log.initialize_callback_handle(
            deferred_caller
                .register(log)
                .expect("no deferred call slot available for log storage"),
        );
    
        // ...
    }
    }

    Finally, your run() function should call the actual tests. This may involve simply calling a run() function on a capsule test, or may involve calling test functions written in the board specific test file. The virtual UART test run() looks like this:

    #![allow(unused)]
    fn main() {
    pub unsafe fn run_virtual_uart_receive(mux: &'static MuxUart<'static>) {
        debug!("Starting virtual reads.");
        let small = static_init_test_receive_small(mux);
        let large = static_init_test_receive_large(mux);
        small.run();
        large.run();
    }
    }

    As you develop your kernel tests, you may not immediately know what functions are required in your test capsule -- this is okay! It is often easiest to start with a basic test and expand this file to test additional functionality once basic tests are working.

    Checkpoint: Your tests are written, and can be called from a single run() function.

  5. Call the test from main.rs, and iterate on it until it works

    Next, you should run your test by calling it from the reset_handler() in main.rs. In order to do so, you will also need to import it into the file by adding a line like this:

    #![allow(unused)]
    fn main() {
    #[allow(dead_code)]
    mod virtual_uart_test;
    }

    However, if your test is located inside a test module this is not needed -- your test will already be included.

    Typically, tests are called after completing setup of the board, immediately before the call to load_processes():

    #![allow(unused)]
    fn main() {
    virtual_uart_rx_test::run_virtual_uart_receive(uart_mux);
    debug!("Initialization complete. Entering main loop");
    
    extern "C" {
        /// Beginning of the ROM region containing app images.
        static _sapps: u8;
    
        /// End of the ROM region containing app images.
        ///
        /// This symbol is defined in the linker script.
        static _eapps: u8;
    }
    kernel::procs::load_processes(
      // ...
    }

    Observe your results, and tune or add tests as needed.

    Before you submit a PR including any kernel tests, however, please remove or comment out any lines of code that call these tests.

    Checkpoint: You have a functional test that can be called in a single line from main.rs

  6. Document the expected output from the test at the top of the test file

    For tests that will be merged upstream, it is good practice to document how to run a test and what the expected output of a test is. This is best done using document-level comments (//!) at the top of the test file. The documentation for the virtual UART test follows:

    #![allow(unused)]
    fn main() {
    //! Test reception on the virtualized UART by creating two readers that
    //! read in parallel. To add this test, include the line
    //! ```
    //!    virtual_uart_rx_test::run_virtual_uart_receive(uart_mux);
    //! ```
    //! to the imix boot sequence, where `uart_mux` is a
    //! `capsules::virtual_uart::MuxUart`.  There is a 3-byte and a 7-byte
    //! read running in parallel. Test that they are both working by typing
    //! and seeing that they both get all characters. If you repeatedly
    //! type 'a', for example (0x61), you should see something like:
    //! ```
    //! Starting receive of length 3
    //! Virtual uart read complete: CommandComplete:
    //! 61
    //! 61
    //! 61
    //! 61
    //! 61
    //! 61
    //! 61
    //! Starting receive of length 7
    //! Virtual uart read complete: CommandComplete:
    //! 61
    //! 61
    //! 61
    //! ```
    }

    Checkpoint: You have documented your tests

Wrap-Up

Congratulations! You have written a kernel test for Tock! We encourage you to submit a pull request to upstream this to the Tock repository.

Implementing a Component

Each Tock board defines the peripherals, capsules, kernel settings, and syscall drivers to customize Tock for that board. Often, instantiating different resources (particularly capsules and drivers) requires subtle setup steps that are easy to get wrong. The setup steps are often shared from board to board. Together, this makes configuring a board both redundant and error-prone.

Components are the Tock mechanism to help address this. Each component includes the static memory allocations and setup steps required to implement a particular piece of kernel functionality (i.e. a capsule). You can read more technical documentation here.

In this guide we will create a component for a hypothetical system call driver called Notifier. Our system call driver is going to use an alarm as a resource and requires just one other parameter: a delay value in milliseconds. The steps should be the same for any capsule you want to create a component for.

Setup

This guide assumes you already have the capsule created, and ideally that you have set it up with a board to test. Making a component then just makes it easier to include on a new board and share among boards.

Overview

The high-level steps required are:

  1. Define the static memory required for all objects used.
  2. Create a struct that holds all of the resources and configuration necessary for the capsules.
  3. Implement finalize() to initialize memory and perform setup.
  4. Define a helper type for using components in boards.

Step-by-Step Guide

The steps from the overview are elaborated on here.

  1. Define the static memory required for all objects used.

    All objects in the kernel are statically allocated, so we need to statically allocate memory for the objects to live in. Due to constraints on the macros Tock provides for statically allocating memory, we must contain all calls to allocate this memory within another macro.

    Create a file in boards/components/src to hold the component.

    We need to define a macro to set up our state. We will use the static_buf!() macro to help with this. In the file, create a macro with the name <your capsule>_component_static. This naming convention must be followed.

    In our hypothetical case, we need to allocate room for the notifier capsule and a buffer. Each capsule might need slightly different resources.

    #![allow(unused)]
    fn main() {
    #[macro_export]
    macro_rules! notifier_driver_component_static {
        ($A:ty $(,)?) => {{
            let notifier_buffer = kernel::static_buf!([u8; 16]);
            let notifier_driver = kernel::static_buf!(
                capsules_extra::notifier::NotifierDriver<'static, $A>
            );
    
            (notifier_buffer, notifier_driver)
        };};
    }
    }

    Notice how the macro uses the type $A which is the type of the underlying alarm. We also use full paths to avoid errors when the macro is used. The macro then "returns" the two statically allocated resources.

  2. Create a struct that holds all of the resources and configuration necessary for the capsules.

    Now we create the actual component object which collects all of the resources and any configuration needed to successfully setup this capsule.

    #![allow(unused)]
    fn main() {
    pub struct NotifierDriverComponent<A: 'static + time::Alarm<'static>> {
        board_kernel: &'static kernel::Kernel,
        driver_num: usize,
        alarm: &'static A,
        delay_ms: usize,
    }
    }

    The component needs a reference to the kernel object (board_kernel) as well as the driver number to be used for this driver. These are needed to set up the grant, as we will see. If you are not setting up a syscall driver you will not need this. Finally we also need to keep track of the delay the kernel wants to use with this capsule.

    Next we can create a constructor for this component object:

    #![allow(unused)]
    fn main() {
    impl<A: 'static + time::Alarm<'static>> NotifierDriverComponent<A> {
        pub fn new(
            board_kernel: &'static kernel::Kernel,
            driver_num: usize,
            alarm: &'static A,
            delay_ms: usize,
        ) -> NotifierDriverComponent<A> {
            NotifierDriverComponent {
                board_kernel,
                driver_num,
                alarm,
                delay_ms,
            }
        }
    }
    }

    Note, all configuration that is required must be passed in to this new() constructor.

  3. Implement finalize() to initialize memory and perform setup.

    The last major step is to implement the Component trait and the finalize() method to actually setup the capsule.

    The general format looks like:

    #![allow(unused)]
    fn main() {
    impl<A: 'static + time::Alarm<'static>> Component for NotifierDriverComponent<A> {
        type StaticInput = (...);
        type Output = ...;
    
        fn finalize(self, static_buffer: Self::StaticInput) -> Self::Output {}
    }
    }

    We need to define what statically allocated types we need, and what this method will produce:

    #![allow(unused)]
    fn main() {
    impl<A: 'static + time::Alarm<'static>> Component for NotifierDriverComponent<A> {
        type StaticInput = (
            &'static mut MaybeUninit<[u8; 16]>,
            &'static mut MaybeUninit<NotifierDriver<'static, A>>,
        );
        type Output = &'static NotifierDriver<'static, A>;
    
        fn finalize(self, static_buffer: Self::StaticInput) -> Self::Output {}
    }
    }

    Notice that the static input types must match the output of the macro. The output type is what we are actually creating.

    Inside the finalize() method we need to initialize the static memory and configure/setup the capsules:

    #![allow(unused)]
    fn main() {
    impl<A: 'static + time::Alarm<'static>> Component for NotifierDriverComponent<A> {
        type StaticInput = (
            &'static mut MaybeUninit<[u8; 16]>,
            &'static mut MaybeUninit<NotifierDriver<'static, A>>,
        );
        type Output = &'static NotifierDriver<'static, A>;

        fn finalize(self, static_buffer: Self::StaticInput) -> Self::Output {
            let grant_cap = create_capability!(capabilities::MemoryAllocationCapability);

            let buf = static_buffer.0.write([0; 16]);

            let notifier = static_buffer.1.write(NotifierDriver::new(
                self.alarm,
                self.board_kernel.create_grant(self.driver_num, &grant_cap),
                buf,
                self.delay_ms,
            ));

            // Very important: set the callback client correctly.
            self.alarm.set_alarm_client(notifier);

            notifier
        }
    }
    }

    We initialize the memory for the static buffer, create the grant for the syscall driver to use, provide the driver with the alarm resource, and pass in the delay value to use. Lastly, we return a reference to the actual notifier driver object.

  4. Define a helper type for using components in boards.

    Finally, we define a helper type which simplifies using components in boards' main.rs files.

    This type is named to match the component struct and matches the output type of the component. In our case this looks like:

    #![allow(unused)]
    fn main() {
    pub type NotifierDriverType<A> = capsules_extra::notifier::NotifierDriver<'static, A>;
    }

    This should be placed right above the component struct definition.

Summary

Our full component looks like:

#![allow(unused)]
fn main() {
use core::mem::MaybeUninit;

use capsules_extra::notifier::NotifierDriver;
use kernel::capabilities;
use kernel::component::Component;
use kernel::create_capability;
use kernel::hil::time::{self, Alarm};

#[macro_export]
macro_rules! notifier_driver_component_static {
    ($A:ty $(,)?) => {{
        let notifier_buffer = kernel::static_buf!([u8; 16]);
        let notifier_driver = kernel::static_buf!(
            capsules_extra::notifier::NotifierDriver<'static, $A>
        );

        (notifier_buffer, notifier_driver)
    };};
}

pub type NotifierDriverType<A> = capsules_extra::notifier::NotifierDriver<'static, A>;

pub struct NotifierDriverComponent<A: 'static + time::Alarm<'static>> {
    board_kernel: &'static kernel::Kernel,
    driver_num: usize,
    alarm: &'static A,
    delay_ms: usize,
}

impl<A: 'static + time::Alarm<'static>> NotifierDriverComponent<A> {
    pub fn new(
        board_kernel: &'static kernel::Kernel,
        driver_num: usize,
        alarm: &'static A,
        delay_ms: usize,
    ) -> NotifierDriverComponent<A> {
        NotifierDriverComponent {
            board_kernel,
            driver_num,
            alarm,
            delay_ms,
        }
    }
}

impl<A: 'static + time::Alarm<'static>> Component for NotifierDriverComponent<A> {
    type StaticInput = (
        &'static mut MaybeUninit<[u8; 16]>,
        &'static mut MaybeUninit<NotifierDriver<'static, A>>,
    );
    type Output = &'static NotifierDriver<'static, A>;

    fn finalize(self, static_buffer: Self::StaticInput) -> Self::Output {
        let grant_cap = create_capability!(capabilities::MemoryAllocationCapability);

        let buf = static_buffer.0.write([0; 16]);

        let notifier = static_buffer.1.write(NotifierDriver::new(
            self.alarm,
            self.board_kernel.create_grant(self.driver_num, &grant_cap),
            buf,
            self.delay_ms,
        ));

        // Very important: set the callback client correctly.
        self.alarm.set_alarm_client(notifier);

        notifier
    }
}
}

Usage

To use the component in a board's main.rs file:

#![allow(unused)]
fn main() {
type NotifierDriver = components::notifier::NotifierDriverType<nrf52840::rtc::Rtc>;

let notifier = components::notifier::NotifierDriverComponent::new(
    board_kernel,
    capsules_core::notifier::DRIVER_NUM,
    alarm,
    100,
)
.finalize(components::notifier_driver_component_static!(nrf52840::rtc::Rtc));
}

Wrap-Up

Congratulations! You have created a component to easily create a resource in the Tock kernel! We encourage you to submit a pull request to upstream this to the Tock repository.

Minimizing Tock Code Size

Many embedded applications are ultimately limited by the flash space available on the board in use. This document provides tips on how to write Rust code such that it does not require an undue amount of flash, and highlights some options which can be used to reduce the size required for a particular image.

Code Style: tips for keeping Rust code small

When to use generic types with trait bounds versus trait objects (dyn)

Polymorphic structs and functions are one of the biggest sources of bloat in Rust binaries -- use of generic types can lead to bloat from monomorphization, while use of trait objects introduces vtables into the binary and limits opportunities for inlining.

Use dyn when the function in question will be called with multiple concrete types; with generics, code size increases for every concrete type used (monomorphization).

#![allow(unused)]
fn main() {
fn set_gpio_client(client: &dyn GpioClientTrait) { /* ... */ }

// elsewhere
let radio: Radio = Radio::new();
set_gpio_client(&radio);

let button: Button = Button::new();
set_gpio_client(&button);
}

Use generics with trait bounds when the function is only ever called with a single public type per board; this reduces code size and run time cost. This increases source code complexity for decreased image size and decreased clock cycles used.

#![allow(unused)]
fn main() {
// On a given chip, there is only a single FlashController. We use generics so
// that there can be a shared interface by all FlashController's on different
// chips, but in a given binary this function will never be called with multiple
// types.
impl<'a, F: FlashController> StorageDriverBackend<'a, F> {
    pub fn new(
        storage: &'a StorageController<'a, F>,
    ) -> Self { ... }

}
}

Similarly, only use const generics when there will not be monomorphization, or if the body of the method which would be monomorphized is sufficiently small that it will be inlined anyways.
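As a rough illustration (a sketch, not taken from the Tock codebase), a const-generic helper with a trivially small body will typically be inlined, so the const parameter does not produce extra monomorphized copies of any substantial code:

// Sketch: the body is small enough to be inlined, so the const generic
// length does not meaningfully grow the binary.
fn checksum<const N: usize>(data: &[u8; N]) -> u8 {
    data.iter().fold(0u8, |acc, b| acc.wrapping_add(*b))
}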

Non-generic-inner-functions

Sometimes, generic monomorphization is unavoidable (much of the code in grant.rs is an example of this). When generics must be used despite functions being called with multiple different types, use the non-generic-inner-function method, written about here, and applied in our codebase (see PR 2648 for an example).
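The pattern itself is small; here is a minimal sketch (illustrative names, not code from grant.rs or PR 2648):

// Sketch of the non-generic-inner-function pattern: the generic outer
// function is a thin shim, so only the shim is monomorphized per type;
// the (potentially large) body is compiled exactly once.
fn log_value<T: Into<u32>>(val: T) {
    fn inner(val: u32) {
        // ... all of the substantial logic lives here ...
        let _ = val;
    }
    inner(val.into())
}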

Panics

Panics add substantial code size to Tock binaries -- on the order of 50-75 bytes per panic. Returning errors is much cheaper than panicking, and also produces more dependable code. Whenever possible, return errors instead of panicking. Often, this will not only mean avoiding explicit panics: many core library functions panic internally depending on the input.

The most common panics in Tock binaries are from array accesses, but these can often be ergonomically replaced with result-based error handling:

#![allow(unused)]
fn main() {
// BAD: produces bloat
fn do_stuff(&mut self) -> Result<(), ErrorCode> {
    if self.my_array[4] == 7 {
        self.other_array[3] = false;
        Ok(())
    } else {
        Err(ErrorCode::SIZE)
    }
}

// GOOD
fn do_stuff(&mut self) -> Result<(), ErrorCode> {
    if *self.my_array.get(4).ok_or(ErrorCode::FAIL)? == 7 {
        *(self.other_array.get_mut(3).ok_or(ErrorCode::FAIL)?) = false;
        Ok(())
    } else {
        Err(ErrorCode::SIZE)
    }
}
}

Similarly, avoid code that could divide by 0, and avoid signed division, which could divide a type's MIN value by -1. Finally, avoid using unwrap() / expect(), and make sure to give the compiler enough information that it can guarantee copy_from_slice() is only being called on two slices of equal length.
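As a sketch of both points (an illustrative function, not from the Tock codebase):

// Sketch: checked_div removes the hidden divide-by-zero panic path, and
// slicing both buffers to the same computed length gives the compiler the
// information it needs for copy_from_slice's length check.
fn scale_and_copy(dst: &mut [u8], src: &[u8], divisor: u32) -> Option<u32> {
    let scaled = 1000u32.checked_div(divisor)?; // no panic when divisor == 0
    let len = core::cmp::min(dst.len(), src.len());
    dst[..len].copy_from_slice(&src[..len]); // both slices have length `len`
    Some(scaled)
}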

Formatting overhead

Implementations of fmt::Debug and fmt::Display are expensive -- the core library functions they rely on include multiple panics and lots of (size) expensive formatting/unicode code that is unnecessary for simple use cases. This is well-documented elsewhere. Accordingly, use #[derive(Debug)] and fmt::Display sparingly. For simple enums, manual to_string(&self) -> &str methods can be substantially cheaper. For example, consider the following enum/use:

#![allow(unused)]
fn main() {
// BAD
#[derive(Debug)]
enum TpmState {
    Idle,
    Ready,
    CommandReception,
    CommandExecutionComplete,
    CommandExecution,
    CommandCompletion,
}

let tpm_state = TpmState::Idle;
debug!("{:?}", tpm_state);

// GOOD
enum TpmState {
    Idle,
    Ready,
    CommandReception,
    CommandExecutionComplete,
    CommandExecution,
    CommandCompletion,
}

impl TpmState {
    fn to_string(&self) -> &str {
        use TpmState::*;
        match self {
            Idle => "Idle",
            Ready => "Ready",
            CommandReception => "CommandReception",
            CommandExecutionComplete => "CommandExecutionComplete",
            CommandExecution => "CommandExecution",
            CommandCompletion => "CommandCompletion",
        }
    }
}

let tpm_state = TpmState::Idle;
debug!("{}", tpm_state.to_string());
}

The latter example is 112 bytes smaller than the former, despite being functionally equivalent.

For structs with runtime values that cannot easily be turned into &str representations, this process is not so straightforward; consider whether the substantial overhead of calling these methods is worth the debuggability improvement.

64 bit division

Avoid all 64-bit division and modulus; they add ~1 kB if used, as the software routines that implement them are optimized for speed rather than size. Often bit-manipulation approaches will be much cheaper, especially if one of the operands to the division is a compile-time constant.
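For example, if you control the scaling factor, picking a power of two lets a shift and a mask stand in for the division and modulus (a sketch, not from the Tock codebase):

// Sketch: equivalent to (ticks / 1024, ticks % 1024) without pulling in the
// 64-bit division/modulus support routines.
fn split_ticks(ticks: u64) -> (u64, u64) {
    (ticks >> 10, ticks & 0x3FF)
}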

Global arrays

For global const/static mut variables, don't store collections in arrays unless all elements of the array are used.

The canonical example of this is GPIO -- if you have 100 GPIO pins, but your binary only uses 3 of them:

#![allow(unused)]
fn main() {
pub const GPIO_PINS: [Pin; 100] = [//...]; //BAD -- UNUSED PINS STILL IN BINARY

// GOOD APPROACH
pub const GPIO_PIN_0: Pin = Pin::new(0);
pub const GPIO_PIN_1: Pin = Pin::new(1);
pub const GPIO_PIN_2: Pin = Pin::new(2);
// ...and so on.
}

The latter approach ensures that the compiler can remove pins which are not used from the binary.

Combine register accesses

Combine register accesses into as few volatile operations as possible. E.g.

#![allow(unused)]
fn main() {
regs.dcfg.modify(DevConfig::DEVSPD::FullSpeed1_1);
regs.dcfg.modify(DevConfig::DESCDMA::SET);
regs.dcfg.modify(DevConfig::DEVADDR.val(0));
}

is much more expensive than:

#![allow(unused)]
fn main() {
regs.dcfg.modify(
    DevConfig::DEVSPD::FullSpeed1_1 + DevConfig::DESCDMA::SET + DevConfig::DEVADDR.val(0),
);
}

because each individual modify is volatile so the compiler cannot optimize the calls together.

Minimize calls to Grant::enter()

Grants are fundamental to Tock's architecture, but the internal implementation of Grants are relatively complex. Further, Grants are generic over all types that are stored in Grants, so multiple copies of many Grant functions end up in the binary. The largest of these is Grant::enter(), which is called often in capsule code. That said, it is often possible to reduce the number of calls to this method. For example: you can combine calls to apps.enter():

#![allow(unused)]
fn main() {
// BAD -- DON'T DO THIS
match command_num {
    0 => self.apps.enter(processid, |app, _| app.perform_cmd_0()),
    1 => self.apps.enter(processid, |app, _| app.perform_cmd_1()),
}

// GOOD -- DO THIS
self.apps.enter(processid, |app, _| {
    match command_num {
        0 => app.perform_cmd_0(),
        1 => app.perform_cmd_1(),
    }
})
}

The latter saves ~100 bytes because each additional call to Grant::enter() leads to an additional monomorphized copy of the body of Grant::enter().

Scattered additional tips

  • Avoid calling functions in core::str; there is lots of overhead there that is not optimized out. For example: if you have a space-separated string, using text.split_ascii_whitespace() costs 150 more bytes than using text.as_bytes().split(|b| *b == b' ');.
  • Avoid static mut globals when possible, and favor global constants. static mut variables are placed in .relocate, so they consume both flash and RAM, and cannot be optimized as well because the compiler cannot make its normal aliasing assumptions.
  • Use const generics to pass sized arrays instead of slices, unless this will lead to monomorphization (see the sketch after this list). In addition to removing panics on array accesses, this allows for passing smaller objects (references to arrays are just a pointer, slices are pointer + length), and lets the compiler make additional optimizations based on the known array length.
  • Test the effect of #[inline(always/never)] directives, sometimes the result will surprise you. If the savings are small, it is usually better to leave it up to the compiler, for increased resilience to future changes.
  • For functions that will not be inlined, try to keep arguments/returns in registers. On RISC-V, this means using <= 8 1-word arguments, no arguments > 2 words, and <= 2 words in return value.
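As a sketch of the sized-array tip above (illustrative names only): with an array reference the length is part of the type, so the accesses below are provably in bounds and only a pointer is passed at runtime.

// Sketch: `buf` is known to be exactly 8 bytes, so indexing cannot panic and
// no length word accompanies the reference.
fn first_word(buf: &[u8; 8]) -> u16 {
    u16::from_le_bytes([buf[0], buf[1]])
}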

Reducing the size of an out-of-tree board

In general, upstream Tock strives to produce small binaries. However, there is often a tension between code size, debugability, and user friendliness. As a result, upstream Tock does not always choose the most minimal configuration possible. For out-of-tree boards especially focused on code size, there are a few steps you can take to further reduce code size:

  • Disable the debug_panic_info option in kernel/src/config.rs -- this will remove a lot of debug information that is provided in panics, but can reduce code size by 8-12 kB.
  • Implement your own peripheral struct that does not include peripherals you do not need. Often, the DefaultPeripherals struct for a chip may include peripherals not used in your project, and the structure of the interrupt handler means that you will pay the code size cost of the unused peripherals unless you implement your own Peripheral struct. The option to do this was first introduced in PR 2069 and is explained there.
  • Modify your panic handler to not use the PanicInfo struct. This will allow LLVM to optimize out the paths, panic messages, line numbers, etc. which would otherwise be stored in the binary to allow users to backtrace and debug panics.
  • Remove the implementation of debug!(): if you really want size savings, and are ok not printing anything, you can remove the implementation of debug!() and replace it with an empty macro. This will remove the code associated with any calls to debug!() in the core kernel or chip crates that you depend on, as well as any remaining code associated with the fmt machinery.
  • Fine-tune your inline-threshold. This can have a significant impact, but the ideal value is dependent on your particular code base, and changes as the compiler does -- update it when you update the compiler version! In practice, we have observed that very small values are often optimal (e.g., in the range of 2 to 10). This is done by passing -C inline-threshold=x to rustc.
  • Try opt-level=s instead of opt-level=z. In practice, s (when combined with a reduced inline threshold) often seems to produce smaller binaries. This is worth revisiting periodically, given that z is supposed to lead to smaller binaries than s.

Porting Tock

This guide covers how to port Tock to a new platform.

This guide is a work in progress. Comments and pull requests are appreciated!

Overview

At a high level, to port Tock to a new platform you will need to create a new "board" as a crate, as well as potentially add additional "chip" and "arch" crates. The board crate specifies the exact resources available on a hardware platform by stitching capsules together with the chip crates (e.g. assigning pins, setting baud rates, allocating hardware peripherals etc.). The chip crate implements the peripheral drivers (e.g. UART, GPIO, alarms, etc.) for a specific microcontroller by implementing the traits found in kernel/src/hil. If your platform uses a microcontroller already supported by Tock then you can use the existing chip crate. The arch crate implements the low-level code for a specific hardware architecture (e.g. what happens when the chip first boots and how system calls are implemented).

Is Tock a Good Fit for my Hardware?

Before porting Tock to a new platform or microcontroller, you should determine if Tock is a good fit. While we do not have an exact rubric, there are some requirements that we generally look for:

  • Must have requirements:

    • Memory protection support. This is generally the MPU on Cortex-M platforms or the PMP on RISC-V platforms.
    • At least 32-bit support. Tock is not designed for 16-bit platforms.
    • Enough RAM and flash to support userspace applications. "Enough" is underspecified, but generally boards should have at least 64 kB of RAM and 128 kB of flash.
  • Generally expected requirements:

    • The platform should be 32-bit. Tock may support 64-bit in the future.
    • The platform should be single core. A multicore CPU is OK, but the expectation is that only one core will be used with Tock.

Crate Details

This section includes more details on what is required to implement each type of crate for a new hardware platform.

arch Crate

Tock currently supports the ARM Cortex-M0, Cortex-M3, and Cortex-M4 architectures, as well as rv32i. There is not much architecture-specific code in Tock; the list is pretty much:

  • Syscall entry/exit
  • Interrupt configuration
  • Top-half interrupt handlers
  • MPU configuration (if appropriate)
  • Power management configuration (if appropriate)

It would likely be fairly easy to port Tock to another ARM Cortex M (specifically the M0+, M23, M4F, or M7) or another rv32i variant. It will probably be more work to port Tock to other architectures. While we aim to be architecture agnostic, this has only been tested on a small number of architectures.

If you are interested in porting Tock to a new architecture, it's likely best to reach out to us via email or Slack before digging in too deep.

chip Crate

The chip crate is specific to a particular microcontroller, but should attempt to be general towards a family of microcontrollers. For example, support for the nRF52840 and nRF52832 microcontrollers is shared in the chips/nrf52 and chips/nrf5x crates. This helps reduce duplicated code and simplifies adding new specific microcontrollers.

The chip crate contains microcontroller-specific implementations of the interfaces defined in kernel/src/hil.

Chips have a lot of features and Tock supports a large number of interfaces to express them. Build up the implementation of a new chip incrementally. Get reset and initialization code working. Set it up to run on the chip's default clock and add a GPIO interface. That's a good point to put together a minimal board that uses the chip and validate with an end-to-end userland application that uses GPIOs.

Once you have something small like GPIOs working, it's a great time to open a pull request to Tock. This lets others know about your efforts with this chip and can hopefully attract additional support. It also is a chance to get some feedback from the Tock core team before you have written too much code.

Moving forward, chips tend to break down into reasonable units of work. Implement something like kernel::hil::UART for your chip, then submit a pull request. Pick a new peripheral and repeat!

Historically, Tock chips defined peripherals as static mut global variables, which made them easy to access but encouraged use of unsafe code and prevented boards from instantiating only the set of peripherals they needed. Now, peripherals are instantiated at runtime in main.rs, which resolves these issues. To prevent each board from having to instantiate peripherals individually, chips should provide a ChipNameDefaultPeripherals struct that defines and creates all peripherals available for the chip in Tock. This will be used by upstream boards using the chip, without forcing the overhead and code size of all peripherals on more minimal out-of-tree boards.
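A hedged sketch of what such a struct might look like, assuming a hypothetical chip crate mychip (the peripheral names and constructors are illustrative, not from any real chip crate):

// Sketch: the chip crate gathers every peripheral it supports in one struct,
// instantiated at runtime in main.rs. Minimal out-of-tree boards can define
// their own struct containing only the peripherals they need.
// `mychip` and its modules are hypothetical.
pub struct MyChipDefaultPeripherals<'a> {
    pub uart0: mychip::uart::Uart<'a>,
    pub gpio_port: mychip::gpio::Port<'a>,
    pub timer0: mychip::timer::Timer<'a>,
}

impl<'a> MyChipDefaultPeripherals<'a> {
    pub fn new() -> Self {
        Self {
            uart0: mychip::uart::Uart::new(),
            gpio_port: mychip::gpio::Port::new(),
            timer0: mychip::timer::Timer::new(),
        }
    }
}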

Tips and Tools

  • Using System View Description (SVD) files for specific microcontrollers can help with setting up the register mappings for individual peripherals. See the tools/svd2regs.py tool (./svd2regs.py -h) for help with automatically generating the register mappings.

board Crate

The board crate, in boards/src, is specific to a physical hardware platform. The board file essentially configures the kernel to support the specific hardware setup. This includes instantiating drivers for sensors, mapping communication buses to those sensors, configuring GPIO pins, etc.

Tock is leveraging "components" for setting up board crates. Components are self-contained structs that include all of the setup code for a particular driver, and only require boards to pass in the specific options that are unique to the particular platform. For example:

#![allow(unused)]
fn main() {
let isl29035 = components::isl29035::Isl29035Component::new(sensors_i2c, mux_alarm)
    .finalize(components::isl29035_component_static!(sam4l::ast::Ast));

let ambient_light = components::isl29035::AmbientLightComponent::new(
    board_kernel,
    capsules::ambient_light::DRIVER_NUM,
    isl29035,
)
.finalize(components::ambient_light_component_static!());
}

instantiates the components for a specific light sensor (the ISL29035) and for an ambient light sensor interface for userspace. Board initialization should be largely done using components, but not all components have been created yet, so board files are generally a mix of components and verbose driver instantiation. The best bet is to start from an existing board's main.rs file and adapt it. Initially, you will likely want to delete most of the capsules and add them slowly as you get things working.

Warning: [capsule name]_component_static!() macros are singletons, and must not be called in a loop or within a function. These macros should instead be instantiated directly in main().

Component Creation

Creating a component for a capsule has two main benefits: 1) all subtleties and any complexities with setting up the capsule can be contained in the component, reducing the chance for error when using the capsule, and 2) the details of instantiating a capsule are abstracted from the high-level setup of a board. Therefore, Tock encourages boards to use components for their main startup process.

Basic components generally have a structure like the following simplified example for a Console component:

#![allow(unused)]
fn main() {
use core::mem::MaybeUninit;

/// Helper macro that calls `static_buf!()`. This helps allow components to be
/// instantiated multiple times.
#[macro_export]
macro_rules! console_component_static {
    () => {{
        let console = kernel::static_buf!(capsules::console::Console<'static>);
        console
    }};
}

/// Main struct that represents the component. This should contain all
/// configuration and resources needed to instantiate this capsule.
pub struct ConsoleComponent {
    uart: &'static capsules::virtual_uart::UartDevice<'static>,
}

impl ConsoleComponent {
    /// The constructor for the component where the resources and configuration
    /// are provided.
    pub fn new(
        uart: &'static capsules::virtual_uart::UartDevice,
    ) -> ConsoleComponent {
        ConsoleComponent {
            uart,
        }
    }
}

impl Component for ConsoleComponent {
    /// The statically defined (using `static_buf!()`) structures where the
    /// instantiated capsules will actually be stored.
    type StaticInput = &'static mut MaybeUninit<capsules::console::Console<'static>>;
    /// What will be returned to the user of the component.
    type Output = &'static capsules::console::Console<'static>;

    /// Initializes and configures the capsule.
    unsafe fn finalize(self, s: Self::StaticInput) -> Self::Output {
        // Call `.write()` on the static buffer to set its contents with the
        // constructor from the capsule.
        let console = s.write(console::Console::new(self.uart));

        // Set any needed clients or other configuration steps.
        hil::uart::Transmit::set_transmit_client(self.uart, console);
        hil::uart::Receive::set_receive_client(self.uart, console);

        // Return the static reference to the newly created capsule object.
        console
    }
}
}

Using a basic component like this console example looks like:

#![allow(unused)]
fn main() {
// in main.rs:

let console = ConsoleComponent::new(uart_device)
    .finalize(components::console_component_static!());
}

When creating components, keep the following steps in mind:

  • All static buffers needed for the component MUST be created using static_buf!() inside of a macro, and nowhere else. This is necessary to help allow components to be used multiple times (for example if a board has two temperature sensors). Because the same static_buf!() call cannot be executed multiple times, static_buf!() cannot be placed in a function, and must be called directly from main.rs. To preserve the ergonomics of components, we wrap the call to static_buf!() in a macro, and call the macro from main.rs instead of static_buf!() directly.

    The naming convention of the macro that wraps static_buf!() should be [capsule name]_component_static!() to indicate this is where the static buffers are created. The macro should only create static buffers.

  • All configuration and resources not related to static buffers should be passed to the new() constructor of the component object.

Finally, some capsules and resources are templated over chip-specific resources. This slightly complicates defining the static buffers for certain capsules. To ensure that components can be re-used across different boards and microcontrollers, components use the same macro strategy for other static buffers.

#![allow(unused)]
fn main() {
use core::mem::MaybeUninit;

#[macro_export]
macro_rules! alarm_mux_component_static {
    ($A: ty) => {{
        let alarm = kernel::static_buf!(capsules::virtual_alarm::MuxAlarm<'static, $A>);
        alarm
    }};
}

pub struct AlarmMuxComponent<A: 'static + time::Alarm<'static>> {
    alarm: &'static A,
}

impl<A: 'static + time::Alarm<'static>> AlarmMuxComponent<A> {
    pub fn new(alarm: &'static A) -> AlarmMuxComponent<A> {
        AlarmMuxComponent { alarm }
    }
}

impl<A: 'static + time::Alarm<'static>> Component for AlarmMuxComponent<A> {
    type StaticInput = &'static mut MaybeUninit<capsules::virtual_alarm::MuxAlarm<'static, A>>;
    type Output = &'static MuxAlarm<'static, A>;

    unsafe fn finalize(self, s: Self::StaticInput) -> Self::Output {
        let mux_alarm = s.write(MuxAlarm::new(self.alarm));
        self.alarm.set_alarm_client(mux_alarm);
        mux_alarm
    }
}
}

Here, the alarm_mux_component_static!() macro needs the type of the underlying alarm hardware. The usage looks like:

#![allow(unused)]
fn main() {
let mux_alarm = components::alarm::AlarmMuxComponent::new(&peripherals.ast)
    .finalize(components::alarm_mux_component_static!(sam4l::ast::Ast));
}

Board Support

In addition to kernel code, boards also require some support files. These specify metadata such as the board name, how to load code onto the board, and anything special that userland applications may need for this board.

panic!s (aka io.rs)

Each board must author a custom routine to handle panic!s. Most panic! machinery is handled by the Tock kernel, but the board author must provide some minimalist access to hardware interfaces, specifically LEDs and/or UART.

As a first step, it is simplest to just get LED-based panic! working. Have your panic! handler set up a prominent LED and then call kernel::debug::panic_blink_forever.

If UART is available, the kernel is capable of printing a lot of very helpful additional debugging information. However, as we are in a panic! situation, it's important to strip this down to a minimalist implementation. In particular, the supplied UART must be synchronous (note that this is in contrast to the rest of the kernel UART interfaces, which are all asynchronous). Usually, implementing a very simple Writer that writes one byte at a time directly to the UART is easiest/best. It is not important that the panic! UART writer be efficient. You can then replace the call to kernel::debug::panic_blink_forever with a call to kernel::debug::panic.
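As a hedged sketch, such a Writer can be as small as the following; mychip::uart::send_byte_blocking is a hypothetical blocking transmit function standing in for whatever synchronous mechanism your chip crate (or direct register access) provides:

use core::fmt::Write;

pub struct Writer;

impl Write for Writer {
    fn write_str(&mut self, s: &str) -> core::fmt::Result {
        for b in s.bytes() {
            // Busy-wait until each byte has been pushed out of the UART.
            // `send_byte_blocking` is hypothetical; use your chip's
            // synchronous transmit path here.
            unsafe { mychip::uart::send_byte_blocking(b) };
        }
        Ok(())
    }
}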

For largely historical reasons, panic implementations for all boards live in a file named io.rs adjacent to the board's main.rs file.

Board Cargo.toml, build.rs

Every board crate must author a top-level manifest, Cargo.toml. In general, you can probably simply copy this from another board, modifying the board name and author(s) as appropriate.

Note that Tock also provides a build script, boards/build.rs, that you should add to your Cargo.toml manifest. The build script simply adds a dependency on any linker scripts to ensure the board is rebuilt when any of them change.

Board Makefile

There is a Makefile in the root of every board crate. At a minimum, the board Makefile must include:

# Makefile for building the tock kernel for the Hail platform

TARGET=thumbv7em-none-eabi      # Target triple
PLATFORM=hail                   # Board name here

include ../Makefile.common      # ../ assumes board lives in $(TOCK)/boards/<board>

Tock provides boards/Makefile.common that drives most of the build system. In general, you should not need to dig into this Makefile -- if something doesn't seem to be working, hop on Slack and ask.

Getting the built kernel onto a board

In addition to building the kernel, the board Makefile should include rules for getting code onto the board. This will naturally be fairly board-specific, but Tock does have two targets normally supplied:

  • make program: For "plug-'n-play" loading. Usually these are boards with a bootloader or some other support IC. The expectation is that during normal operation, a user could simply plug in a board and type make program to load code.
  • make flash: For "more direct" loading. Usually this means that a JTAG or some equivalent interface is being used. Often it implies that external hardware is required, though some of the development kit boards have an integrated JTAG on-board, so external hardware is not a hard and fast rule.
  • make install: This should be an alias to either program or flash, whichever is the preferred approach for this board.

If you don't support program or flash, you should define an empty rule that explains how to program the board:

.PHONY: program
program:
	echo "To program, run SPECIAL_COMMAND"
	exit 1

Board README

Every board must have a README.md file included in the top level of the crate. This file must:

  • Provide links to information about the platform and how to purchase/acquire the platform. If there are different versions of the platform the version used in testing should be clearly specified.
  • Include an overview on how to program the hardware, including any additional dependencies that are required.

Loading Apps

Ideally, Tockloader will support loading apps on to your board (perhaps with some flags set to specific values). If that is not the case, please create an issue on the Tockloader repo so we can update the tool to support loading code onto your board.

Common Pitfalls

  • Make sure you are careful when setting up the board main.rs file. In particular, it is important to ensure that all of the required set_client functions for capsules are called so that callbacks are not lost. Forgetting these often results in the platform looking like it doesn't do anything.
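    For example, the kind of wiring that is easy to forget is a client-setting call like the one used in the console component earlier in this guide (names as in that example):

    // If this call is missing, the console never receives transmit-complete
    // callbacks and the board appears to do nothing.
    hil::uart::Transmit::set_transmit_client(uart_device, console);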

Adding a Platform to Tock Repository

After creating a new platform, we would be thrilled to have it included in mainline Tock. However, Tock has a few guidelines for the minimum requirements of a board that is merged into the main Tock repository:

  1. The hardware must be widely available. Generally that means the hardware platform can be purchased online.
  2. The port of Tock to the platform must include at least:
    • Console support so that debug!() and printf() work.
    • Timer support.
    • GPIO support with interrupt functionality.
  3. The contributor must be willing to maintain the platform, at least initially, and help test the platform for future releases.

With these requirements met we should be able to merge the platform into Tock relatively quickly. In the pull request to add the platform, you should add this checklist:

### New Platform Checklist

- [ ] Hardware is widely available.
- [ ] I can support the platform, which includes release testing for the
      platform, at least initially.
- Basic features are implemented:
  - [ ] `Console`, including `debug!()` and userspace `printf()`.
  - [ ] Timers.
  - [ ] GPIO with interrupts.

Porting Tock 1.x Capsules to Tock 2.0

This guide covers how to port Tock capsules from the 1.x system call API to the 2.x system call API. It outlines how the API has changed and gives code examples.

Overview

Version 2 of the Tock operating system changes the system call API and ABI in several ways. This document describes the changes and their implications to capsule implementations. It gives guidance on how to port a capsule from Tock 1.x to 2.0.

Tock 2.0 System Call API

The Tock system call API is implemented in the Driver trait. Tock 2.0 updates this trait to be more precise and correctly support Rust's memory semantics.

SyscallDriver

This is the signature for the 2.0 SyscallDriver trait (the renamed Driver trait):

#![allow(unused)]
fn main() {
pub trait SyscallDriver {
    fn command(&self, which: usize, r2: usize, r3: usize, caller_id: ProcessId) -> CommandResult {
        CommandResult::failure(ErrorCode::NOSUPPORT)
    }

    fn allow_readwrite(
        &self,
        process_id: ProcessId,
        allow_num: usize,
        buffer: ReadWriteProcessBuffer,
    ) -> Result<ReadWriteProcessBuffer, (ReadWriteProcessBuffer, ErrorCode)> {
        Err((buffer, ErrorCode::NOSUPPORT))
    }

    fn allow_readonly(
        &self,
        process_id: ProcessId,
        allow_num: usize,
        buffer: ReadOnlyProcessBuffer,
    ) -> Result<ReadOnlyProcessBuffer, (ReadOnlyProcessBuffer, ErrorCode)> {
        Err((buffer, ErrorCode::NOSUPPORT))
    }

    fn allocate_grant(&self, processid: ProcessId) -> Result<(), crate::process::Error>;
}
}

The first thing to note is that there are now two versions of the old allow method: one for a read/write buffer and one for a read-only buffer. They pass different types of slices.

The second thing to note is that the two methods that pass pointers, allow_readwrite and allow_readonly, return a Result. The success case (Ok) returns a pointer back in the form of an application slice. The failure case (Err) returns the same structure back but also has an ErrorCode.

These two methods follow a swapping calling convention: you pass in a pointer and get one back. If the call fails, you get back the one you passed in. If the call succeeds, you get back the one the capsule previously had. That is, you call allow_readwrite with an application slice A and it succeeds, then the next successful call to allow_readwrite will return A.

These swapping semantics allow the kernel to maintain an invariant that there is only one instance of a particular application slice at any time. Since an application slice represents a region of application memory, having two objects representing the same region of memory violates Rust's memory guarantees. When the scheduler calls allow_readwrite, allow_readonly or subscribe, it moves the application slice or callback into the capsule. The capsule, in turn, moves the previous one out.

The command method behaves differently, because commands only operate on values, not pointers. Each command has its own arguments and return types. This is encapsulated within CommandResult.

The third thing to note is that there is no longer a subscribe() method. This has been removed and instead all upcalls are managed entirely by the kernel. Scheduling an upcall is now done with a provided object from entering a grant.

The fourth thing to note is the new allocate_grant() method. This allows the kernel to request that a capsule enters its grant region so that it is allocated for the specific process. This should be implemented with a roughly boilerplate implementation described below.

Porting Capsules and Example Code

The major change you'll see in porting your code is that capsule logic becomes simpler: Options have been replaced by structures, and there's a basic structure to swapping application slices.

Examples of command and CommandResult

The LED capsule implements only commands, so it provides a very simple example of what commands look like.

#![allow(unused)]
fn main() {
 fn command(&self, command_num: usize, data: usize, _: usize, _: ProcessId) -> CommandResult {
        self.leds
            .map(|leds| {
                match command_num {
...
                    // on
                    1 => {
                        if data >= leds.len() {
                            CommandResult::failure(ErrorCode::INVAL) /* led out of range */
                        } else {
                            leds[data].on();
                            CommandResult::success()
                        }
                    },

}

The capsule dispatches on the command number. It uses the first argument, data, as which LED to activate. It then returns either a CommandResult::Success (generated with CommandResult::success()) or a CommandResult::Failure (generated with CommandResult::failure()).

A CommandResult is a wrapper around a GenericSyscallReturnValue, constraining it to the versions of GenericSyscallReturnValue that can be returned by a command.

Here is a slightly more complex implementation of Command, from the console capsule.

#![allow(unused)]
fn main() {
fn command(&self, cmd_num: usize, arg1: usize, _: usize, processid: ProcessId) -> CommandResult{
    let res = match cmd_num {
        0 => Ok(Ok(())),
        1 => { // putstr
            let len = arg1;
            self.apps.enter(processid, |app, _| {
                self.send_new(processid, app, len)
            }).map_err(ErrorCode::from)
        },
        2 => { // getnstr
            let len = arg1;
            self.apps.enter(processid, |app, _| {
                self.receive_new(processid, app, len)
            }).map_err(ErrorCode::from)
        },
        3 => { // Abort RX
            self.uart.receive_abort();
            Ok(Ok(()))
        }
        _ => Err(ErrorCode::NOSUPPORT)
    };
    match res {
        Ok(r) => CommandResult::from(r),
        Err(e) => CommandResult::failure(e),
    }
}
}

This implementation is more complex because it uses a grant region that stores per-process state. Grant::enter returns a Result<Result<(), ErrorCode>, grant::Error>. An outer Err return type means the grant could not be entered successfully and the closure was not invoked: this returns what grant error occurred. An Ok return type means the closure was executed, but it is possible that an error occurred during its execution. So there are three cases:

  • Ok(Ok(()))
  • Ok(Err(ErrorCode:: error cases))
  • Err(grant::Error)

The bottom match statement separates these cases. In the Ok() case, it checks whether the inner Result contains an Err(ErrorCode): if not, both the grant entry and the operation succeeded, so it returns a CommandResult::Success. If the inner Result is an error, or if the grant itself produced an error, it returns a CommandResult::Failure.

One of the requirements of commands in 2.0 is that each individual command_num have a single failure return type and a single success return size. This means that for a given command_num, it is not allowed to sometimes return CommandResult::Success and other times return a success with a value (e.g., CommandResult::success_u32), as these are different sizes. As part of easing this transition, Tock 2.0 removed the SuccessWithValue variant of ReturnCode, and then later in the transition removed ReturnCode entirely, replacing all uses of ReturnCode with Result<(), ErrorCode>.

If, while porting, you encounter a construction of ReturnCode::SuccessWithValue{v} in command() for an out-of-tree capsule, replace it with a construction of CommandResult::success_u32(v), and make sure that it is impossible for that command_num to return CommandResult::Success in any other scenario.
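A sketch of that transformation, using an illustrative count field and command number 2:

// Tock 1.x:
2 => ReturnCode::SuccessWithValue { value: self.count.get() },

// Tock 2.0: this command_num must now always return success-with-u32.
2 => CommandResult::success_u32(self.count.get() as u32),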

ReturnCode versus ErrorCode

Because the new system call ABI explicitly distinguishes failures and successes, it replaces ReturnCode with ErrorCode to denote which error in failure cases. ErrorCode is simply ReturnCode without any success cases, and with names that remove the leading E since it's obvious they are an error: ErrorCode::FAIL is the equivalent of ReturnCode::EFAIL. ReturnCode is still used in the kernel, but may be deprecated in time.
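As a quick sketch of the mapping (illustrative function names):

// Tock 1.x style: success and failure share the ReturnCode enum.
fn do_work_v1() -> ReturnCode {
    ReturnCode::EFAIL
}

// Tock 2.0 style: failures carry an ErrorCode inside a Result.
fn do_work_v2() -> Result<(), ErrorCode> {
    Err(ErrorCode::FAIL)
}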

Examples of allow_readwrite and allow_readonly

Because ReadWriteProcessBuffer and ReadOnlyProcessBuffer represent access to userspace memory, the kernel tightly constrains how these objects are constructed and passed. They do not implement Copy or Clone, so only one instance of these objects exists in the kernel at any time.

Note that console has one ReadOnlyProcessBuffer for printing/putnstr and one ReadWriteProcessBuffer for reading/getnstr. Here is a sample implementation of allow_readonly for the console capsule:

#![allow(unused)]
fn main() {
pub struct App {
    write_buffer: ReadOnlyProcessBuffer,
...
	fn allow_readonly(
        &self,
        process_id: ProcessId,
        allow_num: usize,
        mut buffer: ReadOnlyProcessBuffer,
    ) -> Result<ReadOnlyProcessBuffer, (ReadOnlyProcessBuffer, ErrorCode)> {
        let res = match allow_num {
            1 => self
                .apps
                .enter(process_id, |app, _| {
                    mem::swap(&mut app.write_buffer, &mut buffer);
                })
                .map_err(ErrorCode::from),
            _ => Err(ErrorCode::NOSUPPORT),
        };

        if let Err(e) = res {
            Err((buffer, e))
        } else {
            Ok(buffer)
        }
    }
}

The implementation is quite simple: if there is a valid grant region, the method swaps the passed ReadOnlyProcessBuffer with the one stored in the App region, so the buffer that was previously in the App region ends up in buffer. It then returns buffer, which is therefore either the buffer that was just passed in (if the call failed) or the swapped-out one (if it succeeded).

The new subscription mechanism

Tock 2.0 introduces a guarantee for the subscribe syscall that for every unique subscription (i.e. (driver_num, subscribe_num) tuple), userspace will be returned the previously subscribed upcall (or null if this is the first subscription). This guarantee means that once an upcall is returned, the kernel will never schedule the upcall again (unless it is re-subscribed in the future), and userspace can deallocate the upcall function if it so chooses.

Providing this guarantee necessitates changes to the capsule interface for declaring and using upcalls. To declare upcalls, a capsule now provides the number of upcalls as a templated value on Grant.

#![allow(unused)]
fn main() {
struct capsule {
    ...
    apps: Grant<T, NUM_UPCALLS>,
    ...
}
}

The second parameter tells the kernel how many upcalls to save. Capsules can no longer store an object of type Upcall in their grant region.

To ensure that the kernel can store the upcalls, a capsule must implement the allocate_grant() method. The typical implementation looks like:

#![allow(unused)]
fn main() {
fn allocate_grant(&self, processid: ProcessId) -> Result<(), kernel::procs::Error> {
   self.apps.enter(processid, |_, _| {})
}
}

Finally to schedule an upcall any calls to app.upcall.schedule() should be replaced with code like:

#![allow(unused)]
fn main() {
self.apps.enter(processid, |app, upcalls| {
    upcalls.schedule_upcall(upcall_number, (r0, r1, r2));
});
}

The parameter upcall_number matches the subscribe_num the process used with the subscribe syscall.

Using ReadOnlyProcessBuffer and ReadWriteProcessBuffer: console

One key change in the Tock 2.0 API is explicitly acknowledging that application slices may disappear at any time. For example, if a process passes a slice into the kernel, it can later swap it out with another allow call. Similarly, application grants may disappear at any time.

This means that ReadWriteProcessBuffer and ReadOnlyProcessBuffer now do not allow you to obtain their pointers and lengths. Instead, they provide a map_or method. This is how console uses this, for example, to copy process data into its write buffer and call the underlying transmit_buffer:

#![allow(unused)]
fn main() {
fn send(&self, process_id: ProcessId, app: &mut App) {
    if self.tx_in_progress.is_none() {
        self.tx_in_progress.set(process_id);
        self.tx_buffer.take().map(|buffer| {
            let len = app.write_buffer.map_or(0, |data| data.len());
            if app.write_remaining > len {
                // A slice has changed under us and is now smaller than
                // what we need to write -- just write what we can.
                app.write_remaining = len;
            }
            let transaction_len = app.write_buffer.map_or(0, |data| {
                for (i, c) in data[data.len() - app.write_remaining..data.len()]
                    .iter()
                    .enumerate()
                {
                    if buffer.len() <= i {
                        return i;
                    }
                    buffer[i] = *c;
                }
                app.write_remaining
            });

            app.write_remaining -= transaction_len;
            let (_err, _opt) = self.uart.transmit_buffer(buffer, transaction_len);
        });
    } else {
        app.pending_write = true;
    }
}
}

Note that the implementation looks at the length of the slice: it doesn't copy it out into grant state. If a slice was suddenly truncated, it checks and adjusts the amount it has written.

Using ReadOnlyProcessBuffer and ReadWriteProcessBuffer: spi_controller

This is a second example, taken from spi_controller. Because SPI transfers are bidirectional, there is an RX buffer and a TX buffer. However, a client can ignore what it receives, and only pass a TX buffer if it wants: the RX buffer can be zero length. As with other bus transfers, the SPI driver needs to handle the case when its buffers change in length under it. For example, a client may make the following calls:

  1. allow_readwrite(rx_buf, 200)
  2. allow_readonly(tx_buf, 200)
  3. command(SPI_TRANSFER, 200)
  4. (after some time, while transfer is ongoing) allow_readonly(tx_buf2, 100)

Because the underlying SPI transfer typically uses DMA, the buffer passed to the peripheral driver is static. The spi_controller has fixed-size static buffers. It performs a transfer by copying application slice data into/from these buffers. A very long application transfer may be broken into multiple low-level transfers.

If a transfer is smaller than the static buffer, it is simple: spi_controller copies the application slice data into its static transmit buffer and starts the transfer. If the process rescinds the buffer, it doesn't matter, as the capsule has the data. Similarly, the presence of a receive application slice only matters when the transfer completes, and the capsule decides whether to copy what it received out.

The principal complexity is when the buffers change during a low-level transfer and then the capsule needs to figure out whether to continue with a subsequent low-level transfer or finish the operation. The code needs to be careful to not access past the end of a slice and cause a kernel panic.

The code looks like this:

#![allow(unused)]
fn main() {
// Assumes checks for busy/etc. already done
// Updates app.index to be index + length of op
fn do_next_read_write(&self, app: &mut App) {
    let write_len = self.kernel_write.map_or(0, |kwbuf| {
        let mut start = app.index;
        let tmp_len = app.app_write.map_or(0, |src| {
            let len = cmp::min(app.len - start, self.kernel_len.get());
            let end = cmp::min(start + len, src.len());
            start = cmp::min(start, end);

            for (i, c) in src.as_ref()[start..end].iter().enumerate() {
                kwbuf[i] = *c;
            }
            end - start
        });
        app.index = start + tmp_len;
        tmp_len
    });
    self.spi_master.read_write_bytes(
        self.kernel_write.take().unwrap(),
        self.kernel_read.take(),
        write_len,
    );
}
}

The capsule keeps track of its current write position with app.index. This points to the first byte of untransmitted data. When a transfer starts in response to a system call, the capsule checks that the requested length of the transfer is not longer than the length of the transmit buffer, and also that the receive buffer is either zero or at least as long. The total length of a transfer is stored in app.len.

But if the transmit buffer is swapped during a transfer, it may be shorter than app.index. In the above code, the variable len stores the desired length of the low-level transfer: it's the minimum of data remaining in the transfer and the size of the low-level static buffer. The variable end stores the index of the last byte that can be safely transmitted: it is the minimum of the low-level transfer end (start + len) and the length of the application slice (src.len()). Note that end can be smaller than start if the application slice is shorter than the current write position. To handle this case, start is set to be the minimum of start and end: the transfer will be of length zero.

VSCode Debugging

This guide describes how to perform remote debugging via JTAG in Tock using VSCode. As of February 2018, the nRF51-DK and nRF52-DK are supported.

Requirements

  1. VSCode
  2. VSCode Native Debug Extension
  3. VSCode Rust Extension

Installation

  1. Install VSCode for your platform
  2. Open VSCode
  3. Enter the extensions menu by pressing View/Extensions
  4. Install Native Debug and Rust in that view by searching for them

You are now ready to run the debugger, and the debugging configurations are already set up for you. But if you want to change the configuration, for example to run some special GDB commands before starting, you can do that here.

Enabling breakpoints

Let's now test if this works by configuring some breakpoints:

  1. Enter Explorer mode by pressing View/Explorer

  2. Browse and open a file where you want to enable a breakpoint

  3. In my case I want to have a breakpoint at main in main.rs

  4. Click to the left of the line number to enable a breakpoint. You should now see a red dot, as in the figure below:

    Enable breakpoint VSCode

Running the debugger

  1. You need to start the GDB Server before launching a debugging session in VSCode (check out the instructions for how to do that for your board).

  2. Enter Debug mode in VSCode by pressing View/Debug. You should now see a debug view somewhere on your screen as in the figure below:

    VSCode Debug mode

  3. Choose your board in the scroll bar and then click on the green arrow or Debug/Start Debugging.

  4. You should now see that the program has stopped at the breakpoint, as in the figure below:

    Running

  5. Finally, if you want to use specific GDB commands, you can use the debug console in VSCode, which is very useful.

Issues

  1. Sometimes GDB behaves unpredictably and stops at the wrong source line. For example, we have noticed that the debugger sometimes stops at /kernel/src/support/arm.rs instead of main. If that occurs, just press step over and it should jump to the correct location.

  2. Rust in release mode applies optimizations such as inlining and name mangling, which make debugging harder and can hide variable values. To debug more reliably, mark the important functions with the following attributes (see the example after this list):

    #[no_mangle]
    #[inline(never)]
    
  3. Enable a Rust pretty-printer or something similar, because variable viewing is otherwise very limited in VSCode.
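
As an example of the attributes from issue 2 above (the function name and body here are hypothetical):

// Keep this function visible to the debugger by disabling inlining and name
// mangling.
#[no_mangle]
#[inline(never)]
fn compute_checksum(data: &[u8]) -> u32 {
    data.iter().map(|b| *b as u32).sum()
}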

Kernel Documentation

This portion of the Tock Book describes details of the design and structure of the Tock kernel.

For API-level documentation, view the rustdocs.

Tock Overview

Tock is a secure, embedded operating system for Cortex-M and RISC-V microcontrollers. Tock assumes the hardware includes a memory protection unit (MPU), as systems without an MPU cannot simultaneously support untrusted processes and retain Tock's safety and security properties. The Tock kernel and its extensions (called capsules) are written in Rust.

Tock can run multiple, independent untrusted processes written in any language. The number of processes Tock can simultaneously support is constrained by MCU flash and RAM. Tock can be configured to use different scheduling algorithms, but the default Tock scheduler is preemptive and uses a round-robin policy. Tock uses a microkernel architecture: complex drivers and services are often implemented as untrusted processes, which other processes, such as applications, can invoke through inter-process communication (IPC).

This document gives an overview of Tock's architecture, the different classes of code in Tock, the protection mechanisms it uses, and how this structure is reflected in the software's directory structure.

Tock Architecture

Tock architecture

The above Figure shows Tock's architecture. Code falls into one of three categories: the core kernel, capsules, and processes.

The core kernel and capsules are both written in Rust. Rust is a type-safe systems language; other documents discuss the language and its implications to kernel design in greater detail, but the key idea is that Rust code can't use memory differently than intended (e.g., overflow buffers, forge pointers, or have pointers to dead stack frames). Because these restrictions prevent many things that an OS kernel has to do (such as access a peripheral that exists at a memory address specified in a datasheet), the very small core kernel is allowed to break them by using "unsafe" Rust code. Capsules, however, cannot use unsafe features. This means that the core kernel code is very small and carefully written, while new capsules added to the kernel are safe code and so do not have to be trusted.

Processes can be written in any language. The kernel protects itself and other processes from bad process code by using a hardware memory protection unit (MPU). If a process tries to access memory it's not allowed to, this triggers an exception. The kernel handles this exception and kills the process.

The kernel provides four major system calls:

  • command: makes a call from the process into the kernel
  • subscribe: registers a callback in the process for an upcall from the kernel
  • allow: gives kernel access to memory in the process
  • yield: suspends process until after a callback is invoked

Every system call except yield is non-blocking. Commands that might take a long time (such as sending a message over a UART) return immediately and issue a callback when they complete. The yield system call blocks the process until a callback is invoked; userland code typically implements blocking functions by invoking a command and then using yield to wait until the callback completes.

The command, subscribe, and allow system calls all take a driver ID as their first parameter. This indicates which driver in the kernel that system call is intended for. Drivers are capsules that implement the system call.
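
As a rough illustration, the sketch below shows a driver dispatching on a command number. The names and return type are hypothetical, not the kernel's actual system call driver trait, which has a richer signature.

// Hypothetical sketch of a driver dispatching on a command number; this is
// not the kernel's real system call driver trait.
enum CommandReturn {
    Success,
    SuccessU32(u32),
    NotSupported,
}

struct LedDriver {
    num_leds: u32,
}

impl LedDriver {
    fn command(&self, command_num: usize, arg: u32) -> CommandReturn {
        match command_num {
            // By convention, command 0 checks that the driver exists; here it
            // also reports how many LEDs the driver manages.
            0 => CommandReturn::SuccessU32(self.num_leds),
            // Command 1 (hypothetical): turn on the LED identified by `arg`.
            1 if arg < self.num_leds => CommandReturn::Success,
            _ => CommandReturn::NotSupported,
        }
    }
}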

Tock Design

Most operating systems provide isolation between components using a process-like abstraction: each component is given its own slice of the system memory (for its stack, heap, data) that is not accessible by other components. Processes are great because they provide a convenient abstraction for both isolation and concurrency. However, on resource-limited systems, like microcontrollers with much less than 1MB of memory, this approach leads to a trade-off between isolation granularity and resource consumption.

Tock's architecture resolves this trade-off by using a language sandbox to isolate components and a cooperative scheduling model for concurrency in the kernel. As a result, isolation is (more or less) free in terms of resource consumption, at the expense of preemptive scheduling (so a malicious component could block the system by, e.g., spinning in an infinite loop).

To first order, all components in Tock, including those in the kernel, are mutually distrustful. Inside the kernel, Tock achieves this with a language-based isolation abstraction called capsules that incurs no memory or computation overhead. In user space, Tock uses a (more or less) traditional process model where processes are isolated from the kernel and each other using hardware protection mechanisms.

In addition, Tock is designed with other embedded-systems-specific goals in mind. Tock favors overall system reliability and discourages (and, when possible, prevents) buggy components from blocking system progress.

Architecture

Tock architecture

Tock includes three architectural components: a small trusted kernel, written in Rust, which implements a hardware abstraction layer (HAL); scheduler; and platform-specific configuration. Other system components are implemented in one of two protection mechanisms: capsules, which are compiled with the kernel and use Rust’s type and module systems for safety, and processes, which use the MPU for protection at runtime.

System components (an application, driver, virtualization layer, etc.) can be implemented in either a capsule or process, but each mechanism trades off concurrency and safety with memory consumption, performance, and granularity.

Category                  Capsule        Process
Protection                Language       Hardware
Memory Overhead           None           Separate stack
Protection Granularity    Fine           Coarse
Concurrency               Cooperative    Preemptive
Update at Runtime         No             Yes

As a result, each is more appropriate for implementing different components. In general, drivers and virtualization layers are implemented as capsules, while applications and complex drivers using existing code/libraries, such as networking stacks, are implemented as processes.

Capsules

A capsule is a Rust struct and associated functions. Capsules interact with each other directly, accessing exposed fields and calling functions in other capsules. Trusted platform configuration code initializes them, giving them access to any other capsules or kernel resources they need. Capsules can protect internal state by not exporting certain functions or fields.

Capsules run inside the kernel in privileged hardware mode, but Rust’s type and module systems protect the core kernel from buggy or malicious capsules. Because type and memory safety are enforced at compile-time, there is no overhead associated with safety, and capsules require minimal error checking. For example, a capsule never has to check the validity of a reference. If the reference exists, it points to valid memory of the right type. This allows extremely fine-grained isolation since there is virtually no overhead to splitting up components.

Rust’s language protection offers strong safety guarantees. Unless a capsule is able to subvert the Rust type system, it can only access resources explicitly granted to it, and only in ways permitted by the interfaces those resources expose. However, because capsules are cooperatively scheduled in the same single-threaded event loop as the kernel, they must be trusted for system liveness. If a capsule panics, or does not yield back to the event handler, the system can only recover by restarting.

Processes

Processes are independent applications that are isolated from the kernel and run with reduced privileges in separate execution threads from the kernel. The kernel schedules processes preemptively, so processes have stronger system liveness guarantees than capsules. Moreover, the kernel uses hardware protection to enforce process isolation at runtime. This allows processes to be written in any language and to be safely loaded at runtime.

Memory Layout

Processes are isolated from each other, the kernel, and the underlying hardware explicitly by the hardware Memory Protection Unit (MPU). The MPU limits which memory addresses a process can access. Accesses outside of a process's permitted region result in a fault and trap to the kernel.

Code, stored in flash, is made accessible with a read-only memory protection region. Each process is allocated a contiguous region of RAM. One novel aspect of a process is the presence of a "grant" region at the top of the address space. This is memory allocated to the process covered by a memory protection region that the process can neither read nor write. The grant region, discussed below, is needed for the kernel to be able to borrow memory from a process in order to ensure liveness and safety in response to system calls.

Grants

Capsules are not allowed to allocate memory dynamically since dynamic allocation in the kernel makes it hard to predict if memory will be exhausted. A single capsule with poor memory management could cause the rest of the kernel to fail. Moreover, since it uses a single stack, the kernel cannot easily recover from capsule failures.

However, capsules often need to dynamically allocate memory in response to process requests. For example, a virtual timer driver must allocate a structure to hold metadata for each new timer any process creates. Therefore, Tock allows capsules to dynamically allocate from the memory of a process making a request.

It is unsafe, though, for a capsule to directly hold a reference to process memory. Processes crash and can be dynamically loaded, so, without explicit checks throughout the kernel code, it would not be possible to ensure that a reference to process memory is still valid.

For a capsule to safely allocate memory from a process, the kernel must enforce three properties:

  1. Allocated memory does not allow capsules to break the type system.

  2. Capsules can only access pointers to process memory while the process is alive.

  3. The kernel must be able to reclaim memory from a terminated process.

Tock provides a safe memory allocation mechanism that meets these three requirements through memory grants. Capsules can allocate data of arbitrary type from the memory of processes that interact with them. This memory is allocated from the grant segment.

Just as with buffers passed through allow, references to granted memory are wrapped in a type-safe struct that ensures the process is still alive before dereferencing. Unlike shared buffers, which a capsule can only treat as a buffer type, granted memory can be defined as any type. Therefore, processes cannot access this memory, since doing so might violate type-safety.
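
To make the grant pattern concrete, the following is a toy, std-based sketch (the kernel's actual Grant type has a different API and allocates from the process's grant region rather than a heap-backed map): per-process state is reachable only through a closure, so the capsule never holds a raw reference into process memory.

use std::cell::RefCell;
use std::collections::HashMap;

// Toy stand-in for the kernel's grant mechanism, keyed by a process id.
struct Grant<T> {
    per_process: RefCell<HashMap<usize, T>>,
}

impl<T: Default> Grant<T> {
    fn new() -> Self {
        Grant { per_process: RefCell::new(HashMap::new()) }
    }

    // In the real kernel, entering a grant fails if the process has died, and
    // allocation comes from that process's grant region.
    fn enter<R>(&self, process_id: usize, f: impl FnOnce(&mut T) -> R) -> R {
        let mut map = self.per_process.borrow_mut();
        let state = map.entry(process_id).or_default();
        f(state)
    }
}

#[derive(Default)]
struct TimerState {
    expiration: u32,
    armed: bool,
}

fn main() {
    let timers: Grant<TimerState> = Grant::new();
    // A virtual timer capsule servicing a request from process 3 records the
    // pending alarm inside that process's grant, not in kernel-owned memory.
    timers.enter(3, |t| {
        t.expiration = 1000;
        t.armed = true;
    });
}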

In-Kernel Design Principles

To help meet Tock's goals, encourage portability across hardware, and ensure a sustainable operating system, several design principles have emerged over time for the Tock kernel. These are general principles that new contributions to the kernel should try to uphold. However, these principles have been informed by Tock's development, and will likely continue to evolve as Tock and the Rust ecosystem evolve.

Role of HILs

Generally, the Tock kernel is structured into three layers:

  1. Chip-specific drivers: these typically live in a crate in the chips subdirectory, or an equivalent crate in a different repository (e.g. the Titan port is out of tree, but its h1b crate is the equivalent here). These drivers have implementations that are specific to the hardware of a particular microcontroller. Ideally, their implementation is fairly simple, and they merely adhere to a common interface (a HIL). That's not always the case, but that's the ideal.

  2. Chip-agnostic, portable, peripheral drivers and subsystems. These typically live in the capsules crate. These include things like virtual alarms and virtual I2C stack, as well as drivers for hardware peripherals not on the chip itself (e.g. sensors, radios, etc). These drivers typically rely on the chip-specific drivers through the HILs.

  3. System call drivers, also typically found in the capsules crate. These are the drivers that implement a particular part of the system call interfaces, and are often even more abstracted from the hardware than (2) - for example, the temperature sensor system call driver can use any temperature sensor, including several implemented as portable peripheral drivers.

    The system call interface is another point of standardization that can be implemented in various ways. So it is perfectly reasonable to have several implementations of the same system call interface that use completely different hardware stacks, and therefore HILs and chip-specific drivers (e.g. a console driver that operates over USB might just be implemented as a different system call driver that implements the same system calls, rather than trying to fit USB into the UART HIL).

Because of their importance, the interfaces between these layers are a key part of Tock's design and implementation. These interfaces are called Tock's Hardware Interface Layer, or HIL. A HIL is a portable collection of Rust traits that can be implemented in either a portable or a non-portable way. An example of a non-portable implementation of a HIL is an Alarm that is implemented in terms of the counter and compare registers of a specific chip, while an example of a portable implementation is a virtualization layer that multiplexes multiple Alarms on top of a single underlying Alarm.

A HIL consists of one or more Rust traits that are intended to be used together. In some cases, implementations may only implement a subset of a HIL's traits. For example the analog-to-digital (ADC) conversion HIL may have traits both for single and streams of samples. A particular implementation may only support single samples and so not implement the streaming traits.
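
As a hedged sketch of that structure (the trait and method names below are illustrative, not the kernel's actual ADC HIL), a HIL might split basic and advanced functionality across several traits:

// Illustrative only: a HIL expressed as a small family of related traits.
pub enum ErrorCode {
    Busy,
    NoSupport,
}

// Basic single-sample interface that every implementation provides.
pub trait AdcSingle {
    fn sample(&self, channel: u8) -> Result<(), ErrorCode>;
}

// Split-phase completion callback for clients of the ADC.
pub trait AdcClient {
    fn sample_ready(&self, sample: u16);
}

// Implementations with suitable hardware can also provide streaming; other
// implementations simply do not implement this trait.
pub trait AdcHighSpeed: AdcSingle {
    fn sample_continuous(&self, channel: u8, frequency_hz: u32) -> Result<(), ErrorCode>;
}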

The choice of particular HIL interfaces is pretty important, and we have some general principles we follow:

  1. HIL implementations should be fairly general. If we have an interface that doesn't work very well across different hardware, we probably have the wrong interface - it's either too high level, or too low level, or it's just not flexible enough. But HILs shouldn't generally be designed to optimize for particular applications or hardware, and definitely not for a particular combination of applications and hardware. If there are cases where that is really truly necessary, a driver can be very chip or board specific and circumvent the HILs entirely.

    Sometimes there are useful interfaces that some chips can provide natively, while other chips lack the necessary hardware support, but the functionality could be emulated in some way. In these cases, Tock sometimes uses "advanced" traits in HILs that enable a chip to expose its more sophisticated features while not requiring that all implementors of the HIL have to implement the function. For example, the UART HIL includes a ReceiveAdvanced trait that includes a special function receive_automatic() which receives bytes on the UART until a pause between bytes is detected. This is supported directly by the SAM4L hardware, but can also be emulated using timers and GPIO interrupts. By including this in an advanced trait, capsules can still use the interface but other UART implementations that do not have that required feature do not have to implement it.

  2. A HIL implementation may assume it is the only way the device will be used. As a result, Tock tries to avoid having more than one HIL for a particular service or abstraction, because it will not, in general, be possible for the kernel to support simultaneously using different HILs for the same device. For example, suppose there were two different HILs for a UART with slightly different APIs. The chip-specific implementation of each one will need to read and write hardware registers and handle interrupts, so they cannot exist simultaneously. By allowing a HIL to assume it is the only way the device will be used, Tock allows HILs to precisely define their semantics without having to worry about potential future conflicts or use cases.

Split-phase Operation

While processes are time sliced and preemptive in Tock, the kernel is not. Everything is run-to-completion. That is an important design choice because it allows the kernel to avoid allocating lots of stacks for lots of tasks, and it makes it possible to reason more simply about static and other shared variables.

Therefore, all I/O operations in the Tock kernel are asynchronous and non-blocking. A method call starts an operation and returns immediately. When the operation completes, the struct implementing the operation calls a callback. Tock uses callbacks rather than closures because closures typically require dynamic memory allocation, which the kernel avoids and does not generally support.

This design does add complexity when writing drivers as a blocking API is generally simpler to use. However, this is a conscious choice to favor overall safety of the kernel (e.g. avoiding running out of memory or preventing other code from running on time) over functional correctness of individual drivers (because they might be more error-prone, not because they cannot be written correctly).
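
The following is a minimal, self-contained sketch of the split-phase pattern. The types are hypothetical, and the "hardware" completes instantly just to keep the example runnable; a real driver would start the operation, return, and invoke the callback later from interrupt handling.

use core::cell::Cell;

// Completion callback trait implemented by the client.
trait TransmitClient {
    fn transmit_done(&self, bytes_sent: usize);
}

// A toy driver: start_transmit() returns immediately and the client's
// transmit_done() is called when the operation finishes.
struct Uart<'a> {
    client: Cell<Option<&'a dyn TransmitClient>>,
}

impl<'a> Uart<'a> {
    fn set_client(&self, client: &'a dyn TransmitClient) {
        self.client.set(Some(client));
    }

    fn start_transmit(&self, buf: &[u8]) {
        // A real driver would program the hardware here and return; the
        // completion would arrive later via an interrupt.
        if let Some(client) = self.client.get() {
            client.transmit_done(buf.len());
        }
    }
}

struct Console {
    sent: Cell<usize>,
}

impl TransmitClient for Console {
    fn transmit_done(&self, bytes_sent: usize) {
        self.sent.set(self.sent.get() + bytes_sent);
    }
}

fn main() {
    let console = Console { sent: Cell::new(0) };
    let uart = Uart { client: Cell::new(None) };
    uart.set_client(&console);
    uart.start_transmit(b"hello");
    assert_eq!(console.sent.get(), 5);
}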

There are limited cases when the kernel can briefly block. For example, the SAM4L's GPIO controller can take up to 5 cycles to become ready between operations. Technically, a completely asynchronous driver would make this split-phase: the operation returns immediately, and issues a callback when it completes. However, because just setting up the callback will take more than 5 cycles, spinning for 5 cycles is not only simpler, it's also cheaper. The implementation therefore spins for a handful of cycles before returning, such that the operation is synchronous. These cases are rare, though: the operation has to be so fast that it's not worth allowing other code to run during the delay.

External Dependencies

Tock generally prohibits any external crates within the Tock kernel to avoid including external unsafe code. However, in certain situations Tock does allow external dependencies. This is decided on a case by case basis. For more details on this see External Dependencies.

Tock uses some external libraries by vendoring them within the libraries folder. This puts the library's source in the same repository, while keeping the library as a clearly separate crate. This adds a maintenance requirement and complicates updates, so this is also used on a limited basis.

Using unsafe and Capabilities

Tock attempts to minimize the amount of unsafe code in the kernel. Of course, there are a number of operations that the kernel must do which fundamentally violate Rust's memory safety guarantees, and we try to compartmentalize these operations and explain how to use them in an ultimately safe manner.

For operations that violate Rust safety, Tock marks the functions, structs, and traits as unsafe. This restricts the crates that can use these elements. Generally, Tock tries to make it clear where an unsafe operation is occurring by requiring the unsafe keyword be present. For example, with memory-mapped input/output (MMIO) registers, casting an arbitrary pointer to a struct that represents those registers violates memory safety unless the register map and address are verified to be correct. To denote this, doing the cast is clearly marked as unsafe. However, once the cast is complete, accessing those registers no longer violates memory safety. Therefore, using the registers does not require the unsafe keyword.

Not all potentially dangerous code violates Rust's safety model, however. For example, stopping a process from running on the board does not violate language-level safety, but it is still a potentially problematic operation from a security and system reliability standpoint, as not all kernel code should be able to halt arbitrary processes (in particular, untrusted capsules should not have access to this API). One way to restrict access to these types of functions would be to re-use the unsafe mechanism, since code that is prohibited from using unsafe cannot invoke an unsafe function. However, this muddles the use of unsafe, and makes it difficult to understand whether code potentially violates safety or merely calls a restricted API.

Instead, Tock uses capabilities to restrict access to important APIs. As such, any public APIs inside the kernel that should be very restricted in what other code can use them should require a specific capability in their function signatures. This prevents code that has not explicitly been granted the capability from calling the protected API.

To promote the principle of least privilege, capabilities are relatively fine-grained and provide narrow access to specific APIs. This means that generally new APIs will require defining new capabilities.

Ease of Use and Understanding

Whenever possible, Tock's design optimizes to lower the barrier for new users or developers to understand and use Tock. Sometimes, this means intentionally making a design choice that prioritizes readability or clarity over performance.

As an example, Tock generally avoids using Rust's features and the #[cfg()] attribute to enable conditional compilation. While using a set of features can lead to optimizing exactly what code is included when the kernel is built, it also makes it very difficult for users unfamiliar with the features to decide which features to enable and when. Likely, these users will use the default configuration, reducing the benefit of having the features available. Also, conditional compilation makes it very difficult to understand exactly what version of the kernel is running on any particular board, as the features can substantially change what code is running. Finally, the non-default options are unlikely to be tested as robustly as the default configuration, leading to kernel configurations that may no longer build or work correctly.

Tock also tries to ensure Tock "just works" for users. This manifests in trying to minimize the number of steps to get Tock running. The build system uses make, which is familiar to many developers, and just running make in a board folder will compile the kernel for that platform. The most supported boards (Hail and imix) can then be programmed by just running make install. Installing an app just requires one more command: tockloader install blink. Tockloader will continue to expand to support ease of use with Tock. Today, "just works" is a design goal that Tock does not completely meet, but future design decisions should continue to push Tock toward it.

Demonstrated Features

Tock discourages adding functionality to the kernel unless a clear use case has been established. For example, adding a red-black tree implementation to kernel/src/common might be useful in the future for some new Tock feature. However, that would be unlikely to be merged without a use case inside of the kernel that motivates needing a red-black tree. This general principle provides a starting point for evaluating new features in pull requests.

Requiring a use case also makes the code more likely to be tested and used, as well as updated as other internal kernel APIs change.

Merge Aggressively, Archive Unabashedly

As an experimental embedded operating system with roots in academic research, Tock is likely to receive contributions of new, risky, experimental, or narrowly focused code that may or may not be useful for the long-term growth of Tock. Rather than use a "holding" or "contribution" repository for new, experimental code, Tock tries to merge new features into mainline Tock. This both eases the maintenance burden of the code (it doesn't have to be maintained out-of-tree) and makes the feature more visible.

However, not all features catch on, or are completed, or prove useful, and having the code in mainline Tock becomes an overall maintenance burden. In these cases, Tock will move the code to an archive repository.

Soundness and Unsafe Issues

An operating system necessarily must use unsafe code. This document explains the rationale behind some of the key mechanisms in Tock that do use unsafe code but should still preserve safety in the overall OS.

static_init!

The "type" of static_init! is basically:

T => (fn() -> T) -> &'static mut T

Meaning that given a function that returns something of type T, static_init! returns a mutable reference to T with static lifetime.

This is effectively meant to be equivalent to declaring a mutable static variable:

static mut MY_VAR: SomeT = SomeT::const_constructor();

Then creating a reference to it:

let my_ref: &'static mut SomeT = &mut MY_VAR;

However, the rvalue in static declarations must be const (because Rust doesn't have pre-initialization sections). So static_init! basically allows static variables that have non-const initializers.

Note that in both of these cases, the caller must wrap the calls in unsafe, since referencing a mutable static variable is unsafe (due to aliasing rules).

Use

static_init! is used in Tock to initialize capsules, which will eventually reference each other. In all cases, these references are immutable. It is important for these to be statically allocated for two reasons. First, it helps surface memory pressure issues at link time (if they were allocated on the stack, they wouldn't trivially show up as out-of-memory link errors if the stack isn't sized properly). Second, the lifetimes of mutually-dependent capsules need to be equal, and 'static is a convenient way of achieving this.

However, in a few cases, it is useful to start with a mutable reference in order to enforce who can make certain calls. For example, setting up buffers in the SPI driver is, for practical reasons, deferred until after construction, but we would like to enforce that it can only be done by the platform initialization function (before the kernel loop starts). This is enforced because all references after the platform is set up are immutable, and the config_buffers method takes an &mut self. (Note: it looks like this is not strictly necessary, so it may not be a big deal if we can't do this.)

Soundness

The thing that would make the use of static_init! unsafe is if it was used to create aliases to mutable references. The fact that it returns an &'static mut is a red flag, so it bears explanation why I think this is OK.

Just as with any &mut, as soon as it is reborrowed it can no longer be used. What we do in Tock, specifically, is use it mutably in some cases immediately after calling static_init!, then reborrow it immutably to pass into capsules. If a particular capsule happened to accept a &mut, the compiler would try to move the reference and it would either fail that call (if it's already reborrowed immutably elsewhere) or disallow further reborrows. Note that this is fine if it is indeed not used as a shared reference (although I don't think we have examples of that use).

It is important, though, that the same code calling static_init! is not executed twice. This creates two major issues. First, it could technically result in multiple mutable references. Second, it would run the constructor twice, which may create other soundness or functional issues with existing references to the same memory. I believe this is no different from code that takes a mutable reference to a static variable. To prohibit this, static_init! internally uses an Option-like structure to mark when the static buffer has been initialized, and causes a panic! if the same buffer is re-initialized (i.e. the same static_init! is called twice). With this check, we can mark static_init! as safe.

Alternatives

It seems technically possible to return an immutable static reference from static_init! instead. It would require a bit of code changes, and wouldn't allow us to restrict certain capsule methods to initialization, but may not be a particularly big deal.

Another alternative would be to use static variables of type Option everywhere (clunky, but perhaps reasonable).

Capabilities: Restricting Access to Certain Functions and Operations

Certain operations and functions, particularly those in the kernel crate, are not "unsafe" from a language perspective, but are unsafe from an isolation and system operation perspective. For example, restarting a process, conceptually, does not violate type or memory safety (even though the specific implementation in Tock does), but it would violate overall system safety if any code in the kernel could restart any arbitrary process. Therefore, Tock must be careful with how it provides a function like restart_process(), and, in particular, must not allow capsules, which are untrusted code that must be sandboxed by Rust, to have access to the restart_process() function.

Luckily, Rust provides a primitive for doing this restriction: use of the unsafe keyword. Any function marked as unsafe can only be called from a different unsafe function or from an unsafe block. Therefore, by removing the ability to define an unsafe block, using the #![forbid(unsafe_code)] attribute in a crate, all modules in that crate cannot call any functions marked with unsafe. In the case of Tock, the capsules crate is marked with this attribute, and therefore all capsules cannot use unsafe functions. While this approach is effective, it is very coarse-grained: it provides either access to all unsafe functions or none. To provide more nuanced control, Tock includes a mechanism called Capabilities.

Capabilities are essentially zero-memory objects that are required to call certain functions. Abstractly, restricted functions, like restart_process(), would require that the caller has a certain capability:

fn restart_process(process_id: usize, capability: ProcessRestartCapability) {}

Any attempt to call that function without possessing that capability would result in code that does not compile. To prevent unauthorized uses of capabilities, capabilities can only be created by trusted code. In Tock, this is implemented by defining capabilities as unsafe traits, which can only be implemented for an object by code capable of calling unsafe. Therefore, code in the untrusted capsules crate cannot generate a capability on its own, and instead must be passed the capability by a module in a different crate.

Capabilities can be defined for very broad purposes or very narrowly, and code can "request" multiple capabilities. Multiple capabilities in Tock can be passed by implementing multiple capability traits for a single object.
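
A minimal sketch of this pattern follows, using hypothetical names rather than the kernel's actual capability traits. Because the trait is unsafe to implement, only code that is allowed to use unsafe can mint the capability value that the restricted function demands.

// Illustrative capability pattern; the names are hypothetical.
pub unsafe trait ProcessRestartCapability {}

// The restricted API requires a capability reference in its signature, so a
// caller must have been handed one by trusted code.
pub fn restart_all_processes(_cap: &dyn ProcessRestartCapability) {
    // ... restart logic elided ...
}

// Board setup code (which may use unsafe) creates the capability and passes
// it only to the components that should have this power.
pub struct BoardRestartCapability;
unsafe impl ProcessRestartCapability for BoardRestartCapability {}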

Capability Examples

  1. One example of how capabilities are useful in Tock is with loading processes. Loading processes is left as a responsibility of the board, since a board may choose to handle its processes in a certain way, or not support userland processes at all. However, the kernel crate provides a helpful function called load_processes() that provides the Tock standard method for finding and loading processes. This function is defined in the kernel crate so that all Tock boards can share it, which necessitates that the function be made public. This has the effect that all modules with access to the kernel crate can call load_processes(), even though calling it twice would lead to unwanted behavior. One approach is to mark the function as unsafe, so only trusted code can call it. This is effective, but not explicit, and conflates language-level safety with system operation-level safety. By instead requiring that the caller of load_processes() has a certain capability, the expectations of the caller are more explicit, and the unsafe function does not have to be repurposed.

  2. A similar example is a function like restart_all_processes() which causes all processes on the board to enter a fault state and restart from their original _start point with all grants removed. Again, this is a function that could violate the system-level goals, but could be very useful in certain situations or for debugging grant cleanup when apps fail. Unlike load_processes(), however, it might make sense for a capsule to be able to call restart_all_processes(), in response to a certain event or to act as a watchdog. In that case, restricting access by marking it as unsafe will not work: capsules cannot call unsafe code. By using capabilities, only a caller with the correct capability can call restart_all_processes(), and individual boards can be very explicit about which capsules they grant which capabilities.

Lifetimes

Values in the Tock kernel can be allocated in three ways:

  1. Static allocation. Statically allocated values are never deallocated. These values are represented as Rust "borrows" with a 'static lifetime.

  2. Stack allocation. Stack allocated values have a lexically bound lifetime. That is, we know by looking at the source code when they will be deallocated. When you create a reference to such a value, the Rust type system ensures that reference is never used after the value is deallocated by assigning a "lifetime" to the reference.

  3. Grant values. Values allocated from a process's grant region have a runtime-dependent lifetime. For example, when they are deallocated depends on whether the process crashes. Since we can't represent runtime-dependent lifetimes in Rust's type-system, references to grant values in Tock are done through the Grant type, which is owned by its referrer.

Next we'll discuss how Rust's notion of lifetimes maps to the lifetimes of values in Tock and how this affects the use of different types of values in the kernel.

Rust lifetimes

Each reference (called a borrow) in Rust has a lifetime associated with its type that determines in what scope it is valid. The lifetime of a reference must be more constrained than the value it was borrowed from. The compiler, in turn, ensures that references cannot escape their valid scope.

As a result, data structures that store a reference must declare the minimal lifetime of that reference. For example:

struct Foo<'a> {
  bar: &'a Bar
}

defines a data structure Foo that contains a reference to another type, Bar. The reference has a lifetime 'a, which is a type parameter of Foo. Note that 'a is an arbitrary choice of name for the lifetime, just as E is in a generic List<E>. It is also possible to use the explicit lifetime 'static rather than a type parameter when the reference should always live forever, regardless of how long the containing type (e.g. Foo) lives:

struct Foo {
  bar: &'static Bar
}

Buffer management

Buffers used in asynchronous hardware operations must be static. On the one hand, we need to guarantee (to the hardware) that the buffer will not be deallocated before the hardware relinquishes its pointer. On the other hand, the hardware has no way of telling us (i.e. the Rust compiler) that it will only access the buffer within a certain lexical bound (because we are using the hardware asynchronously). To resolve this, buffers passed to hardware should be allocated statically.
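
The following is a minimal sketch of that hand-off, assuming std is available just for the example (the kernel instead allocates such buffers statically): the driver owns an Option holding the 'static buffer and gives it up for the duration of the hardware operation.

// Toy illustration of passing a 'static buffer to an asynchronous operation.
struct SpiDriver {
    tx_buffer: Option<&'static mut [u8]>,
}

impl SpiDriver {
    fn start_transfer(&mut self) {
        // Hand the buffer off; the driver holds None until the completion
        // callback returns it, so the buffer cannot be reused meanwhile.
        if let Some(buf) = self.tx_buffer.take() {
            fake_dma_start(buf);
        }
    }

    fn transfer_complete(&mut self, buf: &'static mut [u8]) {
        // The hardware is done; take ownership of the buffer back.
        self.tx_buffer = Some(buf);
    }
}

fn fake_dma_start(_buf: &'static mut [u8]) {
    // A real implementation would program DMA registers here.
}

fn main() {
    // Box::leak stands in for static allocation in this std-based example.
    let buf: &'static mut [u8] = Box::leak(Box::new([0u8; 64]));
    let mut driver = SpiDriver { tx_buffer: Some(buf) };
    driver.start_transfer();
}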

Circular dependencies

Tock uses circular dependencies to give capsules access to each other. Specifically, two capsules that depend on each other will each have a field containing a reference to the other. For example, a client of the timer Alarm trait needs a reference to an instance of the timer in order to start/stop it, while the instance of timer needs a reference to the client in order to propagate events. This is handled by the set_client function, which allows the platform definition to connect objects after creation.

impl<'a> Foo<'a> {
  // `self.client` is assumed to be a Cell-like field holding the reference.
  fn set_client(&self, client: &'a Client) {
    self.client.set(client);
  }
}

Tock Threat Model

Overview

Tock provides hardware-based isolation between processes as well as language-based isolation between kernel capsules.

Tock supports a variety of hardware, including boards defined in the Tock repository and boards defined "out of tree" in a separate repository. Additionally, Tock's installation model may vary between different use cases even when those use cases are based on the same hardware. As a result of Tock's flexibility, the mechanisms it uses to provide isolation — and the strength of that isolation — vary from deployment to deployment.

This threat model describes the isolation provided by Tock as well as the trust model that Tock uses to implement that isolation. Users of Tock, which include board integrators and application developers, should use this threat model to understand what isolation Tock provides to them (and what isolation it may not provide). Tock developers should use this threat model as a guide for how to provide Tock's isolation guarantees.

Definitions

These definitions are shared between the documents in this directory.

A process is a runtime instantiation of an application binary. When an application binary "restarts", its process is terminated and a new process is started using the same binary. Note that the kernel is not considered a process, although it is a thread of execution.

Process data includes a process' binary in non-volatile storage, its memory footprint in RAM, and any data that conceptually belongs to the process that is held by the kernel or other processes. For example, if a process is reading from a UART then the data in the UART buffer is considered the process' data, even when it is stored in a location in RAM only readable by the kernel.

Kernel data includes the kernel's image in non-volatile storage as well as data in RAM that does not conceptually belong to processes. For example, the scheduler's data structures are kernel data.

Capsule data is data that is associated with a particular kernel capsule. This data can be either kernel data or process data, depending on its conceptual owner. For example, an ADC driver's configuration is kernel data, while samples an ADC driver takes on behalf of a process are process data.

Tock's users refers to entities that make use of Tock OS. In the context of threat modelling, this typically refers to board integrators (entities that combine Tock components into an OS to run on a specific piece of hardware) and application developers (who consume Tock's APIs and rely on the OS' guarantees).

Isolation Provided to Processes

Confidentiality: A process' data may not be accessed by other processes or by capsules, unless explicitly permitted by the process. Note that Tock does not generally provide defense against side channel attacks; see the Side Channel Defense heading below for more details. Additionally, Virtualization describes some limitations on isolation for shared resources.

Integrity: Process data may not be modified by other processes or by capsules, except when allowed by the process.

Availability: Processes may not deny service to each other at runtime. As an exception to this rule, some finite resources may be allocated on a first-come-first-served basis. This exception is described in detail in Virtualization.

Isolation Provided to Kernel Code

Confidentiality: Kernel data may not be accessed by processes, except where explicitly permitted by the owning component. Kernel data may not be accessed by capsules, except where explicitly permitted by the owning component. The limitations about side channel defense and Virtualization that apply to process data also apply to kernel data.

Integrity: Processes and capsules may not modify kernel data except through APIs intentionally exposed by the owning code.

Availability: Processes cannot starve the kernel of resources or otherwise perform denial-of-service attacks against the kernel. This does not extend to capsule code; capsule code may deny service to trusted kernel code. As described in Virtualization, kernel APIs should be designed to prevent starvation.

Isolation that Tock does NOT Provide

There are practical limits to the isolation that Tock can provide; this section describes some of those limits.

Side Channel Defense

In general, Tock's users should assume that Tock does NOT provide side channel mitigations except where Tock's documentation indicates side channel mitigations exist.

Tock's answer to "should code X mitigate side channel Y" is generally "no". Many side channels that Tock can mitigate in theory are too expensive for Tock to mitigate in practice. As a result, Tock does not mitigate side channels by default. However, specific Tock components may provide and document their own side channel mitigation. For instance, Tock may provide a cryptography API that implements constant-time operations, and may document the side channel defense in the cryptography API's documentation.

In deciding whether to mitigate a side channel, Tock developers should consider both the cost of mitigating the side channel as well as the value provided by mitigating that side channel. For example:

  1. Tock does not hide a process' CPU usage from other processes. Hiding CPU utilization generally requires making significant performance tradeoffs, and CPU utilization is not a particularly sensitive signal.

  2. Although Tock protects a process' data from unauthorized access, Tock does not hide the size of a process' data regions. Without virtual memory hardware, it is very difficult to hide a process' size, and that size is not particularly sensitive.

  3. It is often practical to build constant-time cryptographic API implementations, and protecting the secrecy of plaintext is valuable. As such, it may make sense for a Tock board to expose a cryptographic API with some side channel defenses.

Guaranteed Launching of Binaries

Tock does not guarantee that binaries it finds are launched as processes. For example, if there is not enough RAM available to launch every binary then the kernel will skip some binaries.

This parallels the "first-come, first-served" resource reservation process described in Virtualization.

Components Trusted to Provide Isolation

The Tock kernel depends on several components (including hardware and software) in order to implement the above isolation guarantees. Some of these components, such as the application loader, may vary depending on Tock's use case. The following documents describe the trust model that exists between the Tock kernel and its security-relevant dependencies:

  • Capsule Isolation describes the coding practices used to isolate capsules from the remainder of the kernel.

  • Application Loader describes the trust placed in the application deployment mechanism.

  • TBF Headers describes the trust model associated with the Tock Binary Format headers.

  • Code Review describes code review practices used to ensure the trustworthiness of Tock's codebase.

What is an "Application"?

Formally, a Tock application is the set of all processes that have a particular application ID (as detailed in the AppID TRD).

Every process has an application ID (which may be global or locally unique), so every process is part of an application.

Application IDs are generally used as a way to grant access to something. For example, a process that wants to send a message to another process will generally do so by sending the message to that process' application ID. Doing so grants that application access to the message. The IPC system is responsible for identifying which process has that application ID (if any) and giving the message to that process.

In the context of storage, it often makes sense for a process to share data with its own application ID. That allows future processes belonging to the same application (e.g. after a system reboot) to access that data.

Capsule Isolation

Isolation Mechanism

Capsules are limited to what they can access within Rust's type system without using unsafe. That isolation is implemented by banning unsafe from use in capsule code and by banning the use of unaudited libraries (except those that ship with Rust's toolchain) in kernel code. This isolation is vulnerable to code that exploits compiler bugs or bugs in unsafe code in toolchain libraries. When a board integrator chooses to use a capsule, they are responsible for auditing the code of the capsule to confirm the policies are followed and to detect potentially malicious behavior. The use of Rust's type system as a security isolation mechanism relies in part on Rust's resistance to underhanded programming techniques (stealthy obfuscation), and is a weaker form of isolation than the hardware-backed isolation used to isolate the kernel (and other processes) from processes.

Capsules are scheduled cooperatively with the rest of the kernel, and as such they can deny service to the rest of the system.

Impact on Kernel API Design

Kernel APIs should be designed to limit the data that capsules have access to. Trusted kernel code should use capabilities as necessary in its API to limit the access that capsule code has. For example, an API that allows its clients to access data that is not owned by either the API or caller should require a "trusted" capability.

Virtualization

Tock components that share resources between multiple clients (which may be kernel components, processes, or a mix of both) are responsible for providing confidentiality and availability guarantees to those clients.

Data Sharing (Confidentiality)

In general, kernel components with multiple clients should not share data between their clients. Furthermore, data from a client should not end up in a capsule the client is unaware of.

When a capsule with multiple clients is given a buffer by one of those clients, it must do one of the following:

  1. Avoid sharing the buffer with any other kernel code. Return the buffer to the same client.

  2. Only share the buffer downwards, to lower-level components. For example, a capsule providing virtualized access to a piece of hardware may pass the buffer to the driver for that hardware.

  3. Wipe the buffer before sharing it with another client.

Kernel components with multiple clients that retrieve data on behalf of those clients must implement isolation commensurate with their functionality. When possible, components reading from shared buses should mux data transferred over those buses. For example:

  1. A UDP API can provide a mechanism for clients (processes and/or capsules) to gain exclusive access to a port. The UDP API should then prevent clients from reading messages sent to other clients or impersonating other clients.

  2. A UART API with multiple clients should implement a protocol that allows the UART API to determine which client a received packet belongs to and route it accordingly (in other words, it should implement some form of muxing).

  3. Analog-to-Digital Converter (ADC) hardware does not have a concept of which process "owns" data, nor is there a way to implement such a concept. As such, an ADC API that allows clients to take samples upon request does not need to take separate samples for different clients. An ADC API that receives simultaneous requests to sample the same source may take a single reading and distribute it to multiple clients.

Fairness (Availability)

Tock components do not need to guarantee fairness between clients. For example, a UART virtualization layer may allow capsules/processes using large buffers to see higher throughputs than capsules/processes using small buffers. However, components should prevent starvation when the semantics of the operation allow it. For the UART example, this means using round-robin scheduling rather than preferring lower-numbered clients.

When it is not possible to prevent starvation — such as shared resources that may be locked for indefinite amounts of time — then components have two options:

  1. Allow resource reservations on a first-come, first-served basis. This is essentially equivalent to allowing clients to take out unreturnable locks on the resources.

  2. Restrict access to the API using a kernel capability (only possible for internal kernel APIs).

An example of an API that would allow first-come-first-served reservations is crypto hardware with a finite number of non-sharable registers. In this case, different processes can use different registers, but if the registers are overcommitted then later/slower processes will be unable to reserve resources.

An example of an API that would be protected via a kernel capability is indefinite continuous ADC sampling that blocks other ADC requests. In this case, first-come-first-served reservations do not make sense because only one client can be supported anyway.

Application Loader

What is an Application Loader?

The term "application loader" refers to the mechanism used to add Tock applications to a Tock system. It can take several forms; here are a few examples:

  1. Tockloader is an application loader that runs on a host system. It uses various host-to-board interfaces (e.g. JTAG, UART bootloader, etc) to manipulate application binaries on the Tock system's nonvolatile storage.

  2. Some build systems combine the kernel and apps at build time into a single, monolithic image. This monolithic image is then deployed using a programming tool.

  3. A kernel-assisted installer may be a Tock capsule that receives application binaries over USB and writes them into flash.

Why Must We Trust It?

The application loader has the ability to read and modify application binaries. As a result, the application loader must be trusted to provide confidentiality and sometimes integrity guarantees to applications. For example, the application loader must not modify or exfiltrate applications other than the application(s) it was asked to operate on.

Tock kernels that require all application binaries to be signed do not need to trust the application loader for application integrity, as that is done by validating the signature instead. Tock kernels that do not require signed application binaries must trust the application loader to not maliciously modify applications.

To protect the kernel's confidentiality, integrity, and availability, the application loader must not modify, erase, or exfiltrate kernel data. On most boards, the application loader must simply be trusted not to do so. However, Tock boards may use other mechanisms to protect the kernel without trusting the application loader. For example, a board with access-control hardware between its flash storage and the application loader may use that hardware to protect the kernel's data without trusting the application loader.

Tock Binary Format (TBF) Total Size Verification Requirement

The application loader is required to confirm that the TBF header's total_size field is correct for the specified format version (as specified in the Tock Binary Format) before deploying an application binary. This is to prevent the newly-deployed application from executing the following attacks:

  1. Specifying a too-large total_size that includes the binary of one or more subsequent applications, allowing the malicious application to read those binaries (impacting confidentiality).

  2. Specifying a too-small total_size and making the kernel parse the end of its image as the subsequent application binary's TBF headers (impacting integrity).

Trusted Compute Base in the Application Loader

The application loader may be broken into multiple pieces, only some of which need to be trusted. The resulting threat model depends on the form the application loader takes. For example:

  1. Tockloader has the access it needs to directly delete, corrupt, and exfiltrate the kernel. As a result, Tockloader must be trusted for Tock's confidentiality, integrity, and availability guarantees.

  2. A build system that combines apps into a single image must be trusted to correctly compile and merge the apps and kernel. The build system must be trusted to provide confidentiality, integrity, and availability guarantees. The firmware deployment mechanism must be trusted for confidentiality and availability guarantees. If the resulting image is signed (and the signature verified by a bootloader), then the firmware deployment mechanism need not be trusted for integrity. If there is no signature verification in the bootloader then the firmware deployment mechanism must be trusted for integrity as well.

  3. An application loader that performs the nonvolatile storage write from within Tock's kernel may make its confidentiality, integrity, and availability guarantees in the Tock kernel. Such a loader would need to perform the total_size field verification within the kernel. In that case, the kernel code is the only code that needs to be trusted, even if there are other components to the application loader (such as a host binary that transmits the application over USB).

TBF Headers

TBF is the Tock Binary Format. It is the format of application binaries in a Tock system's flash storage.

TBF headers are considered part of an application, and are mostly untrusted. As such, TBF header parsing must be robust against malicious inputs (e.g. pointers must be checked to confirm they are in-bounds for the binary).

However, because the kernel relies on the TBF's total_size field to load the binaries, the application loader is responsible for verifying the total_size field at install time. The kernel trusts the total_size field for confidentiality and integrity.

When possible, TLV types should be designed so that the kernel does not need to trust their correctness. When a TLV type is defined that the kernel must trust, then the threat model must be updated to indicate that application loaders are responsible for verifying the value of that TLV type.

Code Review

Kernel Code Review

Changes to the Tock OS kernel (in the kernel/ directory of the repository) are reviewed by the Tock core working group. However, not all ports of Tock (which include chip crates, board crates, and hardware-specific capsules) are maintained by the Tock core working group.

The Tock repository must document which working group (if any) is responsible for each hardware-specific crate or capsule.

Third-Party Dependencies

Tock OS repositories permit third party dependencies for critical components that are impractical to author directly. Each repository containing embedded code (including tock, libtock-c, and libtock-rs) must have a written policy documenting:

  1. All unaudited required dependencies. For example, Tock depends on Rust's libcore, and does not audit libcore's source.

  2. How to avoid pulling in unaudited optional dependencies.

A dependency may be audited by vendoring it into the repository and putting it through code review. This policy does not currently apply to host-side tools, such as elf2tab and tockloader, but may be extended in the future.

Implementation

Documentation related to the implementation of Tock.

How does Tock compile?

There are two types of compilation artifacts in Tock: the kernel and user-level processes (i.e. apps). Each type compiles differently. In addition, each platform has a different way of programming the kernel and processes. Below is an explanation of both kernel and process compilation as well as some examples of how platforms program each onto an actual board.

Compiling the kernel

The kernel is divided into five Rust crates (i.e. packages):

  • A core kernel crate containing key kernel operations such as handling interrupts and scheduling processes, shared kernel libraries such as SubSlice, and the Hardware Interface Layer (HIL) definitions. This is located in the kernel/ folder.

  • An architecture (e.g. ARM Cortex M4) crate that implements context switching, and provides memory protection and systick drivers. This is located in the arch/ folder.

  • A chip-specific (e.g. Atmel SAM4L) crate which handles interrupts and implements the hardware abstraction layer for a chip's peripherals. This is located in the chips/ folder.

  • One (or more) crates for hardware independent drivers and virtualization layers. This is the capsules/ folder in Tock. External projects using Tock may create additional crates for their own drivers.

  • A platform-specific (e.g. Imix) crate that configures the chip and its peripherals, assigns peripherals to drivers, sets up virtualization layers, and defines a system call interface. This is located in boards/.

These crates are compiled using Cargo, Rust's package manager, with the platform crate as the base of the dependency graph. In practice, the use of Cargo is masked by the Makefile system in Tock. Users can simply type make from the proper directory in boards/ to build the kernel for that platform.

Internally, the Makefile is simply invoking Cargo to handle the build. For example, make on the imix platform roughly translates to:

$ cargo build --release --target=thumbv7em-none-eabi

The --release argument tells Cargo to invoke the Rust compiler with optimizations turned on. --target points Cargo to the target specification which includes the LLVM data-layout definition and architecture definitions for the compiler. Note, Tock uses additional compiler and linker flags to generate correct and optimized kernel binaries for our supported embedded targets.

Life of a Tock compilation

When Cargo begins compiling the platform crate, it first resolves all dependencies recursively. It chooses package versions that satisfy the requirements across the dependency graph. Dependencies are defined in each crate's Cargo.toml file and refer to paths in the local file-system, a remote git repository, or a package published on crates.io.

Second, Cargo compiles each crate in turn as dependencies are satisfied. Each crate is compiled as an rlib (an ar archive containing object files) and combined into an executable ELF file by the compilation of the platform crate.

You can see each command executed by cargo by passing it the --verbose argument. In our build system, you can run make V=1 to see the verbose commands.

Platform Build Scripts

Cargo supports build scripts when compiling crates, and Tock provides the boards/build.rs build script. In Tock, these build scripts are primarily used to instruct cargo to rebuild the kernel if a linker script changes.

Cargo's build.rs scripts are small Rust programs that must be compiled as part of the kernel build process. Since these scripts execute on the host machine, this means building Tock requires a Rust toolchain valid for the host machine and its architecture. Cargo runs the compiled build script when compiling the platform crate.
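
As an illustration, such a build script might contain little more than the following sketch (the linker script name and the extra link argument here are only examples):

fn main() {
    // Ask Cargo to re-run this script, and therefore rebuild the crate,
    // whenever the board's linker script changes.
    println!("cargo:rerun-if-changed=layout.ld");
    // Pass the linker script to the linker.
    println!("cargo:rustc-link-arg=-Tlayout.ld");
}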

LLVM Binutils

Tock uses the lld, objcopy, and size tools included with the Rust toolchain to produce kernel binaries that are executed on microcontrollers. This has two main ramifications:

  1. The tools are not entirely feature-compatible with the GNU versions. While they are very similar, there are edge cases where they do not behave exactly the same. This will likely improve with time, but it is worth noting in case unexpected issues arise.
  2. The tools will automatically update with Rust versions. The tools are provided in the llvm-tools rustup component that is compiled for and ships with every version of the Rust toolchain. Therefore, if Rust updates the version they use in the Rust repository, Tock will also see those updates.

Special .apps section

Tock kernels include a .apps section in the kernel .elf file that is at the same physical address where applications will be loaded. When compiling the kernel, this is just a placeholder and is not populated with any meaningful data. It exists to make it easy to update the kernel .elf file with an application binary to make a monolithic .elf file so that the kernel and apps can be flashed together.

When the Tock build system creates the kernel binary, it explicitly removes this section so that the placeholder is not included in the kernel binary.

To use the special .apps section, objcopy can replace the placeholder with an actual app binary. The general command looks like:

$ arm-none-eabi-objcopy --update-section .apps=libtock-c/examples/c_hello/build/cortex-m4/cortex-m4.tbf target/thumbv7em-none-eabi/release/stm32f412gdiscovery.elf target/thumbv7em-none-eabi/release/stm32f4discovery-app.elf

This replaces the placeholder section .apps with the "c_hello" application TBF in the stm32f412gdiscovery.elf kernel ELF, and creates a new .elf called stm32f4discovery-app.elf.

Compiling a process

Unlike many other embedded systems, compilation of application code is entirely separated from the kernel in Tock. An application uses a libtock library and is built into a free-standing binary. The binary can then be uploaded onto a Tock platform with an already existing kernel to be loaded and run.

Tock can support applications using any programming language and compiler provided the applications can run with only access to fixed regions in flash and RAM and without virtual memory.

Each Tock process requires a header that informs the kernel of the size of the application's binary and where the entry point is located within the compiled binary.

Executing without Virtual Memory

Tock supports resource constrained microcontrollers which do not support virtual memory. This means Tock processes cannot assume a known address space. Tock supports two methods for running processes despite the lack of virtual memory: embedded PIC (FDPIC) and fixed address loading.

Position Independent Code

Since Tock loads applications separately from the kernel and is capable of running multiple applications concurrently, applications cannot know in advance at which address they will be loaded. This problem is common to many computer systems and is typically addressed by dynamically linking and loading code at runtime.

Tock, however, makes a different choice and requires applications to be compiled as position independent code. Compiling with FDPIC makes all control flow relative to the current PC, rather than using jumps to specified absolute addresses. All data accesses are relative to the start of the data segment for that app, and the address of the data segment is stored in a register referred to as the base register. This allows the segments in Flash and RAM to be placed anywhere, and the OS only has to correctly initialize the base register.

FDPIC code can be inefficient on some architectures such as x86, but the ARM instruction set is optimized for FDPIC operation and allows most code to execute with little to no overhead. Using FDPIC still requires some fixup at runtime, but the relocations are simple and cause only a one-time cost when an application is loaded. A more in-depth discussion of dynamically loading applications can be found on the Tock website: Dynamic Code Loading on a MCU.

For applications compiled with arm-none-eabi-gcc, building FDPIC code for Tock requires four flags:

  • -fPIC: only emit code that uses relative addresses.
  • -msingle-pic-base: force the use of a consistent base register for the data sections.
  • -mpic-register=r9: use register r9 as the base register.
  • -mno-pic-data-is-text-relative: do not assume that the data segment is placed at a constant offset from the text segment.

Each Tock application uses a linker script that places Flash at address 0x80000000 and SRAM at address 0x00000000. This allows relocations pointing at Flash to be easily differentiated from relocations pointing at RAM.

Fixed Address Loading

Unfortunately, not all compilers support FDPIC. As of August 2023, neither LLVM nor riscv-gcc supports FDPIC. This complicates running Tock processes, but Tock supports an alternative method using fixed addresses. With this method, processes are compiled for fixed addresses in both flash and RAM (as typical embedded compilation would do); processes are then placed in flash so that they match their fixed flash address, and the kernel assigns each process a RAM region that matches its fixed RAM address. While this simplifies compilation, ensuring that those addresses are properly met involves several components.

Fixed Address TBF Header

The first step is that the linker must communicate which addresses it expects the process to be placed at, in both flash and RAM, at execution time. It does this with two symbols in the .elf file:

  • _flash_origin: The address in flash the app was compiled for.
  • _sram_origin: The address in RAM the app was compiled for.

These symbols are then parsed by elf2tab. elf2tab uses _flash_origin to ensure the .tbf file is properly created so that the compiled binary will end up at the correct address. Both _flash_origin and _sram_origin are used to create a FixedAddresses TBF TLV that is included in the TBF header. An example of the Fixed Addresses TLV:

TLV: Fixed Addresses (5)                        [0x40 ]
  fixed_address_ram   :  536920064   0x2000c000
  fixed_address_flash :  268599424   0x10028080

With the Fixed Addresses TLV included in the TBF header, the kernel and other tools now understand the address requirements that must be met for this process.

By convention, userspace apps compiled for fixed flash and RAM addresses include the addresses in the .tbf filenames. For example, the leds example compiled as a libtock-rs app might have a TAB that looks like:

[STATUS ] Inspecting TABs...
TAB: leds
  build-date: 2023-08-08 22:24:07+00:00
  minimum-tock-kernel-version: 2.1
  tab-version: 1
  included architectures: cortex-m0, cortex-m4, riscv32imc
  tbfs:
   cortex-m0.0x10020000.0x20004000
   cortex-m0.0x10028000.0x2000c000
   cortex-m4.0x00030000.0x20008000
   cortex-m4.0x00038000.0x20010000
   cortex-m4.0x00040000.0x10002000
   cortex-m4.0x00040000.0x20008000
   cortex-m4.0x00042000.0x2000a000
   cortex-m4.0x00048000.0x1000a000
   cortex-m4.0x00048000.0x20010000
   cortex-m4.0x00080000.0x20006000
   cortex-m4.0x00088000.0x2000e000
   riscv32imc.0x403b0000.0x3fca2000
   riscv32imc.0x40440000.0x3fcaa000

Loading Fixed Address Processes into Flash

When installing fixed address processes on a board, the loading tool must ensure that it places the TBF at the correct address in flash so that the process binary executes at the address the linker intended. Tockloader supports installing apps on boards and placing them at their fixed address locations. Tockloader will try to find an ordering of the available TBFs that installs all of the requested apps at valid fixed addresses.

With the process loaded at its fixed flash address, it is essential that the RAM address the process expects can also be met. However, the valid RAM addresses for a process are determined by the memory the kernel has reserved for processes. Typically, this memory region is sized dynamically based on memory the kernel is not using. The loader tool needs to know what memory is available for processes so it can choose the compiled TBF that expects a RAM address the kernel will actually be able to satisfy.

For the loader tool to learn what RAM addresses are available for processes the kernel includes a TLV kernel attributes structure in flash immediately before the start of apps. Tockloader can read these attributes to determine the valid RAM range for processes so it can choose suitable TBFs when installing apps.

Booting Fixed Address Processes

The final step is for the kernel to initialize and execute processes. The processes are already stored in flash, but the kernel must allocate a RAM region that meets the process's fixed RAM requirements. The kernel will leave gaps in RAM between processes to ensure processes have the RAM addresses they expected during compilation.

Tock Binary Format

In order to be loaded correctly, applications must follow the Tock Binary Format: the initial bytes of a Tock app must contain a valid TBF header so that Tock can load the application correctly.

In practice, this is automatically handled for applications. As part of the compilation process, a tool called elf2tab ("ELF to TAB") does the conversion from ELF to Tock's expected binary format, ensuring that sections are placed in the expected order, adding a section that lists necessary load-time relocations, and creating the TBF header.

Tock Application Bundle

To support ease-of-use and distributable applications, Tock applications are compiled for multiple architectures and bundled together into a "Tock Application Bundle" or .tab file. This creates a standalone file for an application that can be flashed onto any board that supports Tock, and removes the need for the board to be specified when the application is compiled. The TAB has enough information to be flashed on many or all Tock compatible boards, and the correct binary is chosen when the application is flashed and not when it is compiled.

TAB Format

.tab files are tarred archives of TBF-compatible binaries along with a metadata.toml file that includes some extra information about the application. A simplified example command that creates a .tab file is:

tar cf app.tab cortex-m0.bin cortex-m4.bin metadata.toml

Metadata

The metadata.toml file in the .tab file is a TOML file that contains a series of key-value pairs, one per line, that provides more detailed information and can help when flashing the application. Existing fields:

tab-version = 1                         # TAB file format version
name = "<package name>"                 # Package name of the application
only-for-boards = <list of boards>      # Optional list of board kernels that this application supports
build-date = 2017-03-20T19:37:11Z       # When the application was compiled

Loading the kernel and processes onto a board

There is no particular limitation on how code can be loaded onto a board. JTAG and various bootloaders are all equally possible. For example, the hail and imix platforms primarily use the serial "tock-bootloader", and the other platforms use jlink or openocd to flash code over a JTAG connection. In general, these methods are subject to change based on whatever is easiest for users of the platform.

In order to support multiple concurrent applications, the easiest option is to use tockloader (git repo) to manage multiple applications on a platform. Importantly, while applications currently share the same upload process as the kernel, support for additional loading methods is planned. Wireless application loading, in particular, is targeted for future editions of Tock.

Kernel Configuration

Because Tock is meant to run on various platforms (spanning multiple architectures and various available peripherals), and with multiple use cases in mind (for example, "production" vs. debug build with various levels of debugging detail), Tock provides various configuration options so that each build can be adapted to each use case. In general, there are three variants of kernel configuration that Tock supports:

  1. Per-board customization of kernel components. For example, choosing the scheduling algorithm the kernel uses. The policies guide goes into more depth on this configuration variant.
  2. Crate-level composition of kernel code. Building a functional kernel consists of using several crates, and choosing specific crates can configure the kernel for a specific board or use case.
  3. Compile-time configuration to conditionally compile certain kernel features.

Tock attempts to support these configuration variants while avoiding undue confusion as to what exact code is being included in any particular kernel compilation. Specifically, Tock tries to avoid the pitfalls of "ifdef" conditional code (where it can be tricky to reason about which code is actually being included and to test all configurations suitably).

Crate-Level Configuration

Each level of abstraction (e.g. core kernel, CPU architecture, chip, board) has its own crate. Configuring a board is then done by including the relevant crates for the particular chip.

For example, many microcontrollers have a family of related chips. The specific version of an MCU a board uses often subtly changes which peripherals are available. A board makes these configurations by carefully choosing which crates to include as dependencies. Consider a board which uses the nRF52840 MCU, a member of the nRF52 family. Its board-level dependency tree might look like:


                 ┌────────────────┐
                 │                │
                 │ Board Crate    │
                 │                │
                 └─────┬─────────┬┘
                       │         └───────┬───────────────┐
            ┌──► ┌─────┴────────┐     ┌──┴───────┐ ┌─────┴────┐
            │    │ nRF52840     │     │ Capsules │ │ Kernel   │
            │    └─────┬────────┘     └──────────┘ └──────────┘
            │      ┌───┴──────┐
            │      │ nRF52    │
      Chips │      └───┬──────┘
            │      ┌───┴──────┐
            │      │ nRF5     │
            └──►   └──────────┘

where choosing the specific chip-variant as a dependency configures the code included in the kernel. These dependencies are expressed via normal Cargo crate dependencies.

Compile-Time Configuration Options

To facilitate fine-grained configuration of the kernel (for example to enable syscall tracing), a Config struct is defined in kernel/src/config.rs. The Config struct defines a collection of boolean values which can be imported throughout the kernel crate to configure the behavior of the kernel. As these values are const booleans, the compiler can statically optimize away any code that is not used based on the settings in Config, while still checking syntax and types.

To make it easier to configure the values in Config, the values of these booleans are determined by cargo features. Individual boards can determine which features of the kernel crate are included without users having to manually modify the code in the kernel crate. Because of how feature unification works, all features are off by default, so if the default for a config option should be "on", the corresponding feature must be named such that enabling it turns the option off (e.g. enabling the no_debug_panics feature sets the debug_panics config option to false).

To enable any feature, modify the Cargo.toml in your board crate as follows:

[dependencies]
# Turn off debug_panics, turn on trace_syscalls
kernel = { path = "../../kernel", features = ["no_debug_panics", "trace_syscalls"]}

These features should not be set from any crate other than the top-level board crate. If you prefer not to rely on the features, you can still directly modify the boolean config value in kernel/src/config.rs; this can be easier when rapidly debugging on an upstream board, for example.

To use the configuration within the kernel crate, simply read the values. For example, to use a boolean configuration, just use an if statement.
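
For instance, a minimal sketch inside the kernel crate might look like the following (the trace_syscalls option matches the feature named above; the helper function and the debug! call are illustrative):

use crate::config;

fn maybe_trace_syscall(driver_num: usize, command_num: usize) {
    if config::CONFIG.trace_syscalls {
        // Because CONFIG.trace_syscalls is a const bool, the compiler can
        // remove this whole block when the option is disabled, while still
        // checking its syntax and types.
        crate::debug!("syscall: driver {} command {}", driver_num, command_num);
    }
}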

Kernel Attributes

Kernel attributes are stored in a data structure at the end of the kernel's allocated flash region. These attributes describe properties of the flashed kernel on a particular hardware board. External tools can read these attributes to learn about the kernel installed on the board.

Format

Kernel attributes are stored in a descending TLV (type-length-value) structure. That means they start at the highest address in flash, and are appended in descending flash addresses.

The first four bytes are a sentinel that spells "TOCK" (in ASCII). This sentinel allows external tools to check if kernel attributes are present. Note, "first" in this context means the four bytes with the largest address since this structure is stored at the end of flash.

The next byte is a version byte. This allows for future changes to the structure.

The next three bytes are reserved.

After the header are zero or more TLV structures that hold the kernel attributes.

Header Format

0          1          2          3          4 (bytes)
+----------+----------+----------+----------+
|                            TLVs...        |
+----------+----------+----------+----------+
| Reserved | Reserved | Reserved | Version  |
+----------+----------+----------+----------+
| T (0x54) | O (0x4F) | C (0x43) | K (0x4B) |
+----------+----------+----------+----------+
                                            ^
                        end of flash region─┘

TLV Format

0          1          2          3          4 (bytes)
+----------+----------+----------+----------+
|                           Value...        |
+----------+----------+----------+----------+
| Type                | Length              |
+----------+----------+----------+----------+
  • Type: Indicates which TLV this is. Little endian.
  • Length: The length of the value. Little endian.
  • Value: Length bytes corresponding to the TLV.
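
Putting the header and TLV layouts together, an external tool might check for kernel attributes with logic like this sketch (the function is hypothetical; offsets follow the diagrams above):

fn kernel_attributes_version(kernel_flash: &[u8]) -> Option<u8> {
    let len = kernel_flash.len();
    // The last four bytes of the kernel's flash region spell "TOCK" when
    // kernel attributes are present.
    if len < 8 || kernel_flash[len - 4..] != *b"TOCK" {
        return None;
    }
    // The version byte sits immediately before the sentinel; the three bytes
    // before that are reserved. Any TLVs are appended at descending addresses
    // and would be parsed downward starting from offset len - 8.
    Some(kernel_flash[len - 5])
}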

TLVs

The TLV types used for kernel attributes are unrelated to the TLV types used for the Tock Binary Format. However, to minimize possible confusion, type values for each should not use the same numbers.

App Memory (0x0101)

Specifies the region of memory the kernel will use for applications.

0          1          2          3          4 (bytes)
+----------+----------+----------+----------+
| Start Address                             |
+----------+----------+----------+----------+
| App Memory Length                         |
+----------+----------+----------+----------+
| Type = 0x0101       | Length = 8          |
+----------+----------+----------+----------+
  • Start Address: The address in RAM the kernel will use to start allocating memory for apps. Little endian.
  • App Memory Length: The number of bytes in the region of memory for apps. Little endian.

Kernel Binary (0x0102)

Specifies where the kernel binary is and its size.

0          1          2          3          4 (bytes)
+----------+----------+----------+----------+
| Start Address                             |
+----------+----------+----------+----------+
| Binary Length                             |
+----------+----------+----------+----------+
| Type = 0x0102       | Length = 8          |
+----------+----------+----------+----------+
  • Start Address: The address in flash the kernel binary starts at. Little endian.
  • Binary Length: The number of bytes in the kernel binary. Little endian.

Kernel Attributes Location

Kernel attributes are stored at the end of the kernel's flash region and immediately before the start of flash for TBFs.

Memory Layout

This document describes how the memory in Tock is structured and used for the kernel, applications, and supporting state.

Note: This is a general guide describing the canonical memory layout for Tock. In practice, embedded hardware is fairly varied and individual chips may deviate from this either subtly or substantially.

Tock is intended to run on microcontrollers like the Cortex-M, which have non-volatile flash memory (for code) and RAM (for stack and data) in a single address space. While the Cortex-M architecture specifies a high-level layout of the address space, the exact layout of Tock can differ from board to board. Most boards simply define the beginning and end of flash and SRAM in their layout.ld file and then include the generic Tock memory map.

Flash

The nonvolatile flash memory holds the kernel code and, effectively, a linked list of process code.

Kernel code

The kernel code is split into two major regions. The first is .text, which holds the vector table, program code, initialization routines, and other read-only data. This section is written to the beginning of flash.

The second major region, following the .text region, is the .relocate region. It holds values that need to exist in SRAM, but have non-zero initial values that Tock copies from flash to SRAM as part of its initialization (see Startup docs).

Process code

Processes are placed in flash starting at a known address which can be retrieved in the kernel using the symbol _sapps. Each process starts with a Tock Binary Format (TBF) header and then the actual application binary. Processes are placed continuously in flash, and each process's TBF header includes the entire size of the process in flash. This creates a linked-list structure that the kernel uses to traverse apps. The end of the valid processes are denoted by an invalid TBF header. Typically the flash page after the last valid process is set to all 0x00 or 0xFF.
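
As an illustration, that traversal amounts to something like the following sketch (not the kernel's actual loader code; total_size is assumed to be the little-endian u32 at offset 4 of each TBF header):

fn walk_apps(mut flash: &[u8]) {
    loop {
        // Too little space left for a header terminates the list.
        if flash.len() < 8 {
            break;
        }
        let total_size =
            u32::from_le_bytes([flash[4], flash[5], flash[6], flash[7]]) as usize;
        // An all-0x00 or all-0xFF page yields an invalid size and ends the list.
        if total_size == 0 || total_size > flash.len() {
            break;
        }
        // ... hand the process found at the start of `flash` to the loader ...
        // Then skip over the whole process (header plus binary) to the next one.
        flash = &flash[total_size..];
    }
}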

RAM

The RAM holds the data currently being used by both the kernel and processes.

Kernel RAM

The kernel RAM contains three major regions:

  1. Kernel stack.
  2. Kernel data: initialized memory, copied from flash at boot.
  3. Kernel BSS: uninitialized memory, zeroed at boot.

Process RAM

The process RAM is memory space divided between all running apps.

A process's RAM contains four major regions:

  1. Process stack
  2. Process data
  3. Process heap
  4. Grant

The figure below shows the memory space of one process.

Process' RAM

Hardware Implementations

SAM4L

The SAM4L is a microcontroller used on the Hail and Imix platforms, among others. The structure of its flash and RAM is as follows.

Flash

| Address Range   | Length (bytes) | Content    | Description |
|-----------------|----------------|------------|-------------|
| 0x0-0x3FF       | 1024           | Bootloader | Reserved flash for the bootloader. Likely the vector table. |
| 0x400-0x5FF     | 512            | Flags      | Reserved space for flags. If the bootloader is present, the first 14 bytes are "TOCKBOOTLOADER". |
| 0x600-0x9FF     | 1024           | Attributes | Up to 16 key-value pairs of attributes that describe the board and the software running on it. |
| 0xA00-0xFFFF    | 61.5k          | Bootloader | The software bootloader provides non-JTAG methods of programming the kernel and applications. |
| 0x10000-0x3FFFF | 128k           | Kernel     | Flash space for the kernel. |
| 0x3FFxx-0x3FFFF | variable       | Attributes | Kernel attributes that describe various properties of the kernel. |
| 0x40000-0x7FFFF | 320k           | Apps       | Flash space for applications. |

RAM

| Address Range         | Length (bytes) | Content            | Description |
|-----------------------|----------------|--------------------|-------------|
| 0x20000000-0x2000FFFF | 64k            | Kernel and app RAM | The kernel links with all of the RAM, and then allocates a buffer internally for application use. |

Overview

The following image gives an example of how things are currently laid out in practice. It shows the address space of both flash and RAM with three running applications: crc, ip_sense, and analog_comparator.

Process memory layout

Mutable References, Memory Containers, and Cells

Borrows are a critical part of the Rust language that help provide its safety guarantees. However, when there is no dynamic memory allocation (no heap), as with Tock, event-driven code runs into challenges with Rust's borrow semantics. Often multiple structs need to be able to call (share) a struct based on what events occur. For example, a struct representing a radio interface needs to handle callbacks both from the bus it uses as well as handle calls from higher layers of a networking stack. Both of these callers need to be able to change the state of the radio struct, but Rust's borrow checker does not allow them to both have mutable references to the struct.

To solve this problem, Tock builds on the observation that having two references to a struct that can modify it is safe, as long as no references to memory inside the struct are leaked (there is no interior mutability). Tock uses memory containers, a set of types that allow mutability but not interior mutability, to achieve this goal. The Rust standard library has two memory container types, Cell and RefCell. Tock uses Cell extensively, but also adds five new memory container types, each of which is tailored to a specific use common in kernel code.

Brief Overview of Borrowing in Rust

Ownership and Borrowing are two design features in Rust which prevent race conditions and make it impossible to write code that produces dangling pointers.

Borrowing is the Rust mechanism to allow references to memory. Similar to references in C++ and other languages, borrows make it possible to efficiently pass large structures by passing pointers rather than copying the entire structure. The Rust compiler, however, limits borrows so that they cannot create race conditions, which are caused by concurrent writes or concurrent reads and writes to memory. Rust limits code to either a single mutable (writeable) reference or any number of read-only references.

If a piece of code has a mutable reference to a piece of memory, it's also important that other code does not have any references within that memory. Otherwise, the language is not safe. For example, consider this case of an enum which can be either a pointer or a value:

#![allow(unused)]
fn main() {
enum NumOrPointer {
  Num(u32),
  Pointer(&'static mut u32)
}
}

A Rust enum is like a type-safe C union. Suppose that code has both a mutable reference to a NumOrPointer and a read-only reference to the encapsulated Pointer. If the code with the NumOrPointer reference changes it to be a Num, it can then set the Num to be any value. However, the reference to Pointer can still access the memory as a pointer. As these two representations use the same memory, this means that the reference to Num can create any pointer it wants, breaking Rust's type safety:

#![allow(unused)]
fn main() {
// n.b. illegal example
let external : &mut NumOrPointer;
match external {
  &mut Pointer(ref mut internal) => {
    // This would violate safety and
    // write to memory at 0xdeadbeef
    *external = Num(0xdeadbeef);
    *internal = 12345;
  },
  ...
}
}

As the Tock kernel is single threaded, it doesn't have race conditions and so in some cases it may be safe for there to be multiple references, as long as they do not point inside each other (as in the number/pointer example). But Rust doesn't know this, so its rules still hold. In practice, Rust's rules cause problems in event-driven code.

Issues with Borrowing in Event-Driven code

Event-driven code often requires multiple writeable references to the same object. Consider, for example, an event-driven embedded application that periodically samples a sensor and receives commands over a serial port. At any given time, this application can have two or three event callbacks registered: a timer, sensor data acquisition, and receiving a command. Each callback is registered with a different component in the kernel, and each of these components requires a reference to the object to issue a callback on. That is, the generator of each callback requires its own writeable reference to the application. Rust's rules, however, do not allow multiple mutable references.

Cells in Tock

Tock uses several Cell types for different data types. This table summarizes the various types, and more detail is included below.

| Cell Type    | Best Used For        | Example                                                                                                     | Common Uses |
|--------------|----------------------|-------------------------------------------------------------------------------------------------------------|-------------|
| Cell         | Primitive types      | Cell<bool>, sched/kernel.rs                                                                                   | State variables (holding an enum), true/false flags, integer parameters like length. |
| TakeCell     | Small static buffers | TakeCell<'static, [u8]>, spi.rs                                                                               | Holding static buffers that will receive or send data. |
| MapCell      | Large static buffers | MapCell<App>, spi.rs                                                                                          | Delegating reference to large buffers (e.g. application buffers). |
| OptionalCell | Optional parameters  | client: OptionalCell<&'static hil::nonvolatile_storage::NonvolatileStorageClient>, nonvolatile_to_pages.rs   | Keeping state that can be uninitialized, like a Client before one is set. |
| VolatileCell | Registers            | VolatileCell<u32>                                                                                             | Accessing MMIO registers, used by tock_registers crate. |
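
As a small illustration of the simplest case, a capsule might track a busy flag in a Cell (a minimal sketch with hypothetical struct and method names):

use core::cell::Cell;

struct Driver {
    busy: Cell<bool>,
}

impl Driver {
    fn start_operation(&self) {
        // A shared (&self) reference is enough to update the flag, because
        // Cell allows mutation of Copy types without a mutable reference.
        if self.busy.get() {
            return; // an operation is already in progress
        }
        self.busy.set(true);
        // ... begin the operation ...
    }

    fn operation_done(&self) {
        self.busy.set(false);
    }
}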

The TakeCell abstraction

While the different memory containers each have specialized uses, most of their operations are common across the different types. We therefore explain the basic use of memory containers in the context of TakeCell, and the additional/specialized functionality of each other type in its own section. From tock/libraries/tock-cells/src/take_cell.rs:

A TakeCell is a potential reference to mutable memory. Borrow rules are enforced by forcing clients to either move the memory out of the cell or operate on a borrow within a closure.

A TakeCell can be full or empty: it is like a safe pointer that can be null. If code wants to operate on the data contained in the TakeCell, it must either move the data out of the TakeCell (making it empty), or it must do so within a closure with a map call. Using map passes a block of code for the TakeCell to execute. Using a closure allows code to modify the contents of the TakeCell inline, without any danger of a control path accidentally not replacing the value. However, because it is a closure, a reference to the contents of the TakeCell cannot escape.

TakeCell allows code to modify its contents when it has a normal (non-mutable) reference. This in turn means that if a structure stores its state in TakeCells, then code which has a regular (non-mutable) reference to the structure can change the contents of the TakeCell and therefore modify the structure. Therefore, it is possible for multiple callbacks to have references to the structure and modify its state.

Example use of .take() and .replace()

When TakeCell.take() is called, ownership of a location in memory moves out of the cell. It can then be freely used by whoever took it (as they own it) and then put back with TakeCell.put() or TakeCell.replace().

For example, this piece of code from chips/nrf51/src/clock.rs sets the callback client for a hardware clock:

#![allow(unused)]
fn main() {
pub fn set_client(&self, client: &'static ClockClient) {
    self.client.replace(client);
}
}

If there is a current client, it's replaced with client. If self.client is empty, then it's filled with client.

This piece of code from chips/sam4l/src/dma.rs cancels a current direct memory access (DMA) operation, removing the buffer in the current transaction from the TakeCell with a call to take:

#![allow(unused)]
fn main() {
pub fn abort_transfer(&self) -> Option<&'static mut [u8]> {
    self.registers
        .idr
        .write(Interrupt::TERR::SET + Interrupt::TRC::SET + Interrupt::RCZ::SET);

    // Reset counter
    self.registers.tcr.write(TransferCounter::TCV.val(0));

    self.buffer.take()
}
}

Example use of .map()

Although the contents of a TakeCell can be directly accessed through a combination of take and replace, Tock code typically uses TakeCell.map(), which wraps the provided closure between a TakeCell.take() and TakeCell.replace(). This approach has the advantage that a bug in control flow that doesn't correctly replace won't accidentally leave the TakeCell empty.

Here is a simple use of map, taken from chips/sam4l/src/dma.rs:

#![allow(unused)]
fn main() {
pub fn disable(&self) {
    let registers: &SpiRegisters = unsafe { &*self.registers };

    self.dma_read.map(|read| read.disable());
    self.dma_write.map(|write| write.disable());
    registers.cr.set(0b10);
}
}

Both dma_read and dma_write are of type TakeCell<&'static mut DMAChannel>, that is, a TakeCell for a mutable reference to a DMA channel. By calling map, the function can access the reference and call the disable function. If the TakeCell has no reference (it is empty), then map does nothing.

Here is a more complex example use of map, taken from chips/sam4l/src/spi.rs:

#![allow(unused)]
fn main() {
self.client.map(|cb| {
    txbuf.map(|txbuf| {
        cb.read_write_done(txbuf, rxbuf, len);
    });
});
}

In this example, client is a TakeCell<&'static SpiMasterClient>. The closure passed to map has a single argument, the value which the TakeCell contains. So in this case, cb is the reference to an SpiMasterClient. Note that the closure passed to client.map then itself contains a closure, which uses cb to invoke a callback passing txbuf.

.map() variants

TakeCell.map() provides a convenient method for interacting with a TakeCell's stored contents, but it also hides the case when the TakeCell is empty by simply not executing the closure. To allow for handling the cases when the TakeCell is empty, Rust (and by extension Tock) provides additional functions.

The first is .map_or(). This is useful for returning a value both when the TakeCell is empty and when it has a contained value. For example, rather than:

#![allow(unused)]
fn main() {
let result = if txbuf.is_some() {
    txbuf.map(|txbuf| {
        write_done(txbuf);
    });
    Ok(())
} else {
    Err(ErrorCode::RESERVE)
};
}

.map_or() allows us to do this instead:

#![allow(unused)]
fn main() {
let result = txbuf.map_or(Err(ErrorCode::RESERVE), |txbuf| {
    write_done(txbuf);
    Ok(())
});
}

If the TakeCell is empty, the first argument (the error code) is returned, otherwise the closure is executed and Ok(()) is returned.

Sometimes we may want to execute different code based on whether the TakeCell is empty or not. Again, we could do this:

#![allow(unused)]
fn main() {
if txbuf.is_some() {
    txbuf.map(|txbuf| {
        write_done(txbuf);
    });
} else {
    write_done_failure();
};
}

Instead, however, we can use the .map_or_else() function. This allows us to pass in two closures, one for if the TakeCell is empty, and one for if it has contents:

#![allow(unused)]
fn main() {
txbuf.map_or_else(|| {
    write_done_failure();
}, |txbuf| {
    write_done(txbuf);
});
}

Note, in both the .map_or() and .map_or_else() cases, the first argument corresponds to when the TakeCell is empty.

MapCell

A MapCell is very similar to a TakeCell in its purpose and interface. What differs is the underlying implementation. In a TakeCell, when something take()s the contents of the cell, the memory inside is actually moved. This is a performance problem if the data in a TakeCell is large, but saves both cycles and memory if the data is small (like a pointer or slice) because the internal Option can be optimized in many cases and the code operates on registers as opposed to memory. On the flip side, MapCells introduce some accounting overhead for small types and require a minimum number of cycles to access.

The commit that introduced MapCell includes some performance benchmarks, but exact performance will vary based on the usage scenario. Generally speaking, medium to large sized buffers should prefer MapCells.

OptionalCell

OptionalCell is effectively a wrapper for a Cell that contains an Option, like:

#![allow(unused)]
fn main() {
struct OptionalCell<T> {
  c: Cell<Option<T>>,
}
}

This to an extent mirrors the TakeCell interface, where the Option is hidden from the user. So instead of my_optional_cell.get().map(|| {}), the code can be: my_optional_cell.map(|| {}).

OptionalCell can hold the same values that Cell can, but can also be just None if the value is effectively unset. Using an OptionalCell (like a NumCell) makes the code clearer and hides extra tedious function calls. This is particularly useful when a capsule needs to hold some mutable state (therefore requiring a Cell) but there isn't a meaningful value to use in the new() constructor.
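
For example, a capsule that cannot know its client at construction time might be sketched like this (the Client trait and Driver struct are hypothetical, and the import path may differ between Tock versions):

use kernel::common::cells::OptionalCell;

trait Client {
    fn done(&self);
}

struct Driver<'a> {
    client: OptionalCell<&'a dyn Client>,
}

impl<'a> Driver<'a> {
    fn new() -> Self {
        // No meaningful client exists yet, so start empty rather than
        // inventing a placeholder value.
        Driver { client: OptionalCell::empty() }
    }

    fn set_client(&self, client: &'a dyn Client) {
        self.client.set(client);
    }

    fn operation_complete(&self) {
        // Notify the client if one has been set; otherwise do nothing.
        self.client.map(|client| client.done());
    }
}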

Comparison to TakeCell

TakeCell and OptionalCell are quite similar, but the key differentiator is the Copy bound required for items to use some of the methods defined on OptionalCell, such as map(). The Copy bound enables safe "reentrant" access to the stored value, because multiple accesses will be operating on different copies of the same stored item. The semantic difference is captured by the name: a TakeCell is designed for something that must literally be taken, commonly a buffer that is handed to a different subsystem in a way not easily captured by Rust's borrow mechanisms (for example, a buffer passed to, and effectively borrowed by, a hardware peripheral and returned when a hardware event has filled it). #2360 has some examples where trying to convert a TakeCell into an OptionalCell does not work.

VolatileCell

A VolatileCell is just a helper type for doing volatile reads and writes to a value. This is mostly used for accessing memory-mapped I/O registers. The get() and set() functions are wrappers around core::ptr::read_volatile() and core::ptr::write_volatile().
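
As an illustration, a memory-mapped register block might be described and accessed like this sketch (the register layout is hypothetical, and real Tock code typically goes through the tock_registers crate instead):

use kernel::common::cells::VolatileCell; // import path may differ between Tock versions

#[repr(C)]
struct UartRegisters {
    control: VolatileCell<u32>,
    status: VolatileCell<u32>,
}

fn enable(regs: &UartRegisters) {
    // get() and set() perform volatile reads and writes, so the compiler will
    // not elide or reorder these register accesses.
    let ctrl = regs.control.get();
    regs.control.set(ctrl | 1);
}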

Cell Extensions

In addition to custom types, Tock adds extensions to some of the standard cells to enhance and ease usability. The mechanism here is to add traits to existing data types to enhance their ability. To use extensions, authors need only use kernel::common::cells::THE_EXTENSION to pull the new traits into scope.

NumericCellExt

NumericCellExt extends cells that contain "numeric" types (like usize or i32) to provide some convenient functions (add() and subtract(), for example). This extension makes for cleaner code when storing numbers that are increased or decreased. For example, with a typical Cell, adding one to the stored value looks like: my_cell.set(my_cell.get() + 1). With a NumericCellExt it is a little easier to understand: my_cell.increment() (or my_cell.add(1)).
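
For example (a minimal sketch; the import path follows the convention above and may differ between Tock versions):

use core::cell::Cell;
use kernel::common::cells::NumericCellExt; // pulls the extension trait into scope

struct Counter {
    count: Cell<usize>,
}

impl Counter {
    fn record_event(&self) {
        // Equivalent to self.count.set(self.count.get() + 1), but clearer.
        self.count.increment();
    }
}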

Tock Processes

This document explains how application code works in Tock. This is not a guide to writing applications, but rather documentation of the overall design of how applications are implemented in Tock.

Overview of Processes in Tock

Processes in Tock run application code meant to accomplish some type of task for the end user. Processes run in user mode. Unlike kernel code, which runs in supervisor mode and handles device drivers, chip-specific details, as well as general operating system tasks, application code running in processes is independent of the details of the underlying hardware (except the instruction set architecture). Unlike many existing embedded operating systems, in Tock processes are not compiled with the kernel. Instead they are entirely separate code that interact with the kernel and each other through system calls.

Since processes are not a part of the kernel, application code running in a process may be written in any language that can be compiled into code capable of running on a microcontroller. Tock supports running multiple processes concurrently. Cooperative multiprogramming is the default, but processes may also be time sliced. Processes may share data with each other via Inter-Process Communication (IPC) through system calls.

Processes run code in unprivileged mode (e.g., user mode on Cortex-M or RV32I microcontrollers). The Tock kernel uses hardware memory protection (an MPU on CortexM and a PMP on RV32I) to restrict which addresses application code running in a process can access. A process makes system calls to access hardware peripherals or modify what memory is accessible to it.

Tock supports dynamically loading and unloading independently compiled applications. In this setting, applications do not know at compile time what address they will be installed at and loaded from. To be dynamically loadable, application code must be compiled as position independent code (PIC). This allows them to be run from any address they happen to be loaded into.

In some cases, applications may know their location at compile-time. This happens, for example, in cases where the kernel and applications are combined into a single cryptographically signed binary that is accepted by a secure bootloader. In these cases, compiling an application with explicit addresses works.

Tock supports running multiple processes at the same time. The maximum number of processes supported by the kernel is typically a compile-time constant in the range of 2-4, but is limited only by the available RAM and Flash resources of the chip. Tock scheduling generally assumes that it is a small number (e.g., uses O(n) scheduling algorithms).

System Calls

System calls are how processes and the kernel share data and interact. These could include commands to drivers, subscriptions to callbacks, granting of memory to the kernel so it can store data related to the application, communication with other application code, and many others. In practice, system calls are made through library code and the application need not deal with them directly.

For example, consider the following system call that sets a GPIO pin high:

int gpio_set(GPIO_Pin_t pin) {
  return command(GPIO_DRIVER_NUM, 2, pin);
}

The command system call itself is implemented as the ARM assembly instruction svc (service call):

int __attribute__((naked))
command(uint32_t driver, uint32_t command, int data) {
  asm volatile("svc 2\nbx lr" ::: "memory", "r0");
}

A detailed description of Tock's system call API and ABI can be found in TRD104. The system call documentation describes how they are implemented in the kernel.

Upcalls and Termination

The Tock kernel is completely non-blocking, and it pushes this asynchronous behavior to userspace code. This means that system calls (with one exception) do not block. Instead, they always return very quickly. Long-running operations (e.g., sending data over a bus, sampling a sensor) signal their completion to userspace through upcalls. An upcall is a function call the kernel makes on userspace code.

Yield system calls are the exception to this non-blocking rule. The yield-wait system call blocks until the kernel invokes an upcall on the process. The kernel only invokes upcalls when a process issues the yield system call. The kernel does not invoke upcalls at arbitrary points in the program.

For example, consider the case of when a process wants to sleep for 100 milliseconds. The timer library might break this into three operations:

  1. It registers an upcall for the timer system call driver with a Subscribe system call.
  2. It tells the timer system call driver to issue an upcall in 100 milliseconds by invoking a Command system call.
  3. It calls the yield-wait system call. This causes the process to block until the timer upcall executes. The kernel pushes a stack frame onto the process to execute the upcall; this function call returns to the instruction after yield was invoked.

When a process registers an upcall with a call to a Subscribe system call, it may pass a userdata pointer. The kernel does not access or use this data: it simply passes it back on each invocation of the upcall. This allows a process to register the same function as multiple upcalls and distinguish them by the data passed in the argument.

It is important to note that upcalls are not executed until a process calls yield. The kernel will enqueue upcalls as events occur within the kernel, but the application will not handle them until it yields.

Applications which are "finished" should call an Exit system call. There are two variants of Exit: exit-terminate and exit-restart. They differ in what they signal to the kernel: does the application wish to stop running, or be rebooted?

Inter-Process Communication

Inter-process communication (IPC) allows for separate processes to communicate directly through shared buffers. IPC in Tock is implemented with a service-client model. Each process can support one service. The service is identified by the name of the application running in the process, which is included in the Tock Binary Format Header for the application. A process can communicate with multiple services and will get a unique handle for each discovered service. Clients and services communicate through shared buffers. Each client can share some of its own application memory with the service and then notify the service to instruct it to parse the shared buffer.

Services

Services are named by the package name included in the app's TBF header. To register a service, an app can call ipc_register_svc() to set up a callback. This callback will be called whenever a client calls notify on that service.

Clients

Clients must first discover services they wish to use with the function ipc_discover(). They can then share a buffer with the service by calling ipc_share(). To instruct the service to do something with the buffer, the client can call ipc_notify_svc(). If the app wants to get notifications from the service, it must call ipc_register_client_cb() to receive events when the service calls ipc_notify_client().

See ipc.h in libtock-c for more information on these functions.

Application Entry Point

An application specifies the first function the kernel should call by setting the variable init_fn_offset in its TBF header. This function should have the following signature:

void _start(void* text_start, void* mem_start, void* memory_len, void* app_heap_break);

Process RAM and Flash Memory

The actual process binary and TBF header are stored in nonvolatile flash. This flash region is fixed when the application is installed.

When a process is loaded by the kernel, the process is assigned a fixed, contiguous region of memory in RAM. This is the entire amount of memory the process can use during its entire lifetime. This region includes the typical memory regions for a process (i.e. stack, data, and heap), but also includes the kernel's grant region for the process and the process control block.

Process RAM is memory space divided between all running apps. The figure below shows the memory space of a process.

Process' RAM

The Tock kernel tries to impart no requirements on how a process uses its own accessible memory. As such, a process starts in a very minimal environment, with an initial stack sufficient to support a syscall, but not much more. Application startup routines should first move their program break to accommodate their desired layout, and then set up local stack and heap tracking in accordance with their runtime.

Stack and Heap

Applications can specify their working memory requirements by setting the minimum_ram_size variable in their TBF headers. Note that the Tock kernel treats this as a minimum: depending on the underlying platform, the amount of memory may be larger than requested, but it will never be smaller.

If there is insufficient memory to load your application, the kernel will fail during loading and print a message.

If an application exceeds its allotted memory during runtime, the application will crash (see the Debugging section for an example).

Isolation

The kernel limits processes to only accessing their own memory regions by using hardware memory protection units. On Cortex-M platforms this is the MPU and on RV32I platforms this is the PMP (or ePMP).

Before doing a context switch to a process the kernel configures the memory protection unit for that process. Only the memory regions assigned to the process are set as accessible.

Flash Isolation

Processes cannot access arbitrary addresses in flash, including bootloader and kernel code. They are also prohibited from reading or writing the nonvolatile regions of other processes.

Processes do have access to their own memory in flash. Certain regions, including their Tock Binary Format (TBF) header and a protected region after the header, are read-only, as the kernel must be able to ensure the integrity of the header. In particular, the kernel needs to know the total size of the app to find the next app in flash. The kernel may also wish to store nonvolatile information about the app (e.g. how many times it has entered a failure state) that the app should not be able to alter.

The remainder of the app, and in particular the actual code of the app, is considered to be owned by the app. The app can read the flash to execute its own code. If the MCU uses flash for its nonvolatile memory, the app likely cannot directly modify its own flash region, as erasing or writing flash typically requires interacting with a hardware peripheral. In this case, the app would require kernel support to modify its flash region.

RAM Isolation

For the process's RAM region, the kernel maintains a brk pointer and gives the process full access only to its memory region below that brk pointer. Processes can use the Memop syscall to increase the brk pointer. Memop syscalls can also be used by the process to inform the kernel of where it has placed its stack and heap, but these are used entirely for debugging. The kernel does not need to know how the process has organized its memory for normal operation.

All kernel-owned data on behalf of a process (i.e. grant and PCB) is stored at the top (i.e. highest addresses) of the process's memory region. Processes are never given any access to this memory, even though it is within the process's allocated memory region.

Processes can choose to explicitly share portions of their RAM with the kernel through the use of Allow syscalls. This gives capsules read/write access to the process's memory for use with a specific capsule operation.

Debugging

If an application crashes, Tock provides a very detailed stack dump. By default, when an application crashes Tock prints a crash dump over the platform's default console interface. When your application crashes, we recommend looking at this output very carefully: often we have spent hours trying to track down a bug which in retrospect was quite obviously indicated in the dump, if we had just looked at the right fields.

Note that because an application is relocated when it is loaded, the binaries and debugging .lst files generated when the app was originally compiled will not match the actual executing application on the board. To generate matching files (and in particular a matching .lst file), you can use the make debug target in the app's directory to create an appropriate .lst file that matches how the application was actually executed. See the end of the debug print out for an example command invocation.

---| Fault Status |---
Data Access Violation:              true
Forced Hard Fault:                  true
Faulting Memory Address:            0x00000000
Fault Status Register (CFSR):       0x00000082
Hard Fault Status Register (HFSR):  0x40000000

---| App Status |---
App: crash_dummy   -   [Fault]
 Events Queued: 0   Syscall Count: 0   Dropped Callback Count: 0
 Restart Count: 0
 Last Syscall: None

 ╔═══════════╤══════════════════════════════════════════╗
 ║  Address  │ Region Name    Used | Allocated (bytes)  ║
 ╚0x20006000═╪══════════════════════════════════════════╝
             │ ▼ Grant         948 |    948
  0x20005C4C ┼───────────────────────────────────────────
             │ Unused
  0x200049F0 ┼───────────────────────────────────────────
             │ ▲ Heap            0 |   4700               S
  0x200049F0 ┼─────────────────────────────────────────── R
             │ Data            496 |    496               A
  0x20004800 ┼─────────────────────────────────────────── M
             │ ▼ Stack          72 |   2048
  0x200047B8 ┼───────────────────────────────────────────
             │ Unused
  0x20004000 ┴───────────────────────────────────────────
             .....
  0x00030400 ┬─────────────────────────────────────────── F
             │ App Flash       976                        L
  0x00030030 ┼─────────────────────────────────────────── A
             │ Protected        48                        S
  0x00030000 ┴─────────────────────────────────────────── H

  R0 : 0x00000000    R6 : 0x20004894
  R1 : 0x00000001    R7 : 0x20004000
  R2 : 0x00000000    R8 : 0x00000000
  R3 : 0x00000000    R10: 0x00000000
  R4 : 0x00000000    R11: 0x00000000
  R5 : 0x20004800    R12: 0x12E36C82
  R9 : 0x20004800 (Static Base Register)
  SP : 0x200047B8 (Process Stack Pointer)
  LR : 0x000301B7
  PC : 0x000300AA
 YPC : 0x000301B6

 APSR: N 0 Z 1 C 1 V 0 Q 0
       GE 0 0 0 0
 EPSR: ICI.IT 0x00
       ThumbBit true

 Cortex-M MPU
  Region 0: base: 0x20004000, length: 8192 bytes; ReadWrite (0x3)
  Region 1: base:    0x30000, length: 1024 bytes; ReadOnly (0x6)
  Region 2: Unused
  Region 3: Unused
  Region 4: Unused
  Region 5: Unused
  Region 6: Unused
  Region 7: Unused

To debug, run `make debug RAM_START=0x20004000 FLASH_INIT=0x30059`
in the app's folder and open the .lst file.

Applications

For example applications, see the language-specific userland repositories, libtock-c and libtock-rs.

Scheduling

This describes how processes are scheduled by the Tock kernel.

Tock Scheduling

The kernel defines a Scheduler trait that the main kernel loop uses to determine which process to execute next. Here is a simplified view of that trait:

#![allow(unused)]
fn main() {
pub trait Scheduler<C: Chip> {
    /// Decide which process to run next.
    fn next(&self) -> SchedulingDecision;

    /// Inform the scheduler of why the last process stopped executing, and how
    /// long it executed for.
    fn result(&self, result: StoppedExecutingReason, execution_time_us: Option<u32>);

    /// Tell the scheduler to execute kernel work such as interrupt bottom
    /// halves and dynamic deferred calls. Most schedulers will use the default
    /// implementation.
    unsafe fn execute_kernel_work(&self, chip: &C) {...}

    /// Ask the scheduler whether to take a break from executing userspace
    /// processes to handle kernel tasks.
    unsafe fn do_kernel_work_now(&self, chip: &C) -> bool {...}

    /// Ask the scheduler whether to continue trying to execute a process.
    /// Most schedulers will use this default implementation.
    unsafe fn continue_process(&self, _id: ProcessId, chip: &C) -> bool {...}
}
}

Individual boards can choose which scheduler to use, and implementing new schedulers just requires implementing this trait.
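
As a concrete (if contrived) illustration, here is a sketch of a scheduler that always runs a single process cooperatively. It is written against the simplified trait above and assumes the SchedulingDecision::RunProcess and TrySleep variants from the kernel's scheduler module; module paths and details may differ between kernel versions.

use kernel::platform::chip::Chip;
use kernel::process::{ProcessId, StoppedExecutingReason};
use kernel::scheduler::{Scheduler, SchedulingDecision};

/// A toy scheduler that always offers the same process to the kernel loop,
/// with no timeslice (purely cooperative scheduling).
pub struct SingleProcessScheduler {
    pid: ProcessId,
}

impl<C: Chip> Scheduler<C> for SingleProcessScheduler {
    fn next(&self) -> SchedulingDecision {
        // Run our one process; `None` means no timeslice (run until it yields).
        SchedulingDecision::RunProcess((self.pid, None))
    }

    fn result(&self, _result: StoppedExecutingReason, _execution_time_us: Option<u32>) {
        // A real scheduler would use this to update its bookkeeping.
    }
}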

Process State

In Tock, a process can be in one of these states:

  • Running: Normal operation. A Running process is eligible to be scheduled for execution, although it is subject to being paused by Tock to allow interrupt handlers or other processes to run. During normal operation, a process remains in the Running state until it explicitly yields. Upcalls from other kernel operations are not delivered to Running processes (i.e. upcalls do not interrupt processes); rather, they are enqueued until the process yields.
  • Yielded: Suspended operation. A Yielded process will not be scheduled by Tock. Processes often yield while they are waiting for I/O or other operations to complete and have no immediately useful work to do. Whenever the kernel issues an upcall to a Yielded process, the process is transitioned to the Running state.
  • YieldedFor: Suspended operation. Like a Yielded process, a YieldedFor process will not be scheduled by Tock. A YieldedFor process is waiting for a specific UpcallId (i.e., a specific upcall for a specific driver). The process will only be transitioned to the Running state when the kernel issues that specific upcall.
  • Fault: Erroneous operation. A Fault-ed process will not be scheduled by Tock. Processes enter the Fault state by performing an illegal operation, such as accessing memory outside of their address space.
  • Terminated: The process ended itself by calling the Exit system call and the kernel has not restarted it.
  • Stopped: The process was running or yielded but was then explicitly stopped by the kernel (e.g., by the process console). A process in this state will not be made runnable until it is started again, at which point it will continue execution from where it was stopped.
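
As a rough sketch, these states could be represented as an enum like the following. This is illustrative only: the kernel's actual state type in kernel/src/process.rs differs in detail (for example, it records which state a process was stopped from).

/// Illustrative placeholder for the identifier of a specific upcall
/// (driver number plus subscribe slot).
pub struct UpcallId {
    pub driver_num: usize,
    pub subscribe_num: usize,
}

/// A simplified view of the process states described above.
pub enum State {
    /// Eligible to be scheduled.
    Running,
    /// Waiting for any upcall; not scheduled until one is delivered.
    Yielded,
    /// Waiting for one specific upcall.
    YieldedFor(UpcallId),
    /// Explicitly stopped by the kernel (e.g. from the process console).
    Stopped,
    /// Performed an illegal operation; will not be scheduled.
    Faulted,
    /// Called Exit and has not been restarted.
    Terminated,
}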

Tock Startup

This document walks through how all of the components of Tock start up.

When a microcontroller boots (or resets, or services an interrupt) it loads an address for a function from a table indexed by interrupt type known as the vector table. The location of the vector table in memory is chip-specific, thus it is placed in a special section for linking.

Cortex-M microcontrollers expect a vector table to be at address 0x00000000. This can either be a software bootloader or the Tock kernel itself.

RISC-V gives hardware designers a great deal of design freedom for how booting works. Typically, after coming out of reset, a RISC-V processor will start executing out of ROM, but this may be configurable. The HiFive1 board, for example, supports booting out of ROM, one-time programmable (OTP) memory, or a QSPI flash controller.

Optional Bootloader

Many Tock boards (including Hail and imix) use a software bootloader that executes when the MCU first boots. The bootloader provides a way to talk to the chip over serial and to load new code, as well as potentially other administrative tasks. When the bootloader has finished, it tells the MCU that the vector table has moved (to a known address), and then jumps to a new address.

Tock first instructions

ARM Vector Table and IRQ table

On ARM chips, Tock splits the vector table into two sections: .vectors, which holds the first 16 entries common to all ARM cores, and .irqs, which is appended to the end and holds chip-specific interrupts.

In the source code, then, the vector table appears as an array that is marked to be placed into the .vectors section.

In Rust, a vector table will look something like this:

#![allow(unused)]
fn main() {
#[link_section=".vectors"]
#[used] // Ensures that the symbol is kept until the final binary
pub static BASE_VECTORS: [unsafe extern fn(); 16] = [
    _estack,                        // Initial stack pointer value
    tock_kernel_reset_handler,      // Tock's reset handler function
    /* NMI */ unhandled_interrupt,  // Generic handler function
    // ... (remaining entries elided)
];
}

In C, a vector table will look something like this:

__attribute__ ((section(".vectors")))
interrupt_function_t interrupt_table[] = {
        (interrupt_function_t) (&_estack),
        tock_kernel_reset_handler,
        NMI_Handler,
        /* ... remaining handlers elided ... */
};

At the time of this writing (November 2018), typical chips (like the sam4l and nrf52) use the same handler for all interrupts, so the .irqs section looks something like:

#![allow(unused)]
fn main() {
#[link_section = ".vectors"]
#[used] // Ensures that the symbol is kept until the final binary
pub static IRQS: [unsafe extern "C" fn(); 80] = [generic_isr; 80];
}

RISC-V

All RISC-V boards are linked to run the _start function as the first function that gets run before jumping to main. As of this writing, this is implemented in inline assembly:

#![allow(unused)]
fn main() {
#[cfg(all(target_arch = "riscv32", target_os = "none"))]
#[link_section = ".riscv.start"]
#[export_name = "_start"]
#[naked]
pub extern "C" fn _start() {
    unsafe {
        asm! ("

}

Reset Handler

On boot, the MCU calls the reset handler function defined in the vector table. In Tock, the implementation of the reset handler function is architecture-specific and handles memory initialization.

Memory Initialization

The main operation the reset handler does is set up the kernel's memory by copying it from flash. For the SAM4L, this is in the initialize_ram_jump_to_main() function in arch/cortex-m/src/lib.rs. Once finished, the reset handler jumps to the main() function defined by each board.

The memory initialization function is implemented in assembly as Rust expects that memory is correctly initialized before any Rust instructions execute.

RISC-V Trap setup

The mtvec register needs to be set on RISC-V to handle traps. Setting of the vectors is handled by chip specific functions. The common RISC-V trap handler is _start_trap, defined in arch/rv32i/src/lib.rs.

MCU Setup

Any normal MCU initialization is typically handled next. This includes things like enabling the correct clocks or setting up DMA channels.

Peripheral and Capsule Initialization

After the MCU is set up, main initializes peripherals and capsules. Peripherals are on-chip subsystems, such as UARTs, ADCs, and SPI buses; they are chip-specific code that reads and writes memory-mapped I/O registers, and they are found in the corresponding chips directory. While peripherals are chip-specific implementations, they typically provide hardware-independent traits, called hardware independent layer (HIL) traits, found in kernel/src/hil.

Capsules are software abstractions and services; they are chip-independent and found in the capsules directory. For example, on the imix and hail platforms, the SAM4L SPI peripheral is implemented in chips/sam4l/src/spi.rs, while the capsule that virtualizes the SPI so multiple capsules can share it is in capsules/src/virtual_spi.rs. This virtualizer can be chip-independent because the chip-specific code implements the SPI HIL (kernel/src/hil/spi.rs). The capsule that implements a system call API to the SPI for processes is in capsules/src/spi.rs.

Boards that initialize many peripherals and capsules use the Component trait to encapsulate this complexity from main. The Component trait (kernel/src/component.rs) encapsulates any initialization a particular peripheral, capsule, or set of capsules needs inside a call to the function finalize(). Changing what the build of the kernel includes involves changing just which Components are initialized, rather than changing many lines of main. Components are typically found in the components crate in the /boards folder, but may also be board-specific and found inside a components subdirectory of the board directory, e.g. boards/imix/src/imix_components.
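
As a reference point, a simplified view of the Component trait looks roughly like this (the real definition lives in kernel/src/component.rs):

pub trait Component {
    /// The static memory (if any) this component needs to be handed.
    type StaticInput;
    /// The fully-initialized object this component produces, typically a
    /// reference to a capsule or peripheral driver.
    type Output;

    /// Perform all of the initialization for this component and return the
    /// finished object for main() (or other components) to use.
    unsafe fn finalize(self, static_memory: Self::StaticInput) -> Self::Output;
}

A board's main.rs then builds each subsystem with a pattern like SomeComponent::new(...).finalize(some_component_static!(...)), as in the process console example later in this book.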

Application Startup

Once the kernel components have been set up and initialized, the applications must be loaded. This procedure essentially iterates over the processes stored in flash, extracts and validates their Tock Binary Format headers, and adds them to an internal array of process structs.

An example version of this loop is in kernel/src/process.rs as the load_processes() function. After setting up pointers, it tries to create a process from the starting address in flash and with a given amount of memory remaining. If the header is validated, it tries to load the process into memory and initialize all of the bookkeeping in the kernel associated with the process. This can fail if the process needs more memory than is available on the chip. If the process is successfully loaded, the kernel notes the address of the application's entry function, which is called when the process is started.

The load process loop ends when the kernel runs out of statically allocated memory to store processes in, runs out of available RAM for processes, or encounters an invalid TBF header in flash.

Scheduler Execution

Tock provides a Scheduler trait that serves as an abstraction to allow for plugging in different scheduling algorithms. Schedulers should be initialized at the end of the reset handler. The final thing that the reset handler must do is call kernel.kernel_loop(). This starts the Tock scheduler and the main operation of the kernel.

Syscalls

This document explains how system calls work in Tock with regards to both the kernel and applications. TRD104 contains the more formal specification of the system call API and ABI for 32-bit systems. This document describes the considerations behind the system call design.

Overview of System Calls in Tock

System calls are the method used to send information from applications to the kernel. Rather than directly calling a function in the kernel, applications trigger a context switch to the kernel. The kernel then uses the values in registers and the stack at the time of the interrupt call to determine how to route the system call and which driver function to call with which data values.

Using system calls has three advantages. First, the act of triggering a service call interrupt can be used to change the processor state. Rather than being in unprivileged mode (as applications are run) and limited by the Memory Protection Unit (MPU), after the service call the kernel switches to privileged mode where it has full control of system resources (more detail on ARM processor modes).

Second, context switching to the kernel allows it to do other resource handling before returning to the application. This could include running other applications, servicing queued upcalls, or many other activities.

Finally, and most importantly, using system calls allows applications to be built independently from the kernel. The entire codebase of the kernel could change, but as long as the system call interface remains identical, applications do not even need to be recompiled to work on the platform. Applications, when separated from the kernel, no longer need to be loaded at the same time as the kernel. They could be uploaded at a later time, modified, and then have a new version uploaded, all without modifying the kernel running on a platform.

Tock System Call Types

Tock has 7 general types (i.e. "classes") of system calls:

  • Yield
  • Subscribe
  • Command
  • Read-Write Allow
  • Read-Only Allow
  • Memop
  • Exit

All communication and interaction between applications and the kernel uses only these system calls.

Within these system calls, there are two general groups of syscalls: administrative and capsule-specific.

  1. Administrative Syscalls: These adjust the execution or resources of the running process, and are handled entirely by the core kernel. These calls always behave the same way no matter which kernel resources are exposed to userspace. This group includes:

    • Yield
    • Memop
    • Exit
  2. Capsule-Specific Syscalls: These interact with specific capsules (i.e. kernel modules). While the general semantics are the same no matter the underlying capsule or resource being accessed, the actual behavior of the syscall depends on which capsule is being accessed. For example, a command to a timer capsule might start a timer, whereas a command to a temperature sensor capsule might start a temperature measurement. This group includes:

    • Subscribe
    • Command
    • Read-Write Allow
    • Read-Only Allow

All Tock system calls are synchronous, which means they immediately return to the application. Capsules must not implement long-running operations by blocking on a command system call, as this prevents other applications or kernel routines from running – kernel code is never preempted.

System Call Descriptions

This provides an introduction to each type of Tock system call. These are described in much more detail in TRD104.

  • Yield: An application yields its execution back to the kernel. The kernel will only trigger an upcall for a process after it has called yield.

  • Memop: This group of "memory operations" allows a process to adjust its memory break (i.e. request more memory be available for the process to use), learn about its memory allocations, and provide debug information.

  • Exit: An application can call exit to inform the kernel it no longer needs to execute and its resources can be freed. This also lets the process request a restart.

  • Subscribe: An application can issue a subscribe system call to register upcalls, which are functions invoked in response to certain events. These upcalls are similar in concept to UNIX signal handlers. A driver can request that an application-provided upcall be invoked. Every system call driver can provide multiple "subscribe slots", each of which the application can register an upcall to.

  • Command: Applications can use command-type system calls to signal arbitrary events or send requests to a kernel driver (capsule). A common use case for command-style system calls is, for instance, to request that a driver start some long-running operation.

  • Read-only Allow: An application may expose some data for drivers to read. Tock provides the read-only allow system call for this purpose: an application invokes this system call passing a buffer, the contents of which are then made accessible to the requested driver. Every driver can have multiple "allow slots", each of which the application can place a buffer in.

  • Read-write Allow: Works similarly to read-only allow, but enables drivers to also mutate the application-provided buffer.

Data Movement Between Userspace and Kernel

All data movement and communication between userspace and the kernel happens through syscalls. This section describes the general mechanisms for data movement that syscalls enable. In this case, we use "data" to be very general and describe any form of information transfer.

Userspace → Kernel

Moving data from a userspace application to the kernel happens in two forms.

  1. Instruction with simple options. Applications often want to instruct the kernel to take some action (e.g. play a sound, turn on an LED, or take a sensor reading). Some of these may require small amounts of configuration (e.g. which LED, or the resolution of the sensor reading). This data transfer is possible with the Command syscall.

    There are two important considerations for Command. First, the amount of data that can be transferred for configuration is on the order of 32 bits. Second, Command is non-blocking, meaning the Command syscall will finish before the requested operation completes.

  2. Arbitrary buffers of data. Applications often need to pass data to the kernel for the kernel to use it for some action (e.g. audio samples to play, data packets to transmit, or data buffers to encrypt). This data transfer is possible with the "allow" family of syscalls, specifically the Read-only allow.

    Once an application shares a buffer with the kernel via allow, the process should not use that buffer until it has "un-shared" the buffer with the kernel.

Kernel → Userspace

Moving data from the kernel to a userspace application happens in three ways.

  1. Small data that is synchronously available. The kernel may have status information or fixed values it can send to an application (e.g. how many packets have been sent, or the maximum resolution of an ADC). This can be shared via the return value to a Command syscall. An application must call the Command syscall, and the return value must be immediately available, but the kernel can provide about 12 bytes of data back to the application via the return value to the command syscall.

  2. Arbitrary buffers of data. The kernel may have more data to send to an application (e.g. an incoming data packet, or ADC readings). This data can be shared with the application by filling in a buffer the application has already shared with the kernel via an allow syscall. For the kernel to be able to modify the buffer, the application must have called the Read-write allow syscall.

  3. Events with small amounts of data. The kernel may need to notify an application about a recent event or provide small amounts of new data (e.g. a button was pressed, a sensor reading is newly available, or an incoming packet has arrived). This is accomplished by the kernel issuing an "upcall" to the application. You can think of an upcall as a callback: when the process resumes running, it executes a particular function with particular arguments.

    For the kernel to be able to trigger an upcall, the process must have first called Subscribe to pass the address of the function the upcall will execute.

    The kernel can pass a few arguments (roughly 12 bytes) with the upcall. This is useful for providing small amounts of data, like a sensor reading.

System Call Implementations

All system calls are implemented via context switches. A few values are passed along with the context switch to indicate the type and manner of the syscall. A process invokes a system call by triggering a context switch via a software interrupt that transitions the microcontroller to supervisor/kernel mode. The exact mechanism for this is architecture-specific. TRD104 specifies how userspace and the kernel pass values to each other for Cortex-M and RV32I platforms.

Handling a context switch is one of the few pieces of architecture-specific Tock code. The code is located in lib.rs within the arch/ folder under the appropriate architecture. As this code deals with low-level functionality in the processor it is written in assembly wrapped as Rust function calls.

Context Switch Interface

The architecture crates (in the /arch folder) are responsible for implementing the UserspaceKernelBoundary trait which defines the functions needed to allow the kernel to correctly switch to userspace. These functions handle the architecture-specific details of how the context switch occurs, such as which registers are saved on the stack, where the stack pointer is stored, and how data is passed for the Tock syscall interface.

Cortex-M Architecture Details

Starting in the kernel before any application has been run but after the process has been created, the kernel calls switch_to_user. This code sets up registers for the application, including the PIC base register and the process stack pointer, then triggers a service call interrupt with a call to svc. The svc handler code automatically determines if the system desired a switch to application or to kernel and sets the processor mode. Finally, the svc handler returns, directing the PC to the entry point of the app.

The application runs in unprivileged mode while executing. When it needs to use a kernel resource it issues a syscall by executing the svc instruction. The svc_handler determines that it should switch to the kernel from an app, sets the processor mode to privileged, and returns. Since the stack has changed to the kernel's stack pointer (rather than the process stack pointer), execution returns to switch_to_user immediately after the svc that led to the application starting. switch_to_user saves registers and returns to the kernel so the system call can be processed.

On the next switch_to_user call, the application will resume execution based on the process stack pointer, which points to the instruction after the system call that switched execution to the kernel.

Syscalls may clobber userspace memory, as the kernel may write to buffers previously given to it using Allow. The kernel will not clobber any userspace registers except for the return value register (r0). However, Yield must be treated as clobbering more registers, as it can call an upcall in userspace before returning. This upcall can clobber r0-r3, r12, and lr. See this comment in the libtock-c syscall code for more information about Yield.

RISC-V Architecture Details

Tock assumes that a RISC-V platform that supports context switching has two privilege modes: machine mode and user mode.

The RISC-V architecture provides very lean support for context switching, providing significant flexibility in software on how to support context switches. The hardware guarantees the following will happen during a context switch: when switching from kernel mode to user mode by calling the mret instruction, the PC is set to the value in the mepc CSR, and the privilege mode is set to the value in the MPP bits of the mstatus CSR. When switching from user mode to kernel mode using the ecall instruction, the PC of the ecall instruction is saved to the mepc CSR, the correct bits are set in the mcause CSR, and the privilege mode is restored to machine mode. The kernel can store 32 bits of state in the mscratch CSR.

Tock handles context switching using the following process. When switching to userland, all register contents are saved to the kernel's stack. Additionally, a pointer to a per-process struct of stored process state and the PC of where in the kernel to resume executing after the process switches back to kernel mode are stored to the kernel's stack. Then, the PC of the process to start executing is put into the mepc CSR, the kernel stack pointer is saved in mscratch, and the previous contents of the app's registers from the per-process stored state struct are copied back into the registers. Then mret is called to switch to user mode and begin executing the app.

An application calls a system call with the ecall instruction. This causes the trap handler to execute. The trap handler checks mscratch, and if the value is nonzero then it contains the stack pointer of the kernel and this trap must have happened while the system was executing an application. Then, the kernel stack pointer from mscratch is used to find the pointer to the stored state struct, and all process registers are saved. The trap handler also saves the process PC from the mepc CSR and the mcause CSR. It then loads the kernel address of where to resume the context switching code to mepc and calls mret to exit the trap handler. Back in the context switching code, the kernel restores its registers from its stack. Then, using the contents of mcause the kernel decides why the application stopped executing, and if it was a system call which one it is. Returning the context switch reason ends the context switching process.

All values for the system call functions are passed in registers a0-a4. No values are stored to the application stack. The return value for a system call is set in a0. In most system calls the kernel will not clobber any userspace registers except for this return value register (a0). However, the yield() system call results in an upcall executing in the process. This can clobber all caller-saved registers, as well as the return address (ra) register.

Upcalls

The kernel can signal events to userspace via upcalls. Upcalls run a function in userspace after a context switch. The kernel, as part of the upcall, provides four 32-bit arguments. The address of the function to run is provided via the Subscribe syscall.

Process Startup

Upon process initialization, the kernel starts executing a process by running an upcall to the process's entry point. A single function call task is added to the process's upcall queue. The function is determined by the ENTRY point in the process TBF header (typically the _start symbol) and is passed the following arguments in registers r0 - r3:

  • r0: the base address of the process code
  • r1: the base address of the process's allocated memory region
  • r2: the total amount of memory in its region
  • r3: the current process memory break
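
For illustration only, the register convention above corresponds to an entry-point signature roughly like the following. Real userspace runtimes such as libtock-c and libtock-rs implement _start in assembly; the parameter names here are made up.

#[no_mangle]
pub unsafe extern "C" fn _start(
    app_start: usize,      // r0: base address of the process code
    mem_start: usize,      // r1: base address of the allocated memory region
    memory_len: usize,     // r2: total amount of memory in the region
    app_heap_break: usize, // r3: current process memory break
) -> ! {
    // Silence unused-variable warnings in this stub.
    let _ = (app_start, mem_start, memory_len, app_heap_break);
    // A real runtime would set up its stack and heap here, initialize
    // .data/.bss, call the application's main, and finally invoke Exit.
    loop {}
}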

How System Calls Connect to Capsules (Drivers)

After a system call is made, the call is handled and routed by the Tock kernel in kernel.rs through a series of steps.

  1. For Command, Subscribe, Read-Write Allow, and Read-Only Allow system calls, the kernel calls a platform-defined system call filter function. This function determines if the kernel should handle the system call or not. Yield, Exit, and Memop system calls are not filtered. This filter function allows the kernel to impose security policies that limit which system calls a process might invoke. The filter function takes the system call and the process that issued it, and returns a Result<(), ErrorCode> signaling whether the system call should be handled or an error returned to the process. If the filter function disallows the system call it returns Err(ErrorCode) and the ErrorCode is provided to the process as the return code for the system call. Otherwise, the system call proceeds. The filter interface is unstable and may be changed in the future.

  2. The kernel scheduler loop handles the Exit and Yield system calls.

  3. To handle Memop system calls, the scheduler loop invokes the memop module, which implements the Memop class.

  4. Command, Subscribe, Read-Write Allow, and Read-Only Allow follow a more complex execution path because they are implemented by drivers. To route these system calls, the scheduler loop calls a struct that implements the SyscallDriverLookup trait. This trait has a with_driver() function that takes the driver number as an argument and returns either a reference to the corresponding driver or None if it is not installed. The kernel uses the returned reference to call the appropriate system call function on that driver with the remaining system call arguments.

    An example board that implements the SyscallDriverLookup trait looks something like this:

    #![allow(unused)]
    fn main() {
    struct TestBoard {
        console: &'static Console<'static, usart::USART>,
    }
    
    impl SyscallDriverLookup for TestBoard {
        fn with_driver<F, R>(&self, driver_num: usize, f: F) -> R
            where F: FnOnce(Option<&dyn kernel::syscall::SyscallDriver>) -> R
        {
    
            match driver_num {
                0 => f(Some(self.console)), // use capsules::console::DRIVER_NUM rather than 0 in real code
                _ => f(None),
            }
        }
    }
    }

    TestBoard then supports one driver, the UART console, and maps it to driver number 0. Any command, subscribe, and allow syscalls to driver number 0 will get routed to the console, and all other driver numbers will return Err(ErrorCode::NODEVICE).

Identifying Syscalls

A series of numbers and conventions identify syscalls as they pass via a context switch.

Syscall Class

The first identifier specifies which class of syscall it is. The values are fixed by convention, as listed in the following table.

Syscall Class      Syscall Class Number
Yield              0
Subscribe          1
Command            2
Read-Write Allow   3
Read-Only Allow    4
Memop              5
Exit               6

Driver Numbers

For capsule-specific syscalls, the syscall must be directed to the correct capsule (driver). The with_driver() function takes an argument driver_num to identify the driver.

To enable the kernel and userspace to agree, we maintain a list of known driver numbers.

To support custom capsules and drivers, a driver_num whose highest bit is set is private and can be used by out-of-tree drivers.

Syscall-Specific Numbers

For each capsule/driver, the driver can support more than one of each syscall (e.g. it can support multiple commands). Another number included in the context switch indicates which instance of the syscall the call refers to.

For the Command syscall, the command_num 0 is reserved as an existence check: userspace can call a command for a driver with command_num 0 to check if the driver is installed on the board. Otherwise, the numbers are entirely driver-specific.
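
For illustration, a driver's command dispatch might look roughly like the sketch below, with command_num 0 reserved for the existence check. The CommandReturn constructors and ErrorCode follow the kernel's syscall types; everything else here (the function and its commands) is made up.

use kernel::syscall::CommandReturn;
use kernel::ErrorCode;

/// Hypothetical command dispatch for some driver.
fn dispatch_command(command_num: usize) -> CommandReturn {
    match command_num {
        // Command 0: existence check. Success simply means "driver installed".
        0 => CommandReturn::success(),
        // Command 1: a made-up driver-specific request returning a value.
        1 => CommandReturn::success_u32(42),
        // Anything else is not supported by this driver.
        _ => CommandReturn::failure(ErrorCode::NOSUPPORT),
    }
}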

For Subscribe, Read-only allow, and Read-write allow, the numbers start at 0 and increment for each defined use of the various syscalls. There cannot be a gap between valid subscribe or allow numbers. The actual meaning of each subscribe or allow number is driver-specific.

Identifying Error and Return Types

Tock includes some defined types and conventions for errors and return values between the kernel and userspace. These allow the kernel to indicate success and failure to userspace.

Naming Conventions

  • *Code (e.g. ErrorCode, StatusCode): These types are mappings between numeric values and semantic meanings. These can always be encoded in a usize.
  • *Return (e.g. SyscallReturn): These are more complex return types that can include arbitrary values, errors, or *Code types.

Type Descriptions

  • *Code Types:

    • ErrorCode: A standard set of errors and their numeric representations in Tock. This is used to represent errors for syscalls, and elsewhere in the kernel. (A sketch of these values appears after this list.)

    • StatusCode: All errors in ErrorCode plus a Success value (represented by 0). This is used to pass a success/error status between the kernel and userspace.

      StatusCode is a pseudotype that is not actually defined as a concrete Rust type. Instead, it is always encoded as a usize. Even though it is not a concrete type, it is useful to be able to refer to it conceptually, so we give it the name StatusCode.

      The intended use of StatusCode is to convey success/failure to userspace in upcalls. To try to keep things simple, we use the same numeric representations in StatusCode as we do with ErrorCode.

  • *Return Types:

    • SyscallReturn: The return type for a syscall. Includes whether the syscall succeeded or failed, optionally additional data values, and in the case of failure an ErrorCode.
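
To make the *Code convention concrete, here is a sketch of ErrorCode and the StatusCode encoding. The numeric values follow TRD104; the real ErrorCode definition lives in the kernel crate (with upper-case variant names), so this is only an approximation.

/// Sketch of the standard Tock error values (numbering per TRD104).
#[derive(Clone, Copy, Debug)]
pub enum ErrorCode {
    Fail = 1,
    Busy = 2,
    Already = 3,
    Off = 4,
    Reserve = 5,
    Invalid = 6,
    Size = 7,
    Cancel = 8,
    NoMem = 9,
    NoSupport = 10,
    NoDevice = 11,
    Uninstalled = 12,
    NoAck = 13,
}

/// StatusCode is not a concrete type: it is a usize where 0 means success and
/// nonzero values reuse the ErrorCode numbering.
pub fn into_statuscode(result: Result<(), ErrorCode>) -> usize {
    match result {
        Ok(()) => 0,
        Err(e) => e as usize,
    }
}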

Tock Binary Format

Tock userspace applications must follow the Tock Binary Format (TBF). This format describes how the binary data representing a Tock app is formatted. A TBF Object has four parts:

  1. A header section: encodes metadata about the TBF Object
  2. The actual Userspace Binary
  3. An optional Footer section: encodes credentials for the TBF Object
  4. Padding (optional).

The general TBF format is structured as depicted:

Tock App Binary:

Start of app ─►┌──────────────────────┐◄┐          ◄┐          ◄┐
               │ TBF Header           │ │ Protected │           │
               ├──────────────────────┤ │ region    │           │
               │ Protected trailer    │ │           │ Covered   │
               │ (Optional)           │ │           │ by        │
               ├──────────────────────┤◄┘           │ integrity │
               │                      │             │           │ Total
               │ Userspace            │             │           │ size
               │ Binary               │             │           │
               │                      │             │           │
               │                      │             │           │
               │                      │             │           │
               ├──────────────────────┤            ◄┘           │
               │ TBF Footer           │                         │
               │ (Optional)           │                         │
               ├──────────────────────┤                         │
               │ Padding (Optional)   │                         │
               └──────────────────────┘                        ◄┘

The Header is interpreted by the kernel (and other tools, like tockloader) to understand important aspects of the app. In particular, the kernel must know where in the application binary the entry point is, i.e. where it should start executing when running the app for the first time.

The Header is encompassed in the Protected Region, which is the region at the beginning of the app that the app itself cannot access or modify at runtime. This provides a mechanism for the kernel to store persistent data on behalf of the app.

After the Protected Region the app is free to include whatever Userspace Binary it wants, and the format is completely up to the app. This is generally the output binary as created by a linker, but can include any additional binary data. This must include all data needed to actually execute the app. All support for relocations must be handled by the app itself, for example.

If the TBF Object has a Program Header in the Header section, the Userspace Binary can be followed by optional TBF Footers.

TBF Headers and Footers differ in how they are handled for TBF Object integrity. Integrity values (e.g., hashes) for a TBF Object are computed over the Protected Region section and Userspace Binary but not the Footer section or the padding after footers. TBF Headers are covered by integrity, while TBF Footers are not covered by integrity.

Finally, the TBF Object can be padded to a specific length. This is useful when a memory protection unit (MPU) restricts the length and offset of protection regions to powers of two. In such cases, padding allows a TBF Object to be padded to a power of two in size, so the next TBF Object is at a valid alignment.

TBF Design Requirements

The TBF format supports several design choices within the kernel, including:

  • App discovery at boot
  • Signed apps
  • Extensibility and backwards compatibility

App Discovery

When the Tock kernel boots it must discover installed applications. The TBF format supports this by enabling a linked-list structure of apps, where TBF Objects in Tock are stored sequentially in flash memory. The start of TBF Object N+1 is immediately at the end of TBF Object N. The start of the first TBF Object is placed at a well-known address. The kernel then discovers apps by iterating through this array of TBF Objects.

To enable this, the TBF Header specifies the length of the TBF Object so that the kernel can find the start of the next one. If there is a gap between TBF Objects an "empty object" can be inserted to keep the structure intact.

Tock apps are typically stored in sorted order, from longest to shortest. This is to help match MPU rules about alignment.

A TBF Object can contain no code. A TBF Object can be marked as disabled to act as padding between other objects.

Signed Apps

TBF Objects can include a credential to provide integrity or other security properties. Credentials are stored in the TBF Footer. As credentials cannot include themselves, credentials are not computed over the TBF Footer.

The TBF Footer region can include any number of credentials.

TBF Headers and Footers Format

Both TBF Footers and Headers use a "TLV" (type-length-value) format. This means individual entries within the Header and Footer are self-identifying, and different applications can include different entries in the Header and Footer. This also simplifies adding new features to the TBF format over time, as new TLV objects can be defined.

In general, unknown TLVs should be ignored during parsing.

Both TBF Footers and Headers use the same format to simplify parsing.

TBF Header Section

The TBF Header section contains all of a TBF Object's headers. All TBF Objects have a Base Header and the Base Header is always first. All headers are a multiple of 4 bytes long; the TBF Header section is a multiple of 4 bytes long.

After the Base Header come optional headers. Optional headers are structured as TLVs (type-length-values). Footers are encoded in the same way. Footers are also called headers for historical reasons: originally TBFs only had headers, and since footers follow the same format TBFs keep these types without changing their names.

TBF Header TLV Types

Each header is identified by a 16-bit number, as specified:

#![allow(unused)]
fn main() {
// Identifiers for the optional header structs.
enum TbfHeaderTypes {
    TbfHeaderMain = 1,
    TbfHeaderWriteableFlashRegions = 2,
    TbfHeaderPackageName = 3,
    TbfHeaderPicOption1 = 4,
    TbfHeaderFixedAddresses = 5,
    TbfHeaderPermissions = 6,
    TbfHeaderPersistent = 7,
    TbfHeaderKernelVersion = 8,
    TbfHeaderProgram = 9,
    TbfHeaderShortId = 10,
    TbfFooterCredentials = 128,
}
}

Each header starts with the following TLV structure:

#![allow(unused)]
fn main() {
// Type-length-value header to identify each struct.
struct TbfHeaderTlv {
    tipe: TbfHeaderTypes,    // 16 bit specifier of which struct follows
                             // When highest bit of the 16 bit specifier is set
                             // it indicates out-of-tree (private) TLV entry
    length: u16,             // Number of bytes of the following struct
}
}

TLV elements are aligned to 4 bytes. If a TLV element size is not 4-byte aligned, it will be padded with up to 3 bytes. Each element begins with a 16-bit type and 16-bit length followed by the element data:

0             2             4
+-------------+-------------+-----...---+
| Type        | Length      | Data      |
+-------------+-------------+-----...---+
  • Type is a 16-bit unsigned integer specifying the element type.
  • Length is a 16-bit unsigned integer specifying the size of the data field in bytes.
  • Data is the element specific data. The format for the data field is determined by its type.
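
As a small illustration of the alignment rule, the total space a parser skips to reach the next TLV element can be computed like this (a sketch, not kernel code):

/// Total space a TLV element occupies: 4 bytes of type+length fields, plus the
/// data, rounded up to the next multiple of 4 bytes.
fn tlv_element_size(length: u16) -> usize {
    let padded_data = ((length as usize) + 3) & !3; // round data up to 4 bytes
    4 + padded_data
}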

TBF Header Base

The TBF Header section contains a Base Header, followed by a sequence of type-length-value encoded elements. All fields in both the base header and TLV elements are little-endian. The base header is 16 bytes, and has 5 fields:

#![allow(unused)]
fn main() {
struct TbfHeaderV2Base {
    version: u16,     // Version of the Tock Binary Format (currently 2)
    header_size: u16, // Number of bytes in the TBF header section
    total_size: u32,  // Total padded size of the program image in bytes, including header
    flags: u32,       // Various flags associated with the application
    checksum: u32,    // XOR of all 4 byte words in the header, including existing optional structs
}
}

Encoding in flash:

0             2             4             6             8
+-------------+-------------+---------------------------+
| Version     | Header Size | Total Size                |
+-------------+-------------+---------------------------+
| Flags                     | Checksum                  |
+---------------------------+---------------------------+
  • Version a 16-bit unsigned integer specifying the TBF header version. Always 2.

  • Header Size a 16-bit unsigned integer specifying the length of the entire TBF header in bytes (including the base header and all TLV elements).

  • Total Size a 32-bit unsigned integer specifying the total size of the TBF in bytes (including the header).

  • Flags specifies properties of the process.

       3                   2                   1                   0
     1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    | Reserved                                                  |S|E|
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    
    • Bit 0 marks the process enabled. A 1 indicates the process is enabled. Disabled processes will not be launched at startup.
    • Bit 1 marks the process as sticky. A 1 indicates the process is sticky. Sticky processes require additional confirmation to be erased. For example, tockloader requires the --force flag to erase them. This is useful for services running as processes that should always be available.
    • Bits 2-31 are reserved and should be set to 0.
  • Checksum the result of XORing each 4-byte word in the header, excluding the word containing the checksum field itself.
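
A sketch of the checksum computation described above, operating on the raw bytes of the full header section (base header plus TLV elements); this is illustrative, not the kernel's implementation:

/// XOR every 4-byte little-endian word of the header, skipping the word that
/// holds the checksum itself (bytes 12..16 of the base header).
fn tbf_header_checksum(header: &[u8]) -> u32 {
    let mut checksum: u32 = 0;
    for (i, word) in header.chunks_exact(4).enumerate() {
        if i == 3 {
            continue; // the checksum word is not included in its own checksum
        }
        checksum ^= u32::from_le_bytes([word[0], word[1], word[2], word[3]]);
    }
    checksum
}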

Header TLVs

A TBF may contain arbitrary element types. To avoid type ID collisions between elements defined by the Tock project and elements defined out-of-tree, the ID space is partitioned into two segments. Type IDs defined by the Tock project will have their high bit (bit 15) unset, and type IDs defined out-of-tree should have their high bit set.

1 Main

All apps must have either a Main header or a Program header. The Main header is deprecated in favor of the Program header.

#![allow(unused)]
fn main() {
// All apps must have a Main Header or a Program Header; an app may have both.
// Without either, the "app" is considered padding and used to insert an empty
// linked-list element into the app flash space. If an app has both, it is the
// kernel's decision which to use. Older kernels use Main Headers, while newer
// (>= 2.1) kernels use Program Headers.
struct TbfHeaderMain {
    base: TbfHeaderTlv,
    init_fn_offset: u32,         // The function to call to start the application
    protected_trailer_size: u32, // The number of app-immutable bytes after the header
    minimum_ram_size: u32,       // How much RAM the application is requesting
}
}

The Main element has three 32-bit fields:

0             2             4             6             8
+-------------+-------------+---------------------------+
| Type (1)    | Length (12) | init_offset               |
+-------------+-------------+---------------------------+
| protected_trailer_size    | min_ram_size              |
+---------------------------+---------------------------+
  • init_offset the offset in bytes from the beginning of binary payload (i.e. the actual application binary) that contains the first instruction to execute (typically the _start symbol).
  • protected_trailer_size the size of the protected region after the TBF headers. Processes do not have write access to the protected region. TBF headers are contained in the protected region, but are not counted towards protected_trailer_size. The protected region thus starts at the first byte of the TBF base header, and is header_size + protected_trailer_size bytes in size.
  • minimum_ram_size the minimum amount of memory, in bytes, the process needs.

If the Main TLV header is not present, these values all default to 0.

The Main Header and Program Header have overlapping functionality. If a TBF Object has both, the kernel decides which to use. Tock is transitioning to having the Program Header as the standard one to use, but older kernels (2.0 and earlier) do not recognize it and use the Main Header.

2 Writeable Flash Region

#![allow(unused)]
fn main() {
// A defined flash region inside of the app's flash space.
struct TbfHeaderWriteableFlashRegion {
    writeable_flash_region_offset: u32,
    writeable_flash_region_size: u32,
}

// One or more specially identified flash regions the app intends to write.
struct TbfHeaderWriteableFlashRegions {
    base: TbfHeaderTlv,
    writeable_flash_regions: [TbfHeaderWriteableFlashRegion],
}
}

Writeable flash regions indicate portions of the binary that the process intends to mutate in flash.

0             2             4             6             8
+-------------+-------------+---------------------------+
| Type (2)    | Length      | offset                    |
+-------------+-------------+---------------------------+
| size                      | ...
+---------------------------+
  • offset the offset from the beginning of the binary of the writeable region.
  • size the size of the writeable region.

3 Package Name

#![allow(unused)]
fn main() {
// Optional package name for the app.
struct TbfHeaderPackageName {
    base: TbfHeaderTlv,
    package_name: [u8],      // UTF-8 string of the application name
}
}

The Package name specifies a unique name for the binary. Its only field is a UTF-8 encoded package name.

0             2             4
+-------------+-------------+----------...-+
| Type (3)    |   Length    | package_name |
+-------------+-------------+----------...-+
  • package_name is a UTF-8 encoded package name

5 Fixed Addresses

#![allow(unused)]
fn main() {
// Fixed and required addresses for process RAM and/or process flash.
struct TbfHeaderV2FixedAddresses {
    base: TbfHeaderTlv,
    start_process_ram: u32,
    start_process_flash: u32,
}
}

Fixed Addresses allows processes to specify specific addresses they need for flash and RAM. Tock supports position-independent apps, but not all apps are position-independent. This allows the kernel (and other tools) to avoid loading a non-position-independent binary at an incorrect location.

0             2             4             6             8
+-------------+-------------+---------------------------+
| Type (5)    | Length (8)  | ram_address               |
+-------------+-------------+---------------------------+
| flash_address             |
+---------------------------+
  • ram_address the address in memory the process's memory address must start at. If a fixed address is not required this should be set to 0xFFFFFFFF.
  • flash_address the address in flash that the process binary (not the header) must be located at. This would match the value provided for flash to the linker. If a fixed address is not required this should be set to 0xFFFFFFFF.

6 Permissions

#![allow(unused)]
fn main() {
struct TbfHeaderDriverPermission {
    driver_number: u32,
    offset: u32,
    allowed_commands: u64,
}

// A list of permissions for this app
struct TbfHeaderV2Permissions {
    base: TbfHeaderTlv,
    length: u16,
    perms: [TbfHeaderDriverPermission],
}
}

The Permissions section allows an app to specify which drivers it is allowed to use. All driver syscalls that an app will use must be listed; the list should not include drivers the app does not use.

The data is stored in the optional TbfHeaderV2Permissions field. This includes an array of all the perms.

0             2             4             6
+-------------+-------------+-------------+---------...--+
| Type (6)    | Length      | # perms     | perms        |
+-------------+-------------+-------------+---------...--+

The perms array is made up of a number of elements of TbfHeaderDriverPermission. The first 16-bit field in the TLV is the number of driver permission structures included in the perms array. The elements in TbfHeaderDriverPermission are described below:

Driver Permission Structure:
0             2             4             6             8
+-------------+-------------+---------------------------+
| driver_number             | offset                    |
+-------------+-------------+---------------------------+
| allowed_commands                                      |
+-------------------------------------------------------+
  • driver_number is the number of the driver that is allowed. This for example could be 0x00000 to indicate that the Alarm syscalls are allowed.
  • allowed_commands is a bit mask of the allowed commands. For example a value of 0b0001 indicates that only command 0 is allowed. 0b0111 would indicate that commands 2, 1 and 0 are all allowed. Note that this assumes offset is 0, for more details on offset see below.
  • The offset field in TbfHeaderDriverPermission indicates where the allowed_commands bitmask starts. All of the examples described in the paragraph above assume an offset of 0. The offset is multiplied by 64 (the size of the allowed_commands bitmask) to determine the first command number the bitmask covers. For example, an offset of 1 and an allowed_commands value of 0b0001 indicates that command 64 is allowed.

Subscribe and allow commands are always allowed as long as the specific driver_number has been specified. If a driver_number has not been specified for the capsule driver then allow and subscribe will be blocked.

Multiple TbfHeaderDriverPermission entries with the same driver_number can be included, so long as no offset is repeated for a single driver. When multiple offsets and allowed_commands values are used they are ORed together, so that they all apply.
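
A sketch of the offset/bitmask check described above; this only illustrates the encoding and is not kernel code:

/// Returns true if `command_num` is allowed by a single permission entry with
/// the given `offset` and `allowed_commands` bitmask. Each bitmask covers 64
/// consecutive command numbers starting at offset * 64.
fn command_allowed(offset: u32, allowed_commands: u64, command_num: u64) -> bool {
    let base = (offset as u64) * 64;
    if command_num < base || command_num >= base + 64 {
        return false; // covered by a different offset (or not at all)
    }
    (allowed_commands >> (command_num - base)) & 1 == 1
}

For example, command_allowed(1, 0b0001, 64) returns true, matching the offset example above.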

7 Storage Permissions

#![allow(unused)]
fn main() {
// A list of storage permissions for accessing persistent storage
struct TbfHeaderV2StoragePermissions {
    base: TbfHeaderTlv,
    write_id: u32,
    read_length: u16,
    read_ids: [u32],
    modify_length: u16,
    modify_ids: [u32],
}
}

The Storage Permissions section is used to identify what access the app has to persistent storage.

The data is stored in the TbfHeaderV2StoragePermissions field, which includes a write_id, a number of read_ids, and a number of modify_ids.

0             2             4             6             8
+-------------+-------------+---------------------------+
| Type (7)    | Length      | write_id                  |
+-------------+-------------+---------------------------+
| # Read IDs  | read_ids (4 bytes each)                 |
+-------------+------------------------------------...--+
| # Modify IDs| modify_ids (4 bytes each)               |
+--------------------------------------------------...--+
  • write_id indicates the id that all new persistent data is written with. All new data created will be stored with permissions from the write_id field. For existing data see the modify_ids section below. write_id does not need to be unique; that is, multiple apps can have the same id. A write_id of 0x00 indicates that the app cannot perform write operations.
  • read_ids list all of the ids that this app has permission to read. The read_length specifies the length of the read_ids in elements (not bytes). read_length can be 0 indicating that there are no read_ids.
  • modify_ids list all of the ids that this app has permission to modify or remove. modify_ids are different from write_id in that write_id applies to new data while modify_ids allows modification of existing data. The modify_length specifies the length of the modify_ids in elements (not bytes). modify_length can be 0 indicating that there are no modify_ids and the app cannot modify existing stored data (even data that it itself wrote).

For example, consider an app that has a write_id of 1, read_ids of 2, 3 and modify_ids of 3, 4. If the app was to write new data, it would be stored with id 1. The app is able to read data stored with id 2 or 3; note that it cannot read the data that it itself writes (id 1). The app is also able to overwrite existing data that was stored with id 3 or 4.

An example of when modify_ids would be useful is on a system where each app logs errors in its own write_region. An error-reporting app reports these errors over the network, and once the reported errors are acked erases them from the log. In this case, modify_ids allow an app to erase multiple different regions.

8 Kernel Version

#![allow(unused)]
fn main() {
// Kernel Version
struct TbfHeaderV2KernelVersion {
    base: TbfHeaderTlv,
    major: u16,
    minor: u16
}
}

The compatibility header is designed to prevent the kernel from running applications that are not compatible with it.

It defines the following two items:

  • Kernel major or V is the kernel major number (for Tock 2.0, it is 2)
  • Kernel minor or v is the kernel minor number (for Tock 2.0, it is 0)

Apps defining this header are compatible with kernel version ^V.v (>= V.v and < (V+1).0).

The kernel version header refers only to the ABI and API exposed by the kernel itself; it does not cover API changes within drivers.

A kernel major and minor version guarantees the ABI for exchanging data between kernel and userspace and the system call numbers.

0             2             4             6             8
+-------------+-------------+---------------------------+
| Type (8)    | Length (4)  | Kernel major| Kernel minor|
+-------------+-------------+---------------------------+
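
The ^V.v rule above amounts to a check like the following sketch, where a kernel with version (kernel_major, kernel_minor) decides whether it can run an app that declared (app_major, app_minor):

/// An app declaring kernel version V.v is compatible with kernels >= V.v and
/// < (V+1).0, i.e. the same major version and at least the declared minor.
fn kernel_version_compatible(
    kernel_major: u16, kernel_minor: u16,
    app_major: u16, app_minor: u16,
) -> bool {
    kernel_major == app_major && kernel_minor >= app_minor
}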

9 Program

#![allow(unused)]
fn main() {
// A Program Header specifies the end of the application binary within the
// TBF, such that the application binary can be followed by footers. It also
// has a version number, such that multiple versions of the same application
// can be installed.
pub struct TbfHeaderV2Program {
    init_fn_offset: u32,
    protected_trailer_size: u32,
    minimum_ram_size: u32,
    binary_end_offset: u32,
    version: u32,
}
}

A Program Header is an extended form of the Main Header. It adds two fields, binary_end_offset and version. The binary_end_offset field allows the kernel to identify where in the TBF object the application binary ends. The gap between the end of the application binary and the end of the TBF object can contain footers.

0             2             4             6             8
+-------------+-------------+---------------------------+
| Type (9)    | Length (20) | init_offset               |
+-------------+-------------+---------------------------+
| protected_trailer_size    | min_ram_size              |
+---------------------------+---------------------------+
| binary_end_offset         | version                   |
+---------------------------+---------------------------+
  • init_offset the offset in bytes from the beginning of binary payload (i.e. the actual application binary) that contains the first instruction to execute (typically the _start symbol).
  • protected_trailer_size the size of the protected region after the TBF headers. Processes do not have write access to the protected region. TBF headers are contained in the protected region, but are not counted towards protected_trailer_size. The protected region thus starts at the first byte of the TBF base header, and is header_size + protected_trailer_size bytes in size.
  • minimum_ram_size the minimum amount of memory, in bytes, the process needs.
  • binary_end_offset specifies the offset from the beginning of the TBF Object at which the Userspace Binary ends and optional footers begin.
  • version specifies a version number for the application implemented by the Userspace Binary. This allows a kernel to distinguish different versions of a given application.

If a Program Header is not present, binary_end_offset can be considered to be the total_size from the Base Header and version is 0.

The Main Header and Program Header have overlapping functionality. If a TBF Object has both, the kernel decides which to use. Tock is transitioning to having the Program Header as the standard one to use, but older kernels (2.0 and earlier) do not recognize it and use the Main Header.

10 ShortID

#![allow(unused)]
fn main() {
struct TbfHeaderV2ShortId {
    base: TbfHeaderTlv,
    short_id: u32,
}
}

This header allows the compile-time workflow to specify a fixed ShortId the kernel can use to assign a ShortId to this application. The header only includes the 32 bit ShortId.

Note that fixed ShortIds are defined to be nonzero. Therefore, even if this header is present but its short_id field is 0, the kernel will not assign a fixed ShortId.

Also, this header is just one method for indicating what ShortId an application should be assigned. The kernel must be configured to use this header and any particular Tock kernel may ignore this header.

0             2             4             6             8
+-------------+-------------+---------------------------+
| Type (10)   | Length (4)  | short_id                  |
+-------------+-------------+---------------------------+
  • short_id: The 32-bit nonzero fixed ShortId.

128 Credentials

#![allow(unused)]
fn main() {
// Credentials footer. The length field of the TLV determines the size of the
// data slice.
pub struct TbfFooterV2Credentials {
    format: u32,
    data: &'static [u8],
}
}

A Credentials Footer contains cryptographic credentials for the integrity and possibly identity of a Userspace Binary. A Credentials Footer has the following format:

0             2             4             6             8
+-------------+-------------+---------------------------+
| Type (128)  | Length (4+n)| format                    |
+-------------+-------------+---------------------------+
| data                                                  |
+--------------------------------------------------...--+

The length of the data field is defined by the Length field. If the data field is n bytes long, the Length field is 4+n. The format field defines the format of the data field:

Format Identifier   Credential Type   Credential Description
0                   Reserved
1                   Rsa3072Key        A 384 byte RSA public key n and a 384 byte PKCS#1 v1.5 signature.
2                   Rsa4096Key        A 512 byte RSA public key n and a 512 byte PKCS#1 v1.5 signature.
3                   SHA256            A SHA256 hash.
4                   SHA384            A SHA384 hash.
5                   SHA512            A SHA512 hash.
0xA                 RSA2048           A 256 byte PKCS#1 v1.5 signature.
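
As an illustration, the format identifiers in the table above could be decoded with a match like the following; the enum name and the choice to return Option are assumptions for this sketch, not the kernel's actual types:

// Maps the numeric format identifier from a Credentials Footer to a typed value.
// Unknown identifiers are simply not recognized.
#[derive(Clone, Copy, Debug, PartialEq)]
pub enum CredentialsFormat {
    Reserved,
    Rsa3072Key,
    Rsa4096Key,
    Sha256,
    Sha384,
    Sha512,
    Rsa2048,
}

pub fn parse_format(format: u32) -> Option<CredentialsFormat> {
    match format {
        0 => Some(CredentialsFormat::Reserved),
        1 => Some(CredentialsFormat::Rsa3072Key),
        2 => Some(CredentialsFormat::Rsa4096Key),
        3 => Some(CredentialsFormat::Sha256),
        4 => Some(CredentialsFormat::Sha384),
        5 => Some(CredentialsFormat::Sha512),
        0xA => Some(CredentialsFormat::Rsa2048),
        _ => None,
    }
}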

Module Documentation

These pages document specific modules in the Tock codebase.

Process Console

process_console is a capsule that implements a small shell over the UART that allows a terminal to inspect the kernel and control userspace processes.

Setup

Here is how to add process_console to a board's main.rs (this example is taken from the micro:bit's use of the process console):

#![allow(unused)]
fn main() {
let process_printer = components::process_printer::ProcessPrinterTextComponent::new()
      .finalize(components::process_printer_text_component_static!());

let _process_console = components::process_console::ProcessConsoleComponent::new(
      board_kernel,
      uart_mux,
      mux_alarm,
      process_printer,
      Some(reset_function),
  )
  .finalize(components::process_console_component_static!(
      nrf52833::rtc::Rtc
  ));
let _ = _process_console.start();
}

Using Process Console

With this capsule properly added to a board's main.rs and the Tock kernel loaded to the board, make sure there is a serial connection to the board. Likely, this just means connecting a USB cable from a computer to the board. Next, establish a serial console connection to the board. An easy way to do this is to run:

$ tockloader listen
[INFO   ] No device name specified. Using default name "tock".
[INFO   ] No serial port with device name "tock" found.
[INFO   ] Found 2 serial ports.
Multiple serial port options found. Which would you like to use?
[0]	/dev/ttyS4 - n/a
[1]	/dev/ttyACM0 - "BBC micro:bit CMSIS-DAP" - mbed Serial Port

Which option? [0] 1
[INFO   ] Using "/dev/ttyACM0 - "BBC micro:bit CMSIS-DAP" - mbed Serial Port".
[INFO   ] Listening for serial output.

tock$

Commands

This module provides a simple text-based console to inspect and control which processes are running. The console has several commands:

  • help - prints the available commands and arguments
  • list - lists the current processes with their IDs and running state
  • status - prints the current system status
  • start n - starts the stopped process with name n
  • stop n - stops the process with name n
  • terminate n - terminates the running process with name n, moving it to the Terminated state
  • boot n - tries to restart a Terminated process with name n
  • fault n - forces the process with name n into a fault state
  • panic - causes the kernel to run the panic handler
  • reset - causes the board to reset
  • kernel - prints the kernel memory map
  • process n - prints the memory map of process with name n
  • console-start - activate the process console
  • console-stop - deactivate the process console

For the examples below we will have two processes on the board: blink (which blinks all the LEDs connected to the kernel) and c_hello (which prints 'Hello World' when the console is started). A micro:bit v2 board was used to run these commands, so the output may vary on other devices. We assume the user has a serial connection to the board, either through tockloader or another serial port program. With that console open, you can issue commands.

help

To get a list of the available commands, use the help command:

tock$ help
Welcome to the process console.
Valid commands are: help status list stop start fault boot terminate process kernel panic

list

To see all of the processes on the board, use list:

tock$ list
PID    Name                Quanta  Syscalls  Restarts  Grants  State
0      blink                    0     26818         0   1/14   Yielded
1      c_hello                  0         8         0   1/14   Yielded

list Command Fields

  • PID: The identifier for the process. This can change if the process restarts.
  • Name: The process name.
  • Quanta: How many times this process has exceeded its allotted time quanta.
  • Syscalls: The number of system calls the process has made to the kernel.
  • Restarts: How many times this process has crashed and been restarted by the kernel.
  • Grants: The number of grants that have been initialized for the process out of the total number of grants defined by the kernel.
  • State: The state the process is in.

status

To get a general view of the system, use the status command:

tock$ status
Total processes: 2
Active processes: 2
Timeslice expirations: 0

start and stop

You can control processes with the start and stop commands:

tock$ stop blink
Process blink stopped
tock$ list
PID    Name                Quanta  Syscalls  Restarts  Grants  State
2      blink                    0     22881         1   1/14   StoppedYielded
1      c_hello                  0         8         0   1/14   Yielded
tock$ start blink
Process blink resumed.
tock$ list
PID    Name                Quanta  Syscalls  Restarts  Grants  State
2      blink                    0     23284         1   1/14   Yielded
1      c_hello                  0         8         0   1/14   Yielded

terminate and boot

You can kill a process with terminate and then restart it with boot:

tock$ terminate blink
Process blink terminated
tock$ list
PID    Name                Quanta  Syscalls  Restarts  Grants  State
2      blink                    0     25640         1   0/14   Terminated
1      c_hello                  0         8         0   1/14   Yielded
tock$ boot blink
tock$ list
PID    Name                Quanta  Syscalls  Restarts  Grants  State
3      blink                    0       251         2   1/14   Yielded
1      c_hello                  0         8         0   1/14   Yielded

fault

To force a process into a fault state, use the fault command:

tock$ fault blink
panicked at 'Process blink had a fault', kernel/src/process_standard.rs:412:17
  Kernel version 899d73cdd

---| No debug queue found. You can set it with the DebugQueue component.

---| Cortex-M Fault Status |---
No Cortex-M faults detected.

---| App Status |---
𝐀𝐩𝐩: blink   -   [Faulted]
Events Queued: 0   Syscall Count: 2359   Dropped Upcall Count: 0
Restart Count: 0
Last Syscall: Yield { which: 1, address: 0x0 }
Completion Code: None


╔═══════════╤══════════════════════════════════════════╗
║  Address  │ Region Name    Used | Allocated (bytes)  ║
╚0x20006000═╪══════════════════════════════════════════╝
            │ Grant Ptrs      112
            │ Upcalls         320
            │ Process         920
  0x20005AB8 ┼───────────────────────────────────────────
            │ ▼ Grant          24
  0x20005AA0 ┼───────────────────────────────────────────
            │ Unused
  0x200049FC ┼───────────────────────────────────────────
            │ ▲ Heap            0 |   4260               S
  0x200049FC ┼─────────────────────────────────────────── R
            │ Data            508 |    508               A
  0x20004800 ┼─────────────────────────────────────────── M
            │ ▼ Stack         232 |   2048
  0x20004718 ┼───────────────────────────────────────────
            │ Unused
  0x20004000 ┴───────────────────────────────────────────
            .....
  0x00040800 ┬─────────────────────────────────────────── F
            │ App Flash      1996                        L
  0x00040034 ┼─────────────────────────────────────────── A
            │ Protected        52                        S
  0x00040000 ┴─────────────────────────────────────────── H

  R0 : 0x00000001    R6 : 0x000406B0
  R1 : 0x00000000    R7 : 0x20004000
  R2 : 0x0000000B    R8 : 0x00000000
  R3 : 0x0000000B    R10: 0x00000000
  R4 : 0x200047AB    R11: 0x00000000
  R5 : 0x200047AB    R12: 0x20004750
  R9 : 0x20004800 (Static Base Register)
  SP : 0x20004778 (Process Stack Pointer)
  LR : 0x00040457
  PC : 0x0004045E
YPC : 0x0004045E

APSR: N 0 Z 0 C 1 V 0 Q 0
      GE 0 0 0 0
EPSR: ICI.IT 0x00
      ThumbBit true

Total number of grant regions defined: 14
  Grant  0 : --          Grant  5 : --          Grant 10 : --
  Grant  1 : --          Grant  6 : --          Grant 11 : --
  Grant  2 0x0: 0x20005aa0  Grant  7 : --          Grant 12 : --
  Grant  3 : --          Grant  8 : --          Grant 13 : --
  Grant  4 : --          Grant  9 : --

Cortex-M MPU
  Region 0: [0x20004000:0x20005000], length: 4096 bytes; ReadWrite (0x3)
    Sub-region 0: [0x20004000:0x20004200], Enabled
    Sub-region 1: [0x20004200:0x20004400], Enabled
    Sub-region 2: [0x20004400:0x20004600], Enabled
    Sub-region 3: [0x20004600:0x20004800], Enabled
    Sub-region 4: [0x20004800:0x20004A00], Enabled
    Sub-region 5: [0x20004A00:0x20004C00], Disabled
    Sub-region 6: [0x20004C00:0x20004E00], Disabled
    Sub-region 7: [0x20004E00:0x20005000], Disabled
  Region 1: Unused
  Region 2: [0x00040000:0x00040800], length: 2048 bytes; UnprivilegedReadOnly (0x2)
    Sub-region 0: [0x00040000:0x00040100], Enabled
    Sub-region 1: [0x00040100:0x00040200], Enabled
    Sub-region 2: [0x00040200:0x00040300], Enabled
    Sub-region 3: [0x00040300:0x00040400], Enabled
    Sub-region 4: [0x00040400:0x00040500], Enabled
    Sub-region 5: [0x00040500:0x00040600], Enabled
    Sub-region 6: [0x00040600:0x00040700], Enabled
    Sub-region 7: [0x00040700:0x00040800], Enabled
  Region 3: Unused
  Region 4: Unused
  Region 5: Unused
  Region 6: Unused
  Region 7: Unused

To debug, run `make debug RAM_START=0x20004000 FLASH_INIT=0x4005d`
in the app's folder and open the .lst file.

𝐀𝐩𝐩: c_hello   -   [Yielded]
Events Queued: 0   Syscall Count: 8   Dropped Upcall Count: 0
Restart Count: 0
Last Syscall: Yield { which: 1, address: 0x0 }
Completion Code: None


╔═══════════╤══════════════════════════════════════════╗
║  Address  │ Region Name    Used | Allocated (bytes)  ║
╚0x20008000═╪══════════════════════════════════════════╝
            │ Grant Ptrs      112
            │ Upcalls         320
            │ Process         920
  0x20007AB8 ┼───────────────────────────────────────────
            │ ▼ Grant          76
  0x20007A6C ┼───────────────────────────────────────────
            │ Unused
  0x20006A04 ┼───────────────────────────────────────────
            │ ▲ Heap            0 |   4200               S
  0x20006A04 ┼─────────────────────────────────────────── R
            │ Data            516 |    516               A
  0x20006800 ┼─────────────────────────────────────────── M
            │ ▼ Stack         128 |   2048
  0x20006780 ┼───────────────────────────────────────────
            │ Unused
  0x20006000 ┴───────────────────────────────────────────
            .....
  0x00041000 ┬─────────────────────────────────────────── F
            │ App Flash      1996                        L
  0x00040834 ┼─────────────────────────────────────────── A
            │ Protected        52                        S
  0x00040800 ┴─────────────────────────────────────────── H

  R0 : 0x00000001    R6 : 0x00040C50
  R1 : 0x00000000    R7 : 0x20006000
  R2 : 0x00000000    R8 : 0x00000000
  R3 : 0x00000000    R10: 0x00000000
  R4 : 0x00040834    R11: 0x00000000
  R5 : 0x20006000    R12: 0x200067F0
  R9 : 0x20006800 (Static Base Register)
  SP : 0x200067D0 (Process Stack Pointer)
  LR : 0x00040A3F
  PC : 0x00040A46
YPC : 0x00040A46

APSR: N 0 Z 0 C 1 V 0 Q 0
      GE 0 0 0 0
EPSR: ICI.IT 0x00
      ThumbBit true

Total number of grant regions defined: 14
  Grant  0 : --          Grant  5 : --          Grant 10 : --
  Grant  1 : --          Grant  6 : --          Grant 11 : --
  Grant  2 : --          Grant  7 : --          Grant 12 : --
  Grant  3 : --          Grant  8 : --          Grant 13 : --
  Grant  4 0x1: 0x20007a6c  Grant  9 : --

Cortex-M MPU
  Region 0: [0x20006000:0x20007000], length: 4096 bytes; ReadWrite (0x3)
    Sub-region 0: [0x20006000:0x20006200], Enabled
    Sub-region 1: [0x20006200:0x20006400], Enabled
    Sub-region 2: [0x20006400:0x20006600], Enabled
    Sub-region 3: [0x20006600:0x20006800], Enabled
    Sub-region 4: [0x20006800:0x20006A00], Enabled
    Sub-region 5: [0x20006A00:0x20006C00], Enabled
    Sub-region 6: [0x20006C00:0x20006E00], Disabled
    Sub-region 7: [0x20006E00:0x20007000], Disabled
  Region 1: Unused
  Region 2: [0x00040800:0x00041000], length: 2048 bytes; UnprivilegedReadOnly (0x2)
    Sub-region 0: [0x00040800:0x00040900], Enabled
    Sub-region 1: [0x00040900:0x00040A00], Enabled
    Sub-region 2: [0x00040A00:0x00040B00], Enabled
    Sub-region 3: [0x00040B00:0x00040C00], Enabled
    Sub-region 4: [0x00040C00:0x00040D00], Enabled
    Sub-region 5: [0x00040D00:0x00040E00], Enabled
    Sub-region 6: [0x00040E00:0x00040F00], Enabled
    Sub-region 7: [0x00040F00:0x00041000], Enabled
  Region 3: Unused
  Region 4: Unused
  Region 5: Unused
  Region 6: Unused
  Region 7: Unused

To debug, run `make debug RAM_START=0x20006000 FLASH_INIT=0x4085d`
in the app's folder and open the .lst file.

panic

You can also force a kernel panic with the panic command:

tock$ panic

panicked at 'Process Console forced a kernel panic.', capsules/src/process_console.rs:959:29
  Kernel version 899d73cdd

---| No debug queue found. You can set it with the DebugQueue component.

---| Cortex-M Fault Status |---
No Cortex-M faults detected.

---| App Status |---
𝐀𝐩𝐩: blink   -   [Yielded]
Events Queued: 0   Syscall Count: 1150   Dropped Upcall Count: 0
Restart Count: 0
Last Syscall: Yield { which: 1, address: 0x0 }
Completion Code: None


╔═══════════╤══════════════════════════════════════════╗
║  Address  │ Region Name    Used | Allocated (bytes)  ║
╚0x20006000═╪══════════════════════════════════════════╝
            │ Grant Ptrs      112
            │ Upcalls         320
            │ Process         920
  0x20005AB8 ┼───────────────────────────────────────────
            │ ▼ Grant          24
  0x20005AA0 ┼───────────────────────────────────────────
            │ Unused
  0x200049FC ┼───────────────────────────────────────────
            │ ▲ Heap            0 |   4260               S
  0x200049FC ┼─────────────────────────────────────────── R
            │ Data            508 |    508               A
  0x20004800 ┼─────────────────────────────────────────── M
            │ ▼ Stack         232 |   2048
  0x20004718 ┼───────────────────────────────────────────
            │ Unused
  0x20004000 ┴───────────────────────────────────────────
            .....
  0x00040800 ┬─────────────────────────────────────────── F
            │ App Flash      1996                        L
  0x00040034 ┼─────────────────────────────────────────── A
            │ Protected        52                        S
  0x00040000 ┴─────────────────────────────────────────── H

  R0 : 0x00000001    R6 : 0x000406B0
  R1 : 0x00000000    R7 : 0x20004000
  R2 : 0x00000004    R8 : 0x00000000
  R3 : 0x00000004    R10: 0x00000000
  R4 : 0x200047AB    R11: 0x00000000
  R5 : 0x200047AB    R12: 0x20004750
  R9 : 0x20004800 (Static Base Register)
  SP : 0x20004778 (Process Stack Pointer)
  LR : 0x00040457
  PC : 0x0004045E
YPC : 0x0004045E

APSR: N 0 Z 0 C 1 V 0 Q 0
      GE 0 0 0 0
EPSR: ICI.IT 0x00
      ThumbBit true

Total number of grant regions defined: 14
  Grant  0 : --          Grant  5 : --          Grant 10 : --
  Grant  1 : --          Grant  6 : --          Grant 11 : --
  Grant  2 0x0: 0x20005aa0  Grant  7 : --          Grant 12 : --
  Grant  3 : --          Grant  8 : --          Grant 13 : --
  Grant  4 : --          Grant  9 : --

Cortex-M MPU
  Region 0: [0x20004000:0x20005000], length: 4096 bytes; ReadWrite (0x3)
    Sub-region 0: [0x20004000:0x20004200], Enabled
    Sub-region 1: [0x20004200:0x20004400], Enabled
    Sub-region 2: [0x20004400:0x20004600], Enabled
    Sub-region 3: [0x20004600:0x20004800], Enabled
    Sub-region 4: [0x20004800:0x20004A00], Enabled
    Sub-region 5: [0x20004A00:0x20004C00], Disabled
    Sub-region 6: [0x20004C00:0x20004E00], Disabled
    Sub-region 7: [0x20004E00:0x20005000], Disabled
  Region 1: Unused
  Region 2: [0x00040000:0x00040800], length: 2048 bytes; UnprivilegedReadOnly (0x2)
    Sub-region 0: [0x00040000:0x00040100], Enabled
    Sub-region 1: [0x00040100:0x00040200], Enabled
    Sub-region 2: [0x00040200:0x00040300], Enabled
    Sub-region 3: [0x00040300:0x00040400], Enabled
    Sub-region 4: [0x00040400:0x00040500], Enabled
    Sub-region 5: [0x00040500:0x00040600], Enabled
    Sub-region 6: [0x00040600:0x00040700], Enabled
    Sub-region 7: [0x00040700:0x00040800], Enabled
  Region 3: Unused
  Region 4: Unused
  Region 5: Unused
  Region 6: Unused
  Region 7: Unused

To debug, run `make debug RAM_START=0x20004000 FLASH_INIT=0x4005d`
in the app's folder and open the .lst file.

𝐀𝐩𝐩: c_hello   -   [Yielded]
Events Queued: 0   Syscall Count: 8   Dropped Upcall Count: 0
Restart Count: 0
Last Syscall: Yield { which: 1, address: 0x0 }
Completion Code: None


╔═══════════╤══════════════════════════════════════════╗
║  Address  │ Region Name    Used | Allocated (bytes)  ║
╚0x20008000═╪══════════════════════════════════════════╝
            │ Grant Ptrs      112
            │ Upcalls         320
            │ Process         920
  0x20007AB8 ┼───────────────────────────────────────────
            │ ▼ Grant          76
  0x20007A6C ┼───────────────────────────────────────────
            │ Unused
  0x20006A04 ┼───────────────────────────────────────────
            │ ▲ Heap            0 |   4200               S
  0x20006A04 ┼─────────────────────────────────────────── R
            │ Data            516 |    516               A
  0x20006800 ┼─────────────────────────────────────────── M
            │ ▼ Stack         128 |   2048
  0x20006780 ┼───────────────────────────────────────────
            │ Unused
  0x20006000 ┴───────────────────────────────────────────
            .....
  0x00041000 ┬─────────────────────────────────────────── F
            │ App Flash      1996                        L
  0x00040834 ┼─────────────────────────────────────────── A
            │ Protected        52                        S
  0x00040800 ┴─────────────────────────────────────────── H

  R0 : 0x00000001    R6 : 0x00040C50
  R1 : 0x00000000    R7 : 0x20006000
  R2 : 0x00000000    R8 : 0x00000000
  R3 : 0x00000000    R10: 0x00000000
  R4 : 0x00040834    R11: 0x00000000
  R5 : 0x20006000    R12: 0x200067F0
  R9 : 0x20006800 (Static Base Register)
  SP : 0x200067D0 (Process Stack Pointer)
  LR : 0x00040A3F
  PC : 0x00040A46
YPC : 0x00040A46

APSR: N 0 Z 0 C 1 V 0 Q 0
      GE 0 0 0 0
EPSR: ICI.IT 0x00
      ThumbBit true

Total number of grant regions defined: 14
  Grant  0 : --          Grant  5 : --          Grant 10 : --
  Grant  1 : --          Grant  6 : --          Grant 11 : --
  Grant  2 : --          Grant  7 : --          Grant 12 : --
  Grant  3 : --          Grant  8 : --          Grant 13 : --
  Grant  4 0x1: 0x20007a6c  Grant  9 : --

Cortex-M MPU
  Region 0: [0x20006000:0x20007000], length: 4096 bytes; ReadWrite (0x3)
    Sub-region 0: [0x20006000:0x20006200], Enabled
    Sub-region 1: [0x20006200:0x20006400], Enabled
    Sub-region 2: [0x20006400:0x20006600], Enabled
    Sub-region 3: [0x20006600:0x20006800], Enabled
    Sub-region 4: [0x20006800:0x20006A00], Enabled
    Sub-region 5: [0x20006A00:0x20006C00], Enabled
    Sub-region 6: [0x20006C00:0x20006E00], Disabled
    Sub-region 7: [0x20006E00:0x20007000], Disabled
  Region 1: Unused
  Region 2: [0x00040800:0x00041000], length: 2048 bytes; UnprivilegedReadOnly (0x2)
    Sub-region 0: [0x00040800:0x00040900], Enabled
    Sub-region 1: [0x00040900:0x00040A00], Enabled
    Sub-region 2: [0x00040A00:0x00040B00], Enabled
    Sub-region 3: [0x00040B00:0x00040C00], Enabled
    Sub-region 4: [0x00040C00:0x00040D00], Enabled
    Sub-region 5: [0x00040D00:0x00040E00], Enabled
    Sub-region 6: [0x00040E00:0x00040F00], Enabled
    Sub-region 7: [0x00040F00:0x00041000], Enabled
  Region 3: Unused
  Region 4: Unused
  Region 5: Unused
  Region 6: Unused
  Region 7: Unused

To debug, run `make debug RAM_START=0x20006000 FLASH_INIT=0x4085d`
in the app's folder and open the .lst file.

reset

You can also reset the board with the reset command:

tock$ reset

kernel

You can view the kernel memory map with the kernel command:

tock$ kernel
Kernel version: 2.1 (build 899d73cdd)

╔═══════════╤══════════════════════════════╗
║  Address  │ Region Name    Used (bytes)  ║
╚0x20003DAC═╪══════════════════════════════╝
            │   BSS         11692
  0x20001000 ┼─────────────────────────────── S
            │   Relocate        0            R
  0x20001000 ┼─────────────────────────────── A
            │ ▼ Stack        4096            M
  0x20000000 ┼───────────────────────────────
            .....
  0x0001A000 ┼─────────────────────────────── F
            │   RoData      27652            L
  0x000133FC ┼─────────────────────────────── A
            │   Code        78844            S
  0x00000000 ┼─────────────────────────────── H

process

You can also view the memory map for a process with the process command:

tock$ process c_hello
𝐀𝐩𝐩: c_hello   -   [Yielded]
Events Queued: 0   Syscall Count: 8   Dropped Upcall Count: 0
Restart Count: 0
Last Syscall: Yield { which: 1, address: 0x0 }
Completion Code: None


╔═══════════╤══════════════════════════════════════════╗
║  Address  │ Region Name    Used | Allocated (bytes)  ║
╚0x20008000═╪══════════════════════════════════════════╝
            │ Grant Ptrs      112
            │ Upcalls         320
            │ Process         920
  0x20007AB8 ┼───────────────────────────────────────────
            │ ▼ Grant          76
  0x20007A6C ┼───────────────────────────────────────────
            │ Unused
  0x20006A04 ┼───────────────────────────────────────────
            │ ▲ Heap            0 |   4200               S
  0x20006A04 ┼─────────────────────────────────────────── R
            │ Data            516 |    516               A
  0x20006800 ┼─────────────────────────────────────────── M
            │ ▼ Stack         128 |   2048
  0x20006780 ┼───────────────────────────────────────────
            │ Unused
  0x20006000 ┴───────────────────────────────────────────
            .....
  0x00041000 ┬─────────────────────────────────────────── F
            │ App Flash      1996                        L
  0x00040834 ┼─────────────────────────────────────────── A
            │ Protected        52                        S
  0x00040800 ┴─────────────────────────────────────────── H

console-start

This command activates the process console so that it responds to commands and shows the prompt. This reverses console-stop.

console-stop

This command puts the process console in a hibernation state. The console is still running in the sense that it is receiving UART data, but it will not respond to any commands other than console-start. It will also not show the prompt.

The purpose of this mode is to "free up" the general UART console for apps that use the console extensively or interactively.

The console can be re-activated with console-start.

Features

Command History

You can use the up and down arrows to scroll through the command history and view the commands you have previously run. If you have entered more commands than the history can hold, the oldest commands are overwritten. You can scroll in both directions: the up arrow moves toward older commands and the down arrow toward newer ones.

If the custom history size is set to 0, the history is disabled and the Rust compiler can shrink the binary by removing the dead code. If you are typing a command and accidentally press the up arrow key, press the down arrow to get back the command you were typing. Similarly, if you scroll through the history to edit a command and accidentally press the up or down arrow key, scroll to the bottom of the history to return to the command you were typing.

Here is how to set a custom size for the command history (which the ProcessConsole structure uses to keep track of typed commands) in a board's main.rs:

#![allow(unused)]
fn main() {
const COMMAND_HISTORY_LEN : usize = 30;

pub struct Platform {
    ...
    pconsole: &'static capsules::process_console::ProcessConsole<
        'static,
        COMMAND_HISTORY_LEN,
        // or { capsules::process_console::DEFAULT_COMMAND_HISTORY_LEN }
        // for the default behaviour
        VirtualMuxAlarm<'static, nrf52833::rtc::Rtc<'static>>,
        components::process_console::Capability,
    >,
    ...
}

let _process_console = components::process_console::ProcessConsoleComponent::new(
    board_kernel,
    uart_mux,
    mux_alarm,
    process_printer,
    Some(reset_function),
)
.finalize(components::process_console_component_static!(
    nrf52833::rtc::Rtc,
    COMMAND_HISTORY_LEN // or nothing for the default behaviour
));
}

Note: To disable the command history, set COMMAND_HISTORY_LEN to 0 or 1 (a size of 1 also disables the history, because the first slot of the command history is reserved for recovering the in-progress command after an accidental press of the up or down arrow key).

Command Navigation

Using the Left and Right arrow keys, you can move the cursor to the desired position within a command. Pressing the Home key moves the cursor to the beginning of the command, and pressing the End key moves it to the end of the command currently displayed. When you type a character, it is inserted into the command at the cursor position and the cursor advances one position to the right. Pressing Backspace removes the character before the cursor (the opposite of inserting a character) and moves the cursor one position to the left. Pressing the Delete key removes the character under the cursor; in this case the cursor does not move.

Pressing Enter in the middle of a command is the same as pressing Enter at the end of the command (you do not need to press End and then Enter for the command to be processed).

Note: These features try to provide the same experience as working in a bash terminal: moving freely within a command and modifying it without retyping it from the beginning.

Inserting multiple whitespace characters between the parts of a command, or at the beginning of a command, does not affect the resulting command. For example:

# The command:
tock$        stop      blink

# Will be interpreted as:
tock$ stop blink

Tock Networking Stack Design Document

NOTE: This document is a work in progress.

This document describes the design of the Networking stack on Tock.

The design described in this document is based on ideas contributed by Phil Levis, Amit Levy, Paul Crews, Hubert Teo, Mateo Garcia, Daniel Giffin, and Hudson Ayers.

Table of Contents

This document is split into several sections. These are as follows:

  1. Principles - Describes the main principles which the design of this stack intended to meet, along with some justification of why these principles matter. Ultimately, the design should follow from these principles.

  2. Stack Diagram - Graphically depicts the layout of the stack

  3. Explanation of queuing - Describes where packets are queued prior to transmission.

  4. List of Traits - Describes the traits that will exist at each layer of the stack. For traits that may seem surprisingly complex, examples are given of specific messages that require the more complex trait instead of the more obvious, simpler trait that might be expected.

  5. Explanation of Queuing - Describe queuing principles for this stack

  6. Description of rx path

  7. Description of the userland interface to the networking stack

  8. Implementation Details - Describes how certain implementations of these traits will work, providing some examples with pseudocode or commented explanations of functionality

  9. Example Message Traversals - Shows how different example messages (Thread or otherwise) will traverse the stack

Principles

  1. Keep the simple case simple

    • Sending an IP packet via an established network should not require a more complicated interface than send(destination, packet)
    • If functionality were added to allow for the transmission of IP packets over the BLE interface, this IP send function should not have to deal with any options or MessageInfo structs that include 802.15.4 layer information.
    • This principle reflects a desire to limit the complexity of Thread/the Tock networking stack to the capsules that implement the stack. This prevents the burden of this complexity from being passed up to whatever applications use Thread.
  2. Layering is separate from encapsulation

    • Libraries that handle encapsulation should not be contained within any specific layering construct. For example, if the Thread control unit wants to encapsulate a UDP payload inside of a UDP packet inside of an IP packet, it should be able to do so using encapsulation libraries and get the resulting IP packet without having to pass through all of the protocol layers.
    • Accordingly, implementations of layers can take advantage of these encapsulation libraries, but are not required to.
  3. Dataplane traits are Thread-independent

    • For example, the IP trait should not make any assumption that send() will be called for a message that will be passed down to the 15.4 layer, in case this IP trait is used on top of an implementation that passes IP packets down to be sent over a BLE link layer. Accordingly the IP trait can not expose any arguments regarding 802.15.4 security parameters.
    • Even for instances where the only implementation of a trait in the near future will be a Thread-based implementation, the traits should not require anything that limits them to Thread-based implementations.
  4. Transmission and reception APIs are decoupled

    • This allows for instances where receive and send_done callbacks should be delivered to different clients (e.g. a server listening on all addresses but also sending messages from specific addresses)
    • Prevents send path from having to navigate the added complexity required for Thread to determine whether to forward received messages up the stack

Stack Diagram

IPv6 over ethernet:      Non-Thread 15.4:   Thread Stack:                                       Encapsulation Libraries
+-------------------+-------------------+----------------------------+
|                         Application                                |-------------------\
----------------------------------------+-------------+---+----------+                    \
|TCP Send| UDP Send |TCP Send| UDP Send |  | TCP Send |   | UDP Send |--\                  v
+--------+----------+--------+----------+  +----------+   +----------+   \               +------------+  +------------+
|     IP Send       |     IP Send       |  |         IP Send         |    \      ----->  | UDP Packet |  | TCP Packet |
|                   |                   |  +-------------------------+     \    /        +------------+  +------------+
|                   |                   |                            |      \  /         +-----------+
|                   |                   |                            |       -+------->  | IP Packet |
|                   |                   |       THREAD               |       /           +-----------+
| IP Send calls eth | IP Send calls 15.4|                   <--------|------>            +-------------------------+
| 6lowpan libs with | 6lowpan libs with |                            |       \ ------->  | 6lowpan compress_Packet |
| default values    | default values    |                            |        \          +-------------------------+
|                   |                   |                            |         \         +-------------------------+
|                   |                   +                +-----------|          ------>  | 6lowpan fragment_Packet |
|                   |                   |                | 15.4 Send |                   +-------------------------+
|-------------------|-------------------+----------------------------+
|     ethernet      |          IEEE 802.15.4 Link Layer              |
+-------------------+------------------------------------------------+

Notes on the stack:

  • IP messages sent via Thread networks are sent through Thread using an IP Send method that exposes only the parameters specified in the IP_Send trait. Other parameters of the message (6lowpan decisions, link layer parameters, many IP header options) are decided by Thread.
  • The stack provides an interface for the application layer to send raw IPv6 packets over Thread.
  • When the Thread control plane generates messages (MLE messages etc.), they are formatted using calls to the encapsulation libraries and then delivered to the 802.15.4 layer using the 15.4 send function
  • This stack design allows Thread to control header elements from the transport layer down to the link layer, and to set link layer security parameters and more as required for certain packets
  • The application can either directly send IP messages using the IP Send implementation exposed by the Thread stack or it can use the UDP Send and TCP send implementation exposed by the Thread stack. If the application uses the TCP or UDP send implementations it must use the transport packet library to insert its payload inside a packet and set certain header fields. The transport send method uses the IP Packet library to set certain IP fields before handing the packet off to Thread. Thread then sets other parameters at other layers as needed before sending the packet off via the 15.4 send function implemented for Thread.
  • Note that currently this design leaves it up to the application layer to decide which interface any given packet will be transmitted from. This is because we are currently working towards a minimally functional stack. However, once this is working we intend to add a layer below the application layer that would handle interface multiplexing by destination address via a forwarding table. This should be straightforward to add to our current design.
  • This stack diagram does not demonstrate the full set of functionality we plan to implement now. Rather, it demonstrates how this setup allows for multiple implementations of each layer, based on traits and libraries, so that a flexible network stack can be configured, rather than a network stack designed such that applications can only use Thread.

Explanation of Queuing

Queuing happens at the application layer in this stack. The userland interface to the networking stack (described in greater detail in Networking_Userland.md) already handles queuing multiple packets sent from userland apps. In the kernel, any application which wishes to send multiple UDP packets must handle queuing itself, waiting for a send_done to return from the radio before calling send on the next packet in a series of packets.
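
The sketch below illustrates the kernel-side pattern described above: a client hands the next packet to the stack only from its send_done callback. The UdpSender and UdpSendClient traits here are illustrative stand-ins, not the actual traits in capsules/src/net/udp:

use core::cell::Cell;

// Illustrative sender/client traits: send() starts one transmission and the
// completion is signaled later through send_done().
pub trait UdpSender {
    fn send(&self, payload: &'static [u8]) -> Result<(), ()>;
}

pub trait UdpSendClient {
    fn send_done(&self, result: Result<(), ()>);
}

// Serializes a series of packets: only one transmission is outstanding at a time.
pub struct SerializedSender<'a, S: UdpSender> {
    sender: &'a S,
    queue: &'a [&'static [u8]],
    next: Cell<usize>,
}

impl<'a, S: UdpSender> SerializedSender<'a, S> {
    pub fn new(sender: &'a S, queue: &'a [&'static [u8]]) -> Self {
        SerializedSender { sender, queue, next: Cell::new(0) }
    }

    pub fn start(&self) {
        // Kick off the first transmission; the rest are chained from send_done.
        self.try_send_next();
    }

    fn try_send_next(&self) {
        let i = self.next.get();
        if i < self.queue.len() {
            self.next.set(i + 1);
            let _ = self.sender.send(self.queue[i]);
        }
    }
}

impl<'a, S: UdpSender> UdpSendClient for SerializedSender<'a, S> {
    fn send_done(&self, _result: Result<(), ()>) {
        // Only after the previous packet completes is the next one handed to the stack.
        self.try_send_next();
    }
}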

List of Traits

This section describes a number of traits that must be implemented by any network stack. It is expected that multiple implementations of some of these traits may exist, allowing Tock to support more than just Thread networking.

Before discussing these traits - a note on buffers:

Prior implementations of the Tock networking stack passed packets along the stack as &'static mut [u8] references. This is not ideal if we want to catch as many errors as possible at compile time. The next iteration of code will pass 'typed' buffers up and down the stack. There are a number of packet library traits defined below (e.g. IPPacket, UDPPacket, etc.). Transport layer traits will be implemented by a struct that contains at least one field - a [u8] buffer with lifetime 'a. Lower level traits will simply contain payload fields that are transport level packet traits (thanks to a TransportPacket enum). This design allows all buffers to be passed as type 'UDPPacket', 'IPPacket', etc. An added advantage is that each buffer can easily be operated on using the library functions associated with its type.
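
The following sketch shows the general shape of this 'typed buffer' idea; the struct and field names are simplified illustrations, not the actual definitions in capsules/src/net:

// Transport layer packets wrap a raw buffer with a lifetime, so code operating
// on them gets a typed value rather than a bare byte slice.
pub struct UDPHeader {
    pub src_port: u16,
    pub dst_port: u16,
    pub len: u16,
    pub cksum: u16,
}

pub struct UDPPacket<'a> {
    pub header: UDPHeader,
    pub payload: &'a mut [u8],
}

pub struct TCPPacket<'a> {
    pub payload: &'a mut [u8], // header fields omitted for brevity
}

// Lower layers carry a typed transport payload via an enum rather than a bare [u8].
pub enum TransportPacket<'a> {
    UDP(UDPPacket<'a>),
    TCP(TCPPacket<'a>),
}

pub struct IP6Packet<'a> {
    // IP header fields omitted for brevity.
    pub payload: TransportPacket<'a>,
}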

The traits below are organized by the network layer they would typically be associated with.

Transport Layer

Thus far, the only transport layer protocol implemented in Tock is UDP.

Documentation describing the structs and traits that define the UDP layer can be found in capsules/src/net/udp/(udp.rs, udp_send.rs, udp_recv.rs)

Additionally, a driver exists that provides a userland interface via which UDP packets can be sent and received. This is described in greater detail in Networking_Userland.md

Network Stack Receive Path

  • The radio in the kernel has a single RxClient, which is set as the mac layer (awake_mac, typically)
  • The mac layer (i.e. AwakeMac) has a single RxClient, which is the mac_device (ieee802154::framer::Framer)
  • The Mac device has a single receive client - MuxMac (virtual MAC device).
  • The MuxMac can have multiple "users" which are of type MacUser
  • Any received packet is passed to ALL MacUsers, which are expected to filter packets themselves accordingly.
  • Right now, we initialize two MacUsers in the kernel (in main.rs/components). These are radio_mac, which is the MacUser for the RadioDriver that enables the userland interface to directly send 802.15.4 frames, and udp_mac, the mac layer that is ultimately associated with the UDP userland interface.
  • The udp_mac MacUser has a single receive client, which is the sixlowpan_state struct
  • sixlowpan_state has a single rx_client, which in our case is a single struct that implements the ip_receive trait.
  • the ip_receive implementing struct (IP6RecvStruct) has a single client, which is udp_recv, a UDPReceive struct.
  • The UDPReceive struct is a field of the UDPDriver, which ultimately passes the packets up to userland.

So what are the implications of all this?

  1. Currently, any userland app could receive UDP packets intended for anyone else if the app implements 6lowpan itself on the received raw frames.

  2. Currently, packets are only muxed at the Mac layer.

  3. Right now the IPReceive struct receives all IP packets sent to the MAC address of this device, and soon will drop all packets sent to non-local addresses. Right now, the device effectively only has one address anyway, as we only support 6lowpan over 15.4, and as we haven't implemented a loopback interface on the IP_send path. If, in the future, we implement IP forwarding on Tock, we will need to add an IPSend object to the IPReceiver which would then retransmit any packets received that were not destined for local addresses.

Explanation of Configuration

This section describes how the IP stack can be configured, including setting addresses and other parameters of the MAC layer.

  • Source IP address: An array of local interfaces on the device is contained in main.rs. Currently, this array contains two hardcoded addresses, and one address generated from the unique serial number on the sam4l.

  • Destination IP address: The destination IP address is configured by passing the address to the send_to() call when sending IPv6 packets.

  • src MAC address: This address is configured in main.rs. Currently, the src mac address for each device is configured by default to be a 16-bit short address representing the last 16 bits of the unique 120 bit serial number on the sam4l. However, userland apps can change the src address by calling ieee802154_set_address()

  • dst MAC address: This is currently a constant set in main.rs. (DST_MAC_ADDR). In the future this will change, once Tock implements IPv6 Neighbor Discovery.

  • src pan: This is set via a constant configured in main.rs (PAN_ID). The same constant is used for the dst pan.

  • dst pan: Same as src_pan. If we need to support use of the broadcast PAN as a dst_pan, this may change.

  • radio channel: Configured as a constant in main.rs (RADIO_CHANNEL).
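
For illustration only, the constants named above might look roughly like the following in a board's main.rs; the values and the plain integer types here are placeholders rather than those of any real board:

// Placeholder values and types; real boards use board-specific types
// (e.g. a MAC address type and a radio channel enum).
const PAN_ID: u16 = 0xABCD;       // used for both the src and dst PAN
const DST_MAC_ADDR: u16 = 0x0802; // 16-bit short MAC address of the destination
const RADIO_CHANNEL: u8 = 26;     // IEEE 802.15.4 channel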

Tock Userland Networking Design

This section describes the current userland interface for the networking stack on Tock. It should serve as a description of the abstraction provided by libTock; what the exact system call interface looks like, or how libTock or the kernel implements this functionality, is out of scope for this document.

Overview

The Tock networking stack and libTock should attempt to expose a networking interface that is similar to the POSIX networking interface. The primary motivation for this design choice is that application programmers are used to the POSIX networking interface design, and significant amounts of code have already been written for POSIX-style network interfaces. By designing the libTock networking interface to be as similar to POSIX as possible, we hope to improve developer experience while enabling the easy transition of networking code to Tock.

Design

udp.c and udp.h in libtock-c/libtock define the userland interface to the Tock networking stack. These files interact with capsules/src/net/udp/driver.rs in the main tock repository. driver.rs implements an interface for sending and receiving UDP messages. It also exposes a list of interface addresses to the application layer. The primary functionality embedded in the UDP driver is within the allow(), subscribe(), and command() calls which can be made to the driver.

Details of this driver can be found in the doc/syscalls folder.

udp.c and udp.h in libtock-c make it easy to interact with this driver interface. Important functions available to userland apps written in C include:

udp_socket() - sets the port on which the app will receive udp packets, and sets the src_port of outgoing packets sent via that socket. Once socket binding is implemented in the kernel, this function will handle reserving ports to listen on and send from.

udp_close() - currently just returns success, but once socket binding has been implemented in the kernel, this function will handle freeing bound ports.

udp_send_to() - Sends a UDP packet to a specified addr/port pair, and returns the result of the transmission once the radio has transmitted it (or once a failure has occurred).

udp_recv_from_sync() - Pass an interface to listen on and an incoming source address to listen for. Sets up a callback to wait for a received packet, and yields until that callback is triggered. This function never returns if a packet is not received.

udp_recv_from() - Pass an interface to listen on and an incoming source address to listen for. However, this takes in a buffer to which the received packet should be placed, and returns the callback that will be triggered when a packet is received.

udp_list_ifaces() - Populates the passed pointer of ipv6 addresses with the available ipv6 addresses of the interfaces on the device. Right now this merely returns a constant hardcoded into the UDP driver, but should change to return the source IP addresses held in the network configuration file once that is created. Returns up to len addresses.

Other design notes:

The current design of the driver has a few limitations; these include:

  • Currently, any app can listen on any address/port pair

  • The current tx implementation allows for starvation, e.g. an app with an earlier app ID can starve a later ID by sending constantly.

POSIX Socket API Functions

Below is a fairly comprehensive overview of the POSIX networking socket interface. Note that much of this functionality pertains to TCP or connection- based protocols, which we currently do not handle.

  • socket(domain, type, protocol) -> int fd

    • domain: AF_INET, AF_INET6, AF_UNIX
    • type: SOCK_STREAM (TCP), SOCK_DGRAM (UDP), SOCK_SEQPACKET (?), SOCK_RAW
    • protocol: IPPROTO_TCP, IPPROTO_SCTP, IPPROTO_UDP, IPPROTO_DCCP
  • bind(socketfd, my_addr, addrlen) -> int success

    • socketfd: Socket file descriptor to bind to
    • my_addr: Address to bind on
    • addrlen: Length of address
  • listen(socketfd, backlog) -> int success

    • socketfd: Socket file descriptor
    • backlog: Number of pending connections to be queued

    Only necessary for stream-oriented data modes

  • connect(socketfd, addr, addrlen) -> int success

    • socketfd: Socket file descriptor to connect with
    • addr: Address to connect to (server protocol address)
    • addrlen: Length of address

    When used with connectionless protocols, defines the remote address for sending and receiving data, allowing the use of functions such as send() and recv() and preventing the reception of datagrams from other sources.

  • accept(socketfd, cliaddr, addrlen) -> int success

    • socketfd: Socket file descriptor of the listening socket that has the connection queued
    • cliaddr: A pointer to an address to receive the client's address information
    • addrlen: Specifies the size of the client address structure
  • send(socketfd, buffer, length, flags) -> int success

    • socketfd: Socket file descriptor to send on
    • buffer: Buffer to send
    • length: Length of buffer to send
    • flags: Various flags for the transmission

    Note that the send() function will only send a message when the socketfd is connected (including for connectionless sockets)

  • sendto(socketfd, buffer, length, flags, dst_addr, addrlen) -> int success

    • socketfd: Socket file descriptor to send on
    • buffer: Buffer to send
    • length: Length of buffer to send
    • flags: Various flags for the transmission
    • dst_addr: Address to send to (ignored for connection type sockets)
    • addrlen: Length of dst_addr

    Note that if the socket is a connection type, dst_addr will be ignored.

  • recv(socketfd, buffer, length, flags)

    • socketfd: Socket file descriptor to receive on
    • buffer: Buffer where the message will be stored
    • length: Length of buffer
    • flags: Type of message reception

    Typically used with connected sockets as it does not permit the application to retrieve the source address of received data.

  • recvfrom(socketfd, buffer, length, flags, address, addrlen)

    • socketfd: Socket file descriptor to receive on
    • buffer: Buffer to store the message
    • length: Length of the buffer
    • flags: Various flags for reception
    • address: Pointer to a structure to store the sending address
    • addrlen: Length of address structure

    Normally used with connectionless sockets as it permits the application to retrieve the source address of received data

  • close(socketfd)

    • socketfd: Socket file descriptor to delete
  • gethostbyname()/gethostbyaddr(): Legacy interfaces for resolving host names and addresses

  • select(nfds, readfds, writefds, errorfds, timeout)

    • nfds: The range of file descriptors to be tested (0..nfds)
    • readfds: On input, specifies file descriptors to be checked to see if they are ready to be read. On output, indicates which file descriptors are ready to be read
    • writefds: Same as readfds, but for writing
    • errorfds: Same as readfds, writefds, but for errors
    • timeout: A structure that indicates the max amount of time to block if no file descriptors are ready. If None, blocks indefinitely
  • poll(fds, nfds, timeout)

    • fds: Array of structures for file descriptors to be checked. The array members are structures which contain the file descriptor, and events to check for plus areas to write which events occurred
    • nfds: Number of elements in the fds array
    • timeout: If 0 return immediately, or if -1 block indefinitely. Otherwise, wait at least timeout milliseconds for an event to occur
  • getsockopt()/setsockopt()

Tock Userland API

Below is a list of desired functionality for the libTock userland API.

  • struct sock_addr_t

    • ipv6_addr_t: IPv6 address (single or ANY)
    • port_t: Transport level port (single or ANY)

  • struct sock_handle_t

    • Opaque to the user; allocated in userland by malloc (or on the stack)

  • list_ifaces() -> iface[]

    • ifaces: A list of ipv6_addr_t, name pairs corresponding to all interfaces available

  • udp_socket(sock_handle_t, sock_addr_t) -> int socketfd

    • socketfd: Socket object to be initialized as a UDP socket with the given address information
    • sock_addr_t: Contains an IPv6 address and a port

  • udp_close(sock_handle_t)

    • sock_handle_t: Socket to close

  • send_to(sock_handle_t, buffer, length, sock_addr_t)

    • sock_handle_t: Socket to send using
    • buffer: Buffer to send
    • length: Length of buffer to send
    • sock_addr_t: Address struct (IPv6 address, port) to send the packet to
  • recv_from(sock_handle_t, buffer, length, sock_addr_t)

    • sock_handle_t: Receiving socket
    • buffer: Buffer to receive into
    • length: Length of buffer
    • sock_addr_t: Struct where the kernel writes the received packet's sender information

Differences Between the APIs

There are two major differences between the proposed Tock APIs and the standard POSIX APIs. First, the POSIX APIs must support connection-based protocols such as TCP, whereas the Tock API is only concerned with connectionless, datagram based protocols. Second, the POSIX interface has a concept of the sock_addr_t structure, which is used to encapsulate information such as port numbers to bind on and interface addresses. This makes bind_to_port redundant in POSIX, as we can simply set the port number in the sock_addr_t struct when binding. I think one of the major questions is whether to adopt this convention, or to use the above definitions for at least the first iteration.

Example: ip_sense

An example use of the userland networking stack can be found in libtock-c/examples/ip_sense

Implementation Details for potential future Thread implementation

This section was written when the networking stack was incomplete, and aspects may be outdated. This goes for all sections following this point in the document.

The Thread specification determines an entire control plane that spans many different layers in the OSI networking model. To adequately understand the interactions and dependencies between these layers' behaviors, it might help to trace several types of messages and see how each layer processes the different types of messages. Let's trace carefully the way OpenThread handles messages.

We begin with the most fundamental message: a data-plane message that does not interact with the Thread control plane save for passing through a Thread-defined network interface. Note that some of the procedures in the below traces will not make sense when taken independently: the responsibility-passing will only make sense when all the message types are taken as a whole. Additionally, no claim is made as to whether or not this sequence of callbacks is the optimal way to express these interactions: it is just OpenThread's way of doing it.

Data plane: IPv6 datagram

  1. Upper layer (application) wants to send a payload
  • Provides payload
  • Specifies the IP6 interface to send it on (via some identifier)
  • Specifies protocol (IP6 next header field)
  • Specifies destination IP6 address
  • Possibly doesn't specify source IP6 address
  2. IP6 interface dispatcher (with knowledge of all the interfaces) fills in the IP6 header and produces an IP6 message
  • Payload, protocol, and destination address used directly from the upper layer
  • Source address is more complicated
    • If the address is specified and is not multicast, it is used directly
    • If the address is unspecified or multicast, the source address is determined from the specific IP6 interface selected AND the destination address via a matching scheme on the addresses associated with the interface.
  • Now that the addresses are determined, the IP6 layer computes the pseudoheader checksum.
    • If the application layer's payload has a checksum that includes the pseudoheader (UDP, ICMP6), this partial checksum is now used to update the checksum field in the payload.
  3. The actual IP6 interface (Thread-controlled) tries to send that message
  • First step is to determine whether the message can be sent immediately or not (sleepy child or not). This passes the message to the scheduler. This is important for sleepy children where there is a control scheme that determines when messages are sent.
  • Next, determine the MAC src/dest addresses.
    • If this is a direct transmission, there is a source matching scheme to determine if the destination address used should be short or long. The same length is used for the source MAC address, obtained from the MAC interface.
  • Notify the MAC layer to notify you that your message can be sent.
  4. The MAC layer schedules its transmissions and determines that it can send the above message
  • MAC sets the transmission power
  • MAC sets the channel differently depending on the message type
  5. The IP6 interface fills up the frame. This is the chance for the IP6 interface to do things like fragmentation, retransmission, and so on. The MAC layer just wants a frame.
  • XXX: The IP6 interface fills up the MAC header. This should really be the responsibility of the MAC layer. Anyway, here is what is done:
    • Channel, source PAN ID, destination PAN ID, and security modes are determined by message type. Note that the channel set by the MAC layer is sometimes overwritten.
    • A mesh extension header is added for some messages (e.g. indirect transmissions).
  • The IP6 message is then 6LoWPAN-compressed/fragmented into the payload section of the frame.
  6. The MAC layer receives the raw frame and tries to send it
  • MAC sets the sequence number of the frame (from the previous sequence number for the correct link neighbor), if it is not a retransmission
  • The frame is secured if needed. This is another can of worms:
    • Frame counter is dependent on the link neighbor and whether or not the frame is a retransmission
    • Key is dependent on which key id mode is selected, and also the link neighbor's key sequence
    • Key sequence != frame counter
    • One particular mode requires using a key, source and frame counter that is a Thread-defined constant.
  • The frame is transmitted, an ACK is waited for, and the process completes.

As you can see, the data dependencies are nowhere near as clean as the OSI model dictates. The complexity mostly arises because

  • Layer 4 checksum can include IPv6 pseudoheader
  • IP6 source address (mesh local? link local? multicast?) is determined by interface and destination address
  • MAC src/dest addresses are dependent on the next device on the route to the IP6 destination address
  • Channel, src/dest PAN ID, security is dependent on message type
  • Mesh extension header presence is dependent on message type
  • Sequence number is dependent on message type and destination

Note that all of the MAC layer dependencies in step 5 can be pre-decided so that the MAC layer is the only one responsible for writing the MAC header.

This gives a pretty good overview of what minimally needs to be done to even be able to send normal IPv6 datagrams, but does not cover all of Thread's complexities. Next, we look at some control-plane messages.

Control plane: MLE messages

  1. The MLE layer encapsulates its messages in UDP on a constant port
  • Security is determined by MLE message type. If MLE-layer security is required, the frame is secured using the same CCM* encryption scheme used in the MAC layer, but with a different key discipline.
  • MLE key sequence is global across a single Thread device
  • MLE sets IP6 source address to the interface's link local address
  2. This UDP-encapsulated MLE message is sent to the IP6 dispatch again
  3. The actual IP6 interface (Thread-controlled) tries to send that message
  4. The MAC layer schedules the transmission
  5. The IP6 interface fills up the frame.
  • MLE messages disable link-layer security when MLE-layer security is present. However, if link-layer security is disabled and the MLE message doesn't fit in a single frame, link-layer security is enabled so that fragmentation can proceed.
  6. The MAC layer receives the raw frame and tries to send it

The only cross-layer dependency introduced by the MLE layer is the dependency between MLE-layer security and link-layer security. Whether or not the MLE layer sits atop an actual UDP socket is an implementation detail.

Control plane: Mesh forwarding

If Thread REED devices are to be eventually supported in Tock, then we must also consider this case. If a frame is sent to a router which is not its final destination, then the router must forward that message to the next hop.

  1. The MAC layer receives a frame, decrypts it and passes it to the IP6 interface
  2. The IP6 reception reads the frame and realizes that it is an indirect transmission that has to be forwarded again
  • The frame must contain a mesh header, and the HopsLeft field in it should be decremented
  • The rest of the payload remains the same
  • Hence, the IP6 interface needs to send a raw 6LoWPAN-compressed frame
  3. The IP6 transmission interface receives a raw 6LoWPAN-compressed frame to be transmitted again
  • This frame must still be scheduled: it might be destined for a sleepy device that is not yet awake
  4. The MAC layer schedules the transmission
  5. The IP6 transmission interface copies the frame to be retransmitted verbatim, but with the modified mesh header and a new MAC header
  6. The MAC layer receives the raw frame and tries to send it

This example shows that the IP6 transmission interface may need to handle more message types than just IP6 datagrams: there is a case where it is convenient to be able to handle a datagram that is already 6LoWPAN compressed.

Control plane: MAC data polling

From time to time, a sleepy edge device will wake up and begin polling its parent to check if any frames are available for it. This is done via a MAC command frame, which must still be sent through the transmission pipeline with link security enabled (Key ID mode 1). OpenThread does this by routing it through the IP6 transmission interface, which arguably isn't the right choice.

  1. The data poll manager sends a data poll message directly to the IP6 transmission interface, skipping the IP6 dispatch
  2. The IP6 transmission interface notices the different type of message, which always warrants a direct transmission.
  3. The MAC layer schedules the transmission
  4. The IP6 transmission interface fills in the frame
  • The MAC dest is set to the parent of this node and the MAC src is set to be the same length as the address of the parent
  • The payload is filled up to contain the Data Request MAC command
  • The MAC security level and key ID mode is also fixed for MAC commands under the Thread specification
  5. The MAC layer secures the frame and sends it out

We could imagine giving the data poll manager direct access as a client of the MAC layer to avoid having to shuffle data through the IP6 transmission interface. This is only justified because MAC command frames are never 6LoWPAN-compressed or fragmented, nor do they depend on the IP6 interface in any way.

Control plane: Child supervision

This type of message behaves similarly to the MAC data polls. The message is essentially an empty MAC frame, but OpenThread chooses to also route it through the IP6 transmission interface. It would be far better to allow a child supervision implementation to be a direct client of the MAC interface.

Control plane: Joiner entrust and MLE announce

These two message types are also explicitly marked, because they require a specific Key ID Mode to be selected when producing the frame for the MAC interface.

Caveat about MAC layer security

So far, it seems like we can expect the MAC layer to have no cross-layer dependencies: it receives frames with a completely specified description of how they are to be secured and transmitted, and just does so. However, this is not entirely the case.

When the frame is being secured, the key ID mode has been set by the upper layers as described above, and this key ID mode is used to select between a few different key disciplines. For example, mode 0 is only used by Joiner entrust messages and uses the Thread KEK sequence. Mode 1 uses the MAC key sequence and Mode 2 is a constant key used only in MLE announce messages. Hence, this key ID mode selection is actually enabling an upper layer to determine the specific key being used in the link layer.

Note that we cannot just reduce this dependency by allowing the upper layer to specify the key used in MAC encryption. During frame reception, the MAC layer itself has to know which key to use in order to decrypt the frames correctly.

Bluetooth Low Energy Design

This document describes the design of the BLE stack in Tock.

System call interface

The system call interface is modeled after the HCI interface defined in the Bluetooth specification.

Device address

The kernel assigns the device address. The process may read the device address using an allow system call.

Advertising

For advertising, the system call interface allows a process to configure an advertising payload, advertising event type, scan response payload, interval and tx power. Permissible advertising types include:

  • Connectable undirected
  • Connectable directed
  • Non-connectable undirected
  • Scannable undirected

The driver is not responsible for validating that the payload for these advertising types follows any particular specification. Advertising event types that require particular interactions at the link-layer with peer devices (e.g. scanning or establishing connections) are not permissible:

  • Scan request
  • Scan response
  • Connect request

Scan responses are sent automatically if a scan response payload is configured. Scan requests and connection requests are handled by other parts of the system call interface.

To set up an advertisement:

  1. Configure the advertisement payload, type, interval, tx power and, optionally, scan response payload.

    • Advertisement payload allow
    • Advertisement type command
    • If the advertising type is scannable, you SHOULD configure a scan response payload using allow
    • Advertisement interval command
    • Advertisement tx power command
  2. Start periodic advertising using a command

Any changes to the configuration while periodic advertising is happening will take effect in a future advertising event. The kernel will make a best effort to reconfigure advertising in as few events as possible.

To stop advertising:

  1. Stop periodic advertising using a command

Scanning

Connection-oriented communication

Hardware Interface Layer (HIL)

The Bluetooth Low Energy Radio HIL defines a cross-platform interface for interacting with on-chip BLE radios (i.e. it does not necessarily work for radios on a dedicated IC connected over a bus).

The goal of this interface is to expose low-level details of the radio that are common across platforms, except in cases where abstraction is needed for common cases to meet timing constraints.

#![allow(unused)]
fn main() {
pub trait BleRadio {
    /// Sets the channel on which to transmit or receive packets.
    ///
    /// Returns Err(ErrorCode::BUSY) if the radio is currently transmitting or
    /// receiving, otherwise Ok(()).
    fn set_channel(&self, channel: RadioChannel) -> Result<(), ErrorCode>;

    /// Sets the transmit power
    ///
    /// Returns Err(ErrorCode::BUSY) if the radio is currently transmitting or
    /// receiving, otherwise Ok(()).
    fn set_tx_power(&self, power: u8) -> Result<(), ErrorCode>;

    /// Transmits a packet over the radio
    ///
    /// Returns Err(ErrorCode::BUSY) if the radio is currently transmitting or
    /// receiving, otherwise Ok(()).
    fn transmit_packet(
        &self,
        buf: &'static mut [u8],
        disable: bool) -> Result<(), ErrorCode>;

    /// Receives a packet of at most `buf.len()` size
    ///
    /// Returns Err(ErrorCode::BUSY) if the radio is currently transmitting or
    /// receiving, otherwise Ok(()).
    fn receive_packet(&self, buf: &'static mut [u8]) -> Result<(), ErrorCode>;

    /// Aborts an ongoing transmission
    ///
    /// Returns None if no transmission was ongoing, or the buffer that was
    /// being transmitted.
    fn abort_tx(&self) -> Option<&'static mut [u8]>;

    /// Aborts an ongoing reception
    ///
    /// Returns None if no reception was ongoing, or the buffer that was
    /// being received into. The returned buffer may or may not have some
    /// populated bytes.
    fn abort_rx(&self) -> Option<&'static mut [u8]>;

    /// Disables periodic advertisements
    ///
    /// Always returns Ok(()), regardless of whether the driver is actively
    /// advertising or not.
    fn disable(&self) -> Result<(), ErrorCode>;
}

pub trait RxClient {
    fn receive_event(&self, buf: &'static mut [u8], len: u8, result: Result<(), ErrorCode>);
}

pub trait TxClient {
    fn transmit_event(&self, buf: &'static mut [u8], result: Result<(), ErrorCode>);
}

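/// BLE channels. Each discriminant is the channel's frequency offset from
/// 2400 MHz, in MHz (for example, advertising channel 37 is at 2402 MHz, so
/// its value is 2).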
pub enum RadioChannel {
    DataChannel0 = 4,
    DataChannel1 = 6,
    DataChannel2 = 8,
    DataChannel3 = 10,
    DataChannel4 = 12,
    DataChannel5 = 14,
    DataChannel6 = 16,
    DataChannel7 = 18,
    DataChannel8 = 20,
    DataChannel9 = 22,
    DataChannel10 = 24,
    DataChannel11 = 28,
    DataChannel12 = 30,
    DataChannel13 = 32,
    DataChannel14 = 34,
    DataChannel15 = 36,
    DataChannel16 = 38,
    DataChannel17 = 40,
    DataChannel18 = 42,
    DataChannel19 = 44,
    DataChannel20 = 46,
    DataChannel21 = 48,
    DataChannel22 = 50,
    DataChannel23 = 52,
    DataChannel24 = 54,
    DataChannel25 = 56,
    DataChannel26 = 58,
    DataChannel27 = 60,
    DataChannel28 = 62,
    DataChannel29 = 64,
    DataChannel30 = 66,
    DataChannel31 = 68,
    DataChannel32 = 70,
    DataChannel33 = 72,
    DataChannel34 = 74,
    DataChannel35 = 76,
    DataChannel36 = 78,
    AdvertisingChannel37 = 2,
    AdvertisingChannel38 = 26,
    AdvertisingChannel39 = 80,
}
}
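
To illustrate how this HIL might be used, here is a minimal, hypothetical sketch of a capsule that transmits an advertising PDU. The AdvertiserExample type is invented for illustration, it assumes it was registered as the radio's TxClient during board setup, and it uses only the traits and types shown above.

#![allow(unused)]
fn main() {
// Hypothetical capsule driving the BleRadio HIL above.
struct AdvertiserExample {
    radio: &'static dyn BleRadio,
}

impl AdvertiserExample {
    fn send_advertisement(&self, pdu: &'static mut [u8]) -> Result<(), ErrorCode> {
        // Advertising PDUs go out on the advertising channels; start with 37.
        self.radio.set_channel(RadioChannel::AdvertisingChannel37)?;
        self.radio.set_tx_power(0)?;
        // On Ok(()), the buffer comes back in TxClient::transmit_event.
        self.radio.transmit_packet(pdu, false)
    }
}

impl TxClient for AdvertiserExample {
    fn transmit_event(&self, _buf: &'static mut [u8], _result: Result<(), ErrorCode>) {
        // Reschedule the next advertising event (or move to channel 38/39) here.
    }
}
}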

Tock Reference Documents

Tock Reference Documents (TRDs) are formal specifications of various APIs and systems within Tock.

TRDs:

  1. TRD1: TRDs
  2. TRD3: HIL Design
  3. TRD4: Legal
  4. TRD102: ADC
  5. TRD103: GPIO
  6. TRD104: Syscalls
  7. TRD105: Time
  8. TRD106: Completion Codes
  9. Drafts
    1. TRD AppID
    2. TRD Digest
    3. TRD Public/Private Keys
    4. TRD Radio
    5. TRD SPI
    6. TRD Storage Permissions
    7. TRD UART
    8. TRD Userspace Read Allow

Tock Reference Document (TRD) Structure and Keywords

TRD: 1
Working Group: Kernel
Type: Best Common Practice
Status: Final
Authors: Philip Levis, Daniel Griffin

Abstract

This document describes the structure followed by all Tock Reference Documents (TRDs), and defines the meaning of several key words in those documents.

1 Introduction

To simplify management, reading, and tracking development, all Tock Reference Documents (TRDs) MUST have a particular structure. Additionally, to simplify development and improve implementation interoperability, all TRDs MUST observe the meaning of several key words that specify levels of compliance. This document describes and follows both.

2 Keywords

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in TRD1.

Note that the force of these words is modified by the requirement level of the document in which they are used. These words hold their special meanings only when capitalized, and documents SHOULD avoid using these words uncapitalized in order to minimize confusion.

2.1 MUST

MUST: This word, or the terms "REQUIRED" or "SHALL", mean that the definition is an absolute requirement of the document.

2.2 MUST NOT

MUST NOT: This phrase, or the phrase "SHALL NOT", mean that the definition is an absolute prohibition of the document.

2.3 SHOULD

SHOULD: This word, or the adjective "RECOMMENDED", mean that there may exist valid reasons in particular circumstances to ignore a particular item, but the full implications must be understood and carefully weighed before choosing a different course.

2.4 SHOULD NOT

SHOULD NOT: This phrase, or the phrase "NOT RECOMMENDED" mean that there may exist valid reasons in particular circumstances when the particular behavior is acceptable or even useful, but the full implications should be understood and the case carefully weighed before implementing any behavior described with this label.

2.5 MAY

MAY: This word, or the adjective "OPTIONAL", mean that an item is truly optional. One implementer may choose to include the item because a particular application requires it or because the implementer feels that it enhances the system while another implementer may omit the same item. An implementation which does not include a particular option MUST be prepared to interoperate with another implementation which does include the option, though perhaps with reduced functionality. Similarly, an implementation which does include a particular option MUST be prepared to interoperate with another implementation which does not include the option (except, of course, for the feature the option provides.)

2.6 Guidance in the use of these Imperatives

Imperatives of the type defined in this memo must be used with care and sparingly. In particular, they MUST only be used where it is actually required for interoperation or to limit behavior which has potential for causing harm (e.g., limiting retransmissions). For example, they must not be used to try to impose a particular method on implementors where the method is not required for interoperability.

3 TRD Structure

A TRD MUST begin with a title, and then follow with a header and a body. The header states document metadata, for management and status. The body contains the content of the proposal.

All TRDs MUST conform to Markdown syntax to enable translation to HTML and LaTeX, and for useful display in web tools.

3.1 TRD Header

The TRD header has several fields which MUST be included, as well as others which MAY be included. The TRD header MUST NOT include fields which are not specified in TRD 1 or supplementary Best Common Practice TRDs. The first five header fields MUST be included in all TRDs, in the order stated below. The Markdown syntax to use when composing a header is modeled by this document's header.

The first field is "TRD," and specifies the TRD number of the document. A TRD's number is unique. This document is TRD 1. The TRD type (discussed below) determines TRD number assignment. Generally, when a document is ready to be a TRD, it is assigned the smallest available number. BCP TRDs start at 1 and all other TRDs (Documentary, Experimental, and Informational) start at 101.

The second field, "Working Group," states the name of the working group that produced the document. This document was produced by the Kernel Working Group.

The third field is "Type," and specifies the type of TRD the document is. There are four types of TRD: Best Current Practice (BCP), Documentary, Informational, and Experimental. This document's type is Best Current Practice.

Best Current Practice is the closest thing TRDs have to a standard: it represents conclusions from significant experience and work by its authors. Developers desiring to add code (or TRDs) to Tock SHOULD follow all current BCPs.

Documentary TRDs describe a system or protocol that exists; a documentary TRD MUST reference an implementation that a reader can easily obtain. Documentary TRDs simplify interoperability when needed, and document Tock implementations.

Informational TRDs provide information that is of interest to the community. Informational TRDs include data gathered on radio behavior, hardware characteristics, other aspects of Tock software/hardware, organizational and logistic information, or experiences which could help the community achieve its goals.

Experimental TRDs describe a completely experimental approach to a problem that is outside the Tock release stream and will not necessarily become part of it. Unlike Documentary TRDs, Experimental TRDs may describe systems that do not have a reference implementation.

The fourth field is "Status," which specifies the status of the TRD. A TRD's status can be either "Draft," which means it is a work in progress, or "Final," which means it is complete and will not change. Once a TRD has the status "Final," the only change allowed is the addition of an "Obsoleted By" field.

The "Obsoletes" field is a backward pointer to an earlier TRD which the current TRD renders obsolete. An Obsoletes field MAY have multiple TRDs listed. For example, if TRD 121 were to replace TRDs 111 and 116, it would have the field "Obsoletes: 111, 116".

The "Obsoleted By" field is added to a Final TRD when another TRD has rendered it obsolete. The field contains the number of the obsoleting TRD. For example, if TRD 111 were obsoleted by TRD 121, it would have the field "Obsoleted By: 121".

"Obsoletes" and "Obsoleted By" fields MUST agree. For a TRD to list another TRD in its Obsoletes field, then that TRD MUST list it in the Obsoleted By field.

The obsoletion fields are used to keep track of evolutions and modifications of a single abstraction. They are not intended to force a single approach or mechanism over alternative possibilities.

The final required field is "Authors," which states the names of the authors of the document. Full contact information should not be listed here (see Section 3.2).

There is an optional field, "Extends." The "Extends" field refers to another TRD. The purpose of this field is to denote when a TRD represents an addition to an existing TRD. Meeting the requirements of a TRD with an Extends field requires also meeting the requirements of all TRDs listed in the Extends field.

If a TRD is a Draft, then four additional fields MUST be included: Draft-Created, Draft-Modified, Draft-Version, and Draft-Discuss. Draft-Created states the date the document was created, Draft-Modified states when it was last modified. Draft-Version specifies the version of the draft, which MUST increase every time a modification is made. Draft-Discuss specifies the email address of a mailing list where the draft is being discussed. Final and Obsolete TRDs MUST NOT have these fields, which are for Drafts only.

3.2 TRD Body

A TRD body SHOULD begin with an Abstract, which gives a brief overview of the content of the TRD. A longer TRD MAY, after the Abstract, have a Table of Contents. After the Abstract and Table of Contents there SHOULD be an Introduction, stating the problem the TRD seeks to solve and providing needed background information.

If a TRD is Documentary, it MUST have a section entitled "Implementation," which instructs the reader how to obtain the implementation documented.

If a TRD is Best Current Practice, it MUST have a section entitled "Reference," which points the reader to one or more reference uses of the practices.

The last three sections of a TRD are author information, citations, and appendices. A TRD MUST have an author information section entitled "Author's Address" or "Authors' Addresses." A TRD MAY have a citation section entitled "Citations." A citations section MUST immediately follow the author information section. A TRD MAY have appendices. Appendices MUST immediately follow the citations section, or if there is no citations section, the author information section. Appendices are lettered. Please refer to Appendix A for details.

4 File names

TRDs MUST be stored in the Tock repository with a file name of

trd[number]-[desc].md

where number is the TRD number and desc is a short, one-word description. The name of this document is trd1-trds.md.

5 Reference

The reference use of this document is TRD 1 (itself).

6 Acknowledgments

The definitions of the compliance terms are a direct copy of definitions taken from IETF RFC 2119. This document is heavily copied from TinyOS Enhancement Proposal 1 (TEP 1).

7 Author's Address

Philip Levis
409 Gates Hall
Stanford University
Stanford, CA 94305

phone - +1 650 725 9046

email - pal@cs.stanford.edu

Appendix A Example Appendix

This is an example appendix. Appendices begin with the letter A.

Design of Kernel Hardware Interface Layers (HILs)

TRD: 3
Working Group: Kernel
Type: Best Current Practice
Status: Final
Obsoletes: 2
Authors: Brad Campbell, Philip Levis, Hudson Ayers

Abstract

This document describes design rules of hardware interface layers (HILs) in the Tock operating system. HILs are Rust traits that provide a standard interface to a hardware resource, such as a sensor, a flash chip, a cryptographic accelerator, a bus, or a radio. Developers adding new HILs to Tock should read this document and verify they have followed these guidelines.

Introduction

In Tock, a hardware interface layer (HIL) is a collection of Rust traits and types that provide a standardized API to a hardware resource such as a sensor, flash chip, cryptographic accelerator, bus, or a radio.

Capsules use HILs to implement their functionality. For example, a system call driver capsule that gives processes access to a temperature sensor relies on having a reference to an implementation of the kernel::hil::sensors::TemperatureDriver trait. This allows the system call driver capsule to work on top of any implementation of the TemperatureDriver trait, whether it is a local, on-chip sensor, an analog sensor connected to an ADC, or a digital sensor over a bus.

Capsules use HILs in many different ways. They can be directly accessed by kernel services, such as the in-kernel process console using the UART HIL. They can be exposed to processes with system driver capsules, such as with GPIO. They can be virtualized to allow multiple clients to share a single resource, such as with the virtual timer capsule.

This variety of use cases places a complex set of requirements on how a HIL must behave. For example, Tock expects that every HIL is virtualizable: it is possible to take one instance of the trait and allow multiple clients to use it simultaneously, such that each one thinks it has its own, independent instance of the trait. Because virtualization often means requests can be queued and the Tock kernel has a single stack, all HILs must be nonblocking and so have a callback for completion. This has implications for buffer management and ownership.

This document describes these requirements and describes a set of design rules for HILs. They are:

  1. Don't make synchronous callbacks.
  2. Split-phase operations return a synchronous Result type which includes an error code in its Err value.
  3. For split-phase operations, Ok means a callback will occur while Err with an error code besides BUSY means one won't.
  4. Error results of split-phase operations with a buffer parameter include a reference to the passed buffer. This returns the buffer to the caller.
  5. Split-phase operations with a buffer parameter take a mutable reference even if their access is read-only.
  6. Split-phase completion callbacks include a Result parameter whose Err contains an error code; these errors are a superset of the synchronous errors.
  7. Split-phase completion callbacks for an operation with a buffer parameter return the buffer.
  8. Use fine-grained traits that separate out different use cases.
  9. Separate control and datapath operations into separate traits.
  10. Blocking APIs are not general: use them sparingly, if at all.
  11. initialize() methods, when needed, should be in a separate trait and invoked in an instantiating Component.
  12. Traits that can trigger callbacks should have a set_client method.
  13. Use generic lifetimes where possible, except for buffers used in split-phase operations, which should be 'static.

The rest of this document describes each of these rules and their reasoning.

While these are design rules, they are not sacrosanct. There are reasons or edge cases why a particular HIL might need to break one (or more) of them. In such cases, be sure to understand the reasoning behind the rule; if those considerations don't apply in your use case, then it might be acceptable to break it. But it's important to realize the exception is true for all implementations of the HIL, not just yours; a HIL is intended to be a general, reusable API, not a specific implementation.

The key recurring point in these guidelines is that a HIL should encapsulate a wide range of possible implementations and use cases. It might be that the hardware you are using or designing a HIL for has particular properties or behavior. That does not mean all hardware does. For example, a software pseudo-random generator can synchronously return random numbers. However, a hardware-based one typically cannot (without blocking). If you write a blocking random number HIL because you are working with a software one, you are precluding hardware implementations from using your HIL. This means that a developer must decide to use either the blocking HIL (which in some cases can't exist) or the non-blocking one, making software less reusable.

Rule 1: Don't Make Synchronous Callbacks

Consider the following API for requesting 32 bits of randomness:

#![allow(unused)]
fn main() {
trait Random {
  fn random(&self) -> Result<(), ErrorCode>;
  fn set_client(&self, client: &'static Client);
}

trait Client {
  fn random_ready(&self, bits: u32, result: Result<(), ErrorCode>);
}
}

If Random is implemented on top of a hardware random number generator, the random bits might not be ready until an interrupt is issued. E.g., if the implementation generates random numbers by running AES128 in counter mode on a hidden seed[HCG], then generating random bits may require an interrupt.

But a single AES128 block yields four 32-bit values of randomness. So a smart implementation will compute 128 bits and call back with 32 of them. The next 3 calls to random can be served from the remaining cached data. The simple implementation of this algorithm is to call random_ready inside the call to random if cached results are ready: the values are ready, so issue the callback immediately.

Making the random_ready callback from inside random is a bad idea for two reasons: call loops and client code complexity.

The first issue that arises is that it can create call loops. Suppose that the client wants 1024 bits (so 32 words) of randomness. It needs to invoke random 32 times. The standard call pattern is to call random, then in the random_ready callback, store the new random bits and call random again. This repeats 32 times.

If the implementation uses an interrupt every 4 calls, then this call pattern isn't terrible: it would result in 8 stack frames. But suppose that the implementation chooses to generate not 128 bits at a time, but rather 1024 bits (e.g., runs counter mode on 32 words). Then one could have up to 64 stack frames. It might be that the compiler inlines this, but it also might not. Assuming the compiler always does a specific optimization for you is dangerous: there are all sorts of edge cases and heuristics, and trying to adjust source code to coax it to do what you want (which can change with each compiler release) is brittle.

The second, and more dangerous, issue is that client logic becomes much more complex. For example, consider this client code:

#![allow(unused)]
fn main() {
  ...
  if self.state.get() == State::Idle {
    let result = random.random();
    match result {
      Ok(()) => self.state.set(State::Waiting),
      Err(e) => self.state.set(State::Error),
    }
  }
  ...

fn random_ready(&self, bits: u32, result: Result<(), ErrorCode>) {
  match result {
    Ok(()) => {
      // Use the random bits
      self.state.set(State::Idle);
    }
    Err(e) => {
      self.state.set(State::Error);
    }
  }
}
}

The result of starting a split-phase call indicates whether there will be a callback: Ok means there will be a callback, while Err means there will not. If the implementation of Random issues a synchronous callback, then the state variable of the client will be in an incorrect state. Before the call to random returns, the callback executes and sets state to State::Idle. Then, the call to random returns, and sets state to State::Waiting. If the callback checks whether it's in the Waiting state (e.g., to guard against spurious/buggy callbacks), this check will fail. The problem is that the callback occurs before the caller even knows that it will occur.

There are ways to guard against this. The caller can optimistically assume that random will succeed:

#![allow(unused)]
fn main() {
  ...
  if self.state.get() == State::Idle {
    self.state.set(State::Waiting);
    let result = random.random();
    match result {
      Err(e) => self.state.set(State::Error),
      Ok(()) => {} // Do nothing
    }
  }
  ...

fn random_ready(&self, bits: u32, result: Result<(), ErrorCode>) {
  match result {
    Ok(()) => {
      // Use the random bits
      self.state.set(State::Idle);
    }
    Err(e) => {
      self.state.set(State::Error);
    }
  }
}
}

After the first match (where random is called), self.state can be in 3 states:

  1. State::Waiting, if the call succeeded but the callback is asynchronous.
  2. State::Error, if the call or callback failed.
  3. State::Idle, if it received a synchronous callback.

This progresses up the call stack. The client that invoked this module might receive a callback invoked from within the random_ready callback.

Expert programmers who are fully prepared for a re-entrant callback might realize this and program accordingly, but most programmers aren't. Some of the Tock developers who have been writing event-driven embedded code for decades have mishandled this case. Allowing synchronous callbacks means all code must be written as carefully as interrupt handling code, since from the caller's standpoint the callback can preempt execution.

Issuing an asynchronous callback requires that the module be invoked again later: it needs to return now, and then after that call stack is popped, invoke the callback. For callbacks that will be triggered by interrupts, this occurs naturally. However, if the callback is purely from software, such as in the random number generation example, the module needs a way to make itself be invoked later, but as quickly as possible. The standard mechanism to achieve this in Tock is through deferred procedure calls. This mechanism allows a module to tell the Tock scheduler to call it again later, from the main scheduling loop. For example, a caching implementation of Random might look like this:

#![allow(unused)]
fn main() {
impl Random for CachingRNG {
  fn random(&self) -> Result<(), ErrorCode> {
    if self.busy.get() {
      return Err(ErrorCode::BUSY);
    }

    self.busy.set(true);
    if self.cached_words.get() > 0 {
      // This tells the scheduler to issue a deferred procedure call, which
      // will invoke handle_deferred_call() from the main scheduling loop.
      self.deferred_call.set();
    } else {
      self.request_more_randomness();
    }
    Ok(())
  }
  ...
}

impl<'a> DeferredCallClient for CachingRNG<'a> {
  fn handle_deferred_call(&self) {
    let rbits = self.pop_cached_word();
    self.client.random_ready(rbits, Ok(()));
  }

  // This function must be called during board initialization.
  fn register(&'static self) {
      self.deferred_call.register(self);
  }
}
}

Rule 2: Return Synchronous Errors

Methods that invoke hardware can fail. It could be that the hardware is not configured as expected, it is powered down, or it has been disabled. Generally speaking, every HIL operation should return a Rust Result type, whose Err variant includes an error code. The Tock kernel provides a standard set of error codes, oriented towards system calls, in the kernel::ErrorCode enum.

HILs SHOULD return ErrorCode. Sometimes, however, these error codes don't quite fit the use case and so in those cases a HIL may define its own error codes. The I2C HIL, for example, defines an i2c::Error enumeration for cases such as address and data negative acknowledgments, which can occur in I2C. In cases when a HIL returns its own error code type, this error code type should also be able to represent all of the errors returned in a callback (see Rule 6 below).
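
As a hedged sketch, a HIL-specific error type in this spirit might look like the following. The names here are illustrative, not the kernel's actual i2c::Error definition, and the From implementation shows one way such a type can be mapped back to a generic ErrorCode when needed.

#![allow(unused)]
fn main() {
// Illustrative HIL-specific error type (hypothetical names).
pub enum BusError {
  AddressNak,
  DataNak,
  ArbitrationLost,
}

// Map the specific error to a generic kernel ErrorCode, e.g. for reporting
// an error to userspace.
impl From<BusError> for ErrorCode {
  fn from(_e: BusError) -> ErrorCode {
    ErrorCode::FAIL
  }
}
}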

If a method doesn't return a synchronous error, there is no way for a caller to know if the operation succeeded. This is especially problematic for split-phase calls: whether the operation succeeds indicates whether there will be a callback.

Rule 3: Split-phase Result Values Indicate Whether a Callback Will Occur

Suppose you have a split-phase call, such as for a SPI read/write operation:

#![allow(unused)]
fn main() {
pub trait SpiMasterClient {
  /// Called when a read/write operation finishes
  fn read_write_done(
    &self,
    write_buffer: &'static mut [u8],
    read_buffer: Option<&'static mut [u8]>,
    len: usize,
  );
}
pub trait SpiMaster {
  fn read_write_bytes(
    &self,
    write_buffer: &'static mut [u8],
    read_buffer: Option<&'static mut [u8]>,
    len: usize,
  ) -> Result<(), ErrorCode>;
}
}

One issue that arises is whether a client calling SpiMaster::read_write_bytes should expect a callback invocation of SpiMasterClient::read_write_done. Often, when writing event-driven code, modules are state machines. If the client is waiting for an operation to complete, then it shouldn't call read_write_bytes again. Similarly, if it calls read_write_bytes and the operation doesn't start (so there won't be a callback), then it can try to call read_write_bytes again later.

It's very important to a caller to know whether a callback will be issued. If there will be a callback, then it knows that it will be invoked again: it can use this invocation to dequeue a request, issue its own callbacks, or perform other operations. If there won't be a callback, then it might never be invoked again, and can be in a stuck state.

For this reason, the standard calling convention in Tock is that an Ok result means there will be a callback in response to this call, and an Err result means there will not be a callback in response to this call. Note that it is possible for an Err result to be returned yet there will be a callback in the future. This depends on which ErrorCode is passed. A common calling pattern is for a trait to return ErrorCode::BUSY if there is already an operation pending and a callback will be issued. This error code is unique in this way: the general rule is that Ok means there will be a callback in response to this call, Err with ErrorCode::BUSY means this call did not start a new operation but there will be a callback in response to a prior call, and any other Err means there will not be a callback.
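
A minimal sketch of a caller following this convention, using the SpiMaster trait shown above (the helper function itself is hypothetical):

#![allow(unused)]
fn main() {
// Returns true if a read_write_done callback should eventually arrive.
fn start_transfer(spi: &dyn SpiMaster, buf: &'static mut [u8]) -> bool {
  match spi.read_write_bytes(buf, None, 16) {
    // Ok: this call started an operation, so a callback will follow.
    Ok(()) => true,
    // BUSY: this call did nothing, but a callback for a prior call is still
    // pending.
    Err(ErrorCode::BUSY) => true,
    // Any other error: no callback will be issued.
    Err(_) => false,
  }
}
}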

Cancellation calls are a slight variation on this approach. A call to a cancel method (such as uart::Transmit::transmit_abort) also returns a Result type, for which an Ok value means there will be a callback in the future while an Err value means there will not be a callback. In this way the return type reflects the original call. The Ok value of a cancel method, however, needs to distinguish between two cases:

  1. There was an outstanding operation, so there will be a callback, but it was not cancelled.
  2. There was an outstanding operation, so there will be a callback, and it was cancelled.

The Result::Ok type for cancel calls therefore often contains information that signals whether the operation was successfully cancelled.

Rule 4: Return Passed Buffers in Error Results

Consider this method:

#![allow(unused)]
fn main() {
// Anti-pattern: caller cannot regain buf on an error
fn send(&self, buf: &'static mut [u8]) -> Result<(), ErrorCode>;
}

This method is for a split-phase call: there is a corresponding completion callback that passes the buffer back:

#![allow(unused)]
fn main() {
fn send_done(&self, buf: &'static mut[u8]);
}

The send method follows Rule 2: it returns a synchronous error. But suppose that calling it returns an Err(ErrorCode): what happens to the buffer?

Rust's ownership rules mean that the caller can't still hold the reference: it passed the reference to the implementer of send. But since the operation did not succeed, the caller does not expect a callback. Forcing the callee to issue a callback on a failed operation typically forces it to include an alarm or other timer. Following Rule 1 means it can't do so synchronously, so it needs an asynchronous event to invoke the callback from. This leads to every implementer of the HIL requiring an alarm or timer, which uses RAM, requires more complex logic, and makes initialization more complex.

As a result, in the above interface, if there is an error on send, the buffer is lost. It's passed into the callee, but the callee has no way to pass it back.

If a split-phase operation takes a reference to a buffer as a parameter, it should return a reference to a buffer in the Err case:

#![allow(unused)]
fn main() {
fn send(&self, buf: &'static mut [u8]) -> Result<(), (ErrorCode, &'static mut [u8])>;
}
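
With this signature, a caller that hits a synchronous error gets its buffer back immediately. A minimal sketch, assuming the caller keeps its buffer in a TakeCell and that Sender stands in for any trait containing the send method above:

#![allow(unused)]
fn main() {
fn try_send(sender: &dyn Sender, pending: &TakeCell<'static, [u8]>) {
  if let Some(buf) = pending.take() {
    match sender.send(buf) {
      // A send_done callback will eventually return the buffer.
      Ok(()) => {}
      // The operation never started; store the buffer for a later retry.
      Err((_ecode, buf)) => {
        pending.replace(buf);
      }
    }
  }
}
}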

Before Tock transitioned to using Result, this calling pattern was typically implemented with an Option:

#![allow(unused)]
fn main() {
fn send(&self, buf: &'static mut [u8]) -> (ReturnCode, Option<&'static mut [u8]>);
}

In this approach, when the ReturnCode is SUCCESS, the Option is always supposed to be None; if the ReturnCode has an error value, the Option contains the passed buffer. This invariant, however, cannot be checked. Transitioning to using Result both makes Tock more in line with standard Rust code and enforces the invariant.

Rule 5: Always Pass a Mutable Reference to Buffers

Suppose you are designing a trait to write some text to an LCD screen. The trait takes a buffer of ASCII characters, which it puts on the LCD:

#![allow(unused)]
fn main() {
// Anti-pattern: caller is forced to discard mutability
trait LcdTextDisplay {
  // This is an anti-pattern: the `text` buffer should be `mut`, for reasons explained below
  fn display_text(&self, text: &'static [u8]) -> Result<(), ErrorCode>;
  fn set_client(&self, client: &'static Client);
}

trait Client {
  fn text_displayed(&self, text: &'static [u8], result: Result<(), ErrorCode>);
}
}

Because the text display only needs to read the provided text, the reference to the buffer is not mutable.

This is a mistake.

The issue that arises is that because the caller passes the reference to the LCD screen, it loses access to it. Suppose that the caller has a mutable reference to a buffer, which it uses to read in data typed from a user before displaying it on the screen. Or, more generally, it has a mutable reference so it can create new text to display to the screen.

#![allow(unused)]
fn main() {
enum State {
  Idle,
  Reading,
  Writing,
}

struct TypeToText {
  buffer: TakeCell<'static, [u8]>,
  uart: &'static dyn uart::Receive<'static>,
  lcd: &'static dyn LcdTextDisplay,
  state: Cell<State>,
}

impl TypeToText {
  fn display_more(&self) -> Result<(), ErrorCode> {
    if self.state.get() != State::Idle || self.buffer.is_none() {
      return Err(ErrorCode::BUSY);
    }
    // Take the buffer out of its cell so it can be passed to the UART.
    let buffer = self.buffer.take().unwrap();
    let len = buffer.len();
    match self.uart.receive_buffer(buffer, len) {
      Ok(()) => {
        self.state.set(State::Reading);
        Ok(())
      }
      Err(e) => Err(e),
    }
  }
}

impl uart::ReceiveClient<'static> for TypeToText {
  fn received_buffer(&self, buf: &'static mut [u8]) {
    self.lcd.display_text(buf); // discards mut
  }
}
}

The problem is in this last line. TypeToText needs a mutable reference so it can read into it. But once it passes the reference to LcdTextDisplay, it discards mutability and cannot get it back: text_displayed provides an immutable reference, which then cannot be put back into the buffer field of TypeToText.

For this reason, split-phase operations that take references should generally take mutable references, even if they only need read-only access. Because the reference will not be returned until the callback, the caller cannot rely on the call stack and scoping to retain mutability.

Rule 6: Include a Result in Completion Callbacks That Includes an Error Code in its Err

Any error that can occur synchronously can usually occur asynchronously too. Therefore, callbacks need to indicate that an error occurred and pass that back to the caller. Callbacks therefore should include a Result type, whose Err variant includes an error code. This error code type SHOULD be the same type that is returned synchronously, to simplify error processing and returning errors to userspace when needed.

The common case for this is virtualization, where a capsule turns one instance of a trait into a set of instances that can be used by many clients, each with their own callback. A typical virtualizer queues requests. When a request comes in, if the underlying resource is idle, the virtualizer forwards the request and marks itself busy. If the request on the underlying resource returns an error, the virtualizer returns this error to the client immediately and marks itself idle again.

If the underlying resource is busy, then the virtualizer returns an Ok to the caller and queues the request. Later, when the request is dequeued, the virtualizer invokes the underlying resource. If this operation returns an error, then the virtualizer issues a callback to the client, passing the error. Because virtualizers queue and delay operations, they also delay errors. If a HIL does not pass a Result in its callback, then there is no way for the virtualizer to inform the client that the operation failed.

Note that abstractions which can be virtualized concurrently may not need to pass a Result in their callback. Alarm, for example, can be virtualized into many alarms. These alarms, however, are not queued in a way that implies future failure. A call to Alarm::set_alarm cannot fail, so there is no need to return a Result in the callback.

Rule 7: Always Return the Passed Buffer in a Completion Callback

If a client passes a buffer to a module for an operation, it needs to be able to reclaim it when the operation completes. Rust ownership (and the fact that passed references must be mutable, see Rule 5 above) means that the caller must pass the reference to the HIL implementation. The HIL needs to pass it back.
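
Combining Rules 6 and 7, a completion callback for a buffer-based operation typically has this shape (the trait name is illustrative):

#![allow(unused)]
fn main() {
// The buffer is always handed back, and the Result carries an error code if
// the operation failed.
pub trait SendClient {
  fn send_done(&self, buf: &'static mut [u8], result: Result<(), ErrorCode>);
}
}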

Rule 8: Use Fine-grained Traits That Separate Different Use Cases

Access to a trait gives access to functionality. If several pieces of functionality are coupled into a single trait, then a client that needs access to only some of them gets all of them. HILs should therefore decompose their abstractions into fine-grained traits that separate different use cases. For clients that need multiple pieces of functionality, the HIL can also define composite traits, such that a single reference can provide multiple traits.

Consider, for example, an early version of the Alarm trait:

#![allow(unused)]
fn main() {
pub trait Alarm: Time {
  fn now(&self) -> u32;
  fn set_alarm(&self, tics: u32);
  fn get_alarm(&self) -> u32;
}
}

This trait coupled two operations: setting an alarm for a callback and being able to get the current time. A module that only needs to be able to get the current time (e.g., for a timestamp) must also be able to set an alarm, which implies RAM/state allocation somewhere.

The modern versions of the traits look like this:

#![allow(unused)]
fn main() {
pub trait Time {
  type Frequency: Frequency; // The number of ticks per second
  type Ticks: Ticks; // The width of a time value
  fn now(&self) -> Self::Ticks;
}

pub trait Alarm<'a>: Time {
  fn set_alarm_client(&'a self, client: &'a dyn AlarmClient);
  fn set_alarm(&self, reference: Self::Ticks, dt: Self::Ticks);
  fn get_alarm(&self) -> Self::Ticks;

  fn disarm(&self) -> ReturnCode;
  fn is_armed(&self) -> bool;
  fn minimum_dt(&self) -> Self::Ticks;
}
}

They decouple getting a timestamp (the Time trait) from an alarm that issues callbacks at a particular timestamp (the Alarm trait).

Separating a HIL into fine-grained traits allows Tock to follow the security principle of least privilege. In the case of GPIO, for example, being able to read a pin does not mean a client should be able to reconfigure or write it. Similarly, for a UART, being able to transmit data does not mean that a client should always also be able to read data, or reconfigure the UART parameters.
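
For example, a capsule that only needs timestamps can depend on Time alone. A minimal sketch using the traits above:

#![allow(unused)]
fn main() {
// A hypothetical capsule that only reads the current time: it works with any
// Time implementation and carries no alarm state.
struct Timestamper<'a, T: Time> {
  clock: &'a T,
}

impl<'a, T: Time> Timestamper<'a, T> {
  fn timestamp(&self) -> T::Ticks {
    self.clock.now()
  }
}
}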

Rule 9: Separate Control and Datapath Operations into Separate Traits

This rule is a direct corollary of Rule 8, but has some specific considerations that make it a rather hard and fast rule. Rule 8 (separate HILs into fine-grained traits) leaves a lot of room for design judgment about which operations can be coupled together. This rule, however, is more precise and strict.

Many abstractions combine data operations and control operations. For example, a SPI bus has data operations for sending and receiving data, but it also has control operations for setting its speed, polarity, and chip select. An ADC has data operations for sampling a pin, but also has control operations for setting the bit width of a sample, the reference voltage, and the sampling clock used. Finally, a radio has data operations to send and receive packets, but also control operations for setting transmit power, frequencies, and local addresses.

HILs should separate these operations: control and data operations should (almost) never be in the same trait. The major reason is security: allowing a capsule to send packets should not also allow it to set the local node address. The second major reason is virtualization. For example, a UART virtualizer that allows multiple concurrent readers cannot allow them to change the speed or UART configuration, as it is shared among all of them. A capsule that can read a GPIO pin should not always be able to reconfigure the pin (what if other capsules need to be able to read it too?).

For example, returning to the UART example, this is an early version of the UART trait (v1.3):

#![allow(unused)]
fn main() {
// Anti-pattern: combining data and control operations makes this
// trait unvirtualizable, as multiple clients cannot configure a
// shared UART. It also requires every client to handle both
// receive and transmit callbacks.
pub trait UART {
  fn set_client(&self, client: &'static Client);
  fn configure(&self, params: UARTParameters) -> ReturnCode;
  fn transmit(&self, tx_data: &'static mut [u8], tx_len: usize);
  fn receive(&self, rx_buffer: &'static mut [u8], rx_len: usize);
  fn abort_receive(&self);
}
}

It breaks both Rule 8 and Rule 9. It couples reception and transmission (Rule 8). It also couples configuration with data (Rule 9). This HIL was fine when there was only a single user of the UART. However, once the UART was virtualized, configure could not work for virtualized clients. There were two options: have configure always return an error for virtual clients, or write a new trait for virtual clients that did not have configure. Neither is a good solution. The first pushes failures to runtime: a capsule that needs to adjust the configuration of the UART can be connected to a virtual UART and compile fine, but then fail when it tries to call configure. If that occurs rarely, then it might be a long time until the problem is discovered. The second solution (a new trait) breaks the idea of virtualization: a client has to be bound to either a physical UART or a virtual one, and can't be swapped between them even if it never calls configure.

The modern UART HIL looks like this:

#![allow(unused)]
fn main() {
pub trait Configure {
  fn configure(&self, params: Parameters) -> ReturnCode;
}
pub trait Transmit<'a> {
  fn set_transmit_client(&self, client: &'a dyn TransmitClient);
  fn transmit_buffer(
    &self,
    tx_buffer: &'static mut [u8],
    tx_len: usize,
  ) -> (ReturnCode, Option<&'static mut [u8]>);
  fn transmit_word(&self, word: u32) -> ReturnCode;
  fn transmit_abort(&self) -> ReturnCode;
}
pub trait Receive<'a> {
  fn set_receive_client(&self, client: &'a dyn ReceiveClient);
  fn receive_buffer(
    &self,
    rx_buffer: &'static mut [u8],
    rx_len: usize,
  ) -> (ReturnCode, Option<&'static mut [u8]>);
  fn receive_word(&self) -> ReturnCode;
  fn receive_abort(&self) -> ReturnCode;
}
pub trait Uart<'a>: Configure + Transmit<'a> + Receive<'a> {}
pub trait UartData<'a>: Transmit<'a> + Receive<'a> {}
}

Rule 10: Avoid Blocking APIs

The Tock kernel is non-blocking: I/O operations are split-phase and have a completion callback. If an operation blocks, it blocks the entire system.

There are cases when operations are sometimes synchronous. The random number generator in Rule 1 is an example. If random bits are cached, then a call to request random bits can sometimes return those bits synchronously. If the random number generator needs to engage the underlying AES engine, then the random bits have to be delivered asynchronously. As Rule 1 explains, even operations that could complete synchronously should issue their callback asynchronously.

Having a conditional synchronous operation and an asynchronous backup is a poor solution. While it might seem to make the synchronous cases simpler, a caller still needs to handle the asynchronous ones. The code ends up larger and more complex, as it is now conditional: a caller has to handle both cases.

The more attractive case is when a particular implementation of a HIL seems like it can always be synchronous, therefore its HIL is synchronous. For example, writes to flash are typically asynchronous: the chip issues an interrupt once the bits are written. However, if the flash chip being written is the same as the one code is fetched from, then the chip may block reads while the write completes. From the perspective of the caller, writing to flash is blocking, as the core stops fetching instructions. A synchronous flash HIL allows implementations to be simpler, straight-line code.

Capsules implemented on a synchronous HIL only work for implementations with synchronous behavior. Such a HIL limits reuse. For example, a storage system built on top of this synchronous API can only work on the same flash bank instructions are stored on: otherwise, the operations will be split-phase.

There are use cases when splitting HILs in this way is worth it. For example, straight-line code can often be shorter and simpler than event-driven systems. By providing a synchronous API for the subset of devices that can support it, one can reduce code size and produce more light-weight implementations. For this reason, the rule is to avoid blocking APIs, not to never implement them. They can and should at times exist, but their use cases should be narrow and constrained as they are fundamentally not as reusable.

Rule 11: initialize() methods, when needed, should be in a separate trait and invoked in an instantiating Component

Occasionally, HIL implementations need an initialize method to set up state or configure hardware before their first use. When one-time initialization is needed, doing it deterministically at boot is preferable to doing it dynamically on the first operation (e.g., by having an is_initialized field and calling initialize if it is false, then setting it true). Doing it at boot has two advantages. First, it is fail-fast: if the HIL cannot initialize, this will be detected immediately at boot instead of potentially non-deterministically on the first operation. Second, it makes operations more deterministic in their execution time, which is useful for applications with precise timing requirements.

Because one-time initializations should only be invoked at boot, they should not be part of standard HIL traits, as those traits are used by clients and services. Instead, they should either be in a separate trait or part of a structure's implementation.

Because forgetting to initialize a module is a common source of errors, modules that require one-time initialization should, if at all possible, put this in an instantiable Component for that module. The Component can handle all of the setup needed for the module, including invoking the call to initialize.
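
A hedged sketch of this pattern, with hypothetical names: the one-time setup lives in its own trait, and the instantiating Component (or board setup code) invokes it exactly once at boot.

#![allow(unused)]
fn main() {
// Initialization is kept out of the client-facing HIL traits.
pub trait SensorInit {
  fn initialize(&self) -> Result<(), ErrorCode>;
}

// Called once during board setup, so any failure is detected at boot rather
// than non-deterministically on the first operation.
fn finalize_sensor(sensor: &'static dyn SensorInit) {
  sensor
    .initialize()
    .expect("sensor initialization failed at boot");
}
}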

Rule 12: Traits that can trigger callbacks should have a set_client method

If a HIL trait can trigger callbacks it should include a method for setting the client that handles the callbacks. There are two reasons. First, it is generally important to be able to change callbacks at runtime, e.g., in response to requests, virtualization, or other dynamic runtime behavior. Second, a client that can trigger a callback should also be able to control what method the callback invokes. This gives the client flexibility if it needs to change dispatch based on internal state, or perform some form of proxying. It also allows the client to disable callbacks (by passing an implementation of the trait that does nothing).
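
A minimal sketch of this pattern (both trait names are illustrative):

#![allow(unused)]
fn main() {
pub trait SampleSource<'a> {
  // The handler can be chosen, swapped, or proxied at runtime.
  fn set_client(&self, client: &'a dyn SampleClient);
  fn start_sampling(&self) -> Result<(), ErrorCode>;
}

pub trait SampleClient {
  fn sample_ready(&self, sample: u16, result: Result<(), ErrorCode>);
}
}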

Rule 13: Use generic lifetimes, except for buffers in split-phase operations, which should be 'static

HIL implementations should use generic lifetimes whenever possible. This has two major advantages. First, it leaves open the possibility that the kernel might, in the future, support loadable modules which have a finite lifetime. Second, an explicit 'static lifetime brings safety and borrow-checker limitations, because mutably accessing 'static variables is generally considered unsafe.

If possible, use a single lifetime unless there are compelling reasons or requirements otherwise. The standard lifetime name in Tock code is 'a, although code can use other ones if they make sense. In practice today, these 'a lifetimes are all bound to 'static. However, by not using 'static explicitly these HILs can be used with arbitrary lifetimes.

Buffers used in split-phase operations are the one exception to generic lifetimes. In most cases, these buffers will be used by hardware in DMA or other operations. In that way, their lifetime is not bound to program execution (i.e., lifetimes or stack frames) in a simple way. For example, a buffer passed to a DMA engine may be held by the engine indefinitely if it is never started. For this reason, buffers that touch hardware usually must be 'static. If their lifetime is not 'static, then they must be copied into a static buffer to be used (this is usually what happens when application AppSlice buffers are passed to hardware). To avoid unnecessary copies, HILs should use 'static lifetimes for buffers used in split-phase operations.
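
Putting these conventions together, a split-phase trait following this rule might look like the following sketch (names are illustrative): the client reference uses a generic lifetime, while the buffer handed to hardware is 'static.

#![allow(unused)]
fn main() {
pub trait Sender<'a> {
  // Generic lifetime for the client reference.
  fn set_client(&self, client: &'a dyn SendClient);
  // 'static for the buffer, since hardware (e.g., DMA) may hold it
  // indefinitely; the buffer is returned in the Err case and in the callback.
  fn send(
    &self,
    buf: &'static mut [u8],
  ) -> Result<(), (ErrorCode, &'static mut [u8])>;
}
}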

Authors' Addresses

Philip Levis
414 Gates Hall
Stanford University
Stanford, CA 94305

email: Philip Levis <pal@cs.stanford.edu>
phone: +1 650 725 9046

Brad Campbell 
Computer Science	
241 Olsson	
P.O. Box 400336
Charlottesville, Virginia 22904 

email: Brad Campbell <bradjc@virginia.edu>

Licensing and Copyrights

TRD: 4
Working Group: Core
Type: Best Current Practice
Status: Final
Author: Pat Pannuto

Abstract

This document describes Tock’s policy on licensing and copyright. It explains the rationale behind the license selection (dual-license, MIT or Apache2) and copyright policy (optional, and authors may retain copyright if desired). It further outlines how licensing and copyright shall be handled throughout projects under the Tock umbrella.

Explicitly not discussed in this TRD are issues of trademark, the Tock brand, logos, or other non-code assets and artifacts.

1 Introduction

Tock’s goal is to provide a safe, secure, and efficient operating system for microcontroller-class devices. The Tock project further believes it is important to enable and support widespread adoption of safe, secure, and efficient software. Tock also seeks to be an open and inclusive project, and Tock welcomes contributions from any individuals or entities wishing to improve the safety, security, reliability, efficiency, or usability of Tock and the Tock ecosystem.

The intent of these policies is to best satisfy the needs of all stakeholders in the Tock ecosystem.

2 Licensing

All software artifacts under the umbrella of the Tock project are dual-licensed as Apache License, Version 2.0 (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0) or MIT license (LICENSE-MIT or http://opensource.org/licenses/MIT).

All contributions to the Tock kernel (the code hosted at https://github.com/tock/tock) MUST be licensed under these terms.

3 Copyright

Entities contributing resources to open-source projects often require attribution to recognize their efforts. Copyright notices are a common means to provide this. For downstream users, the license terms of Tock ensure unencumbered use.

Copyrights in Tock projects are retained by their contributors. No copyright assignment is required to contribute to Tock projects.

Artifacts in the Tock project MAY include explicit copyright notices. Substantial updates to an artifact MAY add additional copyright notices to an artifact. In general, modifications to a file are expected to retain existing copyright notices.

For full authorship information, see the version control history.

4 Implementation

Where possible, all textual files that allow comments MUST include a license notice and copyright notice(s). Files that are not authored by Tock contributors (such as files copied from other projects) are exempt from this policy.

Copyright notices SHOULD include a year. Newer copyright notices SHOULD be placed after existing copyright notices. If non-trivial updates are performed by an original copyright author, they MAY amend the year(s) indicated on their existing copyright statement or MAY add an additional copyright line, at their discretion.

4.1 Format

License and copyright information SHOULD have at least one (1) blank line separating it from any other content in the file.

Text described in this section SHOULD be prefixed or postfixed with any technically necessary characters (i.e., characters that mark it as a comment in source code) as appropriate.

The first line of license text SHOULD appear as follows:

Licensed under the Apache License, Version 2.0 or the MIT License.

The second line of license text SHOULD adhere to the SPDX specification for license description. As of this writing, it SHOULD appear as follows:

SPDX-License-Identifier: Apache-2.0 OR MIT

The current (v2.3) normative rules permit case-insensitive matches of the license identifier but require case-sensitive matching of the disjunction operator. To simplify enforcement of licensing and documentation rules, license information SHOULD preserve case as shown in the SPDX license list (i.e., as presented above).

Copyright lines SHOULD follow this pattern:

Copyright {entity} {year}([,year],[-year]).

The {entity} field should reflect the entity wishing to claim copyright. The {year} field SHOULD reflect when the copyright is first established. Substantial updates in the future MAY indicate renewed copyright, via additional comma-separated years or via range syntax, at the copyright holder’s discretion. The initial year SHALL NOT be removed unless it is the express intent of the copyright holder to relinquish the initial copyright.

4.1.1 Examples

The common-case format is:

#![allow(unused)]
fn main() {
// Licensed under the Apache License, Version 2.0 or the MIT License.
// SPDX-License-Identifier: Apache-2.0 OR MIT
// Copyright Tock Contributors <YYYY>.

//! Module-level documentation...
}

placed at the top of the file.

If you wish to specifically call out the contribution by you or your company, you may do so by adding another copyright line:

#![allow(unused)]
fn main() {
// Licensed under the Apache License, Version 2.0 or the MIT License.
// SPDX-License-Identifier: Apache-2.0 OR MIT
// Copyright Tock Contributors <YYYY>.
// Copyright <you/your company> <YYYY>.

//! Module-level documentation...
}

A file with a long history and multiple copyrights may look as follows:

#!/usr/bin/env bash

# Licensed under the Apache License, Version 2.0 or the MIT License.
# SPDX-License-Identifier: Apache-2.0 OR MIT
# Copyright Tock Contributors 2014.
# Copyright Pat Pannuto 2014,2016-2018,2021.
# Copyright Amit Levy 2016-2019.
# Copyright Bradford James Campbell 2022.

set -e
...

Many additional examples are available throughout the Tock repositories.

4.2 Enforcement

To ensure coverage and compliance with these policies, the Core Team SHALL author and maintain tooling which checks the presence and expected format of license and copyright information. This SHOULD be automated and integrated with continuous integration systems. Contributions which do not satisfy these license and copyright rules MUST NOT be accepted.

In exceptional situations, consensus from the Core Team MAY circumvent this policy. Such situations MUST include public explanation and public record of non-anonymized vote results. This is not expected to ever occur.

5 Author’s Address

Pat Pannuto
3202 EBU3, Mail Code #0404
9500 Gilman Dr
La Jolla, CA 92093, USA
ppannuto@ucsd.edu

Kernel Analog-to-Digital Conversion HIL

TRD: 102
Working Group: Kernel
Type: Documentary
Status: Draft
Author: Philip Levis and Branden Ghena
Draft-Created: Dec 18, 2016
Draft-Modified: June 12, 2017
Draft-Version: 2
Draft-Discuss: tock-dev@googlegroups.com

Abstract

This document describes the hardware independent layer interface (HIL) for analog-to-digital conversion in the Tock operating system kernel. It describes the Rust traits and other definitions for this service as well as the reasoning behind them. This document also describes an implementation of the ADC HIL for the SAM4L. This document is in full compliance with TRD1.

1 Introduction

Analog-to-digital converters (ADCs) are devices that convert analog input signals to discrete digital output signals, typically voltage to a binary number. While different microcontrollers can have very different control registers and operating modes, the basic high-level interface they provide is very uniform. Software that wishes to use more advanced features can directly use the per-chip implementations, which may export these features.

The ADC HIL is in the kernel crate, in module hil::adc. It provides four traits:

  • kernel::hil::adc::Adc - provides basic interface for individual analog samples
  • kernel::hil::adc::Client - receives individual analog samples from the ADC
  • kernel::hil::adc::AdcHighSpeed - provides high speed buffered analog sampling interface
  • kernel::hil::adc::HighSpeedClient - receives buffers of analog samples from the ADC

The rest of this document discusses each in turn.

2 Adc trait

The Adc trait is for requesting individual analog to digital conversions, either one-shot or repeatedly. It is implemented by chip drivers to provide ADC functionality. Data is provided through the Client trait. It has five functions and one associated type:

#![allow(unused)]
fn main() {
/// Simple interface for reading an ADC sample on any channel.
pub trait Adc {
    /// The chip-dependent type of an ADC channel.
    type Channel;

    /// Initialize must be called before taking a sample.
    fn initialize(&self) -> Result<(), ErrorCode>;

    /// Request a single ADC sample on a particular channel.
    /// Used for individual samples that have no timing requirements.
    fn sample(&self, channel: &Self::Channel) -> Result<(), ErrorCode>;

    /// Request repeated ADC samples on a particular channel.
    /// Callbacks will occur at the given frequency with low jitter and can be
    /// set to any frequency supported by the chip implementation. However
    /// callbacks may be limited based on how quickly the system can service
    /// individual samples, leading to missed samples at high frequencies.
    fn sample_continuous(&self, channel: &Self::Channel, frequency: u32) -> Result<(), ErrorCode>;

    /// Stop a sampling operation.
    /// Can be used to stop any simple or high-speed sampling operation. No
    /// further callbacks will occur.
    fn stop_sampling(&self) -> Result<(), ErrorCode>;

    fn set_client(&self, client: &'static dyn Client);
}
}

The initialize function configures the hardware to perform analog sampling. It MUST be called at least once before any samples are taken. It only needs to be called once, not once per sample. This function MUST return Ok(()) upon correct initialization or FAIL if the hardware fails to initialize successfully. If the driver is already initialized, the function SHOULD return Ok(()).

The sample function starts a single conversion on the specified ADC channel. The exact binding of this channel to external or internal analog inputs is board-dependent. The function MUST return Ok(()) if the analog conversion has been started, OFF if the ADC is not initialized or enabled, BUSY if a conversion is already in progress, or INVAL if the specified channel is invalid. The sample_ready callback of the client MUST be called when the conversion is complete.

The sample_continuous function begins repeated individual conversions on a specified channel. Conversions MUST continue at the specified frequency until stop_sampling is called. The sample_ready callback of the client MUST be called when each conversion is complete. The channels and frequency ranges supported are board-dependent. The function MUST return Ok(()) if repeated analog conversions have been started, OFF if the ADC is not initialized or enabled, BUSY if a conversion is already in progress, or INVAL if the specified channel or frequency are invalid.

The stop_sampling function can be used to stop any sampling operation, single, continuous, or high speed. Conversions which have already begun are canceled. stop_sampling MUST be safe to call from any callback in the Client or HighSpeedClient traits. The function MUST return Ok(()), OFF, or INVAL. Ok(()) indicates that all conversions are stopped and no further callbacks will occur, OFF means the ADC is not initialized or enabled, and INVAL means the ADC was not active.

The channel type is used to signify which ADC channel to sample data on for various commands. What it maps to is implementation-specific, possibly an I/O pin number or abstract notion of a channel. One approach used for channels by the SAM4L implementation is for the capsule to keep an array of possible channels, which are connected to pins by the board main.rs file, and selected from by userland applications.
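
As a hypothetical sketch of how a capsule might use this trait (the struct name and module paths are illustrative and follow the current kernel layout, but are incidental to the example):

use kernel::hil::adc::{Adc, Client};
use kernel::ErrorCode;

/// Hypothetical capsule that samples one fixed channel on demand.
pub struct TempSensor<'a, A: Adc> {
    adc: &'a A,
    channel: &'a A::Channel,
}

impl<'a, A: Adc> TempSensor<'a, A> {
    pub fn read(&self) -> Result<(), ErrorCode> {
        // `initialize` is assumed to have been called once at board setup.
        self.adc.sample(self.channel)
    }
}

impl<'a, A: Adc> Client for TempSensor<'a, A> {
    fn sample_ready(&self, sample: u16) {
        // On a 12-bit chip such as the SAM4L, the sample occupies the low 12 bits.
        let _raw = sample;
    }
}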

3 Client trait

The Client trait handles responses from Adc trait sampling commands. It is implemented by capsules to receive chip driver responses. It has one function:

#![allow(unused)]
fn main() {
/// Trait for handling callbacks from simple ADC calls.
pub trait Client {
    /// Called when a sample is ready.
    fn sample_ready(&self, sample: u16);
}
}

The sample_ready function is called whenever data is available from a sample or sample_continuous call. It is safe to call stop_sampling within the sample_ready callback. The sample data returned is a maximum of 16 bits in resolution, with the exact data resolution being chip-specific. If data is less than 16 bits (for example 12-bits on the SAM4L), it SHOULD be placed in the least significant bits of the sample value.

4 AdcHighSpeed trait

The AdcHighSpeed trait is used for sampling data at high frequencies such that receiving individual samples would be untenable. Instead, it provides an interface that returns buffers filled with samples. This trait relies on the Adc trait being implemented as well, in order to provide primitives like initialize and stop_sampling, which are used for ADCs in this mode too. While we expect many chips to support the Adc trait, we expect the AdcHighSpeed trait to be implemented only when a platform needs high-speed sampling. The trait has four functions:

#![allow(unused)]
fn main() {
/// Interface for continuously sampling at a given frequency on a channel.
/// Requires the AdcSimple interface to have been implemented as well.
pub trait AdcHighSpeed: Adc {
    /// Start sampling continuously into buffers.
    /// Samples are double-buffered, going first into `buffer1` and then into
    /// `buffer2`. A callback is performed to the client whenever either buffer
    /// is full, which expects either a second buffer to be sent via the
    /// `provide_buffer` call. Length fields correspond to the number of
    /// samples that should be collected in each buffer. If an error occurs,
    /// the buffers will be returned.
    fn sample_highspeed(&self,
                        channel: &Self::Channel,
                        frequency: u32,
                        buffer1: &'static mut [u16],
                        length1: usize,
                        buffer2: &'static mut [u16],
                        length2: usize)
                        -> (Result<(), ErrorCode>, Option<&'static mut [u16]>,
                            Option<&'static mut [u16]>);

    /// Provide a new buffer to fill with the ongoing `sample_continuous`
    /// configuration.
    /// Expected to be called in a `buffer_ready` callback. Note that if this
    /// is not called before the second buffer is filled, samples will be
    /// missed. Length field corresponds to the number of samples that should
    /// be collected in the buffer. If an error occurs, the buffer will be
    /// returned.
    fn provide_buffer(&self,
                      buf: &'static mut [u16],
                      length: usize)
                      -> (Result<(), ErrorCode>, Option<&'static mut [u16]>);

    /// Reclaim ownership of buffers.
    /// Can only be called when the ADC is inactive, which occurs after a
    /// successful `stop_sampling`. Used to reclaim buffers after a sampling
    /// operation is complete. Returns success if the ADC was inactive, but
    /// there may still be no buffers that are `some` if the driver had already
    /// returned all buffers.
    fn retrieve_buffers(&self)
                        -> (Result<(), ErrorCode>, Option<&'static mut [u16]>,
                            Option<&'static mut [u16]>);

    fn set_highspeed_client(&self, client: &'static dyn HighSpeedClient);
}
}

The sample_highspeed function is used to perform high-speed double-buffered sampling. After the first buffer is filled with samples, the samples_ready function will be called and sampling will immediately continue into the second buffer in order to reduce jitter between samples. Additional buffers SHOULD be passed through the provide_buffer call. However, if none are provided, the driver MUST cease sampling once it runs out of buffers. In case of an error, the buffers will be immediately returned from the function. The channels and frequencies acceptable are chip-specific. The return code MUST be Ok(()) if sampling has begun successfully, OFF if the ADC is not enabled or initialized, BUSY if the ADC is in use, or INVAL if the channel or frequency are invalid.

The provide_buffer function is used to provide additional buffers to an ongoing high-speed sampling operation. It is expected to be called within a samples_ready callback in order to keep sampling running without delay. In case of an error, the buffer will be immediately returned from the function. It is not an error to fail to call provide_buffer and the underlying driver MUST cease sampling if no buffers are remaining. It is an error to call provide_buffer twice without having received a buffer through samples_ready. The prior settings for channel and frequency will persist. The return code MUST be Ok(()) if the buffer has been saved for later use, OFF if the ADC is not initialized or enabled, INVAL if there is no currently running continuous sampling operation, or BUSY if an additional buffer has already been provided.

The retrieve_buffers function returns ownership of all buffers owned by the chip implementation. All ADC operations MUST be stopped before buffers are returned. Any data within the buffers SHOULD be considered invalid. It is expected that retrieve_buffers will be called from within a samples_ready callback after calling stop_sampling. Up to two buffers will be returned by the function. The return code MUST be Ok(()) if the ADC is not in operation (although as few as zero buffers may be returned); INVAL MUST be returned if an ADC operation is still in progress.
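
The following is a minimal sketch of how a capsule might start a double-buffered operation. The SoundCapsule struct, its fields, and the 44100 Hz frequency are hypothetical, and the module paths (e.g., kernel::utilities::cells::TakeCell) are approximate; the sketch only illustrates the buffer handoff described above.

use core::cell::Cell;

use kernel::hil::adc::AdcHighSpeed;
use kernel::utilities::cells::TakeCell;
use kernel::ErrorCode;

pub struct SoundCapsule<'a, A: AdcHighSpeed> {
    adc: &'a A,
    buffer1: TakeCell<'static, [u16]>,
    buffer2: TakeCell<'static, [u16]>,
    // Set elsewhere in the capsule when sampling should wind down.
    stop_requested: Cell<bool>,
}

impl<'a, A: AdcHighSpeed> SoundCapsule<'a, A> {
    fn start_sampling(&self, channel: &A::Channel) -> Result<(), ErrorCode> {
        let buf1 = self.buffer1.take().ok_or(ErrorCode::BUSY)?;
        let buf2 = match self.buffer2.take() {
            Some(b) => b,
            None => {
                self.buffer1.replace(buf1);
                return Err(ErrorCode::BUSY);
            }
        };
        let (len1, len2) = (buf1.len(), buf2.len());
        let (res, ret1, ret2) =
            self.adc.sample_highspeed(channel, 44100, buf1, len1, buf2, len2);
        if res.is_err() {
            // On error the driver hands both buffers straight back.
            if let Some(b) = ret1 {
                self.buffer1.replace(b);
            }
            if let Some(b) = ret2 {
                self.buffer2.replace(b);
            }
        }
        res
    }
}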

5 HighSpeedClient trait

The HighSpeedClient trait is used to receive samples from a call to sample_highspeed. It is implemented by a capsule to receive chip driver responses. It has one function:

#![allow(unused)]
fn main() {
/// Trait for handling callbacks from high-speed ADC calls.
pub trait HighSpeedClient {
    /// Called when a buffer is full.
    /// The length provided will always be less than or equal to the length of
    /// the buffer. Expects an additional call to either provide another buffer
    /// or stop sampling
    fn samples_ready(&self, buf: &'static mut [u16], length: usize);
}
}

The samples_ready function receives a buffer filled with up to length number of samples. Each sample MAY be up to 16 bits in size. Smaller samples SHOULD be aligned such that the data is in the least significant bits of each value. The length field MUST match the length passed in with the buffer (through either sample_highspeed or provide_buffer). Within the samples_ready callback, the capsule SHOULD call provide_buffer if it wishes to continue sampling. Alternatively, stop_sampling and retrieve_buffers SHOULD be called to stop the ongoing ADC operation.
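
Continuing the hypothetical SoundCapsule sketch from the previous section, a client implementation might look roughly like the following. This is a sketch of the callback protocol described above, not code from the Tock repository.

use kernel::hil::adc::{Adc, AdcHighSpeed, HighSpeedClient};

impl<'a, A: AdcHighSpeed> HighSpeedClient for SoundCapsule<'a, A> {
    fn samples_ready(&self, buf: &'static mut [u16], length: usize) {
        // Process the `length` samples in buf[..length] here.

        if self.stop_requested.get() {
            // Shut the operation down and reclaim any buffer the driver
            // still holds (at most two buffers exist in this sketch).
            let _ = self.adc.stop_sampling();
            let (_res, b1, b2) = self.adc.retrieve_buffers();
            self.buffer1.replace(buf);
            if let Some(b) = b1 {
                self.buffer2.replace(b);
            }
            if let Some(b) = b2 {
                self.buffer2.replace(b);
            }
        } else {
            // Hand the buffer straight back so no samples are missed.
            let len = buf.len();
            let (_res, returned) = self.adc.provide_buffer(buf, len);
            if let Some(b) = returned {
                // provide_buffer failed and returned the buffer; keep it.
                self.buffer1.replace(b);
            }
        }
    }
}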

6 Example Implementation: SAM4L

The SAM4L has a flexible ADC, supporting differential and single-ended inputs, 8- or 12-bit samples, configurable clocks, reference voltages, and grounds. It supports periodic sampling driven by an internal timer. The SAM4L ADC uses generic clock 10 (GCLK10). The ADC is peripheral 38, so its control registers are found at address 0x40038000. A complete description of the ADC can be found in Chapter 38 (Page 995) of the SAM4L datasheet.

The current implementation, found in chips/sam4l/adc.rs, implements the Adc and AdcHighSpeed traits.

6.1 ADC Channels

In order to provide a list of ADC channels to the capsule and userland, the SAM4L implementation creates an AdcChannel struct which contains an enum defining its value. Each possible ADC channel is then statically created. Other chips may want to consider a similar system.

#![allow(unused)]
fn main() {
/// Representation of an ADC channel on the SAM4L.
pub struct AdcChannel {
    chan_num: u32,
    internal: u32,
}

/// SAM4L ADC channels.
#[derive(Copy,Clone,Debug)]
#[repr(u8)]
enum Channel {
    AD0 = 0x00,
    AD1 = 0x01,
    ...
    ReferenceGround = 0x17,
}

/// Initialization of an ADC channel.
impl AdcChannel {
    /// Create a new ADC channel.
    /// channel - Channel enum representing the channel number and whether it is
    ///           internal
    const fn new(channel: Channel) -> AdcChannel {
        AdcChannel {
            chan_num: ((channel as u8) & 0x0F) as u32,
            internal: (((channel as u8) >> 4) & 0x01) as u32,
        }
    }
}

/// Statically allocated ADC channels. Used in board configurations to specify
/// which channels are used on the platform.
pub static mut CHANNEL_AD0: AdcChannel = AdcChannel::new(Channel::AD0);
pub static mut CHANNEL_AD1: AdcChannel = AdcChannel::new(Channel::AD1);
...
pub static mut CHANNEL_REFERENCE_GROUND: AdcChannel = AdcChannel::new(Channel::ReferenceGround);
}

6.2 Client Handling

As ADC functionality is split between two traits, there are two callback traits. ADC driver implementations that use both Adc and AdcHighSpeed need two clients, which must both be set:

#![allow(unused)]
fn main() {
hil::adc::Adc::set_client(&peripherals.adc, adc);
hil::adc::AdcHighSpeed::set_highspeed_client(&peripherals.adc, adc);
}

6.3 Clock Initialization

The ADC clock on the SAM4L is poorly documented. The ADC requires both a clock generated from the PBA clock and GCLK10. However, the clock used by the ADC for sampling runs at 1.5 MHz at most (for single sampling mode). In order to handle this, the SAM4L ADC implementation first divides down the clock to reach a value less than or equal to 1.5 MHz (exactly 1.5 MHz in practice for a CPU clock running at 48 MHz).
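
A sketch of this divider selection, assuming the implementation simply halves the PBA-derived clock until it is at or under the 1.5 MHz limit (the function name is hypothetical, not taken from chips/sam4l/adc.rs):

/// Hypothetical helper: pick the smallest power-of-two divider that brings
/// the PBA-derived clock to at most 1.5 MHz.
fn adc_clock_divider(pba_hz: u32) -> u32 {
    let mut divider = 1;
    while pba_hz / divider > 1_500_000 {
        divider *= 2;
    }
    divider
}

For a 48 MHz clock this returns a divider of 32, giving exactly 1.5 MHz.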

6.4 ADC Initialization

The process of initializing the ADC is well documented in the SAM4L datasheet; unfortunately, the documented process seems to be entirely wrong. While following the documentation allows for single sampling, high speed sampling fails in practice after a small number of samples (on the order of less than 100) have been collected. After much experimentation and comparison to other SAM4L code available online, it was determined that the initialization process should be:

  1. Enable clock
  2. Configure ADC
  3. Reset ADC
  4. Enable ADC
  5. Wait until ADC status is set to enabled
  6. Enable the Bandgap and Reference Buffers
  7. Wait until the buffers are enabled

It is quite possible that other orders of initialization are valid; however, proceed with caution.

7 Authors' Address

Philip Levis
409 Gates Hall
Stanford University
Stanford, CA 94305

phone - +1 650 725 9046

email - pal@cs.stanford.edu

Branden Ghena

email - brghena@umich.edu

8 Citations

[TRD1] Tock Reference Document (TRD) Structure and Keywords

Kernel General Purpose I/O (GPIO) HIL

TRD: 103
Working Group: Kernel
Type: Documentary
Status: Draft
Author: Amit Levy, Philip Levis
Draft-Created: Feb 05, 2017
Draft-Modified: April 09, 2021
Draft-Version: 3
Draft-Discuss: tock-dev@googlegroups.com

Abstract

This document describes the hardware independent layer interface (HIL) for General Purpose Input/Output (GPIO) in the Tock operating system kernel. It describes the Rust traits and other definitions for this service as well as the reasoning behind them. This document is in full compliance with TRD1.

1 Introduction

General Purpose Input/Output (GPIO) controls generic pins. User code can control the output level on the pin (high or low), read the externally driven logic level, and often configure pull-up or pull-down resistance. Typically, microcontrollers expose pins in groups called ports; however, Tock's GPIO HIL exposes pins individually, since ports often do not group pins as they are actually used on a board. Software that wishes to control a whole port (e.g. for efficiency) should use the per-chip implementation, which may export this feature.

The GPIO HIL is in the kernel crate, in module hil::gpio. It provides the following traits:

  • kernel::hil::gpio::Output controls an output pin.
  • kernel::hil::gpio::Input controls an input pin.
  • kernel::hil::gpio::Configure configures a pin.
  • kernel::hil::gpio::ConfigureInputOutput configures a pin that can simultaneously be an input and an output (some hardware supports this). It depends on Configure.
  • kernel::hil::gpio::Interrupt controls an interrupt pin. It depends on Input.
  • kernel::hil::gpio::Client handles callbacks from pin interrupts.
  • kernel::hil::gpio::InterruptWithValue controls an interrupt pin that provides a value in its callbacks. It depends on Input.
  • kernel::hil::gpio::ClientWithValue handles callbacks from pin interrupts that provide a value (InterruptWithValue).
  • kernel::hil::gpio::Pin depends on Input, Output, and Configure.
  • kernel::hil::gpio::InterruptPin depends on Pin and Interrupt.
  • kernel::hil::gpio::InterruptValuePin depends on Pin and InterruptWithValue.

The rest of this document discusses each in turn.

2 Output

The Output trait controls a pin that is an output. It has four methods:

#![allow(unused)]
fn main() {
pub trait Output {
    /// Set the GPIO pin high. If the pin is not an output or
    /// input/output, this call is ignored.
    fn set(&self);

    /// Set the GPIO pin low. If the pin is not an output or
    /// input/output, this call is ignored.
    fn clear(&self);

    /// Toggle the GPIO pin. If the pin was high, set it low. If
    /// the pin was low, set it high. If the pin is not an output or
    /// input/output, this call is ignored. Return the new value
    /// of the pin (false is cleared, true is set).
    fn toggle(&self) -> bool;

    /// Activate or deactivate a GPIO pin, for a given activation mode.
    fn write_activation(&self, state: ActivationState, mode: ActivationMode);
}
}

The write_activation method has a default implementation. This method allows software to interact with a GPIO using logical, rather than physical, behavior. For example, consider a button which is "active" when it is pushed. If the button is connected to ground and a pull-up input pin, then it is active when the pin is low; if it is connected to Vdd and a pull-down input pin, it is active when the pin is high. Similarly, an LED may be connected through a PNP transistor, whose base is controlled by a GPIO pin, such that setting the pin low turns on the LED and setting the pin high turns it off. Rather than keeping track of these polarities, software can use ActivationState to specify whether the device should be active or inactive, while ActivationMode specifies the polarity.

#![allow(unused)]
fn main() {
#[derive(Clone, Copy, PartialEq, Eq)]
pub enum ActivationState {
    Inactive = 0,
    Active = 1,
}

/// Whether a GPIO is in the `ActivationState::Active` when the signal is high
/// or low.
#[derive(Clone, Copy)]
pub enum ActivationMode {
    ActiveHigh,
    ActiveLow,
}
}
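
As a hedged illustration of this intent (a sketch, not code from the kernel), a board-independent helper could drive an LED by its logical state and let the board supply the polarity:

use kernel::hil::gpio::{ActivationMode, ActivationState, Output};

/// Hypothetical helper: turn an LED "on" regardless of how it is wired.
fn led_on<P: Output>(pin: &P, mode: ActivationMode) {
    // For an LED behind a PNP transistor the board passes ActiveLow and this
    // clears the pin; for a directly driven LED it passes ActiveHigh and this
    // sets the pin.
    pin.write_activation(ActivationState::Active, mode);
}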

3 Input

The Input trait controls an input pin. It has two methods:

#![allow(unused)]
fn main() {
pub trait Input {
    /// Get the current state of an input GPIO pin. For an output
    /// pin, return the output; for an input pin, return the input;
    /// for disabled or function pins the value is undefined.
    fn read(&self) -> bool;

    /// Get the current state of a GPIO pin, for a given activation mode.
    fn read_activation(&self, mode: ActivationMode) -> ActivationState {
        let value = self.read();
        match (mode, value) {
            (ActivationMode::ActiveHigh, true) | (ActivationMode::ActiveLow, false) => {
                ActivationState::Active
            }
            (ActivationMode::ActiveLow, true) | (ActivationMode::ActiveHigh, false) => {
                ActivationState::Inactive
            }
        }
    }
}
}

The read_activation method is similar to the write_activation method in Output, described above, but operates on input rather than output bits.

4 Configure

The Configure trait allows a caller to configure a GPIO pin. It has 10 methods, two of which have default implementations.

#![allow(unused)]
fn main() {
pub enum Configuration {
    LowPower,    // Cannot be read or written or used; effectively inactive.
    Input,       // Calls to the `Input` trait are valid.
    Output,      // Calls to the `Output` trait are valid.
    InputOutput, // Calls to both the `Input` and `Output` traits are valid.
    Function,    // Chip-specific, requires chip-specific API for more detail,
    Other,       // In a state not covered by other values.
}

pub enum FloatingState {
    PullUp,
    PullDown,
    PullNone,
}

pub trait Configure {
    fn configuration(&self) -> Configuration;
    fn make_output(&self) -> Configuration;
    fn disable_output(&self) -> Configuration;
    fn make_input(&self) -> Configuration;
    fn disable_input(&self) -> Configuration;
    fn deactivate_to_low_power(&self);
    fn set_floating_state(&self, state: FloatingState);
    fn floating_state(&self) -> FloatingState;

    // Have default implementations
    fn is_input(&self) -> bool;
    fn is_output(&self) -> bool;
}
}

The Configuration enum describes the current configuration of a pin. The key property of the enumeration, which prompts its use, is the fact that some hardware allows a pin to simultaneously be an input and an output, while in other hardware these states are mutually exclusive. For example, the Atmel SAM4L GPIO pins are always inputs, and reading them "indicates the level of the GPIO pins regardless of the pins being driven by the GPIO or by an external component". In contrast, on the nRF52 series, a GPIO pin is either an input or an output.

The Configuration enumeration encapsulates this by reporting the current configuration after a change. For example, suppose a pin has Configuration::Input and software calls make_output on it. A SAM4L will return Configuration::InputOutput while an nRF52 will return Configuration::Output.
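
For example, a hypothetical caller that needs to read back the level it drives could check the reported configuration (a sketch, not kernel code):

use kernel::hil::gpio::{Configuration, Configure};

/// Hypothetical check: make the pin an output and report whether it can
/// still be read as an input on this chip.
fn make_readable_output<P: Configure>(pin: &P) -> bool {
    match pin.make_output() {
        Configuration::InputOutput => true, // e.g., SAM4L: still readable
        _ => false,                         // e.g., nRF52: output only
    }
}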

If a client requires a pin be both an input and an output, it can use the ConfigureInputOutput trait:

#![allow(unused)]
fn main() {
pub trait ConfigureInputOutput: Configure {
    /// Make the pin a simultaneously input and output; should always
    /// return `Configuration::InputOutput`.
    fn make_input_output(&self) -> Configuration;
    fn is_input_output(&self) -> bool;
}
}

Chips that support simultaneous input/output MAY implement this trait, while others that do not support simultaneous input/output MUST NOT implement this trait. Therefore, at compile time, one can distinguish whether the client can operate properly.

The Configure::deactivate_to_low_power method exists because the best configuration for GPIO pins can depend not only on the chip but also on how they are connected in a system. This method puts the pin into whatever state is lowest power and causes it to be both unreadable and unwritable. For example, even if the lowest-power state is as a pull-down input, a client cannot read the pin while it is in this state. Blocking functionality in this way tries to prevent clients from making assumptions about the underlying hardware.

5 Interrupt and Client

The Interrupt and Client traits are how software can control and handle interrupts generated from a GPIO pin.

#![allow(unused)]
fn main() {
pub enum InterruptEdge {
    RisingEdge,
    FallingEdge,
    EitherEdge,
}

pub trait Interrupt<'a>: Input {
    fn set_client(&self, client: &'a dyn Client);
    fn enable_interrupts(&self, mode: InterruptEdge);
    fn disable_interrupts(&self);
    fn is_pending(&self) -> bool;
}

pub trait Client {
    fn fired(&self);
}
}

These traits assume that hardware can generate interrupts on rising, falling, or either edges. They do not support level (high/low) interrupts, because some hardware (for example, the nRF52 GPIOTE peripheral) does not support them. Chips or capsules that wish to support level interrupts can define a new trait that depends on the Interrupt trait.

An important aspect of these traits is that they cannot fail. For example, enable_interrupts does not return anything, so there is no way to signal failure. Because interrupts are an extremely low-level aspect of the kernel, these traits preclude there being complex conditional logic that might cause them to fail (e.g., some form of dynamic allocation or mapping). Interrupt implementations that can fail at runtime should define and use alternative traits.

6 InterruptWithValue and ClientWithValue

The InterruptWithValue and ClientWithValue traits extend interrupt handling to pass a value with an interrupt. This is useful when a single method needs to handle callbacks from multiple pins. Each pin's interrupt can have a different value, and the callback function can determine which pin the interrupt is from based on the value passed. This is used, for example, in the GPIO capsule that allows userspace to handle interrupts from multiple interrupt pins. If there weren't a ClientWithValue trait, the capsule would have to define N different callback methods for N pins. These would likely each then call a helper function with a parameter indicating which one was invoked: ClientWithValue provides this mechanism automatically.

#![allow(unused)]
fn main() {
pub trait InterruptWithValue<'a>: Input {
    fn set_client(&self, client: &'a dyn ClientWithValue);
    fn enable_interrupts(&self, mode: InterruptEdge) -> Result<(), ErrorCode>;
    fn disable_interrupts(&self);
    fn is_pending(&self) -> bool;

    fn set_value(&self, value: u32);
    fn value(&self) -> u32;
}

pub trait ClientWithValue {
    fn fired(&self, value: u32);
}
}
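
For illustration, a hypothetical capsule managing several buttons could dispatch all of their interrupts through a single method; the struct and variable names here are made up:

use kernel::hil::gpio::ClientWithValue;

/// Hypothetical capsule that owns several button pins.
struct ButtonArray;

impl ClientWithValue for ButtonArray {
    fn fired(&self, value: u32) {
        // `value` was previously assigned to each pin with `set_value`, e.g.,
        // the pin's index in the board's button array, so one callback can
        // serve every pin.
        let _button_index = value as usize;
    }
}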

The InterruptWithValue trait does not depend on the Interrupt trait because its client has a different type. Supporting both types of clients would require case logic within the GPIO implementation, whose cost (increased storage for the variably-typed reference, increased code for handling the cases) is not worth the benefit (being able to pass a Client to an InterruptWithValue).

The GPIO HIL provides a standard implementation of a wrapper that implements InterruptWithValue. It wraps around an implementation of Interrupt, defining itself as a Client and using Client::fired to invoke ClientWithValue::fired.

#![allow(unused)]
fn main() {
impl<'a, IP: InterruptPin<'a>> InterruptValueWrapper<'a, IP> {
    pub fn new(pin: &'a IP) -> Self {...}
}
}

InterruptValueWrapper implements InterruptWithValue, Client, Input, Output, and Configure.

7 Composite Traits: Pin, InterruptPin, InterruptValuePin

The GPIO HIL uses fine-grained traits in order to follow the security principle of least privilege. For example, something that needs to be able to read a GPIO pin should not necessarily be able to reconfigure or write to it. However, because handling multiple small traits at once can be cumbersome, the GPIO HIL defines several standard composite traits:

#![allow(unused)]
fn main() {
pub trait Pin: Input + Output + Configure {}
pub trait InterruptPin<'a>: Pin + Interrupt<'a> {}
pub trait InterruptValuePin<'a>: Pin + InterruptWithValue<'a> {}
}

8 Example Implementation

As of this writing (April 2021; Tock v1.6 and v2.0), there are example implementations of the GPIO HIL for the Atmel SAM4L, lowRISC, nrf5x, sifive, stm32f303xc, stm32f4xx, imxrt10xx, apollo3, and msp432 chips. The lowrisc, sam4l, and sifive chips support Configuration::InputOutput mode, while the others support only input or output mode.

9 Authors' Address

Philip Levis
414 Gates Hall
Stanford University
Stanford, CA 94305
email: Philip Levis <pal@cs.stanford.edu>
phone: +1 650 725 9046

Amit Levy
email: Amit Levy <aalevy@cs.princeton.edu>

System Calls

TRD: 104
Working Group: Kernel
Type: Documentary
Status: Draft
Author: Hudson Ayers, Guillaume Endignoux, Jon Flatley, Philip Levis, Amit Levy, Pat Pannuto, Leon Schuermann, Johnathan Van Why, dcz
Draft-Created: August 31, 2020
Draft-Modified: June 14, 2024
Draft-Version: 9
Draft-Discuss: tock-dev@googlegroups.com

Abstract

This document describes the system call application binary interface (ABI) between user space processes and the Tock kernel for 32-bit ARM Cortex-M and RISC-V RV32I platforms.

1 Introduction

The Tock operating system can run multiple independent userspace applications. Each application image is a separate process: it has its own address space and thread stack. Because applications are untrusted, the kernel uses hardware memory protection to isolate the kernel from processes. This allows applications written in C (or even assembly) to safely run on Tock. Applications invoke operations on and receive upcalls from the Tock kernel through the system call programming interface.

This document describes Tock's system call programming interface (API) and application binary interface (ABI) for 32-bit ARM Cortex-M and RISC-V RV32I platforms. It describes the system calls that Tock implements, their semantics, and how a userspace process invokes them. The ABI for other architectures, if supported, will be described in other documents.

2 Design Considerations

Three design considerations guide the design of Tock's system call API and ABI.

  1. Tock is currently supported on the ARM CortexM and RISC-V architectures. It may support others in the future. Its ABI must support both architectures and be flexible enough to support future ones.
  2. Tock userspace applications can be written in any language. The system call API must support their calling semantics in a safe way. Rust is especially important.
  3. Both the API and ABI must be efficient and support common call patterns in an efficient way.

2.1 Architectural Support and ABIs

The primary question for the ABI is how many and which registers transfer data between the kernel and userspace. Passing more registers has the benefit of the kernel and userspace being able to transfer more information without relying on pointers to memory structures. It has the cost of requiring every system call to transfer and manipulate more registers.

2.2 Programming Language APIs

Userspace support for Rust is an important requirement for Tock. A key invariant in Rust is that a given memory object can either have multiple references or a single mutable reference. If userspace passes a writeable (mutable) buffer into the kernel, it must relinquish any references to that buffer. As a result, the only way for userspace to regain a reference to the buffer is for the kernel to pass it back.

2.3 Efficiency

Programming language calling conventions are another consideration because they affect efficiency. For example, the C calling convention in ARM says that the first four arguments to a function are stored in r0-r3. Additional arguments are stored on the stack. Therefore, if the system call ABI says that arguments are stored in different registers than r0-r3, a C function call that invokes a system call will need to move the C arguments into those registers.

3 System Call ABI

This section describes the ABI for Tock on 32-bit platforms, including the exact register mappings for the CortexM and 32-bit RISC-V architectures. The ABI for 64-bit platforms is currently undefined but may be specified in a future TRD. The register mappings for future 32-bit architectures can be specified in supplemental TRDs.

3.1 Registers

When userspace invokes a system call, it passes 4 registers to the kernel as arguments. It also passes an 8-bit value indicating which type of system call (see Section 4) is being invoked (the Syscall Class ID). When the system call returns, it returns 4 registers as return values. When the kernel invokes an upcall on userspace, it passes 4 registers to userspace as arguments and has no return value.

                        CortexM   RISC-V
Syscall Arguments       r0-r3     a0-a3
Syscall Return Values   r0-r3     a0-a3
Syscall Class ID        svc       a4
Upcall Arguments        r0-r3     a0-a3
Upcall Return Values    None      None

How registers are mapped to arguments can affect performance and code size. For system calls implemented by capsules and drivers (command, subscribe, and allow), arguments that are passed to these calls should be placed in the same registers that will be used to invoke those calls. This allows the system call handlers in the kernel to pass them unchanged, rather than have to move them between registers.

For example, command has this signature:

#![allow(unused)]
fn main() {
fn command(&self, minor_num: usize, r2: usize, r3: usize, caller_id: AppId) -> Result<(), ErrorCode>
}

This means that the value which will be passed as r2 to the command should be placed in register r2 when userspace invokes the system call. That way, the system call handler can just leave register r2 unchanged. If, instead, the argument r2 were passed in register r3, the system call handler would have to spend an instruction moving register r3 to register r2.

Driver system call implementations in the Tock kernel typically pass a reference to self as their first argument. Therefore, r0 is usually used to dispatch onto the correct driver; this argument is consumed by the system call handler and replaced with &self when the actual system call method is invoked.
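
For illustration only, a CortexM userspace library might wrap the Command class (Syscall Class ID 2, Section 4.3) roughly as in the following sketch. This is not the actual libtock-c or libtock-rs implementation; it merely restates the register assignments described above.

/// Hypothetical raw Command wrapper for CortexM: the class ID is the `svc`
/// immediate, the four arguments go in r0-r3, and the four return values
/// come back in r0-r3.
unsafe fn raw_command(driver: u32, command_num: u32, arg0: u32, arg1: u32) -> [u32; 4] {
    let (r0, r1, r2, r3): (u32, u32, u32, u32);
    core::arch::asm!(
        "svc 2",
        inlateout("r0") driver => r0,
        inlateout("r1") command_num => r1,
        inlateout("r2") arg0 => r2,
        inlateout("r3") arg1 => r3,
    );
    [r0, r1, r2, r3]
}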

3.2 Return Values

All system calls have the same return value format. A system call can return one of several variants, having different associated value types, which are shown here. r0-r3 refer to the return value registers: for CortexM they are r0-r3 and for RISC-V they are a0-a3.

System call return variant   r0    r1                  r2                  r3
Failure                      0     Error code          -                   -
Failure with u32             1     Error code          Return Value 0      -
Failure with 2 u32           2     Error code          Return Value 0      Return Value 1
Failure with u64             3     Error code          Return Value 0 LSB  Return Value 0 MSB
Success                      128   -                   -                   -
Success with u32             129   Return Value 0      -                   -
Success with 2 u32           130   Return Value 0      Return Value 1      -
Success with u64             131   Return Value 0 LSB  Return Value 0 MSB  -
Success with 3 u32           132   Return Value 0      Return Value 1      Return Value 2
Success with u32 and u64     133   Return Value 0      Return Value 1 LSB  Return Value 1 MSB

There are many failure and success variants because different system calls need to pass different amounts of data. A command that requests a 64-bit timestamp, for example, needs its success to return a u64, but its failure can return nothing. In contrast, a system call that passes a pointer into the kernel may have a simple success return value but requires a failure with one 32-bit value so the pointer can be passed back.

Every system call MUST return only one failure and only one success variant. Different system calls may use different failure and success variants, but any specific system call returns exactly one of each. If an operation might have multiple success return variants or failure return variants, then it MUST be split into multiple system calls.

This requirement of a single failure variant and a single success variant is to simplify userspace implementations and preclude them from having to handle many different cases. The presence of many different cases suggests that the operation should be split up, as there is non-determinism in its execution or its meaning is overloaded. The requirement of a single failure and a single success variant also fits well with Rust's Result type.

If userspace tries to invoke a system call that the kernel does not support, the system call will return a Failure result with an error code of NODEVICE or NOSUPPORT (Section 4). As the Allow and Subscribe system call classes have defined Failure types, the kernel can produce the expected type with known failure variants. Command, however, can return any variant. This means that Commands can appear to have two failure variants: the one expected (e.g., Failure with u32) as well as Failure. To avoid this ambiguity for NODEVICE, userspace can use the reserved "exists" command (Command Identifier 0), described in Section 4.3.1. If this command returns Success, the driver is installed and will not return a Failure with NODEVICE for Commands. The driver may still return NOSUPPORT, however. Because this implies a misunderstanding of the system call API by userspace (it is invoking system calls that do not exist), userspace is responsible for handling this case.

For all Failure types, the passed Error code MUST be placed in r1.

All 32-bit values not specified for r0 in the above table are reserved. Reserved r0 values MAY be used by a future TRD and MUST NOT be returned by the kernel unless specified in a TRD. Therefore, for future compatibility, userspace code MUST handle r0 values that it does not recognize.
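
A userspace library can therefore decode the return registers into the one failure and one success variant a particular call expects, treating anything else as unrecognized. A minimal sketch for a command that returns Failure or Success with u32 (the type and function names are hypothetical):

/// Hypothetical decoded result for a command returning Success with u32.
enum CommandResult {
    Success(u32),      // r0 = 129: value in r1
    Failure(u32),      // r0 = 0: error code in r1
    Unrecognized(u32), // reserved r0 value: must be tolerated by userspace
}

fn decode(regs: [u32; 4]) -> CommandResult {
    match regs[0] {
        0 => CommandResult::Failure(regs[1]),
        129 => CommandResult::Success(regs[1]),
        other => CommandResult::Unrecognized(other),
    }
}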

3.3 Error Codes

All system call failures return an error code. These error codes are a superset of kernel error codes. They include all kernel error codes so errors from calls on kernel HILs can be easily mapped to userspace system calls when suitable. There are additional error codes to include errors related to userspace.

Value  Error Code   Meaning
1      FAIL         General failure condition: no further information available.
2      BUSY         The driver or kernel is busy: retry later.
3      ALREADY      This operation is already ongoing and cannot be executed more times in parallel.
4      OFF          This subsystem is powered off and must be turned on before issuing operations.
5      RESERVE      Making this call requires some form of prior reservation, which has not been performed.
6      INVALID      One or more of the parameters passed to the operation was invalid.
7      SIZE         The size specified is too large or too small.
8      CANCEL       The operation was actively cancelled by a call to a cancel() method or function.
9      NOMEM        The operation required memory that was not available (e.g. a grant region or a buffer).
10     NOSUPPORT    The system call is not available to or not supported for the calling process.
11     NODEVICE     The driver specified by the driver number is not available to the calling process.
12     UNINSTALLED  The resource was removed or uninstalled (e.g., an SD card).
13     NOACK        The packet transmission was sent but not acknowledged.
1024   BADRVAL      The variant of the return value did not match what the system call should return.

Values in the range 1-1023 reflect kernel return value error codes. Kernel error codes not specified above are reserved. TRDs MAY specify additional kernel error codes from these reserved values, but MUST NOT specify kernel error codes greater than 1023. The Tock kernel MUST NOT return an error code unless the error code is specified in a TRD.

Values greater than 1023 are reserved for userspace library use. Value 1024 (BADRVAL) is for when a system call returns a different failure or success variant than the userspace library expects.

3.4 Returning To Userspace

When the kernel returns to userspace, it only gets to set registers for one stack frame. In practice, we have two cases:

Direct Resume

Userspace resumes execution directly after the svc invocation, so the assembly that follows the svc command can use the values in r0-r3 as-set by the kernel.

Pushed Callback

Userspace resumes execution at the start of the callback function.

The values in r0-r3 are consumed by the callback. When the callback finishes, it will pop {lr} (or similar), where the link register in the callback stack frame has been set by the kernel to the instruction after the svc that relinquished control to the kernel.

The assembly that invoked the syscall now gets to run. At this point r0-r3 are unknown as those are caller-save registers (which means the Upcall callback can clobber them freely). The assembly that invoked the svc cannot make any assumptions about the values in r0-r3, nor can the kernel use them to pass things "to" the calling assembly. Thus, the PushedCallback case has to use a pointer-based approach for the kernel to communicate with the assembly that invokes the svc (e.g. yield-param-A in Yield-NoWait).

4 System Call API

Tock has 7 classes or types of system calls. When a system call is invoked, the class is encoded as the Syscall Class Number. Some system call classes are implemented by the core kernel and so the supported calls are the same across kernels. Others are implemented by system call drivers, which can be added and removed in different kernel builds. The full set of valid system calls a kernel supports therefore depends on what system call drivers it has installed.

The 7 classes are:

Syscall Class      Syscall Class Number
Yield              0
Subscribe          1
Command            2
Read-Write Allow   3
Read-Only Allow    4
Memop              5
Exit               6

All of the system call classes except Yield and Exit are non-blocking. When a userspace process calls a Subscribe, Command, Read-Write Allow, Read-Only Allow, or Memop syscall, the kernel will not put the process on a wait queue while handling the syscall. Instead, the kernel will complete the syscall and prepare the return value for the syscall immediately. The kernel scheduler may not, however, run the process immediately after handling the syscall, and may instead decide to suspend the process due to a timeslice expiration or the kernel thread being runnable. If an operation is long-running (e.g., I/O), its completion is signaled by an upcall (see the Subscribe call in 4.2).

Successful calls to Exit system calls do not return (the process exits).

System calls implemented by system call drivers (Subscribe, Command, Read-Write Allow, Read-Only Allow) all include two arguments, a driver number and a syscall number. The driver number specifies which system call driver to invoke. The syscall number (which is different than the Syscall Class Number in the table above) specifies which instance of that system call on that driver to invoke. Both arguments are unsigned 32-bit integers. For example, by convention the Console system call driver has driver number 0x1 and a Command to the console driver with syscall number 0x2 starts receiving console data into a buffer.

If userspace invokes a system call on a peripheral driver that is not installed in the kernel, the kernel MUST return a Failure result with an error of NODEVICE. If userspace invokes an unrecognized system call on a peripheral driver, the peripheral driver MUST return a Failure result with an error of NOSUPPORT.

4.1 Yield (Class ID: 0)

The Yield system call class is how a userspace process handles upcalls, relinquishes the processor to other processes, or waits for one of its long-running calls to complete. The Yield system call class implements the only blocking system calls in Tock that return: Yield-Wait and Yield-WaitFor. The kernel invokes upcalls only in response to Yield system calls.

There are three Yield system call variants:

  • Yield-Wait
  • Yield-NoWait
  • Yield-WaitFor

The register arguments for Yield system calls are as follows. The registers r0-r3 correspond to r0-r3 on CortexM and a0-a3 on RISC-V.

Argument        Register
Yield number    r0
yield-param-A   r1
yield-param-B   r2
yield-param-C   r3

The Yield number (in r0) specifies which call is invoked:

System call      Yield number value
yield-no-wait    0
yield-wait       1
yield-wait-for   2

All other yield number values are reserved. If an invalid yield number is passed the kernel MUST return immediately and MUST NOT use yield-param-A, yield-param-B, or yield-param-C.

The meaning of yield-param-X is specific to the yield type.

4.1.1 Yield-NoWait

Yield number 0, Yield-NoWait, executes a single upcall if any is pending. If no upcalls are pending it returns immediately. There are no return values from Yield-NoWait. This is because if an upcall was invoked, the kernel pushes that function call onto the stack, such that the return value may be the return value of the upcall.

Yield-NoWait will use yield-param-A as the memory address of an 8-bit byte to write to indicate whether an upcall was invoked. If invoking Yield-NoWait resulted in an upcall executing, Yield-NoWait writes 1 to the field address. If invoking Yield-NoWait resulted in no upcall executing, Yield-NoWait writes 0 to the field address. Userspace SHOULD ensure that yield-param-A points to a valid address in the current process. If userspace does not wish to receive the Yield-NoWait result, it SHOULD set yield-param-A to 0x0. The kernel SHALL write the Yield-NoWait result if yield-param-A points to any valid process memory and SHALL NOT write the Yield-NoWait result if it points to an address not in the memory allocated to the calling process.

Userspace can use the Yield-NoWait result to implement loops that flush the upcall queue by calling Yield-NoWait repeatedly until the queue is empty.

yield-param-B and yield-param-C are unused and reserved.
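
For example, a userspace library could drain the upcall queue with a loop like the following sketch, where raw_yield_no_wait is a hypothetical binding (not a real libtock function) that places yield number 0 in r0 and the result address in r1:

extern "C" {
    /// Hypothetical syscall shim for Yield-NoWait, provided elsewhere
    /// (e.g., by a libtock syscall layer).
    fn raw_yield_no_wait(result: *mut u8);
}

/// Run pending upcalls until the kernel reports that none executed.
fn drain_upcalls() {
    let mut upcall_ran: u8 = 1;
    while upcall_ran != 0 {
        upcall_ran = 0;
        unsafe { raw_yield_no_wait(&mut upcall_ran as *mut u8) };
    }
}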

4.1.2 Yield-Wait

Yield number 1, Yield-Wait, blocks until an upcall executes. It is commonly used when applications have no other work to do and are waiting for an event (upcall) to occur to do more work.

This call will deliver events to the userspace application in the order they occurred in time in the kernel. If an application has multiple subscriptions, the userspace upcall handler is responsible for in some way noting which callback occurred if necessary.

Note: This will only return after an upcall executes. If an event occurs which would normally generate an upcall, but that upcall is currently assigned to the Null Upcall, no upcall executes and thus this syscall will not return.

Yield-Wait has no return value. This is because invoking an upcall pushes that function call onto the stack, such that the return value of a call to yield system call may be the return value of the upcall.

yield-param-A, yield-param-B, and yield-param-C are unused and reserved.

4.1.3 Yield-WaitFor

The third call, Yield-WaitFor, blocks until one specific upcall is ready to execute. If other events arrive that would invoke an upcall on this process, they are queued by the kernel, and will be delivered in response to subsequent Yield calls. Event order in this queue is maintained.

The specific upcall is identified by a Driver number and a Subscribe number (which together form an UpcallId).

  • Driver number: yield-param-A
  • Subscribe number: yield-param-B

This process will resume execution when an event in the kernel generates an upcall that matches the specified upcall. No userspace callback function will be invoked by the kernel. Instead, the contents of r0-r2 will be set to the Upcall Arguments provided by the driver when the upcall is scheduled.

yield-param-C is unused and reserved.

4.2 Subscribe (Class ID: 1)

The Subscribe system call class is how a userspace process registers upcalls with the kernel. Subscribe system calls are implemented by peripheral syscall drivers, so the set of valid Subscribe calls depends on the platform and what drivers were compiled into the kernel.

The register arguments for Subscribe system calls are as follows. The registers r0-r3 correspond to r0-r3 on CortexM and a0-a3 on RISC-V.

Argument           Register
Driver number      r0
Subscribe number   r1
Upcall pointer     r2
Application data   r3

The upcall pointer is the address of the first instruction of the upcall function. The application data argument is a parameter that an application passes in and the kernel passes back in upcalls unmodified.

The upcall pointer SHOULD be a valid upcall, i.e., either a SubscribeUpcall or the Null Upcall, as defined in the next section.

If the passed upcall is not valid (is outside process executable memory and is not the Null Upcall described below), the kernel MUST NOT invoke the requested driver and MUST immediately return a failure with an error code of INVALID. The currently registered upcall remains registered and the kernel does not cancel any pending invocations of the existing upcall.

Any upcall passed from a process MUST remain valid until the next successful invocation of subscribe by that process with the same syscall and driver number. When a process makes a successful subscribe system call (one which results in the Success with 2 u32 return variant), the kernel MUST cancel all pending upcalls on that process for that driver and subscribe number: it MUST NOT invoke the previous upcall after the call to subscribe, and MUST NOT invoke the new upcall for events that the kernel handled before the call to subscribe.

Note that these semantics create a period over which upcalls might be lost: any upcalls that were pending when subscribe was called will not be invoked. On one hand, losing upcalls can create strange behavior in userspace. On the other, ensuring correctness is difficult. If the pending upcalls are invoked on the old function, there is a safety/liveness issue; this means that an upcall function must exist after it has been removed, and so for safety may need to be static (exist for the lifetime of the process). Therefore, to allow dynamic upcalls, an upcall can't be invoked after it's unregistered.

Invoking the new upcall in response to prior events has its own correctness issues. For example, suppose that userspace registers an upcall for receiving a certain type of event (e.g., a rising edge on a GPIO pin). It then changes the type of event (to falling edge) and registers a new upcall. Invoking the new upcall on the previous events will be incorrect.

If userspace requires that it not lose any upcalls, it should not re-subscribe and instead use some form of userspace dispatch.

The return variants for Subscribe system calls are Failure with 2 u32 and Success with 2 u32. For success, the first u32 is the upcall pointer passed in the previous call to Subscribe (the existing upcall) and the second u32 is the application data pointer passed in the previous call to Subscribe (the existing application data). For failure, the first u32 is the passed upcall pointer and the second u32 is the passed application data pointer. For the first successful call to Subscribe for a given upcall, the upcall pointer and application data pointer returned MUST be the Null Upcall (described below).

4.2.1 The Null Upcall

The Tock kernel defines an upcall pointer as the Null Upcall. The Null Upcall denotes an upcall that the kernel will never invoke. The Null Upcall is used for two reasons. First, a userspace process passing the Null Upcall as the upcall pointer for Subscribe indicates that there should be no more upcalls. Second, the first time a userspace process calls Subscribe for a particular upcall, the kernel needs to return upcall and application pointers indicating the current configuration; in this case, the kernel returns the Null Upcall. The Tock kernel MUST NOT invoke the Null Upcall.

The Null Upcall upcall pointer MUST be 0x0. This means it is not possible for userspace to pass address 0x0 as a valid code entry point. Unlike systems with virtual memory, where 0x0 can be reserved for a special meaning, in microcontrollers with only physical memory 0x0 is a valid memory location. It is possible that a Tock kernel is configured so its applications start at address 0x0. However, even if they do begin at 0x0, the Tock Binary Format for application images means that the first address will not be executable code and so 0x0 will not be a valid function. In the case that 0x0 is valid application code and is where the linker places an upcall function, the first instruction of the function should be a no-op and the address of the second instruction should be passed instead.

If a userspace process invokes subscribe on a driver ID that is not installed in the kernel, the kernel MUST return a failure with an error code of NODEVICE and an upcall of the Null Upcall.

4.3 Command (Class ID: 2)

The Command system call class is how a userspace process calls a function in the kernel, either to return an immediate result or start a long-running operation. Command system calls are implemented by syscall drivers, so the set of valid Command calls depends on the platform and what drivers were compiled into the kernel.

The register arguments for Command system calls are as follows. The registers r0-r3 correspond to r0-r3 on CortexM and a0-a3 on RISC-V.

Argument         Register
Driver number    r0
Command number   r1
Argument 0       r2
Argument 1       r3

Argument 0 and argument 1 are unsigned 32-bit integers. Command calls should never pass pointers: those are passed with Allow calls, as they can adjust memory protection to allow the kernel to access them.

The return variants of Command are instance-specific. Each specific Command instance (combination of Driver and Command number) specifies its failure variant and success variant. If userspace invokes a command on a peripheral that is not installed, the kernel returns a failure variant of Failure, with an associated error code of NODEVICE. Therefore, command invocations that need to handle userspace/kernel mismatches should be able to handle Failure in addition to the expected failure variant (if different than Failure).

4.3.1 Command Identifier 0

Command Identifier 0 provides an existence check for drivers. Command Identifier 0 MUST return either Success or Failure with NODEVICE. Success indicates that the driver is present and the userspace process can issue system calls to it. If the driver is not accessible, Command Identifier 0 returns Failure with an error code of NODEVICE. A driver may not be accessible because the kernel does not include it, because the process does not have the required permissions to use it, or for other reasons.
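
A userspace existence check can therefore look like the following sketch, using the libtock-c command() wrapper from Section 5.3 (the success-variant constant name here is illustrative):

#include <stdbool.h>
#include <stdint.h>

bool driver_exists(uint32_t driver_num) {
  // Command 0 takes no arguments and returns Success if the driver is
  // present and accessible, or Failure with NODEVICE otherwise.
  syscall_return_t ret = command(driver_num, 0, 0, 0);
  return ret.type == TOCK_SYSCALL_SUCCESS;
}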

4.4 Read-Write Allow (Class ID: 3)

The Read-Write Allow system call class is how a userspace process shares a buffer with the kernel that the kernel can read and write.

The register arguments for Read-Write Allow system calls are as follows. The registers r0-r3 correspond to r0-r3 on CortexM and a0-a3 on RISC-V.

Argument        Register
Driver number   r0
Allow number    r1
Address         r2
Size            r3

The allow number argument is an ordinal number (index) of the buffer. When Read-Write Allow is called, the provided buffer SHALL get assigned to the provided allow number, replacing the previous buffer assigned to that allow number, if there was one. The supported allow numbers are defined by the driver.

The Tock kernel MUST check that the passed buffer is contained within the calling process's writeable address space. Every byte of a passed buffer must be readable and writeable by the process. Zero-length buffers may therefore have arbitrary addresses. If the passed buffer is not entirely contained within the calling process's writeable address space, the kernel MUST return a failure result with an error code of INVALID.

The return variants for Read-Write Allow system calls are Failure with 2 u32 and Success with 2 u32. In both cases, Argument 0 contains an address and Argument 1 contains a length. When a driver implementing the Read-Write Allow system call returns a failure result, it MUST return the same address and length as those that were passed in the call. When a driver implementing the Read-Write Allow system call returns a success result, the returned address and length MUST be those that were passed in the previous call, unless this is the first call. On the first successful invocation of a particular Read-Write Allow system call, a driver implementation MUST return address 0 and size 0.

If the kernel cannot access the grant region for this process, NOMEM will be returned. This can be caused either by running out of space in the process's grant region of RAM, or because the grant was never registered with the kernel during capsule creation at board startup. If the specified allow number is not supported by the driver, the kernel will return INVALID.

The standard access model for allowed buffers is that userspace does not read or write a buffer that has been allowed: access to the memory is intended to be exclusive either to userspace or to the kernel. To regain access to a passed buffer B, the process calls the same Read-Write Allow system call again. If this call returns a success result, the result contains buffer B. The process can call with a zero-length buffer if it wishes to pass no memory to the kernel. Once a buffer has been returned to userspace as part of a Read-Write Allow system call, the kernel is guaranteed to no longer have access to the described memory region, unless that memory is currently shared with the kernel as part of the newly passed buffer or through another Allow mechanism.

Note that buffers held by the kernel are still considered part of a process address space, even if conceptually the process should not access that memory. This means, for example, that userspace may extend a buffer by calling allow with the same pointer and a longer length and such a call is not required to return an error code of INVALID. Similarly, it is possible for userspace to allow the same buffer multiple times to the kernel. This means, in practice, that the kernel may have multiple writeable references to the same memory and MUST take precautions to ensure this does not violate safety within the kernel.

Finally, because a process conceptually relinquishes access to a buffer when it makes a Read-Write Allow call with it, a userspace API MUST NOT assume or rely on a process accessing an allowed buffer. If userspace needs to read or write to a buffer held by the kernel, it MUST first regain access to it by calling the corresponding Read-Write Allow.
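
A sketch of this exchange, using the libtock-c allow_readwrite() wrapper described in Section 5.4 (the driver and allow numbers are placeholders):

uint8_t rx_buffer[64];

void share_and_reclaim(void) {
  // Share the buffer with the kernel.
  allow_rw_return_t shared = allow_readwrite(DRIVER_NUM, ALLOW_NUM,
                                             rx_buffer, sizeof(rx_buffer));
  if (!shared.success) return;

  // ... start an operation with command() and yield() until its upcall ...

  // Reclaim the buffer by allowing a zero-length buffer to the same allow
  // number; on success the returned pointer and size describe rx_buffer.
  allow_rw_return_t reclaimed = allow_readwrite(DRIVER_NUM, ALLOW_NUM, NULL, 0);
  (void)reclaimed;
}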

4.4.1 Buffers Can Change

The standard use of Read-Write Allow requires that userspace does not access a buffer once it has been allowed. However, the kernel MUST NOT assume that an allowed buffer does not change: there could be a bug, compromise, or other error in the userspace code. The fact that the kernel thread always preempts any user thread in Tock allows capsules to assume that a series of accesses to an allowed buffer is atomic. However, if the capsule relinquishes execution (e.g., returns from a method called on it), it may be that userspace runs in the meantime and modifies the buffer. Note that userspace could also, in this time, issue another allow call to revoke the buffer, or crash, such that the buffer is no longer valid.

The canonical case of incorrectly assuming a buffer does not change involves the length of a buffer. In this example, taken from the SPI controller capsule, userspace allows a buffer, then a command specifies a length (arg1) of how many bytes of the buffer to read or write. The variable mlen is the length of the buffer.

#![allow(unused)]
fn main() {
if mlen >= arg1 && arg1 > 0 {
    app.len = arg1;
    app.index = 0;
    self.busy.set(true);
    self.do_next_read_write(app);
    CommandReturn::success()
}
}

Checking that the length fits within the allowed buffer when the command is issued is insufficient, as the buffer could change during the underlying hardware I/O operation. If the buffer is replaced with one that is much smaller, the length passed in the command may now be too large. The index variable keeps track of where in the buffer the next write should occur: the capsule breaks up long writes into multiple, smaller writes to bound the size of its static kernel buffer. If capsule code blindly copies the number of bytes specified in the command, without re-checking the buffer length, it can cause the kernel to panic due to an out-of-bounds access.

Therefore, in the read_write_done callback, the capsule checks the length of the buffer that userspace wants to read data into. The third line checks that the end of the just completed operation isn't past the end of the current userspace buffer (which could happen if the userspace buffer became shorter).

#![allow(unused)]
fn main() {
let end = index;
let start = index - length;
let end = cmp::min(end, dest.len());
let start = cmp::min(start, end);

let real_len = cmp::min(end - start, src.len());
let dest_area = &mut dest[start..end];

for (i, c) in src[0..real_len].iter().enumerate() {
    dest_area[i] = *c;
}
}

For similar reasons, a capsule should not cache computations on values from an allowed buffer. If the buffer changes, then those computations may no longer be correct (e.g., computing a length based on fields in the buffer).

4.5 Read-Only Allow (Class ID: 4)

The Read-Only Allow class is very similar to the Read-Write Allow class. It differs in three ways:

  1. The buffer it passes to the kernel is read-only, and the process MAY freely read the buffer.
  2. The kernel MUST NOT write to a buffer shared with a Read-Only Allow.
  3. The allow numbers in the Read-Only Allow are independent from those in the Read-Write Allow.

The semantics and calling conventions of Read-Only Allow are otherwise identical to Read-Write Allow: a userspace API MUST NOT depend on writing to a shared buffer and the kernel MUST NOT assume the buffer does not change.

This restriction on writing to buffers is to limit the complexity of code review in the kernel. If a userspace library relies on writes to shared buffers, then kernel code correspondingly relies on them. This sort of concurrent access can have unforeseen edge cases which cause the kernel to panic, e.g., because values changed between method calls.

The Read-Only Allow class exists so that userspace can pass references to constant data to the kernel. This is useful, for example, when a process prints a constant string to the console; it wants to allow the constant string to the kernel as an application slice, then call a command that transmits the allowed slice. Constant strings are usually stored in flash, rather than RAM, and Tock's memory protection marks flash as read-only for the process. Therefore, if a process tries to pass a constant string stored in flash through a Read-Write Allow, the allow will fail because the kernel detects that the passed slice is not writeable.

Another common use case for Read-Only Allow is passing test or diagnostic data. A U2F authentication key, for example, will often run some cryptographic tests at boot to ensure correct operation. These tests store input data, keys, and expected output data as constants in flash. An encrypt operation, for example, wants to be able to pass a read-only input and read-only key to obtain a ciphertext. Without a Read-Only Allow, all of this read-only data has to be copied into RAM, and for software engineering reasons these RAM buffers may be difficult to reuse.

Having a Read-Only Allow allows a system call driver to clearly specify whether data is read-only or read-write and also saves processes the RAM overhead of having to copy read-only data into RAM so it can be passed with a Read-Write Allow.
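
As an illustration, the constant-string case might look like the following sketch, using the libtock-c allow_readonly() and command() wrappers from Sections 5.5 and 5.3 (the driver, allow, and command numbers are placeholders, not the actual console ABI):

#include <string.h>

static const char greeting[] = "hello, world\r\n";  // linked into flash

void print_greeting(void) {
  // Share the constant string read-only with the kernel.
  allow_ro_return_t a = allow_readonly(CONSOLE_DRIVER, ALLOW_RO_WRITE,
                                       greeting, strlen(greeting));
  if (!a.success) return;
  // Ask the driver to transmit the allowed slice.
  command(CONSOLE_DRIVER, CMD_WRITE, strlen(greeting), 0);
  // ... subscribe to the completion upcall and yield() until it fires ...
}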

The Tock kernel MUST check that the passed buffer is contained within the calling process's readable address space. Every byte of the passed buffer must be readable by the process. Zero-length buffers may therefore have arbitrary addresses. If the passed buffer is not entirely contained within the calling process's readable address space, the kernel MUST return a failure result with an error code of INVALID.

4.6 Memop (Class ID: 5)

The Memop class is how a userspace process requests and provides information about its address space. The register arguments for Memop system calls are as follows. The registers r0-r3 correspond to r0-r3 on CortexM and a0-a3 on RISC-V.

Argument             Register
Operation            r0
Operation argument   r1
unused               r2
unused               r3

The operation argument specifies which memory operation to perform. There are 12:

Memop Operation   Operation                                                 Success
0                 Break                                                     Success
1                 SBreak                                                    Success with u32
2                 Get process RAM start address                             Success with u32
3                 Get address immediately after process RAM allocation      Success with u32
4                 Get process flash start address                           Success with u32
5                 Get address immediately after process flash region        Success with u32
6                 Get lowest address (end) of the grant region              Success with u32
7                 Get number of writeable flash regions in process header   Success with u32
8                 Get start address of a writeable flash region             Success with u32
9                 Get end address of a writeable flash region               Success with u32
10                Set the start of the process stack                        Success
11                Set the start of the process heap                         Success

The success return variant is Memop class system call specific and specified in the table above. All Memop class system calls have a Failure failure type.

4.7 Exit (Class ID: 6)

The Exit system call class is how a userspace process terminates. Successful calls to Exit system calls do not return.

There are two Exit system calls:

  • exit-terminate
  • exit-restart

The first call, exit-terminate, terminates the process and tells the kernel that it may reclaim and reallocate the process as well as all of its resources. Usually this indicates that the process has completed its work.

The second call, exit-restart, terminates the process and tells the kernel that the application would like to restart if possible. If the kernel restarts the application, it MUST assign it a new process identifier. The kernel MAY reuse existing process resources (e.g., RAM regions) or MAY allocate new ones.

The register arguments for Exit system calls are as follows. The registers r0-r3 correspond to r0-r3 on CortexM and a0-a3 on RISC-V.

Argument          Register
Exit number       r0
Completion code   r1

The exit number specifies which call is invoked.

System call      Exit number value
exit-terminate   0
exit-restart     1

The difference between exit-terminate and exit-restart is what behavior the application asks from the kernel. With exit-terminate, the application tells the kernel that it considers itself completed and does not need to run again. With exit-restart, it tells the kernel that it would like to be rebooted and run again. For example, exit-terminate might be used by a process that stores some one-time data on flash, while exit-restart might be used if the process runs out of memory.

The completion code is an unsigned 32-bit number which indicates status. This information can be stored in the kernel and used in management or policy decisions. The definition of these status codes is outside the scope of this document.

If an exit syscall is successful, it does not return. Therefore, the return value of an exit syscall is always Failure. exit-restart and exit-terminate MUST always succeed and so never return.

5 libtock-c Userspace Library Methods

This section describes the method signatures for system calls and upcalls in C, as an example of how they appear to application/userspace code.

Because C allows a single return value but Tock system calls can return multiple values, they do not easily map to idiomatic C. These low-level APIs are translated into standard C code by the userspace library. The general calling convention is that the complex return types are returned as structs. Since these structs are composite types larger than a single word, the ARM and RISC-V calling conventions pass them on the stack.

The system calls are implemented as inline assembly. This assembly moves arguments into the correct registers and invokes the system call, and on return copies the returned data into the return type on the stack.

5.1 Yield

The Yield system calls have these function prototypes:

int yield_no_wait(void);
void yield(void);

yield_no_wait returns 1 if an upcall was invoked and 0 if one was not invoked.
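
These calls are typically used to build simple blocking helpers in userspace. The following is a minimal sketch of the common wait-for-a-flag pattern (libtock-c provides a similar yield_for() helper; this sketch is not its exact implementation):

#include <stdbool.h>

// Block until an upcall sets *flag to true.
static void wait_for(bool* flag) {
  while (!*flag) {
    yield();  // returns only after at least one upcall has run
  }
}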

5.2 Subscribe

The subscribe system call has this function prototype:

typedef void (subscribe_upcall)(int, int, int, void*);

typedef struct {
  bool success;
  subscribe_upcall* upcall;
  void* userdata;
  tock_error_t error;
} subscribe_return_t;

subscribe_return_t subscribe(uint32_t driver, uint32_t subscribe,
                             subscribe_upcall uc, void* userdata);

The success field indicates whether the call to subscribe succeeded. If it failed, the error code is stored in error. If it succeeded, the value in error is undefined.
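
A sketch of registering an upcall and checking the result (the driver and subscribe numbers are placeholders):

#include <stdbool.h>

static bool done = false;

static void my_upcall(int arg0, int arg1, int arg2, void* userdata) {
  *(bool*)userdata = true;
}

bool register_upcall(void) {
  subscribe_return_t sub = subscribe(DRIVER_NUM, SUBSCRIBE_NUM, my_upcall, &done);
  if (!sub.success) {
    // sub.error holds the error code, e.g. the NODEVICE error described above.
    return false;
  }
  // sub.upcall and sub.userdata hold the previously registered upcall
  // (the Null Upcall on the first successful subscribe).
  return true;
}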

5.3 Command

The command system call has this function prototype:

typedef struct {
  syscall_rtype_t type;
  uint32_t data[3];
} syscall_return_t;

syscall_return_t command(uint32_t driver, uint32_t command, int data, int arg2);

Because a command can return any failure or success variant, it returns a direct mapping of the return registers. The type field contains the value of r0, while data[0] contains what was passed in r1, data[1] contains what was passed in r2, and data[2] contains what was passed in r3.
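
For example, a sketch of decoding a command whose documented success variant is Success with u32 (the driver and command numbers are placeholders, and the variant constant name is illustrative):

#include <stdint.h>

uint32_t read_sample(void) {
  syscall_return_t ret = command(SENSOR_DRIVER, CMD_READ, 0, 0);
  if (ret.type == TOCK_SYSCALL_SUCCESS_U32) {
    return ret.data[0];  // the u32 value returned in r1
  }
  return 0;  // on a plain Failure, ret.data[0] holds the error code
}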

5.4 Read-Write Allow

The read-write allow system call has this function prototype:

typedef struct {
  bool success;
  void* ptr;
  size_t size;
  tock_error_t error;
} allow_rw_return_t;

allow_rw_return_t allow_readwrite(uint32_t driver, uint32_t allow, void* ptr, size_t size);

The success field indicates whether the call succeeded. If it failed, the error code is stored in error. If it succeeded, the value in error is undefined. ptr and size contain the pointer and size of the passed buffer.

5.5 Read-Only Allow

The read-only allow system call has this function prototype:

typedef struct {
  bool success;
  const void* ptr;
  size_t size;
  tock_error_t error;
} allow_ro_return_t;

allow_ro_return_t allow_readonly(uint32_t driver, uint32_t allow, const void* ptr, size_t size);

The success field indicates whether the call succeeded. If it failed, the error code is stored in error. If it succeeded, the value in error is undefined. ptr and size contain the pointer and size of the passed buffer.

5.6 Memop

Because the Memop system calls are defined by the kernel and not extensible, they are directly defined by libtock-c as library functions:

void* tock_app_memory_begins_at(void);
void* tock_app_memory_ends_at(void);
void* tock_app_flash_begins_at(void);
void* tock_app_flash_ends_at(void);
void* tock_app_grant_begins_at(void);
int tock_app_number_writeable_flash_regions(void);
void* tock_app_writeable_flash_region_begins_at(int region_index);
void* tock_app_writeable_flash_region_ends_at(int region_index);

They wrap around an underlying function which uses inline assembly:

void* memop(uint32_t op_type, int arg1);
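
For example, two of these wrappers could be expressed in terms of memop() as in the sketch below, using the operation numbers from the table in Section 4.6 (the actual libtock-c implementations may differ):

void* tock_app_flash_begins_at(void) {
  return memop(4, 0);  // operation 4: process flash start address
}

void* tock_app_writeable_flash_region_begins_at(int region_index) {
  return memop(8, region_index);  // operation 8: writeable region start address
}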

5.7 Exit

The Exit system calls have these function prototypes:

void tock_exit(uint32_t completion_code);
void tock_restart(uint32_t completion_code);

Since these two variants of Exit never return, they have no return value.
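
A sketch of typical use, following the completion-code guidance in the Application Completion Codes TRD later in this chapter (0 indicates normal termination; the non-zero value 1 corresponds to the FAIL error code in TRD 104):

#include <stdbool.h>

void finish(bool ok) {
  tock_exit(ok ? 0 : 1);
}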

6 Authors' Address

Guillaume Endignoux <guillaumee@google.com>

Jon Flatley <jflat@google.com>

Philip Levis
414 Gates Hall
Stanford University
Stanford, CA 94305

Phone: +1 650 725 9046
Email: pal@cs.stanford.edu


Amit Levy <aalevy@cs.princeton.edu>

Pat Pannuto <ppannuto@ucsd.edu>

Leon Schuermann <leon@is.currently.online>

Johnathan Van Why <jrvanwhy@google.com>

7 References and Additional Information

Kernel Time HIL

TRD: 105
Working Group: Kernel
Type: Documentary
Status: Draft
Obsoletes: 101
Author: Guillaume Endignoux, Amit Levy, Philip Levis, and Jett Rink
Draft-Created: 2021/07/23
Draft-Modified: 2021/07/23
Draft-Version: 1.0
Draft-Discuss: Github PR

Abstract

This document describes the hardware independent layer interface (HIL) for time in the Tock operating system kernel. It describes the Rust traits and other definitions for this service as well as the reasoning behind them. This document is in full compliance with TRD1.

1 Introduction

Microcontrollers provide a variety of hardware controllers that keep track of time. The Tock kernel organizes these various types of controllers into two broad categories: alarms and timers. Alarms continuously increment a clock and can fire an event when the clock reaches a specific value. Timers can fire an event after a certain number of clock ticks have elapsed.

The time HIL is in the kernel crate, in module hil::time. It provides seven main traits:

  • kernel::hil::time::Time: provides an abstraction of a moment in time. It has two associated types. One describes the width and maximum value of a time value. The other specifies the frequency of the ticks of the time value.
  • kernel::hil::time::Counter: derives from Time and provides an abstraction of a free-running counter that can be started or stopped. A Counter's moment in time is the current value of the counter.
  • kernel::hil::time::Alarm: derives from Time, and provides an abstraction of being able to receive a callback at a future moment in time.
  • kernel::hil::time::Timer: derives from Time, and provides an abstraction of being able to receive a callback at some amount of time in the future, or a series of callbacks at a given period.
  • kernel::hil::time::OverflowClient: handles an overflow callback from a Counter.
  • kernel::hil::time::AlarmClient: handles the callback from an Alarm.
  • kernel::hil::time::TimerClient: handles the callback from a Timer.

In addition, to provide a level of minimal platform independence, a port of Tock to a given microcontroller is expected to implement certain instances of these traits. This allows, for example, system call capsules for alarm callbacks to work across boards and chips.

This document describes these traits, their semantics, and the instances that a Tock chip is expected to implement.

2 Time, Frequency, Ticks, and ConvertTicks traits

The Time trait represents a moment in time, which is obtained by calling now.

The trait has two associated types. The first, Frequency, is an implementation of the Frequency trait which describes how many ticks there are in a second. The inverse of the frequency defines the time interval between two ticks of time.

The second associated type, Ticks, defines the width of the time value. This is an associated type because different microcontrollers represent time with different bit widths: most Cortex-M microcontrollers, for example, use 32 bits, while RISC-V uses 64 bits and the Nordic nRF51822 provides only a 24-bit counter. The Ticks associated type defines this, such that users of the Time trait can know when wraparound will occur.

The Ticks trait requires several other traits from core::cmp: Ord, PartialOrd, and Eq. This is so that methods such as min_by_key can be used with Iterators when examining a set of Ticks values. The MuxAlarm structure in capsules::virtual_alarm does this, for example, to find the next alarm that should fire.

#![allow(unused)]
fn main() {
pub trait Ticks: Clone + Copy + From<u32> + fmt::Debug + Ord + PartialOrd + Eq {
    fn into_usize(self) -> usize;
    fn into_u32(self) -> u32;

    fn wrapping_add(self, other: Self) -> Self;
    fn wrapping_sub(self, other: Self) -> Self;

    // Returns whether `self` is in the range of [`start`, `end`), using
    // unsigned arithmetic and considering wraparound. It returns true
    // if, incrementing from `start`, `self` will be reached before `end`.
    // Put another way, it returns `self - start < end - start` in
    // unsigned arithmetic.
    fn within_range(self, start: Self, end: Self) -> bool;

    fn max_value() -> Self;

    /// Converts the specified val into this type if it fits, otherwise the
    /// `max_value()` is returned
    fn from_or_max(val: u64) -> Self;

    /// Scales the ticks by the specified numerator and denominator. If the
    /// resulting value would be greater than u32,`u32::MAX` is returned instead
    fn saturating_scale(self, numerator: u32, denominator: u32) -> u32;
}

pub trait Frequency {
    fn frequency() -> u32; // Represented in Hz
}

pub trait Time {
    type Frequency: Frequency;
    type Ticks: Ticks;

    fn now(&self) -> Self::Ticks;
}

pub trait ConvertTicks<T: Ticks> {
    /// Returns the number of ticks in the provided number of seconds,
    /// rounding down any fractions. If the value overflows Ticks, it
    /// returns `Ticks::max_value()`.
    fn ticks_from_seconds(&self, s: u32) -> T;
    /// Returns the number of ticks in the provided number of milliseconds,
    /// rounding down any fractions. If the value overflows Ticks, it
    /// returns `Ticks::max_value()`.
    fn ticks_from_ms(&self, ms: u32) -> T;
    /// Returns the number of ticks in the provided number of microseconds,
    /// rounding down any fractions. If the value overflows Ticks, it
    /// returns `Ticks::max_value()`.
    fn ticks_from_us(&self, us: u32) -> T;
    /// Returns the number of seconds in the provided number of ticks,
    /// rounding down any fractions. If the value overflows u32, `u32::MAX`
    /// is returned,
    fn ticks_to_seconds(&self, tick: T) -> u32;
    /// Returns the number of milliseconds in the provided number of ticks,
    /// rounding down any fractions. If the value overflows u32, `u32::MAX`
    /// is returned,
    fn ticks_to_ms(&self, tick: T) -> u32;
    /// Returns the number of microseconds in the provided number of ticks,
    /// rounding down any fractions. If the value overflows u32, `u32::MAX`
    /// is returned,
    fn ticks_to_us(&self, tick: T) -> u32;
}
}

Frequency is defined as an associated type of the Time trait (Time::Frequency). It MUST implement the Frequency trait, which has a single method, frequency. frequency returns the frequency in Hz, e.g., 1 MHz is 1000000. Clients can use this to write code that is independent of the underlying frequency.

An instance of Time or a derived trait MUST NOT advertise a Frequency greater than the precision of its underlying clock. It must be able to accurately return every possible value in the range of Ticks without further quantization. It is therefore not allowed to take a 32 kHz clock and present it as an instance of Time with a frequency of Freq16MHz.

Frequency allows a user of Time to know the granularity of ticks and so avoid quantization error when two different times map to the same time tick. For example, if a user of Time needs microsecond precision, then the associated type can be used to statically check that it is not put on top of an implementation with 32 kHz precision.

The ConvertTicks trait is auto-implemented on any object that implements the Time trait. This auto-implemented trait is provided for convenience to help convert seconds, milliseconds, or microseconds to/from ticks. These helper methods all round down the result. This means, for example, that if the Time instance has a frequency of 32 kHz, calling ticks_from_us(20) returns 0, because a single tick of a 32 kHz clock is 30.5 microseconds.

3 Counter and OverflowClient traits

The Counter trait is the abstraction of a free-running counter that can be started and stopped. This trait derives from the Time trait, so it has associated Frequency and Ticks types. The Counter trait allows a client to register for callbacks when the counter overflows.

#![allow(unused)]
fn main() {
pub trait OverflowClient {
  fn overflow(&self);
}

pub trait Counter<'a>: Time {
  fn start(&self) -> Result<(), ErrorCode>;
  fn stop(&self) -> Result<(), ErrorCode>;
  fn reset(&self) -> Result<(), ErrorCode>;
  fn is_running(&self) -> bool;
  fn set_overflow_client(&self, client: &'a dyn OverflowClient);
}
}

The OverflowClient trait is separated from the AlarmClient trait because there are cases when software simply wants a free-running counter to keep track of time, but does not need triggers at a particular time. For hardware that has a limited number of compare registers, allocating one of them when the compare itself isn't needed would be wasteful.

Note that Tock's concurrency model means interrupt bottom halves can be delayed until the current bottom half (or syscall invocation) completes. This means that an overflow callback can be invoked well after the overflow actually occurred. For example, suppose there is an 8-bit counter. The following execution is possible:

  1. Client code calls Time::now, which returns 250.
  2. An overflow happens, marking an interrupt as pending but the bottom half doesn't execute yet.
  3. Client code calls Time::now, which returns 12.
  4. The main event loop runs, invoking the bottom half.
  5. The Counter calls OverflowClient::overflow, notifying the client of the overflow.

A Counter implementation MUST NOT provide a Frequency of a higher resolution than an underlying hardware counter. For example, if the underlying hardware counter has a frequency of 32 kHz, then a Counter cannot say it has a frequency of 1MHz by multiplying the underlying counter by 32. A Counter implementation MAY provide a Frequency of a lower resolution (e.g., by stripping bits).

The reset method of Counter resets the counter to 0.

4 Alarm and AlarmClient traits

Instances of the Alarm trait track an incrementing clock and can trigger callbacks when the clock reaches a specific value as well as when it overflows. The trait derives from the Time trait and therefore has associated Time::Frequency and Ticks types.

The AlarmClient trait handles callbacks from an instance of Alarm. The trait derives from OverflowClient and adds an additional callback denoting that the time specified to the Alarm has been reached.

Alarm and Timer (presented below) differ in their level of abstraction. An Alarm presents the abstraction of receiving a callback when a point in time is reached or on an overflow. In contrast, Timer allows one to request callbacks at some interval in the future, either once or periodically. Alarm requests a callback at an absolute moment while Timer requests a callback at a point relative to now.

#![allow(unused)]
fn main() {
pub trait AlarmClient {
  fn alarm(&self);
}

pub trait Alarm<'a>: Time {
  fn set_alarm(&self, reference: Self::Ticks, dt: Self::Ticks);
  fn get_alarm(&self) -> Self::Ticks;
  fn disarm(&self) -> Result<(), ErrorCode>;
  fn set_alarm_client(&self, client: &'a dyn AlarmClient);
}
}

Alarm has a disarm method to cancel an existing alarm. Calling set_alarm enables an alarm. If there is currently no alarm set, this sets a new alarm. If there is an alarm set, calling set_alarm cancels the previous alarm and replaces it with the new one. It cancels the previous alarm so that a client does not have to disambiguate which alarm it is handling, the previous or the current one.

The reference parameter of set_alarm is typically a sample of Time::now taken just before set_alarm is called, but it can also be a stored value from a previous call. The reference parameter follows the invariant that it is in the past: its value is less than or equal to what a call to Time::now would return.

The set_alarm method takes reference and dt parameters to handle edge cases in which it can be impossible to distinguish between alarms for the very near past and alarms for the very far future. The edge case occurs when the underlying counter increments past the compare value between when the call was made and when the compare register is actually set. Because the counter has moved past the intended compare value, it will have to wrap around before the alarm will fire. However, one cannot simply assume that the counter has moved past the intended compare value and issue a callback: the software may have requested an alarm very far in the future, close to the width of the counter.

Having reference and dt parameters disambiguates these two cases. Suppose the current counter value is current. If current is not within the range [reference, reference + dt) (considering unsigned wraparound), then the requested firing time has already passed and the callback should be issued immediately (e.g., with a deferred procedure call, or by setting the alarm to fire very soon in the future).

5 Timer and TimerClient traits

The Timer trait presents the abstraction of a timer. The timer can either be one-shot or periodic with a fixed interval. Timer derives from Time and therefore has associated Time::Frequency and Ticks types.

The TimerClient trait handles callbacks from an instance of Timer. The trait has a single callback, denoting that the timer has fired.

#![allow(unused)]
fn main() {
pub trait TimerClient {
  fn timer(&self);
}

pub trait Timer<'a>: Time {
  fn set_timer_client(&self, client: &'a dyn TimerClient);
  fn oneshot(&self, interval: Self::Ticks) -> Self::Ticks;
  fn repeating(&self, interval: Self::Ticks) -> Self::Ticks;

  fn interval(&self) -> Option<Self::Ticks>;
  fn is_oneshot(&self) -> bool;
  fn is_repeating(&self) -> bool;

  fn time_remaining(&self) -> Option<Self::Ticks>;
  fn is_enabled(&self) -> bool;

  fn cancel(&self) -> Result<(), ErrorCode>;
}
}

The oneshot method causes the timer to invoke the TimerClient's timer callback exactly once when interval clock ticks have elapsed. Calling oneshot MUST invalidate and replace any previous calls to oneshot or repeating. The method returns the actual number of ticks in the future at which the callback will execute. This value MAY be greater than interval to prevent certain timer race conditions (e.g., hardware that requires a compare be set at least N ticks in the future) but MUST NOT be less than interval.

The repeating method causes the timer to invoke the TimerClient's timer callback periodically, every interval clock ticks. Calling repeating MUST invalidate and replace any previous calls to oneshot or repeating. The method returns the actual number of ticks in the future at which the first callback will execute. This value MAY be greater than interval to prevent certain timer race conditions (e.g., hardware that requires a compare be set at least N ticks in the future) but MUST NOT be less than interval.

6 Frequency and Ticks Implementations

The time HIL provides five standard implementations of Frequency:

#![allow(unused)]
fn main() {
pub struct Freq16MHz;
pub struct Freq1MHz;
pub struct Freq32KHz;
pub struct Freq16KHz;
pub struct Freq1KHz;
}

The time HIL provides three standard implementations of Ticks:

#![allow(unused)]
fn main() {
pub struct Ticks24Bits(u32);
pub struct Ticks32Bits(u32);
pub struct Ticks64Bits(u64);
}

The 24-bit implementation exists to support some Nordic Semiconductor nRF platforms (e.g., the nRF52840) whose counters are only 24 bits wide.

7 Capsules

The Tock kernel provides three standard capsules:

  • capsules::alarm::AlarmDriver provides a system call driver for an Alarm.
  • capsules::virtual_alarm provides a set of abstractions for virtualizing a single Alarm into many.
  • capsules::virtual_timer provides a set of abstractions for virtualizing a single Alarm into many Timer instances.

8 Required Modules

A chip MUST provide an instance of Alarm with a Frequency of Freq32KHz and a Ticks of Ticks32Bits.

A chip MUST provide an instance of Time with a Frequency of Freq32KHz and a Ticks of Ticks64Bits.

A chip SHOULD provide an Alarm with a Frequency of Freq1MHz and a Ticks of Ticks32Bits.

9 Implementation Considerations

This section describes implementation considerations for hardware implementations.

The trickiest aspects of implementing the traits in this document relate to the Alarm trait and the semantics of how and when callbacks are triggered. In particular, if set_alarm indicates a time that has already passed, then the implementation should adjust it so that it will trigger very soon (rather than wait for a wrap-around).

This is complicated by the fact that, as the code is executing, the underlying counter continues to tick. Therefore an implementation must also be careful that this "very soon" time does not fall into the past. Furthermore, many instances of timer hardware require that a compare value be some minimum number of ticks in the future. In practice, this means setting "very soon" to a safe number of ticks in the future is a better implementation approach than trying to be extremely precise, inadvertently choosing a value that is too soon, and then waiting for a full wraparound.

Pseudocode to handle these cases is as follows:

set_alarm(self, reference, dt):
  now = now()
  expires = reference.wrapping_add(dt)
  if !now.within_range(reference, expires):
    expires = now

  if expires.wrapping_sub(now) < MIN_DELAY:
    expires = now.wrapping_add(MIN_DELAY)

  clear_alarm()
  set_compare(expires)
  enable_alarm()

10 Acknowledgements

The traits and abstractions in this document draw from contributions and ideas from Patrick Mooney and Guillaume Endignoux as well as others.

11 Modification After TRD 101

This TRD obsoletes TRD 101, and the changes include:

  • The ticks_from_ helper methods moved from the Time trait to the ConvertTicks trait
    • This allows downstream clients to use trait objects (i.e., dyn) with the Time, Alarm, and Timer traits. Even though it is possible to use trait objects with Time and its subtraits, it is still beneficial to use generic parameters when there is only a single concrete type.
  • Added ticks_to_ helper methods on ConvertTicks
  • Added a few more support methods on the Ticks trait.

12 Authors' Address

Amit Levy
amit@amitlevy.com

Philip Levis
409 Gates Hall
Stanford University
Stanford, CA 94305
USA
pal@cs.stanford.edu

Guillaume Endignoux
guillaumee@google.com

Jett Rink
jettrink@google.com

Application Completion Codes

TRD: 106
Working Group: Kernel
Type: Documentary
Status: Draft
Author: Alyssa Haroldsen
Draft-Created: December 6, 2021
Draft-Modified: January 25, 2022
Draft-Version: 1
Draft-Discuss: tock-dev@googlegroups.com

Abstract

This advisory document describes the expected behavior of application completion codes when terminating via the exit syscall, as described in TRD 104.

1 Introduction

When an application exits via the exit syscall, it can specify a completion code, an unsigned 32-bit number which indicates status. This information can be stored in the kernel and used in management or policy decisions.

This number is called an "exit status", "exit code", or "result code" on other platforms.

2 Design Considerations

When possible, Tock applications should follow existing conventions and terminology from other major platforms. Following the principle of least astonishment helps make the project more understandable to newcomers.

This advisory document provides guidance for the ecosystem of Tock applications using the exit syscall, and does not define the behavior of the syscall itself.

3 Design

A completion code of 0 passed to the exit syscall MUST indicate normal app termination. A non-zero completion code SHOULD be used to indicate abnormal termination. This distinction is useful so that a Tock kernel can handle success/failure cases differently, e.g. by printing error messages, and so that kernel extensions (such as process exit handlers defined by a board) or external tools (such as a tool designed to parse the output from a kernel with trace_syscalls enabled) can match on these two cases. This behavior also matches the convention for Unix exit codes, such that it likely matches the expectations for users coming from that domain.

A completion code between 1 and 1024 inclusive SHOULD be the same value as one of the error codes specified in TRD 104. This requirement is a SHOULD rather than a MUST because it is useful in the common case (it allows software to infer something about the cause of an error that led to an exit, and possibly print a useful message) but also allows a process to do something else if needed (e.g. for compatibility with some other standard of exit codes).

Accordingly, the core kernel MUST NOT assume any semantic meaning for completion codes or take actions based on their values besides printing error messages unless

  • there is a specification of a particular application's completion code space written in a TRD, and

  • the kernel can reliably identify that application and associate it with this specification.

While there are common and conventional uses of certain values, applications are not required to follow these and may assign their own semantic meanings to values.

Completion Code   Meaning
0                 Success
1-1024            SHOULD be a TRD 104 error code
1025-u32::MAX     Not defined

4 Implementation

As of writing, libtock currently implements this TRD via the Termination trait.

#![allow(unused)]
fn main() {
pub trait Termination {
    fn complete<S: Syscalls>(self) -> !;
}

impl Termination for () {
    fn complete<S: Syscalls>(self) -> ! {
        S::exit_terminate(0)
    }
}

impl Termination for Result<(), ErrorCode> {
    fn complete<S: Syscalls>(self) -> ! {
        let exit_code = match self {
            Ok(()) => 0,
            Err(ec) => ec as u32,
        };
        S::exit_terminate(exit_code);
    }
}
}

5 Authors' Address

Alyssa Haroldsen <kupiakos@google.com>
Hudson Ayers <hayers@stanford.edu>

Draft TRDs

These TRDs have not been finalized.

Application IDs (AppID), Credentials, and Process Loading

TRD:
Working Group: Kernel
Type: Documentary
Status: Draft
Author: Philip Levis, Johnathan Van Why
Draft-Created: 2021/09/01
Draft-Modified: 2022/10/14
Draft-Version: 10
Draft-Discuss: tock-dev@googlegroups.com

Abstract

This document describes the design and implementation of application identifiers (AppIDs) in the Tock operating system. AppIDs provide a mechanism to identify the application contained in a userspace binary that is distinct from a process identifier. AppIDs allow the kernel to apply security policies to applications as their code evolves and their binaries change. A board defines how the kernel verifies AppIDs and which AppIDs the kernel will load. This document describes the Rust traits and software architecture for AppIDs as well as the reasoning behind them. This document is in full compliance with TRD1.

1 Introduction

The Tock kernel needs to be able to manage and restrict what userspace applications can do. Examples include:

  • making sure other applications cannot access an application's sensitive data stored in non-volatile memory,
  • restricting certain system calls to be used only by trusted applications,
  • running and loading only applications that a trusted third party has signed.

In order to accomplish this, the kernel needs a way to identify an application and know whether a particular userspace binary belongs to an application. Multiple binaries can be associated with a single application. For example, software updates may cause a system to have more than one version of an application, such that it can roll back to the old version if there is a problem with the new one. In this case, there are two different userspace binaries, both associated with the same application.

To remain flexible and support many use cases, the Tock kernel makes minimal assumptions on the structure and form of application credentials and corresponding application identifiers. Application credentials are arbitrary k-byte sequences that are stored in a userspace binary's Tock binary format (TBF) footers. Before a process is eligible to execute, a Tock board uses an AppID (application identifier) checker to determine the AppIDs of each userspace binary available on the board and decide whether to load the binary into a process.

The Tock kernel ensures that each running process has a unique application identifier; if two userspace binaries have the same AppID, the kernel will only permit one of them to run at any time.

Most of the complications in AppIDs stem from the fact that they are a general mechanism used for many different use cases. Therefore, the exact structure and semantics of application credentials can vary widely. Tock's TBF footer formats, kernel interfaces and mechanisms must accommodate this wide range.

The interfaces and standard implementations for AppIDs and AppID checkers are in the kernel crate, in the module process_checker. There are three main traits:

  • kernel::process_checker::AppCredentialsPolicy is responsible for defining which types of application credentials the kernel accepts and whether it accepts a particular application credential for a specific application binary. The kernel only loads userspace programs that the AppCredentialsPolicy accepts.

  • kernel::process_checker::AppUniqueness compares the application identifiers of two processes and reports whether they differ. The kernel uses this trait to ensure that each running process has a unique application identifier.

  • kernel::process_checker::Compress compresses application identifiers into short, 32-bit identifiers called ShortIds. ShortIds provide a mechanism for fast comparison, e.g., for an application identifier against an access control list.

Example implementations can be found in kernel::process_checker::basic.

In normal use of Tock, a software tool running on a host copies TBF Objects into an application flash region. When the Tock kernel boots, it scans this application flash region for TBF Objects. After inspecting the Userspace Binary, TBF headers, and TBF Footers in a TBF Object, the kernel assigns it an Application Identifier and decides whether to run it.

2 Terminology

This document uses several terms in precise ways. Because these terms overlap somewhat with general terminology in the Tock kernel, this section defines them for clarity. The Tock kernel often uses the term "application" to refer to what this document calls an "Application Binary."

Userspace Binary: a code image compiled to run in a Tock process, consisting of text, data, read-only data, and other segments.

TBF Object: a Tock binary format object stored on a Tock device, containing TBF headers, a Userspace Binary, and TBF footers. TBF Objects are typically generated from ELF files using the elf2tab tool and are the standard binary format for Tock userspace processes.

Application: userspace software developed and maintained by an individual, group, corporation, or organization that meets the requirements of a Tock device use case. An Application can have multiple Userspace Binaries, e.g., to support versioning.

Application Identifier: a numerical identifier for an application. Each loaded process has a single Application Identifier. Application Identifiers are not unique across loaded processes: multiple loaded processes can share the same application identifier. Application Identifiers, however, are unique across running processes. If multiple loaded processes share the same Application Identifier, at most one of them can run at any time. An Application Identifier can be persistent across boots or restarts of a userspace binary. The Tock kernel assigns Application Identifiers to processes using an Identifier Policy.

Application Credentials: metadata that establish integrity of a Userspace Binary. Application Credentials are usually stored in Tock Binary Format footers. A TBF object can have multiple Application Credentials.

Process Checker: the component of the Tock kernel which is responsible for validating Application Credentials and determining which Application Credential (if any) the kernel should apply to a process.

Identifier Policy: the algorithm that the Process Checker uses to assign Application Identifiers to processes. An Identifier Policy defines an Application Identifier space.

Credentials Checking Policy: the algorithm that the Process Checker uses to decide how Tock responds to particular Application Credentials. The boot sequence typically passes the Credentials Checking Policy to the Process Checker to use when loading processes.

Global Application Identifier: an Application Identifier which, given an expected combination of Credentials Checking Policy and Identifier Policy, is both globally consistent across all TBF objects for a particular Application and unique to that Application. All instances of the Application loaded with this combination of policies have this Application Identifier. No instances of other Applications loaded with this Credentials Checking Policy have this Application Identifier. One example of a Global Application Identifier is a public key used to verify the digital signature of every TBF Object of a single Application. Another example of a Global Application Identifier is a string name stored in a TBF Object header; in this case the party installing TBF Objects needs to make sure there are no unintended collisions between these string names.

Locally Unique Application Identifier: a special kind of Application Identifier that is by definition unique from all other Application Identifiers. Locally Unique Application Identifiers do not have a concrete value that can be examined or stored. All tests for equality with a Locally Unique Application Identifier return false. Locally Unique Application Identifiers exist in part to be an easy way to indicate that a process has no special privileges and its identity is irrelevant from a security standpoint.

Short ID: a 32-bit compressed representation of an Application Identifier. Application Identifiers can be large (e.g., an RSA key) or expensive to compare (a string name); Short IDs exist as a way for an Identifier Policy to map Application Identifiers to a small identifier space in order to improve both the space and time costs of checking identity.

3 Application Identifiers and Application Credentials

Application Identifiers and Application Credentials are related but they are not the same thing. An Application Identifier is a numerical representation of the Application's identity. Application Credentials are data that, combined with an Identifier Policy, can cryptographically bind an Application Identifier to a process.

For example, suppose there are two versions (v1.1 and v1.2) of the same Application. They have different Userspace Binaries. Each version has Application Credentials consisting of a signature over the TBF headers and Userspace Binary, signed by a known public key. The Identifier Policy is that the public key defines the Application Identifier: all versions of this Application have Application Credentials signed by this key. The two versions have different Application Credentials, because their hashes differ, but they have the same Application Identifier.

3.1 Application Identifiers

The key restriction Application Identifiers impose is that the kernel MUST NOT simultaneously run two processes that have the same Application Identifier. This restriction is because an Application Identifier provides an identity for a Userspace Binary. Two processes with the same Application Identifier are two copies or versions of the same Application. As Application Identifiers are used to control access to resources such as storage, this restriction ensures there is at most one process accessing resources or data belonging to an Application Identifier, which precludes the need for consistency mechanisms for concurrent access.

Application Identifiers can be used for security policy decisions in the rest of the kernel. For example, a kernel may allow only Applications whose Application Credentials use a particular trusted public key to access restricted functionality, but restrict other applications to use a subset of available system calls. By defining the Application Identifier of a process to be the public key, the system can map this key to a Short ID (described below) that gives access to restricted functionality.

The Tock kernel assigns each Tock process a unique process identifier, which can be re-used over time (like POSIX process identifiers). These process identifiers are separate from and unrelated to Application Identifiers. An Application Identifier identifies an Application, while a process identifier identifies a particular execution of a binary. For example, if a Userspace Binary exits and runs a second time, the second execution will have the same Application Identifier but may have a different process identifier.

3.1.1 Global Application Identifiers

Global Application Identifiers are a class of Application Identifiers that have properties which make them useful for security policies. For Applications that use Global Application Identifiers, the combination of the Application Credentials put in TBF Objects, Credentials Checking Policy, and Identifier Policy establish a one-to-one mapping between Applications and Global Application Identifiers. If an Application has a Global Application Identifier, then every process running that Application has that Global Application Identifier. Conversely, that Global Application Identifier is unique to that Application; two Applications do not share a Global Application Identifier.

One important implication of this mapping is that Global Application Identifiers MUST persist across process restarts or reloads.

Poor management of Global Application Identifiers can lead to unintended collisions. For example, an Identifier Policy might define the Global Application Identifier of processes to be the public key of a key pair to sign an Application Credential. If a developer accidentally uses the wrong key to sign a Userspace Binary, the Tock kernel will think that Userspace Binary is a different Application. Similarly, if the Identifier Policy uses a string name in a TBF Object header as the Global Application Identifier, then incorrectly giving two different programs the same name could lead them to sharing data.

3.1.2 The "Locally Unique" Identifier

Some Tock use cases do not require a real notion of Application identity. In many research or prototype systems, for example, every Userspace Binary has complete access to the system and there is no need for persistent storage or identity. Running processes need an Application Identifier, but in these cases it is not necessary for a Tock kernel and Application build system to manage Global Application Identifiers.

In such use cases, the Identifier Policy can assign a special Application Identifier called the "Locally Unique Identifier". This identifier does not have a concrete value: it is simply a value that is by definition different from all other Application Identifiers. Because it does not have a concrete value, one cannot test for equality with a Locally Unique Application Identifier. All comparisons with a Locally Unique Application Identifier return false.

3.2 Application Credentials

Application Credentials are information stored in TBF Footers. The exact format and information of Application Credentials are described in the next section. They typically store cryptographic information that establishes the Application a Userspace Binary belongs to as well as provide integrity.

Application Identifiers can, but do not have to, be derived from Application Credentials. For example, a Tock system with a permissive Credentials Checking Policy may allow processes with no Application Credentials to run, and have an Identifier Policy that defines Application Identifiers to be the ASCII name stored in a TBF header.

In cases when a TBF Object does not have any Application Credentials, the Identifier Policy MAY assign it a Global Application Identifier. This identifier must follow all of the requirements in Section 3.1.1.

3.3 Example Use Cases

The following four use cases demonstrate different ways in which Identifier Policies can assign Application Identifiers, some of which use Application Credentials:

  1. A research system that (memory permitting) runs every Userspace Binary loaded on it. The Identifier Policy assigns every Userspace Binary a Locally Unique Application Identifier and the Credentials Checking Policy approves TBF Objects independently of their credentials.

  2. A system which runs only a small number of pre-defined Applications, where an Application is defined by a particular public RSA key. The Credentials Checking Policy only accepts TBF Objects with Application Credentials containing an RSA signature from a small number of pre-approved keys. The Identifier Policy defines the Global Application Identifier of a process to be the public key used to generate the accepted Application Credentials for the TBF Object. Before verifying a signature in a TBF footer, the Process Checker decides whether it accepts the associated public key using the Credentials Checking Policy. The Identifier Policy then assigns the public key in the TBF footer as the Global Application Identifier.

  3. A system which runs any number of Applications but all Applications must be signed by a particular RSA key. The Credentials Checking Policy only accepts TBF Objects with a Credentials of an RSA signature from the approved key. The Identifier Policy defines the Application Identifier as the UTF-8 encoded package name stored in the TBF Header (or "" if none is stored). Two Userspace Binaries with the same package name will not run concurrently.

  4. A system that loads the same Userspace Binary into multiple different processes at the same time. The Identifier Policy assigns each such process a Locally Unique Identifier. If the Userspace Binary needs integrity or authenticity, then the Credentials Checking Policy can require signatures. This differs from the first example in that a single Userspace Binary can be loaded into multiple processes, instead of each Userspace Binary being loaded once. The use cases are different but can (in terms of identifiers and credentials) be implemented the same way.

As the above examples illustrate, Application Credentials can vary in size and content. The credentials that a kernel's Credentials Checking Policy will accept depends on its use case. Certain devices might only accept Application Credentials which include a particular public key, while others will accept many. Furthermore, the internal format of these credentials can vary. Finally, the cryptography used in credentials can vary, either due to security policies or certification requirements.

Because the Identifier Policy is responsible for assigning Application Identifiers to processes, it is possible for the same Userspace Binary to have different Application Identifiers on different Tock systems. For example, suppose a TBF Object has two Application Credentials TBF footers: one signs with a key A, and the other with key B. Tock systems using a Credentials Checking Policy that accepts key A may use A as the Global Application Identifier, while Tock systems using a different policy that accepts key B may use B as the Global Application Identifier.

4 Process Loading

Tock defines its process loading algorithm in order to provide deterministic behavior in the presence of colliding Application Identifiers. This algorithm is designed to protect against downgrade attacks and misconfiguration.

The process loading operation consists of three stages:

  1. When it boots, the Tock kernel scans its application flash region for TBF Objects. While parsing each TBF Object, the kernel checks that the TBF Object is valid and can run on the system (e.g., does not require a newer kernel version).

  2. After finding a valid and suitable TBF Object, the kernel checks the credentials of the TBF Object. Using the provided Credentials Checking Policy (described in Section 6), it decides whether the process has permission to run. If the TBF Object is allowed to run, the kernel loads the process binary into a slot in the process binaries array.

  3. Each process in the process binaries array is runnable in terms of its credentials. However, at any given time it might not be allowed to run because its Application Identifier or Short ID conflicts with another process. The kernel scans the array of process binaries and determines whether to run each process based on its Application Identifier, Short ID, and the Application Binary version number (stored in the Program Header, described in Section 5.1). At boot, the kernel starts a process if either:

    • The process has a unique Application Identifier and Short ID, or
    • The process has a higher Application Binary version number than all processes it shares its Application Identifier or Short ID with.

    If two processes which share a Short ID or Application ID have the same version number, the kernel starts one of them. The one which starts is the first one discovered in the process binaries array.

    Once a process is determined to be runnable based on credentials and uniqueness, the process is loaded into a slot in the processes array. At this point the process will be run.
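
As a concrete illustration of the rule in step 3, the following is a minimal sketch (not the kernel's implementation) of the boot-time selection decision for one candidate process binary. BinaryInfo is a hypothetical record carrying only the Program Header version and the binary's position in the process binaries array.

struct BinaryInfo {
    index: usize,   // position in the process binaries array
    version: u32,   // version field from the Program Header
}

// Returns true if `candidate` should be started, given the process binaries
// it collides with (i.e., shares an Application Identifier or Short ID with).
fn should_start(candidate: &BinaryInfo, colliding: &[BinaryInfo]) -> bool {
    colliding.iter().all(|other| {
        // A strictly higher version wins; on a tie, the binary discovered
        // first in the process binaries array wins.
        candidate.version > other.version
            || (candidate.version == other.version && candidate.index < other.index)
    })
}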

Once a Tock system is running, management interfaces may change the set of running processes from those which the boot sequence selected. For example, the process console might terminate a process so that it can run a different process with the same Short ID and a lower Userspace Binary version number (a rollback). The kernel maintains the invariant that every running process has an Application Identifier and a Short ID that are unique among running processes.

5 Credentials and Version in Tock Binary Format Objects

This section describes the format and semantics of Program Headers and Credentials Footers.

Application Credentials are usually stored in a TBF Object, along with the Userspace Binary they are associated with. They are usually stored as footers (after the TBF header and Userspace Binary) to simplify computing integrity values such as checksums or hashes. This requires that TBF Objects have a TBF header that specifies where the application binary ends and the footers begin, information which the TbfHeaderV2Main header (the Main Header) does not include. Including Application Credentials in a TBF Object therefore requires using an alternative TbfHeaderV2Program header (the Program Header), which specifies where the footers begin.

The Tock process loading algorithm uses version numbers when deciding which processes with the same Application Identifier to run. Version numbers are stored in a TBF Object in the Version field of a TBF Program Header.

5.1 Program Header

The Program Header is similar to the Main Header, in that it specifies the offset of the entry function of the executable and memory parameters. It adds one field, binary_end_offset, which indicates the offset at which the Userspace Binary ends within the TBF object. The space between this offset and the end of the TBF object is reserved for footers.

This is the format of a Program Header:

0             2             4             6             8
+-------------+-------------+---------------------------+
| Type (9)    | Length (20) | init_fn_offset            |
+-------------+-------------+---------------------------+
| protected_size            | min_ram_size              |
+---------------------------+---------------------------+
| binary_end_offset         | version                   |
+---------------------------+---------------------------+

It is represented in the Tock kernel with this Rust structure:

#![allow(unused)]
fn main() {
pub struct TbfHeaderV2Program {
    init_fn_offset: u32,
    protected_size: u32,
    minimum_ram_size: u32,
    binary_end_offset: u32,
    version: u32,
}
}

A TBF object MUST NOT have more than one Program Header. If a TBF Object has both a Program Header and a Main Header, the kernel's policy decides which is used. For example, older kernels that do not understand a Program Header may use the Main Header, while newer kernels may choose the Program Header.

5.2 Credentials Footer

To support credentials, the Tock Binary Format has a TbfFooterV2Credentials TLV. This TLV is variable length and has two fields: a 32-bit value specifying the format of the credentials and a variable length data field. The format field defines the format and size of the data field. Each value of the format field except Reserved MUST have a fixed data size and format. This is the format of a Credentials Footer:

0             2             4                           8
+-------------+-------------+---------------------------+
| Type (128)  | Length      | format                    |
+-------------+-------------+---------------------------+
| data                      |
+-------------+--------...--+

It is represented in the Tock kernel with this structure:

#![allow(unused)]
fn main() {
pub struct TbfFooterV2Credentials {
    format: TbfFooterV2CredentialsType,
    data: &[u8],
}
}

Which types of credentials a Credentials Checking Policy supports are kernel-specific. For example, a kernel that only accepts TBF Objects signed with a particular 4096-bit RSA key can support only those credentials, while an open research system might support none at all. Because the length field specifies the length of a given credential, not understanding a particular credentials type does not prevent parsing others.
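
As an illustration of this property, the following sketch advances from one footer to the next using only the Type/Length prefix, assuming the usual little-endian TBF field encoding; the exact parsing code in the kernel may differ.

// Given the byte offset of one footer within the footer region, return the
// offset of the next footer, skipping the current one based only on its
// 2-byte Type and 2-byte Length fields (Length counts the bytes that follow).
fn next_footer_offset(footers: &[u8], current: usize) -> Option<usize> {
    let len = u16::from_le_bytes([
        *footers.get(current + 2)?,
        *footers.get(current + 3)?,
    ]) as usize;
    Some(current + 4 + len)
}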

5.3 Integrity Region

TbfFooterV2Credentials follow the compiled app binary in a TBF object. If a TbfFooterV2Credentials footer includes a cryptographic hash, signature, or other value to check the integrity of a process binary, this value MUST be computed over the TBF Header and Userspace Binary, from the start of the TBF object until binary_end_offset. This region is called the integrity region. Computing an integrity value in a Credentials Footer MUST NOT include the contents of Footers. If new metadata associated with an application binary needs to be covered by integrity, it MUST be a Header. If new metadata associated with an application binary needs to not be covered by integrity, it MUST be a Footer.

The integrity region extends from the start of the TBF Object to the location indicated by the binary_end_offset field in the Program Header. The size of the integrity region slice is therefore equal to binary_end_offset.
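
Expressed as code, this definition is simply a prefix slice of the TBF Object:

// The integrity region is the prefix of the TBF Object up to binary_end_offset.
fn integrity_region(tbf_object: &[u8], binary_end_offset: usize) -> &[u8] {
    &tbf_object[..binary_end_offset]
}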

6 Credentials Checking Policy: the AppCredentialsPolicy trait

The AppCredentialsPolicy trait defines the interface that implements the Credentials Checking Policy of the Process Checker: it accepts, passes on, or rejects Application Credentials. When a Tock board asks the kernel to load processes, it passes a reference to an AppCredentialsPolicy, which the kernel uses to check credentials. An implementer of AppCredentialsPolicy sets the security policy of Userspace Binary loading by deciding which types of credentials, and which credentials, are acceptable and which are rejected.

#![allow(unused)]
fn main() {
pub enum CheckResult {
    Accept(Option<usize>),
    Pass,
    Reject
}

pub trait Client<'a> {
    fn check_done(&self,
                  result: Result<CheckResult, ErrorCode>,
                  credentials: TbfFooterV2Credentials,
                  integrity_region: &'a [u8]);
}

pub trait AppCredentialsPolicy<'a> {
    fn set_client(&self, client: &'a dyn Client<'a>);
    fn require_credentials(&self) -> bool;
    fn check_credentials(&self,
                         credentials: TbfFooterV2Credentials,
                         integrity_region: &'a [u8]) ->
        Result<(), (ErrorCode, TbfFooterV2Credentials, &'a [u8])>;
}
}

If the kernel has been instructed to check credentials of Userspace Binaries, after it successfully parses a Userspace Binary it checks the credentials of the process binary.

To check the integrity of a process, the kernel scans the footers in order, starting at the beginning of that process's footer region. At each TbfFooterV2Credentials footer it encounters, the kernel calls check_credentials on the provided AppCredentialsPolicy. If check_credentials returns CheckResult::Accept, the kernel stops processing credentials and stores the process binary in the process binaries array. When an AppCredentialsPolicy accepts a credential, it may include an opaque usize value. This value is stored along with the accepted credential and allows the AppCredentialsPolicy to share information about the accepted credential. For example, if the AppCredentialsPolicy is checking signatures, the opaque value may identify which key verified the signature. This information may be useful when assigning ShortIds.

If the AppCredentialsPolicy returns CheckResult::Reject, the kernel stops processing credentials and does not load the process binary.

If the AppCredentialsPolicy returns CheckResult::Pass, the kernel tries the next TbfFooterV2Credentials, if there is one. If the kernel reaches the end of the TBF Footers (or if there is a Main Header and so no Footers) without encountering a Reject or Accept result, it calls require_credentials to ask the AppCredentialsPolicy what the default behavior is. If require_credentials returns true, the kernel does not load the process binary. If require_credentials returns false, the kernel loads the process binary into the process binaries array. If a process binary has no TbfFooterV2Credentials footers then there will be no Accept or Reject results and require_credentials defines whether the Userspace Binary is runnable.
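
The scanning logic above can be summarized with the following simplified, synchronous sketch. The real kernel flow is asynchronous (results arrive through check_done), the opaque usize carried by Accept is omitted, and Credential and SimplePolicy are hypothetical stand-ins for the kernel types.

enum SimpleCheckResult {
    Accept,
    Pass,
    Reject,
}

struct Credential; // stand-in for TbfFooterV2Credentials

trait SimplePolicy {
    fn check(&self, credential: &Credential) -> SimpleCheckResult;
    fn require_credentials(&self) -> bool;
}

fn is_runnable(footers: &[Credential], policy: &dyn SimplePolicy) -> bool {
    for footer in footers {
        match policy.check(footer) {
            SimpleCheckResult::Accept => return true,  // stop: load the process binary
            SimpleCheckResult::Reject => return false, // stop: do not load it
            SimpleCheckResult::Pass => continue,       // try the next footer
        }
    }
    // No footer was accepted or rejected: the default depends on whether
    // credentials are required at all.
    !policy.require_credentials()
}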

The integrity_region argument to check_credentials is a reference to the integrity region of the process binary.

7 Identifier Policy: the AppUniqueness trait

The AppUniqueness trait defines the API the Process Checker provides to decide whether two processes have the same Application Identifier or Short ID. An implementer of AppUniqueness implements the different_identifier methods, which perform pairwise comparisons of process binaries and processes.

#![allow(unused)]
fn main() {
pub trait AppUniqueness {
  // Returns true if the two process binaries have different application
  // identifiers.
  fn different_identifier(&self,
                          process_a: &ProcessBinary,
                          process_b: &ProcessBinary) -> bool;

  // Compares a process binary against an already loaded process.
  fn different_identifier_process(&self,
                                  process_a: &ProcessBinary,
                                  process_b: &dyn Process) -> bool;

  // Compares two already loaded processes.
  fn different_identifier_processes(&self,
                                    process_a: &dyn Process,
                                    process_b: &dyn Process) -> bool;
}
}

This interface encapsulates the methods by which a module assigns or calculates application identifiers. Because process binaries must be compared both to other process binaries and to already loaded processes, there are multiple versions of the different_identifier method to cover each case.

8 Short IDs and the Compress trait

While TbfFooterV2Credentials often define the identity and credentials of an application, they are large data structures that are costly to store in RAM. When parts of the kernel wish to apply security or access policies based on Application Identifiers, they need a concise way to represent these identifiers. Requiring policies to be encoded in terms of raw Application Identifiers can be extremely costly: a table, for example, that says that only Applications signed with a particular 4096-bit RSA key can access certain system calls requires storing the whole 4096-bit key. If there are multiple such security policies throughout the kernel, each must store this information.

The Compress trait provides a mechanism to map an Application Identifier to a small (32-bit) integer called a Short ID. Short IDs can be used throughout the kernel as an identifier of an Application.

For example, suppose that a device wants to grant access to all Userspace Binaries signed by a certain 3072-bit RSA key K and has no other security policies. The Credentials Checking Policy only accepts 3072-bit RSA credentials with key K. The Compress trait implementation assigns a Short ID based on a string match with the process package name, with certain names receiving particular Short IDs. Access control systems within the kernel can define their policies in terms of these identifiers, such that they can check access by comparing 32-bit integers rather than 384-byte keys.

Short IDs support the concept of a "Locally Unique" identifier by having a special LocallyUnique value. All tests for equality with ShortId::LocallyUnique return false.

8.1 Short ID Properties and Examples

Given a particular combination of deterministic Identifier Policy and Credentials Checking Policy, Short IDs have two requirements. They

  1. MUST be unique across running processes,
  2. MUST be consistent across all running instances of an Application on Tock systems.

Short IDs are locally unique for three reasons. First, it simplifies process management and naming: a particular Short ID uniquely identifies a running process. Second, it ensures that resources bound to an application identifier (such as non-volatile storage) do not have to handle concurrent accesses from multiple processes. Finally, generally one does not want two copies of the same Application running: they can create conflicting responses and behaviors.

These two requirements restrict the set of possible combinations of Credentials Checking Policy and Identifier Policy. For example, a Short ID cannot be an incrementing counter; it must be deterministically derived from the Application Identifier.

A basic challenge that arises with Short IDs is that they are a form of compression. In the ideal case, Short IDs would have two additional properties:

  • Different Application Identifiers map to different Short IDs, and
  • All Application Identifiers have a concrete Short ID that identifies the Application.

Unfortunately, it is not possible to satisfy both of these properties simultaneously. This is because Short IDs potentially compress Application Identifiers. Consider, for example, a system where the Application Identifier is the public key in a 4096-bit RSA credential. Short IDs are 32 bits, but there are more than 2^32 4096-bit RSA keys. If every RSA key receives a different Short ID, and that Short ID is always the same, after 2^32 keys the Short ID space is exhausted.

Every algorithm to map Application Identifiers to Short IDs therefore sacrifices one of these two properties:

  • Different Application Identifiers can map to the same Short ID: An Identifier Policy with this property is one that uses string names as Global Application Identifiers and calculates the Short ID of a process to be the checksum (or hash) of the string name. Two different names can checksum or hash to the same value. These collisions, however, can be acceptable if a developer is willing to pick string names that do not collide or change them when they do. A research or prototyping system might use this Identifier Policy.
  • Some Application Identifiers do not receive concrete Short IDs: An Identifier Policy with this property is one that uses public keys in signature credentials as Application Identifiers and has a set of public keys it knows and trusts. It maps these known keys to a small set of Short IDs (e.g., 1 through N). The system may run Userspace Binaries signed by other keys, but assigns them a Locally Unique Application Identifier, which results in a Locally Unique Short ID.

8.2 Example Short ID use cases

Here are three example use cases of Short IDs.

8.2.1 Use Case 1: Anonymous Applications

There are many Tock systems that do not particularly care about the identity of Applications. They do not have security policies, or track Application Identifiers. A prototyping system whose Credentials Checking Policy accepts all TBF Objects regardless of Application Credentials is an example of such a system. At boot, it scans the set of TBF Objects in application flash, trying to load and run each one until it runs out of resources (RAM, process slots). Applications cannot store data they expect to persist across reboots. Because the Tock kernel does not care about the identity of Applications, it has no security policies for limiting access to functionality or resources (e.g., system call filters).

In this use case, the Credentials Checking Policy accepts all correctly formatted TBF Objects and the Identifier Policy assigns every process a Locally Unique Identifier and a Locally Unique Short ID.

8.2.2 Use Case 2: U2F Application

In this use case, Tock needs to run a Universal 2nd Factor Authentication (U2F) application. This Application needs to store a private key in flash. No other Application should be able to access this key. The Tock kernel also restricts certain system calls to only the U2F Application, such as invoking cryptographic accelerators. Finally, the U2F Application needs a consistent identity over reboots of its Userspace Binary, the kernel, and upgrades of the Application with new versions (and Userspace Binaries).

In this use case, the Application Identifier is a Global Identifier. To establish the authenticity and integrity of the U2F Application, the Credential Checking Policy requires that an Application has a valid 4096-bit RSA credential. The system assumes that each Application has its own public-private key pair. While the system will load and run any process whose Userspace Binary has a valid 4096-bit RSA credential, it only gives special permissions and access to the U2F Application.

The Identifier Policy defines the Application Identifier of a process to depend on the public key of its 4096-bit RSA credential. If it is the key known to belong to the U2F Application, the Application Identifier is the key. If the key is not recognized, the Application Identifier is a Locally Unique Identifier. The Short ID of the U2F Application is 1 and the Short ID of all other Applications is Locally Unique.

8.2.3 Use Case 3: Application Isolation

In this use case, Tock needs to support multiple Applications that can read and write local flash. Each Application has its own flash storage, and Tock isolates their flash storage from one another. An Application cannot access the flash of another Application. However, this is a development system or a system which does not require confidentiality. While there is storage isolation between Applications, this is for debuggability, ease of composition, and simplicity, not to meet security requirements. The Credentials Checking Policy is permissive and tries to run every properly formatted TBF Object.

In this use case, the Application Identifier is a Global Identifier. It is the string name of the TBF Object as encoded in a TBF Header. The Short ID is a one's complement checksum of the string name.

If a developer installs two TBF Objects with the same string name, the Tock kernel thinks they are the same Application and only runs one of them. If a developer accidentally uses two different string names that have the same checksum (e.g. both "dog" and "mal" checksum to 0x13a), the Tock kernel also only runs one of them. Some local modifications to tockloader check for these collisions and prevent the developer from accidentally installing colliding Applications.

Note that in this case it is possible that the "mal" application could read data stored by the "dog" application.
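
As a toy illustration of this collision, a simple byte-sum checksum (standing in for whatever checksum the Identifier Policy actually uses) maps both names to the same value:

fn name_checksum(name: &str) -> u32 {
    // Sum the bytes of the package name.
    name.bytes().map(u32::from).sum()
}

// name_checksum("dog") == 0x13a
// name_checksum("mal") == 0x13a  -> same Short ID, so only one of them runs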

8.3 Short ID Format

The 32-bit value of a Fixed Short ID MUST be non-zero. ShortId uses core::num::NonZeroU32 so that a ShortId can be 32 bits in size, with 0 reserved for LocallyUnique.

#![allow(unused)]
fn main() {
#[derive(Clone, Copy)]
enum ShortId {
    LocallyUnique,
    Fixed(core::num::NonZeroU32),
}

pub trait Compress {
    fn to_short_id(&self, process: &ProcessBinary) -> ShortId;
}
}

Generally, the Process Checker that implements AppUniqueness also implements Compress. This allows it to share copies of public keys or other credentials that it uses to make decisions, reducing flash space dedicated to these constants. Doing so also makes it less likely that the two are inconsistent.
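
For example, the following is a minimal sketch of a Compress implementation in the spirit of the U2F use case above: one known package name receives a fixed, well-known Short ID and everything else is only locally unique. The package_name() accessor on ProcessBinary is a hypothetical stand-in for however the implementation obtains the TBF package name.

use core::num::NonZeroU32;

struct ExampleIdPolicy;

impl Compress for ExampleIdPolicy {
    fn to_short_id(&self, process: &ProcessBinary) -> ShortId {
        match process.package_name() {
            // The privileged application gets a well-known, non-zero value.
            Some("u2f") => ShortId::Fixed(NonZeroU32::new(1).unwrap()),
            // Everything else is only locally unique.
            _ => ShortId::LocallyUnique,
        }
    }
}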

8.4 Short ID Considerations

It is RECOMMENDED that the values used in the Fixed variant of ShortId be completely hidden from modules that use ShortId to manage security policies. Such modules should depend on obtaining ShortId values through known names or methods. For example, the implementation of an Identifier Policy can define a method, privileged_id, which returns the Short ID associated with special privileges. Kernel modules which want to give these processes extra permissions can check whether the ShortId associated with a process matches the ShortId returned from privileged_id. Alternatively, when they are initialized, they can be passed a slice or array of allowed ShortIds; system initialization generates this set once and passes it into the module so it does not need to maintain a reference to the structure implementing Compress.

The exact Fixed values used are an internal implementation decision for the implementer of Compress and the Identifier Policy. Hiding them cleanly decouples modules through APIs and does not leak internal state.

9 The AppIdPolicy Trait

The AppIdPolicy trait is a composite trait that combines AppUniqueness and Compress into a single trait so it can be passed as a single reference.

#![allow(unused)]
fn main() {
pub trait AppIdPolicy: AppUniqueness + Compress {}
impl<T: AppUniqueness + Compress> AppIdPolicy for T {}
}

10 Capsules

Capsules can use AppIDs to restrict access to only certain processes or to partition a resource among processes. Using AppIDs makes these assignments persistent across reboots and application updates.

For example, consider a display that is divided such that different applications are given access to different regions of the display. These assignments should be persistent to maintain continuity for the user looking at the display, even if applications are added or removed.

This is a very incomplete example but it shows the general use of ShortId within a capsule. Note that accessing a ShortId is done using ProcessId.

#![allow(unused)]
fn main() {
pub struct AppScreenRegion {
    app_id: kernel::process::ShortId,
    frame: Frame,
}

pub struct ScreenShared<'a, S: hil::screen::Screen<'a>> {
    screen: &'a S,
    apps: Grant<App, UpcallCount<1>, AllowRoCount<{ ro_allow::COUNT }>, AllowRwCount<0>>,
    apps_regions: &'a [AppScreenRegion],
}

impl<'a, S: hil::screen::Screen<'a>> ScreenShared<'a, S> {
    fn get_app_screen_region_frame(&self, process_id: ProcessId) -> Option<Frame> {
        // Check if a process with that short ID has an allocated frame.
        let short_id = process_id.short_app_id();

        for app_screen_region in self.apps_regions {
            if short_id == app_screen_region.app_id {
                return Some(app_screen_region.frame);
            }
        }
        None
    }

    fn write_screen(&self, process_id: ProcessId) {
      let screen_region = self.get_app_screen_region_frame(process_id);
      self.screen.write(screen_region);
    }
}

impl<'a, S: hil::screen::Screen<'a>> SyscallDriver for ScreenShared<'a, S> {
    fn command(&self, command_num: usize, _: usize, _: usize, process_id: ProcessId) -> CommandReturn {
        match command_num {
            // Driver existence check
            0 => CommandReturn::success(),

            // Write
            1 => {
                self.write_screen(process_id);
                CommandReturn::success()
            }

            _ => CommandReturn::failure(ErrorCode::NOSUPPORT),
        }
    }
}
}

11 Implementation Considerations

The methods used to generate or calculate Application Identifiers and Short IDs (the AppUniqueness and Compress methods) are called during process loading and must be synchronous: they return their results directly rather than through callbacks.

12 Authors' Addresses

Philip Levis
409 Gates Hall
Stanford University
Stanford, CA 94305
USA
pal@cs.stanford.edu

Johnathan Van Why <jrvanwhy@google.com>

Brad Campbell <bradjc@virginia.edu>

Digest HIL

TRD:
Working Group: Kernel
Type: Documentary
Status: Draft
Author: Alistair Francis, Philip Levis
Draft-Created: June 8, 2022
Draft-Modified: June 8, 2022
Draft-Version: 1
Draft-Discuss: tock-dev@googlegroups.com

Abstract

This document describes the hardware independent layer interface (HIL) for hash functions. A digest is the output of a hash function. It describes the Rust traits and other definitions for this service as well as the reasoning behind them. This document is in full compliance with TRD1. The HIL in this document also adheres to the rules in the HIL Design Guide, which requires all callbacks to be asynchronous -- even if they could be synchronous.

1 Introduction

A hash function takes a potentially large input and transforms it into a fixed-length value. Hash functions have many uses and so there are many types of hash functions with different properties (computational speed, memory requirements, output distributions). A digest is the output of a hash function. Generally, hash functions seek to produce digest values that are uniformly distributed over their space of possible values, "hashing" the input and mixing it up such that the distance between the digests of two similar input values seems randomly distributed.

Cryptographic hash functions are a class of hash functions which have two properties that make them useful for checking the integrity of data. First, they have collision resistance: it is difficult to find two messages, m1 and m2, such that hash(m1) = hash(m2). Second, they have pre-image resistance, such that given a digest d, it is difficult to find a message m such that hash(m) = d. SHA256 and SHA3 are example cryptographic hash functions that are commonly used today and believed to provide both collision resistance and pre-image resistance (Boneh and Shoup).

Message authentication codes (MACs) are a method for providing integrity when both the generator and checker share a secret. MACs are a distinct integrity mechanism from digests: they provide both integrity and authenticity that the message came from a certain sender (who holds the secret). Some MACs, such as HMAC, are built on top of hash functions.

This document describes Tock's traits and their semantics for computing digests in the Tock operating system. These traits can also be used for generating HMACs.

2 Adding Data to a Digest: DigestData and ClientData

A client adds data to a hash function's input with the DigestData trait and receives callbacks with the ClientData trait.

These traits support both mutable and immutable data. Most HIL traits in Tock support only mutable data, because it is assumed the data is in RAM and passing it without the mut qualifier in a split-phase operation can discard its mutability (see Rule 5 in TRD2). Digest supports immutable data because many services need to compute digests over large, read-only data in flash. One example of this is the kernel's process loader, which needs to check that process images are not corrupted. Because digests are computationally inexpensive, copying the data from flash to RAM in order to compute a digest is a large overhead. Furthermore, the data input can be large (tens or hundreds of kilobytes). Therefore DigestData and ClientData support both mutable and immutable inputs.

Clients provide input to DigestData through the SubSlice and SubSliceMut types. These allow a client to ask a digest engine to compute a digest over a subset of their data, e.g. to exclude the area where the digest that will be compared against is stored. These types have a source slice and maintain an active range over that slice. The digest will be computed only over the active range, rather than the entire slice.

#![allow(unused)]
fn main() {
pub trait DigestData<'a, const L: usize> {
    fn set_data_client(&'a self, client: &'a dyn ClientData<'a, L>);
    fn add_data(&self, data: SubSlice<'static, u8>) 
       -> Result<(), (ErrorCode, SubSlice<'static, u8>)>;
    fn add_mut_data(&self, data: SubSliceMut<'static, u8>)
       -> Result<(), (ErrorCode, SubSliceMut<'static, u8>)>;
    fn clear_data(&self);
}
}

A successful call to add_data or add_mut_data will add all of the data in the active range of the leasable buffer as input to the hash function. A successful call is one which returns Ok(()) and whose completion event passes Ok(()). If a client needs to compute a hash over several non-contiguous regions of a slice, or multiple slices, it can call these methods multiple times.

There may only be one outstanding add_data or add_mut_data operation at any time. If either add_data or add_mut_data returns Ok(()), then all subsequent calls to add_data or add_mut_data MUST return Err((ErrorCode::BUSY, ...)) until a completion callback is delivered through ClientData.

#![allow(unused)]
fn main() {
pub trait ClientData<'a, const L: usize> {
    fn add_data_done(&'a self, result: Result<(), ErrorCode>, data: SubSlice<'static, u8>);
    fn add_mut_data_done(
        &'a self,
        result: Result<(), ErrorCode>,
        data: SubSliceMut<'static, u8>,
    );
}
}

The data parameters of add_data_done and add_mut_data_done indicate what data was added and what remains to be added to the digest. If either callback has a result value of Ok(()), then the active region of data MUST be zero length and all of the data in the active region passed through the corresponding call MUST have been added to the digest.

A call to DigestData::clear_data() terminates the current digest computation and clears out all internal state to start a new one. If there is an outstanding add_data or add_mut_data operation when clear_data() is called, the digest engine MUST issue the corresponding callback with a result of Err(ErrorCode::CANCEL).

A digest engine MUST accept multiple calls to add_data and add_mut_data. Each call appends to the data over which the digest is computed.

3 Computing and Verification: DigestHash, DigestVerify, ClientHash, and ClientVerify

Once all of the data has been added as the input to a digest, a client can either compute the digest or ask the digest engine to compare its computed digest with a known value (verify). These traits have a generic parameter L which defines the length of the digest in bytes. A SHA256 digest engine, for example, has an L of 32.

#![allow(unused)]
fn main() {
pub trait DigestHash<'a, const L: usize> {
    fn set_hash_client(&'a self, client: &'a dyn ClientHash<'a, L>);
    fn run(&'a self, digest: &'static mut [u8; L])
        -> Result<(), (ErrorCode, &'static mut [u8; L])>;
}

pub trait ClientHash<'a, const L: usize> {
    fn hash_done(&'a self, result: Result<(), ErrorCode>, digest: &'static mut [u8; L]);
}

pub trait DigestVerify<'a, const L: usize> {
    fn set_verify_client(&'a self, client: &'a dyn ClientVerify<'a, L>);
    fn verify(&'a self, compare: &'static mut [u8; L])
	    -> Result<(), (ErrorCode, &'static mut [u8; L])>;
}

pub trait ClientVerify<'a, const L: usize> {
    fn verification_done(&'a self, result: Result<bool, ErrorCode>, compare: &'static mut [u8; L]);
}
}

Calls to DigestHash::run and DigestVerify::verify perform the hash function on all of the data that has been added with calls to add_data and add_mut_data. If there is an outstanding call to add_data, add_mut_data, run, or verify, these methods MUST return Err((ErrorCode::BUSY, ...)).

The ClientHash::hash_done callback returns the computed digest stored in the digest slice. If the result argument is Err((...)), the digest slice may store any values. If the result argument is Ok(()) the digest slice MUST store the computed digest.
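
As a hedged sketch of a hash client, the following shows a SHA-256 client (L = 32) that reclaims the digest buffer in hash_done. It assumes Tock's TakeCell (from kernel::utilities::cells) for interior mutability; the rest of the plumbing (registering the client, calling run) is omitted.

use kernel::utilities::cells::TakeCell;

struct Sha256HashClient {
    digest_buf: TakeCell<'static, [u8; 32]>,
}

impl<'a> ClientHash<'a, 32> for Sha256HashClient {
    fn hash_done(&'a self, result: Result<(), ErrorCode>, digest: &'static mut [u8; 32]) {
        if result.is_ok() {
            // `digest` now holds the computed 32-byte SHA-256 value.
        }
        // Reclaim the buffer so it can be reused for the next run() call.
        let _ = self.digest_buf.replace(digest);
    }
}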

DigestVerify::verify takes an existing digest as its compare parameter. It triggers the digest engine to compute the digest, then compares the computed value with what was passed in compare. If the computed and provided values match, then verification_done passes Ok(true); if they do not match, it passes Ok(false). An Err result indicates that there was an error in computing the digest.

Calling either DigestHash::run or DigestVerify::verify completes the digest calculation, returning the digest engine to an idle state for the next computation.

4 Composite Traits

The Digest HIL provides many composite traits, so that structures which implement multiple traits can be passed around as a single reference. The ClientDataHash trait is for a client that implements both ClientData and ClientHash. The ClientDataVerify trait is for a client that implements both ClientData and ClientVerify. The Client trait is for a client that implements ClientData, ClientHash, and ClientVerify.

#![allow(unused)]
fn main() {
pub trait ClientDataHash<'a, const L: usize>: ClientData<'a, L> + ClientHash<'a, L> {}
pub trait ClientDataVerify<'a, const L: usize>: ClientData<'a, L> + ClientVerify<'a, L> {}
pub trait Client<'a, const L: usize>:
    ClientData<'a, L> + ClientHash<'a, L> + ClientVerify<'a, L> {}
}

The DigestDataHash trait is for a structure that implements both DigestData and DigestHash. The DigestDataVerify trait is for a structure that implements both DigestData and DigestVerify. The Digest trait is for a structure that implements DigestData, DigestHash, and DigestVerify. These each add an additional method, set_client, which allows the implementation to store the corresponding client as a single reference and use it for all of the relevant client callbacks (e.g., add_data_done, add_mut_data_done, hash_done, and verification_done). A digest implementation that implements set_client MAY choose to not implement the individual client set methods for the different traits (e.g., DigestData::set_data_client); if it does so, each of these client set methods MUST be implemented with unimplemented!().

#![allow(unused)]
fn main() {
pub trait DigestDataHash<'a, const L: usize>: DigestData<'a, L> + DigestHash<'a, L> {
    /// Set the client instance which will receive `hash_done()` and
    /// `add_data_done()` callbacks.
    fn set_client(&'a self, client: &'a dyn ClientDataHash<L>);
}

pub trait DigestDataVerify<'a, const L: usize>: DigestData<'a, L> + DigestVerify<'a, L> {
    /// Set the client instance which will receive `verify_done()` and
    /// `add_data_done()` callbacks.
    fn set_client(&'a self, client: &'a dyn ClientDataVerify<L>);
}

pub trait Digest<'a, const L: usize>:
    DigestData<'a, L> + DigestHash<'a, L> + DigestVerify<'a, L>
{
    /// Set the client instance which will receive `hash_done()`,
    /// `add_data_done()` and `verification_done()` callbacks.
    fn set_client(&'a self, client: &'a dyn Client<'a, L>);
}
}

5 Configuration

Digest engines can often operate in multiple modes, supporting several different hash algorithms and digest sizes. Configuring a digest engine occurs out-of-band from adding data and computing digests, through separate traits. Each digest algorithm is described by a separate trait. This allows compile-time checking that a given digest engine supports the required algorithm. For example, a digest engine that can compute a SHA512 digest implements the Sha512 trait:

#![allow(unused)]
fn main() {
pub trait Sha512 {
    /// Call before Digest::run() to perform Sha512
    fn set_mode_sha512(&self) -> Result<(), ErrorCode>;
}
}

The Digest HIL defines seven standard Digest traits:

  • Sha224
  • Sha256
  • Sha384
  • Sha512
  • HmacSha256
  • HmacSha384
  • HmacSha512

The HMAC configuration methods take a secret key, which is used in the HMAC algorithm. For example,

#![allow(unused)]
fn main() {
pub trait HmacSha384 {
    /// Call before `Digest::run()` to perform HMACSha384
    ///
    /// The key used for the HMAC is passed to this function.
    fn set_mode_hmacsha384(&self, key: &[u8]) -> Result<(), ErrorCode>;
}
}

Configuration methods MUST be called before the first call to add_data or add_mut_data.
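
The ordering is sketched below for a hypothetical digest engine that implements the Sha512 configuration trait along with the data and hash traits (L = 64 for SHA-512); error handling and the callback wiring are omitted.

fn start_sha512<D>(engine: &D, data: SubSliceMut<'static, u8>)
where
    D: Sha512 + DigestData<'static, 64> + DigestHash<'static, 64>,
{
    // 1. Select the algorithm before the first add_data/add_mut_data call.
    let _ = engine.set_mode_sha512();
    // 2. Add the input; completion arrives via ClientData::add_mut_data_done.
    let _ = engine.add_mut_data(data);
    // 3. Once all data has been added (i.e., in the final add_mut_data_done
    //    callback), call engine.run(digest_buffer) to receive the 64-byte
    //    digest through ClientHash::hash_done.
}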

6 Capsules

There are five standard Tock capsules for digests:

  1. capsules::hmac provides a system call interface to a digest engine that supports Digest, HmacSha256, HmacSha384, and HmacSha512.
  2. capsules::sha provides a system call interface to a digest engine that supports Digest, Sha256, Sha384, and Sha512.
  3. capsules::virtual_hmac virtualizes an HMAC engine, allowing multiple clients to share it through queueing. It requires a digest engine that supports Digest, HmacSha256, HmacSha384, and HmacSha512.
  4. capsules::virtual_sha virtualizes a SHA engine, allowing multiple clients to share it through queueing. It requires a digest engine that supports Digest, Sha256, Sha384, and Sha512.
  5. capsules::virtual_digest virtualizes a combined SHA/HMAC engine, allowing multiple clients to share it through queueing. It requires a digest engine that supports Digest, HmacSha256, HmacSha384, HmacSha512, Sha256, Sha384, and Sha512.

7 Authors' Addresses

Alistair Francis
alistair.francis@wdc.com

Philip Levis
409 Gates Hall
Stanford University
Stanford, CA 94305
USA
pal@cs.stanford.edu

Public and Private Encryption Keys

TRD: 1
Working Group: Kernel
Type: Documentary
Status: Draft
Authors: Alistair Francis
Draft-Created: 11 Oct, 2021
Draft-Modified: 11 Oct, 2021
Draft-Version: 1

Abstract

This document describes the Tock public/private key implementation, documenting the design process and the final outcome. It focuses on the original RSA key support, but applies to all public/private keys.

1 Introduction

The goal of public/private key support in Tock is to allow the kernel and apps to use public/private key crypto operations. It is expected that these are used before loading applications to check signatures, as well as by the kernel and/or apps at runtime.

The goal is to support three main use cases for key storage:

  1. Keys stored on flash. The keys are stored at some address in read-only flash and we want to "import" them and use them in the kernel.
  2. The app specifies a key. A userspace application obtains a key and passes it to the kernel to use for crypto operations.
  3. We generate a key pair at runtime.

2 Design Considerations

The design needs to integrate well with the rest of the Tock kernel and capsule design. Beyond that, the design should satisfy the following requirements.

2.1 Low memory overhead

Public/private keys can be very large. For example, a 4096-bit RSA key is 512 bytes long. That means that to store a public/private key pair in RAM we need at least 1024 bytes (1K) of memory, just for one key pair. That doesn't take into account potential post-quantum algorithms that can have even larger keys.

Due to this the design should avoid copying keys into memory where not required. For example generating a new key pair will need to use memory, but reading existing keys from flash should avoid copying keys to memory.

2.2 Mutable and immutable buffers

As the implementation should support importing existing keys from flash or from userspace the design must allow for both mutable and immutable buffers.

3 Possible key structure implementations

Below is a list of possible implementations, along with the outcome of each design. For consistency, all designs below are for a 2048-bit RSA key/pair, but they could apply to any public/private key operations.

3.1 In memory buffers

Keys would be stored in a memory structure, similar to:

#![allow(unused)]
fn main() {
pub struct RSA2048Keys<'a> {
    modulus: [u8; 256],          // Also called n
    public_exponent: u32,        // Also called e
    private_exponent: [u8; 256], // Also called d
...
}
}

As mentioned in Section 2.1, this requires large in-memory buffers, even when using an existing key in flash. Due to that, this method will not be used.

3.2 TakeCell buffers

In order to avoid storing the keys in memory, the design can instead use TakeCell. This way, an existing key can be passed in as a buffer, while new keys can use a buffer created with static_init!().

#![allow(unused)]
fn main() {
pub struct RSA2048Keys<'a> {
    public_key: TakeCell<'static, u8>,
    private_key: TakeCell<'static, u8>,
...
}
}

For example, importing a key would look like this:

#![allow(unused)]
fn main() {
fn import_public_key(&mut self,
    public_key: &'static mut [u8],
) -> Result<(), (ErrorCode, &'static mut [u8])>
}

The problem with using TakeCell is that then the buffer must be mutable. This won't work with a read-only buffer stored in flash.

The design also can't use Cell and immutable buffers instead, as then the design doesn't work with mutable buffers, which are required for generating keys or interacting with userspace.

3.3 Mutable and Immutable buffers

Similar to above, this design uses interior mutability, but adds this enum

#![allow(unused)]
fn main() {
pub enum MutImutBuffer<'a, T> {
    Mutable(&'a mut [T]),
    Immutable(&'a [T]),
}
}

Then the key structure will look like

#![allow(unused)]
fn main() {
pub struct RSA2048Keys<'a> {
    public_key: OptionalCell<MutImutBuffer<'static, u8>>,
    private_key: OptionalCell<MutImutBuffer<'static, u8>>,
...
}
}

This is similar to 3.2, but allows either a mutable or immutable buffer.

For example to import a key the function would look like:

#![allow(unused)]
fn main() {
fn import_public_key(
    &'a self,
    public_key: MutImutBuffer<'static, u8>,
) -> Result<(), (ErrorCode, MutImutBuffer<'static, u8>)>;
}

This allows the design to use either a mutable or immutable buffer and doesn't have a high memory overhead.
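
For illustration, code that receives a MutImutBuffer can operate on the key bytes regardless of which variant it holds, for example:

// Return a read-only view of the key bytes, whichever variant was supplied.
fn key_bytes<'a>(key: &'a MutImutBuffer<'static, u8>) -> &'a [u8] {
    match key {
        MutImutBuffer::Mutable(buf) => &buf[..],
        MutImutBuffer::Immutable(buf) => &buf[..],
    }
}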

3.4 Read and Read/Write keys

Similar to 3.3, the other option is to have a read-only key and a read/write key and move the enum a level higher.

For example

#![allow(unused)]
fn main() {
pub struct RSA2048ReadOnlyKeys<'a> {
    public_key: OptionalCell<&'static [u8]>,
    private_key: OptionalCell<&'static [u8]>,
...
}

pub struct RSA2048ReadWriteKeys<'a> {
    public_key: TakeCell<'static, u8>,
    private_key: TakeCell<'static, u8>,
...
}

pub enum RSA2048Keys<'a> {
    Mutable(RSA2048ReadWriteKeys<'a>),
    Immutable(RSA2048ReadOnlyKeys<'a>),
}
}

This has the advantage that it's more obvious if a key is mutable or immutable. It has a large code duplication downside, though: there will be two almost identical implementations, one for RSA2048ReadOnlyKeys and one for RSA2048ReadWriteKeys.

On top of that there will also need to be two HILs, for example:

#![allow(unused)]
fn main() {
pub trait PubKeyReadWrite<'a> {
    fn import_public_key(&self,
        public_key: &'static mut [u8],
    ) -> Result<(), (ErrorCode, &'static mut [u8])>;
}

pub trait PubKeyReadOnly<'a> {
    fn import_public_key(&self,
        public_key: &'static [u8],
    ) -> Result<(), (ErrorCode, &'static [u8])>;
}
}

This has a complexity and code size downside compared to section 3.3, but can avoid confusion where a mutable buffer is required but not supplied.

4 Possible low level interface APIs

On top of the key structure implementation, there will also be a HIL that hardware implementations inside chips will implement.

This TRD is not trying to describe this API, so let's just assume this is one of the functions that is part of that HIL:

#![allow(unused)]
fn main() {
/// Calculate the exponent. That is calculate `message` ^ `exponent`
///
/// On completion the `exponent_done()` upcall will be scheduled.
fn exponent(
    &self,
    message: &'static mut [u8],
    exponent: T,
    result: &'static mut [u8],
) -> Result<
    (),
    (
        ErrorCode,
        &'static mut [u8],
        T,
        &'static mut [u8],
    ),
>;
}

This function takes the message buffer and calculates the exponent from the public or private key of type T and stores it in result.

The sections below describe what type T should be.

4.1 Mutable and Immutable buffers

See section 3.3 for the enum MutImutBuffer, which would be used like this:

#![allow(unused)]
fn main() {
/// Calculate the exponent. That is calculate `message` ^ `exponent`
///
/// On completion the `exponent_done()` upcall will be scheduled.
fn exponent(
    &self,
    message: &'static mut [u8],
    exponent: (MutImutBuffer<'static, u8>, Range<usize>),
    result: &'static mut [u8],
) -> Result<
    (),
    (
        ErrorCode,
        &'static mut [u8],
        MutImutBuffer<'static, u8>,
        &'static mut [u8],
    ),
>;
}

In this case the underlying API will take a 'static buffer that is either mutable or immutable. This is wrapped in the MutImutBuffer enum. We also specify a range of the buffer to be used.

This has the advantage that the hardware interfacing driver doesn't have to manage keys; instead it is just passed a buffer (wrapped in an enum). This is also similar to other Tock HILs.

The disadvantage is how to get the buffer before calling the above function.

This implementation requires that the layer above lose access to the buffer, with something like:

#![allow(unused)]
fn main() {
fn private_exponent(&'a self) -> Option<(MutImutBuffer<'static, u8>, Range<usize>)> {
    if self.private_key.is_some() {
        let len = PubPrivKey::len(self);
        Some((self.private_key.take().unwrap(), 0..len))
    } else {
        None
    }
}
}

This also requires a way to regain access to the buffer in the exponent() callback:

#![allow(unused)]
fn main() {
fn import_private_key(
    &self,
    private_key: MutImutBuffer<'static, u8>,
) -> Result<(), (ErrorCode, MutImutBuffer<'static, u8>)> {
    if private_key.len() != 256 {
        return Err((ErrorCode::SIZE, private_key));
    }

    self.private_key.replace(private_key);

    Ok(())
}
}

This option also requires the MutImutBuffer enum in order to work.

4.2 Keys

The other option is to pass the entire key to the low level API, for example something like:

#![allow(unused)]
fn main() {
/// Calculate the exponent. That is calculate message ^ exponent
///
/// On completion the `exponent_done()` upcall will be scheduled.
fn exponent(
    &self,
    message: &'static mut [u8],
    key: &'static mut dyn RsaPrivKey,
    result: &'static mut [u8],
) -> Result<
    (),
    (
        ErrorCode,
        &'static mut [u8],
        &'static mut dyn RsaPrivKey,
        &'static mut [u8],
    ),
>;
}

Using something like this in the HIL:

#![allow(unused)]
fn main() {
/// Runs the specified closure over the private exponent, if it exists.
/// The exponent is passed to the closure MSB first (big endian).
/// Returns `Some()` if the key exists and the closure was called,
/// otherwise returns `None`.
fn map_exponent(&self, closure: &dyn Fn(&[u8]) -> ()) -> Option<()>;
}

and an implementation similar to:

#![allow(unused)]
fn main() {
fn map_exponent(&self, closure: &dyn Fn(&[u8]) -> ()) -> Option<()> {
    if let Some(private_key) = self.private_key.take() {
        match private_key {
            MutImutBuffer::Mutable(ref buf) => {
                let _ = closure(buf);
            }
            MutImutBuffer::Immutable(buf) => {
                let _ = closure(buf);
            }
        }
        self.private_key.replace(private_key);
        Some(())
    } else {
        None
    }
}
}

Then the final implementation can use map() with this code:

#![allow(unused)]
fn main() {
key.map_exponent(&|buf| {
    // Do operations on the `buf` array
});
}

This has the advantage that accessing information from keys is not destructive. It does have the downside that hardware implementations in chips need to understand the key structure in order to access its values.

5 Final implementation

TODO once agreed upon

6 Author's Address

Alistair Francis
alistair.francis@wdc.com

Kernel 802.15.4 Radio HIL

TRD:
Working Group: Kernel
Type: Documentary
Status: Draft
Authors: Philip Levis
Draft-Created: Feb 14, 2017
Draft-Modified: Mar 20, 2017
Draft-Version: 2
Draft-Discuss: tock-dev@googlegroups.com

Abstract

This document describes the hardware independent layer interface (HIL) for an 802.15.4 radio in the Tock operating system kernel. It describes the Rust traits and other definitions for this service as well as the reasoning behind them. This document is in full compliance with TRD1.

1 Introduction

Wireless communication is an integral component of sensor networks and the Internet of Things (IoT). 802.15.4 is a low-power link layer that is well suited to ad-hoc and mesh networks. It underlies numerous network technologies, such as ZigBee, 6lowpan, and Thread, and there is a large body of research on how to use it for extremely robust and low-power networking. With a maximum frame size of 128 bytes, simple but effective coding to reduce packet losses, multiple addressing modes, AES-based cryptography, and synchronous link-layer acknowledgments, 802.15.4 is a flexible and efficient link layer for many applications and uses.

This document describes Tock's HIL for an 802.15.4 radio. The HIL is in the kernel crate, in module hil::radio. It provides five traits:

  • kernel::hil::radio::RadioControl: turn the radio on/off and configure it
  • kernel::hil::radio::Radio: send, receive and access packets
  • kernel::hil::radio::TxClient: handles callback when transmission completes
  • kernel::hil::radio::RxClient: handles callback when packet received
  • kernel::hil::radio::ConfigClient: handles callback when configuration changed

The rest of this document discusses each in turn.

2 Configuration constants and buffer management

To avoid extra buffers and memory copies, the radio stack requires that callers provide it with memory buffers that are larger than the maximum frame size it can send/receive. A caller provides a single, contiguous buffer of memory. The frame itself is at an offset within this buffer, and the data payload is at an offset from the beginning of the frame. The implementation section gives a detailed example of this layout for the RF233 radio.

Following this approach, the Radio HIL defines four constants:

  • kernel::hil::radio::HEADER_SIZE: the size of an 802.15.4 header,
  • kernel::hil::radio::MAX_PACKET_SIZE: the maximum frame size,
  • kernel::hil::radio::MAX_BUF_SIZE: the size buffer that must be provided to the radio, and
  • kernel::hil::radio::MIN_PACKET_SIZE: the smallest frame that can be received (typically HEADER_SIZE + 2 for an error-detecting CRC).

Note that MAX_BUF_SIZE can be larger (but not smaller) than MAX_PACKET_SIZE. A radio must be given receive buffers that are MAX_BUF_SIZE in order to ensure that it can receive maximum length packets.

3 RadioControl trait

The RadioControl trait provides functions to initialize an 802.15.4 radio, turn it on/off and configure it.

3.1 Changing radio power state

fn initialize(&self,
              spi_buf: &'static mut [u8],
              reg_write: &'static mut [u8],
              reg_read: &'static mut [u8])
              -> Result<(), ErrorCode>;
fn reset(&self) -> Result<(), ErrorCode>;
fn start(&self) -> Result<(), ErrorCode>;
fn stop(&self) -> Result<(), ErrorCode>;

fn is_on(&self) -> bool;
fn busy(&self) -> bool;
fn set_power_client(&self, client: &'static PowerClient);

The initialize function takes three buffers, which are required for the driver to be able to control the radio over an SPI bus. The first, spi_buf, MUST have length MAX_BUF_SIZE. This buffer is required so that the driver can interact over an SPI bus. An SPI bus usually requires both a transmit and a receive buffer: software writes out the TX buffer (the MOSI line) while it reads into the RX buffer (the MISO line). When a caller tries to transmit a packet buffer, the radio needs an SPI receive buffer to check the radio status. Similarly, when the stack receives a packet into a buffer, it needs an SPI transmit buffer to send the command to read from radio memory. The spi_buf buffer is purely internal; once configured, it MUST never be visible outside of the stack.

The reg_write and reg_read buffers are needed to read and write radio registers over the SPI bus. They are both 2 bytes long. These buffers are purely internal and MUST never be visible outside the stack.

The reset function resets the radio and configures its underlying hardware resources (GPIO pins, buses, etc.). reset MUST be called before calling start.

The start function transitions the radio into a state in which it can send and receive packets. It either returns FAIL because the radio cannot be started or Ok(()) if it will be started. If the radio is already started (or in the process of starting), start MUST return FAIL. I.e., if software calls start twice, the second call would return FAIL. Software can tell when the radio has completed initialization by calling is_on or via the PowerClient callback.

The stop function returns the radio to a low-power state. The function returns Ok(()) if the radio will transition to a low-power state and FAIL if it will not. Software can tell when the radio has turned off by calling is_on or via the PowerClient callback.

The is_on function returns whether the radio is in a powered-on state. If the radio is on and can send/receive packets, it MUST return true. If the radio cannot send/receive packets, it MUST return false.

The busy function returns whether the radio is currently busy. It MUST return false if the radio is currently idle and can accept reconfiguration or packet transmission requests. If it is busy and cannot accept reconfiguration or packet transmission requests, it MUST return true.

The set_power_client function allows a client to register a callback for when the radio's power state changes.

3.2 Configuring the radio

Re-configuring an 802.15.4 radio is an asynchronous operation. Calling functions to change the radio's configuration does not actually reconfigure it. Instead, those configuration changes must be committed by calling config_commit. The radio issues a callback when the reconfiguration completes. The object to receive the callback is set by calling set_config_client. If config_commit returns Ok(()) and there is a configuration client installed, the radio MUST issue a config_done callback. config_commit MAY return OFF if the radio is off, or may return Ok(()) and hold the configuration commit until the radio is turned on again.

fn set_config_client(&self, client: &'static ConfigClient);
fn config_commit(&self) -> Result<(), ErrorCode>;

A caller can configure the 16-bit short address, 64-bit full address, PAN (personal area network) identifier, transmit power, and channel. The PAN identifier and short node address are both 16-bit values. Channel is an integer in the range 11-26 (the 802.15.4 channel numbers). The channel is encoded in the radio::RadioChannel enum, ensuring the channel value resides in the valid range.

fn config_address(&self) -> u16;
fn config_address_long(&self) -> [u8;8];
fn config_pan(&self) -> u16;
fn config_tx_power(&self) -> i8;
fn config_channel(&self) -> u8;
fn config_set_address(&self, addr: u16);
fn config_set_address_long(&self, addr: [u8;8]);
fn config_set_pan(&self, addr: u16);
fn config_set_tx_power(&self, power: i8) -> Result<(), ErrorCode>;
fn config_set_channel(&self, chan: radio::RadioChannel);

config_set_tx_power takes a signed integer, whose units are dBm. If the specified value is greater than the maximum supported transmit power or less than the minimum supported transmit power, it MUST return INVAL. Otherwise, it MUST set the transmit power to the closest value that the radio supports. config_tx_power MUST return the actual transmit power value in dBm. Therefore, it is possible that config_tx_power returns a different (but close) value than what was passed to config_set_tx_power.
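
The following sketch illustrates this rule for a hypothetical radio whose supported output levels are made up for the example: it rejects out-of-range requests with INVAL and otherwise snaps to the closest supported level, which is the value a later config_tx_power call would report.

fn closest_supported_tx_power(requested_dbm: i8) -> Result<i8, ErrorCode> {
    // Hypothetical supported output power levels for some radio, in dBm.
    const SUPPORTED_DBM: [i8; 4] = [-17, -9, 0, 4];
    if requested_dbm < SUPPORTED_DBM[0] || requested_dbm > SUPPORTED_DBM[3] {
        // Out of range: the HIL requires INVAL here.
        return Err(ErrorCode::INVAL);
    }
    // Otherwise, snap to the closest supported level.
    Ok(SUPPORTED_DBM
        .iter()
        .copied()
        .min_by_key(|p| (*p as i16 - requested_dbm as i16).abs())
        .unwrap())
}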

4 RadioData trait for sending and receiving packets

The RadioData trait implements the radio data path: it allows clients to send and receive packets as well as accessors for packet fields.

fn payload_offset(&self, long_src: bool, long_dest: bool) -> u8;
fn header_size(&self, long_src: bool, long_dest: bool) -> u8;
fn packet_header_size(&self, packet: &'static [u8]) -> u8;
fn packet_get_src(&self, packet: &'static [u8]) -> u16;
fn packet_get_dest(&self, packet: &'static [u8]) -> u16;
fn packet_get_src_long(&self, packet: &'static [u8]) -> [u8;8];
fn packet_get_dest_long(&self, packet: &'static [u8]) -> [u8;8];
fn packet_get_pan(&self, packet: &'static [u8]) -> u16;
fn packet_get_length(&self, packet: &'static [u8]) -> u8;
fn packet_has_src_long(&self, packet: &'static [u8]) -> bool;
fn packet_has_dest_long(&self, packet: &'static [u8]) -> bool;

The packet_ functions MUST NOT be called on improperly formatted 802.15.4 packets (i.e., only on received packets). Otherwise the return values are undefined. payload_offset returns the offset in a buffer at which the radio stack places the data payload. To send a data payload, a client should fill in the payload starting at this offset. For example, if payload_offset returns 11 and the caller wants to send 20 bytes, it should fill in bytes 11-30 of the buffer with the payload. header_size returns the size of a header based on whether the source and destination addresses are long (64-bit) or short (16-bit). packet_header_size returns the size of the header on a particular correctly formatted packet (i.e., it looks at the header to see if there are long or short addresses).
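
For example, the following hedged sketch fills a transmit buffer using payload_offset for 16-bit source and destination addresses; it assumes the accessor functions above belong to a RadioData trait object.

fn fill_short_addr_payload(radio: &dyn RadioData, tx_buf: &mut [u8], payload: &[u8]) -> u8 {
    // Ask the stack where the data payload starts for short (16-bit) source
    // and destination addresses, then copy the payload in at that offset.
    let offset = radio.payload_offset(false, false) as usize;
    tx_buf[offset..offset + payload.len()].copy_from_slice(payload);
    // The payload length is what transmit() expects as tx_len.
    payload.len() as u8
}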

The data path has two callbacks: one for when a packet is received and one for when a packet transmission completes.

fn set_transmit_client(&self, client: &'static TxClient);
fn set_receive_client(&self, client: &'static RxClient,
                      receive_buffer: &'static mut [u8]);
fn set_receive_buffer(&self, receive_buffer: &'static mut [u8]);

Registering for a receive callback requires also providing a packet buffer to receive packets into. The receive callback MUST pass this buffer back. The callback handler MUST install a new receive buffer with a call to set_receive_buffer. This buffer MAY be the same buffer it received or a different one.
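For illustration (not part of the TRD), a receive client that reuses its buffer might look like the following sketch; the Sniffer type is hypothetical and the traits are assumed to come from kernel::hil::radio.

struct Sniffer<'a, R: RadioData> {
    radio: &'a R,
}

impl<'a, R: RadioData> RxClient for Sniffer<'a, R> {
    fn receive(&self, buf: &'static mut [u8], len: u8, result: Result<(), ErrorCode>) {
        if result.is_ok() {
            // Inspect the first `len` bytes of the received frame here.
        }
        // The callback consumes the buffer, so hand one back (here, the
        // same one) or reception will stop.
        self.radio.set_receive_buffer(buf);
    }
}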

Clients transmit packets by calling transmit or transmit_long.

fn transmit(&self,
            dest: u16,
            tx_data: &'static mut [u8],
            tx_len: u8,
            source_long: bool) -> Result<(), ErrorCode>;

fn transmit_long(&self,
            dest: [u8;8],
            tx_data: &'static mut [u8],
            tx_len: u8,
            source_long: bool) -> Result<(), ErrorCode>;

The packet sent on the air by a call to transmit MUST be formatted to have a 16-bit short destination address equal to the dest argument. A packet sent on the air by a call to transmit_long MUST be formatted to have a 64-bit destination address equal to the dest argument.

The source_long parameter denotes the length of the source address in the packet. If source_long is false, the implementation MUST include a 16-bit short source address in the packet. If source_long is true, the implementation MUST include a 64-bit full source address in the packet. The addresses MUST be consistent with the values written and read with config_set_address, config_set_address_long, config_address, and config_address_long.

The passed buffer tx_data MUST be MAX_BUF_LEN in size. tx_len is the length of the payload. If transmit returns Ok(()), then the driver MUST issue a transmission completion callback. If transmit returns any value except Ok(()), it MUST NOT accept the packet for transmission and MUST NOT issue a transmission completion callback. If tx_len is too long, transmit MUST return SIZE. If the radio is off, transmit MUST return OFF. If the stack is temporarily unable to send a packet (e.g., it already has a transmission pending), then transmit MUST return BUSY. If the stack accepts a packet for transmission (returns Ok(())), it MUST return BUSY to subsequent transmit calls until it issues a transmission completion callback.
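As a sketch of the transmit path (illustrative, not part of the TRD), a client with a MAX_BUF_LEN-sized static buffer might fill in its payload at payload_offset and then call transmit:

// Assumes the RadioData trait from this document; `send_hello`, the
// destination 0xABCD, and the payload are hypothetical examples.
fn send_hello<R: RadioData>(radio: &R, buf: &'static mut [u8]) -> Result<(), ErrorCode> {
    let payload = b"hello";
    // Short (16-bit) source and destination addresses.
    let offset = radio.payload_offset(false, false) as usize;
    buf[offset..offset + payload.len()].copy_from_slice(payload);
    // tx_len is the payload length; the stack fills in the header itself.
    radio.transmit(0xABCD, buf, payload.len() as u8, false)
}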

5 TxClient, RxClient, ConfigClient, and PowerClient traits

An 802.15.4 radio provides four callbacks: packet transmission completion, packet reception, when a change to the radio's configuration has completed, and when the power state of the radio has changed.

pub trait TxClient {
    fn send_done(&self, buf: &'static mut [u8], acked: bool, result: Result<(), ErrorCode>);
}

The buf parameter of send_done MUST pass back the same buffer that was passed to transmit. acked specifies whether the sender received a link-layer acknowledgement (indicating the packet was successfully received). result indicates whether or not the packet was transmitted successfully; it can take on any of the valid return values for transmit or FAIL to indicate other reasons for failure.
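For illustration (not from the TRD), a transmit client that retransmits once when a frame is not acknowledged might look like the following sketch; the Sender type, destination, length, and retry policy are hypothetical.

struct Sender<'a, R: RadioData> {
    radio: &'a R,
}

impl<'a, R: RadioData> TxClient for Sender<'a, R> {
    fn send_done(&self, buf: &'static mut [u8], acked: bool, result: Result<(), ErrorCode>) {
        if result.is_ok() && !acked {
            // No link-layer acknowledgement: retransmit the same frame
            // (the buffer still holds the previously written payload).
            let _ = self.radio.transmit(0xABCD, buf, 5, false);
        } else {
            // Done (or a hard failure): a real client would store `buf`
            // for the next transmission rather than dropping it.
        }
    }
}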

The receive callback is called whenever the radio receives a packet destined to the node's address (including broadcast address) and PAN id that passes a CRC check. If a packet is not destined to the node or does not pass a CRC check then receive MUST NOT be called. buf is the buffer containing the received packet. It MUST be the same buffer that was passed with either installing the receive handler or calling set_receive_buffer. The buffer is consumed through the callback: the radio stack MUST NOT maintain a reference to the buffer. A client that wants to receive another packet MUST call set_receive_buffer.

pub trait RxClient {
    fn receive(&self, buf: &'static mut [u8], len: u8, result: Result<(), ErrorCode>);
}

The config_done callback indicates that a radio reconfiguration has been committed to hardware. If the configuration has been successfully committed, result MUST be Ok(()). It may otherwise take on any value that is a valid return value of config_commit or FAIL to indicate another failure.

pub trait ConfigClient {
    fn config_done(&self, result: Result<(), ErrorCode>);
}

The changed callback indicates that the power state of the radio has changed. The on parameter states whether it is now on or off. If a call to stop using the RadioConfig interface returns Ok(()), the radio MUST issue a changed callback when the radio is powered off, passing false as the value of the on parameter. If a call to start using the RadioConfig interface returns Ok(()), the radio MUST issue a changed callback when the radio is powered on, passing true as the value of the on parameter.

pub trait PowerClient {
    fn changed(&self, on: bool);
}

The return value of is_on MUST be consistent with the state as exposed through the changed callback. If the changed callback has indicated that the radio is on, then is_on MUST return true until a later callback signals the radio is off. Similarly, if the changed callback has indicated that the radio is off, then is_on MUST return false until a later callback signals the radio is on.

6 RadioCrypto trait

The RadioCrypto trait is for configuring and enabling/disabling different security settings.

7 Example Implementation: RF233

An implementation of the radio HIL for the Atmel RF233 radio can be found in capsules::rf233. This implementation interacts with an RF233 radio over an SPI bus. It supports 16-bit addresses, intra-PAN communication, and synchronous link-layer acknowledgments. It has two files: rf233.rs and rf233_const.rs. The latter has constants such as register identifiers, command formats, and register flags.

The RF233 supports six major operations over the SPI bus: read a register, write a register, read an 802.15.4 frame, write an 802.15.4 frame, read frame SRAM, and write frame SRAM. The distinction between frame and SRAM access is that frame access always starts at index 0, while SRAM access is random access (a frame operation is equivalent to an SRAM operation with address 0). The implementation only uses register and frame operations. The details of these operations can be found in Section 6.3 of the RF233 datasheet.

The implementation has 6 high-level states:

  • off,
  • initializing the radio,
  • turning on the radio to receive,
  • waiting to receive packets (default idle state),
  • receiving a packet,
  • transmitting a packet, and
  • committing a configuration change.

All of these states, except off, have multiple substates. They each represent a (mostly) linear series of state transitions. If a client requests an operation (e.g., transmit a packet, reconfigure) while the stack is in the waiting state, the stack starts the operation immediately. If it is in the midst of receiving a packet, it marks the operation as pending and completes it when it falls back to the waiting state. If both a packet transmission and a reconfiguration are pending, it prioritizes the transmission.

The RF233 provides an interrupt line to the processor to signal certain state changes. The radio has multiple interrupts, which are multiplexed onto a single interrupt line. Software is responsible for reading an interrupt status register on the radio (a register read operation) to determine which interrupts are pending. Since a register read requires an SPI operation, it can be significantly delayed. For example, if the stack is in the midst of writing out a packet to the radio's frame buffer, it will complete the SPI operation before issuing the register read. In cases when transmissions are interrupted by packet reception, the stack simply marks the packet as pending, waits for the reception to complete, then retries the transmission.

8 Authors' Address

Philip Levis
409 Gates Hall
Stanford University
Stanford, CA 94305
phone - +1 650 725 9046
email - pal@cs.stanford.edu

Kernel Serial Peripheral Interface (SPI) HIL

TRD:
Working Group: Kernel
Type: Documentary
Status: Draft
Author: Philip Levis, Alexandru Radovici
Draft-Created: 2021/08/13
Draft-Modified: 2021/08/13
Draft-Version: 2
Draft-Discuss: tock-dev@googlegroups.com

Abstract

This document proposes a hardware independent layer interface (HIL) for a serial peripheral interface (SPI) bus in the Tock operating system kernel. It describes the Rust traits and other definitions for this service as well as the reasoning behind them. This document is in full compliance with TRD1.

Note that this HIL has not been implemented yet in the master branch of Tock -- this is a working document as the HIL is designed.

1 Introduction

The serial peripheral interface (SPI) is a standard bus design for processors and microcontrollers to exchange data with sensors, I/O devices, and other off-chip components. The bus is clocked. The device driving the clock is called a "master" or "controller" and the device whose clock is driven is called a "slave" or "peripheral". A SPI bus has three data lines: the clock (CLK), data from the controller to the peripheral (MOSI), and data from the peripheral to the controller (MISO). A SPI bus does not have addressing. Instead, peripherals have a chip select (CS) pin. When a peripheral's chip select line is brought low, it receives data on MOSI and sends data on MISO. A controller can connect to CS pins on many different devices and share the bus between them by explicitly controlling which ones are active.

The SPI HIL is in the kernel crate, in module hil::spi. It provides eight main traits:

  • kernel::hil::spi::Configure: provides an abstraction of configuring a SPI bus by setting its data rate, phase, and polarity.
  • kernel::hil::spi::Controller: allows a client for a SPI in controller mode to send and receive data.
  • kernel::hil::spi::ControllerDevice: combines Configure and Controller to provide an abstraction of a SPI bus in controller mode for a client that is bound to a specific chip select (e.g., a sensor driver). It allows a client to send and receive data as well as configure the bus for (only) its own operations.
  • kernel::hil::spi::ChipSelect: allows a client to change which chip select is active on a SPI bus in controller mode.
  • kernel::hil::spi::ControllerBus: combines ControllerDevice and ChipSelect to allow a client to issue SPI operations on any chip select. It also supports initializing the bus hardware. This trait is intended to be implemented by a chip implementation.
  • kernel::hil::spi::PeripheralDevice: extends Configure and provides an abstraction of a SPI bus in peripheral mode. It allows a client to learn when it is selected, to send and receive data, and configure the bus for its own operations.
  • kernel::hil::spi::PeripheralBus: extends PeripheralDevice to support initializing the bus hardware. This trait is intended to be implemented by a chip peripheral implementation.
  • kernel::hil::spi::Bus: represents a SPI bus that can be dynamically changed between controller and peripheral modes. This trait is intended to be implemented by a chip implementation.

A given board MUST NOT include an implementation of more than one of the ControllerBus, PeripheralBus, and Bus traits for a given SPI bus; these traits are mutually exclusive.

This document describes these traits and their semantics.

2 Configure trait

The Configure trait allows a client to set the data rate (clock frequency) of the SPI bus as well as its polarity and phase. Polarity controls whether the clock line is high or low when the bus is idle. Phase controls on which clock edges the bus clocks data in and out. It also allows configuring whether data is sent most significant bit first or least significant bit first.

#![allow(unused)]
fn main() {
pub enum DataOrder {
    MSBFirst,
    LSBFirst,
}

pub enum ClockPolarity {
    IdleLow,
    IdleHigh,
}

pub enum ClockPhase {
    SampleLeading,
    SampleTrailing,
}

pub trait Configure {
    fn set_rate(&self, rate: u32) -> Result<u32, ErrorCode>;
    fn get_rate(&self) -> u32;

    fn set_polarity(&self, polarity: ClockPolarity) -> Result<(), ErrorCode>;
    fn get_polarity(&self) -> ClockPolarity;

    fn set_phase(&self, phase: ClockPhase) -> Result<(), ErrorCode>;
    fn get_phase(&self) -> ClockPhase;

    fn set_data_order(&self, order: DataOrder) -> Result<(), ErrorCode>;
    fn get_data_order(&self) -> DataOrder;
}
}

All of the set methods in Configure can return an error. Valid errors are:

  • INVAL (set_rate only): the parameter is outside the allowed range
  • NOSUPPORT (set_polarity, set_phase, set_data_order): the parameter provided cannot be supported. For example, a SPI bus that cannot have an IdleHigh polarity returns NOSUPPORT if a client tries to set it to have this polarity.
  • OFF (all): the bus is currently powered down in a state that does not allow configuring it.
  • BUSY (all): the bus is in the midst of an operation and cannot currently change its configuration.
  • FAIL (all): some other error occurred.

The set_rate method returns a u32 in its success case. This is the actual data rate set, which may differ from the one passed, e.g., due to clock precision or prescalers. The actual rate set MUST NOT be greater than the rate passed. If no rate can be set (e.g., the requested rate is too small), set_rate MUST return Err(INVAL).

The relationship of phase and polarity follows the standard SPI specification[1]:

+------------+------------------+-------------+----------------+----------------+
| Polarity   | Phase            | Idle Level  | Data Out       | Data In        |
+------------+------------------+-------------+----------------+----------------+
| IdleLow    | SampleLeading    | Low         | Rising Edge    | Falling Edge   |
| IdleLow    | SampleTrailing   | Low         | Falling Edge   | Rising Edge    |
| IdleHigh   | SampleLeading    | High        | Rising Edge    | Rising Edge    |
| IdleHigh   | SampleTrailing   | High        | Falling Edge   | Falling Edge   |
+------------+------------------+-------------+----------------+----------------+

If the SPI bus is in the middle of an outstanding operation (Controller::read_write_bytes or Peripheral::read_write_bytes), calls to Configure to set values MUST return BUSY.
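For illustration (this HIL is not yet implemented, so the module path is an assumption), a driver might configure its bus as in the following sketch; the 4 MHz rate and the chosen mode are arbitrary example values.

// (assumes: use kernel::hil::spi::{Configure, ClockPhase, ClockPolarity};
//           use kernel::ErrorCode;)
fn configure_bus<C: Configure>(spi: &C) -> Result<(), ErrorCode> {
    // The Ok value is the rate actually achieved, which may be lower than
    // the 4 MHz requested.
    let _actual = spi.set_rate(4_000_000)?;
    spi.set_polarity(ClockPolarity::IdleLow)?;
    spi.set_phase(ClockPhase::SampleLeading)?;
    Ok(())
}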

3 Controller, ControllerDevice, and ControllerClient traits

The Controller trait allows a client to send and receive data on a SPI bus in controller mode:

#![allow(unused)]
fn main() {
pub trait Controller<'a> {
    fn set_client(&self, client: &'a dyn ControllerClient);
    fn read_write_bytes(
        &self,
        write_buffer: &'a mut [u8],
        read_buffer: Option<&'a mut [u8]>,
        len: usize,
    ) -> Result<(), (ErrorCode, &'a mut [u8], Option<&'a mut [u8]>)>;
}

pub trait ControllerClient<'a> {
    fn read_write_done(
        &self,
        write_buffer: &'a mut [u8],
        read_buffer: Option<&'a mut [u8]>,
        len: usize,
        status: Result<(), ErrorCode>,
    );
}
}

The read_write_bytes method always takes a buffer to write and has an optional buffer to read into. For operations that do not need to read from the SPI peripheral, the read_buffer can be None.

If the call to read_write_bytes returns Ok(()), the implementation MUST issue a callback to the ControllerClient when it completes. If the call returns an Err, the implementation MUST NOT issue a callback, except if the ErrorCode is BUSY. In this case, the implementation issues a callback for the outstanding operation but does not issue a callback for the failed one. If it returns Err, the implementation MUST return the buffers passed in the call. Valid ErrorCode values for an Err result are:

  • BUSY: the SPI is busy with another call to read_write_bytes and so cannot complete the request.
  • OFF: the SPI is off and cannot accept a request.
  • INVAL: the length value is 0, or one of the buffers passed has length 0.
  • RESERVE: there is no client for a callback.
  • SIZE: one of the buffers passed is smaller than len: len bytes cannot be transferred.
  • FAIL: some other failure condition.

The set_client method sets which callback to invoke when a read_write_bytes call completes. The read_write_done callback MUST return the buffers passed in the call to read_write_bytes. The len argument is the number of bytes read/written. The status argument indicates whether the SPI operation completed successfully. It may return any of the ErrorCode values that can be returned by read_write_bytes: these represent asynchronous errors (e.g., due to queueing).
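As a sketch of how a client uses this interface (illustrative only), a sensor driver might issue a two-byte full-duplex transfer and handle the completion callback; the SensorDriver type and the 0x80 command byte are hypothetical.

struct SensorDriver<'a, S: Controller<'a>> {
    spi: &'a S,
}

impl<'a, S: Controller<'a>> SensorDriver<'a, S> {
    fn start(&self, write: &'a mut [u8], read: &'a mut [u8]) {
        write[0] = 0x80; // hypothetical "read register" command byte
        // Ok(()) means a read_write_done callback will follow; on Err the
        // buffers are handed back immediately in the error tuple.
        if let Err((_ecode, _wbuf, _rbuf)) = self.spi.read_write_bytes(write, Some(read), 2) {
            // e.g., retry later
        }
    }
}

impl<'a, S: Controller<'a>> ControllerClient<'a> for SensorDriver<'a, S> {
    fn read_write_done(
        &self,
        _write_buffer: &'a mut [u8],
        read_buffer: Option<&'a mut [u8]>,
        len: usize,
        status: Result<(), ErrorCode>,
    ) {
        if status.is_ok() {
            if let Some(buf) = read_buffer {
                let _reply = &buf[..len]; // bytes clocked in from the peripheral
            }
        }
    }
}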

The ControllerDevice trait combines Controller and Configure traits. It provides the abstraction of being able to read/write to the bus and adjust its configuration.

#![allow(unused)]
fn main() {
pub trait ControllerDevice<'a>: Controller<'a> + Configure {}
}

4 ChipSelect and ControllerBus

The ChipSelect trait allows a client to change which chip select is active on the SPI bus. Because different SPI hardware can provide different numbers of chip selects, the actual chip select value is an associated type. This associated type is typically an enum so a chip implementation can statically verify that clients pass only valid chip select values.

#![allow(unused)]
fn main() {
pub trait ChipSelect {
  type Value: Copy;
  fn set_chip_select(&self, cs: Self::Value) -> Result<(), ErrorCode>;
  fn get_chip_select(&self) -> Self::Value;
}
}

The ControllerBus trait combines ControllerDevice and ChipSelect to provide the full abstraction of a SPI bus. It is the trait that chip SPI implementations provide. In addition to ControllerDevice and ChipSelect, ControllerBus includes an init method. This init method initializes the hardware to be a SPI controller and is typically called at boot.

#![allow(unused)]
fn main() {
pub trait ControllerBus<'a>: ControllerDevice<'a> + ChipSelect {
  fn init(&self) -> Result<(), ErrorCode>;
}
}

The Err result of init can return the following ErrorCode values:

  • OFF: not currently powered so can't be initialized.
  • RESERVE: no clock is configured yet.
  • FAIL: other failure condition.

A client using a ControllerBus can exchange data with multiple SPI peripherals, switching between them with ChipSelect. Calls to Configure modify the configuration of the currently active chip select, and these configurations are stateful: changing the chip select restores the last configuration set for that chip select. For example,

#![allow(unused)]
fn main() {
bus.set_chip_select(1);
bus.set_phase(ClockPhase::SampleLeading);
bus.set_chip_select(2);
bus.set_phase(ClockPhase::SampleTrailing);
bus.set_chip_select(1);
bus.read_write_bytes(...); // Uses SampleLeading
}

will have a SampleLeading phase in the final read_write_bytes call, because the configuration of chip select 1 is saved and restored when the chip select is set back to 1.

5 Peripheral and PeripheralClient traits

When a chip acts as a SPI peripheral, it does not drive the clock. Instead, it responds to the clock of the controller. In some cases, the peripheral must be able to respond with a bit of data before it has even received one (e.g., if phase is set to SampleLeading). As a result, a peripheral read/write request may never complete if the controller never issues a request of its own. The peripheral has to provide read and write buffers in anticipation of a controller request. Unlike a controller, which must always write data, a peripheral can only read, only write, or read and write.

#![allow(unused)]
fn main() {
pub trait Peripheral {
    fn set_client(&self, client: &'static dyn PeripheralClient);

    fn read_write_bytes(
        &self,
        write_buffer: Option<&'static mut [u8]>,
        read_buffer: Option<&'static mut [u8]>,
        len: usize,
    ) -> Result<
        (),
        (ErrorCode, Option<&'static mut [u8]>, Option<&'static mut [u8]>),
    >;

    fn set_write_byte(&self, write_byte: u8);
}

pub trait PeripheralClient {
    fn chip_selected(&self);
    fn read_write_done(
        &self,
        write_buffer: Option<&'static mut [u8]>,
        read_buffer: Option<&'static mut [u8]>,
        len: usize,
        status: Result<(), ErrorCode>,
    );
}
}

The Peripheral API differs from the Controller in three ways:

  • read_write_bytes has an optional write buffer,
  • clients have a chip_selected callback, and
  • a peripheral can set its write as a single-byte value.

When a controller brings the chip select line low, the implementation calls the chip_selected callback to inform the peripheral that an operation is starting. A controller may begin clocking data almost immediately after the chip select is brought low (e.g., within a SPI clock tick, so in some cases a few hundred nanoseconds). Because this is faster than the chip_selected callback can typically be issued, the client SHOULD have already made a read_write_bytes or set_write_byte call, so the SPI hardware has a byte ready to send.

The set_write_byte call sets the byte that the SPI peripheral should write to the controller. The peripheral will write this byte on each SPI byte operation until the next call to set_write_byte or read_write_bytes with a write buffer argument.

The read_write_bytes method takes two Option types: one for the write buffer and one for the read buffer. The SPI peripheral will read bytes written by the controller into the read buffer, and will write out the bytes in the write buffer to the controller. If no write buffer is provided, the bytes the peripheral will write are undefined. If read_write_bytes returns Ok(()), the request was accepted and the implementation MUST issue a callback when the request completes or has an error. The valid ErrorCode values for read_write_bytes are:

  • BUSY: the SPI is busy with another call to read_write_bytes and so cannot complete the request.
  • OFF: the SPI is off and cannot accept a request.
  • INVAL: the len parameter was 0 or both buffers were None.
  • RESERVE: there is no client for a callback.
  • SIZE: one of the passed buffers is smaller than len.

The read_write_done callback is called when the outstanding read_write_bytes request completes. The len argument is how many bytes were read/written. It may differ from the len passed to read_write_bytes if one of the buffers is shorter or if an error occurred. It may return any of the ErrorCode values that can be returned by read_write_bytes: these represent asynchronous errors (e.g., due to arbitration).
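As an illustration (the Responder type and the status byte are hypothetical), a peripheral client might pre-load a reply byte with set_write_byte and then inspect whatever the controller wrote once the transfer completes:

struct Responder;

impl PeripheralClient for Responder {
    fn chip_selected(&self) {
        // The controller has selected us; the byte most recently set with
        // set_write_byte (or the queued write buffer) is what it clocks out.
    }

    fn read_write_done(
        &self,
        _write_buffer: Option<&'static mut [u8]>,
        read_buffer: Option<&'static mut [u8]>,
        len: usize,
        status: Result<(), ErrorCode>,
    ) {
        if status.is_ok() {
            if let Some(buf) = read_buffer {
                let _request = &buf[..len]; // bytes the controller wrote to us
            }
        }
    }
}

At setup time, the board would register the client with set_client and call set_write_byte before the controller ever selects the device, so valid data is available for the very first clocked byte.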

6 PeripheralDevice and PeripheralBus traits

The PeripheralDevice trait represents the standard client abstraction of a SPI peripheral. It combines Peripheral and Configure:

#![allow(unused)]
fn main() {
pub trait PeripheralDevice<'a>: Peripheral<'a> + Configure {}
}

PeripheralBus represents the lowest-level hardware abstraction of a SPI peripheral. It is the trait that chip implementations typically implement. It is PeripheralDevice plus an init() method for initializing hardware to be a SPI peripheral:

#![allow(unused)]
fn main() {
pub trait PeripheralBus<'a>: PeripheralDevice<'a> {
  fn init(&self) -> Result<(), ErrorCode>;
}
}

The Err result of init can return the following ErrorCode values:

  • OFF: not currently powered so can't be initialized.
  • FAIL: other failure condition.

7 Bus trait

The ControllerBus and PeripheralBus traits are intended for use cases in which a given SPI block is either always used as a controller or always used as a peripheral. Some systems, however, require the bus to change between these roles. For example, a board might export the bus over an expansion header, and whether it behaves as a peripheral or controller depends on what it is plugged into and which userspace processes run.

The Bus trait allows software to dynamically change a SPI bus between controller and peripheral mode.

#![allow(unused)]
fn main() {
pub trait Bus<'a>: PeripheralDevice<'a> + ControllerBus<'a> {
    fn make_controller(&self) -> Result<(), ErrorCode>;
    fn make_peripheral(&self) -> Result<(), ErrorCode>;
    fn is_controller(&self) -> bool;
    fn is_peripheral(&self) -> bool;
}
}

If software invokes a Peripheral operation while the bus is in controller mode, the method MUST return OFF. If software invokes a Controller operation while the bus is in peripheral mode, the method MUST return OFF. Changing the controller chip select while the device is in peripheral mode changes the chip select configuration of the controller but MUST NOT have an effect on peripheral mode.

When a Bus first starts and is initialized, it MUST be in controller mode, as the init() method is part of the ControllerBus trait.

8 Capsules

This section describes the standard Tock capsules for SPI communication.

9 Implementation Considerations

10 Authors' Address

Philip Levis
409 Gates Hall
Stanford University
Stanford, CA 94305
USA
pal@cs.stanford.edu

Alexandru Radovici <msg4alex@gmail.com>

Application Persistent Data Storage Permissions

TRD:
Working Group: Kernel
Type: Documentary
Status: Draft
Author: Brad Campbell
Draft-Created: 2024/06/06
Draft-Modified: 2024/06/06
Draft-Version: 1
Draft-Discuss: devel@lists.tockos.org

Abstract

Tock supports storing persistent state for applications, and all persistent state in Tock is identified based on the application that stored it. Tock supports permissions for persistent state, allowing for the kernel to restrict which applications can store state and which applications can read stored state. This TRD describes the permissions architecture for persistent state in Tock. This document is in full compliance with TRD1.

1 Introduction

Tock applications need to be able to store persistent state. Additionally, applications need to be able to keep data private from other applications. The kernel should also be able to allow specific applications to read and modify state from other applications.

This requires a method for assigning applications persistent identifiers, a mechanism for granting storage permissions to specific applications, and kernel abstractions for implementing storage capsules that respect the storage permissions.

2 Scope

This document only describes the permission architecture in Tock for supporting application persistent storage. This document does not prescribe specific types of persistent storage (e.g., flash, FRAM, etc.), storage access abstractions (e.g., block-access, byte-access, etc.), or storage interfaces (e.g., key-value, filesystems, logs, etc.).

3 Stored State Identifiers

All shared persistent storage implementations must store a 32-bit identifier with each stored object to mark the application that created the stored object.

When applications write data, their ShortId must be used as the identifier. When the kernel writes data, the identifier must be 0.

The security, uniqueness, mapping policy, and other properties of ShortIds are allowed to vary based on board configuration. For storage use cases which have specific concerns or constraints around the policies for storage identifiers, users should consult the properties of ShortIds afforded by the board's AppID assignment policy.

4 Permissions

All persistent application data is labeled based on the application which wrote the data. Applications can read and modify data with suitable permissions.

There are three types of permissions:

  1. Write: The application can write data.
  2. Read: The application can read data.
  3. Modify: The application can modify existing data.

Each permission type is independent. For example, an application can be given read permission for specific data but not be able to write new data itself.

Write is a boolean permission. An application either has permission to write or it does not.

Read and Modify permissions are tuples of (the permission type, stored state identifier). These permissions only exist as associated with a particular stored state identifier. That is, a Read permission gives an application permission to read only stored state marked with the associated stored state identifier, and a Modify permission gives an application permission to modify only stored state marked with the associated stored state identifier.

5 Requirements

The Tock storage model imposes the following requirements:

  1. Applications are given separate write, read, and modify permissions.
  2. The label stored with the persistent data when the data are written is the application's short AppID.
  3. Applications without a ShortId::Fixed cannot access (i.e., read/write/modify) any persistent storage.
  4. How permissions are mapped to applications must be customizable for different Tock kernels.

Additionally, the kernel itself can be given permission to store state.

5.1 ShortId Implications

As all persistent state written by applications is marked with the writing application's ShortId, the assignment mechanism for ShortIds is tightly coupled with the access policies for persistent state. This coupling is intentional as AppIDs are unique to specific applications. However, as ShortIds are only 32 bits, it is not possible to assign a globally unique ShortId to all applications. Therefore, board authors should be intentional with how ShortIds are assigned when persistent storage is accessible to userspace.

In particular, two potentially problematic cases can arise:

  1. A ShortId is re-used for different applications. This might happen if one application is discontinued and a new application is assigned the same ShortId. The new application would then have unconditional access to any state the old application stored.
  2. A new ShortId is used for the same application. This might happen if the ShortId assignment algorithm changes. The same application then would lose access to data it previously stored.

6 Kernel Enforcement

It is not feasible to implement all persistent storage APIs through the core kernel (i.e., in trusted code). Instead, the kernel provides an API to retrieve the storage permissions for a specific process. Capsules then use these permissions to enforce restrictions on storage access. The API consists of these functions:

#![allow(unused)]
fn main() {
/// Check if these storage permissions grant read access to the stored state
/// marked with identifier `stored_id`.
pub fn check_read_permission(&self, stored_id: u32) -> bool;

/// Check if these storage permissions grant modify access to the stored
/// state marked with identifier `stored_id`.
pub fn check_modify_permission(&self, stored_id: u32) -> bool;

/// Retrieve the identifier to use when storing state, if the application
/// has permission to write. Returns `None` if the application cannot write.
pub fn get_write_id(&self) -> Option<u32>;
}

This API is implemented for the StoragePermissions object. The StoragePermissions type can be stored per-process and passed in storage APIs to express the storage permissions of the caller of any storage operations.

6.1 Using Permissions in Capsules

When writing storage capsules, capsule authors should include APIs which include StoragePermissions as an argument, and should check for permission before performing any storage operation.

For example, a filing cabinet abstraction that identifies stored state based on a record name might have an (asynchronous) API like this:

#![allow(unused)]
fn main() {
pub trait FilingCabinet {
    fn read(&self, record: &str, permissions: StoragePermissions) -> Result<(), ErrorCode>;
    fn write(&self, record: &str, data: &[u8], permissions: StoragePermissions) -> Result<(), ErrorCode>;
}
}

Inside the implementation for any storage abstraction, the implementation must consider three operations and check for permissions:

  1. The operation is a read. If there is no stored state that matches the read request, the capsule should return ErrorCode::NOSUPPORT. If there is stored state that matches the request, the capsule must call StoragePermissions::check_read_permission(stored_id) with the identifier associated with the stored record. If check_read_permission() returns false, the capsule should return ErrorCode::NOSUPPORT. If check_read_permission() returns true, the capsule should return the read data.
  2. The operation is a write, and the write would store new data. The capsule must call StoragePermissions::get_write_id(). If get_write_id() returns None, the capsule should return ErrorCode::NOSUPPORT. If get_write_id() returns Some(), the capsule should save the new data and must use the returned u32 identifier. It should then return Ok(()).
  3. The operation is a write, and the write would overwrite existing data. The capsule must first retrieve the storage identifier for the existing state. Then the capsule must call StoragePermissions::check_modify_permission(stored_id). If check_modify_permission() returns false, the capsule should return ErrorCode::NOSUPPORT. If check_modify_permission() returns true, the capsule should overwrite the data while not changing the stored identifier. The capsule should then return Ok(()).

For example, with the filing cabinet example:

#![allow(unused)]
fn main() {
pub trait FilingCabinet {
    fn read(&self, record: &str, permissions: StoragePermissions) -> Result<[u8], ErrorCode> {
        let obj = self.cabinet.read(record);
        match obj {
            Some(r) => {
                if permissions.check_read_permission(r.id) {
                    Ok(r.data)
                } else {
                    Err(ErrorCode::NOSUPPORT)
                }
            }
            None => Err(ErrorCode::NOSUPPORT),
        }
    }

    fn write(&self, record: &str, data: &[u8], permissions: StoragePermissions) -> Result<(), ErrorCode> {
        let obj = self.cabinet.read(record);
        match obj {
            Some(r) => {
                if permissions.check_modify_permission(r.id) {
                    self.cabinet.write(record, r.id, data);
                    Ok(())
                } else {
                    Err(ErrorCode::NOSUPPORT)
                }
            }
            None => {
                match permissions.get_write_id() {
                    Some(id) => {
                        self.cabinet.write(record, id, data);
                        Ok(())
                    }
                    None => Err(ErrorCode::NOSUPPORT),
                }
            }
        }
    }
}
}

6.2 StoragePermissions Type

The kernel defines a StoragePermissions type which expresses the storage permissions of an application. This is implemented as a concrete type rather than a trait so permissions can be passed in storage APIs without requiring a static object for every process in the system.

The StoragePermissions type is capable of holding storage permissions in different formats. In general, the type looks like:

#![allow(unused)]
fn main() {
pub struct StoragePermissions(StoragePermissionsPrivate);

enum StoragePermissionsPrivate {
    SelfOnly(core::num::NonZeroU32),
    FixedSize(FixedSizePermissions),
    Listed(ListedPermissions),
    Kernel,
    Null,
}
}

Each variant is a different method for representing and storing storage permissions. For example, FixedSize contains fixed-size lists of permissions, whereas Null grants no storage permissions.

The StoragePermissions struct includes multiple constructors for instantiating storage permissions. The struct wraps the enum to ensure that permissions can only be created with those constructors. The constructors require a capability to use so only trusted code can create storage permissions.

7 Specifying Permissions

Different users and different kernels will use different methods for determining the persistent storage access permissions for different applications (and, by extension, the running process for that application). The following are some examples of how storage permissions may be specified.

  1. In TBF headers. The StoragePermissions TBF header allows a developer to specify storage permissions when the app is compiled. Using this method assumes the kernel can trust the application's headers, perhaps because the kernel only runs apps signed by a trusted party that has verified the TBF headers.
  2. Within the kernel. The kernel can maintain a data structure of permissions for known applications. This should be coupled with the AppID mechanism to consistently assign storage permissions to applications based on their persistent identifier.
  3. With a generic policy. The kernel may permit all applications with a fixed ShortId to use persistent storage. This method can isolate applications by only permitting read and modify access to state stored by the same application.

7.1 Assigning Permissions to Processes

The core kernel allows individual boards to configure how permissions are assigned to applications. At runtime, the kernel needs to know what permissions each executing process has. To facilitate this, Tock uses the ProcessStandardStoragePermissionsPolicy process policy. Each process, when created, will store a StoragePermissions object that specifies the storage permissions for that process.

#![allow(unused)]
fn main() {
/// Generic trait for implementing a policy on how applications should be
/// assigned storage permissions.
pub trait ProcessStandardStoragePermissionsPolicy<C: Chip> {
    /// Return the storage permissions for the specified `process`.
    fn get_permissions(&self, process: &ProcessStandard<C>) -> StoragePermissions;
}
}

This trait is specific to the ProcessStandard implementation of Process to enable policies to use TBF headers when assigning permissions.
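As a minimal sketch (not an actual Tock policy; the StoragePermissions constructor names, the cap field, and the short_app_id() call are assumptions for illustration), a board might grant each fixed-ShortId application access only to its own state:

pub struct SelfOnlyPolicy<Cap> {
    cap: Cap, // capability required by the StoragePermissions constructors
}

impl<C: Chip, Cap> ProcessStandardStoragePermissionsPolicy<C> for SelfOnlyPolicy<Cap> {
    fn get_permissions(&self, process: &ProcessStandard<C>) -> StoragePermissions {
        match process.short_app_id() {
            // A fixed ShortId becomes the application's write identifier and
            // its only read/modify identifier.
            ShortId::Fixed(id) => StoragePermissions::new_self_only(&self.cap, id),
            // Applications without a fixed ShortId get no storage access.
            _ => StoragePermissions::new_null(),
        }
    }
}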

Several examples of policies are in the capsules/system crate.

8 Storage Examples

The permissions architecture is generic for storage in Tock, but this section describes some examples of how this architecture may be used for several storage abstractions. Note, these are just examples and not descriptions of actual Tock implementations nor requirements for how various storage abstractions must be implemented.

  1. Key-Value storage. Each key-value pair is stored as a triple: (key, value, storage identifier). On get(), the storage identifier for the key-value pair is checked. On set(), if the key already exists the modify permission is used, and if the key does not exist the write permission is used.
  2. Logging. Loggers append to a shared log. Loggers can only append to the log if they have the write permission. Each log entry includes the storage identifier of the writing logger. Loggers do not have any read permission. Log analyzers only have read permissions. The analyzers have multiple read permissions, one for each storage identifier whose log entries they need to analyze. The modify permission is not used.
  3. Per-application nonvolatile storage. Each application is given a region of nonvolatile storage. Applications only access their own storage region. The storage implementation still checks and enforces read, write, and modify permissions, but the expectation is that applications that have the write permission also have modify and read permissions for their own stored state. There is no API for accessing other application state, so maintaining lists of read/modify permissions is not necessary.
  4. Global persistent configuration. The storage abstraction maintains a persistent data store that multiple applications use. Only one application is expected to have the write permission to initialize the configuration. Other applications that use the configuration have read permission for the initializing application's storage identifier, and may have modify permission if they need to update the configuration.

9 Authors' Addresses

Brad Campbell <bradjc@virginia.edu>

Universal Asynchronous Receiver Transmitter (UART) HIL

TRD:
Working Group: Kernel
Type: Documentary
Status: Draft
Author: Philip Levis, Leon Schuermann
Draft-Created: August 5, 2021
Draft-Modified: June 5, 2022
Draft-Version: 5
Draft-Discuss: tock-dev@googlegroups.com

Abstract

This document describes the hardware independent layer interface (HIL) for UARTs (serial ports) in the Tock operating system kernel. It describes the Rust traits and other definitions for this service as well as the reasoning behind them. This document is in full compliance with TRD1. The UART HIL in this document also adheres to the rules in the HIL Design Guide, which requires all callbacks to be asynchronous -- even if they could be synchronous.

1 Introduction

A serial port (UART) is a basic communication interface that Tock relies on for debugging and interactive applications. Unlike the SPI and I2C buses, which have a clock line, UART communication is asynchronous. This allows it to require only one pin for each direction of communication, but limits its speed as clock drift between the two sides can cause bits to be read incorrectly.

The UART HIL is in the kernel crate, in module hil::uart. It provides six main traits:

  • kernel::hil::uart::Configuration: allows a client to query how a UART is configured.
  • kernel::hil::uart::Configure: allows a client to configure a UART, setting its speed, character width, parity, and stop bit configuration.
  • kernel::hil::uart::Transmit: allows a client to transmit data.
  • kernel::hil::uart::TransmitClient: handles callbacks when a data transmission completes.
  • kernel::hil::uart::Receive: allows a client to receive data.
  • kernel::hil::uart::ReceiveClient: handles callbacks when data is received.

There are also collections of traits that combine these into more complete abstractions. For example, the Uart trait represents a complete UART, extending Transmit, Receive, and Configure.

To provide a level of minimal platform independence, a port of Tock to a given microcontroller is expected to implement certain instances of these traits. This allows, for example, debug output and panic dumps to work across chips and platforms.

This document describes these traits, their semantics, and the instances that a Tock chip is expected to implement. It also describes how the virtual_uart capsule allows multiple clients to share a UART. This document assumes familiarity with serial ports and their framing: Wikipedia's article on asynchronous serial communication is a good reference.

2 Configuration and Configure

The Configuration trait allows a client to query how a UART is configured. The Configure trait allows a client to configure a UART by setting its baud rate, character width, parity, stop bits, and whether hardware flow control is enabled.

These two traits are separate because there are cases when clients need to know the configuration but cannot set it. For example, when a UART is virtualized across multiple clients (e.g., so multiple sources can write to the console), individual clients may want to check the baud rate. However, they cannot set the baud rate, because that is fixed and shared across all of them. Similarly, some services may need to be able to set the UART configuration but do not need to check it.

Most devices using serial ports today use 8-bit data, but some older devices use more or fewer bits, and hardware implementations support this. If the character width of a UART is set to less than 8 bits, data is still partitioned into bytes, and the UART sends the least significant bits of each byte. Suppose a UART is configured to send 7-bit words. If a client sends 5 bytes, the UART will send 35 bits, transmitting the bottom 7 bits of each byte. The most significant bit of each byte is ignored. While this HIL does support UART transfers with a character-width of more than 8-bit, such characters cannot be sent or received using the provided bulk transfer mechanisms. A configuration with Width > 8 will disable the bulk buffer transfer mechanisms and restrict the device to single-character operations. Refer to 3 Transmit and TransmitClient and 4 Receive and ReceiveClient respectively.

Any configuration change must not apply to operations started before that change. The UART implementation is free to accept a configuration change and apply it with the next operation, or to refuse an otherwise valid configuration request because of an ongoing operation by returning ErrorCode::BUSY.

#![allow(unused)]
fn main() {
pub enum StopBits {
    One = 1,
    Two = 2,
}

pub enum Parity {
    None = 0,
    Odd = 1,
    Even = 2,
}

pub enum Width {
    Six = 6,
    Seven = 7,
    Eight = 8,
    Nine = 9,
}

pub struct Parameters {
    pub baud_rate: u32, // baud rate in bit/s
    pub width: Width,
    pub parity: Parity,
    pub stop_bits: StopBits,
    pub hw_flow_control: bool,
}

pub trait Configuration {
    fn get_baud_rate(&self) -> u32;
    fn get_width(&self) -> Width;
    fn get_parity(&self) -> Parity;
    fn get_stop_bits(&self) -> StopBits;
    fn get_hw_flow_control(&self) -> bool;
    fn get_configuration(&self) -> Parameters;
}

pub trait Configure {
    fn set_baud_rate(&self, rate: u32) -> Result<u32, ErrorCode>;
    fn set_width(&self, width: Width) -> Result<(), ErrorCode>;
    fn set_parity(&self, parity: Parity) -> Result<(), ErrorCode>;
    fn set_stop_bits(&self, stop: StopBits) -> Result<(), ErrorCode>;
    fn set_hw_flow_control(&self, on: bool) -> Result<(), ErrorCode>;
    fn configure(&self, params: Parameters) -> Result<(), ErrorCode>;
}
}

Methods in Configure can return the following error conditions:

  • OFF: The underlying hardware is currently not available, perhaps because it has not been initialized or in the case of a shared hardware USART controller because it is set up for SPI.
  • INVAL: Baud rate was set to 0.
  • NOSUPPORT: The underlying UART cannot satisfy this configuration.
  • BUSY: The UART is currently busy processing an operation which would be affected by a change of the respective parameter.
  • FAIL: Other failure condition.

Configuration::get_configuration can be used to retrieve a copy of the current UART configuration, which can later be restored using the Configure::configure method. An implementation of the Configure::configure method must ensure that this configuration is applied atomically: either the configuration described by the passed Parameters is applied in its entirety or the device's configuration shall remain unchanged, with the respective check's error returned.

The UART may be unable to set the precise baud rate specified. For example, the UART may be driven off a fixed clock with an integer prescaler. An implementation of configure MUST set the baud rate to the closest possible value to the baud_rate field of the params argument, and an implementation of set_baud_rate MUST set the baud rate to the closest possible value to the rate argument. The Ok result of set_baud_rate includes the actual rate set, while an Err(INVAL) result means the requested rate is well outside the operating speed of the UART (e.g., 16MHz).
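For illustration, a board setup function might configure a console UART to 115200 8N1 with the configure method (the function name is hypothetical; the types are those defined above):

fn setup_console<U: Configure>(uart: &U) -> Result<(), ErrorCode> {
    uart.configure(Parameters {
        baud_rate: 115200, // bit/s; the UART picks the closest achievable rate
        width: Width::Eight,
        parity: Parity::None,
        stop_bits: StopBits::One,
        hw_flow_control: false,
    })
}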

3 Transmit and TransmitClient

The Transmit and TransmitClient traits allow a client to transmit over the UART.

#![allow(unused)]
fn main() {
enum AbortResult {
    Callback(bool),
    NoCallback,
}

pub trait Transmit<'a> {
    fn set_transmit_client(&self, client: &'a dyn TransmitClient);

    fn transmit_buffer(
        &self,
        tx_buffer: &'static mut [u8],
        tx_len: usize,
    ) -> Result<(), (ErrorCode, &'static mut [u8])>;

    fn transmit_character(&self, character: u32) -> Result<(), ErrorCode>;
    fn transmit_abort(&self) -> AbortResult;
}

pub trait TransmitClient {
    fn transmitted_character(&self, rval: Result<(), ErrorCode>) {}
    fn transmitted_buffer(
        &self,
        tx_buffer: &'static mut [u8],
        tx_len: usize,
        rval: Result<(), ErrorCode>,
    );
}
}

The Transmit trait has two data paths: transmit_character and transmit_buffer. The transmit_character method is used in narrow use cases in which buffer management is not needed or when the client transmits 9-bit characters. Generally, software should use the transmit_buffer method. Most software implementations use DMA, such that a call to transmit_buffer triggers a single interrupt when the transfer completes; this saves energy and CPU cycles over per-byte transfers and also improves transfer speeds because hardware can keep the UART busy.

Each u32 passed to transmit_character is a single UART character. The UART MUST ignore the high order bits of the u32 that are outside the current character width. For example, if the UART is configured to use 9-bit characters, it must ignore bits 9-31: if the client passes 0xffffffff, the UART will transmit 0x1ff.

Each byte transmitted with transmit_buffer is a UART character. If the UART is using 8-bit characters, each character is a byte. If the UART is using smaller characters, it MUST ignore the high order bits of the bytes passed in the buffer. For example, if the UART is using 6-bit characters and is told to transmit 0xff, it will transmit 0x3f, ignoring the two most significant bits.

If a client needs to transmit characters larger than 8 bits, it should use transmit_character, as transmit_buffer is a buffer of 8-bit bytes and cannot store 9-bit values. If the UART is configured to use characters wider than 8-bit, the transmit_buffer operation is disabled and calls to it must return ErrorCode::INVAL.

There can be a single transmit operation ongoing at any time. Successfully calling either transmit_buffer or transmit_character causes the UART to become busy until it issues the callback corresponding to the outstanding operation.

3.1 transmit_buffer and transmitted_buffer

Transmit::transmit_buffer sends a buffer of data. The result returned by transmit_buffer indicates whether there will be a callback in the future. If transmit_buffer returns Ok(()), the implementation MUST call the TransmitClient::transmitted_buffer callback in the future when the transmission completes or fails. If transmit_buffer returns Err it MUST NOT issue a callback in the future in response to this call. If the error is BUSY, this is because there is an outstanding call to transmit_buffer or transmit_character: the implementation will continue to handle the original call and issue the originally scheduled callback (as if the call that returned Err(BUSY) never happened). However, it does not issue a callback for the call to transmit_buffer that returned Err.

The valid error codes for transmit_buffer are:

  • OFF: the underlying hardware is not available, perhaps because it has not been initialized or has been initialized into a different mode (e.g., a USART has been configured to be a SPI).
  • BUSY: the UART is already transmitting and has not made a transmission complete callback yet.
  • SIZE: tx_len is larger than the passed slice or tx_len == 0.
  • INVAL: the device is configured for data widths larger than 8-bit.
  • FAIL: some other failure.

Calling transmit_buffer while there is an outstanding transmit_buffer or transmit_character operation MUST return Err(BUSY).
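As a sketch (the Logger type is hypothetical; the traits are those defined above), a client might transmit a buffer and recycle it from the completion callback:

struct Logger<'a, T: Transmit<'a>> {
    uart: &'a T,
}

impl<'a, T: Transmit<'a>> Logger<'a, T> {
    fn log(&self, buf: &'static mut [u8], len: usize) {
        // Ok(()) means transmitted_buffer will be called later; on Err the
        // buffer comes back immediately in the error tuple.
        if let Err((_ecode, _buf)) = self.uart.transmit_buffer(buf, len) {
            // e.g., drop the message or queue it for a retry
        }
    }
}

impl<'a, T: Transmit<'a>> TransmitClient for Logger<'a, T> {
    fn transmitted_buffer(
        &self,
        _tx_buffer: &'static mut [u8],
        _tx_len: usize,
        _rval: Result<(), ErrorCode>,
    ) {
        // The UART is ready for another transmit_buffer call from here;
        // a real client would stash _tx_buffer for reuse.
    }
}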

The TransmitClient::transmitted_buffer callback indicates completion of a buffer transmission. The Result indicates whether the buffer was successfully transmitted. The tx_len argument specifies how many characters (defined by Configure) were transmitted. If the rval of transmitted_buffer is Ok(()), tx_len MUST be equal to the size of the transmission started by transmit_buffer, defined above. A call to transmit_character or transmit_buffer made within this callback MUST NOT return Err(BUSY) unless it is because this is not the first call to one of these methods in the callback. When this callback is made, the UART MUST be ready to receive another call. The valid ErrorCode values for transmitted_buffer are all of those returned by transmit_buffer plus:

  • CANCEL if the call to transmit_buffer was cancelled by a call to abort and the entire buffer was not transmitted.
  • SIZE if the buffer could only be partially transmitted.

3.2 transmit_character and transmitted_character

The transmit_character method transmits a single character of data asynchronously. The word length is determined by the UART configuration. A UART implementation MAY choose not to implement transmit_character and transmitted_character. There is a default implementation of transmitted_character so clients that do not use transmit_character do not have to implement a callback.

If transmit_character returns Ok(()), the implementation MUST call the transmitted_character callback in the future. If a call to transmit_character returns Err, the implementation MUST NOT issue a callback for this call, although if the error is BUSY it will issue a callback for the outstanding operation. Valid ErrorCode results for transmit_character are:

  • OFF: The underlying hardware is not available, perhaps because it has not been initialized or in the case of a shared hardware USART controller because it is set up for SPI.
  • BUSY: the UART is already transmitting and has not made a transmission callback yet.
  • NOSUPPORT: the implementation does not support transmit_character operations.
  • FAIL: some other error.

The TransmitClient::transmitted_character method indicates that a single word transmission completed. The Result indicates whether the word was successfully transmitted. A call to transmit_character or transmit_buffer made within this callback MUST NOT return BUSY unless it is because this is not the first call to one of these methods in the callback. When this callback is made, the UART MUST be ready to receive another call. The valid ErrorCode values for transmitted_character are all of those returned by transmit_character plus:

  • CANCEL if the call to transmit_character was cancelled by a call to abort and the word was not transmitted.

3.3 transmit_abort

The transmit_abort method allows a UART implementation to terminate an outstanding call to transmit_character or transmit_buffer early. The result of transmit_abort indicates two things:

  1. whether a callback will occur (there is an oustanding operation), and
  2. if a callback will occur, whether the operation is cancelled.

If transmit_abort returns Callback, there will be a future callback for the completion of the outstanding request. If there is an outstanding transmit_buffer or transmit_character operation, transmit_abort MUST return Callback. If there is no outstanding transmit_buffer or transmit_character operation, transmit_abort MUST return NoCallback.

The three possible values of AbortResult have these meanings:

  • Callback(true): there was an outstanding operation, which is now cancelled. A callback will be made for that operation with an ErrorCode of CANCEL.
  • Callback(false): there was an outstanding operation, which has not been cancelled. A callback will be made for that operation with a result other than Err(CANCEL).
  • NoCallback: there was no outstanding request and there will be no future callback.

Note that the semantics of the boolean field in AbortResult::Callback refer to whether the operation is cancelled, not whether this particular call cancelled it: a true result indicates that there will be an ErrorCode::CANCEL in the callback. Therefore, if a client calls transmit_abort twice and the first call returns Callback(true), the second call's return value of Callback(true) can involve no state transition within the sender, as it simply reports the current state (of the call being cancelled).

4 Receive and ReceiveClient traits

The Receive and ReceiveClient traits are used to receive data from the UART. They support both single-word and buffer reception. Buffer-based reception is more efficient, as it allows an MCU to handle only one interrupt for many characters. However, buffer-based reception only supports characters of 6, 7, and 8 bits, so clients using 9-bit words need to use word operations. If the UART is configured to use characters wider than 8-bit, the receive_buffer operation is disabled and calls to it must return ErrorCode::INVAL.

Each byte received is a character for the UART. If the UART is using 8-bit characters, each character is a byte. If the UART is using smaller characters, it MUST zero the high order bits of the data values. For example, if the UART is using 6-bit characters and receives 0x1f, it must store 0x1f in a byte and not set high order bits. If the UART is using 9-bit words and receives 0x1ea, it stores this in a 32-bit value for receive_character as 0x000001ea.

Receive supports a single outstanding receive request. A successful call to receive_buffer or receive_character causes UART reception to be busy until the callback for the outstanding operation is issued.

If the UART returns Ok to a call to receive_buffer or receive_character, it MUST return Err(BUSY) to subsequent calls to those methods until it issues the callback corresponding to the outstanding operation. The first call to receive_buffer or receive_character from within a receive callback MUST NOT return Err(BUSY): when it makes a callback, a UART must be ready to handle another reception request.

#![allow(unused)]
fn main() {
enum AbortResult {
    Callback(bool),
    NoCallback,
}

pub trait Receive<'a> {
    fn set_receive_client(&self, client: &'a dyn ReceiveClient);
    fn receive_buffer(
        &self,
        rx_buffer: &'static mut [u8],
        rx_len: usize,
    ) -> Result<(), (ErrorCode, &'static mut [u8])>;
    fn receive_character(&self) -> Result<(), ErrorCode>;
    fn receive_abort(&self) -> AbortResult;
}

pub trait ReceiveClient {
    fn received_character(&self, _character: u32, _rval: Result<(), ErrorCode>, _error: Error) {}

    fn received_buffer(
        &self,
        rx_buffer: &'static mut [u8],
        rx_len: usize,
        rval: Result<(), ErrorCode>,
        error: Error,
    );
}
}

4.1 receive_buffer, received_buffer and receive_abort

The receive_buffer method receives from the UART into the passed buffer. It receives up to rx_len bytes. When rx_len bytes have been received, the implementation MUST call the received_buffer callback to signal reception completion with an rval of Ok(()). The implementation MAY call the received_buffer callback before all rx_len bytes have been received; if it does so, rval MUST be Err. Valid return values for receive_buffer are:

  • OFF: the underlying hardware is not available, because it has not been initialized or is configured in a way that does not allow UART communication (e.g., a USART is configured to be SPI).
  • BUSY: the UART is already receiving (a buffer or a word) and has not made a reception received callback yet.
  • SIZE: rx_len is larger than the passed slice or rx_len == 0.
  • INVAL: the device is configured for data widths wider than 8 bits.

The receive_abort method can be used to cancel an outstanding buffer reception call. If there is an outstanding buffer reception, calling receive_abort MUST terminate the reception as early as possible, possibly completing it before all of the requested bytes have been read. In this case, the implementation MUST issue a received_buffer callback reporting the number of bytes actually read and with an rval of Err(CANCEL).

Reception early termination is necessary for UART virtualization. For example, suppose there are two UART clients. The first issues a read of 80 bytes. After 20 bytes have been read, the second client issues a read of 40 bytes. At this point, the virtualizer has to reduce the length of its outstanding read, from 60 (80-20) to 40 bytes. It needs to copy the 20 bytes read into the first client's buffer, the next 40 bytes into both of their buffers, and the last 20 bytes read into the first client's buffer. It accomplishes this by calling receive_abort to terminate the 80-byte read, copying the bytes read from the resulting callback, then issuing a receive_buffer of 40 bytes.

The valid return values for receive_abort are:

  • Callback(true): there was a reception outstanding and it has been cancelled. A callback with an rval of Err(CANCEL) will be issued.
  • Callback(false): there was a reception outstanding but it was not cancelled. A callback will be issued with an rval other than Err(CANCEL).
  • NoCallback: there was no reception outstanding and the implementation will not issue a callback.

If there is no outstanding call to receive_buffer or receive_character, receive_abort MUST return NoCallback.
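
To illustrate the virtualization pattern described above, the sketch below shows a received_buffer implementation that handles an aborted reception by re-issuing a shorter read from within the callback. The structure and field names are hypothetical, and copying the received bytes to per-client buffers is omitted; the sketch relies only on the Receive and ReceiveClient signatures shown earlier in this section.

use core::cell::Cell;

// Hypothetical virtualizer-style client; names are illustrative only.
struct ShorteningReader<'a> {
    uart: &'a dyn Receive<'a>,
    remaining: Cell<usize>,
}

impl<'a> ReceiveClient for ShorteningReader<'a> {
    fn received_buffer(
        &self,
        rx_buffer: &'static mut [u8],
        rx_len: usize,
        rval: Result<(), ErrorCode>,
        _error: Error,
    ) {
        match rval {
            // The reception was cut short by receive_abort: rx_len bytes of
            // rx_buffer are valid. Copy them out (omitted), then issue a
            // shorter follow-up read. The first receive_buffer call made
            // inside this callback must not fail with BUSY.
            Err(ErrorCode::CANCEL) => {
                let shorter = self.remaining.get().saturating_sub(rx_len);
                self.remaining.set(shorter);
                if shorter > 0 {
                    let _ = self.uart.receive_buffer(rx_buffer, shorter);
                }
            }
            // Completed normally or failed for another reason.
            _ => self.remaining.set(0),
        }
    }
}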

4.2 receive_character and received_character

The receive_character method and received_character callback allow a client to perform character operations without buffer management. They receive a single UART character, where the character width is defined by the UART configuration and can be wider than 8 bits.

A UART implementation MAY choose not to implement receive_character and received_character. There is a default implementation of received_character so that clients that do not use receive_character do not have to implement this callback.

If the UART returns Ok(()) to a call to receive_character, it MUST make a received_character callback in the future, when it receives a character or some error occurs. Valid Err values of receive_character are:

  • BUSY: the UART is busy with an outstanding call to receive_buffer or receive_character.
  • OFF: the UART is powered down or in a configuration that does not allow UART reception (e.g., it is a USART in SPI mode).
  • NOSUPPORT: receive_character operations are not supported.
  • FAIL: some other error.
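
A minimal sketch of a single-character client follows; the structure and field names are hypothetical. It re-issues receive_character from within the callback, relying on the rule in Section 4 that the first such call made in a receive callback does not return BUSY.

use core::cell::Cell;

// Hypothetical client that counts received characters; names are
// illustrative only.
struct CharCounter<'a> {
    uart: &'a dyn Receive<'a>,
    count: Cell<usize>,
}

impl<'a> CharCounter<'a> {
    fn start(&self) -> Result<(), ErrorCode> {
        // May fail with BUSY, OFF, NOSUPPORT, or FAIL as listed above.
        self.uart.receive_character()
    }
}

impl<'a> ReceiveClient for CharCounter<'a> {
    fn received_character(&self, _character: u32, rval: Result<(), ErrorCode>, _error: Error) {
        if rval.is_ok() {
            self.count.set(self.count.get() + 1);
            // Re-issue immediately; the UART must be ready for another
            // request when it makes this callback.
            let _ = self.uart.receive_character();
        }
    }

    fn received_buffer(
        &self,
        _rx_buffer: &'static mut [u8],
        _rx_len: usize,
        _rval: Result<(), ErrorCode>,
        _error: Error,
    ) {
        // This client only uses single-character reception.
    }
}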

5 Composite Traits

In addition to the 6 basic traits, the UART HIL defines several traits that use these basic traits as supertraits. These composite traits allow structures to refer to multiple pieces of UART functionality with a single reference and ensure that their implementations are coupled.

pub trait Uart<'a>: Configure + Configuration + Transmit<'a> + Receive<'a> {}
pub trait UartData<'a>: Transmit<'a> + Receive<'a> {}
pub trait Client: ReceiveClient + TransmitClient {}

The HIL provides blanket implementations of these composite traits: any structure that implements the supertraits of a composite trait will automatically implement the composite trait.
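
For reference, a blanket implementation of this form might look like the following sketch (the actual definitions live in the kernel's UART HIL source):

// Illustrative sketch: any type implementing all of the supertraits
// automatically implements the composite Uart trait.
impl<'a, T: Configure + Configuration + Transmit<'a> + Receive<'a>> Uart<'a> for T {}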

6 Capsules

The Tock kernel provides two standard capsules for UARTs:

  • capsules::console::Console provides a userspace abstraction of a console. It allows userspace to print to and read from a serial port through a system call API.
  • capsules::virtual_uart provides a set of abstractions for virtualizing a single UART into many UARTs.

The structures in capsules::virtual_uart allow multiple clients to read from and write to a serial port. Write operations are interleaved at the granularity of transmit_buffer calls: each client's transmit_buffer call is printed contiguously, but consecutive calls to transmit_buffer from a single client may have other data inserted between them. When a client calls receive_buffer, it starts reading data from the serial port at that point in time, for the length of its request. If multiple clients make receive_buffer calls that overlap with one another, they each receive copies of the received data.

Suppose, for example, that there are two clients. One of them calls receive_buffer for 8 bytes. A user starts typing "1234567890" at the console. After the third byte, another client calls receive_buffer for 4 bytes. After the user types "7", the second client will receive a received_buffer callback with a buffer containing "4567". After the user types "8", the first client will receive a callback with a buffer containing "12345678". If the second client then calls receive_buffer with a 1-byte buffer, it will receive "9". It never sees "8", since that has been consumed by the time it makes this second receive call.

7 Authors' Address

Philip Levis
409 Gates Hall
Stanford University
Stanford, CA 94305
USA
pal@cs.stanford.edu

Leon Schuermann <leon@is.currently.online>

Userspace Readable Allow System Call

TRD: XXX
Working Group: Kernel
Type: Documentary
Status: Draft
Author: Alistair Francis
Draft-Created: June 17, 2021
Draft-Modified: Sep 8, 2021
Draft-Version: 2
Draft-Discuss: tock-dev@googlegroups.com

Abstract

This document describes the userspace readable allow system call application binary interface (ABI) between user space processes and the Tock kernel for 32-bit ARM Cortex-M and RISC-V RV32I platforms.

This is an extension of the allow calls specified in TRD 104.

1 Introduction

In normal use, userspace does not access a buffer that has been shared with the kernel with a Read-Write Allow call. This reading restriction exists because the contents of the buffer may be in an intermediate state and so not consistent with expected data models. Ensuring every system call driver maintains consistency in the presence of arbitrary userspace reads is too great a programming burden for an unintended use case.

However, there are cases where userspace needs to read a buffer without first revoking it from the kernel with a Read-Write Allow, because the cost of an additional Read-Write Allow system call is an unacceptable overhead for accessing the data.

Instead, capsules that support the userspace readable allow call can communicate with applications without buffers needing to be re-allowed. For example, a capsule might want to report statistics to a userspace app. It could do this by letting the app perform a userspace readable allow call to allocate a buffer. The capsule can then write statistics to the buffer, and the app can read them at any time.

The userspace readable allow system call gives userspace read-only access to a buffer that is writeable by the kernel.

2 System Call API

2.1 Userspace Readable Allow (Class ID: 7)

The userspace readable allow syscall follows the same expectations and requirements as described for the Read-Write Allow syscall in TRD104 Section 4.4, with the exception that apps are explicitly allowed to read buffers that have been passed to the kernel.

The register arguments for Userspace Readable Allow system calls are as follows. The registers r0-r3 correspond to r0-r3 on CortexM and a0-a3 on RISC-V.

Argument         Register
Driver number    r0
Buffer number    r1
Address          r2
Size             r3

The Tock kernel MUST check that the passed buffer is contained within the calling process's writeable address space. Every byte of a passed buffer must be readable and writeable by the process. Zero-length buffers may therefore have arbitrary addresses. If the passed buffer is not completely contained within the calling process's writeable address space, the kernel MUST return a failure result with an error code of INVALID. The buffer number specifies which buffer this is; a driver may support multiple allowed buffers.

The return variants for Userspace Readable Allow system calls are Failure with 2 u32 and Success with 2 u32. In both cases, Argument 0 contains an address and Argument 1 contains a length. When a driver implementing the Userspace Readable Allow system call returns a failure result, it MUST return the same address and length as those that were passed in the call. When a driver implementing the Userspace Readable Allow system call returns a success result, the returned address and length MUST be those that were passed in the previous call, unless this is the first call. On the first successful invocation of a particular Userspace Readable Allow system call, a driver implementation MUST return address 0 and size 0.

The syscall class ID is shown below:

Syscall Class               Syscall Class Number
Userspace Readable Allow    7

The standard access model for userspace readable allowed buffers is that userspace can read from a buffer while the kernel can read or write. Synchronisation methods are required to ensure data consistency but are implementation specific.

Simultaneous accesses to a buffer from both userspace and the kernel can cause userspace to read inconsistent data if not implemented properly. For example, a userspace app could read partially written data, resulting in obscure timing bugs that are hard to detect. Because of this, each capsule using the userspace readable allow mechanism MUST document, in a Draft or Final Documentary TRD, how it ensures userspace always reads consistent data from a userspace readable buffer.

Finally, because a process conceptually relinquishes write access to a buffer when it makes a userspace readable allow call with it, a userspace API MUST NOT assume or rely on a process writing an allowed buffer. If userspace needs to write to a buffer held by the kernel, it MUST first regain access to it by calling the corresponding Userspace Readable Allow. A userspace API MAY allow a process to read an allowed buffer, but if it does, it must document a consistency mechanism.

One example approach to ensure that userspace reads of a data object are consistent is to use a monotonic counter. Every time the kernel writes the data object, it increments the counter. If userspace reads the counter, reads the data object, then reads the counter again and finds it unchanged, it knows the object was not modified mid-read. If the counter changed, it restarts the read of the data object. This approach is simple, but it makes reading the data object take a variable amount of time and is theoretically vulnerable to starvation.

An example of reading a monotonic counter from userspace would look like this:

  // Reference to the readable-allow'd buffer (shared via allow_userspace_readable())
  volatile uint32_t* ptr;
  uint32_t counter;
  uint32_t my_data0, my_data1;

  do {
    // Read the current counter value
    counter = ptr[0];

    // Read in the data
    my_data0 = ptr[1];
    my_data1 = ptr[2];

    // Only exit the loop if counter and ptr[0] are the same
  } while (counter != ptr[0]);

where the counter is incremented on every context switch to userspace.

3 libtock-c Userspace Library Methods

3.1 Userspace Readable Allow

The userspace readable allow system call class is how a userspace process shares with the kernel a buffer that the kernel can read and write while the process retains read-only access to it.

The userspace readable allow system call has this function prototype:

typedef struct {
  bool success;
  void* ptr;
  size_t size;
  tock_error_t error;
} userspace_readable_allow_return_t;

userspace_readable_allow_return_t allow_userspace_readable(uint32_t driver, uint32_t allow, volatile void* ptr, size_t size);

The success field indicates whether the call succeeded. If it failed, the error code is stored in error. If it succeeded, the value in error is undefined. ptr and size contain the pointer and size of the passed buffer.

4 Author's Address

Alistair Francis alistair.francis@wdc.com