Optimising PineTime’s Display Driver with Rust and Mynewt

Simple tweaks like Batched Updates and Non-Blocking SPI can have a huge impact on rendering performance…

PineTime Smart Watch has been an awesome educational tool for teaching embedded coding with Rust and Mynewt OS… Check out PineTime articles #1, #2 and #3

But stare closely at the video demos in the articles… You’ll realise that the rendering of graphics on PineTime’s LCD display looks sluggish.

Can we expect speedy screen updates from a $20 smart watch… Powered by a Nordic nRF52832 Microcontroller that drives an ST7789 Display Controller over SPI?

Yes we can! Check the rendering performance of Rust and Mynewt OS on PineTime, before and after optimisation…

[Watch the video on YouTube]

Before and after optimising PineTime’s display driver

Today we’ll learn how we optimised the PineTime Display Driver to render text and graphics in sub-seconds…

  1. We group the pixels to be rendered into rows and blocks. This allows graphics and text to be rendered in fewer SPI operations.
  2. We changed Blocking SPI operations to Non-Blocking SPI operations. This lets the Rust rendering functions run concurrently with the SPI operations. (Think of a graphics rendering pipeline)

Rendering PineTime Graphics Pixel by Pixel

Let’s look at a simple example to understand how the [embedded-graphics] and [st7735-lcd] crates work together to render graphics on PineTime’s LCD display. This code creates a rectangle with [embedded-graphics] and renders the rectangle to the [st7735-lcd] display…

// Create black background rectangle
let background = Rectangle::<Rgb565>
    ::new( Coord::new( 0, 0 ), Coord::new( 239, 239 ) )   //  From (0, 0) to (239, 239)
    .fill( Some( Rgb565::from(( 0x00, 0x00, 0x00 )) ) );  //  Fill with Black
// Draw background rectangle to LCD display
DISPLAY.draw(background);

From https://github.com/lupyuen/piet-embedded/blob/master/piet-embedded-graphics/src/display.rs

When we trace the SPI requests generated by the [st7735-lcd] driver, we see lots of repetition…

SPI Log                  | Remarks
-------------------------|-----------------------------------------------------------
spi cmd 2a               | Set Address Window Columns (CASET)
spi data 00 00 00 00     | st7735_lcd::draw() → set_pixel() → set_address_window()
                         | Start Col: 0, End Col: 0
-------------------------|-----------------------------------------------------------
spi cmd 2b               | Set Address Window Rows (RASET)
spi data 00 00 00 00     | st7735_lcd::draw() → set_pixel() → set_address_window()
                         | Start Row: 0, End Row: 0
-------------------------|-----------------------------------------------------------
spi cmd 2c               | Write Pixels (RAMWR)
spi data f8 00           | st7735_lcd::draw() → set_pixel()
                         | Pixel Color: f8 00 (2 bytes per pixel)
-------------------------|-----------------------------------------------------------
spi cmd 2a               | Set Address Window Columns (CASET)
spi data 00 01 00 01     | st7735_lcd::draw() → set_pixel() → set_address_window()
                         | Start Col: 1, End Col: 1
-------------------------|-----------------------------------------------------------
spi cmd 2b               | Set Address Window Rows (RASET)
spi data 00 00 00 00     | st7735_lcd::draw() → set_pixel() → set_address_window()
                         | Start Row: 0, End Row: 0
-------------------------|-----------------------------------------------------------
spi cmd 2c               | Write Pixels (RAMWR)
spi data f8 00           | st7735_lcd::draw() → set_pixel()
                         | Pixel Color: f8 00 (2 bytes per pixel)

From https://github.com/lupyuen/stm32bluepill-mynewt-sensor/blob/pinetime/logs/spi-blocking.log

(The SPI log was obtained by uncommenting this code)

For each pixel in the rectangle, the display driver is setting the X and Y coordinates of each pixel and setting the colour of each pixel… Pixel by pixel! (0, 0), (0, 1), (0, 2), …

That’s not efficient for rendering graphics, pixel by pixel… Why are [embedded-graphics] and [st7735-lcd] doing that?

That’s because [embedded-graphics] was designed to run on highly-constrained microcontrollers with very little RAM… Think STM32 Blue Pill, which has only 20 KB RAM! That’s too little RAM for rendering rectangles and other graphics into RAM and copying the rendered RAM bitmap to the display. How does [embedded-graphics] render graphics?

By using Rust Iterators! Every graphic object to be rendered (rectangles, circles, even text) is transformed by [embedded-graphics] into a Rust Iterator that returns the (X, Y) coordinates of each pixel and its colour. This requires very little RAM because the pixel information is computed on the fly, only when the Iterator needs to return the next pixel.
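
Here’s a minimal sketch of what “computed on the fly” means, using the same [embedded-graphics] API as the code below (so treat it as an illustration, not the actual driver code)… The rectangle itself behaves as a Pixel Iterator, so we may walk its pixels without ever allocating a bitmap…

// Create a 10x10 red rectangle... Nothing is rendered or stored yet.
let rect = Rectangle::<Rgb565>
    ::new( Coord::new( 0, 0 ), Coord::new( 9, 9 ) )       //  From (0, 0) to (9, 9)
    .fill( Some( Rgb565::from(( 0xff, 0x00, 0x00 )) ) );  //  Fill with Red
// Walk the rectangle's Pixel Iterator... Each pixel's coordinates and colour are
// computed only when the Iterator is asked for the next pixel.
let mut count = 0;
for Pixel(_coord, _color) in rect {
    count += 1;  //  We could render the pixel here instead of counting it
}
// count should now be 100 (10 x 10 pixels)... Yet we never stored 100 pixels in RAM.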

Rendering a Pixel Iterator to the display is really easy and doesn’t need much RAM, like this…

/// Draw the graphic item (e.g. rectangle) to the display, pixel by pixel
fn draw<T>(&mut self, item_pixels: T)
where T: IntoIterator<Item = Pixel<Rgb565>> {
    // For every pixel in the graphic item...
    for Pixel(coord, color) in item_pixels {
        // Set the pixel color.
        self.set_pixel(coord.0 as u16, coord.1 as u16, color.0)
            .expect("pixel write failed");
    }
}

From https://github.com/lupyuen/st7735-lcd-batch-rs/blob/master/src/lib.rs

Upon inspecting the set_pixel function that’s called for each pixel, we see this…

/// Sets a pixel color at the given coords.
pub fn set_pixel(&mut self, x: u16, y: u16, color: u16) -> Result<(), ()> {
    self.set_address_window(x, y, x, y)?;           //  Send SPI command to set the pixel (X, Y) coordinates
    self.write_command(Instruction::RAMWR, None)?;
    self.write_word(color)?;                        //  Send SPI command to set the pixel color
    Ok(())
}

From https://github.com/lupyuen/st7735-lcd-batch-rs/blob/master/src/lib.rs

A-ha! We have discovered the code that creates all the repeated SPI requests for setting the (X, Y) coordinates and colour of each pixel!

Instead of updating the LCD display pixel by pixel, can we batch the pixels together and blast the entire batch of pixels in a single SPI request?

Digging into the [st7735-lcd] display driver code, we see this clue…

/// Sets pixel colors at the given drawing window
pub fn set_pixels<P: IntoIterator<Item = u16>>(&mut self, sx: u16, sy: u16, ex: u16, ey: u16, colors: P) -> Result<(), ()> {
    self.set_address_window(sx, sy, ex, ey)?;
    self.write_pixels(colors)?;
    Ok(())
}

/// Writes pixel colors sequentially into the current drawing window
pub fn write_pixels<P: IntoIterator<Item = u16>>(&mut self, colors: P) -> Result<(), ()> {
    self.write_command(Instruction::RAMWR, None)?;
    for color in colors {
        self.write_word(color)?;
    }
    Ok(())
}

From https://github.com/lupyuen/st7735-lcd-batch-rs/blob/master/src/lib.rs

See the difference? The function set_pixels sets the pixel window to the region from (X Start, Y Start) to (X End, Y End)… Then it blasts a list of pixel colours to populate that entire window region!
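
Here’s a hedged sketch of how we could call set_pixels ourselves to fill a single row of 20 green pixels in one drawing window… (Assuming display refers to the [st7735-lcd] driver instance; the coordinates and the colour 87 e0 match the SPI log below)

// Fill columns 0 to 19 (0x13) of row 0 with green, in a single SPI address window.
let green: u16 = 0x87e0;  //  RGB565 green... The "87 e0" bytes in the SPI log below
display.set_pixels(
    0,  0,                              //  Start Column, Start Row
    19, 0,                              //  End Column (0x13), End Row
    core::iter::repeat(green).take(20)  //  20 pixel colours for the whole window
).expect("set_pixels failed");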

When we call set_pixels the SPI requests generated by the display driver would look like this… (Note the long lists of pixel colours)

SPI Log                  | Remarks
-------------------------|-----------------------------------------------------------
spi cmd 2a               | Set Address Window Columns (CASET)
spi data 00 00 00 13     | st7735_lcd::draw() → set_pixels() → set_address_window()
                         | Start Col: 0, End Col: 0x13
-------------------------|-----------------------------------------------------------
spi cmd 2b               | Set Address Window Rows (RASET)
spi data 00 00 00 00     | st7735_lcd::draw() → set_pixels() → set_address_window()
                         | Start Row: 0, End Row: 0
-------------------------|-----------------------------------------------------------
spi cmd 2c               | Write Pixels (RAMWR)
spi data                 | st7735_lcd::draw() → set_pixels() → write_pixels()
87 e0 87 e0 87 e0 87 e0  | Pixel Colors: 87 e0 87 e0 ... (2 bytes per pixel)
87 e0 87 e0 87 e0 87 e0  |
87 e0 87 e0 87 e0 87 e0  |
87 e0 87 e0 87 e0 87 e0  |
87 e0 87 e0 87 e0 87 e0  |
-------------------------|-----------------------------------------------------------
spi cmd 2a               | Set Address Window Columns (CASET)
spi data 00 14 00 27     | st7735_lcd::draw() → set_pixels() → set_address_window()
                         | Start Col: 0x14, End Col: 0x27
-------------------------|-----------------------------------------------------------
spi cmd 2b               | Set Address Window Rows (RASET)
spi data 00 00 00 00     | st7735_lcd::draw() → set_pixels() → set_address_window()
                         | Start Row: 0, End Row: 0
-------------------------|-----------------------------------------------------------
spi cmd 2c               | Write Pixels (RAMWR)
spi data                 | st7735_lcd::draw() → set_pixels() → write_pixels()
87 e0 87 e0 87 e0 87 e0  | Pixel Colors: 87 e0 87 e0 ... (2 bytes per pixel)
87 e0 87 e0 87 e0 87 e0  |
87 e0 87 e0 87 e0 87 e0  |
87 e0 87 e0 87 e0 87 e0  |
87 e0 87 e0 87 e0 87 e0  |

From https://github.com/lupyuen/stm32bluepill-mynewt-sensor/blob/pinetime/logs/spi-non-blocking.log

But will this really improve rendering performance? Let’s test this hypothesis the Lean and Agile Way by batching the pixels (in the simplest way possible), without disturbing the [embedded-graphics] and [st7735-lcd] code too much…


Batching PineTime Pixels into Rows and Blocks

Here’s our situation…

  1. [embedded-graphics] creates Rust Iterators for rendering graphic objects. Works with minimal RAM, but generates excessive SPI requests.
  2. PineTime’s Nordic nRF52832 microcontroller has 64 KB of RAM… Not quite sufficient to render the entire 240x240 screen into RAM (2 bytes of colour per pixel ✖ 240 rows ✖ 240 columns = 112.5 KB). RAM-based bitmap rendering is a no-go.

Is there a Middle Way… Keeping the RAM-efficient Rust Iterators, but getting the Iterators to return small batches of pixels (instead of individual pixels)? Let’s experiment with two very simple Rust Iterators: the Pixel Row Iterator and the Pixel Block Iterator!

Suppose we ask [embedded-graphics] to render this trapezoid shape with 10 pixels…

10 pixels from the rendered letter K

[embedded-graphics] returns a Pixel Iterator that generates the 10 pixels from left to right, top to bottom…

Zig-zag Pixel Iterator returned by [embedded-graphics]

Which needs 10 SPI requests to render, 1 pixel per SPI request. (Let’s count only the set colour requests)

Since the Pixel Iterator produces pixels row by row, let’s create a Pixel Row Iterator that returns pixels grouped by row…

Our Pixel Row Iterator returns 3 rows

Awesome! When we group the pixels into rows, we only need to make 3 SPI requests to render all 10 pixels!

Can we do better? What if we group consecutive rows of the same width into rectangular blocks… Creating a Pixel Block Iterator

Our Pixel Block Iterator returns 2 blocks

Yay! We have grouped 10 pixels into 2 blocks… Only 2 SPI requests to render all 10 pixels!

What’s the catch? How did we optimise 10 SPI requests into 2 SPI requests… Without sacrificing anything?

While grouping the pixels into rows and blocks, we actually use more RAM. Every time the Pixel Row Iterator returns the next row, it needs up to 8 bytes of temporary RAM storage (4 pixels with 2 colour bytes each).

And every time the Pixel Block Iterator returns the next block (max 8 pixels), it needs up to 16 bytes of temporary RAM storage. Which isn’t a lot of RAM, if we keep our block size small. Also the Iterator will reuse the storage for each block returned, so we won’t need to worry about storing 2 or more blocks returned by the Iterator.

This is the classical Space-Time Tradeoff in Computer Science… Sacrificing some storage space (RAM) to make things run faster.
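
To put rough numbers on this tradeoff, here’s a back-of-the-envelope sketch using the row and block limits that we’ll configure later in this article…

// RGB565 colours occupy 2 bytes per pixel.
const BYTES_PER_PIXEL: usize  = 2;
// Limits configured later in this article: 100 pixels per Pixel Row, 200 pixels per Pixel Block.
const MAX_ROW_PIXELS: usize   = 100;
const MAX_BLOCK_PIXELS: usize = 200;
// Extra RAM spent on batching... The row and block buffers are reused for every row and block.
const BATCH_RAM: usize = (MAX_ROW_PIXELS + MAX_BLOCK_PIXELS) * BYTES_PER_PIXEL;  //  600 bytes
// Compare that with the full-screen bitmap that won't fit into PineTime's 64 KB of RAM...
const FULL_FRAMEBUFFER: usize = 240 * 240 * BYTES_PER_PIXEL;  //  115,200 bytes (112.5 KB)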


Pixel Row and Pixel Block Iterators

Here’s the code for the Pixel Row Iterator that returns the next row of contiguous pixels…

/// Implement the Iterator for Pixel Rows.
/// P can be any Pixel Iterator (e.g. a rectangle).
impl<P: Iterator<Item = Pixel<Rgb565>>> Iterator for RowIterator<P> {
    /// This Iterator returns Pixel Rows
    type Item = PixelRow;

    /// Return the next Pixel Row of contiguous pixels on the same row
    fn next(&mut self) -> Option<Self::Item> {
        // Loop over all pixels until we have composed a Pixel Row, or we have run out of pixels.
        loop {
            // Get the next pixel.
            let next_pixel = self.pixels.next();
            match next_pixel {
                None => {  //  If no more pixels...
                    if self.first_pixel {
                        return None;  //  No pixels to group
                    }
                    // Else return previous pixels as row.
                    let row = PixelRow {
                        x_left:  self.x_left,
                        x_right: self.x_right,
                        y:       self.y,
                        colors:  self.colors.clone(),
                    };
                    self.colors.clear();
                    self.first_pixel = true;
                    return Some(row);
                }
                Some(Pixel(coord, color)) => {  //  If there is a pixel...
                    let x = coord.0 as u16;
                    let y = coord.1 as u16;
                    let color = color.0;
                    // Save the first pixel as the row start and handle next pixel.
                    if self.first_pixel {
                        self.first_pixel = false;
                        self.x_left  = x;
                        self.x_right = x;
                        self.y       = y;
                        self.colors.clear();
                        self.colors.push(color)
                            .expect("never");
                        continue;
                    }
                    // If this pixel is adjacent to the previous pixel, add to the row.
                    if x == self.x_right + 1 && y == self.y {
                        if self.colors.push(color).is_ok() {
                            // Don't add pixel if too many pixels in the row.
                            self.x_right = x;
                            continue;
                        }
                    }
                    // Else return previous pixels as row.
                    let row = PixelRow {
                        x_left:  self.x_left,
                        x_right: self.x_right,
                        y:       self.y,
                        colors:  self.colors.clone(),
                    };
                    self.x_left  = x;
                    self.x_right = x;
                    self.y       = y;
                    self.colors.clear();
                    self.colors.push(color)
                        .expect("never");
                    return Some(row);
                }
            }
        }
    }
}

/// A row of contiguous pixels
pub struct PixelRow {
    /// Start column number
    pub x_left: u16,
    /// End column number
    pub x_right: u16,
    /// Row number
    pub y: u16,
    /// List of pixel colours for the entire row
    pub colors: RowColors,
}

/// Iterator for each Pixel Row in the pixel data. A Pixel Row consists of contiguous pixels on the same row.
#[derive(Debug, Clone)]
pub struct RowIterator<P: Iterator<Item = Pixel<Rgb565>>> {
    /// Pixels to be batched into rows
    pixels: P,
    /// Start column number
    x_left: u16,
    /// End column number
    x_right: u16,
    /// Row number
    y: u16,
    /// List of pixel colours for the entire row
    colors: RowColors,
    /// True if this is the first pixel for the row
    first_pixel: bool,
}

Pixel Row Iterator. From https://github.com/lupyuen/piet-embedded/blob/master/piet-embedded-graphics/src/batch.rs

And here’s the code for the Pixel Block Iterator that returns the next block of contiguous rows of the same width. Turns out we only need to tweak the code above slightly to get what we need… Instead of iterating over pixels, we now iterate over rows…

/// Implement the Iterator for Pixel Blocks.
/// R can be any Pixel Row Iterator.
impl<R: Iterator<Item = PixelRow>> Iterator for BlockIterator<R> {
    /// This Iterator returns Pixel Blocks
    type Item = PixelBlock;

    /// Return the next Pixel Block of contiguous Pixel Rows with the same start and end column number
    fn next(&mut self) -> Option<Self::Item> {
        // Loop over all Pixel Rows until we have composed a Pixel Block, or we have run out of Pixel Rows.
        loop {
            // Get the next Pixel Row.
            let next_row = self.rows.next();
            match next_row {
                None => {  //  If no more Pixel Rows...
                    if self.first_row {
                        return None;  //  No rows to group
                    }
                    // Else return previous rows as block.
                    let row = PixelBlock {
                        x_left:   self.x_left,
                        x_right:  self.x_right,
                        y_top:    self.y_top,
                        y_bottom: self.y_bottom,
                        colors:   self.colors.clone(),
                    };
                    self.colors.clear();
                    self.first_row = true;
                    return Some(row);
                }
                Some(PixelRow { x_left, x_right, y, colors, .. }) => {  //  If there is a Pixel Row...
                    // Save the first row as the block start and handle next block.
                    if self.first_row {
                        self.first_row = false;
                        self.x_left   = x_left;
                        self.x_right  = x_right;
                        self.y_top    = y;
                        self.y_bottom = y;
                        self.colors.clear();
                        self.colors.extend_from_slice(&colors)
                            .expect("never");
                        continue;
                    }
                    // If this row is adjacent to the previous row and same size, add to the block.
                    if y == self.y_bottom + 1 && x_left == self.x_left && x_right == self.x_right {
                        // Don't add row if too many pixels in the block.
                        if self.colors.extend_from_slice(&colors).is_ok() {
                            self.y_bottom = y;
                            continue;
                        }
                    }
                    // Else return previous rows as block.
                    let row = PixelBlock {
                        x_left:   self.x_left,
                        x_right:  self.x_right,
                        y_top:    self.y_top,
                        y_bottom: self.y_bottom,
                        colors:   self.colors.clone(),
                    };
                    self.x_left   = x_left;
                    self.x_right  = x_right;
                    self.y_top    = y;
                    self.y_bottom = y;
                    self.colors.clear();
                    self.colors.extend_from_slice(&colors)
                        .expect("never");
                    return Some(row);
                }
            }
        }
    }
}

/// A block of contiguous pixel rows with the same start and end column number
pub struct PixelBlock {
    /// Start column number
    pub x_left: u16,
    /// End column number
    pub x_right: u16,
    /// Start row number
    pub y_top: u16,
    /// End row number
    pub y_bottom: u16,
    /// List of pixel colours for the entire block, row by row
    pub colors: BlockColors,
}

/// Iterator for each Pixel Block in the pixel data. A Pixel Block consists of contiguous Pixel Rows with the same start and end column number.
#[derive(Debug, Clone)]
pub struct BlockIterator<R: Iterator<Item = PixelRow>> {
    /// Pixel Rows to be batched into blocks
    rows: R,
    /// Start column number
    x_left: u16,
    /// End column number
    x_right: u16,
    /// Start row number
    y_top: u16,
    /// End row number
    y_bottom: u16,
    /// List of pixel colours for the entire block, row by row
    colors: BlockColors,
    /// True if this is the first row for the block
    first_row: bool,
}

Pixel Block Iterator. From https://github.com/lupyuen/piet-embedded/blob/master/piet-embedded-graphics/src/batch.rs

Combining the Pixel Row Iterator and the Pixel Block Iterator, we get the draw_blocks function that renders any [embedded-graphics] graphic object (including text) as pixel blocks…

/// Draw the pixels in the item as Pixel Blocks of contiguous Pixel Rows. The pixels are grouped by row then by block.
pub fn draw_blocks<SPI, DC, RST, T>(display: &mut ST7735<SPI, DC, RST>, item_pixels: T) -> Result<(), ()>
where SPI: spi::Write<u8>, DC: OutputPin, RST: OutputPin, T: IntoIterator<Item = Pixel<Rgb565>>, {
    // Get the pixels for the item to be rendered.
    let pixels = item_pixels.into_iter();
    // Batch the pixels into Pixel Rows.
    let rows = to_rows(pixels);
    // Batch the Pixel Rows into Pixel Blocks.
    let blocks = to_blocks(rows);
    // For each Pixel Block...
    for PixelBlock { x_left, x_right, y_top, y_bottom, colors, .. } in blocks {
        // Render the Pixel Block.
        display.set_pixels(
            x_left,
            y_top,
            x_right,
            y_bottom,
            colors)?;
    }
    Ok(())
}

/// Batch the pixels into Pixel Rows, which are contiguous pixels on the same row.
/// P can be any Pixel Iterator (e.g. a rectangle).
fn to_rows<P>(pixels: P) -> RowIterator<P>
where P: Iterator<Item = Pixel<Rgb565>>, {
    RowIterator::<P> {
        pixels,
        x_left: 0,
        x_right: 0,
        y: 0,
        colors: RowColors::new(),
        first_pixel: true,
    }
}

/// Batch the Pixel Rows into Pixel Blocks, which are contiguous Pixel Rows with the same start and end column number.
/// R can be any Pixel Row Iterator.
fn to_blocks<R>(rows: R) -> BlockIterator<R>
where R: Iterator<Item = PixelRow>, {
    BlockIterator::<R> {
        rows,
        x_left: 0,
        x_right: 0,
        y_top: 0,
        y_bottom: 0,
        colors: BlockColors::new(),
        first_row: true,
    }
}

Rendering a graphic object as Pixel Blocks. From https://github.com/lupyuen/piet-embedded/blob/master/piet-embedded-graphics/src/batch.rs

Thus we now render graphic objects as RAM-efficient chunks of pixels, instead of individual pixels. Middle Way found!


Test the Pixel Row and Pixel Block Iterators

Space-Time Tradeoff called and wants to know how much space we’ll be allocating to make things run faster…

The more RAM storage we allocate for batching pixels into rows and blocks, the fewer SPI requests we need to make. The code currently sets the limits at 100 pixels per row and 200 pixels per block…

/// Max number of pixels per Pixel Row
type MaxRowSize = heapless::consts::U100;
/// Max number of pixels per Pixel Block
type MaxBlockSize = heapless::consts::U200;
/// Consecutive color words for a Pixel Row
type RowColors = heapless::Vec::<u16, MaxRowSize>;
/// Consecutive color words for a Pixel Block
type BlockColors = heapless::Vec::<u16, MaxBlockSize>;

Pixel Row and Pixel Block Sizes. From https://github.com/lupyuen/piet-embedded/blob/master/piet-embedded-graphics/src/batch.rs

Note that the Iterators return the rows and blocks as [heapless] Vectors, which store their elements in fixed-size arrays… So we don’t rely on Heap Memory, which is harder to manage on embedded devices like PineTime.
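
In case [heapless] is new to you, here’s a tiny sketch of how these fixed-capacity Vectors behave… (Assuming the typenum-based heapless API that the code above uses)

use heapless::{ consts::U4, Vec };

// A Vector with capacity fixed at 4 elements, stored inline... No Heap Memory needed.
let mut colors: Vec<u16, U4> = Vec::new();
colors.push(0xf800).expect("push failed");  //  Ok... There's still space in the Vector
colors.push(0x07e0).expect("push failed");
assert_eq!(colors.len(), 2);
// Once all 4 slots are used, push() returns an Err... That's how our Pixel Row and
// Pixel Block Iterators detect that a row or block is full and should be returned.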

Any graphic object that’s 100 pixels wide (or smaller) will be batched efficiently into pixel rows and blocks. Like this square of width 90 pixels created with [embedded-graphics]…

// Create a square
let square = Rectangle::<Rgb565>
    ::new( Coord::new( 60, 60 ), Coord::new( 150, 150 ) )  //  From (60, 60) to (150, 150)
    .fill( Some( Rgb565::from(( 0x00, 0x00, 0xff )) ) );   //  Fill with Blue
// Draw square the new faster way, as Pixel Blocks
draw_blocks(&mut DISPLAY, square);

From https://github.com/lupyuen/piet-embedded/blob/master/piet-embedded-graphics/src/display.rs

Square of width 90 pixels from the render demo

When we trace the rendering of the square, we see this log of pixel blocks…

pixel block (60, 60), (150, 61)
pixel block (60, 62), (150, 63)
pixel block (60, 64), (150, 65)
...
pixel block (60, 148), (150, 149)
pixel block (60, 150), (150, 150)

From https://github.com/lupyuen/stm32bluepill-mynewt-sensor/blob/pinetime/logs/pixel-block.log

(The log was created by uncommenting this code)

Which means that we are indeed deconstructing the 90x90 square into 90x2 pixel blocks for efficient rendering.

💎 This deconstruction doesn’t work so well for a square that occupies the entire 240x240 PineTime screen. I’ll let you think… 1️⃣ Why this doesn’t work 2️⃣ A solution for rendering the huge square efficiently 😀


Non-Blocking SPI on PineTime with Mynewt OS

We could go ahead and run the Pixel Row and Pixel Block Iterators to measure the rendering time… But we won’t. We are now rendering the screen as chunks of pixels, transmitting a long string of pixel colours in a single SPI request…

However our SPI code in PineTime isn’t optimised to handle large SPI requests… Whenever it transmits an SPI request, it waits for the entire request to be transmitted before returning to the caller. This is known as Blocking SPI.

Here’s how we call hal_spi_txrx to transmit a Blocking SPI request in Rust with Mynewt OS…

// Write buf to SPI the blocking way.
hal::hal_spi_txrx(
    SPI_NUM,
    core::mem::transmute( buf ),  //  TX Buffer
    NULL,                         //  RX Buffer (don't receive)
    len);

From https://github.com/lupyuen/stm32bluepill-mynewt-sensor/blob/pinetime/rust/mynewt/src/spi.rs

Mynewt OS provides an efficient way to transmit SPI requests: Non-Blocking SPI. hal_spi_txrx_noblock doesn’t hold up the caller while transmitting the request. Instead, Mynewt calls our Callback Function when the request has been completed.

Here’s how we set up Non-Blocking SPI and call hal_spi_txrx_noblock

// Disable SPI port. TODO: Use safe wrapper to remove `unsafe`.
unsafe { hal::hal_spi_disable(SPI_NUM) };

// Configure SPI port for non-blocking SPI. TODO: Use safe wrapper to remove `unsafe`.
unsafe { hal::hal_spi_config(SPI_NUM, &mut SPI_SETTINGS) };
unsafe { hal::hal_spi_set_txrx_cb(
    SPI_NUM,
    Some( spi_noblock_handler ),  //  Will call spi_noblock_handler() after writing
    core::mem::transmute( &mut SPI_CALLBACK )
) };

// Enable SPI port. TODO: Use safe wrapper to remove `unsafe`.
let rc = unsafe { hal::hal_spi_enable(SPI_NUM) };
...
// Write buf to SPI the non-blocking way. Will call spi_noblock_handler() after writing. TODO: Use safe wrapper to remove `unsafe`.
unsafe { hal::hal_spi_txrx_noblock(
    SPI_NUM,
    core::mem::transmute( buf ),  //  TX Buffer
    NULL,                         //  RX Buffer (don't receive)
    len) };

From https://github.com/lupyuen/stm32bluepill-mynewt-sensor/blob/pinetime/rust/mynewt/src/spi.rs

spi_noblock_handler is our Callback Function in Rust. Mynewt won’t let us transmit a Non-Blocking SPI request while another is in progress, so our Callback Function needs to ensure that never happens. More about spi_noblock_handler in a while.

💎 What’s core::mem::transmute? We use this function from the Rust Core Library to cast pointer types when passing pointers and references from Rust to C. It’s similar to casting char * to void * in C.

Why don’t we need to specify the pointer type that we are casting to? Because the Rust Compiler performs Type Inference to deduce the pointer type.
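
Here’s a hedged sketch of that Type Inference in action, with a hypothetical C function (not the real Mynewt bindings)…

use core::ffi::c_void;

extern "C" {
    /// Hypothetical C function that takes a `void *` buffer, like the Mynewt SPI calls
    fn c_transmit(tx_buffer: *mut c_void, len: i32);
}

fn transmit(buf: &[u8]) {
    unsafe {
        // We never spell out the target pointer type... The Rust Compiler infers that
        // transmute must produce `*mut c_void`, because that's what c_transmit() expects.
        c_transmit(
            core::mem::transmute( buf.as_ptr() ),  //  Cast `*const u8` to `*mut c_void`
            buf.len() as i32
        );
    }
}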


Work Around an SPI Quirk

Bad News: Non-Blocking SPI doesn’t work 100% as advertised on the Nordic nRF52832 Microcontroller, the heart of PineTime. According to this note in Mynewt OS, Non-Blocking SPI on nRF52832 fails when we send a single byte over SPI.

But why would we send single-byte SPI requests anyway?

Remember this SPI log that we captured earlier? We seem to be sending single bytes very often: 2a, 2b and 2c, which are Command Bytes

SPI Log                  | Remarks
-------------------------|-----------------------------------------------------------
spi cmd 2a               | Set Address Window Columns (CASET)
spi data 00 00 00 13     | st7735_lcd::draw() → set_pixels() → set_address_window()
                         | Start Col: 0, End Col: 0x13
-------------------------|-----------------------------------------------------------
spi cmd 2b               | Set Address Window Rows (RASET)
spi data 00 00 00 00     | st7735_lcd::draw() → set_pixels() → set_address_window()
                         | Start Row: 0, End Row: 0
-------------------------|-----------------------------------------------------------
spi cmd 2c               | Write Pixels (RAMWR)
spi data                 | st7735_lcd::draw() → set_pixels() → write_pixels()
87 e0 87 e0 87 e0 87 e0  | Pixel Colors: 87 e0 87 e0 ... (2 bytes per pixel)
87 e0 87 e0 87 e0 87 e0  |
87 e0 87 e0 87 e0 87 e0  |
87 e0 87 e0 87 e0 87 e0  |
87 e0 87 e0 87 e0 87 e0  |
-------------------------|-----------------------------------------------------------
spi cmd 2a               | Set Address Window Columns (CASET)
spi data 00 14 00 27     | st7735_lcd::draw() → set_pixels() → set_address_window()
                         | Start Col: 0x14, End Col: 0x27
-------------------------|-----------------------------------------------------------
spi cmd 2b               | Set Address Window Rows (RASET)
spi data 00 00 00 00     | st7735_lcd::draw() → set_pixels() → set_address_window()
                         | Start Row: 0, End Row: 0
-------------------------|-----------------------------------------------------------
spi cmd 2c               | Write Pixels (RAMWR)
spi data                 | st7735_lcd::draw() → set_pixels() → write_pixels()
87 e0 87 e0 87 e0 87 e0  | Pixel Colors: 87 e0 87 e0 ... (2 bytes per pixel)
87 e0 87 e0 87 e0 87 e0  |
87 e0 87 e0 87 e0 87 e0  |
87 e0 87 e0 87 e0 87 e0  |
87 e0 87 e0 87 e0 87 e0  |

From https://github.com/lupyuen/stm32bluepill-mynewt-sensor/blob/pinetime/logs/spi-non-blocking.log

PineTime’s ST7789 Display Controller has an unusual SPI interface with a special pin: the Data/Command (DC) Pin. The display controller expects our microcontroller to set the DC Pin to Low when sending the Command Byte, and set the DC Pin to High when sending Data Bytes

// If this is a Command Byte, set DC Pin to low, else set DC Pin to high.
hal::hal_gpio_write(
    SPI_DC_PIN,
    if is_command { 0 }
    else { 1 }
);

From https://github.com/lupyuen/stm32bluepill-mynewt-sensor/blob/pinetime/rust/mynewt/src/spi.rs

Unfortunately our Command Bytes are single bytes, hence we see plenty of single-byte SPI requests. All because of the need to flip the DC Pin!

This complicates our SPI design, but let’s overcome this microcontroller quirk with good firmware… All single-byte SPI requests are now sent the Blocking way; other requests are sent the Non-Blocking way…

/// Semaphore that is signalled for every completed SPI request
static mut SPI_SEM: os::os_sem = fill_zero!(os::os_sem);

// Create the Semaphore that will be signalled when the SPI request has completed
unsafe { os::os_sem_init(&mut SPI_SEM, 0) };  //  Init to 0 tokens, so caller will block until SPI request is completed.

/// Perform non-blocking SPI write in Mynewt OS. Blocks until SPI write completes.
fn internal_spi_noblock_write(buf: &'static u8, len: i32, is_command: bool) -> MynewtResult<()> {
    // If this is a Command Byte, set DC Pin to low, else set DC Pin to high.
    unsafe { hal::hal_gpio_write(
        SPI_DC_PIN,
        if is_command { 0 }
        else { 1 }
    ) };
    // Set the SS Pin to low to start the transfer.
    unsafe { hal::hal_gpio_write(SPI_SS_PIN, 0) };
    if len == 1 {  //  If writing only 1 byte...
        // From https://github.com/apache/mynewt-core/blob/master/hw/mcu/nordic/nrf52xxx/src/hal_spi.c#L1106-L1118
        // There is a known issue in nRF52832 with sending 1 byte in SPIM mode that
        // it clocks out an additional byte. For this reason, let us use SPI mode for such a write.
        // Write the SPI byte the blocking way.
        unsafe { hal::hal_spi_txrx(
            SPI_NUM,
            core::mem::transmute( buf ),  //  TX Buffer
            NULL,                         //  RX Buffer (don't receive)
            len) };
    } else {  //  If writing more than 1 byte...
        // Write the SPI data the non-blocking way. Will call spi_noblock_handler() after writing.
        unsafe { hal::hal_spi_txrx_noblock(
            SPI_NUM,
            core::mem::transmute( buf ),  //  TX Buffer
            NULL,                         //  RX Buffer (don't receive)
            len) };
        // Wait for spi_noblock_handler() to signal that the SPI request has been completed. Timeout in 30 seconds.
        let timeout = 30_000;
        unsafe { os::os_sem_pend(&mut SPI_SEM, timeout * OS_TICKS_PER_SEC / 1000) };
    }
    // Set SS Pin to high to stop the transfer.
    unsafe { hal::hal_gpio_write(SPI_SS_PIN, 1) };
    Ok(())
}

From https://github.com/lupyuen/stm32bluepill-mynewt-sensor/blob/pinetime/rust/mynewt/src/spi.rs

The code uses a Semaphore SPI_SEM to wait for the Non-Blocking SPI operation to complete before proceeding. SPI_SEM is signalled by our Callback Function spi_noblock_handler like this…

/// Called by interrupt handler after Non-blocking SPI transfer has completed
extern "C" fn spi_noblock_handler(_arg: Ptr, _len: i32) {
    // Signal to internal_spi_noblock_write() that the SPI request has been completed.
    unsafe { os::os_sem_release(&mut SPI_SEM) };
}

From https://github.com/lupyuen/stm32bluepill-mynewt-sensor/blob/pinetime/rust/mynewt/src/spi.rs

Something smells fishy… Why are we now waiting for a Non-Blocking SPI request to complete?

Well, this happens when we do things the Lean and Agile Way… When we hit problems (like the single-byte SPI issue), we assess various simple solutions before we select and implement the right permanent fix. (And I don’t think we have found the right fix yet)

This Semaphore workaround also makes the function internal_spi_noblock_write easier to troubleshoot… Whether the SPI request consists of a single byte or multiple bytes, internal_spi_noblock_write will always wait for the SPI request to complete, instead of having diverging paths.

This story also highlights the benefit of building our Rust firmware on top of an established Real Time Operating System like Mynewt OS… We quickly discover platform quirks that others have experienced, so that we can avoid the same trap.


Render Graphics and Send SPI Requests Simultaneously on PineTime

Now we can send large SPI requests efficiently to PineTime’s LCD display. We are blocking on a Semaphore while waiting for the SPI request to be completed, which means that our CPU is actually free to do some other tasks while blocking.

Can we do some [embedded-graphics] rendering while waiting for the SPI requests to be completed?

Two problems with that…

  1. [embedded-graphics] creates its Rust Iterators and SPI requests in temporary RAM storage. To let [embedded-graphics] continue working, we need to copy the generated SPI requests into RAM before sending them.
  2. To perform [embedded-graphics] rendering independently from the SPI request transmission, we need a background task. The main task will render graphics with [embedded-graphics] (which is our current design), the background task will transmit SPI requests (this part is new).

Rendering graphics and transmitting SPI requests at the same time on PineTime. Yes this is the Producer-Consumer Pattern found in many programs.

Fortunately Mynewt OS has everything we need to experiment with this multitasking…

  1. Mynewt’s Mbuf Chains may be used to copy SPI requests easily into a RAM space that’s specially managed by Mynewt OS
  2. Mynewt’s Mbuf Queues may be used to enqueue the SPI requests for transmission by the background task
  3. Mynewt lets us create a background task to send SPI requests from the Mbuf Queue

Let’s look at Mbuf Chains, Mbuf Queues and Multitasking in Mynewt OS.


Buffer SPI Requests with Mbuf Chains in Mynewt OS

In the Unix world of Network Drivers, Mbufs (short for Memory Buffers) are often used to store network packets. Mbufs were created to make common networking stack operations (like stripping and adding protocol headers) efficient and as copy-free as possible. (Mbufs are also used by the NimBLE Bluetooth Stack, which we have seen in the first PineTime article)

What makes Mbufs so versatile? How are they different from Heap Storage?

When handling Network Packets (and SPI Requests), we need a quick way to allocate and deallocate buffers of varying sizes. When we request memory from Heap Storage, we get a contiguous block of RAM that’s exactly what we need (or maybe more). But it causes our Heap Storage to become fragmented and poorly utilised.

Chain of Mbufs. From https://mynewt.apache.org/latest/os/core_os/mbuf/mbuf.html

With Mbufs, we get a chain (linked list) of memory blocks instead. We can’t be sure how much RAM we’ll get in each block, but we can be sure that the total RAM in the entire chain meets what we need. (The diagram above shows how Mynewt OS allocates Mbuf Chains in a compact way using fixed-size Mbuf blocks)

Isn’t it harder to code with a chain of memory blocks? Yes, it makes coding more cumbersome, but Mbuf Chains will utilise our tiny pool of RAM on PineTime much better than a Heap Storage allocator.

With Rust and Mynewt OS, here’s how we allocate an Mbuf Chain and append our SPI request to the Mbuf Chain…

// Allocate a new mbuf chain to copy the data to be sent.
let len = data.len() as u16;  //  Data length
let mbuf = unsafe { os::os_msys_get_pkthdr(len, 0) };

// Append the Data Bytes to the mbuf chain. This may increase the number of mbufs in the chain.
unsafe { os::os_mbuf_append(
    mbuf,
    core::mem::transmute( data.as_ptr() ),  //  Data to be appended
    data.len() as u16                       //  Data length
) };

From https://github.com/lupyuen/stm32bluepill-mynewt-sensor/blob/pinetime/rust/mynewt/src/spi.rs

We may call os_mbuf_append as often as we like to append data to our Mbuf Chain, which keeps growing and growing… (Unlike a Heap Storage block, whose size is fixed once allocated.) So cool!
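
Here’s a hedged sketch (with hypothetical buffers, reusing the os_mbuf_append binding above) of appending a Command Byte and then some pixel data to the same Mbuf Chain…

// Hypothetical buffers: a RAMWR Command Byte followed by two green pixels.
let command: [u8; 1] = [ 0x2c ];
let pixels:  [u8; 4] = [ 0x87, 0xe0, 0x87, 0xe0 ];
// Append the Command Byte to the mbuf chain...
unsafe { os::os_mbuf_append(
    mbuf,
    core::mem::transmute( command.as_ptr() ),  //  Data to be appended
    command.len() as u16                       //  Data length
) };
// Then append the pixel colours... The chain simply grows by adding more mbufs if needed.
unsafe { os::os_mbuf_append(
    mbuf,
    core::mem::transmute( pixels.as_ptr() ),
    pixels.len() as u16
) };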

Here’s how we walk the Mbuf Chain to transmit each block of SPI data in the chain, and deallocate the chain when we’re done…

/// Callback for the event that is triggered when an SPI request is added to the queue.
extern "C" fn spi_event_callback(_event: *mut os::os_event) {
    loop {  //  For each mbuf chain found...
        // Get the next SPI request, stored as an mbuf chain.
        let om = unsafe { os::os_mqueue_get(&mut SPI_DATA_QUEUE) };
        if om.is_null() { break; }

        // Send the mbuf chain.
        let mut m = om;
        let mut first_byte = true;
        while !m.is_null() {  //  For each mbuf in the chain...
            let data = unsafe { (*m).om_data };  //  Fetch the data
            let len  = unsafe { (*m).om_len };   //  Fetch the length
            if first_byte {  //  First byte of the mbuf chain is always the Command Byte
                first_byte = false;
                // Write the Command Byte.
                internal_spi_noblock_write(
                    unsafe { core::mem::transmute(data) },
                    1 as i32,  //  Write 1 Command Byte
                    true
                ).expect("int spi fail");
                // These commands require a delay. TODO: Move to caller
                if unsafe { *data } == 0x01 ||  //  SWRESET
                   unsafe { *data } == 0x11 ||  //  SLPOUT
                   unsafe { *data } == 0x29 {   //  DISPON
                    delay_ms(200);
                }
                // Then write the Data Bytes.
                internal_spi_noblock_write(
                    unsafe { core::mem::transmute(data.add(1)) },
                    (len - 1) as i32,  //  Then write 0 or more Data Bytes
                    false
                ).expect("int spi fail");
            } else {  //  Second and subsequent mbufs in the chain are all Data Bytes
                // Write the Data Bytes.
                internal_spi_noblock_write(
                    unsafe { core::mem::transmute(data) },
                    len as i32,  //  Write all Data Bytes
                    false
                ).expect("int spi fail");
            }
            m = unsafe { (*m).om_next.sle_next };  //  Fetch the next mbuf in the chain.
        }
        // Free the entire mbuf chain.
        unsafe { os::os_mbuf_free_chain(om) };

        // Release the throttle semaphore to allow the next request to be queued.
        let rc = unsafe { os::os_sem_release(&mut SPI_THROTTLE_SEM) };
        assert_eq!(rc, 0, "sem fail");
    }
}

From https://github.com/lupyuen/stm32bluepill-mynewt-sensor/blob/pinetime/rust/mynewt/src/spi.rs

Note that we don’t transmit the entire Mbuf Chain of SPI data in a single SPI operation… We transmit the SPI data one Mbuf at a time. This works fine for PineTime’s ST7789 Display Controller. And with limited RAM, it’s best not to make an extra copy of the entire Mbuf Chain before transmitting.


Enqueue SPI Requests with Mbuf Queues in Mynewt OS

After [embedded-graphics] has completed its rendering, we get an Mbuf Chain that contains the SPI request that will be transmitted to the PineTime Display Controller by the background task. Now we need a way to enqueue the SPI requests (Mbuf Chains) produced by [embedded-graphics]…

Enqueuing SPI requests in an MBuf Queue before transmitting

When we use Mbuf Chains in Mynewt OS, we get Mbuf Queues for free!

Check the function spi_event_callback from the last code snippet… It’s actually calling os_mqueue_get to read SPI requests (Mbuf Chains) from an Mbuf Queue named SPI_DATA_QUEUE.

Adding an SPI request to an Mbuf Queue is done by calling os_mqueue_put in Rust like this…

/// Enqueue request for non-blocking SPI write. Returns without waiting for write to complete.
/// Request must have a Command Byte, followed by optional Data Bytes.
fn spi_noblock_write(cmd: u8, data: &[u8]) -> MynewtResult<()> {
    // Throttle the number of queued SPI requests.
    let timeout = 30_000;
    unsafe { os::os_sem_pend(&mut SPI_THROTTLE_SEM, timeout * OS_TICKS_PER_SEC / 1000) };

    // Allocate a new mbuf chain to copy the data to be sent.
    let len = data.len() as u16 + 1;  //  1 Command Byte + Multiple Data Bytes
    let mbuf = unsafe { os::os_msys_get_pkthdr(len, 0) };
    if mbuf.is_null() {  //  If out of memory, quit.
        unsafe { os::os_sem_release(&mut SPI_THROTTLE_SEM) };  //  Release the throttle
        return Err(MynewtError::SYS_ENOMEM);
    }

    // Append the Command Byte to the mbuf chain.
    let rc = unsafe { os::os_mbuf_append(
        mbuf,
        core::mem::transmute(&cmd),
        1
    ) };
    if rc != 0 {  //  If out of memory, quit.
        unsafe { os::os_mbuf_free_chain(mbuf) };               //  Deallocate the mbuf chain
        unsafe { os::os_sem_release(&mut SPI_THROTTLE_SEM) };  //  Release the throttle
        return Err(MynewtError::SYS_ENOMEM);
    }

    // Append the Data Bytes to the mbuf chain. This may increase the number of mbufs in the chain.
    let rc = unsafe { os::os_mbuf_append(
        mbuf,
        core::mem::transmute(data.as_ptr()),
        data.len() as u16
    ) };
    if rc != 0 {  //  If out of memory, quit.
        unsafe { os::os_mbuf_free_chain(mbuf) };               //  Deallocate the mbuf chain
        unsafe { os::os_sem_release(&mut SPI_THROTTLE_SEM) };  //  Release the throttle
        return Err(MynewtError::SYS_ENOMEM);
    }

    // Add the mbuf to the SPI Mbuf Queue and trigger an event in the SPI Event Queue.
    let rc = unsafe { os::os_mqueue_put(
        &mut SPI_DATA_QUEUE,
        &mut SPI_EVENT_QUEUE,
        mbuf
    ) };
    if rc != 0 {  //  If the queueing failed, quit.
        unsafe { os::os_mbuf_free_chain(mbuf) };               //  Deallocate the mbuf chain
        unsafe { os::os_sem_release(&mut SPI_THROTTLE_SEM) };  //  Release the throttle
        return Err(MynewtError::SYS_EUNKNOWN);
    }
    Ok(())
}

From https://github.com/lupyuen/stm32bluepill-mynewt-sensor/blob/pinetime/rust/mynewt/src/spi.rs

spi_noblock_write is the complete Rust function we use in our PineTime firmware to 1️⃣ Allocate an Mbuf Chain 2️⃣ Append the SPI request to the Mbuf Chain 3️⃣ Add the Mbuf Chain to the Mbuf Queue. Yep it’s that easy to use Mbuf Chains and Mbuf Queues in Mynewt OS!
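
For example, here’s a hedged sketch (not the actual call site in the display driver) of how spi_noblock_write could enqueue the CASET request that we saw in the SPI log…

// Enqueue "spi cmd 2a" followed by "spi data 00 00 00 13" as a single SPI request...
// Returns immediately, without waiting for the SPI write to complete.
spi_noblock_write(
    0x2a,                        //  Command Byte: CASET (Set Address Window Columns)
    &[ 0x00, 0x00, 0x00, 0x13 ]  //  Data Bytes: Start Col 0, End Col 0x13
).expect("spi write fail");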


Transmit Enqueued SPI Requests with Mynewt Background Task

Here comes the final part of our quick experiment… Create a background task in Mynewt to read the Mbuf Queue and transmit each SPI request to PineTime’s Display Controller…

Transmitting SPI Requests enqueued in an Mbuf Queue

With Rust and Mynewt OS, here’s how we create a background task SPI_TASK that runs the neverending function spi_task_func

/// Mbuf Queue that contains the SPI data packets to be sent
static mut SPI_DATA_QUEUE: os::os_mqueue = fill_zero!(os::os_mqueue);
/// Event Queue that contains the pending non-blocking SPI requests
static mut SPI_EVENT_QUEUE: os::os_eventq = fill_zero!(os::os_eventq);

// Create Event Queue and Mbuf (Data) Queue that will store the SPI requests
unsafe { os::os_eventq_init(&mut SPI_EVENT_QUEUE) };
unsafe { os::os_mqueue_init(
    &mut SPI_DATA_QUEUE,
    Some( spi_event_callback ),  //  Callback to handle the next request in the queue
    NULL
) };

// Create a task to send SPI requests sequentially from the SPI Event Queue and Mbuf Queue
os::task_init(                   //  Create a new task and start it...
    unsafe { &mut SPI_TASK },    //  Task object will be saved here
    &init_strn!( "spi" ),        //  Name of task
    Some( spi_task_func ),       //  Function to execute when task starts
    NULL,                        //  Argument to be passed to above function
    10,                          //  Task priority: highest is 0, lowest is 255 (main task is 127)
    os::OS_WAIT_FOREVER as u32,  //  Don't do sanity / watchdog checking
    unsafe { &mut SPI_TASK_STACK },  //  Stack space for the task
    SPI_TASK_STACK_SIZE as u16   //  Size of the stack (in 4-byte units)
)?;                              //  `?` means check for error
Ok(())
...
/// SPI Task Function. Execute sequentially each SPI request posted to our Event Queue. When there are no requests to process, block until one arrives.
extern "C" fn spi_task_func(_arg: Ptr) {
    loop {
        // Forever read SPI requests and execute them. Will call spi_event_callback().
        os::eventq_run(
            unsafe { &mut SPI_EVENT_QUEUE }
        ).expect("eventq fail");
        // Tickle the watchdog so that the Watchdog Timer doesn't expire. Mynewt assumes the process is hung if we don't tickle the watchdog.
        unsafe { hal_watchdog_tickle() };
    }
}

From https://github.com/lupyuen/stm32bluepill-mynewt-sensor/blob/pinetime/rust/mynewt/src/spi.rs

(Note that we’re calling Mynewt to create background tasks instead of using Rust multitasking, because Mynewt controls all our tasks on PineTime)

spi_task_func runs forever, blocking until there’s a request in the Mbuf Queue, and executes the request. The request is handled by the function spi_event_callback that we have seen earlier. (How does Mynewt know that it should invoke spi_event_callback? It’s defined in the call to os_mqueue_init above.)

hal_watchdog_tickle appears oddly in the code… What is that?

Mynewt keeps watch over our background task to make sure that it’s not hung… That’s why it’s called a Watchdog.

To prevent Mynewt from raising a Watchdog Exception, we need to tell the Watchdog periodically that we are OK… By calling hal_watchdog_tickle


Optimised PineTime Display Driver… Assemble!

This has been a lengthy article about a quick (two-week) experiment in optimising the display rendering for PineTime. Here’s how we put everything together…

1️⃣ We have batched the rendering of pixels by rows and by blocks. This batching code has been added to the [piet-embedded] crate that calls [embedded-graphics] to render 2D graphics and text on our PineTime.

2️⃣ The code that demos the batching of pixels is also in the [piet-embedded] crate. Batching is enabled when we enable the noblock_spi feature in [piet-embedded]’s Cargo.toml like this…

[features]
default = ["noblock_spi"] # Render graphics by batching pixels into rows and blocks
noblock_spi = []

From https://github.com/lupyuen/piet-embedded/blob/master/piet-embedded-graphics/Cargo.toml

3️⃣ noblock_spi is referenced in the demo code like this…

pub fn draw_item<T>(item: T)
where T: IntoIterator<Item = Pixel<Rgb565>> {
    #[cfg(not(feature = "noblock_spi"))]  //  If batching is disabled...
    unsafe { DISPLAY.draw(item) };        //  Draw text or graphics the usual slow way

    #[cfg(feature = "noblock_spi")]       //  If batching is enabled...
    super::batch::draw_blocks(            //  Draw text or graphics the new faster way, as pixel blocks
        unsafe { &mut DISPLAY },
        item
    ).expect("draw blocks fail");
}

From https://github.com/lupyuen/piet-embedded/blob/master/piet-embedded-graphics/src/display.rs

4️⃣ We have implemented Non-Blocking SPI with Mbuf Chains and Mbuf Queues (plus a background task). The code is located in the [mynewt] crate.

5️⃣ We have forked the original [st7735-lcd] display driver into [st7735-lcd-batch] to test Non-Blocking SPI. Non-Blocking SPI is enabled when we enable the noblock_spi feature in [st7735-lcd-batch]’s Cargo.toml

[features]
default = ["graphics", "noblock_spi"] #### Render graphics with Non-Blocking SPI
graphics = ["embedded-graphics"]
noblock_spi = []

From https://github.com/lupyuen/st7735-lcd-batch-rs/blob/master/Cargo.toml

6️⃣ noblock_spi is referenced by [st7735-lcd-batch] like this…

#[cfg(feature = "noblock_spi")]  //  If non-blocking SPI is enabled...
fn write_command(&mut self, command: Instruction, params: Option<&[u8]>) -> Result<(), ()> {
    // Write the Command Byte.
    mynewt::spi::spi_noblock_write_command(
        command.to_u8().unwrap()
    ).expect("spi cmd fail");
    // Then write the Data Bytes.
    if params.is_some() {
        mynewt::spi::spi_noblock_write_data(
            params.unwrap()
        ).expect("spi data fail");
    }
    Ok(())
}

#[cfg(not(feature = "noblock_spi"))]  //  Previously with blocking SPI...
fn write_command(&mut self, command: Instruction, params: Option<&[u8]>) -> Result<(), ()> {
    self.dc.set_low().map_err(|_| ())?;
    self.spi.write(&[command.to_u8().unwrap()]).map_err(|_| ())?;
    if params.is_some() {
        self.write_data(params.unwrap())?;
    }
    Ok(())
}

#[cfg(feature = "noblock_spi")]  //  If non-blocking SPI is enabled...
fn write_data(&mut self, data: &[u8]) -> Result<(), ()> {
    // Write the data bytes.
    mynewt::spi::spi_noblock_write_data(
        data
    ).expect("spi data fail");
    Ok(())
}

#[cfg(not(feature = "noblock_spi"))]  //  Previously with blocking SPI...
fn write_data(&mut self, data: &[u8]) -> Result<(), ()> {
    self.dc.set_high().map_err(|_| ())?;
    self.spi.write(data).map_err(|_| ())
}

From https://github.com/lupyuen/st7735-lcd-batch-rs/blob/master/src/lib.rs

(Plus a few other spots in that file)

We have attempted to optimise the display driver for PineTime… But it’s far from optimal!

There are a few parameters that we may tweak to make PineTime render faster… Just be mindful that some of these tweaks will take up precious RAM…

1️⃣ MaxRowSize: Maximum number of pixels per batched row. Currently set to 100.

2️⃣ MaxBlockSize: Maximum number of pixels per batched block. Currently set to 200.

3️⃣ SPI_THROTTLE_SEM: How many SPI requests are allowed to be enqueued before blocking the rendering task. Currently set to 2. (See the sketch after this list)

4️⃣ OS_MAIN_STACK_SIZE: Stack Size for the main task. Currently set to 16 KB.

5️⃣ MSYS_1_BLOCK_COUNT: Number of Mbuf blocks available. Currently set to 64.
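
Here’s a hedged sketch (based on the os_sem calls we saw earlier) of how the throttle in item 3️⃣ would be initialised with 2 tokens, so that at most 2 SPI requests may be queued at a time…

/// Semaphore that throttles the number of queued SPI requests
static mut SPI_THROTTLE_SEM: os::os_sem = fill_zero!(os::os_sem);
// Init to 2 tokens... spi_noblock_write() takes a token before queueing each request,
// spi_event_callback() returns the token after the request has been transmitted.
unsafe { os::os_sem_init(&mut SPI_THROTTLE_SEM, 2) };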

Is it possible to render PineTime graphics at the theoretical maximum speed of the SPI bus? Read this


What’s Next?

PineTime is available for purchase by the general public! Check this article for updated instructions to build and flash PineTime firmware…

In the next article we’ll have…

1️⃣ The prebuilt Rust + Mynewt OS firmware that we may download and install on PineTime

2️⃣ Instructions for flashing the firmware to PineTime with Raspberry Pi (or ST Link)

3️⃣ Instructions for developing our own Watch Apps with the druid Rust UI Framework

Stay tuned!

Here are the other articles in the PineTime series…