pytest and relative imports part 2

I had another tussle with getting pytest to recognise where the modules to be tested are relative to the directory where the testing code is. This time I resolved the issue using relative imports. Last time I tried adding the directory where the testing code is to the system path. This is detailed in my post here.

This time I realised that I could solve the issue using relative imports correctly. The blog page here helped me.

Here’s my project structure:


To be able to access the code to be tested from the testing code I needed to:

  • start the testing code with the correct relative import for where the code to be tested is:
from microbit.activity_indicator.activity_indicator import *
  • put __init__.py files at the root of the project, in the directory with the code to be tested and in the directory with the testing code. These files can be empty, created using ‘touch’ in Linux or by saving an empty file in Windows.
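As a sketch, the empty files can be created in one go from the project root. The directory names here are illustrative guesses based on the import above, not necessarily my actual layout:

```shell
# create the package directories and the empty __init__.py files;
# 'tests' is a hypothetical name for the directory holding the testing code
mkdir -p microbit/activity_indicator tests
touch __init__.py
touch microbit/__init__.py
touch microbit/activity_indicator/__init__.py
touch tests/__init__.py
```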

yotta, cmake, ninja build tools for the Micro:bit explained

Understanding the offline build tools for the BBC Micro:bit

I use the offline build tools from Lancaster University for compiling and building C-code for the BBC Micro:bit board. I used the toolset without really understanding what was going on under the hood. Understanding our tools gives us a better chance of fixing things when they fail. So I spent a happy afternoon learning about these tools and how they differ from the system I was using for building C-code into executables. These executables are often called binaries.


yotta is a tool created at mbed for building C code into executables for a range of ARM processors. The processor on the BBC Micro:bit is an ARM processor. yotta is written in Python, so Python needs to be installed on your system for yotta to run.

yotta uses a file called module.json containing information about the target platform. This information is used by yotta to download files that are required for your target hardware platform to enable your C code to work on that platform. These files are downloaded to a directory named yotta_modules. These files are used during the build process.

An example module.json file for the BBC Micro:bit is:

"name": "microbit-c",
"version": "2.0.0-rc8",
"description": "The micro:bit runtime common abstraction with examples.",
"license": "MIT",
"dependencies": {
"microbit": "lancaster-university/microbit"
"targetDependencies": {},
"bin": "./source"

In the file yotta_targets/bbc-microbit-classic-gcc/target.json is a line:

"toolchain": "CMake/toolchain.cmake",

This tells us that the CMake build system is used by yotta. So, what is CMake?


CMake is a command line tool that uses a file called CMakeLists.txt to create a list of shell commands that are run at a later stage to create the final executable for a C project. This list of commands ends up in a file called Makefile. The CMakeLists.txt file contains things such as the flags passed to the compiler used to build the executable and the source files to be used in the build process.
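As a sketch, a minimal CMakeLists.txt for a hypothetical two-file C project might look like this (project and file names are made up for illustration):

```cmake
cmake_minimum_required(VERSION 3.10)
project(hello C)

# flags passed to the compiler for every source file
add_compile_options(-Wall -O2)

# source files used in the build; the output binary is named 'hello'
add_executable(hello src/main.c src/util.c)
```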

Here I digress into the build system I am used to seeing, which uses ‘make’ to create the final binary executable. make is replaced by ninja in the Micro:bit build system. I cover this in the next section.

GitHub projects often use CMake to enable you to build the project’s source code on your system. To build these projects you often run the command ‘cmake’, which takes the CMakeLists.txt file in the GitHub project and uses it to create the file Makefile. Then, running the command ‘make’ executes each of the commands in Makefile.

As Makefile was created using cmake, the shell commands in Makefile will be the compiler commands to:

  • create .o files for all of our .c files

  • link these .o files together

  • create a /build/src directory

  • place the linked executable into this directory.

Often we run ‘make install’ to complete the installation of the binaries created from C code. The final output from running the commands in Makefile is often an executable binary file. The ‘install’ command copies this to wherever it needs to be in your system for it to run from the command line.
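For a hypothetical two-file project, the generated Makefile boils down to rules like these (hand-simplified; real generated Makefiles are far noisier, and recipe lines must be indented with tabs):

```make
build/src/hello: main.o util.o
	mkdir -p build/src
	cc -o build/src/hello main.o util.o

main.o: main.c
	cc -c main.c

util.o: util.c
	cc -c util.c

install: build/src/hello
	cp build/src/hello /usr/local/bin/
```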

Digression ends.

But, but, but – the Micro:bit build system does not use ‘make’ to build the binary executable. Instead, it uses ninja.


If you can see the ninja, you are already dead.

ninja replaces make. make has been around for decades. Which is no bad thing. But somebody, somewhere, decided to make a better make.

One of the design goals for ninja is:

very fast (i.e., instant) incremental builds, even for very large projects

A quick search found that somebody has compared the speed of make and ninja: jpopsil reports an increase in speed using ninja.

CMake generates a file called build.ninja, which ninja looks for and uses to create the executable. If we look in build.ninja, we see, well, some rules for how to build the executable. Looking further in… there’s a lot of things. From reading the documentation, one of the ideas behind ninja is that all the decisions about how to create the executable are taken prior to the build. It looks like build.ninja has all of these build decisions in it. This allows ninja to do its job and create the binary without having to think too much along the way about what to do.
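A stripped-down build.ninja file shows this pre-computed style; every rule and dependency is spelled out up front (file names hypothetical):

```ninja
# a rule is a command template; $in and $out are filled per build statement
rule cc
  command = gcc -c $in -o $out

rule link
  command = gcc $in -o $out

# build statements: output, rule, inputs - all decided before ninja runs
build main.o: cc main.c
build util.o: cc util.c
build hello: link main.o util.o
```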




Using the DRV8662 chip to create 105V DC from battery voltage.

This is how I generated a little over 100V of DC using a 3.95 V power supply with about $10 of parts.


Using a DRV8662 chip we can create a battery powered board that will generate a 105V DC power rail using around $10 worth of parts. Some surface mount soldering is needed though. If you have an input signal below about 500Hz, you can use the chip to amplify this to have a peak to peak output of up to around 200V.

DRV8662 test board, 3.95V DC input from the middle power supply, 104.7V DC output. What could go wrong?

All right. I said battery voltage but am using a beefy bench power supply. What’s with that? The beefy bench power supply has all kinds of current limiting and safety stuff built in that a battery pack does not have built in. Let’s not start fires until we want to.


I need a 100+ V DC rail as an input to a circuit to drive some piezoelectric crystals. The crystals I’ve been tasked to use resonate at 40 kHz. How should I go about this? First off, the crystals need a signal with an amplitude of around 100+ V. How hard could that be? Errrr….


There are a couple of standard methods to create a boost converter:

Transformers

For instance, flyback transformers as used in ‘old school’ television sets. The type with tubes. Remember those? No? I’m old…

Inductor switching circuit

Switching a voltage into an inductor ‘bounces’ the output voltage up. One interesting design based on this idea can be found here. The switching frequency needs to be controlled to maintain a steady output voltage, ideally using some kind of a controller. It is a beautiful piece of electronic design which I think I would enjoy, but it’s all about time.

Have somebody else make it

Of course. The simplest and fastest way. Stand on the shoulders of giants. Now, how many battery powered 100V DC-DC converters can I find on eBay? None. However, after some searching, I did find an integrated chip made by Texas Instruments that does most of the work for me.

Enter the DRV8662

Texas Instruments’ DRV8662 chip is designed to drive a haptic feedback piezoelectric transducer at up to about 500 Hz. If we look at the functional block diagram in the data sheet, which I copied below, we can see that an external inductor (L1) is used by the IC to generate a voltage rail at up to about 105 V. This voltage rail is called VBST. It is used to power an internal operational amplifier. This op-amp can be used to amplify an input signal from IN+ and IN-. The output can be applied to a piezoelectric transducer, shown across OUT- and OUT+.

So we have an inductor switching circuit to generate the DC voltage rail and a high voltage op-amp with a bandwidth of around 500 Hz which can be used to drive a piezo actuator. The chip is designed to be used inside, for example, mobile phones, to make a piezoelectric crystal vibrate so that your phone shakes. Now you know where that comes from.

The simplified schematic below the functional block diagram is basically the same, but the outline of the chip is dotted in. The components outside of the dotted line are the ones that we need to add to make the circuit work. What values to use? I copied the ones from the demo board that Texas Instruments make, then started playing.

I used R1 = 768K and R2 = 16K, which gives me a 105V DC output. According to my calcs, R2 = 10K should be what gives a 105V output. I think the extra inductance from using flying leads on the inductor affected this. On my first board, with the inductor soldered close to the IC, using R2 = 20K gave me about a 52V VBST. On board two, with the inductor on flying leads, I got a VBST of 84V.

On a PCB I would place 10K to get the 105V output. Or maybe start with 12K and resolder it if the rail was too low. I used L1 = 3.3uH and Rext = 7.5K, using a recommended inductor from the data sheet. GAIN0, GAIN1 and EN are tied high using 1K resistors. Don’t forget that EN needs to be high. Or the chip no worky.
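My resistor sums can be sketched in a few lines of Python. This assumes the usual boost-converter feedback relation VBST = VFB × (1 + R1/R2); the feedback reference value of about 1.35 V is my assumption, so check both against the data sheet:

```python
def vbst(r1, r2, vfb=1.35):
    """Boost rail voltage set by the feedback divider.

    Assumes VBST = vfb * (1 + r1/r2); vfb ~ 1.35 V is an assumed
    feedback reference voltage - check the DRV8662 data sheet.
    """
    return vfb * (1 + r1 / r2)

# R1 = 768K with R2 = 10K targets the ~105V rail
print(round(vbst(768e3, 10e3)))  # 105
# R2 = 16K calculates much lower; stray inductance on my
# flying-lead build is likely why 16K still measured ~105V
print(round(vbst(768e3, 16e3)))  # 66
```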

DRV8662 functional block diagram, from Texas Instruments’ data sheet.

DRV8662 simplified schematic, from Texas Instruments’ data sheet.

If the resonance frequency of the crystals I want to drive was below about 500Hz I could use the DRV8662 on its own to generate the necessary driving signal. Feed a square wave at the crystal’s resonance frequency to the input pins and slap the crystal across the output pins. But I can’t. The resonance frequency of my targets is too high. Once I put a signal over 500Hz into IN+ and IN-, the output amplitude rapidly decreases.

However, the chip does allow me to tap the boost voltage that it creates from the VBST or PVDD pins. I can use this high voltage with some extra circuitry to generate a high voltage oscillating signal at 40 KHz to make my crystals shake. Watch this space.

‘But, but, but – you’re using a fraction of the IC’s capability just to generate the 100 V rail; wouldn’t it be cheaper to make your own voltage boost circuit?’ The DRV8662 costs $3 in low quantities. Even Farnell in the UK only charges £3 each. Plus VAT. Plus postage. Even so, no, it wouldn’t be cheaper to spin my own circuit using transformers or a switched inductor, especially when you factor in my valuable time at $1 an hour. I know, I earn the Big Bucks.

So I bought a few DRV8662s from Farnell and soldered one down onto a QFN20 to DIL converter board. Which in plain speak is a little circuit board that converts a stupid small chip with no legs into something that I can plug into some breadboard to play with. The soldering is not too tricky if you use a decent small bit in your soldering iron, plenty of liquid solder flux and have a fan to blow the toxic fumes away. I use a USB powered fan designed to go inside computer cases with a power brick for this. eBay.

The first board I made is in the picture below, on the left. Note that I soldered the inductor directly to this board. It worked. Lovely. Then I shorted the VBST pin to ground with a crocodile clip. Careless. Not that I’m bitter. Still, a bit of care would have avoided this. I learned my lesson.

Why is there a bit of brown tape on the second board? I put a corresponding bit of brown tape on one edge of my breadboard to remind me which way around to stick the module in. The module looks pretty much the same both ways around, but only works one way around, and may never work again after being put in the wrong way around. I don’t want to find out if that last statement is true or not.

With the second board, I soldered the inductor onto some leads and stuck this into the breadboard. I found that the output voltage was higher than when the inductor was stuck directly to the surface mount to DIL converter board. Why? I suspect that the extra leads and path to and from the inductor increases the overall inductance.

Why did I use that particular inductor? Because the data sheet said that make and model would work. That’s why.

DRV8662s on surface mount to DIP converter boards. Why two? Because I wrecked the one on the left.

Looking at the picture below, which shows the DRV8662 on the converter board on the test breadboard, you can see that for instance, the two power leads I use are of different lengths. This makes it hard for the crocodile clips to ever come into contact. If you look closely at the flying leads connected to the yellow multimeter in the picture at the top of this article, these are placed so that they cannot easily come into contact. I live and learn. Slowly. I ordered some decent grabber probes.

DRV8662 on breadboard. It works.

Why are there two capacitors daisy chained on the VBST pin?

VBST -||–||- GND

Because the output on VBST is 105V and each of the two capacitors is only rated to 63V. So I daisy-chained two of them. There is probably not much risk of a 63V rated capacitor cooking off at 105V, but if it were to short, then the DRV8662 could die. As I found out when I shorted VBST to ground with a crocodile clip. Not that I’m bitter about this. Not at all.
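The series-capacitor trick is easy to sanity check. Two identical capacitors in series halve the capacitance while, for well-matched parts, splitting the voltage evenly between them. The capacitor value below is illustrative, not necessarily the one on my board:

```python
def series_capacitance(c1, c2):
    """Capacitance of two capacitors in series: C = c1*c2 / (c1 + c2)."""
    return (c1 * c2) / (c1 + c2)

# two identical 1 uF, 63 V-rated capacitors in series:
# half the capacitance of one part
print(series_capacitance(1e-6, 1e-6))
# matched capacitors split the rail evenly, so each sees
# about half of the 105V rail - inside a 63V rating
print(105 / 2)  # 52.5
```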

How smooth is the 105V voltage rail? No way to tell using breadboard. If you want a smooth rail, don’t use breadboard.

I tested using an input from a signal generator. Each output pin generates a signal in anti-phase to the other, so the peak to peak difference is about twice the VBST voltage.

Please find a couple of photos showing this testing below. For these tests I only had a 52V VBST. Then I learned that adjusting R1 in the simplified schematic above changed the output voltage. Reading the data sheet always helps. I started out using an old-school ‘scope as I like them. Then switched to a new-school ‘scope because I could and nobody stopped me.

DRV8662 testing using an old-school ‘scope.

DRV8662 circuit output with 52V VBST rail using a new-school ‘scope.

Useful extra stuff

How do we know if we have soldered down the chip correctly? Please find some impedances that I measured between various pins for a working board below.

Pins    Impedance (Ohms)
1-6     9.61M
5-6     short
4-5     open (despite both being labelled GND)
10-11   short
1-3     5.12M
12-5    26.32M
13-5    23.8M
14-5    23.8M
15-5    16.25M
16-5    16.25M
17-5    16.25M
18-5    16.25M
19-5    16.25M
20-5    8.15M


Real time accelerometer display from three BBC Micro:bits

I submitted an article to Circuit Cellar magazine on how I get a real time data display from three BBC Micro:bits. Please find a video showing this in action below. On the screen to the right of the juggling clown, you can see the accelerometer data. Each BBC Micro:bit has a three axis accelerometer in it. For each Micro:bit I average the readings from all three axes into a single value. On the screen there are three traces, one for each Micro:bit. As the boards are juggled, the accelerometer values are sent by radio to a receiver Micro:bit connected to the computer. This Micro:bit acts as a go-between for the juggled Micro:bits and the computer. The accelerometer data is plotted in real time using a script I wrote in Python, using the pyqtgraph library.
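Collapsing each Micro:bit’s three accelerometer axes into the single value that gets plotted is just an average. A minimal sketch (the function name is mine, not from the article):

```python
def average_axes(x, y, z):
    """Collapse a three-axis accelerometer reading into one value."""
    return (x + y + z) / 3

# one reading per axis from a single Micro:bit, in milli-g
print(average_axes(300, -600, 900))  # 200.0
```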

Sharing wifi with a raspberry pi zero w using create_ap

I had a bit of a chicken and egg situation while connecting a raspberry pi zero w to wifi. I work on a ship for six weeks at a time, so my choice of wifi is limited to the cabin network on the ship. This has a two part authentication. First, a password is used to connect to the wifi. Then a user ID and a different password is entered into a login page on a browser. My pi zero w has the lite version of the raspbian operating system. No GUI. No browser. I ssh into it, connecting the pi to my laptop with a USB cable.

Top tip number one – you cannot ssh into the pi zero w if the USB cable is connected to the port labelled pwr (power) on the pi. You need to use the other port.

There are plenty of good sites telling you how to set up the pi zero w to connect with your laptop via ssh by doctoring a couple of files on the micro SD card housing your raspbian OS. There are plenty of sites telling you how to set up a wifi connection using Linux. I could get the little pi onto the ship’s wifi, but I could not complete the second stage of browser based authentication as there is no browser on the lite version of the OS. There are text based browsers available for Linux, which would enable me to complete the second stage of the login. But I could not install one of these as I did not have a connection to the internet. I am on a ship, remember; no other wifi is available.

So I shared my laptop’s wifi with the pi zero. I use Linux. To share my wifi I used the create_ap library. To set up a new wifi link from your laptop:

sudo create_ap wlan0 wlan0 <new wifi name> <password>

For example:

sudo create_ap wlan0 wlan0 my_wifi my_password

ssh into your raspberry pi:

ssh pi@raspberrypi.local -p22

If this does not work, try:
Edit wired connection 2 (or whichever is the highest numbered ‘wired connection’). Go to the IPv4 Settings tab. Select Link-Local Only for the method.
After ssh’ing into your pi, check that you can see your new wifi from it:

iwlist scan | grep my_wifi 

Now enter details of your new wifi into the wpa_supplicant.conf file:

sudo vi /etc/wpa_supplicant/wpa_supplicant.conf

Enter this, using the wifi name and password from the create_ap command:

network={
    ssid="my_wifi"
    psk="my_password"
}

If an error about authentication is received – go to a browser in the host laptop and log in to the local wifi that is being shared to ensure that it is active.

Now you should have wifi access from your pi.

Using udev to remove the need for sudo with the BBC Micro:bit

A comment on this post hinted that there is a way to remove the need to use ‘sudo’ when interacting with the BBC Micro:bit on Linux. So I left a comment asking how to do this, which the author kindly answered:

The way to make sure there is no need for root permissions to access USB device (like connected MicroBit) is by creating a file into `/etc/udev/rules.d/` directory with proper config. For Microbit this could be like this:

SUBSYSTEM=="usb", ATTR{idVendor}=="0d28", ATTR{idProduct}=="0204", MODE="0666"
and then restarting udev system with:
sudo udevadm control --reload-rules

So I created the file /etc/udev/rules.d/microbit.rules with the above code and it works!

I fired up pyocd to enable command line programming of the BBC Micro:bit without needing to use sudo. See this page for more details on programming the BBC Micro:bit from the command line and using pyocd to help with this.

Using tkinter and python to continuously display the output from a system command

I put an answer to a stackoverflow question. The poster wanted to display the output from a ‘netstat’ command every second. I suggested using a tkinter screen. To run the netstat command every second, the command line entry would be ‘netstat 1’. This is fed to a subprocess. This subprocess is wrapped in a thread to avoid blocking the main thread. The main thread needs to be left to deal with the tkinter display. GUIs like to hog the main thread. Don’t forget to use the ‘undo=False’ option with the tk.Text widget. Otherwise all of the display is continuously saved to an undo buffer. This results in the python process gobbling up memory continuously as the output from netstat is added to it each second.

import threading
from subprocess import Popen, PIPE
import tkinter as tk
from tkinter import END, INSERT


PROCESS = ['netstat', '1']

class Console(tk.Frame):
    def __init__(self, master, *args, **kwargs):
        tk.Frame.__init__(self, master, *args, **kwargs)
        self.text = tk.Text(self, undo=False)
        self.text.pack(expand=True, fill="both")
        # run process in a thread to avoid blocking gui
        t = threading.Thread(target=self.execute, daemon=True)
        t.start()

    def display_text(self, p):
        ''' Replace the display each time a new block of output starts. '''
        display = ''
        lines_iterator = iter(p.stdout.readline, '')
        for line in lines_iterator:
            if 'Active' in line:
                self.text.delete('1.0', END)
                self.text.insert(INSERT, display)
                display = ''
            display = display + line

    def display_text2(self, p):
        ''' Alternative: insert each line as it arrives. '''
        while p.poll() is None:
            line = p.stdout.readline()
            if line != '':
                if 'Active' in line:
                    self.text.delete('1.0', END)
                self.text.insert(END, line)

    def execute(self):
        p = Popen(PROCESS, universal_newlines=True,
                  stdout=PIPE, stderr=PIPE)
        print('process created with pid: {}'.format(p.pid))
        self.display_text(p)

if __name__ == "__main__":
    root = tk.Tk()
    root.title("netstat 1")
    Console(root).pack(expand=True, fill="both")
    root.mainloop()

ssh to a pi zero w from a linux box

There are many sites and YouTube videos explaining how to connect the pi zero to a laptop or desktop using a USB cable, then access the pi zero from the laptop using ssh. Here is a link to one guide.

I followed a guide on YouTube but had a few problems connecting to the pi zero w using ssh through Linux. Each time I put in:

ssh pi@raspberrypi.local -p22

I got a blank line which then timed out and displayed:

ssh: Could not resolve hostname raspberrypi.local: Name or service not known

I successfully connected to the pi zero w using putty on a Windows 8 machine. Putty is ssh with a nice GUI interface. Windows is ‘plug and play’. I run Linux without a GUI, so have ‘plug, learn and play’ instead. Time to learn.

I fired up nm-applet, using the command:


Then I went to ‘Edit connections’. The pi zero w will often be the highest numbered ‘Wired connection’. In my case it was ‘Wired connection 2’. Edit this. Go to the IPv4 Settings tab. Select Link-Local Only for the method. See a screenshot showing the setup below.

Raspberry pi zero w ssh connection configuration

After saving the updated configuration, the ssh command works.

Zombie BBC Micro:bit serial ports created when using pyocd-gdbserver --persist

So I was happily using pyocd-gdbserver to program and enter debugging mode on a BBC Micro:bit attached to one of my laptop’s USB ports, as described here. Then I stopped being able to read data through the USB port… Long story short, multiple ‘zombie’ ports were created and my Python script was connecting to a zombie instead of the live one.

Running setserial to list the serial ports:

setserial -g /dev/ttyACM*

gives:

/dev/ttyACM0, UART: unknown, Port: 0x0000, IRQ: 0, Flags: low_latency
/dev/ttyACM1, UART: unknown, Port: 0x0000, IRQ: 0, Flags: low_latency

Sometimes for fun, I would also see a ttyACM2. Why would two ports have the same Port number? The answer is they don’t. They are the same port. Connecting to /dev/ttyACM1 got me nothing. Connecting to /dev/ttyACM0 got me connected to the BBC Micro:bit. I had set the pyocd-gdb utility running using:

sudo ~/.local/bin/pyocd-gdbserver -t nrf51 -bh -r --persist

I think that the --persist flag does the damage. Run the script without this flag and I think we are good to go. I altered my serial port script to flag up when more than one Micro:bit is found. For good measure, I sort the ports into reverse order and connect to the one with the PID and VID for the Micro:bit on the lowest numbered ttyACM port. This is a workaround for when zombies appear.

Please find my Python 3 script for finding and returning a serial port connection to a BBC Micro:bit below.

import logging
import serial
from serial.tools import list_ports
from time import sleep

# USB IDs for the BBC Micro:bit, matching the udev rule above
# (idVendor 0d28, idProduct 0204)
PID_MICROBIT = 0x0204
VID_MICROBIT = 0x0D28
BAUD = 115200
TIMEOUT = 0.1  # seconds

logging.basicConfig(level=logging.DEBUG, format='%(message)s')

class SerialPort():
    def __init__(self, pid=PID_MICROBIT, vid=VID_MICROBIT, baud=BAUD, timeout=TIMEOUT):
        self.serial_port = self.open_serial_port(pid, vid, baud, timeout)

    def count_same_ports(self, ports, pid, vid):
        ''' Count how many ports with pid and vid are in <ports>. '''
        return len([p for p in ports if p.pid == pid and p.vid == vid])

    def get_serial_data(self, serial_port):
        ''' get serial port data '''
        in_waiting = serial_port.in_waiting
        read_bytes = serial_port.readline(in_waiting)
        if not read_bytes:
            return ''
        return read_bytes.decode()

    def get_serial_port(self):
        ''' Return the serial port. '''
        return self.serial_port

    def open_serial_port(self, pid=PID_MICROBIT, vid=VID_MICROBIT, baud=BAUD, timeout=TIMEOUT):
        ''' open a serial connection '''
        print('looking for attached microbit on a serial port')
        serial_port = serial.Serial(timeout=timeout)
        serial_port.baudrate = baud
        ports = list(list_ports.comports())
        print('scanning ports')
        num_mb = self.count_same_ports(ports, pid, vid)
        logging.info('{} microbits found'.format(num_mb))
        if num_mb > 1:
            logging.info('**** check for false connections ****')
        # sort into reverse order so the last match, which wins,
        # is the lowest numbered ttyACM port
        for p in sorted(ports, key=lambda port: port.device, reverse=True):
            print('pid: {} vid: {}'.format(p.pid, p.vid))
            if (p.pid == pid) and (p.vid == vid):
                print('found target device pid: {} vid: {} port: {}'.format(
                    p.pid, p.vid, p.device))
                serial_port.port = str(p.device)
        if not serial_port.port:
            print('no serial port found')
            return None
        try:
            serial_port.open()
            print('opened serial port: {}'.format(serial_port.port))
        except (AttributeError, serial.SerialException) as e:
            print('cannot open serial port: {}'.format(e))
            return None
        sleep(0.1)  # 100ms delay to let the port settle
        return serial_port

if __name__ == '__main__':
    print('instantiating SerialPort()')
    serial_port = SerialPort()

Sublime Text 3, adding a custom python 3 build

Typing ‘python’ at the command line of my Linux Mint 18 install gives me a python 2.7 prompt. So when I ran a python script in Sublime Text, it ran under Python 2.7. But I want to use python 3! So I entered a custom python 3 build.

I use Linux Mint 18. The “shell_cmd” mentioned below will be different for Windows and maybe for Mac OS as well.

To create a build option in Sublime Text 3 for your favorite version of Python, create a file called:

sublime_install/Packages/User/Python3.sublime-build

Where sublime_install is the path to the directory where you have sublime installed.

The file should contain this text:

    "shell_cmd": "/usr/bin/env python3 -u ${file}",
    "selector": "source.python",
    "file_regex": "^(...*?):([0-9]*):?([0-9]*)",
    "working_dir": "${file_path}",

You may need to change ‘python3’ to whichever command prompt fires up the version of python you want to run.

The option ‘Python3’ will now appear in your build menu on Sublime Text 3.

The -u option in the “shell_cmd” removes buffering. I missed this out initially, leading to some head scratching. My scripts would run, but I wouldn’t see any output for some time, until the output buffer had filled. Luckily Stack Overflow came to my help: