CD player for the visually impaired

This blog details a method to allow somebody who is visually impaired to easily listen to their CD collection again. My mother lost her eyesight through macular degeneration. She has a decent collection of classical music that she built up over a few years, but she can't see well enough to easily use a CD player anymore. On top of that, her mobility is restricted.

I bought her a Roberts Concerto 2 CD player and radio designed for the visually impaired. Please find details and a review here. This is the best that I could find, but it is still fiddly and difficult for somebody without sight to load the CD. It is quite a bulky device, which makes putting it next to an elderly person awkward: it takes up most of a chair side table, or the user has to get up and go to where it is placed. That is a barrier to it ever being used when just getting out of a chair is no longer straightforward.

I looked at a few potential solutions. I found a portable CD player on eBay and tried that. But again, it takes up a little too much table top space and it is fiddly to load. You don't realise how poorly controls are laid out on most devices until you try to explain to somebody without vision how to use them.

I found some lovely projects where custom built players are made using tags. Audio books are loaded onto a memory card and played using something like a Raspberry Pi single board computer. An NFC coil is used to read a tag placed inside the case of an audiobook or CD and the audio is played from the memory card. Here is an example on Hackaday. I started going down this route. Then I had another think. This will add another device to my mother's chair side table. I will have to:

  1. Build it.
  2. Run off all her CDs to memory card.
  3. Show her how to use it.
  4. Maintain it.

My mother uses a Sovereign USB stick player to listen to her talking books and newspapers. This is a well-designed player aimed at the visually impaired. It has decent sound quality. The build cost of a custom device would exceed the cost of a Sovereign, and for me to think I would match the sound quality is a tad arrogant. One of the design features of this player is that it will remember the place you were last at on the stick. You can even swap sticks and it remembers the last play position on the previous five sticks you played. Mum already has this next to her and knows how to use it. As a side note, there is now a smaller version of this player available called the Sonic, which I bought to listen to my podcasts with and loan to Mum when she visits.

So I ran off her CDs to MP3 and put each one on to a cheap USB stick. I bought some small key rings and used these to connect the memory sticks to postcards on which I printed the title of the CD. So far I have run off 10 of these. These 10 sticks and labels fit in a little box on her table next to the player. If the idea works, I will run off some more.
I’ve written this up so that other people in my position have a potential solution to enable others with disability to enjoy their music collections.
Having to pay a couple of dollars for a cheap USB stick from eBay for each CD may seem a tad pricey, but compared with the time and cost of building a custom device, I think it is money well invested.
The proof of the pudding is in the eating. Please find a photo of Mum at my house, having dozed off while listening to Aled Jones with the bear she gave me about 40 years ago. The bear’s arms are a little saggy now, but we all get a little infirm and need some help as we age.
 
Mum and Bear listening to some music.
 

EWMA filter example using pandas and python

This article gives an example of how to use an exponentially weighted moving average filter to remove noise from a data set using the pandas library in Python 3. I am writing this as the syntax for the library function has changed. The syntax I had been using is shown in Connor Johnson's well explained example here.
I will give some example code, plot the data sets then explain the code. The pandas documentation for this function is here. Like a lot of pandas documentation it is thorough, but could do with some more worked examples. I hope this article will plug some of that gap.
Here’s the example code:

import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
ewma = pd.Series.ewm

x = np.linspace(0, 2 * np.pi, 100)
y = 2 * np.sin(x) + 0.1 * np.random.normal(x)
df = pd.Series(y)
# take EWMA in both directions then average them
fwd = ewma(df,span=10).mean() # take EWMA in fwd direction
bwd = ewma(df[::-1],span=10).mean() # take EWMA in bwd direction
filtered = np.vstack(( fwd, bwd[::-1] )) # lump fwd and bwd together
filtered = np.mean(filtered, axis=0 ) # average
plt.title('filtered and raw data')
plt.plot(y, color = 'orange')
plt.plot(filtered, color='green')
plt.plot(fwd, color='red')
plt.plot(bwd, color='blue')
plt.xlabel('samples')
plt.ylabel('amplitude')
plt.show()

This produces the following plot. Orange line = noisy data set. Blue line = backwards filtered EWMA data set. Red line = forwards filtered EWMA data set. Green line = sum and average of the two EWMA data sets. This is the final filtered output.

EWMA filtered and raw data.

Let’s look at the example code. After importing the libraries I will need in lines 1-5, I create some example data. Line 6 creates 100 x values with values spaced evenly from 0 to 2 * pi. Line 7 creates 100 y-values from these 100 x-values. Each y value = 2*sin(x)+some noise. The noise is generated using the np.random.normal function. This noisy sine function is plotted in line 15 and can be seen as the jagged orange line on the plot.
Forwards and backwards EWMA filtered data sets are created in lines 10 and 11.
Line 10 starts with the first x-sample and the corresponding y-sample and works forwards and creates an EWMA filtered data set called fwd. This is plotted in line 17 as the red line.
Line 11 starts at the opposite end of the data set and works backwards to the first – this is the backwards EWMA filtered set, called bwd. This is plotted in line 18 as the blue line.
These two EWMA filtered data sets are added and averaged in lines 12-13. This data set is called filtered. This data set is plotted in line 16 as the green line.
If you look at the ewma functions in lines 10 and 11, there is a parameter called span. This controls the width of the filter. The backwards EWMA data set lags the final averaged filtered output by this value. Similarly, the forward EWMA data set is offset from the noisy data set by this value. Increasing the span increases the smoothing and the lag. Increasing the value will also reduce the peaks of the filtered data relative to the unfiltered data. You need to try out different values.
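As a side note, pandas defines span in terms of the smoothing factor alpha, with alpha = 2/(span + 1), so a larger span means a smaller alpha and heavier smoothing. Here is a minimal sketch of trying a couple of span values, reusing the df series and the ewma shorthand from the listing above; the variable names and colours are just for illustration.

# compare two span values on the same data
# pandas uses alpha = 2 / (span + 1), so a larger span smooths more and lags more
smooth_narrow = ewma(df, span=5).mean()   # lighter smoothing, less lag
smooth_wide = ewma(df, span=20).mean()    # heavier smoothing, more lag
plt.plot(df, color='orange', label='raw')
plt.plot(smooth_narrow, color='green', label='span=5')
plt.plot(smooth_wide, color='red', label='span=20')
plt.legend()
plt.show()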
My present application for this filter is removing jitter from accelerometer data. I have also used this filter to smooth signals from hydrophones.

XBee series 1 accelerometer sampling

XBee modules have a built in ADC, so why not sample an analog accelerometer directly? This will allow me to make a smaller wireless accelerometer that I can strap to my participants for testing. Long term I want a microcontroller in the system for onboard signal processing. But for initial data collection, the smaller and simpler the better. Make it work. Make it fast. Make it right. I am using the ADXL335 analog output 3-axis accelerometer connected to D0, D1 and D2 of an XBee Series 1. This idea is nothing new: I got the idea for this build from a website made by Dr. Eric Ayars, Associate Professor of Physics at California State University, Chico, here. Thanks Eric! Initially I tried lashing up his design with the Series 2 XBees that I had to hand. The issues with this come down to the two main differences that I found between the Series 1 and the Series 2 XBee ADC (analog to digital converter).

1. With the Series 2 XBee, the range of analog input that can be read by the ADC is fixed at ground to 1.2V. With the Series 1 module, you set the top voltage that the ADC can sample by connecting that voltage to the VRef pin on the module. There is a VRef pin labelled on the Series 2, but it is not connected to anything. Usually you connect the voltage that you are using to power the module (e.g. 3.3V) to the VRef pin on the Series 1 to enable the ADC to sample from ground to the supply voltage. You cannot connect a higher voltage than the supply voltage to this pin. Or the World will End. The output from the ADXL335 is centered around half of the voltage that it is powered with. In my case this is 3.3/2 = 1.65V. The output for each of the 3 axes in the chip varies by 330mV/g. So the outputs will rarely dip below 1.2V and be sampled by a Series 2 XBee. Of course I could use a simple resistor network to bring the voltage output from the accelerometer down to be centered around 0.6V and be in with a chance of reading it with the XBee Series 2 (see the rough calculation after this list). But this brings us on to issue 2.
 
2. The sample rate of the Series 2 XBee is lower than that of the Series 1. Using the Digi International XCTU tool for configuring the modules, the fastest sample rate that I am allowed to set with the Series 2 is 50ms. When I tested it, I was only getting about 16Hz. Thinking about it for a little while, I realised that the 50Hz sampling was being split across the 3 analog inputs that I am sampling (x, y and z axes). 3×16=48, so it all kind of makes sense. The Series 1 can be set to sample silly fast, down to 1ms. However, this brings us on to reading some XBee Series 1 data and information sheets. This article from Digi International states that the maximum sample rate for the Series 1 is 50Hz, but it can be set to sample at up to 1kHz. I am interested in seeing just how fast this module can go…
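As an aside, here is a rough sketch of the resistor divider calculation mentioned in point 1, just to illustrate the idea. The resistor values are examples rather than a tested design, and a real divider would also scale the 330mV/g signal and load the accelerometer output.

# rough check of a two resistor divider to shift the ADXL335 mid-rail output
# (1.65V) down towards the 0.6V centre of the Series 2 ADC range
# example values only - not a tested design
v_mid = 3.3 / 2                        # ADXL335 output is centred on half the supply
r_top, r_bottom = 18e3, 10e3           # example resistor values
ratio = r_bottom / (r_top + r_bottom)  # divider attenuation
print('divider ratio {:.3f}, centre voltage {:.2f}V'.format(ratio, v_mid * ratio))
# prints a centre of about 0.59V, close to the 0.6V target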
 
The picture below shows the XBee Series 1 module connected with an ADXL335 board – which is on the right of the photo. On the left there is an AAA battery connected to a DC-DC converter board, which provides an output of 3.3V for the ADXL335 and the XBee module. The same 3.3V rail is used as an input to the VRef pin on the XBee module. So the ADC should work from ground to 3.3V. I would imagine that the ADC will stall at about a diode drop (0.6V) from either limit.
I set the sample rate on the XBee 1 to be 5ms using the XCTU tool, which equates to 200Hz.
I lashed up some code based on the XBee API samples. I use Python 3, which allows me to leverage the time.perf_counter() function in lines 12 and 15 to get microsecond timing. Please see the initial code and output below.
 
from xbee import XBee
import serial
import time
PORT = '/dev/ttyUSB0'
BAUD_RATE = 115200 
# Open serial port
ser = serial.Serial(PORT, BAUD_RATE)
# Create XBee Series 1 object
xbee = XBee(ser, escaped=True)
print('created xbee at {} with baud {}'.format(PORT, BAUD_RATE))
print('listening for data...')
dt_old = time.perf_counter() 
# Continuously read and print packets
while True:
    dt_new = time.perf_counter()
    response = xbee.wait_read_frame()
    adc_dict=response['samples'][0]
    delta_millis = (dt_new-dt_old)*1000
    dt_old = dt_new
    try:
        print('{:.2f} {:.2f}'.format(delta_millis, 1000/delta_millis))
    except ZeroDivisionError as e:
        continue
    print(adc_dict['adc-0'], adc_dict['adc-1'], adc_dict['adc-2'])
ser.close()

output:

created xbee at /dev/ttyUSB0 with baud 115200
listening for data...
0.00 1428571.69
526 409 502
10.67 93.73
526 409 503
0.25 4058.74
526 411 503
10.62 94.19
522 406 500
0.40 2474.43
523 409 502
11.26 88.85
516 412 505
0.62 1604.76
523 408 502
10.65 93.86
522 407 498
0.39 2591.94
522 403 500
10.64 94.02

Ignore the first line of data; I expected that to be garbage. The lines of data should then repeat in pairs:

time in ms since the last sample, resulting frequency = 1000/time in ms since last sample # these don't look about right
adc-0, adc-1, adc-2 # which looks about right

We should be seeing a uniform sample interval and frequency. Instead it oscillates between about 11ms and 0.5ms, which averages out to about 6ms for all three channels. So the ADC is working at a sample rate of around 2ms.

I modified the code to include a 100 sample averaging calculation. This is implemented using a deque data container, initialised in line 13. The sample times are added in line 24. Prior to that, the oldest one is removed in line 23. The values are averaged and printed in line 26. The try/except clause around this line is necessary as the 'None' values that the deque is initialised with cause the np.mean function to crash with a TypeError.


from collections import deque
import numpy as np
from xbee import XBee
import serial
import time

PORT = '/dev/ttyUSB0'
BAUD_RATE = 115200 
# Open serial port
ser = serial.Serial(PORT, BAUD_RATE)
# Create XBee Series 1 object
xbee = XBee(ser, escaped=True)
sample_deque = deque([None]*100, maxlen=100)
print('created xbee at {} with baud {}'.format(PORT, BAUD_RATE))
print('listening for data...')
dt_old = time.perf_counter() 
# Continuously read and print packets
while True:
    dt_new = time.perf_counter()
    response = xbee.wait_read_frame()
    adc_dict=response['samples'][0]
    delta_millis = (dt_new-dt_old)*1000
    sample_deque.pop()
    sample_deque.appendleft(delta_millis)
    try:
        print('{:.2f}'.format(np.mean(sample_deque)))
    except TypeError:
        continue
    dt_old = dt_new
    try:
        print('{:.2f} {:.2f}'.format(delta_millis, 1000/delta_millis))
    except ZeroDivisionError as e:
        continue
    print(adc_dict['adc-0'], adc_dict['adc-1'], adc_dict['adc-2'])
ser.close()

output after a few hundred samples:

5.44
13.52 73.96
521 404 500
5.45
2.06 485.59
523 408 502
5.44
0.91 1103.39
526 409 504
5.44
11.06 90.38
516 412 507
5.45

The data should be:

averaged interval in ms # looks about right
last sample interval in ms, frequency calculated from last interval in Hz # still oscillating
adc-0, adc-1, adc-2

The average of around 5.5ms is close enough to the programmed value of 5ms for my purposes. Why does the sample time fluctuate? Probably something to do with my code. If you have an answer, please leave it below.
The rigorous way to verify the accuracy and speed of this module is to plug in a function generator to the analog channels, record data then analyse that. How hard could that be? Errrr….. I think that what I have now is ‘good enough’ to try out shake gesture recognition.
The next step is to get an output in 'g' – that is, units of gravity. As the sensitivity of the ADXL335 is 330mV/g with a 3.3V supply, the output is centred on half of the rail voltage and the 10-bit ADC gives counts from 0 to 1023:
g = (ADC_count-512)/102.4
I made a python lambda function to do the conversion:

g = lambda x: (x-512)/102.4

So I can output formatted accelerometer values in g by altering line 34 of the last listing to:

print('{:.2f} {:.2f} {:.2f}'.format(g(adc_dict['adc-0']), g(adc_dict['adc-1']), g(adc_dict['adc-2'])))

Using pyzmq to communicate between GUIs and processes

Graphical user interfaces (GUIs) all want to be the main thread. They don't play well together. Trying to run GUIs built with different libraries concurrently and get them to talk to one another took me a while to figure out. This article shows how I used the pyzmq library to communicate between two GUIs.

 
I am working on unique hand gesture recognition. One GUI represents a hand position: it is built with PyQt and has a few range sliders. The sliders will be used to represent pitch, roll and speed of motion in the final application. A second GUI represents the gesture recognition interface. For this example it is a simple label box set up in pyqtgraph. I used pyqtgraph as this is the toolkit I am using in my final application for real time data display from an accelerometer mounted on a hand. I based my pyzmq script on the examples here.
 
I played with the publisher subscriber (pubsub) examples. One of the nice things about the pubsub model is that if you send something from the publisher, even if there are no subscribers waiting for the message, nothing blocks or stalls your script. Pubsub is only one way communication, from the publisher to the subscriber. I opted instead to use the pair model. In this pattern, a socket is set up that allows an object at each end to send messages back and forwards.
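As a minimal sketch of the pair pattern outside of any GUI, the snippet below sets up both ends of a PAIR socket in one script. The port number and message text are arbitrary choices for the example.

import time
import zmq

# minimal pyzmq PAIR example: one end binds, the other connects
context = zmq.Context()
server = context.socket(zmq.PAIR)
server.bind('tcp://*:5557')
client = context.socket(zmq.PAIR)
client.connect('tcp://localhost:5557')
time.sleep(0.1)  # give the tcp connection a moment to come up

server.send_string('hello from the binding end')
print(client.recv_string())  # blocks until the message arrives
client.send_string('reply from the connecting end')
print(server.recv_string())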
 
Pyzmq comes with a partial implementation of the Tornado event loop. This is explained here. So you can set up an event loop to trigger on poll events using ioloop. If you are already using a GUI, then odds on you have an event handler running in that GUI. Getting these event handling loops to play nicely with the Tornado loop led me down the coding rabbit hole. So I opted to use the event handling loop set up by timer = QtCore.QTimer() in pyqtgraph to poll one end of the pyzmq pair socket that I set up. This is not elegant, but I can't see a more reliable method. I am using this QTimer to enable animation of the sensor data that I am using for displaying hand position, so it is already running. Whichever method I use to set up receiving data from the hand posture GUI, at some point I have to decide to look at the data and use it. I thought about using a process safe Queue structure such as multiprocessing.Queue. I could use this to automatically update a list in my sensor display GUI with new posture positions. But that list won't be looked at until the QTimer triggers, so I may as well simplify things and look for the updated posture position in the QTimer handling method.
 
Here’s the code I use to generate the rangeslider GUI. This can be downloaded from: github. Most of this is boilerplate to produce the GUI. Lines 102-107 create the pyzmq pair socket. Note the try/except wrapper in lines 97-99 around the socket.send_string. This raises a zmq.error.Again exception if there is nothing to receive the message. Using the try/except wrapper allows the code to continue. The ‘flags=zmq.NOBLOCK’ stops the code from blocking if there is nothing at the other end of the socket to receive the message. This isn’t an issue with the pubsub model; a publisher doesn’t care if there is no subscriber around to receive the message, but the pair pattern will fail without a receiver unless you explicitly tell it not to block.
'''
Created on 10 Oct 2016

@author: matthew oppenheim
use pyzmq pair context for communication
'''

from multiprocessing import Process
from PyQt4 import QtGui, QtCore
from qrangeslider import QRangeSlider
import sys
import zmq
from zmq.eventloop import ioloop, zmqstream
from pubsub_zmq import PubZmq, SubZmq


class Example(QtGui.QWidget):
    
    def __init__(self):
        app = QtGui.QApplication(sys.argv)
        super().__init__()
        ioloop.install()
        self.port = 5556
        self.topic = "1"
        self.initUI()
        sys.exit(app.exec_())
        

    def initUI(self):
        self.range_duration = QRangeSlider()   
        self.range_duration.show()
        self.range_duration.setFixedWidth(300)
        self.range_duration.setFixedHeight(36)
        self.range_duration.setMin(0)
        self.range_duration.setMax(1000)
        self.range_duration.setRange(200,800)
        self.textbox = QtGui.QLineEdit()
        self.set_duration_btn = QtGui.QPushButton("send duration")
        self.set_duration_btn.clicked.connect(lambda:
            self.button_click('duration'))
        self.set_duration_btn.setFixedWidth(100)
        self.range_pitch = QRangeSlider()    
        self.range_pitch.show()    
        self.range_pitch.setFixedWidth(300)
        self.range_pitch.setFixedHeight(36)
        self.range_pitch.setMin(-80)
        self.range_pitch.setMax(80)
        self.range_pitch.setRange(-20, 20)
        self.set_pitch_btn = QtGui.QPushButton("send pitch")
        self.set_pitch_btn.setFixedWidth(100)
        self.set_pitch_btn.clicked.connect(lambda:
            self.button_click('pitch'))
        self.range_roll = QRangeSlider()    
        self.range_roll.show()    
        self.range_roll.setFixedWidth(300)
        self.range_roll.setFixedHeight(36)
        self.range_roll.setMin(-80)
        self.range_roll.setMax(80)
        self.range_roll.setRange(-20, 20)
        self.set_roll_btn = QtGui.QPushButton("send roll")
        self.set_roll_btn.setFixedWidth(100)
        self.set_roll_btn.clicked.connect(lambda: 
            self.button_click('roll'))
        hbox_duration = QtGui.QHBoxLayout()
        hbox_duration.addStretch(1)
        hbox_duration.addWidget(self.range_duration)
        hbox_duration.addWidget(self.set_duration_btn)
        hbox_pitch = QtGui.QHBoxLayout()
        hbox_pitch.addStretch(1)
        hbox_pitch.addWidget(self.range_pitch)
        hbox_pitch.addWidget(self.set_pitch_btn)

        hbox_roll = QtGui.QHBoxLayout()
        hbox_roll.addStretch(1)
        hbox_roll.addWidget(self.range_roll)
        hbox_roll.addWidget(self.set_roll_btn)

        vbox = QtGui.QVBoxLayout()
        vbox.addStretch(1)
        vbox.addLayout(hbox_pitch)
        vbox.addLayout(hbox_roll)
        vbox.addLayout(hbox_duration)
        vbox.addWidget(self.textbox)
        
        self.setLayout(vbox)    
        self.setGeometry(300, 300, 300, 150)
        self.setWindowTitle('rangesliders')
        self.socket = self.create_socket(self.port)
        self.show()
     
    @QtCore.pyqtSlot()   
    def button_click(self, message):
        ''' handle button click event '''
        self.textbox.setText('sent {}'.format(message))
        try:
            self.socket.send_string(message, flags=zmq.NOBLOCK)
        except zmq.error.Again as e:
            print('no receiver for the message: {}'.format(e))
        

    def create_socket(self, port):
        ''' create a socket using pyzmq with PAIR context '''
        context = zmq.Context()
        socket = context.socket(zmq.PAIR)
        socket.bind("tcp://*:%s" % port)
        return socket
                
if __name__ == '__main__':
    ex = Example()

Here’s the simple label box that I use to test out receiving messages:

'''
pyqtgraph layout with a pyzmq pair context
for testing pubsub messaging with pyzmq
Created on 14 Oct 2016
using qt timer and polling instead of the tornado loop in zmq
@author: matthew oppenheim
'''

import pyqtgraph as pg
from pyqtgraph.Qt import QtGui, QtCore
from pubsub_zmq import SubZmq
from multiprocessing import Process
import zmq
import sys
import time

FRAMES_PER_SECOND = 30

class PyqtgraphPair(QtGui.QWidget):
    def __init__(self):
        super().__init__()
        port = '5556'
        topic = '1'
        QtGui.QWidget.__init__(self)
        self.layout = QtGui.QVBoxLayout()
        self.setLayout(self.layout)
        self.label = QtGui.QLabel("test")
        self.set_label("new label")
        self.layout.addWidget(self.label)
        self.socket = self.create_socket(port)

        
    def create_socket(self, port):
        '''
        Constructor
        '''
        context = zmq.Context()
        socket = context.socket(zmq.PAIR)
        socket.connect('tcp://localhost:%s' % port) 
        return socket


    def set_label(self, text):
        ''' set the label to text '''
        self.label.setText(text)


    def timer_timeout(self):
        ''' handle the QTimer timeout '''
        try:
            msg = self.socket.recv(flags=zmq.NOBLOCK).decode()
            print('message received {}'.format(msg))
            self.set_label(msg)
        except zmq.error.Again as e:
            return
        
        
if __name__ == '__main__':
    pg.mkQApp()
    win = PyqtgraphPair()
    win.show()
    win.resize(200,200)
    timer = QtCore.QTimer()
    timer.timeout.connect(win.timer_timeout)
    timer.start(int(1000/FRAMES_PER_SECOND))
    #win.set_label('hello')
    if (sys.flags.interactive != 1) or not hasattr(QtCore,
       'PYQT_VERSION'):
        QtGui.QApplication.instance().exec_()

Polling for a new message takes place in the timer_timeout method. This has the same try/except wrapper as in the rangeslider example.

Fixing relative and absolute links in Word, Microsoft Office 2013

This article explains how to replace absolute hyperlink references with relative ones, which makes your Microsoft Word document and linked files portable. So when you send them to a different computer, clicking on the hyperlink will open the file that you shipped with the Word document and not try to open the file that is on the computer where you wrote the document.

Say you are writing a final report for a survey and you have a link to the original contract, which is in a folder in the same directory as your report. You burn the Word document and the folder with the contract file on to a DVD and send it out to the survey's project manager. Then the project manager complains that the link doesn't work. Instead of trying to open the file on the DVD, the link is trying to open the file that is on the computer that you wrote the Word report on.

We need to make sure that the links are 'relative', meaning relative to where the Word file is, not 'absolute', meaning where the original file was. In Office 2010 this was relatively straightforward. With Office 2013 it is not.
 
To prevent this issue from happening in the first place, there is an option to set in Word, hidden under a sub menu of a sub menu:
 
  • Select “File > Options”
  • Select “Advanced”
  • Scroll down to “General” and select “Web Options”
  • Select the Files tab
This is great if you are in sole control of the document. But as soon as somebody else edits the file without this option being set correctly, you will inherit a Word file with hyperlinks pointing all over the place. Time to edit the underlying field codes.
The commands we are going to use are:
Alt-F9 – this shows all of the hyperlinks as field codes. It toggles.
Shift-F9 – click on a hyperlink and use this shortcut to toggle a single hyperlink between the link and its field code.
 
A field code for a hyperlink will have curly brackets and the word HYPERLINK. For example, under your nice blue underlined link called ‘8.16 Water bottom Horizon’, using Alt-F9 you might see something like:
 
{HYPERLINK “08_Supporting_Documents/8.16%20Waterbottom%20Horizon%20ASCII”}
 
This is a nicely formed relative hyperlink. However, if you start to see unpleasantness like:
 

{HYPERLINK “file:///\\\\v07-fnp\\Ship\\Projects\\03_Past_Surveys\\survey\\11_Reports\\04_End_of_Job\\FT_Processing_Report_v2\\Appendix\\13%20Post%20Processing%20after%20PreSTM\\Survey_Report_v1.pdf”}

You have an absolute link, which will try to open a file on v07-fnp\\Ship, which is unfortunate as this is a server on a survey ship. We need to use Word’s search and replace function to get rid of the bumf at the start.

Search on:

file:///\\\\v07-fnp\\Ship\\Projects\\03_Past_Surveys\\survey\\11_Reports\\04_End_of_Job\\FT_Processing_Report_v2\\

replace with

<intentionally left blank>

One more search and replace should be done. For whatever reason, we need to replace all ‘\\’ with ‘/’.

The information in this post should help you to get your links working correctly in your Microsoft Word documents in Office 2013. I am required to use Microsoft products at work. Life is simpler with Linux. But that is a topic for a different post.

python – how to communicate between threads using pydispatcher

The pydispatcher module makes it straightforward to communicate between different threads in the same process in Python.

Why would I want to do this?

I am collecting and processing sensor data from an accelerometer and want to display this real-time. The interface has some controls to save the data and to change the sampling rate of the sensor. Naturally, I want to interact with the user interface without having to wait for the sensor data to be collected and processed. I also want the sensor to be continuously sampled, not having to wait for the real-time display to update.

I run the graphical user interface (GUI) in one thread and use a separate thread to handle getting data from the sensor. This way the sensor is continuously sampled and the display remains responsive.

I use pydispatcher to send sensor measurements from the sensor thread to the display thread. I also use pydispatcher to communicate from the display thread back to the sensor thread to control the rate that the sensor collects data or to stop data collection. So I have two way communication between the threads. I pass numpy arrays from the sensor thread to the display and send text from the display thread to the sensor thread. The text is then interpreted by the sensor thread to alter the sensor sampling rate, or stop sampling. Pydispatcher does not seem to mind what kind of data is sent as messages.
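As a minimal sketch of that last point, the snippet below sends a numpy array as the message; the signal and sender names here are made up for the example.

import numpy as np
from pydispatch import dispatcher

# illustrative signal and sender names
DATA_SIGNAL = 'data_signal'
SENSOR_SENDER = 'sensor_sender'

def handle_data(message):
    ''' the numpy array arrives as the message keyword argument '''
    print('received array with mean {:.2f}'.format(message.mean()))

dispatcher.connect(handle_data, signal=DATA_SIGNAL, sender=SENSOR_SENDER)
dispatcher.send(message=np.array([1.0, 2.0, 3.0]), signal=DATA_SIGNAL, sender=SENSOR_SENDER)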

The application that I have described takes up quite a lot of code and is split over several classes. So I will present the code for a simpler example, which shows how to set up and apply pydispatcher and introduces some of the features that make the library versatile.

Here is an example Python 3 script that creates two threads and has them communicate. When the script is executed it will have __name__ set to __main__, so lines 46-50 will be the first to execute. A thread that instantiates the Alice class is defined and started in lines 47-48, and a separate thread that instantiates the Bob class is defined then started in lines 49-50.

In line 26 the alice_thread thread prints out a message ‘Alice is procrastinating’ every second.

In line 43 the bob_thread sends a message to the alice_thread every three seconds using a dispatcher. The alice_thread reacts to this dispatcher message by returning a message of her own to the bob_thread using a separate dispatcher.

If we look at line 15 in the Alice class, a dispatcher listener is set up:

dispatcher.connect(self.alice_dispatcher_receive, signal=BOB_SIGNAL, sender=BOB_SENDER)

This means that when a dispatcher.send statement with the signal BOB_SIGNAL and sender BOB_SENDER is executed anywhere else in the process, the method alice_dispatcher_receive will be triggered, so long as an instance of the Alice class has been created. In line 43, the Bob class sets up a dispatcher sender, which is designed to trigger the dispatcher listener in the Alice class described above.

dispatcher.send(message='message from Bob', signal=BOB_SIGNAL, sender=BOB_SENDER)

Having signal and sender names for each dispatcher listener and sender is a little confusing at first. Why do we have to define two identifiers for the dispatcher? Being able to define two identifiers allows us to group dispatchers from the same sender, using the sender identifier. Then we can have the same sender class sending different types of signal, for example data from different sensors, each one with the same sender identifier but with a different signal identifier. This is verbose, but the verbosity makes for unambiguous, easy to maintain code.
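For instance, a single sensor sender could emit separate accelerometer and gyroscope signals. The sketch below uses made-up signal and sender names to show the grouping.

from pydispatch import dispatcher

# illustrative names: one sender identifier, two signal identifiers
IMU_SENDER = 'imu_sender'
ACC_SIGNAL = 'acc_signal'
GYRO_SIGNAL = 'gyro_signal'

def handle_acc(message):
    print('accelerometer data: {}'.format(message))

def handle_gyro(message):
    print('gyroscope data: {}'.format(message))

# two listeners, grouped by the same sender but distinguished by signal
dispatcher.connect(handle_acc, signal=ACC_SIGNAL, sender=IMU_SENDER)
dispatcher.connect(handle_gyro, signal=GYRO_SIGNAL, sender=IMU_SENDER)

dispatcher.send(message=(0.1, 0.0, 9.8), signal=ACC_SIGNAL, sender=IMU_SENDER)
dispatcher.send(message=(1.5, -0.2, 0.0), signal=GYRO_SIGNAL, sender=IMU_SENDER)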

Lines 6-9 define the names of the signals and senders for Alice and Bob.

When the alice_thread receives a dispatch from the bob_thread thread, she replies with a dispatch sender of her own (line 21). The corresponding dispatch listener is defined in the Bob class in line 33.

''' demonstrate the pydispatch module '''
from pydispatch import dispatcher
import threading
import time

ALICE_SIGNAL='alice_signal'
ALICE_SENDER='alice_sender'
BOB_SIGNAL='bob_signal'
BOB_SENDER='bob_sender'

class Alice():
    ''' alice procrastinates and replies to bob'''
    def __init__(self):
        print('alice instantiated')
        dispatcher.connect(self.alice_dispatcher_receive, signal=BOB_SIGNAL, sender=BOB_SENDER)
        self.alice()

    def alice_dispatcher_receive(self, message):
        ''' handle dispatcher'''
        print('alice has received message: {}'.format(message))
        dispatcher.send(message='thankyou from Alice', signal=ALICE_SIGNAL, sender=ALICE_SENDER)

    def alice(self):
        ''' loop and wait '''
        while(1):
            print('Alice is procrastinating')
            time.sleep(1)

class Bob():
    ''' bob contacts alice periodically '''
    def __init__(self):
        print('Bob instantiated')
        dispatcher.connect(self.bob_dispatcher_receive, signal=ALICE_SIGNAL, sender=ALICE_SENDER)
        self.bob()

    def bob_dispatcher_receive(self, message):
        ''' handle dispatcher '''
        print('bob has received message: {}'.format(message))

    def bob(self):
        ''' loop and send messages using a dispatcher '''
        while(1):
            dispatcher.send(message='message from Bob', signal=BOB_SIGNAL, sender=BOB_SENDER)
            time.sleep(3)

if __name__ == '__main__':
    alice_thread = threading.Thread(target=Alice)
    alice_thread.start()
    bob_thread = threading.Thread(target=Bob)
    bob_thread.start()
Output:
alice instantiated
Alice is procrastinating
Bob instantiated
alice has received message: message from Bob
bob has received message: thankyou from Alice
Alice is procrastinating
Alice is procrastinating
Alice is procrastinating
alice has received message: message from Bob
bob has received message: thankyou from Alice
Alice is procrastinating
Alice is procrastinating
alice has received message: message from Bob
bob has received message: thankyou from Alice
Alice is procrastinating
Alice is procrastinating
Alice is procrastinating
alice has received message: message from Bob
bob has received message: thankyou from Alice

To conclude: there are different ways to communicate between threads in Python. I chose pydispatcher as the library allows me to write code that I can understand when I come back to it 6 months later, and I don't have to worry about the type of message that I am passing between the threads.

One minus alpha filter

I’ve got some real time accelerometer and gyroscope data coming in on my project to recognise hand gestures here. Naturally, I would like to be able to remove jitter and noise from the data as painlessly as possible. So we are into the world of real time digital filtering. Many books have been written on this subject and it is easy to dive ‘down the rabbit hole’ and lose a lot of your life testing filters. I want something that is ‘good enough’ quickly. From this stackoverflow answer, I got the idea for a simple implementation of a moving average filter:
x' ← x' + α(x − x')

where α < 1

The first term, x', is the new filtered value, calculated by taking the last filtered value and adding α · (last measured value − last filtered value).
So how to simulate and implement this?

Simulation

First of all, I will simulate the idea. Filters are implemented by using convolution. The input data is convolved with a filter operator. So I tried out a filter operator:

(1-α, α)

Here's a simple script I ran using a Jupyter notebook and Python 3. I lifted the base code from the matplotlib examples page. I generated a sine wave and added some random noise in line 6. The filter operator is defined in line 5; I am using alpha = 0.5 for this example. The function np.convolve in line 7 implements the filter. I had to knock off the last element of the filtered and difference data to get them all to plot, as one of the characteristics of a filter is that it will elongate the data set. Really, you need to 'pad' a data set at each end before applying convolution to remove 'edge effects' of the filter. But we are looking to quickly test and implement a filter here, not get bogged down in the technical minutiae of filter design. Rabbit hole. Avoid.

import numpy as np
import matplotlib.pyplot as plt
ALPHA = 0.5
x = np.linspace(0, 2 * np.pi, 100)
filter = (1-ALPHA,ALPHA*1)
y = 2 * np.sin(x) + 0.1 * np.random.normal(x)
y_filt = np.convolve(y, filter)
y_diff = y - y_filt[:-1]
 
print(y)
print(y_filt[:-1])
print(y_diff[:-1])
 
fig, (ax0, ax1, ax2) = plt.subplots(nrows=3)
 
ax0.plot(x, y)
ax0.set_title('input')
 
ax1.plot(x, y_filt[:-1])
ax1.set_title('output')
 
ax2.plot(x, y_diff)
ax2.set_title('difference')
 
# Hide the right and top spines
ax1.spines['right'].set_visible(False)
ax1.spines['top'].set_visible(False)
# Only show ticks on the left and bottom spines
ax1.yaxis.set_ticks_position('left')
ax1.xaxis.set_ticks_position('bottom')
 
# Tweak spacing between subplots to prevent labels from overlapping
plt.subplots_adjust(hspace=0.5)
 
plt.show()

For the input, filtered output and difference plots, see below. Note that the difference plot is on a different scale to the input and filtered output. The filtered output looks to have the same amplitude as the input and some of the random noise has been removed. It is not perfect, but it has helped and was fast and easy to implement.

Implementation

I am using micropython v1.7 on a pyboard v1.0 with an mpu6050 accelerometer/gyroscope for my hardware platform – see the diagram below. So how hard could it be to implement a simple one point filter? Errrr….

The filter code is straightforward; see the snippet below. The function filter takes the latest sensor value as new_value and uses the last filtered value as old_value, returning the latest filtered value. I am using ALPHA as 0.5 for this test.

def filter(self, old_value, new_value):
        ''' simple moving average filter '''
        return (new_value*ALPHA + (1-ALPHA)*old_value)

This function is called from the main sensor scan and process while loop for each of the x,y and z accelerometers, shown in the snippet below.

        while(True):
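            # self.run_flag, self.acc_read_flag, self.counter, old_x, old_y, old_z,
            # START and END are set up elsewhere in the firmware (not shown in this snippet)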
            if (self.run_flag):
                if(self.acc_read_flag):
                    self.counter+= 1
                    (delta, x, y, z) = self.read_acc()
                    x_acc = x
                    x = self.filter(old_x, x)
                    y = self.filter(old_y, y)
                    z = self.filter(old_z, z)
                    print(START, self.counter, delta, x_acc, x, x-x_acc, END)
                    old_x, old_y, old_z = x, y, z

Have a look at the plot below. This shows the x-axis from an mpu6050 module being sampled through a pyboard v1.0 at 100Hz. I wrote the firmware for this board using micropython v1.7 and the display software using Python 3.4 with the pyqtgraph library. The x scale shows samples, the y scale shows acceleration in g.

So what can we see? The raw data looks jittery, the filtered data looks smoother and we can see the jitter that has been taken out in the difference plot. To characterise this filter properly I would need to start looking at the frequency spectrum of the raw and filtered data. But this is heading down the rabbit hole again.
I’ve quickly implemented a filter that looks to be doing what I want it to – removing noise from data. I can play with the alpha value to change the amount of smoothing. ‘The proof is in the eating’. If I can get my gesture recognition system to work with this simple filter implemented, then it is good enough.