What is Gang Scan?

Gang Scan is an open source (and free) attendance tracking system based on custom RFID reader boards that communicate back to a server over wifi. The boards are capable of queueing scan events in the case of intermittent network connectivity, and the server provides simple reporting.
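
As an illustration of that queueing behaviour, here is a minimal sketch in Python. This is not the actual Gang Scan firmware, and the server URL and event format are invented for illustration:

import collections
import json
import time
import urllib.request

SERVER = 'http://gangscan.example.com/scan'
queue = collections.deque()

def record_scan(card_id):
    # Buffer the scan event locally first, network or no network
    queue.append({'card': card_id, 'timestamp': time.time()})

def flush():
    # Drain the queue to the server, stopping if the network is down
    while queue:
        event = queue[0]
        try:
            req = urllib.request.Request(
                SERVER,
                data=json.dumps(event).encode('utf-8'),
                headers={'Content-Type': 'application/json'})
            urllib.request.urlopen(req, timeout=5)
            queue.popleft()  # only dequeue once the server has the event
        except OSError:
            break  # wifi is out, retry on the next flush

Call record_scan() from the RFID reader loop, and flush() periodically.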

Coming to grips with Kubernetes in 2020: podcasts

It has become clear to me that it is time to care more about Kubernetes. I’m sure many people have cared for ages, but the things I want to build at the moment are increasingly container based, now that I am thinking more at the application layer than the cloud infrastructure layer. So how to go about that? I thought I’d write down some notes on what has worked (or not) for me, in the hope it will help others. In this post, podcasts.

I thought podcasts would be an interesting way to get started with some nice overviews. This is especially true because I’m already a pretty heavy podcast user, so it was easy to slot into my existing routine. Unfortunately this hasn’t really worked out. I started with the podctl podcast, but they only ever talk about Red Hat stuff; it is very rare for a guest not to be a Red Hat employee, for example. The presenters also seem to really dislike OpenStack, for reasons they never explain, which is annoying.

Then I figured maybe the Google Kubernetes podcast would be better, but it often lacks the depth I am interested in.

I have yet to find a good podcast that deep dives into the technology instead of just talking about what is in the latest release. These podcasts might be useful if you’re interested in what dropped in the most recent release, but they’re neither a good nor a systematic way to get introduced to Kubernetes.

That said, I only just discovered the TGI Kubernetes youtube channel yesterday. It is not really what I wanted in a podcast given it’s a video blog, but I think it has prospects to be interesting. I will update this post when I’ve had a chance to check it out in more depth.

Have you found a good Kubernetes podcast? Am I being wildly unfair?

If I Understood You, Would I Have This Look on My Face?

This book discusses science and technical communication from the perspective of someone who comes from professional theatre and acting. Alan explains how his accidental discovery of the application of theatre sports to communication created an opportunity to teach technical communicators how to be more effective. Essentially, the argument is that empathy is essential to communication — you need to be able to understand where your audience is starting from and where they’re likely to get stuck before you can take them on the journey.

Unsurprisingly given the topic of the book, this is a well written and engaging read. The book is nicely structured and uses regular anecdotes (some of them humorous) to get its message across.

A detailed and fun read.

Title: If I Understood You, Would I Have This Look on My Face?
Author: Alan Alda
Genre: Self-Help
Publisher: Random House
Published: June 6, 2017
Pages: 240

NEW YORK TIMES BESTSELLER • Award-winning actor Alan Alda tells the fascinating story of his quest to learn how to communicate better, and to teach others to do the same. With his trademark humor and candor, he explores how to develop empathy as the key factor. “Invaluable.”—Deborah Tannen, #1 New York Times bestselling author of You’re the Only One I Can Tell and You Just Don’t Understand

Alan Alda has been on a decades-long journey to discover new ways to help people communicate and relate to one another more effectively. If I Understood You, Would I Have This Look on My Face? is the warm, witty, and informative chronicle of how Alda found inspiration in everything from cutting-edge science to classic acting methods. His search began when he was host of PBS’s Scientific American Frontiers, where he interviewed thousands of scientists and developed a knack for helping them communicate complex ideas in ways a wide audience could understand—and Alda wondered if those techniques held a clue to better communication for the rest of us.

In his wry and wise voice, Alda reflects on moments of miscommunication in his own life, when an absence of understanding resulted in problems both big and small. He guides us through his discoveries, showing how communication can be improved through learning to relate to the other person: listening with our eyes, looking for clues in another’s face, using the power of a compelling story, avoiding jargon, and reading another person so well that you become “in sync” with them, and know what they are thinking and feeling—especially when you’re talking about the hard stuff.

Drawing on improvisation training, theater, and storytelling techniques from a life of acting, and with insights from recent scientific studies, Alda describes ways we can build empathy, nurture our innate mind-reading abilities, and improve the way we relate and talk with others. Exploring empathy-boosting games and exercises, If I Understood You is a funny, thought-provoking guide that can be used by all of us, in every aspect of our lives—with our friends, lovers, and families, with our doctors, in business settings, and beyond.

“Alda uses his trademark humor and a well-honed ability to get to the point, to help us all learn how to leverage the better communicator inside each of us.”—Forbes

“Alda, with his laudable curiosity, has learned something you and I can use right now.”—Charlie Rose

Prometheus 2.12, query logging, and startup failures on macOS

Prometheus v2.12 added active query logging. The basic idea is that there is an mmapped JSON file that contains all of the queries currently running. If Prometheus were to crash, that file would therefore be a list of the queries running at the time of the crash. Overall, not a bad idea.

Some friends had recently added Prometheus to their development environments. It is wired up to Grafana dashboards for their microservices, and Prometheus is configured to store 14 days’ worth of time series data in a persistent volume on the developer desktops. We did this because it is valuable for the developers to be able to see the history of metrics before and after their changes.

Now we have a developer using macOS as their primary development platform, and since Prometheus 2.12 it hasn’t worked. Specifically, this developer uses Parallels to provide the Docker virtual machine on his Mac. You can summarise the startup for Prometheus in the dev environment like this:

$ docker run ...stuff...
...snip...
level=error ts=2019-09-15T02:20:23.520Z caller=query_logger.go:94 component=activeQueryTracker msg="Failed to mmap" file=/prometheus-data/data/queries.active Attemptedsize=20001 err="invalid argument"
panic: Unable to create mmap-ed active query log

goroutine 1 [running]:
github.com/prometheus/prometheus/promql.NewActiveQueryTracker(0x7fff9917af38, 0x15, 0x14, 0x2a6b7c0, 0xc00003c7e0, 0x2a6b7c0)
	/app/promql/query_logger.go:112 +0x4d2
main.main()
	/app/cmd/prometheus/main.go:361 +0x52bd

And here’s the underlying problem — because of the way the persistent data is mapped into this container (via Parallels sharing in this case), the mmap of the active queries file fails and Prometheus fails to start.

In other words, since Prometheus 2.12 your Prometheus data files have to be stored on a filesystem which supports mmap. Additionally, there is no flag to just disable the active query logger.
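
You can demonstrate the problem without Prometheus. This little sketch attempts the same operation as the active query logger at startup (the 20001 byte size comes from the log above), and fails the same way on a filesystem without mmap support:

#!/usr/bin/python3

# Check whether a file at the given path can be mmapped, which is the
# operation the active query logger performs at startup.

import mmap
import sys

with open(sys.argv[1], 'w+b') as f:
    f.truncate(20001)
    try:
        m = mmap.mmap(f.fileno(), 20001)
        m.close()
        print('mmap works on this filesystem')
    except (OSError, ValueError) as e:
        print('mmap failed: %s' % e)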

So how do we work around this? Well, here’s a horrible workaround — in the data directory that is volume mapped into the container, create a symlink to a path that is mmapable inside the Docker container, even if that path doesn’t exist outside the container. For example, given that we store the Prometheus time series at $CONFIG/prometheus-data:

$ ln -s /tmp/queries.active "$CONFIG/prometheus-data/queries.active"

Note that /tmp/queries.active does not exist on the developer’s Mac. Prometheus now starts and it’s puppies and kittens the whole way down.

The wonderful world of machine learning automated lego sorting

Inspired by Alastair D’Silva‘s cunning plans for world domination, I’ve been googling around for automated lego sorting systems recently. This seems like a nice tractable machine learning problem with some robotics thrown in for fun.

Some cool projects if you’re that way inclined:

This sounds like a great way to misspend some evenings to me…

What is the Spotify model for Agile?

The other day someone said to me that “they use the Spotify development model”, and I said “you who the what now?”. It was a super productive conversation that I am quite proud of.

So… in order to look like less of a n00b in the next conversation, what is the “Spotify development model”? Well, it turns out that Spotify came up with a series of tweaks to the standard Agile process in order to scale their engineering teams. If you google for “spotify development model” or “spotify agile” you’ll get lots and lots of third party blog posts about what Spotify did (I guess a bit like this one), but it’s surprisingly hard to find primary sources. The best I’ve found so far is this Quora answer from a former VP of Engineering at Spotify, although some of the resources he links to no longer exist.


Quick hack: extracting the contents of a Docker image to disk

For various reasons, I wanted to inspect the contents of a Docker image without starting a container. Docker makes it easy to get an image as a tar file, like this:

docker save -o foo.tar image

But if you extract that tar file you’ll find a configuration file and manifest as JSON files, and then a series of tar files, one per image layer. You use the manifest to determine in what order you extract the tar files to build the container filesystem.
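
For reference, the manifest is a JSON list with one entry per image, and the Layers list is ordered from the base layer upwards. The hashes here are invented:

[
    {
        "Config": "a428de44a9059f31a59237a5881c2d2cffa93757d99026156e4ea5445779b7f3.json",
        "RepoTags": ["image:latest"],
        "Layers": [
            "dcd8b861813bef0a98effb4e42b0e00ceee4724d27299a70d4a4d2386d7c22a7/layer.tar",
            "6a6c58d7e105e2c7b11b2a5f83e7ba70f3b06d737a7e61330a031364967f7571/layer.tar"
        ]
    }
]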

That’s fiddly and annoying. So I wrote this quick python hack to extract an image tarball into a directory on disk that I could inspect:

#!/usr/bin/python3

# Call me like this:
#  docker-image-extract tarfile.tar extracted

import tarfile
import json
import os
import sys

image_path = sys.argv[1]
extracted_path = sys.argv[2]

image = tarfile.open(image_path)
manifest = json.loads(image.extractfile('manifest.json').read())

# Layers are listed from the base image upwards, so extracting them in
# order means later layers correctly overwrite earlier ones.
for layer in manifest[0]['Layers']:
    print('Found layer: %s' % layer)
    layer_tar = tarfile.open(fileobj=image.extractfile(layer))

    for tarinfo in layer_tar:
        print('  ... %s' % tarinfo.name)
        if tarinfo.isdev():
            print('  --> skip device files')
            continue

        dest = os.path.join(extracted_path, tarinfo.name)
        # Use lexists() so stale symlinks, including dangling ones which
        # os.path.exists() misses, are removed before extraction.
        if not tarinfo.isdir() and os.path.lexists(dest):
            print('  --> remove old version of file')
            os.unlink(dest)

        layer_tar.extract(tarinfo, path=extracted_path)

Hopefully that’s useful to someone else (or future me).

Mastermind in JavaScript

I’ve been learning JavaScript for the last few days, and I figured I’d implement Jacqui’s favourite board game as a learning exercise. Jacqui loves a simple colour guessing game called Mastermind. In the game someone picks four coloured pins and then the player has to progressively guess what those colours are.

In my JavaScript version the computer picks four colours, and you need to work out what they are. Click on the white squares to cycle through colours and then hit the “guess” button when you’re ready to see how many you got right. The gray boxes in the top row will progressively reveal their colours as you guess them.
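
The scoring logic is the interesting bit: count the positions where the guess matches the secret. Here is a rough sketch of that step, in Python for brevity (the game itself is of course JavaScript):

#!/usr/bin/python3

# Score a Mastermind guess: the number of pins that are the right
# colour in the right position.

import random

COLOURS = ['red', 'green', 'blue', 'yellow', 'purple', 'orange']
secret = [random.choice(COLOURS) for _ in range(4)]

def score(guess):
    return sum(1 for g, s in zip(guess, secret) if g == s)

print(score(['red', 'green', 'blue', 'yellow']))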

The code is here, and the game can be played here.

A nerd snipe, in which I reverse engineer the Aussie Broadband usage API

I was curious about the newly available FTTN NBN service in my area, so I signed up to see what’s what. Of course, I need a usage API so that I can graph my usage in Prometheus and Grafana as everyone does these days. So I asked Aussie. The response I got was that I was welcome to reverse engineer the REST API that the customer portal uses.

So I did.

I give you my super simple implementation of an Aussie Broadband usage client in Python. Patches of course are welcome.

I’ve now released the library on pypi under the rather innovative name of “aussiebb”, so installing it is as simple as:

$ pip install aussiebb

Raspberry Pi HAT identity EEPROMs, a simple guide

I’ve been working recently on an RFID scanner that can best be described as an overly large Raspberry Pi HAT. One of the things I am grappling with as I get closer to production boards is that I need to be able to identify what version of the HAT is currently installed — the software can then tweak its behaviour based on the hardware present.

I had toyed with using some spare GPIO lines and “hard coded” links on the HAT to identify board versions to the Raspberry Pi, but it turns out others have been here before and there’s a much better way. The Raspberry Pi folks have defined something called the “Hardware Attached on Top” (HAT) specification, which defines an i2c EEPROM that can be used to identify a HAT to the Raspberry Pi.

There are a couple of good resources I’ve found that help you do this thing — SparkFun have a tutorial which covers it, and there is an interesting forum post. However, I couldn’t find a simple tutorial for HAT designers that covered exactly what they need to know and nothing else. There were also some gaps in those documents compared with my experiences, and I knew I’d need to look this stuff up again in the future. So I wrote this page.

Initial setup

First off, let’s talk about the hardware. I used a 24LC256P DIL i2c EEPROM — these are $2 on eBay, or $6 from Jaycar. The pins need to be wired like this:

24LC256P Pin   Raspberry Pi Pin                          Notes
1 (A0)         GND (pins 6, 9, 14, 20, 25, 30, 34, 39)   All address pins are tied to ground, which places the EEPROM at address 0x50. This is the address the specification requires.
2 (A1)         GND
3 (A2)         GND
4 (VSS)        GND
5 (SDA)        27                                        Also add a 3.9K pullup resistor from EEPROM pin 5 to 3.3V. You must use this pin for the Raspberry Pi to detect the EEPROM on startup!
6 (SCL)        28                                        Also add a 3.9K pullup resistor from EEPROM pin 6 to 3.3V. You must use this pin for the Raspberry Pi to detect the EEPROM on startup!
7 (WP)         Not connected                             Write protect. I don’t need this.
8 (VCC)        3.3V (pins 1 or 17)                       The EEPROM is capable of being run at 5 volts, but must be run at 3.3 volts to work as a HAT identification EEPROM.

The specification requires that the data pin be on pin 27, the clock pin be on pin 28, and that the EEPROM be at address 0x50 on the i2c bus, as described in the table above. There is also some mention of pullup resistors in both the data sheet and the HAT specification, but not in a lot of detail. The best I could find was a circuit diagram for a different EEPROM with the pullup resistors shown.

My test EEPROM wired up on a little breadboard looks like this:

My prototype i2c EEPROM circuit

And has a circuit diagram like this:

An ID EEPROM circuit

Next, enable i2c on your Raspberry Pi (raspi-config can do the basic enablement). You also need to hand edit /boot/config.txt and then reboot. The relevant line of my config.txt looks like this:

dtparam=i2c_vc=on

After reboot you should have an entry at /dev/i2c-0.

Now time for our first gotcha — the version detection i2c bus (bus 0) is only enabled during boot and then turned off, so you can’t probe it afterwards: an i2cdetect on bus zero won’t show the device post boot. I also couldn’t get flashing the EEPROM to work on that bus. The silent probing caused an initial panic attack because I thought my EEPROM was dead, but that was just my twitchy nature showing through.

You can verify your EEPROM works by enabling bus one. To do this, add these lines to /boot/config.txt:

dtparam=i2c_arm=on
dtparam=i2c_vc=on

After a reboot you should have /dev/i2c-0 and /dev/i2c-1. You also need to move the EEPROM to bus 1 in order for it to be detected:

24LC256P Pin   Raspberry Pi Pin
5 (SDA)        3
6 (SCL)        5
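
After a reboot with the EEPROM wired this way, you can probe for it (i2cdetect is in the i2c-tools package); the EEPROM should appear at address 0x50:

$ sudo i2cdetect -y 1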

You’ll need to move the EEPROM back before you can use it for HAT detection.

Programming the EEPROM

You program the EEPROM with a set of tools provided by the Raspberry Pi folks. Check those out and compile them — they’re not packaged for Raspbian as far as I can tell:

pi@raspberrypi:~ $ git clone https://github.com/raspberrypi/hats
Cloning into 'hats'...
remote: Enumerating objects: 464, done.
remote: Total 464 (delta 0), reused 0 (delta 0), pack-reused 464
Receiving objects: 100% (464/464), 271.80 KiB | 119.00 KiB/s, done.
Resolving deltas: 100% (261/261), done.
pi@raspberrypi:~ $ cd hats/eepromutils/
pi@raspberrypi:~/hats/eepromutils $ ls
eepdump.c    eepmake.c            eeptypes.h  README.txt
eepflash.sh  eeprom_settings.txt  Makefile
pi@raspberrypi:~/hats/eepromutils $ make
cc eepmake.c -o eepmake -Wno-format
cc eepdump.c -o eepdump -Wno-format

The file named eeprom_settings.txt is a sample of the settings for your HAT. As a rough sketch, the fields I needed for my board look like this (eepmake auto-generates the UUID when product_uuid is left as zeroes):
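
product_uuid 00000000-0000-0000-0000-000000000000
product_id 0x0001
product_ver 0x0008
vendor "madebymikal.com"
product "GangScan"

Fiddle with the settings until they make you happy, and then compile the file: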

$ eepmake eeprom_settings.txt eeprom_settings.eep
Opening file eeprom_settings.txt for read
UUID=b9e3b4e9-e04f-4759-81aa-8334277204eb
Done reading
Writing out...
Done.

And then we can flash our EEPROM, remembering that I’ve only managed to get flashing to work while the EEPROM is on bus 1 (pins 3 and 5):

$ sudo sh eepflash.sh -w -f=eeprom_settings.eep -t=24c256 -d=1
This will attempt to talk to an eeprom at i2c address 0xNOT_SET on bus 1. Make sure there is an eeprom at this address.
This script comes with ABSOLUTELY no warranty. Continue only if you know what you are doing.
Do you wish to continue? (yes/no): yes
Writing...
0+1 records in
0+1 records out
107 bytes copied, 0.595252 s, 0.2 kB/s
Closing EEPROM Device.
Done.

Now move the EEPROM back to bus 0 (pins 27 and 28) and reboot. You should end up with entries in the device tree for the HAT. I get:

$ cd /proc/device-tree/hat/
$ for item in *
> do
>   echo "$item: "`cat $item`
>   echo
> done
name: hat

product: GangScan

product_id: 0x0001

product_ver: 0x0008

uuid: b9e3b4e9-e04f-4759-81aa-8334277204eb

vendor: madebymikal.com

Now I can have my code detect if the HAT is present, and if so what version. Comments welcome!