Gang Scan is an open source (and free) attendance tracking system based on custom RFID reader boards that communicate back to a server over wifi. The boards are capable of queueing scan events in the case of intermittent network connectivity, and the server provides simple reporting.
Prometheus v2.12 added active query logging. The basic idea is that there is an mmapped JSON file that contains all of the queries currently running. If Prometheus were to crash, that file would therefore be a list of the queries running at the time of the crash. Overall, not a bad idea.
Some friends recently added Prometheus to their development environments. It is wired up to Grafana dashboards for their microservices, and Prometheus is configured to store 14 days' worth of time series data via a persistent volume from the developer desktops. We did this because it is valuable for the developers to be able to see the history of metrics before and after their changes.
Now we have a developer using macOS as their primary development platform, and since Prometheus 2.12 it hasn't worked. Specifically, this developer is using Parallels to provide the Docker virtual machine on his Mac. You can summarise the startup for Prometheus in the dev environment like this:
$ docker run ...stuff...
...snip...
level=error ts=2019-09-15T02:20:23.520Z caller=query_logger.go:94 component=activeQueryTracker msg="Failed to mmap" file=/prometheus-data/data/queries.active Attemptedsize=20001 err="invalid argument"
panic: Unable to create mmap-ed active query log

goroutine 1 [running]:
github.com/prometheus/prometheus/promql.NewActiveQueryTracker(0x7fff9917af38, 0x15, 0x14, 0x2a6b7c0, 0xc00003c7e0, 0x2a6b7c0)
        /app/promql/query_logger.go:112 +0x4d2
main.main()
        /app/cmd/prometheus/main.go:361 +0x52bd
And here’s the underlying problem — because of the way the persistent data is mapped into this container (via parallels sharing in this case), the mmap of the active queries file fails and prometheus fails to start.
In other words, since prometheus 2.12 your prometheus data files have to be stored on a filesystem which supports mmap. Additionally, there is no flag to just disable the active query logger.
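If you want to see the failure mode for yourself, here's a minimal Python sketch of the same pattern. This is my simplification, not the actual Go implementation: size a file up front, then map it. On a filesystem without mmap support, the mmap call itself is what fails with "invalid argument" (EINVAL).

```python
import mmap
import os
import tempfile

# Simplified sketch of what the active query tracker does: create a
# fixed-size file and mmap it so in-flight queries can be recorded in a
# way that survives a crash.
path = os.path.join(tempfile.mkdtemp(), "queries.active")
size = 20001  # the size Prometheus attempted in the error above

with open(path, "wb") as f:
    f.truncate(size)  # the file must exist at full size before mapping

with open(path, "r+b") as f:
    # On a filesystem that doesn't support mmap (like the Parallels
    # share here), this is the call that fails with EINVAL.
    mm = mmap.mmap(f.fileno(), size)
    mm[0:5] = b'{"q":'  # writes go straight through to the mapped file
    mm.flush()
    mm.close()
```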
So how do we work around this? Well, here’s a horrible workaround — in the data directory that is volume mapped into the container, create a symlink that is to a path that is mmapable inside the docker container, even if that path doesn’t exist outside the container. For example, given that we store the prometheus time series at $CONFIG/prometheus-data:
$ ln -s /tmp/queries.active "$CONFIG/prometheus-data/queries.active"
Note that /tmp/queries.active does not exist on the developer's mac. Prometheus now starts and it's puppies and kittens the whole way down.
The graphic designer at work and I were talking, and I challenged him to come up with a resume as an animated GIF. This is where he landed…
I think it's quite clever. Need a graphic designer or video team? Consider onefishsea.
Inspired by Alastair D'Silva's cunning plans for world domination, I've been googling around for automated lego sorting systems recently. This seems like a nice tractable machine learning problem with some robotics thrown in for fun.
Some cool projects if you’re that way inclined:
This sounds like a great way to misspend some evenings to me…
The other day someone said to me that “they use the Spotify development model”, and I said “you who the what now?”. It was a super productive conversation that I am quite proud of.
So… in order to look like less of a n00b in the next conversation, what is the "Spotify development model"? Well, it turns out that Spotify came up with a series of tweaks to the standard Agile process in order to scale their engineering teams. If you google for "spotify development model" or "spotify agile" you'll get lots and lots of third party blog posts about what Spotify did (I guess a bit like this one), but it's surprisingly hard to find primary sources. The best I've found so far is this Quora answer from a former VP of Engineering at Spotify, although some of the resources he links to no longer exist.
For various reasons, I wanted to inspect the contents of a Docker image without starting a container. Docker makes it easy to get an image as a tar file, like this:
docker save -o foo.tar image
But if you extract that tar file you’ll find a configuration file and manifest as JSON files, and then a series of tar files, one per image layer. You use the manifest to determine in what order you extract the tar files to build the container filesystem.
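For reference, the manifest in a docker save tarball looks roughly like this (the hashes and tags here are made up for illustration). It is a JSON list with one entry per image, and the Layers key gives the layer tarballs in the order they should be extracted:

```json
[
  {
    "Config": "4ab4c602aa5eed5528a6620ff18a1dc4.json",
    "RepoTags": ["image:latest"],
    "Layers": [
      "1dc9b5f4c1f2e6a8b3d7c0a95e42f7b1/layer.tar",
      "a3f8e2b7d9154c6e8f0b2d4a6c8e0f2a/layer.tar"
    ]
  }
]
```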
That’s fiddly and annoying. So I wrote this quick python hack to extract an image tarball into a directory on disk that I could inspect:
#!/usr/bin/python3

# Call me like this:
#   docker-image-extract tarfile.tar extracted

import tarfile
import json
import os
import sys

image_path = sys.argv[1]
extracted_path = sys.argv[2]

image = tarfile.open(image_path)

# manifest.json is a JSON list with one entry per image in the tarball
manifest = json.loads(image.extractfile('manifest.json').read())[0]

for layer in manifest['Layers']:
    print('Found layer: %s' % layer)
    layer_tar = tarfile.open(fileobj=image.extractfile(layer))

    for tarinfo in layer_tar:
        print('  ... %s' % tarinfo.name)
        if tarinfo.isdev():
            print('  --> skip device files')
            continue

        dest = os.path.join(extracted_path, tarinfo.name)
        if not tarinfo.isdir() and os.path.exists(dest):
            print('  --> remove old version of file')
            os.unlink(dest)

        layer_tar.extract(tarinfo, path=extracted_path)
Hopefully that’s useful to someone else (or future me).
I was curious about the newly available FTTN NBN service in my area, so I signed up to see what’s what. Of course, I need a usage API so that I can graph my usage in prometheus and grafana as everyone does these days. So I asked Aussie. The response I got was that I was welcome to reverse engineer the REST API that the customer portal uses.
So I did.
I give you my super simple implementation of an Aussie Broadband usage client in Python. Patches of course are welcome.
I’ve now released the library on pypi under the rather innovative name of “aussiebb”, so installing it is as simple as:
$ pip install aussiebb
I've recently been working on an RFID scanner that can best be described as an overly large Raspberry Pi HAT. One of the things I am grappling with as I get closer to production boards is that I need to be able to identify what version of the HAT is currently installed, so that the software can tweak its behaviour based on the hardware present.
I had toyed with using some spare GPIO lines and "hard coded" links on the HAT to identify board versions to the Raspberry Pi, but it turns out others have been here before and there's a much better way. The Raspberry Pi folks have defined something called the "Hardware Attached on Top" (HAT) specification, which defines an i2c EEPROM that can be used to identify a HAT to the Raspberry Pi.
There are a couple of good resources I’ve found that help you do this thing — sparkfun have a tutorial which covers it, and there is an interesting forum post. However, I couldn’t find a simple tutorial for HAT designers that just covered exactly what they need to know and nothing else. There were also some gaps in those documents compared with my experiences, and I knew I’d need to look this stuff up again in the future. So I wrote this page.
First off, let's talk about the hardware. I used a 24LC256P DIL i2c EEPROM: these are $2 on ebay, or $6 from Jaycar. The pins need to be wired like this:
| 24LC256P Pin | Raspberry Pi Pin | Notes |
|---|---|---|
| 1 (A0) | GND (pins 6, 9, 14, 20, 25, 30, 34, 39) | All address pins (A0, A1, A2) tied to ground will place the EEPROM at address 0x50, the address required by the specification. Tie pin 4 (VSS) to ground as well. |
| 5 (SDA) | 27 (ID_SD) | You must use this pin for the Raspberry Pi to detect the EEPROM on startup! Also add a 3.9K pullup resistor from EEPROM pin 5 to 3.3V. |
| 6 (SCL) | 28 (ID_SC) | You must use this pin for the Raspberry Pi to detect the EEPROM on startup! Also add a 3.9K pullup resistor from EEPROM pin 6 to 3.3V. |
| 7 (WP) | Not connected | Write protect. I don't need this. |
| 8 (VCC) | 3.3V (pins 1 or 17) | The EEPROM is capable of being run at 5 volts, but must be run at 3.3 volts to work as a HAT identification EEPROM. |
The specification requires that the data pin be on pin 27, the clock pin be on pin 28, and that the EEPROM be at address 50 on the i2c bus as described in the table above. There is also some mention of pullup resistors in both the data sheet and the HAT specification, but not in a lot of detail. The best I could find was a circuit diagram for a different EEPROM with the pullup resistors shown.
My test EEPROM wired up on a little breadboard looks like this:
And has a circuit diagram like this:
Next, enable i2c on your Raspberry Pi. You need to hand edit /boot/config.txt and then reboot. The relevant line of my config.txt looks like this:
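From memory, and worth double checking against the Raspberry Pi dtparam documentation, the parameter that exposes the firmware i2c bus as /dev/i2c-0 is:

```
dtparam=i2c_vc=on
```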
After reboot you should have an entry at /dev/i2c-0.
Now, time for our first gotcha: the version detection i2c bus is only enabled during boot and then turned off, so an i2cdetect on bus zero won't show the device post boot. This caused an initial panic attack because I thought my EEPROM was dead, but that was just my twitchy nature showing through. I also couldn't get flashing the EEPROM to work on that bus.
You can verify your EEPROM works by enabling bus one. To do this, add these lines to /boot/config.txt:
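I believe the lines you want are these (the first keeps bus 0 available, the second enables bus 1; again, check the dtparam documentation for your firmware):

```
dtparam=i2c_vc=on
dtparam=i2c_arm=on
```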
After a reboot you should have /dev/i2c-0 and /dev/i2c-1. You also need to move the EEPROM to bus 1 in order for it to be detected:
| 24LC256P Pin | Raspberry Pi Pin | Notes |
|---|---|---|
| 5 (SDA) | 3 (SDA1) | Bus 1 data line. |
| 6 (SCL) | 5 (SCL1) | Bus 1 clock line. |
You’ll need to move the EEPROM back before you can use it for HAT detection.
Programming the EEPROM
You program the EEPROM with a set of tools provided by the Raspberry Pi folks. Check those out and compile them; they're not packaged for Raspbian as far as I can tell:
pi@raspberrypi:~ $ git clone https://github.com/raspberrypi/hats
Cloning into 'hats'...
remote: Enumerating objects: 464, done.
remote: Total 464 (delta 0), reused 0 (delta 0), pack-reused 464
Receiving objects: 100% (464/464), 271.80 KiB | 119.00 KiB/s, done.
Resolving deltas: 100% (261/261), done.
pi@raspberrypi:~ $ cd hats/eepromutils/
pi@raspberrypi:~/hats/eepromutils $ ls
eepdump.c    eepflash.sh  eepmake.c  eeprom_settings.txt  eeptypes.h  Makefile  README.txt
pi@raspberrypi:~/hats/eepromutils $ make
cc eepmake.c -o eepmake -Wno-format
cc eepdump.c -o eepdump -Wno-format
The file named eeprom_settings.txt is a sample of the settings for your HAT. Fiddle with that until it makes you happy, and then compile it:
$ eepmake eeprom_settings.txt eeprom_settings.eep
Opening file eeprom_settings.txt for read
UUID=b9e3b4e9-e04f-4759-81aa-8334277204eb
Done reading
Writing out...
Done.
And then we can flash our EEPROM, remembering that I've only managed to get flashing to work while the EEPROM is on bus 1 (pins 3 and 5):
$ sudo sh eepflash.sh -w -f=eeprom_settings.eep -t=24c256 -d=1
This will attempt to talk to an eeprom at i2c address 0xNOT_SET on bus 1. Make sure there is an eeprom at this address.
This script comes with ABSOLUTELY no warranty. Continue only if you know what you are doing.
Do you wish to continue? (yes/no): yes
Writing...
0+1 records in
0+1 records out
107 bytes copied, 0.595252 s, 0.2 kB/s
Closing EEPROM Device.
Done.
Now move the EEPROM back to bus 0 (pins 27 and 28) and reboot. You should end up with entries in the device tree for the HAT. I get:
$ cd /proc/device-tree/hat/
$ for item in *
> do
>   echo "$item: "`cat $item`
> done
name: hat
product: GangScan
product_id: 0x0001
product_ver: 0x0008
uuid: b9e3b4e9-e04f-4759-81aa-8334277204eb
vendor: madebymikal.com
Now I can have my code detect if the HAT is present, and if so what version. Comments welcome!
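As a sketch of what that detection code might look like in Python (the field names match the device tree entries above; handling the trailing NUL bytes on device tree strings is the only subtle bit):

```python
import os


def read_hat_info(base='/proc/device-tree/hat'):
    """Return a dict of HAT identity fields, or None if no HAT is present."""
    if not os.path.isdir(base):
        return None

    info = {}
    for field in ('vendor', 'product', 'product_id', 'product_ver', 'uuid'):
        path = os.path.join(base, field)
        if os.path.exists(path):
            with open(path, 'rb') as f:
                # Device tree string properties are NUL terminated
                info[field] = f.read().rstrip(b'\x00').decode('utf-8')
    return info


hat = read_hat_info()
if hat and hat.get('product') == 'GangScan':
    # product_ver is a hex string like 0x0008
    version = int(hat['product_ver'], 16)
```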
This one has been on my list for a little while: a nice 10km loop around the bottom of Urambi Hill. I did it as an out and back, although there is a loop option if you cross the bridge that was my turn-around point. For the loop option, cross the bridge, run a couple of hundred metres to the left, and then cross the river again at the ford. Expect to get your feet wet if you choose that option!
Not particularly shady, but nice terrain. There is more vertical ascent than I expected, but it wasn’t crazy. I haven’t posted pictures of this run because it was super foggy when I did it so the pictures are just of white mist.