This book discusses science and technical communication from the perspective of someone who comes from professional theatre and acting. Alan explains how his accidental discovery of the application of theatre sports to communication created an opportunity to teach technical communicators how to be more effective. Essentially, the argument is that empathy is essential to communication — you need to be able to understand where your audience is starting from and where they’re likely to get stuck before you can take them on the journey.
Unsurprisingly given the topic of the book, this is a well written and engaging read. The book is nicely structured and uses regular anecdotes (some of them humorous) to get its message across.
A detailed and fun read.
If I Understood You, Would I Have This Look on My Face?
June 6, 2017
NEW YORK TIMES BESTSELLER • Award-winning actor Alan Alda tells the fascinating story of his quest to learn how to communicate better, and to teach others to do the same. With his trademark humor and candor, he explores how to develop empathy as the key factor. “Invaluable.”—Deborah Tannen, #1 New York Times bestselling author of You’re the Only One I Can Tell and You Just Don’t Understand

Alan Alda has been on a decades-long journey to discover new ways to help people communicate and relate to one another more effectively. If I Understood You, Would I Have This Look on My Face? is the warm, witty, and informative chronicle of how Alda found inspiration in everything from cutting-edge science to classic acting methods. His search began when he was host of PBS’s Scientific American Frontiers, where he interviewed thousands of scientists and developed a knack for helping them communicate complex ideas in ways a wide audience could understand—and Alda wondered if those techniques held a clue to better communication for the rest of us.

In his wry and wise voice, Alda reflects on moments of miscommunication in his own life, when an absence of understanding resulted in problems both big and small. He guides us through his discoveries, showing how communication can be improved through learning to relate to the other person: listening with our eyes, looking for clues in another’s face, using the power of a compelling story, avoiding jargon, and reading another person so well that you become “in sync” with them, and know what they are thinking and feeling—especially when you’re talking about the hard stuff.

Drawing on improvisation training, theater, and storytelling techniques from a life of acting, and with insights from recent scientific studies, Alda describes ways we can build empathy, nurture our innate mind-reading abilities, and improve the way we relate and talk with others.
Exploring empathy-boosting games and exercises, If I Understood You is a funny, thought-provoking guide that can be used by all of us, in every aspect of our lives—with our friends, lovers, and families, with our doctors, in business settings, and beyond. “Alda uses his trademark humor and a well-honed ability to get to the point, to help us all learn how to leverage the better communicator inside each of us.”—Forbes “Alda, with his laudable curiosity, has learned something you and I can use right now.”—Charlie Rose
Prometheus v2.12 added active query logging. The basic idea is that there is an mmapped JSON file that contains all of the queries currently running. If Prometheus were to crash, that file would therefore be a list of the queries running at the time of the crash. Overall, not a bad idea.
Some friends and I recently added Prometheus to our development environments. It is wired up to Grafana dashboards for our microservices, and Prometheus is configured to store 14 days’ worth of time series data on a persistent volume on the developer desktops. We did this because it is valuable for developers to be able to see the history of metrics before and after their changes.
Now we have a developer using macOS as his primary development platform, and since Prometheus 2.12 it hasn’t worked. Specifically, this developer is using Parallels to provide the Docker virtual machine on his Mac. You can summarise the startup of Prometheus in the dev environment like this:
$ docker run ...stuff...
level=error ts=2019-09-15T02:20:23.520Z caller=query_logger.go:94 component=activeQueryTracker msg="Failed to mmap" file=/prometheus-data/data/queries.active Attemptedsize=20001 err="invalid argument"
panic: Unable to create mmap-ed active query log
goroutine 1 [running]:
github.com/prometheus/prometheus/promql.NewActiveQueryTracker(0x7fff9917af38, 0x15, 0x14, 0x2a6b7c0, 0xc00003c7e0, 0x2a6b7c0)
And here’s the underlying problem — because of the way the persistent data is mapped into this container (via Parallels sharing in this case), the mmap of the active queries file fails and Prometheus fails to start.
In other words, since Prometheus 2.12 your Prometheus data files have to be stored on a filesystem which supports mmap. Additionally, there is no flag to simply disable the active query logger.
So how do we work around this? Well, here’s a horrible workaround: in the data directory that is volume mapped into the container, create a symlink pointing at a path that is mmap-able inside the Docker container, even if that path doesn’t exist outside the container. For example, given that we store the Prometheus time series at $CONFIG/prometheus-data:
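A minimal sketch of that workaround, assuming Prometheus writes its active query log to queries.active under the data directory, and that /tmp inside the container is on an mmap-able filesystem:

```shell
# Replace queries.active on the host with a symlink into the container's /tmp.
# The target doesn't need to exist on the host; it only needs to resolve to an
# mmap-able filesystem inside the container.
mkdir -p "$CONFIG/prometheus-data/data"
ln -sf /tmp/queries.active "$CONFIG/prometheus-data/data/queries.active"
```

When Prometheus starts inside the container it follows the symlink and mmaps a file on the container’s local filesystem rather than on the shared volume.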
Inspired by Alastair D’Silva‘s cunning plans for world domination, I’ve been googling around for automated lego sorting systems recently. This seems like a nice tractable machine learning problem with some robotics thrown in for fun.
The other day someone said to me that “they use the Spotify development model”, and I said “you who the what now?”. It was a super productive conversation that I am quite proud of.
So… in order to look like less of a n00b in the next conversation, what is the “Spotify development model”? Well, it turns out that Spotify came up with a series of tweaks to the standard Agile process in order to scale their engineering teams. If you google for “spotify development model” or “spotify agile” you’ll get lots and lots of third party blog posts about what Spotify did (a bit like this one, I guess), but it’s surprisingly hard to find primary sources. The best I’ve found so far is this Quora answer from a former VP of Engineering at Spotify, although some of the resources he links to no longer exist.
For various reasons, I wanted to inspect the contents of a Docker image without starting a container. Docker makes it easy to get an image as a tar file, like this:
docker save -o foo.tar image
But if you extract that tar file you’ll find a configuration file and manifest as JSON files, and then a series of tar files, one per image layer. You use the manifest to determine the order in which to extract the layer tar files to build the container filesystem.
That’s fiddly and annoying. So I wrote this quick python hack to extract an image tarball into a directory on disk that I could inspect:
# Call me like this:
#  docker-image-extract tarfile.tar extracted

import json
import os
import sys
import tarfile

image_path = sys.argv[1]
extracted_path = sys.argv[2]

image = tarfile.open(image_path)
manifest = json.loads(image.extractfile('manifest.json').read())

for layer in manifest[0]['Layers']:
    print('Found layer: %s' % layer)
    layer_tar = tarfile.open(fileobj=image.extractfile(layer))

    for tarinfo in layer_tar:
        print('  ... %s' % tarinfo.name)
        if tarinfo.isdev():
            print('  --> skip device files')
            continue

        dest = os.path.join(extracted_path, tarinfo.name)
        if not tarinfo.isdir() and os.path.exists(dest):
            print('  --> remove old version of file')
            os.unlink(dest)

        layer_tar.extract(tarinfo, path=extracted_path)
Hopefully that’s useful to someone else (or future me).
I was curious about the newly available FTTN NBN service in my area, so I signed up to see what’s what. Of course, I need a usage API so that I can graph my usage in Prometheus and Grafana, as everyone does these days. So I asked Aussie. The response I got was that I was welcome to reverse engineer the REST API that the customer portal uses.
I had toyed with using some spare GPIO lines and “hard coded” links on the HAT to identify board versions to the Raspberry Pi, but it turns out others have been here before and there’s a much better way. The Raspberry Pi folks have defined something called the “Hardware Attached on Top” (HAT) specification, which defines an i2c EEPROM that can be used to identify a HAT to the Raspberry Pi.
There are a couple of good resources I’ve found that help you do this thing — sparkfun have a tutorial which covers it, and there is an interesting forum post. However, I couldn’t find a simple tutorial for HAT designers that just covered exactly what they need to know and nothing else. There were also some gaps in those documents compared with my experiences, and I knew I’d need to look this stuff up again in the future. So I wrote this page.
First off, let’s talk about the hardware. I used a 24LC256P DIL i2c EEPROM — these are $2 on ebay, or $6 from Jaycar. The pins need to be wired like this:
EEPROM pins 1 to 3 (A0 to A2) and pin 4 (Vss): GND (Raspberry Pi pins 6, 9, 14, 20, 25, 30, 34 or 39). All address pins tied to ground will place the EEPROM at address 0x50, which is the address required by the specification.

EEPROM pin 5 (SDA): Raspberry Pi pin 27. You must use this pin for the Raspberry Pi to detect the EEPROM on startup! You should also add a 3.9K pullup resistor from EEPROM pin 5 to 3.3V.

EEPROM pin 6 (SCL): Raspberry Pi pin 28. You must use this pin for the Raspberry Pi to detect the EEPROM on startup! You should also add a 3.9K pullup resistor from EEPROM pin 6 to 3.3V.

EEPROM pin 7 (WP): write protect. I don’t need this.

EEPROM pin 8 (Vcc): 3.3V (Raspberry Pi pins 1 or 17). The EEPROM is capable of being run at 5 volts, but must be run at 3.3 volts to work as a HAT identification EEPROM.
The specification requires that the data pin be on pin 27, the clock pin be on pin 28, and that the EEPROM be at address 0x50 on the i2c bus, as described above. There is also some mention of pullup resistors in both the data sheet and the HAT specification, but not in a lot of detail. The best I could find was a circuit diagram for a different EEPROM with the pullup resistors shown.
My test EEPROM wired up on a little breadboard looks like this:
And has a circuit diagram like this:
Next, enable i2c on your Raspberry Pi. You need to hand edit /boot/config.txt and then reboot. The relevant line of my config.txt looks like this:
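That line didn’t survive in this copy of the post; assuming the standard Raspberry Pi device tree parameters, enabling the bus that appears at /dev/i2c-0 looks something like this:

```
dtparam=i2c_vc=on
```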
After reboot you should have an entry at /dev/i2c-0.
Now time for our first gotcha — the version detection i2c bus is only enabled during boot and then turned off. An i2cdetect on bus zero won’t show the device post boot for this reason. This caused an initial panic attack because I thought my EEPROM was dead, but that was just my twitchy nature showing through.
You can verify your EEPROM works by enabling bus one. To do this, add these lines to /boot/config.txt:
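Those lines were also lost here; assuming the standard dtparam overlays, enabling the ARM i2c bus (which appears at /dev/i2c-1) looks something like this:

```
dtparam=i2c_arm=on
```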
After a reboot you should have /dev/i2c-0 and /dev/i2c-1. You also need to move the EEPROM to bus 1 in order for it to be detected:
Move the EEPROM’s SDA and SCL lines to the bus 1 pins on the Raspberry Pi header: SDA to physical pin 3, and SCL to physical pin 5.
You’ll need to move the EEPROM back before you can use it for HAT detection.
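With the EEPROM on bus 1, you can confirm it responds (i2cdetect comes from the i2c-tools package); the device should show up at address 0x50:

```
$ sudo i2cdetect -y 1
```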
Programming the EEPROM
You program the EEPROM with a set of tools provided by the Raspberry Pi folks. Check those out and compile them; they’re not packaged for Raspbian as far as I can tell:
pi@raspberrypi:~ $ git clone https://github.com/raspberrypi/hats
Cloning into 'hats'...
remote: Enumerating objects: 464, done.
remote: Total 464 (delta 0), reused 0 (delta 0), pack-reused 464
Receiving objects: 100% (464/464), 271.80 KiB | 119.00 KiB/s, done.
Resolving deltas: 100% (261/261), done.
pi@raspberrypi:~ $ cd hats/eepromutils/
pi@raspberrypi:~/hats/eepromutils $ ls
eepdump.c eepmake.c eeptypes.h README.txt
eepflash.sh eeprom_settings.txt Makefile
pi@raspberrypi:~/hats/eepromutils $ make
cc eepmake.c -o eepmake -Wno-format
cc eepdump.c -o eepdump -Wno-format
The file named eeprom_settings.txt is a sample of the settings for your HAT. Fiddle with that until it makes you happy, and then compile it:
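The compile command itself is missing from this copy; going by the eepromutils README, eepmake takes the settings file and an output file name (the .eep filename here is my assumption):

```
$ ./eepmake eeprom_settings.txt eeprom_settings.eep
```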
And then we can flash our EEPROM, remembering that I’ve only managed to get flashing to work while the EEPROM is on bus 1 (pins 3 and 5):
$ sudo sh eepflash.sh -w -f=eeprom_settings.eep -t=24c256 -d=1
This will attempt to talk to an eeprom at i2c address 0xNOT_SET on bus 1. Make sure there is an eeprom at this address.
This script comes with ABSOLUTELY no warranty. Continue only if you know what you are doing.
Do you wish to continue? (yes/no): yes
0+1 records in
0+1 records out
107 bytes copied, 0.595252 s, 0.2 kB/s
Closing EEPROM Device.
Now move the EEPROM back to bus 0 (pins 27 and 28) and reboot. You should end up with entries in the device tree for the HAT. I get:
$ cd /proc/device-tree/hat/
$ for item in *; do echo "$item: "`cat $item`; done
Now I can have my code detect if the HAT is present, and if so what version. Comments welcome!
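A minimal sketch of that detection in Python; the field names (product, product_ver) come from the device tree entries above, and the helper name is mine:

```python
import os

def hat_info(base='/proc/device-tree/hat/'):
    """Return the HAT identification fields as a dict, or None if no HAT."""
    if not os.path.isdir(base):
        return None
    info = {}
    for item in os.listdir(base):
        path = os.path.join(base, item)
        if not os.path.isfile(path):
            continue
        with open(path, 'rb') as f:
            # Device tree string properties are NUL terminated
            info[item] = f.read().rstrip(b'\x00').decode('utf-8', 'replace')
    return info

if __name__ == '__main__':
    info = hat_info()
    if info:
        print('HAT detected: %s version %s'
              % (info.get('product'), info.get('product_ver')))
    else:
        print('No HAT detected')
```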
This one has been on my list for a little while — a nice 10km loop around the bottom of Urambi Hill. I did it as an out and back, although there is a loop option if you cross the bridge that was my turn-around point. For the loop option, cross the bridge, run a couple of hundred metres to the left, and then cross the river again at the ford. Expect to get your feet wet if you choose that option!
Not particularly shady, but nice terrain. There is more vertical ascent than I expected, but it wasn’t crazy. I haven’t posted pictures of this run because it was super foggy when I did it so the pictures are just of white mist.