The Three-Body Problem

I’m torn about this book — the premise is interesting, the world is novel, and the writing is good. There is also a strong environmental theme, with a focus on the ecological impact of Chinese economic development during Mao’s Cultural Revolution.

However, despite all that the book didn’t “grab” me. I think perhaps it’s because a lot of effort is spent describing things which ultimately don’t really matter — like whether or not the desktop PC being used by one of the characters is the current model. Or perhaps it’s because I didn’t actually like any of the characters — none of them is what I would call a nice person. Or perhaps it’s an artifact of the book having been translated from Chinese, with different stylistic expectations or some such.

Either way, I don’t think I’ll finish this trilogy.

The Three-Body Problem
Cixin Liu
December 3, 2015
416

1967: Ye Wenjie witnesses Red Guards beat her father to death during China's Cultural Revolution. This singular event will shape not only the rest of her life but also the future of mankind. Four decades later, Beijing police ask nanotech engineer Wang Miao to infiltrate a secretive cabal of scientists after a spate of inexplicable suicides. Wang's investigation will lead him to a mysterious online game and immerse him in a virtual world ruled by the intractable and unpredictable interaction of its three suns. This is the Three-Body Problem and it is the key to everything: the key to the scientists' deaths, the key to a conspiracy that spans light-years and the key to the extinction-level threat humanity now faces.

All python packages require a pyproject.toml with modern pip

So last night Shaken Fist CI jobs started failing with errors like this (edited lightly for clarity):

Building wheels for collected packages: shakenfist-ci
  Building wheel for shakenfist-ci (setup.py): started
  Building wheel for shakenfist-ci (setup.py): finished with status 'error'
  error: subprocess-exited-with-error
  
  × python setup.py bdist_wheel did not run successfully.
  │ exit code: 1
  ╰─> [86 lines of output]
...
      ...setuptools/command/install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
        setuptools.SetuptoolsDeprecationWarning,
      installing to build/bdist.linux-x86_64/wheel
      running install
...
      warning: install_lib: byte-compiling is disabled, skipping.
      
      running install_egg_info
      Copying shakenfist_ci.egg-info to build/bdist.linux-x86_64/wheel/shakenfist_ci-0.0.1.dev2544-py3.7.egg-info
      running install_scripts
      error: invalid command 'bdist_wininst'
      [end of output]

This was pretty concerning. I know that a setup.py / setup.cfg style install is a little old school, but I wasn’t expecting it to break entirely. At first I thought I’d have to convert to poetry to unblock this, but Chet helpfully pointed out that the fix is as simple as adding a pyproject.toml file to the directory which contains your setup.py and setup.cfg. The basic issue is that a modern pip no longer assumes you’re using setuptools, so you need to declare that in pyproject.toml. Then you’re unblocked.

So, just create a file named pyproject.toml in the setup.py / setup.cfg directory which contains this:

[build-system]
requires = ["setuptools >= 40.6.0", "wheel"]
build-backend = "setuptools.build_meta"

And you’re good to go. If you’re really curious, this page was quite helpful in working out what was happening.

Debian 10 buster bcrypt pip install breakage

So, as of today my Shaken Fist CI jobs for Debian 10 are failing to install bcrypt, with an error that looks like this:

Running setup.py install for bcrypt: started
    Running setup.py install for bcrypt: finished with status 'error'
    [ ... snip ... ]
    running build_rust
    
        =============================DEBUG ASSISTANCE=============================
        If you are seeing a compilation error please try the following steps to
        successfully install bcrypt:
        1) Upgrade to the latest pip and try again. This will fix errors for most
           users. See: https://pip.pypa.io/en/stable/installing/#upgrading-pip
        2) Ensure you have a recent Rust toolchain installed. bcrypt requires
           rustc >= 1.56.0.
    
        Python: 3.7.3
        platform: Linux-4.19.0-21-amd64-x86_64-with-debian-10.12
        pip: 18.1
        setuptools: 65.2.0
        setuptools_rust: 1.5.1
        rustc: n/a
        =============================DEBUG ASSISTANCE=============================

I’m not really interested in debating why installing a Python package requires a Rust compiler; that has been discussed elsewhere.

This specific breakage has been caused by bcrypt releasing 4.0.0, which has this in the changelog: “bcrypt is now implemented in Rust. Users building from source will need to have a Rust compiler available. Nothing will change for users downloading wheels.”

Unfortunately, you can’t just install rustc with apt, as it is both quite big (350MB) and too old (version 1.41.1 versus the required 1.56.0 or better). I also couldn’t find an Ubuntu PPA to misuse to get a more recent rustc.

Another answer here is to use rustup, which is yet another curl-to-a-root-shell installer, and that isn’t a satisfying answer to me. The other option is of course just to pin bcrypt to pre-4.0.0, but as best I can tell I’d have to do that on every distribution, not just Debian 10.

Update: and then I re-read the ChangeLog. It turns out that pip wasn’t offering me wheels because the version of pip was too old. As long as you’re ok with not using an official Debian packaged version of pip, you can do this to get unstuck:

# pip3 install -U pip
# apt-get remove python3-pip
# /usr/local/bin/pip3 install -v bcrypt==4.0.0

postgres_log_dir error while installing Pulp 3.20

I’m new to Pulp and am installing based on the Ansible roles as documented at Getting started – Pulp Installer. During the install, I get this error:

TASK [geerlingguy.postgresql : Define postgresql_log_dir.] ************
fatal: [localhost]: FAILED! => {"msg": "The task includes an option 
with an undefined variable. The error was: 'dict object' has no 
attribute 'log_directory'\n\nThe error appears to be in 
'/root/.ansible/roles/geerlingguy.postgresql/tasks/variables.yml': 
line 58, column 3, but may\nbe elsewhere in the file depending on the 
exact syntax problem.\n\nThe offending line appears to be:\n\n\n- 
name: Define postgresql_log_dir.\n ^ here\n"}

This seems to be because a newer version of geerlingguy.postgresql has been released (version 3.4) which requires some changes to the Pulp Ansible installer. Those changes have been made in this pull request, which has been merged but not yet released. For others blocked on this, you can instead lock the version of geerlingguy.postgresql to something older, which will allow the install to work:

ansible-galaxy install geerlingguy.postgresql,3.3.1
ansible-galaxy collection install pulp.pulp_installer

Interpreting whiteout files in Docker image layers

I’ve been playing with Docker images and their internal layers a little more over the last week — you can see some of my previous adventures at Manipulating Docker images without Docker installed. The general thrust of these adventures is to understand the image format by building a tool called Occy Strap, which can manipulate that format in useful ways. My eventual goal is to be able to build OCI compliant image bundles and then have a container runtime like runc execute them, and I must say I am getting a lot closer.

This time I was interested in the exact mechanisms used by whiteout files in those layers and how that interacts with Linux kernel overlay filesystem types.

Firstly, what is a whiteout file? Well, when you delete a file or directory from a lower layer in the Docker image, it doesn’t actually get removed from that lower layer, as layers are immutable. Instead, the uppermost layer records that the file or directory has been removed, and it is therefore no longer visible in the Docker image that the container sees. This has obvious security implications if you delete something like a file containing a password that you needed during your container build process, although there are probably better ways to deal with that using multi-stage Dockerfiles.

An image might help with the description:

Here we have a container image which is composed of four layers. Layer 1 creates two files, /a and /b. Layer 2 creates a directory, /c. Layer 3 deletes /a and creates /c/d. Finally, Layer 4 deletes /c and /c/d — let’s assume that it does this by just deleting the /c directory recursively. As far as a container using this image would be concerned, only /b exists in the container image.

A Dockerfile (which wouldn’t actually work) to create this set of history might look like:

FROM scratch
touch /a /b          # Layer 1
mkdir /c             # Layer 2
rm /a; touch /c/d    # Layer 3
rm -rf /c            # Layer 4

The Docker image format stores each layer as a tarfile, with that tarfile being what a Linux filesystem called AUFS would have stored for this scenario. AUFS was an early Linux overlay filesystem from around 2006, which never actually merged into the mainline Linux kernel, although it is available on Ubuntu because they maintain a patch. AUFS recorded deletion of a file by creating a “whiteout file”, whose name is the name of the deleted file prefixed with .wh. — so when we deleted /a, AUFS would have created a file named .wh.a in Layer 3. Similarly, to recursively delete a directory, it used a whiteout file named for the directory.

What if I wanted to replace a directory? AUFS provided an “opaque directory” that ensured that the directory remained, but all of its previous content was hidden. This was done by adding a file in the directory to be made opaque with the name .wh..wh..opq.

You can read quite a lot more about the Docker image format in the specification, as well as the quite interesting documentation on whiteout files.

To finish this example, the contents of the tarfile for each layer should look like this:

# Layer 1
/a                 # a file
/b                 # a file

# Layer 2
/c                 # a directory
/c/.wh..wh..opq    # a file, created as a safety measure

# Layer 3
/.wh.a             # a file
/c/d               # a file

# Layer 4
/c/.wh.d           # a file
/.wh.c             # a file
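
To make that concrete, here is a minimal Python sketch of how a tool might apply these AUFS-style rules while flattening a stack of layer tarballs into a single directory. This is my own illustration rather than Docker’s or Occy Strap’s actual code, and the layer filenames and target directory are hypothetical:

import os
import shutil
import tarfile

WHITEOUT_PREFIX = '.wh.'
OPAQUE_MARKER = '.wh..wh..opq'


def _remove(path):
    # Remove a file or directory tree from the flattened view, if present.
    if os.path.isdir(path) and not os.path.islink(path):
        shutil.rmtree(path)
    elif os.path.lexists(path):
        os.unlink(path)


def flatten_layers(layer_tarballs, target):
    # Layers must be applied in order, lowest layer first. Assumes the tar
    # members use relative paths, as Docker layer tarballs do.
    for layer in layer_tarballs:
        with tarfile.open(layer) as archive:
            members = archive.getmembers()

            # First pass: apply whiteouts and opaque markers to the view
            # built up from the lower layers.
            for member in members:
                base = os.path.basename(member.name)
                parent = os.path.join(target, os.path.dirname(member.name))

                if base == OPAQUE_MARKER:
                    # Keep the directory itself, hide everything below it.
                    if os.path.isdir(parent):
                        for entry in os.listdir(parent):
                            _remove(os.path.join(parent, entry))
                elif base.startswith(WHITEOUT_PREFIX):
                    _remove(os.path.join(parent, base[len(WHITEOUT_PREFIX):]))

            # Second pass: extract everything that isn't a whiteout marker.
            for member in members:
                base = os.path.basename(member.name)
                if not base.startswith(WHITEOUT_PREFIX):
                    archive.extract(member, target)


# With the four example layers above, rootfs/ ends up containing only /b.
flatten_layers(['layer1.tar', 'layer2.tar', 'layer3.tar', 'layer4.tar'], 'rootfs')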

So that’s all great, but it’s not actually what got me bothered. You see, modern Docker uses overlayfs, which is the replacement for AUFS that actually made it into the Linux kernel. overlayfs has a similar whiteout mechanism, but it is not the same as the one in AUFS. Specifically, deleted files are recorded as character devices with 0/0 device numbers, and opaque directories are recorded with an extended filesystem attribute named “trusted.overlay.opaque” set to “y”. What I wanted to find was the transcode process in Docker which converts the AUFS-style tarballs into this on-disk format while creating a container.

After a bit of digging (the code is in containerd not moby as I expected), the answer is here:

func OverlayConvertWhiteout(hdr *tar.Header, path string) (bool, error) {
	base := filepath.Base(path)
	dir := filepath.Dir(path)

	// if a directory is marked as opaque, we need to translate that to overlay
	if base == whiteoutOpaqueDir {
		// don't write the file itself
		return false, unix.Setxattr(dir, "trusted.overlay.opaque", []byte{'y'}, 0)
	}

	// if a file was deleted and we are using overlay, we need to create a character device
	if strings.HasPrefix(base, whiteoutPrefix) {
		originalBase := base[len(whiteoutPrefix):]
		originalPath := filepath.Join(dir, originalBase)

		if err := unix.Mknod(originalPath, unix.S_IFCHR, 0); err != nil {
			return false, err
		}
		// don't write the file itself
		return false, os.Chown(originalPath, hdr.Uid, hdr.Gid)
	}

	return true, nil
}

Effectively, as a tar file is extracted the whiteout format is transcoded into overlayfs’ format. So there you go.
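
If you want to poke at the result on disk, here’s a small Python sketch (mine, not containerd’s) that walks an extracted overlayfs upper directory and reports both kinds of whiteout; the example path at the bottom is just a placeholder:

import os
import stat


def report_overlay_whiteouts(upperdir):
    # Walk an overlayfs upper/diff directory and report both kinds of
    # whiteout. Reading trusted.* xattrs generally requires root.
    for root, dirs, files in os.walk(upperdir):
        for name in dirs:
            path = os.path.join(root, name)
            try:
                # Opaque directories carry trusted.overlay.opaque = "y".
                if os.getxattr(path, 'trusted.overlay.opaque') == b'y':
                    print(f'opaque directory: {path}')
            except OSError:
                pass  # attribute not present

        for name in files:
            path = os.path.join(root, name)
            st = os.lstat(path)
            # Deleted files are represented as 0/0 character devices.
            if (stat.S_ISCHR(st.st_mode)
                    and os.major(st.st_rdev) == 0
                    and os.minor(st.st_rdev) == 0):
                print(f'whiteout for deleted file: {path}')


# The path here is only an example of where such a directory might live.
report_overlay_whiteouts('/var/lib/docker/overlay2/<layer-id>/diff')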

A final note for implementers of random Docker image tools: the test suite looks quite useful here if you want to validate that what you do matches what Docker does.

Linux bridges have their MTU overwritten when you add an interface

I discovered last night that network bridges on Linux have their Maximum Transmission Unit (MTU) overwritten by the MTU of the most recently added interface. This is bad. Very bad. Specifically, this is bad because the MTU matters for accurately describing the capabilities of the network path the packets will travel on, so it shouldn’t be clobbered willy-nilly.

Here’s an example of the behaviour:

# ip link add egr-br-ens1f0 mtu 1500 type bridge
# ip link show dev egr-br-ens1f0
3: egr-br-ens1f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 7e:33:1b:30:d8:00 brd ff:ff:ff:ff:ff:ff
# ip link add egr-eaa64a-o mtu 8950 type veth peer name egr-eaa64a-i
# ip link show dev egr-br-ens1f0
3: egr-br-ens1f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 7e:33:1b:30:d8:00 brd ff:ff:ff:ff:ff:ff
# brctl addif egr-br-ens1f0 egr-eaa64a-o
# ip link show dev egr-br-ens1f0
3: egr-br-ens1f0: <BROADCAST,MULTICAST> mtu 8950 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether da:82:cf:34:13:60 brd ff:ff:ff:ff:ff:ff

So you can see here that the bridge had an MTU of 1,500 bytes. We create a veth pair with an MTU of 8,950 bytes and add it to the bridge. Suddenly the bridge’s MTU is 8,950 bytes!

Perhaps this is my fault — brctl is pretty old school. Let’s use only ip commands to configure the bridge.

# ip link add mgr-br-ens1f0 mtu 1500 type bridge
# ip link show dev mgr-br-ens1f0
6: mgr-br-ens1f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 82:d8:df:15:40:01 brd ff:ff:ff:ff:ff:ff
# ip link add mgr-eaa64a-o mtu 8950 type veth peer name mgr-eaa64a-i
# ip link show dev mgr-br-ens1f0
6: mgr-br-ens1f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 82:d8:df:15:40:01 brd ff:ff:ff:ff:ff:ff
# ip link set mgr-eaa64a-o master mgr-br-ens1f0
# ip link show dev mgr-br-ens1f0
6: mgr-br-ens1f0: <BROADCAST,MULTICAST> mtu 8950 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 22:55:4a:a8:19:00 brd ff:ff:ff:ff:ff:ff

The same problem occurs. Luckily, you can specify the MTU when you add an interface to a bridge, like this:

# ip link add zgr-br-ens1f0 mtu 1500 type bridge
# ip link show dev zgr-br-ens1f0
9: zgr-br-ens1f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 7a:54:2c:04:5f:a8 brd ff:ff:ff:ff:ff:ff
# ip link add zgr-eaa64a-o mtu 8950 type veth peer name zgr-eaa64a-i
# ip link show dev zgr-br-ens1f0
9: zgr-br-ens1f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 7a:54:2c:04:5f:a8 brd ff:ff:ff:ff:ff:ff
# ip link set zgr-eaa64a-o master zgr-br-ens1f0 mtu 1500
# ip link show dev zgr-br-ens1f0
9: zgr-br-ens1f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether ae:59:0b:a6:46:94 brd ff:ff:ff:ff:ff:ff

And that works nicely. In my case, this ended up with me writing code to look up the MTU of the bridge I was adding the interface to, and then specifying that MTU back when adding the interface — something like the sketch below. I hope this helps someone else.
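
A minimal Python sketch of that workaround (my illustration, not the actual Shaken Fist code; the interface names are taken from the examples above and it needs to run as root):

import subprocess


def add_interface_to_bridge(bridge, interface):
    # Read the bridge's current MTU from sysfs...
    with open(f'/sys/class/net/{bridge}/mtu') as f:
        mtu = f.read().strip()

    # ...and pass it back explicitly when enslaving the interface, so the
    # bridge's MTU isn't clobbered by the new port.
    subprocess.check_call(
        ['ip', 'link', 'set', interface, 'master', bridge, 'mtu', mtu])


add_interface_to_bridge('egr-br-ens1f0', 'egr-eaa64a-o')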

Manipulating Docker images without Docker installed

Recently I’ve been playing a bit more with Docker images and Docker image repositories. I had in the past written a quick hack to let me extract files from a Docker image, but I wanted to do something a little more mature than that.

For example, sometimes you want to download an image from a Docker image repository without using Docker. Naively if you had Docker, you’d do something like this:

docker pull busybox
docker save busybox

However, that assumes that you have Docker installed on the machine downloading the images, and that’s sometimes not possible for security reasons. The most obvious example I can think of is airgapped secure environments where you need to walk the data between two networks, and the unclassified network machine doesn’t allow administrator access to install Docker.

So I wrote a little tool to do image manipulation for me. The tool is called Occy Strap, is written in Python, and is available on PyPI. That means installing it is relatively simple:

python3 -m venv ~/virtualenvs/occystrap
. ~/virtualenvs/occystrap/bin/activate
pip install occystrap

Which doesn’t require administrator permissions. There are then a few things we can do with Occy Strap.

Downloading an image from a repository and storing as a tarball

Let’s say we want to download an image from a repository and store it as a local tarball. This is a common thing to want to do in airgapped environments, for example. You could do this with Docker using docker pull; docker save. The Occy Strap equivalent is:

occystrap fetch-to-tarfile registry-1.docker.io library/busybox \
    latest busybox.tar

In this example we’re pulling from the Docker Hub (registry-1.docker.io), and are downloading busybox’s latest version into a tarball named busybox.tar. This tarball can be loaded with docker load -i busybox.tar on an airgapped Docker environment.

Downloading an image from a repository and storing as an extracted tarball

The format of the tarball in the previous example is two JSON configuration files and a series of image layers as tarballs inside the main tarball. You can write these elements to a directory instead of to a tarball if you’d like to inspect them. For example:

occystrap fetch-to-extracted registry-1.docker.io library/centos 7 \
    centos7

This example will pull from the Docker Hub the CentOS image with the tag “7”, and write the content to a directory in the current working directory called “centos7”. If you tarred centos7 like this, you’d end up with a tarball equivalent to what fetch-to-tarfile produces, which could therefore be loaded with docker load:

cd centos7; tar -cf ../centos7.tar *

Downloading an image from a repository and storing it in a merged directory

In scenarios where image layers are likely to be reused between images (for example many images which share a common base layer), you can save disk space by downloading images to a directory which contains more than one image. To make this work, you need to instruct Occy Strap to use unique names for the JSON elements within the image file:

occystrap fetch-to-extracted --use-unique-names registry-1.docker.io \ 
    homeassistant/home-assistant latest merged_images
occystrap fetch-to-extracted --use-unique-names registry-1.docker.io \ 
    homeassistant/home-assistant stable merged_images
occystrap fetch-to-extracted --use-unique-names registry-1.docker.io \ 
    homeassistant/home-assistant 2021.3.0.dev20210219 merged_images

Each of these images includes 21 layers, but at the time of writing the merged_images directory contains only 25 unique layers. You end up with a layout like this:

0465ae924726adc52c0216e78eda5ce2a68c42bf688da3f540b16f541fd3018c
10556f40181a651a72148d6c643ac9b176501d4947190a8732ec48f2bf1ac4fb
...
catalog.json 
cd8d37c8075e8a0195ae12f1b5c96fe4e8fe378664fc8943f2748336a7d2f2f3 
d1862a2c28ec9e23d88c8703096d106e0fe89bc01eae4c461acde9519d97b062 
d1ac3982d662e038e06cc7e1136c6a84c295465c9f5fd382112a6d199c364d20.json 
... 
d81f69adf6d8aeddbaa1421cff10ba47869b19cdc721a2ebe16ede57679850f0.json 
...
manifest-homeassistant_home-assistant-2021.3.0.dev20210219.json 
manifest-homeassistant_home-assistant-latest.json
manifest-homeassistant_home-assistant-stable.json

catalog.json is an Occy Strap specific artefact which maps which layers are used by which image. Each of the manifest files for the various images has also been given a unique name, instead of being called manifest.json.

To extract a single image from such a shared directory, use the recreate-image command:

occystrap recreate-image merged_images homeassistant/home-assistant \
    latest ha-latest.tar

Exploring the contents of layers and overwritten files

Similarly, if you’d like the layers to be expanded from their tarballs to the filesystem, you can pass the --expand argument to fetch-to-extracted. This will also create a filesystem tree named for the manifest, which is the final state of the image (the layers applied sequentially). For example:

occystrap fetch-to-extracted --expand quay.io \ 
    ukhomeofficedigital/centos-base latest ukhomeoffice-centos

Note that layers delete files from previous layers using whiteout files named “.wh.$previousfilename”. These are not processed in the expanded layers, so that they remain visible to the user. They are however processed in the merged layer named for the manifest file.

Complexity Arrangements for Sustained Innovation: Lessons From 3M Corporation

This is the second business paper I’ve read this week while reading along with my son’s university studies. The first is discussed here if you’re interested. This paper is better written, but more academic in style, which ironically makes it harder to read because its sentence structure is more complicated and harder to parse.

The takeaway for me from this paper is that 3M is good at encouraging the serendipity and opportune moments that create innovation. This is similar to Google’s attempts to build internal peer networks and its deliberate lack of structure. In 3M’s case it’s partially expressed as 15% time, which is similar to Google’s 20% time. Specifically, “eureka moments” cannot be planned or scheduled, but they do require prior engagement.

chance favors only the prepared mind — Pasteur

3M has a variety of methods for encouraging peer networks, including technology fairs, “bootlegging” (borrowing idle resources from other teams), innovation grants, and so on.

At the same time, 3M tries to keep at least a partial focus on events driven by schedules. The concept of time is important here — there is “a time to wait” (we are ahead of the market); “a time in between” (15% time); and “a time across” (several parallel efforts around related innovations to speed up the process).

The idea of “a time to wait” is quite interesting. 3M has a history of discovering things for which there is no current application, but somehow corporately remembering them so that when applications appear years later they can jump in with a solution. They embrace storytelling as part of their corporate memory, as well as a way of ensuring they learn from past successes and failures.

Finally, 3M is similar to Google in their deliberate flexibility with the rules. 15% time isn’t rigidly counted, for example — it might be 15% of a week, or 15% of a year, or more or less than that. As long as it can be justified as a good use of resources, it’s OK.

This was a good read and I enjoyed it.

 

A corporate system for continuous innovation: The case of Google Inc

So, one of my kids is studying some business units at university and was assigned this paper to read. I thought it looked interesting, so I gave it a read as well.

While not particularly well written in terms of style, this is an approachable introduction to the culture and values of Google and how they play into Google’s continued ability to innovate. The paper identifies seven important attributes of the company’s culture that promote innovation, as ranked by the interviewed employees:

  • The culture is innovation oriented.
  • They put a lot of effort into selecting individuals who will fit well with the culture at hiring time.
  • Leaders are seen as performing a facilitation role, not a directive one.
  • The organizational structure is loosely defined.
  • OKRs and aligned performance incentives.
  • A culture of organizational learning through postmortems and building internal social networks. Learning is considered a peer to peer activity that is not heavily structured.
  • External interaction — especially in the form of aggressive acquisition of skills and technologies in areas Google feels they are struggling in.

Additionally, they identify eight habits of a good leader:

  • A good coach.
  • Empower your team and don’t micro-manage.
  • Express interest in employees’ success and well-being.
  • Be productive and results oriented.
  • Be a good communicator and listen to your team.
  • Help employees with career development.
  • Have a clear vision and strategy for the team.
  • Have key technical skills, so you can help advise the team.

Overall, this paper is well worth the time to read. I enjoyed it and found it insightful.

Shaken Fist v0.4.2

Shaken Fist v0.4.2 snuck out yesterday as part of shooting this tutorial video. That’s because I really wanted to demonstrate floating IPs, which I only recently got working nicely. Overall in v0.4.2 we:

  • Improved CI for image API calls.
  • Improved upgrade CI testing.
  • Improved network state tracking.
  • Floating IPs now work, and have covering CI. shakenfist#257
  • Resolve leaks of floating IPs from both direct use and NAT gateways. shakenfist#256
  • Resolve leaks of IPManagers on network delete. shakenfist#675
  • Use system packages for ansible during install.