On-demand Container Loading in AWS Lambda

My team at work now has a daily personal learning time called “egg time” — it’s a slightly silly story involving a manager who was good at taking some time to learn things each day, and an egg-shaped chair.

Today I decided that I should read this paper about container image loading in AWS Lambda, as recommended by Robert Collins on LinkedIn. The paper details the work they had to do to transition from all Lambda functions being packaged as relatively small zip files (250MB) to relatively large Docker containers (10GB+), while maintaining their aggressive target cold-start time of 50ms.


Interpreting whiteout files in Docker image layers

I’ve been playing with Docker images and their internal layers a little more over the last week — you can see some of my previous adventures at Manipulating Docker images without Docker installed. The general thrust of these adventures is understanding the image format and how to manipulate it by building a tool called Occy Strap, which can work with the format in useful ways. My eventual goal is to be able to build OCI compliant image bundles and then have a container runtime like runc execute them, and I must say I am getting a lot closer.

This time I was interested in the exact mechanisms used by whiteout files in those layers and how that interacts with Linux kernel overlay filesystem types.

Firstly, what is a whiteout file? Well, when you delete a file or directory from a lower layer in the Docker image, it doesn’t actually get removed from that lower layer, as layers are immutable. Instead, the uppermost layer records that the file or directory has been removed, and it is therefore no longer visible in the Docker image that the container sees. This has obvious security implications if you delete something like a password file you needed during your container build process, although there are probably better ways to deal with those problems, such as multi-stage Dockerfiles.

An image might help with the description:

Here we have a container image which is composed of four layers. Layer 1 creates two files, /a and /b. Layer 2 creates a directory, /c. Layer 3 deletes /a and creates /c/d. Finally, Layer 4 deletes /c and /c/d — let’s assume that it does this by just deleting the /c directory recursively. As far as a container using this image would be concerned, only /b exists in the container image.

A Dockerfile (which wouldn’t actually work) to create this set of history might look like:

FROM scratch
touch /a /b    # Layer 1
mkdir /c       # Layer 2
rm /a          # Layer 3
rm -rf /c      # Layer 4

The Docker image format stores each layer as a tarfile, with that tarfile containing what a Linux filesystem called AUFS would have stored for this scenario. AUFS was an early Linux overlay filesystem from around 2006 which never actually merged into the mainline Linux kernel, although it is available on Ubuntu because they maintain a patch. AUFS recorded the deletion of a file by creating a “whiteout file”, which was the name of the file prepended with .wh. — so when we deleted /a, AUFS would have created a file named .wh.a in Layer 3. Similarly, to recursively delete a directory it used a whiteout file named for the directory.

What if I wanted to replace a directory? AUFS provided an “opaque directory” that ensured the directory remained, but all of its previous content was hidden. This was done by adding a file named .wh..wh..opq inside the directory to be made opaque.

You can read quite a lot more about the Docker image format in the specification, as well as the quite interesting documentation on whiteout files.

To finish this example, the contents of the tarfile for each layer should look like this:

# Layer 1
/a                 # a file
/b                 # a file

# Layer 2
/c                 # a directory
/c/.wh..wh..opq    # a file, created as a safety measure

# Layer 3
/.wh.a             # a file
/c/d               # a file

# Layer 4
/c/.wh.d           # a file
/.wh.c             # a file
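
If you wanted to fabricate a layer like Layer 3 yourself, a minimal Python sketch might look like the following. This is my own illustration rather than how Docker builds layers, the filename layer3.tar is made up, and it ignores ownership and permissions entirely.

import io
import tarfile

# Build a Layer 3 style tarball: the deletion of /a is recorded as an empty
# file named ".wh.a", and the new file /c/d is added as normal content.
with tarfile.open('layer3.tar', 'w') as layer:
    whiteout = tarfile.TarInfo('.wh.a')   # whiteout for the deleted /a
    layer.addfile(whiteout)

    content = b'hello\n'
    info = tarfile.TarInfo('c/d')         # the newly created /c/d
    info.size = len(content)
    layer.addfile(info, io.BytesIO(content))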

So that’s all great, but it’s not actually what got me bothered. You see, modern Docker uses overlayfs, the replacement for AUFS which actually did make it into the Linux kernel. overlayfs has a similar whiteout mechanism, but it is not the same as the one in AUFS. Specifically, deleted files are recorded as character devices with 0/0 device numbers, and deleted directories are recorded with an extended filesystem attribute named “trusted.overlay.opaque” set to “y”. What I wanted to find was the transcode process in Docker which converts these AUFS style tarballs into the overlayfs format on disk while creating a container.

After a bit of digging (the code is in containerd, not moby as I had expected), the answer is here:

func OverlayConvertWhiteout(hdr *tar.Header, path string) (bool, error) {
	base := filepath.Base(path)
	dir := filepath.Dir(path)

	// if a directory is marked as opaque, we need to translate that to overlay
	if base == whiteoutOpaqueDir {
		// don't write the file itself
		return false, unix.Setxattr(dir, "trusted.overlay.opaque", []byte{'y'}, 0)
	}

	// if a file was deleted and we are using overlay, we need to create a character device
	if strings.HasPrefix(base, whiteoutPrefix) {
		originalBase := base[len(whiteoutPrefix):]
		originalPath := filepath.Join(dir, originalBase)

		if err := unix.Mknod(originalPath, unix.S_IFCHR, 0); err != nil {
			return false, err
		}
		// don't write the file itself
		return false, os.Chown(originalPath, hdr.Uid, hdr.Gid)
	}

	return true, nil
}

Effectively, as a tar file is extracted, the AUFS style whiteout entries are transcoded into overlayfs’ format. So there you go.
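
If Go isn’t your thing, here is roughly the same idea in Python. This is a sketch of the transcoding step rather than what containerd actually runs: the constant names are mine, it is Linux-only, and it needs root because mknod and the trusted.* xattr namespace are privileged operations.

import os
import stat

WHITEOUT_PREFIX = '.wh.'
WHITEOUT_OPAQUE_DIR = '.wh..wh..opq'

def convert_whiteout(path):
    """Convert one AUFS-style whiteout entry from a layer tar into
    overlayfs' on-disk format. Returns True if the entry should still be
    extracted as a normal file."""
    base = os.path.basename(path)
    directory = os.path.dirname(path)

    if base == WHITEOUT_OPAQUE_DIR:
        # An opaque directory becomes an xattr on the directory itself
        os.setxattr(directory, 'trusted.overlay.opaque', b'y')
        return False

    if base.startswith(WHITEOUT_PREFIX):
        # A deleted file becomes a 0/0 character device with the original name
        original = os.path.join(directory, base[len(WHITEOUT_PREFIX):])
        os.mknod(original, mode=stat.S_IFCHR, device=os.makedev(0, 0))
        return False

    return True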

A final note for implementers of random Docker image tools: the test suite looks quite useful here if you want to validate that what you do matches what Docker does.

Manipulating Docker images without Docker installed

Recently I’ve been playing a bit more with Docker images and Docker image repositories. I had in the past written a quick hack to let me extract files from a Docker image, but I wanted to do something a little more mature than that.

For example, sometimes you want to download an image from a Docker image repository without using Docker. Naively if you had Docker, you’d do something like this:

docker pull busybox
docker save busybox

However, that assumes that you have Docker installed on the machine downloading the images, and that’s sometimes not possible for security reasons. The most obvious example I can think of is airgapped secure environments where you need to walk the data between two networks, and the unclassified network machine doesn’t allow administrator access to install Docker.

So I wrote a little tool to do image manipulation for me. The tool is called Occy Strap, is written in Python, and is available on PyPI. That means installing it is relatively simple:

python3 -m venv ~/virtualenvs/occystrap
. ~/virtualenvs/occystrap/bin/activate
pip install occystrap

Which doesn’t require administrator permissions. There are then a few things we can do with Occy Strap.

Downloading an image from a repository and storing as a tarball

Let’s say we want to download an image from a repository and store it as a local tarball. This is a common thing to want to do in airgapped environments, for example. You could do this with Docker using docker pull followed by docker save. The Occy Strap equivalent is:

occystrap fetch-to-tarfile registry-1.docker.io library/busybox \
    latest busybox.tar

In this example we’re pulling from the Docker Hub (registry-1.docker.io), and are downloading busybox’s latest version into a tarball named busybox.tar. This tarball can be loaded with docker load -i busybox.tar on an airgapped Docker environment.

Downloading an image from a repository and storing as an extracted tarball

The tarball from the previous example contains two JSON configuration files and a series of image layers, each itself a tarball, inside the main tarball. You can write these elements to a directory instead of to a tarball if you’d like to inspect them. For example:

occystrap fetch-to-extracted registry-1.docker.io library/centos 7 \
    centos7

This example will pull from the Docker Hub the CentOS image with the tag “7”, and write the content to a directory in the current working directory called “centos7”. If you tarred centos7 up like this, you’d end up with a tarball equivalent to what fetch-to-tarfile produces, which could therefore be loaded with docker load:

cd centos7; tar -cf ../centos7.tar *
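
If you’d rather poke at the extracted directory programmatically, a small sketch like this lists the image config and the layer tarballs in the order they are applied. It assumes the directory uses the same manifest.json layout that docker save produces (which is what fetch-to-extracted writes when you aren’t using unique names), and the centos7 path is just the example from above.

import json
import os

extracted_path = 'centos7'

# Read the Docker-style manifest and print the layers in apply order
with open(os.path.join(extracted_path, 'manifest.json')) as f:
    manifest = json.load(f)

for image in manifest:
    print('Config: %s' % image['Config'])
    for layer in image['Layers']:
        print('  layer: %s' % layer)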

Downloading an image from a repository and storing it in a merged directory

In scenarios where image layers are likely to be reused between images (for example many images which share a common base layer), you can save disk space by downloading images to a directory which contains more than one image. To make this work, you need to instruct Occy Strap to use unique names for the JSON elements within the image file:

occystrap fetch-to-extracted --use-unique-names registry-1.docker.io \
    homeassistant/home-assistant latest merged_images
occystrap fetch-to-extracted --use-unique-names registry-1.docker.io \
    homeassistant/home-assistant stable merged_images
occystrap fetch-to-extracted --use-unique-names registry-1.docker.io \
    homeassistant/home-assistant 2021.3.0.dev20210219 merged_images

Each of these images includes 21 layers, but at the time of writing there are only 25 unique layers in the merged_images directory. You end up with a layout like this:

0465ae924726adc52c0216e78eda5ce2a68c42bf688da3f540b16f541fd3018c
10556f40181a651a72148d6c643ac9b176501d4947190a8732ec48f2bf1ac4fb
...
catalog.json 
cd8d37c8075e8a0195ae12f1b5c96fe4e8fe378664fc8943f2748336a7d2f2f3 
d1862a2c28ec9e23d88c8703096d106e0fe89bc01eae4c461acde9519d97b062 
d1ac3982d662e038e06cc7e1136c6a84c295465c9f5fd382112a6d199c364d20.json 
... 
d81f69adf6d8aeddbaa1421cff10ba47869b19cdc721a2ebe16ede57679850f0.json 
...
manifest-homeassistant_home-assistant-2021.3.0.dev20210219.json 
manifest-homeassistant_home-assistant-latest.json
manifest-homeassistant_home-assistant-stable.json

catalog.json is an Occy Strap specific artefact which maps which layers are used by which image. Each of the manifest files for the various images has also been given a unique name, instead of being called manifest.json.

To extract a single image from such a shared directory, use the recreate-image command:

occystrap recreate-image merged_images homeassistant/home-assistant \
    latest ha-latest.tar

Exploring the contents of layers and overwritten files

Similarly, if you’d like the layers to be expanded from their tarballs onto the filesystem, you can pass the --expand argument to fetch-to-extracted. This will also create a merged filesystem in a directory named for the manifest, containing the final state of the image (the layers applied sequentially). For example:

occystrap fetch-to-extracted --expand quay.io \
    ukhomeofficedigital/centos-base latest ukhomeoffice-centos

Note that layers delete files from previous layers using whiteout files named “.wh.$previousfilename”. These files are not processed in the expanded layers, so that they remain visible to the user. They are, however, processed in the merged filesystem named for the manifest file.
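
For the curious, that merge step is conceptually something like the sketch below. This is my own simplification rather than Occy Strap’s actual code, and it skips opaque directory markers for brevity.

import os
import shutil
import tarfile

def apply_layer(layer_path, merged_path):
    """Apply one layer tarball to a merged directory, honouring whiteouts."""
    with tarfile.open(layer_path) as layer:
        for member in layer:
            base = os.path.basename(member.name)
            if base == '.wh..wh..opq':
                # Opaque directory markers are ignored in this sketch
                continue
            if base.startswith('.wh.'):
                # A whiteout: remove the named file or directory from the
                # merged filesystem instead of extracting the marker
                victim = os.path.join(merged_path,
                                      os.path.dirname(member.name),
                                      base[len('.wh.'):])
                if os.path.isdir(victim):
                    shutil.rmtree(victim)
                elif os.path.exists(victim):
                    os.unlink(victim)
                continue
            layer.extract(member, path=merged_path)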

Coming to grips with Kubernetes in 2020: online training

There are a few online training resources I’ve had a play with while learning Kubernetes, so I figure that’s worth a quick write-up. This is a follow-on from my post about Kubernetes podcasts I’ve tried. I’ve tried three training providers so far:

  • The Linux Foundation Kubernetes course (LFS258 Kubernetes Fundamentals) is probably the “go to” resource for many people, and is often sold bundled with the certification exams. Unfortunately, it is really terrible. It is by far the worst course I’ve seen so far.
  • On the other hand, the Linux Academy Kubernetes course is really good. Its flaw is that you have to sign up to Linux Academy, which provides you with all-you-can-eat courses for a rather steep annual fee.
  • Finally, I discovered Mumshad Mannambeth’s Udemy courses, and frankly they’re excellent. He’s put a huge amount of effort into them and it really shows. Even better, with Udemy’s regular sales you can pick up his three Kubernetes courses (intro, admin certification, and developer certification) for under $50 AUD. There are even plenty of online quizzes.

If I was going to pick a course to try, I’d definitely go with Mumshad.

Coming to grips with Kubernetes in 2020: podcasts

It has become clear to me that it is time to care about Kubernetes more. I’m sure many people have cared for ages, but the things I want to build at the moment are starting to be more container based now that I am thinking more at the application layer than the cloud infrastructure layer. So how to do that? I thought I’d write down some notes on what has worked (or not) for me, in the hope it will help others. In this post, podcasts.

I thought podcasts would be an interesting way to get started with some nice overviews. This is especially true because I’m already a pretty heavy podcast user, so it was easy to slot into my existing routine. Unfortunately this hasn’t really worked out. I started with the podctl podcast, but they only ever talk about Red Hat stuff. It is very rare for a guest to not be a Red Hat employee for example. The presenters of this podcast seem to also really dislike OpenStack for reasons they never explain, which is annoying.

Then I figured maybe the Google Kubernetes podcast would be better, but it often lacks the depth I am interested in.

I am yet to find a good podcast which dives deep into the technology instead of just talking about what is in the latest release. So maybe these podcasts are useful if you’re interested in what dropped in the most recent release, but they’re neither a good nor a systematic way to get introduced to Kubernetes.

That said, I only just discovered the TGI Kubernetes YouTube channel yesterday. It is not really what I wanted in a podcast given it’s a video blog, but I think it has the potential to be interesting. I will update this post when I’ve had a chance to check it out in more depth.

Have you found a good Kubernetes podcast? Am I being wildly unfair?

Quick hack: extracting the contents of a Docker image to disk

Hello! Please note I’ve written a little python tool called Occy Strap which makes this a bit easier, and can do some fancy things around importing and exporting multiple images. You might want to read about it?

For various reasons, I wanted to inspect the contents of a Docker image without starting a container. Docker makes it easy to get an image as a tar file, like this:

docker save -o foo.tar image

But if you extract that tar file you’ll find a configuration file and manifest as JSON files, and then a series of tar files, one per image layer. You use the manifest to determine in what order you extract the tar files to build the container filesystem.

That’s fiddly and annoying. So I wrote this quick python hack to extract an image tarball into a directory on disk that I could inspect:

#!/usr/bin/python3

# Call me like this:
#  docker-image-extract tarfile.tar extracted

import tarfile
import json
import os
import sys

image_path = sys.argv[1]
extracted_path = sys.argv[2]

image = tarfile.open(image_path)
manifest = json.loads(image.extractfile('manifest.json').read())

for layer in manifest[0]['Layers']:
    print('Found layer: %s' % layer)
    layer_tar = tarfile.open(fileobj=image.extractfile(layer))

    for tarinfo in layer_tar:
        print('  ... %s' % tarinfo.name)
        if tarinfo.isdev():
            print('  --> skip device files')
            continue

        dest = os.path.join(extracted_path, tarinfo.name)
        if not tarinfo.isdir() and os.path.exists(dest):
            print('  --> remove old version of file')
            os.unlink(dest)

        layer_tar.extract(tarinfo, path=extracted_path)

Hopefully that’s useful to someone else (or future me).

Kubernetes Fundamentals: Setting up nginx ingress

I’m doing the Linux Foundation Kubernetes Fundamentals course at the moment, and I was very disappointed in the chapter on Ingress Controllers. To be honest it feels like an afterthought — there is no lab, and the provided examples don’t work if you re-type them into Kubernetes (you can’t cut and paste of course, just to add to the fun).

I found this super annoying, so I thought I’d write up my own notes on how to get nginx working as an Ingress Controller on Kubernetes.


Juno nova mid-cycle meetup summary: containers

This is the second in my set of posts discussing the outcomes from the OpenStack nova juno mid-cycle meetup. I want to focus in this post on things related to container technologies.

Nova has had container support for a while in the form of libvirt LXC. While it can be argued that this support isn’t feature complete and needs more testing, it’s certainly been around for a while. There is renewed interest in testing libvirt LXC in the gate, and a team at Rackspace appears to be working on this as I write this. We have already seen patches from this team as they fix issues they find along the way. There are no plans to remove libvirt LXC from nova at this time.

The plan going forward for LXC tempest testing is to add it as an experimental job, so that people reviewing libvirt changes can request the CI system to test LXC by using “check experimental”. This hasn’t been implemented yet, but will be advertised when it is ready. Once we’ve seen good stable results from this experimental check we will talk about promoting it to be a full blown check job in our CI system.

We have also had prototype support for Docker for some time, and by all reports Eric Windisch has been doing good work at getting this driver into a good place since it moved to stackforge. We haven’t started talking about specifics for when this driver will return to the nova code base, but I think at this stage we’re talking about Kilo at the earliest. The driver has CI now (although it’s still working through stability issues to my understanding) and is progressing well. I expect there to be a session at the Kilo summit in the nova track on the current state of this driver, and we’ll decide whether to merge it back into nova then.

There was also representation from the containers sub-team at the meetup, and they spent most of their time in a break out room coming up with a concrete proposal for what container support should look like going forward. The plan looks a bit like this:

Nova will continue to support “lowest common denominator containers”: by this I mean that things like the libvirt LXC and docker driver will be allowed to exist, and will expose the parts of containers that can be made to look like virtual machines. That is, a caller to the nova API should not need to know if they are interacting with a virtual machine or a container, it should be opaque to them as much as possible. There is some ongoing discussion about the minimum functionality we should expect from a hypervisor driver, so we can expect this minimum level of functionality to move over time.

The containers sub-team will also write a separate service which exposes a more full featured container experience. This service will work by taking a nova instance UUID, and interacting with an agent within that instance to create containers and manage them. This is interesting because it is the first time that a compute project will have an agent running inside the instance’s operating system, although other projects have had these for a while. There was also talk about the service being able to start an instance if the user didn’t already have one, or being able to declare an existing instance to be “full” and then create a new one for the next incremental container. These are interesting design issues, and I’d like to see them explored more in a specification.

This plan met with general approval within the room at the meetup, with the suggestion being that it move forward as a stackforge project as part of the compute program. I don’t think much code has been implemented yet, but I hope to see something come of these plans soon. The first step here is to create some specifications for the containers service, which we will presumably create in the nova-specs repository for want of a better place.

Thanks for reading my second post in this series. In the next post I will cover progress with the Ironic nova driver.