I don’t blog about every Shaken Fist release here, but I do feel like the 0.4 release (and the subsequent minor bug fix release 0.4.1) are a pretty big deal in the life of the project.
The focus of the v0.4 series is reliability — we’ve used behaviour in the continuous integration pipeline as a proxy for that, but it should be a significant improvement in the real world as well. This has included:
much more extensive continuous integration coverage, including several new jobs.
checksumming image downloads, and retrying images where the checksum fails.
etcd reliability improvements.
refactoring instances and networks to a new “non-volatile” object model where only immutable values are cached.
images now track a state much like instances and networks.
a reworked state model for instances, where it's clearer why an instance ended up in an error state. This is documented in our developer docs.
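The checksum-and-retry pattern for image downloads is simple enough to sketch. This is my own illustration of the idea, not Shaken Fist's actual implementation — the `fetch_with_checksum` helper and its `fetch` callable parameter are assumptions:

```python
import hashlib


def fetch_with_checksum(fetch, expected_sha256, attempts=3):
    """Call fetch() until the payload matches the expected sha256 digest.

    fetch is any zero-argument callable returning bytes (for example a
    closure around an HTTP download). Retries on mismatch, then gives up.
    """
    for _ in range(attempts):
        data = fetch()
        if hashlib.sha256(data).hexdigest() == expected_sha256:
            return data
    raise IOError('checksum mismatch after %d attempts' % attempts)
```

The nice property of structuring it this way is that transient corruption (a truncated download, a flaky mirror) is retried, while a persistently wrong image fails loudly instead of being booted.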
In terms of new features, we also added:
a network ping API, which will emit ICMP ping packets on the network node onto your virtual network. We use this in testing to ensure instances booted and ended up online.
networks are now checked to ensure that they have a reasonable minimum size.
addition of a simple etcd backup and restore tool (sf-backup).
improved data upgrade of previous installations.
VXLAN ids are now randomized, and this has forced a new naming scheme for network interfaces and bridges.
we are smarter about what networks we restore on startup, and don’t restore dead networks.
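The randomized VXLAN id idea can be sketched in a few lines. Again, this is my illustration rather than Shaken Fist's actual code — the `allocate_vni` name and the collision-avoidance loop are assumptions, but the 24-bit VNI range is from the VXLAN specification:

```python
import random


def allocate_vni(in_use):
    """Pick a random 24-bit VXLAN network identifier not already in use.

    VXLAN VNIs are 24 bits, so valid values are 1 to 2**24 - 1. With
    millions of possible ids and a small in_use set, the retry loop
    terminates almost immediately in practice.
    """
    while True:
        vni = random.randint(1, (1 << 24) - 1)
        if vni not in in_use:
            return vni
```

Randomizing the id is what forces the new interface and bridge naming scheme: you can no longer derive a stable name from a small sequential network number.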
We also now require python 3.8.
Overall, Shaken Fist v0.4 makes me much more comfortable running workloads I care about than previous releases did. It's far from perfect, but we're definitely moving in the right direction.
In 2009 Harvard Business School published a draft paper entitled "Goals Gone Wild", and its abstract is quite concerning. For example:
“We identify specific side effects associated with goal setting, including a narrow focus that neglects non-goal areas, a rise in unethical behavior, distorted risk preferences, corrosion of organizational culture, and reduced intrinsic motivation.”
Are we doomed? Is all goal setting harmful? Interestingly, I came across this paper while reading Measure What Matters, which argues the exact opposite point — that OKRs provide a meaningful way to improve the productivity of an organization.
The paper starts by listing a series of examples of goal setting gone wrong: Sears' auto repair business in the early 1990s overcharging customers to meet hourly billable goals; Enron's sales targets based solely on volume and revenue and not profit; and Ford Motor Company's goal of shipping a car at a specific target price point, which resulted in significant safety failures.
The paper then provides specific examples of how goals can go wrong:
By being too specific and causing other important features of a task to be ignored — for example shipping on a specific deadline but ignoring testing adequately to achieve that deadline.
By being too numerous — employees with more than one goal tend to focus on one and ignore the others. For example, studies have shown that if you present someone with both quality and quantity goals, they will fixate on the quantity goals over the quality ones.
Inappropriate time horizon — for example, producing quarterly results by cannibalizing longer-term outcomes. Additionally, goals can be perceived as ceilings, not floors; that is, once a goal has been met, attention is diverted elsewhere instead of over-delivering on the goal.
By encouraging inappropriate risk taking or unethical behaviour — if a goal is too challenging, then an employee is encouraged to take risks they would not normally be able to justify in order to meet the goal.
Stretch goals that are not met harm employees' confidence in their abilities and impact future performance.
A narrowly focused performance goal discourages learning and collaboration with coworkers. These tasks detract from time spent on the narrowly defined target, and are therefore de-emphasised.
The paper also calls out that while most people can see some amount of intrinsic motivation in their own behaviours, goals are extrinsic motivation and can be overused when applied to an intrinsically motivated workforce.
Overall, the paper urges managers to consider whether the goals they are setting are necessary, and notes that goals should only be used in narrow circumstances.
Similarly to the super simple loaf, you want the dough to be a bit tacky when mixed — it gets runnier as the yeast does its thing, so it will be too runny if it doesn’t start out tacky.
I then just leave it on the kitchen bench under a cover for the day. In the evening it's baked like the super simple loaf — heat a high thermal mass dutch oven for 30 minutes at 230 degrees Celsius, and then bake the bread in the dutch oven for 30 minutes with the lid on, and then 12 more minutes with the lid off.
You also need to feed the starter when you make the loaf dough. That’s just 1.5 cups of flour, and a cup of warm water mixed into the starter after you’ve taken out the starter for the loaf. I tweak the flour to water ratio to keep the starter at a fairly thick consistency, and you’ll learn over time what is right. You basically want pancake batter consistency.
We keep our starter in the fridge and need to feed it (which means baking) twice a week. If we kept it on the bench we’d need to bake daily.
The book is composed of a series of essays, which discuss the trials of the OS/360 team in the mid-1960s, and uses those experiences to attempt to form a series of more general observations on the art of software development and systems engineering.
I want to be able to see the level of change between OpenStack releases. However, there are a relatively small number of changes with simply huge amounts of delta in them — they're generally large refactors, or the deletions that happen when part of a repository is spun out into its own project.
I therefore wanted to explore what was a reasonable size for a change in OpenStack so that I could decide what maximum size to filter away as likely to be a refactor. After playing with a couple of approaches, including just randomly picking a number, it seems the logical way to decide is to simply plot a histogram of the various sizes, and then pick a reasonable place on the curve as the cutoff. Due to the large range of values (from zero lines of change to over a million!), I ended up deciding a logarithmic axis was the way to go.
For the projects listed in the OpenStack compute starter kit reference set, that produces the following histogram. I feel that filtering out commits over 10,000 lines of delta is justified based on that graph. For reference, the raw histogram buckets are:
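The bucketing itself is trivial to reproduce. This is a sketch of the approach rather than the actual analysis code — the `log_buckets` helper, and taking a list of per-commit line deltas as input, are my assumptions:

```python
import math
from collections import Counter


def log_buckets(deltas, cutoff=10000):
    """Bucket commit sizes by order of magnitude (0 = 1-9 lines, 1 = 10-99,
    and so on), dropping zero-line commits and anything over the cutoff as a
    likely refactor or project spin-out."""
    kept = [d for d in deltas if 0 < d <= cutoff]
    return dict(sorted(Counter(int(math.log10(d)) for d in kept).items()))
```

Plotting those bucket counts is then just a bar chart with the bucket index on the x axis, which gives you the logarithmic view without fighting matplotlib's log-scale binning.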
This proposal was submitted for FOSDEM 2021. Given that acceptances were meant to be sent out on 25 December and it's basically a week later, I think we can assume that it's been rejected. I've recently been writing up my rejected proposals, partially because I've put in the effort to write them and they might be useful elsewhere, but also because I think it's important to demonstrate that it's not unusual for experienced speakers to be rejected from these events.
OpenStack today is a complicated beast — not only does it try to perform well for large clusters, but it also embraces a diverse set of possible implementations for hypervisors, storage, networking, and more. This was a deliberate tactical choice made by the OpenStack community years ago, forming a so called "Big Tent" for vendors to collaborate in to build Open Source cloud options. It made a lot of sense at the time to be honest. However, OpenStack today finds itself constrained by the large number of permutations it must support, ten years of software and backwards compatibility legacy, and a decreasing investment from those same vendors that OpenStack courted so actively.
Shaken Fist makes a series of simplifying assumptions that allow it to achieve a surprisingly large amount in not a lot of code. For example, it supports only one hypervisor, one hypervisor OS, one networking implementation, and lacks an image service. It tries hard to be respectful of compute resources while idle, and as fast as possible to deploy resources when requested — it's entirely possible to deploy a new VM and start it booting in less than a second, for example (if the boot image is already held in cache). Shaken Fist is likely a good choice for small deployments such as home labs and telco edge applications. It is unlikely to be a good choice for large scale compute however.
I was recently contacted about availability problems with the code for pngtools. Frankly, I’m mildly surprised anyone still uses this code, but I am happy for them to do so. I have resurrected the code, placed it on github, and included the note below on all relevant posts:
A historical note from November 2020: this code is quite old, but still actively used. I have therefore converted the old subversion repository to git and it is hosted at https://github.com/mikalstill/pngtools. I will monitor there for issues and patches and try my best to remember what I was thinking 20 years ago…
The other day we released Shaken Fist version 0.2, and I never got around to announcing it here. In fact, we’ve done a minor release since then and have another minor release in the wings ready to go out in the next day or so.
So what’s changed in Shaken Fist between version 0.1 and 0.2? Well, actually kind of a lot…
We moved from MySQL to etcd for storage of persistent state. This was partially done because we wanted distributed locking, but it was also because MySQL was a pain to work with.
Some work has gone into making the API service more production grade, although there is still some work to be done there, probably in the 0.3 release — specifically, there is a timeout if a response takes more than 300 seconds, which can be the case when launching large VMs whose disk images are not in cache.
There were also some important features added:
Authentication of API requests.
Namespaces (a bit like Kubernetes namespaces or OpenStack projects).
Resource tagging, called metadata.
Support for local mirroring of common disk images.
…and a large number of bug fixes.
Shaken Fist is also now packaged on pypi, and the deployment tooling knows how to install from packages as well as source if that’s a thing you’re interested in. You can read more at shakenfist.com, but that site is a bit of a work in progress at the moment. The new github organisation is at github.com/shakenfist.
I spent much of yesterday playing with KSM (Kernel Shared Memory, or Kernel Samepage Merging depending on which universe you come from). Unix kernels store memory in “pages” which are moved in and out of memory as a single block. On most Linux architectures pages are 4,096 bytes long.
KSM is a Linux Kernel feature which scans memory looking for identical pages, and then de-duplicates them. So instead of having two pages, we just have one and have two processes point at that same page. This has obvious advantages if you're storing lots of repeating data. Why would you be doing such a thing? Well the traditional answer is virtual machines.
Take my employer’s systems for example. We manage virtual learning environments for students, where every student gets a set of virtual machines to do their learning thing on. So, if we have 50 students in a class, we have 50 sets of the same virtual machine. That’s a lot of duplicated memory. The promise of KSM is that instead of storing the same thing 50 times, we can store it once and therefore fit more virtual machines onto a single physical machine.
For my experiments I used libvirt / KVM on Ubuntu 18.04. To ensure KSM was turned on, I needed to:
Ensure KSM is turned on. /sys/kernel/mm/ksm/run should contain a “1” if it is enabled. If it is not, just write “1” to that file to enable it.
Ensure libvirt is enabling KSM. The KSM value in /etc/defaults/qemu-kvm should be set to “AUTO”.
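Checking the first step from code is just a sysfs read. A trivial sketch — the sysfs path is the real one, but the helper name is mine:

```python
def ksm_enabled(path='/sys/kernel/mm/ksm/run'):
    """Return True if the KSM scanner is enabled.

    /sys/kernel/mm/ksm/run contains '1' when KSM is running, '0' when it
    is stopped. Writing '1' to the same file (as root) enables it.
    """
    with open(path) as f:
        return f.read().strip() == '1'
```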
My lab machines are currently set up with Shaken Fist, so I just quickly launched a few hundred identical VMs. This first graph is that experiment. It's a little hard to see here, but on three machines I consumed about 40GB of RAM with identical VMs and then waited. After three or so hours I had saved about 2,500 pages of memory.
To be honest, that's a pretty disappointing result. 2,500 4KB pages is only about 10MB of RAM, which isn't very much at all. Also, three hours is a really long time for our workload, where students often fire up their labs for a couple of hours at a time before shutting them down again. If this was as good as KSM gets, it wasn't for us.
After some pondering, I realised that KSM is configured by default to not work very well. The default value for pages_to_scan is 100, which means each scan run only inspects about half a megabyte of RAM. It would take a very very long time to scan a modern machine that way. So I tried setting pages_to_scan to 1,000,000,000 instead. One billion is an unreasonably large number for the real world, but hey. You update this number by writing a new value to /sys/kernel/mm/ksm/pages_to_scan.
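Writing that value from code is just a sysfs write. Sketch only — the path is real, the helper name is mine, and on a real machine this needs root:

```python
def set_pages_to_scan(pages, path='/sys/kernel/mm/ksm/pages_to_scan'):
    """Tell the KSM scanner how many pages to inspect per wake-up.

    The default of 100 pages is only about 400KB per scan run on 4KB
    pages, which is why KSM appears to do almost nothing out of the box.
    """
    with open(path, 'w') as f:
        f.write(str(pages))
```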
This time we get a much better result — I launched as many VMs as would fit on each machine, and then sat back and waited (well, went to bed actually). Again the graph is a bit hard to read, but what it is saying is that after 90 minutes KSM had saved me over 300GB of RAM across the three machines. It's still a little too slow for our workload, but for workloads where the VMs are relatively static that's a real saving.
Now it should be noted that setting pages_to_scan to 1,000,000,000 comes at a cost — each of these machines now has one of its 48 cores dedicated to scanning memory and deduplicating. For my workload that’s something I am ok with because my workload is not CPU bound, but it might not work for you.
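If you want to graph the savings like I did, the arithmetic is just the kernel's shared-page counter multiplied by the page size. A sketch (helper name is mine; `pages_sharing` is the sysfs counter of pages currently deduplicated into a shared page):

```python
def ksm_bytes_saved(page_size=4096, path='/sys/kernel/mm/ksm/pages_sharing'):
    """Approximate RAM saved by KSM: each entry in pages_sharing is a
    duplicate page now backed by a single shared copy."""
    with open(path) as f:
        return int(f.read().strip()) * page_size
```

Sampling that value every minute or so is all the graphs in this post are.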