virtio-vsock: python examples of running the server in the guest

I've been using virtio-serial for communications between Linux hypervisors and guest virtual machines for ages. Lots of other people do it too -- the qemu guest agent, for example, is implemented like this. In fact, I think that's where I got my original thoughts on the matter from. However, virtio-serial is actually fairly terrible to write against as a programming model, because you're left to do all the multiplexing of the various requests down the channel yourself. Surely there's something better? Well... there is! virtio-vsock is basically the same concept, except that it uses the socket interface. You can have more than one connection open, and the sockets layer handles the multiplexing by magic. This massively simplifies the programming model for supporting concurrent users down the channel. So that's actually pretty cool. I should credit Kata Containers with noticing this quality of life improvement nearly a decade before I did, but I get there in the end. The virtio-vsock model is only a little bit weird. The "address" for the guest virtual machine is a "CID" (Context ID). The hypervisor process is always at CID 0, CID 1 is reserved and unused, and CID 2 is any process on the host which is not…
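
To make the excerpt a bit more concrete, here's a minimal sketch of what a guest-side vsock server can look like in Python. This isn't the exact code from the full post; it assumes Python 3.7 or later on Linux with AF_VSOCK support, and the port number (9999) is arbitrary.

```python
# Minimal sketch of a vsock echo server running inside the guest.
# Assumes Python 3.7+ on Linux with AF_VSOCK support; port 9999 is
# an arbitrary choice for illustration.
import socket

PORT = 9999

with socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) as server:
    # VMADDR_CID_ANY means "accept connections addressed to this guest's CID"
    server.bind((socket.VMADDR_CID_ANY, PORT))
    server.listen()
    while True:
        conn, (remote_cid, remote_port) = server.accept()
        with conn:
            # Each accepted connection is just a normal stream socket;
            # the sockets layer does the multiplexing for us.
            data = conn.recv(4096)
            conn.sendall(data)
```

A process on the host can then reach the guest with an ordinary connect() call, using the guest's CID and that port number as the address.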


The KSM and I

  • Post category: Linux

I spent much of yesterday playing with KSM (Kernel Shared Memory, or Kernel Samepage Merging depending on which universe you come from). Unix kernels store memory in "pages", which are moved in and out of physical memory as a single block. On most Linux architectures pages are 4,096 bytes long. KSM is a Linux kernel feature which scans memory looking for identical pages and then de-duplicates them. So instead of having two pages, we just have one, with two processes pointing at that same page. This has obvious advantages if you're storing lots of repeated data. Why would you be doing such a thing? Well, the traditional answer is virtual machines. Take my employer's systems, for example. We manage virtual learning environments for students, where every student gets a set of virtual machines to do their learning thing on. So, if we have 50 students in a class, we have 50 sets of the same virtual machine. That's a lot of duplicated memory. The promise of KSM is that instead of storing the same thing 50 times, we can store it once and therefore fit more virtual machines onto a single physical machine. For my experiments I used libvirt /…
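
As a rough illustration (not code from the original post), here's how you can ask the kernel how much memory KSM is actually saving, assuming KSM is enabled and the usual /sys/kernel/mm/ksm counters are present.

```python
# Sketch of reading the KSM sysfs counters to estimate memory saved.
# Assumes a kernel with KSM enabled and the standard /sys/kernel/mm/ksm files.
import resource
from pathlib import Path

KSM = Path("/sys/kernel/mm/ksm")
PAGE_SIZE = resource.getpagesize()  # 4,096 bytes on most Linux architectures

def ksm_stat(name: str) -> int:
    return int((KSM / name).read_text())

pages_shared = ksm_stat("pages_shared")    # de-duplicated pages actually kept
pages_sharing = ksm_stat("pages_sharing")  # mappings pointing at those kept pages

# Per the kernel's KSM documentation, pages_sharing is roughly "how much
# has been saved": each sharing reference is a page we no longer store twice.
saved_bytes = pages_sharing * PAGE_SIZE
print(f"pages_shared={pages_shared} pages_sharing={pages_sharing}")
print(f"KSM is saving roughly {saved_bytes / 2**20:.1f} MiB")
```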


Image handlers (in essex)

  • Post category: OpenStack

In the comments on my previous post about loop and nbd devices, George asks an interesting question about the behavior of this code on essex. I figured the question was worth bringing out into its own post so that it's more visible. I've edited George's question lightly so that this blog post flows reasonably. Can you please explain the order (and conditions) in which the three methods are used? In my Essex installation, the "img_handlers" is not defined in nova.conf, so it takes the default value "loop,nbd,guestfs". However, nova is using nbd as the chosen method. The handlers will be used in the order specified -- with the caveat that loop doesn't support Copy On Write (COW) images and will therefore be skipped if the libvirt driver is trying to create a COW image. Whether COW images are used is configured with the use_cow_images flag, which defaults to True. So, loop is being skipped because you're probably using COW images. My ssh keys are obtained by cloud-init, and still whenever I start a new instance I see in the nova-compute logs this sequence of events: qemu-nbd -c /dev/nbd15 /var/lib/nova/instances/instance-0000076d/disk; kpartx -a /dev/nbd15; mount /dev/mapper/nbd15p1 /tmp/tmpxGBdT0; umount /dev/mapper/nbd15p1; kpartx -d /dev/nbd15; qemu-nbd…
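
To illustrate the selection order described above, here's a small sketch in Python. It is not Nova's actual code -- just the fallback logic in miniature, with a hypothetical try_mount_with() standing in for the real attach-and-mount attempt.

```python
# Illustrative sketch (not Nova's actual code) of the handler selection
# logic described above: try each configured image handler in order,
# skip "loop" when COW images are in use, and fall through to the next
# handler if an attempt fails.
def try_mount_with(handler: str) -> bool:
    # Hypothetical stand-in for "attach the image and mount it".
    # Here we pretend only nbd succeeds, matching the behaviour in the post.
    return handler == "nbd"

def pick_image_handler(img_handlers: str = "loop,nbd,guestfs",
                       use_cow_images: bool = True) -> str:
    for handler in (h.strip() for h in img_handlers.split(",")):
        if handler == "loop" and use_cow_images:
            # loop devices can't express qcow2 COW images, so skip them
            continue
        if try_mount_with(handler):
            return handler
    raise RuntimeError("no usable image handler found")

print(pick_image_handler())  # -> "nbd"
```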

