The simplest boot target for the Kerbside SPICE VDI proxy CI

For the last couple of years I have been working on a SPICE protocol native proxy called Kerbside. The basic idea is to provide SPICE Virtual Desktop Infrastructure (VDI) consoles to users from cloud platforms such as Shaken Fist, OpenStack, or oVirt. Think Citrix, but for Open Source cloud platforms. SPICE is attractive here because it has features that other more common VDI protocols like VNC lack: good cut and paste support, USB device pass-through, multiple monitor support, and so on. RDP has these too, but RDP was not a supported VDI protocol when using qemu on Linux with KVM until very recently, literally within the last couple of months.

(In terms of clouds that Kerbside supports, I think it would be relatively trivial to also support Proxmox, KubeVirt, or a list of static manually created virtual machines, but there’s only so many things one Mikal can do at once…)

Some of these cloud platforms have supported SPICE consoles for a while, but generally with warts. OpenStack, for example, only exposes them as HTML5 transcoded sessions with reduced functionality. oVirt exposes them via a "proxy" which is just squid (or equivalent), but it's fairly dumb: it exposes the underlying hypervisor details to the client, for example. I thought I could do better than that.



virtio-vsock: Python examples of running the server in the guest

I’ve been using virtio-serial for communications between Linux hypervisors and guest virtual machines for ages. Lots of other people do it too: the qemu guest agent, for example, is implemented like this. In fact, I think that’s where I got my original thoughts on the matter from. However, virtio-serial is actually fairly terrible to write against as a programming model, because you’re left to do all the multiplexing of various requests down the channel yourself. Surely there’s something better?

Well… There is! virtio-vsock is basically the same concept, except it uses the socket interface. You can have more than one connection open and the sockets layer handles multiplexing by magic. This massively simplifies the programming model for supporting concurrent users down the channel. So that’s actually pretty cool. I should credit Kata Containers with noticing this quality of life improvement nearly a decade before I did, but I get there in the end.
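To give a flavour of how simple this is, here is a minimal sketch of a vsock echo server running inside the guest, assuming a Linux guest with the vsock driver loaded and Python 3.7 or later. The port number is just an arbitrary example, not something from the original post:

    # A minimal guest-side vsock echo server sketch.
    import socket

    PORT = 1234  # arbitrary example port

    # AF_VSOCK sockets are addressed by (CID, port) rather than (host, port).
    # VMADDR_CID_ANY means "listen on this guest's own CID".
    server = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
    server.bind((socket.VMADDR_CID_ANY, PORT))
    server.listen()

    while True:
        conn, (remote_cid, remote_port) = server.accept()
        with conn:
            # Each accepted connection is its own socket, so the kernel does
            # the multiplexing that virtio-serial makes you do by hand.
            data = conn.recv(4096)
            if data:
                conn.sendall(data)

On the host side you'd connect with an AF_VSOCK socket to the guest's CID, which is whatever guest-cid value was given to the vhost-vsock-pci device on the qemu command line.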



Exporting volumes from Cinder and re-creating COW layers

Today I wandered into a bit of a rat hole discovering how to export data from OpenStack Cinder volumes when you don't have admin permissions, and I thought it was worth documenting here so I remember it for next time.

Let's assume that you have a Cinder volume named "child1", which is a 64GB volume originally cloned from "parent1". parent1 is a 7.9GB VMDK, but the only way I can find to extract child1 is to convert it to a Glance image and then download the entire volume as a raw. Something like this:

    $ cinder upload-to-image $child1 "extract:$child1"

Where $child1 is the UUID of the Cinder volume. You then need to find the UUID of the image in Glance, which the Cinder upload-to-image command will have told you, but you can also find it by searching Glance for your image named "extract:$child1":

    $ glance image-list | grep "extract:$child1"

You now need to watch that Glance image until its status is "active". It will go through a series of steps with names like "queued" and "uploading" first. Now you can download the image from Glance:

    $ glance image-download --file images/$child1.raw --progress $glance_uuid

And then delete the intermediate glance…
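If you're scripting this, the wait-for-active step is the annoying part. Here's a small sketch of polling Glance until the image goes active, assuming the same glance CLI as above is on your PATH; the table parsing is deliberately naive and the image UUID placeholder is yours to fill in:

    # Poll Glance until the uploaded image reaches "active" status.
    import subprocess
    import time

    glance_uuid = "..."  # the image UUID from the steps above

    def image_status(uuid):
        # "glance image-show <uuid>" prints a table including a status row.
        out = subprocess.check_output(
            ["glance", "image-show", uuid], text=True)
        for line in out.splitlines():
            cells = [c.strip() for c in line.split("|")]
            if len(cells) > 2 and cells[1] == "status":
                return cells[2]
        return None

    while image_status(glance_uuid) != "active":
        time.sleep(5)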


Wow, qemu-img is fast


I wanted to determine if it's worth putting ephemeral images into the libvirt cache at all. How expensive are these images to create? They don't need to come from the image service, so it can't be too bad, right? It turns out that qemu-img is very, very fast at creating these images, based on the very small data set of my laptop with an ext4 file system...

    mikal@x220:/data/temp$ time qemu-img create -f raw disk 10g
    Formatting 'disk', fmt=raw size=10737418240

    real 0m0.315s
    user 0m0.000s
    sys 0m0.004s

    mikal@x220:/data/temp$ time qemu-img create -f raw disk 100g
    Formatting 'disk', fmt=raw size=107374182400

    real 0m0.004s
    user 0m0.000s
    sys 0m0.000s

Perhaps this is because I am using ext4, which does funky extents things when allocating blocks. However, the only ext3 file system I could find at my place is my off site backup disks, which are USB3 attached instead of the SATA2 that my laptop uses. Here's the number from there:

    $ time qemu-img create -f raw disk 100g
    Formatting 'disk', fmt=raw size=107374182400

    real 0m0.055s
    user 0m0.000s
    sys 0m0.004s

So still very, very fast. Perhaps it's the mkfs that's slow? Here's a run of creating an ext4 file system inside that 100gb file I just made on…
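Part of the explanation is likely that a raw qemu-img create just makes a sparse file, so almost nothing is actually written to disk. A quick way to check that, sketched in Python against the "disk" image created above:

    # Compare the apparent size of the image with the bytes actually
    # allocated on disk: st_blocks counts 512 byte blocks, so a freshly
    # created raw image should show almost no allocation.
    import os

    st = os.stat("disk")
    print("apparent size: %d bytes" % st.st_size)
    print("allocated on disk: %d bytes" % (st.st_blocks * 512))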


Further adventures with base images in OpenStack


I was bored over the New Years weekend, so I figured I'd have a go at implementing image cache management as discussed previously. I actually have an implementation of about 75% of that blueprint now, but it's not ready for prime time yet. The point of this post is more to document some stuff I learnt about VM startup along the way so I don't forget it later.

So, you want to start a VM on a compute node. Once the scheduler has selected a node to run the VM on, the next step is the compute instance on that machine starting the VM up. First the specified disk image is fetched from your image service (in my case Glance), and placed in a temporary location on disk. If the image is already a raw image, it is then renamed to the correct name in the instances/_base directory. If it isn't a raw image then it is converted to raw format, and that converted file is put in the right place. Optionally, the image can be extended to a specified size as part of this process.

Then, depending on if you have copy on write (COW) images turned on or…
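To make that flow concrete, here's a rough sketch in Python of the fetch, convert, and resize sequence just described. This is illustrative rather than the actual nova code: the paths, the fetch_image callable, and the helper name are all assumptions.

    # A sketch of base image placement: fetch to a temporary location,
    # convert to raw if needed, move into instances/_base, and optionally
    # extend the image. Not the real nova implementation.
    import os
    import subprocess
    import tempfile

    BASE_DIR = "/var/lib/nova/instances/_base"  # illustrative path

    def place_base_image(fetch_image, image_id, image_format, size_gb=None):
        fd, tmp = tempfile.mkstemp(dir=BASE_DIR)
        os.close(fd)
        fetch_image(image_id, tmp)  # download from the image service

        dest = os.path.join(BASE_DIR, image_id)
        if image_format == "raw":
            os.rename(tmp, dest)    # already raw: just rename into place
        else:
            # Convert to raw, then discard the temporary download.
            subprocess.check_call(
                ["qemu-img", "convert", "-O", "raw", tmp, dest])
            os.unlink(tmp)

        if size_gb is not None:
            # Optionally extend the image to the requested size.
            subprocess.check_call(
                ["qemu-img", "resize", dest, "%dG" % size_gb])
        return dest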


OpenStack compute node cleanup


I've never used OpenStack before, which I imagine is similar to many other people out there. It's actually pretty cool, although I encountered a problem the other day that I think is worthy of some more documentation.

OpenStack runs virtual machines for users, in much the same manner as Amazon's EC2 system. These instances are started from a base image, and then copy on write is used to record the differences as the instance changes things. This makes sense in a world where a given machine might be running more than one copy of the instance.

However, I encountered a compute node which was running low on disk. This is because there is currently nothing which cleans up these base images, so even if none of the instances on a machine require that image, and even if the machine is experiencing disk stress, the images still hang around. There are a few blog posts out there about this, but nothing really definitive that I could find. I've filed a bug asking for the Ubuntu package to include some sort of cleanup script, and interestingly that led me to learn that there are plans for a pretty comprehensive image management…
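For the impatient, here's a sketch in Python of the kind of thing such a cleanup script might do: find base images that no instance disk on the node references as a COW backing file. The paths are illustrative, it only prints candidates rather than deleting anything, and it is not whatever upstream eventually ships.

    # Report base images in instances/_base that no local instance disk
    # uses as a qcow2 backing file. Illustrative only: check the paths
    # match your deployment before deleting anything.
    import glob
    import json
    import os
    import subprocess

    BASE_DIR = "/var/lib/nova/instances/_base"

    def backing_file(disk_path):
        # qemu-img info can report a COW layer's backing file as JSON.
        info = json.loads(subprocess.check_output(
            ["qemu-img", "info", "--output=json", disk_path], text=True))
        return info.get("backing-filename")

    in_use = set()
    for disk in glob.glob("/var/lib/nova/instances/*/disk*"):
        bf = backing_file(disk)
        if bf:
            in_use.add(os.path.realpath(bf))

    for base in glob.glob(os.path.join(BASE_DIR, "*")):
        if os.path.realpath(base) not in in_use:
            print("unused base image:", base)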

