I happened upon a thread about OVN’s proposal for how to handle nova metadata traffic, which linked to this very good Suse blog post about how metadata traffic is routed in neutron. I’m just adding the link here because I think it will be useful to others. The OVN proposal is also an interesting read.
Tag: nova
Nova vendordata deployment, an excessively detailed guide
Nova presents configuration information to instances it starts via a mechanism called metadata. This metadata is made available via either a configdrive, or the metadata service. These mechanisms are widely used via helpers such as cloud-init to specify things like the root password the instance should use. There are three separate groups of people who need to be able to specify metadata for an instance.
User provided data
The user who booted the instance can pass metadata to the instance in several ways. For authentication keypairs, the keypairs functionality of the Nova APIs can be used to upload a key and then specify that key during the Nova boot API request. For less structured data, a small opaque blob of data may be passed via the user-data feature of the Nova API. Examples of such unstructured data would be the puppet role that the instance should use, or the HTTP address of a server to fetch post-boot configuration information from.
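To make that concrete, a boot request which supplies both a keypair and an opaque user-data file looks something like this with the nova command line client (the flavor name and the user-data file are illustrative):

$ nova keypair-add --pub-key ~/.ssh/id_rsa.pub mykey
$ nova boot --key-name mykey --user-data ./postboot-config.txt \
    --image cirros-0.3.4-x86_64-uec --flavor m1.tiny example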
Nova provided data
Nova itself needs to pass information to the instance via its internal implementation of the metadata system. Such information includes the network configuration for the instance, as well as the requested hostname for the instance. This happens by default and requires no configuration by the user or deployer.
Deployer provided data
There is however a third type of data. It is possible that the deployer of OpenStack needs to pass data to an instance. It is also possible that this data is not known to the user starting the instance. An example might be a cryptographic token to be used to register the instance with Active Directory post boot — the user starting the instance should not have access to Active Directory to create this token, but the Nova deployment might have permissions to generate the token on the user’s behalf.
Nova supports a mechanism to add “vendordata” to the metadata handed to instances. This is done by loading named modules, which must appear in the nova source code. We provide two such modules:
- StaticJSON: a module which can include the contents of a static JSON file loaded from disk. This can be used for things which don’t change between instances, such as the location of the corporate puppet server.
- DynamicJSON: a module which will make a request to an external REST service to determine what metadata to add to an instance. This is how we recommend you generate things like Active Directory tokens which change per instance.
Tell me more about DynamicJSON
Having said all that, this post is about how to configure the DynamicJSON plugin, as I think it's the most interesting bit here.
To use DynamicJSON, you configure it like this:
- Add “DynamicJSON” to the vendordata_providers configuration option. This can also include “StaticJSON” if you’d like.
- Specify the REST services to be contacted to generate metadata in the vendordata_dynamic_targets configuration option. There can be more than one of these, but note that they will be queried once per metadata request from the instance, which can mean a fair bit of traffic depending on your configuration and the configuration of the instance.
The format for an entry in vendordata_dynamic_targets is like this:
<name>@<url>
Where name is a short string not including the ‘@’ character, and where the URL can include a port number if so required. An example would be:
testing@http://127.0.0.1:125
Metadata fetched from this target will appear in the metadata service at a new file called vendor_data2.json, with a path (either in the metadata service URL or in the configdrive) like this:
openstack/2016-10-06/vendor_data2.json
For each dynamic target, there will be an entry in the JSON file named after that target. For example:
{ "testing": { "value1": 1, "value2": 2, "value3": "three" } }
Do not specify the same name more than once. If you do, we will ignore subsequent uses of a previously used name.
The following data is passed to your REST service as a JSON encoded POST:
- project-id: the UUID of the project that owns the instance
- instance-id: the UUID of the instance
- image-id: the UUID of the image used to boot this instance
- user-data: as specified by the user at boot time
- hostname: the hostname of the instance
- metadata: as specified by the user at boot time
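To make the contract clearer, here is a minimal sketch of a vendordata REST service written against the Python standard library. This is not the sample service discussed below, it skips keystone authentication entirely, and the field names it echoes back are just examples, but it shows the shape of the request and the response:

#!/usr/bin/env python
# Minimal vendordata echo service: accepts the JSON POST described
# above and returns a JSON document to be merged into vendor_data2.json.
import json
from wsgiref.simple_server import make_server


def application(environ, start_response):
    length = int(environ.get('CONTENT_LENGTH') or 0)
    raw = environ['wsgi.input'].read(length) if length else b'{}'
    request = json.loads(raw.decode('utf-8'))

    # Whatever we return here ends up under this target's name in the
    # instance's vendor_data2.json.
    reply = {'hostname_seen': request.get('hostname'),
             'project_seen': request.get('project-id')}

    body = json.dumps(reply).encode('utf-8')
    start_response('200 OK', [('Content-Type', 'application/json')])
    return [body]


if __name__ == '__main__':
    make_server('0.0.0.0', 8888, application).serve_forever()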
Deployment considerations
Nova provides authentication to external metadata services in order to provide some level of certainty that the request came from nova. This is done by providing a service token with the request — you can then just deploy your metadata service with the keystone authentication WSGI middleware. This is configured using the keystone authentication parameters in the vendordata_dynamic_auth configuration group.
This behaviour is optional, however: if you do not configure a service user, nova will not authenticate with the external metadata service.
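If you do want the request authenticated, one common way to wire the middleware in is a paste deploy pipeline in front of your WSGI metadata service. A rough sketch follows; the vendordata application factory named here is hypothetical, only the keystonemiddleware filter factory path is real, and the keystone_authtoken options themselves live in your service's configuration file as shown later in this post:

[pipeline:main]
pipeline = authtoken vendordata

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory

[app:vendordata]
# Hypothetical application factory for your own metadata service.
paste.app_factory = my_vendordata.wsgi:app_factory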
Deploying a sample vendordata service
There is a sample vendordata service that is meant to model what a deployer would use for their custom metadata at http://github.com/mikalstill/vendordata. Deploying that service is relatively simple:
$ git clone http://github.com/mikalstill/vendordata
$ cd vendordata
$ apt-get install virtualenvwrapper
$ . /etc/bash_completion.d/virtualenvwrapper (only needed if virtualenvwrapper wasn't already installed)
$ mkvirtualenv vendordata
$ pip install -r requirements.txt
We need to configure the keystone WSGI middleware to authenticate against the right keystone service. There is a sample configuration file in git, but it's configured to work with an openstack-ansible all-in-one install that I set up for my private testing, which probably isn't what you're using:
[keystone_authtoken]
insecure = False
auth_plugin = password
auth_url = http://172.29.236.100:35357
auth_uri = http://172.29.236.100:5000
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = 5dff06ac0c43685de108cc799300ba36dfaf29e4
region_name = RegionOne
Per the README file in the vendordata sample repository, you can test the vendordata server in a stand alone manner by generating a token manually from keystone:
$ curl -d @credentials.json -H "Content-Type: application/json" http://172.29.236.100:5000/v2.0/tokens > token.json
$ token=`cat token.json | python -c "import sys, json; print json.loads(sys.stdin.read())['access']['token']['id'];"`
We then include that token in a test request to the vendordata service:
curl -H "X-Auth-Token: $token" http://127.0.0.1:8888/
Configuring nova to use the external metadata service
Now we’re ready to wire up the sample metadata service with nova. You do that by adding something like this to the nova.conf configuration file:
[api]
vendordata_providers=DynamicJSON
vendordata_dynamic_targets=testing@http://metadatathingie.example.com:8888
Where metadatathingie.example.com is the IP address or hostname of the server running the external metadata service. Now if we boot an instance like this:
nova boot --image 2f6e96ca-9f58-4832-9136-21ed6c1e3b1f --flavor tempest1 --nic net-name=public --config-drive true foo
We end up with a config drive which contains the information our external metadata service returned (in the example case, handy Carrie Fisher quotes):
# cat openstack/latest/vendor_data2.json | python -m json.tool
{
    "testing": {
        "carrie_says": "I really love the internet. They say chat-rooms are the trailer park of the internet but I find it amazing."
    }
}
How we got to test_init_instance_retries_reboot_pending_soft_became_hard
I’ve been asked some questions about a recent change to nova that I am responsible for, and I thought it would be easier to address those in this format than trying to explain what’s happening in IRC. That way whenever someone compliments me on possibly the longest unit test name ever written, I can point them here.
Let’s start with some definitions. What is the difference between a soft reboot and a hard reboot in Nova? The short answer is that a soft reboot gives the operating system running in the instance an opportunity to respond to an ACPI power event gracefully before the rug is pulled out from under the instance, whereas a hard reboot just punches the instance in the face immediately.
There is a bit more complexity than that of course, because this is OpenStack. A hard reboot also re-fetches image meta-data, and rebuilds the XML description of the instance that we hand to libvirt. It also re-populates any missing backing files. Finally it ensures that the networking is configured correctly and boots the instance again. In other words, a hard reboot is kind of like an initial instance boot, in that it makes fewer assumptions about how much you can trust the current state of the instance on the hypervisor node. Additionally, a soft reboot which fails (probably because the instance operating system didn't respond to the ACPI event in a timely manner) is turned into a hard reboot after libvirt.wait_soft_reboot_seconds. So, we already perform hard reboots when a user asked for a soft reboot in certain error cases.
It's important to note that the actual reboot mechanism is similar though — it's just how patient we are and what side effects we create that change — in libvirt they both end up as a shutdown of the virtual machine and then a startup.
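That escalation timeout is a nova.conf setting in the libvirt group; it looks something like this, with the value shown being illustrative rather than a recommendation:

[libvirt]
# How long to wait for the guest to act on the ACPI shutdown request
# before the soft reboot is escalated to a hard reboot.
wait_soft_reboot_seconds = 120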
Bug 1072751 reported an interesting edge case with a soft reboot though. If nova-compute crashes after shutting down the virtual machine, but before the virtual machine is started again, then the instance is left in an inconsistent state. We can demonstrate this with a devstack installation:
Set up the right version of nova
cd /opt/stack/nova
git checkout dc6942c1218279097cda98bb5ebe4f273720115d
Patch nova so it crashes on a soft reboot
cat - > /tmp/patch <<EOF
> diff --git a/nova/virt/libvirt/driver.py b/nova/virt/libvirt/driver.py
> index ce19f22..6c565be 100644
> --- a/nova/virt/libvirt/driver.py
> +++ b/nova/virt/libvirt/driver.py
> @@ -34,6 +34,7 @@ import itertools
> import mmap
> import operator
> import os
> +import sys
> import shutil
> import tempfile
> import time
> @@ -2082,6 +2083,10 @@ class LibvirtDriver(driver.ComputeDriver):
> # is already shutdown.
> if state == power_state.RUNNING:
> dom.shutdown()
> +
> + # NOTE(mikal): temporarily crash
> + sys.exit(1)
> +
> # NOTE(vish): This actually could take slightly longer than the
> # FLAG defines depending on how long the get_info
> # call takes to return.
> EOF
patch -p1 < /tmp/patch
...now restart nova-compute inside devstack to make sure you're running
the patched version...
Boot a victim instance
cd ~/devstack
source openrc admin
glance image-list
nova boot --image=cirros-0.3.4-x86_64-uec --flavor=1 foo
Soft reboot, and verify it's gone
nova list
nova reboot cacf99de-117d-4ab7-bd12-32cc2265e906
sudo virsh list
...virsh list should now show no virtual machines running as nova-compute
crashed before it could start the instance again. However, nova-api knows that
the instance should be rebooting...
$ nova list
+--------------------------------------+------+---------+----------------+-------------+------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+------+---------+----------------+-------------+------------------+
| cacf99de-117d-4ab7-bd12-32cc2265e906 | foo | REBOOT | reboot_started | Running | private=10.0.0.3 |
+--------------------------------------+------+---------+----------------+-------------+------------------+
...now start nova-compute again, nova-compute detects the missing
instance on boot, and tries to start it up again...
sg libvirtd '/usr/local/bin/nova-compute --config-file /etc/nova/nova.conf' \
> & echo $! >/opt/stack/status/stack/n-cpu.pid; fg || \
> echo "n-cpu failed to start" | tee "/opt/stack/status/stack/n-cpu.failure"
[...snip...]
Traceback (most recent call last):
File "/opt/stack/nova/nova/conductor/manager.py", line 444, in _object_dispatch
return getattr(target, method)(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 213, in wrapper
return fn(self, *args, **kwargs)
File "/opt/stack/nova/nova/objects/instance.py", line 728, in save
columns_to_join=_expected_cols(expected_attrs))
File "/opt/stack/nova/nova/db/api.py", line 764, in instance_update_and_get_original
expected=expected)
File "/opt/stack/nova/nova/db/sqlalchemy/api.py", line 216, in wrapper
return f(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 146, in wrapper
ectxt.value = e.inner_exc
File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 195, in __exit__
six.reraise(self.type_, self.value, self.tb)
File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 136, in wrapper
return f(*args, **kwargs)
File "/opt/stack/nova/nova/db/sqlalchemy/api.py", line 2464, in instance_update_and_get_original
expected, original=instance_ref))
File "/opt/stack/nova/nova/db/sqlalchemy/api.py", line 2602, in _instance_update
raise exc(**exc_props)
UnexpectedTaskStateError: Conflict updating instance cacf99de-117d-4ab7-bd12-32cc2265e906.
Expected: {'task_state': [u'rebooting_hard', u'reboot_pending_hard', u'reboot_started_hard']}.
Actual: {'task_state': u'reboot_started'}
So what happened here? This is a bit confusing because we asked for a soft reboot of the instance, but the error we are seeing here is that a hard reboot was attempted — specifically, we're trying to update an instance object, and all the task states we expect the instance to be in relate to a hard reboot, while the task state we're actually in is for a soft reboot.
We need to take a tour of the compute manager code to understand what happened here. nova-compute is implemented at nova/compute/manager.py in the nova code base. Specifically, ComputeVirtAPI.init_host() sets up the service to start handling compute requests for a specific hypervisor node. As part of startup, this method calls ComputeVirtAPI._init_instance() once per instance on the hypervisor node. This method tries to do some sanity checking for each instance that nova thinks should be on the hypervisor:
- Detecting if the instance was part of a failed evacuation.
- Detecting instances that are soft deleted, deleting, or in an error state and ignoring them apart from a log message.
- Detecting instances which we think are fully deleted but aren't in fact gone.
- Moving instances we thought were booting, but which never completed, into an error state. This happens if nova-compute crashes during the instance startup process.
- Similarly, moving instances which were rebuilding into an error state as well.
- Clearing the task state for uncompleted tasks like snapshots or preparing for resize.
- Finishing the deletion of instances which were partially deleted last time we saw them.
- And finally, if the instance should be running but isn't, trying to reboot the instance to get it running.
It is this final state which is relevant in this case — we think the instance should be running and it's not, so we're going to reboot it. We do that by calling ComputeVirtAPI.reboot_instance(). The code which does this work looks like this:
try_reboot, reboot_type = self._retry_reboot(context, instance)
current_power_state = self._get_power_state(context, instance)

if try_reboot:
    LOG.debug("Instance in transitional state (%(task_state)s) at "
              "start-up and power state is (%(power_state)s), "
              "triggering reboot",
              {'task_state': instance.task_state,
               'power_state': current_power_state},
              instance=instance)
    self.reboot_instance(context, instance, block_device_info=None,
                         reboot_type=reboot_type)
    return
[...snip...]
def _retry_reboot(self, context, instance):
    current_power_state = self._get_power_state(context, instance)
    current_task_state = instance.task_state
    retry_reboot = False
    reboot_type = compute_utils.get_reboot_type(current_task_state,
                                                current_power_state)

    pending_soft = (current_task_state == task_states.REBOOT_PENDING and
                    instance.vm_state in vm_states.ALLOW_SOFT_REBOOT)
    pending_hard = (current_task_state == task_states.REBOOT_PENDING_HARD
                    and instance.vm_state in vm_states.ALLOW_HARD_REBOOT)
    started_not_running = (current_task_state in
                           [task_states.REBOOT_STARTED,
                            task_states.REBOOT_STARTED_HARD] and
                           current_power_state != power_state.RUNNING)

    if pending_soft or pending_hard or started_not_running:
        retry_reboot = True

    return retry_reboot, reboot_type
So, we ask ComputeVirtAPI._retry_reboot() if a reboot is required, and if so what type. ComputeVirtAPI._retry_reboot() just uses nova.compute.utils.get_reboot_type() (aliased as compute_utils.get_reboot_type) to determine what type of reboot to use. This is the crux of the matter. Read on for a surprising discovery!
nova.compute.utils.get_reboot_type() looks like this:
def get_reboot_type(task_state, current_power_state):
    """Checks if the current instance state requires a HARD reboot."""
    if current_power_state != power_state.RUNNING:
        return 'HARD'
    soft_types = [task_states.REBOOT_STARTED, task_states.REBOOT_PENDING,
                  task_states.REBOOTING]
    reboot_type = 'SOFT' if task_state in soft_types else 'HARD'
    return reboot_type
So, after all that it comes down to this. If the instance isn't running, then it's a hard reboot. In our case, we shut down the instance but haven't started it yet, so it's not running. This will therefore be a hard reboot. This is where our problem lies — we chose a hard reboot. The code doesn't blow up until later though — when we try to do the reboot itself.
@wrap_exception()
@reverts_task_state
@wrap_instance_event
@wrap_instance_fault
def reboot_instance(self, context, instance, block_device_info,
                    reboot_type):
    """Reboot an instance on this host."""
    # acknowledge the request made it to the manager
    if reboot_type == "SOFT":
        instance.task_state = task_states.REBOOT_PENDING
        expected_states = (task_states.REBOOTING,
                           task_states.REBOOT_PENDING,
                           task_states.REBOOT_STARTED)
    else:
        instance.task_state = task_states.REBOOT_PENDING_HARD
        expected_states = (task_states.REBOOTING_HARD,
                           task_states.REBOOT_PENDING_HARD,
                           task_states.REBOOT_STARTED_HARD)

    context = context.elevated()
    LOG.info(_LI("Rebooting instance"), context=context, instance=instance)

    block_device_info = self._get_instance_block_device_info(context,
                                                             instance)

    network_info = self.network_api.get_instance_nw_info(context, instance)

    self._notify_about_instance_usage(context, instance, "reboot.start")

    instance.power_state = self._get_power_state(context, instance)
    instance.save(expected_task_state=expected_states)
[...snip...]
And there’s our problem. We have a reboot_type of HARD, which means we set the expected_states to those matching a hard reboot. However, the state the instance is actually in will be one correlating to a soft reboot, because that’s what the user requested. We therefore experience an exception when we try to save our changes to the instance. This is the exception we saw above.
The fix in my patch is simply to change the current task state for an instance in this situation to one matching a hard reboot. It all just works then.
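To make the shape of that fix concrete, here is a standalone sketch of the task state normalisation it performs. This is plain Python using string constants, not the actual nova patch:

# Task state names as they appear in the traceback above.
REBOOTING = 'rebooting'
REBOOT_PENDING = 'reboot_pending'
REBOOT_STARTED = 'reboot_started'

SOFT_REBOOT_TASK_STATES = (REBOOTING, REBOOT_PENDING, REBOOT_STARTED)


def normalise_task_state(task_state, reboot_type):
    """Map a soft reboot task state to its hard equivalent if needed.

    If we decided to retry with a hard reboot, but the instance is still
    recorded as being part way through a soft reboot, rewrite the task
    state so that reboot_instance()'s expected_states check can pass.
    """
    if reboot_type == 'HARD' and task_state in SOFT_REBOOT_TASK_STATES:
        return task_state + '_hard'
    return task_state


print(normalise_task_state(REBOOT_STARTED, 'HARD'))  # reboot_started_hard
print(normalise_task_state(REBOOT_PENDING, 'SOFT'))  # reboot_pending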
So why do we decide to use a hard reboot if the current power state is not RUNNING? This code was introduced in this patch and there isn’t much discussion in the review comments as to why a hard reboot is the right choice here. That said, we already fall back to a hard reboot in error cases of a soft reboot inside the libvirt driver, and a hard reboot requires less trust of the surrounding state for the instance (block device mappings, networks and all those side effects mentioned at the very beginning), so I think it is the right call.
In conclusion, we use a hard reboot for soft reboots that fail, and a nova-compute crash during a soft reboot counts as one of those failure cases. So, when nova-compute detects a failed soft reboot, it converts it to a hard reboot and tries again.
Kilo Nova deploy recommendations
What would a Nova developer tell a deployer to think about before their first OpenStack install? This was the question I wanted to answer for my linux.conf.au OpenStack miniconf talk, and writing this essay seemed like a reasonable way to take the bullet point list of ideas we generated and turn it into something that was a cohesive story. Hopefully this essay is also useful to people who couldn’t make the conference talk.
Please understand that none of these are hard rules — what I seek is for you to consider your options and make informed decisions. It's really up to you how you deploy Nova.
Operating environment
- Consider what base OS you use for your hypervisor nodes if you're using Linux. I know that many environments have standardized on a given distribution, and that many have a preference for a long term supported release. However, Nova is at its most basic level a way of orchestrating tools packaged by your distribution via APIs. If those underlying tools are buggy, then your Nova experience will suffer as well. Sometimes we can work around known issues in older versions of our dependencies, but often those work-arounds are hard to implement (and therefore likely to be less than perfect) or have performance impacts. There are many examples of the problems you can encounter; hypervisor kernel panics and disk image corruption are just two of them. We are trying to work with distributions on ensuring they back port fixes, but the distributions might not always be willing to do that. Sometimes upgrading the base OS on your hypervisor nodes might be a better call.
- The version of Python you use matters. The OpenStack project only tests with specific versions of Python, and there can be bugs between releases. This is especially true for very old versions of Python (anything older than 2.7) and new versions of Python (Python 3 is not supported for example). Your choice of base OS will affect the versions of Python available, so this is related to the previous point.
- There are existing configuration management recipes for most configuration management systems. I'd avoid reinventing the wheel here and use the community supported recipes. There are definitely resources available for chef, puppet, juju, ansible and salt. If you're building a very large deployment from scratch consider triple-o as well. Please please please don't fork the community recipes. I know it's tempting, but contribute to upstream instead. Invariably upstream will continue developing their stuff, and if you fork you'll spend a lot of effort keeping in sync.
- Have a good plan for log collection and retention at your intended scale. The hard reality at the moment is that diagnosing Nova often requires that you turn on debug logging, which is very chatty. Whilst we're happy to take bug reports where we've gotten the log level wrong, we haven't had a lot of success at systematically fixing this issue. Your log infrastructure therefore needs to be able to handle the demands of debug logging when it's turned on. If you're using central log servers, think seriously about how much disk space they require. If you're not doing centralized syslog logging, perhaps consider something like logstash.
- Pay attention to memory usage on your controller nodes. OpenStack python processes can often consume hundreds of megabytes of virtual memory space. If you run many controller services on the same node, make sure you have enough RAM to deal with the number of processes that will, by default, be spawned for the many service endpoints. After a day or so of running a controller node, check in on the virtual memory used by the python processes and make any adjustments needed to your various “workers” configuration settings (an illustrative example follows this list).
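As an illustration of what I mean by worker settings, something like the following appears in nova.conf. The option names vary between services and releases, and the values here are examples rather than recommendations, so check the configuration reference for your release:

[DEFAULT]
# API and metadata worker counts default to the number of CPUs, which
# can add up to a lot of resident memory on a large controller node.
osapi_compute_workers = 4
metadata_workers = 2

[conductor]
workers = 4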
Scale
- Estimate your final scale now. Sure, you're building a proof of concept, but these things have a habit of becoming entrenched. If you are planning a deployment that is likely to end up being thousands of nodes, then you are going to need to deploy with cells. This is also possibly true if you're going to have more than one hypervisor or hardware platform in your deployment — it's very common to have a cell per hypervisor type or per hardware platform. Cells is relatively cheap to deploy for your proof of concept, and it helps when that initial deploy grows into a bigger thing. So should you be deploying cells from the beginning? It should be noted, however, that not all features are currently implemented in cells. We are working on this at the moment though.
- Consider carefully what SQL database to use. Nova supports many SQL databases via sqlalchemy, but some are better tested and more widely deployed than others. For example, the Postgres back end is rarely deployed and is less tested. I'd recommend a variant of MySQL for your deployment. Personally I've seen good performance on Percona, but I know that many use the stock MySQL as well. There are known issues at the moment with Galera as well, so show caution there. There is active development happening on the select-for-update problems with Galera at the moment, so that might change by the time you get around to deploying in production. You can read more about our current Galera problems on Jay Pipes' blog.
- We support read only replicas of the SQL database. Nova supports offloading read only SQL traffic to read only replicas of the main SQL database, but I do not believe this is widely deployed. It might be of interest to you though.
- Expect a lot of SQL database connections. While Nova has the nova-conductor service to control the number of connections to the database server, other OpenStack services do not, and you will quickly outpace the default number of connections allowed, at least for a MySQL deployment. Actively monitor your SQL database connection counts so you know before you run out (an example of how to check is shown after this list). Additionally, there are many places in Nova where a user request will block on a database query, so if your SQL back end isn't keeping up this will affect performance of your entire Nova deployment.
- There are options with message queues as well. We currently support rabbitmq, zeromq and qpid. However, rabbitmq is the original and by far the most widely deployed. rabbitmq is therefore a reasonable default choice for deployment.
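As an example of the connection monitoring mentioned above, on a MySQL back end you can compare current and historical connection usage against the configured limit with something like:

$ mysql -e "SHOW GLOBAL STATUS LIKE 'Threads_connected';"
$ mysql -e "SHOW GLOBAL STATUS LIKE 'Max_used_connections';"
$ mysql -e "SHOW VARIABLES LIKE 'max_connections';"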
Hypervisors
- Not all hypervisor drivers are created equal. Let’s be frank here — some hypervisor drivers just aren’t as actively developed as others. This is especially true for drivers which aren’t in the Nova code base — at least the ones the Nova team manage are updated when we change the internals of Nova. I’m not a hypervisor bigot — there is a place in the world for many different hypervisor options. However, the start of a Nova deploy might be the right time to consider what hypervisor you want to use. I’d personally recommend drivers in the Nova code base with active development teams and good continuous integration, but ultimately you have to select a driver based on its merits in your situation. I’ve included some more detailed thoughts on how to evaluate hypervisor drivers later in this post, as I don’t want to go off on a big tangent during my nicely formatted bullet list.
- Remember that the hypervisor state is interesting debugging information. For example with the libvirt hypervisor, the contents of /var/lib/instances are super useful for debugging misbehaving instances. Additionally, all of the existing libvirt tools work, so you can use those to investigate as well. However, I strongly recommend you only change instance state via Nova, and not go directly to the hypervisor.
Networking
- Avoid new deployments of nova-network. nova-network has been on the deprecation path for a very long time now, and we're currently working on the final steps of a migration plan for nova-network users to neutron. If you're a new deployment of Nova and therefore don't yet depend on any of the features of nova-network, I'd start with Neutron from the beginning. This will save you a possibly troublesome migration to Neutron later.
Testing and upgrades
- You need a test lab. For a non-trivial deployment, you need a realistic test environment. It's expected that you test all upgrades before you do them in production, and rollbacks can sometimes be problematic. For example, some database migrations are very hard to roll back, especially if new instances have been created in the time it took you to decide to roll back. Perhaps consider turning off API access (or putting the API into a read only state) while you are validating a production deploy post upgrade, that way you can restore a database snapshot if you need to undo the upgrade. We know this isn't perfect and are working on a better upgrade strategy for information stored in the database, but we will always expect you to test upgrades before deploying them.
- Test database migrations on a copy of your production database before doing them for real. Another reason to test upgrades before doing them in production is because some database migrations can be very slow. It's hard for the Nova developers to predict which migrations will be slow, but we do try to test for this and minimize the pain. However, aspects of your deployment can affect this in ways we don't expect — for example if you have large numbers of volumes per instance, then that could result in database tables being larger than we expect. You should always test database migrations in a lab and report any problems you see.
- Think about your upgrade strategy in general. While we now support having the control infrastructure running a newer release than the services on hypervisor nodes, we only support that for one release (so you could have your control plane running Kilo for example while you are still running Juno on your hypervisors, you couldn't run Icehouse on the hypervisors though). Are you going to upgrade every six months? Or are you going to do it less frequently but step through a series of upgrades in one session? I suspect the latter option is more risky — if you encounter a bug in a previous release we would need to back port a fix, which is a much slower process than fixing the most recent release. There are also deployments which choose to “continuously deploy” from trunk. This gets them access to features as they're added, but means that the deployments need to have more operational skill and a closer association with the upstream developers. In general continuous deployers are larger public clouds, as best I can tell.
libvirt specific considerations
- For those intending to run the libvirt hypervisor driver, not all libvirt hypervisors are created equal. libvirt implements pluggable hypervisors, so if you select the Nova libvirt hypervisor driver, you then need to select what hypervisor to use with libvirt as well. It should be noted however that some hypervisors work better than others, with kvm being the most widely deployed.
- There are two types of storage for instances. There is “instance storage”, which is block devices that exist for the life of the instance and are then cleaned up when the instance is destroyed. There is also block storage provided by Cinder, which is persistent and arguably easier to manage than instance storage. I won't discuss storage provided by Cinder any further however, because it is outside the scope of this post. Instance storage is provided by a plug-in layer in the libvirt hypervisor driver, which presents you with another set of deployment decisions (see the example after this list).
- Shared instance storage is attractive, but it comes at a cost. Shared instance storage is an attractive option, but isn’t required for live migration of instances using the libvirt hypervisor. Think about the costs of shared storage though — for example putting everything on network attached storage is likely to be expensive, especially if most of your instances don’t need the facility. There are other options such as Ceph, but the storage interface layer in libvirt is one of the areas of code where we need to improve testing so be wary of bugs before relying on those storage back ends.
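As an example of that instance storage plug-in layer, the backend is selected with the images_type option in the libvirt group of nova.conf. The value shown is illustrative; raw and qcow2 are the common flat-file choices, with lvm and rbd (Ceph) as alternatives:

[libvirt]
# Which instance storage backend to use: raw, qcow2, lvm or rbd.
images_type = qcow2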
Thoughts on how to evaluate hypervisor drivers
As promised, I also have some thoughts on how to evaluate which hypervisor driver is the right choice for you. First off, if your organization has a lot of experience with a particular hypervisor, then there is always value in that. If that is the case, then you should seriously consider running the hypervisor you already have experience with, as long as that hypervisor has a driver for Nova which meets the criteria below.
What’s important is to be looking for a driver which works well with Nova, and a good measure of that is how well the driver development team works with the Nova development team. The obvious best case here is where both teams are the same people — which is true for drivers that are in the Nova code base. I am aware there are drivers that live outside of Nova’s code repository, but you need to remember that the interface these drivers plug into isn’t a stable or versioned interface. The risk of those drivers being broken by the ongoing development of Nova is very high. Additionally, only a very small number of those “out of tree” drivers contribute to our continuous integration testing. That means that the Nova team also doesn’t know when those drivers are broken. The breakages can also be subtle, so if your vendor isn’t at the very least doing tempest runs against their out of tree driver before shipping it to you then I’d be very worried.
You should also check out how many bugs are open in LaunchPad for your chosen driver (this assumes the Nova team is aware of the existence of the driver I suppose). Here's an example link to the libvirt driver bugs currently open. As well as total bug count, I'd be looking for bug close activity — it's nice if there is a very small number of bugs filed, but perhaps that's because there aren't many users. It doesn't necessarily mean the team for that driver is super awesome at closing bugs. The easiest way to look into bug close rates (and general code activity) would be to check out the code for Nova and then look at the log for your chosen driver. For example for the libvirt driver again:
$ git clone http://git.openstack.org/openstack/nova
$ cd nova/nova/virt/libvirt
$ git log .
That will give you a report on all the commits ever for that driver. You don’t need to read the entire report, but it will give you an idea of what the driver authors have recently been thinking about.
Another good metric is the specification activity for your driver. Specifications are the formal design documents that Nova adopted for the Juno release, and they document all the features that we’re currently working on. I write summaries of the current state of Nova specs regularly, which you can see posted at stillhq.com with this being the most recent summary at the time of writing this post. You should also check how much your driver authors interact with the core Nova team. The easiest way to do that is probably to keep an eye on the Nova team meeting minutes, which are posted online.
Finally, the OpenStack project believes strongly in continuous integration testing. Testing has clear value in the number of bugs it finds in code before our users experience them, and I would be very wary of driver code which isn't continuously integrated with Nova. Thus, you need to ensure that your driver has well maintained continuous integration testing. This is easy for “in tree” drivers, as we do that for all of them. For out of tree drivers, continuous integration testing is done with a thing called “third party CI”.
How do you determine if a third party CI system is well maintained? First off, I’d start by determining if a third party CI system actually exists by looking at OpenStack’s list of known third party CI systems. If the third party isn’t listed on that page, then that’s a very big warning sign. Next you can use Joe Gordon’s lastcomment tool to see when a given CI system last reported a result:
$ git clone https://github.com/jogo/lastcomment
$ ./lastcomment.py --name "DB Datasets CI"
last 5 comments from 'DB Datasets CI'
[0] 2015-01-07 00:46:33 (1:35:13 old) https://review.openstack.org/145378 'Ignore 'dynamic' addr flag on gateway initialization'
[1] 2015-01-07 00:37:24 (1:44:22 old) https://review.openstack.org/136931 'Use session with neutronclient'
[2] 2015-01-07 00:35:33 (1:46:13 old) https://review.openstack.org/145377 'libvirt: Expanded test libvirt driver'
[3] 2015-01-07 00:29:50 (1:51:56 old) https://review.openstack.org/142450 'ephemeral file names should reflect fs type and mkfs command'
[4] 2015-01-07 00:15:59 (2:05:47 old) https://review.openstack.org/142534 'Support for ext4 as default filesystem for ephemeral disks'
You can see here that the most recent run is 1 hour 35 minutes old when I ran this command. That’s actually pretty good given that I wrote this while most of America was asleep. If the most recent run is days old, that’s another warning sign. If you’re left in doubt, then I’d recommend appearing in the OpenStack IRC channels on freenode and asking for advice. OpenStack has a number of requirements for third party CI systems, and I haven’t discussed many of them here. There is more detail on what OpenStack considers a “well run CI system” on the OpenStack Infrastructure documentation page.
General operational advice
Finally, I have some general advice for operators of OpenStack. There is an active community of operators who discuss their use of the various OpenStack components at the openstack-operators mailing list, if you’re deploying Nova you should consider joining that mailing list. While you’re welcome to ask questions about deploying OpenStack at that list, you can also ask questions at the more general OpenStack mailing list if you want to.
There are also many companies now which will offer to operate an OpenStack cloud for you. For some organizations engaging a subject matter expert will be the right decision. Probably the most obvious way to evaluate which of those companies to use is to look at their track record of successful deployments, as well as their overall involvement in the OpenStack community. You need a partner who can advocate for you with the OpenStack developers, as well as keeping an eye on what’s happening upstream to ensure it meets your needs.
Conclusion
Thanks for reading so far! I hope this document is useful to someone out there. I'd love to hear your feedback — are there other things we wish deployers would consider before committing to a plan? Am I simply wrong somewhere? Finally, this is the first time that I've posted an essay form of a conference talk instead of just the slide deck, and I'd be interested in whether people find this format more useful than a YouTube video posted after the conference. Please drop me a line and let me know if you find this useful!
How are we going with Nova Kilo specs after our review day?
Time for another summary I think, because announcing the review day seems to have caused a rush of new specs to be filed (which wasn’t really my intention, but hey). We did approve a fair few specs on the review day, so I think overall it was a success. Here’s an updated summary of the state of play:
API
- Add more detailed network information to the metadata server: review 85673.
- Add separated policy rule for each v2.1 api: review 127863.
- Add user limits to the limits API (as well as project limits): review 127094.
- Allow all printable characters in resource names: review 126696.
- Consolidate all console access APIs into one: review 141065.
- Expose the lock status of an instance as a queryable item: review 127139 (abandoned); review 85928 (approved).
- Extend api to allow specifying vnic_type: review 138808.
- Implement instance tagging: review 127281 (fast tracked, approved).
- Implement the v2.1 API: review 126452 (fast tracked, approved).
- Improve the return codes for the instance lock APIs: review 135506.
- Microversion support: review 127127 (approved).
- Move policy validation to just the API layer: review 127160.
- Nova Server Count API Extension: review 134279 (fast tracked).
- Provide a policy statement on the goals of our API policies: review 128560 (abandoned).
- Sorting enhancements: review 131868 (fast tracked, approved).
- Support JSON-Home for API extension discovery: review 130715.
- Support X509 keypairs: review 105034 (approved).
API (EC2)
- Expand support for volume filtering in the EC2 API: review 104450.
- Implement tags for volumes and snapshots with the EC2 API: review 126553 (fast tracked, approved).
Administrative
- Actively hunt for orphan instances and remove them: review 137996 (abandoned); review 138627.
- Check that a service isn’t running before deleting it: review 131633.
- Enable the nova metadata cache to be a shared resource to improve the hit rate: review 126705 (abandoned).
- Implement a daemon version of rootwrap: review 105404.
- Log request id mappings: review 132819 (fast tracked).
- Monitor the health of hypervisor hosts: review 137768.
- Remove the assumption that there is a single endpoint for services that nova talks to: review 132623.
Block Storage
- Allow direct access to LVM volumes if supported by Cinder: review 127318.
- Cache data from volumes on local disk: review 138292 (abandoned); review 138619.
- Enhance iSCSI volume multipath support: review 134299.
- Failover to alternative iSCSI portals on login failure: review 137468.
- Give additional info in BDM when source type is “blank”: review 140133.
- Implement support for a DRBD driver for Cinder block device access: review 134153.
- Refactor ISCSIDriver to support other iSCSI transports besides TCP: review 130721 (approved).
- StorPool volume attachment support: review 115716.
- Support Cinder Volume Multi-attach: review 139580 (approved).
- Support iSCSI live migration for different iSCSI target: review 132323 (approved).
Cells
- Cells Scheduling: review 141486.
- Create an instance mapping database: review 135644.
- Flexible cell selection: review 140031.
- Implement instance mapping: review 135424 (approved).
- Populate the instance mapping database: review 136490.
Containers Service
- Initial specification: review 114044 (abandoned).
Database
- Enforce instance uuid uniqueness in the SQL database: review 128097 (fast tracked, approved).
- Nova db purge utility: review 132656.
- Online schema change options: review 102545.
- Support DB2 as a SQL database: review 141097 (fast tracked, approved).
- Validate database migrations and models: review 134984 (approved).
Hypervisor: Docker
- Migrate the Docker Driver into Nova: review 128753.
Hypervisor: FreeBSD
- Implement support for FreeBSD networking in nova-network: review 127827.
Hypervisor: Hyper-V
- Allow volumes to be stored on SMB shares instead of just iSCSI: review 102190 (approved).
- Instance hot resize: review 141219.
Hypervisor: Ironic
- Add config drive support: review 98930 (approved).
- Pass through flavor capabilities to ironic: review 136104.
Hypervisor: VMWare
- Add ephemeral disk support to the VMware driver: review 126527 (fast tracked, approved).
- Add support for the HTML5 console: review 127283.
- Allow Nova to access a VMWare image store over NFS: review 126866.
- Enable administrators and tenants to take advantage of backend storage policies: review 126547 (fast tracked, approved).
- Enable the mapping of raw cinder devices to instances: review 128697.
- Implement vSAN support: review 128600 (fast tracked, approved).
- Support multiple disks inside a single OVA file: review 128691.
- Support the OVA image format: review 127054 (fast tracked, approved).
Hypervisor: libvirt
- Add Quobyte USP support: review 138372 (abandoned); review 138373 (approved).
- Add VIF_VHOSTUSER vif type: review 138736 (approved).
- Add a Quobyte Volume Driver: review 138375 (abandoned).
- Add finetunable configuration settings for virtio-scsi: review 103797 (abandoned).
- Add large page support: review 129608 (approved).
- Add support for SMBFS as an image storage backend: review 103203 (approved).
- Allow scheduling of instances such that PCI passthrough devices are co-located on the same NUMA node as other instance resources: review 128344 (fast tracked, approved).
- Allow specification of the device boot order for instances: review 133254.
- Allow the administrator to explicitly set the version of the qemu emulator to use: review 138731 (abandoned).
- Consider PCI offload capabilities when scheduling instances: review 135331.
- Convert to using built in libvirt disk copy mechanisms for cold migrations on non-shared storage: review 126979 (fast tracked).
- Derive hardware policy from libosinfo: review 133945.
- Implement COW volumes via VMThunder to allow fast boot of large numbers of instances: review 128810 (abandoned); review 128813 (abandoned); review 128830 (abandoned); review 128845 (abandoned); review 129093 (abandoned); review 129108 (abandoned); review 129110 (abandoned); review 129113 (abandoned); review 129116; review 137617.
- Implement configurable policy over where virtual CPUs should be placed on physical CPUs: review 129606 (approved).
- Implement support for Parallels Cloud Server: review 111335 (approved); review 128990 (abandoned).
- Implement support for zkvm as a libvirt hypervisor: review 130447 (approved).
- Improve total network throughput by supporting virtio-net multiqueue: review 128825.
- Improvements to the cinder integration for snapshots: review 134517.
- Quiesce instance disks during snapshot: review 128112; review 131587 (abandoned); review 131597.
- Real time instances: review 139688.
- Stop dm-crypt device when an encrypted instance is suspended or stopped: review 140847 (approved).
- Support SR-IOV interface attach and detach: review 139910.
- Support StorPool as a storage backend: review 137830.
- Support for live block device IO tuning: review 136704.
- Support libvirt storage pools: review 126978 (fast tracked, approved).
- Support live migration with macvtap SR-IOV: review 136077.
- Support quiesce filesystems during snapshot: review 126966 (fast tracked, approved).
- Support using qemu’s built in iSCSI initiator: review 133048 (approved).
- Volume driver for Huawei SDSHypervisor: review 130919.
Instance features
- Allow portions of an instance’s uuid to be configurable: review 130451.
- Attempt to schedule cinder volumes “close” to instances: review 130851; review 131050 (abandoned); review 131051 (abandoned); review 131151 (abandoned).
- Dynamic server groups: review 130005 (abandoned).
- Improve the performance of unshelve for those using shared storage for instance disks: review 135387.
Internal
- A lock-free quota implementation: review 135296.
- Automate the documentation of the virtual machine state transition graph: review 94835.
- Fake Libvirt driver for simulating HW testing: review 139927 (abandoned).
- Flatten Aggregate Metadata in the DB: review 134573 (abandoned).
- Flatten Instance Metadata in the DB: review 134945 (abandoned).
- Implement a new code coverage API extension: review 130855.
- Move flavor data out of the system_metadata table in the SQL database: review 126620 (approved).
- Move to polling for cinder operations: review 135367.
- PCI test cases for third party CI: review 141270.
- Transition Nova to using the Glance v2 API: review 84887.
- Transition to using glanceclient instead of our own home grown wrapper: review 133485 (approved).
Internationalization
- Enable lazy translations of strings: review 126717 (fast tracked).
Networking
- Add a new linuxbridge VIF type, macvtap: review 117465 (abandoned).
- Add a plugin mechanism for VIF drivers: review 136827.
- Add support for InfiniBand SR-IOV VIF Driver: review 131729.
- Neutron DNS Using Nova Hostname: review 90150 (abandoned).
- New VIF type to allow routing VM data instead of bridging it: review 130732.
- Nova Plugin for OpenContrail: review 126446 (approved).
- Refactor of the Neutron network adapter to be more maintainable: review 131413.
- Use the Nova hostname in Neutron DNS: review 137669.
- Wrap the Python NeutronClient: review 141108.
Performance
- Dynamically alter the interval nova polls components at based on load and expected time for an operation to complete: review 122705.
Scheduler
- A nested quota driver API: review 129420.
- Add a filter to take into account hypervisor type and version when scheduling: review 137714.
- Add an IOPS weigher: review 127123 (approved, implemented); review 132614.
- Add instance count on the hypervisor as a weight: review 127871 (abandoned).
- Allow extra spec to match all values in a list by adding the ALL-IN operator: review 138698 (fast tracked, approved).
- Allow limiting the flavors that can be scheduled on certain host aggregates: review 122530 (abandoned).
- Allow the removal of servers from server groups: review 136487.
- Convert get_available_resources to use an object instead of dict: review 133728 (abandoned).
- Convert the resource tracker to objects: review 128964 (fast tracked, approved).
- Create an object model to represent a request to boot an instance: review 127610 (approved).
- Decouple services and compute nodes in the SQL database: review 126895 (approved).
- Enable adding new scheduler hints to already booted instances: review 134746.
- Fix the race conditions when migration with server-group: review 135527 (abandoned).
- Implement resource objects in the resource tracker: review 127609.
- Improve the ComputeCapabilities filter: review 133534.
- Isolate Scheduler DB for Filters: review 138444.
- Isolate the scheduler’s use of the Nova SQL database: review 89893.
- Let schedulers reuse filter and weigher objects: review 134506 (abandoned).
- Move select_destinations() to using a request object: review 127612 (approved).
- Persist scheduler hints: review 88983.
- Refactor allocate_for_instance: review 141129.
- Stop direct lookup for host aggregates in the Nova database: review 132065 (abandoned).
- Stop direct lookup for instance groups in the Nova database: review 131553 (abandoned).
- Support scheduling based on more image properties: review 138937.
- Trusted computing support: review 133106.
Scheduling
- Dynamic Management of Server Groups: review 139272.
Security
- Make key manager interface interoperable with Barbican: review 140144 (fast tracked, approved).
- Provide a reference implementation for console proxies that uses TLS: review 126958 (fast tracked, approved).
- Strongly validate the tenant and user for quota consuming requests with keystone: review 92507.
Service Groups
- Pacemaker service group driver: review 139991.
- Transition service groups to using the new oslo Tooz library: review 138607.
Scheduler
- Add soft affinity support for server group: review 140017 (approved).
Soft deleting instances and the reclaim_instance_interval in Nova
I got asked the other day how the reclaim_instance_interval in Nova works, so I thought I'd write it up here in case it's useful to other people.
First off, there is a periodic task run by the nova-compute process (or the compute manager as a developer would know it), which runs every reclaim_instance_interval seconds. It looks for instances in the SOFT_DELETED state which don't have any tasks running at the moment for the hypervisor node that nova-compute is running on.
For each instance it finds, it checks if the instance has been soft deleted for at least reclaim_instance_interval seconds. This has the side effect, from my reading of the code, that an instance needs to be deleted for at least reclaim_instance_interval seconds before it will be removed from disk, but that the instance might be up to approximately twice that age (if it was deleted just as the periodic task ran, it would skip the next run and therefore not be deleted for two intervals).
Once these conditions are met, the instance is deleted from disk.
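As a rough standalone sketch of that selection logic (this is not the nova-compute implementation, and it ignores the check for running tasks; each instance is represented here as a dict with a deleted_at datetime):

import datetime

RECLAIM_INSTANCE_INTERVAL = 3600  # seconds, set by the deployer


def instances_to_reclaim(soft_deleted_instances, now=None):
    """Return the soft deleted instances old enough to be reclaimed."""
    now = now or datetime.datetime.utcnow()
    cutoff = datetime.timedelta(seconds=RECLAIM_INSTANCE_INTERVAL)
    return [instance for instance in soft_deleted_instances
            if now - instance['deleted_at'] >= cutoff]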
Specs for Kilo, an update
We’re now a few weeks away from the kilo-1 milestone, so I thought it was time to update my summary of the Nova specifications that have been proposed so far. So here we go…
API
- Add more detailed network information to the metadata server: review 85673.
- Add separated policy rule for each v2.1 api: review 127863.
- Add user limits to the limits API (as well as project limits): review 127094.
- Allow all printable characters in resource names: review 126696.
- Expose the lock status of an instance as a queryable item: review 127139 (abandoned); review 85928 (approved).
- Implement instance tagging: review 127281 (fast tracked, approved).
- Implement the v2.1 API: review 126452 (fast tracked, approved).
- Improve the return codes for the instance lock APIs: review 135506.
- Microversion support: review 127127 (approved).
- Move policy validation to just the API layer: review 127160.
- Nova Server Count API Extension: review 134279 (fast tracked).
- Provide a policy statement on the goals of our API policies: review 128560.
- Sorting enhancements: review 131868 (fast tracked, approved).
- Support JSON-Home for API extension discovery: review 130715.
- Support X509 keypairs: review 105034 (approved).
API (EC2)
- Expand support for volume filtering in the EC2 API: review 104450.
- Implement tags for volumes and snapshots with the EC2 API: review 126553 (fast tracked, approved).
Administrative
- Check that a service isn’t running before deleting it: review 131633.
- Enable the nova metadata cache to be a shared resource to improve the hit rate: review 126705 (abandoned).
- Enforce instance uuid uniqueness in the SQL database: review 128097 (fast tracked, approved).
- Implement a daemon version of rootwrap: review 105404.
- Log request id mappings: review 132819 (fast tracked).
- Monitor the health of hypervisor hosts: review 137768.
- Remove the assumption that there is a single endpoint for services that nova talks to: review 132623.
Cells
- Create an instance mapping database: review 135644.
- Implement instance mapping: review 135424.
- Populate the instance mapping database: review 136490.
Containers Service
- Initial specification: review 114044 (abandoned).
Database
- Nova db purge utility: review 132656.
- Online schema change options: review 102545.
- Validate database migrations and models: review 134984 (approved).
Hypervisor: Docker
- Migrate the Docker Driver into Nova: review 128753.
Hypervisor: FreeBSD
- Implement support for FreeBSD networking in nova-network: review 127827.
Hypervisor: Hyper-V
- Allow volumes to be stored on SMB shares instead of just iSCSI: review 102190 (approved).
Hypervisor: Ironic
- Add config drive support: review 98930.
Hypervisor: VMWare
- Add ephemeral disk support to the VMware driver: review 126527 (fast tracked, approved).
- Add support for the HTML5 console: review 127283.
- Allow Nova to access a VMWare image store over NFS: review 126866.
- Enable administrators and tenants to take advantage of backend storage policies: review 126547 (fast tracked, approved).
- Enable the mapping of raw cinder devices to instances: review 128697.
- Implement vSAN support: review 128600 (fast tracked, approved).
- Support multiple disks inside a single OVA file: review 128691.
- Support the OVA image format: review 127054 (fast tracked, approved).
Hypervisor: ironic
- Pass through flavor capabilities to ironic: review 136104.
Hypervisor: libvirt
- Add finetunable configuration settings for virtio-scsi: review 103797 (abandoned).
- Add large page support: review 129608 (approved).
- Add support for SMBFS as an image storage backend: review 103203 (approved).
- Allow scheduling of instances such that PCI passthrough devices are co-located on the same NUMA node as other instance resources: review 128344 (fast tracked, approved).
- Allow specification of the device boot order for instances: review 133254.
- Consider PCI offload capabilities when scheduling instances: review 135331.
- Convert to using built in libvirt disk copy mechanisms for cold migrations on non-shared storage: review 126979 (fast tracked).
- Derive hardware policy from libosinfo: review 133945.
- Implement COW volumes via VMThunder to allow fast boot of large numbers of instances: review 128810 (abandoned); review 128813 (abandoned); review 128830 (abandoned); review 128845 (abandoned); review 129093 (abandoned); review 129108 (abandoned); review 129110 (abandoned); review 129113 (abandoned); review 129116; review 137617.
- Implement configurable policy over where virtual CPUs should be placed on physical CPUs: review 129606 (approved).
- Implement support for Parallels Cloud Server: review 111335 (approved); review 128990 (abandoned).
- Implement support for zkvm as a libvirt hypervisor: review 130447 (approved).
- Improve total network throughput by supporting virtio-net multiqueue: review 128825.
- Improvements to the cinder integration for snapshots: review 134517.
- Quiesce instance disks during snapshot: review 128112; review 131587 (abandoned); review 131597.
- Support StorPool as a storage backend: review 137830.
- Support for live block device IO tuning: review 136704.
- Support libvirt storage pools: review 126978 (fast tracked, approved).
- Support live migration with macvtap SR-IOV: review 136077.
- Support quiesce filesystems during snapshot: review 126966 (fast tracked, approved).
- Support using qemu’s built in iSCSI initiator: review 133048 (approved).
- Volume driver for Huawei SDSHypervisor: review 130919.
Instance features
- Allow portions of an instance’s uuid to be configurable: review 130451.
- Attempt to schedule cinder volumes “close” to instances: review 130851; review 131050 (abandoned); review 131051 (abandoned); review 131151 (abandoned).
- Dynamic server groups: review 130005 (abandoned).
- Improve the performance of unshelve for those using shared storage for instance disks: review 135387.
Internal
- A lock-free quota implementation: review 135296.
- Automate the documentation of the virtual machine state transition graph: review 94835.
- Flatten Aggregate Metadata in the DB: review 134573.
- Flatten Instance Metadata in the DB: review 134945.
- Implement a new code coverage API extension: review 130855.
- Move flavor data out of the system_metadata table in the SQL database: review 126620 (approved).
- Move to polling for cinder operations: review 135367.
- Transition Nova to using the Glance v2 API: review 84887.
- Transition to using glanceclient instead of our own home grown wrapper: review 133485.
Internationalization
- Enable lazy translations of strings: review 126717 (fast tracked).
Networking
- Add a new linuxbridge VIF type, macvtap: review 117465 (abandoned).
- Add a plugin mechanism for VIF drivers: review 136827.
- Add support for InfiniBand SR-IOV VIF Driver: review 131729.
- Neutron DNS Using Nova Hostname: review 90150.
- New VIF type to allow routing VM data instead of bridging it: review 130732.
- Nova Plugin for OpenContrail: review 126446.
- Refactor of the Neutron network adapter to be more maintainable: review 131413.
- Use the Nova hostname in Neutron DNS: review 137669.
Performance
- Dynamically alter the interval nova polls components at based on load and expected time for an operation to complete: review 122705.
Scheduler
- Add a filter to take into account hypervisor type and version when scheduling: review 137714.
- Add an IOPS weigher: review 127123 (approved, implemented); review 132614.
- Add instance count on the hypervisor as a weight: review 127871 (abandoned).
- Allow limiting the flavors that can be scheduled on certain host aggregates: review 122530 (abandoned).
- Allow the removal of servers from server groups: review 136487.
- Convert get_available_resources to use an object instead of dict: review 133728.
- Convert the resource tracker to objects: review 128964 (fast tracked, approved).
- Create an object model to represent a request to boot an instance: review 127610.
- Decouple services and compute nodes in the SQL database: review 126895 (approved).
- Enable adding new scheduler hints to already booted instances: review 134746.
- Fix the race conditions when migrating with server groups: review 135527 (abandoned).
- Implement resource objects in the resource tracker: review 127609.
- Improve the ComputeCapabilities filter: review 133534.
- Isolate the scheduler’s use of the Nova SQL database: review 89893.
- Let schedulers reuse filter and weigher objects: review 134506 (abandoned).
- Move select_destinations() to using a request object: review 127612.
- Persist scheduler hints: review 88983.
- Stop direct lookup for host aggregates in the Nova database: review 132065 (abandoned).
- Stop direct lookup for instance groups in the Nova database: review 131553.
Security
- Provide a reference implementation for console proxies that uses TLS: review 126958 (fast tracked, approved).
- Strongly validate the tenant and user for quota consuming requests with keystone: review 92507.
Storage
- Allow direct access to LVM volumes if supported by Cinder: review 127318.
- Enhance iSCSI volume multipath support: review 134299.
- Failover to alternative iSCSI portals on login failure: review 137468.
- Implement support for a DRBD driver for Cinder block device access: review 134153.
- Refactor ISCSIDriver to support other iSCSI transports besides TCP: review 130721.
- StorPool volume attachment support: review 115716.
- Support iSCSI live migration for different iSCSI target: review 132323 (approved).
Specs for Kilo
Here’s an updated list of the specs currently proposed for Kilo. I wanted to produce this before I start travelling for the summit in the next couple of days because I think many of these will be required reading for the Nova track at the summit.
API
- Add instance administrative lock status to the instance detail results: review 127139 (abandoned).
- Add more detailed network information to the metadata server: review 85673.
- Add separated policy rule for each v2.1 api: review 127863.
- Add user limits to the limits API (as well as project limits): review 127094.
- Allow all printable characters in resource names: review 126696.
- Expose the lock status of an instance as a queryable item: review 85928 (approved).
- Implement instance tagging: review 127281 (fast tracked, approved).
- Implement tags for volumes and snapshots with the EC2 API: review 126553 (fast tracked, approved).
- Implement the v2.1 API: review 126452 (fast tracked, approved).
- Microversion support: review 127127.
- Move policy validation to just the API layer: review 127160.
- Provide a policy statement on the goals of our API policies: review 128560.
- Support X509 keypairs: review 105034.
Administrative
- Enable the nova metadata cache to be a shared resource to improve the hit rate: review 126705 (abandoned).
- Enforce instance uuid uniqueness in the SQL database: review 128097 (fast tracked, approved).
Containers Service
- Initial specification: review 114044.
Hypervisor: Docker
- Migrate the Docker Driver into Nova: review 128753.
Hypervisor: FreeBSD
- Implement support for FreeBSD networking in nova-network: review 127827.
Hypervisor: Hyper-V
- Allow volumes to be stored on SMB shares instead of just iSCSI: review 102190 (approved).
Hypervisor: Ironic
- Add config drive support: review 98930.
Hypervisor: VMWare
- Add ephemeral disk support to the VMware driver: review 126527 (fast tracked, approved).
- Add support for the HTML5 console: review 127283.
- Allow Nova to access a VMWare image store over NFS: review 126866.
- Enable administrators and tenants to take advantage of backend storage policies: review 126547 (fast tracked, approved).
- Enable the mapping of raw cinder devices to instances: review 128697.
- Implement vSAN support: review 128600 (fast tracked, approved).
- Support multiple disks inside a single OVA file: review 128691.
- Support the OVA image format: review 127054 (fast tracked, approved).
Hypervisor: libvirt
- Add a new linuxbridge VIF type, macvtap: review 117465 (abandoned).
- Add finetunable configuration settings for virtio-scsi: review 103797.
- Add large page support: review 129608 (approved).
- Add support for SMBFS as an image storage backend: review 103203.
- Allow scheduling of instances such that PCI passthrough devices are co-located on the same NUMA node as other instance resources: review 128344 (fast tracked, approved).
- Convert to using built in libvirt disk copy mechanisms for cold migrations on non-shared storage: review 126979 (fast tracked).
- Implement COW volumes via VMThunder to allow fast boot of large numbers of instances: review 128810 (abandoned); review 128813 (abandoned); review 128830 (abandoned); review 128845 (abandoned); review 129093 (abandoned); review 129108 (abandoned); review 129110 (abandoned); review 129113 (abandoned); review 129116.
- Implement configurable policy over where virtual CPUs should be placed on physical CPUs: review 129606.
- Implement support for Parallels Cloud Server: review 111335; review 128990 (abandoned).
- Implement support for zkvm as a libvirt hypervisor: review 130447.
- Improve total network throughput by supporting virtio-net multiqueue: review 128825.
- Quiesce instance disks during snapshot: review 128112.
- Support libvirt storage pools: review 126978 (fast tracked).
- Support quiesce filesystems during snapshot: review 126966 (fast tracked).
Instance features
- Allow direct access to LVM volumes if supported by Cinder: review 127318.
- Allow portions of an instance’s uuid to be configurable: review 130451.
- Dynamic server groups: review 130005.
Internal
- Move flavor data out of the system_metadata table in the SQL database: review 126620 (approved).
- Transition Nova to using the Glance v2 API: review 84887.
Internationalization
- Enable lazy translations of strings: review 126717 (fast tracked).
Performance
- Dynamically alter the interval nova polls components at based on load and expected time for an operation to complete: review 122705.
Scheduler
- Add an IOPS weigher: review 127123 (approved).
- Add instance count on the hypervisor as a weight: review 127871 (abandoned).
- Allow limiting the flavors that can be scheduled on certain host aggregates: review 122530 (abandoned).
- Convert the resource tracker to objects: review 128964 (fast tracked, approved).
- Create an object model to represent a request to boot an instance: review 127610.
- Decouple services and compute nodes in the SQL database: review 126895.
- Implement resource objects in the resource tracker: review 127609.
- Isolate the scheduler’s use of the Nova SQL database: review 89893.
- Move select_destinations() to using a request object: review 127612.
Security
- Provide a reference implementation for console proxies that uses TLS: review 126958 (fast tracked).
- Strongly validate the tenant and user for quota consuming requests with keystone: review 92507.
One week of Nova Kilo specifications
It’s been one week of specifications for Nova in Kilo. What are we seeing proposed so far? Here’s a summary…
API
- Add instance administrative lock status to the instance detail results: review 127139.
- Add more detailed network information to the metadata server: review 85673.
- Add separated policy rule for each v2.1 api: review 127863.
- Add user limits to the limits API (as well as project limits): review 127094.
- Allow all printable characters in resource names: review 126696.
- Implement instance tagging: review 127281.
- Implement tags for volumes and snapshots with the EC2 API: review 126553 (spec approved).
- Implement the v2.1 API: review 126452 (spec approved).
- Microversion support: review 127127.
- Move policy validation to just the API layer: review 127160.
- Support X509 keypairs: review 105034.
Administrative
- Enable the nova metadata cache to be a shared resource to improve the hit rate: review 126705.
Containers Service
- Initial specification: review 114044.
Hypervisor: FreeBSD
- Implement support for FreeBSD networking in nova-network: review 127827.
Hypervisor: Hyper-V
- Allow volumes to be stored on SMB shares instead of just iSCSI: review 102190.
Hypervisor: VMWare
- Add ephemeral disk support to the VMware driver: review 126527 (spec approved).
- Add support for the HTML5 console: review 127283.
- Allow Nova to access a VMWare image store over NFS: review 126866.
- Enable administrators and tenants to take advantage of backend storage policies: review 126547 (spec approved).
- Support the OVA image format: review 127054.
Hypervisor: libvirt
- Add a new linuxbridge VIF type, macvtap: review 117465.
- Add support for SMBFS as an image storage backend: review 103203.
- Convert to using built in libvirt disk copy mechanisms for cold migrations on non-shared storage: review 126979.
- Support libvirt storage pools: review 126978.
- Support quiesce filesystems during snapshot: review 126966.
Instance features
- Allow direct access to LVM volumes if supported by Cinder: review 127318.
Internal
- Move flavor data out of the system_metadata table in the SQL database: review 126620.
Internationalization
- Enable lazy translations of strings: review 126717.
Scheduler
- Add an IOPS weigher: review 127123 (spec approved).
- Add instance count on the hypervisor as a weight: review 127871.
- Allow limiting the flavors that can be scheduled on certain host aggregates: review 122530.
- Create an object model to represent a request to boot an instance: review 127610.
- Decouple services and compute nodes in the SQL database: review 126895.
- Implement resource objects in the resource tracker: review 127609.
- Move select_destinations() to using a request object: review 127612.
Security
- Provide a reference implementation for console proxies that uses TLS: review 126958.
- Strongly validate the tenant and user for quota consuming requests with keystone: review 92507.
Compute Kilo specs are open
From my email last week on the topic:
I am pleased to announce that the specs process for nova in kilo is now open. There are some tweaks to the previous process, so please read this entire email before uploading your spec!

Blueprints approved in Juno
===========================
For specs approved in Juno, there is a fast track approval process for Kilo. The steps to get your spec re-approved are:

- Copy your spec from the specs/juno/approved directory to the specs/kilo/approved directory. Note that if we declared your spec to be a "partial" implementation in Juno, it might be in the implemented directory. This was rare however.
- Update the spec to match the new template
- Commit, with the "Previously-approved: juno" commit message tag
- Upload using git review as normal

Reviewers will still do a full review of the spec, we are not offering a rubber stamp of previously approved specs. However, we are requiring only one +2 to merge these previously approved specs, so the process should be a lot faster. A note for core reviewers here -- please include a short note on why you're doing a single +2 approval on the spec so future generations remember why.

Trivial blueprints
==================
We are not requiring specs for trivial blueprints in Kilo. Instead, create a blueprint in Launchpad at https://blueprints.launchpad.net/nova/+addspec and target the specification to Kilo. New, targeted, unapproved specs will be reviewed in weekly nova meetings. If it is agreed they are indeed trivial in the meeting, they will be approved.

Other proposals
===============
For other proposals, the process is the same as Juno... Propose a spec review against the specs/kilo/approved directory and we'll review it from there.
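To make the fast track steps concrete, here is a minimal sketch of the copy, commit and upload sequence as a small Python script. The spec file name and commit message are illustrative assumptions on my part, and the script assumes you are inside a nova-specs checkout with the git-review tool installed; most people will simply run the equivalent git commands by hand.

    #!/usr/bin/env python
    # A minimal sketch of the Juno -> Kilo fast track steps described above.
    # Assumptions: we're in a nova-specs checkout, git-review is installed,
    # and "my-spec.rst" is a hypothetical previously approved Juno spec.
    import shutil
    import subprocess

    SPEC = "my-spec.rst"  # hypothetical spec file name
    SRC = "specs/juno/approved/" + SPEC  # may be specs/juno/implemented for "partial" specs
    DST = "specs/kilo/approved/" + SPEC

    # Step 1: copy the previously approved spec into the Kilo directory.
    shutil.copy(SRC, DST)

    # Step 2 is manual: edit the copied spec so it matches the new template.

    # Step 3: commit with the "Previously-approved: juno" tag in the message.
    subprocess.check_call(["git", "add", DST])
    subprocess.check_call([
        "git", "commit", "-m",
        "Re-propose my-spec for Kilo\n\nPreviously-approved: juno",
    ])

    # Step 4: upload the change for review in the usual way.
    subprocess.check_call(["git", "review"])

Nothing here is required by the process itself; the point is just that the whole fast track is a copy, a template update, a tagged commit, and a normal git review upload.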
After a week I’m seeing something interesting. In Juno the specs process was new, and we saw a pause in the development cycle while people actually wrote down their designs before sending the code. This time around people know what to expect, and there are leftover specs from Juno lying around. We’re therefore seeing specs approved much faster than we did in Juno, which should reduce the effect of the “pipeline flush” we saw during that cycle.
So far we have five approved specs after only a week.