Juno nova mid-cycle meetup summary: slots

If I had to guess what would be a controversial topic from the mid-cycle meetup, it would have to be this slots proposal. I was actually in a Technical Committee meeting when this proposal was first made, but I’m told there were plenty of people in the room keen to give this idea a try. Since the mid-cycle, Joe Gordon has written up a more formal proposal, which can be found at https://review.openstack.org/#/c/112733.

If you look at the last few Nova releases, core reviewers have been drowning under code reviews, so we need to control the review workload. What is currently happening is that everyone throws up their thing into Gerrit, and then each core tries to identify the important things and review them. There is a list of prioritized blueprints in Launchpad, but it is not used much as a way of determining what to review. The result of this is that there are hundreds of reviews outstanding for Nova (500 when I wrote this post). Many of these will get a review, but it is hard for authors to get two cores to pay attention to a review long enough for it to be approved and merged.

If we could rate limit the number of proposed reviews in Gerrit, then cores would be able to focus their attention on the smaller number of outstanding reviews, and land more code. Because each review would merge faster, we believe this rate limiting would help us land more code rather than less, as our workload would be better managed. You could argue that this will mean we just say ‘no’ more often, but that’s not the intent, it’s more about bringing focus to what we’re reviewing, so that we can get patches through the process completely. There’s nothing more frustrating to a code author than having one +2 on their code and then hitting some merge freeze deadline.

The proposal is therefore to designate a number of blueprints that can be under review at any one time. The initial proposal was for ten, and the term ‘slot’ was coined to describe the available review capacity. If your blueprint was not allocated a slot, then it would either not be proposed in Gerrit yet, or if it was it would have a procedural -2 on it (much like code reviews associated with unapproved specifications do now).

The number of slots is arbitrary at this point. Ten is our best guess of how much we can dilute cores’ focus without losing efficiency. We would tweak the number as we gained experience if we went ahead with this proposal. Remember, too, that a slot isn’t always a single code review. If the VMware refactor was in a slot for example, we might find that there were also ten code reviews associated with that single slot.

How do you determine what occupies a review slot? The proposal is to groom the list of approved specifications more carefully. We would collaboratively produce a ranked list of blueprints in the order of their importance to Nova and OpenStack overall. As slots become available, the next highest ranked blueprint with code ready for review would be moved into one of the review slots. A blueprint would be considered ‘ready for review’ once the specification is merged, and the code is complete and ready for intensive code review.

What happens if code is in a slot and something goes wrong? Imagine if a proposer goes on vacation and stops responding to review comments. If that happened we would bump the code out of the slot, but would put it back on the backlog in the location dictated by its priority. In other words there is no penalty for being bumped, you just need to wait for a slot to reappear when you’re available again.

We also talked about whether we were requiring specifications for changes which are too simple. If something is relatively uncontroversial and simple (a better tag for internationalization for example), but not a bug, it falls through the cracks of our process at the moment and ends up needing to have a specification written. There was talk of finding another way to track this work. I’m not sure I agree with this part, because a trivial specification is a relatively cheap thing to do. However, it’s something I’m happy to talk about.

We also know that Nova needs to spend more time paying down its accrued technical debt, which you can see in the huge amount of bugs we have outstanding at the moment. There is no shortage of people willing to write code for Nova, but there is a shortage of people fixing bugs and working on strategic things instead of new features. If we could reserve slots for technical debt, then it would help us to get people to work on those aspects, because they wouldn’t spend time on a less interesting problem and then discover they can’t even get their code reviewed. We even talked about having an alternating focus for Nova releases; we could have a release focused on paying down technical debt and stability, and then the next release focused on new features. The Linux kernel does something quite similar to this and it seems to work well for them.

Using slots would allow us to land more valuable code faster. Of course, it also means that some patches will get dropped on the floor, but if the system is working properly, those features will be ones that aren’t important to OpenStack. Considering that right now we’re not landing many features at all, this would be an improvement.

This proposal is obviously complicated, and everyone will have an opinion. We haven’t really thought through all the mechanics fully, yet, and it’s certainly not a done deal at this point. The ranking process seems to be the most contentious point. We could encourage the community to help us rank things by priority, but it’s not clear how that process would work. Regardless, I feel like we need to be more systematic about what code we’re trying to land. It’s embarrassing how little has landed in Juno for Nova, and we need to be working on that. I would like to continue discussing this as a community to make sure that we end up with something that works well and that everyone is happy with.

This series is nearly done, but in the next post I’ll cover the current status of the nova-network to neutron upgrade path.

Review priorities as we approach juno-3

I just sent this email out to openstack-dev, but I am posting it here in case that makes it more discoverable to people drowning in email:

To: openstack-dev
Subject: [nova] Review priorities as we approach juno-3

Hi.

We're rapidly approaching j-3, so I want to remind people of the
current reviews that are high priority. The definition of high
priority I am using here is blueprints that are marked high priority
in launchpad that have outstanding code for review -- I am sure there
are other reviews that are important as well, but I want us to try to
land more blueprints than we have so far. These are listed in the
order they appear in launchpad.

== Compute Manager uses Objects (Juno Work) ==

https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/compute-manager-objects-juno,n,z

This is ongoing work, but if you're after some quick code review
points they're very easy to review and help push the project forward
in an important manner.

== Move Virt Drivers to use Objects (Juno Work) ==

I couldn't actually find any code out for review for this one apart
from https://review.openstack.org/#/c/94477/, is there more out there?

== Add a virt driver for Ironic ==

This one is in progress, but we need to keep going at it or we won't
get it merged in time.

* https://review.openstack.org/#/c/111223/ was approved, but a rebase
ate it. Should be quick to re-approve.
* https://review.openstack.org/#/c/111423/
* https://review.openstack.org/#/c/111425/
* ...there are more reviews in this series, but I'd be super happy to
see even a few reviewed

== Create Scheduler Python Library ==

* https://review.openstack.org/#/c/82778/
* https://review.openstack.org/#/c/104556/

(There are a few abandoned patches in this series, I think those two
are the active ones but please correct me if I am wrong).

== VMware: spawn refactor ==

* https://review.openstack.org/#/c/104145/
* https://review.openstack.org/#/c/104147/ (Dan Smith's -2 on this one
seems procedural to me)
* https://review.openstack.org/#/c/105738/
* ...another chain with many more patches to review

Thanks,
Michael

The actual email thread is at http://lists.openstack.org/pipermail/openstack-dev/2014-August/043098.html.

Juno nova mid-cycle meetup summary: social issues

Summarizing three days of the Nova Juno mid-cycle meetup is a pretty hard thing to do – I’m going to give it a go, but just in case I miss things, there is an etherpad with notes from the meetup at https://etherpad.openstack.org/p/juno-nova-mid-cycle-meetup. I’m also going to do it in the form of a series of posts, so as to not hold up any content at all in the wait for perfection. This post covers the mechanics of each day at the meetup, reviewer burnout, and the Juno release.

First off, some words about the mechanics of the meetup. The meetup was held in Beaverton, Oregon at an Intel campus. Many thanks to Intel for hosting the event — it is much appreciated. We discussed possible locations and attendance for future mid-cycle meetups, and the consensus is that these events should “always” be in the US because that’s where the vast majority of our developers are. We will consider other host countries when the mix of Nova developers changes. Additionally, we talked about the expectations of attendance at these events. The Icehouse mid-cycle was an experiment, but now that we’ve run two of these I think they’re clearly useful events. I want to be clear that we expect nova-drivers members to attend these events if at all possible, and strongly prefer to have all nova-cores at the event.

I understand that sometimes life gets in the way, but that’s the general expectation. To assist with this, I am going to work on advertising these events much earlier than we have in the past to give time for people to get travel approval. If any core needs me to go to the Foundation and ask for travel assistance, please let me know.

I think that co-locating the event with the Ironic and Containers teams helped us a lot this cycle too. We can’t co-locate with every other team working on OpenStack, but I’d like to see us pick a couple of teams — who we might be blocking — each cycle and invite them to co-locate with us. It’s easy at this point for Nova to become a blocker for other projects, and we need to be careful not to get in the way unless we absolutely need to.

The process for each of the three days: we met at Intel at 9am, and started each day by trying to cherry pick the most important topics from our grab bag of items at the top of the etherpad. I feel this worked really well for us.

Reviewer burnout

We started off talking about core reviewer burnout, and what we expect from core. We’ve previously been clear that we expect a minimum level of reviews from cores, but we are increasingly concerned about keeping cores “on the same page”. The consensus is that, at least, cores should be expected to attend summits. There is a strong preference for cores making it to the mid-cycle if at all possible. It was agreed that I will approach the OpenStack Foundation and request funding for cores who are experiencing budget constraints if needed. I was asked to communicate these thoughts on the openstack-dev mailing list. This openstack-dev mailing list thread is me completing that action item.

The conversation also covered whether it was reasonable to make trivial updates to a patch that was close to being acceptable. For example, consider a patch which is ready to merge apart from its commit message needing a trivial tweak. It was agreed that it is reasonable for the second core reviewer to fix the commit message, upload a new version of the patch, and then approve that for merge. It is a good idea to leave a note in the review history about this when these cases occur.

We expect cores to use their judgement about what is a trivial change.

I have an action item to remind cores that this is acceptable behavior. I’m going to hold off on sending that email for a little bit because there are a couple of big conversations happening about Nova on openstack-dev. I don’t want to drown people in email all at once.

Juno release

We also took a look at the Juno release, with j-3 rapidly approaching. One outcome was to try to find a way to focus reviewers on landing code that is a project priority. At the moment we signal priority with the priority field in the launchpad blueprint, which can be seen in action for j-3 here. However, high priority code often slips away because we currently let reviewers review whatever seems important to them.

There was talk about picking project sponsored “themes” for each release — with the obvious examples being “stability” and “features”. One problem here is that we haven’t had a lot of luck convincing developers and reviewers to actually work on things we’ve specified as project goals for a release. The focus needs to move past specific features important to reviewers. Contributors and reviewers need to spend time fixing bugs and reviewing priority code. The harsh reality is that this hasn’t been a glowing success.

One solution we’re going to try is using more of the Nova weekly meeting to discuss the status of important blueprints. The meeting discussion should then be turned into a reminder on openstack-dev of the current important blueprints in need of review. The side effect of rearranging the weekly meeting is that we’ll have less time for the current sub-team updates, but people seem ok with that.

A few people have also suggested various interpretations of a “review day”. One interpretation is a rotation through nova-core of reviewers who spend a week of their time reviewing blueprint work. I think these ideas have merit. I have an action item to call for volunteers to sign up for blueprint-focused reviewing.

Conclusion

As I mentioned earlier, this is the first in a series of posts. In this post I’ve tried to cover the social aspects of Nova — the mechanics of the Nova Juno mid-cycle meetup, and reviewer burnout — and our current position in the Juno release cycle. There was also discussion of how to manage our workload in Kilo, but I’ll leave that for another post; it’s already been alluded to on the openstack-dev mailing list in this post and the subsequent proposal in gerrit. If you’re dying to know more about what we talked about, don’t forget the relatively comprehensive notes in our etherpad.

Slow git review uploads?

jeblair was kind enough to help me debug my problem with slow “git review” uploads for OpenStack projects just now. It turns out that part of my standard configuration for ssh is to enable ControlMaster and ControlPersist. I mostly do this because the machines I use at Canonical are a very long way away from my home in Australia, and it’s nice to have slightly faster connections when you ssh to a machine. However, gerrit is incompatible with these options as best we can tell.

So, if your git reviews are taking 10 to 20 minutes to upload like mine were, check that you’re not using persistent connections. Excluding review.openstack.org from that part of my configuration has made a massive difference to the speed of uploads for me.
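For reference, here’s a minimal sketch of what that exclusion looks like in ~/.ssh/config. The timeout and ControlPath values are just illustrative, not my exact configuration. The important detail is ordering: ssh uses the first value it finds for each option, so the gerrit-specific block must come before the catch-all “Host *” entry.

```
# Disable connection multiplexing for gerrit specifically.
# ssh takes the first value it finds for each option, so this
# block must appear before the catch-all "Host *" entry below.
Host review.openstack.org
    ControlMaster no
    ControlPersist no

# Keep persistent connections for everything else (handy when the
# machines you ssh to are a long way away).
Host *
    ControlMaster auto
    ControlPath ~/.ssh/control-%r@%h-%p
    ControlPersist 10m
```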

The Wild Palms Hotel

When leaving the US, I stayed in the Wild Palms Hotel. I selected it for three reasons: I’d stayed there before; it is part of the Joie De Vivre chain which I have had good experiences with before; and it was very cheap on Expedia ($77 compared to an average rate in the area of about $150). I learnt some interesting things I thought I’d share:

  • The hotel is ok, just make sure you get an upstairs room. I was woken by mating elephants at 5am two days running because the floors are so thin. Be the mating elephant, not the victim of it! Once I moved to an upstairs room this problem went away.
  • The executive rooms aren’t worth it. I got moved into one of these because of the noise problems. Its advantages were that it was away from the road, had a bathrobe (really), and an LCD TV. I don’t watch TV much, so the extra cost wouldn’t be worth it if I were paying.
  • The cleaning service kept “short sheeting” the bed. By short sheeting I mean pulling the sheets up to make the top of the bed look impressive, but leaving the bottom couple of inches of the mattress uncovered. Lots of hotels do this, and I find it crazily annoying.
  • The air conditioner was insanely loud. It was 38°C when I was staying there, and every time the air conditioner kicked in I would be woken up by it.
  • It’s a lot further south than I realized. It took about 20 minutes to get to work if you took El Camino. Depending on traffic it’s probably much faster to go all the way to the 101 and then take that. The Lawrence Expressway looks like the best way to get to the 101 from the hotel.

So, overall this hotel was “ok”, apart from some minor annoyances. I’ll keep staying there so long as they’re cheap. If they’re not running a special, then you’re much better off staying further north.

More reviews

I just got back from a lovely four days in Tasmania, and am just now catching up with the resulting email backlog. There are some new alerts about reviews of the MythTV book in there which are worth pointing out:

I’m surprised and disappointed that the installation of MythTV through pre-built packages or a CD distributions like KnoppMyth or MythDora were not covered deeper than a sentence or two in passing. This is likely to be a turn off for readers who were hoping for a quick and simple method of getting MythTV up and going.

On the whole I consider this a good book that is excellent for the new to intermediate MythTV user, although advanced users may pick a few good pieces of information out of it as well. It was well written and covered most items at just about the right introductory (yes — practical) level. Once it has taught you the basics, you can then go and look up more details online for features you want to get more information about.

I think the comments about installation technique are fair, although the method described in the book is very likely to result in a nicely working MythTV system, which was not true for the MythTV packages that shipped with Ubuntu at the time of writing (they were a quite old version). Additionally, if you already have a Linux system you want to add MythTV to, then the way described in the book is better than the CD distributions because it doesn’t involve a reinstall. I think it’s horses for courses — CD distributions are better for new users, but not for advanced users.

I’ll add coverage of CD distributions to my TODO list of things to cover here sometime in the future.

Another review:

My main concern would be the assumption of prior Linux knowledge. The introduction states you need limited or no experience with Linux or Unix. I think that in this case, some time should have been taken to introduce complete Linux newcomers into the Ubuntu environment, which is something that wasn’t touched on an awful lot. The installation of Ubuntu was well-covered and is generally a very simple process, but after that not much time was given to familiarise the user with the Ubuntu environment used throughout the book.

The rest of the book is extremely well written, clear and is a very good companion to MythTV. True to its name, it takes a practical approach to solving problems and if you’re a Linux user interested in setting up a MythTV installation, it will make a very good resource.

Again, it is fair comment to say that we don’t spend much time introducing Ubuntu apart from the bits needed to get MythTV working (we talk about installing Ubuntu on bare metal, apt, packages, LVM, disk resizing and so forth). Then again, I imagine that most people who build a PVR machine for their living room only run the PVR software on the machine, and don’t tend to use the machine as a general purpose system. After all, who wants to write email on a TV sitting on the couch? Laptops are much better for that. There are also many excellent Ubuntu and Debian books out there already, so it would be a shame to lose focus on our core content and try to be too general. For those needing a more complete Ubuntu introduction I highly recommend Beginning Ubuntu, The Official Ubuntu Book and Ubuntu Hacks.

So, I’m going to chalk that up as two positive reviews, both with useful comments to consider for next time.

Book reviews

I’m always a little hesitant when I see reviews of the book. It’s irrational, but I guess it’s a little bit like being worried that people are going to tell you that your kid isn’t the smartest in the class, or is ugly. That’s why I sat on the review from Linux Format May 2006 until today, and only read it just now. Wow. “This is probably the best Linux book you will buy all year”. I guess it doesn’t get clearer than that.

All I can say is “thanks Linux Format”. They commented on the lack of colour figures in the examples as well, and once again I should point out that there are colour versions available online and as a download. A colour revision of the PDF version of the book is also currently in the works.

Status of the book

The book has been written for a while, along with the technical editing and review. The copy edits have been done since last week. There are only two chapters left for page layout. The process has been interesting, educational, and in some parts long.

The hardest part though? Ironically, it’s filling in the marketing questionnaire. I’ve never done anything approaching sales before, although I have done customer facing work.

Some parts of the questionnaire are easy… The target audience for instance, a short pitch for the book, that sort of thing.

What about things like which magazines to ask to do a review? What about people who might be willing to do reviews?

Got suggestions? Reply in a comment?

Working on review comments for Chapters 2, 3 and 4 tonight

Michael Carden asks in a comment on my previous post about the book if I had considered making draft chapters available for public comment before printing. To be completely honest it hadn’t occurred to me until Michael suggested it, and it does fit well with all the open source stuff I have done over the years. It’s a hard call though, because there is already a review team of four or five, and there isn’t much spare time in the process because we really want the book published in time for Christmas.

This is why I’m going to say no this time to the offer of a more public review, and I’ll do my best to take that on board next time when I know more about how long this sort of thing can take (I’m actually only about two days over schedule at the moment, but I really don’t want to slip any further).

Sorry Michael.

Anyways, I’m working on review comments for three chapters tonight, which is one of the things that made me think about this more. I’m really rather surprised about how positive the review comments have been so far given how I feel about the manuscript (I’ve always viewed myself as a bit of a perfectionist, and it’s always possible to improve something, so it’s really hard to turn the chapters in on time, because that means letting go).

I have independently decided that I want to include more in chapters three and four though, and the review team without my prompting suggested more content for chapter four, so it’s now a case of sitting down and making that happen. Well, back to work.