Easy deprecations in Python with @deprecated

Tim Peters once wrote, "[t]here should be one—and preferably only one—obvious way to do it." Sometimes we don't do it right the first time, or we later decide something shouldn't be done at all. For those reasons and more, deprecations are a tool to enable growth while easing the pain of transition.

Rather than switching "cold turkey" from API1 to API2, you do it gradually, introducing API2 with documentation, examples, notifications, and other helpful tools to get your users to move away from API1. After some sufficient period of time, you remove API1, lessening your maintenance burden and getting all of your users on the same page.

One of the biggest issues I've seen is that last part, the removal. More often than not, it's a manual step. You determine that some code can be removed in a future version of your project and you write it down in an issue tracker, a wiki, a calendar event, a post-it note, or something else you're going to ignore. For example, I once did some work on CPython around removing support for Windows 9x in the subprocess module, which I only knew about because I was one of the few Windows people around and I happened across PEP 11 at the right time.

Automate It!

Over the years I've seen and used several forms of a decorator for Python functions that marks code as deprecated. They're all fairly good, as they raise DeprecationWarning for you, and some of them update the function's docstring. However, since Python 2.7 began ignoring DeprecationWarning by default [1], these decorators require some extra steps to be entirely useful for both the producer and consumer of the code in question; otherwise the warnings are yelling into the void. Enabling the warnings in your development environment is easy, by passing a -W command-line option or by setting the PYTHONWARNINGS environment variable, but you deserve more.
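If you go the in-code route instead, the standard library can re-enable and capture the warnings for you. A minimal sketch, using only the warnings module and a hypothetical old_api function:

```python
import warnings

# Hypothetical stand-in for a function you've deprecated.
def old_api():
    warnings.warn("old_api is deprecated; use new_api instead",
                  DeprecationWarning, stacklevel=2)
    return 1

# Python ignores DeprecationWarning by default, so turn it back on in
# code -- the equivalent of -W default::DeprecationWarning or
# PYTHONWARNINGS=default::DeprecationWarning.
warnings.simplefilter("default", DeprecationWarning)

# Capture the warning to show that it's actually being emitted.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = old_api()
```

With the filter set to "error" instead of "default", the warning raises rather than prints, which is a handy setting for test runs.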

import deprecation

If you pip install libdeprecation [2], you get a few things:

  1. If you decorate a function with deprecation.deprecated, your now deprecated code raises DeprecationWarning. Rather, it raises deprecation.DeprecatedWarning, but that's a subclass, as is deprecation.UnsupportedWarning. You'll see why it's useful in a second.
  2. Your docstrings are updated with deprecation details. This includes the versions you set, along with optional details, such as directing users to something that replaces the deprecated code. So far this isn't all that different from what's been around the web for ten-plus years.
  3. If you pass deprecation.deprecated enough information and then use deprecation.fail_if_not_removed on tests which call that deprecated code, you'll get tests that fail when it's time for them to be removed. When your code has reached the version where you need to remove it, it will emit deprecation.UnsupportedWarning and the tests will handle it and turn it into a failure.
@deprecation.deprecated(deprecated_in="1.0", removed_in="2.0",
                        details="Use the ``one`` function instead")
def won():
    """This function returns 1"""
    # Oops, it's one, not won. Let's deprecate this and get it right.
    return 1


@deprecation.fail_if_not_removed
def test_won(self):
    self.assertEqual(1, won())

All in all, the process of documenting, notifying, and eventually moving on is handled for you. When __version__ = "2.0", that test will fail and you'll be able to catch it before releasing it.

Full documentation and more examples are available at deprecation.readthedocs.io, and the source can be found on GitHub at briancurtin/deprecation.

Happy deprecating!

[1] Exposing application users to DeprecationWarnings emitted by lower-level code needlessly involves end-users in "how things are done." It often leads to users raising issues about the warnings they're presented, which on one hand is done rightfully, as the warning is presented to them as some sort of issue to resolve. At the same time, though, the warning could be well known and planned for. From either side, loud DeprecationWarnings can be seen as noise that isn't necessary outside of development.
[2] The deprecation name on PyPI is currently being squatted on, so I've reached out to the current holder to see if I can use it. Only the PyPI package name is called libdeprecation, not any of the project's API. I hope to eventually deprecate libdeprecation to change names, which I think is self-deprecating?

Throwing the Country Under The Bus

We did it, America. After what seemed like a lifetime of a campaign season, the people have spoken. A new leader has been chosen to both head the country and represent the American people to the rest of the world, and the choice of an unhinged, xenophobic, racist, misogynist, impetuous, backpedaling schemer was the winner. Along with that leader comes a partner equally disconnected from time and reality. This is what we have to look forward to.

We live among roughly 60 million people whose choice to lead the country was someone who actively detested, disregarded, or disrespected an overwhelming majority of the people of the world. Not just the country, but the world. Sixty million people made a choice to vote based on hate. Among those supporters, somewhere around half of them got behind a person who categorically discounted them in some way.

Many will say they didn't support the various hateful views but backed other aspects of the campaign. That works as well as having stuck with Hitler for his views on animal rights while saying the plan to eliminate Jews wasn't ideal. Whichever way a voter decided on November 8, they made that decision accepting the total package of their chosen candidate.

After at least some portion of the 60 million people decided that none of the positions the candidate held were negative enough to sway them in any other direction, this is where we're at. People are scared. People are scrambling. For as much as people want to say "hope for the best, expect the worst," the damage is done for a lot of people, and it's only barely begun. Whether it's the country as a whole or the people you thought you knew, many people were just dealt a harsh blow in the turnout alone.

A lot of people just got thrown under the bus regardless of what actually happens at the hands of the government. Even if things don't change in the manners that the rhetoric hinted they would, we've already gone backwards. Not only are people unsure if they'll be able to remain in the country, or enter the country, or have access to the goods or services they previously had, they know that 60 million people are fine with them living with that uncertainty.

As an educated 32-year-old white man—of the highest privilege short of being in the top "1%" of earners—a lot of what has the potential to go wrong for a lot of people doesn't affect me directly. About the only thing that will change for me is my personal taxes. Meanwhile, I get to watch my friends, family, and millions of others wonder for the next two months about what life is going to be like, and eventually live that apparent nightmare.

But I'm not just going to watch. People like me have watched for too long, and lately I have done too much watching. I don't yet know exactly what I'm going to do to help, but I've done enough other advocacy and outreach work to know that I can make a difference.

At least I hope so.

Take Your Blinders Off

Real life is happening all around us. Technology changes at an astounding rate. Companies refocus and change, or they die. The city you live in and the one you grew up in are probably different than they were a few years ago, especially if they're different cities. Yet it's all too easy for an individual to hold on to what once was.

I was recently perusing Reddit's r/Python [0], an online forum created to answer homework and "What IDE should I use?" questions, and a post on a new blog to be used by Microsoft's Python Engineering team came up. Wow, cool, something useful! I better read this one.

"The beast itself doesn't care about Python"

As with many family traditions passed down from previous generations, several people trotted out the "embrace, extend, extinguish" meme when they saw it was time for a discussion on Microsoft. One of them goes into detail on how the Three E's could play out, and it couldn't be further disconnected from reality. Having been a part of the CPython development team, jumping between active and passive over the years, I can't fathom how "[e]xtinguish Cpython (sic) by making MS-extensions mandatory via a deprecation cycle" could occur. That's just not how things work on this Earth, which makes the example even more ridiculous. There is literally zero value to Microsoft as a company in taking that extinguish step. I can't think of any possible extinguish step involving Python that would make sense for a modern day Microsoft.

Furthermore, an alleged former Microsoft employee doesn't even think the company could care about something like Python. On one hand it's slightly believable, as it's a huge company and no one person knows it all. However, the blinders you'd have to be wearing to think that Microsoft—an enormous software company with obvious roots in operating systems on the desktop, server, and in the cloud—couldn't care about Python must be huge. It's astounding, really.

It's 2016. Microsoft isn't staffed with the same group of people who years ago espoused the company's old school values at every opportunity. Steve Ballmer isn't in charge anymore; he owns the LA Clippers. "The beast itself doesn't care about Python, but I bet individuals at the company do," couldn't be more wrong if you've been paying any amount of attention. Through a lot of the work I've done, I've been exposed more to Microsoft caring about Python than I have any sort of consumer/producer relationship with them [1].

"Extinguish Cpython (sic) by making MS-extensions mandatory via a deprecation cycle."

Once upon a time, I was one of three CPython contributors actively looking after the Windows build, as my job at the time was at a Windows-using software shop and I was bringing Python into their QA department. I got to do some of my upstream work on company time, but in order to work on my own time I needed Visual Studio on my personal computer. After asking around, I was able to get a hookup via Microsoft's Open Source Technology Center. Since that time many years ago, I've been the CPython team's liaison, handing out somewhere near $400,000 worth of MSDN Ultimate subscriptions and renewals to CPython contributors and those running the CPython buildbot test fleet, courtesy of Microsoft.

It is well within Microsoft's interest to care about Python—and to care about it in a non-extinguishing way—which has been near the top of just about every attempt to measure programming language popularity since these metrics became a thing [2]. Throughout my time as a core developer and stretching into my time on the board of the Python Software Foundation, the download metrics showing the growth of Python on Windows [3]—as in the count of Windows installers downloaded from python.org, sorted per-version and per-month—were very valuable to the folks working on Python at Microsoft. This included when IronPython was still a Microsoft-funded initiative, as well as what used to be the Python Tools for Visual Studio team, which now seems to be under this Python Engineering team they have. The ability to show demand trends from Python's users on Microsoft platforms translates to those teams being able to spend more time giving Python users first-class support. Of the many things they do for Python and its related tooling, they directly support CPython by employing two core developers, something few other companies do [4].

Beyond supporting Python-the-software, Microsoft has been a supporter of Python-the-community for years. They were platinum sponsors of PyCon for years until 2015 when they became the sole Keystone sponsor. This will be their tenth year as a sponsor member of the Python Software Foundation. They are one of two sponsors of the NumFOCUS foundation. They donated $100,000 to the IPython project for its continued development. They've sponsored SciPy, PyData, and other conferences.

It's actually hard for me to think of a company that rivals the level of commitment to Python's future that Microsoft has had for years. To still be reading people stuck on this old ass "embrace, extend, extinguish" meme from their parents, and people preferring that Microsoft stay away from Python, is mind-numbing. Take off the blinders and see what's happening in 2016 instead of what you heard about in 1996.

ps. I'm shocked that no M$ or M$FT or Micro$oft spellings showed up in that thread.

[0] I had an account and was once a moderator of r/Python, but that was years ago and it became a useless wasteland that I was sick of trying to clean up.
[1] I haven't used Windows since 2013, though I'm not opposed to it.
[2] They're all pretty bad measurements, but they're all roughly consistent, so maybe they're not bad after all?
[3] Which are now long gone, as the redesigned site is much more state-of-the-art and behind CDNs and all sorts of other fanciness. The old site just used webalizer to parse the single web server that was serving up the download files.
[4] Even hiring one person to spend time upstream on Python is a rarity.

Proposing a talk about proposing a talk

Over the last few years I've spent a lot of time talking with people and organizations about how they can become more engaged in the tech communities they're a part of, from local to regional and up through global efforts. For companies, one of the easiest ways to do that is by sponsoring events like conferences or meetups. By doing that, a company gets a chance to put their name out there, chat with people who run in the same circles, and help the event happen.

For individuals, one of my go-to ways of encouraging engagement is to share a presentation at an event in your community. The conversation often starts like this:

me: Have you thought about giving a talk at your local user group?

them: Ehh, I don't really have anything interesting to talk about.

I don't think that's accurate, and it usually only takes a few minutes of talking about someone's background, what they do for work, and what they're building to uncover something interesting. Going beyond that, a common misconception is that their newfound interesting topic isn't at a level of difficulty that would be valuable to share on stage.

Python conferences, and likely those on other technologies, need schedules of talks that cross several axes of diversity, from topics of usage to audience level and so many others. Having all expert-level talks only benefits people who are already experts and those intermediate users who are on the cusp of leveling up. Instead, a distribution of talks across those axes is the ideal situation, acting as a pipeline for people at each level to grow into the next one.

Once that clicks, writing a compelling proposal is the next hurdle, and depending on where the presentation will be given, it can take a good bit of work. Meetup talk proposals are often a paragraph description on a mailing list, but conference proposals usually involve more detailed fields.

They're also more competitive. For 2015, PyCon received 540 talk proposals for 95 available talk slots, giving it under a 20% acceptance rate. For 2014, PyOhio -- a conference around 15% the size of PyCon in terms of attendees -- received 90 proposals for 34 talk slots, at nearly a 40% acceptance rate. Meetups are typically even more favorable due to their increased frequency. If there's no room this month, you're probably first in line next month.

I've had a bunch of proposals accepted at conferences over the last few years and have reviewed somewhere over 1,000 PyCon proposals, and I recently struggled with the idea that I didn't have anything interesting to propose for conferences. After thinking through the process I've helped others navigate, I thought it would be fun to make the process itself a talk, so that's what I'm doing for PyOhio.

"From idea to presentation: how to speak at a conference" is my proposal to PyOhio 2015, and it's on Github. I don't really like the title, and the proposal could use some fine tuning, but submitting early and getting feedback is one of the great things about conferences in this community. The proposal isn't due until May 15, so there's still some time to fix it up. It's currently submitted as a 40 minute slot, but I'm going to do some more detailed sketching to see if it might be better in a 20 minute slot. Hopefully it works.

One weird trick to having a great PyCon

Next week will be my eighth PyCon, and people have been asking me "but how do you do it?" I discovered this one weird trick, and it's called "keeping your head up."

If you've ever been on stage at a conference, you know that a high percentage of attendees have Macbooks because the little glowing Apple logo dots the audience. If a power loss should occur, the glowing Apple logos are your navigational north star to the exits.


The Macbook at rest

When you're in a talk session, keep your head up. A lot of people come into talk sessions at conferences of all types and think, "wow, I just sat down, I should immediately pull out my laptop and type on it for the next 40 minutes." By not doing that and giving the presenter your attention, you stand a greater chance of learning or enjoying what the speaker is presenting.

Science has proven that when you put the glowing rectangle down and pay attention, it's better not only for you but for everyone else around you, and even for the person on stage. There are 95 amazing talks going on this year, so enjoy them! If you're on-call or have something to immediately take care of, the wifi works as well in the hallways as it does inside the rooms. Remember: all of the talks are recorded and available shortly after the conference.

When you're walking the hallways, keep your head up. Literally, it's easier to walk around when you can see what you're walking into. Plus you never know what you're actually going to walk into, so leave your phone in your pocket for a few minutes here and there. It's fairly easy to walk into a conversation with Guido, and then have other people do that, and then you talk to them, and then years later at another conference you start up a conversation group with that person and then Guido joins you. The hallway talks, affectionately dubbed "The Hallway Track" since we organize series of talks into "tracks", are some of the best times of the conference.

A bonus trick I was taught back when Python 2 was cool was to just go sit at a random lunch table. You see your coworkers all the time. If you see someone sitting by themselves, go join 'em. You're in the company of over 2,500 people with a shared interest (probably even more than one!). It's enough of a shared interest that you all flew to Montreal to spend several days doing things around that shared interest.

Talk to people! Ask what they do, tell them what you do. I have never found a more welcoming environment than the one PyCon instills, and lunch is a great way to meet new people, learn new stuff, figure out what projects to sprint on, set up dinner plans, etc. Light breakfast and lunch are provided by the conference, but you'll have to venture out somewhere in the great city of Montreal for the evening, as does everyone else at the table.

Keep your head up. There's so much going on at PyCon that you don't want to miss it all. (Note: Because there is so much going on, you will miss at least some of it)

Texas A&M moves to 25-1 in San Antonio


Coming off of a loss at Alabama that put the longest win streak in SEC history to an end, #3 ranked Texas A&M got right back to work at Nelson Wolff Stadium in San Antonio, sending 11 batters to the plate in a 25-minute first inning that saw six runs on seven hits.

UTSA starting pitcher Boone Mokry was pulled with no outs and five runs on the board in that first inning, with Logan Onda continuing from there. After a huge offensive production, the Aggies were slowed down in the second and third innings, producing one hit and striking out three times.

The Roadrunners looked like they might get on the board in the second as a C.J. Pickering single immediately sent him to third on a Geonte Jackson double to left. UTSA instead left those two on base. Leadoff hitter Kevin Markham walked to start the UTSA third but was quickly picked off, and the Roadrunners wouldn't see another baserunner until Markham got the sixth inning started.

Turner Larkins, who suffered the loss at Alabama, averaged just over 11 pitches per inning thanks to speedy fourth and fifth frames by UTSA. The fourth inning was a defensive gem for the Aggies, with Nick Banks making a diving catch toward the line in right and left fielder Logan Taylor reaching full extension to make the third out on a tremendous dive toward the gap.

UTSA managed to hold A&M to four 1-2-3 innings, with their second such inning coming in the fourth, initiated by catcher John Bormann's pickoff of Blake Allemand at first. The call immediately drew head coach Rob Childress out of the dugout to question first base umpire Rodger Claycomb, who then consulted with the rest of the umpire crew, with no change made to the call. Ryne Birk and Mitchell Nau ended that frame with two strikeouts.

Texas A&M made it a 10-run game in the fifth with Banks and Taylor reaching base for third baseman Ronnie Gideon's fourth home run of the season. Seven hitter Hunter Melton immediately doubled off the left center wall and later scored. Catcher Michael Barash would come in to score on a Patrick McLendon pinch hit single in the sixth to put the Aggies up 11-0.

UTSA got on the board with a 2-2 count and 2 outs in the ninth thanks to a Grant Gibbs pinch hit single to drive in left fielder Matt Hilston, who reached on a walk and advanced to second via defensive indifference.

After a nearly three hour game, Texas A&M's Larkins had the win to move to 3-1, while Mokry took the loss to fall to 1-1.

Texas A&M's next action is an SEC matchup at home against Missouri, while UTSA is again in non-conference play tomorrow hosting University of the Incarnate Word before heading into a Conference USA matchup against Florida Atlantic.

Nice APIs: Limits in OpenStack SDK

Providing a great experience for Python developers who use Rackspace is my job. In attempting to do that, I've spent time working on, writing, reading, investigating, and dreaming about code that enables our customers to build great things. In the past that has mostly involved Apache's libcloud and Rackspace's own pyrax, as well as several packages offered by various OpenStack services, e.g., python-novaclient.

For the last several months, I've been working with a team on the OpenStack SDK, a project aimed at providing a great experience for Python developers who use OpenStack (hint: that's like the first sentence). Rackspace's platform is built on OpenStack: our users use it, our developers contribute to it, and we want to see it thrive. The application developer story in OpenStack today is not a great one, and a group of us on the SDK project are looking to change that. One of the ways we hope to do that is through offering a set of great APIs to work with the many services offered by OpenStack.

While thinking about how Resource classes — our representation of the resources of a REST API — are constructed within the SDK, we've come across enough of them that require the ID of another resource that it became a sign to do something about it. Rather than make a user get the ID attribute of one resource and set it onto another resource, like how a POST /servers requires the IDs of the image and flavor you want, why not just take the resource itself and pull the ID internally before making the HTTP request? Easy enough, right?

While that's cool, it enables something even more cool.

Resource limits, such as how much RAM you've used and are allowed to use, are available through a GET /limits on the compute service. What that returns is a dictionary of absolute limits and of rate limits. The absolute limits are key/value pairs like "totalRamUsed": 1024. Rate limits consist of a list of dictionaries where one of the keys is a list of more dictionaries. It's dictionaries all the way down. Here's a sample.
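To make that nesting concrete, here's a trimmed, made-up sketch of the shape (the key names follow the compute API; the values are invented):

```python
# Trimmed, invented sketch of a GET /limits response body on the
# compute service -- real responses carry many more keys.
limits_response = {
    "limits": {
        "absolute": {
            "maxTotalRAMSize": 51200,
            "totalRAMUsed": 1024,
            "totalCoresUsed": 2,
        },
        "rate": [
            {
                "uri": "*",
                "regex": ".*",
                # ...and each rate entry holds yet another list of dicts.
                "limit": [
                    {"verb": "POST", "value": 10, "remaining": 2,
                     "unit": "MINUTE"},
                ],
            },
        ],
    },
}

# Pulling a usage fraction out of the absolute limits.
absolute = limits_response["limits"]["absolute"]
ram_fraction = absolute["totalRAMUsed"] / absolute["maxTotalRAMSize"]
```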

While reading the limits docs, that change to allow resources to be constructed with other resources popped into my head. What if we make a Limits resource that is constructed from an AbsoluteLimits resource and then a list of RateLimits resources? I'm in.

The code review is available here, but the juicy part is this:

class Limits(resource.Resource):
    base_path = "/limits"
    resource_key = "limits"
    service = compute_service.ComputeService()

    allow_retrieve = True

    absolute = resource.prop("absolute", type=AbsoluteLimits)
    rate = resource.prop("rate", type=list)

Boom. Done. Well, we have to override Resource.get to be able to construct that list of RateLimits, but it was fairly easy. While getting your limits is not in itself some mind-blowing task, what the underlying change enabled will make for some very easy-to-use resources.
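The override itself isn't in the snippet above, but the idea can be sketched with simplified stand-ins (these are not the SDK's actual classes, and the fake session just returns a canned body instead of making an HTTP request):

```python
class AbsoluteLimits:
    """Stand-in for the real AbsoluteLimits resource."""
    def __init__(self, **attrs):
        self.total_ram_used = attrs.get("totalRAMUsed")
        self.max_total_ram = attrs.get("maxTotalRAMSize")


class RateLimits:
    """Stand-in for the real RateLimits resource."""
    def __init__(self, **attrs):
        self.uri = attrs.get("uri")
        self.limits = attrs.get("limit", [])


class Limits:
    """Stand-in Limits resource showing the overridden get."""
    def get(self, session):
        body = session.get("/limits")["limits"]
        # The overridden part: build one AbsoluteLimits and a list of
        # RateLimits objects rather than handing back raw dictionaries.
        self.absolute = AbsoluteLimits(**body["absolute"])
        self.rate = [RateLimits(**r) for r in body["rate"]]
        return self


class FakeSession:
    """Returns a canned body instead of making an HTTP request."""
    def get(self, path):
        return {"limits": {"absolute": {"totalRAMUsed": 1024,
                                        "maxTotalRAMSize": 51200},
                           "rate": [{"uri": "*", "limit": []}]}}


limits = Limits().get(FakeSession())
```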

Assuming sdk is a Connection instance, getting your RAM usage is as easy as:

>>> limits = sdk.compute.limits()
>>> print(limits.absolute.total_ram_used)
>>> print(limits.absolute.total_ram_used / limits.absolute.total_ram)

That seems pretty basic, and it is, which is ideal. After coding that up, I took a look at what other libraries would do to accomplish the same thing.

Unfortunately libcloud only has a limits call in its OpenStack compute v1 API, which I'm unable to use (everything I have access to needs v1.1/v2). However, novaclient certainly supports this.

Assuming nova is a Client instance, getting your RAM usage is as easy as:

limits = nova.limits.get()
ram_limit = next(l for l in limits.absolute if l.name == "totalRAMUsed")

get() returns an object containing rate and absolute attributes, where absolute is a generator of objects with a name and value for each type of limit. I just wanted one of them, so I had to consume a generator and find the one I'm looking for.

What if I wanted to calculate my percentage of usage like before? It actually gets slightly easier. Of course, I could have used this same method for the last example, but this was my process of discovery.

limits = nova.limits.get()
absolute = {l.name: l.value for l in limits.absolute}
print(absolute["totalRAMUsed"] / absolute["maxTotalRAMSize"])

Now I create a dictionary comprehension from the generator of limits. Thanks to that, it's slightly more usable.

We're currently working on how we want the user interfaces to look in OpenStack SDK, so if you have a story to tell in this area, I'd love to hear it. We want to enable people to build great things on top of OpenStack, so email me at brian@python.org and let's see what we can do.

Terry Howe and I have proposed a talk for the OpenStack Summit about building applications with the OpenStack SDK. If that's something you're interested in knowing more about, check it out.

How to poorly judge contribution in three easy steps

People get involved in open source for a lot of reasons. For some, it's their job. For others, it's to scratch an itch. People don't get involved for a ton more reasons.

Yegor Bugayenko of Teamed.io recently wrote a post titled "How Much Do You Cost?" on how people mis-estimate their hourly value, and the criteria he in turn uses to evaluate them. He covers several areas to form his evaluation, one of them being open source contribution. The good thing is that if you don't have any open source contributions to show for it, Yegor gives three explanations for why you haven't done them.

First, you're too shy to share your code because it's crap. Obviously, this is not a good sign. Not because your code could be bad, but because you're not brave enough to face this fact and improve.

Well, of course it's not a good sign! You're shy and you do bad work! It's a good thing there are two more reasons after this one - there's still hope.

The second possible cause is that you work from nine till five, for food, without passion.

Your employer is maybe paying for open source in one way or another, but have you paid the price? Even if they don't pay you for it, surely you have about 8 extra hours in your day and the entire weekend. This is modern software development, either you're in or you're out.

Pick your head up and put that crappy code out there!

The last possible cause is that you don't know what to write and where to contribute, which means lack of creativity.

Do you have to be told everything? Have you ever used some code and it just did its job? Yeah, right. You should probably go out of your way and improve things or find all the bugs, just because.

Come on, you have to get that crap code out there somehow. Sittin' there being all shy isn't going to bring the bugs out.

The justification for wanting non-shy non-crappy developers is that they have a very high bar when it comes to code quality, which is not really related. Bravery shouldn't have anything to do with code quality, unless you're looking to hire people you can shame and yell at about their code...which is how it appears to be when "you won't feel comfortable in our projects" comes after mention of negative feedback.

The paragraph about passion, which oddly isn't at all about passion, is quite telling. It reads more like how people get into gangs. The paragraph is about your use of personal time, not about your emotion and feeling toward the problems you're solving. Passion is something that really comes through in conversation, not in commit count. How much and when you work on something has little to do with passion.

The paragraph about lack of creativity contains very little to do with creativity. The features you want for a project don't always line up with the project. Sometimes they're just not good ideas. That's a thing that happens. That all results in code not being written, thus a seeming lack of creativity.

The sentence about finding, reporting, and fixing bugs ignores all of the problems people constantly run into with open source contribution. Finding bugs can sometimes be easy. Reporting them is sometimes easy. Fixing them is usually harder, but getting them accepted and into a release often requires a significant amount of effort. To then minimize the struggles all sorts of people have gone through, especially socially, by putting it like "you couldn't do this?" is just sad. That's their loss, as plenty of super smart people don't have time for kiddie games on the internet, or for people who would carry this attitude.

When it all comes down to it, I'm fairly shy and write crappy code. I wonder how much I'm worth.

OpenStack SDK Post-Summit Update

This is a long post about the OpenStack SDK. It even has a Table of Contents.

Current Project Status

The OpenStack SDK is quickly heading toward being usable for application developers. Leading up to the OpenStack Summit we had a reasonably complete Resource layer and had been working on building out a higher-level interface, as exposed through the Connection class. As of now, first cuts of a high-level interface have implementations in Gerrit for most of the official programs, and we're working to iterate on what we have in there right now before expanding further. We also had an impromptu design session on Thursday to cover a couple of things we'll need to work through.

Project Architecture

At the lowest level, the authentication, session, and transport pieces have been rounded out and we've been building on them for a while now. These were some of the first building blocks, and having a reasonably common approach that multiple service libraries could build on is one of the project goals.

Session objects are constructed atop Authenticators and Transports. They get tokens from the Authenticator to insert into your headers, get endpoints to build up complete URLs, and make HTTP requests on the Transport, which itself is built on top of requests and handles all things inbound and outbound from the REST APIs.
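That layering can be sketched in a few lines. This is an illustrative simplification, not the SDK's actual classes or signatures; the names and return values here are stand-ins:

```python
# Illustrative sketch of the session/authenticator/transport layering;
# class names and values are simplified stand-ins for the real SDK.

class Authenticator:
    """Hands out tokens and service endpoints."""
    def get_token(self):
        return "gAAAA-example-token"

    def get_endpoint(self, service):
        return "https://myopenstack:8774/v2"


class Transport:
    """Thin wrapper that would sit on top of requests."""
    def request(self, method, url, headers=None, **kwargs):
        # A real transport performs the HTTP call; here we just echo it.
        return {"method": method, "url": url, "headers": headers or {}}


class Session:
    """Combines the two: inserts tokens and builds complete URLs."""
    def __init__(self, authenticator, transport):
        self.authenticator = authenticator
        self.transport = transport

    def get(self, service, path):
        url = self.authenticator.get_endpoint(service) + path
        headers = {"X-Auth-Token": self.authenticator.get_token()}
        return self.transport.request("GET", url, headers=headers)


session = Session(Authenticator(), Transport())
response = session.get("compute", "/servers")
```

The point of the shape is that service code above this layer only ever talks to the Session; tokens and endpoints are someone else's problem.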


Poorly drawn version of what we're doing

On top of that lies the Resource layer, a base class implemented in openstack/resource.py, which aims to be a 1-1 representation of the requests and responses the REST APIs deal with. For example, the Server class in openstack/compute/v2/server.py inherits from Resource and maps to the inputs and outputs of the compute service's /servers endpoint. That Server object contains attributes of type openstack.resource.prop, a class that maps server-communicated values, such as the accessIPv4 response body value, to attributes like access_ipv4. This serves two purposes: it gives us one place to bring naming consistency to the library, and props take a type argument that allows minimal client-side validation of request values.
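The idea behind prop can be sketched as a small descriptor. This is a simplified stand-in for the real openstack.resource.prop, not its actual implementation:

```python
# Simplified sketch of a prop-style descriptor: it maps a Pythonic
# attribute name onto the wire-format key the REST API uses, with
# optional client-side type checking. Not the SDK's real code.

class prop:
    def __init__(self, name, type=None):
        self.name = name   # the key as the service communicates it
        self.type = type   # optional type for minimal validation

    def __get__(self, instance, owner):
        if instance is None:
            return self
        return instance._attrs.get(self.name)

    def __set__(self, instance, value):
        if self.type is not None and not isinstance(value, self.type):
            raise TypeError("%s must be %s" % (self.name, self.type.__name__))
        instance._attrs[self.name] = value


class Server:
    # accessIPv4 on the wire becomes access_ipv4 in Python
    access_ipv4 = prop("accessIPv4", type=str)

    def __init__(self):
        self._attrs = {}


s = Server()
s.access_ipv4 = "10.0.0.4"
```

Setting s.access_ipv4 stores {"accessIPv4": "10.0.0.4"}, ready to be sent as a request body, and assigning a non-string raises TypeError before a request ever goes out.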

Resource objects are slightly raw to work with directly. They require you to maintain your own session (it's the first argument of Resource methods), and they typically only support our thin wrappers around HTTP verbs. Server.create will take your session and then make a POST request populated with the props you set on your object.

On top of the Resource layer is the Connection class, which forms our high-level layer. Connection objects, from openstack/connection.py, tie together our core pieces - authentication and transport within a session - and expose namespaces that allow you to work with OpenStack services from one place. This high-level layer is implemented via Proxy classes inside of each service's versioned namespace, in their _proxy.py module.

Right now many of these Proxy implementations are up for review in Gerrit, but openstack.compute.list_flavors is currently available in master. It builds on the openstack.compute.v2.flavor Resource, simply calling its list method inside list_flavors and passing on the Session that compute was initialized with.
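The delegation pattern boils down to something like the following rough sketch; the real list_flavors lives in the compute service's _proxy.py and actually issues requests, while the bodies here are illustrative:

```python
# Rough sketch of how a Proxy method delegates to a Resource classmethod,
# passing along the Session it was initialized with. Method names mirror
# the flavor example, but the bodies are illustrative only.

class Flavor:
    @classmethod
    def list(cls, session):
        # A real Resource would GET /flavors through the session and
        # build Flavor objects from the response body.
        return [cls() for _ in range(2)]


class Proxy:
    def __init__(self, session):
        self.session = session

    def list_flavors(self):
        # The proxy hides the session bookkeeping from the caller.
        return Flavor.list(self.session)


flavors = Proxy(session=object()).list_flavors()
```

The caller never touches the session; that's exactly the ergonomic difference between the Resource layer and the high-level layer.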

What the high-level looks like

There are a bunch of example scripts in the works in the Gerrit reviews, but some of what we're working on looks like the following.

Create a container and object in object storage:

from openstack import connection
conn = connection.Connection(auth_url="https://myopenstack:5000/v3",
                             user_name="me", password="secret", ...)
cnt = conn.object_store.create_container("my_container")
ob = conn.object_store.create_object(container=cnt, name="my_obj",
                                     data="Hello, world!")

Create a server with a keypair:

from openstack import connection
conn = connection.Connection(auth_url="https://myopenstack:5000/v3",
                             user_name="me", password="secret", ...)
args = {
    "name": "my_server",
    "flavorRef": "big",
    "imageRef": "asdf-1234-qwer-5678",
    "key_name": "my_ssh_key",
}
server = conn.compute.create_server(**args)
servers = conn.compute.list_servers()

Where we're going

General momentum has carried us into this Connection/Proxy layer, where we have initial revisions of a number of services, and by default, we'll just keep pushing on this layer. I expect we'll iterate on how we want this layer to look, hopefully with input from people outside of the regular contributors. Outside of that, results from conversations at the Summit will drive a couple of topics.

  1. We need to figure out our story when it comes to versioning APIs at the high level. Resource classes are under versioned namespaces, and even the Proxy classes that implement the high level are within the same versioned namespace, but we currently expose high level objects through the Connection without a version, as seen in the above examples.

    On one hand, it's pretty nice to not have to think about versions for APIs that only have a v1, but that won't last. Along with that, we're working in a dynamic language on growing APIs. Not pinning to a version of the interface is going to result in a world of pain for users.

  2. We need to think about going even higher level than what we have now. Monty Taylor's shade library came up both at his "User Experience, SDKs" design session and during the impromptu OpenStack SDK session we had, and once we get more of the Connection level figured out, we're going to look at how we can tackle compound operations.

  3. Docs, docs, docs. Terry Howe has been putting in a lot of work on building up documentation, and now that we're moving along more smoothly up the stack, I think we'll soon hit the point where code changes will require doc changes.

    I'm also working up a "Getting Started" guide for the project, as we have some people interested in contributing to the project. Thursday's python-swiftclient session ended in that team being interested in shifting their efforts to this SDK, so we need to make sure they can easily get going and help improve the client and tool landscape.

    For the time being, doc builds will appear at http://python-openstacksdk.readthedocs.org/

  4. PyPI releases. Terry put together a version of the package that could reproduce the examples we showed in our talk on Monday, composed of master plus a couple of his in-flight reviews for compute and network and mine for object store. As we progress and want to try things out, and to let people try along with us, we'll probably keep cutting more releases under 0.1:

    pip install python-openstacksdk

    Keep in mind this is absolutely a work-in-progress, and API stability isn't yet a thing, so check it out and let us know what you think, but don't build your business on it.

  5. We need to get back to some of the administrivia we've been avoiding recently in the name of expanding the Resource layer. The wiki page could use a refresh to reflect where we're at and what's going on. We need to start using more blueprints and the issue tracker, especially as more people become interested in joining the project. We were able to work without most of that when it was just a couple of us trying to get this off the ground, but we need to make better use of the tools around us.

Overall, the SDK is coming along nicely. We had some good talks at the Summit and got a lot of interest from people and projects, so the coming months should be another good period of growth for us.

Summit Presentation with Terry Howe

On Monday, Terry Howe and I presented "Getting Started with the OpenStack SDK", a 40 minute talk on why we're doing this, how we're doing it, and where the project is going. Both of us had presented at conferences before, but never jointly, so it was an interesting first-time experience, and it seemed to work well. The general gist is that I covered most of the "why" and "where", and Terry covered most of the "how".

The first half focuses on three key ideas that brought this SDK to being: fragmentation, duplication, and inconsistency in the library and tooling landscape around OpenStack. I dove into each of those areas with examples of why they're an issue, such as how many different clients there are, and how different it can be to work with them. From there I covered some of the goals we have while trying to improve those issues, such as building solid foundations and providing consistent user interfaces.

The second half focuses on showing where we're at and what can be done. Terry took a working example that creates a network, sets up various security group rules, starts up a server, attaches a floating IP, and results in a running Jenkins server. After that, he dove into some of the internals, showing how session, transport, and authenticator work together, and explaining the resource and proxy levels.

After we were done, we had a good 10 minutes of questions, and about another 20 minutes of conversation in the hall afterward. A university professor came up to me to say he wants to use the SDK with his students, which was awesome to hear.

Check out the video here - 42 minutes total.

SDK Conversations at the Summit

In the Marketplace

While spending most of Monday through Wednesday in the Rackspace booth in the marketplace, I talked to a lot of people about the SDK project. It's fun to give away t-shirts and raffle off prizes at conferences, but I'm there to talk with people about the experiences they have with Rackspace, OpenStack, and other platforms, and to advocate for the first two.

I've gotten the SDK "elevator pitch" down fairly well by now for when people turn around and ask what I do. The good thing is that no one thought it was a bad idea! People were excited about various parts of it, particularly reducing fragmentation by offering all of the libraries in one package, and many were excited about us coming up with more consistent interfaces across services.

Overall it was a lot of small conversations that ended with a smile that we're both doing fun stuff and it's all getting better.

Impromptu Design Session

Although we didn't have a session on the schedule, we created one of our own Thursday morning in the Le Meridien lobby. Dean Troyer, Jamie Lennox, Terry Howe, Ken Perkins, and I gathered to talk for about 40 minutes about where we're going. We covered two main points: an even higher level than we currently provide, and our multi-version story.

Even Higher Level

Currently we provide an abstraction that gets a user to the point where they can call, e.g., object_store.list_containers(), and they'll receive a list of containers. We've taken care of the lower-level plumbing bits like authentication, session, and transport within the Connection class, which exposes the object_store namespace, containing the higher-level view on top of the account and container resource level.

It was mentioned during this session, and during Monty Taylor's user experience session, that Monty is working on a project called Shade. Shade flies at a higher level where you say "give me a working server" and it does what's necessary to make that happen. The tool aims to abstract away provider differences in order to complete the task, such as how Rackspace gives you a VM with a publicly accessible IP while HP VMs need to be added to a network and have a floating IP attached.

"Give me a server" is a pretty common first step for newcomers, so that's an obvious starting place. "Upload this directory to object storage" is another. If you have others, we'd love to know, and we'd love help to implement them. With where we're working right now, we're not yet on to provider specific plugins, so high-level multistep tasks on vanilla OpenStack are what we're looking for.

Multiversion APIs

At the high level within openstack.Connection, we're not currently making any attempt to expose multiple versions of a service's API. We support authenticating via either a v2 or v3 Keystone, and we support multiple versions of APIs at the resource layer, but you end up with high-level access to a set of unversioned service APIs. On one hand, that makes it fairly nice to work with methods on openstack.object_store, especially since there is currently only a v1 API, but should that actually have a v1 somewhere in there?

A point was brought up that we pin versions in other places, such as our requirements. We couldn't have an unversioned dependency in requirements.txt and expect our code to continue working against its APIs forever. When they go from v1 to v2, things will be different and potentially affect what we've coded against. If you've written against the v1 API, you probably want to stick with it until you've written and tested against the v2 API. As much as the unversioned namespace may feel more friendly, it's eventually going to cause pain.
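To make the requirements analogy concrete, here is a hypothetical requirements.txt fragment (the somelib name is invented for illustration); the pinned form is what keeps code working as a dependency's API grows:

```
# Pinned: our code was written and tested against the 2.x API surface.
somelib>=2.0,<3.0

# Unpinned: this would silently pick up 3.0 one day and break
# everything we wrote against 2.x.
# somelib
```

The argument in the session was that the high-level namespace deserves the same treatment as the pinned line.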

The "Improving python-swiftclient" Design Session

On Thursday, John Dickinson held a session on how to improve the python-swiftclient project. I'm not a contributor there, but was interested to see what they were planning to do and maybe chime in on getting a few more eyes on the SDK, especially since I threw together a high-level Swift view.

Within the first few minutes, the bulleted list the group had come up with looked a lot like the bulleted lists we came up with to start the SDK project. They have a lot of work they want to do, and we're already on our way to doing much of the same. Dean Troyer beat me to the punch of grabbing a mic and asking if it's possible to put some of these efforts behind both OpenStackClient and the SDK.

Dean and I then gave very quick talks on where OSC and SDK fit in to what they were aiming to accomplish. From there, the conversation shifted towards 'Can we accomplish this over there?' and 'Do we want to accomplish this over there?' The answer to both turned out to be 'yes'.

Coming out of this meeting, we're going to have to quickly bulk up our documentation of the lower-level parts so we can bring these folks up to speed, as one of the first topics was their HTTPConnection class, and the second was from Jamie Lennox on using Keystone's sessions.

We're also going to need to build up a "Getting Started" guide for new contributors, coming out of this session and a few other talks I've had. Welcome, everyone!

If you got this far, wow. See me at a conference some time for a high five.

Writing a PyCon Proposal

With the PyCon 2015 Call for Proposals ending in 12 days (on September 15), a few people have been asking "what makes a good PyCon proposal?" We've written up some proposal advice in the past and gathered a bunch of proposal resources as well (including a sample proposal I wrote about putting a pug into space), but we still get questions on filling out the actual proposal form.

Speaking at PyCon, or any conference or meetup, is an awesome experience. With a conference the size of PyCon and the number of proposals that are received, competition is pretty intense. The following guidelines have been helpful to others, and I hope they'll be helpful to you. Keep in mind that I'm only one individual reviewer - these aren't PyCon's "official" guidelines.


If you want to follow along, create an account on the PyCon site and then enter your dashboard. From there, choose the "Submit a new proposal" button and then the type of proposal you want to submit. If you had an account last year, we carried them over to this new site.

Title

A couple of words represent all of the work you put into this proposal: your slides, the rehearsals, and everything else about it. The title is your big shot to attract people, and it's also one of the few ways to find your presentation after you give it. Substance is much more valuable than flash here. It doesn't have to be dry like a patent application title, but shy away from memes.

Duration

PyCon has a limited number of 45 minute talk slots, and asking for one is merely a suggestion to the program committee chair who constructs the schedule. If you think you have a 45 minute talk, go ahead and select it, but be aware that it might not fit in the schedule and you may instead be offered a 30 minute slot.

Description

Your description will end up both in our printed program and in the online schedule. It's limited to 400 characters, so it's a nice supplement to your title. If I bumped into you in the hallway and found out you were on the way to give this talk in two minutes, what would you tell me? Write that down and you're golden.

Audience

Since PyCon attracts a wide range of people across a broad range of skill sets, you're going to end up with some attendees who are learning your topic for the first time, some who know about it, some who know it well, and sometimes even the people who created it. Who do you really want to reach the most? Who do you want to hear questions from at the end?

Python Level

Be as accurate as you can be. A lot of people come into PyCon looking for talks that will help them level up across the board, so you may get a beginner who is going to try and attend a bunch of intermediate talks and push themselves. If we're all fairly accurate, we can put information in that person's hands that is within reach to help them learn. That's why we do this whole thing in the first place.

Objectives

What do you want people to get excited about? Maybe you started off your proposal by saying "hey, I wish people knew X, Y, and Z". Boom. Maybe you started it off with a generic topic and formed a more specific proposal within it. Either way, think about what you'd want to talk about in the hallway after you give the talk. What do you want your attendees to tell their friends about?

Detailed Abstract

This text ends up on our website, clickable from the schedule and talk lists. You hooked 'em with your title, your description made it sound even better, and now it's time for business. This is where you dig in and explain what you're going to talk about for 30-45 minutes, with some amount of detail into the topic. Let readers know why you're giving this talk and what they'll get out of it. If the Description was what you'd say to me two minutes before the talk, this is what you'd tell me at dinner the night before.

This field is Markdown enabled so you can jazz it up with links and other formatting. Some people like to put their full outline in here, which is fine. If you do that, just note it in the Outline box.

Outline

This is only visible to reviewers. A lot of people like to put a Detailed Abstract in paragraph form and then break it down into an outline to show how the talk will be organized. The outline helps reviewers get a feel for how prepared you are on the topic and how organized your thoughts are for covering it in a live presentation. If you've thought about how much time you want to spend on each area, adding that is helpful as well.

Good luck with your proposals!