Growth in Diversity

Tarek Ziadé recently announced the schedule for the Python track at FOSDEM 2013, where he’s one of the organizers. They have some interesting talks lined up by some excellent speakers, so if you’re going to FOSDEM, be sure to check them out.

The bigger part of this post is about the complete lack of women speakers on that schedule -- namely, about how I edited his post before publishing it on the PyCon blog.

The last sentence of his post gets at the answer of why I did what I did. With PyCon, we've diversified our speaker list through a lot of effort and a great set of allies.

There’s a reason we went from one woman on the PyCon 2011 schedule to six in 2012. There’s also a reason we went from six to at least 22 for 2013. We didn’t do it through words, but through actions.

We experienced that growth thanks in part to the proliferation of women’s technology groups and the relationships we’ve built with them. By reaching out and involving these groups, we’ve seen not only a rise in women on stage, but a noticeable increase in women in the audience.

Rather than saying “everyone is welcome, specifically women”, we’re trying to achieve an environment that is fair and equal. To do that, PyCon has been consistent in targeting an audience that includes everyone. I went back through many pages of posts on the PyCon blog, to early 2011, and we’ve stuck to this stance.

Even though 22 speakers sounds so much better than 6, read that again, only with more detail: 22 women on a schedule of 114 talks and 32 tutorials.

That really sucks. We’re not going to see that number increase to 35, 40, 45, or hopefully higher in 2014 by mentioning “women, too”. We’re going to see that growth by ensuring women feel like first class citizens in our community. That goes for within PyCon, Python, all of technology, and on and on.

On one hand, it’s great that we’ve seen this growth in PyCon. I’m happy for what the community has done to welcome this growth, and I’m happy for the people who helped achieve this growth. Most of all, I’m happy for the individuals who are a part of this growth.

On the other hand, what the hell is wrong with us that we can only get 22 women on the schedule? The answer has roots in a lot of things that happen far away from conferences and much earlier in the pipeline, but we can and will do better.

We will continue to push for diversity by engaging groups that work with underrepresented areas of our community. As these partnerships blossom and new groups sprout, we will continue to engage them in hopes of realizing further growth. All the while, we will continue to market our conference to everyone.

I think I speak for the organizers in stating that while we’re happy with the growth trends we’re seeing, we’re not going to be satisfied until we’ve reached and maintain equality. Our community deserves it.

One thing I will apologize for is that I did not notify Tarek of the change to his post. I did it on my own because I had that change made to my posts years ago, and I’ve learned what I think are better ways to reach and incorporate women. I should have communicated my thoughts and worked with Tarek, but I did not. For that, I’m sorry.

The Year of the Snake

If you know your Chinese Zodiac calendar like I do, you know that 2013 is the year of the snake. While they don't specify the type of snake, I think they mean Python.

2012 was a pretty good year around the Python community. It was fun while it lasted, but 2013, the year of the snake, is going to be even better.

Python 3 continues to grow, conferences continue to grow, and diversity continues to grow. These three things are topics I hope we all have a chance to be involved in for 2013.

Python 3

Python 3 adoption is moving along swiftly, and I'm looking forward to another year of increased usage, contribution, and conversation. You don't have to look too far to see that Python 3 is growing. The website formerly known as the "Python 3 Wall of Shame" recently became "Python 3 Wall of Superpowers" as the projects it tracks hit 50% with support for 3.x.

The "Who's on Python 3?" page uses additional knowledge to show projects with support under way, and it claims that 74% of the top 50 downloaded packages have 3.x support. When you include the in-progress projects, e.g., Django, that number becomes 78%.

Georg Brandl's tracking of Python 3 packages on PyPI shows strong growth, as 2011 ended with around 600 packages showing support for Python 3, and 2012 ended around 1,400. While that only puts us around 6% of all packages, it's an imperfect metric. Many projects don't even specify that they support Python 2, and known Python 3 projects don't always specify their support either. It's still nice to see that support has at least doubled. (PSA: Please accurately set the trove classifiers on your PyPI packages!)
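For anyone unsure what that PSA means in practice: PyPI reads version support straight from the trove classifiers you declare in setup.py. A minimal sketch, with a made-up package name:

```python
# Hypothetical setup.py fragment: these classifier strings are what
# the PyPI trackers mentioned above scrape to count Python 3 support.
classifiers = [
    "Programming Language :: Python :: 2",
    "Programming Language :: Python :: 2.7",
    "Programming Language :: Python :: 3",
    "Programming Language :: Python :: 3.3",
]

# from distutils.core import setup
# setup(name="example-package", version="1.0", classifiers=classifiers)
```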


The following uses parsed Windows download counts. These are probably the only reliable numbers we can get, since most platforms receive Python in some other way, e.g., package managers.

Downloads for all Python versions saw a boost in 2012 to just under 2,000,000 downloads per month (we hovered around 1.7M/month in years ending 2009-2011). December closed out the year as the single largest month ever for Python 3 downloads at 666,884 for Python 3.3. Those 3.3 downloads contributed to a total of 850,399 downloads across all 3.x versions, the highest monthly total to date. In the same period 2.7 saw 903,605 downloads, the lowest count since February, adding up to 1.2M for all 2.x versions.
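Using only the figures above, a quick back-of-the-envelope calculation puts the 3.x share of December's downloads at roughly 41%:

```python
# December 2012 Windows download counts quoted above.
py3_downloads = 850399    # all 3.x versions
py2_downloads = 1200000   # all 2.x versions (approximate)

share = py3_downloads / (py3_downloads + py2_downloads)
print("3.x share: {:.0%}".format(share))  # 3.x share: 41%
```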

We saw immediate growth at the initial release of 3.0 back in December 2008, then a settling shortly after, but it looks like we're back into a growth period thanks to a few big months following the 3.3 release in September.

3.x downloads in 2012 were up around 15% compared to 2011, and I think the success of Python 3.3 will continue. The outlook for Python 3.4 is even better than that of 3.3, and we're still early in the cycle. Even though the final release won't come until early 2014, the release will be feature complete by year's end, per PEP 429.

Overall, I like where we're heading. There are several big projects with progress on Python 3 support, such as Django and Twisted. On the PSF board, we recently funded two projects, Kivy and NLTK, to complete their porting to Python 3. Even my day job at Canonical is going to get back into Python 3, as I'll need to complete the port of our SSO client which was started in the fall.


Conferences

Another year means another set of conferences, and 2012 saw a lot of growth here. Not only were there several first-time conferences, but several established conferences saw attendance increases.

The increase in regional conferences really is a great thing, as they get more people involved in sharing and education, they're generally more affordable than the bigger events, and they expose more people to the fun of a Python conference. I know of five new events that sprouted in 2012.

I hope to see more of these regional conferences in 2013. I'm going to try and make it to at least one of the smaller conferences this year - maybe PyOhio.

As for attendance growth, it's not something most conferences end up mentioning, but I'm aware of it through my work with the Python Software Foundation's board of directors. In 2012 we sponsored 18 conferences, and we figure out our grant amounts based on attendance estimates. We work with organizers that we trust, and most of them mentioned increased attendance estimates, often making their funding requests after pre-sales, so they've had data to support the requests.

The one conference I know for sure had attendance growth was PyCon US, which actually overshot the estimates and opened the conference to 2,317 attendees, up from 1,380 in 2011. In 2013 we're capping attendance at 2,500, and we're expecting another sellout for the last year in Santa Clara before heading to Montreal.

I'm really looking forward to expansion in the regional conference scene, as I think it'll bring Python to a lot more people. When you consider the download rates from earlier and the increasing attendance at these events, there are a lot of people to be reached in 2013.


Diversity

I certainly can't quantify this, but I've really felt the increasing presence of the various groups in our community that target and involve women. PyLadies and CodeChix saw expansion in 2012, and LadyCoders was created in 2012. Women Who Code joined the aforementioned groups in sponsoring PyCon, and they held over 100 meetings throughout the year. These groups and others were involved in a number of workshops, meetups, sprints, and other efforts to involve women in computing. This is awesome.

At PyCon 2012, several women's groups had booths in the expo hall, and at least one of them hosted a party on one of the evenings. Since PyCon doesn't track attendee genders, there is again no way to quantify this, but in my talks with some of the women at the booths as well as other attendees, PyCon had noticeably more women in attendance than in past years. This is awesome.

Several of these groups held meetups to brainstorm ideas for conference proposals, in an effort to help their members get presentations into conferences like PyCon. PyCon 2011 had one woman on the schedule of tutorials and talks. PyCon 2012 had six women on the schedule. PyCon 2013 has 22. This is awesome.

These outreach groups really are working, and I hope to see continued growth, because 16.5% of the schedule being women is way too low. It's a great effort on their part -- in fact, I couldn't be any happier with these groups for what they've done to diversify our community -- but we need more. However, what I think we need comes more from everyone else. The women are doing their part.

Whether it's grant programs or the codes of conduct that many events are now implementing, creating a more welcoming environment for everyone will enable more of this growth that the groups are building. From conferences to user group meetings to mailing lists, I hope everyone can think about what we can do to involve more women and tip the scales toward equality.

Overall, I'm really excited about this year. I think it'll be a big year for Python 3, we're going to see some great conferences, and hopefully we're able to get more people involved in Python activities.

I'm also looking forward to putting in more development work on CPython, and I'm looking forward to another great year of working with the PSF. I'm looking forward to more heavy lifting in the gym, doing a Tough Mudder, and having another successful season umpiring college baseball.

A How-To on Setting Back Diversity, or, “we hired women to bring you beer”

What a horrible month it has been for diversity in technology. March started off with an awful article on brogrammers, and today the Boston API Jam, hosted by Sqoot of New York, took their turn at ruining things. If you thought the "brogrammer" article and everyone involved with it was beyond stupid, check out what the API Jam was promoting in the details of their event.

As ReadWriteWeb took note of, they weren't short of apologies, but it's too little, too late. Rather than tweet a weak apology after the fact, try thinking ahead of time on promoting an environment of equality. I would certainly hope they learn from their mistake and begin to take an active stance towards diversity, but given some of their responses on Twitter, I'm doubting that will happen.

As word got around, many of the early responses to the event's original description, which included "Need another beer? Let one of our friendly (female) event staff get that for you," indicated they didn't really care. Responding that the text was just a little humor shows them missing the point early on. Plus, it wasn't even funny. Shortly after that, they responded with "boom" to back up a commenter who marginalized the female role in a technology event as "a perk". A few hours went by, and then started the stream of "we're sorry" messages to seemingly anyone who mentioned them in relation to this blunder.

The message links to an "apology" letter which includes:

While we thought this was a fun, harmless comment poking fun at the fact that hack-a-thons are typically male-dominated, others were offended. That was not our intention and thus we changed it.

I'm not sure why poking fun at the men who attend these events needs to objectify women, but maybe that's what makes it "fun" for them? The worst part of this is the "others were offended" piece. They still don't acknowledge that it's not just that their words are wrong, but that their views are damaging to the community. The message effectively says, "We think objectifying women is fun and harmless, but some of you were offended." It's an unacceptable position to take, and the great news is that they've lost sponsorship because of it.

Apigee pulled their sponsorship because the API Jam's message wasn't consistent with their values. Heroku did the same. CloudMine went on to write a post of their own about not only their withdrawal of sponsorship, but their feelings on sexism in tech. Good on all of these organizations for removing their support of this event.

The next time you plan an event like this or do any sort of outreach, please think with diversity in mind. It's mind-boggling how far behind the times people in technology can be. There shouldn't have to be a women's tech suffrage, but as long as events like the API Jam promote the idea that women aren't first-class citizens at their event, diversity in technology will keep taking one step forward, two steps back.

minidumper - Python crash dumps on Windows

If you're writing software on Windows, you've likely come across minidumps. They're a great help when your project encounters a crashing scenario, as they record varying levels of information to help you reproduce the problem.

The main product I work on at my day job, a server written in C++, has had minidump functionality since the beginning. We keep PDBs around for our releases, then when customers encounter a crash, we grab the minidump, match it up with the binaries and PDBs, then try to figure out what the scenario was. I think that's fairly standard operating procedure, and it tends to work alright. Release crash dumps are obviously less helpful than debug dumps, but you can still get enough out of them to get started in the right direction. So while one part of my job has that, the other part - the Python part - has had me wishing for it. So I wrote it.

The extension modules I maintain internally for our server's APIs occasionally come crashing down during our test automation. That's fairly alarming at first since the tests just drop out and you don't get much of an indication of why. Was it the extension? The underlying C++ API? Python itself? The unittest logs are all we have to go off of, so then it's a matter of piecing together what was happening at the time, then either manually re-running it from the REPL and/or attaching the Visual Studio debugger to catch the problem.

In comes minidumper. By importing minidumper and enabling it, you can receive crash dump files whenever your Python process goes down. It's there for you.

    import minidumper
    minidumper.enable()


Now if you do some crazy stuff and cause a crash in your extension code...

    int crash(void)
    {
        int x = 1;
        int b = x / 0;  /* integer division by zero brings the process down */
        return b;
    }

...you'll get a crash dump that will tell you exactly what just happened. In my case, I got example_20110929-071529.mdmp. Now if you open that up in Visual Studio, ideally the one that Python was compiled with, you'll get a look into what happened once you hit F5 (or Debug > Start Debugging).

The first thing you'll see is a popup telling you what the problem was and where it occurred, then Visual Studio will show you exactly where in the code the issue lies. As we all know, division by zero is a no-no, and it crashed. If you hit the break button, you can poke around in a ton of information that was gathered from your crashed process. Depending on what value you gave to the type parameter of minidumper.enable(type=...), which defaults to MiniDumpNormal and has a full list of options on MSDN, you'll have different amounts of information.
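Those type values correspond to the MINIDUMP_TYPE bit flags from dbghelp.h, which can be OR'ed together for richer dumps. A sketch of a few of the flag values, taken from the Windows SDK headers (the names mirror the SDK's; minidumper exposes its own corresponding constants):

```python
# A few MINIDUMP_TYPE flag values from dbghelp.h. They're bit flags,
# so richer dump types are built by OR'ing them together.
MiniDumpNormal         = 0x00000000
MiniDumpWithDataSegs   = 0x00000001
MiniDumpWithFullMemory = 0x00000002
MiniDumpWithHandleData = 0x00000004

# e.g. a dump that includes data segments and handle information:
flags = MiniDumpWithDataSegs | MiniDumpWithHandleData
print(hex(flags))  # 0x5
```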

You can walk around the call stack and see what functions were called with what values, and from there you can inspect variables within a function by hovering over them with the cursor. The Debug > Windows menu contains a whole bunch of other pieces of information, including memory, disassembly, value watching, and more.

As far as examples and tests go, I only have some of the basics down, although I plan on bulking those areas up and coming up with more useful and interesting code to prove this extension's worth. I just threw the source up, but I'm going to wait on getting it on PyPI until I figure out the best way to organize and distribute it.

If you're looking for more info on minidumps, a few sites were helpful to me, as well as the various MSDN documentation.

The following setup steps are what I do to get started, using the CPython default branch, aka, CPython 3.3. Also note that I'm using a debug-built Python, and telling the minidumper extension to do a debug build as it's what I usually use at work, as well as when I'm working on CPython.

  1. hg clone minidumper-dev
  2. C:\python-dev\cpython-main\PCbuild\python_d.exe setup.py build --debug install
  3. C:\python-dev\cpython-main\PCbuild\python_d.exe -m tests

Running the tests will build a tester extension, which contains two crashing functions. Right now, the few tests just call the crash functions with different minidumper.enable settings in order to make sure the right dumps are being created in the right places.

Hope it helps.

Note: Until I fix this, the crash windows asking you to debug or close the program will stay around until you click something. Ideally I'll be able to add functionality to temporarily disable Windows Error Reporting for the module; running the CPython test suite on Windows currently requires that same kind of manual intervention, as minidumper does.

PyCon 2011 CPython Sprint Newcomers

Following up two tutorial and summit days, then three days of the conference, the sprints got off to a great start on Sunday evening. I'm back at home now but wanted to put together a summary of the first two days.

A lot of great projects got up on stage to pitch their sprint ideas, including Brett Cannon speaking for CPython, letting people know where the sprint would be, mentioning the "dev-in-a-box" CDs, and encouraging people to come out and hack. Within 15 minutes of the end of announcements, we had 7 first-time sprinters eager to dive in and get going right away.

The new developer guide was instrumental in getting everyone through the initial setup. The plan was to get a Mercurial checkout as a starting point, as one of the suggested sprint targets was increasing test coverage. By 6:30 on the first day, we were up to 9 people fully up and running, poring over the coverage results (which were handily pre-generated on the "dev-in-a-box" CD), and diving into code. Here's what everyone worked on:
  • Alicia Arlen started tackling the expansion of string tests and got a patch written and checked in within the first day.
  • Scott Wilson noticed some failing urllib tests on his Mac and got to work on fixing them. After that he started on increasing urllib test coverage.
  • Denver Coneybeare mentioned a dbm patch he made a few days before the sprint, then got it reviewed and checked in. He followed that up with test coverage patches to fileinput and _dummy_thread.
  • Jeff Ramnani came up with several documentation and code changes, along with some tracker triage to get a few older issues closed.
  • Michael Henry spent some time on the email package, including some documentation updates and a port of test_email_codecs to Python 3. He's also working on timeit test coverage.
  • Natalia Bidart noticed several test failures after the initial build and test, then wrote up a few patches to make sure her configuration passes all of the tests. She's also working on logging test coverage.
  • Matias Bordese read the dev guide pretty closely and patched a step that didn't jibe with his system. He's currently expanding coverage of the dis module.
  • Robbie Clemons started by reviewing a few issues, then took cgitb up to 75% test coverage by starting a test suite for it.
  • Evan Dandrea came up with patches to posixpath, shutil, and tarfile for test coverage and a few bugs.
  • Jonathan Hartley looked into a unittest issue and wrote up a fix plus tests that got checked in pretty quickly. He's also working on coverage.
  • Piotr Kasprzyk used a tool he made to find typos in his research work and applied it to the Python documentation, coming up with several patches and many more on the way.
  • Tim Lesher spent time investigating a pydoc issue about named tuples that was being discussed on the mailing list.
  • Brandon Craig Rhodes started by running coverage and ended up diving into the order of imports on interpreter startup to fix coverage results before going further with them. He took the new results and is working on copy test coverage.
Many thanks to those listed and everyone else who came out to sprint. Hopefully you learned something new and had a fun time contributing -- the effort is definitely appreciated and we look forward to working with you in the future!

FileSystemWatcher on Python 2

Alright, alright, you guys win. Enough people emailed me to say they would use a 2.x version of watcher from my previous post, so here it is: version 0.2 now supports Python 2. The changes are pretty simple. The "biggest" part of this happened in changeset 96f3f9e4511c, where I handle a few 2 and 3 specific parts split by ifdefs. It's a few sections of handling Unicode/strings/bytes, and then a small change for 2.x to receive the action number as an int rather than a long. I think I did all of this correctly since it works, but that's a poor definition of "correctly" and my Unicode knowledge is definitely lacking. I haven't done a ton of testing on it, but it seems to work alright in my simple test running between 2.7 and 3.1. If you have any issues with it, feel free to submit them or email me.

The five year project: .NET FileSystemWatcher clone for Python

In the time I've been using Python, no project has started and stopped, and started and stopped again, more than my goal of writing a file system monitor. Sure, it's a small and simple project in the grand scheme of things that could be accomplished over that time, but I like to finish what I start. The idea originally came from my father, also a Python user, suggesting something to work on, likely to help me learn but it'd also help him out. Years ago he wanted a multi-tabbed text editor with tail -f functionality. I think I was reading through a wxPython book at the time and figured, "sure, I can learn this and make that tool." Started it up, had the shell of a simple GUI written, then came time to get the file system updates. I probably got distracted by something, got hooked on something else, then totally forgot about the whole thing. For whatever reason, this happened every few months.

About three years ago I tried to rejuvenate the whole thing and found Tim Golden's great "How do I..." page (pretty sure Dad sent this to me before). He has an example, three of them to be precise, covering exactly what I wanted to do: watch a directory for changes using Mark Hammond's pywin32. Awesome. I got something coded up pretty quickly and took the library in a different direction, using it at work to write a Windows service that would monitor our servers and look for crash dumps and email the team. It was super simple and paid off big time, but I kinda just whipped it together and it was poorly designed.

Fast forward to a few months ago. I was bored and looking for something fun to work on -- ah, that file system watcher I've been half-assing for years. I thought to myself, "now that I actually know wtf I'm doing, I should do that, and I'm sure my Dad would get a kick out of it." Somewhere in the middle of all of this I was writing C# and used the System.IO.FileSystemWatcher API, which was really nice.
I've always wanted the same functionality in Python and liked what they had, so it would be cool to do what they did. A few blogs around the web claimed the Win32 ReadDirectoryChangesW API was behind the scenes of FileSystemWatcher. True or not, it made sense, and I was familiar with it from the Tim Golden examples and my watcher service. I've been writing and reading a lot of C code lately, so I started hacking. After reading up on a few things, I came up with a much better C equivalent of what I had in that Windows service. It's multi-threaded, uses IO Completion Ports, and seemed to work pretty well. Pass in a directory and a callable, call the start method, then you'll get callbacks for creating files, renaming files, etc. Sweet, we're on the way.

After fiddling around with that a bit, I figured it was good enough to build on. I started writing some tests and had simple things like the following working.

[code lang="python"]
>>> import watcher
>>> import os
>>> callback = lambda action, path: print(action, path)
>>> w = watcher.Watcher(os.getcwd(), callback)
>>> w.flags = watcher.FILE_NOTIFY_CHANGE_FILE_NAME
>>> w.start()
# Then I opened up vim and created a file called "hurf.durf"
1 .hurf.durf.swp
1 hurf.durf
2 .hurf.durf.swp
[/code]

That was cool and all, but I want to be able to follow one specific file, or files that match a certain pattern. I also want to be able to set callbacks for specific actions. Hmm, FileSystemWatcher can do that. Maybe I'll just build out a clone and see how it works. One of the first things I wanted to figure out was how to emulate the callback attaching and detaching, like on Changed events. I needed a container that supports += and -=, which none of the builtin containers do. Easy enough: just inherit from one and provide the __iadd__ and __isub__ operators. Before you get outraged: I know that's "unpythonic", but I'm going for a clone here. Filling in the rest was pretty easy.
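That += / -= container can be sketched in a few lines. This is just an illustration of the idea, not the project's actual code:

```python
class EventHandlerList(list):
    """A list supporting C#-style `handlers += fn` and `handlers -= fn`."""

    def __iadd__(self, handler):
        self.append(handler)
        return self

    def __isub__(self, handler):
        self.remove(handler)
        return self

def on_created(event):
    print("created:", event)

handlers = EventHandlerList()
handlers += on_created    # attach, like C#'s Created += callback
handlers -= on_created    # detach
```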
There's a bunch of properties in FileSystemWatcher that map to the attributes and methods of the underlying Watcher. For example, FileSystemWatcher.NotifyFilter sets Watcher.flags, which is an OR'ed group of NotifyFilters, which are constants exposed by watcher from Win32.

The weirdest part of the whole thing is that starting and stopping FileSystemWatcher is done by setting EnableRaisingEvents to True or False. It's not a method called start or stop like in the underlying Watcher (or anything else that needs to start and stop). It felt wrong perpetuating this weirdness, and again I know it's "unpythonic", but I'm going for a clone here.

As for translating Watcher callbacks into FileSystemWatcher callbacks that work with all of the fancy filtering, it's just a simple queue, a regex, and a big if/elif block. Watcher calls its callback, which puts the action and relative path into the queue. FileSystemWatcher pulls it out, sees if it matches the filter, then figures out from the action which callback to call. If it's a rename, do a special dance; otherwise create an update object, fill in the details, then start calling back to the user.

[code lang="python"]
>>> from FileSystemWatcher import FileSystemWatcher, NotifyFilters
>>> import os
>>> callback = lambda event: print(event.ChangeType, event.Name)
>>> fsw = FileSystemWatcher(os.getcwd())
>>> fsw.Created += callback
>>> fsw.NotifyFilter = NotifyFilters.FileName
>>> fsw.EnableRaisingEvents = True
>>> # Opened up Explorer and right clicked to create a new file
1 New Text Document.txt
[/code]

There you have it. It took 235 lines of pure Python for FileSystemWatcher and 466 lines of C for watcher for this five year project to be completed. If any future employers are reading this, I'm capable of writing more than 140 lines of code per year to complete a five year project, I swear. The project is now on PyPI under the name watcher, complete with a few binary installers.
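The queue-and-filter dispatch described above can be sketched roughly like this (the action numbers are the Win32 FILE_ACTION_* values; the class and names are illustrative rather than the project's actual code, and fnmatch stands in for the regex for brevity):

```python
import fnmatch

# Win32 FILE_ACTION_* values delivered by ReadDirectoryChangesW.
CREATED, DELETED, CHANGED, RENAMED_OLD, RENAMED_NEW = 1, 2, 3, 4, 5

class Dispatcher:
    def __init__(self, pattern="*"):
        self.pattern = pattern
        self.Created = []   # callback lists, one per event kind
        self.Deleted = []
        self.Changed = []

    def handle(self, action, path):
        """Called with each (action, path) pair pulled off the queue."""
        if not fnmatch.fnmatch(path, self.pattern):
            return  # filtered out
        if action == CREATED:
            callbacks = self.Created
        elif action == DELETED:
            callbacks = self.Deleted
        elif action == CHANGED:
            callbacks = self.Changed
        else:
            callbacks = []  # renames arrive as an old/new pair, handled specially
        for cb in callbacks:
            cb(path)

d = Dispatcher(pattern="*.txt")
seen = []
d.Created.append(seen.append)
d.handle(CREATED, "notes.txt")   # matches the filter
d.handle(CREATED, "image.png")   # filtered out
print(seen)  # ['notes.txt']
```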
It's 3.x only because 2.x is dead, but I'll do a backport if people are interested (email me: first name at). The project is up on bitbucket. It's not really complete but it works pretty well for most usages. I know of a bunch of bugs that I'll eventually fix, but feel free to report more or even fix some of them. Thanks for the idea, Dad.

Why you should go to PyCon

PyCon 2011 in Atlanta, Georgia, like pretty much all things Python, is awesome. If you've gone before, you already knew this, and hopefully you're joining us again this year. If you haven't gone before, you're about to find out why you should go.

Tutorials

Starting Wednesday March 9th, the PyCon festivities kick off with two days of two-a-day tutorial sessions, providing you with almost 12 hours of classroom-like interactive education from some of the leading trainers in the biz. Is Django deployment not one of your strong points? Django BDFL Jacob Kaplan-Moss is running a tutorial on it. Interested in stepping your game up with some advanced Python techniques? Raymond Hettinger knows a little bit about that. He's also one of those guys you should just follow around -- you will learn something. Zed Shaw's "Learn Python The Hard Way" will be making an appearance in Atlanta. I've also heard Zed will be available throughout the conference to help you along the way. Python 3 will get some stage time as well through two tutorials Dave Beazley is running. He's doing a repeat of last year's Python 3 I/O tutorial, and Brian Jones will join him for a session about cooking up some Python 3. Tasty.

The Conference

This is the main thing, the heart, the reason people travel from around the world. Friday March 11th kicks off the three-day conference. Coming off of a record year of talk submissions, there's a great group of talks lined up, and I think there's something for everyone. There are multi-speaker panels like the Python VM talk to give you a "state of the VM" rundown of what they are up to, where they are going, etc. There's a talk about optimal aircraft engine tuning. I'm serious, they use Python for that. Is your boss not letting you build out your ideas in Python? Hear from experienced Python users their stories of getting the language into their workplace - from non-profits to schools to big-time megacorps - they've done it.
Want to watch a guy do downright diabolical things with a computer from 1979 whose I/O system is an RCA audio jack? Yeah, that'll happen. Python 3 + zeromq + 1979 = "whoa, dude". Also, yay cloud. Speaking of zeromq, Zed Shaw is talking about it. Ya like MongoDB? Got it covered. CouchDB as well. If you don't test, you should. Period. Tox isn't a bad way to do it either. Unit tests are good too. Do you do any of that mobile web stuff kids are into these days? Test it. You ever see those massive telescopes that can see water on Mars or whatever those geeks are up to at NASA? Maciej from the PyPy team does that kinda stuff and he runs it through PyPy. Dave Beazley will not talk about the GIL this time.

Almost better than the conference itself -- the hallway track. So, you know we have all of these scheduled talks, and they are great. They really are. However, sometimes you just can't beat standing in the hallway chatting with your fellow Python users. How often do you get to talk to Alex Martelli? Probably not often. How many times a year do you chat with Michael Foord? Not enough. That dude has an awesome beard and he's kinda smart. Get involved in the conversations you see going on -- you'll probably hear about some cool stuff, find out where people are going to dinner, and you'll meet some new contacts. Use your network. Find jobs. Find business partners. Find friends. It's all there.

The Sprints

After the conference is over, Monday March 14th is when some of the best stuff happens. We didn't all fly to Atlanta with our shiny laptops just to talk about code -- we're also doing some work. Through Thursday, any projects are welcome to hang out and sprint on whatever topic they want. I'll be working on the core sprint like last year. PyPy will probably be there doing some crazy things to make themselves even faster. Most of the web frameworks get together as well. The sprint listings have some information, and as groups announce their presence at the sprints, I'll update this.
Feel free to join an existing group or start your own -- the more the merrier. If you are holding a sprint at PyCon, let the PSF Sprints group know. We're still working out how we're going to run this, but drop us a line and we'll keep you in mind.

Overall

PyCon really is a great time and I've been excited about it for a while now. The tutorials are awesome. The conference is awesome. The sprints are awesome. The people are awesome. The dinners are awesome. It's just a fun time, and if it sounds like a good time to you, now is a good time to buy tickets. Looking to cut costs? Check the room sharing wiki. Wondering about transportation? Check out the venue page. If you fly into Hartsfield-Jackson airport, it's about a 30 minute train ride. See you there.

ps. I disabled comments because this is an awful WordPress blog. I don't know anything about the internet.

Speeding up shutil.copytree with multiprocessing

New to Python 3.2's implementation of shutil.copytree is the copy_function parameter, added in issue #1540112. This new parameter allows you to specify a function to be applied to each file in the tree, defaulting to shutil.copy2. I was thinking about a problem we have at work where our continuous integration server needs to set up a build environment with clean copies of our dependencies. To do this, we do a lot of shutil.copytree'ing, getting files from internal teams and some from projects like Boost and Xerces. It takes a long ass time to copy all of that stuff. Really long. Fortunately my work computer has 16 cores, so I thought, why not make the copytree function use more of my machine and go way faster? Sounds like a job for multiprocessing.

Knowing I can use this new copy_function parameter to copytree, and knowing that multiprocessing.Pool is super easy to use, I put them together.

[code lang="python"]
import multiprocessing
import sys
from queue import Empty
from shutil import copy2


def _copy_worker(copy_fn, src, dst):
    copy_fn(src, dst)


class FastCopier(multiprocessing.Process):
    def __init__(self, procs=None, cli=False, copy_fn=copy2):
        """procs is the number of worker processes to use for the pool.

        cli is True when this is being used on the command line and
        wants the cool progress updates.

        copy_fn is the function to use to carry out the actual copy."""
        multiprocessing.Process.__init__(self)
        self.procs = procs if procs else multiprocessing.cpu_count()
        self.copy_fn = copy_fn
        self.callback = self._copy_done if cli else None
        self._queue = multiprocessing.Queue()
        self._event = multiprocessing.Event()
        self._event.set()
        self._count = 0

    def _copy_done(self, *args):
        """Called when _copy_worker completes if we're running as a
        command line application. Writes the current number of files
        copied."""
        self._count += 1
        sys.stdout.write("Copied %d files\r" % self._count)
        sys.stdout.flush()

    def run(self):
        pool = multiprocessing.Pool(processes=self.procs)
        try:
            while self._event.is_set():
                try:
                    src, dst = self._queue.get_nowait()
                except Empty:
                    continue
                pool.apply_async(_copy_worker, (self.copy_fn, src, dst),
                                 callback=self.callback)
            # We get kicked out of the loop once we've exited the external
            # copy function, e.g., shutil.copytree.
            pool.close()
        except KeyboardInterrupt:
            print("Interrupted")
            pool.terminate()
        finally:
            pool.join()

    def stop(self):
        self._event.clear()
        self._queue.close()

    def copy(self, src, dest):
        """Used as the copy_function parameter to shutil.copytree"""
        # Push onto the queue and let the pool figure out who does the work.
        self._queue.put_nowait((src, dest))
[/code]

What we have here is a class that uses a multiprocessing.Queue and spreads out copy jobs using a multiprocessing.Pool. The class has a copy method which simply puts a source and destination pair into the queue; one of the many workers will then actually do the copy. The _copy_worker function at the very top is the target, which simply executes the copy2 call (or whatever copy variant you actually want to use).

Putting this to use is pretty easy. Just create a FastCopier, then pass the copy method of FastCopier into shutil.copytree. As copytree works its way through your tree, it will call FastCopier.copy, which pushes into the queue, and the pool splits up the work.

[code lang="python"]
from shutil import copytree


def fastcopytree(src, dest, procs=None, cli=False):
    """Copy `src` to `dest` using `procs` worker processes, defaulting
    to the number of processors on the machine. `cli` is True when this
    function is being called from a command line application.
    """
    fc = FastCopier(procs, cli)
    fc.start()
    try:
        # Pass in our version of "copy", which just feeds into the pool.
        copytree(src, dest, copy_function=fc.copy)
    finally:
        fc.stop()
        fc.join()
[/code]

It's pretty fast.
As an example, I copied my py3k checkout folder, which has around 17,000 files and weighs around 1.7 GB. The baseline of using a single process does the copy in 458.958 seconds (on a crappy 7200 RPM drive). Using four processes completes the work in 120.243 seconds, and eight takes 128.336 seconds. Using the default of all cores, 16 in my case, takes 217.557 seconds, so you can see it drops off after the 4-8 range, but it's still 2x faster than the baseline. My guess is the copy is disk-bound, so past a handful of workers the extra processes just end up fighting over the drive. I haven't done much investigation since I'm pretty happy with a nearly 4x performance boost, but I'd like to do better, so maybe I'll post a followup. Why I think this is so cool: I'm sure there may be better and faster ways of solving this problem using many of the finely crafted modules out there, but this is available out of the box. This comes for free and it's available right now. Sure, this isn't the killer feature of Python 3.2, but I think it showcases the extensibility and the power of Python and the standard library. After toying with it for a while, I put the initial version of my findings here and called it copymachine. It's just a standalone script right now and has no tests (I know, I know), but I'll fiddle with it and you are more than welcome to as well. (disabled comments, sorry, spam got to be too much)
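If you just want to see the copy_function hook itself, without the multiprocessing machinery, here's a minimal self-contained sketch. The counting_copy helper and the file names are made up for illustration; the point is that copytree calls your function once per file:

```python
import os
import shutil
import tempfile

# Build a small throwaway source tree to copy.
src = tempfile.mkdtemp()
os.makedirs(os.path.join(src, "sub"))
for name in ("a.txt", os.path.join("sub", "b.txt")):
    with open(os.path.join(src, name), "w") as f:
        f.write("hello")

copied = []

def counting_copy(s, d):
    """Stand-in copy function: record the file, then do a normal copy."""
    copied.append(s)
    shutil.copy2(s, d)

# The destination must not already exist.
dst = os.path.join(tempfile.mkdtemp(), "tree")
shutil.copytree(src, dst, copy_function=counting_copy)

print(len(copied))  # 2 -- copytree handed both files to our hook
```

FastCopier's copy method slots into that same copy_function parameter; the only difference is that instead of copying inline, it hands the pair off to a worker pool.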

Becoming a contributor

What prompted this?

After reading a number of blog posts (here, here, and here) and thinking about my personal experiences with recently becoming a committer, I decided that it might benefit the Python community to see someone contribute right before their very eyes. First I had to ask myself: How would that benefit the community?

My first hope was that people would see what it takes to go from start to finish with a simple contribution. Reading about it is one thing, but seeing it in action can be even better. Another hope was that people might see a flaw or snag in the process and tell me what’s wrong and why. I was also hoping people who have contributed or attempted to contribute could talk about what worked or didn’t work during their experience.

So, with a ChiPy meeting on the horizon, I proposed a talk to cover some of what those previous posts were about, and at the same time dig into the code and make a contribution before the group, knowing that the talk would end up recorded and on the internet for all to laugh at -- er, see. As a “tl;dr” video summary: some people had contributed, most hadn’t, some wanted to work on documentation, one commented on sometimes-rude responses, most clapped, but it seemed like all understood what was going on and I think everyone there left feeling capable. Even my Mom. Now for the video…

Video primer: this is just over 50 minutes long. The bug fixing part goes from approximately 07:00-26:00.

Questions? Comments?

Semi-organized thoughts on beginning as a contributor to Python.

This is more for the “I have some free time and I’d like to help make Python better” crowd than the “I hate the GIL and I want to remove it now” crowd, but it might be helpful for both.

To me, really getting into a project is a mix of things. Some challenges, some learning. Some fun, some enjoyment. There are many things it takes, but I think one of the most important is success. In order to progress in a project, whether it’s your next great Django website or it’s painting your kitchen, you have to have some amount of success to keep on going. Not everyone is going to be successful in their first attempts at Python contributions, but I guess we have to define “successful”.

For the purposes of this article, I’m defining success as having your work committed to the Python source repository, to be included in a release. With that defined, I think there are a few things that can ease the introduction period, leading to more immediate success, which leads to more learning and challenges, which leads to more fun and enjoyment. If you keep going, you might even get commit access.

  1. Start with documentation fixes. Read the Documentation Development page to get a feel for what needs to be done, and then check out the list of open documentation issues. Fixes to the documentation are the easiest to get traction on and to have success with. Since documentation fixes are usually less brain-intensive than C or Python code changes, they are easier for everyone involved to work with and quickly act on.

    My first contribution to Python came in the form of a one-line documentation fix to the C-API, and the fix was committed very quickly after I submitted it. It felt good to start off 1-for-1 in terms of scoring my Python development work.
  2. Stick to modules and packages that you use the most. If you are a big user of ConfigParser, take a look at its source in Lib/, then search Roundup for some of the open issues. Take a look at the tests in Lib/test/ and see if anything is missing, and add it accordingly. Just searching for any open issue will turn up a lot of issues in modules you either aren’t yet interested in or haven’t used, which are harder to work with as a new contributor. Also check out the pages on the Python Developer's Guide.

    One of the first standard library fixes I was successful with was adding context manager support to the zipfile.ZipFile class in 2.7/3.2. I was writing some code at work and just assumed ZipFile worked as a context manager; when I went to run it, it didn’t work. I dug into the code, saw that __enter__ and __exit__ weren’t implemented, so I added them, added tests, and shortly after that it was committed.
  3. Read other people’s patches. As we all (hopefully) know, code reviews are a necessary step in software development. Even though contributing to Python isn’t typically done “on the clock” at our nice cushy day jobs, we still have to follow the processes like we (hopefully) do at work. For every few issues you fix, try to look at the list of issues needing review. Even if you don’t end up having any comments, you’ll get a feel for what people are doing and how they are doing it. If you do have comments, post away. The patch submitter(s) will appreciate another set of eyes on their work.

    You’ll see more issues this way, and you’ll get a flavor of what types of issues bring out which people and what ends up being necessary. As you look around, you’ll see a guy named Mark Dickinson doing all kinds of math stuff. You’ll see R. David Murray working on email related bugs. As you become familiar with more types of issues, you’ll know who you can add to the nosy list, or the list of people who might be interested in the issue.
  4. Ask questions. We’ve all been through it, and we all know that bugs aren’t going to ask the question themselves. However, you have to know how to ask the right question in order to get the right response. I can’t really tell you the answer to that, but just make sure you’ve read the issue over, read the code and/or documentation over, and have given it your best shot. Present everything you have done, everything you know, and see if anyone can fill in the gaps.

  5. Know that if you are a Python user, you can be a Python contributor. Python’s development team is made up of people from all walks of life, from many countries, with many different backgrounds. I believe the majority of contributors are full-time software developers, myself included. Some are employed by various types of corporations, some are in consulting. There are also a number of college/university students and at least one contributor who is still in high school (and he’s a release manager at that, also contributing to PyPy). It takes all kinds of people to have a successful project like Python. Windows, Mac, and Linux users. C programmers, Python programmers, Sphinx documenters. Old people, young people. Tall people, short people.
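As a footnote to point 2 above, the zipfile context manager support I mentioned boils down to being able to write code like this (the archive path and contents here are just made up for illustration):

```python
import os
import tempfile
import zipfile

path = os.path.join(tempfile.mkdtemp(), "demo.zip")

# Since 2.7/3.2, ZipFile implements __enter__ and __exit__, so the
# archive is closed automatically, even if an exception is raised.
with zipfile.ZipFile(path, "w") as zf:
    zf.writestr("hello.txt", "hello, sprints")

with zipfile.ZipFile(path) as zf:
    data = zf.read("hello.txt")

print(data.decode())  # hello, sprints
```

Before the change, you had to pair every open with a try/finally and an explicit close -- exactly the kind of small, scratch-your-own-itch fix that makes a good first contribution.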

I’ve found it fairly easy to get into Python development by using the above approaches. I started small and eased my way into bigger issues. Is this the “one true way” to do it? Absolutely not. Do what you want to do at the pace you want to do it, for the reasons you want to do it.

Being a baseball player, I always think back to how it only takes success 30% of the time to be a good hitter (again, with a varying definition of success). Sometimes you’ll submit a patch and strike out. Sometimes you’ll hit a home run. Overall, you’ll have fun doing it.