
Things and stuff

Published on: May 5, 2016    Categories: JavaScript, Misc, Pedantics, Philosophy, Programming, Python

I’ve been awfully busy lately, but also thinking about a lot of stuff. Since microblogging is no longer really a thing I have access to (Pownce, I miss you), that means doing periodic brain dumps, yay!

Conferences and costs

Over the past few months there’s been a pretty significant conversation starting about tech conferences and “paying” speakers (where “paying” typically means at least providing a free ticket to anyone giving a talk). Which is a topic full of complicated thoughts for me.

On the one hand, the obvious argument: there’s significant overlap between people who would give interesting talks and people for whom the costs of attending would be burdensome. There are a lot of people who do cool stuff but are doing it at companies that don’t care enough to cover conference costs, or who are self-employed, or who face costs beyond just ticket price and airfare/hotel. And this argues against the “everybody pays their own way” approach of quite a few conferences, since that approach will naturally end up excluding some people whose talks we’d want to hear.

On the other hand, despite significant strides in recent years to expand the circles of people who give conference talks, the tech conference circuit is still kind of clique-y. OK, I take that back: it’s a lot clique-y. Within any particular niche of tech I guarantee you can find a roster of familiar faces who show up again and again on the schedules of conferences in that niche. I should know, because in the Python/Django world I’m one of those people. Throwing in freebie conference attendance worries me a bit, since it would be heavily rewarding people who aren’t necessarily doing anything better or more interesting but who just happened to be in the right place(/body type/skin color/etc.) at the right time to get established as regular speakers, and thus reinforcing that group as the regular speakers.

And on the other other hand, most of the time I’m someone who’s lucky enough to have an employer who’ll contribute to my conference costs, or to be otherwise in a situation where going to a conference isn’t a burden. And I’m painfully aware of how many good, smart people out there don’t have that luck and need the help, and I’m bothered by the idea that if I speak at a conference I’m not just taking up a slot on the schedule — I might be taking money out of the conference’s financial-aid pool which could’ve sent someone else. If I cared enough about a conference to submit a talk, I was probably going to go anyway!

Making the freebie opt-in is one way to try to solve this, but probably not the right way, since I think there are plenty of people who’d accept a free ticket/etc. if told it was automatic and all the speakers get it, but wouldn’t (for a variety of reasons) ever ask for one, even if the asking simply involved checking a box on a form. But making it opt-out also has some issues, since it’s hard to express the opt-out in a way that doesn’t feel like it’s guilting the speaker into giving up their freebie for someone more deserving.

I don’t know how to solve this. I do think conferences need to move toward making it easier for speakers to attend, because that’s how we get a bunch of interesting new speakers telling us about interesting new things we didn’t know about before. And I think that’s going to have to involve some type of financial help for speakers who otherwise wouldn’t be able to make it. It’s probably going to take some experimenting and careful framing of how it works, and one or two failures and some “let’s just hold our noses and accept this tradeoff” moments.

I’m glad I don’t run conferences, by the way, and I respect the hell out of people who do and who have to worry about all that and more, taken to the nth power.

“Lowering the bar”

I keep seeing people recycle talking points about how diversity efforts in tech hiring inevitably require “lowering the bar” and accepting less-qualified candidates. And yet every real attempt I’ve seen at explaining why that is seems to resort to the same sort of technically-true-but-wow-is-it-misleading characterizations.

Take coding-bootcamp graduates, for example. My experience is that on average recent bootcamp graduates are at least as good as recent college graduates, and often they’re rather better. My hypothesis for this comes from observing that the bootcamp folks tend to be a bit more mature (often coming to coding a bit later in life), more motivated/dedicated (switching tracks like that post-college typically requires some personal/career sacrifices; it’s not as easy as just going with the flow of a college major), and so on, displaying desirable qualities at higher rates than a randomly-selected CS graduate.

Oh, and bootcamps tend to have significantly better gender balance and at least somewhat-better racial balance than university CS programs and existing industry. Quite a few explicitly target populations who aren’t what we think of as the stereotypical university CS student.

So putting effort into reaching out to bootcamp graduates is an easy way to gain a little diversity boost in your candidate pool, and as far as I can tell has no negative effect on the technical or job qualities of the candidates you get (in fact, usually the opposite). But if you decide, as a company, to do that, someone can spin it as “we’re going to hire people with no CS degree and no job experience in the industry, in the name of diversity”, which is technically true, but awfully misleading.

In the same way that some things sound too good to be true, standard-lowering arguments, if they’re backed up by claims of actual practices at all, tend to sound too bad to be true. The few times I’ve looked into them with any kind of effort I’ve found situations like the one described above, which causes me to just be inherently mistrustful of such arguments and of the people making them (in fairness, I have other reasons for being mistrustful of those arguments and people, but this one’s easy to make independently of my own values and ethical system).

How to install Python packages

OK, how’s that for a change of topic?

Recently I’ve been seeing a push to stop recommending

pip install foo

as a way to install the foo package, and toward

python -m pip install foo

Sort of. And actually it’s not just pip; I’ve seen suggestions of doing this with other command-line Python tools as well.

The full thing that’s happening here is that if, say, you have Python 2.7, 3.3 and 3.5 on your system, you’d be expected to type

python2.7 -m pip install foo

to install foo for Python 2.7, or

python3.3 -m pip install foo

to install it for Python 3.3, and so on. The idea is that a lot of Python developers these days have lots of versions/instances of Python hanging around, either standalone or in virtualenvs, and since a bare pip runs against whichever Python it happens to be bound to (not necessarily the one python currently points at), it’s easy to accidentally install a package for the wrong Python instance.
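The reasoning is visible from sys.executable: running pip as a module of a specific interpreter pins the install to exactly that interpreter. A minimal sketch (the package name foo here is just a placeholder):

```python
import sys

# sys.executable is the absolute path of the interpreter running this
# script. "python -m pip" runs pip under exactly this interpreter, so
# there is no ambiguity about which Python the package gets installed for.
cmd = [sys.executable, "-m", "pip", "install", "foo"]  # "foo" is a placeholder
print(" ".join(cmd))
```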

I argued against this on the distutils mailing list when it came up late last year, and I’m pretty sure I’m still against it. For one thing, virtualenvs (which are probably close to as common as multiple standalone Python installs) kick us right back into the problem this pattern is supposed to solve: people will get used to typing “python -m pip install foo” and trust that their active virtualenv is pointing to the Python they want. Which is not always going to be the case; I rely heavily on virtualenv changing my prompt, and I still sometimes fail to look at the prompt and do something in the wrong virtualenv. So the only solution that would work is requiring the explicit full path to the Python interpreter; that is, inspecting the invocation to make sure it began with something that matches sys.executable. Which is… bad.
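For what it’s worth, a script can at least detect whether it’s running inside a virtual environment at all. A minimal sketch, assuming Python 3.3+ venv semantics (the older virtualenv tool signals this differently, via sys.real_prefix):

```python
import sys

# In a venv-created environment (Python 3.3+), sys.prefix points inside
# the environment while sys.base_prefix points at the base installation.
# The older virtualenv tool sets sys.real_prefix instead.
in_env = (
    getattr(sys, "base_prefix", sys.prefix) != sys.prefix
    or hasattr(sys, "real_prefix")
)
print("interpreter:", sys.executable)
print("inside a virtual environment:", in_env)
```

This only tells you that you’re in some environment, though, not that it’s the one you meant to be in, which is exactly the failure mode described above.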

For local unit-testing of my projects, I keep around virtualenvs with each supported version of Django and each version of Python those versions run on. They’re creatively named, like “django19py34” for Django 1.9 on Python 3.4. To upgrade that to this week’s release of Django 1.9.6, with no risk of ambiguity about which Python I’m using, would require this unwieldy command:

/Users/james/.pyenv/versions/django19py34/bin/python -m pip install Django==1.9.6
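One way to tame that, sketched here under the assumption of a pyenv-style layout with each environment’s interpreter at ~/.pyenv/versions/&lt;env&gt;/bin/python (the second environment name is hypothetical), is to generate the fully-qualified command for each environment:

```python
import os

# Build the unambiguous per-virtualenv upgrade command for each test
# environment. Layout assumed here is pyenv-style; "django19py35" is a
# hypothetical second environment added for illustration.
ENVS = ["django19py34", "django19py35"]
BASE = os.path.expanduser("~/.pyenv/versions")

commands = [
    [os.path.join(BASE, env, "bin", "python"),
     "-m", "pip", "install", "Django==1.9.6"]
    for env in ENVS
]
for cmd in commands:
    print(" ".join(cmd))
```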

I’m all for dealing with common problems that make a tool appear to fail in a way that isn’t immediately obvious. But I don’t think this particular approach is going to work.


Modern JavaScript

I’ve had some opportunities to work with modern JS tooling. I’ve now actually used npm and worked on things that run on node.js, and fiddled a bit with React.

I know it’s popular to poke fun at the whole left-pad thing and the huge number of micro-dependencies involved in doing any kind of non-trivial, or even trivial, JS development work these days. I may have made a joke or two about it myself. And there are some serious problems for JavaScript to overcome as it joins the community of server-side languages.

Anyway, my experiences with JavaScript haven’t been all that bad. My last serious dance with JS was almost a decade ago, when both the language and the surrounding tooling were ever so much worse, and before that I cut my teeth back when we called it “DHTML” and just copy/pasted anything that was proven to work most of the time, because back then nobody could write JavaScript that would reliably work in more than one browser, at least not on the first try. Now when I write JS it not only works the first time, but I can understand what it’s doing, it’s not littered with workarounds for weird browser issues, and when my build breaks it’s because the linter doesn’t like my style. Which is both a good thing, because it means we’ve gotten to the point where we’re using linters and style guides, and a bad thing, because my style is old school and I was copy/pasting image-rollover scripts before the kid who wrote that linter was born, dangit.

PS: Hiring still sucks

Last October I ranted a bit about how tech hiring processes are awful. They still are, and articles about how awful they are seem to be becoming weekly if not daily occurrences.

Why they’re awful is well-trod ground. If you don’t agree that they’re awful, nothing I can say at this point will convince you; there’s an unfortunate deeply-ingrained streak of “well, if you didn’t like it you just must not be good enough to work in the industry anyway”, and if you think that about me, well, that’s your call, I guess, but I’d like to think I can demonstrate beyond doubt my qualifications to work as a coder. Especially as a coder working with Python and Django.

Anyway, since then I’ve been thinking about how to do it better. And fortunately I work for a company which is also interested in and actively working on how to do it better, and I’ve been enjoying the opportunity that provides to work on the problem. I still don’t have a perfect solution, but I have some ideas I think should become more widespread.

For one thing, the “prove you can code” phone-screen questions can probably see a lot less use. I understand why they became popular, but in the classic overreaction pattern of human behavior, they became far more popular than they needed to be. If you’re talking to someone based on a vouched-for referral from a person you already trust (i.e., a current employee whose judgment you already rely on to determine this kind of thing), or if you can reliably establish someone’s reputation/skills from a five-minute glance around the internet at their publicly-available work, or if you’re specifically targeting someone for recruitment because they have skills or experience you want, you can probably skip the “can you code” screening. It’s just insulting at that point, and verges on the ludicrous when you’re talking to someone you already know has the skills you care about. If you absolutely must fizzbuzz people (and I’d wager most companies don’t have to do it nearly as much as they believe they do), fizzbuzz them as a last resort when all other avenues have failed. And don’t worry that it’s going to crash your pipeline by overwhelming your screeners with extra vetting work; you’re just shifting that work to before the phone call instead of having them do it during the call.

For another, I think whiteboard coding should go the way of the firing squad: never a default, and probably only available by choice of the person who will have to face it, and then maybe only in Utah. I realize this is probably an awkward metaphor, but whiteboard coding interviews and firing squads actually aren’t that bad a comparison. Though that’s not entirely fair to the firing squad: at least that gets things over quickly and probably mostly painlessly.

To be serious, though: whiteboard coding is just a bad idea. It’s horribly contrived, puts way too much pressure and anxiety on the candidate (you wouldn’t ever do that to someone who actually worked for you!), and all it really tells you is how well someone does at whiteboard coding. Which is a skill people can practice and pick up from books on the subject, and turns out to be completely orthogonal to at-a-computer-collaborating-with-coworkers coding (and I don’t buy the “well, at least that proves they cared enough to prep”; to get to the point of people doing that “prep” for you, you have to be the sort of company that cargo-cults your interview process and openly disregards the time of not just the candidate but also your own employees who are running the interviews, which means we can just as equally say it proves you’re not a company anyone should want to touch with a ten-foot linked list). If your interview process is going to involve a whiteboard, let it be for an actual collaborative session of sketching out ideas and taking notes, rather than writing code.

One good alternative is to give a “homework”-style problem the candidate can solve at their leisure and talk about during the interview. This gives a much more natural assessment of the candidate’s ability to get stuff done on a deadline and communicate usefully about it (especially if you use a problem that’s actually related to what you’d want them to do for you, rather than a stock algorithm question — all of those are published and explained online these days, anyway, just as all the old Microsoft interview riddles are available). Another option is to do pair programming or code review or other technical-but-collaborative exercises.

Above all, the process should be as humane and respectful as possible. Candidates applying to work for your company shouldn’t be treated as questionable foreign interlopers, to be grilled at the border of your company’s domain and perhaps grudgingly allowed in; that sends all sorts of terrible messages about the work environment anyway, and makes people not want to get in (or attracts very much the wrong sort of people). Rather, candidates who’ve made it to your on-site interview should be treated as professional colleagues you haven’t had a chance to work with yet, unless and until they give you reason to think otherwise. If you think that sounds silly and touchy-feely, ask around among experienced/senior-level tech people. They absolutely will choose companies to work with (or not) based on how the company treats people, will steer their friends toward or away from companies based on this, and anyone who’s been around the block a few times absolutely can read that stuff from an interview process.