Developer interruptions, brogrammer culture, and the sorry state of the programmer toolchain

I don’t like using the phrase “our industry” to describe professional programming because it’s like referring to everyone who uses a desk at work as the “desk industry”. People use programming in a wide variety of fields and industries, and I’m hesitant to refer to everyone who uses this tool as “our industry”. That said, there does seem to be a cultural and social unification of programmers in people’s minds, which makes some sense: the tooling is common even when folks work in completely unrelated domains.

I’ve realized recently that our ideas about what programming is and our expectations about our tooling are inextricably tied to many of the social and cultural issues and problems facing “our industry”. I believe that the absurd notion of the “10x engineer”, sexism in tech, and brogrammer culture are all affected by our views about programming. It works the other way as well – these phenomena fuel our expectations and views about programming. Let me explain.

By now, we’ve all seen this image:

[Comic omitted: a developer painstakingly builds up a program’s state in their head, loses all of it to a single interruption, and must start over]
It depicts a developer who is knee-deep in understanding some complicated control flow of a program and who, upon being interrupted, loses all the gains made and is forced to start all over. Massive productivity loss, etc. This may seem pedantic, but let’s look closer at what’s actually happening here. The comic may intentionally exaggerate, but I think it’s safe to say that, at its core, the kind of code it depicts is common to many programming languages and frameworks. There is some massively complex state of the world that exists inside our program, and conditionals regularly mutate that state. A block of code in another file or class might execute and affect the results here. The main challenge, then, becomes keeping the call stack clear in our head, mentally substituting variables for actual values in order to understand what our program actually does when it’s executed with certain inputs.
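To make the contrast concrete, here’s a contrived Python sketch (the names and numbers are mine, not from the comic). The first version hides its meaning in mutable shared state, so reading it means replaying the whole history of the program in your head; the second is a pure function of its inputs, whose meaning sits entirely on the page.

```python
# Hard to resume after an interruption: the result depends on mutable
# state scattered across the program, so the reader must mentally
# replay every call that ever touched it.
state = {"total": 0, "discount_applied": False}

def add_item(price):
    state["total"] += price
    if state["total"] > 100 and not state["discount_applied"]:
        state["total"] *= 0.9          # mutates the world mid-flight
        state["discount_applied"] = True

# Easy to resume: a pure function of its inputs. There is no hidden
# world to reconstruct after an interruption.
def total_with_discount(prices):
    total = sum(prices)
    return total * 0.9 if total > 100 else total
```

Both compute the same thing, but only the second can be understood (and tested) in isolation.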

It’s not hard to see why the popular perception of a programmer is one of some freak genius, sitting by a computer, frantically typing while keeping a million things in their head and making magic happen. Indeed, this image is promulgated by many programmers. Many prefer to promote this image because it tends to garner respect and awe from the general public who view programming as some innate gift endowed to those who are weird enough and smart enough to be able to make sense of the utter chaos that is programming.

The type of programming depicted in that comic is the state of the art for most. Think about how many social and cultural issues this directly influences. If programming success depends largely on the ability to keep the entire universe in one’s head then it’s only natural that many people who would otherwise consider programming decide against it. “It’s not for me”, “You need to be a genius to program”, “I just don’t have that type of brain” are frequently heard.

I’ve even heard people insist that they have ADD because they’ve tried programming and haven’t been successful. Or how about sexism in tech and programming culture? Instead of a thoughtful craftsperson doing their work, this comic promotes a macho superhero who, fueled by Red Bull, is able to conquer the chaos of the computer.

Not to pick on anyone, but check out the WatchPeopleCode subreddit. It’s pretty clear that much of the entertainment value there comes from watching freaks type unintelligible things into their computers to make magic happen. I don’t think a calm, thoughtful developer working with principled tooling would have the same Hollywood value.

The cultural and social impact of this view of what the stuff of programming is can be detected in the sorry state of typical workplaces as well. Programmers regularly complain (boast?) about burning the midnight oil, marathon coding sessions, and the stress of frantically working to fix critical bugs in company software. Let’s do ourselves a favor and state plainly that the fact that this happens at all is absurd. Instead of romanticizing it, let’s insist that there’s a better way, and that our tooling and languages are fundamentally broken if they can’t support incremental, steady improvements by thoughtful programmers doing good work on reasonable schedules.

The fact of the matter is that the state of the art is utter crap. But it remains the status quo, in large part, because people aren’t willing to insist that we can do better. Why? Precisely because they want to maintain an air of genius and magician. The sad fact is that I can understand why many would-be newcomers feel that their lack of success in programming is due to ADD. They’re wrong to think that they suffer for want of attention. It’s simply that the programming they’re exposed to requires an absurd amount of mental gymnastics.

Closely tied to this is the “new framework overload/exhaustion” that regularly creeps up. Every day, a new JS or web MVC framework comes out. It gets posted to Hacker News or wherever else, and developers are quick to adopt it and build software on top of it. Look closely, though, and every once in a while someone will post an “Ask HN” or the like, seeking advice on how to deal with the feeling of constantly being behind, of simply not being able to keep up with all the shiny, new tech. Thankfully, older and wiser developers frequently offer their perspective, which is largely to resist the urge to adopt every new thing that comes out.

But why does this happen? Why is there a new “the way” or framework every other day? Because developers haven’t taken the time, and don’t want to take the time, to reassess fundamentally what programming ought to be and how we can make it better. Instead, the boredom and frustration that quickly follow adoption of a new framework or language result in yet another framework. But these new frameworks always miss the mark. Early MVC frameworks for the web were right, in my opinion, to say we could do better than the crazy Java apps people were writing websites with. But the antidote offered was always in the form of cutesy frameworks which optimize for only the first 5 minutes of use. That’s why we almost always see developer happiness with a framework approach zero, or worse, as time goes on. I’ll just briefly say that the problem with these frameworks is that they never offer principled concepts, composability, or any of the other things that are crucial to building critical software in a reasonable way. What almost always happens is that people ooh and aah over how easy it is to perform common tasks in new framework X, and they’re hooked. But the relief they’ve experienced in being freed from the shackles of the mammoth Java frameworks that came before is short-lived. As the software they’re building grows, modularity and composability become increasingly important, but at that point it’s too late. It becomes clear that all the candy offered to eager developers in the new framework was only helpful for the first 5 minutes, and that very little is there to support them as the software grows more massive.

I hope I’ve managed to express how many of the social and cultural issues related to programming are directly impacted by our views and expectations about what programming is. There’s a lot of work to be done and the first step is admitting that we can do better. As a first step, when someone posts those stats about the impact of interruptions on developer productivity, let’s carefully consider what the real problem may be before we retweet and further promote the sorry state of our tools and languages. When companies and organizations serve up shiny, new frameworks, let’s take a moment to consider whether anything fundamentally different is being offered and investigate just how well these frameworks will serve us on large, critical projects.

If there’s a takeaway from this discussion, it’s that we need to be acutely aware of how even the expectations we set forth in our discussions of technical issues directly impact the cultural issues surrounding our tooling and languages, and vice versa.

Learning how to make apps is not the same thing as learning to program

Learning how to develop iOS apps is not the same thing as learning to program. Sure, you’ll need some programming to develop an iOS app, but successfully creating an app involves integrating a bunch of things that have nothing to do with programming per se. Programming per se means taking some computational problem and formulating a program that solves it.
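In the sense used here, a program can be nothing more than a function from a problem’s inputs to its outputs. A small illustration (the problem and names are my own, chosen purely for example): counting the ways to make change for an amount, a complete and useful program with no platform or UI framework in sight.

```python
# Programming per se: take a computational problem ("in how many ways
# can we make `amount` cents from these coin denominations?") and
# formulate a program that solves it.
def ways_to_make_change(amount, coins):
    # ways[v] = number of ways to form the value v using the coins
    # considered so far; classic dynamic-programming formulation.
    ways = [1] + [0] * amount
    for coin in coins:
        for value in range(coin, amount + 1):
            ways[value] += ways[value - coin]
    return ways[amount]
```

Nothing about this requires iOS, a browser, or even a screen; it is pure programming, start to finish.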

“But doesn’t making anything useful always involve things beyond pure programming? You need a platform, some UI framework…”, you’ll say. Well, it depends on your definition of useful. Programs that accept some input and calculate a pure result are useful if that’s what you need. Furthermore, even if the “useful” you’re interested in involves an app or an iOS-like framework, the distinction is still significant for one reason: there is a point at which pure programming ends and the “iOS-ness” begins. Applications involve data and transformations on that data to calculate results and implement business logic. But if you’re new to programming and your starting point is making iOS apps, you conflate all the challenges and knowledge required. Understanding delegates, how to update the UI based on the model, and so on is obviously very useful for iOS development, but it’s important to realize that this is a highly specialized skill relevant only to iOS development (or similar frameworks, like Android’s). More importantly, it’s simply good design to separate concerns: write the pure program first, and only when that’s complete, figure out how to interact with or present that program in some UI, if that’s what you want. So, if the stated goal were to learn how the iOS MVC scheme or UIKit works, then learning these things would make sense. But if the goal is to learn programming, then it doesn’t make sense, and it only serves to frustrate and confuse.

People will argue that in order to motivate beginners, it’s important to have them working on something useful, something that excites them. This may be true, but it’s still important to compartmentalize knowledge and to be clear on precisely what one is trying to accomplish. Here’s an example: suppose a beginner wants to create a Tic-tac-toe game. Before she even thinks about an iOS app, she ought to first understand the problem domain and figure out how to properly model the game and what functions are required for a proper game of Tic-tac-toe to proceed (transforming the state of the board, calculating the winner, etc.). The point is that there’s a whole class of knowledge to be gained that has nothing to do with iOS whatsoever. To start by thinking about “the app” in this case would be like wanting to calculate the density of small business owners under 40 in New England and doing so by diving into JavaScript, D3.js, etc. without first figuring out what data you need, how to analyze that data, and so on. D3.js and the like are very useful once you have a program. The UI and presentation live at the edge of your “app”; the pure program has nothing to do with them.
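Here’s a sketch of what that pure Tic-tac-toe core might look like (my own illustration, with hypothetical names): the board is plain data, and the rules are functions on that data. No iOS type appears anywhere.

```python
# The pure core of Tic-tac-toe. A board is a 9-tuple of 'X', 'O', or
# None, indexed 0..8 left-to-right, top-to-bottom.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

EMPTY = (None,) * 9

def move(board, position, player):
    """Return a NEW board with `player` placed at `position`."""
    if board[position] is not None:
        raise ValueError("square already taken")
    return board[:position] + (player,) + board[position + 1:]

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None
```

Any UI — an iOS app, a web page, a terminal — then lives at the edge, calling `move` and `winner` and rendering the resulting board.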

If you want to learn programming, start with problems you want to solve and figure out how to compute them. Want to deliver some representation of the results, or offer interactivity with such a program? Learn iOS development. But you can’t get to the latter without the former, and if you conflate the two you’ll be spinning your wheels for a long time, learning a little bit about a bunch of different things that are conceptually unrelated.