The Four Quality Katas of a Code Craftsman

I recently had the opportunity at a conference to grab ramen with Lance Gleason of Polyglot. Of the many things we discussed over three hours, it came down to this:

“A really good developer implicitly cares about the quality of their work.”

Forget what you’ve previously read, the blah blah, the pious statements, the bullshit for a moment. Consider these four reasons, practices if you will, that explain the statement above.

1. Write code that makes sense next week

I can’t reason about things that don’t fit in my head. So please write code that fits in my head, even if it already fits in yours. I say that knowing how big my own head gets sometimes, and maybe that applies to you too. When we do this, code is easy to reason about AND collaborate on, and it turns out that’s vitally important to our craft.

Also keep in mind that the amount of work you have to do next week will make you forget what you did this week. I love to forget about things; it helps me prioritize life. I wish the same for you. All code is an expression of intent, and when it contains bugs, we have to fix them. That means what we do today has to be simple enough that we waste no time fixing it when we revisit it in the future.

2. Write code that contributes to a purposeful existence

Life quickly becomes meaningless when you’re working on meaningless stuff. Figure out what matters most in your life, make that your goal, and write code that contributes to that goal. Don’t become a programmer for the money; you’ll quickly learn to hate the programmers around you, because we can be prickly unless you understand why we are the way we are.

Personal scrutiny is a blessing and a curse. Self-doubt, imposter syndrome, and uncertainty are all signs that you still have a soul, so don’t worry if they show up while you write code. When they appear on your journey, or when someone points something out, you have no obligation to accept it verbatim; but you should receive it just as you would receive a compliment: consider it, and if it makes sense, work it into your code.

Be ethical enough to say “I will not build this” every time your conscience tells you that your product isn’t going in the right direction. Every time. Why? Because there is no soul in software other than what we as humans imbue into the process. Businesses are paper. You are a craftsman. Be that.

3. Write code that entices others to collaborate

Who likes to toil away in obscurity on something no one else cares about? Exactly, so find projects that others also care deeply about, enough to contribute to, then go at it hard. Said the opposite way, you know you’re working on something valuable when others are so interested in it that they also want to invest their time in it.

Don’t write negative comments on someone else’s work. Just don’t. Establish a dialog that helps you understand why they did what they did. Then, if you still don’t agree, be clear about the obstacles you see in the current approach and find out which ones are imperative to resolve. Be okay with things that aren’t perfect so long as there’s a way to revisit them later.

You know you’re doing it right when you leave behind no code that others are afraid to question. This is a characteristic of mature engineering: self-evident latitude in your work for future ideas to improve what you have crafted.

4. Write code that anticipates change

Everything changes. Flexible code enables us to correct mistakes when we realize them in the future. Smaller code is easier to understand and fix.

Be clear about your surface area. The more you know about the limits of a piece of code, the easier it is to build code that uses it properly. If it’s a function or method, good naming and sufficient typing clarify its purpose and limits. If it’s a REST API, a machine-readable descriptor with examples helps people integrate it into other code more accurately.
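
To make that concrete, here is a minimal sketch (the function and its names are hypothetical, not from any particular project) of how a name plus type hints can declare both purpose and limits:

```python
from decimal import Decimal

def invoice_total(line_amounts: list[Decimal], tax_rate: Decimal) -> Decimal:
    """Sum the line amounts and apply a single tax rate.

    The name and signature spell out what goes in and what comes out;
    the guard below makes one of the function's limits explicit.
    """
    if tax_rate < 0:
        raise ValueError("tax_rate must be non-negative")
    subtotal = sum(line_amounts, Decimal("0"))
    return subtotal * (Decimal("1") + tax_rate)
```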

Constraints and dependencies also communicate the flexibility of a system. If you know that the code you’re writing shouldn’t be used in specific circumstances, document those circumstances in a way that is easy for others to consume. Isolate dependencies when they are insufficiently abstract, such that you can’t write code that ‘fits in your head’. Vet external software that you don’t own (yes, even open source libraries) to ensure it meets licensing, security, and performance requirements.
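
One way to isolate such a dependency is to wrap it behind a small interface you control. A minimal Python sketch, assuming a sprawling third-party payment SDK (the protocol, adapter, and vendor calls below are all invented for illustration):

```python
from typing import Any, Protocol

class PaymentGateway(Protocol):
    """The narrow surface the rest of our code is allowed to depend on."""
    def charge(self, account_id: str, cents: int) -> bool: ...

class VendorGatewayAdapter:
    """Wraps the hypothetical vendor SDK behind the small protocol above,
    so the rest of the codebase never imports the vendor directly."""

    def __init__(self, vendor_client: Any) -> None:
        self._client = vendor_client

    def charge(self, account_id: str, cents: int) -> bool:
        # 'create_charge' stands in for whatever the real SDK actually exposes.
        result = self._client.create_charge(account=account_id, amount=cents)
        return result.get("status") == "succeeded"
```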

Don’t just talk about quality, practice it.

Quality isn’t something you ‘build in’ afterwards; it’s the set of desirable characteristics your work exhibits. There are many ways to accomplish this, but all of them take practice and focus, trial and error. Practice teaches us to reason about why we do what we do, not just what we’re doing. True software craftsmanship produces sensible, purposeful, collaborative, flexible code.

DevOps, Burnout, and the Search for the Holy Grail

I’ll be speaking at APIdays Melbourne about the technological equivalent of the holy grail, continuous deployment, and why maybe we should rethink certain dynamics coming from the push to “do DevOps”, which, like many good ideas, is marred by poor implementations and shoddy management.

2/2 Update: Things come up, shit happens, and I am incredibly bummed not to be able to be part of the crew at APIdays Melbourne this time around. However, priorities are priorities, and I’m not going to regret missing the 18 hour flight there and back.

Grateful for the opportunity, and I hope this doesn’t burn bridges; suffice it to say I’ll be there in spirit. Thinking of shipping a TelePresence bot and asking @switzerly to set it up for me. 🙂

I’ll still be looking to find a more local forum for this talk, hopefully at APIstrat.

Of course, I’ll be showing how to inject comprehensive testing into a pipeline of API design, build, deployment, and monitoring tools, but I’m a people person more than anything else, so germane to my presentation will be the topic of how “doing DevOps” affects us at a personal level too.
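
As a rough sketch of the kind of check I mean (the endpoint, fields, and staging URL here are invented; a real pipeline would derive them from the API’s own contract), this is a test a CI stage could run against a freshly deployed API:

```python
import requests  # assumes the 'requests' library is available in the pipeline image

BASE_URL = "https://staging.example.com"  # hypothetical deployment target

def test_orders_contract():
    """Fail the pipeline if the deployed API drifts from its promised shape."""
    response = requests.get(f"{BASE_URL}/orders", timeout=5)
    assert response.status_code == 200

    orders = response.json()
    assert isinstance(orders, list)
    for order in orders:
        # Check the fields the contract promises, not just that 'something' came back.
        assert {"id", "status", "total"} <= order.keys()
```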

Humans are tool builders, not the other way around.

Why are we talking about DevOps?

I love the ideas coming from that space. Any time people work closer, tighter, better together, I’m down. But revenue doesn’t care about you or me, and the impetus behind most practical implementations of continuous delivery is indeed revenue: overblown expectations from the business that treat IT as the main blocker, rather than proper decision making.

Often the result of forcing unprepared teams to “do DevOps”: #burnout

In November at APIstrat Austin I stood up and said that teams are more important to get right than the software they produce, though they’re both very important. People produce software. If the people are buggy (i.e. bad team dynamics), you will see that in their product.

At the company kick-off last week, I sat in the front row as a panel of exec-level customers validated that the immense pressure to release software faster than ever before is real, is connected directly to revenue (loss not just gain), and is incredibly challenging due to people problems more than just technological ones.

Business leaders looking to implement new paradigms on technical teams will also find it surprisingly hard to “do DevOps” if there are cultural or personal issues lying around like land mines. I know this first-hand from my last job.

I’m a Developer, but My Cape is at the Dry Cleaners

15 years professionally and counting. Right now, I see that code written in an IDE isn’t the only important factor in bringing excellent products to market. Code of conduct in teams, the responsibilities a business has to its employees, and how we treat each other along the way to building world-class software are just as important for a sustainable business model.

Sorry startups who “do DevOps” because it’s cool, call me in 6 months if you still exist and want to talk for real. I would *love* that as a podcast interview episode.

For now, like an underwhelming version of Clark Kent, I temporarily hang up my [developer] superhero cape, put on thick-rimmed glasses, and work a job in the big metropolis during the day. I am educating myself and rounding out my ideas on what it really takes to be in cutting edge technology. I surround myself with very driven, passionate, fun, and smart people to get better…at everything I can.

I am expanding my understanding of how to bring about great technology beyond what an IDE can provide me. I work with people, code, and businesses.

How do you test the Internet of Things?

If we think traditional software is hard, just wait until all the ugly details of the physical world start to pollute our perfect digital platforms.

What is the IoT?

The Internet of Things (IoT) is a global network of digital devices that exchange data with each other and cloud systems. I’m not Wikipedia, and I’m not a history book, so I’ll just skip past some things in this definitions section.

Where is the IoT?

It’s everywhere, not just in high-tech houses. Internet providers handing out new cable modems that act as their own WiFi hotspots have created a new “backbone” for these devices to connect over, in almost every urban neighborhood now.

Enter the Mind of an IoT Tester

How far back should we go? How long do you have? I’ll keep it short: the simpler the system, the less there is to test. Now ponder the staggering complexity of the low-cost Raspberry Pi. Multiply that by the number of humans on Earth who like to tinker, educated or not, throw in some APIs and ubiquitous internet access for fun, and now we have a landscape, a view of the magnitude of possibility that the IoT represents. It’s a huge amount of worry for me personally.

Compositionality as a Design Constraint

Good designers will often purposely put constraints in their own way to act as a sort of scaffolding for their traversal of a problem space. Only three colors, no synthetic materials, exactly 12 kilos, can I use it without tutorials, fewer materials. Sometimes the unyielding makes you yield in places you wouldn’t otherwise, flex muscles you normally don’t, reach farther.

IoT puts compositionality right up in our faces, just like APIs, but with hardware and in ways that are both very physical and personal. It forces us to consider how things will be combined in the wild. For testing, this is the nightmare scenario.

Dr. Strangetest, or How I Learned to Stop Worrying and Accept the IoT

The only way out of this conundrum is in the design. You need to design things to very discrete specifications and target very focused scenarios. That moves the matter of quality up a bit into the space of orchestration testing, which by definition is scenario-based. Lots of little things are easy to prove working independently of each other, but once you do that, the next challenges lie in the realm of how you intend to use them together. Therein lie both the known and the unknown, the business cases and the business risks.
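
A hedged sketch of what I mean by scenario-based orchestration testing, using in-memory stand-ins for the devices (the thermostat, hub, and “away mode” scenario here are all invented for illustration):

```python
class ThermostatStub:
    """Stand-in for a real thermostat; a real test would drive the device or its API."""
    def __init__(self) -> None:
        self.setpoint_c = 20.0

    def set_target(self, celsius: float) -> None:
        self.setpoint_c = celsius


class HubStub:
    """Stand-in for the hub that orchestrates devices in response to a cloud command."""
    def __init__(self, thermostat: ThermostatStub) -> None:
        self.thermostat = thermostat

    def handle_command(self, command: dict) -> None:
        if command.get("action") == "away_mode":
            self.thermostat.set_target(16.0)


def test_away_mode_scenario():
    """Prove the combination behaves as intended, not just each piece on its own."""
    thermostat = ThermostatStub()
    hub = HubStub(thermostat)
    hub.handle_command({"action": "away_mode"})
    assert thermostat.setpoint_c == 16.0
```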

If you code or build, find someone else to test it too

As a developer, I can always pick up a device I just flashed with my new code, try it out, and prove that it works. Sort of. It sounds quick, but rarely is. There’s lots of plugging and unplugging, uploading, waiting, debugging, and fiddling with things to get them to just work. I get sick of it all; I just want things to work. And when they finally *do* work, I move on quickly.

If I’m the one building something to work a certain way, I have a sort of programming myopia, where all I ever want is for it to work. Confirmation bias.

What do experts say?

I’m re-reading Code Complete by Steve McConnell, written more than 20 years ago now, eons in the digital age. Section 22.1:

“Testing requires you to assume that you’ll find errors in your code. If you assume you won’t, you probably won’t.”

“You must hope to find errors in your code. Such hope might feel like an unnatural act, but you should hope that it’s you who find the errors and not someone else.”

True that, for code, for IoT devices, and for life.

Don’t Insult Technical Professionals

Some vendors look at analyst reports on API testing and all they see is dollar signs. Yes, API testing and virtualization have blown up over the past 5 years, and that’s why some companies who were first to the game have the lead. Lead position comes from sweat and tears; that’s how leaders catch the analysts’ attention in the first place. Those who created the API testing industry, earned the community’s and analysts’ attention, and built the most comprehensive products are the ones who win. Every time.

There are snakes in the grass no matter what field you’re in

I recently had the opportunity to informally socialize with a number of “competitors”, who as people are great people to eat tacos and burn airport wait time with. Unfortunately, their scrappy position in the market pushes them to do things that you can only expect from lawyers and loan sharks. They say they’re about one thing in person, but their press releases and website copy betray their willingness to lie, cheat, and deceive actual people trying to get real things done.

In other words, some vendors proselytize about “API testing” without solid product to back up their claims.

I don’t like lying, and neither do you

One of my current job responsibilities is to make sure that the story my employer tells around its products accurately portrays the capabilities of those products, because if it doesn’t, real people (i.e. developers, testers, engineers, “implementers”) will find out quickly and not only fail to become customers, but in the worst cases tell others that the story is not true. Real people doing real things is my litmus test, not analysts, not some theoretical BS meter.

Speaking of BS meters, a somewhat recent report lumped API “testing” together with “virtualization” to produce a pie chart that disproportionately compares vendors’ market share, both by combining two semi-related topics and by measuring share via revenue reported by the vendors themselves. When analysts ask for things like revenue in a particular field, they generally don’t leave the answer solely up to the vendor; they do some basic research of their own to confirm that the reported revenue accurately reflects the product(s) directly related to the subject of the report. After pondering this report for months, I’m not entirely sure that the combination of the “testing” and “virtualization” markets is anything but a blatant buy-off by one or two of the vendors involved to fake dominance in areas where there is none. Money, meet influence.

I can’t prove it, but I can easily prove when you’ve left a rotting fish in the back seat of my car simply by smelling it.

What this means for API testing

It means watch out for BS. Watch really closely. The way that some companies use “API testing” (especially in Google Ads) is not grounded in their actual product capabilities. What they mean by “testing” is not what you know is necessary to ship great software. Every time I see those kinds of vendors say “we do API testing”, which is an insult to actual API testing, I seriously worry that they’re selling developers the illusion of having sufficient testing over their APIs when in reality it’s not even close.
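
To be concrete about the difference (the service, endpoint, and payload below are hypothetical): a “test” that only checks for a 200 tells you almost nothing, while a real test exercises the contract and the failure paths.

```python
import requests  # assumes the 'requests' library is installed

API = "https://api.example.com"  # hypothetical service under test

def shallow_check():
    # The kind of "testing" some tools stop at: the API answered, nothing more.
    assert requests.get(f"{API}/users/42", timeout=5).status_code == 200

def real_test():
    # What shipping great software actually requires: verify the data and the error behavior.
    ok = requests.get(f"{API}/users/42", timeout=5)
    assert ok.status_code == 200
    user = ok.json()
    assert user["id"] == 42 and "email" in user

    missing = requests.get(f"{API}/users/does-not-exist", timeout=5)
    assert missing.status_code == 404
```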

Why your API matters to me

On the off-chance that I actually use it, I want your API to have been tested more thoroughly than what a developer using a half-assed “testing” tool from a fledgling vendor can cover. I want you to write solid code, prove that it’s solid, and present me with a solid solution to my problem. I also want you to have fun doing that.

The API vendor ecosystem is not what it seems from the outside. If you have questions, I have honesty. You can’t say that about too many other players. Let’s talk if you need an accurate read on an analyst report or vendor statement.


Quality Means Not Accepting Crap

Software. Hardware. Things. Opinions. Places. Excuses. Ideas.

Anyone can produce a cheap “affordable” solution. But details matter. How many cheap plastic things have broken in your hands unexpectedly, and were entirely disappointing in that moment?

My AirBnB is not that. I knew “quality” when I saw it. You can tell someone lived in this thing and made it convenient for them, then handed it off to you. That’s quality, making something that meets your own standards, then giving it to someone else.

[Photo: WP_20151117_005]

I travel a lot, enough to know what matters on a trip. Leg room on the plane. Working wifi. Power plugs, everywhere. Politeness. Clean bathrooms. Details matter.

Conversely, there’s the $300/night hotel room with plugs too far from the bed, lamp toggle buttons that take so much effort to push that you knock the lamp over, and light switches harder to find than Carmen Sandiego; the annoyances all add up too. The lights in this camper that I’m staying in are easy to use and don’t make me cuss.

[Photo: WP_20151117_010]

Same with software, details matter.

Quality software comes from people using their own product, living in it, fixing its flaws, and asking others how their experience with it is. In the tech industry, we call it “dogfooding” your own product. Believe me, it works.

People intrinsically know “quality” when they experience it. They pick up a phone, it’s heavy and solid, they think “that’s quality”. Conversely, they close a car door and it rattles or sounds hollow, they think “that’s cheap”. Even the sounds shipped with your mobile phone help to engineer your perception of the quality of the device.

Quality is in the details.

Oh, and BTW, I’d also rather put a constraint on myself not to over-drink and stumble into a $450/night on-premise room way too late at night to wake up on time the next morning. I have business to attend to. Knowing when to quit starts with looking at a ridiculous estimate and just saying no:

[Screenshot: airbnb-1-fail]

So, even at only 3 nights, this would have cost $1,350 just for a room I would be spending around 4-6 hours a night in, without any of the charms of an outdoor shower and condensation on the windows each morning. The AirBnB alternative, for all three nights, came to just 60% of the cost of JUST ONE NIGHT AT THE SHERATON!

[Screenshot: airbnb-2-win]

I should have remembered to check the crime overlay, though; Uber is a cheap solution to that problem:

[Screenshot: airbnb-3-crime]