Gluecon 2016: Melody Meckfessel on Serverless Technology

I had the opportunity to attend Melody Meckfessel's presentation at Gluecon 2016 last week. She has years of experience at Google, and when she speaks, smart people listen.

[paraphrased from notes]

We should all be optimistic about open source and serverless technologies. Kubernetes is an example of how to evolve your service: a means to an end, housing processes internally, making management easier, and letting developers deploy and scale horizontally and update gracefully without causing downtime. One of the themes in the future of cloud-based computing is that it is increasingly open source, which makes it easier for developers to influence and contribute to the software stack as it's evolving. Our focus is switching to managing applications, not machines.

“…the future of cloud-based computing…is increasingly open source which makes it easier for developers to influence and contribute the software stack as it’s evolving.”


(photo credit: @rockchick322004)

Serverless means caring about the code; it's embedded in all cloud platforms. In App Engine, you run code snippets and only get charged as your app or service scales up or down. With Container Engine, you run containers but don't specify the machines they run on. In the future, we're not going to be thinking or talking about VMs. What does that mean for us developers? It will enable us to compose and create our software dynamically. As one of the most creative and competitive markets, software puts us all in the same room and presents us with the same challenge: how to make things better, faster.

Melody tells a story about a project she was working on where they were using a new piece of hardware. They were having trouble scaling during peak load, holiday traffic being really important to their particular customer. They were constantly scrambling and for the first 6-9 months, she spent her time on primarily DevOps related work. This was frustrating because she wanted to work on features, seeing the potential of what she could do, and she could never quite get to that.

As developers, we are constantly raising the bar on the tools we use to release, and correspondingly our users’ expectations increase. Also as developers, we have the ability to create magic for users, predicting what they want, putting things in context, and launching the next great engaging thing. In a serverless future, developers will be focused on code, which means that PaaS (platform as a service) will have to evolve to integrate the components it’s not doing today. To do that, developers will need to shift Ops work from themselves more to cloud providers, being able to pick and choose different providers. Developers come to a platform with a specific language that they’re productive in, so this is a place where PaaS providers will have to support a variety of runtimes and interpreters. App Engine is the hint and glimmer of creating an entirely new developer experience.

“In a serverless future, developers will be focused on code…will need to shift Ops work from themselves more to cloud providers.”

What does a serverless future mean for the developer experience?

We will spend more of our time coding and have more and more choice. Right now, we're in a world where we still have to deal with hardware, but as that changes, we'll spend more of our time building value in our products and services and leave the infrastructure costs to providers and the business. If we want to do that, developers need to make it very easy to integrate with machine learning frameworks and provide analytics. Again, to do this, we need to free up our time: hence NoOps. If we're spending all our time on ops, we're not going to get to a world where we can customize and tailor our applications for users in the ways they want and expect. This is why NoOps is going to continue to go mainstream.

If we're spending all our time on ops, we're not going to get to a world where we can customize and tailor our applications for users in the ways they want and expect.

Think about if you're a startup. If you're writing code snippets, you may not need to think about storage. You don't need to worry about security, because the cloud providers will take care of that for you. Some wifi and laptops, and that's your infrastructure. You'll have machine-learning frameworks to make that app really amazing. There's this idea that you can go from prototype to production-ready and then scale to the whole planet. There are examples of really disruptive technologies that exist only because cloud resources enabled them.

To get there, we're going to have to automate away the configuration, the management, and the overhead. That automation still has to be built. We have work to do there.

Multi-cloud in NoOps is a reality.

People are using mixed and multiple levels of on-prem IT and hybrid cloud; the problem is that the operational tools are siloed in this model. This makes it very hard to operate these services. As developers, we should expect unified management…and we should get it. Kubernetes is an example, just another way for teams to deploy faster. Interestingly, we're seeing enterprises use Docker and Kubernetes on-prem; the effect is that it will make their apps and services cloud-ready. When they are faced with having to migrate to the cloud, it will be easy to do.

“…we’re seeing enterprises use Docker and Kubernetes on-prem…it will make their apps and services cloud-ready”

Once you're deployed like this across multiple clouds and service providers, you'll need an easy way to collect logs and perform analysis; Stackdriver is an example of this. It's this single-pane-of-glass view that enables developers to find and fix issues fast, to see what's happening with workloads, and to monitor more effectively. Debugging and diagnosing aren't going to go away, but with better services like Spinnaker and App Engine, we'll be able to manage them more effectively.

Spinnaker provides continuous delivery functionality and traceability throughout our deployment process, and it's open source. Melody's team worked hard on it as a collaboration with Netflix. It was a case of being more open about the software stack we're using, and of multiple companies coming together. We don't all need to go build our own thing, especially when we share so many common problems.

Debugging and diagnosis

The problem is that there's all this information across all these sources and it's hard to make sense of it. It's usually a time-critical situation, and we have to push out a quick fix as soon as we can. Part of streamlining the developer and debugging experience in the future cloud is having the right tools at our disposal: being able to trace through a complex application for a performance issue, to go back through the last three releases and see where the slow-down started, or to use production debuggers to inspect your service running in the cloud and go to any point in the code to look through variables and see what's happening. Error reporting also needs to be easier; because errors can be spread across multiple logs, it's hard to see the impact of the errors you're seeing. Errors aren't going away, so we need to be able to handle them effectively. We want to reduce the effort it takes to resolve these errors, speed up the iteration cycles for finding and fixing them, and provide better system transparency.
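As a concrete illustration of that error-reporting problem (my own sketch, not something from the talk): when the same failure is scattered across several services' logs, grouping lines by a normalized signature is one simple way to see impact. The log format and fields here are made up.

```python
import re
from collections import Counter

def error_signature(line: str) -> str:
    # Drop the source prefix (first token) and collapse variable parts
    # (numbers, hex ids) so the same underlying error groups together
    # no matter which service's log it came from.
    _, _, msg = line.partition(" ")
    msg = re.sub(r"0x[0-9a-fA-F]+", "<hex>", msg)
    msg = re.sub(r"\d+", "<n>", msg)
    return msg.strip()

def group_errors(log_lines):
    """Count each error signature across all sources, most frequent
    first -- a crude view of which errors have the biggest impact."""
    counts = Counter(
        error_signature(line) for line in log_lines if "ERROR" in line
    )
    return counts.most_common()

# The same timeout appears in two different services' logs:
logs = [
    "frontend ERROR request 4123 timed out after 3000 ms",
    "backend ERROR request 9981 timed out after 3000 ms",
    "frontend INFO request 4124 ok",
    "backend ERROR connection refused to 10.0.0.7",
]
for sig, count in group_errors(logs):
    print(count, sig)
```

Real services like Stackdriver's error reporting do far more than this, but the core idea is the same: normalize, group, rank.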

“The faster we can bring the insight from these motions back in to our source code and hence optimize, those are all benefits that we are passing through to users, making us all happier.”

In the NoOps, serverless future, not everything will be open source, but we will expect 3rd-party providers to work well with all the cloud platforms. Melody's team is currently working on an open source version of their build system called Bazel. Following their work on Spinnaker in collaboration with Netflix, they're looking for other opportunities to work with others on open source tools.

What does the world look like 5-10 years from now?

We’re not talking about VMs anymore. We’re benefiting from accelerated hardware development. We’re seeing integration of data into apps and compute so that we can have better insight into how to make applications better. It’s easier for machine learning to be embedded in apps so that developers can create that magical software that users expect. We don’t have to talk about DevOps anymore. We will have more time to innovate. In an open cloud, you have more opportunity to influence and contribute to how things evolve. Like thinking back on how there used to be pay phones and there aren’t anymore, the future of development is wide open, no one knows what it will look like. But contributions towards the future cloud are a big part of that future and everyone’s invited.


Developer Experience is the UX of Your API

For software developers, APIs are a logical choice for defining how something should work across multiple platforms, acting as a sort of common "language" between projects, systems, and teams.

Who really ‘uses’ an API?

Developers are the real 'users' of APIs. Sure, everyone is the downstream recipient of the value of an API, but most of that value doesn't materialize unless the API satisfies its first audience: developers. They need to understand how it works and how to incorporate it into what they're designing, and they're often designing APIs themselves. When something about an API is hard to understand, they're the ones who feel the pain, not the end user of the app.

You might stretch the definition of an API 'user' to encompass 'anyone who benefits from the API', but it's really the developers who are adopting the API, proliferating it to others, and ultimately orchestrating the user's interaction with your API, like digital gatekeepers. Beneficiaries and users are important to differentiate between as a business; one represents the volume of your business, the other represents your adoption curve.

Developers love something that just makes sense to them, and a well-designed API along with great documentation is the best 'UI' an API can have.


But APIs don’t have a UI!

APIs lack a User Interface (UI) in the traditional sense, that is, until you go back to your definition of who you mean by 'user' when you say 'User Interface'. If the 'user' is a developer who interacts with code and APIs as their primary language, then you leave the UI up to them, but there is still an 'interface' to your API as a product.

Traditional 'UI' focuses on treating the 'user' component in terms of human sensory input and direct manipulation. Because APIs are purely digital products, they are primarily something you interact with conceptually and socially.

The social element of APIs is huge. By definition, by building an API, you are describing functionality in terms of what 'your' system can do for something else. APIs imply interaction, collaboration, and inclusion.

The role of UI and APIs in the User Experience

User interface (UI) is heavy work in terms of software; people slave over the perfect font, color scheme, and behavioral appropriateness of a UI, only to find that the project is hard to maintain, change, or integrate with afterwards. Instead of thinking only about how things look, good designers also consider how things behave.

User Experience (UX) design is a field that accommodates these considerations while also maintaining a keen eye on the consumer’s actual use, psychological reasoning, and emotional reactions to how a technology works. Yes, we need awesome UI, but only as a result of thinking about the human experience driving the functionality of the software, and only after we get the technical premises correct for the business requirements.

What Makes for a Great Developer Experience?

Developer Experience (DX) is the UX of APIs, but what lessons in great UI design can we apply to DX? What elements of great DX are unique to this non-traditional view of UX/DX?

If I had to boil it down, a minimum-viable API developer experience in order of importance consists of:

  • Documentation, both human-readable and machine-readable
    Hand-crafted API documentation is adorable, but most developers expect an M2M format like Swagger these days. A wise man once said: "he who hand-codes what he could have generated automatically using an M2M descriptor is not spending that time making his API better".
  • Minimize the number of exceptions, not the exceptions themselves
    I get it. You’re special. We’re all special like snowflakes. Yay for us. Sad for users of your API. You may have some CRUD operational endpoints on your API, but more than likely you have domain-specific activities that require some explanation. Explain when you deviate from the most obvious, and try to keep things as simple as possible. Be sure to call out any exceptions to your own (internal consistency) rules like naming and type/structure differences between operations. And for the love of whatever, document why you decided to do so.
  • Test the new developer on-boarding process, constantly
    Proving that your DX is what you need it to be must be done regularly. Use hackathons, free account sign-up, metrics, and feedback to make sure that there are no broken elements of the on-boarding chain. If pricing is untenable, you might want to have the business owners focus on that for a bit instead of quarreling over design or feature specifics. Like a reliable disaster-recovery plan, a great DX only comes from putting your expectations through their paces.
  • Be as personable, real, and transparent as possible
    Developers are people. Despite the perception among non-technical people, they aren't antisocial; they just hate bullshit and like to be passionate about really technical stuff. If your documentation, website copy, and support process reflect genuine concern for helping them solve their problem (and of course you do that too), you'll be their friend for life, or at least until you have a major outage.
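On the machine-readable documentation point above, the payoff is that the human-readable docs can be generated from the spec instead of drifting away from it. Here's a minimal sketch with a hand-rolled renderer and made-up endpoints; a real toolchain like Swagger UI does this far better.

```python
# A tiny machine-readable spec in the shape of Swagger 2.0
# (the API and endpoints are invented for illustration).
spec = {
    "swagger": "2.0",
    "info": {"title": "Example API", "version": "1.0"},
    "paths": {
        "/users": {
            "get": {"summary": "List users",
                    "responses": {"200": {"description": "OK"}}},
            "post": {"summary": "Create a user",
                     "responses": {"201": {"description": "Created"}}},
        },
    },
}

def render_docs(spec):
    """Generate human-readable doc lines from the machine-readable
    spec, so the two can never drift apart."""
    lines = [f"# {spec['info']['title']} v{spec['info']['version']}"]
    for path, ops in sorted(spec["paths"].items()):
        for verb, op in sorted(ops.items()):
            lines.append(f"{verb.upper()} {path} -- {op['summary']}")
    return lines

print("\n".join(render_docs(spec)))
```

One source of truth, two audiences: machines consume the dict, humans read the rendering.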

Who Else Says Developer Experience is Essential to A Great API?

If you ask Jesse Noller, you'll get an earful about documentation and transparency, and I agree. Documentation is the first thing a developer will reach for when working with your API. That doesn't mean your efforts in DX should end there. You should also consider how other aspects like performance, authorization, and pricing play into a developer's decision to implement and evangelize use of your API over another throughout their team and career.


How do you test the Internet of Things?

If we think traditional software is hard, just wait until all the ugly details of the physical world start to pollute our perfect digital platforms.

What is the IoT?

The Internet of Things (IoT) is a global network of digital devices that exchange data with each other and cloud systems. I’m not Wikipedia, and I’m not a history book, so I’ll just skip past some things in this definitions section.

Where is the IoT?

It's everywhere, not just in high-tech houses. Internet providers handing out new cable modems that act as their own WiFi hotspots have created a new "backbone" for these devices to connect over, in almost every urban neighborhood now.

Enter the Mind of an IoT Tester

How far back should we go? How long do you have? I’ll keep it short: the simpler the system, the less there is to test. Now ponder the staggering complexity of the low-cost Raspberry Pi. Multiplied by the number of humans on Earth that like to tinker, educated or no, throw in some APIs and ubiquitous internet access for fun, and now we have a landscape, a view of the magnitude of possibility that the IoT represents. It’s a huge amount of worry for me personally.

Compositionality as a Design Constraint

Good designers will often put constraints in their own way on purpose, to act as a sort of scaffolding for their traversal of a problem space. Only three colors, no synthetic materials, exactly 12 kilos, can I use it without tutorials, fewer materials. Sometimes the unyielding makes you yield in places you wouldn't otherwise, flex muscles you normally don't, reach farther.

IoT puts compositionality right up in our faces, just like APIs, but with hardware, and in ways that are both very physical and personal. It forces us to consider how things will be combined in the wild. For testing, this is the nightmare scenario.

Dr. Strangetest, or How I Learned to Stop Worrying and Accept the IoT

The only way out of this conundrum is in the design. You need to design things to very discrete specifications and target very focused scenarios. It moves the matter of quality up a bit into the space of orchestration testing, which by definition is scenario based. Lots of little things are easy to prove working independent of each other, but once you do that, the next challenges lie in the realm of how you intend to use it. Therein lies both the known and unknown, the business cases and the business risks.

If you code or build, find someone else to test it too

As a developer, I can always pick up a device I just flashed with my new code, try it out, and prove that it works. Sort of. It sounds quick, but rarely is. There’s lots of plugging and unplugging, uploading, waiting, debugging, and fiddling with things to get them to just work. I get sick of it all; I just want things to work. And when they finally *do* work, I move on quickly.

If I'm the one building something to work a certain way, I have a sort of programming myopia, where all I ever want is for it to work. Confirmation bias.

What do experts say?

I’m re-reading Code Complete by Steve McConnell, written more than 20 years ago now, eons in the digital age. Section 22.1:

“Testing requires you to assume that you’ll find errors in your code. If you assume you won’t, you probably won’t.”

“You must hope to find errors in your code. Such hope might feel like an unnatural act, but you should hope that it’s you who find the errors and not someone else.”

True that, for code, for IoT devices, and for life.

[Talk] API Strategy: The Next Generation

I took the mic at APIStrat Austin 2015 last week.

A few weeks back, Kin Lane (sup) emailed and asked if I could fill in a spot, talk about something that was not all corporate slides. After being declined two weeks before that and practically interrogating Mark Boyd when he graciously called me to tell me that my talk wasn’t accepted, I was like “haal no!” (in my head) as I wrote back “haal yes” because duh.

I don’t really know if it was apparent during, but I didn’t practice. Last year at APIStrat Chicago, I practiced my 15 minute talk for about three weeks before. At APIdays Mediterranea in May I used a fallback notebook and someone tweeted that using notes is bullshit. Touché, though some of us keep our instincts in check with self-deprecation and self-doubt. Point taken: don’t open your mouth unless you know something deep enough where you absolutely must share it.

I don’t use notes anymore. I live what I talk about. I talk about what I live. APIs.

I live with two crazy people and a superhuman. It's kind of weird. My children are young and creative, and my wife and I do whatever we can to feed them. So when some asshole single developer tries to tell me that they know more about how to build something amazing with their bare hands, I'm like "psh, please, do you have kids?" (again, in my head).

Children are literally the only way our race carries on. You want to tell me how to carry on about APIs, let me see how much brain-power for API design nuance you have left after a toddler carries on in your left ear for over an hour.

My life is basically APIs + Kids + Philanthropy + Sleep.

That’s where my talk at APIstrat came from. Me. For those who don’t follow, imagine that you’ve committed to a long-term project for how to make everyone’s life a little easier by contributing good people to the world, people with hearts and minds at least slightly better than your own. Hi.

It was a testing and monitoring track, so for people coming to see bullet lists of the latest ways to ignore important characteristics and system behaviors that only come from working closely with a distributed system, it may have been disappointing. But based on the number of conversations afterwards, I don't think that's what happened for most of the audience. My message was:

Metrics <= implementation <= design <= team <= people

If you don’t get people right, you’re doomed to deal with overly complicated metrics from dysfunctional systems born of hasty design by scattered teams of ineffective people.

My one piece of advice: consider that each person you work with when designing things was also once a child, and like you, has developed their own form of learning. Learn from them, and they will learn from you.

 

Don’t Insult Technical Professionals

Some vendors look at analyst reports on API testing and all they see is dollar signs. Yes, API testing and virtualization have blown up over the past 5 years, and that's why some companies that were first to the game have the lead. Lead position comes from sweat and tears; that's how leaders catch analysts' attention in the first place. Those who created the API testing industry, earned the community's and analysts' attention, and built the most comprehensive products are the ones that win. Every time.

There are snakes in the grass no matter what field you’re in

I recently had the opportunity to informally socialize with a number of "competitors", who as people are great people to eat tacos and burn airport wait time with. Unfortunately, their scrappy position in the market pushes them to do things that you would only expect from lawyers and loan sharks. They say they're about one thing in person, but their press releases and website copy betray their willingness to lie, cheat, and deceive actual people trying to get real things done.

In other words, some vendors proselytize about “API testing” without solid product to back up their claims.

I don’t like lying, and neither do you

One of my current job responsibilities is to make sure that the story my employer tells about its products accurately portrays the capabilities of those products, because if it doesn't, real people (i.e. developers, testers, engineers, "implementers") will find out quickly and not only won't become customers, but in the worst cases will tell others that the story is not true. Real people doing real things is my litmus test, not analysts, not some theoretical BS meter.

Speaking of BS meters, a somewhat recent report lumped API "testing" in with "virtualization" to produce a pie chart that disproportionately compares vendors' market share, both by combining these two semi-related topics and by measuring share by revenue reported by the vendors themselves. When analysts ask for things like revenue in a particular field, they generally don't leave the answer solely up to the vendor; they do some basic research of their own to prove that the reported revenue accurately reflects the product(s) directly related to the nature of the report. After pondering this report for months, I'm not entirely sure that the combination of the "testing" and "virtualization" markets is anything but a blatant buy-off by one or two of the vendors involved to fake dominance in areas where there is none. Money, meet influence.

I can’t prove it, but I can easily prove when you’ve left a rotting fish in the back seat of my car simply by smelling it.

What this means for API testing

It means watch out for BS. Watch really closely. The way that some companies use "API testing" (especially in Google Ads) is unfounded in their actual product capabilities. What they mean by "testing" is not what you know to be necessary to ship great software. Every time I see those kinds of vendors say "we do API testing", which is an insult to actual API testing, I seriously worry that they're selling developers the illusion of having sufficient testing over their APIs when in reality it's not even close.

Why your API matters to me

On the off-chance that I actually use it, I want your API to have been tested more than what a developer using a half-ass “testing” tool from a fledgling vendor can cover. I want you to write solid code, prove that it’s solid, and present me with a solid solution to my problem. I also want you to have fun doing that.

The API vendor ecosystem is not what it seems from the outside. If you have questions, I have honesty. You can’t say that about too many other players. Let’s talk if you need an accurate read on an analyst report or vendor statement.

 

Automating the Self: Social Media

I'm taking on the task of building an automation system for some of my online social engagement. Since I am not such a Very Important Person (yet :), the absolute worst that can happen is that one of my friends/followers shares something racist or sexist or *-ist that I wouldn't otherwise agree with. Bad, but I can at least un-share or reply with an "I'm sorry folks, my robot and I need to talk" statement. But this leads to an interesting question:

What does it mean to entrust responsibility for my online persona to a digital system?

It's not really that bizarre of a question to ask. We already grant immense amounts of control over our online profiles to the social primaries (i.e. Facebook, Twitter, Google+). For most people, any trending app that wants access to "post to your timeline" is enough of a reason to grant full access to act on behalf of your profile, though it shouldn't be. Every time you want to play Candy Crush or Farmville, you are telling King and Zynga that it's okay for them to say whatever they want, as if they were you, to people in your network.

The more of a public figure you are, the more your risk goes up. Consider that Zynga is not at all incentivized to post bad or politically incorrect content to your network on your behalf. That’s not the problem. The problem is when (not if) the company behind a game gets hacked, as did Zynga in 2011. It happens all the time. It’s probably happened to you, and you stand to lose more than just face.

So what is the first thing to get right about automating social media?

Trust and security are the first priorities, even before defining how the system works. Automation rules are great except for when the activities they’re automating do not follow the rules of trust and responsibility that a human would catch in a heartbeat. There is no point to automation if it’s not working properly. And there’s no point in automation of social media if it’s not trustworthy.

For me at least, in the initial phases of planning out what this system would look like, trust (not just "security") will be a theme in all areas of design. It will be a question I ask early and often with every algorithm I design and every line of code I write. Speaking of algorithms, an early example of these rules goes something like this (pseudo-code):
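Something in this spirit, where every rule encodes a judgment a human would make instinctively. The rule names, fields, and thresholds here are purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_trust: float   # 0.0-1.0, my own score for the author
    topics: list
    text: str
    link_verified: bool = False

BLOCKED_TOPICS = {"politics", "tragedy"}  # never automate the sensitive stuff

def should_share(post: Post) -> bool:
    """Trust rules a human would apply in a heartbeat, made explicit."""
    if post.author_trust < 0.8:            # only amplify people I vouch for
        return False
    if BLOCKED_TOPICS & set(post.topics):  # sensitive topics need a human
        return False
    if "http" in post.text and not post.link_verified:
        return False                       # unverified links are how accounts get burned
    return True

ok = Post(author_trust=0.9, topics=["apis"], text="Nice write-up on DX")
risky = Post(author_trust=0.9, topics=["politics"], text="Hot take incoming")
print(should_share(ok), should_share(risky))  # True False
```

The point isn't the specific rules; it's that every automated action passes through checks that answer "would I have done this myself?"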

 

 

Defrag 2015 == Legit

Defrag is legit. By "legit", I go with the urban dictionary definitions in that it is "real", "authentic", "truthful", generally a good thing.

Who am I? Just a guy who goes places and interacts with other real people, like this guy, Andy Rusterholtz who isn’t even on Twitter yet.

Keynotes (a.k.a. new friends)

Ramez Naam illustrated how our conscious perception of the world around us is very much a function of both sensory input and our memory of past input mixed together, never a perfect, raw, clear representation of reality. He followed that up with proof that these squishy memories are entirely transmittable onto silicon. Want someone else's memories? They'll come mixed with yours, but we can do that now. He came in full Philip K. Dick style. This guy is within calling distance of Orson Scott Card via Wasteland 2 and Brin. Legit.

Mary Scotton put the whole keynote floor to rest with the depths of her compassion for considering the inequities of the industry around sexism, racism, and greed. Being inclusive is a responsibility we all share in common as humans who work for companies. Legit. She answered each of my quotes with a witty twitter mention to the original source of the quote or idea. Inclusive, ask Ben or anyone else she talked to, ever. Legit. "I don't have to have the same kind of talk with my son that African American parents have to have with theirs…", as she has a still photo of the Rodney King tragedy. Legit.

Bilal Zuberi. Great research. Great oration. Now that I've looked him up, typed his name out, and referenced it in many conversations since yesterday, I'll remember it well. His ideas for how much we should invest in reaching higher as a human race through technology and how the best leaders are formed; no problem remembering them now either. Are we really trying to get liquor to your door faster via mobile app, or should we maybe cure cancer first, then celebrate after? Legit.

Lorinda Brandon. Mindful tech. It is socially irresponsible to let courts rule in favor of letting upskirt videos be taken, protected under "free speech" because they are recorded by a phone. What does that make a phone? Things don't go away once you share them. Also, she put the kibosh on Google images as a way to help law enforcement crack down on illegal pools but not on illegal acts of law enforcement. Legit. She makes advocating for privacy the new gold standard for how to show which institutions and governments are leaders and which ones aren't. Legit.

Lisa Kamm. Legit. Right after Lorinda challenged Google's propensity for privacy snarls, Lisa bounced back by showing us all that we need to get the fuck off our phones while in transit. We are not efficient at either task when we do both at the same time. The only demonstration in her talk was her demonstrating how to navigate complex topics gracefully. Legit.

Kin Lane. A great mind behind honest ideas like APIware and APIs.json, a new format for how to describe the API lifecycle, something he invents in his spare time. Legit. Thinker of thoughts. Most terse person in the world if he wants to be, recently so about the OAI and about Swagger. You just gave all the students at Defrag (myself included) a map for how to build businesses around API tech. Legit.

[Transparency. I was not able to make a few of the sessions that I only heard people talking about afterwards. Duncan’s talk, for instance.]

Sam Ramji, we hope your shoulder feels better. Sad that you couldn’t come inspire us. Hello world demos aside, I will look for some time with you like I did with Phil Windley where we can talk about some stuff that is important.

Anya Stettler. Renegade developer evangelist at Avalara. What the hell is a tax company doing paying for a badass, beautiful brain such as hers to come and speak? Same thing that Capital One is doing by convincing Lorinda Brandon to join their team. It’s called financial technology, and it took over everything recently, did you know? Anya is shorter than I am, had more to drink than I did, and still kept going back and forth with Mick longer than I could. She knows who her people are. Legit.

David Nielsen. I missed his talk entirely, sat across from him a lot in two days, and took no notice as he slept while sitting upright at "Indian" dinner with me and Emmanuel Paraskakis, only to wake up and lay down some serious story about working in India himself, which to me at least made the already not-so-Indian food seem a little less authentic. I still ate it up, his company and the food. Legit.

I could go on, but not really because I was busy having other conversations and missed all of day one and some of day two. Sad for me. Not legit. Also not legit, I missed the cloud foundry meetup last night.

The Legit Thing to Do: Say Thank You

Eric Norlin. Incubator. There were students all over this conference. A true sign of legit. I’ll post on this topic more tomorrow, better timing for them and for Maria at StartupDigestCO #GSB2015. Eric and these kids are enabling the next great thinkers and business owners to connect up and are helping them make good choices about their careers from day one. Legit.

Kim Norlin. Consummate professional host and organizer. Just being in her presence makes me know I will never hold a conference myself. I’ll just ask her to either do it for a very hefty sum or refer someone else who can. She closed one of the bars down when there was no point in having it open anymore. Legit.

What’s Next?

I let some students speak their piece about getting poached like river trout by VCs and sponsors with dollars. I’ll post that tomorrow when I’ve had a few hours sleep between today and tomorrow, wait, no, today.

Also, I’ll be speaking at APIstrat in Austin next week, asking the question “how early is too early for childhood development around digital devices and #techlife?”

Peace to us all, especially those involved in the tragedy tonight which writing this article has acted to help me avoid breaking down about.

 

 

Swagger => OADF: Insider View

I still don’t get it. After multiple clarifications with a friend about the recent donation of the what-once-was-only-Swagger API description format to the OAI (Open API Initiative), I’m still confused as to why people don’t see the contribution of the neutral format — without what would have been very one-sided contributions of tooling around the format — as the safest step to getting a long-term standard into the public domain for everyone to benefit from. I argue it is, even if we don’t know how it will turn out.

Not at the cool kids’ table, what do I know?

I get that some people are feeling confusion, frustration, shock, and even disgust. I also hear people expressing relief, clarity, gratefulness, and vision. The world has far fewer thought-leaders and way more people who want standards they can build cool shit with, make some money and some good happen, and ultimately go home knowing things will generally work as designed again tomorrow.

I’m about to state some things, job whatever. I had no say in the way any of this played out, but we need someone to clarify that the inside of my corporate office is filled with good, fair, and driven people. This is not an evil empire, unlike some other software competitors; this is where I work. Respectfulness and honesty are two of the Apache Way tenets, two that ring true for me and those I work with.

What are the facts IMO?

Fact: Certain entities owned the rights to the Swagger technology and brand, which for better or worse started as a format plus a cool name mingled together. It eventually became so useful that the format was open-sourced. Later, a company was able to navigate the financial and political mess that has been the API description format landscape for years, producing an API ecosystem standard as significant to real implementers as other open standards like HTTP and JPEG. Still, the brand has monetary value but the spec has much broader universal value, and so the spec was donated.
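To ground what “the format” means here: the donated spec is just a vendor-neutral document describing an API’s surface. A minimal, illustrative Swagger 2.0 description (the API, paths, and names are all made up):

```yaml
swagger: "2.0"
info:
  title: Petstore        # illustrative API name
  version: "1.0.0"
paths:
  /pets:
    get:
      summary: List all pets
      responses:
        "200":
          description: An array of pets
```

Nothing in a document like this names a vendor or a tool, which is exactly why the format could be donated while the brand stayed behind.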

Fact: No one could “sell” the Swagger format because it was already open source, and though the brand was still legal property, it has been managed with no misinformation or clandestine story behind it. When technology becomes part of a commons, it no longer makes sense to have one brand dominate the tooling and the vendor ecosystem. A separation of the format and the brand was necessary. To complicate the new OADF (Open API Description Format) by including tools that specific vendors donated but not others would have been an even worse idea than not separating the format from the brand, and the separation doesn’t prevent these tools from being just as useful and successful as they were before the OAI changes.

Fact: People who truly helped the Swagger ecosystem get to where it is today deserve due credit for what they have done. Their contributions continue to be pivotal to the OADF, because without a community, big companies that are already strong-arming the media with beaucoup marketing budgets and completely horse shit press releases will take the OAI over and rule the connected world, one bad choice at a time. The community must maintain leadership in the OAI, and if there’s no room at that table for proven thought-leaders who have already spent years improving and evangelizing the importance of a format like Swagger, then shame on those who say they are leaders but are too busy making ends meet to ask for the right kind of help or spearhead truly pivotal contributions to the open web.

Fact: SPDY was a great start to fixing something, and I admire the work done at Google, with the help of the whole industry, to elevate many of its great ideas into the formal HTTP/2 spec. Same with WebRTC, if only we could get proprietary chipset manufacturers to be adults and ship more than one optimized instruction set for video decoding on their silicon communion wafers. Much like the SPDY-to-HTTP/2 evolution, a standard API description format is long overdue, the OADF being a necessary leap forward which should have happened years ago.

What are we left with in the OADF?

I admire people who stand up for what they feel is right and who are vocal about what happens when things don’t go the way they expected them to. A brand is monetarily valuable, search and replace on the internet is not possible (I get that), and decision-makers usually make the decisions they are hired to make. That’s just how that works.

In the meantime, we have a standard open API description format now, frankly better than all the rest of them, and we can build on it just like all the other foundational technology that makes up the connected world we live in today. Brands are bullshit anyway. It’s the meaning of the thing behind the name that matters.

Peace, love, and internet for everyone.