Recap of DevOps Days Boston 2017 with David Fredricks

This week, I had the opportunity to continue a conversation started weeks ago with David Fredricks, organizer of DevOps Days Boston.

You can watch it on YouTube or listen via Soundcloud. Transcript below.

The Incredible Impact of Open Spaces

Paul: All right, so welcome everyone, my name is Paul Bruce, and once again I’m back here with a member of the DevOps community, Dave Fredricks. Now Dave, you organize the DevOps Days Boston event, is that correct?

Dave: Yeah that’s correct. I’ve participated as a volunteer organizer for the last three years, involved for the last four.

Paul: Excellent…and I got a chance to meet you beforehand, I think at one of the DevOps Days Boston meetups, but then also we got to chat at the event, and it was a really good event. A number of different things really cohered well, particularly from my point of view, the collaborative open spaces. Can you tell us a little bit about what that’s like and how it got into the conference schedule?

Dave: Yes, certainly. So open spaces is a really interesting kind of platform that’s unique to DevOps Days. Basically how it works is everybody comes up with topics during the event after listening to some of the keynotes. Some discussions are interesting to individuals, and a lot of times you want to add your personal perspective, not only offering new ideas or maybe even some suggestions, but asking specific questions.

A way of being able to do that is by getting everyone together at the event over common topics. You basically vote on different topics that are of interest to you and they can actually go anywhere from cultural to personal to technology as a whole.

The idea is, there are a few rules, basically:

  1. what’s being said is what needs to be said
  2. who’s there is the people that need to be there
  3. when it starts and when it ends is the time it starts and ends

Those are the only kind of guidelines that we go by, and the idea is to give people who usually wouldn’t be open to public speaking a chance and an opportunity to share some ideas, to ask questions directly of different individuals, and to have an open forum.

The real values that come out of it are specific dialogue and, the biggest thing, the new introductions and relationships that are created.

The hope is that a DevOps Days event lets you get contact information for individuals who are in the same space, at the same stage as yourself, so that throughout the year after the event you have an outside outlet to bounce ideas off of as you, as an engineer, start to face challenges and try to solve problems.

Paul: Yeah, that was one of those things that really clicked for me. Being part of a number of those open spaces, I saw exactly what you said, which was that people were far more likely to comment, to share, and to ask questions than in a larger audience. And I think the other element of that is the fact that not only do they get to share, but they get instant feedback.

And this is one of those core tenets, I think, of DevOps in my mind: this concept of continuous learning. But you don’t learn unless you know what’s going on, and you don’t know what’s going on unless you [as an organization] radiate information, which is typically facilitated by feedback loops. So whether we’re talking about technology feedback loops or real people feedback loops, I think that’s really helpful.

So can I back up for a second and ask you a slightly broader question about DevOps: in your mind how would you define DevOps?

What Is DevOps, Really?

Dave: Great question. You know, this is one that we talk a lot about in our community, especially for folks who are outside of the quote-unquote “DevOps thought process”, knowing that it’s something that’s taking off as a force in the software world.

One of the things we do is talk about how we define DevOps. The biggest thing for me is that DevOps means different things to different people, and it’s all about context and perspective: where you come from, where you’ve been, and what challenges you’re trying to solve. So when I meet somebody new who’s in this space and they start to chant or evangelize to me without first getting a baseline perspective as to where I’m at, what I was doing, and what I’m trying to solve, it immediately has me question, “okay, are you trying to push your ideals down on me?”

This is what DevOps means to me: getting folks to work together in an efficient collaborative manner to solve a common goal, period.

It has nothing to do with tools. It has nothing to do with process. It has nothing to do with frameworks. It’s all about getting people together, teaching context, having empathy, understanding what somebody’s doing, why they need to do it, and what they’ve been doing in the past. You share your ways of doing it, and then together, when you have a sense of “okay, I know why this person has to do things, I know the reason why they’re thinking this way”, you can efficiently solve problems. For me, that’s what DevOps is to the core, right there.

Paul: So one thing I heard from that is it starts with people, right? It doesn’t start with tools, it doesn’t start with how you’ve been doing it; it starts with people and really understanding the context and the perspective that they bring to the table. Is that right?

Dave: Yeah, Paul, you nailed it right there. It starts, it continues, and it ends with people. Ultimately, you can take the concepts and the core principles of DevOps and apply them to any industry, any product, any delivery, any manufacturing; it really is bringing people together to work more efficiently to solve a common problem.

What Is DevOps Not?

Paul: And so actually, people are doing that; you’ll hear the prevalence of these amalgam terms like DevSecOps and DevTestQAOps. And I kind of take issue with that, in the sense that I understand how important terminology and clear labels for things are. As a practitioner and engineer, as soon as somebody starts to blow out a term to mean “all the things”, my red flags go up instantly.

That doesn’t mean that [DevOps] doesn’t include other people, but can you tell us a little bit about how important the scope of DevOps is to you? And just to follow that up with some context, I was able to speak to Ken Mugrage from DevOps Days Seattle, and he was very clear that if we blow it out into all the things, “DevOps” loses its value.

And so I put this to you: if DevOps grows into a pantheistic, catch-all term, why is that a problem?

Dave: No, that’s a great thought. I want to take this back a little bit to identify why all these add-ons appear, and how and why [DevOps] is being branded in this way. This is a discussion that I have, especially with growing teams.

One of the biggest things I talk about with organizations is, first and foremost, technically there are no DevOps engineers. So why label it that way?

There’s No Such Thing as a “DevOps Engineer”

When I started working with a lot more enterprises, I helped organizations transform their development to be much more modern so that they could have quicker release cycles and feedback. One of the things that used to frustrate me was hearing “hey, we need five DevOps engineers!”. That doesn’t mean anything to me; you’ve got to explain, on a day-to-day basis, what this person is doing, and ultimately, why you’re labeling these folks as DevOps engineers.

And I had some interesting feedback which came from the product marketing side. They were like, “Dave, we’re in the enterprise. We’re used to big, long deploys of software in order to get it to our customers, and a lot of times we don’t know if our customers are even getting any value out of what we’re producing. When we’re releasing every year and waiting for six months to get the actual feedback from our customers, it doesn’t make any sense.”

So you see this large swath of folks trying to get into this space to build software quicker, to have faster feedback, and to be able to add more value to end users.

These individuals don’t really understand this whole open source community, they don’t understand how the strength of the community is really the value.

“So we don’t know how to really market. We don’t know how to communicate to the group in a way for us to be able to blanket it all together. So we just scoped it into this thing and we call it #DevOps and everything gets that kind of label to it.”

From my experience, what I’m starting to see is a lot more of these organizations who are specific to security or to testing, as a way of being able to catch and grasp that member of the audience, saying “let’s throw it in: DevSecOps, DevQualityOps”. What starts to happen, and what I’m worried about, is that people start to lose the real purpose.

Paul: So, basically the exact same thing that happened to Agile. Everybody forgot to keep agility as one of the core tenets that people check in on, on a regular basis, such that they internalize it and that is where their activities and their tools flow from, right?

Dave: Yes, exactly. If you start to get too focused on the terminologies and the labeling of things and forget the context as to why you’re practicing it, then ultimately, the further downstream you get and the more generations that get folded into the process, the more they’ll start to lose the actual scope: “hey, we’re trying to get people to work together in a more collaborative manner, to be efficient, and to be able to deliver quickly.”

How to Be a Good DevOps (Citizen, Vendor, Employer)

Paul: Yeah, one thing that I did recently was put out an article (and thank you, you had shared it with a number of people, and I think that’s half the reason why it got some attention). It was essentially how to be a good DevOps vendor. It took the approach of looking at it from the customer’s perspective. The implementation of that was over a simplified customer journey, and then, chronologically through that journey, I went through and basically made statements from an outsider’s perspective to different groups, whether it be product, marketing, or sales.

Back to your perspective, I get that it has to fundamentally start with people, because people are what build teams, teams are what build software, and software is what affects us. But the team affects us and individuals affect us, and so it does make sense to keep that as a core value: to consider personal responsibility and also the responsibility of the team to have these cultural aspects present.

But unfortunately, I think what happens is that we do need tools, and you know, conferences are notorious for needing some kind of funding, and becoming self-funding is really hard, so out come sponsor packages; I mean, it’s an ecosystem. All software is eventually, in most people’s minds, going to make money, and so this is where I was coming from: understanding that there is no such thing as a DevOps vendor or a DevOps tool or a DevOps job/position. Yet the fact is that when you’re closely aligned with the thinking of another person and “DevOps” is the term they’re using, it’s easy for these vendors to bring that in and pull it into their messaging.

So I guess my point of view on that is that we are going to have to deal with it, and it’s kind of a constant battle against the pantheism of trying to “all the things” a term [DevOps], but in the meantime we also do have to represent those tenets to more than just developers and operations. If you really want to sell to developers and operations, or to teams that have internalized DevOps, they’re going to be looking at the world from this interesting perspective. And they’ll be looking across the tool chain to figure out who sounds like they’re blowing smoke up [you know where].

If a tool vendor or a service provider does not understand the core of DevOps, then their messaging, their selling process, their product ideation…it’s all not going to jibe with the real market.

After a recent Boston DevOps meetup, we dove into this for, what, an hour and a half, and really talked about how we actually do this. My concern is that when we start to move this into the enterprise (and by the way, the good principles of DevOps should be movable to the enterprise, right? If they work, they work, and it’s a matter of fitting them to context), while the core of it is culture, we can’t just live in this sort of kumbaya world.

We really have to figure out how to scale DevOps principles up and out into the enterprise setting so that these good principles have a positive impact on things like automated insurance, machine learning in healthcare, and defense and government settings.

So I’m working on that on the side but in the meantime, what do you think about scaling to the enterprise? What does that even mean for DevOps?

How DevOps Is Re-writing Management Decisions

Dave: Yeah, that’s a great point. It’s an interesting challenge, and there are a lot of organizations facing it. Right now, I’m dealing with situations where we’re starting to see a lot of enterprises buy instead of trying to build it themselves. One thing they have is capital and resources. So the idea is, “if we don’t know how, or we can’t make it, it’s buy versus build; why go out and try to do what people are already being really successful at, in something that we don’t understand too well? Let’s just go ahead and absorb some of these startups…”

Paul: Do you mean actually purchasing startups in order to just fill that technical gap in an organization? So I don’t want to name names, but I’m thinking of a very large enterprise that just recently bought up one of the most well-known API monitoring services out there, and people are freaking out like “oh gosh, what’s going to happen, are they going to de-culture this awesome group of guys and gals?”

Dave: I’m dealing with the same thing within an organization: a large security company buying a smaller, more nimble security product with a lot of open source options. They’re putting it out there trying to create groundswell, getting the tool for free into the hands of engineers and letting them play with it so they can understand how it works, creating some kind of swell within the engineering teams. Then they come up to the top and start talking to the executives about, “Hey, what challenges are you facing in this broad space?”, where you’re trying to protect not only your customers’ information but also information about your company.

As they start to have that type of dialogue, all of a sudden the executives within the organization start to look down, talking to their engineering group and saying “hey, what do you know, what have you played with, what do you think is interesting, how do you think we should be solving this problem?” You’ve already created that initial lift of inertia in engineering, and then they say, “hey, let’s go with this product…we already know how it works, we’ve been tooling around with it.” Win-win, right?

So this is a completely different way of thinking from how enterprises used to sell products to their customers. It was always a top-down approach…let’s talk to the executives who have the purchasing power, float it down, and then they’ll disseminate that information in the way that we roll it out to the engineering team. That’s the old-school way of doing it. Now, in today’s new world, a lot of tools are available for you to play with for free, and when enterprise organizations start to try to come into this space, they’re really kind of blindsided by this whole new content creation process.

Selling Into DevOps Takes Understanding DevOps

What I’m starting to see is that they’re at least now recognizing, “we do not know how to sell to this community. We know we really want to get into the space, we want to do it the right way; what do we do?” And you know, to your point with your article, I’ve shared your article with all of the enterprises that have been talking to me about this problem, because I can’t teach them the thought process of open source.

I mean, we can look back to the 60s, the MIT days, where the two groups kind of split off. A lot of us in the DevOps space already have the mentality of “hey, you know, we want to be able to share a lot of this stuff, but we do want value for the hard work we do”. But for the most part, there are different ways of doing it, versus the enterprise mentality where everything is paid for.

What I’m starting to experience is that there are a lot of organizations out there realizing there’s exponential value once they start to get into this community and…

the brand loyalty within the DevOps community is tremendous

…but the challenge in front of us right now is really the learning piece, and I’m thinking it’s a leadership issue (this is my own personal view). It’s enterprise leadership that needs to get out of the way and allow for new blood to come in, to be able to understand this kind of movement. I’ve been doing as much as I can to try to influence old leadership. It’s a challenge, and a lot of it has to do with success syndrome: you’ve been doing it a certain way for decades. It’s a great case study that we’re going to be able to sit back and watch over the next five years.

Calling All Researchers: Inclusion Means You Too!

Paul: Yeah, and you know, there’s so much going on, no one person can do it alone. So without plugging any commercial products of any kind (that’s not my motive), I have started something called iterativeresearch.org, which is essentially a bunch of contributors to research. As they go along, it could be lightweight contributions, simply pocketing articles and getting into a feed of people who pay attention; it’s writers too. But the point is, it’s not under a brand that’s connected to pay-for services. And you know, I would love for this conversation to really start flowing in that direction, because I think it takes many perspectives, right?

The core of this is that it’s an inclusive conversation, not an exclusive one.

So understanding that you are a busy man and we’re at the top of our time, are there one or two things that you want to give a shout-out to, or any particular resources that people can go to: events, communities, open-source forums, anything like that?

Get Out to a DevOps Tribe Near You!

Dave: Yeah, you know, thank you so much for the opportunity, first and foremost; we’re gonna have to do it again! One thing I would really highly recommend to folks who are interested in getting more involved: start to look at some local meetups that you have going on. There are some great folks within every community, in whatever city, whatever small town, who are interested in sharing ideas and thoughts and challenges. All you have to do is get out there and look. Go find your tribe! The biggest thing is, don’t sit back on your hands and expect interest to come to you.

The whole constant-learner, Kaizen mentality: be better tomorrow than you are today, be better today than you were yesterday. DevOps lives and dies by it, and the way to do it is to start talking to folks who you’re not used to talking to.

Don’t be afraid; get out there, introduce yourself, and have a good time. Life is learning.

Paul: Cool. So that’s David Fredricks, everyone, and thank you, David, for spending the time with me. Do you prefer going by David or Dave?

Dave: Dave, David, either way.

Paul: Dave/David, I’ve really enjoyed it; it was great being able to spend some time. We’ll circle back. Thank you so much! Cheers!

More from DevOps Days Boston:

Folding Open Source into Enterprise DevOps

Open source software (OSS) is a foundational part of the modern software delivery lifecycle. Enterprise teams with DevOps aspirations face unique challenges in compliance, security, reliability, and sustainability of OSS components. Organizations-in-transformation must have a complete picture of risk when integrating open source components.

This article explores how to continuously factor in community and ecosystem health into OSS risk analysis strategy.

The Acquisition Process for Open Source Software

Developers need to build on the successes and contributions of others. Having the flexibility to integrate new open source components and new versions of existing dependencies enables teams to go fast, but external code must be checked and validated before becoming part of the trusted stack.

Including someone else’s software is an important moment of engagement. Enterprises typically wrap a formal ‘acquisition’ process around this moment, where the ‘supplier’ (the entity that provides the software/service) and the ‘acquirer’ (the entity that wants to integrate the software/service) contractualize.

Though there are already commercial approaches to introducing software packages safely by companies like Sonatype, Black Duck, and others, my question extends beyond the tools conversation to encompass the longer-term picture of identifying and managing risk in software delivery.

Enterprises care deeply about risk. Without addressing this concern, engineering teams will never actualize the benefits of DevOps.

This is a tangible application of the need for DevOps to apply not only at an individual team level, but in the broader organization as well. It takes alignment between a team that needs software and the teams providing compliance and legal services to do so in an expedient manner that matches the clock speed of software delivery.

Communities Empower Enterprises to Address this Gap

Today in a Global Open Source Governance Group Chat, I asked the question:

“What are some methods for determining how significant a supplier/vendor OSS and community contributions are, relative to acquirer confidence?”

This question stems from my involvement with the IEEE 2675 working group, particularly because I see:

  • Prolific use of OSS in DevOps and in enterprise contexts
  • Reluctance and concern (rightly so) around integration of OSS in enterprise software development and operation in production
  • The convergence of compliance and automation considerations
  • How important transparency and collaboration are to the health of OSS, but also to the supply and acquisition processes in a DevOps lifecycle

The group, expertly facilitated by Lauri Apple, also included key insights from Paul Burt and Jamie Allen. A log of the conversation can be found on Twitter.

As open source projects (like Swagger/OADF, for instance) become increasingly important to enterprise software delivery, health and ecosystem tracking becomes equally important for any new components being introduced.

My point-of-view is that organizations should prepare a checklist for software teams to construct a complete picture of the risk introduced by OSS (not to mention proprietary) components. This checklist must include not only static analysis metrics but also support, engagement, funding, and contribution considerations.

Measuring OSS Project + Community Health

The group had many suggestions that I wouldn’t otherwise have thought about, which is another reason for more people to get involved in dialogs like this.

There are already providers of aggregate information on open source community health and contribution metrics, such as CHAOSS (a Linux Foundation project) and Bitergia. This data can be integrated easily into dependency management scripts in Groovy, npm, Ant, Maven, etc., and at the very least written into a delivery pipeline as part of pre-build validation (BVT is too late).
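As a sketch of what that pre-build gate could look like, here’s a minimal Groovy check that pulls community-health metrics for each OSS dependency and fails the build below an agreed threshold. The endpoint, JSON fields, score formula, threshold, and project names are hypothetical placeholders, not the actual CHAOSS or Bitergia APIs:

```groovy
// Hypothetical pre-build OSS health gate; the endpoint and field names are
// placeholders, so wire in your real CHAOSS/Bitergia data source.
import groovy.json.JsonSlurper

def minHealthScore = 0.6  // threshold agreed with compliance/legal, not a magic number

def checkDependencyHealth(String project, BigDecimal minScore) {
    def url = new URL("https://oss-health.example.org/api/projects/${project}")
    def metrics = new JsonSlurper().parse(url)  // e.g. [activity: 0.8, responsiveness: 0.7]
    def score = (metrics.activity + metrics.responsiveness) / 2
    if (score < minScore) {
        throw new IllegalStateException(
            "OSS health gate failed for ${project}: score ${score} < ${minScore}")
    }
    println "${project} passed community health gate (score ${score})"
}

// Validate each OSS dependency before the build stage proceeds
['swagger-core', 'commons-io'].each { checkDependencyHealth(it, minHealthScore) }
```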

And there is honest, hard-hitting research on open source software (which you should take the time to read) from Nadia Eghbal, published by the Ford Foundation, in a report called ‘Roads and Bridges: The Unseen Labor Behind Our Digital Infrastructure‘. If you don’t have time to read it, get some text-to-speech software and listen to it when you’re in transit.

The group also identified some key characteristics of OSS community health not necessarily tracked by established services, such as:

  • Same day response on reported issues, even if it’s simply acknowledgement
  • PRs under the “magic number” of 400 lines of code, which tends to be the limit for catching bugs and getting useful feedback
  • Outage response, sandbox availability
  • Distribution of component versions across multiple central repositories

More to Come…From YOU

As I integrate both my own learnings and other voices from the community into the larger Enterprise DevOps conversation, the one thing that would be missed is YOUR THOUGHTS, whether you’re in a large organization or simply on a small team.

Please share your thoughts in the comments section below! Or ping me on Twitter @paulsbruce or LinkedIn.

More reading:

Recap of DevOps Days Boston 2017 with Corey Quinn

This weekend, I had the chance to have a ‘distributed beer’ with Corey Quinn of Last Week in AWS to chat about the DevOps Days Boston 2017 conference last week. We provide a few takeaways each in about 5 minutes.

You can watch it on YouTube and listen on Soundcloud.


Beyond DevOps: The ‘Next’ Management Theory

In a conversation today with Ken Mugrage (organizer of DevOps Days Seattle), the scope of the term ‘DevOps’ came up enough to purposely double-click into it.

‘DevOps’ Is (and Should Be) Limited In Scope

Ken’s view is that the primary context for DevOps is in terms of culture, as opposed to processes, practices, or tools. To me, that’s fine, but there’s so much not accounted for that I feel I have to generalize a bit to get to where I’m comfortable parsing the hydra of topics in the space.

Like M-theory, which attempts to draw relationships in how fundamental particles interact with each other, I think that DevOps is just a single view of a particular facet of the technology management gem.

DevOps is an implementation of a more general theory, a ‘next’ mindset over managing the hydra. DevOps addresses how developers and operations can more cohesively function together. Injecting all-the-things is counter to the scope of DevOps.

Zen-in: A New Management Theory for Everyone

Zen-in (ぜんいん[全員]) is a Japanese term that means ‘everyone in the group’. It implies a boundary, but challenges you to think about who is inside that boundary. Is it you? Is it not them? Why not? Who decides? Why?

By ‘management’ theory, I don’t mean another ‘silo of management’. I literally mean the need to manage complexity: personal, technological, and organizational. Abstracting up a bit, the general principles of this theory are:

  • Convergence (groups come together to accomplish a necessarily shared goal)
  • Inclusion (all parties have a voice, acceptance of constraints)
  • Focus (alignment and optimization of goal, strategies, and tactics)
  • Improvement (learning loops, resultant actions, measurement, skills, acceleration, workforce refactoring, effective recruiting)
  • Actualization (self-management, cultural equilibrium, personal fulfillment)

I’ll be writing more on this moving forward as I explore each of these concepts, but for now I think I’ve found a basic framework that covers a lot of territory.

I Need Your Help to Evolve This Conversation

True to Zen-in, if you’re reading this, you’re already in the ‘group’. Your opinions, questions, and perspectives are necessary to iterate over how these concepts fit together.

Share thoughts in the comments section below! Or ping me on Twitter @paulsbruce or LinkedIn.


How to Be a Good DevOps Vendor

This article is intended for everyone involved in buying or selling tech, not just tooling vendors. The goal is to paint a picture of what an efficient supply and acquisition process in DevOps looks like. Most of this article will be phrased from an ‘us’ (acquirer) to ‘you’ (supplier) perspective, but out of admiration for all.

Developers, Site-Reliability Engineers, Testers, Managers…please comment and add to this conversation because we all win when we all win.

MP3: https://soundcloud.com/paulsbruce/how-to-be-a-good-devops-vendor

I’ll frame my suggestions across a simplified four stage customer journey:

  1. Try
  2. Buy
  3. Integrate
  4. Improve

Note: As you read the following statements, it may seem that I bounce around talking to various groups in a seemingly random fashion. This is actually a result of looking at an organization through the customer lens across their journey. As organizations re-align to focus on delivering real value to customers, our paradigms for how we talk about “teams” also change to include everyone with a customer touch point, not just engineering teams.

1. Make It Easy for Me to Try Your Thing Out

(Product / Sales)
Make the trial process as frictionless as possible. This doesn’t mean hands off, but rather a progressive approach that gives each of us the value we need iteratively to get to the next step.

(Sales / Marketing)
If you want to know what we’re doing, do your own research and come prepared to listen to us about our immediate challenge. Know how that maps to your tool, or find someone who does, fast. If you don’t feel like you know enough to do this, roll up your sleeves and engage your colleagues. Lunch-n-learns with product/sales/marketing really help to make you more effective.

(Sales)
I know you want to qualify us as an opportunity for your sales pipeline, but we have a few checkboxes in our heads before we’re interested in helping you with your sales goals. Don’t ask me to ‘go steady’ (i.e. regular emails or phone calls) before we’ve had our first date (i.e. I’ve validated that your solution meets basic requirements).

(Product / Marketing)
Your “download” process should really happen from a command line, not from a 6-step website download process (that’s so 90s), and don’t bother us with license keys. Handle the activation process for us. Just let us get into code (or whatever) and fumble around a little first…because we’re likely engineers, and we like to take things apart to understand them. So long as your process isn’t kludgy, we’ll get to a point where we have some really relevant questions.

(Marketing / Sales)
And we’ll have plenty of questions. Make it absurdly easy to reach out to you. Don’t be afraid if you can’t answer them, and don’t try to preach value if we’re simply looking for a technical answer. Build relationships internally so you can get a technical question answered quickly. Social and community aren’t just marketing outbound channels; they’re inbound too. We’ll use them if we see them and when we need them.

(Marketing / Community / Relations)
Usage of social channels varies per person and role, so have your ears open on many of them: GitHub, Stack Overflow, Twitter (not Facebook, please), LinkedIn, your own community site…and make sure your marketing+sales funnel is optimized to accept me in the ‘right’ way (i.e. don’t put me in a marketing list).

Don’t use bots. Just don’t. Be people, like me.

(Sales / BizDev)
As I reach out, ask me about me. If I’m a dev, ask what I’m building. If I’m a release engineer, ask how you can help support my team. If I’m a manager, ask me how you can help my team deliver what they need to deliver, faster. Have a 10-second pitch, but start the conversation right in order to earn trust so you can ask your questions.


2. Help Me Buy What I Need Without Lock-in

(Sales / Customer Success)
Even after we’re prepared to sign a check, we’re still dating. Tools that provide real value will spread and grow in usage over time. Let us buy what we need, do a PoC (which we will likely need some initial help with), then check in with us occasionally (customer success) to keep the account on the right track.

(Sales / Marketing)
Help us make the case for your tool. Have informational materials, case studies, competitive sheets, and cost/value breakdowns that we may need to justify an expenditure that exceeds our discretionary budget constraints. Help us align our case depending on whether it will be coming out of a CapEx or OpEx line. Help us make its value visible and promote what an awesome job we did picking the right solution for everyone it benefits. Don’t wait for someone to hand you what you need; try things and share your results.

(Product)
Pick a pricing model that meets both your goals and mine. Yes, that’s complicated. That’s why it’s a job for the product team. As professional facilitators and business drivers, seek input from everyone: sales, marketing, customers (!!!), partners, and friends of the family (i.e. trusted advisors, brand advocates, professional services). Don’t be greedy; be realistic. Have backup plans at the ready, and communicate pricing changes proactively.

(Sales)
Depending on your pricing model, really help us pick the right one for us, not the best one for you. Though this sounds counter-intuitive to your bottom line, doing this well will increase our trust in you. When I trust you, not only will I likely come back to you for more in the future, we’ll also excitedly share this with colleagues and our networks. Some of the best public champions for a technology are those that use it and trust the team behind it.

3. Integrate Easily Into My Landscape

(Product)
Let us see you as code. If your solution is so proprietary that we can’t see underlying code (like layouts, test structure, project file format), re-think your approach because if it’s not code, it probably won’t fit easily into our delivery pipeline. Everything is code now…the product, the infrastructure configuration, the test artifacts, the deployment semantics, the monitoring and alerting…if you’re not in there, forget it.

(Product)
Integrate with others. If you don’t integrate into our ecosystem (i.e. plugins to other related parts of our lifecycle), you’re considered a silo and we hate silos. Workflows have to cross platform boundaries in our world. We already bought other solutions. Don’t be an island, be a launchpad. Be an information radiator.

(Product / Sales / Marketing)
Actually show how your product works in our context…which means you need to understand how people should/do use your product. Don’t just rely on screenshots and product-focused demos. Demonstrate how your JIRA integration works, or how your tool is used in a continuous integration flow in Jenkins or Circle CI, or how your metrics are fed into Google Analytics or Datadog or whatever dashboarding or analytics engine I use. The point is (as my new friend Lauri says it)…”show me, don’t tell me”.

(Sales / Marketing)
This goes for your decks, your videos, your articles, your product pages, your demos, your booth conversations, and even your pitch. One of the best technical pitches I ever saw wasn’t a pitch at all…it was a technical demo from the creator of Swagger, Tony Tam at APIstrat Austin 2015. He just showed how SwaggerHub worked, and everyone was like ‘oh, okay, sign me up’.

Truth be told, I only attended to see what trouble I could cause.  Turns out he showed a tool called Swagger-Inflector and I was captivated.
– Darrel Miller on Bizcoder

(Sales / Product)
If you can’t understand something that the product team is saying, challenge them on it and ask them for help to understand how and when to sell the thing. Product: sales enablement is part of your portfolio, and though someone else might execute it, it’s your job to make sure your idea translates into an effective sales context (overlap/collaborate with product marketing a lot).

(Product / Customer Support)
As part of on-boarding, have the best documentation on the planet. This includes technical documentation (typically delivered as part of the development lifecycle) that you regularly test to make sure it is accurate. Also provide how-to articles that are down-to-earth. Show me the ‘happy path’ so I can use it as a reference to know where I’ve gone wrong in my integration.

(Product / Developers / Customer Support)
Also provide validation artifacts, like tools or tests that make sure I’ve integrated your product into my landscape correctly. Don’t rely solely on professional services to do this, unless most other customers have told you it’s necessary, which indicates you need to make it easier anyway.

(Customer Support / Customer Success / Community / Relations)
If I get stuck, ask me why and how I’m integrating your thing into my stuff, to get some broader context on my ultimate goal. Then we can row in that direction together. Since I know you can’t commit unlimited resources to helping customers, build a community that helps each other, and reward contributors when they do. A customer gift basket or Amazon gift card for the top external community facilitators goes a long way toward gaining a second-level support system to handle occasional support overflows.

4. Improve What You Do to Help Me With What I Do

(Product / Development / Customer Support)
Fix things that are flat-out broken. If you can’t right now, be transparent and diplomatic about how your process works and what we can do as a workaround in the meantime, and receive our frustration well. If we want to contribute our own solution or patch, show gratitude, not just acknowledgement; otherwise we won’t go the extra mile again. And when we contribute, we are your champions.

(Product)
Talk to us regularly about what would work better for us, how we’re evolving our process, and how your thing would need to change to be more valuable in our ever-evolving landscape. Don’t promise anything, but also don’t hide ideas. Selectively share items from your roadmap and ask for our candid opinion. Maybe even hold regional user groups or ask us to come speak to your internal teams as outside feedback from my point of view as a customer.

(Product)
Get out to the conferences, be in front of people, and listen to their reactions. Do something relevant yourself, and don’t be just another product-headed megalomaniac. Be part of the community; don’t just expect to use it when you want to say something. Host things (which may cost money), be a volunteer occasionally, and definitely make people feel heard.

(Everyone)
Be careful that your people-to-people engagements don’t suffer from technical impedance mismatch. Sales and marketing can be at booths, but should have a direct line to someone who can answer really technical questions as they arise. We engineers can smell marketing/sales from a mile away (usually because they smell showered and professional). But it’s important for us to have our questions answered and for the interaction to feel friendly. This is what’s great about having your Developer Relations people there…we can nerd out and hit it off great. I come away with next steps that you (marketing / sales) can follow up on. And make sure you have a trial I can start in on immediately. Use every conversation (and conference) as a learning opportunity.

(Product)
Build the shit out of your partner ecosystem so it’s easier for me to get up and running with integrations. Think hard before you put your new shiny innovative feature in front of a practical thing like a technical integration I and many others have been asking for.

(Development / Community / Marketing / Relations)
If there is documentation with code in it and you need API keys or something, inject them into the source code for me when I’m logged in to your site (like SauceLabs’ Appium tutorials). I will probably copy and paste, so be very careful about the code you put out there, because I will judge you for it when it doesn’t work.

(Marketing / Product)
When you do push new features, make sure that you communicate to me about things I am sure to care about. This means you’ll have to keep track of what I indicate I care about (via tracking my views on articles, white paper downloads, sales conversations, support issues, and OPT-IN newsletter topics). I’m okay with this if it really benefits me, but if I get blasted one too many times, I’ll disengage/unsubscribe entirely.

Summary: Help Me Get to the Right Place Faster…Always

None of us have enough time for all the things. If you want to become a new thing on my plate, help me see how you can take some things off of my plate first (i.e. gain time back). Be quick to the point, courteous, and invested in my success. Minimize transaction (time) cost in every engagement.

(Sales, et al: “Let’s Get Real or Let’s Not Play” is a great read on how to do this.)

As often as appropriate, ask me what’s on my horizon and how best we can get there together. Even if I’m heads-down in code, I’ll remember that you were well-intentioned and won’t write you off for good.

NEXT STEPS: share your opinions, thoughts, and suggestions in the comments section below! Or ping me on Twitter @paulsbruce or LinkedIn.

More reading:

Performance Is (Still) a Feature, Not a Test!

Since I presented the following perspective at APIStrat Chicago 2014, I’ve had many opportunities to clarify and deepen it within the context of Agile and DevOps development:

It’s more productive to view system performance as a feature than to view it as a set of tests you run occasionally.

The more teams I work with, the more I see that performance is a critical aspect of their products. But why is performance so important?

‘Fast’ Is a Subconscious User Expectation

Whether you’re building an API, an app, or whatever, its consumers (people, processes) don’t want to wait around. If your software is slow, it becomes a bottleneck to whatever real-world process it facilitates.

Your Facebook feed is a perfect example. If it is even marginally slower to scroll through it today than it was yesterday, if it is glitchy, halting, or janky in any way, your experience turns from dopamine-inducing self-gratification to epinephrine-fueled thoughts of tossing your phone into the nearest body of water. Facebook engineers know this, which is why they build data centers to test and monitor mobile performance on a per-commit basis. For them, this isn’t a luxury; it’s a hard requirement, as it is for all of us, whether we choose to address it or not. Performance is everyone’s problem.

Performance is as critical to delighting people as delivering them features they like. This is why session abandonment rates are a key metric on Cyber Monday.

‘Slow’ Compounds Quickly

Performance is a measurement of availability over time, and time always marches forward. Performance is an aggregate of many dependent systems, and even just one slow link can cause an otherwise blazingly fast process to grind to a halt long enough for people to turn around and walk the other way.

Consider a mobile app; performance is everything. The development team slaves over which list component scrolls faster and more smoothly, and spends hours getting asynchronous calls and spinners to give the user critical feedback so that they don’t think the app has crashed. Then a single misbehaving REST call to some external web API suddenly slows by 50%, and the whole user experience is untenable.

The performance of a system is only as strong as its weakest link. In technical terms, this is about risk. You at least need to know the risk introduced by each component of a system; only then can you choose how to mitigate that risk accordingly. ‘Risk’ is a huge theme in ISO 29119 and the upcoming IEEE 2675 draft I’m working on, and any seasoned architect knows why it matters.

Fitting Performance into Feature Work

Working on ‘performance’ and working on a feature shouldn’t be two separate things. Automotive designers don’t separate the two when they build car engines; performance is paramount even throughout the assembly process. Neither should it be separate in software development.

However, in practice, if you’ve never run a load test, tracked the power consumption of a subroutine, or analyzed aggregate results, it will certainly feel different from building stuff. Comfort and efficiency come with experience. A lack of experience or familiarity doesn’t remove the need for something critical to occur; it accelerates the need to ask how to get it done.

A reliable code pipeline and testing schedule make all the difference here. Many performance issues take time or dramatic conditions to expose, such as battery degradation, load balancing, and memory leaks. In these cases, it isn’t feasible to execute long-running performance tests for every code check-in.

What does this mean for code contributors? Since they are still responsible for meeting performance criteria, it means that they can’t always press the ‘done’ button today. It means we need reliable delivery pipelines to push code through that check its performance pragmatically. As pressure to deliver value incrementally mounts, developers are taking responsibility for the build and deployment process through technologies like Docker, Jenkins Pipeline, and Puppet.

It also means that we need to adopt a testing schedule that meets the desired development cadence and real-world constraints on time and infrastructure (see the pipeline sketch just after this list):

  • Run small performance checks on all new work (new screens, endpoints, etc.)
  • Run local baselines and compare before individual contributors check in code
  • Schedule long-running (anything slower than 2 mins) performance tests into a pipeline stage after build verification, in parallel
  • Schedule nightly performance regression checks on all critical risk workflows (i.e. login, checkout, submit claim, etc.)
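As a sketch, that schedule might map onto Jenkins Pipeline stages like the following; the stage names, Gradle/jMeter invocations, and nightly cron trigger are illustrative choices, not prescriptions:

```groovy
// Illustrative Jenkins Pipeline schedule: quick checks on every commit,
// long-running tests in parallel after build verification, and a nightly
// cron trigger for performance regressions on critical-risk workflows.
// (A real setup might gate the long stages with 'when' conditions.)
pipeline {
    agent any
    triggers { cron('H 2 * * *') }  // nightly regression run
    stages {
        stage('Build & Verify') {
            steps { sh './gradlew build' }
        }
        stage('Quick Perf Checks') {
            // small checks on new screens/endpoints, per check-in
            steps { sh 'jmeter -n -t perf/smoke-endpoints.jmx -l smoke.jtl' }
        }
        stage('Long-Running Perf') {
            parallel {
                stage('Soak')  { steps { sh 'jmeter -n -t perf/soak.jmx -l soak.jtl' } }
                stage('Spike') { steps { sh 'jmeter -n -t perf/spike.jmx -l spike.jtl' } }
            }
        }
    }
}
```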

How Do You Bake Performance Into Development?

While it’s perfectly fine to adopt patterns like ‘spike and stabilize’ in feature development, stabilization is a required payback of the technical debt you incur when your development spikes. To ‘stabilize’ isn’t just to make the code work; it’s to make it work well. This includes meeting performance (not just acceptance) criteria before the work is considered complete.

A great place to start making measurable performance improvements is to measure performance objectively. Every user story should contain solid performance criteria, just as it contains acceptance criteria. In recent joint research, I found that higher-performing development teams include performance criteria on 50% more of their user stories.

In other words, embedding tangible performance expectations in your user stories bakes performance into the resulting system.
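To illustrate, a story’s performance criterion can even live next to its acceptance criteria as an executable check. Here’s a minimal sketch as a Spock (Groovy) test; the endpoint and the 500ms budget are hypothetical stand-ins for whatever the story actually specifies:

```groovy
// Illustrative only: a user story's performance criterion expressed as an
// executable Spock test; endpoint and threshold are hypothetical.
import spock.lang.Specification

class LoginPerfSpec extends Specification {

    def "login endpoint responds successfully in under 500ms"() {
        given: 'the endpoint named in the user story'
        def conn = new URL('https://api.example.com/login').openConnection()

        when: 'we time a single request end to end'
        def start = System.nanoTime()
        conn.connect()
        def body = conn.inputStream.text  // read the full response
        def elapsedMs = (System.nanoTime() - start) / 1_000_000

        then: 'the acceptance AND performance criteria both hold'
        conn.responseCode == 200
        !body.empty
        elapsedMs < 500  // the story's performance criterion
    }
}
```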

There are a lot of sub-topics under the umbrella term “performance”. When we get down to brass tacks, measuring performance characteristics often boils down to three aspects: throughput, reliability, and scalability. I’m a huge fan of load testing because it helps to verify all three measurable aspects of performance.

Throughput: from a good load test, you can objectively track throughput metrics like transactions/sec, time-to-first-byte (and last byte), and distribution of resource usage (i.e. are all CPUs being used efficiently). These give you a raw and necessarily granular level of detail that can be monitored and visualized in stand-ups and deep-dives equally.

Reliability: load tests also exercise your code far more than you can independently. It takes exercise to expose if a process is unreliable; concurrency in a load test is like exercise on steroids. Load tests can act as your robot army, especially when infrastructure or configuration changes push you into unknown risk territory.

Scalability: often, scalability mechanisms like load balancing, dynamic provisioning, and network shaping throw unexpected curveballs into your user’s experience. Unless you are practicing a near-religious level of control over deployment of code, infrastructure, and configuration changes into production, you run the risk of affecting real users (i.e. your paycheck). Load tests are a great way to see what happens ahead of time.


Short, Iterative Load Testing Fits Development Cycles

I am currently working with a client to load test their APIs, simulating mobile client bursts of traffic that represent real-world scenarios. After a few rounds of testing, we’ve resolved many obvious issues, such as:

  • Overly verbose logs that write to SQL and/or disk
  • Parameter formats that cause server-side parsing errors
  • Throughput restrictions against other 3rd-party APIs (Google, Apple)
  • Static data that doesn’t exercise the system sufficiently
  • Large images stored as SQL blobs with no caching

We’ve been able to work through most of these issues quickly in test/fail/fix/re-test cycles, where we conduct short all-hands sessions with a developer, a test engineer, and myself. After a quick review of significant changes since the last session (i.e. code, test, infrastructure, configuration), we use BlazeMeter to kick off a new API load test written in jMeter and monitor the server in real time. We’ve been able to rapidly resolve a few anticipated, backlogged issues, as well as learn about new problems that are likely to arise at future usage tiers.
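For a flavor of what each cycle gates on, here’s a simplified Groovy sketch: run the jMeter plan headlessly, then compare average latency against the previous session’s baseline. The file names and the 10% tolerance are illustrative, and a real comparison would look at percentiles, not just the mean:

```groovy
// Sketch of one test/fail/fix/re-test session: run the plan headlessly,
// then gate on average latency vs. the previous baseline.
def runLoadTest(String plan, String results) {
    ['jmeter', '-n', '-t', plan, '-l', results].execute().waitFor()
}

def avgElapsedMs(String jtlFile) {
    def lines = new File(jtlFile).readLines()
    int col = lines.head().split(',').toList().indexOf('elapsed')  // JTL CSV column
    def samples = lines.tail().collect { it.split(',')[col].toLong() }
    samples.sum() / samples.size()
}

runLoadTest('api-burst.jmx', 'current.jtl')
def baseline = avgElapsedMs('baseline.jtl')
def current  = avgElapsedMs('current.jtl')
println "avg latency: ${current}ms (baseline: ${baseline}ms)"
assert current <= baseline * 1.10  // flag regressions beyond 10% for the group
```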

The key here is to ‘anticipate iterative re-testing‘. Again I say: “performance is a feature, not a test”. It WILL require re-design and re-shaping as the code changes and system behaviors are better understood. It’s not a one-time verification of how a dynamic system behaves given a particular usage pattern.

The outcome of this load testing, from a business perspective, is that the new system is perceived to be far less of a risky venture, and more the innovation investment needed to improve sales and the future of their digital strategy.

Performance really does matter to everyone. That’s why I’m available to chat with you about it any time. Ping me on Twitter and we’ll take it from there.

A Jenkins Pipeline for Mobile UI Testing with Appium and Docker

In theory, a completely Docker-ized version of an Appium mobile UI test stack sounds great. In practice, however, it’s not that simple. This article explains how to structure a mobile app pipeline using Jenkins, Docker, and Appium.

TL;DR: The Goal Is Fast Feedback on Code Changes

When we make changes, even small ones, to our codebase, we want to prove that they had no negative impact on the user experience. How do we do this? We test…but manual testing takes time and is error-prone, so we write automated unit and functional tests that run quickly and consistently. Duh.

As Uncle Bob Martin puts it, responsible developers not only write code that works, they provide proof that their code works. Automated tests FTW, right?

Not quite. There are a number of challenges with test automation that raise the bar on complexity to successfully getting tests to provide us this feedback. For example:

  • How much of the code and its branches actually gets covered by our tests?
  • How often do tests fail for reasons other than the code not working?
  • How accurate was our implementation of the test case and criteria as code?
  • Which tests do we absolutely need to run, and which can we skip?
  • How fast can and must these tests run to meet our development cadence?

Jenkins Pipeline to the Rescue…Not So Fast!

Once we identify what kind of feedback we need and match that to our development cadence, it’s time to start writing tests, yes? Well, that’s only part of the process. We still need a reliable way to build/test/package our apps. The more automated this can be, the faster we can get the feedback. A pipeline view of the process begins with code changes, includes building, testing, and packaging the app so we always have a ‘green’ version of our app.

Many teams choose a code-over-configuration approach. The app is code, the tests are code, server setup (via Puppet/Chef and Docker) is code, and, not surprisingly, our delivery process is now code too. Everything is code, which lets us extend SCM virtues (versioning, auditing, safe merging, rollback, etc.) to our entire software lifecycle.

Below is an example of ‘process-as-code’ in a Jenkins Pipeline script. When a build project is triggered, say when someone pushes code to the repo, Jenkins will execute this script, usually on a build agent. The code gets pulled, the project dependencies get refreshed, a debug version of the app and its tests are built, then the unit and UI tests run.
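Here’s a minimal sketch of the kind of Jenkinsfile described; the agent label is an illustrative assumption, and the Gradle tasks are the stock Android ones:

```groovy
// A minimal sketch of the Jenkinsfile described above; the agent label is
// illustrative, the Gradle tasks are the stock Android Gradle plugin ones.
pipeline {
    agent { label 'android-build' }
    stages {
        stage('Checkout')     { steps { checkout scm } }
        stage('Dependencies') { steps { sh './gradlew --refresh-dependencies' } }
        stage('Build Debug')  { steps { sh './gradlew assembleDebug assembleDebugAndroidTest' } }
        stage('Unit Tests')   { steps { sh './gradlew testDebugUnitTest' } }
        stage('Instrumented Tests') {
            steps {
                // Espresso UI suite on an emulator; managing the emulator
                // lifecycle is where the real-world hacks accumulate
                sh './gradlew connectedDebugAndroidTest'
            }
        }
    }
}
```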

Notice that last step? The ‘Instrumented Tests’ stage is where we run our UI tests, in this case our Espresso test suite using an Android emulator. The sharp spike in code complexity, notwithstanding my own capabilities, reflects reality. I’ve seen a lot of real-world build/test scripts that also reflect the number of hacks and tweaks that gather around the technologically significant boundary of real sessions and device hardware.

A great walkthrough on how to set up a Jenkinsfile to do some of the nasty business of managing emulator lifecycles can be found on Philosophical Hacker…you know, for light reading on the weekend.

Building a Homegrown UI Test Stack: Virtual Insanity

We have lots of great technologies at our disposal. In theory, we could use Docker, the Android SDK, Espresso, and Appium to build reusable, dynamic nodes that can build, test, and package our app dynamically.

Unfortunately, in practice, the user interface portion of our app requires hardware resources that simply can’t be executed in a timely manner in this stack. Interactive user sessions are a lot of overhead, even virtualized, and virtualization is never perfect.

Docker runs under either HyperKit (a lightweight virtualization layer on Mac) or within a VirtualBox host, but neither of these solutions supports nested virtualization, and neither can pass raw access to the host machine’s VT-x instruction set through to containers.

What’s left for containers is a virtualized CPU that doesn’t support the basic specs that the Android emulator needs to use host GPU, requiring us to run ‘qemu’ and ARM images instead of native x86/64 AVD-based images. This makes timely spin-up and execution of Appium tests so slow that it renders the solution infeasible.

Alternative #1: Containerized Appium w/ Connection to ADB Device Host

Since we can’t feasibly keep emulation in the same container as the Jenkins build node, we need to split out the emulators to host-level hardware assisted virtualization. This approach also has the added benefit of reducing the dependencies and compound issues that can occur in a single container running the whole stack, making process issues easier to pinpoint if/when they arise.

So what we’ve done is decouple our “test lab” components from our Jenkins build node into a hardware+software stack that can be “easily” replicated.

Unfortunately, we can no longer keep our Appium server in a Docker container (which would make the process reliable, consistent across the team, and minimize cowboy configuration issues). Even after you:

  • Run the Appium container in privileged mode
  • Mount volumes to pass build artifacts around
  • Establish an SSH tunnel from container to host to use host ADB devices
  • Establish a reverse SSH tunnel from host to container to connect to Appium
  • Manage and exchange keys for SSH and Appium credentials

…you still end up dealing with flaky container-to-host connectivity and bizarre Appium errors that don’t occur if you simply run Appium server on bare metal. Reliable infrastructure is a hard requirement, and the more complexity we add to the stack, the more (often) things go sideways. Sad but true.

Alternative #2: Cloud-based Lab as a Service

Another alternative is to simply use a cloud-based testing service. This typically involves adding credentials and API keys to your scripts, and paying for reserved devices up front, which can get costly. What you get is hassle-free, somewhat constrained real devices that can be easily scaled as your development process evolves. Just keep in mind, aside from credentials, you want to carefully manage how much of your test code integrates custom commands and service calls that can’t easily be ported over to another provider later.

Alternative #3: Keep UI Testing on a Development Workstation

Finally, we could technically run all our tests on our development machines, or get someone else to run them, right? But this wouldn’t really translate to a CI environment and doesn’t take full advantage of the speed benefits of automation, neither of which helps us parallelize coding and testing activities. Testing on local workstations is important before checking in new tests, to prove that they work reliably, but it doesn’t make sense time-wise for running full test suites in continuous delivery/deployment.

Alternative #4: A Micro-lab for Every Developer

Now that we have a repeatable model for running Appium tests, we can scale that out to our team. Since running emulators on commodity hardware and open source software is relatively cheap, we can afford a “micro-lab” for each developer making code changes on our mobile app: essentially the Alternative #1 stack (Appium on bare metal with hardware-assisted emulators and tethered devices), replicated at each developer’s desk.

As someone who has worked in the testing and “lab as a service” industries, there are definitely situations where some teams and organizations outgrow the “local lab” approach. Your IT/ops team might just not want to deal with per-developer hardware sprawl. You may not want to dedicate team members to be the maintainers of container/process configuration. And, while Appium is a fantastic technology, like any OSS project it often falls behind in supporting the latest devices and hardware-specific capabilities. Fingerprint support is a good example of this.

The Real Solution: Right { People, Process, Technology }

My opinion is that you should hire smart people (not one person) with a bit of grit and courage who “own” the process. When life (I mean Apple and Google) throws you curveballs, you need people who can quickly recover. If you’re paying for a service to help with some part of your process as a purely economic trade-off, do the math. If it works out, great! But that, too, is an example of “owning” your process.

Final thought: as more and more of your process becomes code, remember that code is a liability, not an asset. The less of it, and the leaner your approach, generally the better.

More reading:

Automating the Quality of Your Digital Front Door

Mobile is the front door to your business for most, if not all, of your users. But how often do you use your own front door? A few times a day? How often do your users use your app? How often would you like them to? It’s really a high-traffic front door between people and you.

This is how you welcome people into what you’re doing. If it’s broken, people don’t feel welcome.

[7/27/2017: For my presentation at Mobile Tea Boston, my slides and code samples are below]


Slides with notes: http://bit.ly/2tgGiGr
Git example: https://github.com/paulsbruce/FingerprintDemo

The Dangers of Changing Your Digital Front Door

In his book “On Intelligence”, Jeff Hawkins describes how quickly our human brains pick up on minute changes with the analogy of someone replacing the handle on your front door with a knob while you’re out. When you get back, things seem very weird: you feel disoriented, alienated. Not emotions we want to invoke in our users.

Now consider what it’s like for your users when you change things on that high-traffic door. Change is good, but only good changes. And when changes introduce problems, forget sympathy, forget forgiveness: people revolt.

What Could Possibly Go Wrong?

A lot. Even for teams that are great at what they do, delivering a mobile app is fraught with challenges, such as:

  • Lack of strategy around branching, merging, and pushing to production
  • Lack of understanding about dependencies, impacts of changes
  • Lack of automated testing, integration woes, no performance/scalability baselines, security holes
  • Lack of communication between teams (Front-end, API, business)
  • Lack of planning at the business level (marketing blasts, promotions, advertising)

Users don’t care about our excuses. A survey by Perfecto found that more than 44% of defects in mobile apps are found by users. User frustrations aren’t just about what you designed; they’re about how the app behaves in the real world too. Apps that are too slow will be treated as broken apps and uninstalled just the same.

What do we do about it?

We test, but testing is a practice unto itself. There are many test types, and methodologies like TDD, ATDD, and BDD that drive us to test. Not everyone is cut out to be a great tester, especially when developers are driven to write only things that work, not to test for when they shouldn’t (i.e. a lack of negative testing). A small example of the kind of negative test that’s easy to skip is sketched below.
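Here’s a minimal JUnit 4 sketch of what negative testing means in practice; PriceParser is a hypothetical unit under test, not from any real project. The happy-path test alone would pass even if the parser happily swallowed garbage:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PriceParserTest {

    // Hypothetical unit under test: parses "$19.99" into cents
    static final class PriceParser {
        static int parse(String s) {
            if (s == null || !s.matches("\\$\\d+\\.\\d{2}")) {
                throw new IllegalArgumentException("not a price: " + s);
            }
            return Integer.parseInt(s.substring(1).replace(".", ""));
        }
    }

    @Test
    public void parsesValidPrice() {
        assertEquals(1999, PriceParser.parse("$19.99")); // the happy path
    }

    @Test(expected = IllegalArgumentException.class)
    public void rejectsGarbageInput() {
        PriceParser.parse("nineteen dollars"); // must throw, not return junk
    }
}
```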

Alister Scott – Test ‘Ice Cream Cone’

In many cases, automation gaps and issues make it easier for development teams to fall back to manual testing. This is what Alister Scott (of WatirMelon) calls the ‘ice cream cone’ anti-pattern, an inversion of the ideal test pyramid, and Mike Cohn has good thoughts on this paradigm too.

To avoid this downward spiral, we need to prioritize automation AND which tests we choose to automate. Testing along architecturally significant boundaries, as Kevlin Henney puts it, is good; but in a world full of both software and hardware, we need to broaden that idea to ‘technologically significant boundaries’. The camera, GPS, biometric, and other peripheral interfaces on your phone are significant boundaries…fault lines of the user experience.

Many development teams have learned the hard way that not including real devices in automated testing leaves these UX fault lines at risk of escaping defects. People in the real world use real devices on real networks under real usage conditions, and our testing strategy should reflect this reality too.

The whole point of all this testing is to maintain confidence in our release readiness. We want to be in an ‘always green’ state, and there’s no way to do this without automated, continuous testing.

Your Code Delivery Pipeline to the Rescue!

Confidence comes in two flavors: quality and agility. Specifically, does the code we write do what we intend, and can we iterate and measure quickly?

Each team comes with its own definition of done, its own acceptable levels of coverage, and its own level of confidence about what it takes to ship, but answering both of these questions definitively requires adequate testing and a reliable pipeline for our code.

Therein lies the dynamic tension between agility (nimbleness) and the messy world of reality. What’s the point of pushing out something that doesn’t match the needs of reality? So we try to pull reality in little bits at a time, but reality can be slow. Executing UI tests takes time. So we need to code and test in parallel, automate as much as possible, and be aware of the impact of changes on release confidence.

The way we manage this tension is to push smaller batches more frequently through the pipeline and bring the pain forward: in other words, continuous delivery and deployment. Far from a monolithic release process, we shrink the whole cycle down to the individual contributor level. Always green at the developer level means merging only code that has been tested automatically and thoroughly.

Even in a Perfect World, Your Front Door Still Jams

So automation is crucial to this whole thing working. But what happens when we can’t automate something? This is often why the “ice cream cone” exists.

Let’s walk through it together. Google I/O or WWDC drops new hardware or platform capabilities on us. There’s a rush to integrate, but a delay in tooling and support gums up development all the way through production troubleshooting. We mock what we have to, but fall back to manual testing.

This not only takes our time, it robs us of velocity and any chance to reach that “always green” aspiration.

The worst part is that we don’t even have to introduce new functionality to fall prey to this problem. Appium was stuck behind a lack of iOS 10 support for months, which meant most companies had no automated way to validate on a platform that was already out.

And if anything, history teaches us that technology advances whether the last thing is well-enough baked or not. We are still dealing with camera (i.e. driver stack) flakiness! Fingerprint isn’t as unreliable, but it’s still part of the UI/UX. And many of us now face an IoT landscape with very few standards that developers follow.

So when faced with architectural boundaries that have unpolished surfaces, what do we do? Mocks…good enough for early integration, but who will stand up and say testing against mocks is good enough to go to production?

IoT Testing Provides Clues to How We Can Proceed

In many cases, introducing IoT devices into the user experience means adding architecturally significant boundaries. Standards like BLE, MQTT, CoAP and HTTP provide flexibility to virtualize much of the interactions across these boundaries.

In the case of Continuous Glucose Monitoring (CGM) vendors, hardware and mobile app development run on very different cycles. But to integrate often, they virtualize BLE signals to real devices in the cloud as part of their mobile app test scripts. They also embed “IoT ninjas” in the experience team: hardware/firmware engineers who are in charge of prototyping changes on the device side, making sure that development and testing on the mobile app side stays as enabled as possible.

Adding IoT to the mix will change your pyramid structure, adding pressure to rely on standards/interfaces as well as manual testing time for E2E scenarios.

[For more on IoT Testing, see my deck from Mobile/IoT Dev+Test 2017 here]

Automated Testing Requires Standard Interfaces

There are plenty of smart people looking to solve the busy-work problem with writing tests. Facebook Infer, Appdiff, Functionize, and mabl are just a few of the new technologies that integrate machine learning and AI to reduce time spent on testing busy-work.

But any and all programmatic approaches, even AI, require standard interfaces; in our case, universally accepted development AND testing frameworks and technologies.

Tool ecosystems don’t get built without foundational standards like HTML/CSS/JS, Android, Java, and Swift. And when vendors innovate on hardware or platform features, there will always be some gaps, usually in automation around the new stuff.

Example Automation Gap: Fingerprint Security

Unfortunately for those of us who see the advantages of integrating with innovative platform capabilities like biometric fingerprint authentication, automated testing support is scarce.

What this means is that we either don’t test certain critical workflows in our app, or we manually test them. What a bummer to velocity.

The solution is to have people who know how to implement multiple test frameworks and tools in a way that matches the velocity requirements of development.

For more information on this, see my deep-dive on how to use Appium in Android development to simulate fingerprint activities in automated tests. It’s entirely possible, but it requires experience and planning around how to integrate a mobile lab into your continuous integration pipeline.
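To give a flavor of it, here’s a minimal sketch assuming Appium’s Java client and an Android emulator with a fingerprint already enrolled (this only works against emulators; real hardware can’t be spoofed this way). The app package and activity names are hypothetical:

```java
import java.net.URL;
import io.appium.java_client.android.AndroidDriver;
import org.openqa.selenium.remote.DesiredCapabilities;

public class FingerprintFlowTest {
    public static void main(String[] args) throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "Android");
        caps.setCapability("deviceName", "Android Emulator");
        caps.setCapability("appPackage", "com.example.fingerprintdemo"); // hypothetical app
        caps.setCapability("appActivity", ".MainActivity");

        AndroidDriver driver = new AndroidDriver(new URL("http://127.0.0.1:4723/wd/hub"), caps);
        try {
            // ... navigate to the screen that prompts for fingerprint auth ...

            // Simulate a scan of enrolled finger #1 (enroll it once beforehand,
            // e.g. with `adb -e emu finger touch 1` during device setup)
            driver.fingerPrint(1);

            // ... assert that the authenticated view is now displayed ...
        } finally {
            driver.quit();
        }
    }
}
```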


Tailoring Fast Feedback to Resources (and vice versa)

As you incrementally introduce reality into every build, you’ll run into two problems: execution speed and device pool limits.

To solve execution speed, most development teams parallelize their testing against multiple devices at once (a sketch of this follows below) and split their testing strategy across different schedules. This is just an example of a schedule against various testing types.
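As a rough illustration (the device identifiers and local Appium endpoint are hypothetical), parallelizing against a device pool can be as simple as one Appium session per thread:

```java
import java.net.URL;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import io.appium.java_client.android.AndroidDriver;
import org.openqa.selenium.remote.DesiredCapabilities;

public class ParallelDeviceRun {
    public static void main(String[] args) throws Exception {
        // Hypothetical pool; in practice this comes from your lab inventory
        List<String> udids = Arrays.asList("emulator-5554", "emulator-5556", "emulator-5558");

        ExecutorService pool = Executors.newFixedThreadPool(udids.size());
        for (String udid : udids) {
            pool.submit(() -> {
                try {
                    DesiredCapabilities caps = new DesiredCapabilities();
                    caps.setCapability("platformName", "Android");
                    caps.setCapability("deviceName", udid);
                    caps.setCapability("udid", udid); // pin this session to one device
                    AndroidDriver driver = new AndroidDriver(
                            new URL("http://127.0.0.1:4723/wd/hub"), caps);
                    try {
                        // ... run the smoke suite against this device ...
                    } finally {
                        driver.quit();
                    }
                } catch (Exception e) {
                    e.printStackTrace(); // surface per-device failures without killing siblings
                }
            });
        }
        pool.shutdown();
    }
}
```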

For more on this, I published a series of whitepapers on how to do this.

TL;DR recap

Automating the quality of our web and mobile apps keeps us accurate, safe, and confident, but it isn’t easy. Fortunately we have many tools, and a lot of thought has already been put into how to do this. Notwithstanding the ignorance of some individuals, automation continues to change the job landscape over and over again.

Testing always takes tailoring to the needs of the development process to provide fast feedback. The same is true in reverse: developers need to understand where support gaps exist in test frameworks and tooling; otherwise, they risk running the “ship” aground.

This is why my mantra remains: it is imperative to velocity to have the right people in the planning room when designing new features and integrating capabilities across significant technological boundaries.

Similarly, in my research on developer efficiency, we see a correlation between how completely non-functional criteria are specified on features and eventual test coverage. Greater completeness in upfront planning saves time and effort; it’s just that simple.

Just as Conway’s “law” suggests, your team’s structure, communication patterns, functions, and dysfunctions all show up in the final product. Have the right people in the room when planning new features, running retros, and determining your own definition of done. Otherwise you end up with more gaps than simply in automation.

Meta / cliff notes:

  • “Everyone owns quality” means that the whole team needs to be involved in testing strategy
    • To what degree are various levels of testing included in Definition of Done?
    • Which test sets (i.e. feedback loops) provide the most value?
    • How are various tests triggered, considering their execution speed?
    • Who’s responsible for creating which types of tests?
    • How are team members enabled to interpret and use test result data?
    • When defects do escape certain stages, how is RCA used to close the gap?
    • Who manages/fixes the test execution framework and infrastructure?
    • Do the benefits of the current approach to testing outweigh the cost?
  • Multiple testing frameworks / tools / platforms is 200 OK
    • We already use separate frameworks for separate test types
      • jUnit/TestNG (Java) for unit (and some integration) testing
      • Chakram/Citrus/Postman/RestAssured for API testing
      • Selenium, Appium, Espresso, XCTest for UI testing
      • jMeter, Dredd, Gatling, Siege for performance testing
    • Tool sprawl can be a challenge, but proper coverage requires plurality
    • Don’t overtax one framework or tool to do a job it can’t, just find a better fit
  • Incremental doses of reality across architecturally significant boundaries
    • We need reality (real devices, browsers, environments) to spot fragility in our code and our architecture
    • Issues tend to clump around architecturally significant boundaries, like API calls, hardware interfaces, and integrations to monolithic components
    • We stub/mock/virtualize to speed development; these are signs of “significant” boundaries, but mocks only tell us what happens in isolation
    • A reliable code pipeline can do the automated testing for you, but you still need to tell it what and when to test; have a test execution strategy that considers:
      • testing types (unit, component, API, integration, functional, performance, installation, security, acceptance/E2E, …)
      • execution speed (<2m, <20m, <2h, etc) vs. demand for fast feedback
      • portions of code that are known-fragile
      • various critical-paths: login, checkout, administrative tasks, etc.
    • Annotations denote tests that relate across frameworks and tools (see the sketch after this list)
      • @Signup, @Login, @SearchForProduct, @V2Deploy
      • Tag project-based work (like bug fixes) like: JIRA-4522
  • Have the right people in the room when planning features
    • Future blockers like test framework support for new hardware capabilities will limit velocity, so have test engineers in the planning phases
    • Close the gap between what was designed vs. what is feasible to implement by having designers and developers prototype together
    • Including infrastructure/operations engineers in planning reduces later scalability issues; just like testing, operational readiness can be a blocker to release
    • Someone, if not all the people above, should represent the user’s voice
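
To illustrate the annotation idea from the list above, here’s a minimal TestNG sketch using groups as cross-cutting tags; the class, test names, and the JIRA-4522 tag usage are hypothetical:

```java
import org.testng.annotations.Test;

public class CriticalPathTests {

    @Test(groups = { "login", "critical-path" })
    public void userCanLogIn() {
        // ... drive the login flow via Appium/Selenium ...
    }

    @Test(groups = { "checkout", "critical-path", "JIRA-4522" })
    public void discountAppliesAtCheckout() {
        // ... verify the bug fix tracked as JIRA-4522 ...
    }
}
```

A tagged slice can then run on its own schedule, e.g. `mvn test -Dgroups=critical-path` with the Surefire plugin, which is one way to match test sets to the execution-speed tiers above.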

More reading:

Don’t Panic! (or how to prepare for IoT with a mature testing strategy)

Thanks everyone for coming to my talk today! Slides and more links are below.

As all my presentations are, this one is meant to extend a dialogue with you, so please tweet me any thoughts and questions you have to become part of the conversation. Looking forward to hearing from you!

More links are in my slides, and the presenter notes contain the narrative in case you missed part of the presentation.