Follow-up to: The Agile Test Automation Pyramid

Since I published the pyramid idea on the blog, I’ve been getting some nice internal comments (from Velocity Partners colleagues) about it. I thought I’d share some of those with you here because they bring some real-world clarification to the idea.

The following two questions or comments are from Matias Bauer. Here is his personal introduction:

I am a 27-year-old software engineer who has been gathering experience by working mostly as a developer but, for the last year and a half, I have been evolving into an SDET.  I like to use the word “evolving” because I think that from this past experience I was able to work both as a manual and automation QA, which gave me a broad vision and an overall experience that I will try to share with you in these comments.

He has a lot of in-the-trenches experience and is sort of calling me out on some of these ideas. I like that.

First Comment

To Pyramid or not to Pyramid

Although the Agile Test Automation Pyramid is a good approach, it may not be achievable as-is in a real-world scenario. We can keep the three layers as they are right now, but the percentage values could, and most likely WILL, differ between business cases.

A simple way to determine a company’s “business why” is to think in terms of its MVP (Minimum Viable Product). This sheds a clear light on which of the three layers of the pyramid will require extra effort and a bit more attention. Since the core of my current client’s business is e-commerce, they rely mainly on testing of the UI. Over time, as the project expanded, we started to automate scripts for the two bottom layers.

But let’s say, for a minute, that you are presented with a case to create an automation framework or solution for a Home Banking application. Thinking in terms of stability, security, legal compliance, certificates, and timed-out sessions, and since we would be handling clients’ money, its MVP would clearly suggest that the bottom and mid layers are the most critical ones, needing more testing than, perhaps, the UI.

My Reaction or Reply

Please remember that the three layers are simply a strategy recommendation. It’s truly up to the team to figure out how best to invest across the layers. But it’s important to remember that there is a focus for each layer.

For example, for every 10 automated tests that are developed by the team, no matter the application or MVP, I want the following rough ratio to be applied to the types of automation being developed:

  • 5-7 Unit tests
  • 2-3 Middle-tier tests
  • 1-2 UI-centric tests

Point being, the investment should be heavily skewed “underneath” the UI.
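To make the layers concrete, here is a minimal, hypothetical sketch of what tests at the two lower layers might look like. The e-commerce example and the Cart/CartService names are illustrative inventions, not from the post; plain assert-style tests stand in for whatever framework a team actually uses.

```python
# Hypothetical sketch of pyramid layers for a cart feature.
# Cart and CartService are illustrative, not a real library.

class Cart:
    """Domain object: the target of bottom-layer unit tests."""
    def __init__(self):
        self.items = []

    def add(self, price, qty=1):
        self.items.append((price, qty))

    def total(self):
        return sum(price * qty for price, qty in self.items)


class CartService:
    """Service facade: the target of middle-tier (API-level) tests."""
    def __init__(self):
        self.cart = Cart()

    def add_item(self, price, qty=1):
        if price < 0 or qty < 1:
            raise ValueError("invalid line item")
        self.cart.add(price, qty)
        return self.cart.total()


# --- Unit tests (5-7 of every 10): fast, isolated, hit the domain object
def test_empty_cart_totals_zero():
    assert Cart().total() == 0

def test_total_multiplies_quantity():
    cart = Cart()
    cart.add(price=4, qty=3)
    assert cart.total() == 12


# --- Middle-tier tests (2-3 of every 10): exercise the service API,
#     including validation behavior the UI never sees directly
def test_service_rejects_negative_price():
    svc = CartService()
    try:
        svc.add_item(price=-1)
        assert False, "expected ValueError"
    except ValueError:
        pass

# A UI-centric test (1-2 of every 10) would drive a browser through the
# same flow; it is omitted here because it needs a running application.
```

The point of the sketch is the proportion, not the code: most of the coverage lives below the service layer, where tests are cheapest to write and fastest to run.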

And while there may be business (MVP) considerations for where to invest, I look to the team to be the primary arbiter of the automation strategy. That is, the team needs to look at each User Story as it enters a Sprint for execution and decide:

  • The right mix of manual, automated, non-functional, and other forms of tests for that story;
  • More global testing requirements across the mix of stories that might fit into the Sprint, and the time to allocate for them; and
  • For the automation parts, the 3 Pyramid ratio mix that works best. BUT please don’t fall into the UI-centric automation trap.

Matias brings up an excellent point. It’s the Product Organization’s or individual Product Owner’s responsibility to communicate the “Business Why…the MVP” to the team. Then it’s the team’s responsibility to interpret that via their Definition of Done and their experience to automate each User Story appropriately.

I really like his bringing the notion of MVP into our thinking here. But I’m also emphasizing the team as the final interpreter. Why? Well, because it’s their job.

Second Comment

Stones along the path

Another requirement we were asked to meet was to automate every single story. And it had to be done within the sprint. This means the scripts have to be ready at the same time as the new features are being rolled out, which can be one of the biggest bottlenecks while testing. Finding a way to accomplish that kind of request is not easy. The whole team needs to work and communicate constantly in order to keep everyone in sync.

Agile methodologies suggest standup meetings and some other forms of communication, but it is up to the team to take advantage of these facilities. It means planning and estimating for how much time the developers will take to implement the new features and how much time the QA team members need to design, create, run, and certify the tests and scripts that will go along with those features. Sometimes, the trade-off of having automation scripts within the sprint is that some features expected in it have to be left out and get pushed to a future sprint.

As a result, it becomes difficult for the product owner to leave the decision of what and how to test to the team. This is because the focus and, in most cases, pressure of the stakeholders centers on delivering the features of the application; they end up assuming that the quality of the product comes along in the same package, and QA starts to lose its real value. This is often quite hard for the stakeholders to accept due to the “feature-itis” that was mentioned in one of your previous articles.

My Reaction or Reply

Quite often I see teams deferring testing. That’s not just writing the automated tests, but also large areas of manual or other forms of testing. Sometimes they look to me as an agile coach to “give them permission” to defer the work. And sometimes, I make the mistake of giving them that permission. I feel their pain in trying to balance real-world pressure against getting each story completely done, so I show weakness 😉

But in every case like this, I’ve made a mistake. I’ve given them permission to deviate from one of the prime directives of agile work delivery. So what is the agile premise for delivering work? And I might add, it’s fairly clear.

  1. A team establishes a Definition of Done (DoD) or completeness for the work they deliver on an iterative basis.
  2. When they plan their Releases and Sprints, everyone keeps this DoD in mind when estimating and planning. It becomes a strong part of the culture and the team’s commitment to doing solid work.
  3. When the team works on a story in a Sprint, they are finished when the story meets their DoD. Quite often, the DoD includes the Acceptance Tests or Acceptance Criteria associated with the story, amongst many other criteria. Often, the Product Owner will “sign-off” on the Story as a sign of it being “done”.
  4. There is NO partial credit in Scrum or the other agile methods. Work is either not started, in-progress, or done. Point being, 99.67% done is not done.

So how do the above “rules” apply to Matias’ comment? Well, he spoke in terms of silos and hand-offs. That is, developers plan/execute coding and testers plan/execute tests and automation. And if the developers get done “late”, then the testers might not be able to complete the work (automation) within the Sprint, even though that was the requirement.

So the only option is to functionally SKEW the work to the next Sprint. While that is an option, it’s not a very “agile” option. We want to get stories completely done within the sprint where we start the work. That would include meeting all aspects of our DoD. If that DoD includes 3 Pyramid-style automation or fixing all bugs we introduced in our implementation, then that’s what it means.

I’d rather the team take on less work and get it completely done than allow work to skew outside of the sprint. Why, you might ask?

Mostly because it delays feedback to the team. But it also impacts Release Planning, velocity, and the business’s ability to count on team forecasts. All in all, it breaks down reliable or deterministic agile execution and delivery.

Wrapping up

I really appreciate Matias reading the post and probing a bit more. I think his questions brought out some important points “underneath” the 3 Pyramids approach to agile testing automation strategy. One key that I want to emphasize is that automation isn’t an option. Think of it more as a consequence or a part of developing solid software.

In my experience, writing unit-level tests is not optional. Now, how many you write is sort of up to the team. And there are other considerations, such as available tooling and the type of application you’re building (think embedded systems, for example).

We need to extend that view to all levels of the pyramid and stop considering test automation as “optional” in our development efforts. It’s a technical decision for the team to make, one coupled tightly to their Definition of Done and to building high-quality, robust products.

Stay agile my friends,

Bob.

Bob Galen


Bob Galen is an Agile Methodologist, Practitioner & Coach based in Cary, NC. In this role he helps guide companies and teams in their pragmatic adoption and organizational shift towards Scrum and other agile methodologies and practices. Contact: [email protected]