
Diving Deep to Improve Process and Results

I go over how I dove deep into a problem and added a new section to our story template.

I share how I analyzed each step to understand the bigger picture and drew conclusions along the way.

It's a great example of the thought process of diving deep at each step to arrive at a result.

This article will take you less than five minutes to read.

Bronze Statue of a Veiled and Masked Dancer (source and more information)

Introduction

When I wrote about including How to Test in Story, I only described the result, never the process of how I arrived at that result.

Knowing the process behind a result is just as important as the result itself.

Background

The system we work on has two separate teams, each working on a different part of the system.

My team is responsible for the backend portion, bridging user requests with the data pipeline to complete them.

The other team is responsible for user facing elements, especially interactive features of the system.

Work is generally delineated along these lines: services have their own responsibilities, and teams own services.

User Issue

A user was having a hard time performing an action on the system and reached out for help. Users rarely ask for help; when they do, it's usually a big deal.

In this particular instance, it was a really big deal because the action was operational and had financial impact.

The team that built the feature was deep in another project, and my team was responsible for addressing any issues with the system, even when the problem code was not our own.

Originally, the team built the feature for another use case; that use case was dead on arrival, and operations re-purposed the feature for their own needs.

Walking in User’s Shoes

I wanted a deeper understanding of the problem. As a technical manager, I have an arsenal of tools available to diagnose an issue, from performing the same steps as the user to stepping through specific lines of code.

In this case, the user was configuring a value in the system, which serves as input to another part of the system.

When I performed the same steps, I was just as baffled as the user. These questions came to me:

  • “How does this work?!”
  • “How do I know it worked??”
  • “What would be the result of this when I take the dependent step?”

I wanted to find out more and dove down another level: to the original story, to review the feature specification and understand the problem from another perspective.

Original Story Specification

I found the original story that created the feature. The story description:

As a user, I want to specify a value to the system.

That’s it.

Well, almost. The story was bare-bones; compared to the stories we write today, my team would not accept it.

One thing that I did not get from the story: how to validate the feature when the story is complete.

Sure, the story says "I want to specify", but one can accomplish that in numerous ways:

  • input box on page
  • configuration file
  • email to system
  • database entry

This was the start of the inspiration for specifying how to test for feature or story completion.

Consulting with QA Team

Previously, when an under-specified story was marked done, QA would catch the problem: they would take the user's view and push back that the result was not adequate.

Somehow this story was done and in production. The other team shares processes and resources with mine. One common resource is the QA team, which validates that the system completes the requests described in stories.

When I followed up with the QA team, I asked:

How did you test that this works? The original story never specifies it.

I have a love-hate relationship with QA, except there's no love. :-) QA has kicked back stories to my team that were far better specified than this one, citing differing requirements and results.

This time, QA said the result was fine. Even though this is a user-facing feature, the way QA tested that the feature worked was:

With API calls to validate configuration

WHAT?! How can QA use a backend validation method for a front-end feature??

This inconsistency drove me nuts.
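
To make the mismatch concrete, here is a minimal sketch of what an API-level check like QA's might look like. The endpoint, field name, and URL are hypothetical, not our actual system:

    # Hypothetical sketch: validating a configuration value through the
    # backend API instead of the user-facing page.
    import requests

    BASE = "https://admin.example.com"

    # Set the configuration value, as the admin page would on save.
    requests.put(f"{BASE}/api/config", json={"value": "42"}, timeout=10)

    # Read it back through the API to confirm it persisted.
    resp = requests.get(f"{BASE}/api/config", timeout=10)
    assert resp.json()["value"] == "42"

A check like this passes even if the admin page gives the user no feedback at all, which is exactly the gap the user ran into.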

Diving Deeper

The feature is in the admin section of our system. In general, the team's attitude was: only a limited number of users can access this part of the system, so for anything that is not intuitive, we can train those users instead of spending extra effort to make it intuitive.

Engineering gives QA a hard time whenever they raise inconsistencies or user difficulties for anything in the admin section.

There's a fine line in the trade-off I have to make: spend time on user-facing features, or limp along with internal ones.

In this case, it's a lose-lose: the user in the admin section has no confidence in the action, and training them to use the same tool QA uses to validate results would be out of scope.

Addressing Issue

Given the details of the situation, I spoke with everyone involved in the original process to find a way to address the original user request. We got input from everyone on how to solve it:

  • QA - as they are a resource for our team and use the system extensively, I asked how they would like to see the result validated.
  • Design - given the situation and the concern raised by the user, what options do we have to solve it, ranging from best practice to quick hack?
  • Engineering - what’s available on the backend?

We agreed on a solution, and the end user has a workaround until we deploy it.

How Did We Get Here??

The biggest question I have to ask:

How did we get into this situation with such processes in place??

This problem resulted from a "pass-the-buck" mind-set, where each team assumed a team down the line would be able to address or overcome any inadequacies they left behind:

  • Product management: by under-specifying a story, engineering will make up for it.
  • Engineering: if this is in the admin section, we can train the user.
  • QA: if only admin users see this feature, we don't have to apply the same "true end user" mind-set we use on the rest of the site.
  • Admin End User: The engineering team can make up for anything by updating the system on the backend.

Each team's assumption that another team would cover for it would normally be right. In this situation, those assumptions resulted in a bad system.

How to Improve?

Given this understanding of the system, I asked: how can I improve this? How can we avoid this situation again?

Going back to the original story: even with the details I found, not knowing how to test the feature from a stakeholder's perspective meant poking and prodding around at a lower level.

The stakeholder's tool for interacting with the system dictates what level to "test" at.

Technical stakeholders would use a range of technical tools; regular users would use the interface delivered to them.

This helped me realize that describing how each stakeholder would test a feature specifies the story for multiple users with different tools; in the end, each stakeholder only cares about validating the feature with the tool they have available.
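
For example, a "How to Test" section for the original story might have read something like this (the details are assumed, since the actual story never specified them):

As a user, I want to specify a value to the system.

How to Test:

  • Admin user: enter the value on the admin page, save, and confirm the page displays the saved value.
  • Operations: run the dependent step and confirm it picks up the new value.
  • QA: read the value back through the validation API to confirm it persisted.

Each stakeholder validates with the tool they actually use, which is the whole point of the section.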

Conclusion

Although this was a "small" issue, it taught me a lot, especially as I dove deep into each team's role in the end result.

This problem opened my eyes to having a "How to Test" section in feature stories. As a manager, when told "feature X is funny", the best way to validate whether feature X is funny or not is to test it out myself.

I am glad I dove deeper into the original request from the user. I took the steps they would take, reviewed the original story, consulted with QA on their thinking, and found a way to improve our process while solving an operational problem with financial impact.

Diving deep pays off!