Friday 6 February 2015

Scrum - Part VI: Task lifecycle

After a not-very-brief foray into low-level optimisation, I'm going back to the Agile series, this time to talk about task/story lifecycle within a sprint.

Without becoming too formal, I'd define lifecycle as a way of figuring out where a given task is in the currently running sprint.
Is it being worked on? How likely is it to be completed soon? Is it at a point where it can be shown to others?

Usually Scrum (and not only Scrum) managers define this using workflows. Here's a very typical one I "borrowed" from the Atlassian site.

[Workflow diagram: the basic states of a typical task workflow]

It defines the most basic states found in any workflow; all of them are self-explanatory.

We could end the post here, if not for one little fact: most people need to go beyond those 4-5 basic states.

For example, for a development task, we usually need a specific state to signify that we're done with writing code, and now we're in code review.
In some organisations and teams, where there is a tight SCM<->Scrum integration, this state might even be required for submitting code.
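
Just to illustrate what such an integration could look like - this is a minimal sketch, assuming a hypothetical JIRA instance, made-up credentials, and a Git commit-msg hook (a Perforce trigger would follow the same idea). It refuses the submit unless the referenced task is in the right state:

    import re
    import sys

    import requests  # third-party HTTP client: pip install requests

    JIRA_URL = "https://jira.example.com"   # hypothetical JIRA instance
    REQUIRED_STATUS = "In Review"           # the state that unlocks submits

    def submit_allowed(commit_message: str) -> bool:
        """Allow the submit only if the referenced task is in review."""
        match = re.search(r"\b[A-Z][A-Z0-9]+-\d+\b", commit_message)  # e.g. PROJ-123
        if not match:
            print("Commit message must reference a task key (e.g. PROJ-123).")
            return False
        issue_key = match.group(0)
        # Standard JIRA REST endpoint; we only ask for the status field.
        response = requests.get(
            f"{JIRA_URL}/rest/api/2/issue/{issue_key}",
            params={"fields": "status"},
            auth=("build-bot", "not-a-real-password"),  # placeholder credentials
        )
        response.raise_for_status()
        status = response.json()["fields"]["status"]["name"]
        if status != REQUIRED_STATUS:
            print(f"{issue_key} is in '{status}', not '{REQUIRED_STATUS}' - rejecting.")
            return False
        return True

    if __name__ == "__main__":
        # Wired up as a Git commit-msg hook: argv[1] is the message file.
        with open(sys.argv[1]) as message_file:
            sys.exit(0 if submit_allowed(message_file.read()) else 1)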

In some cases, we need to say that the code has made it into the build (i.e. it has been reviewed and submitted) and is now in testing.
Simply put, "In Progress" needs to be broken up a bit, otherwise the process is too opaque.
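
To make that concrete, here's a minimal sketch in Python of what the workflow might look like once In Progress is split up; the state names and allowed transitions are just my illustration, not a standard:

    from enum import Enum

    class State(Enum):
        TO_DO = "To Do"
        IN_PROGRESS = "In Progress"   # code is being written
        IN_REVIEW = "In Review"       # code complete, under review
        IN_TESTING = "In Testing"     # reviewed, submitted, in the build
        DONE = "Done"

    # Legal moves; everything else is rejected.
    TRANSITIONS = {
        State.TO_DO:       {State.IN_PROGRESS},
        State.IN_PROGRESS: {State.IN_REVIEW},
        State.IN_REVIEW:   {State.IN_PROGRESS, State.IN_TESTING},  # review can bounce back
        State.IN_TESTING:  {State.IN_PROGRESS, State.DONE},        # failed tests reopen work
        State.DONE:        set(),
    }

    def move(current: State, target: State) -> State:
        """Validate a transition, e.g. move(State.IN_REVIEW, State.IN_TESTING)."""
        if target not in TRANSITIONS[current]:
            raise ValueError(f"Cannot move from {current.value} to {target.value}")
        return target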

However, sometimes people go a bit too far, and start creating states (or swimlanes) such as:
  • In deployment
  • Low-level design
  • UX review
  • In documentation
  • Demo ready
And so on and so forth. Actually, I take that back - it might not be too far.
This all depends on what the team is doing; if we have a DevOps scenario where we are capable of deploying individual code changes, then yes - In Deployment might make sense.
If we are a UI team that is preparing small incremental widget updates, then UX review might be useful as well.

I've usually been trying to follow a couple of simple rules:
  • Start simple. Better to have a few states and add new ones only when absolutely needed. Too many states are an overhead on the team, and an overhead on me.
    Conversely, if there is a swimlane that we mostly ignore or jump over when moving tasks, it should be a candidate for elimination.
  • Adjust the states to whoever is in the team. If we have both development and QA in the same team, then In Testing is a good state to have. If we're a pure QA automation group, then perhaps having a state of Added to Continuous Integration would help.
My main background is either pure Development teams, or a combination of DevOps/QA, so I'll use those for my next examples:

  • Spike planning task. I plan to get into those later, but in one sentence: a non-coding activity to get a better understanding of future work (e.g. low-level design, UX). It might also be a QA engineer preparing a test strategy.
  • Internal refactoring. There's definitely coding and code review involved, but QA might not be, especially if we already have a stabilisation cycle in the same area.
  • Deploy software.
  • Troubleshoot and analyse customer escalations.
  • Training and getting familiarised with specific technology.
  • Performance testing.
  • Review technical documentation.

And so on and so forth. All of these tasks may have valid acceptance criteria, but the simple sequence of To Do → In Progress → Review → Testing → Done does not map to them as-is. It's purely a matter of personal preference and judgement whether using that sequence as the default is good enough, and whether it's acceptable to allow free-form interpretation for all the others.

In my case, To Do, In Progress and Done are the cornerstones, and Review just works too well with integrated review systems (e.g. Swarm/Perforce or Crucible with JIRA) to get dropped, but Testing was a mini-dilemma for a while.

Firstly, most tasks simply did not have a testing element to them, as per the examples above. Secondly, even when they did, testing was sometimes too substantial to fit both Dev and QA into the same sprint.

Just to give an example, let's revive candidate sprint tasks from one of the first posts:

  1. Integrate Chinese UI localisation provided by an external agency.
  2. Enhance skin support for more customisation.
  3. Address 3 visual defects.
  4. Support two more codecs.
  5. Prepare high-level effort estimate for porting to the MacOS platform.
 
Even if we could develop the first task in a given sprint, we might not be able to test it. So, I'd argue for a dedicated QA task in this case: maybe even two - one for planning and one for execution. The same might be true for the enhanced skin support, especially if Dev completes towards the end of the sprint.
On the other hand, we might be able to fix and test the visual defects in the same iteration.

Taking all of that into account, my experience was that the majority of tasks either did not need or could not accommodate a testing swimlane.

However, after a year or two, I still decided to reintroduce it, simply because the risk of missing a testing activity on those small defects is more of a problem than the overhead of dragging the other 80% of tasks across that state.

All of this is not so much a guide to how QA planning should be managed, but rather an example of the Start Simple and Tailor States to the Team principles.
To reinforce the latter: teams are not born equal, and if, for example, I were responsible for a team that creates internal components (rather than customer-facing ones), I might omit Testing altogether and embed API auto-tests in the acceptance criteria.

Speaking of those, I'd like to touch upon the standard template of: "As <X>, I'd like to do <Y> in order to achieve <Z>".

Many Agile advocates suggest this as a mandatory definition for all user stories, and thus, we end up with acceptance criteria such as:

"As a QA Engineer, I'd like to define a test plan, so that I'll understand my future work better", or

"As a developer, I would like to refactor class FooBar to Foo and Bar, so that we'll be able to maintain it", or

"As a customer of IniTech, I would like defect #12345 to be fixed, so that I'll be able to enjoy the product"

Like a legal document, each of these is semantically correct and fits the template, yet adds zero information - it's obvious why we would want to write a test plan, refactor, or fix defects, without explicitly naming the actor.

This is why I've been using this template very sparingly, and only in the rare cases where it's not obvious why we're doing a specific task. Even then, the rationale usually came at a much higher level than the individual story.
Of course, it might work much better for more UI-focussed teams: indeed, if you look at my own examples of the multimedia player, you'll notice that the template would fit most of them quite well.

Summary


Workflows are not born equal, and it is important to get them right. They are like a mini-checklist: miss a testing state, and you might end up with untested defect fixes. On the other hand, it's easy to fall into the opposite extreme, so deciding what goes into acceptance criteria and what becomes an explicit swimlane is a delicate call.
This is where a strong understanding of what the team does most of the time comes in, and where the choice of tools matters.
Lastly, acceptance criteria exist for us, so acceptance criteria templates are a guideline. As long as the person doing the task understands what's expected and why, the criteria are good enough.
