Agile: INVEST in Good Stories, and SMART Tasks

I came across this article that includes two mnemonics that I really like. 

A good story is (INVEST):

· I – Independent

· N – Negotiable

· V – Valuable

· E – Estimable

· S – Small

· T – Testable

A good task/defect is (SMART):

· S – Specific

· M – Measurable

· A – Achievable

· R – Relevant

· T – Time-boxed

If you are writing stories or tasks, keeping these guidelines in focus will lead to a better outcome for everyone.

Link:

http://xp123.com/articles/invest-in-good-stories-and-smart-tasks/

Thanks to Ryan Crumbly!


Three Laws of Robust Software Systems

Murphy’s Law (“If anything can go wrong, it will”) was born at Edwards Air Force Base in 1949 at North Base. It was named after Capt. Edward A. Murphy, an engineer working on Air Force Project MX981, a project designed to see how much sudden deceleration a person can stand in a crash. One day, after finding that a transducer was wired wrong, he cursed the technician responsible and said, “If there is any way to do it wrong, he’ll find it.”

For that reason it is a good idea to put some quality assurance process in place. I could also call this post “the three laws of steady software quality”. It’s about a few fundamental techniques that can help to achieve superior quality over the long run. This is particularly important if you’re developing a central component that will cause serious damage if it fails in production. OK, here is my (never final and not exhaustive) list of practical quality assurance tips.

Law 1: facilitate change

There is nothing permanent except change. If a system isn’t designed with this fundamental reality in mind, the probability of failure rises sharply. A widely used technique to facilitate change is the development of a sufficient set of unit tests. Unit testing makes it possible to uncover regressions in existing functionality after changes have been made to a system. It also encourages developers to think carefully about the desired functionality and the required design of the component under development.
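
As a minimal sketch of that idea (the apply_discount function and its rules are invented here purely for illustration), a small set of unit tests in Python might look like this:

    import unittest

    # Hypothetical component under test; the function and its rules are
    # invented for illustration only.
    def apply_discount(price: float, percent: float) -> float:
        """Return the price after applying a percentage discount."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    class ApplyDiscountTest(unittest.TestCase):
        def test_regular_discount(self):
            self.assertEqual(apply_discount(200.0, 25), 150.0)

        def test_zero_discount_keeps_price(self):
            self.assertEqual(apply_discount(99.99, 0), 99.99)

        def test_invalid_percent_is_rejected(self):
            with self.assertRaises(ValueError):
                apply_discount(100.0, 150)

    if __name__ == "__main__":
        unittest.main()

If a later change breaks the rounding or the validation behaviour, one of these tests fails immediately and the regression is uncovered before it reaches production.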

Law 2: don’t rush through the functional testing phase

In economics, the marginal utility of a good is the gain (or loss) from an increase (or decrease) in the consumption of that good. The law of diminishing marginal utility says that the marginal utility of each (homogeneous) unit decreases as the supply of units increases (and vice versa). The first functional test cases often walk through the main scenarios, covering the main paths of the software under test. None of that code has been exercised before, so these test cases have a very high marginal utility. Subsequent test cases may walk through largely the same code, differing only in specific side paths or specific validation conditions. Such test cases may cover just three or four additional lines of code in your application, and as a result they have a smaller marginal utility than the first ones.

My law about functional testing suggests: as long as the execution of the next test case yields a significant utility, the more time you invest in testing, the better the outcome. So don’t rush through the functional testing phase and skip useful test cases (assuming their usefulness can be quantified). Try to find the test cases that promise a significant gain in perceptible quality. On the other hand, if you’re executing test cases with negative marginal utility, you’re investing more effort than you gain in terms of perceptible quality. There is also the special (but not uncommon) situation where the client does not run functional tests on a systematic basis. In that case the law suggests: the longer the application stays in the test environment, the better the outcome.
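
To make the diminishing-returns argument concrete, here is a tiny sketch with invented coverage numbers; the figures are not from any real project and only illustrate how the marginal gain per test case shrinks:

    # Illustrative (invented) numbers: cumulative statement coverage reached
    # before any test, then after functional test cases 1..5.
    cumulative_coverage = [0.00, 0.45, 0.62, 0.71, 0.74, 0.75]

    for i in range(1, len(cumulative_coverage)):
        gain = cumulative_coverage[i] - cumulative_coverage[i - 1]
        print(f"test case {i}: +{gain:.0%} coverage (marginal utility)")

The first test case buys 45% coverage, the fifth only 1%; at some point the cost of another test case exceeds the perceptible quality it adds.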

Law 3: run (non-functional) benchmark tests

Another piece of lasting software quality is a regular load test. To make the results usable, load tests need a defined, stable environment and a baseline of measured values (a benchmark). At a minimum these values are CPU usage, response time, and memory footprint. Load tests of new releases can then be compared against those of older releases. That way we can also bypass the often-stated requirement that the load test environment must have the same capacity parameters as the production environment. In many cases it is possible to spot the really big issues with a relatively small number of parallel users (e.g. 50 users).
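
As a rough sketch of the benchmark idea (the metric names, the measured numbers, and the 10% threshold below are assumptions for illustration), a release-over-release comparison can be as simple as:

    # Compare the current release's load test results against the baseline.
    # All values and the threshold are invented for illustration.
    baseline = {"avg_response_ms": 180, "cpu_percent": 35, "heap_mb": 512}  # previous release
    current = {"avg_response_ms": 205, "cpu_percent": 36, "heap_mb": 640}   # release under test

    THRESHOLD = 0.10  # flag anything more than 10% worse than the baseline

    for metric, old in baseline.items():
        new = current[metric]
        change = (new - old) / old
        status = "REGRESSION" if change > THRESHOLD else "ok"
        print(f"{metric}: {old} -> {new} ({change:+.1%}) {status}")

The absolute numbers matter less than the trend: as long as the environment stays the same, a jump relative to the previous release is a warning sign even on modest hardware.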

It makes limited sense to do load testing if single-user profiling results are already bad. Therefore it’s a good idea to run repeatable profiling test cases with every release, so that profiling results can be compared with each other (again: the benchmark idea). We do CPU and elapsed-time profiling as well as memory profiling. Profiling is an activity that runs in parallel to actual development, and it makes sense to focus on the main scenarios used regularly in production.
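
For the profiling part, a repeatable profiling test case can be scripted with Python’s standard cProfile module; the main_scenario function below is only a placeholder for whichever scenario your users exercise most in production:

    import cProfile
    import pstats

    # Placeholder for a main scenario that is used regularly in production;
    # a real profiling test case would drive the actual application code here.
    def main_scenario():
        total = 0
        for i in range(100_000):
            total += i * i
        return total

    # Profile the scenario and print the top functions by cumulative time.
    profiler = cProfile.Profile()
    profiler.enable()
    main_scenario()
    profiler.disable()

    pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)

Storing this report for every release gives you the baseline against which the next release can be compared.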

How to write Acceptance Criteria for user stories in agile projects

A user story is a description of an objective a person should be able to achieve when using your website/application/software, written in the following format:

As an [actor] I want [action] so that [achievement]

For example:

As a Flickr member I want to be able to assign different privacy levels to my photos so I can control who I share which photos with.

This post adds some flesh to the idea of user stories, in the shape of acceptance criteria.

Where are the details?

At first glance, it can seem like user stories don’t provide enough information to get a team moving from an idea to a product. That’s because the written story is only one part of the picture; a user story is traditionally made up of three Cs:

  • Card – stories are traditionally written on notecards, and these cards can be annotated with extra details
  • Conversation – details behind the story come out through conversations with the Product Owner
  • Confirmation – acceptance tests confirm the story is finished and working as intended.

In a project following an Agile process, user stories are discussed in meetings with the Product Owner (the person who represents the customer for the thing you’re developing, and who writes the user stories) and the development team. The user story is presented, and the conversation starts. For example:

As a conference attendee, I want to be able to register online, so I can register quickly and cut down on paperwork

Questions for the Product Owner might include:

  • What information needs to be collected to allow a user to register?
  • Where does this information need to be collected/delivered?
  • Can the user pay online as part of the registration process?
  • Does the user need to be sent an acknowledgment?

The issues and ideas raised in this Q and A session are captured in the story’s acceptance criteria.

Acceptance Criteria

Acceptance criteria define the boundaries of a user story, and are used to confirm when a story is completed and working as intended.

For the above example, the acceptance criteria could include:

  1. A user cannot submit a form without completing all the mandatory fields
  2. Information from the form is stored in the registrations database
  3. Protection against spam is working
  4. Payment can be made via credit card
  5. An acknowledgment email is sent to the user after submitting the form.

As you can see, the acceptance criteria are written in simple language, just like the user story. When the development team has finished working on the user story they demonstrate the functionality to the Product Owner, showing how each criterion is satisfied.
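
Some teams go a step further and automate this confirmation. As a sketch (the registration handler, field names, and return values below are invented for illustration), two of the criteria above could be expressed as executable acceptance tests:

    import unittest

    # Hypothetical registration handler, invented for illustration; in a real
    # project these tests would drive the conference registration application.
    MANDATORY_FIELDS = {"name", "email"}

    def submit_registration(form: dict) -> dict:
        missing = MANDATORY_FIELDS - {k for k, v in form.items() if v}
        if missing:
            return {"status": "error", "missing": sorted(missing)}
        return {"status": "registered", "acknowledgment_sent_to": form["email"]}

    class RegistrationAcceptanceTest(unittest.TestCase):
        def test_form_rejected_when_mandatory_fields_missing(self):
            # Criterion 1: a user cannot submit without all mandatory fields
            result = submit_registration({"name": "Ada"})
            self.assertEqual(result["status"], "error")
            self.assertIn("email", result["missing"])

        def test_acknowledgment_sent_after_successful_submission(self):
            # Criterion 5: an acknowledgment email is sent to the user
            result = submit_registration({"name": "Ada", "email": "ada@example.com"})
            self.assertEqual(result["status"], "registered")
            self.assertEqual(result["acknowledgment_sent_to"], "ada@example.com")

    if __name__ == "__main__":
        unittest.main()

Whether or not the checks are automated, the point is the same: each criterion becomes a concrete, demonstrable pass/fail statement.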

Including acceptance criteria as part of your user stories has several benefits:

  • they get the team to think through how a feature or piece of functionality will work from the user’s perspective
  • they remove ambiguity from requirements
  • they form the tests that will confirm that a feature or piece of functionality is working and complete.