
Proposing the GDS Agile Testing Manifesto

Categories: Central Architecture


When talking to clients about what agile means, I refer to values outlined in the often-cited Agile Manifesto.

The manifesto uses paired statements to describe the core ideals of agile. Agile testing, however, has no equivalent community-agreed statement to reference. Instead, I frequently explain the testing process to clients using my own thoughts and experience.

I’ve now put these explanations in my own Agile Testing Manifesto, copying the style of the original industry manifesto. Since this is my first draft, I’d be grateful for your feedback in the comments section below.

Agile Testing Manifesto

In our efforts to build software of good and appropriate quality, we have come to value:
Collaborative ownership over detached objectivity
Defect prevention over defect reporting
Targeted automation testing over widespread anti-regression testing
Exploratory testing over predetermined scripting

That is, while there is value in the items on the right, we value the items on the left more.

Collaborative ownership over detached objectivity

When an external team tests your product, it has the benefit of being able to take an outside view. However, external testers can’t quickly incorporate usability feedback into your business model like your own team can. External testers won’t be aware of the reasoning that takes place during development, nor will they have insight into the high-risk areas of the service, which consequently need the most testing attention.

To build a quality product, testing activities must be fully integrated with the development cycle. The same team that creates the product must do the testing. This ensures the whole development team owns the quality of the product, rather than one individual or role.

Defect prevention over defect reporting

Rather than being critical observers, testers must be actively involved in preventing product defects occurring in the first place. This is easier if testers are familiar with a product’s functionality and business domain, participate in development activities and work with business analysts and customers.

When product defects do arise, testers should either resolve the defects themselves or work alongside a developer to get them fixed. This will help the tester ensure sufficient tests are written to stop the defect arising again.
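To make this concrete, here’s a minimal, hypothetical sketch in Python (the function, its bug, and the postcode handling are all invented for illustration): the tester first captures the reported defect as a failing automated test, and that test then stays in the suite to stop the defect arising again.

```python
# Hypothetical example: users reported that postcodes entered in lower
# case were rejected. The tester writes a test reproducing the defect,
# then works with a developer on the fix below.

def normalise_postcode(raw: str) -> str:
    """Fixed implementation: strip whitespace, upper-case, re-space."""
    cleaned = "".join(raw.split()).upper()
    return cleaned[:-3] + " " + cleaned[-3:]

def test_lowercase_postcode_accepted():
    # This test failed before the fix; it now guards against the
    # defect silently returning.
    assert normalise_postcode("sw1a 1aa") == "SW1A 1AA"

def test_surplus_whitespace_stripped():
    assert normalise_postcode("  SW1A  1AA ") == "SW1A 1AA"

test_lowercase_postcode_accepted()
test_surplus_whitespace_stripped()
```

Because the test was written against the defect itself, it documents exactly what went wrong as well as preventing a recurrence.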

Targeted automation testing over widespread anti-regression testing

A full suite of anti-regression tests will ensure your products work as intended. These anti-regression tests are commonly performed at the end of a product development cycle. However, it’s far better to automate this testing so it’s a continual part of the development lifecycle, testing any functionality currently being developed.
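As a sketch of what this might look like in practice (all names here are invented, and Python’s built-in unittest module is used purely as an illustration), targeted tests for the functionality currently in development sit alongside the code and run on every change:

```python
import unittest

def search(catalogue, query):
    """Feature under development: case-insensitive substring search."""
    query = query.strip().lower()
    if not query:
        return []
    return [item for item in catalogue if query in item.lower()]

class TestSearchFeature(unittest.TestCase):
    """Targeted suite for the in-flight search feature. Over time these
    tests accumulate into the wider anti-regression suite."""

    CATALOGUE = ["Passport renewal", "Driving licence", "Vehicle tax"]

    def test_match_is_case_insensitive(self):
        self.assertEqual(search(self.CATALOGUE, "PASSPORT"),
                         ["Passport renewal"])

    def test_blank_query_returns_nothing(self):
        self.assertEqual(search(self.CATALOGUE, "   "), [])
```

A continuous integration job could then run `python -m unittest` on every commit, so this checking happens throughout development rather than only at the end of the cycle.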

Exploratory testing over predetermined scripting

Exploratory testing techniques give testers unrestricted freedom to test a product. They’ll use their prior testing experience to actively hunt down areas of poor quality in a product rather than methodically work through a predefined script. Testers must prove the software works for its users, rather than only proving it works as its developers intended.

What do you think? How can this Agile Testing Manifesto be improved?

Don’t forget to sign up to the Government Technology blog.


Sharing and comments



  1. Comment by Philip Virgo posted on

    In the 2012 Budget there was a commitment that no new on-line system should go live unless the responsible minister was capable of using it. I understand that, in line with this commitment, George Eustice MP is now directly involved with the three week "cycle" for getting the new Rural Payments Agency systems fit for use by the target audience. My own first experience of an "agile" methodology was during the run-up to decimalisation (1971!) when I was expected to trial a mock-up with the target audience (of accounts clerks) before committing to cutting code. Today the equivalent process is much easier, but where is the GDS manifesto commitment to trialling with the target audience (e.g. a cross-section of disability benefits claimants) before committing to live running or mass roll-out?

  2. Comment by Chris Davies posted on

    I like what you're trying to do here, but I'm not sure we need another Manifesto. Number 2 had me saying "Yes, this!" but number 3 got me thinking. Where would automated testing be targeted?
    I agree with "it’s far better to automate this testing so it’s a continual part of the development lifecycle" but that tests only the functionality just developed - the 'does it work' testing. Automation is almost essential for anti-regression testing - the 'have we broken anything else' testing.
    I believe any automated functional tests produced from one project can and should be integrated with others to create one or more regression suites. In other words, regression suites are built up incrementally and run throughout the lifecycle.
    So perhaps this value should read "Incremental regression testing over final acceptance testing" or similar.

  3. Comment by Richard Hunter posted on

    I like the idea: something that easily captures the difference in a tester's life and approach when moving from a waterfall project to an agile one (which is an interview question I see people struggling with a lot).

    I think it would be useful to bring a group of thought leaders together to come up with an agreed manifesto, to help it get traction in the wider community.

    The point I would start at is to list the main activities we do in a waterfall project and how we do them, and then apply lean thinking against them. This should leave what we "need to do", i.e. what adds value to our customer (in this case the project or team), and we have that format.

    For example, a quick list of main activities in a waterfall project would be (I am sure I have missed some; this is just an example):

    - We have a company wide process
    - We have a Test Strategy or Approach for the project
    - We have test plans for each test phase
    - We plan test coverage and risk order of testing
    - We script everything we want to test
    - We execute the scripts and report progress
    - We run various phases of testing (regression, SIT, UAT, V&P, OAT etc etc)
    - We log defects
    - We have a test phase completion report
    - We have a final test report
    - etc

    We would also need a list of behaviours
    - fire and forget (raise a defect, over the fence, somebody else fixes it)
    - etc

    We then address each one by going through the first part of a lean assessment (why do we do it, what is the benefit for the end customer?) and remove everything we do not need.

    We can then get the top 4 items and lay them out as you have done, but also have a level below of more detail.

    Just a thought. Good work so far, I enjoy the blog

  4. Comment by Mark Winspear posted on

    This is refreshing to see. I like the acceptance that there is certainly a role for 'testers' (the skill of testing) in Agile.
    I would agree with Chris around automation; I think something broader like 'Automation first over manual testing' might fit better... do we really value widespread anti-regression testing?

    I have produced an Agile Test Framework here at the SFA as a set of guidance and principles, and would be keen to get in touch to share this with you to get your thoughts. Perhaps you could incorporate some of it into the work that you are doing?

    My email address is if you would like to get in touch.

  5. Comment by Søren Harder posted on

    I am a bit reluctant to "let go" of the Agile Manifesto: I think it applies to testers as well as developers, and by creating a specific agile tester manifesto, you are somehow going back to the silos. Discussing what would go into a testers' manifesto, if we were to make one, is very illuminating, though. Moreover, I am not sure exactly what should go into it. I do not like "Defect prevention over defect reporting" (though I DO like the explanation you gave for it). I would rather have "Effective problem handling over effective problem prevention", as I find that that is 'agile over waterfall'; but it is the opposite of your principle.

  6. Comment by Ellie posted on

    A consideration of how the Agile Manifesto might be applied to the tasks, activities and behaviours of testing/testers is a useful and interesting endeavour. Whilst I appreciate the thrust of your post, I am however struck by some challenges.

    For example, in what sense is the idea of testers being part of an external team actually applicable to other methodologies, such as v-model?

    The v-model is predicated on the need for testing throughout the requirements, design and build stages; and (cue an in-take of admonishing breath) as ‘best practised’ requires testing to be undertaken by the extended team in terms of unit/component testing (developers), functional testing (test team) and user acceptance testing (business analysts, users). It is this team that owns the testing, just as in agile the focus tends to remain on developers unit testing, testers exploring functionality and business analysts/product owners/users undertaking product acceptance.

    I would additionally suggest that the v-model is aimed at defect prevention over defect capture, the teachings being the 1:10:100 rule, the right use of static reviews (for requirements, designs and technical builds) and collaborative walkthroughs.

    And whilst defect management should not get in the way of speed and agility, I would not want to conflate this with defect reporting, which will often have its part to play in regulatory requirements or assurance/audit, or even in improving velocity (“which part of the system is the most buggy?”).

    I like Chris’s idea of incremental regression testing over final acceptance testing, since whilst regression suites should best cover all functionality, it highlights the need for developers and testers to make frameworks an integral part of their deliveries/deliverables, instilling disciplined code, fit for purpose functionality, so achieving quality and speed.

    Possibly, the idea of targeted automation was picked up on since it chimes with the concept of exploratory testing. Ironically, the only wholly predetermined scripting I can recall on any of the testing projects I have managed is being provided through an automated framework that builds test designs on the fly from a very stable set of requirements (and whose boundaries can be fully predetermined).

    Otherwise, exploratory testing (rather than ad hoc testing) has often been the call of the day when dealing with data and applications that have many degrees of freedom, whatever the nature of the development/project methodology.

    Just food for thought!