Friday, January 18, 2008

What Makes a Good Test Case

I recently answered this question on the MSDN testing forum and thought it would make a good post. Here's my answer; read the full conversation at http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=2597561&SiteID=1&mode=1

There are probably two 'paths' to answering that question - the first examines why you're testing at all, and the second looks at how you're actually writing your cases. In other words: strategy and process (ugh, I know...).

Some organizations view and implement testing as being all about QA--validating that an application fulfills certain requirements (e.g., a tax calculation completes and returns the expected result, or a server can serve up X pages per minute). I've worked at a company like that--they were implementing packaged software, and all they cared about was that it accomplished what they bought it for. (Ask me what I think about that approach...) Other organizations have to be (or choose to be) much more focused on overall quality: not just 'will it fit my bill' but 'is it robust'. So there's a subtle difference, but the point is that a good test case is a solid step toward accomplishing the objectives. For instance, if the project is an internal line-of-business application for time entry, a test case which validates that two users can submit time concurrently and the data will retain its integrity is a good test case (a rough sketch of that case in code follows this paragraph). A case which validates the layout pixel by pixel, on the other hand, would be a waste of time, money, and energy (would it get fixed, anyhow?).
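To make that concrete, here's a minimal sketch of what such a concurrency case might check, written in Python. Everything here is hypothetical - the InMemoryTimeDB class is a stand-in for whatever interface the real time-entry application exposes - but it shows the shape of the case: two users submit concurrently, and the validation is a binary integrity check.

```python
import threading

class InMemoryTimeDB:
    """Toy stand-in for the application's data store (assumed for this sketch)."""
    def __init__(self):
        self._lock = threading.Lock()
        self._entries = []

    def submit_time_entry(self, user, hours):
        # Serialize writes so concurrent submissions don't corrupt the list.
        with self._lock:
            self._entries.append((user, hours))

    def fetch_entries(self, user):
        with self._lock:
            return [e for e in self._entries if e[0] == user]

def test_concurrent_time_entry_preserves_integrity():
    db = InMemoryTimeDB()
    # Two users submit time at the same moment.
    t1 = threading.Thread(target=db.submit_time_entry, args=("alice", 8))
    t2 = threading.Thread(target=db.submit_time_entry, args=("bob", 6))
    t1.start(); t2.start()
    t1.join(); t2.join()

    # VALIDATION (binary): both submissions persisted, intact, for the right user.
    assert db.fetch_entries("alice") == [("alice", 8)]
    assert db.fetch_entries("bob") == [("bob", 6)]
```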

Another measure of test case quality is how the case is written. I generally require my teams to write cases which contain the following (and I'm fine with letting them write 'titles only' and returning to flesh them out later; as a matter of fact, for one-time projects I generally shy away from requiring much more than that).

  • Has a title which does NOT include 'path info' (e.g., "Setup:Windows:Installer recognizes missing Windows Installer 3.x and auto-installs"). Keep the title short, sweet, and to the point.

  • Purpose: think of this as a mission statement. The first line of the description field explains the goal of the test case, if it's different from the title or needs to be expanded.

  • Justification: this is also generally included in the title or purpose, but I want each of my testers to explain why we would be spending $100, $500, or more to run this test case. Why does it matter? If they can't justify it, should they prioritize it?

  • Clear, concise steps: "Click here, click there, enter this."

  • One (or more - another topic for a blog someday) clear, recognizable validation point. E.g., "VALIDATION: Windows begins installing Windows Installer v3.1." A validation point pretty much has to be binary; leave it to management to decide what's a gray area (e.g., if a site is supposed to handle 1,000 sessions per hour, the check is binary - the site handles that, or it doesn't. Management decides whether 750 sessions per hour is acceptable).

  • Prioritization: be serious... prioritize cases appropriately. If this case failed, would it be a recall-class issue, would we add code to an update, would we fix it in the next version, or would we never fix it? Yes, this is a bit of a judgment call, but it's a valid way of looking at the case. Another approach is to consider the priority of a bug in terms of data loss, lack of functionality, inconvenience, or 'just a dumb bug'.

  • Finally, I've flip-flopped worse than John Kerry on the idea of atomic cases. Should we write a bazillion cases covering one instance of everything, or should we write one super-case? I've come up with a description which I generally have to coach my teams on during implementation: basically, write each case so that, if it fails, it points to exactly one bug. So, for instance, I would generally have a login-success case, a case for failed login due to an invalid password, a case for failed login due to a non-existent user name, a case for an expired user name or password, etc. (see the sketch after this list). It takes some understanding of the code, or at least an assumption about the implementation. Again, use your judgment.
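To tie the list together, here's a minimal pytest-style sketch of those atomic login cases. The authenticate() function and its return codes are invented stand-ins for whatever login interface you're actually testing; the Purpose/Justification/Priority fields ride along in the docstrings, and each case carries exactly one binary validation point.

```python
# Toy stand-in for the system under test; real cases would drive the actual app.
USERS = {
    "kim": {"password": "s3cret", "expired": False},
    "lee": {"password": "hunter2", "expired": True},
}

def authenticate(username, password):
    """Hypothetical login interface returning a simple status code."""
    user = USERS.get(username)
    if user is None:
        return "unknown_user"
    if user["expired"]:
        return "expired"
    if user["password"] != password:
        return "bad_password"
    return "ok"

# One atomic case per potential bug (run with pytest):

def test_login_succeeds_with_valid_credentials():
    """Purpose: prove a known-good user can log in.
    Justification: core functionality; a failure here is recall-class. Priority: 1."""
    # VALIDATION (binary): login succeeds.
    assert authenticate("kim", "s3cret") == "ok"

def test_login_fails_on_invalid_password():
    """Purpose: a wrong password must be rejected.
    Justification: security; a failure goes into the next update. Priority: 1."""
    # VALIDATION (binary): login refused, and for the right reason.
    assert authenticate("kim", "wrong") == "bad_password"

def test_login_fails_for_nonexistent_user():
    """Purpose: an unknown user name must be rejected.
    Justification: security and robustness. Priority: 2."""
    assert authenticate("nobody", "whatever") == "unknown_user"

def test_login_fails_for_expired_account():
    """Purpose: expired credentials must not authenticate.
    Justification: policy enforcement. Priority: 2."""
    assert authenticate("lee", "hunter2") == "expired"
```

Notice that each assert is binary - it matches or it doesn't - and a failure in any one case points at exactly one defect, which keeps the gray-area judgment calls with management, where they belong.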

I read the response that a good case is one that has a high probability of finding a bug. Well... I see what the author is getting at, but I disagree with the statement if read at face value. It implies a tester would 'filter' her case writing, probing more into a developer's areas of weakness. That's not helpful. Hopefully your cases will cover the project well enough that all the important bugs are exposed, but there's no guarantee. I think the middle ground is appropriate here - a good case 1) validates required functionality (proves the app does what it should) and 2) probes areas where, if a bug is found, the bug would be fixed (in a minimal QA environment) or would advance product quality significantly (in a deeper quality environment).

BTW: another respondent said a good test case is one which brings you closer to your goal. Succinct!

Hope that helps!

John O.

john@yourtestmanager.com
http://www.yourtestmanager.com
http://thoughtsonqa.blogspot.com

1 comment:

  1. I'd like to present an alternative to your "click here, click there, enter this" precision testing. I'm sure you are surprised to hear this from the queen of 100-step test cases, but hear me out.

    I'm working on iterations 2 and 3 of testing an Agile project (my first!) where the iteration 1 tests were written by traditional waterfall-methodology testers (including myself) based on a single design document of approximately 175 pages with wireframe screen mockups and basic functionality descriptions. Those tests were at the 'click here' level of detail, and when we tried to execute them, we failed miserably, getting bogged down in duplicated steps and misinterpretations of the document - to say nothing of the fact that everything that did not precisely match a test step was called a defect, leaving the Agile developers with no "agility" and defeating the purpose, as it were.

    After the test lead "left", I took over writing the test cases for iteration 2, which I accomplished in 4.5 hours over one weekend. Having learned the system painfully over two weeks of attempting to execute the first iteration's tests, I created the new tests from the very same document that had been used to write the iteration 1 tests - with one huge difference. I divided the document into logical smaller pieces and cut and pasted those pieces into test sets. I was lucky in that the design document is an excellent one, so it was a simple task of importing the Word document into an Excel table, doing some global editing, and then exporting to Quality Center. Now, these tests could not be executed by someone who does not understand the system, but the folks who tested in iteration 1 were able to run them with little additional guidance. We're getting ready to start iteration 3 on Monday with the same tests, plus a couple of additional tests we realized we needed along the way, and we expect to have a pretty darned clean product tested and ready to go to UAT and to Beta by the end of the week.

    While in the past I would never have written what I consider 'vague' tests such as these, they are serving their purpose and cutting way back on the time required to create the tests, execute them, and document the results. They are tremendously reusable by anyone who understands the system, and finding such folks will not be difficult, since we will be supporting the software we're developing... there will always be a handful of people who know the product well.

    There's no way I could have written traditional 'click here, click there' tests in 4.5 hours. But the tests I came up with are thorough; they exercise the entire system; and they leave room for Agile modifications to the system. It will be simple enough to modify these tests to incorporate design changes and updates...I can't imagine having to do that with traditional 'click here, click there' tests.

    I will say, though, that if you don't have an Agile development team and functional/business team, this method of test development would fall flat on its face. I can't count the number of times in the past 3 weeks I have said (1) "if it were MY decision, I would do X, but we need to find out what the business wants" and (2) "the wireframe and the test system disagree; one of you has to change." Because we're an Agile team, these decisions are made immediately (most of the time) and we can carry on in our Agile way...

    I still love 'click here, click there' tests, but the cost of creating tests to that level of detail when you're in a 3-week Sprint would be outlandish, and the benefit, at least in our situation, would be nil.
