I recently answered this question on the MSDN testing forum and thought it'd be something good to post on. Here's my answer; read the conversation at http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=2597561&SiteID=1&mode=1
There are probably two 'paths' to answering that - the first path examines why you're testing at all, and the second looks at how you're actually writing your cases. In other words: strategy and process (ugh, I know..).
Some organizations view and implement testing as all about QA--validating that an application fulfills certain requirements (e.g., a tax calculation is completed and returns the expected result, or a server can serve up X pages per minute). I've worked at a company like that--they were implementing packaged software, and they only cared that it accomplished what they bought it for. (Ask me what I think about that approach...) Other organizations have to be (or choose to be) much more focused on overall quality: not just 'will it fit the bill' but 'is it robust?' So there's a subtle difference, but the point is that a good test case is a solid step toward accomplishing the objectives. For instance, if the project is an internal line-of-business application for time entry, a test case which validates that two users can submit time concurrently and the data will retain integrity is a good test case. I think a case written to validate the layout pixel by pixel would be a waste of time, money, and energy (would it get fixed, anyhow?).
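To make that concrete, here is a minimal sketch of that concurrency case in Python's unittest. The in-memory TimesheetStore is a made-up stand-in for the real time-entry back end; in practice you'd point the test at the application's actual API or database.

```python
# Sketch of the "two users submit time concurrently" case described above.
# TimesheetStore is a hypothetical stand-in for the real back end.
import threading
import unittest


class TimesheetStore:
    """Hypothetical in-memory stand-in for the time-entry back end."""

    def __init__(self):
        self._lock = threading.Lock()
        self._entries = []

    def submit(self, user, hours):
        with self._lock:
            self._entries.append((user, hours))

    def entries_for(self, user):
        with self._lock:
            return [e for e in self._entries if e[0] == user]


class ConcurrentTimeEntryTest(unittest.TestCase):
    def test_concurrent_submissions_keep_data_intact(self):
        """VALIDATION: both users' entries are stored; nothing is lost or mixed up."""
        store = TimesheetStore()
        t1 = threading.Thread(target=store.submit, args=("alice", 8))
        t2 = threading.Thread(target=store.submit, args=("bob", 6))
        t1.start()
        t2.start()
        t1.join()
        t2.join()

        self.assertEqual(store.entries_for("alice"), [("alice", 8)])
        self.assertEqual(store.entries_for("bob"), [("bob", 6)])


if __name__ == "__main__":
    unittest.main()
```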
Another point of quality for a test case is how it's written. I generally require my teams to write cases which contain the following (and I'm fine with letting them write 'titles only' and return to flesh them out later; as a matter of fact, on one-time projects I generally shy away from requiring much more than that). A sketch of how these pieces might hang together follows the list.
- Has a title which does NOT include 'path info' (e.g., "Setup:Windows:Installer recognizes missing Windows Installer 3.x and auto-installs"). Keep the title short, sweet, and to the point.
- Purpose: think of this as a mission statement. The first line of the description field explains the goal of the test case, if it's different from the title or needs to be expanded.
- Justification: this is also generally included in the title or purpose, but I want each of my testers to explain why we would be spending $100, $500, or more to run this test case. Why does it matter? If they can't justify it, should they prioritize it?
- Clear, concise steps: "Click here, click there, enter this."
- One (or more; another topic for a blog someday) clear, recognizable validation point, e.g., "VALIDATION: Windows begins installing Windows Installer v3.1." It pretty much has to be binary; leave it to management to decide what's a gray area (e.g., if a site is supposed to handle 1,000 sessions per hour, the case is binary: the site handles that load, or it doesn't. Management decides whether or not 750 sessions per hour is acceptable).
- Prioritization: be serious... Prioritize cases appropriately. If this case failed, would it be a recall-class issue, would we add code to an update for it, would we fix it in the next version, or would we never fix it? Yes, this is a bit of a judgment call, but it's a valid way of looking at the case. Another approach is to consider the priority of a bug in terms of data loss, lack of functionality, inconvenience, or 'just a dumb bug'.
- Finally, I've flip-flopped worse than John Kerry on the idea of atomic cases. Should we write a bazillion cases covering one instance of everything, or should we write one super-case? I've come up with a description which I generally have to coach my teams on during implementation. Basically, write a case which will result in one bug. So, for instance, I would generally have a login success case, a case for failed login due to an invalid password, a case for failed login due to a non-existent user name, a case for an expired user name or password, etc. It takes some understanding of the code, or at least an assumption about the implementation. Again, use your judgment.
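To tie the pieces above together, here is a rough, self-contained sketch of what those atomic login cases could look like in Python's unittest. The authenticate() function and its user table are invented stand-ins for the system under test; the point is the shape of each case: a descriptive title, a purpose in the docstring, a priority note, and a single binary validation point.

```python
# "One case, one bug" sketch. authenticate() and USERS are hypothetical
# stand-ins for the application under test.
import unittest

USERS = {"alice": {"password": "s3cret", "expired": False},
         "carol": {"password": "0ldpw", "expired": True}}


def authenticate(username, password):
    user = USERS.get(username)
    if user is None:
        return "unknown_user"
    if user["expired"]:
        return "expired"
    if user["password"] != password:
        return "bad_password"
    return "ok"


class LoginTests(unittest.TestCase):
    # Priority 1: core functionality; a failure here is recall-class.
    def test_login_succeeds_with_valid_credentials(self):
        """Purpose: prove a valid user can log in at all."""
        self.assertEqual(authenticate("alice", "s3cret"), "ok")  # VALIDATION

    # Priority 2: security; a wrong password must be rejected.
    def test_login_fails_with_invalid_password(self):
        """Purpose: a bad password must never grant access."""
        self.assertEqual(authenticate("alice", "wrong"), "bad_password")  # VALIDATION

    # Priority 2: security; unknown accounts must be rejected.
    def test_login_fails_for_nonexistent_user(self):
        """Purpose: an unknown user name must never grant access."""
        self.assertEqual(authenticate("mallory", "anything"), "unknown_user")  # VALIDATION

    # Priority 3: expired accounts are turned away with the right reason.
    def test_login_fails_for_expired_account(self):
        """Purpose: expired credentials are reported as expired, not accepted."""
        self.assertEqual(authenticate("carol", "0ldpw"), "expired")  # VALIDATION


if __name__ == "__main__":
    unittest.main()
```

If the invalid-password case fails, you know exactly which behavior broke, and the priority note tells you how loudly to shout about it.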
I read the response that a good case is one that has a high probability of finding a bug. Well... I see what the author is getting at, but I disagree with the statement if read at face value. That implies a tester would 'filter' her case writing, probing more into a developer's areas of weakness. That's not helpful. Hopefully your cases will cover the project well enough that all the important bugs will be exposed, but there's no guarantee. I think the middle ground is appropriate here - a good case 1) validates required functionality (proves the app does what it should) and 2) probes areas where, if a bug is found, the bug would be fixed (in a minimal QA environment) or would advance product quality significantly (in a deeper quality environment).
BTW: one respondent to the question said a good test case is one which brings you closer to your goal. Succinct!
Hope that helps!
John O.
john@yourtestmanager.com
http://www.yourtestmanager.com
http://thoughtsonqa.blogspot.com