Friday, January 18, 2008

What Makes a Good Test Case

I recently answered this question on the MSDN testing forum and thought it would make a good post. Here's my answer; the full conversation is over on the forum.

There are probably two 'paths' to answering that question: the first examines why you're testing at all, and the second looks at how you're actually writing your cases. In other words: strategy and process (ugh, I know...).

Some organizations view and implement testing as pure QA: validating that an application fulfills certain requirements (e.g., a tax calculation is completed and returns the expected result, or a server can serve up X pages per minute). I've worked at a company like that; they were implementing packaged software, and they only cared that it accomplished what they bought it for. (Ask me what I think about that approach...) Other organizations have to be, or choose to be, much more focused on overall quality: not just 'will it fit my bill' but 'is it robust'. It's a subtle difference, but the point is that a good test case is a solid step toward accomplishing your organization's objectives. For instance, if the project is an internal line-of-business application for time entry, a test case which validates that two users can submit time concurrently without losing data integrity is a good test case. A case which validates the layout pixel by pixel would be a waste of time, money, and energy (would it get fixed, anyhow?).
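To make the concurrent-time-entry case concrete, here's a minimal sketch of what such a test might look like. The `time_entries` schema and the `submit_time()` helper are my own assumptions for illustration (using SQLite as a stand-in backend), not part of any real application:

```python
import sqlite3
import tempfile
import threading

def submit_time(db_path, user, hours):
    # Each "user" opens its own connection, as two real clients would.
    con = sqlite3.connect(db_path, timeout=5)
    with con:  # commits the transaction on success
        con.execute("INSERT INTO time_entries (user, hours) VALUES (?, ?)",
                    (user, hours))
    con.close()

def test_concurrent_submissions():
    # Fresh database with a hypothetical time_entries table.
    db_path = tempfile.mktemp(suffix=".db")
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE time_entries (user TEXT, hours REAL)")
    con.commit()
    con.close()

    # Two users submit time at the same moment.
    threads = [
        threading.Thread(target=submit_time, args=(db_path, "alice", 8.0)),
        threading.Thread(target=submit_time, args=(db_path, "bob", 7.5)),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    con = sqlite3.connect(db_path)
    rows = con.execute(
        "SELECT user, hours FROM time_entries ORDER BY user").fetchall()
    con.close()
    # VALIDATION: both submissions persisted; neither row was lost or mangled.
    assert rows == [("alice", 8.0), ("bob", 7.5)]
    return rows
```

Note how the validation is binary, in the sense described below: either both rows survive intact, or the case fails.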

Another measure of a test case's quality is how it's written. I generally require my teams to write cases which contain the following (and I'm fine with letting them write 'titles only' and returning to flesh them out later; in fact, for one-off projects I generally shy away from requiring much more than that).

  • Has a title which does NOT include 'path info' (e.g., Setup:Windows:Installer recognizes missing Windows Installer 3.x and auto-installs). Keep the title short, sweet, and to the point.

  • Purpose: think of this as a mission statement. The first line of the description field explains the goal of the test case, if it's different from the title or needs to be expanded.

  • Justification: this is also generally included in the title or purpose, but I want each of my testers to explain why we would be spending $100, $500, or more to run this test case. Why does it matter? If they can't justify it, should they prioritize it?

  • Clear, concise steps: "Click here, click there, enter this."

  • One (or more; another topic for a blog someday) clear, recognizable validation points, e.g., "VALIDATION: Windows begins installing Windows Installer v3.1". A validation point pretty much has to be binary; leave it to management to decide what's a gray area. (If a site is supposed to handle 1,000 sessions per hour, the check is binary: the site handles that, or it doesn't. Management decides whether 750 sessions per hour is acceptable.)

  • Prioritization: be serious... prioritize cases appropriately. If this case failed, would it be a recall-class issue, would we add code to an update, would we fix it in the next version, or would we never fix it? Yes, this is a bit of a judgment call, but it's a valid way of looking at the case. Another approach is to consider the priority of the bug it would find: data loss, lack of functionality, inconvenience, or 'just a dumb bug'.

  • Finally, I've flip-flopped worse than John Kerry on the idea of atomic cases. Should we write a bazillion cases covering one instance of everything, or should we write one super-case? I've come up with a guideline which I generally have to coach my teams on during implementation: write a case which will result in (at most) one bug. For instance, I would generally have a login success case, a case for failed login due to an invalid password, a case for failed login due to a non-existent user name, a case for an expired user name or password, etc. This takes some understanding of the code, or at least an assumption about the implementation. Again, use your judgment.
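The login example above maps naturally onto atomic automated cases. Here's a minimal sketch; the `login()` helper, the `USERS` table, and the expired-account rule are all my own illustrative assumptions:

```python
import unittest

def login(username, password, users):
    """Return True only for a known, non-expired user with the right password."""
    record = users.get(username)
    if record is None:
        return False                       # non-existent user name
    if record.get("expired"):
        return False                       # expired account
    return record["password"] == password  # wrong password fails

USERS = {
    "alice": {"password": "s3cret",  "expired": False},
    "bob":   {"password": "hunter2", "expired": True},
}

class LoginTests(unittest.TestCase):
    # One case per expected bug: each failure points at one distinct defect.
    def test_login_success(self):
        self.assertTrue(login("alice", "s3cret", USERS))

    def test_login_fails_on_invalid_password(self):
        self.assertFalse(login("alice", "wrong", USERS))

    def test_login_fails_on_nonexistent_user(self):
        self.assertFalse(login("carol", "s3cret", USERS))

    def test_login_fails_on_expired_account(self):
        self.assertFalse(login("bob", "hunter2", USERS))
```

If the invalid-password case fails while the others pass, you know exactly which path in the code to suspect; a single super-case would leave you untangling which check actually broke.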

I read the response that a good case is one that has a high probability of finding a bug. Well... I see what the author is getting at, but I disagree with the statement if read at face value. It implies a tester would 'filter' her case writing, probing more into a developer's areas of weakness. That's not helpful. Hopefully your cases will cover the project well enough that all the important bugs are exposed, but there's no guarantee. I think the middle ground is appropriate here: a good case 1) validates required functionality (proves the app does what it should) and 2) probes areas where, if a bug is found, the bug would be fixed (in a minimal QA environment) or would advance product quality significantly (in a deeper quality environment).

BTW: one respondent said a good test case is one which brings you closer to your goal. Succinct!

Hope that helps!

John O.

How Can I Become a Better Tester? Part IV: Mentors

A major step for career growth, in any role, is to find a mentor. Mentoring relationships come in many forms, the most common of which are:

  • Working for someone
  • Working with someone
  • Having a formal mentoring relationship
  • Having an informal mentoring relationship
  • Reading and participating in specific test focus groups

Working for Someone

The best way to learn from someone is to work for them, elbow to elbow; there's no better teacher than daily example. I think of a couple of great test managers I had when I was a test lead. Dan, a test manager in Microsoft's Mobile Devices Division, is a fantastic manager. He is great with people, protects his teams from politics and struggles above, and understands quality. One thing I learned from Dan is that, while a tester's appetite for more time or resources is never satiated, we can get the job done anyhow. When I complained to him that we had shipped a product with too few people, he asked, "Did it ship on time?" [Yes] "Have there been any recall-class bugs?" [No] "Then you must have had the right number of people."

Another great mentor was Mike, group test manager for Live Meeting. Mike and I didn't see eye to eye on everything, but what a great manager he was! He enabled and trusted people to do way more than they may have ever done. He put me in charge of a beta release of Live Meeting (then called Placeware), and let me drive shiproom meetings of several releases of Live Meeting. He also knew how to have a lot of fun on the job. I miss many aspects of working with Mike, frankly. And I try to make every team member's experience the same - lots of opportunity to grow, lots of fun, and high expectations.

I'll make a wild statement: in your first few years of your career (generally two to three) WHO you work for matters significantly more than HOW MUCH you make. The first years of your career establish a foundation which will determine how quickly you will grow and what kind of habits you will form. If you're fresh out of college and have a choice between working at lower pay for an incredible lead, or earning more and working at a 'code factory', may I recommend you take the former? Establish yourself early on in your career, learn the principles, and THEN go out where you can impact and be rewarded accordingly.

Working With Someone

Almost as good as working for a great test lead is working with a great tester. At any level. In Microsoft as a junior tester, I worked side-by-side with a person who became a great friend. He taught me about equivalence classes, boundaries, and other key concepts. He showed me how to fight for bug fixes. He showed me where the bar should be!

Twelve years later, I was managing a team of 100. I worked with three test leads who taught me a lot. Sri taught me about getting the job done - he just sticks to the job until it's done (I've always prided myself in being known as a person who gets the job done, but Sri showed me how to take this to the next level). Debbie taught me about putting your head down and working through challenges - stick-to-it-ness, if you will. And Jenn taught me about taking pride in whatever you do. We never stop learning, especially by example.

My current manager is much the same. I have never worked for anyone as diplomatic as Tony (and I don't mean that in an office-diplomacy, fake-smile way). He really cares about people, how they feel, and what they think about their job. He's also ready at any moment to take advantage of a teaching/mentoring opportunity.

Having a Formal Mentoring Relationship

At Microsoft, each new hire has a new-hire mentor for their first three to six months. This mentor is the go-to for pretty much anything, from "where's the printer?" to "do you think I'm making my goals?" After that, everyone is advised to find a mentor within the company, and Microsoft even has an internal site dedicated to finding and maintaining a mentoring relationship.

The same should be true for you. If you're new to a company, I recommend you find an internal person to mentor you. Once you're established, look around and find yourself a mentor. Generally you want someone who's ahead of you in some way (technical skills, project management, leadership, etc.). Ask them to be your mentor, and be very protective of the time you take from them: make sure every time you meet you have a productive conversation. Never ask them to do your work; ask for help reviewing what you think is the right proposal, and get feedback on the specifics.

Have an Informal Mentoring Relationship

A key requirement for me is to work with great people from whom I can learn. As I pointed out, I learn even from the people I manage. I try to learn from almost every circumstance and every person. If there's someone you learn a lot from, try to be around them often, even if you're not in a formal mentoring relationship.

Participate in Forums, Groups, Discussions and Seminars

Finally, there's a lot to be learned by reading and participating in forums, groups, discussions, and seminars. I'm active on several (MSDN's testing discussion forum, the agile-testing group on Yahoo Groups, etc.). I learn a lot just by lurking (although I'm so opinionated that it's impossible for me to lurk for long) or by joining the conversation. Other places to learn include seminars, networking groups, and even podcasts (BTW: stay tuned for a podcast from me on the site where I'm a Testing Expert).

How Can I Become a Better Tester? Part III: Going Beyond Stated Requirements

So you've spent some time becoming more aware of quality and what quality means. You've been looking at the differences between a Mercedes and a Hyundai. You've also started reading up on quality and on engineering. Great start! What's next? The next step is to realize you need to look beyond the stated requirements and dig deeper.

In my opinion, functional or business requirements docs are like nets: they catch a lot of 'big stuff', but they can easily let little (albeit important) stuff through. For instance, a business requirement might state that a tax calculation be performed on a per-item basis. At the same time, it might not mention that the tax, which is rounded down to the lower cent, needs to be calculated on the total purchase and not on the individual items.


Disclaimer: I have no idea what the actual rules are for tax calculation. I don't test it, never have. And even if I did know, they'd only be specific to the US. So please look past the details in this example and try to recognize the key point...

The Tax Man Cometh

As an example, assume a user buys one $0.55 candy bar, a $1.75 bag of chips, and a $4.27 bottle of antacid. At 10% tax, that would look something like this:

    Item       Cost    Tax      Total
    Candy bar  $0.55   $0.055   $0.60
    Chips      $1.75   $0.175   $1.92
    Antacid    $4.27   $0.427   $4.69


Tax is calculated on each item, and in all cases is rounded down to the lower cent.

However, if you factor tax on the subtotal of the items, it may actually be more! The subtotal of these three items is $6.57, and 10% tax, rounded down, is $0.65, for a total of $7.22. Taxed per item, the totals above add up to only $7.21, so the two methods disagree by a cent.
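The arithmetic above is easy to verify in a few lines. This sketch uses Python's decimal module; the `tax_floor()` helper and the 10% rate are just this example's assumptions, not any real tax rule:

```python
from decimal import Decimal, ROUND_FLOOR

def tax_floor(amount, rate=Decimal("0.10")):
    # Tax rounded down to the lower cent, as in the example.
    return (amount * rate).quantize(Decimal("0.01"), rounding=ROUND_FLOOR)

items = [Decimal("0.55"), Decimal("1.75"), Decimal("4.27")]  # candy, chips, antacid

# Method 1: tax each item, then sum the item totals.
per_item_total = sum(item + tax_floor(item) for item in items)

# Method 2: sum the items, then tax the subtotal.
subtotal = sum(items)
subtotal_total = subtotal + tax_floor(subtotal)

print(per_item_total)   # 7.21
print(subtotal_total)   # 7.22
```

A one-cent discrepancy sounds trivial, but multiplied across millions of transactions it is exactly the kind of bug a requirements net lets through.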

OK, I agree: this is a case of a missed business requirement. But it's a great example of how a tester needs to be on his or her toes, asking questions and always exploring what can go wrong.

Not Another Bad Lego!

I'll take a manufacturing flaw as another example, which hopefully you can extrapolate to software quality. I have three sons, spanning just four years, and as a family (Mom is the ringleader here) we are Lego enthusiasts. Birthday presents are almost always Legos; these days, they are super-complex Star Wars sets. Whenever my boys have saved up $10 or so, we're off to the store to buy a new Lego. Several times in the past couple of months, we have picked up Legos with one of two problems: 1) missing pieces and 2) poor fit. There's nothing more frustrating for a kid than to have their new Lego missing a critical piece--or for the parent who has to drive 20 miles to return it!

I'm sure there's a manufacturing QA requirement here that each individual bag of pieces has a certain total weight. That's how they make sure all the pieces get into the box (how they miss a piece on this test is beyond me). Two days ago, my youngest bought himself a new Bionicle, only to find that the Bionicle's shoulder-claw (you have to see one to understand) was in the box but it didn't stay in place. Turns out, a manufacturing flaw caused the shoulder claw's insert tab to be too small, producing insufficient friction for it to stay.

So as a QA engineer at Lego, I'm sure I'd be dealing with the requirement to have all the parts in the box. But would I be asking about how we ensure EACH part shipped in EACH package has the right fit? That's going beyond the requirements.

Hey, You Stole My Car (Analogy)

OK, one last analogy, and I think the point will be clear. Pretend you are responsible for quality for an automobile. The requirements are that it achieve a certain miles per gallon, that it run for so many successive hours, and that it fit so many passengers. Let's say these are 25 MPG, a service life of 100,000 operating hours, and seating for 4.

In the late '70s, AMC made a car called the Pacer. It was small, fairly fuel efficient, and cheap. It sat five (although we once squeezed many more than that into my friend's Pacer). Service hours? My friend Kate couldn't kill it no matter how hard she tried; she drove it for a year without putting oil in it!

A colleague of mine at Microsoft recently had to buy a car. He chose a VW Touareg V10 diesel. He actually hits 30 to 35 MPG. The car has a proven service life well in excess of 100,000 hours, and it fits 5 (very comfortably, I might add). Which car is of higher quality? Why?

I'll leave you with that thought. Dig deeper; go beyond the stated requirements. It's the difference between an Ambassador and a Skoda, a Pacer and a Touareg.