Wednesday, November 26, 2008

How Can I Further My Testing Career?

I recently answered the following question, as part of my "Ask the Expert" role on searchsoftwarequality.com: "I am working as a black box tester on web applications. I want to improve my technical skills and am wondering what type of skills I should learn to help me further my career."

When it comes to testing, as a manager I am concerned about two things: effectiveness and efficiency. Effectiveness is about how well we measure the quality of the system under test (meaning what level of confidence we can gain about its adherence to requirements) and how many defects we discover, fix, and regress. Efficiency is about how many resources it takes to accomplish that. So when a tester asks me how to improve their skills, I focus on learning and development in these two areas.

First, let's look at effectiveness. Anything that helps you test more deeply, cover more ground, or find more bugs is going to be beneficial. In web testing, I find a strong knowledge of HTML and CSS, HTTP and web servers, JavaScript, and now RIA technologies like Ajax, Silverlight, and Flash to be super-helpful. The value of knowing HTML and CSS is self-evident: you want to be able to quickly find errors in layout and styling, freeing the developers from debugging these on their own. Understanding HTTP and web servers is key to troubleshooting lower-level communication problems. Finally, scripting languages are the basis for nearly every commercial web site. Script errors abound (especially because JavaScript is interpreted at run time and can't easily be stepped through), and testers need to be able to catch those errors with code review and other methods.
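
To give a feel for what that HTTP-level visibility looks like in practice, here's a tiny Java sketch (plain java.net, with a placeholder URL) that surfaces the status code and response headers a browser normally hides:

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.List;
    import java.util.Map;

    public class HttpPeek {
        public static void main(String[] args) throws Exception {
            // Placeholder URL: point this at the application under test.
            URL url = new URL("http://www.example.com/");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");

            // The status code alone catches a surprising number of issues
            // (404s hiding behind pretty error pages, 500s, redirect loops).
            System.out.println("Status: " + conn.getResponseCode());

            // Response headers expose caching, cookie, and redirect behavior.
            for (Map.Entry<String, List<String>> header
                    : conn.getHeaderFields().entrySet()) {
                if (header.getKey() != null) {
                    System.out.println(header.getKey() + ": " + header.getValue());
                }
            }
            conn.disconnect();
        }
    }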

Efficiency to me is mostly about automation—it's about being in more than one place at the same time. By writing good automation (automation that catches true failures, does not report false passes, runs in many environments, and lasts the whole product cycle), you accomplish several things. First, you continue to validate product quality even when you, personally, are not executing the tests. Second, you generally execute test cases more quickly—this encourages developers to run those cases even before checking in, speeding up the turn-around time (the feedback loop) on the code, defect discovery, and defect remediation cycle. Finally, you enable yourself to test on many different configurations at the same time. Common automation skills are familiarity with xUnit test harnesses, the ability to use Selenium and Selenium RC or Watij/WatiN, and good development skills in languages such as Java and .NET.
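
As a concrete (if minimal) sketch of what I mean, here's the shape of such a test using Selenium RC's Java client with TestNG. The URL, locators, and credentials are all hypothetical, and it assumes a Selenium RC server running on localhost:4444:

    import com.thoughtworks.selenium.DefaultSelenium;
    import com.thoughtworks.selenium.Selenium;
    import org.testng.Assert;
    import org.testng.annotations.AfterClass;
    import org.testng.annotations.BeforeClass;
    import org.testng.annotations.Test;

    public class LoginSmokeTest {
        private Selenium selenium;

        @BeforeClass
        public void setUp() {
            selenium = new DefaultSelenium("localhost", 4444, "*firefox",
                    "http://www.example.com/");
            selenium.start();
        }

        @Test
        public void validLoginReachesHomePage() {
            selenium.open("/login");
            selenium.type("id=username", "testuser");
            selenium.type("id=password", "secret");
            selenium.click("id=loginButton");
            selenium.waitForPageToLoad("30000");
            // Catch true failures, don't report false passes: assert on a
            // concrete post-condition rather than just "no exception thrown".
            Assert.assertTrue(selenium.isTextPresent("Welcome, testuser"));
        }

        @AfterClass
        public void tearDown() {
            selenium.stop();
        }
    }

Because it runs from a plain TestNG harness, a developer can run the same test before checking in.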

You're probably thinking that I failed to bring up skills in tools such as Rational Robot, LoadRunner, WinRunner and the like. These are very important niche skills a web application tester can use to differentiate themselves; they just didn't fit well into my effectiveness/efficiency paradigm! Understanding how and when to apply tools in the performance, stress, and security testing realms is a great way to advance your career. The logical step in a tester's career is to branch out and select specific tools to excel in. If you want to stay in your current company, pick the tools most commonly used there for these automated testing categories. If you are looking to move to another company or industry niche, understand the tools in common use and develop skills in them. Many commercial tools have limited trial versions available; you may not be able to run a full load test with them, but you can definitely learn how they work. But keep in mind: more important than learning how to use the tools is understanding what's happening in your system when you run them.

In 2007, I wrote a series of articles on my blog titled "How to Become a Better Tester", which gives more detailed steps you can take in developing your career. The first entry can be found here: http://thoughtsonqa.blogspot.com/2007/12/how-can-i-become-better-tester.html

All the best!

John O.

Wednesday, October 22, 2008

Lessons for Teamwork in Software Quality

In my role as a volunteer youth leader in my Church, I have the opportunity to help put together a monthly activity. My group was responsible for the activity this month, and we chose to offer five team building exercises (all straight from http://www.wilderdom.com/games/InitiativeGames.html). We did four exercises in a rotation, and then all met together to perform the fifth. The exercises were:

  • Minefield: each member of a group paired up. We set up a maze using chairs and tables, and had our pairs blindfold one person. The second person in the pair had to 'lead' their partner through the maze. The challenge in this game is that all partnerships are talking together, at the same time. The purpose of the exercise is to emphasize the need for communication as well as the ability to pick out the voice you're listening to.
  • Toxic Waste: this is a group exercise. Participants encounter a bucket in the middle of a 10' circle and are told the circle is full of toxins. They need to move the bucket out of the circle, and their resources are a bungee cord and ten thin ropes. The challenge is ingenuity, teamwork, and (again) communication.
  • All Aboard! In this exercise, the team all stands on a tarp. The challenge is to fold up the tarp to be as small as possible, fitting the entire team on it, WHILE the team stands on it. Teamwork, spatial relations, etc.
  • Helium Stick: I conducted this one. The team lines up in two parallel rows, sticks their hands out with their index fingers extended. A small stick (I used a 1/4" wood dowel) is placed on their extended fingertips and they are challenged to lower the dowel. In fact, it goes up.
  • Egg toss: each team is given 25 straws, 20 cotton balls, a 5' piece of tape and an egg (unboiled). The mission is to build a crate/ship/container so they can drop their egg and it doesn't break.

What is interesting to me is the way teams interacted to get the problem solved. As I said, I conducted the helium stick challenge. As teams started, they were astonished that their stick ROSE instead of sunk. In a flash, right on the heels of that recognition, came frustration. We were split up into groups of boys and girls--oddly enough, while both groups expressed frustration, only the boys started yelling at each other. I kid you not, they were really ripping on one another. They quickly moved into the blame game. Each group (all four) needed to be stopped multiple times.

After frustration/blame, the groups moved into 'try harder' mode. I finally would stop each group and point out that trying harder wasn't working - maybe there was a better way. At that point, it was again quite funny - no single group picked an organized way to brainstorm. It was 'herd brainstorming' and they all just started shouting ideas.

Eventually someone 'won' through all the shouting. In one group, it was an adult leader, who started back into try-harder mode. In a group of 12-year-old boys, one of the boys got everyone's attention and literally talked the stick to the ground. You need to understand the magnitude of that feat: you've got eight people all pushing up on a stick (pushing up because they have to keep their fingers on the stick at all times, and by doing so they force the stick up). Somehow this boy talked everyone, step by step, through the process of lowering the stick.

The girls were pretty creative--they caught on quickly that they needed to coordinate, and touch was the best way to do so. One group of older girls interlocked pinky fingers and lowered together. Another group split into two smaller groups on each end, and just put their hands touching, next to one another.

For me, the takeaways were:

  • If something isn't working, the first response is to try harder. But an approach that isn't working won't work any better just because you try harder.
  • When something isn't working, teams gravitate toward frustration and even blame. It's so destructive! No one was intentionally pushing the stick up, but they kept yelling at each other about it.
  • Someone has to step up and coordinate the discussion. Brainstorm on ideas and then try one of them -- any one of them. There was no single right idea and the teams that just tried something generally succeeded, as long as it was something other than trying harder.

How can this apply to software development, software testing and software quality? Well, software is built by teams - even the smallest unit of engineering is a feature team made of two or more people. Communication is important, and it's critical that 1) no one resorts to blame and 2) each person is allowed to share their point of view. "Writing unit tests just so we can hit a goal of coverage seems to be distracting us from the real goal--our output is actually lower quality right now than it used to be" needs to be answered with "Why do you think that way?" and not "Uh huh - not true. Besides, you're not helping at all with all these bugs you're bothering me with!"

Good communication includes illustrating the current status, as in "Hey wait, everyone seems to be pushing up" and then a group discussion of what's causing that: "We're pushing up because we all want to keep our finger on the stick." Only then can the group move on and start thinking about the solution. So in an engineering organization, that might be "why are we getting a flood of bugs? Wait, testers are off doing something else (building a battleship test automation system) rather than focusing on testing daily builds". Once the situation is recognized, only then can the team react with an appropriate response.

As quality assurance/software testing teams work with and communicate with development counterparts, quality becomes a natural by-product. As teams discuss what practices are producing current results, they can move forward and improve how they approach the challenge of writing quality software. A little communication can move our engineering teams from try harder to work smarter, and output increases in both quantity and quality.

Thursday, October 9, 2008

What's the Best Software Testing/QA Tool?

Frequently people will post a question like "What's the best tool?" It drives me nuts sometimes! Software testing and quality assurance are definitely challenging professions, but it's not fair to depend on others to solve your challenges for you! I've answered questions like this in the agile-testing Yahoo group and on the MSDN forum, and I thought it was time I brought my answers together into one blog posting. That way, I can refer people back to the posting.

MSDN Answer

Lifted from my post on MSDN's software testing forum:

An American car manufacturer once tried to make a car for all things: a 5-6 passenger car in three different models, each with four-wheel drive AND good fuel economy. It ended up mediocre at everything. You need to beware; resist the urge to pick one monolithic tool solution for such a wide variety of testing needs. The vendors (well, NOT Microsoft, of course!) would love to have you believe their tool can do it all. And in some aspects, their software testing tools probably can. However, will that one tool solution be effective and efficient in everything you do? No.

I'm not advocating a pantheon of tools, but you need to think like an engineer more than a manager or a customer. You find the right tool for the job at hand. Over time, you'll settle down to a group of 3-4 tools and you'll stick to them.

I've never used <some tool the asker referenced> - someone else will need to comment on that product's ability to do everything you're looking for. My take? It'd be too good to be true if it actually could. And I tell my three sons all the time 'If it's too good to be true, it's probably not true'.

Be wary.

Here's how you [should] approach this as an engineer:

  • List out the applications you'll be testing 6-12 months from now

  • Think about the test scenarios, especially those you'll want to automate

  • Ask yourself how many releases of that application/project your company will perform--will quality assurance be a repeated process, or are you testing this software just once?

  • Based on that, how much automation is 'worth the investment'? Microsoft Office 2007 *might* still be running automation I wrote in 1997--probably not, but I know Office 2003 runs it, AND that automation is still run every time a patch or SP is released.

  • Now that you know how much investment to make, look for best-of-breed solutions for each project. Don't focus on all-in-one solutions; just look for the best tool for the job. Use demos, read reviews, ask questions. Don't ask "Is this the right tool?" but rather "Which tool do you recommend?" or "I'm considering this tool for this job - does anyone have experience using this tool to do this?"

  • Once you have a short list of tools, look and see if there is commonality/overlap. You'll see patterns. It's possible Rational or another all-in-one tool will appear in the list; it's equally possible none of those tools will appear.

  • If there's good overlap, ask yourself what you think the pain will be if you force a tool into a job it wasn't designed for. If you can live with the pain, go for it... If not, keep looking or open yourself up to a larger set of tools.

Hope that helps (in spite of not answering your question),

John O.


Commentary

OK - I searched through a number of documents and can't find my other posting on this subject, so I'll just write it once more.

  1. To all testers: look before you ask. Really! If you are wondering what tool you can use for a given test type, use Live Search and look for information! Performing a minimal amount of research shows respect to the audience you're turning to for help, and can actually prevent a question now and again. QA is all about searching for product defects; apply that searching capability to answering your questions.
  2. You're more likely to get help on a specific question than on a broad one. For instance, asking "What's the right tool?" won't get you much. However, "I've evaluated Selenium, Selenium RC, and Watij--given my situation (describe it), what tool do you recommend?" will get you real, usable answers.
  3. Talk about the problem you're trying to solve. "We realized we'll be running tests over and over and over again, but our UI changes frequently. What strategies can we take...?" is a question that raises the problem and lets people know what help you're looking for.
  4. Asking "what's the best tool?" is like asking "What's the best car?" If you live in Germany and drive the autobahn frequently, the best car for you is far different from the best car for someone who lives in Bombay, battles thick traffic, and drives Indian roads (notoriously rough). Software testing tools are the same way: a screaming "BMW" test tool will fall apart on a Bombay road, a right-hand-drive Jaguar would be totally out of place on an American highway, and a performance test tool isn't the right way to automate repeated quality assurance tests. The right tool depends on the project, the skill sets on the QA team, the timeline, and several other factors. So be specific, give details, and ask for recommendations.
  5. Solve your own problems. The Internet is an incredible tool and offers us all a ton of opportunity to not have to reinvent the wheel. But don't ask other people to do your work for you! Ask for advice. If you want someone to solve your problem, ask one of us to consult (for pay) and bring us in. We'll be glad to get the job done for you!
  6. Give back: as you grow and learn, give back... Don't post a question, get your answer, and disappear. Remain an active participant in the community and 'pay it forward'.

Summary? Be specific, research before you ask, solve problems, and give back. That's how to get answers online--in a sustainable fashion.

Friday, October 3, 2008

What Makes a Good Automation System (automated QA/Quality Assurance/Software Testing)

So in my new job, one of my first tasks is to put together an automation system--by this I mean a harness and a framework. The process has had me thinking (and talking) a lot about what makes a good system in general.

The automation harness is the system used to schedule, distribute, and run tests and to record results. In the open source world, some tools used here include NUnit, JUnit, and TestNG (my personal favorite). These tools all work in a one-off situation: they are run locally out of the dev environment or via the command line. In software testing at Microsoft, though, a one-by-one approach to automation is useful for 1) developer unit testing, 2) tester unit testing/test creation, and 3) failure investigation. For the 24/7 test environment we're building, however, this isn't sufficient. We need a centralized scheduling tool that allows us to push tests out to multiple clients (to simulate load and to run tests in parallel rather than serially). So we're working internally at Microsoft, evaluating the existing automation harnesses and trying to find the one that works best.

A big factor for me in this selection is finding a harness which is configurable. At Microsoft, we are pushing the envelope in testing in a variety of ways: code coverage analysis, failure analysis, and several similar activities which allow us to streamline our testing, reduce test overhead, and automate many testing tasks. This means our framework MUST be extensible - we have to be able to plug in new testing activities.
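
To illustrate what 'plug in new testing activities' might look like, here's a hypothetical sketch of the kind of extension point such a harness could expose (these interfaces are illustrative only, not an actual Microsoft API):

    // Hypothetical extension point: the harness invokes every registered
    // activity around each test run, so code coverage, failure analysis,
    // and the like can be added without touching the harness core.
    public interface TestRunActivity {
        String name();

        // Called before tests are dispatched to clients,
        // e.g. to switch on code-coverage instrumentation.
        void beforeRun(TestRunContext context);

        // Called after all results are in,
        // e.g. to harvest coverage numbers or run failure analysis.
        void afterRun(TestRunContext context);
    }

    // The minimal context a harness would hand to each activity.
    interface TestRunContext {
        String runId();
        Iterable<String> clientMachines();
    }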

A second element of the automation system is the framework. This is the abstraction layer which separates our automated tests from the application under test, and it is critical to good automation. If, for instance, you are automating a web application, you will probably experience a lot of churn in the application layout. You do not want your automated tests hard-coded to look for certain controls in certain locations (i.e., in the DHTML structure). By abstracting this logic, your test can call myPage.LoginButton.Click(), and your abstraction layer can 'translate' this into clicking a button located in a specific div. In some organizations, this framework is purchased. At the LDS Church, we leveraged both Selenium RC and Watij to build this framework, developing most of it ourselves internally (kudos to Brandon Nicholls and Jeremy Stowell for the work they did in this capacity).
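
Here's a minimal sketch of that abstraction layer in Java, assuming Selenium RC's client interface (the page class and locator are hypothetical):

    import com.thoughtworks.selenium.Selenium;

    // Tests call methods on this class; only this class knows the page's DOM.
    public class LoginPage {
        private final Selenium selenium;

        public LoginPage(Selenium selenium) {
            this.selenium = selenium;
        }

        public void clickLogin() {
            // If the button moves to a different div, only this locator
            // changes; no test calling clickLogin() needs to be touched.
            selenium.click("//div[@id='login-area']//input[@type='submit']");
            selenium.waitForPageToLoad("30000");
        }
    }

A test then reads as new LoginPage(selenium).clickLogin(), and layout churn is absorbed in one place.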

The challenge felt by most test organizations is two-fold: 1) finding the engineering talent to build these systems and 2) making the investment in innovation. Ironically, the very thing which can free up resources for other tasks (automated testing) is the thing most managers don't want to spend time on! This makes sense, sort of: managers don't like to invest in activities which (in their mind) don't contribute directly to the bottom line. But in all but the smallest of projects, it makes no sense. Test automation isn't a sellable product, but if automated tests can free up a day or two of test time, that's a day or two available for other activities, each and every time the automation is run.

Recruiting top talent is also a challenge. In both IT organizations where I worked, there was a culture among developers that testers weren't engineers--they were click testers. Testers couldn't give input on 'extremely technical concepts' like architecture or potential bug causes; they were there to pick up code as developers released it and then to find bugs. It's no wonder that it's so challenging to hire engineers into testing - when they're treated like that, they're either going to leave or move to development!

So the keys to a great automation system are: 1) a solid, extensible and flexible harness, 2) a robust framework, generally customized to your test activities, 3) management commitment to invest in innovation and automation and 4) top engineering talent and the culture to reward them for their contribution.

Am I missing anything?

Wednesday, October 1, 2008

What A Feeling!

So I've done a relatively good job of being calm and collected in my first two days at Microsoft. Your first day starts in a long line filling in I-9 forms. Then you enter your address and other contact info into the Microsoft system via an internal web form. Finally, you sit for another 7 hours in a room while they teach you about benefits and the MS culture. At the start of that first day, you're officially a Microsoft employee, but you don't get your card key or anything.

Day two starts in the Big Room again, with discussions about corporate ethics and legal issues, plus a conversation with a couple of recent Microsoft hires. Finally you get your card key and are sent to find your manager. Oh, and by the way - the room has anywhere from 100 to 150 people in it. Yup - Microsoft starts that many people each and every week (well, maybe not the week of Christmas or New Year's).

So at about noon I launched off on my own. Unlike most new hires, having worked here for 11 years, I know where the buildings are, where parking is, etc. So I zipped straight to the building that serves as my temporary base whenever I'm here in Redmond. I swiped my badge and got access to the building. A little smile crept up on my face.

About an hour later, after picking up my laptop and getting it set up, I went for lunch with another new hire for our Utah group. As I swiped my card and stepped into the cafeteria, I had a completely involuntary reaction: I jumped, threw both hands high in the air, and shouted "Yes!! I'm back at Microsoft!" Later during lunch, Tim told me how cool it was to spend time with someone who is so excited about working at the company. I have a bounce in my step which has been missing for many, many years.

I can't describe it. I was in software testing/quality assurance at Microsoft for 11 years. There were great days and there were really challenging days. I left two years ago to lead QA "for the world's largest retail IT project" at Circuit City. What an experience that project was. Quality assurance to team leads (mostly IBM project managers) meant proving happy path and avoiding negative testing. It was a cultural shock, to say the least. Testing software at the LDS Church was somewhat better. The people, for the most part, were great (surprisingly, there were exceptions - people who behaved less Christ-like than even at Microsoft!). The QA team suffered from a total lack of respect from development, however. And I found in that IT organization that everyone has a special little niche. There are enterprise architects, application architects, security 'specialists' (people who know about security policy, but don't know much about penetration testing), developers, and quality assurance engineers. If you dared to stray outside of your niche, well, that meant you were stepping on someone else's toes.

Being back at Microsoft means I have an equal seat at the table as an engineer. It means I'll have to work with other engineers to tackle really, really challenging problems. First challenge: building an automation harness using existing technologies at Microsoft, then building our own framework (abstraction layer) within which our tests run. Additionally, we need to take the mandate to "build virtualization management technologies" and turn that into released software: ideation, product planning, product specification, development, and release testing.

We also have the challenge of hiring incredible C++ developers and testers (engineers) in the Salt Lake Valley. Finding developers is pretty easy, but finding developers who respect QA and understand engineering excellence? A challenge. Finding a software testing professional who has the guts to take an equal seat at the table? A challenge!

But that's what I love about being back. I feel like, after two long years, I can finally do my best work and reach my full potential. I can bring all of my 14 years of engineering experience to bear on a software challenge. If I have an idea, I can run with it. I can provide input to the user scenarios, to application architecture, and to how we push quality upstream. I can prevent software defects, rather than find them!

I know everything won't be perfect. I left Microsoft for reasons, and those reasons haven't all changed (although judging by much of what I heard in New Employee Orientation, the last two years have been a time of growth and improvement for the company). There will still be bad days, there'll still be the struggle for a good work/life balance (now called 'blend'). But I have incredible health benefits for my family, stocks and bonuses again, and I have the chance to be challenged every day again. And I'll be working with super-smart people every single day. THAT is a cool thing!

Monday, September 22, 2008

A Softie Again

I'm thrilled to announce that I'm about to become a Microsoftie, again! I have been given the opportunity to return to Microsoft as a senior test lead, working in the Management Division. No, that's NOT the group of execs who run the company! That's the group producing OS and platform management tools like Operations Manager (a product I've worked on in the past), SMS Server and so forth.

About two months ago, Microsoft announced the formation of a development center here in my home of Salt Lake City. I quickly applied for roles in the test organization. After two years of working project IT, I am ready to get back to product software! Some people might snicker, but the commitment to quality at a product software company, especially Microsoft, is so much higher than in project IT. I found I was spitting into the wind quite often in my previous roles--I was either fighting losing battles or fighting the wrong battles (i.e., pushing for quality in organizations or situations where leadership didn't share the same commitment, or pushing for higher quality than some projects required).

Additionally, I found that my last few organizations did NOT look at testing engineers as equal citizens. Testing was looked at primarily as QC (quality control) or possibly QA (quality assurance). Rarely were we invited to planning or design meetings; never were we looked upon as people who could help PREVENT defects. We were often seen as a roadblock to release. Don't even ask about my two-hour argument with IBM about the value of negative testing! Engineers who could help architect solutions? No way!! I often felt my fourteen years of engineering experience were swept under the table because I had "QA" in front of my "Engineer" title, rather than "Java" or "Development".

So I'm very excited to be returning to an organization and a company where I get a 'full seat' at the table, and an opportunity to contribute to my full ability, not to the perceived level of my title. I.e., I'm an engineer again!

Doesn't change much about my point of view or my approach to testing. I'm still hyper-focused on 1) bug prevention, 2) quality engineering, and 3) driving for greater effectiveness and efficiency. And sometimes that will put me at odds with mainstream MS thinking (recall my conversation about moving to all SDETs and automating all tests, and the resulting "user experience" bugs that slip through the cracks in products like Vista).

For me the best thing is, I get to stay in beautiful Salt Lake City, and yet work for one of the greatest employers in the world. And NOW I can honestly say I've tried a few others...

So OK - let's hear it. How bad is Microsoft quality, really? If you could talk to a test leader, what would you tell them? What would you ask them?

Friday, July 25, 2008

Software Quality Assurance: SDL (Secure Development Lifecycle)

At work, I've been helping one of my teams implement portions of Microsoft's Secure Development Lifecycle (SDL). SDL is more than just plinking around attempting some penetration testing--it's a committed approach to secure software design. Here are some of the takeaways from our work:

  1. The first point I made with my team is that security features DO NOT EQUAL secure features. Having SSL-encrypted communications does not make a web application secure! It just means you have an encrypted communications channel. Secure software isn't secure because of features such as Acegi security, RSA encryption, or anything like them. Secure software is produced when engineers think secure from the start: when developers write solid, safe code, and when testers help plan and test the software with a security focus.
  2. Next point we emphasized: threat modeling. As we wrapped up a two-hour threat modeling session, one of the developers commented "Why didn't we do this months ago, before our code was written!". Good point!! It's never too early to threat model. Analyze your product, paying special attention to where data crosses boundaries: user to Internet, Internet to server, server to database, etc. Model threats, wild and crazy or down-to-earth. Our threat modeling has resulted in 9 potential threats so far, and we expect several more as we continue.
  3. Security comes in layers. Back when I lived in India, I toured the Delhi fort. This fort was built by professionals. It has a deep moat around it. Tall, thick walls surround an inner wall, and inside that inner wall lies the fort. That's how our code should be! OK - so you're authenticated via Acegi and LDAP. You're encrypted with SSL. That's great - but what if someone logs in with a valid account, then tries to hijack another session? Your layered security will catch hacks like this: authorize every time someone tries to access sensitive data. Even if you've already authenticated and authorized, do it again! Layers bring security. (A code sketch of this idea follows this list.)
  4. Reduce your footprint: in Agra, where the Taj Mahal is, there's another fort (these Moguls were building forts everywhere!). This fort is on the edge of town, in the hills. Compared to the Taj, the fort is tiny. This is referred to as attack surface reduction. Only allow public access to a few of your resources. If you have features which suffer from weak security, disable them by default or remove them completely. Give hackers as little space as you can.
  5. Train your engineers (dev and test). There's common training needed by both (elements of secure design, running a threat model, etc.) and there are discipline-specific trainings such as penetration testing or the application of specific technologies. The SDL is called a lifecycle because it's a continuous process. Lather, rinse, repeat and all that.
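
Here's the code sketch promised in point 3: re-authorizing at the inner wall on every access to sensitive data. All names here are hypothetical rather than from any specific framework:

    public class AccountService {
        // Hypothetical authorization facade (the inner wall's gatekeeper).
        interface Authorizer {
            boolean mayRead(String userId, String accountId);
        }

        private final Authorizer authorizer;

        public AccountService(Authorizer authorizer) {
            this.authorizer = authorizer;
        }

        public String loadAccountData(String sessionUserId, String accountId) {
            // The user already authenticated at login (the outer wall), but
            // we authorize again on every access to sensitive data anyway.
            if (!authorizer.mayRead(sessionUserId, accountId)) {
                throw new SecurityException(
                        "Not authorized for account " + accountId);
            }
            return fetchFromDatabase(accountId);
        }

        private String fetchFromDatabase(String accountId) {
            return "...data for " + accountId + "...";  // stand-in for the query
        }
    }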

Our training has produced benefits. For starters, developers have a new security-focused mind set. We've found a few security bugs already, and our threat model has exposed some potential issues. This is great progress, and it comes from just one day of work. Imagine what we'll be like in a few months after a day or two of training and a complete milestone with security in mind!

It's never too early or too late to take a step back and start thinking security. I designed our course based on my experience at Microsoft, which is neatly documented in Michael Howard's book "The Security Development Lifecycle". I cannot recommend reading this book enough!

Got a security question? Post it here...

Tuesday, July 15, 2008

Software Testing Patterns: Sessions

Some of the most common things you have to test are sessions, cookies, and login/logout. This is the start of a pattern for that kind of testing.

Test Patterns:

  • Session hijacking: create several cookies for your site (varying login sessions). Tests: 1) Can you substitute the session ID from one session and use it in another session? 2) What happens with invalid session IDs? 3) Prevention: require that cookies travel only over HTTPS (set the secure flag when creating the cookie). A code sketch of this follows the list.
  • Cookie poisoning: change values in a cookie. For instance, if a cookie stores the check-out value in a shopping cart, the user could change that value before checking out.
  • Back button: do something that adds/modifies a cookie, then click 'back'. The state could then be out of sync (you're on a page which expects you NOT to have a value in your cookie which you actually do have).
  • Expiration: cookies without an expiration are 'not persistent' and should be destroyed at the end of the session. Persistent cookies represent a security risk because of their lifetime; make sure the expiration of a cookie is reasonable (30 minutes?) based on typical user scenarios.
  • Log in, thereby picking up a session ID, and make note of it. Close the browser, and then log in as someone else. Finally, update the cookie to the ID you noted previously, and hit refresh. Does the session change?
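
As promised in the session hijacking bullet, here's a short sketch of the prevention step using the standard javax.servlet API. The cookie name and the 30-minute lifetime (matching the expiration bullet) are illustrative:

    import javax.servlet.http.Cookie;
    import javax.servlet.http.HttpServletResponse;

    public class SecureSessionCookie {
        public static void addSessionCookie(HttpServletResponse response,
                                            String sessionId) {
            Cookie cookie = new Cookie("SESSIONID", sessionId);
            cookie.setSecure(true);    // only ever sent over HTTPS
            cookie.setMaxAge(30 * 60); // expire after 30 minutes
            response.addCookie(cookie);
        }
    }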

Software Testing Patterns

I'm convinced there are a series of testing patterns we could use. I haven't found any on the Internet, which is surprising, but I figure I'll start documenting them here. Look for blogs with "Test Pattern" in the title.

Don't be shy about contributing, please!

Quality Assurance or Testing Seminars In India

I've often wondered if there's a market for QA/quality assurance/testing seminars in India. I've thought of setting up seminars on how to test in cities like Mumbai, Bangalore, Delhi, and Chennai: one-day seminars where people pay a flat fee to get in and we spend the day really digging into testing. Some presentations, some 'labs', and a lot of Q&A.

If you think there's value, post a comment. If you'd be interested in partnering, send me an e-mail: joatgm at gmail dot com.

Should Freshers be Testing Software?

I have seen an alarming trend in India and worldwide. "Test engineers sought - freshers" (for those not familiar with the Indian IT scene, a 'fresher' is someone fresh out of college - 0 to 1 year experience, maybe even a little more).

Seems like everyone with an open tester position wants to throw a fresher at it. This is a huge mistake! What does a fresher bring to the table?
  • Enthusiasm
  • Lots of book learning; in India, that means rote memorization
  • Some programming experience.
I once helped a large, failing consumer electronics company move work offshore to save money (offshoring to save money is a BAAAADDDD idea--ask me why or wait for a blog on it). The development 'partner' on the project brought in a ton of freshers and some somewhat seasoned testers. What did we get? We had test engineers who didn't know what a defect report was, who'd never used a defect tracking tool, who had absolutely no domain knowledge. All they had to contribute was 1) a warm body and 2) some experience in Java. This is a recipe for disaster.

Is there room for freshers in an IT organization? Absolutely! Microsoft focuses most of its recruiting in the college space, but that's because the ratio of full-timers to new hires (at least in the US) is probably 10:1. New hires have people all around them who can mentor and grow them, and they are NOT encouraged/forced to do mundane and boring tasks. Well, not all the time... As a new hire at Microsoft, I was the release lead for German Mac Office 98, with 18 months of experience in the company! But that was after 18 months of incredible mentoring and learning.

For those of you hiring testers, don't think "Well, no one wants to be a tester - let's find someone who wants a foot in the door and get them in here." You're doing yourself a disservice, you're doing your customer/project a disservice, and you're really not helping the fresher either. Look for some passionate QA/testing professionals. Bring them in and let them establish a world-class test organization. Then hire freshers, train them up, and keep growing your organization.

OK - off my soapbox for now...

Thursday, June 12, 2008

Thanks to the Communities

I belong to several communities - Agile Testing group, Watij group, MSDN Software Testing Forum, etc. I just wanted to blog a quick thanks to all the active members of these communities. Just in the past 6 days, I have:
  • Gotten help getting Watij up and running and implemented
  • Gotten sample code to write a tool which dismisses error dialogs that are popping up in our automation
  • Gotten answers to several other questions
Engineering is tricky. Civil engineering is a moving field, but it moves a lot slower - advances in concrete, paving techniques, etc are probably all documented somewhere in a journal each month. Changes in technology? Rapid pace! I don't think any of us could keep up with all the platform, open source, commercial tools, and techniques which change over time. But by having these communities, we are able to give each other a leg up.

Take my dialog dismissal app... We're building automation in Selenium RC (Java). There's an issue in Selenium's Chrome browser where you cannot permanently accept a cert. Every time our automation switches into SSL, we get the warning. Until today we had been unable to run all 350 tests in one sitting, because none of us wants to dismiss that dialog 1000 times. With some sample code from Michael Johnson via the MSDN Software Testing Forum, I put together a quick C# app that watches for and dismisses this dialog. Problem solved, and for the first time ever, we can run our automation end-to-end.

Thanks, folks, for all the effort! We make each other's jobs easier this way. Kudos to everyone who puts time and effort into open source tools like Watij, Selenium, TestNG, and the like, too.

John O.

Friday, May 30, 2008

Advice to Developers from a Test Manager

A good friend pointed out that a blog on advice to developers from an experienced test manager would be helpful. With 13 years of tech experience now, I have a good idea of what works and what doesn't.

Most Successful Projects

I'm basing my advice on my experiences in the most successful projects I have worked on. These projects have been released on time, have had the highest possible quality, have thrilled customers, and have done so with minimal pain to the entire engineering team. These projects themselves are:

  • Microsoft Server ActiveSync: synchronization layer between Microsoft's mobile devices (Pocket PC and Smartphone) and Exchange Server.
  • Microsoft's Learning Essentials for Microsoft Office, a free enhancement to Office to help tune it more for the needs of students and teachers.
  • The LDS Church's It's About Love adoption site (unreleased, sorry!)

So let's jump in, shall we?

Collaboration

Successful projects all share the element of collaboration between development, test, and program management/product design. There's a feedback loop in this relationship which results in higher quality. Dev can provide input into design tweaks which might result in significantly lower development effort. Test can provide feedback which enhances testability and reliability.

During the engineering process, this collaboration continues both in refinement of the feature list and of features. That close relationship between dev and test also helps improve quality through quick defect remediation as well as mutual support in defect detection. When dev and test work together (as opposed to dev working in a vacuum and throwing something over the wall), invariably the result is fewer bugs, more bugs fixed 'right', and a significantly shorter bug turnaround.

I want to stress this point. It is critical for bugs to be detected and fixed early on. The longer a bug remains in the project, the more code there is written around it. It becomes like a sliver under the skin: the code festers, becomes infected, and removal and healing are hindered. This quick turnaround simply can't be called out enough. It's definitely a must-have.

An additional aspect of collaboration is seen when dev and test work together to find, investigate, and remediate defects. Frequently this takes the form of "Hey Tester - I checked in some code today. It's all passing unit tests, but I'm a bit concerned about it." Just a heads-up like that generally points a tester down a different path than they had planned to take.

Paired investigation can be so critical. In It's About Love, we have found a series of strange performance issues. I've got the time, experience, and tools to find the bug whereas my developer has the experience to tweak his code to find potential solutions. It's a symbiotic relationship. Without this paired work, he'd be blindly trying to fix something, throw it over the fence, and get it back the next day as a failure.

I list collaboration first because it is the key. Everything else you do to improve your product and code will stem off of collaboration, so you had better learn this skill if you want to be an effective, efficient developer.

Trust Your Tester

Ah, trust... that word of words, that concept of concept. I climb mountains, or at least I did until my kids got older. Now we hike, and someday we'll climb again. It's taught me a lot about trust. When you are scaling an ice field, your life is dependent on your skill and strength. It also lies in the hands of the person at the other end of the rope. You learn to trust your partner explicitly--you have to!

Some developers look down their nose at testers. Maybe it's because the developer feels they have more experience writing code. Maybe it's because the developer has a degree in computer science and the tester does not. Maybe it's because the developer feels he or she 'won the lottery' and the tester did not. It may even be that the developer feels like they are in a master/servant position. Believe me, these feelings will not help your project, not in the least.

Hear me on a few things:

  • There's no master/servant relationship. The tester does not exist to beat quality into your rushed, careless code. If this is your attitude, the first time you run into a testing professional, you are in for a rough time. There's nothing sadder, in my opinion, than a dev who rushes through their code, cannot be bothered by incoming defects, and then throws the whole mess over the fence to a tester to root out the issues. That's pure laziness, it's pride, and it's a lack of professionalism. If this is your approach to engineering, you seriously need to rethink your value to your organization. You also need to step down off your high horse and start to think about the value in a team approach. You need to learn to take pride in your results - not just that you threw together a bunch of web pages or produced an application thousands will use. Will they enjoy their experience? Is it the best thing you could have possibly built? If it isn't, wouldn't you rather learn to make it that way? Western society is built on the concept of pride in outcome, of making highest-quality products. Don't push together a bunch of crap, throw it over the fence, and expect QA to work it out. Or worse, don't get annoyed when you produce a lousy piece of software and testing has the audacity to find a bunch of bugs in it! Just because it compiles and passes your unit tests doesn't mean it's worth the cost and effort spent on it!
  • There's no lottery. You may be performing your dream job, and you may think being a tester would be the worst thing that could happen to you (equivalent to, say, working a cash register at McDonalds). Believe me, there are a lot of 'testers' who would rather be developers. But most testers have chosen this career path and they are as passionate and excited about being a tester as you might be about being a developer. Think about where the US Space Shuttle would be without QA, or the Lexus automobile... QA professionals love their chosen career. I, for one, would rather drive a city bus than be a developer. Don't get me wrong - I love to code, it's fun. But to sit in front of a blank screen day in, day out, doing the same thing over and over and over... Yuck! I don't know how you do it! I wouldn't trade places with you for anything. So you may be happy in your role, but believe me - there's no need to pity a tester, nor to look down your nose at one.
  • Your degree doesn't mean much. I once interviewed a candidate for a development position. He had a Bachelor's in CS from a prestigious, nationally-recognized school and had won multiple national competitions in programming. Unfortunately for him, he had no clue how to write elegant code. I have probably interviewed a thousand candidates, from PhDs to bachelors in CS, and I've hired maybe 50 or 60 total. Your degree is evidence of a lot of hard work (well, maybe...) and definitely of perseverance, but it meant nothing the day you left school and started working for pay. I have a degree in German Literature and International Relations, but I can roll up my sleeves, dig into code, and push quality into a project. It takes so much more than a degree in CS to make a good programmer, and some of the best programmers I've known don't even have CS degrees, or earned them as an afterthought. Sure, you gained some experience when you took your CS classes, and you got some exposure to programming theory. But what counts in software is experience, intelligence, and diligence. And your tester can have all of that and never have received a degree, or could have a PhD in physical therapy. Learn to trust the experience. I had a developer rip a test plan up one side and down the other because it had holes in it--that's great. But her excuse for not wanting to discuss the plan was "I have five years of experience - believe me, this is wrong." Well, that's just great, but the point is, she lost me there. At the time, I had 12 years of experience. I may not have known the ins and outs of the code (I was new to the project), but I knew quality and I knew we weren't approaching it right.

So when you're thinking about your project and questioning whether you should let that pesky tester into another meeting, don't listen to that voice that says you're better off without her. She's going to bring experience, passion, and perspective you simply don't have.

Stick to the Basics

I've been learning a universal truth in life, and that is that things generally don't happen all at once, but step by step. It truly is the little things that matter - it's the basics that count. You know, there's a reason why so many teams are jazzed about Agile, XP, and scrum. In many instances, these disciplined changes introduce quality and drive projects to a faster, more successful completion.

A lot of developers have glommed on to Agile like batter on a bowl. They love the 'freedom' of no rules, no documentation, no meetings - just pure coding. And who wouldn't? Unfortunately for them and for the projects, they don't understand: there are rules in Agile. Agile is based on XP. There are fundamental rules in XP like you don't move ahead without stabilizing what you've got, you don't cut corners, you work in pairs, you don't write code you don't need. These are basics that, when they are ignored, spell doom for your project.

As a test manager, my job--my very reason for being--is to make sure the project completes with highest quality. I'm going to be a stickler for the basics. I once worked two concurrent projects. They both started at about the same time. They were both green-field projects (white-monitor projects?). They were both staffed with some of the best people in the organization. One project was a short-term, single iteration. The other project was a three-cycle, year-long effort.  Both projects held frequent scrums where they went around the table and talked about progress. But that's where the similarities end.

The difference in approach was astounding, in spite of all the similarities. The team on the short project cut corners. When they encountered a roadblock on a given user story, rather than spiking in and working through it, they moved to another story. Soon, 95% of the stories in the project had been started, and none had been completed. No master build was produced; devs were checking in code after self-hosting it. No unit tests were developed ("we're too busy coding to write tests" was the excuse). Critical infrastructure tasks such as URL rewriting or SSL were excluded from the development phase because they were 'operations' tasks. Test was uninvolved because 1) test was understaffed and 2) there were no builds to use.

The other project was difficult to start. Testing was involved from day one, and we insisted on adherence to standards. Ironically, we discovered that, while .NET web services have strong typing out of the box, Apache's CXF does not; we had to figure that out. We had performance issues. But we dug in deep and worked hard - development put in an incredible showing, and test as well, in spite of being understaffed.

Can you guess the results? The first project released, barely. In the final four weeks of the project, official builds were produced and literally 350 bugs were opened. The code churn was unimaginable. Regressions were high--we were at about 2:1 (for every two bugs regressed, either one was reopened or a new bug was found). The product was released, but came in a month late and still ended up in a severely restricted beta. Developers worked insanely long hours, rather than cutting features, because no one story was less than 75% or 80% completed; there was nothing obvious to cut. The program manager got beat up over the project. I'd call it anything but a success.

The year long project? Cycle 1 of 3 was a solid foundation. Cycle 2 forged ahead - there were some challenges integrating, but two weeks of extended time worked through all that. There are performance issues yet to address, but Cycle 3 is going strong, the project looks great, and the team has really banded together.

Don't ignore the basics. You're not immortal, omnipotent, or above the basic rules of engineering.

Take Pride in Your Work

Related to focusing on the small things is the concept of taking pride in your work. I have to laugh at how many times in one of my web application projects I have entered a bug against the application which needed to be fixed for Firefox or IE. The developer would quickly code up the fix and throw it back to test. The first thing I would do is regress the bug in the browser it was found in. That generally worked like a charm. The second thing I'd do? Test it in the other browser. 5 times out of 10, can you guess what happened? It was broken.

No developer worth their salt should be beyond testing their fixes in multiple browsers. In the open source, Java/Oracle-heavy environments I've worked in for the past two years, I've seen a lot of Firefox fans in engineering organizations. That's cool - I like it too. But generally only 30% of your clients are using Firefox - that means about 60% of them are using IE 6 or IE 7. If you fix a bug in one browser, you still have to look at it in the other browsers. It is astounding to me that, after 5 or 10 of these bugs being bounced back, developers still don't get it.

If you are going to spend time fixing a bug, take pride in your work - fix it all the way, and make sure it's fixed before checking in your change!

Value Your Tester

OK - so you're XP and you write a lot of unit tests. That's great! Congratulations, you've taken the first step to becoming a great developer. But now it's time to recognize that there are still more important things to do. You need to realize that quality doesn't end when you finally pass your unit test--just like, when building a house, you're not finished framing when you cut the two-by-four.

Testers bring a unique perspective because they generally think in holistic, systemic terms. As a developer, you're thinking (or should be thinking) in terms of methods, functions, services, etc. A tester is thinking in terms of code and database, interface layers, and user interaction. You tie methods and functions together. A tester makes sure entire systems work together.

Want an example of this? Your unit tests stub out data coming in and out of a database to isolate the code layer. You prove your function works, but you don't prove it works with your database. Testers deal with real-world data, and they make sure your code works with data that actually comes from the database.
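
For example, a (hypothetical) unit test like this one proves the code layer only; because the DAO is stubbed, it says nothing about whether the real query behind totalFor() works against the actual database:

    import junit.framework.TestCase;

    interface OrderDao {
        double totalFor(String customerId);
    }

    class DiscountService {
        private final OrderDao dao;
        DiscountService(OrderDao dao) { this.dao = dao; }
        boolean qualifiesForDiscount(String customerId) {
            return dao.totalFor(customerId) > 1000.0;
        }
    }

    public class DiscountServiceTest extends TestCase {
        public void testQualifiesWithStubbedData() {
            // Stub: no database involved, so only the logic is proven here.
            OrderDao stub = new OrderDao() {
                public double totalFor(String customerId) { return 1500.0; }
            };
            assertTrue(new DiscountService(stub).qualifiesForDiscount("c42"));
        }
    }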

Need another example? Your job as a developer is to achieve something - to create something of value which fulfills a requirement (a user story). You can prove you are finished because you can demonstrate the functionality of your user story. You start at a broad point (there are any number of ways to accomplish your programming task) and drive narrower and narrower until you accomplish your work. Testing is just the opposite: it starts narrow (with your completed user story) and goes broader and broader, because there is an infinite number of test approaches. Testing can go on forever.

Another way to look at it is this: your work culminates in one completed user story. A tester's job can potentially never end, because there could be an infinite number of bugs in your code (it seems like it sometimes!). It is the old turn-of-the-phrase "You can count how many seeds there are in an apple, but you can't count how many apples there are in a seed".

Testers are trained in this. They are used to this, and a good tester is comfortable in this. This mindset is opposite to how you work - testers are different. If you embrace and welcome that difference, you will be more successful. Value that difference and watch what can happen as you collaborate on architecture, on infrastructure, even on code design.

Thursday, April 10, 2008

Defect Metrics

I had a friend ask me recently about bug metrics, how they're used, etc. I am breaking a personal rule about editing and just shoving my e-mail comments to Brian into my weblog - I generally take more care. But I noticed it's been almost a month since I've posted. I've been so busy (as my mother-in-law used to say, up to my ears in alligators), but that's no excuse!

These articles are all going to end up in a book sometime soon. I'm currently looking for a publisher, and working on an outline. So I'd love your feedback on this stuff.

Defect Metrics

The generally accepted standard is to apply severity and priority, typically severity 1, 2, 3 and priority 1, 2, 3. NOTE: a high-severity or high-priority bug actually has a LOW number, so a sev 1 is a really critical bug and a pri 3 is unimportant. Severity generally relates to the impact the bug has on a customer: sev 1 is generally data loss, blocked functionality, etc.; sev 2 is generally a major annoyance but with a workaround; and sev 3 is a UI, pixel-pushing bug. Generally. Priority relates to the importance of fixing a bug—for instance, fixing a data loss issue is a pri 1, fixing a pixel-pushing bug is a pri 3.

At the Church, we’re currently chatting about changing that a bit. We have a guy who proposed something very similar to what QA Associates (now part of Tek Systems – uck) proposed in an old white paper. That’s the concept of a matrix. At the Church, we are talking about matrixing severity (the impact of a defect) and frequency/exposure (how likely/often a customer is to encounter the bug). Based on that matrix we get a priority. All P1s have to be fixed immediately because either 1) they block further testing or 2) they at least mean we’ll continue to code on a sandy foundation. It’s a unique approach – the objective is to reduce time in bug triage/scrubbing to about 0, and to improve the overall quality because 1) our bug priorities are more granular, 2) the emotion of which bug to fix has been eliminated, and 3) we hold ourselves more to a bar (P2 and up all have to be fixed, for example). We haven’t presented this to management yet—hoping to wrap it up next week and present shortly thereafter.
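
To make the matrix idea concrete, here's a toy encoding of it. The priority values below are invented for illustration; they are not the actual matrix we're proposing:

    public class BugPriorityMatrix {
        // Rows: severity 1..3 (1 = worst). Columns: frequency/exposure 1..3
        // (1 = customers hit it constantly, 3 = rare corner case).
        private static final int[][] PRIORITY = {
                { 1, 1, 2 },   // severity 1
                { 1, 2, 3 },   // severity 2
                { 2, 3, 3 },   // severity 3
        };

        public static int priority(int severity, int frequency) {
            return PRIORITY[severity - 1][frequency - 1];
        }

        public static void main(String[] args) {
            // A data-loss bug (sev 1) that only a rare configuration hits:
            System.out.println("P" + priority(1, 3)); // P2 in this toy matrix
        }
    }

The point of the lookup is exactly what's described above: priority stops being an argument in triage and becomes a deterministic function of two observations.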

What metrics are important? Sheesh, could probably write an entire book on that. I really geek out on metrics, actually – something which surprised the teams I work with. They don’t know what to do with me. Here goes!

  • Income rate: how many bugs are being opened per day (or per build, but per day is easier to measure); a toy sketch of computing this follows the list. Watch that count increase as development begins and test becomes more familiar with the new features. Toward the end of the project, that number had better drop—in project IT, it tends to drop off quickly; in product software the tail is much more shallow. Some of the drop-off is due to testers being pushed to other projects, some of it reflects hitting the 'acceptable' quality bar (which is always lower in project IT than product software), and some actually reflects that the bugs are pretty well shaken out.
  • Resolved rate: always good to keep an eye on this. Is the resolved (but not closed) rate creeping up? That tells you test is not keeping up. Next, you have to ask yourself why... Is test lazy? Are they so busy 'working' that they can't test? Did they have a huge spike of bugs found the previous week, and are they just not able to chew through regression as quickly as dev is fixing them?
  • Bugs opened per day, by severity: so interesting to watch! See how many S1 bugs come in from start to finish, especially compared to S2 or S3. In the early phases, most of your bugs had better be S1s. This is the core architecture, and your testers had better be focused on rooting out core issues. As the income rate starts to drop, you should see S1s dropping and S2s/S3s picking up. That shows you've stabilized core components, and bugs are either coming in from peripheral components OR they are just niggling fit 'n finish bugs.
  • Same for priority (bugs opened per day, by priority)
  • Pie chart: severity: always interesting to see how many bugs are S1, S2, and S3. If I am in the last week of testing and we have a 70% S1 bug count, I *know* we are not done testing. We recently did a study and found 50% of the bugs in all of the Church's databases are S1. That tells me 1) we write really lousy code, 2) our testers mark S1 bugs incorrectly, or 3) we stop testing before the S2 and S3 bugs are found. A lot of it is the second issue... The project teams here all set S1 as their bar; S2 bugs rarely get fixed. It's part of people wanting to ship fast. In order to get a bug fixed, therefore, testers have to artificially inflate the bug's severity/priority. A healthy project will be around a 40%/40%/20%, or maybe even 30%/30%/30%, distribution. If not, dig deeper and find out why.
  • Pie chart: priority: pretty much the same. It’s interesting to see the distribution of bugs and how they shake out in priority.
  • Bugs opened per area: what’s your buggiest feature set/area in a given project? As a lead/manager you’ll want to add some test focus there. As a manager, you’ll start harping on your dev manager to figure it out.
  • Bugs opened per developer: this isn't really fair all the time. Sometimes developers get saddled with really bad legacy code, or they get very complex interface features. But still, looking at the bugs per developer is interesting.
  • Bugs opened per tester: hey, you can measure bugs and that's a good thing! A lot of people are sensitive to comparing testers (see the reasons above), but still, the best testers are the ones who find bugs and influence the team to the point where those bugs are fixed. After all, we pay testers to find bugs, right? Well, actually, that's wrong. We *should* be paying testers to prevent defects by helping development NOT check in defective code. But barring that, finding them is good too.
  • Pie chart: resolution: take a look at your resolutions (fixed, won't fix, duplicate, by design, etc.) and see how your bugs lay out. I was SHOCKED to see we have had a 90% fix rate in a current project, but a lot of that is because we're still finding P1s. Generally a 75% to 80% fix rate is normal; you don't want to fix them all! If you have time to fix every bug, your schedule is way over-estimated. Plus each fix represents the potential of one, two, or even three MORE bugs introduced. So keeping the fixes down is a good thing.
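
As promised, here's a rough sketch of how a few of these roll up from raw bug records. The field names and the sample record are invented for illustration; map them onto whatever your bug tracker actually exports.

    from collections import Counter
    from datetime import date

    # Invented bug records for illustration; substitute your tracker's
    # actual export fields.
    bugs = [
        {"opened": date(2008, 4, 1), "severity": 1, "priority": 1,
         "area": "login", "developer": "pat", "tester": "lee",
         "resolution": "fixed"},
        # ... more records ...
    ]

    # Income rate: bugs opened per day.
    income_rate = Counter(b["opened"] for b in bugs)

    # Bugs opened per day, by severity (the same idea works for priority).
    by_day_and_sev = Counter((b["opened"], b["severity"]) for b in bugs)

    # Distributions for the severity, area, developer, and tester views.
    sev_distribution = Counter(b["severity"] for b in bugs)
    per_area = Counter(b["area"] for b in bugs)
    per_developer = Counter(b["developer"] for b in bugs)
    per_tester = Counter(b["tester"] for b in bugs)

    # Resolution pie chart and overall fix rate.
    resolutions = Counter(b["resolution"] for b in bugs)
    fix_rate = resolutions["fixed"] / len(bugs)

Each Counter is exactly the data you'd feed to a chart: keys on one axis, counts on the other.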

I'll get some charts on these metrics and post them up sooner or later. Meanwhile keep the comments coming.

Monday, February 11, 2008

Sharing Answers to Interview Questions

The other day I was looking up some test-related topics on the Internet, and I came across an entire thread dedicated to interview questions and their answers. Readers were enthusiastically thanking the people who posted the questions, making note of answers and thinking this was going to further their career.

Now, I'm all about being prepared. When I prep for an interview, I generally ask a friend to mock-interview me, to get me into the right mindset. While at Microsoft, I was a frequent interviewer but still found it a challenge when the tables were turned. Mock interviews help remind me of the types of technical questions I will probably face, the need to speak slowly and clearly, and above all, how to stay cool under pressure.

There is a difference, however, between mock interviewing and trying to memorize answers to interview questions. I've seen this happen during campus recruiting (at some very prestigious schools, mind you, where you'd think the students would either be beyond that or be smart enough to know it's not going to work). Memorizing the answers to interview questions is dishonest, it's a disservice to the interviewee, and it's a disservice to the company.

Let's tackle the honesty part first. In the US, there is currently an epidemic of dishonesty in our schools. Kids regularly cheat; they've done it throughout the ages. But now the kids who are cheating actually feel no compunction about it! They find nothing morally wrong with sharing homework, or even cheating on a test!

In the workplace, the same standards of honesty seem to apply. Everyone has had the co-worker who takes credit for work he or she didn't do. The unfortunate have worked for the boss who misrepresents his work or flat-out lies about a pending promotion or the like.

Just because it's mainstream doesn't make it right. Dishonesty is viewed by every major religion and philosophy as wrong. Somehow our society is beginning to overlook those norms, though, and that moral foundation appears to be eroding.

Cheating on an interview is a serious disservice to the interview candidate. Generally, here's what happens... In an interview, when I sense a candidate is responding with a memorized answer, I immediately change the question. Not to something entirely new; I morph the boundaries of the current question just enough that a memorized response won't apply. Three things generally happen: 1) the candidate is absolutely flustered and pretty much gives up, 2) the candidate plows ahead (using the memorized answer) and totally blows the opportunity, or 3) the candidate, who actually does have a command of the topic, responds smoothly. In two of the three cases, however, that candidate is finished: there will be no 'hire' recommendation coming from me, and as hiring manager that usually means the end of the interview.

So the candidate makes themselves look bad in my eyes. But what is actually worse is the candidate who actually gets hired based on a misperception of their abilities. That candidate is generally placed into a position for which they are unprepared and underskilled. They are uncomfortable, they struggle, and ultimately end up fired because of their poor performance. Now that individual has a huge black mark on their resume!

It also hurts the company. When a candidate is hired, the company is looking to that person to further its business objectives. The failure to deliver on those objectives has a negative impact on the company's bottom line. It also takes time to fire an employee, meaning the company wastes even more energy.

If you're about to begin the interview process, reading up on sample questions is fine. I bet, in fact, that reading the answers to those questions will also be OK, if it's done with the idea that the interviewee will learn the principles being probed in the question, and if they will push themselves to answer variants of the question. Digging deep and acquiring topical fluency (the ability to not just answer a question, but to deal with permutations and demonstrate an understanding of the underlying concepts being probed) is a great thing. As a matter of fact, I strongly encourage testers who want to improve their theoretical skills to do this very thing - in groups, sit down and bounce around the answers to questions like this. But don't bet your next career step on a memorized answer--in the end, no good can come of it.

Friday, January 18, 2008

What Makes a Good Test Case

I recently answered this question on the MSDN testing forum and thought it'd be something good to post on. Here's my answer; read the conversation at http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=2597561&SiteID=1&mode=1 

There are probably two 'paths' to answering that - the first path examines why you're testing at all, and the second looks at how you're actually writing your cases. In other words: strategy and process (ugh, I know..).

Some organizations view and implement testing as being all about QA: validating that an application fulfills certain requirements (i.e., a tax calculation is completed and returns the expected result, or a server can serve up X pages per minute). I've worked at a company like that; they were implementing packaged software and they only cared that it accomplished what they bought it for. (Ask me what I think about that approach...) Other organizations have to be (or choose to be) much more focused on overall quality: not just 'will it fit my bill' but 'is it robust?' So there's a subtle difference, but the point is that a good test case is a solid step toward accomplishing the objectives. For instance, if the project is an internal line-of-business application for time entry, a test case which validates that two users can submit time concurrently and the data will retain integrity is a good test case. A case written to validate layout pixel by pixel would be a waste of time, money, and energy (would it get fixed, anyhow?).

Another point of quality for test cases is how they're written. I generally require my teams to write cases which contain the following (and I'm fine with letting them write 'titles only' and returning to flesh them out later; as a matter of fact, for one-time projects I generally shy away from requiring much more than that). There's a small sketch after this list showing one way to capture these fields.

  • Has a title which does NOT include 'path info' (ie, avoid titles like 'Setup:Windows:Installer recognizes missing Windows Installer 3.x and auto-installs'). Keep the title short, sweet, and to the point.

  • Purpose: think of this as a mission statement. The first line of the description field explains the goal of the test case, if it's different from the title or needs to be expanded.

  • Justification: this is also generally included in the title or purpose, but I want each of my testers to explain why we would be spending $100, $500, or more to run this test case. Why does it matter? If they can't justify it, should they prioritize it?

  • Clear, concise steps: "Click here, click there, enter this"

  • One (or more; that's another topic for a blog someday) clear, recognizable validation point. IE, "VALIDATION: Windows begins installing the Windows Installer v 3.1". It pretty much has to be binary; leave it to management to decide what's a gray area (ie, if a site is supposed to handle 1,000 sessions per hour, it's binary: the site handles that, or not. Management decides whether or not 750 sessions per hour is acceptable).

  • Prioritization: be serious... Prioritize cases appropriately. If this case failed, would it be a recall-class issue, would we add code to an update for it, would we fix it in the next version, or would we never fix it? Yes, this is a bit of a judgment call, but it's a valid way of looking at the case. Another approach is to consider the priority in terms of data loss, lack of functionality, inconvenience, or 'just a dumb bug'.

  • Finally, I've flip-flopped worse than John Kerry on the idea of atomic cases. Should we write a bazillion cases covering one instance of everything, or should we write one super-case? I've come up with a description which I generally have to coach my teams on during implementation. Basically, write each case so it will result in one bug. So for instance, I would generally have a login success case, a case for failed login due to an invalid password, a case for failed login due to a non-existent user name, a case for an expired user name or password, etc. It takes some understanding of the code, or at least an assumption about the implementation. Again, use your judgment.
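
Here's the sketch I promised: the template above captured as a simple data structure. The field names are my own shorthand, not any test management tool's schema, and the example is the atomic failed-login case from the last bullet.

    from dataclasses import dataclass, field

    @dataclass
    class TestCase:
        # Field names are my shorthand for the template above,
        # not any tool's actual schema.
        title: str                    # short and sweet, no path info
        purpose: str                  # the mission statement
        justification: str            # why this case is worth the money
        steps: list = field(default_factory=list)
        validation: str = ""          # one binary, recognizable check
        priority: int = 3             # 1 = recall-class, 3 = may never fix

    # One atomic case: failed login due to an invalid password.
    case = TestCase(
        title="Login fails with invalid password",
        purpose="Verify an invalid password is rejected with a clear error",
        justification="A leaky login would be a recall-class issue",
        steps=["Open the login page",
               "Enter a valid user name and an invalid password",
               "Click 'Log in'"],
        validation="VALIDATION: an 'invalid credentials' error appears "
                   "and the user is not logged in",
        priority=1,
    )

Notice the validation is binary: the error appears and the user stays logged out, or the case fails.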

I read the response that a good case is one that has a high probability of finding a bug. Well... I see what the author is getting at, but I disagree with the statement if read at face value. It implies a tester would 'filter' her case writing, probing more into a developer's areas of weakness. That's not helpful. Hopefully your cases will cover the project well enough that all the important bugs will be exposed, but there's no guarantee. I think the middle ground is appropriate here: a good case 1) validates required functionality (proves the app does what it should) and 2) probes areas where, if a bug is found, the bug would be fixed (in a minimal QA environment) or (in a deeper quality environment) advances product quality significantly.

BTW: one respondent to the question replied and said a good test case is one which brings you closer to your goal. Succinct!

Hope that helps!

John O.

john@yourtestmanager.com
http://www.yourtestmanager.com
http://thoughtsonqa.blogspot.com

How Can I Become a Better Tester? Part IV: Mentors

A major step for career growth, in any role, is to find a mentor. Mentoring relationships come in many forms, the most common of which are:

  • Working for someone
  • Working with someone
  • Having a formal mentoring relationship
  • Having an informal mentoring relationship
  • Reading and participating in specific test focus groups

Working For Someone

The best way to learn from someone is to work for them, elbow to elbow. I think of a couple of great test managers I had when I was a test lead. Dan, a test manager in Microsoft's Mobile Devices Division, is a fantastic manager. He is great with people, protects his teams from politics and struggles above, and understands quality. One thing I learned from Dan is that, while a tester's appetite for more time or resources is never satiated, we can get the job done anyhow. When I complained to him that we shipped a product with too few people, he asked "Did it ship on time?" [Yes] "Have there been any recall class bugs?" [No] "Then you must have had the right number of people."

Another great mentor was Mike, group test manager for Live Meeting. Mike and I didn't see eye to eye on everything, but what a great manager he was! He enabled and trusted people to do far more than they might ever have done on their own. He put me in charge of a beta release of Live Meeting (then called PlaceWare), and let me drive shiproom meetings for several releases of Live Meeting. He also knew how to have a lot of fun on the job. I miss many aspects of working with Mike, frankly. And I try to make every team member's experience the same: lots of opportunity to grow, lots of fun, and high expectations.

I'll make a wild statement: in your first few years of your career (generally two to three) WHO you work for matters significantly more than HOW MUCH you make. The first years of your career establish a foundation which will determine how quickly you will grow and what kind of habits you will form. If you're fresh out of college and have a choice between working at lower pay for an incredible lead, or earning more and working at a 'code factory', may I recommend you take the former? Establish yourself early on in your career, learn the principles, and THEN go out where you can impact and be rewarded accordingly.

Working With Someone

Almost as good as working for a great test lead is working with a great tester. At any level. In Microsoft as a junior tester, I worked side-by-side with a person who became a great friend. He taught me about equivalence classes, boundaries, and other key concepts. He showed me how to fight for bug fixes. He showed me where the bar should be!

Twelve years later, I was managing a team of 100. I worked with three test leads who taught me a lot. Sri taught me about getting the job done - he just sticks to the job until it's done (I've always prided myself in being known as a person who gets the job done, but Sri showed me how to take this to the next level). Debbie taught me about putting your head down and working through challenges - stick-to-it-ness, if you will. And Jenn taught me about taking pride in whatever you do. We never stop learning, especially by example.

My current manager is much the same. I have never worked for anyone as diplomatic as Tony (and I don't mean that in an office-diplomacy, fake-smile way). He really cares about people, how they feel, and what they think about their job. He's also ready at any moment to take advantage of a teaching/mentoring opportunity.

Having a Formal Mentoring Relationship

At Microsoft, each new hire had a new-hire mentor for their first three to six months. This mentor was the go-to for pretty much anything, from "where's the printer?" to "do you think I'm making my goals?" After that, everyone was advised to find a mentor within the company, and Microsoft even had an internal site dedicated to finding and maintaining a mentoring relationship.

The same should be true for you. If you're new to a company, I recommend you find an internal person to mentor you. Once you're established, look around and find yourself a mentor. Generally you want someone who's ahead of you in some way (technical skills, project management, leadership, etc.). Ask them to be a mentor and be very protective of the time you take from them; make sure every time you meet you have a productive conversation. Never ask them to do your work; instead, ask for help reviewing what you think is the right proposal, and get feedback on specifics.

Having an Informal Mentoring Relationship

A key requirement for me is to work with great people from whom I can learn. As I pointed out, I learn from the people I manage. I try to learn in almost every circumstance and from every person. If there's someone you learn a lot from, try to be around them as much as you can, even if you're not in a formal mentoring relationship.

Participate in Forums, Groups, Discussions and Seminars

Finally, there's a lot to be learned by reading and participating in forums, groups, discussions and seminars. I'm active on several forums (MSDN's testing discussion forum, the agile testing group on Yahoo Groups, etc.). I learn a lot just by lurking (although I'm so opinionated that it's impossible for me to lurk for long) or by joining in on the conversation. Other places to learn include seminars, networking groups, and even podcasts. (BTW: stay tuned for a podcast from me over on http://www.searchsoftwarequality.com, where I'm a Testing Expert.)

How Can I Become a Better Tester? Part III: Going Beyond Stated Requirements

So you've spent some time becoming more aware of quality and what quality means. You've been looking at the differences between a Mercedes and a Hyundai. You've also started reading up on quality and on engineering. Great start! What's next? Well, the next step is to realize you need to look beyond the stated requirements and dig deeper.

In my opinion, functional or business requirements docs are like nets. They catch a lot of 'big stuff' but they can easily let little (albeit important) stuff through. For instance, a business requirement might state that a tax calculation be performed on a per-item basis. At the same time, it might not mention that the overall tax, which is rounded down to the lower cent, needs to be calculated on the total purchase and not on the individual items.

Disclaimer

I have no idea what the rules are for tax calculation. I don't test it, never have. And even if I did know, it'd only be specific for the US. So please - look over the details in this example and try to recognize the key point...

End Disclaimer

The Tax Man Cometh

As an example, assume a user buys one $0.55 candy bar, a $1.75 bag of chips, and a $4.27 bottle of antacid. At 10% tax, that would look something like this:

Item        Cost     Tax (10%)   Rounded Tax   Item Total
Candy bar   $0.55    $0.055      $0.05         $0.60
Chips       $1.75    $0.175      $0.17         $1.92
Antacid     $4.27    $0.427      $0.42         $4.69
Total                                          $7.21

Tax is calculated on each item, and in all cases is rounded down to the lower cent.

However, if you factor tax on the sub-total of the items, it may actually be more! The subtotal of these three items is $6.57, and tax is $0.65. Calculated this way, the total is actually $7.22.
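
If you want to play with the arithmetic yourself, here's a quick sketch of both calculations. The 10% rate and the round-down rule are just this example's assumptions, per the disclaimer above.

    from decimal import Decimal, ROUND_FLOOR

    RATE = Decimal("0.10")  # this example's 10% rate, not a real tax rule
    items = [Decimal("0.55"), Decimal("1.75"), Decimal("4.27")]

    def round_down(amount):
        """Round down to the lower cent, as in the table above."""
        return amount.quantize(Decimal("0.01"), rounding=ROUND_FLOOR)

    # Per-item: tax each item, round down, then sum the item totals.
    per_item = sum(price + round_down(price * RATE) for price in items)

    # Subtotal: sum the items first, then tax and round once.
    subtotal = sum(items)
    on_subtotal = subtotal + round_down(subtotal * RATE)

    print(per_item)      # 7.21
    print(on_subtotal)   # 7.22

The penny of difference comes entirely from where the rounding happens, which is exactly the kind of detail a requirements doc can fail to pin down.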

OK - I agree. This is a case of a missed business requirement. But this is a great example of how a tester needs to be on his or her toes, asking questions and always exploring for what can go wrong.

Not Another Bad Lego!

I'll take a manufacturing flaw as another example, which hopefully you can extrapolate to software quality. I have three sons, spanning just 4 years, and as a family (Mom is the ringleader here), we are Lego enthusiasts. Birthday presents are almost always Legos; these days, they are super-complex Star Wars sets. Whenever my boys have saved up $10 or so, we're off to the store to buy a new Lego. Several times in the past couple of months, we have picked up Legos with one of two problems: 1) missing pieces and 2) poor fit. There's nothing more frustrating for a kid than to have their new Lego missing a critical piece--or for the parent who has to drive 20 miles to return the Lego!

I'm sure there's a manufacturing QA requirement here that each individual bag of pieces has a certain total weight. That's how they make sure all the pieces get into the box (how they miss a piece on this test is beyond me). Two days ago, my youngest bought himself a new Bionicle, only to find that the Bionicle's shoulder-claw (you have to see one to understand) was in the box but it didn't stay in place. Turns out, a manufacturing flaw caused the shoulder claw's insert tab to be too small, producing insufficient friction for it to stay.

So as a QA engineer at Lego, I'm sure I'd be dealing with the requirement to have all the parts in the box. But would I be asking about how we ensure EACH part shipped in EACH package has the right fit? That's going beyond the requirements.

Hey, You Stole My Car (Analogy)

OK, one last analogy and I think the point will be well-clarified. Let's pretend you are responsible for quality for an automobile. The requirements are that it be capable of a certain miles per gallon (MPG), that it run for so many successive hours, and that it fit so many passengers. So let's assume these are 25 MPG, a service life of 100,000 operating hours, and seating for 4.

In the late '70s, AMC made a car called the Pacer. It was small, rather fuel-efficient, and cheap. It sat five (although we squeezed many more than that into my friend's Pacer once). Service hours? My friend Kate couldn't kill it, no matter how hard she tried; she drove it for a year without putting oil in it!

A colleague of mine at Microsoft recently had to buy a car. He chose a VW Touareg V10 diesel. He actually hits 30 to 35 MPG. The car has a proven service life well in excess of 100,000 hours, and it fits 5 (very comfortably, I might add). Which car is of higher quality? Why?

I'll leave you with that thought. Dig deeper, go beyond the stated requirements. It's the difference between an Ambassador and a Skoda, a Pacer and a Touareg.