Thursday, December 27, 2007

How Can I Become a Better Tester? Part II: Read

OK - so you are paying more attention to quality now, right? Starting to ask more about what makes a Mercedes so much better than, say, an Opel or a Dodge. What's another step you can take to become a better tester? Simple - expand your horizons by reading. That's what this blog entry is about: what can you read?

  • The first place to start costs nothing: read off the Internet. Read blogs (http://www.testingreflections.com is a great one--there are many out there). Also, a site I'm affiliated with (http://www.searchsoftwarequality.com) is a great resource. There is a performance test expert who frequently contributes, as well as several experienced test managers.
  • Related to blogs, start reading test forums. I have found that the Yahoo Agile Testing Group is a great source of information. Another good source is MSDN's forum on software testing.
  • The next step is to read books on testing. Cem Kaner, James Bach, and James Whittaker are all great authors with significant experience. Whittaker has a series of "How to Break..." books that is rock-solid, full of good day-to-day techniques for your testing.
  • The next great source of reading is books on the topic of engineering. These books talk about best practices in the engineering discipline and always lead me to think about how my team is approaching a project or a problem. Some of my best test strategies have come as I've read development manuals such as:
    • Test-Driven Development in Microsoft .NET (Newkirk and Vorontsov)
    • The Object-Oriented Thought Process (Weisfeld)
    • Software Project Survival Guide (McConnell)
    • Writing Secure Code (Howard and LeBlanc)
  • Conferences and seminars: let's face it, a lot of conferences are really just about marketing something. But even those conferences offer the opportunity to sit with experienced testers and learn from them - either in presentations or during the 'chit-chat' portions of the conventions. A couple of thousand dollars is a lot of money to spend on a conference, but sometimes it'll pay back in spades, just with the new network of associates you build who can help you through a challenge you may be facing. I'm speaking in May 2008 at PSQT's Vegas conference, if you're interested in meeting me face-to-face.
  • Certifications: OK - to be clear, certification is NOT a substitute for experience. It alone means nothing, but a certification can often point you in the right direction as you improve your skills. It can build a framework for you to 'flesh out' with experience over time. PSQT offers certification, and there are a few other courses available. Again, I stress: certification in and of itself can only teach you the processes and steps recognized as historically good testing. It's not everything you need to be a great tester, but it is a good framework.
  • Practice: the best teacher is practice, and that's what you really need. Be really weird - start a testing group which meets after hours and just tackles weird testing problems (meet at a local cafe and talk about how you'd test the espresso machine or the retail point of sale system).

These are just a few ideas. There are many, many ways to learn. The key to learning, however, is wanting to learn - and for the right reasons. If you want to be a better tester so you can move up the food chain and become a better manager, well, you'd better think about a career as a project manager instead. If you want to be a better tester and that's it, trust me - you'll learn. And the growth opportunities will present themselves, too.

Next subject: going beyond the requirements.

Saturday, December 15, 2007

How Can I Become a Better Tester?

In all my blogging and participation, I have met many testers. One recently asked me "How can I become a better tester?" As I answered, I thought "Hmm - might be a good blog series!" So here we go. Most of this is aimed at beginning testers (1-3 years) although I have to admit I would benefit from some touch-up work in each of these areas!

I gave my friend four steps to take. I'm sure I'll come up with more, but let's look at these four briefly, and then I'll spend a blog or two on each as time goes by (hopefully I'll finish the last one sometime before the Rose Bowl).

  1. Become aware of quality - what is it, what does it look like, what does it feel like?
  2. Read - read everything you can get your hands on about quality. Start off with books about testing. I really don't even care which testing methodology (agile, monolithic beastly projects, etc.) - just read.
  3. Related somewhat to (1), recognize that quality means going beyond functional and business requirements. You need to think of yourself as the gateway to quality or, in an agile world, as the trustee for quality. Teach people about quality and manage up for quality.
  4. Find a great test lead, test manager, or test mentor to work with. If you can be employed on the same team, that's great. If you have to settle for a less formal mentoring relationship, that works, too. Learn everything you can from this person.

First: become more conscious about quality. I actually gave my friend a homework assignment - as this person lives in India, I asked them to compare two vastly different automobiles: the Ambassador and the Skoda. Ironically, the Skoda is the butt of many a joke in Eastern Europe, but it's actually been taken over by VW/Audi and, internationally, is an incredible automobile. It competes with (and beats) Honda in India as the ultimate status car within reach of the rising middle class. The Ambassador? Well, it's truly the Indianized car--it's one of two cars mass-produced throughout India's history. When you compare the quality of the two, well, it's like comparing a Chevette to a Lexus in the US. Or, I suppose, a Trabbi to a Mercedes in Europe. No offense intended towards the Ambassador--it's just reality.

So I like to think about quality and compare products and services. If you live in the US, try this - go to Mervyns or Kohls to buy a suit. Then go to Nordstrom. The difference is beyond belief! If you don't think consciously about it, though, it's hard to put into words what the difference is. Even a good business analysis might miss the point - consider the business requirements:

  • Stock on hand: variety of sizes, multiples of the most common sizes. Mervyns: check. Nordstrom: check.
  • Merchandise is on the rack, readily visible to the customer: Mervyns: check. Nordstrom: check.
  • Sales reps available to answer questions: Mervyns: check (you do have to seek them out). Nordstrom: check.
  • Someone who can ring up the transaction: Mervyns: check. Nordstrom: check.
  • Merchandise is priced right: Mervyns: check. Nordstrom: check.

A tester who focuses only on the business requirements here would say both companies are of equal quality. But think deeper - look at the experience in both situations. The Nordstrom experience is full of things that could be measured, but they're not necessarily the first things you think of. How friendly and helpful is the service? What is the atmosphere like? What's the merchandise like? Will it feel cool in summer and warm in winter? Is it scratchy? Is it comfortable overall?

So the purpose of the homework is to think deeper about quality. Is quality just meeting the requirements, or is there more? Is it about understanding the customer and knowing there are often unwritten requirements?

How would this apply in the real world? Well, I've already blogged about a huge data corruption issue that arose at a previous employer because the consulting firm running an implementation there insisted negative testing wasn't necessary. They focused (barely) on proving the software met requirements--since none of the requirements discussed issues like "the data needs to have integrity" or "the client must respond to and handle error messages from the server", no testing was done in that area. Thirteen million corrupt rows later, four workstreams came to a halt while the data team narrowed down the issue and implemented a fix - oh, and fixed the corrupt rows.
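To make "negative testing" concrete, here's a minimal sketch in JUnit-flavored Java. The SyncClient class is made up for illustration - this is not code from that project - but it shows the shape of the test that was never written: assert what happens when the server misbehaves, not just when it cooperates.

```java
import org.junit.Test;
import static org.junit.Assert.*;

public class SyncNegativeTest {

    /** Made-up stand-in for the client under test - illustration only. */
    static class SyncClient {
        private final boolean serverHealthy;
        private int rowsWritten = 0;

        SyncClient(boolean serverHealthy) { this.serverHealthy = serverHealthy; }

        /** Writes 100 rows; rolls back and throws if the server errors out. */
        int sync() {
            if (!serverHealthy) {
                rowsWritten = 0;  // roll back any partial writes
                throw new IllegalStateException("server reported an error");
            }
            rowsWritten = 100;
            return rowsWritten;
        }

        int rowsWritten() { return rowsWritten; }
    }

    // The requirements-driven test that everyone writes.
    @Test
    public void syncSucceedsWhenServerIsHealthy() {
        assertEquals(100, new SyncClient(true).sync());
    }

    // The negative test that was skipped: a server error must reach the
    // caller, and no partially-written (corrupt) rows may survive it.
    @Test
    public void serverErrorIsSurfacedAndNothingIsWritten() {
        SyncClient client = new SyncClient(false);
        try {
            client.sync();
            fail("expected an exception when the server reports an error");
        } catch (IllegalStateException expected) {
            assertEquals(0, client.rowsWritten());
        }
    }
}
```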

Becoming aware of quality is the first step to contributing to quality. It's what helps us lift our eyes beyond requirements and to be stewards of quality.

Wednesday, December 12, 2007

What's the Right Tool?

I am pretty active on several testing forums (Microsoft's MSDN forum, and a Yahoo! group for agile testing are the two I frequent the most). I cannot count the number of times someone sent an e-mail asking which big, monolithic tool is the right one for them.

My friend (and manager) has a great response to this. At the LDS Church, we are frequently asked by new testers, "When are we going to standardize on a <insert category here - performance, automation, etc.> tool?" His response: you're engineers. Look at the tools in use in the organization today, look at the tools available in the industry, and make the best decision for the organization. Sometimes you might give up a little functionality or ease of use in exchange for a tool which is already used in-house. You get a built-in support forum, and sometimes you can even leverage existing tests. Other times, you'll probably pick a best-of-breed tool.

Our performance testing is a perfect example of this. When I came on board, we were using JMeter, an open-source, Java-based tool. The benefit of JMeter? It replays Apache logs, and there's no better mimic of production than replaying production logs! The problem? We have been seeing weird things with JMeter - for instance, if a connection times out, JMeter doesn't drop it and move on. Instead, it opens another connection request but leaves the first one hanging. In some ad-hoc experimentation today, an engineer on my team started a test with just 5 connections and ended it at 10. Scale that up to the 400, 500, or 1,000 connections we were using to load up www.lds.org, and you end up with wildly unrealistic test scenarios! I never felt we could trust JMeter.
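Here's my reconstruction of that failure mode - a sketch of the pattern, not JMeter's actual source. The whole difference between a trustworthy load client and a leaky one is whether the old connection gets released before a new one is opened:

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.SocketTimeoutException;
import java.net.URL;

public class ConnectionHandling {

    // Leaky pattern: on timeout, abandon the old connection and open a
    // fresh one. Every timeout now permanently inflates the connection
    // count - 5 virtual users quietly become 10, then 20...
    static void requestLeaky(URL url) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setReadTimeout(5000);
        try {
            drain(conn.getInputStream());
        } catch (SocketTimeoutException e) {
            requestLeaky(url);  // BUG: 'conn' is still half-open
        }
    }

    // Safe pattern: always release the connection, success or failure,
    // so the load you apply is the load you think you're applying.
    static void requestSafe(URL url) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setReadTimeout(5000);
        try {
            drain(conn.getInputStream());
        } catch (SocketTimeoutException e) {
            // record the timeout and move on; cleanup happens below
        } finally {
            conn.disconnect();  // give the socket back
        }
    }

    private static void drain(InputStream in) throws IOException {
        byte[] buf = new byte[8192];
        while (in.read(buf) != -1) { /* discard the response body */ }
        in.close();
    }
}
```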

So I started looking around, and I have settled on WAPT (Web Application Performance Test Tool, http://www.loadtestingtool.com). It's a commercial product, but it only costs $350 per license. There's a lot I love about it - first, connection management seems to be much more reliable. Second, the built-in reporting is incredible. Third, there are actual support personnel if there's an issue. Oh, and fourth, I can bring a $250,000 enterprise server to its knees with a dual-proc Dell running WAPT, whereas I needed 10 machines to do the same with JMeter. I'm pleased as punch with it. However, we've had to abandon a year's worth of test scenarios and rebuild them. I've had to justify my decision to change tools. And I'm the only person in the org running WAPT (right now).

The bottom line is, there's a better way to get advice about tools. Don't ask "Which monolithic tool is the best?". Describe your test environment and the project you're working on, and ask "What are some tools people have used for this type of testing? What are pros/cons/strengths/weaknesses of these tools?" Gather information and make your own decision with advice from others.

You're the engineer on the spot, and you need to make the final decision. If you're a test lead in Gurgaon, don't let me decide for you what tool to use.

BUT: don't hesitate to ask! If there's one thing most testers have in common, it's an eagerness to share their (opinionated) opinions about something. And I personally love to give advice and help out. This isn't a censure of asking questions or a rant about them - it's about people needing to step up and be engineers.

JTO

Wednesday, December 5, 2007

WWII and Test Management

I've been on a WWII kick again, reading a lot about the war. Eisenhower's job was to decide on the strategy - do we attack Germany in one concentrated punch (Montgomery's plan), or do we attack along the entire front (Eisenhower's and Patton's plan)? Patton's job was to figure out how to implement--he had to sit and watch as Montgomery's punch plan failed in places like Arnhem. Then he got to decide strategy for crossing the Rhine and driving for the win - Berlin or the Eagle's Nest? Once these decisions were reached, they were carried down the chain of command - there was a big marshmallow layer of bureaucrats who procured supplies, managed replacements, etc. And then - and then there were the lieutenants.

The senior leaders (generals and majors and such) are the test managers. High-level strategy, supply management, personnel. They're negotiating at a high level with PM or dev on things like engineering process, and they may be fighting over budget and headcount, too. The lieutenants? These were the boots-on-the-ground leaders who took initiative and got things done. They trained for D-Day for over two years, learning to climb cliffs and take out bunkers. The day after D-Day, they encountered hedgerows and had to make up a new strategy on the fly - and a good thing they did, too: they were there, in the heat of battle, and they knew what worked and what didn't. They called in support from the Navy or the Air Force. These are your test leads, the folks rallying the troops. They're the folks who should decide to implement MBT or PICT or the like. They're the ones who need a great working relationship with dev leads (something I'm not doing great at right now myself - need to work on that). They're making sure the privates are moving along, they're training new leaders, and they're calling for sacrifice at appropriate times.
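(If you haven't run into PICT: it's Microsoft's free pairwise test-case generator. You feed it a plain-text model of your parameters, and it emits a small set of test cases covering every pair of values. Here's a minimal model - the parameters are made up for illustration:)

```
OS:      Windows XP, Windows Vista, Linux
Browser: IE6, IE7, Firefox
Locale:  en-US, de-DE, hi-IN
```

Save that as model.txt and run "pict model.txt"; instead of all 27 combinations, you get a tab-separated table of roughly nine test cases that still covers every pairing at least once.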

Black Box Testing - time to go?

Regarding black-box testers... Having lived in both the product software AND IT engineering worlds, I'm beginning to understand the root cause of the argument about the value of black-box testing. Developing user product software requires a ton of testing; at MS, we did this against functional specifications rather than business requirements. I'm convinced software applications (even apps like Oracle Retail, which suffers significantly from a lack of black-box testing) need serious black-box testing because the target audience and the ways the product will be used are so varied. Integrating business applications in an IT setting, however, really doesn't need as much, and yes - I believe this could be covered by business personnel rather than by QA engineers.

But here's the conundrum... Until the IT industry accepts testers as engineers and allows them to work with dev as code is developed (and this as a day-to-day norm rather than an exception to the rule), it'll be difficult to convince engineers to go into testing and to stay there. In my role as TM for Circuit City's MST project, I interviewed 30-40 people for open roles and found practically no one who could code. None of my 30-member team in Richmond could code beyond writing XML (well, one could--he was my best test lead, too). Very few of the personnel provided by IBM, the engagement partner, could code either. One tester they provided, who had a rather high bill rate, was justified because she had 'retail experience'. She had worked in a clothing store in NY during college!

So there's a chicken-and-egg problem which has to be solved one company at a time. Convincing management to 'up the bar' for test engineers, convincing development to be more agile, etc. are all necessary steps, and I think it's still going to take a while. I for one am totally convinced that you get more bang for your buck with white-box and gray-box testing, and that you get way more for your money hiring technical testers. But convincing a CS grad that she should consider test when all the glamour is associated with development? That's a challenge.

All that having been said, don't scoff at black-box testing. A great test engineer (a very rare commodity) is a good programmer who excels at white-box testing and can black-box test as well. Those finishing touches, especially in product development, are critical to project success.
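To put that distinction in code terms - a hypothetical example, invented here, not from any of the projects above: the first test below is black-box, derived purely from the stated business rule; the second is white-box, written after reading the implementation and aimed squarely at the ">=" boundary where off-by-one bugs live.

```java
import org.junit.Test;
import static org.junit.Assert.*;

public class DiscountTest {

    /** Made-up class under test: 10% off orders of $100 or more. */
    static class Discount {
        static double price(double subtotal) {
            return subtotal >= 100.0 ? subtotal * 0.90 : subtotal;
        }
    }

    // Black-box: derived from the stated business rule alone.
    @Test
    public void bigOrdersGetTenPercentOff() {
        assertEquals(135.0, Discount.price(150.0), 0.001);  // 10% off
        assertEquals(50.0, Discount.price(50.0), 0.001);    // no discount
    }

    // White-box: written after reading the code. The ">=" comparison is
    // exactly where an off-by-one hides, so probe both sides of it.
    @Test
    public void boundaryAtExactlyOneHundred() {
        assertEquals(99.99, Discount.price(99.99), 0.001);  // just under: full price
        assertEquals(90.0, Discount.price(100.0), 0.001);   // at the line: discounted
    }
}
```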