Friday, August 17, 2007

Recruiting & Hiring

You know, I was thinking... In my new role in QA at the LDS Church, I'm really beating the bushes looking to hire. We have an extra pair of challenges - the position is in Salt Lake City (non-negotiable) AND candidates must be members of the LDS Church, in good standing. That adds a factor of difficulty for sure!

In my time at Microsoft, I think I probably did 300 to 400 interviews, in the form of campus screens or full-length interviews. At Circuit, I was surprised at the caliber (or lack thereof) of many of our contract test candidates, and frankly some of our full-time hires--they were nowhere near the bar I had set before. I eventually had to settle when hiring contractors, because Circuit was well down the project path before I got there and there was no time to find the best candidates. But in each organization, there were also people who really stood out. What made them so special? Is there a way to find people of that caliber in the interview process?

As test managers, what are we looking for in hires? There are about six characteristics I look for during a phone screen or an interview (and, by the way, I usually get a feel for these in the first five minutes--see the book Blink for a discussion of the split-second decision):
  1. Passion: do they have a passion for technology, and do they have a passion for testing? I'm not necessarily looking for candidates who are total geeks and know everything there is to know about DivX or the latest Xbox game. That's good, but what I want is someone who's passionate about the technology they work with. Do they see where technology can make a company more efficient? Do they see how it can change someone's life? And I don't want a tester who's in test simply because they didn't meet the bar for development. Chances are, if that's the case, they aren't going to meet my bar either. So I need to weed through all these candidates to find the folks who are passionate about technology and about driving quality into tech projects.
  2. Skills: a successful candidate has to have something really special about their skills. At Microsoft, by the end of my eleven-year tenure the company was only hiring 'developers' into test roles. The argument was that a team full of automators would be more efficient than a team of interactive or UI testers. Eh... not sold, personally. If there is any absolute in technology, it's that there are no absolutes! So I'm looking for someone who's going to have something great about them - maybe they are bug machines, focusing on interactive testing but simply tearing up the application they're working on. Or they might be great coders, able to solve big tech challenges. Whichever - my requirement is that they have something they are great at. Oh - and it has to match my team's needs... right now, I have an incredible interactive tester who can script much of his testing. I'm really hurting for that incredible developer who can test as well.
  3. Break-it mentality: no bones about it - the candidate has to prove to me in an interview that she can take a sample application and test the snot out of it. What's a break-it mentality? Well, I'll give you an example... I would present candidates with various test questions when I was recruiting in India. The candidates who I felt would make (at least) good testers were the ones who went on and on and on and on generating test cases. I really didn't care what the sample question was, and it never really mattered. As long as the question is sufficiently complex that it has more than 20 or so cases, and as long as the candidate just spews out cases non-stop, you'll know. For the record, I interviewed 175 engineers in India and hired 25 (dev + test). Finding *great* engineers is a challenge, no matter where you go.
  4. Potential: I won't make a hire if I don't think the candidate is going anywhere. I'll probe for things like career growth, challenges, and goal setting. I may hire a person who flat out tells me he never wants to be a test manager--that is, if he demonstrates to me that he's been growing in his career to date, and that he's going to continue to grow. NOTE that this matters less when I'm staffing a contractor. I'm hiring contractors to tackle and finish an immediate job...
  5. Fit: finally, the candidate has to fit. If he or she doesn't fit on my team, well, it's a waste of everyone's time. A candidate in a poor fit sucks up management's time, peers' time (in the form of gossip and complaining) and their own time in terms of effectiveness. Co-workers are less willing to collaborate, and the square peg is left to do everything on her own.
  6. Intellectual horsepower, problem solving, and other skills: the final area I look at is a big lump of 'soft skills'. These are things like problem solving, communication, and sheer intelligence. If you've read How Would You Move Mount Fuji?, you've been exposed to the argument that looking for these skills is a Bad Thing, that you miss good candidates that way. Phooey! If I throw a puzzle question at a candidate, believe me, arriving at the answer is about the last thing I'm looking for. In a good problem-solving question, I'm giving the candidate an opportunity to show project management skills, demonstrate the ability to make trade-offs, and prove they can think on their feet while keeping their wits about them. Critical for an entry-level QA engineer? YES!! First of all, in my teams I expect that engineer to be as outspoken as my automation lead with five years' experience. Secondly, this also shows the candidate's long-term potential. If they can't think on their feet or solve problems, they might make it through their first year, but when growth is expected they're going to fall flat!
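To make the break-it mentality in item 3 a little more concrete, here's a hypothetical sketch. The triangle-classification question and the toy implementation below are my own illustration, not one of the actual interview questions; the point is that the interview signal is the breadth of the case list, which a strong candidate keeps growing well past the happy path.

```python
def classify_triangle(a, b, c):
    """Toy implementation, included only so the cases below are runnable."""
    sides = sorted([a, b, c])
    if sides[0] <= 0 or sides[0] + sides[1] <= sides[2]:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# A weak candidate stops after the first three; a strong one keeps going.
cases = [
    ((3, 3, 3), "equilateral"),              # happy path
    ((3, 3, 5), "isosceles"),
    ((3, 4, 5), "scalene"),
    ((1, 2, 3), "not a triangle"),           # degenerate: a + b == c
    ((1, 1, 5), "not a triangle"),           # violates triangle inequality
    ((0, 4, 5), "not a triangle"),           # zero-length side
    ((-3, 4, 5), "not a triangle"),          # negative side
    ((5, 4, 3), "scalene"),                  # same sides, different order
    ((10**9, 10**9, 10**9), "equilateral"),  # very large values
]

for args, expected in cases:
    assert classify_triangle(*args) == expected, (args, expected)
print("all cases pass")
```

And even that list isn't done - a candidate with a real break-it mentality will start asking about floats, strings, missing arguments, and so on.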

The next few posts will cover how I probe for each of these areas.

Monday, August 13, 2007

What's the cost of quality?

What is the cost of quality? Well, it's definitely an investment and, like all investments, it can have a varying rate of return. I like to look at cost in three 'investment' sizes:
  • Minimal investment: like the 'day trader' who is only willing to trade with $25, or $100, there's the minimal investment philosophy. The thought here is that quality investments are high-risk or low-return investments. The less squandered, the better. This is the organization that says test should only prove the positive case--if a business requirement exists for a given project, that requirement and ONLY that requirement should be tested.
  • Limited investment: this is an individual with limited resources who invests wisely but clearly can't invest across the board in a broad, distributed portfolio. This kind of organization scrimps in areas which may cost down the road, but those 'savings' are targeted and based on some sort of strategy. Safely limited investments might include short-cutting the authoring of test cases, risking a lack of portability or re-usability. The cases are used to ensure the initial and subsequent pre-release test passes are completed, but are not expected to be used post-release. A new web feature which is expected to remain static going forward would be a safe limited investment, for instance.
  • Full investment: this is a wealthy individual with the luxury of spreading her wealth across multiple portfolios and sectors. It takes money to make money, and this investor is deeply rewarded. Not many people have the options she does, though. This is the organization which invests heavily in the QA process: in-depth test case development, lots of automation, plenty of lower-priority fit-n-finish test cases, and multiple rounds of testing.

So which one is right? Well, it all depends (don't you hate that about testers? We're like lawyers--we seldom speak in absolutes!). I have a hard time thinking of a time when the minimal investment is appropriate -- perhaps when the project is an internal-only proof of concept which will never see the light of day and NEVER be the foundation (base code) for the actual project. But when millions of dollars, and hundreds of thousands of subsequent man-hours of development, are riding on your project, this is just a stupid approach. It's short-term thinking; it emphasizes savings over every other consideration. It's the penny-pinching fool who buys a 4,000 SF house decorated with paper trim. It'll never last!

So the real choice is between two and three - limited and full investment. At Microsoft, I participated in projects which were years in duration (some went years beyond their original ship date - I know, I know...). These projects saw repeated upgrades on top of existing code. Millions of users bought and used the products we produced, and therefore we invested heavily. It was the right thing to do! I can't think of a single release I was ever involved in where I said "We over-tested that...". I can sure think of a number of releases where I wished I'd had more time - even if it was just to automate, to benefit future releases!

At the same time, some organizations are working on web components which (for the most part) will release once and may have one or two maintenance releases. They aren't foundational code; each compiled applet is relatively standalone. On projects which have a very small chance of being revisited, is it safe to only write brief test case descriptions? If the functions are tested thoroughly, but little investment is made in repeatability or portability, is it OK? Is it the right business decision? I think so! If a team is building foundational blocks like a content management system or the business object layer, well, they'd better be sure to spend much more time QA'ing it. But if the work is for one or two releases, I think it's OK--actually, it makes the most business sense--to cut short on the test design/documentation and focus on execution.

How about you? What do you think is the right balance, the right investment? Can't wait to hear your responses!

Another blog on quality

Why do we need another blog on quality? Well, because I still don't think managers get it. Quality matters! I've worked at four different organizations now (I know - not much variety, but bear with me) and eleven different products/teams. I've seen teams where quality mattered (Microsoft Education Products Group - India), and I've seen companies where consultants convinced management that quality could be achieved on the cheap (IBM @ Circuit City). So I'm adding my voice to the fray, and I hope the arguments and discussions here will help turn the tide a bit.

What are my strengths/interest points?
  • The business case for quality and thorough testing of IT software projects
  • Recruiting and hiring
  • Outsourcing and offshoring
  • The test process--getting consistent

Your comments are welcome.