
Friday, December 18, 2009

Running MSTest Standalone

It’s been a very long time since I posted on this blog – I’ve been quite busy with work as well as writing articles for http://www.searchsoftwarequality.com (head over and check out my tips and Expert Answers).

I stumbled across this today: a how-to guide to running MSTest standalone. http://www.shunra.com/shunrablog/index.php/2009/04/23/running-mstest-without-visual-studio/

Full disclosure: during my last stint at Microsoft, I worked on this internally. However, I never disclosed how to do it. I do not know Shunra, and I honor my non-disclosure agreement to the fullest extent.

At the same time, I’m thrilled. MSTest is a great product and I’d really love it if MS produced a standalone version. According to this post by an MSFT employee, there’s a lot of discussion around it:

"this is one of the things on a higher priority list that we are considering for a future release, though this is yet being worked out; as such there is no formal statement. I am pushing for this & may have an update in a couple months."

http://social.msdn.microsoft.com/forums/en-US/vststest/thread/2897fb68-ef48-4941-a49e-fe8cb1b5aced/

So who knows, maybe we’ll see something soon!

Thursday, April 23, 2009

Blitz Blog: Quality Assurance Through Code Analysis

Today I read through a Parasoft whitepaper published on searchsoftwarequality.com, and I found it to present a great approach to static code analysis.

FULL DISCLOSURE: I write articles and am a "Software Testing Expert" for searchsoftwarequality.com. However, I don't blog positively about something unless I believe in it.

Parasoft is the manufacturer of an application security static code analysis tool. Of course, the white paper’s intention is to sell copies of their tool. Can’t fault them for 1) believing in their product and 2) wanting to pay the bills.

What I really like about this article is the approach they take to SCA. They are not pushing it as the end-all, be-all of code quality. They are realistic about SCA, stating that it's often over-used and, once over-used, ignored. Companies implementing SCA must be careful: don't enable a rule unless you really want to enforce it.

At the same time, they make a strong point about the value of SCA: it can really help a team drive quality upstream. Some policies an engineering team might adopt contribute to code readability, for example, while others contribute to security (dynamically-built SQL statements vs. parameterized queries, anyone?).

As an Agile engineer, I do take issue with their heavy-handed ‘management enforcement’ method. Agile teams need to adopt policies as needed. Cross-company Agile team representatives might establish company-wide policies and enforcement, but the Agile team itself should arrive at the bulk of the policy definitions.

One thing I like to see is SCA implemented directly in the build process: no build if the code fails with errors, and a team-wide email on warnings. This is the best way to enforce policies (but teams need to be selective about which policies they enable, so the rules don't overwhelm or 'over-stay their welcome').

Anyhow, blitz blog. Highly recommended reading!

http://go.techtarget.com/r/6673725/7930283/1

Thursday, April 16, 2009

Tenets of a Software Testing Team

If you had the chance to write out the tenets (unbreakable ground rules) by which your quality assurance team lives, what would they be? Here are some I've been thinking of:

  • We partner actively in quality-first development, by participating in the entire SDLC and by contributing tools, process, and time to the development activity.
  • We do not and cannot ‘test in quality’. We test to validate requirements, discover missing or inaccurate requirements and implementation, and to expose defects.
  • We use tools and process to discover defects, validate functionality, and improve quality, not for the sake of using tools and process.
  • Secure software is a top priority: we protect our users' privacy as well as our applications' reliability.

What about you? Do you have other tenets on your team?

Thursday, April 9, 2009

Blitz Blog: Persistent Quality Assurance

This is a blitz blog – quick, to the point, and hopefully helpful. Topic: persistence.

My wife has a cat (a HUGE cat, although my aspiring veterinarian son tells us she's a Ragamuffin, a breed which is 'big boned' by nature). This cat I do not like. I don't know why, but we've never really gotten along. Lately, though, she's really been trying. She comes into my office, right next to my desk, lies down, rolls over, and just purrs. All the while, she looks right at me with the kindest, happiest look on her face. Finally, I give up and pet her.

Can we learn from the big cat? When we want something, can we be pleasantly persistent, just waiting patiently for the attention we're looking for? I can look back on my career and see many times where some pleasant persistence might have paid off better than being a bit more forceful.

I’ve been reading “Fearless Change” and have found the message in this book to be the same persistent patience that our cat is demonstrating.

Wednesday, April 8, 2009

Software Testing: Is Your Quality Assurance group a Thermostat, or a Thermometer?

I frequently read a daily thought on families and how to improve your family. Yesterday's thought dovetailed neatly into some thinking I'd been doing about QA organizations: are you a thermostat, or a thermometer?

A thermostat is a device which regulates the temperature in your home; it controls the atmosphere. A thermometer is a device which merely reflects the temperature. To be a thermometer is to be reactive: all it does is indicate the temperature. To be a thermostat is to proactively set it.

In the family, a ‘thermostat’ parent doesn’t react to grumpiness or other negative emotions, but calmly sets the tone in spite of what’s going on around her or him. A ‘thermometer’ parent, on the other hand, gets ‘set off’ easily when children bicker, whine, or complain. Obviously my goal as a husband and father is to be a thermostat, although there are ‘thermometer-like’ days!

In a recent thread on the Yahoo Agile Testing Group (about continuous release), I noticed more than one poster saying something like "I tell management about an issue, and let them decide." This reactive approach to QA has long been a pet peeve of mine. It smacks of cop-out, and I've seen it develop into a victim mentality among many testers, myself included.

In my tenure at Microsoft (where I'm 'serving' my second stint, with a combined tenure exceeding 12 years), the testing organization has been consistently active in pushing for quality. A few things differentiate Microsoft from other organizations. First, the engineers wear the pants (though that's changing, slowly); in fact, nothing ships unless the program manager, development manager, and test manager sign off on it, and they won't sign off unless their teams sign off. Second, any individual has the right to stop a project, even the newest tester in an organization. Third, while the management structure is massively deep (I'm some six or seven layers below Steve Ballmer, current CEO and President), any individual can get the ear of their VP, Senior VP, and CEO.

So the culture I'm most familiar with is one wherein testing is an active participant in product development. The testing organization moves beyond a 'show and tell' relationship with management to being an active participant in business strategy. It plays the part of the thermostat, and rarely the thermometer.

I have worked in organizations where this is not the case, and I've seen plenty of evidence that it often isn't. In many IT and product software organizations, testing plays a thermometer role: we 'take the temperature' of a piece of software, and let management decide when to ship. In fact, I have been told by management, when I've tried to be a thermostat, to back off and let them make the decisions. In one case, I even refused to sign off on a risky release (a security-related issue on a piece of beta software), telling the release organization they were welcome to release, but they'd have to convince my manager to sign off because I wasn't going to.

The Testing Thermostat

Thermostats regulate the temperature. In my “Quality 2.0” vision, testing organizations participate in regulating quality: the test organization moves from just taking the project's temperature to helping set the quality bar. To do this, testing teams need to:

  • Include experienced, respected engineers. At one time in my career, I worked with Christian Hargraves, probably one of the most experienced engineers I've ever met. Christian leads and maintains the Jameleon project. He's a sought-after expert in the Salt Lake IT market, and has worked for the LDS Church as what I would call a Test Architect. Every developer in the organization knew Christian knew his stuff, and if Christian recommended something, that had weight. Qualified, respected engineers who are part of a test organization can help contribute to quality process and practices across the organization. They can influence core development teams to push quality into their process, from code reviews to adopting best practices.
  • Advocate actively for projects. Testers need to be active participants in release decisions, influencing senior management. In many instances, testers are shareholders. In every instance, testers share the benefits of a good release by enjoying continued employment, and in 'worst case' scenarios, testers go down in flames along with the rest of the company. In other words, our employment is just as dependent on a good release as any CEO's. If all we do is report quality, without influencing decisions, we are placing our careers in the hands of other people. I do not accept this point of view!
  • Drive quality upstream: we all know it's significantly cheaper to fix a bug in the planning, early implementation, or even release validation phases, as opposed to post-release. Testers need to work to push themselves and their influence upstream—the earlier we can look at code, the better.
  • Be involved in planning activities. Take an active role in scheduling. Drive feedback about previous releases (especially lessons learned) into current project planning. If the company has a history of underestimating test duration, speak up about it. Track and present data which shows a more realistic approach. Work patiently but consistently to drive reality into schedules.
  • Hold dependencies accountable. If the development team isn't dropping code early enough for full testing, don't stand for it! Your reputation as a tester, not to mention your continued employment, depends on a solid release. If the company runs with firm release dates, but development is dumping code in your lap a week before release, you're going to end up working 90+ hours that final week, and still releasing crappy code! Negotiate and set milestones, and hold development accountable for reaching them. Two things will result: first, your test organization will start to work a consistent, sane number of hours; second, quality WILL go up because you'll be looking at code earlier.
  • Be involved in establishing release criteria. This is critical: across your business, you need consensus on what constitutes release-readiness. If you leave this up to marketing, they'll say a set of customer stories needs to be developed. Leave it to development and they'll say that, plus possibly that all unit tests need to pass. As QA engineers, we have a unique perspective in that we understand the value of positive AND negative testing. We can help influence the establishment of release criteria which truly reflect product quality. Setting these criteria early, well before a project is nearing a release deadline, removes the emotion and sets a rational basis for declaring 'Ship Ready'.

These are just a few things a test organization can do to transition from a thermometer to a thermostat. What’s missing? Does your test team take a more proactive role somewhere else? Let’s hear it!

Friday, April 3, 2009

Found a Security Flaw (The Need for Software Testing Everywhere)

Today I was paying a bill online, and I crashed the company's IIS server (I promise, I did nothing intentionally wrong). The good news: they write their code rather securely, using parameterized queries rather than embedded SQL statements. The bad news (besides the crash itself) is that they were running with tracing enabled, so I saw the entire stack trace.
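
To make the contrast concrete, here's a minimal sketch of the two styles in Java/JDBC terms (the site in question was ASP.NET, but the idea is identical in ADO.NET); the table, column, and class names are hypothetical:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class AccountLookup {
        // Embedded SQL: user input is concatenated into the query text, so
        // input like "x' OR '1'='1" can rewrite the query itself.
        ResultSet findAccountUnsafe(Connection conn, String accountId) throws SQLException {
            Statement stmt = conn.createStatement();
            return stmt.executeQuery(
                    "SELECT * FROM accounts WHERE account_id = '" + accountId + "'");
        }

        // Parameterized query: the query shape is fixed up front, and the
        // driver passes the input purely as data, never as SQL.
        ResultSet findAccountSafe(Connection conn, String accountId) throws SQLException {
            PreparedStatement stmt = conn.prepareStatement(
                    "SELECT * FROM accounts WHERE account_id = ?");
            stmt.setString(1, accountId);
            return stmt.executeQuery();
        }
    }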

I called the company and got in touch with their tech guy. He was really polite on the phone and very open to feedback. I shared a bunch of info with him and thought I ought to document it. So here’s an Open Letter to All Admins Running IIS:

· You need to check your machine.config (and possibly web.config) files and make sure tracing is disabled or restricted to local requests: <trace enabled="false" localOnly="true" pageOutput="false" requestLimit="10" traceMode="SortByTime"/>. See http://msdn.microsoft.com/en-us/library/aa302351.aspx for more info.

· Either spin up Microsoft Baseline Security Analyzer (MBSA) or use IISLockdown and URLScan to scan your servers' security, especially the configuration. See http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=f32921af-9dbe-4dce-889e-ecf997eb18e9

· You might also benefit from a security review/audit from an experienced, independent consultant.

Security Focus has a decent article on the issue: http://www.securityfocus.com/infocus/1755 (it's old but still accurate). NOTE: there is a chance that running IISLockdown may break something. If your developers built the site with some dependency on what's actually a security hole, locking down your server could cause issues. I highly recommend that you run it against a test deployment, run all of your automated tests against that deployment, and do some manual testing as well.

You might also consider picking up copies of

  • “Improving Web Application Security” (MS Press),
  • “Building Secure Microsoft ASP.NET Applications” (MS Press),
  • “Writing Secure Code” (I *highly* recommend this – it’s the bible on secure coding, especially in MS technologies), and
  • “The Security Development Lifecycle” (MS Press).

From most to least specific, these books are pointers on secure IIS configuration, ASP.NET coding, coding in general, and strategies for maintaining application security throughout the lifecycle. The latter book is a great read if you're on a larger team and/or need to influence management to give you time for security-related work.

Thursday, February 12, 2009

DRAFT: What’s the Role of a Test Manager in Agile Organizations?

This blog posting is a draft. It represents a work in progress, and is posted with the hope of gaining peer review for improvement.

Many people have asked me—or challenged me—about the role of a test manager in agile organizations. With a basic premise of agile being the self-led team, who needs a test manager? Many agile engineers even bristle at the thought.

Reality shows, however, that not all teams or organizations benefit from a complete lack of management. Smaller engineering organizations can often do without it, but larger organizations perform better with some amount of centralized management. A team of ten engineers, with no role specialization, has no need for a test manager. A team of fifty engineers, which maintains a differentiation between test-oriented and development-oriented engineers for a valid business reason, will certainly benefit from a test manager (and, incidentally, a development manager).

What do I think is the role of a test manager in such an organization? I think the test manager contributes in five higher-order functions: interfacing with senior management; providing vision and leadership; fostering cross-group collaboration; leading specialties, tools and reporting; and providing organization management. That’s a lot of Dilbert-esque lingo! What do I really mean? Read on for details…

Interface With Senior Managers

Larger organizations, such as the team I led at Circuit City, have deep management structures. It's simply a fact of life. Those senior managers generally oversee cross-discipline organizations and benefit from having a manager between them and each discipline. In Circuit's MST project, I managed over one hundred engineers, with thirteen concurrent workstreams. My role was to interface between my manager and the test organization, communicating messages and centralized decisions in a way that addressed my teams' needs, interests, and concerns.

Test managers also act as a buffer, insulating the team from senior management's see-saw decision-making. Sometimes just shielding testers from the questions senior management asks is a huge favor—questions like "do we have the right number of testers?" can really cause concern in a test team. The test manager can research and answer that question, often without stirring up any worry at all.

When senior managers arrive at a strategic plan, the test manager’s job includes integrating that plan into the test organization. A great example would be an organization which has decided to migrate from waterfall to agile! The test manager’s role will change dramatically, moving from ‘manager’ to ‘coach,’ assisting the test team in understanding their role and helping to set engineering goals in terms of process adoption.

Another role test managers play in this larger organization is simply pushing back on bad ideas. Individual contributors are often heads-down (especially on agile projects), in the day-to-day work. If senior management stumbles upon a ridiculous idea, individual team members often lack the cycles (and sometimes the experience) to push back on that decision. The role of a test manager often includes helping senior management understand the possible side effects of a decision.

In some organizations, testing or quality assurance is not viewed as critical. These organizations may lack experience, or may have experience only with smaller projects than the one at hand. A test manager's role includes the responsibility to advocate for quality across the entire company. Sometimes this means helping customers understand the value of the 'extra cost' of a few test engineers on a project. Other times, it's helping executive management see the need for experienced testers. Sometimes the job includes preventing releases from happening when core quality issues beyond simple functionality could cost the company in money or bad PR.

Provide Vision and Leadership

Agile teams are (rightfully) quite busy in the day-to-day and may not be cognizant of everything going on in the company. The test manager’s responsibility in an agile organization can include driving vision. With an understanding of company growth plans, the test manager can be looking ahead to understand how to double or triple the size of the test organization. The test manager can also be thinking about strategic redirection – for instance, perhaps the test organization depends on developers for all automated testing. The test manager can set in place organization-wide plans to train testers, helping them become better automation engineers. The test manager needs to be the person asking “Why not?” and helping to drive the team forward toward that goal.

Sometimes agile teams fall behind due to circumstances beyond their control. The agile approach is to take those hits, learn, and move on. But sometimes there’s value in getting help. For instance, if a member of an agile team becomes ill during a critical period, a strong test manager can step in and ‘pinch hit’ for that tester while he or she is out. This isn’t advocating that the test manager is simply a ‘reserve’ resource. Teams which underestimate or over-extend themselves need the experience which comes from missing goals. But there are times when a test manager can contribute at an individual level.

Another leadership role is to help teams through transition. The change from waterfall to agile is a fantastic example of this – as a leader, the test manager needs to play a critical, positive role in helping testers work through the difficult migration process. The test manager provides structure and understanding, offers insight, and encourages team members as they face challenges. Above all, the test manager needs to be supportive and reassuring during the struggles which come with change. This leadership will help retain top talent, which might otherwise leave the organization due to the uncertainty and stress of change.

Finally, the test manager can represent the test organization and the company as a whole. A good test manager can be a key asset in customer retention. By contacting external customers, being aware of situations they’re facing, and communicating the status of fixes and changes, the test manager can help the customer through any rough spots in project implementation. Another role the test manager can play is sitting as a member of industry or standards organizations, making sure quality has a voice at that table and providing expertise in the subject.

Foster Cross-Group Collaboration

A critical role played by any management in an Agile project is eliminating road blocks and keeping the team moving. The test manager can play a pivotal part in this by interacting with other groups, with the business side of projects, and with the customer. Sometimes it takes a management-level title to ‘influence,’ even within the same company. More important, some negotiations and even simply driving closure can involve a lot of time, talking, and walking. By remaining proactively involved in projects, the test manager can help remove impediments, all the while allowing the team to remain heads down.

Also, very often in a larger organization, the test manager is the first person in the team's management chain with corporate purchasing privileges. Therefore, the test manager can often take care of licensing and other purchases and fees for supplies the team needs to carry on their work.

In larger engineering organizations, the test manager works closely with development and program manager/project manager counterparts. They collaborate on cross-discipline objectives as well as on budgets and scheduling. They are often the point-people for driving mitigation for lessons learned which surfaced in retrospectives.

Another role the test manager plays in a larger organization is introducing engineers around the company. A successful test manager is very familiar with projects and personnel across the entire organization; when one employee is looking for help, that test manager can make appropriate introductions.

Finally, the test manager can play a scrum master role in a scrum of scrums, helping to lead the meeting.

Specialties, Tools, and Reporting

Generally test organizations are split into teams in a matrixed structure. Most testers fall easily into a project team, where they'll be busy until that project completes. There are, however, occasions where testers are hired into more of a 'Center of Excellence' function: performance testing, automation and tools, or security-oriented testing, for instance. These testers are shared across the organization, brought in as experts to play a specific role on a release. Organizationally it's often most efficient for these testers to report to the test manager, who manages their assignments.

In a large Agile organization, there is often tension between the Agile maxim "do what works" and organizational efforts to standardize, especially on tools. Standardization sometimes helps keep costs down (when expensive commercial tools, such as load testing tools, are involved). It also aids in maintaining resource 'portability' (the ability for a tester to move from team to team without needing to learn new tools). A test manager needs to be very sensitive to the needs (perceived and real) of a team, but can often be instrumental in maintaining consistent toolsets across the organization.

Agile is notable, among other things, for a distinct lack of metrics. Status reporting is as important to senior management as it is a burden to agile teams. The test manager can play a critical role in pulling together status out of several test teams, presenting it to management in a format they’re used to. It seems a bit mundane, but in spite of the gains made moving to agile, many senior managers still yearn for detailed status reports.

Related to reports is the idea of common metrics. Just as the Agile team needs to 'do what works,' the Agile organization also needs to do what works. Sometimes that means a bit of a burden on the Agile team. With engineering expertise as well as an understanding of project management, a test manager can help Agile teams arrive at metrics which are easily generated (often in an automated manner, if unit testing or defect tracking systems are in place). There are metrics that work—they vary from company to company, but they work—and the test manager is a key player in defining these metrics and automating the gathering process.

Finally, the test manager can offload the ‘compliance’ burden from the Agile team, reporting up to senior management on the progress against various company initiatives. Again, maybe not exciting to the test manager, but critical to the company’s improved effectiveness and efficiency (well, we like to hope each initiative is critical).

Organizational Leadership

Even in the Agile organization, things like performance management have to happen. The test manager carries a significant administrative burden driving this process. This is a good thing – a good test manager not only helps his or her teammates grow in skills and experience, they also advocate with senior management for promotion and compensation increases.

As a test manager, my experience has always been that my teams are more effective when I screen candidates for open positions, and then pull team members in on formal interview loops. As a test manager, I have had responsibility for opening job postings, authoring job descriptions, and sourcing resumes for open roles. When the test manager takes on this administrative burden, it allows the Agile team to focus on their work. When decent candidates are sourced, the team can stand down briefly to perform interviews.

Closely related to recruiting is the role the test manager takes on in hiring and onboarding. The manager shepherds new hires through the hiring process, helps with signing up for benefits and the like, and makes sure the new hire settles in well. A key role I have seen too many test managers overlook is introducing the new hire around the company. Finally, the new hire will achieve their highest potential as quickly as possible only if the manager assists them in setting goals for impact, development, and growth.

A test manager's work is to drive his or her team to higher levels of effectiveness and efficiency. Working in concert with members of the Agile teams, test managers can accomplish this in part by researching and procuring appropriate training. Whether it's bringing in experts in Agile testing or arranging for on-site training in automation skills, training is critical for improving what a test organization can accomplish. Procuring that training can be a time-consuming task. The test manager also arranges for training which fits broad, cross-organization needs rather than training focused on specific teams.

Finally, the test manager is critical in helping with employee retention. A good manager is aware of the interests and professional goals of each employee in the organization, and makes the effort to provide that employee with opportunities in sync with those wishes. With a broad, cross-organizational focus, the manager can also help employees find new roles when, without that assistance, they might have left the company.

Help the Agile Process Work

A closing concept for the test manager's role in Agile: he or she is a make-or-break participant in the transition to Agile itself. It can take several months or even a year to successfully transition to an Agile process, and that transition can be painful—especially for testers! The test manager helps coach teams through the transition, lending support and encouragement, fostering courage, and assisting as power shifts to the self-directed team. This is not a top-down role, but a one-on-one coaching role, helping team members with courage, patience, and confidence.

Saturday, January 31, 2009

Quality Assurance: automated QA software testing tools

This week, I answered a question about automated QA tools. This topic comes up frequently – my base answer is “use the simplest tool that will get the job done”. There are a lot of open-source tools, but I find that JUnit (or NUnit, if you’re in .NET) and Selenium or Watij are a great combination for testing web applications.
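
To show how little glue that combination needs, here's a minimal sketch of a JUnit 4 test driving Selenium RC; it assumes a Selenium RC server running on localhost:4444, and the site URL and element locators are hypothetical:

    import com.thoughtworks.selenium.DefaultSelenium;
    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    public class LoginTest {
        private DefaultSelenium selenium;

        @Before
        public void setUp() {
            // Connect to a locally running Selenium RC server and launch Firefox.
            selenium = new DefaultSelenium("localhost", 4444, "*firefox",
                    "http://www.example.com/");
            selenium.start();
        }

        @Test
        public void validUserCanLogIn() {
            selenium.open("/login");
            selenium.type("id=username", "testuser");
            selenium.type("id=password", "secret");
            selenium.click("id=loginButton");
            selenium.waitForPageToLoad("30000");
            assertTrue(selenium.isTextPresent("Welcome, testuser"));
        }

        @After
        public void tearDown() {
            selenium.stop();
        }
    }

JUnit supplies the test lifecycle, assertions, and reporting; Selenium supplies the browser driving. That division of labor is why the pair covers most web testing needs without a commercial suite.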

Read my answer here: http://searchsoftwarequality.techtarget.com/expert/KnowledgebaseAnswer/0,289625,sid92_gci1346325_tax306196,00.html

Wednesday, October 22, 2008

Lessons for Teamwork in Software Quality

In my role as a volunteer youth leader in my Church, I have the opportunity to help put together a monthly activity. My group was responsible for the activity this month, and we chose to offer five team building exercises (all straight from http://www.wilderdom.com/games/InitiativeGames.html). We did four exercises in a rotation, and then all met together to perform the fifth. The exercises were:

  • Minefield: each member of a group paired up. We set up a maze using chairs and tables, and had our pairs blindfold one person. The second person in the pair had to 'lead' their partner through the maze. The challenge in this game is that all partnerships are talking together, at the same time. The purpose of the exercise is to emphasize the need for communication as well as the ability to pick out the voice you're listening to.
  • Toxic Waste: this is a group exercise. Participants encountered a bucket in the middle of a 10' circle and were told the circle was full of toxins. They had to move the bucket out of the circle, and their resources were a bungee cord and ten thin ropes. The challenge is ingenuity, teamwork, and (again) communication.
  • All Aboard! In this exercise, the team all stands on a tarp. The challenge is to fold up the tarp to be as small as possible, fitting the entire team on it, WHILE the team stands on it. Teamwork, spatial relations, etc.
  • Helium Stick: I conducted this one. The team lines up in two parallel rows, sticks their hands out with their index fingers extended. A small stick (I used a 1/4" wood dowel) is placed on their extended fingertips and they are challenged to lower the dowel. In fact, it goes up.
  • Egg toss: each team is given 25 straws, 20 cotton balls, a 5' piece of tape and an egg (unboiled). The mission is to build a crate/ship/container so they can drop their egg and it doesn't break.

What is interesting to me is the way teams interacted to get the problem solved. As I said, I conducted the helium stick challenge. As teams started, they were astonished that their stick ROSE instead of sunk. In a flash, right on the heels of that recognition, came frustration. We were split up into groups of boys and girls--oddly enough, while both groups expressed frustration, only the boys started yelling at each other. I kid you not, they were really ripping on one another. They quickly moved into the blame game. Each group (all four) needed to be stopped multiple times.

After frustration/blame, the groups moved into 'try harder' mode. I finally would stop each group and point out that trying harder wasn't working - maybe there was a better way. At that point, it was again quite funny - no single group picked an organized way to brainstorm. It was 'herd brainstorming' and they all just started shouting ideas.

Eventually someone 'won' through all the shouting. In one group, it was an adult leader, who started back into try harder. In a group of 12-year old boys, one of the boys got everyone's attention and he literally talked the stick to the ground. You need to understand the magnitude of that feat - you've got eight people all pushing up on a stick (pushing up because they have to keep their fingers on the stick at all times, and by doing so they force the stick up). Somehow this boy talked everyone, step by step, through the process of lowering the stick.

The girls were pretty creative--they caught on quickly that they needed to coordinate, and touch was the best way to do so. One group of older girls interlocked pinky fingers and lowered together. Another group split into two smaller groups on each end, and just put their hands touching, next to one another.

For me the takeaway was:

  • If something isn't working, the first response is to try harder. But if it doesn't work, it's not going to work any better if you try harder.
  • When something isn't working, teams gravitate toward frustration and even blame. It's so destructive! No one was intentionally pushing the stick up, but they kept yelling at each other about it.
  • Someone has to step up and coordinate the discussion. Brainstorm on ideas and then try one of them -- any one of them. There was no single right idea and the teams that just tried something generally succeeded, as long as it was something other than trying harder.

How can this apply to software development, software testing and software quality? Well, software is built by teams - even the smallest unit of engineering is a feature team made of two or more people. Communication is important, and it's critical that 1) no one resorts to blame and 2) each person is allowed to share their point of view. "Writing unit tests just so we can hit a goal of coverage seems to be distracting us from the real goal--our output is actually lower quality right now than it used to be" needs to be answered with "Why do you think that way?" and not "Uh huh - not true. Besides, you're not helping at all with all these bugs you're bothering me with!"

Good communication includes illustrating the current status, as in "Hey wait, everyone seems to be pushing up" and then a group discussion of what's causing that: "We're pushing up because we all want to keep our finger on the stick." Only then can the group move on and start thinking about the solution. So in an engineering organization, that might be "why are we getting a flood of bugs? Wait, testers are off doing something else (building a battleship test automation system) rather than focusing on testing daily builds". Once the situation is recognized, only then can the team react with an appropriate response.

As quality assurance/software testing teams work with and communicate with development counterparts, quality becomes a natural by-product. As teams discuss what practices are producing current results, they can move forward and improve how they approach the challenge of writing quality software. A little communication can move our engineering teams from try harder to work smarter, and output increases in both quantity and quality.

Thursday, October 9, 2008

What's the Best Software Testing/QA Tool?

Frequently people will post a question like "What's the best tool?". It drives me nuts sometimes! Software testing and quality assurance are definitely challenging professions, but it's not fair to depend on others to solve your challenges for you! I've answered questions like this in the agile-testing@yahoo.com group, and on the MSDN forum, and I thought it was time I brought my answers together into one blog posting. That way, I can refer people back to the posting.

MSDN Answer

Lifted from my post on MSDN's software testing forum:

An American car manufacturer once tried to make a car for all things - it was a 5-6 passenger car in three different models, each with four-wheel drive AND good fuel economy. It ended up mediocre at all things. You need to beware; resist the urge to pick one monolithic tool solution for such a wide variety of testing needs. The vendors (well, NOT Microsoft, of course) would love to have you believe their tool can do it all. And in some aspects, their software testing tools probably can. However, will that one tool solution be effective and efficient in everything you do? No.

I'm not advocating a pantheon of tools, but you need to think like an engineer more than a manager or a customer. You find the right tool for the job at hand. Over time, you'll settle down to a group of 3-4 tools and you'll stick to them.

I've never used <some tool the asker referenced> - someone else will need to comment on that product's ability to do everything you're looking for. My take? It'd be too good to be true if it actually could. And I tell my three sons all the time 'If it's too good to be true, it's probably not true'.

Be wary.

Here's how you [should] approach this as an engineer:

  • List out the applications you'll be testing 6-12 months from now

  • Think about the test scenarios, especially those you'll want to automate

  • Ask yourself how many releases of that application/project your company will perform--will quality assurance be a repeated process, or are you testing this software just once?

  • Based on that, how much automation is 'worth the investment'? Microsoft Office 2007 *might* still be running automation I wrote in 1997--probably not, but I know Office 2003 runs it AND that automation is still being run, every time there's a patch or SP released.

  • Now that you know how much investment to make, look for best-of-breed solutions for each project. Don't focus on all-in-one solutions; just look for the best tool for the job. Use demos, read reviews, ask questions. Don't ask "Is this the right tool?" but rather "Which tool do you recommend?" or "I'm considering this tool for this job - does anyone have experience using this tool to do this?"

  • Once you have a short list of tools, look and see if there is commonality/overlap. You'll see patterns. It's possible Rational or another all-in-one tool will appear in the list; it's equally possible none of those tools will appear.

  • If there's good overlap, ask yourself what you think the pain will be if you force a tool into a job it wasn't designed for. If you can live with the pain, go for it... If not, keep looking or open yourself up to a larger set of tools.

Hope that helps (in spite of not answering your question),

John O.


Commentary

OK - I searched through a number of documents and can't find my other posting on this subject, so I'll just write it once more.

  1. To all testers: look before you ask. Really! If you are wondering what tool you can use for a given test type, use Live Search and look for information! Performing a minimal amount of research shows respect to the audience you're turning to for help, and can actually prevent a question now and again. QA is all about searching for product defects; apply that searching capability to answering your questions.
  2. You're more likely to get help on a specific question rather than a broad question. For instance, asking "What's the right tool?" won't get you much. However, "I've evaluated Selenium, Selenium RC, and Watij--given my situation (describe it), which tool do you recommend?" will get you detailed, useful answers.
  3. Talk about the problem you're trying to solve. So "We realized we'll be running tests over and over and over again, but our UI changes frequently. What strategies can we take...?" is a question that raises the problem and lets people know what help you're looking for.
  4. Asking "what's the best tool?" is like asking "What's the best car?". If you live in Germany and drive the autobahn frequently, the best car for you is far different from the best car for someone who lives in Bombay, battles thick traffic, and drives Indian roads (notoriously rough). Be specific, give details, and look for recommendations. Software testing tools are the same way - a screaming "BMW" test tool will fall apart on a Bombay road. A right-hand drive Jaguar would be totally out of place on an American highway. A performance test tool isn't the right way to automate repeated quality assurance tests. The right tool is dependent on the project, the skill sets on the QA team, the timeline, and several other factors.
  5. Solve your own problems. The Internet is an incredible tool and offers us all a ton of opportunity to not have to reinvent the wheel. But don't ask other people to do your work for you! Ask for advice. If you want someone to solve your problem, ask one of us to consult (for pay) and bring us in. We'll be glad to get the job done for you!
  6. Give back: as you grow and learn, give back... Don't post a question, get your answer, and disappear. Remain an active participant in the community and 'pay it forward'.

Summary? Be specific, research before you ask, solve problems, and give back. That's how to get answers online--in a sustainable fashion.

Friday, October 3, 2008

What Makes a Good Automation System (automated QA/Quality Assurance/Software Testing)

So in my new job, one of my first tasks is to put together an automation system--by this I mean a harness and a framework. The process has had me thinking (and talking) a lot about what makes a good system in general.

The automation harness is the system used to schedule, distribute, and run tests and to record results. In the open source world, some tools used here include NUnit, JUnit, and TestNG (my personal favorite). These tools all work in a one-off situation - they are run locally out of the dev environment or via the command line. In software testing at Microsoft, though, a one-by-one approach to automation is useful for 1) developer unit testing, 2) tester unit testing/test creation, and 3) failure investigation. For the 24/7 test environment we're building, however, this isn't sufficient. We need a centralized scheduling tool that allows us to push tests out to multiple clients (to simulate load and to parallelize test runs rather than running them serially). So we're working internally at Microsoft, evaluating the existing automation harnesses available and trying to find the one that works best.
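
To illustrate the 'one-off' model: this is roughly all it takes to run TestNG programmatically on a single machine (the test class is a hypothetical placeholder). Notice what's missing for a 24/7 lab: scheduling, distribution to client machines, and a central results store.

    import org.testng.Assert;
    import org.testng.TestNG;
    import org.testng.annotations.Test;

    class SmokeTest {
        @Test
        public void mathStillWorks() {
            Assert.assertEquals(2 + 2, 4);
        }
    }

    public class LocalRunner {
        public static void main(String[] args) {
            // Runs the given test classes in this process, right now.
            // Fine for a developer's desk; not enough for a test lab that
            // must queue runs and farm them out to many client machines.
            TestNG tng = new TestNG();
            tng.setTestClasses(new Class[] { SmokeTest.class });
            tng.run();
        }
    }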

A big factor for me in this selection is finding a harness which is configurable. At Microsoft, we are pushing the envelope in testing in a variety of ways: code coverage analysis, failure analysis, and several similar activities which allow us to streamline our testing, reduce test overhead, and automate many testing tasks. This means our framework MUST be extensible - we have to be able to plug in new testing activities.
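
To make 'extensible' concrete, here's a bare-bones sketch of the kind of plug-in seam I mean; this is a hypothetical interface, not our actual harness, but it shows how activities like coverage or failure analysis could snap in without touching the core:

    // Hypothetical plug-in seam: the harness calls each registered activity
    // around every test run, so new capabilities (code coverage collection,
    // failure analysis, log capture) can be added without core changes.
    public interface TestRunActivity {
        void beforeRun(String testName);
        void afterRun(String testName, boolean passed);
    }

    public class CoverageActivity implements TestRunActivity {
        public void beforeRun(String testName) {
            // Start the coverage profiler for this test (placeholder).
            System.out.println("coverage started for " + testName);
        }

        public void afterRun(String testName, boolean passed) {
            // Flush coverage data keyed by test name (placeholder).
            System.out.println("coverage saved for " + testName);
        }
    }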

A second element of the automation system is the framework. This is the abstraction layer which separates our automated tests from the application under test. This abstraction layer is critical to good automation. If, for instance, you are automating a web application, you will probably experience a lot of churn in the application layout. You do not want your automated tests hard-coded to look for certain controls in certain locations (i.e., in the DHTML structure). By abstracting this logic, your test can call myPage.LoginButton.Click(), and your abstraction layer can 'translate' this into clicking a button located in a specific div. In some organizations, this framework is purchased. At the LDS Church, we leveraged both Selenium RC and Watij to build this framework, developing most of it ourselves internally (kudos to Brandon Nicholls and Jeremy Stowell for the work they did in this capacity).
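
As a sketch of what that abstraction layer looks like (locator and page names are hypothetical, and this shows the pattern rather than our actual framework code): the test calls an intention-revealing method, and only the page object knows where the control lives in the DOM.

    import com.thoughtworks.selenium.Selenium;

    // The abstraction layer: tests say WHAT to do; this class knows WHERE.
    public class LoginPage {
        private final Selenium selenium;

        public LoginPage(Selenium selenium) {
            this.selenium = selenium;
        }

        public void clickLoginButton() {
            // If the button moves to a different div tomorrow, we update this
            // one locator and every test that logs in keeps working untouched.
            selenium.click("//div[@id='header']//input[@id='login']");
            selenium.waitForPageToLoad("30000");
        }
    }

    // In a test: new LoginPage(selenium).clickLoginButton();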

The challenge felt by most test organizations is two-fold: 1) finding the engineering talent to build these systems and 2) making the investment in innovation. Ironically, the very thing which can free up resources for other tasks (automated testing) is the thing most managers don't want to spend time on! This makes sense, sort of... managers don't like to invest in activities which (in their mind) don't contribute directly to the bottom line. In all but the smallest of projects, though, this makes no sense: test automation isn't a sellable product, but if automated tests can free up a day or two of test time, that's a day or two spent doing other activities, each and every time the automation runs.

Recruiting top talent is also a challenge. In both IT organizations where I worked, there was a culture among developers that testers weren't engineers--they were click testers. Testers couldn't give input on 'extremely technical concepts' like architecture or potential bug causes; they were there to pick up code as developers released it, and then to find bugs. It's no wonder that it's so challenging to hire engineers into testing - when they're treated like that, they're either going to leave or move to development!

So the keys to a great automation system are: 1) a solid, extensible and flexible harness, 2) a robust framework, generally customized to your test activities, 3) management commitment to invest in innovation and automation and 4) top engineering talent and the culture to reward them for their contribution.

Am I missing anything?

Wednesday, October 1, 2008

What A Feeling!

So I've done a relatively good job of being calm and collected in my first two days at Microsoft. Your first day starts in a long line filling in I-9 forms. Then you enter your address and other contact info into the Microsoft system via an internal web form. Finally, you sit for another 7 hours in a room while they teach you about benefits and the MS culture. At the start of that first day, you're officially a Microsoft employee, but you don't get your card key or anything.

Day two starts in the Big Room again, with a discussion about corporate ethics, legal issues, and a discussion with a couple of recent Microsoft hires. Finally you get your card key and are sent to find your manager. Oh and by the way - the room has anywhere from 100 to 150 people in it. Yup - Microsoft starts that many people, each and every week (well, maybe not the week of Christmas or New Year).

So at about noon I launched off on my own. Unlike most new hires, having worked here for 11 years, I know where the buildings are, where parking is, etc. So I zipped straight to the building that serves as my temporary office whenever I'm here in Redmond. I swiped my badge and got access to the building. A little smile crept up on my face.

About an hour later, after picking up my laptop and getting it set up, I went for lunch with another new hire for our Utah group. As I swiped my card and stepped into the cafeteria, I had a completely involuntary reaction: I jumped, threw both hands high in the air, and shouted "Yes!! I'm back at Microsoft!" Later during lunch, Tim told me how cool it was to spend time with someone who is so excited about working at the company. I have a bounce in my step which has been missing for many, many years.

I can't describe it. I was in software testing/quality assurance at Microsoft for 11 years. There were great days and there were really challenging days. I left two years ago to lead QA "for the world's largest retail IT project" at Circuit City. What an experience that project was. Quality assurance to team leads (mostly IBM project managers) meant proving happy path and avoiding negative testing. It was a cultural shock, to say the least. Testing software at the LDS Church was somewhat better. The people, for the most part, were great (surprisingly, there were exceptions - people who behaved less Christ-like than even at Microsoft!). The QA team suffered from a total lack of respect from development, however. And I found in that IT organization that everyone has a special little niche. There are enterprise architects, application architects, security 'specialists' (people who know about security policy, but don't know much about penetration testing), developers, and quality assurance engineers. If you dared to stray outside of your niche, well, that meant you were stepping on someone else's toes.

Being back at Microsoft means I have an equal seat at the table as an engineer. It means I'll have to work with other engineers to tackle really, really challenging problems. First challenge: building an automation harness using existing technologies at Microsoft, then building our own framework (abstraction layer) within which our tests run. Additionally, we need to take the mandate to "build virtualization management technologies" and turn that into a released software application: ideation, product planning, product specification, development, and release testing.

We also have the challenge of hiring incredible C++ developers and testers (engineers) in the Salt Lake Valley. Finding developers is pretty easy, but finding developers who respect QA and understand engineering excellence? A challenge. Finding a software testing professional who has the guts to take an equal seat at the table? A challenge!

But that's what I love about being back. I feel like, after two long years, I can finally do my best work and reach my full potential. I can bring all of my 14 years of engineering experience to bear on a software challenge. If I have an idea, I can run with it. I can provide input to the user scenarios, to application architecture, and to how we push quality upstream. I can prevent software defects, rather than find them!

I know everything won't be perfect. I left Microsoft for reasons, and those reasons haven't all changed (although judging by much of what I heard in New Employee Orientation, the last two years have been a time of growth and improvement for the company). There will still be bad days, there'll still be the struggle for a good work/life balance (now called 'blend'). But I have incredible health benefits for my family, stocks and bonuses again, and I have the chance to be challenged every day again. And I'll be working with super-smart people every single day. THAT is a cool thing!

Friday, May 30, 2008

Advice to Developers from a Test Manager

A good friend pointed out that a blog post offering advice to developers from an experienced test manager would be helpful. With 13 years of tech experience now, I have a good idea of what works and what doesn't.

Most Successful Projects

I'm basing my advice on my experiences in the most successful projects I have worked on. These projects have been released on time, have had the highest possible quality, have thrilled customers, and have done so with minimal pain to the entire engineering team. These projects themselves are:

  • Microsoft Server ActiveSync: synchronization layer between Microsoft's mobile devices (Pocket PC and Smartphone) and Exchange Server.
  • Microsoft's Learning Essentials for Microsoft Office, a free enhancement to Office to help tune it more for the needs of students and teachers.
  • The LDS Church's It's About Love adoption site (unreleased, sorry!)

So let's jump in, shall we?

Collaboration

Successful projects all share the element of collaboration between development, test, and program management/product design. There's a feedback loop in this relationship which results in higher quality. Dev can provide input into design tweaks which might result in significantly lower development effort. Test can provide feedback which enhances testability and reliability.

During the engineering process, this collaboration continues both in refinement of the feature list and of features. That close relationship between dev and test also helps improve quality through quick defect remediation as well as mutual support in defect detection. When dev and test work together (as opposed to dev working in a vacuum and throwing something over the wall), invariably the result is fewer bugs, more bugs fixed 'right', and a significantly shorter bug turnaround.

I want to stress this point. It is so very critical for bugs to be detected and fixed early on. The longer a bug remains in the project, the more code is written around that bug. It becomes like a sliver under the skin: the code around it festers, becomes infected, and removal and healing are hindered. This quick turnaround simply can't be called out enough. It's definitely a must-have.

An additional aspect of collaboration is seen when dev and test work together to find, investigate, and remediate defects. Frequently this takes the form of "Hey Tester - I checked in some code today, it's all passing unit tests but I'm a bit concerned about it." Just a heads up generally points a tester down a different path than they may have planned to take.

Paired investigation can be so critical. In It's About Love, we have found a series of strange performance issues. I've got the time, experience, and tools to find the bug whereas my developer has the experience to tweak his code to find potential solutions. It's a symbiotic relationship. Without this paired work, he'd be blindly trying to fix something, throw it over the fence, and get it back the next day as a failure.

I list collaboration first because it is the key. Everything else you do to improve your product and code will stem off of collaboration, so you had better learn this skill if you want to be an effective, efficient developer.

Trust Your Tester

Ah, trust... that word of words, that concept of concepts. I climb mountains, or at least I did until my kids got older. Now we hike, and someday we'll climb again. It's taught me a lot about trust. When you are scaling an ice field, your life is dependent on your skill and strength. It also lies in the hands of the person at the other end of the rope. You learn to trust your partner implicitly--you have to!

Some developers look down their nose at testers. Maybe it's because the developer feels they have more experience writing code. Maybe it's because the developer has a degree in computer science and the tester does not. Maybe it's because the developer feels he or she 'won the lottery' and the tester did not. It may even be that the developer feels like they are in a master/servant position. Believe me, these feelings will not help your project, not in the least.

Hear me on a few things:

  • There's no master/servant relationship. The tester does not exist to beat quality into your rushed, careless code. If this is your attitude, the first time you run into a testing professional, you are in for a rough time. There's nothing sadder, in my opinion, than a dev who rushes through their code, cannot be bothered by incoming defects, and then throws the whole mess over the fence to a tester to root out the issues. That's pure laziness, it's pride, and it's a lack of professionalism. If this is your approach to engineering, you seriously need to rethink your value to your organization. You also need to step down off your high horse and start to think about the value in a team approach. You need to learn to take pride in your results - not just in the fact that you threw together a bunch of web pages or produced an application thousands will use. Will they enjoy their experience? Is it the best thing you could have possibly built? If it isn't, wouldn't you rather learn to make it that way? Western society is built on the concept of pride in outcome, of making highest-quality products. Don't push together a bunch of crap, throw it over the fence, and expect QA to work it out. Or worse, don't get annoyed when you produce a lousy piece of software and testing has the audacity to find a bunch of bugs in it! Just because it compiles and passes your unit tests doesn't mean it's worth the cost and effort spent on it!
  • There's no lottery. You may be performing your dream job, and you may think being a tester would be the worst thing that could happen to you (equivalent to, say, working a cash register at McDonalds). Believe me, there are a lot of 'testers' who would rather be developers. But most testers have chosen this career path and they are as passionate and excited about being a tester as you might be about being a developer. Think about where the US Space Shuttle would be without QA, or the Lexus automobile... QA professionals love their chosen career. I, for one, would rather drive a city bus than be a developer. Don't get me wrong - I love to code, it's fun. But to sit in front of a blank screen day in, day out, doing the same thing over and over and over... Yuck! I don't know how you do it! I wouldn't trade places with you for anything. So you may be happy in your role, but believe me - there's no need to pity a tester, nor to look down your nose at one.
  • Your degree doesn't mean much. I once interviewed a candidate for a development position. He had a Bachelor's in CS from a prestigious, nationally recognized school and had won multiple national programming competitions. Unfortunately for him, he had no clue how to write elegant code. I have probably interviewed a thousand candidates, from PhDs to Bachelor's in CS, and I've hired maybe 50 or 60 total. Your degree is evidence of a lot of hard work (well, maybe...) and definitely of perseverance, but it stopped meaning much the day you left school and started working for pay. I have a degree in German Literature and International Relations, but I can roll up my sleeves, dig into code, and push quality into a project. It takes much more than a degree in CS to make a good programmer, and some of the best programmers I've known don't have CS degrees at all, or earned them as an afterthought. Sure, you gained some experience in your CS classes, and you got some exposure to programming theory. But what counts in software is experience, intelligence, and diligence - and your tester can have all of that without ever having earned a degree, or with a PhD in physical therapy. Learn to trust that experience. I once had a developer rip a test plan up one side and down the other because it had holes in it - that's great. But her excuse for not wanting to discuss the plan was "I have five years of experience - believe me, this is wrong." Well, that's just great, but the point is, she lost me there. At the time, I had 12 years of experience. I may not have known the ins and outs of the code (I was new to the project), but I knew quality, and I knew we weren't approaching it right.

So when you're thinking about your project and questioning whether you should let that pesky tester into another meeting, don't listen to that voice that says you're better off without her. She's going to bring experience, passion, and perspective you simply don't have.

Stick to the Basics

I've been learning a universal truth in life: things generally don't happen all at once, but step by step. It truly is the little things that matter - it's the basics that count. There's a reason so many teams are jazzed about Agile, XP, and Scrum. In many instances, these disciplines introduce quality and drive projects to faster, more successful completion.

A lot of developers have glommed on to Agile like batter on a bowl. They love the 'freedom' of no rules, no documentation, no meetings - just pure coding. And who wouldn't? Unfortunately for them and for their projects, they don't understand: there are rules in Agile. Much of Agile practice grew out of XP, and XP has fundamental rules: you don't move ahead without stabilizing what you've got, you don't cut corners, you work in pairs, and you don't write code you don't need. These are basics that, when ignored, spell doom for your project.

As a test manager, my job--my very reason for being--is to make sure the project completes with the highest quality. I'm going to be a stickler for the basics. I once worked two concurrent projects. They started at about the same time. Both were green-field projects (white-monitor projects?). Both were staffed with some of the best people in the organization. One was a short, single-iteration effort; the other was a three-cycle, year-long effort. Both held frequent scrums where the team went around the table and talked about progress. But that's where the similarities ended.

The difference in approach was astounding, in spite of all the similarities. The team on the short project cut corners. When they encountered a roadblock on a given user story, rather than spiking in and working through it, they moved to another story. Soon, 95% of the stories in the project had been started, and none had been completed. No master build was produced; devs were checking in code after self-hosting it. No unit tests were written ("we're too busy coding to write tests" was the excuse). Critical infrastructure tasks such as URL rewriting and SSL were excluded from the development phase because they were 'operations' tasks. Test was uninvolved because 1) test was understaffed and 2) there were no builds to use.

The other project was difficult to start. Testing was involved from day one, and we insisted on adherence to standards. Ironically, we discovered that while .NET web services give you strong typing out of the box, Apache's CXF does not - we had to figure that out ourselves. We had performance issues. But we dug in deep and worked hard. Development put in an incredible showing, and test did as well, in spite of being understaffed.
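
To make that concrete, here's a minimal sketch, in the JAX-WS style that CXF consumes, of the kind of explicitly typed service contract you have to be deliberate about. The service and method names are hypothetical, not from the actual project:

```java
import javax.jws.WebMethod;
import javax.jws.WebParam;
import javax.jws.WebService;

// Hypothetical service endpoint interface. With .NET you tend to get a
// strongly typed contract for free; with CXF you have to spell out concrete
// parameter and return types yourself so the generated WSDL enforces them.
@WebService
public interface OrderLookupService {

    @WebMethod
    String findOrderStatus(@WebParam(name = "orderId") long orderId);
}
```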

Can you guess the results? The first project released--barely. In the final four weeks of the project, official builds were finally produced and literally 350 bugs were opened. The code churn was unimaginable. Regressions were high: we were at about 2:1 (for every two bugs we regressed, one was either reopened or a new bug was found). The product was released, but it came in a month late and still ended up in a severely restricted beta. Developers worked insanely long hours rather than cutting features, because no single story was less than 75% or 80% complete; there was nothing obvious to cut. The program manager got beat up over the project. I'd call it anything but a success.

The year-long project? Cycle 1 of 3 laid a solid foundation. Cycle 2 forged ahead - there were some challenges integrating, but two weeks of extended time worked through them. There are performance issues yet to address, but Cycle 3 is going strong, the project looks great, and the team has really banded together.

Don't ignore the basics. You're not immortal, omnipotent, or above the basic rules of engineering.

Take Pride in Your Work

Related to focusing on the small things is the concept of taking pride in your work. I have to laugh at how many times, in one of my web application projects, I entered a bug against the application that needed to be fixed in Firefox or IE. The developer would quickly code up the fix and throw it back to test. The first thing I would do is regress the bug in the browser in which it was found. That generally worked like a charm. The second thing I'd do? Test it in the other browser. Five times out of ten, can you guess what happened? It was broken.

No developer worth their salt should be above testing their fixes in multiple browsers. In the open source, Java/Oracle-heavy environments I've worked in for the past two years, I've seen a lot of Firefox fans in engineering organizations. That's cool - I like it too. But generally only about 30% of your clients are using Firefox; that means about 60% of them are using IE 6 or IE 7. If you fix a bug in one browser, you still have to look at it in the other two browsers. It astounds me that, after 5 or 10 of these bugs have bounced back, developers still don't get it.

If you are going to spend time fixing a bug, take pride in your work - fix it all the way, and make sure it's fixed before checking in your change!
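
As a sketch of what "fix it all the way" might look like in practice, here's a minimal cross-browser regression check using Selenium RC, the tooling of the day. The page, the assertion text, and the server port are hypothetical:

```java
import com.thoughtworks.selenium.DefaultSelenium;
import com.thoughtworks.selenium.Selenium;

// Regress the same fix in every supported browser, not just your favorite.
// The browser launcher strings are real Selenium RC values; the page and
// the text we check for are made up for illustration.
public class CrossBrowserRegression {

    private static final String[] BROWSERS = { "*firefox", "*iexplore" };

    public static void main(String[] args) {
        for (String browser : BROWSERS) {
            Selenium selenium = new DefaultSelenium(
                    "localhost", 4444, browser, "http://localhost:8080/");
            selenium.start();
            selenium.open("/checkout");
            // The bug was a broken total on the checkout page; verify the
            // fix renders in *this* browser, not just the one it was filed in.
            if (!selenium.isTextPresent("Order Total")) {
                System.err.println("Fix is broken in " + browser);
            }
            selenium.stop();
        }
    }
}
```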

Value Your Tester

OK - so you're doing XP and you write a lot of unit tests. That's great! Congratulations, you've taken the first step to becoming a great developer. But now it's time to recognize that there is still more important work to do. You need to realize that quality doesn't end when you finally pass your unit tests - just as, when building a house, you're not finished framing when you've cut the two-by-fours.

Testers bring a unique perspective because they generally think in holistic, systemic terms. As a developer, you're thinking (or should be thinking) in terms of methods, functions, and services. A tester is thinking in terms of code and database, interface layers, and user interaction. You tie methods and functions together; a tester makes sure entire systems work together.

Want an example of this? Your unit tests stub out the data coming into and out of a database in order to isolate the code layer. You prove your function works, but you don't prove it works with your database. Testers deal with real-world data, and they make sure your code holds up when that data really does come from the database.
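
A minimal sketch of the difference, with hypothetical class names and a hypothetical JDBC URL - the stub is what your unit test exercises; the JDBC implementation is what a tester actually puts under real-world data:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Hypothetical data-access interface the code under test depends on.
interface UserDao {
    String findEmail(long userId);
}

// What a unit test sees: a stub that never touches the database.
// It proves your logic, not the system.
class StubUserDao implements UserDao {
    public String findEmail(long userId) {
        return "test@example.com"; // canned data
    }
}

// What a tester exercises: the real implementation against real data,
// where null columns, encoding quirks, and driver behavior live.
// The connection string is made up for illustration.
class JdbcUserDao implements UserDao {
    public String findEmail(long userId) {
        try (Connection c = DriverManager.getConnection(
                "jdbc:oracle:thin:@dbhost:1521:prod")) {
            PreparedStatement ps = c.prepareStatement(
                    "SELECT email FROM users WHERE id = ?");
            ps.setLong(1, userId);
            ResultSet rs = ps.executeQuery();
            return rs.next() ? rs.getString("email") : null;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```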

Need another example? Your job as a developer is to achieve something - to create something of value that fulfills a requirement (a user story). You can prove you are finished because you can demonstrate the functionality of your user story. You start at a broad point (there are any number of ways to accomplish your programming task) and drive narrower and narrower until you accomplish your work. Testing is just the opposite: it starts narrow (with your completed user story) and goes broader and broader, because there is an infinite number of test approaches. Testing could go on forever.

Another way to look at it is this: your work culminates in one completed user story, while a tester's job can potentially never end, because there could be an infinite number of bugs in your code (it seems like it sometimes!). It's the old saying: "You can count how many seeds there are in an apple, but you can't count how many apples there are in a seed."

Testers are trained for this. They are used to it, and a good tester is comfortable with it. This mindset is the opposite of how you work - testers are different. If you embrace and welcome that difference, you will be more successful. Value that difference, and watch what happens as you collaborate on architecture, on infrastructure, even on code design.