
Friday, December 18, 2009

Running MSTest Standalone

It’s been a very long time since I posted on this blog – I’ve been quite busy with work as well as writing articles for http://www.searchsoftwarequality.com (head over and check out my tips and Expert Answers).

I stumbled across this today: a how-to guide to running MSTest standalone. http://www.shunra.com/shunrablog/index.php/2009/04/23/running-mstest-without-visual-studio/

Full disclosure: during my last stint at Microsoft, I worked on this internally. However, I never disclosed how to do it. I do not know Shunra, and I honor my non-disclosure to the fullest extent.

At the same time, I’m thrilled. MSTest is a great product and I’d really love it if MS produced a standalone version. According to this post by an MSFT employee, there’s a lot of discussion around it:

"this is one of the things on a higher priorities that we are considering for a future release though this is yet being worked out as such there if no formal statement. I am pushing for this & may have a update in a couple months." (http://social.msdn.microsoft.com/forums/en-US/vststest/thread/2897fb68-ef48-4941-a49e-fe8cb1b5aced/)

So who knows, maybe we’ll see something soon!

Thursday, April 23, 2009

Blitz Blog: Quality Assurance Through Code Analysis

Today I read through a Parasoft whitepaper published on searchsoftwarequality.com, and I found it to be a great approach to static code analysis.

FULL DISCLOSURE: I write articles and am a “Software Testing Expert” for searchsoftwarequality.com. However, I do not blog positively if I don’t believe in the posting.

Parasoft is the manufacturer of an application security static code analysis tool. Of course, the white paper’s intention is to sell copies of their tool. Can’t fault them for 1) believing in their product and 2) wanting to pay the bills.

What I really like about this article is the approach they take to SCA. They are not pushing it as the end-all, be-all of code quality. They are very realistic about SCA, in fact, stating that it’s often over-used and, once over-used, ignored. Companies implementing SCA must be careful about it—don’t enable a rule unless you really want to enforce it.

At the same time, they make a strong point about the value of SCA. It can really help a team drive quality upstream. Some policies an engineering team might want to use, for example, contribute to code readability, while others contribute to security (dynamically-built SQL statements vs. parameterized queries, anyone?).
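
To make that parenthetical concrete, here's a minimal JDBC sketch (my own illustration, not from the whitepaper; the table, column, and variable names are hypothetical) of the difference a security-focused SCA rule would flag:

    import java.sql.*;

    public class QueryExamples {
        // conn and userInput are assumed to come from the surrounding application.
        static ResultSet findUser(Connection conn, String userInput) throws SQLException {
            // Dangerous: dynamically-built SQL splices user input into the statement.
            // An input like "' OR '1'='1" changes the query's meaning (SQL injection).
            // ResultSet bad = conn.createStatement().executeQuery(
            //         "SELECT * FROM users WHERE name = '" + userInput + "'");

            // Safer: a parameterized query treats the input strictly as data.
            PreparedStatement ps = conn.prepareStatement(
                    "SELECT * FROM users WHERE name = ?");
            ps.setString(1, userInput);
            return ps.executeQuery();
        }
    }

An SCA rule banning string-concatenated SQL is exactly the kind of policy worth enforcing: it targets a real vulnerability class rather than a style preference.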

As an Agile engineer, I do take issue with their heavy-handed ‘management enforcement’ method. Agile teams need to adopt policies as needed. Cross-company Agile team representatives might establish company-wide policies and enforcement, but the Agile team itself should arrive at the bulk of the policy definitions.

One thing I like to see is the implementation of SCA directly in the build process – i.e., no build if the code fails with errors, and a team-wide email on warnings. This is the best way to enforce policies (but teams need to be selective about policies, so they don't overwhelm or 'over-stay their welcome').
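
For illustration, here's roughly what such a build gate could look like. This is a sketch under made-up assumptions (a report file named sca-report.txt with one "ERROR" or "WARNING" finding per line), not any particular tool's format:

    import java.io.*;

    public class ScaGate {
        public static void main(String[] args) throws IOException {
            int errors = 0, warnings = 0;
            BufferedReader in = new BufferedReader(new FileReader("sca-report.txt"));
            String line;
            while ((line = in.readLine()) != null) {
                if (line.startsWith("ERROR")) errors++;
                else if (line.startsWith("WARNING")) warnings++;
            }
            in.close();
            if (warnings > 0) {
                // In a real pipeline, this is where the team-wide email would go out.
                System.out.println(warnings + " SCA warning(s); notify the team.");
            }
            if (errors > 0) {
                System.err.println(errors + " SCA error(s); failing the build.");
                System.exit(1); // a non-zero exit code breaks the build
            }
        }
    }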

Anyhow, blitz blog. Highly recommended reading!

http://go.techtarget.com/r/6673725/7930283/1

Thursday, April 16, 2009

Tenets of a Software Testing Team

If you had the chance to write out the tenets (unbreakable ground rules) by which your quality assurance team lives, what would they be? Here are some I've been thinking of:

  • We partner actively in quality-first development, by participating in the entire SDLC and by contributing tools, process, and time to the development activity.
  • We do not and cannot ‘test in quality’. We test to validate requirements, discover missing or inaccurate requirements and implementation, and to expose defects.
  • We use tools and process to discover defects, validate functionality, and improve quality, not for the sake of using tools and process.
  • Secure software is a top priority—we protect our users' privacy as well as our applications' reliability.

What about you? Do you have other tenets on your team?

Thursday, April 9, 2009

Blitz Blog: Persistent Quality Assurance

This is a blitz blog – quick, to the point, and hopefully helpful. Topic: persistence.

My wife has a cat (a HUGE cat, although my aspiring veterinarian son tells us she’s a ragamuffin breed, which is ‘big boned’ by nature). This cat I do not like. Don’t know why, but we’ve never really gotten along. But lately, she’s really been trying. She comes into my office, right next to my desk, lays down, rolls over, and just purrs. All the while, she looks right at me with the kindest, happiest look on her face. Finally, I give up and pet her.

Can we learn from the big cat? When we want something, can we be pleasantly persistent, just waiting patiently for the attention we're looking for? I can look back on my career and see many times where some pleasant persistence might have paid off better than being a bit more forceful.

I’ve been reading “Fearless Change” and have found the message in this book to be the same persistent patience that our cat is demonstrating.

Wednesday, April 8, 2009

Software Testing: Is Your Quality Assurance group a Thermostat, or a Thermometer?

I frequently read a daily thought on families and how to improve your family. Yesterday's thought dovetailed neatly into some thinking I had been doing on QA organizations: are you a thermostat, or a thermometer?

A thermostat is a device which regulates the temperature in your home. It controls the atmosphere. A thermometer is a device which reflects the temperature. To be a thermometer is to be reactive; all it does is indicate the temperature. To be a thermostat means to proactively set the temperature.

In the family, a ‘thermostat’ parent doesn’t react to grumpiness or other negative emotions, but calmly sets the tone in spite of what’s going on around her or him. A ‘thermometer’ parent, on the other hand, gets ‘set off’ easily when children bicker, whine, or complain. Obviously my goal as a husband and father is to be a thermostat, although there are ‘thermometer-like’ days!

In a recent thread on the Yahoo Agile Testing Group (about continuous release), I noticed more than one poster saying something like "I tell management about an issue, and let them decide." This reactive approach to QA has been a pet peeve of mine. It smacks of cop-out to me, and I've seen it develop into a victim mentality among many testers—myself included.

In my tenure at Microsoft (where I'm 'serving' my second stint, with combined service exceeding 12 years), the testing organization has been consistently active in pushing for quality. A few things differentiate Microsoft from other organizations: first, the engineers wear the pants (that's changing, slowly). In fact, nothing ships unless the program manager, development manager, and test manager sign off on it. And they won't sign off unless their teams sign off. Second, any individual has the right to stop a project, even the newest tester in an organization. Third, while the management structure is massively deep (I'm some six or seven layers below Steve Ballmer, current CEO and President), any individual can get the ear of their VP, Senior VP, and CEO.

So the culture I'm most familiar with is one wherein testing is an active participant in product development. The testing organization moves beyond a 'show and tell' relationship with management to being an active participant in business strategy. The testing organization is part of the thermostat, and rarely plays a thermometer role.

I have worked in organizations where this is not the case, and I've seen plenty of evidence that it often isn't. In many IT and product software organizations, testing plays a thermometer role. We 'take the temperature' of a piece of software, and let management decide when to ship. In fact, I have been told by management, when I've tried to be a thermostat, to back off and let them make the decisions. In one case, I even refused to sign off on a risky release (a security-related issue on a piece of beta software), telling the release organization they were welcome to release but they'd have to convince my manager to sign off because I wasn't going to.

The Testing Thermostat

Thermostats regulate the temperature. In my “Quality 2.0” vision, testing organizations participate in regulating quality. The test organization moves from just taking the project’s temperature, to a role where they are helping to set quality. To do this, testing teams need to:

  • Include experienced, respected engineers. At one time in my career, I worked with Christian Hargraves, probably one of the most experienced engineers I've ever met. Christian leads and maintains the Jameleon project. He's a sought-after expert in the Salt Lake IT market, and has worked for the LDS Church as what I would call a Test Architect. Every developer in the organization knew Christian knew his stuff, and if Christian recommended something, that had weight. Qualified, respected engineers who are part of a test organization can help contribute to quality process and practices across the organization. They can influence core development teams to push quality into their process, from code reviews to adopting best practices.
  • Advocate actively for projects. Testers need to be active participants in release decisions, influencing senior management. In many instances, testers are shareholders. In every instance, testers share the benefits of a good release by enjoying continued employment, and in 'worst case' scenarios, testers go down in flames along with the rest of the company. In other words, our employment is just as dependent on a good release as any CEO's. If all we do is report quality, without influencing decisions, we are placing our careers in the hands of other people. I do not accept this point of view!
  • Drive quality upstream: we all know it's significantly cheaper to fix a bug in the planning, early implementation, or even release validation phases than after release. Testers need to work to push themselves and their influence upstream—the earlier we can look at code, the better.
  • Be involved in planning activities. Take an active role in scheduling. Drive feedback about previous releases (especially lessons learned) into current project planning. If the company has a history of underestimating test duration, speak up about it. Track and present data which shows a more realistic approach. Work patiently but consistently to drive reality into schedules.
  • Hold dependencies accountable. If the development team isn’t dropping code early enough for full testing, don’t stand for that! Your reputation as a tester, not to mention your continued employment, depends on a solid release. If the company runs with firm release dates, but development is dumping code in your lap a week before release, you’re going to end up working 90+ hours that final week, and still releasing crappy code! Negotiate and set milestones, and hold development accountable for reaching those milestones. Two things will result: first, your test organization will start to work a consistent, sane amount of hours. Secondly, quality WILL go up because you’ll be looking at code earlier.
  • Be involved in establishing release criteria. This is critical—across your business, you need to have consensus on what constitutes release-readiness. If you leave this up to marketing, they'll say a set of customer stories needs to be developed. Leave it to development and they'll say that, plus possibly that all unit tests need to pass. As QA engineers, we have a unique perspective in that we understand the value of positive AND negative testing. We can help influence the establishment of release criteria which truly reflect product quality. Setting these criteria early, well before a project is nearing a release deadline, removes the emotion and sets a rational basis for declaring 'Ship Ready'.

These are just a few things a test organization can do to transition from a thermometer to a thermostat. What’s missing? Does your test team take a more proactive role somewhere else? Let’s hear it!

Friday, April 3, 2009

Found a Security Flaw (The Need for Software Testing Everywhere)

Today I was paying a bill online, and I crashed the company's IIS server (I promise, I did nothing wrong—at least not intentionally). The good news: they write their code rather securely, so they are using parameterized queries rather than embedded SQL statements. The bad news (besides the crash itself) is that they were running with tracing enabled, so I saw the entire stack trace.

I called the company and got in touch with their tech guy. He was really polite on the phone and very open to feedback. I shared a bunch of info with him and thought I ought to document it. So here’s an Open Letter to All Admins Running IIS:

· You need to check your machine.config (and possibly web.config) files and make sure tracing is disabled for remote users: <trace enabled="false" localOnly="true" pageOutput="false" requestLimit="10" traceMode="SortByTime"/>. See http://msdn.microsoft.com/en-us/library/aa302351.aspx for more info.

· Either spin up Microsoft Baseline Security Analyzer (MBSA) or use IISLockdown and URLScan to scan your servers' security, especially their configuration. See http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=f32921af-9dbe-4dce-889e-ecf997eb18e9

· You might also benefit from a security review/audit from an experienced, independent consultant.

Security Focus has a decent article on the issue: http://www.securityfocus.com/infocus/1755 (it's old but still accurate). NOTE: there is a chance that running IISLockdown may break something. If your developers built the site with some dependency on what's actually a security hole, locking down your server could cause issues. I highly recommend that you run this against a test deployment, run all of your automated tests against that deployment, and do some manual testing as well.

You might also consider picking up copies of

  • “Improving Web Application Security” (MS Press),
  • “Building Secure Microsoft ASP.NET Applications” (MS Press),
  • “Writing Secure Code” (I *highly* recommend this – it’s the bible on secure coding, especially in MS technologies), and
  • “The Security Development Lifecycle” (MS Press).

From more to less specific, these books are pointers on secure IIS configuration, ASP.NET coding, coding in general, and strategies for maintaining application security throughout lifecycles. The last book is a great read if you're on a larger team and/or need to influence management to give you the time for security-related work.

Thursday, February 12, 2009

DRAFT: What’s the Role of a Test Manager in Agile Organizations?

This blog posting is a draft. It represents a work in progress, and is posted with the hope of gaining peer review for improvement.

Many people have asked me—or challenged me—about the role of a test manager in agile organizations. A basic premise of agile being the self-led team, who needs a test manager? Many agile engineers even bristle at the thought.

Reality shows, however, that not all teams or organizations benefit from a complete lack of management. Smaller engineering organizations can often do without, but larger organizations perform better with some amount of centralized management. A team of ten engineers, with no role specialization, has no need for a test manager. A team of fifty engineers, which maintains a differentiation between test-oriented engineers and development-oriented engineers for a valid business reason, will certainly benefit from a test manager (and, incidentally, a development manager).

What do I think is the role of a test manager in such an organization? I think the test manager contributes in five higher-order functions: interfacing with senior management; providing vision and leadership; fostering cross-group collaboration; leading specialties, tools and reporting; and providing organization management. That’s a lot of Dilbert-esque lingo! What do I really mean? Read on for details…

Interface With Senior Managers

Larger organizations, such as the team I led at Circuit City, have deep management structures. It's simply a fact of life. The senior managers in those structures generally run cross-discipline organizations and benefit from having a manager between them and each discipline. In Circuit's MST project, I managed over one hundred engineers, with thirteen concurrent workstreams. My role was to interface between my manager and the test organization, communicating messages and centralized decisions in a way that addressed my teams' needs, interests, and concerns.

Test managers also act as a buffer, insulating the team from senior management's see-saw decision-making process. Sometimes just shielding testers from the questions senior management can ask does the team a huge favor—questions like "do we have the right number of testers" can really cause concern in a test team; the test manager can often research and answer that question without churning up any worried or troubled feelings.

When senior managers arrive at a strategic plan, the test manager’s job includes integrating that plan into the test organization. A great example would be an organization which has decided to migrate from waterfall to agile! The test manager’s role will change dramatically, moving from ‘manager’ to ‘coach,’ assisting the test team in understanding their role and helping to set engineering goals in terms of process adoption.

Another role test managers play in this larger organization is simply pushing back on bad ideas. Individual contributors are often heads-down (especially on agile projects), in the day-to-day work. If senior management stumbles upon a ridiculous idea, individual team members often lack the cycles (and sometimes the experience) to push back on that decision. The role of a test manager often includes helping senior management understand the possible side effects of a decision.

In some organizations, testing or quality assurance is often not viewed as critical. These organizations may lack experience, or may have experience only with smaller projects than the one at hand. A test manager's role includes the responsibility to advocate for quality across the entire company. Sometimes this means helping customers understand the value of the 'extra cost' of a few test engineers on a project. Other times, it's helping executive management see the need for experienced testers. Sometimes the job includes preventing releases from happening when core quality issues beyond simple functionality could cost the company in terms of money or bad PR.

Provide Vision and Leadership

Agile teams are (rightfully) quite busy in the day-to-day and may not be cognizant of everything going on in the company. The test manager’s responsibility in an agile organization can include driving vision. With an understanding of company growth plans, the test manager can be looking ahead to understand how to double or triple the size of the test organization. The test manager can also be thinking about strategic redirection – for instance, perhaps the test organization depends on developers for all automated testing. The test manager can set in place organization-wide plans to train testers, helping them become better automation engineers. The test manager needs to be the person asking “Why not?” and helping to drive the team forward toward that goal.

Sometimes agile teams fall behind due to circumstances beyond their control. The agile approach is to take those hits, learn, and move on. But sometimes there’s value in getting help. For instance, if a member of an agile team becomes ill during a critical period, a strong test manager can step in and ‘pinch hit’ for that tester while he or she is out. This isn’t advocating that the test manager is simply a ‘reserve’ resource. Teams which underestimate or over-extend themselves need the experience which comes from missing goals. But there are times when a test manager can contribute at an individual level.

Another leadership role is to help teams through transition. The change from waterfall to agile is a fantastic example of this – as a leader, the test manager needs to play a critical, positive role in helping testers work through the difficult migration process. The test manager provides structure and understanding, offers insight, and encourages team members as they face challenges. Above all, the test manager needs to be supportive and reassuring during the struggles which come with change. This leadership will help retain top talent, which might otherwise leave the organization due to the uncertainty and stress of change.

Finally, the test manager can represent the test organization and the company as a whole. A good test manager can be a key asset in customer retention. By contacting external customers, being aware of situations they’re facing, and communicating the status of fixes and changes, the test manager can help the customer through any rough spots in project implementation. Another role the test manager can play is sitting as a member of industry or standards organizations, making sure quality has a voice at that table and providing expertise in the subject.

Foster Cross-Group Collaboration

A critical role played by any management in an Agile project is eliminating road blocks and keeping the team moving. The test manager can play a pivotal part in this by interacting with other groups, with the business side of projects, and with the customer. Sometimes it takes a management-level title to ‘influence,’ even within the same company. More important, some negotiations and even simply driving closure can involve a lot of time, talking, and walking. By remaining proactively involved in projects, the test manager can help remove impediments, all the while allowing the team to remain heads down.

Also, very often in a larger organization, the test manager is the first level team member with corporate purchasing privileges. Therefore, the test manager can often take care of licensing and other purchases and fees for supplies the team needs to carry on their work.

In larger engineering organizations, the test manager works closely with development and program manager/project manager counterparts. They collaborate on cross-discipline objectives as well as on budgets and scheduling. They are often the point-people for driving mitigation for lessons learned which surfaced in retrospectives.

Another role the test manager plays in a larger organization is introducing engineers around the company. A successful test manager is very familiar with projects and personnel across the entire organization; when one employee is looking for help, that test manager can make appropriate introductions.

Finally, the test manager can play a scrum master role in a scrum of scrums, helping to lead the meeting.

Specialties, Tools, and Reporting

Generally test organizations are split into teams in a matrixed structure. Most testers fall easily into a project team, where they'll be busy until that project completes. There are, however, occasions where testers are hired who act in more of a 'Center of Excellence' function: for instance, performance testing, automation and tools, or security-oriented testers. These testers are shared across the organization, brought in as experts to play a specific role on a release. Organizationally it's often most efficient for these testers to report to the test manager, and the test manager manages their assignments.

In a large Agile organization, there is often tension between the Agile maxim "do what works" and organizational efforts to standardize, especially on tools. Standardization sometimes aids in keeping costs down (when expensive commercial tools are used), especially for tools like load testing tools. Standardization also aids in maintaining resource 'portability' (the ability for a tester to move from team to team without needing to learn new tools). A test manager needs to be very sensitive to the needs (perceived and real) of a team, but can often be instrumental in maintaining consistent toolsets across the organization.

Agile is notable, among other things, for a distinct lack of metrics. Status reporting is as important to senior management as it is a burden to agile teams. The test manager can play a critical role in pulling together status out of several test teams, presenting it to management in a format they’re used to. It seems a bit mundane, but in spite of the gains made moving to agile, many senior managers still yearn for detailed status reports.

Related to reports is the idea of common metrics. Just as the Agile team needs to 'do what works,' the Agile organization also needs to do what works. Sometimes that means a bit of a burden on the Agile team. With engineering expertise as well as an understanding of project management, a test manager can help Agile teams arrive at metrics which are easily generated (often in an automated manner, if unit testing or defect tracking systems are in place). There are metrics that work – they vary from company to company, but they work – and the test manager is a key player in defining these metrics and automating the gathering process.
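
If the build already produces JUnit-style XML reports, here's a sketch of what automated gathering might look like (the report directory and the TEST-*.xml naming follow Ant's JUnit task conventions; adjust for your own layout):

    import java.io.File;
    import java.io.FilenameFilter;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Element;

    public class TestMetrics {
        public static void main(String[] args) throws Exception {
            int tests = 0, failures = 0, errors = 0;
            File[] reports = new File("build/test-reports").listFiles(new FilenameFilter() {
                public boolean accept(File dir, String name) {
                    return name.startsWith("TEST-") && name.endsWith(".xml");
                }
            });
            if (reports == null) return; // no reports yet
            for (File report : reports) {
                // Each report's root <testsuite> element carries the counts.
                Element suite = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder().parse(report).getDocumentElement();
                tests += Integer.parseInt(suite.getAttribute("tests"));
                failures += Integer.parseInt(suite.getAttribute("failures"));
                errors += Integer.parseInt(suite.getAttribute("errors"));
            }
            System.out.printf("Pass rate: %.1f%% (%d tests, %d failures, %d errors)%n",
                    100.0 * (tests - failures - errors) / Math.max(tests, 1),
                    tests, failures, errors);
        }
    }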

Finally, the test manager can offload the ‘compliance’ burden from the Agile team, reporting up to senior management on the progress against various company initiatives. Again, maybe not exciting to the test manager, but critical to the company’s improved effectiveness and efficiency (well, we like to hope each initiative is critical).

Organizational Leadership

Even in the Agile organization, things like performance management have to happen. The test manager carries a significant administrative burden driving this process. This is a good thing – a good test manager not only helps his or her teammates grow in skills and experience, they also advocate with senior management for promotion and compensation increases.

As a test manager, my experience has always been that my teams are more effective when I screen candidates for open positions, and then pull team members in on formal interview loops. In that role, I have had responsibility for opening job postings, authoring job descriptions, and sourcing resumes for open roles. When the test manager takes on this administrative burden, it allows the Agile team to focus on their work. When decent candidates are sourced, the team can stand down briefly to perform interviews.

Closely related to recruiting is the role the test manager takes on in hiring and onboarding new hires. The manager shepherds new hires through the hiring process, helps them sign up for benefits and such, and makes sure each new hire settles in well. A key role I have seen too many test managers overlook is introducing the new hire around the company. Finally, the new hire will achieve their highest potential as quickly as possible only if and as the manager assists them in setting goals for impact, development, and growth.

A test manager's work is to drive his or her team to higher levels of effectiveness and efficiency. Working in concert with members of the Agile teams, test managers can accomplish this in part by researching and procuring appropriate training. Whether it's bringing in experts in Agile testing or arranging for on-site training in automation skills, training is critical for improving what a test organization can accomplish. Procuring that training can be a time-consuming task. Also, the test manager arranges for training which fits broad, cross-organization needs rather than training focused on specific teams.

Finally, the test manager is critical in helping with employee retention. A good manager is aware of the interests and professional goals of each employee in the organization, and makes the effort to provide that employee with opportunities in sync with the employee's wishes. With a broad, cross-organizational focus, the manager can also help employees find new roles when, without that assistance, the employee might have left the company.

Help the Agile Process Work

A closing concept for the test manager's role in Agile is the idea that he or she is a make-or-break participant in the transition to Agile. It can take several months or even a year to successfully transition to an Agile process, and that transition can be painful—especially for testers! The test manager helps coach teams through the transition, lending support and encouragement, fostering courage, and assisting in the transformation. This is not a top-down role, but a one-on-one coaching role, helping team members with courage, patience, and confidence.

Saturday, January 31, 2009

Quality Assurance: automated QA software testing tools

This week, I answered a question about automated QA tools. This topic comes up frequently – my base answer is “use the simplest tool that will get the job done”. There are a lot of open-source tools, but I find that JUnit (or NUnit, if you’re in .NET) and Selenium or Watij are a great combination for testing web applications.
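
To show what I mean by "simple," here's a minimal sketch of that combination: JUnit 4 driving Selenium RC. It assumes a Selenium server running on localhost:4444 and a hypothetical application with "q" and "btnSearch" locators; substitute your own.

    import com.thoughtworks.selenium.DefaultSelenium;
    import com.thoughtworks.selenium.Selenium;
    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    public class SearchSmokeTest {
        private Selenium selenium;

        @Before
        public void setUp() {
            selenium = new DefaultSelenium("localhost", 4444, "*firefox",
                    "http://www.example.com/");
            selenium.start();
        }

        @Test
        public void searchReturnsResults() {
            selenium.open("/");
            selenium.type("q", "software testing");
            selenium.click("btnSearch");
            selenium.waitForPageToLoad("30000");
            assertTrue(selenium.isTextPresent("results"));
        }

        @After
        public void tearDown() {
            selenium.stop();
        }
    }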

Read my answer here: http://searchsoftwarequality.techtarget.com/expert/KnowledgebaseAnswer/0,289625,sid92_gci1346325_tax306196,00.html

Software Testing: Writing a Test Plan

This week, I answered a question about writing a test plan, on my “Ask the Experts” column on searchsoftwarequality.com. You can read the answer at http://searchsoftwarequality.techtarget.com/expert/KnowledgebaseAnswer/0,289625,sid92_gci1346327_tax306121,00.html

What do you think? Do you use a test plan? If you’re agile, is your test plan this in-depth?

Wednesday, January 28, 2009

How to Ask For Help with Software Testing/Quality Assurance Questions

Back in October 2008, I answered the question "What's the Best Software Testing/QA Tool". In my answer, I included a few steps for ensuring you get an answer when asking questions.

Due to popular demand, I'm moving those comments into their own blog entry. This way, it's easier to find and read.

To ensure you get an answer:
  1. To all testers: look before you ask. Really! If you are wondering what tool you can use for a given test type, use Live Search and look for information! Performing a minimal amount of research shows respect to the audience you're turning to for help, and can actually prevent a question now and again. QA is all about searching for product defects; apply that searching capability to answering your questions.
  2. You're more likely to get help on a specific question rather than a broad question. For instance, asking "What's the right tool" won't get you much. However, "I've evaluated Selenium, Selenium RC, and Watij--given my situation (describe it), what tool do you recommend?" will get you much further.
  3. Talk about the problem you're trying to solve. So "We realized we'll be running tests over and over and over again, but our UI changes frequently. What strategies can we take...?" is a question that raises the problem and lets people know what help you're looking for.
  4. Asking "what's the best tool?" is like asking "What's the best car?". If you live in Germany and drive the autobahn frequently, the best car for you is far different from the best car for someone who lives in Bombay, battles thick traffic, and drives Indian roads (notoriously rough). Be specific, give details, and look for recommendations. Software testing tools are the same way - a screaming "BMW" test tool will fall apart on a Bombay road. A right-hand drive Jaguar would be totally out of place on an American highway. A performance test tool isn't the right way to automate repeated quality assurance tests. The right tool is dependent on the project, the skill sets on the QA team, the timeline, and several other factors.
  5. Solve your own problems. The Internet is an incredible tool and offers us all a ton of opportunity to not have to reinvent the wheel. But don't ask other people to do your work for you! Ask for advice. If you want someone to solve your problem, ask one of us to consult (for pay) and bring us in. We'll be glad to get the job done for you!
  6. Give back: as you grow and learn, give back... Don't post a question, get your answer, and disappear. Remain an active participant in the community and 'pay it forward'.

Summary? Be specific, research before you ask, solve problems, and give back. That's how to get answers online--in a sustainable fashion.

Sunday, January 18, 2009

Quality Assurance: software testing reports

In my capacity as a software quality expert at http://searchsoftwarequality.com, I recently answered a question regarding software testing reports. A reader asked how to build a testing status report focusing on test execution criteria, test case failure rate, and bug rate.

You can see my answer posted at http://searchsoftwarequality.techtarget.com/expert/KnowledgebaseAnswer/0,289625,sid92_gci1345318,00.html 

What do you think - too simplistic? Or do you agree that these are the bread-and-butter reports for software testing status?

Wednesday, October 22, 2008

Lessons for Teamwork in Software Quality

In my role as a volunteer youth leader in my Church, I have the opportunity to help put together a monthly activity. My group was responsible for the activity this month, and we chose to offer five team building exercises (all straight from http://www.wilderdom.com/games/InitiativeGames.html). We did four exercises in a rotation, and then all met together to perform the fifth. The exercises were:

  • Minefield: each member of a group paired up. We set up a maze using chairs and tables, and had our pairs blindfold one person. The second person in the pair had to 'lead' their partner through the maze. The challenge in this game is that all partnerships are talking together, at the same time. The purpose of the exercise is to emphasize the need for communication as well as the ability to pick out the voice you're listening to.
  • Toxic Waste: this is a group exercise. Participants encounter a bucket in the middle of a 10' circle and are told the circle is full of toxins. They need to move the bucket out of the circle, and their resources are a bungee cord and ten thin ropes. The challenge is ingenuity, teamwork, and (again) communication.
  • All Aboard! In this exercise, the team all stands on a tarp. The challenge is to fold up the tarp to be as small as possible, fitting the entire team on it, WHILE the team stands on it. Teamwork, spatial relations, etc.
  • Helium Stick: I conducted this one. The team lines up in two parallel rows, sticks their hands out with their index fingers extended. A small stick (I used a 1/4" wood dowel) is placed on their extended fingertips and they are challenged to lower the dowel. In fact, it goes up.
  • Egg toss: each team is given 25 straws, 20 cotton balls, a 5' piece of tape and an egg (unboiled). The mission is to build a crate/ship/container so they can drop their egg and it doesn't break.

What is interesting to me is the way teams interacted to get the problem solved. As I said, I conducted the helium stick challenge. As teams started, they were astonished that their stick ROSE instead of sank. In a flash, right on the heels of that recognition, came frustration. We were split up into groups of boys and girls--oddly enough, while both groups expressed frustration, only the boys started yelling at each other. I kid you not, they were really ripping on one another. They quickly moved into the blame game. Each group (all four) needed to be stopped multiple times.

After frustration/blame, the groups moved into 'try harder' mode. I finally would stop each group and point out that trying harder wasn't working - maybe there was a better way. At that point, it was again quite funny - no single group picked an organized way to brainstorm. It was 'herd brainstorming' and they all just started shouting ideas.

Eventually someone 'won' through all the shouting. In one group, it was an adult leader, who started back into try harder. In a group of 12-year old boys, one of the boys got everyone's attention and he literally talked the stick to the ground. You need to understand the magnitude of that feat - you've got eight people all pushing up on a stick (pushing up because they have to keep their fingers on the stick at all times, and by doing so they force the stick up). Somehow this boy talked everyone, step by step, through the process of lowering the stick.

The girls were pretty creative--they caught on quickly that they needed to coordinate, and touch was the best way to do so. One group of older girls interlocked pinky fingers and lowered together. Another group split into two smaller groups on each end, and just put their hands touching, next to one another.

For me the takeaway was:

  • If something isn't working, the first response is to try harder. But if it doesn't work, it's not going to work any better if you try harder.
  • When something isn't working, teams gravitate toward frustration and even blame. It's so destructive! No one was intentionally pushing the stick up, but they kept yelling at each other about it.
  • Someone has to step up and coordinate the discussion. Brainstorm on ideas and then try one of them -- any one of them. There was no single right idea and the teams that just tried something generally succeeded, as long as it was something other than trying harder.

How can this apply to software development, software testing and software quality? Well, software is built by teams - even the smallest unit of engineering is a feature team made of two or more people. Communication is important, and it's critical that 1) no one resorts to blame and 2) each person is allowed to share their point of view. "Writing unit tests just so we can hit a goal of coverage seems to be distracting us from the real goal--our output is actually lower quality right now than it used to be" needs to be answered with "Why do you think that way?" and not "Uh huh - not true. Besides, you're not helping at all with all these bugs you're bothering me with!"

Good communication includes illustrating the current status, as in "Hey wait, everyone seems to be pushing up" and then a group discussion of what's causing that: "We're pushing up because we all want to keep our finger on the stick." Only then can the group move on and start thinking about the solution. So in an engineering organization, that might be "why are we getting a flood of bugs? Wait, testers are off doing something else (building a battleship test automation system) rather than focusing on testing daily builds". Once the situation is recognized, only then can the team react with an appropriate response.

As quality assurance/software testing teams work with and communicate with development counterparts, quality becomes a natural by-product. As teams discuss what practices are producing current results, they can move forward and improve how they approach the challenge of writing quality software. A little communication can move our engineering teams from try harder to work smarter, and output increases in both quantity and quality.

Thursday, October 9, 2008

What's the Best Software Testing/QA Tool?

Frequently people will post a question like "What's the best tool?". It drives me nuts sometimes! Software testing and quality assurance are definitely challenging professions, but it's not fair to depend on others to solve your challenges for you! I've answered questions like this in the agile-testing@yahoo.com group, and on the MSDN forum, and I thought it was time I brought my answers together into one blog posting. That way, I can refer people back to the posting.

MSDN Answer

Lifted from my post on MSDN's software testing forum:

An American car manufacturer once tried to make a car for all things - it was a 5-6 passenger car in three different models, each with four-wheel drive AND good fuel economy. It ended up mediocre at all things. You need to beware; resist the urge to pick one monolithic tool solution for such a wide variety of testing needs. The vendors (well, NOT Microsoft, of course) would love to have you believe their tool can do it all. And in some aspects, their software testing tools probably can. However, will that one tool solution be effective and efficient in everything you do? No.

I'm not advocating a pantheon of tools, but you need to think like an engineer more than a manager or a customer. You find the right tool for the job at hand. Over time, you'll settle down to a group of 3-4 tools and you'll stick to them.

I've never used <some tool the asker referenced> - someone else will need to comment on that product's ability to do everything you're looking for. My take? It'd be too good to be true if it actually could. And I tell my three sons all the time 'If it's too good to be true, it's probably not true'.

Be wary.

Here's how you [should] approach this as an engineer:

  • List out the applications you'll be testing 6-12 months from now

  • Think about the test scenarios, especially those you'll want to automate

  • Ask yourself how many releases of that application/project your company will perform--will quality assurance be a repeated process, or are you testing this software just once?

  • Based on that, how much automation is 'worth the investment'? Microsoft Office 2007 *might* still be running automation I wrote in 1997--probably not, but I know 2003 runs it AND that automation is still being run, every time there's a patch or SP released.

  • Now that you know how much investment to make, look for best of breed solutions for each project. Don't focus on all-in-one solutions; just look for the best tool for the job. Use demos, read reviews, ask questions. Don't ask "Is this the right tool?" but rather "Which tool do you recommend" or "I'm considering this tool for this job - does anyone have experience using this tool to do this?"

  • Once you have a short list of tools, look and see if there is commonality/overlap. You'll see patterns. It's possible Rational or another all-in-one tool will appear in the list; it's equally possible none of those tools will appear.

  • If there's good overlap, ask yourself what you think the pain will be if you force a tool into a job it wasn't designed for. If you can live with the pain, go for it... If not, keep looking or open yourself up to a larger set of tools.

Hope that helps (in spite of not answering your question),

John O.

 

Commentary

OK - I searched through a number of documents and can't find my other posting on this subject, so I'll just write it once more.

  1. To all testers: look before you ask. Really! If you are wondering what tool you can use for a given test type, use Live Search and look for information! Performing a minimal amount of research shows respect to the audience you're turning to for help, and can actually prevent a question now and again. QA is all about searching for product defects; apply that searching capability to answering your questions.
  2. You're more likely to get help on a specific question rather than a broad question. For instance, asking "What's the right tool" won't get you much. However, "I've evaluated Selenium, Selenium RC, and Watij--given my situation (describe it), what tool do you recommend?" will get you much further.
  3. Talk about the problem you're trying to solve. So "We realized we'll be running tests over and over and over again, but our UI changes frequently. What strategies can we take...?" is a question that raises the problem and lets people know what help you're looking for.
  4. Asking "what's the best tool?" is like asking "What's the best car?". If you live in Germany and drive the autobahn frequently, the best car for you is far different from the best car for someone who lives in Bombay, battles thick traffic, and drives Indian roads (notoriously rough). Be specific, give details, and look for recommendations. Software testing tools are the same way - a screaming "BMW" test tool will fall apart on a Bombay road. A right-hand drive Jaguar would be totally out of place on an American highway. A performance test tool isn't the right way to automate repeated quality assurance tests. The right tool is dependent on the project, the skill sets on the QA team, the timeline, and several other factors.
  5. Solve your own problems. The Internet is an incredible tool and offers us all a ton of opportunity to not have to reinvent the wheel. But don't ask other people to do your work for you! Ask for advice. If you want someone to solve your problem, ask one of us to consult (for pay) and bring us in. We'll be glad to get the job done for you!
  6. Give back: as you grow and learn, give back... Don't post a question, get your answer, and disappear. Remain an active participant in the community and 'pay it forward'.

Summary? Be specific, research before you ask, solve problems, and give back. That's how to get answers online--in a sustainable fashion.

Friday, October 3, 2008

What Makes a Good Automation System (automated QA/Quality Assurance/Software Testing)

So in my new job, one of my first tasks is to put together an automation system--by this I mean a harness and a framework. The process has had me thinking (and talking) a lot about what makes a good system in general.

The automation harness is the system used to schedule, distribute and run tests and to record results. In the open source world, some tools used here include NUnit, JUnit, and TestNG (my personal favorite). These tools all work in a one-off situation - they are all run locally out of the dev environment or via the command line. In software testing at Microsoft, though, a one-by-one approach to automation is useful for 1) developer unit testing, 2) tester unit testing/test creation, and 3) failure investigation. However for the 24/7 test environment we're building, this isn't sufficient. We need a centralized scheduling tool that allows us to push tests out to multiple clients (to simulate load and to parallelize otherwise serial test runs). So we're working internally to Microsoft, evaluating the existing automation harnesses available and trying to find the one that works best.
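
To see why these runners fit the one-off case, consider how a run starts: it's a local, programmatic (or command-line) invocation. Here's a sketch using TestNG's programmatic API (LoginTests is a stand-in test class I made up for the example):

    import org.testng.TestNG;
    import org.testng.annotations.Test;

    public class LocalRunner {
        // A stand-in test class, declared inline to keep the sketch self-contained.
        public static class LoginTests {
            @Test
            public void loginSucceeds() { /* ... test body ... */ }
        }

        public static void main(String[] args) {
            TestNG testng = new TestNG();
            testng.setTestClasses(new Class[] { LoginTests.class });
            testng.run();
            // The exit code lets a wrapping scheduler detect failures.
            System.exit(testng.hasFailure() ? 1 : 0);
        }
    }

A centralized harness essentially wraps an entry point like this on many client machines, adds scheduling and distribution, and aggregates the results.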

A big factor for me in this selection is finding a harness which is configurable. At Microsoft, we are pushing the envelope in testing in a variety of ways: code coverage analysis, failure analysis, and several similar activities which allow us to streamline our testing, reduce test overhead and automate many testing tasks. This means our framework MUST be extensible - we have to be able to plug in new testing activities.

A second element of the automation system is the framework. This is the abstraction layer which separates our automated tests from the application under test. This abstraction layer is critical to good automation. If, for instance, you are automating a web application, you will probably experience a lot of churn in the application layout. You do not want your automated tests hard-coded looking for certain controls in certain locations (i.e., in the DHTML structure)--by abstracting this logic, your test can call myPage.LoginButton.Click(), and your abstraction layer can 'translate' this into clicking a button located in a specific div. In some organizations, this framework is purchased. At the LDS Church, we leveraged both Selenium RC and Watij to build this framework, developing most of it ourselves internally (kudos to Brandon Nicholls and Jeremy Stowell for the work they did in this capacity).
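
Here's a small sketch of that abstraction, using Selenium's Java client (the locators are hypothetical; the point is that they live in one place):

    import com.thoughtworks.selenium.Selenium;

    // Only this class knows where the login controls live in the page structure.
    public class LoginPage {
        private final Selenium selenium;

        public LoginPage(Selenium selenium) {
            this.selenium = selenium;
        }

        public void typeCredentials(String user, String password) {
            selenium.type("id=username", user);
            selenium.type("id=password", password);
        }

        public void clickLogin() {
            // Tests just call clickLogin(); the locator 'translation' happens here.
            selenium.click("//div[@id='loginArea']//input[@type='submit']");
            selenium.waitForPageToLoad("30000");
        }
    }

When the page layout churns, only these locator strings change; the hundreds of tests calling typeCredentials() and clickLogin() don't.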

The challenge felt by most test organizations is two-fold: 1) finding the engineering talent to build these systems and 2) making the investment in innovation. Ironically, the very thing which can free up resources for other tasks (automated testing) is the thing most managers don't want to spend time on! This makes sense, sort of... managers don't like to invest in activities which (in their mind) don't contribute directly to the bottom line. In all but the smallest of projects, this makes no sense--test automation isn't a sellable product, but if automated tests can free up a day or two of test time, that's a day or two spent doing other activities, each and every time the automation is run!

Recruiting top talent is also a challenge. In both IT organizations where I worked, there was a culture among developers that testers weren't engineers--they were click testers. Testers couldn't give input on 'extremely technical concepts' like architecture, potential bug causes, or the like--they're there to pick up code as developers release it, and then to find bugs. It's no wonder that it's so challenging to hire engineers into testing - when they're treated like that, they're either going to leave or move to development!

So the keys to a great automation system are: 1) a solid, extensible and flexible harness, 2) a robust framework, generally customized to your test activities, 3) management commitment to invest in innovation and automation and 4) top engineering talent and the culture to reward them for their contribution.

Am I missing anything?

Wednesday, October 1, 2008

What A Feeling!

So I've done a relatively good job of being calm and collected in my first two days at Microsoft. Your first day starts in a long line filling in I-9 forms. Then you enter address and other contact info into the Microsoft system via an internal web form. Finally, you sit for another 7 hours in a room while they teach you about benefits and the MS culture. At the start of that first day, you're officially a Microsoft employee, but you don't get your card key or anything.

Day two starts in the Big Room again, with a discussion about corporate ethics, legal issues, and a discussion with a couple of recent Microsoft hires. Finally you get your card key and are sent to find your manager. Oh and by the way - the room has anywhere from 100 to 150 people in it. Yup - Microsoft starts that many people, each and every week (well, maybe not the week of Christmas or New Year).

So at about noon I launched off on my own. Unlike most new hires, having worked here for 11 years, I know where the buildings are, where parking is, etc. So I zipped straight to the building that serves as my temporary base whenever I'm here in Redmond. I swiped my badge and got access to the building. A little smile crept up on my face.

About an hour later, after picking up my laptop and getting it set up, I went for lunch with another new hire for our Utah group. As I swiped my card and stepped into the cafeteria, I had a completely involuntary reaction: I jumped, threw both hands high in the air, and shouted "Yes!! I'm back at Microsoft!" Later during lunch, Tim told me how cool it was to spend time with someone who is so excited about working at the company. I have a bounce in my step which has been missing for many, many years.

I can't describe it. I was in software testing/quality assurance at Microsoft for 11 years. There were great days and there were really challenging days. I left two years ago to lead QA "for the world's largest retail IT project" at Circuit City. What an experience that project was. Quality assurance to team leads (mostly IBM project managers) meant proving happy path and avoiding negative testing. It was a cultural shock, to say the least. Testing software at the LDS Church was somewhat better. The people, for the most part, were great (surprisingly, there were exceptions - people who behaved less Christ-like than even at Microsoft!). The QA team suffered from a total lack of respect from development, however. And I found in that IT organization that everyone has a special little niche. There are enterprise architects, application architects, security 'specialists' (people who know about security policy, but don't know much about penetration testing), developers, and quality assurance engineers. If you dared to stray outside of your niche, well, that meant you were stepping on someone else's toes.

Back here at Microsoft means I have an equal seat at the table as an engineer. It means I'll have to work with other engineers to tackle really, really challenging problems. First challenge: building an automation harness using existing technologies at Microsoft, then building our own framework (abstraction layer) within which our tests run. Additionally, we need to take the mandate to "build virtualization management technologies" and turn that into released software: ideation, product planning, product specification, development, and release testing.

We also have the challenge of hiring incredible C++ developers and testers (engineers) in the Salt Lake Valley. Finding developers is pretty easy, but finding developers who respect QA and understand engineering excellence? A challenge. Finding a software testing professional who has the guts to take an equal seat at the table? A challenge!

But that's what I love about being back. I feel like, after two long years, I can finally do my best work and reach my full potential. I can bring all of my 14 years of engineering experience to bear on a software challenge. If I have an idea, I can run with it. I can provide input to the user scenarios, to application architecture, and to how we push quality upstream. I can prevent software defects, rather than find them!

I know everything won't be perfect. I left Microsoft for reasons, and those reasons haven't all changed (although judging by much of what I heard in New Employee Orientation, the last two years have been a time of growth and improvement for the company). There will still be bad days, there'll still be the struggle for a good work/life balance (now called 'blend'). But I have incredible health benefits for my family, stocks and bonuses again, and I have the chance to be challenged every day again. And I'll be working with super-smart people every single day. THAT is a cool thing!

Monday, September 22, 2008

A Softie Again

I'm thrilled to announce that I'm about to become a Microsoftie, again! I have been given the opportunity to return to Microsoft as a senior test lead, working in the Management Division. No, that's NOT the group of execs who run the company! That's the group producing OS and platform management tools like Operations Manager (a product I've worked on in the past), Systems Management Server (SMS) and so forth.

About two months ago, Microsoft announced the formation of a development center here in my home of Salt Lake City. I quickly applied for roles in the test organization. After two years of working project IT, I am ready to get back to product software! Some people might snicker, but the commitment to quality at a product software company, especially Microsoft, is so much higher than in project IT. I found I was spitting in the wind quite often in my previous roles--I was either fighting losing battles or I was fighting the wrong battles (i.e., pushing for quality in organizations or situations where leadership didn't share the same commitment, or pushing for higher quality than some projects required).

Additionally, I found that my last few organizations did NOT look at testing engineers as equal citizens. Testing was looked at primarily as QC (quality control) or possibly QA (quality assurance). Rarely were we invited to planning or design meetings; never were we looked upon as people who could help PREVENT defects. We were often seen as a roadblock to release. Don't even ask about my two-hour argument with IBM about the value of negative testing! Engineers who could help architect solutions? No way!! I often felt my fourteen years of engineering experience were swept under the table because I had "QA" in front of my "Engineer" title, rather than "Java" or "Development".

So I'm very excited to be returning to an organization and a company where I get a 'full seat' at the table, and an opportunity to contribute to my full ability, not to the perceived level of my title. In other words, I'm an engineer again!

Doesn't change much about my point of view or my approach to testing. I'm still hyper-focused on 1) bug prevention, 2) quality engineering, and 3) driving for greater effectiveness and efficiency. And sometimes that will put me at odds with mainstream MS thinking (recall my conversation about moving to all SDETs and automating all tests, and the resulting "user experience" bugs that slip through the cracks in products like Vista).

For me the best thing is, I get to stay in beautiful Salt Lake City, and yet work for one of the greatest employers in the world. And NOW I can honestly say I've tried a few others...

So OK - let's hear it. How bad is Microsoft quality, really? If you could talk to a test leader, what would you tell them? What would you ask them?

Friday, July 25, 2008

Software Quality Assurance: SDL (Security Development Lifecycle)

At work, I've been helping one of my teams implement portions of Microsoft's Security Development Lifecycle (SDL). The SDL is more than just plinking around with some penetration testing--it's a committed approach to secure software design. Here are some of the takeaways from our work:

  1. The first point I made with my team is that security features DO NOT EQUAL secure features. Having SSL-encrypted communications does not make a web application secure! It just means you have an encrypted communications channel. Software isn't secure because of features such as Acegi Security or RSA encryption. Secure software is produced when developers think secure from the start--when code is written defensively and solidly, and when testers help plan and test the software with a security focus.
  2. Next point we emphasized: threat modeling. As we wrapped up a two-hour threat modeling session, one of the developers commented, "Why didn't we do this months ago, before our code was written?" Good point!! It's never too early to threat model. Analyze your product, paying special attention to where data crosses boundaries: user to Internet, Internet to server, server to database, etc. Model threats, wild and crazy or down-to-earth. Our threat modeling has surfaced nine potential threats so far, and we expect several more as we continue.
  3. Security comes in layers. Back when I lived in India, I toured the Delhi fort. That fort was built by professionals: a deep moat surrounds it, tall, thick outer walls surround an inner wall, and inside that inner wall lies the fort itself. That's how our code should be! OK--so you're authenticated via Acegi and LDAP, and you're encrypted with SSL. That's great--but what if someone logs in with a valid account and then tries to hijack another session? Layered security catches hacks like this: authorize every time someone tries to access sensitive data. Even if you've already authenticated and authorized, do it again! Layers bring security (see the sketch after this list).
  4. Reduce your footprint: in Agra, where the Taj Mahal is, there's another fort (these Mughals were building forts everywhere!). This fort is on the edge of town, in the hills. Compared to the Taj, the fort is tiny. This is attack surface reduction. Only allow public access to a few of your resources. If you have features which suffer from weak security, disable them by default or remove them completely. Give hackers as little space as you can.
  5. Train your engineers (dev and test). There's common training needed by both (elements of secure design, running a threat model, etc.) and there is discipline-specific training such as penetration testing or the application of specific technologies. The SDL is called a lifecycle because it's a continuous process. Lather, rinse, repeat and all that.
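
To make point 3 concrete, here's a minimal sketch of layered (re-)authorization, assuming a Java servlet stack; the filter, attribute, and parameter names are hypothetical placeholders, not our actual code. Even after login, every request that touches sensitive data gets checked again:

    import java.io.IOException;
    import javax.servlet.*;
    import javax.servlet.http.*;

    // Defense-in-depth sketch: the user authenticated at login (the outer
    // wall), but we still re-authorize every request for sensitive data
    // (the inner wall). Names below are illustrative.
    public class ReauthorizationFilter implements Filter {
        public void init(FilterConfig config) {}
        public void destroy() {}

        public void doFilter(ServletRequest req, ServletResponse res,
                             FilterChain chain) throws IOException, ServletException {
            HttpServletRequest request = (HttpServletRequest) req;
            HttpServletResponse response = (HttpServletResponse) res;

            // Layer 1: a live server-side session must exist.
            HttpSession session = request.getSession(false);
            if (session == null) {
                response.sendError(HttpServletResponse.SC_UNAUTHORIZED);
                return;
            }

            // Layer 2: the session's owner must match the account whose
            // data is requested--never trust an ID taken from the URL.
            String ownerId = (String) session.getAttribute("userId");
            String requestedId = request.getParameter("accountId");
            if (ownerId == null || !ownerId.equals(requestedId)) {
                response.sendError(HttpServletResponse.SC_FORBIDDEN);
                return;
            }

            chain.doFilter(req, res);
        }
    }

Map a filter like this over your sensitive URLs, and a hijacked or replayed session ID fails at the second layer even though it passed the first.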

Our training has produced benefits. For starters, developers have a new security-focused mindset. We've found a few security bugs already, and our threat model has exposed some potential issues. This is great progress, and it comes from just one day of work. Imagine what we'll be like in a few months, after a day or two of training and a complete milestone with security in mind!

It's never too early or too late to take a step back and start thinking security. I designed our course based on my experience at Microsoft, which is neatly documented in Michael Howard and Steve Lipner's book "The Security Development Lifecycle". I cannot recommend this book enough!

Got a security question? Post it here...

Tuesday, July 15, 2008

Software Testing Patterns: Sessions

One of the most common things to test is sessions, cookies, and login/logout. This is the start of a pattern for that kind of testing.

First, some background:

HTTP is stateless, so web applications typically track users by handing the browser a cookie containing a session ID, while the server keeps the real state keyed by that ID. Anyone who can guess, steal, or tamper with that ID can act as that user--which is why sessions, cookies, and login/logout deserve focused testing.

Test Patterns:

  • Session hijacking: create several cookies for your site (varying login sessions). Tests: 1) Can you substitute the session ID from one session and use it in another? 2) What happens with invalid session IDs? 3) Prevention: require cookies to travel only over HTTPS (set the secure flag when creating the cookie). See the sketches after this list.

  • Cookie poisoning: change values in a cookie. For instance, if a cookie stores the check-out total for a shopping cart, the user could change that value before checking out.

  • Back button: do something that adds or modifies a cookie, then click 'back'. The state can end up out of sync--you land on a page which expects you NOT to have a value in your cookie that you actually do have.

  • Expiration: cookies without expirations are considered 'not persistent' and should be destroyed at the end of the browser session. Persistent cookies represent a security risk for as long as they live; make sure a cookie's expiration is reasonable (30 minutes?) based on typical user scenarios.

  • Log in, thereby picking up a session ID, and make note of it. Close the browser, then log in as someone else. Finally, update the cookie to the ID you noted earlier and hit refresh. Does the session change?
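
Here's a minimal sketch of the session-substitution test in plain Java, using only the JDK's HttpURLConnection. The URL, cookie name, and login flow are hypothetical placeholders--adapt them to your application:

    import java.net.HttpURLConnection;
    import java.net.URL;

    // Session-substitution sketch: log in twice, then replay user A's
    // session ID on a request for user B's data.
    public class SessionHijackTest {

        public static void main(String[] args) throws Exception {
            String sessionA = logIn("alice", "secret1");
            logIn("bob", "secret2"); // establish a second, victim session

            // Ask for Bob's account page, but present Alice's session ID.
            URL url = new URL("https://example.test/account?user=bob");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestProperty("Cookie", "JSESSIONID=" + sessionA);

            // A secure app should answer 401/403 or bounce to the login
            // page; a 200 carrying Bob's data means the hijack worked.
            System.out.println("Hijack attempt returned HTTP " + conn.getResponseCode());
        }

        // Placeholder: POST credentials and read JSESSIONID from Set-Cookie.
        private static String logIn(String user, String password) throws Exception {
            return "session-id-for-" + user; // stub
        }
    }

The same harness covers the last bullet: capture an ID, log in again as someone else, swap the old ID back in, and check whose session you land in.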

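On the prevention side, here's a sketch of hardening the cookie itself, assuming the Java Servlet API (the cookie name is illustrative): mark it Secure so the browser only sends it over HTTPS, and keep the lifetime short per the expiration note above.

    import javax.servlet.http.Cookie;
    import javax.servlet.http.HttpServletResponse;

    // Prevention sketch: a Secure, short-lived session cookie.
    public class CookieHardening {

        public static void addSessionCookie(HttpServletResponse response,
                                            String sessionId) {
            Cookie cookie = new Cookie("SESSIONID", sessionId); // illustrative name
            cookie.setSecure(true);     // never sent over plain HTTP
            cookie.setMaxAge(30 * 60);  // 30 minutes, per the note above
            cookie.setPath("/");
            response.addCookie(cookie);
        }
    }
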
Should Freshers be Testing Software?

I have seen an alarming trend in India and worldwide: "Test engineers sought - freshers" (for those not familiar with the Indian IT scene, a 'fresher' is someone fresh out of college--zero to one year of experience, maybe even a little more).

Seems like everyone with an open tester position wants to throw a fresher at it. This is a huge mistake! What does a fresher bring to the table?
  • Enthusiasm
  • Lots of book learning; in India, that means rote memorization
  • Some programming experience.
I once helped a large, failing consumer electronics company move work offshore to save money (offshoring to save money is a BAAAADDDD idea--ask me why or wait for a blog on it). The development 'partner' on the project brought in a ton of freshers and some somewhat seasoned testers. What did we get? We had test engineers who didn't know what a defect report was, who'd never used a defect tracking tool, who had absolutely no domain knowledge. All they had to contribute was 1) a warm body and 2) some experience in Java. This is a recipe for disaster.

Is there room for freshers in an IT organization? Absolutely! Microsoft focuses most of its recruiting on the college space, but that's because the ratio of seasoned full-timers to new hires (at least in the US) is probably 10:1. New hires have people all around them who can mentor and grow them, and they are NOT encouraged/forced to do mundane and boring tasks. Well, not all the time... As a new hire at Microsoft, I was the release lead for German Mac Office 98 with 18 months of experience in the company! But that came after 18 months of incredible mentoring and learning.

For those of you hiring testers, don't think "Well, no one wants to be a tester--let's find someone who wants a foot in the door and get them in here." You're doing yourself a disservice, you're doing your customer/project a disservice, and you're really not helping the fresher either. Look for some passionate QA/testing professionals. Bring them in and let them establish a world-class test organization. Then hire freshers, train them up, and keep growing your organization.

OK - off my soapbox for now...

Thursday, June 12, 2008

Thanks to the Communities

I belong to several communities--the Agile Testing group, the Watij group, the MSDN Software Testing Forum, and others. I just wanted to blog a quick thanks to all the active members of these communities. Just in the past six days, I have:
  • Gotten help getting Watij up and running and implemented
  • Gotten sample code to write a tool which dismisses error dialogs that are popping up in our automation
  • Gotten answers to several other questions
Engineering is tricky. Civil engineering is a moving field too, but it moves a lot more slowly--advances in concrete, paving techniques, and the like are probably all documented in a journal each month. Changes in software technology? Rapid pace! I don't think any of us could keep up with all the platform, open source, and commercial tools and techniques that change over time. But by having these communities, we are able to give each other a leg up.

Take my dialog dismissal app... We're building automation in Selenium RC (Java). There's an issue with Selenium's Chrome browser launcher where you cannot permanently accept a certificate, so every time our automation switches into SSL, we get the warning. Until today we had been unable to run all 350 tests in one sitting, because none of us wants to dismiss that dialog 1,000 times. With some sample code from Michael Johnson via the MSDN Software Testing Forum, I put together a quick C# app that watches for and dismisses this dialog. Problem solved, and for the first time ever, we can run our automation end-to-end.

Thanks, folks, for all the effort! We make each other's jobs easier this way. Kudos, too, to everyone who puts time and effort into open source tools like Watij, Selenium, TestNG, and the like.

John O.