Friday, December 18, 2009

Running MSTest Standalone

It’s been a very long time since I posted on this blog – I’ve been quite busy with work as well as writing articles for http://www.searchsoftwarequality.com (head over and check out my tips and Expert Answers).

I stumbled across this today: a how-to guide to running MSTest standalone. http://www.shunra.com/shunrablog/index.php/2009/04/23/running-mstest-without-visual-studio/
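
For reference, once MSTest.exe is running outside Visual Studio, the invocation itself is just the standard, documented command line. A typical run looks something like this (the assembly and results file names here are made up):

    MSTest.exe /testcontainer:MyTests.dll /resultsfile:results.trx

/testcontainer points at the compiled test assembly, and /resultsfile writes a .trx results log you can archive or feed into a build report.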

Full disclosure: during my last stint at Microsoft, I worked on this internally, but I never disclosed how to do it. I do not know Shunra, and I honor my non-disclosure agreement to the fullest extent.

At the same time, I’m thrilled. MSTest is a great product and I’d really love it if MS produced a standalone version. According to this post by an MSFT employee, there’s a lot of discussion around it:

this is one of the things on a higher priorities that we are considering for a future release though this is yet being worked out as such there if no formal statement. I am pushing for this & may have a update in a couple months. http://social.msdn.microsoft.com/forums/en-US/vststest/thread/2897fb68-ef48-4941-a49e-fe8cb1b5aced/

So who knows, maybe we’ll see something soon!

Thursday, April 23, 2009

Blitz Blog: Quality Assurance Through Code Analysis

Today I read through a Parasoft whitepaper published on searchsoftwarequality.com, and I found the approach it takes to static code analysis to be a great one.

FULL DISCLOSURE: I write articles and am a “Software Testing Expert” for searchsoftwarequality.com. However, I don’t blog positively about something unless I believe in it.

Parasoft is the manufacturer of an application security static code analysis tool. Of course, the white paper’s intention is to sell copies of their tool. Can’t fault them for 1) believing in their product and 2) wanting to pay the bills.

What I really like about this article is the approach they take to SCA. They are not pushing it as the end-all, be-all of code quality. They are very realistic about SCA, in fact, stating that it’s often over-used and, once over-used, ignored. Companies implementing SCA must be careful about it—don’t enable a rule unless you really want to enforce it.

At the same time, they make a strong point about the value of SCA. It can really help a team drive quality upstream. Some policies an engineering team might want to use contribute to code readability, for example, while others contribute to security (dynamically built SQL statements vs. parameterized queries, anyone?).
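
To make that parenthetical concrete, here's a minimal C# sketch (the table, column, and parameter names are all made up) of the pattern a security-focused SCA rule flags versus the pattern it passes:

    using System.Data.SqlClient;

    static class UserQueries
    {
        static SqlCommand FindUser(SqlConnection connection, string userName)
        {
            // A security rule flags this: user input concatenated into SQL
            // text is an open door for injection.
            //   new SqlCommand(
            //       "SELECT * FROM Users WHERE Name = '" + userName + "'",
            //       connection);

            // The rule passes this: the value travels as a parameter,
            // never as executable SQL text.
            SqlCommand cmd = new SqlCommand(
                "SELECT * FROM Users WHERE Name = @name", connection);
            cmd.Parameters.AddWithValue("@name", userName);
            return cmd;
        }
    }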

As an Agile engineer, I do take issue with their heavy-handed ‘management enforcement’ method. Agile teams should adopt policies as needed. Cross-company Agile team representatives might establish company-wide policies and enforcement, but the Agile team itself should arrive at the bulk of the policy definitions.

One thing I like to see is the implementation of SCA directly in the build process: no build if the code fails with errors, and a team-wide email on warnings. This is the best way to enforce policies (but teams need to be selective about policies, so they don’t overwhelm or overstay their welcome).
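
As a sketch of how that gate can look in the Microsoft world, a build step can run FxCop's command-line analyzer (assembly and file names made up):

    FxCopCmd.exe /file:MyApp.dll /out:fxcop-results.xml /console

Whether you key off the exit code or parse the results file is up to your build system; the point is that the gate runs on every build, not as an occasional manual audit.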

Anyhow, blitz blog. Highly recommended reading!

http://go.techtarget.com/r/6673725/7930283/1

Tuesday, April 21, 2009

Oracle Owns the Stack

You’ve probably already heard that Oracle has acquired Sun Microsystems. It’s one big, happy, expensive enterprise systems family! Personally, I think this is not a good thing: it hampers competition. Oracle and Java already have a stranglehold on the IT space, and this merger just tightens the grip.

I have worked a total of 12 years at Microsoft, plus two years in IT at other companies, both of which were very big Oracle shops. I worked at Circuit City on the Oracle Retail implementation, and I worked at the LDS Church, where most everything runs in Java on AIX. With those two years of experience, coming from Microsoft technologies, I have decided Oracle is overpriced, overhyped software. Java I could take or leave, but I firmly believe Oracle sells too little for the money.

Here’s the deal… Oracle sells their hardware and their software at astronomical prices. I mean, seriously… The hardware is expensive. The OS can often be open source, but it still requires people to manage it. Add up the hardware, the DB and other proprietary software, and the management software, and you’ve sunk far more money into the stack than if you’d bought from the ‘Evil Empire’.

On top of that, I do not believe Oracle sells decent products. The database is OK, but Oracle Retail? It’s a mish-mash of third-party acquisitions, none of which really share a common language or data format. And they bundle it all together and say “It’s Oracle – it’s enterprise quality” and people believe it. As a QA manager, I was appalled.

For instance, I could NOT get the Oracle account rep on-site at Circuit to tell me what level of security testing the product had undergone. He told me that was proprietary information which he could not share. So we were investing millions in his software, and we didn’t have the right to know if it was secure?

Another example: we tried for weeks (I paid three IBM consultants nearly $600 an hour for six weeks) to automate one of Oracle Retail’s components. The entire time, the Oracle rep was in our status meetings listening to us report blocked, unsuccessful, struggling… Never once did he peep up, until I finally asked how they automate regression testing. At first, he tried to evade with the same “that’s proprietary information” response. When he finally realized I wasn’t going to drop the issue, he admitted they didn’t automate anything in that area. What?? What kind of company watches a customer throw thousands (tens of thousands) of dollars down the drain, all the while knowing the effort is futile?

In my role as QA Engineer at the LDS Church, I was surprised to hear people tell me they would not deploy Microsoft products in the public-facing network segment because Microsoft controls the entire stack, and a security exploit in one layer of the stack meant the entire stack was at risk. ??? From a technology point of view, the opinion makes no sense. Ironically, these were the same people who approved of and supported internal development of security technology, when Microsoft had an existing product on the market (and deployed at the LDS Church). With Oracle now owning Sun, will that answer have to change?

So in a recent guest posting on HIStalk, Orlando Portale (former GM, Global Health Industry) spouts on and on about how good this acquisition is. To me, the nail in the coffin is his statement that Oracle will begin creating “tightly bundled system stacks which incorporate hardware and software components. Oracle will now have all layers of the systems stack under its umbrella, including the storage, server, operating system, programming language, database, Web services, etc.”

In recent years, Oracle has been snapping up enterprise solutions left and right. They own an entire retail suite. They bought BEA, one of the last independent middleware providers. And with Sun comes control of MySQL. At what point will all the “Evil Empire” opponents realize there’s a newer, much larger empire out there – and it controls all the software they deployed because they wanted to stay away from Microsoft technologies?

I know… Long rant about this. I admit I’m a Microsoft bigot. Except for one product (SMS, or System Center Configuration Manager), every enterprise server product I have deployed has just, well, worked! And been reliable and stable. So my bigotry is based on my personal experience. It doesn’t take a PhD to deploy MS software, it works well together, and it’s got great uptime. And the price is approachable, too.

Soon I think it’ll become obvious: Microsoft is the underdog here. Fight back, stop the Evil Empire. Buy Microsoft!

Thursday, April 16, 2009

Tenets of a Software Testing Team

If you had the chance to write out the tenets (unbreakable ground rules) by which your quality assurance team lives, what would they be? Here are some I’ve been thinking of:

  • We partner actively in quality-first development, by participating in the entire SDLC and by contributing tools, process, and time to the development activity.
  • We do not and cannot ‘test in quality’. We test to validate requirements, to discover missing or inaccurate requirements and implementation, and to expose defects.
  • We use tools and process to discover defects, validate functionality, and improve quality, not for their own sake.
  • Secure software is a top priority: we protect our users’ privacy as well as our applications’ reliability.

What about you? Do you have other tenets on your team?

Monday, April 13, 2009

Blitz Blog: John Glaser on the Healthy IT Org

Departing from my quality-focused blog postings for a while (software testing rules, but hey – sometimes there’s good info elsewhere too), here’s a pointer to something you need to read!

John Glaser is a CIO and contributor to HIStalk (THE blog for healthcare IT). He captures the essence of a healthy IT org so well, it’s a must-read. And it’s quick!

http://histalk2.com/2009/04/13/being-john-glaser-41409/

Thursday, April 9, 2009

Blitz Blog: Persistent Quality Assurance

This is a blitz blog – quick, to the point, and hopefully helpful. Topic: persistence.

My wife has a cat (a HUGE cat, although my aspiring-veterinarian son tells us she’s a Ragamuffin, a breed that’s ‘big boned’ by nature). This cat I do not like. Don’t know why, but we’ve never really gotten along. But lately, she’s really been trying. She comes into my office, lies down right next to my desk, rolls over, and just purrs. All the while, she looks right at me with the kindest, happiest look on her face. Finally, I give up and pet her.

Can we learn from the big cat? When we want something, can we be pleasantly persistent, just waiting patiently for the attention we’re looking for? I can look back on my career and see many times where some pleasant persistence might have paid off better than being a bit more forceful.

I’ve been reading “Fearless Change” and have found the message in this book to be the same persistent patience that our cat is demonstrating.

Wednesday, April 8, 2009

Software Testing: Is Your Quality Assurance group a Thermostat, or a Thermometer?

I frequently read a daily thought on families and how to improve your family. Yesterday’s thought dovetailed neatly into some thinking I’ve been doing on QA organizations: are you a thermostat, or a thermometer?

A thermostat is a device which regulates the temperature in your home. It controls the atmosphere. A thermometer is a device which reflects the temperature. To be a thermometer is to be reactive; all it does is indicate the temperature. To be a thermostat means to proactively set the temperature.

In the family, a ‘thermostat’ parent doesn’t react to grumpiness or other negative emotions, but calmly sets the tone in spite of what’s going on around her or him. A ‘thermometer’ parent, on the other hand, gets ‘set off’ easily when children bicker, whine, or complain. Obviously my goal as a husband and father is to be a thermostat, although there are ‘thermometer-like’ days!

In a recent thread on the Yahoo Agile Testing Group (about continuous release), I noticed more than one poster saying something like “I tell management about an issue, and let them decide.” This reactive approach to QA has long been a pet peeve of mine. It smacks of a cop-out, and I’ve seen it develop into a victim mentality among many testers, myself included.

In my tenure at Microsoft (where I’m ‘serving’ my second stint, with a combined tenure exceeding 12 years), the testing organization has been consistently active in pushing for quality. A few things differentiate Microsoft from other organizations: first, the engineers wear the pants (that’s changing, slowly). In fact, nothing ships unless the program manager, development manager, and test manager sign off on it. And they won’t sign off unless their teams sign off. Second, any individual has the right to stop a project, even the newest tester in an organization. Third, while the management structure is massively deep (I’m some six or seven layers below Steve Ballmer, current CEO and President), any individual can get the ear of their VP, Senior VP, and CEO.

So the culture I’m most familiar with is one wherein testing is an active participant in product development. The testing organization moves beyond a ‘show and tell’ relationship with management to being an active participant in business strategy. The testing organization is part of the thermostat, and rarely plays a thermometer role.

I have worked in organizations where this is not the case, and I’ve seen a lot of evidence that this often isn’t the case. In many IT and product software organizations, testing plays a thermometer role. We ‘take the temperature’ of a piece of software and let management decide when to ship. In fact, I have been told by management, when I’ve tried to be a thermostat, to back off and let them make the decisions. In one case, I even refused to sign off on a risky release (a security-related issue in a piece of beta software), telling the release organization they were welcome to release, but they’d have to convince my manager to sign off because I wasn’t going to.

The Testing Thermostat

Thermostats regulate the temperature. In my “Quality 2.0” vision, testing organizations participate in regulating quality. The test organization moves from just taking the project’s temperature to a role where it helps set the quality bar. To do this, testing teams need to:

  • Include experienced, respected engineers. At one time in my career, I worked with Christian Hargraves, probably one of the most experienced engineers I’ve ever met. Christian leads and maintains the Jameleon project. He’s a sought-after expert in the Salt Lake IT market and has worked for the LDS Church in what I would call a Test Architect role. Every developer in the organization knew Christian knew his stuff, and if Christian recommended something, that had weight. Qualified, respected engineers who are part of a test organization can help contribute to quality process and practices across the organization. They can influence core development teams to push quality into their process, from code reviews to adopting best practices.
  • Advocate actively on projects. Testers need to be active participants in release decisions, influencing senior management. In many instances, testers are shareholders. In every instance, testers share the benefits of a good release by enjoying continued employment, and in ‘worst case’ scenarios, testers go down in flames along with the rest of the company. In other words, our employment is just as dependent on a good release as any CEO’s. If all we do is report quality, without influencing the decision, we are placing our careers in the hands of other people. I do not accept this point of view!
  • Drive quality upstream: we all know it’s significantly cheaper to fix a bug in the planning, early implementation, or even release validation phase than post-release. Testers need to push themselves and their influence upstream; the earlier we can look at code, the better.
  • Be involved in planning activities. Take an active role in scheduling. Drive feedback about previous releases (especially lessons learned) into current project planning. If the company has a history of underestimating test duration, speak up about it. Track and present data which shows a more realistic approach. Work patiently but consistently to drive reality into schedules.
  • Hold dependencies accountable. If the development team isn’t dropping code early enough for full testing, don’t stand for it! Your reputation as a tester, not to mention your continued employment, depends on a solid release. If the company runs with firm release dates but development is dumping code in your lap a week before release, you’re going to end up working 90+ hours that final week and still releasing crappy code! Negotiate and set milestones, and hold development accountable for reaching them. Two things will result: first, your test organization will start to work a consistent, sane number of hours. Second, quality WILL go up because you’ll be looking at code earlier.
  • Be involved in establishing release criteria. This is critical: across your business, you need consensus on what constitutes release-readiness. If you leave this up to marketing, they’ll say a set of customer stories needs to be developed. Leave it to development and they’ll say that, plus possibly that all unit tests need to pass. As QA engineers, we have a unique perspective in that we understand the value of positive AND negative testing. We can help influence release criteria which truly reflect product quality. Setting these criteria early, well before a project nears its release deadline, removes the emotion and sets a rational basis for declaring ‘Ship Ready’.

These are just a few things a test organization can do to transition from a thermometer to a thermostat. What’s missing? Does your test team take a more proactive role somewhere else? Let’s hear it!

Friday, April 3, 2009

Found a Security Flaw (The Need for Software Testing Everywhere)

Today I was paying a bill online, and I crashed the company’s IIS server (I promise, I did nothing wrong on purpose). The good news: they write their code rather securely, using parameterized queries rather than embedded SQL statements. The bad news (besides the crash itself) is that they were running with tracing enabled, so I saw the entire stack trace.

I called the company and got in touch with their tech guy. He was really polite on the phone and very open to feedback. I shared a bunch of info with him and thought I ought to document it. So here’s an Open Letter to All Admins Running IIS:

· You need to check your machine.config (and possibly web.config) files and make sure tracing is disabled or locked down: <trace enabled="false" localOnly="true" pageOutput="false" requestLimit="10" traceMode="SortByTime"/>. See http://msdn.microsoft.com/en-us/library/aa302351.aspx for more info.
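
For reference, the trace element sits under system.web, so the locked-down section of a web.config looks like this:

    <configuration>
      <system.web>
        <trace enabled="false" localOnly="true" pageOutput="false"
               requestLimit="10" traceMode="SortByTime" />
      </system.web>
    </configuration>

localOnly="true" is the safety net: even if someone re-enables tracing to debug, the trace details are served only to requests coming from the server itself.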

· Either spin up Microsoft Baseline Security Analyzer (MBSA) or use IISLockdown and URLScan to scan your servers’ security, especially the configuration. See http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=f32921af-9dbe-4dce-889e-ecf997eb18e9

· You might also benefit from a security review/audit from an experienced, independent consultant.

Security Focus has a decent article on the issue: http://www.securityfocus.com/infocus/1755 (it’s old but still accurate). NOTE: there is a chance that running IISLockdown may break something. If your developers built the site with some dependency on what’s actually a security hole, locking down your server could cause issues. I highly recommend running it against a test deployment first, running all of your automated tests against that deployment, and doing some manual testing as well.

You might also consider picking up copies of

  • “Improving Web Application Security” (MS Press),
  • “Building Secure Microsoft ASP.NET Applications” (MS Press),
  • “Writing Secure Code” (I *highly* recommend this – it’s the bible on secure coding, especially in MS technologies), and
  • “The Security Development Lifecycle” (MS Press).

From more to less specific, these books are pointers on secure IIS configuration, ASP.NET coding, coding in general, and strategies for maintaining application security throughout the lifecycle. The latter book is a great read if you’re on a larger team and/or need to influence management to give you time for security-related work.

Thursday, February 12, 2009

DRAFT: What’s the Role of a Test Manager in Agile Organizations?

This blog posting is a draft. It represents a work in progress, and is posted with the hope of gaining peer review for improvement.

Many people have asked me, or challenged me, about the role of a test manager in agile organizations. With a basic premise of agile being the self-led team, who needs a test manager? Many agile engineers even bristle at the thought.

Reality shows, however, that not all teams or organizations benefit from a complete lack of management. Smaller engineering organizations can often do without it, but larger organizations perform better with some amount of centralized management. A team of ten engineers with no role specialization has no need for a test manager. A team of fifty engineers, which maintains a differentiation between test-oriented and development-oriented engineers for a valid business reason, will certainly benefit from a test manager (and, incidentally, a development manager).

What do I think is the role of a test manager in such an organization? I think the test manager contributes in five higher-order functions: interfacing with senior management; providing vision and leadership; fostering cross-group collaboration; leading specialties, tools and reporting; and providing organization management. That’s a lot of Dilbert-esque lingo! What do I really mean? Read on for details…

Interface With Senior Managers

Larger organizations, such as the team I led at Circuit City, have deep management structures. It’s simply a fact of life. Those senior managers generally run cross-discipline organizations and benefit from having a manager between them and each discipline. On Circuit’s MST project, I managed over one hundred engineers across thirteen concurrent workstreams. My role was to interface between my manager and the test organization, communicating messages and centralized decisions in a way that addressed my teams’ needs, interests, and concerns.

Test managers also act as a buffer, insulating the team from senior management’s see-saw decision-making. Sometimes just shielding testers from the questions senior management can ask is doing them a huge favor. A question like “do we have the right number of testers?” can really cause concern in a test team; the test manager can research and answer it without stirring up worried or troubled feelings.

When senior managers arrive at a strategic plan, the test manager’s job includes integrating that plan into the test organization. A great example would be an organization which has decided to migrate from waterfall to agile! The test manager’s role will change dramatically, moving from ‘manager’ to ‘coach,’ assisting the test team in understanding their role and helping to set engineering goals in terms of process adoption.

Another role test managers play in this larger organization is simply pushing back on bad ideas. Individual contributors are often heads-down (especially on agile projects), in the day-to-day work. If senior management stumbles upon a ridiculous idea, individual team members often lack the cycles (and sometimes the experience) to push back on that decision. The role of a test manager often includes helping senior management understand the possible side effects of a decision.

In some organizations, testing or quality assurance is not viewed as critical. These organizations may lack experience, or may have experience only with smaller projects than the one at hand. A test manager’s role includes the responsibility to advocate for quality across the entire company. Sometimes this means helping customers understand the value of the ‘extra cost’ of a few test engineers on a project. Other times, it’s helping executive management see the need for experienced testers. Sometimes the job includes preventing releases from happening when core quality issues beyond simple functionality could cost the company money or bad PR.

Provide Vision and Leadership

Agile teams are (rightfully) quite busy in the day-to-day and may not be cognizant of everything going on in the company. The test manager’s responsibility in an agile organization can include driving vision. With an understanding of company growth plans, the test manager can be looking ahead to understand how to double or triple the size of the test organization. The test manager can also be thinking about strategic redirection – for instance, perhaps the test organization depends on developers for all automated testing. The test manager can set in place organization-wide plans to train testers, helping them become better automation engineers. The test manager needs to be the person asking “Why not?” and helping to drive the team forward toward that goal.

Sometimes agile teams fall behind due to circumstances beyond their control. The agile approach is to take those hits, learn, and move on. But sometimes there’s value in getting help. For instance, if a member of an agile team becomes ill during a critical period, a strong test manager can step in and ‘pinch hit’ for that tester while he or she is out. This isn’t advocating that the test manager is simply a ‘reserve’ resource. Teams which underestimate or over-extend themselves need the experience which comes from missing goals. But there are times when a test manager can contribute at an individual level.

Another leadership role is to help teams through transition. The change from waterfall to agile is a fantastic example of this – as a leader, the test manager needs to play a critical, positive role in helping testers work through the difficult migration process. The test manager provides structure and understanding, offers insight, and encourages team members as they face challenges. Above all, the test manager needs to be supportive and reassuring during the struggles which come with change. This leadership will help retain top talent, which might otherwise leave the organization due to the uncertainty and stress of change.

Finally, the test manager can represent the test organization and the company as a whole. A good test manager can be a key asset in customer retention. By contacting external customers, being aware of situations they’re facing, and communicating the status of fixes and changes, the test manager can help the customer through any rough spots in project implementation. Another role the test manager can play is sitting as a member of industry or standards organizations, making sure quality has a voice at that table and providing expertise in the subject.

Foster Cross-Group Collaboration

A critical role played by any management in an Agile project is eliminating road blocks and keeping the team moving. The test manager can play a pivotal part in this by interacting with other groups, with the business side of projects, and with the customer. Sometimes it takes a management-level title to ‘influence,’ even within the same company. More important, some negotiations and even simply driving closure can involve a lot of time, talking, and walking. By remaining proactively involved in projects, the test manager can help remove impediments, all the while allowing the team to remain heads down.

Also, very often in a larger organization, the test manager is the first person on the team with corporate purchasing privileges. The test manager can therefore take care of licensing, fees, and other purchases for supplies the team needs to carry on its work.

In larger engineering organizations, the test manager works closely with development and program manager/project manager counterparts. They collaborate on cross-discipline objectives as well as on budgets and scheduling. They are often the point-people for driving mitigation for lessons learned which surfaced in retrospectives.

Another role the test manager plays in a larger organization is introducing engineers around the company. A successful test manager is very familiar with projects and personnel across the entire organization; when one employee is looking for help, that test manager can make appropriate introductions.

Finally, the test manager can play a scrum master role in a scrum of scrums, helping to lead the meeting.

Specialties, Tools, and Reporting

Generally, test organizations are split into teams in a matrixed structure. Most testers fall easily into a project team, where they’ll be busy until that project completes. There are, however, occasions where testers are hired into more of a ‘Center of Excellence’ function: performance testing, automation and tools, or security-oriented testing, for instance. These testers are shared across the organization, brought in as experts to play a specific role on a release. Organizationally, it’s often most efficient for these testers to report to the test manager, who manages their assignments.

In a large Agile organization, there is often tension between the Agile maxim “do what works” and organizational efforts to standardize, especially on tools. Standardization sometimes helps keep costs down (particularly with expensive commercial tools, like load testing suites). It also aids in maintaining resource ‘portability’ (the ability for a tester to move from team to team without needing to learn new tools). A test manager needs to be very sensitive to the needs (perceived and real) of a team, but can often be instrumental in maintaining consistent toolsets across the organization.

Agile is notable, among other things, for a distinct lack of metrics. Status reporting is as important to senior management as it is a burden to agile teams. The test manager can play a critical role in pulling together status out of several test teams, presenting it to management in a format they’re used to. It seems a bit mundane, but in spite of the gains made moving to agile, many senior managers still yearn for detailed status reports.

Related to reports is the idea of common metrics. Just as the Agile team needs to ‘do what works,’ the Agile organization also needs to do what works. Sometimes that means a bit of a burden on the Agile team. With engineering expertise as well as an understanding of project management, a test manager can help Agile teams arrive at metrics which are easily generated (often in an automated manner, if unit testing or defect tracking systems are in place). There are metrics that work (they vary from company to company, but they work), and the test manager is a key player in defining these metrics and automating the gathering process.

Finally, the test manager can offload the ‘compliance’ burden from the Agile team, reporting up to senior management on the progress against various company initiatives. Again, maybe not exciting to the test manager, but critical to the company’s improved effectiveness and efficiency (well, we like to hope each initiative is critical).

Organizational Leadership

Even in the Agile organization, things like performance management have to happen. The test manager carries a significant administrative burden driving this process. This is a good thing – a good test manager not only helps his or her teammates grow in skills and experience, they also advocate with senior management for promotion and compensation increases.

As a test manager, my experience has always been that my teams are more effective when I screen candidates for open positions and then pull team members in for formal interview loops. As a test manager, I have had responsibility for opening job postings, authoring job descriptions, and sourcing resumes for open roles. When the test manager takes on this administrative burden, the Agile team can focus on their work. When decent candidates are sourced, the team can stand down briefly to perform interviews.

Closely related to recruiting is the role the test manager takes on in hiring and onboarding. The manager guides new hires through the hiring process, helps with signing up for benefits and such, and makes sure the new hire settles in well. A key step I have seen too many test managers overlook is introducing the new hire around the company. Finally, the new hire will achieve their highest potential as quickly as possible only if the manager assists them in setting goals for impact, development, and growth.

A test manager’s work is to drive his or her team to higher levels of effectiveness and efficiency. Working in concert with members of the Agile teams, test managers can accomplish this in part by researching and procuring appropriate training. Whether it’s bringing in experts in Agile testing or arranging for on-site training in automation skills, training is critical for improving what a test organization can accomplish. Procuring that training can be a time-consuming task. The test manager also arranges for training which fits broad, cross-organization needs rather than training focused on specific teams.

Finally, the test manager is critical in employee retention. A good manager is aware of the interests and professional goals of each employee in the organization, and makes the effort to provide that employee with opportunities in sync with the employee’s wishes. With a broad, cross-organizational focus, the manager can also help employees find new roles when, without that assistance, they might have left the company.

Help the Agile Process Work

A closing concept for the test manager’s role in Agile is the idea that he or she is a make-or-break participant in the transition to Agile. It can take several months or even a year to successfully transition to an Agile process, and that transition can be painful, especially for testers! The test manager helps coach teams through the transition, lending support and encouragement, fostering courage, and assisting as ownership shifts to the team. This is not a top-down role, but a one-on-one coaching role, helping team members find courage, patience, and confidence.

Saturday, January 31, 2009

Quality Assurance: automated QA software testing tools

This week, I answered a question about automated QA tools. This topic comes up frequently – my base answer is “use the simplest tool that will get the job done”. There are a lot of open-source tools, but I find that JUnit (or NUnit, if you’re in .NET) and Selenium or Watij are a great combination for testing web applications.

Read my answer here: http://searchsoftwarequality.techtarget.com/expert/KnowledgebaseAnswer/0,289625,sid92_gci1346325_tax306196,00.html
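
To show why I like that combination, here's a minimal sketch of an NUnit fixture driving Selenium RC from C# (the URL, locators, and expected text are made up; it assumes the Selenium RC server is running on its default port):

    using NUnit.Framework;
    using Selenium;

    [TestFixture]
    public class LoginPageTests
    {
        private ISelenium selenium;

        [SetUp]
        public void Start()
        {
            // Assumes a Selenium RC server is listening on localhost:4444.
            selenium = new DefaultSelenium(
                "localhost", 4444, "*firefox", "http://www.example.com/");
            selenium.Start();
        }

        [Test]
        public void BadPasswordShowsErrorMessage()
        {
            selenium.Open("/login");
            selenium.Type("username", "testuser");
            selenium.Type("password", "wrong-password");
            selenium.Click("loginButton");
            selenium.WaitForPageToLoad("30000");
            Assert.IsTrue(selenium.IsTextPresent("Invalid username or password"));
        }

        [TearDown]
        public void Stop()
        {
            selenium.Stop();
        }
    }

The same test body ports to JUnit and Java almost line for line, which is part of the appeal: simple tools, simple tests.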

Software Testing: Writing a Test Plan

This week, I answered a question about writing a test plan, on my “Ask the Experts” column on searchsoftwarequality.com. You can read the answer at http://searchsoftwarequality.techtarget.com/expert/KnowledgebaseAnswer/0,289625,sid92_gci1346327_tax306121,00.html

What do you think? Do you use a test plan? If you’re agile, is your test plan this in-depth?

Wednesday, January 28, 2009

How to Ask For Help with Software Testing/Quality Assurance Questions

Back in October 2008, I answered the question "What's the Best Software Testing/QA Tool". In my answer, I included a few steps for ensuring you get an answer when asking questions.

Due to popular demand, I'm moving those comments into their own blog entry. This way, it's easier to find and read.

To ensure you get an answer:
  1. To all testers: look before you ask. Really! If you are wondering what tool you can use for a given test type, use Live Search and look for information! Performing a minimal amount of research shows respect to the audience you're turning to for help, and can actually prevent a question now and again. QA is all about searching for product defects; apply that searching capability to answering your questions.
  2. You're more likely to get help on a specific question than a broad one. For instance, asking "What's the right tool?" won't get you much. However, "I've evaluated Selenium, Selenium RC, and Watij--given my situation (describe it), what tool do you recommend?" will draw useful answers.
  3. Talk about the problem you're trying to solve. "We realized we'll be running tests over and over and over again, but our UI changes frequently. What strategies can we take...?" is a question that raises the problem and lets people know what help you're looking for.
  4. Asking "what's the best tool?" is like asking "What's the best car?". If you live in Germany and drive the autobahn frequently, the best car for you is far different from the best car for someone who lives in Bombay, battles thick traffic, and drives Indian roads (notoriously rough). Be specific, give details, and look for recommendations. Software testing tools are the same way - a screaming "BMW" test tool will fall apart on a Bombay road. A right-hand drive Jaguar would be totally out of place on an American highway. A performance test tool isn't the right way to automate repeated quality assurance tests. The right tool is dependent on the project, the skill sets on the QA team, the timeline, and several other factors.
  5. Solve your own problems. The Internet is an incredible tool and offers us all a ton of opportunity to not have to reinvent the wheel. But don't ask other people to do your work for you! Ask for advice. If you want someone to solve your problem, ask one of us to consult (for pay) and bring us in. We'll be glad to get the job done for you!
  6. Give back: as you grow and learn, give back... Don't post a question, get your answer, and disappear. Remain an active participant in the community and 'pay it forward'.

Summary? Be specific, research before you ask, solve problems, and give back. That's how to get answers online--in a sustainable fashion.

Sunday, January 18, 2009

Quality Assurance: software testing reports

In my capacity as a software quality expert at http://searchsoftwarequality.com, I recently answered a question regarding software testing reports. A writer asked how to build a testing status report focusing on test execution criteria, test case failure rate, and bug rate.

You can see my answer posted at http://searchsoftwarequality.techtarget.com/expert/KnowledgebaseAnswer/0,289625,sid92_gci1345318,00.html 

What do you think - too simplistic? Or do you agree that these are the bread-and-butter reports for software testing status?

Tuesday, January 6, 2009

The three REAL software testing/software quality lessons learned from the Zune incident

A couple of days after the now-infamous Zune disaster struck, the dogpile began. I don't mind critical bloggers, and I certainly don't mind customers complaining about product issues. As a software quality professional, I encourage both. But I tell you, Esther Schindler's recent blog post on the Zune disaster really rubs me the wrong way!

Schindler's stance is that this is an obvious bug which everyone should have caught. That's simply wrong, and it's outrageously unprofessional to make that statement! Someone as experienced as she purports to be should have the courtesy to be honest in her analysis. We've been building software since, what, the sixties? Since 1970 there had been nine leap years before this one, four of them in the past eighteen years. What this means should be obvious: if this were a low-hanging, "obvious" bug, it would have popped up already. It would be on our common radar and we would be testing for it.

Another issue I take with Schindler's rant is her claim that the developer who wrote this bug is 'still out there, in the wild, writing more bugs of the same nature'. Yes, I will grant you that there are many developers writing stupid bugs like this. In my last organization, no fewer than three developers wrote the SAME bug in three different releases (it was a URL information disclosure/escalation bug). It was a silly bug and should never have been written once, let alone thrice. But in almost thirteen years at Microsoft, I've never seen an engineer last who makes repeat mistakes, period. Fool me once, shame on me. Fool me twice, this way is the door! Her concerns that a similar bug will be lurking in Microsoft's next software are just hype to sell columns. There's little or no substance to them.

She also complains that the bug is simple date math. I think she's jumping to inexperienced conclusions here. First of all, I see little in her resume to lead me to believe she's an active developer qualified to classify a bug as a date math defect. Secondly, all we know is that on the last day of a leap year, the Zune froze. One driver, from one component, used in one Zune release froze. Is it date math, or something else? Only the Zune team really knows, and it would be presumptuous for anyone else to claim they know the cause.

So, her first lesson is that this is a failure of the development and QA process. Uh, duh. No kidding! Her second lesson is that we need to learn from history (of course, there's no real history to learn from here, except that people make mistakes). Her third lesson is that the developer is still working. Uh, duh... If we fired every developer who ever wrote a bug, we'd just have testers left, looking at their hands! If we all lost our jobs over mistakes, we'd all be unemployed! I'm not sure exactly what Schindler's looking for here, but I can tell you I don't like the direction she's going.

Schindler's post is mindless rabble-rousing, aimed at stirring up a crowd (and keeping her name in print). It's also rife with wrong conclusions, and if her assertions were taken to heart, the results would be drastically unrealistic.

But let's ask the question: should Microsoft, and the engineering community within and without, be worried about this defect? Yes! What are the real take-aways from this issue? Read on...

First: Update the Test Repository

The first take-away here is that this is a bug with high potential for repeats. While this is the first known instance of this defect, it seems possible it could happen again. So, lesson learned: update the test repository. Next time you test leap year handling, be sure to cover not just Feb 28, Feb 29, March 1, and March 2, but also December 31.

More than updating the repository with this specific test case, testers need to extrapolate a series of test cases. There's something happening here. Schindler might, in fact, be right that it's a math bug. It might also be a buffer overrun (365 days were used up by Dec 30) or something else. The point is, there's a class of bug to be researched and investigated: date management, date accounting, infrequent and semi-regular change, and similar topics. And you can bet that both the developer and the feature tester on this component are working on this!
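
As a sketch of what those repository additions might look like (in NUnit; DayCounterClock here is a made-up stand-in for whatever component actually converts a day counter to a date):

    using System;
    using NUnit.Framework;

    // Hypothetical stand-in for the component under test: converts a
    // days-since-epoch counter into a calendar date.
    public static class DayCounterClock
    {
        private static readonly DateTime Epoch = new DateTime(1980, 1, 1);

        public static DateTime ToDate(int daysSinceEpoch)
        {
            return Epoch.AddDays(daysSinceEpoch);
        }
    }

    [TestFixture]
    public class LeapYearBoundaryTests
    {
        [Test]
        public void BoundaryDaysConvertToTheRightDates()
        {
            DateTime epoch = new DateTime(1980, 1, 1);
            DateTime[] boundaries =
            {
                new DateTime(2008, 2, 28),
                new DateTime(2008, 2, 29),   // leap day
                new DateTime(2008, 3, 1),
                new DateTime(2008, 3, 2),
                new DateTime(2008, 12, 31),  // day 366 of a leap year: the Zune case
                new DateTime(2009, 1, 1),
            };

            foreach (DateTime expected in boundaries)
            {
                int days = (int)(expected - epoch).TotalDays;
                Assert.AreEqual(expected, DayCounterClock.ToDate(days),
                    "Wrong date for day counter " + days);
            }
        }
    }

The stand-in here passes trivially, of course; the value is in pointing the same boundary list at the real clock or calendar component, which leads to my next point.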

Second: Component Testing

Bill Gates' vision for Trustworthy Computing has always included the concept of distributed programming, where the application model is heavily object-oriented and componentized. The mistake many testers and organizations make is that they only want to perform system or functional testing on the completed product. One of the key lessons of Agile and XP development methodologies is that you break things into pieces and test the pieces. According to the press release, this bug is at the component level, and that's where it should have been caught.

Organizations need to reconsider their risk-based approach and ask if component-level testing (unit testing, functional testing: different names for the same activity) would be more effective. Component-level testing, combined with rapid feedback (i.e., daily builds with testing involved), will prevent defects from creeping in at the component level, and might have caught an issue like this one.

Third: Take Time to Think

This third point is a personal mission for me. In the goal to minimize investment, IT organizations are constantly squeezing test. Development can overrun dates willy-nilly, but it's always a requirement for the test org to 'make it up.' Hey, it's business, and we need to hit deadlines. But applying the same effort at the development level (the discipline to stay on schedule) that the test team is expected to apply (catching up when dev has blown an unrealistic schedule) would prevent the need for herculean test efforts, and would yield more time for the test organization to think about and improve its strategy.

Why does this matter? Finding defects like this requires creativity. It requires looking beyond functional test cases and thinking about 'what could go wrong'. Schindler's making the same mistake many people make: overlooking the fact that testers have to catch all of the bugs, whereas the user only needs to experience one. Yes, we sign up for this when we enter a career in quality, but that doesn't change the fact that there's a disparity in expectations. A highly successful, high-quality release cannot be achieved if you force-march your test team and cut corners.

Most IT organizations get away with this approach because their customers are internal and, therefore, are more forgiving. But this kind of bug is very similar to security defects. They're difficult to find, require a lot of creativity, and take time to root out.

So, a great big raspberry to Esther Schindler for a poorly-written article on the Zune incident. True, customers were served poorly. More importantly, the engineering community needs to take lessons here. Her lessons? I question the logic used in drawing her conclusions and I really feel her posting was a populist knee-jerk lacking any substantive value for the engineering community.

At the same time, there is plenty to be learned, and learn we'd better! Let's build off this new class of defect, let's approach our testing from a more component-oriented perspective, and let's think about how much time testing is given to be creative.

Full disclosure: yes, I work at Microsoft as a Senior SDET Lead. But I'm critical of my employer where it makes sense. If I were the manager of the Zune test org (and I know several people in the org), I'd be having the same conversation with them that I wrote in this blog posting.