- Minimal investment: like the 'day trader' who is only willing to trade with $25 or $100, there's the minimal investment philosophy. The thought here is that investments in quality are high-risk, low-return propositions; the less squandered, the better. This is the organization that says testing should only prove the positive case: if a business requirement exists for a given project, that requirement and ONLY that requirement should be tested (see the sketch after this list for what that leaves out).
- Limited investment: this is an individual with limited resources who invests wisely but clearly can't invest across the board in a broad, distributed portfolio. This kind of organization scrimps in areas which may cost it down the road, but those 'savings' are targeted and based on some sort of strategy. Safely limited investments might include short-cutting the authoring of test cases, risking a lack of portability or re-usability. The cases are used to ensure the initial and subsequent pre-release test passes are completed, but they are not expected to be used post-release. A new web feature which is expected to remain static going forward would be a safe limited investment, for instance.
- Full investment: this is the wealthy individual with the luxury of spreading her wealth across multiple portfolios and sectors. It takes money to make money, and this investor is richly rewarded. Not many people have the options she does, though. This is the organization which invests heavily in the QA process: in-depth test case development, lots of automation, plenty of lower-priority fit-and-finish test cases, and multiple rounds of testing.
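To make that first philosophy concrete, here is a minimal sketch, in Python with pytest, of the difference between proving only the positive case and also covering the negative cases. The withdraw() function and its tests are hypothetical, invented purely for illustration:

```python
import pytest

def withdraw(balance: float, amount: float) -> float:
    """Return the new balance after withdrawing 'amount' (hypothetical)."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

def test_withdraw_happy_path():
    # The one test the minimal-investment shop writes: the stated
    # business requirement, and ONLY that requirement.
    assert withdraw(100.0, 40.0) == 60.0

def test_withdraw_rejects_overdraft():
    # Negative case: skipped under minimal investment.
    with pytest.raises(ValueError):
        withdraw(100.0, 150.0)

def test_withdraw_rejects_nonpositive_amount():
    # Another negative case: also skipped.
    with pytest.raises(ValueError):
        withdraw(100.0, -5.0)
```

Note that the positive test passes whether or not the error handling exists at all; the two negative tests are exactly the sort of coverage the minimal-investment organization declines to pay for.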
So which one is right? Well, it all depends (don't you hate that about testers? We're like lawyers--we seldom speak in absolutes!). I have a hard time thinking of a time when the minimal investment is appropriate--perhaps when the project is an internal-only proof-of-concept which will never see the light of day and will NEVER be the foundation (base code) for the actual project. But when millions of dollars and hundreds of thousands of subsequent man-hours of development are riding on your project, this is just a stupid approach. It's short-term thinking; it emphasizes savings over every other consideration. It's the penny-pinching fool who buys a 4,000 SF house decorated with paper trim. It'll never last!
So the real choice is between two and three - limited and full investment. At Microsoft, I participated in projects which were years in duration (some went years beyond their original ship date - I know, I know...). These projects saw repeated upgrades on top of existing code. Millions of users bought and used the products we produced, and therefore we invested heavily. It was the right thing to do! I can't think of a single release I was ever involved in where I said "We over-tested that...". I can sure think of a number of releases where I wished I'd had more time - even if it was just to automate, to benefit future releases!
At the same time, some organizations are working on web components which (for the most part) will release once and may have one or two maintenance releases. They aren't foundational code; each compiled applet is relatively standalone. On projects which have a very small chance of being revisited, is it safe to only write brief test case descriptions? If the functions are tested thoroughly, but little investment is made in repeatability or portability, is it OK? Is it the right business decision? I think so! If a team is building foundational blocks like a content management system or the business object layer, well, they'd better be sure to spend much more time QA'ing it. But if the work is for one or two releases, I think it's OK--actually, it makes the most business sense--to cut short on the test design/documentation and focus on execution.
How about you? What do you think is the right balance, the right investment? Can't wait to hear your responses!
Good analogy between investing in financial assets and QA. It can be extended further: there are different kinds of asset classes to consider in financial markets, such as homes, equity, debt, gold, mutual funds, etc. Each has its own risk and reward levels, and therefore needs a totally different approach to building wealth. Similarly, there are different kinds of applications which need different testing approaches and investment. Medical systems, aircraft, and automobiles cannot fail; you have to be fully invested until your goal of almost foolproof software is met.
There are applications, as you have mentioned, that may require only limited testing resources, but I would argue that it again depends on who your target end users are. If your end users are not quality conscious and can live with minor glitches in the UI, like some text being only partially visible, or are willing to wait a few minutes for their transaction to complete, then you don't need much testing. In today's world, though, end users are getting very demanding, and they judge products and organizations based on the experience they get. Not investing enough in testing your product will make end users switch to another product which gives them the same functionality with a better experience and quality... just like investors do with mutual funds or company stocks. Therefore, I believe you cannot cut short on test design and focus only on execution, because test design is what gives you the goals for the tests that should get executed. With that design in place, your application gets tested more thoroughly, in relative terms, within the limited time you have.
Likewise, you have to "design" your asset allocations for the goals in your investment strategy. These design decisions get executed in terms of the inflows and outflows you will need to optimize returns from your investments.
Your blog leads me to the conclusion that there is an emerging need for organizations which focus mainly on testing products. They can provide resources to "not so wealthy" product development companies so that their products are tested in an optimized way. These QA organizations can also cater to the special needs of high-net-worth corporations, freeing them to do core development while still delivering quality products to the satisfaction of their end users.
Vijay's comments are spot on. What I don't understand is how much longer companies will shortcut testing. I'm all about effectiveness and efficiency. I really am working on coming up with ways to achieve the same quality with fewer people. But I don't think of myself as penny-wise and pound-foolish. While I'm always on the lookout for ways to test more effectively with fewer resources (hours, labor, etc.), I'm not going to do it at the risk of compromising an important quality metric.
One company I worked at was so dedicated to saving money (thanks in part to a development 'partner' on a fixed bid) that they nixed negative testing. Quite literally, there was no testing of any error conditions. When I arrived, I had a shouting match with contractors from said 'partner' about this very topic. Much to everyone's dismay, the area not tested turned out to have a serious flaw which caused major database corruption and cost months of re-work. Literally months. They might have saved $10,000 or $20,000 by not running those tests. It probably cost $1,000,000 or more in downstream work to make up for it.
Some companies are getting better at security testing--they are very thorough in their analysis of each component and the threat level of each potential risk. What about approaching test in the same manner? Lump categories of features together and weigh them by risk. For instance, it won't stop work if a dialog's controls are out of line or the Z-order is wrong, but if the data behind that dialog is somehow corrupting the database, the team should care. The data component should therefore be treated as the most important asset, with appropriate attention brought to bear on it. Yeah, a company might 'waste' $100,000 over the course of 10 functional-area releases. But would you invest $100,000 to save $1,000,000?
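To sketch what that might look like in practice--and this is only a sketch, with made-up feature categories, weights, and budget, none of which come from the post itself--score each feature category by failure impact and defect likelihood, then allocate test effort in proportion to the product:

```python
# Hypothetical risk-based test-effort allocation.
FEATURES = {
    # name: (impact 1-5, likelihood of defect 1-5)
    "data layer / persistence": (5, 4),  # corruption is catastrophic
    "business logic":           (4, 3),
    "dialog layout / Z-order":  (1, 3),  # cosmetic; won't stop work
}

TOTAL_TEST_HOURS = 400  # made-up budget for one release

def allocate(features, budget):
    """Split the budget in proportion to impact * likelihood."""
    scores = {name: impact * likelihood
              for name, (impact, likelihood) in features.items()}
    total = sum(scores.values())
    return {name: round(budget * score / total)
            for name, score in scores.items()}

if __name__ == "__main__":
    for name, hours in allocate(FEATURES, TOTAL_TEST_HOURS).items():
        print(f"{name:26} {hours:4} hours")
```

Under these invented weights, the data layer absorbs well over half the budget and the cosmetic dialog work almost none--which is the whole point: spend the $100,000 where failure costs $1,000,000.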