Testing leads to failure, and failure leads to understanding.
August 5, 2010 10:46 AM   Subscribe

Application Development Filter: The IT Director at my company balked at my suggestion that we conduct User Acceptance Testing on every requirement to be included in an upcoming release. Moreover, she told me that testing every requirement has not been a best practice for over 10 years. Is this true?

I've been a business PM for a long time now, and I've got some decent experience as an IT PM as well. I just can't wrap my head around how we can go into release without conducting UAT on all functionality.

Can any developers or release managers shed some light on this?
posted by bluejayway to Technology (9 answers total) 2 users marked this as a favorite
 
How do you define UAT? Do you provide a script for your client to go through to test various requirements? Can your client test in any way they see fit? Presumably if your client stumbled across a failure to satisfy a requirement that you have elected not to test for, you would need to fix it? I can see how you wouldn't want your client to regression test the entire system every time you make a change.

"Best Practice" is also often a matter of opinion, and often shorthand for "this is how I think things should be done." Test first development might be considered a best practice that was in favor more recently than ten years ago, and that philosophy certainly is intended to test every requirement. That's different from UAT, however.
posted by rocketpup at 11:11 AM on August 5, 2010


As rocketpup says, you might not want to do a full regression test. However, a test (however you define it) that shows the user that what they asked for is there will shut them up and, hopefully, allow you to hold your AR and PCR.

This is assuming, of course, that you've got good requirements and have created (and maintained) an RVTM (requirements verification traceability matrix). If you did, then you test it to prove to yourself that you've done what is required, then show them your test results so you can keep them from asking for more stuff.
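To make that concrete (the requirement and test IDs below are invented for illustration), an RVTM really just maps each requirement to the tests that verify it, which is what lets you spot the gaps:

```python
# Hypothetical, minimal RVTM: requirement IDs mapped to the test cases
# that verify them. All IDs are made up.
rvtm = {
    "REQ-001": ["UAT-LOGIN-01"],
    "REQ-002": ["UAT-LOGIN-01", "UAT-REPORT-03"],
    "REQ-003": [],  # no verifying test yet -- this is what the matrix exposes
}

# Any requirement with an empty list has no evidence behind it.
untested = [req for req, tests in rvtm.items() if not tests]
if untested:
    print("Requirements with no verifying test:", ", ".join(untested))
```

Any tool or spreadsheet that gives you that mapping will do; the point is having the evidence to show the customer.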

Without testing (and/or demonstrations) the customer will continue to come to the well for more features.

Note: I'm a PM and right now run a development shop. I see it all the time.
posted by Man with Lantern at 11:27 AM on August 5, 2010


I'd actually be interested to know from where or whom that Director received that "Best Practice." I suspect the answer would be along the lines of, "From her very own ass," but having been round and round this battle for more years than most of you have been alive, if she's got a source, like I said, I'd be interested in reviewing it.
posted by OneMonkeysUncle at 12:12 PM on August 5, 2010 [1 favorite]


Rocketpup's got it.
UAT is *not* SIT, or Unit Testing, or even end-to-end (though it can be part of that).
It's there to ensure that the primary stakeholders (who are also *sometimes* users) got what they asked for, and they sign off saying so after completing the testing.
That said, you need to work with them to develop scripts that cover the requirements the users actually touch or are affected by.
These scripts often test many requirements in just a few steps, sometimes more. They read more like instructions for use than most software tests do (at least in my experience).
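As a purely made-up illustration (the step wording and requirement IDs are invented), a single script can read like a usage walkthrough while quietly covering several requirements at once:

```python
# Hypothetical UAT script: each step reads like an instruction for use,
# but is tagged with the requirement(s) it exercises. All IDs are invented.
uat_script = [
    {"step": "Log in as a standard user",                   "covers": ["REQ-001"]},
    {"step": "Open the monthly sales report",               "covers": ["REQ-002", "REQ-007"]},
    {"step": "Export the report to PDF and check totals",   "covers": ["REQ-007", "REQ-012"]},
]

# Coverage comes from the tags, not from writing one script per requirement.
covered = {req for step in uat_script for req in step["covers"]}
print(f"{len(uat_script)} steps cover {len(covered)} requirements: {sorted(covered)}")
```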

All of that said, it's not a requirement if you can't test it.
posted by dbmcd at 2:35 PM on August 5, 2010


From reading your post, I couldn't quite tell if you are the customer, performing the UAT on a release you are getting, or if you are developing the software and helping your customer with UAT. Either way, what I have experienced from the developer end is that there is no best practice or even common practice. The two biggest factors I have found that determine what the customer does for UAT are (a) whether they are paying green dollars for the product (as opposed to being an internal group), and (b) how experienced they are in buying/consuming software (do they know how to run a UAT, and do they have expectations). UAT for an external customer that is savvy and buys software all the time is a completely different experience than UAT for an internal customer getting version 3 of a product they've already been using for two years.
posted by kovacs at 3:35 PM on August 5, 2010


Best answer: Her use of "Best Practices" means you automatically lose.

Because it is dead obvious that testing all the features with a live user is the best way to make sure a product works.

What she means is that it is best for her to do it more cheaply, and that nobody important has complained for 10 years.
posted by gjc at 4:27 PM on August 5, 2010


This depends ... on what your definition of a requirement is, and how your company's UAT works.

I am a functional system tester (SIT), and my job is to make sure the product works through all manner of combinations and manipulations, dependent on time/cost of course. I test all of the business requirements (this is where traceability comes in). I am the one the product has to go through before they let it near User Acceptance Testing.

Our organisation does some UAT later when the product is bug-free (ha!) and ready to go live. Like kovacs has said, it really depends on who the customer is, as to what standard/depth of UAT gets done. I understand that UAT is more of a confirmation that the product does what was promised, when being 'tested out' by the user. Like proof, before being handed over. It's where the most important requirements and features should be looked at, but "every requirement"? What if you have 2000 requirements?

So do you system test or are you trusting your developers and the UAT to get it right?
posted by Enki at 12:59 AM on August 6, 2010


Response by poster: Thanks for all of the responses. A bit more information: This is my second release as the program manager for a mature (version 4.4), gigantic, messy, homegrown application. I came into the company mid-stream during the previous release, so I just followed the company guidelines as I got my bearings.

The dev team does unit and system testing, obviously, but there's no SIT or other testing team between the dev team and the users. One of my duties is to poke, prod, cajole, flatter, plead, and threaten the business users to write their UAT scripts and execute them (the scripts, not the users).

I did get further clarification from the IT director though. gjc nailed it: it's cheaper to do incomplete testing and fix the mistakes in production than it is to do it perfectly the first time.
posted by bluejayway at 7:57 AM on August 6, 2010


The purpose of UAT is to give business leaders / managers / users confidence that the system does what it is supposed to do. And that confidence can be measured in certain ways. It should be, though it isn't always, up to the business to decide what the exit criteria for UAT will be. For example, if you have your requirements categorized by importance (high, medium, low), the business may say that all requirements marked 'high' must be tested in UAT. They may say all financial transactions, but not inquiries, must be tested. They may say f*ck it, we have had enough touchpoints in the process, and we know you guys do rigorous unit and system testing, so we don't need UAT at all.

If the business decides they will have 100% confidence if 25% of the test cases are executed and all severity 1 & 2 'defects' / 'gaps' / 'bugs' are resolved, then that is the contract that will be met. There isn't a hard and fast rule about this. It is about what is fit for purpose based on the culture of the business, the complexity of the system, the players involved, the methodology being used, etc.
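As a rough sketch of how one such negotiated exit check might be expressed (the priorities, severities, and IDs here are all made up):

```python
# Hypothetical exit criteria: every 'high' requirement has been tested in UAT,
# and no severity 1 or 2 defects remain open. All data is invented.
requirements = [
    {"id": "REQ-001", "priority": "high",   "tested": True},
    {"id": "REQ-002", "priority": "high",   "tested": True},
    {"id": "REQ-003", "priority": "medium", "tested": False},  # allowed to slip
]
open_defects = [{"id": "DEF-42", "severity": 3}]  # sev 1 or 2 would block exit

high_untested = [r["id"] for r in requirements
                 if r["priority"] == "high" and not r["tested"]]
blocking = [d["id"] for d in open_defects if d["severity"] <= 2]

if not high_untested and not blocking:
    print("UAT exit criteria met")
else:
    print("Blocked by:", high_untested + blocking)
```

The actual thresholds are whatever the business agreed to, not anything intrinsic to the system.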

And bluejayway, I am in the same boat as you. Poking, prodding, and threatening the business to write test cases is about as fun as going to the dentist. In a perfect world these test cases would write themselves based on properly written use cases / stories, but it never works like that, does it?
posted by jasondigitized at 9:27 AM on August 6, 2010


This thread is closed to new comments.