Measuring productivity
January 17, 2010 12:29 AM

What kind of productivity do we know how to measure effectively, and what kind do we not? What are some examples of good measurements of the first kind, and what are some examples of failing to measure the second kind of productivity? What distinguishes these two kinds of productivity? What's a good Google query for this question?

As an example, I don't know any good way to measure a programmer's productivity, although many schemes have been tried. Sure, in the end you can look at the final product and interview his team-mates, but on a daily basis you can't really point at the goofball trolling thedailywtf and say with certainty that he isn't responsible for half the good code coming out of his team. (Or maybe that's wrong. Enlighten me!) What sets them apart?
posted by d. z. wang to Science & Nature (7 answers total) 8 users marked this as a favorite
What distinguishes these two kinds of productivity?

Productivity is a measure of output per person per time period, and the main thing that distinguishes good measurements from bad is that good measurements measure output in units that actually mean something.

In manufacturing, this is generally pretty easy, because you can simply count how many things are rolling off the production line. As you've noted, there is no generally useful measure of programmer productivity, because no aspect of what a programmer produces is usefully countable. Source lines of code are often used by managers who don't understand this, which simply leads to people gaming the system (e.g. by cutting and pasting large slabs of code instead of putting them in a function) in order to get a high productivity rating.
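To make the gaming concrete, here's a toy SLOC counter (my sketch, not anything from the thread; the helper code being counted is made up). Pasting a block of logic everywhere instead of calling it as a function reports far higher "productivity" than the factored version, even though the factored version is better code:

```python
def sloc(source: str) -> int:
    """Count non-blank, non-comment source lines (a naive SLOC metric)."""
    return sum(
        1
        for line in source.splitlines()
        if line.strip() and not line.strip().startswith("#")
    )

helper = "def validate(x):\n    return x > 0\n"

# Factored: define the check once, call it where needed.
factored = helper + "a = validate(1)\nb = validate(2)\n"

# "Gamed": paste the check's body everywhere instead of calling it.
pasted = helper + ("a = 1 > 0\n" * 5) + ("b = 2 > 0\n" * 5)
```

The pasted version scores three times the "output" of the factored one, which is exactly the perverse incentive described above.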

The closest thing I can think of to a useful measure of programmer output is 1 / (1 + the number of bugs found in checked-in code), and even this can be gamed simply by never checking in any code at all (the Wally method).
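That formula is simple enough to write down directly — and doing so makes the gaming obvious, since a programmer who checks in nothing at all gets the same perfect score as one who ships flawless code:

```python
def productivity_score(bugs_in_checked_in_code: int) -> float:
    """The half-serious metric above: 1 / (1 + bugs found in checked-in code)."""
    return 1 / (1 + bugs_in_checked_in_code)

# A productive programmer whose code yields one bug scores 0.5,
# while the Wally method (check in nothing, hence zero bugs) scores a perfect 1.0.
```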

Productivity measures are not the only thing affected by the inherently non-quantifiable nature of good programming; quality measures are affected too. This is why the ISO 9001 quality standards so beloved of the manufacturing industry are (quite rightly) regarded with absolute contempt by the programming community.

ISO 9001 training will tell you that "quality" is measurable, and it means conformance to specification and lack of variation with respect to the spec - if you have two runs of 10mm bolts, and one run contains bolts from 9.9 to 10.1 mm diameter while the other contains bolts from 9.99 to 10.01 mm diameter, then (all other things being equal) the second run is of higher quality. Unfortunately, there is no quantifiable way to judge a software project's conformance to spec in any way comparable to this - nobody who claims to run a six sigma software production operation should ever be taken seriously.
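The bolt example can be made concrete with a quick calculation. The sample diameters and the spec limits (10mm ± 0.1) below are my assumptions, chosen to match the ranges described above; the process capability index Cp = (USL − LSL) / 6σ is the standard way this kind of quality is quantified:

```python
import statistics

# Hypothetical sample diameters (mm), consistent with the two runs described.
run_1 = [9.90, 9.95, 10.00, 10.05, 10.10]
run_2 = [9.99, 9.995, 10.00, 10.005, 10.01]

sigma_1 = statistics.stdev(run_1)   # ~0.079 mm
sigma_2 = statistics.stdev(run_2)   # ~0.0079 mm: a tenth of the variation

USL, LSL = 10.1, 9.9                # assumed spec limits for "10mm bolts"

def cp(sigma: float) -> float:
    """Process capability index: spec width over six standard deviations."""
    return (USL - LSL) / (6 * sigma)
```

With these numbers the second run's Cp is roughly ten times the first's — a meaningful, quantifiable statement about bolts that has no analogue for a software project's conformance to spec.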

Similar considerations apply to engineering, architecture and other fields which, like programming, are in large part based on the expression of creativity. The quality of a creative work is more subjective than measurable, and the same holds for the "productivity" of the people involved in creating it.
posted by flabdablet at 1:33 AM on January 17, 2010 [1 favorite]

Check out Jill Lepore's review in The New Yorker of “The Management Myth: Why the Experts Keep Getting It Wrong” (Norton; $27.95).

"...Matthew Stewart points out what [Frederick Winslow] Taylor’s enemies and even some of his colleagues pointed out, nearly a century ago: Taylor fudged his data, lied to his clients, and inflated the record of his success. As it happens, Stewart did the same things during his seven years as a management consultant; fudging, lying, and inflating, he says, are the profession’s stock-in-trade. ..."
posted by Carol Anne at 5:27 AM on January 17, 2010

One thing I have found is that managers asking for tools to measure productivity never have any plan for what to do with the data. They get hung up on collecting all sorts of performance metrics, but it stops there.

Why do you want to measure the things you want to measure? What decisions will that data allow you to make? What corporate goals can you support by the decisions the data helps you make? The "we don't have enough performance data" cycle never ends when you start with performance metrics. You have to end with them. The data cannot do the work for you.

Mission: To be the #1 widget manufacturer in the state
1) Company Goal: Our company must increase widget production 10% over last year
2) Departmental Goal: We must produce X widgets per day
3) Employee Goal: Each employee must average Y widgets per day based on their experience/skill level/etc.

NOW you can measure and have it mean something. You can create incentive plans based on % above the daily goal. You can give employees a target to shoot for. The tools you put in place (whether they are automated or just tally marks on a piece of paper) now have meaning beyond simply accumulating numbers. If Jane can't meet her goals, you can tell her exactly how her role fits into the overall structure, and it's clear to her why her productivity matters. That's so much more motivating than, "Jane, the numbers tell me that Joe is beating you in daily widget count."
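The cascade above is just arithmetic once you plug in numbers. All the figures here (last year's output, working days, headcount) are made up for illustration:

```python
# Hypothetical inputs for the three-level goal cascade.
last_year_output = 100_000                         # widgets produced last year
company_goal = last_year_output * 1.10             # 1) +10% over last year
working_days = 250                                 # assumed working days/year
dept_daily_goal = company_goal / working_days      # 2) widgets per day
employees = 20                                     # assumed headcount
employee_daily_goal = dept_daily_goal / employees  # 3) widgets/employee/day
```

With these assumptions, Y works out to 22 widgets per employee per day — a number Jane can actually be measured against, and one that traces directly back to the mission.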

Naturally, in programming, or any "knowledge worker" skill set, it's more difficult. The goals you set at those 3 levels are harder to nail down, but I think it can be done. It just takes work.
posted by I_Love_Bananas at 7:53 AM on January 17, 2010 [1 favorite]

I do think it's possible to measure productivity at a team level, even if it's difficult to break it down day by day. At my job, we use something like the following:

1. The requirements team comes up with goals (specifications for new features)
2. The programming team provides an estimate of how long each goal should take
3. The management team approves each goal/time pair
4. The programming team completes the work, the QA team reports bugs, the programming team fixes bugs, the QA team retests etc. until everyone is happy

After completing the cycle, you know, for each goal, how long it was estimated to take, how long it actually took, and how much time was spent in QA. What you can then do is keep track of how these numbers trend over time. Of course there will always be discrepancies between the estimate and the actual amount of time taken, but, over time, the team should get better and better at accurately estimating their productivity, so the numbers should become more meaningful.
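A minimal sketch of that tracking, with entirely made-up goal names and day counts, ordered by completion date. The interesting signal is the actual/estimate ratio drifting toward 1.0:

```python
# (goal, estimated_days, actual_days, qa_days) for each completed goal,
# in the order they were finished. All numbers are hypothetical.
history = [
    ("login page",     10, 16, 4),
    ("report export",   8, 11, 3),
    ("search filters", 12, 13, 2),
]

def estimate_ratio(estimated: float, actual: float) -> float:
    """Actual over estimated time; 1.0 means a perfect estimate."""
    return actual / estimated

ratios = [estimate_ratio(est, act) for _, est, act, _ in history]
# A trend toward 1.0 over successive goals suggests estimates are improving.
```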

If there are concerns about how a particular team member is performing, one thing I've seen my boss do is to temporarily pull someone from a particular project (giving them a different project instead) and evaluate how the rest of the team performs without them. If performance is not affected much, then this may be an indication that the person could be a better fit in a different role.
posted by GraceCathedral at 9:46 AM on January 17, 2010 [1 favorite]

over time, the team should get better and better at accurately estimating their productivity, so the numbers should become more meaningful

Could happen, I guess. In 20 years of experience in writing code for money, though, I haven't actually seen it happen, or spoken to colleagues who have.
posted by flabdablet at 5:02 PM on January 17, 2010

You can create incentive plans based on % above the daily goal.

You could do that, I guess. I reckon your org would do better work if you didn't.
posted by flabdablet at 5:05 PM on January 17, 2010

Well, I'll admit that our goals are usually pretty broad - 2-3 weeks development, 2-3 days QA, etc. But given the fairly generous margin of error, the people I work with are usually pretty accurate (and have gotten more so over time). I don't think these types of productivity measures work well if your goal is 15% more lines of code a day or something, but they're good for figuring out whether you're going to make your release date.

In my limited experience, the productivity danger zone for most projects is not during coding, but during requirements. If the spec is inaccurate, then all of the time spent building to spec is lost time. I think this is one reason estimating schedules for software projects can be so difficult: in a bad project, you can wind up stuck in QA for months, redoing work that was already completed. This is another reason why I think that tracking dev time vs. QA time can be useful — the ratio of the two can point to structural problems that could actually be negating the productivity of other team members.
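That ratio check is a one-liner. The 0.5 threshold below is an arbitrary assumption, not a number from anyone's process — the point is only that an unusually QA-heavy goal is worth investigating:

```python
def qa_heavy(dev_days: float, qa_days: float, threshold: float = 0.5) -> bool:
    """Flag goals where QA time exceeds a given fraction of dev time —
    a possible sign the spec was wrong and work is being redone."""
    return qa_days / dev_days > threshold
```

For example, a goal with 10 dev days and 2 QA days is unremarkable, while 4 dev days followed by 8 QA days would trip the flag.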
posted by GraceCathedral at 9:09 AM on January 18, 2010
