
If we build it will they come?
May 26, 2008 3:45 AM   Subscribe

Say a large web based company like Amazon or eBay wants to roll out a major new feature but first needs to perform a cost benefit analysis. What's the process used to forecast the benefits?

Assuming this new feature is something competitors don't offer and is going to cost a lot of money to build and implement, I'm curious what methods companies use to forecast how much additional traffic and revenue the new feature will bring.
posted by gfrobe to Work & Money (12 answers total) 9 users marked this as a favorite
 
This is a basic question for corporate strategy, so it is going to have as unlimited a set of answers as "how do Presidential candidates generally seek to get 270 electoral votes."

I think that most of the approaches fall into two basic baskets: analogy and survey.

Analogy: the analyst looks at the history and present practice of the company and peers and identifies successful changes and initiatives which he argues are comparable in scope and character to the one under consideration, and uses the range of impacts that those changes and initiatives had as the representative of the potential benefit.

Survey: customers and potential customers give feedback on how their behavior would change with the benefit of the service, including both explicit surveys (polls, focus groups, customer satisfaction web pages) and implicit ones (complaint and help-line statistics). The applicability and reliability of the survey data must be supported by reference to prior corporate decisions in which survey data was successfully relied upon.
posted by MattD at 4:31 AM on May 26, 2008


Taking it one step further in MattD's survey paragraph:

Sites as large and successful as Amazon or eBay most likely make a prototype of the new feature (assuming the feature either is or can be manifested as a web feature) and do some usability testing. The difference between what web site users do and what they say they do (or will do) can be pretty big.
posted by ImproviseOrDie at 4:58 AM on May 26, 2008


Amazon does user testing all the time; you've probably taken part in it if you've ever shopped there. They'll do stuff like 'which of these 10 shades of yellow on this Order button gets us more sales?', release it to 10 groups of 5,000 customers, see which performs best and implement that site-wide. It costs them pennies to test it and they get real results.

I suspect the way they implement major changes is similar: testing.
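For what it's worth, the statistics behind that kind of split test are straightforward. A rough sketch in Python, using a standard two-proportion z-test on two groups of 5,000 (the group size from the comment above; the conversion counts are invented for illustration):

```python
import math

def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from variant A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se                           # |z| > 1.96 ~ significant at 95%

# Hypothetical results: 150 vs 200 orders out of 5,000 visitors each.
z = z_test_two_proportions(conv_a=150, n_a=5000, conv_b=200, n_b=5000)
print(round(z, 2))  # ~2.72, so the yellow-shade difference looks real
```

With samples that size, even a one-percentage-point difference in conversion clears the significance bar, which is why the per-test cost is so low relative to what it tells you.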
posted by jedrek at 5:24 AM on May 26, 2008 [1 favorite]


The major difference between large web companies and other large companies is the incredible mass of fine-grain user and usage detail that the web companies are able to distill from their various usage logs.

Other than the source data, you probably see the same sort of corporate politics driving analysis and decision making at Amazon and EBay as you might see at IBM or Apple.

They all hire MBAs from the same schools, and all of those companies have high enough turnover that any sort of local secret methodology could only remain secret and private for a handful of years, if that.
posted by b1tr0t at 6:33 AM on May 26, 2008


jedrek, informative answer, but does that translate well into the Big Features MattD is talking about?

Anyway, the only answer I can come up with is that sometimes they just buy things instead.

I don't know how well this translates to non-tech companies, but in a lot of the acquisitions made by Google, Microsoft, eBay, Yahoo, etc., the products don't necessarily have to perform that well. A big part of the benefit is getting to take some of the brilliant minds out there off the market. You can hire them and have them work for you, or if they don't join, they probably at least have to sign non-competition agreements as part of the buyout. My cousin was part of a company in the first boom called e-quill. As far as I know, Microsoft didn't do much with their product when it bought them out. At least not as far as the end product; maybe some of the underlying code went toward other things. He didn't go to work for Microsoft. I know it's partly because he could do pretty much whatever he wanted for a while without worrying about money. Maybe also because he signed a non-competition agreement. Now he's with walkscore, but it was a while before he did anything else tech related.
posted by gauchodaspampas at 6:41 AM on May 26, 2008


You'd be surprised at how much of a black art this is, and--even more--at just how little formal justification goes into some of these decisions. Companies like Amazon, that do such rigorous testing, are few and far between, even across global brands that are basically household e-commerce names.

The underlying reality is that the decision to make a major investment is fundamentally a political one, and the requirements for getting something done at any given company will vary dramatically based on its culture and structure.

Basically, the core questions are always going to be "Who's in a position to approve/fund the idea?" and "What's it going to take to get them to do that?" If the sponsor and/or the company culture are focused on quantitative justifications, then you're going to need to run a lot of hypothetical business-cases that project the potential return. If it's more of a fear-based culture, you're going to need to show what the competition's doing, and things like how that threatens their market share. A lot of times, it just comes down to getting the right executive excited about the idea, and you're done.

I don't mean to be cynical about this, at all, but just pragmatic. Like a lot of folks on this site, I've been working with big companies as they make this kind of decision for a long time, and after a while you learn that while there's definitely a "right" way that companies _should_ use to justify this kind of investment, they very rarely do. More to the point--and I can tell you this from experience--if you focus on the way that it _should_ be done, and ignore the political realities, you've got a very tough row to hoe. Anyone who's been in consulting for a while, especially in an effective management consulting practice, will tell you that those companies are very, very focused on the practical, political realities of shepherding an idea into reality.
posted by LairBob at 8:26 AM on May 26, 2008 [2 favorites]


(As an aside, did you have a specific circumstance in mind that you're looking at, or were you just asking out of general curiosity?) If you're trying to get something like this to happen, we might be able to give you a little more concrete advice with just a bit more info.
posted by LairBob at 8:30 AM on May 26, 2008


Like LairBob says, I don't think you can realistically scale monster org methods to your purpose.

For your case, especially with the alarming phrase "cost a lot of money to build and implement," you are going to have to pitch this internally as though you are pitching an all-new idea to investors - which, you know, you are.

If you want to align yourself with features rolled out by big names, that might help your proposal. Just make darn sure that you don't mention anything by name that runs contrary to the wisdom of business news or that involves players or industries or companies that your leadership hates.
posted by Lesser Shrew at 8:41 AM on May 26, 2008


Thanks all. This was more of a curiosity question for me and wasn't based on a specific need at the moment.

Lots of great answers here. I had assumed that usability tests would take place, but I guess my biggest question was how companies were able to estimate whether a better customer experience would necessarily translate into additional revenue in their pockets. So, for example, I guess it was a few years ago that Amazon enabled customers to search through books and read certain pages online. Assuming people enjoyed the new feature and it worked well, how did they know that it would actually result in people buying more books and not just be an expensive new feature that customers enjoyed playing around with?

I think MattD's answer is closest to what I was looking for but glad to hear any additional thoughts.

Thanks!
posted by gfrobe at 9:19 AM on May 26, 2008


Back in those dark days when I was in business development, we used risk analysis software like @Risk in Excel to plot out the results of models showing multiple possibilities and the likelihood of those possibilities occurring.

It went sort of like this: we had a goal from upper management on return on investment. For sake of argument, let's say our rather risk-averse firm wanted the project to break even in two years and provide a 40% return in four years. This defines the "benefit". (For softer benefits like mindshare, I'll let the marketing folks take their turn).

Then, we would model out the costs of providing the service in Excel through discussions with our engineers and various vendors, quotes on equipment and service and some degree of R&D. We'd add hard equipment/development costs, marketing costs (usually a certain percentage of the product's hard costs), support costs based on past numbers from projects, administrative costs, depreciation and other variables that would allow our model to scale appropriately to the number of potential customers and the different ways they might use the service (i.e., "the cost is x if y people do z"). We could then set a range of values for various scenarios (customer growth, churn, service usage growth, equipment costs over time) and have the spreadsheet automatically spit out how much we would make and when along the spectrum of those values. The first thing we'd usually do would be plug in historical adoption/support data for similar products and see how it fared there. (The second is to check the accuracy of our inputs and the reasonableness of our assumptions when that model produced dismal results.)
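The core of a spreadsheet like that is just a customer base rolled forward period by period. A toy sketch of the idea in Python, with every input invented for illustration:

```python
def scenario_profit(months, start_customers, monthly_growth, churn,
                    revenue_per_customer, fixed_cost, cost_per_customer):
    """Toy version of the spreadsheet model: roll the customer base
    forward month by month and accumulate cumulative profit."""
    customers, profit = float(start_customers), 0.0
    for _ in range(months):
        # new signups come in, some existing customers churn out
        customers = customers * (1 + monthly_growth) * (1 - churn)
        profit += customers * (revenue_per_customer - cost_per_customer)
        profit -= fixed_cost  # overhead that doesn't scale with customers
    return profit

# "The cost is x if y people do z" -- one hypothetical scenario:
p = scenario_profit(months=24, start_customers=1000, monthly_growth=0.05,
                    churn=0.02, revenue_per_customer=20.0,
                    fixed_cost=15000.0, cost_per_customer=6.0)
print(round(p, 2))
```

Sweeping the inputs (growth, churn, costs) across ranges gives the "spectrum of values" described above.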

The risk analysis software can then do Monte Carlo analysis on the data, which basically runs that model randomly through the inputs I discussed above to determine how the model reacts, and provides reports on which areas of the spectrum would put us at goal and which areas don't. Those reports then can show "at a glance" the ranges of inputs that would put us at our financial goal (and the ranges that would not). The end results would usually be a Tornado diagram that would show at a glance the risk factors for the bigwigs, and a discussion of how likely it really was based on historical information and external market projections for us to get there.
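The Monte Carlo step is conceptually simple even without @Risk: draw each uncertain input from a range, run the model, and count how often you hit the goal. A minimal sketch, with a hypothetical $1M investment and made-up input ranges standing in for the real historical data:

```python
import random

def monte_carlo_roi(n_trials=10_000, seed=42):
    """Monte Carlo over a toy return model: draw uncertain inputs from
    ranges and report what fraction of scenarios hit a 40% four-year
    return on a hypothetical $1M investment (all figures invented)."""
    random.seed(seed)
    investment, target = 1_000_000, 1.40
    hits = 0
    for _ in range(n_trials):
        customers = random.uniform(5_000, 20_000)   # adoption range
        revenue_per = random.uniform(15, 40)        # annual revenue/customer
        cost_per = random.uniform(5, 20)            # annual cost/customer
        four_year_return = 4 * customers * (revenue_per - cost_per) / investment
        if four_year_return >= target:
            hits += 1
    return hits / n_trials

print(monte_carlo_roi())  # fraction of scenarios that meet the goal
```

The tornado diagram mentioned above is then just this same run, re-sorted to show which single input swings the outcome the most.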

Now, of course, this doesn't manage intangible factors. Customer satisfaction, brand loyalty, halo effect -- those are all rather difficult to model in Excel, and are IMHO even less predictable than the financial projections discussed above. A project may well die or succeed based on the intangibles even with all of the modeling in the world. Good upper management should then take into account both those intangibles and the hard financial data and make a decision based on some degree of controlled risk. Needless to say, that doesn't always happen.
posted by I EAT TAPAS at 9:27 AM on May 26, 2008


To that point, then, there are really two key things that affect your ability to forecast revenue:

1) How well can you project the behavior that you're affecting?

2) How directly does that behavior impact revenue?

The "Look Inside" feature is a great example--until you build it, it's very difficult to predict (a) how much people will use it, and (b) whether using it will positively affect sales. Ideally, you would justify that kind of investment as part of some kind of overarching strategic "pillar". In the case of this example, that would mean a "Content Engagement" strategy...something like:

A) We know that searchability is a primary driver of site traffic. More searchable content means more search hits, which means more incoming visitors.

B) We have evidence that "content engagement correlates strongly with overall sales". (Or, "visitors who spend more time, spend more money".)

C) Therefore, even though "Look Inside" will take a lot of money to get off the ground, we can credibly say that it will drive more traffic to the site, _and_ increase the amount of time people spend there.

A lot of times, for that kind of indeterminate investment, that's all you really need (and all you've got). You _could_ do a couple of back-of-the-napkin calculations to say "If it brought in 1% more traffic, and increased revenue-per-visitor by 1%, it _could_ mean $X million", but even that means you're starting to move into smoke-and-mirrors territory.
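That napkin calculation is trivially easy to write down, which is part of why it is so tempting. A sketch with entirely hypothetical baseline numbers (100M visitors/year at $2 of revenue per visitor):

```python
# Back-of-the-napkin uplift: +1% traffic and +1% revenue-per-visitor
# against an invented baseline.
visitors, rev_per_visitor = 100_000_000, 2.00
baseline = visitors * rev_per_visitor                       # $200M/year
uplift = visitors * 1.01 * rev_per_visitor * 1.01 - baseline
print(f"${uplift:,.0f}")  # the two 1% bumps compound to ~2% of baseline
```

The arithmetic is sound; the smoke and mirrors are in where the two 1% figures came from.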


The other end of the spectrum is something that's much more measurable, and has a much more direct impact on sales. Using Amazon as an example again, "1-Click" is a good case in point, too. There, you're able to say something like "60% of all items that are put into a shopping cart are never actually sold, and every click in the checkout process drives a 50% drop-off." At that point, you can user-test a whole bunch of different variations in the check-out process, and put solid numbers behind exactly what should happen.
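Those two figures are enough to do real arithmetic with. A minimal sketch (the 50% per-click drop-off is the number from the example above; the step counts are hypothetical):

```python
def checkout_completion(steps, dropoff=0.5):
    """If each click in the checkout loses `dropoff` of the remaining
    shoppers, what fraction of cart-adders complete the purchase?"""
    return (1 - dropoff) ** steps

# Hypothetical comparison: a 4-click checkout vs. a single click.
print(checkout_completion(4))  # 0.0625 -> 6.25% finish
print(checkout_completion(1))  # 0.5    -> half finish
```

That eight-fold gap is the kind of solid number you can put behind a checkout redesign, which is exactly what makes this end of the spectrum so much easier to justify than "Look Inside".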
posted by LairBob at 9:40 AM on May 26, 2008


Simply running the numbers: Big Feature will take $1 million to implement. It will gross $0.10 per transaction. Will we ever get to the break-even point? When? Could we put that million bucks somewhere else and make more money?

That, and marketing. Are there even enough people out there who will use this feature?
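The break-even arithmetic on those numbers takes three lines; the only extra input you need is a volume estimate, which is the hard part. A sketch using the figures above plus a hypothetical transaction rate:

```python
# Break-even on the numbers above: $1M build cost, $0.10 gross/transaction.
cost, gross_per_txn = 1_000_000, 0.10
breakeven_txns = cost / gross_per_txn
print(f"{breakeven_txns:,.0f} transactions to break even")

# Hypothetical volume: at 50,000 transactions/day, how long is that?
years = breakeven_txns / (50_000 * 365)
print(round(years, 2), "years")
```

Everything hinges on that 50,000/day assumption, which is the "are there even enough people out there" question again.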
posted by gjc at 11:28 AM on May 26, 2008

