

Specifying software speed specifically
June 10, 2009 10:34 AM   Subscribe

When designing a programming project, what is the usual method for imposing speed requirements, such as "function X must accomplish so-and-so in 2 seconds or less"? The problem here, of course, is that different computers run programs at different speeds.
posted by crapmatic to Computers & Internet (19 answers total) 3 users marked this as a favorite
 
It depends on the general requirements of a program, but you can set minimum requirements for hardware, like games do.
posted by demiurge at 10:37 AM on June 10, 2009


Building on demiurge: generally you set the specific requirements against a reference platform, and you specify them in system time (not wall-clock time) or whatever measurement your debugging/profiling tools provide.

So you could say:

On a Processor X machine, system time for Operation Y should be no more than Z.Z seconds, measured with ABC tool.
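A minimal sketch of that kind of check in Python (Operation Y and the 2-second budget are placeholders; time.process_time counts CPU time consumed by the process rather than wall-clock time):

```python
import time

def operation_y(n):
    # Placeholder for the real Operation Y: sum the first n squares.
    return sum(i * i for i in range(n))

def check_cpu_budget(func, budget_seconds):
    """Return (elapsed, within_budget) measured in CPU time, not wall-clock time."""
    start = time.process_time()
    func()
    elapsed = time.process_time() - start
    return elapsed, elapsed <= budget_seconds

elapsed, ok = check_cpu_budget(lambda: operation_y(100_000), budget_seconds=2.0)
print(f"Operation Y used {elapsed:.4f}s of CPU time; within budget: {ok}")
```

Because CPU time excludes time spent waiting on other processes, the same check is far more repeatable across runs than a stopwatch.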
posted by unixrat at 10:45 AM on June 10, 2009


This will account for slower and faster systems, as long as you hit a certain benchmark on the reference machine.
posted by unixrat at 10:46 AM on June 10, 2009


(System time is a standard UNIX-ism, easily available from the time command on a per-application (not per-thread/function) basis, btw.)
posted by unixrat at 10:47 AM on June 10, 2009


Big O notation may be what you are looking for. This article covers how it specifically relates to computer science.
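To illustrate the distinction: Big O describes how the work grows with input size, independent of any machine's speed. A small sketch counting abstract operations rather than seconds:

```python
def count_ops_linear(n):
    # O(n): one operation per element.
    ops = 0
    for _ in range(n):
        ops += 1
    return ops

def count_ops_quadratic(n):
    # O(n^2): one operation per pair of elements.
    ops = 0
    for i in range(n):
        for j in range(n):
            ops += 1
    return ops

# Doubling n doubles the linear count but quadruples the quadratic one,
# no matter how fast the hardware is.
print(count_ops_linear(100), count_ops_linear(200))        # 100 200
print(count_ops_quadratic(100), count_ops_quadratic(200))  # 10000 40000
```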
posted by odinsdream at 11:34 AM on June 10, 2009


Some projects have rules about performance regressions: changes can't be checked in if measured performance is worse for the new version than for the previous version on the same hardware.
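The gate itself can be as simple as a ratio check against a baseline measured on the same hardware (the 5% tolerance below is an arbitrary example, not any particular project's policy):

```python
def regression_gate(baseline_seconds, candidate_seconds, tolerance=0.05):
    """Reject a change if it is more than `tolerance` slower than the baseline."""
    return candidate_seconds <= baseline_seconds * (1 + tolerance)

assert regression_gate(1.00, 1.03)      # 3% slower: within tolerance
assert not regression_gate(1.00, 1.10)  # 10% slower: rejected
```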
posted by mbrubeck at 11:49 AM on June 10, 2009


The key thing is to measure the speed first, then put limits in place. Honestly, what usually matters is 2 seconds vs. 20 seconds; at that scale, the exact speed of the machine isn't important. If you're really worried about getting it exact, pick a reference machine that'll be around for a couple of years and benchmark against it.

You may find this blog post about Google Chrome startup times interesting. It links to a graph of startup time for every build, presumably generated on the same test machine for every sample. What's key is having this graph at all.
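Generating such a graph takes nothing more than appending one (build, time) sample per run to a log and flagging jumps; a hypothetical sketch (the build IDs and times are made up, and an in-memory buffer stands in for the real log file):

```python
import csv
import io

# One (build_id, startup_seconds) sample per build; plot this file over time.
log = io.StringIO()  # stands in for open("startup_times.csv", "a")
writer = csv.writer(log)
for build_id, seconds in [("r1001", 1.82), ("r1002", 1.79), ("r1003", 2.41)]:
    writer.writerow([build_id, f"{seconds:.2f}"])

rows = list(csv.reader(io.StringIO(log.getvalue())))
latest, previous = float(rows[-1][1]), float(rows[-2][1])
if latest > previous * 1.1:  # flag anything more than 10% slower
    print(f"Startup regressed: {previous:.2f}s -> {latest:.2f}s")
```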
posted by Nelson at 12:13 PM on June 10, 2009


All computers run at different speeds, but there is a likely worst-case machine used by the target audience. Specify the time, but also specify the minimum spec target machine.
posted by i_am_joe's_spleen at 12:47 PM on June 10, 2009


Why are you imposing speed requirements? Don't do that, it's bad practice.

What you want to do is:

a) Write the program, damnit. A 100% complete program that takes 10s to run is MUCH better than a 75% complete program that takes 5s to run.

b) Profile. Then profile some more. Profile once more for good measure. No programmer will ever be able to predict what causes slowdown in your program. You need a running program to determine this. Then optimize the parts that are slow.
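In Python, for example, the standard-library profiler makes step (b) a few lines (the two functions here are stand-ins for real code):

```python
import cProfile
import io
import pstats

def slow_part():
    # Deliberately heavy; this is what the profile should surface.
    return sum(i * i for i in range(200_000))

def fast_part():
    return 42

def main():
    slow_part()
    fast_part()

profiler = cProfile.Profile()
profiler.enable()
main()
profiler.disable()

# Report functions sorted by cumulative time; optimize the top entries only.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(10)
report = out.getvalue()
print(report)
```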

"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil" - Knuth and/or Hoare
posted by phrakture at 12:51 PM on June 10, 2009 [1 favorite]


Here is a sample of Microsoft's specification for response times on SharePoint.
posted by blue_beetle at 1:24 PM on June 10, 2009


Yeah, I've basically never heard of anything like this except perhaps in situations where something has to start after a given time and end before another.

Almost every project I've been involved in where speed mattered, we wrote it once, figured out which parts took the most time, and refactored those -- rinse and repeat until it was fast enough.
posted by RustyBrooks at 1:55 PM on June 10, 2009


Well, you could theoretically toss in a timer and, if it's taking too long, display a message box telling the user to go grab a cup of coffee. Or find a better algorithm. Or (you don't specify language or platform) use a tool like Rational's Quantify to find which parts of the code take the most time or are called most often.
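The timer idea is a one-liner in most languages; a sketch in Python using threading.Timer (the 0.1s threshold and the sleep standing in for real work are placeholders):

```python
import threading
import time

def warn():
    print("This is taking a while -- go grab a cup of coffee.")

def long_task():
    time.sleep(0.3)  # stand-in for the real, slow work
    return "done"

timer = threading.Timer(0.1, warn)  # fire the warning if work outlives 0.1s
timer.start()
result = long_task()
timer.cancel()  # harmless if the warning already fired
print(result)
```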
posted by hungrysquirrels at 2:15 PM on June 10, 2009


"A 100% complete program that takes 10s to run is MUCH better than a 75% complete program that takes 5s to run."

Yes, get it correct before you optimize it. But setting a standard up-front doesn't mean optimizing up-front. It just means that you know where the goalposts are.

At Amazon.com we had rigorous requirements for page-load latency (and thus for any process that affects page loads) because our data showed that, beyond a certain threshold, slow pages caused more people to leave the site without buying anything. We'd rather throw out features than put software into production that didn't meet the latency requirements.

Apple's WebKit browser engine won't accept new code that causes performance regressions. Mozilla has similar processes to track and back out perf regressions for their products.

It's not smart to impose fine-grained performance requirements on low-level, under-the-hood details of software. But it's good usability practice to have latency standards for user-facing operations. It's not premature optimization if your purpose is to use such standards as an acceptance requirement.
posted by mbrubeck at 2:38 PM on June 10, 2009


listen to phrakture and company

Performance is usually specified by the users as part of their quality criteria (if somebody bothered to ask them). User level performance is not something that techies should be defining.

Until you have at least a prototype working, you're usually guessing as to problem areas unless you know the problem domain very well. Optimize once the thing works, not before.

Specifying performance in hard numbers usually becomes important when you define a formal Service Level Agreement with either supplier or operations.

Depending on your problem domain, you'll often find SLAs expressed in percentages, e.g. 95% of operations will complete within time X and 98% within time Y, since some operations are never going to meet a 100% guarantee.
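Computing those percentile figures from measured samples is straightforward; a sketch using the nearest-rank method on made-up latency numbers:

```python
import math

# Hypothetical latency samples (seconds) for one operation.
samples = sorted([0.8, 1.1, 0.9, 4.2, 1.0, 1.2, 0.7, 1.3, 0.95, 1.05])

def percentile(sorted_samples, pct):
    """Nearest-rank percentile: the smallest sample covering pct% of the data."""
    rank = math.ceil(pct / 100 * len(sorted_samples))
    return sorted_samples[rank - 1]

# "95% of operations complete within X seconds" becomes:
print(f"p50 = {percentile(samples, 50):.2f}s, p95 = {percentile(samples, 95):.2f}s")
```

Note how the single 4.2s outlier dominates the tail percentile, which is exactly why SLAs stop short of 100%.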
posted by w.fugawe at 2:44 PM on June 10, 2009


In general, one does not specify wall-clock time for execution. Programs are usually fast enough without any thought given to making them faster. Instead, you write the program, and run it. If some task takes too long (as defined by the user saying, "Goddamnit, what's taking so long?"), you go back and rewrite and optimize that specific task. Since execution times are frequently affected by all sorts of loosely-coupled corollary tasks, it almost never does the slightest bit of good to worry about execution times until you have all or most of the program done.

One reason that wall-clock times are never specified is that they're very difficult to estimate and (almost always) are based on the data being manipulated. You want to sort some rows in a database? I have no idea how long it'll actually take--the best I can tell you, without trying it out, is the big O class. Similarly, even if I can sort two rows in a nanosecond... it'll take a few seconds to sort hundreds of thousands of rows.

Likewise, specific execution times are fairly impossible to achieve consistently: is the program out of spec if the execution takes 2.1 seconds instead of 1.9? Is it out of spec if it takes 1.7 seconds 80% of the time, and 4 seconds the rest of the time? Is it out of spec if it runs in 2 seconds as the only active program, but drops to 10 seconds if another program is using disk IO?
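That run-to-run spread is easy to demonstrate: time the same operation repeatedly and compare the extremes (the operation here is an arbitrary stand-in):

```python
import statistics
import time

def operation():
    return sum(range(50_000))  # arbitrary stand-in workload

runs = []
for _ in range(20):
    start = time.perf_counter()
    operation()
    runs.append(time.perf_counter() - start)

# The gap between min and max is the noise a single-number spec ignores.
print(f"min {min(runs):.6f}s  median {statistics.median(runs):.6f}s  max {max(runs):.6f}s")
```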

Also, the difference between 2 seconds and 3 seconds is probably not worth the expense of improving. You should think like a programmer: in terms of orders of magnitude. A program that executes in 2 seconds is functionally equivalent to one that executes in 3 seconds--if you want it done faster, buy a faster computer. But a program that executes in 2 minutes is not equivalent to one that executes in 2 seconds--you need a new algorithm.

Finally, as a freelancer, if I got a spec with specific execution times, I'd either request they be removed or pass on the project. Not only for the reasons I've mentioned above, but because part of my profession is making sure things happen fast enough. It's a point of professional pride. Not to mention that if you aren't a programmer, you probably aren't even remotely qualified to estimate reasonable execution times on the sorts of things that are likely to take a while--even if you can identify which ones those are.

Putting execution times in the spec is like going to the doctor with an infection and saying, "I'd like you to make me better. And I want you to do it with medicine." Well of course he's going to do it with medicine, that's the whole point.

There are some exceptions:

1) Real time systems. But, in that case, you still never say "must execute in .02s", you just demand that it uphold a real time (hard or soft) guarantee.

2) Core components that will be used constantly, whose cost is multiplied across many tasks. For instance, if you're writing a multiplayer game, you'll work from the beginning to make sure your network multicast code is as fast as possible. But again, while you may have a maximum bound (e.g. 100ms on your particular computer), you're really not shooting for a specific number. What you're actually doing is trying to make that code as fast as possible, with the real (unattainable) goal being 0ms.

3) Significant, state-of-the-art advancements. So, if the whole point of your program is that it does something faster than the competition, it's reasonable to talk about execution speeds. But, again, you should be thinking about orders of magnitude. If the competition's program executes in 45 seconds, it's probably a bad investment to develop a program that executes in 30 seconds, or even 10 or 5--just buy a faster computer. It would be a great investment, however, to build a program that runs in 800ms.
posted by Netzapper at 2:51 PM on June 10, 2009


If you're talking about an actual constraint like "function X must accomplish so-and-so in 2 seconds or less" then you're talking about hard real-time computing, about which much has been written. I don't actually know much about it, but I believe the general idea is to make use of hardware interrupts generated by some sort of clock device.
posted by dreadpiratesully at 2:54 PM on June 10, 2009


then you're talking about hard real-time computing, about which much has been written. I don't actually know much about it, but I believe the general idea is to make use of hardware interrupts generated by some sort of clock device.

Eh, clock interrupts are one tool for realtime programming. But they're actually used in a different context than "2 seconds or less". They're usually used for "exactly two seconds every time, no more, no less".

You can build a hard realtime system whose code doesn't look any different from a regular procedural program. For the example I'm thinking of, the real time constraints were met by running the program on bare metal (to remove OS-level non-determinism) and carefully optimizing the right loops.
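For the "exactly two seconds every time" flavor, the usual trick is sleeping until the next absolute deadline rather than sleeping for a fixed duration, so timing error doesn't accumulate. A soft-realtime sketch in Python (which, to be clear, gives no hard guarantees on a general-purpose OS):

```python
import time

period = 0.05  # target: one iteration every 50 ms
deadline = time.monotonic()
ticks = []
for _ in range(5):
    # ... periodic work would go here ...
    deadline += period
    # Sleep until the absolute deadline; sleep(period) would instead add
    # the work's duration to every cycle and drift.
    time.sleep(max(0.0, deadline - time.monotonic()))
    ticks.append(time.monotonic())

intervals = [b - a for a, b in zip(ticks, ticks[1:])]
print([f"{i * 1000:.1f} ms" for i in intervals])
```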
posted by Netzapper at 3:43 PM on June 10, 2009


If you are writing code on a platform that has variable characteristics and a non-deterministic OS, you'll have to select a representative combination against which this proposed spec can be measured (i.e., processor x, OS y, clock speed zzz).

Since you are writing a spec, you are free to put whatever you want in it, including DESIRED and REQUIRED performance specs. Obviously, some critical function you are considering is key to the application, so advertise that in the spec.

At the end of the day, hopefully the spec you are writing is coherent and consistent and the features you desire can be implemented. If not, it's no different than any other product design... sometimes the desired features are economically impossible and you have to change your demands and look for better alternatives.

I do real time programming all the time... (doing it right now, actually, in assembly.) It's very common for me to require extremely precise timing (sub-microsecond) on certain features, and I have to budget and cycle-count to make sure things fit in the allocated slots. (Try doing a motor controller and having the loops execute at whatever speed they want to.) RTOSes have all kinds of elaborate scheduling features to permit consistent timing, but many of them don't have to deal with GUI crap or network latencies, etc. (One of my pet peeves is knowing how many cycles go to waste on the average desktop. Even heavily used machines spend most of their time waiting for the operator to click something or type something!)
posted by FauxScot at 5:05 PM on June 10, 2009 [1 favorite]


As others have said, this generally comes up in real-time programming, where there are hard limits on when something needs to be completed. And if you have "real" time constraints on your software, you'll probably need to write your code for a realtime OS, which provides guarantees to its programs when it comes to scheduling and things like that.
posted by chunking express at 7:09 AM on June 12, 2009


This thread is closed to new comments.