No time like the present
March 4, 2008 11:24 PM

I'm interested in benchmarking with the help of UNIX's time: specifically, what the user, system and elapsed times correspond to, in the context of the system and the application being tested.

I have a very rough idea of what the results of time indicate about CPU contention, for example when the elapsed time is greater than the sum of the user and system times.

• What does it mean that user time covers non-system calls (and what are those calls)?

• Likewise, what does it mean specifically that system time covers system calls (and what are those calls)?

Would I use mean user and system times to decide how to guide function profiling within an application?

• What benchmarking results should I use as criteria for comparing one test result with another, all else the same?

For example, let's say I run sed -e 's/+/-/' inputdata on the same system, where different builds of sed have been compiled with different optimization flags and compilers.

• In this case, why would I choose the mean user time over the mean system time as the criterion for comparing against like measurements of a "baseline" stock build of sed?

• Likewise, what are the caveats with choosing one measurement class over the other? (What are the downsides of using user time? system time? I suspect the answer to this will depend upon the calls made.)

If you have pointers to literature (other than the thousands of man pages on Google) I'd be appreciative of that advice, as well. Thanks in advance.
posted by Blazecock Pileon to Computers & Internet (4 answers total) 6 users marked this as a favorite
 
Best answer: Let me preface this by saying benchmarking is hard, fiddly, and the answer to any question about it is almost always "it depends".

I started typing a long explanation about user space vs kernel space and their interaction, but I think it'd be more helpful to keep things simple. System calls can be thought of as the API that the operating system provides to regular applications. The standard library is mostly just a wrapper around these system calls. On any UNIX, applications can only do a very limited subset of things themselves; anything like starting a process, raw memory allocation, or I/O (including filesystem access) requires a system call.
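If you want to see that boundary in practice, strace can summarize every system call a process makes. A rough sketch, assuming a Linux box with strace installed (the sed invocation and inputdata file are just the example from the question):

    # -c replaces the full trace with a per-syscall count/time summary table.
    strace -c sed -e 's/+/-/' inputdata > /dev/null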

Once you do a system call, you're no longer running your code. Instead the processor is executing a system call handler somewhere in the innards of the operating system. This is where the user/system distinction comes from. User may roughly be thought of as "execution time within my application" while system may be thought of as "execution time within the kernel".

The major caveat here is that if your process is suspended (eg it called select/read and is waiting on data), time stops being attributed to it, and the OS goes off and runs another process (or the idle loop) while it waits. So both of these times refer only to time in which your application was actually running.

Elapsed time refers to the wall clock: the number of seconds of real time between the application starting and finishing.
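A minimal illustration of the distinction (the exact numbers will vary): a process that spends its life waiting accrues almost no user or system time, only elapsed time.

    # sleep does essentially no work of its own; it just waits to be woken up.
    # Expect roughly: real ~2s, user and sys both near zero.
    time sleep 2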

One additional complication is threaded applications. The user and system time accrued by each thread is cumulative, so you can easily see more user time than elapsed time on a multiprocessor.
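You can see the same effect without writing a threaded program, because bash's time keyword reports the cumulative CPU time of a compound command and all the children it waits for. A sketch, assuming a multi-core machine and a stock awk:

    # Two CPU-bound awk loops run in parallel; their user times add up,
    # so on a multiprocessor 'user' can come out higher than 'real'.
    time { awk 'BEGIN { for (i = 0; i < 5000000; i++) x = i * i }' &
           awk 'BEGIN { for (i = 0; i < 5000000; i++) x = i * i }' &
           wait; }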

In the specific example you've given, I'd probably take user time over wall time or system time. If you're not changing how sed reads input, you want to compare how much faster/slower your changes have made it.
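One way to collect mean user times for the two builds might look like the sketch below. It assumes GNU time is installed at /usr/bin/time; ./sed-baseline, ./sed-optimized, and user_times.txt are just placeholder names.

    # Run each build 10 times, logging only user time (%U), then average.
    for bin in ./sed-baseline ./sed-optimized; do
        rm -f user_times.txt
        for i in 1 2 3 4 5 6 7 8 9 10; do
            /usr/bin/time -f '%U' -o user_times.txt -a "$bin" -e 's/+/-/' inputdata > /dev/null
        done
        awk -v build="$bin" '{ sum += $1 }
            END { printf "%s: mean user time %.3f s over %d runs\n", build, sum / NR, NR }' user_times.txt
    done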

User time will usually be your metric of interest (although elapsed time can be important if you're playing around with synchronization in user-space on a multithreaded app). A high proportion of system time frequently indicates you are stressing the kernel and there might be a better way.

But then you have other worries. Is the machine in question quiesced? Are the caches hot? Is the bottleneck somewhere other than your code? Are the workloads comparable? Is the workload "representative" or is it some random corner case?

Now... if you're on linux, please be aware that time sucks if you want detailed information about what's happening. Look up oprofile instead, as this can show you where time is spent both in your app and in the kernel. It can be more work to set up, but it's a very nice tool.
posted by blender at 12:44 AM on March 5, 2008


Everything blender said is true.

Here are three programs which all take a few seconds to run but spend most of their time in different places (rough shell equivalents are sketched after the list):

• Lots of user time
• Lots of system time
• Very little user time or system time but high elapsed time
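Something along these lines reproduces each case (the loop count and byte count are arbitrary, just large enough to take a few seconds):

    # Mostly user time: a CPU-bound loop that rarely enters the kernel.
    time awk 'BEGIN { for (i = 0; i < 20000000; i++) x = i * i }'

    # Mostly system time: millions of 1-byte reads and writes, i.e. constant syscalls.
    time dd if=/dev/zero of=/dev/null bs=1 count=2000000

    # Almost no user or system time, but a few seconds of elapsed time: it just waits.
    time sleep 3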

To get a realistic impression of how long the user has to wait for your program to finish, you probably want to look at elapsed time, but measured on a machine where the bare minimum of other processes is running.
posted by sergent at 1:47 AM on March 5, 2008


Slightly adjunct to your question: I know you have talked about doing Python programming before here, and if you want to benchmark Python code, you may want to have a look at the timeit module.
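timeit can also be run straight from the command line without writing a test harness; a minimal sketch (the expression being timed is just an arbitrary example):

    # Runs the statement in a loop many times and reports the best timing.
    python -m timeit '"-".join(str(n) for n in range(100))'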
posted by grouse at 2:53 AM on March 5, 2008


Remember also that the time you're using is almost definitely the one built into your shell (likely bash). The GNU time utility supports a lot more features.
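To get at the external binary you have to call it by path so the builtin doesn't shadow it. A sketch, assuming GNU time lives at /usr/bin/time:

    # The bash builtin: real/user/sys only.
    time sed -e 's/+/-/' inputdata > /dev/null

    # GNU time: -v dumps max resident set size, page faults, context switches, etc.,
    # and -f lets you pick exactly the fields you want.
    /usr/bin/time -v sed -e 's/+/-/' inputdata > /dev/null
    /usr/bin/time -f 'elapsed %e s, user %U s, sys %S s' sed -e 's/+/-/' inputdata > /dev/null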

That said, if you're benchmarking your own code you should really be profiling with the facilities built into the language you're using. Where the bottlenecks turn out to be will quite often surprise you.
posted by blasdelf at 11:21 AM on March 5, 2008

