Pipe command output, but keep the error code
November 24, 2007 2:55 PM
How do I get the correct return code from a unix command line application after I've piped it through another command that succeeded?
In detail, here's the situation writ small:
# mycmd_that_fails | postprocesor_that_succeeds
# echo $?
0
And, what I'd like to see is:
# mycmd_that_fails | postprocesor_that_succeeds
# echo $?
1
The obvious solution was to capture the output of mycmd_that_fails into a file and then pipe it through the postprocesor_that_succeeds, but since my command is a build script it's nice to see the progress as it runs. I'd rather have the postprocessor operate on the command's output as it runs than after the fact.
Does anyone know how to accomplish this?
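For illustration, the after-the-fact workaround the poster describes would look roughly like this (build.log is an arbitrary name, not from the original post):
# mycmd_that_fails > build.log
# status=$?
# postprocesor_that_succeeds < build.log
# echo $status
1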
IIRC, this:
mycmd_that_fails 2&> /path/erroroutputfile | postprocesor_that_succeeds
Without that addition, all the outputs are merged and fed to the pipe. What that does is to separate out the error flow from the normal flow, and redirect the error flow to the file.
But this only allows you to see the error after the fact.
posted by Steven C. Den Beste at 3:31 PM on November 24, 2007
I think I botched that syntax. "two ampersand into" isn't right, but I'm not sure what is.
posted by Steven C. Den Beste at 3:32 PM on November 24, 2007
...what I shoulda said was...
mycmd_that_fails 2> /path/erroroutputfile | postprocesor_that_succeeds
Also, that redirect can go to the console device instead of to a file if you use the proper "/dev/something" path.
posted by Steven C. Den Beste at 3:34 PM on November 24, 2007
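For example, assuming the usual /dev/tty device for the controlling terminal, the errors would show up on screen while the normal output goes down the pipe:
# mycmd_that_fails 2> /dev/tty | postprocesor_that_succeeds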
You pretty much have to save the first command's result in a file, since each command in the pipeline is going to run inside its own environment. So just have my_cmd_that_fails echo $? into a file at some appropriate point, and pick it up later.
posted by flabdablet at 3:34 PM on November 24, 2007
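A minimal sketch of that suggestion (the brace grouping and the status-file name are illustrative, not flabdablet's exact code): the grouped commands run on the left side of the pipe, so echo $? there captures mycmd's status, not the pipeline's.
# { mycmd_that_fails; echo $? > /tmp/mycmd.status; } | postprocesor_that_succeeds
# status=$(cat /tmp/mycmd.status)
# echo $status
1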
Best answer: Use
${PIPESTATUS[*]}
in bash. I have something like this in my .bashrc, which gives me the exit status of every command in the last pipe I just ran, if anything is not 0:
export PROMPT_COMMAND="
  _RES=\${PIPESTATUS[*]};
  _RES_STR='';
  for res in \$_RES; do
    if [[ ( \$res > 0 ) ]]; then
      _RES_STR=\" [\$_RES]\";
    fi;
  done"
export PS1="\n\u@\h \w\$_RES_STR\n\\$ "
Try it, you'll like it.
The actual code puts things in color if you're using a terminal that supports that. But it's somewhat more complicated and therefore harder to understand. I can provide it if you want.
posted by grouse at 3:37 PM on November 24, 2007 [3 favorites]
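Applied to the original example, the first element of the array is the one to check (read it immediately; the next pipeline overwrites PIPESTATUS):
$ mycmd_that_fails | postprocesor_that_succeeds
$ echo ${PIPESTATUS[0]}
1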
Have you tried this:
# my_cmd_that_fails; RET_VAL=$? | postprocessor_that_succeeds; echo $RET_VAL
You can then access $RETURN within that command. If you need to access it later use the export command:
# my_cmd_that_fails; export RET_VAL=$? | postprocessor_that_succeeds
# echo $RET_VAL
posted by puddpunk at 3:40 PM on November 24, 2007
SCDB is apparently a little confused about the behaviour of standard output and standard error. Only standard output (file descriptor 1) is redirected down a pipeline by default. Standard error (file descriptor 2) is not merged with standard output. That's kind of the point of standard error.
If you want them merged, you have to include the redirect 2>&1 on the command that generates them.
In any case, this question is about the error return code ($?) rather than the error output stream.
posted by flabdablet at 3:40 PM on November 24, 2007
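For concreteness, merging both streams into the pipe with the asker's commands would look like:
# mycmd_that_fails 2>&1 | postprocesor_that_succeeds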
Typo, I mean "You can then access $RET_VAL within that command..."
posted by puddpunk at 3:40 PM on November 24, 2007
puddpunk, I don't think that will work, since each command in the pipeline gets its own environment; RET_VAL won't be the same variable on both sides of the pipe.
The PIPESTATUS array built into bash is a partial workaround for this, but in the general case, it's easier to save $? into a file at some appropriate point, and pick it up later.
posted by flabdablet at 3:44 PM on November 24, 2007
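A quick demonstration of the per-environment point, using a throwaway variable: the assignment on the left of the pipe happens in a subshell, so the parent shell never sees it.
$ unset x
$ x=5 | cat
$ echo ${x:-unset}
unset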
Best answer: PIPESTATUS is the way to go, so long as you can use bash and not any other shell. Here's a handy script that can filter the output of a program while preserving its exit value:
#!/bin/bash
PAT=$1
shift
"$@" 2>&1 | LANG=C egrep -ve "$PAT"
exit ${PIPESTATUS[0]}
Example usage:
filterout "Uninitialized" gcc dodgy.c
posted by jepler at 3:46 PM on November 24, 2007
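The same trick adapts directly to the asker's setup; a sketch (the wrapper and the hard-coded postprocessor are illustrative, not jepler's):
#!/bin/bash
# Run the given command, pipe its output through the postprocessor,
# and exit with the command's own status rather than the postprocessor's.
"$@" 2>&1 | postprocesor_that_succeeds
exit ${PIPESTATUS[0]}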
It looks like there's a solution for ksh, but it's ugly. more variations for various shells
posted by jepler at 3:51 PM on November 24, 2007
The mispipe command from the moreutils package will do this. Just 'aptitude install moreutils' on Debian or Ubuntu or similar.
posted by mbrubeck at 3:56 PM on November 24, 2007
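Usage would look something like this (mispipe takes the two halves of the pipeline as separate arguments and exits with the first command's status):
$ mispipe mycmd_that_fails postprocesor_that_succeeds
$ echo $?
1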
In bash and others you can do:
# mycmd && postprocessor
then you can just echo $? to see the exit status from the failed command since the post processor will only be run if the first command runs cleanly. Obviously this is useless if you want the post processor to run even if your command fails...
posted by foodgeek at 3:57 PM on November 24, 2007
foodgeek, that doesn't pipe the output from mycmd into postprocessor.
posted by mbrubeck at 4:30 PM on November 24, 2007
grouse and jepler have the way. "man bash" and search for PIPESTATUS ("/PIPESTATUS" while in the manpage, with slash and without quotes).
posted by rhizome at 5:28 PM on November 24, 2007
mbrubeck, true 'nuf - but aside from it not working, it's a perfect solution! :)
posted by foodgeek at 5:52 PM on November 24, 2007
Here's a way to do it by making your own (named) pipe.
$ mkfifo pipe #make a pipe, named "pipe"
$ postprocesor_that_succeeds < pipe & #the call to read from pipe will block until a program writes to pipe
$ mycmd_that_fails > pipe
I hope the above is understandable, I included comments (which start at the # and extend to the end of the line, they are safe to put in the command line).
To make sure this worked, I did this:
$ mkfifo pipe
$ cat < pipe &
$ bash #start a new shell, just for the exit value
$ exit 2 > pipe #exit out of new shell
$ echo $? #back in first shell
2
$
posted by philomathoholic at 8:31 PM on November 24, 2007
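Tying that back to the original question: the failing command runs in the foreground, so its status lands in $? directly (the status variable and the wait for the background reader below are additions, not from the original post):
$ mkfifo pipe
$ postprocesor_that_succeeds < pipe &
$ mycmd_that_fails > pipe
$ status=$? #mycmd ran in the foreground, so this is its exit code
$ wait #let the background postprocessor finish
$ echo $status
1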
Really though, just install moreutils and use mispipe (like mbrubeck suggests). It's the non-hacked-together way to do it.
posted by philomathoholic at 8:37 PM on November 24, 2007
Response by poster: philomathoholic:
That sounds promising, but I'll have to experiment to ensure that it works under older Solaris _and_ on cygwin. That's a nice, tidy solution, so I hope it works.
puddpunk: I tried yours as well, and it works well enough, so that might be the short-term solution.
grouse: You've piqued my curiosity; the only question: what are the indices in the PIPESTATUS array? The order of commands in the last pipe?
posted by ChrisR at 8:44 AM on November 25, 2007
What are the indices in the PIPESTATUS array? The order of commands in the last pipe?
Yes.
Are you using bash? If so I think it is a no-brainer to use the built-in $PIPESTATUS array for what it was designed for, rather than hacky solutions or requiring the installation of another package.
posted by grouse at 9:02 AM on November 25, 2007
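One quick way to see the ordering, using throwaway subshells with known exit codes:
$ (exit 3) | (exit 1) | true
$ echo ${PIPESTATUS[*]}
3 1 0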
Response by poster: I'll have to check to see if I am able to do so on all of our build boxes; we've got some older architectures that are... bash-unfriendly.
I hope to be able to; I implemented it on the Linux architectures and it works like a charm.
posted by ChrisR at 10:29 AM on November 25, 2007
mycmd | perl_thingy | postprocessor
Where the perl_thingy could stick $? in a file, then postprocessor could read that file. More or less complicated, depending on what you're actually doing and your personal tastes, would be to have a second perl script that would accept the postprocessor's output, read the stashed result from the first, and generate whatever kind of summary you like.
posted by freebird at 3:10 PM on November 24, 2007
This thread is closed to new comments.