I ran into the same Python problem as this fellow. Namely: he’s written a script that dumps lines to stdout, and then runs

my_script.py | head

and gets this:

Traceback (most recent call last):
  File "/home/slaniel/bin/my_script.py", line 25, in 
    main()
  File "/home/slaniel/bin/my_script.py", line 22, in main
    print "".join(["%s %s\n" % (value, key) for key, value in sorted_list])
IOError: [Errno 32] Broken pipe

I.e., Python still has data buffered for stdout, but it can’t write it because head(1) has exited and closed the read end of the pipe. So my_script.py gets SIGPIPE, and Python surfaces that as an IOError exception. The solution is straightforward:

from signal import signal, SIGPIPE, SIG_DFL
signal(SIGPIPE, SIG_DFL)
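
For context, here’s the fix dropped into a complete toy script. Everything besides the two signal lines is illustrative, not from the original my_script.py:

#!/usr/bin/env python
# Restore the OS default for SIGPIPE (terminate silently), so that
# "toy_script.py | head" just stops instead of spewing a traceback.
from signal import signal, SIGPIPE, SIG_DFL
signal(SIGPIPE, SIG_DFL)

import sys

# Emit more lines than head(1) will read, to force the broken pipe.
for i in range(100000):
    sys.stdout.write("line %d\n" % i)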

This DFL thing is new to me:

signal.SIG_DFL
This is one of two standard signal handling options; it will simply perform the default function for the signal. For example, on most systems the default action for SIGQUIT is to dump core and exit, while the default action for SIGCHLD is to simply ignore it.

If I’m reading that right, Python replaces the default SIGPIPE behavior (terminate silently) with its own: the interpreter ignores the signal, so a write to a dead pipe comes back to your program as a thrown exception instead. To make the signal yield the system default, you need to tell Python explicitly to do that.
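
You can see this at interpreter startup (a quick check, assuming CPython):

import signal

# CPython sets SIGPIPE's handler to SIG_IGN while starting up, which
# is why a write to a closed pipe fails with EPIPE (raised as IOError)
# rather than killing the process outright.
print(signal.getsignal(signal.SIGPIPE) == signal.SIG_IGN)  # True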

Two questions:

  1. Why would Python do this? Is the general logic that it’s trying to “internalize” systemwide behaviors? Maybe it wants a script to be “write once, run anywhere”, so it can’t just accept the systemwide behavior. Instead, it has to turn external system events (like SIGPIPE) into internal program behaviors (like exceptions). Is that the idea?
  2. I don’t want to have to tell every one of my scripts to exit silently when it receives SIGPIPE. So I would prefer not to write
    from signal import signal, SIGPIPE, SIG_DFL
    signal(SIGPIPE, SIG_DFL)

    in every script that I ever produce. Do people have a general approach here? E.g., every script does an import Steve_lib (or your own equivalent) that sets up the expected signal-handling defaults? (A sketch of what I mean follows below.)
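
    For concreteness, here is one way that shared module could look. The module and function names are placeholders, not an established convention:

    # steve_lib.py -- hypothetical shared module; every script starts
    # with "import steve_lib" to pick up these defaults.
    from signal import signal, SIGPIPE, SIG_DFL

    def install_signal_defaults():
        # Hand SIGPIPE back to the OS default (terminate silently),
        # so importing scripts behave like ordinary Unix filters.
        signal(SIGPIPE, SIG_DFL)

    # Run at import time: importing the module is all a script needs to do.
    install_signal_defaults()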