Hi Thomas, nice to hear from you, by the way. :-)

Thomas Binder wrote:
> > This Perl script reproduces the problem:
> >
> > #!/usr/bin/perl
> >
> > use strict;
> >
> > my $output = <<EOF;
> > Content-Type: text/plain
> >
> > EOF
> >
> > my $l = length $output;
> > while ($l < 100 * 1024) {
> >     $output .= sprintf '%x', ($l >> 8) & 0xf;
> >     ++$l;
> > }
> >
> > print $output;
> >
> > __END__
>
> Just to make sure it's not a pipe buffer issue: Does calling the
> script from the shell and piping its output to wc report the correct
> output size?
Yes, it works on the command line: on stdout, redirected to a pipe, or redirected to a regular file.
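For the record, this is the check I ran (repro.pl is just the name I saved the script under; the expected byte count follows from the 100 * 1024 limit in the loop):

  $ ./repro.pl | wc -c
  102400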
A similar problem existed in the Windows version of Apache: when a CGI process wrote more than blocksize bytes to stderr (i.e. the error log), it blocked there. Maybe the reason for the failure here is some fancy technique that Apache uses for buffered output.
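A minimal sketch of that stderr case, for comparison (the 128 KB total is my pick; anything comfortably past one pipe buffer should do):

  #!/usr/bin/perl

  use strict;

  # Emit a complete CGI response first, so stdout is not the variable.
  print "Content-Type: text/plain\n\nstderr flood test\n";

  # Now write well past a typical pipe buffer to stderr (the error
  # log). On the affected Windows setup the process blocked here.
  print STDERR 'x' x 1024 for 1 .. 128;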
I think that sockdev.xdd is the real culprit, because the behavior changes with the network interface I use: with lo0 I get about 4k of output, with eth0 (connected to tap0 on the Linux side) I get about 8k.
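To put numbers on the cut-off per interface, I count the bytes that actually arrive with something along these lines (host and path are placeholders for my test setup):

  #!/usr/bin/perl

  use strict;
  use IO::Socket::INET;

  # Placeholder address/path; adjust to wherever the script is installed.
  my $sock = IO::Socket::INET->new(
      PeerAddr => '192.168.0.1',
      PeerPort => 80,
      Proto    => 'tcp',
  ) or die "connect: $!";

  print $sock "GET /cgi-bin/repro.pl HTTP/1.0\r\n\r\n";

  # Read until EOF and count everything, headers included.
  my ($buf, $total) = ('', 0);
  $total += length $buf while read $sock, $buf, 4096;

  print "received $total bytes\n";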
Ciao
	Guido

--
Imperia AG, Development
Leyboldstr. 10 - D-50354 Hürth - http://www.imperia.net/