
proposal - "piped files"



This is the idea that's been nagging me all this time, working on the CAB
overlay. The problem there is to download data from a TCP connection, write
it permanently to disk, and simultaneously write it to another process (the
main CAB process), allowing that process to crunch the incoming data in real
time while a permanent copy is kept.

The solution I propose is a new ioctl for files, which makes them behave
like pipes, in this sense: if a file is open for writing, and the same file
is also opened for reading, then a read at the end of the file suspends the
reader instead of returning an EOF. Once the writer has closed the file,
behavior returns to normal, and reading to the end returns an EOF.

The ioctl should be usable by both the reader and the writer, but with two
different behaviors - if the writer sets this mode, then all readers are
affected; if a reader sets it, then only that particular reading process is
affected. This would allow a very trivial, very efficient version of the
Unix "tail -f" command, for example...

The only question is what happens when a program makes assumptions about the
block sizes it can read. Usually, if you have an 8K file and you request a
4K read, you get those 4K bytes right away. But with this ioctl, the file
may only be 2K long at the moment, with another 2K arriving later. Should
the read wait for the full 4K to arrive, or should it return right away with
the 2K that's there? I suppose that's what the O_NDELAY open mode is for...

One other point - for a filesystem with sufficient data cache blocks, this
will not perform much worse than an actual pipe, since the reader's reads can
all be satisfied from the cache...

Any opinions on this?
  -- Howard