Re: [MiNT] memory fragmentation
On Tue, Aug 17, 1999 at 08:49:25PM +0200, Jo Even Skarstein wrote:
> On Sun, 15 Aug 1999 16:22:25 +0200, Guido Flohr wrote:
> > end that has the same effect as the Magic approach: The kernel could
> > deduce the optimum size for the TPA at runtime, thus making more
> > efficient use of smaller memory blocks.
>
> I think we discussed this last year... Let me quote from Tanenbaum's
> "Modern operating systems":
>
> "Another well-known algorithm is best fit. Best fit searches the entire
> list and takes the smallest hole that is adequate. Rather than breaking
> up a big hole that might be needed later, best fit tries to find a hole
> that is close to the actual size needed."
> [...]
> "Best fit is slower than first fit because it must search the entire list
> every time it is called. Somewhat surprisingly, it also results in more
> wasted memory than first fit or next fit because it tends to fill up
> memory with tiny, useless holes. First fit generates larger holes on the
> average.
This sounds so paradoxical that it could even be true. ;-)
Anyway, Thomas has already implemented it and it is worth a try. Let's
have a look at the result. There are two aspects of MiNT that Tanenbaum
does not talk about:
- speed: of course finding the "right" block will be slower with best fit.
  But I wouldn't be surprised if that penalty were compensated by a
  performance gain in mark_region (less memory to mark), not to mention
  pathological programs without the fastload flag set.
- best fit will produce smaller holes. Uhm, yes, that's the intention of
  it. OK, I know what you/Tanenbaum mean. But with little memory I am
  more interested in having one large block left free than in the number
  or size of several tiny wasted blocks. Furthermore, most Unix software
  tends to malloc a lot of small and very small blocks. I think that
  these tiny holes will not be wasted.
Let's wait and see.
Ciao
Guido