Re: [MiNT] virtual memory
> Hello!
>
> > I'm progressing in stages. First, I made the 030 tables look like
> > the 040/060 tables. This may make the 030 users hate me for a while,
> > as it does drive up the amount of memory needed for an MMU table.
> > This is done and sent to Frank.
>
> For mprot030 I haven't received anything yet. Hope it's not lost. Can you
> resend?
Sure thing.
> > Second, if we want virtual memory, we really can't afford to give
> > each process an MMU table entry, as we do now.
>
> Why not?
See below.
> > If I recall, the table
> > size would be in the megabytes per process.
>
> No, the table size here on my 040 is around 80 kB per process. The table
> size depends on how many pages are allocated by the program. At the moment
> it's equal to the main memory size, as there is no virtual mapping (96 MB
> requires something around 80 kB, if I remember correctly).
Actually, I think it's currently less than that, depending on the original MMU
tables. Maybe about 50-60 K or so.
> With real virtual mapping we can decrease the table size dramatically. A
> program that uses 512 kB of memory needs the top-level table, one
> second-level table and two page-level tables (that's only a few kB).
>
> Global memory can be shared between all processes.
>
> And finally, the tables can be swappable too.
This won't work, especially for global memory (which is where I started
running into problems). Each process needs to see the global memory in
its MMU table. Unless we want to restrict the size of virtual memory (I
don't), that means that each process needs a fully populated global
table. -That- table will take 2,163,200 bytes.
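For what it's worth, here is a back-of-the-envelope C sketch (mine, not
MiNT code) that reproduces all three numbers above, assuming the 68040
layout with 8K pages - a 128-entry root table, 128-entry pointer tables,
32-entry page tables, 4 bytes per descriptor:

/*
 * Quick sanity check of the 040 table sizes being discussed.
 * Assumes 8K pages: root covers 4 GB, one pointer table covers
 * 32 MB, one page table covers 256K.
 */
#include <stdio.h>

#define PAGE      8192ULL                /* 8K pages */
#define PT_SPAN   (32 * PAGE)            /* one page table maps 256K */
#define PTR_SPAN  (128 * PT_SPAN)        /* one pointer table maps 32M */
#define ROOT_SIZE (128 * 4)              /* 512 bytes */
#define PTR_SIZE  (128 * 4)              /* 512 bytes */
#define PT_SIZE   (32 * 4)               /* 128 bytes */

static unsigned long long table_bytes(unsigned long long mapped)
{
    unsigned long long pts  = (mapped + PT_SPAN - 1) / PT_SPAN;
    unsigned long long ptrs = (mapped + PTR_SPAN - 1) / PTR_SPAN;

    return ROOT_SIZE + ptrs * PTR_SIZE + pts * PT_SIZE;
}

int main(void)
{
    printf("96 MB : %llu bytes\n", table_bytes(96ULL << 20));
    printf("512 kB: %llu bytes\n", table_bytes(512ULL << 10));
    printf("4 GB  : %llu bytes\n", table_bytes(4096ULL << 20));
    return 0;
}

It prints 51200 bytes for 96 MB (the 50-60 K I mentioned), 1280 bytes
for the 512 kB program (root + one pointer table + two page tables), and
exactly 2,163,200 bytes for a fully populated 4 GB table.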
> > Therefore, I'm going
> > to make two MMU tables for all processes to use. The code changes
> > for the 68040/060 version is about 95% done. All that's left is the
> > changes needed to modify the MMU table(s) during a context switch.
> > I know what I need to do there, it's just a matter of finding time.
>
> I dislike this idea. A context switch in general is expensive. With the
> addition of rewriting the complete MMU tree, the computer will spend most
> of its time rewriting the MMU and not running applications. I have never
> heard of such an implementation. And I think it decreases the fault
> tolerance, as rewriting the MMU is a very critical part.
Maybe I misspoke. I am not rewriting the entire MMU table on a context
switch. I am changing the bits in the level 3 tables associated with up to two
processes - the one being switched out and the one being switched in.
Processes not involved in the context switch will not have their
entries modified.
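In rough C (hypothetical names, not the actual MiNT structures), the
per-switch work is just this, assuming 8K pages and a valid bit in each
level 3 descriptor:

#include <stddef.h>

#define PAGE_SIZE 8192UL
#define PD_VALID  0x01UL        /* hypothetical "present" bit */

struct mem_region {
    unsigned long loc;          /* start of the region */
    unsigned long len;          /* length in bytes */
};

/* inner loop: read, modify, write one descriptor per 8K page */
static void mark_region(unsigned long *level3,
                        const struct mem_region *reg, int valid)
{
    unsigned long page = reg->loc / PAGE_SIZE;
    unsigned long n = reg->len / PAGE_SIZE;

    while (n--) {
        if (valid)
            level3[page] |= PD_VALID;
        else
            level3[page] &= ~PD_VALID;
        page++;
    }
}

/* outer loop: only the two processes involved in the switch */
static void switch_tables(unsigned long *level3,
                          const struct mem_region *out, size_t nout,
                          const struct mem_region *in, size_t nin)
{
    size_t i;

    for (i = 0; i < nout; i++)
        mark_region(level3, &out[i], 0);   /* hide the outgoing process */
    for (i = 0; i < nin; i++)
        mark_region(level3, &in[i], 1);    /* expose the incoming one */
}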
Also, it is not that many instructions, especially compared with the existing
code in sleep(). It is two nested loops - the outer loop is over the
memory regions, the inner loop is over the size of the memory region divided
by 8K. Assuming the inner loop is 5 instructions (read, modify, write),
the outer loop is 10 instructions, and there are 20 regions of 64 K each
(a 1.2 MB app), it will only be ((64/8)*5 + 10)*20 = 1000 instructions.
On a 68030 running at 16 MHz, that's about 0.625 ms (assuming an average
of 10 cycles/instruction). I do not recall the tick size, or the minimum
number of ticks an application can run, so I can't say what the impact
on a process's time slice is. I know it needs to be done up to twice
(once per process involved), so that puts it at 1.25 ms per switch.
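The same arithmetic as a trivial program, in case anyone wants to play
with the assumptions (all figures are the estimates above, not
measurements):

#include <stdio.h>

int main(void)
{
    unsigned long pages = 64 / 8;                  /* 8K pages per 64K region */
    unsigned long insns = (pages * 5 + 10) * 20;   /* = 1000 instructions */
    double ms = insns * 10.0 / 16e6 * 1e3;         /* = 0.625 ms */

    printf("%lu instructions, %.3f ms, %.2f ms for both processes\n",
           insns, ms, 2.0 * ms);
    return 0;
}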
When I get around to virtual addressing, it will be even less, as I'm planning
on allocating 256K sections rather than 8K sections. This means that
only the level 2 tables will need to be modified, so the inner loop above
will run only once per region rather than 8 times - roughly
((1*5) + 10)*20 = 300 instructions in the example above. This will cut
the time down significantly.
I do plan on running performance tests. If the tests indicate a severe
performance problem, I will not release it until I can work at a level
2 table rather than a level 3 table.
If that doesn't work, then I guess I need to re-visit everything. But
I'll have learned a lot.
Michael
michael@fastlane.net