Re: [MiNT] Shared libs without MMU resources
Hi,
Frank Naumann wrote:
You overlook a common fact in informatics: you can't get rid of the 
complexity, you can only move it around.  You have it either in the 
compiler, in the library/programs, or in the operating system.
Very true.  Everybody asking for shared libraries here should remember 
that! ;-)
I think that many people still don't see what VM (virtual memory, 
although "virtual addressing" would be a better term) or an MMU (memory 
management unit) has to do with shared libraries and dynamic linking. 
Why do you need that stuff?
Take this example (and you gurus out there, please forgive the 
simplification):
	extern int errno;
	FILE* fopen (const char* path, const char* mode);
A real-world example.  Say this function gets called with an invalid 
path argument.  The function returns a NULL pointer and sets the global 
variable errno to ENOENT.  But what happens at the assembler level?
In the static MiNT world, we call the function fopen(), which means we 
jump to the address that contains the function code.  The program 
counter PC crawls through the code and finally reaches an instruction 
that has to write an integer (ENOENT) to the memory location of the 
variable "errno".
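To make this concrete, here is a minimal sketch of such a failure path 
inside the statically linked library (not the real MiNTLib code, just a 
model).  The crucial point is the assignment to "errno": the compiler 
turns it into a store to one fixed, absolute address that was decided 
at link time.
	#include <stdio.h>
	#include <errno.h>

	FILE *fopen (const char *path, const char *mode)
	{
		/* ... try to open the file; suppose the path does
		   not exist ... */
		errno = ENOENT;	/* store to one fixed, absolute address */
		return NULL;
	}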
Now we start dreaming of shared libraries.  What would have to happen 
here?  Multiple programs could share the (assembler) code for "fopen", 
no problem; the logic of the function is the same for every process, so 
the code could be, too.  True? No, unfortunately not: the function code 
has to know the memory location of the variable "errno", and this 
location must - of course - differ for every process.
You may say now, "global variables like errno are evil anyhow...".  But 
that global variable "errno" is only one example of many in the evil 
real world.  Other examples are stdin, stdout, stderr.  Or look through 
the MiNTlib headers for reentrant versions of library functions (all 
those functions with names ending in "_r"); all of them would cause the 
same problem that "errno" does.
So, what can we do? The problem is complex, and as Frank says, we can 
only choose where to put that complexity but never get rid of it:
1)  The standard approach is probably the most efficient:  We completely 
ignore the problem in user land.  But in kernel land, we maintain an 
address translation table for each process, and the seemingly fixed 
(virtual) address of "errno" is mapped by the MMU to a real address that 
differs for every process.  In other words:
    printf ("Address of errno: %p\n", &errno);
would yield the same address for each and every process running in the 
system (using the shared libc).
Sounds inefficient, but it is not.  The MMU is a very fast piece of 
hardware.
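If you want to see this effect in action, here is a minimal sketch 
(assuming an ordinary Unix-like system with virtual addressing and 
fork(); it obviously demonstrates nothing on a plain 68000 without an 
MMU).  Both processes report the very same address for "errno", yet 
each holds its own value:
	#include <stdio.h>
	#include <errno.h>
	#include <unistd.h>
	#include <sys/wait.h>

	int main (void)
	{
		if (fork () == 0) {
			errno = ENOENT;
			printf ("child:  &errno = %p, errno = %d\n",
				(void *) &errno, errno);
		} else {
			errno = 0;
			wait (NULL);
			printf ("parent: &errno = %p, errno = %d\n",
				(void *) &errno, errno);
		}
		return 0;
	}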
Drawback: Old MiNT hardware (mc68000) does not have an MMU, but it does 
have a strong lobby here. ;-) Furthermore, several Atari idiosyncrasies, 
like the variable-length block of system variables, cookie jars, 
closed-source TSR programs, ill-designed IPC protocols, etc., will make 
the whole thing very hard to implement if you still want to be able to 
run legacy software.
2) We modify our compiler system: We have to insert a kind of 
indirection whenever the address of an unsharable memory location is 
needed.  One working implementation would be to write a process-specific 
address offset into one fixed processor register, make sure that no code 
ever touches this register, and whenever we have to retrieve the address 
of "errno", add that per-process offset to the fixed address.
This approach basically works but also has drawbacks.  The extra 
indirection costs a little performance, sure.  But worse: our standard 
compiler suite gcc does not support such a feature.  There used to be 
the -mbaserel patch for Atari gcc, but it was an unofficial hack that 
was extremely hard to maintain.
Still, somebody with a profound knowledge of gcc internals would 
probably be able to code that feature into the compiler and convince 
the gcc maintainers to accept the patch.  The work that then has to be 
done on the assembler, the linker, and in the kernel is probably easy 
compared to the compiler modifications.
3) Eliminate the problem by writing code (and interfaces) that is 100% 
sharable.  For example, you could change the interface of fopen() to:
	FILE* fopen (const char* path, const char* mode, int* errno);
This is the SLB approach.  Drawback: You have to rewrite virtually 
everything, because most real-world code is not written in such an 
SLB-friendly way.  What you want is a shared libc, a shared libjpeg or 
libpng, and those libraries cannot be implemented as SLBs.
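Just to illustrate how clumsy that interface is for the caller, here is 
a hypothetical usage example (fopen_slb is a made-up name, to avoid 
clashing with the normal prototype):
	#include <stdio.h>

	/* hypothetical SLB entry point with the interface shown above */
	extern FILE *fopen_slb (const char *path, const char *mode,
				int *err);

	int main (void)
	{
		int my_errno = 0;
		FILE *fp = fopen_slb ("/no/such/file", "r", &my_errno);

		if (fp == NULL)
			printf ("open failed, error code %d\n", my_errno);
		return 0;
	}
Every piece of per-process state has to travel through the call 
interface like this, and that is exactly what existing code does not do.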
So what could you do? Solution 3) is actually a hack, not a solution. 
The SLB will never be a libtool target, and will only be useful for 
proprietary Atari stuff.
Solution 1) is the cleanest, but the price is high.  You will probably 
have to say goodbye to a lot of legacy software.  Multiply the problems 
with memory protection by the number of TSRs in your AUTO folder to get 
an idea of the complexity.
Solution 2) would be feasible, but it requires a joint effort by people 
with deep insight into gcc, binutils, and the kernel.
You want to know my opinion? Try option number 2! Whereas the success 
of 1) is uncertain (the real problems are hard to foresee), approach 2) 
will certainly work, and you might even get some help from the embedded 
crowd.
Ciao
Guido
--
Imperia AG, Development
Leyboldstr. 10 - D-50354 Hürth - http://www.imperia.net/