[MiNT] Where shall we go tomorrow?
Usually I keep myself more in the background, but the recent discussions
here on this list made me break my silence to comment on some of the
current topics. I think we all have different opinions about where MiNT
development should be going, so I took some time and thought about a
possible path that MiNT development could take in the future.
So be warned, this is going to take a little while. :-)
In 1986 I bought my first 512 ST+, and even though there were minor
changes (switch to an STM) and upgrades (4 MB, hard disks, ...), it was
my only computer at home until two years ago, when I bought a PC because
of my job. However, I never stopped reading the MiNT list, and I am
still very interested in what's going on with MiNT.
In my eyes one of the major advantages of MiNT has always been its
compatibility with the "old" 68000 machines. I was always impressed by
how many aspects of a Unix-like operating system (which I thought needed
more advanced hardware features) could be realized even on the small
Motorola CPUs after all. Ok, it sometimes needed a little more thinking,
but it could be done. And so far I haven't heard of any suggestion or
plan that really needs more than this -- well, ok, virtual memory may be
one exception. So breaking 68000 compatibility would be a sad (and in
my opinion unnecessary) step.
For myself I'd say that I most probably won't invest any further money
in Atari hardware, because the price-to-power ratio is always better on
the PC side, like it or not. So I don't think I'll buy a Falcon or TT or
some newer machine, and if compatibility with the 68000 is lost, I think
you'll also lose me as a user. And I believe there are other people who
think similarly: keeping in touch with the Atari world for sentimental
reasons, because they simply like it, but forced to use PCs because of
their jobs or because of tools that do not exist on Atari systems.
When we discuss whether or not to stop 68000 support, we are really just
discussing memory. There is no special feature that requires the
capabilities of a 68020 or higher, and speed isn't an issue either. My
point is: we should not break compatibility if there is still a way to
keep it, even if it means a little more work. At the moment we have
three possibilities which would work regarding the size of the kernel:
1. Introduce modules that are loadable at runtime
2. Build small customized kernels
3. Keep a special small 68000-optimized kernel and update it from time
to time
The first solution would be very useful, even on systems with large
memory, as it accelerates the boot process and keeps memory usage low by
holding only those parts in memory that are really in use. And all this
without any recompilation of the kernel.
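Just to make this concrete, here is a rough sketch in C of what such a
module descriptor could look like. All names and fields are invented
for illustration; nothing like this exists in MiNT today:

  /* Hypothetical descriptor that every loadable module would export.
   * The kernel relocates the module, checks the version, calls init(),
   * and on unload calls cleanup() -- so only the parts really in use
   * stay resident. */
  struct kmodule
  {
      const char *name;            /* e.g. "minixfs" */
      long version;                /* kernel interface it was built for */
      long (*init)(void);          /* returns 0 on success */
      void (*cleanup)(void);       /* release resources before unload */
  };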
The second solution would be appropriate in some cases, where someone
says he can live without a few features. However, this requires the
ability to recompile the whole kernel, which is more or less impossible
on a small system. Distributing only the minimal kernel in binary form
and letting everybody else recompile their own version would be a little
better, but I think many users want to use most of the MiNT features and
would not like this strategy.
However, I think a combination of these two solutions would be best:
have modules so everything works out of the box on every machine, and
let those who want to optimize for speed and memory usage recompile to
get a customized kernel.
Solution 3, as suggested by some people, would be very difficult or
simply wouldn't work, as I can tell from my experience as a software
developer of quite some years. Keeping different development branches of
the same software in parallel is very time consuming, and sometimes it's
nearly impossible without getting inconsistencies between the branches.
The dependencies between all the tools and utilities that make up the
whole MiNT operating system are very complicated: this ps command only
works with that MiNT version, this Minix filesystem requires at least
that bug fix in the kernel, this kernel was updated because of the needs
of that application, and so on. I guess most of the problems that people
have here on the list even now are the result of different development
levels of the different tools that are copied together from all over the
world and are assumed to work together. But they often don't. Adding
further kernel versions (small, large, medium, 68000-optimized,
68030-optimized, ...) would only add to the confusion and make things
worse.
If development is really split, the 68000 kernel would always be the
stepchild: development would go on without thinking of the 68000 in the
first place, and at some point it would simply no longer be possible to
bring the 68000 kernel up to the state of the standard kernel. So I
would strongly discourage solution 3. Moving 68000 support to a
different development level is more or less the same as dropping 68000
support altogether (at least in the long term).
About the security discussion: the goal of MiNT development should be
to eliminate all programs that need to use supervisor mode. If this
goal is achieved, then the super calls can be blocked completely. If
there is no other way to leave user mode (e.g. by hooking into any trap
or exception routine), then the system can be made very secure and
stable. I know, at the moment this is only wishful thinking, but think
ahead: shouldn't this be the correct aim?
How can we get nearer to this goal? There are three questions to be
answered:
1. How can we find programs that make supervisor mode calls?
2. Can we really get rid of those programs?
3. How do system-specific extensions and tools do their work if they
cannot get into supervisor mode?
Finding the programs is very easy: simply write a wrapper around these
calls which reports any process making them.
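Such a wrapper could look roughly like this; the hook point and the
helper functions here are assumptions about kernel internals, not
actual MiNT code:

  /* Sketch: log every use of the GEMDOS Super() call (trap #1,
   * function 0x20), then let it through unchanged.  sys_super(),
   * cur_pid(), cur_name() and kprintf() stand in for whatever the
   * kernel really provides. */
  extern long sys_super(long sp);             /* original implementation */
  extern int cur_pid(void);                   /* pid of the caller */
  extern const char *cur_name(void);          /* name of the caller */
  extern void kprintf(const char *fmt, ...);  /* kernel log output */

  long super_wrapper(long sp)
  {
      kprintf("Super() used by pid %d (%s)\n", cur_pid(), cur_name());
      return sys_super(sp);     /* still allowed, only reported */
  }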
Can we get rid of them? Hmm, this is more difficult to answer. When a
program hooks into a system call, that is most probably a sign of a
missing kernel feature. When a program tries to use other system
resources that can only be accessed in supervisor mode (like system
variables or screen memory), then this also looks like the wrong way to
do it. There should be a proper way of doing this without using
supervisor mode. A program (read: application) should only be allowed to
use resources that are provided by the kernel via an appropriate
interface. If the interface is missing, then it should be added to the
kernel; it should not be the application that works around the gap. The
kernel should be the only door between the software and the hardware;
nothing should go around it directly to the hardware. Any driver or tool
that requires direct hardware access is in fact an extension to the
kernel and belongs to the category of programs addressed by question 3.
So if a program does use supervisor mode or relies on "restricted"
information, then I believe it is doing something the wrong way. Please
correct me if you have examples where this is not true.
Don't get me wrong, I know there is "old" software which simply uses
supervisor mode and can't be recompiled because of missing sources or
copyright issues. So we have a problem here. Here is my suggestion for
how to handle this until we reach a state where only "secure" software
exists: we provide three levels of security in the kernel, switchable
by some setting in the MiNT configuration:
1. Low security, with no restrictions on super calls. This is the state
we have now.
2. High security, allowing no super calls at all. This is the final
goal.
3. A medium security level that reports processes making super calls,
so that users can decide whether they need the tool/application or not,
but still allows them to run. This is an in-between state between 1.
and 2.
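In code, the whole decision could come down to one switch on a
configured level. A rough sketch, with all names invented for the
example (only EACCDN is the real GEMDOS "access denied" error code):

  /* Sketch: three switchable security levels for super calls. */
  enum seclevel { SEC_LOW, SEC_MEDIUM, SEC_HIGH };

  extern enum seclevel sec_level;              /* set from the MiNT config */
  extern long sys_super(long sp);              /* the original call */
  extern void report_caller(const char *call); /* as in the wrapper above */

  #define EACCDN (-36L)                        /* GEMDOS: access denied */

  long checked_super(long sp)
  {
      switch (sec_level)
      {
      case SEC_LOW:                /* 1. no restrictions (today) */
          return sys_super(sp);
      case SEC_MEDIUM:             /* 3. allow, but report */
          report_caller("Super()");
          return sys_super(sp);
      case SEC_HIGH:               /* 2. the final goal: refuse */
      default:
          return EACCDN;
      }
  }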
So if a user *is* concerned about security, then he simply avoids all
programs with insecure calls and switches the kernel security to high.
We'll see whether enough secure programs remain to have a usable system.
I think this problem is overestimated: a normal application should work
right out of the box. Why should a text processor or office application
hook into the system? There is no reason why. So I think there are
enough well-written programs even now.
Ok, so now we have a kernel that does not allow supervisor mode calls
and all this "dirty" stuff; how can a system tool do its job? Well, by
setting it to a special user ID (e.g. root or some "system" ID) for
which these calls are still allowed, or by defining special device
driver formats (like the .xfs file system drivers) which can only be
loaded by MiNT itself and not by anything else, especially not by TOS
before MiNT has started. Well, I know this requires modifying or even
rewriting most of the TSRs we know today, but only to fit the new
format, not in functionality. If this format exists and we get an
overview of which tools can be loaded together with MiNT, it will add a
lot of security (and hopefully stability).
This would also make any linked trap lists obsolete, and we would no
longer need a tool like TraPatch. How MiNT internally handles the system
calls of all those new-format system programs is another question.
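To sketch how such a new format could be restricted to MiNT: give the
file a header that TOS does not recognize as an executable. The magic
value and field layout below are invented for illustration:

  /* Hypothetical header for a new system-module format, in the
   * spirit of the .xfs drivers.  A normal TOS program file starts
   * with the magic word 0x601a; a different magic means plain TOS
   * refuses to run the file, so only MiNT itself can load it. */
  struct sysmod_header
  {
      long magic;           /* e.g. 0x4d694e54 ("MiNT"), not 0x601a */
      long kernel_version;  /* minimum kernel version required */
      long entry_offset;    /* where MiNT jumps after loading */
  };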
Ok, there was another topic about the /kern file system. First of all,
I appreciate the work of all the people doing such a great job on MiNT
development. And I understand that not every tiny modification should
end up in a discussion on this list. However, I think that any
*conceptual* change should be discussed first, and introducing /kern
*is* a conceptual change. On one hand I love the idea of a human
readable kernel interface, but on the other hand I want to warn about
using parsers to interpret these values instead of using a binary
interface. Any little change of the output, be it an extra blank or TAB
for better readability, be it an additional header line, be it a new
value added before, after or beside the tabular output, will break
*any* existing parser and thus every application using this
information.
And what about people writing incorrect parsers? As someone who has
worked in a department for programming languages and compiler
generation, I can say that writing parsers is not always easy. The
output of /kern might be simple in some cases, but how about a list of
SCSI targets, where the output depends on the strings returned by the
targets themselves? These might be very difficult to predict (and
parse).
And even more difficulties arise: modifications to the /kern interface
(assuming text output) require modifications and recompilation of all
programs using this interface. A cleverly designed binary interface
would in the worst case only need a recompilation with new header files
describing the new elements.
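To illustrate the difference, a binary record with a size field up
front could look like this (the struct and its fields are invented for
the example):

  /* Sketch of a versioned binary interface for process information. */
  struct kern_procinfo
  {
      long size;     /* sizeof() as filled in by the kernel; lets an
                      * old program detect and skip unknown fields */
      long pid;
      long ppid;
      long memused;  /* memory in use, in bytes */
      /* new values are appended here, after the existing fields */
  };

A program compiled against an older, shorter version of the struct
simply ignores the extra bytes at the end, while a text parser breaks
on the first extra column.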
So let me summarize my points:
- Loadable modules and customizable kernel compilation would be useful
- Security should be improved by reducing supervisor mode calls and
finally blocking them altogether; special loadable modules for system
tasks (drivers, ...) should be developed instead of TSRs
- Conceptual changes should be discussed on the list first
- Parsing human readable output is usually much more difficult than
using a binary interface
Thank you for reading this far. My intention with this mail was that we
should not lose sight of the long-term goals while we're discussing
current changes. The goals may differ from the ones I outlined above as
one possibility, but please be constructive, not destructive, when
discussing. All this can be done without calling people names; we are
only human and make mistakes. Meta-discussions (discussions about how
to discuss) are boring most of the time and usually lead to nothing.
Be happy, be nice to each other, and have fun, that's important.
Bye,
Hartmut
--
------------------------------------------------------------------------
Hartmut Keller, keller@informatik.uni-stuttgart.de
------------------------------------------------------------------------