
Re: [MiNT] VDI design and FreeType2 trouble



Ah &*^%#, I wrote a book again.  Read it anyway!

On Thu, 2005-04-28 at 17:39 +0200, Johan Klockars wrote:
> >
> >Well, isn't it possible to have VDI and AES pure user-space programs ?
> >  
> 
> I'm not going to get into a discussion about the AES. Other people know
> more about that.

While I think this is possible, I don't think it's desirable.  You would
be reducing the AES to a very poor widget set (with server-side widgets)
and window manager.  This is okay, but if you move the AES and VDI into
user space you constantly have the overhead of task switching, and all
those arrays passed around suddenly need more appropriate IPC than just
dropping a pointer.  I don't think you'd gain as much as you'd lose.

> >The traps are only used to pass commands. All the rest of work can be
> >done in user mode. Also, we lack kernel drivers to access video
> >(keyboard and mouse exist as xdd), and a VDI to use those drivers.

I definitely think a better driver API is needed, but I think a virtual
address space must come first.  Talking to advanced graphics cards means
manipulating memory on the card and in local memory, plus memory
transfers over the AGP and/or PCI bus.  Just being able to mmap() a
frame buffer is a bit difficult with the current system.

A working mmap() would also make compatibility much better.  It would be
possible to force older apps that expect certain video modes or features
to write to an off-screen pixmap and then have the emulation layer
simply blit this to the inside of a window. 
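
To make that concrete, here's a rough sketch of what the compatibility
path could look like from user space, assuming a hypothetical /dev/fb0
driver node and a fixed 640x480/16-bit mode (both just placeholders):

#include <sys/mman.h>
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd;
    long i;
    size_t size = 640L * 480L * 2;          /* assume 640x480, 16 bpp   */
    unsigned short *fb;

    fd = open("/dev/fb0", O_RDWR);          /* hypothetical driver node */
    if (fd < 0)
        return 1;

    fb = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) {
        close(fd);
        return 1;
    }

    /* An emulation layer could hand an old app a mapping like this
     * that really points at an off-screen pixmap, and blit it into a
     * window on each update. */
    for (i = 0; i < 640L * 480L; i++)
        fb[i] = 0x7fff;                     /* fill with a solid colour */

    munmap(fb, size);
    close(fd);
    return 0;
}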

I also think that Logbase() should be per-application.  It makes logical
sense to do so, and to have it inherited by children.  To avoid swapping
the pointer all over, the VDI would need to read logbase directly from
the process table entry, which would be fine if the VDI were integrated
more tightly with the kernel.  I also think some problematic apps can be
tamed this way without breaking anything that wouldn't already be
broken.  Since it's inherited, you can then decide, with a wrapper,
exactly which applications you want to leave alone, and which you want
to render off-screen, either for VNC services, desktop in a window,
off-screen compositing, or whatever.

I realize there is overlap of functionality in the last two paragraphs,
but think of it as a per-application logbase, with the MMU controlling
physbase.
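
Roughly what I have in mind (the structure and field names below are
made up for illustration, this isn't real MiNT kernel code):

struct proc {
    /* ...existing process table fields... */
    void *p_logbase;        /* hypothetical per-process screen log base */
};

/* An integrated VDI would fetch its output address from the calling
 * process rather than from a single global Logbase(). */
static void *vdi_logbase(struct proc *curproc)
{
    return curproc->p_logbase;
}

/* On Pfork()/Pexec() the child simply inherits the parent's value, so
 * a small wrapper can point selected applications at an off-screen
 * buffer (VNC, desktop-in-a-window, compositing) before launching
 * them. */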

  
> From an email I wrote to Standa last night (we were mainly talking about
> glyph caching and whether it would be possible to build glyphs while in
> user mode (so as not to halt the rest of the system while rendering)):

On many systems X-Windows runs faster over a network (with high-speed
switched networks, proper compressing drivers, etc.), since then you no
longer have to switch back and forth between the program drawing and the
X server actually doing the work.  The network becomes a pipeline
between the two pieces operating on separate CPUs.

For an improvement in rendering, perhaps some sort of pipeline would be
in order, which would avoid the problem of the CPU sitting in supervisor
mode while a graphics card (or Blitter) does rendering, as well as allow
longer rendering operations to give up control of the CPU on occasion.

You could have all VDI query calls return directly with no context
switch, as they do now.  Rendering calls normally don't return data, so
they can be handled by buffering the request and then returning
immediately to user space.  Buffering the request, if the VDI were
integrated into the kernel, would just mean handing a pointer to a queue
or circular buffer with a spinlock on it.  The queue or circular buffer
could be cleared for any whole-screen redraw, page flip, screen blank,
etc.  I say circular buffer since it's often very efficient, and the
ColdFire even has special instructions for them.
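
Something like this is what I mean by buffering (all the names here are
invented, and the lock is left as a stub - on 68k it would be a tas loop
or interrupt masking):

#define VDI_RING_SIZE 1024              /* power of two keeps the wrap cheap */

struct vdi_cmd {
    short opcode;                       /* e.g. polyline, filled rectangle   */
    short nptsin, nintin;
    short pts[64];                      /* copied-in ptsin parameters        */
    short ints[64];                     /* copied-in intin parameters        */
};

static struct vdi_cmd ring[VDI_RING_SIZE];
static volatile unsigned head, tail;    /* producer (trap) / consumer        */

static void ring_lock(void)   { /* tas loop or interrupt masking here */ }
static void ring_unlock(void) { }

/* Called from the trap handler: copy the request, then return to user
 * space immediately, no context switch needed. */
int vdi_enqueue(const struct vdi_cmd *cmd)
{
    unsigned next;

    ring_lock();
    next = (head + 1) & (VDI_RING_SIZE - 1);
    if (next == tail) {                 /* full - caller would have to wait  */
        ring_unlock();
        return -1;
    }
    ring[head] = *cmd;
    head = next;
    ring_unlock();
    return 0;
}

/* A whole-screen redraw, page flip or screen blank could drop pending
 * work by simply setting tail = head under the lock. */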

Some special cases exist, such as v_get_pixel(), which would end up
slowing down the whole system considerably, since you'd have to flush
the pipeline, waiting for all the pending rendering to finish, then
query the pixel, which may be in slow graphics card memory, and finally
return to the user.  I suppose you could queue the command and put the
application on a wait queue, but that's more complicated and could slow
things down even further!
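
Continuing the same sketch, the ugly part of a query looks roughly like
this (vdi_dispatch_one() and read_framebuffer() are invented names):

static void vdi_flush(void)
{
    while (tail != head)
        vdi_dispatch_one();             /* execute ring[tail], advance tail  */
}

short vdi_get_pixel(int x, int y)
{
    vdi_flush();                        /* wait for all pending rendering    */
    return read_framebuffer(x, y);      /* possibly a slow card memory read  */
}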

Then a VDI kernel process (does MiNT have kernel-mode processes?) can
dispatch the drawing.  This can be done intelligently based on GPU
capabilities and interrupts, during vblank interrupts, or by just
stealing some CPU cycles if the queue is big and you need the CPU to
render.  Even that can be done a bit more smoothly, since the driver is
basically cooperatively multitasking with the rest of the kernel - it's
interrupt- and operation-driven instead of time-sliced.
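
And the consumer side of the same sketch, driven from the vblank
interrupt or a kernel thread rather than a time slice (again, all names
invented):

static void vdi_dispatch_one(void)
{
    struct vdi_cmd *c = &ring[tail];

    switch (c->opcode) {
    /* hand polylines, fills, blits etc. to the card, the Blitter,
       or a software fallback */
    default:
        break;
    }
    tail = (tail + 1) & (VDI_RING_SIZE - 1);
}

void vdi_vblank_kick(void)
{
    int budget = 32;                    /* don't hog one interrupt           */

    while (tail != head && budget--)
        vdi_dispatch_one();
}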

> Lots of things in the VDI could be done without going to supervisor
> mode. Some need synchronization (open/close virtual/workstation and a
> couple of others) and the actual screen output should be done via some
> kind of supervisor mode driver (not strictly necessary for all hardware,
> but probably a good idea even there).

I would definitely prefer a specific driver for everything, as well as
an API to query DDC modes (ST hardware would just look at the monitor
type pin and nothing more) and have the driver return which video modes
are available based on that.  I'd also like to see more advanced
features, such as an OpenGL rendering pipeline.  There is a recent trend
of making 3D graphics hardware much faster than 2D hardware, and 2D
hardware is being phased out as the 3D hardware can do most 2D
effects.
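
The query API could be as simple as something like this - entirely
hypothetical, just to show the shape of the interface:

struct video_mode {
    short width, height;
    short depth;                        /* bits per pixel                    */
    short refresh;                      /* Hz, 0 if unknown                  */
};

struct video_driver {
    const char *name;
    /* Read DDC/EDID (or just the ST monitor type pin) and fill in the
     * modes this hardware/monitor combination can actually do. */
    int  (*enum_modes)(struct video_mode *buf, int max);
    int  (*set_mode)(const struct video_mode *m);
    void *(*map_framebuffer)(unsigned long *size);    /* backs mmap()        */
};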

On the Linux front, the DRI drivers are moving from Xorg into the kernel
(see the Mesa/DRI project - this already works for most open-source
drivers, including the ATI Radeon up to the 9200).  An X server is being
written to take advantage of this, and Novell has hired its creator to
further the work (expect SUSE to announce Xgl soon).

Xgl then no longer needs to know about graphics drivers, PCI and AGP
operations or anything else - it just talks the OpenGL API (actually
OpenGL ES is the new target; see http://khronos.org/opengles/ for the
details).  The X server will then use the OpenGL ES API for everything
from resolution changes to 2D acceleration to window control.  This
should make drivers more portable, with a standard set of features and
capabilities.  It would help Atari too (for open-source drivers) if we
used the same API model.
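
As a tiny illustration of "2D through the 3D API", filling a rectangle
with plain OpenGL ES 1.x calls looks roughly like this (EGL context and
surface setup omitted):

#include <GLES/gl.h>

void fill_rect(int scr_w, int scr_h, float x, float y, float w, float h)
{
    GLfloat v[8] = { x, y,  x + w, y,  x, y + h,  x + w, y + h };

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrthof(0.0f, (GLfloat)scr_w, (GLfloat)scr_h, 0.0f, -1.0f, 1.0f);

    glColor4f(1.0f, 0.0f, 0.0f, 1.0f);          /* plain red fill            */
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, v);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    glDisableClientState(GL_VERTEX_ARRAY);
}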

The "Cairo" library, which is very similar to the VDI (device
independant, etc) but has more advanced capabilities, will use the
"Glitz" backend which renders Cairo calls with OpenGL hardware.  Cairo
is based on alpha compositing instead of bitblit, and has few
differences in the backends compared to the VDI workstations (postscript
and pdf backends instead of opening specific printer drivers, an SVG
backend instead of a metafile, pixmap and png backends instead of
memory.sys, etc) will use the "Glitz" backend which renders Cairo calls
with OpenGL hardware.  Another possibility is EVAS from the
enlightenment project which is said to be faster and better for embedded
systems (seen it doing anti-aliased animations on a cell phone!) with
similar quality and features to Cairo, and it too can render to OpenGL,
only directly, without "glitz".
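
A small Cairo example to show what I mean about the backends: the
drawing code below doesn't care whether the surface is a pixmap, a PNG,
PostScript or (via Glitz) OpenGL - only the surface creation changes:

#include <cairo.h>

int main(void)
{
    cairo_surface_t *surface =
        cairo_image_surface_create(CAIRO_FORMAT_ARGB32, 256, 256);
    /* e.g. cairo_ps_surface_create("out.ps", 256, 256) for print      */
    cairo_t *cr = cairo_create(surface);

    cairo_set_source_rgb(cr, 0.0, 0.0, 0.0);
    cairo_set_line_width(cr, 2.0);
    cairo_move_to(cr, 16.0, 16.0);
    cairo_line_to(cr, 240.0, 240.0);    /* an anti-aliased line         */
    cairo_stroke(cr);

    cairo_surface_write_to_png(surface, "out.png");
    cairo_destroy(cr);
    cairo_surface_destroy(surface);
    return 0;
}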

For example - if the device driver layer supports OpenGL (or OpenGL ES),
then Mesa becomes a thinner user-space wrapper around the driver, and
you could even go so far as to make the VDI a wrapper or compatibility
layer around Cairo/EVAS rendering to Glitz/OpenGL.  I've been
considering hacking up Aranym to use a native Cairo/EVAS library to
render VDI calls (I wonder what happens if you don't turn off native
anti-aliasing), as this would make a good test of the idea without
having to port Cairo or EVAS and device drivers first.  There's already
a Mesa pipe for Aranym, so that could easily function as the OpenGL
driver to build and test the rest of the system.
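
The VDI compatibility layer could then be little more than translation,
something like this hypothetical v_pline() handler (the vwk structure is
invented; ptsin has the usual x0,y0,x1,y1,... layout):

#include <cairo.h>

struct vwk {
    cairo_t *cr;            /* one Cairo context per VDI workstation    */
};

static void vdi_v_pline(struct vwk *wk, int count, const short *ptsin)
{
    int i;

    if (count < 2)
        return;

    cairo_move_to(wk->cr, ptsin[0], ptsin[1]);
    for (i = 1; i < count; i++)
        cairo_line_to(wk->cr, ptsin[2 * i], ptsin[2 * i + 1]);

    /* Turning anti-aliasing off here would give old apps pixel-exact
     * output; leaving it on is the "what happens if we don't" test. */
    cairo_stroke(wk->cr);
}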

> In the case of fVDI, my ideal driver system would be more or less what
> its current drivers are (with the planned extensions and modifications ;-).

I missed what those extensions were, but they're likely nowhere near as
extensive as what I'm suggesting.

> >Would it be possible to run a VDI (fvdi or ovdi) after MiNT is started,
> >if needed drivers were available ?

I'd like to see one kernel that takes over the whole system and has
everything needed, not a MiNT layer over TOS or EmuTOS followed by a
METADOS layer, a VDI layer, a GDOS layer, various AUTO programs, and an
AES layer.  Just one single OS which would become a standard, without
having to specifically code for each variation of the different pieces.

> Certainly. fVDI can do that now (or at least it could when I last tried,
> and I suppose the protection levels in MiNT couldn't be set very high if
> you want to insert a new trap handler), even from the desktop, but this
> is of course with everything running from the standard Trap#2. With the
> proper infrastructure (shared library for the user mode VDI, I suppose,
> and with programs using that instead of Trap, and some kind of kernel
> module for the video access) this is definitely workable.

I've yet to see a real shared library system that didn't start with the
virtual memory system.  

Making the VDI trap into a wrapper that does something more clever would
be problematic, as you don't really want to make OS calls from inside
the VDI trap handler - unless, of course, the VDI were an integral part
of the OS and could internally call the proper routines to send the data
to the graphics card driver.  Many VDI calls could be phased out, as
we'd only need a subset for compatibility with older apps.
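
In that integrated case the trap handler really is just a thin wrapper,
roughly like this (the parameter block layout is the usual VDI one, but
the helper functions are invented names, tying back to the ring buffer
sketch above):

struct vdi_pb {
    short *contrl;          /* contrl[0] = opcode, [1] = #ptsin,
                               [3] = #intin                              */
    short *intin;
    short *ptsin;
    short *intout;
    short *ptsout;
};

extern int  is_query_opcode(short opcode);       /* hypothetical         */
extern long vdi_handle_query(struct vdi_pb *pb); /* hypothetical         */
extern long vdi_enqueue_pb(struct vdi_pb *pb);   /* hypothetical         */

long vdi_trap(struct vdi_pb *pb)   /* entered with d0 = 0x73, d1 = pb    */
{
    if (is_query_opcode(pb->contrl[0]))
        return vdi_handle_query(pb);    /* answer now, no buffering      */

    return vdi_enqueue_pb(pb);          /* buffer and return immediately */
}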

> Of course, any program not linked against the new VDI library would
> somehow need to be redirected there.

Well, if you want to make a completely new VDI that breaks compatibility
and no longer uses the trap handler, then all you have is a graphics
library - why not use a better one and switch to Cairo or EVAS, and let
the older VDI apps be handled as in the last paragraph?