
Re: [MiNT] WCOWORK implementation : conclusions.



Quoting Johan Klockars <johan@klockars.net>:

> mode on any window system I'm aware of (X has always been able to do it,

X's backing store isn't the same thing.  The Xcomposite extension of X.org is
closer to what we mean.

> but it's seldom used, and I can't say I've seen a Windows program do it)

Windows doesn't, that I know of.  Longhorn will.

> that doesn't make transparency and dynamic resizing a priority (MacOS X,
> and other "next generation" graphics systems).

Macs aren't blitting things around.  That off-screen buffer is an OpenGL
texture.  That's why everything is so damned fast: OpenGL controls the
scaling, transparency, and everything else.

The question isn't whether we should do it with the hardware we have.  The
question is whether it's worth making it possible for the future.  You have
to break the chicken-and-egg game.

> Doing this kind of drawing in off-screen bitmap and then blitting to the
> screen is, IMO, only reasonable if you want that transparency etc.
> Otherwise you'd be better off drawing the things on screen directly that
> can be and drawing the rest in the off-screen bitmaps.
> Updating by blitting can work relatively well in many cases, but in
> others it's just awful (witness the ARAnyM SDL driver).

However, when dragging a window or dialog across the screen, you are forcing
everything beneath the window you move to constantly update.  With off-screen
rendering, the application code isn't even told that some other window is
being dragged across its screen; the AES handles it automatically by
restoring from the off-screen bitmap.  This makes window movement VERY
smooth, which makes for a better user experience.  Also, applications don't
have to walk a rectangle list, since they write to their whole off-screen
bitmap (they just get a single rectangle when they try to walk the list).
This can make many operations quite a bit faster.
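For comparison, here is the rectangle walk that off-screen rendering would
collapse to a single rectangle.  This is a minimal sketch: the wind_get()
modes named in the comments are the real AES calls, but GRECT, rc_intersect()
and the redraw_walk() helper are written out in plain C here (redraw_walk()
is an invented name) so the clipping step is visible without GEM headers:

```c
#include <stdbool.h>

typedef struct { int x, y, w, h; } GRECT;

/* Clip r2 to r1; returns false if they don't overlap.  Same job as the
 * classic GEM rc_intersect() utility. */
bool rc_intersect(const GRECT *r1, GRECT *r2)
{
    int x2 = (r2->x + r2->w < r1->x + r1->w) ? r2->x + r2->w : r1->x + r1->w;
    int y2 = (r2->y + r2->h < r1->y + r1->h) ? r2->y + r2->h : r1->y + r1->h;
    if (r2->x < r1->x) r2->x = r1->x;
    if (r2->y < r1->y) r2->y = r1->y;
    r2->w = x2 - r2->x;
    r2->h = y2 - r2->y;
    return r2->w > 0 && r2->h > 0;
}

/* The redraw walk.  In a real GEM app each visible rectangle comes from
 * wind_get(handle, WF_FIRSTXYWH, ...) and then WF_NEXTXYWH until w == 0;
 * here the list is a plain array so the loop itself can be shown.  Each
 * surviving clip would be handed to vs_clip() before drawing.  Returns
 * how many clipped pieces actually needed drawing. */
int redraw_walk(GRECT dirty, const GRECT *list, int n, GRECT *out)
{
    int drawn = 0;
    for (int i = 0; i < n; i++) {
        GRECT clip = dirty;
        if (rc_intersect(&list[i], &clip))
            out[drawn++] = clip;
    }
    return drawn;
}
```

With full off-screen rendering, redraw_walk() would always see a one-entry
list covering the whole work area - that's the single rectangle mentioned
above.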

> Another bad case is when the application in question is drawing in its
> off-screen bitmap but nothing (or only parts of it) gets displayed at
> all because it's covered by other windows.

Yes, but when that application does get displayed, it pops up instantly.
You do have a good point, though.  It might be possible to do this half-way:
do off-screen rendering only for the parts that show, keep a regular
rectangle list, and still send redraw events when a portion of a window is
uncovered.  As long as off-screen information exists for that window,
though, go ahead and use it, so dragging windows around would still be
mostly just blits from off-screen storage.

> Having VDI coordinates that are relative to some arbitrary point is
> indeed something that could be useful. I've always planned to add that
> to fVDI.

That would be great - especially if you could tell the AES to make that
point the window work area origin.  At that point, you wouldn't care when
your window is moved, or about much of anything else.
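Reduced to its core, that's just one translation applied to every coordinate
on its way to the driver.  A sketch - VdiOrigin and to_screen() are invented
names, not existing fVDI calls:

```c
/* A relative-origin VDI, reduced to its core.  VdiOrigin and to_screen()
 * are invented names; in the scheme above the origin would be set once
 * from the AES work-area position (wind_get with WF_WORKXYWH), and the
 * app would never again care where its window sits on screen. */
typedef struct { int x, y; } POINT;
typedef struct { int ox, oy; } VdiOrigin;

void to_screen(const VdiOrigin *o, POINT *p)
{
    p->x += o->ox;   /* window-relative -> absolute screen x */
    p->y += o->oy;   /* window-relative -> absolute screen y */
}
```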

> I believe most other window systems have the clipping handling bundled
> with the drawing so that an application doesn't have to deal with that
> by walking a rectangle list like with the VDI/AES. It obviously also
> makes it impossible to draw outside your own windows.

It's not, from what I've seen.  Perhaps some libraries do this, but it
wouldn't be as efficient.  I think they use a double clipping system: kind
of like clipping the clipping rectangle itself to the window.  With
VDI->AES integration it would be quite easy to force this.  Every rectangle
returned by the AES (WF_WORKXYWH, WF_FIRSTXYWH, and WF_NEXTXYWH) could set
some sort of master clip region that all other drawing operations were
confined to.
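The master-clip idea could be sketched like this.  effective_clip() is a
made-up name - nothing like it exists in the VDI today - and the point is
only that the AES-supplied rectangle bounds whatever clip the application
asks for:

```c
#include <stdbool.h>

typedef struct { int x, y, w, h; } GRECT;

/* "Clipping the clipping rectangle": the app asks for clip 'app', but the
 * system first clamps it to the master clip 'master' that the AES set
 * from the last WF_WORKXYWH / WF_FIRSTXYWH / WF_NEXTXYWH rectangle it
 * handed out.  Drawing outside the window then becomes impossible.
 * (effective_clip is a hypothetical helper, not an existing VDI call.) */
bool effective_clip(const GRECT *master, const GRECT *app, GRECT *out)
{
    int x1 = app->x > master->x ? app->x : master->x;
    int y1 = app->y > master->y ? app->y : master->y;
    int x2 = app->x + app->w < master->x + master->w ? app->x + app->w
                                                     : master->x + master->w;
    int y2 = app->y + app->h < master->y + master->h ? app->y + app->h
                                                     : master->y + master->h;
    out->x = x1; out->y = y1;
    out->w = x2 - x1; out->h = y2 - y1;
    return out->w > 0 && out->h > 0;   /* false: nothing visible to draw */
}
```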

> There is already that screen lock thing in the AES that could indeed be
> used for this kind of thing.
> But, as I've mentioned before, blit updates isn't such a good idea, IMO.

I'd rather use something that wasn't the whole screen, so you knew what
area was touched, and not just that the screen got locked for a minute to
update a blinking cursor :P

> Yes, since the drawing part itself could be at the other end of the network.
> But this can work equally well without blit updating.

True - I prefer vector whenever possible, but I think the idea was to add
the cool OS features that would eventually lead to making windows into
OpenGL textures, to get the speed and capabilities of the Mac, Longhorn,
and the new X that's being worked on.

> Anyway, doing VDI drawing via a network shouldn't be bad at all. Just
> consider what the graphics cards on the Falcon are capable of with a few
> (4-6 perhaps) Mbyte/s of bandwidth.

There was an old protocol for BBSs that basically did exactly this.  The
problem was that it was designed to be easy for users to generate with a
text editor, so it was all in ASCII with comma-separated values.  If it had
all been in binary, possibly with some compression on it, it would have
been at least four times faster, and it worked pretty well even at the 1200
or 2400 bps I had back then.  It used the virtual 32Kx32K resolution so it
would work on all screens.  I think it was FzDiz or something like that ..
can't remember.
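To put a rough number on the ASCII-vs-binary guess, here are two made-up
wire encodings of the same polyline command.  Neither is the actual FzDiz
format (which I don't have at hand); the opcode, field layout, and both
encoder names are invented for illustration:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical wire formats for "polyline with n points" in the virtual
 * 32Kx32K coordinate space:
 *   ASCII:  "PL,n,x1,y1,...\n"  (editor-friendly, like the BBS protocol)
 *   binary: opcode byte + count byte + big-endian 16-bit coordinates.
 * Callers must supply buffers large enough for the encoded command. */

int encode_ascii(char *buf, size_t size, const int16_t *pts, int n)
{
    int len = snprintf(buf, size, "PL,%d", n);
    for (int i = 0; i < 2 * n; i++)
        len += snprintf(buf + len, size - len, ",%d", pts[i]);
    buf[len++] = '\n';
    return len;                       /* bytes on the wire */
}

int encode_binary(uint8_t *buf, const int16_t *pts, int n)
{
    buf[0] = 0x01;                    /* invented "polyline" opcode */
    buf[1] = (uint8_t)n;
    for (int i = 0; i < 2 * n; i++) {
        buf[2 + 2 * i]     = (uint8_t)((uint16_t)pts[i] >> 8);
        buf[2 + 2 * i + 1] = (uint8_t)(pts[i] & 0xff);
    }
    return 2 + 4 * n;                 /* bytes on the wire */
}
```

With coordinates up near the top of the 32K range, each point costs up to
12 ASCII bytes against a flat 4 in binary - which is where the "at least
four times faster" guess comes from, even before compression.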

> Well, there is already an OpenGL based VDI (fVDI on ARAnyM with the new
> native driver). It might be interesting to see if that could be
> convinced to run over a network.

It should.  VDI->OpenGL->GLX->Network

> No, the problem is hardware accelerated OpenGL in the first place.
> In the only place where we have it, ARAnyM, it can be used, both
> directly and by the VDI.

No - it doesn't have to be hardware accelerated.  You can do hardware
accelerated OpenGL VDI remotely, because you can make the VDI remote.
However, there is no API for OpenGL on the ST, so you don't know what API
needs to be sent over the network.  It's a lack of standards, not a lack
of hardware.

-- Evan