Re: [MiNT] WCOWORK implementation : conclusions.
>> that doesn't make transparency and dynamic resizing a priority
>> (MacOS X, and other "next generation" graphics systems).
> Macs aren't blitting things around. That off-screen buffer is an OpenGL
> texture. That's why everything is so damned fast. OpenGL controls the
> scaling, transparency, and everything else.
And what do you think OpenGL does to get that texture displayed
somewhere? It may be called texture mapping instead of blitting,
generally uses triangles and 3D coordinates, and has extra features, but
it amounts to the same thing.
Things are fast because everything is kept on the graphics card. I've
heard that it can be a bit painful when you don't have enough on-card RAM.
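For what it's worth, the "texture mapping instead of blitting" above really is just two textured triangles per window. A minimal sketch in classic fixed-function OpenGL; window_to_texture() and draw_window() are hypothetical helper names, and a GL context plus a pixel-based orthographic projection are assumed to be set up already:

#include <GL/gl.h>

/* Upload a window's off-screen RGBA buffer as a texture. */
GLuint window_to_texture(const void *pixels, int w, int h)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    return tex;
}

/* "Blit" the window: one quad, with free scaling and transparency. */
void draw_window(GLuint tex, float x, float y, float w, float h,
                 float alpha)
{
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glBindTexture(GL_TEXTURE_2D, tex);
    glColor4f(1.0f, 1.0f, 1.0f, alpha);      /* per-window transparency */
    glBegin(GL_QUADS);                       /* two triangles, really */
    glTexCoord2f(0, 0); glVertex2f(x,     y);
    glTexCoord2f(1, 0); glVertex2f(x + w, y);
    glTexCoord2f(1, 1); glVertex2f(x + w, y + h);
    glTexCoord2f(0, 1); glVertex2f(x,     y + h);
    glEnd();
}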
> The question isn't if we should do it with the hardware we have. The
> question is if it's worth making it possible for the future. You have
> to break the chicken-n-egg game.
While I agree with that, there's little point in spending a lot of time
doing something that is only useful with hardware that doesn't exist and
isn't planned. There are more down-to-earth things to worry about with
the little developer time we have available.
For a long time I was hoping for a PCI or AGP board for the CT60, but
that no longer seems to be planned by anyone (and, of course, it would
only affect 50-100 people, anyway).
Very few people in the Atari world have ever used hardware-accelerated
graphics in the form it's available for the various Atari machines,
anyway, which is one reason why I have problems with some of the
statements in this discussion.
If you don't know what hardware acceleration can and cannot do, you end
up doing things like implementing your own drawing routines and blitting
to the screen. (I just read that the latest versions of Porthos do that
for polygon filling, rather than using the VDI, which will likely end up
tens of times slower than using the VDI on, for example, Eclipse/RageII
or ARAnyM/OpenGL.)
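As an illustration, letting the VDI do the work is only a handful of calls, and an accelerated driver (fVDI on Eclipse/RageII, ARAnyM's OpenGL-backed VDI) then gets to pick the fast path. A minimal sketch, assuming gemlib-style bindings and an already-open (virtual) workstation handle:

#include <gem.h>

/* Fill a triangle through the VDI instead of rasterising by hand.
   "handle" is an open (virtual) workstation handle. */
void fill_triangle(int16_t handle)
{
    int16_t pxy[6] = { 10, 10,  200, 40,  60, 180 };  /* 3 x,y pairs */

    vsf_interior(handle, 1);     /* FIS_SOLID */
    vsf_color(handle, 1);        /* palette index 1 */
    v_fillarea(handle, 3, pxy);  /* the driver picks the fastest path */
}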
The 100k characters per second that AB040/Eclipse/RageII can draw on the
screen (300 k/s for ARAnyM/OpenGL on my four-year-old PC), for example
(and that's without doing anything fancy like caching character bitmaps
in graphics card RAM), make it really pointless to store such things in
a back buffer. Not to mention simple fills, which even something as old
as the Eclipse/RageII can do at 50 Mpix/s, IIRC (fill an entire
1600x1200 screen 25 times per second or so), and more modern graphics
cards should be able to do at tens or even hundreds of times that speed.
Or how about ARAnyM/OpenGL doing random (mainly very long) polylines
(VDIBench) at 1 million/s (AB040/Eclipse/RageII does about 60 k/s in the
same test, and the standard VDI managed 4 k/s in monochrome on my AB040
Falcon).
Blitting to the screen, on the other hand, is very slow on all Falcon
graphics cards, at least, since they are limited by the Falcon bus. You
couldn't even update that 1600x1200 screen in 32 bit colour once per
second...
> However, when dragging a window or dialog across the screen, you are
> forcing everything beneath the window you move to constantly update.
> With off-screen
Not something that you spend much time doing, though, I hope? ;-)
Anyway, I never notice any drag when doing that on my Falcon or under
ARAnyM/OpenGL. We're talking about very small incremental updates here,
and unless you have something very complicated beneath your window,
updating should be very quick indeed.
You should optimize for the normal case, which is that the windows are
static, unless you have more resources than you know what to do with.
> a better user experience. Also, applications don't have to walk a
> rectangle list since they write to their whole off-screen bitmap (they
> just get a single rectangle when the app tries to walk the list). This
> can make many operations quite a bit faster.
_If_ you have the graphics card RAM, which current Atari graphics cards
don't.
But the single rectangle drawing might be preferable even if you draw
directly to the screen. It needs cooperation between the AES and the
VDI, though.
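For reference, the rectangle-list walk mentioned above is the classic GEM redraw loop. A minimal sketch, assuming gemlib-style bindings; draw_contents() is a hypothetical application callback, and intersecting with the dirty area from the WM_REDRAW message is left out for brevity:

#include <gem.h>

void redraw_window(int16_t vdi_handle, int16_t win,
                   void (*draw_contents)(int16_t handle))
{
    int16_t x, y, w, h;
    int16_t clip[4];

    wind_update(BEG_UPDATE);                  /* lock the screen */
    wind_get(win, WF_FIRSTXYWH, &x, &y, &w, &h);
    while (w > 0 && h > 0) {
        clip[0] = x;
        clip[1] = y;
        clip[2] = x + w - 1;
        clip[3] = y + h - 1;
        vs_clip(vdi_handle, 1, clip);         /* clip to this rectangle */
        draw_contents(vdi_handle);            /* draw as if unobscured */
        wind_get(win, WF_NEXTXYWH, &x, &y, &w, &h);
    }
    vs_clip(vdi_handle, 0, clip);             /* clipping off again */
    wind_update(END_UPDATE);
}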
>> Another bad case is when the application in question is drawing in
>> its off-screen bitmap but nothing (or only parts of it) gets displayed
>> at all because it's covered by other windows.
> Yes, but when that application does get displayed, it pops up
> instantly. You do
Not really. With current Falcon graphics cards it would be quite slow
(see above regarding bandwidth), and even on a good PCI bus it would
probably take about 0.1 seconds to blit an entire new 1600x1200x32 bit
screen from RAM (1600x1200x4 bytes is roughly 7.7 MB, and a 33 MHz PCI
bus rarely sustains much more than 80-100 MB/s). It's not at all
unlikely that you could rebuild the image in the same time.
(This changes, of course, if you have enough RAM on your video card to
keep the background buffers there.)
>> Having VDI coordinates that are relative to some arbitrary point is
>> indeed something that could be useful. I've always planned to add that
>> to fVDI.
> That would be great - especially if you could tell the AES to make
> that point the window work area origin. At that point, you don't care
> when your window is moved or much of anything else.
The AES should then be responsible for giving applications virtual
workstations for their windows, where it has already set up the origin
and a maximum size (to keep the application inside the window no matter
what it tries to do).
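Nothing below exists in any VDI or AES today; it is purely a hypothetical sketch of that proposal. v_origin() and v_clipsize() are invented names for the two missing pieces:

#include <gem.h>

/* Hypothetical: the AES opens one virtual workstation per window,
   anchors (0,0) at the work area and caps output to the window size. */
int16_t open_window_workstation(int16_t phys_handle,
                                int16_t wx, int16_t wy,  /* work area */
                                int16_t ww, int16_t wh)
{
    int16_t work_in[11], work_out[57];
    int16_t handle = phys_handle;
    int i;

    for (i = 0; i < 10; i++)
        work_in[i] = 1;
    work_in[10] = 2;                 /* raster coordinates */
    v_opnvwk(work_in, &handle, work_out);

    v_origin(handle, wx, wy);        /* invented: coordinate origin */
    v_clipsize(handle, ww, wh);      /* invented: hard maximum size */
    return handle;                   /* handed to the application */
}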
>> I believe most other window systems have the clipping handling
>> bundled with the drawing, so that an application doesn't have to deal
>> with that by walking a rectangle list like with the VDI/AES. It
>> obviously also makes it impossible to draw outside your own windows.
> It's not, from what I've seen. Perhaps some libraries do this, but it
> wouldn't be as efficient. I think they use a double clipping system.
> Kinda like clipping the clipping rectangle itself to the window, and
> with VDI->AES integration it
I'm pretty sure both X and Windows will keep you from overwriting
overlapping windows no matter what you do.
It's not enough to clip to the window, you need to clip to all other
rectangles/windows that overlap your area of interest.
> I'd rather use something that wasn't a whole screen, so you knew what
> area was touched, and not just that the screen got locked for a minute
> to update a blinking cursor :P
All this is solved automatically if drawing operations are forcefully
clipped to the visible region of their working area.
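The operation at the heart of such forced clipping is plain rectangle intersection, applied once per rectangle of the visible region. For illustration, a small self-contained helper:

typedef struct { int x1, y1, x2, y2; } Rect;

/* Returns 1 and fills *out with the overlap of a and b, 0 if none.
   The effective clip is the drawing area intersected with each
   rectangle of the window's visible region, not just the window
   border. */
int rect_intersect(const Rect *a, const Rect *b, Rect *out)
{
    out->x1 = a->x1 > b->x1 ? a->x1 : b->x1;
    out->y1 = a->y1 > b->y1 ? a->y1 : b->y1;
    out->x2 = a->x2 < b->x2 ? a->x2 : b->x2;
    out->y2 = a->y2 < b->y2 ? a->y2 : b->y2;
    return out->x1 <= out->x2 && out->y1 <= out->y2;
}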
>> No, the problem is hardware accelerated OpenGL in the first place.
>> In the only place where we have it, ARAnyM, it can be used, both
>> directly and by the VDI.
> No - it doesn't have to be hardware accelerated. You can do hardware
You were talking about an API for hardware accelerated OpenGL.
And it's pretty obvious to me that the API for OpenGL should be the
standard OpenGL API.
> accelerated OpenGL VDI remotely because you can make the VDI remote.
> However, there is no API for OpenGL on the ST, so you don't know what
> API needs to be
There is indeed OpenGL for the Atari machines.
> sent over the network. It's a lack of standards, not a lack of hardware.
Obviously, it should use some standard network protocol for OpenGL, so
that whatever is at the other end can actually do something with it.
Didn't you mention GLX?
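For what it's worth, GLX is exactly such a standard: GL calls made on an indirect context are encoded into the X11 stream and executed by whatever server $DISPLAY points at. A minimal sketch (build with -lGL -lX11; most error handling omitted):

#include <stdio.h>
#include <X11/Xlib.h>
#include <GL/glx.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);   /* honours a remote $DISPLAY */
    if (!dpy)
        return 1;

    int attrs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attrs);
    if (!vi)
        return 1;

    /* Passing False requests indirect rendering: every GL call is
       encoded in the GLX wire protocol and run by the display server. */
    GLXContext ctx = glXCreateContext(dpy, vi, NULL, False);

    printf("rendering is %s\n",
           glXIsDirect(dpy, ctx) ? "direct" : "indirect (over the wire)");

    glXDestroyContext(dpy, ctx);
    XCloseDisplay(dpy);
    return 0;
}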
--
| Why are these | johan@klockars.net
| .signatures |
| so hard to do | http://www.klockars.net
| well? | (fVDI, MGIFv5, QLem)