
Re: [MiNT] EmuTOS for ColdFire



Hello, Petr.

Rest assured, I understand your fears, and I can't say you are
entirely wrong.
However...

> I don't agree. It doesn't make any sense for me to rewrite each given
> source twice - first to port from m68k to CF and then from CF to
> anything else. The CF step is unnecessary, IMHO.

Nobody is going to rewrite each source twice. Because of the
similarities between the 680x0 and ColdFire instruction sets, the
changes needed in the assembler files are very small and easy. This is
the case for both the MiNTLib and EmuTOS (except the VDI). The
required changes are only things like replacing addq.w with addq.l
inside #ifdef blocks.
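
For instance, ColdFire ISA_A only keeps the longword form of ADDQ, so
a typical patch looks like this (a minimal sketch, not an actual
EmuTOS extract; __mcoldfire__ is the macro GCC predefines when
targeting ColdFire):

  #ifdef __mcoldfire__
          addq.l  #2,a0           | ColdFire has no word-sized ADDQ
  #else
          addq.w  #2,a0           | fine on the 68000
  #endif

On an address register both sizes update the whole 32-bit register
anyway, so the two forms behave identically; that is what makes such
replacements safe.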

> There is no m68k
> cleaning that could be reused for a completely different CPU, I believe
> (unless this is done the way Martin worked on EmuTOS - he was rewriting
> every assembler source he could to plain C to achieve true portability -
> but we cannot afford that on 8 MHz machines - the assembler is used
> there for a good reason).

Only one VDI source in EmuTOS is too complicated to be modified for
ColdFire. That code will have to be rewritten in C for portability,
and such a rewrite will be reusable as-is on totally different
processors. The current assembler code optimized for the 68000 can be
kept for 68000 targets, so there will be no performance loss there.
Moreover, nowadays GCC produces very efficient code. I'm very curious
to see the speed difference between the C VDI and the assembler one on
a 68000.
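
To make the idea concrete, here is how the two implementations could
coexist (a hypothetical sketch; the function name and the PORTABLE_VDI
switch are made up, none of this is actual EmuTOS code):

  /* Portable C fallback, used on ColdFire and any future CPU. */
  #if defined(__mcoldfire__) || defined(PORTABLE_VDI)
  void copy_span(unsigned short *dst, const unsigned short *src, int n)
  {
      while (n-- > 0)
          *dst++ = *src++;
  }
  #endif
  /* On plain 68000 builds, the hand-optimized assembler version of
     copy_span would be assembled and linked instead. */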

> If I still were an active Atari developer I would
> soon become bored by generating several different binaries (for m68k,
> for CF, and for anything else that might appear) especially when I
> wouldn't have a chance to test them out properly (I don't plan on buying
> CF).

That's why I work on toolchains. I want clean sources that can be
built easily and automatically for any architecture (Debian is a good
example). There is a lot of stuff already available in the GNU tools
(especially Automake); we just have to use it.
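
Concretely, with an Autoconf/Automake package, the per-target
difference is reduced to a configure invocation, so generating the
binaries can be fully scripted. A sketch, assuming a GCC recent enough
to know the ColdFire V4e cores:

  ./configure --host=m68k-atari-mint && make
  ./configure --host=m68k-atari-mint CFLAGS=-mcpu=5475 && make

The first produces a classic 680x0 binary, the second a native
ColdFire one; nothing else in a clean source tree has to change.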
Testing binaries is another problem.

> Please note that I am not in any way against ACP or any other hobby/fun
> project someone might come up with. I just can't agree with the logic
> that is presented here in some mails. I think things work differently in
> real world (and I am talking from my own experience when I release the
> multi-platform software I happen to develop).

Hardware lovers will build a new Atari-like computer based on
ColdFire. That will happen because they want to see such a thing, and
it is their hobby. Their primary goal is to have new hardware features
and the ability to run most TT/Hades binaries. However, as you pointed
out, and despite the efforts of the developers, I'm sure there will be
some incompatibilities and speed issues.

I don't like hardware myself; I'm very happy with my brand-new PC, and
I use ARAnyM and Steem occasionally to test the output of the
compilers. I use my STe only for reading old floppies, until someone
provides a universal USB floppy reader. But the ACP project is cool,
and that's why I'm going to buy a board. Once I have the hardware, I
will not be satisfied with half-speed 68000 emulation. One of my
hobbies is building toolchains that generate fully optimized binaries
for any CPU, so I want a compiler (and libraries, an OS...) to run
native ColdFire software. I will do it because it is my own hobby. If
it is seen as a good thing, ColdFire support will be added to the
official sources. If it is not, I will publish my own patches and
binaries. No problem.

--
Vincent Rivière