Re: [MiNT] timezone change
On Mon, Mar 27, 2000 at 07:21:40PM -0800, josephus wrote:
> TOS and MAGIC wants RTC = LOCALTIME. MINT internally knows
And Minix and Ext2 want RTC = UTC. So do all the standard library functions
(which are also used under plain TOS or Magic).
> UTC (thats nice) but it must convert for TOS and not convert
> for MINIX and FS2. This is a conceptual problem.
> Since I use MSH it has a CST setting in its shell.
>
> It is because I am in the central time zone in North America,
> that I can see that this is a problem in representation.
>
> Mint can keep lunar time for all I care, as long as the RTC and
> my TOS file system get local time. I dont care. MWC has a
The RTC doesn't care what time it sees. The user cares. And TOS gets
local time. That matters.
> GMC time call and a localtime ( ANSI vers) So I get time from
> MWC as a 32 bit (UNIX like value). Then I have 8 or 9
> conversions
And these conversions are expensive if they are done accurately.
Is it really that hard to understand? A Unix UTC timestamp is simply an
integer (the number of seconds since Jan 1, 1970, 0:00 UTC). GEMDOS
timestamps are broken-down (i.e. hour, minute, second, day, month and year
are each stored in a separate field). Arithmetic on Unix timestamps is
therefore very simple: to add one second you simply increment that integer.
Now add a second to a GEMDOS timestamp:
seconds++;
if (seconds > 59 && not_by_accident_a_leap_second) {
    seconds = 0;
    minutes++;
    if (minutes > 59) {
        minutes = 0;
        hours++;
        if (hours > 23) {
            hours = 0;
            day++;
            if (month == february) {
                if (leap_year) {
                    if (day > 29)
                        ...
                } else {    /* not leap_year */
                    if (day > 28)
                        ...
                }
            } else if (month == january || month == march || ...) {
                if (day > 31)
                    ...
            } else if (month == april || month == june || ...) {
                if (day > 30)
                    ...
            }
        }
    }
}
This is expensive, isn't it?
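For contrast, here is a small sketch of my own (not code from MiNT; it only
assumes an ANSI/POSIX libc with time(), localtime() and mktime(), and the
output format is just an illustration). Adding a second to a Unix timestamp
is one increment; doing it accurately on a broken-down date means converting
to the linear count and back, which is exactly the carry cascade above:

#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t t;
    struct tm broken;

    /* Unix style: one integer, one increment. */
    t = time(NULL);
    t++;

    /* Broken-down style: convert to the linear count, add the second
     * there and convert back.  mktime()/localtime() hide the carry
     * cascade shown above. */
    broken = *localtime(&t);
    broken.tm_sec += 1;
    t = mktime(&broken);        /* normalizes sec/min/hour/day/... */
    broken = *localtime(&t);

    printf("%04d-%02d-%02d %02d:%02d:%02d\n",
           broken.tm_year + 1900, broken.tm_mon + 1, broken.tm_mday,
           broken.tm_hour, broken.tm_min, broken.tm_sec);
    return 0;
}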
And why UTC? If you want to exchange timestamps between computer systems
you have to agree on a common base and UTC looks good for that purpose.
Besides, UTC never makes leaps like local time typically does (when
changing from summer to winter time and vice versa).
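You can watch that leap with a little test program (again my sketch, not
anything from the kernel; it assumes a libc that accepts a POSIX TZ string
and provides setenv() and tzset()). Two wall-clock times on the day of the
switch to summer time that look two hours apart are really only one hour
apart, because 02:xx simply never happens on that day:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    struct tm before = { 0 };
    struct tm after;
    time_t t1, t2;

    /* European DST rule as a POSIX TZ string (an assumption; a zoneinfo
     * name would do as well if installed). */
    setenv("TZ", "CET-1CEST,M3.5.0,M10.5.0/3", 1);
    tzset();

    /* 2000-03-26 01:30 local time, just before the clocks jump ... */
    before.tm_year = 100;   /* 2000 */
    before.tm_mon  = 2;     /* March */
    before.tm_mday = 26;
    before.tm_hour = 1;
    before.tm_min  = 30;
    before.tm_isdst = -1;   /* let mktime() work out DST itself */

    /* ... and 03:30, which looks two hours later on the wall. */
    after = before;
    after.tm_hour = 3;

    t1 = mktime(&before);
    t2 = mktime(&after);

    /* Prints 3600, not 7200: the UTC seconds tick on evenly while the
     * local wall clock leaps from 02:00 straight to 03:00. */
    printf("difference in seconds: %ld\n", (long)(t2 - t1));
    return 0;
}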
You think this is not relevant and that it isn't done often enough to worry
about? A lot of I/O operations have to perform this task, and even worse,
three times per file, because each file bears three different timestamps
(creation time, modification time and last access time). In that perfect
system of yours, reading an ext2 timestamp would result in:
Ext2:
    Convert the Unix timestamp to the broken-down GEMDOS representation
    and pass it to the kernel.
Kernel:
    Convert it back to Unix so that we can internally do arithmetic on it.
    Convert it back to the broken-down GEMDOS representation to pass it
    to the application.
Application:
    Convert it back to Unix for calculations.
    Convert it to another broken-down representation that is suitable
    for human readers.
With a recent filesystem, a recent kernel and a recent libc, the filesystem
simply passes the timestamp to the kernel, the kernel passes it on to the
application, and the application converts it once into a form that the user
understands. This is efficient and it works. Your system is necessarily
inaccurate and highly inefficient.
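As a sketch of that path (my example, not kernel code; it assumes a POSIX
stat() plus the usual strftime()/localtime()): the filesystem and the kernel
hand the Unix timestamp through untouched, and only the application converts
it, once, for display:

#include <stdio.h>
#include <sys/stat.h>
#include <time.h>

int main(int argc, char *argv[])
{
    struct stat st;
    char buf[64];

    if (argc < 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    if (stat(argv[1], &st) != 0) {
        perror(argv[1]);
        return 1;
    }

    /* st_mtime arrives as a plain Unix timestamp; one conversion at the
     * very end of the chain turns it into local time for the user. */
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", localtime(&st.st_mtime));
    printf("%s last modified %s (local time)\n", argv[1], buf);
    return 0;
}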
Besides, you don't want to get ruled by those dudes that live in Greenwich
near London (btw, Portugal, Eire and West Africa are mostly in the same
timezone as the UK). But there are many countries in this world that
don't want to get ruled by the Christian calendar. There are Chinese
calendars, Jewish calendars, Arabic calendars, etc. One thing is sure:
your local wall clock time displayed according to the Christian calendar
is definitely not a good base for exchanging dates between systems that
differ so much in their notion of a date. But Unix can manage these
differences alright.
Ciao
Guido
--
http://stud.uni-sb.de/~gufl0000/
Send your spam to president@whitehouse.gov and your replies to
mailto:guido at freemint dot de