First of all, time is a relative quantity: we can only measure a distance in time from some reference event.
Secondly, time units do not share a fixed reference clock. A `second` is clocked by the Universe, a `day` is clocked by midnight, and a `year` is clocked by New Year. Generally, by `time` we mean "distance in time since the last midnight, in one-second resolution", which involves a single clock source. By `date` we generally mean "distance in time since some particular arbitrary event, in one-day resolution", which involves two clock sources (midnight and New Year). This gets rather awkward, but both sources are controlled by the "particular arbitrary event", so we can handle that. Combining both definitions into "distance in time since an arbitrary event, in one-second resolution" gives us `datetime`, controlled by three clock sources. Things are bound to get awkward.
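To make the three-clock decomposition concrete, here is a minimal Python sketch. The wall-clock reading is invented for illustration, and the Unix epoch stands in for the "arbitrary event":

```python
from datetime import datetime, date

now = datetime(2051, 9, 12, 16, 23, 0)  # an invented wall-clock reading

# "time": distance since the last midnight, in one-second resolution
midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
seconds_since_midnight = int((now - midnight).total_seconds())

# "date": distance since an arbitrary reference event, in one-day resolution
epoch = date(1970, 1, 1)  # the Unix epoch plays the "arbitrary event"
days_since_epoch = (now.date() - epoch).days

# "datetime": both measurements glued together into "distance since the
# arbitrary event, in one-second resolution"
seconds_since_epoch = days_since_epoch * 86400 + seconds_since_midnight

print(seconds_since_midnight)  # 16*3600 + 23*60 = 58980
```

Note that the glued-together number silently assumes every day is exactly 86400 Universe-clocked seconds, which is precisely the point where the three clock sources disagree.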
Lastly, as others have already pointed out, all of this makes definitions of future events in `datetime` pretty much useless, because there is no way to predict how many ticks the clock sources will generate between now and then. The only way out is to stick with a single source (e.g. n Universe-clocked seconds from NOW; isn't TAI exactly that?) and deal with the fluctuating clock sources when the future actually arrives.
Neither solution solves the inherent problems of the multi-clocked `datetime` definition; it only makes some problems easier by shifting clock tracking to the frontend: `2051-09-12 16:23:00 +0300` does not need to keep track of clock sources, while `2051-09-12 16:23:00 Europe/Vilnius` does.
The whole "Bonus" section is about the arbitrariness of "last midnight": how long ago did it actually happen for this moving target? Should I count the offset from the last midnight I actually saw, or from when midnight was supposed to happen here? Automatic time zone settings attempt the latter. And if you want the former semantics, well, just don't use automagic and add a whole new clock source to this mess: "distance in time since the last time I personally thought it was midnight".