I remember staying up late (UK) at the Billenium (2:46 AM Sunday Sep 9th 2001), thinking that was bound to be the most newsworthy event of the week, watching the seconds tick past on a "while(sleep 1); do date" loop, with slashdot in one window, IRC in another, all running on an enlightenment window manager.
500M seconds later I was in Washington DC in a hotel, watching it tick up on an rxvt on my laptop, with HN in a window.
Who knows where I'll be in 2033, hopefully not in Europe as I'll be too old for night shifts, but wherever I am, I suspect it will have a bash prompt.
A former employer's telephony software stored the Unix timestamp in a 9-character string field, so there was a bug when it rolled over to 1000000000 in 2001. Pretty rare, I think. Unix timestamps back then were usually 32-bit ints, so good until 2038. And hopefully they'll be 64 bits everywhere that matters well before 2038.
I was involved in building a system using 32-bit ints as timestamps ten years ago. They are still sold, and since it's industrial equipment running on 16-bit microcontrollers, I have every reason to believe most of them will still be around in 2038.
I don't think I was at the only company doing this. Few people seem to care about issues that will happen after their retirement. Expect lots of industrial stuff to work just a bit worse around 2038 (and 2036; PIC microcontrollers fail a bit sooner).
I worked with the 16-bit PIC24F. I don't have the source handy to check, but according to a forum entry, "The provided gmtime() actually fails earlier than 2038. The year wraps around when the time_t input goes beyond 0x7C55817F or Thu Feb 7 06:28:15 2036." [1]
Making time_t unsigned would break the ability to refer to times before 1970.
Existing code that deals with 32-bit timestamps almost universally assumes that (time_t)0 is 1970-01-01 00:00:00 UTC. Updating that code to guess a different epoch would be more work (with more inevitable bugs) than keeping a fixed epoch and using 64 bits.
It's already 64 bits (and signed) on a lot of systems.
Not really. Addition and subtraction don't care about unsigned vs signed status, because overflow is identical in both cases.
Think of an odometer for a car with 999999 as its max number. 999,999 is equivalent to -1. So 500 + 999999 == 499 on the odometer.
A 32-bit register is simply a binary odometer, so the above concept happens with bits.
In "signed int", we print the number "999999" as "-1". With "unsigned int", we print the number "999999" as "999999". They are one-and-the-same. The only difference is your print() function.
-----
Multiplication and division do change, however. Your compiler tracks signed/unsigned status to choose between div and idiv, or mul and imul, instructions.
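A minimal C sketch of this (my own toy example, not anyone's production code): after an addition the stored bit pattern is identical whether the 32-bit value is declared signed or unsigned; printing and division are where the interpretation diverges.

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint32_t u = 4294967295u;   /* the "999999" of a 32-bit odometer */
        int32_t  s = (int32_t)u;    /* same bit pattern; prints as -1 on the
                                       usual two's-complement machines */

        /* Addition: the resulting bit pattern is identical either way. */
        printf("%u %d\n", (unsigned)(u + 500u), (int)(s + 500));   /* 499 499 */

        /* Division differs: the compiler emits div for the unsigned operand
           and idiv for the signed one, and the results genuinely diverge. */
        printf("%u %d\n", (unsigned)(u / 2u), (int)(s / 2));       /* 2147483647 0 */
        return 0;
    }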
--------
With that being said: I think 64 bits is fine. Most computers these days are 64-bit, and those 4 extra bytes aren't very costly. Standard compression algorithms, like gzip, do a good job of finding those redundant bits and shrinking things down.
Many if not most of the embedded/long-term systems are implemented in C. If a variable is declared signed, overflowing cases may be, and often are, "optimized" away.
IIUC, GP's foo() would likely be optimized to { return true; }, and so would similar timestamp overflow checks.
The post I responded to was the opposite: about turning "signed" code (which you declare is undefined) into "unsigned" code (which you declare is fully defined).
Given this thread of subargument, taking the difference between 32-bit unsigned numbers is MORE DEFINED than using signed integers.
-------
IE: If your code was correct with "int timestamp", it will be more correct with "unsigned int timestamp".
In any case, "int" or "unsigned int" based timestamp manipulation wouldn't be like the code you suggested, but instead "int difference = x - y".
In the signed integer case, "difference" is (conceptually) negative, while in the unsigned integer case, "difference" is guaranteed to wrap around with defined behaviour. Both cases are conceptually correct with regard to the difference of timestamps.
But because the behaviour is undefined, it doesn't matter how the computer would handle it, because the compiler is free to rework it into any arbitrary sequence of instructions, including removing it altogether.
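For illustration only (a sketch, not anyone's actual code): subtracting two uint32_t timestamps that straddle a wraparound still yields the right elapsed time, because unsigned wraparound is fully defined, whereas the same subtraction on overflowing signed ints would be undefined behaviour.

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* Two 32-bit timestamps 100 seconds apart, straddling the point where
           an unsigned counter wraps from 0xFFFFFFFF back to 0. */
        uint32_t before = 0xFFFFFFCEu;   /* 50 seconds before the wrap */
        uint32_t after  = 0x00000032u;   /* 50 seconds after the wrap  */

        /* Unsigned subtraction wraps with fully defined behaviour, so the
           elapsed time still comes out right. Overflowing the equivalent
           signed subtraction would be undefined behaviour instead. */
        uint32_t elapsed = after - before;
        printf("%u\n", (unsigned)elapsed);   /* prints 100 */
        return 0;
    }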
I posted this back when we hit 1400000000 in 2014 [1]:
---
OpenLDAP got hit by the billennium bug. I remember because we told our NOC to keep an eye open (Sunday afternoon where we were) and we started getting alerts that all LDAP replication was broken.
The sort function couldn't handle the rollover so when we came in that day all our mailboxes had their email sorted in the wrong order, and it couldn't be fixed without the listed patch.
We had a bug tracking system that malfunctioned starting at (time_t)1000000000 (3 days before 9/11). Apparently it converted the time_t value to a string and truncated it to 9 characters. Its concept of the current time jumped back to 1973-03-03 and advanced from there at 10% of the normal rate. The bug was corrected fairly quickly.
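A tiny C sketch of that failure mode (the tracker's actual code isn't shown here, so this is just a guess at the mechanism): format the time_t as a decimal string, keep only the first 9 characters, and parse it back.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    int main(void) {
        /* Illustration only: render the time as decimal, truncate to 9
           characters, parse it back, as described above. */
        time_t now = 1000000123;                 /* shortly after the rollover */
        char buf[32], trunc[10];

        snprintf(buf, sizeof buf, "%lld", (long long)now);
        memcpy(trunc, buf, 9);
        trunc[9] = '\0';

        /* 100000012 -> Sat Mar  3 09:46:52 1973, and this clock now advances
           by one second for every ten real seconds. */
        time_t broken = (time_t)atoll(trunc);
        printf("%s", asctime(gmtime(&broken)));
        return 0;
    }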
Hah nice, I remember doing that too! At the time I thought it was important to keep the irc log, which I've just dug up and thrown up on pastebin. I don't actually remember which network this was (EFNet maybe?)
> 03:46:49 -crier- THE BILLENNIUM HAS ARRIVED!!!!! Uh, you might want to play auldlangsyne.wav. YAAAAAAAAAAAAAAAYYYYYY!
For some reason the billennium arrives at 3:46:49, which doesn't make sense - I wasn't in UTC+2, and anyway this should've been at 01:46:40am UTC (40 seconds past the minute, not 49 seconds). Most likely reason is simply that my local machine's clock was wrong.
I’m currently on an iPhone 7S. I’ve got a mosh session open to a remote raspberry pi 4B (time looks right), bash shell, “while(sleep 1); do date +%s”, running in Blink SSH while I read HN cause I couldn’t sleep. Have an alarm set for 12:26 UTC so I can grab that screenshot. Following along with you :)
I remember holding a similar billionth second party in Soda Hall (Wozniak Lounge) at Berkeley as an undergrad. Never realized how close that was before 9/11.
That's the point that your library's time_t reaches those values. But the point at which you will be 1.6Gs since the Unix v4 Epoch is in fact 27 seconds earlier, because your time_t hasn't counted the leap seconds over that period. To do that with Unix tools, you need to work with TAI timekeeping, not UTC, and remember that the (biased) zero point for TAI timekeeping is 10s before the Unix v4 Epoch.
% jot - 10 20 | while read -r d
do
printf "@40000000%08x%08x %s00Ms SI since the Unix v4 Epoch\n" \
"$d"00000010 0 "$d"
done |
TZ=right/UTC tai64nlocal
2001-09-09 01:46:18.000000000 1000Ms SI since the Unix v4 Epoch
2004-11-09 11:32:58.000000000 1100Ms SI since the Unix v4 Epoch
2008-01-10 21:19:37.000000000 1200Ms SI since the Unix v4 Epoch
2011-03-13 07:06:16.000000000 1300Ms SI since the Unix v4 Epoch
2014-05-13 16:52:55.000000000 1400Ms SI since the Unix v4 Epoch
2017-07-14 02:39:33.000000000 1500Ms SI since the Unix v4 Epoch
2020-09-13 12:26:13.000000000 1600Ms SI since the Unix v4 Epoch
2023-11-14 22:12:53.000000000 1700Ms SI since the Unix v4 Epoch
2027-01-15 07:59:33.000000000 1800Ms SI since the Unix v4 Epoch
2030-03-17 17:46:13.000000000 1900Ms SI since the Unix v4 Epoch
2033-05-18 03:32:53.000000000 2000Ms SI since the Unix v4 Epoch
%
Also, your figures for 0.0Gs and 0.1Gs are nonsense anyway.
Unix didn't adopt this Epoch and method of timekeeping until 4th Edition, somewhere in 1974 according to Unix historians. Yes, it's tricky to pin down the exact date of adoption. The Epoch was changed every year before then, as earlier versions of Unix measured time in 60ths of a second since the start of the year. This has made reconstructing Unix history from tape archives non-trivial. If you want to count seconds since the start of the first Unix Epoch, in 1st Edition, you have to count from the start of 1971. And no, that's not the same as how old Unix is.
(And yes, those values for 2023 onwards are somewhat speculative, as there will no doubt be more leap seconds.)
Heh... I've run into this bug myself recently. Get-Date '...' returns a DateTime object of Unspecified kind, i.e. neither Local nor Utc. By design, such objects are interpreted as Local by ToUniversalTime() and as Utc by ToLocalTime(). They essentially assume the inverse of themselves, for better or worse.
Since the Kind property is readonly, to get the real UTC date you need:
PS> ($utc, (Get-Date -AsUtc '1/1/1970'), (Get-Date -AsUtc), (Get-Date)).Kind
Utc
Utc
Utc
Local
In your example,
$naive.Kind -eq [DateTimeKind]::Unspecified
evaluates to $true, correctly, because no DateTimeKind was specified.
The peculiar behavior you observed with ToLocalTime and ToUniversalTime exists to avoid breaking changes in working[1] code written for versions of the .NET Framework before 2.0, where DateTime.Kind was introduced (as was DateTimeOffset, so ToLocalTime and ToUniversalTime should probably be deprecated).
[1] "Breaking" changes exist for already-broken code, e.g., in Framework 2.0 and later, the return values of ToUniversalTime and ToLocalTime have Kinds Utc and Local, respectively, so applying the same conversion to an already-converted value no longer has any effect.
Interesting. I'm using PowerShell 7 (i.e. PowerShell Core), and I'm not sure of the exact version number, but the functions seem to work as intended on my Windows 10 machine.
The datetime package is one of the best things about Python, and dare I say one of the best general purpose calendar modules ever written. It’s just so practical.
That parses a subset and is only guaranteed to be compatible with the output of Python’s “.isoformat()”. (I imagine it would be backwards compatible to expand it to cover all of ISO 8601, and I can’t tell why they haven’t.)
When we one day become a space faring civilization we will probably stop using the gregorian calendar because why would you use that on say Mars?
But there really is no reason to get rid of the unix timestamp as a measure of time and this measure may stay around for a long time. This may mean that people in the future might consider 1970 as year zero where modern civilization began.
In "Deepness in the Sky" a spaceship's computer is still running on Unix time many thousands years into the future. It is generally believed that the calendar starts with the first Moon landing, and only the main character as a "programmer-archeologist" discovers a small difference between it and true Unix time 0.
Little is known about “Unix” but it is believed he or she was a charismatic spiritual leader who brought hope, healing, and comfort to countless humans on Earth and inspired numerous disciples including “BSD”, “Linux”, and “System V”.
“Most years, since its inception in 1999, Vinge has been on the Free Software Foundation's selection committee for their Award for the Advancement of Free Software.”[0]
Since “A Deepness in the Sky” was written in 1999 as a prequel to “A Fire Upon the Deep” (from 1992), which do you recommend to read first?
For me A Deepness in the Sky is his best, and one of the most memorable novels I've read.
The Peace War is a fascinating exploration of "what if this one thing was possible?" and all its terrible consequences, but Deepness grabbed me much more as a story.
A megasecond is a useful quantity: it's a bit less than 12 days. I mentally approximate it to a fortnight, partly because of the VMS bootloader which used microfortnights as an approximation for seconds https://en.wikipedia.org/wiki/List_of_unusual_units_of_measu...
Another handy time_t approximation is that a year is about 2^25 seconds, or 32 megaseconds.
There is a very good reason to eventually abandon it: leap seconds. Unix time goes back one second whenever there is a leap second on Earth.
It's extremely weird and IMHO completely ruins the purpose of a timestamp, but it's a compromise for backwards compatibility, since Unix time was created before leap seconds. This hack ensures that the number of seconds in a day remains fixed, an assumption of many systems at the time.
Leap seconds were introduced in 1972. The time_t that you are thinking of was invented somewhere around 1974, according to Unix historians.
Ironically, your kernel measures TAI time perfectly happily if unmolested, as that always ticks once for every SI second (ignoring oscillator drift). No slewing. No smearing. No stepping.
Isn't a second just how long light takes to travel 299,792,458 m?
Also, "since Unix time was created before leap seconds." I found this interesting for the simple reason I've never spent any time thinking about Epoch vs Leap Second histories.
Yes, but the point is that the UNIX timestamp doesn't count the number of seconds elapsed since January 1, 1970. It counts 86400 seconds per day, regardless of how many seconds the day actually has (which can vary due to leap seconds).
Is this accurate though? Isn’t the number of seconds since 1970 absolute, and it’s up to the library generating the Gregorian date to take into account leap seconds?
I suppose if these libraries are not taking into account leap seconds, then the actual rollover will be a few seconds earlier (or later?) than what we think.
Your time_t isn't the true number of SI seconds since the Epoch. It's that number, mucked around 27 times so far. Instead of leaving their kernels to count one tick per SI second (ignoring oscillator drift) many people slew, step, or smear their seconds count every time that there is a leap second.
Go and read about "right" versus "posix" timezones.
IIUC unix timestamp simply counts seconds (well defined interval of time) from epoch onwards. It is then up to different calendars to interpret this number as a given day / hour /... This is the beauty of it - it is always an accurate (well, at low speeds at least ;) ) measure of time. In other words, no, some days might not last 86400 seconds.
This is not correct. UNIX timestamps always have 86400 seconds per day, and consequently, they do not actually count the number of seconds since the epoch.
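A small C sketch of that convention (the helper is mine, not from any library): converting 2020-09-13 12:26:40 UTC with a fixed 86400 seconds per day, and no leap-second handling anywhere, lands exactly on 1600000000.

    #include <stdio.h>
    #include <stdint.h>

    /* Days since 1970-01-01 for a civil date; every day counts as 86400 s. */
    static int64_t days_from_epoch(int year, int month, int day) {
        static const int mdays[] = {31,28,31,30,31,30,31,31,30,31,30,31};
        int64_t days = 0;
        for (int y = 1970; y < year; y++)
            days += 365 + ((y % 4 == 0 && y % 100 != 0) || y % 400 == 0);
        for (int m = 1; m < month; m++)
            days += mdays[m - 1]
                  + (m == 2 && ((year % 4 == 0 && year % 100 != 0) || year % 400 == 0));
        return days + day - 1;
    }

    int main(void) {
        /* 2020-09-13 12:26:40 UTC at exactly 86400 s/day, no leap seconds: */
        int64_t t = days_from_epoch(2020, 9, 13) * 86400
                  + 12 * 3600 + 26 * 60 + 40;
        printf("%lld\n", (long long)t);   /* 1600000000 */
        return 0;
    }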
And the metre itself is defined as the length of the path traveled by light in vacuum during a time interval of 1/299,792,458 of a second.
So it's cyclic and doesn't make sense outside the earth reference frame
Edit: since 2019, 1 second is defined by taking the fixed numerical value of the caesium frequency ∆νCs, the unperturbed ground-state hyperfine transition frequency of the caesium-133 atom, to be 9192631770 when expressed in the unit Hz, which is equal to s−1
So a second makes sense in the galactic scale at least as long as Caesium-133 is a stable isotope in that environment
> Since 1967, the second has been defined as exactly "the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom
> realisation of the metre is usually delineated (not defined) today in labs as 1579800.762042(33) wavelengths of helium-neon laser light in a vacuum
No. A second is "The duration of 9192631770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom."
No, that's how the metre is defined. The second is defined by the transition between two hyperfine states of cesium 133 (9,192,631,770 'oscillations').
Or let's say we colonize a planet with significant time dilation (i.e. near a supermassive black hole): how do you deal with this? Who has the canonical time?
AFAIK, there is no canonical time in relativity. There is the "local time", which is what your wristwatch shows, and the "time in place X", where X can be Earth, Mars, or some spaceship. These times might not be in sync, and they might even be distorted. Because, if place X is moving relative to you at a significant percentage of lightspeed, their seconds will be longer. Also, the further X is, the blurrier the concept of simultaneity becomes. Which makes the question "what time is it _now_ in place X" moot :)
If relativity doesn't allow time travel, doesn't that make time canonical? A clock on Earth viewed through a telescope might look jumpy, but only in the forward direction after compensating for distance, I think.
Looking at a clock from Earth through a telescope isn't as straightforward as you'd think.
The image you see of that clock is actually light (photons) emitted from Earth, which will take a while to reach you - like, 1 year, if you're 1 lightyear away. During that year, Earth has moved on, maybe blew itself apart. But you can't even tell, because information can't reach you faster than light :)
So you can only see Earth's past, not Earth's "now". The further you are, the more "now" loses meaning.
Yeah, but can it ever go back, other than by backing off faster than the speed of light? If it can't, then it's at least monotonic, even if its rate isn't constant.
All observers observe all clocks to advance monotonically forward in time, regardless of their location or relative speed, unless the clock is moving at the speed of light. But the rate at which each clock is observed to advance depends on things like relative speed, acceleration, and the curvature of spacetime.
... unless it's the system clock in Linux, with the hardware RTC set to be interpreted as local time, and one is watching a system boot on a machine that is set up as east of the Prime Meridian (i.e. the hardware RTC is ahead of UTC). (-:
You don't have to go to science fiction for this. Science fact will do. On Earth, with Atomic Time, we already have to account for general relativistic effects. Doco from the BIPM (the SI Brochure) already talks about accounting for them across the sizes of the measuring devices.
You'd just use some math to remove the influence of the black hole. Or for a more hands-on answer, you could have some reference beacons that are significantly further away from the black hole.
In a broad sense, the answer goes like this: Almost everything in the galaxy shares an approximate reference frame when it comes to velocity and special relativity. For general relativity and gravity, that's obviously a distortion so it doesn't count.
In the long term, to prevent very minor drift, we can use quasars as a reference.
That’s exactly why the OP said we should abandon Unix time. The “who cares about leap seconds” standard is called TAI, and the difference between TAI and Unix time changes every time a leap second is introduced.
Unix time decrees every day has exactly 86400 seconds. But the introduction of leap seconds means that while most days have 86400, some Earth days have 86401 seconds, and potentially there could also be days with 86399.
TAI makes sense everywhere, Unix time only on earth.
The galactic GPS would be based on quasars, e.g. far-away galaxies, that would pretty much appear in the same positions anywhere in the Milky Way. Quasar spins are gradually slowing down. So you could tell how many light years you are from the quasar by the observed spin rates. In addition, you can determine epoch time from known positions and spin decay rates.
Carl Sagan put a quasar GPS on a golden plaque on both Voyager probes, so future intelligences could tell where and when the Voyagers came from.
Next iteration of unix-time should include the concept of local time in the context of relativity if we are going to travel or communicate over significant distances.
How? And what would be the place/value measured against? And would all "timekeepers" need to get signals from that place to get their offset? If so would we need to take lensing from gravitation into account?
How do you even measure a global value when the local perspective by definition changes the value?
Programmer: Next iteration of unix-time should include the concept of local time in context of relativity if we are going to travel or communicate over significant distances.
You mean that from a remote place or fast-moving ship, the time on Earth would appear to go at a different speed? So local time would not just have a constant offset?
If time is moving at a different speed, it wouldn't just be a constant offset. You'd also need a multiplier. Think of two cars moving at different speeds on a road. They don't stay the same distance apart.
Yea, but if we're going to use a "number of seconds since ..." Type system like Unix time, SOMEONE has to be the origin point, so I don't think this is a knock against Unix time.
It's unfortunate to use an accelerating reference (the Earth orbiting the Sun, the Sun orbiting the galaxy, the galaxy intermixing with Andromeda) because it requires you to awkwardly keep track of the astronomy of Earth for timekeeping. Instead, you would want a standardized reference time frame, say, "Unix time assuming Earth in 1970 never experienced any acceleration or dilation", and then everyone, including Earth people, would track their acceleration and dilation offsets to be able to compare times when meeting, or to convert signal timestamps from any source back to their local time.
Well, we already have to keep track of the size and shape of Earth. Go and read the SI Brochure from the BIPM. Local measurements by atomic clocks on Earth have to be weighted according to how far they are from a reference surface of equal gravitational potential. There's a publication named Circular T that reports these corrections at 5-day intervals.
You would want some global reference time, and that could be a number of seconds passing on Earth. However locally you can't work with a clock that doesn't tick at a constant rate (if seconds on your spaceship go twice as fast, the crew can work with that; if the rate changes day-to-day as you accelerate, that's a problem).
In addition, you will probably want clocks that run at a given perceived speed, to time physical processes that take the same time everywhere (e.g. so your cookie recipe saying "bake for 600 seconds" work without translation).
With Earth's day length becoming irrelevant, the 24 hour clock becomes irrelevant too, and the length of the sleep cycle may shift.
We could eventually arrive at stardates with one starday being 100000 seconds. Of course, that would be just as arbitrary as simply keeping the 24 hours of 60 minutes each.
They were very limited, though. We do not really know what would happen on a spaceship that changed its day to 100 000 seconds and subjected the crew to this cycle for months or years.
Given how adaptable people usually are to external influences, I would guess that some adaptation would take place.
To my knowledge, once you take the sun away, it tends to drift more towards 25 or 26 hours. And that's in humans who experienced the externally imposed 24 hour cycle for decades before participating in a short experiment, not potential spacefarers born onboard of stations or ships.
I hope you are running an up-to-date Splunk version:
Beginning on September 13, 2020 at 12:26:39 PM Coordinated Universal Time (UTC), un-patched Splunk platform instances will be unable to recognize timestamps from events with dates that are based on Unix time, due to incorrect parsing of timestamp data.
I'm always left wondering how something like this could happen. I kind of get Y2K and stuff like overflows ... but this one? Really? Did someone put a regex like /^15... to "match" dates?
Yep, that's exactly what Splunk have done - scroll down the release notes linked to by the grandparent and the faulty regex is shown.
What's super daft is the proposed fix is only a further sticking plaster, adding support for the 16... range (and the 2020s decade) rather than all future dates. So in a couple of years a further patch will be needed...
To put out the fire, this is probably the best solution, because the risk of unintended side effects is very low. I would just hope it is then followed by a proper fix.
> Beginning on January 1, 2020, un-patched Splunk platform instances will be unable to recognize timestamps from events where the date contains a two-digit year. This means data that meets this criteria will be indexed with incorrect timestamps.
> Beginning on September 13, 2020 at 12:26:39 PM Coordinated Universal Time (UTC), un-patched Splunk platform instances will be unable to recognize timestamps from events with dates that are based on Unix time, due to incorrect parsing of timestamp data.
> Impact
> ...
> The issue appears when you have configured the input source to automatically determine timestamps, ...
> There is no method to correct the timestamps after the Splunk platform has ingested the data when the problem starts. If you ingest data with an un-patched Splunk platform instance beginning on January 1, 2020, you must patch the instance and re-ingest that data for its timestamps to be correct.
> Cause
> The Splunk platform input processor uses a file called datetime.xml to help the processor correctly determine timestamps based on incoming data. The file uses regular expressions to extract many different types of dates and timestamps from incoming data.
> On un-patched Splunk platform instances, the file supports the extraction of two-digit years of "19", that is, up to December 31, 2019. Beginning on January 1, 2020, these un-patched instances will mistakenly treat incoming data as having an invalid timestamp year, and could either add timestamps using the current year, or misinterpret the date incorrectly and add a timestamp with the misinterpreted date.
The crazy thing is there have only been 1.6 billion seconds since 1970. There have been more babies born since 1970 than there have been seconds. Which is crazy.
Back then, Earth's population was around 3.7 billion. Now it's 7.8 billion. That means that (ignoring deaths) 2.5 babies have been born for each Unix second. Or roughly 1 baby every 400 milliseconds.
Underrated comment. Reply anticipation: this is not reddit. Preemptive response, I know, doesn't change the fact that this comment is underrated :);p xx
Only thing I want to add: not yet, maybe with CRISPR and artificial wombs there could be some way.
That sounds about right. Approximately 50 million deaths per year over that period; 50 years gives about 2.5 billion deaths. So roughly 2.5+1.5 = 4 births and 1.5 deaths a second. If you round deaths up to whole numbers, that's 3 deaths per 2 seconds; maybe the error was adding those 3 deaths per 2 seconds to the 2.5 per second figure. Or 8 births and 3 deaths every 2 seconds. Or 1 death every ~700 ms.
If you have a billion (i.e., 1E9) dollars, or any other currency, and you spend $86,400 a day, it will take you more than 31 years and 8 months to spend the entire billion.
According to https://ourworldindata.org/peak-child (2018) the number of children in the world is plateauing ("very close to a long flat peak") meaning that the total number of children may stay constant in the future.
Good point and great way to think about it. It would be weird if the number of seconds grew quadratically. I guess considering "particle interactions" and "light cone" the number of interactions among particles (and interactions among interactions) grows quadratically over time. I wonder how the universe doesn't end up with a backlog of information it hasn't yet processed. Maybe that's gravity/dark matter. :p ;) xx hehe
I quite like the Mayan counting system, so I made a little module that does the conversion and makes SVGs; this[1] is the only thing I've used it for so far, though.
I enjoyed the lunch party we threw at work when it hit 1234567890.
I worked in Corporate IT at the time, so most of the office had no idea what we were celebrating. Somehow, neither did 80% of the tech staff. But the few of us who appreciated it really enjoyed the cake.
For those who would like to be able to figure out the day of year from the date in their heads, here are a couple of reasonable ways (a short code sketch follows at the end).
1. Given month 1 <= m <= 12, day 0 of that month (i.e., the day before the first of the month) in a non-leap year is day 30(m-1) + F[m] of the year, where F[m] is from this array (1-based indexing!):
0, 1, -1, 0, 0, 1, 1, 2, 3, 3, 4, 4
Add the day, and add 1 if it is a leap year and m >= 3.
E.g., September 12. m = 9, giving 240 + F[9] = 243 for Sep 0. Add 12 for the day, and 1 for the leap year, giving 256.
The F table has enough patterns within it to make it fairly easy to memorize. If you prefer memorizing formulas to memorizing tables, you can do months m >= 3 with F[m] = 6(m-4)//10, where // is the Python 3 integer division operator. Then you just need to memorize that Jan is 0, Feb is 1, and the rest are 6(m-4)//10.
2. If you have the month terms from the Doomsday algorithm for doing day-of-week calculations already memorized, you can get F[m] from that instead of memorizing the F table.
F[m] == -(2m + 2 - M[m]) mod 7, where M[m] is the Doomsday month term (additive form): 4, 0, 0, 3, 5, 1, 3, 6, 2, 4, 0, 2.
You then just have to remember that -1 <= F <= 4, so adjust -(2m + 2 -M[m]) by an appropriate multiple of 7 to get into that range.
BTW, you can use that formula relating F[] and M[] the other way. If you have F[], then M[m] = (F[m] + 2m + 2) mod 7. Some people might find it easier to memorize F and compute M from it when doing day of week with Doomsday, rather than memorize M.
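For anyone who prefers code to mental arithmetic, here is a small C sketch of method 1 above (the F table and the formula are taken from that description; the function names are mine):

    #include <stdio.h>
    #include <stdbool.h>

    /* The F table from method 1, 1-based: F[1]..F[12]. */
    static const int F[13] = {0,  0, 1, -1, 0, 0, 1, 1, 2, 3, 3, 4, 4};

    static bool is_leap(int year) {
        return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
    }

    /* Day of year = 30(m-1) + F[m] + day, plus 1 in a leap year when m >= 3. */
    static int day_of_year(int year, int m, int day) {
        int doy = 30 * (m - 1) + F[m] + day;
        if (m >= 3 && is_leap(year))
            doy += 1;
        return doy;
    }

    int main(void) {
        printf("%d\n", day_of_year(2020, 9, 12));   /* 256, matching the example */
        return 0;
    }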
% printf "@40000000%08x%08x %#xs SI since the Unix v4 Epoch\n" \
0x6000000A 0 0x60000000 |
TZ=right/UTC tai64nlocal
2021-01-14 08:25:09.000000000 0x60000000s SI since the Unix v4 Epoch
%
It's going to tweet twice tomorrow at lunch time (in my time zone) either side of the rollover.
I fondly remember the gigasecond party I went to with a load of friends - it happened in the small hours on Sunday morning, ideal party time for a bunch of geeks in their 20s, like an extra new year's eve!
For those as oblivious to +0000 as I am, and wondering if that is GMT or UTC:
- GMT is a time zone officially used in some European and African countries. The time can be displayed using both the 24-hour format (0 - 24) or the 12-hour format (1 - 12 am/pm).
- UTC is not a time zone, but a time standard that is the basis for civil time and time zones worldwide. This means that no country or territory officially uses UTC as a local time.
I still remember the huge 1234567890 party. It was one of the nerdiest gathering ever and the bar owner had no clue what was going on.
Coincidentally I consider this the end of "golden age" of the internet which ended as soon as the iPhone and smartphones in general got mainstream and everything moved to centralized services.
I can only recommend using https://epochconverter.com instead of Google's preferred result https://unixtimestamp.com as it automatically detects if milliseconds are included, and it displays the local time more prominently.
Tangentially related, last week I made a tiny one-file Java lib for working with Unix timestamps in milliseconds (as returned by System.currentTimeMillis() etc). It’s several orders of magnitude faster than Java’s time API!
weird Q I was wondering about in regards to the epoch. (perhaps a stupid Q)
what would be the problems with choosing a new epoch (i.e. one where Jan 1 is the same day of the week as Jan 1, 1970, and is 2 years before a leap year; perhaps one doesn't exist)?
The worst case I see is that apps that calculate years internally (instead of via a shared library or the like) would calculate them incorrectly. I'm wondering if this wouldn't be something that could be massaged around. Of course, it just pushes the problem down the road.
it also makes timestamps like this incompatible between systems that have different epochs, which could be an issue.
as I said, naive (possibly stupid) Q, just wondering if people have actually talked about it?
That is just when signed 32 bits rolls over. Unsigned takes us all the way to 2106.
Nervous nellies worried about rollover, and insisting we switch to 64-bit timestamps everywhere, are a nuisance. You can keep one 64-bit epoch, such as boot time or most recent foreign attack on NYC, and as many 32-bit offsets from that as you like, and always be able to get a 64-bit time whenever you need it. Most often you only need a difference, so one epoch is as good as any, and you can work purely in 32 bits.
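A rough C sketch of that scheme (names and layout are mine, purely illustrative): keep one 64-bit reference epoch and store events as 32-bit offsets from it, widening only when an absolute time is needed.

    #include <stdio.h>
    #include <stdint.h>

    typedef struct {
        int64_t  base;    /* 64-bit Unix seconds of the chosen epoch (e.g. boot) */
        uint32_t offset;  /* seconds since that epoch                            */
    } stamp32;

    /* Widen to a full 64-bit time only when an absolute value is needed. */
    static int64_t to_unix64(stamp32 s) {
        return s.base + (int64_t)s.offset;
    }

    /* Differences between stamps sharing a base never leave 32 bits. */
    static uint32_t diff32(stamp32 a, stamp32 b) {
        return a.offset - b.offset;   /* well-defined unsigned wraparound */
    }

    int main(void) {
        stamp32 a = {1600000000, 10}, b = {1600000000, 40};
        printf("%lld %u\n", (long long)to_unix64(b), (unsigned)diff32(b, a));
        /* prints: 1600000040 30 */
        return 0;
    }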
Pcap format is good until 2106 assuming the 1970 epoch. A bit of one of the now-unused header fields could be repurposed to indicate another.