The Unix timestamp will begin with 16 this Sunday (unixtimestamp.com)
453 points by dezmou on Sept 12, 2020 | 203 comments


I remember staying up late (UK) at the Billenium (2:46 AM Sunday Sep 9th 2001), thinking that was bound to be the most newsworthy event of the week, watching the seconds tick past on a "while(sleep 1); do date" loop, with slashdot in one window, IRC in another, all running on an enlightenment window manager.

500M seconds later I was in Washington DC in a hotel, watching it tick up on an rxvt on my laptop, with HN in a window.

Who knows where I'll be in 2033, hopefully not in Europe as I'll be too old for night shifts, but wherever I am, I suspect it will have a bash prompt.


A former employer's telephony software had a 9-character integer-string field for the Unix timestamp, so there was a bug when it rolled over to 1000000000 in 2001. Pretty rare, I think. Unix timestamps back then were usually 32-bit ints, so good until 2038. And hopefully they'll be 64 bits everywhere that matters well before 2038.


I was involved in building a system using 32-bit ints as timestamps ten years ago. They are still sold, and since it's industrial equipment running on 16-bit microcontrollers I have every reason to believe most of them will still be around in 2038.

I don't think I was at the only company doing this. Few people seem to care about issues that will happen after their retirement. Expect lots of industrial stuff to work just a bit worse around 2038 (and 2036, PIC microcontrollers fail a bit sooner)


"A society grows great when old devs allocate timestamps whose higher bits they shall never set."


Why will PICs fail sooner? (And which ones: 8-, 16-, or 32-bit?)


I worked with the 16-bit PIC24F. I don't have the source handy to check, but according to a forum entry "The provided gmtime() actually fails earlier than 2038. The year wraps around when the time_t input goes beyond 0x7C55817F or Thu Feb 7 06:28:15 2036." [1]

1: https://www.microchip.com/forums/m522929.aspx


Thanks - so it seems it's an issue with their library, rather than something to do with PICs per se.


Or unsigned 32 bits, which takes us to 2106.

Almost all things that have a timestamp leave you in no doubt which 136-year period they were taken in. For those, 32 bits is plenty.


Making time_t unsigned would break the ability to refer to times before 1970.

Existing code that deals with 32-bit timestamps almost universally assumes that (time_t)0 is 1970-01-01 00:00:00 UTC. Updating that code to guess a different epoch would be more work (with more inevitable bugs) than keeping a fixed epoch and using 64 bits.

It's already 64 bits (and signed) on a lot of systems.


What on earth is that good for? Seems like an extreme niche.


Making time an unsigned 32 would break any code that computes a difference of times.


Not really. Addition and subtraction don't care about unsigned vs signed status, because overflow is identical in both cases.

Think of an odometer for a car with 999999 as its max number. 999,999 is equivalent to -1. So 500 + 999999 == 499 on the odometer.

A 32-bit register is simply a binary odometer, so the above concept happens with bits.

In "signed int", we print the number "999999" as "-1". With "unsigned int", we print the number "999999" as "999999". They are one-and-the-same. The only difference is your print() function.
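To make that concrete, a quick Python sketch mimicking a 32-bit register with a mask - same bit pattern, different printout:

  bits = 0xFFFFFFFF                                    # an "odometer" showing all ones

  as_unsigned = bits                                   # 4294967295
  as_signed = bits - 2**32 if bits >= 2**31 else bits  # -1

  print(as_unsigned, as_signed)     # 4294967295 -1 -- same bits, two printouts
  print((bits + 500) & 0xFFFFFFFF)  # 499, i.e. 500 + (-1) on the odometer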

-----

Multiplication and division change however. Your compiler tracks signed/unsigned status for idiv vs div, or mul vs imul instructions.

--------

With that being said: I think 64 bits is fine. Most computers these days are 64-bit, and those 4 extra bytes aren't very costly. Standard compression algorithms, like GZip, do a good job of finding those redundant bits and shrinking things down.


> Addition and subtraction don't care about unsigned vs signed status, because overflow is identitcal in both cases.

Except that signed overflow invokes Undefined Behaviour in any C compiler, whereas unsigned overflow does not.

This invokes undefined behaviour:

    int foo(int x) {
      return x + 1 > x;   /* signed overflow when x == INT_MAX, so this is UB */
    }


We're not talking about C, we're talking about how it works on the computers.


Many if not most of the embedded/long-term systems are implemented in C. If a variable is declared signed, overflowing cases may be, and often are, "optimized" away.

IIUC, GP's foo() would likely be optimized to { return true; }, and so would similar timestamp overflow checks.


The post I responded to was the opposite: about turning "signed" code (which you declare is undefined) into "unsigned" code (which you declare is fully defined).

Given this thread of subargument, taking the difference between 32-bit unsigned numbers is MORE DEFINED than using signed integers.

-------

IE: If your code was correct with "int timestamp", it will be more correct with "unsigned int timestamp".

In any case, "int" or "unsigned int" based timestamp manipulation wouldn't be like the code you suggested, but instead "int difference = x - y".

In the signed integer case, "difference" is (conceptually) negative, while in the unsigned integer case, "difference" is guaranteed to wrap around in a well-defined way. Both cases are conceptually correct with regards to the difference of timestamps.
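A small sketch of that in Python, masking to 32 bits to mimic C's unsigned wrapping (values are made up; the point is that the masked difference is the true elapsed time even across a rollover):

  M = 0xFFFFFFFF                   # mask to mimic a 32-bit unsigned register

  earlier = 0xFFFFFFF0             # 16 seconds before the unsigned rollover
  later = (earlier + 100) & M      # 100 seconds later; the raw value is just 84

  print(later)                     # 84
  print((later - earlier) & M)     # 100 -- the elapsed time survives the wrap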


But because the behaviour is undefined, it doesn't matter how the computer would handle it, because the compiler is free to rework it into any arbitrary sequence of instructions, including removing it altogether.


There are exactly zero undefined behaviors around operations on unsigned integer types in C or in C++.

To get meaningful results may require some care, but the languages provide everything needed to exercise such care.


Perhaps you missed my earlier comment.

> Except signed overflow invoking Undefined Behaviour in any C compiler, whereas unsigned overflow does not.

We aren't talking about unsigned integer types. We're talking about the behaviour of signed integer types.


I posted this back when we hit 1400000000 in 2014 [1]:

---

Openldap got hit by the billennium bug. I remember because we told our Noc to keep an eye open (Sunday afternoon where we were) and we started getting alerts that all LDAP replication was broken.

https://www.openldap.org/lists/openldap-bugs/200109/msg00052...

---

Link to 1400000000 thread:

https://news.ycombinator.com/item?id=7736739


This happened to the wu-imap server my employer was using at the time since they were running the maildir patches:

http://www.davideous.com/imap-maildir/#updates

The sort function couldn't handle the rollover so when we came in that day all our mailboxes had their email sorted in the wrong order, and it couldn't be fixed without the listed patch.


We had a bug tracking system that malfunctioned starting at (time_t)1000000000 (3 days before 9/11). Apparently it converted the time_t value to a string and truncated it to 9 characters. Its concept of the current time jumped back to 1973-03-03 and advanced from there at 10% of the normal rate. The bug was corrected fairly quickly.
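For illustration, a rough Python reconstruction of that failure mode (the real system's code isn't known; this just shows why truncating the decimal string to 9 characters lands on 1973-03-03 and then advances at a tenth of normal speed):

  from datetime import datetime, timezone

  def buggy_now(t):
      return int(str(t)[:9])        # keep only the first 9 decimal digits

  for t in (1_000_000_000, 1_000_000_010, 1_000_000_100):
      b = buggy_now(t)
      print(t, '->', b, datetime.fromtimestamp(b, tz=timezone.utc))
  # 1000000000 -> 100000000 1973-03-03 09:46:40+00:00
  # ten real seconds advance the truncated clock by only one second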


> a "while(sleep 1); do date" loop,

There's always "xclock -d -update 1 -strftime %s -face Inconsolata-190:bold "


You could at least give the event some proper gravitas by using Comic Sans instead.


> I remember staying up late (UK) at the Billenium (2:46 AM Sunday Sep 9th 2001)

Inconsolata wasn't released until 2006.

https://en.wikipedia.org/wiki/Inconsolata


Why not just `watch date` ?


watch updates every 2 seconds by default.

`watch -n1 date` will update every second.


Upvoting for Inconsolata.


Hah nice, I remember doing that too! At the time I thought it was important to keep the irc log, which I've just dug up and thrown up on pastebin. I don't actually remember which network this was (EFNet maybe?)

> 03:46:49 -crier- THE BILLENNIUM HAS ARRIVED!!!!! Uh, you might want to play auldlangsyne.wav. YAAAAAAAAAAAAAAAYYYYYY!

more at https://pastebin.com/71H7eR8r

For some reason the billennium arrives at 3:46:49, which doesn't make sense - I wasn't in UTC+2, and anyway this should've been at 01:46:40am UTC (40 seconds past the minute, not 49 seconds). Most likely reason is simply that my local machine's clock was wrong.


Sounds a bit like leap seconds, except there were 22 leap seconds before 2001.


I’m currently on an iPhone 7S. I’ve got a mosh session open to a remote raspberry pi 4B (time looks right), bash shell, “while(sleep 1); do date +%s”, running in Blink SSH while I read HN cause I couldn’t sleep. Have an alarm set for 12:26 UTC so I can grab that screenshot. Following along with you :)


I remember holding a similar billionth second party in Soda Hall (Wozniak Lounge) at Berkeley as an undergrad. Never realized how close that was before 9/11.


(genuine curiosity)

Why do you wait for these events? Myself, I consider these to be complete non-events.


For the same reason people wait for midnight on new year.

It's an arbitrary thing people do for fun.


It's like any holiday: it's a rendezvous for the community that recognizes it. It strengthens the social bonds.


You got downvoted for this question but it’s such a great question in this context because like the UNIX epoch, the answer is arbitrary :)

Incidentally, I guess the Windows/NT epoch started in 1601.


It is going to be at 2020-09-13 12:26:40 UTC.

Python:

  $ python3 -q
  >>> from datetime import datetime
  >>> datetime.utcfromtimestamp(1_600_000_000)
  datetime.datetime(2020, 9, 13, 12, 26, 40)
GNU date (Linux):

  $ date -ud @1600000000
  Sun Sep 13 12:26:40 UTC 2020
BSD date (macOS, FreeBSD, OpenBSD, etc.):

  $ date -ur 1600000000
  Sun Sep 13 12:26:40 UTC 2020
All such dates (in UTC) until the end of the current century:

  $ python3 -q
  >>> from datetime import datetime
  >>> for t in range(0, 4_200_000_000, 100_000_000): print(f'{t:13_d} - {datetime.utcfromtimestamp(t).strftime("%Y-%m-%d %H:%M:%S")}')
  ...
              0 - 1970-01-01 00:00:00
    100_000_000 - 1973-03-03 09:46:40
    200_000_000 - 1976-05-03 19:33:20
    300_000_000 - 1979-07-05 05:20:00
    400_000_000 - 1982-09-04 15:06:40
    500_000_000 - 1985-11-05 00:53:20
    600_000_000 - 1989-01-05 10:40:00
    700_000_000 - 1992-03-07 20:26:40
    800_000_000 - 1995-05-09 06:13:20
    900_000_000 - 1998-07-09 16:00:00
  1_000_000_000 - 2001-09-09 01:46:40
  1_100_000_000 - 2004-11-09 11:33:20
  1_200_000_000 - 2008-01-10 21:20:00
  1_300_000_000 - 2011-03-13 07:06:40
  1_400_000_000 - 2014-05-13 16:53:20
  1_500_000_000 - 2017-07-14 02:40:00
  1_600_000_000 - 2020-09-13 12:26:40
  1_700_000_000 - 2023-11-14 22:13:20
  1_800_000_000 - 2027-01-15 08:00:00
  1_900_000_000 - 2030-03-17 17:46:40
  2_000_000_000 - 2033-05-18 03:33:20
  2_100_000_000 - 2036-07-18 13:20:00
  2_200_000_000 - 2039-09-18 23:06:40
  2_300_000_000 - 2042-11-19 08:53:20
  2_400_000_000 - 2046-01-19 18:40:00
  2_500_000_000 - 2049-03-22 04:26:40
  2_600_000_000 - 2052-05-22 14:13:20
  2_700_000_000 - 2055-07-24 00:00:00
  2_800_000_000 - 2058-09-23 09:46:40
  2_900_000_000 - 2061-11-23 19:33:20
  3_000_000_000 - 2065-01-24 05:20:00
  3_100_000_000 - 2068-03-26 15:06:40
  3_200_000_000 - 2071-05-28 00:53:20
  3_300_000_000 - 2074-07-28 10:40:00
  3_400_000_000 - 2077-09-27 20:26:40
  3_500_000_000 - 2080-11-28 06:13:20
  3_600_000_000 - 2084-01-29 16:00:00
  3_700_000_000 - 2087-04-01 01:46:40
  3_800_000_000 - 2090-06-01 11:33:20
  3_900_000_000 - 2093-08-01 21:20:00
  4_000_000_000 - 2096-10-02 07:06:40
  4_100_000_000 - 2099-12-03 16:53:20


That's the point at which your library's time_t reaches those values. But the point at which you will be 1.6Gs since the Unix v4 Epoch is in fact 27 seconds earlier, because your time_t hasn't counted the leap seconds over that period. To do that with Unix tools, you need to work with TAI timekeeping, not UTC, and remember that the (biased) zero point for TAI timekeeping is 10s before the Unix v4 Epoch.

    % jot - 10 20 | while read -r d 
    do 
       printf "@40000000%08x%08x %s00Ms SI since the Unix v4 Epoch\n" \
              "$d"00000010 0 "$d"
    done |
    TZ=right/UTC tai64nlocal
    2001-09-09 01:46:18.000000000 1000Ms SI since the Unix v4 Epoch
    2004-11-09 11:32:58.000000000 1100Ms SI since the Unix v4 Epoch
    2008-01-10 21:19:37.000000000 1200Ms SI since the Unix v4 Epoch
    2011-03-13 07:06:16.000000000 1300Ms SI since the Unix v4 Epoch
    2014-05-13 16:52:55.000000000 1400Ms SI since the Unix v4 Epoch
    2017-07-14 02:39:33.000000000 1500Ms SI since the Unix v4 Epoch
    2020-09-13 12:26:13.000000000 1600Ms SI since the Unix v4 Epoch
    2023-11-14 22:12:53.000000000 1700Ms SI since the Unix v4 Epoch
    2027-01-15 07:59:33.000000000 1800Ms SI since the Unix v4 Epoch
    2030-03-17 17:46:13.000000000 1900Ms SI since the Unix v4 Epoch
    2033-05-18 03:32:53.000000000 2000Ms SI since the Unix v4 Epoch
    %
Also, your figures for 0.0Gs and 0.1Gs are nonsense anyway.

Unix didn't adopt this Epoch and method of timekeeping until 4th Edition, somewhere in 1974 according to Unix historians. Yes, it's tricky to pin down the exact date of adoption. The Epoch was changed every year before then, as earlier versions of Unix measured time in 60ths of a second since the start of the year. This has made reconstructing Unix history from tape archives non-trivial. If you want to count seconds since the start of the first Unix Epoch, in 1st Edition, you have to count from the start of 1971. And no, that's not the same as how old Unix is.

(And yes, those values for 2023 onwards are somewhat speculative, as there will no doubt be more leap seconds.)


To add, Powershell:

    (Get-Date "1/1/1970").AddSeconds(1600000000).ToUniversalTime()
or alternatively for your local time:

    (Get-Date "1/1/1970").AddSeconds(1600000000).ToLocalTime()


Heh... I've run into this bug myself recently. Get-Date '...' returns a DateTime object of Unspecified kind, i.e. neither Local nor Utc. By design, such objects are interpreted as Local by ToUniversalTime() and as Utc by ToLocalTime(). They essentially assume the inverse of themselves, for better or worse.

Since the Kind property is readonly, to get the real UTC date you need:

    $naive = (Get-Date "1/1/1970").AddSeconds(1600000000)
    $local = $naive.ToLocalTime()
    $utc = $local.ToUniversalTime()


Or, equivalently,

    $utc = [DateTimeOffset]::FromUnixTimeSeconds(16e8).UtcDateTime
Incidentally,

    PS> ($utc, (Get-Date -AsUtc '1/1/1970'), (Get-Date -AsUtc), (Get-Date)).Kind
    Utc
    Utc
    Utc
    Local
In your example,

    $naive.Kind -eq [DateTimeKind]::Unspecified
correctly, because no DateTimeKind was specified.

The peculiar behavior you observed with ToLocalTime and ToUniversalTime exists to avoid breaking changes in working[1] code written for versions of the .NET Framework before 2.0, where DateTime.Kind was introduced (as was DateTimeOffset, so ToLocalTime and ToUniversalTime should probably be deprecated).

[1] "Breaking" changes exist for already-broken code, e.g., in Framework 2.0 and later, the return values of ToUniversalTime and ToLocalTime have Kinds Utc and Local, respectively, so applying the same conversion to an already-converted value no longer has any effect.


Interesting. I'm using Powershell 7 (e.g. Powershell Core) and I'm not sure exactly which version number specifically, but the functions seem to work as intended on my Windows 10 machine.


Omit the "u" to see the result in local time, not UTC.


Thank you both. I was looking for the right combination of flags and realized that the date(1) man page is far too dense for simple tasks like this.

  $ date -r 1600000000
  Sun Sep 13 05:26:40 PDT 2020


The datetime package is one of the best things about Python, and dare I say one of the best general purpose calendar modules ever written. It’s just so practical.


It still can’t parse ISO8601 :(


`datetime.fromisoformat()`, as of 3.7 :D

Though `dateutil` is still recommended for most cases.


That parses a subset and is only guaranteed to be compatible with Python’s “.isoformat()” output. (I imagine it would be backwards compatible to expand it to cover all of ISO8601 and I can’t tell why they haven’t.)


ISO 8601 is quite large; they probably don't want to ship a standard library module big enough to parse 100% of ISO 8601.


Python is great, but in this case, you could get almost identical output using bash alone:

  for t in {0..4100000000..100000000}; do TZ=UTC printf "%'13d - %(%Y-%m-%d %H:%M:%S)T\n" $t $t; done


arrow (my favorite), pendulum or delorean are in my opinion much better (= easier to use, versatile, ...)


I found Go's quite good as well


> 4_100_000_000 - 2099-12-03 16:53:20

We're going to party like it's 2099! Watch out for that Y2K+100 bug!!


It made me feel better doing the math - I'm likely to be alive for 3,000,000,000.


depends on how apocalyptic Y2038 is :)


Or the rest of Y2020, for that matter.


I can already hear people from 2033 celebrating 2b..


When we one day become a space faring civilization we will probably stop using the gregorian calendar because why would you use that on say Mars?

But there really is no reason to get rid of the unix timestamp as a measure of time and this measure may stay around for a long time. This may mean that people in the future might consider 1970 as year zero where modern civilization began.


In "Deepness in the Sky" a spaceship's computer is still running on Unix time many thousands of years into the future. It is generally believed that the calendar starts with the first Moon landing, and only the main character, as a "programmer-archeologist", discovers a small difference between it and true Unix time 0.


Little is known about “Unix” but it is believed he or she was a charismatic spiritual leader who brought hope, healing, and comfort to countless humans on Earth and inspired numerous disciples including “BSD”, “Linux”, and “System V”.


> Little is known about “Unix” but […]

As opposed to eunuchs.


Well, isn't Unix a castrated Multics?


Vernor Vinge sounds like an intriguing author!

“Most years, since its inception in 1999, Vinge has been on the Free Software Foundation's selection committee for their Award for the Advancement of Free Software.”[0]

Since “A Deepness in the Sky” was written in 1999 as a prequel to “A Fire Upon the Deep” (from 1992), which do you recommend to read first?

[0] https://en.m.wikipedia.org/wiki/Vernor_Vinge


Fire, then Deepness, then stop.


Or Fire, then The Peace War. Maybe dip into True Names, followed by Shockwave Rider by John Brunner.

Many people didn't like Deepness much. But the sequel to Fire, Children of the Sky, was interesting.


For me A Deepness in the Sky is his best, and one of the most memorable novels I've read.

The Peace War is a fascinating exploration of "what if this one thing was possible?" and all its terrible consequences, but Deepness grabbed me much more as a story.


In either case, definitely read The Shockwave Rider by Brunner.


Personally I found Fire and Deepness to be equally, profoundly, good. Haven’t found any other Vinge books to be nearly as good.


I quite enjoyed Rainbow's End as well.


There's an extra apostrophe there... this is addressed toward the end of the novel.


Yes, it's not "The end of this particular rainbow", it's "rainbows have an end". It bugged me until I got to that part of the book.


That comment is a bit of a spoiler, isn't it?


"A Deepness..." also used SI multiples of seconds to denote something approximating 24 hours, a week, a month etc.


A megasecond is a useful quantity: it's a bit less than 12 days. I mentally approximate it to a fortnight, partly because of the VMS bootloader which used microfortnights as an approximation for seconds https://en.wikipedia.org/wiki/List_of_unusual_units_of_measu...

Another handy time_t approximation is that a year is about 2^25 seconds, or 32 megaseconds.
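Quick sanity checks of those rules of thumb, in plain Python:

  print(14 * 86400 * 1e-6)       # a microfortnight: ~1.21 seconds
  print(1e6 / 86400)             # a megasecond: ~11.6 days
  print(2**25, 365.25 * 86400)   # 33554432 vs 31557600 seconds in a year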


He should have said what that difference is.


There is a very good reason to eventually abandon it: leap seconds. Unix time goes back one second whenever there is a leap second on Earth.

It's extremely weird and IMHO completely ruins the purpose of a timestamp, but it's a compromise for backwards compatibility, since Unix time was created before leap seconds. This hack ensures that the number of seconds in a day remains fixed, an assumption of many systems at the time.
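A minimal way to see this from Python (assuming an ordinary POSIX clock): 2016-12-31 really contained 86401 SI seconds because of the leap second at 23:59:60 UTC, yet Unix time insists exactly 86400 passed.

  from datetime import datetime, timezone

  t0 = datetime(2016, 12, 31, tzinfo=timezone.utc).timestamp()
  t1 = datetime(2017, 1, 1, tzinfo=timezone.utc).timestamp()
  print(t1 - t0)  # 86400.0 -- the extra SI second is invisible in Unix time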


Leap seconds were introduced in 1972. The time_t that you are thinking of was invented somewhere around 1974, according to Unix historians.

Ironically, your kernel measures TAI time perfectly happily if unmolested, as that always ticks once for every SI second (ignoring oscillator drift). No slewing. No smearing. No stepping.


Isn't a second just how long light takes to travel 299,792,458 m?

Also, "since Unix time was created before leap seconds." I found this interesting for the simple reason I've never spent any time thinking about Epoch vs Leap Second histories.


Yes, but the point is that the UNIX timestamp doesn't count the number of seconds elapsed since January 1, 1970. It counts 86400 seconds per day, regardless of how many seconds the day actually has (which can vary due to leap seconds).


Is this accurate though? Isn’t the number of seconds since 1970 absolute, and it’s up to the library generating the Gregorian date to take into account leap seconds? I suppose if these libraries are not taking into account leap seconds, then the actual rollover will be a few seconds earlier (or later?) than what we think.


See the definition of seconds since the epoch at https://pubs.opengroup.org/onlinepubs/9699919799.2018edition...

  tm_sec + tm_min*60 + tm_hour*3600 + tm_yday*86400 +
      (tm_year-70)*31536000 + ((tm_year-69)/4)*86400 -
      ((tm_year-1)/100)*86400 + ((tm_year+299)/400)*86400
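As a quick sanity check of that formula (a Python sketch, not libc's code): plugging in the broken-down fields for 2020-09-13 12:26:40 UTC reproduces 1600000000 exactly. Note that POSIX's tm_year counts from 1900, tm_yday from 0, and the divisions are integer divisions.

  import time

  tm = time.gmtime(1_600_000_000)   # 2020-09-13 12:26:40 UTC
  tm_year = tm.tm_year - 1900       # POSIX tm_year is years since 1900
  tm_yday = tm.tm_yday - 1          # POSIX tm_yday is zero-based

  t = (tm.tm_sec + tm.tm_min*60 + tm.tm_hour*3600 + tm_yday*86400
       + (tm_year-70)*31536000 + ((tm_year-69)//4)*86400
       - ((tm_year-1)//100)*86400 + ((tm_year+299)//400)*86400)
  print(t)  # 1600000000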


Your time_t isn't the true number of SI seconds since the Epoch. It's that number, mucked around 27 times so far. Instead of leaving their kernels to count one tick per SI second (ignoring oscillator drift) many people slew, step, or smear their seconds count every time that there is a leap second.

Go and read about "right" versus "posix" timezones.

* https://unix.stackexchange.com/a/294715/5132


IIUC unix timestamp simply counts seconds (well defined interval of time) from epoch onwards. It is then up to different calendars to interpret this number as a given day / hour /... This is the beauty of it - it is always an accurate (well, at low speeds at least ;) ) measure of time. In other words, no, some days might not last 86400 seconds.


This is not correct. UNIX timestamps always have 86400 seconds per day, and consequently, they do not actually count the number of seconds since the epoch.


I stand corrected - thank you for the clarification!


And the metre itself is defined as the length of the path traveled by light in vacuum during a time interval of 1/299,792,458 of a second.

So it's cyclic and doesn't make sense outside the earth reference frame

Edit: since 2019, 1 second is defined by taking the fixed numerical value of the caesium frequency ∆νCs, the unperturbed ground-state hyperfine transition frequency of the caesium-133 atom, to be 9192631770 when expressed in the unit Hz, which is equal to s^-1

So a second makes sense in the galactic scale at least as long as Caesium-133 is a stable isotope in that environment


> Since 1967, the second has been defined as exactly "the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom

> realisation of the metre is usually delineated (not defined) today in labs as 1579800.762042(33) wavelengths of helium-neon laser light in a vacuum


No. A second is "The duration of 9192631770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom."


No, that's how the metre is defined. The second is defined by the transition between two hyperfine states of cesium 133 (9,192,631,770 'oscillations').


Or lets say we colonize a planet with a significant time dilation (ie near a supermassive blackhole), how do you deal with this? Who has the canonical time?


AFAIK, there is no canonical time in relativity. There is the "local time", which is what your wristwatch shows, and the "time in place X", where X can be Earth, Mars, or some spaceship. These times might not be in sync, and they might even be distorted. Because, if place X is moving relative to you at a significant percentage of lightspeed, their seconds will be longer. Also, the further X is, the blurrier the concept of simultaneity becomes. Which makes the question "what time is it _now_ in place X" moot :)


If relativity doesn't allow time travel, doesn't that make time canonical? A clock on Earth, seen through a telescope, might look jumpy, but only in the forward direction after compensating for distance, I think.


Looking at a clock from Earth through a telescope isn't as straightforward as you'd think. The image you see of that clock is actually light (photons) emitted from Earth, which will take a while to reach you - like, 1 year, if you're 1 lightyear away. During that year, Earth has moved on, maybe blew itself apart. But you can't even tell, because information can't reach you faster than light :) So you can only see Earth's past, not Earth's "now". The further you are, the more "now" loses meaning.


Yeah but can it ever go back, other than by backing off faster than the speed of light? If it can’t then it’s at least monotonic, even if it’s not rate constant


All observers observe all clocks to advance monotonically forward in time, regardless of their location or relative speed, unless the clock is moving at the speed of light. But the rate that each clock is observed to advance at depends on things like relative speed, acceleration, and the curvature of spacetime.


... unless it's the system clock in Linux, with the hardware RTC set to be interpreted as local time, and one is watching a system boot on a machine that is set up as east of the Prime Meridian (i.e. the hardware RTC is ahead of UTC). (-:

* https://unix.stackexchange.com/a/600490/5132


Given that Unix time specifically doesn’t handle leap seconds, a system clock is not a clock by the conventional definition.


We'll probably come up with something similar to time zones once we become an interplanetary species.

"Local seconds since 1970", vs. "Static seconds since 1970".


You don't have to go to science fiction for this. Science fact will do. On Earth, with Atomic Time, we already have to account for general relativistic effects. Doco from the BIPM (the SI Brochure) already talks about accounting for them across the sizes of the measuring devices.


You'd just use some math to remove the influence of the black hole. Or for a more hands-on answer, you could have some reference beacons that are significantly further away from the black hole.

In a broad sense, the answer goes like this: Almost everything in the galaxy shares an approximate reference frame when it comes to velocity and special relativity. For general relativity and gravity, that's obviously a distortion so it doesn't count.

In the long term, to prevent very minor drift, we can use quasars as a reference.


Why would someone on Mars, or an asteroid, or orbiting venus, care about leap seconds on earth?

Number of seconds since a given time seems fine until we're travelling significantly outside of a common reference plane.


That’s exactly why op said we should abandon Unix time. The “who cares about leap seconds” standards is called TAI, and the difference between TAI and Unix time changes every time a leap second is introduced.

Unix time decrees every day has exactly 86400 seconds. But the introduction of leap seconds means that While most days have 86400, some earth days have 86401 seconds and potentially there could also be days with 86399

TAI makes sense everywhere, Unix time only on earth.


Because that's not how Unix time is defined. The leap second hack is hardcoded into it:

Unix time is [...] the number of seconds that have elapsed since the Unix epoch, minus leap seconds.

- Wikipedia

If you are on Mars you'll have to update your computer time every ~6 months when Earth releases a new table of leap seconds caused by e.g. earthquakes.


Or we just pretend they never happened.


The galactic GPS would be based on quasars, e.g. far away galaxies, that would appear in pretty much the same positions anywhere in the Milky Way. Quasar spins are gradually slowing down. So you could tell how many light years you are from the quasar by the observed spin rates. In addition, you can determine epoch time from knowing position and spin decay rates.

Carl Sagan put a quasar GPS on a golden plaque on both Voyager probes, so future intelligences could tell where and when the Voyagers came from.


Next iteration of unix-time should include the concept of local time in the context of relativity if we are going to travel or communicate over significant distances.


How? And what would be the place/value measured against? And would all "timekeepers" need to get signals from that place to get their offset? If so would we need to take lensing from gravitation into account?

How do you even measure a global value when the local perspective by definition changes the value?


Einstein: There is no such thing as an absolute time

Programmer: Hold my beer


More like:

Programmer: Next iteration of unix-time should include the concept of local time in context of relativity if we are going to travel or communicate over significant distances.

Einstein: Hold my beer


You mean that from a remote place or fast-moving ship, the time on Earth would appear to go at a different speed? So local time would not just have a constant offset?


If time is moving at a different speed, it wouldn't just be a constant offset. You'd also need a multiplier. Think of two cars moving at different speeds on a road. They don't stay the same distance apart.


Yea, but if we're going to use a "number of seconds since ..." Type system like Unix time, SOMEONE has to be the origin point, so I don't think this is a knock against Unix time.


It's unfortunate to use an accelerating reference (earth rotating sun, sun rotating galaxy, galaxy intermixing with andromeda) because it requires you to awkwardly keep track of the astronomy of earth for timekeeping. Instead, you would want a standardized reference time frame, say, "unix time assuming earth in 1970 never experienced any acceleration or dilation", and then everyone, including earth people, would track their acceleration and dilation offsets to be able to compare time when meeting or convert signal timestamps from any source back to their local time.


Well we already have to keep track of the size and shape of Earth. Go and read the SI Brochure from the BIPM. Local measurements by atomic clocks on Earth have to be weighted according to how far they are from a reference surface of equal gravitational potential. There's a publication named Circular T that comes out every 5 days.

You don't need science fiction for this.

* https://www.bipm.org/en/bipm-services/timescales/time-ftp/Ci...


You would want some global reference time, and that could be a number of seconds passing on Earth. However locally you can't work with a clock that doesn't tick at a constant rate (if seconds on your spaceship go twice as fast, the crew can work with that; if the rate changes day-to-day as you accelerate, that's a problem).

In addition, you will probably want clocks that run at a given perceived speed, to time physical processes that take the same time everywhere (e.g. so your cookie recipe saying "bake for 600 seconds" work without translation).


Quasar time plus position, motion vector, and acceleration vector should be a good enough reference for anybody.


You can't do math on that directly though. You'd still need to convert them to a common reference frame for comparison.


With Earth's day length becoming irrelevant, the 24 hour clock becomes irrelevant too, and the length of the sleep cycle may shift.

We could eventually arrive at stardates with one starday being 100000 seconds. Of course, that would be just as arbitrary as simply keeping the 24 hours of 60 minutes each.


Humans have a remarkable circadian rhythm that does a pretty good job at defining a day as “24 hours.”


There were experiments with forced desynchronization of human circadian rhythms.

http://www.chronobiology.ch/wp-content/uploads/publications/...

They were very limited, though. We do not really know what would happen on a spaceship that changed its day to 100 000 seconds and subjected the crew to this cycle for months or years.

Given how adaptable people usually are to external influences, I would guess that some adaptation would take place.


To my knowledge, once you take the sun away, it tends to drift more towards 25 or 26 hours. And that's in humans who experienced the externally imposed 24 hour cycle for decades before participating in a short experiment, not potential spacefarers born onboard of stations or ships.


Won't there be issues using the same time on different planets, since the speed of time is not constant?


Or some other significant event in the eye of this civilization.


I hope you are running an up-to-date Splunk version:

Beginning on September 13, 2020 at 12:26:39 PM Coordinated Universal Time (UTC), un-patched Splunk platform instances will be unable to recognize timestamps from events with dates that are based on Unix time, due to incorrect parsing of timestamp data.

https://docs.splunk.com/Documentation/Splunk/latest/ReleaseN...


I'm always left wondering how something like this could happen. I kind of get Y2K and stuff like overflows ... but this one? Really? Did someone put a regex like /^15... to "match" dates?


Yep, that's exactly what Splunk have done - scroll down the release notes linked to by the grandparent and the faulty regex is shown.

What's super daft is the proposed fix is only a further sticking plaster, adding support for the 16... range (and the 2020s decade) rather than all future dates. So in a couple of years a further patch will be needed...
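Purely as a hypothetical illustration of the failure mode (not Splunk's actual datetime.xml, which I won't reproduce here): a pattern anchored to the current leading digits of the epoch stops matching the moment those digits tick over, whereas matching any 10-digit value keeps working until 2286.

  import re

  fragile = re.compile(r"\b15\d{8}\b")      # only matches 15xxxxxxxx epochs
  sturdier = re.compile(r"\b\d{10}\b")      # any 10-digit epoch (2001-2286)

  for ts in ("1599999999", "1600000000"):
      print(ts, bool(fragile.search(ts)), bool(sturdier.search(ts)))
  # 1599999999 True True
  # 1600000000 False True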


To put out the immediate fire, this is probably the best solution, because the risk of unintended side effects is very low. I would just hope it is then followed by a proper fix.


Wow ... I'm ... I'm sure there's a better way to handle this.


> Problem

> Beginning on January 1, 2020, un-patched Splunk platform instances will be unable to recognize timestamps from events where the date contains a two-digit year. This means data that meets this criteria will be indexed with incorrect timestamps.

> Beginning on September 13, 2020 at 12:26:39 PM Coordinated Universal Time (UTC), un-patched Splunk platform instances will be unable to recognize timestamps from events with dates that are based on Unix time, due to incorrect parsing of timestamp data.

> Impact

> ...

> The issue appears when you have configured the input source to automatically determine timestamps, ...

> There is no method to correct the timestamps after the Splunk platform has ingested the data when the problem starts. If you ingest data with an un-patched Splunk platform instance beginning on January 1, 2020, you must patch the instance and re-ingest that data for its timestamps to be correct.

> Cause

> The Splunk platform input processor uses a file called datetime.xml to help the processor correctly determine timestamps based on incoming data. The file uses regular expressions to extract many different types of dates and timestamps from incoming data.

> On un-patched Splunk platform instances, the file supports the extraction of two-digit years of "19", that is, up to December 31, 2019. Beginning on January 1, 2020, these un-patched instances will mistakenly treat incoming data as having an invalid timestamp year, and could either add timestamps using the current year, or misinterpret the date incorrectly and add a timestamp with the misinterpreted date.


And how would such a solution not be immediately thrown out during code review?


All dates matching the pattern:

code:

  #!/bin/bash
  for t in $(seq 0 100000000 $((2**31))); do date -u -d @$t +'%s -> %c'; done

output:

  0 -> Do 01 Jan 1970 00:00:00 UTC
  100000000 -> Sa 03 Mär 1973 09:46:40 UTC
  200000000 -> Mo 03 Mai 1976 19:33:20 UTC
  300000000 -> Do 05 Jul 1979 05:20:00 UTC
  400000000 -> Sa 04 Sep 1982 15:06:40 UTC
  500000000 -> Di 05 Nov 1985 00:53:20 UTC
  600000000 -> Do 05 Jan 1989 10:40:00 UTC
  700000000 -> Sa 07 Mär 1992 20:26:40 UTC
  800000000 -> Di 09 Mai 1995 06:13:20 UTC
  900000000 -> Do 09 Jul 1998 16:00:00 UTC
  1000000000 -> So 09 Sep 2001 01:46:40 UTC
  1100000000 -> Di 09 Nov 2004 11:33:20 UTC
  1200000000 -> Do 10 Jan 2008 21:20:00 UTC
  1300000000 -> So 13 Mär 2011 07:06:40 UTC
  1400000000 -> Di 13 Mai 2014 16:53:20 UTC
  1500000000 -> Fr 14 Jul 2017 02:40:00 UTC
  1600000000 -> So 13 Sep 2020 12:26:40 UTC
  1700000000 -> Di 14 Nov 2023 22:13:20 UTC
  1800000000 -> Fr 15 Jan 2027 08:00:00 UTC
  1900000000 -> So 17 Mär 2030 17:46:40 UTC
  2000000000 -> Mi 18 Mai 2033 03:33:20 UTC
  2100000000 -> Fr 18 Jul 2036 13:20:00 UTC

--------------------------------------------------------------

overflow of the currently used data type for the timer happens at:

code:

  #!/bin/bash
  date -u -d @2147483648 +'%s -> %c'

output:

  2147483648 -> Di 19 Jan 2038 03:14:08 UTC

info:

https://en.wikipedia.org/wiki/Year_2038_problem

--------------------------------------------------------------

other interesting date:

pi day:

  date -u -d @3141592653 +'%s -> %c'

  3141592653 -> So 21 Jul 2069 00:37:33 UTC (100th birthday of the moon-landing mission, on pi-day/sec)


Day of week abbreviation translations:

  So Sunday
  Mo Monday
  Di Tuesday
  Mi Wednesday
  Do Thursday
  Fr Friday
  Sa Saturday
I think those are German.


correct


How exciting! 1500000000 occurred on 07/14/2017 @ 2:40am (UTC), so it seems we'll have to only wait about 3 years until the next one!


For those wondering, it’s roughly 1157.4 days (38 months)[0]

[0]: https://www.wolframalpha.com/input/?i=1e8+seconds+in+months


The fact that it happens so consistently indicates it is some kind of conspiracy.


Big Gravity is conspiring to keep us all down


At least it's cheap, clean, and reliable.


Almost like a bunch of folks worked together to make this happen, but only a relatively small number worldwide know who they are.


Unix didn't kill itself?


The year of the Linux desktop has been within us all along!


Like the ticking of a clock


anyone throwing a party for the illuminated?


A year is approximately PI × 1e7 seconds


"π seconds is a nanocentury."

https://en.wikipedia.org/wiki/Tom_Duff


The crazy thing is there have only been 1.6 billion seconds since 1970. There have been more babies since 1970 than there have been seconds. Which is crazy.

Back then, Earth's population was around 3.7 billion. Now it's 7.8 billion. That means that (ignoring deaths) 2.5 babies have been born for each Unix second. Or roughly 1 baby every 400 milliseconds.
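The back-of-the-envelope arithmetic, for anyone who wants to check (rough figures from above):

  seconds = 1.6e9                  # Unix seconds elapsed since 1970
  births = 7.8e9 - 3.7e9           # population growth, ignoring deaths

  print(births / seconds)          # ~2.56 babies per Unix second
  print(seconds / births * 1000)   # ~390 ms per baby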


I don't think that's too crazy. Seconds are serial, babies are parallel :)


Yet while one woman can generate 1 baby in 9 months, 9 women can't do it in 1 month


Underrated comment. Reply anticipation: this is not reddit. Preemptive response, I know, doesn't change the fact that this comment is underrated :);p xx

Only thing I want to add: not yet, maybe with CRISPR and artificial wombs there could be some way.


IIRC, it's ~5.5 births and 3 deaths per second (worldwide).

Also, see: https://www.worldometers.info/


That sounds about right. Approx 50 million deaths per year in that frame. 50 years....2.5 billion deaths. So 2.5+1.5 births and 1.5 deaths a second. If you round up deaths to whole numbers, 3 deaths per 2 seconds, maybe that's the error, adding the 3 deaths per 2 seconds to 2.5 deaths per 1 second. Or 8 births and 3 deaths every 2 seconds. Or 1 death every 700ms.


Also crazy to think of, is:

If you have a billion (i.e., 1E9) dollars, or any other currency, and you spend $86,400 a day, it will take you more than 31 years and 8 months to spend the entire billion.


According to https://ourworldindata.org/peak-child (2018) the number of children in the world is plateauing ("very close to a long flat peak") meaning that the total number of children may stay constant in the future.


Except in Africa. It is pretty important to everyone to increase the median wealth of people in Africa.


Linear vs quadratic growth.


Good point and great way to think about it. It would be weird if the number of seconds grew quadratically. I guess considering "particle interactions" and "light cone" the number of interactions among particles (and interactions among interactions) grows quadratically over time. I wonder how the universe doesn't end up with a backlog of information it hasn't yet processed. Maybe that's gravity/dark matter. :p ;) xx hehe


There's a nice countdown here: https://epochconverter.com/countdown


I quite like the Mayan counting system, so I made a little module that does the conversion and makes SVGs; this[1] is the only thing I've used it for so far, though.

[1]https://wolfram74.github.io/ArabIntToMayaInt/countdown.html


Reminds me of the countdown clock on predator's hand-mounted display in Predator[0]. Maybe they took inspiration from the Mayan counting system.

[0]: https://en.wikipedia.org/wiki/Predator_(film)


Or keep it local:

  (TZ=GMT0; t=1600000000; while :; do s=$(date +%s); echo $(expr "$s" - "$t") $(date -Iseconds -d @"$s") $(date -Iseconds -d @"$t"); sleep 1; done)


I enjoyed the lunch party we threw at work when it hit 1234567890.

I worked in Corporate IT at the time, so most of the office had no idea what we were celebrating. Somehow, neither did 80% of the tech staff. But the few of us who appreciated it really enjoyed the cake.


And today is day 256 of 2020


For those who would like to be able to figure day of year from date in the heads, here are a couple of reasonable ways.

1. Given month 1 <= m <= 12, day 0 of that month (i.e., the day before the first of the month) in a non-leap year is day 30(m-1) + F[m] of the year, where F[m] is from this array (1-based indexing!):

  0, 1, -1, 0, 0, 1, 1, 2, 3, 3, 4, 4
Add the day, and add 1 if it is a leap year and m >= 3.

E.g., September 12. m = 9, giving 240 + F[9] = 243 for Sep 0. Add 12 for the day, and 1 for the leap year, giving 256.

The F table has enough patterns within it to make it fairly easy to memorize. If you prefer memorizing formulas to memorizing tables, you can do months m >= 3 with F[m] = 6(m-4)//10, where // is the Python3 integer division operator. Then you just need to memorize the Jan is 0, Feb is 1, and the rest are 6(m-4)//10.

2. If you have the month terms from the Doomsday algorithm for doing day of week calculations already memorized, you can get F[m] from that instead of memorizing the F table.

F[m] == -(2m + 2 - M[m]) mod 7, where M[m] is the Doomsday month term (additive form): 4, 0, 0, 3, 5, 1, 3, 6, 2, 4, 0, 2.

You then just have to remember that -1 <= F <= 4, so adjust -(2m + 2 -M[m]) by an appropriate multiple of 7 to get into that range.

BTW, you can use that formula relating F[] and M[] the other way. If you have F[], then M[m] = (F[m] + 2m + 2) mod 7. Some people might find it easier to memorize F and compute M from that when doing day of week with Doomsday rather than memorize M.
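A small Python sketch of method 1, cross-checked against the standard library (names are mine, just for illustration):

  from datetime import date

  F = [0, 1, -1, 0, 0, 1, 1, 2, 3, 3, 4, 4]   # 1-based in the text; 0-based here

  def day_of_year(y, m, d):
      leap = (y % 4 == 0 and y % 100 != 0) or y % 400 == 0
      return 30*(m-1) + F[m-1] + d + (1 if leap and m >= 3 else 0)

  print(day_of_year(2020, 9, 12))                      # 256
  print(all(day_of_year(y, m, d) == date(y, m, d).timetuple().tm_yday
            for y in (2019, 2020)
            for m in range(1, 13)
            for d in (1, 15, 28)))                     # True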


I'll be waiting to C-elebrate for 1,610,612,736* which is 122 days later or January 13th plus or minus a day or two (I was lazy).

* 0x6000 0000


    % printf "@40000000%08x%08x %#xs SI since the Unix v4 Epoch\n" \
      0x6000000A 0 0x60000000 |
      TZ=right/UTC tai64nlocal
    2021-01-14 08:25:09.000000000 0x60000000s SI since the Unix v4 Epoch
    %


I have a twitter bot called time_t_emit that tweets palindromic seconds since the epoch. (somewhat inspired by the now defunct @megasecond)

https://twitter.com/time_t_emit

It's going to tweet twice tomorrow at lunch time (in my time zone) either side of the rollover.

I fondly remember the gigasecond party I went to with a load of friends - it happened in the small hours on Sunday morning, ideal party time for a bunch of geeks in their 20s, like an extra new year's eve!


The exact time this happens is Sun, 13 Sep 2020 12:26:40 +0000 (you can paste 1600000000 in 'make another conversion')


For those as oblivious to +0000 as I am, and wondering if that is GMT or UTC:

- GMT is a time zone officially used in some European and African countries. The time can be displayed using both the 24-hour format (0 - 24) or the 12-hour format (1 - 12 am/pm).

- UTC is not a time zone, but a time standard that is the basis for civil time and time zones worldwide. This means that no country or territory officially uses UTC as a local time.

Source: https://www.timeanddate.com/time/gmt-utc-time.html


You could also use "date -d @1600000000"


Missed opportunity for 13:13


`watch -n 1 date +%s`

Wish I knew when 1234567890 happened, I was too dumb back then.


I still remember the huge 1234567890 party. It was one of the nerdiest gatherings ever and the bar owner had no clue what was going on.

Coincidentally I consider this the end of "golden age" of the internet which ended as soon as the iPhone and smartphones in general got mainstream and everything moved to centralized services.



https://epochconverter.com is the first bookmark in my bookmark bar, just the icon and no text

I can only recommend using https://epochconverter.com instead of Google's preferred result https://unixtimestamp.com as it automatically detects if milliseconds are included, and it displays the local time more prominently.


Tangentially related, last week I made a tiny one-file Java lib for working with Unix timestamps in milliseconds (as returned by System.currentTimeMillis() etc). It’s several orders of magnitude faster than Java’s time API!

https://github.com/williame/TimeMillis

It’s probably not as fast as it could be: all speed ups and improvements welcome!


Oh god! Don’t tell me I have to upgrade Splunk again!


weird Q I was wondering about in regards to the epoch. (perhaps a stupid Q)

what would be the problems with choosing a new epoch (i.e. one where Jan 1 is the same day of week as Jan 1, 1970, and is 2 years before a leap year; perhaps such a year doesn't exist)?

The worst case I see is that apps that calculate years internally (instead of via a shared library or the like) would calculate them incorrectly. I'm wondering if this wouldn't be something that could be massaged around. Of course, it just pushes the problem down the road.

it also makes timestamps like this incompatible between systems that have different epochs, which could be an issue.

as I said, naive (possibly stupid) Q, just wondering if people have actually talked about it?



It's gonna be so fun. In UTC:

  >>> import datetime
  >>> datetime.datetime.now().timestamp()
  1599923432.252943
  >>> datetime.datetime.fromtimestamp(16e8)
  datetime.datetime(2020, 9, 13, 8, 26, 40)


Any hackernews Zoom parties happening to commemorate the event?


It's not a round number, not like Sat Jan 10 13:37:04 UTC 2004


What were those parties like?


Sorry for sounding ignorant, but why is this noteworthy?


There is nothing inherently noteworthy about it.

However, just like new years, it's the kind of sentimental human thing to celebrate, where patterns and round numbers bring joy to certain people.


Avoir 15, you were my favorite time prefix so far :(


> Avoir

You probably meant “Au revoir”. Then again, “Au revoir” means roughly “Until we meet again”, and we won’t be seeing the “15” prefix again.


That's Sunday, 13-Sep-20 12:26:40 UTC in RFC 2822


What a ride fellas


From the, "I don't know who needs to desperately update their automated scripts..." dept. :rofl_emoticon:[0]

[0] Yes, I know this doesn't get transformed here.


Only the 9th time that’s ever happened!


for my tribe counting in the best epoch this event commemorates the passing of 0x5F5E1000


saved you two place digits to keep track of


Crazy how nature do that


Going from 1.5 Gs to 1.6 Gs


I make a bold prediction that we'll run out of digits rather soon.


2037 is a potential overflow, I believe. I imagine only pre 2000 systems would likely be affected.


Only pre 2000 systems? I'd love to live in your dream world.

Last I looked, plenty of fixes were trickling into the kernel in 2014. I wonder how many of those made the long backport to stable.

Let alone all those 2.6 kernels (and older!) in the wild. And all your 32-bit devices are probably gonna have a bad time.


Kernel will hopefully be ready; user space is another story.


There may be a lot of embedded systems affected though! Most embedded C uses 32 bit integers, especially for the Real Time Clock peripherals!


But what self-respecting embedded-system coder would use a signed 32-bit value?

Various libc choices in use probably have time_t signed.


And pre-2000 filesystems, like FAT. As well as a lot of filesystems developed after that, which also use 32-bit file creation/modification timestamps.


2038 is when signed 32-bit ints overflow. That's only running out of binary digits though, not decimal like the event under discussion.

  In [1]: import datetime
  In [2]: datetime.datetime.utcfromtimestamp(2**31)
  Out[2]: datetime.datetime(2038, 1, 19, 3, 14, 8)


That is just when signed 32 bits rolls over. Unsigned takes us all the way to 2106.

Nervous nellies worried about rollover, and insisting we switch to 64-bit timestamps everywhere, are a nuisance. You can keep one 64-bit epoch, such as boot time or most recent foreign attack on NYC, and as many 32-bit offsets from that as you like, and always be able to get a 64-bit time whenever you need it. Most often you only need a difference, so one epoch is as good as any, and you can work purely in 32 bits.

Pcap format is good until 2106 assuming the 1970 epoch. A bit of one of the now-unused header fields could be repurposed to indicate another.
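A minimal sketch of that scheme in Python (all names hypothetical): keep one wide base epoch, store narrow 32-bit offsets against it, and reconstruct full times only when you need them.

  BASE = 1_600_000_000                 # one 64-bit epoch, e.g. capture start

  def pack(t):                         # store a time as a 32-bit offset
      return (t - BASE) & 0xFFFFFFFF

  def unpack(offset):                  # recover the full time
      return BASE + offset

  t = 2_000_000_000
  assert unpack(pack(t)) == t
  print(pack(t))                       # 400000000 -- fits comfortably in 32 bits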



