Hacker News | thbb123's comments

Fun fact: in the '90s, the reference gauge for aircraft safety was 1 accidental fatality per 100 million hours of passenger flight. That is amazingly safe, far better than cars and on a par with trains.

Now, facing the growth of air travel, it was decided to raise this bar to 1 per billion hours. Not as an end in itself - this comes at a very high cost and had a significant impact on travel prices - but because, with the growth of air travel, the old threshold would have implied one major accident per fortnight on average. And because those accidents are more spectacular and more heavily relayed by the media, civil aviation authorities feared this might raise angst and deter the public from air travel.

So, safety was enhanced, but mostly for marketing reasons.


I'm trying to reconcile your numbers with the Wikipedia "Aviation safety” article https://en.m.wikipedia.org/wiki/Aviation_safety

which for 2019 describes "0.5 accidents per million departures" and "40 fatalities per trillion revenue passenger kilometers". Considering that many or most passengers fly at close to 800-1000 km/h, we're still quite a bit above 1 fatality per 100 million passenger hours.

Would a factor of 10 be enough? Suppose we go from one major accident per fortnight to one per five months (10 fortnights). Is that higher than what we have seen in the past thirty years?
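Back-of-the-envelope, the two conversions above work out like this (the 900 km/h cruise speed is my assumption, not a figure from the article):

```python
# Convert 2019's "40 fatalities per trillion revenue passenger km"
# into fatalities per 100 million passenger hours.
fatalities_per_rpk = 40 / 1e12   # fatalities per revenue passenger km (Wikipedia, 2019)
cruise_speed_kmh = 900           # assumed typical cruise speed

per_passenger_hour = fatalities_per_rpk * cruise_speed_kmh
per_100m_hours = per_passenger_hour * 1e8
print(per_100m_hours)            # ~3.6 fatalities per 100 million passenger hours

# The "factor of 10" scenario: one accident per 10 fortnights.
interval_months = 10 * 14 / 30.4 # 10 fortnights expressed in average months
print(interval_months)           # ~4.6, i.e. roughly one per five months
```

So at an assumed 900 km/h, the 2019 figure lands at roughly 3.6 fatalities per 100 million passenger hours, a few times above the old '90s reference gauge.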


My numbers come from conversations I recall with René Amalberti, a notable specialist in the area, having advised, among others, Airbus. The conversations were around 1993-96, when I was doing my PhD, and thus may be a bit blurry by now. Also, it is perfectly possible the reference values and measurement units have evolved since then.

Still, your projection shows that both the reference indicators and the actual values are in the ballpark of the estimates I cited.

My (and Amalberti's) main point is that safety assessment is not just about minimizing the raw number of accidents, but involves tradeoffs between various concerns, including psychological perception and revenue. Otherwise, the safest airline would be the one that does not fly anyone.


The problem I see with decentralized protocols is that node owners can easily be spotted, then crushed under legal constraints that will leave them less secure than a strong multinational that is there just for profit and can balance the legal fight for relative privacy against its own interest in protecting its customers.


> a strong multinational

Don't you think that makes them obvious high-value targets? This supposedly pragmatic take is not even without precedent in the real world: the Snowden revelations showed that all major tech companies were in bed with the NSA, spying extrajudicially on everyone. It's a leap of optimism to think they would "fight legally in their own interest to protect their customers".

Then, compare that to the low-scale/low-value/hobbyist/residential service providers. How high do you think the chances are for a malicious state actor to "corrupt" many service operators without it being widely known and publicly dealt with?

There's also a deniability dimension to this: XMPP uses OMEMO for end-to-end encryption, so whatever the users are doing is none of the operator's business, and the choice of encryption scheme and implementation is purely a client-side affair. You are then no longer dealing with "reluctant" operators, but with potentially millions of end users using strong encryption. And that is assuming the server operates in the open: nothing prevents service operators from offering it over Tor (with very little impact on the end-user side), further raising the bar for the malicious state actor.


How come US celebrities have to create their foundations in Sweden instead of the US?


Or simply: yelling "fire" for no reason in a crowded space must be a sin, much as it is a limit on free speech.


In the Catholic church, it'd probably invoke three paths to sin: scandal, which is causing others to do evil without their intending it; justice, which requires respecting the dignity and safety of others; and, of course, lying.


> keep a human in the loop before executing the kill chain, or to reduce Skynet-like tail risks in line with Paul Christiano's arms race doom scenario.

It is a little-known secret that plenty of defense systems are already set up to dispense with the human-in-the-loop protocol before a fire action - primarily for defense, but also for attack once a target has been designated. I worked on such protocols in the '90s, and this decision was already accepted.

It happens to be so effective that the military won't budge on this.

Also, an autonomous decision system in a kill chain is not much worse than the alternative, if you consider that the alternative is a dumb system such as a landmine.

Btw: while there is always a "stop button" in these systems, don't be fooled. Those are meant to provide a semblance of comfort and compliance to the designers of those systems, but they are hardly effective in practice.


I disagree that you need a lot of space for self-hosting. Unless you want to host streaming content for thousands of users, an Intel NUC or Raspberry Pi on top of your router is plenty to host Nextcloud, some webservers with decent traffic (assuming you have a gigabit connection, which is now commonplace), email, backups, and a media server for family and friends.


Wouldn't it be rather awkward to set up a redundant RAID array on one of those, though? That is something you definitely want on a server that stores backups. I know you can obviously connect as many hard drives as you want to a Raspberry Pi via USB, but that feels wrong for a server. An Intel NUC at least has Thunderbolt and probably some internal SATA ports.


Thinking of something like APL or J?


CEOs like to brag about how AI is going to replace skilled workers. Yet it should be obvious to anyone with experience of LLMs that top executive jobs are the ones most easily replaceable by AI.

Just keep smooth-talking everyone into cost reductions and making arbitrary decisions to make it feel like you're actually in charge.


> anyone having experience in LLMs that top executives are the jobs that are most likely replaceable by AI.

That’s not my assessment. Why do you say that?


They make that claim because what CEOs write could easily be produced by a Markov model, never mind an LLM. If that were the primary value a CEO brought to the company, they'd be right.

The most important thing a CEO brings is relationships. LLMs can't do that (yet).

Postscript: there's still a chance that LLMs replace CEOs, because LLMs are easier for the board to influence/control.


Because 95% of the time it's "follow the herd" BS plus always deferring to the lowest cost. The rest is schmoozing, making people feel comfortable, sounding confident, and working the soft skills.

ChatGPT always sounds confident, and it's not hard for it to calculate the lowest-cost option and take it.


because ceo bad/dumb and eng good/smart


[flagged]


From the HN guidelines[0]:

> Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.

[0] https://news.ycombinator.com/newsguidelines.html


> Yet, it should be obvious to anyone having experience in LLMs that top executives are the jobs that are most likely replaceable by AI.

Yes given those are jobs where hallucination is a feature not a bug


Seems to me like a couple of if-then-else statements could do a better job than some executives.


Wouldn't it be funny to see an AI bot replace the CEO of a company like Boeing, and then see the company turn around with positive moves? Feed an LLM all of the business data a CEO would have, then ask it to make the decisions the CEO would be expected to make. Since layoffs are the trend online for LLMs to train on, would they come up with slop that follows the same trends? If companies are willing to replace devs with AI, why are boards not looking at all of the C-suite offices with the same mindset?


Same with the Federal Reserve. Why not continuously adjust interest rates, instead of all of Wall Street agonizing over each decision made?


How would not adjusting vs. always adjusting be any different? Isn't leaving the rate alone a decision in itself?


Because the agonizing is part of their job. That's called "animal spirits" / managing expectations. The expectation of future interest rates is nearly as important as what they actually are.


Wall Street would be screwed by this. You can't take away the gambling aspect and the big wins, because then you are left with a pre-'80s-style market where dividends and steady income matter more than "line goes up next quarter". The current market would HATE that.


couldn't agree more.

Many economists point out that the Fed's policies serve the 1% above all.

Heck, you can get a Nobel Prize based on your Fed chairmanship, then tank the economy.

One pundit observes that the Fed is an example of "burn the village to save the village", as (rarely) an underemployed firefighter-turned-arsonist will do. An extreme perspective, for sure.

In the era of Big Data, can't real-time data and policy co-exist?


It seems to me that a large percentage of jobs exist just to exist, and that they use their continued existence to justify their continued existence. I wonder how much the world would keep spinning if 90% of people were laid off. Maybe we'll find out if AI is adopted widely enough...


If data was reliable, instant, and comprehensive, this might work. Of course, under those circumstances socialism or communism would probably work out even better.

It is precisely because data comes out murkily, with a lag - and the effects of changes have a lag as well - that managing the Federal Reserve can't be reduced to a simple process. It is an art done by humans, one where "general trust in the institution" has been the single most important variable of the last 40 years.


Interest rates work with different lags in different parts of the economy. The current hiking cycle has basically only just taken its full effect (think of when people refinance debt, and when they delay doing so).


CEOs don't have any data.


You can't put an LLM in charge of a company. You need an actual human to take legal responsib- oh, wait a second...


Algorithm aversion and automation bias have been thoroughly studied over the past 70 years of human factors research in industrial safety. All in all, the thought processes of humans are not always compatible with the evidence on which automation works.

Check out Fitts's list and HABA-MABA ("humans are better at / machines are better at") for more results.


In the late '90s, I attended a talk by Ted Nelson, the guy who coined the term hypertext. To him, things started going downhill with HTML and the URL. The gist of his complaint was that he wanted links to be bidirectional.

In the '80s, telecom operators were complaining that TCP/IP and packet switching were a regression from circuit switching.

So it looks like the internet has progressed through perpetual regression.


The internet is 30-40 years old, and has brought an entirely new paradigm to the world. It has abolished distances, disproportionately increasing the reach of a few.

I'd love to share your optimism that things will keep improving in the long run, but I don't see what you're basing that on.

