Perseids's comments | Hacker News

I'm dumbfounded they chose the name of the infamous NSA mass surveillance program revealed by Snowden in 2013. And even more so that there is just one other comment among 320 pointing this out [1]. Has the technical and scientific community in the US already forgotten this huge breach of trust? This is especially jarring at a time when the US is burning its political goodwill at an unprecedented rate (at least unprecedented during the lifetimes of most of us) and talking about digital sovereignty has become mainstream in Europe. As a company trying to promote a product, I would stay as far away from that memory as possible, at least if you care about international markets.

[1] https://news.ycombinator.com/item?id=46787165


>I'm dumbfounded they chose the name of the infamous NSA mass surveillance program revealed by Snowden in 2013. And even more so that there is just one other comment among 320 pointing this out

I just think it's silly to obsess over words like that. There are many words that take on different meanings in different contexts and can be associated with different events, ideas, products, time periods, etc. Would you feel better if they named it "Polyhedron"?


What the OP was talking about is the negative connotation that goes with the word; it's certainly a poor choice from a marketing point of view.

You may say it's "silly to obsess", but it's like naming a product "Auschwitz" and saying "it's just a city name" -- it ignores the power of what Geoffrey N. Leech called "associative meaning" in his taxonomy of "Seven Types of Meaning" (Semantics, 2nd ed. 1989): speaking that city's name evokes images of piles of corpses of gassed, undernourished human beings, walls of gas chambers with fingernail scratches, and lampshades made of human skin.


Well, I don't know anything about marketing and you might have a point, but the severity of impact of these two words is clearly very different, so it doesn't look like a good comparison to me. It would raise quite a few eyebrows (and more) if, for example, someone released a Linux distro named "Auschwitz OS". Meanwhile, even in the software world, there are multiple products that incorporate the word prism in various ways[1][2][3][4][5][6][7][8][9]. I don't believe that an average user encountering the word "prism" immediately starts thinking about the NSA surveillance program.

[1] https://www.prisma.io/

[2] https://prism-pipeline.com/

[3] https://prismppm.com/

[4] https://prismlibrary.com/

[5] https://3dprism.eu/en/

[6] https://www.graphpad.com/features

[7] https://www.prismsoftware.com/

[8] https://prismlive.com/en_us/

[9] https://github.com/Project-Prism/Prism-OS


I think the idea was to explain why choosing such a name is a problem; it is not a comparison of intensity / importance.

I am not sure you can make an argument from "other people are doing it too". Lots of people do things that are not in their interest (e.g. smoking, to pick an easy one).

As others mentioned, I did not associate the word prism with that negative connotation either, but I am not sure how one could check that anyway. It is not like I haven't been surprised over the years by what some other people think, so who knows... Maybe someone with experience in marketing could explain how this is done.


But without the extremity of the Auschwitz example, it suddenly is not a problem. Prism is an unbelievably generic word, and I had not even heard of the Snowden one until now, nor would I remember it if I had. Prism is one step away from "Triangle" in terms of how generic it is.


Triangle kind of reminds me of the Bermuda Triangle. You know how many people died there?


People? Do you know how many of them are murderers, fraudsters, and all-around finks? That's a terrible thing to mention.


One more perspective to add: while I did not know the NSA program was called Prism, it did give me pause to find out in this thread. OpenAI surely knows what it was called, or at least they should. So it raises the question of why.

If they claim in a private meeting with people at the NSA that they did it as a tribute to them and a bid for partnership, who would anyone here be to say they didn't? Even if they didn't... which is only relevant because OpenAI processes an absolute shitton of data the NSA would be interested in.


And of course, the prism itself:

https://en.wikipedia.org/wiki/Prism_(optics)

I remember the NSA Prism program, but hearing prism today I would think first of Newton, optics, and rainbows.


When you’re as high profile as OpenAI, you don’t get judged like everyone else. People scrutinize your choices reflexively, and that’s just the tax of being a famous brand: it amplifies both the upsides and the blowback.

Most ordinary users won’t recognize the smaller products you listed, but they will recognize OpenAI and they’ll recognize Snowden/NSA adjacent references because those have seeped into mainstream culture. And even if the average user doesn’t immediately make the connection, someone in their orbit on social media almost certainly will and they’ll happily spin it into a theory for engagement.


Do a lot of people know that Prism is the name of the program? I certainly didn't, and I consider myself fairly switched on in general.


It's likely to be an age thing too. Were you in hacker-related spaces when the Snowden scandal happened?

(I expect a much higher than average share of people in academia were also part of these spaces.)


We had a local child day care provider call themselves ISIS. That was a blast.


There was a TV show called "The Mighty Isis" in the 70s. What were they thinking?! (Well, with Joanna Cameron around, I wouldn't be able to think too clearly either.)


We had a local siding company call themselves "The Vinyl Solution". Some people are just tone-deaf.


I think the point is that on the sliding scale of words that are no longer acceptable to use, "Prism" does not reach the level of "Auschwitz".

Most people don't even remember Snowden at this point.


I have to say I had the same reaction. Sure, "prism" shows up in many contexts. But here it shows up in the context of a company and product that is already constantly in the news for its lackluster regard for other people's expectation of privacy, copyright, and generally trying to "collect it all" as it were, and that, as GP mentioned, in an international context that doesn't put these efforts in the best light.

They're of course free to choose this name. I'm just also surprised they would do so.


Plus there are lots of “legacy” products with the name prism in them. I also don’t think the public makes the connection. It’s mainly people who care to be aware of government overreach who think it’s a bad word association.


But the contexts are closely related.

Large-scale technology projects that people are suspicious and anxious about. There are a lot of people anxious that AI will be used for mass surveillance by governments. So you pick the name of another project that was used for mass surveillance by a government.


Sure. Like Goebbels. Because they gobble things up.

Also, Nazism. But different context, years ago, so whatever I guess?

Hell, let's just call it Hitler. Different context!

Given what they do, it is an insidious name. Words matter.


Comparing words with unique widespread notoriety with a simple, everyday one. Try again.


Prism in tech is very well-known to be a surveillance program.

Coming from a company involved in sharing data with intelligence services (it's the law, you can't escape it), this is not wise at all. Unless nobody at OpenAI has heard of it.

It was one of the biggest scandals in tech 10 years ago.

They could call it "Workspace". More clear, more useful, no need to use a code-word, that would have been fine for internal use.


So you have to resort to the most extreme examples in order to make it a problem? Do you also think of Hitler when you encounter a word "vegetarian"?


Is that what you think Hitler was very famous for?

The extreme examples are an analogy that highlights the shape of the comparison with a more generally loathed / less niche example.

OpenAI is a thing with lots and lots of personal data that consumers trust OpenAI not to abuse or lose. They chose a product name that matches a US government program that secretly and illegally breached exactly that kind of trust.

Hitler-the-vegetarian isn't a great analogy because vegetarianism isn't related to what made Hitler bad. Something closer might be Exxon or BP making a hair gel called "Oilspill", or DuPont making a nail polish called "Forever Chem".

They could have chosen anything but they chose one specifically matching a recent data stealing and abuse scandal.


Huh... it seems like a head-scratcher why it would be relevant to this argument to select objectionable words instead of benign, inert words.


You do realize that obsessing over words like that is a pretty major part of what programming and computer science is, right? Linguistics is highly intertwined with computer science.


>Has the technical and scientific community in the US already forgotten this huge breach of trust?

Have you ever seen the comment section of a Snowden thread here? A lot of users here call for Snowden to be jailed, call him a Russian asset, play down the reports, etc. These are either NSA sock puppet accounts or people who won't bite the hand that feeds them (employees of companies willing to breach their users' trust).

Edit: see my comment here in a Snowden thread: https://news.ycombinator.com/item?id=46237098


What Snowden did was heroic. What was shameful was the world's underwhelming reaction. Where were all the images in the media of protest marches, like those against the Vietnam War?

Someone once said "Religion is opium for the people." Today, give people a mobile device and some doom-scrolling social media celebrity nonsense app, and they wouldn't notice if their own children didn't come home from school.


Looking back, I think allowing private parties more centralized control over various forms of media did much more harm overall in the long run than government surveillance.

For me the problem was not surveillance; the problem is addiction-focused app building (plus the monopoly), and that never seemed to be a secret. Only now are there some attempts to do something (like Australia and France banning children, which I am not sure is feasible or effective, but at least it is more than zero).


Remember when people and tech companies protested against SOPA and PIPA? Remember the SOPA blackout day? Today even worse laws are passed with cheers from the HN crowd, such as the OSA. Embarrassing.


Protests in 2025 alone have outnumbered those during the Vietnam War.

Protesting is a poor proxy for American political engagement.

Child neglect and missing children rates are lower than they were 50 years ago.


Are you asserting that anyone who disagrees with you is either part of a propaganda campaign or a cynical insider? Nobody who opposes you has a truly held belief?


So you hate waffles?


Him being (or, best case, becoming) a Russian asset turned out to be true.


As if it would matter for any of the revelations. And as if he had any other choice that didn't end in prison. Look at how it worked out for Assange.


They both undertook something they believed in, and showed extreme courage.

And they did manage to get the word out. They are both relatively free now, but it is true, they both paid a price.

Idealism means following your principles despite that price, not escaping or evading the consequences.


Assange became a Russian asset *while* in a whistleblowing-related job.

(And he is also the reason why Snowden ended up in Russia. Though it's possible that the flight plan they had was still the best one in that situation.)


So exposing corruption of Western governments is not worthwhile because it 'helps' Russia? Aha, got it.

I am increasingly wondering what remains of the supposed superiority of the Western system if we're willing to compromise on everything to suit our political ends.

The point was supposed to be that the truth is worth having out there for the purpose of having an informed public, no matter how it was (potentially) obtained.

In the end, we may end up with everything we fear about China but worse infrastructure and still somehow think we're better.


No, exposing Western corruption is all well and good, but the problem is that at some point Assange seems to have decided "the enemy of my enemy is my friend", which was a very bad idea when applied to Putin's Russia.


> Assange seems to have decided "the enemy of my enemy is my friend", which was a very bad idea when applied to Putin's Russia

What if he simply decided that the information he obtained is worth having out there no matter the source? It seems to me that you're simply upset that he dared to do so and are trying very hard to come up with a rationalization for why he's a Bad Guy(tm) for daring to turn the tables. It's a transparent and rather lackluster attempt to shift the conversation from what to who.


No, I'm upset that he took money from the Kremlin and hosted a show on Russia Today. (At least it was before 2014 I guess...)


One can only hope that you're at least as upset at the double-tapping criminals he exposed.


Obama and Biden chased him into a corner. They actually bragged about chasing him into Russia, because it was a convenient narrative to smear Snowden with after the fact.

It was Russia, or vanish into a black site, never to be seen or heard from again.


If the messenger has anything to do with Russia, even after the fact, we should dismiss the message and remember to never look up.


Truth is truth, no matter the source.



There is also the truth that you say, and the truth that you feel


In what way did it "turn out to be true"? Because he has russian citizenship and is living in a country that is not allied with his home country that is/was actively trying to kill him (and revoked his US passport)?


He could have been a Chinese asset, but the CCP is a coward.


These things don't really seem related at all. It's a pretty generic term.


FWIW, my immediate reaction was the same "That reminds me of NSA PRISM"


It reminded me of the code highlighter[0], and the ORM Prisma[1].

[0] https://prismjs.com/

[1] https://www.prisma.io/


It reminded me of the album cover to Dark Side of The Moon by Pink Floyd.


Same here.


Same, to the point where I was wondering if someone deliberately named it so. But I expect that whoever made this decision simply doesn't know or care.


I came here based on the headline expecting some more CIA & NSA shit; that word has been tarnished for a few decades in the better part of the IT community (the part that actually cares about this craft beyond a paycheck).


And yet, the name immediately reminded me of the Snowden revelations.


They are farming scientists for insight.


This comment might make more sense if there was some connection or similarity between the OpenAI "Prism" product and the NSA surveillance program. There doesn't appear to be.


Except that this lets OpenAI gain research data and scientific ideas by stealing from their users, using their huge mass surveillance platform. So, tremendous overlap.


Isn't most research and scientific data already shared openly (usually in publications)?


"Except that this lets OpenAI gain research data and scientific ideas by stealing from their users, using their huge mass surveillance platform. So, tremendous overlap."

Even if what you say is completely untrue (and who really knows for sure)... it creates that mental association. It's a horrible product name.


This comment allows ycombinator to steal ideas from their users' comments, using their huge mass news platform. Tremendous overlap indeed.


OpenAI has a former NSA director on its board. [1] This connection makes the dilution of the term "PRISM" in search results a potential benefit to NSA interests.

[1]: https://openai.com/index/openai-appoints-retired-us-army-gen...


>Has the technical and scientific community in the US already forgotten this huge breach of trust?

Yes, imho, there is a great deal of ignorance of the actual contents of the NSA leaks.

The agitprop against Snowden as a "Russian agent" has successfully occluded the actual scandal, which is that the NSA has built a totalitarian-authoritarian apparatus that is still in wide use.

Autocrats' general hubris about their own superiority has been weaponized against them. Instead of actually addressing the issue with America's repressive military industrial complex, they kill the messenger.


Probably gonna get buried at the bottom of this thread, but:

There's a good chance they just asked GPT5.2 for a name. I know for a fact that when some of the OpenAI models get stuck in the "weird" state associated with LLM psychosis, three of the things they really like talking about are spirals, fractals, and prisms. Presumably, there's some general bias toward those concepts in the weights.


tons of things are called prism.

(full disclosure: yes, they will be handing in PII on demand under the same kinds of deals, this is 'normal'; 2012 shows us no one gives a shit)


> Has the technical and scientific community in the US already forgotten this huge breach of trust?

We haven't forgotten... it's mostly that we're all jaded, given that there have been zero ramifications, and so what's the use of complaining; you're better off pushing shit up a hill.


We used to have "SEO spam", where people would try to create news (and other) articles associated with some word or concept to drown out a scandal associated with that same word or concept. The idea was that people searching Google for the word would see only the newly created articles, and not see anything scandalous. This could be something similar, but aimed at future LLMs trained on these articles. If LLMs learn that the word "Prism" means a certain new thing in a surveillance context, the LLMs will unlearn the older association, thereby hiding the Snowden revelations.


As a datapoint, when I read this headline, the very first thing I thought was "wasn't PRISM some NSA shit? Is OpenAI working with the NSA now?"

It's a horrible name for any product coming out of a company like OpenAI. People are super sensitive to privacy and government snooping and OpenAI is a ripe target for that sort of thinking. It's a pretty bad association. You do not want your AI company to be in any way associated with government surveillance programs no matter how old they are.


I mean, it's also the name of the national engineering education journal and a few other things. There are only 14,000 5-letter words in English, so you're going to have collisions.


I get what you're saying, but that was 13 years ago. How long before the branding statute of limitations runs out on usage for a simple noun?


Fwiw I was going to make the same comment about the naming, but you beat me to it.


Yeah, to be fair I would be hesitant to have anything to do with any program called prism as well. Hard to imagine that no one brought this up when they were thinking of a name.


Do they care what anyone over 30 thinks?


Considering OpenAI is deeply rooted in anti-freedom ethos and surveillance capitalism, I think it is quite a self aware and fitting name.


Sorry, did you read this https://blog.cleancoder.com/uncle-bob/2018/12/14/SJWJS.html?

I personally associate Prism with [Silverlight - Composite Web Apps With Prism](https://learn.microsoft.com/en-us/archive/msdn-magazine/2009...) due to personal reasons I don't want to talk about ;))


I did not make the association at all


I think it's probably just apparent to a small set of people; we're usually the ones yelling at the stupid cloud technologies that are ravaging online privacy and liberty, anyway. I was expecting some sort of OpenAI automated user data handling program, with the recent venture into adtech, but since it's a science project and nothing to do with surveillance and user data, I think it's fine.

If it was part of their adtech systems and them dipping their toe into the enshittification pool, it would have been a legendarily tone deaf project name, but as it is, I think it's fine.


money is a powerful amnesiac


That’s funny af


I still can't get over the Apple thing. Haven't enjoyed a ripe McIntosh since. </s>


You misunderstand. The physicists are developing their own software to analyze their experimental data. They typically have little software development experience, but there is seldom someone more knowledgeable available to support them. Making matters worse, they often are not at all interested in software development and thus don't invest the time to learn more than the absolute minimum necessary to solve their current problem, even if it could save them a lot of time in the long run. (Even though I find the situation frustrating, I can't say I don't relate, given that I feel the same way about LaTeX.)


Honestly, they should be using conda (if they're working on their laptops) and the cluster package manager otherwise.


Conda has slowly but surely gone down the drain as well. It used to be bulletproof, but there too you now get absolutely unsolvable circular dependencies.


I'd be curious to see what these circular dependencies you're running into are (not saying I don't believe you, and I do recall conda doing some dumb stuff in its early days, but that particular issue seems odd).

As for why conda: wheels do not have post-installation hooks (which, given the issues with npm, I'm certainly a fan of), and while for most packages this isn't an issue, I've encountered enough packages where sadly they are required (for integration purposes), and the PyPI packages are subtly broken on install without them. Additionally, conda (especially Anaconda Inc's commercial repositories) has significantly more optimised builds (not as good as the custom builds well-run clusters provide, but better than PyPI-provided ones). I personally do not use conda (because I tend to want to test/modify/patch/upstream packages lower down the chain and test with higher-up packages), but for novices (especially novices on Windows), conda for all its faults is the best option for those in the "data science" ecosystem.


I haven't ever experienced this yet, what packages were involved?


Good question. I can't backtrack right now, but it was apmplanner that I had to compile from source, and it contains some Python that gets executed during the build process (I haven't seen it try to run it during normal execution yet).

Probably either python-serial or python-pexpect, judging by the file dates, and neither of these is so exciting that there should have been any version conflicts at all.

And the only reason I had to rebuild it at all was another version conflict: the apm distribution expects a particular version of pixbuf to be present on the system, all hell breaks loose if it isn't, and you can't install that version on a modern system because it breaks other packages.

It is insane how bad all this package management crap is. The GNU project and the linux kernel are the only ones that have never given me any trouble.


I wish non-conformity was more of a thing at points where it actually matters. Your product manager asks you to add invasive user tracking and surveillance? Push back and explain how this makes the world a worse place. Got a ticket to implement a "[yes][ask me later]" dialog [1]? Make a short survey that shows how users hate it. Nobody listens to you? Refuse to comply. The government requires you to take deeply unethical or unlawful actions? Sabotage the feature [2] (or quit/resign).

Performative non-conformance might be e.g. helpful to nurture a culture of critical thinking, but if it is just performative, then it is worthless.

(I write this with no intent to criticize you, burningChrome, or Jyn. You might very well do just that.)

(Also, I'm aware that the ability to push back is very unevenly distributed. I'm addressing those who can afford this agency. And non-conformance is a spectrum: you can also push back a little without choosing this specific point to be the hill to die on. Every bit counts.)

[1] https://idiallo.com/blog/hostile-not-enshittification

[2] https://www.404media.co/heres-a-pdf-version-of-the-cia-guide...


Yeah, agreed. Otherwise it's a kind of low stakes "non-conformity", even a conformity of sorts (because everything lowercase is/was actually an internet fad, so it's a kind of "extremely online" conformity).

Non-conformity where it matters would be a lot better, but it's also scarier.


To cite and expand on lambdaone below [1]:

> Clearly power capacity cost (scaling compressors/expanders and related kit) and energy storage cost (scaling gasbags and storage vessels) are decoupled from one another in this design

Lambdaone is differentiating between the cost of storing energy (capacity measured in kWh or joules) and the cost of moving that energy in and out quickly (power, measured in watts). If you want to absorb all the excess energy that solar panels and wind turbines generate on a sunny, windy day, you need a lot of power capability (gigawatts during peak generation). This can be profitable even if you only have a low energy storage capacity, e.g. if you can only store a day's worth of excess solar/wind energy, because you can sell this energy in the short term, for example the following night, when the data centers are still running but solar panels don't produce power. This is what batteries give you -- high power capability but low energy storage capacity.

Of course, you can always buy more batteries to increase the energy storage capacity, but they are very expensive per unit of energy (kWh) stored. In contrast, these CO2 "batteries" are very cheap per unit of energy stored -- "just" build more high-pressure tanks -- but expensive per unit of power, because to handle more power you need to build more of the expensive compressors, coolers, etc. This ability to scale out the energy storage capacity independently of the power capability is what lambdaone was referring to with the decoupling.

What is this useful for? For shifting energy over longer spans of time. Because the energy storage costs of batteries are so high, they are a bad fit for storing excess energy in the summer (lots of solar) and releasing it in the winter (lots of heating). I'm not sure if these "CO2" batteries are good for such long time frames (maybe pressure loss is too high), but the claim most certainly is that they can shift energy over a longer time frame than batteries can in an economically profitable fashion.

[1] https://news.ycombinator.com/item?id=46347251
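To make the decoupling concrete, here is a minimal back-of-the-envelope sketch in Python. All cost figures are invented placeholders purely for illustration (they are not from the article or any vendor); the point is only how total system cost scales when power capacity and energy capacity are priced separately.

    # Hypothetical cost model: total cost = power-capacity cost + energy-capacity cost.
    # All numbers are made-up placeholders, NOT real prices.

    def system_cost(power_mw, hours_of_storage, cost_per_kw, cost_per_kwh):
        """Cost of a plant that can deliver `power_mw` for `hours_of_storage` hours."""
        energy_mwh = power_mw * hours_of_storage
        return power_mw * 1000 * cost_per_kw + energy_mwh * 1000 * cost_per_kwh

    POWER_MW = 100  # size of the charge/discharge equipment, identical for both plants

    for hours in (1, 4, 24, 24 * 7):
        # Battery-like: cheap power electronics, expensive energy capacity.
        battery = system_cost(POWER_MW, hours, cost_per_kw=100, cost_per_kwh=300)
        # CO2/compressed-gas-like: expensive compressors/expanders, cheap tanks.
        co2 = system_cost(POWER_MW, hours, cost_per_kw=600, cost_per_kwh=30)
        print(f"{hours:4d} h of storage: battery ~${battery/1e6:7.0f}M, tank-based ~${co2/1e6:7.0f}M")

With these made-up numbers the battery wins for an hour or two of storage, while the tank-based design wins once you want a day or a week of capacity behind the same compressors, which is exactly the decoupling described above.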


What an excellent explanation, thanks


> But that's literally the question I'm asking. Where do you draw the line in a way that stops what we consider to be abuses, but doesn't stop what we think of as legitimate uses by journalists, academics, etc.?

I think the wrong assumption you're making is that there is supposed to be a simple answer, something you can describe in a thousand words. But with messy reality this is basically never the case: where do you draw the line on what is considered a taxable business? What are the limits of free speech? What procedures should be paid for by health insurance?

It is important to accept this messiness and the complexity it brings instead of giving up and declaring the problem unsolvable. If you have ever asked yourself, why the GDPR is so difficult and so multifaceted in its implications, the messiness you are pointing out is the reason.

And of course, the answer to your question is: Look at the GDPR and European legislation as a precedent to where you draw the line for each instance and situation. It's not perfect of course, but given the problem, it can't be.


Generally, you do want the general principle of something like this to be explainable in a few sentences, yes.

Even if that results in a bunch of more detailed regulations, we can then understand the principles behind those regulations, even if they decide a bunch of edge cases with precise lines that seem arbitrary.

Things like the limits of free speech can be explained in a few sentences at a high level. So yes, I'm asking for what the equivalent might be here.

The idea that "it's so impossibly complicated that the general approach can't even be summarized" is not helpful. Even when regulations are complicated, they start from a few basic principles that can be clearly enumerated.


This is not how things ever work in practice in representative democracy. The world is too complex, and the many overlapping political groups in a country/province/city have different takes on what the policy should be and, more importantly, different tolerances for what they will accept.

Because everyone has different principles by which they evaluate the world, most laws don't actually care about principles. They are simply arbitrary lines in the sand drawn by the legislature in a bid to satisfy (or not dissatisfy) as many groups as possible. Sometimes some vague-sounding principles are attached to the laws, but it's always impossible for someone else to start with the same principles and derive the exact same law from them.

Constitutions on the other hand seem simple and often have simple sounding principles in them. The reason is that constitutions specify what the State institutions can and cannot do. The State is a relatively simple system compared to the world, so constitutions seem simple. Laws on the other hand specify what everyone else must or must not do, and they must deal with messy reality.


This is not just unhelpful (and overly cynical), but it is untrue.

Courts follow the law, but they also make determinations all the time based on the underlying principles when the law itself is not clear.

Law school itself is largely about learning all the relevant principles at work. (Along with lots of memorization of cases demonstrating which principle won where.)

I understand you're trying to take a realist or pragmatic approach, but you seem to have gone way too far in that direction.


The principle is that you should be able to casually document what you see in public, but you should not be able to intrude on the privacy of others.


Emphasis on casual, IMO. It is perfectly reasonable to decide that past norms which evolved in the absence of large scale computing power, digital cameras, and interconnected everything do not translate to the right to extrapolate freedom of casual observation into computer-assisted stalking.


> If people moved to other providers, things would still go down, more likely than not it would be more downtime in aggregate, just spread out so you wouldn't notice as much.

That is the point, though: correlated outages are worse than uncorrelated outages. If one payment provider has an outage, choose another card or another store and you can still buy your goods. If all of them are down, no one can buy anything[1]. If a small region has a power blackout, all surrounding regions can provide emergency support. If the whole country has a blackout, all emergency responders are tied up locally.

[1] Except with cash -- might be worth keeping a stash handy for such purposes.
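To see why the correlation matters even at identical per-service uptime, here is a toy Monte Carlo sketch in Python (the outage probability is a made-up illustrative number, not a real SLA): with one shared provider, a single outage takes every service down at once, while with independent providers there is essentially never an hour in which everything is down simultaneously.

    import random

    random.seed(42)
    HOURS = 1_000_000        # simulated hours
    P_DOWN = 0.0005          # hypothetical per-hour outage probability (~99.95% uptime)
    SERVICES = 10            # number of payment providers / services

    all_down_shared = 0      # every service rides on the same provider
    all_down_independent = 0 # each service fails independently

    for _ in range(HOURS):
        if random.random() < P_DOWN:
            all_down_shared += 1      # one provider outage takes everything down together
        if all(random.random() < P_DOWN for _ in range(SERVICES)):
            all_down_independent += 1 # all ten must fail in the same hour

    print("hours with everything down, shared provider:      ", all_down_shared)
    print("hours with everything down, independent providers:", all_down_independent)

The total downtime per service is the same in both scenarios; only the correlation differs, and that is what decides whether there is always some provider left to fall back on.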


Yeah, exactly this. I don’t know why the person who responded to me is talking about survivorship bias… and I suppose I don’t really care because there’s a bigger point.

The internet was originally intended to be decentralised. That decentralisation begets resilience.

That’s exactly the opposite of what we saw with this outage. AWS has give or take 30% of the infra market, including many nationally or globally well known companies… which meant the outage caused huge global disruption of services that many, many people and organisations use on a day to day basis.

Choosing AWS, squinted at through a somewhat particular pair of operational and financial spectacles, can often make sense. Certainly it’s a default cloud option in many orgs, and always in contention to be considered by everyone else.

But my contention is that at a higher level than individual orgs - at a societal level - that does not make sense. And it’s just not OK for government and business to be disrupted on a global scale because one provider had a problem. Hence my comment on legislators.

It is super weird to me that, apparently, that’s an unorthodox and unreasonable viewpoint.

But you’ve described it very elegantly: 99.99% (or pick the number of 9s you want) uptime with uncorrelated outages is way better than that same uptime with correlated, and particularly heavily correlated, outages.


> > Can one really speak of efficient markets

> Yes, free markets and monopolies are not incompatible.

How did you get from "efficient markets" to "free markets"? The former could be accepted as inherently valuable, while the latter clearly is not if this kind of freedom degrades to: "Sure you can start your business, it's a free country. You will certainly fail, though, because there are monopolies already in place that have all the power in the market."

Also, monopolies are regularly used to squeeze exorbitant shares of the added values from the other market participants, see e.g. Apple's AppStore cut. Accepting that as "efficient" would be a really unusual usage of the term in regard to markets.


The term "efficient markets" tends to confuse and mislead people. It refers to a particular narrow form of "efficiency", which is definitely not the same thing as "socially optimal". It's more like "inexploitability"; the idea is that in a big enough world, any limited opportunities to easily extract value will be taken (up to the opportunity cost of the labor of the people who can take them), so you shouldn't expect to find any unless you have an edge. The standard metaphor is, if I told you that there's a $20 bill on the sidewalk in Times Square and it's been there all week, you shouldn't believe me, because if it were there, someone would have picked it up.

(The terminology is especially unfortunate because people tend to view it as praise for free markets, and since that's an ideological claim people respond with opposing ideological claims, and now the conversation is about ideology instead of about understanding a specific phenomenon in economics.)

This is fully compatible with Apple's App Store revenue share existing and not creating value (i.e., being rent). What the efficient markets principle tells us is that, if it were possible for someone else to start their own app store with a smaller revenue share and steal Apple's customers that way, then their revenue share would already be much lower, to account for that. Since this isn't the case, we can conclude that there's some reason why starting your own competing app store wouldn't work. Of course, we already separately know what that reason is: an app store needs to be on people's existing devices to succeed, and your competing one wouldn't be.

Similarly, if it were possible to spend $10 million to create an API-compatible clone of CUDA, and then save more than $10 million by not having to pay huge margins to Nvidia, then someone would have already done it. So we can conclude that either it can't be done for $10 million, or it wouldn't create $10 million of value. In this case, the first seems more likely, and the comment above hypothesizes why: because an incomplete clone wouldn't produce $10 million of value, and a complete one would cost much more than $10 million. Alternatively, if Nvidia could enforce intellectual property rights against someone creating such a clone, that would also explain it.

(Technically it's possible that this could instead be explained by a free-rider problem; i.e., such a clone would create more value than it would cost, but no company wants to sponsor it because they're all waiting for some other company to do it and then save the $10 million it would cost to do it themselves. But this seems unlikely; big tech companies often spend more than $10 million on open source projects of strategic significance, which a CUDA clone would have.)


You scuttled your argument by using apple AppStore as an example.


This feature actually existed (see https://en.wikipedia.org/wiki/HTTP/2_Server_Push ) but was deemed a failure unfortunately (see https://developer.chrome.com/blog/removing-push )


Thanks for the links! Yes, my comment was based of a vague recollection of this kind of thing.

I'll read up on '103 Early Hints', 'preload', and 'preconnect', which might be close enough in practice.
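In case it helps with that reading, here is a minimal stdlib-only sketch (assuming Python; the port and stylesheet path are arbitrary) of the preload mechanism: a `Link: rel=preload` header on an ordinary 200 response, which is the same header that 103 Early Hints later sends even earlier in the exchange.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class PreloadHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"<html><head><link rel=stylesheet href=/style.css></head><body>hi</body></html>"
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            # Hint that /style.css will be needed, so the browser can fetch it
            # before it has parsed the HTML that references it.
            self.send_header("Link", "</style.css>; rel=preload; as=style")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("localhost", 8000), PreloadHandler).serve_forever()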


> I'm personally pretty skeptical that the first round of PQC algorithms have no classically-exploitable holes

I was under the impression that this was the majority opinion. Is there any serious party that doesn't advocate hybrid schemes, where you need to break both well-worn ECC and the PQC scheme to get anywhere?

> The standard line is around store-now-decrypt-later, though, and I think it's a legitimate one if you have information that will need to be secret in 10-20 years. People rarely have that kind of information, though.

The stronger argument, in my opinion, is that some industries move glacially slowly. If we don't start pushing now, they won't be any kind of ready when (/if) quantum computing attacks become feasible. Take industrial automation: implementing strong authentication / integrity protection, versatile authorization, and reasonable encryption into what would elsewhere be called IoT is only now becoming a trend. The state of the art is still "put everything inside a VPN and we're good". These devices usually have an expected operational lifetime of at least a decade, often more than one.

To also give the most prominent counter argument: Quantum computing threats are far from my greatest concerns in these areas. The most important contribution to "quantum readiness"[1] is just making it feasible to update these devices at all, once they are installed at the customer.

[1] Marketing is its own kind of hell. Some circles have begun to use "cyber" interchangeably with "IT security" -- not "cyber security", mind you, just "cyber".
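For anyone wondering what "hybrid" means mechanically: both key exchanges are run and both shared secrets are fed into one KDF, so an attacker must break both to recover the session key. A rough sketch, with placeholder secrets standing in for the real X25519 and ML-KEM/Kyber outputs (a real deployment would follow one of the specified hybrid constructions, e.g. the hybrid TLS key-share drafts, rather than this ad-hoc combination):

    import hashlib
    import hmac
    import os

    def hkdf(salt: bytes, ikm: bytes, info: bytes, length: int = 32) -> bytes:
        """Minimal HKDF (RFC 5869) over SHA-256."""
        prk = hmac.new(salt, ikm, hashlib.sha256).digest()
        okm, block, counter = b"", b"", 1
        while len(okm) < length:
            block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
            okm += block
            counter += 1
        return okm[:length]

    # Placeholders for the outputs of the two real key exchanges.
    classical_secret = os.urandom(32)     # e.g. X25519 ECDH shared secret
    post_quantum_secret = os.urandom(32)  # e.g. ML-KEM (Kyber) decapsulated secret

    # The session key depends on *both* inputs: breaking only the ECC part or
    # only the PQC part still leaves 32 unknown bytes going into the KDF.
    session_key = hkdf(
        salt=b"hybrid-kex-demo",
        ikm=classical_secret + post_quantum_secret,
        info=b"application traffic key",
    )
    print(session_key.hex())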


Yes: there are reasonable, reputable cryptographers who advocate against hybrid cryptosystems.


Could you be so kind as to provide a link or reference? I'd like to read their reasoning. Given the novelty of e.g. Kyber, relying on it alone seems bonkers.


No. I don't agree with them.


My intuition went for video compression artifact instead of AI modeling problem. There is even a moment directly before the cut that can be interpreted as the next key frame clearing up the face. To be honest, the whole video could have fooled me. There is definitely an aspect in discerning these videos that can be trained just by watching more of them with a critical eye, so try to be kind to those that did not concern themselves with generative AI as much as you have.


Yeah, it's unfortunate that video compression already introduces artifacts into real videos, so minor genAI artifacts don't stand out.

It also took me a while to find any truly unambiguous signs of AI generation. For example, the reflection on the inside of the windows is wonky, but in real life warped glass can also produce weird reflections. I finally found a dark rectangle inside the door window, which at first stays fixed like a sign on the glass. However it then begins to move like part of the reflection, which really broke the illusion for me.

