Hacker News | new | past | comments | ask | show | jobs | submit | lamename's comments

I tried to upload a 239 KB pdf and it said "Daily processing limit reached".

Yea, looks like a lot of people uploaded articles today. I have a 20 article per day cap now because I’m paying for it.

I could change to a simple cost+ model but don’t want to bother until I see if people like it.

Ideas for splitting the difference so more people can use it without breaking the bank are appreciated.


You should just whip up some simple cost plus payment, with a low plus.

I'd probably use it now.


cool, thanks

So far i really like what it does for the example articles shown. I want to test it on 1 or 2 articles I know well, and if it passes that test it's a product I'd totally pay for.

appreciate it, thanks

What's the cost per article?

Avg cost $0.65
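For what it's worth, the cost-plus pricing suggested above is a one-liner. The $0.65 average cost is from this thread, but the 20% markup is purely an assumed "low plus", not a real figure:

```python
# Hypothetical cost-plus pricing sketch. The $0.65 average cost per
# article comes from the thread above; the 20% markup is an assumption.
AVG_COST_PER_ARTICLE = 0.65
MARKUP = 0.20  # assumed "low plus"

def price_per_article(cost=AVG_COST_PER_ARTICLE, markup=MARKUP):
    """Return a cost-plus price, rounded to the nearest cent."""
    return round(cost * (1 + markup), 2)

print(price_per_article())  # 0.65 * 1.20 -> 0.78
```

At 20 articles per day, that would cap a heavy user at about $15.60/day of revenue against $13.00 of cost, under these assumed numbers.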

Me too. I'm very interested to see what it can do.

thanks

I generally agree with the broader point you're making, but I also think there's nothing wrong with pointing out how messed up it is that that's the reality of the choice. The whole point of improving society is to eliminate this kind of dilemma


It’s messed up that this has to be done. But overall positive change.


Why does it have to be done?


Laws, primarily.

There's also a lot of content that companies don't want to host or show to their users in general.


That poor people get the worst of the jobs? What’s the alternative?


Who says this particular job is a necessary one?


What’s your alternative?


Jury duty for all online fora maybe?


Because one could sell DDoS services that overload the target network with porn.


Maybe social media for this content isn’t sustainable or wise?


Maybe social media of the kind which creates this problem isn't sustainable or wise.


You're talking about making the internet as a whole a view-only experience where all content is curated and made by trusted gatekeepers.

As long as a website allows user generated content, there will be this need to moderate it


No, he's talking about the Internet of old, where if you wanted to post anything you first had to stand up a server.


You’re saying there were no forum boards nor comment sections anywhere? And everyone self-hosted every single piece of content they wished to send into the world?


Maybe. Those could be the case. But ignoring all confounding factors, this phenomenon is possible with numerical experiments alone. One of the meanings of "the Law of Small Numbers".

Basically, the possibility is that the small study was underpowered and just got lucky... then the large studies with more power are closer to the truth. https://en.wikipedia.org/wiki/Faulty_generalization


Sure, could be just lucky. But if there are several successful small studies, and several unsuccessful large ones (no idea if this is the case here), we should probably look for a better explanation.


It does not require more explanation: publication bias means null results aren't in the literature; do enough small low quality trials and you'll find a big effect sooner or later.

Then the supposed big effect attracts attention and ultimately properly designed studies which show no effect.
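As a toy illustration of that last point (purely a sketch, not from the thread): simulate many two-arm trials where the true effect is exactly zero, and the small, underpowered ones occasionally show eye-catching apparent effects, while the large ones stay close to the truth:

```python
import random
import statistics

random.seed(0)

def run_trial(n, true_effect=0.0):
    """Simulate one two-arm trial with n subjects per arm and return
    the observed effect (difference in means, unit-variance noise)."""
    control = [random.gauss(0, 1) for _ in range(n)]
    treated = [random.gauss(true_effect, 1) for _ in range(n)]
    return statistics.mean(treated) - statistics.mean(control)

# True effect is zero in every trial below, so any observed "effect"
# is pure chance. Small trials have much noisier estimates.
small = [run_trial(n=10) for _ in range(200)]
large = [run_trial(n=1000) for _ in range(200)]

print("largest |effect| in 200 small trials:", round(max(abs(e) for e in small), 2))
print("largest |effect| in 200 large trials:", round(max(abs(e) for e in large), 2))
```

If only the most striking results get published, the literature ends up dominated by the lucky small trials, until a properly powered study comes along.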


I agree with most everything you said. The problem has always been the short-term job loss, particularly today, when society as a whole has resources for safety nets but hasn't implemented them.

Anger at companies that hold power in multiple places to prevent or worsen this situation for people is valid anger.


There's another problem with who gets to capture all of the resulting wealth from the higher tech-assisted productivity.


> The problem has always been the short-term job loss

Does anyone have any idea of the new jobs that will be created to replace the ones that are being lost? If it's not possible to at least foresee it, then it's not likely to happen. In which case the job loss will be long-term not short-term.


As much as I like the article, I begrudgingly agree with you, which is why I think the author mentions the physical constraints of energy as the future wall that companies will have to deal with.

The question is do we think that will actually happen?

Personally I would love if it did, then this post would have the last laugh (as would I), but I think companies realize this energy problem already. Just search for the headlines of big tech funding or otherwise supporting nuclear reactors, power grid upgrades, etc.


In my experience in neuroscience it even differs widely across programs/universities. Some good professors care about giving good talks, and if you're lucky it becomes contagious in the program. Others think less of you if your talk is clear; some are too naive to realize obscurity is not a virtue.


Yeah, but still "scary" because you have to be really careful not to fool yourself and pay attention even with those algorithms. For example, a good demonstration with t-SNE: https://distill.pub/2016/misread-tsne/?hl=cs
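A quick sketch of the "don't fool yourself" point (assuming scikit-learn is available; the data and perplexity values here are made up): even pure noise gets embedded into 2D, and the picture you get depends heavily on hyperparameters like perplexity:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# 50 points of pure Gaussian noise: there is no real cluster structure.
X = rng.normal(size=(50, 5))

# The same data embedded with two different perplexities can look very
# different; apparent "clusters" in one view may be pure artifacts.
emb_low = TSNE(n_components=2, perplexity=2, init="random", random_state=0).fit_transform(X)
emb_high = TSNE(n_components=2, perplexity=30, init="random", random_state=0).fit_transform(X)

print(emb_low.shape, emb_high.shape)
```

Plotting the two embeddings side by side (as the Distill article does at scale) is the real demonstration; the code above is just the minimal setup.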


Being there 24/7? Yes. Better job? I'll believe it when I see it. You're arguing 2 different things at once


Plus, 24/7 access isn't necessarily the best for patients. Crisis hotlines exist for good reason, but for most other issues it can become a crutch if patients are able to seek constant reassurance vs building skills of resiliency, learning to push through discomfort, etc. Ideally patients are "let loose" between sessions and return to the provider with updates on how they fared on their own.


But by arguing two different things at once it's possible to facilely switch from one to the other to your argument's convenience.

Or do you not want to help people who are suffering? (/s)


Wow a sane person among all the hype. Great to see you!


Lol. Yeah, the hype train blinds.


I agree with your point except for scientific papers. Let's push ourselves to use precise language, not shorthand or hand-waving, in technical papers and publications, yes? If not there, of all places, then where?


"Know" doesn't have any rigorous precisely-defined senses to be used! Asking for it not to be used colloquially is the same as asking for it never to be used at all.

I mean - people have been saying stuff like "grep knows whether it's writing to stdout" for decades. In the context of talking about computer programs, that usage of "know" is the established/only usage, so it's hard to imagine any typical HN reader seeing TFA's title and interpreting it as an epistemological claim. Rather, it seems to me that the people suggesting "know" mustn't be used about LLMs because of epistemology are the ones departing from standard usage.


Colloquial use of "know" implies anthropomorphisation. Arguing that using "knowing" in the title and "awareness" and "superhuman" in the abstract is just colloquial for "matching" is splitting hairs to an absurd degree.


You missed the substance of my comment. Certainly the title is anthropomorphism - and anthropomorphism is a rhetorical device, not a scientific claim. The reader can understand that TFA means it non-rigorously, because there is no rigorous thing for it to mean.

As such, to me the complaint behind this thread falls into the category of "I know exactly what TFA meant but I want to argue about how it was phrased", which is definitely not my favorite part of the HN comment taxonomy.


I see. Thanks for clarifying. I did want to argue about how it was phrased and what it is alluding to. Implying increased risk from "knowing" the eval regime is roughly as weak as the definition of "knowing". It can equally be a measure of general detection capability as of evaluation incapability - i.e., unlikely to be newsworthy, unless it reached the top of HN because of the "know" in the title.


Thanks for replying - I kind of follow you but I only skimmed the paper. To be clear I was more responding to the replies about cognition, than to what you said about the eval regime.

Incidentally I think you might be misreading the paper's use of "superhuman"? I assume it's being used to mean "at a higher rate than the human control group", not (ironically) in the colloquial "amazing!" sense.


I really do agree with your point overall, but in a technical paper I do think even word choice can be implicitly a claim. Scientists present what they know or are claiming and thus word it carefully.

My background is neuroscience, where anthropomorphising is particularly discouraged, because it assumes knowledge or certainty of an unknowable internal state, so the language is carefully constructed e.g. when explaining animal behavior, and it's for good reason.

I think the same is true here for a model "knowing" something, both in isolation within this paper, and, come on, consider the broader context of AI and AGI as a whole. Thus it's the responsibility of the authors to write accordingly. If it were a blog I wouldn't care, but it's not. I hold technical papers to a higher standard.

If we simply disagree that's fine, but we do disagree.

