While I echo some of your points, [1] is a bad example (speaking as a Canadian).

Research money in Canada is harder to come by; a basic research grant is roughly 5-10x smaller than a comparable American grant (students are cheaper here, so it's not completely proportional, but equipment, travel, etc. don't scale).

The example of money for poaching international researchers also comes with an asterisk: while they found ~$2B for this, they are also cutting the base funding of the federal granting agencies by a few percent at the same time, on top of that funding having been anemic for decades at this point. A big "fuck you" to the Canadian research community, in my opinion.


Also a physicist here -- I had the same reaction. Going from (35-38) to (39) doesn't look like much of a leap for a human. They say (35-38) was obtained from the full result by the LLM, but if the authors derived the full expression in (29-32) themselves, presumably they could do the special case too, given it's much simpler? The more I read the post and the preprint, the less clear it is which parts the LLM actually did.

Is anyone else having trouble using even some of the basic features? For example, I can open a comment, but there doesn't seem to be any way to close it (I try clicking the checkmark and nothing happens). You also can't seem to edit comments once they're typed.


Thanks for surfacing this. If you click the "tools" button to the left of "compile", you'll see a list of comments, and you can resolve them from there. We'll keep improving and fixing things that might be rough around the edges.

EDIT: Fixed :)


Thanks! (very quickly too)


In my circles, the killer features of Overleaf are the collaborative ones (easy sharing, multi-user editing with track changes/comments). Academic writing in my community went from emailing around draft-new-FINAL-v4.tex files (or a shared folder full of them) to people just dumping everything on Overleaf fairly quickly.


Seems like someone dug something up from the literature on this problem (see the top comment on the erdosproblems.com thread):

"On following the references, it seems that the result in fact follows (after applying Rogers' theorem) from a 1936 paper of Davenport and Erdos (!), which proves the second result you mention. ... In the meantime, I am moving this problem to Section 2 on the wiki (though the new proof is still rather different from the literature proof)."


This is a comparison between a new, interactive medium (+ slides, mind maps, etc.) and a static PDF book as a control. How do we know that a non-AI-based interactive book wouldn't give similar (modest) increases in performance, without any of the personalization AI enables?


Thank you for this comment, it is exactly my impression of all of this as well.


At one point this states:

> Claude was also able to create a list of leaders with the Department of Energy Title17 credit programs, Exim DFC, and other federal credit programs that the team should interview. In addition, it created a list of leaders within Congressional Budget Office and the Office of Management and Budget that would be able to provide insights. See the demo here:

and then there is a video of them "doing" this. But the video basically just has Claude responding with "I'm sorry, I can't do that, please look at their website/etc".

Am I missing something here?


It happens again in the next video. It says:

> The team came up with a use case the teaching team hadn’t thought of – using AI to critique the team’s own hypotheses. The AI not only gave them criticism but supported it with links from published scholars. See the demo here:

But the video just shows Claude giving some criticism and then telling them to go look at some journals and talk to experts (it doesn't give any references or specifics).


That was really weird. I did try this with ChatGPT 4o and it seemed to do a good job of creating this list. But I don't know anything about this field, so I don't know how accurate it is.


I'm not the person you're replying to, but in my subfield (scientist is such a broad term), I would say at least half of the key problems listed in the article are basically non-issues. Things really are quite different from field to field.


And in many subfields there is a preprint freely available on the arXiv during those three months.

