So yesterday we had a post (https://news.ycombinator.com/item?id=35200267) claiming that you need a special "play as a grandmaster" prompt to reduce the number of illegal moves GPT-3.5 makes, and that GPT-4 completely "sucks" at chess compared to GPT-3.5.
Now we have this post saying that GPT-4 plays well and doesn't make illegal moves at all. What changed? What was the prompt? Is it just random noise?
If you've checked the author's previous post [1], you'll see that he admits to being 1200 Elo on chess.com, which is beginner / early-intermediate level. So him losing to GPT-4 may not mean much. Maybe that explains the supposed contradiction here.
The problem with ChatGPT seems to be that it often gives answers that appear plausible on the surface, but with enough knowledge you realize they are inaccurate or even outright wrong. I wouldn’t put much stock in the analysis of a beginner - I’d trust them to say the moves were legal, and that ChatGPT stopped trying to materialize pieces from thin air, but not any analysis beyond that.
Thanks, but at first glance this doesn't seem to be about a bottom-up approach of synthesizing a cell from amino acids, but about modifying existing cells.
The customer use license specifically talks about datacenters, not "cloud": "The SOFTWARE is not licensed for datacenter deployment, except that blockchain processing in a datacenter is permitted" https://www.nvidia.com/en-us/drivers/geforce-license/
There were numerous issues. The first (somewhat mitigated lately) was the extremely large number of actions per minute and, most importantly, the extremely fast reaction speed.
Another big issue is that the bot communicated with the game via a custom API, not via images and clicks. Details of this API are unknown - like how invisible units were handled - but it operated at a much higher level than what a human has to work with (pixels).
If you look at the games, the bot wasn't clever (which was the hope), just fast and precise. And some players far from the top were able to beat it convincingly.
And now the project is gone, even before people had a chance to really play against the bot and find more weaknesses.
I bought an Oculus Quest 2 about a month ago. The games I play the most probably wouldn't even be possible (or at least fun) without VR - Pistol Whip (you need to evade slow-moving bullets) and Thrill of the Fight (a boxing simulator). It's more of a fitness device for me now.
I agree, VR absolutely provides an experience flat-screen gaming can never deliver.
With the Covid situation being what it is and sports halls closed, I’ve replaced live table tennis training with the Valve Index and the Eleven Table Tennis VR game. It has stunningly realistic physics, and the immersion is so good that I don’t miss the real thing much. This wouldn’t be possible in front of a flat screen.
I returned mine after a month, but apparently I didn't do enough research into how to get out of the little walled garden that is set up for you. I had the Samsung Gear VR a few years back, and this felt like the exact same environment.
In both cases, one of the most interesting experiences was the movie theater. It somehow works. The social aspect was creepy, though: I'm an old guy, and all I could hear were the voices of young children... I felt like I shouldn't be there, haha.
I've had a similar problem: I like the tech and enjoy the single-player games, but when it comes to anything online, the number of shrill voices makes me feel like I'm the outsider in their space.
> Does Copilot use other files open in the editor?
While I am not 100% sure of the sources, my use of Copilot makes me pretty sure it uses other open files in the editor and other files in the current project folder (whether or not they're open in the editor), and I suspect it may also use the past history of the current file (at least within the same edit session).
That sounds like too much input. Remember that Copilot is based on GPT-3, so its input size is limited to 2048 tokens.
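As a rough back-of-the-envelope check (using the common ~4 characters per token heuristic, not Copilot's actual tokenizer, and made-up file sizes), even a couple of medium-sized open files would blow past that window:

```python
# Rough heuristic: on average ~4 characters of English/code per token.
# This is an approximation, not the real GPT-3 tokenizer.
def approx_tokens(text: str) -> int:
    return len(text) // 4

# Two hypothetical open files of 300 lines x 40 characters each.
file_a = ("x = 1  # some line of code".ljust(40) + "\n") * 300
file_b = ("y = 2  # another line".ljust(40) + "\n") * 300

total = approx_tokens(file_a + file_b)
print(total)          # ~6150 tokens
print(total > 2048)   # True: well over the 2048-token window
```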
I think it's simpler to assume that "get_*_input" is a common name for a function that reads input from a stream, so this kind of string is common in Copilot's training data. Again, remember: GPT-3. That's a large language model trained on a copy of the entire internet (the CommonCrawl dataset) and then fine-tuned on all of GitHub. Given the abundance of code examples on the internet, plus GitHub, most short programs that anyone is likely to write in a popular language like Python are already in there somewhere, in some form.
The form is an interesting question that's hard to answer, because we can't easily look inside Copilot's model (and it's a vast model to boot). The results are perhaps surprising, although the way Copilot works reminds me of program schemas (or "schemata" if you prefer). That's a common technique in program synthesis where a program template is used to generate programs with different variable or function names, etc. So my best guess is that Copilot's model is like a very big database of program schemas. But that's an aside.
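To make the schema idea concrete, here's a toy sketch (the template and all the names in it are made up, not anything from Copilot): a schema is a program template with holes, and filling the holes with different identifiers produces distinct but structurally identical programs:

```python
# Toy illustration of a program schema: a template with holes
# for identifier names. Template and names are hypothetical.
from string import Template

schema = Template(
    "def get_${kind}_input(${stream}):\n"
    "    return ${stream}.readline().strip()\n"
)

# Instantiating the same schema with different names yields
# distinct but structurally identical programs.
user_version = schema.substitute(kind="user", stream="stdin_stream")
file_version = schema.substitute(kind="file", stream="fh")
print(user_version)
print(file_version)
```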
Anyway I don't think it has to peek at other open files etc. Most of the time that would not be very useful to it.
> GitHub Copilot uses the current file as context when making its suggestions. It does not yet use other files in your project as inputs for synthesis. [1]