
Am I crazy? https://twitter.com/unity/status/1701650081403842851 In the original tweet they write: "We want to be clear that the counter for Unity Runtime fee installs starts on January 1, 2024 - it is not retroactive or perpetual. We will charge once for a new install; not an ongoing perpetual license royalty, like revenue share."


The main controversy is that they seem to intend to apply this to any new installs of any Unity game still available on the market as of 1/1/2024, including any that were already developed and released under different Terms of Service—and possibly even ones already sold that way.

As far as I can tell, their model even seems to count, for example, any install I might do on a different system of Cult of the Lamb, a Unity-based game I already bought months ago when it launched, back when Unity had a different TOS. And that's bad news for the devs, because Steam's terms of sale allow me unlimited installs anywhere I have Steam installed, as long as only one copy runs at a time.

But even if Unity doesn't try to go that far, it would absolutely apply if I did a second install of a new license I bought after 1/1/2024 on EGS, or if the game got put on Game Pass and I did a new install from there on a machine where I didn't want to install Steam.

And since Unity apparently sets licensing terms via click-wrap agreements shown when you open their dev tools, using the tools to ship a patch afterwards would likely bind the developer to new terms under which subsequent installs of the patched version are counted, even for existing licenses whose original deployment wasn't under those terms.

That led to Massive Monster’s response that they’re going to be delisting Cult of the Lamb entirely.

As for what you quoted, all it says is that they won't use their historical telemetry data to retroactively bill you for installs performed before 1/1/2024, and that any given install will only be charged once. But multiple installs by the same purchaser are still charged multiple times unless Unity agrees they're all on the same system.


How can I locate this study? I think you are misrepresenting something.

In the GPT-4 paper they specifically address this and find that "Averaged across all exams, the base model achieves a score of 73.7% while the RLHF model achieves a score of 74.0%, suggesting that post-training does not substantially alter base model capability."


The problem with these studies is that we really still don't know. Nobody can replicate OpenAI's papers.


Found it; it's a pretty recent paper.

https://arxiv.org/pdf/2308.13449.pdf


Given the homogeneity of responses on taboo subjects, there's probably something exogenous to the model at work.


This is in response to Elon Musk suing Microsoft and OpenAI for using tweets to train GPT. There are already precedents, such as Authors Guild v. Google, where using copyrighted works to build a searchable database (Google Books) was deemed fair use.

Musk's objective is not to win the lawsuit, though. It is to create as many barriers and troubles for AI companies as possible so that he can catch up with his own technology.


How to overengineer with an LLM: don't state the requirements clearly, lead with your pet patterns, treat following the slice/Redux/awareness-hook pattern as more important than having a working solution, never trust your developers to make decisions, and worry more about how it's built than about building a solution.

My way of working with an LLM is to start with a good, clear requirement, have the LLM propose a possible file organization, then query it for the contents of each file (just the code, no comments) and assemble a working prototype fast. Then you can iterate on the requirements and evolve from there.
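
A rough sketch of that loop, assuming the OpenAI Python client (the model name, the prompts, and the naive layout parsing are all illustrative, not a definitive implementation):

    from openai import OpenAI

    client = OpenAI()
    # Illustrative requirement, borrowed from the whiteboard example below.
    REQUIREMENT = ("The current system is an online whiteboard. "
                   "Tech stack: React, Konva for the canvas, a test framework. ")

    def ask(prompt):
        # One self-contained chat per question; no accumulated history.
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    # Step 1: have the LLM propose a file organization.
    layout = ask(REQUIREMENT + "Propose a file organization; print only "
                 "the file layout tree, no explanations.")

    # Step 2: query the contents of each file. The context for each chat is
    # only requirement + layout + current file, never the whole codebase.
    # (A real version would parse the layout tree properly.)
    for path in (line.strip() for line in layout.splitlines() if line.strip()):
        code = ask(f"{REQUIREMENT}\nFile layout:\n{layout}\n\n"
                   f"Write the full contents of {path}. Just the code, no comments.")
        print(f"=== {path} ===\n{code}\n")

Once the files assemble into something that runs, you iterate on the requirement text and regenerate only the files that change.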


Generally, I agree that approach works well. It's going to perform better if it's not trying to fulfill your team's existing patterns. On the other hand, allowing lots of inconsistencies of style in your large codebase seems like a quick way to create a hot mess. Chat prompts seem like a really difficult way to communicate code style and conventions, though. A sibling comment to yours mentions that Copilot-style autocomplete seems like a much better pattern for working in an existing codebase, and I tend to agree that's much more promising: read the existing code, and recommend small pieces as you type.


How often do you get working code that way? Unless it's something trivial that fits in its scope, I'd say that's going to produce garbage. I've seen it steer into garbage on longer prompt chains about a single class (of medium complexity); I doubt it would work at project level. Mind sharing the projects?


I work only with closed-source codebases and use this approach for prototypes. Using the same example as the blog, I prompt: "the current system is an online whiteboard system. Tech stack: react, use some test framework, use konva for the canvas, propose a file organization, print the file layout tree. (without explanations)." The trick is that for every chat the context is the requirement + the file system + the specific file, so you don't have the entire codebase in the context, only the current file. Also, use GPT-4; GPT-3 is not good enough.

My main point is that the blog post's final output is mocks, tests, awareness hooks, and Redux, where an architect feels good seeing his patterns; with my approach you have a prototype online whiteboard system.


And that's why they have a hard time getting their stuff out there and getting the money they need. I mean, trying to run a business like a research lab is kind of flawed, you know? And you don't always want some Musk-like character messing around with the basics of the company.


I don't think the world needs yet another ChatGPT proxy to mess with source code when Copilot already fills the need; they have the engineers and lawyers to make it work.

What I don't see are unique, complex applications with a great UI that don't overlap with applications like Office with Copilot, or Copilot X.


This isn't a proxy. Say you use ChatGPT to assist you while you write code... there's a lot of copy-paste action in that workflow. It gets old.

promptr gets rid of the copy-paste. That's much better developer ergonomics (IMO).


Plus your tool is open source, and direct calls to the OpenAI API are much cheaper than Copilot. If anything, Copilot's looking like the redundant one here to me. Thanks for doing Promptr; it looks great, I'm going to give it a shot.


Thank you! Have fun - please share if you do anything cool!


Take a look at https://www.youtube.com/watch?v=s7AGkcSMiaI for a demo of what Copilot is capable of. Did you try Copilot and Copilot Labs? They have the best DX. Look up the interviews with Nat Friedman about the tool.

The code is a prompt: a "foreach file, execute prompt and return whatever" loop. It could be used as a template to build something useful and not redundant, but it's definitely not code. Don't reinvent the wheel.
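
For what it's worth, the pattern being described is roughly the sketch below. This is a generic illustration, not promptr's actual implementation; it assumes the OpenAI Python client, and the instruction string and file glob are made up:

    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()
    INSTRUCTION = "Add JSDoc comments to every exported function."  # illustrative

    # "foreach file: execute prompt, write back whatever comes out"
    for path in Path("src").rglob("*.js"):
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[{
                "role": "user",
                "content": f"{INSTRUCTION}\n\n{path.read_text()}\n\n"
                           "Return only the updated file contents.",
            }],
        )
        path.write_text(resp.choices[0].message.content)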


This is like the third post today on a code generator demoed with a React application; on Twitter there are dozens of them.

Copilot already exists, and Copilot X already packs the features this package promises AND much more. Why use this application over Copilot?


e2b isn't like Copilot. You don't really write code in our "IDE". Currently, you write a technical spec and then collaborate with an AI agent that builds the software for you. It's more like having a virtual developer available to you.


One reason might be that some people value open source.


Looking at the license of this project (Business Source License 1.1), this is not an open source project: https://github.com/e2b-dev/e2b/blob/master/LICENSE#L16


Elon wanted it to be open? Musk wanted to buy it, take charge, and lead it himself with his team of yes-men from Tesla. He probably would have fired most engineering teams on the spot, done code reviews personally, and demanded hardcore work hours in the office.

If Musk had been the CEO, ChatGPT would never have been released.


He called it Open AI. He did want it to be open, at least at the time of its founding, which is why he called it "Open" AI.


What would stop Google?

1. Weak management with no vision.

2. Fear of cannibalizing their search product and ad revenue.

3. They are experiencing brain drain to other companies/startups.

4. Tendency to kill products.


Big enough LLMs can have emergent characteristics like long-term planning or agentic behavior. While GPT-4 doesn't have these behaviors right now, it is expected that bigger models will begin to show intent, self-preservation, and purpose.

The GPT-4 paper has this paragraph: "... Agentic in this context does not intend to humanize language models or refer to sentience but rather refers to systems characterized by ability to, e.g., accomplish goals which may not have been concretely specified and which have not appeared in training; focus on achieving specific, quantifiable objectives; and do long-term planning. "

