All images are generated using independent, separate API calls. See the FAQ at the bottom under “Why is the number of attempts seemingly arbitrary?” and “How are the prompts written?” for more detail, but to quickly summarize:
In addition to giving models multiple attempts to generate an image, we also write several variations of each prompt. This helps prevent models from getting stuck on particular keywords or phrases, which can happen depending on their training data. For example, while “hippity hop” is a relatively common name for the ball-riding toy, it’s also known as a “space hopper.” In some cases, we may even elaborate and provide the model with a dictionary-style definition of more esoteric terms.
This is why providing an “X Attempts” metric is so important. It serves as a rough measure of how “steerable” a given model is, or, put another way, how much we had to fight with the model to get it to consistently follow the prompt’s directives.
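The retry-with-prompt-variants approach described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual benchmark code: `generate_image` and `looks_correct` are stand-ins for the real image API call and the pass/fail judgment, stubbed here so the sketch runs.

```python
def generate_image(prompt):
    # Stand-in for one independent API call per attempt (an assumption).
    return {"prompt": prompt}

def looks_correct(image):
    # Stand-in for the pass/fail check; for illustration, pretend only
    # the "space hopper" wording of the prompt ever succeeds.
    return "space hopper" in image["prompt"]

def first_success(prompt_variants, max_attempts_per_variant=4):
    """Cycle through prompt wordings, counting every generation attempt."""
    attempts = 0
    for prompt in prompt_variants:
        for _ in range(max_attempts_per_variant):
            attempts += 1
            image = generate_image(prompt)
            if looks_correct(image):
                return image, attempts  # attempts -> the "X Attempts" metric
    return None, attempts

variants = [
    "a child bouncing on a hippity hop",
    "a child bouncing on a space hopper (a ride-on rubber ball with a handle)",
]
image, n = first_success(variants)
```

Under these stubs, the first wording burns four attempts before the second wording succeeds on the fifth, which is the kind of number the “X Attempts” metric would report.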
I had the same experience. Plus you never know the best way to use, e.g., Nano Banana -- it works better in AI Studio than in the regular Gemini chat.
Yeah, I couldn't really get into UT2004. Not sure what it was that bugged me since it was so long ago. But I played a lot of UT99 and I was doing it on a 28.8 modem.
I think the tone of UT2004 was slightly sillier than UT99's, and the guns felt... fatter? I'm definitely looking at this with some nostalgia, but UT99 will always be my favorite shooter.
I don't always play UT2K4, but when I do, it's usually with Ballistic Weapons mod + Sergeant Kelly's weapon packs. I agree that the UT2K4 weapons just don't quite do it like UT99.
They didn't describe Italy as Eastern Europe; they said you can hire in Eastern Europe for far less. E.g., you can do that from Italy and keep people in the same timezone and relatively close by.
Also, it disables scrolling and uses swipe between sections instead, _at a font size that causes text to overflow_ on some phone screens, meaning a bunch of the site is _literally_ unreadable: the text is off the screen with no way to get to it.
Yeah, they rebranded it "Apple Intelligence" but this press release appears to be mostly using AI in the same (vague) way that the rest of the industry does.
Also just noticed this:
"And now with M5, the new 14-inch MacBook Pro and iPad Pro benefit from dramatically accelerated processing for AI-driven workflows, such as running diffusion models in apps like Draw Things, or running large language models locally using platforms like webAI."
First time I've ever heard of webAI - I wonder how they got themselves that mention?
> First time I've ever heard of webAI - I wonder how they got themselves that mention?
I wondered the same. Went to Crunchbase and found out Crunchbase is now fully paywalled (!); well, saw that coming... Anyway, I hit the webAI blog, and apparently they were showcased at the M4 MacBook Air event in 2024 [1] [2]:
> During a demonstration, a 15-inch Air ran webAI’s 22-billion-parameter Companion large language model, rendered a 4K image using the Blender app, opened several productivity apps, and ran the game Wuthering Waves without any kind of slowdown.
My guess is this was the best LLM use case Apple could dig up for their local-first AI strategy, and Apple Silicon is the best hardware use case webAI could dig up for theirs. As for Apple, other examples would look too hacky, purely dev-oriented, and dependent on LLM behemoths from the US or China. I.e., "try your brand-new performant M5 chip with LM Studio loaded with China's DeepSeek or Meta's Llama" is an Apple exec no-go.