This is very interesting work, but it's not really an LLM. It doesn't have language abilities. They should have called it a seq2seq model, but I think that term is not in vogue these days :)
We use the same architecture as other LLMs, but we include no natural language in our pretraining. We figured a single-domain training corpus would make evaluation easier. We’ll be looking at layering this on top of something like Code Llama next.
I can do it myself and it's not a huge hill to climb, but I don't necessarily have the time to do it. There is also the time that needs to be dedicated towards maintenance (upgrades etc.).
I have heard this argument from perfectly healthy people, and I couldn't really believe it, since there are just so many other touch points when you go to a restaurant.
If someone is immunocompromised I think this makes sense to an extent.
Congrats! I went to your demo and asked for words that end in agi. This is what I got:
--
agi, agi, agi, agi, agi, agi, agi
These are some of the words that end in agi. You can also use the word agi in a sentence. For example, "I am going to the grocery store to get some agi."
It's a fair criticism, and ChatGPT does better, but this isn't a great test of model quality. All LLMs that rely on tokenization struggle with being introspective about language. Try asking ChatGPT to count how many e's are in a sentence, or to list all words that start with "to" and end with "de".
I haven't heard anyone describe the phenomenon clearly, but I expect it is a challenge with reasoning over both intent of the prompt and specific token IDs.
You can't ask ChatGPT to count something and expect that it can answer correctly, because it does not have counting logic. It is a language model, not a math model. People use this to "prove" hallucinations, but when you ask it something that is within its programmed abilities, you get something at least close to what you want.
Having said that, here are the words ChatGPT gave me for the same prompt:
It's true that ChatGPT is not designed for counting and struggles with it in general.
But my point was that ChatGPT, like any tokenized LLM, doesn't even have the concept of letters. The prompt "how many e's in this sentence" is rendered as the tokens [4919, 867, 304, 338, 287, 428, 6827]. There just isn't a pathway for it to consider the letters that make up those tokens.
I'm a little surprised it did that well on your prompt, which is rendered as [10919, 2456, 886, 287, 556, 72]. The interesting thing here is that 556 = " ag" (with leading space) and 72 = "i". So I'm not sure how it got to those words. "Wagagi" is tokens [54, 363, 18013], so somehow it is seeing that token 18013 is what you get when you combine 556 and 72? That seems really weird.
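To make the abstraction barrier concrete, here's a toy sketch. The vocabulary below is hypothetical (hand-built to reproduce the IDs above; real tokenizers learn BPE merges over a huge vocabulary), but the effect is the same: the model receives opaque integers, not letters.

```python
# Toy greedy longest-match tokenizer over a hypothetical vocabulary.
# Real LLM tokenizers (BPE) are learned, but the principle holds:
# once text becomes IDs, letter structure is gone.
vocab = {"what": 10919, " words": 2456, " end": 886, " in": 287,
         " ag": 556, "i": 72}

def encode(text, vocab):
    tokens = []
    while text:
        # Try the longest vocabulary pieces first.
        for piece in sorted(vocab, key=len, reverse=True):
            if text.startswith(piece):
                tokens.append(vocab[piece])
                text = text[len(piece):]
                break
        else:
            raise ValueError("out-of-vocabulary text: " + text)
    return tokens

print(encode("what words end in agi", vocab))
# [10919, 2456, 886, 287, 556, 72] -- the model sees only these
# integers; "does token 556 contain a 'g'?" is unanswerable from
# the IDs alone.
```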
I'd love clarification from someone deeper into LLMs and tokenization.
In a prompt, can you just tell the model which letters make up each token? E.g. a list like "ag = a g", etc. I imagine a dictionary of that for all tokens in the training data would help.
Maybe? Individual letters are tokens, so you could say something like 3128 = 56 + 129, but the problem is that 3128 is processed as text, not as the integer token ID. So the tokenizer would turn 3128 into a series of tokens.
Intuitively I think there's an abstraction barrier there, but I'm not positive. It feels like asking us to list all of the words that trigger particular neurons.
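One reason the dictionary idea is hard to express in-band (toy sketch; the character-level vocabulary and IDs here are made up): the digits of a token ID are themselves just text, so the mapping you write down gets tokenized too.

```python
# Hypothetical character-level fragment of a vocabulary. If " ag" were
# token 556, writing "556 = ag" in a prompt never delivers the integer
# 556 to the model -- only the tokens of the *string* "556".
digit_vocab = {"5": 20, "6": 21, " ": 22, "=": 23, "a": 24, "g": 25}

def encode_chars(text, vocab):
    # One token per character, for illustration only.
    return [vocab[ch] for ch in text]

print(encode_chars("556 = ag", digit_vocab))
# [20, 20, 21, 22, 23, 22, 24, 25] -- the ID 556 dissolves into digit
# tokens, so the model can't directly connect it to token #556.
```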
This needs citation. These are not the same things. It will get numerical references right if it has sources used in the model, but it isn't doing any numerical calculations.
Just look at any papers that put models through mathematical benchmarks. The model isn't memorizing these problems. For example I just generated 2 random 64 bit integers and asked ChatGPT to add them.
"6769545085823578960 + 16027170449476717488"
ChatGPT said the answer is 22796715535300296448. It got the correct answer even though the problem wasn't in its training data.
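For what it's worth, the arithmetic checks out:

```python
a = 6769545085823578960
b = 16027170449476717488
print(a + b)  # 22796715535300296448, the answer ChatGPT gave
```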
Yep, as always, people (and LLMs) take stuff for granted because they read it somewhere months ago. That’s why we are doomed; everyone believes anything without question if it’s not against their personal agenda.
This needs citation :) It does numerical calculations, at least in GPT-4 mode; I tested it. It can do simple arithmetic, and even has a sort of 'imagination', or the impression of it. I asked it to imagine a room with 4 colored balls at the corners. Then I asked about the view angles between some pairs of balls as if looking from the center of the room, and from the other balls. It gave the answers with explanations.
This doesn't mean it's always correct, or can be trusted without verification.
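For reference, the geometry itself is easy to pin down. A sketch under my own assumptions (square room, observer exactly at the center, balls at the corners; the original prompt may have differed):

```python
import math

# Corners of a square room, side 2, centered at the origin.
# Ball names and positions are my assumptions for illustration.
corners = {"red": (1, 1), "green": (-1, 1),
           "blue": (-1, -1), "yellow": (1, -1)}

def view_angle(frm, a, b):
    """Angle at `frm` between the directions to points a and b, in degrees."""
    va = (a[0] - frm[0], a[1] - frm[1])
    vb = (b[0] - frm[0], b[1] - frm[1])
    dot = va[0] * vb[0] + va[1] * vb[1]
    na, nb = math.hypot(*va), math.hypot(*vb)
    return math.degrees(math.acos(dot / (na * nb)))

center = (0, 0)
print(view_angle(center, corners["red"], corners["green"]))  # 90 (adjacent corners)
print(view_angle(center, corners["red"], corners["blue"]))   # ~180 (opposite corners)
```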
I can feed ChatGPT code that does calculations (and have done so) and have it calculate the right answers. It also gets things wrong a lot, so it's not good at that, but any notion that it can't do numerical calculations is easy to disprove.
On the last list, the only word that does not comply with the constraint (having three 'e's) is "Demeanor", which has only two. Not great, but also not as horrible as you make it sound.
It's not a character based model (likely - although it's closed source so anything is technically possible behind the scenes) so this makes some sense.
The system can infer some relationships, which may be why 'agy' is interestingly conflated with 'agi', but the tokenization process yields sequences of 'symbols' or indexes that are decoupled from English letters - so the system has a much harder task when asked about 'e's (probably something like token 4893): it has to determine which tokens (e.g. [358, 284840, 58292, 4830104, 57282, 4829193, 58282, 384, 24945]) contain 'e's, or token 4893.
None of them do directly, it seems - but 58292 may be 'ee' - so you would get this wrong as well.
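The contrast is stark: with the decode table in hand, the check is a one-liner (toy table below, with made-up IDs loosely following the ones above); the model simply has no such table to consult at inference time.

```python
# Hypothetical ID -> subword decode table; real vocabularies map tens
# of thousands of IDs to strings like " ee", " are", etc.
decode = {358: "How", 284840: " many", 58292: " ee", 4830104: "ls",
          57282: " are", 4829193: " in", 24945: " this"}

# Trivial with the table, invisible to a model that only sees the IDs:
ids_containing_e = sorted(i for i, s in decode.items() if "e" in s)
print(ids_containing_e)  # [57282, 58292]
```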
The problem is that these models do not have any working memory they could use to carry out such tasks, which are on a meta-level when seen from a language perspective. They can only go with their 'gut instinct' for selecting the next word, they can't 'consider and ponder the problem internally' first.
The problem is that the input is tokenized before the model gets it as input. It does not see the individual letters "t" + "o". It gets one single token, #1462. The word "toe" is another single token, #44579. Maybe over time it could learn from context that inputs that start with #44579 also satisfy the constraint of starting with #1462, but that's a lot of work and it's not going to happen for all combinations of letters.
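For contrast, at the character level the constraint is a one-line check; the model never gets to run anything like it, because it only receives the opaque IDs (#1462, #44579):

```python
# Character-level prefix check -- trivial given raw text,
# impossible given only token IDs.
words = ["toe", "token", "trade", "tide"]
print([w for w in words if w.startswith("to")])  # ['toe', 'token']
```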
Perhaps try prompting the model to first describe its approach to answering the question. This type of chain-of-thought technique can yield better results.
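A minimal sketch of that kind of prompt shaping (the wording is illustrative, not a tested recipe):

```python
# Chain-of-thought framing: ask for the procedure before the answer.
question = 'List all words in this sentence that end in "agi".'
cot_prompt = (
    question
    + "\nFirst, describe step by step how you will check each word."
    + "\nThen give the final list."
)
print(cot_prompt)
```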
yeah, as usual these models can barely sustain a conversation and fall apart the moment actual instructions are given. typical prompt they fail to understand:
"what is pistacchio? explain the question, not the answer."
all these toy LLMs: "pistacchio is..."
GPT is the only one that consistently understands these instructions: "The question "what is pistachio?" is asking for an explanation or description of the food item..."
this makes these LLMs basically useless for obtaining anything but hallucinated data.
Asking LLMs about things they learned in training mostly results in hallucinations, and in general it leaves you unable to tell by what amount they are hallucinating: these models are unable to reflect on their output, and average output-token probability is a lousy proxy for confidence-scoring their results.
On the other hand, no amount of prompt engineering seems to make these LLMs able to do question-and-answer over source documents, which is the only realistic way factual information can be retrieved.
You're welcome to bring examples of it tho if you're so confident.
I've had ChatGPT build a fully functioning website, write a DNS server, and fill in significant portions of specs, all without the problems you describe. I'm never going back to doing things from scratch - it's saving me immense amounts of time every single day. The only reasonable conclusion is that the way you're prompting it is counterproductive.
You can get useful results out of a whole lot of them as long as you actually prompt them in a way suitable for the model. The point I made originally was that if you just feed them an ambiguous question, then sure, you will get extremely variable and mostly useless results.
And I mentioned ChatGPT because, from the context of your comments here, it was unclear on first read-through what you meant. Maybe consider that it's possible your prompting is not geared for the models you've tried.
Not least because, if you expect a model to know how to follow instructions when most of them have not been through RLHF, you're using them wrong. A lot of them need prompts shaped as a completion, not a conversation.
I have nothing to gain from spending time testing models for you, because whatever I pick will just seem like cherry-picking to you, and it doesn't matter to me whether or not you agree on the usability of these models. They work for me, and that's all that matters to me. Try a few completions instead of a question. Or don't.
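To make the completion-vs-question distinction concrete (illustrative strings; the exact wording is up to you): a base model that hasn't been through RLHF is a text continuer, so you frame the task as a document to be continued.

```python
# Instruction-style prompt -- suits RLHF/instruction-tuned models:
instruction_prompt = "What words end in 'agi'?"

# Completion-style prompt -- suits base models, which are trained only
# to continue text; the model's natural job is to extend the list:
completion_prompt = "Here is a list of English words ending in 'agi':\n1."

print(instruction_prompt)
print(completion_prompt)
```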
Proper nouns are words. That answers the prompt, right? There was no mention of "common" in the prompt. If there was, the list of words I got with the same prompt on ChatGPT would have been a lot shorter.
Then the word "rhamanagagi" (which I just made up) is a word that would technically belong on the list just fine, but it definitely doesn't answer the implicit intent of the question.
The strength of LLMs is their ability to answer imprecisely specified questions, being able to guess the speaker's intent, but in this particular case it's failing the test.