The problem is that OpenClaw is kind of like a self-driving car that works 90% of the time. As we have seen, that last 10% (and billions of dollars) is the difference between Waymo today and prototypes 10 years ago.
Being Apple is just a structural disadvantage. Everyone knows that OpenClaw is not secure, and it's not like I blame the solo developer. He is just trying to get a new tool to market. But imagine that this got deployed by Apple and now all of your friends, parents and grandparents have it and implicitly trust it because Apple released it. Having it occasionally drain some bank accounts isn't going to cut it.
This is not to say Apple isn’t behind. But OpenClaw is doing stuff that even the AI labs aren’t comfortable touching yet.
I sincerely suspect the BBC would only ever use "fired"/"firings" if the employees were being dismissed for conduct reasons, since that's the common usage in British English. I've been let go -- indeed, I've lost my job (it's the employees who suffer job losses, not the employer) -- but I've never been fired.
Which, at least in American English, comes across like corporate jargon/weasel words. "Lost their job" is literally true, and spelling out the precise reasons would probably take a bunch more words.
Both things can be literally true. I've lost my job by being made redundant, twice. In Britain redundancy is a very specific thing, where your role no longer exists and you must be let go in a fair way according to employment law. It's quite the opposite of jargon or weasel words here: https://www.gov.uk/redundancy-your-rights
I think we may be hitting an issue in translation between English and American; in British English "fired" implies "for cause", while a "blameless" process of headcount reduction is known as "redundancy". "Job losses" is a perfectly reasonable neutral phrase here. Indeed, under UK law and job contracts you generally cannot just chuck someone out of their job without either notice or cause or, for large companies, a statutory redundancy process.
People like to make too much out of active/passive word choices. Granted it can get very propagandistic ("officer-involved shooting"), but if you try to make people say "unhoused" instead of "homeless" everyone just back-translates it in their head.
> Indeed, under UK law and job contracts you generally cannot just chuck someone out of their job without either notice or cause or, for large companies, a statutory redundancy process.
This is only true when an employee has worked for a company for 2 or more years.
I think American English is the same colloquially. “I got fired” means I didn’t perform or did something wrong. “I got laid off” is our “I was made redundant”.
“Fired” is also a technical term for both cases, in academic/economist speak.
> in British English "fired" implies "for cause", while a "blameless" process of headcount reduction is known as "redundancy"
OK. I was fired for no stated cause in a process that didn't involve headcount reduction, or the firing of anyone except me specifically. (The unstated cause seems to have been that I had been offered a perk by the manager who hired me that the new manager didn't want to honor after the original guy was promoted.)
And by applying these organizational changes, each person can become more load bearing and have so much more scope and impact. This is not a loss, it's a great win for everyone! /s
> Cost: $1,000
Case 1 (90%): OpenAI goes bankrupt. Return: $0
Case 2 (9%): OpenAI becomes a big successful company and goes 10x. Return: $1,000 + 5% interest = $1,050
Case 3 (1%): OpenAI becomes the big new thing and goes 100x. Return: $1,000 + 5% interest = $1,050
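To make the asymmetry concrete, here's a rough expected-value sketch using those made-up probabilities (the point being that a lender caps out at principal plus interest, while an equity holder keeps the upside):

    # Rough expected-value sketch; all probabilities/multiples are the
    # hypothetical numbers above, not a real forecast.
    principal = 1_000
    interest = 0.05

    # (probability, equity multiple) for each case
    cases = [(0.90, 0.0),    # bankruptcy
             (0.09, 10.0),   # big successful company, 10x
             (0.01, 100.0)]  # the big new thing, 100x

    # Lender: principal + interest in any non-bankruptcy case, else nothing.
    ev_loan = sum(p * (principal * (1 + interest) if mult > 0 else 0.0)
                  for p, mult in cases)

    # Equity holder: the multiple on the same $1,000, else nothing.
    ev_equity = sum(p * principal * mult for p, mult in cases)

    print(ev_loan)    # 105.0
    print(ev_equity)  # 1900.0

Under those assumed odds the loan looks like a bad trade on its own.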
The actual math is that if OpenAI succeeds, then there's a nod and a wink that JPM will land the lead role in the IPO or any mergers/acquisitions, which translates into huge fees.
For a company with 800 million weekly active users that is only losing $10B-$15B before implementing ads - which IMO is coming fast to the LLM world - I would never put a 90% chance on their shares ending up at $0 before an exit option.
This is the easiest money and best relationship JPM could imagine
Yahoo is a disingenuous parallel here. Yahoo lost because they didn't correctly embrace their market position in what's otherwise the very ripe industry of search engines. Search engines created the 4th most valuable company in the world (Google).
We don't know how ripe OpenAI's industry or market position is, yet. Yahoo knew what it had lost pretty early into its spiral.
Also, if OpenAI goes bankrupt, you _much_ prefer to have loaned them money to having bought shares in the company. People who own shares in a bankruptcy only recover anything after all the people that loaned them money are paid back in full.
So if I'm reading this correctly, it's essentially prompt engineering here and there's no guarantee for the output. Why not enforce a guaranteed output structure by restricting the allowed logits at each step (e.g. what the outlines library does)?
So in short, there's no guarantee for any output from any LLM, whether it's Gemma or any other (ignoring some details like setting a random seed or parameters like temperature to 0). Like you mentioned, though, libraries like outlines can constrain the output, and hosted models often already include this in their API; they can do so because it's a model plus some server-side code.
With Gemma, or any open model, you can use the open libraries in conjunction with it to get what you want. Some inference frameworks like Ollama include structured output as part of their functionality.
But you mentioned all of this already in your question so I feel like I'm missing something. Let me know!
With OpenAI models, my understanding is that token output is restricted so that each next token must conform to the specified grammar (i.e. JSON schema), so you're guaranteed to get either a function call or an error.
Edit: per simonw's sibling comment, Ollama also has this feature.
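For reference, this is roughly what that looks like with the OpenAI Python SDK; the function name and schema below are made-up placeholders, and the strict flag is what turns on schema-constrained decoding:

    # Sketch only: the function name and schema are illustrative.
    from openai import OpenAI

    client = OpenAI()

    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "strict": True,  # constrain generation to match the schema exactly
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
                "additionalProperties": False,
            },
        },
    }]

    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "What's the weather in Oslo?"}],
        tools=tools,
    )

    # Either a schema-conforming tool call or a refusal/error, never malformed JSON.
    print(resp.choices[0].message.tool_calls)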
Ah, there's a distinction here between the model and the inference framework. The Ollama inference framework supports token output restriction. Gemma in AI Studio also does, as does Gemini (there's a toggle in the right-hand panel), but that's because both of those models are being served through an API, where the functionality lives on the server.
The Gemma model by itself does not though, nor does any "raw" model, but many open libraries exist for you to plug into whatever local framework you decide to use.
If you run Gemma via Ollama (as recommended in the Gemma docs) you get exactly that feature, because Ollama provides that for any model that they run for you: https://ollama.com/blog/structured-outputs
Under the hood, it is using the llama.cpp grammars mechanism that restricts allowed logits at each step, similar to Outlines.
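If anyone wants to try it, here's roughly what it looks like with the Ollama Python client; the model tag and schema are placeholders, and the format parameter takes a JSON schema that the server uses to constrain decoding:

    # Sketch of Ollama structured outputs; model tag and schema are illustrative.
    from ollama import chat
    from pydantic import BaseModel

    class CityInfo(BaseModel):
        name: str
        country: str
        population: int

    response = chat(
        model="gemma3",  # any model you've pulled locally
        messages=[{"role": "user", "content": "Tell me about Oslo."}],
        format=CityInfo.model_json_schema(),  # output constrained to this schema
    )

    print(CityInfo.model_validate_json(response.message.content))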
I've been working on tool calling in llama.cpp for Phi-4 and have a client that can switch between local models and remote ones for agentic work/search/etc. I learned a lot about this situation recently:
- We can constrain the output with a JSON grammar (old-school llama.cpp; rough sketch below).
- We can format inputs to make sure they match the model's expected format.
To OP's question, specifying the format the model expects unlocks the training the model specifically had on function calling: what I sometimes call an "agentic loop", i.e. we're dramatically increasing the odds we're singing the right tune for the model to do the right thing in this situation.
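Here's a minimal sketch of the grammar approach via llama-cpp-python (model path, prompt, and grammar are all illustrative; the same GBNF text can be used with llama.cpp's --grammar-file flag):

    # Sketch: constrain decoding with a GBNF grammar via llama-cpp-python.
    from llama_cpp import Llama, LlamaGrammar

    # Tiny grammar: force a JSON object with a single "city" string field.
    gbnf = r'''
    root   ::= "{" ws "\"city\":" ws string ws "}"
    string ::= "\"" [a-zA-Z ]* "\""
    ws     ::= [ \t\n]*
    '''

    llm = Llama(model_path="phi-4.gguf")        # whatever local GGUF you have
    grammar = LlamaGrammar.from_string(gbnf)

    out = llm(
        "Extract the city from: 'I flew into Lisbon last week.' Reply as JSON.",
        grammar=grammar,   # logits outside the grammar are masked at each step
        max_tokens=64,
    )
    print(out["choices"][0]["text"])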
Do you have thoughts on the code-style agents recommended by huggingface? The pitch for them is compelling, since structuring complex tasks in code is something very natural for LLMs. But then, I don’t see as much about this approach outside of HF.
Why is there a 5 year gap on your resume? It sounds like you didn't sit around twiddling your thumbs... you built stuff during that time. On your resume, treat it like you were working a job and talk about what you built. Highlight your open source contributions and if possible, tie them to your resume. Sure, some people will treat an ex-founder as a negative, but many will see it positive. You only need one job.
There is definitely ageism in tech, but 39 isn't old. I'd be happy to take a look at your resume and provide advice if you give me a way to contact you. But it sounds like the issue isn't your resume... indeed you are getting lots of interviews, so maybe it's something you are doing in the interview process. Do you have a sense of where things go wrong? Are you often getting to the final stage before hearing no?
- ~20% rejection after the technical screen (i.e., leetcoding)
- ~10% after a full round of interviews
Presumably I do or say something in interviews that is working against me. I also think that my background gives people the expectation that I should be able to reach a very high bar, and perhaps I set their expectations much higher than I can actually achieve (at least in interviews). Given my real-life interactions, I can confidently say I'm a fairly gregarious and affable person, so I doubt it's my personality that's the issue. But who knows, perhaps my ego is the problem.
Well, they certainly _might_ be functioning in ten years from now. Conservatively, you get 5 years of use out of this, which isn't bad for $15-20, depending on your use case.
Obviously in practice this isn't always true, but in general, each employee should output more than their pay, or they wouldn't be there. As an example, ExxonMobil supposedly has profits of $899,000/employee [1]. Their average pay is probably significantly lower than that, but let's say it's $300k, so a 20% boost in productivity increases profit per employee by roughly $180k. An increase of 50% in salary (and I don't think it takes that much to get people to work in an office) costs them $150k.
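Back-of-the-envelope, with the same made-up salary assumption:

    # All inputs are the rough estimates above, not real compensation data.
    profit_per_employee = 899_000
    assumed_salary = 300_000

    extra_profit = 0.20 * profit_per_employee  # ~$180k from a 20% productivity boost
    raise_cost = 0.50 * assumed_salary         # $150k for a 50% raise

    print(extra_profit - raise_cost)           # ~$30k/employee still ahead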
I think that software engineering is about two things: building things the right way and building the right things.
The second one is more important than the first one. If you don't build the right product, it doesn't matter how well it scales or how it has amazing test coverage or wonderful documentation. To that end, I think that too many managers (and companies) do too much shielding of engineers from customers. If you are just given a figma mockup and told "build this", it's easy to get bogged down for a week with the details of building a search bar at the bottom of the page only to realize that the stakeholders would have been OK with a dropdown select. Better to understand the problem you are solving and the only way to really do this is to have some kind of interaction with customers. As an engineering manager, I try to encourage engineers to get on sales calls and see product demos. When you see it from a high level, you a) almost always notice things that need fixing or can be improved and b) see where the piece that you are working on fits into the larger picture.
That said, I find that many engineers don't want to get on customer calls, and usually there's room for those engineers in an organization as well. For example, "build a new video conferencing service for artists to collaborate" would be a very challenging problem (I think) that is not well defined and therefore requires deep customer understanding. "Make Google searches run with 10% fewer CPU milliseconds" is arguably a much harder problem to solve, but it's so well defined that it really doesn't need customer understanding (setting aside the initial decision about whether it makes sense to work on).
As a fellow engineering manager -- I 100% agree. The more your engineers know about the customers, the more they will code in the right direction just by understanding the problem.
You are given a Figma that wasn't already researched and validated against requirements? If it takes a week for a team to fiddle around with a design asset only to learn customers/clients would be fine with a simpler approach, everyone failed the assignment. This was intended as a rhetorical question... I know many teams let designers waste tons of time in a vacuum while PMs are off in lalaland focused on the wrong activities, when they should be focused on "building the right thing", carefully validating that, and communicating outcomes to the team and customers. I'm all for letting engineers come along for the ride, but too often they (more the junior and mid-level ones) are checked out during that process, not asking implementation questions or contributing to the research process, until it's all hindsight.
I disagree; it's great practice, and starting early in a career will only pay dividends later on. It requires the desire to actually participate and contribute professionally and not just hide behind a keyboard all day. I get that some people just want to hide behind a keyboard all day, and that's fine. These people often complain the most later on... in my experience.
Only a small fraction of all future AI projects have even gotten started. So they aren't only fighting over what's out there now, they're fighting over what will emerge.
This is true, and yet many orgs have experimented with OpenAI and are likely to return to them when a project "becomes real". When you google around for how to do XYZ with LLMs, OpenAI is usually in whatever web results you read. Other models and APIs now use OpenAI's API format since it's the apparent winner. And for anyone who's already sent out subprocessor notifications with them as a vendor, they're locked in.
This isn't to say it's only going to be an OpenAI market. Enterprise worlds move differently, such as those in G Cloud who will buy a few million $$ of Vertex expecting to "figure out that gemini stuff later". In that sense, Google has a moat with those slices of their customers.
But when people think OpenAI has no moat because "the models will be a commodity", I think that's (a) some wishful thinking about the models and (b) a failure to consider the sociological factors, which matter a lot more than how powerful a model is or where it runs.
> OrderedDict - dictionary that maintains order of key-value pairs (e.g. when HTTP header value matters for dealing with certain security mechanisms).
Word to the wise... as of Python 3.7, the regular dictionary data structure guarantees insertion order. Declaring an OrderedDict can still be worthwhile for readability (to let code reviewers/maintainers know that order is important), but I don't know of any other reason to use it anymore.
Being as specific as possible with your types is how you make things more readable in Python: OrderedDict where the order matters, set where no duplicate items are possible. The newish enums are great for things that have a limited set of values (dev, test, qa, prod) vs using a string. You can say a lot with type choice.
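A small illustration of both points on Python 3.7+ (the names here are just examples):

    from enum import Enum

    # Plain dicts preserve insertion order, so iteration follows the order keys were added.
    headers = {"Host": "example.com", "User-Agent": "curl/8.0", "Accept": "*/*"}
    print(list(headers))  # ['Host', 'User-Agent', 'Accept']

    # An Enum documents a closed set of allowed values better than a bare string.
    class Environment(Enum):
        DEV = "dev"
        TEST = "test"
        QA = "qa"
        PROD = "prod"

    print(Environment.PROD.value)  # 'prod'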
Another reason is that I think the 3.7 behavior is just a CPython implementation detail; other interpreters may not honor it.
This hit me badly once. I tested the regular dict and it _looked_ like it was ordered. It turned out that about 1 out of 100,000 times it was not. And I had a lot of trouble identifying the reason 3 weeks later, when the bug was buried deep in complex code and it appeared mostly random.
Does dict now guarantee that it maintains order? IIRC, it was originally a mere side effect of the algorithm chosen (which was chosen for performance), but it could change in future releases or alternative implementations.