> In other words: they ran a trial, and it finished. That itself isn't evidence for either success or failure, any more than taking a scheduled blood test is evidence for or against a blood disease.
There might be some ambiguity, but the evidence needle is definitely more on the "failure" side.
This isn't like a routine blood test. They were test marketing a product. If the test succeeded, it would be foolish of them not to roll it out widely and make more money off its now-measured success. It's very likely the product didn't meet their expectations, but it's hard to say by how much.
> If the test succeeded, it would be foolish of them not to roll it out widely and make more money off its now-measured success. It's very likely the product didn't meet their expectations, but it's hard to say by how much.
I don't think you can safely assume that. There are lots of reasons not to turn a trial immediately into a full run: logistical and supply chain considerations, contract negotiations for the ingredients, running surveys and tweaking the items based on customer feedback, &c.
This is true in almost every industry except software: you put space between the trial and full deployment because there are physical and logistical considerations that cost time and money.
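
To make the disagreement concrete, here's a minimal Bayes'-rule sketch in Python. Every number in it is an assumption invented for illustration, not data from the actual trial; the point is only that the strength of the "quiet ending implies failure" signal hinges on how likely a successful test is to be followed by an immediate visible rollout, which is exactly what the two comments disagree about.

```python
# Minimal Bayesian sketch of the argument above. All numbers are
# assumptions chosen for illustration, not data from the actual trial.

def posterior_failure(p_failure_prior, p_quiet_given_failure, p_quiet_given_success):
    """P(failure | trial ended quietly with no rollout), via Bayes' rule."""
    p_success_prior = 1.0 - p_failure_prior
    numerator = p_quiet_given_failure * p_failure_prior
    evidence = numerator + p_quiet_given_success * p_success_prior
    return numerator / evidence

# First view: a successful test almost always leads straight to a wide
# rollout, so a quiet ending is strong evidence of failure.
print(posterior_failure(0.5, 0.95, 0.10))  # ~0.90

# Reply's view: logistics, contracts, and surveys often delay a rollout
# even after a success, so a quiet ending is much weaker evidence.
print(posterior_failure(0.5, 0.95, 0.60))  # ~0.61
```

Under the first set of assumptions the quiet ending pushes the posterior to roughly 90% failure; under the reply's, it only reaches about 61%. Both sides can agree the needle points toward "failure"; the dispute is over how far.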