My opinion is that data access restrictions did not cause Alexa to fail. Nor was it a lack of machine learning that contributed to its issues. Alexa attempted to solve the long tail of customer requests with the equivalent of spaghetti "if statements" - rule engines. This was never going to scale. Alexa lacked an approach generic enough to cover the long tail of customer requests (something closer to AGI). With rule engines there was always a tension between latency and functionality. Alexa managed this with bureaucracy - monitor latency, monitor customer request types, and make business decisions about how to evolve the rule engines. But it was never fundamentally able to scale beyond the most basic requests or solve the chicken-and-egg problem: customers don't make complicated requests because Alexa isn't capable of handling them, so those requests never show up as use cases large enough to optimize for. The top use cases remained playing music and setting timers.
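To make the scaling problem concrete, here is a deliberately toy sketch of a rule-engine-style intent router (illustrative only, not Alexa's actual architecture; the patterns and handler names are invented). Every new capability requires another hand-written pattern, so coverage of the long tail grows only as fast as the business can write rules:

```python
# Toy rule-engine intent router. Each capability is a hand-authored
# (pattern, handler) pair; anything outside the rules falls through.
import re

RULES = [
    (re.compile(r"\bplay (?P<song>.+)"),
     lambda m: f"Playing {m['song']}"),
    (re.compile(r"\bset a timer for (?P<mins>\d+) minutes?"),
     lambda m: f"Timer set for {m['mins']} minutes"),
    # ...hundreds more patterns, each one a separate business decision...
]

def handle(utterance: str) -> str:
    # Try each rule in order; the long tail never matches anything.
    for pattern, action in RULES:
        m = pattern.search(utterance.lower())
        if m:
            return action(m)
    return "Sorry, I don't know how to help with that."

print(handle("play despacito"))
print(handle("set a timer for 10 minutes"))
print(handle("plan a dinner party for Saturday"))  # falls off the long tail
```

The failure mode is visible immediately: any request without a pre-written rule hits the fallback, which is exactly the chicken-and-egg dynamic described above.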
A more fundamental issue was monetization. Early on, Bezos liked the idea of a small, essentially free device that would reduce the friction of buying things. If you remember the "easy buttons" Amazon floated, there were many ideas like this. In practice, building a voice assistant robust enough to purchase items proved challenging for a myriad of reasons, so the business looked for other ways to monetize. Advertising kept coming up, but there was rank-and-file pushback because it could break customer expectations and raise privacy concerns. Alexa considered pivoting into various B2B ventures (hospitality, healthcare, enterprise) and other customer scenarios (smart home, automotive) but took half-measures into each of them rather than committing to one opportunity. It felt like a solution looking for a problem.
Alexa would have benefited (and could still benefit) from modern LLM technology. However, to be truly useful it would need to do more than chat: it would need some layer to take actions. That layer would have to be carefully considered and designed so that it scales - so that it isn't a bureaucracy measuring what people want to do and "if statement"-ing a rules engine to enable it. OpenAI and others appear poised, with the machine learning expertise, to do this.
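A minimal sketch of what such an action layer could look like, under the assumption that the model emits structured tool calls (in the style of OpenAI-like function calling). The tool names and JSON shape here are invented for illustration; the point is that the model, not a hand-maintained rules engine, decides which tool fits the request, while the code only validates and executes:

```python
# Sketch of an action layer: the LLM proposes a structured tool call,
# and this layer validates and dispatches it. No per-request rules.
import json

def set_timer(minutes: int) -> str:
    return f"Timer set for {minutes} minutes"

def play_music(query: str) -> str:
    return f"Playing {query}"

# Registry of executable capabilities exposed to the model.
TOOLS = {"set_timer": set_timer, "play_music": play_music}

def execute(tool_call: str) -> str:
    """Validate and dispatch a JSON tool call proposed by the model."""
    call = json.loads(tool_call)
    fn = TOOLS.get(call["name"])
    if fn is None:
        return f"Unknown tool: {call['name']}"
    return fn(**call["arguments"])

# In practice this JSON would come from the LLM, not be hard-coded.
print(execute('{"name": "set_timer", "arguments": {"minutes": 5}}'))
```

The contrast with the rules-engine approach is that adding a capability means registering one tool, while deciding when to use it is delegated to the model's general language understanding rather than to accumulated if-statements.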
Finally, it's my opinion that Alexa's machine learning scientists were very good; however, as a population they did not appear to me to really care about the business/product use case. Many of them worked on research for publication on problems like distance estimation. The expertise was heavily weighted toward voice transcription and audio processing, with much less expertise in "reasoning". I hypothesize this contributed to the approach of iterating on rules engines, with the science community focused primarily on improving transcription accuracy by small numbers of basis points.