Hacker News | new | past | comments | ask | show | jobs | submit | ArtemGetman's comments

same experience - voice mode is dumbed down compared to text. ended up building my own voice interface that uses full claude/gpt/gemini models instead of the lobotomized voice versions. actually handles specific requests without the "go look it up yourself" cop-out. want to try it?


built something to fix this. skips the realtime entirely - you speak, it waits, responds with text-quality answers via TTS. no forced casualness, no dumbing down. also has claude/gemini.

happy to share if anyone wants to try it


built something to fix exactly this. skips the realtime chattiness entirely - you speak, it waits until you're done, responds via TTS with actual text-quality answers (no dumbing down). also has claude/gemini if you want different models.
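The flow described above (wait for the full utterance, transcribe, hit a full-strength text model, then speak the answer) can be sketched roughly like this. The `stt`/`llm`/`tts` callables are placeholders I'm assuming for illustration; in a real build they'd wrap something like Whisper, the Claude/GPT/Gemini chat APIs, and a TTS engine:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class VoicePipeline:
    """Non-realtime voice loop: no streaming back-and-forth, no
    voice-specific model -- just STT in, text-quality answer out."""
    stt: Callable[[bytes], str]   # finished audio -> transcript
    llm: Callable[[str], str]     # transcript -> same answer a text chat would give
    tts: Callable[[str], bytes]   # answer text -> audio to play back

    def handle_utterance(self, audio: bytes) -> tuple[str, bytes]:
        transcript = self.stt(audio)       # only runs once the user is done speaking
        answer = self.llm(transcript)      # full model, no "casual voice" prompt
        return answer, self.tts(answer)    # speak the unmodified text answer
```

The point of the design is that the model never knows it's in a voice conversation, so nothing gets dumbed down.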

still early but happy to share: tla[at]lexander[dot]com if interested (saw your email in bio)


You’re saying you made yourself an email that is similar to mine? That seems… odd.


Hey Ben - congrats on launching Zo. The vision of "personal servers with all your context" resonates deeply.

I built MindMirror for a similar problem: persistent memory across AI tools via the MCP protocol. Your mom using Zo to manage her schedule with context from notes/files is exactly the use case.

What's interesting about cross-tool memory: users want context in Zo AND when they switch to Claude/Cursor for dev work. Same memory, different tools.
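To make "same memory, different tools" concrete, here's a hypothetical sketch (not MindMirror's actual design, which isn't described here): one shared store whose save/recall operations every client calls, which an MCP server would then expose as tools to Zo, Claude, Cursor, etc.:

```python
import time

class SharedMemory:
    """One memory store written to and read from by multiple AI tools.
    An MCP server would expose save/recall as tool calls; the storage
    and retrieval here are deliberately naive stand-ins."""

    def __init__(self):
        self._entries = []

    def save(self, tool: str, text: str) -> None:
        # Tag each memory with the tool that wrote it, so provenance survives.
        self._entries.append({"tool": tool, "text": text, "ts": time.time()})

    def recall(self, query: str) -> list[str]:
        # Naive keyword match across entries from *every* tool --
        # a real system would use embeddings or similar.
        q = query.lower()
        return [e["text"] for e in self._entries if q in e["text"].lower()]
```

So a schedule note saved from Zo is recallable later from Cursor during dev work, which is the cross-tool case described above.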

Would love to compare notes on the personal AI context problem - especially curious how you're handling memory persistence and context management at scale.

Congrats again on the launch!

