
If an LLM-based voice assistant/hardware combination works as well as ChatGPT-for-voice works today, I don't think it's a stretch to say that nearly everyone will use or have one in the coming years. The software will of course be portable to whatever device you're using (house, phone, car, etc.), but I do believe the hardware portion will be critical, because most of the time you'll be using it at home in a room, and in that scenario sound quality will actually be key.

That said, if nearly everyone will find utility in an assistant, then obviously the biggest issue with using one of these, as this Amazon announcement illustrates, is whether you can really trust the company with such a thing, given that you'd be having entire conversations about everything from your interests to something as sensitive as your emotional state (has anyone simulated a therapy session with ChatGPT? It's arguably already a decent therapist!).

One of two things will happen, though. Either people will be dumb enough to "upload" their deepest, darkest secrets to megacorp X (thousands of HN users cackle in the distance, as if that's not happening today), or a completely privacy-safe option will be available and will win because its makers can effectively communicate that it is in fact private. It's one thing for Google or FB to build a picture of who you are, what you think, etc. through browsing activity/purchases/etc. It's entirely something else for you to literally tell them every last thing about you, so that they can hear, in your own words, how you think about "everything."



You know, people said the same thing the first time voice assistants came out. They said the same thing when VR came out. Even when 3D printers came out for God's sake.

"Everyone will have one!"

It's a mistake to think every person is as enthusiastic about new technology as you are.


I definitely agree it's a mistake to think that. That said, I do think LLMs or their direct successors are going to be more akin to Google search than the items you mention in terms of market penetration. And my comment was attempting to communicate that voice is today, and will be in the future, a great way to interact with LLMs. I think you're saying you disagree with that, which is totally cool of course. Just thought I'd reply to share a little more of my thinking.


> It's a mistake to think every person is as enthusiastic about new technology as you are.

This is true in general, but LLMs do search better. Everyone already does search.


I use LLMs pretty liberally and I can say with 100% certainty I am not going to leave an open microphone in my home hooked up to an LLM connected to a place I do not control that is actively trying to "learn" about me.


Do it all locally.

I wrote a blog post[1] describing what a local only LLM could do. The answer is quite a lot with today's technology. The question is - do any of the tech giants actually want to build it?

The locally hosted scenarios are in some ways more powerful than what you can do with cloud hosted services, and honestly given that companies could charge customers for the inference hardware instead of paying to host, it would likely be a net win for everyone. Sadly companies are addicted to SaaS revenue and have forgotten how to make billions by selling actual things (with the exception of Apple).
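
To make that concrete, here's a rough sketch of what a local-only voice loop could look like with off-the-shelf pieces today. This is just my own illustration, assuming openai-whisper for speech-to-text, llama-cpp-python for inference, and pyttsx3 for speech output; the model path and system prompt are placeholders:

    import whisper                # openai-whisper: local speech-to-text
    import pyttsx3                # offline text-to-speech
    from llama_cpp import Llama   # llama-cpp-python: local LLM inference

    # Placeholder model choices -- swap in whatever you have downloaded locally.
    stt = whisper.load_model("base")
    llm = Llama(model_path="assistant.gguf", n_ctx=4096)
    tts = pyttsx3.init()

    def handle_utterance(wav_path: str) -> str:
        """Transcribe one recorded utterance, answer it locally, and speak the reply."""
        text = stt.transcribe(wav_path)["text"]
        out = llm.create_chat_completion(
            messages=[
                {"role": "system", "content": "You are a private, on-device assistant."},
                {"role": "user", "content": text},
            ],
        )
        reply = out["choices"][0]["message"]["content"]
        tts.say(reply)
        tts.runAndWait()
        return reply

Nothing in that loop has to leave the machine; the cost is that the customer pays for the inference hardware up front, which is exactly the hardware-instead-of-SaaS trade I'm describing.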

[1] https://meanderingthoughts.hashnode.dev/lets-do-some-actual-...


I didn't say it in the prior comment, but this is what I'm hoping for: that people end up caring enough that this option "wins." Evidence suggests people will take the cheaper option, though, even if all of their info ends up in the hands of advertisers or something far more nefarious.

You mention Apple... I feel like, of the megacorps, they're the most likely to do something like that. Between the phone, AirPods, HomePod (tethered to the phone, I guess, or a newer version of the hardware), and your car with CarPlay, the hardware already exists, so someone will build a privacy-focused LLM that Apple could plug into. At the least, Apple could justify that by being the hardware interface between the LLM and the user if they can't build their own effective LLM (given their track record, it seems unlikely they'll be able to).

If I were really crazy, I'd say Apple could buy Anthropic (right, right, they don't do big acquisitions) and turn it into their privacy-focused LLM.

Now to read your blog post...


Fair, but the above comment is about the general population. The percentage of people who are actively against it in the real world is negligible. Where do you draw the line? Is Siri/Google Assistant OK on your phone? What about every newer BMW coming with its own assistant? Samsung TVs? Nest/Ecobee products? I could go on, and I haven't met a person who owns zero devices with voice assistants in years.


I'm not sure how anyone can be confident of such things these days, but would you be OK with the open mic if you knew it couldn't be used to build a profile of you?


This is a local only version: https://www.jollamind2.com/


Love the "Drop In" feature, opening a conversation channel to a particular room.



