
No way will this tech destroy phones as you know them today. What is going to destroy phones is natural language. Maybe, maybe in the distant future this tech will be mature enough to provide the visual support to a natural language first device.

The Xbox addressed a very well-defined product sector with a couple of big players doing stuff that is well understood. The Xbox wasn't a toe in the water: they planned a whole strategy around it using known facts about the industry, how people play, and what kinds of games work. They didn't break much new ground with the Xbox. All they had to do was make a compelling, affordable gaming device that had amazing games on it. The rest takes care of itself. The Apple Vision Pro is nothing like that. There is no existing market to put your toe in, there are no real competitors, and we haven't even found a killer use case for these devices.

No one really knows how these AR/VR devices can fit into everyday life. Currently Apple is working on finding the water to put their toe in. Right now they are at an exclusive oasis when they really need an ocean.



Hard disagree. Phones and computers do text/images/video. Voice input and audio output is a poor substitute for text and not at all a replacement for images/video.


I didn't say there wouldn't be a screen. But natural conversation will be much more fluid and efficient than a keyboard and Google search. Having a conversation is so much better for all sorts of applications.

Today's smartphones have to evolve this way, imo. I don't know what the most efficient hardware realization would look like but I imagine it's something that isn't in your pocket most of the time, more of a sleek wearable. It will need to be able to hear and see what you do.


Right. And there are too many scenarios where audio I/O isn't usable (quiet libraries, loud streets) so you always need an alternative.


I've been testing Android's dictation while speaking quietly and it works better than I expected. (That is, the mistakes it makes seem to be the same as if I were talking louder: some common repeated misunderstandings, some things where it's thrown off by my accent.) Having the mic close by makes recording a quiet voice much more tractable than hearing it in conversation; we humans tend to stand further apart, and mics are better than ears by now.

Not that I'd be advocating this for library study halls!


> No way will this tech destroy phones as you know them today. What is going to destroy phones is natural language.

I'm extremely skeptical of this, just because of the vast number of situations where speaking-out-loud isn't going to be desirable. It'll have a place, for sure, but I think it'll be more of a supplement to our current phone paradigm.


> they didn't break much new ground with the Xbox

I kind of disagree with this. They made networking and online gaming on consoles finally a thing most home consumers were interested in. Sure, there were some earlier forays into online gaming/networking on previous consoles (SegaNet, for example), but those were generally pretty niche. Sega only included a dial-up adapter by default, while the Xbox shipped with an Ethernet adapter. That made networking on the box pretty simple right at the time people were buying home routers and broadband internet, and it opened the console up to easy LAN gaming.

Microsoft made Xbox Live a pretty massive feature of the console. While Xbox Live launched a year after the console shipped, I'd still say that planning for it and including the Ethernet port was something nobody else in the console gaming world was doing, and it ended up defining the future of console gaming.


A big part of the success of the Xbox (even if it's not the Japan destroyer the fanboys wanted it to be) was that they really, really let it be its own product and develop an ecosystem. They not only made it a "PC for your TV", which is widely how it was regarded at release, but also expanded the capabilities of what a console was expected to do.

Sadly, they also popularized and solidified the "pay to play games online" model for consoles, vs. the "online play is free except for MMOs" norm that PCs have.


Natural language really sucks as a UI because a lot of things can be done faster than you can say them. Volume control is a tip-of-the-iceberg example.


You can talk today to your phone in natural language. They don't seem very destroyed.

There seems to be something fairly fundamental about smartphone-like devices at the moment, in that people want a screen where they can see pictures, text messages and so on. A smartphone is a fairly minimal implementation of that. I don't think people especially want goggles / glasses stuck on their heads, at least I don't, regardless of how advanced the tech is. Looking around the cafe I'm in, roughly 100% of people have smartphones and zero have Google Glass-like things. Although the tech has been with us longer than you might think. I first tried a wearable computer with a small eye-level display in the 90s and thought hey, cool, but they never caught on. https://spectrum.ieee.org/the-pc-goes-readytowear


Mobile phones and VR are different, non-competing markets. AR might eventually compete with phones once you can wear an AR device all day everywhere you go. AR and VR should be treated as distinct; a good VR headset is bad for most AR use cases and vice versa. Maybe at some distant point there will be hardware capable of doing both really well, but not for many years.

VR fits into daily life as a social experience, and that will be obvious once eye, face and full-body tracking are included with the headset. It could happen even before that if someone solves the network problems with concurrent users and delivers an experience that handles audio well enough to work with multiple people having conversations within earshot of each other. AR is really just putting screens and overlays everywhere; conversational interfaces will have more and better impact than that.


Making interfaces natural language is like making all buttons touch screen. Versatile, yea, but in practice it can be less efficient than dedicated controls or more tactile interfaces.


Bear in mind I am not anticipating a Google Home level of interaction; I am talking about fully sophisticated natural language. And there is no reason touch cannot play a role. I'm just saying these devices will be unlike our current phones. I've listed some use cases elsewhere in the comments that I think easily beat touch.


Natural language is even worse than video for quick content consumption: you can't quickly skip through content the way you can fast-scroll through a blog post.


Natural language replaces the other inputs; the command can still result in summoning a video for you, or a blog post to view.

If I am cooking and can hold a whole conversation with my virtual chef, that beats a video though. Talking through ideas at my desk would be great. Talking for navigation while I drive, yes please. Shopping with my headphones on while the device sees everything I see and makes suggestions about deals, recipe options, what's low in my pantry, and so on. I'll take that. I see technology becoming more and more transparent in our lives. User interfaces will feel awkward when all you need to do is say "show me a video of cats" and it serves it to your companion screen without you ever needing to touch or tap any UI. No need to even have a web UI for YouTube. Just an API, and your device does the rest. It can show you a list of related videos without YouTube themselves providing anything more than the data model.

The cost cutting of not needing sophisticated front ends will be a big driving factor if this is as effective as I think it could be.

Anyways, my head is full of ideas like this.


I get what you mean, and the future you propose really does seem more likely than any other, assuming Neuralink doesn't work. But then there's an issue. Apple-style on-device neural processing seems unlikely for this amount of always-on AI capability, at least with current hardware, batteries and AI models. So it'll be cloud. Then we'll all be developing natural language apps which run on AWS, paying money constantly to be able to use our own apps developed in our own unpaid time. Though I guess we'll get used to it.

The issue is... I like(d) writing CSS.


Natural language won't work as the main control until we have a Neuralink-type product. I don't want to be talking at my computer constantly.


> What is going to destroy phones is natural language.

So when a group of people meet, instead of all sitting quietly and typing on their phones, they will all talk to their phones at the same time?


> What is going to destroy phones is natural language.

Then the Watch is the future.


Dick Tracy will have his revenge, in this model or the next.


> No way will this tech destroy phones as you know them today.

This device? No, absolutely not. I see it as a speculative play by Apple: release a very capable device with a bare-bones ecosystem at a high price.

"Early adopters" will buy it because that's what they do.

"Influencers" will buy it, because that's what they do. Their social media posts about it will give Apple all the data they need to nail down the size of the potential market.

Finally, developers will buy it, because it's cool tech. We'll tell ourselves that it's an emerging market, and we can get it early. The launch of the iOS App Store spawned a gold rush for app developers; I expect that the launch of the AVP and its visionOS App Store will do the same. The size and profitability of that opportunity will be determined by how well Apple develops and popularizes the product.

This is where I'm at on it. I expect the AVP to be best-in-class in terms of hardware and OS-level integration (though, to be fair, I expect the latter will be limited at first in odd ways, in the grand Apple tradition).

I'll get one. I plan to use it for productivity, and as long as that justifies the cost I'll be happy with it. I'll also work on some minimal apps for visionOS. The purpose there will be to "skill up". If Apple releases a more consumer-focused headset that gains adoption, I'll be in a good position to take advantage of that by selling paid apps that are already mature by the time the general public is getting on the bandwagon.

> What is going to destroy phones is natural language.

Maybe?

We've heard about "wearable computing" and "personal area networks" for decades at this point. While it still feels like something in the near future, the truth of the matter is that for a large segment of the population, it's already here. I already have an iPhone with me whenever I'm away from home, and usually an iPad as well. If I'm going to be away from home for a while, I've got an MBP in my backpack. All of those devices can hand off tasks between each other to an increasingly large degree - it's not uncommon for me to pull out my phone to show someone a website I had open on my laptop before I left home, then pull out my iPad if they're interested in it so they can interact with it more easily. Until recently, I had an Apple Watch surfacing an integrated notification stream from all of the above.

Today, smartphones are the central "wearable computing" device that ties everything together. They act as a hub for a computing experience. There's no guarantee in my mind that it will continue in that role forever. Maybe the hub will end up being the descendant of the AVP. Maybe it will be something more akin to a Humane AI Pin, or a Rabbit R1.

In other words... phones have already destroyed phones. Smartphones are really wearable computing hubs that we just happen to still _call_ "phones", because that's what they used to be. They're very rarely used for telephony, and many other devices are capable of doing so.

> Maybe, maybe in the distant future this tech will be mature enough to provide the visual support to a natural language first device.

My hope is that it ends up being a "spatial" interface generic enough that it can be used by pretty much anything.


> They're very rarely used for telephony, and many other devices are capable of doing so.

This is a really important distinction. And if you calculate your phone bill by actual minutes used for talking, it's an insane number of dollars per minute.
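For illustration (made-up numbers, just to show the arithmetic): a $60/month plan that gets used for 10 minutes of actual calls works out to $6 per minute of talking.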

I'd be completely unsurprised if the amount of "talking on Zoom/Teams/voice chat" soon surpasses the total number of minutes spent talking on phones.



