There are a few things. Let's start with the core of what they say is their value. They have forward deployed engineers - this is a totally new, previously unknown innovation - who go to a company, understand its needs, and build data processing tools to give it insights. Then they generalize these tools so they can essentially be sold as SaaS software, giving them SaaS-type economics.
What other people say is their secret sauce is that they do consulting work for the government (a forward deployed engineer is just a consultant), and they make incredible margins because their senior management and early investors have connections to the government, which gets them exclusive access to incredibly juicy contracts. As these contracts paid off they leant heavily into the social-media meme-stock trend, so their CEO spends time talking like a psychopath and doing various non-economic things, like spending huge amounts of money running adverts about how they're going to use AI to unleash America's workers (America's workers aren't able to buy Palantir software or services, but they can buy its stock).
OK. The first part sounds a bit like an innovation.
I was kind of expecting someone to say it had, e.g., really sophisticated ETL tools that can normalise loads of different data, or can query across disparate data sources, or something.
The first part sounds like a basic ERP implementation. Only instead of leaning on your in-house domain experts - who have years of experience and relevant knowledge, and know the relevant caveats - you pay consulting rates to train up new domain experts who don't know the caveats, and who will then charge you consulting rates for access to the results of the training you overpaid for.
'But they cleaned the data up'. That data was also cleaned up during all the last major system updates. And during the implementation of those systems. And the implementation of the systems before that.
Ah I see, so the [token holder] hires a [builder] to build something, and uses that to then hire [intellectual] to scam the ['pragmatic user']?
To take this a little more seriously: this is computer programming, and very famously you don't need massive gobs of VC capital to build something. The only reason the [builder] needs the [token holder] is to hire the [intellectual] to scam the [user].
Oh and of course, [token holder] [builder] and [intellectual] are the same guy with 3 different anime profile pics.
If you can afford a Lambo you can come to a place like Dubai and pay for it in AnyUSDcoin, gold nuggets or anime-profile-picture NFTs. The barrier to using crypto of any kind doesn't exist in countries without paranoid AML/KYC regulations.
Or tbh you can just buy it with a crypto card issued in Hong Kong or Singapore, even if you're buying it in the US.
There's a difference between a good product and a good business. It's easy to see that this is a great product, but it's really difficult to see how it's a great business. All the traditional things we look for in great businesses seem to be absent here: they don't have network effects like Facebook or Uber, they don't have lock-in the way Apple does, and to a large extent they don't have the traditional economics of SaaS. They have a product that is interchangeable with 3 or 4 other companies', they have extremely high initial investment costs to train models, and it's not clear they can actually sell tokens for more than it costs to make them.
It's like the foundry business: an ever-increasing cost of moving to the next node requires ever-increasing scale, which naturally kills off competition.
Training is taking an enormous problem, trying to break it into lots of pieces, and managing the data dependencies between those pieces. It's solving one really hard problem. Inference is the opposite: it's lots of small independent problems. All of this "we have X many widgets connected to Y many high bandwidth optical telescopes" is a training problem that they need to solve. Inference is "I have 20 tokens and I want to throw them at these 5,000,000 matrix multiplies, oh and I don't care about latency".
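To make that contrast concrete, here's a toy sketch in plain NumPy (nothing to do with any particular vendor's stack, just an illustration of the coupling):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))  # toy "model": one shared weight matrix

def local_gradient(W, batch):
    # each worker computes a gradient for the shared model on its own data shard
    x, y = batch
    err = x @ W - y
    return x.T @ err / len(x)

# Training: every step couples all the workers together - nobody can move on
# until everyone's gradient has been combined (the "all-reduce").
shards = [(rng.normal(size=(32, 8)), rng.normal(size=(32, 8))) for _ in range(4)]
for _ in range(10):
    grads = [local_gradient(W, shard) for shard in shards]  # data dependency
    W = W - 0.01 * np.mean(grads, axis=0)                   # sync point

# Inference: lots of small, independent problems - each request only needs the
# frozen weights, so it can run anywhere, in any order, whenever.
requests = [rng.normal(size=(1, 8)) for _ in range(20)]
outputs = [x @ W for x in requests]
```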
I'm actually not that worried about this, because again I would classify this as a problem that already exists. There are already idiots in senior management who pass off bullshit and screw things up. There are natural mechanisms to cope with this, primarily business reputation: if you're one of those idiots who does this, people very quickly start discounting what you're saying. They might not know exactly how you're wrong, but they learn very quickly that you can't be trusted to self-check.
I'm not saying this can't happen or that it isn't bad. Take a look at nudge theory: the UK government created an entire department and spent enormous amounts of time and money on what they thought was a free lunch, the idea that they could just "nudge" people into doing the things they wanted. So rather than actually solving difficult problems, the UK government embarked on decades of pseudo-intellectual self-aggrandizement. The entire basis of that decades-long debacle was bullshit data and fake studies. We didn't need AI to fuck it up, we managed it perfectly well by ourselves.
Nudge theory isn't useless, it's just not anything like as powerful as money or regulation.
It was taken up by the UK government at that time because the government was, unusually, a coalition of two quite different parties, and thus found it hard to agree to actually use the normal levers of power.
In the UK, there are massive incentives for "tax efficient" purchasing of vehicles - i.e. there was a point where everyone who could was driving around in a Porsche Taycan because the tax implications of buying one were comically positive. Of course, those benefits don't translate to the secondary market, so of course there's a glut of 3-year-old electric vehicles. No one wants them. Oh, and you can't buy a petrol car because the car companies are under the hammer to sell EVs.
Just to be clear, the UK system is much simpler than the US system. There is just a bad law. That law could be repealed with a majority in Parliament tomorrow; until it is repealed (spoiler: it absolutely will not be), the regulator can and will file these lawsuits. The best we can hope for is that the regulator (the Home Office) just doesn't bother trying to enforce the law.
The core problem is that the people writing the laws are know-nothing busybodies who write crap laws and then cause massive problems, and we've demonstrated over the last 18 months that you can fire literally 70% of the UK Parliament, replace them all, and still end up with the same rules written by the same know-nothing busybodies.
Whitehall - the UK civil service - persists between governments in a fairly unique way. It's essentially a political entity that exists beyond democracy and has pinky-promised to be politically impartial.
To paraphrase an adage I've forgotten: you can skim as much shit as you like off the Thames, it'll still be a filthy river.
I don't like this line of reasoning because it's largely just crystallizing a loss. We have this in the UK - houses of multiple occupancy. It's a great idea where you take a home that in the 1980s would house a family and split it into 5 flats where each person can rent 10-20 square metres each. I would much rather someone did something to address the fact that the average family in the UK can afford roughly 1/5th the housing they could in the 1980s. And of course, because of this arbitrage, a family that wants to live in that home is now competing with the rental income of 5+ tenants in an HMO.
Surely the correct solution is just to put in some simple rules to bring the cost of housing down - for example, planning restrictions are suspended until the average family home hits 3x average family income - rather than just packing us like sardines into ever more expensive houses.
Voters don't actually want house prices to come down. Voters, in aggregate, want rents to fall and prices to rise, roughly divided by renters vs owners. Somehow the homeowners almost always win against the renters in this political tug-of-war. Perhaps because rents are downstream of values, and so it's politically easier for owners to make the correct choices to advance their agenda than it is for renters, who have an extra logical leap required of them.
> Somehow the homeowners almost always win against the renters in this political tug-of-war.
Demographics. Homeowners skew old, which gives them a bunch of advantages in enacting their political power. Higher turnout, baby boom giving them numerical superiority, and the time advantage of being able to enact policy decades ago.
In the US, this is supplemented by matters of race, where because of past redlining policies, "pro-homeowner" policy (esp. suburban single-family-homes) in the last half-century has been a way to primarily benefit white people.
You're forgetting the most important one. Having a bunch of your money tied up in an illiquid asset that is subject to all manner of government micromanagement gives you a huge incentive to see to it that the government doesn't get progressively more shitty toward you than it already is.
Yep, saying it's an age thing is missing that every homeowner is directly financially incentivized to ensure prices go up. I literally get physical printed mail (against my will) every other week telling me about the health of my neighborhood, where higher home sale prices means better. Being older makes a person more likely to be a homeowner; they got the causation backwards.
> Being older makes a person more likely to be a homeowner; they got the causation backwards.
No.
Being a homeowner doesn't grant one political influence. Being old grants one political influence.
It's the correlation of age and homeownership that means homeowners have the political influence to push through policy that drives up real estate prices.
Non-homeowners have political incentives all the same. If only just to oppose those very homeowners' policies. What they lack is the political influence to make it happen.
Meanwhile, back to the original article: 80s TV like The Golden Girls (shared housing) and Bosom Buddies (boarding houses) are quaint historic notes, but the reality is that our use of housing stock has made the problem of where to live worse: https://www.census.gov/library/stories/2023/06/more-than-a-q...
When you dig down into the data, the article is highlighting a real problem. We have destroyed a lot of the historical co-habitation that kept the system working and healthy. We did this with zoning (getting rid of high density to prop up home values), by banning types of housing (dense single-room, affordable), and by making other types impossible (owning a home and renting out a room or two; people don't do this because of tenants' rights issues).
> Voters don't actually want house prices to come down.
>> You have this wrong.
But you don't refute this, if anything you make the case that, since the majority are homeowners, they would of course want ever-increasing home values. And it's common knowledge that homeowners assume/depend on rising values as part of their purchasing decision.
If non-voting renters showed up, this could easily swing the other way. Renters are (generally) unmotivated to change their destiny at the voting booth.
> And it's common knowledge that homeowners assume/depend on rising values as part of their purchasing decision.
This is far, far more complicated than it looks, because you have to separate inflation from real changes in value after adjustment. There are plenty of places where housing has gone down in value (see Detroit, see Camden NJ). There are plenty of places where gentrification has changed whole regions (see the SF Bay Area).
There has also been a massive change in what we build (smaller homes, vs McMansions).
When you dig into the WHY of this, the destruction of old stock is a huge part of it (see Detroit). So are massive changes to what and how we build (back to homeowners and zoning) limiting growth in areas, and over-regulation (see the slow rebuilding in southern CA after the fires, and the whole housing shortage here). NOTE: the ADU law that was an attempt to let homeowners fix this themselves has been somewhat of a flop... however it is gaining momentum.
The fixes to housing in the US require voters to pass something where only a bit more than half of it would be "good" for them, and in a hard-to-explain way. It is easy to get them to vote such a policy down when the 40 percent that might impact them makes for a clear-cut argument for a NO.
> Voters, in aggregate, want rents to fall and prices to rise, roughly divided by renters vs owners
I assure you, a lot of people in the UK want house prices to fall too. There are too many renters who don't want to be renting, and the proportion is increasing. They wish they could buy instead, but can't: either because of price, or because they can't save for a down-payment as fast as prices rise (while large rent rises impede their saving or even drain it, and incomes rise more slowly than prices), or because they can't obtain a mortgage despite a history of consistently paying more than a mortgage in rent. For the latter category, who can afford a mortgage but can't get one and are already paying more in rent, the main problem isn't income or price, it's the tighter restrictions on mortgage availability since the 2008 financial crisis. But they would still like lower prices.
Somehow? Homeowners are obviously the more powerful group, and real estate ownership in the bigger picture is tied to even more powerful entities (big companies, banks, billionaires).
How many in the top 10% of the wealth distribution do you think are renters?
How many in the bottom 20% do you think are homeowners?
In the 1970s, it was usual for working-class newlyweds to have to live with their parents until they were able to find housing. That's why second-rate comedians of the time like Les Dawson had so many mother-in-law jokes: there was an awful lot of resentment between young men and their mothers-in-law to exploit. There's nothing new about multiple families crowding into houses designed for just one family in this country - that's why there are so many pubs.
The Town and Country Planning Act 1947 has been identified as a cause of insufficient housebuilding activity, and new legislation is currently working its way through the House of Lords to alleviate this.
In the UK specifically the radical reform (read destruction) of council housing by the Thatcher government had a large impact on the housing market in the 1980s.
In 1970 you could basically buy any non-city plot of land and build a shack on it without anyone bothering you. Think of the back-to-the-land hippies in California just chopping down trees and starting their little communes - they'd be utterly fucked if they did that now; some Karen would rat them out instantly to the planning and zoning committee.
In the late 60s/70s DIY builders were almost completely displaced by developers who lobbied for regulations that stomped out "a guy and his pickup truck" by and large almost anywhere with desirable land. Then the owners of those houses reinforced same to prop up their property values.
I live in one of the last remaining counties that didn't do that, and last year I built a house for $60k. Pretty easy if you're in a place with essentially no codes or zoning. My (fairly) newlywed and I built the house with basically no experience either.
And in turn none of the discussion you've replied to is relevant by your standard, because the OG article discusses the United States.
Funny that someone else is allowed to discuss the UK in regard to an American article, but I'm not allowed to discuss America on a UK thread about an American article.
The discussion was prompted by an article on sharehouses being banned in US cities, which prompted comparison to HMOs in the UK. One of those comparisons suggested that HMOs are a recent phenomenon and are a cause in the shortage of family homes in the UK. I replied to this by arguing that a shortage of family homes was also present in the 1970s, and that overcrowded housing for working families has been common throughout British history. You've replied to this with your personal experiences about building a home in the 1970s and dealing with building regulations.
The discussion about the effects of UK HMOs on wider housing availability is indeed a peripheral discussion of limited interest to most. Your comment, while of interest to me, was only tangentially related to my comment. I'm not arguing that you shouldn't have written it - as I said, I found it interesting - I'm just pointing out that it doesn't flow well from what came before it.
Housing in the UK/US seems to suffer from simultaneous under- and over-regulation. We over-regulate urban infill housing and the types of housing you can build. We under-regulate landowner profits by letting them keep land rents.
A holistic fix would address both causes of failure in the housing market.
>> houses of multiple occupancy. It's a great idea where you take a home that in the 1980s would house a family and split it into 5 flats where each person can rent 10-20 square metres each
This isn't correct. When a house is split into multiple flats, they're individual flats rented out under separate agreements, not an HMO.
An HMO is when that house is rented to a group of people who are unrelated to each other (i.e. not of the same 'household'). They are generally jointly and severally liable (under an AST). They each have a bedroom and share the kitchen/bathroom/common areas. HMOs have stricter health & safety regulations; for example, doors must be the automatically closing fire doors that you get in public buildings.
In about 18 years in HMOs I've only had one occasion where the rooms were let separately (in that case they were all being let separately from the start). Most of the time you move in and join an existing lease, where the leaving tenant is removed and you are added alongside the other tenants.
I think it probably depends on how the initial people moved in. If the estate agent is renting the rooms individually from the beginning it'll be separate agreements. If it's initially rented to a group of friends it's likely a joint AST and then re-assigned over the years as individuals change and the lease is renewed until you have a bunch of strangers jointly and severally liable (not a great idea).
Maybe this is a 'where you live' sort of thing then... I've never been based in London, and when I was HMO-ing it was typically stuff on SpareRoom, which was pretty much always individual lets per room in the areas I lived, at least (probably less competitive than London).
The UK has also had extremely high immigration rates since the 1980s. Whether that's good or bad policy isn't my place to say, but it certainly places extreme pressure on the housing market.
Assume positive intent. I flagged your comment because you made a false and scurrilous insinuation about my intentions. I'm not a UK citizen or resident and don't care about their immigration policy one way or another. But a high immigration rate will obviously increase housing demand: this is basic macroeconomics and trivially true.
The UK net immigration rate has been high relative to other countries worldwide, especially in recent years. It is an outlier on that basis. I'm not sure why you would limit the comparison to only high-income countries.
Someone did comment that it's actually smart to check if something is fixed on the unstable branch, or I suppose in your coworkers' branches. A good task for an LLM.
Zuckerberg rushing into every new fad with billions of dollars has somehow tricked people into thinking that's what big tech is about and all of them should be shovelling money into this.
But actually every other company has been much more strategic: Microsoft is bullish because they partnered up with OpenAI and it pumps their share price to be bullish, and Google is the natural home of a lot of this research.
But actually, Amazon, Apple etc aren't natural homes for this, they don't need to burn money to chase it.
So there we have it: the companies that have a good strategy for this are investing heavily, the others will pick up mergers and key technology partners as the market matures, and presumably Zuck will go off and burn $XB on the next fad once AI has cooled down.
I generally agree with you, although Amazon is really paranoid about being behind here.
On the last earnings call the CEO gave a long rambling defensive response to an analyst question on why they’re behind. Reports from the inside also say that leaders are in full blown panic mode, pressing teams to come up with AI offerings even though Amazon really doesn’t have any recognized AI leaders in leadership roles and the best talent in tech is increasingly leaving or steering clear of Amazon.
I agree they should just focus on what they're good at, which is logistics and fundamental "boring" compute infrastructure things. However, leadership there is just all over the map trying to convince folks they're not behind, instead of just focusing on strengths.
Doesn't Amazon have a huge lead just because of AWS? Every other player is scrambling for hardware/electricity while Amazon has been building out data centers for the last 20 years.
> Doesn't Amazon have a huge lead just because of AWS?
They have huge exposure because of AWS; if the way people use computing shifts, and AWS isn't well-configured for AI workloads, then AWS has a lot to lose.
> Every other player is scrambling for hardware/electricity while Amazon has been building out data centers for the last 20 years.
Microsoft and Google have also been building out data centers for quite a while, but also haven't sat out the AI talent wars the way Amazon has.
1. Price-performance has struggled to stay competitive. There are some supply-demand forces at play, but the top companies consistently seem to strike better deals elsewhere.
2. The way AWS is architected, especially on networking, isn't ideal for AI. They've dug in their heels on their own networking protocols despite struggling to compete on performance. I personally know of several workloads that left AWS because it couldn't compete on networking performance.
3. Struggling on the managed side. On paper a service like Bedrock should be great, but in practice it's been a hot mess. I'd love to use Anthropic via Bedrock, but it's just much more reliable going direct. AWS has never been great at this sort of managed service at scale, and they're again struggling here.
In theory they should, but it’s increasingly looking like they’re struggling to attract/retain the right talent to take advantage of that position. On paper they should be wiping the floor with others in this space. In practice they’re getting their *ss kicked and in a panic on what to do.
My understanding is that they fell behind on offering the latest gen Nvidia hardware (Blackwell/Blackwell Ultra) due to their focus on internally developed ASICs (Trainium/Inferentia gen 2).
I'd argue that Meta's income derives in no small part from their best in class ad targeting.
Being at the forefront matters to them in two ways (a toy sketch follows below):
(1) Creating a personalized, per-user data profile for ad targeting is very much their core business. An LLM can do a very good job of synthesizing all the data they have on someone to predict things that person will be interested in.
(2) By offering a free "ask me anything" service from meta.ai, tied directly to a real-world human user account, they gather an even more robust user profile.
This isn't, in my opinion, simply throwing billions at a problem willy-nilly. Figuring out how to apply this to their vast reams of existing customer data economically is going to directly impact their bottom line.
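For point (1), here's a purely hypothetical sketch of the idea - `call_llm`, the event format and the profile schema are all made up for illustration; Meta's actual pipeline obviously isn't public:

```python
import json

def call_llm(prompt: str) -> str:
    # stand-in for whatever model endpoint you actually have access to
    raise NotImplementedError("wire up your own model client here")

def build_interest_profile(events: list[dict]) -> dict:
    # Hand the raw activity log to an LLM and ask it to condense it into an
    # interest profile that an ad system could consume.
    prompt = (
        "From these activity events, return JSON with keys 'topics' "
        "(likely purchase interests) and 'confidence' (0-1):\n"
        + "\n".join(json.dumps(e) for e in events)
    )
    return json.loads(call_llm(prompt))

# e.g. build_interest_profile([
#     {"type": "page_like", "page": "trail running"},
#     {"type": "ad_click", "category": "GPS watches"},
# ])
```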
5 minutes on Facebook being force-fed Mesopotamian alien conspiracies is all you'll need to fully understand just how BADLY they need some kind of intelligence for their content/advertising targeting, artificial or not...
You probably don't spend enough time on their sites to have a good ad targeting model of you developed. The closer you are to normal users, with hundreds of hours of usage and many ad clicks, the more accurate the ads will be for you.
Same terrible experience for me while I was on FB.
I was spending a lot of time there and I do shop a lot online. They couldn't come up with relevant ad targeting for me.
For my wife they started to show relevant ads AFTER she went to settings and manually selected areas she is interested in.
This is not the advanced technology everyone claims FB has.
People look at all the chaos in their AI lab but ignore the fact that they yet again beat on earnings substantially and directly cited better ad targeting as the reason for that. Building an LLM is nice for them, but applying AI to their core business is what really matters financially, and that seems like it's going just fine.
The largest LLMs are mostly going to be running in the cloud, so the general purpose cloud providers (Amazon, Microsoft, Google) are presumably going to be in the business of serving models, but that doesn't necessarily mean they need to build the models themselves.
LLMs look to be shaping up as an interchangeable commodity as training datasets, at least for general-purpose use, converge to the limits of the available data, so access to customers seems just as important as, if not more important than, the models themselves. It seems it just takes money to build a SOTA LLM, while the cloud providers have more of a moat, so customer access is perhaps the harder part.
Amazon do of course have a close relationship with Anthropic both for training and serving models, which seems like a natural fit given the whole picture of who's in bed with who, especially as Anthropic and Amazon are both focused on business customers.
Sure, but you can also sell something without having built it yourself, just as Microsoft Copilot supports OpenAI and Anthropic models.
It doesn't have to be either/or of course - a cloud provider may well support a range of models, some developed in house and some not.
Vertical integration - a cloud provider building everything they sell - isn't necessarily the most logical business model. Sometimes it makes more sense to buy from a supplier, giving up a bit of margin, than build yourself.
I'm just an observer. Microsoft has invested billions in OpenAI and can access their IP as a result. It might even be that MS hopes OpenAI fails, and so doesn't allow them to restructure to continue acquiring outside funding. You can go straight to the announcement of their in-house model offerings and see they are clearly using this as a recruiting tool for talent. Whether it makes sense for the cloud providers to build their own models is not for me to say, but they may not have a choice given how quickly OpenAI/Anthropic are burning cash. If those two fail, then they're essentially ceding the market to Google.
I think this analysis is a bit shallow with regard to Meta's product portfolio and how AI fits in.
Much more than the others, Meta runs a content business. Gen AI aids in content generation, so it behooves them to research it. Even before the current explosion of chatbots, Meta was putting this stuff into their VR framework. It's used for their headset tracking, and speech-to-text is helpful for controlling a headset without a physical keyboard.
You're making it sound like they'll follow anything that walks by but I do think it's more strategic than that.
But that didn't require deep insight. Both were already really popular and clearly a threat to Facebook. WhatsApp was huge in Europe before they bought it (possibly in other places as well).
Buying competition is par for the course for near-monopolies in their niches. As long as the scale differences in value are still very large, you can avoid competition relatively cheaply, while the acquired still walk away with a lot of money.
Why does investing in AI require deep insight? ChatGPT is already huge - significantly bigger than WhatsApp was when that deal was done. And while OpenAI is not for sale, he figured that their employees are. Not to mention, investors are very positive on AI.
So far there hasn't been a transformative use case for LLMs besides the straightforward chat interface (or some adjacent derivative). Cursor and IDE extensions are nice, but not something that generates billions in revenue.
This means there are two avenues:
1. Get a team of researchers to improve the quality of the models themselves to provide a _better_ chat interface
2. Get a lot of engineers to work LLMs into a useful product besides a chat interface.
I don't think either of these options is going to pan out. For (1), the consumer market has been saturated. Laymen are already impressed enough by inference quality; there's little ground to be gained here besides a super-AGI Terminator Jarvis.
I think there's something to be had with agentic interfaces now and in the future, but they would need to have the same punching power to the public that GPT3 did when it came out to justify the billions in expenditure, which I don't think it will.
I think these companies might be able to break even if they can automate enough jobs, but... I'm not so sure.
WhatsApp had $10M revenue when it was acquired[1]. Lots of so-called "ChatGPT wrappers" have more revenue than that. While in hindsight the WhatsApp acquisition at $19B seems like a no-brainer, no concrete metric pointed to it then, any more than one does for him investing $19B in AI now.
Dude, Zuckerberg bought WhatsApp because FB Messenger was losing market share... It had nothing to do with WhatsApp's revenue, and everything to do with Zuckerberg's fear of FB products being displaced.
How many software engineers are there in the world? How many are going to stop using it when model providers start increasing token cost on their APIs?
I could see the increased productivity of using Cursor indirectly generating a lot more value per engineer, but... I wouldn't put my money on it being worth it overall, and neither should investors chasing the Nvidia returns bag.
Amazon's strategy is to invest in the infrastructure; the money is where the machines live. I think they just realized none of those companies have a moat, so why would they? But all of them will buy compute.
Except they’re struggling here. The performance of their offerings is consistently behind competitors, particularly given their ongoing networking challenges, and they’re consistently undercut on pricing.
For Amazon “renting servers” at very high margin is their cash cow. For many competitors it’s more of a side business or something they’re willing to just take far lower margin on. Amazon needs to keep the markup high. Take away the AWS cash stream and the whole of Amazon’s financials start to look ugly. That’s likely driving the current panic with its leadership.
Culturally, Amazon does really well when it's an early-mover leader in a space. It really struggles, and its leadership can't navigate, when it's behind in a sector, as is playing out here.
Under what scenario does Amazon lose the beast that is its high margin cloud service renting? It appears to be under approximately zero threat.
Companies are not going to stop needing databases and the 307 other things AWS provides, no matter how good LLMs get.
Cheaper competitors have been trying to undercut AWS since the early days of its public availability, and it has not worked at all. It's AWS's very comprehensive offering, proven track record and momentum that have shielded it and will continue to do so indefinitely.
It's already playing out. Just look at recent results. While once light years ahead, AWS now has competitors closing the gap, and margins are under pressure. AWS clearly isn't going away, but on the current trajectory its future as the leading cloud is very much not a certainty.
Because if LLM inference is going to be a bigger priority for the majority of companies, they're going to go where they can get the best performance to cost ratio. AWS is falling behind on this. So companies (especially new ones) are going to start using GCP or Azure, and if they're already there for their LLM workloads, why not run the rest of the infrastructure there?
It's similar to how AWS became the de-facto cloud provider for newer companies. They struggled to convince existing Microsoft shops to migrate to AWS, instead most of the companies just migrated to Azure. If LLMs/AI become a major factor in new companies deciding which will be their default cloud provider, they're going to pick GCP or Azure.
Besides cloud budgets being spent on LLMs elsewhere, as others mentioned, LLM coding will make it easier to convert codebases away from being AWS-dependent, easing their lock-in.
Microsoft has the pleasure of letting you pay for your own hosted GPT models, Mixtral, etc
Microsoft's in a sweet spot. Apple's another interesting one: you can run local LLM models on your Mac really nicely. Are they going to outcompete an Nvidia GPU? Maybe not yet, but they're fast enough as-is.
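For what it's worth, the local-on-a-Mac point is about this easy in practice - a rough sketch using llama-cpp-python (one option among several; mlx or Ollama work too), where the model path is a placeholder for whatever GGUF file you've downloaded:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/some-small-model.gguf",  # placeholder: any GGUF model
    n_gpu_layers=-1,  # offload all layers to the Apple GPU via Metal
    n_ctx=4096,
)

out = llm("In one sentence, why is local inference attractive?", max_tokens=64)
print(out["choices"][0]["text"])
```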
To do what for that money? Write summaries of product reviews? If they wanted to do something useful, they'd use the LLM to figure out which reviews are for a different product than what is currently being displayed.
> But actually, Amazon, Apple etc aren't natural homes for this, they don't need to burn money to chase it.
I really liked the concept of Apple Intelligence, with everything happening on device - both processing and data - and minimal reliance on anything off device to deliver the intelligence. It's been disappointing that it hasn't come to fruition yet. I am still hopeful the vapor materializes soon. Personally I wouldn't mind seeing them burn a bit more to make it happen.
It will likely occur, just maybe not this year or next. If we look over the last eighty years of computing, the trend has been smaller and more powerful computers. No reason to think this won’t occur with running inference on larger models.
Exactly. Being a tech company doesn't mean you need to do everything any more than just because you're a family doctor you also should do trauma surgery, dentistry, and botox injections. Pick a lane, be an expert in it.
Except that Amazon's AWS business is severely threatened by the rise of alternative cloud providers who offer much more AI-friendly environments. It's not an existential topic yet, but could easily turn into one.
Zuckerberg's AI "strategy" seems to be to make it easy for people to generate AI slop and share it on FB, thus keeping them active on the platform. Or to give people AI "friends" to interact with on FB, thus keeping them on the platform and looking at ads. It's horrifying, but it does make business sense (IMHO), at least at first glance.
They do have sensors out for any AGI, though, because AGI could subvert business fields and expertise moats. That's what most AI teams are - vanity projects, plus a few experts calming the higher-ups every now and then with "it's still just autocompletion on steroids, it cannot yet do work and research alone."
> Zuckerberg rushing into every new fad with billions of dollars has somehow tricked people into thinking that's what big tech is about and all of them should be shovelling money into this.
Zuckerberg has failed at every single fad he's tried.
He's becoming more irrelevant every year, and only the company's spoils from the past (earned not least by enabling, for example, a genocide to be committed in Myanmar https://www.pbs.org/newshour/world/amnesty-report-finds-face...) help carry them through the series of disastrous, idiotic decisions Zuck is inflicting on them.
- VR with Oculus. It never caught on; for most people who own one, it's just gathering dust.
He is doing it at the worst possible moment: LLMs are stagnating, and even far better players than Meta, like Anthropic and OpenAI, can't produce anything worth writing about.
GPT-5 was a flop, Anthropic are struggling financially, lowering token limits, preparing users for price hikes, and going 180 on their promises not to use chat data for training - and Zuck, in his infinite wisdom, decides to hire top AI talent at a premium price in a rapidly cooling market? You can't make up stuff like that.
It would appear that apart from being an ass kisser to Trump, Zuck shares another thing with the orange man-child running the US: a total inability to make good, or even sane, deals. Fingers crossed that Meta goes bankrupt just like Trump's 6 bankruptcies, and then Zuck can focus on his MMA career.
I've been taking heat for years for making fun of the metaverse. I had hopeful digital landlords explain to me that they'll be charging rent in there! Who looked at that project and thought it was worth anything?
> I don't know in what circles you're hanging out, I don't know a single person who believed in the metaverse
Oh please, the world was full of hype journalists wanting to sound like they get it and they are in it, whatever next trash Facebook throws their way.
The same way folks nowadays pretend the LLMs are the next coming of Jesus - it's the same hype as the scrum crowd, the same as crypto, NFTs, web3. Always ass kissers who can't think for themselves and have to jump on some bandwagon to feign competence.
Meta made $62 billion last year. Mark burns all this money because his one and only priority is making sure his company doesn't become an also-ran. The money means nothing to him.
Google basically invented modern AI (the 'T' in ChatGPT stands for Transformer), then took a very broad view of how to apply neural AI generally (AlphaGo and AlphaGenome being the kind of non-LLM stuff they've done).
A better way to look at it is that the absolute number 1 priority for google, since they first created a money spigot through monetising high-intent search and got a monopoly on it (outside of Amazon), has been to hold on to that. Even YT (the second biggest search engine on the internet other than google itself) is high-intent search leading to advertising sales conversion.
So yes, google has adopted and killed lots of products, but for its big bets (web 2.0 / android / chrome) it's basically done everything it can to ensure it keeps its insanely high revenue and margin search business going.
What it has to show for it is basically being the only company to have stayed dominant across technological eras (desktop -> web 2.0 -> mobile -> maybe LLM).
As good as OpenAI is as a standalone, and as good as Claude / Claude Code is for developers, google has over 70% mobile market share with android, nearly 70% browser market share with chrome - this is a huge moat when it comes to integration.
You can also be very bullish about other possible trends. For AI - they are the only big provider which has a persistent hold on user data for training. Yes, OpenAI and Grok have a lot of their own data, but google has ALL gmail, high intent search queries, youtube videos and captions, etc.
And for AR/VR, android is a massive sleeping giant - no one will want to move wholesale into a Meta OS experience, and Apple are increasingly looking like they'll need to rely on google for high performance AI stuff.
All of this protects google's search business a lot.
Don't get me wrong, on the small stuff google is happy to let their people use 10% time to come up with a cool app which they'll kill after a couple of years, but for their big bets, every single time they've gone after something they have a lot to show for it where it counts to them.
Yeah, and Google has cared deeply about AI as a long term play since before they were public. And have been continuously invested there over the long haul.
The small stuff that they kill is just that--small stuff that was never important to them strategically.
I mean, sure, don't heavily invest (your attention, time, business focus, whatever) in something that is likely to be small to Google, unless you want to learn from their prototypes, while they do.
But to pretend that Google isn't capable of sustained intense strategic focus is to ignore what's clearly visible.
I haven't followed that closely, but Gemini seems like a pivot based on ChatGPT's market success
Google is leading in terms of fundamental technology, but not in terms of products
They had the LaMDA chatbot before that, but I guess it was being de-emphasized until ChatGPT came along.
Social was a big pivot, though that wasn't really due to Pichai. That was while Larry Page was CEO and he argued for it hard. I can't say anyone could have known beforehand, but in retrospect, Google+ was poorly conceived and executed
---
I also believe the Nth Google chat app was based on WhatsApp success, but I can't remember the name now
Google Compute Engine was also following AWS's success, after initially developing Google App Engine.
> I haven't followed that closely, but Gemini seems like a pivot based on ChatGPT's market success
"AI" in it's current form is already a massive threat to Google's main business (I personally use Google only a fraction of what I used to), so this pivot is justified.
If you are defining "pivot" as "abandon all other lines of business", then no, none of the BigTechs have ever pivoted.
By more reasonable standards of "pivot", the big investment into Google Plus/Wave in the social media era seems to qualify. As does the billions spent building out Stadia's cloud gaming. Not to mention the billions invested in their abandoned VR efforts, and the ongoing investment into XR...
I'd personally define that as Google hedging their bets and being prepared in case they needed to truly pivot, then giving up when it became clear that they wouldn't need to. Sort of like "Apple Intelligence", but committing to the bit and actually building something that was novel and useful to some people, who were disappointed when it went away.
Stadia was always clearly unimportant to Google, and I say that as a Stadia owner (who got to play some games, and then got refunds.) As was well reported at the time, closing it was immaterial to their financials. Just because spending hundreds of millions of dollars or even a few billion dollars is significant to you or I doesn't mean that this was ever part of their core business.
Regardless, the overall sentimentality on HN about Google Reader and endless other indisputably small projects says more about the lack of strategic focus from people here, than it says anything about Alphabet.
> Well, "pivot" implies the core business has failed and you're like "oh shit, let's do X instead".
I mean, Facebook's core business hasn't actually failed yet either, but their massive investments in short-form video, VR/XR/Metaverse, blockchain, and AI are all because they see their moat crumbling and are desperately casting around for a new field to dominate.
Google feels pretty similar. They made a very successful gambit into streaming video, another into mobile, and a moderately successful one into cloud compute. Now the last half a dozen gambits have failed, and the end of the road is in sight for search revenue... so one of the next few investments better pay off (or else)
The link you posted has a great many very insignificant investments included in it, and nothing I've seen Google doing has felt quite like the desperation of Facebook in recent years.
I didn't really see it at first, but I think you are correct to point out that they kind of rhyme. However to me, I think the clear desperation of Facebook makes it feel rather different from what I've seen Google doing over the years. I'm not sure I agree that Google's core business is in jeopardy in the way that Facebook's aging social media platform is.
I suppose you could argue that Amazon does have one special thing going for it here, idle compute resources in AWS. However that is not the sort of thing that requires "AI talent" to make use of.