Anyone know what Scale does these days beyond labeling tools that would make them this interesting to Meta? Data labeling tools seem like a fairly traditional software application, not much to do with the AI models themselves, and something that could be replicated fairly easily, but I'm guessing my impression is out of date. Also, their CEO is apparently leaving now [1], so the idea that they were super impressed with him doesn't seem to be the explanation.
OpenAI and Anthropic rely on multiple data vendors for their models so that no outside company is aware of how they train their proprietary models. Forbes reported the other day that OpenAI had been winding down their usage of Scale data: https://www.forbes.com/sites/richardnieva/2025/06/12/scale-a...
Yeah, but they know how to get quality human-labeled data at scale better than anyone, and they know what Anthropic and OpenAI wanted: what made it quality.
But then huge revenue streams for Scale basically disappear immediately.
Is it worth Meta spending all that money just to stop competitors using Scale? There are competitors who I am sure would be very eager to take the money from Google, OpenAI, Anthropic, etc. that was previously going to Scale. So Meta spends all that money for basically nothing, because the competitors will just fill the gap if Scale is wound down.
I am guessing they are just buying stuff to try to be more "vertically integrated" or whatever (remember that Facebook recently got caught pirating books etc).
Yeah, also the industry could come up with their own Scale if they were forced to.
But it probably just makes sense on paper: Scale's revenue will pay for the deal by itself, and they can now give/keep the best training sets for Meta, for "free".
Zuck's not an idiot. The Instagram and WhatsApp acquisitions were phenomenal in hindsight.
> The metaverse will happen, IMO. The tech is just not there, yet.
This seems possible, and it just sounds so awful to me. Think about the changes to the human condition that arose from the smartphone.
People at concerts and other events scrolling phones, parents missing their children growing up while scrolling their phones. Me, "watching" a movie, scrolling my phone.
VR/AR makes all that sound like a walk in the park.
“We went outside this weekend. Terrible. I wasn’t hot anymore, the smog was everywhere. House was tiny. No AI to help with conversations and people were unfriendly. I’m staying plugged in, where we can fly amongst the stars on unicorns. Some say it’s fake but I say life has been fake for a while.”
Meta has done great work on the underlying technology of the metaverse, but what they really need is a killer app. And I don't think Meta, or really Silicon Valley types in general, have the institutional ability or cultural acumen to achieve it. Think back to Horizon Worlds, which looked more like an amateur weekend asset flip than the product of a billion-dollar conglomerate.
If it does come, it will likely come from the gaming industry, building on the ideas of earlier MMORPGs and "social" games like Pokemon Go. But the recent string of AAA disasters should tell you that building a good game is often orthogonal to the amount of funding or technical engineering. It's creativity and artistic passion, and that's something someone who spends their entire life in the real world optimizing their TC is going to find hard to understand.
The prevailing theory is that Meta did a 49% deal so it didn't set off antitrust alarm bells. In other words, the 49% doesn't give them ultimate power, but you'd best believe that when Meta tells them to jump, the board and the execs are going to ask "how high?".
Power struggles like this are weird to me. Is kicking out the board likely to succeed at 49%? If so, it feels like the ownership percentage isn't the primary factor in actual control.
At 49% I'm certain they would become the largest shareholder, by far. Then allying with another, smaller shareholder to get a majority, especially when you are Meta and can repay the favor in various ways, is trivial. This is control in all but name.
There's a lot of things shareholders can do to screw over other shareholders. Smaller shareholders are at least somewhat likely to follow along with the largest shareholder, just to avoid becoming their enemies and getting squeezed out.
It's a smart purchase, it's just that I don't see how these datasets factor into super-intelligence. I don't think you can create a super-intelligent AI with more human data, even if it's high-quality data from paid human contributors.
Unless we water down the definition of super-intelligent AI. To me, super-intelligence means an AI whose intelligence dwarfs anything theoretically possible from a human mind; borderline God-like. I've noticed that some people refer to super-intelligent AI as simply AI that's about as intelligent as Albert Einstein in effectively all domains. In the latter case, maybe you could get there with a lot of very, very good data, but even that is still a leap of imagination for me.
I think this is kind of a philosophical distinction to a lot of people: the assumption is that a computer that can reason like a smart person but still runs at the speed of a computer would appear superintelligent to us. Speed is already the way we distinguish supercomputers from normal ones.
I'd say superintelligence is more about producing deeper insight, making more abstract links across domains, and advancing the frontiers of knowledge than about doing stuff faster. Thinking speed correlates with intelligence to some extent, but at the higher end the distinction between speed and quality becomes clear.
If anything, "abstract links across domains" is the one area where even very low intelligence AI's will still have an edge, simply because any AI trained on general text has "learned" a whole lot of random knowledge about lots of different domains; more than any human could easily acquire. But again, this is true of AI's no matter how "smart" they are. Not related to any "super intelligence" specifically.
Similarly, "deeper insight" may be surfaced occasionally simply by making a low-intelligence AI 'think' for longer, but this is not something you can count on under any circumstances, which is what you may well expect from something that's claimed to be "super intelligent".
I don't think current models are capable of making abstract links across domains. They can latch onto superficial similarities, but I have yet to see an instance of a model making an unexpected and useful analogy. It's a high bar, but I think that's fair for declaring superintelligence.
In general, I agree that these models are in some sense extremely knowledgeable, which suggests they are ripe for producing productive analogies if only we can figure out what they're missing compared to human-style thinking. Part of what makes it difficult to evaluate the abilities of these models is that they are wildly superhuman in some ways and quite dumb in others.
It is really more of a value judgement of the utility of the answer to a human.
Some kind of automated discovery across all domain pairs, searching for something where a human finds utility in the answer, seems almost like the definition of an intractable problem.
Superintelligence just seems like marketing to me in this context. As if AGI is so 2024.
> It's a high bar, but I think that's fair for declaring superintelligence.
I have to disagree, because the distinction between "superficial similarities" and genuinely "useful" analogies is pretty clearly one of degree. Spend enough time and effort asking even a low-intelligence AI about "dumb" similarities, and it'll eventually hit on a new and perhaps "useful" analogy simply as a matter of luck. This becomes even easier if you can provide the AI with a lot of "context" input, which is something models have been improving at. But either way it's not superintelligent or superhuman, just part of the general 'wild' weirdness of AIs as a whole.
I think you misunderstood what I meant about setting a high bar. First, passing the bar is a necessary but not sufficient condition for superintelligence. Secondly, by "fair for" I meant it's fair to set a high bar, not that this particular bar is the one fair bar for measuring intelligence. It's obvious that usefulness of an analogy generator is a matter of degree. Eg, a uniform random string generator is guaranteed to produce all possible insightful analogies, but would not be considered useful or intelligent.
I think you're basically agreeing with me. Ie, current models are not superintelligent. Even though they can "think" super fast, they don't pass a minimum bar of producing novel and useful connections between domains without significant human intervention. And, our evaluation of their abilities is clouded by the way in which their intelligence differs from our own.
Comparing the process of research to tending a garden or raising children is fairly common. This is an iteration on that theme. One thing I find interesting about this analogy is that there's a strong sense of the model's autoregressiveness here in that the model commits early to the gardening analogy and then finds a way to make it work (more or less).
The sorts of useful analogies I was mostly talking about are those that appear in scientific research involving actionable technical details. Eg, diffusion models came about when folks with a background in statistical physics saw some connections between the math for variational autoencoders and the math for non-equilibrium thermodynamics. Guided by this connection, they decided to train models to generate data by learning to invert a diffusion process that gradually transforms complexly structured data into a much simpler distribution -- in this case, a basic multidimensional Gaussian.
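Rough sketch, in case it helps to see the idea concretely. This is a toy PyTorch example I made up to illustrate the noise-prediction setup described above; none of it is taken from the original papers, and all the names and numbers are arbitrary:

    import torch
    import torch.nn as nn

    # Fixed forward process: gradually noise 2-D "data" toward a plain Gaussian.
    T = 1000
    betas = torch.linspace(1e-4, 0.02, T)               # noise schedule
    alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

    def forward_noise(x0, t):
        """Sample a progressively noisier version x_t of the data x_0."""
        a = alphas_cumprod[t].sqrt().unsqueeze(-1)
        s = (1.0 - alphas_cumprod[t]).sqrt().unsqueeze(-1)
        eps = torch.randn_like(x0)
        return a * x0 + s * eps, eps

    # Tiny denoiser: input is (noisy point, normalized timestep), output is predicted noise.
    denoiser = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, 2))
    opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

    for step in range(2000):
        x0 = torch.randn(64, 2) * torch.tensor([3.0, 0.5])   # toy "structured" data
        t = torch.randint(0, T, (64,))
        xt, eps = forward_noise(x0, t)
        pred = denoiser(torch.cat([xt, (t.float() / T).unsqueeze(-1)], dim=1))
        loss = ((pred - eps) ** 2).mean()                     # learn to predict the added noise
        opt.zero_grad(); loss.backward(); opt.step()

Invert that learned noising step by step and you can start from pure Gaussian noise and recover samples that look like the training data, which is the part that came out of the thermodynamics analogy.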
I feel like these sorts of technical analogies are harder to stumble on than more common "linguistic" analogies. The latter can be useful tools for thinking, but tend to require some post-hoc interpretation and hand waving before they produce any actionable insight. The former are more direct bridges between domains that allow direct transfer of knowledge about one class of problems to another.
> The sorts of useful analogies I was mostly talking about are those that appear in scientific research involving actionable technical details. Eg, diffusion models came about when folks with a background in statistical physics saw some connections between the math for variational autoencoders and the math for non-equilibrium thermodynamics.
These connections are all over the place, but they tend to be obscured and disguised by gratuitous divergences in language and terminology across different communities. I think it remains to be seen whether LLMs can be genuinely helpful here, even when restricted to a rather narrow domain (math-heavy hard sciences), and one where human practitioners may well have the advantage. It's perhaps more likely that, as formalization of math-heavy fields becomes more widespread, these analogies will be brought out routinely as a matter of refactoring.
> It's a smart purchase, it's just that I don't see how these datasets factor into super-intelligence.
It's a smart purchase for the data, and it's a roadblock for the other AI hyperscalers. Meta gets Scale's leading datasets and gets to lock out the other players from purchasing it. It slows down OpenAI, Anthropic, et al.
These are just good chess moves. The "super-intelligence" bit is just hype/spin for the journalists and layperson investors.
I'll believe that AI is anywhere near as smart as Albert Einstein in any domain whatsoever (let alone science-heavy ones, where the tiniest details can be critical to any assessment) when it stops making stuff up at the slightest provocation. Current 'AI' is nothing more than a toy, and treating it as super smart or "super intelligent" may even be outright dangerous. I'm way more comfortable with the "stochastic parrot" framing, since we all know that parrots shouldn't always be taken seriously.
Earlier today in a conversation about how AI ads all look the same, I described them as 'clouds of usually' and 'a stale aftertaste of many various things that weren't special'.
If you have a cloud of usually, there may be perfectly valid things to do with it: study it, use it for low-value normal tasks, make a web page or follow a recipe. Mundane ordinary things not worth fussing over.
This is not a path to Einstein. It's more relevant to ask whether it will have deleterious effects on users to have a compliant slave at their disposal, one that is not too bright but savvy about many menial tasks. This might be bad for people to get used to, and in that light the concerns about ethical treatment of AIs are salient.
> I'm way more comfortable with the "stochastic parrot" framing, since we all know that parrots shouldn't always be taken seriously.
First, comfort isn't a great gauge for truth.
Second, many of us have seen this metaphor and we're done with it, because it confuses more than it helps. For commentary, you could do worse than [1] and [2]. I think this comment from [2] by "dr_s" is spot on:
> There is no actual definition of stochastic parrot, it's just a derogatory definition to downplay "something that, given a distribution to sample from and a prompt, performs a kind of Markov process to repeatedly predict the most probable next token".
>
> The thing that people who love to sneer at AI like Gebru don't seem to get (or willingly downplay in bad faith) is that such a class of functions also include thing that if asked "write me down a proof of the Riemann hypothesis" says "sure, here it is" and then goes on to win a Fields medal. There are no particular fundamental proven limits on how powerful such a function can be. I don't see why there should be.
I suggest this: instead of making the stochastic parrot argument, make a specific prediction: what level of capabilities are out of reach? Give your reasons, too. Make your writing public and see how you do. I agree with "dr_s" -- I'm not going to bet against the capabilities of transformer based technologies, especially not ones with tool-calling as part of their design.
To go a step further, some counter-arguments take the following shape: "If a transformer of size X doesn't have capability C, wait until they get bigger." I get it: this argument can feel unsatisfying to the extent it is open-ended with no resolution criteria. (Nevertheless, increasing scale has indeed been shown to make many problems shallow!) So, if you want to play the game honestly, require specific, testable predictions. For example, ask a person to specify what size X' will yield capability C.
It seems very short-sighted given how far Meta's latest model release was behind Qwen and DeepSeek, both of which relied heavily on automatically generated reasoning/math/coding data to achieve impressive results, not human annotated data. I.e. Scale's data is not going to help Meta build a decent reasoning model.
Yes, probably. But it already is. There is also an assumption here that Meta would turn it off; not saying they will or will not, just that it is an assumption.
This is, by all indications, the world's most expensive acquihire of a single person. Reporting has been that Zuckerberg sees Wang as a confidant of sorts, one who has proposed a vision of AI that's said to be non-consensus.
Wang didn't get $14B; he only owns about 15% of Scale. We also don't know how much he sold. He could have sold all of his stock (netting him around $4.5B), none of it, or something in the middle.
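(Back-of-envelope, assuming the reported ~$14B really bought 49%: that implies a valuation of roughly $29B for all of Scale, and 15% of that is about $4.3-4.4B, which is where my ~$4.5B ballpark comes from.)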
It looks like a security/surveillance play more than anything. Scale has strong relationships with the US MIC, the current administration (predating Zuck's rebranding), and the Gulf states.
Their Wikipedia history section lists accomplishments that align closely with the DoD's vision for GenAI. The current admin, and the Western political elite generally, are anxious about GenAI developments and social unrest; the pairing of Meta and Scale addresses those anxieties directly.
I doubt Scale is interesting by itself. This is all about Alexandr Wang. The guy is in his mid-20s and has somehow worked his way up in Silicon Valley to the same stature as CEOs of multi-trillion-dollar companies. Got a front-row seat at Trump's inauguration. Advises the DoD. Routinely rubs shoulders with world leaders. I can't say whether there's actual substance or not, but clearly Zuck sees something in him (probably a bit of himself).
It's a wild story for sure. He dropped out of MIT after freshman year and started Scale to do data labeling. Three years later Scale had a $1B valuation, and two years after that Wang was the world's youngest billionaire. Nine years after Scale's founding they're still doing less than $1B in annual revenue, yet Meta is doing a $14B acquihire. There's definitely more than meets the eye. I suspect it involves multiple world governments, including the US.
I didn't mean to imply he started it alone. Though his co-founder Lucy Guo is almost as bizarre of a story as Wang himself. I'm curious, what were they doing before data labeling?
> Though his co-founder Lucy Guo is almost as bizarre
Well, kind of. I went to school with Lucy, and she was a completely different person back then. Sure, she was among the more social of the CS majors, but the glitz and glamour and weirdness came after she got her fame and fortune.
I suspect a similar thing happened with Wang. When you are in charge of a billion-dollar business, you tend to grow into the billion-dollar CEO.
> what were they doing before data labeling?
They were building an API for Mechanical Turk-style work. Think: send an API call with the instruction "call up this pizza restaurant and ask if they are open", and that API call would cause a human to follow the instruction, physically call the restaurant, and type back a response that gets returned to your API call.
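Roughly this shape, with a made-up endpoint, URL, and field names for illustration (not their actual API):

    import requests

    # Hypothetical "API for humans": POST an instruction, a human worker carries
    # it out, and the result comes back in the response.
    resp = requests.post(
        "https://api.example.com/v1/human-tasks",   # placeholder URL, not a real endpoint
        json={
            "instruction": "Call this pizza restaurant and ask if they are open.",
            "phone_number": "+1-555-0100",
        },
        headers={"Authorization": "Bearer YOUR_API_KEY"},
        timeout=600,  # a human might take a few minutes
    )
    print(resp.json())  # e.g. {"answer": "Yes, they're open until 10pm"}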
The pivot to data labelling, as money poured into self-driving cars, makes some amount of sense given their previous business idea. It's almost the same type of "API for humans" idea, just much more focused on one specific use case.
I’m nowhere near fully confident in these rumors… so there’s nothing to spill. I don’t post specific accusations without some completely reliable basis.
I don't know Alex directly that well, but I believe his "freshman year" skipped all the GIRs and was spent polishing off the most advanced graduate courses in CS theory (18.404), machine learning (6.867), algorithms (6.854), etc.
So basically he did MIT at the PhD level in one year.
As a classmate myself who did it in 3, at a high level too (and I think Varun of Windsurf also completed his undergrad in 3 years)...
Wang's path and trajectory, through MIT at least, is unmatched to my knowledge.
That courseload is completely unremarkable for a first-year with experience in competitive programming (like Wang had). I know a dozen people who did the same.
I know a dozen who come close, but none who did the same, nor who had the entrepreneurial bent so early... Curious, who are the people you have in mind?
Alexandr is just a dude, like you or me, with his own life and his own worries and his own problems. He’s more like the rest of us than you seem to think.
Not trying to diminish his academic accomplishments, but it isn't that uncommon for experienced freshmen to jump straight into advanced topics. If you're the type who has been coding since you were 10, been active on olympiad teams, or whatever, you can probably do just fine in such courses.
If anything, you'd be bored with some undergrad courses.
[1] https://techcrunch.com/2025/06/13/scale-ai-confirms-signific...