I will never forget being at TI7 where OpenAI revealed an AI that could take on pro Dota players. Dota is an insanely complicated and difficult game. This was an eye-opening moment for me that led to a career shift.
The caveat being that the scope of the game was significantly pared down for the sake of the AI. Specifically, the team compositions were pre-determined: the AI only had to understand 10 heroes in two specific arrangements of 5, when there are normally >100* heroes which can be picked in any combination. Certain game mechanics were also declared off-limits for the human players because the AI wasn't able to understand them. Beating pros in that subset of the game was an impressive achievement, but there was a huge gulf between that and doing it in the full version of the game, which they quickly gave up on after collecting their marketing trophy of beating Dota pros in something resembling Dota.
* I'm not sure exactly how many heroes there were at the time; it was fewer than the 124 there are today, but it was certainly a lot more than 10.
There were 112 heroes available at that time. It's also worth noting that two of the heroes chosen for OpenAI to use, Viper and Sniper, are considered some of the mechanically 'easier' heroes, as they rely primarily on autoattacks to do damage, as opposed to decision-making around when to use spells. Crystal Maiden, Lich, and Necrophos, the other 3 of the 5 OpenAI heroes, are similarly considered 'easier' as they have spammable, very forgiving abilities that can be used almost indiscriminately.
AlphaStar did have some limits placed on it to narrow things down for the AI, but it was still an imperfect-information game, and wildly complicated.
And those were all 3+ years ago.
Games are a lower resolution representation of the 'real' world.
And we haven't seen any slowing down of AI scaling up for more and more complex world views.
Eventually the 'map' will be the 'real', as real as the human brain's internal map of reality.
> And we haven't seen any slowing down of AI scaling up for more and more complex world views.
We absolutely have. We have superhuman performance at Go, we have human expert-level performance at StarCraft, and now we get human-baby-level performance at 3D games. The more complex the task/game, the worse the AI gets relative to humans, it seems. I don't see how this shows AI scaling up; to me this is all moving horizontally.
I feel like the dimensional complexity of the problem space is disproportionately larger at each level than the gap in capability over that evolution. Each level of capability you described was science fiction stuff before they were achieved. They’re not in any way horizontal achievements.
> I feel like the dimensional complexity of the problem space is disproportionately larger at each level than the gap in capability over that evolution
But you acknowledge that things slowed down as we moved into more complex domains? Then you agree with my comment. The person I responded to said that things didn't slow down as we moved towards more complex domains, but there is no way you can say they haven't. AlphaGo and AlphaStar quickly reached the point of beating top humans; the domains they worked on after that went way slower and still can't compete with top humans.
> Each level of capability you described was science fiction stuff before they were achieved. They’re not in any way horizontal achievements.
The second statement doesn't follow from the first; moving horizontally by applying the same things to a new domain can still unlock massive capabilities that we didn't have before.
No, I don’t agree. It appears to slow down simply because the complexity is growing so fast that, even though our capability is accelerating, the relative sophistication of play is worse than before. But we aren’t slowing down in the least; we are tackling extremely difficult challenges with advances that are accelerating rapidly in their capability.
I believe the point about 'slowing'/'not slowing' is that, while we are still talking about 'games' and beating humans, the games being tackled are getting more complex with each iteration, solving new problems.
And DeepMind did go off and tackle increasingly complex areas in other fields.
Not sure how anyone can argue AI slowed down. Maybe advancements in one particular game slowed down, but was that because they hit a wall, or because they shifted company resources after a game demo?
I'm not sure how you are measuring time. Do you think that because AlphaStar was a few years ago, AI advancement slowed down? Because there wasn't another breakthrough in AlphaStar? Because they didn't keep going and fully build out every race and unit?
Do you think DeepMind has been throwing resources at StarCraft and just hitting a brick wall?
It was a proof of concept; they beat some humans and moved on to other things like Protein Folding. Was the Protein Folding not impressive enough to think AI was still advancing?
AI is continuing to advance, because for each iteration it is tackling bigger, more complex, problems.
----------
The time between breakthroughs does seem to be trending down: less time between each plateau.
Chess: A board with a lot of possible moves, but 'manageable'; the AI could just calculate every move.
Go: More possible board states than atoms in the universe, or something. So the AI had to use some form of 'intuition'; it could no longer brute-force calculate every move. (It was only a few months after AlphaGo won that they turned the same engine on Chess, and it 'learned' from scratch to be a master in only a few hours.)
SC2: There are no 'moves'; it is all real-time movement, and most importantly, there was imperfect information. The AI had to scout and keep track of unknown positions, to remember and anticipate.
Dota: Honestly, I'm not sure what the big breakthrough is for Dota. But quibbling over how many heroes the AI had access to seems pedantic. This isn't a knock on AI, but the Dota work is 3+ years old now. I really don't think you can say AI research slowed because a company stopped throwing money at a game demo.
Protein Folding: Hey, let's stop just focusing on games and do something to help the world.
Poker: Wasn't Poker also conquered in this time frame, in the last 2 years? Showing the ability to bluff?
3D Virtual Environment: I was in a discussion in another thread where everyone's main argument was that AI isn't 'embodied' in the 'world', doesn't 'live' in the 'world'. And boom, same day, another breakthrough covering that: giving machines what we would call 'vision', to understand objects in the world.
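To make the 'Chess' item in the list above concrete, here is what "just calculate every move" looks like in miniature. Everything here is invented for illustration: the game is single-pile Nim (take 1-3 stones, whoever takes the last stone wins), not chess, and real chess engines add alpha-beta pruning plus a handcrafted evaluation because chess can't be searched to the end. Go's branching factor breaks even that, which is the list's next step.

```python
from functools import lru_cache

# Exhaustive game solving on a toy game: single-pile Nim.
# The player to move takes 1, 2, or 3 stones; taking the last stone wins.

@lru_cache(maxsize=None)
def can_win(stones):
    """True if the player to move can force a win, found by trying every move.

    A position is winning iff some move leads to a position that is
    losing for the opponent. With no legal moves (0 stones), the player
    to move has already lost.
    """
    return any(not can_win(stones - m) for m in (1, 2, 3) if m <= stones)
```

For example, `can_win(21)` is True (take 1, leaving a multiple of 4), while `can_win(20)` is False. The whole game tree fits in memory here; the point of the list above is that this stops being an option as state spaces grow.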
------------
Now slap this into a robot, give it a gun, and tell it the world is a 3d game.
LOL.
"We haven't had a miracle in the last 6 months, oh no, AI advancement is slowing down."
> Each level of capability you described was science fiction stuff before they were achieved
Machine learning has been a thing almost since discrete electrical circuits have existed; it was just wildly impractical to make generalizable versions of it until recently.
It's not about complexity in the sense we're familiar with. Minecraft isn't more complex than StarCraft from a human-intelligence POV; kids can play Minecraft. It's about the difficulty of fitting it into current methods. We can solve Go because we have a neuro-symbolic planning approach (Monte Carlo tree search over an evaluation net) that almost perfectly models the correct way to reason about the game at an expert level. This is an incredibly strong inductive bias that gives AlphaGo an unfair head start over AlphaStar. StarCraft is harder to solve because we don't have such a symbolic approach figured out, we need the net to learn visual representations and connect those representations to actions, and we have a continuous action space. So good luck with that! Minecraft is even harder to model because the rewards are so sparse.
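For readers unfamiliar with the term: here is a toy sketch of what "Monte Carlo tree search over an evaluation" means mechanically. All names below are invented for the sketch, and the game is single-pile Nim (take 1-3 stones, last stone wins), not Go. AlphaGo's real search replaces the random rollout with a learned value/policy network, which is exactly the inductive-bias point above.

```python
import math
import random

def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

class Node:
    def __init__(self, stones, player, parent=None):
        self.stones = stones               # stones remaining
        self.player = player               # player to move here (0 or 1)
        self.parent = parent
        self.children = {}                 # move -> child Node
        self.untried = legal_moves(stones)
        self.visits = 0
        self.wins = 0.0                    # wins for the player who moved INTO this node

def select(node, c=1.4):
    # Walk down the tree via the UCB1 rule while fully expanded.
    while not node.untried and node.children:
        node = max(node.children.values(),
                   key=lambda ch: ch.wins / ch.visits
                   + c * math.sqrt(math.log(node.visits) / ch.visits))
    return node

def expand(node):
    # Add one previously untried child, if any remain.
    if node.untried:
        m = node.untried.pop()
        child = Node(node.stones - m, 1 - node.player, parent=node)
        node.children[m] = child
        return child
    return node

def rollout(stones, player):
    # Play uniformly at random to the end; return the winner.
    while stones > 0:
        stones -= random.choice(legal_moves(stones))
        player = 1 - player
    return 1 - player                      # whoever took the last stone won

def backprop(node, winner):
    while node is not None:
        node.visits += 1
        if winner == 1 - node.player:      # credit the player who moved into `node`
            node.wins += 1
        node = node.parent

def best_move(stones, iters=4000):
    root = Node(stones, player=0)
    for _ in range(iters):
        leaf = expand(select(root))
        winner = rollout(leaf.stones, leaf.player)
        backprop(leaf, winner)
    # Standard choice: the most-visited move at the root.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

if __name__ == "__main__":
    random.seed(0)
    # Perfect play leaves a multiple of 4, so from 5 stones the move should be 1.
    print(best_move(5))
```

The "unfair head start" point: this loop is a near-perfect model of how to reason about a turn-based perfect-information game, so all the net has to do in AlphaGo's case is score positions. StarCraft has no such clean search skeleton to plug a net into.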
I believe it says more about humans than about AI.
Humans evolved in a 3D world. Our ancestors didn't decide who got food or sex through chess competitions. The human brain has been "trained" and optimized for the 3D world, intensively.
Yes, this is more complex, and that is cool. But we did see the slowing down; otherwise we would have maintained superhuman performance at every step. AlphaGo and AlphaStar saw much quicker progress past human skill levels.
I apologize in advance if this comes off as critical of you personally, you’re not saying anything that isn’t said constantly and I certainly don’t mean to single you out.
With that said, we’ve got to strangle this meme. ML/AI moves forward in unpredictable fits and starts, it doesn’t follow e.g. Moore’s law in exponential formulation.
When they’re doing research and not PR, researchers talk about “performance” on “tasks”, and define those terms rigorously.
People have been trying to improve performance, as measured by some metric or metrics, on any number of tasks, since at least the 1950s.
Certain periods of time generated breakthrough after breakthrough and a bunch of “well we’ll just scale it up and it’ll be a thinking machine” sentiment amongst the lay or semi-technical public, and similar grandiosity from experts when PR and/or funding are the objective. The world we live in I guess, but not a fire we on HN should be pouring fuel on.
During other periods of time, we’ve hit the effective asymptote on the techniques thus far invented, the scaling dimensions flattened out. Then it’s all “AI was a fad, it’s hype, this is AI Winter”.
There’s no robust consensus on when these summers and winters happen, how long they last, how much performance on one task is amenable to “transfer learning” regarding another task. It seems pretty random, the constant being the PR/funding talk.
The years since AlexNet in 2012, word2vec in 2013, ResNet in 2015, Attention Is All You Need in 2017, the GA of the GPT-3 series just over a year ago, and countless other interesting things have been wildly fruitful; we’ve been on a hot streak.
This is generally good news! The human race has new capabilities, win! But it’s a nearly impossible claim to defend that modern attention transformers are the final word on this area of endeavor. Progress since then has been substantially brute-forced via unprecedented budgets achieved through subsidy of one kind or another. There will continue to be periods of rapid progress unlocked by key insights, and there will continue to be less explosive periods of progress. And it serves no one with a plan more noble than “cash out while the spice flows” to tee up another collapse in interest, funding, and attention by pulling a Yud: a log scale and a ruler are never the complete toolkit for forecasting novel research.
This stuff is incredibly cool stated as flat, consensus, rigorous science; it’s incredibly exciting to practitioners and laypeople alike without any breathless hyperventilation at all. The story thus far needs no grandiose embellishment to be thrilling.
But the absolute best case in terms of research we currently know about, as hyped by those seeking funding, would be a nightmare end-state if it landed there (it won’t, but this disaster comes in degrees): right now the off-the-wall exhilarating tech demos are so expensive that the public is effectively a spectator. There’s talk of multi-trillion dollar buildouts under complete, utterly unaccountable, ethically dubious control of people who crossed the “yikes is that even legal” line some time ago.
A trillion dollars in 2024 is give or take thirty Manhattan Projects, the idea of handing that kind of scope to people who answer to no one, hold strong minority worldviews, and give the public the finger in print by calling the bribery department “OpenPhilanthropy”?
Who the fuck thinks this isn’t a dystopian horror movie outcome in an already hyper fragile world?
It’s time to squeeze the water out of these bloated, money-is-no-object models and make them run on reasonable power budgets in the hands of John Q. Taxpayer (who, along with a bunch of helpless civilians and service men and women, ultimately foots the tab when Nadella or Riyadh write blank checks one way or another). It’s time to reform copyright law so that the commons isn’t vacuumed up, compressed, and copyrighted, and to take a few whacks at shit like Jensen and Lisa Su being literally cousins while partitioning the market and gouging via API lock-in.
The hyper, hyper-elite stand to gain even more immunity from all scrutiny, consequence, accountability, and even bad press if “AGI” turns out to be a mere 1-3 trillion in de facto blood money away from being locked in a vault somewhere.
Literally everyone else stands to find out that slavery isn’t a strong enough word for what this would mean for them.
I know the word literally doesn't mean anything anymore, but Jensen and Lisa Su are first cousins once removed. Jensen's grandfather and Lisa's great-grandfather are the same person.
I know what the word "literally" means; there's a great Sorkin bit on it [1] that's eerily prescient given that show is like 15 years old. Via your own citation, it's apparently Lisa Su who doesn't know what "literally" means, as she asserts that they are "second cousins", which is literally false given that your second citation says "first cousins, once removed", which Quora [2] says means they're closely blood-related.
I don't know how their family works, but in mine and most people's from my neighborhood, a first cousin once removed is fucking family; they're blood. Not being a securities lawyer myself, I'm not sure which definition, statute, or regulation would apply here; [3] seems close (and has a creepy rush-job feel about it that smells vaguely like Kushner shit of one kind or another, Feb 2020 on an accelerated basis?).
But whether this squeaks above the line of regulations and laws and whatnot getting midnight "lgtm" stamps in an election year is, I'd argue, substantially missing the point.
When I recently said:
"Now did Lisa Su decide to "concentrate on the supercomputing market with the MI300XYZ" and Jensen decided to "concentrate on AI with Hopper" independently to a degree where the market is perfectly partitioned? Who knows, I certainly don't have proof one way or the other. But if someone made a call being like "I'm thinking of focusing on X but don't really see our differentiation in Y. How's Cathy?", it wouldn't be the fucking first time." [4]
I thought at the time I was kinda pushing it with how flip that sounded, but lo and behold, I was insufficiently cynical.
So when I say that I literally don't understand why anyone is defending this trivially dubious cartel behavior complete with a 55.58% Net Profit Margin in an ostensibly competitive market both directly and indirectly subsidized by the taxpayer (TSMC isn't going to fight off the PLA with their next process node) [5], I think Leona Lansing knows that the public will burn the building down with this shit in it before they let this shit get much ickier.
"we’ve got to strangle this meme. ML/AI moves forward in unpredictable fits and starts, it doesn’t follow e.g. Moore’s law in exponential formulation."
At End:
"Who the fuck thinks this isn’t a dystopian horror movie outcome in an already hyper fragile world?"
Isn't that hype? By the end of the post you are doubling down on the over-hyped memes.
Hey I'd like to apologize for that (and I didn't downvote you FWIW, I've now upvoted both the GP and parent to offset whoever did, your comment didn't merit a downvote).
You're exactly right that my comment veers from high-quality to low-quality linearly with character count: I had an ambient distraction burst into my office in the middle of writing it and I was over-multitasking and failed to clean up the second half within the edit window.
The second half of my comment has important signal but it's too high-noise to be a good comment, as my grandmother used to say: "A barrel of wine and a spoonful of sewage makes a barrel of sewage".
If anyone deserved a downvote it's me, please know that it was unintentional.
Very interesting! Shows how much AI is over-hyped (even though, as you say, it was very impressive anyway). It was even worse in the case of Starcraft 2, where the AI had a much wider view than humans. And while the AI was supposed to show its strategic superiority by limiting the APM (actions per minute), the limit was still very high: it was inspired by the max APM achieved by humans, whereas this max is achieved only for a very short period of time (a single minute) and consists mostly of insignificant click spam. (Had APM been limited to half that for humans, the effect would probably be negligible, and very minor at a quarter...) So as a result the AI would win by being able to micro-manage more units, rather than by having a better strategy. But again, it was very impressive anyway.
How do you have 'over-hyped' and 'very impressive' in the same sentence? Which is it?
I think you are not giving AlphaStar the correct spin.
They came back and changed it to only have the same viewport as a human: it could not see all of its units simultaneously, and it had to move the camera like a human.
BUT importantly, it NEVER had perfect information. It could only see exactly the same as the human; at one point they were letting it see the whole map without moving the camera, but it still could not see enemy units without sending a probe.
And I'm a little unsure what the argument about APM is saying. It was slowed down to match human speed, but somehow that makes it less impressive? That is just making it more 'human-like'. Kind of like people today want to put guardrails on AI, but if it were unleashed, it would beat them easily. That isn't a knock on the AI. The AI would still have to think about every move and form a strategy. They slowed it down to human-level inputs, handicapped it, to make it playable for a human. But to your point, if the AI could make 400 APM and the human had 400 APM (both limited to the same), then that is a better measure of the 'thought' behind each individual move.
I still remember watching one match where the human was winning and the AI was down, and the AI really did fight back very aggressively from a losing position, like a human, by expanding and adapting. It looked very scary.
> How do you have 'over-hyped' and 'very impressive' in the same sentence? Which is it?
I'm stunned; how would you think they are contradictory? Imagine a mode of transportation that moves at 1000 km/h. Very impressive, right? Now imagine media everywhere saying it moves at the speed of light. Wouldn't that be over-hyping?
> BUT importantly, it NEVER had perfect information. It could only see exactly the same as the human
Maybe we're speaking about different events... In the one I'm commenting on, the AI had some zoom-out, I think 2x (meaning it would see 4 times more at once). Yes, it had fog of war, but a zoom-out like this is a very significant advantage.
> And I'm a little unsure what the argument about APM is saying. It was slowed down to match human speed,
No it wasn't, not exactly. Imagine that you measure a human racer's speed in km/minute, every minute. Then you take the highest measured "average per minute" and program the AI to move at that speed at all times. Then you praise the AI for its pathfinding algorithm, because at that speed, it beats the human racers.
Yes, if a human racer has to slow down because, e.g., the human is unable to avoid obstacles at maximum speed, that does make the AI's ability to move faster impressive. But few people here would be impressed by the high reflexes of a computer, because we are all used to the fact that computers can react much faster than humans. It is misleading, however, to allow the AI to move faster, and then give it the "spin", as you say, that the AI won because it was smart, as opposed to fast.
BTW, I think the AI was either only using one race, or was playing only against one race. This one thing was actually mentioned in the event (once). The APM was mentioned too, I think, but the nuance I describe unfortunately wasn't mentioned.
It makes me sad, because as I said, it is a very impressive technology. But it's hard to fully appreciate something when it is so blatantly over-hyped and when you see so many people around you being misled and praising AI for achievements that it didn't exactly accomplish.
You are correct, there were limits on the AI in the event.
It could only play 1 race, but I think the opposition could be different races. I think it was Protoss, playing Terran and Zerg. There might have been 1 or 2 units that were also removed.
In the first events, where it was really dominant, it could see the whole map at once and move its units all over the map, so it was moving units on both sides of the map practically simultaneously. BUT this was called out as just too much of an advantage, so they made another version that actually had to move the camera around the map like a human. And the second version was still able to perform.
Map-wise though, for the AI, I think dealing with the fog of war and unknown/imperfect information was the big breakthrough, not the map size or speed. It still had to scout and keep up with enemy movements that were hidden, and anticipate. The zoom-out didn't provide that.
I'm not totally buying the APM argument (though by the end of the paragraph I do). Even if a computer can move 'faster', each move must mean something, do something worthwhile, so the computer must think out its moves. I know micro in SC2 is a very big deal, and speed is essential, but you do have to know what to micro. The computer having 1000+ APM was called out too, and they added a limit. By throttling the AI to what a human can do, you're handicapping it, which isn't proving that the AI isn't as good; it is showing that it can be better. Or another way: in Chess or Go there is a time limit, but nobody is throttling the AI's CPU to the same speed as a human brain, like limiting its computation cycles.
So, I guess in the end, I do agree: throttling APM is like imposing a time limit on each move in real time, like in Chess.
For the over-hype/impressive point: it is difficult, and both can be true. Everyone on the internet has a different threshold for what they think is over-hype and what is impressive. AI seems overwhelmed with both sides right now: seemingly new miracles every day, and also over-hyped companies pumping their stock by adding an AI sticker to every product.
I'll just say, those AlphaStar matches were like 5 years ago, and they still blow me away.
Hard to imagine what could be possible, with this latest release in this post from deepmind.
Plug into a camera, on a robot, with a gun, and tell it the world is just a 3d game.
We mostly agree then. I think even after limiting the AI's FoV, it was still seeing 4× more, as I said previously. As for APM:
> By throttling the AI to what a human can do[...]
I think this is far from true; a human cannot keep their APM at the level the AI was throttled to throughout the game. If the AI had been throttled to the average APM in e-sports, that would be more fair, but it was throttled to the HIGHEST APM reached by a human. Again, it's not "highest average APM in a single match"; it's just the highest number of actions in a given minute. [I don't know if it was an all-time record or just some arbitrary value inspired by some arbitrarily chosen local record; what I know is it was way too high to be fair.] Furthermore, SC2 players spam unnecessary clicks to keep themselves warmed up. If playing against an AI limited to the average APM of its opponent were a thing, then I'd safely bet that player's APM would decrease 3 to 5 times, WITHOUT the player reducing the number of actions that are of little (but still some) significance, in order to abuse this AI limitation.
Again, the event was cool, but it makes me sad the technicalities weren't communicated clearly, which made it an advertisement rather than sport IMHO.
Maybe top pros keep APM at 400 over an entire match, but not really; there are ebbs, flows, and spamming. While the AI can max out at 400 and do that the entire match, and it isn't spamming, so every move is probably meaningful.
Maybe throttle both the AI and the human to 200? Something like that? So both are capped lower.
For real-time games, this 'throttling' is tricky. For turn-based games, time limits are equal. But in real time, if we are measuring AI performance, the AI could be unleashed, be faster than a human, and win. So is slowing down the AI really allowing us to measure the AI's performance?
Like in real life.
Lets say you have a robot with a gun, and a human with a gun.
They both need to draw, aim, and fire.
Would we 'slow down' the robot to match the human? That doesn't seem like the way to measure how 'good' the AI is at doing those tasks. It could be faster.
Netflix has a documentary on AI. The military had an AI flying F-16s, and it could beat all the best human pilots. Of course, no slowing down the AI there.
In the real world there are physical limits. We just need some way to translate that to real-time games.
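For what it's worth, the kind of cap being debated here is easy to state precisely. Below is a hypothetical sliding-window APM limiter; the class name and numbers are invented for illustration, and this is not DeepMind's actual throttling scheme (which as I understand it also constrained shorter bursts):

```python
from collections import deque

# A sliding-window "actions per minute" cap: an action is allowed only if
# fewer than `max_actions` actions occurred in the trailing window.

class APMLimiter:
    def __init__(self, max_actions=400, window=60.0):
        self.max_actions = max_actions
        self.window = window      # window length in seconds
        self.times = deque()      # timestamps of recent allowed actions

    def try_act(self, now):
        """Record and allow an action at time `now` if it fits under the
        cap; otherwise return False (the agent must wait or drop it)."""
        while self.times and now - self.times[0] >= self.window:
            self.times.popleft()  # forget actions older than the window
        if len(self.times) < self.max_actions:
            self.times.append(now)
            return True
        return False
```

Note the failure mode being pointed at in this thread: a 400-per-minute cap like this still permits a superhuman burst of 400 actions inside a single second, which is why capping only the per-minute average doesn't by itself make the micro 'fair'.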
You're right, of course. The thing is, if you allow unlimited APM, I think you don't need an advanced AI to win: you could implement an algorithm that rushes zerglings and micromanages them to keep any from dying, letting them regenerate. So the human vs. AI match becomes lame.
Very subjectively, I'd say: limit the APM to something very low, below 50. You then change the game: it's no longer about making many decisions, and only partially about reacting quickly; it rewards thinking through your decisions before issuing them. This would measure intellect more than speed.
BTW, there is a mode in co-op that AFAIR makes you pay minerals for each action, so such throttling is within canon ;)
That would be a great idea. Limit everyone to 50 APM; that would make it more about strategy, not twitchy reflexes. And as you say, it would be more of a measure of the AI's ability for strategy and planning.
Of course, being Bronze, with a 50 APM, this sounds great to me.
Explanation - health potions cost a small amount of in game money and have to be ferried by a courier to the player. Most pros (and good players copying the pros) didn’t do this because it wasn’t considered cost effective. They would rather save up for a larger purchase. Until they repeatedly lost to OpenAI bots spending absurd amounts of money on health potions.
The AI didn't follow "best practice" because it wasn't trained on human games, found a better way and that was quickly adopted by all, becoming the new best practice.
League of Legends players discovered this like 15 years ago (the "13 health pot start"), I wonder why this didn't cross over. I suppose the player bases don't actually intersect very much?
It's mostly because it was a different scarce resource at that time that players saw as non-optimal to use: the courier. It could ferry items to you, but in a normal game there was only one of them for your whole team, which meant using it would take that ability away from your teammates during the ferry time.
One constraint in those showmatches at the time was that every hero had their own courier, and players at that point were not accustomed to using it for "low value" travel, unlike the AI, which used it liberally.
In a later patch, the 1-courier-per-hero feature was added, and now pro players are much better at managing it, but at that time it was truly a heavy opportunity cost.
I think maybe the games are a bit different and it wasn't viable? I was pretty into the original WC3 Dota and starting with tangos for healing was a pretty popular strategy for supports and solo lane players.
caveat: my Dota 2 knowledge is lacking because I haven't followed the game for about a decade now and I have essentially 0 experience with League.
Sounds like any other game where AI-powered search finds new optimal strategies. Chess, Go, and poker all have new strategies that no human thought of for hundreds of years. Some of them seem obvious in hindsight, but that’s how knowledge works generally.
Because they did. This comment chain is somewhat of an attempt at revisionist history, trying to make a connection that other games had "revelations" about AI, more than it's a statement trying to accurately portray the situation.
It's worth noting here that most of these comments are missing another dimension of the game that's completely absent here and heavily influences decision-making in normal gameplay: communication and progression from the other lanes. It's almost a long-running joke in the game that you'd laugh if someone asked you to 1v1 mid, because it usually meant you beat them technically and they're grasping at straws to show superiority, despite how segmented and different from normal gameplay it is, and how useless a skill beating someone in such a constrained environment is.
To this point, there have been better manually crafted "AI" bots that could team with each other effectively at a higher level than the average player, going back to the original custom map in 2003-2005. The breakthrough here IMO wasn't that it was making any novel decisions, but that it was able to perform at a high level and improve via conventional ML training, which I think is a separate callout from most of the stargazing done in the comments here.
I wouldn't say proficiency at 1v1 mid (specifically the even MORE watered-down rules applied here, where you automatically lose after only 3 deaths or when the tower is taken) translates accurately to anything in the original way you play the game, unless your 1v1 matchup has a similar expectation of sitting parked in the lane, and even then it translates poorly. Sacrificing a death to kill a tower and spending all your gold so your effective loss is minimized is a legitimate trade, but in this artificially constructed scenario the win/loss condition is already met. You approach the two entirely differently and, more importantly, more simplistically. That doesn't even address that some hero matchups are intentionally designed to be weaker earlier in the game and/or are meant to participate in fights with multiple heroes or do secondary objectives, and can't assert the same posture, which goes completely unaddressed by this narrow slice of gameplay.
All that buildup to say that healing potions and staying in the lane have been a tenet of normal gameplay since its conception, and the expectation has shifted from patch to patch. What was "discovered" here is that if you don't optimize for longer-term gameplay like you would in a normal game, and instead do the most you can to optimize for a narrow slice of early skirmishes, potions have a higher cost-effectiveness. Not sure anyone besides laymen to the game thought that was a revelation.
That's a fair assessment, but it was also 6 years ago. Back then, transformers had only recently come out, and tricks for doing DL at scale were still brewing. Even achieving what they did with 10 heroes showed DL could work in "non-deterministic-ish" problem settings.
The more interesting question is: can we train a Dota model that plays with all 124 heroes today?
Not really. By only playing a certain subset of the game, the AI could use heroes that it was good at (micro-intensive) while disallowing heroes that could counter the strategy OpenAI chose. Hardly a fair game.
No? That's absolutely not what happened; almost the opposite. The micro-intensive heroes were NOT in the pool. The AI literally chose Sniper and Sven.
The AI's APM was limited to a level lower than pro human players'. If they had allowed micro heroes, the AI would have had a big disadvantage.
That was the most interesting part about the AI Dota games. The AI isn't just better than humans at the mechanical level. Even more surprisingly, the AI (at least OpenAI Five) isn't significantly better at last-hitting than pro players.
So glad you said this! For some reason that's always stuck out to me as having been my biggest personal "wow" moment while watching AI development progress. ChatGPT is awesome but for some reason I've never felt as awed by it.
I too was impressed at first, but got disillusioned after learning how they did it. It was much more similar to chess AI with piece-square tables than I first thought.
Meh. People still do athletics competitions even though cars exist and can outpace any human. Weightlifting is also still a thing even though even an entry-level forklift beats any human weightlifter. Chess is more popular than ever before, even though nobody has any hope of beating a computer anymore.
Out of all the fields that humans work in professionally, sports will be one of the last to disappear. The fact that it is (unaugmented) humans competing is the entire point.
I don't think the unaugmented qualifier is accurate. What matters is that there are well-established rules defining scope. People racing cars is still a very widely enjoyed form of entertainment.
> Out of all the fields that humans work in professionally, sports will be one of the last to disappear. The fact that it is (unaugmented) humans competing is the entire point.
This is my thought/hope for what we'll see in the coming years as AI automation becomes more commonplace. Society's interests will shift toward activities that showcase human ability - sports, livestreaming (very much its own industry now, but mostly for socializing, art, and gaming), performance, dance, etc. Sure, AI can 'do' these things, but not at the level elite performers can, or with the subtle nuances of human personalities.
Even in a future where AI provides everything and we are no longer able to understand it, humans will still be doing human competitions, playing chess, etc. Human-on-human action will be the only thing left, and the only thing humans care about. Chess is already unwinnable against computers, but humans still want to measure themselves against other humans.
Chess, Go, what next? Pizza delivery? Accountant Simulator? Humans are already being outclassed one feature at a time.
I'd echo the same sentiment as the other commenters, if you don't mind me throwing my hat into the ring. Considering an MS in Data Science with a focus on ML.
As I explained in my sibling comment, the version of Dota they played was heavily simplified, because the full combinatorial explosion of mechanics was far too much for the AI training to overcome. They didn't even get close to playing normal Dota at a high level, never mind a hypothetical version of Dota that's twice as complex.
I would broadly break it into things that are complex to perform (crazy APMs or accuracy), things that are complex to understand (the stack or layers in MTG), and things that are complex to predict (e.g. time-delayed abilities and the correct time to use them, like Baptiste's lamp in Overwatch).
AIs have basically constant performance across the performance-complexity curve, because that complexity typically derives from physical interfaces the AI doesn't use anyway. E.g. their APM is not limited by how fast their fingers can physically move.
AIs do very poorly on tasks that are complex to understand. The best Magic: The Gathering AIs I've seen are awful (though also likely far less well-funded). The best-case scenario is basically an AI that makes plays that don't make any sense but are at least valid. It's a crazy difficult problem. E.g. there are various ways to make infinite mana with combinations of cards, and the AI needs to a) realize that it can use those cards to create infinite mana, and b) realize that it can do this multiple times (i.e. it can pay for a spell that costs more mana than the loop generates by going through the loop multiple times). That's a very hard thing to do; human players somewhat frequently don't realize when they have loops.
Add on top of that that a game of Magic can enter a state where a loop of effects becomes recursive but doesn't result in either player winning. The game is a draw, because it cannot progress anymore. Detecting these can be non-trivial, because they might involve side effects that look like someone should win (e.g. you lose a life and I gain one, then I gain 2 life, then you deal 1 damage to me, then I gain 1 life, then you deal 2 damage to me - life totals shift around, but net to 0 by the end of the loop).
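To make the draw-detection idea concrete, here's a minimal sketch (purely hypothetical, not from any real MTG engine - `is_draw_loop` and `apply_effect` are names I made up): a repeating cycle of effects that returns the game to an identical state makes no net progress, which is the signature of a draw-loop like the life-total example above.

```python
# Hypothetical sketch: a loop of effects that leaves the game state
# unchanged after one full cycle means the game cannot progress.

def apply_effect(state, effect):
    """Apply a single life-total change to a copy of the state.
    An effect is (player, amount), e.g. ("me", +2)."""
    player, amount = effect
    new_state = dict(state)
    new_state[player] += amount
    return new_state

def is_draw_loop(start_state, effects):
    """Run one full cycle of effects; if the resulting state equals the
    starting state, the loop nets to zero and is a draw-loop."""
    state = dict(start_state)
    for effect in effects:
        state = apply_effect(state, effect)
    return state == start_state

# Toy cycle: life totals shift around, but both players net to 0.
cycle = [("you", -1), ("me", +1), ("me", +2),
         ("me", -1), ("you", +1), ("me", -2)]
print(is_draw_loop({"you": 20, "me": 20}, cycle))  # → True
```

Real engines have to handle far messier state than two life totals, of course, but the core check - "did one full iteration change anything?" - is the same.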
I think AIs do well at complex prediction tasks too, by nature of their response times and access to prior information. I would expect an AI to beat humans by a wider margin the more complex the prediction gets. Humans have finite time and thus finite experience; the AI is going to have more "experience" and be able to recall it at a faster rate.
A big advantage of AIs is instant reaction time. OpenAI programmed an artificial reaction delay into most skills, but the bots were still generally much faster than any human would be. Overall strategy is where the AI was lacking, but its technical flawlessness makes up for it.
If we model the game as someone flicking switches, strategy is ability to know which switches to flick when whereas technical skill is the ability to quickly and precisely flick the chosen switches.
In more complex games, there are more switches and the current set of best switches changes faster. With more switches, it's harder to know which are the best switches because the future is less predictable. And even if we figure out the best ones, they might change before we flick them. And even if we get around to it in time, we might fat finger it and accidentally flick an adjacent switch. And our opponent never gets tired or injured.
This is why I suspect AIs have a much higher ceiling even if we limit them to half the APM pros have. Better strategy matters less, but I admit it's our only chance lol.
FWIW, I've never played Dota but I've played a lot of AoE2 and from what I know they're similar enough (but maybe someone can correct me).
Though to be fair, the human players had to rely on muscle memory to win lanes (CSing, blocking waves, pulling, trading hits, cutting waves, stacking, etc.); whereas the AI could perfect the timings down to the fraction of a millisecond.
If I remember right, the OpenAI team had a reaction-time parameter they could tweak; it was around 200ms (a short search seems to confirm that).
The bot perfectly dodged a skill that humans almost never dodged because it didn't have a good visual cue. Against the AI that skill became useless, which really screwed the humans over and made it clear to everyone that the AI wasn't really playing with the same limitations as humans.
In the next game it played they had made it react even slower and then it no longer beat tournament teams.
I don't quite think it's that impressive. AIs in video games are specifically "nerfed" simply because they can make decisions much, much faster than humans. OpenAI didn't do anything special in this case.
See Deep Blue for more. Or any strategy game made in the last 20 years with AI difficulty mods.
AIs in most strategy games are given massive advantages over human players. In Civ V, for example, the AIs start with several extra units and techs on higher difficulties.
Upvoted. Not sure why this got downvotes. It's very cool that you were at TI7 as I only watched this on youtube. I also thought this was an important moment.
The parent comment is the top comment on this post, has a lot of replies, and facilitated on-topic discussion. I'm not sure how a comment about another AI playing a game is off topic on a post about an AI playing a game.
I could have left off the career switch anecdote, but I wanted to share this previous breakthrough and project with people who are not familiar. It was the foundation for technology like SIMA from Google.
The "foundation for technology like SIMA from Google" is more likely all of the other RL + games work that Deepmind did before and after that DOTA project.
https://en.wikipedia.org/wiki/OpenAI_Five