> We’re introducing our video generation technology now to give society time to explore its possibilities and co-develop norms and safeguards that ensure it’s used responsibly as the field advances.
That's an interesting way of saying "we're probably gonna miss some stuff in our safety tools, so hopefully society picks up the slack for us". :)
Flashbacks to when they were cagey about releasing the GPT models because they could so easily be used for spam, and then just pretended not to see all the spam their model was making when they did release it.
If you happen to notice a Twitter spam bot claiming to be "an AI language model created by OpenAI", know that we have conducted an investigation and concluded that no you didn't. Mission accomplished!
> not to see all the spam their model was making when they did release it.
All replaced by open source LLMs at this point.
Most AI video will be produced by Hunyuan [1], LTX [2], and Mochi [3] in short order. These are the Flux / Stable Diffusion models for generative video. These can all be fine tuned to produce incredible results, and work with the Comfy ecosystem for wildly creative and controllable workflows.
I don't think it'll be possible for a closed source tool to compete with the open image/video ecosystem. Dall-E certainly didn't stay competitive for long. It's a totally different game.
> I don't think it'll be possible for a closed source tool to compete with the open image/video ecosystem.
And I don't think the current status quo of open source models being entirely subsidised by startups and corporations is sustainable; they're all hemorrhaging money, and their investors will only have so much patience before they expect returns. Enjoy it while it lasts.
You said yourself that you don't think proprietary tools can compete with the open source stack, so which is it? If Comfy is as good as or better than any paid frontend that Mochi themselves can come up with then there's absolutely no reason for anyone to give Mochi any money under their current license model.
Stability was supposed to be doing a similar "give away the models but sell products built on them" strategy and it doesn't seem to be working for them, by all accounts they're barely able to keep the lights on.
It is unlikely anyone is going to perform an act of terrorism with this, or use it for deepfakes that buy Eastern European elections. The worst outcome is likely teens having a laugh.
Funny how all the negative uses to which something like this might be put are regulated or criminalized already - if you try to scam someone, commit libel or defamation, attempt widespread fraud, or any of a million nefarious uses, you'll get fined, sued, or go to jail.
Would you want Microsoft to claim they're responsible for the "safety" of what you write with Word? For the legality of the numbers you're punching into an Excel spreadsheet? Would you want Verizon keeping tabs on every word you say, to make sure it's in line with their corporate ethos?
This idea that AI is somehow special, that they absolutely must monitor and censor and curtail usage, that they claim total responsibility for the behavior of their users - Anthropic and OpenAI don't seem to realize that they're the bad guys.
If you build tools of totalitarian dystopian tyranny, dystopian tyrants will take those tools from you and use them. Or worse yet, force your compliance and you'll become nothing more than the big stick used to keep people cowed.
We have laws and norms and culture about what's ok and what's not ok to write, produce, and publish. We don't need corporate morality police, thanks.
Censorship of tools is ethically wrong. If someone wants to publish things that are horrific or illegal, let that person be responsible for their own actions. There is absolutely no reason for AI companies to be involved.
> Would you want Microsoft to claim they're responsible for the "safety" of what you write with Word? For the legality of the numbers you're punching into an Excel spreadsheet? Would you want Verizon keeping tabs on every word you say, to make sure it's in line with their corporate ethos?
Would you want DuPont to check the toxicity of Teflon effluents they're releasing in your neighbourhood? That's insane. It's people's responsibility to make sure that they drink harmless water. New tech is always amazing.
Yes, because we know a.) that the toxicity exists and b.) how to test for it.
There is no definition of a "safe" model without significant controversy nor is there any standardized test for it. There are other reasons why that is a terrible analogy, but this is probably the most important.
I don't see how that analogy works, especially so as in your attempt to make a point you have DuPont as the explicit actor in the direct harm, and the people drinking the water aren't even involved... like, I do not think anyone disagrees that DuPont is responsible in that one.
I also, to draw a loose parallel, think that Microsoft should be responsible for the security and correctness of their products, with potentially even criminal liability for egregiously negligent bugs that lead to harm for their users: it isn't ever OK to "move fast and break things" with my personal data or bank account. But like, that isn't what we are talking about constantly with limiting the use cases of these AI products.
I mean, do I think OpenAI should be responsible if their AI causes me to poison myself by confidently giving me bad cooking instructions? Yes. Do I think OpenAI should be responsible if their website leaks my information to third parties? Of course. Depending on the magnitude of the issue, I could even see these as criminal offenses for not only the officers of the company but also the engineers who built it.
But, I do not at all believe that, if DuPont sells me something known to be toxic, that it is DuPont's responsibility to go out of their way to technologically prevent me from using it in a way which harms other people: down that road lies dystopian madness. If I buy a baseball bat and choose to go out clubbing for the night, that one's on me. And like, if I become DuPont and make a factory to produce Teflon, and poison the local water with the effluent, the responsibility is with me, not the people who sold me the equipment or the raw materials.
And, likewise, if OpenAI builds an AI which empowers me to knowingly choose to do something bad for the world, that is not their problem: that's mine. They have no responsibility to somehow prevent me from egregiously misusing their product in such a way; and, in fact, I will claim it would be immoral of them to try to do so, as the result requires (conveniently for their bottom line) a centralized dystopian surveillance state.
C4 and nukes are both just explosives, and there are laws in place that prohibit detonating them in the middle of a city. But the laws that regulate storage of and access to nukes and C4 are different, and there is a very strong reason for that.
Censorship is bad, everyone agrees on that. But regulating access to technology that has already proven it can trick people into sending millions to fraudsters is a must, IMO. And it had better be regulated before it overthrows some governments, not after.
Microsoft Word and Excel aren't generative tools. If Excel added a new headline feature to scan your financial sheets and auto-adjust the numbers to match what's expected when audited, you bet there would be backlash.
And regarding scrutiny, morphine is an immensely useful tool, and its use is surely extremely monitored.
On the general point, our society values intent. Tools can just be tools when their primary purpose is in line with our values and they only behave according to the user's intent. AI will have to prove a lot to match both criteria.
> And regarding scrutiny, morphine is an immensely useful tool, and its use is surely extremely monitored.
I went to high school in a fairly affluent area and I promise you this is not true. If you have money and know how to talk to your doctor, you can get whatever you want. No questions asked.
You can even get prescription methamphetamine - and Walgreens will stock generic for it!
Definitely not if you're a white male under 60 years old. They won't even give you opioids after surgery now because you are "high risk".
If you're really rich it may be a different story, but any of the "middle class" good luck. And if you do find a doctor with some compassion, they are probably about to retire.
Right, but accountants have qualifications and, more importantly, have to sign their name and accept liability for the accounts they're submitting. That's the part that's missing when "computer says ok".
Your accountant's cooking of the books is handmade and a work of art, passed down by generations of accountants before them, and they'll proudly stand in front of any auditor to claim their prowess at their craft.
Right, but a gun can be had and presumably a nuclear warhead can't, so even in countries that call the wrong sport "football", the law takes into account that some tools need to be regulated more than others.
No he’s just doing an aaaaaactually comment. Wouldn’t be HN if someone didn’t.
You cannot own tanks or jets capable of using military ordnance in the US (and I’d wager nearly any country that has anything resembling rule of law). You can own decommissioned ones that are rendered militarily useless.
I can write erotic fiction about your husband or wife or son or daughter in Microsoft Word, but it's a little different if I scrape their profiles and turn them into hardcore porn and distribute it to their classmates or coworkers, isn't it?
You are posting this under a pseudonym. If you published something horrific or illegal, it would be the responsibility of this website to either censor your content or identify you when asked by authorities. Which do you prefer?
You let people post what they will, and if the authorities get involved, cooperate with them. HN should not be preemptively monitoring all comments and making corporate moralistic judgments on what you wrote and censoring people who mention Mickey Mouse or post song lyrics or talk about hotwiring a car.
It seems reasonable to work with law enforcement if information provides details about a crime that took place in the real world. I am not sure what purpose censoring as a responsibility would serve? Who cares if someone writes a fictional horrific story? A site like this may choose to remove noise to keep the quality of the signal high, but preference and responsibility are not the same.
Censoring AI generation itself is very much like censoring your keyboard or text editor or IDE.
Edit: Of course, "literally everything is a tool", yada yada. You get what I mean. There is a meaningful difference between tools that translate our thoughts to a digital medium (keyboards) and tools that share those thoughts with others.
HN is the one doing the distribution, not the user. The latter is free to type whatever they want, but they are not entitled to have HN distribute their words, just as a publisher does not have to publish a book it doesn't want to.
Maybe you should talk with image editor developers, copier/scanner manufacturers, and governments about the safeguards they should implement to prevent counterfeiting money.
Because, at the end of the day, counterfeiting money is already illegal.
...and we should not censor tools, and judge people, not the tools they use.
Interestingly, you must know that any printing equipment good enough to output realistic banknotes is regulated to embed a protection preventing this use case.
Even more interestingly, and maybe this can help show that even the most principled argument should have a limit: molecular 3D printers able to reproduce proteins (yes, this is a thing) are regulated to recognise designs from a database of dangerous pathogens and refuse to print them.
that works for locally hosted models, but if it's offered as a service, OpenAI is publishing those verboten works to you, the person who requested them.
even if it is a local model, if you trained a model to spew Nazi propaganda, you're still publishing Nazi propaganda to the people who then go use it to make propaganda. it's just very summarized propaganda.
Then let parents choose when teenagers can start driving.
Also let's legalize ALL drugs.
Weapons should all be available to public.
Etc. Etc.
----
It's very naive to think that we shouldn't regulate "tools"; or that we shouldn't regulate software.
I do agree that in many cases the bad actors who misuse tools should be the ones punished, but we should always check the risk of putting something out there that can be used for evil.
"Teens having a laugh" can escalate quickly to, "... at someone else's expense," and this distinction is EXACTLY the sort of subtlety an algorithm can't filter.
This does not need to become a thread about bullying and self harm, but it should be recognized that this example is not benign or victimless.
This genie is out of the bottle, let us hope that laws about users are enough when the tools evolve faster than legislative response.
> It is unlikely anyone is going to perform an act of terrorism with this, or use it for deepfakes that buy Eastern European elections. The worst outcome is likely teens having a laugh.
And the teens are having a laugh by... creating deepfake nudes of their classmates? The tools are bad, and the toolmakers should feel deep guilt and shame for what they released on the world. Do you not know the story of Nobel and dynamite? Technology must be paired with morality.
I am sure a school has a way to deal with pupils sharing such images, as the recent cases have proven, whether deepfakes or real pictures. It is a social problem with an existing framework backed by decades of proven history, and it should be dealt with as such.
You can argue that that’s how it should be, but that isn’t how it is. And we don’t know what a world that adhered to that principle would look like, it’s possible it would be a disaster. There are a lot of bad things people can do where it’s difficult to catch someone after they’ve done it, and prevention at the tool level is the only way to really effectively stop people.
I’m not saying I like the idea of any of these methods when it comes to AI, but it feels naive to act like there isn’t precedent for stuff like this.
> It is unlikely anyone is going to perform an act of terrorism with this, or use it for deepfakes that buy Eastern European elections. The worst outcome is likely teens having a laugh.
Citation needed bigtime. Sure, people doing organized disinformation campaigns won’t log into OpenAI’s website and use Sora, they’ll probably be running Hunyuan Video with an on-prem or cloud-based GPU cluster, but this feels like as good a time as any to discuss the implications of video generation tools as they stand in December 2024.
There are certain tools for which we heavily restrict which users have access to the entire supply chain. That's still about users, I suppose, but it's also about tools.
The problem isn't whether we should regulate AI. It's whether it's even possible to regulate it without causing significant turmoil and damage to society.
It's not hyperbole. Hunyuan was released before Sora, so regulating Sora does absolutely nothing unless you can regulate Hunyuan, which is 1) open source and 2) made by a Chinese company.
How do we expect the US government to regulate that? Threaten to sanction China unless they stop doing AI research?
Easy-peasy. Just require all software to be cryptographically signed, with a trusted chain that leads to a government-vetted author, and make that author responsible for the wrongdoings of that software's users.
We're most of the way there with "our" locked-down, walled-garden pocket supercomputers. Just extend that breadth and bring it to the rest of computing using the force of law.
---
Can I hear someone saying something like "That will never work!"?
Perhaps we should meditate upon that before we leap into any new age of regulation.
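For what it's worth, the "trusted chain" being proposed can be sketched in a few lines. This is a toy model only: it uses symmetric HMACs as a stand-in for the public-key signatures a real code-signing scheme (e.g. Ed25519 certificates) would use, and every name in it is hypothetical.

```python
import hmac
import hashlib

def sign(secret: bytes, payload: bytes) -> bytes:
    # Toy "signature": an HMAC keyed by the signer's secret.
    # Real code signing uses asymmetric keys, so verifiers never
    # hold the signing secret; this sketch ignores that distinction.
    return hmac.new(secret, payload, hashlib.sha256).digest()

ROOT_SECRET = b"government-root-key"   # the vetted root of trust
AUTHOR_SECRET = b"vetted-author-key"   # an author key endorsed by the root

# Link 1: the root endorses the author's key.
author_cert = sign(ROOT_SECRET, AUTHOR_SECRET)
# Link 2: the author signs the software binary.
binary = b"some-software-v1.0"
binary_sig = sign(AUTHOR_SECRET, binary)

def verify_chain(root_secret, author_secret, cert, payload, sig) -> bool:
    # Both links must check out: root -> author, then author -> binary.
    return (hmac.compare_digest(cert, sign(root_secret, author_secret))
            and hmac.compare_digest(sig, sign(author_secret, payload)))
```

The point of the sketch is that refusing to run anything whose chain doesn't terminate at the government root is exactly the lockdown described above; a tampered binary or an unendorsed author fails `verify_chain`, and so does every hobbyist who never got vetted.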
That's exactly the kind of logical conclusion I had hoped for someone here to reach in this bizarre sea of emotional pleas.
After over two decades of careful preparation, we're the stroke of a legislative pen away from having all of the software on our computers regulated by our friends in the government.
It's not even a slippery slope argument. In order to be effective, "We must regulate AI!" means the same thing as "We must regulate computer software!"
The two things are so close to identical that they're not even as different as two sides of the same coin.
(Be careful what you wish for; you might just get it.)
"to give society time to explore its possibilities and co-develop norms and safeguards"
Or, "this safety stuff is harder than we thought, we're just going to call 'tag you're it' on society"
Or,
Oppenheimer: "Man, this nuclear safety stuff is hard, I'm just going to put it all out there and let society explore developing norms and safeguards."
Oppenheimer was making a bomb from day 1; he knew exactly what he was doing and how it would be used. There aren't many different use cases for a bomb, after all. It was a nice movie, but it does not absolve him.
The bomb was the end of conventional warfare between nuclear nations. MAD has created an era of peace unlike anything our species has ever seen before.
Well, it works great until it doesn't. We're perpetually a few bad decisions by a few possibly deranged actors away from obliterating all of those gains and then some.
Right, and in the meantime nuclear-armed countries mostly get to avoid the horrible, endless churn of death and war and teenagers being sent off to the meat grinder to push some border here or some border there.
We have eliminated warfare between nuclear countries, conflicts have been reduced to nuclear/non-nuclear or proxy warfare, and that's a very solid reduction in suffering.
"Climate Change is likely to mean more fires in the future, so we've lit a small fire at everyone's house to give society time to co-develop norms and safeguards."
Especially since they were originally supposed to be a non-profit focused on AI safety, and Sam Altman single-handedly pivoted to a for-profit after taking all the donations and partnering with probably the single most evil corporation that has ever existed, Microsoft.
Microsoft is more evil than Enron? Than the company that faked blood tests? This is some pretty extreme hyperbole. I’d pick Google over Microsoft for one.
text, image, video, and audio editing tools have no 'safety' and 'alignment' whatsoever, and skilled humans are far more capable of creating 'unsafe' and 'unethical' media than generative AI will ever be.
somehow, society has survived just fine.
the notion that generative AI tools should be 'safe' and 'aligned' is as absurd as the notion that tools like Notepad, Photoshop, Premiere and Audacity should exist only in the cloud, monitored by kommissars to ensure that proles aren't doing something 'unsafe' with them.
The irony is that users want more freedom and fewer safeguards.
But these companies are rightfully worried about regulators and legislatures, often led by pearl-clutching journalists, so we can't have nice things.
Recent events (many events in many places) show "users" don't think too hard before acting. And sometimes they act with inadequate or inaccurate information. If we want better outcomes, it behooves us to hire people to do the thinking that ordinary users see no point in doing for themselves. We call the people doing the hard thinking scientists, regulators, and journalists. The regulators, when empowered to do so by the government, can stop things from happening. The scientists and journalists can just issue warnings.
Giving people what they want when they want it doesn't always lead to happy outcomes. The people themselves, through their representatives, have created the institutions that sometimes put a brake on their worst impulses.
Do we not want new stuff? If the answer is "Sure, but only if whoever invents the stuff does all the work and finds all rough edges" then the answer is actually just "No, thanks".
It's a little disingenuous to jump to "we don't want new stuff" when people voice criticism of deepfake generators or AI models trained on stolen content.