> You refusing to write open source will do nothing to slow the development of AI models - there's plenty of other training data in the world.
There's also plenty of other open source contributors in the world.
> It will however reduce the positive impact your open source contributions have on the world to 0.
And it will reduce your negative impact through helping to train AI models to 0.
The value of your open source contributions to the ecosystem is roughly proportional to the value they provide to LLM makers as training data. Any argument you could make that one is negligible would also apply to the other, and vice versa.
The ethical framework is simply this: what is the worth of doing +1 for everyone, if the very thing you wish didn't exist (because you believe it is destroying the world) benefits 10x more from it?
If bringing fire to a species gives them light and warmth, but also gives some of its members the means and incentive to burn everything down for good, you have every ethical right to ponder whether or not you want to contribute to that fire.
I don't think that a 10x estimate is credible. If it were, I'd understand the ethical argument being made here, but I'm confident that excluding one person's open source code from training has an infinitesimally small impact on the abilities of the resulting model.
For your fire example, there's a difference between being Prometheus teaching humans to use fire and being a random villager who adds a twig to an existing campfire. I'd say the open source contributions example here is more the latter than the former.
The ethical issue is consent and normalisation: asking individuals to donate to a system they believe is undermining their livelihood and the commons they depend on, while the amplified value is captured somewhere else.
"It barely changes the model" is an engineering claim. It does not imply "therefore it may be taken without consent or compensation" (an ethical claim) nor "there it has no meaningful impact on the contributor or their community" (moral claim).
Your argument applies to everything that requires a mass movement to change. Why do anything about the climate? Why do anything about civil rights? Why do anything about poverty? Why try to make any change? I'm just one person. Anything I could do couldn't possibly have any effect. You know what, since all the powerful interests say it's good, it's a lot easier to jump on the bandwagon and act like it is. All of those people who disagree are just Luddites anyway. And the Luddites didn't even have a point, right? They were just idiots who hated machinery for no reason at all.
> there's plenty of other training data in the world.
Not if most of it is machine generated. The machine would start eating its own shit. The nutrition it gets is from human-generated content.
> I don't understand the ethical framework for this decision at all.
The question is not one of ethics but one of incentives. People producing open source are incentivized in a certain way, and it is abhorrent to them when that framework is violated. There needs to be a new license that explicitly forbids use for AI training. That may encourage folks to continue to contribute.
Saying people shouldn't create open source code because AI will learn from it is like saying people shouldn't create art because AI will learn from it.
In both cases I get the frustration - it feels horrible to see something you created be used in a way you think is harmful and wrong! - but the world would be a worse place without art or open source.
> In both cases I get the frustration - it feels horrible to see something you created be used in a way you think is harmful and wrong! - but the world would be a worse place without art or open source.
Well maybe the AI parasites should have thought of that.
I imagine you think I'm an accelerant of all of this, through my efforts to teach people what it can and cannot do and provide tools to help them use it.
My position on all of this is that the technology isn't going to be uninvented and I very much doubt it will be legislated away, which means the best thing we can do is promote the positive uses and disincentivize the negative uses as much as possible.
they're using your exceptional reputation as an open-source developer to push their proprietary parasitic products and business models, with you thinking you're doing good
I don't mean to be rude, but I suspect "useful idiot" is probably the term they use to describe open source influencers in meetings discussing early access
You know, I'm realizing that in my head I'm comparing this to Nazism and Hitler. I'm sure many people thought he was bringing change to the world and that, since it was going to happen anyway, we should all get on board with it. In the end there was a reckoning.
IMHO there are going to be consequences of these negative effects, regardless of the positives.
Looking at it in this light, you might want to get out now, while you still can. I'm sure it's going to continue, and it's not going to be legislated away, but it's still wrong to be using this technology the way it's being used right now, and I will not be associated with the harmful effects this technology is being used for because a few corporations feel justified in pushing evil onto the world wrapped in positives.
I think of LLMs as more like the invention of cars or railways: enormous negative externalities, but provided enough benefit to humanity that we tend to think they were worthwhile.
Are the negatives of LLMs really that bad? Most of them look more like annoyances to me.
The ones that upset me the most are the ChatGPT psychosis episodes which have led to loss of life. I'm reassured by the fact that the AI labs are taking genuine steps to reduce the risk of that happening, which seems analogous to me to the development of car safety features.