> Considering what's in the charter, it seems like she didn't do anything wrong?
It’s incredibly disingenuous to put your name on an ethics paper accusing a company of malfeasance, such as triggering a race to the bottom for AI ethics, when you have an active hand in steering that company.
It’s shameful. Either she should have resigned from the company, for ethical reasons, or recused herself from the paper, because of the conflict of interest.
> when you serve on the board of directors for that company
I'm not sure that's true, because the non-profit/for-profit structure makes it more complex.
In this case, the non-profit board acts as a kind of governance layer over the for-profit arm; in a way, it's there to be a roadblock to the for-profit arm.
Normally a board's incentives align with shareholders': to maximize profit.
Here the board has the opposite incentive: to maximize AGI safety and achievement, even to the detriment of profit and investors.
She is at the top of the pyramid. Did they not fire the chief executive? I am saying she is morally culpable for OpenAI’s actions as a controlling party.
To put these claims in a published paper in such a naive way with no disclosure is academically disingenuous.
She's not in charge of the for-profit arm, though; all she could do was fire the CEO, and she did, which would seem consistent with her criticism. I don't think she has much more power than that as a board member. She also isn't at the top, in the sense that she needs other board members to vote the same way to enact any change, so it's possible she kept raising concerns and not getting support.
Academically, did she not disclose being on the board in her paper?
Her position was listed as a fun fact, not as a responsible disclosure of possible conflicts of interest (though the conflict ran the other way).
Being at the top of the org and being present during the specific incidents that give one qualms burdens one with moral responsibility, even if one was the member who voted against them.
You shouldn’t say “they did [x]” instead of “we did [x]” when x is bad and you were part of the team.
It sounds like your argument is "Even if OpenAI did something bad, Helen should never write about it, because she is part of OpenAI".
Or, that she should write her paper in the first person: "We, OpenAI, are doing bad things." That would probably be seen as vastly more damaging to OpenAI, and also ridiculous since she doesn't have the right to represent OpenAI as "we".
I have no idea why you think that should be a rule, aside from wanting Helen to never be able to criticize OpenAI publicly. I think it's good for the public if a board member will report what they see as potentially harmful internal problems.
I just don’t know why an ethicist would remain involved with a company they believe is behaving unethically and proceed with business as usual. I suppose the answer is the news from Friday, though that course feels quite unwise for the multitude of reasons others have already outlined.
Regarding specific verbiage and grammar, I’m sure an academic could give clearer guidance on what is better form in professional writing. What was presented was clearly lacking.
One thing we've learned over the past few days is that Toner had remarkably little control over OpenAI's actions. If a non-profit's board can't fire the CEO, they have no way to influence the organization.
Did we read different things? All it said was that they had been accused of these things, which is true. If your charter involves ethical AI, I’d imagine the first step is telling the truth?
> While the system card itself has been well received among researchers interested in understanding GPT-4’s risk profile, it appears to have been less successful as a broader signal of OpenAI’s commitment to safety. The reason for this unintended outcome is that the company took other actions that overshadowed the import of the system card: most notably, the blockbuster release of ChatGPT four months earlier. Intended as a relatively inconspicuous “research preview,” the original ChatGPT was built using a less advanced LLM called GPT-3.5, which was already in widespread use by other OpenAI customers. GPT-3.5’s prior circulation is presumably why OpenAI did not feel the need to perform or publish such detailed safety testing in this instance. Nonetheless, one major effect of ChatGPT’s release was to spark a sense of urgency inside major tech companies. [149] To avoid falling behind OpenAI amid the wave of customer enthusiasm about chatbots, competitors sought to accelerate or circumvent internal safety and ethics review processes, with Google creating a fast-track “green lane” to allow products to be released more quickly. [150] This result seems strikingly similar to the race-to-the-bottom dynamics that OpenAI and others have stated that they wish to avoid. OpenAI has also drawn criticism for many other safety and ethics issues related to the launches of ChatGPT and GPT-4, including regarding copyright issues, labor conditions for data annotators, and the susceptibility of their products to “jailbreaks” that allow users to bypass safety controls. [151] This muddled overall picture provides an example of how the messages sent by deliberate signals can be overshadowed by actions that were not designed to reveal intent.