
The only specific accusation made in the article is that Sam criticized Helen Toner for writing a paper: https://cset.georgetown.edu/publication/decoding-intentions/

A paper that says Anthropic has a better approach to AI safety than OpenAI.

Sam apparently said she should have come to him directly if she had concerns about the company's approach and pointed out that as a board member her words have weight at a time when he was trying to navigate a tricky relationship with the FTC. She apparently told him to kick rocks and he started to look for ways to get her off the board.

All of that ... seems completely reasonable?

Like I've heard a lot of vague accusations thrown at Sam over the last few days and yet based on this account I think he reacted the exact same way any CEO would.

I'm much more interested in how Helen managed to get on this board at all.



>We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.

>...

>We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

>Our primary fiduciary duty is to humanity.

>...

>We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”

https://openai.com/charter

Seems to me like Helen is doing a better job of upholding the charter than Sam is.


This charter is doomed to be interpreted in radically different ways by people with differing AI-eschatological beliefs. It's no wonder it's led to so much conflict.


Sam wanted to restrict Helen's free expression on the topic of beneficial AI in order to boost OpenAI's position. That suggests he cares more about the success of OpenAI than he does upholding the charter.


He had dual overlapping roles: a duty to promote safe AI to the world, and a duty to prevent own-goal sabotage of the LLC. Ideally those roles wouldn't conflict. He probably didn't care whether such a paper was written, but having a board member as author is beyond awkward. She has her own dual overlapping role: as an academic, she has a need to publish. ("Uh, please voice your concerns before we develop it instead of sandbagging us with a damning critique in an academic journal after the fact!") This seems like a recipe for disaster. I'm not certain either party was really "wrong" in that conflict. It just seems like this structure was doomed to failure, throwing uncompromising zealots together, all with launch codes and a big red button.

Safety isn’t binary, it’s degrees of risk assessment, mitigation and acceptance. Infinite safety is never progressing. But never progressing means failing the charity mission. And AI isn’t just being created at OpenAI, for them to succeed at the charity mission they need to stay ahead of risky competitors with a safer AI.

The charity can't afford its GPU costs without the LLC, and the LLC can't lead the industry if it's behind all the competitors. To be relevant the safety side needs compute, needs to be ahead of the curve, and also needs real-world data from the LLC. If the charity nukes the LLC, it takes itself down with it and fails its mission. They're so intertwined that they need compromise on the board. And they let that board dwindle to small entrenched factions, which sounded more like the Hatfields and the McCoys. With such a fragile structure they needed mature adults with functional conflict resolution.


No, Sam wanted to be the first recipient of Helen's expression.

So -- if appropriate -- he could make changes.

Or he could not change, and she publishes anyway.


>> Sam wanted to restrict Helen's free expression.

> No, Sam wanted to be the first recipient of Helen's expression.

If I get copy approval to what you write, that is by definition a restriction to your freedom of expression. How is this up for debate?


My interpretation of that was that Sam felt Helen's duty as a board member is to come to the company with her complaints, and work to improve the safety measures.


You don't normally perform your role as a board member through publications in a public forum. You do it by exercising pressure internally to get the effect you desire, so you can continue to do so in the future. Going public is usually a one-shot affair.


Being a co-author of that publication was not her performing her role as an OpenAI board member, it was her day job.


Conflicts of interest suck. Experienced board members avoid them or deal with them. They don't allow their conflicts of interest to spiral out of control to the point that they destroy the thing they're supposed to govern.


> Conflicts of interest suck. Experienced board members avoid them or deal with them.

Agreed. It seems remarkable that she dealt with her conflict of interest so thoroughly that she published an article that was mildly critical of the organization. I do not think her duty was to let Sam Altman seize power by firing her from the board on a flimsy pretext, though.


She went a bit further than that.


Did Helen's publication conflict with the charter though?

I'd say the OpenAI employees are the ones with the conflict of interest, since they stand to get rich if the company does well, independent of whether the charter is being upheld.


No, it did not conflict. But that doesn't change that you first try to get things fixed internally rather than publish.

> I'd say the OpenAI employees are the ones with the conflict of interest, since they stand to get rich if the company does well, independent of whether the charter is being upheld.

Those pesky employees again. Too bad we still need them. But after AGI is a fact we don't and then it'll be a brave new world.


It’s interesting that people are making the assumption Toner didn’t raise these concerns internally as well.


> So -- if appropriate -- he could make changes.

Where are you drawing this conclusion from? Nothing in the text suggests that was his intent - certainly his actions thereafter suggest that he was in fact more concerned with the potential harm to the for-profit company.


The journalist failed to make that point really clear. He "reprimanded" her "and" said "it was dangerous to the company". What exactly did he "reprimand" her for? The "and" seems to imply two separate points of criticism.

>In the email, Mr. Altman said that he had reprimanded Ms. Toner for the paper and that it was dangerous to the company, particularly at a time, he added, when the Federal Trade Commission was investigating OpenAI over the data used to build its technology.


The normal operation of every human relationship.


Let's recall the CEO is employed by the board, not the other way around.


Is Sam a member of her faculty? Or an advisor?

Otherwise I'm not sure what changes would be "appropriate" for him to direct.


> That suggests he cares more about the success of OpenAI than he does upholding the charter.

A board member should care about their organization more than their interpretation of the charter. If a Mozilla board member publicly criticized Firefox for violating their charter of a free and open internet because they're receiving money from Google, while also praising Brave for not doing so, I guarantee that they'd also be ousted.


I would have thought a non-profit board that puts the growth and success of their organization above the mission in their charter, even if they view the organization as going against their mission, sounds like the worst kind of board. How could that possibly be a good idea?

Isn't the whole reason we have non-profit boards and charters so that they can make just this kind of call?

(I'm not saying that this is what is happening here -- just responding to your particular claim.)


>Isn't the whole reason we have non-profit boards and charters is so that they can make just this kind of call?

Yes, from within the board. That's why we typically don't see boards air their dirty laundry out in the open, even when, in retrospect, they obviously had a problem with leadership.


The board seems to be getting an awful lot of crap for not saying enough, and also getting crap because their initial letter was too direct rather than being the usual corporate mealy-mouth BS.


The board is getting a lot of crap for being crap. Regardless of your opinion about their goals, their execution was objectively terrible. They were clearly completely detached from their organization. At the very least, they could have made sure their first interim CEO wouldn't instantly try to turn against them.

It's not a contradiction to criticize someone for saying too much sometimes and saying too little another time. The board should be open to insiders and cautious with external communications.


It's possible to simultaneously believe that announcing that the CEO lied to the board about x on day y is better than announcing that the CEO is leaving to spend more time with his family, but announcing that the CEO lied to the board and then refusing to elaborate when everyone including the CEO says "huh? What?" is worse than either.


This is exactly backwards. The reason we create organizations like this is not for the benefit of the organization itself, but to accomplish specific goals. Those goals are outlined in the charter. If the organization starts doing things that conflict with the charter, the responsibility of the board is to course-correct.


The place to course correct is from the board room, not in public. If she had enough power to oust the CEO, she should have enough power to pressure him to make changes.


The board can't compel the CEO to do specific things, only replace them.


Of course they can. If Sam Altman were really as heedless about AI safety as the board seems to think, GPT wouldn't be so aggressively aligned.


A misaligned AI could act aligned in the service of its interests, just like an employee trying to steal from a company would act like a good honest employee. Methods for getting AIs to visibly act a certain way aren't necessarily sufficient to control advanced AI systems.


>We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”

How could this possibly be accomplished while trying to sell the product itself? Investors pouring billions into it are in it for profit... they're not going to let you just stop, or help a competitor for free.


It's completely irrelevant language because nobody is anywhere close to actual AGI.


Well, they're close to releasing a product that they know they can call AGI, as it's really just three letters that can be trademarked. It seems like it's being hyped up like ML or bitcoin: lots of hype to sucker in the investors, be the first to build a turk-in-a-box based on that new hyped model, add regulation to stifle competition, ???? profit.


Promoting Anthropic and putting down OpenAI doesn't make her better at her job. Her job isn't self promotion.


Unless she has equity in Anthropic (which would be major conflict of interest), I don't see how this is self promotion...?


I'm guessing the reasoning is something like this...

As a CEO I'd want your concerns brought to me so they could be addressed. But if they were addressed, that is one less paper that could be published by Ms. Toner. As a member of the openai board, seeing the problem solved is more important to openai than her publishing career.


https://openai.com/our-structure

"Second, because the board is still the board of a Nonprofit, each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial. While the for-profit subsidiary is permitted to make and distribute profit, it is subject to this mission. The Nonprofit’s pzrincipal beneficiary is humanity, not OpenAI investors."

I see. I don't know whether she discussed any issues with Sam beforehand, but it really does not sound like she had any obligation to do so (this isn't your typical for-profit board, so her duty wasn't to OpenAI as a company but to what OpenAI is ultimately trying to do).


> but it really does not sound like she had any obligation to do so

The optics don't look good, though, if a board member is complaining publicly.


Frankly, that's irrelevant first-order thinking.

If Sam had let it go, what would have happened? Nothing. Criticism and comparisons already exist and will continue to exist. Having it come from a board member at least gives the counter-argument that they're well aware of potential problems and that there is an opportunity to address gaps if they are confirmed.

If regulators find the argument in the paper reasonable and that has an impact, what's wrong with that? It just means the argument was true and should be addressed.

They don't need to worry about the commercial side, because money is being poured in more than enough.

Safety research is critical by nature. You can't expect research to be constrained to talk only in positive terms.

Both sides should have worried less and carried on.


But her job is to do exactly that. Anybody in this space knows Anthropic was formed with the goal of AI safety. Her paper just backed that up. Is she supposed to lie?


What she is supposed to do is bring the issues to the company so that they can be fixed.

That's the pro safety solution.


Is it a complaint, or a discussion of the need for caution?


It does not sound like what she did helps advance the development of AGI that is broadly beneficial. It simply helps slow down the most advanced current effort, and potentially let a different effort take the lead.


> It simply helps slow down the most advanced current effort

If she believes that the most advanced current effort is heading in the wrong direction, then slowing it down is helpful. "Most advanced" isn't the same as "safest".

> and potentially let a different effort take the lead

Sure but her job isn't to worry about other efforts, it's to worry about whether OpenAI (the non-profit) is developing AGI that is safe (and not whether OpenAI LLC, the for-profit company, makes any money).


On the other hand, if your voice is not usually heard, you create more pressure to solve the problem by publishing a recognized paper.


She's a board member. Who had approximately 1/4 of the power to fire Sam, and she did eventually exert it. Why do you rather assume her voice was not heard?


You should assume that most of this happened before the firings.

At that point it was 1/6 of the vote.

But voting is a totally different thing from raising concerns and actually getting them onto the agenda, which would then be voted on if the board decided to do something about it.

In theory that's 1/6 * 1/6 of the power, if you are alone in pushing for the decision to happen.


I still see no justification for assuming that the board member's voice was not heard before the publication. There's zero evidence for it, while the priors ought to be favoring the contrary because she does wield a fairly material form of power. If more evidence does emerge, then we could revisit the premise.


> she does wield a fairly material form of power.

How? Being a board member is not enough. There were already likely two against her in this case, while the rest is unknown.


These guys are already millionaires. Do you think people writing these kinds of papers really are that greedy?


Is Helen associated with Anthropic?


Apparently an indirect association. From [0]:

Fall 2021

Holden Karnofsky resigns from the Board, citing a potential conflict because his wife, Daniela Amodei, is helping start Anthropic, a major OpenAI competitor, with her brother Dario Amodei. (They all live(d) together.) The exact date of Holden’s resignation is unknown; there was no contemporaneous press release.

Between October and November 2021, Holden was quietly removed from the list of Board Directors on the OpenAI website, and Helen was added (Discussion Source [1]).

0. https://loeber.substack.com/p/a-timeline-of-the-openai-board

1. https://forum.effectivealtruism.org/posts/fmDFytmxwX9qBgcaX/...


Don't tell me there is another polycule in there somewhere.


> Her job isn’t self promotion

Isn’t she an academic? Getting people to pay attention to her is at least half her job.


The challenge of all this is that while everything going on looks totally bonkers from any normal sense of business, it's hard to argue that the board isn't following their charter. IMHO the mistake was setting up the structure the way it is and expecting that to go well. Even Microsoft, though obviously annoyed, has shareholders too, and one reasonable question here is what the heck Microsoft's leadership was doing putting billions of capital at risk with an entity that has such a wacky structure and is ultimately governed by this non-profit board with a bizarre doomsday-ish charter. Seriously, if you haven't read it, read it.

This whole thing has been wildly mishandled but there’s an angle here where the nonprofit is doing exactly what they always said they would do and the ones that potentially look like fools are Microsoft and other investors that put their shareholder capital into this equation thinking that would go well.


When Microsoft came on board the charter effectively went out the window. It's like Idefix thinking he's leading Obelix around on a leash.


For anyone who is familiar with Obelix but has no idea who Idefix is, it's the original French name for Obelix' dog, Dogmatix. Idefix is a pun on the French expression idée fixe (fixed idea) meaning an obsession.


"It's like Idefix thinking he's leading Obelix around on a leash."

I don't think I have ever seen Idefix on a leash... he just runs around free. And there are indeed many dogs on a leash who lead their "masters", who just follow behind.


I picked them for their difference in relative size and strength, not to suggest that Idefix would ever accept a leash or because there are other dogs that lead their masters around.


Hm... I would think that here it is the board holding the leash, but being pushed forward in a direction they did not want to go.


Apparently not…


A lot of that billions of capital is simply Azure compute credits though.


Those credits equate to billions of $ in real compute expense for MSFT.


I would hope that a billion in credits would not represent a billion in expenses for Microsoft.


They're giving up all the profit from whoever that compute would have been sold to otherwise.

So it adds up the same.


Microsoft giving these credits to OpenAI doesn't mean some other customer won't buy the Azure credits they planned on buying. So I don't see how what you write makes sense.

At least assuming Microsoft hasn't run out of capacity in their data centers. I know Azure sometimes has capacity issues, but those issues are intermittent and not, say, 5 years long (or whatever the window is when it comes to consuming these credits).


> doesn't mean some other customer won't buy the Azure credits they planned on buying

Yes it does, because we're talking about GPU availability, which is absolutely highly constrained across the economy.

If we were talking about CPU credits instead, then you're right that Microsoft would be paying/losing merely cost.


So when they give you $100 in free credits they are losing $100.


According to the paper, Anthropic's superior safety approach was deliberately delaying release in order to avoid “advanc[ing] the rate of AI capabilities progress." OpenAI is criticized for kicking off a race for AI progress by releasing ChatGPT to the public.

[1] https://cset.georgetown.edu/wp-content/uploads/CSET-Decoding...


It's important to remember that part of OpenAI's mission, apart from developing safe AGI, is to "avoid undue concentration of power".

This is crucial, because safety is a convenient pretext for people whose real motivations are elitist. It's not that they want to slow down AI development; it's that they want to keep it restricted to a tight inner circle.

We have at least as much to fear from elites who think only megacorps and governments are responsible enough to be given access as we do from AI itself. The failures of our elite class have been demonstrated again and again in recent decades--from the Iraq War to Covid. These people cannot be trusted as stewards, and we can't afford to see such potentially transformative technology employed to further ossify and insulate the existing power structure.

OpenAI had the right idea. Keep the source closed if you must, and invest heavily on safety, but make the benefits available to all. The last thing we need is an AI priesthood that will inevitably turn corrupt.


"OpenAI had the right idea. Keep the source closed if you must, and invest heavily on safety, but make the benefits available to all."

Well, you see the result. Apart from this drama, there is lots of debate and speculation about whether we are already in AGI territory, because it is all just a black box and no one knows exactly what was in the training data in order to judge the quality of the output.

That is way too much AI priesthood for me, just open it for real, if you don't want concentration of power.


Yeah that's certainly a debate worth having.

My point is this coup against OpenAI (seemingly) wasn't started by people who want to make AI more open. They want to make it inaccessible unless you're in their club. To my reading, that is more starkly opposed to the charter than anything Sam Altman has done. Commercialization = giving wide access. Not total 100% open source access, but wide access nonetheless.


And so you strike a deal with Microsoft. That tracks.


I think their point was that OpenAI (the nonprofit) had the right idea regarding those concerns about concentration of power.

The velocity of release and entwining with MSFT (by the profit side) might then be reasonably seen as a great concern for the board.


Advancing AI requires massive amounts of capital and compute. There's no way around this. It can only be done either within or in partnership with huge organizations.

The default, in the absence of an OpenAI, is that it gets developed secretly by these organizations, and they get to decide who can use it (hint: it's not you, typical HN reader).

OpenAI were the only ones at least trying to walk this tightrope, and I think they were doing a pretty good job. So what were the real motivations of these three wealthy/elite board members who are taking it upon themselves to decide this issue for the rest of humanity? It's looking less and less likely that safety was the reason.


The paper reads a lot more nuanced than that. It compares the "system card" released with GPT-4 to the delay of Claude and the merits of each approach vis a vis safety.


Not really, and your own description is little different from the person you're responding to anyway?


>According to the paper, Anthropic's superior safety approach was deliberately delaying release in order to avoid “advanc[ing] the rate of AI capabilities progress."

Which can end up with China taking the lead. I don't understand why they think it's safer.


Having read the word soup of contradicting weasel words that makes up Claude's "constitution", 'superior safety approach' has so many asterisks it could be a star chart. The only thing the garbage Anthropic has produced is superior at is making some people feel good about themselves.

(https://www.anthropic.com/index/claudes-constitution)


I'm already out after the very first sentence.

> How does a language model decide which questions it will engage with and which it deems inappropriate?

This is what we mean by safety? Content moderation?


There are even more gems in the paper, like this one:

> Suppose a leader pledges during a campaign to provide humanitarian aid to a stricken nation or the CEO of a company commits publicly to register its algorithms or guarantee its customers' data privacy. In both cases, the leader has issued a public statement before an audience who can hold them accountable if they fail to live up to their commitments. The political leader may be punished at the polls or subjected to a congressional investigation; the CEO may face disciplinary actions from the board of directors or reputational costs to the company's brand that can result in lost market share.

I wonder if she had Sam Altman in mind while writing this.


The CEO is generally accountable to the board. A CEO trying to silence criticism and oust critical board members may be typical behaviour in the world of megalomaniacal tech startup CEOs, but it is not generally considered good corporate governance. (And usually the megalomaniacal tech startup CEOs have equity to back it up.)


He said he wished she had communicated her concerns to him beforehand. How can disagreements be dealt with if they are never communicated directly? So the CEO has to first learn of a disagreement with a fellow board member through a NY Times article?


Hard to defend this level of flex, though:

> Mr. Altman, the chief executive, recently made a move to push out one of the board’s members because he thought a research paper she had co-written was critical of the company.

It's like your job is to fire me if I fail to fulfill the company mission, but I will fire you if you don't get my approval before saying stuff in public.


Keep in mind Altman was also a board member with senior tenure to Toner, and basically hired her to the board.


No, she replaced Holden Karnofsky, who was almost certainly the one to pick her.


Who, it's worth mentioning, got a seat because Open Philanthropy donated $30 million to OpenAI early in its creation.


Pledged $30 million, $10 million per year for three years, but most likely only $20 million received. Elon Musk gave $40 million. There is also $70 million in mystery "other income" in the 990's that is missing the required explanation (nature and source.)

OpenAI is operated as a public charity and is required to meet a "public support test" of 33%, so Musk could not have given his $40 million without the $20 million from Open Philanthropy, an EA-supported public charity. In fact, most of Open Philanthropy's money went to OpenAI.

Public charities are also required to have less than a majority of the board be employees or relatives of employees. For a while, after Elon Musk was removed, Holden Karnofsky of Open Philanthropy was the only non-employee on the board.


Why is that? I didn't think outgoing board members got to control who replaced them.

Other board members elect replacements iirc


The "disagreement" was never dealt with as far as I can tell --- OpenAI's safety approach hasn't become more conservative --- which means that the only effect of bringing it to the CEO beforehand was to try to have it suppressed.


Sam could never have a selective memory about such a thing...


> Sam apparently said she should have come to him directly if she had concerns about the company's approach

That seems dishonest given the last three years or so of conflict about these concerns that he’s been the center of. Of course he’s aware of those concerns. More likely, that statement was just him maneuvering to be the good guy when he tried to fire her, but it backfired on him.


It's interesting but it may well be they both have a point: Helen for telling him to get lost and Sam for attempting to remove her before she would damage the company.

But she could have made that point more forcefully by not comparing Anthropic to OpenAI, after all who better than her to steer OpenAI in the right direction. I noted in a comment elsewhere that all of these board members appear to have had at least one and some many more conflicts of interest. Helen probably believes that her loyalty is not to OpenAI but to something higher than that based on her remark that destroying the company would serve to fulfil its mission (which is a very strange point of view to begin with). But that doesn't automatically mean that she's able to place it in context, within OpenAI, within the USA, the Western world and the world as a whole.

It's like saying the atomic bomb would have never been invented if the people at Los Alamos didn't do it. They did it in three years after it became known that it could be done in principle. Others tried and failed but without the same resources. I suspect that if the USA had not done it that eventually France, the UK and Russia would have gotten there as well and later on China. Israel would not have had the bomb without the USA (willing or unwilling) and India and Pakistan would have achieved it but much later as well. So we'd end up with the same situation that we have today modulo some timing differences and with another last chapter on WWII. Better? Maybe. But it is also possible that the Russians would have launched a first strike on the USA if they were unopposed. It almost happened as it was!

The open question then is: does she really believe that no other entity has the resources to match OpenAI and does she believe that if such an entity does exist that it too will self destruct rather than to go through with the development?

And does she believe that this will hold true for all time? That they and their colleagues are so unique that they hold the key to something that can otherwise not be replicated.


> The open question then is: does she really believe that no other entity has the resources to match OpenAI and does she believe that if such an entity does exist that it too will self destruct rather than to go through with the development?

People at "top" companies fall into this fallacy very readily. FAANG (especially Google and Facebook engineers) think this way on all sorts of things.

The reality is that for any software project, your competition is rarely more than 1 year behind you if what you're doing is obviously useful. OpenAI made ChatGPT, and that revealed that this sort of thing was obviously useful, kicking off the arms race. Now they are bleeding money running a model that nobody could run profitably in order to keep their market position.

I have tried to explain this to xooglers several times, and it often goes in one ear and out the other until they get complacent and the competition swipes them about a year later.


I think the real issue is that OpenAI was doomed to fail from the beginning. AI is commercially too valuable to be developed by an organization with a mission like them. Eventually they had to make a choice: either become a for-profit without any pretensions about the good of humanity, or stay true to the mission and abandon ambitions of being at the cutting edge of AI development.

A non-profit could not have beaten the superpowers in developing the atomic bomb, and a non-profit cannot beat commercial interests in developing AI.


I always thought the structure was there to pull the wool over the eyes of the world's smartest researchers, to get them to agree to help them build a doomsday sort of technology. I never expected the board would be drinking their own kool aid.


> either become a for-profit without any pretensions about the good of humanity

Not a day passes that I don't hear from some company that has pretensions about the good of humanity and how they are leading the way to it. Plenty of non-profits and government orgs have the same pretensions.


I think it's different because the atomic bomb is pure cost, while AI can have returns from products. But your overall point may stand.


There's definitely (very large) returns from having atomic weapons.


Having the bomb first has infinite ROI.


> And does she believe that this will hold true for all time? That they and their colleagues are so unique that they hold the key to something that can otherwise not be replicated.

It's impossible to understand this position. We can be sure that in some countries right now there are vigorous attempts to build autonomous AI-enabled killing machines, and those people care nothing for whatever safety guardrails some US startup is putting in place.

I'm a believer in a skynet scenario, though much smarter people than me are not, so I'm hopefully wrong. But whatever: hand-waving attempts to align, soften, or safeguard this technology are pointless and will only slow down the good actors. The genie is out of the bottle.


"But it is also possible that the Russians would have launched a first strike on the USA if they were unopposed. It almost happened as it was!"

When did a first strike by the Soviet Union almost happen? I rather think it was the other way around: a first strike was evaluated, to hit them before they got the bomb.



What do you mean? The Soviets never considered a nuclear first strike? They hadn't even fully deployed the ballistic missiles. It was the Joint Chiefs of Staff who recommended a first strike (although not nuclear), which Kennedy fortunately overruled.


>I noted in a comment elsewhere that all of these board members appear to have had at least one and some many more conflicts of interest.

From the perspective of avoiding an AI race, conflict of interest could very well be a good thing. You're operating under a standard capitalist model, where we want the market to pick winners, may the most profitable corporation win.


I subscribe to capitalism, but not to that degree. I see it the same way I see democracy: flawed but I don't have anything better.

> From the perspective of avoiding an AI race, conflict of interest could very well be a good thing.

On the American subcontinent: yes. But the world is larger than that.


Helen Toner has done a huge amount of work slowing down capabilities research in China. That's why she lived in Beijing for a year, she is a big part of why there are a lot of Chinese researchers from various AI labs signed onto the CAIS statement, and it's what her relationship to the Pentagon is all about. I think she is probably the individual person who knows the most in the world about the relative AI capabilities of China and the US, and her career is about working with the Pentagon, AI companies in the USA, and AI companies in China to prevent an arms race scenario around AI. It's the sort of work that really, really doesn't benefit from a lot of publicity, so it's unfortunate that this whole situation has put her in the spotlight and means someone else will probably need to backchannel between the US and China on AI safety now.

I don't know why she chose to publicly execute Altman, there just isn't enough information to say for sure. It probably wasn't a specific, imminent safety concern like "Our new frontier model was way more capable than we were expecting and attempted a breakout that nearly succeeded during internal red teaming", according to the new CEO it wasn't anything like that. The new CEO has heard their reason, but is putting a lot of pressure on them to put that reason in writing for some reason. I don't know why, we just don't have enough information.

But basically, she is a very qualified person on the exact topic you are concerned about and has devoted her career to solving that problem. I wouldn't write off what she's doing or has done here as "She didn't consider that China exists".


Well, if she was working with the Chinese she couldn't have done a more effective job.

So I'm not sure how this all integrates in her head but you break the glass in break-the-glass moments, not before, it's a one-shot thing.


She was working with the Pentagon, to try and make sure there isn't a serious issue between the US & China on AI, which requires actual engagement instead of just blind nationalism if you want the Chinese to listen to anything you have to say. I think it would be a really bad idea to assume that she just hasn't thought this through. We just don't have enough information.


I have some indication that she hasn't thought it through, that all started last Friday. If she has thought it through I hope that she can show her homework because it kind of matters.


See https://news.ycombinator.com/item?id=38373572 for why this might have qualified (if the story as presented is accurate).


Thin ice at best. And if it was she should come out and say it or leak the minutes where that was established as the reason that Sam had to go, that the fall-out would be worth it and that Microsoft could be contained.

I think any action to immediately destroy OpenAI should have been preceded by being on-track to create AGI and strong indications that it was not going to benefit humanity, as the charter implies. Anything less and it's just a powerplay. But what is the point of having a nice fat red button if you never get to push it?


>On the American subcontinent: yes. But the world is larger than that.

I'm not sure what you're trying to get at.


Fine by me.


So it could also be that she approached him on the subject multiple times; after all, she is a member of a board whose job is to make AI safety a priority.

Since his plans for rapid expansion and commercialization were in direct conflict with the company's aims, I guess she wrote the paper to highlight the issue.

It seems that, as in the case of Disney, the board has less power and control than the CEO. That's highly likely when you have larger-than-life people like Sam at the helm.

I would not trust the board, but I would also not trust Sam. When billions of dollars are at stake, it's important to be critical of all the parties involved.


>... yet based on this account I think he reacted the exact same way any CEO would.

Say what? The CEO serves at the behest of the board, not the other way around. For Sam to tell a board member that they should bring their concerns to him suggests that Sam thinks he's higher than the board. No wonder she told him to go fly a kite.


> I think he reacted the exact same way any CEO would

Perhaps if you think of it as another YC startup, but not so much if you view OpenAI as a non-profit first and foremost.


Who is being completely reasonable? Board member has a mandate and appears to be making a good faith effort to carry it out and the CEO tries to overthrow her. Whether that is standard behavior for CEOs is irrelevant.


She has a mandate not to promote Anthropic on the back of OpenAI. Very unprofessional


This is something you don’t get. What happens to the for-profit arm of OpenAI is not her problem, her loyalty lies with the non-profit arm’s charter (and not even the organization itself).

She is doing her job to uphold the mission of the organization.


This is exactly correct and so many people just don’t get it. The non-profit owns the for-profit, not the other way around. When the goals of the for-profit clash with the goals of the non-profit, the for-profit has to yield.


> What happens to the for-profit arm of OpenAI is not her problem

Who pays the bills for the non-profit arm? Wasn't that the for-profit arm?


The for-profit arm was always meant to be subservient to the non-profit arm - the latter practically owns the former.

The important thing for the non-profit arm is the mission, its own existence is secondary.


> The important thing for the non-profit arm is the mission, its own existence is secondary.

So OpenAI should just shut down because the charitable donations were not enough to keep the lights on.


If they lack money they could always scale back operations.

Also running the for-profit arm the way Altman did isn’t the only way.

A more reasonable CEO would do what he/she can to make money without running afoul of the charter. Yes, it will be less profit - and sometimes no profit at all - but that’s the way it should be in a non-profit organization.

OpenAI used to be an organization that dabbled with various AI technologies. Anyone remember their DOTA2 bot? That was before Altman made it all about commercializing their LLM, going so far as to lobby Congress to create laws, in the name of safety of course, to hobble any upstart competition.


Helen Toner, through her association with Open Philanthropy, donated $30 million to OpenAI early on. That's how she got on the board.

https://loeber.substack.com/p/a-timeline-of-the-openai-board


That's super insightful, thank you for sharing this.


> I'm much more interested in how Helen managed to get on this board at all.

My gut says that she is the central figure in how this all went down. She and D'Angelo are the central figures, if my gut is right.

It looks like Helen Toner was OK with destroying the company to make a point.

FTA:

> Ms. Toner disagreed. The board’s mission is to ensure that the company creates artificial intelligence that “benefits all of humanity,” and if the company was destroyed, she said, that could be consistent with its mission


That seems reasonable? The charter of the company could reasonably be furthered even if that means the end of the organization. If at some point the existence of the organization becomes antithetical to the charter, the board members have a responsibility to destroy it.


But they didn't destroy it. And they handed the keys of the kingdom to Microsoft.

So as a break-the-glass move it was about as ineffective as throwing a can of gasoline on a fire.


Replying to my own comment since I can't edit it anymore, but:

It looks like Helen Toner is out of the board.


> Sam apparently said she should have come to him directly if she had concerns about the company's approach and pointed out that as a board member her words have weight at a time when he was trying to navigate a tricky relationship with the FTC. She apparently told him to kick rocks and he started to look for ways to get her off the board.

Huh, this sounds pretty crazy to me. Like, it's assuming that a board member should act deceptively in order to help the for-profit arm of OpenAI avoid government scrutiny, and that trying to remove them from the board if they don't want to do that is reasonable. But in fact the entire purpose of the board is to advance the mission of the parent non-profit, which doesn't sound obviously compatible with "avoid giving the FTC (maybe legitimate) ammunition against the for-profit subsidiary, even if that means you should hide your beliefs".


No, it means that you go outside only after you've exhausted all avenues inside. It's similar to a whistle blower situation, only most whistleblowers don't have their fingers on the self-destruct button. So to press that button before exhausting all other options seems a bit hasty. There is no 'undo' on that button.


We're talking about the publication of a relatively milquetoast report that has some lines which can be read as mild criticisms of OpenAI's release strategy for ChatGPT & GPT-4. Why exactly is publishing such a report controversial? It's totally compatible with Helen Toner's role on the board.


I wasn't aiming that at the report per-se but at the actions on Friday. He wanted to keep that kind of report inside or at a minimum to see it beforehand, she didn't, he tried to remove her and ended up being removed himself. Both are out of line.

Sorry for the confusion.


I think trying to remove Toner for being a public author of that report is actually pretty out of line. (Note: the NYT article doesn't actually seem to provide evidence that Sam tried to get her removed from the board, though it sure does try to imply it real hard by reference to other things, like the email he sent criticizing her.)


Yes, agreed, that was uncalled for. But it is what probably a large number of CEOs would do.


I think it clearly represents the dichotomy of OpenAI’s structure. By its nature it created an adversarial position between the non-profit and for profit side with Sam and Toner representing those two poles.

I, personally, have a hard time picking sides here. On its face it seems that Ms Toner is more aligned with OpenAI’s stated mission.

However Sam seems much better suited to operating the company and being the “public face” of AI. One thing I’m pretty confident in is that the board isn’t capable of navigating OpenAI through the rough seas it’s currently in.

I think an argument could realistically be made to blow up the whole governance structure and reset with new principals across the board, that being said I don’t know who’d be a natural arbiter here.

At the end of the day the untenable spot Ms Toner is in is that the genie is out of the bottle which makes her position of allowing the company to self-destruct a bit tone deaf.


I'm seeing a somewhat larger stage, where the United States and other countries are in an undeclared arms race, and it just so happens that, from what we know, private entities (or an entity) in the United States believe they are close enough to achieving that goal that they are actively working on things like alignment rather than just futzing around with GPUs and other multiplication hardware.

AGI is either just around the corner or it will be 50 years or more and if it is just around the corner you'd hope that parties that have at least some semblance of balance would end up in charge of the thing. Because if it is possible I expect it to be done given the amount of resources that are being thrown at this. Assuming it can be done weaponization of this tech would change the balance of power in the world dramatically. Everybody seems to be worried about the wrong thing: whether or not the AGI will be friendly to us. That doesn't really matter, what matters is who controls it.

No single individual (Altman, Toner, Nadella, or anybody else) should be taking the responsibility about what happens onto themselves, if anything the board of OpenAI has shown that this isn't a matter for junior board members because the effects range far further than just OpenAI.


Practically all of the most relevant experts in this domain, leadership in OpenAI, think it's right around the corner.

> Assuming it can be done weaponization of this tech would change the balance of power in the world dramatically.

Yes it would, but it wouldn't be as bad as everyone dying.

> Everybody seems to be worried about the wrong thing: whether or not the AGI will be friendly to us. That doesn't really matter, what matters is who controls it.

No, "who controls it" is a problem best tackled after "will it kill everyone." You say "That doesn't really matter," but again, Sam Altman himself thinks "Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity."


> Practically all of the most relevant experts in this domain, leadership in OpenAI, think it's right around the corner.

Oh, ok. That makes it alright then. So, let's see those minutes of that meeting where this was decided with all of the pomp and gravitas required rather than that it was a petty act of revenge or to see who could oust who first from OpenAI. Because that's what it looks like to me based on what is now visible.

> Yes it would, but it wouldn't be as bad as everyone dying.

Not much is as bad as everyone dying. But for now that hasn't happened. It also seems a bit like a larger version of the 'think of the children' argument, you can justify any action with that reason.

> No, "who controls it" is a problem best tackled after "will it kill everyone." You say "That doesn't really matter," but again, Sam Altman himself thinks "Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity."

Then why the fuck is he trying to bring it into the world? Seriously, they should spend all of their talent on trying to sabotage those efforts then, infiltrate research groups and set up a massive campaign to divert resources away from the development of AGI. Instead they are trying as hard as they can to create the genie in the sure conviction that they can keep it bottled up.

It's delusional on so many fronts at once that it isn't even funny; their position is so horribly inconsistent that you have to wonder if they're all right in the head.

EA is becoming a massive red flag in my book.

They seem to miss the fact that every weapon that humanity has so far produced has been used, and that those that haven't been used hang over us like shadows, and have been doing that for the last 70 some years. Those are weapons whose use you will notice. AGI is a weapon whose use you will not notice until you realize you are living in a stable dictatorship. The chances of that happening are far larger than the chances of everybody dying.


What you’re missing is that Eliezer has been beating this drum a long time, while the rest of HN and the world is asleep at the wheel.

Sam Altman, Elon Musk, and the original founders of OpenAI believe AGI is an existential threat and are building it with the belief that it’s best they’re the ones who do it, rather than worse people, in an arms race. Eliezer is the one saying “you fucking fools, you’re speeding up the arms race and have now injected a shit ton of $ from all VC into accomplishing the naive capabilities with no alignment.”

People don't even realize that Elon Musk founded Neuralink with the belief that AGI is such an existential threat that we're better off becoming the AGI cyborgs than a separate, inferior intelligence. But most of the people who think they're so smart and understand the AGI x-risk landscape here, even Elon fans, don't know that.


Eliezer - we're all gonna die - Y has his own problems.

People really watch too many movies. The real risk isn't AGI killing us all, the real risk is that plenty of people will think they are smart enough to create it and then to contain it whilst the ruthless faction of humanity uses it to set up shop in a way that they can never ever be dislodged again. Think global Mafia or something to that effect, or a world divided into three chunks by three major power blocs each with their own AI core. That's a much more likely outcome and one that on account of an incompetent board has just become a little bit more likely.


>At the end of the day the untenable spot Ms Toner is in is that the genie is out of the bottle which makes her position of allowing the company to self-destruct a bit tone deaf.

Tone-deaf basically means "unpopular", doesn't it?

I'm old enough to remember when doing the right thing, even when it's unpopular, was considered a virtue.


Off topic - this is the first time I have seen the word milquetoast (pronounced "milk toast"?). What an interesting word!

NORTH AMERICAN

noun: a timid or feeble person. "Jennings plays him as something of a milquetoast"

adjective: feeble, insipid, or bland. "a soppy, milquetoast composer"


Her mandate is specifically to do what is good for society not to do what is good for OpenAI.


And how did this improve society?

I only see negatives. Other entities with less of a brake on their ethics are gaining ground. Microsoft of all parties is strengthening their position and has full access to the core tech.


>Other entities with less of a brake on their ethics are gaining ground.

Did OpenAI actually have a meaningful brake though? Like, if all the employees apparently think that the success of the company is more important than the charter, can we be sure that OpenAI actually had a meaningful brake?


Good question, probably not, in hindsight, but that was always my view anyway only for different reasons.


She published AI safety research as a member of a board whose mandate it is to act as a check and balance on the operation of an AI company. You are saying that she should hide information from the public out of loyalty to the company.

edit: Or that the board can't actually make a difference because whatever OpenAI doesn't do someone else will. But if people actually thought that were true they wouldn't have set up the board and charter.


No, I am not saying that. I am saying that Altman has a point (he's trying to deal with the FTC and it doesn't help if at the same time a board member releases a paper critical of the company and actively comparing it to another), while at the same time she has a point, which is that it is very well possible (and even probable) that OpenAI's safety could be improved on. Now, what would serve the charter better: to use that stick to get OpenAI to improve, or to blow up OpenAI under the assumption that other leadership is going to be at least as ethical as they are, with a major chance that in the subsequent fall-out Microsoft ends up holding even more of the cards?

It's amateur behavior. I'm sympathetic to her goals, less impressed by the execution.


> he's trying to deal with the FTC and it doesn't help if at the same time a board member releases a paper critical of the company and actively comparing it to another

Again, her mandate is not to help OpenAI deal with the FTC; it's to prevent the company from building unsafe AI, one reasonable aspect of which might be to compare the methodologies of different companies.

You can justify pretty much anything with ends-justify-the-means logic, and I have a hard time believing that the people who set up the charter would, a priori, have said this is in line with its principles: suppressing research that compares the company's safety approach to a competitor's, so that the company doesn't look bad and lose out to that competitor, justified only by the company's baseless insistence that it would be better for safety. That is just trying to game the charter in order to circumvent it, and is a textbook case of what the board was appointed to prevent.


> Again, her mandate is not to help OpenAI deal with the FCC it's to prevent the company from building unsafe AI, one reasonable aspect of which might be to compare the methodologies of different companies.

This isn't a theoretical exercise where we get to do it all over again next week to see if we can do better, this is for keeps.

The point could have been made much more forcefully by not releasing the report but holding it over Altman's head to get him to play ball.

> You can justify pretty much anything with ends-justify-the-means logic

Indeed. That's my point: the ends justify the means. This isn't some kind of silly game; this is to all intents and purposes an arms race, and those who don't understand that should really wake up: whoever gets this thing first is going to change the face of the world. It won't be the AGI that is your enemy; it is whoever controls the AGI that could well be your enemy. Think Manhattan project, not penicillin.

> I have a hard time believing that the people who set up the charter would, a priori, have said that suppressing research comparing the safety approach of the company to a competitor in order to not make the company look bad so that the competitor wins because the company insists, without any basis, that they would be better for safety is in line with the principles of the charter. That is just trying to game the charter in order to circumvent it and is a textbook case of what the board was appointed to prevent.

That charter is and always was a fig leaf. I am probably too old and cynical to believe that it was sincere; it was, in my opinion, nothing but a way to keep regulators at bay. Just like I never bought SBF's 'Altruism' nonsense.

The road to hell is paved with the best of intentions comes to mind.


> The point could have been made much more forceful by not releasing the report but holding it over Altman's head to get him to play ball.

Given how Altman has responded to things throughout his career and in this, I fail to see how doing this would result in anything other than the same outcome: Altman won't be moved by that, but consider it extortion and move for her removal from the board, regardless. In the end, he wants criticism or calls for caution stifled.


Let's be clear, he's mostly been dealing with the government with the goal being largely to enable regulatory capture, and pull the ladder up behind OpenAI with respect to regulation.

That effort isn't critical to OpenAI other than to try to create a monopoly.


Let's say you have a time machine, and 20 years later OpenAI has destroyed humanity because of how fast they were pushing AI advancement.

Would the destruction of OpenAI in 2023 be seen as bad or good with that hindsight?

It seems bad now but if you believe the board was operating with that future in mind (whether or not you agree with that future) it's completely reasonable imo.


I don't have a time machine.


This is an argument that can be (and is) used to justify anything.


So he criticized her and threatened her board position, and then she orchestrated a coup to oust him? Masterful. Moves and countermoves. You have to applaud her strategic acumen and execution capability, perhaps surprising given her extensive background in policy/academia. Though maybe it's as Thiel says (about academia: "The battles are so fierce because the stakes are so small") and that's where she developed her Machiavellian skills?

Of course, it could also be that whatever interest groups she represents could not bear to lose a seat.

Whether initiated by her or her backers (or other board forces), I can't see any of the board stepping down if these are the kind of palace intrigues that have been occurring. They are all clearly so desperate for power they will cling to the positions on this rocketship for dear life. Even if it means blowing up the rocketship so they can keep their seat.

Microsoft can't spend good will erasing the entire board and replacing it, even though it's nearly the major shareholder, because it values the optics around its relationship to AI too much right now.

A strong, effective leader in the first place would have prevented this kind of situation. I think the board should be reset and replaced with more level-headed, less ideological, more experienced veterans... though picking a good board is no easy task.


> Microsoft can't spend good will erasing the entire board and replacing it,

because they don't have the power to, as they do not have any stake in the governing non-profit.


That’s a good point, but surely the significant shareholding, partnership and investment gives them power and influence.


Given the news today, apparently so.


Yep it did play out like that. Full board reset! Well done MS, but still I don't think it's enough. Oh well :) haha


Not that I think there are many examples of technical people making great board members, but we've entered an era of "if I don't get my way on the inside, I'll just tweet about it and damn the wider consequences."

Management and stockholders beware.


The non-profit OpenAI has no stockholders.


While this would be perfectly reasonable if OpenAI were a for-profit, it's ostensibly a non-profit. The entire reason they wanted her on the board in the first place was for her expert academic opinion on AI safety. If they see that as a liability, why did they pretend to care about those concerns in the first place?

That said, if she objects to OpenAI's practices, the common-sense thing to do is to resign from the board in protest, not take actions that lead to the whole operation being burned to the ground.


This is not just any other company, though; it's a non-profit with a charter to make AI that benefits all of humanity.

Helen believed she was doing her job according to the non-profit charter. Obviously this hurts the for-profit side of things, but that is not her mandate. That is the reason OpenAI is structured the way it is, with the intention of preventing capitalist forces from swaying them away from the non-profit charter (the independent directors, no equity stakes, etc.). In hindsight it didn't work, but that was the intention.

The board has all my respect for standing up to the capitalists of Altman, the VCs, Microsoft. Big feathers to ruffle - even though the execution was misjudged, turns out most of its employees are pretty capitalistic too


> The board has all my respect for standing up to the capitalists of Altman, the VCs, Microsoft. Big feathers to ruffle - even though the execution was misjudged, turns out most of its employees are pretty capitalistic too

Exactly. This is a battle between altruistic principles and some of the most heavyweight greedy money in the world. The board messed up the execution, but so did OpenAI leadership when they offered million dollar pay packages to people in a non-profit that is supposed to be guided by selfless principles.


One man's altruist is another man's fanatic. I, for one, would prefer an evil bandit over an evil fanatic, because at least a bandit sleeps once in a while, as that C.S. Lewis quote goes.


I don't see why we have to paint these people with vagaries and reduction to analogy

For the board, their job is to uphold the mandate of the non-profit. The org is structured to prevent "bandits" from influencing its goals. Hence, I cannot fault non-profit directors for rejecting bandits; that is the reason they are there.

It is like criticising a charity's directors for not bowing to the pressure of large donors to change their focus or mandate. Do we want to live in a world where the rich entrench control and enrich themselves and allies, and we just justify it as bandits being bandits? And anyone who stands up against them gets labelled a fanatic or pejorative altruist?


And the bandit likely is honest about their motives.


"I'm much more interested in how Helen managed to get on this board at all."

Indeed. This is far more interesting. How the hell did Helen and Tasha get on, and stay on, the board.


Helen Toner, through her association with Open Philanthropy, donated $30 million to OpenAI early on. That's how she got on the board.

https://loeber.substack.com/p/a-timeline-of-the-openai-board


This makes it sound like it was her money, which is not the case. She worked for an organization that donated $30M and they put her on the board.


How did she stay on the board?


I strongly suspect this whole thing is caused by an overinflated ego and a desire to feel like she is the main character and the chief “resistance” saving the world. The EA philosophy is truly poisonous. It leads people to betray those close to them in honor of abstract ideals that they are most likely wrong about anyway. Such people should be avoided like the plague if you’re building a team of any kind.



