Normally the word "whistleblower" means someone who revealed previously-unknown facts about an organization. In this case he's a former employee who had an interview where he criticized OpenAI, but the facts that he was in possession of were not only widely known at the time but were the subject of an ongoing lawsuit that had launched months prior.
As much as I want to give this a charitable reading, the only explanation I can think of for using the word whistleblower here is to imply that there's something shady about the death.
> Normally the word "whistleblower" means someone who revealed previously-unknown facts
Not to be pedantic, but this is actually incorrect, both under federal and California law. Case law is actually very explicit on the point that the information does NOT need to be previously unknown to qualify for whistleblower protection.
However, disclosing information to the media is not typically protected.
I think their post boils down to: "This title implies someone would have a strong reason to murder them, but that isn't true."
We can evaluate that argument without caring too much about whether the writer intended it, or whether some other circumstances might have forced their word-choice.
Right, but as you note the legal definition doesn't apply here anyway, we're clearly using the colloquial definition of whistleblower. And that definition comes with the implication that powerful people would want a particular person dead.
In this case I see very little reason to believe that would be the case. No one has hinted that this employee has more damning information than was already public knowledge, and the lawsuit that he was going to testify in is one in which the important facts are not in dispute. The question doesn't come down to what OpenAI did (they trained on copyrighted data) but what the law says about it (is training on copyrighted data fair use?).
Well, I still disagree. In reality companies still retaliate against whistleblowers even when the information is already out there. (Hence the need for Congress, federal courts and the California Supreme Court to clarify that whistleblower activity is still protected even if the information is already known.)
I, of course, am not proposing that OpenAI assassinated this person. Just pointing out that disclosures of known information can and do motivate retaliation, and are considered whistleblowing.
The thread looks very different than it did when I wrote any of the above—at the time it was entirely composed of people casually asserting that this was most likely an assassination. I wrote this with the intent of shutting down that speculation by pointing out that we have no reason to believe that this person had enough information for it to be worth the risk of killing him.
Since I wrote this the tone of the thread shifted and others took up the torch to focus on the tragedy. That's wonderful, but someone had to take the first step to stem the ignorant assassination takes.
> Normally the word "whistleblower" means someone who revealed previously-unknown facts about an organization.
A whistleblower could also be someone in the process of doing so, i.e. they have a claim about the organization, as well as a promise to give detailed facts and evidence later in a courtroom.
I think that's the more commonsense understanding of what whistleblowers are and what they do. Your remark hinges on a narrow definition.
No. Anytime someone potentially possesses information that is damning to a company and that person is killed… the probability of such an event being a random coincidence is quite low. It is so low that it is extremely reasonable to consider the potential for an actual assassination while not precluding that a coincidence is a possibility.
> Anytime someone potentially possesses information that is damning to a company and that person is killed… the probability of such an event being a random coincidence is quite low.
You're running into the birthday paradox here. The probability of a specific witness dying before they can testify in a lawsuit is low. The probability of any one of dozens of people involved in a lawsuit dying before it's resolved is actually rather high.
If we're going to control for life situations, you have to calculate the suicide rate for people who are actively involved in a high stakes lawsuit against a former employer, which is going to be much higher than average. Then factor in non-suicide death rates as well. Then consider that there are apparently at least 12 like him in this lawsuit, and several other lawsuits pending.
I'm not going to pretend to know what the exact odds are, but it's going to end up way higher than 1/10k.
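The birthday-paradox point can be made concrete with a back-of-the-envelope sketch. The death rate, witness count, and time window below are illustrative assumptions, not figures from the case:

```python
# Probability that at least one of n people involved in a lawsuit
# dies (from any cause) before it resolves.
# P(at least one death) = 1 - P(everyone survives)

def p_any_death(n_people: int, annual_death_rate: float, years: float) -> float:
    p_survive_all = (1 - annual_death_rate) ** (n_people * years)
    return 1 - p_survive_all

# Illustrative assumptions: 12 named witnesses, a 0.2% all-cause
# annual death rate, and a lawsuit that runs for two years.
print(round(p_any_death(12, 0.002, 2), 3))  # 0.047, i.e. roughly 1 in 20
```

Rare for any one named person, but far from negligible across everyone the filings touch.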
Or you could just look at the facts of the case (currently: no foul play suspected). Are the cops in on it? The morgue? The local city? How high does this go?
This isn't something which happened in isolation. This isn't "someone died". It's "someone died, and dozens of people are going to sign off that this obviously-not-a-suicide was definitely a suicide".
Like, is that possible? Can you fake a suicide and leave no evidence you did? If you can then how many suicides aren't actually suicides but homicides? How would we know?
You're acting like it's a binary choice of probabilities but it isn't.
Why did you have to make it go in the direction of conspiracy theory? Of course not.
An assassination that looks like a suicide but isn’t is extremely possible. You don't have enough details from the article to make a call on this.
> You're acting like it's a binary choice of probabilities but it isn't.
It is a binary choice because that's typically how the question is formulated in the scientific method. Was it suicide or was it not a suicide? Binary. Once that question is analyzed you can dig deeper: was it an assassination or was it not? Essentially two binary questions are needed to cover every possibility here and to encompass both suicide and assassination.
What a useless answer. I considered whether your answer was influenced by mental deficiency and bias, and I considered one possibility to be more likely than the other.
I've listened to many comments here on some of these, saying it must be assassination because the person insisted, "If I'm ever found dead, it's not suicide!" This is sometimes despite extensive mental health history.
Entirely possible.
But in my career as a paramedic, I've (sadly) lost count of the number of mental health patients who have said, "Yeah, that was just a glitch, I'm not suicidal, not now/nor then." ... and gone on to commit or attempt suicide in extremely short order.
Compute the probability; don't make claims without making a solid estimate.
No, it's not low. No need to put conspiracies before evidence, and certainly not by making claims you've done no diligence on.
And the article provides statements by professionals who routinely investigate homicides and suicides that they have no reason to believe anything other than suicide.
Who the hell can compute a number from this? All probabilities on this case are made with a gut.
Why don’t you tell me the probability instead of demanding one from me? You’re the one making a claim that professional judgment makes the probability so solid that it’s basically a suicide. So tell me about your computation.
What gets me is the level of stupid you have to be to not even consider the other side. A person literally tells you he's not going to commit suicide, and that if he does it's an assassination; then he dies by apparent suicide, and your first instinct is to only trust what the professionals say. Well… I can't help you.
Anyone who puts thought into the problem instead of jumping to conspiracies.
Men in that age group commit suicide at rate X. Company Y has Z employees. Over time period T there is a K% chance of a suicide. Among all R companies in which a person like you finds conspiracies at every turn, the odds of finding such a death are S%. Not a single value in this chain is "made with a gut." All are extremely defensible and determined scientifically, and if you really care, you can obtain them all with error bounds, 95% confidence intervals, and the works.
And then you do basic math, and voila: your initial claim is nonsense.
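That chain can be written out directly. Every number below is a placeholder standing in for a real, citable statistic, chosen only to show the shape of the calculation:

```python
# Placeholder values for X, Z, T, K, R, S from the argument above.
suicide_rate = 0.00025   # X: assumed annual suicide rate for men in the age group
employees = 2_000        # Z: assumed headcount at company Y
years = 2.0              # T: time period under consideration

# K: chance of at least one suicide at company Y over period T
k = 1 - (1 - suicide_rate) ** (employees * years)

# S: chance of at least one such death across R companies under the
# same conspiratorial scrutiny
companies = 50           # R
s = 1 - (1 - k) ** companies

print(f"K = {k:.2f}, S = {s:.4f}")
```

With even modest placeholder values, a death somewhere in the scrutinized pool comes out close to certain; the surprising outcome would be the absence of one.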
Or simply read about the birthday paradox, wonder if it applies, realize it does, stop jumping off the wagon.
> why don’t you…..
You’re the one pulling conspiracies out of thin air despite no evidence, and you made the claim. The onus is to defend your claim when asked, especially now that you’ve been given evidence for a solid argument against it. One not pulled out of thin air.
> what gets me…
No, I see the other side. And every time someone ignores the presented evidence, ignores basic statistics, and ignores a good methodology when presented with one, I keep count: out of thousands of such conspiracy theories, I have seen zero come true.
And I'll 100% trust professionals over someone so innumerate as to be unable to do simple math, someone who gets angry when it's suggested that super sneaky death wizards killing a minor player, while ignoring dozens of more important players, makes less sense than simple statistical likelihood.
The latter is rarely correct. I'll even amend that to never correct.
Bro, where are you going to find statistics on the rate of actual suicides among people who claim that if they die it's not a suicide? There are so many situations where there's just no data, or experimental evidence is impossible to ascertain, and you have to use your gut. Where's the experimental evidence that the ground will still exist when you jump off the bed every morning? Use your gut. I'm tired of this data-driven nonsense, as if the only way to make any decision in this universe is to use numerical data. If you had basic statistical knowledge you'd know statistics is useless for this situation.
Complete bs. Use your common sense.
> You’re the one pulling conspiracies out of thin air despite no evidence, and you made the claim.
What claim? All I said is to consider both possibilities, because given the situation both are plausible. You're the one making the claim that a guy who told everyone that if he died it wasn't a suicide totally, completely, and utterly committed suicide. And you make this claim based off experimental evidence that is way too general, collected for only a general situation. You're the type of genius who, if your friend died, would just assume it was a car accident because that's the most likely way to die. No need to investigate anything. Even if your friend had said "if I die in the next couple of days, I was murdered," you'd insist that it was a car accident. Look at you and your data-driven genius.
You claimed: "Anytime someone potentially possesses information that is damning to a company and that person is killed… the probability of such an event being a random coincidence is quite low."
You're unable to even estimate that probability. You're unable to try, even though it's not hard to get good estimates, so there is zero chance you understand how likely or unlikely the event actually is.
Every single suicide victim "potentially possesses information…", so the probability of that condition is not quite low. It's 100%. Do you know what "potentially" means? It's complete conspiratorial nonsense.
Since you're unable to understand the math: there are around 50,000 suicides a year in the US. How many murders a year do you think are committed by a company covering something up? Less than a dozen (and that's likely way too high)? That, coupled with your hand-wavy "potentially," makes the odds of a suicide orders of magnitude higher than those of a murder, especially since, if the company wanted to murder people, there are plenty who would be higher on the hit list, yet they are all not dead. Facts > conspiracy.
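Using the thread's own figures, the prior odds can be made explicit. Both numbers are the commenter's rough estimates, not verified statistics:

```python
# Rough prior odds from the figures cited above (illustrative, not verified).
us_suicides_per_year = 50_000            # cited approximate US figure
corporate_coverup_murders_per_year = 12  # the "less than a dozen" upper bound

odds = us_suicides_per_year / corporate_coverup_murders_per_year
# Prior odds of suicide vs. corporate-coverup murder, before any
# case-specific evidence is considered.
print(f"{odds:.0f}:1")  # 4167:1
```

A prior that lopsided is what the phrase "orders of magnitude" cashes out to; case-specific evidence would have to be very strong to overturn it.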
Aww, screw it. It's not even worth trying to walk you through how to compute any odds when you're dead set on nonsense....
Let me spell it out for you. The likelihood, when someone dies, that it's from murder is less than 1%.
From your logic, that means because the likelihood is less than 1%, murder should never be investigated.
Police investigations, forensic science, DNA matching, murder trials, and detectives are all rendered redundant by statistics.
You can compute this too. And you can use your incredible logic here: Facts > murder.
You need to see why that situation above doesn't make sense. Once you do, you'll realize that the logic which makes that situation nonsensical is the exact same logic you're using to "compute" your new conclusion.
You need to realize there ARE additional facts here that render quantitative analysis impossible, and hand waving is the ONLY way forward. That is, unless you want to actually go out there and gather the data.
You know logic, deduction, and induction are alternative forms of analysis that can be done outside of science, right? You should employ the former to know when the latter is impossible.
> but the facts that he was in possession of were not only widely known at the time but were the subject of an ongoing lawsuit that had launched months prior.
That is an exceedingly charitable read of these lawsuits.
Everyone knows LLMs are copyright infringement machines. Their architecture has no distinction between facts and expressions. For an LLM to be capable of learning and repeating facts, it must also be able to learn and repeat expressions. That is copyright infringement in action. And because these systems are used to directly replace the market for the human-authored works they were trained on, it is also copyright infringement in spirit. There is no defending against the claim of copyright infringement on technical details. (Cf. Google Books, which was ruled fair use because of its strict delineation between facts about books and the expressions of their contents: it provides the former but not a substitute for the latter.)
The legal defense AI companies put up is entirely predicated on "Well you can't prove that we did a copyright infringement on these specific works of yours!".
Which is nonsense; getting LLMs to regurgitate training data is easy. As easy as it is for them to output facts. Or rather, it was. AI companies maintain this claim of "you can't prove it" by aggressively filtering out any instances of problematic content whenever a claim surfaces. If you didn't collect extensive data before going public, the AI company quickly adds your works to its copyright filter and proclaims in court that their LLMs do not "copy".
A copyright filter that scans all output for verbatim reproductions of training data sounds like a reasonable compromise solution, but it isn't. LLMs are paraphrasing machines, any such copyright filter will simply not work because the token sequence 2nd-most-probable to a copyrighted expression is a simple paraphrase of that copyrighted expression. Now, consider: LLMs treat facts and expressions as the same. Filtering impedes the LLM's ability to use and process facts. Strict and extensive filtering will lobotomize the system.
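To illustrate the limitation described above, here is a toy verbatim-match filter (a hypothetical sketch, not any real company's system): an exact-substring check catches a literal copy but passes a near-identical paraphrase untouched.

```python
# Toy verbatim-match copyright filter (hypothetical illustration).
# It flags output only if it contains a protected string verbatim.

def verbatim_filter(output: str, protected_texts: list[str], min_len: int = 20) -> bool:
    return any(p in output for p in protected_texts if len(p) >= min_len)

protected = ["the quick brown fox jumps over the lazy dog"]

# A literal reproduction is caught:
print(verbatim_filter("...the quick brown fox jumps over the lazy dog...", protected))  # True

# A trivial paraphrase of the same expression slips through:
print(verbatim_filter("a fast brown fox leaps over a lazy dog", protected))  # False
```

A filter strong enough to catch paraphrases would have to match meaning, not strings, and that collides with the fact-versus-expression problem just described.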
This leaves AI companies in a sensitive legal position. They are not playing fair in the courts. They are outright lying in the media. The wrong employees being called to testify would be ruinous: "We built an extensive system to obstruct discovery; here's the exact list of copyright infringements we hid." Even just knowing which coworkers worked on which systems (and should be called to testify) is dangerous information.
Sure. The information was public. But OpenAI denies it and gaslights extensively. They act like it's still private information, and to the courts, it currently still is.
And to clarify: No I'm not saying murder or any other foul play was involved here. Murder isn't the way companies silence their dangerous whistleblowers anyway. You don't need to hire a hitman when you can simply run someone out of town and harass them to the point of suicide with none of the legal culpability. Did that happen here? Who knows, phone & chat logs will show. Friends and family will almost certainly have known and would speak up if that is the case.
If we take the logic of your final paragraph to its ultimate conclusion, it seems companies can avoid having friends and family speak up about the harassment if they just hire a hitman.
Isn't it the other way around, since OpenAI is training their models on news company content? OpenAI has behaved extremely unethically the entire time it has existed. It's very likely there is foul play here; it fits the pattern.
I wasn't even talking about the copyright issues. I was talking about things like this and Sam Altman's sister's accusations. Things way beyond what any reasonable person would consider moral.
You assume he revealed everything he knew. He was most likely under NDA, and the ongoing lawsuit cited him as a source, which presumably he hadn't yet testified for, and now he never will be able to. His (most likely ruled suicide, inb4) death should also give pause to the other 11 on that list:
> He was among at least 12 people — many of them past or present OpenAI employees — the newspaper had named in court filings as having material helpful to their case, ahead of depositions.
Being one of 12+ witnesses in a lawsuit where the facts are hardly in dispute is not the same as being a whistleblower. The key questions in this lawsuit are not and never were going to come down to insider information—OpenAI does not dispute that they trained on copyrighted material, they dispute that it was illegal for them to do so.
It seems like it would matter if they internally believed/discussed it being illegal for them to do so, but then did it anyway and publicly said they felt they were in the clear.
So the lawyers who said they had "possession of information that would be helpful to their case" were misleading? Your whole rationalization seems very biased. He raised public awareness (including details) of some wrongdoing he perceived at the company and was most likely going to testify about those wrongdoings; that qualifies as a whistleblower in my book.
> "possession of information that would be helpful to their case" were misleading?
I didn't say that, but helpful comes on a very large spectrum, and lawyers have other words for people who have information that is crucial to their case.
> that qualifies as a whistleblower in my book.
I'm not trying to downplay his contribution, I'm questioning the integrity of the title of TFA. You have only to skim this comment section to see how many people have jumped to the conclusion that Sam Altman must have wanted this guy dead.