
I don't see it as inherently a problem; AI can (theoretically) be a lot more fair in dealing with claims, and respond a lot sooner.

That said I suspect the founder is seriously overestimating the number of highly intelligent, competent people he can hire, and underestimating how much bureaucratic nonsense comes with insurance, but that's a problem he'll run into later down the road. Sometimes you have to hire three people with mediocre salaries because the sort of highly motivated competent person you want can't be found for the role.



> AI can (theoretically) be a lot more fair in dealing with claims

Respectfully, no it can't. From a Western perspective, specifically American, and from an average middle-class person's perspective, specifically American, it only appears to be fair.

However, LLMs are a codification of internet and written content, largely by English speakers for English speakers. There are <400m people in the US and ~8b in the world. The bias tilt is insane. At the margins, weird things happen that you would be otherwise oblivious to unless you yourself come from the margins.


I don't even think most Americans (except those trying to do the automating) would consider it to be fair.

AI is bias automation, and reflects the data it's trained on. The vast majority of training data is biased, even against different slices of Americans. The resulting AI will be biased.


On the other hand, once a claim is mishandled by AI, one can use the normal process to discover the juiced prompt and all the paper trail that comes with implementing it.


Yeah, good luck getting the company to cough up what they call "source code" during discovery.


> LLMs are a codification of internet and written content

Only true for pre-trained foundational models without any domain-specific augmentations. A good AI tool in this space would be fine-tuned or have other mechanisms that overshadow the pre-training from internet content.


Nope. Claims adjudication LLMs aren't trained on random Internet content. If you're going to criticize then at least get your basic facts right.


Why wouldn't they be? LLMs need a lot of content for training, and there are multiple orders of magnitude less to train on if you limited it to insurance-specific content, so you'd probably get a really crappy LLM. And training from scratch is really expensive anyway.

At best they'll be using fine-tuned enterprise OpenAI / Anthropic models; more likely a regular model with a custom prompt.


They use actual production claims data and policy documents for training. Not random garbage from the Internet.


That’s not how LLMs work.


You're incredibly naive if you think AI will be used to pay out claims more fairly instead of being used as a deny-bot.


You can use people to deny as well. Or non-AI automation; just some business rules in a normal system.
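To illustrate what "just some business rules in a normal system" could look like, here is a minimal sketch in Python. The function name, fields, and conditions are all hypothetical, not any real insurer's logic:

```python
# Hypothetical rules-based claim screen -- illustrative only,
# not any real insurer's actual rules.
def screen_claim(claim: dict) -> tuple[bool, str]:
    """Return (approved, reason). Plain business rules, no AI involved."""
    if not claim.get("death_certificate"):
        return False, "missing death certificate"
    if claim.get("cause") == "suicide" and claim.get("policy_age_years", 0) < 2:
        return False, "suicide within exclusion period"
    if claim.get("policy_lapsed"):
        return False, "policy lapsed for non-payment"
    return True, "approved"

print(screen_claim({"death_certificate": True, "cause": "natural",
                    "policy_age_years": 5}))
# -> (True, 'approved')
```

The point is that a deny-bot doesn't need AI at all; a handful of if-statements in an ordinary system can deny claims just as effectively, and its rules are at least legible.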


And whichever method you use, you’re still accountable to regulators, courts, the letter of your contract, and the consequences of your reputation in a competitive market.


United Healthcare was in the news last year because they had an AI claims "approval" process with a 90% error rate, all in favor of the insurance company.

It's easy to describe a business process with written down rules, and those are easy to find in legal discovery. It's much easier to obfuscate with an AI model, because "nobody knows what it's actually doing - it's AI!".


It was not a 90% error rate (or at least that’s not a claim I read). It was that 90% of appeals of those decisions were decided (at least partially) in favor of the appeal. That could be 1000 decisions, 10 appeals, and 9 reversals.

I am personally 7 for 8 in lifetime wins in my city's parking ticket appeals process. That doesn't mean that I think that 7 out of 8 tickets my city issues are incorrect.
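To make the arithmetic concrete, here is the toy calculation using the hypothetical numbers above (1000 decisions, 10 appeals, 9 reversals):

```python
# Toy numbers from the hypothetical above -- not actual figures
# from the UnitedHealthcare story.
decisions = 1000
appeals = 10
reversals = 9

appeal_reversal_rate = reversals / appeals   # 0.9 -> the "90%" headline number
known_error_rate = reversals / decisions     # 0.009 -> under 1% of all decisions

print(f"{appeal_reversal_rate:.0%} of appeals reversed, "
      f"but only {known_error_rate:.1%} of all decisions known wrong")
# -> 90% of appeals reversed, but only 0.9% of all decisions known wrong
```

Appeals are a self-selected sample of the decisions most likely to be wrong, so the reversal rate among appeals tells you very little about the overall error rate.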


> It's much easier to obfuscate with an AI model, because "nobody knows what it's actually doing - it's AI!".

Do you have actual knowledge of this? If not, the most obvious counterpoint is that the AI will need to give the reason or reasons for denial and record them for audit, just like a human or a rules-based system.
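As a sketch of that counterpoint: whatever makes the decision, its output can be a structured record with explicit reasons, appended to an audit log. The field names here are hypothetical, purely for illustration:

```python
import json
from datetime import datetime, timezone

# Hypothetical audit record for a claim decision -- field names are illustrative.
record = {
    "claim_id": "C-12345",
    "decision": "denied",
    "reasons": ["policy lapsed for non-payment"],
    "decided_by": "rules_engine_v2",  # could equally be "human_adjuster" or "llm_v1"
    "decided_at": datetime.now(timezone.utc).isoformat(),
}
audit_line = json.dumps(record)  # one line appended to an append-only audit log
print(audit_line)
```

If such records exist, discovery targets the log and the stated reasons, regardless of whether the decider was a model, a rules engine, or a person.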


Why? There are many tasks where AI beats humans. Humans are also prone to bias, fatigue, etc.

Although I would still agree that there would need to be a mechanism for escalation to a human.


Because insurance companies aren't in the business of giving you money. They're in the business of trying not to.


how does AI change this part?


This is life insurance specifically. It's not very hard to prove someone is dead, is there really much room for argument over paying out the policy benefit?


If the plan is to just pay out after confirming the person is dead, what’s the AI doing? It could be replaced by a “upload your death certificate here” box.


Most life insurance policies have exceptions. For example, they won't pay out if you commit suicide. So the circumstances of the death must be assessed against the insurance policy before payout.


It is kind of weird. Why does a life insurer have 100,000 employees? I'm really only familiar with term life. All the "customer service" is pre-purchase. Once you buy it, you forget it other than making the annual payment. There's nothing to manage, no real customer service required until and unless you die.

I suppose whole life where there is a cash value and investments being managed might have a more ongoing service need, but I'm not familiar with that.


I think that must be somebody’s estimate or maybe just guess as to the total number of employees in the industry.

It seems a bit high to me, but I don't know anything about the industry. FWIW, around 170k people die per day.

https://news.ycombinator.com/item?id=43918053

This doesn’t establish any sort of mathematical bounds, but it gives an idea of the size of the problem. I suspect 100k employees is an over-estimate just because a lot of people are uninsured…


I work in the industry at an insurtech startup; we are a life insurance carrier (wysh.com). Our flagship product is a B2B micro life insurance benefit, but we built that on top of a term life carrier and also sell D2C term life.

Allianz has ~150k employees but certainly they don't all work on the term life business in the USA, they do all kinds of other insurance stuff all over the world and have hundreds of different products.

For term life specifically, there still are some pretty significant back office teams that a customer probably never interacts with directly, though. A few that come to mind:

- underwriters: you won't be able to make a decision for all of your applicants based on the info they provide and the info you can pull from automated sources, so some number of humans are on the phone with your applicants asking clarifying questions, doing additional research, and making risk decisions. They're also routinely doing retrospective analysis that looks back on claims paid out to make sure the claims are reasonable and there's not some sort of gap in the underwriting approach that's leaving unknown risk on the table, plus audits of automated underwriting decisions to make sure the rules engines are correctly categorizing risks

- actuaries: every company has varying risk tolerance for both the policies they issue and the cash they hold/invest. These people are advising on how to take risks and working with underwriters and finance people to try and figure out the financial impact of various underwriting decisions: can a product remain viable if it is purchased by a heavier balance of smokers vs nonsmokers, etc

- accountants and finance: it's a capital-intensive business that requires large cash reserves and a sane investment strategy for that cash, often subject to tests by regulators or industry associations and all sorts of lengthy audits

- compliance: in the US, life insurance is individually regulated by each state. Many states join the ICC Compact and agree to all follow the same rules and have a single set of regulatory filings, but you still have plenty of other states to do filings with, analyze changing requirements from, maintain relationships with regulators, respond to regulatory complaints or investigations, etc

- industry reporting: most insurance carriers participate in information-sharing programs like the MIB (Medical Information Bureau) and these memberships come with various reporting and code-back obligations. The goal is to prevent you from getting declined at one life insurer because you say you have some sort of uninsurable illness and then turning around and lying about not having that illness to another life insurer the next day. These sort of conflicting answers get flagged for manual review, someone will need to talk to the applicant and figure out why they gave conflicting info to multiple insurers and what the truth really is.

- claims and fraud investigations: many, many people lie to try and get insurance they aren't qualified for, or to take out insurance on someone they aren't supposed to. Claims investigations start by asking "is the insured really dead" but then try to answer questions like "did the insured know this policy was taken out on them", "were the responses the insured gave during underwriting truthful", etc. These investigations are extremely time consuming and often involve combing public records, calling doctors, interviewing family, and more. You'd probably be shocked how common it is for former spouses to try and take out insurance policies without the other knowing during divorces. Some level of this investigation is happening in the first couple of years a policy is in force, too, as insurers can rescind the policy and refund the premiums if they determine it was obtained under false pretenses

- reinsurance: even the biggest insurers typically pool and share some amount of risk so that a bad claims year can't take down an entire carrier. reinsurance treaties are complex things to negotiate and maintain, and have lots of reporting obligations and collaboration between the reinsurer and the actuaries to validate the risks are what everyone thinks they are

The customer-facing part of a term life company is really just the tip of the iceberg. Small companies are certainly better at doing this with tech than bigger incumbents (that's a big part of the reason we exist at Wysh), and a narrow product focus really helps, but there's still a pretty significant level of human expertise involved to keep it all running.


> You'd probably be shocked how common it is for former spouses to try and take out insurance policies without the other knowing during divorces.

If they were receiving spousal support (“alimony”) or child support, this seems unsurprising and sensible.


The important detail there is doing it without the knowledge of the (former) spouse.

You need both an insurable interest and consent of the insured in order to buy an insurance policy on someone else’s life.

Couples separating and holding policies on each other is pretty common and carriers have some specific rules to follow to make sure there’s appropriate mutual consent for policy changes etc


Of course! One can die by suicide or as a result of drug abuse, preexisting conditions, and all that. Otherwise somebody discovering or suspecting they have an incurable disease would be able to buy a policy after the fact.



