Insurance companies have discovered devious new ways to rip you off
Insurance is one of those things that we all need and pay a ton for the privilege of having — but also never think about, hoping we never have to use it, and knowing that if we do, it will likely be a hellish experience. Unfortunately, as premiums for everything from home insurance to car insurance skyrocket, more of us are being forced to dwell on the opaque and convoluted insurance industry.
In the past year, real-estate developers have reported rate increases of up to 50%, and auto insurance has spiked 17%. And the home-insurance crisis is so bad that more people are forgoing coverage each year.
For most people, private insurance is their primary source of security against the vast array of (un)known risks that threaten to upturn their lives. Even if you live outside the US and don't rely on private health insurers, you still need to purchase private insurance for your car, home, income, and life. Beyond that, there are extensive types of commercial insurance that underwrite finance, business, logistics, infrastructure, government, and everything else in society.
But the industry we depend upon as a bulwark against an uncertain world is also increasingly designed to screw us over. Instead of using new technologies like artificial intelligence to calculate a customer's precise risk and determine a fair rate for them to pay, insurers are innovating all kinds of new ways to undermine our security and juice their profits — all under the guise of convenience and objective-risk science.
Insurance is supposed to make us feel safe
When I talk to insiders working in the industry, they readily (even too eagerly) admit that insurance is a grudge purchase. Consumers agree. Private insurers are among the least trusted industries by consumers, with health insurers being the most hated. As noted in a new report from Caliber, a reputation-management consultancy, consumers see insurance "more as a necessary evil than something that is largely value-driven." That is, we see insurance as driven not by producing value for the public but by "turning an enormous profit for shareholders," Søren Holm, a senior advisor at Caliber, recently told the trade publication Insurance Business.
Still, insurance is a critical tool to make the world safer. By pooling risks among large groups, no one has to bear the full burden of health scares or random catastrophes on their own. That sense of social solidarity is why many early forms of insurance were tied to unions, guilds, and community groups. And it's why insurance is often mandated for things like driving a car or buying a house: It works better when everybody is in it together.
Today's insurers say they are selling "peace of mind" and hawk themselves as neighbors who are always there when you need them. But that sense of security does not ring true for people who feel cheated when insurers use a morass of loopholes and exclusions to deny claims while continuing to raise premiums or cancel policies altogether. For insurers, the best-case scenario is that they keep getting paid by customers who never face disaster, and therefore never need their insurance. So when the bills for a catastrophe come due, companies are not eager to pay out. Not so neighborly after all.
Behind the curtain
Insurance is an esoteric, byzantine, and secretive business, so most of us only see the tip of the iceberg — the rejected claims, the raised costs, the revoked coverage. What we don't see are the complex systems that insurers have created to keep us in the dark, collect as much data as possible, and squeeze profits from the customers they are meant to serve. And the further integration of technologies like AI is only supercharging the industry's capacity to rip us off while allowing companies to evade public awareness and accountability.
In years past, insurance policies were based largely on broad demographic categories like age and gender. Now, with the vast range of data insurers have access to, consumers are charged not just based on their objective risks but also based on how much they are willing to pay — a practice called price optimization. To make those predictions, insurers gather and analyze data about individuals to create detailed personal profiles, looking at everything from whether you smoke cigarettes to your shopping habits to which internet browser you use.
As Duncan Minty, an ethics consultant for insurers, recently wrote, "It's difficult to think of data that they haven't been collecting about policyholders."
That data is fed into proprietary models for analysis to determine how much to charge a particular consumer. The personalized prices that the algorithm spits out are not just based on how risky a person is compared to other similar people but also on metrics like Customer Lifetime Value — or the predicted net profit that a customer will deliver over their lifetime.
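To make the distinction concrete, here is a minimal, hypothetical sketch of the difference between pricing on risk alone and "optimizing" the price on predicted behavior. Every function name, input, and weight below is an invented assumption for illustration, not a description of any real insurer's model.

```python
# Hypothetical contrast between risk-based pricing and "price optimization."
# Every number and input here is invented for illustration.

def risk_based_premium(expected_annual_loss: float, loading: float = 0.25) -> float:
    """Traditional actuarial pricing: expected claims cost plus a loading
    for expenses and profit."""
    return expected_annual_loss * (1 + loading)

def optimized_premium(expected_annual_loss: float,
                      churn_probability: float,
                      predicted_lifetime_value: float) -> float:
    """Price optimization: start from the risk-based price, then add a markup
    for customers judged unlikely to shop around and likely to be profitable
    over many years."""
    base = risk_based_premium(expected_annual_loss)
    loyalty_markup = (1 - churn_probability) * 0.15              # invented heuristic
    value_markup = min(predicted_lifetime_value / 50_000, 0.10)  # invented cap
    return round(base * (1 + loyalty_markup + value_markup), 2)

# Two customers with the same expected losses get different prices
# purely because of how they are predicted to behave.
print(risk_based_premium(800))                                   # 1000.0
print(optimized_premium(800, churn_probability=0.05,
                        predicted_lifetime_value=20_000))        # 1242.5
print(optimized_premium(800, churn_probability=0.60,
                        predicted_lifetime_value=20_000))        # 1160.0
```

In this toy version, the loyal customer who never shops around pays the most, which is exactly the kind of outcome critics of price optimization point to.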
To determine that magic price tag, insurance companies drill down into the nitty-gritty details of your life. They might look at your home's roof using drones and automated image analysis, or where you're driving based on data from a smart device in your car, or what kinds of foods you're eating by looking at nutrition trackers. They might also look at your credit score, ZIP code, social-media posts, and battery-charging habits. This data can then be used as proxies for social categories like class and race or to make moral judgments about your personal responsibility, which factor into decisions for prices and policies.
"We can now reveal things that, in the past, only God knew about, thanks to technology including AI," the president of Sompo Holdings, one of Japan's largest insurance companies, said last year.
We can now reveal things that, in the past, only god knew about.
Kengo Sakurada, CEO of Sompo Holdings
Insurers justify possessing so much data by saying that it's all in the name of fairness. Everybody should be charged according to their own risk. The only way to know that fair price is for insurers to have a vast amount of information about each individual. But how exactly they reach those decisions is largely unexplained. We have to guess, piece things together, and reverse engineer the results. And the outcomes seem to always favor insurers above all.
A recent survey found that most people are opposed to these kinds of surveillance programs: "68% of Americans would not install an app that collects driving behavior or location data for any insurance discount amount." However, that lack of consumer support has not stopped companies — insurers are starting to make such programs mandatory. For instance, health insurers can mandate employee participation in corporate wellness programs that track lifestyle data, and auto insurers can mandate smart devices in your vehicle if you've been deemed higher risk.
The direction the industry is heading is to use this flood of data to optimize pricing to the extent that your insurance policy is dynamic and constantly changing. For example, insurers are testing new business models like on-demand insurance: Rather than purchase an annual contract for something like car insurance, every time you drive, your insurance would activate, and when you aren't driving, it would deactivate. Each of these activations would be treated as a new transaction with a new contract — and a new price. Driving to get groceries on a sunny weekend morning might cost less than, say, picking up your kids during rush hour on a rainy evening. This emerging model is spreading as insurers experiment with new products such as single-day heat insurance that you can activate using a mobile app.
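To illustrate what that could look like in practice, here is a rough sketch of per-trip pricing under an on-demand model. The base rate, risk factors, and multipliers are assumptions made up for the example, not any insurer's actual formula.

```python
# Hypothetical per-trip pricing for "on-demand" car insurance, where each
# activation of coverage is a new contract with its own price.
# The base rate and multipliers are invented for illustration.
from dataclasses import dataclass

@dataclass
class Trip:
    minutes: int
    rush_hour: bool
    raining: bool

BASE_RATE_PER_MINUTE = 0.05  # dollars per minute of coverage (invented)

def trip_premium(trip: Trip) -> float:
    """Price a single activation of coverage from conditions at the time of driving."""
    multiplier = 1.0
    if trip.rush_hour:
        multiplier *= 1.4  # denser traffic, more fender benders
    if trip.raining:
        multiplier *= 1.3  # worse visibility and braking
    return round(trip.minutes * BASE_RATE_PER_MINUTE * multiplier, 2)

print(trip_premium(Trip(minutes=20, rush_hour=False, raining=False)))  # sunny grocery run: 1.0
print(trip_premium(Trip(minutes=20, rush_hour=True, raining=True)))    # rainy rush-hour pickup: 1.82
```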
Colm Holmes, formerly the CEO of Aviva and now CEO of Allianz Holdings — both massive multinational insurers — summed up the problem with this model in a 2020 interview: "The use of data is something I think regulators will have to look at, because if you get down to insuring the individual, you don't have an insurance industry — you just create people who don't need insurance and people who aren't insurable."
Holmes is saying that the end point of this trajectory is that risky people lose access to insurance while everyone else never needs to use theirs — undermining the entire purpose of insurance as a way of collectively pooling risk. We're already seeing this as more people have their home-insurance policies canceled and claims denied. It could also mean endless profits for the companies: Millions of people pay in, while the insurer rarely, if ever, needs to pay out. To keep companies from becoming the architects of their own demise by chasing that financial incentive, Holmes argues, regulators need to step in and enforce limits. Otherwise we would have no real insurance to speak of.
Nickel and diming
At her keynote during the recent International Congress of Actuaries, Inga Beale, the former CEO of the UK insurance market Lloyd's of London, shared a story about trying to file a home-insurance claim after her roof had been damaged in a hailstorm. Beale's insurer had required her to get three independent quotes for repairs, fill out a stack of paperwork, engage in long interactions with a claims handler, and on and on. Eventually, Beale was so frustrated by the whole process that she decided to just pay for the repair herself. Beale was a professional underwriter in the highest echelon of the insurance industry, but she was also the victim of claims optimization, the insurance-industry practice where consumers are offered payouts not based on what they fairly deserve but based on what they are willing to accept.
From the freedom of retirement, Beale was taking aim at insurers' obsession with finding exclusions — that is, reasons not to underwrite risk or cover claims. She saw this as antithetical to the social purpose of insurance. What's the point of having insurers if they don't want to insure anything risky? Beale called these practices a systemic feature of an industry that has become too profit-oriented and risk-averse. I'll go even further: To boost their own profits, insurance companies are becoming increasingly antisocial and antagonistic. You may hate your insurer, but they probably hate you more.
One possible way insurers limit how much they pay on claims is by simply paying less on a batch of claims and seeing how many customers complain. If the number of complaints doesn't reach a certain threshold — say, 5% of claim decisions result in a formal complaint — then the amount paid is lowered even further with another batch of claims. The process of lowering payouts, which can be automated by AI tools, is continued until that threshold of complaints is reached.
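As a thought experiment, that ratcheting process might look like the simulation below. The 5% threshold comes from the example above; the step size and the model of how customers complain are invented for illustration.

```python
import random

# Hypothetical simulation of lowering claim payouts batch by batch until the
# share of formal complaints reaches a tolerated threshold (5%, as in the
# example above). The complaint model and step size are invented.
random.seed(0)

COMPLAINT_THRESHOLD = 0.05   # stop once 5% of claim decisions draw a complaint
STEP = 0.05                  # cut the payout fraction by 5 points per batch

def complaint_rate(payout_fraction: float, batch_size: int = 1000) -> float:
    """Toy assumption: the further payouts fall below a claim's fair value,
    the more likely each customer is to file a formal complaint."""
    p_complain = max(0.0, (1 - payout_fraction) * 0.2)
    complaints = sum(random.random() < p_complain for _ in range(batch_size))
    return complaints / batch_size

payout_fraction = 1.0  # start by paying claims in full
while payout_fraction > STEP:
    if complaint_rate(payout_fraction - STEP) >= COMPLAINT_THRESHOLD:
        break  # cutting further would draw too many complaints
    payout_fraction -= STEP

print(f"settled payout level: {payout_fraction:.0%} of fair value")
```

Note that the loop stops when complaints start to bite, not when payouts reach a fair level, which is the crux of the criticism.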
Insurers can also use their data-driven analysis of customers to predict who is prone to complain and preemptively offer them a fairer deal than those who are more likely to just accept what they are offered. Or, they can target customers with low credit scores — which indicates they might have money troubles and need cash right now — and offer them a quicker, no-hassle process in return for a reduced payout.
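A minimal sketch of that kind of segmentation might look like the following, assuming a hypothetical complaint-propensity score alongside a credit score; the cutoffs and offer fractions are invented.

```python
# Hypothetical segmentation of initial claim offers based on predicted behavior.
# The scores, cutoffs, and offer fractions are invented for illustration.
from dataclasses import dataclass

@dataclass
class Claimant:
    fair_value: float       # what the claim is actually worth
    complaint_score: float  # predicted likelihood of pushing back, 0 to 1
    credit_score: int

def initial_offer(c: Claimant) -> float:
    if c.complaint_score > 0.7:
        return c.fair_value        # likely to complain: open with the fair amount
    if c.credit_score < 600:
        return c.fair_value * 0.7  # likely to need cash now: fast but reduced offer
    return c.fair_value * 0.85     # everyone else: open below fair value

print(initial_offer(Claimant(10_000, complaint_score=0.9, credit_score=720)))  # full fair value
print(initial_offer(Claimant(10_000, complaint_score=0.2, credit_score=550)))  # quick lowball
print(initial_offer(Claimant(10_000, complaint_score=0.2, credit_score=720)))  # default lowball
```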
Insurers also drag out claims until customers just give up. And recent reporting by ProPublica found that the health insurer Cigna uses a system that helps doctors instantly reject a claim on medical grounds without opening the patient file, forcing customers to go through a tortuous appeals process. "Cigna adopted its review system more than a decade ago," writes ProPublica, "but insurance executives say similar systems have existed in various forms throughout the industry."
Optimization is an industry euphemism for discrimination. Instead of drawing redlines around risky populations and deciding to charge them more and pay out less, insurers use automated systems to find patterns in data and optimize parameters for profitable risk management — which often has the same discriminatory results.
Research by advocacy organizations in the UK has found strong evidence of discriminatory pricing, such as the "poverty penalty," the "ethnicity penalty," and the "loyalty penalty": people are charged higher rates or refused coverage because they are poorer, live in communities of color, or stick with one insurer rather than regularly shopping around and switching providers.
In the US, a class-action lawsuit against State Farm alleges that the company's use of automated platforms for handling claims resulted in racial discrimination, saying that "Black homeowners had a significantly harder time by several measures" getting their insurance claims approved than white homeowners.
A State Farm spokesperson told The New York Times: "This suit does not reflect the values we hold at State Farm. State Farm is committed to a diverse and inclusive environment, where all customers and associates are treated with fairness, respect, and dignity." (A judge denied part of State Farm's motion for dismissal, ruling in September that the lawsuit can proceed with the plaintiff's claim of racial discrimination.)
These kinds of practices that take advantage of consumers are intensified by the use of algorithmic systems designed to optimize profit for insurers by finding patterns across streams of data. Rather than making decisions on causality or objectivity, insurers depend upon correlation and interpretation. Their decisions are then laundered through the opacity of AI, giving insurers plausible deniability if their practices are determined unethical. If the machine is making the decisions, is anyone really at fault?
Private insurers are not unique in succumbing to the financial imperatives of profit growth, but they are uniquely positioned to prey on our insecurities, exploit our desire for protection, and abandon us in our greatest times of need. And insurers hide behind the industry's reputation of being too boring to care about, too esoteric to understand, and too technical to challenge. But as insurance becomes more necessary and less accessible in the face of mounting risks from the climate crisis, we need to pay more critical attention to how insurers operate and whose best interests they really serve.
The crisis of insurance accessibility and rising costs cannot be fixed without real accountability and oversight of the industry. The stuff of insurance is far too important to be left to the insurance industry.
Jathan Sadowski is a Senior Research Fellow in the Faculty of Information Technology at Monash University. He co-hosts the podcast This Machine Kills and wrote the book Too Smart: How Digital Capitalism is Extracting Data, Controlling Our Lives, and Taking Over the World.
Correction: October 23, 2023 — An earlier version of this story misstated how Cigna's system to process claims works. According to reporting from ProPublica, it doesn't automatically reject claims under a set cost, it helps doctors instantly reject them without reviewing a patient's file.
Source: Business Insider