Anyone who has ever bought insurance, resented the premium, contested a claim denial, or piled up hours and paperwork dealing with insurers would welcome anything that promised to lower their rates. Admiral, a British insurer, recently proposed an innovative solution to that end: it would discount rates depending on the applicant’s Facebook profile. Relying on an algorithm that connected language and behavior, Admiral’s program would extrapolate an applicant’s future risk profile from their social media activity.
Admiral’s app would interface with a Facebook profile and tabulate connections, status updates, and “likes” to determine the safety quotient of a driver. Admiral believed that a user who “liked” Michael Jordan was likely to present a different profile from one who “liked” Leonard Cohen. Accounts reported that users who wrote in short concrete sentences, made arrangements to meet friends at specific times and locations, and utilized lists would be considered conscientious; those who overused exclamation marks (!!!!) or words such as “always” or “never” would be considered overconfident.
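Admiral never published its model, so the following sketch is purely hypothetical: the markers (short concrete sentences, runs of exclamation marks, absolute words like “always” and “never”) come from the press accounts above, but the weights and the function name `conscientiousness_score` are invented for illustration only.

```python
import re

# Hypothetical illustration only -- Admiral never disclosed its scoring logic.
# The markers below mirror those in press reports; the weights are invented.

OVERCONFIDENT_WORDS = {"always", "never"}

def conscientiousness_score(posts):
    """Return a toy score: positive suggests 'conscientious',
    negative suggests 'overconfident'."""
    score = 0
    for post in posts:
        words = [w.strip(".,!?:") for w in post.lower().split()]
        sentences = [s for s in re.split(r"[.!?]+", post) if s.strip()]
        # Short, concrete sentences count in the writer's favor.
        if sentences:
            avg_len = sum(len(s.split()) for s in sentences) / len(sentences)
            if avg_len < 8:
                score += 1
        # Runs of exclamation marks (!!) count against.
        if re.search(r"!{2,}", post):
            score -= 2
        # Absolute words ("always", "never") count against.
        score -= sum(1 for w in words if w in OVERCONFIDENT_WORDS)
    return score
```

Fed with posts in the reported styles, list-like and punctual writing scores above zero, while exclamation-heavy, absolutist writing scores below it; a production system would presumably learn such weights from claims data rather than hard-code them.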
Critics denounced the concept as absurd cargo cult science, or worse. Dr. Yossi Borenstein, the data scientist who developed the program, explained that the algorithm merely identified correlations between social media markers and claims data, and would be constantly improving itself.
Facebook throttled the program, cutting off Admiral’s interface to its data, noting that the Facebook Platform Policy states: “Don’t use data obtained from Facebook to make decisions about eligibility, including whether to approve or reject an application or how much interest to charge on a loan.” The loss of the interface was a significant, but not terminal, reversal for the insurer: Admiral will now use a limited version of the program, one that permits new drivers to share information to get a discounted quote.
The episode is illuminating for a number of reasons:
First, it shows that data will be used in ever more unexpected ways and will yield ever more surprising correlations. Two decades ago, data warehousing experts insisted that retail data demonstrated that shoppers often purchased beer and diapers together. Target famously discovered a teenage pregnancy before the family did. And on the legal front, a recurring concern of clients is the permissibility of using particular data for particular purposes, such as credit reports for insurance rates.
Second, Facebook’s response was revealing. Like Otter and Boon in Animal House, it effectively amounted to “They can’t do that to our users! Only we can do that to our users.” The company terminated Admiral’s access to the program. But the termination reflects a fundamental business reality, not Facebook’s altruism: Facebook’s multi-billion dollar valuation stems from its mammoth data repository. It is possible to develop technology similar to Facebook’s, but virtually impossible to replicate its data. Thus, the termination was a hard-nosed business decision: Facebook will protect its data as the heart of its business model. Allowing other companies free access to that trove would gut the core of the Facebook enterprise.
Third, big data issues will be inextricably entangled with fundamental legal issues. A recent study criticized Facebook ad targeting, specifically the ability to limit advertising to, or exclude advertising from, users of a particular “ethnic affinity.” Facebook insisted that ethnic affinity was not a proxy for race or ethnicity, but rather a reflection of interests. Facebook, like the broader advertising industry, embraces some niche marketing, such as Spanish media or ethnic cosmetic products, while rejecting others, such as racially exclusive housing. Indeed, Facebook’s ad policy specifically prohibits use of the targeting option for illegal discrimination.
Nevertheless, the investigative journalism site ProPublica was able to obtain approval for a test advertisement that civil rights attorney John Relman viewed as violating the Fair Housing Act and raising concerns under the Civil Rights Act of 1964. When queried, Facebook responded that it could not vet the details of every single apartment rental or job application on its website. However, its ad policies strictly prohibited discriminatory advertising, and Facebook aggressively enforced those policies when it became aware of violations.
Fourth, regulation is not the answer. After all, Admiral is a British company. As such, it will adhere to the General Data Protection Regulation (GDPR) from May 2018 until such time as Brexit is complete. The GDPR is considerably more stringent than corresponding American regulations. Yet the GDPR would not prevent Admiral’s plans or similar projects. The fundamental tenet of the GDPR is that data belongs to the user. The user can permit a data processor to collect or use it. Users have consented to both Admiral and Facebook collecting and processing their data. The GDPR requires no more.
Privacy advocates argue that users are not fully aware of the extent to which they are giving up control of their data. Even if that is true, data collection is legal in a consent-based model. As Warren Buffett once quipped, “If you’ve been playing poker for half an hour and you still don’t know who the patsy is, you’re the patsy.” The cyberspace variant runs: “If you’re getting a ‘free’ service and cannot figure out what product is being sold, you’re the product.” Consumer data is increasingly valuable; the laws and regulations around its collection and use will become increasingly critical.
Fifth, regulators will continue to grapple with the implications of data collection and utilization on this unprecedented scale. The Internet of Things (IoT) enables old industries to learn new tricks. The implications include “price optimization,” which allows companies to differentiate prices on the basis of customer habits. (Anyone who has used two different computers to shop for airline tickets has already experienced this phenomenon.) But the IoT goes beyond where any company has gone. Admiral’s model, for instance, would have factored a love of jalapeño sandwiches into its rate computations. The IoT will further facilitate the underlying data collection: indeed, analysts believe that data could ultimately underpin the IoT in the same way advertising bankrolls popular web services.
The process has begun. Progressive Insurance tracks your car. Other insurers have partnered with IoT pioneers like smart thermostat maker Nest and Fitbit to collect data regarding households and personal health. Data collection is currently purely voluntary. Over time, however, it could easily become implicitly, if not explicitly, coercive on the premise that the innocent have nothing to hide. Ultimately, every Coca-Cola consumed, every customer service rant, and every late night out on the town could affect your risk profile.
And the process is just getting started. The technology is already being used in Africa, where the difficulties of acquiring data have led companies to use social media activities to evaluate applications. In the United States and European Union, companies will undoubtedly come up with novel inducements to get individuals to consent to collection of their data.
For example, employers and insurers have begun to offer free or subsidized devices, ranging from Fitbits to Apple Watches, to encourage healthier lifestyles and choices. A free Apple Watch would tempt even the wary (which is why they are now banned from British Cabinet meetings). But as Econ 101 reminds us, there is no such thing as a free lunch – or a free Apple Watch.
Saad Gul and Mike Slipsky, editors of NC Privacy Law Blog, are partners with Poyner Spruill LLP. They advise clients on a wide range of privacy, data security, and cyber liability issues, including risk management plans, regulatory compliance, cloud computing implications, and breach obligations. Saad (@NC_Cyberlaw) may be reached at 919.783.1170 or firstname.lastname@example.org. Mike may be reached at 919.783.2851 or email@example.com.
Physical Address: 301 Fayetteville Street, Suite 1900, Raleigh, NC 27601 | © Poyner Spruill LLP. All rights reserved.