In Compulife Software Inc. v. Newman, No. 18-12004, 2020 WL 2549505 (11th Cir. May 20, 2020), the Eleventh Circuit vacated a trial court ruling that competitors who used a website operator’s server and database did not misappropriate trade secret information.
The case involved “scraping,” a practice that future trade secret cases will increasingly invoke. Scraping is the process of importing a website’s information into another computer. It is a simple but effective method of vacuuming large volumes of data from the web. Once the data is gathered, the importer can offer another’s data from its own website. It is the electronic version of building equity off others’ investments.
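For readers unfamiliar with the mechanics, the practice can be sketched in a few lines of code. The snippet below is purely illustrative, using Python’s standard-library HTML parser on an invented quote page; the markup, class names, and insurer names are hypothetical and have nothing to do with Compulife’s actual site.

```python
from html.parser import HTMLParser

# Hypothetical markup: each price sits in a <td class="quote"> cell.
SAMPLE_PAGE = """
<table>
  <tr><td class="insurer">Acme Life</td><td class="quote">$42.10</td></tr>
  <tr><td class="insurer">Beta Mutual</td><td class="quote">$39.85</td></tr>
</table>
"""

class QuoteScraper(HTMLParser):
    """Collects the text inside every <td class="quote"> cell."""

    def __init__(self):
        super().__init__()
        self.in_quote_cell = False
        self.quotes = []

    def handle_starttag(self, tag, attrs):
        if tag == "td" and ("class", "quote") in attrs:
            self.in_quote_cell = True

    def handle_data(self, data):
        if self.in_quote_cell:
            self.quotes.append(data.strip())

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_quote_cell = False

scraper = QuoteScraper()
scraper.feed(SAMPLE_PAGE)
print(scraper.quotes)  # the extracted quote values
```

Run against page after page, a loop like this harvests the displayed data into the scraper’s own database, which is the conduct at issue in the case.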
The facts in the Compulife case were complex. Plaintiff Compulife was a life insurance quote database service. It alleged that a competitor scraped insurance quotes from the database. Relying on Florida and federal law, Compulife brought misappropriation of trade secrets claims against the competitor.
To establish trade secret misappropriation, Compulife had to show that it possessed a trade secret, that the defendant misappropriated it, and that the defendant knew or should have known the secret was obtained through improper means. Compulife conceded that anyone could obtain quotes through the website. It also conceded that it did not restrict use of the quotes.
The trial court was skeptical of the case. It agreed that the underlying database was a protected trade secret but concluded that the generated quotes were not. The court also found that Compulife failed to show the defendants used improper means to collect the data. As a result, the trade secret misappropriation claims failed.
The Eleventh Circuit disagreed. It acknowledged that the quotes’ public availability was relevant to ascertaining the existence of a trade secret. General access would conflict with the reasonable protection efforts required to show trade secret status.
Yet the analysis did not stop there. The Eleventh Circuit noted that “the simple fact that the quotes taken were publicly available does not automatically resolve the question in the defendants’ favor.” Even if the quotes themselves were not protected trade secrets, the trial court should have assessed whether the database was secreted away piecemeal through millions of bot-driven queries. If enough of the database was acquired in this way, then the defendants had appropriated a trade secret.
That the defendant replicated the data incrementally did not alter the analysis; acquiring the database piecemeal was still misappropriation of a trade secret. Put another way, even if the quotes were not secret, the underlying database was.
Moreover, the mere fact that the defendants obtained the quotes through an open website did not necessarily mean that the quotes were properly acquired. A bot can generate exponentially more quotes than any human. In other words, manual collection of Compulife quotes might be proper. Humans come with embedded constraints; bots do not. They can collect relentlessly.
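To make the scale concrete, consider a back-of-the-envelope count. The rating parameters and their ranges below are invented for illustration, not taken from the case record, but they show how quickly the combinations multiply once a bot queries every one.

```python
# Hypothetical rating parameters a quote engine might accept
# (invented for illustration; not the actual Compulife inputs).
ages = range(18, 80)     # 62 ages
sexes = 2                # male / female
smoker = 2               # smoker / non-smoker
health_classes = 10      # underwriting classes
zip_codes = 1000         # sampled zip codes

combos = len(ages) * sexes * smoker * health_classes * zip_codes
print(f"{combos:,} distinct quote requests")  # 2,480,000

bot_days = combos / 10 / 86_400       # bot at 10 requests per second
human_years = combos / 60 / 24 / 365  # human at one quote per minute
print(f"bot: ~{bot_days:.1f} days, human: ~{human_years:.1f} years")
```

At these assumed rates, a bot exhausts the parameter space in about three days; a human typing one query per minute would need nearly five years. That asymmetry is the practical difference the court was pointing at.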
Finally, Compulife’s decision not to add a Computer Fraud and Abuse Act (CFAA) count is puzzling. In the textbook scraping scenario, the plaintiff advances a CFAA claim alleging “unauthorized access” to the database, typically coupled with a contract claim for breach of the terms of service.
Compulife did invoke the Florida anti-hacking statute, but the court dismissed that count because the law only protects networks behind a “technological access barrier,” and Compulife had no such barrier.
The decision offers online providers additional authority to rebut scrapers who stake out a “public access” defense. Available data does not equal unlimited data. After all, even free access to proprietary engines comes with strings attached, as any user who has strained to decide whether a square contains a bush can attest.