
by guest blogger Riana Pfefferkorn
Child sexual abuse material, or CSAM, is a longstanding scourge on the Internet. Like the baseball diamond in “Field of Dreams,” if you build a service that allows file transmission or storage, someone will come use it for CSAM. Less distressing but equally true (if only marginally less dated a cultural reference) is that the Internet is for porn. While online services inevitably get used for both types of content, service providers tend to treat them very differently, given that adult pornography is generally legal in the U.S. whereas CSAM is illegal everywhere.
What if a provider messes up and treats legal porn like it’s illegal CSAM? That’s the basis for a recent opinion from a Florida federal district court that could have major implications for online services’ CSAM detection and reporting practices.
Relevant Federal Law: Balancing User Privacy with Child Protection
In 1986, Congress passed a law called the Stored Communications Act (SCA) that created a statutory right of privacy for Americans’ digital files and communications. The SCA generally prohibits the providers of online services from voluntarily disclosing the contents of communications, except in a few specified circumstances. Out-of-bounds disclosures expose the provider to civil suit by the subscriber or other aggrieved person, though good-faith reliance on a statutory authorization is a complete defense.
One of the SCA’s exceptions lets providers “divulge the contents of a communication … to the National Center for Missing and Exploited Children [NCMEC], in connection with a report submitted thereto under section 2258A” of Title 18. Section 2258A, in turn, requires providers, upon “obtaining actual knowledge” of “any facts or circumstances from which there is an apparent violation of” federal CSAM laws, to report that information to NCMEC’s CyberTipline. Flouting this obligation risks massive fines.
Section 2258A is a reporting requirement, not a monitoring requirement: the statute clearly states that providers do not have to search their services for CSAM. Nevertheless, many choose to do so voluntarily, yielding nearly 36 million reports to the CyberTipline in 2023 alone. NCMEC routes received reports to the appropriate law enforcement agency.
A related statute, 18 U.S.C. § 2258B, limits providers’ liability for making reports to NCMEC. It says a provider may not be held liable for claims “arising from the performance of [its] reporting … responsibilities” under section 2258A, unless there was “intentional, reckless, or other misconduct” by the provider, including acting “with actual malice.”
This limitation of liability, coupled with the SCA’s NCMEC-reporting exception and good-faith-reliance defense, addresses a dilemma providers would otherwise face upon discovering a user’s CSAM: either report it as required by statute and risk getting sued by the user for violating the SCA, or ignore the reporting obligation and risk six- or seven-figure fines.
As discussed in a paper I co-authored last spring, this set-up created an incentive to “over-report” material that might not qualify as CSAM. Trusting they couldn’t be held liable to the user, providers have long been able to err on the side of caution by reporting (leaving NCMEC and law enforcement to sort things out) rather than risk making the wrong call and paying the price. Now, however, a district court decision suggests that providers can no longer take it for granted that they won’t face liability for reporting non-CSAM.
The District Court’s Opinion
Verizon’s cloud storage service is provided by a vendor called Synchronoss. Verizon and Synchronoss monitor the cloud for CSAM using hash lists supplied by NCMEC (among other sources). Some list items include tags (also provided by NCMEC) describing the image category. Verizon and Synchronoss instantly report hash matches to NCMEC in CyberTips without human review or gathering any additional information about the images.
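To picture the kind of fully automated flow the opinion describes, here is a minimal, hypothetical Python sketch: hash an uploaded file, look the hash up in an NCMEC-style list that carries tags, and file a report on any match with no human in the loop. The function names, tag strings, and the report_to_ncmec() call are illustrative placeholders, not any real NCMEC, Verizon, or Synchronoss API.

```python
import hashlib

# Hypothetical hash list: file hash -> NCMEC-supplied tag (illustrative values only)
HASH_LIST = {
    "a1b2c3...": "apparent CSAM",
    "d4e5f6...": "unconfirmed CSAM",
}

def report_to_ncmec(file_hash: str, tag: str) -> None:
    """Placeholder for a CyberTipline submission; not a real API call."""
    print(f"CyberTip filed: hash={file_hash}, tag={tag}")

def scan_upload(file_bytes: bytes) -> None:
    # Real systems typically use perceptual hashes (e.g., PhotoDNA); a cryptographic
    # hash is shown here only to keep the sketch self-contained.
    file_hash = hashlib.md5(file_bytes).hexdigest()
    tag = HASH_LIST.get(file_hash)
    if tag is not None:
        # The alleged flow at issue: report the match immediately, whatever its
        # tag says, with no human review and no further information-gathering.
        report_to_ncmec(file_hash, tag)
```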
Plaintiff Lawshe was a Verizon customer who stored legal adult pornography in the cloud. Two images he stored got flagged as hash matches. One flagged image was tagged as “apparent” CSAM, the other as “unconfirmed” CSAM. Synchronoss reported each image in a separate CyberTip to NCMEC.
The second CyberTip stated that Synchronoss “had viewed the entire contents” of the second image, which “contained the lascivious exhibition of a ‘pre-pubescent’ minor.” In fact, Lawshe alleges, those statements were false, and the individuals in both images “were easily identifiable as adults by the barest of review.” Nevertheless, Lawshe was investigated and arrested on the basis of the second CyberTip. (It’s not mentioned here, but he later got the charges dropped and filed a civil-rights lawsuit against multiple government officials.)
Lawshe sued Verizon and Synchronoss for defaming him and violating his privacy rights under the SCA. The defendants moved to dismiss, asserting that they were obligated to report both images to NCMEC under section 2258A, that section 2258B immunized them for both reports, that the disclosures to NCMEC fell within the SCA’s exception for NCMEC reporting, and that their good-faith reliance on these statutory authorities was a defense against SCA liability.
The court sides with the defendants as to the first CyberTip but not the second. In short, the court holds that the “apparent CSAM” tag for the first image’s hash match was enough to trigger the defendants’ reporting obligations and shield them from liability, but the “unconfirmed CSAM” tag for the second image was not. Plus, the good-faith-reliance defense doesn’t appear on the face of the complaint, so it isn’t available at this early stage.
The opinion dwells heavily on the various statutory provisions I quoted above. The court is irked that Congress used the term “apparent violation” without defining “apparent,” but concludes that “a hash match to ‘apparent CSAM’” constituted “facts or circumstances from which there is an apparent violation” of federal CSAM law, thus obligating the defendants to report the first image to NCMEC, as allowed by the relevant SCA exception and immunized by section 2258B.
But as to the second image, the court says NCMEC’s “unconfirmed CSAM” tag revealed only that someone at NCMEC at some point had somehow decided that image “could not be determined to be CSAM (presumably because the person reviewing the image could not tell if the individual depicted was a minor).” That’s not enough to persuade the court that “knowledge of an ‘unconfirmed’ CSAM tag match is ‘knowledge of the facts or circumstances from which there is an apparent violation’ of CSAM laws” sufficient to trigger the reporting obligation.
Since the second CyberTip wasn’t obligatory, it was voluntary – which severs its link to section 2258A. That means the disclosure to NCMEC wasn’t made “in connection with a report submitted [to NCMEC] under section 2258A” as required to fall within the relevant exception to the SCA’s general rule forbidding voluntary disclosures of users’ private files. It also means the defendants can’t claim section 2258B’s immunity against claims “arising from the performance of [their] reporting responsibilities” under section 2258A, because there was no such responsibility as to that image given how little they knew about it.
In response to the defendants’ argument “that requiring additional investigation into images tagged as ‘unconfirmed’ CSAM would chill providers’ content moderation and undermine Congressional intent,” the court responds only that “a review of § 2258A caselaw reveals that image-by-image human review is not uncommon,” though it acknowledges that some providers automate reports and that “minimizing the number of people who view CSAM is a paramount concern.” Nevertheless, with age-difficult images, “some level of further investigation is appropriate before a provider is shielded from liability for reporting its customer’s private information to the government.”
The court denies that its decision will destroy the immunity Congress gave providers. Congress, it says, intended to immunize only mistaken or incorrect reports, not unfounded reports – like the second CyberTip, where Synchronoss (which allegedly automates all of its tips) supposedly didn’t review the reported image despite representing that it had. That, says the court, is enough to plausibly allege actual malice, which makes the second report ineligible for 2258B immunity even if it arguably was obligatory.
The defamation and SCA claims for the second CyberTip go forward as to each defendant. The next stage of the litigation will likely involve (expensive, time-consuming) fact discovery into the circumstances surrounding the second CyberTipline report.
Implications
To sum up: Mistaken or incorrect reports to NCMEC get immunity, but unfounded reports do not. Consequently, human review of flagged files is preferable from a risk management standpoint, whereas automated reporting is risky. For the many providers who use NCMEC’s hash lists and tags as part of their voluntary CSAM detection efforts, automated reports are OK if based on some tags but not others. It’s safe to rely on NCMEC’s “apparent CSAM” tag (and probably also “known” CSAM), but the “unconfirmed CSAM” tag should trigger further investigation.
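For providers trying to operationalize that takeaway, the decision boils down to simple routing logic. The sketch below is a hypothetical illustration of how a provider might adjust an automated pipeline after this ruling: auto-report matches tagged “apparent” or “known” CSAM, but hold “unconfirmed” matches (and anything else) for human review before any CyberTip goes out. The tag strings and function are assumptions for illustration, not NCMEC’s actual taxonomy.

```python
# Tags the court treated as sufficient to trigger the reporting obligation
AUTO_REPORT_TAGS = {"apparent CSAM", "known CSAM"}
# The tag at issue in Lawshe: not enough, on its own, to establish "actual knowledge"
REVIEW_FIRST_TAGS = {"unconfirmed CSAM"}

def handle_hash_match(file_hash: str, tag: str) -> str:
    """Return the next step for a hash match, based on its NCMEC-style tag."""
    if tag in AUTO_REPORT_TAGS:
        return "report"        # automated CyberTip, relying on the tag
    if tag in REVIEW_FIRST_TAGS:
        return "human_review"  # investigate before deciding whether to report
    return "human_review"      # unknown tags: the safer default under this opinion
```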
What other tags does NCMEC use, and which bucket does each one fall into: safe or unsafe? If using NCMEC’s tags turns immunity from liability into Russian roulette, what good are they, and what good is the law letting NCMEC share them with providers?
This case illustrates Eric’s observation that “determining if a content item is CSAM isn’t always a zero or one. The border cases leave [a provider] caught between its legal obligations to remove and report what might be CSAM and a user’s view that the CSAM characterization was overly cautious,” where “each choice creates legal peril” for the provider.
That said, it’s been surprisingly rare for users to sue service providers for reporting them for CSAM. A Google Scholar search for “2258B” turns up just two prior lawsuits (both fruitless): one against AT&T in 2013, the other against Meta and Yahoo in 2020. And Eric has blogged a couple of other failed cases where the gravamen of the complaint was account termination and content removal, not reporting.
This paucity of litigation is remarkable considering the high volume of CyberTips. Maybe Lawshe has opened the door to more – but probably only for reporting content that ultimately wasn’t deemed CSAM. Lawshe’s reported images depicted adults, which distinguishes him from the users whose unsuccessful AT&T and Meta/Yahoo suits feel a bit like sour grapes over their criminal convictions.
If providers can no longer count on ironclad immunity for filing underbaked CyberTips, that has both pros and cons.
On the plus side, many parties and interests will benefit if providers send less chaff to the CyberTipline. As my prior research notes, a “kick the can down the road” approach diverts NCMEC and law enforcement time and resources from real kids in real danger. Meanwhile, being baselessly reported for CSAM can ruin people’s lives. As Lawshe experienced firsthand (and as those linked stories detail in depth), these “false positives” cost the provider nothing but cost the user dearly, jeopardizing their families and jobs and subjecting them to intrusive police investigations. The default rule of the SCA is to protect people’s digital privacy, and this court is sympathetic to the idea that its exceptions should not be allowed to stretch so far that they swallow the rule. Providers currently externalize the costs of over-reporting without consequence, and the court is saying maybe that situation is due for a correction. (It’s a little reminiscent of the quest to make DMCA § 512(f) mean something. But the desire to shrink the SCA is evocative of a California appellate court decision that would destroy digital privacy if upheld, as Eric and I recently told the California Supreme Court.)
That said, I’m not sure the court understands what “additional investigation” to reduce the incidence of “unfounded” reports would entail in practice. The court’s blasé statement that “image-by-image human review is not uncommon” ignores the sheer scale of content uploaded to the Internet every single day. Even small providers depend on automation to fight CSAM: hash matches; ML tools for detection, triage, and victim identification; automated reporting flows like that allegedly employed by Synchronoss. The court says hash matches for “known” or “apparent” CSAM are reliable – but does that mean any and all other tags demand more scrutiny? The court doesn’t say.
Plus, the court ignores a circuit split that arose from precisely the question of hashing systems’ reliability. In courts on the other side of that split, this court’s rationale would mean providers couldn’t automatically report even matches to “known” and “apparent” CSAM. Every flagged image would require human review in order to preserve immunity against allegations of “unfounded” reporting. Even Meta, which submitted 85% of all 2023 reports, would struggle (as it would require an army of additional moderators), while smaller platforms (without budget for more people) might quickly be swamped.
Not only would that introduce huge delays to a system that’s already under strain, it would come with a major human cost. For true positives, human review of suspected CSAM takes a mental-health toll on the reviewer. (Fix “unfounded” reports with this one weird trick: traumatizing low-paid contractors in the Global South!) That’s another “damned if you do, damned if you don’t”: providers are getting sued by content reviewers for the trauma they suffer from doing this work, meaning there’s potential legal liability with or without human review. Even with false positives, there’s some privacy intrusion to the user whose private files get reviewed by moderators, albeit less than that of disclosure to the government. Automation may have falsely ensnared Lawshe, but it plays a crucial role in keeping CSAM off the Internet.
What’s more, human review isn’t a cure-all. As Masnick’s Impossibility Theorem goes, “content moderation at scale is impossible to do well.” At high volume, “edge” cases come up all the time. With age-difficult content, someone has to make that judgment call, perhaps multiple times a day, without a lot of information. Tools for age detection still require human oversight, and they struggle with age determinations around the cutoff. As the court observes, not even NCMEC can reliably tell if an image depicts an adult or a minor. Mistakes happen, and no matter what call gets made, someone will be unhappy with it. For example, Meta went through a negative press cycle in 2022 for reportedly training moderators to “err on the side of an adult” with age-difficult content – just what Lawshe wanted here. (It’s not clear to me how Synchronoss reviewing his images would have helped: if it was so obvious that they depicted adults, as he claims, how come he got arrested?) But false negatives (i.e., failing to report true CSAM) can mean messy, expensive FOSTA lawsuits. Yet again, you’re damned if you do, damned if you don’t. Protecting the exercise of discretion, in recognition that batting 1.000 is impossible, is precisely why immunity is so important.
The big question this opinion leaves hanging in the air is, where’s the line between “mistaken” or “incorrect” CyberTipline reports (which it says get immunity) and “unfounded” ones made without actual knowledge of an apparent violation (which don’t)? If the rule is “immunity for mistaken reports but not unfounded reports,” that just tells plaintiffs how to phrase their complaints when they file lawsuits almost nobody bothered to file before now. In a world with tens of millions of CyberTipline reports per year (of which some unknown number aren’t actually CSAM), it’s not feasible to open the courthouse doors to case-by-case litigation over the sufficiency of any given provider’s actions in submitting a specific report. The court criticizes the domain of CSAM detection as “largely unregulated,” but immunity is a form of regulation. And immunity doesn’t mean much if it’s not robust.
At the extreme, the court’s decision could wind up facilitating more CSAM distribution: if courts start routinely letting users second-guess providers’ CSAM reporting practices, why should providers keep looking for CSAM at all? They don’t have to, as the court notes. (Later today, though, we can expect Congress to discuss unconstitutionally making them look.) Not looking means finding less to report, which means fewer lawsuit opportunities. Of course, that would throw the baby (accurate CSAM detection and reporting) out with the bathwater (inaccurate reports like Lawshe’s). But with content moderation in its “fuck it” era, it’s not unthinkable. These days nothing is.
Case citation: Lawshe v. Verizon Commc’ns, Inc., et al., 2025 WL 660778 (M.D. Fla. Feb. 28, 2025). (The operative complaint.)