S. 1080, the “Cooper Davis Act,” is one of several bills before Congress that seek to pressure online service providers to search for and take down material disfavored by the government, and that do so by extending to new contexts methods providers already use to remove child sexual abuse material, or “CSAM.” It is cosponsored by Sen. Jeanne Shaheen [D-NH], Sen. Richard Durbin [D-IL], Sen. Amy Klobuchar [D-MN], and Sen. Todd Young [R-IN].
The Cooper Davis Act aims to keep drug-related material from circulating online. The Act doesn’t require providers to search for such material and take it down, but if a user alerts a provider to material that appears, on the provider’s review, to indicate an intent to commit a ‘controlled substances violation’, the provider is required to report it to the DEA.
This proposal goes far beyond the CSAM reporting framework. All the CSAM scheme does is match a particular image’s hash value against a list of hash values of known CSAM provided by the National Center for Missing and Exploited Children. Cooper Davis, by contrast, would require providers to disclose directly to law enforcement not only past violations but planned or imminent ones, identifying information about an alleged violator, and speech that is not itself unlawful. Providers who “knowingly and willfully” fail to comply would face coercive criminal penalties of $190,000 per violation.
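To make the contrast concrete, here is a minimal sketch of that matching step, assuming a plain SHA-256 hash list. (Real deployments use perceptual hashes such as Microsoft’s PhotoDNA so that near-duplicate images still match, and the actual hash list is confidential; the names and the placeholder entry below are hypothetical.)

```python
import hashlib

# Hypothetical stand-in for the confidential NCMEC hash list.
# The entry below is a placeholder, not a real hash value.
KNOWN_HASHES = {
    "0" * 64,
}

def is_known_image(image_bytes: bytes) -> bool:
    """Return True if this exact image appears on the known-content list."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_HASHES
```

The point is that this check involves no judgment about meaning: either the hash is on the list or it isn’t. Deciding whether a post ‘appears to indicate an intent’ to commit a drug offense is a different task entirely, one that requires a human to read and interpret speech.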
In practice, the safest, and therefore most likely, response by providers to this Act’s passage would be to implement a broad rule suppressing an array of lawful speech: prohibiting on their platforms any mention of fentanyl, methamphetamines, and (less administrably) fake prescription drugs.
The trouble is that a lot of discussion of fentanyl, methamphetamines, and fake prescription drugs is useful discussion. There is indeed ‘fake news’ and misinformation out there about fentanyl: despite contentions made by some police officers, you can’t overdose on it by touch. But communities should be able to articulate and discuss policy responses to fentanyl online, for example by letting people know that local fentanyl can be contaminated with xylazine, without running afoul of over-eager censors.
It’s also fair to say that a nominally voluntary censorship scheme in which providers can in fact be fined $190,000 for noncompliance is not very voluntary. And that matters for the Fourth Amendment. If the government strongly encourages providers (even without outright requiring them) to conduct a search or seizure of a user’s content, then the provider needs a warrant to search that material, even though the material sits on the provider’s own servers.
The Fourth Amendment “does not constrain private parties unless they act as agents or instruments of the government.” However, “[w]hen a statute or regulation compels a private party to conduct a search, the private party acts as an agent of the government.” Even when a search is not required by law, however, if a statute or regulation so strongly encourages a private party to conduct a search that the search is not primarily the result of private initiative, then the Fourth Amendment applies.
United States v. Stevenson, 727 F.3d 826, 829 (8th Cir. 2013) (citation omitted)
It is hard to see how a court would not construe the Cooper Davis scheme as “strongly encouraging” providers to search users’ content whenever a user has alerted the provider to the existence of such content.
Providers have an incentive to avoid business practices that increase their labor costs. It costs money to implement this scheme: money to employ people to evaluate potentially violative content, money to hire lawyers to prepare warrant applications and to defend the provider if someone is wrongfully targeted. So why would providers keep hosting drug-related discussions at all?
So what would a more Fourth Amendment-compliant approach look like, one that did not involve governments “strongly encouraging” providers to search for and take down disfavored content?
Governments can:
- Develop and issue best practices documents.
- Mount public information campaigns, encouraging people to be critical and thoughtful about what they share online.
- Fund research into how disfavored content spreads online, and how providers might be able to inhibit it.
- Fund civics and information studies curricula in middle and high schools.
The Internet has always, since the stone-age days of bulletin boards, included private networks that choose to be full of toxic content. An effective, Fourth Amendment-compatible solution, based on genuinely voluntary searches and takedowns by providers and a focus on education and research by governments, is within reach.
Instead, governments across the world are pursuing a model familiar to students of Russia’s “cyber Gulag” or the UK’s Online Safety Bill. This dangerous model involves greater censorship, greater information control, outlawing encrypted services, and punishing providers who don’t exercise a “duty of care”, however each government chooses to define it.
Perhaps, eventually, when every government and corporation across the world has weighed in, to purge from the Internet every form of online discussion that might potentially offend, from sexuality to atheism to lèse-majesté, they’ll let us have it back to play with. But let’s not kid(ify) ourselves: An Internet acceptable to all will be effectively useless to all.