by Tracy Rosenberg
Online Harms Need A Structural Solution: Ham-Handed Censorship Won’t Fix It
There is no doubt about it. Internet 2.0 made some people a lot of money. The quandary of the early 2000s of how to monetize the Internet was answered by the rise of surveillance capitalism, and those positioned to grab the data in Silicon Valley have made (and in some cases lost) vast fortunes.
But as the early 2000s receded, it became abundantly clear that the economic miracle of the monetized Internet carried grave societal harms. Not just the obvious one of the institutionalization of an oligopoly of Big Tech firms that had scaled beyond any semblance of real competition, but a kitchen sink of harms that included the exploitation of children and youth, sexual abuse, black markets for harmful drugs and guns, and the spread of virulent disinformation.
Not surprisingly, the large-scale distribution and increasing visibility of harmful content led to a desire to make the “bad content” go away, some of it broadly recognized as harmful and some of it characterized that way only depending on ideology.
Social media companies, as the foremost engine of content distribution in Internet 2.0, came under increasing pressure to moderate broad categories of disturbing content out of existence. Legislative censorship bills targeting one or more categories of content began to spring up in state legislatures as well as at the federal level. These bills sought to do one of several things: limit minors’ access to the Internet, impose broad penalties for failure to remove certain targeted categories of content, or impose a duty on tech companies to protect children from bad things online.
In all cases, these bills took a content-first approach. The problem was the x-rated, abusive, exploitative, dangerous or false content, and the problem was considered solved once that content was gone: moderated, removed, or made inaccessible to younger audiences.
There are some big problems with the content-first approach to online harms:
1. Section 230, the provision of federal law that exempts tech companies from the liabilities of publishers and broadly forecloses legal claims against them for content-based harms.
2. The sheer scale of the Internet, which makes moderation difficult to execute fully and, at best, an inconsistent way to remove targeted content.
3. The ongoing risks to freedom of expression and the First Amendment from censorship protocols, especially in an increasingly polarized society where the definitions of “bad,” “harmful” and “untrue” differ from person to person, and in any case will inevitably change over time as generational mores shift.
4. Leaving in place the underlying mechanisms that create harm, a nine-headed hydra that will keep sprouting new kinds of harm with new kinds of content for as long as the structure itself remains.
Social justice concerns often revolve around a structural argument. Racism, sexism, ableism: in the end we conclude that each is not simply a matter of individual prejudice, but of a structure that runs through society and gives power to bias. The Internet, while artificial, is also a human society, and the problems that exist online are likewise structural. We cannot meaningfully address them by firing scattershot at individual pieces of bad content.
A structural approach to online harms focuses on the tools, mechanisms and mechanics that make online harms possible. The basic backbone of the Internet has been consistent for more than 40 years, but numerous layers of refinement, often couched as “user-friendly” improvements, have significantly changed the user experience since the 1990s. Many of these layers are the structures that have magnified online harms. Although they do not always manifest as harmful, depending on the content they are handling, “improvements” like AI-enabled algorithmic feeds, auto-play, recommended or featured content, dark patterns and behavioral advertising are what make the common harms we see possible.
Instead of targeting specific pieces of content, creating a moderation nightmare and risking the suppression of legitimate content, we can regulate these added layers while retaining the backbone that has made the Internet one of the most powerful systems the world has ever seen. By targeting destructive mechanisms to limit or remove their harms, censorship and its attendant problems become unnecessary.
For example:
1. Algorithmic feeds, which supposedly benefit users by showing them content they will like better than the content they have selected themselves, should be completely optional, if allowed at all. Optional should mean an affirmative opt-in, not a default presentation or a requirement that users who don’t want them locate and use hidden tools to turn them off. The default should be that users get the content they subscribed to, and nothing more (a minimal sketch of what such defaults could look like follows this list).
2. Auto-play of video content should be disabled. Pushing a button to play a video is not particularly burdensome, and users should choose which content they view and which they do not.
3. Content recommendations should not be on by default. If a user wants to see “similar content,” or content an online company believes they might like, they should affirmatively ask for it. This preserves the ability to encounter content one may not have been aware of but finds useful, when the user actually wants that, while avoiding content pushed to feed addictive behavior.
4. Dark patterns should be clearly defined and then banned. Manipulating users isn’t marketing; it is abuse. If we want people to be safe on the Internet, we must remove the tools used to promote unsafe behavior.
5. Behavioral advertising, by replacing contextual advertising, has transformed the Internet from a source of information and connection into a service that data-mines its users. While the economic basis of the Internet is largely the sale of behavioral advertising built on granular analysis of user clicks, it is possible to place limits on the use of advertising metrics. What people choose to purchase online may be fair game; recording every site they view and every link they click is intrusive and unnecessary.
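To make the opt-in idea concrete, here is a minimal, hypothetical sketch of what “safe by default” account settings could look like on a platform’s side. The FeedSettings fields and the opt_in helper are illustrative names, not any company’s actual API; the point is simply that every engagement-maximizing mechanic starts off and stays off until the user affirmatively asks for it.

```python
from dataclasses import dataclass

@dataclass
class FeedSettings:
    # Hypothetical defaults: every engagement-maximizing mechanic starts disabled.
    algorithmic_feed: bool = False   # show only subscribed, chronological content
    autoplay_video: bool = False     # require a click or tap to play any video
    recommendations: bool = False    # no "you may also like" unless requested
    behavioral_ads: bool = False     # contextual ads only, no click-trail profiling

def opt_in(settings: FeedSettings, feature: str) -> None:
    """Enable a single feature, and only in response to an explicit user request."""
    if feature not in settings.__dataclass_fields__:
        raise ValueError(f"unknown feature: {feature}")
    setattr(settings, feature, True)

if __name__ == "__main__":
    s = FeedSettings()               # a brand-new account: everything off
    opt_in(s, "recommendations")     # the user affirmatively asks for suggestions
    print(s)
```

The design choice the sketch illustrates is the inversion of today’s norm: instead of burying the off switch in hidden menus, the platform would have to earn an explicit yes for each mechanic, one at a time.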
There are fewer than six degrees of separation between suggestions like those above and the strengthening of data privacy legislation. A privacy-first approach to the Internet, not coincidentally, targets by definition some of the most harmful mechanisms on the Internet. These mechanisms, in the wrong hands and with the wrong content, deliver real harms we can all see. But even with innocuous content, they deliver an Internet that is addictive, intrusive, overwhelming and less than useful.
We can tackle online harms and make the Internet better and safer for everyone, kids and adults alike, by reining in privacy-abusive mechanics that we don’t need and that don’t help us.