How To Improve Social Media
What social media companies can do, what public policy can do, what we can do — and what we shouldn't do.
We live in divided times, but a rare point of consensus is that social media has not been beneficial to society. Even as we use these services, we realize that something is wrong, that these trends and forces are not moving humanity in the right direction. We notice the effect it has on people, and we feel it ourselves. We see what it has done to politics, public discourse, and societal tensions. We see the role it plays in incentivizing bad behavior, amplifying the most extreme voices, and democratizing truth. And we see the worrisome concentration of power in the hands of a few tech companies. So, what should be done? That is where the consensus disperses.
The strategies we’ll explore here are broken down into three main categories: things the government can do through public policy, things social media companies can do with changes to their platforms or business models, and things that we can do as individuals through our choices and behaviors. A few of the strategies will also include a devil’s advocate section that discusses possible downsides.
Identifying the Problems
Negative mental health effects of social media use.
Increasing political polarization, tribalism, and hatred.
Public shaming, online mobs, and cancel culture.
Accelerating the death of journalism.
Post-truth, misinformation, fake news, and conspiracism.
Incitement and glorification of violence.
Distorting public perception by making fringe views or movements appear more widely held and influential than they actually are.
Bias from the platforms themselves, and selective or unfair enforcement of terms of service.
Privacy concerns and invasive data-mining.
Too much power concentrated in the hands of tech companies.
These are not the only problems, but they are among the biggest, and will serve as a good foundation from which to begin.
Public Policy Strategies
Hold Tech Companies Liable for Criminal Content on their Platforms. Platforms should have rules barring illegal content, along with reasonable policies to remove it promptly and ban offending users. If they fail to do this, they should be subject to liability or legal penalty. This would make tech companies more responsive in removing criminal content. Devil’s advocate: In an effort to take no chances, tech platforms may make their content moderation policies a dragnet that over-polices anything even remotely close to the legal line.
A Data Dividend. All users must be allowed to opt out of having their data collected and sold. This includes past and future data. If users agree to have their data collected and used, they should be paid a portion of the profits generated.
Algorithm Regulation. The algorithms that social media companies use to gamify their platforms, addicting, manipulating, and exploiting users in the process, should be subject to regulation. Given that virtually everything else is regulated, from food to vehicles to children's toys and household products, it seems absurd that algorithms which know us better than we know ourselves are subject to so little oversight.
Guarantee Free Speech Platforms. The government should guarantee the presence of at least one platform in the social media landscape with very minimal terms of service (TOS) that police only criminal content and have no rules pertaining to bigotry, “hate speech”, misinformation, conspiracy theories, unpopular opinions, or anything else. If privately owned platforms want to create strict TOS, banning content that, while legal, is deemed reprehensible or undesirable, that is their right, and must remain so. But there must be some place, out in the open where all can see, where anyone can say anything, so long as it breaks no laws.
If such a platform finds itself denied the ancillary services it needs to operate (e.g. hosting) by every available tech company, it should be able to apply through the government to be randomly assigned to companies that would then be legally required to serve it (similar to “assigned risk” in insurance). If it gets to the point where no such companies exist at all, the government should create one of its own, a sort of social media “public option”, as it were.
Break Up Tech Monopolies. If any one company comes to control an inordinate percentage of the digital landscape, they should be made to divest portions of their companies to prevent monopoly. Devil’s advocate: the quality of some platforms may suffer, as divestment from juggernaut parent companies could mean a loss of resources.
Strategies for Social Media Companies
Paid Subscriptions. Most social media platforms make their money through a combination of advertising and data-mining, two mechanisms where the more time users spend engaged on the platform, the more money is made. This creates a whole slew of incentives to addict users. One alternative is paid subscriptions. Users might opt to pay for an ad-free premium version of the service where the attention-hooking mind-games are turned off, along with other perks.
Or, a platform might instead decide to go fully subscription-based, doing away with ads and attention-hooks altogether. This would have the side benefit of dramatically reducing toxicity, specifically trolls, bots, and sock-puppet accounts, as many would be unwilling to pay just in order to do that. Devil’s advocate: The proverbial riffraff would simply flock to the other free platforms and make them more toxic — so toxicity may be redistributed more than reduced in absolute terms. Going fully subscription-based would also create an exclusionary class dynamic, with poor people unable to afford the service. Even so, the pros may outweigh the cons. You get what you pay for, and when you’re paying nothing at all, the product is you.
Invest in Smarter Moderation Algorithms. Platforms offload much of their content moderation to algorithms as a cost-saving measure rather than paying large numbers of humans to do the work. But as any social media user knows, these algorithms are dumbasses that often have trouble making basic judgement calls. For example, YouTube's algorithms struggle to differentiate between content that violates the TOS and content that merely discusses or criticizes those same disallowed behaviors. Because the algorithms aren’t sophisticated enough, they cannot make these and many other common sense distinctions, which makes it difficult to properly enforce rules without penalizing the innocent. Tech companies should invest in better algorithms, hire more human moderators, or both. The improvement in user experience and engagement would pay for itself.
Limit Comment Sections. Comment sections are probably the single worst aspect of social media, and rank highly on the list of worst aspects of the internet in general. They are addictive, pointless time-sinks, magnetically drawing people to them against their better judgement. Social media companies should expand the options available to users to restrict or limit comments to their posts. In addition, news sites, online magazines, and all manner of websites that currently have comment sections should consider closing them, limiting them, or putting them behind paywalls if their platform is partially subscription-based. There is a constructive purpose, even a need, for comment sections to exist — I am not suggesting we do away with all comment sections. But the internet is overrun with them, and it would improve social media, mental health, and society in general, if they were scaled back a little.
Open Verification to All. Many social media platforms allow certain users to verify and authenticate their identity in order to become “verified”, usually denoted by a checkmark next to their name or on their profile. For the most part, this has served as a way of establishing a digital caste system, where people who are famous, powerful, or well-connected attain the honorific of verification, and the lowly plebs do not. And the content posted by verified users carries more weight because of their status. This obnoxious arrangement should end immediately by opening up verification to anyone willing to verify and authenticate their identity, gaining the checkmark and the legitimacy it brings.
This would help not only with making social media less stratified, but with incentivizing more users to voluntarily abandon their rando anonymity status to join the beautiful people. Anonymity must always exist as an option for users, as one’s life circumstances may preclude the ability to express oneself freely under one’s real identity. But it must be recognized that at scale, anonymity has functioned as a shield for bad behavior by insulating people from all of the social mechanisms that restrain our worst impulses in real-life interactions. On a platform with truly open verification, you would not have to wonder whether someone was a bot, troll account, or sock-puppet. By opening verification to all, we can make progress on two very important problems in a single stroke.
Hide Follower Counts. Social media is also stratified by follower count, with high-following accounts treated more seriously than low-following ones. By hiding follower counts, users are more likely to be judged by the content of their posts, rather than prejudged by their number of followers.
Get Rid of “Likes.” Likes function as little dopamine doggie treats that sculpt the human mind into doing more of what produced that reward. They also influence the way others react to a post, taking it more seriously if they see a bazillion likes, even if the content of the post is actually quite idiotic. The original idea behind the like button was to make social media more fun, but with the benefit of hindsight, it just turns people into even bigger approval-seeking conformists than they already are. We don’t need further encouragement in this department. Likes are also one of the metrics platforms use to data-mine you and learn about what you like so they can hack your brain.
More Precision and Clarity in Terms of Service. Social media platforms’ TOS are intentionally left so vague that, technically speaking, they could probably justifiably ban half their users if they wanted to. Of course, no platform would ever do that, for reasons of pure self-interest, but the fact that the TOS are made so broad allows the companies greater flexibility to selectively enforce their rules when, where, and how they want. This inevitably leads to bias and abuse.
The best way to make social media TOS and their enforcement fairer is to sharpen them up. The language should be revised to provide a greater degree of precision and clarity, leaving less to interpretation and discretion, and the new-and-improved TOS should be enforced across the board. A good indicator of a better TOS would be one under which a platform could not technically ban more than, say, 5-10 percent of its users at most, even with the strictest possible enforcement. Sharpening up the TOS narrows the scope for bias and selective enforcement by the platforms.
Anti-Racism as a Universalist Principle, Not a Woke Religion. Across most of social media, there’s a two-tiered system when it comes to the enforcement of TOS violations relating to racism or bigotry. Users are routinely penalized or banned for posts about non-white or LGBTQ people that would occasion no similar action — and may even be lauded, in certain circles — if said about white and/or non-LGBTQ people. There has been a cultural shift in the past decade among educated, affluent left-of-center people, away from MLK-style universalist anti-racism that sees people as individuals, and toward woke critical-race style anti-racism that defines, essentializes, and morally ranks people by group category. And big tech, like many other sectors of society, has been captured by this emerging ideology.
Should a platform choose to restrict racist or bigoted content in their TOS, they should enforce it across the board, no matter who the subject is. A return to a more universalist zero-tolerance-no-matter-who policy of anti-bigotry would go a long way toward improving the cultural and ideological health of these platforms — and by extension, of society as a whole.
Penalize Repeated False Reporting. Social media platforms allow users to report content that violates the TOS, which then gets reviewed or flagged. Some users have taken to wielding the report button as a cudgel to punish others over personal disputes, political partisanship, or differences of opinion, sometimes inviting others to mass-report them. This is an abuse of these features, makes the platforms more toxic, and wastes time and resources that could be spent reviewing actual TOS violations. Users who repeatedly report things that are not, upon evaluation, legitimate TOS violations, should themselves be penalized for such behavior.
Make It Easier To Delete Old Content. Every social media platform should make it simple to delete undesired old posts en masse, and to filter them by type, age, or keyword. Everyone deserves the right to be forgotten, the right to change their mind, the right not to have every misstatement or stupid thought from many years ago held against them forever, or lying in wait like a time bomb. Give users the tools to clean up their accounts without having to delete them altogether.
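As a rough sketch of how such a bulk-cleanup filter might work (the post fields and function here are hypothetical illustrations, not any platform's actual API):

```python
from datetime import datetime, timedelta

# Hypothetical post records; a real platform would expose these via its API.
posts = [
    {"id": 1, "type": "photo", "date": datetime(2014, 6, 1), "text": "old vacation pic"},
    {"id": 2, "type": "status", "date": datetime(2023, 3, 15), "text": "great news today"},
    {"id": 3, "type": "status", "date": datetime(2012, 1, 9), "text": "a hot take I regret"},
]

def select_for_deletion(posts, older_than_years=None, post_type=None, keyword=None):
    """Return posts matching any combination of age, type, and keyword filters."""
    cutoff = None
    if older_than_years is not None:
        cutoff = datetime.now() - timedelta(days=365 * older_than_years)
    selected = []
    for post in posts:
        if cutoff is not None and post["date"] > cutoff:
            continue  # too recent to match the age filter
        if post_type is not None and post["type"] != post_type:
            continue
        if keyword is not None and keyword not in post["text"].lower():
            continue
        selected.append(post)
    return selected

# Select all status posts more than five years old for review before deletion.
old_statuses = select_for_deletion(posts, older_than_years=5, post_type="status")
```

The point of the sketch is that the filtering logic is trivial; platforms already store this metadata, so the only missing piece is the user-facing tool.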
If It Ain't Broke, Don't Fix It. This essay has focused on the many things wrong with social media, but there are also many things not wrong with it. It is in the nature of business that companies always feel pressure to grow, expand, and improve, and for employees to find ways to justify their jobs, and this sometimes leads them to make unnecessary changes that end up introducing new problems. Tech companies should think twice before overhauling their sites, or altering or removing features if they were functioning perfectly well in the first place.
Strategies For Ourselves
Spend Less Time on Social Media. The best thing you can do is to simply spend less time on social media. Turn off email and push notifications for all of your social media accounts. Consider deleting the apps off of your phone and only using the desktop versions on your computer. If you find that you really aren’t getting much value out of some of your accounts, delete them. This is an attention economy. It needs attention to run. Starve the beast.
Tailor Your Feeds To Be Less Toxic. Include a healthy amount of apolitical, non-news, non-drama content. Being deluged in scandals, controversies, hot button issues, debates, outrage, and tragedies all day will turn you into a chest-beating shit-flinging baboon — it’s bad for your mental health, and it’s bad for public discourse. Nobody wins when your crimson ass is waving in the air except the tech companies, who ride your outrage addiction to the bank, laughing at how lucrative your dignity is, and how easily you surrendered it.
Don’t Use WhatsApp. WhatsApp is a messaging app owned by Facebook. Given that Facebook is the single biggest offender of the problems outlined above, plus the fact that there are many other messaging apps that do the same thing as WhatsApp (and do it just as well), we should all help diversify the social media landscape by switching to alternative services.
Protect Your Privacy. Use browsers (I prefer Brave), add-ons (such as ad blockers), search engines (like DuckDuckGo), and other tools that increase your privacy. Avoid things owned by the tech giants to diversify the landscape. Until such time when we have substantive data rights, making smart browsing and search choices will help limit the degree to which you feed this exploitative system and are exploited yourself.
Live By the Code. When on social media, there is a loose code you ought to live by. Call it the Seven Commandments:
Kindness — Remember that the person on the other end is a person too, with their own struggles and suffering.
Charity — Assume good faith from the outset, and try to understand other people’s points of view without seizing on the least flattering interpretation of their statements and going into attack mode.
Grace — None of us are perfect, and we all make mistakes. Build bridges to redemption, not pyres for burning heretics.
Less is More — Most internet arguments lead nowhere worthwhile. Learn to walk away.
Agree to Disagree — There are people in the world who do not believe as you do. That’s okay. You can’t convince or destroy them all, and it makes you a neurotic authoritarian weirdo to try. Accept that diversity of opinion exists, and be a good representative of your own views.
No Dog-Piling — Do not initiate or contribute to online lynch mobs. Do not feed this toxic culture.
Don’t Be a Psycho — Don’t doxx, don’t threaten people, don’t track them down, don’t contact their employer, or try to ruin their lives. This is psychotic behavior. It doesn't matter who it is, or what cause you think you’re fighting for. If you are convinced that someone is so dangerous that they must be stopped or brought to justice, then go to the authorities. If the person in question has in fact broken no laws, and is not “dangerous” in any legally actionable way, you should consider the possibility that you are overreacting, and take some time off social media for self care.
Whether you know it or not — whether you like it or not — you are taking part in a sort of social engineering in every interaction you have. Move the needle in the right direction.
Admittedly, the strategies outlined above do not address all of the problems listed at the outset. Making even a substantial dent in them is a project too large for a mere essay. Improving social media is a complex, multi-factorial endeavor — some of the problems are difficult to address, and some solutions may cause more problems than they solve. Indeed, some of the problems, such as social media’s bad influence on journalism, cannot be adequately addressed from the social media end of things, and are better tackled from the journalism side itself. Given the fluid, fast-moving technological nature of this subject, I may periodically revisit, add to, and revise this piece moving forward.
See also: “Twitter Is a Helluva Lot Better Than It’s Given Credit For”
Subscribe now and never miss a new post. Please also consider sharing this on your social networks. You can reach me at @AmericnDreaming on Twitter, or at AmericanDreaming08@Gmail.com.