ShaneG
Account Suspended
Member Since Jul 2020
Location: Unknown
Posts: 707
Nov 07, 2020 at 01:55 PM   #1
Back in 2016, I could count on one hand the kinds of interventions that technology companies were willing to use to rid their platforms of misinformation, hate speech, and harassment. Over the years, crude mechanisms like blocking content and banning accounts have morphed into a more complex set of tools, including quarantining topics, removing posts from search, barring recommendations, and down-ranking posts in priority.

And yet, even with more options at their disposal, misinformation remains a serious problem. There was a great deal of coverage about misinformation on Election Day—my colleague Emily Dreyfuss found, for example, that when Twitter tried to deal with content using the hashtag #BidenCrimeFamily, with tactics including “de-indexing” by blocking search results, users including Donald Trump adapted by using variants of the same tag. But we still don’t know much about how Twitter decided to do those things in the first place, or how it weighs and learns from the ways users react to moderation.
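To make that cat-and-mouse dynamic concrete, here is a minimal, hypothetical Python sketch of why a blocklist keyed to one literal hashtag is easy to route around, and why even simple normalization only narrows the gap. The blocklist, normalization rules, and variant spellings below are invented for illustration; they do not describe Twitter's actual de-indexing system.

    import re

    BLOCKED = {"#bidencrimefamily"}  # hypothetical blocklist with a single literal entry

    def normalize(tag):
        """Lowercase, strip separators, and undo common swaps (0->o, 1->i, 3->e, $->s)."""
        tag = tag.lower()
        tag = re.sub(r"[\s._\-]+", "", tag)   # "#Biden_Crime_Family" -> "#bidencrimefamily"
        return tag.translate(str.maketrans("013$", "oies"))

    def exact_block(tag):
        return tag in BLOCKED                 # what a naive exact-match filter sees

    def normalized_block(tag):
        return normalize(tag) in BLOCKED      # a slightly smarter filter

    variants = [
        "#BidenCrimeFamily",      # caught only after lowercasing
        "#Biden_Crime_Family",    # separator variant
        "#B1denCr1meFamily",      # character-swap variant
        "#BidenCrimeFamilly",     # misspelling: slips past both checks
    ]

    for v in variants:
        print(f"{v:22}  exact={exact_block(v)!s:5}  normalized={normalized_block(v)}")

Even the smarter check misses the misspelled variant, which is the adaptation pattern the #BidenCrimeFamily episode illustrates: users iterate on spellings faster than a static filter can keep up.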

As social media companies suspended accounts and labeled and deleted posts, many researchers, civil society organizations, and journalists scrambled to understand their decisions. The lack of transparency about those decisions and processes means that—for many—the election results end up with an asterisk this year, just as they did in 2016.

What actions did these companies take? How do their moderation teams work? What is the process for making decisions? Over the last few years, platform companies put together large task forces dedicated to removing election misinformation and labeling early declarations of victory. Sarah Roberts, a professor at UCLA, has written about the invisible labor of platform content moderators as a shadow industry, a labyrinth of contractors and complex rules which the public knows little about. Why don’t we know more?

In the post-election fog, social media has become the terrain for a low-grade war on our cognitive security, with misinformation campaigns and conspiracy theories proliferating. When the broadcast news business served the role of information gatekeeper, it was saddled with public interest obligations such as sharing timely, local, and relevant information. Social media companies have inherited a similar position in society, but they have not taken on those same responsibilities. This situation has loaded the cannons for claims of bias and censorship in how they moderated election-related content.

Bearing the costs
In October, I joined a panel of experts on misinformation, conspiracy, and infodemics for the House Permanent Select Committee on Intelligence. I was flanked by Cindy Otis, an ex-CIA analyst; Nina Jankowicz, a disinformation fellow at the Wilson Center; and Melanie Smith, head of analysis at Graphika.

As I prepared my testimony, Facebook was struggling to cope with QAnon, a militarized social movement being monitored by their dangerous-organizations department and condemned by the House in a bipartisan bill. My team has been investigating QAnon for years. This conspiracy theory has become a favored topic among misinformation researchers because of all the ways it has remained extensible, adaptable, and resilient in the face of platform companies' efforts to quarantine and remove it.

QAnon has also become an issue for Congress, because it’s no longer about people participating in a strange online game: it has touched down like a tornado in the lives of politicians, who are now the targets of harassment campaigns that cross over from the fever dreams of conspiracists to violence. Moreover, it’s happened quickly and in new ways. Conspiracy theories usually take years to spread through society, with the promotion of key political, media, and religious figures. Social media has sped this process through ever-growing forms of content delivery. QAnon followers don’t just comment on breaking news; they bend it to their bidding.

I focused my testimony on the many unnamed harms caused by the inability of social media companies to prevent misinformation from saturating their services. Journalists, public health and medical professionals, civil society leaders, and city administrators, like law enforcement and election officials, are bearing the cost of misinformation-at-scale and the burden of addressing its effects. Many people tiptoe around political issues when chatting with friends and family, but as misinformation about protests began to mobilize white vigilantes and medical misinformation led people to downplay the pandemic, different professional sectors took on important new roles as advocates for truth.

Take public health and medical professionals, who have had to develop resources for mitigating medical misinformation about covid-19. Doctors are attempting to become online influencers in order to correct bogus advice and false claims of miracle cures—taking time away from delivering care or developing treatments. Many newsrooms, meanwhile, adapted to the normalization of misinformation on social media by developing a “misinformation beat”—debunking conspiracy theories or fake news claims that might affect their readers. But those resources would be much better spent on sustaining journalism rather than essentially acting as third-party content moderators.

Civil society organizations, too, have been forced to spend resources on monitoring misinformation and protecting their base from targeted campaigns. Racialized disinformation is a seasoned tactic of domestic and foreign influence operations: campaigns either impersonate communities of color or use racism to boost polarization on wedge issues. Brandi Collins-Dexter testified about these issues at a congressional hearing in June, highlighting how tech companies hide behind calls to protect free speech at all costs without doing enough to protect Black communities targeted daily on social media with medical misinformation, hate speech, incitement, and harassment.

Election officials, law enforcement personnel, and first responders are at a serious disadvantage attempting to do their jobs while rumors and conspiracy theories spread online. Right now, law enforcement is preparing for violence at polling places.

A pathway to improve
When misinformation spreads from the digital to the physical world, it can redirect public resources and threaten people’s safety. This is why social media companies must take the issue as seriously as they take their desire to profit.

But they need a pathway to improve. Section 230 of the Communications Decency Act empowers social media companies to improve content moderation, but politicians have threatened to remove these protections so they can continue with their own propaganda campaigns. All throughout the October hearing, the specter loomed of a new agency that could independently audit civil rights violations, examine issues of data privacy, and assess the market externalities of this industry on other sectors.

As I argued during the hearing, the enormous reach of social media across the globe means it is important that regulation not begin with dismantling Section 230 until a new policy is in place.

Until then, we need more transparency. Misinformation is not solely about the facts; it’s about who gets to say what the facts are. Fair content moderation decisions are key to public accountability.

Rather than hold on to technostalgia for a time when it wasn’t this bad, sometimes it is worth asking what it would take to uninvent social media, so that we can chart a course for the web we want—a web that promotes democracy, knowledge, care, and equity. Otherwise, every unexplained decision by tech companies about access to information potentially becomes fodder for conspiracists and, even worse, the foundation for overreaching governmental policy.
 
Yaowen
Grand Magnate
Member Since Jan 2020
Location: USA
Posts: 3,618 (SuperPoster!)
Nov 07, 2020 at 04:15 PM   #2
Dear Shane,

Thanks for this!

Sincerely yours, Yao Wen
 
ShaneG
Account Suspended
Member Since Jul 2020
Location: Unknown
Posts: 707
Nov 07, 2020 at 09:27 PM   #3
Quote:
Originally Posted by Yaowen View Post
Dear Shane,

Thanks for this!

Sincerely yours, Yao Wen
(The light that shines...)

Thank you very much for your understanding of the subject.