
Twitter Faces Backlash for Permitting Widespread Circulation of Texas Shooting Images

Pat Holloway has been a photographer for 30 years, during which time she has covered a variety of devastating events, such as the 1993 siege in Waco, Texas, the 1995 bombing of a federal facility in Oklahoma City by Timothy McVeigh, and the 2011 tornado that hit Joplin, Missouri.

But this past weekend, she said in an interview, was the final straw. After gruesome pictures of the bloodied victims of a mass shooting at a Texas mall began spreading on Twitter, she wrote to Elon Musk, the platform’s owner, demanding action. The attack left at least nine people dead, including the gunman.

After the shooting on Saturday, many people, including Ms. Holloway, took to Twitter to express outrage that the platform had allowed graphic photographs, including one of a bloodied child, to go viral. Gory images have become commonplace on social media in an age when anyone with a smartphone and an internet connection can be a publisher, but the exceptionally brutal nature of these photographs drew repeated protests from users. The episode also cast an unflattering light on Twitter’s content moderation practices, which have been loosened since Mr. Musk acquired the company last year.

Twitter, like many other social media platforms, now faces a dilemma long familiar to print editors: deciding what, if anything, to show its readers. News organizations have generally shied away from the most gruesome images, though there have been notable exceptions, such as in 1955, when Jet magazine published open-casket photographs of Emmett Till, a 14-year-old Black boy who had been beaten to death in Mississippi, to expose the horrors of the Jim Crow-era South.

But unlike newspaper and magazine publishers, digital companies like Twitter must enforce their decisions across millions of users, relying on a mix of automated tools and human content moderators.

Meta, the parent company of Facebook, and Alphabet, the parent company of YouTube, both maintain substantial teams dedicated to limiting the spread of violent content. Since Mr. Musk purchased Twitter in late October, the company has cut the number of employees and contractors devoted to content moderation, largely through reductions to its trust and safety teams. Mr. Musk, who calls himself a “free speech absolutist,” said in November that he would create a “content moderation council” to decide which posts could remain online and which should be removed. He later abandoned that plan.

Twitter and Meta did not respond to requests for comment. A YouTube spokeswoman said the platform had begun removing videos of the shooting and was directing users to content from reputable news organizations instead.

Even before Mr. Musk took control, Twitter did not impose a blanket ban on violent or sexually explicit material. Images of people killed or wounded in the war in Ukraine, for example, have been allowed on the site on the grounds that they are newsworthy and informative. The company sometimes applies warning labels or pop-ups that require users to opt in before viewing graphic images.

Some people reposted the graphic photos of the carnage and the dead gunman for shock value, while others did so to underscore the grim reality of gun violence. One post sharing the images was captioned “The N.R.A.’s America.” “This isn’t going away,” another user wrote. The New York Times is not linking to the disturbing photos on social media.

Tech companies must balance users’ privacy against the value of keeping important images online, even when those images are uncomfortable to view, according to Claire Wardle, co-founder of the Information Futures Lab at Brown University. She pointed to the example of Kim Phuc Phan Thi, known as “Napalm Girl,” whose photograph, taken in agony after a napalm strike during the Vietnam War, became one of the conflict’s defining images.

The spread of graphic photographs and videos after horrific acts of violence has bedeviled social media platforms for years. Facebook came under fire last year when advertisements appeared alongside circulating copies of a livestream of a racist shooting spree in Buffalo, New York, which had originally been broadcast on Twitch. The Buffalo gunman said he was inspired by the 2019 mass shooting in Christchurch, New Zealand, which was streamed live on Facebook and left at least 50 people dead. Twitter has removed copies of the Christchurch video for years, arguing that the footage glorifies the gunman’s murderous ideology and that taking it down protects users.

Although Twitter was a major conduit for the gruesome photos from the Texas mall shooting, the images were less prevalent on other platforms on Sunday. Searches on Instagram, Facebook, and YouTube for the shooting in Allen, Texas, largely returned news articles and tamer eyewitness videos.

Sarah T. Roberts, a professor at the University of California, Los Angeles, who specializes in content moderation, drew a distinction between editors at traditional media companies and social media platforms, noting that the platforms are not bound by the same ethical standards as mainstream journalists, such as trying to minimize harm to viewers and to the loved ones of the dead.

David Faber is a business journalist at The National Era.