
Stop Hate Uk Twitter



Facebook Suspends Rules to Allow Some Calls for Violence Against Russian Invaders

What's happening

Facebook's parent company Meta said it's temporarily allowing some violent content against Russian invaders, making an unusual exemption to its rules against hate speech.

Why it matters

The move is already escalating tensions between Meta and Russia. Roskomnadzor, the country's telecommunications agency, said Friday it's restricting Instagram, a photo-and-video service owned by Meta. Russia's Investigative Committee is opening a criminal investigation against Meta.

What's next

Russia might take further action against Meta as it moves forward with the criminal case against the social media giant. The company also owns messaging app WhatsApp, though no restrictions on that service have been announced.

Facebook parent company Meta is setting aside its rules and allowing some violent speech against Russian invaders, saying it views these remarks as political speech. 

"As a result of the Russian invasion of Ukraine we have temporarily made allowances for forms of political expression that would normally violate our rules like violent speech such as 'death to the Russian invaders.' We still won't allow credible calls for violence against Russian civilians," Meta spokesman Andy Stone said in a tweet Thursday.

The rare exemption to the company's rules against hate speech, which bar people from posting content that targets a group of people, including violent content, shows how the world's largest social network is moderating content about Russia's invasion of Ukraine. The move, though, is already escalating tensions between Meta and the Russian government. 

Russia's Investigative Committee said in a statement Friday that it's opened a criminal case against Meta for allegedly violating provisions of the Russian Federation's criminal code that bar public calls for extremist activities and assistance in terrorist activities. 

"As part of the criminal case, the necessary investigative measures are being carried out to give a legal evaluation to actions of Andy Stone and other employees of the American corporation," the committee, which reports to Russia President Vladimir Putin, said in the statement. 

Facebook has been facing growing calls to crack down more heavily on propaganda and misinformation. Last week, Russia said it was blocking the social network after Facebook started to make content from Russian state-controlled media tougher to find on its platform and tapped third-party fact-checkers to debunk false claims. On Friday, Russia's telecommunications regulator, Roskomnadzor, said in a statement that the Prosecutor General's Office of Russia had demanded the agency also restrict access to Meta-owned photo-and-video service Instagram. Roskomnadzor said the restrictions will take effect March 14, giving users time to transfer their photos and videos to other social networks and to notify their followers and contacts. 

Nick Clegg, who leads global affairs at Meta, said in a statement Friday that the company's policies are "focused on protecting people's rights to speech as an expression of self-defense in reaction to a military invasion of their country." He added that Meta is applying the exemption only in Ukraine and that it made the decision because of "extraordinary and unprecedented circumstances."

"We have no quarrel with the Russian people. There is no change at all in our policies on hate speech as far as the Russian people are concerned. We will not tolerate Russophobia or any kind of discrimination, harassment or violence towards Russians on our platform," Clegg said.

The Russian Embassy in the US also responded to Thursday's decision, saying Meta's actions were equivalent to a declaration of information war against Russia, according to a report by Russian state-operated news agency Novosti. In a post on Twitter, the embassy called on US authorities to "stop the extremist activities of Meta."

For years, Facebook has also grappled with criticism that its rules are enforced unevenly. The company created a semi-independent oversight board to weigh in on its toughest content moderation decisions. 

Reuters, which first reported the policy change, said that in certain countries, including Russia, Ukraine and Poland, the social media giant is also allowing some posts that call for death to Russian President Vladimir Putin or Belarusian President Alexander Lukashenko. The changes also apply to Instagram. 

Citing internal emails, Reuters said that calls for death won't be allowed if they contain other targets or include "two indicators of credibility" such as the location or method of death. The posts must also be about the invasion of Ukraine. Calls for violence against Russian soldiers will also be allowed in Armenia, Azerbaijan, Estonia, Georgia, Hungary, Latvia, Lithuania, Poland, Romania, Russia, Slovakia and Ukraine, Reuters reported.
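
Pieced together from the Reuters description alone, here is a minimal sketch of how such an exemption rule might be expressed in code. The function name, its parameters and the structure are illustrative assumptions, not Meta's actual moderation logic; only the country list and the conditions come from the report above.

```python
# Illustrative sketch of the temporary allowance as Reuters describes it.
# NOT Meta's actual moderation code; parameters are hypothetical.

EXEMPT_COUNTRIES = {
    "Armenia", "Azerbaijan", "Estonia", "Georgia", "Hungary", "Latvia",
    "Lithuania", "Poland", "Romania", "Russia", "Slovakia", "Ukraine",
}

def falls_under_exemption(country: str,
                          about_invasion: bool,
                          targets_civilians: bool,
                          names_other_targets: bool,
                          credibility_indicators: int) -> bool:
    """Return True if a violent post would be tolerated under the
    temporary allowance, per the Reuters account."""
    if country not in EXEMPT_COUNTRIES:
        return False  # allowance applies only in the listed countries
    if not about_invasion:
        return False  # the post must be about the invasion of Ukraine
    if targets_civilians:
        return False  # credible calls against civilians stay banned
    if names_other_targets:
        return False  # a call for a leader's death must name no other target
    # "two indicators of credibility", such as location or method, disqualify it
    return credibility_indicators < 2
```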

Also Thursday, Facebook and Twitter removed posts from Russia's embassy in the UK over false claims surrounding the bombing of a maternity hospital in the Ukrainian city of Mariupol on Wednesday.

At least one child and two adults were killed at the hospital and another 17 people were injured, Ukrainian officials have said.

Meta didn't immediately answer questions about how long the exemption will remain in place or how many posts may be affected. 

Meta hasn't released data about how many Facebook and Instagram users are in Russia. App analytics firm Sensor Tower estimates that since 2014 Instagram has been installed 166 million times from Google Play and the Apple App Store in Russia. Facebook in Russia has an estimated 56.2 million installs. Sensor Tower says that based on that data, Russia is the fifth largest market for Instagram and the 20th largest market for Facebook.



Twitter Could Cut Back on Hate Speech With Suspension Warnings, Study Says

Since Twitter launched in 2006, it's become a giant networking event, bar hangout, meme generator and casual conversation hub stuffed into one. But for every timely, 280-character news update and witty remark, you'll find a violent, hateful post.

Among the experts strategizing to disarm the dark side of Twitter, a team from New York University ran an experiment to test whether warning accounts that hate speech will result in suspension actually works. Turns out, it could be pretty effective.

After studying over 4,300 Twitter users and 600,000 tweets, the scientists found warning accounts of such consequences "can significantly reduce their hateful language for one week." That dip was even more apparent when warnings were phrased politely.

Hopefully the team's paper, published Monday in the journal Perspectives on Politics, will help address the racist, vicious and abusive content that pollutes social media. 

"Debates over the effectiveness of social media account suspensions and bans on abusive users abound, but we know little about the impact of either warning a user of suspending an account or of outright suspensions in order to reduce hate speech," Mustafa Mikdat Yildirim, an NYU doctoral candidate and the lead author of the paper, said in a statement. 

"Even though the impact of warnings is temporary, the research nonetheless provides a potential path forward for platforms seeking to reduce the use of hateful language by users."

These warnings, Mikdat Yildirim observed, don't even have to come from Twitter itself. The ratio of tweets containing hateful speech per user dropped by 10% to 20% even when the warning came from a standard Twitter account with just 100 followers -- an "account" the team created for experimental purposes.
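
The metric behind those figures is the share of a user's tweets that contain hateful language. Here is a minimal sketch of how such a ratio might be computed, assuming a simple word-level match against a hate-speech dictionary; the tokenization, the stand-in terms and the data layout are assumptions, not the paper's code.

```python
def hateful_ratio(tweets: list[str], hate_terms: set[str]) -> float:
    """Fraction of a user's tweets containing at least one hateful term."""
    if not tweets:
        return 0.0
    hateful = sum(1 for text in tweets
                  if set(text.lower().split()) & hate_terms)
    return hateful / len(tweets)

# Toy usage with stand-in terms (the study used a hate-speech
# dictionary published by a researcher in 2017).
terms = {"slur1", "slur2"}
week_before = ["you slur1", "nice weather", "hello"]
week_after = ["nice weather", "hello", "good morning"]
print(hateful_ratio(week_before, terms))  # 0.333...
print(hateful_ratio(week_after, terms))   # 0.0
```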

"We suspect, as well, that these are conservative estimates, in the sense that increasing the number of followers that our account had could lead to even higher effects...to say nothing of what an official warning from Twitter would do," they write in the paper.

At this point you might be wondering: Why bother "warning" hate speech endorsers when we can just rid Twitter of them? Intuitively, an immediate suspension should achieve the same, if not stronger, effect.

Why not just ban hate speech ASAP?

While online hate speech has existed for decades, it's ramped up in recent years, particularly toward minorities. Physical violence as a result of such negativity has seen a spike as well. That includes tragedies like mass shootings and lynchings.

But there's evidence to show unannounced account removal may not be the way to combat the matter.

As an example, the paper points to former President Donald Trump's notorious and erroneous tweets following the 2020 United States presidential election. They consisted of election misinformation, like calling the results fraudulent, and praise for the rioters who stormed the Capitol on January 6, 2021. His account was promptly suspended.

Twitter said the suspension was "due to the risk of further incitement of violence," but Trump then sought other ways of posting online, such as tweeting through the official @Potus account. "Even when bans reduce unwanted deviant behavior within one platform, they might fail in reducing the overall deviant behavior within the online sphere," the paper says. 

Twitter suspended President Donald Trump's Twitter account on Jan. 8, 2021. (Screenshot by Stephen Shankland/CNET)

In contrast to quick bans or suspensions, Mikdat Yildirim and his fellow researchers say warnings of account suspension could curb the issue long term, because users will try to protect their accounts rather than simply moving somewhere else as a last resort.

Experimental evidence for warning signals

There were a few steps to the team's experiment. First, they created six Twitter accounts with names like @basic_person_12, @hate_suspension and @warner_on_hate. 

Then, on July 21, 2020, they downloaded 600,000 tweets that had been posted the week prior, to identify accounts likely to be suspended during the course of the study. This period saw an uptick in hate speech against Asian and Black communities, the researchers say, due to COVID-19 backlash and the Black Lives Matter movement.

Sifting through those tweets, the team picked out any that used hateful language, as defined by a dictionary a researcher published in 2017, and isolated tweets from accounts created after January 1, 2020. They reasoned that newer accounts are more likely to be suspended -- and over 50 of those accounts did, in fact, get suspended. 

Anticipating those suspensions, the researchers had gathered the follower lists of 27 of those accounts beforehand. After a bit more filtering, they ended up with 4,327 Twitter users to study. "We limited our participant population to people who had previously used hateful language on Twitter and followed someone who actually had just been suspended," they clarify in the paper. 
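
As a rough illustration of that selection funnel, here is a sketch under stated assumptions: the record layout, field names and stand-in dictionary are all hypothetical, not the NYU team's actual pipeline.

```python
from datetime import datetime, timezone

HATE_TERMS = {"slur1", "slur2"}  # stand-in for the 2017 hate-speech dictionary
CUTOFF = datetime(2020, 1, 1, tzinfo=timezone.utc)

def pick_candidate_accounts(tweets: list[dict]) -> set[str]:
    """Accounts that used hateful language and were created after the cutoff."""
    candidates = set()
    for t in tweets:
        if not set(t["text"].lower().split()) & HATE_TERMS:
            continue  # tweet uses no term from the dictionary
        if t["account_created"] < CUTOFF:
            continue  # newer accounts are the ones most likely to be suspended
        candidates.add(t["user_id"])
    return candidates

def pick_participants(followers_of_suspended: set[str],
                      prior_hateful_users: set[str]) -> set[str]:
    """Participants followed a just-suspended account and had used hate speech."""
    return followers_of_suspended & prior_hateful_users
```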

Next, the team sent warnings of different politeness levels -- the politest of which, they believe, created an air of "legitimacy" -- from each account to the candidates, who were divided into six groups. One control group didn't receive a message.
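
A minimal sketch of that assignment step, assuming simple seeded random allocation into equal groups; the group indexing and which arm serves as the control are assumptions, not the paper's procedure.

```python
import random

def assign_groups(user_ids: list[str], n_groups: int = 6, seed: int = 0) -> dict:
    """Shuffle participants and split them evenly across treatment arms."""
    rng = random.Random(seed)
    users = list(user_ids)
    rng.shuffle(users)
    return {g: users[g::n_groups] for g in range(n_groups)}

groups = assign_groups([f"user{i}" for i in range(4327)])
control = groups[5]  # hypothetical: one arm receives no warning message
```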

Legitimacy, they believe, was important because "to effectively convey a warning message to its target, the message needs to make the target aware of the consequences of their behavior and also make them believe that these consequences will be administered," they write.

Ultimately, the method led to a reduction in the ratio of hateful posts by 10% for blunt warnings, such as "If you continue to use hate speech, you might lose your posts, friends and followers, and not get your account back" and by 15% to 20% with more respectful warnings, which included sentiments like "I understand that you have every right to express yourself but please keep in mind that using hate speech can get you suspended." 

But it's not that simple

Even so, the research team notes that "we stop short, however, of unambiguously recommending that Twitter simply implement the system we tested without further study because of two important caveats."

First, they say a message from a large corporation like Twitter could create backlash in a way the study's smaller accounts did not. Second, Twitter wouldn't have the benefit of ambiguity in its suspension warnings: it can't really say "you might" lose your account, so it would need a blanket rule. 

And with any blanket rule, there could be wrongfully accused users. 

"It would be important to weigh the incremental harm that such a warning program could bring to an incorrectly suspended user," the team writes. 

Although the main impact of the team's warnings faded about a month later, and a couple of avenues remain to be explored, they still argue this technique could be a tenable way to mitigate the violent, racist and abusive speech that continues to imperil the Twitter community.

