Hate Speech On Youtube


Showing posts sorted by date for query Hate Speech On Youtube.

Can Facebook Be Broken Up? What You Need To Know

Facebook CEO Mark Zuckerberg wields so much power that even one of the social network's co-founders thinks it's both "unprecedented" and "un-American."

Chris Hughes, who co-founded Facebook with Zuckerberg while they were students at Harvard, called for the social network to be broken up in an op-ed published Thursday by The New York Times. "I'm angry that his focus on growth led him to sacrifice security and civility for clicks," Hughes wrote, referring to Facebook's boss and major shareholder. "I'm disappointed in myself and the early Facebook team for not thinking more about how the News Feed algorithm could change our culture, influence elections and empower nationalist leaders."

Facebook's rapid growth has been fueled by acquisitions, including Instagram and WhatsApp, a messaging service. Critics and experts say Facebook simply purchased its competition, rather than innovating to meet the challenges they posed.

"Their whole business model is to identify potential threats and then buy them or beat them in some way," said Stephen Diamond, an associate professor of law at Santa Clara University School of Law.

And Facebook has been called out for not doing enough to combat election meddling, misinformation and hate speech. Its enormous power, critics argue, needs to be kept in check. Facebook doesn't want to spin off Instagram and WhatsApp.

Here's what you need to know:

Who wants Facebook broken up? Why?

Calls to break up Facebook aren't new. But it is startling to hear one of the company's co-founders call for such an extreme measure. Hughes argues that Zuckerberg holds so much power that even the company's board of directors can't hold him accountable. Zuckerberg controls around 60 percent of Facebook's voting shares, which means the board technically can't fire him even if he messes up.

Hughes isn't alone. Advocacy groups, including the Electronic Privacy Information Center, Color of Change and Common Sense Media, have previously asked the Federal Trade Commission, the agency that enforces antitrust law, to make Instagram and WhatsApp separate companies. A split would also make it easier for other social media companies to compete with Facebook, the organizations argue.

In addition, a group called Freedom from Facebook has called on the FTC to force Facebook to spin off its Messenger service too.

Sen. Elizabeth Warren, a Democrat from Massachusetts who's also a presidential candidate, is among the lawmakers who want to break up Facebook, as well as other tech giants, including Google and Amazon.

How would Facebook be broken up?

One way to break up Facebook would be for the federal government to file a lawsuit against the company, arguing it stifles competition. That could prompt a negotiation between the parties that could lead to Facebook agreeing to make itself smaller.

Another option would be for Congress to pass a law covering tech monopolies. Warren has proposed such a law, which would require tech platforms that take in $25 billion or more in sales to "structurally separate" their products. Amazon, for example, would have to spin off its house brand Amazon Basics. Warren said that if she won the presidential election her administration would also appoint regulators to unwind the mergers of Instagram and WhatsApp from Facebook.

What does Facebook think about the idea?

Facebook has pushed back, arguing that breaking up the company wouldn't hold the social network more accountable for its actions. Instead, Facebook has called for more internet regulation around harmful content, election integrity, privacy and data portability.

"Accountability of tech companies can only be achieved through the painstaking introduction of new rules for the internet," Nick Clegg, Facebook's vice president of global affairs and communications, said in a statement Thursday. The social network also said that having Instagram and WhatsApp under Facebook helps them fight spam, election meddling and crime. Facebook says it has plenty of competition, pointing to YouTube, Snapchat, iMessage and WeChat, among others.

Clegg touched on all those points in a Saturday editorial in The New York Times.

Have tech companies been broken up in the past?

Yes, but it's unusual. In 1974, the US Department of Justice filed an antitrust lawsuit against AT&T, but the matter wasn't settled until eight years later. The telephone company was required to spin off two-thirds of its assets into separate companies, according to a 1982 article from The Washington Post. The government has also tried to break up Microsoft, and in 2000 a US federal judge ordered that the tech giant be split into two companies. Microsoft appealed and the decision was reversed.

What would this mean for users of Facebook?

Facebook is trying to integrate its messaging services so users of Facebook, Instagram and WhatsApp can send messages to one another without switching apps. Splitting up the companies might prevent that from happening.

Those who want the government to break Facebook up argue the move would fuel more competition among social media companies, which could mean more options for consumers. About 2.7 billion people use Facebook, Instagram, WhatsApp or Messenger every month.

Diamond said that breaking up Facebook could also lead to the company stepping up its privacy efforts to match its social media competitors.

What are the chances this happens?

The FTC declined to comment on whether it's looking to break up Facebook. But if history is any indication, it would be a rare move.

"I doubt there is sufficient political momentum to break up Facebook," Diamond said. "I'm skeptical, even though I think there might be good reasons to do it."

Originally published May 10, 5:40 a.m. PT.
Update, May 11: Adds mention of Facebook VP Nick Clegg's Saturday editorial in the Times.



Twitter Could Cut Back On Hate Speech With Suspension Warnings, Study Says

Since Twitter launched in 2006, it's become a giant networking event, bar hangout, meme generator and casual conversation hub stuffed into one. But for every timely 280-character news update and witty remark, you'll find a violent, hateful post.

Among the experts strategizing to disarm the dark side of Twitter, a team from New York University ran an experiment to test whether warning accounts that hate speech will result in suspension works as a deterrent. It turns out the technique could be pretty effective.

After studying over 4,300 Twitter users and 600,000 tweets, the scientists found warning accounts of such consequences "can significantly reduce their hateful language for one week." That dip was even more apparent when warnings were phrased politely.

Hopefully the team's paper, published Monday in the journal Perspectives on Politics, will help address the racist, vicious and abusive content that pollutes social media. 

"Debates over the effectiveness of social media account suspensions and bans on abusive users abound, but we know little about the impact of either warning a user of suspending an account or of outright suspensions in order to reduce hate speech," Mustafa Mikdat Yildirim, an NYU doctoral candidate and the lead author of the paper, said in a statement. 

"Even though the impact of warnings is temporary, the research nonetheless provides a potential path forward for platforms seeking to reduce the use of hateful language by users."

These warnings, Mikdat Yildirim observed, don't even have to come from Twitter itself. The ratio of tweets containing hateful speech per user fell by 10% to 20% even when the warning originated from a standard Twitter account with just 100 followers -- an "account" made by the team for experimental purposes.

"We suspect, as well, that these are conservative estimates, in the sense that increasing the number of followers that our account had could lead to even higher effects...to say nothing of what an official warning from Twitter would do," they write in the paper.

At this point you might be wondering: Why bother "warning" users who post hate speech when we can just rid Twitter of them? Intuitively, an immediate suspension should achieve the same, if not a stronger, effect.

Why not just ban hate speech ASAP?

While online hate speech has existed for decades, it has ramped up in recent years, particularly toward minorities. Physical violence linked to such rhetoric has spiked as well, including tragedies like mass shootings and lynchings.

But there's evidence that unannounced account removal may not be the best way to combat the problem.

As an example, the paper points out former President Donald Trump's notorious and erroneous tweets following the 2020 United States presidential election. They consisted of election misinformation like calling the results fraudulent and praise for rioters who stormed the Capitol on January 6, 2021. His account was promptly suspended.

Twitter said the suspension was "due to the risk of further incitement of violence," but the problem was Trump later attempted to access other ways of posting online, such as tweeting through the official @Potus account. "Even when bans reduce unwanted deviant behavior within one platform, they might fail in reducing the overall deviant behavior within the online sphere," the paper says. 

Twitter suspended President Donald Trump's Twitter account on Jan. 8, 2021.

Screenshot by Stephen Shankland/CNET

In contrast to quick bans or suspensions, Mikdat Yildirim and fellow researchers say warnings of account suspension could curb the issue long term because users will try to protect their account instead of moving somewhere else as a last resort.

Experimental evidence for warning signals

There were a few steps to the team's experiment. First, they created six Twitter accounts with names like @basic_person_12, @hate_suspension and @warner_on_hate. 

Then, on July 21, 2020, they downloaded 600,000 tweets posted during the prior week to identify accounts likely to be suspended during the course of the study. This period saw an uptick in hate speech against Asian and Black communities, the researchers say, due to COVID-19 backlash and the Black Lives Matter movement.

Sifting through those tweets, the team picked out any that used hate language, as defined by a dictionary a researcher outlined in 2017, and isolated the ones from accounts created after January 1, 2020, reasoning that newer accounts are more likely to be suspended. More than 50 of those accounts did, in fact, get suspended.

Anticipating those suspensions, the researchers gathered 27 of those accounts' follower lists beforehand. After a bit more filtering, the researchers ended up with 4,327 Twitterers to study. "We limited our participant population to people who had previously used hateful language on Twitter and followed someone who actually had just been suspended," they clarify in the paper. 

Next, the team sent warnings of different politeness levels -- the politest of which they believe created an air of "legitimacy" -- from each account to the participants, who were divided into six groups. A control group didn't receive a message.

Legitimacy, they believe, was important because "to effectively convey a warning message to its target, the message needs to make the target aware of the consequences of their behavior and also make them believe that these consequences will be administered," they write.

Ultimately, the method led to a reduction in the ratio of hateful posts by 10% for blunt warnings, such as "If you continue to use hate speech, you might lose your posts, friends and followers, and not get your account back" and by 15% to 20% with more respectful warnings, which included sentiments like "I understand that you have every right to express yourself but please keep in mind that using hate speech can get you suspended." 
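The headline numbers above are relative reductions in each user's hateful-tweet ratio. A minimal sketch of that computation follows; the function names and sample ratios are illustrative, not taken from the paper.

```python
def hateful_ratio(hateful_count: int, total_count: int) -> float:
    """Share of a user's tweets that contain hateful language."""
    return hateful_count / total_count if total_count else 0.0

def percent_reduction(ratio_before: float, ratio_after: float) -> float:
    """Relative drop in the hateful-tweet ratio, expressed as a percentage."""
    if ratio_before == 0:
        return 0.0
    return (ratio_before - ratio_after) / ratio_before * 100
```

For example, a user whose hateful-tweet share drops from 0.20 to 0.17 after a warning shows a 15% reduction, in the range the study reported for the more respectful messages.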

But it's not that simple

Even so, the research team notes that "we stop short, however, of unambiguously recommending that Twitter simply implement the system we tested without further study because of two important caveats."

First, they say a message from a large corporation like Twitter could create backlash in a way the study's smaller accounts did not. Second, Twitter wouldn't have the benefit of ambiguity in suspension messages. It can't really say "you might" lose your account. Thus, it would need a blanket rule.

And with any blanket rule, there could be wrongfully accused users. 

"It would be important to weigh the incremental harm that such a warning program could bring to an incorrectly suspended user," the team writes. 

Although the main impact of the team's warnings faded about a month later and several avenues remain unexplored, the researchers still suggest the technique could be a tenable option for mitigating the violent, racist and abusive speech that continues to plague the Twitter community.



Facebook Removed More Than 20 Million Posts For COVID-19 Misinformation

Facebook and its photo service Instagram took down more than 20 million pieces of content containing COVID-19 misinformation between the start of the pandemic and June, but the company couldn't say how prevalent these types of false claims are on its platforms.

The social network measures the prevalence of other types of content such as hate speech and adult nudity because it gives the company a sense of what offensive posts Facebook missed. Providing this metric for COVID-19 misinformation, the company said, is more complex. 

"When it comes to COVID, though, things are evolving even more quickly so it does make prevalence even more difficult to define and measure," said Guy Rosen, Facebook's vice president of integrity, during a press conference on Wednesday.

The action came about a month after the White House singled out Facebook, saying that about a dozen people were responsible for creating 65% of the vaccine misinformation on social media platforms -- all of whom remained active on the social networking giant's platforms.

Despite the action against the "disinformation dozen," the White House continued to criticize Facebook's response to misinformation.

"In the middle of a pandemic, being honest and transparent about the work that needs to be done to protect public health is absolutely vital, but Facebook still refuses to be straightforward about how much misinformation is circulating -- and being actively promoted -- on their platform," a White House spokesperson told CNN Business on Wednesday.

Facebook didn't immediately respond to a request for comment on the spokesperson's remarks.

Politicians, including US President Joe Biden, and advocacy groups have criticized social networks for failing to effectively combat the spread of COVID-19 and vaccine misinformation. Facebook partners with fact-checkers, directs people to authoritative information and labels misinformation. But researchers have questioned how effective those measures are in curbing the spread of false claims online.

"There will always be examples of things we missed and, with a scale of our enforcement, there will be examples of things that we take down by mistake," Rosen said. "There is no perfect here." 

Facebook said it has more than 65 criteria for false claims about COVID-19 and vaccines that would prompt it to remove posts from its platforms. The company has added to this list, including false claims that COVID-19 vaccines cause Alzheimer's and that being around vaccinated people could cause secondary side effects to others.

The social network said it removed more than 3,000 accounts, pages and groups for violating its rules against COVID-19 and vaccine misinformation. It has also displayed warnings on more than 190 million pieces of COVID-related content on Facebook that fact-checkers rated, and it displays these posts lower in people's News Feeds.

Facebook, which partnered with Carnegie Mellon University and the University of Maryland on a COVID-19 survey, said that vaccine hesitancy among US Facebook users has declined by 50%. Vaccine acceptance increased by 35% in France, 25% in Indonesia and 20% in Nigeria, the social network said.

The company also shared new data including what domains, links, pages and posts were the most widely viewed in the US on Facebook between April and June. Facebook counts a view when content appears on the News Feed, so the metric differs from engagement. The social network owns data analytics tool CrowdTangle, but executives have reportedly raised concerns about data that shows high engagement with right-wing sites. 

"The narrative that has emerged is quite simply wrong," Rosen said, noting that CrowdTangle includes data about interactions from a limited set of certain pages, groups and accounts.

Facebook said the most viewed domain was YouTube. The most viewed link was the Player Alumni Resources, and the top page was from Unicef. The most viewed post was an image from a motivational speaker that asked people about the first words they see in a block of letters.



