
How To Report Something On Twitter


Showing posts sorted by date for query How To Report Something On Twitter. Sort by relevance Show all posts

Facebook Suspends Rules To Allow Some Calls For Violence Against Russian Invaders



What's happening

Facebook's parent company Meta said it's temporarily allowing some violent content against Russian invaders, making an unusual exemption to its rules against hate speech.

Why it matters

The move is already escalating tensions between Meta and Russia. Roskomnadzor, the country's telecommunications agency, said Friday it's restricting Instagram, a photo-and-video service owned by Meta. Russia's Investigative Committee is opening a criminal investigation against Meta.

What's next

Russia might take more actions against Meta as it moves forward with the criminal case against the social media giant. The company also owns messaging app WhatsApp, though no restrictions against that service have been announced.

Facebook parent company Meta is setting aside its rules and allowing some violent speech against Russian invaders, saying it views these remarks as political speech. 

"As a result of the Russian invasion of Ukraine we have temporarily made allowances for forms of political expression that would normally violate our rules like violent speech such as 'death to the Russian invaders.' We still won't allow credible calls for violence against Russian civilians," Meta spokesman Andy Stone said in a tweet Thursday.

The rare exemption to the company's hate speech rules, which bar people from posting content, including violent content, that targets a group of people, shows how the world's largest social network is moderating content about Russia's invasion of Ukraine. The move, though, is already escalating tensions between Meta and the Russian government. 

Russia's Investigative Committee said in a statement Friday that it's opened a criminal case against Meta for allegedly violating provisions of the Russian Federation's criminal code that bar public calls for extremist activities and assistance in terrorist activities. 

"As part of the criminal case, the necessary investigative measures are being carried out to give a legal evaluation to actions of Andy Stone and other employees of the American corporation," the committee, which reports to Russian President Vladimir Putin, said in the statement. 

Facebook has been facing a greater number of calls to crack down more heavily on propaganda and misinformation. Last week, Russia said it was blocking the social network after Facebook started to make content from Russian state-controlled media tougher to find on its platform and tapped third-party fact-checkers to debunk false claims. On Friday, Russia's telecommunications regulator, Roskomnadzor, said in a statement that the Prosecutor General's Office of Russia demanded that the agency also restrict access to Meta-owned photo-and-video service Instagram. Roskomnadzor said the restrictions will take effect March 14 to allow users to transfer their photos and videos to other social networks and notify their followers and contacts. 

Nick Clegg, who leads global affairs at Meta, said in a statement Friday that the company's policies are "focused on protecting people's rights to speech as an expression of self-defense in reaction to a military invasion of their country." He added that Meta is applying the exemption only in Ukraine and that it made the decision because of "extraordinary and unprecedented circumstances."

"We have no quarrel with the Russian people. There is no change at all in our policies on hate speech as far as the Russian people are concerned. We will not tolerate Russophobia or any kind of discrimination, harassment or violence towards Russians on our platform," Clegg said.

The Russian Embassy in the US also responded to Thursday's decision, saying Meta's actions were equivalent to a declaration of information war against Russia, according to a report by Russian state-operated news agency Novosti. In a post on Twitter, the embassy called on US authorities to "stop the extremist activities of Meta."

For years, Facebook has also grappled with criticism that its rules are enforced unevenly. The company created a semi-independent oversight board to weigh in on its toughest content moderation decisions. 

Reuters, which first reported the policy change, said that in certain countries, including Russia, Ukraine and Poland, the social media giant is also allowing some posts that call for death to Russian President Vladimir Putin or Belarusian President Alexander Lukashenko. The changes also apply to Instagram. 

Citing internal emails, Reuters said that calls for death won't be allowed if they contain other targets or include "two indicators of credibility" such as the location or method of death. The posts must also be about the invasion of Ukraine. Calls for violence against Russian soldiers will also be allowed in Armenia, Azerbaijan, Estonia, Georgia, Hungary, Latvia, Lithuania, Poland, Romania, Russia, Slovakia and Ukraine, Reuters reported.
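As a rough illustration only (this is not Meta's actual implementation, and every name and field below is a hypothetical assumption), the conditions Reuters describes could be modeled as a single predicate:

```python
# Hypothetical sketch of the reported exemption logic, based solely on the
# conditions described by Reuters. Names and structure are illustrative.

ALLOWED_COUNTRIES = {
    "Armenia", "Azerbaijan", "Estonia", "Georgia", "Hungary", "Latvia",
    "Lithuania", "Poland", "Romania", "Russia", "Slovakia", "Ukraine",
}
EXEMPT_TARGETS = {"Putin", "Lukashenko", "Russian soldiers"}

def exemption_applies(country, targets, credibility_indicators, about_invasion):
    """Return True if a post falls under the reported temporary exemption."""
    if country not in ALLOWED_COUNTRIES:
        return False
    if not about_invasion:                   # must concern the invasion of Ukraine
        return False
    if not set(targets) <= EXEMPT_TARGETS:   # no other targets may be included
        return False
    if credibility_indicators >= 2:          # e.g. location or method of death
        return False
    return True
```

The sketch simply encodes the four reported conditions as early-exit checks; the real moderation pipeline is far more complex and was not made public.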

Also Thursday, Facebook and Twitter removed posts from Russia's embassy in the UK over false claims surrounding the bombing of a maternity hospital in the Ukrainian city of Mariupol on Wednesday.

At least one child and two adults were killed at the hospital and another 17 were injured, Ukrainian officials have said.

Meta didn't immediately answer questions about how long it expects the exemption will be in place or the number of posts that may be impacted. 

Meta hasn't released data about how many Facebook and Instagram users are in Russia. App analytics firm Sensor Tower estimates that since 2014 Instagram has been installed 166 million times from Google Play and the Apple App Store in Russia. Facebook in Russia has an estimated 56.2 million installs. Sensor Tower says that based on that data, Russia is the fifth largest market for Instagram and the 20th largest market for Facebook.



Twitter's New Method For Reporting Harmful Content Is Live



Twitter's revamped process for reporting policy violations is now available globally, the company said Friday. 

The overhauled process was first outlined in a December blog post. The idea is to shift the focus to asking what happened, instead of asking the person doing the reporting to classify the incident. 

"The vast majority of what people are reporting on fall within a much larger gray spectrum that don't meet the specific criteria of Twitter violations, but they're still reporting what they are experiencing as deeply problematic and highly upsetting," said Renna Al-Yassini, a senior UX manager on the team, in that December post.

Twitter said it saw the number of actionable reports increase by 50% using the new system. The company also said its prior system left people feeling frustrated. The new approach was first tested within a small group of users in the US. 



Facebook, WhatsApp And Instagram Coming Back Online After Widespread Outage



Facebook, WhatsApp and Instagram are starting to come back online after a widespread outage lasted more than six hours on Monday, disrupting communications for the company's roughly 3 billion users. 

"To the huge community of people and businesses around the world who depend on us: we're sorry. We've been working hard to restore access to our apps and services and are happy to report they are coming back online now. Thank you for bearing with us," Facebook said in a tweet.

The three social networks -- all owned by Facebook -- started having issues around 11:40 a.m. ET, according to Down Detector, a crowdsourced website that tracks online outages. 

The company acknowledged that it was having issues shortly after noon ET, saying in a tweet from its WhatsApp account that it's "working to get things back to normal and will send an update here as soon as possible." Similar messages were shared on the Twitter accounts for Facebook and Facebook Messenger. 

Hours later, Facebook CTO Mike Schroepfer said in a tweet that the company was "experiencing networking issues" and working as fast as possible to "debug and restore" its services.

Facebook later said in a company blog post that it believed a "faulty configuration" change was the cause of the outage.

The outage -- and the resulting reaction on Twitter -- underscores both our dependency on the social networks and the love-hate relationship they inspire. Being unable to post on Facebook or Instagram elicited equal parts frustration and relief, with some relishing the break from being constantly connected to our digital lives. Ironically, it's those very social media platforms that allow us to express our collectively mixed feelings about the situation. 

Outages are nothing new in the online world, and services often go offline or experience slowdowns. Facebook's outage on Monday, however, was unusual in that it struck a suite of the company's products, including its central site and WhatsApp, an encrypted messaging service used widely around the world. Facebook is deeply enmeshed in global infrastructure and the outage disrupted communications for the company's billions of users. The website and its services are used for everything from casual chatting to business transactions.

It isn't immediately clear what caused the issue for the three properties. Security expert Brian Krebs said it appears to be a DNS-related issue, adding that something "caused the company to revoke key digital records that tell computers and other Internet-enabled devices how to find these destinations online."

Cloudflare, a content delivery network that hosts customers' data for fast access around the world, had its own explanation of what might have happened.

"Facebook and its sites had effectively disconnected themselves from the Internet," Cloudflare concluded. "It was as if someone had 'pulled the cables' from their data centers all at once and disconnected them from the internet."

Facebook's problem involved a combination of two fundamental internet technologies, BGP and DNS, both instrumental to helping computing devices to connect across the network. The Border Gateway Protocol (BGP) helps establish the best way to send data hopping from one device to another until it reaches its final destination. The Domain Name System (DNS) translates human comprehensible network names like facebook.com into the numeric Internet Protocol (IP) addresses that actually are used to address and route data across the internet.

Just before 9 a.m. PT, Cloudflare detected a flurry of unusual updates from Facebook describing changes to how BGP should handle Facebook's part of the network. Specifically, the updates cut off network routes to Facebook's DNS servers. With those servers offline, typing "facebook.com" in a browser or using the app to try to reach Facebook failed.
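The failure mode users experienced is easy to illustrate: when a name's DNS servers are unreachable, resolution itself fails before any connection is ever attempted. A minimal sketch using Python's standard library (the hostnames are examples only):

```python
import socket

def resolve(hostname):
    """Return the IPv4 address for hostname, or None if DNS resolution fails."""
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        # This is roughly what clients saw during the outage: the name
        # could not be resolved, so no connection was ever attempted.
        return None

print(resolve("localhost"))            # resolves locally, e.g. 127.0.0.1
print(resolve("no-such-host.invalid")) # None: the reserved .invalid TLD never resolves
```

During the outage, "facebook.com" behaved like the second case for everyone, even though Facebook's servers themselves were still running behind the unreachable routes.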

In addition to Facebook's services and apps being down, some of the company's internal tools were also reportedly impacted by the outage. Instagram CEO Adam Mosseri said in a tweet that it felt like a "snow day."

The Facebook outage appears to have caused a headache for Twitter, as well, with more people heading there after finding Facebook down.

"Sometimes more people than usual use Twitter," Twitter tweeted Monday afternoon. "We prepare for these moments, but today things didn't go exactly as planned."

The outage cost Facebook an estimated $60 million in forgone revenue as of 1 p.m. PT/4 p.m. ET, according to Fortune and Snopes. The two publications calculated the lost revenue by using the roughly $29 billion the company reported in its second-quarter earnings. Facebook makes roughly $319.6 million per day in revenue, $13.3 million per hour, $220,000 per minute, and $3,700 per second. The outlets then used those numbers to calculate revenue loss based on how long the outage has lasted.
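The conversion described above is straight division from the daily figure; using the roughly 4 hours 20 minutes elapsed between the start of the outage (11:40 a.m. ET) and 4 p.m. ET reproduces an estimate close to the reported $60 million:

```python
# Back-of-the-envelope loss estimate, following the method the article describes.
daily_revenue = 319.6e6                      # reported daily revenue in dollars

per_hour   = daily_revenue / 24              # ~$13.3 million
per_minute = daily_revenue / (24 * 60)       # ~$222,000 (rounded to $220,000 above)
per_second = daily_revenue / 86_400          # ~$3,700

elapsed_hours = 4 + 20 / 60                  # 11:40 a.m. ET to 4 p.m. ET
estimated_loss = elapsed_hours * per_hour    # ~$58 million

print(f"${per_hour/1e6:.1f}M/hour -> ~${estimated_loss/1e6:.0f}M over {elapsed_hours:.2f} hours")
```

The result lands slightly under $60 million; the published figure presumably rounds up or assumes a marginally longer window.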

Shares in the social network dropped nearly 5% to $326.23 per share amid a broad selloff in social media stocks. (Shares of Twitter and Snap were both off more than 5%.)

The slide in Facebook stock weighed on CEO Mark Zuckerberg's net worth, which dropped to $121.6 billion. His net worth is now less than that of Microsoft co-founder Bill Gates, making Zuckerberg the fifth wealthiest person in the world, according to Bloomberg. 

The outage creates another headache for Facebook, which is battling a massive public relations nightmare in the wake of a whistleblower's allegations that the social network is aware of harm that content on its services causes. The allegations were detailed in a series of stories published by The Wall Street Journal based on research leaked by the whistleblower that said the company ignored research about how Instagram can harm teen girls and that an algorithm change made users angrier. 

The whistleblower, a former Facebook product engineer named Frances Haugen, is scheduled to testify to Congress on Tuesday. She detailed some of her allegations in a televised interview on Sunday.

"Facebook, over and over again, chose to optimize for its own interests, like making more money," she told 60 Minutes' Scott Pelley.

As is often the case with outages, users flocked to other social networks to complain and also revel in the Facebook outage. Instagram and Facebook quickly became the top trending topic on Twitter in the US, and dominated other locations around the world as well. Twitter even got in on the joke, with the company's official account tweeting, "Hello literally everyone," and CEO Jack Dorsey asking "how much?" in response to tweets suggesting Facebook's domain was for sale.

This isn't the first time Facebook has suffered a lengthy outage. In 2019, Facebook's services suffered a daylong outage that the company blamed on a "server configuration issue." In previous outages, the social network has also cited a DNS issue or a central software problem as causes.

Read more:  Funniest memes and jokes about Facebook, WhatsApp and Instagram outage

CNET has contacted Facebook for additional comment and we'll update when we hear back. 

CNET's Carrie Mihalcik and Stephen Shankland contributed to this report. 



Twitter Tests New Process For Reporting Harmful Content



Twitter has begun testing an overhauled process for users to report harmful tweets, with the goal of simplifying the process by asking users to describe what they're seeing, the social network said Tuesday.

Instead of requiring users to identify which Twitter rule a tweet violates, Twitter's new "symptoms-first" approach asks them what they felt was wrong with a tweet, relieving them of the burden of interpreting Twitter's rules. Twitter likened the new approach to an emergency room situation in which a doctor asks where the patient is feeling pain rather than asking if they have a broken leg.

"What can be frustrating and complex about reporting is that we enforce based on terms of service violations as defined by the Twitter Rules," senior Twitter UX manager Renna Al-Yassini said in a blog post. "The vast majority of what people are reporting on fall within a much larger gray spectrum that don't meet the specific criteria of Twitter violations, but they're still reporting what they are experiencing as deeply problematic and highly upsetting."

The testing, which will begin with a small group of Twitter users in the US, comes amid continuing criticism that Twitter isn't doing enough to reduce the amount of abusive or hateful content on the platform. The company said it plans to roll the testing out to a wider audience in 2022.

By refocusing the reporting process on the firsthand information people can provide, Twitter said it hopes to improve the quality of the reports it receives. Even if a specific tweet doesn't violate Twitter's rules, the company said the information it gathers could still be used to improve experiences on the platform.

The move follows an update Twitter announced in November to its private information policy that bans the sharing of photos and videos of private individuals without their consent. Content can be removed if the site determines it's been shared "to harass, intimidate, or use fear to silence them."



Spotify's Joe Rogan Problem: Turns Out His Deal Might Be Worth $200 Million



Joe Rogan and his podcast, The Joe Rogan Experience, are at the center of growing concerns over COVID-19 misinformation and the host's use of racial slurs in dozens of episodes. This has put pressure on Spotify, the music streaming service that signed the comedian to an exclusivity deal in 2020.

In January, rock legend Neil Young pulled his music from Spotify over objections to false claims about COVID-19 vaccines on Rogan's popular podcast. Some other artists joined the boycott, but the backlash grew soon after when a compilation video of Rogan using a racial slur on numerous past episodes began circulating on social media.

Spotify CEO Daniel Ek confirmed that Rogan chose to remove multiple episodes of his popular podcast from the streaming service after the company's leadership discussed his use of "racially insensitive language," according to a memo sent to employees. 

Spotify continues to grapple with a dilemma that many internet giants like Facebook and YouTube face: balancing freedom of expression and effective moderation of objectionable content on their platforms. It views Rogan as a key component to its growth as an audio platform, and the comedian has said being able to express himself is one of the reasons he moved his podcast to the streaming service. The company paid the comedian a reported $200 million, double the amount previously thought, according to a report from The New York Times Thursday. 

Rogan posted an apology to Instagram on Feb. 5, saying he "wasn't trying to be racist" and agreeing that he shouldn't use such slurs, regardless of the context. Rogan said the backlash was a "political hit job" in an episode of his podcast posted on Feb. 8 but added that it was a "relief" to address comments he regrets making. 

Here's what you need to know about the backlash against Joe Rogan and Spotify. 

Why were episodes of Rogan's podcast removed? 

Videos of Rogan using racial slurs on past episodes went viral on social media at the end of January. This was layered on top of a growing musician boycott over concerns that Rogan's podcast serves as a platform for COVID misinformation. The hashtags #DeleteSpotify and #CancelSpotify began trending on Twitter as some people called for the removal of Rogan's podcast. A consumer poll from Feb. 1 found 19% of Spotify subscribers said they canceled or will cancel their service, according to a report from Variety. 

On Feb. 4, a fan-made website found that more than 100 episodes of Rogan's podcast were no longer available on Spotify. The website, JREMissing, uses Spotify's API to compare available episodes to a database of all episodes recorded. A total of 113 episodes of Rogan's podcast were shown to be removed: 42 happened last year when Rogan moved his show to Spotify. The other 71 were deleted on Feb. 4 without explanation at the time.
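JREMissing's comparison boils down to a set difference between a complete episode catalog and what Spotify currently serves. A simplified, offline sketch of that diff (the real site pulls the live list from Spotify's Web API; the episode numbers here are made up for illustration):

```python
def missing_episodes(full_catalog, available):
    """Return episode IDs present in the full catalog but absent on the service."""
    return sorted(set(full_catalog) - set(available))

# Hypothetical episode numbers, for illustration only.
catalog    = [1770, 1771, 1772, 1773, 1774]
on_spotify = [1770, 1772, 1774]

print(missing_episodes(catalog, on_spotify))  # [1771, 1773]
```

Everything absent from the live listing but present in the historical database shows up in the output, which is how the site surfaced the 113 removed episodes.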

Ek sent a memo to Spotify employees about the development on Feb. 6. He confirmed that Rogan chose to remove multiple episodes of his podcast from the streaming service. This came after Spotify's leadership spoke to the comedian about his use of "racially insensitive language."

CNET couldn't confirm a link between the circulating videos and the episodes that were removed from Spotify.

"Some of Joe Rogan's comments [are] incredibly hurtful -- I want to make clear that they do not represent the values of this company," Ek wrote in the memo, which was provided to CNET by a company spokeswoman. "While I strongly condemn what Joe has said and I agree with his decision to remove past episodes from our platform, I realize some will want more. And I want to make one point very clear -- I do not believe that silencing Joe is the answer."

Ek went on to say the company would invest $100 million -- the earlier reported amount it paid to Rogan for exclusivity rights -- for the "licensing, development, and marketing of music (artists and songwriters) and audio content from historically marginalized groups. This will dramatically increase our efforts in these areas." 

Spotify didn't respond to a request for comment on whether it will increase the investment to $200 million to match the newly reported amount of Rogan's deal. 

What has Rogan said about this?

Rogan uploaded a video to his Instagram account on Feb. 5, the day after the podcast episodes were removed, in which he talked about his use of racial slurs and apologized for his actions. 

"I certainly wasn't trying to be racist," he said, "and I certainly would never want to offend someone for entertainment with something as stupid as racism." Rogan agreed he shouldn't use such slurs, regardless of the context.

In episode #1773 of his podcast, Rogan had comedian Akaash Singh on and started the show talking about the blowup, saying it was a "relief."

"This is a political hit job," he said on his podcast. "They're taking all this stuff I've ever said that's wrong and smooshing it all together. It's good because it makes me address some s*** that I really wish wasn't out there." 

How did this all get started?

In December, Rogan had two guests on his show who have been at the forefront of COVID misinformation. Dr. Peter McCullough, a cardiologist, and Dr. Robert Malone, who has described himself as the inventor of the mRNA vaccine, have used their credentials to try to give credibility to false conspiracy theories regarding the pandemic and vaccines. 

COVID-19 vaccines are highly effective at reducing hospitalizations and deaths, and other public health measures like masking and social distancing have helped slow the spread of the virus. The dangers of the illness are clear. To date, there have been more than 419 million cases of COVID-19 around the world and more than 5.8 million deaths, according to the coronavirus resource center at Johns Hopkins University.

On Jan. 12, 250 doctors, professors and researchers signed an open letter to Spotify calling out the streaming service for platforming COVID misinformation, in particular on Rogan's podcast. Since then, more than 1,000 additional medical professionals have signed the letter. 

After coming across the letter, singer-songwriter Young, who rose to fame in the 1960s and '70s, issued Spotify an ultimatum on Jan. 24: either Rogan goes or his music goes. He removed his music Jan. 27, but some songs featuring Young with other artists are still on the platform.  

Other musicians joined Young in a boycott of the service. 

The controversy escalated when Grammy-winning singer India.Arie joined the boycott, saying she found Rogan problematic, not just for his interviews around COVID, but also his language around race. 

Is Spotify doing anything about COVID misinformation on its platform?

Following the musicians' protest over COVID misinformation, Ek responded in a blog post Jan. 30, saying his company doesn't want to be a "content censor" but will make sure that its rules are easy to find and that there are consequences for spreading misinformation. He acknowledged that Spotify hasn't been transparent about them, which led to questions about their application to serious issues including COVID-19.

"Based on the feedback over the last several weeks, it's become clear to me that we have an obligation to do more to provide balance and access to widely accepted information from the medical and scientific communities guiding us through this unprecedented time," Ek said.

Included in the post was a link to Spotify's platform rules detailing what content isn't allowed on the service. Regarding COVID misinformation, the rules specifically prohibit saying that COVID-19 isn't real, encouraging the consumption of bleach to cure diseases, saying vaccines lead to death and suggesting people get infected to build immunity. 

Ek also said the company is working on a content advisory for any podcast episode that talks about COVID. The advisory will guide listeners to the service's COVID-19 hub.

In a Feb. 2 company town hall, Ek told Spotify employees that Rogan's podcast was key to the future of Spotify, according to audio obtained by The Verge. 

"If we want even a shot at achieving our bold ambitions, it will mean having content on Spotify that many of us may not be proud to be associated with," Ek said during the town hall. "Not anything goes, but there will be opinions, ideas and beliefs that we disagree with strongly and even makes us angry or sad."

Spotify employees were reportedly disappointed by his remarks. Members of the company's board of directors were also reportedly not happy with the response, according to The New York Times. 

In an Instagram post Jan. 30, Rogan defended his choice to bring on guests like Malone but said he was happy for Spotify to add disclaimers to podcasts on what he called "controversial" topics. He added that if he could do anything differently, it would be to get experts with differing opinions on directly after "controversial ones." 

Who else had something to say about this? 

The White House chimed in on Spotify's move to add misinformation warnings to podcast episodes. In a Feb. 1 press briefing, press secretary Jen Psaki was asked if tech companies should go further than these disclaimers. 

"Our hope is that all major tech platforms, and all major news sources for that matter, be responsible and be vigilant to ensure the American people have access to accurate information on something as significant as COVID-19. That certainly includes Spotify," Psaki said. "So this disclaimer, it's a positive step, but we want every platform to continue doing more to call out misinformation and disinformation while also uplifting accurate information." 

Psaki also referred to Surgeon General Dr. Vivek Murthy's warning from July about the dangers of misinformation, calling it an "urgent threat."

The CEO of Rumble, a video streaming service known for being a hub of misinformation and conspiracy theories, said Feb. 7 that he'd offer Rogan $100 million over the course of four years if he brought his podcast to the company. 

"This is our chance to save the world," Chris Pavlovski said in a letter to Rogan posted to Twitter. "And yes, this is totally legit." 

During a question-and-answer portion of a recent comedy show, Rogan told a crowd he plans to stick with Spotify, according to a Feb. 8 report from The Hollywood Reporter.

Former President Donald Trump on Feb. 7 posted a message on his site saying Rogan shouldn't apologize for what he said. "How many ways can you say you're sorry," the former president wrote. 


Source
