I. Paraphrase : Read the following passage and rephrase it in your own words
(while keeping the original meaning) in 250 words.
Social media companies have been under tremendous pressure to do something about the proliferation of misinformation on their platforms. Companies like Facebook and YouTube have responded by applying anti-fake-news strategies that seem as if they would be effective. As a public-relations move, this is smart: The companies demonstrate that they are willing to take action, and the policies sound reasonable to the public.
But just because a strategy sounds reasonable doesn't mean it works. Although the platforms are making some progress in their fight against misinformation, recent research by us and other scholars suggests that many of their tactics may be ineffective - and can even make matters worse, leading to confusion, not clarity, about the truth. Social media companies need to empirically investigate whether the concerns raised in these experiments are relevant to how their users are processing information on their platforms.
One strategy that platforms have used is to provide more information about the source of the news. YouTube has "information panels" that tell users when content was produced by government-funded organizations, and Facebook has a "context" option that provides background information for the sources of articles in its News Feed. This sort of tactic makes intuitive sense because well-established mainstream news sources, though far from perfect, have higher editing and reporting standards than, say, obscure websites that produce fabricated content with no author attribution. But recent research of ours raises questions about the effectiveness of this approach.
We conducted a series of experiments with nearly 7,000 Americans and found that emphasizing sources had virtually no impact on whether people believed news headlines or considered sharing them. People in these experiments were shown a series of headlines that had circulated widely on social media - some of which came from mainstream outlets such as NPR and some from disreputable fringe outlets like the now-defunct newsbreakshere.com. Some participants were provided no information about the publishers, others were shown the domain of the publisher's website, and still others were shown a large banner with the publisher's logo. Perhaps surprisingly, providing the
additional information did not make people much less likely to believe misinformation.
The obvious conclusion to draw from all this evidence is that social media platforms should rigorously test their ideas for combating fake news and not just rely on common sense or intuition about what will work. We realize that a more scientific and evidence-based approach takes time. But if these companies show that they are seriously committed to that research - being transparent about any evaluations that they conduct internally and collaborating more with outside independent researchers who will publish publicly accessible reports - the public, for its part, should be prepared to be patient and not demand instant results. Proper oversight of these companies requires not just a timely
response but also an effective one.
Source
New York Times
https://www.nytimes.com/2020/03/24/opinion/fake-news-social-media.html
Adapted from an article by Dr. Pennycook and Dr. Rand on March 24, 2020.
Authors: Gordon Pennycook is an assistant professor at the Hill and Levene Schools of Business at the University of Regina, in Saskatchewan. David Rand is a professor at the Sloan School of Management and in the department of brain and cognitive sciences at M.I.T.