The Web’s Recommendation Engines Are Broken. Can We Fix Them?

I’ve been a Pinterest user for a long time. I have boards going back years, spanning past interests (art deco weddings) and more recent ones (rubber duck-themed first birthday parties). When I log into the site, I get served up a slate of relevant recommendations—pins featuring colorful images of baby clothes alongside pins of hearty Instant Pot recipes. With each click, the recommendations get more specific. Click on one chicken soup recipe, and other varieties appear. Click on a pin of rubber duck cake pops, and duck cupcakes and a duck-shaped cheese plate quickly populate beneath the header “More like this.”

These are welcome, innocuous suggestions. And they keep me clicking.

But when a recent disinformation research project led me to a Pinterest board of anti-Islamic memes, one night of clicking through those pins—created by fake personas affiliated with the Internet Research Agency—turned my feed upside down. My babies-and-recipes experience morphed into a strange mish-mash of videos of Dinesh D’Souza, a controversial right-wing commentator, and Russian-language craft projects.

Recommendation engines are everywhere, and while my Pinterest feed’s transformation was rapid and pronounced, it is hardly an anomaly. BuzzFeed recently reported that Facebook Groups nudge people toward conspiratorial content, creating a built-in audience for spammers and propagandists. Follow one ISIS sympathizer on Twitter, and several others will appear under the “Who to follow” banner. And sociology professor Zeynep Tufekci dubbed YouTube “the Great Radicalizer” in a recent New York Times op-ed: “It seems as if you are never ‘hard core’ enough for YouTube’s recommendation algorithm,” she wrote. “It promotes, recommends and disseminates videos in a manner that appears to constantly up the stakes.”

Today, recommendation engines are perhaps the biggest threat to societal cohesion on the internet—and, as a result, one of the biggest threats to societal cohesion in the offline world, too. The recommendation engines we engage with are broken in ways that have grave consequences: amplified conspiracy theories, gamified news, nonsense infiltrating mainstream discourse, misinformed voters. Recommendation engines have become The Great Polarizer.

Ironically, the conversation about recommendation engines, and the curatorial power of social giants, is also highly polarized. A creator showed up at YouTube’s offices with a gun last week, outraged that the platform had demonetized and downranked some of the videos on her channel. This, she felt, was censorship. It isn’t, but the Twitter conversation around the shooting clearly illustrated the simmering tensions over how platforms navigate content: there are those who hold an absolutist view on free speech and believe any moderation is censorship, and there are those who believe that moderation is necessary to facilitate norms that respect the experience of the community.

As the consequences of curatorial decisions grow more dire, we need to ask: Can we make the internet’s recommendation engines more ethical? And if so, how?

Finding a solution begins with understanding how these systems work, since they are doing precisely what they’re designed to do. Recommendation engines generally function in one of two ways. The first is a content-based system. The engine asks: is this content similar to other content that this user has previously liked? If you binge-watch two seasons of, say, Law and Order, Netflix’s recommendation engine will probably decide that you’ll like the other seventeen, and that procedural crime dramas in general are a good fit.

The second kind is a collaborative filtering system. That engine asks: what can I determine about this user, and what do similar people like? These systems can be effective even before you’ve given the engine any feedback through your actions. If you sign up for Twitter and your phone indicates you’re in Chicago, the initial “Who To Follow” suggestions will feature popular Chicago sports teams and other accounts that people in your geographical area like.

Recommender systems learn: as you click and like, they serve you things based on your clicks, likes, and searches, and on those of people whose behavior resembles the ever-more-sophisticated profile they build of you. This is why my foray onto an anti-Islamic Pinterest board created by Russian trolls led to weeks of being served far-right videos and Russian-language craft pins; it was content that had been enjoyed by others who had spent time with those pins.
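The two approaches described above can be sketched in a few lines of code. This is a minimal, illustrative toy; every item name, user, and feature weight here is a hypothetical stand-in, not how Netflix, Twitter, or Pinterest actually implement their systems:

```python
# Toy sketch of the two recommendation strategies discussed above.
# All items, users, and scores are invented for illustration.
from math import sqrt

# --- Content-based filtering: "is this similar to what the user liked?" ---
# Each item is a feature vector, e.g. [crime, procedural, comedy] weights.
items = {
    "law_and_order_s1": [1.0, 0.9, 0.0],
    "law_and_order_s2": [1.0, 0.9, 0.1],
    "true_crime_doc":   [0.9, 0.3, 0.0],
    "sitcom":           [0.0, 0.1, 1.0],
}

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def content_based(liked, candidates):
    """Rank candidate items by similarity to one the user already liked."""
    return sorted(candidates,
                  key=lambda i: cosine(items[liked], items[i]),
                  reverse=True)

# --- Collaborative filtering: "what do similar users like?" ---
# Implicit feedback: 1 means the user clicked or liked the item.
history = {
    "alice": {"law_and_order_s1": 1, "true_crime_doc": 1},
    "bob":   {"law_and_order_s1": 1, "law_and_order_s2": 1,
              "true_crime_doc": 1},
    "carol": {"sitcom": 1},
}

def collaborative(user):
    """Recommend items liked by users who share likes with this user."""
    mine = set(history[user])
    scores = {}
    for other, theirs in history.items():
        if other == user:
            continue
        overlap = len(mine & set(theirs))  # crude similarity: shared likes
        if overlap == 0:
            continue  # ignore users with no taste in common
        for item in set(theirs) - mine:
            scores[item] = scores.get(item, 0) + overlap
    return sorted(scores, key=scores.get, reverse=True)

print(content_based("law_and_order_s1",
                    ["law_and_order_s2", "true_crime_doc", "sitcom"]))
print(collaborative("alice"))  # bob's un-watched likes outrank carol's
```

Real systems operate over millions of items and use learned embeddings rather than hand-written feature vectors, but the two underlying questions are the same: what is similar to this, and what did similar people like?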
