Artificial intelligence created filter bubbles. Now it’s helping to fight them.


What would happen if a news publisher printed a different copy of its newspaper for each of its subscribers? That is a hypothetical question that would have sounded ridiculous two decades ago, when print newspapers and broadcasters were still the prevalent channels for receiving news.

But those days are slowly coming to an end, thanks to an explosion in artificial intelligence. Today, more and more people consume news, articles and other information through online curators like Facebook and Google News. These platforms use AI algorithms to create digital profiles of users and deliver content that conforms to their preferences.

No two users see the same news feed on their social media accounts or in their news curation apps. AI-driven news curation makes sure you see “relevant” results, as the curators will tell you. But in effect, these algorithms submerge you in a filter bubble, where you see too much of what you want to see and too little of what you should see.

The direct results of filter bubbles are the polarization of societies, the amplification of biases, less tolerance for opposing views, and greater vulnerability to fake news. We’ve already seen these phenomena create or intensify crises during elections, political rivalries and sectarian violence in different regions of the world.

There’s now increasing discussion of regulatory, social and technological solutions to rein in the power of algorithmic curators, and various efforts are underway to define the ethical boundaries of AI.

Interestingly, one of the solutions to overcome our biases is artificial intelligence, the same technology that contributed immensely to the problem in the first place.

How AI creates filter bubbles


Before we lay all the blame on Facebook, Google and other organizations that use AI algorithms to curate our news, we must understand that these organizations are addressing a real problem. However, they might not be the best party to solve it.

Unlike the age of print newspapers, when publication came with a hefty price tag, in the age of online content anyone can set up a blog and post to their heart’s content without spending a dime.

The democratization of publishing has created an explosion of content. In the U.S. alone, there are more than 30 million blogs. Every day, more than 4 billion pieces of content are shared on Facebook and 500 million tweets are posted on Twitter. Google News tracks thousands of websites ranging from blogs (like this site) to huge publications such as The New York Times. Even Apple’s news subscription service, which has been criticized for not offering a lot of news, tracks 300 major publications.

How can users find their way through this sea of information? “We’re seeing a lot of content being published every day, and there’s no way for consumers to see or read all of it. That’s why online platforms and news providers use algorithms to distill it in some way,” says Tania Ahuja, founder and CEO of Nobias, a New York–based startup that uses AI to track political bias in online content.

Machine learning algorithms, a popular subset of artificial intelligence, are especially good at finding patterns and correlations across large data sets. News curators and social media companies use machine learning to detect the common characteristics of the kind of content users interact with (clicks, likes, retweets, etc.). The AI algorithms then use this information to recommend other content that has similar characteristics, or to find users who engage with similar content.
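To make that concrete, here is a minimal, purely illustrative sketch of content-based recommendation, not any platform’s actual system: it scores candidate articles by their textual similarity to pieces a user has already clicked. It uses scikit-learn, and the article snippets are invented.

```python
# Minimal, illustrative content-based recommendation: score candidate articles
# by their similarity to articles the user has already clicked on.
# Not any platform's actual system; the snippets below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

clicked_articles = [
    "senate passes new budget bill after long debate",
    "tech giants face scrutiny over data collection practices",
]
candidate_articles = [
    "lawmakers question tech executives over data collection",
    "local team wins championship in overtime thriller",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(clicked_articles + candidate_articles)
clicked_vecs = matrix[: len(clicked_articles)]
candidate_vecs = matrix[len(clicked_articles):]

# Score each candidate by its average similarity to the user's click history;
# the highest-scoring items would surface first in the feed.
scores = cosine_similarity(candidate_vecs, clicked_vecs).mean(axis=1)
for article, score in sorted(zip(candidate_articles, scores), key=lambda x: -x[1]):
    print(f"{score:.3f}  {article}")
```

In practice, platforms combine many more signals than raw text similarity, but the basic idea of matching new content to a user’s interaction history is the same.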

Even when filtered down to the preferences of the individual user, there’s still a lot of online content to choose from. That’s why machine learning algorithms usually prioritize content based on user engagement. The more users interact with a specific piece of content, the likelier it is to appear in other users’ news feeds.
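That prioritization step can be pictured with a toy example. The weights and engagement numbers below are invented for illustration; real ranking systems are far more elaborate, but the snowball effect is the same: items that already attract engagement get pushed even higher.

```python
# Toy engagement-based ranking with invented numbers: content that already
# attracts clicks, likes and shares is pushed higher, so popular items snowball
# across many users' feeds.
candidates = [
    {"title": "Measured analysis of the new trade deal", "clicks": 120, "likes": 15, "shares": 4},
    {"title": "You won't BELIEVE what this senator said", "clicks": 4800, "likes": 900, "shares": 650},
]

def engagement_score(item, w_clicks=1.0, w_likes=3.0, w_shares=5.0):
    # Weighted sum of interaction counts; real systems use far richer signals.
    return w_clicks * item["clicks"] + w_likes * item["likes"] + w_shares * item["shares"]

for item in sorted(candidates, key=engagement_score, reverse=True):
    print(f"{engagement_score(item):>8.0f}  {item['title']}")
```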

At first glance, this seems like a win-win proposal. Instead of wasting precious time scrolling through their news feeds or adjusting their search queries to find useful news and information, users let an AI algorithm take care of the task for them and show them “relevant” results. This means they spend less time searching and more time reading and watching content. As for the companies that run these AI algorithms, they get to keep users glued to their apps and monetize their attention by showing them ads.

But in reality, it’s a zero-sum game. And the user is always the loser.

As Ahuja points out, these AI algorithms create several critical problems. “They reward junky clickbait because the users click on articles that have sensational titles,” she says.

During the 2016 U.S. presidential elections, a group of Macedonian youths created a misinformation crisis by posting stories with sensational headlines about things such as the criminal indictment of Hillary Clinton or the pope’s approval of Trump, neither of which ever happened. When shared on Facebook, the stories drew many clicks from intrigued users. The high level of engagement caused Facebook’s AI algorithms to recommend the stories to other users, which generated even more clicks, and so on.

In the same year, high user engagement caused Facebook’s algorithms to push a false story about the firing of Fox News anchor Megyn Kelly into the platform’s “Trending Topics,” showing the fake news to millions of users.

But fake news is just part of the problem, Ahuja points out. AI-based personalization causes another problem at the individual level, one that is more hidden and perhaps more dangerous. “Ultimately you get wrapped in a filter bubble. For instance, in the case of political bias, if you’re always reading articles that are left-leaning, you will end up with a news feed that is primarily left-slanted, and publications with the other point of view will get buried in your news feed,” she says.

Like many others, Ahuja is worried that as AI personalizes content, it will influence people’s perceptions of really important topics like politics and business, leading to fractured and polarized societies and making it easy for bad actors to manipulate people.

The most well-known case of manipulation is Facebook’s notorious Cambridge Analytica scandal, in which a political data firm tried to leverage the company’s vast store of personal information to influence the behavior of millions of U.S. voters. Evaluating the effectiveness of such campaigns is very difficult, but Cambridge Analytica shed light on the dangers of the granularity of control that AI algorithms provide over the content that users see.

Privacy and ethical controversies over the use of artificial intelligence to curate online content have raised awareness of the issue and spurred the companies using these algorithms to alter their practices. Lawmakers in various regions are also considering regulations that would rein in tech companies’ practices of collecting and mining user data with AI algorithms.

But the process is too slow, Ahuja believes. “It’s not going to come soon, and also, to some extent, the business model of a lot of these platforms are predicated on free access to their platforms, and is financed by digital ads, which in turn is driven by engagement and sharing rather than the actual value of the news,” she says. “I don’t think the tech companies that run these platforms should be deciding what we see.”

So basically, it’s hard to trust companies that have built economic empires on monetizing user attention to undercut themselves.

Tracking bias in your news feed

“While I’m not saying AI itself is causing these problems, it’s just that the nature of this onslaught of content, and AI being used to help with it, is actually really not helping because people can’t fully appreciate what is really going on,” Ahuja says.

Ahuja believes it is the civic duty of every person to make sure they read news from a variety of sources and keep themselves informed on all sides of a story. But what tech companies like Nobias can do is help users identify the slant of the news they read and discover the biases they might be unintentionally developing.

To do this, Nobias has developed what Ahuja calls a “Fitbit for news,” a free browser extension that reports on the political bias of online articles and keeps track of the general political standing of the kind of content users consume.

“We created Nobias as a tool to help customers take back control of their news feed. We let them know if they’re only reading one side of a story. They might choose to do so—it’s their prerogative. But at least for those who would want to have access to both sides of information, then we give them this information right in their news feed so they can be selective of what they read and develop their own unique point of view,” Ahuja says.

After installing Nobias, when you browse to news feeds such as Google News, Facebook, and the home pages of popular news sites, the extension will decorate the page with colored pawprints that indicate the political slant of each article. Blue pawprints stand for liberal content, red for conservative, and purple for center. There’s also a gray color that indicates Nobias could not determine the slant.

Screenshot: Nobias pawprint indicators showing the political slant of articles in a news list

Nobias tracks bias at the article level. So, as the picture above shows, some of the articles on the homepage of The New York Times, clearly a left-leaning publication, are marked as conservative.

Nobias also provides information on the credibility of an article’s source and its readability level. Inside articles, the tool analyzes the bias of outbound link sources where it can.

At the user level, Nobias provides a detailed summary of the credibility and bias of the news you consume (as long as you’re reading it in your Chrome browser, of course). This can help you find out whether you have a balanced diet of online content or are skewed too far toward a specific point of view.
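As a rough illustration of what such a summary boils down to (this is a hypothetical tally, not Nobias’s actual reporting code), per-article bias labels from a reading log can be aggregated into a simple breakdown:

```python
# Hypothetical tally of a user's reading habits: count the bias labels of the
# articles read and report the mix. Not Nobias's actual reporting code.
from collections import Counter

articles_read = ["left", "left", "center", "left", "right", "left"]  # one label per article
counts = Counter(articles_read)
total = sum(counts.values())

for label in ("left", "center", "right"):
    share = counts.get(label, 0) / total
    print(f"{label:>6}: {share:.0%}")
# A mix dominated by one label suggests the reader's "news diet" is unbalanced.
```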

Screenshot: Nobias summary of the credibility and bias of a user’s news consumption

Nobias won’t change your news feed. But it will give you a clearer picture of the content you’re consuming online and whether your news feed is showing you everything you need to know.

“It’s important to make people aware that there are AI algorithms at work and that they need to be mindful of their interactions. We want to act as an independent party that helps people get a healthy feed of well-rounded news,” Ahuja says. “We feel that more informed people are less likely to fall for fake news. Fake news works because it targets people who have very little information and don’t have the big picture.”

Using machine learning to fight filter bubbles

Interestingly, the technology at the heart of Nobias is machine learning, the same technique that has had an important role in amplifying biases and creating filter bubbles.

Nobias has published a detailed account of the methodology it uses to grade the bias in news stories and articles. “For the bias algorithm, what we did was we tried to be as objective as possible. We used published methodology by Matthew Gentzkow and Jesse Shapiro in Econometrica, a top economics journal, in 2010 and 2016,” Ahuja says.

The company used records of speeches in Congress as training data for the AI algorithm that detects slant and political bias in online content. “In Congress, you know if somebody is a Democrat or a Republican. It’s a lot more objective,” Ahuja says.

After being trained on the text of the speeches, the machine learning algorithm develops a model of the kind of keywords that draw the line between left- and right-leaning speech. It then uses these correlations to classify new content. The AI algorithm uses thresholds and multiple levels of bias (far-right and -left, center-right and -left, and center) to reduce false positives.
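Nobias has not released its code, but a stripped-down sketch of the general approach described above, training a text classifier on party-labeled congressional speeches and mapping its confidence onto graded bias levels, might look like the following. The speech snippets, thresholds and model choice are illustrative assumptions, not the company’s actual methodology.

```python
# Illustrative sketch only: train a simple text classifier on party-labeled
# congressional speeches, then map its confidence onto graded bias levels.
# The speeches, thresholds and model choice are assumptions for this example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

speeches = [
    "we must expand access to affordable health care for working families",
    "government overreach is strangling small businesses with red tape",
    # ...in practice, many thousands of labeled speeches
]
parties = ["D", "R"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), stop_words="english"),
    LogisticRegression(),
)
model.fit(speeches, parties)

def bias_level(text: str) -> str:
    """Map the predicted probability of the 'D' class onto coarse bias buckets."""
    p_left = model.predict_proba([text])[0][list(model.classes_).index("D")]
    if p_left > 0.8:
        return "left"
    if p_left > 0.6:
        return "center-left"
    if p_left < 0.2:
        return "right"
    if p_left < 0.4:
        return "center-right"
    return "center"

print(bias_level("lawmakers debate a new bill on health care and regulation"))
```

In a real system the corpus would contain many thousands of speeches, and the thresholds would be tuned to keep false positives low, as Ahuja describes.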

If users don’t agree or find an error in the analysis of a specific article, they can send feedback to Nobias. The company uses this crowdsourced information to further tune the AI algorithm and make it more accurate. “We update the machine learning algorithm every month, but if we receive a lot of feedback on something that is being discussed a lot, then we will update it more quickly,” Ahuja says.
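A self-contained, hypothetical sketch of that feedback loop, merging crowdsourced corrections into the corpus used at the next scheduled retraining, might be as simple as this (the field names and data are invented):

```python
# Standalone, hypothetical sketch of merging crowdsourced corrections into the
# training corpus before the next scheduled (e.g. monthly) retraining run.
training_texts = ["speech one ...", "speech two ..."]  # existing labeled corpus
training_labels = ["D", "R"]

user_feedback = [
    {"text": "op-ed arguing for lower corporate taxes", "corrected_label": "R"},
    {"text": "editorial urging stronger climate rules", "corrected_label": "D"},
]

# Fold the corrections in; the combined set is what the classifier would be
# refit on at the next update cycle.
training_texts += [item["text"] for item in user_feedback]
training_labels += [item["corrected_label"] for item in user_feedback]
print(len(training_texts), "examples ready for retraining")
```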

One of the things that worried me was that bad actors might try to trick or game the AI algorithm by carefully wording their content in ways that would go unnoticed by human readers but have a totally different meaning to the machine learning model.

“Honestly, we’re not large enough to become a worry yet,” Ahuja says, adding that, “What we do is update the machine learning model monthly, so it will be hard for malicious actors to constantly game the AI in their favor.”

Ahuja also says Nobias will not publish the list of keywords that define the classes of political slant, which will make it harder for malicious actors to attack the AI models.

You’re responsible for your own biases

Experts and visionaries suggest that in the age of machine learning and automation, one thing that can protect us against the adverse effects of AI algorithms is to use more AI.

Ahuja makes no such claim with Nobias. “We want to make it easier for people to discover their own biases. That’s why we view this more as a productivity tool. You know that you need to have a balanced diet. This just helps you to get there without much work,” she says.

What this means is that it’s still our own responsibility to step out of our filter bubbles, go out of our way to investigate the different facets of a story, learn to listen to and tolerate opposing views, and try to find common ground and compromises in debates.

Those are hard human choices that no AI will make for you.
