Interview: Fake news has become a political weapon

Stela Stoyanova

Preslav Nakov is one of the few computer scientists in Bulgaria who chose to be a researcher and a teacher rather than a practicing businessman. He is currently with the Qatar Computing Research Institute (HBKU), where he works on a project aimed at exposing fake news.

You are working on a project that exposes fake news. Would you please tell us more about it and about your career?

I work at the Qatar Computing Research Institute. I hold a PhD degree in Computer Science from the University of California at Berkeley and an MSc degree in Computer Science from Sofia University. I was previously associated with the Bulgarian Academy of Sciences, but later moved to Singapore and then to Qatar.
We are currently involved in a project in partnership with the Massachusetts Institute of Technology (MIT); its goal is to help people assess what they read and to obtain more balanced information.

How can we recognize fake news? What chance does a user have of telling whether they are reading a piece of fake news?

The general way to distinguish fake news is to find out where it comes from: what the source is – a person, a media outlet – and whether we can trust that source. Another key feature is what has been said and how. We also need to know where the information appears – on the web, on Twitter, on Facebook – and how other users in the network react to it. All these factors feed into a system, a neural network, that decides what kind of news this is and whether we can trust it.
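The idea of combining source-, content-, and propagation-based signals into a single trust decision can be sketched as follows. This is a minimal illustrative example, not the actual QCRI system: the feature names and weights are invented for the sake of the demonstration.

```python
import math

# Hypothetical sketch: combine source-, content-, and propagation-based
# feature groups into one credibility score. A real system would learn
# these weights with a neural network; here they are hand-picked.

def credibility_score(features, weights):
    """Weighted sum of feature groups, squashed to (0, 1) by a sigmoid."""
    z = sum(weights[k] * features[k] for k in weights)
    return 1 / (1 + math.exp(-z))

# Toy example: each feature group is already scored in [-1, 1].
example = {
    "source_trust": 0.8,    # reputation of the outlet or author
    "content_style": -0.2,  # sensationalist wording lowers the score
    "propagation": 0.5,     # how trusted users share and react to it
}
weights = {"source_trust": 2.0, "content_style": 1.0, "propagation": 1.5}

score = credibility_score(example, weights)
print(round(score, 3))
```

A score near 1 would mean the piece looks trustworthy across all three signal groups; in practice each group would itself be produced by a learned model rather than a single hand-set number.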

And what are the other goals of this project?

The second goal is to check whether the claim in a news item is true or not. We essentially match statements made by politicians, but not only by them, against thousands of claims already verified by fact-checking journalists, and based on their verdicts we automatically determine the accuracy of these claims. In doing so we figure out what actually happened and what the response to it was in social networks and the media. For example, if a trustworthy media outlet supports a certain claim, it is considered true. If a 'bad' media outlet endorses it (one we know spreads disinformation), that is more likely to lead us to the opposite conclusion. It is also about modelling – whether we can rely on the profile of a particular media outlet. And there is another option, used when we verify claims on forums – like BG Mama, for example. We did this for a Qatari forum equivalent to BG Mama and for a similar one in Bulgarian. A question is posted there, and we need to verify whether it has good answers – ones that respond directly to the issue rather than just challenging other users.
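The matching step described above – looking up an incoming statement in a database of already fact-checked claims – can be sketched with a simple word-overlap similarity. This is a toy illustration under invented data, not the project's actual matching model, which would use far richer semantic similarity.

```python
# Illustrative sketch: match an incoming claim against a small database of
# claims already verified by fact-checking journalists, using Jaccard
# similarity over words. The database entries here are invented examples.

def jaccard(a, b):
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def best_match(claim, verified):
    """Return the verified claim most similar to the incoming one."""
    return max(verified, key=lambda v: jaccard(claim, v["text"]))

verified_claims = [  # toy fact-checking database
    {"text": "the unemployment rate fell to 4 percent", "verdict": "true"},
    {"text": "vaccines cause autism", "verdict": "false"},
]

match = best_match("new study says vaccines cause autism", verified_claims)
print(match["verdict"])  # → false
```

If the best match is close enough, the journalists' verdict can be reused directly; otherwise the claim is flagged as new and worth checking.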

How does this happen?

In fact, this project involves solving an entire set of tasks. The first is related to the websites that we check automatically – we currently study more than 100 of them, including some in Bulgaria. We have several distinct systems that solve such tasks, each aimed at a particular type of user. Some support the work of journalists who want to check certain facts. Another type targets end users, helping them figure out for themselves what they are reading, while a third kind is aimed at social networks.

For example, there could be a particular claim that needs verification. There are websites that can check it. The issue is that a stream of news may contain thousands of sentences that must be filtered and checked. We have such a system for Bulgarian and Arabic. We are trying to build a system based on neural networks – artificial intelligence – that makes decisions about this news. The goal of the system is to tell us whether a claim is true. It also has to identify things that need to be checked, such as claims that sound bombastic but could nevertheless be interesting to the audience. In fact, the AI system makes its decisions based on internet publications.
Another system, which we used on BG Mama and on a similar Qatari forum, verifies which of the answers are 'good', i.e. directly related to the question rather than just challenging other users. This could be successfully applied to any forum.
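The 'good answer' idea – rewarding answers that address the question and penalising ones that merely attack other users – can be sketched like this. The features, word list, and weighting are invented for illustration; the real system would learn such signals from data.

```python
# Hedged sketch (features and weights invented): score forum answers by
# topical overlap with the question, minus a penalty for hostile wording.

ATTACK_WORDS = {"troll", "idiot", "stupid"}

def answer_quality(question, answer):
    q = set(question.lower().split())
    a = set(answer.lower().split())
    overlap = len(q & a) / max(len(q), 1)  # topical relevance
    attacks = len(a & ATTACK_WORDS)        # hostility signal
    return overlap - 0.5 * attacks

question = "how do i renew my passport in sofia"
answers = [
    "you can renew your passport at the sofia regional office",
    "you are an idiot for asking",
]
ranked = sorted(answers, key=lambda ans: answer_quality(question, ans),
                reverse=True)
print(ranked[0])  # the directly relevant answer ranks first
```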


The next level of our work is not so much about checking individual claims. We have to figure out whether some things are written for money or just made up for fun, like 'non-news'. We also characterise media outlets. In other words, we suggest whether certain news is true or not on the basis of who says it. We also profile the way it is said – we try to determine whether a media outlet is biased in a certain direction.


You also worked on exposing internet trolls?

Yes, we have worked on profiling them. Two years ago, for example, we put serious effort into this topic together with a Bulgarian media outlet and its forum. Maybe you remember the serious controversy when a list of paid trolls was made public. The moment it was revealed, some of them deleted their profiles and created new ones. Interestingly, their tactic after that was to accuse other people of being trolls – and those accused outnumbered the people on the original list. Paid trolls are just a few people...


www.standartnews.com © all rights reserved