
Facebook was created for people to share family photos and memories. But as ads entered the mix, the platform was refined to hold our attention for as long as possible. Quality was not a consideration – until recently. But how to fix the junk news mess without editing it? Maybe Facebook needs a newsroom. I sat down with Tessa Lyons, head of News Feed integrity at Facebook, to discuss on this week’s edition of Miles To Go.
TRANSCRIPT
Miles O’Brien: Hello and welcome to another edition of Miles To Go, I’m Miles O’Brien.
You know, there’s never been anything quite like this, in form and at this scale. I’m talking about Facebook. There is no country on Earth bigger than the gathering of 2 billion Facebook users. And let’s not forget Facebook itself is banned in the most populous country on the planet, China.
So when it comes to talk of misinformation, hyper-partisan content, fake news, junk news, and regulating Facebook somehow, we are on unprecedented turf. What sort of regulation might there be? What would the jurisdiction be?
And what do we call Facebook, in the first place? Is it a tech company? Is it a media company? Is it a publisher? It’s something we’ve never seen before and our democracy has become a guinea pig in this huge experiment.
As part of our series on junk news for the PBS NewsHour, producer Cameron Hickey and I got some unusual access inside Facebook headquarters in Menlo Park, California. We invite you to check out that series: go to milesobrien.com and we have everything gathered up there. It will help you understand the problem of misinformation in a new light, we think.
We spoke to some of the smart people who are working on the issue there–they’re all really hyper smart people with great intentions, but you do get the sense they have a tiger by the tail, something that is perhaps not manageable by any human or machine.
Nevertheless, they’re trying to tame that tiger. One of the people we spoke with is Tessa Lyons. She is the product manager of News Feed integrity.
Miles O’Brien: So tell me about your job. How do you go about, you know, injecting integrity into something that involves two billion people sharing information?
Tessa Lyons: So, when we think about integrity, and this is true across Facebook, we think about three different things. The first is that we want to remove any content or accounts that violate our community standards. The second is that we want to reduce any content that is problematic, you know, false news, content that people tell us they don’t want to see, like clickbait or sensationalism. And the third is that we want to give people more information so that they can make informed decisions about what they read and share with their friends.
Our ultimate goal is to ensure that when people come to Facebook, they have meaningful and authentic experiences, and we know from people that they don’t want to see false information. They want information that’s accurate.
They don’t want to see clickbait, they want to have straightforward experiences when they click on links through News Feed. So we work hard to address these problems through a number of different approaches.
Miles O’Brien: Well, let’s walk through that. How do we know that, by the way? Because it seems to be people — especially if it reinforces a worldview or a feeling they have, they kind of like stuff that maybe is too good to be true, but I’m going to share that anyway. Don’t they like it?
Tessa Lyons: So, one of the things that we spend a lot of time doing is talking to people directly and surveying them. Because if you rely just on the signals that you get from people who are using their Facebook News Feed, you might find that clickbaity headlines actually lead to a lot of clicks, which isn’t surprising. But if you then ask people about the most meaningful experiences that they have in their feed and the most annoying experiences they have in their News Feed, what we hear is that people really don’t want clickbait or sensationalist headlines. They don’t want to have experiences where they expect one thing and end up experiencing another, which is why it’s really important for us to not just rely on signals that we see on platform but to also do ongoing surveys to make sure we’re getting feedback from our community in an ongoing way.
Miles O’Brien: I think we’ve all had the experience. We see the clickbait headline and you’re like, “Wow, should I click, should I see if it’s true?” And you click on it and ultimately, you do get that sense of “Oh, this article doesn’t even have anything to do with the headline.” And so that’s where the negative experience comes in, but having said that, they have clicked, right?
Tessa Lyons: Yeah. In some cases, they have clicked and oftentimes spent very little time on the resulting landing page. And so, we want to ensure that the experiences people have when they come to Facebook are meaningful and consistent with their expectations. And so, we want them to see less of those clickbaity headlines when they’re scrolling through their feed.
Miles O’Brien: Clickbait is good for business though, isn’t it?
Tessa Lyons: What we want people to do is have a meaningful experience so that they come back to Facebook and feel that we’re adding value for them.
Clickbait can be really valuable for spammers who are trying to drive people, using misleading tactics, off of Facebook and onto websites that are covered in malicious, low-quality ads. For those spammers, clickbait works well in order to help them monetize the impressions that they get when people then land on their landing pages. And what we’re trying to do is reduce those financial incentives for spammers so that they get less money from people clicking off onto these low-quality, ad-covered web pages, and to ensure that people on the other hand have high-quality experiences.
Miles O’Brien: You would think clickbait might be potentially a good revenue source, but in this case, because it drives people away from Facebook, you don’t like it so much, so you don’t — you’re not faced with this dilemma as to how do we handle this; it might be bad for business to do this.
Tessa Lyons: Well, to be clear, the thing about clickbait that we want to stop is the misleading experience for people and the bad actors who are incentivized to use this kind of tactic in order to monetize in a low quality way. We’re not trying to stop click-through to high quality content, we’re trying to stop click-through to these misleading experiences that people tell us repeatedly they don’t want to have.
Miles O’Brien: Okay. So how hard of a challenge is that? Is that something that — is it an easy thing to write a line of code in your algorithm to say “Okay, no more clickbait.” I mean these spammers are pretty smart guys.
Tessa Lyons: They are, and the thing that’s true across the integrity space is that it’s very adversarial, so whenever we make improvements, we know and we can watch almost in real-time the iterations and adversarial adaptations from those spammers or bad actors who are operating on the other side.
Now, when we addressed clickbait, to take that one specific example, what we did was we understood first from surveying people that this was a bad experience that they were having on Facebook and specifically on News Feed, and from that insight and that feedback, we then worked to develop machine learning classifiers to help us predict clickbait.
So we labeled a lot of data in order to develop a machine learning classifier, and we use that classifier to inform News Feed ranking so that articles that we predict with our classifier to be likely to be clickbait will appear lower in News Feed, whereas those that don’t have a clickbait prediction will appear higher. And that way, when people come to feed, they’re less likely to see these kinds of negative clickbait experiences.
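To make the ranking mechanism Lyons describes a bit more concrete, here is a minimal Python sketch of the general idea: a classifier score is used to demote likely clickbait rather than remove it. The field names, threshold, and demotion factor are illustrative assumptions, not Facebook’s actual ranking code.

```python
# Hypothetical sketch: a clickbait probability informs feed ranking by
# scaling down the relevance of likely-clickbait items so they rank lower.
from dataclasses import dataclass

@dataclass
class FeedItem:
    item_id: str
    relevance: float        # base relevance score from a ranking model
    clickbait_prob: float   # output of a clickbait classifier, 0..1

CLICKBAIT_THRESHOLD = 0.8   # assumed cutoff for "likely clickbait"
DEMOTION_FACTOR = 0.3       # assumed multiplier applied to demoted items

def rank_feed(items):
    """Sort items by relevance after demoting likely clickbait."""
    def adjusted(item):
        if item.clickbait_prob >= CLICKBAIT_THRESHOLD:
            return item.relevance * DEMOTION_FACTOR
        return item.relevance
    return sorted(items, key=adjusted, reverse=True)

feed = [
    FeedItem("a", relevance=0.9, clickbait_prob=0.95),  # likely clickbait
    FeedItem("b", relevance=0.7, clickbait_prob=0.05),
]
print([i.item_id for i in rank_feed(feed)])  # ['b', 'a']: clickbait ranks lower
```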
Miles O’Brien: So, with artificial intelligence, you can make the algorithm smart enough to identify what is clickbait from the spammer?
Tessa Lyons: This is using a machine learning classifier in order to help us scale this problem because at two billion people, we want to ensure that our solutions don’t require individual manual review, but rather they can scale across the platform.
We don’t get paid — there are different ad structures and I’m not an ads expert so I don’t want to — that’s why, when you’re saying, like, is this ads piece, like clicking off, bad for us.
Miles O’Brien: Right.
Tessa Lyons: What we really focus on is people having high-quality experiences and spending time on platform. The reason that we’re trying to disrupt the click-through is because it’s paying money to people who are then encouraged to create more of it, but it’s not that it’s bad – it’s not that them clicking off of Facebook causes us to lose money. Does that make sense?
Miles O’Brien: Yeah, I see what you mean.
Let’s back up here for just a second, because you were talking about some broad categories of things that you — in the world of integrity — are focused on. The community standards kind of stuff, that’s easy stuff, right? It’s very clear cut, or not?
Tessa Lyons: Well, it’s clear cut in that we have community standards that we share publicly and we know that when we identify content that violates our community standards, we want to remove it.
Now, it’s less clear cut in that people have different definitions in terms of their interpretations of what constitutes hate speech, and different people who are using the platform, for example, might believe that there’s something they reported as hate speech that doesn’t violate our community standards but they think should be removed. And so, there’s certainly a tension in the definitions of some of these problems that makes it challenging.
Miles O’Brien: So, not so easy. I mean, I’m thinking like the beheadings from ISIS or child pornography, those are easy. Hate speech is, after all, a protected form of speech under our Constitution, but is it — maybe there’s a different approach to it, but as far as the community standards go, how do you define that?
Tessa Lyons: For the community standards, we have our definitions of them and we do all that we can to remove content that violates them. That doesn’t necessarily mean that an individual is going to agree with all of those decisions.
And so they might feel that something should violate a community standard but we don’t remove it because it doesn’t violate our stated policies.
Miles O’Brien: I see, all right. How does the machine get smart? Where do the humans start turning the knobs? Where is the line?
Tessa Lyons: With machine learning, what we’re really doing is using a lot of training data, a lot of data that helps us understand positive examples of a problem and negative examples of a problem, and develop a classifier that, when it sees a new piece of information, is able to predict whether the problem exists. The really critical thing with machine learning is to ensure that we have enough labeled data and to ensure that the data is labeled really objectively.
Miles O’Brien: But at some point along the way though, somebody has to say “This is positive, this is negative,” or does the machine figure it out by itself?
Tessa Lyons: Initially, you need positive and negative examples in order to train on. And so you need that initial input of data in order to develop the classifier in the first place. Now, the way that we approach that is we have really clear guidelines that help labelers ensure that they’re clearly labeling things based on a set of objective standards. Where we’re able to do that well is with objective traits like clickbait or sensationalism. Where it’s more difficult is on issues like misinformation, which are much more subjective.
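As a toy illustration of the labeling-then-training loop described here, the following Python sketch trains a simple text classifier on a handful of hand-labeled headlines and scores a new one. The tiny dataset, the scikit-learn setup, and the model choice are assumptions for illustration only; they are not the classifiers Facebook actually uses.

```python
# Toy sketch: human labelers mark headlines as clickbait (1) or not (0),
# and a simple text classifier is trained on those labels to score new ones.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

headlines = [
    "You won't believe what happened next",      # labeled clickbait
    "Doctors hate this one weird trick",          # labeled clickbait
    "City council approves new transit budget",   # labeled not clickbait
    "Quarterly earnings report released today",   # labeled not clickbait
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(headlines, labels)

new_headline = ["This secret will shock you"]
print(model.predict_proba(new_headline)[0][1])  # predicted clickbait probability
```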
Miles O’Brien: Tell us more about that, because that does make it more difficult to — what is the label in a misinformation article? How do you find that?
Tessa Lyons: Right. We, as a platform, are not an arbiter of truth. We partner with third-party fact checking organizations in a handful of countries in order to get reviews of content to determine if they’re true or false. What we’ve been able to do is take what we’ve learned from those fact checking partnerships over the course of the last year and use that data to help us get better at predicting content that might be false.
And that content is what we prioritize to our fact checking partners for their further review. So, we’re able to essentially prioritize from all of the content on Facebook, things that we think are worthy of review by third party fact checkers who are the ones who make the ultimate decision on veracity.
Miles O’Brien: So, that’s where the human comes into the loop here?
Tessa Lyons: In this particular case.
Miles O’Brien: All right. So, tell us more about the third party fact checking program, how that all works.
Tessa Lyons: Yeah. Taking a step back to talk about misinformation —
Miles O’Brien: Yeah, sure. Okay, that’s good.
Tessa Lyons: One of the things we know is that people want to see accurate information on Facebook, and that’s certainly what we want as well. Misinformation is a problem that we’re tackling in a number of different ways. The first is that we know that a lot of misinformation is spread by fake accounts. We’ve been working on fake accounts for a long time, and certainly have more work to do but we’ve made progress in removing fake accounts and therefore stopping some of the accounts who initially seed and spread this misinforming content.
The second thing is that we see that a lot of the misinformation on Facebook is financially motivated, which is why it’s so important for us to disrupt the financial incentives for these spammers and bad actors who are trying to mislead people in order to monetize. We also know that there’s a lot of correlation between misinformation and other things like clickbait or sensationalism and we’re able to act on those.
The third-party fact checking piece is just one of the tactics that we employ, and it’s useful in that it enables us to get reviews of individual pieces of information to understand whether a third-party fact checker would determine it to be true or false. Once they determine that something is false, we take a couple of different actions. One is that we reduce the distribution of that piece of false content in News Feed, and we do that through the News Feed ranking algorithm. And what that essentially does is ensure that that false piece of content appears lower in someone’s News Feed. And if you think about it, when you’re opening your News Feed, you don’t scroll all the way to the bottom. You have a few minutes and you scroll through and read the stories that are relevant to you.
So if the false piece of content is appearing a lot lower down, there are going to be fewer views of it. But the other thing that we do, because we know that some people will still see it, is we add additional context to it. So we show, for example, the fact checking article from the fact checker in a related articles unit right beneath the false content, and that way people are able to get more information and have a broader understanding of the topic at hand.
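Here is a small, hypothetical Python sketch of the two actions just described for a post that a fact-checking partner rates false: demote it in ranking rather than delete it, and attach the fact-check article as related context. The field names and the demotion multiplier are illustrative assumptions, not Facebook’s actual parameters.

```python
# Hypothetical handling of a post rated "false" by a fact-checking partner:
# demote its ranking score (don't delete it) and attach the fact-check
# article so it appears as related context beneath the post.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Post:
    post_id: str
    score: float
    fact_check_rating: Optional[str] = None   # e.g. "false", set by a partner
    related_articles: list = field(default_factory=list)

FALSE_RATING_DEMOTION = 0.2  # assumed multiplier for illustration

def apply_fact_check(post: Post, rating: str, fact_check_url: str) -> Post:
    post.fact_check_rating = rating
    if rating == "false":
        post.score *= FALSE_RATING_DEMOTION           # rank it lower, don't remove it
        post.related_articles.append(fact_check_url)  # show the fact check as context
    return post
```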
Miles O’Brien: So Facebook does not want to be in a position of killing a false story?
Tessa Lyons: We don’t want a false story to get distribution, so we demote it, and in fact we found that when we’re able to get a false label from a fact checker, the future views of that story are reduced by over 80%.
Miles O’Brien: Why not just delete it?
Tessa Lyons: Well, it’s an interesting question and I think, look, there’s real tension here. We believe that we’re not arbiters of truth and that people don’t want us to be arbiters of truth, which is why we’re partnering with fact checkers.
We also believe that censoring and fully removing information unless it violates our community standards is not the expectation from our community. So we work to reduce the damage that it can do and ensure that when it is seen, it’s surrounded by additional context.
Miles O’Brien: It’s kind of a tricky balancing act for you guys to navigate this, isn’t it? Because you want free and open and encourage people to share their views but by the same token, you don’t want to allow the misinformation. They’re very contrary goals, aren’t they?
Tessa Lyons: There are a lot of tensions in the integrity space. What we’re really clear on is that we want to reduce the amount of misinformation that people on Facebook see and interact with. At the same time, we want to ensure that we remain an open platform where people can have authentic and meaningful conversations. And so, we work to address this problem through reducing the spread of it, through adding additional context and through really targeting the bad actors who are benefitting from this content in the first place.
One of the really effective things that we can do is continue to focus on fake accounts and disrupting financial incentives because a lot of this content is motivated by bad incentives that we can target upstream.
Miles O’Brien: Okay. So, obviously if you send the fake news down to the bottom of the queue, that has a financial implication. What are the other ways you hit spammers and people who would want to purvey misinformation or just outright falsehoods in their pocketbook? How do you go about that?
Tessa Lyons: There’s a few things. One that we’ve talked about is reducing the distribution, because if you reduce the number of people who see something, you’re reducing the number of people who are then going to click through, and the way that these bad actors are monetizing is generally by ads on their landing pages once people click off of Facebook and land on these low-quality websites.
But the other thing that we’re able to do is stop them from building an audience on Facebook in the first place and stop them from leveraging our ads and monetization tools. So we have said that those who repeatedly spread misinformation won’t be able to access our advertising products, which is one of the tactics that they use to build their audiences, and won’t be able to access our monetization products, which is one of the ways that they could have otherwise monetized this bad content.
Tessa Lyons: There’s someone who is sitting behind a computer, generating some piece of false news content with the objective of making money. So, what they’re going to do is write a false news article and host it on a website and then try to drive people to view that website.
One of the ways that they’ll do that is by trying to use Facebook. So they might try to post the story organically in News Feed, or they might try to use ads in order to boost that story or just to grow their audience in general.
Now, what they’re trying to do is ensure that they make more money from the views of the ads on their website than the cost that it takes them to produce the story or pay for the story in the first place. So it’s really a game of arbitrage, and what we need to do is disrupt the financial incentives so that they’re no longer making any return on these false news stories and they don’t have an incentive in the first place to sit there and create the false content and try to monetize it in this way.
Now, what we know is that this is an adversarial space, so they’re going to try to adapt to any of the efforts that we make, and they have done so, which is why we need to continue to invest and stay ahead of it. And we also know that this behavior is not entirely going to go away; if we’re able to shut down or reduce one vector, like clickbait, we’ll see their tactics change and we’ll have to make sure that we’re fighting it in those other areas as well. But we want to make it as hard as possible for these bad actors to use our platform to make money by misleading people.
Miles O’Brien: Without naming names or sites, can you give us an example of how the cat and mouse game plays out that you just described?
Tessa Lyons: There’s an example of an article that claimed that a ship had been found after some number of decades in the Bermuda Triangle. That was a false news story that someone created in order to monetize. And what we saw was that that one article was copied and pasted onto a bunch of other websites. So we had predicted, using machine learning, several of these articles and then queued them to fact checkers, who gave us a rating saying that the story was false. But we saw that it was spreading across a bunch of other essentially duplicate sites. And so one of the things that we have been investing in now is how we can use technology to better identify all of those duplicates faster, so that we don’t have to rely on individual reviews of each copy-pasted version of that piece of content.
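A rough Python sketch of that duplicate-matching idea follows: once one copy of an article has been rated false, near-identical copies can be flagged by text similarity instead of being reviewed one by one. The shingling approach and the similarity threshold are assumptions for illustration, not a description of Facebook’s actual matching systems.

```python
# Illustrative near-duplicate detection using word-shingle Jaccard similarity.
def shingles(text: str, n: int = 5) -> set:
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(max(1, len(words) - n + 1))}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def find_duplicates(rated_false_text: str, candidates: dict, threshold: float = 0.8):
    """Return candidate IDs whose text is nearly identical to a known-false article."""
    base = shingles(rated_false_text)
    return [cid for cid, text in candidates.items()
            if jaccard(base, shingles(text)) >= threshold]

known_false = "Ship missing for 90 years found drifting in the Bermuda Triangle"
copies = {
    "site-a": "Ship missing for 90 years found drifting in the Bermuda Triangle",
    "site-b": "City council approves new transit budget for next year",
}
print(find_duplicates(known_false, copies))  # ['site-a']
```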
Miles O’Brien: It is common for these individuals to have multiple sites, right? For the very reason you described, right?
Tessa Lyons: Yeah and to work in coordinated ways across a few different pages, a few different websites in order to ensure that they’re planting this content and trying to monetize it with as many people as they possibly can.
Miles O’Brien: You can make a lot of money doing this if you’re smart at it, right?
Tessa Lyons: Well hopefully if we do our jobs well, you won’t and we’ve made progress but we have seen some people make real money from this type of arbitrage and this sort of spammy and misleading content, which is why we’re so focused on addressing this problem.
Miles O’Brien: We spent some time with a self-described spammer, who says business is not so good. You’re doing something right. What has changed? What have you done differently?
Tessa Lyons: Well, we’ve had a lot of focus on this area and we’ve addressed it in a number of different ways. I think the thing to really remember is there’s no one solution.
Because it’s such an adversarial space, and because there are so many different tactics that bad actors are employing, we need to ensure that we have a system that is built on really defensive design, where we have a number of different ways in which we are targeting the same types of problems, so that if one of those systems fails, or if adversaries are able to overcome one of them, the overall system is robust enough to prevent them from making money.
Miles O’Brien: That same person by the way has complained to us that some of his competitors are getting through, somehow, someway. He says it’s unfair, maybe he’s just not as smart, I don’t know. But the point is how difficult is it to apply something that’s going to address a large number of these individuals?
Tessa Lyons: It’s difficult, but it’s an area that we’ve been really focused on and made progress on, and that we’re going to continue to invest in. We know that there are areas right now that we need to move faster on, we know that there’s more work that we have to do, and we are doing that work and really prioritizing it. But I think that this will continue to be a cat and mouse game. The problem of spam is nothing new, but it’s certainly playing out at scale in a way that we need to address, and I think we’ve seen the progress that we’ve been able to make through our focused investment in this area. But we’re going to continue to do that, because we know that adversaries are motivated to get around our systems, and we want to ensure that any of the potential spammers that you are talking to tell you what the first one did, which is that our efforts are working to disrupt his attempt to monetize by misleading individuals on Facebook.
Miles O’Brien: So, when you hear that about this individual, you would call that a victory?
Tessa Lyons: I would call that progress, and I would say that we have to ensure that that’s true across the board, not just here in the U.S. but around the world.
Miles O’Brien: You know he — actually, I don’t want to spend a whole lot of time on this guy, but he has spent over the years — and he showed us the numbers — more than a million dollars on ads to boost his stuff, and he feels like he should be treated like a good customer.
Tessa Lyons: I would disagree.
Miles O’Brien: He’s a customer, right?
Tessa Lyons: We first and foremost have a responsibility to our community and that is what we’re really focused on fulfilling and living up to. And so, we don’t want to take any money from people who are trying to mislead people on Facebook. We have a lot of work to do, there are certainly cases where we’re still not living up to that expectation that we have of ourselves today but we are committed to getting it right.
Miles O’Brien: Give me some insights on that. Take us to the front lines and where you’re focused. You probably don’t want to give away too much to your potential adversaries here, as it were, but give us an idea of where the trouble spots remain.
Tessa Lyons: It’s a good question. Look, one of the things that we think about a lot is that our biggest risks are probably areas that we don’t even know about yet.
And so, we ensure that we have teams who are spending time identifying the unknown unknowns and really looking at a broad set of examples to see where there are patterns of abuse or problematic behaviors that we need to be getting ahead of. So the honest answer is there are a lot of unknown unknowns that we need to be identifying. Now, there are also some known challenges that we need to be moving faster on. We’ve made a lot of progress on links; we know that we have a lot of work to do on photos and on videos. We’ve made a lot more progress in some countries than others, and we need to catch up and ensure that the work that we’re doing is true globally, because our community is a global one and we have a responsibility to serve all of them. And we know that there are some areas where advances in technology, like deepfake videos, are going to present new challenges not just for Facebook but for all of us as a society, that we’re going to have to work on collectively to ensure we’re addressing responsibly.
Miles O’Brien: The issue of Deepfake videos is a big one and one that is growing rapidly. We didn’t have time enough to delve into this for our series for the PBS NewsHour on junk news this time but, rest assured, we’re looking at that one and we’re planning some reporting on that in the future.
I asked Tessa if she feels that problem, the problem of fake videos, is much harder for her and her team to tackle.
Tessa Lyons: Images and videos are harder and we have work to do to ensure that we’re addressing them more completely. So images and videos are particularly challenging internationally.
My team was just in Indonesia, the Philippines and Myanmar doing research, and one of the things that we really saw in those countries was the degree to which misinformation is spreading through photos and in some cases through videos. And we’re seeing this in the U.S. as well, but it’s definitely a global problem. We’re working in a few different areas. One of the areas that we’ve been working with our fact checking partners on is ensuring that we are leveraging their expertise, to not just review links but to also review photos and videos, and to notify us when they see photos or videos spreading on our platform, even if we haven’t identified them yet. And that’s an area that we’ve been working with our partners on and that we’re hoping to make progress on in the coming months.
But there’s a lot more that we can do using technology to identify photos that have been manipulated, videos that have been manipulated or one of the big problems we’ve seen is photos or videos that have been taken out of context. You know a photo could be entirely accurate in that it’s a photo that was actually taken of real events that transpired.
But the context in which it’s being presented could be suggesting that it was in fact happening at a different time or to a different group of people and that can really mislead people and cause them to draw entirely inaccurate conclusions. So there’s work that we’re exploring on how we can better give people context to say, “Hey, this photo is from this place and time or this is where it originally surfaced” or to say, “Hey, this photo looks a lot like this photo but there’s some differences in them where it seems that there’s some photoshopping that’s taken place.” We need to invest across all of these areas and we need to ensure that as we’re doing so, we’re not just doing it alone, but in partnership with the fact checking community that we work with and other platforms and organizations who are facing similar challenges.
Miles O’Brien: Sounds like a huge challenge for the algorithm to do that.
Tessa Lyons: When we think about machine learning, there are a lot of signals that we have that are based on how something spreads, on the types of accounts or patterns of pages that are disseminating a piece of content.
And so, there are behavioral signals that transfer regardless of the content type, which is one of the places where we’ll be able to make faster progress. But certainly being able to identify that a video has been manipulated to present someone saying something that they didn’t say, in a way that’s really compelling and believable, is an incredibly difficult challenge for journalists, for platforms and for all of us as a society.
Miles O’Brien: Yeah. I’ve seen some of these videos where they take — you know, it’s perfectly lip-synced, and you’ve seen them all. What you’re suggesting then is that the trajectory that they take is telling, where they come from, how they spread?
Tessa Lyons: I don’t think any one signal is enough, but certainly looking at how a piece of information spreads, the virality, the types of pages and accounts that are seeding it and working in coordination in order to spread it — those are signals that can help us at least identify that there might be something worthy of, for example, fact checkers to review, or that more context might be necessary. So we need to use all of the behavioral signals that we have, in addition to doing more and more to understand the types of manipulation and context changing that we’re seeing in photos and videos around the world.
Miles O’Brien: So that’s the big one on your list right now, I imagine. What else are you focused on?
Tessa Lyons: So I think photos and videos are a huge focus, and international is a huge focus, because we see these different challenges with integrity — not just misinformation but across the board — manifest differently in different countries. And our ability to address them is in some cases informed by local partnerships or the way in which people in different countries use Facebook, which we know varies from place to place.
Miles O’Brien: There are places where Facebook is the internet and what Facebook does means an awful lot, right?
Tessa Lyons: We have a responsibility to the people who use Facebook all around the world. And certainly, if people are coming to Facebook to connect with their friends and family and to get information on the things that are meaningful to them, we have a responsibility to ensure that the experience that they’re having lives up to our values and their expectations and I think the stakes are even higher in some places where people use Facebook even more.
Miles O’Brien: We’ve been talking about, you know, demonetizing, taking the profit motive out of it. What if the motive isn’t profit, what if it is to change an election result, for example? Does that make it harder for you to address it?
Tessa Lyons: There’s a number of different types of challenges, certainly the challenge of foreign interference is one that we take very seriously and that we’ve made a lot of investments in over the course of the last year that Mark, Sheryl and others have talked about extensively.
That’s a specific challenge that we’re working to address in a number of ways; it’s not the full picture of misinformation, where we really see a lot of financially motivated activity.
Miles O’Brien: Yeah. Try to give us an idea. You know my sense of it is that the financially motivated stuff is the big enchilada and the foreign stuff is a smaller piece. Is that true?
Tessa Lyons: All of these things are difficult to quantify and it’s difficult to quantify someone’s intentions and we see certainly some cases of overlap where even if something is financially motivated, there might be an ideological bent. But we do believe that the financial levers are one of the ways that we can have a lot of impact and move quickly in order to reduce the amount of misinformation. In parallel, we’re working hard to address the cases of foreign interference and cases of individuals who are really trying to use the platform in ways that are divisive or polarizing.
Miles O’Brien: Is that problem different than just purely people trying to make a buck or is it the same?
Tessa Lyons: For the people who are trying to make money, we’re able to go after the financial incentives and the financial paths that they have and make a lot of progress in disrupting those, so that the incentives for them in the first place are reduced, and we have made and will continue to make a lot of progress there. If people aren’t financially motivated, then they still want to have a lot of people view their content in order for them to achieve whatever their other objectives are, and so the work that we do to reduce the spread of that type of content is still effective in disrupting their objectives.
Miles O’Brien: All right, so what about these fact checkers? Tell me, who are they and how do you pick them?
Tessa Lyons: We partner with fact checkers in a handful of countries now, and we just announced a couple more countries, and we’ll be continuing to look for places where this might be an additional effective part of our overall efforts. The fact checkers that we partner with are all fact checkers who are signatories of a broader fact checking accreditation body, the International Fact Checking Network.
And we work with them to send them, at this time, links. And going forward, we’re going to be continuing to find more ways to work with them on photos and videos for their review. The fact checkers review the content and give us a signal about its accuracy, and also give us an article that tells us how they came to that conclusion, that we’re able to use to give people more context about the overall topic.
Miles O’Brien: So, how many people are doing this?
Tessa Lyons: We partner with a number of different organizations, and each organization will have a different team size. In the U.S. right now, I believe we have five partners, and we have partners internationally as well.
Miles O’Brien: And so five partners, is that it? I mean two billion people, is that enough of a fact checking apparatus you think?
Tessa Lyons: So, there’s a lot that we can do to leverage and get greater scale out of the fact checking community. And we have work to do, we’ve made progress but we still have more to do to ensure that the things that we’re surfacing to them are the right things for them to spend their time on.
If we can be sourcing using machine learning and using other predictions, including feedback from our community when they report things as false, then we’re able to ensure that the fact checkers’ time that’s spent reporting is focused on the highest priority problems. We also know there’s a lot we can learn once they give us a false news rating. So, once we know that something is false and we’re able to reduce the distribution of that piece of content, we’re also able to understand the actors who repeatedly spread this type of content and remove their ability to advertise or monetize. So, it’s not just that we need every individual story to be fact checked, because we’re really working at a system-wide level.
Miles O’Brien: So, can you walk me through how it works — once again using a little example, something that seems like it might be fake or false. Do you have a specific person you call for specific subject areas? How does it actually go?
Tessa Lyons: In order to predict things for our fact checkers to review, we rely on a number of signals and one of them is false news reports from our community. So people on Facebook can report individual posts as potentially being false news. And when we get those reports from people, we use that among other signals to prioritize a set of content for our fact checkers to review. They have a tool that we’ve built with them where they’re able to review this content and provide ratings on individual articles.
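As an illustration of that prioritization step, here is a hedged Python sketch that combines user “false news” reports with a model prediction to order a hypothetical review queue for fact checkers. The weights, the cap on report influence, the scaling by views, and all of the names are assumptions made for the example.

```python
# Hypothetical prioritization of posts for fact-checker review, combining a
# model's predicted probability that a post is false with the volume of user
# reports, scaled by reach. The weights and cap are illustrative assumptions.
def priority(report_count: int, predicted_false_prob: float, views: int) -> float:
    report_signal = min(report_count / 50, 1.0)   # assumed cap on report influence
    return (0.6 * predicted_false_prob + 0.4 * report_signal) * views

def build_review_queue(posts):
    """posts: iterable of (post_id, report_count, predicted_false_prob, views)."""
    ranked = sorted(posts, key=lambda p: priority(p[1], p[2], p[3]), reverse=True)
    return [post_id for post_id, *_ in ranked]

queue = build_review_queue([
    ("post-1", 40, 0.9, 100_000),  # heavily reported, likely false, wide reach
    ("post-2", 2, 0.3, 5_000),
])
print(queue)  # ['post-1', 'post-2']
```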
Miles O’Brien: So do you assign articles to individuals or do they just kind of — is there a place they go and they pick it?
Tessa Lyons: All of our fact checking partners have access to all of the things that we’re prioritizing in the queue and they’re able to review any of them.
Miles O’Brien: Who are the five fact checkers for this country, five organizations and how did you decide who to pick?
Tessa Lyons: In the U.S., we’re working with five different fact checking organizations. We’re working with the Associated Press, The Weekly Standard, Snopes, PolitiFact, and FactCheck.org. All of them are signatories of Poynter’s International Fact Checking Network principles for fact checking, and we’ve been working with them now to have them review articles that we predict might be false and that they’re able to provide ratings and reference articles for.
Miles O’Brien: Okay, and so, how’s that going so far?
Tessa Lyons: We’ve made progress and we’ve learned a lot. We certainly have more work to do to ensure that the stories that we’re surfacing to our fact checking partners are the right ones for them to be spending time on. We know right now that we predict some things that are certainly not false, and that we are missing some of the false stories that we would like to be predicting. We also know that the whole process can take quite a bit of time, and so there’s a lot of work that we can do on our end to improve that and reduce the amount of latency.
The value that we get from the fact checking partners is an important piece of the overall system, but it’s just one of the many things that we’re doing to fight misinformation and it’s really our investment across the board that’s helped us make progress.
Miles O’Brien: You mentioned the latency component, which is — it takes time to sort things out, and people click through and are onto the next thing. Is it possible to match the rate at which people use the product?
Tessa Lyons: I think we have a lot of work to do. Look, fact checking to be done well does take time, and so it makes sense that our partners who we work with want to take time to ensure that they are reporting accurately. So, we really need to do work on our end to ensure that we are predicting things to them as soon as possible and that we are not creating additional latency. And we also need to make sure that we’re addressing not just individual pieces of content but really the actors and the pages who are spreading this type of content, because if we’re always waiting for individual facts to be reviewed, that’s going to be slow in each case.
But by working with the fact checkers, if we’re able to understand the pages, the domains, who repeatedly spread this type of information, then there’s more work that we can do upstream to stop it earlier.
Miles O’Brien: So, you find that piece of content that is not factual and you go to the source and say, “Wait a minute, let’s look at this and see if there’s a pattern,” and these are people you would take off of Facebook completely or downrate, what would happen?
Tessa Lyons: Well, it depends. If it’s a page or a domain that is violating our community standards, then we would remove them from Facebook, and we do see that from time to time. For example, when we investigate a page that repeatedly spreads misinformation, we’ll see instances of fake accounts that maybe our automated systems didn’t detect but that we’re able to identify upon review, and those signals will enable us to take greater action and actually, in some cases, remove some of these entities from the platform entirely.
In other cases what we work to do when we see a page or a domain that repeatedly spreads misinformation is reduce their overall distribution and remove their ability to advertise and monetize.
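The repeat-offender handling described here can be sketched as a simple strike policy in Python. The strike threshold, the demotion factor, and the field names are assumptions for illustration; Facebook has not published the exact rules.

```python
# Hypothetical "repeat offender" policy: after an assumed number of false
# ratings, a page or domain loses ads and monetization access and has its
# overall distribution reduced.
from dataclasses import dataclass

REPEAT_OFFENDER_STRIKES = 3   # assumed threshold
PAGE_DEMOTION_FACTOR = 0.5    # assumed reduction to overall distribution

@dataclass
class PageStatus:
    page_id: str
    false_ratings: int = 0
    can_advertise: bool = True
    can_monetize: bool = True
    distribution_multiplier: float = 1.0

def record_false_rating(page: PageStatus) -> PageStatus:
    page.false_ratings += 1
    if page.false_ratings >= REPEAT_OFFENDER_STRIKES:
        page.can_advertise = False
        page.can_monetize = False
        page.distribution_multiplier = PAGE_DEMOTION_FACTOR
    return page
```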
Miles O’Brien: Going back to fact checking, is it a matter of hours or days before you get the facts back or the veracity of an article is judged, or does it just vary?
Tessa Lyons: It varies.
Miles O’Brien: Yeah, and so that’s — but you want to try to speed that up as much as you can I imagine —
Tessa Lyons: Absolutely. And there’s really two stages, there is how quickly can we surface something to fact checkers to review and that’s on us and we’re working to improve and then there’s how quickly can fact checkers review it. And it’s important that the second not be so rushed that they reach inaccurate conclusions so we know that it’s important that reporting takes time and gathering the facts takes time.
We’ve talked to them about ways that we might be able to help them with better tools or better information to help improve their ability to access the information they need to do their jobs.
But we also know that a lot of the responsibility is on our side to be faster in predicting these things in the first place.
Miles O’Brien: So, you know, I’ve had people tell me, “Oh Snopes, that’s liberal.” You know, even the fact checkers these days are called into question as having some degree of bias. How do you respond to that potential criticism that the fact checkers themselves bring a human bias to the equation?
Tessa Lyons: The fact checkers that we work with are all certified by the International Fact Checking Network, and we have systems in place so that if two fact checkers disagree with each other — if one says something is false and one says something is true — then we say, “Okay, hey, there’s some kind of disagreement here.” That happens very, very, very rarely. In fact, I can’t even remember the last time that we had that type of disagreement. But in those cases, we ensure that we remove the demotion that we would otherwise have applied and don’t consider that to be a strike against a page or domain, because clearly there’s some disagreement about the facts.
At the end of the day though, what the fact checkers are reviewing are facts and so, we’ve seen a lot of consistency in their reviews.
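A tiny Python sketch of the dispute rule Lyons describes above, under the assumption that partner ratings arrive as simple strings: an undisputed “false” rating triggers demotion and a strike, while conflicting ratings trigger neither. The function and field names are hypothetical.

```python
# Hypothetical handling of conflicting fact-check ratings on one article:
# only an undisputed "false" rating triggers demotion and a strike.
def resolve_ratings(ratings):
    """ratings: list of strings from different partners, e.g. ["false", "true"]."""
    disputed = len(set(ratings)) > 1
    rated_false = (not disputed) and ("false" in ratings)
    return {
        "demote": rated_false,        # lower its News Feed distribution
        "count_strike": rated_false,  # counts toward repeat-offender status
        "disputed": disputed,         # disagreement: lift any demotion
    }

print(resolve_ratings(["false", "false"]))  # demote and count a strike
print(resolve_ratings(["false", "true"]))   # disputed: no demotion, no strike
```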
Miles O’Brien: I’ve seen a lot of things that I would classify as junk news — misleading — which it would actually be hard to find a factual error in, and that’s a harder problem, isn’t it?
Tessa Lyons: Absolutely. And that’s why this work is really challenging. If there are things that are factually incorrect, we can partner with our fact checking partners in order to identify them, if they are things that are using specific tactics like clickbait or sensationalism, then there are actions we can take.
If they violate our community standards, we have a clear set of things that we can do to remove the content. But there’s a lot of content that is factually accurate but presented in a very misleading way and I think that’s an area that we all need to figure out how to better address and to ensure that at least people have context to make more informed decisions when they encounter it. One of the things for example that we started testing recently is a product called “Article Context.”
When you see an article in your News Feed — this isn’t rolled out completely yet, but it’s something we’re testing — there’s a little “i” icon and you’re able to click on it to get more information about the domain that is sharing this article in the first place, so that you understand more about the perspective they’re coming from or what other information is available on them, and can use that to create a more holistic interpretation for yourself of the content that you are reading.
Miles O’Brien: So that would be right there in the News Feed, off to the side or whatever. You’re seeing this and here’s the site it comes from or the people it comes from, is that how it goes?
Tessa Lyons: Exactly, and it’s something that we’re testing so we still have a lot to learn but it enables people to see right in feed more information about the source behind the information that they’re reading.
Miles O’Brien: That’s just being rolled out so it’s hard to say how that’s working, but the fact that the fact checkers are in the mix, do you have any sort of metric to know if this is working?
Tessa Lyons: So, there are a lot of different problems. And so when you think about what the metrics are and what is working, there are a few different ways to measure things. What we do know is that we’ve made progress.
Miles O’Brien: Facebook doesn’t want to become an editorial arbiter. Is that accurate to say? You want to be a free and open public square not a media company, right?
Tessa Lyons: We’re a platform, and our values are about being an open platform, enabling people to have authentic and meaningful conversations with each other. So we take our responsibility to ensure the safety and security of our community really seriously, and to ensure that the experience people have on Facebook is consistent with their expectations and responsive to their feedback.
Miles O’Brien: But it’s not the goal of the company to become a traditional editorial gatekeeper?
Tessa Lyons: We’re a technology company. We recognize that we’re a new kind of platform but we’re fundamentally a company that hires engineers.
Miles O’Brien: It is an unusual hybrid though, isn’t it? It’s kind of crept — it began purely with the technology idea, but it’s crept into this area that’s a little more into the media realm, hasn’t it?
Tessa Lyons: As we’ve seen with some of our announcements this year, Facebook is first and foremost about friends and family. And our focus on meaningful social interactions has really been a return to saying, “This is what we were created for, and we want to ensure that when people use Facebook, they’re connecting with their friends and family and having meaningful connections with those in their communities.”
Miles O’Brien: So as you look at ways to go after this, you know, this hydra of a problem, one of the things that might help is obviously you’re turning to outside sources to help you. Is Facebook being forced to be more transparent than it’s comfortable with about, for example, how it does its work, what its algorithm looks like, how it makes decisions?
Tessa Lyons: There is an expectation of transparency and it’s an expectation that we’re working really hard to meet.
We recently launched a new ads transparency product in order to give people more information about advertising that they’re seeing on Facebook and we’re continuing to find ways to ensure that we’re being transparent about the information and the work that we do.
Miles O’Brien: You could make an argument.
I have made it myself. Sometimes I get the sense that people don’t really care about what’s factual anymore. They care about what reinforces a worldview, what seems interesting or funny, what makes you angry: my blood is boiling, I click it. Is it possible people maybe are not going to care about all of this fact checking?
Tessa Lyons: I think that people still want to see accurate information and they don’t want to be misled or manipulated. Something we’re working really hard to ensure is that when they come to Facebook, they’re having authentic experiences and so the information that they’re seeing is from authentic sources and the conversations they’re having are with authentic people.
Miles O’Brien: But they say they want authentic. Don’t they really just want to, you know, click on that thing about Hillary and Benghazi or whatever it is?
Tessa Lyons: I can’t speak for everyone. I think that people want to ensure that the information that they’re consuming is accurate and that they are having authentic connections with those they are interacting with.
That said, certainly, you know, I don’t always want to be reading one type of information, or I don’t always want to be consuming information from one publisher. I myself have a pretty diverse media diet, and that’s true of a lot of people. I think our goal really is to ensure that the experience that people have is meaningful, and that it’s not just consistent with what they might click on but really what they say at the end of the day adds value to them.
Miles O’Brien: So is there anything Facebook can do to make people perhaps a little more skeptical or perhaps getting outside their little bubble to look at views which may not support their world view. Is that something Facebook — is that even the province of Facebook or is that something that is a much bigger issue?
Tessa Lyons: I think that’s something that people are already doing on Facebook. I think people on Facebook are already connecting with a broader set of perspectives than they might have in their day-to-day lives and finding new sources of information or new communities to be part of.
Now, one of the things that we are doing is testing showing people additional reporting on subjects when they go to share something. So for example, if a fact checker has marked that something is false and someone goes to post it, we’ll actually have an interstitial that pops up and says, “Hey, before you share this, you might want to know that there is additional reporting on this area,” and so far we’ve gotten some feedback from people that it’s helpful, that they feel more informed. And so I think there is an opportunity for us to explore doing more in this area to give people more context and equip them to make decisions about what they feel comfortable trusting.
Miles O’Brien: So you’re piercing the filter bubble a little bit?
Tessa Lyons: We’re certainly working to expose people in this case to additional reporting from other sources that they might not already be following.
Miles O’Brien: Try to give me an idea — it’s two billion people. The scale of this place kind of boggles my mind. How big a job are you facing? Do you get kind of overwhelmed when you think about what you’re up against?
Tessa Lyons: We have a huge responsibility and it’s a responsibility that we take very, very seriously.
We also have huge teams. It’s not just one person tackling all these problems. We’ve scaled by huge numbers the investment that we have made in fighting integrity problems in recent years, and that investment has enabled us to tackle these problems more effectively around the world, to ensure that we are making progress and living up to the responsibility that we have for our two billion users.
Miles O’Brien: But do you feel like you’re making a dent?
Tessa Lyons: I certainly feel like we’re making progress. I also think that there are a lot of challenges still ahead, which is why it’s important that we remain committed to solving this in the long term.
Miles O’Brien: So, at the end of the day, if all you do here to try to root this all out means that people spend less time with Facebook, so be it?
Tessa Lyons: I think so be it. I want to ensure that the time that they spend is meaningful and authentic.
Miles O’Brien: That can hurt the bottom line.
Tessa Lyons: Our company has always been about more than the bottom line. We’ve always been about a mission that we’ve worked really hard to achieve for people all over the world. And we have been really open this year and saying that our focus on our mission and on integrity is the most important thing.
Miles O’Brien: Tessa Lyons, thank you for your time.
There’s no question that if people stop trusting Facebook, it will hurt their bottom line. And no hegemony in the world of corporations is guaranteed. There was a time when it would be inconceivable to think of Pan Am disappearing, or Kodak not being relevant all over the world to photography, or the Bell System no longer ensuring our phone calls go through.
Facebook’s position in the world, though dominant, is not guaranteed, and so when they say they’re concerned about people trusting the platform, feeling safe to be there, and not being besieged by junk, I think we do have to take that at face value.
But again, they do have an unprecedented, huge tiger by the tail.
Banner image credit: Tim Bennett | Unsplash.