MAY 17, 2021

Making the Media S1E10: All Things AI

Felix Simon on the Making the Media podcast

Artificial intelligence is a fascinating—and often thoroughly misunderstood—topic, and it's making its way into the world of news.

In this episode, we ask: What are the cutting-edge applications of AI in the news industry? How can we deploy it responsibly? Is AI only accessible to the largest organizations with the deepest pockets? And perhaps most importantly, what's next?

See more episodes and subscribe:
Listen on Apple Podcasts
Listen on Google Podcasts
Listen on Spotify

Listen to Hear:

  • The truth behind common AI myths
  • How news organizations are taking advantage of AI today
  • Why AI can boost efficiencies, but likely not at the cost of human jobs


Our Guest This Episode

Felix M. Simon is a doctoral student at the Oxford Internet Institute (OII) and a Knight News Innovation Fellow at Columbia University's Tow Center for Digital Journalism. He also works as a research assistant at the Reuters Institute for the Study of Journalism (RISJ) and regularly writes and comments on technology, media, and politics for various international outlets. His doctoral work at the University of Oxford focuses on the implications of AI in journalism and the news industry. His broader research interests include political communication in the digital age and the changing nature of journalism and the media in the 21st century. He also takes an active interest in populism and the future of mis- and disinformation. Felix graduated with a BA in Film and Media Studies from Goethe-University Frankfurt and holds an MSc in Social Science of the Internet from the OII. He is currently a fellow at the Salzburg Global Seminar.

That's the funny thing about AI in many ways, because the public imagination is very much still shaped by Hollywood...It's Terminators, it's sentient machines who then do things for us...And opposed to that is what most computer science experts call real AI, which is a much more mundane technology in use...And so there's this story of 'Oh, the robots are going to come and take our jobs.' I think that's a bit apocalyptic in many ways.

Felix Simon, AI Researcher, Oxford Internet Institute

Mentioned in This Episode

What News Teams Should Know About AI-Powered Search Tools

AI-powered search tools put media into the hands of news producers quickly, allowing them to keep up with the ever-accelerating pace of news production.
Read more

Avid and Microsoft Cloud

See how partnering with Avid in the cloud and Microsoft Azure can help your media enterprise grow into the future.
Get the details



Episode Transcript

Craig Wilson: Hi, and welcome to the Making the Media podcast. Craig Wilson here, and thanks again for joining me, or if you are a first-time listener, where have you been?

We have covered lots of different subjects in the season so far and this episode is no different. Having talked about the pandemic, planning the newsroom of the future, mobile journalism, multi-platform delivery, how to attract new audiences, and cloud workflows, this time we look at how artificial intelligence—or AI—may shape the future of newsgathering and consumption.

For some it looks like the rise of the machines, as companies seek efficiencies by replacing humans with servers, while for others AI is a costly investment that risks widening the digital divide between the big players and the smaller stations.

Lots to unpack here so let's dive right in. To talk about all things AI, I spoke to Felix Simon.

Felix is a doctoral student at the Oxford Internet Institute and a Knight News Innovation Fellow at Columbia University's Tow Center for Digital Journalism. He also works as a research assistant at the Reuters Institute for the Study of Journalism and regularly writes and comments on technology. His work at Oxford currently focuses on the implications of AI in journalism and the news business, so he's the right guy to talk to.

Now, I began by asking him what the industry's main areas of focus are right now in its use of AI.

Felix Simon: I think we can think of this in basically two settings: one is news production, the other is news distribution. And in the context of news production, what we've seen a lot is that news organizations use, for instance, machine learning to go through very big datasets to filter out the information that they then want to turn into news somehow.

So if you think of the Panama Papers or the Paradise Papers, these big international investigations, part of what they did was actually using machine learning to sift through this massive pile of documents, of financial documents and financial data, to identify potential stories and potential topics of interest. And that's one use case from the production side.

We also increasingly see organizations like the BBC, for instance, use machine learning and natural language processing, which are both forms of AI, to change story forms. So say you've written something as a text for the BBC website and you then want to easily move that into an Instagram format, for instance, or into social media posts of some other kind. There are currently AI applications in development at BBC News Labs which can help journalists do that quicker. So that's the production side.

And then of course, there's distribution. And I think in a distribution context, AI is mostly a continuation and a perfection of tools and techniques that have existed for quite a while. So personalized news, that's nothing new, really. That already exists, but machine learning, for instance, can allow you to make it better, to improve it.

It's very labor intensive, it's quite costly, it's something where you need a high degree of very specialized knowledge, which doesn't sort of easily exist out in the open. So it actually makes sense for organizations to group together and to try to explore this together. Because that's in some ways the only way for them to actually make headway in this space, which is heavily dominated by big technology corporations, which of course have different resources than your average, even international news organization.

So I would assume that we will see, at least in the next couple of years, more collaboration between different news organizations, and I think it actually makes sense for them to explore this together because it's so uncertain, and it's not quite clear yet what the potential benefits of this technology are for them. To find out together, especially in times when, again, financial pressure is so high, is a logical and sensible step.

CW: You know, you said something earlier on, Felix, that I wanted to pick up on. You mentioned that AI in this sense is actually quite labor intensive. I think a lot of people would think of AI and think it's just a bunch of machines. So what do you mean by that, in terms of it being labor intensive?

FS: Yeah, that's the funny thing with AI in many ways, because the public imagination is very much still shaped by Hollywood. It's Terminators, it's sentient machines who then do things for us. It's Siri, to some extent, or Alexa, these products which seem to be sentient machines that can talk back to us. That is kind of the imaginary AI. And opposed to that is what most computer science experts call real AI, which is a much more mundane technology in use.

And basically what I mean is that what comes as a package with this imaginary AI, with the "Hollywood AI," is, "Oh yeah, the machine will do everything for us, and I don't have to do anything." And in practice, all these AI applications need a lot of data. They need a lot of training data. They are quite labor intensive, because you need specialized workers. You need computer scientists. You need data analysts who can write the code, work with the code, who can make sense of the results.

And so this old story of, "Oh, AI will easily replace lots of people," that might be true to some extent, because it might replace certain groups of people in the workforce, but more broadly, you also need very highly skilled and trained people to keep it running and to set it up in the first place.

And that's of course true if we think of AI in the news context as well. You actually need specialists to deploy this technology. And so there's this story of, "Oh, the robots are going to come and take our jobs." I think that's a bit apocalyptic in many ways. It's not really what we see so far if we look at the industry.

CW: So is it the case that what you need to actually interrogate and understand the information is something highly specialized? It's not something that's available as a commercial off-the-shelf package you can go out and buy? This is something that still requires a huge amount of development?

FS: Yeah, that's totally correct. So there are some off-the-shelf packages and off-the-shelf code which you can use—sometimes it's open source, sometimes you have to buy it. The big technology companies especially have platforms which provide AI services to some degree.

But it's for very broad, general use. If you think of something like machine translation, that's something you can buy off the shelf. That exists, if you look to Google, if you look to DeepL. Or automatic transcription, where you just send your audio file to a service and they do it for you thanks to some tool which probably has machine learning and natural language processing at the back end. That's something you can already buy and simply plug in.

But depending on what you want to do with the technology, depending on what your specific interests are as an organization or even an individual journalist, you need people to come in and think about, "OK, what is the problem? What do we want to do with it?" and then try to develop a solution. That's usually where the work comes in and where you need specialized people who can do this, because it's not something your average programmer could do.

CW: So obviously, you know, the broadcast and digital industry has a huge range of different outlets now, from large multinational news organizations all the way down to regional or local stations in the US, for example. So is there a divide between those types of organization in their level of interest in AI?

FS: Unfortunately yes, and I think again we have to give a two-sided answer. For one, I think the interest is fairly uniform across the board. Every news organization I know of has, to some extent, an interest in AI, and they're like, "Oh yeah, of course there's this new technology that could potentially do magical things for us." Because there's this big hype, lots of organizations have that interest.

But when it comes to being actually able to do something with it, or to invest in it, to tinker around with it, that's where we see a big divide opening up. And that's mainly, again, because it's cost intensive. You need lots of data, you need computing power, you need specialized people who can work with this and implement it, and that's, as with many things, unfortunately usually clustered at the top.

So you have a set of very well-endowed, big news organizations who can afford this, who have the data from their own websites or from the services they offer, who have the computing power, and who have the resources to experiment with AI in many ways. Whereas if you look at many local newspapers, they are already under a lot of financial strain, depending on where they operate, and for them it's usually unfortunately not even an option to think about, because they have so many other problems and so many other holes to plug in their daily operations. Thinking about something as futuristic in many ways as AI is just not on the cards for them.

CW: When it comes to the production side of things, do you think that's something people are looking at as an efficiency play in some respects? That they see a way of taking content from one platform and repurposing it? Because I think something else that's changed in recent years is that, maybe ten or fifteen years ago, there was a view that you could produce once and then just distribute that content everywhere, but in reality what people want now is a different experience on different platforms. So on the production side, is that where the area of focus is?

FS: Yes, I think that's an excellent point. So AI, as with any other technology, is basically for many people about efficiency and increasing efficiency. You have a certain task, and a technology is brought in to make it more efficient. That's basically what happened with writing when the typewriter came around: it's just quicker, and it allows you to write certain things quickly in some formalized pattern. Same story with computers and the Internet.

So I think one general thing we can say for AI in the news, and why executives would share my interest in it to some extent, is that it promises to make certain tasks more efficient. If I can, say, take this interview, get the transcript in English, and then plug it into an automated AI translation tool that gives you the result in German or Russian a couple of seconds later, of course that's going to save you a lot of time. Even if you then still have to go through it to check for basic errors or something the AI has gotten wrong, it still saves you, say, an hour. And so that's definitely one of the big, overarching topics.

And more specifically to your point about "produce once and then distribute," I think that's again the promise of AI when it comes to story formats. It's potentially already happening now to some extent, and I think we will see more of it in the next couple of years. It promises that you can take a certain piece of content and not just easily distribute it to many people, but distribute it to many people in a more targeted and personalized fashion.

CW: Do you think, on the other side of that though, there is a risk around plurality of news—I think Twitter's a good example of this—where potentially you end up in an echo chamber where all you hear are the things you want to hear? Is that something AI can factor in, so that it's not purely about what your interests are, and you actually get some other information from other sources as well?

FS: Yeah—let me dispel one myth first. I think echo chambers are something that, in the public imagination, are presented as this big scary threat to democracy in our information environments. And actually, from the research side of things, we find very little evidence for that. In many cases, the sort of little echo chambers we do have come from people self-selecting, and not so much from algorithms or technology driving them into these spheres where they only hear their own stories and what they care about.

But of course, it's something people are concerned about in the context of AI. And the good news is that these systems are design choices. The designers, the people who make them, can intentionally create them one way or another, and they can think about these problems in advance. And they should actually think about them at the beginning, before they even start rolling it out. With any recommendation tool, you can think about this in advance and say, "OK, actually, instead of only giving Craig Wilson or Felix Simon the kind of news they're interested in, that they care about, we should also occasionally offer them something or put something their way through this tool which is probably not what they want, but what they need."

If you look at it from a higher vantage point, it is a concern with AI, and I think it's a justified concern, but it is also something one can work against, and something that can be considered at the design stage of this technology.

CW: Going back to talk a little bit about the production side of things, Felix, and that efficiency play we talked about earlier on, I think that's something a lot of news organizations want to take advantage of regardless of their size. So, for example, at Avid we have integration with Microsoft Azure for some of our AI services. We also have phonetic indexing, so you can search on audio. Are these the kind of practical things that relatively small organizations are looking to take advantage of at the moment, things that already exist and don't really require additional investment?

FS: Yes, that would be a hard yes from my side and from what I've learned through my own research. If there's a tool for, say, a very specific task that you can easily buy off the shelf and apply, and if it's reasonable from a cost perspective and you can trust it from, say, a privacy point of view, then there's definitely a lot of interest from many news organizations. It means you don't have to hire your own team of developers, you don't have to hire a team of people who will try to build this on their own, potentially giving them lots of money to work on it for six months only for it not to work in the end. So of course there's this risk involved, and if, opposed to that, there's a product which you only have to implement, where you probably just need someone coming in for a couple of weeks to help your organization make it work, then of course that's to some extent the more attractive option.

Of course, there's always concern, especially in the news, where a lot of emphasis is placed on autonomy—and not just autonomy from political forces, but also autonomy from economic forces—that with a tool you have not developed in-house, which comes from someone else, you're dealing with basically a black box: you don't really understand how it's working or, especially in the case of AI, how it's arriving at the decisions it ultimately arrives at, and it potentially has ethical problems baked in. Bias, for instance, is always a massive concern in AI, and rightly so. There might be data protection violations, security issues, all these things. What happens to, say, my own data? Is it safe? Will it be protected? I think these are legitimate concerns news organizations have when they rely on proprietary products. And of course, that is also the case when it comes to AI.

So of course there's this business interest in it. If it exists, if it's already built, if it's for something I can use right away, great, but of course they also have to think about the other side of the coin.

CW: Do you think people see AI as a threat within the industry?

FS: Yes and no. Again, with every new technology there is this sense of danger, and people are afraid of it to some extent. And it's understandable because there's an uncertainty there when you don't quite know what is happening, and things we don't quite understand usually make us afraid. And it's the same with AI.

I think the fear of AI is probably stronger in the general public, and that's mainly because they are not as well informed—which is not their fault, it's just that people have other things to do than read up on the latest AI research and what it can actually do versus what it's supposedly doing according to Hollywood or the news.

And I think the fear around AI in the industry is a bit more even. There are people who have looked into it and realized, yes, there are potential dangers, there are potential benefits. It's kind of a swings-and-roundabouts situation and we don't really have to be massively afraid of it. I think that's currently the situation. And again, it depends: if you have a lot to do with it, if you've looked into it in detail, you're probably less afraid than someone who's just sort of hearing about it and thinks, "AI is coming for my job, and as of tomorrow I'll be an unemployed journalist rather than someone who sits at a desk somewhere."

CW: A final couple of things to explore, Felix, really about innovation. Is this a key area of innovation for news organizations, or are there other things they are more interested in at the moment? How do you assess where it stands on the list of priorities, I guess, for news organizations?

FS: It's pretty high on the list from what I remember, especially if you look at the big industry trends reports, like the Digital News Report by the Reuters Institute and Nic Newman. They poll news executives, I think, every year and ask them, "OK, well, what are the big trends?" And AI is definitely high on the list for them; it's one of the things they're really interested in and one of the technologies they definitely want to explore. So I would say it's clearly one of the key topics of interest when it comes to news innovation.

CW: If you look forward a number of years, what do you think are the big things that are going to come between now and, say, three or four years' time?

FS: I think definitely that we will see a broader rollout across organizations and not just in places like the US and UK, which are usually at the forefront, but also in other countries. And it will be more natural for news organizations to treat AI as a concrete technology that you can use, that you can apply for certain tasks within news work and bring in, rather than this sort of futuristic technology, which usually just exists in films or in books.

CW: So Felix, you're embarking on a very large project, you've got a number of years of study ahead of you. There is one question I'm asking everyone who's on the podcast: What is it, if anything, that keeps you up at night?

FS: Oh God. Not finishing my research in time, because there's so much to explore. I'd say that's definitely one project-related thing that keeps me up at night. Yeah, kidding aside, when it comes to my work, it's one of the key concerns. Because it's an exciting project, and I'm just one person, and there's so much to explore, so many people to speak to, so many things to read. It can be quite hard to keep up with the latest trends and developments.

You always worry as a researcher that what you're doing will already be outdated by the time it gets published. To some extent you just have to accept that; it's part of the work, and you can't do much about it other than keep on working. But yeah, one concern is definitely that too many of these major shifts happen while I'm doing it, and then once I get to the stage where my PhD is finished, it will probably already be outdated. But c'est la vie!

CW: I am sure Felix's work will be a valuable resource in years to come as AI becomes more commonplace across the news industry. I think some really interesting things to consider there around efficiency, balance, bias, and consumption.

If you want to dig a bit deeper, check out the show notes where you'll find an article about how news teams can use AI, plus a couple of tutorial videos showing how it's used in Avid's MediaCentral Cloud UX application.

There is also a link to find out more about Avid's partnership with Microsoft, including their AI engine and much more.

Next time we are going to turn our attention to how best to engage the journalistic team when making technical decisions. For many years, the delivery of new technology to the news team has been driven by engineers, but at TV2 in Denmark, they have flipped that on its head by getting users involved every step of the way. Let's hear a clip from Morten Brandstrup, TV2's Head of News Technology.

Morten Brandstrup: Who knows better what the daily work is like than those who actually do it? So we who support the workflow and the toolbox in general, we definitely have to bring them into the middle of what we're discussing, listen to them, let them have a strong voice, and have them help prioritize how we want to collaborate, how we want to work together in the newsroom.

CW: Morten is also very active in the European Broadcasting Union—the EBU—and he has lots of great insights to offer, so don't miss out on that episode.

Remember to subscribe on your podcast platform of choice to get notified when the next episode is out. Also, please feel free to get in touch. I'm on Instagram and Twitter at craigaw1969, or you can email us. Our address is [email protected].

Don't forget, you can also check out any of our previous episodes as well. There's lots and lots of good stuff there.

That's all for this episode, I'm off now to ask Siri and Alexa what they thought of it. Thanks to our producer Rachel Haberman, but most of all thanks to you for listening. I'm Craig Wilson. Join me next time for more Making the Media.
