Uncommons with Nate Erskine-Smith

The Future of Online Harms and AI Regulation with Taylor Owen

Taylor Owen is the Beaverbrook Chair in Media, Ethics and Communications and the founding Director of The Centre for Media, Technology and Democracy at McGill University.

After a hiatus, we’ve officially restarted the Uncommons podcast, and our first long-form interview is with Professor Taylor Owen to discuss the ever-changing landscape of the digital world, the fast emergence of AI, and the implications for our kids, consumer safety and our democracy.

Taylor Owen’s work focuses on the intersection of media, technology and public policy and can be found at taylorowen.com. He is the Beaverbrook Chair in Media, Ethics and Communications and the founding Director of The Centre for Media, Technology and Democracy at McGill University where he is also an Associate Professor. He is the host of the Globe and Mail’s Machines Like Us podcast and author of several books.

Taylor also joined me for this discussion more than 5 years ago now. And a lot has happened in that time.

Upcoming episodes will include guests Tanya Talaga and an episode focused on the border bill C-2, with experts from The Citizen Lab and the Canadian Association of Refugee Lawyers.

We’ll also be hosting a live event at the Naval Club of Toronto with Catherine McKenna, who will be launching her new book Run Like a Girl. Register for free through Eventbrite.

As always, if you have ideas for future guests or topics, email us at info@beynate.ca

Chapters:

0:29 Setting the Stage

1:44 Core Problems & Challenges

4:31 Information Ecosystem Crisis

10:19 Signals of Reliability & Policy Challenges

14:33 Legislative Efforts

18:29 Online Harms Act Deep Dive

25:31 AI Fraud

29:38 Platform Responsibility

32:55 Future Policy Direction

Further Reading and Listening:

Public rules for big tech platforms with Taylor Owen — UNCOMMONS

“How the Next Government can Protect Canada’s Information Ecosystem.” With Helen Hayes, The Globe and Mail, April 7, 2025.

Machines Like Us Podcast

Bill C-63

Transcript:

Nate Erskine-Smith

00:00-00:43

Welcome to Uncommons, I’m Nate Erskine-Smith. This is our first episode back after a bit of a hiatus, and we are back with a conversation focused on AI safety, digital governance, and all of the challenges with regulating the internet. I’m joined by Professor Taylor Owen. He’s an expert in these issues. He’s been writing about these issues for many years. I actually had him on this podcast more than five years ago, and he’s been a huge part of getting us in Canada to where we are today. And it’s up to this government to get us across the finish line, and that’s what we talk about. Taylor, thanks for joining me. Thanks for having me. So this feels like deja vu all over again, because I was going back before you arrived this morning and you joined this podcast in April of 2020 to talk about platform governance.

Taylor Owen

00:43-00:44

It’s a different world.

Taylor

00:45-00:45

In some ways.

Nate Erskine-Smith

00:45-01:14

Yeah. Well, yeah, a different world for sure in many ways, but also the same challenges in some ways too. Additional challenges, of course. But I feel like in some ways we’ve come a long way, because there’s been lots of consultation. There have been some legislative attempts at least, but also we haven’t really accomplished the thing. So let’s set the stage. Some of the same challenges from five years ago, but some new challenges. What are the challenges? What are the problems we’re trying to solve? Yeah, I mean, many of them are the same, right?

Taylor Owen

01:14-03:06

I mean, this is partly that the technology moves fast. But when you look at the range of things citizens are concerned about when they and their children and their friends and their families use these sets of digital technologies that shape so much of our lives, many things are the same. So they’re worried about safety. They’re worried about algorithmic content and how that’s feeding into what they believe and what they think. They’re worried about polarization. We’re worried about the integrity of our democracy and our elections. We’re worried about some of the more acute harms, like real risks to safety, right? Like children taking their own lives, violence erupting, political violence emerging. Like these things have always been present as a part of our digital lives. And that’s what we were concerned about five years ago, right? When we talked about those harms, that was roughly the list. Now, the technologies we were talking about at the time were largely social media platforms, right? So that was the main way, five years ago, that we shared and consumed information in our digital politics and our digital public lives. And that is what’s changing slightly. Now, those are still prominent, right? We’re still on TikTok and Instagram and Facebook to a certain degree. But we do now have a new layer of AI, and particularly chatbots. And I think a big question we face in this conversation is: how do we develop policies that maximize the benefits of digital technologies and minimize the harms, which is all this is trying to do? Do we need new tools for AI, or are the things we worked on for so many years to get right still the right tools for this new set of technologies, with chatbots and various consumer-facing AI interfaces?

Nate Erskine-Smith

03:07-03:55

My line in politics has always been, especially around privacy protections, that we are increasingly living our lives online. And especially, you know, my kids are growing up online, and our laws need to reflect that reality. All of the challenges you’ve articulated exist to varying degrees in offline spaces, but the rules we have can be incredibly hard to enforce in the online space, at a minimum. And then some rules are not entirely fit for purpose and need to be updated for the online space. It’s interesting. I was reading a recent op-ed of yours, but also some of the research you’ve done. This really stood out. So you’ve got the Hogue Commission that says disinformation is the single biggest threat to our democracy. That’s worth pausing on.

Taylor Owen

03:55-04:31

Yeah, exactly. Like the commission that spent a year, at the request of all political parties in Parliament and at the urging of the opposition party, looking at a wide range of threats to our democratic systems that everybody was concerned about originating in foreign countries. And the conclusion of that was that the single biggest threat to our democracy is the way information flows through our society and how we’re not governing it. Like that is a remarkable statement, and it kind of came and went. And I don’t know why we moved on from that so fast.

Nate Erskine-Smith

04:31-05:17

Well, and there’s a lot to pull apart there, because you’ve got purposeful, intentional bad actors, foreign influence operations. But you also have a really core challenge of just the reliability and credibility of the information ecosystem. So you have Facebook and Instagram, through Meta, blocking news in Canada. And your research, this was the stat that stood out. I don’t want to put you on the spot and say, like, what do we do? Okay. So you say 11 million views of news have been lost as a consequence of that blocking. Okay. That’s one piece of information people should know. Yeah. But at the same time.

Taylor Owen

05:17-05:17

A day. Yeah.

Nate Erskine-Smith

05:18-05:18

So right.

Taylor Owen

05:18-05:27

11 million views a day. And we should – sometimes we go through these things really fast. It’s huge. Again, Facebook decides to block news. 40 million people in Canada. Yeah.

Taylor

05:27-05:29

So 11 million times a Canadian.

Taylor Owen

05:29-05:45

And what that means is 11 million times a day, a Canadian would open one of their news feeds and see that Canadian journalism has been taken out of the ecosystem. And it was replaced by something. People aren’t using these tools less. So that journalism was replaced by something else.

Taylor

05:45-05:45

Okay.

Taylor Owen

05:45-05:46

So that’s just it.

Nate Erskine-Smith

05:46-06:04

So on the one side, we’ve got 11 million views a day lost. Yeah. And on the other side, Canadians, the majority of Canadians get their news from social media. But when the Canadians who get their news from social media are asked where they get it from, they still say Instagram and Facebook. But there’s no news there. Right.

Taylor Owen

06:04-06:04

They say they get.

Nate Erskine-Smith

06:04-06:05

It doesn’t make any sense.

Taylor Owen

06:06-06:23

It doesn’t and it does. It’s terrible. They ask Canadians who use social media to get their news: where do you get your news? And they still say social media, even though it’s not there. Journalism isn’t there. Journalism isn’t there. And I think one of the explanations— Traditional journalism. There is—

Taylor

06:23-06:23

There is—

Taylor Owen

06:23-06:47

Well, this is what I was going to get at, right? Like, there is—one, I think, conclusion is that people don’t equate journalism with news about the world. There’s not a one-to-one relationship there. Like, journalism is one provider of news, but so are influencers, so are podcasts, people listening to this. Like this would probably be labeled news in people’s minds.

Nate Erskine-Smith

06:47-06:48

Can’t trust the thing we say.

Taylor Owen

06:48-07:05

Right. And like, and neither of us are journalists, right? But we are providing information about the world. And if it shows up in people’s feeds, as I’m sure it will, like that probably gets labeled in people’s minds as news, right? As opposed to pure entertainment, as entertaining as you are.

Nate Erskine-Smith

07:05-07:06

It’s public affairs content.

Taylor Owen

07:06-07:39

Exactly. So that’s one thing that’s happening. The other is that there’s a generation of creators that are stepping into this ecosystem to both fill that void and that can use these tools much more effectively. So in the last election, we found that of all the information consumed about the election, 50% of it was created by creators. 50% of the engagement on the election was from creators. Guess what it was for journalists, for journalism? Like 5%. Well, you’re more pessimistic though. I shouldn’t have led with the question. 20%.

Taylor

07:39-07:39

Okay.

Taylor Owen

07:39-07:56

So all of journalism combined in the entire country, 20 percent of engagement, influencers, 50 percent in the last election. So like we’ve shifted, at least on social, the actors and people and institutions that are fostering our public.

Nate Erskine-Smith

07:56-08:09

Is there a middle ground here where you take some people that play an influencer type role but also would consider themselves citizen journalists in a way? How do you – It’s a super interesting question, right?

Taylor Owen

08:09-08:31

Like who – when are these people doing journalism? When are they doing acts of journalism? Like someone can be – do journalism and 90% of the time do something else, right? And then like maybe they reveal something or they tell an interesting story that resonates with people or they interview somebody and it’s revelatory and it’s a journalistic act, right?

Taylor

08:31-08:34

Like this is kind of a journalistic act we’re playing here.

Taylor Owen

08:35-08:49

So I don’t think – I think these lines are gray. But I mean, there are some other underlying things here. Like, it matters, I think, if journalistic institutions go away entirely, right? Like, that’s probably not a good thing. Yeah, I mean, that’s why –

Nate Erskine-Smith

08:49-09:30

The reason I say it’s terrifying is that there’s a lot of good in the digital space. There’s creative destruction. There’s a lot of work to provide people a direct sense of news without the filter that people may mistrust in traditional media. Having said that, there are so many resources and so much history to these institutions, and there’s a real ethics to journalism, and journalists take their craft seriously in terms of the pursuit of truth. Absolutely. And losing that access, losing the accessibility to that, is devastating for democracy. I think so.

Taylor Owen

09:30-09:49

And I think the bigger frame of that for me is a democracy needs signals of – we need – as citizens in a democracy, we need signals of reliability. Like we need to know broadly, and we’re not always going to agree on it, but like what kind of information we can trust and how we evaluate whether we trust it.

Nate Erskine-Smith

09:49-10:13

And that’s what – that is really going away. Pause for a sec. “Signals of reliability” is a good phrase, but what does it mean for a legislator when it comes to putting a rule in place? Because you could imagine a Blade Runner kind of rule that says you’ve got to distinguish between something that is human generated

Taylor

10:13-10:14

and something that is machine generated.

Nate Erskine-Smith

10:15-10:26

That seems straightforward enough. It’s a lot harder if you’re trying to distinguish between Taylor, what you’re saying is credible, and Nate, what you’re saying is not credible,

Taylor

10:27-10:27

which is probably true.

Nate Erskine-Smith

10:28-10:33

But how do you have a signal of reliability in a different kind of content?

Taylor Owen

10:34-13:12

I mean, we’re getting into journalism policy here to a certain degree, right? And it’s a wicked problem, because the primary role of journalism is to hold you personally to account. And you setting rules for what they can and can’t do and how they can and can’t behave touches on some real third rails here, right? It’s fraught. However, I don’t think it should ever be about policy determining what can and can’t be said or what is and isn’t journalism. The real problem is the distribution mechanism and the incentives within it. So a great example, and a horrible example, happened last week, right? Charlie Kirk gets assassinated. I don’t know if you opened a feed in the few days after that, but it was a horrendous place, right? Social media was an awful, awful, awful place, because what you saw in that feed was the clearest demonstration I’ve seen in a decade of looking at this of how those algorithmic feeds have become radicalized. Like all you saw on every platform was the worst possible representations of every view. Right. Right. It was truly shocking and horrendous. Like people defending the murder and people calling for the murder of leftists, and like on both sides, right? People blaming Israel, people, whatever. Right. And that isn’t a function of like – Comparing Charlie Kirk to Jesus. Sure. Like – It was bonkers all the way around. Totally bonkers, right? And that is a function of how those ecosystems are designed and the incentives within them. It’s not a function of the journalism being produced about it. Like the New York Times and citizens were doing good content about what was happening. It was a moment of uncertainty, and journalism was playing a role, but it wasn’t what was being surfaced. And so I think with all of these questions, including the online harms ones, and how we step into an AI governance conversation, the focus always has to be on those systems: who and what are the incentives and the technical decisions being made that determine what we experience when we open these products? These are commercial products that we’re choosing to consume. And when we open them, a whole host of business and design and technical decisions and human decisions shape the effect it has on us as people, the effect it has on our democracy, the vulnerabilities that exist in our democracy, the way foreign actors or hostile actors can take advantage of them, right? Like all of that stuff we’ve been talking about, the role reliability of information plays – these algorithms could be tweaked for reliable versus unreliable content, right? Over time.

Taylor

13:12-13:15

That’s not a – instead of reactionary –

Taylor Owen

13:15-13:42

Or like what gets the most engagement or what makes you feel the most angry, which is largely what’s driving X, for example, right now, right? You can torque all those things. Now, I don’t think we want government telling companies how they have to torque it. But we can slightly tweak the incentives to get better content, more reliable content, less polarizing content, less hateful content, less harmful content, right? Those dials can be incentivized to be turned. And that’s where the policy space should play, I think.
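(To make that “dial” concrete, here is a toy sketch – hypothetical scores and weights, not any platform’s actual ranking system – of how a feed ranker could blend engagement with a reliability signal, and how turning the dial changes what gets surfaced.)

from dataclasses import dataclass

@dataclass
class Post:
    title: str
    engagement: float   # normalized engagement signal, 0..1
    reliability: float  # hypothetical reliability score, 0..1

def rank(posts, reliability_weight=0.0):
    # reliability_weight is the policy "dial": 0 = pure engagement ranking
    def score(p):
        return (1 - reliability_weight) * p.engagement + reliability_weight * p.reliability
    return sorted(posts, key=score, reverse=True)

feed = [
    Post("Outrage clip", engagement=0.95, reliability=0.10),
    Post("Reported explainer", engagement=0.55, reliability=0.90),
]

print([p.title for p in rank(feed, reliability_weight=0.0)])  # engagement only: outrage clip first
print([p.title for p in rank(feed, reliability_weight=0.5)])  # dial turned: explainer first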

Nate Erskine-Smith

13:43-14:12

And your focus on systems and assessing risks with systems, I think that’s the right place to play. I mean, we’ve seen legislative efforts. You’ve got the three pieces in Canada. You’ve got online harms. You’ve got the privacy piece and a very kind of vague initial foray into AI regs, which we can get to. And then a cybersecurity piece. And all of those ultimately died on the order paper. Yeah. We also had the journalistic protection policies, right, that the previous government did.

Taylor Owen

14:12-14:23

I mean – Yeah, yeah, yeah. We can debate their merits. Yeah. But there was considerable effort put into backstopping the institutions of journalism by the – Well, they’re twofold, right?

Nate Erskine-Smith

14:23-14:33

There’s the tax credit piece, sort of financial support. And then there was the Online News Act. Right. Which was trying to pull some dollars out of the platforms to pay for the news as well. Exactly.

Taylor

14:33-14:35

So the sort of supply and demand side thing, right?

Nate Erskine-Smith

14:35-14:38

There’s the digital service tax, which is no longer a thing.

Taylor Owen

14:40-14:52

Although it still is a piece of passed legislation. Yeah, yeah, yeah. It still is a thing. Yeah, yeah. Until you guys decide whether to negate the thing you did last year or not, right? Yeah.

Nate Erskine-Smith

14:52-14:55

I don’t take full responsibility for that one.

Taylor Owen

14:55-14:56

No, you shouldn’t.

Nate Erskine-Smith

14:58-16:03

But other countries have seen more success. Yeah. And so you’ve got the UK, Australia, and the EU really has led the way. In 2018, the EU passes GDPR, which is a set of privacy rules that we are still behind, seven years later. But then in 2022, 2023, you’ve got the Digital Services Act that passes. You’ve got the Digital Markets Act. And as I understand it, and we’ve both been involved in international work on this, we’ve heard from folks like Frances Haugen and others about the need for risk-based assessments. And you’re well down the rabbit hole on this. But isn’t it, at a high level: you deploy a technology, you’ve got to identify material risks, you then have to take reasonable measures to mitigate those risks. That’s effectively the duty of care built in. And then ideally, you’ve got the ability for third parties, either civil society or some public office, to audit whether you have adequately identified and disclosed material risks and whether you have taken reasonable steps to mitigate.

Taylor Owen

16:04-16:05

That’s like how I have it in my head.

Nate Erskine-Smith

16:05-16:06

I mean, that’s it.

Taylor Owen

16:08-16:14

Write it down. Fill in the legislation. Well, I mean, that process happened. I know. That’s right. I know.

Nate Erskine-Smith

16:14-16:25

Exactly. I want to get to that, because C-63 gets us a large part of the way there. I think so. And yet it has been sort of cast aside.

Taylor Owen

16:25-17:39

Exactly. Let’s touch on that. But I do think what you described is the online harms piece of this governance agenda. When you look at what the EU has done, they have put in place the various building blocks for what a broad digital governance agenda might look like. Because the reality of this space, which we talked about last time, and it’s the thing that’s infuriating about digital policy, is that you can’t do one thing. There’s no one lever – the digital economy and our digital lives are so vast, and the incentives and the effect they have on society are so broad, that there’s no one solution. So anyone who tells you to fix privacy policy and you’ll fix all the digital problems we just talked about is full of it. Anyone who says competition policy, like break up the companies, will solve all of these problems is wrong, right? Anyone who says online harms policy, which we’ll talk about, fixes everything is wrong. You have to do all of them. And Europe has, right? They updated their privacy policy. They’ve gone on to build a big online harms agenda. They updated their competition regime. And they’re also doing some AI policy too, right? So you need comprehensive approaches, which is not an easy thing to do, right? It means doing three big things all at once.

Nate Erskine-Smith

17:39-17:41

Especially in minority parliaments, short periods of time, legislatively.

Taylor Owen

17:41-18:20

Different countries have taken different pieces of it. Now, on the online harms piece, which is what the previous government took really seriously, and I think it’s worth putting a point on that, right: when we talked last, that was the beginning of this process. After we spoke, there was a national expert panel. There were 20 consultations. There were four citizens’ assemblies. There was a national commission, right? Like a lot of work went into looking at what every other country had done, because this is a really wicked, difficult problem, and trying to learn from what Europe, Australia and the UK had all done. And we were kind of taking the benefit of being late, right? So they were all ahead of us.

Taylor

18:21-18:25

People you worked with on that grand committee. We were all quick to do our own consultations.

Taylor Owen

18:26-19:40

Exactly. And the model that was developed out of that, I think, was the best model of any of those countries. And it’s now seen internationally, interestingly, as the new sort of milestone that everybody else is building on, right? And what it does is it says: if you’re going to launch a digital product in Canada, like a consumer-facing product, you need to assess risk. And you need to assess risk on these broad categories of harms that we have decided as legislators we care about, or you’ve decided as legislators you cared about, right? Child safety, child sexual abuse material, fomenting violence and extremist content, right? Broad categories that we’ve said we think are harmful to our democracy. All you have to do as a company is a broad assessment of what could go wrong with your product. If you find something could go wrong, so let’s use a tangible example. Let’s say you are a social media platform and you are launching a product that’s going to be used by kids, and it allows adults to contact kids without parental consent or without kids opting into being a friend. What could go wrong with that?

Nate Erskine-Smith

19:40-19:40

Yeah.

Taylor

19:40-19:43

Like what could go wrong? Yeah, a lot could go wrong.

Taylor Owen

19:43-20:27

And maybe strange men will approach teenage girls. Maybe, right? Like if you do a risk assessment, that is something you might find. You would then be obligated to mitigate that risk and show how you’ve mitigated it, right? Like you put a policy in place to show how you’re mitigating it. And then you have to share data about how these tools are used, so that publics and researchers can monitor whether that mitigation strategy worked. That’s it. In that case, that feature was launched by Instagram in Canada without any risk assessment, without any safety evaluation. And we know there was a widespread problem of teenage girls being harassed by strange older men.

Taylor

20:28-20:29

Incredibly creepy.

Taylor Owen

20:29-20:37

A very easy, but not like a super illegal thing, not something that would be caught by the criminal code, but a harm we can all admit is a problem.

Taylor

20:37-20:41

And this kind of mechanism would have just filtered that out.

Taylor Owen

20:41-20:51

Default settings, right? And thinking a bit, before you launch a product in a country, about what kind of broad risks might emerge when it’s launched, and being held accountable for doing that.
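(As a minimal sketch of the duty-of-care loop being described – the harm categories, function names and data structures here are hypothetical, not the text of any bill – the obligation is roughly: assess risk per legislated category, record a mitigation plan, and expose enough data that a regulator or researcher can audit it.)

HARM_CATEGORIES = [
    "child_sexual_abuse_material",
    "incitement_to_violence_or_extremism",
    "harassment_of_minors",
]

def assess_product(product_name, feature_risks):
    # feature_risks maps a harm category to a description of what could go wrong.
    return {
        "product": product_name,
        "findings": [(c, feature_risks[c]) for c in HARM_CATEGORIES if c in feature_risks],
        "mitigations": {},
    }

def add_mitigation(report, category, plan):
    # The operator must show how each identified risk is reduced,
    # e.g. adult-to-minor messaging off by default.
    report["mitigations"][category] = plan
    return report

def audit(report):
    # Third parties check that every finding has a mitigation on record.
    missing = [c for c, _ in report["findings"] if c not in report["mitigations"]]
    return {"compliant": not missing, "unmitigated": missing}

report = assess_product(
    "photo_sharing_app",
    {"harassment_of_minors": "adults can message minors without consent"},
)
report = add_mitigation(report, "harassment_of_minors", "adult-to-minor DMs off by default")
print(audit(report))  # {'compliant': True, 'unmitigated': []}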

Nate Erskine-Smith

20:52-21:05

Yeah, I quite like the – I mean, maybe you’ve got a better read of this, but the UK approach; California has pursued this too. I was looking recently: Elizabeth Denham is now the Jersey Information Commissioner or something like that.

Taylor Owen

21:05-21:06

I know it’s just yeah.

Nate Erskine-Smith

21:07-21:57

Kind of random, I don’t know. But she is a Canadian, for those who don’t know Elizabeth Denham. And she was the Information Commissioner in the UK, and she oversaw the implementation of the first age-appropriate design code. That always struck me as an incredibly useful approach, in that even outside of social media platforms, even outside of AI – take a product like Roblox, where tons of kids use it – you’re just forcing companies to ensure that the default settings prioritize child safety, so that you don’t put the onus on parents and kids to figure out each of these different games and platforms. In a previous world of consumer protection, offline, it would have been de facto: of course we’ve prioritized consumer safety first and foremost. But in the online world, it’s like an afterthought.
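(In the spirit of that safety-by-default idea, a hypothetical configuration sketch – these setting names are illustrative, not drawn from any actual product or statute: if an account is likely a child’s, the protective settings are on out of the box rather than left to parents to find.)

DEFAULTS_ADULT = {
    "behavioural_ads": True,
    "data_collection": "full",
    "dms_from_strangers": True,
    "adult_content": True,
}

DEFAULTS_MINOR = {
    "behavioural_ads": False,      # no behavioural targeting of kids
    "data_collection": "minimal",  # collect only what the service needs
    "dms_from_strangers": False,   # no unsolicited contact from adults
    "adult_content": False,
}

def default_settings(likely_minor: bool) -> dict:
    # The onus sits with the product, not the parent: protective defaults
    # apply automatically whenever the user is likely a minor.
    return dict(DEFAULTS_MINOR if likely_minor else DEFAULTS_ADULT)

print(default_settings(likely_minor=True))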

Taylor Owen

21:58-24:25

Well, when you say consumer safety, it’s worth referring back to what we mean. A duty of care can seem like an obscure concept, but in law it’s a real thing, right? Like you walk into a store. I walk into your office. I have an expectation that the bookshelves aren’t going to fall off the wall and kill me, right? And you have to bolt them into the wall because of that, right? That is a duty of care that you have for me when I walk into your public space or private space. That’s all we’re talking about here. And the age-appropriate design code, yes, was sort of developed and implemented by a Canadian in the UK. And it also was embedded in the Online Harms Act, right? If we’d passed that last year, we would be implementing an age-appropriate design code as we speak, right? What that would say is any product that is likely to be used by a kid needs to do a set of additional things, not just these risk assessments, right? Because we think kids don’t have the same rights as adults; we have different duties to protect kids than adults, right? So maybe they should do an extra set of things for their digital products. And it includes things like no behavioral targeting, no advertising, no data collection, no sexual adult content, right? Kind of things that – Seem obvious. And if you’re now a child in the UK and you open – you go on a digital product, you are safer, because you have an age-appropriate design code governing your experience online. Canadian kids don’t have that, because that bill didn’t pass, right? So there’s consequences to this stuff. And I get really frustrated now when I see the conversation sort of pivoting to AI, for example, right? Like all we’re supposed to care about is AI adoption and all the amazing things AI is going to do to transform our world, which are probably real, right? Like not discounting its power. And we just move on from all of these, both problems and solutions, that have been developed for a set of challenges that still exist on social platforms. Like they haven’t gone away. People are still using these tools, and the harms still exist, and they’re probably applicable to this next set of technologies as well. So this moving on from what we’ve learned and the work that’s been done, to the people working in this space and the wide set of stakeholders in this country who care about this stuff and are working on it, it just feels like, you said deja vu at the beginning, and it is deja vu, but it’s kind of worse, right? Cause it’s like deja vu and then ignoring the

Taylor

24:25-24:29

five years of work. Yeah, deja vu if we were doing it again. Right. We’re not even, we’re not even

Taylor Owen

24:29-24:41

Well, yeah. I mean, hopefully. I’m actually optimistic, I would say, that we will, for a few reasons. Like, one, citizens want it, right? Like.

Nate Erskine-Smith

24:41-24:57

Yeah, I was surprised by that. So you mentioned there that the rules that we design, the risk assessment framework really applied to social media, could equally be applied to deliver AI safety, and it could be applied to new technology in a useful way.

Taylor Owen

24:58-24:58

Some elements of it. Exactly.

Nate Erskine-Smith

24:58-25:25

I think AI safety is a broad bucket of things. So let’s get to that a little bit, because I want to pull the pieces together. So I had a constituent come into the office, and he is really, like, super mad. He’s super mad. Why is he mad? Does that happen very often? Do people get mad when they walk into this office? Not as often as you think, to be honest. Not as often as you think. And he’s mad because he believes Mark Carney ripped him off.

Taylor Owen

25:25-25:25

Okay.

Nate Erskine-Smith

25:25-26:36

Okay. Yep. He believes Mark Carney ripped him off, not with a broken promise in politics, not because he said one thing and is delivering something else, nothing to do with politics. He saw a video online; Mark Carney told him to invest money. He invested money, and he’s out the 200 bucks or whatever it was. And I was like, how could you possibly have lost money in this way? This was obviously a scam. Like, how could you have been deceived? But then I go and I watch the video. And it is, okay, I’m not gonna send the 200 bucks, and I’ve grown up with the internet, but I can see how– Absolutely. In the same way, phone scams and Nigerian princes and all of that have their own success rate. I mean, this was a very believable video that was obviously AI generated. So we are going to see rampant fraud. If we aren’t already, we are going to see many challenges with respect to AI safety. What, over and above the risk assessment piece, do we do to address these challenges?

Taylor Owen

26:37-27:04

So that is a huge problem, right? Like AI fraud, AI video fraud, is a huge challenge. When we were monitoring the last election, by far the biggest problem or vulnerability of the election was an AI-generated video campaign that every day would take videos of Poilievre’s and Carney’s speeches from the day before and morph them into conversations about investment strategies.

Taylor

27:05-27:07

And it was driving people to a crypto scam.

Taylor Owen

27:08-27:11

But it was torquing the political discourse.

Taylor

27:11-27:11

That’s what it must have been.

Taylor Owen

27:12-27:33

I mean, there’s other cases of this, but that’s probably the biggest, and it was running rampant, particularly on Meta platforms. They were flagged. They did nothing about it. There were thousands of these videos circulating throughout the entire election, right? And it’s not like the end of the world, right? Like nobody – but it torqued our political debate. It ripped off some people. And these kinds of scams are –

Taylor

27:33-27:38

It’s clearly illegal. It’s clearly illegal. It probably breaks election law too, misrepresenting a political figure, right?

Taylor Owen

27:38-27:54

So I think there’s probably an Elections Canada response to this that’s needed. And it’s fraud. And it’s fraud, absolutely. So what do you do about that, right? And the head of the Canadian Banking Association said there’s like billions of dollars in AI-based fraud in the Canadian economy right now. Right? So it’s a big problem.

Taylor

27:54-27:55

Yeah.

Taylor Owen

27:55-28:46

I actually think there’s a very tangible policy solution. You put these consumer-facing AI products into the Online Harms Act framework, right? And then you add fraud and AI scams as a category of harm. And all of a sudden, if you’re Meta and you are operating in Canada during an election, you’d have to do a risk assessment on the AI fraud potential of your product. Responsibility for your platform. And then when it starts to circulate, we would see it. They’d be called out on it. They’d have to take it down. And that’s that, right? So then we have mechanisms for dealing with this. But it does mean evolving what we worked on over the past five years, these online harms risk assessment models, and bringing some of the consumer-facing AI, both products and related harms, into the framework.

Nate Erskine-Smith

28:47-30:18

To put it a different way, I mean, this is years ago now that we had this grand committee in the UK holding Facebook and others accountable. This really was created in the wake of the Cambridge Analytica scandal. And the platforms at the time were really holding firm to this idea of Section 230 and avoiding host liability, and saying, oh, we couldn’t possibly be responsible for everything on our platform. And there was one problem with that argument, which is they completely acknowledged the need for them to take action when it came to child pornography. And so they said, yeah, well, you know, no liability for us, but of course there can be liability on this one specific piece of content, and we’ll take action on this one specific piece of content. And it always struck me from there on out, I mean, there’s no real intellectual consistency here. It’s more just what should be in that category of things that they should take responsibility for. And obviously harmful content like that should be – that’s an obvious first step, but obvious for everyone. But there are other categories. Fraud is another one. When they’re making so much money, when they are investing so much money in AI, when they’re ignoring privacy protections and everything else throughout the years, I mean, we can’t leave it up to them. And setting a clear set of rules to say this is what you’re responsible for, and expanding that responsibility, seems to make a good amount of sense.

Taylor Owen

30:18-30:28

It does, although I think those responsibilities need to be different for different kinds of harms. Because there are different speech implications and democratic implications of sort of absolute solutions to different kinds of content.

Taylor

30:28-30:30

So like child pornography is a great example.

Taylor Owen

30:30-31:44

In the Online Harms Act, for almost every type of content, it was that risk assessment model. But there was a carve-out for child sexual abuse material, so including child pornography, and for intimate images and videos shared without consent. It said the platforms actually have a different obligation, and that’s to take it down within 24 hours. And the reason you can do it with those two kinds of content is because, one, the AI is actually pretty good at spotting it. It might surprise you, but there’s a lot of naked images on the internet that we can train AI with. So we’re actually pretty good at using AI to pull this stuff down. But the bigger one is that, I think, as a society, it’s okay to be wrong in the gray area of that speech, right? Like if something is debatable, whether it’s child pornography, I’m actually okay with us suppressing the speech of the person who sits in that gray area. Whereas for something like hate speech, it’s a really different story, right? Like we do not want to suppress and over-index for that gray area on hate speech, because that’s going to capture a lot of reasonable debate that we probably want.
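(A rough sketch of that two-tier structure – the category names and the idea of adding AI-enabled fraud are illustrative assumptions, not the bill’s text: a narrow set of clearly illegal content gets a 24-hour takedown duty, while other harms flow through the risk-assessment-and-mitigation model.)

TAKEDOWN_24H = {
    "child_sexual_abuse_material",
    "non_consensual_intimate_images",
}

RISK_ASSESSED = {
    "hate_speech",
    "incitement_to_violence",
    "ai_investment_fraud",  # a category legislators could choose to add
}

def obligation_for(category: str) -> str:
    if category in TAKEDOWN_24H:
        return "remove within 24 hours of being flagged"
    if category in RISK_ASSESSED:
        return "identify the risk, file a mitigation plan, share usage data for audit"
    return "outside the framework"

for c in ["non_consensual_intimate_images", "hate_speech", "ai_investment_fraud"]:
    print(c, "->", obligation_for(c))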

Nate Erskine-Smith

31:44-31:55

Yeah, I think soliciting investment via fraud probably falls more in line with the child pornography category where it’s, you know, very obviously illegal.

Taylor Owen

31:55-32:02

And that mechanism is like a takedown mechanism, right? Like if we see fraud, if we know it’s fraud, then you take it down, right? Some of these other things we have to go with.

Nate Erskine-Smith

32:02-32:24

I mean, my last question really is you pull the threads together. You’ve got these different pieces that were introduced in the past. And you’ve got a government that lots of similar folks around the table, but a new government and a new prime minister certainly with a vision for getting the most out of AI when it comes to our economy.

Taylor

32:24-32:25

Absolutely.

Nate Erskine-Smith

32:25-33:04

You have, for the first time in this country, an AI minister, a junior minister to industry, but still a specific portfolio, with his own deputy minister, and he really wants to be seized with this. And from every conversation I’ve had with him, he wants to maximize productivity in this country using AI, but he’s also cognizant of the risks and wants to address AI safety. So where from here? You know, you’ve talked in the past about sort of a grander tech accountability and sovereignty act. Do we do piecemeal, you know, a privacy bill here and an AI safety bill and an online harms bill, and we have disparate pieces? What’s the answer here?

Taylor Owen

33:05-34:14

I mean, I don’t have the exact answer. But I think there’s some lessons from the past that this government could take. And one is that piecemeal bills that aren’t centrally coordinated, or have no sort of connectivity between them, end up with piecemeal solutions that are imperfect and would benefit from some cohesiveness between them, right? So when the previous government released AIDA, the Artificial Intelligence and Data Act, it was really in tension in some real ways with the online harms approach. So two different departments issuing two similar bills on two separate technologies, not really talking to each other as far as I can tell from the outside, right? So we need a coordinated, comprehensive approach to digital governance. That’s point one, and we’ve never had it in this country. And when I saw the announcement of an AI minister, my mind went first to the idea that he, or that office, could play that role. Because AI is cross-cutting, right? Like every department in our federal government touches AI in one way or another. And the governance of AI, and the adoption of AI by society on the other side, is going to affect every department and every bill we need.

Nate Erskine-Smith

34:14-34:35

So if Evan pulled in the privacy pieces, that would help us catch up to GDPR. Which it sounds like they will, right? Some version of C-27 will probably come back. If he pulls in the online harms pieces that aren’t related to the Criminal Code and drops those provisions, says, you know, Sean Fraser, you can deal with this if you like. But these are the pieces I’m holding on to.

Taylor Owen

34:35-34:37

With a frame of consumer safety, right?

Nate Erskine-Smith

34:37-34:37

Exactly.

Taylor Owen

34:38-34:39

If he wants...

Nate Erskine-Smith

34:39-34:54

Which is connected to privacy as well, right? Like these are all... So then you have thematically a bill that makes sense. And then you can pull in as well the AI safety piece. And then it becomes a consumer protection bill when it comes to living our lives online. Yeah.

Taylor Owen

34:54-36:06

And I think there’s an argument whether that should be one bill or whether it’s multiple ones. I actually don’t think it... I think there’s cases for both, right? There’s concern about big omnibus bills that do too many things and too many committees reviewing them and whatever. That’s sort of a machinery of government question, right? But the principle is that these should be tied together in a narrative that the government is explicit about making and communicating to publics. We know that 85 percent of Canadians want AI to be regulated. What do they mean? What they mean is that at the same time as they’re being told by our government and by companies that they should be using and embracing this powerful technology in their lives, they’re also seeing some risks. They’re seeing risks to their kids. They’re being told their jobs might disappear and might take their... Why should I use this thing? When I’m seeing some harms, I don’t see you guys doing anything about these harms. And I’m seeing some potential real downside for me personally and my family. So even in the adoption frame, I think thinking about data privacy, safety, consumer safety, I think to me, that’s the real frame here. It’s citizen safety, consumer safety, using these products. Yeah, politically, I just, I mean, that is what it is. It makes sense to me.

Nate Erskine-Smith

36:06-36:25

Right, I agree. And really lean into child safety at the same time. Because like I’ve got a nine-year-old and a five-year-old. They are growing up with the internet. And I do not want to have to police every single platform that they use. I do not want to have to log in and go, these are the default settings on the parental controls.

Taylor

36:25-36:28

I want to turn to government and go, do your damn job.

Taylor Owen

36:28-36:48

Or just like make them slightly safer. I know these are going to be imperfect. I have a 12-year-old. He spends a lot of time on YouTube. I know that’s going to always be a place with sort of content that I would prefer he doesn’t see. But I would just like some basic safety standards on that thing. So he’s not seeing the worst of the worst.

Nate Erskine-Smith

36:48-36:58

And we should expect that. Certainly that YouTube, with its promotion engine, the recommendation function, is not actively promoting terrible content to your 12-year-old.

Taylor Owen

36:59-37:31

Yeah. That’s like de minimis. Can we just torque this a little bit, right? So maybe he’s not seeing horrible content about Charlie Kirk when he’s a 12-year-old on YouTube, right? Like, can we just do something? And I think that’s a reasonable expectation as a citizen. But it requires governance. That will not happen on its own, and it’s worth putting a real emphasis on this: one thing we’ve learned in this moment of repeated deja vus, going back 20 years really, since our experience with social media for sure through to now, is that these companies don’t self-govern.

Taylor

37:31-37:31

Right.

Taylor Owen

37:32-37:39

Like we just – we know that indisputably. So to think that AI is going to be different is delusional. No, it’ll be the pursuit of profit, not the public interest.

Taylor

37:39-37:44

Of course. Because that’s what they are. These are the largest companies in the world. Yeah, exactly. And AI companies are even bigger than the last generation, right?

Taylor Owen

37:44-38:00

We’re creating something new with the scale of these companies. And to think that their commercial incentives and their broader long-term goals around AI are not going to override these safety concerns is just naive to the nth degree.

Nate Erskine-Smith

38:00-38:38

But I think you make the right point, and it’s useful to close on this, that these goals of realizing the productivity possibilities and potential of AI, alongside AI safety, are not mutually exclusive or oppositional goals. If you create a sandbox to play in, companies will be more successful. If you have certainty in regulations, companies will be more successful. And if people feel safe using these tools – you know, if I feel safe with my kids learning these tools growing up in their classrooms and everything else – adoption rates will soar. Absolutely. And then we’ll benefit.

Taylor Owen

38:38-38:43

They work in tandem, right? And I think you can’t have one without the other fundamentally.

Nate Erskine-Smith

38:45-38:49

Well, I hope I don’t invite you back five years from now when we have the same conversation.

Taylor Owen

38:49-38:58

Well, I hope you invite me back in five years, but I hope it’s like thinking back on all the legislative successes of the previous five years. I mean, that’ll be the moment.

Taylor

38:58-38:59

Sounds good. Thanks, David. Thanks.
