AI in Politics

As artificial intelligence continues to grow and evolve, it affects how we live, interact, and even vote. But how can it be used by policy-makers to make informed decisions that benefit their constituents and society at large? And, given some of the dilemmas we discussed in the last episode, how can we trust that existing regulations and laws will be able to tackle the new challenges AI is bringing to the table?

AI in Politics transcript

Kaustubh Kapoor (00:01):
As automated technology continues to evolve and grow, it affects how we interact with each other and how we vote. Now that’s all well and good, but how can we make sure that this is happening ethically and effectively? And if it’s not, and there are issues, how can we make sure they’re being addressed? And most importantly, how can we make sure that data-driven policies are coming into effect, so that politicians are making choices that are in the best interests of their constituents and society at large? Join us to tackle AI and politics right now.

Jason Colby (00:45):
Welcome to Smarter than AI. Last time we took a deeper dive into the bigger moral dilemmas that AI faces today. The issues it creates surrounding privacy, data protection, accountability, and quality control, and the overall benefit it has in our lives, have only begun to be debated. And that’s why we’re glad you’ve joined us today, because we’re going to talk about how AI can be managed by the government and used in politics. The use of AI is becoming more and more popular. However, with this growth comes strain, as existing regulations and laws struggle to deal with new challenges, some of which we highlighted in our last episode. As a result, policymakers around the world are moving quickly to ensure that existing laws and regulations remain relevant and are set up to deal with these emerging challenges in the wake of technological change. However, because it’s hard to predict the future without fully knowing how the technology will be used and abused, most government officials are waiting to see what might happen before they take any action. Hey, Kaustubh, how’s it going today, bud?

Kaustubh Kapoor (01:52):
It’s going well. It was a good sunny Friday in Toronto, so that was nice. It’s my day off, but you know what? We always love recording these, so I’m excited to get into AI and politics. What about you, how was your day?

Jason Colby (02:08):
Busy one, man. It never stops in marketing. Let me tell you.

Kaustubh Kapoor (02:12):
Yeah, crazy. Crazy busy.

Jason Colby (02:14):
And on that note, question one. Are you ready for this?

Kaustubh Kapoor (02:19):
Let’s do it.

Jason Colby (02:20):
So in marketing, we have a term called the sphere of influence. Essentially, it’s a term used to describe entities that can persuade you to take action, like buying a new pair of shoes. Those entities could be your social media, your friends, your family, it could be anything. Today, though, that sphere of influence is largely driven by social media and search, like Google, for example, or Facebook. And this is especially true during COVID. So given that search and Facebook are largely driven by AI and designed to sell ads, how can AI affect your sphere of influence?

Kaustubh Kapoor (03:08):
That’s interesting. I never knew that sphere of influence had a deeper meaning than what you would think, but that’s really cool. The more you get to know about marketing, I guess. So I’ll say this. Do I think that AI can influence you? Absolutely. We see that every day. Do I think it can specifically influence, for example, who we vote for? Absolutely, again. And we’ve got a few examples of this. Obviously, in anticipation of this episode, I did some research and wanted to make sure my facts were straight. We’ll talk about President Obama of the United States and his campaign of 2012, okay? So first off, Obama is known as the big data president, because he put, I think, 200 million into his big data research initiative. But coming back to his 2012 campaign: it was largely successful because of all the data mining and micro-targeting done by his campaign on his behalf. And let me say this: he and his campaign never came out to confirm that any micro-targeting or data mining had been done, or what exactly had been done. But we know it happened. So without getting into political views, because we know that can get a little too racy, what are your thoughts specifically on gun control?

Jason Colby (04:28):
That’s a good question, man. I’m Saskatchewan born and I live in Alberta, right? So it’s an interesting topic. My views personally: handguns, obviously, should be regulated. Hunting rifles, I think it’s important that they’re regulated too, but at the same time, I don’t think there needs to be as much control on hunters as there is on the general public, you know what I mean? It depends on the use, really, at the end of the day.

Kaustubh Kapoor (05:03):
Yeah, so you’re Saskatchewan born, living in Alberta, and you’re fairly neutral on guns, right? You’re not far left or far right. But say somebody was a Democrat, and Democrats are known to be on the far left on this issue, where guns should be banned, right? And same with Liberals here. We see that very often. What micro-targeting means is that there are going to be ads specifically targeting you when it comes to gun control laws. These ads are going to go towards you and not towards Republicans, right? It’s not that a president or prime minister can win an election by only appealing to, let’s say, one political ideology, be it left, right, or neutral. You have to try and grab a bunch of everybody, right, to actually be successful. So that’s what micro-targeting is: we know, through data mining, that this person is a Democrat or a Liberal and will most likely be on the far left when it comes to gun control, so let’s only show them gun-law ads from the far left side, whereas if you’re considered a Republican, or a Conservative here, that same ad will not be shown. That’s micro-targeting. So you can already see the effect AI has had. And this was in 2012, about 9 years ago.
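
To make the mechanics concrete, here’s a minimal sketch of what a micro-targeting pipeline like that might look like: classify a voter’s likely ideology from mined signals, then decide which ad, if any, to serve. The features, labels, and model are illustrative assumptions, not a description of any real campaign’s system.

```python
# A hypothetical micro-targeting sketch: classify a voter's likely
# ideology from mined signals, then decide which ad (if any) to serve.
# Features, labels, and model choice are illustrative assumptions.
from sklearn.ensemble import RandomForestClassifier

# Toy training rows: [liked progressive pages, gun-club member, area lean]
X_train = [
    [1, 0, 0.9],
    [0, 1, 0.2],
    [1, 0, 0.7],
    [0, 1, 0.1],
]
y_train = ["left", "right", "left", "right"]

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# Each predicted ideology maps to an ad, or to None (the ad is never shown).
ADS = {"left": "ad_gun_control_strict", "right": None}

def pick_ad(voter_features):
    """Return the ad to serve this voter, or None to skip them entirely."""
    ideology = model.predict([voter_features])[0]
    return ADS[ideology]

print(pick_ad([1, 0, 0.8]))  # likely "ad_gun_control_strict"
print(pick_ad([0, 1, 0.1]))  # likely None: that ad is simply never served
```

The point of the sketch is the asymmetry: the same ad inventory produces entirely different experiences depending on what the data mining says about you.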

Kaustubh Kapoor (06:34):
Another example of this was in the Indian elections in 2014. I’m Indian, by the way, so I follow this. The Prime Minister of India, Narendra Modi, is known as a people’s prime minister because he’s very in style, you know? He’ll have the fanciest haircuts and the nicest beards, and he’ll be up with the lingo, right? He’ll be tweeting all the time, pretty much like Trump was, but not that bad, I guess. I’m going to read a statement out of Narendra Modi’s camp from his 2014 election. This is by a guy named Arvind Gupta, who was a strategist for that campaign, and probably still is. He says that we have developed our own customized digital tools, based on both commissioned and open source data, that put us in direct touch with voters. And this is off the back of 3.67 million followers on Twitter for Modi, 12 million likes on Facebook, and 68 million page views on Google+, right? And the same thing happened with Donald Trump in 2016. I think these two examples show us what’s up here: you can see that AI is influencing our sphere of influence, and influencing us, especially when it comes to votes.

Jason Colby (08:01):
Yeah, yeah, for sure. So here’s my question for you then. Given this astronomical reach that these politicians have, you know, and how they can interact directly with their voters, I mean, should it be regulated then? And if we want to regulate it, how do we do that?

Kaustubh Kapoor (08:24):
That’s very interesting. I think there are definitely two schools of thought, both pointing towards regulation. There aren’t many people out there who won’t admit that this technology, or technology in general, should be regulated. So it’s definitely being fast-tracked these days. And we’ve seen how even Canada is becoming a leader in cybersecurity, right? And in security laws. I think a month ago, our cybersecurity partner at MNP was talking to us about how these new laws are going to bring the much tighter security measures needed to regulate cyberspace, which is cool. And especially with the recent U.S. election, we know that President Biden is sort of walking in the footsteps of Obama, right? The big data president, like we said. And he’s looking to fast-track AI and ML regulations. I did some research on how this is supposed to happen, and we know that the U.S. is sort of the leader in this regulation space, in the sense that the world looks to the U.S. whenever it does something, right? So it’s important to talk about, not just to toot their horn. We know that because of Biden, regulations can be fast-tracked through the House and Senate, because the Democrats have a majority there, and Democrats favor regulation. We can see this through AOC, and through Jagmeet Singh, the NDP leader here, who’s from Brampton, right? Whenever they talk about AI and technology, they’re both very aligned on how it should be regulated. We saw them playing, what was it, Among Us on stream, right? And that obviously has nothing to do with AI specifically, but it’s just our politicians being in touch with what’s recent and what’s new. And we saw a recent bill called the Algorithmic Accountability Act of 2019, from Senators Ron Wyden and Cory Booker, who are planning to introduce it again in 2021, because 2019 was a different administration. In 2021, these bills could pass through the House and Senate fairly quickly.

Kaustubh Kapoor (11:09):
The second thing is that federal agencies will get much broader mandates that include AI and ML. And this is very interesting and very important, because we can’t let something get out of hand and then be reactive about it, right? We saw that in cybersecurity. If you talk to an older cop about cybersecurity, to them it’s all voodoo. Why is there a unit that looks at crime statistics or fraudulent money wiring? What does that mean? How can you do groundwork on that, right? But we know that’s super important. And we expect this situation to be addressed fairly quickly by broadening mandates. A law can only do so much; it’s only when the agencies enacting these laws actually come up to speed that it works, right?

Jason Colby (12:00):
Yeah. Yeah, yeah. So I’ve got another question for you, and it’s kind of based off that sphere of influence question that I posed to you previously, because I want to take a little bit of a deeper dive into it. I want to talk about malicious actors: fake accounts created by organizations outside of democratic countries, or by countries themselves, that start to spread false information on social media sites; anybody who creates these weird conspiracy theories online, or creates scams that further an agenda, or simply creates chaos that, from the looks of things, somehow benefits them. And it seems like, and I know this for a fact, they do this by purchasing ads that link to sites that seem credible, to further their own agenda at the end of the day. And it largely targets democracies across the world, where every vote counts. They’re sort of gaming the system, really, creating these false narratives, basically, I think, to destabilize our economy, to shift eyes away from the atrocities they may be committing themselves, or simply to create distrust of government in our voters. Can AI help us protect against these malicious actors in any way, and stop them from spreading this false information?

Kaustubh Kapoor (13:30):
That’s a really good question. I mean, we’ve known about and talked about malicious actors when it comes to viruses, right? If you think about malware, the first thing you think of is an antivirus. You could ask anyone from the ages of 20 to 80; whoever has used a computer understands that. But malicious actors are a much bigger issue than that, right? And it sparks that ethics-in-AI discussion again. I’m going to try and keep this answer short and sweet, because this is all very wrapped up together, at least in my head. So let’s talk about some things that have happened in recent memory, right? Donald Trump’s banning from Twitter. As much as we’d like to believe that somebody on Twitter’s end is sitting there trying to ban specific people, that’s not how it works. There are these AI programs called convolutional neural networks, a branch of AI which I think we’ll talk about sometime. These CNNs are going out there and crawling Twitter to look out for false information, right? That could be one of the ways it’s happening. And Facebook does this with far-right commentary a lot; they’ve banned specific far-right pages, right? So you could say that’s AI trying to control these malicious actors. But let me ask you this. We all know freedom of speech, and we try to uphold that, right? And we know that AI bias exists; we talked about this in the ethics-in-AI episode, right? So according to you, is it fair that this AI, which could be biased or could not be, because we don’t know exactly how it works, it’s proprietary information, is it fair that it gets to control the narrative around a certain topic? I mean, it’s going to be wrong sometimes. I don’t agree with most things that Donald Trump has to say, or anything a fake news outlet has to say; I’m not in any way promoting that. I’m just asking: is it fair that this AI gets to choose what commentary has a voice and what doesn’t?
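
For a sense of what such a system might look like, here’s a minimal sketch of a convolutional text classifier of the kind described, one that scores a tokenized post as likely misinformation or not. The architecture, vocabulary size, and threshold are illustrative assumptions; the real moderation models at Twitter or Facebook are proprietary and far more elaborate.

```python
# A small convolutional text classifier: a sketch of the kind of model
# that could flag posts as likely misinformation. Everything here
# (vocabulary size, layer sizes, threshold) is an illustrative assumption.
import numpy as np
import tensorflow as tf

VOCAB_SIZE = 10_000  # posts are assumed pre-tokenized into integer word ids

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 64),
    tf.keras.layers.Conv1D(128, 5, activation="relu"),  # filters slide over 5-word windows
    tf.keras.layers.GlobalMaxPooling1D(),               # keep each filter's strongest response
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),     # probability the post is misinformation
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# In practice the model would be trained on labeled posts (model.fit) and
# then run over a stream of new ones; high-scoring posts would be flagged
# for review rather than acted on automatically.
def flag(token_ids: np.ndarray, threshold: float = 0.9) -> bool:
    score = model.predict(token_ids.reshape(1, -1), verbose=0)[0, 0]
    return float(score) > threshold

print(flag(np.random.randint(0, VOCAB_SIZE, size=50)))  # untrained demo run
```

Kaustubh’s fairness question maps directly onto the code: the training labels and the threshold encode somebody’s judgment about what counts as false, and that judgment is invisible to the people being moderated.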

Jason Colby (15:57):
Okay, so I’m of two minds on this. I believe in freedom of speech; I don’t think Donald Trump should have been deplatformed. But I do think that companies need to be able to make decisions that protect their policies, their people, and their profits. So basically what I’m saying is, companies still need to be able to make a profit, regardless of who’s on their platform and what they’re saying. And at the same time, I still think these platforms need to be regulated to ensure they benefit people in general, right?

Kaustubh Kapoor (16:36):
Yeah, that does make sense. And I totally agree with you that there needs to be an ethics framework that governs all of this, and that it needs to be out in public, so the public knows how and when certain people are being deplatformed and certain things are being censored, right? Because you never want to feel that, of a story that has two sides, only one side is represented on a certain platform. You want to have both sides, so you can make up your own mind, right? And that’s one of the things we talked about a little bit in the ethics episode: the social dynamic, and how it just brings to the forefront some things we already know about confirmation bias. Which is why I think it’s important to have freedom of speech. Look, the Internet is a vortex that has anything and everything you search for. Search for something crazy, and there’ll be some article proving it. Not necessarily true, but there’ll be some article proving it, right? So it’s important to be able to go on the Internet, look at things, and then make up your mind. But to do that, you need to have two sides of a story, which is why I think censoring and fake news is a very interesting topic. Like we said when we talked about the COVID situation: in that case, it’s about saving lives, so do what’s necessary. But in other situations, where one side might be better than the other or not, it’s not for Twitter to decide, if that makes sense.

Jason Colby (18:13):
So yeah, that makes perfect sense. The only question I have for you is this: politicians are typically driven by lobby groups, and we see now that they can also be driven by fake news groups and conspiracy theories. How do we square that? How do we look to the future and ensure that policies are data driven, and how can AI make that change?

Kaustubh Kapoor (18:35):
Okay, so can AI make sure that data-driven policies are the thing of the future? I totally believe so. That’s what AI is here for: data-driven insights that should help the world, right? We’ve seen in the past that a lot of policies are based on constituents and their beliefs, right? Politicians are bound to do what their constituents believe in; lo and behold, that’s why a certain politician was elected and put into office, to represent their constituents, which makes sense. But there’s also society at large, right? We have to make sure that’s represented too. So we’ve seen in the past that certain policies are based on personal hype, instinct, belief, like I said. And that’s all well and good; there needs to be a mix of both. But there are so many vulnerabilities, specifically in the economy, in health, in homelessness, stuff like that. These key, pivotal issues have so many holes in them, right? And there could be policies that prevent that. So for example, what’s something everybody really wants to not happen? A recession. And I know economists say it’s unavoidable, that recessions and depressions, peaks and troughs, are just part of the economy. But what if there were certain indicators that could tell you ahead of time, hey, this is going to happen, so you might as well introduce a policy right now to soften its harshest effects, right? That’s what AI is here to do. Human cognitive abilities are only three-dimensional; three dimensions are what we can perceive at one time, right? But to an AI, dimensions, or features, or, in a very data-specific term, columns, can be multi-dimensional. There could be millions of dimensions being crunched by a supercomputer. So two things that you might not think are correlated might well be. If something affects, let’s say, the economy, or a housing crash, or increasing homelessness, and you could know that a year before it happened, you could potentially have data-driven policies that prevent it.
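
As a toy illustration of that idea, here’s a minimal sketch that scans a couple hundred candidate indicator columns for ones that correlate with a target a year in advance. The column names, the lag, and the synthetic data are all assumptions for demonstration; none of it reflects real economic indicators.

```python
# A toy "early warning" scan: look for indicator columns that correlate
# with a downturn-risk target twelve months ahead. All data and names
# are synthetic assumptions, not real economic series.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 120  # ten years of monthly observations

df = pd.DataFrame({f"indicator_{i}": rng.normal(size=n) for i in range(200)})
# Plant a signal: the target follows indicator_7 from a year earlier.
df["downturn_risk"] = (
    0.8 * df["indicator_7"].shift(12).fillna(0.0) + rng.normal(0.0, 0.3, n)
)

LAG = 12  # months of advance warning we want
lagged = df.drop(columns="downturn_risk").shift(LAG)
corr = lagged.corrwith(df["downturn_risk"]).abs().sort_values(ascending=False)

# The planted leading indicator surfaces at the top; a real pipeline would
# validate candidates out-of-sample before any policy was written off them.
print(corr.head())
```

This is the "millions of dimensions" point in miniature: a machine can test every column for a leading relationship, including pairs no human would think to compare.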

Jason Colby (21:00):
Yeah, it’s sort of a warning mechanism that we can create for ourselves. That’s pretty cool.

Kaustubh Kapoor (21:05):
Yeah, and we saw that happen, on a very descriptive side, in the mid-1990s. It actually started with New York, when the New York Police Department began geographically mapping crime. And think about this in 2021: it seems like the most archaic thing. But all they did was geographically map where crimes were happening. At that time it was all paper based, but with a little bit of help from computers, they started mapping where crimes were happening in specific neighborhoods. And they were able to increase the police force in those neighborhoods and bring down the murder rate. I’m just reading off my notes here: it plummeted by almost 70%.
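
Here’s a minimal sketch of that kind of descriptive hotspot mapping: bin incident coordinates into a coarse grid and count incidents per cell. The coordinates and grid size are made-up assumptions; a real system would work from geocoded incident reports.

```python
# A sketch of descriptive hotspot mapping: bin incident coordinates into
# a coarse grid and count incidents per cell. Coordinates and grid size
# are made up for illustration.
from collections import Counter

incidents = [
    (40.712, -74.006), (40.713, -74.005), (40.712, -74.007),
    (40.803, -73.951), (40.714, -74.006),
]

CELL = 0.01  # grid resolution in degrees (roughly a kilometer)

def cell_of(lat: float, lon: float) -> tuple:
    """Map a coordinate to its grid cell."""
    return (int(lat // CELL), int(lon // CELL))

counts = Counter(cell_of(lat, lon) for lat, lon in incidents)

# The busiest cells are the candidate hotspots where patrols get increased.
for cell, n in counts.most_common(3):
    print(cell, n)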

Jason Colby (21:41):
That’s incredible.

Kaustubh Kapoor (21:42):
You can see that it’s been happening for a while. But with AI, we get that sort of lead time that can help us prevent stuff instead of being reactive. Again, it’s the reactive versus proactive debate.

Jason Colby (21:53):
That’s kind of it, right? And I mean, I think people have to lead that to a certain extent, right? And that sort of leads me into my next question: how can we ensure that the development of AI is led by people who put the benefits AI can bring to society over a profit model, really, at the end of the day? I want to talk about, and I’m going to butcher the name, Timnit Gebru, the employee at Google who was fired, what was it, just a while ago, after she expressed her dismay over the lack of diversity the company currently represents. So with that in mind, how can we do that?

Kaustubh Kapoor (22:37):
So, the incident, for people who aren’t aware: according to her, she tweeted that she was immediately fired over an email she sent to Google Brain’s Women and Allies internal mailing list. She expressed dismay over the ongoing lack of diversity at the company, and frustration over an internal process related to the review of a not-yet-published research paper she co-authored. And here’s the thing with a lack of diversity: it directly leads into the bias we talked about in episode three. This goes straight into the conversation around bias and how we should avoid it, right? And again, I am a very big advocate of hearing two sides of a story. Who knows what exactly happened, because I think Google is not responding to exact questions there. Which makes sense; a large company is going to try to protect its reputation. So I don’t know exactly what happened, but it might be that there was a certain technology being developed, and due to the lack of diversity, there were certain inherent and apparent biases being introduced into that technology that she wasn’t happy with. And that’s what led to her internally emailing everybody, and getting fired for it. How can we make sure that doesn’t happen? Again, that ethics framework. And I keep coming back to this, and it’s going to sound boring and repetitive, but it’s really important. An ethics framework makes sure that we are not letting the Timnit Gebrus of the world be silenced by bigger companies like Google. Whether she was right or wrong, I don’t know; I’m not commenting on that. I’m just saying that silencing her within the company was not warranted. Because if she’s right, and she’s fired for sending an email about being unhappy with the diversity, you can see that’s not all right. She’s honestly pointing out that, hey, if you lack diversity, the AI you’re producing, the technology you’re producing, is not going to be the best in the future. Through an ethics framework, you can make sure there are certain guidelines for when you can and can’t go after an employee who’s raising concerns about diversity and bias in AI.

Jason Colby (24:54):
Yeah, exactly. I mean, I don’t even think of it as complaining, necessarily. I think of it as her presenting an argument to improve the company at the end of the day, right? Because diversity can also lead to a lot of profit benefits, because you’re reaching out to different people, you’re reaching new markets, right? That’s the big benefit of diversity, really, at the end of the day.

Kaustubh Kapoor (25:20):
Yeah, and it’s totally necessary, because we live in a world that is a conglomerate of different races, beliefs, castes, you name it, right? And we have a company like Google that controls, I think, over 80% of the world’s search. So over 80% of the world, I think, uses Google as their primary search engine; they might use other search engines here and there. So when a giant like that is silencing one of its own over the lack of diversity, how can that be prevented, if not by an outside organization overseeing this? You know, we have the UN so that there’s peace in the world, right? It’s time that AI had its UN.

Jason Colby (26:11):
Or as represented by the UN, really.

Kaustubh Kapoor (26:14):
Exactly, either way: represented by the UN, or some organization doing exactly that. And on that note, I’ll tell you about something that happened in the recent past, I think in December, when the Neural Information Processing Systems conference took place. And sorry, I’m looking down, I’m just reading it off; I wanted to make sure my facts are right here. NeurIPS is a machine learning and computational science conference that happens every December, right? And what happens is they talk about what’s new in the space and how we can improve it. And for the first time, this December, they said, and I’m quoting here, that we need to have a panel of reviewers to scrutinize papers that raise ethical concerns. Jack Poulson, founder of the industry watchdog Tech Inquiry in Toronto, Canada, says, “I think there is a lot of value even getting people to think about these things.” That’s the minimum we can do, right? Researchers are already looking into things like fake news, detected with CNNs, and deepfakes, which are fakes of people, a certain head attached to a different body, which looks exactly like it should, but it’s not real. So you can see the kind of misleading content that can be produced off of deepfakes, and we’ve seen that. “It’s a period of techno-optimism,” says Iason Gabriel, an ethical-AI powerhouse at DeepMind, which is a sister company of Google. And we could see at this conference that people are arguing and calling for more and more ethical frameworks.

Jason Colby (27:59):
That’s awesome, man. That’s such good news.

Kaustubh Kapoor (28:06):
Yeah, and especially in the sort of world we live in, we need regulations. We know that this technology is here and it’s here to stay. Without regulation, we can’t have a world that is functional, in my opinion. So why not have those conversations right now, and make sure we’re on the right track?

Jason Colby (28:27):
Yeah, absolutely.

Kaustubh Kapoor (28:28):
All right, so I guess that’s where we’re ending today’s conversation. We talked about a lot of interesting things in AI and politics: fake news, malicious actors, data-driven policies, and how it all ties together with AI. And to conclude our talk today, I’m going to talk about something that’s top of mind, very recent, and that everybody knows about and thinks about all the time: the COVID-19 pandemic. We’ve seen that during this time, every action governments take is nitpicked. Everything they do is looked at with the narrowest lens ever, right? And that brings up our conversation about fake news. Can fake news about this virus be stopped through AI? It absolutely can. We’ve seen that with Twitter, and we’ve seen it with Facebook, banning people and stopping the spread of this fake news, because at the end of the day, this is about life and death. This is not funny; it’s very serious, and it should be treated the same way. And there’s evidence that supports this. Countries that took data-driven decisions, like South Korea, Australia, and New Zealand, that locked down early, made masks mandatory, and made 6-foot distancing mandatory and kept it in place for as long as possible, are faring well, while some other countries, like the U.S., are not, because they didn’t act fast enough. So this past year, we’ve seen AI and politics and data-driven decisions at work, specifically in COVID-19 modeling: the way countries or governments decide to open things up is through AI-based modeling, or epidemiological modeling, which has statistical roots. It predicts the number of cases, deaths, and hospitalizations in the near future, which ends up being the deciding factor in whether a country or an economy or a government is going to open up or not. We know that AI and politics are tightly monitored right now, and that gives us a unique opportunity to push AI in the manner we want to: to make sure that data-driven policies are now the norm, and not policies made out of hype or public perception.
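
For a flavor of the statistical roots of those models, here’s a minimal sketch of a classic SIR (Susceptible-Infected-Recovered) simulation, the simplest member of the epidemiological family mentioned above. The population size and rates are illustrative assumptions; the models that actually informed reopening decisions are far more detailed.

```python
# A minimal discrete-time SIR simulation: project the infected count
# forward under assumed transmission (beta) and recovery (gamma) rates.
# All parameters here are illustrative, not fitted to any real outbreak.
def sir(population=1_000_000, infected=100, beta=0.3, gamma=0.1, days=120):
    """Step a discrete SIR model forward; return daily infected counts."""
    s, i, r = population - infected, infected, 0
    history = []
    for _ in range(days):
        new_infections = beta * s * i / population  # contacts that transmit
        new_recoveries = gamma * i                  # infections that resolve
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append(i)
    return history

curve = sir()
peak_day = max(range(len(curve)), key=curve.__getitem__)
print(f"projected peak around day {peak_day}: {curve[peak_day]:,.0f} infected")
```

A projected curve like this is exactly the kind of output a government weighs when deciding whether hospitals can absorb a reopening; lowering beta in the model is the mathematical stand-in for masks and distancing.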

Jason Colby (30:50):
Yeah, I mean, to be honest, it’s a hard time to be in politics. If you’re a politician right now, you’re trying to make these decisions, and you’re weighing the pros and cons of the economy versus the welfare and healthcare of your constituents. I mean, that’s a rough balance for anyone to navigate, you know what I mean? That’s kind of the way I feel about it. But at the end of the day, it’s nice to know that we have information being driven to politicians so they can make decisions on when to lock down, when to open up gyms, when to open up to friends and family. Really, at the end of the day, that’s what I think about right now, for the most part: when can I visit people again?

Kaustubh Kapoor (31:44):
Can I get a hug from my best friend? When can I just go to a movie theater and watch a movie? Yeah, stuff like that. Absolutely.

Jason Colby (31:51):
Yeah, just get back to normal, right? That’s the toughest challenge politicians are navigating through right now, because really, at the end of the day, they’re looking at the models, they’re looking at the data they’re being fed, and they’re making tough choices, because it can change in a moment.

Kaustubh Kapoor (32:09):
Exactly.

Jason Colby (32:13):
And that’s a wrap. Join us next week as we talk about the protections AI can bring to help enforce the law, safeguard society, and solve cases.

(32:22):
And we would love to hear from you. If you have any feedback or questions for our team or topics that you would like us to cover, please let us know. Drop us a line on Twitter, Facebook, or LinkedIn, or visit us at MNP.ca.