Ethics in AI

No system is perfect. Neither is AI. As humankind advances, we continually delegate more and more of our lives to machines. We trust them to make the right decisions, because time is valuable and they are efficient. But what are the ramifications? What happens when people are treated unfairly because of the decisions made by systems incapable of compassion? Join us as we discuss the moral implications of AI, and the effect it has on the human condition.

Ethics in AI transcript

Jason Colby (00:01):
Over time, as machines have become more and more sophisticated and learned to make decisions on our behalf, we have been delegating that responsibility to AI. Why wouldn’t we? They only do what we allow them to, as precisely and efficiently as possible. But what if that efficiency means real humans suffer unfair treatment? What if decisions that impact life and death are made by systems with no human compassion? Join us as we tackle ethics in AI, right now.

Jason Colby (00:44):
So Kaustubh, what are the five big moral questions facing the use of AI today?

Kaustubh Kapoor (00:49):
That’s a good one. So when we talk about moral dilemmas, every big ethics-in-AI leader has their own perspective. People and industries have very different perspectives on this topic that we could delve into. I mean, look at Google: they started an ethics committee, an ethics board, I think in 2019, and dismantled it within a week. I think it was April 2019; I might be getting my dates wrong here. And then obviously there’s what’s going on with Timnit Gebru, and I apologize if I didn’t pronounce that right. So we see what’s happening in the tech giants’ sphere. And then there are countries like Australia and Singapore. If you go on their government websites and look at their ethics frameworks, it’s absolutely insane. You see all sorts of guidelines and steps to take to make sure that AI is behaving ethically. It’s super cool. But there are obviously some things that everybody can agree upon, right? And the first one is very simple: privacy and data protection. I don’t think much more needs to be said on that topic; we all know that privacy is super important, right? Especially when you talk about this evolving concept in machine learning called convolutional neural networks, which is basically computer vision and things related to people’s photos. The data required for that is basically pictures of people, so you need to be privacy conscious and protection conscious when you’re dealing with people’s photos and images, right?
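For anyone curious what a convolutional neural network actually looks like in code, here is a minimal, illustrative Keras sketch of the kind of image model being described. The input size, layer widths, and single yes/no output are assumptions for illustration, not the architecture of any project mentioned in the episode.

```python
# Minimal illustrative CNN for image input (assumed 128x128 RGB photos).
# A sketch of the general technique, not a production model.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(128, 128, 3)),          # assumed input size
    layers.Conv2D(16, 3, activation="relu"),    # learn local visual features
    layers.MaxPooling2D(),                      # downsample feature maps
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),      # e.g. a single yes/no output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```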

Kaustubh Kapoor (02:34):
The second one is transparency and explainability. This one is very close to every data scientist’s heart, just because it’s super hard to stick to, but it’s also very important. People treat AI as a black box, and we as machine learning engineers especially are trying to get away from that, trying to increase the transparency and explainability of an algorithm to the point where it’s understandable by the general public. The third is bias and discrimination. Again, none of these guidelines and policies are anything crazy; I’m guessing you’re not surprised by the things I’m talking about. But it’s super important. Bias is our responsibility. It hurts the people who are discriminated against, and it’s very, very important that we talk about this, are vocal about this, and make sure that we eliminate it as much as possible. The fourth one is beneficence and non-maleficence. Big words, but all we’re trying to say is that AI should benefit people and not harm people. And the last one is autonomy. This one is super interesting, because if you think about it, every human should have a say in what’s being done or decided about them. You shouldn’t be sidetracked when a decision is made about you. But it’s complex when you talk about an AI, right? An AI might suggest that you don’t get a certain mortgage. Why is that? So that’s the autonomy we try to keep with humans: the final decision and result should rest with humans.

Jason Colby (04:15):
Right on. So let’s tackle the issues around privacy and data first. How can the data be collected by AI? How can it be managed effectively? And how can we safeguard against how that data might be used?

Kaustubh Kapoor (04:31):
Okay, that’s a good one. Not to toot my own horn or try to sell our own practice here, but we’re doing a very interesting project here at MNP with a lottery commission. What they’re trying to do is a super novel and cool process, except for the fact that Google and YouTube have already started on it. They’re trying to predict the age of a customer when they walk into, let’s say, a retail store, so that lottery tickets can be sold to them without actually IDing them in person. When I talk about YouTube doing that, I think it was earlier last year or the year before, 2019 or 2020, somewhere in that range, that they introduced this within their videos. You know how certain videos you can’t just watch because you might be underage? What YouTube does is use your photo: they turn on your camera, look at you, and then guesstimate, or predict, how old you are. And that’s what I mean by what we’re trying to do here: predict whether a certain person is a minor or not, and whether they’re able to buy a lottery ticket or not. If you think about it, if we successfully prove this, there are so many things we could do. You could buy alcohol at a vending machine without needing to be ID’d, right? If this is successful, it could really, really work. But then again, you can imagine how much privacy and data protection we need to make sure is in place.
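As a rough illustration of this kind of age-gating flow, here is a small sketch of how a prediction might be turned into a decision, with uncertain cases sent back to a human ID check. The model file, the `decide` helper, and the thresholds are hypothetical and are not MNP's actual system.

```python
# Illustrative sketch of an age-gating decision flow (not any real deployed system).
# The model file name and thresholds are assumptions for illustration.
import numpy as np
import tensorflow as tf

LEGAL_AGE = 18
CONFIDENCE_MARGIN = 3  # years; abstain when the estimate is close to the cutoff

model = tf.keras.models.load_model("age_estimator.h5")  # hypothetical trained model

def decide(image: np.ndarray) -> str:
    """Return 'allow', 'deny', or 'check_id' for one face image."""
    batch = np.expand_dims(image.astype("float32") / 255.0, axis=0)
    predicted_age = float(model.predict(batch, verbose=0)[0][0])
    if predicted_age >= LEGAL_AGE + CONFIDENCE_MARGIN:
        return "allow"
    if predicted_age < LEGAL_AGE - CONFIDENCE_MARGIN:
        return "deny"
    return "check_id"  # uncertain cases go back to a human ID check
```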

Kaustubh Kapoor (06:09):
So what we’re doing in our project is collecting these images with explicit consent. And by explicit consent, I mean there are so many ways you could collect this data, and this data is people, their images, right? It’s as close to them as it gets; it’s literally their faces. As much as you anonymize it, you’re still putting people’s faces and their face data into an algorithm. And when you think about that, it’s not a light issue. That’s a pretty heavy thing you’re doing. So it’s important that we make it explicitly known what we are using this data for. We’re not scraping it off the Internet, not taking data from people who accidentally put it there or don’t know it’s there. We’re explicitly telling people: we’re developing this cool, cutting-edge technology, and if you’re okay with this, would you want to be part of the journey? That’s how I think privacy can be maintained. In the age of social media it’s hard to maintain privacy, but when it comes to AI, we really need to be careful. And data protection is the second piece; that’s all about encryption and security. We just talked about privacy and data protection, and I mentioned we’re collecting images of people’s faces. What are your thoughts, as somebody who’s not in this field? If I asked for your photo to be used in a random algorithm, where people would never directly see your photo but you’d be a part of it, how would you feel?
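On the encryption-and-security side, here is a minimal sketch of one common approach: record explicit consent alongside the data and encrypt images at rest with a symmetric key. It uses the `cryptography` package's Fernet API; the file paths and the consent-record fields are examples, not a description of any particular project's pipeline.

```python
# Sketch: store a face image only with an explicit consent record, encrypted at rest.
# Uses the `cryptography` package (pip install cryptography); file paths are examples.
import json
from datetime import datetime, timezone
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, managed by a key-management service
cipher = Fernet(key)

def store_image(image_bytes: bytes, participant_id: str, consent_given: bool) -> None:
    if not consent_given:
        raise ValueError("No explicit consent recorded; refusing to store image.")
    encrypted = cipher.encrypt(image_bytes)          # image is never stored in plain form
    with open(f"{participant_id}.img.enc", "wb") as f:
        f.write(encrypted)
    consent_record = {
        "participant": participant_id,
        "purpose": "age estimation research",        # state the purpose explicitly
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with open(f"{participant_id}.consent.json", "w") as f:
        json.dump(consent_record, f)
```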

Jason Colby (07:42):
Well, there are two sides to that question. How I would feel about it depends on the use of the image, where it’s being used, and most importantly how it’s being used. But it’s also illegal to use my face without prior consent. It’s not a thing that can be done, so whatever you’re using it for, you have to make sure that person has given consent.

Kaustubh Kapoor (08:17):
Yeah, that’s totally fair.

Jason Colby (08:19):
But that’s the way it works in Canada. Advertising with that image would be a different story. If it’s just facial recognition, and the AI is just trying to figure out who’s who, I don’t think the legislation has caught up to that yet. I think if you’re just using it for facial recognition purposes, then legally you’re fine. And personally, I’m fine with that, because I want to be recognized by the device that I’m using.

Kaustubh Kapoor (08:52):
Okay, cool. Right. So you’re okay with your image, your face, being used, but again, the explicit consent comes in. You’re explicitly okay with your face being used for your device to recognize you. And let’s say you were an avid lottery player and we were interested in that; you might be okay with your image being used for that too. But again, I think it’s the same thing: explicitly tell people what you’re using their data for, and they might be more on board with you using it, because they know what’s up.

Jason Colby (09:30):
Yeah, yeah, absolutely, right? For example, I know that my Alexa device is probably recording my voice even right now. But how that data is being used, how long it’s being stored, and what’s being recorded, that’s all up for debate, I think, and open to interpretation, which is where I think we need our legislation to catch up a little bit. And that kind of goes into the next question here: how can we ensure that companies are open and accountable for the decisions that their AI makes, and for what the AI is doing with that data?

Kaustubh Kapoor (10:11):
That’s very important. Here’s a thing for you, based on your comment. I got an email, and this is very surprising because you never get this email, saying that my Google Play data was going to be deleted, finally. I probably last used it in 2015 or ’16, right? I didn’t think it was even a thing anymore. And then I got a notification that my data had been deleted. I thought, this is a very small step towards privacy, but it’s also a big one. If Google is willingly telling us that my data is being deleted, then again, explicitly telling people what you’re doing with their data builds the trust and confidence you need for technology to improve, I think.

Jason Colby (10:57):
Oh yeah, absolutely, there’s no question about that. But you know, those are kind of the simple AIs; Google Play only really collects the data that you’re actively inputting, right? It doesn’t necessarily go beyond that, so that’s kind of a different story. But when you’re talking about smarter AIs, things that activate your camera and target you with ads, is there a way to ensure that the decisions the AI produces are indiscriminate and unbiased? I just want to make sure that everybody is treated equally and fairly.

Kaustubh Kapoor (11:39):
That’s so interesting. So let’s get into the first question you asked: whether companies can be held accountable for the decisions their AI makes, right? And the answer to that, totally and wholeheartedly and without any question, should be yes. There’s no way a company should not be accountable for the decisions the AI it produced is making. There’s no second way about it, if that makes sense. It’s very important to say that outright: yes, they will and should be. Now, how do you make that happen? The sad part is that, technology-wise, there’s no real way of making that happen unless you step in. What I mean by that is, again, we bring it back to one of the dilemmas we talked about, autonomy, which is essentially that we as creators of an AI should be absolutely in control of the final decision that comes out of it. By that I mean an AI will nudge you towards something, but you should make the final decision of whether that goes into place or not.

Kaustubh Kapoor (12:55):
So let’s say, again, I’ll bring up that mortgage example, because I like it. Redlining was huge in the States for a while. I’m glad it probably happens less now, though not to the point where we’d like it to be, but it’s super important that we bring it up. Say an AI suggests that a certain family, or a few people in a certain area, are not supposed to get mortgages higher than, let’s say, 500,000 for some reason. Right. And the reason, more often than not, is because of what has happened in the past, and we know what has happened in the past is not the best. When it comes to these sorts of things, the final decision should still be the human’s. The AI answers something, but the person giving out the mortgage should make the call on whether it’s right or not. So again, it’s the whole thing where autonomous human decisions should and will still be in play for the longest time. We’re not going to let AI make decisions for us that are absolute and final.
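The pattern being described, keeping the human in the loop, is often implemented by treating the model's output as a recommendation that a named person must approve or override. The structure and field names below are an illustrative sketch, not any lender's real system.

```python
# Sketch: the model only recommends; a human officer records the final decision.
from dataclasses import dataclass

@dataclass
class MortgageDecision:
    applicant_id: str
    model_recommendation: str   # e.g. "approve up to 500k"
    model_rationale: str        # plain-language reasons surfaced to the officer
    final_decision: str         # what actually takes effect
    decided_by: str             # the accountable human, never the model

def finalize(applicant_id, recommendation, rationale, officer_name, officer_decision):
    # The AI's suggestion is logged for audit, but only the human's call is acted upon.
    return MortgageDecision(
        applicant_id=applicant_id,
        model_recommendation=recommendation,
        model_rationale=rationale,
        final_decision=officer_decision,
        decided_by=officer_name,
    )

decision = finalize("A-1027", "decline", "low predicted repayment score",
                    "J. Smith", "approve with conditions")
print(decision)
```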

Jason Colby (14:06):
Yeah, okay. That makes a whole lot of sense. But I guess my bigger question is: how can we make sure that the decisions the AI makes benefit people rather than harm them? You sort of answered that when you gave that example; we’re trying to make sure a person has the final decision, right? But can we track the way the AI arrives at the recommendation it hands to the person who ends up making that final judgment? Is there a series of decisions that the AI is making?

Kaustubh Kapoor (14:47):
Yeah, that’s a very intuitive way to think about it. Now that the discrimination by AI that was pretty prevalent in the past has come into the limelight, as it were, everybody’s started talking about it; everybody wants to make sure our AIs are non-biased, non-discriminatory, and benefiting people. The Googles, the IBMs, the Facebooks of the world are all coming out with tools to make exactly that happen. Now, in this episode we’re not looking to go into super detail about the technology that’s out there, but I’ll name one: Google’s Model Card Toolkit. What it does is create model cards for machine learning models; it’s that bridge between the model creator and the user, and it gives context and transparency around the model. So, like you said, there are certain decisions being taken in phases; this is where you would find them, in the model card. You’d find the decisions that are affecting people and their lives.
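To make the idea concrete, here is a dependency-free, plain-Python illustration of the kind of information a model card typically records. Google's Model Card Toolkit generates documents like this programmatically; the fields and example values below are assumptions chosen for illustration, not the toolkit's API.

```python
# Plain-Python illustration of the metadata a model card records (not the toolkit's API).
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelCard:
    name: str
    intended_use: str
    training_data: str
    evaluation_groups: List[str] = field(default_factory=list)  # who was it tested on?
    known_limitations: List[str] = field(default_factory=list)
    ethical_considerations: List[str] = field(default_factory=list)

card = ModelCard(
    name="age-estimator-v1",
    intended_use="Estimate whether a retail customer appears to be of legal age.",
    training_data="Consented face photos collected for this project only.",
    evaluation_groups=["age bands", "skin tones", "lighting conditions"],
    known_limitations=["Less accurate near the legal-age boundary."],
    ethical_considerations=["Uncertain cases must fall back to a human ID check."],
)
print(card)
```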

Jason Colby (16:05):
Interesting. So here’s a question for you, sort of my last question. As individuals, do we have the right to look at the decision-making power that we’ve previously ceded to the AI? Do you know what I mean? The decisions the AI is making: can we ever have a general understanding of how the AI is making those decisions, and make sure that we’re always in control?

Kaustubh Kapoor (16:40):
Yeah, that’s an absolutely valid point. I think everybody out there wonders, how did I get pegged as this person? How did the AI decide that I’m this person, and that this is good for me? I should know, right? I think that’s what the transparency dilemma is all about. If you look at AI over the past ten years, it’s been pitched to everyone as a black box. And to be honest, if I asked you what’s happening behind the curtain, you probably wouldn’t know that much. That’s probably our fault as AI practitioners, because we should be the people explaining what’s happening behind, let’s say, even the simplest regression model. What’s happening behind the regression model? Why is the final output the way it is? And you’re totally right, it should be shared with the end user.
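As a small illustration of that point, even the simplest regression model can be explained by showing how each input moves its output. This scikit-learn sketch uses made-up feature names and toy numbers purely for illustration.

```python
# Sketch: a simple linear regression explained by its coefficients (toy data).
import numpy as np
from sklearn.linear_model import LinearRegression

features = ["income", "existing_debt", "years_at_job"]
X = np.array([[60, 10, 2], [85, 5, 7], [40, 20, 1], [120, 2, 10]], dtype=float)
y = np.array([250, 420, 150, 600], dtype=float)   # e.g. approved amount, in thousands

model = LinearRegression().fit(X, y)

# "Why is the output the way it is?" -- show how each input moves the prediction.
for name, coef in zip(features, model.coef_):
    print(f"{name}: {coef:+.1f} per unit")
print(f"baseline (intercept): {model.intercept_:.1f}")
```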

Kaustubh Kapoor (17:48):
So here’s another cool example. I think this is exciting because we’re talking about COVID, and COVID is everywhere. We worked on another project, called Lifesaver. It’s a supercluster project, an amazing project where we try to help businesses and communities function with COVID. COVID is not going anywhere, so we wanted to make a product that would help you function during these times. How we do that is we predict how many people around you could be infectious. Right. So if you go to the mall, or anywhere, you want to know, I want to know at least, how many people around me could be infectious. That’s important. If I see that place X has people walking around who could be crazy infectious, I don’t want to go there, right? Now, this can be thought of as being discriminatory in a lot of ways, if you think about it. If our app, let’s say it’s widely used, discriminates against a certain area because of a high case count, you could say, dude, you designed this app to discriminate against this specific area, and that’s why people aren’t going there. So to counteract that, what we did, and we’re actually in the process of writing this, is we’re adding a paper explaining our entire model and putting it up alongside our app, for everyone to look at: go ahead, dig into the design, this is what’s happening, check it out. If you like it, you can use it; if you don’t, you’re totally free not to. This, I feel, is one of the big things that’s going to start coming out in this decade: when people build AI models, there has to be something explaining what the model is doing.
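The Lifesaver model itself isn't described in detail here, but a back-of-the-envelope sketch shows the flavour of the estimate: given a local prevalence rate, what is the chance that at least one of the people around you is infectious? The prevalence figures, crowd sizes, and independence assumption below are all simplifications for illustration.

```python
# Back-of-the-envelope sketch: chance that at least one nearby person is infectious,
# assuming independence and a known local prevalence (both simplifying assumptions).
def prob_at_least_one_infectious(prevalence: float, people_nearby: int) -> float:
    return 1.0 - (1.0 - prevalence) ** people_nearby

for place, prevalence, crowd in [("grocery store", 0.005, 30),
                                 ("shopping mall", 0.005, 200)]:
    p = prob_at_least_one_infectious(prevalence, crowd)
    print(f"{place}: ~{p:.0%} chance someone nearby is infectious")
```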

Jason Colby (19:47):
Yeah, that makes perfect sense. That makes perfect sense. Well, I mean, that’s all the questions I think I have on this topic. It’s been pretty enlightening.

Kaustubh Kapoor (19:58):
There’s one other thing that I wanted to talk to you about. I just want to get your thoughts, give me one sec. It’s about this documentary called The Social Dilemma. Have you watched it?

Jason Colby (20:16):
I haven’t yet. I’ve heard a lot about it.

Kaustubh Kapoor (20:20):
What have you heard about it, and would you watch it?

Jason Colby (20:24):
Oh, I definitely want to watch it. It’s on my list. You know what I mean?

Kaustubh Kapoor (20:30):
I swear, my list just keeps growing.

Jason Colby (20:33):
It’s Netflix, man. Got me.

Kaustubh Kapoor (20:36):
With lockdown, I swear, all I do is watch TV shows. But here’s why this is super important. Let me backtrack and give a summary of what the show is about, for anyone who, like you, hasn’t watched The Social Dilemma on Netflix yet and has it on their quote-unquote list.

Jason Colby (21:05):
Yeah, tell me about it. I’m interested. Like I said, it’s on my list, but I haven’t quite got to it yet.

Kaustubh Kapoor (21:11):
It’s okay.

Jason Colby (21:12):
I actually watched a climate change documentary yesterday with Woody Harrelson. And so it’s not like I’m not watching docs over here.

Kaustubh Kapoor (21:21):
I was just gonna say, if you’re only watching shows like Too Hot to Handle, those drama shows, I’m disappointed.

Jason Colby (21:28):
No, no, no.

Kaustubh Kapoor (21:32):
You’re a dad, you know, you gotta be instilling good virtues into the kids.

Jason Colby (21:36):
Well, I think they’re instilling good virtues into me to be honest.

Kaustubh Kapoor (21:43):
All right, so, The Social Dilemma. It’s a show that explains to us in simple language what our favorite social channels, Netflix, Instagram, your Twitters, everything, actually do. What’s happening behind them? How do these companies make money? And it explains how we’re so glued to these things that we can’t move away. If you look at my screen time right now, it’s probably around the six-hour mark. And that’s not good.

Jason Colby (22:24):
Absolutely not.

Kaustubh Kapoor (22:27):
It’s awful, right? But the reason why it’s interesting is one of the things I remember: I think it was a Pinterest executive or somebody who said that if you don’t know what the product is, you are the product. And that has just stuck with me forever. I mean, all these free services, Facebook, Instagram, they’re not free, right? Somebody’s making money; nothing in corporate America is free. They’re making that money somehow, and how they’re making it is through us. We are the product, right? And to be honest, most of the time it’s not the biggest problem. If I think about it, I’m like, okay, I was talking about buying this guitar and now all I see is guitar ads. Not the worst thing; I didn’t have to go searching, and I actually found a guitar that I wanted to buy. Not too bad. I was thinking about buying something as simple as toilet paper, and now I have ten options for four-by-18 ply. Right. And that’s fine. But when I think about news, it’s a big problem. This might be too political, but are you more on the left, the right, the center? What are your views, if that’s not too much to ask?

Jason Colby (23:54):
No, I mean, I think I’m closer to the center. That’s up for debate, I guess. But on the subject, sometimes there’s the creep factor. The AI nails it so closely that you’re like, how the heck did it know that? Is it listening to me all the time? You know what I mean? Sometimes I’ll be having a conversation with my buddy on the train or something, and we’re talking about something we want to purchase, and no word of a lie, we go on Facebook and there it is. And we’re like, my phone was off.

Kaustubh Kapoor (24:31):
Yeah. “My phone was off, how?” So that’s one aspect. But when it comes to the political news aspect, it becomes even crazier, if you think about it. I would consider myself somewhere between the left and the center, right? But I am open to hearing views from the opposite side, because I want to be holistically informed and then make my decision on where I stand, on any topic in life. And in this day and age, that confirmation bias is massive, and it’s massive because of these companies. They look at you, they see you’re searching for only these types of views, so they feed you only more of these types of views. So you wouldn’t even know what’s happening on the other side of the table. I’m not saying the other side is right, but you’d like to know the entire story before making your decision, correct? What do you think?
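Mechanically, the feedback loop being described can arise from something as simple as ranking content by past engagement. This toy ranker is an illustration of the loop, not any platform's actual algorithm: the more a user clicks one viewpoint, the more of that viewpoint they are shown.

```python
# Toy illustration of an engagement-driven feed and the confirmation-bias loop.
from collections import Counter

click_history = Counter()   # viewpoint -> number of past clicks

def rank_feed(candidate_posts):
    """Sort posts so viewpoints the user clicked most come first."""
    return sorted(candidate_posts,
                  key=lambda post: click_history[post["viewpoint"]],
                  reverse=True)

posts = [{"title": "Story A", "viewpoint": "left"},
         {"title": "Story B", "viewpoint": "right"},
         {"title": "Story C", "viewpoint": "center"}]

click_history.update(["left", "left", "center"])    # simulate past behaviour
for post in rank_feed(posts):
    print(post["title"], "-", post["viewpoint"])     # 'left' stories now come first
```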

Jason Colby (25:29):
Yeah, and that kind of leads me back to the question that I asked previously, right? How can we know what our data is being used for, how it’s being transferred, all that stuff?

Kaustubh Kapoor (25:42):
Exactly. And say Facebook released how it decides; it’s proprietary, so that gets hard. But if it were released how it feeds you more and more of these ads, or more of this kind of news, you’d at least have the information to decide whether you want to continue using the service. But you don’t know, so all you get is that confirmation bias. And you have no desire to go out there and search for yourself for what is actually the truth, I believe.

Jason Colby (26:11):
Yeah. I mean, I think that’s the big difference between credible news agencies and social media, right? That’s the big difference: social media is going to keep churning out whatever information it wants to provide you, and you’re just going to stay in your bubble, whereas with the news media there might be a certain amount of bias, but what they’re supposed to be doing is giving you facts to back up their opinions.

Kaustubh Kapoor (26:40):
Exactly. And that, I feel, is the biggest learning from The Social Dilemma, the Netflix documentary: if anything, what we see on our social media is confirmation bias. So we should go out there and search for truths and facts and then make up our minds, right? The facts will be on both sides, and it’s up to us which side we stand on, but at least we’ll have both sides. And another sort of messed-up thing that we realized, and tried out because of that documentary, involved my buddy’s girlfriend. She was watching the show Grey’s Anatomy.

Jason Colby (27:25):
Oh no. Yeah, that’s a dark hole.

Kaustubh Kapoor (27:28):
Exactly. It’s been running for like 14 years; there are so many characters, so many episodes, it’s wild. She was watching it, and my buddy, none of us really, had ever watched it. Okay, full disclosure, I have watched it, I just want to say; I started when I was dating somebody and then, you know, I just got hooked. Regardless, the point I want to make is that he had never watched it before, and he searched up something, he typed “what season does,” right? And all of his suggestions were from Grey’s Anatomy. Everything was from Grey’s Anatomy: what season did Derek, some character in the show, die in, I can’t remember exactly, what season other characters died in, when does it end, when’s the next season. Everything was from that show. And that’s because geographically you’re close to somebody who is really looking into it, so when you search, that’s what it shows you. Why?

Jason Colby (28:34):
So here’s another question, then. How can AI differentiate between household members? Because you kind of brought this up, right? It was your buddy’s girlfriend, you said, who was looking into Grey’s Anatomy, and he got a whole bunch of suggestions based off something somebody else had been searching.

Kaustubh Kapoor (28:51):
Yeah, that’s interesting. So if we look at products that we use, Google, I think it’s Google Nest, or when you have the Google Home Mini, it allows for voice activation and voice recognition as well.

Jason Colby (29:04):
Absolutely. I can’t even count how many times my daughters have asked Alexa to tell them a joke or sing them a song.

Kaustubh Kapoor (29:15):
Yeah, exactly. So that is one way our AI devices could differentiate between, or amongst, household members, I guess. But that’s a really good question. We’re definitely going to make this into a question that we answer in the second part of this ethics in AI series, because I think we’re doing two episodes on this.
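As a rough sketch of how voice-based differentiation can work under the hood, one common approach is to compare an embedding of a new utterance against embeddings enrolled for each household member. How the embedding vectors are produced (some voice model) is assumed and out of scope here; the names and numbers are placeholders.

```python
# Sketch: telling household members apart by comparing voice embeddings.
# The enrolled vectors and the new utterance's embedding are placeholder values.
import numpy as np

enrolled = {
    "jason": np.array([0.9, 0.1, 0.3]),
    "daughter": np.array([0.2, 0.8, 0.5]),
}

def identify(utterance_embedding: np.ndarray, threshold: float = 0.8) -> str:
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    best_name, best_score = max(
        ((name, cosine(utterance_embedding, emb)) for name, emb in enrolled.items()),
        key=lambda pair: pair[1])
    return best_name if best_score >= threshold else "unknown"

print(identify(np.array([0.85, 0.15, 0.35])))   # closest to the enrolled "jason" voice
```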

Jason Colby (29:42):
Yeah, we are. Yeah, spoiler alert.

Kaustubh Kapoor (29:46):
Yes, we’re definitely going to look into this. But that brings me to the last thing that I wanted to chat about in this episode, episode three, but part one of ethics in AI. Before we sign off: one other cool show, and I know you keep up with it, is Silicon Valley.

Jason Colby (30:09):
You got me, bud. I haven’t watched the last season, if I’m being honest, but I’ve watched everything before that, man. Pied Piper, you’ve got to be kidding me.

Kaustubh Kapoor (30:20):
Exactly. So that show in and of itself is amazing, because it’s based on true things. And I say that because, random tangent, but for the project that I talked about, that lottery project, one of the vendors we talked to about collecting this data is connected to it. In Silicon Valley, there’s this episode where this dude, the character Jian-Yang, made a classifier for hot dogs. I don’t know if you remember that.

Jason Colby (31:02):
Oh, yeah, vaguely.

Kaustubh Kapoor (31:04):
Yeah, he made a classifier for images, whether they were hot dogs or not hot dogs. And we were talking to this guy in Silicon Valley yesterday who that episode is based on. He was telling us that he made this classifier, built a company around it, sold it off, and now he’s doing this other thing. So those are actually two different things. But to come back to the ethics part of why I brought this show up: if you look at cryptocurrency and the new Internet that they wanted to make in the show, it’s very similar. It’s for the people, by the people. That’s what cryptocurrency is; it’s not controlled by a central institution. Everybody has a say in it, the people and their community. And that is sort of where we should head in terms of AI as well. There need to be more governing boards, the same way that, at the top of the episode, I was saying Australia, Singapore, countries like that have great ethics principles and guidelines out there for their practitioners to follow. There needs to be more of that, but in a community sense, so that everybody knows what’s happening, what developments are occurring, and what’s okay and what’s not okay.

Jason Colby (32:22):
Yeah, that makes perfect sense, right? That’s one of the big things, the big dilemma with tech, right? A lot of politicians, policymakers, lawmakers, what have you, are always playing catch-up to what’s being developed. And with tech that’s understandable, because they have to know what the social impact is going to be before they can regulate it.

Kaustubh Kapoor (32:53):
No, no, you’re right, they can’t. We did one of our 60-second segments on this, on whether you can stop a rogue AI and mathematically know it. Those segments are pretty cool: in 60 seconds you learn something new, that sort of thing. Lawmakers can’t do much and can’t make laws if they don’t understand the technology. And that’s visible if you watched the Facebook founder’s Congress testimony.

Jason Colby (33:31):
Exactly. Because it was all over the map. You had politicians that could somewhat stump him. I’m not going to say they really could, per se, he did still answer the questions, but there were questions like, how does Facebook even work?

Kaustubh Kapoor (33:50):
And he can’t answer that sitting in there. I mean, take Google itself: there are millions of variables that go into your search. He can’t explain that sitting in a room in ten seconds, or thirty, or a minute; that would be like the worst round of Jeopardy ever. But yeah, it’s just super important for us to keep looking at the new things that are happening, like cryptocurrency, like the new Internet in that fictitious show, because there are ways of steering technology to make sure that it enhances people’s lives, again coming back to the beneficence dilemma. It’s important that we keep our heads and our minds open to letting technology get better, and not shut it down, because it’s going to happen anyway. If you control the narrative of how it happens, that’s better than taking the back seat and letting things happen to you.

Jason Colby (34:48):
Right on. Well, let’s call it there.

Jason Colby (34:53):
We all know that AI has the power to change the world, and it will become more and more mainstream. There’s no denying that. Computers themselves are becoming more and more powerful, and developers have easy access to open-source learning libraries. The change that AI will bring is going to be rapid and at a massive scale. But it can be argued that anything built for speed and scale requires a critical examination of its impact, because unintended harms can occur at that same scale. Then add the increasing ease with which malicious individuals can deploy state-of-the-art machine learning systems at scale, and the urgency hits you. It’s good to know that many experts are tackling the issues that AI brings, so individuals can protect themselves. Join us next week as we talk about the politics surrounding AI.

And we would love to hear from you. If you have any feedback or questions for our team, or topics that you would like us to cover, please let us know. Drop us a line on Twitter, Facebook, or LinkedIn, or visit us at MNPdigital.ca.