Jason Colby (00:01):
Many challenges exist for those in charge of public safety. Police officers everywhere are becoming increasingly reliant on technology for many parts of their job. And this includes AI, which has become essential in areas like crime prevention. Many policing practices are currently undergoing significant adjustments in the name of public safety. As more sophisticated criminals emerge, using AI for their own means, law enforcement officers are relying on similar technologies to help them counteract and prevent future crimes from being committed. But how much is too much when it comes to a reliance on AI? And what oversight measures are in place to ensure that the technology being used isn’t making matters worse for society? Join us as we tackle AI and law enforcement right now.
Kaustubh Kapoor (00:56):
Welcome to Smarter Than AI. Last time we picked up the topic of AI and politics and talked about how policymakers are making sure that laws stay relevant in the wake of new technological changes, especially when it comes to AI. This episode we’re going to be talking about AI and law enforcement and how AI can help keep society at large protected. But the point is that AI is not infallible. It can be controversial, and it must be scrutinized, especially when it comes to matters of life and death.
Jason Colby (01:26):
Okay, Kaustubh, we’re going to jump right into this one. Can you give me some examples of how AI is being used in law enforcement today?
Kaustubh Kapoor (01:33):
All right, getting right into it, I like it. So the first thing I just want to mention is that we’re talking about extremely protected, controversial, and proprietary technology. So actually knowing exactly what law enforcement uses can be extremely difficult, and I think we’ll talk more around that in question three. But I found a couple of examples that I think are super cool. One of them is from the U.S. and one of them is right here from Canada. So the first thing I want to talk about is the company Intel and the FBI. What they did was work together to create a technology to help find missing kids, which is extremely cool. Just as a numbers person: every year in the U.S. more than 8 million tips on missing kids come in. Now you can imagine how extremely hard it can be to actually manage this information and create insights that could be used to help solve these crimes. And with that, the most important concept is cataloging this data. You need to know where to find it and how to find it to actually make use of it. And that’s where the concept of big data comes in. I know we’re keeping it very light in this first introductory series of ten episodes, so I’m not going to go into too much detail. But the point of big data is that, as the name would suggest, it’s a lot of data, a large amount of data, coming in at crazy speeds, and it needs to be analyzed with the same speed as well. There’s this concept of the five V’s of big data, along the lines of volume, velocity, veracity, and a couple more that people use interchangeably. But how I like to describe big data is that it’s just a lot of data coming in at crazy speeds that needs to be analyzed in a speedy manner, if that makes sense.
Kaustubh Kapoor (03:33):
So what they did was they created an ML model, a machine learning model, that would take inputs like IP addresses, phone numbers, and texts, and using this big data infrastructure would pop out where a certain missing child could be, the physical location of this missing child. And I thought that was one of the coolest and rarest implementations you can think about, especially when it comes to machine learning. Because we all know there’s facial recognition being used in a lot of law enforcement, but I thought this was super cool because it was a little different from our regular facial recognition, where you pop somebody’s photo into a database and it gives you a match. What do you think?
Jason Colby (04:21):
Yeah. Yeah, that’s super cool. I know for us here at MNP, this is kind of a subject that hits close to home, because for a number of years we’ve donated here in Calgary to an organization called Not In My City. And they actually help build awareness around human trafficking in our communities. It’s kind of cool.
Kaustubh Kapoor (04:42):
Wow, that’s intense. And it is really sad. I just want to mention a quick story about my home country, or I guess it’s not my home country anymore, but where I’m originally from, India, human trafficking is a huge deal there as well. And there are different kinds of trafficking rings that operate there. It is really sad when you pull up to a stoplight and see kids begging, because you know that they aren’t necessarily begging because they’re poor. They might have been made part of a trafficking ring that put them on the street and keeps them inside this crazy organization. And it’s really cool what Intel has done working with the FBI to create, from what I can imagine, an extremely complex ML model to help solve this issue.
Jason Colby (05:41):
Yeah, that’s crazy.
Kaustubh Kapoor (05:42):
So the second example is right here from Canada. It’s actually my buddy’s company. I went to the University of Waterloo, and he was one of my friends there. The company’s name is Guard X, and it’s actually cool because he was also on Dragon’s Den. So, yeah, it’s done really well for what it is right now. What the company does, it has a machine learning model attached to it, and let me try to remember their phrase: it’s roadside safety equipment that helps reduce impaired driving by identifying all different kinds of substance use while you’re driving. Traditionally, when we think about impaired driving, we think about breathalyzers: you blow into a meter and it tells you your blood alcohol level. This technology is a little different. It works specifically on retina identification and contraction. Again, it’s proprietary technology, so I’m not exactly sure how it works unless I invest in the company, which I’m not doing yet. But basically, through your vision, the AI is able to tell whether you’re impaired on a certain substance or not, and it’ll also tell you what substance that is. That could be, say, cocaine, methamphetamine I believe is included, alcohol, and many other substances.
Jason Colby (07:06):
Well, Kaustubh, that is cool. But I’m going to move on to my second question. Have you ever heard of predictive policing? And I’m not talking about Tom Cruise and Minority Report. I’m talking about, well, from my understanding anyway, how predictive policing uses AI to analyze large sets of data to help decide where to deploy the police, or to identify individuals who are more likely to commit or be a victim of a crime. So, for example, the police can use it to map out where and when a number of cars are being broken into in the city, and then they send units there to prevent it from happening again or hopefully catch criminals in the act. A lot of people argue that predictive policing can help predict crimes more accurately and effectively than traditional police methods. But what are the main issues surrounding its use, and how can AI bring these issues to light?
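The break-in mapping Jason describes can be sketched as a tiny hotspot counter. This is a minimal illustration, not how any real predictive-policing product works; the neighborhood names, hours, and report data below are all made up:

```python
# Hypothetical sketch: count car break-in reports per (neighborhood, hour)
# and surface the busiest slots, where a department might deploy patrols.
from collections import Counter

# Made-up reports: (neighborhood, hour of day)
reports = [
    ("Downtown", 23), ("Downtown", 22), ("Downtown", 23),
    ("Eastside", 14), ("Downtown", 2), ("Eastside", 23),
]

def top_hotspots(reports, k=1):
    """Return the k most frequent (neighborhood, hour) slots with counts."""
    return Counter(reports).most_common(k)

print(top_hotspots(reports))  # [(('Downtown', 23), 2)]
```

Real systems layer far richer features (weather, events, past arrests) on top of this basic idea, which is exactly why the bias concerns discussed next matter.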
Kaustubh Kapoor (08:03):
All right, so I’ll give some examples here of how AI can bring these issues to light, because there’s been some research done already. My first thought on this is that predictive policing and racialized policing have a very blurry line between them. I personally don’t know where I stand on this topic, so I’m just going to share some of my thoughts and facts and hopefully let the listeners make up their own minds. Let’s look at an example of a practice that was absolutely horrible, called redlining, where certain parts of a community, say here in Toronto, wouldn’t be given the right amount of investment based on their predominant race, thereby not getting enough infrastructure or good schools, so the local economy would actually suffer, right? And when that happened, it led to another really bad thing called reverse redlining. And what is this? This is when, say, mortgages were being given out based on your race: if you’re race A, you’d get a better mortgage rate than if you’re race B. Now, both these practices are really bad, but what they also do is set a precedent for an AI. Say you created an AI to predict what sort of mortgage rate should be given to a certain person based on their line of credit, this and that, and historical information. Now, historically, what you’d see is that a certain race would default more because their economic conditions were bad.
Now the AI would use this data, because the AI doesn’t know that it was a bad practice that led to all of this, and give out the same sort of reverse redlining results, because there is a precedent being set. We talked about this in episode one, where we saw that AI uses previous data to bring out new insights and predict the future, right? So we can see how predictive policing in itself can be a problem.
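The feedback loop Kaustubh describes can be shown in a few lines of code. This is a deliberately naive toy, not any real lending model; the historical records, the neighborhood labels, and the rate formula are all invented for illustration:

```python
# Hypothetical illustration: a naive model trained on historically biased
# data reproduces that bias, even with no explicit "race" logic of its own.
from collections import defaultdict

# Made-up history: (neighborhood, defaulted?). Redlined neighborhoods were
# denied investment, so defaults there were historically higher.
history = [
    ("redlined", True), ("redlined", True), ("redlined", False), ("redlined", True),
    ("invested", False), ("invested", False), ("invested", True), ("invested", False),
]

def train_default_rates(records):
    """Learn the observed default rate per neighborhood, exactly as a naive
    model would, with no knowledge of WHY the rates differ."""
    totals, defaults = defaultdict(int), defaultdict(int)
    for hood, defaulted in records:
        totals[hood] += 1
        defaults[hood] += defaulted
    return {hood: defaults[hood] / totals[hood] for hood in totals}

def quoted_rate(model, hood, base_rate=0.03):
    # The "AI" quotes a higher mortgage rate where past defaults were higher.
    return base_rate + 0.10 * model[hood]

model = train_default_rates(history)
# Two applicants with identical finances get different rates purely because
# of where they live: biased history becomes biased output.
print(quoted_rate(model, "redlined"))  # 0.105
print(quoted_rate(model, "invested"))  # 0.055
```

The model never sees race directly, yet it recreates the reverse-redlining outcome because neighborhood is a proxy for it; that is the precedent problem in miniature.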
Kaustubh Kapoor (10:30):
Now I just want to highlight how AI is being used, and has been used in the past, to bring out some of these policing issues, or predictive policing issues, right? And I’m just going to pull it up on my laptop because I don’t want to mess up the great work that some of these Stanford professors and students have done. Two individuals, Dan Jurafsky and Jennifer Eberhardt, used AI to analyze the language used by police in body cam footage. They showed, using natural language processing, that the police were more likely to be disrespectful to people of darker skin tones. And that brings up the idea of how predictive policing could go wrong, because there’s already implicit bias in policing practices now. Another Stanford researcher, Goel, wanted to find ways to improve fairness in written police reports. Think about a police report: “An Asian female in the parking lot, height five foot three, driving a Toyota, bumped into a car.” This information in itself contains a lot of PII, personally identifiable information. Using AI, again with the similar concept of NLP, natural language processing, we could mask this information so it reads something like: “Individual A, height masked as well, bumped into a car.” How this natural language processing would help is by anonymizing the data so that no implicit bias is introduced. So predictive policing in itself, if it were to use all this data that has a precedent of being racially biased, could get us into a problem.
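The masking idea can be sketched very simply. This is a minimal sketch assuming hand-written regex rules; real redaction systems use trained named-entity recognition models rather than pattern lists, and the patterns and placeholder names below are my own invention:

```python
# Hypothetical sketch of anonymizing demographic cues in a police report.
import re

# Invented rules for illustration; a real system would use a trained NER model.
MASKS = [
    (re.compile(r"\b(Asian|Black|Caucasian|Hispanic)\b", re.IGNORECASE), "[RACE]"),
    (re.compile(r"\b(female|male|woman|man)\b", re.IGNORECASE), "[GENDER]"),
    (re.compile(r"\b\d+\s*foot\s*\d+\b", re.IGNORECASE), "[HEIGHT]"),
    (re.compile(r"\b(Toyota|Honda|Ford)\b"), "[VEHICLE MAKE]"),
]

def anonymize(report: str) -> str:
    """Replace demographic identifiers with neutral placeholders so a
    downstream reviewer sees no implicit-bias cues."""
    for pattern, placeholder in MASKS:
        report = pattern.sub(placeholder, report)
    return report

print(anonymize("An Asian female, 5 foot 3, driving a Toyota bumped into a car."))
# -> An [RACE] [GENDER], [HEIGHT], driving a [VEHICLE MAKE] bumped into a car.
```

The key design point is that the masked report still carries the facts needed to process the incident while stripping the attributes that could trigger biased treatment.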
Jason Colby (12:30):
Yeah, that makes sense, right? Racial profiling relies on the data that currently exists, which means AI is going to use that data regardless of whether we want it to or not. So we’ve got to make sure that we’re sensitive to how that data is distributed as well.
Kaustubh Kapoor (12:48):
Yeah, you’re totally right. I mean, AI uses past data to predict the future. That’s what AI does. Now, if the past data in itself is racially biased, and I’m going to quickly mention facial recognition here very soon, if the AI is using past data that is racially biased, guess what the future is going to be: racially biased, right? So, okay, let me mention a third CS professor from Stanford. I don’t know why I keep mentioning Stanford so much; it’s just that they’ve done great work in the AI ethics and AI fairness space. She joined Google to democratize AI, and she did an interview with Wired magazine. I found this really interesting because she talks about AI and law enforcement and how widely facial recognition is used in it. One of her students, someone we’ve mentioned before on this show, her name is Timnit Gebru, we mentioned her in the ethics in AI talk. She created a project called Gender Shades that highlighted racial bias in commercial facial recognition algorithms. And I just wanted to mention this Gender Shades project because after it was released, companies like Amazon, IBM, and Microsoft were very cautious about releasing their facial recognition technology into our regular space. We know that facial recognition in the past has been racially biased, where it would traditionally recognize Caucasians far more easily than people of other skin tones. And that just shows us how, if the previous data, the input data, isn’t right, the output will be similar.
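An audit in the spirit of Gender Shades boils down to comparing a system's accuracy across demographic groups. The sketch below uses entirely made-up placeholder results; a real audit would run a commercial recognizer over a labeled, demographically balanced image set:

```python
# Toy fairness audit: per-group accuracy of a hypothetical face recognizer.
from collections import defaultdict

# Invented (group, correct?) outcomes standing in for real audit results.
results = [
    ("lighter-skinned", True), ("lighter-skinned", True),
    ("lighter-skinned", True), ("lighter-skinned", False),
    ("darker-skinned", True), ("darker-skinned", False),
    ("darker-skinned", False), ("darker-skinned", False),
]

def accuracy_by_group(results):
    """Compute the fraction of correct recognitions within each group."""
    totals, correct = defaultdict(int), defaultdict(int)
    for group, ok in results:
        totals[group] += 1
        correct[group] += ok
    return {g: correct[g] / totals[g] for g in totals}

by_group = accuracy_by_group(results)
gap = max(by_group.values()) - min(by_group.values())
print(by_group)  # per-group accuracy
print(gap)       # the disparity a fairness audit would flag
```

A large gap is the red flag: the system is not uniformly wrong, it is wrong for specific groups, which is exactly what Gender Shades documented in commercial products.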
Jason Colby (14:45):
That’s crazy to think about, how something that’s supposed to be impersonal and unfeeling still has those racial biases. It’s so crazy. Okay, let’s speak a little bit more about facial recognition and talk about surveillance states, because anyone who’s a big fan of police dramas like me, specifically The Blacklist, knows how fast Aram can pull up a suspect using facial recognition. It’s crazy. But let’s look beyond the fiction, because this is being widely adopted around the world. I’ve heard in China, for example, police have started using sunglasses that have a camera on them, and these cameras can upload images to a database containing previously captured images of people. And so if there’s a suspect within a large group, the AI can parse that information and the officer is then notified through the sunglasses that there’s a match. That’s just crazy to think about. Like, if you have an unpaid parking ticket, you can be easily identified and somebody can walk over to you and make you pay it. So are we heading into this weird Orwellian future? And is this being driven by AI?
Kaustubh Kapoor (16:12):
All right, I love The Blacklist reference here. It should get a lot of laughs from the listeners. Let’s talk about what you brought up: the China surveillance state, which for now is mostly seen as an issue from the West. China is experimenting with a social credit system. It was first outlined by China’s State Council in 2014, and it’s supposed to be released by the end of 2020. What it does, by definition, is take in all the data it can about you from all kinds of sources, like banks and grocery stores; they’re also experimenting with surveillance footage from cameras on the side of the street. And with each interaction and transaction you make, there’s a social credit score attached to it that ranks your trustworthiness and honesty, basically your social aptitude, right? If you think about it, it’s similar to credit systems. But the problem is that this social credit system is not just for organizations and businesses; it’s for individuals, right? People will have a social credit score attached to them. And that is insane. I mean, think about going on a date and somebody pulling you up on the Internet and being able to tell how trustworthy you are. You might have picked up a chocolate bar and forgotten to pay for it when you were 16, but it’s still on your record, and that pulls down your social credit score, because you’re in the system, right?
Kaustubh Kapoor (17:55):
And I think that’s just crazy. That’s just insane. Coming back to the Western world, it would be quite crazy to see if we were ever to implement something like that here.
Jason Colby (18:07):
Oh yeah, for sure.
Kaustubh Kapoor (18:08):
All right, so now back to the Western world. Let’s talk about an article I read in Vox about police surveillance technology. And I’m going to quote the Vox article here: basically, if you want to know how the NYPD might be using tech to police your neighborhood, tough luck. And that is scary. That is troublesome. I’ll bring back the Stanford researcher’s comments about transparency in facial recognition, especially to reduce bias: how will that ever happen, given this state of affairs?
Jason Colby (18:43):
Think about it. Constructive criticism from a variety of different sources can only make these things better, right?
Kaustubh Kapoor (18:51):
Absolutely. And don’t get me wrong here, I totally understand the importance of keeping some of these technological aspects proprietary to the police, so that malicious actors (I know you love that word, Jason), malicious actors, don’t come in and manipulate this information, right? So we know that’s important. But the point of the matter is, going back to the question at hand about a surveillance state: we need to make sure that whatever technologies are being used by the police to police the community are at least openly discussed with the community. There has to be a sort of trust in the community’s ability to comment on that. If we believe that the majority of the community is good, which we should, then we as a community should have a say in what’s right and what’s wrong when we’re the ones being policed. So I think the whole point of this question comes right back to openness and transparency. If we can tell what technologies are being used and what the inputs to these technologies are: are you using a record that was struck five years later because I picked up a chocolate bar and forgot to pay for it? That’s not right. So that sort of openness, I think, will help us trust our police more, right? So are we moving towards a surveillance state? I don’t think so. I think Western democracy is too powerful, and for the right reasons, for this to happen. It would take a major societal shift.
Jason Colby (20:27):
Well, let’s hope so. I mean, if you start looking at the States, for example, and seeing what’s happening there, I mean, I’m not necessarily a big fan of Edward Snowden, but I can understand the argument that he made, right? That we are collecting too much data on absolutely everything. And a lot of it is inconsequential.
Kaustubh Kapoor (20:46):
Absolutely. And it’s very interesting that you mention we are collecting too much data, because I remember this was a big controversial topic when Alexa and other home assistant devices came out. The main idea around these devices was that they should only collect data if the person allows it and when the person is talking to the device. But there have been multiple reports of these devices collecting data while you’re not even talking to them. So there’s definitely an argument to be made that we’re collecting too much data. You’re totally right.
Jason Colby (21:25):
Okay, so are you ready for my last question here? For it, I think we need to talk about RoboCops. And while I appreciate the fine work of Detective Alex Murphy, what I’m actually referring to is this police robot in Dubai. Have you heard of this? This is crazy. They have a robot that helps people report crimes, pay fines, and receive information via a touchscreen on its chest.
Kaustubh Kapoor (21:52):
Honestly, sometimes I learn more from you than from my own research. I’ll be honest to everyone listening out there, I did not know about this. I researched it just for this podcast. And again, it’s proprietary technology, so there’s minimal information on how exactly all the AIs within this robot are actually interconnected. But I just found it really cool. So just for more information on this RoboCop: first off, Dubai claims that it’s ready to patrol the city. So if you’re in Dubai and you see a robot, it’s a police officer; be respectful. The robot is five feet five and comes with an emotion detector. It’s able to read people’s facial expressions and change its own facial expression, or his, or hers, I’m not quite sure. It has a control center, which sends video feeds right back to the police station, which I thought was super cool. It has a map built in so it can navigate the streets well. It’ll shake hands, and it can talk in six languages. The main thing it’s being used for right now is letting people report a crime or pay fines, and I thought that was cool. It was first introduced at GISEC, which is the Gulf Information Security Expo and Conference, and then it was put onto the streets for patrol. What I found really interesting is that they are releasing more versions of this: one will be three meters tall and able to lift heavy equipment, and one will be able to run at 80 km/h. I thought that was super cool.
Kaustubh Kapoor (23:30):
But now the problems, where I see it going wrong. I’m going to bring back something that we talked about in episode one, which was EV car crashes. In 2012, an EV car crashed into a boulder while coming off of the freeway. Now, if you look at the overall statistics of how many people crash and, sadly enough, pass away, this specific incident was just a speck, but it got a lot of media attention. The problem was that it was caused by an AI. And I see the same thing here. If, say, this robot is given a firearm and accidentally shoots an innocent person, that’s the end of this AI. I think it needs to be released onto the streets very cautiously, with careful limits on how much functionality it’s given, so that none of this occurs.
Jason Colby (24:21):
Oh yeah, exactly. I mean, it’s no cyborg, right?
Kaustubh Kapoor (24:25):
It’s no cyborg.
Kaustubh Kapoor (24:31):
The problems that AI can solve, especially when it comes to law enforcement, are endless. It can potentially help find and save missing kids, or help with roadside sobriety checks. But, and I think when I say this I speak for the both of us, it should be managed effectively, transparently, and openly. This may, and will, help reduce both gender and racial bias in the technologies currently used by law enforcement, and help law enforcement make informed decisions that will serve us now and in the future. And it’s just as important for us to look at the technologies currently at play and comment on whether their premises are correct.
Jason Colby (25:16):
And that’s a wrap. Join us next week as we talk about the opportunities and challenges that AI brings to agriculture.
And we would love to hear from you. If you have any feedback or questions for our team or topics that you would like us to cover, please let us know. Drop us a line on Twitter, Facebook, or LinkedIn, or visit us at mnpdigital.ca.