Hype vs. Reality

Our pilot episode introduces the topic of Artificial Intelligence and Machine Learning; we start our journey by separating the hype from the reality. From the start, our hosts demystify popular topics in the world of AI today, such as self-driving cars and robotics. Listen as Kaustubh and Jason discuss some common uses of Machine Learning in our daily lives, be it Netflix recommendations or iRobot's automated vacuums.

Hype vs. Reality transcript

Jason Colby (00:01):

We all know the power of hype. It's that feeling you get when a new iPhone is announced, the feeling you get after watching a blockbuster trailer or seeing player trades in the off-season. It's a heightened sense of excitement, or sometimes fear, of what is possible when we let our imaginations run wild. But hype is not always grounded in reality. We often mistake what is possible today with what could be possible tomorrow. In this connected age of social media, hype has been supercharged. However, we all know that anyone who's hyping something is often selling something. And no technology is more hyped than AI. It's in everything. It's in our phones, our watches, our cars, our vacuums, our hospitals, and our children's toys. But how much of this hype is deserved? And how much of it is not quite there yet? Today, on Smarter than AI, we'll dive into the latest technology trends and separate AI hype from AI reality right now.

Kaustubh Kapoor (01:17):

Welcome to Smarter than AI. Today is going to be a fun show. We're going to deep-dive into cutting-edge AI stories from the media, and we'll figure out which ones are real and which ones aren't quite there yet.

Jason Colby (01:30):

Hi, my name is Jason. I'm a digital marketer and content creator. I'm also the co-host of Smarter than AI. I'm joined by my colleague Kaustubh. Hey, Kaustubh, why don't you tell us a little bit more about yourself?

Kaustubh Kapoor (01:41):

All right, awesome. Hi, everyone. I'm Kaustubh, and I'm one of the co-hosts of Smarter than AI. I work at MNP as an AI engineer. And I thought it'd be great for us to have a show where we break down all the AI concepts, the hype and the reality, for everyone listening today. You could say I'm a fortune teller, but with math.

Jason Colby (02:03):

Nice. Okay, so before we start breaking it all down, in the spirit of completeness, Kaustubh, as an expert, could you define AI in terms that someone like me, who isn't really an expert, can understand?

Kaustubh Kapoor (02:18):

All right, awesome. Absolutely. Okay, so when we talk about AI, the one term that comes to mind is intelligent agent. Let's break this phrase down into its two components, intelligent and agent. Intelligent, we all know what that means: it's smart, perceptive, and more that just comes to mind naturally. An agent is someone or something that can help you. Now, in the case of an AI, we deem it an intelligent agent if it acts rationally and autonomously. Rationally because we want our AI to be smart and to act the way a human would in a given situation. And autonomously because we want it to function with little or no help from a human. It needs to be self-learning and able to act on its own, with little to no support.

Jason Colby (03:16):

So Kaustubh, I've heard a lot about the Turing test. Now, how does that relate back to AI?

Kaustubh Kapoor (03:21):

Oh, quite interesting. Okay, so Alan Turing was a visionary, considered an AI god in his time. He was a mathematician in the early 1900s. And he defined a test to check whether a technology can actually be classified as an AI or not, because that's the main thing, right? What the test is, it's a simple imitation game. You have three parties: party A, party B, and then an interrogator, C. Now, the idea of the game is that A needs to help the interrogator in any way possible, either by telling the truth or by telling a lie; that's their task. Party B needs to deceive the interrogator, by lying, by telling the truth, by getting them off track. And the interrogator needs to correctly identify which party is which. So, talking to somebody, the interrogator must be able to tell whether this is party A or party B. Now, that's the imitation game.
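The imitation game Kaustubh describes can be sketched in a few lines of code. This is a toy simulation, not a real Turing test: the parties' canned answer pools and the interrogator functions are invented placeholders, and the score is simply the fraction of rounds the interrogator identifies correctly.

```python
import random

def run_imitation_game(interrogator, rounds=1000, seed=0):
    """Score how often an interrogator correctly identifies party A vs. B.

    `interrogator` is any function that takes an answer string and guesses
    "A" or "B". The answer pools below are illustrative placeholders:
    A tries to help the interrogator, B tries to deceive by imitating A.
    """
    rng = random.Random(seed)
    answers = {
        "A": ["I am party A, truly", "ask me anything, I will help"],
        "B": ["I am party A, truly", "trust me, I am here to help"],
    }
    correct = 0
    for _ in range(rounds):
        party = rng.choice(["A", "B"])       # secretly pick who is answering
        answer = rng.choice(answers[party])  # that party produces an answer
        if interrogator(answer) == party:    # did the interrogator catch on?
            correct += 1
    return correct / rounds

# An interrogator that guesses blindly lands near 50%: the parties'
# answers overlap, which is exactly what makes the game hard.
score = run_imitation_game(lambda answer: random.choice(["A", "B"]))
```

In Turing's framing, a machine playing one of the parties "passes" when interrogators do no better against it than they do when both parties are human.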

Jason Colby (04:22):

Where I'm struggling is what happens when a machine plays the imitation game, because they're supposed to be unthinking and unfeeling, right? We want to make sure that the machine sticks to the facts. How does that work?

Kaustubh Kapoor (04:35):

Absolutely. Okay, so the idea is that passing the Turing test requires solving some of the major AI problems: natural language processing, natural language comprehension, reasoning, planning, the so-called AI-completeness problem. Now, will the AI be able to differentiate between party A and party B the same number of times or more than a human interrogator can? If that can happen, you can surely ask: can machines think? And if that happens, then we know that the technology can be correctly labeled as an AI.

Jason Colby (05:14):

Okay, that makes sense. So I guess my next question is, now that we’ve sort of defined what that is, I know that AI is being used everywhere nowadays. It’s being used in my Netflix for crying out loud. Let’s talk about that. Let’s talk about where AI is being used today. Where is it used in our daily routines and what can it do?

Kaustubh Kapoor (05:39):

All right, so you brought up what is, in my opinion, the best example, the most far-reaching and most interesting one: Netflix. This is something I know a bit about: segmentation, targeting, optimization, right? Those are questions I understand. But you might be wondering: how do they identify so many segments? How is it useful to have so many segments? Does it improve conversion? Does it improve retention? Do people really use these recommendations? That's what I'll try to answer today. So Netflix began its recommendation engine in the early 2000s. It used to use a 90-second video preview to help you select which TV show you were going to go with, and which service you were going to use. That is when it all started, with Netflix being so obsessed with recommendations to hook users. Now, I'm just reading this off my notes: Netflix's personalized recommendations bring in about $1 billion a year in value from customer retention. The majority of Netflix users consider the recommendations, with 80% of Netflix views coming from service recommendations. Now, that is insane. The sheer dollar amount and the percentage of people who actually use Netflix recommendations is quite insane. But how does that happen?

Jason Colby (07:11):

Yeah. How does that happen?

Kaustubh Kapoor (07:14):

Netflix has set up 1,300 recommendation clusters based on viewing preferences, and it segments its users into about 2,000 taste groups. The taste group a user falls into dictates the recommendations they see.

Jason Colby (07:32):

So what exactly is a taste group?

Kaustubh Kapoor (07:34):

Think of it as a bucket. Say you're a sports fan, right? I like soccer. So if you were to put me in a bucket by which sport I like, I'm going in the soccer bucket, right? The same way, when you think about Netflix shows, you can think of it as simple: my favorite genre, say, is docuseries. So if you were to put me in a simple bucket, you would put me in the docuseries bucket. But when it comes to Netflix, it's not that simple. The 1,300 recommendation clusters are trying to do a fuzzy matching of what exactly you like, right? Because it's true: one day, on a Sunday afternoon, I might be sitting there bored out of my mind watching a drama, because you know it's that type of Sunday. I just want my ice cream and my drama. But most times, like I said, I watch docuseries. So it's not quite as simple as liking one sport over another.

Kaustubh Kapoor (08:28):

And the way it does it is, it'll capture certain things about what a user is doing, right? You might think of the most common things, like what genres someone likes or watches most, the ratings of certain shows, viewing history, et cetera. But the most interesting thing is, they look at the duration of what you watch. For instance, you could like TV shows, but you could like sitcoms that are 20 minutes, or you could like drama shows that are about an hour. They look at which device you're on. Are you watching on a smaller device, such as a phone? Because people tend to watch shorter shows on phones, since the experience is different, and they tend to watch fewer movies on phones. They look at the time of day you're watching something. Are you somebody who likes to put on something for 20 minutes at night, or are you somebody who likes to watch a lot of TV in the afternoons? Maybe that's something you do to decompress. So that's how Netflix works. I'm not going to go into a crazy amount of detail on what causal modeling or ensemble modeling or bandits are on the first episode, but that's the idea.
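A minimal sketch of how taste-group assignment could work, assuming a nearest-centroid approach: the group names, the three signals, and all the numbers below are invented for illustration and are not Netflix's actual model.

```python
import math

# Hypothetical taste-group centroids over three of the signals mentioned:
# (share of viewing that is docuseries, average episode length in minutes,
# share of viewing done on a phone). Names and numbers are invented.
TASTE_GROUPS = {
    "docuseries-fans": (0.8, 50.0, 0.1),
    "sitcom-snackers": (0.1, 22.0, 0.7),
    "drama-bingers": (0.1, 55.0, 0.2),
}

def assign_taste_group(user_vector):
    """Return the taste group whose centroid is closest to the user."""
    return min(TASTE_GROUPS, key=lambda g: math.dist(TASTE_GROUPS[g], user_vector))

# A viewer who mostly watches ~50-minute docuseries, rarely on a phone:
group = assign_taste_group((0.75, 48.0, 0.15))
```

In practice the features would be normalized so that minutes don't dominate the distance, and a production system would use far richer signals, but the idea is the same: represent a viewer as a vector of behaviors and match it to the closest cluster.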

Jason Colby (09:38):

Oh, now you’re just teasing us. Okay, so what about everyday technologies? And I’m talking about the Roomba here. You know that little robotic vacuum. How does it know where to clean and what to clean?

Kaustubh Kapoor (09:55):

Yeah, cool. Okay, so we're jumping from Netflix to robots. The Roomba has a system called the Aware system, which is made up of multiple sensors that pick up environmental data: what's around it, how it's moving, that sort of data, right? It's built on a microprocessor that alters the Roomba's actions accordingly. The three main sensors it has are a wall sensor, a cliff sensor, and an object sensor. Quite simple. If the Roomba is near a staircase, the cliff sensor, which is at the bottom of the Roomba, will sense the high drop and tell the robot to turn the other way. If it's near an object that it might hit and damage its body on, it'll go the other way. Okay. The same with the wall sensor, because if you're wondering what the difference is between an object and a wall, a wall is concrete and much heavier. So when it detects a wall, it tries to stay away and minimize the damage, and minimize contact with the wall as well.
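The sensor logic Kaustubh walks through can be sketched as a simple priority rule. This is a toy sketch, not iRobot's actual firmware, and the action names are made up.

```python
def roomba_step(cliff_detected, obstacle_ahead, wall_ahead):
    """Choose the next action from the three sensor readings described above.

    Toy priority rule (action names are invented): the cliff sensor wins
    because a fall is the worst outcome, then obstacles, then walls.
    """
    if cliff_detected:
        return "turn-away"          # high drop sensed under the bumper
    if obstacle_ahead:
        return "turn-away"          # avoid hitting and damaging the body
    if wall_ahead:
        return "slow-and-hug-wall"  # minimize contact with the wall
    return "forward"                # clear path: keep cleaning

action = roomba_step(cliff_detected=False, obstacle_ahead=False, wall_ahead=True)
```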

Jason Colby (11:03):

Right, so it slows down.

Kaustubh Kapoor (11:06):

Exactly. And that's how, essentially, a Roomba works. But the first thing the Roomba does when you press clean is calculate the room size.

Jason Colby (11:18):

So yeah, that's what I really want to get into. We're talking now about the AI part of the Roomba, right? Like, how can it basically think about the size of the room and know what to do?

Kaustubh Kapoor (11:29):

Yeah. Okay, so this is the AI part of the Roomba, and it's a little more complex. When it gets into a room, the Roomba, given its environmental sensors like I mentioned before, will calculate the size of the room. It'll make sure it has its boundaries well set up. And then, through those pictures, and this is another part of AI that we'll talk about in the future, called a CNN, it'll try to identify objects like couches, tables, kitchen counters, right? Then it will log these objects so it can stay away from them as much as possible, and create a sort of mapping of where to clean around the house. Now, the other cool thing AI has added is that once the CNN has worked its magic and marked out certain zones in the house to avoid based on objects, you can also mark down clean zones. Say you're eating at a dining table and you have kids, so it spills a lot, or you have a dog that sheds a lot near where it sleeps. You can mark those as clean zones so that the Roomba will do maybe two or three laps of that zone. So it's extra, extra clean.
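The zone mapping described above can be sketched as a grid of labeled cells, where cells logged from detected objects become no-go cells and user-marked clean zones get extra laps. The labels and pass counts here are illustrative assumptions, not iRobot's actual map format.

```python
def plan_passes(grid):
    """Turn a labeled room grid into a per-cell cleaning plan.

    All labels are illustrative: 'open' cells get 1 pass, user-marked
    'clean' zones get 3 passes (the extra laps), and 'avoid' cells,
    logged from detected objects, get 0 passes.
    """
    passes = {"open": 1, "clean": 3, "avoid": 0}
    return [[passes[cell] for cell in row] for row in grid]

room = [
    ["open", "open", "avoid"],   # an object (say a couch) detected top-right
    ["clean", "clean", "open"],  # dining area marked as a clean zone
]
plan = plan_passes(room)
```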

Jason Colby (12:44):

That makes sense, and thank God for that. So I've heard a lot about self-driving cars lately. They're all driven by AI. I think the latest one, maybe, was Tesla. They're definitely ramping up the autopilot improvements to their AI-guided vehicles. It's kind of a selling feature for them. And you can't mention Tesla without talking about Elon Musk, right? He is the CEO of Tesla, and he's probably the one person who's hyping AI the most. So Kaustubh, generally speaking, how will AI change the future, and what is in development right now that looks promising?

Kaustubh Kapoor (13:26):

All right, so obviously Tesla's system has been in development for a long, long time. And we've seen some tragic events, like the 2012 crash where a Tesla crashed into a boulder coming off a highway, I think it was. And we know that there's a long way to go for Tesla. But Tesla's AI is built on a deep neural network. And it uses cameras, ultrasonic sensors, and radars to see and sense the environment around the car. It's kind of like a Roomba, but much more complex, right? These sensors give the car an idea of what to do exactly on the road: whether it should be going the speed limit, whether it should be going below the speed limit because it's a school zone, or whether it's a highway and you're allowed to go ten over because that's what the rest of the traffic is doing. It's all built on this very complex deep neural network. And I'm sure we'll go through the complexities of how it actually works in a future episode, where we talk a little bit more about the workings of machine learning. But you can think of it as one of the most complex AIs that is actually in production and available to humans right now.

Jason Colby (14:46):

Yeah, so it's using GPS data that it's constantly improving over time, right? Which maps out the roads, the road speeds, and what's around those areas. And it's using sensors on the car to dictate how close or how far away it is from other vehicles, stuff like that. Is that sort of how it all works?

Kaustubh Kapoor (15:09):

Yeah, you have it almost nailed down there. The only thing I would add is that, when you think of how complex, say, a Roomba is, this is maybe a million times more complex, in that there's not just one neural net or deep neural network; there are multiples and multiples of neural nets trying to predict what they should do in the next phase of driving. And all of that is very well interconnected as well.

Jason Colby (15:42):

Right on. Okay, I'm going to switch gears, no pun intended. We're going to talk about 2020 and 2021. They've been challenging, to say the least. I mean, COVID's definitely changed the landscape, not only in business in general, but in pretty much our everyday lives. The pandemic has undoubtedly heightened the need for AI solutions, I would say. What applications exist that maybe aren't being hyped up enough?

Kaustubh Kapoor (16:10):

That's actually a good point. And I think the first thing that comes to mind, based on the years you've just mentioned, the pandemic, and how long it's gone on, has to be AI in healthcare. One of the exciting developments that came out of the pandemic was a COVID cough tracker. It's a tool that's being built by a bunch of people; I'm aware of one of the people working on it in India as well. How it works is that they asked people in different situations, recovered people, people going through different stages of the disease, and people who are absolutely healthy, to cough into their phones. They then feed those recordings into an AI algorithm that breaks down the sounds, or soundwaves, and tries to make sure it classifies each cough correctly. And it's interesting, because it's very hard for you to tell whether it's a COVID cough or not, and it's your own body. But that's what this COVID cough tracker is trying to do. So you wouldn't have to take a test; essentially, this tracker would tell you whether you're infected with the disease or not. Although, I don't know if it's been peer reviewed or not; I can't speak to that part exactly. It's just an interesting thing that people came up with, and it shows how far AI capabilities can be stretched. And this is just one example. One of the other things that really excites me is similar to this, in soundwaves: the tracking of heartbeats to predict heart abnormalities. In the same way, heartbeats create different sounds, and we know what a healthy person's heart sounds like versus an unhealthy person's. All this data was compiled, and the algorithm now tries to listen to your heartbeat and tell whether you have any heart abnormalities or not. And this is important because it saves you a trip to the doctor if you don't want one, and it also helps you rush to the hospital when things are just starting to develop.

Jason Colby (18:30):

So this is done through your phone?

Kaustubh Kapoor (18:33):

This one is done through a phone recording, I believe; don't quote me on that, though. But I was just reading that this one is in development, at different stages of development and usage around the world. Yeah, so the third interesting AI-in-healthcare application that I want to talk about is an application called Buoy Health. It's an AI-based symptom and cure checker. And it works in a really interesting way. What you do is go into the application and type out everything you're feeling. You might have a cough, you might not be feeling well, you might have arm pain, something like that. And it'll go through all the past cases and, using an AI algorithm, mention maybe three or five potential things that you could have. Now, you might think: why wouldn't you just tell your doctor, and they should know? But the thing is that this database is built from multiple countries and multiple languages, all translated into one, and it can map through diseases that occurred as far back as, say, the early 1900s or late 1800s. And whatever doctor you go to would potentially not have studied all the diseases from all time, from everywhere. So if you have something rare that's not that common in your country, you now have access to a resource that can potentially catch the disease early and save millions of lives and more. So that's something I found really interesting. I think Harvard Medical School is one of the hospitals and healthcare providers that uses Buoy's AI. And I think that is just one of the coolest things that has happened to us in the AI field, because we're using AI for something more than just cool cars and whatnot.

Jason Colby (20:32):

Yeah, no kidding. I think that's important to mention, because of everything that's gone on and all the surprises we've been through over these last few years. Let's look at the top three trends in AI. First, let's talk about the idea behind a hybrid workforce. What exactly is that?

Kaustubh Kapoor (20:54):

It's like the words sort of suggest. AI and humans were pitted against each other from the beginning, this idea that AIs are just going to take jobs and replace jobs and do all of these things. And broadly, that's partly true, in the sense that you'll see robots doing most of the machine work now. But that doesn't mean that humans aren't needed at a manufacturing plant. It's just that they have different jobs; they make sure the quality assurance is better. And sure, some jobs will become extinct because of technology, but there are other jobs that are going to be created because of the technology, because humans aren't going anywhere in the near future. So that's one of the things that is becoming more and more common. You see technology doing more and more things, but humans are not being wiped away or removed. It's just that humans now have more time to do more meaningful things than maybe something as simple as working on an assembly line. And that's something that is important to register, because your organization might not get the buy-in today, but if it's not on the journey of gaining that buy-in, it will be left behind.

Jason Colby (22:14):

That's kind of a scary thought. But okay, let's move on to the second trend. I want to talk about the retail business and how ecommerce has changed throughout the pandemic.

Kaustubh Kapoor (22:26):

Yeah, ecommerce has changed a lot throughout the pandemic. I, at least for myself, order groceries online now. Not that I haven't been to the store, but my regular grocery trips don't happen at the store anymore; they happen on my phone. I thought that was super interesting. One of the cool things we've seen is that in January 2018, Amazon opened its first high-tech grocery store that does not require traditional checkouts; the checkouts are in the grocery carts, which is super cool. One of the other things happening in this industry is that in February 2020, the largest retailer, Walmart, introduced thousands of robots to its workforce. These robots help manage inventories, scrub floors, and keep the product shelves in order. And we're just going to keep seeing more and more of this happen in the retail industry, where jobs like working a checkout register, cleaning the floors, or stocking the shelves are going to be done more and more by these robots. That's just a trend that's coming, and it's gaining momentum. And it's important that retailers that want to be ahead of the curve start recognizing this. Just to pull up a fact about why it's important: from 2015 to 2018, 81 major retail companies filed for bankruptcy. And that's not all because of technology, but technology definitely has a part to play in it.

Jason Colby (23:56):

Right. Yeah. That's crazy. So what about AI chatbots, then? That's the third most popular trend. How have they been used?

Kaustubh Kapoor (24:11):

So chatbots are just computer programs that conduct conversations through text or audio. There are many ways a chatbot can be used, in processing payments and in marketing, but customer service is really where they excel. When a company develops and implements a chatbot, it is usually seeking to automate at least some portion of its customer interaction. For example, up to 80% of customer inquiries regarding products are repetitive questions rather than unique communication. And for this reason alone, creating a chatbot for a company just makes sense, because you'll get those regular FAQ questions that are just framed in a different manner than the FAQ, so a person might not be exactly sure where to look. It's just that people talk differently in different cultures and different languages, but the question might be the same. And that's one of the major things a chatbot can be used for: answering your main FAQs. In 2020, though, chatbots have become better at matching human conversation, allowing a large portion of consumer interactions to happen through chatbots. And one of the main trends coming to chatbots is language understanding: understanding in real time whether a person talking to a chatbot is happy or sad, or isn't enjoying the conversation and wants human interaction. So there's a big trend, especially in customer service, where conversations occur first with an AI, and if the AI realizes that this person sounds a little grumpy, they get transferred over to a human agent. Most of the conversations work out; the ones that don't get transferred to a human to answer the hard questions, if that makes sense.
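The FAQ-matching idea can be sketched with simple word overlap. A real chatbot would use trained language models to handle the different phrasings Kaustubh mentions; Jaccard similarity over words, and the FAQ entries below, are just illustrative stand-ins for the same idea.

```python
def match_faq(question, faq):
    """Return the canned answer for the FAQ entry with the most word overlap.

    Overlap is Jaccard similarity over lowercase words; returns None when
    nothing overlaps at all (a real bot might escalate to a human here).
    """
    q_words = set(question.lower().split())

    def similarity(entry):
        e_words = set(entry.lower().split())
        return len(q_words & e_words) / len(q_words | e_words)

    best = max(faq, key=similarity)
    return faq[best] if similarity(best) > 0 else None

# Hypothetical FAQ entries for illustration.
FAQ = {
    "what are your opening hours": "We are open 9am-5pm, Monday to Friday.",
    "how do i reset my password": "Use the 'forgot password' link to reset it.",
}
answer = match_faq("what are the opening hours", FAQ)
```

Different phrasings of the same question still share enough words to land on the right entry, which is the point: the question is the same even when people say it differently.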

Jason Colby (26:05):

Yeah, it makes perfect sense. You're talking about not only typing something into a chat; I think you're also talking about a person picking up the phone and having a conversation. It's just amazing that they're able to do that much of the heavy lifting now without human interaction. Crazy.

Jason Colby (26:27):

We all know that hype is a completely social phenomenon. And because AI and machine learning are so promising, and because they have the ability to fundamentally transform areas of our lives that we know all too well, people are rightfully excited by, and maybe a little uneasy with, the possibilities. So for now, let's just say it's okay to believe the hype. At least a little. It's a guarantee that AI and machine learning will impact our lives, just not necessarily in the ways that we might expect. Thank you for listening to Smarter than AI. Each week we will be taking a deeper dive into the positive and negative aspects of artificial intelligence and machine learning, and the possibilities they hold. Join us next week as we talk about the evolution of AI: how it all began and what its capabilities are today.

And we would love to hear from you. If you have any feedback or questions for our team, or topics that you would like us to cover, please let us know. Drop us a line on Twitter, Facebook, or LinkedIn, or visit us at MNPdigital.ca.