The Evolution of AI

Can machines think? For over half a century, science fiction has embraced the tropes and embellished the more colorful aspects of intelligent machines. In doing so, it also brought the concept of non-deterministic, self-teaching algorithms into the mainstream. Sometimes these machines are characterized as benevolent; other times they are portrayed as a force hell-bent on the destruction of humanity. In this episode, we take a dive into the actual history of Artificial Intelligence and Machine Learning, separating the facts from the fiction.

The Evolution of AI transcript

Jason Colby (00:01):
Can machines think? This is a question that began as pure science fiction. It was popularized in books and in movies. It was the talk of the town for an entire generation of scientists, philosophers and mathematicians. But let's take a deeper dive into the evolution of artificial intelligence, here on Smarter than AI, right now.

Kaustubh Kapoor (00:37):
For over half a century, science fiction has embraced the more colorful aspects of intelligent machines and brought non-deterministic, self-teaching algorithms to life. Sometimes they are characterized as benevolent and helpful, but other times they are characterized as hell-bent on destroying mankind.

Jason Colby (00:56):
But where do we stand today in concrete terms? What's real? What's not? I am here with my co-host Kaustubh Kapoor to discuss the history of artificial intelligence and machine learning. So Kaustubh, I have a question for you. When I think of artificial intelligence, this is what I think of: sci-fi characters like Data from Star Trek, Skynet from Terminator, the Terminator himself, really, or Ultron. And yeah, I know I'm a little bit of a nerd, but AI is more than just robots, right? Like, what is it in the most basic terms? How do we identify something as an AI?

Kaustubh Kapoor (01:33):
Ah, that's a good question. So AI is basically the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. The term can also be applied to machines that exhibit human-like traits and can learn and problem-solve as well. The idea of AI is to be able to rationalize and think in a very human-like way. A subset of AI is actually machine learning, which is what we've based almost this entire series on. It's the concept that computer programs can automatically learn and adapt to new situations.
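
To make that "automatically learn and adapt" idea concrete, here is a minimal Python sketch, not from the episode: the program is never told the rule relating inputs to outputs, it estimates the rule from example pairs (all of which are invented for illustration) and re-estimates it when new data arrives.

```python
# A toy illustration of "learning" versus explicit programming:
# nobody writes the rule y = 2x into the code; the program
# estimates it from example pairs and then adapts when new data arrives.

examples = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]  # (input, observed output)

def fit_slope(pairs):
    """Least-squares slope through the origin: sum(x*y) / sum(x*x)."""
    sxy = sum(x * y for x, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    return sxy / sxx

slope = fit_slope(examples)
print(f"learned rule: y ~= {slope:.2f} * x")        # ~2.0, learned, not hard-coded
print(f"prediction for x=10: {slope * 10:.1f}")

# "Adapting to new situations": refit when a new example comes in.
examples.append((5, 10.1))
print(f"updated rule: y ~= {fit_slope(examples):.2f} * x")
```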

Jason Colby (02:15):
Okay, so it’s pretty much anything that gives us a solution without us really having to figure it out for ourselves.

Kaustubh Kapoor (02:21):
Kind of, yeah, and that concept actually comes through deep learning. I know this is just the second episode, so we're not going to go into too much detail. But think of deep learning as a set of techniques where the machine can automatically learn things, like the structure of a dataset or what an image looks like. Remember, AI is based on the principle that machines can actually mimic the patterns of human thinking and rationalize the way a human brain would, thereby making correct decisions. And I know there's a myth going around which says that humans only use about 10% of their brain, but talk to any neuroscientist, neurologist or brain surgeon and they'll tell you that's absolutely not true; most of the human brain is always active and always working. So the goal of AI, like I mentioned, is to mimic human cognitive activity. Researchers and developers in the field are making rapid strides. It's believed among innovators that we can develop systems that exceed even the way a human thinks, which will be crazy when that actually happens; it's not quite there yet. But think of the applications, because a supercomputer can crunch numbers like no human can. And when these types of technologies and ideas can be applied and actually scaled, it will have a huge impact on technology and on human life as well.

Jason Colby (04:05):
Right, right. Yeah. Totally. So I mean, I'm trying to remember the name of the movie, but if I'm not mistaken, it was Alan Turing who came up with the foundation of AI in the '50s, right? Basically with the invention of the computer, kind of around that time. That's why we were talking about the Turing test in our last episode. If I remember correctly, he laid the groundwork for something called the Logic Theorist. I did a little bit of research; I tried to prepare myself for this. The Logic Theorist, I think that's the first AI program that was ever developed. But what could it actually do?

Kaustubh Kapoor (04:49):
So there are several concepts that go into logic theory, and obviously logic theorists have been developing this field since the 1940s and 1950s, so there's a lot to go into. But let's look at the three main things that actually influenced AI and the way it is today. The first thing is reasoning as search. What these very smart people did was explore the idea of a search tree, where, if you're familiar with statistics, the idea of a hypothesis comes into play, which is nothing but an idea, an intuitive idea that you'd like to eventually prove. So they looked at a search tree where the root of the tree, and like any other tree it starts at the root, and roots need to be strong, is the hypothesis. Then you have a bunch of different branches, or in math we call them children of the root, and ultimately through those you want to reach the proposition, as it were. The goal, eventually, is the idea of a proof. That is the baseline idea of reasoning as search: you take this tree, start off with the hypothesis, and through multiple logical steps you get to a proposition, which is the goal of proving your hypothesis.
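
As a rough illustration of reasoning as search, here is a small Python sketch assuming a made-up derivation tree: the root is the hypothesis, children are intermediate deductions, and the search stops when it reaches the proposition, returning the chain of steps as the "proof". The node names and tree shape are invented for illustration, not taken from the Logic Theorist itself.

```python
# Sketch of "reasoning as search": start from a hypothesis (the root),
# expand children (intermediate deductions), and stop when a node passes
# the goal test (the proposition is reached). The tree below is invented.

from collections import deque

tree = {                      # hypothetical node -> deductions reachable in one step
    "hypothesis": ["lemma A", "lemma B"],
    "lemma A": ["dead end"],
    "lemma B": ["lemma C"],
    "lemma C": ["proposition"],
}

def prove(root, goal):
    """Breadth-first search from the root, returning the chain of reasoning."""
    queue = deque([[root]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path        # the "proof": hypothesis -> ... -> proposition
        for child in tree.get(node, []):
            queue.append(path + [child])
    return None

print(prove("hypothesis", "proposition"))
# ['hypothesis', 'lemma B', 'lemma C', 'proposition']
```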

Kaustubh Kapoor (06:19):
The second thing is the idea of heuristics, and we've probably all heard this heuristics term here and there. But this is nothing but ad hoc logic. In essence, if you think about philosophers, they take an idea, think a lot about it, try to rationalize it and come up with theories behind it. Same idea with a hypothesis: you could think endlessly about it, which makes your tree grow almost exponentially, and it could keep growing forever. And that's not good; we want to reach a proposition and then stop, right? So heuristics is basically the ad hoc logic we apply so that our tree doesn't grow exponentially and we actually do get to our end goal, our proposition. And the third thing is list processing. Now, this concept set up the foundation, so it can't really be called outdated, and the idea here is that through list processing we got our first AI programming language, called IPL. And then there's McCarthy's Lisp, which was also developed from this list-processing idea.

Jason Colby (07:34):
I think I understood all of that.

Kaustubh Kapoor (07:36):
Nice. It's almost a complicated concept because there's so much that goes into it. That's why, on this second episode, I don't want to go into too much detail. Remember, we tried to keep this one kind of short, just giving people an idea of where all of this started, where we're at, and what the potential is. But I think people who are listening can take logic theory, or this specific idea within logic theory, the search, for example, which is the most important one: you start off with a hypothesis, start off with an idea in your head, and obviously you're going to go through logic to confirm whether the idea is a good one or not. Right. Say you were booking a vacation trip to LA, for example. That would be your hypothesis. Then you apply logic: how do I get to LA? So one of the branches, one of the children, could be a car, another could be a flight, another could be walking. And obviously through logic you understand that walking is not going to prove your hypothesis; you're never going to get to LA in a decent amount of time. So you look at driving or flying, and based on where you are, you decide, and ultimately you get to the goal, which is getting to LA.
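
Here is a small Python sketch of that LA example with a heuristic layered on top; the travel-time estimates and the cutoff are invented purely to illustrate how ad hoc logic prunes branches so the tree doesn't keep growing.

```python
# Sketch of the LA example: each child of the "get to LA" hypothesis gets a
# rough travel-time estimate (the heuristic), and branches that can never
# reach the goal in reasonable time are pruned instead of being explored
# forever. The numbers are invented for illustration.

options = {            # child branch -> rough estimated hours (the heuristic)
    "walk":  700,
    "drive":  20,
    "fly":     5,
}

MAX_REASONABLE_HOURS = 48   # ad hoc cutoff so the tree doesn't grow without bound

viable = {mode: hours for mode, hours in options.items()
          if hours <= MAX_REASONABLE_HOURS}            # heuristic pruning
best = min(viable, key=viable.get)                     # most promising branch

print(f"pruned: {sorted(set(options) - set(viable))}")   # ['walk']
print(f"chosen branch: {best} (~{viable[best]} hours)")  # fly (~5 hours)
```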

Jason Colby (08:51):
Yeah, it’s essentially deductive reasoning, really, right?

Kaustubh Kapoor (08:54):
Yeah.

Jason Colby (08:55):
So okay, this leads kind of into my next question, which is basically: what are the limitations that we're talking about? Why were things so limited back then, given that they had this foundation, they had something that worked? I think what it really boiled down to was computational power, right? This was especially true in the '50s, '60s, '70s and '80s. And if I remember correctly, just from doing my research, this is when the concepts of AI and everything that had been developed got basically shelved for a little bit, and that ties into something called Moore's Law, right?

Kaustubh Kapoor (09:42):
Yep.

Jason Colby (09:43):
So what is Moore’s Law?

Kaustubh Kapoor (09:44):
Okay, so Moore's Law is, his observation was, that the number of transistors, or you could look at it as computing power, whatever you want, doubles every two years, and the cost of each transistor is actually halved every two years. So his ultimate argument here was an economic one. With more transistors in electronic devices, his theory, his proposition, sorry, his hypothesis, was that the more transistors there are, the less it costs for each transistor to exist. So his hypothesis was that the number of transistors and the cost of each of them are actually inversely proportional, which, if you think about it, is counterintuitive to a normal person, right? The more cars you own, the more your insurance goes up; your gas goes up, your maintenance fees go up, everything goes up. So how does that make sense? The more of something you own, the more it should cost, right? But if you think about it in a real estate scenario, the more homes you own, the more renters you have, and the cheaper your next home gets, because you ultimately start making money and buying homes off of that money. So that was his brainchild. And the R&D departments were super good at understanding that and working towards that cause.
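
A quick back-of-the-envelope Python sketch of the relationship Kaustubh describes, with made-up starting figures: the transistor count doubles every two years while the per-transistor cost halves, so the two quantities move inversely.

```python
# Back-of-the-envelope sketch of the Moore's Law relationship described above.
# Starting figures are hypothetical, chosen only to show the trend.

transistors = 2_000          # hypothetical count in year 0
cost_per_transistor = 1.0    # hypothetical dollars per transistor in year 0

for year in range(0, 11, 2):
    print(f"year {year:2d}: {transistors:>10,d} transistors "
          f"at ${cost_per_transistor:.4f} each")
    transistors *= 2              # doubles every two years
    cost_per_transistor /= 2      # halves every two years

# Ten years on: ~32x the transistors at ~1/32 of the per-unit cost,
# i.e. count and unit cost are inversely proportional.
```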

Jason Colby (11:19):
Fundamentally, if I'm reading this right, it's basically the law of supply and demand at the end of the day: the more people want transistors, the more will be made, and the more power will be delivered. It's like the evolution of the car. You start off with a four-cylinder car, maybe one made in the '30s or something along those lines, and it got to a point where you had the V12; there was never enough power for people. They just wanted to go faster. Everybody wants more.

Kaustubh Kapoor (11:49):
Exactly. And like you mentioned, Moore saw engineering advances as a possibility if the actual hardware existed, and that's what he focused on: transistors, computing power, all of that. The more of it that exists now, the less it costs in the future. And if you think in terms of AI and big data, you could think of cloud computing in similar terms, right? When cloud computing started, all these big cloud companies took a huge loss because people weren't ready; companies weren't ready to move their data into the cloud. They were scared: what is a cloud? Is it really in the sky? What are all these concepts? The more mainstream it got, and the more people started to understand that there are huge benefits to this, the less it cost Microsoft, or AWS, or Google to actually own these things. And it was just the smartest idea ever: store everyone's data in these centres that they actually own, so people don't have to run that infrastructure themselves. It's a beautiful thing now. It's something that everyone wants to get into and use.

Jason Colby (13:01):
Yeah, exactly. And it's created a sort of centralization of data. But let's not get into that; let's keep talking about the evolution here. Because in my research, basically in the '70s and '80s, AI dipped; the fad that was so prevalent in the '50s and '60s kind of disappeared, right? And then, if I remember correctly, because I was a young kid at the time and it was fascinating to me, Garry Kasparov beat Deep Blue, I think it was an IBM computer. Or, sorry, Garry was beaten by Deep Blue. And that's when humanity sort of started to look at AI again, because they realized that an AI could beat the master at chess, a very strategic game, right? So that's when all of a sudden we saw a whole ton of advancement again. And if I remember correctly, there were programs, I think it was Dragon that developed speech recognition software, or something like that. Wasn't that crazy? And I mean, let's just talk about the sophistication of the Internet, really; that's what's fueling AI today if you ask me. But I'm no expert, mind you. That's what kind of led to the age of big data that we're in right now, right? So this is a long lead-up to the question, but bear with me: all this data that we're feeding into the Internet and everything we interact with, how is that changing the game for AI?

Kaustubh Kapoor (14:51):
That's a really good question. I'm surprised we didn't talk about it in the first episode, because it's kind of the crux of it all, right? So big data has five Vs. At this moment I can't remember all five, but the biggest three are volume, velocity and veracity, I believe. But anyway, big data, as the name suggests, is a lot of data that can be accessed and then analyzed in a very short amount of time; volume and velocity are the two big things that are part of big data. So why does this help AI? How does this help AI? I think the biggest thing is, if you look at statisticians and people in the data science world, and I can't remember if we've talked about data scientists versus machine learning engineers, I'm sure we will in the future if we haven't, but data scientists are essentially the people who develop all these algorithms and play around with different statistical concepts so that the algorithms can then be scaled. In the past, they've relied on a sample, a certain percentage of the actual data, because accessing that data was really hard, storing that data was really hard, and computing on that data was super hard. That was a big bottleneck. With the evolution of programs like Apache Hadoop, and the Apache suite altogether, it has become very, very easy to store this data and then very, very easy to access it. And we won't go into the concepts of OLAP and OLTP yet, or the concept of an RDBMS versus not. But when the idea of non-relational databases went mainstream, that's when Hadoop became a big thing and other big data providers became a big thing, and they gave statisticians and data scientists access to all the data.

Kaustubh Kapoor (16:53):
Now, thanks to the concept of data marts, or big data silos, or centres of excellence, people in this field can just access, play around with, and model all that data. It might take a couple of days for the model to run, but you will get a proper prediction, or a proper idea of your modeling, based on the entire dataset, not a small chunk of it, which is very, very key. And that's what big data has done. The other thing I wanted to mention here, like I was mentioning about cloud technology in the last question: storing data in the cloud is very, very cheap, and accessing data in the cloud is very, very cheap. So the idea came: well, we have all this data, it can be stored and accessed quickly, why don't we use all of it to actually do our analysis? Thus AI became one of the biggest things of the last decade, and it just keeps going up and up. And the last thing I'll mention on this question is the idea of streaming data. With the movement away from batch services to streaming, real-time data, you can imagine how amazing AI scientists and machine learning engineers must have felt, because now their predictions can actually be used right at this very moment. They don't have to wait until 4 p.m. for the data to get processed; you can use it right now. And that's one of the main ways big data has helped the community.
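
To illustrate the batch-versus-streaming distinction, here is a toy Python sketch; the scoring function is a stand-in for a real model and the events are invented, the point being only when predictions become available.

```python
# Toy contrast between batch and streaming scoring. The "model" below is a
# placeholder function and the events are made up; the point is only *when*
# a prediction becomes available, not how it is computed.

def score(event):                     # stand-in for a trained model
    return len(event["item"]) * 0.1

day_of_events = [{"item": "shoes"}, {"item": "laptop"}, {"item": "tea"}]

# Batch style: collect everything, then process once at a scheduled time.
batch_predictions = [score(e) for e in day_of_events]
print("batch run finished, all at once:", batch_predictions)

# Streaming style: score each event the moment it arrives.
def stream(events):
    for e in events:                  # pretend these trickle in over the day
        yield e

for event in stream(day_of_events):
    print(f"{event['item']!r} scored immediately: {score(event):.2f}")
```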

Jason Colby (18:22):
Yeah, so what are the types of big data that we're talking about here, that are being analyzed?

Kaustubh Kapoor (18:28):
It's every type of data you can think of. Streaming data would include pictures, audio, video, CSVs, Excel files, PDFs, everything you could think of. I don't think there's any type of data that cannot be analyzed at this moment. And that might be overstating it, but from what I've worked with, I've worked with almost all types of data, and it's very impressive to see how far we've come in the field of AI.

Jason Colby (18:55):
Yeah, that's crazy, because I remember video, ironically enough, was one of the last things, sort of on the tipping point of what could be recognized, and I know for a fact that it's now one of the things that can be analyzed quite easily.

Kaustubh Kapoor (19:12):
We worked on a video facial-recognition project where we would predict someone's age as they walked into a store, so that the ID process would become more and more seamless and it would lead to more customer retention. And that was real time. So somebody walked in and we would get a real-time prediction for them.

Jason Colby (19:30):
Yeah, I mean, here's another example. I just got a new Apple iPad Pro, and one of the things it does is, upon initialization, you have to stare at the camera and move your head around, right? And it does that so it can do facial recognition. So rather than punching in a PIN, and there's no thumbprint, it actually has to recognize your face in real 3D for it to unlock. It's insane, but very cool. I can't wait for my daughters to try and open that up. It's not happening. That's good. Okay, so I guess my last question, really, to round this all out: where do we go from here, now that we know the possibilities are pretty much endless? In theory, what we can use the data for and the computing power both seem to be endless as time goes on. So do you care to make any predictions for us? What do you think is going to happen in the future?

Kaustubh Kapoor (20:50):
So I'm actually going to refer to some predictions that I've read and that appeal to me. In the political field, there are a couple of predictions that are super interesting. For example, political deepfakes. This happened in India, it might have been another country, but I distinctly remember it happening in Indian elections, state or federal, where a deepfake video got famous online and people thought it was the real person talking. It wasn't intended for a bad purpose or anything, but it just made the process so much easier, because you record the person once, and then you can make that person say anything, and it's amazing, right? So one of the predictions is that deepfakes, especially in the political field, are going to become more and more common, more and more normal, and are going to be in North America very soon. And in response to that, Section 230 of the Communications Decency Act in the U.S. is going to be questioned a lot. It's already starting to be questioned quite a bit, because who takes the responsibility? Is it the social media company, or is it the actual politician? We don't know. So that's a point of contention. And I like that, because it makes things and keeps things interesting.

Kaustubh Kapoor (22:09):
The second one is also in the political field, and it relates to AI becoming an actual policy priority for most nations. Deepfakes, social media hacking, fake news that circulates based on what you like: all of that is in contention these days, and all of it is being talked about a lot. And now is the time for politicians to actually act on it and create rules and regulations so that these things can be controlled. And I think the third one I'll go for is the MLOps category. MLOps has been sort of on the fringes, because AI was just starting to become a thing and nobody was really thinking about scaling or anything. People were just happy to get predictions, because they could see into the future, and that was really cool. But with MLOps, people are moving away from science-fair projects; they're moving away from just creating something that sits and goes nowhere and doesn't produce real value. That's where MLOps comes in. It's basically DevOps, as used by developers, but for machine learning. And it has started to become a true skill that every machine learning engineer or data scientist needs to know. It's not enough to be a data scientist; you need to know how to put the software you created into production so that it can create value on an ongoing basis.

Jason Colby (23:29):
Yeah, that makes sense.

Kaustubh Kapoor (23:31):
Yeah, and just one last thing. At MNP, we have a great AI framework. So if any leaders are listening out there and you have any questions about it, we have a framework that ensures all of this and doesn't leave you with a science-fair project, but with things that can actually be put into production and save you money.

Jason Colby (23:50):
There you go. Shameless plug. I love it.

Kaustubh Kapoor (23:52):
Shameless plug. Absolutely. I was part of creating it so I might as well plug it.

Jason Colby (23:56):
Exactly, you got to. You got to.

Jason Colby (24:01):
Okay, so just to sum up what we're talking about: we know that AI now is way more than science fiction. We know that what began as a terrifying idea has become a tool that all of humanity relies on to make our lives easier and more fulfilling. And it has evolved in so many different ways. I mean, we're not even talking about the sky being the limit; it seems the ceiling is only set by the technology, and that's been erased. The amount of data that we have access to provides the foundation for an almost limitless future. So thank you for listening to Smarter than AI. Each week we'll be taking a deeper dive into the positive and negative aspects of artificial intelligence and the possibilities it holds. Join us next week as we discuss the ethical dilemmas around the use of artificial intelligence and machine learning.

And we would love to hear from you. If you have any feedback or questions for our team, or topics that you would like us to cover, please let us know. Drop us a line on Twitter, Facebook, or LinkedIn, or visit us at MNPdigital.ca.