Background
Intro (00:00:01):
Futurized goes beneath the trends to track the underlying forces of disruption in tech, policy, business models, social dynamics, and the environment. I’m your host, Trond Arne Undheim, futurist and author. In episode eight of the podcast, the topic is reality and hype in deep learning. Our guest is Otkrist Gupta, VP of data science at Lendbuzz and PhD in machine learning from MIT. Otkrist, how are you doing today?
- Futurized.co https://www.futurized.co/
Otkrist (00:00:32):
I’m doing good. How are you doing?
Trond Arne Undheim (00:00:35):
I’m doing great. It’s another day of podcasts.
Otkrist (00:00:38):
Yeah. Another, another day. Another podcast. Yeah.
Trond Arne Undheim (00:00:45):
How are things looking on your end? What do you see out your window?
Otkrist (00:00:50):
Out of my window? I don't have a window in this room, but I looked a second ago. It's very nice, actually. It's not a bad day in Boston: it's windy, it's kind of hot, it might rain. But, you know, it's the usual Boston weather. The weather in Boston always throws you a curve ball.
Camera culture at MIT and image recognition
Trond Arne Undheim (00:01:07):
So we have a very interesting topic today, Otkrist. We have known each other for a while, so I know your background. You're a VP of data science right now at Lendbuzz, which we'll get into, but you've been focused on deep learning in several different domains that we'll get into in a second. You did your PhD at the Media Lab, in the Camera Culture group.
- MIT Camera Culture https://www.media.mit.edu/groups/camera-culture/overview/
- Lendbuzz https://lendbuzz.com/
Otkrist (00:01:30):
Yeah, that's correct. I did my PhD at the MIT Media Lab. And before that, I also did my Master's there.
Trond Arne Undheim (00:01:37):
Exactly. And again, it was on different applications of deep learning, especially image recognition and things like that.
Otkrist (00:01:46):
Yeah, right. So my master's was more around optics and computer vision topics. My PhD was around this technology of looking around corners. This was a question we had asked ourselves: when you have a camera, you can look straight along the line of sight of the camera, of course, because you can just see everything there. But the question we had was: what if there is some sort of obstruction, and the light is being blocked, but it's bouncing around the scene? Can you use those bounces, those photons, to figure out what is not in the direct line of sight of the camera? It's a very interesting, very challenging question, almost a scary question, like a mad-science kind of question, one of those where you're like, there's no way this would work. And then...
Trond Arne Undheim (00:02:43):
How did you even come up with the question?
Otkrist (00:02:47):
So the question itself: I would say most of the credit goes to my advisor, Ramesh Raskar. He had a basic idea of how we could approach something like that. He was like: okay, what if you shine an ultrafast laser? And what does ultrafast mean? It's basically a very short laser pulse, a pulse lasting just femtoseconds, which is really, really short. And then you have a very fast camera to go with it, called a picosecond camera, which is also a super fast camera. We were really pushing the limits of the technology by using these things, and you have to synchronize them. The ultrafast laser acts like a flash, and the picosecond camera is your imager. You start seeing things in a very different way, because you're not just capturing the photons of light coming back; you're also getting the time it takes the light to go around the scene and come back to you.
- Ramesh Raskar https://www.media.mit.edu/people/raskar/overview/
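To get a feel for the scales involved, a quick back-of-the-envelope calculation helps. The snippet below is purely illustrative (the 500 ps delay is a made-up number, not one from the episode): it shows why femtosecond pulses and picosecond timing resolve light paths down to fractions of a millimeter.

```python
# Back-of-the-envelope: what do femtosecond pulses and picosecond
# time bins buy you in spatial resolution? (Illustrative only.)
C = 299_792_458.0  # speed of light, m/s

def distance_traveled(seconds: float) -> float:
    """Distance light covers in the given time."""
    return C * seconds

pulse = distance_traveled(1e-15)  # length of a 1-femtosecond pulse
gate = distance_traveled(1e-12)   # path covered in a 1-picosecond time bin

print(f"Light travels {pulse * 1e6:.2f} micrometers in 1 fs")  # ~0.30 um
print(f"Light travels {gate * 1e3:.2f} millimeters in 1 ps")   # ~0.30 mm

# A photon that leaves the laser, bounces off a wall, off a hidden
# object, and back again arrives later than a direct reflection.
# The extra arrival time encodes the extra path length:
extra_path = distance_traveled(500e-12)  # hypothetical 500 ps of delay
print(f"500 ps of delay corresponds to ~{extra_path:.2f} m of extra path")
```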
Trond Arne Undheim (00:03:50):
Why is it, and we'll get into this in a second, that optical and camera technology has become so instrumental as an early application of AI? Why has this particular application been so promising?
Otkrist (00:04:07):
You mean like in machine learning terms?
Trond Arne Undheim (00:04:11):
Yes, why has this particular application been such a focus?
Otkrist (00:04:14):
Right. So I would say it's a combination of the technology and also a little bit of chance, because we were making advancements in machine learning on all different kinds of datasets: there were small datasets and there were large datasets. And what we found was that for very large image datasets, deep learning works really well. It works so well that it actually beats humans now; it's much better than humans in most applications. But there is a big asterisk there: these are applications which do not require higher-level cognitive thinking. Realistically, when we are feeding an image to a deep learning pipeline, what we are doing is kind of replicating the visual cortex, the part that comes right after the eye. We are not doing anything more than that; we are just replicating that pipeline. But what has been shown is that this replication is actually bi-directional, in the sense that the behaviors we observe in the machine learning models also show up in neuroscience: when researchers study the visual cortex, they see the same kind of responses.
The inflection point in machine learning
Trond Arne Undheim (00:05:28):
So when would you say this technology started passing humans on the non-contextual tasks you're describing?
Otkrist (00:05:37):
Right. So I think there were a few points of inflection, but the big one was ImageNet. There was this big dataset released around 2010. And there was a seminal paper by Alex Krizhevsky, in which they came up with this new architecture they called AlexNet. What it would do is take these images, do processing on them, and produce a result. And it did it so much better than the existing methods. They showed an improvement; I think the absolute improvement was 20 to 30%, and the relative improvement was more than a hundred percent. So they were able to double the performance of the machine learning models they had. And that was partly because of the deep learning, but also because of this huge dataset that was provided as input. That dataset was actually created by Fei-Fei Li, who is a Stanford professor. So I would say it was the combination of both the data and the model which led to this point of inflection. And after that, the field just took off, quite explosively. So if you go to one of the talks: Hinton, you know, is one of the godfathers of deep learning right now.
- Fei-Fei Li https://profiles.stanford.edu/fei-fei-li
Otkrist (00:06:55):
There's Bengio, of course, there's LeCun, and there's Hinton. These three actually got the Turing Award, I think it was last year.
- Yoshua Bengio https://yoshuabengio.org/
- Yann LeCun https://www.wikiwand.com/en/Yann_LeCun
- Geoff Hinton https://www.cs.toronto.edu/~hinton/
Trond Arne Undheim (00:07:05):
For the listeners, I'm going to link up all three of them, their main web pages, so people can look them up if they want.
Otkrist (00:07:13):
Yeah. So if you talk to Hinton, he says that nobody was actually listening to them when they were talking about neural networks. And then finally they decided to attack this highly application-based problem, because until then they had been writing really technical papers. They had been doing neural network research for 20 years.
Trond Arne Undheim (00:07:31):
Isn't that interesting? I think that's fascinating: the fact that this had been going on for so long, and then suddenly, out of the blue, all of us started noticing. Do you have any idea? Was it truly that one of them just said, guys, let's become famous? Or was there actually a true discovery involved?
The history of AI – and AI winters
Otkrist (00:07:47):
There's a very, very interesting story; I'm so happy you're asking about this. So it all started, let's rewind back, in 1969. Marvin Minsky, who was a founder of what became CSAIL and a founding member of the MIT Media Lab, wrote a book called Perceptrons. You see it and you're like, okay, this must be a book about neural networks, but actually it was a book about the challenges, the limitations, of neural networks. If you read the book, what it says is that theoretically there are certain things that neural nets cannot do. But here's the catch: he was only studying neural networks with one layer. He was studying literally just one layer of perceptrons.
- Marvin Minsky https://www.wikiwand.com/en/Marvin_Minsky
- CSAIL https://www.csail.mit.edu/
- MIT Media Lab https://www.media.mit.edu/
- Perceptrons https://www.wikiwand.com/en/Perceptrons_(book)
Trond Arne Undheim (00:08:34):
Isn't that fascinating: he was very famous as one of the fathers of the field, but he was at level one, you know, in all of these layers. That's fascinating. So what happens then?
Otkrist (00:08:47):
So he wrote this book, and this book caused one of the biggest winters in AI. What this book did is it killed a lot of research in AI, because the book said: well, here's a problem you're trying to solve, and you cannot solve it with one layer of perceptrons. And just like everything else, once an idea like that goes public it spreads like wildfire, and people stop thinking about it. But what he was really saying applied to just one layer of a neural network. For multi-layer neural networks he didn't make any claims; it's actually very hard to analyze the math behind them, and they just didn't have the tools back then, I would say. And then what happened is that people stopped studying neural networks.
Otkrist (00:09:27):
So there are these conferences, Neural Information Processing Systems, NeurIPS, right, and they basically stopped accepting papers on neural networks. You listen to LeCun talk about this, and he says he couldn't get a neural network paper through 15 or 20 years ago. He would write these papers and they would get rejected. They stopped sending the work to NeurIPS and started sending it to the CVPR conference, a big conference in computer vision, and it would get rejected there too. And then finally they fought, and they won, with AlexNet. What happened is that they had the ImageNet dataset; Fei-Fei Li had created a challenge: do the best on this dataset. And there were these other machine learning models, SVMs, Random Forests, doing like 25%, whatever. And then suddenly Alex's model comes in at like 60%. So you're looking at this huge jump, and everyone's like, wow, how did you get that? And he's like, well, I used deep learning. Quite a crazy story. Yeah.
- NeurIPS https://nips.cc/
- CVPR http://cvpr2020.thecvf.com/
- SVM algorithm https://scikit-learn.org/stable/modules/svm.html
- Random Forest algorithm https://builtin.com/data-science/random-forest-algorithm
Trond Arne Undheim (00:10:33):
We jumped sort of straight into the topic, but I wanted to dig back a little into your background, because when you speak about it this way, it sounds like you live and breathe this stuff, but for many people this is not their everyday thinking. After all, yes, there is now a large community in machine learning, but how did you even get there? What was Otkrist thinking about when he was six years old, outside Delhi? If you think about your background, what were you doing when you were 12? That's my question.
Otkrist (00:11:09):
Right. So when I was 12, or really at any point while I was growing up, if you came and asked me what I wanted to do, I would always say: I want to be a scientist. There was no other answer. And my father had this weird picture of a scientist. A scientist was always someone who would fail at real-world challenges, someone who wouldn't do well economically, who wouldn't have a good job, wouldn't be able to support the family. And there were such cases, you know, scientists back in the day who just cared about the science. It's like rock stars: a lot of rock stars get famous, but there are also a lot of really good musicians who never get famous. They live in vans; they don't have any money.
Otkrist (00:11:54):
So sometimes it can be like that. But I think this is the era of science. I got really lucky: I chose a field which I like, but which at the same time was very much booming, and there's a lot of demand for good scientists right now. So I always wanted to be a scientist. And when you talk about science, physics is, I feel, one of the best sciences to pursue, because physics is all about nature, the universe: understanding it and finding the basic rules that drive it. Physics follows strict laws; there are rules and laws you have to follow. Those are the natural laws.
Trond Arne Undheim (00:12:35):
It started with physics, is that what you’re saying?
Otkrist (00:12:37):
Yeah, I would say that was the biggest interest I had. And then when I was applying to universities, computer science was of course in very high demand, but based on my grades I could get in. And again, it was my parents who were like, you know, maybe you should choose computer science. So I kind of got pressured into it; I would definitely admit that. But it really played out to my benefit, because when you have done a PhD, you see that a lot of the sciences are connected. They are all connected at the top level. And I did end up doing a lot of physics in my bachelor's and master's and PhD.
Trond Arne Undheim (00:13:17):
So tell me what happened next. This was a few years ago: you got yourself from IIT Delhi, which is a great institution in India, to MIT. And then after that you spent some rapid-fire years at LinkedIn, then at Google, and then I guess back at MIT. Tell us what went through your mind as you were exploring all these opportunities. You even had a summer internship at Yahoo; you have worked at some of the absolute top places. What were those experiences like?
Otkrist (00:13:58):
Yeah, Yahoo was such a long time ago; I think I was in my bachelor's when I went there. It was a very interesting experience. In order to make a decision about where I wanted to spend more time long-term, I needed to get a taste of everything. This is what I used to do: I would talk to a lot of people, ask a lot of questions, and try to figure out what they thought. And a lot of times, unfortunately, I found their help not that good. They would tell me something and I would be like, I don't understand. Some people think the Bay Area is the best place to be; others are like, the Bay Area has so many problems,
Otkrist (00:14:37):
there's no point in moving there. And the same with Bangalore. Bangalore is like the Bay Area of India, and it has a lot of infrastructure problems, kind of like the Bay Area: there isn't enough housing, there are road issues, traffic jams. You know, Bay Area and Bangalore. So I decided: you know what, I need to take an internship, go see it for myself, experience it. And it was quite an experience; I got to see what it's like, what the culture is. I kind of enjoyed it, but it was also maybe not my first choice. After I was out of it, I was like, maybe let's try something else. And that was Yahoo. And then that exploration continued. So something very few people know about me is that right after my bachelor's, I actually applied and got into the best business school in India. It's called IIM Ahmedabad.
- IIM Ahmedabad https://www.iima.ac.in/
- Bangalore https://www.wikiwand.com/en/Bangalore
- Bay Area https://www.wikiwand.com/en/San_Francisco_Bay_Area
Otkrist (00:15:28):
It's like the Harvard of India, I would say. And I went, and I dropped out after three days. I literally went and took classes, sat in the classroom, listened to professors, and here's the deal: they were really good, and I do think it's an excellent institution. But what I realized was that this was not for me. And I didn't want to make a big mistake by staying, because I knew that if I stayed for a few months, I would just be like, let's just finish this, let's not quit. But the timing was just right, and I was like, you know what, I have enough energy right now to just get out. And I think it was one of the best decisions I've made for myself.
Otkrist (00:16:11):
I'm not trying to be negative about IIM. It's a very good institution, and there are people who should definitely go there. But for me, it was all about science: I needed to learn more science, and that was the answer for me. So I dropped out and changed direction. I went and took a job in India, in Gurgaon, which is another big hub in North India. It's like the Boston or Austin, whatever you want to call it, a hub in the North.
Trond Arne Undheim (00:16:40):
It's interesting to hear about all of these experiences you've had across the US and India in tech, because those two places really have been, for the last few decades, so central to what has fueled the tech hubs and the giant tech companies. So let's jump back into the topic. One question that's been bugging me for a while is: what's the deep part in deep learning? You have even helped me build out some deep learning networks, and this question of layers always comes up; there's a big discussion about how many layers a network needs. And as you pointed out, with one layer, well, that's where the whole discussion of layers starts.
Trond Arne Undheim (00:17:33):
There are arguably deep learning networks with just two or three layers that can be immensely more effective than one layer. Yet some of the bigger advances, at least in the domains we were talking about, like optics or image recognition, have come from deep learning networks with far more layers. But there's a complexity: something happens every time you add a layer. And I want to talk about this duality of the opaqueness of the layers versus the value of the layers. So tell me: what's deep in deep learning? And is deep a good thing?
Otkrist (00:18:11):
Deep is very good. But you want to go as deep as you should, and no more; let me explain what I mean by that. As you add a layer to a neural network, you add to its functional complexity. Each neural network is essentially simulating a function, a mathematical function. What it is really doing is this: you have data, which in math is considered a vector, a row of numbers, and the network maps it to another row of numbers. There's this thing called a deep embedding: essentially you are transforming one row of numbers into another row of numbers. That's all you're doing, just converting it, but it's done through a function. Now, in the real world, the phenomena you're trying to model are highly nonlinear. They have lots of twists and bends; it's not one-to-one, it's not a linear function. So to capture that, you have to have multiple layers, and...
Trond Arne Undheim (00:19:14):
Each layer then acts as a corrective to the previous layer, you could say.
Otkrist (00:19:19):
Yes, though there are other ways to look at it too. For example, if you look at AlexNet or other image processing networks, what they found is that each layer learns a hierarchical feature. The first layer learns edges, the most basic features; then they went to the second layer and found it was actually learning corners; and so on and so on. It kept getting more and more complex.
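To make the "stacked layers" idea concrete, here is a minimal sketch in PyTorch. The layer sizes are arbitrary, chosen purely for illustration: a row of 64 numbers goes in, each Linear-plus-ReLU pair is one nonlinear transformation of the embedding, and another row of numbers comes out.

```python
import torch
import torch.nn as nn

# A minimal "deep" network: a vector (row of numbers) goes in, a new
# vector comes out. Each Linear layer is an affine map; the ReLU
# nonlinearities between them are what let stacked layers model the
# "twists and bends" that a single linear layer cannot.
model = nn.Sequential(
    nn.Linear(64, 32),  # layer 1: raw input -> first embedding
    nn.ReLU(),
    nn.Linear(32, 16),  # layer 2: composes features of features
    nn.ReLU(),
    nn.Linear(16, 10),  # output: e.g. scores for 10 classes
)

x = torch.randn(1, 64)  # one input vector of 64 numbers
print(model(x).shape)   # torch.Size([1, 10]): another row of numbers
```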
Trond Arne Undheim (00:19:45):
I wanted to ask about this idea of edges and corners, by the way, because those are literally what the first systems used. That's the definition of a face in deep learning, isn't it? The definition of a face is essentially edges and corners.
Otkrist (00:20:00):
Yeah, I think you may be talking about Haar-like features. Haar features are like that, but they're not deep, I would say. But yes: when you are building any kind of image recognition pipeline, which could include faces, you are learning these hierarchical features, which are lines and all of these different motifs. And slowly, as you go deeper, you start seeing actual objects in the responses and the feature maps. So that's the part about the layers themselves. But then we ask: what about the opacity? Well, there is an opacity, in the sense that as you add layers, the function gets more complex. It becomes harder to understand even what's going on inside the neural network. So there's this whole problem of explainability.
- Deep Learning Haar-cascades http://www.willberger.org/cascade-haar-explained/
Otkrist (00:20:44):
They're calling it explainability. What they're saying is: these models that we are learning are really, really complex. They work really well, but we actually have no idea what they're doing, or why they're working so well, or what is going on inside this tiny, dare I say, brain. We are building these neural networks, which are essentially mimicking the neurons of a brain, but we don't understand how or why it's working. So this is a huge field now, explainability: you have a decision from your network, and you need to explain why it said what it said.
Trond Arne Undheim (00:21:21):
Can you tell us a little more about that? Because very recently it's become an extremely hot issue, right? In the current racial debates here in the US, big tech companies, including IBM, have said: we're going to stop facial recognition work for now. And I think it was Amazon that said, we're not going to let the police use our systems for one year, or whatever it was. So this is a massive debate, because the technologies you're working on have become so good, quote unquote, in terms of execution: you can recognize faces, you can put people into ethnic categories, you can start making all kinds of assumptions about people. And arguably it's going in a direction that could become a little questionable, certainly if you don't know why the systems are doing what they do. But there's more than that: are the systems even correct, and why are they making these determinations? What would you say about the progress in explainability? What do you do to try to explain what layers one, two, and three contribute, and then what the n-plus-first layer is actually doing?
Otkrist (00:22:36):
Okay, excellent question. Progress in explainability: there has been some progress over the last two or three years, but I think we have a long way to go. If you look at the models they're coming up with to explain these machine learning models, by and large they often sound and seem like just another machine learning model. There are two or three different approaches being taken. One is an approach in which they don't actually try to explain the neural network directly. They approximate it with something like a Random Forest. A Random Forest is built from decision trees, and with a decision tree you can actually walk down each of the branches and say: okay, it decided to choose X because these were the parameters to choose from, and this is what it decided.
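A minimal sketch of that surrogate-model idea, using scikit-learn. The dataset and model choices here are illustrative, not anything discussed in the episode: train a black box, train a shallow decision tree to imitate the black box's predictions, and read the tree's branches as the explanation.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)

# 1. Train the "black box" we want to explain.
black_box = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                          random_state=0).fit(X, y)

# 2. Train a shallow surrogate tree to mimic the black box's
#    *predictions* (not the true labels).
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# 3. The tree's branches read as human-interpretable "because" rules.
print(export_text(surrogate,
                  feature_names=list(load_breast_cancer().feature_names)))
```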
Otkrist (00:23:17):
Another approach they have is something called saliency. Every time you have a forward propagation through the neurons of a neural network, you can also do something called backpropagation: you can literally go backward, and this is how you train the network. So what they do is some sort of maximum or non-maximum suppression: you take a difference from the output you had, and you backpropagate that difference. That gives you an idea of what the neural network thought was going on in the image. And that can be...
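Here is roughly what gradient-based saliency looks like in code. This is a hedged sketch with a stand-in, untrained model, since the episode doesn't reference any specific implementation: backpropagate from the winning class score and read the per-pixel gradient magnitudes as the saliency map.

```python
import torch
import torch.nn as nn

# Hypothetical classifier; in practice you would load a trained model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

image = torch.randn(1, 1, 28, 28, requires_grad=True)  # stand-in input

# Forward pass, then backpropagate from the winning class score.
scores = model(image)
top_class = scores.argmax(dim=1).item()
scores[0, top_class].backward()

# The gradient magnitude per pixel is the saliency map: how much a tiny
# change in each pixel would change the decision.
saliency = image.grad.abs().squeeze()
print(saliency.shape)  # torch.Size([28, 28])
```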
Trond Arne Undheim (00:23:50):
This is complicated; I just want to stop you for a second. For someone who would try to get into this kind of discussion about explainability: is the approach to learn it from a technical point of view first and then start to explain it? Or can you contribute, at least politically, to this debate by only reading the papers about explainability, without really reading the actual methods?
Otkrist (00:24:18):
So I think the politics of it actually comes more from bias than from explainability. At least that's what I've understood from the Amazon issue.
Trond Arne Undheim (00:24:28):
So those things are separate: the bias is separate from the explainability?
Otkrist (00:24:31):
They are connected, but they're not the same thing. Bias in a neural network happens when you have a dataset which is biased, and there are multiple ways that happens. One of the biggest is that you have a face dataset and most of the faces in it are light-skinned. So what you get is a model that doesn't recognize black faces, that doesn't recognize people with brown skin.
Trond Arne Undheim (00:24:50):
Exactly. And if that's used at an airport, then of course they would find black faces more questionable, because those would be anomalies in the system. The systems just can't distinguish between them, so people are set aside as questionable just because there's a lack of data, right?
Otkrist (00:25:09):
Yeah, that's one of the problems. And then the second problem is around policy. You could consider it a parallel bias, or maybe a third issue, but basically the thing is: you have this technology that may have a lot of bias in it, and then you're applying it to a justice system which is already quite biased. That, at least, is the view the companies may hold. So what they're saying is: you do all of this, you build a model, and you basically don't improve things, you make things worse. That's why they're like: you know what, maybe we should stop, we need to pause. So it's like a one-year pause.
Otkrist (00:25:46):
Let's not share this model right now; maybe there needs to be more research, more datasets. I also heard that they found datasets that had a lot of racial slurs. I think there was an MIT dataset that came into the picture recently that had racial slurs, so they took that dataset down. There are multiple ways to look at it, but it's about making sure you have all the data you need, and not biased data. You need to have the right data. And then...
Trond Arne Undheim (00:26:15):
Is it just happening, Otkrist, because, after all, we have made a lot of progress in a short amount of time, and there just hasn't been enough time to get all the right data? Could this crisis in and of itself lead us into another AI winter, where people become so disillusioned with the results? Or is there such a push now, in your view, that this is going to continue; that it's just a minor setback, perhaps even a necessary correction, where people will make strides to improve the datasets, make them fair, make them cover the globe, and all these good things? Certainly if you are applying them to the US, they should reflect the population of the US and not just a bunch of white people, or whatever these datasets are doing.
Otkrist (00:27:04):
First of all, I don't think it will stop. It will not cause an AI winter, and I don't think it can. The AI economy right now is being driven purely by the fact that you can apply AI to almost any company and it will completely revolutionize it. There are a lot of different application areas with a lot of scope for automation. I'm surprised by the amount of automation that can be achieved right now with the AI we have and that has not been implemented yet. So you see all of these companies coming up with even small changes, small improvements, and they just go and get unicorn status.
Trond Arne Undheim (00:27:40):
And we call it AI. But I think we're going to talk about this as well, right? Some of the things you're talking about are actually just advanced statistics. Some of these things are just analytical models; they're not what I would call real AI, and they're barely machine learning. They're one step above descriptive statistics. I mean, many businesses haven't even counted their beans.
Otkrist (00:28:05):
Yeah. For example, I think there's an app for restaurants. Restaurants have a lot of supply chain issues: you could buy too much food and end up wasting it, or you may not have enough. And there are periodic, cyclical things you have to figure out; it's just statistics. So this kind of stuff, which was not implemented before, is now being implemented, and there are companies doing really well because of it. I don't think it's going to stop; it's driven completely by incentives. And in a capitalist system, concerns unfortunately tend to take a back seat to this kind of thing.
Trond Arne Undheim (00:28:44):
Go ahead.
Otkrist (00:28:45):
Yeah. I think researchers are very much interested in the bias field, which is very good. Everyone's talking about bias, everyone's talking about explainability. There's a lot of research going on, which is very nice, I think.
Clarifying the concept soup: AI, machine learning, deep learning, neural networks
Trond Arne Undheim (00:28:58):
I just wanted to clarify this: AI, machine learning, deep learning, neural networks, and a bunch of other concepts. Can you give us a quick rundown? Were they listed in the right order? Some people say AI is the higher-order concept; others say machine learning is sort of equal. Give us a rundown of how you see these concepts and what they mean to you.
Otkrist (00:29:22):
Yeah. AI is the top-level concept, actually. A professor friend of mine at MIT used to joke that AI is artificial intelligence, of course, and machine learning is AI that works. So machine learning and pattern matching can very much be seen as the new, rebranded version of AI. They are kind of similar, but artificial intelligence really encompasses everything. There are algorithms which don't have data-driven origins but can still be regarded as AI. A classic example is tic-tac-toe. In tic-tac-toe you can have an algorithm which is completely deterministic and will always do the best it can against an adversary, but it is not data-driven at all, which means it's not strictly machine learning; it's not statistical. But it is considered artificial intelligence, because it is able to do something we consider within the scope of intelligence.
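For readers who want to see "AI with no data" concretely, here is a sketch of a deterministic tic-tac-toe player using minimax: no dataset, no training, just exhaustive lookahead. The example board position is made up for illustration.

```python
# A deterministic tic-tac-toe player via minimax.
# Board is a list of 9 cells: 'X', 'O', or ' '.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(b):
    for i, j, k in LINES:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    """Return (score, move); 'X' maximizes the score, 'O' minimizes it."""
    w = winner(b)
    if w:
        return (1 if w == 'X' else -1), None
    moves = [i for i in range(9) if b[i] == ' ']
    if not moves:
        return 0, None  # draw
    best = None
    for m in moves:
        b[m] = player
        score, _ = minimax(b, 'O' if player == 'X' else 'X')
        b[m] = ' '
        if best is None:
            best = (score, m)
        elif player == 'X' and score > best[0]:
            best = (score, m)
        elif player == 'O' and score < best[0]:
            best = (score, m)
    return best

board = ['X', 'O', 'X',
         ' ', 'O', ' ',
         ' ', ' ', ' ']
print(minimax(board, 'X'))  # (0, 7): a draw with best play; X must block at 7
```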
Trond Arne Undheim (00:30:33):
To what extent, Otkrist, is machine learning just an appropriation and rebranding of the field of statistics in new clothes?
Otkrist (00:30:44):
I think that question is, well, troubling, because whichever way I answer, I would piss someone off. I would piss off the people who do statistics, and probably...
Trond Arne Undheim (00:30:58):
It's a critical question, but I wasn't actually meaning it in a negative way.
Otkrist (00:31:01):
Yeah, they have a lot of overlap; they should be considered quite close to each other, if not the same. Statistics tends to be much more about: here are the variables, here are some outputs, and you are trying to correlate them and study those kinds of things. Machine learning has become a lot about modeling functions: taking these things, mapping them to embeddings, and getting to higher-order embeddings. So machine learning is a lot more about embedding into a new space, a new topological space; they talk about surfaces in higher dimensions. That may be a difference we can point to. But I would say nobody talks about it much anymore, because people are all talking about deep learning, and deep learning is very specific in the sense that it's doing machine learning or AI using neural networks with multiple layers. That's it; you can define deep learning quite well.
Trond Arne Undheim (00:32:00):
We'll get back to deep learning in a second, and we'll go into neural networks specifically, which we were hitting on. But I wanted to hit on the concepts of data science and analytics as well, because it's been a few years now and they aren't as hot as they were. If you think about data science, wasn't that a way of saying: look, it doesn't just matter what statistical method you use, and it doesn't just matter what you can accomplish; you need to think about the whole totality of what you're trying to do? The science part of it is making a hypothesis that makes sense, because one of the criticisms of statistics is that if you don't have a theory, you are basically just doing correlations, and you could have spurious effects everywhere. So my interpretation is that data science was an approach where, at least at universities, you were saying: we are training data scientists, and industry should be asking for data scientists, because you have real application problems you want to solve. So you need to know something about various domains in addition to knowing the techniques. In other words, it's not just about manipulating data. Why did that term go out of fashion? It sounded to me like it made some sense.
Otkrist (00:33:14):
I think it was a combination of things. Data science was much more about conventional methods, I would say, than machine learning engineering, which is the new term. And data science also tended to be a very data-heavy job. From what I saw, it would mostly involve a lot of data wrangling: trying to convert the data into the right formats, really looking through the data. And I think that's still true: if you're trying to build a machine learning model on anything except images, you have to really study your data. You have to go deep, figuring out which things you really care about and which should be going into the model. At that point you start fitting models and improving the algorithms, which can sometimes be the much smaller part, as these algorithms have been studied quite well.
Otkrist (00:34:00):
So it might sometimes just be taking the features you extracted and feeding them in. It becomes very much about, okay, I'm going to go do the data wrangling: you're writing Hadoop and SQL scripts; a lot of friends of mine used to do that. And then you're doing feature extraction. Feature extraction is where you look at your data and decide, okay, I'm going to combine a few of these signals into a smaller number of features. And this, again, I think you're right, is going a little bit out of fashion, because neural networks, and this is the best part, do this for you. They learn the features. That's why people love deep learning so much.
Otkrist (00:34:50):
What they showed, essentially, is that if you have data about any area, you don't need that much feature engineering; all you need is a good deep learning pipeline. At that point it becomes a lot more about engineering the deep learning pipeline, or, if it's not deep, the machine learning pipeline. To me, at least, that's a lot more interesting, because it involves a lot more analytical thinking in terms of algorithms. Whereas with data science, it's a lot more about: the person who spends the most time with their data will do the best. So yeah.
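A small illustration of that contrast, sketched with scikit-learn on a toy digits dataset. The particular hand-crafted features are invented for the example: one model gets a few engineered summary features, the other gets raw pixels and learns its own features in a hidden layer.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)  # 8x8 digit images, flattened to 64
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# "Data science" style: hand-engineer a few summary features per image.
def hand_features(X):
    imgs = X.reshape(-1, 8, 8)
    return np.column_stack([
        imgs.mean(axis=(1, 2)),            # overall amount of ink
        imgs[:, :4, :].mean(axis=(1, 2)),  # ink in the top half
        imgs[:, :, :4].mean(axis=(1, 2)),  # ink in the left half
    ])

lr = LogisticRegression(max_iter=1000).fit(hand_features(X_tr), y_tr)
print("hand-crafted features:", lr.score(hand_features(X_te), y_te))

# Neural-net style: feed raw pixels; the hidden layer learns the features.
mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000,
                    random_state=0).fit(X_tr, y_tr)
print("learned features:     ", mlp.score(X_te, y_te))
```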
Trond Arne Undheim (00:35:29):
Alright, so let's dig into neural networks. You've been through this a little bit; you talked about using the brain as a model. I accept that as a metaphor, and I do understand that the field of neuroscience has studied the brain. But in our pre-discussion I was hinting at my slight skepticism that a metaphor like that travels very well. So I wanted to ask you this question. I understand that the field calls itself neural nets, but it is not brain science; right now, this is way before Neuralink and neural connections. So we are actually talking about a metaphor, pretending our methods are like the brain. Agreed?
Otkrist (00:36:13):
Yes and no. The first mention of perceptrons was by, I'm forgetting the name of this person, a very famous person, who came up with perceptrons in the 1950s, I think, and it was solidly placed in neuroscience. They had understood that there's something called a neuron, and he did say he was trying to simulate an artificial neuron. He talked about cat neurons; this person actually went and studied cat neurons. And then I do think the research may have diverged a little from neuroscience over the last 30 years, but it is coming back, and it's coming back quite strongly. If you look at some of the great research coming out of BCS, the Brain and Cognitive Sciences department at MIT, an amazing place to work, I think it might be the best place to do machine learning right now, which is crazy, because they are neuroscience people; they are dissecting brains.
Otkrist (00:37:07):
But what they're also doing is using some of this information to inform their models, or even to understand them. So you get these crazy ideas. One of the ideas is called adversarial examples. Neural networks can be very good, but they can also be fooled very easily, and one of the things that really perplexes everyone is: why does a neural network fail so spectacularly on images which are so simple? You look at the image and you're like, this is clearly a cat; why is the neural network thinking it's a dog? When I first saw adversarial examples, I thought: there's something wrong with the neural network, because humans don't have this problem. Well, here's the experiment they did a couple of years ago. They showed people the same images, but in a flash; they would only show them for a very short time, which meant people couldn't do high-level computation in their brains. And what they found was that the same images fool people. So the adversarial component worked, but it worked only on the first part of the image pipeline inside the brain. Which means we may actually have replicated this thing, maybe replicated it completely. So they are actually more alike than we might say. I personally think neuroscience and neural networks have a lot to learn from each other.
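The classic recipe for generating such adversarial examples is the fast gradient sign method (FGSM). The episode doesn't name a specific method, so the following is an assumed, minimal sketch with an untrained stand-in model: nudge every pixel slightly in the direction that increases the loss.

```python
import torch
import torch.nn as nn

# FGSM: perturb each pixel a tiny amount in the direction that *increases*
# the loss, producing an image that looks the same to us but can flip the
# model's prediction. The model here is a stand-in, not a trained network.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 1, 28, 28, requires_grad=True)
true_label = torch.tensor([3])  # hypothetical ground-truth class

loss = loss_fn(model(image), true_label)
loss.backward()

epsilon = 0.1  # perturbation budget: small enough to be near-invisible
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("clean prediction:      ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```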
Trond Arne Undheim (00:38:26):
So I think you were talking about Frank Rosenblatt earlier, the Cornell scientist?
Otkrist (00:38:29):
Yes. Thank you.
Trond Arne Undheim (00:38:34):
Okay, so they have a lot in common. What then about adversarial networks? Is that a whole new promising direction, and what exactly do these things do? From what I understand, you're basically setting up two or more neural networks against each other, and they're almost competing for who gets to do the analysis. They're pointing out weaknesses in each other's methods, almost like human bantering. And at the end, I don't know exactly how you decide what the result is, but they make each other better by being antagonistic in the analysis, which sounds like a battle to me. How does it work when you set that up?
AutoML and beyond
Otkrist (00:39:17):
Yeah, I think that's a very good way to say it. If you look at it from a scientific perspective, it's game theory; game theory applied to neural networks. If you look at how they architect these systems, that's right: they pitch neural networks against each other. And this is not the only place where they do it; it's not just adversarial learning. Reinforcement learning is another place where they started doing this. What we're seeing more and more is that the models that are winning now, the research going on now, is not just one AI, but multiple AIs collaborating or competing against each other. One of them is the competition setup, the adversarial network, which may have been one of the first of these ideas, and I think it's very interesting. I can give a quick overview of adversarial networks for people who don't know. The idea is that you have a machine learning model which is trying to predict something.
Otkrist (00:40:08):
And then you have another model which generates examples that fool this model. The idea is that if you do this, you become really good both at predicting and at generating examples which can fool. And you can always keep generating these fooling examples, which is crazy: you can just fool machine learning models like that, and people are not fully sure why that happens. And then if you look at other examples, for instance AlphaGo, they trained multiple machine learning models, but those were not generating each other's inputs. What they were doing was playing each other in a game, winning or losing against each other, and the winning AI got to survive. So now they are applying evolutionary dynamics and game theory to machine learning, which is very much the right direction to go.
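A toy version of that two-player setup, sketched in PyTorch. The target distribution and network sizes are arbitrary, chosen for illustration: a generator learns to produce samples from a simple Gaussian purely by trying to fool a discriminator.

```python
import torch
import torch.nn as nn

# Toy GAN: the generator tries to turn noise into samples that look like
# draws from N(4, 1); the discriminator tries to tell real from fake.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(32, 1) + 4.0  # samples from the true distribution
    fake = G(torch.randn(32, 8))     # the generator's attempts

    # Discriminator: label real as 1, fake as 0.
    d_loss = bce(D(real), torch.ones(32, 1)) + \
             bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: try to make the discriminator say 1 for fakes.
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# The generated mean should drift toward ~4 as the two networks compete.
print("generated mean:", G(torch.randn(1000, 8)).mean().item())
```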
Otkrist (00:40:56):
It's one of the biggest things I'm very much interested in. And actually, one of the things I worked on became this whole field called AutoML, which asks: can you actually use machine learning models to train other machine learning models? Essentially, when we are doing this machine learning engineering job, we are taking a machine learning pipeline and trying to tweak it, make it better, or maybe coming up with our own pipeline. And this was another one of those questions, like looking around corners: can it actually be done?
The quest for Artificial General Intelligence
Trond Arne Undheim (00:41:27):
Right. This is the beginning of AGI, which I think again deserves some explanation. Because everything we've been talking about up to now, from Marvin Minsky and Rosenblatt onwards, is pure narrow AI. We're talking about recognizing images and, well, you tell me what the use cases have been, but to be honest, they're table games, they're chess, they're images, they're numbers. Obviously they're real things, but they don't go beyond very limited contextual constraints. What you're talking about now, potentially, is one network training another network that could potentially train a third network. And now you're edging towards what some people have as their biggest fear, which is the machines taking over, and what others have as their biggest dream. Where are you on that spectrum, by the way?
Otkrist (00:42:21):
I would say I'm very much on the dream end; it is my dream. And at the same time, I don't think it will happen in my lifetime. The more I learn about AGI and current AI, the more I think there might be a few technological breakthroughs required for us to get there.
Otkrist (00:42:38):
But there have been some improvements. Let me explain, although, by the way, AGI is such a broad term: artificial general intelligence. How do you even define that?
Trond Arne Undheim (00:42:49):
Artificial general intelligence, right? Yes.
Otkrist (00:42:52):
I would say that AutoML is not that, but it is definitely getting closer, in the sense that we are seeing that you can have a machine learning model which develops and trains another machine learning pipeline without a human being completely involved. Initially you have to give it some input: okay, what are the parameters, what do I want from this model? And people have come up with so many new models that these systems generated which are now doing better than human-engineered models. I think MobileNet is one of them, but there are multiple nets which came up through this process. The core concept they're talking about is reinforcement learning, in which a machine is able to learn on its own. So it's unsupervised learning and reinforcement learning; some people would say reinforcement learning is a part of unsupervised learning. The idea is like the holy grail: a machine which can learn on its own and improve on its own is kind of the first step towards general intelligence. And what we're seeing is that...
Trond Arne Undheim (00:43:54):
What things have they applied reinforcement learning to so far?
Otkrist (00:43:59):
Everything. They are applying it to image recognition, of course, the basic stuff, but also AlphaGo and all kinds of games; any kind of game nowadays is based completely on reinforcement learning. The same with robotics: you have to have a reinforcement learning component. It's being applied to language modeling and other higher-order applications. I would say that 50 to 60% of the new AI being talked about is based on reinforcement learning of some sort.
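To ground "a machine learning on its own from reward," here is a minimal tabular Q-learning sketch on a made-up corridor world, one of the simplest possible reinforcement learning setups, not anything specific mentioned in the episode.

```python
import random

# Tabular Q-learning on a tiny corridor: states 0..4, start at 0,
# reward +1 for reaching state 4. Actions: 0 = left, 1 = right.
# The agent learns purely from trial, error, and reward.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(500):
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit what we know, sometimes explore.
        a = random.randrange(2) if random.random() < EPSILON \
            else max((0, 1), key=lambda x: Q[s][x])
        s_next = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
        r = 1.0 if s_next == GOAL else 0.0
        # Bellman update: move Q(s,a) toward reward + discounted future value.
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s_next]) - Q[s][a])
        s = s_next

print([max(q) for q in Q])  # learned values rise as states near the goal
```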
Trond Arne Undheim (00:44:31):
All right, well, this is a lot to swallow. You said this might not happen in your lifetime. What are some of the limitations that you see, as fascinated as you are with your field? What are some of the biggest frustrations when you're really trying to make massive progress, some of the things holding you back?
Otkrist (00:44:47):
I would say the biggest thing holding us back right now is that people have beaten this initial pipeline to death. We have the pipeline our eyes run: we can look at an image for only a second and determine what it is. That problem has now been solved. There's still a lot of publication, there will be incremental improvement, but what about the stuff after that? We need to think about how we take this time horizon from one second to, say, five seconds or a minute. There are higher-order problems which have not been tackled, and I think the biggest challenge there is language. Language is one of the things which is still not completely solved.
Otkrist (00:45:28):
It's not like image recognition; language models don't do as well. I think the best model out there is huge, you cannot run it on a normal computer, and even then it doesn't do nearly as well as humans at generating or analyzing text. But it's getting there. Another one is this problem of summarization. If you give an image to a machine, it will look at the image and give you a summary; it will be very nice, it will actually make sense. But if you give it a video, like an hour-long movie, the machine goes completely in the wrong direction. There are multiple reasons for that, but one of the things they talk about is called catastrophic forgetting. Essentially, once the neurons are done with an image, done with a data point, they quickly move on to the next one, and so on and so forth. So this problem of memory has not been solved properly yet in neural networks. There have been some improvements. There was this...
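Catastrophic forgetting is easy to demonstrate. Here is a hedged sketch with scikit-learn, with a task split invented purely for illustration: train a small network on digits 0 to 4, then keep training it only on digits 5 to 9, and watch its accuracy on the first task collapse.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
task_a = y < 5   # task A: digits 0-4
task_b = y >= 5  # task B: digits 5-9

net = MLPClassifier(hidden_layer_sizes=(64,), random_state=0)

# Train incrementally on task A only.
net.partial_fit(X[task_a], y[task_a], classes=np.arange(10))
for _ in range(29):
    net.partial_fit(X[task_a], y[task_a])
print("task A accuracy after A:", net.score(X[task_a], y[task_a]))

# Now train only on task B; the same weights get overwritten.
for _ in range(30):
    net.partial_fit(X[task_b], y[task_b])
print("task A accuracy after B:", net.score(X[task_a], y[task_a]))  # collapses
```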
Trond Arne Undheim (00:46:27):
This scares me when it comes to even fairly domain-specific things like autonomous driving. I was listening to Elon (Musk) talk about the next iteration, and he was saying something about how the next Tesla software update will essentially get us to full autonomy, as long as the regulators accept a 200x improvement upon humans. So he's saying that with the next software update you're already 200x more efficient than humans, and it's up to regulators to accept whether that's good enough. But what you're telling me is that yes, it may arguably be 200x better than humans on average, but there is this element of catastrophic forgetting which could come in and bite us in the end. Maybe not just in autonomy, but in many, many other fields. Memory is such a fundamental human function, and language and memory are some of the things that make us human. I'm not trying to be the critic here, but how are we going to solve those things?
Otkrist (00:47:30):
In the larger sense, I think these might be breakthroughs that we need. I do think, however, that self-driving cars may not need a lot of this; they might just work, especially with the current technology. I think the biggest problem right now with self-driving cars is unfortunately humans. These cars need to coexist with human-driven cars, and people are walking on the streets, and people do...
Trond Arne Undheim (00:47:54):
Some people also have catastrophic forgetting, my wife reminds me of this sometimes.
Otkrist (00:48:00):
So you get this problem of chaos happening there, and the robots are not ready for that, unfortunately. That would be the biggest factor causing issues here.
AI startups: Path.AI, Opal
Trond Arne Undheim (00:48:13):
Yeah. Otkrist, tell me about some of the startups that you're impressed with, as we're coming towards the end here. What's some of the work you've been encountering, in and around MIT or elsewhere? And ostensibly I also want you to comment a little on the fact that you have been in it; you and I have both spent a lot of time at MIT, but there's something that you learn in a commercial context that you cannot really do, or are not asked to do, day to day in an educational institution. I think that is perhaps part of the reason why some of these startups are making breakthroughs: they're stepping out of that environment and testing things out. What are some of the startups that fascinate you at the moment?
Otkrist (00:48:58):
I think there's a lot of very interesting stuff going on. I'll point out some obvious ones. Path.AI is one of the companies I've talked to; I've talked to the founder, and they're doing something very nice. They are trying to do analysis of pathology slides using AI. I think that problem is ready to be analyzed and studied: the technology is out there to do this, and it needs to be automated. And there are a lot of different applications of this kind, like x-rays and ultrasound, which could potentially be automated in the same way. What they're trying to do is build therapies using the AI they're building, and maybe help reduce cancer risk or enable better cancer therapy.
Otkrist (00:49:42):
So this kind of application is very interesting, very nice. Another one that I've heard is very nice, I think it's called Opal. There are multiple companies in this direction that are talking about this thing called privacy-preserving AI, again a very interesting area, and also something I've done some research on. The idea is that when you are trying to train these deep learning models, you need a lot of data, and this data goes to a cloud, and then a model gets trained. The problem is that you're sharing your data with the person who's training the model, and that person then has access to so much personal data. Maybe they shouldn't have it; maybe your data should stay on your device. And what came out over the last two or three years, one of the models that I built, and other models out of Google, showed that you can actually do that: you can train a model without taking the data off the device. Part of the model gets trained on the device, then an intermediate part gets transferred, and the rest gets trained on the cloud. And you can have...
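A rough sketch of that split idea in PyTorch, simplified to one process with made-up layer sizes. A real split-learning deployment would run the two halves on different machines with gradients exchanged over the network: the raw data only ever touches the "device" layers, and just the intermediate activations cross the boundary.

```python
import torch
import torch.nn as nn

# Split learning, schematically: the first layers live on the device and
# see the raw data; only their *activations* cross to the server, which
# hosts the rest of the model.
device_part = nn.Sequential(nn.Linear(64, 32), nn.ReLU())  # stays on device
server_part = nn.Sequential(nn.Linear(32, 16), nn.ReLU(),
                            nn.Linear(16, 2))               # lives in the cloud
opt = torch.optim.SGD(list(device_part.parameters()) +
                      list(server_part.parameters()), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

raw_data = torch.randn(8, 64)       # private data: never leaves the device
labels = torch.randint(0, 2, (8,))

smashed = device_part(raw_data)     # only this intermediate tensor is sent
loss = loss_fn(server_part(smashed), labels)
loss.backward()                     # gradients flow back across the split
opt.step()
print("one split training step done, loss:", loss.item())
```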
Trond Arne Undheim (00:50:47):
That would be important for devices such as Google Home and Alexa, given all the information they're collecting from the home sphere.
Otkrist (00:50:54):
Yeah. There's a huge application in edge computing, in these small IoT devices, and there's application in the privacy space. And there's this whole thing: if you talk to Tim Berners-Lee, the person who invented the web, he's talking about how we need a new version of the web which is distributed. The web right now is very much centralized: there are these players who control all aspects of the web. Actually, most of these players may have turned out to be okay; they have acted a lot of the time in people's best interests. But it's not good to have this much power concentrated in a few corporations and organizations. So this new web would keep all of your data in one place, inside your home, and it would only leave when you want it to leave; it shouldn't leave your home.
Otkrist (00:51:38):
And then this new web, with a distributed learning strategy, could completely change the game. Unfortunately, there are a lot of policy problems, and there are a lot of stakeholders who may have different interests. I don't know if it'll actually pan out, but it's definitely very interesting. I think another one which has definitely made a name for itself would be Boston Dynamics. They've built some excellent robots, and they have finally brought the robot to the point where you can actually take it and put it in an industrial environment. And these robots look a lot more general-purpose-ish than the previous robots, which tended to be, okay, this is a robot which picks something up and puts it over there, that's it. There are robots out there which cost thousands and thousands of dollars and just do one thing.
- Boston Dynamics https://www.bostondynamics.com/
Trond Arne Undheim (00:52:27):
But they're pickers. They just pick stuff up.
Otkrist (00:52:31):
Yeah, they just do one thing, but not the Boston Dynamics robots. And actually, Amazon has a lot of robots now on its floors, which are doing most of the work that humans used to do. And, you know, I'm sure people are like, but humans need those jobs. But there's another way to look at it, which is that maybe humans could do something else with their lives, something a lot more intellectually rewarding. It's maybe a fringe opinion, but I think humanity could do something else if an AI could do the basic stuff like that.
Trond Arne Undheim (00:53:05):
This is the future. A lot of people are super worried about the robots taking their jobs, or taking over, or both. But I think what you're pointing out is that there's enough work here. And I mean, a lot of the economists at the beginning of the last century were saying they would be surprised if we were working more than two or three hours a day at this point, yet we find ourselves working pretty hard. It was John Maynard Keynes who said that.
- John Maynard Keynes https://www.wikiwand.com/en/John_Maynard_Keynes
Otkrist (00:53:28):
Yeah. But I would say there are things that we want to do, that humanity wants to do, and they need all the help they can get. For example, going to Mars: I think the right approach is to send a lot of autonomous vehicles and autonomous machines there. They should do our work; they should do the heavy lifting. They will build a base, and then when it's ready, we can go and stay there, we can go and do research. It's the same with mining asteroids: asteroid mining is going to be a big thing, and maybe 30 years from now, companies like SpaceX are going to be huge players in this space. They're going to be controlling a lot of this; they're going to be stakeholders. And sending people to do the asteroid mining is going to be very hard, very, very difficult. I think we need to send robots instead, robots which will do this work, because this can be automated and it should be automated.
Lendbuzz
Trond Arne Undheim (00:54:20):
Well, we're certainly looking into an exciting future. I wanted to give you a chance to give us a little overview of what you do in your daytime job, because it seems to me you're so interested in all these things that it overflows into the evening, I'm sure, but in the daytime you're actually getting paid to do some of this stuff. Give me a little sense of what Lendbuzz is up to and what you've built with them.
Otkrist (00:54:45):
Right. So I'm VP of data science at Lendbuzz; I head the data science division. Lendbuzz is a company that is trying to give auto loans to people who would not get them through the usual channels. Let me expand on that. People who are coming in as immigrants to the United States have this challenging problem with credit scoring: they don't get FICO scores. You don't have an SSN, or maybe you have a very, very thin, very short credit history, in which case you look like a very risky borrower to some institutions out there. Or you may not have a FICO score at all; you may be a student, you may not have an SSN at all.
- FICO score https://www.myfico.com/
Trond Arne Undheim (00:55:26):
What are the indicators you guys are able to use?
Otkrist (00:55:29):
Very interesting question. Lots of different things, though I don't think I can go into that, unfortunately, since all of this is proprietary. But usually it's something like: if you are looking for a car, you come and apply with us, we ask you a bunch of questions, which may be general or specific to your application, we get some of your documents and your paperwork, we look at things like your bank history, and we use that.
Trond Arne Undheim (00:55:49):
I found some stuff online, if I can list some of it off. This was, I think, one of your sites or marketing sites, and it said: education, employment history, family support, I don't know exactly how you measure that, savings, and earning potential. They were very vague categories, but they are certainly things that don't typically go into a traditional FICO score. They're more intangibles. So you have found a clever way to gather some data on those things, and then you do the mysterious deep learning on top of it.
Otkrist (00:56:16):
Yeah, that's what happens. So there's a machine learning model inside the pipeline, which I cannot describe, but it takes these inputs, and we are able to get some risk analysis from them. Then parts of this may be used to figure out, potentially, how good a borrower you are. And then there are loan officers who understand the space quite well, and they use this, together with the person's application and the data the person gave, to make decisions.
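Since the actual Lendbuzz model is proprietary and not described here, the following is a purely hypothetical sketch of what a risk-scoring pipeline of this general shape might look like: invented applicant features go in, an estimated default probability comes out. The feature names, the synthetic data, and the choice of a gradient-boosting classifier in scikit-learn are all illustrative assumptions, not the company's method.

```python
# Hypothetical credit-risk scoring sketch; NOT Lendbuzz's actual model.
# All feature names and data below are invented for illustration only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
# Invented applicant features: years of education, months employed,
# savings rate, and a projected-income proxy for earning potential.
X = np.column_stack([
    rng.integers(10, 22, n),        # years_of_education
    rng.integers(0, 120, n),        # months_employed
    rng.uniform(0.0, 0.5, n),       # savings_rate
    rng.normal(50_000, 15_000, n),  # projected_income
])
# Synthetic default labels, loosely correlated with the features.
logits = -2.0 + 0.00002 * (60_000 - X[:, 3]) + 0.01 * (60 - X[:, 1])
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)

risk = model.predict_proba(X_te)[:, 1]  # estimated probability of default
print("AUC:", roc_auc_score(y_te, risk))
```

In a real pipeline of this kind, the risk score would typically be one input among several for a human loan officer, as Otkrist describes, rather than an automatic decision.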
How to stay up to date in the AI and deep learning field
Trond Arne Undheim (00:56:46):
Okay, this is super exciting. It's been a good discussion so far. As we're rounding up: how do listeners, whether they are experts in this domain just trying to stay up to date, or neophytes just getting into this area, or even that 12-year-old we were talking about, Otkrist, in 2020, how do they begin? Or how do they continue? Because it's not just beginning, right? You wake up in the morning and you could be out of date.
Otkrist (00:57:14):
Yeah. I would say that for me, it's not a conscious effort. What I've learned is that if it's something that you have to consciously make an effort for, you may lose your willpower. If you look at some of these books that I have been reading, Atomic Habits and another book I read, they talk about this idea that willpower is actually a resource that can get depleted. So for me, it's not about willpower; it's about the interest, the addictiveness of this area, and the amount of research that's going on. For me to stay away from it, even for a bit, is very, very difficult. If I'm going on a vacation, I will probably be checking my laptop or my phone, reading papers when my wife is not around, and my wife gets that.
Trond Arne Undheim (00:57:58):
But where do you go? Where do you read? Linus Tech Tips, that line of tech guides?
Otkrist (00:58:02):
That's completely different; there's very little of this there, they explain computer hardware, but it's also another area that interests me a lot. So I tend to subscribe to a lot of YouTube channels. YouTube is one of those things I listen to a lot in the background when I'm working, because it has something interesting going on, and it also helps me focus. At the same time, Y Combinator has a very nice blog. There is Google Now, which has cards; they automatically start pushing you more and more about the area you're reading about, so I've been getting a lot of cards from Google Now. Then I have mailing list subscriptions; those are very nice. I think you can really easily get bogged down if you start doing all of these too much at once.
- Y Combinator blog https://blog.ycombinator.com/
- Google Now https://www.wikiwand.com/en/Google_Now
Otkrist (00:58:42):
For me, it was a slow trickle. I kept adding these things, these resources, but ultimately it was interest-fueled; it was very much coming from inside. So if you're interested, if you love it, just go and start reading these blogs, start following. One thing I would say is: be really careful. There's a lot of misinformation and there are a lot of pretenders. Unfortunately, there's always this, but you want to go to the right sources and you want to look at multiple sources. One of the things I've found is that if I see something, unless it's from a credible source, I always try to find a way to verify it: verify the news, verify the paper. A paper can be verified by something like citation count: how many times have people cited this paper? How many times have people used it, replicated it? For something like a news piece, I will tend to see if there's another place that says the same thing, and how many of the places agree with it. So I'm super careful about making sure that my sources are reliable. I think that's one of the biggest problems with media right now: there's a lot going on, and some of it may not be right.
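As a concrete illustration of the citation-count check Otkrist describes, here is a small Python sketch that looks up a paper's citation count via the public Semantic Scholar Graph API. The endpoint and response shape are assumptions based on that API's public documentation, and the paper title is just an example; treat this as a sketch, not a recommendation of any particular service.

```python
# Rough credibility signal: fetch a paper's citation count from the
# Semantic Scholar Graph API (endpoint assumed from its public docs).
import requests

def citation_count(title: str) -> int | None:
    """Return the citation count of the top search hit for `title`, if any."""
    resp = requests.get(
        "https://api.semanticscholar.org/graph/v1/paper/search",
        params={"query": title, "fields": "title,citationCount", "limit": 1},
        timeout=10,
    )
    resp.raise_for_status()
    hits = resp.json().get("data", [])
    return hits[0]["citationCount"] if hits else None

# Example paper title, used purely for illustration.
print(citation_count("Attention Is All You Need"))
```

A high count is only a weak proxy for quality, which is why Otkrist pairs it with replication and cross-checking against multiple sources.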
Trond Arne Undheim (00:59:44):
Well, thank you so much for the interview and for this fascinating discussion. I certainly learned a lot. Thank you for your time, and we'll have to do it again.
Otkrist (00:59:53):
Yeah. It was very nice. It was very nice hanging out here and absolutely anytime you guys want, I would love to come back and talk about this and talk about other things. There were so many things that we didn’t talk about. So yeah. Thanks for having me. See ya.
Trond Arne Undheim (01:00:06):
Alright, perfect. Thank you.
Outro (01:00:26):
You have just listened to episode eight of the Futurized podcast with host Trond Arne Undheim, futurist and author. The topic was reality and hype in deep learning. Our guest was Otkrist Gupta, VP of data science at Lendbuzz and PhD in machine learning from MIT, with background from Google, Yahoo, and LinkedIn. My takeaway is that deep learning is still a promising technique within artificial intelligence, but faces a steep challenge getting out of the black box of poor explainability, generalizability, and data efficiency. The future depends on it. Thanks for listening. If you liked the show, subscribe at Futurized.co or in your preferred podcast player and rate us with five stars. Futurized–preparing you to deal with disruption.
Futurized.co https://www.futurized.co/