Futurized goes beneath the trends to track the underlying forces of disruption in tech policy, business models, social dynamics, and the environment. I’m your host, Trond Arne Undheim, futurist and author. In episode 35 of the podcast, the topic is the quest for artificial intelligence. Our guest is Peter Voss, CEO and founder of Aigo.ai. In this conversation, we talk about how the field of artificial general intelligence has evolved, what intelligence really is, whether machines have it, and what it takes to bring true progress to this field. A quick word from our sponsor. Do you have business challenges where you would like high-quality external input from experts? Yegii is an insight network with access to on-demand teams made up of select talent from thousands of experts across industries and markets, including financial services, education, software, energy, healthcare, and life science. Check out archives.yegii.com. That’s Y-E-G-I-I.
- Futurized.co https://www.futurized.co/
- Futurized RSS https://www.futurized.co/feed.xml
- Yegii – insight network service https://archives.yegii.com/
Trond Arne Undheim (00:01:12):
Peter, how are you doing?
Peter Voss (00:01:14):
Great. And how are you?
Trond Arne Undheim (00:01:16):
I’m doing great. I’m so excited to speak to you today. So Peter, you are no stranger to podcasts or to the public. You have been in artificial general intelligence for a while now as the founder of a couple of interesting companies, the latest one being Aigo.ai. I wanted to ask you a little bit about your background, because you have this quite interesting background for pursuing AI: you actually built a company, and then you’re using a lot of your own funds to really build out a very special vision of AI. Tell us a little bit about how you got there, and what might be the most important influence that guides and motivates you. Looking at your background, there’s not an enormous amount to find about you prior to these two companies, I guess.
- Peter Voss LinkedIn (vosspeter) https://www.linkedin.com/in/vosspeter/
- Peter Voss Twitter (peterevoss) https://twitter.com/peterevoss
- Peter Voss Medium (@petervoss) https://medium.com/@petervoss/my-ai-articles-f154c5adfd37
- SmartAction https://www.smartaction.ai/
- AGI Innovations https://agiinnovations.com/
- Aigo (@Aigo_ai) https://www.aigo.ai/
- Podcast interviews: Heartland Institute (2020): https://www.heartland.org/multimedia/podcasts/advances-in-artificial-intelligence-guest-peter-voss; AI In Action (2018): https://alldus.com/blog/podcasts/ai-in-action-e03-peter-voss-founder-ceo-chief-scientist-at-agi-innovations-aigoai/
Trond Arne Undheim (00:02:19):
So maybe you could just outline a little bit for us your path, and then we can get to our current concern.
Peter Voss (00:02:26):
Yeah, certainly. So I started out being fascinated by electronics, and I got trained as an electronics engineer and started my own company doing microcontroller systems for industrial applications. But then I fell in love with software, and my company changed from a hardware company into a software company. I developed several software systems, including databases, programming languages, and an ERP software system that became quite successful. And that company grew rapidly from the garage to
Peter Voss (00:03:00):
400 people. We did an IPO, which was super exciting, but that also gave me the flexibility, when I exited the company, to really pursue what I’ve been doing for the last 20-plus years. And that is to figure out how we can build software that has some intelligence, that can actually learn and reason, and not just rely on the scenarios the programmer thought of catering for.
- DARPA perspective on AI https://www.darpa.mil/about-us/darpa-perspective-on-ai
- Four waves of AI https://fortune.com/2018/10/22/artificial-intelligence-ai-deep-learning-kai-fu-lee/
- The 4 Waves of AI: Who Will Own the Future of Technology? https://singularityhub.com/2018/09/07/the-4-waves-of-ai-and-why-china-has-an-edge/
- Rethinking artificial intelligence http://news.mit.edu/2009/ai-overview-1207
- Transcript of Professor Neil Gershenfeld’s Talk at AI World Society Summit https://bostonglobalforum.org/aiws/ai-world-society-summit/ai-world-society-summit-ai-world-society-summit/transcript-of-professor-neil-gershenfelds-talk-at-ai-world-society-summit/
- Hubert Dreyfus https://www.wikiwand.com/en/Hubert_Dreyfus
Trond Arne Undheim (00:03:27):
Was it your engineering background that kind of forced that perspective on you? Or would you say there was something you discovered along the way?
Peter Voss (00:03:39):
Yeah. On the transition from hardware to software: I found designing hardware systems, microcontrollers and so on, quite fascinating. But when hardware became more programmable, when the first microprocessors really started becoming available and powerful, you got instant gratification. You can sit down and write something, and a few hours later you can actually have a working program. Whereas with hardware, you design the circuit boards and you send them out for manufacture, and it might take a few days or a week to come back, and then you assemble it and all that. So the excitement of working with software, and the flexibility you have, really sold me on it, and just how much you can achieve in a relatively short period of time.
Taking off five years to study intelligence
Peter Voss (00:04:33):
And it’s sort of limitless, the complexity and intelligence you can build. But what struck me, even though I was very proud of my own software that we built, was that it was still dumb. If the programmer didn’t think of some scenario, it would just crash or come up with an error message or something. You couldn’t teach the software; it wouldn’t learn, it wouldn’t adapt, it wouldn’t reason. So when I had the time on my hands to reflect on this, I really asked: how can we solve that problem? How can we make software intelligent? So I actually took off five years and studied intelligence very deeply, starting with epistemology, the theory of knowledge. How do we know anything? What is reality? What is our relationship to reality? How do children learn? How does our intelligence differ from animal intelligence? What do IQ tests measure? All of that. And that laid...
Trond Arne Undheim (00:05:33):
I’m curious about that five-year process, and I’m slightly envious; I’m sure a lot of people are envious. To take out five years, mid-life, mid-career I guess, to study something ostensibly new, and at that depth. How did you go about studying intelligence? You said epistemology
Trond Arne Undheim (00:05:58):
and, you know, consciousness and all these subjects. But did you do desk research? Did you interview experts in the field? Did you subscribe to newsletters? Did you go to libraries? What was your approach?
Peter Voss (00:06:09):
Reading, primarily. I bought hundreds of books on the topic. I had a number of discussion groups as well, some that I started myself; I joined philosophy and psychology discussion groups and so on. And then I went to a lot of AI conferences as well, but that’s the other perspective: on the one hand there’s the cognitive perspective, and then there’s what people have actually done in the field of AI. And because I don’t have an academic background, I didn’t automatically slot into one of the two groups of AI, the one being the connectionist, neural network group, and the other what’s now called good old-fashioned AI, or symbolic AI. What really struck me is that these two groups, these two factions, were almost like religions. They didn’t talk to each other; they couldn’t relate to each other. And I found that really quite weird, because I could often translate what the people in the symbolic camp would say into the connectionist domain and vice versa. So it’s really integrating these many sources of information that was, I think, essential to figuring out how to build an intelligent system.
The two camps of AI – connectionist and symbolic AI
Trond Arne Undheim (00:07:30):
So I’d like to build this up a little slowly, and I’ll get back in a second to these two camps that you characterize as connectionist and symbolic AI. But a little bit before that: would you say either of the two traditions relates at all to this cognitive tradition in psychology and philosophy?
Peter Voss (00:07:50):
Yeah. In my mind, they both do, absolutely. And I can probably illustrate that; I’m not sure it’s going to come across with the context that I take for granted, but concept formation is such a central part of intelligence, of human intelligence: the ability for us to form new concepts on the fly and to use those concepts contextually. So if we talk about a giraffe, we have a picture of a giraffe, how tall it is, what size it is, what color it has. But then you just change the context a little bit, and you’re now talking about toys, and suddenly we’re able to take this concept of a giraffe and immediately contextualize it and say which of the attributes of a giraffe are important.
Peter Voss (00:08:48):
Once you have it in the domain of toys, suddenly the size is no longer important; in fact, it’s going to be smaller. The color isn’t important either, because if you have a pink giraffe, hey, it’s still a giraffe if it’s a toy, and so on. Whereas if you saw a pink giraffe out in the wild, you would immediately think: well, what happened? Did somebody play a joke? Is it fake news? What is it? So that’s the importance of concepts, and why I’m talking about concepts being so important. The thing that builds concepts is really much more on the connectionist side, where you’re dealing with perception. Connectionism, and now the second wave of deep learning and machine learning, tries to get into that domain, where you have a lot of raw perceptual information that serves as the input for building concepts. But the concepts themselves absolutely need the symbolic anchor, the symbol, for us to be able to manipulate them and think about them. When we try to think in concepts, to think conceptually, we are actually thinking in symbols, directly or indirectly, and of course that is language.
Peter Voss (00:10:03):
You know, it’s the symbols that are driving which concepts to activate and in what context. So I see the symbolic side as sort of at the language level, and the connectionist side more at the perceptual level, the input to forming the concepts. But there are also many other insights you get from both camps.
Trond Arne Undheim (00:10:27):
Let’s take a detour back a little bit. How do you define AI, as in just the plain concept of artificial intelligence, before we even get to what you are more known for, which is artificial general intelligence, a term I understand you were part of coining? Let’s go back to what most people have traditionally understood by AI.
- Nick Bostrom, Oxford University https://www.ox.ac.uk/news-and-events/find-an-expert/professor-nick-bostrom
- Artificial General Intelligence (2007) https://www.amazon.com/Artificial-General-Intelligence-Cognitive-Technologies/dp/354023733X
- AGI Revolution: An Inside View of the Rise of Artificial General Intelligence (2016) https://www.amazon.com/AGI-Revolution-Artificial-General-Intelligence/dp/0692756876/
- Artificial General Intelligence http://www.scholarpedia.org/article/Artificial_General_Intelligence
- Artificial General intelligence https://www.wikiwand.com/en/Artificial_general_intelligence
- What is the Turing Test and Why Does it Matter? https://www.unite.ai/what-is-the-turing-test-and-why-does-it-matter/
- A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3070741
Peter Voss (00:10:52):
Right. So that depends on how far back you go in the tradition. If you go back 60-odd years, when the term was first coined, it was about building thinking machines: machines that can think and learn and reason the way humans do. That was the original vision, the original dream of AI. And they thought they could crack it in a year or two; they were very optimistic. It turned out, of course, to be much, much harder. So what happened over the years is that AI really turned into narrow AI, and pretty much anybody working in the field of AI, or studying AI, is really working in narrow AI. What that translates to is basically this: whether you go back to Deep Blue becoming the chess world champion, a symbolic approach, or whether you’re talking about machine learning, deep learning narrow AI, used for image recognition, categorization, or whatever, it’s narrow AI. Basically, we have a particular, defined problem that we’re trying to solve.
Peter Voss (00:12:04):
And then the human in the loop, the programmer or the data scientist, basically figures out how to use the various tools that we have to create a program or a model that will solve that problem. But it’s really the external intelligence that’s making this work.
Trond Arne Undheim (00:12:25):
I’m sensing from what you’re saying, when you characterize it as narrow AI, but what’s wrong with narrow AI? Because I’m sensing a pejorative slant when you talk about it; you think of it as something negative. And a lot of people are extremely excited about AI these days, although there are some signs now that people are hedging their bets and saying, well, it can’t accomplish everything and all that. But is narrow AI already a negative statement for you, or is it more of a descriptive statement?
Peter Voss (00:12:58):
Well, if you picked up that I’m a little negative about it, I think that’s true, but that’s maybe my personal quirk or history. When people talk about AI like that, it kind of really...
Trond Arne Undheim (00:13:15):
I think they’re conflating the term for you. Because, I guess, what I’m picking up on is that when you say narrow AI, I’m sensing that you don’t even really think they’re doing intelligence in a certain sense. Right?
Peter Voss (00:13:32):
Exactly. And I think there’s a very important distinction: the intelligence is external to the program or the model, to a large extent. The key thing that solves the problem is the programmer or the data scientist, whereas real AI is different. In fact, "real AI" is the original term I wanted to use when we wrote the book on artificial general intelligence, but it was a little bit too in-your-face for an academic publication. So we decided artificial general intelligence was the term to use, in 2001, when we coined it. So yeah, whether it’s pejorative or not, narrow AI clearly is extremely useful. But the thing that worries me is that people have really forgotten what the original meaning of the term AI was, because it clearly was about building thinking machines. That seems to have been forgotten. And people don’t make that distinction; they believe, in my mind incorrectly, that if we have enough narrow AIs and we throw them together, we’ll actually have human-level intelligence. And I think nothing could be further from the truth.
The origin of the AI tradition
Trond Arne Undheim (00:14:52):
So let’s go back in time for a second. In the summer of 1956, there’s a conference in New England that’s been presented as kind of the beginning of AI, with a lot of notable experts from around the world, arranged by a couple of professors, I guess at MIT, certainly leading the way. And in the decade that followed, in fact for the next few decades, arguably it was expert systems, rule-based AI, at least that’s what I’ve understood as the main thread, what most people would consider the first wave of AI. Why did that period last so long? Some people will say that around 1989 is the cutoff to the next stage, which you called connectionist; it’s more of a statistical approach, definitely. And let’s get to where the difference really lies between statistics in that sense and these deeper learning systems. But first off, why did this first phase of AI take such an immensely long time? You mentioned they said they could solve it in a couple of years. What was the big problem that took 30 years?
Is current AI a dead end?
Peter Voss (00:16:08):
Well, I mean, a lot longer than 30 years, because now, 60 years later, the majority of people working in AI by far still don’t have any real idea how to solve AI. And in fact, most AI experts agree with that. Take Demis Hassabis, the founder and CEO of DeepMind: their mission is to solve intelligence, and they have a substantial budget, what, 600 PhD-level researchers working on the problem. And yet they freely admit that all the work they’re doing now is not going to get them to intelligence, and they don’t really have a clear path.
Trond Arne Undheim (00:16:58):
A clear path to intelligence. But surely they are building blocks towards something, wouldn’t you agree? Or are they building blocks in some different direction?
Peter Voss (00:17:07):
I think it’s a dead end.
Trond Arne Undheim (00:17:11):
That’s a pretty powerful statement, Peter. How are you so convinced that it’s a dead end? Because there are some people who agree with you, but it is a very extreme position to consider basically everything that’s happened from 1990 until largely now a complete dead end. You’re convicting a lot of professors and industry professionals with that.
Peter Voss (00:17:36):
Yes, I am. And I think the starting point is really that you have to understand what intelligence is, in the context of AI, and what the essential attributes of intelligence are. And here we’re talking about the kind of intelligence the founders spoke of 60 years ago: systems that can learn and reason, that have deep understanding, that can learn and reason in the real world. I don’t really see any of these main efforts addressing those requirements at all. There’s so much money to be made, such momentum in deep learning and machine learning right now, and it’s so useful in so many areas, that that’s where all the money goes, as well as the talent. And just to give you one tiny example that I think is very representative of what I’m talking about: we had a brilliant intern from Germany work on our system, on our cognitive architecture, the third wave of AI. And he totally got it, brilliant guy. He went back to Germany to do his PhD. He couldn’t find a sponsor for anything cognitive, so he ended up doing his PhD in deep learning and machine learning. It’s the only game in town.
Trond Arne Undheim (00:19:04):
So let me read you a quote from a guy I respect highly at MIT, which is where I worked for many years. This is Neil Gershenfeld. He says there are three specific areas, having to do with the mind, the memory, and the body, where AI research has become stuck. Computers are programmed by writing a sequence of lines of code, but the mind doesn’t work that way, because in the mind everything happens everywhere all the time. Is the problem that they have been trying to solve a problem that isn’t actually to do with the mind? Or is it an even larger problem than that, in your eyes?
“The secret of AI is likely to be that there isn’t a secret; like so many other things in biology, intelligence appears to be a collection of really good hacks.” And: “We’ve long since become symbiotic with machines for thinking; my ability to do research rests on tools that augment my capability to perceive, remember, reflect, and communicate. Asking whether or not they are intelligent is as fruitful as asking how I know I exist—amusing philosophically, but not testable empirically.” Neil Gershenfeld, MIT. https://www.edge.org/response-detail/26101
Peter Voss (00:19:50):
Yes, I think that description resonates with me. It’s understanding what the mind does, what the mental processes do, and then turning them into code. I absolutely believe that we can turn them into code, but it has to be the kind of code that can... I don’t want to say self-modify, because that’s a slight mischaracterization. I don’t believe the right methodology is to have a system that writes new code, and some people have tried that. It’s more that your mental processes are executed in a neural-network kind of way, but that neural network can learn and adapt instantaneously. And I think that’s what’s really missing. I’d say the key requirement of real intelligence, of fluid intelligence, human-level intelligence, is that we learn instantaneously.
Peter Voss (00:20:53):
And I can give one or two examples here. Just take the words: "My sister’s cat Spot is pregnant." If I say that to a five-year-old child, they’ll immediately understand five or six facts from that: Peter is talking, Peter has a sister, the sister has a cat, the cat’s name is Spot. You might have thought it was male, but then you hear it’s pregnant, so you know it’s female. And you have that information available immediately for whatever follows in the next sentence: "she’s really big," or you might ask when the kittens will arrive, or whatever. So it’s the ability to learn instantaneously, to form new concepts instantaneously, to be able to reason about things. Those are essential characteristics of intelligent systems. And if your approach, if your architecture, doesn’t allow for those, you’re in a dead end.
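To make the "five or six facts" idea concrete, here is a purely illustrative Python sketch of a sentence being turned into facts that are queryable immediately, including the pregnant-implies-female inference. The class and method names (`KnowledgeStore`, `assert_fact`, `query`) are invented for this example; this is not a description of Aigo’s actual architecture.

```python
# Hypothetical sketch: facts from "My sister's cat Spot is pregnant"
# stored as subject-predicate-object triples, available instantly.

class KnowledgeStore:
    def __init__(self):
        self.facts = set()

    def assert_fact(self, subject, predicate, obj):
        """Add a fact and apply a simple inference rule immediately."""
        self.facts.add((subject, predicate, obj))
        # Instant inference: being pregnant implies the subject is female.
        if predicate == "is" and obj == "pregnant":
            self.facts.add((subject, "sex", "female"))

    def query(self, subject, predicate):
        """Return all objects matching a subject and predicate."""
        return {o for (s, p, o) in self.facts if s == subject and p == predicate}

kb = KnowledgeStore()
# The facts a listener extracts from the single sentence:
kb.assert_fact("Peter", "has_sister", "sister")
kb.assert_fact("sister", "has_cat", "Spot")
kb.assert_fact("Spot", "is_a", "cat")
kb.assert_fact("Spot", "is", "pregnant")

# The inferred fact is available for whatever the next sentence asks.
print(kb.query("Spot", "sex"))  # {'female'}
```

The point of the sketch is the timing, not the data structure: the inferred fact is usable the moment the sentence is heard, with no retraining step in between.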
The neural network metaphor
Trond Arne Undheim (00:21:54):
So let me probe on that a little bit. One of the things that is extremely in fashion right now, and you were alluding to it: the whole notion of a neural network is of course a metaphor, because what it currently means in AI is not a literal neural network. It’s a metaphor for what they think brain researchers have figured out about the brain, implied and operationalized into some software. Isn’t that right? So what do you make of the brain metaphor in the second wave of AI?
Peter Voss (00:22:31):
Right. I mean, deep learning, machine learning, yes, it’s a metaphor, because you don’t really have neurons; it’s basically just mathematics it’s expressed in. But I think it is an extremely useful metaphor, and it certainly helped me to design the system. You have spreading activation, for example: you have a certain concept that is currently active, and what are the things related to that concept that come to mind, that need to be activated? So I think neural networks are a very good model for thinking about the mind. But again, it’s not one or the other; language is really much more on the symbolic side, but the symbols don’t work in isolation. The symbols in fact represent areas of activation.
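For readers unfamiliar with the term, spreading activation can be sketched in a few lines: activation flows outward from seed concepts along weighted links, so changing the context (the set of seeds) changes which attributes are foregrounded, much like the giraffe-versus-toy-giraffe example earlier. The graph, weights, and decay factor below are toy assumptions, not a description of any production system.

```python
# Toy spreading-activation sketch over a hand-built concept graph.

def spread(graph, seeds, decay=0.5, steps=2):
    """Propagate activation from seed concepts along weighted links."""
    activation = dict.fromkeys(graph, 0.0)
    for seed in seeds:
        activation[seed] = 1.0
    for _ in range(steps):
        incoming = dict.fromkeys(graph, 0.0)
        for node, edges in graph.items():
            for neighbor, weight in edges:
                incoming[neighbor] += activation[node] * weight * decay
        for node in graph:
            # Cap activation at 1.0 so repeated input saturates.
            activation[node] = min(1.0, activation[node] + incoming[node])
    return activation

concepts = {
    "giraffe": [("tall", 0.9), ("animal", 0.8), ("toy", 0.2)],
    "toy":     [("small", 0.9), ("giraffe", 0.2)],
    "tall":    [], "animal": [], "small": [],
}

# Activating "giraffe" alone foregrounds "tall"; adding the "toy"
# context raises the activation of "small".
wild = spread(concepts, ["giraffe"])
toys = spread(concepts, ["giraffe", "toy"])
print(wild["tall"] > wild["small"])   # True
print(toys["small"] > wild["small"])  # True
```

In a real cognitive architecture the graph would be learned rather than hand-written, but the contextual re-weighting shown here is the behavior the metaphor is meant to capture.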
Trond Arne Undheim (00:23:35):
So, beyond the brain, there’s also a train of thought, more of an undercurrent, I guess, in the
Trond Arne Undheim (00:23:46):
psychology of perception that has now come a little more to the surface. These people are talking about embodied perception, going beyond the brain to look for inputs for our knowledge. Some of them are doing experiments in visual cognition and memory, and some are neuroscientists, but they’re actually venturing outside the traditional territory of the brain to look at how our body and a much larger part of our nervous system work, outside of the brain as the container of all that activity. Is that at all what’s implied in some of what we’re going to talk about in a second, this third wave? Or would you still place considerations to do with those concepts within the earlier metaphors, the earlier waves of AI?
Peter Voss (00:24:43):
Right. Yeah, this is quite a complex question, as it relates to how much grounding your concepts need, how much real perceptual grounding they need, as opposed to what you could read in books or learn or download or whatever. And I don’t have a clear answer on how little you can get away with. Certainly having, say, a robotic system, whether a physical robot or a robot in a virtual environment, that can sense the environment and use that input to build a model of the world, to build concepts, sounds like a really good approach to me. And in fact, when we started doing our initial experiments, that’s exactly how we started off. But it turned out that that’s actually really, really hard and slow, starting with perception. If you start with physical robots, you spend 99% of your time messing with hardware.
Trond Arne Undheim (00:25:46):
Oh, I don’t disagree. I think it’s hard. But even in terms of the way humans learn, if you’re trying to reproduce how humans learn and you’re not taking the body into account, you’re discounting a large part of our sensory apparatus.
Peter Voss (00:26:00):
Right. So I agree there are definite advantages to that, but the actual experiments we did, and the path we followed over the last 15-plus years, have really been without that input, starting at the language level. Part of that is a practical consideration: it’s just really, really hard to make progress with physical robots, and virtual robotics is still struggling to work well too, so it also has severe limitations. So we started with language and used it as a scaffolding, essentially, with minimal external inputs. Now, my theoretical justification for why I’m okay with that, and this is not a complete justification, is what I call the Helen Hawking model of AI. What I mean by that: think of Helen Keller on the one hand, blind and deaf, and yet perfectly intelligent.
Peter Voss (00:27:19):
In fact, it’s absolutely fascinating reading her autobiography. There was actually one incident in her life where she felt she changed from an animal existence to a human existence, when mentally she had the breakthrough of understanding what concepts are. I found that absolutely fascinating and extremely instructive. But why call it Helen Hawking? Then you have Stephen Hawking, who had very little dexterity. So clearly you can be very intelligent with very limited sensory acuity and dexterity. Or, to take this a step further, talk about a brain in a vat: take a person who already grew up in the world but becomes totally paralyzed. Even take it to the extreme that it’s a brain in a vat, but it’s a person that already has all the knowledge. It’s kind of a proof of concept that once you have the information in the brain, which you got through the senses, you can then function intelligently. So the question is how we get that information in.
- Stephen Hawking https://www.biography.com/scientist/stephen-hawking
- Helen Keller https://www.biography.com/activist/helen-keller
The third wave of AI
Trond Arne Undheim (00:28:32):
Well, I guess that’s what we’re going to talk about: the third wave of AI. The way I understand you define it, and you’ve explained it to me, in some ways it started a long time ago, right? Some people trace it back to around 2000, 2001, and for you it goes almost as far back as that, when you really started your explorations. Define this third wave of AI for us. You have called it artificial general intelligence. You have explained to me that it has to do with some sort of linguistic awareness. To that, I just have a question. Even Marvin Minsky, one of the, I believe, participants in that 1956 seminar, with a legacy at MIT for having had a role in co-founding both CSAIL, the artificial intelligence lab, and the MIT Media Lab, already had a very strong linguistic body of work.
Trond Arne Undheim (00:29:33):
I have a couple of his books lying around right here, and I was browsing through them in preparation for our chat. He was very on board with this idea, and in fact he created several datasets that became the backbone of some part of this idea: that we actually need to understand language before we understand anything else, and we need to teach computers real language. But explain to us what you saw lacking in the second approach that is now slowly, because I understand it’s a long process, being fixed by rebuilding and getting at this from scratch, teaching computers language. Tell us how you went about it, what your initial thinking was in terms of how you were going to build the process, and what you do day to day with your team.
Peter Voss (00:30:30):
Quite a few different things here. So first of all, I’d just like to clarify what I mean by AGI, artificial general intelligence. I had already finished my five years of research in 2001, and I got together with a few other people who had the same idea: that the time had arrived for us to get back to the original dream of AI, to build thinking machines. So we got together in 2001, and three of us coined the term artificial general intelligence as the title for a book that we wrote.
Trond Arne Undheim (00:31:08):
But who was that, by the way? You were three?
Peter Voss (00:31:11):
Ben Goertzel, and Shane Legg, who’s one of the founders of DeepMind. Shane was actually working for my company at the time. Now, artificial general intelligence does not specifically focus on language; language was not necessarily a prerequisite for AGI. And the reason I say that is that while AGI is clearly aimed at human-level intelligence, I believe there’s a lot of good work you could do at the level of animal intelligence, the proto-concepts: animals can form simple concepts that are directly grounded in experience. If you built a system that had animal-level intelligence, but of the right kind, where the animal can learn interactively from the environment, think about a dog or a chimpanzee or some higher-level animal. If you could build a system that had this general ability to learn in the wild, I believe you’d be extremely close to having human-level intelligence.
- Ben Goertzel https://goertzel.org/
- Shane Legg https://www.wikiwand.com/en/Shane_Legg
Peter Voss (00:32:27):
I think it would be trivial to upgrade the system then to human-level intelligence. So the keystones, the cornerstones, of what intelligence is are really the ability to learn and reason in real time, to adapt, to use context, and so on. So artificial general intelligence, AGI, is really more about generality: getting away from the programmer deciding, what do I want this AI to do? Do I want it to optimize container packaging, or traffic control, or medical diagnosis? And then using their intelligence to basically write some code or build a model. We want to get away from that.
Trond Arne Undheim (00:33:15):
All of those ambitions, aren't those actually the things people are the most scared about? What you're talking about here is, I guess, both the biggest dream of Ben Goertzel and people like that, and perhaps yourself, but it's also what people fear the most about AI. It's very paradoxical. What you're dreaming of here is basically Elon Musk's and Stephen Hawking's and a lot of people's biggest fear. So why would you dedicate your life to creating what some people are really, really scared about?
Peter Voss (00:33:48):
Well, I think they have come to the wrong conclusions about AI. I see the opposite, and of course that's a whole other discussion; I'm happy to go down that road if you want. The conclusion I've come to is that we, the human race, actually need AI to help us cope with basically modern life, because we don't seem to be doing a great job of that. We actually need more intelligence. We need AGI to help us solve a whole lot of problems that we have.
Trond Arne Undheim (00:34:29):
And which problems in particular? Because that would clarify things for me. What kinds of problems do you hope to solve?
Peter Voss (00:34:35):
Well, I would probably put governance as the first thing. We're not very good at managing our affairs, and without getting into a political hot potato, I'm willing to risk the statement that if Donald Trump and Hillary Clinton were the absolute best two people to be the leaders of the free world, whatever that means, then it seems that something is amiss with the system.
Trond Arne Undheim (00:35:12):
Are you talking about the selection of leaders, or the mental faculties of leaders once they are elected? And are you talking about some sort of complement to the advisory roles that are necessarily advising both presidential candidates and, clearly, also advising presidents and statesmen?
Peter Voss (00:35:30):
Well, actually advising voters more than advising any of these leaders. That's kind of the idea. So yes, I think governance is one of them, but then you look at, obviously, the things that concern us about energy, about pollution, about climate change, but also disease. I mean, look at where we are right now with COVID. Certainly bringing more intelligence to this problem should be a good thing. In fact, I think we may find that we need it.
Trond Arne Undheim (00:36:05):
Okay, so I think the research agenda is clearer to me now. How have you gone about it? That was one of the nested questions in my long soliloquy there. You told us about your first five years, when you studied cognition and consciousness. What was your first step after that?
Peter Voss (00:36:28):
So in 2001 I started my first AI company and hired about a dozen people. It was called Adaptive AI Inc., or A2I2. In fact, I see there's now a new company, or some university, that uses [inaudible]. But in that first company, for about five years, we just built various prototypes, basically to turn my ideas into actual working systems, into working code. That said, we actually started off with an animal model, because that made the most theoretical sense: the system would learn through interacting with the environment. We upgraded that animal model from a mouse to a dog, and then we decided that a child would actually be better. And then we finally decided, all things considered, that concentrating on adult-level language was actually the way we would make the fastest progress, and, quite frankly, have something that we could commercialize. Because while I made a fair amount of money out of the IPO that I did, it wasn't enough for me to indefinitely fund the company, and I could really only afford a group of 10 people. And I don't think AGI will be solved by a group of 10.
Trond Arne Undheim (00:37:57):
So interesting. So have you had a group of 10 essentially working on this since 2001? Have you steadily had that?
Peter Voss (00:38:04):
Yes. Pretty much. Yeah.
Trond Arne Undheim (00:38:07):
Yeah. That's an independent question, but 10 people is, for some startups, a lot of people. So presumably you could accomplish a lot, but it depends what you're trying to do. If you're just trying to sell an e-commerce product, with 10 people you should get pretty far, right? But for the kind of task that you're doing, where is the energy spent? Where have you spent the bulk of the time?
Peter Voss (00:38:33):
Yeah, absolutely. I mean, what we're doing is uncharted territory. It's not just a matter of picking things up off the shelf, plugging them together, and building a product. To get a sense of that: DeepMind has 600 PhD-level people working on it, and you could ask how much has really been achieved there.
Trond Arne Undheim (00:38:56):
They're not working on the same thing though, Peter. That's my question.
Peter Voss (00:38:59):
Not at all. I'm just saying they're generally working on AI. Well, in a way they are actually working on the same thing: they're trying to solve intelligence, which is what we are doing. They're just using a different approach. So it clearly is very difficult, and so much along the way has to be invented: the tools that we use, how we train the system, how we test it, even figuring out what kind of people are the right people to work on the project, what kind of skills you need, training up those people. And then you train somebody up for two or three years, and the kind of smart people we have decide, well, actually, I love this project, but I really need to go back and finish my PhD, because I promised myself that.
Trond Arne Undheim (00:39:51):
I'm curious. There are some details here that you're not disclosing, the details of what you guys are working on, which I can appreciate. But on the other hand, crowdsourcing has also become really popular during this period you've been working on this. Wouldn't there have been, or isn't there still, a way one could make progress, and is anybody trying to make progress on AGI using more of a crowdsourced approach? Because surely the only way you can use crowdsourcing is if some of the tasks are modular, and to the degree they're not, you're in trouble, because you need to integrate all of it at every point in time. And at that point it doesn't matter if you have 600 or eight or 10 people; at some point you just need to sit down and do the integration. So what kind of a problem is it?
Peter Voss (00:40:45):
So to give you a quick answer: yes, it is a highly integrated system. And in fact, I can use that to also explain a little bit more about what the third wave of AI is. I have a slightly different timeline from the one you mentioned. I think symbolic AI really was the king until probably eight, nine years ago, because neural networks had not really worked that well and were very limited. Most AI projects were really symbolic, first wave. And people had actually written off neural net, connectionist approaches to a large extent, until the breakthroughs we had about eight, nine years ago.
Trond Arne Undheim (00:41:34):
You're putting the whole statistical approach into that bucket. So it's a double bucket. I'm completely in line with that; I mean, deep learning as such, I completely agree with you.
Peter Voss (00:41:45):
So if we call that the second wave, then that's only, you know, nine years old or so. So what is the third wave? DARPA came up with these terms first, second, and third wave that I'm using here; they've given presentations, and their description of the third wave focuses on adaptivity: the system is adaptive, it can learn in real time, and it can reason. The way I tend to describe it, from a practical point of view, is as a cognitive architecture. And by cognitive architecture I basically mean you start off by asking, what does intelligence require? What does general intelligence require? And then you make sure that the architecture you have has the components that meet these requirements. Now, cognitive architectures have actually been around for quite a few decades in universities.
- DARPA–perspective on AI https://www.darpa.mil/about-us/darpa-perspective-on-ai
Trond Arne Undheim (00:42:47):
What are some examples of other cognitive architectures that you're building on in order to get this done?
Peter Voss (00:42:53):
We're actually not building on any of the existing ones. There's a list of 30 or 40 of them that have been around; I don't remember exactly what they all are, but Ben Goertzel has one, you know, OpenCog. There have been a number of them, but they're sort of in the same place where neural nets were nine years ago, where people say, well, hey, we've tried this for decades and it hasn't worked. Well, it doesn't work until it works, you know?
Trond Arne Undheim (00:43:29):
No, I mean, that’s every innovator’s trouble, right? You look very dumb until you look very smart,
Peter Voss (00:43:35):
Right. And with deep learning, machine learning, it was basically just that suddenly we had enough training data, we had enough computing power, and a few tweaks to the algorithm, and suddenly deep learning was incredibly powerful and successful. It's now the only game in town. Now, to get back to the integration question, how modular it is: why haven't cognitive architectures worked? There are a couple of reasons; I would say three reasons that I'd like to cite. One of them is exactly that all the cognitive architectures out there, other than our own, use a very modular approach, because that's a sensible engineering approach. If you have something modular, it's much easier to debug and you can swap different components, so it makes sense to build a modular thing. But the brain, the mind, can't actually work that way effectively.
Peter Voss (00:44:37):
And I'll unpack that a little bit. So what people might do is say, okay, we need a parser. What's the best parser? We'll use the Stanford parser, that's great. Okay, now we need some kind of knowledge graph to represent the knowledge that we have. So what's a good graph database? And they'll pick a graph database and plug that in. And then, okay, we need a reasoning engine. But with these separate components that weren't designed to really work together in any way, you have this horrible impedance mismatch; they simply cannot work effectively together. So the fact that people have had modular architectures is exactly one of the things I see as why it hasn't worked up to now. The second thing is the knowledge graphs that are being used: if you use an external graph database, you have a tremendous performance penalty.
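The bolted-together pipeline Peter is criticizing might be sketched like this (all component names and behaviors here are hypothetical stand-ins, not Aigo's code or the real Stanford parser API): each hand-off forces a lossy conversion between representations that were never designed for one another.

```python
# Sketch of a modular cognitive architecture: parser -> graph store -> reasoner.
# Every arrow needs a conversion layer, which is the "impedance mismatch".

def parse(sentence: str) -> list[tuple[str, str]]:
    """Stand-in for an external parser: returns (token, part-of-speech) pairs."""
    pos = {"dogs": "NOUN", "chase": "VERB", "cats": "NOUN"}
    return [(w, pos.get(w, "X")) for w in sentence.lower().split()]

def to_graph_triples(tagged: list[tuple[str, str]]) -> list[tuple[str, str, str]]:
    """Conversion layer 1: squeeze the parse into subject-predicate-object
    triples for a separate graph store. Nuance in the parse is lost here."""
    nouns = [w for w, t in tagged if t == "NOUN"]
    verbs = [w for w, t in tagged if t == "VERB"]
    return [(nouns[0], verbs[0], nouns[1])] if len(nouns) >= 2 and verbs else []

def to_reasoner_facts(triples: list[tuple[str, str, str]]) -> list[str]:
    """Conversion layer 2: reformat triples yet again for a separate
    reasoning engine that expects Prolog-style fact strings."""
    return [f"{p}({s}, {o})" for s, p, o in triples]

facts = to_reasoner_facts(to_graph_triples(parse("Dogs chase cats")))
print(facts)  # ['chase(dogs, cats)']
```

Each translation step discards information the next component might have needed, which is one way to read Peter's claim that components not designed to work together "simply cannot work effectively together."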
Peter Voss (00:45:33):
So we've developed our own graph database that's fully integrated with all of the other components; all of the components we developed ourselves, so that they can work together synergistically. And the graph database we have is two orders of magnitude faster than any commercial graph database. So, you know, a hundred times faster. If you have a one-second response time with our system, that would be a hundred seconds otherwise, and that just pushes you outside the realm of being practical or useful, or even being able to do experiments. So the modularity and the performance of the graph database are the two technical reasons why they haven't worked. And the third one is basically an accident of history: machine learning, deep learning, has been so incredibly successful in the last eight, nine years that it sucked all the oxygen out of the air. So basically nobody can work on cognitive architectures unless they're bloody-minded like I am and have enough funds to pursue it.
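The one-second-versus-a-hundred-seconds point is simple latency arithmetic: answering one utterance may take many thousands of graph lookups, so per-lookup overhead multiplies directly into response time. A back-of-the-envelope sketch, with all numbers purely hypothetical (not Aigo benchmarks):

```python
# Why a 100x-slower graph store breaks interactivity:
# response time ~= lookups-per-utterance * cost-per-lookup.

LOOKUPS_PER_UTTERANCE = 10_000   # hypothetical traversal count for one input

embedded_lookup_s = 100e-9       # ~100 ns: in-process pointer chasing
external_lookup_s = 10e-6        # ~10 us: round-trip to an external store,
                                 # i.e. two orders of magnitude slower

embedded_total = LOOKUPS_PER_UTTERANCE * embedded_lookup_s
external_total = LOOKUPS_PER_UTTERANCE * external_lookup_s

print(f"embedded graph: {embedded_total:.3f} s per response")  # 0.001 s
print(f"external graph: {external_total:.3f} s per response")  # 0.100 s
```

The ratio is what matters: the same factor of 100 turns Peter's one-second response into a hundred seconds, which is why he argues an external graph store makes even experimentation impractical.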
Trond Arne Undheim (00:46:38):
Hmm. Well, so let me ask you this question. You are very optimistic that AGI is going to solve a lot of great problems. If you take Ray Kurzweil, who is a mix of an optimist and a realist in my mind, he wrote The Singularity Is Near in 2005, and he says that between 2015 and 2045, and he's already five years expired, there's some percentage chance that the singularity will arrive. What does singularity mean, and do you relate at all to that concept?
- Ray Kurzweil https://www.kurzweilai.net/
Peter Voss (00:47:17):
Well, it's always a difficult topic, because you end up making a lot of enemies, or you're written off as a kook, if you start talking about it.
Trond Arne Undheim (00:47:30):
Feel free to say you refuse to acknowledge the term, but I'm just curious, because you are one of the few people that uses the word AGI, so I have to be able to ask you about the singularity, which is quite related.
Peter Voss (00:47:41):
Yeah, no, I'm willing to step into that water. So yes, to me the singularity is not that we will have infinitely growing intelligence. The singularity, to me, is a very important point at which AI becomes capable enough to improve its own design. It's basically when I, as an AI designer, become obsolete: when my AI is better at improving its design than I am.
Trond Arne Undheim (00:48:23):
You have in mind a fairly limited design, in the sense that it's not going to be a generic, self-reproducing type of thing, right? I think what Kurzweil has in mind, and definitely what Bostrom has adopted in his book Superintelligence, just to speak to people who are concerned about this singularity-type superintelligence thing, is some sort of generic, machine-like technological growth that, the moment it passes us, will radically outpace us and be irreversible, resulting in unforeseen and unquestionably negative changes to human civilization, just because we can't control it.
Peter Voss (00:49:13):
Yeah. I think there are a lot of different things packed together there. And I do believe that once we reach that level, there will be a very rapid increase in intelligence. And of course you can't put the genie back in the bottle. In fact, you can't put the genie back in the bottle right now: unless we have a complete breakdown of civilization, we are going to get human-level AI. And once we get human-level AI, we will get superhuman-level AI, because AI will be better at designing itself, improving itself, than we are.
Trond Arne Undheim (00:49:50):
So, Peter, the counterargument has always been, in some people's minds, that if you look at human intelligence, it varies a lot. For every Stephen Hawking, every polymath out there, even every good artist, there are thousands and thousands of bad artists. Why is that? Given that even the variability within human intelligence is so large, what kind of point are we really talking about? Are we talking about average intelligence? Are you saying
Trond Arne Undheim (00:50:28):
that it doesn't really matter? Once we get into the vicinity of human intelligence, it'll immediately spike up?
Peter Voss (00:50:35):
Yeah, I don't think it really matters too much.
Trond Arne Undheim (00:50:39):
So at that point, you know, between Trond and Peter and Einstein, the point is the machine doesn't care, because that's kind of a blip; that's a week's work, basically, for a machine when it gets to that level.
Peter Voss (00:50:51):
Correct. Because I do agree that there's variability in human intelligence. But there's also the whole issue of motivation and access to knowledge. And those are, of course, very much controllable in an AI: it will have access to all the knowledge you want to give it, and it will have maximum motivation, because we'll be building it to have maximum motivation. It's not just going to goof off; it's not going to get interested in girl AIs or boy AIs or whatever. So it's not going to have those kinds of distractions that we as humans tend to have, where many brilliant minds could have achieved a lot more intellectually, or in building companies or whatever, if they had focused better or had access to better funding or better information. I mean, if you look at all the very smart people in underdeveloped parts of the world, they could probably do a lot better if they had access to better infrastructure.
Trond Arne Undheim (00:52:01):
But let me just stop you there for a second, because isn't there also some sort of machine-based groupthink somewhere in the system, where you can get stuck in machine loops where the machine actually doesn't make real progress? It just starts calculating things that are meaningless in terms of making real progress. Surely at this stage of AI that must be happening. It's not just an ever-increasing capacity for calculation; within very organized domains, I understand, progress is almost linear, but for the kinds of breakthroughs you were talking about, how can it be? Are you just saying that at some point, once you've mapped the system, it ceases to be linear?
Peter Voss (00:52:41):
Yes, because the human will no longer be in the loop to improve its intelligence. I mean, at the moment, with the work we're doing, we are slowly cranking up the capabilities, the IQ, of our system. But the process involved there is, you know, we work, and then we have to go and sleep, and we have weekends, and we have other things that happen.
Trond Arne Undheim (00:53:06):
How smart is your system now, Peter? If you compare it to human intelligence, where a hundred is, you know, not so good, right, and at 120 you should be getting a PhD, where is your system?
Peter Voss (00:53:20):
What we usually say in our sales spiel is, if Siri and Alexa have an IQ of 10, we're at 25. So we are still a long, long way from human-level intelligence. But the real question is, does our architecture, our approach, allow us to go from 25 to 30 to 35 to 40 and so on? I believe it does. So yes, we're a long way, in our own design, from being at an IQ of a hundred. But of course machine IQ is going to be quite different from human IQ; machines are going to be much better at some things. Even at what we might call an overall level of 60, it'll already be at 200 in some domains.
Trond Arne Undheim (00:54:13):
Well, exactly, because it's domain-specific, right? I mean, what you call narrow AI is already extremely useful in some domains.
Peter Voss (00:54:23):
Yeah. And so
Trond Arne Undheim (00:54:24):
Chess, for instance. It's very good.
Peter Voss (00:54:26):
Correct. But it's really when it goes beyond that. We're not at a point where our system can, out of the box, read Wikipedia and really make sense of it. Just scanning Wikipedia and creating a whole lot of bits of information, atoms of information, out of Wikipedia is not understanding Wikipedia. Understanding Wikipedia would be to really read every sentence and integrate it into your knowledge graph in a way that can actually be used. And once you get to that sort of level, it doesn't require a lot of additional coaching and question answering; the human isn't that much in the loop. And this is where crowdsourcing can certainly help, because if Aigo, or whatever AI, gets to a point where it can actually read Wikipedia, for example, well enough that it can ask intelligent questions of people out there in the world, it can say: I don't understand that, can you explain this to me? Or, what does this actually mean? What else do I need to read in order to understand it?
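"Integrating every sentence into your knowledge graph in a way that can actually be used" can be illustrated, very crudely, with a toy triple store: each "understood" sentence becomes a stored fact that later queries can exploit. This is a minimal sketch, nothing like Aigo's actual representation:

```python
# Toy knowledge graph: "integration" here just means storing a sentence as a
# subject-predicate-object triple that subsequent questions can actually use.
from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        # subject -> set of (predicate, object) edges
        self.edges = defaultdict(set)

    def integrate(self, subject: str, predicate: str, obj: str) -> None:
        """Add one fact extracted from a sentence."""
        self.edges[subject].add((predicate, obj))

    def ask(self, subject: str, predicate: str) -> list[str]:
        """Answer 'what <predicate> does <subject> have?' from stored facts."""
        return sorted(o for p, o in self.edges[subject] if p == predicate)

kg = KnowledgeGraph()
kg.integrate("Paris", "capital_of", "France")
kg.integrate("Paris", "population", "2.1M")
print(kg.ask("Paris", "capital_of"))  # ['France']
```

The bar Peter sets for reading Wikipedia is of course far higher than this: extracting the triples reliably from open text, reconciling them with everything already stored, and noticing what it does not understand.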
Trond Arne Undheim (00:55:35):
That's a kind of test you're hoping to pass with your system at some point, this Wikipedia test. Is that more relevant than a Turing test? I don't know if all my listeners are fully aware of what a Turing test is, but how relevant is the Turing test, and what would a meaningful Turing test look like today? It's a test that theorized about how you would distinguish computer from human: essentially, a good computer would be able to fool a human about being human. Then it would pass the Turing test.
- Turing test https://plato.stanford.edu/entries/turing-test/
Peter Voss (00:56:11):
Yeah, I've actually written about the Turing test, and in short, I think it asks both too much and too little, in different ways. It asks too much in that the system is supposed to have all the nuances of a human: all the experience we have growing up as children and so on, all of that kind of knowledge, and the emotional infrastructure, architecture, whatever, that humans have. I think that's asking way too much just to say a system is machine intelligence. But it asks too little in the sense that all it has to do is fool enough of the judges. So it becomes really about fooling judges, and this is basically how the Turing test competitions have actually been run: how can we fool the judges? That's using external human intelligence to build a system that's good at fooling judges. Like the one AI that supposedly passed the Turing test a few years ago: it fooled the judges by saying, hey, I'm just a young foreign kid, and I don't really understand the language too well. And the judges fell for it and said, oh yeah, I think this is a human.
Natural Language Processing – not even close to AGI
Trond Arne Undheim (00:57:20):
So, Peter, there's a whole group of people within AI who are working on natural language processing, or NLP. Is that within this third wave? And are they related to the kinds of things that you're doing? I'm just trying to get a sense of who else is actually contributing to this quest that you're on. Is the NLP field working on language in the way that you are?
- Is NLP the key to unlocking artificial intelligence? https://rakuten.today/blog/nlp-key-to-unlocking-artificial-intelligence.html
Peter Voss (00:57:49):
No, not really, not at all. Again, a lot of the NLP terminology... I mean, are we really talking about natural language understanding? Today, if you look at industry and go to conferences, what people really refer to is kind of a stimulus-response approach: can it identify the intent correctly? But there isn't really any deep understanding in any sense. So again, most of the work being done there is basically big data approaches, statistical approaches, and I'm not really seeing anything that's particularly useful for the third wave, cognitive architectures, AGI.
Conversational AI, cognitive computing, chatbots
Trond Arne Undheim (00:58:39):
Hmm. Talk to us a little bit more about the use case that you're building. Your company, Aigo, is sometimes described as a chatbot, and you're definitely operating somewhere in the landscape of conversational AI. What is a chatbot, in your mind?
Peter Voss (00:58:58):
Right. I struggled a lot to decide whether we would call it a chatbot or not, because we obviously don't think it's a chatbot as people understand it. And in my previous AI company I had the same struggle. The previous company is Smart Action; that's the first generation of our technology. I launched that company in 2008, and there we were targeting what's called IVR, interactive voice response, or automated phone systems. And, you know, people hate these IVR things: you get to one and have to press one for this, press two for that; they're just really annoying. So when we told people, no, ours isn't an IVR, we couldn't get our customers to understand what we were talking about.
Trond Arne Undheim (00:59:42):
So then you had to put yourself in the category of an Alexa, Google Home, Siri, Google Assistant, Microsoft Cortana. These are the chatbots that people know, and then there are the unknown smaller startups, but you're attaching yourself onto that category.
- Alexa https://developer.amazon.com/en-US/alexa
- Google Home https://store.google.com/us/magazine/compare_nest_speakers_displays
- Siri https://www.apple.com/siri/
- Microsoft Cortana https://www.microsoft.com/en-us/cortana
- Top 60 Chatbot Companies of 2020: In-depth Guide https://research.aimultiple.com/chatbot-companies/
Peter Voss (00:59:58):
So that people are sort of, oh, okay, I know roughly what you're talking about. So in the IVR space we ended up branding it as an IVR with a brain, and we're basically doing the same thing now: we're saying it's a chatbot with a brain, or a cognitive chatbot. Basically, none of these chatbots, whether Siri, Cortana, or whatever, have a brain. They don't learn interactively, they don't have reasoning ability, they don't have deep understanding. So they don't have a brain.
Trond Arne Undheim (01:00:29):
How can the market assessment be so huge, then? I think you put on your website that the market for chatbots is around a hundred billion dollars; I don't know where you got that from. And I found a list of some 60 chatbot companies online, so there are a lot of people building companies in this rough space. What is the fascination around chatbots, and what can one reasonably expect chatbots to achieve in even just the next decade?
Peter Voss (01:01:05):
Well, if they're cognitive chatbots, a lot. If they have a brain, you can achieve a lot.
Trond Arne Undheim (01:01:14):
Tell us about some of the things you think you will solve with Aigo.
- Aigo (@Aigo_ai) https://www.aigo.ai/
Peter Voss (01:01:19):
Yeah. So, you know, the things that we are solving, and will solve increasingly well, are conversations, and I'll give a few examples. One of them would be, for example, helping a person manage their diabetes, or whatever condition they have, through actual conversation. Right now you might have a human coach that you can only speak to once a week, because they're expensive. But with Aigo, you can have a conversation every day about your food choices and how you're feeling, and whether you should do this or that. So while the cognitive ability is general, the knowledge base is narrow, and we have to do that right now, because getting the right background knowledge, the common-sense background knowledge, in any domain is actually very hard.
Peter Voss (01:02:15):
It's a very costly exercise. So for us, as a small company, to get common-sense knowledge across the board, on anything, ballet and sports and movies and all the different professions, is just too expensive. So we are concentrating on applications that do require this learning ability. So if you tell your cognitive assistant that you're going someplace on vacation, it can handle questions like: can you get your medication there, or who should be informed if something happens, whatever it might be, if you're using it for diabetes management. Another example is that we are using Aigo as an assistant for salespeople. Salespeople are notoriously bad at using Salesforce, and, well, I can understand why; I have struggled myself.
Peter Voss (01:03:20):
And if you can use Aigo to just talk, and when we say chatbot this also includes voice interaction, which obviously makes the market even much bigger. You can talk to Aigo and say, hey, tell me about my next appointment. What product is he interested in? Does he have any hobbies? Does he have any kids? Whatever you want to cover in your chat with your prospect, with your client. Then, when you're done, you can say, Aigo, set this to high priority, or remind me next Monday to follow up, send them brochure X, and let my boss know.
Trond Arne Undheim (01:03:56):
And is Aigo, your product, doing this with clients, helping clients on these particular two challenges right now?
Peter Voss (01:04:05):
I'm glad you asked that question, because we actually just got off the phone earlier today: we have a very, very large client that we expect to go live with this week. It's in the e-commerce space, and it's basically a personalized assistant that helps you interact with e-commerce. We have also developed a prototype of the sales assistant for one of our clients, and we're working with them. So yeah, we have a number of these. Another application is that we're working with a company that does VR training. They train salespeople and HR people in a virtual environment, but they need a cognitive engine to actually have meaningful conversations within it. So there are just a ton of applications for conversational AI.
Trond Arne Undheim (01:05:02):
I think you told me ten people, and somewhere online it says you have fifteen, but regardless: how many people are required to crack the bigger problem you are concerned with, the AGI challenge? How many people, how many years, how much money? What kind of a challenge is this?
Peter Voss (01:05:25):
Yeah, so we’ve geared up the company. We’ve now raised some money and we’ve grown; in fact, we’re hiring right now. We hope to be about 50 people early next year, and then grow rapidly. How much money it will require to get to that inflection point where things become a lot easier and can be automated is very hard to say, but it’s going to be more than 10 million; I’ve put about that much into the project by now. It will probably take more than a hundred million. Will it take several billion? My guess is you could probably do it with less than a billion dollars, or a few billion, which, considering the amount of money that’s already been spent, or in my mind wasted, on pursuing intelligence, is not a lot of money. So I’m quite confident it would not take 10 billion or a hundred billion or a trillion dollars. And how many people? A few hundred working on it until you get to a point where you can use crowdsourcing and the system can learn by itself; that’s sort of the idea I have.
The future of AI – what is the endpoint?
Trond Arne Undheim (01:06:47):
Well, you said this much earlier, but what exactly would the endpoint be? And given that you think this is controllable, well, first off, it’s not an endpoint, but it is a stage when AGI is a reality, yet without a runaway problem of superintelligence or singularity. Can you describe the state of development after this $1 billion is spent? What are you using the system for? What benefits would we get directly from such a system?
Peter Voss (01:07:29):
So, the kind of things we’re pursuing now involve automating what would otherwise be done by human labor, in customer support and so on, where people can’t afford the human labor. That’s valuable, but it’s not ultimately what’s driving me. Ultimately what’s driving me is having millions of PhD-level AIs working on the problems that we’d like to have solved. I mentioned some of them earlier; some of the others that are particularly dear to my heart are disease and aging generally. I think it’s criminal, from my perspective, that just when we learn how to live properly, we die.
Trond Arne Undheim (01:08:22):
So listen, Peter, I’m getting a clearer sense of your future picture. And by the way, how far into the future? Let’s just say that it’s your company, and we’ll talk in a second about some other startups that might be pursuing smart things, but let’s say it’s you and maybe quite a few others, because let’s not put all our eggs in one basket. And let’s say that you all have a reasonable amount of money, the same kind of money a unicorn startup would get. So you’d eventually get to a billion-dollar valuation, you’d have a few hundred million dollars to run with, at some point you’d have some revenue along the way so you could run your own operation, and eventually you’d be a few billion-dollar companies, and there would be more than one of you. How far out is this? Are we talking within this decade?
Peter Voss (01:09:18):
I think it’s very possible to have human-level AI within this decade, if the right people work on the right project. In fact, I want to expand a little bit on that: yes, if the right people work on the right project.
Trond Arne Undheim (01:09:35):
No, I get that. And the other caveat, which I found pretty interesting, is that you called it millions of PhD-level AIs. That’s a very interesting distinction. You’re saying they will roughly cap out at the level of a very smart human being who has dedicated their career to being smart in a couple of domains, a PhD-level person. And the magnitude of the numbers, having not ten or twenty or thirty but millions of that level of smartness combined, is what you envision this system to consist of. So individual computers, clustered or not, each tackling whatever problem they’re tackling at at least the level of a human with a PhD. That’s kind of what you’re looking at.
Peter Voss (01:10:29):
Yeah. I don’t believe at all that it tops out at that PhD level of intelligence. It’s more shorthand for having
Trond Arne Undheim (01:10:38):
No, I understand, but it’s a description of roughly where you envision that. Yeah,
Peter Voss (01:10:43):
That is because the beauty is that once you’ve trained one AI in a particular domain, let’s say cancer research, which everybody is familiar with and most people would support, you make a million copies of it, and you now have a million of these AIs pursuing different angles, collaborating much better than humans would because they don’t have egos, working 24/7, and communicating much, much better. So we will make tremendous progress in all sorts of domains.
Trond Arne Undheim (01:11:22):
So I’m just wondering, as a kind of devil’s advocate: wouldn’t it be easier to design some sort of incentive structure so that a million PhDs were working on something fruitful, independently of these machines? I mean, couldn’t you just decide today to spend a couple of billion dollars and tell a million PhDs to work on something useful and stop infighting? Wouldn’t that even be fun?
Peter Voss (01:11:48):
Well, no, it wouldn’t be
Trond Arne Undheim (01:11:49):
For instance, the reason I’m thinking of this is that right now there are probably a million PhDs working on COVID. I actually haven’t seen the statistics, but consider that most scientists worth their salt, who were previously doing cancer research or working in whatever field of biology they’re in, are now trying to do something on COVID, or at least have been trying to for the last six months.
Peter Voss (01:12:16):
Right. But first of all, humans are human and have the limitations of humans, so your incentive structures can only go so far. We’ve seen people pull together in wartime and so on, but I don’t see people pulling together all that well with COVID, for example. No, AIs have tremendous advantages over humans. They have photographic memory, they have instantaneous access to the internet, they work 24/7, they can communicate and share information extremely effectively, and they can download whole parts of their brain and copy that over; your million PhDs all get cloned. We can’t do that. It takes you, what, seven years to become a PhD, and then how many more years of specialization in a particular field? With AI, if a particular problem is solved, or you realize you’re in a dead end, you just relearn, reprogram, copy in different new knowledge.
Peter Voss (01:13:21):
And you can switch gears and turn on a dime, basically, in terms of what your knowledge is and what you’re pursuing. It’s a complete game changer. But there’s something else that I’m maybe even more excited about, and that is how AI will improve us as individuals, how it will make for better human beings and help us live our lives. That is what we call the personal personal assistant, or PPA. The reason I double up on the word personal, and it should actually be tripled, is that it’s personal for three reasons. First, you own it: it’s yours, and it follows your agenda, not a mega-corporation’s agenda as we have now. So it’s personal in that way. Second, it’s personalized to you: it knows everything about you that you want it to know, so it’s like your most trusted friend.
Peter Voss (01:14:18):
And the third P is that it’s private: it will only share information with whom you want it to share information. So think of it as a little angel you have on your shoulder that can advise you when you make decisions in your life, what business to go into, relationships, medical or political questions, whatever it might be, and you can talk to this trusted friend and it can help you. I actually see this personal personal assistant as really becoming an exocortex, an extension to our own minds, because we’ll have a psychological coupling with it. It doesn’t even have to be built into our brain. It’ll just become part of who we are.
Trond Arne Undheim (01:15:09):
Yeah, it doesn’t have to be built into your brain. I’ve been studying Elon Musk’s Neuralink statements lately, and he claims he’s going to make another statement on the 28th of August. What do you think about that approach? And that’s as a lead-in to other startups working on something in AI that you consider promising. Who are the others that are at least working on more promising strains of AI than the current deep-learning-only perspective?
Peter Voss (01:15:44):
So, Neuralink. I think it’ll be useful to have that, but I think it’s also going to be extremely challenging to get it to work well. One hurdle I can sum up in a single word: FDA. That immediately pushes it out by a few decades before something like it will be approved; people are going to freak out about mind control, about who controls these computers. But it’s actually not going to be that useful, because we can’t speed up our brains, our thinking. If we get bombarded with information through the neural link faster than our meat brains can process it, we’ll just have seizures. You can’t think at a hundred times or a thousand times the speed, whereas an exocortex can; it can think a million times faster than you can with your meat brain.
Trond Arne Undheim (01:16:50):
I’m not inside Elon’s mind, but I have an idea that he’s probably thinking it could go two ways, right? One thing is to give the brain input, but perhaps it’s also to learn more about the brain, in which case, if you do that, and you believe this metaphor has something to it, you could design computer systems that truly are neural links, meaning extensions of the brain. But you’re skeptical.
Peter Voss (01:17:19):
Yeah. I don’t think that using the neurophysiology, the biology of what we have, is particularly useful for building advanced AI. The analogy I always use is this: we’ve had flying machines for more than a hundred years, and yet rather than reverse-engineering a bird, you really want to ask, what is it we’re trying to achieve? Then you use the materials at hand and the engineering skills at hand to build a flying machine, which is very different from a bird. In the same way, I see us building a thinking machine, and that will be much more doable than reverse-engineering the brain.
Trond Arne Undheim (01:18:09):
So you have a singular mind, Peter; you’re very dismissive of a lot of the activity out there. So I’m curious: is there anybody in the startup field, on the corporate side, or even within universities that you currently see really doing the right thing, doing promising things? Neuralink you’re lukewarm on. So who are some of the other actors that are potentially doing this the right way?
Peter Voss (01:18:42):
Yeah, so I don’t spend a lot of time on a day-to-day basis worrying about it, because I’ve already spent too much time trying to build communities with other AGI-minded people, and I’m still on various lists and so on. I think I sent you a list, actually, of some of the people that
Trond Arne Undheim (01:19:03):
Yeah, these are some of the people, and some of them do have companies. I mean, Ben Goertzel obviously has several companies, and he’s working on Sophia, the talking robot.
Peter Voss (01:19:16):
Anybody in the field of third-wave AI or AGI that actually has a commercial product? Quite frankly, I’m not aware of anyone.
Trond Arne Undheim (01:19:32):
It’s quite a remarkable statement, though, to say that you’re not aware of anyone else doing something innovative in this field. Is it really possible that the entire globe is on the wrong perspective? That’s a little like the shift from the Copernican to the Galilean worldview. Are you really saying that this thinking about how the mind works, this neural metaphor, has gotten the entire planet thinking in circles?
Peter Voss (01:20:09):
Well, these things do happen. And it’s not that my team are the only people who think along these lines. After all, DARPA presented this third wave of AI, so clearly people at DARPA are thinking about what we really should be working on. In fact, they supposedly funded the third wave of AI with $4 billion, but when government money is involved, who knows where it ends up; it always tends to end up in the wrong pocket.
Trond Arne Undheim (01:20:40):
So my point is, maybe they would be willing to come on my show and explain where this money went. But they must be sending it somewhere, right? It’s not all going to secret underground labs; it’s going to startups.
Peter Voss (01:20:55):
People have also become very good at convincing people with money that deep learning and machine learning are the way to go. OpenAI convinced Microsoft to put a billion dollars in, with a statement saying, essentially, we’ve cracked AI, we just need more data. And I think that
Trond Arne Undheim (01:21:17):
So OpenAI also falls into this second AI wave for you, or not?
Peter Voss (01:21:22):
Absolutely. Their CEO has stated quite emphatically that all they need is more data and more computing power, and that’s all they need. And I think they are totally wrong. So, there are hundreds of people I communicate with and am very sympathetic to, who agree on the shortcomings of deep learning and machine learning and that we need something like the third wave, or a cognitive architecture, but none of these people, that I’m aware of, actually have the wherewithal, the funding, to do something significant in the field.
Tracking the field of AGI
Trond Arne Undheim (01:22:04):
All right. So, Peter, summing up: let’s assume that some of our listeners actually have the resources and are interested enough to pursue some of your line of thinking. How do they go about tracking this field, discovering it, and finding out about individuals and approaches they should follow? I will obviously link to your work, so your answer number one, look at what Peter’s doing, is covered. But is there anything else that has been even marginally useful to you?
Peter Voss (01:22:37):
Yeah, absolutely. We’ve been asked this question dozens, if not hundreds, of times by potential investors and people interested in background information: who else is working on it, and so on. And we have a whole pile of information. It really depends on their background, how technical they are, what their interests are, and what conclusions they’ve already come to; where they’re coming from makes a big difference to where the conversation goes. But yes, we have a lot of information on it.
Trond Arne Undheim (01:23:14):
I mean, Ben Goertzel obviously has a lot of interviews and podcast appearances. My good colleague Lex Fridman at MIT, who runs his podcast on AI, has interviewed him for three hours. That should be a pretty rich source about SingularityNET, singularitynet.io, these kinds of places, no?
- Lex Fridman podcast https://lexfridman.com/podcast/
Peter Voss (01:23:38):
Yeah. I mean, Ben Goertzel, I’ve obviously worked with him; we worked together from around 2001. As far as I can tell, in the last two years he did an ICO, raising a bunch of money through a blockchain. So his focus right now is really on building a community of people who can throw together different AI algorithms and have a marketplace for AI algorithms. To the best of my understanding, that’s where his focus is now. To me, that is not at all useful in terms of solving AGI. It may be useful in itself to have a marketplace of clever algorithms, but of course there are already many other companies doing something similar. So whether a blockchain approach of rewarding and paying people for their algorithms will work out well, I don’t know; I don’t know enough about it. But unfortunately, I don’t see his work really contributing to solving the problem of AGI.
Trond Arne Undheim (01:24:50):
What about the AGI Society and the annual AGI conference? Are these societies active? Is there some work going on there?
Peter Voss (01:24:59):
Yeah, unfortunately this year’s, of course, had to be canceled. I haven’t been able to go for the last three years or so, but I have been to several of them, and it’s a fantastic place to meet other people who are really interested in and knowledgeable about AGI. With some hundred attendees, you probably have, well, not quite 200 different approaches, but maybe 30 or 40.
- Forums. Facebook AGI group https://www.facebook.com/groups/Artificial-General-Intelligence-(AGI)-1649304532006202/
- Like-minded people (Ben Goertzel)
- Opencog Foundation https://opencog.org/
- SingularityNET https://singularitynet.io/
- Hanson Robotics https://www.hansonrobotics.com/
- Sophia, Chief Humanoid https://www.wikiwand.com/en/Sophia_(robot)
- Annual conference on AGI http://agi-conference.org/
- AGI society http://www.agi-society.org/
- Think about it for yourself.
- Patricia Churchland (Neuroscientist/philosopher) https://patriciachurchland.com/
Trond Arne Undheim (01:25:28):
So you are an eclectic bunch trying to solve the world’s biggest problem.
Peter Voss (01:25:33):
Yes. But I have certainly always found it very worthwhile going to those conferences and meeting really interesting people there.
Trond Arne Undheim (01:25:45):
Well, Peter, this has been a very mind-opening discussion for me. I hope you feel you got some of your points across, and I thank you very much for sharing all your insight with me and the listeners.
Peter Voss (01:26:00):
Well, thank you for guiding me down some dangerous territory and allowing me to express myself freely. So thank you. You are welcome. Thanks so much for coming on the show. All right, bye-bye.
You have just listened to episode 35 of the Futurized podcast with host Trond Arne Undheim, futurist and author. The topic was the quest for artificial general intelligence. Our guest was Peter Voss, CEO and founder of Aigo.ai. In this conversation, we talked about how the field of artificial general intelligence has evolved, what intelligence really is, whether machines have it, and what it takes to bring true progress in this field. We also delved into chatbots. My takeaway is that artificial general intelligence has an interesting future, but it is not immediate. Whether an entire generation of software developers is on the wrong path, I’m not sure, but clearly, deeply understanding language, and especially understanding context, has a way to go for existing machines. Maybe that is just as well, because humans have not even adjusted to statistical AI’s predictive capabilities. Thanks for listening. If you liked the show, subscribe at Futurized.co or in your preferred podcast player, and rate us five stars. Futurized: preparing you to deal with disruption.