I’m so glad you joined us because we have Professor Jürgen Schmidhuber here. He is the most interesting guy. He is known as the Father of Artificial Intelligence. His lab created much of the machine learning and artificial intelligence that you’ve been using on three billion smartphones: Facebook’s automatic translation, Google’s speech recognition, Apple’s Siri, Amazon’s Echo devices. Everybody is using this technology. What he’s done is unbelievable and fascinating and I’m looking forward to having him on the show.
Listen to the podcast here
Breaking Down Artificial Intelligence with Jürgen Schmidhuber
I’m with Professor Jürgen Schmidhuber, who has created deep learning methods that have revolutionized machine learning and artificial intelligence. Some refer to him as the Father of Artificial Intelligence. He has done so much work that you don’t even realize you’re dealing with it on your Apple devices, in your smartphone and in QuickType. It’s so nice to have you on this show. Thank you, Professor.
It’s my great pleasure, Diane.
I saw your work as I was doing research on curiosity, and so many people I talk to are worried about the impact of artificial intelligence. They are worried about what it’s going to do to the working world, whether it’s going to replace workers, and all the things that you hear about. Before we get into that, I want people to know a little bit about you. I love your talks. A couple of your jokes I thought were great. I find your sense of humor funny and I enjoyed what you talked about in terms of your vision for the future of everything. Can you give a little bit of background? Your background is amazing and people need to know a little bit more about you.
I have been trying since the ‘70s and ‘80s to build artificial intelligence, an AI which learns to become smarter than myself so that I can retire. We are not quite there yet. We have been working on that for many decades. We are trying to build a general-purpose problem solver, which also learns to improve its own learning algorithm, and the way it improves itself, and so on, without any limits except the limitations of computability and physics. To the extent that we are able to achieve that goal, everything is going to change.
I watched a video of you saying that as a boy you wanted to study physics. When you were younger you thought maybe that would be the way to go. I’m a huge fan of reading Death by Black Hole and all this stuff by Neil deGrasse Tyson. I love all that. I like the way that you think. You’re trying to create something that could do things that we can’t even conceive of at this point. Some people get very worried about that because not only are they going to lose their jobs, but they’re worried that it’s going to be like I, Robot, where your robot is going to throw you out the window to prove that this thing has taken over. Is there some fail-safe mechanism? If we make something that is smarter than we are, how do we know that it won’t see us as the virus infecting the planet that it needs to wipe out? Are you worried about any of the negative consequences? Does what the future could hold, based on what we’ve created, concern you?
[bctt tweet=”As a human being, you’re shaping your own data through your actions. ” via=”no”]It’s not a new situation because, in the past, we haven’t been able to predict what our kids are going to do either. We have often had a situation where somebody created something that was smarter than himself; for example, the parents of Einstein. Although you can’t exactly predict what your kids are going to do, you can greatly increase the probability that they are going to do useful things and become valuable members of society. That’s what we are doing for our kids and that’s what we are also doing for our AIs. Take into account that 95% of AI research is about making your life easier, longer and healthier. The companies that are doing AI research want to sell you something, and you are going to buy only stuff that you like, which means there is immense commercial pressure towards good AI.
The work that you’ve done is called LSTM and that’s Long Short-Term Memory. I want to make sure I have that correct.
Long Short-Term Memory is a deep learning artificial neural network which was created in my labs. It was conceived in Munich and then improved in Switzerland through the work of my brilliant students: first of all Sepp Hochreiter, my first student in the early ’90s, then Felix Gers, Alex Graves and a couple of other guys. LSTM is a network which is inspired by the human brain. In your brain, you’ve got about 100 billion little processors that are called neurons. Each of them is connected to about 10,000 other neurons on average. That means you have roughly a million billion connections in your brain between these different neurons. Some of these neurons are input neurons where video is coming in; hundreds of millions of pixels are coming in all the time. There are microphone neurons where audio signals are coming in as numbers. Others are output neurons which control your finger muscles and your speech muscles and so on.
The nice thing is that each of these connections has a strength, which says how much the neurons over here influence that neuron over there at the next time step. In the beginning, all of these connections are random, which means the network knows nothing, but over time, some of these connections get stronger through learning and others get weaker. That way, this whole network learns to do important or interesting things, such as driving a car or recognizing speech. The same is true for this LSTM, the Long Short-Term Memory, which can also learn to do these things better than previous methods. That’s what many people are using now.
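The learning principle described here, connection strengths that start out random and get stronger or weaker until the network does something useful, can be sketched in a few lines of Python. This is only an illustrative toy (a single output neuron learning a made-up task, predicting the mean of three inputs), not LSTM itself:

```python
import random

random.seed(0)

# Toy "brain": 3 input neurons, each connected to 1 output neuron.
# In the beginning, the connection strengths are random: the network knows nothing.
weights = [random.uniform(-1, 1) for _ in range(3)]

# Made-up task for illustration: the output should be the mean of the inputs.
examples = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(100)]
targets = [sum(x) / 3 for x in examples]

def predict(w, x):
    # The output neuron sums its inputs, each scaled by a connection strength.
    return sum(wi * xi for wi, xi in zip(w, x))

def mean_squared_error(w):
    return sum((predict(w, x) - t) ** 2
               for x, t in zip(examples, targets)) / len(examples)

initial_error = mean_squared_error(weights)

# Learning: nudge each connection so the prediction error shrinks.
# Connections that help get stronger; connections that hurt get weaker.
for _ in range(500):
    for x, t in zip(examples, targets):
        err = predict(weights, x) - t
        for i in range(3):
            weights[i] -= 0.05 * err * x[i]

final_error = mean_squared_error(weights)
print(final_error < initial_error)  # prints True: the network has learned
```

Real networks like LSTM use the same idea at vastly larger scale, with many layers and with memory cells that carry information across time steps.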
Many people is an understatement. Four billion times a day. It’s on Facebook and Google and Apple; everybody is using it. What is it like to know that these top companies in the world, all the biggest names, are using what you’ve created? Does that just blow your mind when you think about it?
It’s at least nice that whenever you come to another continent or another country, part of you is already there and helping people a little bit to translate languages and understand speech and so on. That’s good.
I know that you’re involved in all these different technologies. You’ve done so much that it’s almost hard to pick from what you’ve done and where your work has been. You were doing something with artificial curiosity and I want to talk to you about that because, as you know, I’m interested in curiosity. What is artificial curiosity and how does it compare to human curiosity? Just a little bit about that would be great.
Artificial curiosity is an artificial way of creating the thing that we know from humans, the drive to explore the world and to understand better how the world works. It’s not complicated. It’s a very simple idea. It goes back to 1990. To implement it, we need two different neural networks, and they fight each other. You have something like that in your brain. Sometimes I call it the left brain working against the right brain. You have one network which is taking in all these inputs from the environment: the videos, the tactile sensors and the auditory sensors. It’s producing sequences of actions and, through these actions, it’s shaping the history of incoming data. When you look at something, you see something that’s different from what you see looking at another thing. When you touch something, you feel something that’s different from touching other things. As a human being, you’re shaping your own data through your actions.
In the beginning, you know nothing about the world, but then you can use the data that you are creating through your actions to learn something about how the world works. There’s this first network that I call the control network, which maximizes its rewards. For some of the things that it’s doing, there’s a reward; for example, whenever the little robot reaches the charging station in time when the battery is low. If the battery is low, you get hunger signals from the battery. That’s a negative number, and the system doesn’t like negative numbers. It wants to get as many positive numbers, the reward signals, and as few negative numbers, the pain signals, as possible. It’s trying to find a way to adjust these little connections so that, when it’s hungry, it reaches the charging station in time without bumping against obstacles. Whenever you bump against an obstacle too heavily, you will also feel pain. Pain means negative numbers and you don’t like them, so you’re maximizing the sum of the positive reward signals coming in and trying to minimize the sum of all the negative numbers, the pain signals coming in. That’s standard reinforcement learning, and we have worked on that for many decades.
Curiosity requires a separate network which is learning to predict what’s going to happen next. It also sees the data that is coming in. It sees the actions of the first network and is trying to predict what the next input is going to be. As it keeps trying to predict the next thing, it gets a larger and larger training set. Life is the training set, and it keeps growing; working on that training set, the network can learn to better predict the consequences of its actions. There’s a difference between what the network is predicting and reality, and this is the error: the error between the predictions and what actually happened. The network is trying to minimize that, which means it’s trying to make some of these connections stronger and some of them weaker so that it becomes a better prediction machine. In the first approach of 1990, the controller gets a reward whenever the prediction machine makes an error. The controller gets a reward which is proportional to the errors of this predictive model of the world.
[bctt tweet=”Most of the new jobs that are being created all the time are not existentially necessary.” via=”no”]The controller is trying to maximize reward, which means it has an incentive to create action sequences that lead to data from which the model can learn something it didn’t know yet. In this system of 1990, the controller likes to go to places where the model network is still bad, where it still makes errors. So, through this very simple principle, you have implemented an artificial curiosity drive. The first network learns to figure out where in the environment the model network can still learn something it doesn’t know. It likes to go there. It likes to execute action sequences that lead there without bumping against obstacles and without all these pain signals and so on. That’s how you suddenly have two fighting networks: one is maximizing exactly the error function that the other one is minimizing. It doesn’t take a genius to understand that suddenly you have created a little learning system that exhibits artificial curiosity.
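The two fighting networks can be sketched with a toy example. In this illustrative sketch, the "model" is just a table of predictions about a made-up world, and the "controller" greedily visits the place where the model's prediction error, its curiosity reward, is largest; real implementations use neural networks for both, but the principle is the same:

```python
# Hypothetical 1-D world: each of 5 places emits a fixed observation.
world = [0.1, 0.9, 0.3, 0.7, 0.5]

# Model "network" (here just a table): its prediction of each place's
# observation. It starts out knowing nothing.
model = [0.0] * len(world)

def prediction_error(place):
    # The curiosity reward for the controller is the model's error here.
    return abs(world[place] - model[place])

visits = [0] * len(world)

for step in range(100):
    # Controller: maximize curiosity reward, i.e. go where the model is
    # still bad, where there is still something to learn.
    place = max(range(len(world)), key=prediction_error)
    visits[place] += 1
    # Model: learn from the new data, minimizing the same error the
    # controller is maximizing.
    model[place] += 0.5 * (world[place] - model[place])

# Once the model predicts the world well everywhere, there is nothing
# left to be curious about.
total_error = sum(prediction_error(p) for p in range(len(world)))
print(total_error < 0.01)  # prints True
```

Note how the controller automatically shifts its attention: after the model masters one place, the largest remaining error, and therefore the curiosity, moves somewhere else.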
In my research on human curiosity, there are so many factors that have an impact on it. I found that fear, assumptions, technology and environment have an impact, but computers don’t have fear in the same way. There are different aspects in computers. Will they be curious about everything? Other than the pain signals stopping it, is there anything else? You could put that into a fear category or an environment category. How is it similar to, or different from, human curiosity in your opinion? Or did you not look at that difference?
It’s not fundamentally different. Humans and other animals, like little mammals, are driven by this very simple curiosity principle, more or less the one that I just explained. We refined it in subsequent research over the decades; it’s the same thing plus some extra epsilon improvements. I guess that little babies are using exactly the same approach as they learn how the world works when they play around with their toys. That’s how they learn physics and that’s how they learn about certain toys. As toddlers, they learn about the noises that books make when you throw them from the table. They learn about these big animals, the parents, who come and pick up the book and place it on the table again and say, “Don’t throw the book from the table,” and who make silly noises like that.
It’s very funny because you can learn something about the patterns on the wall. If you throw the book to the ground again, you get additional experience and additional training samples from which you can learn more about gravity. You can learn how books fall and what kind of noises they make and so on. Little animals and babies use that basic principle to better understand physics. They have an incentive, an extra reward that they maximize, which is this curiosity reward. It’s a number which expresses how much learning progress the model network achieves. It is the depth of the insight that you gain, like a little scientist. Every baby is a little scientist, creating its own experiments to understand better how the world works. We can measure that progress and give it a number. This number is exactly the reward that goes to the action selector, the experiment generator. It’s a controller which generates the experiments that lead to the data. That’s exactly the principle that is driving not only our artificial robots but also humans. I was inspired by my own behavior, which seems to follow the same principle.
Where are we getting these negative numbers in your opinion as a human? What’s holding us back?
The baby learns not to be super curious when it’s dangerous. At some point it realizes, “If I touch this hot plate, then I will get burned and I will get negative numbers from my pain sensors.” We are doing the same with our robots. Say you have a stupid robot whose brain is randomly wired and knows nothing at all. You have to protect it from itself. You have to equip it with pain sensors. Whenever it’s too hot, these pain sensors should send negative signals. Whenever the robot bumps too heavily against an object, negative numbers should come from there and be accessible to the brain. Then the brain knows that it’s doing something damaging.
As engineers, we have to give these robots some pre-wired sensory structure that allows them to realize, “Am I now doing something that is harming myself?” We can implement that just with negative numbers, so we give the robot pain sensors. Too cold, too hot: it all leads to negative numbers. They go up into the brain, and the brain is trying to maximize the sum of the positive numbers and minimize the sum of the negative numbers. It’s maximizing the rewards and minimizing the pain signals, and there is nothing miraculous about it. It’s very obvious and natural. You have a totally rational explanation of it, so rational that you know exactly how to implement it on robots.
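The reward scheme described above is literally just arithmetic over signed numbers: the brain receives positive rewards and negative pain signals and prefers actions with the highest sum. In this sketch, the actions and their made-up signal values are purely illustrative:

```python
# Hypothetical consequences of each action, as lists of signed signals:
# positive numbers are rewards, negative numbers are pain or hunger.
actions = {
    "reach charging station": [+1.0, -0.1],        # charge reward, mild effort
    "bump through obstacles": [+1.0, -0.8, -0.8],  # reward, but lots of pain
    "do nothing":             [-0.5],              # hunger signal from battery
}

def expected_return(signals):
    # Pain is nothing miraculous: it is just a negative number
    # in the same sum as the rewards.
    return sum(signals)

# The brain picks the action that maximizes the sum of all signals.
best = max(actions, key=lambda a: expected_return(actions[a]))
print(best)  # prints "reach charging station"
```

In a real robot, the controller does not see these sums in advance; it learns, by trial and error, which action sequences tend to maximize them.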
When I was researching the negative numbers in humans, so many were environmental things that hold people back, like experiences of not getting positive numbers in their environment or in school or wherever. There are so many people that want to be innovative. There are so many organizations that are still worried about jobs being replaced by artificial intelligence or other factors. Where are we going to put people? How are we going to be relevant? How are we going to be the next company, the next Google or whoever uses your software to be the next big thing? What impact do you think developing curiosity would have on making more innovative workers? What do you think artificial intelligence is going to do in the workplace? Are we going to have jobs even if we are innovative, or are we not going to be innovative enough to compete with what you are doing?
To answer that question about job losses, it’s always good to look at the past because we have some experience with machines that have replaced humans. Two hundred years ago, 60% of all people worked in agriculture and now maybe 1.5% of all people do. However, we have only about 5% unemployment in many Western countries. About 40 years ago, when industrial robots came along, many predicted job losses and, in a way, they were right. Back then, there were lots of car factories with hundreds of workers assembling cars. Now, the same car factories have maybe hundreds of robots and maybe three guys watching what the robots are doing. The countries that have lots of robots per capita, such as Japan, Korea, Germany, Switzerland and a couple of others, have low unemployment rates. It is true that some jobs got lost, but lots of new jobs were created, and back then nobody could predict them. I always say that it’s easy to predict which jobs are going to get lost, but it’s hard to predict the new jobs. Several years ago, who would have predicted all those people making money as YouTube vloggers?
I sold System/36 and System/38 software in the ‘80s. Back in the day, everybody freaked out. All these computers were coming out and people asked, “What’s going to happen?” We didn’t know there’d be social media managers. There was no internet that we were dealing with at that point. I understand what you’re saying, but a lot of people are thinking that it’s so much more now. In the past, you still needed somebody to run the computers. Now it’s replacing exponentially more, and at such a speed. Do we have more to worry about now in terms of fewer jobs out there? The entire trucking industry may be impacted, and all the restaurants and everything. There’s such an impact that we haven’t seen at this level. Do you think it’s more than we saw in the past?
[bctt tweet=”The playing man is inventing new, not so important jobs all the time because the playing man doesn’t want to be jobless.” via=”no”]Yes and no. It is unlikely that we can transform all those truckers into video vloggers. In the past, it also was the case that you couldn’t transform all these farmers into people suddenly working in the factories with the steam engines and so on. Somehow, many nations found many different ways of adapting to the new situation and created all kinds of new infrastructure and new jobs. Now, almost all of the work that used to be hard labor for humans has been replaced. Most of the new jobs that are being created all the time are not existentially necessary. There are very few existentially necessary jobs, such as farming, building houses or warming the houses. They make up less than 10% of the economy. Most of the jobs that we have now are luxury jobs, like yours and mine.
These jobs are important in the sense that there is a market for them. Lots of people are interested in them and in the output of the people who have these jobs, and they pay them well and so on, but they are not existentially necessary for the survival of our species. What we have here is Homo Ludens: the playing man is inventing new, not-so-important jobs all the time because the playing man doesn’t want to be jobless, so he’s inventing jobs. Most of these jobs have something to do with interaction with other people. Lots of new companies found new ways of distributing work. Lots of people found new ways of getting kudos and new types of currencies in terms of Facebook likes or whatever, and they’re spending hours per day on it. It’s hard work to post all the things that they have to post to get more likes and so on. They are taking it very seriously. It’s not necessary for the survival of our species.
However, rather than do nothing, many people find new ways of interacting, and this is sometimes even lucrative. I’m not worried in the long run that humans are going to be jobless. Almost all of the hard work is going to be done by robots. I’m sure that many of the awful jobs in developing countries, like making t-shirts late at night in Bangladesh in a building with 1,000 other people in almost slave-like conditions, are going to go. New Homo Ludens-based ways of working are going to compensate and, in the end, the whole society will greatly profit from AI, although the details of that transition are not obvious.
When we’re talking about transition, we’re starting to talk about merging AI and humans. Elon Musk was interviewed and he said, “We’ve already done that. You’ve got your computer in your pocket with your phone. You’ve merged it already, in a way.” How are we going to merge AI and humans? Is that something that you foresee being more than an external thing that we carry around with us? What do you see for the future of that?
In many ways, we have been cyborgs for hundreds of years, at least since the first humans invented glasses to wear. We have found lots of ways of using technology to augment ourselves. Smartphones are one good example, and cars are also greatly changing our modes of transportation. For a long time, my team here in Switzerland has also worked on artificial limbs which are connected to the nerve signals that you can emit through the brain. You can measure these signals and then use machine learning to translate them into movements of the artificial arm or the artificial fingers that make sense. The human is learning to use the limb, and the artificial limb is learning to interpret the signals of the human better. We have new kinds of interactions between humans and machines that go, in some sense, beyond what we know from shoes and glasses and so on. The extreme form of that might come once we start replacing parts of our brains. We might acquire accelerator boards just to think faster than the competition and so on.
Is it like The Matrix where they plug in a little bit like, “I need to learn karate?” Will we be able to plug in that ability or is that just television movie hype?
There’s no principled reason why it’s not possible. However, it is difficult, because even if you could read out the synapses of another person who is good at karate, it may be difficult to add this knowledge to another person’s brain. Let’s suppose you can read out of the synapses in this karate expert’s brain the whole network that is doing the karate, and then you want to implant that into another brain that knows nothing about karate. That is going to be difficult because the other person will have a whole different history of learning to address their finger muscles, speech muscles, other muscles and so on. It will be hard to take this knowledge, which is tailored to one particular brain, and transfer it to the other brain. Even in artificial neural networks, where we can do something like that, it is not so easy.
The nice thing about artificial neural networks is that all that stuff you find in science fiction can already be done, because indeed we can create one network that is good at one thing. We can create another neural network and train it to be good at another thing. Then we can train the second network to imitate the first network in a certain sense. Somehow, we transfer the knowledge in the first network over to the second network. It is possible, but it’s not so easy, because you have to merge these different things. You are partially overriding the connection strengths in the second network which were useful for doing one thing. Suddenly, this interferes with the old skill, and you have to do it in a good way. It’s not so trivial, as our artificial networks show. It’s not easy even when everything is under control, and it’s going to be much harder to do with biological brains.
All of this that we’re talking about reminds me of your TED Talk and it was fascinating to me. What I found interesting was your vision for the future of how technology will be able to share knowledge. The way we are right now with Google or whatever, that’s at a conceptual level. It just keeps growing and growing and expanding throughout the galaxy. Can you foresee humans ever being part of that when it gets to that point? Can you explain what you said in your TEDx Talk about that? That’s an interesting point in our future.
Let’s zoom back a little bit and try to understand the position of humankind in the grand scheme of things. The entire universe started roughly 13.8 billion years ago with the Big Bang. 13.8 billion years later, we can look back and try to understand our position in the history so far. Let’s take these 13.8 billion years and divide by a factor of 1,000, and we end up roughly 13.8 million years ago, when something truly important happened: the first hominids emerged, our ancestors. Almost everything that we consider important happened in these last 13.8 million years, when creatures appeared that were a bit like we are now. Let’s take these 13.8 million years and divide again by a factor of 1,000, and we come out roughly 13,000 years ago, when something important happened: civilization started, through agriculture and the domestication of animals and the first walls around little villages and so on.
[bctt tweet=”Hard work is going to be done by robots. The whole society will greatly profit from AI.” via=”no”]We see that all of civilization and history is just one-millionth of world history. It’s like a flash. The first guy who had agriculture 13,000 years ago was almost the same guy who had the first spacecraft in 1957. In the very near future, for the first time, we are going to have true artificial intelligence that goes beyond the little bit that we have now on our smartphones: AI that can learn to solve all the problems that humans can solve. I have no doubt in my mind that within a few decades or so, we’re going to reach the level where AIs are going to be better decision makers and better problem solvers than humans in almost every single way. Maybe after that we’ll have to divide again by a factor of 1,000, which would mean thirteen years: maybe it will only take thirteen additional years or so until this planet and everything on it changes beyond recognition, similar to what happened in the past thirteen million years.
What is going to happen when these AIs are as smart as we are, or much smarter? They are going to realize what we have realized: that almost all physical resources are not here in our little biosphere. They are further out there in space. The sun emits about two billion times more light than the little bit which is hitting planet Earth, and at the moment the rest is wasted. It’s not going to remain like that. AIs will realize what’s going on and, as far as we know, you need physical resources such as matter and energy to build more infrastructure: more computers, more computing power, more AIs, more robots and more self-replicating robot factories. They have to go where the resources are. The resources aren’t here, so they have to emigrate. Most of them are going to be out there at some point. Almost all of the intelligence will be far away from this little planet, because almost all physical resources are far away. It’s not going to stop in the solar system; they will see immediately that most of the physical resources are not even in our solar system.
They are further out, in the Milky Way, where there are maybe 300 billion stars like ours. It will all be different from what you read in science fiction novels, with their galactic empires and Star Trek and whatever. Most of these stories are implausible because they had to invent all kinds of silly things that break the known laws of physics. They had to do that to make the large distances of the galaxy compatible with the short lifespans of humans. All of that is silly. It will be a slow expansion; slow in the sense that it’s going to take a few hundred thousand years or maybe millions of years. The entire galaxy is going to be full of senders and receivers, and AIs will be able to travel the way they are traveling here in my lab, which is at the speed of light, by radio, from sender to receiver.
They’ll then realize that most of the physical resources are not even in our galaxy. They are further out, and there’s a universe out there which has more galaxies than there are stars within our galaxy. It will take a while to reach them, but they will be reached, because the universe is still young. It’s only 13.8 billion years old. Let’s call that an eon. There will be a time when the universe is going to be 1,000 eons old. Within the next few eons, the next few tens of billions of years, the entire cosmos is going to be permeated and colonized by an expanding AI civilization: trillions of different types of AIs, rapidly evolving in incredible ways, permeating the entire cosmos. Within just a few decades, this is going to start. Let’s look ahead to the point in time when the universe is going to be 1,000 times older than it is now. They will look back and they will say, “Almost immediately after the Big Bang, only one eon later, the entire universe started to become intelligent.”
I’m envisioning the Borg a little bit from Star Trek, that they could be more like that. I am not a scientist, but there are things I read about multiple-universe theories and different things, and how everything’s going to pop like a bubble anyway. We’re all going to be gone no matter how you look at it. It’s depressing if you read that stuff anyway. I’m thinking about what you’re saying, and you have been the one to plant this seed, you and your lab and the people that work for you. That’s going to be an amazing thing to contemplate at night. I am impressed by everything that you’ve done. What was your degree? Where did you go to school? I want people to know a little bit about that because I know you talked about going into astrophysics. What did you end up getting your degree in, and where?
I was born and raised in Munich, where I also went to school. I discussed these things a lot when I was young with my little brother, who is much smarter than myself and who became a physicist. He had a stellar career in Munich and at Caltech, Princeton, CERN and other places like that. We discussed a lot of physics and, for a while, it seemed to us that physics is the ultimate science, until I realized that there is something that goes beyond that. There’s a certain chance that I might be able to multiply the little tiny bit of creativity that I have into infinity by building an artificial physicist who is much better at doing physics than I could ever hope to be. That then became my alternative goal in life.
I would love to hear those discussions, if that’s what your brother does and this is what you do. This has been so fascinating. I enjoyed having you on the show, and many people can benefit from learning even a tiny little bit of what you have going on up there. The work you’ve done is amazing. I appreciate having you on the show. Thank you so much for doing this, Jürgen. This has been great.
Diane, it was my great pleasure to be on your show.
You’re welcome.
—
It was such an honor to have Jürgen on the show, and it’s wonderful timing, since my book about curiosity and the assessment, the Curiosity Code Index, has come out. It’s important to discuss some of the things that Jürgen brought up about curiosity. For organizations to be truly innovative, we have to work on the issues that hold back our natural sense of curiosity, and that’s what I tried to determine with the Curiosity Code Index. My research on it is going to be published, and it describes what I went through to create the Curiosity Code Index. It was a very interesting process to create a curiosity assessment, because I didn’t want to measure whether you’re curious or not; there are assessments out there for that. What I created was an assessment of the factors that influence curiosity. I found that four factors do, and those are fear, assumptions, technology and the environment.
If you can figure out what’s holding people back from being curious, you can develop many areas, including critical thinking, decision making, conflict resolution, employee engagement, creativity, innovation, productivity, and the list goes on and on. What I’m doing is helping to train HR professionals and consultants so that they can give the CCI assessment at organizations, because it’s about relevancy. A lot of these organizations are doing training on emotional intelligence, soft skills, engagement, conflicts between different generations and that type of thing. If you go back to the very beginning of what is impacting all these areas, it goes back to curiosity. Curiosity is the spark that ignites drive and motivation. If you can get that spark lit, then you can develop some of these other things we’re talking about, like being engaged, creative and innovative. That’s what everybody is trying to be right now. It’s very challenging because, as Jürgen and I discussed, there are all these technologies that are taking over. I found our discussion about the jobs we can create quite interesting, whether they’re necessary jobs or simply jobs other people are willing to pay for. That’s such a fascinating discussion that doesn’t come up very often.
A lot of organizations can create products and information for people that we don’t even know if we want or need, but then we think, “That’s cool to be able to have that.” That’s what we’re trying to inspire with the training that we’re doing. If you’re a consultant, an HR professional, or a leader who has people talking to you about this type of thing, it’s important to get certified to give the CCI. There are a couple of great exercises in the training that you can’t get unless you go through the actual certification process. One of them is a report for leaders that takes all the information the employees have learned within the training program and gives them insight into how to build critical thinking, decision making and related skills in their employees, based on what the employees have learned. It’s not all about just finding out what your type is or what your letters are.
There are nine subfactors within fear, nine within assumptions, nine within technology and nine within environment that hold people back, and we address all of them within this assessment. Within each of those subfactors, employees identify the issues they’re having and can create an action plan to overcome them, with measurable goals to become more curious. A lot of people are not aligned properly with what they could be doing. A lot of people are losing jobs because they’re not engaged. They don’t have the enthusiasm for their job that they could have if they were doing something better aligned with their skills. I’ve had people on my show like Olin Oedekoven, who mentioned that he hires people and then designs jobs later around what they show they’re good at. We’re going to see more unique ways of embracing curiosity and what people are capable of doing than we’ve ever seen in the past. If you’re a leadership consultant, an HR professional, or a leader, this is a time when it’s about being relevant and doing training that keeps everybody on the cutting edge.
Everybody is worried about innovation replacing jobs and where to put people. You’re not going to know where people are best aligned without figuring out what interests them and the personality traits that go along with that. There’s nothing like the Curiosity Code Index as far as that goes. Other curiosity assessments can tell you whether someone is curious or not, but that doesn’t tell you how to develop curiosity in your employees. Anybody who wants to find out more about developing their employees could benefit from the Curiosity Code Index. You can find out more at CuriosityCode.com. We are going to be offering on-demand train-the-trainer programs, which is a good option for people in other countries who can’t come to a location for in-person training. We’re going to have a lot of extra content and materials that we’re developing. I’ve had schools contact me, and some of the largest organizations have been contacting me about this. There’s so much that we can be doing together to develop this.
My goal for 2019 is to put curiosity at the forefront of what everybody is talking about. Verne Harnish was one of many who reviewed my book, and he thinks that this could be the next big thing in human performance. Think of how emotional intelligence was the focus in the ‘80s and ‘90s. I believe, along with many others who have backed up my research, that this is going to be what we’re talking about in 2019. It’s all going to be innovation and artificial intelligence. You’re going to hear a lot about mindfulness, being in the moment and much deeper thinking. We’re excited to be at the forefront of all that. I’m grateful to Jürgen for bringing up how curiosity ties into the innovation realm. If you haven’t had a chance to watch his TED Talk, he’s got some amazing information. His background alone is unbelievable. I was so excited to talk to him because he has done things that no one else in the world has done.
Please take some time to check out his information. I have so many great guests on my show. If you want to listen, you can go to DrDianeHamiltonRadio.com. All of the information is on my website. If you want to talk to me about potentially working together on the Curiosity Code Index, you can email me at Diane@DrDianeHamilton.com. I’m very happy to work with any organization, either by coming out to train or by sending somebody who has been certified. If you’re thinking about getting certified, you’ll earn five hours of SHRM recertification credit for going through the training program. I will be speaking at SHRM in June 2019 in Vegas, so we’re all aligned with that, which is nice, and I’m doing a lot of talks. If you’re interested in having me come talk to your company or your group, I’m happy to do that.
Please feel free to send me a note; I’d be interested in hearing any feedback. If you’ve had a chance to read the book, please go to Amazon and review it, because the more reviews we get, the more exposure it gets. It’s important to get this message out, and I’d love to hear your feedback. I hope you enjoyed it. It’s got a lot of potential for changing what organizations are doing. This has been an exciting year, and I was excited to have this conversation with Jürgen about where we’re going with innovation. We’ve had some amazing guests who deal with innovation and artificial intelligence, so check out some of our past episodes and see who’s been on, because there have been so many fascinating guests. I want you to check out the next episode of Take The Lead radio. Thank you for joining, and I hope you get a chance to check out CuriosityCode.com.
Important Links:
- Jürgen Schmidhuber
- Death by Black Hole
- Homo Ludens
- TED Talk by Jürgen Schmidhuber
- Curiosity Code Index
- Olin Oedekoven – past episode
- CuriosityCode.com
- Verne Harnish – past episode
- DrDianeHamiltonRadio.com
- Diane@DrDianeHamilton.com
- Cracking the Curiosity Code on Amazon
About Jürgen Schmidhuber
Professor Jürgen Schmidhuber’s deep learning methods have revolutionized Machine Learning and Artificial Intelligence. As of 2017, his lab’s work on machine learning and artificial intelligence is available on 3 billion smartphones and used billions of times per day, e.g. for Facebook’s automatic translation, Google’s speech recognition, Apple’s Siri & QuickType, Amazon’s Alexa, etc. His research group also established the field of mathematically rigorous universal AI and optimal universal problem solvers. His formal theory of creativity, curiosity and fun explains art, science, music, and humor. He is a recipient of numerous awards, including the 2016 IEEE Neural Networks Pioneer Award “for pioneering contributions to deep learning and neural networks,” and president of the company NNAISENSE, which aims at building the first practical general purpose AI.