automatically generated captions
00:00:00 Yoshua: Yes, my name is Yoshua Bengio. And I am a professor here at the University of Montreal.
00:00:05 I also lead an institute called the Montreal Institute for Learning Algorithms,
00:00:10 that is specializing in my area of science, which is machine learning, how computers learn from examples.
00:00:18 Speaker 2: And what is the difference between, you say, machine learning?
00:00:27 Yoshua: Yes.
00:00:27 Speaker 2: But there's also this new thing called deep learning.
00:00:27 Yoshua: Right.
00:00:27 Speaker 2: What's the easiest way to,
00:00:31 Yoshua: Yes, so deep learning is inside machine learning, it's one of the approaches to machine learning.
00:00:40 Machine learning is very general, it's about learning from examples.
00:00:45 And scientists over the last few decades have proposed many approaches for allowing computers to learn from examples.
00:00:51 Deep learning is introducing a particular notion that the computer learns to represent information
00:01:03 and to do so at multiple levels of abstraction.
00:01:07 What I'm saying is a bit abstract, but to make it easier,
00:01:10 you could say that deep learning is also heavily inspired by what we know of the brain, of how neurons compute.
00:01:17 And it's a follow up on decades of earlier work, on what's called neural networks, or artificial neural networks.
00:01:26 Speaker 2: So, what is your background that you can relate to this?
00:01:31 Yoshua: I got interested in neural networks and machine learning, right at the beginning of my graduate studies.
00:01:40 So when I was doing my master's, I was looking for a subject
00:01:43 and I started reading some of these papers on neural networks.
00:01:46 And this was the early days of the so-called Connectionist Movement.
00:01:51 And I got really, really excited and I started reading more.
00:01:54 And I told the professor who was gonna supervise me that this is what I want to do.
00:02:02 And that's what I did, and I continued doing it and I'm still doing it.
00:02:04 Speaker 2: And do you think, with your research, that you are on a route, a main line of thinking,
00:02:19 which will get you somewhere?
00:02:22 Yoshua: It's funny that you ask this question, because it depends.
00:02:24 It's like some days I feel very clearly that I know where I'm going and I can see very far.
00:02:34 I have the impression that I'm seeing far in the future and I see also where I've been and there's a very clear path.
00:02:44 And sometimes maybe I get more discouraged and I feel, where am I going? [LAUGH]
00:02:50 Yoshua: It's all exploration, I don't know what the future holds, of course.
00:02:56 So I go between these two states, which you need.
00:02:59 Speaker 2: Where are you now?
00:03:01 Yoshua: Right now I'm pretty positive about a particular direction.
00:03:11 I've moved to some fundamental questions that I find really exciting, and that's kind of driving a lot of my thinking,
00:03:21 looking forward.
00:03:22 Speaker 2: Can you tell me, I'm not a scientist, and most of our viewers are not either.
00:03:27 But can you describe for me where you think your path leads to?
00:03:39 Because sometimes you have a clear goal, you know where you're going.
00:03:42 Yoshua: Right.
00:03:42 Speaker 2: Where are you going?
00:03:43 Yoshua: So,
00:03:45 Yoshua: My main quest is to understand the principles that underlie intelligence.
00:03:53 And I believe that this happens through learning, that intelligent behavior arises in nature
00:03:59 and in the computers that we're building through learning.
00:04:02 The machine, the animal, the human becomes intelligent because it learns.
00:04:10 And understanding the underlying principles is like understanding the laws of aerodynamics for building airplanes,
00:04:19 right?
00:04:21 So I and others in my field are trying to figure out what is the equivalent of the laws of aerodynamics
00:04:29 but for intelligence.
00:04:31 So that's the quest, and we are taking inspiration from brains,
00:04:38 we're taking inspiration from a lot of our experiments that we're doing with computers trying to learn from data.
00:04:46 We're taking inspiration from other disciplines, from physics, from psychology, neuroscience.
00:05:00 And other fields, even electrical engineering, of course statistics, I mean, it's a very multi-disciplinary area.
00:05:09 Speaker 2: So you must have a clue?
00:05:11 Yoshua: Yes, I do. [LAUGH] So, one of the, well, it may not be so easy to explain.
00:05:21 But one of the big mysteries about how brains manage to do what they do,
00:05:25 is what scientists have called for many decades the question of credit assignment.
00:05:30 That is, how do neurons in the middle of your brain, hidden somewhere, get to know how they should change
00:05:38 what they're doing so that it will be useful for the whole collective, that is, the brain.
00:05:44 And we don't know how brains do it, but we now have algorithms that do a pretty good job at it.
00:05:52 They have their limitations
00:05:55 but one of the things I'm trying to do is to better understand this credit assignment question.
00:06:01 And it's crucial for deep learning, because deep learning is about having many levels of neurons talking to each other.
00:06:09 So that's why we call them deep, there are many layers of neurons. That's what gives them their power.
00:06:15 But the challenge is, how do we train them, how do they learn? And it gets harder the more layers you have.
00:06:23 So, in the 80s people found out how to train networks with a single hidden layer.
00:06:30 So just not very deep, but they were already able to do interesting things.
00:06:35 And about ten years ago we started discovering ways to train much deeper networks,
00:06:40 and that's what led to this current revolution called deep learning.
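The "many layers of neurons talking to each other" picture can be sketched in a few lines of code. This is a toy illustration with made-up connection strengths, not any system described here: each layer takes weighted sums of the previous layer's outputs (the connection strengths) and passes them through a nonlinearity, so each level re-represents the one below it.

```python
def relu(v):
    # A simple nonlinearity: a neuron fires only above zero.
    return [max(0.0, x) for x in v]

def dense(inputs, weights, biases):
    # One layer of neurons: each neuron computes a weighted sum of its
    # inputs (the synaptic connection strengths) plus a bias.
    return relu([sum(w * x for w, x in zip(row, inputs)) + b
                 for row, b in zip(weights, biases)])

def forward(x, layers):
    # "Deep" just means the output of one layer feeds the next.
    for weights, biases in layers:
        x = dense(x, weights, biases)
    return x

# A 2 -> 3 -> 2 toy network with hand-picked, meaningless weights.
net = [
    ([[0.5, -0.2], [0.1, 0.9], [-0.3, 0.4]], [0.0, 0.1, 0.0]),
    ([[1.0, -1.0, 0.5], [0.2, 0.3, -0.4]],   [0.0, 0.0]),
]

print(forward([1.0, 2.0], net))
```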
00:06:44 Speaker 2: And this revolution, I didn't read it in the papers, so it's not front page news,
00:06:51 but for the science world it's a breakthrough.
00:06:53 Yoshua: Yes, so in the world of artificial intelligence there has been a big shift brought by deep learning.
00:07:04 So there has been some scientific advances but then it turned into advances in application.
00:07:11 So very quickly these techniques turned out to be very useful for improving how computers understand speech for example,
00:07:20 that speech recognition.
00:07:21 And then later a much bigger effect, I would say in terms of impact, happened
00:07:27 when we discovered that these algorithms could be very good for object recognition from images.
00:07:34 And now many other tasks in computer vision are being done using these kinds of networks.
00:07:40 These deep networks
00:07:41 or some specialized version of deep networks called convolutional networks that work well for images.
00:07:48 And then it moves on, so now people are doing a lot of work on natural language.
00:07:53 Trying to have the computer understand English sentences, what you mean, being able to answer some questions
00:08:03 and so on. So these are applications but they have a huge economic impact and even more in the future.
00:08:11 That has attracted a lot of attention from other scientists, from the media,
00:08:20 and from of course business people who are investing billions of dollars into this right now.
00:08:26 Speaker 2: Yeah, is it exciting for you to be in the middle of this new development?
00:08:31 Yoshua: It is, it is very exciting and it's not something I had really expected.
00:08:37 Because ten years ago when we started working on this there were very few people in the world,
00:08:43 maybe a handful of people interested in these questions.
00:08:46 And initially it started very slowly; it was difficult to get money for these kinds of things.
00:08:53 It was difficult to convince students to work on these kinds of things.
00:08:59 Speaker 2: Well maybe you can explain to me the ten years, or whatever,
00:09:05 12 years ago you were three people because it was not popular [CROSSTALK]
00:09:06 Yoshua: Right, that's right, that's right. Yes, that's right.
00:09:09 So there has been a decade before the last decade where this kind of research essentially went out of fashion.
00:09:18 People moved on to other interests.
00:09:20 They lost the ambition to actually get AI, to get machines to be as intelligent as us,
00:09:30 and the connection between neuroscience and machine learning got divorced.
00:09:36 But a few people including myself and Geoff Hinton and Yann LeCun continued doing this and we started to have good results.
00:09:46 And other people in the world were also doing this and more people joined us.
00:09:53 And in a matter of about five years it started to be a more accepted area and then the applications,
00:10:02 the success in applications started to happen, and now it's crazy.
00:10:07 We get hundreds of applicants, for example, for doing grad studies here and companies are hiring like crazy
00:10:16 and buying scientists for their research labs.
00:10:20 Speaker 2: Do you notice that? Do they approach you as well?
00:10:24 Yoshua: Yeah.
00:10:25 Speaker 2: Big companies.
00:10:26 Yoshua: Yes. [LAUGH]
00:10:27 Yoshua: So I could be much richer. [LAUGH]
00:10:31 Yoshua: But I chose to stay in academia.
00:10:33 Speaker 2: So you've done some good thinking? And now it has become popular.
00:10:43 Yoshua: Yes.
00:10:44 Speaker 2: But, it has become valuable as well.
00:10:46 Yoshua: Yes, very valuable, yes.
00:10:49 Speaker 2: Why? Maybe-
00:10:51 Yoshua: Basically it's at the heart of what companies like Google, Microsoft, IBM, Facebook, Samsung, Amazon, Twitter are doing.
00:11:05 Speaker 2: Why?
00:11:05 Yoshua: All of these companies they see this as a key technology for their future products
00:11:14 and some of the existing products already.
00:11:16 Speaker 2: And
00:11:17 Speaker 2: Are they right?
00:11:18 Yoshua: Yeah, they are. Of course, I don't have a crystal ball.
00:11:30 So there are a lot of research questions which remain unsolved, and it might take just a couple of years
00:11:38 or decades to solve them, we don't know. But even if, say, scientific research on the topic stopped right now.
00:11:44 And you took the current state of the art in terms of the science, and you just applied it, right,
00:11:53 collecting lots of datasets, because these algorithms need a lot of data.
00:11:59 Just applying the current science would already have a huge impact on society.
00:12:04 So I don't think they're making a very risky bet,
00:12:10 but it could be even better because we could actually approach human level intelligence.
00:12:14 Speaker 2: You know that or you think so?
00:12:17 Yoshua: We could.
00:12:18 I think that we'll have other challenges to deal with and some of them we currently know are in front of us,
00:12:31 others we probably will discover when we get there.
00:12:34 Speaker 2: So now you're in the middle of a field of exciting research.
00:12:43 Yoshua: Yeah.
00:12:44 Speaker 2: That you know you're right and you have the goal and sometimes you see it clearly,
00:12:47 and it has become popular among people who want to study here.
00:12:48 Yoshua: Yep.
00:12:48 Speaker 2: And the companies want to invest in you.
00:12:50 Yoshua: Yes.
00:12:51 Speaker 2: So you must feel a lot of tension or a lot of-
00:12:55 Yoshua: It's true, it's true. It's sudden.
00:12:58 Speaker 2: How does it feel to be in the middle of this development?
00:13:02 Yoshua: So initially it's exhilarating to have all this attention, and it's great to have all this recognition.
00:13:11 And also, it's great to attract really the best minds that are coming here for doing PhDs and things like that.
00:13:21 It's absolutely great. But sometimes I feel that it's been too much, that I don't deserve that much attention.
00:13:30 And that all these interactions with media and so on are taking time away from my research
00:13:41 and I have to find the right balance here.
00:13:47 I think it is really important to continue to explain what we're doing so that more people can learn about it
00:13:55 and take advantage of it, or become researchers themselves in this area.
00:13:59 But I need to also focus on my main strength, which is not speaking to journalists.
00:14:07 My main strength is to come up with new ideas, crazy schemes, and interacting with students to build new things.
00:14:17 Speaker 2: Have you thought of the possibility that you're wrong?
00:14:20 Yoshua: Well, of course, science is an exploration. And I'm often wrong.
00:14:33 I propose ten things, nine of which end up not working.
00:14:39 But we make progress, so I get frequent positive feedback that tells me that we're moving in the right direction.
00:14:51 Speaker 2: If you're right enough to go on.
00:14:53 Yoshua: Yes, yes, yes and these days because the number of people working on this has grown really fast,
00:15:01 the rate at which advances come is incredible.
00:15:06 The speed of progress in this field has greatly accelerated and mostly because there are more people doing it.
00:15:15 Speaker 2: And this is also reflected in what the companies do with it.
00:15:17 Yoshua: Yes, so companies are investing a lot in basic research in this field which is unusual.
00:15:25 Typically companies would invest in applied research where they take existing algorithms
00:15:31 and try to use them for products.
00:15:34 But right now there's a big war between these big IT companies to attract talent.
00:15:40 And also they understand that there is the potential impact,
00:15:47 the potential benefit of future research is probably even greater than what we have already achieved.
00:15:52 So for these two reasons, they have invested a lot in basic research and they are basically making offers to
00:16:00 professors
00:16:00 and students in the field to come work with them in an environment that looks a little bit like what you have in
00:16:07 universities where they have a lot of freedom, they can publish, they can go to conferences and talk with their peers.
00:16:13 So it's a good time for the progress of science because companies are working in the same direction as universities
00:16:20 towards really fundamental questions.
00:16:23 Speaker 2: But then they own it, that's the difference?
00:16:25 Yoshua: Yeah, that's something that's one of the reasons why I'm staying in academia.
00:16:32 I want to make sure that what I do is going to be, not owned by a particular person, but available for anyone.
00:16:40 Speaker 2: But is that the risk?
00:16:42 Is it really a risk, because the knowledge is owned by a company? Why would it be a risk?
00:16:49 Yoshua: I don't think it's a big deal right now, so the major research, industrial research centers,
00:17:04 they publish a lot of what they do.
00:17:10 And they do have patents, but they say that these patents are defensive, in case somebody would sue them.
00:17:15 But they won't prevent other people, other companies, from using their technologies. At least that's what they say.
00:17:20 So right now there's a lot of openness in the business environment for this field.
00:17:30 We'll see how things are in the future.
00:17:32 There's always a danger of companies coming to a point where they become protective.
00:17:38 But then what I think is that companies who pull themselves out of the community,
00:17:43 and do not participate in the scientific progress and exchange with the others, will not progress as fast.
00:17:50 And I think that's the reason, they understand that, if they want to see the most benefits from this progress,
00:17:58 they have to be part of the public game of exchanging information and not keeping information secret.
00:18:04 Speaker 2: Part of the mind of the universe.
00:18:06 Yoshua: Yes, exactly. Part of the collective that we're building of all our ideas and our understanding of the world.
00:18:17 There is something about doing it ourselves that enables us to be more powerful in understanding.
00:18:24 If we're just trying to be consumers of ideas,
00:18:27 we're not mastering those ideas as well as if we're actually trying to improve them.
00:18:32 So
00:18:32 when we do research we get on top of things much more than if we're simply trying to understand some existing paper
00:18:43 and trying to use it for some product.
00:18:45 So there's something that is strongly enabling for companies to do that kind of thing, but that's new.
00:18:54 One decade ago for example many companies were shutting down their research labs and so on,
00:19:01 so it was a different spirit.
00:19:03 But right now, the spirit is openness, sharing, and participating in the common development of ideas through science
00:19:14 and publication and so on.
00:19:16 Speaker 2: It's funny that you said basic research is the same thing as fundamental research.
00:19:22 Yoshua: Yes, yes. Yes.
00:19:24 Speaker 2: And that it becomes popular in some way.
00:19:27 Yoshua: Well, I think first of all it's appealing.
00:19:30 I mean, as a person, as a researcher, a PhD candidate or a professor or something.
00:19:39 It's much more appealing to me to know that what I do will be a contribution to humanity, right,
00:19:45 rather than something secret that only I and a few people would know about
00:19:49 and maybe some people will make a lot of money out of it. I don't think that's as satisfying.
00:19:56 And as I said, I think there are circumstances right now where, even from a purely economic point of view,
00:20:01 it is more interesting for companies to share, and be part of the research.
00:20:06 Speaker 2: So I think first to understand what you're really into I would like to know from you some basic definitions.
00:20:25 Yoshua: Yes.
00:20:26 Speaker 2: For example.
00:20:28 Speaker 2: What, in your way of thinking... how would you describe thinking?
00:20:37 Yoshua: Yes.
00:20:38 Speaker 2: What is thinking?
00:20:39 Yoshua: Right, well obviously we don't know. Because the brain-
00:20:43 Speaker 2: What don't we know?
00:20:44 Yoshua: We don't know how the brain works. We have a lot of information about it.
00:20:51 Too much maybe, but not enough of the kind that allows us to figure out the basic principles of how we think,
00:20:59 and what does it mean at a very abstract level. But of course, I have my own understanding, so I can share that.
00:21:07 And with the kinds of equations I drew on the board there, and other people in my field.
00:21:16 There's this notion that what thinking is about is adjusting your mental configuration to be more coherent,
00:21:32 more consistent with everything you have observed, right?
00:21:38 And more typically, the things you're thinking about, or what you are currently observing.
00:21:44 So if I observe a picture, my neurons change their state to be in agreement with that picture, and agreement,
00:21:53 given everything that the brain already knows, means that they are looking for an interpretation of that image.
00:21:59 Which may be related to things I could do, like: I see this,
00:22:04 I need to go there, because it tells me a message that matters to me.
00:22:08 So everything we know is somehow built in this internal model of the world that our brain has
00:22:14 and you get all these pieces of evidence each time we hear something, we listen to something
00:22:21 and our brain is taking in all of that stuff and then what it does is try to make sense of it,
00:22:30 reconcile the pieces like a piece of a puzzle. And so sometimes you know, it happens to you, something clicks right.
00:22:39 Suddenly you see a connection that explains different things.
00:22:44 Your brain does that all the time, though you don't always get this conscious impression. And thinking,
00:22:52 according to me, is finding structure and meaning in the things that we are observing and have seen,
00:23:06 and that's also what science does, right?
00:23:08 Science is about finding explanations for what is around us,
00:23:13 but thinking happens in our head, whereas science is a social thing.
00:23:18 Speaker 2: It's outside your head.
00:23:20 Yoshua: Science has a part inside.
00:23:25 Yeah, science has a part inside of course, because we are thinking when we do science. But science has a social aspect.
00:23:33 Science is a community of minds working together,
00:23:37 and the history of minds having discovered concepts that explain the world around us,
00:23:45 and sharing that in ways that are efficient. [talk in Dutch]
00:23:48 Yoshua: One thing I could talk about too is learning, right.
00:24:23 You asked me about thinking, but a very important concept in my area is learning, I think.
00:24:37 I can explain how that can happen in those models or brains. [talk in Dutch] Yeah, yeah.
00:24:43 Speaker 2: Okay [Dutch] So you explained what thinking is. Now we'd like to know what is intelligence?
00:24:56 Yoshua: That's a good question. I don't think that there's a consensus on that either.
00:25:01 Speaker 2: On what?
00:25:02 Yoshua: On what is intelligence.
00:25:04 Speaker 2: If you reframe my question, then I can—
00:25:07 Yoshua: Okay. So what is intelligence?
00:25:09 That's a good question and I don't think that there's a consensus but in my area of research people generally,
00:25:17 understand intelligence as the ability to take good decisions. And what are good decisions?
00:25:24 Speaker 2: What's good?
00:25:24 Yoshua: Good for me. Right?
00:25:27 Speaker 2: Okay.
00:25:28 Yoshua: Good in the sense that they allow me to achieve my goals. If I were an animal, to survive my predators,
00:25:37 to find food, to find mates. And for humans good might be achieving social status, or being happy, or whatever.
00:25:45 It's hidden in your mind, what it is that's good for you.
00:25:49 But somehow we are striving to take decisions that are good for us and, in order to do that,
00:25:58 it's very clear that we need some form of knowledge.
00:26:02 So, even a mouse that's choosing to go left or right in a maze is using knowledge,
00:26:10 and that kind of knowledge is not necessarily the kind of knowledge you find in the book, right?
00:26:16 A mouse cannot read a book, cannot write a book,
00:26:19 but in the mouse's brain there is knowledge about how to control the mouse's body in order to survive, in order to find
00:26:28 food and so on. So the mouse is actually very intelligent in the context of the life of a mouse.
00:26:35 If you were suddenly teleported into a mouse, you would probably find it difficult to do the right things.
00:26:44 So, intelligence is about taking the right decision and it requires knowledge.
00:26:48 And now the question is to build intelligent machines or to understand how humans and animals are intelligent,
00:26:54 where are we getting the knowledge? Where can we get the knowledge?
00:26:59 And some of it is hard-wired in your brain from birth.
00:27:04 And some of it is going to be learned through experience, and that's the thing that we're studying in my field.
00:27:10 How do we learn or rather what are the mathematical principles for learning that could be applied to computers
00:27:18 and not just trying to figure out how animals learn.
00:27:22 Speaker 2: Then we get to the point the learning.
00:27:25 Yoshua: Right.
00:27:26 Speaker 2: So can you explain it to me? Because for everybody else, when you think of learning, you learn at school.
00:27:38 Yoshua: Yeah.
00:27:39 Speaker 2: You read books, and there's someone telling you how the world works.
00:27:47 So what, in your concept, is the definition of learning?
00:27:50 Yoshua: Yes, my definition of learning is not the kind of learning that people think about when they're in school
00:27:57 and listening to a teacher. Learning is something we do all the time.
00:28:01 Our brain is changing all the time in response to what we're seeing, experiencing. And it's an adaptation.
00:28:08 And we are not just storing in our brain our experiences, it's not learning by heart, that's easy,
00:28:18 a file in a computer is like learning by heart. You can store facts.
00:28:23 But that's trivial, that's not what learning really is about.
00:28:28 Learning is about integrating the information we are getting through experience into some more abstract form that
00:28:38 allows us to take good decisions. That allow us to predict what will happen next.
00:28:43 That allow us to understand the connections between things we've seen. So, that's what's learning is really about.
00:28:52 In my field, we talk about the notion of generalization.
00:28:56 So, the machine can generalize from things it has seen and learned from, to new situations.
00:29:05 That's the kind of learning we talk about in my field.
00:29:09 And the way we typically do it in machines and how we think it's happening in the brain is that it's a slow,
00:29:17 gradual process. Each time you live an experience, one second of your life, there's gonna be some changes in your brain.
00:29:26 Small changes. So it's like your whole system is gradually shifting towards what would make it take better decisions.
00:29:38 So that's how you get to be intelligent, right?
00:29:40 Because you learn, meaning you change the way you perceive and act, so that next time you see something,
00:29:49 you will have some experience similar to what happened before, you would act better
00:29:54 or you would predict better what would have happened.
00:29:57 Speaker 2: So, it's very experience-based.
00:29:59 Yoshua: Yes, learning is completely experience-based.
00:30:03 Of course, in school we think of learning as, teaching knowledge from a book or some blackboard.
00:30:13 But that's not really the main kind of learning.
00:30:18 There is some learning happening when the student integrates all that information and tries to make sense of it.
00:30:23 But just storing those facts is kind of useless.
00:30:30 Speaker 2: The difference is that you have to have an interest in it.
00:30:32 Yoshua: Well motivation for humans is very important. Because we are wired like this.
00:30:36 The reason we are wired like this is there are so many things happening around us that emotions help us to filter
00:30:45 and focus on some aspects more than others, those that matter to us, right?
00:30:50 That's a motivation, might be fear as well sometimes.
00:30:53 But for computers, basically they will learn what we ask them to learn, we don't need to introduce motivation
00:31:02 or emotions. Up to now, we haven't needed to do that.
00:31:05 Speaker 2: But when you explain this deep learning.
00:31:08 Yoshua: Yes, yes.
00:31:12 Speaker 2: Maybe from the perspective of a machine and a human: you can teach a computer experience,
00:31:20 I think, but not interest or—
00:31:29 Yoshua: Well you can, emotions are something you're born with.
00:31:35 We're born with circuits that make us experience emotions because some situations matter more to us.
00:31:47 So, in the case of the computer, we also, in a sense,
00:31:51 hardwire these things by telling the computer: well, this matters more than that,
00:31:57 and you have to learn to predict well here, and here it matters less.
00:32:01 So we don't call that emotions but it could play a similar role.
00:32:06 Speaker 2: It looks like emotions.
00:32:08 Yoshua: Right.
00:32:08 Speaker 2: But then it's still a program.
00:32:10 Yoshua: Absolutely so AI is completely programmed.
00:32:13 Speaker 2: Yeah.
00:32:14 But as I understand it, you are researching in this area where these programs go beyond programming.
00:32:27 That they start to think for themselves.
00:32:27 Yoshua: Okay. So there's an interesting connection between learning and programming.
00:32:29 So the traditional way of putting knowledge into computers
00:32:33 is to write a program that essentially contains all our knowledge.
00:32:37 And step by step you tell the computer, if this happens you do this, and then you do that, and then you do that,
00:32:43 and then this happens you do that, and so on and so on. That's what a program is.
00:32:47 But when we allow the computer to learn we also program it, but the program that is there is different.
00:32:55 It's not a program that contains the knowledge we want a computer to have.
00:33:00 We don't program the computer with the knowledge of doors and cars and images and sounds.
00:33:05 We program the computer with the ability to learn and then the computer experiences.
00:33:11 You know, images, or videos, or sounds, or texts and learns the knowledge from those experiences.
00:33:21 So you can think of the learning program as a meta program and we have something like that in our brain.
00:33:28 If one part of your cortex dies, say you have an accident,
00:33:31 that part used to be doing some job like maybe interpreting music or some types of songs or something.
00:33:40 Well, if you continue listening to music then some other part will take over
00:33:47 and that function may have been sort of impaired for some time
00:33:52 but then it will be taken by some other part of your cortex. What does that mean?
00:33:57 It means that the same program that does the learning, works there in those two regions of your cortex.
00:34:04 The one that used to be doing the job, and the one that does it now.
00:34:08 And that means that your brain has this general purpose learning recipe that it can apply to different problems
00:34:19 and that different parts of your brain will be specialized on different tasks,
00:34:25 depending on what you do and how the brain is connected.
00:34:29 If we remove that part of your brain then some other parts will start doing the job,
00:34:35 if the job is needed because you do those experiences, right?
00:34:39 So if I had a part of my brain that was essentially dealing with playing tennis and that part dies,
00:34:47 I'm not gonna be able to play tennis anymore. But if I continue practicing it's gonna come back.
00:34:55 And that means that the same learning, general purpose learning recipe is used everywhere at least in the cortex.
00:35:04 And this is important not just for understanding brains,
00:35:07 but for companies building products, because we have this general-purpose recipe,
00:35:11 or family of recipes, that can be applied to many tasks.
00:35:17 The only thing that really differs between those different tasks is the data, the examples that the computer sees.
00:35:23 So that's why companies are so excited about this because they can use this for many problems that they wanna solve so
00:35:28 long as they can teach the machine by showing it examples.
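The "same general-purpose recipe, different data" point can be illustrated with a deliberately tiny learner. Everything here is hypothetical (the two tasks, the numbers, and the simple boundary rule, which assumes one class sits entirely below the other): the one piece of fixed code contains no task knowledge, and only the examples it is shown differ.

```python
def fit(examples):
    """A general-purpose learning recipe for 1-D points: find the boundary
    between two classes. The code knows nothing about any specific task;
    everything task-specific comes from the examples."""
    lows = [x for x, label in examples if label == 0]
    highs = [x for x, label in examples if label == 1]
    boundary = (max(lows) + min(highs)) / 2.0
    return lambda x: 0 if x < boundary else 1

# The same recipe, specialized by two different streams of data
# (made-up tasks: light vs heavy objects, low vs high pitches).
by_weight = fit([(0.2, 0), (0.4, 0), (1.1, 1), (1.5, 1)])
by_pitch = fit([(100, 0), (200, 0), (800, 1), (900, 1)])

print(by_weight(0.3), by_pitch(850))  # 0 1
```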
00:35:30 Speaker 2: Is it always, is learning always positive?
00:35:34 Yoshua: Learning is positive by construction in the sense that it's moving the learner towards a state of understanding
00:35:51 of its experiences. So in general, yes, because learning is about improving something.
00:36:00 Now, if the something you're improving is not the thing you should be improving, you could be in trouble.
00:36:06 People can be trained into a wrong understanding of the world and they start doing bad things,
00:36:14 so that's why education is so important for humans.
00:36:18 And for machines right now the things we are asking the machines to do are very simple like understanding the content
00:36:25 of images and texts and videos and things like that.
00:36:26 Speaker 2: So learning is not per se positive, because you can also learn wrong things.
00:36:30 Yoshua: Right, but if you're just observing things around you, taken randomly, then it's just what the world is, right?
00:36:40 Speaker 2: And that's the state of the some kind of primitive learning of computers right now or?
00:36:45 Yoshua: Right now, yeah the learning the computers do is very primitive. It's mostly about perception.
00:36:53 And in the case of language some kind of semantic understanding, but it's still a pretty low level understanding.
00:37:00 Speaker 2: Is it possible for you to explain that in a simple way how is it possible for a computer to learn?
00:37:11 Yoshua: So the way that the computer is learning is by small iterative changes, right?
00:37:22 So let's go back to my artificial neural network, which is a bunch of neurons connected to each other,
00:37:29 and they're connected through these synaptic connections.
00:37:33 At each of these connections there is the strength of the connection which controls how a neuron influences another
00:37:39 neuron. So you can think that strength as a knob. And what happens during learning is those knobs change.
00:37:48 We don't know how they change in the brain, but in our algorithms, we know how they change.
00:37:52 And we understand mathematically why it makes sense to do that, and they change a little bit each time you see an example.
00:37:58 So I show the image of a cat, but the computer says it's a dog.
00:38:03 So I'm going to change those knobs so that it's going to be more likely that the computer is going to say cat.
00:38:10 Maybe the computer outputs a score for dog and a score for cat.
00:38:15 And so what we want to do is decrease the score for dog and increase the score for cat.
00:38:21 So that the computer, eventually, after seeing many millions of images, starts seeing the right class more often
00:38:32 and eventually gets it as well as humans.
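In modern frameworks, the knob-turning Yoshua describes is implemented as gradient descent on the connection weights. A minimal numpy sketch of one such update; the 16-pixel "image", the learning rate, and the two-class setup are illustrative assumptions, not anything from the interview:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": a flat vector of pixel intensities (illustrative only).
x = rng.random(16)

# One "knob" per pixel per class: weights mapping pixels to class scores.
W = rng.normal(scale=0.1, size=(2, 16))  # rows: [cat, dog]

def scores(W, x):
    # Softmax turns raw scores into probabilities for [cat, dog].
    s = W @ x
    e = np.exp(s - s.max())
    return e / e.sum()

target = np.array([1.0, 0.0])  # the teacher says: this is a cat

before = scores(W, x)
# One small learning step: nudge each knob against the error gradient
# (gradient of the cross-entropy loss for a softmax output layer).
grad = np.outer(before - target, x)
W -= 0.5 * grad
after = scores(W, x)

# After the step, the cat score went up and the dog score went down, a little.
assert after[0] > before[0] and after[1] < before[1]
```

Repeating this small step over millions of labeled images is, in essence, the "many small changes" he describes.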
00:38:35 Speaker 2: That still sounds like putting in just enough data, or less data, for a computer to recognize something.
00:38:43 But how do you know that the computer is learning? How do you know
00:38:48 Yoshua: Well, you can test it on new images.
00:38:51 So if the computer was only learning by heart, copying the examples that it has seen,
00:38:56 it wouldn't be able to recognize a new image of, say, a new breed of dog, or a new angle, new lighting.
00:39:04 At the level of pixels, those images could be very, very different.
00:39:11 But if the computer really figured out catness,
00:39:15 at least from the point of view of images it will be able to recognize new images of new cats, taking on new postures
00:39:25 and so on and that's what we call generalization.
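The generalization check Yoshua describes can be sketched in a few lines: train on some examples, then measure accuracy only on examples the learner has never seen. Everything here, the two Gaussian "classes" and the nearest-centroid learner, is a toy stand-in, not his actual experiments:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two synthetic "classes" of points (stand-ins for cat/dog images).
cats = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2))
dogs = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(200, 2))

# Hold out examples the learner never sees during training.
train_cats, test_cats = cats[:150], cats[150:]
train_dogs, test_dogs = dogs[:150], dogs[150:]

# A very simple learner: remember one centroid per class.
c_cat = train_cats.mean(axis=0)
c_dog = train_dogs.mean(axis=0)

def predict(p):
    # Classify by whichever centroid is closer.
    return "cat" if np.linalg.norm(p - c_cat) < np.linalg.norm(p - c_dog) else "dog"

# Generalization check: accuracy on NEW points, not memorized ones.
correct = sum(predict(p) == "cat" for p in test_cats)
correct += sum(predict(p) == "dog" for p in test_dogs)
accuracy = correct / (len(test_cats) + len(test_dogs))
assert accuracy > 0.95  # well-separated clusters generalize easily
```

A learner that merely memorized its training points would score no better than chance on genuinely novel inputs; high held-out accuracy is the evidence of learning he refers to.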
00:39:28 So we do that all the time, we test the computer to see if it can generalize to new examples, new images,
00:39:35 new sentences.
00:39:40 Speaker 2: Can you show that to us? Not right now, but- yeah, you can show us that proof of learning skills.
00:39:43 Yoshua: Yeah, yeah I'll try to show you some examples of that, yeah.
00:39:49 Speaker 2: Great, so is there something, I'm missing that right now for understanding deep learning?
00:39:57 Yoshua: Yes.
00:39:59 Speaker 2: Okay, tell me.
00:40:00 Yoshua: I thought this was a statement, not a question.
00:40:04 Well, but yes, of course I [LAUGH] think there are many things that you are missing.
00:40:10 So there are many, many interesting questions in deep learning
00:40:14 but one of the interesting challenges has to do with the question of supervised learning versus unsupervised learning.
00:40:25 Right now, the way we teach the machine to do things
00:40:30 or to recognize things is we use what's called supervised learning where we tell the computer exactly what it should do
00:40:37 or what output it should have for a given input.
00:40:41 So let's say I'm showing it the image of a cat again, I tell the computer, this is a cat.
00:40:49 And I have to show it millions of such images.
00:40:53 That's not the way humans learn to see and understand the world or even understand language.
00:41:00 For the most part, we just make sense of what we observe without having a teacher that is sitting by us
00:41:11 and telling us every second of our life. This is a cow, this is a dog.
00:41:16 Speaker 2: A supervisor.
00:41:16 Yoshua: That's right. There is no supervisor.
00:41:19 We do get some feedback but it's pretty rare and sometimes it's only implicit.
00:41:25 So you do something and you get a reward but you don't know exactly what it was you did that gave you that reward.
00:41:37 Or you talk to somebody, the person is unhappy, and you're not sure exactly what you did that was wrong,
00:41:44 and the person's not gonna tell you, in general, what you should have done.
00:41:48 So this is called reinforcement learning: you get some feedback, but it's a very weak type.
00:41:54 You did well or you didn't do well.
00:41:56 You have an exam and you achieved 65%, but if you don't know what the errors were
00:42:05 or what the right answers are, it's very difficult to learn from that.
00:42:08 But we are able to learn from that, from very weak signals or no reinforcement at all, no feedback,
00:42:16 just by observation and trying to make sense of all of these pieces of information.
00:42:21 That's called unsupervised learning.
00:42:23 And we're not there yet; we are much more advanced with supervised learning than with unsupervised learning.
00:42:32 So all of the products that these companies are building right now, it's mostly based on supervised learning.
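The weak pass/fail feedback Yoshua contrasts with supervised learning is the reinforcement learning setting: the learner is never told the right answer, only whether a try paid off. A tiny illustrative sketch (the two actions and their payoff rates are invented for the example):

```python
import random

random.seed(0)

# Two possible actions; the learner is never told which is "correct",
# only whether a given try paid off (a weak, noisy reward signal).
true_payoff = {"A": 0.8, "B": 0.3}

estimates = {"A": 0.0, "B": 0.0}
counts = {"A": 0, "B": 0}

for t in range(2000):
    # Mostly exploit the best-looking action, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(["A", "B"])
    else:
        action = max(estimates, key=estimates.get)
    reward = 1.0 if random.random() < true_payoff[action] else 0.0
    counts[action] += 1
    # Incremental average: a small change toward what was just observed.
    estimates[action] += (reward - estimates[action]) / counts[action]

# With only pass/fail feedback, the learner still finds the better action.
assert estimates["A"] > estimates["B"]
```

Even this trivial learner needs thousands of tries to sort out two actions, which gives a feel for why such weak signals are so much harder to learn from than explicit labels.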
00:42:38 Speaker 2: So the next step is unsupervised learning?
00:42:41 Yoshua: Yes, yes.
00:42:42 Speaker 2: Does that mean that unsupervised learning that the computer can think for themselves?
00:42:47 Yoshua: That means the computer will be more autonomous, in some sense. That we don't need.
00:42:56 Speaker 2: That's a hard one.
00:42:57 Yoshua: More autonomous.
00:42:59 Speaker 2: Autonomous computer?
00:43:00 Yoshua: Well, more autonomous in its learning. We're not talking about robots here, right?
00:43:04 We are just talking about computers gradually making sense of the world around us by observation.
00:43:13 And we probably will still need to give them some guidance, but the question is how much guidance.
00:43:20 Right now we have to give them a lot of guidance. Basically we have to spell everything very precisely for them.
00:43:27 So we're trying to move away from that so that they can essentially become more intelligent because they can take
00:43:34 advantage of all of the information out there, which doesn't come with a human that explains all the bits and pieces.
00:43:44 Speaker 2: But when a computer starts to learn.
00:43:47 Yoshua: Yes.
00:43:48 Speaker 2: Is it possible to stop the computer from learning? [LAUGH]
00:43:51 Yoshua: Sure.
00:43:54 Speaker 2: How? It sounds like if it starts to learn, then it learns.
00:43:58 Yoshua: It's just a program running. It's stored in files. There's nothing like, there's no robot.
00:44:05 There is no, I mean at least in the work we do,
00:44:08 it's just a program that contains files that contain those synaptic weights for example.
00:44:19 And as we see more examples we change those files so that they will correspond to taking the right decisions.
00:44:27 But there's no, those computers don't have a consciousness, there's no such thing right now, at least, for a while.
00:44:44 Speaker 2: Is it right when I say, well, deep learning or self learning computer is becoming more autonomous.
00:44:51 Yoshua: Autonomous in its learning, right?
00:44:54 Speaker 2: Yes, free.
00:44:56 Yoshua: Again, it's probably gonna be a gradual thing where the computer requires less and less of our guidance.
00:45:04 So, if you think about humans, we still need guidance.
00:45:09 If you take a human baby, nobody wants to do that experiment,
00:45:15 but you can imagine a baby being isolated from society. That child probably would not grow to be very intelligent.
00:45:24 Would not understand the world around us as well as we do. That's because we've had parents, teachers and so on, guide us.
00:45:34 And we've been immersed in culture.
00:45:37 So all that matters, and it's possible that it will also be required for computers to reach our level of intelligence.
00:45:44 The same kind of attention we're giving to humans, we might need to give to computers.
00:45:48 But right now, the amount of attention we have to give to computers for them to learn about very simple things,
00:45:53 is much larger than what we need to give to humans.
00:45:57 Humans are much more autonomous in their learning than machines are right now.
00:46:01 So we have a lot of progress to do in that direction.
00:46:04 Speaker 2: Is the difference also just the simple fact that we have biology?
00:46:09 Yoshua: Well, biology is not magical. Biology can be understood.
00:46:17 That's what biologists are trying to do,
00:46:19 and we understand a lot, but as far as the brain is concerned there are still big holes in our understanding.
00:46:25 Speaker 2: A baby grows but a computer doesn't.
00:46:28 Yoshua: Sure it can, we can give it more memory and so on right? So you can grow the size of the model.
00:46:40 That's not a big obstacle.
00:46:42 I mean computing power is an obstacle, but I'm pretty confident that over the next few years we're gonna see more
00:46:50 and more computing power available as it has been in the past,
00:46:55 that will make it more possible to train models to do more complex tasks.
00:47:00 Speaker 2: So how do you tackle all the people who think this is a horror scenario?
00:47:10 Of course, people start to think about growing computers and it's not about that.
00:47:15 Yoshua: So I think.
00:47:18 Speaker 2: You have to have a stand point.
00:47:20 Yoshua: That's right. I do. So first of all, I think there's been a bit of excessive expression of fear about AI.
00:47:34 Maybe because the progress has been so fast, it has made some people worried.
00:47:40 But if you ask people like me, who are into it every day,
00:47:46 they're not worried, because they can see how stupid the machines are right now.
00:47:51 And how much guidance they need to move forward.
00:47:55 So to us, it looks like we're very far from human-level intelligence,
00:48:01 and we even have no idea whether one day computers will be smarter than us. Now, that may be a short-term view.
00:48:11 What will happen in the future is hard to say, but we can think about it.
00:48:18 And I think it's good that some people are thinking about the potential dangers.
00:48:25 I think it's difficult right now to have a grasp on what could go wrong.
00:48:31 But with the kind of intelligence that we're building in machines right now, I'm not very worried.
00:48:37 It's not the kind of intelligence that I can foresee exploding, becoming more and more intelligent by itself.
00:48:46 I don't think that's plausible for the kinds of deep learning methods and so on.
00:48:51 Even if they were much more powerful and so on, it's not something I can envision.
00:48:56 That being said, it's good that there are people who are thinking about these long term issues.
00:49:02 One thing I'm more worried about is the use of technology now, or in the next couple of years or five or ten years.
00:49:11 Where the technology could be developed and used in a way that could either be very good for many people
00:49:19 or not so good for many people.
00:49:21 And so for example, military use and other uses, which I think I would consider not appropriate,
00:49:28 are things we need to worry about.
00:49:31 Speaker 2: All right, can you name examples of that?
00:49:35 Yoshua: Yeah, so there's been a fuss
00:49:37 and a letter signed by a number of scientists who tried to tell the world we should have a ban on the use of AI for
00:49:48 autonomous weapons that could essentially take the decision to kill by themselves.
00:49:53 So that's something that's not very far-fetched in terms of technology, given the science.
00:49:59 Basically, the science is there, it's a matter of building these things.
00:50:03 But it's not something we would like to see, and there could be an arms race of these things.
00:50:09 So we need to prevent it, the same way that, collectively, the nations decided to have bans on biological weapons
00:50:18 and chemical weapons and, to some extent, on nuclear weapons. The same thing should be done for that.
00:50:25 And then there are other uses of this technology, especially as it matures,
00:50:30 which I think are questionable from an ethical point of view.
00:50:32 So I think that the use of these technologies to convince you to do things, like with advertising,
00:50:40 trying to influence you, maybe think about influencing your vote, right?
00:50:50 As the technology becomes really stronger,
00:50:53 you could imagine people essentially using this technology to manipulate you in ways you don't realize.
00:51:01 That is good for them, but is not good for you.
00:51:05 And I think we have to start being aware of that and all the issues of privacy are connected to that as well.
00:51:14 But in general, currently, companies are using these systems for advertisements,
00:51:21 where they're trying to predict what they should show you, so that you will be more likely to buy some product, right?
00:51:29 So it seems not so bad, but if you push it, they might bring you into doing things that are not so good for you.
00:51:41 I don't know, like smoking or whatever, right?
00:51:45 Speaker 2: Well, we just stopped at a point where I was going to ask you:
00:51:54 is that why you wrote the manifesto about diversity and thinking? Because I'll show you, [FOREIGN] Okay.
00:52:11 Speaker 2: Because computers,
00:52:12 Speaker 2: You can teach them a lot of things, but it's almost unimaginable that you can teach them diversity.
00:52:24 Am I correct that that has a connection?
00:52:26 Yoshua: If you want, I will elaborate now. So you're asking me about diversity,
00:52:34 Yoshua: And I can say several things.
00:52:40 First of all, people who are not aware of the kinds of things we do in AI, with machine learning, deep learning, and so on,
00:52:48 may not realize that the algorithms, the methods we're using, already include a lot of what may look like diversity,
00:53:00 creativity. So for the same input, the computer could produce different answers.
00:53:05 And so there's a bit of randomness, just like for us.
00:53:08 Twice in the same situation, we don't always take the same decision.
00:53:12 And there are good reasons for that, both for us and for computers. So that's the first part of it.
00:53:17 But there's another aspect of diversity, which I have studied in a paper a few years ago,
00:53:23 which is maybe even more interesting. Diversity is very important, for example, for evolution to succeed.
00:53:35 Because evolution performs a kind of search in the space of genomes, the blueprint of each individual.
00:53:46 Yoshua: And up to now, machine learning has considered what happens in a single individual, how we learn,
00:53:57 how a machine can learn.
00:53:59 But it has not really investigated much the role of having a group of individuals learning together, a kind of society.
00:54:09 And in this paper a few years ago, I postulated that learning in an individual could get stuck.
00:54:19 That if we were alone learning by observing the world around us, we might get stuck with a poor model of the world.
00:54:26 And we get unstuck by talking to other people and by learning from other people,
00:54:32 in the sense of they can communicate some of the ideas they have, how they interpret the world.
00:54:39 And that's what culture is about. Culture has many meanings, but that's the meaning that I have.
00:54:45 That it's not just the accumulation of knowledge, but how knowledge gets created through communication and sharing.
00:54:54 Yoshua: And what I postulated in that paper is that there is what's called an optimization problem
00:55:02 that can get the learning of an individual to not progress anymore.
00:55:08 In a sense that, as I said before, learning is a lot of small changes,
00:55:13 but sometimes there's no small change that really makes you progress.
00:55:19 So you need some kind of external kick that brings a new light to things.
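In optimization terms, the "getting stuck" Yoshua postulates is converging to a poor local minimum, and the "external kick" is anything that moves the learner into a better basin. A toy sketch on a made-up one-dimensional landscape (not his actual model):

```python
# f has a shallow local minimum near x = 0.94 and a better one near x = -1.06
# (an invented landscape, just to illustrate "getting stuck").
def f(x):
    return x**4 - 2*x**2 + 0.5*x

def grad(x):
    return 4*x**3 - 4*x + 0.5

x = 1.0  # start in the basin of the worse minimum
for _ in range(500):
    x -= 0.01 * grad(x)  # many small changes, as described above
stuck = x  # gradient descent converges to the nearby local minimum

# An external "kick" (here: simply being pushed elsewhere) can unstick it.
x = stuck - 2.5
for _ in range(500):
    x -= 0.01 * grad(x)

assert f(x) < f(stuck)  # the kick led to a strictly better solution
```

No sequence of small downhill steps escapes the first basin on its own; only the kick, the analogue of an idea communicated by someone else, reaches the better solution.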
00:55:25 And the connection to evolution, actually,
00:55:32 is that this small kick we get from others is like building on top of existing solutions that others have come
00:55:42 up with. And of course, the process of science is very much like this. We're building on other scientists' ideas.
00:55:47 But it's true for culture, in general.
00:55:50 And this actually makes the whole process of building more intelligent beings much more efficient.
00:55:59 In fact, we know that since humans have made progress thanks to culture
00:56:10 and not just to evolution, our intelligence has been increasing much faster.
00:56:18 So evolution is slow, whereas you can think of culture,
00:56:24 the evolution of culture, as a process that's much more efficient. Because we are manipulating the right objects.
00:56:31 So what does this mean in practice?
00:56:33 It means that just like evolution needs diversity to succeed, because there are many different
00:56:40 variants of the same type of genes that are randomly chosen and tried,
00:56:49 and the best ones combine together to create new solutions, it's just like this in cultural evolution,
00:56:56 which is really important for our intelligence. As I was saying, we need diversity,
00:57:02 we need not just one school of thought, we need to allow all kinds of exploration, most of which may fail.
00:57:09 So, in science we need to be open to new ideas, even if it's very likely it's not gonna work,
00:57:16 it's good that people explore, otherwise we're gonna get stuck.
00:57:20 Otherwise, in the space of possible interpretations of the world, it may take forever before we escape.
00:57:27 Speaker 2: It is like doing basic research but you don't have-
00:57:31 Yoshua: Yes.
00:57:33 Speaker 2: A specific goal.
00:57:33 Yoshua: That's right so basic research is exploratory, it's not trying to build a product.
00:57:38 It's just trying to understand and it's going in all possible directions.
00:57:43 According to our intuitions of what may be more interesting but without a strong constraint.
00:57:48 So yeah, basic research is like this, but there's a danger, because humans like fashionable things, and trends,
00:58:00 and comparing themselves to each other, and so on, that we're not giving enough freedom for exploration.
00:58:10 And it's not just science, it's in general, right in society we should allow a lot more freedom.
00:58:15 We should allow marginal ways of being and doing things to coexist.
00:58:22 Speaker 2: But if you allow this freedom,
00:58:23 of course most people think, well, let's not go that way, because then you have autonomous, self-thinking computers
00:58:27 Speaker 2: Creating their own diversity,
00:58:28 and so there are a lot of scenarios which people think of because they don't know, and which scare them, so this.
00:58:46 Yoshua: Well, it's a gamble and I'm more on the positive side.
00:58:52 I think that the rewards we can get by having more intelligence in our machines are immense.
00:58:59 And the way I think about it is, it's not a competition between machines and humans.
00:59:05 Technology is expanding what we are, thanks to technology we're now already much stronger
00:59:14 and more intelligent than we were.
00:59:18 In the same way that the industrial revolution has kind of increased our strength and our ability to do things physically,
00:59:26 the computer revolution and now the AI revolution is gonna continue to increase our cognitive abilities.
00:59:34 Speaker 2: That sounds very logical, but I can imagine you must get tired of all those people who don't,
00:59:41 who fear this development.
00:59:42 Yoshua: Right,
00:59:44 but I think we should be conscious that a lot of that fear is due to a projection into things we are familiar with.
00:59:53 So, we are thinking of AI like we see them in movies,
00:59:58 we're thinking of AI like we see some kind of alien from another planet, like we see animals.
01:00:03 When we think about another being, we think that other being is like us and so we're greedy.
01:00:10 We want to dominate the rest, and if our survival is at stake, we're ready to kill, right?
01:00:16 So we project that some machine is gonna be just like us, and if that machine is more powerful than we are,
01:00:25 then we're in deep trouble, right?
01:00:26 So, it's just because we are making that projection, but actually the machines are not some being that has an ego
01:00:35 and a survival instinct. It's actually something we decide to put together.
01:00:41 It's a program and so we should be smart enough
01:00:44 and wise enough to program these machines to be useful to us rather than go towards the wrong needs.
01:00:52 They will cater to our needs because we will design them that way.
01:00:56 Speaker 2: I understand that, but then there's also this theory of suppose you can develop machines
01:01:03 or robots that can self-learn. So, if that grows with this power of.
01:01:17 Yoshua: Yes.
01:01:18 Speaker 2: There is some acceleration in their intelligence or that's.
01:01:27 Yoshua: Maybe, maybe not. That's not the way I see it.
01:01:32 What you're saying is appealing if I were reading a science fiction book.
01:01:36 But it doesn't correspond to how I see AI, and the kind of AI we're doing. I don't see such acceleration;
01:01:47 in fact, what I see is the opposite. What I foresee is more like barriers than acceleration. So our-
01:01:56 Speaker 2: Slowing you down?
01:01:57 Yoshua: Yes, so our experience in research is that we make progress.
01:02:01 And then we encounter a barrier, a difficult challenge, a difficulty, the algorithm goes so far
01:02:07 and then can't make progress. Even if we have more computer power, that's not really the issue.
01:02:13 The issues are basically computer science issues: things get exponentially harder,
01:02:21 meaning much, much harder, as you try to solve more complex problems.
01:02:27 So it's actually the opposite, I think, that happens.
01:02:30 And I think that would also explain maybe to some extent why we're not super intelligent ourselves.
01:02:37 I mean, in the sense that our intelligence is kind of limited. There are many things for which we make the wrong decision.
01:02:44 And then it's true also of animals.
01:02:46 Why is it that some animals have much larger brains than we do and they're not that smart?
01:02:54 You could come up with a bunch of reasons, but it's not just about having a bigger brain.
01:02:59 And their brain, a mammal's brain is very very close to ours. So it's hard to say.
01:03:08 Now, I think it's fair to consider the worst scenarios and to study them,
01:03:14 and have people seriously considering what could happen and how we could prevent any dangerous thing.
01:03:21 I think it's actually important that some people do that.
01:03:24 But, right now I see this as a very long term potential, and the most plausible scenario is not that,
01:03:31 according to my vision.
01:03:32 Speaker 2: Does it have to do with the fact that you tried to develop this deep learning? That if you know how it works,
01:03:42 then you also know how to deal with it. Is that why you are confident in not seeing any problem?
01:03:49 Yoshua: You're right that I think we are more afraid of things we don't understand.
01:03:54 And scientists who are working with deep learning everyday don't feel that they have anything to fear because they
01:04:03 understand what's going on.
01:04:04 And they can see clearly that there is no danger that's foreseeable, so you're right that's part of it.
01:04:11 There's the psychology of seeing the machine as some other being. There's the lack of knowledge.
01:04:17 There's influence of science fiction.
01:04:18 So all these factors come together and also the fact that the technology has been making a lot of progress recently.
01:04:23 So all of that I think creates kind of an exaggerated fear.
01:04:27 I'm not saying we shouldn't have any fear I'm just saying it's exaggerated right now.
01:04:31 Speaker 2: Is your main part of life, or your, how you fill the day, is it thinking? Is your work thinking?
01:04:52 What do you physically do?
01:04:54 Yoshua: I'm thinking all the time, yes.
01:04:57 And whether I'm thinking about the things that matter to me the most, maybe not enough.
01:05:03 Managing a big institute, with a lot of students, and so on, means my time is dispersed.
01:05:10 But when I can focus, or when I'm in a scientific discussion with people, and so on,
01:05:18 of course there's a lot of thinking, and it's really important; that's how we move forward.
01:05:24 Speaker 2: Yeah, what does that mean? The first question I asked you was about what is thinking.
01:05:30 Yoshua: Yes.
01:05:31 Speaker 2: And now we are back to that question.
01:05:33 Yoshua: Yeah, yeah, so, so.
01:05:34 Speaker 2: You are a thinker so what happens.
01:05:40 Yoshua: Okay.
01:05:40 Speaker 2: During the day?
01:05:41 Yoshua: Yes.
01:05:42 Speaker 2: With you?
01:05:43 Yoshua: So when I listen to somebody explaining something.
01:05:48 Maybe one of my students talking about an experiment, or another researcher talking about their idea.
01:05:55 Something builds up in my mind to try to understand what is going on.
01:06:03 And that's already thinking but then things happen so other pieces of information and understanding connect to this.
01:06:12 And I see some flaw or some connection and that's where the creativity comes in.
01:06:24 And then I have the impulse of talking about it. And that's just one turn in a discussion. And we go like this. And,
01:06:39 Yoshua: New ideas spring like this. And it's very, very rewarding.
01:06:43 Speaker 2: Is it possible for you not to think?
01:06:47 Yoshua: Well, yes. Yes, it is possible not to think.
01:06:56 It's hard, but if you really relax or you are experiencing something very intensely,
01:07:06 then you're not into your thoughts, you're just into some present-time experience.
01:07:18 Speaker 2: Like it's more emotional rather than rational?
01:07:22 Yoshua: For example, yes, but thinking isn't just rational.
01:07:28 A lot of it is, I don't mean it's irrational,
01:07:31 but a lot of the thinking is something that happens somehow behind the scenes.
01:07:36 It has to do with intuition, it has to do with analogies, and it's not necessarily A causes B causes C.
01:07:49 It's not that kind of logical thinking that's going on in my mind most of the time.
01:07:54 It's much softer and that's why we need the math in order to filter and fine tune the ideas,
01:08:03 but the raw thinking is very fuzzy. But it's very rich because it's connecting a lot of things together.
01:08:15 And it's discovering the inconsistencies that allow us to move to the next stage and solve problems.
01:08:26 Speaker 2: Are you aware of that you are in that situation when you are thinking?
01:08:34 Yoshua: It happens to me.
01:08:36 I used to spend some time meditating and there you're learning to pay attention to your own thoughts.
01:08:46 So it does happen to me.
01:08:50 It happens to me also that I get so immersed in my thoughts in ordinary,
01:08:54 daily activities that people think that I'm very distracted and not present and they can be offended. [LAUGH]
01:09:02 Yoshua: But it's not always like this, sometimes I'm actually very, very present.
01:09:08 I can be very, very present with somebody talking to me and that's really important for my job, right?
01:09:16 Because if I listen to somebody in a way that's not complete, I can't really understand fully
01:09:29 and participate in a rich exchange.
01:09:34 Speaker 2: I can imagine that when you are focused on a thought.
01:09:39 Or you were having this problem and you're thinking about it, thinking about it.
01:09:40 And then you are in this situation that other people they want something else of you like attention for your
01:09:46 children or whatever. Then there's something in you which decides to keep focused or how does it work with you?
01:09:54 Yoshua: Right.
01:09:54 Speaker 2: You don't want to lose the thought of course.
01:09:57 Yoshua: That's right. So I write, I have some notebooks. I write my ideas.
01:10:05 Often when I wake up or sometimes an idea comes and I want to write it down, like if I was afraid of losing it.
01:10:11 But actually, the good ideas, they don't go away.
01:10:15 It turns out very often I write them down, but I don't even go back to reading them.
01:10:17 It's just that it makes me feel better, and it anchors the idea.
01:10:21 Also, the fact of writing an idea kind of makes it take more room in my mind.
01:10:31 And there's also something to be said about concentration.
01:10:36 So my work now, because I'm immersed with so many people, can be very distracting.
01:10:42 But to really make big progress in science, I also need times when I can be very focused
01:10:54 and where the ideas about a problem and different points of view and all the elements sort of fill my mind.
01:11:02 I'm completely filled with this.
01:11:05 That's when you can be really productive and it might take a long time before you reach that state.
01:11:11 Sometimes it could take years for a student to really go deep into a subject, so that they can be fully immersed in it.
01:11:20 That's when you can really start seeing through things and getting things to stand together solidly.
01:11:28 Now you can extend science, right? Now, when things are solid in your mind, you can move forward.
01:11:36 Speaker 2: Like a base of understanding?
01:11:38 Yoshua: Yeah, yeah, you need enough concentration on something to really make these moves.
01:11:46 There's the other mode of thinking, which is the brainstorming mode.
01:11:49 Where, out of the blue, I start a discussion, five minutes later something comes up.
01:11:54 So that's more random, and it could be very productive as well.
01:12:01 It depends on the stimulation from someone else.
01:12:03 If someone introduces a problem and immediately something comes up. And we have maybe an exchange.
01:12:13 So that's more superficial, but a lot of good things come out of that exchange because of the brainstorming.
01:12:19 Whereas there's the other mode of thinking, which is I'm alone, nobody bothers me.
01:12:26 Nobody's asking for my attention. I'm walking.
01:12:30 I'm half asleep, and there I can fully concentrate, eyes closed
01:12:36 or not really paying attention to what's going on in front of me, because I'm completely in my thoughts.
01:12:41 Speaker 2: When do you think?
01:12:46 Yoshua: When?
01:12:47 Speaker 2: During the day. Let's start a day.
01:12:49 Yoshua: So the two times when I do more of this concentrated thinking are usually when I wake up, and
01:13:00 when I'm walking back and forth between home and the university.
01:13:04 Speaker 2: Just enlarge this moment, what happens?
01:13:10 Yoshua: So I emerge to consciousness like everybody does every morning, eyes closed
01:13:20 and so on. Some thought related to a research question, or maybe a non-research question, comes up,
01:13:31 and if I'm interested in it I start going deeper into it. And-
01:13:37 Speaker 2: Still with your eyes closed?
01:13:39 Yoshua: Still with my eyes closed.
01:13:40 And then it's like if you see a thread dangling and you pull on it, and then more stuff comes down.
01:13:55 Now, you see more things and you pull more, and there's an avalanche of things coming.
01:14:02 The more you pull on those threads, the more new things come, or information comes together.
01:14:11 And sometimes it goes nowhere and sometimes that's how new ideas come about.
01:14:15 Speaker 2: And at what stage in this pulling the thread, do you open your eyes?
01:14:21 Yoshua: I could stay like this for an hour.
01:14:24 Speaker 2: Eyes closed.
01:14:25 Yoshua: Yeah.
01:14:26 Speaker 2: Pulling a thread.
01:14:28 Yoshua: Yeah.
01:14:28 Speaker 2: Seeing what's happening.
01:14:30 Yoshua: Yeah.
01:14:31 Often what happens is I see something that I hadn't seen before and I get too excited, so that wakes me up
01:14:37 and I want to write it down. So I have my notebook not far and I write it down.
01:14:42 Or I wanna send an email to somebody saying, I thought about this and it's like six in the morning [LAUGH]
01:14:48 and they wonder if I'm working all the time. [LAUGH]
01:14:53 Speaker 2: So, and then, what happens? Then you woke up.
01:14:58 Yoshua: Yeah.
01:15:02 Speaker 2: You opened your eyes, or you wrote it down?
01:15:02 Yoshua: So once I'm writing it down, my eyes are open and it's like, I feel relieved, it's like now I can go
01:15:13 and maybe have breakfast or take a shower, or something.
01:15:16 So having written it down, it might take some time to write it down, also sometimes I write an email
01:15:26 and then it's longer. And now the act of writing it is a different thing.
01:15:33 So there's the initial sort of spark of vision, which is still very fuzzy.
01:15:41 But then, when you have to communicate the idea to someone else, say, in an email,
01:15:46 you have to really make a different kind of effort. You realize some flaws in your initial ideas
01:15:51 and you have to clean it up and make sure it's understandable. Now it takes a different form.
01:15:58 And sometimes you realize, when you do it, that it was nothing really. Yeah, it was just a half-dream.
01:16:06 Speaker 2: What does your partner think of the ideas, that [INAUDIBLE]
01:16:10 Yoshua: I didn't understand the question.
01:16:13 Speaker 2: What does your partner think of this? That you wake up or you have to write something down?
01:16:24 Yoshua: She's fine with that. I think she's glad to see this kind of thing happen.
01:16:34 And she's happy for me that I live these very rewarding moments.
01:16:41 Speaker 2: But she understands what happens.
01:16:43 Yoshua: Yeah. I tell her often, I just had an idea. I wanna say, I just wanna-
01:16:54 Speaker 2: Does she understand?
01:16:55 Yoshua: What do you mean the science?
01:16:56 Speaker 2: Yes.
01:16:56 Yoshua: No, no but she understands that it's really important for me and this is how I move forward in my work
01:17:07 and also how emotionally fulfilling it is.
01:17:12 Speaker 2: Okay, then at a certain moment you have to go to work.
01:17:21 Yoshua: Yes.
01:17:23 Speaker 2: Let's talk about the walk you do every day.
01:17:23 Yoshua: Yes.
01:17:24 Speaker 2: So what does it mean?
01:17:26 Yoshua: So that walk, you can really think of it as a kind of meditation.
01:17:30 Speaker 2: Tell me about what you were doing if you want to.
01:17:31 Yoshua: So every day I walk from my house. Yeah, so every day I walk up the hill from my home to the university.
01:17:43 And it's about half an hour and it's more or less always the same path.
01:17:50 And because I know this path so well, I don't have to really pay much attention to what's going on.
01:17:55 And I can just relax and let thoughts go by, and eventually focus on something, or not.
01:18:05 Sometimes, maybe more in the evening when I'm tired, it's just a way to relax and let go.
01:18:17 Speaker 2: Quality thinking time is the problem.
01:18:19 Yoshua: Yes. Absolutely. Because I'm not bombarded by the outside world, I can just-
01:18:30 Speaker 2: Normal people are bombarded by all the signs, and cars, and sounds.
01:18:33 Yoshua: Yeah.
01:18:34 Speaker 2: And the weather.
01:18:36 Yoshua: Yeah I kind of ignore that. [LAUGH]
01:18:38 Speaker 2: So you are when there are thoughts around you.
01:18:46 Yoshua: When I was young I used to hit my head [LAUGH] on poles. [LAUGH]
01:18:54 Speaker 2: Because you were thinking [CROSSTALK] yourself.
01:18:57 Yoshua: Yeah, or reading while walking [LAUGH]
01:19:07 Speaker 2: [LAUGH] Now it doesn't happen anymore.
01:19:09 Yoshua: No.
01:19:10 Well, actually it does now, because sometimes I check my phone. [LAUGH] I see lots of people do that,
01:19:16 not paying attention to what's going on.
01:19:20 Speaker 2: Yeah.
01:19:20 Yoshua: Yeah.
01:19:20 Speaker 2: So, well, we will film your walk, maybe something will happen. Mm-hm.
01:19:27 [LAUGH] but during this walk, if you do it for such a long time, walking uphill.
01:19:32 Yoshua: Yeah.
01:19:32 Speaker 2: That's kind of a nice metaphor, walking up the hill.
01:19:43 Yoshua: Yeah.
01:19:43 Speaker 2: Are there, on this route situations, or positions, or places
01:19:47 when you had some really good ideas that you can remember?
01:19:51 Yoshua: Well.
01:19:52 Speaker 2: How was it?
01:19:53 I was waiting at the traffic light, or was it-
01:19:53 Yoshua: Yeah, I have some memories of specific moments going up,
01:20:02 thinking about some of the ideas that have been going through my mind over the last year in particular.
01:20:14 I guess these are more recent memories.
01:20:14 Speaker 2: Can you enlarge one of those moments like you did with waking up?
01:20:22 Yoshua: Right, right, so, as I said earlier, it's like if the rest of the world is in a haze, right.
01:20:32 It's like there's automatic control of the walking and watching for other people and cars, potentially.
01:20:43 But it's like if I had a 3-D projection of my thoughts in front of me, and they're taking up most of the room.
01:20:52 And my thinking works a lot by visualization. And I think a lot of people are like this.
01:20:59 It's a very nice tool that we have, using our kind of visual analogies to understand things.
01:21:09 Even if it's not a faithful portrait of what's going on, the visual analogies are really helping me, at least,
01:21:18 to make sense of things. So it's like I have pictures in my mind to illustrate what's going on, and it's like I see
01:21:26 Yoshua: What do I see? I see information flow, neural networks.
01:21:41 It's like if I was running a simulation in my mind of what would happen if
01:21:49 Yoshua: some rule of conduct was followed in this algorithm, in this process.
01:21:58 Speaker 2: And that's when you walk up the hill that's what you see?
01:22:00 Yoshua: Yeah, yeah, so it's like if I was running a computer simulation in my mind.
01:22:07 To try to figure out what would happen if I made such choices, or if we considered such an equation.
01:22:18 What would it entail? What would happen?
01:22:21 I imagine different situations, and then of course it's not as detailed as if we did a real computer simulation.
01:22:30 But it provides a lot of insight for what's going on.
01:22:36 Speaker 2: But then you walk up the hill everyday.
01:22:37 Yoshua: Yeah.
01:22:37 Speaker 2: And describe the most defining moment during one of those walks. Where were you? Where did you stand?
01:22:48 Which corner?
01:22:49 Yoshua: Well, so I remember a particular moment. I was walking on the north sidewalk of Queen Mary Street.
01:23:01 And I was seeing the big church we have there, which is called the Oratoire. It's beautiful.
01:23:15 Yoshua: And then I got this insight about perturbations propagating in brains.
01:23:24 Speaker 2: Maybe you want to do that sooner than that.
01:23:26 Yoshua: Yeah, yeah. From the beginning or just the last sentence?
01:23:29 Speaker 2: The last one. Go on.
01:23:30 Yoshua: And so, then I got this insight, visually of these perturbations happening on neurons.
01:23:40 That propagate to other neurons, that propagate to other neurons.
01:23:43 And like I'm doing with my hands, but it was something visual. Then suddenly I had the thought that this could work.
01:23:56 That this could explain things that I'm always trying to understand.
01:23:59 Speaker 2: How did this feel?
01:24:01 Yoshua: Great. I think of all the good feelings that we can have in life, the feeling we get when something clicks,
01:24:15 the eureka, is probably, maybe, the strongest and most powerful one that we can seek again and again.
01:24:24 And it only brings positive things. Maybe stronger than food and sex and those usual good things we get from experience.
01:24:39 Speaker 2: You mean this moment?
01:24:40 Yoshua: This- These kinds of moments provide pleasure.
01:24:46 Yoshua: It's a different kind of pleasure, just like there are different sensory pleasures and so on.
01:24:54 But it's really, I think, when your brain realizes something, understands something.
01:25:00 It's like you send yourself some molecules to reward you, saying, great, do it again if you can, right?
01:25:08 Speaker 2: Did you do it again?
01:25:10 Yoshua: Yeah, yeah, that's my job.
01:25:12 Speaker 2: So this is one moment at the church. Was it a coincidence that it was at a church?
01:25:18 Yoshua: No.
01:25:18 Speaker 2: That has nothing to do with it.
01:25:19 Yoshua: I don't believe in God.
01:25:20 Speaker 2: But, I don't believe in God either, but if you think of God as someone who created us as we are,
01:25:35 and he is our example.
01:25:37 Yoshua: Yes.
01:25:38 Speaker 2: Trying to understand what's happening in your head or your brain.
01:25:44 Yoshua: Yes.
01:25:45 Speaker 2: Isn't that what other people call God?
01:25:49 Speaker 2: Or looking for?
01:25:51 Yoshua: I'm not sure I understand your question.
01:25:58 Speaker 2: How can I rephrase that one?
01:26:11 Speaker 2: When you understand how a brain works-
01:26:18 Yoshua: Yes.
01:26:19 Speaker 2: Maybe then you understand who God is.
01:26:22 Yoshua: When we understand how our brains work we understand who we are to some extent,
01:26:29 I mean a very important part of us. That's one of my motivations.
01:26:33 And the process of doing it is something that defines us individually but also as a collective, as a group,
01:26:46 as a society. So there may be some connections to religion which are about connecting us to some extent.
01:26:54 Speaker 2: That's one of those layers you were talking about. Religion is one of them.
01:27:00 Yoshua: Mm-hm. Yep.
01:27:02 Speaker 2: So, but doing this walk [NOISE], this half an hour, then you were almost here, so-
01:27:09 Yoshua: Sometimes I think it's too short. But then, I have things to do, so.
01:27:16 Speaker 2: Let's continue this metaphor. It's uphill, when you are uphill, what do you feel?
01:27:24 Yoshua: I feel, so I'm going uphill, my body's working hard.
01:27:30 I mean, I'm not running, but I'm walking, and I can feel the muscles
01:27:35 warming up, and my whole body becoming full of energy. And I think that helps the brain as well.
01:27:46 That's how it feels, anyway.
01:27:49 Speaker 2: But I mean, when you- Moses went up the mountain and he saw the Promised Land. [LAUGH]
01:27:55 Speaker 2: When you go uphill what do you see?
01:27:58 Yoshua: When I go uphill [LAUGH] I see the university, but there is something that's related to your question.
01:28:10 Which is, each time I have these insights, these Eureka moments, it's like seeing the Promised Land.
01:28:16 It's very much like that. It's like you have a glimpse of something you had never seen before and it looks great.
01:28:27 And you feel like you now see a path to go there.
01:28:31 So I think it's very, very close to this idea of seeing the Promised Land.
01:28:37 But of course it's not just one Promised Land.
01:28:39 It's one step to the next valley, and the next valley, and that's how we climb really big mountains.
01:28:46 Speaker 2: So is there anything you want to add to this yourself? Because I think we are ready now to go uphill.
01:28:58 Yoshua: No, I'm fine.
01:29:00 Speaker 2: Maybe just a few questions about Friday, so what you're going to do. What are you going to do on Friday?
01:29:12 Yoshua: So Friday I'm going to make a presentation to the rest of the researchers in the lab in the institute about one
01:29:25 of the topics I'm most excited about these days.
01:29:30 Which is trying to bridge the gap between what we do in machine learning, what has to do with AI
01:29:38 and building intelligent machines and the brain. I'm not really a brain expert.
01:29:44 I'm more a machine learning person, but I talk to neuroscientists and so on.
01:29:48 And I try, I really care about the big question of how is the brain doing the really complex things that it does.
01:29:57 And so the work I'm going to talk about on Friday is one small step in that direction that we've achieved in the last few
01:30:08 months.
01:30:10 Speaker 2: On your path to the Promised Land?
01:30:12 Yoshua: Yes, exactly, that's right.
01:30:14 And I've been making those small steps on this particular topic for about a year and a half.
01:30:21 So it's not like just something happens and you're there, right?
01:30:27 It's a lot of insights that make you move and get understanding. And science makes progress by steps.
01:30:41 Most of those steps are small, some are slightly bigger.
01:30:44 Seen from the outside, sometimes people have the impression that there's this big breakthrough.
01:30:49 And journalists like to talk about breakthrough, breakthrough, breakthrough.
01:30:52 But actually science is very, very progressive, because we gradually understand the world better.