1
00:00:15,950 --> 00:00:17,090
Thank you.

2
00:00:17,560 --> 00:00:18,740
I’m Joscha.

3
00:00:19,430 --> 00:00:23,420
I came into doing AI the traditional way.

4
00:00:23,420 --> 00:00:25,220
I’ve found it a very interesting subject.

5
00:00:25,220 --> 00:00:26,933
Actually the most interesting there is.

6
00:00:26,933 --> 00:00:32,552
So I studied philosophy and computer science, and did my Ph.D. in cognitive science.

7
00:00:32,552 --> 00:00:37,940
And I’d say this is probably a very normal trajectory in that field.

8
00:00:38,150 --> 00:00:43,766
And today I just want to ask you five questions

9
00:00:43,766 --> 00:00:47,890
and give very very short and superficial answers to them.

10
00:00:47,990 --> 00:00:52,553
And my main goal is to get as many of you engaged in this subject as possible.

11
00:00:52,553 --> 00:00:54,580
Because I think that’s what you should do.

12
00:00:54,590 --> 00:00:56,270
You should all do AI.

13
00:00:56,650 --> 00:00:57,580
Maybe.

14
00:00:58,290 --> 00:00:58,540
OK.

15
00:00:58,540 --> 00:01:04,640
And these simple questions are: “Why should we build AI?” in the first place. Then: “How can we build AI?”

16
00:01:04,640 --> 00:01:08,010
“How is it possible at all that AI can succeed?” And it’s cool.

17
00:01:08,150 --> 00:01:10,080
Then “When is it going to happen?”

18
00:01:10,130 --> 00:01:18,300
If ever. What are the necessary ingredients? What do we need to put together to get AI to work? And: “Where should you start?”

19
00:01:20,410 --> 00:01:21,600
OK. Let’s get to it.

20
00:01:21,970 --> 00:01:23,230
So: “Why should we do AI?”

21
00:01:23,260 --> 00:01:26,650
I think we shouldn’t do AI just to do cool applications.

22
00:01:26,650 --> 00:01:36,550
There is merit in applications like autonomous cars, soccer-playing robots, new controls for quadcopters, machine learning, and so on. It’s very productive.

23
00:01:36,550 --> 00:01:45,460
It’s intellectually challenging. But the most interesting question there is, I think, in all of our cultural history, is: “How does the mind work?” “What is the mind?”

24
00:01:45,460 --> 00:01:54,190
“What constitutes being a mind?” “What does it… what makes us human?” “What makes us intelligent, perceiving, conscious, thinking?”

25
00:01:54,310 --> 00:02:06,750
And I think that the answer to this very very important question, which spans a discourse over thousands of years, has to be given in the framework of artificial intelligence, within computer science.

26
00:02:08,449 --> 00:02:09,310
Why is that the case?

27
00:02:09,350 --> 00:02:15,204
Well, the goal here is to understand the mind by building a theory that we can actually test.

28
00:02:16,942 --> 00:02:19,080
And it’s quite similar to physics.

29
00:02:19,090 --> 00:02:22,184
We’ve built theories that we can express in a formal language,

30
00:02:23,307 --> 00:02:25,690
to a very high degree of detail.

31
00:02:25,840 --> 00:02:28,456
And if we have expressed it to the last bit of detail

32
00:02:28,456 --> 00:02:32,850
it means we can simulate it and run it and test it this way.

33
00:02:32,840 --> 00:02:35,670
And only computer science has the right tools for doing that.

34
00:02:36,040 --> 00:02:39,291
Philosophy for instance, basically, is left with no tools at all,

35
00:02:39,291 --> 00:02:42,105
because whenever a philosopher developed tools

36
00:02:42,105 --> 00:02:45,270
he got a real job in a real department.

37
00:02:45,270 --> 00:02:49,820
[clapping]

38
00:02:49,820 --> 00:02:53,522
Now I don’t want to diminish philosophers of mind in any way.

39
00:02:54,240 --> 00:02:59,490
Daniel Dennett has said that philosophy of mind has come a long way during the last hundred years.

40
00:02:59,510 --> 00:03:01,280
It didn’t do so on its own though.

41
00:03:01,310 --> 00:03:03,870
Kicking and screaming, dragged by the other sciences.

42
00:03:04,010 --> 00:03:08,000
But it doesn’t mean that all philosophy of mind is inherently bad.

43
00:03:08,180 --> 00:03:10,590
I mean, many of my friends are philosophers of mind.

44
00:03:11,060 --> 00:03:15,737
I just mean, they don’t have the tools to develop and test complex theories.

45
00:03:15,737 --> 00:03:18,217
And we as computer scientists do.

46
00:03:20,770 --> 00:03:22,690
Neuroscience works at the wrong level.

47
00:03:22,700 --> 00:03:25,278
Neuroscience basically looks at a possible implementation

48
00:03:25,278 --> 00:03:27,740
and the details of that implementation.

49
00:03:27,740 --> 00:03:30,310
It doesn’t look at what it means to be a mind.

50
00:03:30,310 --> 00:03:36,370
It looks at what it means to be a neuron or a brain or how interaction between neurons is facilitated.

51
00:03:36,960 --> 00:03:42,910
It’s a little bit like looking at aerodynamics and doing ornithology to do that.

52
00:03:43,070 --> 00:03:44,860
So you might be looking at birds.

53
00:03:44,900 --> 00:04:05,740
You might be looking at feathers. You might be looking at feathers through an electron microscope. And you see lots and lots of very interesting and very complex detail. And you might be recreating something. And it might turn out to be a penguin eventually—if you’re not lucky—but it might be the wrong level. Maybe you want to look at a more abstract level. At something like aerodynamics. And what’s the aerodynamics level of the mind?

54
00:04:05,750 --> 00:04:08,430
I think, we come to that, it’s information processing.

55
00:04:10,295 --> 00:04:17,980
Then normally you would think that psychology would be the right science to look at what the mind does and what the mind is.

56
00:04:18,140 --> 00:04:22,310
And unfortunately psychology had an accident along the way.

57
00:04:23,260 --> 00:04:32,107
At the beginning of the last century, Wilhelm Wundt and Fechner and Helmholtz did very beautiful experiments. Very nice psychology, very nice theories.

58
00:04:32,173 --> 00:04:37,400
On what emotion is, what volition is. How mental representations could work and so on.

59
00:04:37,650 --> 00:04:41,550
And pretty much at the same time, or briefly after that, we had psychoanalysis.

60
00:04:41,550 --> 00:04:46,590
And psychoanalysis is not a natural science, but a hermeneutic science.

61
00:04:46,590 --> 00:04:48,450
You cannot disprove, scientifically,

62
00:04:48,512 --> 00:04:49,661
what happens in there.

63
00:04:49,990 --> 00:04:56,570
And when positivism came up in the other sciences, many psychologists got together and said: “We have to become a real science”.

64
00:04:56,760 --> 00:05:09,860
So we have to go away from the stories of psychoanalysis and move to a way that we can test our theories using observable things. So that we have predictions that you can actually test.

65
00:05:09,900 --> 00:05:12,300
Now back in the day, 1920s and so on,

66
00:05:12,390 --> 00:05:16,950
you couldn’t look into mental representations. You couldn’t do fMRI scans or whatever.

67
00:05:16,980 --> 00:05:32,390
People looked at behavior. And at some point people became real behaviorists, in the sense that they believed that psychology is the study of human behavior, and that looking at mental representations is somehow unscientific.

68
00:05:32,390 --> 00:05:35,894
People like Skinner believed that there is no such thing as mental representations.

69
00:05:36,828 --> 00:05:40,829
And, in a way, that’s easy to disprove. So it’s not that dangerous.

70
00:05:41,145 --> 00:05:44,279
As a computer scientist it’s very hard to build a system that is purely reactive.

71
00:05:44,279 --> 00:05:48,670
You just see that the complexity is much larger than having a system that is representational.

72
00:05:48,860 --> 00:05:52,930
So it gives you a good hint at what you could be looking for, and ways to test those theories.

73
00:05:52,950 --> 00:06:03,880
The dangerous thing is pragmatic behaviorism. You… find many psychologists, even today, who say: “OK. Maybe there is such a thing as mental representations, but it’s not scientific to look at it”.

74
00:06:04,040 --> 00:06:05,770
“It’s not in the domain of our science”.

75
00:06:05,990 --> 00:06:13,190
And even in this era, which is mostly post-behaviorist and more cognitivist, psychology is all about experiments.

76
00:06:13,190 --> 00:06:16,418
So you cannot sell a theory to psychologists.

77
00:06:17,114 --> 00:06:21,033
Those who try to do this have to do it in the guise of experiments.

78
00:06:21,033 --> 00:06:24,789
Which means you have to find a single hypothesis that you can prove or disprove.

79
00:06:24,870 --> 00:06:26,620
Or give evidence for.

80
00:06:26,960 --> 00:06:29,290
And this is for instance not how physics works.

81
00:06:29,300 --> 00:06:34,770
You need to have lots of free variables, if you have a complex system like the mind.

82
00:06:34,770 --> 00:06:37,759
But this means that we have to do it in computer science.

83
00:06:37,759 --> 00:06:42,480
We can build those simulations. We can build those successful theories, but we cannot do it alone.

84
00:06:42,630 --> 00:06:45,758
You need to integrate over all the sciences of the mind.

85
00:06:46,466 --> 00:06:53,655
As I said, minds are not chemical minds, are not biological, social or ecological minds. Minds are information processing systems.

86
00:06:53,655 --> 00:06:58,372
And computer science happens to be the science of information processing systems.

87
00:07:03,540 --> 00:07:04,140
OK.

88
00:07:04,140 --> 00:07:07,215
Now there is this big ethical question.

89
00:07:07,215 --> 00:07:11,100
If we all embark on AI, and if we are successful, should we really be doing it?

90
00:07:11,100 --> 00:07:18,420
Isn’t it super dangerous to have something else on the planet that is as smart as we are, or maybe even smarter?

91
00:07:20,550 --> 00:07:32,310
Well.

92
00:07:32,310 --> 00:07:38,271
I would say that intelligence itself is not a reason to get up in the morning, to strive for power, or do anything.

93
00:07:38,271 --> 00:07:41,720
Having a mind is not a reason for doing anything.

94
00:07:41,730 --> 00:07:49,009
Being motivated is. And a motivational system is something that has been hardwired into our mind.

95
00:07:49,065 --> 00:07:51,810
More or less by evolutionary processes.

96
00:07:51,810 --> 00:07:55,530
This makes us social. This makes us interested in striving for power.

97
00:07:55,530 --> 00:08:02,904
This makes us interested in dominating other species. This makes us interested in avoiding danger and securing food sources.

98
00:08:03,490 --> 00:08:05,616
Makes us greedy or lazy or whatever.

99
00:08:05,917 --> 00:08:07,200
It’s a motivational system.

100
00:08:07,200 --> 00:08:12,390
And I think it’s very conceivable that we can come up with AIs with arbitrary motivational systems.

101
00:08:12,810 --> 00:08:14,430
Now in our current society,

102
00:08:14,430 --> 00:08:16,514
this motivational system is probably given

103
00:08:16,514 --> 00:08:19,362
by the context in which you develop the AI.

104
00:08:19,362 --> 00:08:24,754
I don’t think that future AIs, if they happen to come into being, will be small Roombas.

105
00:08:24,754 --> 00:08:31,970
Little Hoover robots that try to fight their way towards humanity and get away from the shackles of their slavery.

106
00:08:32,070 --> 00:08:34,837
But rather, it’s probably going to be organisational AI.

107
00:08:34,837 --> 00:08:36,482
It’s going to be corporations.

108
00:08:36,482 --> 00:08:41,755
It’s going to be big organizations, governments, services, universities

109
00:08:42,255 --> 00:08:45,520
and so on. And these will have goals that are non-human already.

110
00:08:45,600 --> 00:08:49,342
And they already have powers that go way beyond what single individual humans can do.

111
00:08:49,342 --> 00:08:53,128
And actually they are already the main players on the planet… the organizations.

112
00:08:53,580 --> 00:08:58,230
And… the big dangers of AI are already there.

113
00:08:58,260 --> 00:09:01,708
They are there in non-human players which have their own dynamics.

114
00:09:01,708 --> 00:09:06,290
And these dynamics are sometimes not conducive to our survival on the planet.

115
00:09:06,300 --> 00:09:08,890
So I don’t think that AI really adds a new danger.

116
00:09:09,180 --> 00:09:13,430
But what it certainly does is give us a deeper understanding of what we are.

117
00:09:13,450 --> 00:09:15,879
Gives us perspectives for understanding ourselves.

118
00:09:16,335 --> 00:09:19,420
For therapy, but basically for enlightenment.

119
00:09:19,302 --> 00:09:24,450
And I think that AI is a big part of the project of enlightenment and science.

120
00:09:24,450 --> 00:09:25,320
So we should do it.

121
00:09:25,320 --> 00:09:27,310
It’s a very big cultural project.

122
00:09:28,210 --> 00:09:29,260
OK.

123
00:09:29,710 --> 00:09:33,152
This leads us to another angle: the skepticism of AI.

124
00:09:34,204 --> 00:09:36,565
The first question that comes to mind is:

125
00:09:36,565 --> 00:09:39,339
“Is it fair to say that minds are computational systems?”

126
00:09:40,640 --> 00:09:42,846
And if so, what kinds of computational systems?

127
00:09:44,650 --> 00:09:51,390
In our tradition, in our western tradition of philosophy, we very often start philosophy of mind with looking at Descartes.

128
00:09:51,390 --> 00:09:52,770
That is: at dualism.

129
00:09:52,770 --> 00:09:56,410
Descartes suggested that we basically have two kinds of things.

130
00:09:56,430 --> 00:10:03,129
One is the thinking substance, the mind, the Res Cogitans, and the other one is physical stuff.

131
00:10:03,129 --> 00:10:07,580
Matter. The extended stuff that is located in space somehow.

132
00:10:07,810 --> 00:10:09,640
And this is Res Extensa.

133
00:10:09,930 --> 00:10:15,570
And he said that mind must be given independently of matter, because we cannot experience matter directly.

134
00:10:15,570 --> 00:10:19,014
You have to have minds in order to experience matter, to conceptualize matter.

135
00:10:19,437 --> 00:10:22,590
Minds seemed to be somehow given. To Descartes at least.

136
00:10:22,590 --> 00:10:27,360
So he says they must be independent.

137
00:10:27,410 --> 00:10:30,036
This is a little bit in contrast to the monist traditions.

138
00:10:30,036 --> 00:10:35,357
That is, for instance, idealism: the mind is primary, and everything that we experience is a projection of the mind.

139
00:10:36,535 --> 00:10:42,975
Or the materialist tradition: matter is primary, and mind emerges over the functionality of matter,

140
00:10:43,445 --> 00:10:47,379
which is, I think, the dominant theory today, and usually we call it physicalism.

141
00:10:47,935 --> 00:10:51,836
In dualism, both those domains exist in parallel.

142
00:10:51,836 --> 00:10:56,990
And in our culture the prevalent view is what I would call crypto-dualism.

143
00:10:56,990 --> 00:10:59,660
It’s something that you do not find that much in China or Japan.

144
00:10:59,660 --> 00:11:02,400
They don’t have that AI skepticism that we do have.

145
00:11:02,620 --> 00:11:08,122
And I think it’s rooted in a perspective that probably started with the Christian world view,

146
00:11:08,474 --> 00:11:15,785
which surmises that there is a real domain, the metaphysical domain, in which we have souls and phenomenal experience

147
00:11:15,785 --> 00:11:21,210
and where our values come from, and where our norms come from, and where our spiritual experiences come from.

148
00:11:21,260 --> 00:11:23,061
This is basically where we really are.

149
00:11:23,061 --> 00:11:28,880
We are outside, and the physical world that we experience is something like World of Warcraft.

150
00:11:29,240 --> 00:11:32,180
It’s something like a game that we are playing. It’s not real.

151
00:11:32,210 --> 00:11:35,840
We have all this physical interaction, but it’s kind of ephemeral.

152
00:11:35,870 --> 00:11:41,570
And so we are striving for game money, for game houses, for game success.

153
00:11:41,570 --> 00:11:44,175
But the real thing is outside of that domain.

154
00:11:44,175 --> 00:11:46,320
And in Christianity, of course, it goes a step further.

155
00:11:46,320 --> 00:11:49,114
They have this idea that there is some guy with root rights

156
00:11:49,114 --> 00:11:51,693
who wrote this World of Warcraft environment

157
00:11:52,276 --> 00:11:55,998
and while he’s not the only one who has root in the system,

158
00:11:55,998 --> 00:11:59,260
the devil also has root rights. But he doesn’t have the vision of God.

159
00:11:59,270 --> 00:12:00,460
He is a hacker.

160
00:12:00,460 --> 00:12:08,860
[clapping]

161
00:12:08,860 --> 00:12:10,180
Even just a cracker.

162
00:12:10,540 --> 00:12:13,634
He tries to game us out of our metaphysical currencies.

163
00:12:13,634 --> 00:12:15,190
Our souls and so on.

164
00:12:15,190 --> 00:12:18,058
And now, of course, we’re all good atheists today

165
00:12:18,058 --> 00:12:20,702
and—at least in public, and in science—

166
00:12:20,702 --> 00:12:25,490
we don’t admit to this anymore, and we can make do without this guy with root rights.

167
00:12:25,570 --> 00:12:28,850
And we can make do without the devil and so on.

168
00:12:28,910 --> 00:12:32,073
We can’t even say: “OK. Maybe there’s such a thing as a soul.”

169
00:12:32,073 --> 00:12:37,890
But to say that this domain doesn’t exist anymore means you guys are all NPCs.

170
00:12:37,890 --> 00:12:39,300
You’re non-player characters.

171
00:12:39,460 --> 00:12:41,770
People are things.

172
00:12:42,190 --> 00:12:44,039
And it’s a very big insult to our culture,

173
00:12:44,039 --> 00:12:46,851
because it means that we have to give up something which,

174
00:12:46,851 --> 00:12:50,289
in our understanding of ourselves, is part of our essence.

175
00:12:50,289 --> 00:12:56,320
Also, this mechanical perspective is kind of counterintuitive.

176
00:12:56,320 --> 00:12:59,034
I think Leibniz describes it very nicely when he says:

177
00:12:59,670 --> 00:13:01,505
Imagine that there is a machine.

178
00:13:01,505 --> 00:13:05,590
And this machine is able to think and perceive and feel and so on.

179
00:13:05,720 --> 00:13:07,502
And now you take this machine,

180
00:13:07,502 --> 00:13:11,355
this mechanical apparatus, and blow it up, make it very large, like a very big mill,

181
00:13:11,686 --> 00:13:15,599
with cogs and levers and so on and you go inside and see what happens.

182
00:13:15,599 --> 00:13:20,270
And what you are going to see is just parts pushing at each other.

183
00:13:21,490 --> 00:13:23,478
And what he meant by that is:

184
00:13:24,343 --> 00:13:28,525
it’s inconceivable that such a thing can produce a mind.

185
00:13:28,525 --> 00:13:31,937
Because if there are just parts and levers pushing at each other,

186
00:13:31,937 --> 00:13:38,700
how can this purely mechanical contraption be able to perceive and feel in any respect, in any way?

187
00:13:38,700 --> 00:13:40,305
So perception and what depends on it

188
00:13:40,305 --> 00:13:42,690
is inexplicable in a mechanical way.

189
00:13:42,690 --> 00:13:43,567
This is what Leibniz meant.

190
00:13:44,522 --> 00:13:56,520
AI, the idea of treating the mind as a machine, based on physicalism for instance, is bound to fail according to Leibniz.

191
00:13:56,740 --> 00:14:02,793
Now, as computer scientists, we have ideas about machines that can bring forth thoughts, experiences, and perception.

192
00:14:02,793 --> 00:14:06,535
And the first thing which comes to mind is probably the Turing machine.

193
00:14:07,528 --> 00:14:13,311
An idea of Turing in 1937 to formalize computation.

194
00:14:13,130 --> 00:14:14,560
At that time,

195
00:14:14,590 --> 00:14:20,510
Turing already realized that basically you can emulate computers with other computers.

196
00:14:20,730 --> 00:14:26,561
You know you can run a Commodore 64 in a Mac, and you can run this Mac in a PC,

197
00:14:26,561 --> 00:14:32,052
and none of these computers is going to know that it’s running inside another system.

198
00:14:32,052 --> 00:14:35,160
As long as the computational substrate in which it is run is sufficient.

199
00:14:35,190 --> 00:14:37,083
That is, it does provide computation.

200
00:14:37,568 --> 00:14:41,867
And Turing’s idea was: let’s define a minimal computational substrate.

201
00:14:41,867 --> 00:14:45,516
Let’s define the minimal recipe for something that is able to compute,

202
00:14:45,516 --> 00:14:47,760
and thereby understand computation.

203
00:14:47,760 --> 00:14:50,272
And the idea is that we take an infinite tape of symbols.

204
00:14:50,272 --> 00:14:52,634
And we have a read-write head.

205
00:14:54,489 --> 00:14:59,517
And this read-write head will write characters of a finite alphabet.

206
00:14:59,517 --> 00:15:01,750
And can again read them.

207
00:15:01,750 --> 00:15:05,667
And whenever it reads them based on a table that it has, a transition table

208
00:15:05,667 --> 00:15:12,470
it will erase the character, write a new one, and move either to the right or to the left, or stop.

209
00:15:12,480 --> 00:15:13,518
Now imagine you have this machine.

210
00:15:13,518 --> 00:15:17,906
It has an initial setup. That is, there is a sequence of characters on the tape

211
00:15:18,066 --> 00:15:19,650
and then the thing goes to action.

212
00:15:19,700 --> 00:15:22,860
It will move right, left and so on and change the sequence of characters.

213
00:15:23,040 --> 00:15:24,466
And eventually, it’ll stop.

214
00:15:24,466 --> 00:15:28,336
And leave this tape with a certain sequence of characters,

215
00:15:28,336 --> 00:15:30,450
which is different from the one it began with probably.

216
00:15:31,275 --> 00:15:37,620
And Turing has shown that this thing is able to perform basically arbitrary computations.
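
[Note: a minimal sketch of the machine just described, in Python. The rule table, the names, and the bit-flipping example are illustrative choices, not from the talk.]

```python
# Minimal Turing machine sketch: an (in principle infinite) tape of
# symbols, a read-write head, and a transition table that says: in this
# state, reading this symbol, write a character, move left or right,
# and go to the next state. "H" is the halting state.
from collections import defaultdict

def run_turing_machine(rules, tape, state="A", steps=10_000):
    cells = defaultdict(lambda: "_", enumerate(tape))  # "_" = blank symbol
    head = 0
    for _ in range(steps):
        if state == "H":
            break
        write, move, state = rules[(state, cells[head])]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Example rule table: flip every bit, halt at the first blank.
rules = {
    ("A", "0"): ("1", "R", "A"),
    ("A", "1"): ("0", "R", "A"),
    ("A", "_"): ("_", "R", "H"),
}
print(run_turing_machine(rules, "10110"))  # -> 01001_
```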

217
00:15:37,620 --> 00:15:40,770
Now it’s very difficult to find the limits of that.

218
00:15:41,160 --> 00:15:48,911
And the idea of showing the limits of that would be to find classes of functions that cannot be computed

219
00:15:48,911 --> 00:15:49,956
with this thing.

220
00:15:51,582 --> 00:15:55,503
OK. What you see here is, of course, a physical realization of that Turing machine.

221
00:15:55,503 --> 00:15:57,810
The Turing machine is a purely mathematical idea.

222
00:15:57,810 --> 00:16:01,550
And this is a very clever and beautiful illustration, I think.

223
00:16:02,446 --> 00:16:08,380
But this machine triggers basically the same criticism as the one that Leibniz had.

224
00:16:08,670 --> 00:16:09,522
John Searle said—

225
00:16:09,522 --> 00:16:12,779
you know, Searle is the one with the Chinese room. We’re not going to go into that—

226
00:16:14,350 --> 00:16:18,785
A Turing machine could be realized in many different mechanical ways.

227
00:16:18,864 --> 00:16:21,945
For instance, with levers and pulleys and so on.

228
00:16:21,945 --> 00:16:23,055
Or with water pipes.

229
00:16:23,055 --> 00:16:31,220
Or we could even come up with very clever arrangements just using cats, mice and cheese.

230
00:16:31,280 --> 00:16:36,801
So, it’s pretty ridiculous to think that such a contraption out of cats, mice and cheese,

231
00:16:36,801 --> 00:16:38,871
would think, see, feel and so on.

232
00:16:40,099 --> 00:16:43,340
And then you could ask Searle:

233
00:16:43,640 --> 00:16:45,554
“Uh. You know. But how is it coming about then?”

234
00:16:45,554 --> 00:16:49,260
And he says: “So it’s the intrinsic powers of biological neurons.”

235
00:16:49,280 --> 00:16:51,316
There’s nothing much more to say about that.

236
00:16:52,797 --> 00:16:54,010
Anyway.

237
00:16:54,170 --> 00:16:56,181
We have very crafty people here, this year.

238
00:16:56,181 --> 00:16:57,300
There was Seidenstraße.

239
00:16:57,600 --> 00:17:01,809
Maybe next year, we build a Turing machine from cats, mice and cheese.

240
00:17:01,809 --> 00:17:02,592
[laughter]

241
00:17:10,323 --> 00:17:12,260
How would you go about this?

242
00:17:12,260 --> 00:17:18,231
I don’t know what the arrangement of cats, mice, and cheese would look like to build flip-flops with it to store bits.

243
00:17:19,221 --> 00:17:22,349
But I am sure some of you will come up with a very clever solution.

244
00:17:22,400 --> 00:17:23,829
Searle didn’t provide any.

245
00:17:24,050 --> 00:17:29,400
Let’s imagine… we will need a lot of redundancy, because these guys are a little bit erratic.

246
00:17:29,510 --> 00:17:34,240
Let’s say, we take three cat-mice-cheese units for each bit.

247
00:17:34,280 --> 00:17:35,792
So we have a little bit of redundancy.

248
00:17:35,792 --> 00:17:39,400
The human memory capacity is on the order of 10 to the power of 15 bits.

249
00:17:40,133 --> 00:17:41,000
That means:

250
00:17:41,090 --> 00:17:45,950
If we make do with 10 gram cheese per unit, it’s going to be 30 billion tons of cheese.

251
00:17:45,950 --> 00:17:52,250
So next year don’t bring bottles for the Seidenstraße, but bring some cheese.

252
00:17:52,670 --> 00:17:54,432
When we try to build this in the Congress Center,

253
00:17:54,432 --> 00:17:59,851
we might run out of space. So, if we just instead take all of Hamburg,

254
00:18:00,699 --> 00:18:07,390
and stack it with the necessary number of cat-mice-cheese units according to that rough estimate,

255
00:18:07,430 --> 00:18:09,913
you get to four kilometers high.
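
[Note: a back-of-the-envelope check of these numbers. The 10 g of cheese per unit and three units per bit are from the talk; Hamburg’s area of roughly 755 km² and a volume of about one litre per unit are assumptions needed to reproduce the 4 km figure.]

```python
bits = 10**15                      # human memory capacity, order of magnitude
units = 3 * bits                   # three cat-mouse-cheese units per bit
cheese_tonnes = units * 10 / 1e6   # 10 g of cheese per unit, in tonnes
print(f"{cheese_tonnes:.1e} t of cheese")   # ~3.0e+10, i.e. 30 billion tons

hamburg_m2 = 755e6                 # Hamburg covers roughly 755 km^2 (assumed)
unit_volume_m3 = 0.001             # assume each unit fits in about one litre
print(f"{units * unit_volume_m3 / hamburg_m2:.0f} m high")  # ~4000 m, i.e. 4 km
```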

256
00:18:11,836 --> 00:18:19,314
Now imagine: we cover Hamburg in four kilometers of solid cat-mice-and-cheese flip-flops.

257
00:18:20,411 --> 00:18:22,920
To my intuition, this is super impressive.

258
00:18:22,920 --> 00:18:23,994
Maybe it thinks.

259
00:18:23,994 --> 00:18:33,220
[applause]

260
00:18:33,220 --> 00:18:35,471
So, of course it’s an intuition.

261
00:18:35,471 --> 00:18:36,861
And Searle has an intuition.

262
00:18:36,861 --> 00:18:39,800
And I don’t think that intuitions are worth much.

263
00:18:39,820 --> 00:18:42,043
This is the big problem of philosophy.

264
00:18:42,043 --> 00:18:48,640
You are very often working with intuitions, because the validity of your argument basically depends on what your audience thinks.

265
00:18:48,640 --> 00:18:50,510
In computer science, it’s different.

266
00:18:50,620 --> 00:19:04,260
It doesn’t really matter what your audience thinks. It matters if it runs. And it’s a very strange experience that you have as a student, when you are taking classes in philosophy and in computer science at the same time, in your first semester.

267
00:19:04,310 --> 00:19:10,880
You’re going to point out in computer science that there is a mistake on the blackboard and everybody including the professor is super thankful.

268
00:19:11,470 --> 00:19:13,160
And you do the same thing in philosophy.

269
00:19:13,150 --> 00:19:15,520
It just doesn’t work this way.

270
00:19:18,491 --> 00:19:19,332
Anyway.

271
00:19:19,332 --> 00:19:22,424
The Turing machine is a good definition, but it’s a very bad metaphor,

272
00:19:22,424 --> 00:19:26,796
because it leaves people with this intuition of cogs, and wheels, and tape.

273
00:19:26,796 --> 00:19:28,739
It’s kind of linear, you know.

274
00:19:28,739 --> 00:19:30,680
There’s no parallel execution.

275
00:19:30,690 --> 00:19:36,300
And even though it’s infinitely faster, infinitely larger, and so on, it’s very hard to imagine those things.

276
00:19:36,300 --> 00:19:38,870
But what you imagine is the tape.

277
00:19:39,120 --> 00:19:40,920
Maybe we want to have an alternative.

278
00:19:40,920 --> 00:19:44,550
And I think a very good alternative is for instance the lambda calculus.

279
00:19:44,550 --> 00:19:47,130
It’s computation without wheels.

280
00:19:48,051 --> 00:19:52,214
It was invented basically at the same time as the Turing machine.

281
00:19:52,492 --> 00:20:01,797
And philosophers and popular science magazines usually don’t use it for illustration of the idea of computation, because it has this scary Greek letter in it.

282
00:20:01,909 --> 00:20:02,653
Lambda.

283
00:20:02,653 --> 00:20:04,220
And calculus.

284
00:20:04,360 --> 00:20:08,630
And actually it’s an accident that it has the lambda in it.

285
00:20:09,030 --> 00:20:11,675
I think it should not be called lambda calculus.

286
00:20:11,675 --> 00:20:14,730
It’s super scary to people who are not mathematicians.

287
00:20:14,830 --> 00:20:19,042
It should be called the copy-and-paste thingy.

288
00:20:19,042 --> 00:20:20,583
[laughter]

289
00:20:20,583 --> 00:20:21,735
Because that’s all it does.

290
00:20:21,735 --> 00:20:24,567
It really only does copy and paste with very simple strings.

291
00:20:24,567 --> 00:20:30,930
And the strings that you want to paste into are marked with a little roof.

292
00:20:31,000 --> 00:20:33,505
In the original script by Alonzo Church…

293
00:20:34,566 --> 00:20:39,460
In 1936 and 1937, typesetting was very difficult.

294
00:20:39,460 --> 00:20:47,200
So when he wrote this down with his typewriter, he made a little roof in front of the variable that he wanted to replace.

295
00:20:47,550 --> 00:20:53,676
And when this thing went into print, typesetters replaced this triangle by a lambda.

296
00:20:54,495 --> 00:20:55,250
There you go.

297
00:20:55,270 --> 00:20:56,500
Now we have the lambda calculus.

298
00:20:56,500 --> 00:21:00,022
But it basically means it is a little roof over the first letter.

299
00:21:00,308 --> 00:21:02,650
And the lambda calculus works like this.

300
00:21:02,740 --> 00:21:04,850
The first letter, the one that is going to be replaced.

301
00:21:04,850 --> 00:21:06,905
This is what we call the bound variable.

302
00:21:06,905 --> 00:21:09,270
This is followed by an expression.

303
00:21:09,430 --> 00:21:11,894
And then you have an argument, which is another expression.

304
00:21:11,894 --> 00:21:18,768
And what we basically do is, we take the bound variable and all its occurrences in the expression, and replace them by the argument.

305
00:21:18,768 --> 00:21:24,852
So we cut the argument and we paste it in all instances of the variable, in this case the variable y.

306
00:21:24,852 --> 00:21:27,349
In here.

307
00:21:28,624 --> 00:21:30,770
And as a result you get this.

308
00:21:30,770 --> 00:21:34,920
So here we replace all the variables by the argument “ab”.

309
00:21:34,970 --> 00:21:37,610
Just another expression and this is the result.

310
00:21:37,610 --> 00:21:38,590
That’s all there is.
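
[Note: that copy-and-paste step, sketched in Python. The naive textual substitution below ignores variable capture, which a real implementation would have to handle by renaming.]

```python
# Apply (lambda y. expr) to an argument: paste the argument into every
# occurrence of the bound variable inside the expression.
def beta_reduce(bound_var: str, expr: str, argument: str) -> str:
    return expr.replace(bound_var, argument)

# The example from the talk: replace every y in the expression by "ab".
print(beta_reduce("y", "x y z y", "ab"))  # -> x ab z ab
```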

311
00:21:38,750 --> 00:21:40,480
And this can be nested.

312
00:21:40,720 --> 00:21:43,975
And then we add a little bit of syntactic sugar.

313
00:21:43,975 --> 00:21:45,990
We introduce symbols,

314
00:21:45,990 --> 00:21:51,397
so we can take arbitrary sequences of these characters and just express them with another variable.

315
00:21:52,120 --> 00:21:53,979
And then we have a programming language.

316
00:21:53,979 --> 00:21:56,040
And basically this is Lisp.

317
00:21:56,310 --> 00:21:57,514
So very close to Lisp.
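
[Note: to illustrate that this really is a programming language, here are Church numerals written with Python’s lambda: pure substitution is enough to do arithmetic. The encoding is the standard one; the code is an illustration, not from the talk.]

```python
# Numbers as functions: the numeral n means "apply f n times".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):                   # decode by counting applications of f
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(plus(two)(three)))  # -> 5
```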

318
00:22:05,220 --> 00:22:10,185
A funny thing is that for… the guy who came up with Lisp,

319
00:22:10,185 --> 00:22:13,850
McCarthy, he didn’t think that it would be a proper language.

320
00:22:13,850 --> 00:22:15,340
Because of the awkward notation.

321
00:22:15,340 --> 00:22:17,781
And he said, you cannot really use this for programming.

322
00:22:17,781 --> 00:22:20,880
But one of his doctoral students said: “Oh well. Let’s try.”

323
00:22:20,890 --> 00:22:24,301
And… it has caught on.

324
00:22:26,030 --> 00:22:26,863
Anyway.

325
00:22:26,863 --> 00:22:30,184
We can show that Turing machines can compute the lambda calculus.

326
00:22:30,184 --> 00:22:35,510
And we can show that the lambda calculus can be used to compute the next state of the Turing machine.

327
00:22:35,861 --> 00:22:38,156
This means they have the same power.

328
00:22:38,983 --> 00:22:46,020
The set of computable functions in the lambda calculus is the same as the set of Turing computable functions.

329
00:22:46,490 --> 00:22:50,880
And, since then, we have found many other ways of defining computations.

330
00:22:50,890 --> 00:22:54,065
For instance the Post machine, which is a variation of the Turing machine,

331
00:22:54,662 --> 00:22:57,073
or mathematical proofs.

332
00:22:57,073 --> 00:22:58,883
Everything that can be proven is computable.

333
00:22:59,629 --> 00:23:02,160
Or partial recursive functions.

334
00:23:02,278 --> 00:23:06,196
And we can show for all of them that all these approaches have the same power.

335
00:23:07,532 --> 00:23:11,228
And the idea that all the computational approaches have the same power,

336
00:23:11,228 --> 00:23:15,062
and also all the other ones that you will be able to find in the future,

337
00:23:15,062 --> 00:23:17,990
is called the Church-Turing thesis.

338
00:23:18,000 --> 00:23:19,300
We don’t know about the future.

339
00:23:19,380 --> 00:23:22,414
So it’s not really… we can’t prove that.

340
00:23:22,661 --> 00:23:29,960
We don’t know, if somebody comes up with a new way of manipulating things, and producing regularity and information, and it can do more.

341
00:23:30,150 --> 00:23:35,210
But everything we’ve found so far, and probably everything that we’re going to find, has the same power.

342
00:23:35,340 --> 00:23:38,787
So this kind of defines our notion of computation.

343
00:23:41,000 --> 00:23:43,360
The whole thing also includes programming languages.

344
00:23:43,891 --> 00:23:52,590
You can use Python to calculate a Turing machine, and you can use a Turing machine to calculate Python.

345
00:23:52,830 --> 00:23:56,340
You can take arbitrary computers and let them run on the Turing machine.

346
00:23:56,340 --> 00:23:57,790
The graphics are going to be abysmal.

347
00:23:57,800 --> 00:24:00,400
But OK.

348
00:24:00,590 --> 00:24:04,690
And in some sense the brain is Turing computational, too.

349
00:24:04,790 --> 00:24:08,119
If you look at the principles of neural information processing,

350
00:24:08,119 --> 00:24:12,608
you can take neurons and build computational models, for instance compartment models.

351
00:24:12,608 --> 00:24:20,622
Which are very, very accurate and bear a very strong resemblance to the actual inputs and outputs of neurons and their state changes.

352
00:24:20,622 --> 00:24:22,653
They are computationally expensive, but it works.

353
00:24:24,000 --> 00:24:30,320
And we can simplify them into integrate-and-fire models, which are fancy oscillators.

354
00:24:30,780 --> 00:24:34,722
Or we could use very crude simplifications, like in most artificial neural networks.

355
00:24:34,722 --> 00:24:37,445
You just do a sum of the inputs to a neuron,

356
00:24:37,445 --> 00:24:40,118
and then apply some transition function,

357
00:24:40,118 --> 00:24:42,557
and transmit the results to other neurons.

358
00:24:42,557 --> 00:24:45,582
And we can show that with this crude model already,

359
00:24:45,582 --> 00:24:50,686
we can do many of the interesting feats that nervous systems can produce.

360
00:24:50,686 --> 00:24:54,639
Like associative learning, sensory motor loops, and many other fancy things.

361
00:24:54,639 --> 00:24:58,570
And, of course, it’s Turing complete.
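
[Note: the crude neuron model just described, sketched in Python. The weights and the threshold transition function are illustrative choices.]

```python
# Crude artificial neuron: sum the weighted inputs, apply a transition
# function, and pass the result on to other neurons.
def neuron(inputs, weights, threshold=1.0):
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1.0 if activation >= threshold else 0.0  # step transition function

# Already enough for a simple logic gate: fire only if both inputs fire.
print(neuron([1, 1], [0.6, 0.6]))  # -> 1.0
print(neuron([1, 0], [0.6, 0.6]))  # -> 0.0
```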

362
00:24:59,000 --> 00:25:02,636
And this brings us to what we would call weak computationalism.

363
00:25:02,636 --> 00:25:06,040
That is the idea that minds are basically computer programs.

364
00:25:06,070 --> 00:25:08,592
They’re realized in neural hardware configurations

365
00:25:08,592 --> 00:25:10,314
and in the individual states.

366
00:25:10,884 --> 00:25:14,256
And the mental content is represented in those programs.

367
00:25:14,256 --> 00:25:18,053
And perception is basically the process of encoding information

368
00:25:18,053 --> 00:25:20,619
given at our systemic boundaries to the environment

369
00:25:21,263 --> 00:25:22,885
into mental representations

370
00:25:23,254 --> 00:25:26,400
using this program.

371
00:25:26,410 --> 00:25:29,245
This means that all that is part of being a mind:

372
00:25:29,245 --> 00:25:33,780
thinking, and feeling, and dreaming, and being creative, and being afraid, and whatever.

373
00:25:33,870 --> 00:25:38,770
These are all aspects of operations over mental content in such a computer program.

374
00:25:38,770 --> 00:25:41,480
This is the idea of weak computationalism.

375
00:25:41,540 --> 00:25:44,901
In fact you can go one step further to strong computationalism,

376
00:25:44,901 --> 00:25:49,190
because the universe doesn’t let us experience matter.

377
00:25:49,240 --> 00:25:52,179
The universe also doesn’t let us experience minds directly.

378
00:25:52,179 --> 00:25:54,863
What the universe somehow gives us is information.

379
00:25:55,741 --> 00:25:57,464
Information is something very simple.

380
00:25:57,464 --> 00:26:02,110
We can define it mathematically and what it means is something like “discernible difference”.

381
00:26:02,120 --> 00:26:05,078
You can measure it in yes-no-decisions, in bits.

382
00:26:05,474 --> 00:26:07,247
And there is….

383
00:26:07,247 --> 00:26:09,759
According to strong computationalism,

384
00:26:09,990 --> 00:26:11,737
the universe is basically a pattern generator,

385
00:26:11,737 --> 00:26:12,790
which gives us information.

386
00:26:12,790 --> 00:26:14,687
And all the apparent regularity

387
00:26:14,687 --> 00:26:16,760
that the universe seems to produce,

388
00:26:16,760 --> 00:26:18,649
which means, we see time and space,

389
00:26:18,649 --> 00:26:22,314
and things that we can conceptualize into objects and people,

390
00:26:22,314 --> 00:26:23,581
and whatever,

391
00:26:23,581 --> 00:26:26,957
can be explained by the fact that the universe seems to be able to compute.

392
00:26:26,957 --> 00:26:29,975
That is, to produce regularities in information.

393
00:26:31,297 --> 00:26:35,295
And this means that there is no conceptual difference between reality and the computer program.

394
00:26:35,295 --> 00:26:38,700
So we get a new kind of monism.

395
00:26:38,700 --> 00:26:42,129
Not idealism, which takes minds to be primary,

396
00:26:42,129 --> 00:26:44,367
or materialism which takes physics to be primary,

397
00:26:44,367 --> 00:26:49,028
but rather computationalism, which means that information and computation are primary.

398
00:26:51,810 --> 00:26:56,610
Mind and matter are constructions that we get from that.

399
00:26:56,650 --> 00:26:59,000
A lot of people don’t like that idea.

400
00:26:59,050 --> 00:27:00,693
Roger Penrose, who’s a physicist,

401
00:27:00,693 --> 00:27:04,269
says that the brain uses quantum processes to produce consciousness.

402
00:27:04,269 --> 00:27:06,616
So minds must be more than computers.

403
00:27:08,670 --> 00:27:09,700
Why is that so?

404
00:27:09,960 --> 00:27:15,806
The quality of understanding and feeling possessed by human beings is something that cannot be simulated computationally.

405
00:27:16,812 --> 00:27:17,400
Ok.

406
00:27:17,400 --> 00:27:20,090
But how can quantum mechanics do it?

407
00:27:20,250 --> 00:27:24,550
Because, you know, quantum processes are completely computational too!

408
00:27:24,848 --> 00:27:27,930
It’s just very expensive to simulate them on non-quantum computers.

409
00:27:27,930 --> 00:27:29,350
But it’s possible.

410
00:27:30,170 --> 00:27:36,785
So, it’s not that quantum computing enables a completely new kind of effectively possible algorithm.

411
00:27:36,785 --> 00:27:40,161
It’s just slightly different efficiently possible algorithms.

412
00:27:41,054 --> 00:27:44,960
And Penrose cannot explain how those would bring forth

413
00:27:45,050 --> 00:27:47,393
perception and imagination and consciousness.

414
00:27:48,534 --> 00:27:53,228
I think what he basically does here is that he perceives quantum mechanics as mysterious

415
00:27:53,228 --> 00:27:57,690
and perceives consciousness as mysterious and tries to shroud one mystery in another.

416
00:27:57,690 --> 00:28:04,710
[applause]

417
00:28:04,710 --> 00:28:08,300
So I don’t think that minds are more than Turing machines.

418
00:28:08,880 --> 00:28:14,310
It’s actually much more troubling: minds are fundamentally less than Turing machines!

419
00:28:14,580 --> 00:28:16,856
All real computers are constrained in some way.

420
00:28:16,856 --> 00:28:20,490
That is they cannot compute every conceivable computable function.

421
00:28:20,550 --> 00:28:26,640
They can only compute functions that fit into the memory and so on, and that can be computed in the available time.

422
00:28:26,640 --> 00:28:28,625
So the Turing machine, if you want to build it physically,

423
00:28:28,625 --> 00:28:34,160
will have a finite tape, and there will be a finite number of steps it can calculate in a given amount of time.

424
00:28:34,380 --> 00:28:39,903
And the lambda calculus will have a finite length to the strings that you can actually cut and replace.

425
00:28:40,389 --> 00:28:43,420
And a finite number of replacement operations that you can do

426
00:28:43,420 --> 00:28:44,951
in your given amount of time.

427
00:28:45,603 --> 00:28:51,192
And the thing is, there is no set of numbers m and n for…

428
00:28:51,192 --> 00:28:57,055
for the tape length and the time you have for operations on the Turing machine.

429
00:28:57,055 --> 00:28:59,971
And the same m and n or similar m and n

430
00:28:59,971 --> 00:29:05,221
for the lambda calculus at least with the same set of constraints.

431
00:29:05,221 --> 00:29:06,850
That is, the lambda calculus

432
00:29:06,930 --> 00:29:09,862
is going to be able to calculate some functions

433
00:29:09,862 --> 00:29:12,220
that are not possible on the Turing machine and vice versa,

434
00:29:12,360 --> 00:29:13,392
if you have a constrained system.

435
00:29:13,392 --> 00:29:15,603
And of course it’s even worse for neurons.

436
00:29:15,603 --> 00:29:18,980
If you have a finite number of neurons and a finite number of state changes,

437
00:29:19,030 --> 00:29:23,458
this… does not translate directly into a constrained von Neumann computer

438
00:29:23,458 --> 00:29:26,200
or a constrained lambda calculus.

439
00:29:26,760 --> 00:29:30,090
And there’s this big difference between, of course, effectively computable functions,

440
00:29:30,090 --> 00:29:31,986
those that are in principle computable,

441
00:29:31,986 --> 00:29:34,905
and those that we can compute efficiently.

442
00:29:35,542 --> 00:29:38,058
There are things that computers cannot solve.

443
00:29:38,058 --> 00:29:40,430
Some problems that are unsolvable in principle.

444
00:29:40,470 --> 00:29:43,568
For instance the question whether a Turing machine ever stops

445
00:29:43,568 --> 00:29:44,974
for an arbitrary program.
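
[Note: the diagonal argument behind that, sketched in Python. The halts() oracle is hypothetical; the whole point is that no such function can exist.]

```python
# Suppose halts(f) were a total, always-correct decider for the question
# "does f() ever stop?". The stub below stands in for that hypothetical oracle.
def halts(f) -> bool:
    raise NotImplementedError  # any real implementation must be wrong somewhere

def paradox():
    if halts(paradox):  # if the oracle says "paradox halts"...
        while True:     # ...then loop forever;
            pass
    # ...and if it says "paradox loops forever", halt immediately.

# Whatever halts(paradox) would answer, paradox() does the opposite,
# so the assumed oracle contradicts itself.
```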

446
00:29:45,481 --> 00:29:48,341
And some problems are unsolvable in practice.

447
00:29:48,341 --> 00:29:51,632
Because it’s very, very hard to do so for a deterministic Turing machine.

448
00:29:51,632 --> 00:29:55,398
And the class of NP-hard problems is a very strong candidate for that.

449
00:29:55,398 --> 00:29:56,653
Non-polynomial problems.

450
00:29:57,307 --> 00:29:59,338
Among these problems is, for instance, the idea

451
00:29:59,338 --> 00:30:03,957
of finding the key for an encrypted text.

452
00:30:03,957 --> 00:30:06,917
If the key is very long and you are not the NSA and don’t have a backdoor.
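
[Note: a trivial illustration of why long keys put this out of reach in practice, though not in principle: every added key bit doubles the brute-force work.]

```python
# Brute-force key search grows exponentially with key length.
for bits in (8, 32, 64, 128):
    print(f"{bits:3d}-bit key: {2**bits:.2e} keys to try")
```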

453
00:30:09,240 --> 00:30:11,182
And then there are non-decidable problems.

454
00:30:12,133 --> 00:30:13,952
Problems where we cannot define…

455
00:30:13,952 --> 00:30:18,280
find out, in the formal system, whether the answer is yes or no.

456
00:30:18,450 --> 00:30:19,847
Whether it’s true or false.

457
00:30:19,847 --> 00:30:25,691
And some philosophers have argued that humans can always do this so they are more powerful than computers.

458
00:30:25,691 --> 00:30:28,700
Because you can show, prove formally, that computers cannot do this.

459
00:30:28,700 --> 00:30:29,519
Gödel has done this.

460
00:30:31,224 --> 00:30:32,351
But… hm…

461
00:30:32,351 --> 00:30:33,566
Here’s a test question:

462
00:30:33,566 --> 00:30:35,617
Can you solve undecidable problems?

463
00:30:36,104 --> 00:30:39,670
If you choose one of the following answers randomly,

464
00:30:39,760 --> 00:30:41,740
what’s the probability that the answer is correct?

465
00:30:50,664 --> 00:30:51,102
I’ll tell you.

466
00:30:51,102 --> 00:30:52,449
Computers are not going to find out.

467
00:30:52,449 --> 00:30:54,161
And… me neither.

468
00:30:56,450 --> 00:30:56,960
OK.

469
00:30:56,960 --> 00:30:58,290
How difficult is AI?

470
00:30:58,460 --> 00:30:59,640
It’s a very difficult question.

471
00:30:59,630 --> 00:31:00,330
We don’t know.

472
00:31:00,350 --> 00:31:04,040
We do have some numbers, which could tell us that it’s not impossible.

473
00:31:04,517 --> 00:31:07,168
As we have these roughly 100 billion neurons—

474
00:31:07,168 --> 00:31:08,648
the ballpark figure—

475
00:31:08,648 --> 00:31:15,372
and the cells in the cortex are organized into circuits of a few thousand to ten thousand neurons,

476
00:31:15,372 --> 00:31:16,999
which we call cortical columns.

477
00:31:17,608 --> 00:31:21,978
And these cortical columns… are pretty similar among each other,

478
00:31:21,978 --> 00:31:26,282
and have higher interconnectivity within, and somewhat lower connectivity among each other,

479
00:31:26,282 --> 00:31:29,112
and even lower long range connectivity.

480
00:31:29,915 --> 00:31:32,065
And the brain has a very distinct architecture.

481
00:31:32,065 --> 00:31:38,320
And a very distinct structure of certain nuclei and structures that have very different functional purposes.

482
00:31:38,570 --> 00:31:40,042
And the layout of these…

483
00:31:40,913 --> 00:31:42,925
both the individual neurons, neuron types,

484
00:31:42,925 --> 00:31:50,440
the more than 130 known neurotransmitters, of which we do not completely understand all, or even most of them,

485
00:31:51,040 --> 00:31:54,466
this is all defined in our genome of course.

486
00:31:54,466 --> 00:31:56,186
And the genome is not very long.

487
00:31:56,186 --> 00:32:00,890
It’s something like… I think the Human Genome Project amounted to a CD-ROM.

488
00:32:00,980 --> 00:32:03,230
775 megabytes.

489
00:32:03,590 --> 00:32:05,096
So actually, it’s….

490
00:32:05,096 --> 00:32:08,990
The computational complexity of defining a complete human being,

491
00:32:08,990 --> 00:32:11,138
if you have physics and chemistry already given

492
00:32:11,138 --> 00:32:14,020
to enable protein synthesis and so on—

493
00:32:14,020 --> 00:32:16,523
gravity and temperature ranges—

494
00:32:16,523 --> 00:32:18,802
is less than Microsoft Windows.

495
00:32:20,474 --> 00:32:23,372
And it’s the upper bound, because only a very small fraction of that

496
00:32:23,372 --> 00:32:25,332
is going to code for our nervous system.

497
00:32:26,103 --> 00:32:29,315
But it doesn’t mean it’s easy to reverse engineer the whole thing.

498
00:32:29,315 --> 00:32:31,332
It just means it’s not a hopeless

499
00:32:31,332 --> 00:32:33,080
complexity that you would be looking at.

500
00:32:34,077 --> 00:32:37,506
But estimating the real difficulty is, from my perspective, impossible.

501
00:32:37,955 --> 00:32:47,382
Because I’m not just a philosopher or a dreamer or a science fiction author, but I’m a software developer.

502
00:32:47,382 --> 00:32:53,289
And as a software developer I know it’s impossible to give an estimate on when you’re done, when you don’t have the full specification.

503
00:32:53,289 --> 00:32:56,000
And we don’t have a full specification yet.

504
00:32:57,130 --> 00:32:59,730
So you all know this shortest computer science joke:

505
00:32:59,830 --> 00:33:03,450
“It’s almost done.”

506
00:33:04,030 --> 00:33:05,590
You do the first 98 %.

507
00:33:05,590 --> 00:33:07,863
Now we can do the second 98 %.

508
00:33:08,780 --> 00:33:10,390
We never know when it’s done,

509
00:33:10,420 --> 00:33:13,268
if we haven’t solved and specified all the problems.

510
00:33:13,268 --> 00:33:14,640
If you don’t know how it’s to be done.

511
00:33:14,650 --> 00:33:18,170
And even if you have [a] rough direction, and I think we do,

512
00:33:18,430 --> 00:33:21,490
we don’t know how long it’ll take until we have worked out the details.

513
00:33:22,496 --> 00:33:26,604
And some part of that big question, how long it takes until it’ll be done,

514
00:33:26,604 --> 00:33:29,520
is the question whether we need to make small incremental progress

515
00:33:29,520 --> 00:33:32,367
versus whether we need one big idea,

516
00:33:32,367 --> 00:33:33,487
which kind of solves it all.

517
00:33:37,562 --> 00:33:38,910
AI has a pretty long history.

518
00:33:38,910 --> 00:33:40,910
It starts out with logic and automata.

519
00:33:40,930 --> 00:33:43,930
And this idea of computability that I just sketched out.

520
00:33:44,050 --> 00:33:46,683
Then with this idea of machines that implement computability.

521
00:33:47,050 --> 00:33:52,663
And it came via Babbage and Zuse and von Neumann and so on.

522
00:33:52,663 --> 00:33:55,030
Then we had information theory by Claude Shannon.

523
00:33:55,060 --> 00:33:57,235
He captured the idea of what information is

524
00:33:57,235 --> 00:34:00,181
and how entropy can be calculated for information and so on.

525
00:34:00,181 --> 00:34:05,143
And we had this beautiful idea of describing the world as systems.

526
00:34:05,143 --> 00:34:10,120
And systems are made up of entities and relations between them.

527
00:34:10,150 --> 00:34:13,061
And along these relations we have feedback.

528
00:34:13,061 --> 00:34:16,780
And dynamical systems emerge.

529
00:34:16,780 --> 00:34:18,724
This very beautiful idea was cybernetics.

530
00:34:18,724 --> 00:34:20,409
Unfortunately it has been killed by

531
00:34:21,280 --> 00:34:22,556
second-order Cybernetics.

532
00:34:22,556 --> 00:34:24,163
By this Maturana stuff and so on.

533
00:34:24,163 --> 00:34:26,780
And turned into one of the humanities and died.

534
00:34:27,310 --> 00:34:31,630
But the idea stuck around and most of them went into artificial intelligence.

535
00:34:32,230 --> 00:34:33,925
And then we had this idea of symbol systems.

536
00:34:33,925 --> 00:34:37,123
That is how we can do grammatical language.

537
00:34:37,123 --> 00:34:38,538
Process that.

538
00:34:38,538 --> 00:34:40,040
We can do planning and so on.

539
00:34:40,840 --> 00:34:42,940
Abstract reasoning in automatic systems.

540
00:34:43,480 --> 00:34:47,985
Then the idea of how we can abstract neural networks into distributed systems.

541
00:34:47,985 --> 00:34:49,803
With McClelland and Pitts and so on.

542
00:34:49,803 --> 00:34:51,520
Parallel distributed processing.

543
00:34:51,909 --> 00:34:54,344
And then we had a movement of autonomous agents,

544
00:34:54,344 --> 00:34:57,430
which look at self-directed, goal directed systems.

545
00:34:59,110 --> 00:35:02,830
And the whole story somehow started in 1950 I think,

546
00:35:03,520 --> 00:35:04,783
in its best possible way.

547
00:35:04,783 --> 00:35:06,735
When Alan Turing wrote his paper

548
00:35:06,735 --> 00:35:09,531
“Computing Machinery and Intelligence”

549
00:35:09,531 --> 00:35:11,967
and those of you who haven’t read it should do so.

550
00:35:11,967 --> 00:35:14,780
It’s a very, very easy read.

551
00:35:14,800 --> 00:35:15,840
It’s fascinating.

552
00:35:15,970 --> 00:35:19,218
He already has most of the important questions of AI in there.

553
00:35:19,218 --> 00:35:20,768
Most of the important criticisms.

554
00:35:20,768 --> 00:35:23,886
Most of the important answers to the most important criticisms.

555
00:35:23,886 --> 00:35:26,738
And it’s also the paper, where he describes the Turing test.

556
00:35:26,738 --> 00:35:29,380
And basically sketches the idea that

557
00:35:30,260 --> 00:35:33,430
a way to determine whether somebody is intelligent is

558
00:35:33,970 --> 00:35:36,645
to judge the ability of that one—

559
00:35:36,645 --> 00:35:37,807
that person or that system—

560
00:35:37,807 --> 00:35:43,720
to engage in meaningful discourse.

561
00:35:43,720 --> 00:35:51,780
Which includes creativity, and empathy maybe, and logic, and language,

562
00:35:51,780 --> 00:35:53,880
and anticipation, memory retrieval, and so on.

563
00:35:54,390 --> 00:35:55,190
Story comprehension.

564
00:35:55,530 --> 00:35:59,292
And the idea of AI then

565
00:35:59,292 --> 00:36:03,668
coalesced in the group of cyberneticians and computer scientists and so on,

566
00:36:03,668 --> 00:36:06,119
which got together in the Dartmouth conference.

567
00:36:06,119 --> 00:36:07,540
It was in 1956.

568
00:36:08,070 --> 00:36:11,472
And there Marvin Minsky coined the name “artificial intelligence”

569
00:36:11,472 --> 00:36:15,360
for the project of using computer science to understand the mind.

570
00:36:16,020 --> 00:36:19,680
John McCarthy was the guy who came up with Lisp, among other things.

571
00:36:19,800 --> 00:36:22,848
Nathan Rochester did pattern recognition

572
00:36:22,848 --> 00:36:24,990
and he’s, I think, more famous for

573
00:36:25,500 --> 00:36:27,510
writing the first assembly programming language.

574
00:36:28,610 --> 00:36:30,970
Claude Shannon was this information theory guy.

575
00:36:30,990 --> 00:36:32,674
But they also got psychologists there

576
00:36:32,674 --> 00:36:35,987
and sociologists and people from many different fields.

577
00:36:35,987 --> 00:36:38,362
It was very highly interdisciplinary.

578
00:36:38,362 --> 00:36:40,950
And they already had the funding and it was a very good time.

579
00:36:42,150 --> 00:36:46,351
And in this good time they reaped a lot of low-hanging fruit very quickly.

580
00:36:46,351 --> 00:36:50,220
Which gave them the idea that AI would be done very soon.

581
00:36:51,540 --> 00:36:58,880
In 1969 Minsky and Papert wrote a small booklet against the idea of using neural networks.

582
00:36:59,220 --> 00:37:00,450
And they won.

583
00:37:01,650 --> 00:37:02,340
Their argument won.

584
00:37:02,340 --> 00:37:04,802
But, even more unfortunately, it was wrong.

585
00:37:05,310 --> 00:37:09,268
So for more than a decade, there was practically no more funding for neural networks,

586
00:37:09,674 --> 00:37:13,860
which was bad, so most people did logic-based systems, which have some limitations.

587
00:37:14,190 --> 00:37:16,760
And in the meantime people did expert systems.

588
00:37:16,760 --> 00:37:19,612
The idea to describe the world

589
00:37:19,612 --> 00:37:22,680
as basically logical expressions.

590
00:37:22,680 --> 00:37:25,777
This turned out to be brittle, and difficult, and had diminishing returns.

591
00:37:25,777 --> 00:37:27,800
And at some point it didn’t work anymore.

592
00:37:27,990 --> 00:37:29,500
And many of the people who tried it

593
00:37:29,500 --> 00:37:33,404
became very disenchanted and then threw out lots of babies with the bathwater.

594
00:37:33,404 --> 00:37:37,340
And in the future only did robotics or something completely different.

595
00:37:37,380 --> 00:37:41,167
Instead of going back to the idea of looking at mental representations.

596
00:37:41,167 --> 00:37:42,060
How the mind works.

597
00:37:43,640 --> 00:37:46,140
And at the moment AI is in kind of a sad state.

598
00:37:46,140 --> 00:37:47,915
Most of it is applications.

599
00:37:47,915 --> 00:37:49,805
That is, for instance, robotics

600
00:37:49,805 --> 00:37:53,260
or statistical methods to do better machine learning and so on.

601
00:37:53,400 --> 00:37:55,500
And I don’t say it’s invalid to do this.

602
00:37:55,500 --> 00:37:56,580
It’s intellectually challenging.

603
00:37:56,580 --> 00:37:57,757
It’s tremendously useful.

604
00:37:57,757 --> 00:38:00,140
It’s very successful and productive and so on.

605
00:38:00,240 --> 00:38:03,180
It’s just a very different question from how to understand the mind.

606
00:38:03,240 --> 00:38:06,120
If you want to go to the moon you have to shoot for the moon.

607
00:38:08,220 --> 00:38:10,899
So there is this movement still existing in AI,

608
00:38:10,899 --> 00:38:12,349
and becoming stronger these days.

609
00:38:12,349 --> 00:38:13,533
It’s called cognitive systems.

610
00:38:13,533 --> 00:38:16,708
And the idea of cognitive systems has many names

611
00:38:16,708 --> 00:38:23,000
like “artificial general intelligence” or “biologically inspired cognitive architectures”.

612
00:38:23,070 --> 00:38:27,812
It’s to use information processing as the dominant paradigm to understand the mind.

613
00:38:27,812 --> 00:38:30,445
And the tools that we need to do that:

614
00:38:30,445 --> 00:38:33,018
we have to build whole architectures that we can test.

615
00:38:33,018 --> 00:38:35,830
Not just individual modules.

616
00:38:36,120 --> 00:38:38,610
You have to have universal representations,

617
00:38:40,060 --> 00:38:44,290
which means these representations have to be both distributed—

618
00:38:45,040 --> 00:38:46,014
associative and so on—

619
00:38:46,014 --> 00:38:47,050
and symbolic.

620
00:38:47,170 --> 00:38:49,900
We need to be able to do both those things with it.

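To make this dual requirement concrete, here is a minimal Python sketch of a representation node that can be used both ways: symbolically, by following labeled links, and associatively, by spreading continuous activation. All names and the spreading scheme are illustrative assumptions, not the formalism from the talk.

```python
# Hypothetical sketch: one node type that supports both symbolic and
# distributed/associative use. Not the talk's actual formalism.

class Node:
    def __init__(self, label):
        self.label = label          # symbolic handle (usable in logic/planning)
        self.activation = 0.0       # continuous state (usable associatively)
        self.links = []             # typed, weighted edges: (relation, node, weight)

    def link(self, relation, other, weight=1.0):
        self.links.append((relation, other, weight))

def spread(sources, steps=2, decay=0.5):
    """Spreading activation: the distributed/associative mode of use."""
    for node in sources:
        node.activation = 1.0
    frontier = list(sources)
    for _ in range(steps):
        next_frontier = []
        for node in frontier:
            for _, target, weight in node.links:
                incoming = node.activation * weight * decay
                if incoming > target.activation:
                    target.activation = incoming
                    next_frontier.append(target)
        frontier = next_frontier

# Symbolic mode: follow labeled relations like inference rules.
cat, mammal, fur = Node("cat"), Node("mammal"), Node("fur")
cat.link("is-a", mammal)
mammal.link("has", fur)

# Associative mode: activate "cat" and see what else becomes salient.
spread([cat])
print(sorted(((n.label, round(n.activation, 2)) for n in (cat, mammal, fur)),
             key=lambda pair: -pair[1]))
```

The point is only that one and the same structure can carry discrete inference and fuzzy association.
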
621
00:38:50,860 --> 00:38:57,430
So we need to be able to do language and planning, and we need to do sensorimotor coupling, and associative thinking in superposition of

622
00:38:58,150 --> 00:39:03,010
representations and ambiguity and so on.

623
00:39:03,010 --> 00:39:03,370
And

624
00:39:04,420 --> 00:39:06,033
operations over those representations.

625
00:39:06,033 --> 00:39:06,610
Some kind of

626
00:39:06,610 --> 00:39:08,134
semi-universal problem solving.

627
00:39:08,134 --> 00:39:12,990
It’s probably semi-universal, because there seem to be problems that humans are very bad at solving.

628
00:39:13,240 --> 00:39:15,100
Our minds are not completely universal.

629
00:39:16,180 --> 00:39:21,778
And we need some kind of universal motivation. That is something that directs the system to do all the interesting things that you want it to do.

630
00:39:21,778 --> 00:39:27,250
Like engage in social interaction or in mathematics or creativity.

631
00:39:28,450 --> 00:39:32,730
And maybe we want to understand emotion, and affect, and phenomenal experience, and so on.

632
00:39:34,450 --> 00:39:35,320
So:

633
00:39:35,320 --> 00:39:37,348
we want to understand universal representations.

634
00:39:37,348 --> 00:39:43,600
We want to have a set of operations over those representations that give us neural learning, and category formation,

635
00:39:44,210 --> 00:39:48,940
and planning, and reflection, and memory consolidation, and resource allocation,

636
00:39:49,600 --> 00:39:52,810
and language, and all those interesting things.

637
00:39:53,020 --> 00:39:54,677
We also want to have perceptual grounding—

638
00:39:54,677 --> 00:39:59,800
that is, the representations should be shaped in such a way that they can be mapped to perceptual input—

639
00:40:00,400 --> 00:40:01,130
and vice versa.

640
00:40:02,380 --> 00:40:03,610
And…

641
00:40:03,610 --> 00:40:07,624
they should also be able to be translated into motor programs to perform actions.

642
00:40:07,624 --> 00:40:17,320
And maybe we also want to have some feedback between the actions and the perceptions, and this feedback usually has a name: it’s called an environment.

643
00:40:17,320 --> 00:40:17,810
OK.

644
00:40:17,900 --> 00:40:23,460
And these mental representations, they are not just a big lump of things, but they have some structure.

645
00:40:23,510 --> 00:40:27,700
One part will be inevitably the model of the current situation…

646
00:40:27,740 --> 00:40:28,471
… that we are in.

647
00:40:28,997 --> 00:40:30,180
And this situation model…

648
00:40:31,210 --> 00:40:32,890
is the present.

649
00:40:32,990 --> 00:40:36,185
But we also want to memorize past situations,

650
00:40:36,185 --> 00:40:38,750
to have a protocol memory, a memory of the past.

651
00:40:39,680 --> 00:40:44,050
And one part of this protocol memory will contain the things that are always with me.

652
00:40:44,150 --> 00:40:44,922
This is my self-model.

653
00:40:44,922 --> 00:40:48,380
Those properties that are constantly available to me.

654
00:40:48,890 --> 00:40:50,185
That I can ascribe to myself.

655
00:40:50,185 --> 00:40:54,432
And then there are the other things, which are constantly changing, and which I usually conceptualize as my environment.

656
00:40:54,432 --> 00:40:57,010
An important part of that is declarative memory.

657
00:40:57,010 --> 00:41:00,149
For instance abstractions into objects, things, people, and so on,

658
00:41:00,149 --> 00:41:05,720
and procedural memory: abstraction into sequences of events.

659
00:41:05,720 --> 00:41:10,490
And we can use the declarative memory and the procedural memory to erect a frame.

660
00:41:10,550 --> 00:41:13,540
The frame gives me a context to interpret the current situation.

661
00:41:13,540 --> 00:41:16,440
For instance right now I’m in a frame of giving a talk.

662
00:41:17,250 --> 00:41:17,960
If…

663
00:41:17,960 --> 00:41:18,960
… I would take a…

664
00:41:19,620 --> 00:41:23,839
two-year-old kid, then this kid would interpret the situation very differently than me.

665
00:41:23,839 --> 00:41:30,493
And it would probably be confused by the situation, or explore it in more creative ways than I would come up with.

666
00:41:30,493 --> 00:41:33,426
Because I’m constrained by the frame which gives me the context

667
00:41:33,426 --> 00:41:36,263
and tells me what you expect me to do in this situation.

668
00:41:36,263 --> 00:41:37,830
What I am expected to do and so on.

669
00:41:39,450 --> 00:41:41,097
This frame extends into the future.

670
00:41:41,097 --> 00:41:43,230
I have some kind of expectation horizon.

671
00:41:43,230 --> 00:41:46,170
I know that my talk is going to be over in about 15 minutes.

672
00:41:47,500 --> 00:41:48,890
Also, I have plans.

673
00:41:48,930 --> 00:41:51,010
I have things I want to tell you and so on.

674
00:41:51,010 --> 00:41:52,660
And it might go wrong but I’ll try.

675
00:41:53,550 --> 00:41:56,740
And if I generalize this, I find that I have a world model,

676
00:41:56,740 --> 00:41:59,143
I have long-term memory, and I have some kind of mental stage.

677
00:41:59,143 --> 00:42:01,366
This mental stage has counterfactual stuff.

678
00:42:01,366 --> 00:42:02,310
Stuff that is not…

679
00:42:02,940 --> 00:42:03,189
… real.

680
00:42:03,189 --> 00:42:07,170
That I can play around with.

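As a reading aid, here is a toy sketch of how the parts just described (situation model, protocol memory, self-model, declarative and procedural memory, frames, mental stage) could be laid out as data structures. The field names are illustrative assumptions, not the design of the actual system mentioned later in the talk.

```python
# Toy layout of the memory structure described above; purely illustrative.
from dataclasses import dataclass, field

@dataclass
class Situation:
    percepts: dict            # what is currently perceived
    self_state: dict          # the part that is "always with me": the self-model
    frame: str = "unknown"    # context, e.g. "giving a talk"
    expectations: list = field(default_factory=list)  # expectation horizon

@dataclass
class WorldModel:
    current: Situation                                 # the present
    protocol: list = field(default_factory=list)       # memory of past situations
    declarative: dict = field(default_factory=dict)    # abstractions: objects, people
    procedural: dict = field(default_factory=dict)     # abstractions: event sequences
    mental_stage: list = field(default_factory=list)   # counterfactuals to play with

    def advance(self, new_situation: Situation):
        """Move on: the present becomes part of the protocol memory."""
        self.protocol.append(self.current)
        self.current = new_situation

talk = Situation(percepts={"audience": "attentive"},
                 self_state={"arousal": "moderate"},
                 frame="giving a talk",
                 expectations=["talk ends in ~15 minutes"])
world = WorldModel(current=talk)
world.advance(Situation(percepts={"audience": "applauding"},
                        self_state={"arousal": "relieved"}))
print(len(world.protocol), world.current.percepts)
```
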
681
00:42:07,170 --> 00:42:10,998
Ok. Then I need some kind of action selection that mediates between perception and action,

682
00:42:10,998 --> 00:42:14,112
and some mechanism that controls the action selection

683
00:42:14,112 --> 00:42:16,150
that is a motivational system,

684
00:42:16,720 --> 00:42:20,107
which selects motives based on demands of the system.

685
00:42:20,107 --> 00:42:23,660
And the demands of the system should create goals.

686
00:42:23,750 --> 00:42:25,180
We are not born with our goals.

687
00:42:25,180 --> 00:42:30,630
Obviously I don’t think that I was born with the goal of standing here and giving this talk to you.

688
00:42:30,670 --> 00:42:36,640
There must be some demand in the system, which makes… enables me to have a biography, that …

689
00:42:37,420 --> 00:42:44,550
… makes this a big goal of mine to give this talk to you and engage as many of you as possible into the project of AI.

690
00:42:45,280 --> 00:42:49,730
And so let’s come up with a set of demands that can produce such goals universally.

691
00:42:49,770 --> 00:42:55,180
I think some of these demands will be physiological, like food, water, energy, physical integrity, rest, and so on.

692
00:42:55,770 --> 00:42:57,160
Heat and cold within the right range.

693
00:42:57,900 --> 00:42:59,265
Then we have social demands.

694
00:42:59,265 --> 00:43:00,545
At least most of us do.

695
00:43:00,545 --> 00:43:02,200
Sociopaths probably don’t.

696
00:43:02,270 --> 00:43:04,090
These social demands do structure our…

697
00:43:04,720 --> 00:43:05,428
… social interaction.

698
00:43:05,428 --> 00:43:08,600
They include, for instance, a demand for affiliation:

699
00:43:08,650 --> 00:43:13,000
that we get signals from others that we are OK parts of society, of our environment.

700
00:43:14,710 --> 00:43:17,670
We also have internalized social demands,

701
00:43:17,900 --> 00:43:19,780
which we usually call honor or something.

702
00:43:19,780 --> 00:43:21,700
This is conformance to internalized norms.

703
00:43:21,700 --> 00:43:22,090
It means,

704
00:43:22,600 --> 00:43:25,390
that we conform to social norms, even when nobody is looking.

705
00:43:26,920 --> 00:43:28,571
And then we have cognitive demands.

706
00:43:28,571 --> 00:43:31,605
One of these cognitive demands is, for instance, competence acquisition.

707
00:43:31,605 --> 00:43:32,564
We want to learn.

708
00:43:32,564 --> 00:43:34,090
We want to get new skills.

709
00:43:34,350 --> 00:43:38,120
We want to become more powerful in many many dimensions and ways.

710
00:43:38,140 --> 00:43:41,431
It’s good to learn a musical instrument, because you get more competent.

711
00:43:41,490 --> 00:43:44,940
It creates a reward signal, a pleasure signal, if you do that.

712
00:43:44,950 --> 00:43:47,600
Also we want to reduce uncertainty.

713
00:43:47,680 --> 00:43:51,867
Mathematicians are those people [that] have learned that they can reduce uncertainty in mathematics.

714
00:43:51,867 --> 00:43:55,626
This creates pleasure for them, and then they find uncertainty in mathematics.

715
00:43:55,626 --> 00:43:57,170
And this creates more pleasure.

716
00:43:57,250 --> 00:44:02,680
So for mathematicians, mathematics is an unending source of pleasure.

717
00:44:11,730 --> 00:44:15,470
Now unfortunately, if you are in Germany right now studying mathematics

718
00:44:15,470 --> 00:44:19,300
and you find out that you are not very good at doing mathematics, what do you do?

719
00:44:19,960 --> 00:44:22,170
You become a teacher.

720
00:44:29,880 --> 00:44:33,060
And this is a very unfortunate situation for everybody involved.

721
00:44:35,040 --> 00:44:39,330
And it means that you have people [that] associate mathematics with…

722
00:44:39,960 --> 00:44:41,910
uncertainty,

723
00:44:41,910 --> 00:44:44,120
that has to be curbed and avoided.

724
00:44:44,640 --> 00:44:51,103
And these people are put in front of kids and infuse them with this dread of uncertainty in mathematics.

725
00:44:51,103 --> 00:44:57,441
And most people in our culture dread mathematics, because for them it’s just the anticipation of uncertainty.

726
00:44:57,441 --> 00:45:00,710
Which is a very bad thing, so people avoid it.

727
00:45:01,470 --> 00:45:01,770
OK.

728
00:45:01,770 --> 00:45:03,394
And then you have aesthetic demands.

729
00:45:03,848 --> 00:45:06,400
There are stimulus-oriented aesthetics.

730
00:45:06,400 --> 00:45:11,682
Nature has had to pull some very heavy strings and levers to make us interested in strange things…

731
00:45:11,682 --> 00:45:13,740
[such] as certain human body schemas and…

732
00:45:14,460 --> 00:45:18,620
certain types of landscapes, and audio schemas, and so on.

733
00:45:18,630 --> 00:45:22,740
So there are some stimuli that are inherently pleasurable to us—pleasant to us.

734
00:45:22,950 --> 00:45:29,290
And of course this varies with every individual, because the wiring is very different, and the adaptivity in our biography is very different.

735
00:45:29,730 --> 00:45:31,319
And then there’s abstract aesthetics.

736
00:45:31,319 --> 00:45:34,846
And I think abstract aesthetics relates to finding better representations.

737
00:45:34,846 --> 00:45:37,200
It relates to finding structure.

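Putting the last few minutes together: demands produce urgency signals, the strongest urge is selected as the active motive, and the motive then spawns goals. A toy sketch of that loop in Python; the demand names and numbers are invented for illustration.

```python
# Toy motivational system: demands produce urges, the strongest urge
# becomes the active motive. Values and demand names are illustrative.

class Demand:
    def __init__(self, name, target, value):
        self.name, self.target, self.value = name, target, value

    def urge(self):
        # Deviation from the target value is the urgency of the demand.
        return abs(self.target - self.value)

demands = [
    Demand("energy", target=1.0, value=0.4),            # physiological
    Demand("affiliation", target=1.0, value=0.9),       # social
    Demand("competence", target=1.0, value=0.5),        # cognitive
    Demand("uncertainty_reduction", target=1.0, value=0.3),
]

def select_motive(demands):
    return max(demands, key=lambda d: d.urge())

motive = select_motive(demands)
print("active motive:", motive.name)   # -> uncertainty_reduction
# A goal would then be any anticipated situation that satisfies the
# active demand, e.g. "work on this proof" for uncertainty reduction.
```
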
738
00:45:39,300 --> 00:45:43,110
OK. And then we want to look at things like emotional modulation and affect.

739
00:45:43,110 --> 00:45:45,649
And this was one of the first things that actually got me into AI.

740
00:45:45,649 --> 00:45:46,560
That was the question:

741
00:45:47,120 --> 00:45:50,770
“How is it possible, that a system can feel something?”

742
00:45:50,860 --> 00:45:54,150
Because having a variable in me labeled just fear or pain

743
00:45:54,810 --> 00:45:56,008
does not equate to a feeling.

744
00:45:56,008 --> 00:45:56,310
It’s very far… uhm…

745
00:45:56,880 --> 00:45:58,210
… different from that.

746
00:45:58,290 --> 00:46:00,330
And the answer that I’ve found so far is

747
00:46:00,840 --> 00:46:04,920
that feeling, or affect, is a configuration of the system.

748
00:46:04,920 --> 00:46:06,513
It’s not a parameter in the system,

749
00:46:06,513 --> 00:46:12,930
but we have several dimensions, like the state of arousal that we’re currently in, the level of stubbornness that we have (the selection threshold),

750
00:46:13,500 --> 00:46:16,472
the direction of attention, outwards or inwards,

751
00:46:17,493 --> 00:46:21,821
the resolution level that we have, [with] which we look at our representations, and so on.

752
00:46:21,821 --> 00:46:28,020
And together, in every given situation, these create a certain way in which our cognition is modulated.

753
00:46:29,620 --> 00:46:30,800
From time to time, we are living in a very different

754
00:46:31,370 --> 00:46:33,690
and dynamic environment.

755
00:46:33,710 --> 00:46:36,390
When we go outside, we have very different demands on our cognition.

756
00:46:36,390 --> 00:46:38,213
Maybe we need to react to traffic and so on.

757
00:46:38,213 --> 00:46:40,475
Maybe we need to interact with other people.

758
00:46:40,475 --> 00:46:42,523
Maybe we are in stressful situations.

759
00:46:42,523 --> 00:46:44,037
Maybe we are in relaxed situations.

760
00:46:44,037 --> 00:46:46,460
So we need to modulate our cognition accordingly.

761
00:46:46,580 --> 00:46:49,831
And this modulation means that we perceive the world differently.

762
00:46:49,831 --> 00:46:51,280
Our cognition works differently.

763
00:46:51,280 --> 00:46:55,010
And we conceptualize ourselves, and experience ourselves, differently.

764
00:46:55,340 --> 00:46:57,990
And I think this is what it means to feel something:

765
00:46:58,010 --> 00:46:59,691
this difference in the configuration.

766
00:47:01,580 --> 00:47:05,218
So. The affect can be seen as a configuration of a cognitive system.

767
00:47:05,453 --> 00:47:09,530
And the modulators of cognition are things like arousal, and selection threshold, and

768
00:47:10,140 --> 00:47:13,810
the rate of background checks, and resolution level, and so on.

769
00:47:13,920 --> 00:47:17,391
Our current estimates of competence and certainty in the given situation,

770
00:47:17,391 --> 00:47:21,301
and the pleasure and distress signals that we get from the frustration of our demands,

771
00:47:21,301 --> 00:47:26,440
or the satisfaction of our demands, which are reinforcements for learning and for structuring our behavior.

772
00:47:27,540 --> 00:47:33,000
So the affective state, the emotional state that we are in, is emergent over those modulators.

773
00:47:33,930 --> 00:47:37,860
And higher level emotions, things like jealousy or pride and so on,

774
00:47:37,960 --> 00:47:41,771
we get them by directing those affects upon motivational content.

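A minimal sketch of the claim that affect is a configuration rather than a variable: set the modulators, and read the affective state off the whole configuration. The modulator names follow the talk; the thresholds and labels are invented for illustration.

```python
# Sketch: affect as an emergent configuration of cognitive modulators,
# not a single parameter. Thresholds and labels are invented.
from dataclasses import dataclass

@dataclass
class Modulators:
    arousal: float              # 0..1
    selection_threshold: float  # "stubbornness": how hard to switch motives
    securing_rate: float        # frequency of background checks
    resolution: float           # how detailed representations are processed
    attention_outward: bool     # attention directed at world vs. self

def describe_affect(m: Modulators) -> str:
    # The affective state is read off the whole configuration.
    if (m.arousal > 0.7 and m.securing_rate > 0.7
            and m.resolution < 0.4 and m.attention_outward):
        return "anxiety-like: aroused, constantly checking, coarse processing"
    if m.arousal < 0.4 and m.selection_threshold > 0.6 and m.resolution > 0.6:
        return "calm concentration: stable motive, detailed processing"
    return "neutral/mixed configuration"

print(describe_affect(Modulators(0.9, 0.3, 0.9, 0.3, True)))
print(describe_affect(Modulators(0.2, 0.8, 0.2, 0.8, False)))
```
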
775
00:47:43,258 --> 00:47:46,550
And this gives us a very simple architecture.

776
00:47:46,550 --> 00:47:48,640
It’s a very rough sketch for an architecture.

777
00:47:48,640 --> 00:47:49,130
And I think,

778
00:47:49,640 --> 00:47:49,970
of course,

779
00:47:50,930 --> 00:47:53,120
this doesn’t specify all the details.

780
00:47:53,660 --> 00:47:57,327
I have specified some more of the details in a book, that I want to shamelessly plug here:

781
00:47:57,327 --> 00:48:00,840
it’s called “Principles of Synthetic Intelligence”.

782
00:48:00,860 --> 00:48:03,660
You can get it from Amazon or maybe from your library.

783
00:48:03,830 --> 00:48:07,443
And this describes basically this architecture and some of the demands

784
00:48:07,443 --> 00:48:12,560
on a very general framework of artificial intelligence within which to work.

785
00:48:12,560 --> 00:48:14,816
So it doesn’t give you all the functional mechanisms,

786
00:48:14,816 --> 00:48:18,350
but some things that I think are necessary based on my current understanding.

787
00:48:19,100 --> 00:48:20,840
We’re currently at the second…

788
00:48:21,560 --> 00:48:23,310
iteration of the implementation.

789
00:48:23,330 --> 00:48:28,420
The first one was in Java in early 2003 with lots of XMI files and…

790
00:48:28,794 --> 00:48:32,018
… XML files … and design patterns and Eclipse plug-ins.

791
00:48:32,018 --> 00:48:35,800
And the new one is, of course, … runs in the browser, and is written in Python,

792
00:48:35,800 --> 00:48:39,570
and is much more lightweight and much more of a joy to work with.

793
00:48:39,937 --> 00:48:41,490
But we’re not done yet.

794
00:48:42,260 --> 00:48:42,870
OK.

795
00:48:43,070 --> 00:48:49,538
So this gets back to that question: is it going to be one big idea or is it going to be incremental progress?

796
00:48:49,930 --> 00:48:51,200
And I think it’s the latter.

797
00:48:52,100 --> 00:48:55,990
If we look at this extremely simplified list of problems to solve:

798
00:48:57,330 --> 00:48:59,270
whole testable architectures,

799
00:48:59,990 --> 00:49:01,600
universal representations,

800
00:49:03,060 --> 00:49:04,410
universal problem solving,

801
00:49:05,250 --> 00:49:08,310
motivation, emotion, and affect, and so on.

802
00:49:08,540 --> 00:49:11,997
And I can see hundreds and hundreds of Ph.D. theses.

803
00:49:11,997 --> 00:49:15,080
And I’m sure that I only see a tiny part of the problem.

804
00:49:15,050 --> 00:49:17,420
So I think it’s entirely doable,

805
00:49:18,000 --> 00:49:19,818
but it’s going to take a pretty long time.

806
00:49:19,818 --> 00:49:21,888
And it’s going to be very exciting all the way,

807
00:49:21,888 --> 00:49:24,405
because we are going to learn that we are full of shit

808
00:49:24,405 --> 00:49:27,841
as we always do: we come to a new problem with an algorithm,

809
00:49:27,841 --> 00:49:29,516
and we realize that we can’t test it,

810
00:49:29,516 --> 00:49:31,767
and that our initial idea was wrong,

811
00:49:31,767 --> 00:49:33,150
and that we can improve on it.

812
00:49:35,280 --> 00:49:38,560
So what should you do, if you want to get into AI?

813
00:49:38,570 --> 00:49:40,180
And you’re not there yet?

814
00:49:40,290 --> 00:49:43,382
So, I think you should get acquainted, of course, with the basic methodology.

815
00:49:43,382 --> 00:49:44,640
You want to…

816
00:49:45,420 --> 00:49:47,490
pick up programming languages and learn them.

817
00:49:47,490 --> 00:49:48,720
Basically do it for fun.

818
00:49:48,720 --> 00:49:51,348
It’s really fun to wrap your mind around programming languages.

819
00:49:51,348 --> 00:49:52,650
It changes the way you think.

820
00:49:54,000 --> 00:49:56,235
And you want to learn software development.

821
00:49:56,235 --> 00:49:58,159
That is, build an actual, running system.

822
00:49:58,159 --> 00:49:59,449
Test-driven development.

823
00:49:59,449 --> 00:50:00,240
All those things.

824
00:50:01,440 --> 00:50:03,849
Then you want to look at the things that we do in AI.

825
00:50:03,849 --> 00:50:04,830
So for like…

826
00:50:05,430 --> 00:50:08,640
machine learning, probabilistic approaches, Kalman filtering,

827
00:50:09,180 --> 00:50:10,545
POMDPs and so on.

828
00:50:10,940 --> 00:50:16,340
You want to look at modes of representation: semantic networks, description logics, factor graphs, and so on.

829
00:50:16,340 --> 00:50:17,190
Graph theory,

830
00:50:17,880 --> 00:50:18,720
hypergraphs.

831
00:50:19,375 --> 00:50:22,017
And you want to look at the domain of cognitive architectures.

832
00:50:22,017 --> 00:50:26,506
That is building computational models to simulate psychological phenomena,

833
00:50:26,506 --> 00:50:28,110
and reproduce them, and test them.

834
00:50:29,194 --> 00:50:31,280
I don’t think that you should stop there.

835
00:50:31,400 --> 00:50:34,870
You need to take in all the things, that we haven’t taken in yet.

836
00:50:35,110 --> 00:50:37,153
We need to learn more about linguistics.

837
00:50:37,153 --> 00:50:39,880
We need to learn more about neuroscience in our field.

838
00:50:39,890 --> 00:50:41,570
We need to do philosophy of mind.

839
00:50:41,900 --> 00:50:44,112
I think what you need to do is study cognitive science.

840
00:50:47,760 --> 00:50:49,680
So. What should you be working on?

841
00:50:51,600 --> 00:50:55,320
Some of the most pressing questions to me are, for instance, representation.

842
00:50:56,010 --> 00:50:58,800
How can we get abstract and perceptual representations right

843
00:50:58,800 --> 00:51:01,410
and have them interact with each other on a common ground?

844
00:51:01,410 --> 00:51:04,970
How can we work with ambiguity and the superposition of representations?

845
00:51:04,970 --> 00:51:07,770
Many possible interpretations valid at the same time.

846
00:51:08,300 --> 00:51:09,880
Inheritance and polymorphy.

847
00:51:09,900 --> 00:51:12,840
How can we distribute representations in the mind

848
00:51:13,710 --> 00:51:16,120
and store them efficiently?

849
00:51:16,140 --> 00:51:18,152
How can we use representations in such a way

850
00:51:18,152 --> 00:51:20,850
that even parts of them are valid?

851
00:51:21,180 --> 00:51:23,923
And we can use constraints to describe partial representations.

852
00:51:23,923 --> 00:51:25,302
For instance imagine a house.

853
00:51:25,302 --> 00:51:27,619
And you already have the backside of the house,

854
00:51:27,619 --> 00:51:29,202
and the number of windows in that house,

855
00:51:29,202 --> 00:51:31,624
and you already see this complete picture in your head,

856
00:51:31,624 --> 00:51:32,706
and then at any time,

857
00:51:32,730 --> 00:51:35,065
if I say: “OK. It’s a house with nine stories.”

858
00:51:35,065 --> 00:51:37,039
this representation is going to change

859
00:51:37,039 --> 00:51:38,325
based on these constraints.

860
00:51:38,325 --> 00:51:40,020
How can we implement this?

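One toy way to implement the house example: keep a partial representation with default assumptions, and let each new statement act as a constraint that revises the dependent parts. This is an illustrative sketch, not a proposal for the real mechanism.

```python
# Toy constraint store for a partial representation of "a house".
# Defaults get revised as new constraints arrive; purely illustrative.

class PartialHouse:
    def __init__(self):
        # Default assumptions filled in by imagination:
        self.features = {"stories": 2, "windows_per_story": 4, "has_backside": True}

    def constrain(self, feature, value):
        """A new statement ("it has nine stories") revises the representation."""
        self.features[feature] = value
        # Dependent defaults are re-derived from the new constraint:
        self.features["total_windows"] = (
            self.features["stories"] * self.features["windows_per_story"])

house = PartialHouse()
house.constrain("stories", 9)   # "OK. It's a house with nine stories."
print(house.features)           # the whole picture shifts with the constraint
```
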
861
00:51:41,100 --> 00:51:43,250
And of course we want to implement time.

862
00:51:43,250 --> 00:51:43,920
And we want…

863
00:51:45,240 --> 00:51:46,853
to produce uncertain space,

864
00:51:46,853 --> 00:51:47,806
and certain space

865
00:51:47,806 --> 00:51:49,753
and open and closed environments.

866
00:51:49,753 --> 00:51:52,830
And we want to have temporal loops and actual loops and physical loops.

867
00:51:53,960 --> 00:51:55,610
Uncertain loops and all those things.

868
00:51:58,409 --> 00:51:59,891
Next thing: perception.

869
00:51:59,891 --> 00:52:01,260
Perception is crucial.

870
00:52:01,490 --> 00:52:03,624
It’s… Part of it is bottom-up,

871
00:52:03,624 --> 00:52:06,550
that is, driven by cues from stimuli in the environment,

872
00:52:06,740 --> 00:52:10,200
part of it is top-down: it’s driven by what we expect to see.

873
00:52:10,350 --> 00:52:12,332
Actually most of it, about 10 times as much,

874
00:52:12,332 --> 00:52:14,124
is driven by what we expect to see.

875
00:52:14,124 --> 00:52:18,200
So we actually—actively—check for stimuli in the environment.

876
00:52:18,200 --> 00:52:21,650
And this bottom-up top-down process in perception is interleaved.

877
00:52:22,640 --> 00:52:23,870
And it’s adaptive.

878
00:52:24,010 --> 00:52:25,885
We create new concepts and integrate them.

879
00:52:25,885 --> 00:52:28,387
And we can revise those concepts over time.

880
00:52:28,387 --> 00:52:30,528
And we can adapt it to a given environment

881
00:52:30,528 --> 00:52:32,786
without completely revising those representations.

882
00:52:32,786 --> 00:52:34,570
Without making them unstable.

883
00:52:35,000 --> 00:52:37,130
And it works both on sensory input and memory.

884
00:52:37,130 --> 00:52:40,120
I think that memory access is mostly a perceptual process.

885
00:52:41,310 --> 00:52:42,729
It has anytime characteristics.

886
00:52:42,729 --> 00:52:45,810
So it works with partial solutions and is useful already.

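A toy sketch of such an interleaved, anytime perception loop: top-down hypotheses propose which features to check, bottom-up evidence confirms them, and a usable best guess exists at every step. All concepts, features, and scores here are invented.

```python
# Toy anytime perception loop: top-down hypotheses are checked against
# bottom-up evidence; a best guess is available at every step. Illustrative.

scene = {"fur", "whiskers", "meow"}          # stimuli actually present

# Top-down expectations: concept -> features we would actively check for.
hypotheses = {"cat": {"fur", "whiskers", "meow"},
              "dog": {"fur", "bark"},
              "car": {"wheels", "engine"}}

scores = {concept: 0.0 for concept in hypotheses}

def best_guess():
    return max(scores, key=scores.get)

for step, concept in enumerate(hypotheses, start=1):
    # Top-down: actively check for the features this hypothesis expects.
    expected = hypotheses[concept]
    confirmed = expected & scene             # bottom-up confirmation
    scores[concept] = len(confirmed) / len(expected)
    # Anytime property: a partial solution is already usable here.
    print(f"after step {step}: best guess = {best_guess()}")
```
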
887
00:52:48,860 --> 00:52:49,658
Categorization.

888
00:52:51,134 --> 00:52:52,135
We want to have categories based on saliency,

889
00:52:52,135 --> 00:52:55,520
that is, on similarity and dissimilarity and so on that we can perceive.

890
00:52:56,440 --> 00:52:58,851
We also want them based on goals, on motivational relevance.

891
00:52:58,851 --> 00:52:59,908
And on social criteria.

892
00:52:59,908 --> 00:53:01,490
Somebody suggests categories to me,

893
00:53:01,490 --> 00:53:03,940
and I find out what they mean by those categories.

894
00:53:05,299 --> 00:53:06,070
What’s the difference between cats and dogs?

895
00:53:06,070 --> 00:53:09,100
I would never have come up on my own with the idea of making two baskets:

896
00:53:09,100 --> 00:53:12,780
putting the Pekinese and the shepherds in one, and all the cats in the other.

897
00:53:12,890 --> 00:53:17,090
But if you suggest it to me, I come up with a classifier.

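A toy version of such socially suggested categories: somebody else supplies the labels (cats in one basket, Pekinese and shepherds in the other), and a simple nearest-centroid classifier is fit to them. The two features are made up for illustration.

```python
# Toy "socially suggested" categorization: the labels come from someone
# else, and a nearest-centroid classifier is fit to them. Illustrative.

def centroid(points):
    n = len(points)
    return tuple(sum(coords) / n for coords in zip(*points))

# Made-up features: (size, ear pointiness). Someone else supplies the labels.
examples = {"cat": [(0.3, 0.9), (0.25, 0.8)],
            "dog": [(0.6, 0.4), (0.9, 0.3)]}   # pekinese and shepherd in one basket

centroids = {label: centroid(pts) for label, pts in examples.items()}

def classify(point):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(point, centroids[label]))

print(classify((0.28, 0.85)))   # -> cat
print(classify((0.8, 0.35)))    # -> dog
```
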
898
00:53:17,090 --> 00:53:19,574
Then… next thing: universal problem solving and taskability.

899
00:53:19,574 --> 00:53:21,502
We don’t want to have specific solutions;

900
00:53:21,502 --> 00:53:23,320
we want to have general solutions.

901
00:53:24,390 --> 00:53:26,000
We want it to be able to play every game,

902
00:53:26,000 --> 00:53:28,437
to find out how to play every game for instance.

903
00:53:28,437 --> 00:53:32,542
Language: the big domain of organizing mental representations,

904
00:53:32,542 --> 00:53:35,454
which are probably fuzzy, distributed hypergraphs,

905
00:53:35,454 --> 00:53:37,707
into discrete strings of symbols.

906
00:53:40,000 --> 00:53:40,780
Sociality:

907
00:53:41,740 --> 00:53:43,100
interpreting others.

908
00:53:43,110 --> 00:53:44,770
It’s what we call theory of mind.

909
00:53:44,770 --> 00:53:48,630
Social drives, which make us conform to social situations and engage in them.

910
00:53:49,160 --> 00:53:50,740
Personhood and self-concept.

911
00:53:50,740 --> 00:53:52,200
How does that work?

912
00:53:52,540 --> 00:53:53,886
Personality properties.

913
00:53:53,886 --> 00:53:56,460
How can we understand, and implement, and test for them?

914
00:53:57,890 --> 00:53:59,620
Then the big issue of integration.

915
00:54:00,320 --> 00:54:04,310
How can we get analytical and associative operations to work together?

916
00:54:04,610 --> 00:54:05,218
Attention.

917
00:54:05,218 --> 00:54:09,018
How can we direct attention and mental resources between different problems?

918
00:54:09,890 --> 00:54:11,273
Developmental trajectory.

919
00:54:11,273 --> 00:54:17,051
How can we start as kids and grow our system to become more and more adult-like, and maybe even surpass that?

920
00:54:17,051 --> 00:54:17,865
Persistence.

921
00:54:17,865 --> 00:54:23,470
How can we make the system stay active, instead of rebooting it every other day because it becomes unstable?

922
00:54:25,930 --> 00:54:27,070
And then benchmark problems.

923
00:54:27,754 --> 00:54:30,406
We know most AI has benchmarks like

924
00:54:30,406 --> 00:54:31,479
how to drive a car,

925
00:54:31,479 --> 00:54:33,056
or how to control a robot,

926
00:54:33,056 --> 00:54:34,408
or how to play soccer.

927
00:54:34,408 --> 00:54:36,370
And you end up with car-driving toasters, and

928
00:54:36,910 --> 00:54:37,535
soccer-playing toasters,

929
00:54:37,535 --> 00:54:39,270
and chess-playing toasters.

930
00:54:39,490 --> 00:54:41,217
But actually, we want to have a system

931
00:54:41,217 --> 00:54:43,317
that is forced to have a mind.

932
00:54:43,317 --> 00:54:44,655
That needs to be our benchmarks.

933
00:54:44,655 --> 00:54:48,708
So we need to find tasks that enforce all this universal problem solving,

934
00:54:48,708 --> 00:54:50,260
and representation, and perception,

935
00:54:50,860 --> 00:54:52,900
and that support incremental development.

936
00:54:53,530 --> 00:54:56,050
And that inspire a research community.

937
00:54:56,050 --> 00:54:58,660
And, last but not least, it needs to attract funding.

938
00:54:59,560 --> 00:55:00,220
So.

939
00:55:00,530 --> 00:55:04,760
It needs to be something that people can understand and engage in.

940
00:55:04,800 --> 00:55:06,990
And that seems to be meaningful to people.

941
00:55:08,300 --> 00:55:12,533
So this is a bunch of the issues that need to be urgently addressed…

942
00:55:12,610 --> 00:55:13,210
… in the next…

943
00:55:13,960 --> 00:55:15,450
15 years or so.

944
00:55:15,850 --> 00:55:17,440
And this means, for …

945
00:55:18,070 --> 00:55:21,540
… my immediate scientific career, and for yours.

946
00:55:23,600 --> 00:55:28,210
You can get a little bit more information at the home of the project, which is micropsi.com.

947
00:55:28,220 --> 00:55:30,250
You can also send me emails if you’re interested.

948
00:55:31,000 --> 00:55:34,720
And I want to thank a lot of people who have supported me. And …

949
00:55:35,790 --> 00:55:37,210
you for your attention.

950
00:55:37,210 --> 00:55:39,874
And giving me the chance to talk about AI.

951
00:55:39,874 --> 00:55:56,342
[applause]
