Re: progress?
I would think of it more as space / field effects, not recursive algorithms.
Last week I got to know Steve Hyman, Daniel Kahneman and Bob Horvitz.
Telefonica invited all of us to a two-day workshop with Pablo Rodriguez, Ken Morse and a few others, where we were meant to advise them on how to use AI for health applications. I told them that I think the goal of therapeutic intervention is not to increase happiness, but integrity. Happiness is merely an indicator, not the benchmark. Current apps tend to subvert the motivation of people, but I don't think that this is necessary or the best strategy. Humans are meant to be programmable, not subverted. They perceive their programming as "higher purpose". If we can come from the top, supporting purpose, instead of from the bottom, subverting attention, we might be more successful. (Downside might be that we create cults.)

Of the bunch, Hyman managed to be the most interesting (Kahneman was very charismatic but mostly tried to see if he could identify an application for his System 1/System 2 theory). Gary Marcus was there, too, but annoyed everyone by being too insecure to deal with his incompetence.
Did I tell you that I discovered that Deep Learning might be best understood as Second order AI?
First order AI was the classical AI that was started by Marvin Minsky in the 1950s, and it worked by figuring out how we (or an abstract system) can perform a task that requires intelligence, and then implementing that algorithm directly. It yielded most of the progress we saw until recently: chess programs, databases, language parsers etc.
Second order AI does not implement the functionality directly; instead, we write the algorithms that figure out the functionality by themselves. Second order AI is automated function approximation. Learning has existed for a long time in AI, of course, but Deep Learning means compositional function approximation.
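To make the distinction concrete, here is a toy contrast (my own illustration, in Python; the temperature example is arbitrary):

    # First order: we know the rule and implement it directly.
    def fahrenheit_first_order(celsius):
        return celsius * 9.0 / 5.0 + 32.0

    # Second order: we only supply examples, and a generic fitter
    # figures the rule out by itself (here: least squares for a line).
    examples = [(0.0, 32.0), (100.0, 212.0), (37.0, 98.6)]
    n = len(examples)
    mean_x = sum(x for x, _ in examples) / n
    mean_y = sum(y for _, y in examples) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in examples)
             / sum((x - mean_x) ** 2 for x, _ in examples))
    intercept = mean_y - slope * mean_x
    print(slope, intercept)  # recovers 1.8 and 32 from the data alone

The second version never states the rule; the fitter extracts it from the examples.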
Our current approximator paradigm is mostly the neural network, i.e. chained normalized weighted sums of real values that we adapt by changing the weights with stochastic gradient descent, using the chain rule. This works well for linear algebra and the fat end of compact polynomials, but it does not work well for conditional loops, recursion and many other constructs that we might want to learn. Ultimately, we want to learn any kind of algorithm that runs efficiently on the available hardware.
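Here is a minimal sketch of that paradigm, nothing more than an illustration: chained, squashed weighted sums learning XOR by stochastic gradient descent, with the chain rule applied by hand. The network size, seed, step count and learning rate are arbitrary choices, and I use the cross-entropy error signal at the output to keep the gradient simple.

    import math, random

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    random.seed(0)
    H = 4  # hidden units; arbitrary, XOR needs at least 2
    w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
    b1 = [0.0] * H
    w2 = [random.uniform(-1, 1) for _ in range(H)]
    b2 = 0.0

    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR
    lr = 0.5

    for step in range(20000):
        x, t = random.choice(data)   # "stochastic": one example at a time
        # forward pass: weighted sums, squashed ("normalized") by the sigmoid
        h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(H)]
        y = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
        # backward pass: the chain rule, layer by layer; with cross-entropy
        # loss the error signal at the output is simply (y - t)
        dy = y - t
        for j in range(H):
            dh = dy * w2[j] * h[j] * (1 - h[j])  # chain rule through hidden unit j
            w2[j] -= lr * dy * h[j]
            b1[j] -= lr * dh
            w1[j][0] -= lr * dh * x[0]
            w1[j][1] -= lr * dh * x[1]
        b2 -= lr * dy

    for x, t in data:
        h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(H)]
        y = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
        print(x, "->", round(y, 2), "target", t)

Nothing task-specific is written down anywhere; the same loop would approximate any small boolean function from examples, which is the whole point.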
Neural network learning is very slow. The different learning algorithms are quite similar in the amount of structure they can squeeze out of the same training data, but they need far more passes over the data than our nervous system.
The solution might be meta learning: we write algorithms that learn how to create learning algorithms. Evolution is meta learning. Meta learning is going to be third order AI and may trigger a wave similar to the one deep learning set off.
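A toy version of what I mean (again just my own illustration): an outer evolutionary loop that mutates and selects inner learning rules, here reduced to a learning rate and a momentum coefficient, scored by how well the inner learner fits a trivial quadratic after a few steps.

    import random

    random.seed(1)

    def inner_learning_run(lr, momentum, steps=20):
        """Inner loop: gradient descent on f(w) = (w - 3)^2 with the
        candidate learning rule; returns final loss (lower is fitter)."""
        w, v = 0.0, 0.0
        for _ in range(steps):
            grad = 2.0 * (w - 3.0)
            v = momentum * v - lr * grad
            w += v
        return (w - 3.0) ** 2

    # Outer loop: evolution over learners rather than over solutions.
    population = [(random.uniform(0.0, 0.5), random.uniform(0.0, 0.9))
                  for _ in range(20)]
    for generation in range(30):
        scored = sorted(population, key=lambda p: inner_learning_run(*p))
        survivors = scored[:5]
        population = [(max(0.0, lr + random.gauss(0, 0.02)),
                       min(0.99, max(0.0, m + random.gauss(0, 0.05))))
                      for lr, m in survivors for _ in range(4)]

    best = min(population, key=lambda p: inner_learning_run(*p))
    print(f"evolved rule: lr={best[0]:.3f} momentum={best[1]:.3f} "
          f"final loss={inner_learning_run(*best):.2e}")

The outer loop never touches the task directly; it only breeds better learners.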
I intend to visit NYC for a workshop at NYU on the weekend of the 16th.
We just moved into a new apartment; the previous one had only two bedrooms and this one has three, so I can have a study. It seems that we are as lucky with the new landlords as with the previous ones.
Bests, and thank you for everything!
Joscha
Understanding is a multi-dimensional space; language is a projection in that space, or an arrow in category theory. The local point has history. So, like a play appears different from every seat in the theatre, the integration over each point projects his understanding onto the language.
What do you think of as space/field effects? The universe or learning?
Btw., did you ever come across Schmidhuber's idea of a Goedel Machine?
