A recent talk at Nottingham Café Scientifique, entitled “Alan Turing - The Building of a Brain”, was presented by Prof Barry Cooper of Leeds University.
About half of the talk consisted of a biography of Turing. Given that Turing has a surprisingly detailed Interweb presence, this part of the talk is perhaps best covered by reference to some of the following resources for information on Turing's life:
* Alan Turing's Wikipedia Page (like, duh!)
* The website devoted to Turing by Andrew Hodges (author of "Alan Turing: The Enigma")
* The Turing Digital Archive
The other half discussed the nature of human and artificial intelligence, with mentions of some experts in the field, and is covered below.
One of the many experts referenced was Nassim Taleb, a Lebanese-American essayist and scholar whose work focuses on problems of randomness, probability and uncertainty. Taleb is the originator of the "Black Swan" theory (and author of the book of the same name). This theory describes the extreme impact of certain kinds of rare and unpredictable events and humans' tendency to find simplistic explanations for these events retrospectively. Taleb correctly predicted (and made a lot of money out of) the 2008 financial crash, so he is perhaps someone worth listening to!
Prof Cooper posed the question of how nature computes, pointing out that the universe around us is arranged in a complicated way. A relevant expert here is theoretical physicist Dr Peter Woit, who has highlighted that "The Standard Model" of Physics only works because 17 key parameters have been given arbitrary values, suggesting that we do not have a good understanding of the forces and nature of the universe.
Fundamentally, as Prof Cooper said, "The trouble is, we don't really know what reality is, do we?" Instead, we try to fit reality into the straitjacket of a mathematical model.
Related to this is the phenomenon of "morphogenesis" (how lifeforms take their shape). This is an area Turing looked at in an important 1952 paper, which you can read here and read about, in layman's terms, here.
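To give a flavour of what that paper is about, here is a minimal sketch (my own illustration in Python, not something from the talk, and using the well-known Gray-Scott variant rather than Turing's original equations) of a reaction-diffusion system: two chemicals diffuse and react on a grid, and spots emerge from an almost uniform starting state. The grid size and parameter values are arbitrary illustrative choices.

```python
import numpy as np

# Gray-Scott reaction-diffusion sketch: chemicals U and V diffuse and react;
# a spotted "Turing pattern" emerges from a near-uniform starting state.
# Parameter values are common illustrative choices, not from the talk.
N = 100
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.065

U = np.ones((N, N))
V = np.zeros((N, N))
# Seed a small square of V in the middle to break the symmetry.
U[45:55, 45:55] = 0.50
V[45:55, 45:55] = 0.25

def laplacian(Z):
    # Five-point stencil with wrap-around (periodic) boundaries.
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

for step in range(10000):
    uvv = U * V * V
    U += Du * laplacian(U) - uvv + F * (1 - U)
    V += Dv * laplacian(V) + uvv - (F + k) * V

# After enough steps V holds the pattern; visualise it with, e.g.,
# matplotlib: plt.imshow(V); plt.show()
```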
Also related is the idea of "Emergence" (how complex systems and patterns arise out of a multiplicity of relatively simple interactions). A good example is the way in which complex termite mounds are built from the very simple actions of many termites.
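As a toy illustration of emergence (again mine, not the speaker's), the classic "termites and wood chips" simulation shows piles forming from two local rules - pick up a chip you walk onto, drop it next to another chip - with no termite having any notion of a "pile". All the numbers below are arbitrary.

```python
import random

# Toy termite model: chips scattered on a grid end up gathered into a few
# clusters purely through simple local rules followed by each termite.
SIZE, CHIPS, TERMITES, STEPS = 40, 300, 30, 200000

grid = [[False] * SIZE for _ in range(SIZE)]
for _ in range(CHIPS):                       # scatter roughly CHIPS chips
    grid[random.randrange(SIZE)][random.randrange(SIZE)] = True

termites = [{"x": random.randrange(SIZE), "y": random.randrange(SIZE),
             "carrying": False, "bumped": False} for _ in range(TERMITES)]

for _ in range(STEPS):
    t = random.choice(termites)
    dx, dy = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
    t["x"], t["y"] = (t["x"] + dx) % SIZE, (t["y"] + dy) % SIZE
    here = grid[t["x"]][t["y"]]
    if not t["carrying"] and here:
        grid[t["x"]][t["y"]] = False         # pick up a chip
        t["carrying"] = True
    elif t["carrying"] and here:
        t["bumped"] = True                   # remember we hit another chip
    elif t["carrying"] and t["bumped"] and not here:
        grid[t["x"]][t["y"]] = True          # drop the chip next to the pile
        t["carrying"], t["bumped"] = False, False

# Printing the grid before and after shows scattered chips collecting into
# large clusters - order emerging from simple local rules.
```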
Prof Cooper then went on to discuss the famous "Turing Test" as a way of determining whether a computer program genuinely had artificial intelligence (see also here). He pointed out that there was something of an "AI War" underway between those (such as Marvin Minsky) who have taken a rather analytical approach and those (such as Rodney Brooks) who take a more experimental path to developing AI technologies.
The consensus seems to be that AI may work well in specific, narrow applications (such as chess computers) but will be much harder to implement in the wide-ranging way that humans, for example, exhibit intelligence.
Things got pretty heavy and philosophical at this point, with the talk looking at the relationship between mind and body. One person to note here is the philosopher Jaegwon Kim.
Prof Cooper made quite a few mentions of "data types". For example, in a typical living room there is a lot of high-level data: all the items have a temperature, texture, size, shape, smell, sound, hardness, porosity etc. But when a human looks at that room, they sample only a very small part of the available information (largely visually) and construct a mental model of the room from that.
In addition, there is something special about the human brain that allows us to appreciate the "higher level" nature of complicated structures such as Mandelbrot sets or termite mounds - something that computers find difficult to do.
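As an aside (not from the talk), the Mandelbrot set itself makes the point nicely: an endlessly intricate structure falls out of repeatedly applying one very simple rule, z → z² + c. A minimal ASCII rendering in Python (grid size and iteration limit are just illustrative choices):

```python
# The whole intricate boundary comes from iterating z -> z*z + c
# and asking whether |z| stays small.
WIDTH, HEIGHT, MAX_ITER = 80, 30, 50

for row in range(HEIGHT):
    line = ""
    for col in range(WIDTH):
        # Map this character cell onto a point c in the complex plane.
        c = complex(-2.0 + 3.0 * col / WIDTH, -1.2 + 2.4 * row / HEIGHT)
        z = 0j
        for _ in range(MAX_ITER):
            z = z * z + c
            if abs(z) > 2:       # escaped: c is outside the set
                line += " "
                break
        else:
            line += "#"          # stayed bounded: inside (or near) the set
    print(line)
```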
In the (always interesting) question and answer session, Prof Cooper commented that people were starting to realise that context is very important to data. For example, a person might give very different answers to a question depending on his or her perception of the environment (is it threatening? do they feel safe? is it warm or cold? who is asking the question, and why?).
Prof Cooper felt that this developing understanding of the complexity of intelligence is likely to result in a lot of algorithmic code being junked over the coming years, perhaps being replaced by "Evolutionary Algorithms".
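For anyone unfamiliar with the term, an evolutionary algorithm improves a population of candidate solutions by mutation and selection, rather than by following an explicitly hand-coded procedure. A toy sketch (my own illustration; the target string and parameters are arbitrary, and real evolutionary algorithms tackle far harder problems than this):

```python
import random
import string

# Toy evolutionary algorithm: keep a population of guesses, score them,
# and let mutation plus selection do the work of finding the target.
TARGET = "BUILDING OF A BRAIN"
ALPHABET = string.ascii_uppercase + " "
POP_SIZE, MUTATION_RATE = 200, 0.05

def fitness(candidate):
    # Count positions that already match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    # Each character has a small chance of being replaced at random.
    return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE
                   else ch for ch in candidate)

population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(POP_SIZE)]

generation = 0
while max(fitness(p) for p in population) < len(TARGET):
    # Keep the fittest half and refill the population with mutated copies.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    population = parents + [mutate(random.choice(parents)) for _ in parents]
    generation += 1

print(f"Reached the target in {generation} generations")
```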
The final word should perhaps be given to one of the last quotes of the evening, from American inventor, scientist, engineer, entrepreneur and author Danny Hillis, who said, "Maybe we'll evolve evolutionary machines before we understand them".
Image Sources: Turing, Bombe