On Thursdays at noon YHouse holds a lunch meeting at the Institute for Advanced Study in Princeton. The format is a 15-minute informal talk by a speaker, followed by a longer open-ended discussion among the participants, triggered by, but not necessarily confined to, the topic of the talk. To share these conversations, I am posting a synopsis of the weekly meetings.
Synopsis of 9/28/17 YHouse Luncheon Talk at IAS by Liat Lavi
Expectationalism and Artificial Intelligence
Liat Lavi, Bar-Ilan University
Present: Piet Hut, Ed Turner, Michael Solomon, Yuko Ishihara, Olaf Witkowski, Ohad Nachtomy, Liat Lavi
ABSTRACT:
In my talk I will present the account of understanding I am developing under the title of 'expectationalism'. The account draws heavily on Jamesian Pragmatism and the thought of Heidegger and Merleau-Ponty. Its central premises are: 1. That the meaning of something is its consequences, and to understand something is to grasp its consequences; and 2. That expectations are not some internal content, but are rather actualized by our bodies. I will link this account with contemporary approaches in cognitive science and philosophy of mind, and suggest that if the account is correct, this implies that strong AI is possible and that limited instances of it already exist.
Talk: Liat spoke about a theory of knowledge she is proposing that she calls Expectationalism. Her theory has two fundamental parts. 1) To know something is to hold certain expectations (for the future). 2) Expectations are expressed by the body.
The topic of how we know (epistemology) is an age-old one in philosophy and has special significance for cognitive science and artificial intelligence. Her thesis does not resolve such a complex problem, but it may revise some presuppositions about consciousness.
What is Knowledge? Knowledge comes in two forms. The first is Theoretical Knowledge in the form of words and abstract concepts. The second is Practical Knowledge, the sort that lets us hold a glass of water or handle objects in the environment. Heidegger calls the latter Primordial and the former Derivative.
Most theories of knowledge focus on the past as giving us a representation, and on the present as comparing our sensory input with past representations. Her theory focuses on the future. To really know something is to know what to expect of it. Knowing is taking a stance about the future. These ideas are consistent with William James (the subject of Liat's PhD thesis), and with Heidegger and Merleau-Ponty. Consider the table we are sitting at: if she moves one end, the other end moves, so it is one object. If the table splits, then it was never one object but was two. What could happen to something that would make us say it was never what we thought it to be? If water in a glass tastes like vodka, then it was never water.
In 2013 there appeared a new theory of brain and mind called Predictive Processing Theory. Andy Clark and Jakob Hohwy are philosophers who drew from Karl Friston’s approach to neuroscience in advocating this theory. They argue that the sole purpose of the Mind is to make predictions of what will happen. Quoting notes kindly provided by Liat, “So, we don’t really experience the world, but only our predictions. This is consistent with Pragmatism in which ‘the meaning of something is its consequences’. In Phenomenology, Husserl describes the constitutive role of the horizon in experience, which he describes as the ‘totality of organized serial potentialities involved in the object as noema’. It is also consistent with Heidegger’s account of understanding in terms of potentiality and possibility. ‘Understanding’, writes Heidegger, ‘press[es] forward into possibilities’… [B&T, 145], and ‘Interpretation... is… the working out of possibilities projected in understanding’ [B&T, 148].” In Predictive Processing Theory (PPT), the account of knowledge remains representational. Hubert Dreyfus (a Heidegger interpreter at MIT) was a critic of Representationalism and of the computational approach to the mind. Liat’s theory is non-representational.
Again quoting Liat’s notes, “The second thesis of Expectationalism is that our expectations are not mental models, they are rather actualized by our bodies. This thesis is even harder to stomach than the first. It eliminates the mind, and conceives of the body (and not just the brain!) as the cognitive agent.” Every movement of my body depends on my expectations of the world. “This view draws from another development in cognitive science and philosophy of mind, namely the rise of Embodied theories of mind. This development is not that new, it has been with us at least since the early 90’s. Put generally, theories of embodiment stress the role the body plays in cognition. But within the embodiment camp we find views that share very little in common. Some views remain representational but stress the role of the body in shaping the mental representations. PPT is an embodied theory in this sense. Other approaches sometimes termed ‘strong embodiment’ (e.g. Dempsey and Shani), argue that the body’s contribution to cognition cannot be reduced to representations. Representations on this model are not rejected, but, it is argued that they cannot tell the whole story of cognition. The body on this view ‘complements’ the mind. Finally there is the ‘Enactivist’ branch of embodiment (Thompson, Rosch and Varela, Chemero, Noe). The enactivists’ view denies representations, and takes experience and cognition to be the product of the dynamic mind-body-world interaction. Experience on this view is not something that occurs within us, it is externalized.”
Liat takes these approaches to be highly instructive, but none of them, to the best of her knowledge, argues, as she will, that the body is the cognitive agent. Lastly, relating the theory to AI: if knowledge is to hold expectations and to be able to express these expectations, then some existing systems may be intelligent, e.g. weather-forecasting algorithms that can predict the future. But the difference in this form of intelligence is quantitative, a matter of complexity and range, not qualitative. She asks: are machine learning models representational or not? Are neurons expressing or representing?
Liat ended her presentation here and opened the discussion.
Q: Olaf: What do you mean by body?
A: One reason the field is not unified is because we have no uniform definition of a Body. A body is not based on DNA or a unity. Her approach is “the body bag”.
Q: Piet: What do you mean by Mind?
A: There is no Mind. There is a Mindful Body – a body that can recognize its environment.
Q: Olaf: What about your body as an ecological system, including you and all your gut bacteria?
A: Body is a metaphor for an organized system. The way you describe things is very important. In Enactivism there is almost no knowing Agent. Only the Interaction.
Q: Olaf: People mean very different things by “representation”. Neural networks represent intentions.
Q: Piet: For every new idea there is a delay in understanding. There is a distinction between labels and real new ideas.
Q: Olaf: The concept of Red is not in one place in a neural network, but is distributed within the system. Still, “red” can be retrieved by the network.
A: Liat: Before Representation became a question in computer science it was a question in philosophy. Do we need representation?
Q: Ed: Do you distinguish Information from Knowledge?
A: Whether information exists is a metaphysical, ontological question. The present discussion is limited to Epistemology: How do we know? Information may or may not exist in our world.
Q: Ed: You mentioned neurons and brains a lot as body, but did not privilege the brain. Stephen Hawking is very like a brain in a vat, yet he has a lot of knowledge. Others may have damaged brains with healthy bodies and not much knowledge.
A: The consensus approach is that cognition occurs in the brain. But she argues it is not enough to look at the brain and neurons to explain knowledge. She has knowledge of China although she has never been there. Does Hawking have knowledge of holding a glass? If he cannot lift and hold the glass now, then his understanding of the glass is limited. For Heidegger, understanding is a notion of disclosure: you can uncover aspects of the world. The view of the brain as the cognitive agent pictures the brain as a puppeteer, but her aim is to challenge this approach and see the brain as an organ of the body.
Q: Piet I like the idea of adding Expectation. Instead of dividing the Actual and the Potential, you say the Actual Expected, and the Potential Expected. That changes the philosophical discussion. (This is a reference to Husserl’s discussion of the Actual and Possible. Quoting the Stanford Encyclopedia of Philosophy, “What binds together the intentional horizon of a given indexical experience? According to Husserl, all of the (actual or potential) experiences constituting that horizon share a sense of identity through time, which sense he labels as the determinable X they belong to.”)
Q: Ohad: Your account is all future-based. But why privilege the future and not consider past experiences?
A: That’s absolutely correct. You must take the past into consideration.
Q: Ohad: What is it to be wrong about your expectations?
A: Knowledge is not binary – true or false. Knowledge is what is useful – predictions that work.
Q: Yuko: I’m on board with Representationalism. The rest is the Mind/Body Dualism. But you’ve eliminated the Mind completely.
A: Consciousness does not exist as an entity, but as a performance. So, there is no Mind/Body Dualism.
Q: Yuko: If you call mind a “Bodily Function” there still must be a dualistic approach.
A: I’m taking the Body Bag approach to body. Embodiment now (Pierre Jacob) makes it hard to draw lines between the body and the world. Liat’s analysis is to define a cognizing entity that is separate from the world. The body is the knower, the object is the known.
Q: Piet: Where is the wetness in H2O molecules? You are not eliminating wetness. You are saying the mind is gone and it is all body.
A: What is the Mind? Mind is not an entity. So what is left of “Mind” is the system, or representation.
Olaf: That has been rejected by cognitive scientists.
Liat: That is not so.
Q: Piet: Who has consciousness?
A: PPT considers itself a representational approach. To date Enactivism has not eliminated Representationalism.
Q: Piet: There are clearly areas in the brain that represent areas in the world. But memory is not in one place. The truth is more interesting. The early Enactivists brought extra tools to the tool box. That was “useful”.
Q: Olaf: In neural networks and in brains we find “grandmother” neurons, those that activate on the idea of your grandmother, but we also find distributed information, not localized to a single area in the brain.
A: Clearly the anti-representational approach contributed directly to computer science; consider e.g. Rodney Brooks’s work on intelligence without representation. The philosophical debate can instigate new approaches in computer science and lead to the development of new tools.
Q: Piet: Why must you eliminate some tools when you find new ones?
Ohad: Maybe the goal is not to add tools but to add clarity. So, you remove cloudiness.
Q: Piet: Our shift from Actual/Potential to Expected Actual/Expected Potential did not eliminate Potential. We still have the dichotomy.
Q: Yuko: We are not eliminating the Mind, but adding Mind Function to the body.
A: This does preserve knowing but removes representation.
Q: Ed: Eliminating the mind seems to violate the Lumberjack principle – “Don’t cut off the limb you are sitting on”. In physics there is no present. Only our minds perceive the distinction between past and future.
Q: Michael: The focus on predicting seems very biological. The usefulness of consciousness is determined by imagining and avoiding possible threats to survival. Self-preservation is the goal.
A: Ohad: The pragmatist philosophers were very much influenced by Darwin and his introduction of evolution as a concept.
A: Liat: One needs to be cautious with evolutionary explanations. For example, in PPT, predicting serves not only self-preservation but also to minimize or eliminate surprise. This would seem to imply that the agent should not act, because acting is already risky. Something is wrong with this account. Evolution does not stand for necessity; it is a process that involves contingency. Moreover, too much stress is placed on the survival of individuals, when in fact evolution is also very much about procreation.
Q: Ed: There is knowledge that is not “useful” in predicting, e.g. that his former advisor liked Scotch whisky. This knowledge is of no practical use, but it still exists.
A: But your memories may be useful in a fictitious future.
Q: Piet: If I were to describe your thinking using quantum physics as a metaphor, I would say that just as a quantum may behave as both a particle and as a wave, you are a particularly wavy philosopher.
Respectfully,
Michael Solomon, MD