Spring 2023
Date | Seminar | Location
---|---|---
Mar 7 | Cognitivism and Consciousness: A Degenerating Research Programme? Abstract: It is sometimes argued that Descartes invented the modern mind-body problem: by defining "mind" in direct opposition to "physical", he created a gap between the two that could never subsequently be closed. To avoid the modern mind-body problem, the story then goes, we have only to adopt a non-Cartesian understanding of these concepts. In this talk, I shall argue that cognitivist cognitive science commits to precisely such a conceptual Cartesianism, which has resulted in an (inevitably) degenerating research programme. Accordingly, I suggest that we are best off conducting a non-Cartesian cognitive science (for instance, of the kind found in ecological and enactive research paradigms). I begin my argument by providing a Lakatosian construal of the cognitivist paradigm. First, I explain that its 'hard core' theses (the unfalsifiable fundaments of the paradigm) are two-fold: 1) mind is constituted by the publicly accessible processing of unconscious brain-based representations; 2) consciousness is inherently subjective. Then, I highlight a representative range of the dispensable and revisable auxiliary hypotheses making up its 'protective belt' (representations are symbolic or sub-symbolic entities; perception is a bottom-up or top-down process; and so on). I next contend that cognitivism's 'hard core' tenets are, in all relevant respects, indistinguishable from Descartes' concepts of "physical" (tenet one) and "mental" (tenet two). Accordingly, it is unsurprising that research within cognitivist cognitive science increasingly falls into the binary categories of "illusionism" or "qualia realism": since Descartes' conceptual distinctions do not allow for a physical understanding of consciousness, they require realism about either physicalism (illusionism) or consciousness (qualia realism), but never both. A good theory of consciousness should, however, allow for realism about both physicalism and consciousness. Since extant cognitivist theorising shows no sign of allowing for this, I label it a degenerating research programme. In response to such arguments, cognitivists often contend that future theoretical and empirical advances will obviate them. Whilst granting this may be the case writ large, I retort that cognitivism cannot avail of such a response. This is because, by the strictures of Lakatos, changes within paradigms can only be made to their 'protective belts'. But the problem with cognitivism lies in its 'hard core', and to revise these tenets would require giving up on cognitivism altogether. I recommend precisely this course of action, suggesting that the explicitly non-Cartesian ecological and enactive research paradigms constitute better bets for successfully pursuing a science of consciousness. | Arts A 108, Passcode: 8264
Mar 14 | The entangled brain: the integration of emotion, motivation, and cognition Abstract: Research on the "emotional brain" often focuses on particular structures, such as the amygdala and the ventral striatum. In this presentation, I will discuss research that embraces a distributed view of both emotion- and motivation-related processing, as well as efforts to unravel the impact of emotion and motivation across large-scale cortical-subcortical brain networks. In the framework presented, emotion and motivation have broad effects on brain and behavior, leading to the idea of the "entangled brain", in which brain parts dynamically assemble into coalitions that support complex cognitive-emotional behaviors. On this view, it is argued, decomposing brain and behavior in terms of standard mental categories (perception, cognition, emotion, etc.) is counterproductive. | Online, Passcode: 1563
Mar 23 | Generative AI: What it might reveal about 4E cognitive science and the shape of the human mind Abstract: Until recently, philosophers and cognitive scientists have been surprisingly quiet about the new AI. The new transformer / deep learning / AI systems will likely change not only how we think about AI, but also how we think about intelligence and cognition more generally. One vantage-point on the new AI is the 1990s discussion over 4E cognitive science. The (programmatic) work of Rodney Brooks with creatures, mobots and subsumption architectures seems to suggest notions about intelligence, and about how to build (many) AI systems, which are, on the face of it, deeply at odds with today's generative AI. ChatGPT, DALL-E and LaMDA seem to be more direct inheritors of connectionism than of 4E cog sci, and if anything examples of 0E cognition. Moreover, the analysis of such systems by their creators seems to be shot through with (versions of) representationalist assumptions many believed needed to be transcended. One might think that, with the advent of deep learning and generative AI, at least the radical anti-representationalist flavours of 4E cognitive science now face a severe challenge. So, does the new AI really clash with 4E cog sci, behaviour-based robotics and its inheritors, and if so, what does that say about the broader 4E programme? I will offer three possible paths forward for the 4E programme in the wake of generative AI and predictive processing, and ask what they might tell us about the structure of the human mind and some possible shapes for our cognitive futures. Robert W Clowes is the coordinator of the Lisbon Mind, Cognition and Knowledge Group and a senior researcher at IFILNOVA, Universidade Nova de Lisboa. | Fulton 205, Passcode: 4375
Mar 28 | Assembled bias: Biased judgments and exaggerated images in machine learning Abstract: Machine learning (ML) models already drive much of contemporary society, and newer ML models, such as ChatGPT and DALL-E, demonstrate impressive competence in tasks such as text and image generation once outside the bounds of artificial intelligence (AI). However, when algorithmic systems are applied to social data, flags have been raised about the occurrence of algorithmic bias against historically marginalized groups. Further, some users of the popular portrait-creating app LENSA have reported misogynistic and distorted body images generated from head-only selfies. Those working in AI and the broader algorithmic fairness community point to human biases against marginalised groups, and to social stereotypes that algorithms inherit from the data sets on which they operate, as sources of such bias and distortion in AI output. Here we argue that such bias and exaggeration have a further, more epistemically pernicious source that has been overlooked: the distinctive generative process of feature creation in ML. Specifically, we make the case for the emergence of a novel kind of exaggeration and bias, which we term assembled bias, with the use of ML. To do so, we make the process of ML more transparent by combining visualisation from topological data analysis with sociopolitical concepts. We demonstrate that ML constructs a non-interpretable, exaggerated representation of its target population as a whole (e.g., human bodies, parole applicants) due to the kinds of features favored and massively reconfigured in ML feature space. We introduce the notions of assembled privilege and assembled marginalization to explain the effect this has on the representation of interpretable social groups. We contend, therefore, that assembled bias in part drives the biased and exaggerated outputs of operative ML systems. Such bias is epistemically opaque and distinct, in both source and content, from the kinds under discussion in the algorithmic fairness literature. | Chichester 3 - 3R143, Passcode: 2863
May 2 | Consciousness, Causal Powers, and Physical Qualities Abstract: (Joint work with Neal Anderson) I argue that consciousness consists at least in part of physical qualities. I begin by introducing two types of natural property: physical qualities and causal powers. I then introduce levels of composition and realization; mechanisms; and the notions of multiple realizability and medium independence. I argue that physical computation is a medium-independent notion. Finally, I argue that cognition is largely medium-independent and hence a matter of computation, but that phenomenal consciousness most likely involves physical qualities, which are aspects of physical reality that outstrip its causal powers. | Online, Passcode: 769538
June 20 | Tethered Rationality: A Model of Behavior for the Real World Abstract: In a December 2021 interview, Francis Collins, the departing director of the NIH, noted: "to have now 60 million people still holding off of taking advantage of lifesaving vaccines is pretty unexpected. It does make me, at least, realize, 'Boy, there are things about human behavior that I don't think we had invested enough into understanding.'" Decision-making models, intended to explain and predict volitional behavior, are extreme abstractions from the biology of Homo sapiens. They invariably pick out only cognitive/rational mechanisms. To the extent that an abstraction captures salient features, it is valuable; to the extent that it fails to do so, it can be misleading. It is proposed that, by picking out only cognitive/rational mechanisms, models of decision-making are far too abstract and removed from the biology to accurately capture behavior. A case is made for tethering cognitive/rational decision-making models to "lower level" noncognitive systems. Volitional behavior is then a blended response of these various systems. To make this case I appeal to (i) data from cooperative economic decision-making tasks to support the blended response hypothesis; (ii) evolutionary and anatomical evidence for the tethered brain; and (iii) the neuroscience literature on affect and arousal, to propose a lingua franca of communication and a control structure for the tethered mind. I conclude by explaining some real-world behaviors with tethered rationality. | Arundel 230, Passcode: 9823
June 27 | Irruption theory: making mind matter Abstract: All human affairs assume a priori that people are conscious agents, and that our perspective genuinely makes a difference to behavior. Yet natural science has been in apparent tension with this genuine subjectivity, because in principle nothing but material processes are measurable in our bodies. I therefore propose a novel framework centered on the prospect that the role of consciousness is related to an underdetermination of the material properties of our living embodiment. Specifically, I introduce the concept of irruption to operationalize the uncertainty in measurement resulting from a person's exercise of agency: increased subjective involvement in embodied action is indirectly observable as increased underdetermination of the body's physiological processes, akin to a decrease in their material constraints. Counterintuitively, irruptions do not entail random behavior; rather, as indicated by recent models of complex adaptive systems, they can facilitate flexible behavioral switching and the self-organization of a capacity for generalization. An approximate information-theoretic measure of irruptions is entropy, which therefore provides a novel perspective on why increased neural entropy has been found to be associated with consciousness, cognition, and agency. The upshot of irruption theory is that we are conscious agents who can rely on our first-person perspective to make an effective difference with our embodied actions, yet without being able (or needing) to directly control our body's material processes. | Arundel 230, Passcode: 4913
Contact COGS
For suggestions for speakers, contact Simon Bowes.
For publicity and questions regarding the website, contact Simon Bowes.
Please mention COGS and COGS seminars to all potentially interested newcomers to the university.
A good way to keep informed about COGS Seminars is to be a member of COGS. Any member of the university may join COGS and the COGS mailing list. Please contact Simon Bowes if you would like to be added.
Follow us on Twitter: @SussexCOGS