COGNITIVE SCIENCE COLLOQUIUM

* In-person seminars are cancelled effective March 13, 2020. Zoom links will be provided for the talks listed below and sent to those on the mailing list.

The Fall 2020/Spring 2021 COGNITIVE SCIENCE COLLOQUIUM SERIES schedule is shown below.  Details will be posted as soon as they are available.  As usual, the colloquium will be held on Fridays (unless otherwise noted), from 12:00 - 1:30 p.m., in the Speech, Language, and Hearing Sciences Building, Room 205, 1131 E Second Street. Recordings of these talks are available on Panopto, which can be accessed with a UA NetID.

Since 2012, an annual feature of the colloquium series has been a special talk given by the Roger N. Shepard Distinguished Visiting Speaker. Please follow the link for a list of past speakers.

If you would like to receive email announcements about these and other events, please contact Program Coordinator Kirsten Cloutier Grabo at kirstencg@arizona.edu to be added to the colloquium listserv.

Information about previous talks during this academic year can be found at the bottom of this list. Other past talks can be found at COGNITIVE SCIENCE COLLOQUIUM ARCHIVE.

2020/21 COGNITIVE SCIENCE COLLOQUIUM SERIES

SPRING 2021

 

______________________________

April 23, 2021

Shaowen Bao, Associate Professor, Physiology, University of Arizona

https://physiology.arizona.edu/person/shaowen-bao-phd

Zoom: https://arizona.zoom.us/j/84515579405

Hearing Loss and Cognitive and Emotional Disorders

Abstract: Compared to individuals with normal hearing, those with mild, moderate, and severe hearing impairment had, respectively, a 2-, 3-, and 5-fold increased risk of dementia. Hearing loss is also associated with anxiety and depressive disorders. I will discuss our ongoing animal research on how noise-induced hearing loss impacts hippocampal and amygdala function, and its potential connection to cognitive and emotional disorders.

 

______________________________

April 30, 2021

STUDENT SHOWCASE

Paulo Ricardo da Silva Soares
Ph.D. Candidate, Computer Science

University of Arizona

Designing an Intelligent Collaborative Agent for Urban Search and Rescue Missions

Abstract: Intelligent agents have been widely used to assist humans in many everyday situations. In most cases, however, these agents cannot reason about people's beliefs and desires and fail to bond effectively with their human teammates. Some work has tried to mitigate this issue by computationally encoding human mental states. However, the problems addressed are usually constrained to single-person domains, and inference is usually policy-driven. In this talk, I will describe the work I have been doing to design a collaborative agent that reasons about people's beliefs and interacts with them in a virtual urban search and rescue mission. First, I will present results from a single-player case showing that the agent can infer how much participants know about the mission design based on their behaviors. Next, I will describe our current efforts to extend the model to a multiplayer context in which the agent must keep track of individual and shared mental states. Finally, I will discuss our future direction, which takes affective states into account - grounded in neural and physiological data - as well as agent-to-human communication.

 

Li-Chuan (Matt) Ku
Ph.D. Candidate, Cognition and Neural Systems
University of Arizona

 

______________________________

 

COLLOQUIUM SPEAKERS who have already visited in 2020/21

____________________________

September 4, 2020

Adarsh Pyarelal, Research Scientist, School of Information, University of Arizona

http://adarsh.cc/

Building machines that understand humans

Abstract: As anyone who's been frustrated by Siri or Alexa can readily verify, computers just don't get us. While artificially intelligent (AI) agents are getting quite good at understanding explicit instructions, they still struggle to understand implicit information conveyed through prosody, facial expressions, informal/imprecise language, etc. This difficulty presents a major obstacle to the development of AI 'theory of mind', i.e., the ability to infer the beliefs, desires, and intentions of humans.  In this talk, I will give an overview of ToMCAT (Theory of Mind-based Cognitive Architecture for Teams) - a 4-year project aimed at developing AI agents with social intelligence and testing them in a Minecraft-based virtual testbed.

 

______________________________

September 11, 2020

Chris Baldassano, Assistant Professor of Psychology, Columbia University

https://psychology.columbia.edu/content/christopher-baldassano

Cognitive maps: events, spaces, semantics, and development

Abstract: Understanding and remembering realistic experiences in our everyday lives requires activating many kinds of structured knowledge about the world, including spatial maps, temporal event scripts, and semantic relationships. My recent projects have explored the ways in which we build up this schematic knowledge (during a single experiment and across developmental timescales) and can strategically deploy these cognitive maps to construct event representations that we can store in memory or use to make predictions. I will describe my lab's ongoing work developing new experimental and analysis techniques for conducting functional MRI experiments using narratives, movies, poetry, virtual reality, and "memory experts" to study complex naturalistic schemas.

 

______________________________

 

September 25, 2020

Samuel Gershman, Roger N. Shepard Visiting Scholar, Associate Professor, Department of Psychology, Harvard University

https://psychology.fas.harvard.edu/people/samuel-j-gershman

Predictive maps in the brain

Abstract: In this talk, I will present a theory of reinforcement learning based on a "predictive map" that can be used to efficiently evaluate different states of the environment. I show how such a map explains many aspects of hippocampal representation. The map can be decomposed to reveal latent structure resembling entorhinal grid cells. I will then present evidence that humans employ such a predictive map to solve reinforcement learning tasks. Finally, I will discuss the role of dopamine error signals in learning the predictive map.

 

______________________________

October 2, 2020

Dwight Kravitz, Associate Professor, Cognitive Neuroscience, Department of Psychological & Brain Sciences, George Washington University

https://cogneurolab.columbian.gwu.edu/dwight-j-kravitz

Predicting functional organization and its effects on behavior

Abstract: In many ways, cognitive neuroscience is the attempt to use physiological observation to clarify the mechanisms that shape behavior. Over the past 25 years, fMRI has provided a system-wide and yet somewhat spatially precise view of the response in human cortex evoked by a wide variety of stimuli and task contexts. The current talk focuses on the other direction of inference: the implications of this observed functional organization for behavior. To begin, we must interrogate the methodological and empirical frameworks underlying our derivation of this organization, partially by exploring its relationship to and predictability from gross neuroanatomy. Next, across a series of studies, the implications of two properties of functional organization for behavior will be explored: 1) the co-localization of visual working memory and perceptual processing and 2) implicit learning in the context of distributed responses. In sum, these results highlight the limitations of our current approach and hint at a new general mechanism for explaining observed behavior in context with the neural substrate.

 

______________________________

October 9, 2020

Gondy Leroy, Professor, Department of Management Information Systems, Eller Fellow, Eller College of Management, University of Arizona

https://eller.arizona.edu/people/gondy-leroy

Design and Development of Decision Support Tools for Surveillance of Autism Spectrum Disorders (ASD) using EHR

Abstract: In this presentation, I will provide a high-level overview of a natural language processing (NLP) information system that can help improve, speed up, and facilitate reporting of cases of ASD using free text in EHRs. I will show the approach and results of rule-based and machine learning algorithms that automatically recognize phenotype expressions of ASD in text as well as label a child as ASD or not through review of the EHR free text. I will discuss current results and problems encountered because this is a low-resource area. I will also show examples of new analyses made possible with this type of data creation and biases to be taken into account. I hope to conclude with a discussion with the audience of promising new areas and potential extensions, applications, and collaborations.

 

______________________________

October 16, 2020

Winslow Burleson, Professor, School of Information, University of Arizona

https://ischool.arizona.edu/people/winslow-burleson

Motivational Environments: Cyberlearning, Digital Health, and Society’s Grand Challenges

Abstract: Novel forms of human-computer interaction and the learning sciences, applied to health, technology, education, and innovation, are radically transforming the socio-technical environments in which we interact. Affective learning companions personally tailor interactions to mitigate Stuck and promote Flow. Supportive dressing systems adapt to the needs of persons living with cognitive impairment. Smart home and assistive technologies can foster independence for people living with autism. The University of Arizona Holodeck, a powerful Experiential Supercomputer, is empowering transdisciplinary collaborations advancing convergent research, education, and innovation.

 

_____________________________

October 23, 2020

Maureen Ritchey, Assistant Professor, Psychology Department, Boston College

https://www.bc.edu/bc-web/schools/mcas/departments/psychology/people/faculty-directory/maureen-ritchey.html

Making memories: Brain networks supporting episodic binding and reconstruction

Abstract: When we remember an event, we weave together its specific features into a coherent episode. In effect, we rebuild the world in our minds. How does the human brain accomplish this feat? In this talk, I will discuss the hippocampal and cortical network interactions that transform experience into memory. This transformation process begins at encoding, as feature representations are bound through the hippocampus and embedded within the spatiotemporal structure of events. As memories are retrieved, cortico-hippocampal networks interact to reconstruct these features into a richly detailed experience. I will highlight recent work suggesting that, within the posterior medial cortico-hippocampal network, there are distinct subnetwork alliances that support different aspects of episodic representations. Finally, I will discuss ongoing efforts to modulate the reconstruction of emotional memories, leveraging what we know about making memories to make them feel better.

 

______________________________

November 6, 2020

Molly Gebrian, Assistant Professor, Fred Fox School of Music, University of Arizona

https://music.arizona.edu/people/directory/mgebrian/

Music and Early Language Acquisition

Abstract: The phrase “music is the universal language” is ubiquitous in our culture, but this talk will explore the idea that, rather than music being a language, language can best be understood as a type of music. Infants first experience language not as a content-rich utterance, but rather as pure sound, devoid of any concrete meaning but full of interesting and varied acoustic information. Music perception is often treated as an ancillary ability in our culture and is thought to mature more slowly than language perception and acquisition. This talk will demonstrate that, to the contrary, not only do music and language abilities develop in parallel, but musical hearing and ability are essential for successful language acquisition. A review of the relevant literature in infant and child development, cognitive neuroscience, and musicology will show that the ability to hear musically is fundamental to our identity and linguistic abilities as human beings.

 

______________________________

November 13, 2020

Gina Kuperberg, Professor, Cognitive Science, Tufts University

https://projects.iq.harvard.edu/kuperberglab/people/gina-r-kuperberg
 

Does hierarchical predictive coding mediate language comprehension? Evidence from multimodal neuroimaging

Abstract: One of the most fundamental questions in Cognitive Science is how the brain is able to extract meaning from streams of rapidly unfolding, noisy linguistic inputs. The process of language comprehension can be understood as probabilistic inference — the use of a hierarchy of linguistic and non-linguistic knowledge (a generative model) to infer the underlying representations that best explain the linguistic input. It has been proposed that the brain carries out this type of probabilistic inference using an algorithm known as hierarchical predictive coding. Although the term predictive coding has sometimes been used in a broad sense to describe any type of predictive processing by the brain,  it actually refers to a specific computational architecture that was originally used to simulate extra-classical receptive-field effects in the visual system, and that has since been extended into a more general theory of cortical function. Predictive coding inherits many of the basic principles of parallel, interactive and constraint-based accounts, which have transformed our understanding of language processing over the past few decades. It also instantiates probabilistic prediction at multiple levels of linguistic representation, which is thought to play a major role in ensuring that real-time language comprehension is both fast and accurate. However, it is distinguished both from classic connectionist models and from more general predictive processing frameworks by committing to a particular arrangement of feedforward and feedback connections and flow of activity, both within and across different levels of the cortical hierarchy. In addition to its biological plausibility, this framework makes principled predictions that address some of the most fundamental questions in the neurobiology of language: When, where and how does incoming information interact with prior top-down contextual constraints as it becomes available to successive levels of the cortical hierarchy over time? Can contextual predictions pre-activate item-specific representations, and if so, at what levels of representation? And how is the brain able to rapidly shift away from prior predictions so that it can rapidly and flexibly respond to systematic changes in the underlying message? In this talk, I will address some of these questions, discussing the evidence we already have, and the evidence we still need to support the hypothesis that the brain carries out predictive coding during language comprehension.

 

______________________________

November 20, 2020

Sol Lim, Assistant Professor, Systems and Industrial Engineering, University of Arizona

https://sie.engineering.arizona.edu/faculty-staff/faculty/sol-lim

Human Motion Analysis with Inertial Sensing and Predictive Modeling for Improved Health and Well-being

Abstract: Wearable sensing technologies that gained popularity through health and fitness tracking present many new opportunities for human factors & ergonomics research. Obtaining interpretable and actionable information from the vast amounts of data generated by these sensors will require merging traditional ergonomics theory and first principles with statistical techniques adept at handling large data. My research presents a framework for combining wearable inertial sensing, biomechanical modeling, and predictive modeling techniques for ergonomics assessment. Examples include estimating exposures to manual material handling tasks of different intensities and durations, with insights into the body's biomechanical response to external loads in dynamic tasks. This study was motivated by the high prevalence of overexertion injuries from high force exertions and awkward postures during manual material handling, which account for one-third of all work-related injuries and cost the US economy $13.7 billion annually. The developed approach aims to overcome some of the conventional challenges of manually measuring workers' exposures to force exertions and work postures in non-routinized work such as variable material handling (e.g., UPS/Amazon fulfillment centers), patient care (e.g., nurses, patient transporters), and construction work. I will conclude with an overview of other ongoing studies to illustrate the broad potential of low-cost wearable sensing and predictive modeling for improving human health and well-being.

 

______________________________

December 4, 2020

Stacey Tecot, Associate Professor, School of Anthropology, University of Arizona

https://anthropology.arizona.edu/user/stacey-tecot

The socioendocrinology of raising babies: insights from defiant lemurs

Abstract: Determining how species mitigate environmental stress to survive and reproduce is central to an understanding of human and non-human primate evolution and health, and is also critical for forecasting the fates of species in the face of climate change and habitat degradation. An estimated 98% of lemurs are threatened with extinction, and the strategies that they have evolved to cope with challenges in their environment may be insufficient to respond to more recent, relatively abrupt challenges caused by human activities. Here, I focus on the evolution and proximate mechanisms of shared infant care, reproductive timing, and the stress response. I use a combination of detailed behavioral and physiological data on the red-bellied lemur as well as comparative data across species to determine how lemurs negotiate reproduction and survival in a naturally dynamic environment, and how anthropogenic factors might impact those strategies.

 

______________________________

January 22, 2021

Roeland Hancock, Assistant Research Professor, Assistant Director, Brain Imaging Research Center, University of Connecticut

https://psych.uconn.edu/person/roeland-hancock/

Genetic topology of the language network

Abstract: I will discuss recent work that combines behavioral genetics and functional brain imaging to examine sources of individual variation in the language processing network and to better delineate the functional architecture of the multiple, hierarchically interacting neural systems that underlie the language faculty. I will first discuss the use of genetic correlation to investigate how shared genetic factors may contribute to covariance in spatially distributed language-related task fMRI activation within left temporal and inferior frontal cortex. These results broadly provide novel support for a dorsal/ventral dual-stream model of language processing, with dorsal and ventral streams having distinct genetic influences, yet also raise questions about the role of premotor cortex (PMC) and the anterior temporal lobe (aTL) within language networks. In light of this genetic map, I will suggest that popular hierarchical divisions of incremental language processing models (e.g., from lexical co-occurrence statistics to probabilistic context-free grammars) do not reflect the natural architecture of the language network, and I will briefly discuss models that may better account for interactions between these levels.

 

______________________________

January 29, 2021

Liz Chrastil, Assistant Professor, Neurobiology and Behavior, University of California, Irvine

https://cnlm.uci.edu/chrastil/

Using spatial navigation to understand human learning and memory

Abstract: Navigation is a central part of daily life. For some, getting around is easy, while others struggle, and certain clinical populations display wandering behaviors and extensive disorientation. Working at the interface between immersive virtual reality and neuroimaging techniques, my research uses these complementary approaches to inform questions about how we acquire and use spatial knowledge. In this talk, I will discuss both some of my recent work and upcoming experiments that center on three main themes: 1) how we learn new environments, 2) how the brain tracks spatial information, and 3) how individuals differ in their spatial abilities. More broadly, I will discuss how navigation lends insight into processes of human learning and memory. The behavioral and neuroimaging studies presented in this talk inform new frameworks for understanding spatial knowledge, leading to novel approaches to answering the next major questions in navigation, learning, and memory.

 

______________________________

February 5, 2021

Alison Hawthorne Deming, Professor, English, University of Arizona

Evan MacLean, Assistant Professor, Anthropology, University of Arizona

https://anthropology.arizona.edu/user/evan-maclean

https://english.arizona.edu/people/alison-hawthorne-deming

A discussion on intelligence

Intelligence is an important but controversial concept in both science and society. In this interactive session we will discuss intelligence from the perspective of a comparative psychologist (Evan MacLean) and a poet (Alison Deming). We will also solicit ideas and perspectives from others in the group, so please come prepared to contribute to an interactive session probing the nature of intelligence!

 

______________________________

February 12, 2021

Miriam Spering, Assistant Professor, Neuroscience, University of British Columbia

http://www.neuroscience.ubc.ca/spering.htm

Eye movements as a window into decision making

Abstract:  Seeing and perceiving the visual world is an active and often multimodal process that involves orienting eyes, head and body towards an object of interest. It is also a highly dynamic process during which the eyes continuously scan the visual environment to sample information. Eye movements are used in many contexts and by many research disciplines, ranging from developmental and cognitive psychology to computer science and art history, to measure visual perception, object categorization, recognition, and other mental processes.

My research group uses human eye movements as sensitive indicators of performance in real-world interceptive tasks. Tasks such as catching prey or hitting a ball require prediction of an object’s trajectory from a brief glance at its motion, and an ultrafast decision about whether, when and where to intercept. I will present results from two research programs that use eye movements as a readout of these types of decision processes. The first series of studies investigates go/no-go decision making in healthy human adults and baseball athletes and reveals that eye movements are sensitive indicators of decision accuracy and timing. The second set of studies probes decision making in patients with motor deficits due to Parkinson’s disease and shows differential impairments in visual, motor and cognitive function in these patients. I will conclude that eye movements are both an excellent model system for prediction and decision making, and an important contributor to successful motor performance.

 

______________________________

February 19, 2021

Sara Aronowitz, Assistant Professor, Philosophy and Cognitive Science, University of Arizona

http://www-personal.umich.edu/~skaron/

What is a cognitive structure?

Abstract: In this talk, I'll present some of my recent theoretical and experimental work on the variety of information structures that we use to solve problems. What do these structures have in common, and how do structures of different types interact in learning? I'll present two cases from explanation and memory. (1) In response to why-questions, people offer both narrative and abstract explanations. These two structures, I'll argue, work together to allow us to understand and communicate - and data from adult learners suggests that both structures are in some sense equally explanatory. (2) In memory, spatial (and spatio-temporal) map-like structures have been posited to extend to all kinds of knowledge domains beyond the literally spatial. But what is lost when we extend the concept of a map this far? Putting these cases together, we see that even simple learning problems are not best solved by finding the "right" structure, but instead require a more complex array of structures.

 

______________________________

February 26, 2021

Andrew Belser, Professor, School of Theatre, Film & Television, University of Arizona

https://tftv.arizona.edu/people/directory/awbelser/

Possible Threads: Linking Neuroscience with Theatre/Film Performance 

Abstract: The neuroscience research revolution of the past quarter century affords new models for situating the brain/body interaction with the world in ways that are useful for performance training. Professor Belser will outline areas of his research and writing that focus on contemporary film and theatre performance and can also apply to other human performance endeavors such as teaching, sports, or job interviews. Belser's work links recent findings from cross-disciplinary neuroscience research in areas such as perception, spatial awareness, emotion, learning processes, and image processing to expand studio teaching and practices.

 

______________________________

March 5, 2021

Debbie Kelly, Professor, Psychology Department, University of Manitoba

http://home.cc.umanitoba.ca/~kellyd/index.html

Crumbling Foundations: Age-related Decline in Geometric Cue Use During Spatial Reorientation

Abstract: Orientation is the critical first step in navigation. When lost, or in a new environment, one must reorient to determine which direction to begin traveling. Considerable research over the years has shown that a diversity of species use featural (e.g., color or pattern) and geometric (e.g., distance or direction) information from objects or boundaries to reorient. Interestingly, even when featural cues are present and reliably predict the location of a hidden target, many animals encode geometric information from the surfaces of an enclosed environment. Geometry has been argued to provide a foundation upon which featural cues are built. However, the encoding of geometry may decline with age. The crumbling foundations of a spatial representation may leave older individuals lost in featurally familiar environments. My research presentation will take a comparative approach to examine this possibility.

 

______________________________

March 12, 2021

Anne Charity Hudley, Professor, Department of Linguistics, University of California, Santa Barbara

https://www.linguistics.ucsb.edu/people/anne-h-charity-hudley

A Dialogic Model of Linguistic and Racial Identity Development for Black College Students

Abstract: Knowledge about language and culture is an integral part of the quest for educational equity and empowerment, not only in PreK-12 but also in higher education. As Black students transition from high school to college, they work to add their voices and perspectives to academic discourse and to the scholarly community in a way that is both advantageous and authentic. The Talking College Project is a Black student and Black studies centered way to learn more about the particular linguistic choices of Black students, while empowering them to be proud of their cultural and linguistic heritage.

One key question of The Talking College Project is: how does the acquisition of different varieties of Black language and culture overlap with identity development, particularly intersectional racial identity development? To answer this question, we used a community-based participatory research methodology to conduct over 100 interviews with Black students at several Minority-Serving Institutions, Historically Black Colleges, and Predominantly White Universities. Prior to the pandemic, we also conducted ethnographies on over 10 college campuses. Based on information collected from the interviews and our ethnographies, it is evident that Black students often face linguistic bias and may need additional support and guidance as they navigate the linguistic terrain of higher education. I will present themes and examples from the interviews that illustrate the linguistic pathways that students choose, largely without the sociolinguistic knowledge that could help guide their decisions.

To address the greater need to share information about Black language with students, I highlight our findings from interviews with Black students who have taken courses in educational linguistics to demonstrate the impact of education about Black language and culture on Black students' academic opportunities and social lives. We focus in particular on how this information influenced those who went on to become educators. These findings help us create an equity-based model of assessment for what educational linguistic information Black students need in order to be successful in higher education and how faculty can help establish opportunities for students to access content about language, culture, and education within the college curriculum. We address the work we need to do as educators and linguists to provide more Black college students with information that both empowers them raciolinguistically AND respects their developing identity choices.

 

______________________________

March 19, 2021

Catherine Brooks, Associate Professor, Director of the iSchool, University of Arizona

https://ischool.arizona.edu/people/catherine-brooks

Popular Discourse around Deepfakes

Abstract: This research interrogates the discourses that frame our understanding of deepfakes and how they are situated in everyday public conversation. It does so through a qualitative analysis of popular news and magazine outlets. This project analyzes themes in discourse that range from individual threat to societal collapse. This paper argues that how the deepfake problem is discursively framed impacts the solutions proposed for stemming the prevalence of deepfake videos online. That is, if fake videos are framed as a technical problem, solutions will likely involve new systems and tools. If fake videos are framed as a social, cultural, or ethical problem, the solutions needed will be legal or behavioral ones. In conclusion, this paper suggests that a singular solution is inadequate because of the highly interrelated technical, social, and cultural worlds in which we live today.

 

______________________________

April 9, 2021

Melville Wohlgemuth, Assistant Professor, Neuroscience, University of Arizona

https://neurosci.arizona.edu/person/melville-wohlgemuth-phd

Sensorimotor integration for active sensing behaviors

Abstract: A major goal in neuroscience is to dissect the neural circuits that support complex behaviors.  Comparative approaches are fundamental to the success of this goal, to separate species specializations from general principles, and to understand the brain in light of its evolved functions. Our lab uses the echolocating bat as a model to understand the role of an interconnected cortico-collicular brain circuit in processing information from the environment to adapt active sensing behaviors.  In humans, this process typically involves assessing visual information for the control of head and eye movements; for the bat, auditory information is used to adapt features of sonar vocalizations, positioning of the head and ears, and kinematics of flight control.  The same brain structures involved in auditory spatial attention in humans have been co-opted by the bat for their echolocation behaviors: namely the inferior and superior colliculus in the midbrain, as well as auditory cortex.  These structures form an interconnected loop that processes incoming auditory information for the purposes of species-specific orienting behaviors.  Today’s talk will focus on efforts to understand how the superior colliculus processes auditory information, its role in driving the bat’s orienting echolocation behaviors, and preliminary work on the inferior colliculus to understand auditory response topologies for processing natural sounds.  From this work, we have identified general rules by which the brain processes sensory information for the purpose of adaptive spatial attention behaviors.

 

______________________________

April 16, 2021

Peter Turkeltaub, Associate Professor, Neurology, Georgetown University Medical Center

https://neurology.georgetown.edu/patientcare/cognitiverecovery/#

Biological Mechanisms of Language Outcomes After Stroke

Abstract: About one third of people who have a stroke experience aphasia, a loss of language and communication ability that can have devastating consequences on a person's life. Most people with aphasia never fully recover, even with maximal medical care and speech-language therapy. A major goal of research is thus to understand the brain basis of aphasia recovery, with the hope that this information will lead to improved biologically based treatments. For many years, neuroimaging studies have described topological patterns of changes in brain activity in people with aphasia, suggesting that recovery involves recruitment of tissue around the stroke in the left hemisphere known as "perilesional recruitment," along with recruitment of a mirror-image language network in the right hemisphere, and possibly engagement of "domain general" non-linguistic processors to support recovery. The field has been somewhat lacking, however, in tests of clear biological mechanistic hypotheses to explain these observations. I will present several neuroimaging studies examining brain structure, function, and connectivity to investigate specific hypotheses regarding the biological mechanisms that might underlie changes in language networks after stroke. The results challenge some commonly held ideas regarding the brain basis of aphasia recovery.