When

Noon – 1:30 p.m., Oct. 10, 2025
Photo: Randy Gallistel

Charles Gallistel
Distinguished Professor Emeritus
Behavioral and Systems Neuroscience / Cognitive Psychology
Rutgers University

Zoom: https://arizona.zoom.us/j/89643263719
 

The new understanding of associative learning: neurobiological and machine-learning implications
 
Abstract: In the old understanding, which goes back to Aristotle, associations are activity-conducting connections between ideas, nodes, or neurons. This is an ontological mistake; an association is a readily measured temporal dependency between a signal (the conditional stimulus, CS) and an expected rate of some event of interest (the “reinforcement”) out there in the world, not a conducting connection in a mind or brain. The stimulus for the perception of temporal association is the informativeness of the CS: the ratio between the conditional event rate and the rate expected in the context in which the CS occurs, in other words, the contrast between the predicted rate and the background rate. Under easily arranged experimental conditions, the rates at which rodent subjects poke and avian subjects peck or scratch are scalar functions of the background and conditional rates of reinforcement over at least three orders of magnitude of variation. The percepts of event rates are neurobiological numerals (numerons) that encode perceived rates and their ratios. The encoded values scale with the rates they represent. The mapping from encoded values to measured behavioral rates is also scalar. Plastic synapses cannot be the physical basis of memory because they do not encode the values of variables. LLMs do not learn the way brains learn (pace Geoffrey Hinton), because a behaviorally effective percept of a temporal association may be generated by a single experience of a highly informative dependency. One-shot learning is impossible in an LLM. You cannot tell the machine anything; you must train it.
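To make the central quantity concrete (the notation here is ours, not the speaker's): writing λ_CS for the rate of reinforcement expected while the CS is present and λ_context for the rate expected in the context alone, the informativeness described above is the ratio

\[
\text{informativeness of the CS} \;=\; \frac{\lambda_{\mathrm{CS}}}{\lambda_{\mathrm{context}}}
\]

On the account sketched in the abstract, this ratio equals 1 when the CS predicts nothing beyond the background, and a single experience of a very large ratio can suffice to generate a behaviorally effective percept of the association.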

Contacts

Massimo Piattelli-Palmarini