Description
This course is about how the brain creates our sense of spatial location from a variety of sensory and motor sources, and how this spatial sense in turn shapes our cognitive abilities.
Knowing where things are is effortless. But “under the hood,” your brain must figure out even the simplest of details about the world around you and your position in it. Recognizing your mother, finding your phone, going to the grocery store, playing the banjo – these require careful sleuthing and coordination across different sensory and motor domains. This course traces the brain’s detective work to create this sense of space and argues that the brain’s spatial focus permeates our cognitive abilities, affecting the way we think and remember.
The material in this course is based on a book I’ve written for a general audience. The book is called “Making Space: How the Brain Knows Where Things Are”, and is available from Amazon, Barnes and Noble, or directly from Harvard University Press.
The course material overlaps with classes on perception or systems neuroscience, and can be taken either before or after such classes.
Jennifer M. Groh, Ph.D.
Professor
Psychology & Neuroscience; Neurobiology
Duke University
www.duke.edu/~jmgroh
Jennifer M. Groh is interested in how the brain processes spatial information in different sensory systems, and how the brain’s spatial codes influence other aspects of cognition. She is the author of a recent book entitled “Making Space: How the Brain Knows Where Things Are” (Harvard University Press, fall 2014).
Much of her research concerns differences in how the visual and auditory systems encode location, and how vision influences hearing. Her laboratory has demonstrated that neurons in auditory brain regions are sometimes responsive not just to what we hear but also to what direction we are looking and what visual stimuli we can see. These surprising findings challenge the prevailing assumption that the brain’s sensory pathways remain separate and distinct from each other at early stages, and suggest a mechanism for such multi-sensory interactions as lip-reading and ventriloquism (the capture of perceived sound location by a plausible nearby visual stimulus).
Dr. Groh has been a professor at Duke University since 2006. She received her undergraduate degree in biology from Princeton University in 1988 before studying neuroscience at the University of Michigan (Master’s, 1990), the University of Pennsylvania (Ph.D., 1993), and Stanford University (postdoctoral, 1994-1997). Dr. Groh has been teaching undergraduate classes on the neural basis of perception and memory for over fifteen years. She is presently a faculty member at the Center for Cognitive Neuroscience and the Duke Institute for Brain Sciences at Duke University. She also holds appointments in the Departments of Neurobiology and Psychology & Neuroscience at Duke.
Dr. Groh’s research has been supported by a variety of sources, including the John S. Guggenheim Foundation, the National Institutes of Health, the National Science Foundation, the Office of Naval Research Young Investigator Program, the McKnight Endowment Fund for Neuroscience, the John Merck Scholars Program, the EJLB Foundation, the Alfred P. Sloan Foundation, the Whitehall Foundation, and the National Organization for Hearing Research.
What you will learn
Course Introduction and Vision (Part 1)
This module contains an introduction to the course as a whole (Video 1.1) and an exploration of how our eyes detect light and deduce the location light is coming from (Videos 1.2-1.6). You’ll also learn about how scientists from Democritus to Alhazen to Kepler figured this out. The final video for the module involves an experiment to test what happens when special goggles turn the world upside down (Video 1.7). I’ll show experiments frequently throughout this course; they are how we know what we know. This module’s quiz is ungraded and available to both auditors and certificate students. Consider it a sample of the style of question in the quizzes for the remaining modules, and an opportunity to determine whether you’d like to pursue a certificate for this course.
Vision (Part 2), the Body, and Neural Signals
In this unit, we cover the visual scene in 3D – the many clues to depth. We then turn to body senses (position and touch) and how our brains detect the configuration of our own bodies. Along the way, we cover the resting membrane potential, the action potential, and how they arise. Finally, we bring vision and the body together, and throw some beanbags at a visual target while wearing prisms! This material is covered in Making Space, chapters 2 and 3.
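The resting membrane potential mentioned above arises from ion concentration gradients across the neuronal membrane, and the equilibrium potential for a single ion is given by the Nernst equation. As a rough illustration (not part of the course materials, and using typical textbook mammalian concentration values as assumptions), a short Python sketch:

```python
import math

def nernst_mV(conc_out, conc_in, z=1, temp_C=37.0):
    """Nernst equilibrium potential (millivolts) for an ion with
    valence z, given extracellular and intracellular concentrations."""
    R = 8.314       # gas constant, J/(mol K)
    F = 96485.0     # Faraday constant, C/mol
    T = temp_C + 273.15
    return 1000.0 * (R * T / (z * F)) * math.log(conc_out / conc_in)

# Potassium with typical mammalian concentrations (mM):
# 5 outside, 140 inside gives about -89 mV, close to the
# resting membrane potential of many neurons.
print(nernst_mV(5.0, 140.0))
```

The concentration values here are standard textbook figures chosen for illustration; the course videos develop the underlying reasoning in full.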
Brain Maps
In this unit, we turn to the brain and how it uses the spatial position of neurons within the brain to organize information about the spatial position of stimuli in the world (Making Space chapter 4). You’ll learn about how we identify where one object ends and another begins, what a receptive field is, and how some neurons are sensitive to edges and the boundaries of objects. Maps occur in both visual cortex and body (somatosensory) cortex, and these maps may be responsible for various “phantom” sensations (examples from normal vision, patients with body part amputations, and electrical stimulation experiments).
Sound and Brain Representations
In module 4, we turn to the fascinating puzzle of how we deduce sound location, a process that requires quite a bit of detective work. Our brains piece together multiple types of clues, including subtle differences in timing, loudness, frequency content, and how sounds appear to change as we turn our heads. Because our ears don’t form images of sounds, our brains don’t have to use maps to encode sound location. The second half of the videos in this module concern alternative forms of brain representation, how the brain translates between different types of representation, and what we know about brain representations for sound location. The material is covered in chapter 5, “Sherlock Ears”, and chapter 6, “Moving with Maps and Meters”, in Making Space. Be forewarned: there are about 70 minutes of video in this module, as compared to previous modules’ 50-60 minutes. After watching the full set, you’ll see why these videos are grouped together as a unit. To make things more manageable, we’ve broken the quiz into two parts; that way, you can get feedback on one part before moving on to the next, if you like.
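The timing clue mentioned above, the interaural time difference, can be illustrated with a back-of-the-envelope calculation. This Python sketch uses the classic Woodworth approximation; the head radius and speed of sound are typical assumed values, not figures from the course:

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Woodworth approximation of the interaural time difference:
    the extra time a sound takes to reach the far ear, as a function
    of the source's azimuth (0 = straight ahead, 90 = directly to one side)."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# A sound directly to one side arrives at the far ear roughly
# 0.65 milliseconds later than at the near ear.
print(itd_seconds(90.0))
```

The striking point, developed in the videos, is that the brain detects and uses these sub-millisecond differences to infer where a sound is coming from.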