Wide-Field Cortical Maps of Sound Frequency 

One approach to studying how mammals perceive sound is to map neural responses to sound directly.  Another approach is to map an indirect metric of the neural response with a highly sensitive video camera that captures small changes in light reflectance, which are indirectly related to neuronal spiking levels.  The sensory epithelium of the ear responds to all audible sound frequencies, or in other words, all the notes on a piano keyboard.  By the time one reaches cortical levels, neurons are organized topographically, with some positions in cortex responding to low sound frequencies (e.g., blue regions above) and others responding to middle and higher tone frequencies (e.g., yellow and green colored responses above).  The ear has only one topographic map of the “piano keyboard,” or audible frequency range, but the cortex has many maps of the piano keyboard, which in the above image look like rainbows of color.  We can quantify the cortical area representing each quarter octave of sound, as well as the direction and magnitude of change in tone sensitivity, with the overlaid vectors (arrows above).  With these wide-field maps we have discovered that each cortical field processes and responds to sound in a very different way.
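As a sketch of the quantification step described above, the snippet below bins a map of per-pixel best frequencies into quarter-octave bands and tallies the cortical area devoted to each band.  The function name, frequency range, and pixel size are illustrative assumptions, not the lab's actual analysis code.

```python
import numpy as np

# Hypothetical sketch: bin a tonotopic map into quarter-octave bands and
# measure the cortical area in each band.  All parameters (frequency range,
# pixel size) are made-up illustration values, not real measurements.

def area_per_quarter_octave(best_freq_map, f_lo=1000.0, f_hi=32000.0,
                            pixel_area_mm2=0.01):
    """best_freq_map: 2-D array of best frequencies (Hz) per imaged pixel;
    NaN marks pixels outside the sound-responsive region."""
    n_bins = int(np.log2(f_hi / f_lo) * 4)            # 4 bins per octave
    edges = f_lo * 2.0 ** (np.arange(n_bins + 1) / 4.0)
    valid = best_freq_map[~np.isnan(best_freq_map)]
    counts, _ = np.histogram(valid, bins=edges)
    return edges, counts * pixel_area_mm2             # area (mm^2) per band

# Toy example: a 100x100-pixel map with random log-spaced best frequencies
rng = np.random.default_rng(0)
toy_map = 1000.0 * 2.0 ** (rng.uniform(0, 5, size=(100, 100)))
edges, areas = area_per_quarter_octave(toy_map)
```

Plotting `areas` against the band edges would give the kind of area-per-quarter-octave summary described in the text.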


Behavioral measures of sound perception and discrimination

It is well known that humans and other animals discriminate tonal frequencies in natural sounds, including vocalizations.  However, less is known about how they discriminate temporal cues in sound, including cues that contribute to the perception of tempo, timbre, and loudness.  Research teams in the Read lab are training rats to initiate operant behavior (start trial, top left) and to discriminate target sounds played from an overhead speaker (discriminate sound for reward, top right).  With this task we find that rats readily discriminate sound tempo, working for a strawberry protein shake reward.  We are currently examining the time scales rats use to discriminate various vocal communications.
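As a rough illustration of the kind of temporal cue involved, the sketch below synthesizes sequences of brief noise bursts at two different tempos.  All stimulus parameters here are assumptions for illustration, not the lab's actual task stimuli.

```python
import numpy as np

# Illustrative sketch (not the lab's stimulus code): build a sequence of
# brief noise bursts at a given tempo, the kind of temporal cue a
# tempo-discrimination task could use.  All parameters are assumptions.

def burst_sequence(tempo_hz, duration_s=2.0, burst_ms=25.0, fs=44100):
    rng = np.random.default_rng(0)
    n = int(duration_s * fs)
    out = np.zeros(n)
    burst_n = int(burst_ms / 1000.0 * fs)
    window = np.hanning(burst_n)                 # smooth on/off envelope
    period_n = int(fs / tempo_hz)
    for start in range(0, n - burst_n, period_n):
        out[start:start + burst_n] = window * rng.standard_normal(burst_n)
    return out

slow = burst_sequence(4.0)    # 4 bursts per second
fast = burst_sequence(10.0)   # 10 bursts per second
```

The two sequences share identical spectral content per burst and differ only in burst rate, isolating tempo as the discriminative cue.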


Neuron spiking patterns encode sound shape and rhythm

It was previously thought that cortical neurons keep track of sound rhythms by ticking off one spike with each beat of the rhythm, like the tick-tock of a metronome.  We discovered that some cortical neurons spike with a sustained “jitter” that follows the sound envelope as it changes over time.  To illustrate this, we play a shaped burst of noise from our speakers (top left, sound pressure wave) over ten trials and record all the spikes from a single cortical neuron (colored dots mark each spike time).  We cross-correlate all spike times to generate an averaged correlation function (autocorrelogram), which we fit (red line) with a modified Gaussian model to estimate reliability and jitter (derived from its area and width, respectively).  The regular pattern in the autocorrelogram indicates a consistent spike timing pattern.  When we quantify the width of our autocorrelogram fit, we find it increases with the duration of the noise bursts in our sound sequence.  This is significant because it supports the theory that the spike timing pattern provides a “temporal code” of sound envelope shape and rhythm (Lee et al., J Neurophysiology 2016).
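The correlation analysis described above can be sketched as follows: pool spike-time differences across trials into a correlogram, then fit its central peak with a Gaussian whose width estimates jitter and whose area relates to reliability.  This is a minimal illustration under assumed parameters, not the published implementation (which uses a modified Gaussian model).

```python
import numpy as np
from scipy.optimize import curve_fit

# Hedged sketch of the analysis described above: build a cross-trial
# spike-time correlogram and fit its central peak with a Gaussian whose
# width estimates timing "jitter".  Bin sizes, lag range, and the simple
# Gaussian model are illustrative assumptions.

def autocorrelogram(spike_trains, bin_ms=1.0, max_lag_ms=50.0):
    """spike_trains: list of 1-D arrays of spike times (ms), one per trial."""
    lags = []
    for a in spike_trains:
        for b in spike_trains:
            if a is b:
                continue                         # cross-trial pairs only
            lags.append((a[:, None] - b[None, :]).ravel())
    lags = np.concatenate(lags)
    lags = lags[np.abs(lags) <= max_lag_ms]
    edges = np.arange(-max_lag_ms, max_lag_ms + bin_ms, bin_ms)
    counts, _ = np.histogram(lags, bins=edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, counts.astype(float)

def gaussian(t, amp, sigma):
    return amp * np.exp(-t**2 / (2 * sigma**2))

# Toy data: 10 trials of spikes locked to 20-ms beats with 2-ms jitter
rng = np.random.default_rng(1)
trains = [np.sort(np.arange(0, 200, 20) + rng.normal(0, 2, 10))
          for _ in range(10)]
t, c = autocorrelogram(trains)
mask = np.abs(t) <= 10                           # fit the central peak only
(amp, sigma), _ = curve_fit(gaussian, t[mask], c[mask],
                            p0=[c[mask].max(), 5.0])
jitter = abs(sigma)                              # fitted peak width (ms)
```

With precisely timed toy spikes the fitted width recovers roughly the pairwise spread of the injected 2-ms jitter, mirroring how a wider fit in the real data indicates looser spike timing.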


Parallel Thalamocortical Sound Processing Pathways

At first glance, one might think that all sound-processing cortices are wired up to process sound the same way. However, we find this is not the case. For example, we find that separate populations of neurons form thalamic pathways to A1 and cSRAF.  The pathway to A1 expresses genes encoding two glutamate transporters (see blue colored pathway above), whereas the pathway to cSRAF expresses only one (see orange colored pathway above). We think these different gene expression patterns allow the pathways to encode sound with different tone frequency and timing resolution. This is significant because it suggests that gene expression reflects neural functionality (Storace et al., J Neuroscience, 2010).



Undergraduate Student Projects


Kelsey Dutta

Majors: Physiology and Neurobiology, Electrical Engineering

Project Title: Stimulation Coding for an Auditory Midbrain Implant

Committee: Heather Read, Psychology Behavioral Neuroscience (chair), Joseph LoTurco, Physiology & Neurobiology, and Monty Escabi, Electrical & Computer Engineering

Project Description


Deric Zhang

Major: Physiology and Neurobiology (2015)

Project Title: Discrimination of Temporal Cues in Rat Call Sequences

Faculty Mentor: Dr. Heather Read, Department of Psychology


Project Description