
General Research Areas

Encoding of complex sounds in the auditory system

One of the major thrusts of the laboratory is toward understanding how complex sounds such as species-specific vocalizations are represented robustly in the auditory system. Robust representation implies that behaviorally relevant acoustic features can be extracted under a variety of environmental conditions. In particular, we examine conditions of variable sound intensity, variations in temporal context of sounds and different mixtures of sounds. "Reverse-engineering" the neural encoding of sounds under these variable conditions, as opposed to the static experimental conditions typically studied, may lead to both improved understanding of normal auditory function in natural environments as well as improved engineering of devices intended to process relevant sounds. The latter include hearing aids, cochlear implants and computers capable of automatically recognizing speech.

Human language processing measured with electrocorticography (ECoG)

Animal models of sound processing face inherent limitations when one attempts to explore the richness of human speech. In recent years functional magnetic resonance imaging (fMRI) has been used successfully to probe directly the human brain activity underlying speech and language tasks at high spatial resolution. This method suffers from low temporal resolution, however, so the dynamic nature of brain activity is largely inaccessible to it. We use recording electrodes placed directly upon the brain, a technique termed electrocorticography (ECoG), to examine rapidly evolving brain activity responsible for the processing of both simple and complex linguistic tasks. These experiments can lead to new insights into how dynamic, coordinated brain activity results in human speech processing. Additionally, these findings may ultimately enable individuals to control external devices by thinking particular words or phrases and having their brain activity decoded by a computer.

Brain repair via induced neural plasticity

More recently, the lab has shifted focus somewhat to begin investigating the principles behind "forward-engineering" novel brain function by rewiring native cortical brain networks to implement new algorithms. Following brain injury such as a stroke, some function is lost and the brain network is pathologically disrupted. The principles of system theory and neuroplasticity are applied toward developing brain-computer interfaces that can rewire brain networks using strategic neurostimulation and thus potentially recover the lost function. Collectively, this research represents a combination of neuroscientific and neuroengineering endeavors that have the potential to alleviate focal losses of nervous system function such as in stroke.

Game-based auditory training

Cognitive training software provides exercises whose completion strengthens certain cognitive processes. We seek to develop listening training software in the form of compelling video games playable on smartphones that naturally encourage individuals to complete their auditory training. The goal of this work is to optimize the function of hearing assist devices such as hearing aids and cochlear implants, as well as to enable individuals with a newly corrected hearing deficit to learn to communicate effectively.

Novel hearing tests

We are using modern principles of machine learning to improve upon longstanding hearing test formats while also devising completely new ones. The new tests are substantially more efficient and more informative than current tests, which will enable new inferences to be drawn about the hearing of patients with deficits.


Research Projects 

Feature mapping in cortical areas

While the cerebral cortex of higher mammals is a three-dimensional organ, one of its primary organizational features is a two-dimensional (2D) arrangement of neuronal collections called columns. Within any particular cortical area, neurons that connect to one another tend to lie near one another, an arrangement that keeps the amount of interneuronal wiring relatively low. The resulting "feature maps" can take on a wide variety of forms that often provide clues to neurophysiologists about the significance and internal organization of various neuronal response features. The maps of visual and somatosensory cortical areas have been extensively worked out over the years, while maps of auditory cortical areas have been more challenging to discern, for reasons that have remained unclear. We pursued a series of modeling studies aimed at defining principles underlying functional mapping in auditory cortex and discovered that the unclear and even variable functional maps of auditory cortex arise from the nature of sound encoding itself. Visual and somatosensory (touch) stimuli are inherently 2D (on the retina and body surface, respectively), whereas the cochlea of the ear fundamentally encodes only the frequency of sounds. The resulting one-dimensional (1D) maps do not project as readily onto the 2D cortical surface, and they are more strongly influenced by individual variability in areal shape and by brain development than are corresponding maps in other sensory areas. Understanding this distinction allows experimenters to improve their investigation of auditory cortical function with electrical recordings and functional imaging.
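The awkward fit between a 1D input axis and a 2D cortical sheet can be illustrated with a toy self-organizing map. This is a generic Kohonen-style sketch, not the lab's actual model; the grid size, learning rate, and neighborhood schedule are illustrative choices. Training the same map twice from different random seeds yields different 2D layouts of the same 1D frequency axis, mirroring the individual map variability described above:

```python
import numpy as np

def train_som(seed, grid=10, n_iter=2000):
    """Map a 1D feature (e.g., sound frequency) onto a 2D sheet of units."""
    rng = np.random.default_rng(seed)
    # Each unit on the grid x grid sheet has one preferred frequency.
    w = rng.uniform(0, 1, size=(grid, grid))
    ii, jj = np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij")
    for t in range(n_iter):
        x = rng.uniform(0, 1)                  # random 1D stimulus
        bi, bj = np.unravel_index(np.argmin(np.abs(w - x)), w.shape)
        sigma = 3.0 * (1 - t / n_iter) + 0.5   # shrinking neighborhood
        h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
        w += 0.1 * h * (x - w)                 # pull neighbors toward stimulus
    return w

# Two runs differ only in initialization, yet settle into different 2D
# layouts of the same 1D frequency axis.
m1, m2 = train_som(seed=1), train_som(seed=2)
print(np.abs(m1 - m2).mean())  # nonzero: the layouts disagree
```

A 2D input (as in vision or touch) constrains the map much more tightly, which is one intuition for why retinotopic and somatotopic maps are so much more reproducible than tonotopic ones.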

PV Watkins, TL Chen and DL Barbour. "A computational framework for topographies of cortical areas." Biol Cybern 100(3):231-248, 2009.

TL Chen, PV Watkins and DL Barbour. "Theoretical limitations on functional imaging resolution in auditory cortex." Brain Res 1319:175-89, 2010.

N Katta, TL Chen, PV Watkins and DL Barbour. "Evaluation of techniques used to estimate cortical feature maps." J Neurosci Meth 202(1):89-97, 2011.

DL Barbour. "Intensity-invariant coding in the auditory system." Neurosci Biobehav Rev (Epub 2011 Apr 16).

Representation of sound intensity in the auditory system

The auditory system is unique in that a large fraction of its neurons are tuned to respond best at a particular sound intensity: both louder and softer sounds relative to their best intensities result in a decreased response. We have thoroughly documented the properties of these neurons in primary auditory cortex, finding that they are easily the most sensitive neurons (i.e., those with the lowest response thresholds) of all central auditory neurons. Their best intensities are also strongly skewed toward lower sound intensities, further implying that they preferentially encode the softest sounds. How the responses of these neurons and others are combined to create robust encoding of sounds across the wide range of sound levels found in the environment is the subject of continuing investigation.

PV Watkins and DL Barbour. "Rate-level responses in awake marmoset auditory cortex." Hear Res 275(1-2):30-42, 2011.

DL Barbour. "Intensity-invariant coding in the auditory system." Neurosci Biobehav Rev (Epub 2011 Apr 16). 

Adaptive processes in auditory cortex

The auditory system is unique in that a large fraction of its neurons are tuned to respond best at a particular sound intensity: both louder and softer sounds relative to their best intensities result in a decreased response. This observation is 60 years old and has been widely interpreted to reflect a change in the neural code of sounds in the brain relative to the ear, one that makes it easier for neurons to encode sounds at different intensities. In an extensive series of experiments we demonstrated for the first time that intensity tuning of auditory neurons is strongly correlated with their short-term adaptive processes. The trends we discovered indicate that the strong inhibition active at higher sound intensities actually shields these neurons from the desensitization that usually accompanies intense stimuli. Because they adapt little in response to loud sounds, these neurons remain more sensitive to softer sounds that follow immediately. This encoding process dramatically expands the overall dynamic range over which the auditory system can operate at short time scales and consequently enables robust encoding of real-world dynamic stimuli when the acoustic environment is relatively unstable or unpredictable.

PV Watkins and DL Barbour. "Specialized neuronal adaptation for preserving input sensitivity." Nature Neurosci 11:1259-1261, 2008 (Epub 2008 Sep 28).

PV Watkins and DL Barbour. "Level-tuned neurons in primary auditory cortex adapt differently to loud versus soft sounds." Cer Cortex 21(1):178-190, 2011.

Representation of noisy vocalizations in auditory cortex

Vocalizations typically occur in and must be decoded from complex acoustic environments containing other competing sounds and environmental noise.  Biological auditory systems are expert at extracting usable information from such an environment, but engineered systems typically fail. Our studies of the neural encoding of noisy vocalizations have revealed a variety of individual neuronal responses to mixtures of these sounds. The population of neurons responds most accurately to the vocalizations, but some respond to everything and a few respond better to the noise. Linking basic neuronal response characteristics to the behavior of the same neurons in response to complex acoustics will elucidate the important features of the auditory system for real-world listening. Furthermore, the insights gained from this work may lead to improved engineered systems intended to process sounds with interference.

Temporal flow of brain activity during speech perception and production

Electrocorticography (ECoG) recording electrodes placed directly upon the brain can reliably reveal rapidly evolving brain activity at reasonably high spatial resolutions. Using ECoG, we have recorded brain activity of human subjects performing speech perception and production tasks. The brain areas active during these tasks were consistent with findings of functional imaging studies. Because ECoG preserves more timing information than functional imaging studies can, the relative activation sequence of the brain areas involved in hearing and speaking can be extracted. While a very similar collection of brain areas is active in both of these tasks, their order of activation is essentially complementary. Extensions of this type of experiment will allow dynamic brain network configurations in a variety of tasks to be analyzed.

CM Gaona, M Sharma, ZV Freudenburg, JD Breshears, DT Bundy, J Roland, DL Barbour, G Schalk and EC Leuthardt. "Nonuniform high-gamma (60-500 Hz) power changes dissociate cognitive task and anatomy in human cortex." J Neurosci 31(6):2091-2100, 2011.

X Pei, DL Barbour, EC Leuthardt and G Schalk. "Decoding vowels and consonants in spoken and imagined words using electrocorticographic signals in humans." J Neural Eng 8(4):046028, 2011.

E Leuthardt, X Pei, J Breshears, C Gaona, M Sharma, Z Freudenburg, D Barbour and G Schalk. "Temporal evolution of gamma activity in human cortex during an overt and covert word repetition task." Front Hum Neurosci 6:99, 2012.

Directed neuronal growth in vivo

All diffusible drug delivery systems rely upon diffusion of molecules from a high-concentration source to areas of lower concentration within an organ of interest. Short of cutting into an organ and inserting a drug delivery system, however, no practical method exists to induce and maintain an artificial concentration gradient of a diffusible agent parallel to an organ surface without completely encapsulating the organ. Creating just such a concentration gradient of growth factors in the brain or spinal cord could be of profound practical value for inducing the extension of neural processes to repair damage. To achieve this result, we propose a novel drug delivery system termed discrete controlled release (DCR), in which multiple release points are created on small rods arranged in a grid that is inserted into the brain parenchyma. Proper adjustment of drug loading and release parameters should establish a consistent growth factor concentration gradient parallel to the brain's surface within the confines of the grid, promoting neural process extension over longer distances than simple diffusion alone could achieve. As a result, options for designing neural repair mechanisms are expanded.
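The core idea, superposing the diffusion fields of many discrete release points to shape a gradient, can be sketched with the steady-state point-source solution C(r) = Q / (4πDr). This is a deliberately simplified 2D toy, not the published model; the diffusivity, geometry, and release rates below are illustrative values only:

```python
import numpy as np

D = 1e-6  # cm^2/s, illustrative diffusivity of a growth factor in tissue

def concentration(points, sources, rates):
    """Steady-state concentration from point sources in an unbounded
    medium: C(r) = Q / (4*pi*D*r), summed over all release points."""
    c = np.zeros(len(points))
    for src, q in zip(sources, rates):
        r = np.linalg.norm(points - src, axis=1)
        c += q / (4 * np.pi * D * r)
    return c

# A row of release points (one "rod" of the DCR grid) with linearly
# increasing release rates, intended to shape a gradient between them.
xs = np.linspace(0.0, 0.5, 6)                       # cm along the surface
sources = np.column_stack([xs, np.full_like(xs, 0.1)])
rates = np.linspace(1e-12, 6e-12, 6)                # mol/s per release point

# Sample the field on a line parallel to the rod, 0.05 cm away.
probe = np.column_stack([np.linspace(0.05, 0.45, 50),
                         np.full(50, 0.15)])
c = concentration(probe, sources, rates)
print(c[0] < c[-1])  # True: concentration rises along the rod axis
```

Graded release rates thus produce a sustained gradient *parallel* to the rod (and hence to the organ surface), which a single high-concentration source cannot do.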

EY Walker and DL Barbour. "Designing in vivo concentration gradients with discrete controlled release: a computational model." J Neural Eng 7(4):046013, 2010.

Network effects of spike timing induced plasticity

A fundamental property of many neural networks, including the cerebral cortex, is that neurons active at the same time become more easily activated at the same time in the future. This type of network modification appears to be instrumental in forming new memories and acquiring new skills. We are using computational and neurophysiological models to probe systematically the effects of synaptic "learning rules" upon small-, medium- and large-scale neural network behavior. We have observed systematic changes across these networks when small subnetworks are manipulated. By working out the rules governing such network modification, we anticipate developing novel techniques for making arbitrary changes to biological neural networks following injury that can be critical to optimizing the functional repair of damaged neural tissue.  
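The "learning rules" in question can be illustrated with a standard exponential spike-timing-dependent plasticity (STDP) window; the amplitudes and time constant below are generic textbook values, not the parameters of the models cited here:

```python
import numpy as np

def stdp(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """STDP weight change as a function of the (post - pre) spike time
    difference dt in ms. Pre before post (dt > 0) -> potentiation;
    post before pre (dt < 0) -> depression."""
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))

# Net weight change for one synapse, summed over all spike pairs.
pre = np.array([10.0, 30.0, 50.0])    # presynaptic spike times (ms)
post = np.array([12.0, 29.0, 55.0])   # postsynaptic spike times (ms)
dw = sum(stdp(tp - tq) for tq in pre for tp in post)
print(float(stdp(5.0)) > 0, float(stdp(-5.0)) < 0)
```

Applying such a rule to every synapse in a simulated network, while stimulating a chosen subnetwork, is the basic manipulation whose network-wide consequences these studies track.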

Ni R, Ledbetter N and Barbour DL. “Modeling of topology-dependent neural network plasticity induced by activity-dependent electrical stimulation.” International IEEE EMBS Conference on Neural Engineering. 6, Program No. THET9.15, San Diego, CA, 2013.

Sinha DB, Ledbetter NM, Barbour DL. "Spike-timing computation properties of a feed-forward neural network model." Front Comp Neurosci. 8:5, 2014. 

Pure-tone audiometry via Bayesian active learning

Traditional psychometrics, upon which audiometry is based, has always approached the problem of estimating detection thresholds by estimating the probability that a listener will hear a tone delivered very near threshold. This is an inherently frequentist view of probability, in which the probabilities of tone detection are estimated directly by repeated sampling. We have developed a novel, purely Bayesian estimation procedure that is considerably more efficient than conventional methods because it does not rely upon sampling theory to generate its estimates. The time required for this new audiometric test is dramatically reduced as a consequence.
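The Bayesian approach can be sketched at a single frequency with a grid posterior over candidate thresholds. This is a toy illustration of the general idea (maintain a posterior, probe where it is informative), not the published machine-learning procedure; the psychometric slope, lapse rate, and probing rule are assumed values:

```python
import numpy as np

# Candidate thresholds (dB HL) and a flat prior over them.
thetas = np.linspace(-10, 90, 201)
posterior = np.ones_like(thetas) / len(thetas)

def p_detect(intensity, theta, slope=5.0, lapse=0.02):
    """Sigmoid psychometric function: probability of hearing a tone."""
    return lapse + (1 - 2 * lapse) / (1 + np.exp(-(intensity - theta) / slope))

def update(posterior, intensity, heard):
    """Bayes rule: multiply the prior by the likelihood of the response."""
    like = p_detect(intensity, thetas)
    like = like if heard else 1 - like
    post = posterior * like
    return post / post.sum()

# Simulated listener with a true threshold of 35 dB HL.
rng = np.random.default_rng(0)
true_theta = 35.0
for _ in range(30):
    # Probe at the current posterior mean (a simple active-sampling rule;
    # information-maximizing choices converge even faster).
    probe = float(np.sum(thetas * posterior))
    heard = bool(rng.random() < p_detect(probe, true_theta))
    posterior = update(posterior, probe, heard)

estimate = float(np.sum(thetas * posterior))
print(estimate)  # close to the simulated 35 dB threshold
```

Because every trial is placed where the posterior is still uncertain, far fewer tones are needed than in a fixed staircase, which is the source of the time savings described above.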

Song XD, Wallace BM, Gardner JR, Ledbetter NM, Weinberger KQ, Barbour DL. "Fast, continuous audiogram estimation using machine learning." Ear and Hearing (in press).

Gardner JR, Song XD, Cunningham JP, Barbour DL, Weinberger KQ. "Psychophysical testing with Bayesian active learning." Uncertainty in Artificial Intelligence (in press).