Here’s one for all the non-babies out there . . . this beautiful op-ed by Dr. David Haskell (link here: https://nyti.ms/3DS4bY6) laments the decline in hearing across the lifespan, driven by the destruction of hair cells and other factors, including a decline in auditory processing. Dr. Peter Attia, one of the country’s top sources of usable information on health optimization, recently wrote about the same topic (link: https://bit.ly/45ouVeE), noting that “Hearing requires the sensory auditory system and processing of auditory signals by several parts of the brain; both of which often diminish with age.” So, are hearing aids the only way out?
Maybe not. We can also address the degradation that occurs in the neuronal connections that process language sounds – the auditory processing Peter Attia refers to. Indeed, one reason people over 40 begin to fear noisy restaurants is that the language networks formed to process those sounds in infancy have weakened over time, so older adults have trouble efficiently discriminating similar sounds – like “b”, “p”, and “t”. But this “acoustic mapping” can potentially be improved via experience-expectant plasticity, the brain’s ability to change in response to external stimulation.
Older adult brains are significantly less “plastic” than young brains, but they can and do reorganize language pathways during sleep. We might be able to take advantage of this process by exposing the sleeping – but still active! – brain to non-speech sounds: sounds that are not language but share its key acoustic characteristics, such as timing on the scale of tens of milliseconds and rapidly changing frequency transitions. The brain will process these complex incoming acoustic patterns and try to discriminate between differing sound profiles, which can enhance its ability to perceive and process actual speech sounds. By focusing the brain on the basic acoustic components of each sound, rather than triggering the automatic, more holistic processing we do as adults, this type of exposure could improve the neural encoding of speech sounds.
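To make “speech-like timing” a little more concrete, here is a minimal sketch (in Python with NumPy; the specific frequencies, durations, and sample rate are illustrative assumptions, not our actual stimulus design) of a non-speech tone whose pitch glides over a few tens of milliseconds – roughly the timescale of the frequency transitions that distinguish consonants like “b” and “d”:

```python
import numpy as np

SR = 16_000  # sample rate in Hz; an assumed value for this sketch


def make_transition_tone(f_start=1200.0, f_end=2400.0,
                         transition_ms=40.0, steady_ms=160.0, sr=SR):
    """Synthesize a non-speech tone whose frequency glides from f_start
    to f_end over tens of milliseconds, then holds steady -- loosely
    mimicking the rapid transitions found in speech."""
    n_trans = int(sr * transition_ms / 1000)
    n_steady = int(sr * steady_ms / 1000)
    # Instantaneous frequency: a linear glide, then a constant segment.
    freq = np.concatenate([
        np.linspace(f_start, f_end, n_trans),
        np.full(n_steady, f_end),
    ])
    # Integrate frequency to get phase, then take the sine.
    phase = 2 * np.pi * np.cumsum(freq) / sr
    tone = np.sin(phase)
    # Short raised-cosine ramps to avoid clicks at onset and offset.
    ramp = int(sr * 0.005)
    env = np.ones_like(tone)
    env[:ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))
    env[-ramp:] = env[:ramp][::-1]
    return tone * env


stimulus = make_transition_tone()  # a 200 ms non-speech stimulus
```

Varying `f_start`, `f_end`, and `transition_ms` would yield a family of such tones with differing sound profiles for the brain to discriminate between.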
Another way that passive exposure to non-speech sounds might improve language processing is by enhancing attention and cognitive control. Listening to non-speech sounds can improve the brain’s ability to selectively attend to relevant auditory stimuli while ignoring irrelevant noise – an important skill for speech processing in noisy environments. Passive training with non-speech sounds can also improve executive functions such as working memory and cognitive flexibility, which likewise support language processing.

Our RAPTbaby Smarter Sleep already uses these types of acoustic cues because an infant brain is primed to look for them during the critical period of the first year, when foundational language networks are formed. We believe these same types of cues might be just the ticket to help repair those networks much later in life. Stay tuned – you may not be a baby anymore, but we’re still here to help!