Your Brain on Music: What Neuroscience Is Finally Proving
Based on a conversation with cognitive neuroscientist Daniel Levitin on StarTalk with Neil deGrasse Tyson
Humanity has long suspected it. Music heals. Music teaches. Music reaches into the brain in ways that language alone cannot. Now cognitive neuroscientist and bestselling author Daniel Levitin has written the book that makes the case with actual evidence, drawing on some 4,000 peer-reviewed articles. His latest, *I Heard There Was a Secret Chord*, covers everything from bone flutes to Billie Eilish, from Parkinson’s disease to the opioid crisis.
Humans Were Musicians Before They Could Talk
The oldest musical artifacts on record are bone flutes, 40,000 to 60,000 years old, found in human burial sites. Someone drilled precisely spaced holes into a femur to produce specific pitches. As Levitin points out, the bone flute almost certainly was not the first instrument. Long before anyone built a flute, there was percussion, there was singing, there was the human voice itself.
The deeper revelation is neurological. The neural structures that encode music are phylogenetically older than those that encode speech. In terms of brain evolution, humans were musicians before they were talkers. The archaeologist Steven Mithen has argued, in his book *The Singing Neanderthals*, that early humans may have communicated through musical sound before anything resembling language developed.
Why Music Evolved: The Memory Argument
Written language is only about 5,000 years old. Humans have been on the planet for 40,000 to 200,000 years depending on how the term is defined. For most of that time, there was no writing. The question is how hunter-gatherer communities preserved critical information across generations.
They sang it.
A song encoding the route to the water source. A song warning that the neighboring tribe was dangerous. A song explaining how to boil a plant to remove its toxins. Music resists distortion in ways that plain speech does not. Rhythm, meter, accent, rhyme scheme, and melodic structure create a constrained space that limits how much a message can drift over time. The Old Testament was sung for a thousand years before it was ever written down. Children still learn the alphabet through a song. Humanity has always known this works. Now there is scientific evidence for why.
The Default Mode Network
One of the biggest discoveries in cognitive neuroscience in recent decades is the default mode network: the finding that the brain actively wants the mind to wander.
Paying attention costs glucose. The brain is already the body’s most energy-intensive organ, and focused attention consumes even more. After sustained concentration, the mind begins to drift. This is not a failure. It is the brain shifting into a mode where most nonlinear problem solving actually happens. The default mode network was characterized in detail by Levitin’s colleague Vinod Menon at Stanford.
There are three reliable ways to enter this state intentionally: meditation, walking in nature, and listening to music. The solution that eludes a person while working tends to arrive while walking to the kitchen or lying in the bath. Dreams are another manifestation of the default mode, the brain running loose, making unexpected connections.
Music as Medicine: What the Evidence Now Shows
Levitin has been cautious about making medical claims for music since his first major book in 2006. The evidence was not there yet. Now it is.
Immune function. Music boosts immunoglobulin A, the antibody responsible for fighting infections of the mucosal system, the same system targeted by colds and COVID. It also increases natural killer cells and T cells. The cellular response is documented, even if the full clinical picture continues to be studied.
Pain management. Levitin’s lab was the first to show that listening to music a person loves triggers the brain’s production of endogenous opioids. Not at pharmaceutical levels, but at levels sufficient to meaningfully reduce pain. His argument is that music could have been part of a toolkit that helped avert the opioid crisis, allowing patients to manage pain with smaller doses for shorter periods.
Parkinson’s disease. A technique called Rhythmic Auditory Stimulation, developed by Michael Thaut, plays music at the tempo of a patient’s natural walking pace. This activates a subsidiary neural circuit that entrains to the beat, allowing Parkinson’s patients to walk smoothly. After a course of this therapy, some patients have been able to abandon their walkers and crutches for months.
Tourette syndrome and stuttering. Both conditions involve disruptions to the brain’s timing circuits. Billie Eilish has Tourette’s and has observed that the tics largely disappear when she is singing. These kinds of findings have prompted formal grant applications to the National Institutes of Health, where Levitin serves on a research panel. The NIH now has budget lines across multiple institutes for music and medicine research.
Music, Memory, and Alzheimer’s
Tony Bennett had Alzheimer’s. During episodes when he could not remember that he was Tony Bennett, he could still sing his songs. Glen Campbell kept performing long after his diagnosis. His brain scans during his farewell tour showed that roughly half his brain was functionally offline. He was still arguably one of the best guitarists alive.
This is explained by cognitive reserve. Musicians build such an extensive redundancy of neural pathways through years of practice that they can lose enormous amounts of brain tissue and still draw on the circuits that remain. A unique sensory cue, a song associated with a specific period of life, activates the same neural family as the original experience. That pattern may survive damage that destroys almost everything else.
The Sad Song Paradox and Learning at Any Age
One of the most counterintuitive findings in Levitin’s work involves treating depression. Playing happy music for a depressed person tends to make things worse. Depression often comes with a sense of being misunderstood, and a cheerful song can feel like one more person who does not understand. A sad song, on the other hand, offers company. Someone else has been in that same place, stared into the same darkness, and come through it enough to create something beautiful. Levitin is currently working with a UCLA group on treating drug-resistant depression using a combination of pharmaceuticals, talk therapy, and carefully chosen music.
On the subject of learning: Levitin’s grandmother escaped Nazi Germany in 1939 and, on her 80th birthday, received an $80 keyboard from Radio Shack. She taught herself to play “God Bless America” every morning. By her 82nd birthday, she had worked out a left-hand harmony. She played that song until she died at 97. The neuroscience is clear that learning an instrument at any age is neuroprotective. It builds cognitive reserve, creates new neural pathways, and generates a genuine sense of agency. It is never too late to start.



That reference to "sad songs" and clinical depression was interesting, but I really think it depends on the baseline temperament of the individual.
There's a reason most people can only handle upbeat music and movies with happy endings. A lot of people simply aren't emotionally calibrated for the darker subtext of life. The shadows that offset the light.
One person's delicious and life-affirming melancholy is another's spiraling despair. Coltrane at his most free is complex ecstasy for me, but the musical equivalent of an extended drug cartel torture session for another.
One thing is clear though: toxic positivity in the face of suffering is contraindicated. You can't gloss over hard knocks.
And that's why music is so astounding: it gives shape to nebulous feeling in a way nothing else can. Except perhaps love. Or unbidden magic.