
A Tale About the Frontal Lobes as Told by a Neurologist | The MIT Press Reader

A full understanding of frontal lobe function continues to elude neurologists and neuroscientists. Neurologists caring for patients with frontal lobe damage describe dramatic changes in their cognition and personality. Cognitive neuroscientists who study healthy individuals in the lab have discovered various frontal lobe functions, such as working memory, inhibition, and cognitive flexibility. Do the findings in the lab explain the real-life impact of frontal lobe damage? Can we ever develop a theory of frontal lobe function without incorporating clinical observations of individuals with frontal lobe damage? Through the lens of the neurological patients Mark D’Esposito has encountered and from what he has learned in his lab, he attempts here to answer these crucial questions.

The following article, a transcript of D’Esposito’s acceptance speech upon receiving the Cognitive Neuroscience Society’s Distinguished Career Contributions Award (March 2023), was originally published in the Journal of Cognitive Neuroscience (September 2023 issue).

My tale starts in 1988 when I was a neurology resident at Boston University and rotating through the Boston VA Hospital. The patients most fascinating to me were in the Behavioral Neurology ward. It was where some of the most eminent behavioral neurologists, such as Norman Geschwind, Frank Benson, Marty Albert, and Mick Alexander, saw patients, which they elegantly described in the neurology literature. And some of the most eminent neuropsychologists, such as Edith Kaplan and Harold Goodglass, sharpened their assessment tools with these patients. It was a big, open 16-bed ward, and making rounds in the morning was always a neurological adventure. The deficits I observed in patients with frontal lobe damage, whether from a cerebral aneurysm rupture, a stroke, or trauma, were always the most challenging for me to understand.

Dr. Benson described one frontal patient in this ward in his book, “The Neurology of Thinking.” This patient had a condition called diabetes insipidus, which required that the amount of water he drank each day be restricted. The patient was instructed, “Don’t drink any water; don’t go near the water fountain.” Within a few minutes, the patient would be observed having a drink at the water fountain. When he was asked what he had just been told, he would immediately reply: “Don’t drink any water; don’t go near the water fountain.” He understood and remembered the instructions but did not use that knowledge to guide his actions appropriately.

As a Neurology resident, I attended a lecture by Pat Goldman-Rakic at the Boston Society of Psychiatry and Neurology meeting. I was blown away by her physiological studies of frontal lobe function in monkeys. She showed that single neurons in the prefrontal cortex (PFC) were active while the monkey was temporarily holding information in mind. And a small lesion in the PFC caused a “memory scotoma”; that is, the monkey could not temporarily retain a specific location of a stimulus presented to them. She said these findings were the neural basis of representational memory, which she said was akin to what others called working memory.

After she said that, my first thought was, what the heck is working memory? I spent 4 years in medical school and 3 years in a neurology residency, and I had never heard of working memory. Working memory was not mentioned in neurology textbooks. Neurologists did not test working memory in patients with frontal lobe damage. Sure, we tested short-term memory; I would ask my patients to repeat after me: 4–3–7–1–5–0–6, but is that working memory? And I never heard of a patient complaining of difficulty performing a delayed saccade task.

“I spent 4 years in medical school and 3 years in a neurology residency, and I had never heard of working memory.”

Without a smartphone that I could pull out of my pocket, all I could do was go to the medical library to figure this all out. But at the library, I could not find anything in the clinical neurology journals written about working memory. For the younger folks reading this, a library is a big building with rows and rows of shelves of hard copies of books and journals. This building usually has no windows, you cannot talk or Zoom there, and it smells musty.

Next, I went over to the main campus library, and lo and behold, I found a book, just written in 1986, called “Working Memory” by a psychologist, Alan Baddeley. I checked out the book and read it cover to cover. As with the data presented by Pat Goldman-Rakic in her monkey physiology studies, I had a hard time linking the data from these sophisticated cognitive experiments in healthy individuals to my clinical observations of frontal patients. At that time, it seemed to me that I was being confronted with monkey neurophysiology data loosely tied to human behavior and human behavioral data loosely tied to the brain.

So, this is where my tale begins. We have really tasty peanut butter (aka monkey physiology data) and really tasty milk chocolate (aka human behavioral data). But what I wanted was a Reese’s peanut butter cup. Two great tastes that taste even better together.

Working memory is the ability to temporarily maintain and manipulate information without relevant sensory input. The term working memory was introduced by George Miller over 50 years ago in a book called “Plans and the Structure of Behavior.” Subsequently, Karl Pribram proposed that the neural machinery supporting working memory likely includes the PFC.

If you type “working memory” into Google Scholar, you will get over 6 million hits. If you type “working memory” into ChatGPT, the Web site may crash. I think it is safe to say that mine is not the only interest to have been sparked by Pat Goldman-Rakic and Alan Baddeley over the past 40+ years.

In this tale, I will attempt to unpack this definition of working memory in mechanistic neural terms and tie it to frontal lobe function. And what I mean by “mechanism” is the “process by which something takes place,” which, of course, can be described at many levels of detail. Because this is my tale, I have decided to give you the “light and airy” level of “detail” of these mechanisms rather than the “deep dive” level of detail.

The first mechanism critical for working memory is one that underlies the online maintenance of relevant information necessary for a goal-directed behavior. Our ability to hold information in mind allows us to bridge time and act based on internal goals and intentions rather than be at the mercy of the constant sensory input in our environment.

Our understanding of the neural basis of working memory took a significant leap forward in 1971 when Joaquin Fuster at UCLA (and Kubota and Niki in Japan) first discovered neurons within monkey PFC that exhibited activity during the retention interval of a delayed-response task. In this type of working memory task, food, such as a piece of apple, is placed in one of two wells in front of the monkey, in full view. Subsequently, a blind is lowered, preventing the monkey from seeing the food. Finally, after a short delay, the blind is lifted, and the monkey is allowed to reach into one of the wells, testing its ability to temporarily retain information.
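For readers who think in code, here is a minimal, hypothetical Python sketch of that trial structure (sample, delay, choice). The maintenance probability and well labels are invented for illustration; this is not a model of any actual experiment, only a way to see why disrupting delay-period maintenance pushes choice accuracy toward chance.

```python
# A minimal, hypothetical sketch of the delayed-response trial structure
# described above (sample -> delay -> choice). All parameters are invented
# for illustration; this is not a model of any actual experiment.
import random

def run_trial(p_maintain=0.9, wells=("left", "right")):
    baited = random.choice(wells)        # food placed in one well, in full view
    # blind lowered: the only trace of the baited well is an internal memory,
    # which survives the delay with probability p_maintain
    memory = baited if random.random() < p_maintain else None
    # blind lifted: respond from memory if it survived, otherwise guess
    choice = memory if memory is not None else random.choice(wells)
    return choice == baited

def accuracy(n_trials=1000, p_maintain=0.9):
    return sum(run_trial(p_maintain) for _ in range(n_trials)) / n_trials

if __name__ == "__main__":
    # intact vs. disrupted delay-period maintenance (e.g., after a PFC lesion)
    print("intact maintenance:   ", accuracy(p_maintain=0.95))
    print("disrupted maintenance:", accuracy(p_maintain=0.50))
```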

In the early 1990s, our laboratory attempted to determine if human PFC also exhibited activity during the delay period of working memory tasks, but I was not sure at the time that we had the tools to address this question. fMRI had just been discovered, and it did not seem to me that this method had the temporal resolution to observe brief behavioral events, such as the delay period, within a single trial. A shout-out to Geoff Aguirre and Eric Zarahn, my first two graduate students at the University of Pennsylvania, who developed a method to do this, and my first post-doc, Brad Postle, who implemented and refined this method for working memory tasks. And other shout-outs to Clay Curtis, Jason Druzgal, Charan Ranganath, Bart Rypma, and Eric Schumacher in these early years who helped make this method, which we called “trial-based functional MRI,” a reliable tool for the questions we were asking.
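To illustrate the basic idea behind that method, here is a hedged sketch in Python using only NumPy: separate regressors are built for the cue, delay, and probe periods of each trial so that delay-period activity can be estimated on its own. The timing values and the gamma-shaped hemodynamic response function are assumptions chosen for illustration, not the parameters used in our studies.

```python
# A minimal, hypothetical sketch of the idea behind "trial-based"
# (event-related) fMRI analysis: model the cue, delay, and probe periods of
# each trial with separate regressors. All timing values are illustrative.
import numpy as np

TR = 2.0                          # seconds per scan (assumed)
n_scans = 120
t = np.arange(n_scans) * TR

def hrf(time):
    # a simple gamma-shaped hemodynamic response function (illustrative)
    return (time ** 5) * np.exp(-time) / 120.0

def regressor(onsets, duration):
    # boxcar for the event, convolved with the HRF, trimmed to scan length
    boxcar = np.zeros(n_scans)
    for onset in onsets:
        boxcar[(t >= onset) & (t < onset + duration)] = 1.0
    return np.convolve(boxcar, hrf(np.arange(0, 30, TR)))[:n_scans]

trial_starts = np.arange(0, n_scans * TR - 30, 30.0)   # one trial every 30 s
design = np.column_stack([
    regressor(trial_starts, duration=2.0),          # cue period
    regressor(trial_starts + 2.0, duration=8.0),    # delay period
    regressor(trial_starts + 10.0, duration=2.0),   # probe/response period
])
print(design.shape)   # (n_scans, 3): one column per within-trial period
```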

With fMRI, we and others have consistently shown that the PFC in humans, like in monkeys, exhibits delay activity while maintaining task-relevant information. Arguing against the idea that these findings were an epiphenomenon of the fMRI signal, Clay Curtis showed that this delay activity tracks, on a trial-by-trial basis, the accuracy of the participant in remembering information over a short period. That is, this delay activity reflects the fidelity of the actively maintained representation.

How information is maintained has been an active area of research. The early view that online maintenance occurs via persistent spiking activity is evolving into the idea that representations can also be maintained by sparse bursts of neuronal spiking that induce changes in synaptic weights. Moreover, it has also become clear that direct thalamic input into the lateral PFC is critical for sustaining delay activity.

We soon discovered that the PFC is not the only brain region that exhibits delay activity. In fact, many brain regions exhibit delay activity. For example, holding the image of someone’s face in mind will evoke delay activity in the fusiform face area. Likewise, holding in mind the smell of a flower will evoke delay activity in olfactory cortex. These observations suggest that the online maintenance of any type of information activates the same neural circuits engaged while perceiving that information.

On the basis of our work and that of others investigating online maintenance, we reasoned that any ensemble of neurons could serve as a working memory buffer through delay activity. This mechanism eliminates the need for currently relevant representations to be transferred from perceptual systems to dedicated, specialized buffers in the brain. This idea is referred to as the sensorimotor recruitment model of working memory. If perceptual systems temporarily store the same information they process, what type of information is stored in the PFC?

In a landmark article published over 20 years ago, Earl Miller and Jon Cohen proposed that the PFC “represents goals and the means to achieve them.” And that cognitive control stems from the active maintenance of these goal representations. However, the PFC covers a lot of territory. The lateral portion of the PFC, where these goal representations are proposed to reside, comprises at least 12 functionally distinct areas, each with distinct cytoarchitecture and connectivity patterns. Thus, a significant focus of our laboratory over the years has been to understand how goal representations might be organized throughout this heterogeneous portion of the PFC.

Twenty years ago, Etienne Koechlin and colleagues published an article that hypothesized that the lateral frontal cortex is organized hierarchically; as you move anteriorly in the PFC, regions are involved in higher-order processing of plans en route to action. This idea led to numerous studies investigating whether a frontal hierarchy exists and, if it does, what the nature of this hierarchy might be.

David Badre, when he was a postdoc in our laboratory, did an fMRI study in healthy individuals to test this idea, where he convincingly demonstrated that a functional gradient along the posterior-to-anterior axis of the frontal cortex does exist, which he described as a representational hierarchy. He found that when an action is selected based on less abstract representations (such as a color corresponding to one particular finger response), posterior regions of the frontal cortex are engaged. However, when an action is selected based on more abstract representations (such as when a color corresponds to a particular finger response only when it is paired with a specific shape), activity shifts to more anterior locations in the PFC.
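To make the notion of “more abstract” concrete, here is a small, hypothetical Python sketch of the two levels of rules described above. The particular colors, shapes, and finger mappings are invented for illustration; the point is only that the second-order rule uses context (the shape) to decide which first-order mapping is currently in force.

```python
# A hypothetical sketch of two levels of rule abstraction.
# All mappings are invented for illustration only.

# First-order rule: a color directly specifies a finger response.
FIRST_ORDER = {"red": "index", "blue": "middle"}

def respond_first_order(color):
    return FIRST_ORDER[color]

# Second-order rule: the shape is a context that determines WHICH
# color-to-finger mapping is currently in force.
SECOND_ORDER = {
    "square": {"red": "index", "blue": "middle"},
    "circle": {"red": "middle", "blue": "index"},
}

def respond_second_order(shape, color):
    return SECOND_ORDER[shape][color]

print(respond_first_order("red"))             # index
print(respond_second_order("circle", "red"))  # middle: same color, new context
```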

In collaboration with Bob Knight’s laboratory, Brad Voytek analyzed data from intracranial neuronal recordings of four epilepsy patients performing a task similar to the one David used in his fMRI study with healthy individuals. A directional analysis between frontal regions revealed that theta-phase encoding in the PFC was a stronger predictor of gamma activity in the more posterior, premotor/motor cortex than the reverse. This finding suggested that there was information flow from the higher-order, more anterior regions of the frontal cortex to lower-order, more posterior regions.

For over 20 years since I moved from Penn to Berkeley, Bob Knight has been a great mentor, colleague, and friend. Bob’s ability to play golf, however, is not so great. But his attempt at golf is a perfect example of the type of representational hierarchy that likely exists in the lateral PFC. Bob always finds himself in a situation where he really needs his frontal lobes. When Bob’s golf ball lands behind a bush, the most posterior portion of his lateral PFC is maintaining the location of the green that he is aiming for because he cannot see it through the bushes. A more anterior portion of Bob’s PFC is maintaining a more abstract representation — the rules of golf — which prevents him from kicking the ball into the fairway and being penalized two strokes. And finally, the most anterior portion of Bob’s PFC is maintaining the most abstract representation of all — that golf will make him healthier and live longer by lowering his cholesterol level and blood pressure.

And I know how important it is in today’s world to replicate one’s scholarly work, so I want to assure you that Bob’s wayward golf shots have been replicated over and over again.

The fMRI and intracranial neuronal recording studies I mentioned, and my observations of Bob playing golf, support the idea that there is a functional gradient across the lateral PFC that may be organized hierarchically. However, the precise nature of this functional gradient remains debated, and the jury is still out. Some argue that what distinguishes one level of the hierarchy from the next is the type of processes that are engaged rather than the type of stored representations. Others argue that it is the complexity of action rules being represented that is the organizing principle. Regardless of the ground truth, everyone agrees that hierarchical representation and processing of information in the brain is advantageous because it is computationally efficient and flexible.

These initial models of a frontal hierarchy were derived from data using neurophysiological methods, such as fMRI or electrocorticography in humans or single-unit recording in monkeys, which only provide indirect evidence for such a hierarchy. Direct evidence for hierarchical relationships between brain regions can only be obtained from methods where one can disrupt function at one level and observe its effects on another level. One way to understand the logic of this approach is to visualize a hierarchy as a pyramid. If a hierarchy exists, damage to the lowest level of the pyramid will affect all levels above it. In contrast, damage to the highest pyramid level will not disrupt levels beneath it. The effects of damage would be asymmetric.
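The asymmetry can be stated in a few lines of Python. This is a toy sketch under one simplifying assumption (mine, not drawn from the study itself): a task at a given level of the pyramid depends on that level and on every level below it.

```python
# A toy sketch of the asymmetry argument. Assumption (for illustration only):
# a task at level k depends on level k and on every level below it.
LEVELS = [1, 2, 3, 4]   # 1 = lowest/most concrete, 4 = highest/most abstract

def task_impaired(task_level, lesioned_level):
    # the lesion matters only if it hits a level the task depends on
    return lesioned_level <= task_level

for lesion in LEVELS:
    affected = [task for task in LEVELS if task_impaired(task, lesion)]
    print(f"lesion at level {lesion} impairs tasks at levels {affected}")
# Damage low in the pyramid propagates to every level above it,
# while damage at the apex leaves the lower levels intact.
```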

A shout-out to the lesion method, which can directly test if an asymmetric pattern of deficits is observed in a group of patients with focal frontal lesions in different locations. We administered the tasks from our fMRI study to our frontal patients and observed that if a patient had a frontal lesion at one level, their impairment was more likely to occur on tasks that required that level and the next higher level. However, damage at one particular level did not affect performance on tasks at a level beneath it. These findings are direct evidence that goal representations in the lateral PFC are organized hierarchically.

There is one minor conceptual problem with the idea that a frontal hierarchy exists where the most anterior portion of the frontal cortex is at the top. A grant reviewer elegantly stated this problem. Here is the direct quote: “The logic of this experiment seems to dictate that a fourth level of abstraction (‘metacontext’ if there is such a thing) would be represented just in front of the forehead.”

As a postdoc in our laboratory, Derek Nee performed several elegant studies that produced results that offer a resolution to this conundrum regarding where the “top” or the “apex” of the frontal hierarchy is. Using fMRI, where he analyzed connectivity patterns between frontal regions, and TMS, where he disrupted function in different frontal regions, Derek did not find support for the idea that the top of the frontal hierarchy is located in the most anterior portion of the frontal cortex, or just in front of the forehead. Instead, he found evidence that a mid-lateral region of the PFC appears to be the apex of the frontal hierarchy. He proposed that this region serves as a convergence zone, receiving inputs from both more anterior and more posterior frontal regions. The most posterior frontal regions likely represent concrete contextual information, and the most anterior frontal regions likely represent more abstract information, such as future goals and plans. Both of these regions feed into this mid-lateral prefrontal region, allowing these different levels of representation to be integrated. In this way, this mid-frontal region is an area of functional convergence, sometimes referred to in the literature as a dynamic hub.

It turns out that this mid-lateral prefrontal region has unique characteristics that have been underappreciated and support this version of the organization of a frontal hierarchy. First, based on monkey anatomical data, the mid-lateral frontal cortex sends more projections to other areas, as compared with receiving projections, than any other area of the frontal cortex. Second, the mid-lateral frontal cortex, and not the frontal pole, is the last area of the frontal cortex to develop fully. And third, the mid-lateral frontal cortex has the highest concentration of dopamine receptors in the frontal cortex.

So that is a rough sketch of the likely architecture of the lateral PFC. I will now turn to how these different levels of frontal representations can serve as top–down control signals that can modulate processing in the rest of the brain.

The PFC projects to and receives projections from all other areas of the cortex. It also has extensive reciprocal connections with all subcortical regions, such as the amygdala, hippocampus, basal ganglia, thalamus, and brainstem neuromodulatory systems. This massive connectivity places it in a highly privileged position to provide feedback signals to the rest of the brain. What is the evidence that feedback signals emanate from the PFC? The first direct evidence came from a study by Joaquin Fuster almost 40 years ago in monkeys, during which he disrupted prefrontal cortical function while simultaneously recording neural activity in the visual association cortex. Surprisingly, this brilliant study has gotten relatively little attention over the years.

In this study, lateral PFC function was disrupted with a cooling probe while simultaneously recording single unit activity in the inferior temporal cortex during a delayed match-to-sample task. Cooling of the PFC caused a decrease in delay activity in the temporal cortex, providing direct evidence that the PFC was the source of a feedback signal that could modulate temporal cortex activity.

In addition to modulating the gain of temporal cortex activity, Fuster also discovered that the PFC modulates the selectivity of activity in the temporal cortex. For example, temporal cortex neurons that coded for specific color attributes became less color selective after the PFC cooling. In other words, a neuron that responded only to the color “green” before the PFC was cooled was equally responsive to any color after cooling. Think about that finding for a moment. Patches of visual association cortex that we have conceptualized as coding for highly specialized information — color, faces, motion — are less selective when not under the influence of the PFC.
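One way to picture the difference between those two effects, gain and selectivity, is with a toy tuning profile. The numbers below are invented for illustration only; the sketch simply shows how removing top-down feedback can both shrink the preferred response (gain) and flatten the profile across colors (selectivity).

```python
# A toy sketch of gain vs. selectivity modulation. All response values are
# invented for illustration only.

def tuning(with_pfc_feedback=True):
    # with feedback: a strong response to the preferred color (green) and
    # weak responses to the others; without feedback: weaker and flatter
    if with_pfc_feedback:
        return {"red": 0.2, "green": 1.0, "blue": 0.2}
    return {"red": 0.4, "green": 0.5, "blue": 0.4}

def selectivity_index(responses):
    # a simple index: peak response divided by the mean response
    mean = sum(responses.values()) / len(responses)
    return max(responses.values()) / mean

for state in (True, False):
    label = "with PFC feedback" if state else "without PFC feedback"
    resp = tuning(state)
    print(f"{label}: peak = {max(resp.values()):.1f}, "
          f"selectivity = {selectivity_index(resp):.2f}")
```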

Our laboratory attempted to replicate these findings in two ways. First, by performing fMRI studies in healthy individuals after perturbation of prefrontal function with TMS, and second by scanning patients with focal PFC lesions. A shout-out to Brian Miller and Taraz Lee for doing these two studies.

In these studies, we investigated the effect of disrupting PFC function on the selectivity of category representations of faces or scenes in the temporal cortex. Different object categories, such as faces and scenes, are represented by spatially distributed yet overlapping areas in the extrastriate visual cortex and can be easily identified with fMRI. We reasoned that, like Fuster’s finding that color selectivity is reduced in the temporal cortex without frontal feedback, we would find less selectivity to faces and scenes after disruption of PFC function with TMS in healthy individuals or in patients with focal frontal lesions. And this is precisely what we found. The face and scene areas were less selective to their corresponding category after frontal TMS in healthy participants. In addition, in the patients with unilateral focal frontal lesions, the face and scene areas in the same hemisphere as the frontal lesion were less selective to their corresponding category than in the other hemisphere. Moreover, participants with the greatest reduction in category tuning following frontal TMS also exhibited the greatest working memory deficit. Together, this causal evidence clearly supports the notion that the lateral PFC is one source of top–down feedback signals that act via both gain and selectivity mechanisms.

The cerebral cortex is strongly modulated by diffuse inputs from subcortical and brainstem systems transmitting dopamine, norepinephrine, acetylcholine, and serotonin. However, how these neurochemical systems modulate the PFC is still relatively underspecified because the number of studies using pharmacological manipulations is small.

I first learned about the chemistry of the frontal lobes as a Neurology Fellow when I read a landmark monkey study published in 1979 by Pat Goldman-Rakic and colleagues. In that study, dopamine was depleted in the PFC, which caused monkeys to perform poorly on a working memory task. The deficit was as severe as in monkeys with a frontal lesion. Importantly, the depletion of other neurotransmitters, such as serotonin, did not impair their working memory function, just the depletion of dopamine. And dopaminergic drugs administered to these dopamine-depleted monkeys reversed their working memory deficits.

As a neurologist in training, I was taken aback by the idea that there could be such a tight link between a single neurotransmitter and a specific cognitive process. At the time, we had no drugs to prescribe to patients to improve cognitive function.

When I was a resident at the Boston VA Hospital, there was a ward full of patients with Parkinson’s disease because it was common then to adjust their medications in the hospital rather than at their home. Because these patients had severe dopamine depletion off their medications, I reasoned that I could test them on tasks before and after they took their dopaminergic replacement drug to determine the effects of dopamine on cognition.

When I was a resident and on call, requiring me to stay overnight in the hospital, I would wander over to the Neurology ward early in the morning, before the nurses gave my patients their first dopaminergic medication of the day, and test them on a few cognitive tasks at the bedside. Then, I would return and test them after they got their dopaminergic medication. I consistently observed that my Parkinson’s disease patients were worse on cognitive tasks thought to be sensitive to frontal lobe function before taking their medications, that is, when they were dopamine depleted, compared with when they had their dopamine replenished with their medications. These observations gave me an initial glimpse into the role of dopamine in human frontal lobe function but left me with many questions that I hoped I could answer someday.

After I began my first faculty position at Penn in the Neurology department, I would regularly exit the medical center at lunchtime onto Spruce Street to go to my favorite food truck. At that truck, I was always stuck waiting in line behind undergraduates. One day while waiting patiently, I realized that these undergraduates might be interested in volunteering for a study I had in mind. Given the safety of the dopaminergic drugs I prescribed to my Parkinson’s patients, I thought that another way to test the effect of dopamine on frontal lobe function was to give these same drugs to healthy young individuals. These dopaminergic drugs were short-acting, safe, and free of side effects, and I was proposing to improve their working memory function rather than harm them in any way.

With this approach, I was asking my graduate students and postdocs if they wanted to perform the most uncontrolled studies they would ever do in their careers. Unlike pharmacological studies in animals, in human studies, we cannot precisely control the amount of dopamine that enters their brains or target a specific location. Metaphorically, we could only cut open their skull, pour dopamine all over their brain, and observe what happens. A huge shout-out to all the brave souls in our laboratory over the years who were willing to do this — Esther Aarts, Roshan Cools, Charlotte Boettiger, Ian Cameron, Daniella Furman, Sasha Gibbs, Andy Kayser, Dan Kimberg, Sharon McDowell, Margaret Sheridan, Michael Silver, Deanna Wallace, Rob White, and Bianca Wittmann.

Individuals with lower baseline working memory capacity improve on working memory tasks with dopamine augmentation, whereas those with higher baseline working memory get worse.

We observed that young, healthy individuals perform better on working memory tasks when given dopaminergic medications than when given a placebo. We also discovered that the effect of a dopaminergic medication was not the same for everyone but interacted with their working memory capacity. Individuals with lower baseline working memory capacity improve on working memory tasks with dopamine augmentation, whereas those with higher baseline working memory get worse. This observed inverted-U dose–response was consistent with monkey studies, indicating that “more” dopamine in the PFC is not “better.” Rather, there is an optimal dopamine concentration necessary for optimal function of the PFC.
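A quick way to see why the same drug can help one person and hurt another is to write the inverted-U down. The quadratic form and every number below are arbitrary illustrations, not values fitted to any data; the sketch only shows that a fixed dopamine boost moves a low-baseline individual toward the peak and a high-baseline individual past it.

```python
# A hypothetical sketch of the inverted-U idea: performance peaks at an
# intermediate dopamine level, so the same drug-induced increase helps a
# low-baseline individual but hurts a high-baseline one. All numbers are
# arbitrary illustrations, not fitted to any data.

def performance(dopamine, optimum=1.0, width=0.5):
    return max(0.0, 1.0 - ((dopamine - optimum) / width) ** 2)

drug_boost = 0.4
for label, baseline in [("low-baseline individual ", 0.6),
                        ("high-baseline individual", 1.0)]:
    before = performance(baseline)
    after = performance(baseline + drug_boost)
    change = "improves" if after > before else "worsens"
    print(f"{label}: {before:.2f} -> {after:.2f} ({change})")
```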

When Roshan Cools joined our laboratory, our dopamine studies became turbocharged when she designed experiments aimed at testing precise mechanisms that were less exploratory in nature, as well as adding PET imaging, where we could measure dopamine in the brain to complement our pharmacological fMRI studies. The story of the relationship between dopamine and the PFC is complex and still evolving. The short summary of our main findings is that there is an intimate relationship between dopamine that acts on the striatum versus dopamine that acts on the PFC, as well as a differential role in working memory for different classes of dopamine receptors. A critical finding by Roshan that still motivates our work today is that high dopamine levels within the PFC likely optimize the maintenance of task-relevant representations, whereas high dopamine levels within the striatum optimize the flexible updating of that maintained information.

When Emily Jacobs was a graduate student in our laboratory, she enlightened me, in a gentle, nonconfrontational sort of midwestern way, that I was ignoring hormones in this dopamine story. She then delivered papers to my e-mail inbox, again, in a gentle, nonconfrontational midwestern sort of way, that demonstrated that estradiol enhances dopamine activity in the brain, estradiol levels are higher in the PFC than in other cortical areas, and studies of postmenopausal women on estrogen suggest a direct link between estrogen and working memory. For her thesis, she performed a heroic study to test the hypothesis that the modulation of prefrontal dopamine activity mediates estradiol’s effects on working memory. She performed fMRI scans on 24 young, healthy women during a working memory task at two points in their menstrual cycle when their estradiol levels, based on blood samples, were at their lowest and highest. She found that estradiol levels modulated prefrontal cortical activity depending on one’s baseline dopamine levels, as measured by the COMT enzyme in their blood. Moreover, the extent of this modulation predicted an individual’s performance on the working memory task. Since leaving Berkeley, Emily has made it loud and clear that findings such as these have direct ramifications for women’s health that must be addressed.

The take-home message from these studies is that neurotransmitters and hormones matter. We can only develop a complete model of cognition by incorporating the role of brainstem neuromodulatory and hormonal systems. And importantly, studying the effects of neurotransmitters and hormones on brain function provides a blueprint for potential therapies to remediate deficits in frontal lobe function.

Studying the effects of neurotransmitters and hormones on brain function provides a blueprint for potential therapies to remediate deficits in frontal lobe function.

My focus in this tale has been on one frontal system, the lateral PFC. The medial and orbital PFC comprise other distinct frontal systems. Clinically, damage to these different frontal systems leads to different cognitive and behavioral profiles. The whole tale of the frontal lobes will require an understanding of the relationship between these systems. And we must also understand how these frontal systems are embedded within the large-scale organization of the brain. The boom in network neuroscience in recent years has given us the tools to make significant progress in this area.

As I reflect on the mechanisms I have described in this tale, I believe there are plausible links between the breakdown of these mechanisms and the behaviors seen in my patients.

For example, faulty online maintenance of goal representations may explain why frontal patients often stop what they do before their intended task is completed or abruptly switch to doing something else rather than what they originally intended to do. In his book “The Working Brain,” the eminent Russian neuropsychologist Alexander Luria described a patient he sent from his office to the patient’s hospital bed to fetch the patient’s cigarettes. The patient fully understood Dr. Luria’s instructions and made his way to his hospital bed, but when he met a group of patients coming toward him, he turned around and then followed them, never following through on his original plan.

A breakdown in the hierarchical organization of goal representations may explain why patients can achieve low-level goals but do not always reach their higher-level goals. Myrna Schwartz described a patient at the Moss Rehabilitation Research Institute in Philadelphia whom she asked to wrap a gift. The patient performed all the lower-level subgoals required to successfully wrap a gift box, such as properly cutting the wrapping paper, but she did not reach the ultimate, highest-level goal because she neglected to put the gift into the box before wrapping it.

A failure of top–down feedback signals emanating from the PFC may lead to what Dr. Francois Lhermitte, a French neurologist, called imitation and utilization behavior. This behavior is the remarkable tendency for frontal patients to imitate the gestures and behaviors of the clinician examining them without the clinician instructing them to do so, even when these behaviors are contextually inappropriate and might be expected to cause embarrassment. For example, in one frontal patient, when Dr. Lhermitte put on his eyeglasses, the patient picked up a pair of eyeglasses that were sitting on a table in front of them and put them on, although the patient was already wearing eyeglasses. A former nurse with a frontal lobe tumor saw a tongue depressor on a table, grabbed it, placed it in front of Dr. Lhermitte’s mouth, and examined his throat. She also picked up a blood pressure gauge and took his blood pressure. The mere sight of an object, without feedback from the PFC providing the proper context, results in behavior that is not appropriate for the situation.

As mentioned previously, when Parkinson’s disease patients are “off” their dopaminergic medications and in a state of dopamine depletion, their behavior is strikingly similar to patients with focal frontal lesions.

The breakdown of these mechanisms leads to the behaviors I have observed in my neurology clinic and those observed by family and friends of patients with frontal damage. In retrospect, I now realize there was a link between the neural mechanisms of working memory that Pat Goldman-Rakic and Joaquin Fuster had discovered and what I observed in my patients as a neurologist in training. I was just too young to see it, but I see it now, loud and clear. I believe they call this wisdom, which is one of the benefits of getting old.

Over the past 30 years, tremendous progress has been made in understanding these mechanisms. I believe we have reached a point where this knowledge can be translated into meaningful therapeutic interventions for patients with frontal lobe damage. Many neurological disorders can damage the PFC. These include traumatic brain injury, stroke, cerebral aneurysm rupture, neoplasms, herpes encephalitis, epilepsy, and neurodegenerative diseases such as frontotemporal dementia. Dysfunction of the PFC is also proposed to underlie many psychiatric disorders such as schizophrenia, depression, and obsessive–compulsive disorder, as well as developmental disorders such as attention-deficit hyperactivity disorder and autism. And the normal function of the PFC can be affected by many other conditions, such as stress, sleep disorders, and normal aging. In this much broader context, frontal lobe syndromes are highly prevalent in our society.

A behavior commonly seen in patients with frontal lobe damage is called perseveration, an abnormal repetition of a specific behavior, such as a motor act, verbalization, drawing, or writing. For example, when one individual was asked to write a single word, a repetitive string of letters was produced — “ho – ho – ho – ho -lo- ho – lo – ho – ho – lo -lo -lo….” — which filled an entire page. However, this writing sample was produced by my daughter Zoe when she was about 3 years old, showing off her writing skills at the time. My wife was very impressed, but all I could see was how undeveloped her frontal lobes were. It is not easy being the kid of a neurologist. To me, frontal lobe syndromes are everywhere I look.

The pharmaceutical industry, rightly so, focuses on developing drugs that will cure neurological and psychiatric diseases. But unfortunately, it has yet to focus on drugs targeting specific brain systems that may improve cognitive deficits in the disorders I have highlighted here. Neurologists in clinical practice regularly prescribe medications to patients with frontal lobe deficits that have been approved for other conditions. For example, the dopaminergic drugs I have described that are approved for Parkinson’s disease are prescribed to patients with a traumatic brain injury and are somewhat effective in improving their behavioral and cognitive deficits. But that is not good enough, and we need a more significant effort by the pharmaceutical industry to develop novel therapies that can improve cognition in a meaningful way.

A promising approach that can potentially be as effective as drugs is novel cognitive therapies designed to target and strengthen the mechanisms I have discussed today. These cognitive therapies can be either therapist driven or technology driven.

One example of a therapist-driven approach for improving frontal lobe function is goal management training, developed by Brian Levine and Ian Robertson at the University of Toronto. And a shout-out to Tony Chen and Gary Turner when they were in our laboratory, and our colleague Tatjana Novakovic-Agopian, for testing, refining, augmenting, and implementing this training in patients with traumatic brain injury and healthy elders. These 5 weeks of training comprise extensive learning and practice of strategies for performing progressively complex tasks to complete projects based on subject-defined goals. Specifically, patients learn to achieve realistic goals through individual projects, such as planning a meal, or through group projects, such as planning an outing or a presentation. After this training, significant behavioral improvements in frontal lobe function were found in traumatic brain injury patients and healthy elders.

Frontal lobe function can also be enhanced with technology-driven interventions. A shout-out to another member of our laboratory, Adam Gazzaley, a world leader in pushing forward this approach, which involves designing immersive video games that again target the mechanisms I have discussed. The first game he developed in his laboratory is called NeuroRacer, which was specifically designed to target multitasking abilities. After training, improvements in multitasking were found, which persisted for 6 months. The novel finding, however, was that participants also improved in cognitive control abilities that were not trained. More recently, Adam’s team has developed another game called EndeavorRx, the first and only video game approved by the Food and Drug Administration to treat cognitive deficits. It is now approved for children with attention-deficit hyperactivity disorder and hopefully will be approved for other cognitive disorders. Physicians can prescribe it, and it will hopefully be paid for by health insurance companies. This approach, called digital medicine, can pave the way to a new era for treating cognitive deficits in brain disorders.

These cognitive therapy approaches deviate from those implemented in rehabilitation hospitals when I was in training over 30 years ago. At that time, the idea was to observe a symptom, such as poor memory, and target a therapy to mitigate that symptom, which usually meant teaching compensatory strategies, such as instructing patients to use a memory aid like a diary, rather than offering them a treatment for their memory deficit. Goal management training, NeuroRacer, and EndeavorRx are not compensatory strategies; they are therapies that have evolved from understanding the neural mechanisms underlying the frontal system functions they are designed to treat.

When a person suffers damage to their PFC, they are no longer the person they used to be. Phineas Gage was “no longer Gage” after his accident, and the family and friends of my patients who have suffered frontal damage tell me that their loved ones are no longer the people they used to be.

A patient with damage to their fusiform gyrus, who develops an inability to recognize faces, no longer interacts with the world in the same way, but in my experience, they seem to be the same person after their injury.

“The way I see it, we can learn so much from observing individuals who, unfortunately, have suffered a brain injury, and we owe it to them to do so.”

A patient with damage to Broca’s area, who develops a complete inability to speak a single word, no longer interacts with the world in the same way, but in my experience, they seem to be the same person after their injury.

To me, this makes the frontal lobes special.

Mike Gazzaniga, whom I cannot thank enough for the impact he has had on my career, and for that matter, on the careers of all of us in the field of cognitive neuroscience, had experiences similar to mine when he observed his split-brain patients. In a video where Alan Alda is interviewing Mike for a Scientific American television special, they watch a patient do multiple tachistoscopic experiments, and each hemisphere seems to have a mind of its own. Alan Alda sees Mike looking amazed and asks, “Are you having a moment?” And Mike says, “I’ve been doing this for 35 years, and it gets me every time.”

The way I see it, we can learn so much from observing individuals who, unfortunately, have suffered a brain injury, and we owe it to them to do so. My patients who participate in our research studies always say they do it because they hope the knowledge we gain from them will help others.

If you are a graduate student or postdoc interested in figuring out how the brain works, I encourage you to seek a greater appreciation of what happens when the brain does not work. You can do that by reading the literature or finding an opportunity to have direct contact with patients with brain disorders. I have no doubt that your research can be translated at some level into knowledge that can help our patients.

And that ends my tale about the frontal lobes.

Dr. Mark D’Esposito is a Distinguished Professor of Neuroscience and Psychology and Founder and former Director of the Henry H. Wheeler, Jr. Brain Imaging Center (2000-2020) at the Helen Wills Neuroscience Institute at the University of California, Berkeley. He is also a staff neurologist at the Northern California VA Health Care System. This article first appeared in the Journal of Cognitive Neuroscience.
