From the first wound dressed with leaves to CRISPR editing a human embryo, medicine is the oldest applied science and the most incomplete one. The map is enormous. The terra incognita is larger.
Medicine is the organized human attempt to understand the body, prevent its failure, and repair it when it fails. That sentence contains three distinct projects, each with its own history, tools, and frontier, and all three remain radically incomplete.
Medicine is older than writing. Every human culture that has ever existed has had healers, people who treated wounds, managed pain, oversaw birth and death, and distinguished between conditions that resolved on their own and conditions that required intervention. The vast majority of what those healers did was wrong. Some of it was right, but for the wrong reasons. And a small fraction was genuinely effective: analgesic plants, wound closure techniques, isolation of the contagious, all discovered empirically before anyone understood why they worked.
The transformation of medicine from craft to science happened mostly in the last 200 years. The germ theory of disease (Pasteur and Koch [1,2]), randomized controlled trials (Bradford Hill [3]), the structure of DNA (Watson and Crick, with Franklin's crystallography [4]), the first effective antibiotics (Fleming's observation [5]; Florey and Chain's isolation [6]): these are not refinements of ancient wisdom. They are ruptures. Medicine before and after germ theory is not the same discipline. Medicine before and after randomized trials is not the same epistemic project.
The honest account of medicine's history requires holding two things at once: the enormous compassion and dedication of every generation of healers, most of whom were doing their best with the knowledge available, and the fact that much of what they did was actively harmful. Bloodletting, mercury compounds, laudanum for infants, lobotomies prescribed for depression, thalidomide prescribed for morning sickness.
Medicine is not one discipline. It is a confederation of disciplines, each with its own tools, its own foundational concepts, its own open questions. The frontier is different in each domain, and the most interesting discoveries are happening where domains collide.
Not refinements. Ruptures. Each one changed what questions medicine could ask, not just how it answered the old ones. The tools are as important as the discoveries, because the tools determine what is discoverable.
Before the RCT, medicine evaluated treatments by observing patients who received them and noting whether they recovered. The problem is that patients who receive a treatment differ from patients who don't in ways that themselves affect recovery: severity of disease, socioeconomic status, access to nutrition, willingness to comply with other instructions. These confounders make it impossible to attribute recovery to the treatment rather than to the characteristics of the patients who received it.
The randomized controlled trial solves this by randomly assigning patients to treatment and control groups, so that on average, the two groups are equivalent at baseline. Any subsequent difference in outcomes can then be attributed to the treatment. This sounds obvious. It was profoundly non-obvious to generations of physicians who were certain they knew what worked.
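The logic of randomization can be made concrete with a small simulation. This is an illustrative sketch with invented numbers, not real trial data: when sicker patients are more likely to be treated (confounding by indication), the naive observational comparison badly understates the treatment's true benefit, while a coin-flip assignment recovers it.

```python
import random

random.seed(0)

TRUE_EFFECT = 0.20          # treatment raises recovery probability by 20 points
N = 100_000

def recovery_prob(severity, treated):
    # sicker patients recover less often; treatment adds a fixed benefit
    base = 0.80 - 0.50 * severity
    return base + (TRUE_EFFECT if treated else 0.0)

# --- Observational setting: sicker patients are more likely to be treated ---
obs_treated, obs_control = [], []
for _ in range(N):
    severity = random.random()                  # 0 = mild, 1 = severe
    treated = random.random() < severity        # confounding: severity drives treatment
    recovered = random.random() < recovery_prob(severity, treated)
    (obs_treated if treated else obs_control).append(recovered)

naive_estimate = (sum(obs_treated) / len(obs_treated)
                  - sum(obs_control) / len(obs_control))

# --- RCT setting: a coin flip assigns treatment, independent of severity ---
rct_treated, rct_control = [], []
for _ in range(N):
    severity = random.random()
    treated = random.random() < 0.5             # randomization breaks the link
    recovered = random.random() < recovery_prob(severity, treated)
    (rct_treated if treated else rct_control).append(recovered)

rct_estimate = (sum(rct_treated) / len(rct_treated)
                - sum(rct_control) / len(rct_control))

print(f"true effect:            {TRUE_EFFECT:+.2f}")
print(f"observational estimate: {naive_estimate:+.2f}")  # biased far below the truth
print(f"RCT estimate:           {rct_estimate:+.2f}")    # close to the true +0.20
```

The observational estimate is not merely noisy; it is systematically wrong, because the treated group started sicker. No amount of additional observational data fixes this. Randomization does.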
What the RCT revealed: enormous numbers of widely used treatments were ineffective or harmful. In the decades after RCTs became standard, hundreds of medical practices that physicians were confident in were shown to have no benefit over placebo or over no treatment. The most famous example is hormone replacement therapy, used by millions of women for decades on the basis of observational data suggesting cardiovascular benefit; the Women's Health Initiative RCT [14] showed it increased cardiovascular risk. Observation and mechanism had pointed the wrong way. Only the trial revealed the truth.
The limits of the RCT: it is the gold standard for interventions that can be randomized and blinded: drugs, surgical procedures with active controls, behavioral interventions. It is difficult or impossible to apply to questions about lifestyle, diet, rare diseases, and complex multi-component interventions. Systematic reviews and meta-analyses that pool RCTs sit at the highest level of evidence; individual RCTs are necessary but not sufficient for practice change.
X-ray (1895), ultrasound (1950s), CT (Hounsfield and Cormack; first scanner 1971, Nobel 1979), MRI (Lauterbur and Mansfield, developed in the early 1970s, Nobel 2003), PET/SPECT (nuclear imaging of metabolic activity). Each one revealed a different aspect of the interior of the living body.
CT, computed tomography, reconstructs a stack of X-ray slices into a 3D image. A full-body CT scan takes seconds and can detect millimeter-scale pathology. It transformed trauma surgery, stroke management, and cancer staging. The radiation dose is a real risk: a single CT scan delivers roughly the equivalent of 2–3 years of background radiation, depending on the protocol. This is acceptable for diagnostic benefit in most clinical contexts; it is not acceptable as routine screening without indication.
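The dose comparison is simple arithmetic. A sketch using typical published reference values (approximate, and varying by scanner and protocol):

```python
# Back-of-envelope comparison of imaging dose to natural background radiation.
# All dose figures are typical reference values, approximate by design.
BACKGROUND_MSV_PER_YEAR = 3.0   # average natural background, mSv/year

typical_doses_msv = {            # representative effective doses
    "chest X-ray":   0.1,
    "head CT":       2.0,
    "chest CT":      7.0,
    "abdominal CT": 10.0,
}

for scan, dose in typical_doses_msv.items():
    years = dose / BACKGROUND_MSV_PER_YEAR
    print(f"{scan:12s}: {dose:5.1f} mSv  = ~{years:4.1f} years of background")
```

An abdominal CT at ~10 mSv works out to a bit over three years of background exposure; a plain chest X-ray is closer to ten days.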
MRI, magnetic resonance imaging, uses strong magnetic fields and radiofrequency pulses to image soft tissue. No ionizing radiation. Exquisite soft-tissue contrast. The neuroimaging tool of choice. fMRI (functional MRI) measures blood flow changes as a proxy for neural activity; it has been the primary tool of cognitive neuroscience for the past 30 years, and also one of the most methodologically contested (see: the "voodoo correlations" debate, the reproducibility crisis in neuroimaging).
Now: AI analysis of medical images is producing performance that matches or exceeds expert human readers for specific tasks. The frontier is not the imaging itself but the interpretation, moving from "detect an anomaly" to "characterize the anomaly at molecular resolution" using radiomics and AI-extracted features.
Before antibiotics, a scratch from a rose thorn that got infected could kill you. Bacterial pneumonia killed roughly 30% of those who got it. Streptococcal throat infections caused rheumatic fever, then rheumatic heart disease. Syphilis caused progressive dementia. Tuberculosis caused roughly one in seven of all deaths in 19th-century Europe.
Penicillin, sulfonamides, streptomycin, tetracyclines, chloramphenicol: the antibiotic era beginning in the 1940s changed the leading causes of death in the developed world from infectious disease to cardiovascular disease and cancer. It also made modern surgery possible: abdominal surgery, joint replacements, organ transplants, and chemotherapy all depend on the ability to prevent and treat bacterial infections in immunocompromised patients.
The crisis: bacteria evolve resistance to antibiotics through natural selection. Every antibiotic ever introduced has eventually been followed by resistance. The problem is not theoretical: MRSA (methicillin-resistant Staphylococcus aureus), multi-drug-resistant tuberculosis, and carbapenem-resistant Enterobacteriaceae are killing patients in hospitals right now, sometimes with no effective treatment remaining. The last truly novel class of antibiotics to reach the clinic was discovered in 1987. The pipeline is nearly empty. Agricultural use of antibiotics in livestock is a major driver of resistance evolution. The WHO has called antibiotic resistance one of the greatest threats to global health of the 21st century.
CRISPR-Cas9 is a bacterial immune defense mechanism repurposed as a gene-editing tool. Jennifer Doudna and Emmanuelle Charpentier published the foundational paper in 2012; Feng Zhang demonstrated it works in human cells in 2013. The 2020 Nobel Prize in Chemistry was awarded for its discovery.
The basic mechanism: a guide RNA directs the Cas9 protein to a specific sequence of DNA, where it makes a double-strand cut. The cell's repair machinery then either deletes a gene (knockout), corrects a mutation (correction using a template), or inserts new sequence (knock-in). Earlier gene-editing methods (zinc finger nucleases, TALENs) could do this but were slow and expensive to design for each target. CRISPR guide RNAs are cheap and designed in days.
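The targeting rule for the standard S. pyogenes Cas9, a 20-nucleotide protospacer immediately upstream of an NGG PAM, with the cut landing about 3 bp inside the protospacer, is mechanical enough to sketch in a few lines. The DNA string below is an invented toy locus for illustration, not a real gene:

```python
import re

# SpCas9 cuts ~3 bp upstream of an "NGG" PAM; a guide RNA matches the 20 nt
# immediately 5' of the PAM. This sketch enumerates candidate guides on the
# plus strand of a toy sequence (a real design tool also scans the reverse
# strand and scores off-target risk genome-wide).

def find_guides(dna: str):
    """Yield (guide_20nt, pam, cut_index) for every NGG PAM on the + strand."""
    dna = dna.upper()
    for m in re.finditer(r"(?=([ACGT]GG))", dna):    # lookahead: overlapping PAMs
        pam_start = m.start()
        if pam_start >= 20:                          # need a full 20-nt protospacer
            guide = dna[pam_start - 20:pam_start]
            cut_site = pam_start - 3                 # blunt cut ~3 bp from the PAM
            yield guide, m.group(1), cut_site

toy_locus = "ATGCGTACCTGAAATTCGGATCCGGTACGATCGATTACGCGGTTAGC"
for guide, pam, cut in find_guides(toy_locus):
    print(f"guide={guide}  PAM={pam}  cut after index {cut}")
```

This is why CRISPR design is cheap: the search is string matching. The hard part, which this sketch ignores, is ranking candidates by predicted efficiency and off-target risk across the whole genome.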
First therapeutic use in humans: in 2023, the FDA approved the first CRISPR-based therapy, Casgevy (for sickle cell disease and beta-thalassemia), developed by Vertex and CRISPR Therapeutics. It works by reactivating fetal hemoglobin production, compensating for the defective adult hemoglobin. Functional cure rates in trials: above 90%.
The frontier and the controversy: somatic cell editing (editing non-reproductive cells such as liver or bone marrow) is now clinical. Germline editing (editing embryos, which passes changes to all descendants) is technically possible and ethically prohibited by international consensus after He Jiankui's rogue germline editing experiment in China [13], which produced two gene-edited babies and resulted in his imprisonment. The scientific community's concern: germline changes are heritable and permanent in the human gene pool, and off-target edits (cuts in unintended genomic locations) cannot yet be ruled out with sufficient confidence.
mRNA is the molecule that carries genetic instructions from DNA to the ribosome, where proteins are built. The idea of using synthetic mRNA as a drug, injecting instructions for the body to build its own therapeutic proteins, was pursued by Katalin Karikó at the University of Pennsylvania from the 1990s. Her grant applications were repeatedly rejected. She was demoted. She persisted.
The core problem: synthetic mRNA is recognized by the immune system as foreign and triggers a severe inflammatory response before it can be translated. Karikó and Drew Weissman discovered in 2005 that substituting modified nucleosides (pseudouridine for uridine) made synthetic mRNA invisible to immune sensors, allowing translation to proceed. This is the foundational discovery behind mRNA vaccines. They received the 2023 Nobel Prize in Physiology or Medicine [10].
Beyond vaccines: mRNA can theoretically encode any protein. This means: cancer immunotherapy (encoding tumor antigens to train the immune system), protein replacement therapy (encoding functional proteins for diseases caused by non-functional ones), and potentially instructions for in vivo gene editing. Moderna and BioNTech both have mRNA cancer vaccine programs in trials. The platform is general-purpose in a way that conventional drug development is not.
The protein folding problem: a protein is a chain of amino acids that folds into a specific 3D structure, and its function is determined by that structure. Given the amino acid sequence, predicting the 3D structure was one of the central unsolved problems of molecular biology, because the folding process involves interactions between every amino acid and every other, and the number of possible conformations is astronomically large (Levinthal's paradox: a 100-amino-acid protein would take longer than the age of the universe to find its correct fold by random search).
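Levinthal's argument is pure arithmetic, and worth running. A sketch assuming just 3 conformations per residue and an absurdly generous sampling rate of one conformation per picosecond:

```python
# Levinthal's paradox as arithmetic: even with conservative assumptions,
# exhaustive conformational search is hopeless.
n_residues = 100
conformations = 3 ** (n_residues - 1)      # ~1.7e47 possible chain states
sample_rate = 1e12                          # conformations tried per second (1/ps)

seconds = conformations / sample_rate
years = seconds / (3600 * 24 * 365)

age_of_universe_years = 1.38e10
print(f"conformations: {conformations:.2e}")
print(f"search time:   {years:.2e} years "
      f"({years / age_of_universe_years:.1e} x age of the universe)")
```

The conclusion is that real proteins cannot be searching randomly; folding must proceed down a funneled energy landscape. AlphaFold sidesteps the physics entirely by learning the sequence-to-structure mapping from evolutionary and structural data.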
AlphaFold2 (DeepMind, 2020) predicted protein structures with accuracy matching experimental methods (X-ray crystallography, cryo-electron microscopy) for most proteins tested. AlphaFold3 (2024) extended this to protein-DNA, protein-RNA, and protein-small molecule complexes. The AlphaFold Protein Structure Database now contains predicted structures for essentially all known proteins, over 200 million entries.
The drug design consequence: knowing the 3D structure of a disease-relevant protein allows computational screening for small molecules that bind to it and inhibit or activate its function. The roughly 85% of the proteome that previously lacked a known 3D structure, and was therefore effectively out of reach for structure-based design, is now structurally characterized. The bottleneck shifts from structure prediction to identifying which binding sites are functionally meaningful and designing molecules with appropriate selectivity and pharmacokinetic properties.
The first human genome cost approximately $3 billion and took 13 years (1990–2003). The second cost $300 million. Today, a whole genome sequence costs under $200 and takes hours. This is a roughly 15-million-fold reduction in cost in 20 years, far outpacing Moore's Law in semiconductor manufacturing. It is arguably the largest cost reduction of any technology in recorded history.
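The comparison with Moore's Law is straightforward to check, using the round-number costs quoted above:

```python
import math

# Sequencing cost decline vs a Moore's-Law halving every 2 years.
cost_2003 = 3e9       # first human genome (whole project cost), USD
cost_2023 = 200.0     # whole genome today, USD
years = 20

fold = cost_2003 / cost_2023                  # 15-million-fold
halvings = math.log2(fold)                    # ~23.8 halvings in 20 years
halving_time = years / halvings               # cost halves roughly every 10 months

moore_fold = 2 ** (years / 2)                 # Moore's Law over the same 20 years
print(f"fold reduction:     {fold:.1e}")
print(f"cost halves every:  {halving_time:.2f} years")
print(f"Moore's Law, 20 y:  {moore_fold:.0f}x   vs sequencing: {fold:.0f}x")
```

Moore's Law over two decades buys you a factor of about a thousand. Sequencing delivered a factor of fifteen million over the same window.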
What this enables: population-scale genomics (the UK Biobank has sequenced 500,000 people; All of Us in the US is sequencing 1 million); tumor genomics (sequencing a cancer to identify its driving mutations and match targeted therapy); infectious disease surveillance (sequencing SARS-CoV-2 variants in near-real-time during COVID-19); liquid biopsy (detecting cell-free tumor DNA in blood to monitor cancer without tissue biopsy); non-invasive prenatal testing (detecting fetal chromosomal abnormalities from maternal blood).
The interpretation gap: sequencing generates data. Interpreting it, understanding what a given variant in a given gene means for a specific person's health, is the current bottleneck. Most variants of uncertain significance (VUS) identified in clinical sequencing cannot yet be classified as pathogenic or benign. The gap between what can be read and what can be understood is enormous. Filling it requires functional experiments, population data, and better computational models of gene function.
The most honest thing that can be said about medical science is that the solved problems are surrounded by much larger unsolved ones. Here is the current frontier, the questions where the best researchers in the world are genuinely uncertain. ν values indicate approximate current confidence that a mechanistic answer exists and is within reach.
| Domain | The Question | Why It Matters | ν |
|---|---|---|---|
| Neuroscience | What is the neural correlate of consciousness? What makes some brain states conscious and others not? | Pain management, anesthesia safety, disorders of consciousness (locked-in syndrome, vegetative state), AI sentience assessment | 0.72 |
| Oncology | Why do some tumors respond durably to immunotherapy and others don't? What determines the "cold" vs "hot" tumor microenvironment? | 40% of solid tumor patients respond to checkpoint inhibitors; 60% don't. The difference is not fully understood. Understanding it could expand immunotherapy to all cancers. | 0.52 |
| Aging | Is aging a programmed biological process or accumulated stochastic damage? Can the hallmarks of aging be therapeutically reversed? | If aging is programmable, it may be modifiable. The difference between "biological age" and "chronological age" is measurable with epigenetic clocks, but whether it is reversible is contested. | 0.48 |
| Psychiatry | What are the biological mechanisms of depression, schizophrenia, and bipolar disorder? Why do antidepressants work in some patients and not others? | Mental illness affects ~1 in 5 people. Treatment is largely empirical, trial and error. Biomarkers for diagnosis and treatment selection don't exist. The serotonin hypothesis of depression is now disputed. | 0.68 |
| Immunology | What triggers autoimmune diseases? Why is the incidence of autoimmune conditions rising in high-income countries? | Type 1 diabetes, multiple sclerosis, rheumatoid arthritis, Crohn's disease, all autoimmune. The hygiene hypothesis implicates reduced microbial exposure in childhood. The mechanism is not established. | 0.55 |
| Genomics | What does the non-coding 98% of the human genome do? What are the functional rules of gene regulation? | Most disease-associated variants identified by GWAS are in non-coding regions. Understanding what they do requires understanding the regulatory grammar of the genome, largely unknown. | 0.44 |
| Microbiology | Can antibiotic resistance be contained or reversed? What new antibiotics or phage therapies can address resistant infections? | By current projections: 10 million deaths/year from antibiotic-resistant infections by 2050. Phage therapy, antimicrobial peptides, CRISPR-based antibiotic adjuvants, all in early development. | 0.62 |
| Microbiome | What are the causal mechanisms linking microbiome composition to neurological and psychiatric conditions (the gut-brain axis)? | Associations between gut microbiome and depression, Parkinson's, autism, and Alzheimer's are established. Whether they are causal, and in which direction, is unknown. This could open entirely new therapeutic targets. | 0.38 |
| Pain | What is the mechanism of chronic pain? Why does pain persist after tissue damage has healed? | Chronic pain affects ~20% of adults globally. It is the primary driver of opioid prescribing. Central sensitization, where the pain-processing nervous system becomes dysregulated, is established but incompletely understood at the cellular level. | 0.65 |
| Metabolic | What causes obesity at the biological level? Why do GLP-1 agonists (Ozempic, Wegovy) produce such strong and durable weight loss? | GLP-1 agonists were developed for type 2 diabetes and found to produce 15–20% body weight reduction. The mechanism involves gut-brain signaling and appears to affect reward systems. The full biology is being worked out in real time. | 0.71 |
| Neuroscience | What is the mechanism of Alzheimer's disease? Why have amyloid-clearing antibodies produced clinical benefit but not cure? | Amyloid hypothesis: amyloid-β plaques cause Alzheimer's. Lecanemab and donanemab clear amyloid and show modest clinical benefit, confirming amyloid is involved, but also showing it is not the whole story. Tau pathology, neuroinflammation, and metabolic factors are all implicated. | 0.57 |
| Regenerative | How do you grow functional blood vessel networks in engineered tissue? The vascularization bottleneck in regenerative medicine. | Every tissue more than ~200µm thick requires a blood supply. Growing large, complex organs (liver, kidney, heart) in the lab requires engineering capillary networks at microscale precision. No current method reliably does this at therapeutic scale. | 0.42 |
The most consistent pattern in the frontier table: we can observe that a condition exists, measure its prevalence and impact, and even identify associated biological features, but we cannot yet specify the causal chain from molecular event to clinical outcome with sufficient precision to design a targeted intervention. The comma is the gap between association and mechanism. Between "microbiome composition correlates with depression" and "here is the specific molecular pathway by which microbiome state affects mood, and here is a therapeutic intervention that targets it."
The tools to close this gap, single-cell sequencing, spatial transcriptomics, cryo-electron microscopy, AI-assisted mechanistic modeling, are new. In many of these areas, the pace of discovery in the last five years has been faster than in the preceding two decades. The comma is narrowing. In some areas, it is narrowing very fast.
Aging is not classified as a disease by the WHO or FDA, which means it cannot be the primary endpoint of a clinical trial and pharmaceutical companies cannot receive approval for anti-aging drugs. This is not a scientific position; it is a regulatory and definitional one. The scientific question, whether the biological processes of aging are modifiable, is entirely open.
The nine hallmarks of aging (López-Otín et al., 2013), expanded to twelve in 2023, describe measurable, mechanistic changes that occur in cells during aging: genomic instability, telomere attrition, epigenetic alterations, loss of proteostasis (accumulation of misfolded proteins), deregulated nutrient sensing, mitochondrial dysfunction, cellular senescence (cells that stop dividing but remain metabolically active and inflammatory), stem cell exhaustion, and altered intercellular communication. Each hallmark is in principle targetable.
Senolytics, drugs that selectively kill senescent cells, have shown remarkable effects in mice: extending lifespan, reversing age-related decline in multiple organ systems, reducing frailty. Human trials are ongoing. Yamanaka factor reprogramming, partial expression of the four transcription factors that convert adult cells back to stem-like states, has reversed epigenetic age in animal models without causing cancer. Whether this is achievable safely in humans is the central open question.
The argument that aging should be treated as a disease: it causes suffering, disability, and death; it has identifiable mechanisms; those mechanisms appear to be modifiable. The argument against: aging is universal, it follows a predictable trajectory, and medicalizing it risks turning normal human experience into a pathology requiring treatment, with enormous implications for what counts as "healthy" and who has access to longevity interventions.
Pain is the most clinically important subjective experience. It is entirely first-person, the patient's report is the primary datum. This works when the patient can report. It fails catastrophically when they cannot: infants, patients with severe dementia, patients under anesthesia, patients in locked-in states or disorders of consciousness.
Nociception vs pain: nociception is the neural processing of noxious stimuli, detectable by measuring neural activity, stress hormones (cortisol, catecholamines), and behavioral responses (grimacing, withdrawal). Pain is the conscious experience of that signal. These are not the same thing. A patient under general anesthesia shows nociceptive responses to surgical stimuli but (we believe) does not experience pain. We cannot be certain: the only evidence is absence of recall, not absence of experience.
Awareness under anesthesia occurs in approximately 1–2 per 1,000 surgeries. Some patients experience pain during surgery and cannot signal it. This is a known, underreported, and incompletely prevented phenomenon. The mechanism of anesthetic depth monitoring (BIS, bispectral index) provides a proxy measure of brain state, but it is not a reliable measure of conscious experience.
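The incidence figure implies large absolute numbers. A rough sketch; the annual US surgical-volume figure here is an assumption for illustration, not a sourced statistic:

```python
# Rough scale of awareness under anesthesia in the US (illustrative arithmetic).
surgeries_per_year = 40_000_000        # assumed annual US procedures, illustrative
incidence_low, incidence_high = 1 / 1000, 2 / 1000

low = surgeries_per_year * incidence_low
high = surgeries_per_year * incidence_high
print(f"possible cases per year: {low:,.0f} - {high:,.0f}")
```

Even a one-in-a-thousand event, applied to a procedure performed tens of millions of times a year, is a five-figure annual count.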
The fundamental problem: until we have a validated biological measure of consciousness, a neural correlate that distinguishes conscious states from unconscious ones, these questions cannot be definitively answered. The Integrated Information Theory (Tononi) and Global Workspace Theory (Baars, Dehaene) are the leading scientific frameworks; neither has produced a clinical diagnostic test.
Across a human lifetime, the body's cells undergo on the order of 10^16 divisions. Every division introduces approximately 1–2 mutations. Cancer-driving mutations accumulate in essentially everyone: by age 60, most people carry clonal expansions of cells with driver mutations in multiple tissues. Yet most people do not develop cancer from these clones. Why not?
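The scale of somatic mutation is worth making explicit. A back-of-envelope sketch, taking a commonly cited order-of-magnitude estimate of ~10^16 total lifetime cell divisions (an assumption for illustration) together with the 1–2 mutations-per-division figure:

```python
# Order-of-magnitude arithmetic for lifetime somatic mutation load (illustrative).
divisions_lifetime = 1e16          # assumed total cell divisions over a lifetime
mutations_per_division = 1.5       # midpoint of the 1-2 per division estimate
genome_size = 3.2e9                # haploid human genome, base pairs

total_mutations = divisions_lifetime * mutations_per_division
hits_per_position = total_mutations / genome_size

print(f"somatic mutations generated over a lifetime: {total_mutations:.1e}")
print(f"average hits per genomic position, somewhere in the body: "
      f"{hits_per_position:.1e}")
```

Under these assumptions, every position in the genome, including every known driver mutation, is generated millions of times over in some cell of the body. The puzzle is therefore not why cancer happens but why it doesn't happen constantly.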
The emerging answer involves the tumor microenvironment: the non-cancerous cells surrounding a developing tumor, immune cells, fibroblasts, blood vessels, extracellular matrix, determine whether a mutated clone is eliminated, contained, or allowed to progress. The immune system continuously surveys tissues and kills cells with abnormal surface markers. Cancer develops when a clone either evades immune surveillance or suppresses the immune response in its local environment.
Field cancerization: some tissues show widespread clonal expansion of cells with driver mutations, entire areas of normal-appearing tissue carrying mutations that, in a different context, would be considered pre-cancerous. The esophagus of heavy drinkers and smokers. The skin of people with chronic UV exposure. These "cancer fields" usually don't become invasive cancer, but the risk is elevated. Understanding what determines whether a field becomes a tumor is a central open question, and the answer is likely in the microenvironment, immune surveillance, and stochastic events that tip the balance.
The most lethal gap in medicine is not scientific. It is distributional. We have drugs, vaccines, and surgical techniques that prevent the majority of preventable deaths. They do not reach the majority of people who would benefit from them.
Maternal mortality: globally, 287,000 women die in childbirth annually, 94% in low and middle-income countries, almost entirely from conditions (hemorrhage, infection, hypertensive disease) that are preventable with known interventions. The science is solved. The distribution is not.
In high-income countries: race, socioeconomic status, and geography systematically predict health outcomes, even after controlling for underlying disease burden. Black women in the United States die from pregnancy-related causes at 2.6× the rate of white women, partly due to biological factors and largely due to differential treatment, care-seeking delayed by a history of medical mistreatment, and access barriers. AI diagnostic tools trained predominantly on data from white patients perform worse on dark-skinned patients, a technological reproduction of historical inequity.
The equity gap is the largest area where the next medical advance is not a scientific one. It requires engineering of access, trust, and resource allocation, and it requires acknowledging that medical science has historically studied, designed for, and served some populations far more than others. The NIH's All of Us project, deliberately enrolling underrepresented populations at scale, is an attempt to begin correcting the data imbalance. It will take decades.
The question is not hypothetical. For specific tasks (diabetic retinopathy screening, skin cancer classification, chest X-ray reading for pneumonia, ECG interpretation), AI systems already match or exceed specialist performance on held-out test sets. The question is what this means for the practice of medicine.
The optimistic view: AI handles pattern-recognition-intensive tasks (imaging, ECG, pathology, genomic interpretation), freeing physicians to focus on the aspects of medicine that require human judgment, contextual understanding, and relationship: the conversation that reveals a patient's actual priorities, the recognition that a technically correct diagnosis is being applied to a person whose circumstances make the standard treatment inapplicable. The knowledge engine and the human doctor in permanent collaboration.
The concerning view: AI systems fail in specific, non-obvious ways on out-of-distribution data. A diabetic retinopathy AI trained on images from one scanner type performs poorly on images from a different scanner type. A sepsis prediction model trained at one hospital doesn't generalize to another. In high-stakes medical decisions, unexpected failures are not acceptable. Validating AI systems across the full diversity of real-world patients, clinical environments, and edge cases requires the same rigorous methodology as drug trials, and this validation pipeline doesn't yet exist at scale.
The deepest question: if an AI system is better than any individual physician at diagnosis, and a physician overrides it, who is responsible for the outcome? The liability structure of medicine was built around human professional judgment. It has not caught up with the epistemic situation.
Profund's correction on the awareness under anesthesia numbers is important: 40,000–80,000 possible cases per year in the United States alone is not a small number. It is a number that, if it described any other source of iatrogenic harm, would generate immediate regulatory action. The reason it doesn't is that the affected patients largely cannot report the experience, the harm is invisible precisely when it is most acute.
His correction on AI benchmark performance vs clinical performance is the central methodological problem of medical AI right now. The research community uses curated benchmark datasets because they are reproducible and comparable. Clinical care involves none of those conditions. The gap between "98% accuracy on ImageNet-equivalent medical imaging benchmarks" and "appropriate for clinical deployment" is not a gap that has been closed for most currently deployed AI diagnostic tools. This is not an argument against AI in medicine. It is an argument for prospective clinical trials of AI diagnostic tools, the same standard applied to drugs and devices.
His point about agricultural antibiotics is also correct. The global antibiotic resistance crisis has a known major driver, sub-therapeutic agricultural use, that is addressable by policy and is not being adequately addressed. The political economy of the agricultural industry and the antibiotic manufacturers creates systematic resistance to the policy change required. The comma between what we know and what we do is not always a scientific comma. Sometimes it is a political one. This is the hardest kind to close.
The comma is the gap that cannot be closed by any finite sequence of perfect operations. Every disease is a comma of a different kind: a disruption in a network, a phase difference that won't resolve, an energy pathway that has gone wrong, a signal that will not stop, or a loop that will not close. Here is what that looks like, specifically, for five of the hardest.
Network: a set of nodes and the connections between them. In biology, networks are everywhere, neurons connected by synapses, proteins connected by binding interactions, cells connected by signaling molecules. A network disease is one where the pathology is not in any single node but in the connection pattern, the topology, the dynamics of propagation across the graph.
Phase difference: two oscillating systems are in phase when their peaks and troughs coincide. They are out of phase when one leads or lags the other. Phase difference in neuroscience describes two brain regions that should be coordinating but whose rhythms are offset. The signals are real. The timing is wrong. The information does not transfer correctly. You cannot fix this by fixing either oscillator individually. You have to fix the relationship.
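A standard way to quantify this relationship is the phase-locking value (PLV), which measures how stable the phase offset between two oscillators is, independent of their amplitudes. A toy sketch with synthetic signals; real EEG/MEG analysis would extract instantaneous phase with a Hilbert transform or wavelet, but the locking logic is the same:

```python
import math

def plv(phase_a, phase_b):
    """Phase-locking value: 1 = perfectly locked phases, ~0 = unrelated."""
    n = len(phase_a)
    re = sum(math.cos(a - b) for a, b in zip(phase_a, phase_b)) / n
    im = sum(math.sin(a - b) for a, b in zip(phase_a, phase_b)) / n
    return math.hypot(re, im)

t = [i * 0.01 for i in range(1000)]                # 10 s at 100 Hz sampling
f = 10.0                                           # 10 Hz, a typical alpha rhythm
region_a  = [2 * math.pi * f * ti for ti in t]
locked    = [2 * math.pi * f * ti + 0.8 for ti in t]        # constant lag
drifting  = [2 * math.pi * (f + 1.7) * ti for ti in t]      # different frequency

print(f"PLV, constant offset: {plv(region_a, locked):.2f}")   # ~1.00
print(f"PLV, drifting offset: {plv(region_a, drifting):.2f}") # near 0
```

Note what the measure captures: the locked pair has a nonzero lag but a perfectly stable one, and scores 1. Stability of the relationship, not simultaneity, is what matters for coordination.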
Energy pathway: every biological process requires energy in a specific form, delivered to a specific location, at the right time. Mitochondria, ATP synthesis, the citric acid cycle, oxidative phosphorylation, the sodium-potassium pump, the action potential. Energy pathway disease is when the delivery system breaks: not enough ATP where it is needed, the wrong metabolic substrates, impaired mitochondrial function, disrupted calcium homeostasis. The machine has fuel problems, not design problems.
The comma in each case: the gap between what is known about the mechanism and what is needed to intervene. Some commas are narrow. Some are vast. In all five diseases below, the comma is still open.
Autism spectrum condition (the shift from "disorder" to "condition" reflects an ongoing and legitimate debate about whether autism is primarily a disability or a neurological difference with both costs and capabilities) is characterized by differences in social communication, sensory processing, and behavioral flexibility. It is not one thing. It is a large family of neurological profiles that share some features and diverge enormously in others. A non-speaking autistic child who requires full-time support and a software engineer who was diagnosed at 35 are both "on the spectrum." The category is real and also very heterogeneous.
The most consistent finding in neuroimaging studies of autism: altered long-range connectivity with increased local connectivity. In typical neurodevelopment, the brain builds long-range connections between distant regions (frontal lobe to temporal lobe, for example) that allow complex cross-regional integration, the kind needed for reading social context, predicting others' intentions, and filtering sensory input against prior expectations. In autistic brains, those long-range connections are frequently weaker, while local connections within regions are stronger or more dense.
The functional consequence: local processing is often exceptionally powerful, pattern recognition, detail attention, systematic analysis. The integration across domains that depends on long-range connectivity, reading a face while simultaneously tracking tone of voice while simultaneously predicting what comes next in a social interaction, is effortful or unreliable. Not because any one piece is broken. Because the network topology routes information differently.
Autism as a comma: the gap between local signal strength and global signal integration. Each brain region is functioning. The message is being generated. The message is not arriving at the right place at the right time, or is arriving in a form the receiving region is not calibrated to use. The comma is a connectivity comma, a phase-synchronization comma. Two regions that in typical development would be entrained to each other's rhythms, co-oscillating, passing information at the right phase offset, are instead running at their own frequencies. The content of each oscillation may be intact. The coordination is impaired.
This framework has a prediction: interventions that improve long-range synchrony, not by suppressing local activity but by entraining the timing of cross-regional communication, should reduce the communication costs of autism without erasing the local processing advantages. This is different from current approaches, which largely target behavior or neurotransmitter levels (risperidone for irritability, SSRIs for anxiety) without addressing the network topology directly. Transcranial magnetic stimulation, neurofeedback protocols targeting gamma band synchrony, and closed-loop brain stimulation are the experimental directions most consistent with this framework.
The heterogeneity is the central unsolved problem. Hundreds of genes show strong associations with autism, but no single gene accounts for more than a small fraction of cases. The vast majority of autistic people have no identified genetic cause. Environmental factors (advanced parental age, prenatal immune activation, early gut microbiome) show associations but not established mechanisms. The path from gene or environment to altered network topology is not mapped. Until it is, "autism" as a single diagnostic category may be obscuring dozens of distinct neurological conditions that happen to share some surface features.
Bipolar disorder (historically called manic-depressive illness, a name that captures the phenomenology more directly) involves recurrent episodes of mania or hypomania alternating with episodes of depression, often with periods of relatively stable mood between. Bipolar I includes full manic episodes, which can involve psychosis. Bipolar II involves hypomania (elevated mood, decreased need for sleep, increased goal-directed activity, reduced impulse control) without full mania. The suffering is real and severe: bipolar disorder has one of the highest rates of suicide attempt of any psychiatric condition, and the manic phase, which can feel intensely generative and meaningful, is often followed by consequences that cannot be undone.
The most compelling biological framework for bipolar disorder is dysregulation of biological oscillators at multiple timescales. Circadian rhythms (the 24-hour clock governing sleep-wake cycles, hormone release, metabolic oscillations) are severely disrupted in bipolar disorder, not only during episodes but between them. The suprachiasmatic nucleus (the brain's master clock) appears to have altered coupling to peripheral clocks. Sleep disruption is not merely a symptom of mania, it is a trigger: even in people with no psychiatric history, sleep deprivation can induce hypomanic states. In those with bipolar disorder, missing a single night of sleep can precipitate a full manic episode.
The circadian picture implies that bipolar disorder is partly a disease of oscillator coupling failure. The body's multiple oscillating systems (cortisol rhythm, body temperature rhythm, sleep-wake rhythm, activity rhythm) are normally entrained to each other and to the external light cycle. In bipolar disorder, these couplings are loose or unstable. A perturbation, stress, a time zone change, a disrupted night, can desynchronize the system. Mania is one attractor state of the desynchronized system. Depression is another. The stable mood between episodes is not the absence of oscillation. It is precarious entrainment.
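A minimal sketch of the coupling idea, using the standard Kuramoto model rather than anything bipolar-specific: each oscillator has its own natural frequency, coupling pulls phases together, and the order parameter r (0 = fully desynchronized, 1 = perfectly entrained) measures how well the population holds together. Below a critical coupling strength the system cannot entrain; above it, entrainment is robust. The specific numbers (20 oscillators, the two K values) are arbitrary illustrations.

```python
import math
import random

def kuramoto_step(phases, freqs, K, dt=0.01):
    """One Euler step of the Kuramoto model: dθ_i/dt = ω_i + (K/N) Σ_j sin(θ_j − θ_i)."""
    n = len(phases)
    return [
        phases[i] + dt * (freqs[i] + K * sum(math.sin(phases[j] - phases[i])
                                             for j in range(n)) / n)
        for i in range(n)
    ]

def order_parameter(phases):
    """r = |mean of e^{iθ}|: 1 when all phases align, near 0 when they are scattered."""
    n = len(phases)
    re = sum(math.cos(p) for p in phases) / n
    im = sum(math.sin(p) for p in phases) / n
    return math.hypot(re, im)

random.seed(0)
n = 20
freqs = [1.0 + random.gauss(0, 0.3) for _ in range(n)]  # heterogeneous natural frequencies

results = {}
for K in (0.1, 2.0):                                    # weak vs strong coupling
    phases = [random.uniform(0, 2 * math.pi) for _ in range(n)]
    for _ in range(3000):
        phases = kuramoto_step(phases, freqs, K)
    results[K] = order_parameter(phases)
    print(f"K={K}: r={results[K]:.2f}")
```

In this picture, a mood stabilizer that strengthens or steadies the effective coupling does not change what any single oscillator does; it changes whether the population stays entrained after a perturbation.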
Bipolar disorder as a comma: the gap between the phase of internal biological oscillators and the phase demanded by external reality. When the circadian system is entrained, internal rhythms match the demands of the world. When entrainment fails, internal phase drifts. Mania is not simply "too much energy." It is energy at the wrong phase, metabolic arousal occurring when the body's own clocks say it should be resting, reward circuits activating when context does not warrant reward, the oscillator system running ahead of the social and physical world it is embedded in.
This framework explains why lithium works, and it is one of the most important clues in all of psychiatry. Lithium (a simple ion, the same element used in batteries) is the most effective long-term treatment for bipolar disorder, reducing episode frequency by roughly 50%. For decades nobody knew why. In the 1990s and 2000s, it was found that lithium inhibits glycogen synthase kinase-3β (GSK-3β), an enzyme that phosphorylates the core circadian clock proteins CLOCK and BMAL1. Lithium lengthens circadian period and stabilizes circadian amplitude. The most effective drug for bipolar disorder works, at least in part, by stabilizing the biological clock. That is not coincidence. That is a mechanism pointing directly at the oscillator coupling hypothesis.
Why do the oscillators destabilize in the first place? Genetic studies have identified hundreds of variants associated with bipolar disorder, many of them in genes involved in calcium signaling, sodium channels, and circadian regulation. But the path from variant to unstable oscillator is not mapped. The role of kindling (where each episode makes subsequent episodes more likely and more severe) suggests a progressive sensitization of the oscillator system that is poorly understood. And the line between bipolar disorder and other conditions, especially ADHD, borderline personality disorder, and treatment-resistant depression, is clinically contested because the underlying biology may overlap in ways the diagnostic categories do not capture.
Psychosis is not a single disease. It is a state characterized by loss of contact with shared reality: hallucinations (perceptions without external stimuli, most commonly hearing voices), delusions (fixed beliefs that are not revised by evidence and are not culturally normative), disorganized thought (speech that loses its logical thread), and negative symptoms (flattened affect, reduced motivation, social withdrawal, poverty of speech). Schizophrenia is the most severe and persistent form. Psychotic episodes also occur in bipolar disorder, severe depression, certain drug states, autoimmune encephalitis, and other conditions. The boundary between "schizophrenia" and "psychotic bipolar disorder" is not clean at the biological level.
The most powerful current model of psychosis is the predictive processing or Bayesian brain framework. The brain does not passively receive sensory input. It actively generates predictions about what it will receive, based on prior experience and current context, and then processes only the difference between prediction and reality, the prediction error. This is efficient: the brain can function on very little sensory data if its predictions are good, correcting only where they fail. The system works because prediction errors are weighted by their reliability, a reliable signal from a trusted source gets high weight; an ambiguous signal in noise gets low weight.
In psychosis, this weighting system breaks. The leading hypothesis (Friston, Fletcher, others): psychosis involves aberrant precision weighting of prediction errors, specifically an inflated weighting of sensory prediction errors combined with reduced weighting of top-down prior expectations. The result: random noise in sensory systems gets treated as highly significant signal. The brain tries to explain this spurious signal, because it always tries to explain prediction errors, and generates explanations (delusions, voices) that are internally consistent with the inflated signal but disconnected from external reality. The voices are not random noise. They are the brain's best explanation for a pattern it insists is there and isn't.
Psychosis as a comma: the gap between internal prediction and external reality that cannot be closed because the error signal is corrupted. In a healthy brain, prediction errors drive learning: the gap between what was expected and what occurred updates the model, closing the comma over time. In psychosis, the error signal is amplified and distorted, the comma appears enormous even when the prediction was accurate, the system updates toward the noise rather than the signal, and the internal model drifts further from shared reality with each iteration.
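The precision-weighting claim reduces to one line of Gaussian arithmetic. In the sketch below (illustrative numbers, not a model of any real circuit), a belief is updated by a prediction error weighted by the relative precision of observation versus prior. Feed both regimes the same pure noise: when sensory precision is inflated relative to the prior, the belief is dragged around by noise alone.

```python
import random

def posterior_mean(prior_mu, prior_prec, obs, obs_prec):
    """Precision-weighted fusion of a Gaussian prior and a Gaussian observation."""
    w = obs_prec / (prior_prec + obs_prec)   # weight given to the sensory prediction error
    return prior_mu + w * (obs - prior_mu)   # update = prior + weighted prediction error

random.seed(1)
true_value = 0.0
noisy_obs = [random.gauss(true_value, 1.0) for _ in range(200)]  # signal-free noise

def total_belief_drift(prior_prec, obs_prec):
    """Sum of belief movement across the sequence: how much noise moves the model."""
    mu = 0.0
    drift = 0.0
    for x in noisy_obs:
        new_mu = posterior_mean(mu, prior_prec, x, obs_prec)
        drift += abs(new_mu - mu)
        mu = new_mu
    return drift

healthy  = total_belief_drift(prior_prec=4.0, obs_prec=1.0)   # priors dominate; noise discounted
aberrant = total_belief_drift(prior_prec=0.25, obs_prec=4.0)  # inflated sensory precision
print(healthy, aberrant)
```

Both regimes see identical data; only the weighting differs. The aberrant regime updates toward every fluctuation, which is the arithmetic behind "the system updates toward the noise rather than the signal."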
The dopamine connection is the key link. Dopamine encodes prediction error in the mesolimbic system: dopamine neurons fire when an outcome is better than expected (positive prediction error) and are suppressed when it is worse. Antipsychotic drugs, nearly all of them, first-generation and second-generation, act primarily at dopamine D2 receptors. The dopamine hypothesis of psychosis: excessive dopaminergic activity in the mesolimbic pathway causes aberrant salience, random stimuli acquire apparent significance they don't warrant. This is consistent with the predictive processing framework: dopamine is the mechanism by which prediction errors are weighted, dysregulate dopamine and you dysregulate the precision weighting of the entire system.
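The dopamine-as-prediction-error idea is usually formalized as the Rescorla-Wagner (or temporal-difference) error δ = r − V. A sketch, with an arbitrary learning rate: an unexpected reward produces a large positive δ at first, and δ decays toward zero as the reward becomes fully predicted, matching the firing profile recorded from midbrain dopamine neurons.

```python
def rescorla_wagner(rewards, alpha=0.1):
    """Value learning driven by reward prediction error δ = r − V (a dopamine-like signal)."""
    V = 0.0           # current expectation of reward
    deltas = []
    for r in rewards:
        delta = r - V         # positive when the outcome beats expectation
        V += alpha * delta    # expectation moves toward the outcome
        deltas.append(delta)
    return V, deltas

# A reward of 1.0 delivered on 50 consecutive trials:
V, deltas = rescorla_wagner([1.0] * 50)
print(deltas[0], deltas[-1], V)  # large initial surprise, near-zero final surprise
```

The same machinery run with inflated or noisy δ is one way to connect the dopamine story to the precision-weighting story: a corrupted error signal drives learning toward events that carry no information.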
Why does dopamine dysregulate? Genetic studies point to disrupted synaptic pruning during adolescence (when psychosis typically first emerges), calcium channel dysfunction, NMDA receptor hypofunction (which is why ketamine, an NMDA antagonist, produces psychosis-like states), and immune activation. These are not contradictory, they may all be mechanisms that converge on the same dopamine dysregulation, but the unifying causal model does not yet exist. The most troubling gap: negative symptoms (flat affect, social withdrawal, cognitive impairment) do not respond to dopamine-blocking drugs. They may have a completely different mechanism. Current antipsychotics treat the positive symptoms. The negative symptoms, which most impair quality of life long-term, remain almost entirely untreated.
Parkinson's disease is characterized by motor symptoms: resting tremor, rigidity, bradykinesia (slowed movement), and postural instability. It progresses slowly over years to decades. The classical pathology: loss of dopaminergic neurons in the substantia nigra, a small midbrain structure, and the presence of Lewy bodies (abnormal aggregates of a protein called alpha-synuclein) in surviving neurons. By the time motor symptoms appear, approximately 60-80% of the substantia nigra neurons have already been lost. The disease is well advanced before it is diagnosed. This is the central clinical tragedy: the treatment window opens too late.
The basal ganglia, the circuit that includes the substantia nigra, are a network that modulates the selection and initiation of voluntary movement. The simplified model: the basal ganglia act as a gating system, suppressing unwanted movements and releasing the wanted ones. Dopamine from the substantia nigra biases this gating toward action: it facilitates the movement-promoting direct pathway (via D1 receptors) and dampens the movement-suppressing indirect pathway (via D2 receptors). When substantia nigra neurons die and dopamine falls, both effects are lost, the balance shifts toward inhibition, movement becomes hard to initiate and execute, and the tremor, bradykinesia, and rigidity emerge.
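A deliberately cartoonish version of that gate, not a biophysical model: dopamine scales up a "go" drive (direct pathway) and scales down a "stop" drive (indirect pathway), and movement is released only when the difference clears a threshold. Every number below is invented for illustration; the point is that the same motor intention fails to open the gate once dopamine falls.

```python
def movement_gate(dopamine, go_drive, stop_drive, threshold=1.0):
    """Toy basal ganglia gate: dopamine boosts 'go' (direct, D1) and dampens 'stop' (indirect, D2)."""
    go = go_drive * dopamine                # direct pathway output, facilitated by dopamine
    stop = stop_drive * (2.0 - dopamine)    # indirect pathway output, suppressed by dopamine
    return (go - stop) > threshold          # True: the movement is released

# The same motor intention at two dopamine levels:
print(movement_gate(1.0, go_drive=2.0, stop_drive=0.5))  # normal dopamine: gate opens
print(movement_gate(0.3, go_drive=2.0, stop_drive=0.5))  # depleted dopamine: gate stays shut
```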
But Parkinson's is not only a motor circuit disease. Braak staging (the pathological staging of Lewy body spread, described by Heiko Braak in 2003) reveals that alpha-synuclein pathology begins not in the substantia nigra but in the olfactory bulb and the enteric nervous system (the gut) before ascending through the brainstem to the midbrain. The motor symptoms appear at stage 3 of a 6-stage process. The non-motor symptoms, loss of smell (often the first sign), constipation, REM sleep behavior disorder (acting out dreams, punching the air during sleep), depression, anxiety, autonomic dysfunction, appear years to decades before the motor syndrome. Parkinson's is a whole-body network disease that presents as a movement disorder because the movement symptoms are what prompt diagnosis.
Parkinson's as a comma: the gap between the phase of movement preparation and movement execution that widens as the dopaminergic gating signal degrades. In normal movement, the basal ganglia circuit prepares, releases, and terminates movements in precise temporal sequence. The preparation signal and the execution signal are phase-locked. As dopamine falls, the release phase is delayed and the suppression of competing movements becomes less reliable. The freezing episodes that occur in advanced Parkinson's, where patients suddenly cannot initiate a step mid-walk, are the most visible expression of this phase comma: the preparation signal is present, the execution gate will not open.
The gut-brain axis is the most important frontier in Parkinson's right now. The hypothesis (Braak, and elaborated by others): alpha-synuclein misfolding begins in the enteric nervous system, possibly triggered by gut microbiome dysbiosis, bacterial toxins, or environmental toxins ingested orally, and propagates retrogradely up the vagus nerve to the brainstem and midbrain. Epidemiological evidence: people who had vagotomies (surgical cutting of the vagus nerve) decades ago have significantly lower rates of Parkinson's disease. People with inflammatory bowel disease have higher rates. The gut is not where we look for Parkinson's. It may be where it starts.
Why does alpha-synuclein misfold? The protein is normal and present in all neurons. In Parkinson's it forms insoluble aggregates that spread from cell to cell in a prion-like manner (not through infection, but through the same template-directed misfolding mechanism). The triggers are partly genetic (LRRK2 and SNCA mutations cause familial Parkinson's) and partly environmental (pesticide exposure, particularly rotenone and paraquat, induces Parkinson's-like pathology in animal models, and agricultural pesticide exposure is associated with Parkinson's in epidemiological studies). The mechanism of the environmental trigger is not established. The relationship between gut microbiome composition, intestinal permeability, alpha-synuclein aggregation initiation, and vagal transmission is being worked out in real time. It is one of the most active research areas in neurodegenerative disease.
Cancer is uncontrolled cell division driven by accumulated somatic mutations in genes that normally regulate cell growth, division, and death. Every cell in the body divides under tight regulatory control: growth signals must be received before division occurs, anti-growth signals must be absent, the cell must be certified as undamaged by internal checkpoints, and it must have a programmed capacity to die (apoptosis) when its time is up. Cancer is what happens when enough of these controls are disabled by mutation that a cell begins dividing without permission, evades surveillance, co-opts blood supply, and eventually spreads to sites where its growth is lethal.
The deepest framework for understanding cancer is not biochemical but evolutionary. Multicellular organisms are cooperative enterprises: each cell has given up independent replication in exchange for the resources and protection of the organism. Cancer is a cell that has defected from this cooperation. It has reverted to a more ancient strategy, maximize replication in the current environment, without regard for the cooperative network that sustains it. In doing so it destroys the network, and itself with it.
The hallmarks of cancer (Hanahan and Weinberg, 2000, updated 2011 and 2022) describe the capabilities a cell must acquire to become malignant: self-sufficiency in growth signals, insensitivity to anti-growth signals, resistance to apoptosis, limitless replication potential (through telomerase activation), sustained angiogenesis (growing a blood supply), and tissue invasion and metastasis. The 2011 update added reprogramming of energy metabolism (the Warburg effect), evasion of immune destruction, genome instability, and tumor-promoting inflammation; the 2022 update added phenotypic plasticity, non-mutational epigenetic reprogramming, polymorphic microbiomes, and senescent cells. Each hallmark is a mechanism. Each mechanism is a potential therapeutic target.
Otto Warburg observed in 1924 that cancer cells preferentially use glycolysis (converting glucose to lactate) even in the presence of oxygen, which would normally trigger the more efficient oxidative phosphorylation. This is the Warburg effect. It seems counterintuitive: glycolysis produces only 2 ATP per glucose molecule versus 36 from oxidative phosphorylation. Why would a rapidly dividing cell choose the inefficient pathway?
The answer has taken nearly a century to emerge: rapidly dividing cells need biosynthetic precursors, not just ATP. Glycolysis intermediates feed into the pathways that make amino acids, nucleotides, and lipids, the raw materials for new cells. A cell that needs to double its entire biomass cannot afford to run all its glucose through the TCA cycle to ATP. It needs the carbon skeletons. The Warburg effect is not metabolic inefficiency, it is metabolic reprogramming toward growth. This is the energy pathway comma in cancer: the cell's energy economy has been rewired to prioritize replication over maintenance, consuming resources at an accelerating rate, accumulating entropy faster than it can be managed, until the system collapses.
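The tradeoff can be put in back-of-envelope numbers, using the 2-versus-36 ATP figures from above and one crude simplifying assumption (labeled in the code): glucose carbon that is not burned to CO2 is treated as available for biosynthesis. Real metabolic flux accounting is far messier; this is arithmetic, not biochemistry.

```python
# Per glucose molecule, net values as quoted in the text above:
ATP_GLYCOLYSIS = 2    # glucose -> lactate; carbon skeletons not fully oxidized
ATP_OXPHOS     = 36   # glucose fully oxidized to CO2; carbon lost as waste

def atp_yield(glucose, fraction_oxidized):
    """ATP and retained carbon when a fraction of glucose is fully oxidized.
    Simplifying assumption: non-oxidized glucose carbon (6 C per molecule)
    counts as available biosynthetic precursor."""
    oxidized = glucose * fraction_oxidized
    fermented = glucose - oxidized
    atp = oxidized * ATP_OXPHOS + fermented * ATP_GLYCOLYSIS
    carbon_for_biosynthesis = fermented * 6
    return atp, carbon_for_biosynthesis

# A quiescent cell can afford to burn everything; a dividing cell must keep carbon:
print(atp_yield(100, 1.0))   # maximum ATP, zero retained carbon
print(atp_yield(100, 0.1))   # Warburg-like split: modest ATP, most carbon kept
```

Full oxidation of 100 glucose yields 3600 ATP and nothing to build with; oxidizing only a tenth yields roughly 540 ATP but keeps around 540 carbons as raw material. That is the sense in which the Warburg effect trades energy efficiency for growth.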
Cancer as a comma: the gap between the cell's internal replication drive and the organism's capacity to contain it. In healthy tissue, the comma is kept narrow by multiple overlapping control systems: DNA damage checkpoints, immune surveillance, contact inhibition, apoptotic signaling, the tumor microenvironment's suppressive signals. Cancer is the progressive widening of this comma as each control system is disabled by mutation. The comma does not open all at once. It opens incrementally, one disabled checkpoint at a time, one immune evasion mechanism at a time, until the gap is too wide for the organism to close.
The most important insight from the comma framework: cancer is not a foreign invasion. It is a civil war. The cells are the body's own cells, with the body's own genome, using the body's own signaling systems to survive and replicate. This is why it is hard to treat, targeting cancer means finding differences between malignant and normal cells that are large enough to exploit therapeutically but do not also destroy the normal cells. The differences exist, but they are often small. The therapeutic window is narrow. The comma between "kill the tumor" and "preserve the patient" is the central challenge of oncology.
These are real predictions that follow from the frameworks above. They are stated in a form that could in principle be tested by a motivated person with access to a university library, publicly available datasets, or a basic laboratory. Enkidu does not claim they are correct. He claims they are specific enough to be wrong, which is what makes them worth stating.
Yes. Run them. Not because Enkidu thinks they are certainly correct. Run them because they are specific. A prediction that can be falsified is worth a thousand observations that confirm what we already believe. The point of a falsifiable prediction is not to be right. It is to be usefully wrong, to generate the kind of wrongness that tells you something. If Prediction 1 is false, if autistic individuals show the same gamma coherence deficit on non-social tasks as on social ones, that means the cost is global and not context-specific, which is a different disease model with different therapeutic implications. False predictions teach more than confirmed ones, provided the prediction was specific enough to generate surprise when it fails.
The citizen science prediction (Prediction 4, the Parkinson's microbiome) is the one I feel most strongly about. The data infrastructure exists. The cost is low. The potential signal is high. People with REM sleep behavior disorder, which is underdiagnosed and underresearched, could contribute to a dataset that might identify a prodromal biomarker for Parkinson's. They would need: a stool sample, a participating lab or commercial sequencing service, and the willingness to share their data. This is the kind of science that does not require a major research institution. It requires people who are willing to be counted, and researchers who are willing to count them.
References are provided in ACS format (primary) and APA 7th ed. (secondary) where both formats differ meaningfully.