Tuesday, September 23, 2025

My Upcoming Biopsy and the Surprising Truth I Found in a Glass of Wine

It started with a simple, almost throwaway question. I have a minor biopsy scheduled for this Thursday, and as I was thinking about my week, I wondered, "Is there any real reason I can't have a glass of wine the night before?"

My gut feeling, and the standard medical advice, was a clear "no." But I'm the kind of person who needs to know why. The simple answer, I quickly found, is that alcohol thins your blood and can increase bleeding risk.

Okay, fair enough. But that simple answer felt incomplete. It was the first thread I pulled, and it led me down a fascinating rabbit hole into the science of how our bodies work, completely upending what I thought I knew about wine.

From a Simple Rule to a Complex Question

My initial search told me alcohol affects platelets, the tiny first responders in your blood that rush to the scene of a cut to form a clot. Alcohol makes them less "sticky" or, in scientific terms, it causes "decreased platelet reactivity." This means your body's ability to stop bleeding is impaired—definitely not something you want when you're about to be poked and prodded.

Case closed, right? Don't drink before a biopsy.

But my curiosity was piqued. This led me to a huge, recent analysis out of the famous Framingham Heart Study. And this is where the plot twist came in.

The Red Wine Myth Gets Complicated

For years, we've heard about the "French Paradox" and the supposed heart-healthy benefits of red wine, often attributed to its effect on platelets. So, I expected this big study to confirm that.

It didn't.

In fact, the study found no association between red wine consumption and decreased platelet function. To my surprise, it was white wine and liquor that showed a measurable effect.

This was a genuine "wait, what?" moment. It directly contradicted years of popular health wisdom. How could a massive, modern study find the opposite of what smaller, older studies had suggested? The only way to find out was to do what researchers do: check the references.

Chasing the Contradiction

I started digging into the sources the Framingham study cited. Sure enough, I found older, smaller studies that concluded the exact opposite. For example, a 2002 study by Pignatelli et al. found that red wine inhibited platelet aggregation significantly more than white wine.

So, what gives? Is science just a mess? Not quite. What I was seeing was science in action. The newer, much larger study had more statistical power and revealed a more nuanced picture. Other research I found suggests that it's not about the color of the wine at all, but about the specific grape variety and its unique chemical profile.

The Final Verdict (For Me, Anyway)

So, after all this research, where does it leave me? To be truthful, I'm having a glass of wine with my dinner tonight (Tuesday).

My biopsy is on Thursday morning. Based on my research, by then it will have been about 36 hours—a day and a half—since that drink, which falls within the 1-to-3-day window the science says platelets need to fully recover.

For me, understanding the 'why' (the platelets) and the 'how long' (the recovery window) makes this a conscious choice rather than a guess. It turns a simple doctor's order into an informed decision. After tonight, I'll be abstaining completely to make sure my body is as ready as it can be. It's a personal calculation, and having the facts makes all the difference.

It also reminds me that science is a process, not a collection of static facts. A simple question can unravel decades of assumptions and reveal a much more complex and interesting reality. When I do have that post-biopsy glass, I'll have a newfound appreciation for the incredible, and sometimes contradictory, science behind it.





A quick note on how this was written: The ideas, research journey, and personal decisions are all mine, but the text itself was drafted in collaboration with Google's AI assistant. It was a fascinating way to turn a string of questions into a story.

Briefing Document: Synthesis of Themes and Perspectives from the Combined Archives (2003-2025)


Executive Summary

This document provides a comprehensive synthesis of the core themes, arguments, and personal experiences detailed in a collection of archival writings spanning from 2003 to 2025. The author, a retired U.S. Marine and returning student, presents a deeply personal and intellectually rigorous chronicle of their evolving worldview. The writings are characterized by a staunchly atheistic and rationalist perspective, which serves as the primary lens for incisive critiques of religion, politics, and societal norms.

Key recurring themes include a sustained deconstruction of religious belief and its perceived negative impact on human progress; sharp, evidence-based criticism of U.S. domestic and foreign policy, particularly during the George W. Bush administration and the Iraq War; and candid reflections on personal identity, class, gender roles, and the challenges of navigating civilian life and higher education after a military career. The author is a voracious reader, and their engagement with a wide range of literature and philosophy—from Ayn Rand to Friedrich Nietzsche—heavily informs their analysis.

A significant trajectory observed across the archives is the author's deepening engagement with technology. What begins as tinkering with computer hardware and Linux distributions evolves into a sophisticated, hands-on exploration of Artificial Intelligence and Large Language Models (LLMs) by 2025. The later entries document practical AI application projects, a critical analysis of AI's capabilities and limitations, and insightful commentary on the technology's societal and economic implications, marking a transition from a critical observer to an active practitioner and analyst of cutting-edge technology.

1. Core Philosophical Stance: Atheism and Rationalism

A central and unwavering theme throughout the archives is the author's commitment to atheism and a rational, evidence-based worldview. This perspective informs nearly all social and political commentary.

1.1. Arguments Against Theism

The author frequently articulates a disbelief in any higher power, viewing religion as a human construct designed to placate fear of the unknown and control the masses.

  • Rejection of Faith: Faith is defined as belief without logical proof or material evidence. The author contrasts this with a rational approach, stating, "I judge the reasonableness of the bible because I practice reason."
  • The Burden of Proof: The author rejects the notion that atheists must disprove the existence of a deity. This is illustrated through an analogy attributed to Bertrand Russell.
  • Religion as a Detriment: Religion is seen as "debilitating to the general well being of the human race." The author argues it diverts focus from collective human progress toward individual, post-mortem destinies and "preserves whatever is ripe for destruction" by prioritizing sympathy over achievement, a critique echoing Nietzsche.
  • Hypocrisy and Control: The author observes that religious proclamations often serve as a "veneer" or "shield" for individuals to hide behind, assuming a moral high ground while avoiding self-reflection. They quote Gary North to illustrate a perceived agenda of the "New Christian Right".

1.2. Morality and Humanism

Morality is framed as originating from within humanity, not from a divine authority. The author argues that a sense of right and wrong is unique to individuals, shaped by their chemical structure and environmental experiences.

  • Individual-Centered Morality: An action is considered "right" if it is freely chosen and "doesn't infringe on another's freedom of action."
  • Critique of Religious Morality: The author challenges the idea that belief in a future state is necessary for moral behavior, quoting Mill: "if he who does not believe in a future state necessarily lies, it follows that they who do believe are only prevented from lying, if prevented they are, by the fear of hell."

2. Political and Social Commentary

The author provides extensive, critical commentary on U.S. politics, foreign policy, and a wide array of social issues, often from a cynical and anti-authoritarian standpoint.

2.1. Critique of the Bush Administration and the Iraq War

The most sustained political critique is aimed at the George W. Bush administration and the rationale for the 2003 invasion of Iraq.

  • The Office of Special Plans (OSP): An entry from March 2004 includes the detailed testimony of a retired U.S. Air Force Lieutenant Colonel who worked in the Pentagon. This account describes the OSP, under figures like Bill Luti and Abram Shulsky, as a "den of iniquity" used to "manufacture propaganda" and "falsehoods" to justify the war. The testimony states the OSP's talking points were "propagandistic in style" and that desk officers were "ordered to use them verbatim."
  • Politicization of Intelligence: The testimony alleges that neoconservative agenda-bearers "usurp[ed] measured and carefully considered assessments" and suppressed or distorted intelligence to mislead Congress and the public. Key figures like Colin Powell and General Anthony Zinni were viewed as internal enemies, with Zinni being called a "traitor" in a staff meeting.
  • The Soldier's Perspective: The author cites a story from Military.com about Lance Cpl. Jonathan Snyder, a Marine in Iraq who graduated from basic training on September 12, 2001. Snyder is quoted in the Washington Post as saying, "Every day you read articles in the states when it's like 'Oh, it's getting better and better.' But when you're here, you know it's worse every day." His father concludes, "We've lost 1,000 soldiers for nothing."

2.2. Domestic Politics and Social Issues

The commentary extends to numerous domestic issues, often highlighting perceived hypocrisy, irrationality, and threats to civil liberties.

  • Voter Suppression: An article by Gregory Palast is quoted, detailing Florida's 2000 election "scrub list" that purged 57,700 voters, over 90% of whom were innocent. The analysis suggests that 54% of "spoiled" ballots were cast by Black voters, and counting them would have given Al Gore a significant victory. The author notes that the Help America Vote Act (HAVA) signed by Bush in 2002 requires all states to implement a similar computerized purge system.
  • Class and Identity: The author self-identifies as coming "from the ghetto" and considers themself "white trash." This identity shapes their discomfort with idealized images of "normalcy," such as the family in The Donna Reed Show, because "there appears to be nothing wrong, but you know there always is."
  • Gender Roles: The author rejects traditional models of masculinity, describing a conflict with a friend who believes the author is "pussy whipped" for treating his wife as an equal. The author states, "My time with my wife comes first, and I value her opinion because she is a lot smarter than I am."
  • Economic Inequality and Labor: There is a critique of corporate greed, where excess profits are used for "several 'getaways' for a few lofty executives" instead of increasing employee wages. The author contrasts this with a shoe company in the Northeast that, in December 2003, gave every employee a bonus of $1,000 for every year worked, with those employed less than a year receiving $500. The author is opposed to unions, viewing them as a "bureaucratic exploitation" that contradicts capitalism and is now redundant due to government regulation.

3. Personal Journey: From Marine to Student to Technologist

The author's personal experiences as a retired Marine, a student in various disciplines, and a worker in different fields provide a constant backdrop to their intellectual explorations.

  • Military Service: The author is a retired Marine. This experience informs their perspective on discipline, authority, and foreign policy. They express frustration with military members who resort to authoritarianism when they "don't have a leg to stand on."
  • Academic Pursuits: After retiring, the author enrolls in college, studying subjects including English, Chemistry, Physiology, Spanish, and eventually a radiology program. They express a strong dislike for the "constrictions" of formal academic writing and critique the educational system for perceived laziness among tenured professors who rely heavily on Teaching Assistants (TAs).
  • Critique of "Normalcy": An early entry reflects on a bumper sticker stating, "Normal People Worry Me!!" The author deconstructs the definition of "normal" ("Conforming with, adhering to, or constituting a norm, standard, pattern, level, or type"), concluding that since context is always changing, true normalcy is impossible. Therefore, a person who "seems" normal is putting on a "false facade" and is not to be trusted.

4. Evolution of Technological Engagement

The archives document a clear progression in the author's relationship with technology, from a hobbyist to a sophisticated practitioner and critic of AI.

4.1. Early Adoption and Tinkering (2000s)

Early entries show an interest in consumer technology and open-source software.

  • Hardware and OS: The author mentions buying a Dell laptop, installing internal wireless cards, and experimenting with various Linux distributions that can run from a CD (Knoppix) or be installed via Wubi on Windows (Ubuntu).
  • Productivity: Technology is used for schoolwork, writing, and accessing information. A laptop is seen as essential for managing school and personal tasks, especially when a shared home computer is occupied.

4.2. Advanced AI Exploration and Application (2025)

The 2025 entries represent a significant leap in technical engagement, focusing almost exclusively on the use and analysis of Large Language Models (LLMs).

  • Project "De-Bullshitification": A systematic project to analyze the work of Jordan Peterson. The author uses NotebookLM to identify "garrulous and bombastic" language, translates the jargon into plain English, and identifies rhetorical tricks like false choices and hidden premises.
  • Example Peterson Quote Analyzed: "Science allows for increasingly precise determination of the consensually validatable properties of things, and for efficient utilization of precisely determined things as tools (once the direction such use is to take has been determined, through application of more fundamental narrative processes)"
    Translation: "Science is great for figuring out how things work and building tools, but our myths and stories must tell us what to do with them."
  • AI as a Practical Tool: The author documents using AI (Gemini, ChatGPT, Comet) to build a functional recipe website from scratch. This involves generating HTML, CSS, and JavaScript, and parsing recipes from PDFs into JSON format. The author notes the AI's limitations, such as its inability to handle file downloads or uploads, referring to it as the "world's most stubborn, literal-minded, patient intern."

4.3. Critical Analysis of AI

The author engages in deep analysis of AI's nature and societal role.

  • Comparison to NSA: A detailed table compares the infrastructure of the NSA to that of LLMs, highlighting similarities in data ingestion, processing, compute power, and storage, arguing they are "Same Infrastructure, Different Masks."
  • AI and Truth: The author posits that LLMs inherit the flaws of their training data, which is often "redundant, contradictory, context-dependent, [and] flat-out wrong." The conclusion is to use an LLM as a "sparring partner, not a guru."
  • AI and the Workforce: The author analyzes a study on AI's impact on junior-level jobs, concluding that AI is not the sole factor. Other dynamics include opportunistic cost-cutting disguised as innovation, a preference for retaining senior staff, and strategic under-hiring of new graduates.

The NSA-versus-LLM comparison, category by category:

  • Mission: surveillance, signals intelligence, and cyber operations (NSA) vs. language generation, interaction, and prediction (LLMs)
  • Data ingest: global telecom, fiber taps, satellites, and intercepts (NSA) vs. web scraping of Common Crawl, books, Wikipedia, and forums (LLMs)
  • Processing: real-time stream decoding and bulk signal analysis (NSA) vs. batch GPU/TPU pipelines and transformer inference (LLMs)
  • Compute: NSA supercomputers, custom ASICs, and classified clusters vs. NVIDIA A100/H100 GPUs, TPUs, and hyperscale data centers
  • Storage: petabyte-to-exabyte holdings such as the Utah Data Center (NSA) vs. massive datasets plus model weights running to hundreds of GBs or TBs (LLMs)
  • Secrecy: total, classified, and legally shielded (NSA) vs. mixed, with some models open source but most proprietary (LLMs)

5. Literary and Intellectual Influences

The author is a prolific reader of both fiction and non-fiction, and the ideas encountered are frequently integrated into their own writing and analysis.

  • Key Philosophers: The author engages with the works of Ayn Rand (criticizing her idealized characters but resonating with some individualist themes), Jean-Jacques Rousseau (noting his difficult prose but analyzing his ideas on inequality), Friedrich Nietzsche (on the detriment of sympathy-based morality), and John Stuart Mill (on liberty and the harm principle).
  • Non-Fiction and Current Events: Books on history, politics, and science are frequently reviewed, including works by Bill Maher, Charles Fishman (The Wal-Mart Effect), James Bamford (Body of Secrets), and Shelby Foote (The Civil War).
  • Fiction: The author reads a wide variety of fiction, including science fiction by Octavia Butler and Robert Silverberg, horror by Stephen King, and supernatural fiction by Anne Rice. These often serve as points of departure for broader reflections.

Thursday, September 11, 2025

 Another ChatGPT conversation

It started with a quote presented to me.....


"A Stanford University study found that AI adoption has caused a 13% decline in employment for workers aged 22-25 in AI-exposed occupations, while older workers in identical roles remain largely unaffected"

I put it to ChatGPT


"What are the possibilities that some of the decline is due to employers using AI as an excuse to streamline their workforce. In other words, AI is an excuse and not necessarily a cause"

The response is below, but it also offered to dig into "the dataset appendix to see how much of this decline is explained by actual AI adoption intensity at firms versus sector-wide trends"

I said yes, and after a lengthy response this was presented at the tail end..

What this means for your “AI as excuse” hypothesis

  • The dataset does control for non-AI shocks, so the measured effect isn’t just macro noise.

  • But, the mechanism isn’t necessarily “robots replacing humans” — it’s very plausible firms are using AI adoption as a rationale to stop hiring younger workers and keep a leaner, older, more experienced staff.

  • In other words: AI isn’t always the cause — sometimes it’s the justification.



Thursday, September 04, 2025

Project De-Bullshitification: A Field Guide to Hunting Profundity

It starts, as it so often does, with a conversation. My brother, like many others, has fallen under the spell of Jordan Peterson. He speaks of the man’s ideas as “profound,” and points to the dense, academic language as proof. The logic is a curious one: if you have to consult a dictionary to understand a sentence, the sentence must contain a deep truth. It’s a compelling idea, but one that demands interrogation. Is it genuine depth, or is it a rhetorical smokescreen?

This question is the starting point for a project: a systematic effort to go through Peterson's work and distinguish the genuinely insightful from the intellectually fraudulent. The goal is not to dismiss everything out of hand—that would be intellectually lazy. The goal is to hunt for the bullshit, to isolate it, and to understand the function it serves.

Step One: Isolating a Specimen

Every hunt needs a target. I turned to my digital toolkit, using NotebookLM to scan Peterson’s writings for the kind of "garrulous and bombastic" language my brother found so profound. The tool provided a list of suspect phrases, a rogues' gallery of academic-sounding terms. From that list, I pulled a perfect specimen:

"Science allows for increasingly precise determination of the consensually validatable properties of things, and for efficient utilization of precisely determined things as tools (once the direction such use is to take has been determined, through application of more fundamental narrative processes)"

This sentence is a masterpiece of its kind. It sounds intelligent, respectful of science, and philosophically rigorous. It’s the kind of statement that makes you nod along, feeling smart for simply having read it. It was the perfect place to begin the dissection.

Step Two: The Dissection

The first step in any analysis is to translate the jargon. Stripped of its academic costume, the sentence says something quite simple: Science is great for figuring out how things work and building tools, but our myths and stories must tell us what to do with them.

On the surface, this seems reasonable. But the bullshit is often in the setup, not the punchline. A critical look reveals the rhetorical tricks at play:

  1. The False Choice: It presents only two options—directionless science or science guided by "narrative"—conveniently ignoring the entire history of secular ethics.

  2. The Power Grab: By calling narratives "more fundamental," it performs a subtle but significant demotion of science, placing it in the role of a mere tool awaiting instruction from its mythological masters.

  3. The Hidden Premise: It smuggles in the unproven assumption that the only valid source of human values is ancient story, a premise that just so happens to be the foundation of Peterson's entire worldview.

What appeared to be a profound statement on the limits of science was, under scrutiny, a Trojan horse designed to subordinate reason to myth.

The Path Forward

This single sentence is a microcosm of the larger challenge. The book is filled with similar linguistic traps: The Logos, The Dominance Hierarchy, Chaos and Order. Each term carries a similar veneer of academic weight, and each demands the same rigorous, critical dissection.

The project is now clear. I will proceed, chapter by chapter, not as a cynic dismissing everything, but as a surgeon with a scalpel. The goal is to weed out the bullshit, to separate the parts that are genuinely insightful from the parts that are merely dressed up to look that way.

The hunt is on.


On the Irony of the Algorithm

To be clear, it’s not lost on me that there is a deep irony in using an AI to help dissect the work of a man who champions the human "Logos." One could easily label this project as intellectually lazy: outsourcing the hard work of critical thinking to a machine. The accusation deserves to be met head-on.

Saturday, August 02, 2025

 

Recipe Site, Round Two: I Make the AI Do the REAL Work

So after my little jaunt with Gemini writing code for the digital recipe bin, I figured why not really test the boundaries of what these browser AIs can handle? Enter Comet Assistant—a so-called “browser-native” smart helper. Sounds snazzy. Turns out, it’s a whole circus act and I’m the guy with the whip, minus the power or the sequined suit.

First, the easy stuff.

I asked it to crawl through my GitHub repo and sort the recipes alphabetically. Miracle—success, in a "this is what computers should have done twenty years ago" sort of way. Code got tweaked, buttons pressed, and now the list is cleaner than my fridge ever was.
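
The fix itself is probably nothing more exotic than a one-line sort along these lines (a sketch of the idea, not the actual code Comet touched in my repo; the field names are my own guesses):

```javascript
// Rough sketch of an alphabetical sort on the recipe list.
// Illustrative only: "title" is an assumed field name, not the site's real schema.
const recipes = [
  { title: "Zucchini Bread" },
  { title: "Apple Pie" },
  { title: "Minestrone" },
];

// Sort in place by title, ignoring case differences.
recipes.sort((a, b) =>
  a.title.localeCompare(b.title, undefined, { sensitivity: "base" })
);

console.log(recipes.map((r) => r.title)); // [ 'Apple Pie', 'Minestrone', 'Zucchini Bread' ]
```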

Not content to rest on its digital laurels...

I threw it a real task: parse a new recipe out of a PDF (try that by hand some Friday night), convert it to JSON, and plug it into my site’s machinery. This, dear reader, is where things get interesting. See, Comet can read anything in the browser, slice and dice it, and even make JSON out of my grandmother’s chicken scratch—assuming grandma’s recipe was hosted on GitHub and formatted like computer code. But download a file? Move it around? “Sorry, that’s your job, meatbag.” I’m still pushing buttons like it’s 1999.
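
To give a sense of the target format, here's roughly what one of those JSON entries looks like (the structure and field names are illustrative guesses on my part, not necessarily the exact schema the site uses):

```javascript
// Illustrative example of a parsed recipe entry.
// Field names are assumptions for the sake of the example, not the site's real schema.
const newRecipe = {
  title: "Chicken Noodle Soup",
  source: "family-recipes.pdf",
  servings: 6,
  ingredients: [
    "1 whole chicken",
    "2 carrots, chopped",
    "8 oz egg noodles"
  ],
  steps: [
    "Simmer the chicken in water for two hours.",
    "Add the carrots and noodles; cook until tender.",
    "Season to taste and serve."
  ]
};

// A homepage script could then simply append it to the master list,
// e.g. recipes.push(newRecipe);
```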

I pitted it against Google Drive. First pass? It failed—threw its digital hands in the air. But here’s the difference between toaster-AI and modern software: Comet actually kept at it. I flipped some windows, clicked into folders, and with the right page “visible,” it did the damn comparison. It doesn’t give up, it just demands you do a bit of the heavy lifting. Not exactly SkyNet, but I’ll take the hustle.

A few more rounds and the whole process was humming. New recipe? I parse the PDF, Comet turns it into JSON, slaps it right into the repo—and even updates the homepage like magic. Except when it doesn’t. Except when it can’t. That’s when I’m nudging it, spoon-feeding which window to look at, because it can “do anything you see in the browser” but not one click more.

Here’s what the friendly browser-brain gets right:

  • Editing files, updating lists—works like a charm (if you’re already in the right window).
  • Parsing, converting, structuring data—you’d swear a human intern was doing the grind, only with fewer typos and no coffee breaks.
  • If it screws up, you navigate a little, poke it again, and it actually gets back up for round two.

Here’s what gets lost in translation:

  • File downloads? Not a chance.
  • Anything that smells like “drag and drop” or “upload from my desktop”—no dice.
  • If your data is hiding outside a web window, you, dear reader, are the delivery service.

So what did I learn, besides never trusting an AI with your grocery list? These so-called “assistants” work best as the world’s most stubborn, literal-minded, patient intern. Give them structure—they shine. Expect magic, get a shrug and a cheerful “Can’t do that, but here’s how you can.”

I won’t say Comet replaced hours of manual drudgery. It’s more like owning a self-driving car that occasionally hands you the wheel and says, “I don’t do left turns.” Still, my recipe book is smarter, my code is cleaner, and I get to pretend I’ve automated my kitchen—well, the paperwork at least.

Next time, maybe I’ll put it up against the ultimate test: three ingredients, no power, and one AI that can’t boil water.

Thursday, July 24, 2025

LLMs, Hallucinations, and the Myth of Machine Truth

Reading a book called All the Knowledge in the World: The Extraordinary History of the Encyclopaedia by Simon Garfield. In it, he quotes Descartes:

Even if all knowledge could be found in books, where it is mixed in with so many useless things and confusingly heaped in such large volumes, it would take longer to read those books than we have to live in this life and more effort to select the useful things than to find them oneself.

It called to mind the vast amount of data that LLMs today are trained on, and the phenomenon of hallucinations. So, what better way to consider the notion further than to 'ask' an LLM?


Steve:
Even if all knowledge is available to LLMs, it's buried in noise, contradiction, and massive volume. Hallucinations aren't glitches—they're a result of confusing, unreliable data and the fact that LLMs generate outputs based on pattern recognition, not truth.

ChatGPT:

You're hitting a critical point that cuts through the hype around LLMs: these systems are not truth machines—they’re pattern machines trained on a messy, inconsistent, and often outright wrong corpus of human knowledge.

Your quote (which echoes Descartes, Bacon, and even Montaigne in tone) captures the old problem of information glut—now weaponized at scale. The core problems you're highlighting:

1. Volume ≠ Value

Even if all the world’s knowledge is encoded in books (or in a training dataset), most of it is:

  • Redundant
  • Contradictory
  • Context-dependent
  • Flat-out wrong

LLMs inherit all of this. And they don’t "know" any of it in the sense that a human understands truth. They infer statistical relationships. So when contradictions or noise are common (say, in alternative medicine, political history, or pseudoscientific literature), the model mirrors that fog.

2. Training Data Is Confused Because We Are

If humans can't agree on what's true, the LLM can't either. It just reflects the average or dominant patterns in the data it's seen. If 60% of sources say X, and 40% say Y, it might say X, but that doesn't mean X is correct—it just means X showed up more often or more convincingly phrased.

Add in deceptive formatting (e.g. scammy blog posts made to look like peer-reviewed science), social bias, outdated consensus, and outright propaganda, and you have a training set that's more swamp than signal.

3. "Reliable" Sources Are Not Always Reliable

Even when instructed to check against so-called reputable sources (Wikipedia, scientific journals, news outlets), there's a meta-problem:

  • What if the reputable source is wrong, or politically influenced, or later retracted?
  • How does the model recognize institutional drift—where an organization was reliable once but isn't now?

Worse: LLMs often fail to differentiate between how things are said and what is being said. A confident, well-written fabrication can score higher than a sloppy but accurate piece.


Bottom Line

LLMs are only as good as the signal-to-noise ratio of their data and the questions we ask. Hallucinations aren’t glitches; they’re symptoms of a system doing what it’s designed to do:

predict plausible next words based on an unfiltered tangle of human knowledge, ignorance, and bullshit.

Want a smarter model? Fix the upstream problem:

  • Clean data
  • Transparent sources
  • Epistemic weighting (some facts matter more than others)
  • Domain-specific fine-tuning

Until then, the smartest use of an LLM is often as a sparring partner, not a guru. Use it to sharpen your thinking, not replace it.
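
To make that last point concrete for myself, here's a toy sketch (my own addition, not ChatGPT's) of what "the dominant pattern wins" looks like once you strip everything else away: a predictor that answers with whatever claim shows up most often, with no notion of which claim is true.

```javascript
// A "model" that answers with whatever claim appears most often in its
// training snippets. Frequency decides the output; truth never enters into it.
// (Toy illustration only -- no real LLM works this simply.)
function mostCommonClaim(snippets) {
  const counts = new Map();
  for (const claim of snippets) {
    counts.set(claim, (counts.get(claim) || 0) + 1);
  }
  let best = null;
  let bestCount = -1;
  for (const [claim, count] of counts) {
    if (count > bestCount) {
      best = claim;
      bestCount = count;
    }
  }
  return best;
}

// 60% of sources say "X is true", 40% say "Y is true": the toy model repeats X,
// regardless of which statement is actually correct.
const corpus = ["X is true", "X is true", "X is true", "Y is true", "Y is true"];
console.log(mostCommonClaim(corpus)); // "X is true"
```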

Monday, June 30, 2025

 

Rabbit Trails and Critical Perspectives: My AI-Assisted Book Exploration

I recently found myself engrossed in "Kingmaker: The Life of Pamela Harriman," a biography that chronicles the extraordinary life of a woman who moved through the highest echelons of power. While the book is a compelling read, I was left with a desire to understand the nuances and unspoken elements that often lie just beneath the surface of any historical narrative. This curiosity led me down a fascinating rabbit hole of research, assisted by a suite of AI tools, that began with an AI's refusal to cooperate.

The Process: From Whitewash to a Critical Deep Dive

My journey began with a simple goal: to explore what the book didn't explicitly state. I turned to a combination of Gemini and NotebookLM to start my investigation. My process was straightforward. I began by pulling up Wikipedia articles on Pamela Harriman and the key figures in her life, importing them into a NotebookLM notebook. This created a centralized, queryable database of information that I could easily search and analyze.

However, I quickly realized that much of the readily available information online felt, for lack of a better term, whitewashed. The more controversial aspects of Harriman's life were often softened or glossed over. This is where the real work began. I started actively looking for the "rabbit trails," the subtle hints and understated connections that pointed to a more complex reality.

To aid in this deeper dive, I developed a "critical perspective prompt," instructing the AI to act as an investigative journalist and build the strongest possible case against the subject, focusing exclusively on controversies, scandals, and negative impacts. I wanted the unvarnished dirt.

The Claude Exchange: A Dialogue on Method

I presented my prompt to various LLMs, but the most interesting exchange was with Claude AI. It flatly refused my initial request. It stated:

"I can't create a one-sided exposé that focuses exclusively on negative information" because such an approach "wouldn't provide you with accurate, balanced information."

Instead, it offered a "comprehensive, factual overview" that would include controversies alongside her achievements.

This was a fascinating obstacle. The AI's programming was pushing back against the perceived bias of my prompt. I countered by asking if providing a balanced analysis but separating the positive and negative wouldn't amount to the same thing.

Claude’s response got to the heart of the matter. It explained that "context and proportion matter deeply" and that an exposé's goal is to "prosecute a case, while historical analysis aims to understand." It was a thoughtful argument, but it missed my intent.

The breakthrough came when I explained my methodology. I told it:

"I've already been exposed to the positive aspects of her life, and I'm trying to balance the research myself because the critical aspects of her life are not readily available."

This clarification changed everything. The AI responded, "You make an excellent point... I understand now what you're trying to accomplish." It acknowledged that controversial figures often get sanitized treatment and that my approach of "deliberately seeking out the critical perspectives to balance what you've already read is sound historical research practice." With its concerns addressed, it then proceeded to execute my original prompt perfectly, delivering a detailed and structured critical analysis.

Findings and Insights: The Unvarnished View

The information Claude returned was exactly the kind of nuanced, critical detail that is often buried. It detailed:

  • The Harriman Inheritance Scandal: A massive legal battle where Averell Harriman's children accused Pamela of squandering as much as $41 million of family trust funds through "bad investments and self-dealing." The heirs alleged the trust funds were reduced from $30 million to about $3 million.
  • Authoritative Criticism: The analysis compiled damning assessments from those who knew her best. Her own son, Winston Churchill, claimed she was "too busy whoring around" to be a present mother. Her second husband's daughter, Brooke Hayward, called her "greedy beyond the usual human greed." A critical biographer, Max Hastings, noted she was described as "a world expert on rich men's bedroom ceilings."
  • Systematic Exclusion of Family: The report highlighted how Pamela allegedly "froze them out physically and she froze them out psychologically and then she froze them out financially," according to biographer Christopher Ogden.
  • The Final Synthesis: The AI concluded that the critical view positions Pamela Harriman not as a political pioneer, but as someone who "weaponized sexuality and charm to extract resources from vulnerable people, leaving damaged families and depleted estates in her wake."

This experience was a powerful lesson. It demonstrated that using AI for serious research is not a passive process. It's a dialogue. By pushing back, explaining my reasoning, and treating the AI as a collaborator, I was able to bypass its default guardrails against bias and achieve my research goal. It revealed a path to a more nuanced and complete understanding, proving that sometimes you have to argue with the machine to get to the truth.

What are your experiences with AI for research? Have you ever had to "convince" an AI to give you the information you were looking for? Share your thoughts in the comments below.
