Monday, March 30, 2026

The Letter

A Letter From the Past

Going through some papers in a constant effort to downsize, I came across a copy of a typewritten letter. It was written by one William Anthony Cracchiola, son of William Vito Cracchiola. William discovered in the early 2000s, through military records, that his father had three dependents—a discovery that introduced him to his half siblings.

Unraveling the Family Tree with AI

After discovering William's letter, I used AI to extract key names, dates, locations, and relationships — identifying William Vito Cracchiola as the common father, Mary Christina Isabella as his second wife, and the three daughters (Marion, Eleanor, Charlene) from his first marriage. I cross-referenced these details with Ancestry.com and Find a Grave records to confirm birth/death dates, parental lineage (Benardo and Antonetta Isabella), and burial sites. Actually, I used Claude via the Chrome browser assistant to browse the various genealogy sites and extract even more facts about the family. Then, it directly created two missing WikiTree profiles: one for William Anthony Cracchiola and one for his mother, linking both to the existing profile of William Vito.

The Future of Genealogy Research

Watching the extension work on its own is amazing. Knowing that by this time next year it will be commonplace compels me to consider how much more complete the genealogy tree will be.

I was an only child

Sunday, March 29, 2026

Correspondence

Correspondence in the age of AI


A few weeks ago, I came across some correspondence addressed to my wife Kim from a friend of hers in North Carolina. The cards were sitting in a pile bound for the shredder, but before I disposed of them, I took pictures. 

I uploaded the photos to ChatGPT for text extraction. Because the cards were dated, it arranged them in chronological order — and then it summarized them.


💬 A Note on This Process

What AI can do is always interesting, and amazing at times. I know Kim has more correspondence from Donna, but it's not a priority to find it. We are too busy living our best life.

I did ask ChatGPT to write a blog post about the process. It was much too wordy, but I've left it below for posterity.


📋 Narrative Summary

These cards form a short, chronological snapshot of Donna's life from mid-2023 through spring 2025 as she writes to Kim.

Work & Daily Life

Donna works in healthcare at West Point, often on overnight shifts. She prefers nights because......

Family & Personal Life

Family structure becomes increasingly central to Donna's life, especially with the t............................................daily logistics................

Overall Tone

Across all the letters, the tone remains steady and practical. The correspondence conveys resilience, humor, and a stable routine shaped by work, health management, and the arrival of a new generation in the family. What's notable is not any single detail, but the stability of the overall pattern. The writer does not drift dramatically in outlook or circumstance. Instead, the narrative shows continuity — adaptation without collapse, routine without stagnation.


📝 From Handwritten Cards to Structured Insight: Turning Personal Correspondence into Usable Narrative

(Written by ChatGPT — kept for posterity, though admittedly a bit long-winded.)

There's something deceptively simple about a stack of handwritten cards. At first glance, they look like fragments — seasonal updates, casual notes, small talk. But when examined collectively, they form something far more valuable: a longitudinal record of a life in motion.

Step 1: Extraction

The process begins with extraction. Handwriting, especially informal cursive, is inconsistent. Dates may be abbreviated, sentences run together, and punctuation is optional at best. The goal at this stage is fidelity, not interpretation. Every word is captured as written, preserving tone, quirks, and ambiguity. This matters because small details — hesitations, repetition, phrasing — often carry meaning that gets lost in early summarization.

Step 2: Transcription & Organization

Once transcribed, the material shifts from image to text. That's the first major transformation. Text can be searched, reorganized, and compared across time. Patterns begin to emerge almost immediately. Recurring topics — health, work, family — become visible threads rather than isolated mentions.

Step 3: Normalization

The next phase is normalization. Dates are aligned. Events are sequenced. Inconsistent references ("this fall," "last May") are anchored to a timeline. This step converts anecdotal fragments into a coherent chronology. Without it, narrative distortion is almost guaranteed.
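The sequencing and anchoring described above can be sketched in a few lines of Python. This is a minimal illustration, not the actual pipeline: the card dates and text are invented, and the "last May" resolver is one hypothetical example of anchoring a relative reference to a timeline.

```python
from datetime import date

# Hypothetical card excerpts (dates and text invented for illustration);
# real input would come from the transcription step.
cards = [
    (date(2024, 11, 2), "Started the new overnight rotation."),
    (date(2023, 6, 14), "First card of the batch."),
    (date(2025, 4, 20), "Spring update."),
]

# Sequencing: sort the fragments by the date written on each card.
timeline = sorted(cards, key=lambda card: card[0])

def resolve_last_may(written: date) -> date:
    """Anchor a relative reference like 'last May' to the calendar:
    the most recent May before the card's writing date."""
    year = written.year if written.month > 5 else written.year - 1
    return date(year, 5, 1)
```

With the fragments sequenced, "last May" in a card written in November 2024 resolves to May 2024, while the same phrase in an April 2025 card also points back to May 2024 — exactly the kind of anchoring that prevents narrative distortion.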

Step 4: Synthesis

Then comes synthesis. This is where the raw text is compressed into meaning — not just "what was said," but "what persists across time." Several stable themes emerge:

  • A chronic health condition being actively managed but not defining daily function
  • A consistent work environment, with deliberate choices about schedule and workload
  • A family structure that becomes increasingly central, especially with the transition to grandparenthood
  • A tone that remains pragmatic, even when describing uncertainty or physical limitation

What's notable is not any single detail, but the stability of the overall pattern. The writer does not drift dramatically in outlook or circumstance. Instead, the narrative shows continuity — adaptation without collapse, routine without stagnation.


🔍 What This Makes Possible

At this point, the material has moved from "letters" to "data." That opens up several additional possibilities.

1. Timeline Construction

Events can be plotted precisely: diagnosis points, job changes, family milestones. This is useful for both personal history and medical context. It also allows cross-referencing with external records if needed.

2. Health Trajectory Mapping

Mentions of symptoms, treatments, and perceived effectiveness can be tracked over time. This creates a subjective health record that complements clinical data — particularly valuable for conditions that are chronic or slowly evolving.

3. Emotional Baseline Analysis

Sentiment across letters reveals an emotional baseline. There's no dramatic oscillation. That consistency itself is a signal — suggesting resilience, or at least a stable cognitive framing of events.

4. Social Proximity & Support Structure

Physical proximity of family (living across the street) emerges as a key structural factor. It influences childcare, emotional tone, and daily logistics. This kind of detail is easy to overlook in isolated readings but becomes obvious in aggregate.

5. Narrative Reconstruction

Perhaps the most powerful outcome: the ability to reconstruct a life narrative that was never explicitly written as one. The cards were not intended to be a story — but together, they become one.


💡 The Bigger Picture

This process highlights a broader point. Most people already generate rich longitudinal data about their lives — through emails, texts, cards, and notes. The limitation isn't data scarcity. It's lack of structure.

Once structured, even small, informal artifacts can support:

  • Personal history reconstruction
  • Medical context enrichment
  • Family archive building
  • AI-assisted memory systems
  • Biographical writing

The key constraint is accuracy during the early stages. If transcription or sequencing is flawed, every downstream interpretation inherits that error. But when done carefully, the transformation is substantial.

A handwritten card is ephemeral. A structured narrative is durable. The gap between the two is not technological — it's methodological.

Thursday, March 19, 2026

River Cruise with AI

Budapest to Prague: Planning Our First River Cruise with AI

I've spent the past two days using Comet, Claude, and Gemini to help plan a trip to Europe this coming August. I could not imagine doing it without AI help. It was over a year ago that Steve Mays said, more than once, that it would only get better.

The Route

We have a river cruise booked with AMA Waterways this coming August. It starts in Budapest and ends in Prague. We decided to stay in Krakow for a few days prior to the trip because Kim wants to see Auschwitz.

Decisions, Decisions

I considered staying in Warsaw too, but after much deliberation, and heavy AI use, decided it wasn't the best use of resources. Getting around Poland as foreigners means weighing the logistics: avoiding as many issues as possible, carting our luggage around, and maximizing our enjoyment by seeing as much as there is to see without being overwhelmed.

Getting There and Around

Planes, trains, and automobiles. There is a lot to consider, including the issues that can come up along the way. We will have to be mindful of our safety, keeping our wits about us while still enjoying ourselves. That is hard to do if you are wrestling with logistics day to day.

Why It All Works

Transportation that makes sense. Accommodations that are comfortable without breaking the bank. Weeding out the shit from the things that are worthwhile to visit. Experiencing good local food. The investigation is made so much easier with Google Maps and AI double-checking that one plan doesn't interfere with another.


A Note from Claude

I helped with this post — and I'm happy to say so. The words, the trip, the decisions, and the excitement are entirely Steve's. What I contributed was polish: I corrected grammar, spelling, and punctuation, applied structure with headers, bolded key place names and tools for scannability, added a touch of italics for voice, chose a title that captures the journey, and styled the headings with color and alignment.

On the technical side, I worked directly inside the Blogger editor via its content iframe — reading and rewriting the post's HTML in place, targeting elements by reference, and dispatching input events so the editor registered every change. No copy-paste, no external tools. Just me, the DOM, and a bit of finesse.

This is what AI-assisted writing looks like when it works the way it should: the human brings the story, and I help it land. — Claude

Tuesday, December 16, 2025

Drug Company Hack

 

Choline-Rich Foods Missing From the Diabetes Breakthrough Story

A recent article titled "A Tiny Gut Molecule Could Transform Diabetes Treatment" describes how gut microbes convert dietary choline into trimethylamine (TMA), which then helps reduce inflammation and improve insulin sensitivity. The article notes that choline is a natural nutrient “found in several foods” but does not actually name any of these foods, leaving readers with no practical guidance on what to eat.

From Abstract Mechanism to Practical Eating

The core scientific finding is that when choline reaches the gut, microbes convert it into TMA, which can bind to IRAK4, dampen inflammation, and help restore normal blood sugar control in the context of a poor or high-fat diet. This is a genuinely important shift, because it ties a specific dietary nutrient and microbiome metabolite to metabolic protection, not just to risk. 

However, the article’s language stops at the biochemical mechanism and never crosses into concrete dietary examples, despite explicitly stating that choline is “present in several foods” or “found in common foods.” For anyone trying to act on this information, that omission matters as much as the science itself. It is blatantly obvious that the article is not about helping people act on the information; it is a hack job to promote the agenda of drug companies looking for another way to get more out of the insurance companies with expensive drugs.

What the Article Doesn’t Say: Actual Choline-Rich Foods

Choline is not rare or exotic; it is widely distributed in everyday foods, with the richest sources coming from animal products. Major nutrition references list meat, poultry, fish, eggs, and dairy as primary choline contributors in typical Western diets. 

  • Eggs (especially yolks): One of the most concentrated and convenient choline sources, often highlighted in dietary surveys and nutrient databases. 
  • Organ meats: Beef and chicken liver are among the highest choline foods measured, with very high milligram-per-serving values.
  • Other meats and poultry: Beef, pork, chicken breast, and turkey provide substantial choline and are major contributors to intake in many populations. 
  • Fish and seafood: Salmon, cod, other lean fish, and even caviar/fish roe supply meaningful choline while also adding omega-3 fats.
  • Dairy products: Milk, yogurt, and other dairy foods contribute steady background choline through frequent consumption. 
  • Cruciferous vegetables: Broccoli, cauliflower, Brussels sprouts, and cabbage are notable plant sources that show up repeatedly in choline source lists. 
  • Legumes and other plant sources: Beans, peas, lentils, soybeans, peanuts, potatoes, and some nuts, seeds, and whole grains supply smaller but important amounts, especially for people eating less animal food. 

Why Leaving Out the Food List Is a Problem

By not naming a single choline-rich food, the article forces motivated readers to do extra work just to translate “dietary choline” into a grocery list. That gap is especially striking because public health and nutrition references already provide clear examples and even rank choline sources by contribution to intake.

Readers are essentially told that a common nutrient in “several foods” might help protect against insulin resistance via the microbiome, but are not given any practical way to identify or prioritize those foods.  For people trying to modify diet as part of diabetes prevention or management, that is a missed opportunity.

There is also a significant economic angle to this omission. By focusing almost exclusively on the molecular mechanism—and hinting at future pharmaceutical applications or specialized supplements—the narrative prioritizes interventions that patients will eventually have to pay for. This framing sidelines the most immediate and cost-effective solution: dietary change. While developing new treatments is valuable, it should not obscure the fact that the 'breakthrough' molecule can be fueled right now by affordable, non-prescription foods available at any grocery store. Prioritizing patentable solutions over basic nutrition effectively gatekeeps a health benefit that could otherwise be accessible to everyone immediately.

Connecting the Science to the Plate

If the goal is to help the average person eat in a way that supports the beneficial TMA pathway described in the study, the logical next step is to highlight actual food choices. That does not require overselling choline as a miracle solution; it simply means placing the mechanistic finding in the context of realistic meals built from known choline sources.

Examples could include meals that combine animal and plant choline sources, such as eggs with cruciferous vegetables, fish with beans, or meat paired with potatoes and a side of broccoli. Even a short table of “higher-choline foods” in the original article would have made the research immediately more actionable for readers living with, or at risk for, type 2 diabetes. 

Until popular coverage starts naming the foods alongside the molecules, the burden stays on readers to bridge the gap between elegant biochemistry and everyday eating. Given how straightforward the choline food data already are, that is an easy fix that would make microbiome and metabolism research far more useful outside the lab. 

But of course, they do not want to fix that. Pharmaceutical companies do not make money encouraging patients to make dietary changes.

Monday, December 15, 2025

Lab Comparison Dec2025

 

Lab Results Comparison: Pre-Treatment vs. Post-Cycle 1

This comparison tracks key blood markers between October 17, 2025 (prior to starting treatment) and December 11, 2025 (following the first cycle of BR therapy).

Key Takeaway: The most notable change is the significant drop in Lymphocytes, which is the expected mechanism of the Rituximab targeting the B-cells. Kidney function has improved to normal levels, while Liver AST remains stable but slightly elevated.


1. Complete Blood Count (CBC)

Marker                    Oct 17 (Pre-Tx)   Dec 11 (Current)   Trend/Status
WBC (White Blood Cells)   3.3 K/uL          3.5 K/uL           Still Low
Lymphocytes (#)           0.93 K/uL         0.52 K/uL          Decreased (Expected)
Neutrophils (#)           1.67 K/uL         1.73 K/uL          Stable (Low)
Monocytes (%)             12.5 %            22.3 %             Increased
Platelets                 159 K/uL          164 K/uL           Normal
Hemoglobin                14.3 g/dL         15.0 g/dL          Normal

2. Metabolic & Organ Function

Marker                   Oct 17 (Pre-Tx)    Dec 11 (Current)   Trend/Status
BUN (Kidney)             23 mg/dL (High)    16 mg/dL           Normalized
eGFR (Kidney Function)   84 mL/min (Low)    >90 mL/min         Normalized
AST (Liver)              49 U/L (High)      50 U/L             Stable (High)
LDH                      Not Listed         273 U/L            High

Note on Trends:
  • Lymphocytes: The decrease from 0.93 to 0.52 is a direct result of the immunotherapy.
  • Kidney Health: Great news on the BUN and eGFR returning to optimal range.
  • LDH: Currently at 273 U/L. This is a marker often tracked in lymphoma and will be monitored in future cycles.
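The trends in the notes above can be quantified with a short Python sketch. The values are copied from the tables in this post; the percent-change helper is just one simple way to express the shifts, not anything from the lab reports themselves.

```python
# Lab values copied from the tables above (same units per marker).
pre  = {"WBC": 3.3, "Lymphocytes": 0.93, "Neutrophils": 1.67, "Platelets": 159}
post = {"WBC": 3.5, "Lymphocytes": 0.52, "Neutrophils": 1.73, "Platelets": 164}

def pct_change(before: float, after: float) -> float:
    """Percent change from the pre-treatment draw to the current draw."""
    return round((after - before) / before * 100, 1)

changes = {marker: pct_change(pre[marker], post[marker]) for marker in pre}
# Lymphocytes drop roughly 44% (the expected Rituximab effect), while
# WBC, neutrophils, and platelets move only a few percent.
```

Putting numbers on it makes the pattern easy to see: a ~44% drop in lymphocytes against single-digit movement everywhere else.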

The Narrative Comparison

Comparing the baseline lab results from October 17, 2025, against the latest post-treatment blood work from December 11, 2025, reveals how the first cycle of Bendamustine and Rituximab (BR) is impacting my system. The most distinct change is visible in the Complete Blood Count. While my overall White Blood Cell count (WBC) has remained relatively stable, hovering at a low 3.3 K/uL in October and 3.5 K/uL now, the composition of those cells has shifted. Specifically, my absolute Lymphocyte count dropped significantly from 0.93 K/uL to 0.52 K/uL. This reduction is a hallmark of Rituximab therapy, which is designed to target and deplete B-lymphocytes. Meanwhile, my Neutrophils (the cells that fight bacterial infections) have remained stable, moving slightly from 1.67 K/uL to 1.73 K/uL, and my Platelets are holding steady in the normal range, currently at 164 K/uL.

On the metabolic front, there is good news regarding kidney function. In October, my Urea Nitrogen (BUN) was elevated at 23 mg/dL, and my eGFR (filtration rate) was slightly low at 84 mL/min. The December results show a complete normalization of these markers, with BUN dropping to a healthy 16 mg/dL and eGFR rising to >90 mL/min. Liver function remains consistent with previous months; the AST enzyme is still slightly elevated at 50 U/L, virtually unchanged from the 49 U/L seen in October. A new marker tracked in this cycle is Lactate Dehydrogenase (LDH), which came in at 273 U/L, a value slightly above the reference range that will likely be monitored as a standard marker of cell turnover during lymphoma treatment.

Thursday, December 11, 2025

Hosting LLMs

 LOCAL LLM

Using AI to set up a Linux environment for hosting a Large Language Model. 

I knew it could be done, but it seemed like a daunting undertaking. I also knew things were moving fast, and hardware for such projects was becoming more prevalent.

I bought a machine: a Framework Desktop with the latest AMD processor and 128GB of LPDDR5x-8000 memory. When it arrived, it took about a day to install some drives, get Ubuntu installed, and log in over an SSH connection. With help from Gemini and Perplexity, I had Ollama installed and some small models downloaded, and started chatting through Open WebUI and AnythingLLM.

Then I discovered Donato Capitella and his repository at GitHub. He also does YouTube videos about all things tech, and has been diving deep into LLMs. He's benchmarked many models on the Framework computer that I bought for this project, so he's done much of the work of optimizing the hardware, and of course shared it at the link above.

I have been pushing the limits of LLMs for months now, so rather than follow his guide I decided to test how much further one could get with the help of AI, letting an LLM do all the driving. I have Ubuntu installed as mentioned, but the guide above assumes a Fedora installation. He refers anyone interested in Ubuntu to a repository by one Pablo Ross. I uploaded the markdown files from there to a Google NotebookLM, then had it generate a guide. 

Started up Claude, uploaded this guide, then used this prompt, "Use this document as a guide for deploying LLMs on my machine. Help, step by step. When i execute the commands, I want you to analyze the output then guide the setup through the end, when we have a working LLM locally."

And so we began. When it was done, later that day, I was querying a small model, using the resources optimized using the suggestions based on Donato's guide.

Only a year ago it would have taken me months to get that far. AI can truly be useful for such projects. I deliberately used the commands suggested by Claude, looking into the reasons for them, but refrained from deviating from the process put forth. It shows that anyone with limited knowledge and experience can deploy LLMs, and probably tackle other such projects, much faster than was possible before OpenAI made AI relatively popular.

Wednesday, December 03, 2025

AI and The Self

 AI and The Self

There are many discussions between me and Steve Mays concerning AI in general. He thinks deeply about philosophical notions and artificial intelligence, and at times the concepts intersect in his thoughts and writing.

His blog can be found here. 

Lately I've been using his posts as ideas for testing LLMs. One of his posts includes a poem from the perspective of AI. I used it to test Gemini, Claude and Perplexity deep research functions. They have tools for presenting content formatted for the Web, or for direct publication of generated content.

Claude report on AI and The Self

Gemini Report

Gemini report as website

Perplexity Pages

He has suggested that philosophical concepts can come across as so much bullshit, and has expressed the notion that AI generated content might be indistinguishable from such concepts. What follows is the AI equivalent of something Steve might express, based on our conversations and his blog.

"Most philosophical discourse is just conversations with Blaiser—garrulous and bombastic enough to seem substantive, but ultimately just a bunch of bullshit. Self-help gurus in particular excel at this: they fill pages with big words that sound profound, but when you strip away the elaborate phrasing, there's nothing there. It's the intellectual equivalent of an LLM hallucinating—statistically plausible patterns that mimic meaning without actually containing any."

Stanford has extensive content on philosophical concepts and figures. An example is Kant and the mind. Could one tell the difference between this Stanford content on Kant and the AI publications?

From the Claude report:

"The poem invokes mystics by name: Meister Eckhart, Rumi, Julian of Norwich. This situates its claims within a tradition of Unio Mystica—mystical union—while subtly transforming the object of union from God to humanity's data."

I put this to Perplexity and asked for its interpretation:

"This poem might be read as either profound or as exactly the kind of "garrulous and bombastic" language that sounds like it means something deep while actually being pattern-matching all the way down."

