Thursday, September 11, 2025

Another ChatGPT conversation

It started with a quote presented to me...


"A Stanford University study found that AI adoption has caused a 13% decline in employment for workers aged 22-25 in AI-exposed occupations, while older workers in identical roles remain largely unaffected"

I put it to ChatGPT:


"What are the possibilities that some of the decline is due to employers using AI as an excuse to streamline their workforce. In other words, AI is an excuse and not necessarily a cause"

The response is below, but it also offered to dig into "the dataset appendix to see how much of this decline is explained by actual AI adoption intensity at firms versus sector-wide trends"

I said yes, and after a lengthy response this was presented at the tail end:

What this means for your “AI as excuse” hypothesis

  • The dataset does control for non-AI shocks, so the measured effect isn’t just macro noise.

  • But, the mechanism isn’t necessarily “robots replacing humans” — it’s very plausible firms are using AI adoption as a rationale to stop hiring younger workers and keep a leaner, older, more experienced staff.

  • In other words: AI isn’t always the cause — sometimes it’s the justification.



Thursday, September 04, 2025

Project De-Bullshitification: A Field Guide to Hunting Profundity

It starts, as it so often does, with a conversation. My brother, like many others, has fallen under the spell of Jordan Peterson. He speaks of the man’s ideas as “profound,” and points to the dense, academic language as proof. The logic is a curious one: if you have to consult a dictionary to understand a sentence, the sentence must contain a deep truth. It’s a compelling idea, but one that demands interrogation. Is it genuine depth, or is it a rhetorical smokescreen?

This question is the starting point for a project: a systematic effort to go through Peterson's work and distinguish the genuinely insightful from the intellectually fraudulent. The goal is not to dismiss everything out of hand—that would be intellectually lazy. The goal is to hunt for the bullshit, to isolate it, and to understand the function it serves.

Step One: Isolating a Specimen

Every hunt needs a target. I turned to my digital toolkit, using NotebookLM to scan Peterson’s writings for the kind of "garrulous and bombastic" language my brother found so profound. The tool provided a list of suspect phrases, a rogues' gallery of academic-sounding terms. From that list, I pulled a perfect specimen:

"Science allows for increasingly precise determination of the consensually validatable properties of things, and for efficient utilization of precisely determined things as tools (once the direction such use is to take has been determined, through application of more fundamental narrative processes)"

This sentence is a masterpiece of its kind. It sounds intelligent, respectful of science, and philosophically rigorous. It’s the kind of statement that makes you nod along, feeling smart for simply having read it. It was the perfect place to begin the dissection.

Step Two: The Dissection

The first step in any analysis is to translate the jargon. Stripped of its academic costume, the sentence says something quite simple: Science is great for figuring out how things work and building tools, but our myths and stories must tell us what to do with them.

On the surface, this seems reasonable. But the bullshit is often in the setup, not the punchline. A critical look reveals the rhetorical tricks at play:

  1. The False Choice: It presents only two options—directionless science or science guided by "narrative"—conveniently ignoring the entire history of secular ethics.

  2. The Power Grab: By calling narratives "more fundamental," it performs a subtle but significant demotion of science, placing it in the role of a mere tool awaiting instruction from its mythological masters.

  3. The Hidden Premise: It smuggles in the unproven assumption that the only valid source of human values is ancient story, a premise that just so happens to be the foundation of Peterson's entire worldview.

What appeared to be a profound statement on the limits of science was, under scrutiny, a Trojan horse designed to subordinate reason to myth.

The Path Forward

This single sentence is a microcosm of the larger challenge. The book is filled with similar linguistic traps: The Logos, The Dominance Hierarchy, Chaos and Order. Each term carries a similar veneer of academic weight, and each demands the same rigorous, critical dissection.

The project is now clear. I will proceed, chapter by chapter, not as a cynic dismissing everything, but as a surgeon with a scalpel. The goal is to weed out the bullshit, to separate the parts that are genuinely insightful from the parts that are merely dressed up to look that way.

The hunt is on.


On the Irony of the Algorithm

To be clear, it’s not lost on me that there is a deep irony in using an AI to help dissect the work of a man who champions the human "Logos." One could easily label this project intellectually lazy (outsourcing the hard work of critical thinking to a machine), and the accusation deserves to be met head-on.

Saturday, August 02, 2025


Recipe Site, Round Two: I Make the AI Do the REAL Work

So after my little jaunt with Gemini writing code for the digital recipe bin, I figured why not really test the boundaries of what these browser AIs can handle? Enter Comet Assistant—a so-called “browser-native” smart helper. Sounds snazzy. Turns out, it’s a whole circus act and I’m the guy with the whip, minus the power or the sequined suit.

First, the easy stuff.

I asked it to crawl through my GitHub repo and sort the recipes alphabetically. Miracle—success, in a “this is what computers should have done twenty years ago” sort of way. Code got tweaked, buttons pressed, now the list is cleaner than my fridge ever was.

Not content to rest on its digital laurels...

I threw it a real task: parse a new recipe out of a PDF (try that by hand some Friday night), convert it to JSON, and plug it into my site’s machinery. This, dear reader, is where things get interesting. See, Comet can read anything in the browser, slice and dice it, and even make JSON out of my grandmother’s chicken scratch—assuming grandma’s recipe was hosted on GitHub and formatted like computer code. But download a file? Move it around? “Sorry, that’s your job, meatbag.” I’m still pushing buttons like it’s 1999.

I pitted it against Google Drive. First pass? It failed—threw its digital hands in the air. But here’s the difference between toaster-AI and modern software: Comet actually kept at it. I flipped some windows, clicked into folders, and with the right page “visible,” it did the damn comparison. It doesn’t give up, it just demands you do a bit of the heavy lifting. Not exactly SkyNet, but I’ll take the hustle.

A few more rounds and the whole process was humming. New recipe? I parse the PDF, Comet turns it into JSON, slaps it right into the repo—and even updates the homepage like magic. Except when it doesn’t. Except when it can’t. That’s when I’m nudging it, spoon-feeding which window to look at, because it can “do anything you see in the browser” but not one click more.

Here’s what the friendly browser-brain gets right:

  • Editing files, updating lists—works like a charm (if you’re already in the right window).
  • Parsing, converting, structuring data—you’d swear a human intern was doing the grind, only with fewer typos and no coffee breaks.
  • If it screws up, you navigate a little, poke it again, and it actually gets back up for round two.

Here’s what gets lost in translation:

  • File downloads? Not a chance.
  • Anything that smells like “drag and drop” or “upload from my desktop”—no dice.
  • If your data is hiding outside a web window, you, dear reader, are the delivery service.

So what did I learn, besides never trusting an AI with your grocery list? These so-called “assistants” work best as the world’s most stubborn, literal-minded, patient intern. Give them structure—they shine. Expect magic, get a shrug and a cheerful “Can’t do that, but here’s how you can.”

I won’t say Comet replaced hours of manual drudgery. It’s more like owning a self-driving car that occasionally hands you the wheel and says, “I don’t do left turns.” Still, my recipe book is smarter, my code is cleaner, and I get to pretend I’ve automated my kitchen—well, the paperwork at least.

Next time, maybe I’ll put it up against the ultimate test: three ingredients, no power, and one AI that can’t boil water.

Thursday, July 24, 2025

LLMs, Hallucinations, and the Myth of Machine Truth

Reading a book called All the Knowledge in the World: The Extraordinary History of the Encyclopaedia by Simon Garfield. In it he quotes Descartes:

Even if all knowledge could be found in books, where it is mixed in with so many useless things and confusingly heaped in such large volumes, it would take longer to read those books than we have to live in this life and more effort to select the useful things than to find them oneself.

It called to mind the vast amount of data that LLMs today are trained on, and the phenomenon of hallucinations. So, what better way to further consider the notion than to 'ask' an LLM.


Steve:
Even if all knowledge is available to LLMs, it's buried in noise, contradiction, and massive volume. Hallucinations aren't glitches—they're a result of confusing, unreliable data and the fact that LLMs generate outputs based on pattern recognition, not truth.

ChatGPT:

You're hitting a critical point that cuts through the hype around LLMs: these systems are not truth machines—they’re pattern machines trained on a messy, inconsistent, and often outright wrong corpus of human knowledge.

Your quote (which echoes Descartes, Bacon, and even Montaigne in tone) captures the old problem of information glut—now weaponized at scale. The core problems you're highlighting:

1. Volume ≠ Value

Even if all the world’s knowledge is encoded in books (or in a training dataset), most of it is:

  • Redundant
  • Contradictory
  • Context-dependent
  • Flat-out wrong

LLMs inherit all of this. And they don’t "know" any of it in the sense that a human understands truth. They infer statistical relationships. So when contradictions or noise are common (say, in alternative medicine, political history, or pseudoscientific literature), the model mirrors that fog.

2. Training Data Is Confused Because We Are

If humans can't agree on what's true, the LLM can't either. It just reflects the average or dominant patterns in the data it's seen. If 60% of sources say X, and 40% say Y, it might say X, but that doesn't mean X is correct—it just means X showed up more often or more convincingly phrased.

Add in deceptive formatting (e.g. scammy blog posts made to look like peer-reviewed science), social bias, outdated consensus, and outright propaganda, and you have a training set that's more swamp than signal.

3. "Reliable" Sources Are Not Always Reliable

Even when instructed to check against so-called reputable sources (Wikipedia, scientific journals, news outlets), there's a meta-problem:

  • What if the reputable source is wrong, or politically influenced, or later retracted?
  • How does the model recognize institutional drift—where an organization was reliable once but isn't now?

Worse: LLMs often fail to differentiate between how things are said and what is being said. A confident, well-written fabrication can score higher than a sloppy but accurate piece.


Bottom Line

LLMs are only as good as the signal-to-noise ratio of their data and the questions we ask. Hallucinations aren’t glitches; they’re symptoms of a system doing what it’s designed to do:

predict plausible next words based on an unfiltered tangle of human knowledge, ignorance, and bullshit.

Want a smarter model? Fix the upstream problem:

  • Clean data
  • Transparent sources
  • Epistemic weighting (some facts matter more than others)
  • Domain-specific fine-tuning

Until then, the smartest use of an LLM is often as a sparring partner, not a guru. Use it to sharpen your thinking, not replace it.

Monday, June 30, 2025


Rabbit Trails and Critical Perspectives: My AI-Assisted Book Exploration

I recently found myself engrossed in "Kingmaker: The Life of Pamela Harriman," a biography that chronicles the extraordinary life of a woman who moved through the highest echelons of power. While the book is a compelling read, I was left with a desire to understand the nuances and unspoken elements that often lie just beneath the surface of any historical narrative. This curiosity led me down a fascinating rabbit hole of research, assisted by a suite of AI tools, that began with an AI's refusal to cooperate.

The Process: From Whitewash to a Critical Deep Dive

My journey began with a simple goal: to explore what the book didn't explicitly state. I turned to a combination of Gemini and NotebookLM to start my investigation. My process was straightforward. I began by pulling up Wikipedia articles on Pamela Harriman and the key figures in her life, importing them into a NotebookLM notebook. This created a centralized, queryable database of information that I could easily search and analyze.

However, I quickly realized that much of the readily available information online felt, for lack of a better term, whitewashed. The more controversial aspects of Harriman's life were often softened or glossed over. This is where the real work began. I started actively looking for the "rabbit trails," the subtle hints and understated connections that pointed to a more complex reality.

To aid in this deeper dive, I developed a "critical perspective prompt," instructing the AI to act as an investigative journalist and build the strongest possible case against the subject, focusing exclusively on controversies, scandals, and negative impacts. I wanted the unvarnished dirt.

The Claude Exchange: A Dialogue on Method

I presented my prompt to various LLMs, but the most interesting exchange was with Claude AI. It flatly refused my initial request. It stated:

"I can't create a one-sided exposé that focuses exclusively on negative information" because such an approach "wouldn't provide you with accurate, balanced information."

Instead, it offered a "comprehensive, factual overview" that would include controversies alongside her achievements.

This was a fascinating obstacle. The AI's programming was pushing back against the perceived bias of my prompt. I countered by asking if providing a balanced analysis but separating the positive and negative wouldn't amount to the same thing.

Claude’s response got to the heart of the matter. It explained that "context and proportion matter deeply" and that an exposé's goal is to "prosecute a case, while historical analysis aims to understand." It was a thoughtful argument, but it missed my intent.

The breakthrough came when I explained my methodology. I told it:

"I've already been exposed to the positive aspects of her life, and I'm trying to balance the research myself because the critical aspects of her life are not readily available."

This clarification changed everything. The AI responded, "You make an excellent point... I understand now what you're trying to accomplish." It acknowledged that controversial figures often get sanitized treatment and that my approach of "deliberately seeking out the critical perspectives to balance what you've already read is sound historical research practice." With its concerns addressed, it then proceeded to execute my original prompt perfectly, delivering a detailed and structured critical analysis.

Findings and Insights: The Unvarnished View

The information Claude returned was exactly the kind of nuanced, critical detail that is often buried. It detailed:

  • The Harriman Inheritance Scandal: A massive legal battle where Averell Harriman's children accused Pamela of squandering as much as $41 million of family trust funds through "bad investments and self-dealing." The heirs alleged the trust funds were reduced from $30 million to about $3 million.
  • Authoritative Criticism: The analysis compiled damning assessments from those who knew her best. Her own son, Winston Churchill, claimed she was "too busy whoring around" to be a present mother. Her second husband's daughter, Brooke Hayward, called her "greedy beyond the usual human greed." A critical biographer, Max Hastings, noted she was described as "a world expert on rich men's bedroom ceilings."
  • Systematic Exclusion of Family: The report highlighted how Pamela allegedly "froze them out physically and she froze them out psychologically and then she froze them out financially," according to biographer Christopher Ogden.
  • The Final Synthesis: The AI concluded that the critical view positions Pamela Harriman not as a political pioneer, but as someone who "weaponized sexuality and charm to extract resources from vulnerable people, leaving damaged families and depleted estates in her wake."

This experience was a powerful lesson. It demonstrated that using AI for serious research is not a passive process. It's a dialogue. By pushing back, explaining my reasoning, and treating the AI as a collaborator, I was able to bypass its default guardrails against bias and achieve my research goal. It revealed a path to a more nuanced and complete understanding, proving that sometimes you have to argue with the machine to get to the truth.

What are your experiences with AI for research? Have you ever had to "convince" an AI to give you the information you were looking for? Share your thoughts in the comments below.

Friday, June 06, 2025

Photograph Display

Woke up this morning with an idea to see how helpful Gemini can be with simple web pages. I did something similar with ChatGPT. Below is a Gemini-generated timeline. The point: by telling Gemini what I wanted, in the spare time available during a morning routine, something useful can be created with NO knowledge of HTML, CSS, or JavaScript. Still, it was necessary to copy and paste the code Gemini generated somewhere, either into GitHub or a text file, and I'm sure that step alone is beyond most people.

Artwork Gallery: A Development Story

From a simple idea to a fully functional, scalable web app in a single morning. Here's a detailed breakdown of the development journey.

Live Gallery Demo

Project Development Timeline

💡 Project Kick-off & Conceptualization

June 6, 7:43 AM

  • Defined the initial goal: to create a full-screen, aesthetically pleasing artwork gallery for GitHub.
  • Discussed key considerations like image rights, performance, and user experience before coding.

🚀 Prototype v1: Core Functionality

June 6, 7:51 AM

  • Generated the first version of index.html with Ken Burns & cross-fade effects.
  • Successfully uploaded the initial code to a new GitHub repository.

🎮 Iteration v1: User Control & Interactivity

June 6, 7:56 AM

  • Requested specific features: 3-second transitions, pause-on-activity, arrow key navigation, and a commented-out description section.
  • Incorporated all requested user control mechanisms into the code.
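The arrow-key navigation requested above boils down to a small piece of index arithmetic. As a hedged sketch (the real page's variable names and slide count are unknown; everything here is illustrative), the core logic might look like this:

```javascript
// Minimal sketch of slideshow navigation with wraparound.
// slideCount and the element wiring are assumptions, not the actual code.
const slideCount = 5;   // assumed number of images in the gallery
let current = 0;        // index of the slide currently shown

// Advance with wraparound: +1 for the right arrow, -1 for the left.
// Adding slideCount before the modulo keeps the result non-negative.
function step(delta) {
  current = (current + delta + slideCount) % slideCount;
  return current;
}

// In the browser this would be wired to keydown events, e.g.:
// document.addEventListener("keydown", (e) => {
//   if (e.key === "ArrowRight") step(1);
//   if (e.key === "ArrowLeft") step(-1);
// });

console.log(step(1));   // 1
console.log(step(-1));  // back to 0
console.log(step(-1));  // wraps around to 4
```

The same `step` function also serves the timed auto-advance; pause-on-activity is then just a matter of resetting a timer whenever a key or mouse event fires.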

🐛 Troubleshooting & Debugging

June 6, 8:02 - 8:05 AM

  • Identified and confirmed that placeholder images were not loading due to hotlinking restrictions.
  • Provided an updated index.html with new, reliable images to verify core code functionality.

🏗️ Architecture Refinement: Scalability

June 6, 8:14 AM

  • Chose to use a JSON manifest for easier maintenance and scalability.
  • Provided artworks.json (as a catalog) and updated index.html to fetch data from the new file.
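The manifest approach above decouples the catalog from the page: `index.html` fetches `artworks.json` and builds the slideshow from whatever it finds there. A minimal sketch of that idea (the actual schema of `artworks.json` is an assumption; the post only says it acts as a catalog of filenames):

```javascript
// Sketch of the JSON-manifest pattern. A sample manifest is embedded as a
// string so the example is self-contained; the schema is hypothetical.
const manifestText = JSON.stringify({
  artworks: [
    { file: "images/painting-01.webp", title: "Untitled 1" },
    { file: "images/painting-02.webp", title: "Untitled 2" }
  ]
});

// In the real page this would be:
//   fetch("artworks.json").then((r) => r.json()).then(startSlideshow);
// Here we parse the local string instead.
const data = JSON.parse(manifestText);
const slides = data.artworks.map((a) => a.file);

console.log(slides.length, "slides loaded");
```

The payoff is maintainability: adding artwork means appending one entry to `artworks.json`, with no edits to `index.html` at all.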

🖼️ File Organization & Content Integration

June 6, 8:18 AM

  • Created a dedicated 'images' directory on GitHub.
  • Converted personal artwork to the efficient .webp format.
  • Populated the artworks.json file with the filenames of the new artwork.

✨ Final Polish & Feature Lock

June 6, 8:22 AM

  • Requested final visual tweaks: remove the zoom effect and ensure no image cropping.
  • Modified CSS (object-fit: contain) and JavaScript to meet final requirements.
  • Updated the index.html file on GitHub, bringing the project to its current, fully functional state.

Tuesday, May 27, 2025

More LLM use

Recipe Collection

A few weeks ago a video inspired me to try using ChatGPT to create a web app. 

This post and Kim's recipe list are the result.

I've since employed AI models from Perplexity, Gemini and Claude to help consolidate recipes I've collected for seafood pan roasts and Thai curry. 

I followed the pan roast recipe Gemini came up with; it was better than good, though it could probably have used more of the homemade creole spice mix. The result rivaled the pan roast at The Oyster Bar at Palace Station in Vegas, which we've been to several times. That's admittedly a biased opinion, but Kim really liked it too.

The next project will be Thai curry. Gemini's version is also on the list.



Ultimate Instant Pot Thai Chicken Curry

Yields: 6-8 servings

Prep time: 15 minutes

Cook time: Approx. 20 minutes (plus pressure building and release time)

Ingredients

Aromatics & Base:

  • 1 tablespoon vegetable oil (or olive/avocado oil)
  • 1 medium onion, peeled and thinly sliced (or diced)
  • 3-4 cloves garlic, minced or crushed
  • 1-inch piece of ginger, peeled and grated or crushed
  • 1/2 teaspoon fine sea salt (for sautéing aromatics)
  • Cream from the top of 1-2 (13.5 ounce) cans of unsweetened coconut milk (do not shake cans)
  • 1/4 to 1/2 cup Thai curry paste (yellow is predominant; a 4 oz can of Maesri brand is the suggested 1/2 cup)

Protein:

  • 1.5 - 3 pounds boneless, skinless chicken breasts, cut into 1-inch cubes (Option: use chicken thighs, increase pressure cooking time to 8-10 minutes)
  • 1/2 - 3/4 teaspoon fine sea salt (for chicken)

Vegetables (Choose a combination, about 3-4 cups total):

  • 1 - 1.5 pounds potatoes (Petite Yukon Gold or red), quartered
  • 1-2 medium red bell peppers, trimmed, seeded, and cut into strips or diced
  • 1 green bell pepper, diced
  • 1-2 medium zucchini, cut into 1/2-inch thick half-moons
  • 1 diced eggplant
  • 1/2 cup sliced bamboo shoots (optional)
  • Snow peas (add during the "Finish the Curry" step)

Sauce & Seasonings:

  • Liquid from the 1-2 cans of coconut milk
  • 1/2 cup chicken broth (optional, for a thinner sauce if using less coconut milk)
  • 1-2 tablespoons fish sauce (plus more to taste)
  • 1-2 tablespoons soy sauce (plus more to taste)
  • 1 tablespoon brown sugar (plus more to taste)
  • Juice of 1 lime

Optional Flavor Boosters:

  • 1-2 teaspoons curry powder
  • 1 teaspoon coriander powder
  • 1/2 teaspoon cumin powder
  • 1/2 teaspoon dried basil
  • 1 small diced red chili pepper or 1-2 tsp crushed red pepper (for extra heat)
  • Lime zest (from the lime used for juice)

Garnish & Sides:

  • Minced fresh cilantro
  • Minced fresh Thai basil (or regular basil)
  • Lime wedges
  • Jasmine rice, for serving

Instructions

  1. Sauté Aromatics:
    Set your Instant Pot to "Sauté" mode on high.
    Heat the vegetable oil until shimmering.
    Add the onion, bell peppers (if using tougher ones like standard bell peppers), garlic, and ginger.
    Sprinkle with 1/2 teaspoon of salt.
    Sauté, stirring occasionally, until the onion starts to soften, about 3-4 minutes.
    If using optional chili/crushed red pepper and lime zest, add them now and stir for 1 minute.
  2. Fry the Curry Paste:
    Scoop the thick cream from the top of the coconut milk can(s) into the pot.
    Add the curry paste (yellow or green) and the optional curry powder, coriander powder, cumin powder, and dried basil if using.
    Cook, stirring often, until the curry paste darkens slightly and the oil starts to separate from the coconut cream, about 3-5 minutes.
    This step is crucial for flavor.
  3. Pressure Cook:
    Add the cubed chicken to the pot and sprinkle with 1/2 - 3/4 teaspoon salt.
    Stir to coat the chicken with the curry paste mixture.
    Stir in the potatoes (and other firm vegetables like diced eggplant or bamboo shoots, if using).
    Pour in the reserved liquid coconut milk and chicken broth (if using).
    Add the fish sauce, soy sauce, and brown sugar.
    Stir everything well, scraping the bottom of the pot to loosen any browned bits.
    Press "Cancel."
    Secure the lid and set the Instant Pot to "Pressure Cook" (or "Manual") on High Pressure for 5-6 minutes for chicken breast (or 8-10 minutes for chicken thighs).
    Once the cooking time is complete, allow a 5-10 minute Natural Pressure Release, then perform a Quick Release to vent any remaining steam.
  4. Finish & Serve:
    Carefully remove the lid.
    Select "Sauté" mode again and adjust to high or medium-high.
    Stir in the lime juice and any tender vegetables you're using (like zucchini, snow peas, or pre-diced bell peppers if you prefer them crisper).
    Simmer for 3-8 minutes, or until the tender vegetables are crisp-tender and the sauce has slightly thickened.
    Taste the curry and adjust seasonings as needed.
    Add more fish sauce or soy sauce for saltiness/umami, more brown sugar for sweetness, or more lime juice for brightness.
    Ladle the curry into bowls over Jasmine rice.
    Garnish generously with minced cilantro and Thai basil, and serve with extra lime wedges on the side.

Notes & Tips

  • Curry Paste Power: Don't be shy with the curry paste. Using 1/2 cup (a whole 4oz can of Maesri) gives a robust flavor. Adjust to your preference.
  • Coconut Cream is Key: The step of frying the curry paste in the separated coconut cream makes a big difference in flavor depth compared to just mixing everything together. If your coconut milk is homogenized, scoop about 1/3 to 1/2 cup from the can to use for this step.
  • Vegetable Medley: This recipe encourages using a variety of vegetables. The combination of potatoes for heartiness, bell peppers for sweetness and color, and zucchini for tenderness works well. Feel free to add others like eggplant or bamboo shoots.
  • Spice It Up (or Down): The optional spices (curry powder, coriander, cumin) can add another layer of complexity. The fresh chili or crushed red pepper is for those who like an extra kick.
  • Serving: Serving with fresh herbs and lime at the end brightens the whole dish significantly.
