AI Work

The current system is a structured generation pipeline built from the live `llm_storywriter` flow.

This page now reflects the pipeline that is actually running. It starts from a configurable mode, generates a structured listing package, renders a coordinated gallery, publishes the assets, and indexes the final document into OpenSearch. The point is not “agents talking.” The point is producing a usable, inspectable content artifact.

Pipeline sketch

Mode Config (personas + rules) → Prompt Build (structured package) → OpenAI (JSON listing output) → ComfyUI (gallery renders) → Publisher (asset URLs) → OpenSearch (indexed artifact)

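
The stage chain above can be sketched in code. This is a minimal sketch, not the real engine: every function body here is a hypothetical stub standing in for the OpenAI, ComfyUI, publisher, and OpenSearch calls, and all names are illustrative.

```python
# Minimal sketch of the stage chain. Every function body is a stub;
# in the real flow these would call OpenAI, ComfyUI, the publishing
# target, and OpenSearch. All names here are hypothetical.

def build_prompt(mode):
    # The mode file acts as the control plane for the prompt.
    return f"persona={mode['persona']} offering={mode['offering']}"

def generate_listing(prompt):
    # Real flow: an OpenAI call constrained to return valid JSON.
    return {"title": "stub", "gallery": [{"slot": "hero"}, {"slot": "detail"}]}

def render_slot(slot):
    # Real flow: one ComfyUI render job per planned gallery slot.
    return f"render-{slot['slot']}.png"

def publish(renders):
    # Real flow: push assets to the publishing target, get public URLs back.
    return [f"https://cdn.example/{name}" for name in renders]

def index_document(doc):
    # Real flow: write the finished document into OpenSearch.
    pass

def run_pipeline(mode):
    package = generate_listing(build_prompt(mode))
    package["assets"] = publish([render_slot(s) for s in package["gallery"]])
    package["hero"] = package["assets"][0] if package["assets"] else None
    index_document(package)
    return package
```

The shape matters more than the stubs: each stage consumes the previous stage's output, and the finished package carries everything needed for indexing.
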
Public experiments

Experiments and new avenues are often published in public while they are still evolving. Current oddities and live public output can be explored at Bare Metal Bridge, where the present run behaves more like a marketplace of oddities than a polished product surface.

Flow overview

1. Load mode configuration

Each run starts from a mode file that defines personas, offering type, humor rules, weirdness, archetypes, reality failures, and gallery slots. That mode acts like the control plane for the whole generation pass.
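
A mode file of that shape might look like the following sketch. All field names and values here are illustrative assumptions, not the real schema; the point is that the file is loaded and validated before anything is generated.

```python
import json

# Hypothetical mode file; real field names and values may differ.
MODE_JSON = """
{
  "personas": ["retired lighthouse keeper", "overcaffeinated realtor"],
  "offering_type": "listing",
  "humor_rules": ["deadpan", "no puns"],
  "weirdness": 0.7,
  "archetypes": ["haunted fixer-upper"],
  "reality_failures": ["rooms larger inside than outside"],
  "gallery_slots": ["hero", "interior", "detail", "twist"]
}
"""

def load_mode(raw: str) -> dict:
    mode = json.loads(raw)
    # Fail fast if the control plane is incomplete.
    required = {"personas", "offering_type", "gallery_slots"}
    missing = required - mode.keys()
    if missing:
        raise ValueError(f"mode file missing: {sorted(missing)}")
    return mode

mode = load_mode(MODE_JSON)
```
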

2. Generate structured listing package

The engine builds a controlled prompt and sends it to OpenAI expecting valid JSON back. The response is not just prose. It includes title, summary, description, features, context, location, and a full gallery plan.
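
The parse-and-check half of that step can be sketched like this. In the live flow the raw string comes back from OpenAI under a JSON-constrained response format; this sketch only covers validating that the package is complete, and the field list mirrors the one named above.

```python
import json

# Fields the listing package is expected to carry (from the flow description).
REQUIRED_FIELDS = ("title", "summary", "description", "features",
                   "context", "location", "gallery")

def parse_listing(raw: str) -> dict:
    """Parse the model response and confirm the package is complete.

    Sketch only: the real engine also normalizes the package before
    publishing; here we just parse and reject incomplete responses.
    """
    package = json.loads(raw)
    missing = [f for f in REQUIRED_FIELDS if f not in package]
    if missing:
        raise ValueError(f"incomplete listing package: {missing}")
    return package
```

Rejecting incomplete JSON at this boundary keeps every downstream stage (render, publish, index) free of defensive checks.
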

3. Render gallery imagery

Each planned gallery slot becomes a render job. Prompts are handed to ComfyUI, images are generated, and the first published image becomes the hero visual for the finished listing.
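
The slot-to-job mapping can be sketched as below. The job fields are assumptions; in practice ComfyUI is driven by workflow JSON over its API, which is omitted here.

```python
# Sketch: turn a planned gallery into render jobs. Field names are
# illustrative; the actual ComfyUI submission (workflow JSON) is omitted.

def build_render_jobs(gallery: list) -> list:
    jobs = []
    for i, slot in enumerate(gallery):
        jobs.append({
            "slot": slot["slot"],
            "prompt": slot["prompt"],
            # The first published image becomes the hero visual.
            "is_hero": i == 0,
        })
    return jobs
```
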

4. Publish assets

Rendered images are pushed to the publishing target so the listing has real asset URLs and can behave like a finished content object instead of a local experiment.
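
The URL side of publishing can be modeled with a small sketch. The upload itself is elided; this only shows the mapping from rendered files to the public URLs the listing ends up referencing, with a placeholder base URL.

```python
from pathlib import PurePosixPath

def asset_urls(base_url: str, filenames: list) -> list:
    # Sketch only: the real publisher pushes the image bytes to the
    # remote target; here we model just the URL mapping it returns.
    # base_url is a hypothetical placeholder for the publishing target.
    return [f"{base_url.rstrip('/')}/{PurePosixPath(name).name}"
            for name in filenames]
```
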

5. Index final document

The completed listing document, prompts, persona metadata, captions, gallery, and timestamps are written into OpenSearch so the output can be surfaced elsewhere in the site and stack.
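
Assembling that document can be sketched as follows. Field names are illustrative, not the live mapping; with the opensearch-py client, the resulting dict would then be written with `client.index(index=..., body=doc)`.

```python
from datetime import datetime, timezone

def build_index_document(package: dict, mode: dict, urls: list) -> dict:
    # Sketch of the indexed artifact; field names are assumptions.
    return {
        "title": package["title"],
        "description": package["description"],
        "prompts": package.get("prompts", []),
        "personas": mode["personas"],
        "gallery": urls,
        "indexed_at": datetime.now(timezone.utc).isoformat(),
    }
```
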

What changed

- Configuration-driven: the run is shaped before generation begins instead of improvising everything at runtime.
- Structured output first: the system asks for parseable JSON and normalizes the package before publishing.
- Image generation is native to the flow: visuals are part of the document pipeline, not a disconnected add-on.
- Publishable artifacts: the final result is a stored, indexed listing with prompts, metadata, and URLs.
- Operationally legible: this is built to be inspectable, rerunnable, and understandable after the fact.

Why this is useful

This flow is a better representation of the work because it shows a complete system: configuration, generation, rendering, publishing, and indexing. It behaves more like a production content pipeline than a demo of prompting tricks.

It also fits the broader consulting story better. The value is in building systems that can be guided, observed, and trusted, not in hiding complexity behind vague AI language.

System components

Mode files: define personas, weirdness, archetypes, failures, offering labels, and gallery slots.

Engine: selects persona/archetype/failure inputs, builds prompts, and coordinates the text, render, publish, and indexing stages.

OpenAI: generates the structured listing package that becomes the document foundation.

ComfyUI: renders slot-based gallery images from the planned visual prompts.

Publisher: pushes finished image assets to the remote target and returns public URLs.

OpenSearch: stores the final listing document so the output can be queried and surfaced cleanly.