SHORT ⚡ CIRCUIT
KEEPING CURRENT
Vol. 1 · Issue 4 · April 28, 2026
A Workflow You Can Steal
The shift that doubled what I get out of AI in a week: I stopped using it as a search engine.
Most people treat AI like a one-shot — type a question, get an answer, copy it out. That works for trivia. It doesn't work for anything that matters: market research, vendor evaluations, competitive analysis, regulatory checks. The answer has to be defensible. The output has to be yours.
The fix is to use AI as one stop in a five-step workflow, not the workflow itself. AI does what it's good at — synthesizing sources, shaping prose, comparing options. You do what it's not good at — defining the question, pulling primary sources, verifying claims, owning the deliverable.
Total time on a research project drops from about four hours to about a hundred minutes. The trustworthiness of the output goes up, not down — because every load-bearing claim gets verified by the human who's going to use it.
The full workflow is below.
Try This:
The 5-Step Research Workflow
From vague question to defensible answer in about 100 minutes. Each step has a job. Don't skip the human ones.
1. Frame the question. (15 min)
Get sharp on what you actually want to know. Vague briefs produce vague answers. AI helps here too: paste the topic and ask "What questions should I be asking to make a smart decision?" Pick the three sharpest.
2. Pull primary sources. (20 min)
Don't ask AI for facts directly — that's where hallucinations live. Ask it to suggest where to look. Then go to the sources yourself: vendor pages, analyst reports, regulatory filings, primary articles. Capture the actual content.
3. Synthesize with AI. (20 min)
Paste the sources into one long prompt. Ask for analysis: "Compare these three perspectives. Where do they agree? Where do they disagree? What's missing?" This is where AI earns its keep — pattern-matching across many sources is what it does best.
4. Verify. (15 min)
Check every load-bearing claim against the source. Numbers, dates, names, citations. If AI introduced a fact that isn't in the sources, cut it.
5. Write. (30 min)
Now use AI to draft the deliverable. You bring the synthesis, AI shapes the prose. Edit until the prose sounds like you, not the model.
The trick is using AI for shape, not for substance.
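The synthesis prompt in step 3 is concrete enough to sketch in code. This is a minimal illustration, assuming you've already captured each source as plain text in step 2; the function name and formatting are mine, not part of any particular tool:

```python
def build_synthesis_prompt(sources: list[str]) -> str:
    """Assemble the step-3 synthesis prompt from pasted source text.

    Each source is numbered so the model's answer can point back
    to specific material instead of inventing its own.
    """
    numbered = "\n\n".join(
        f"SOURCE {i + 1}:\n{text}" for i, text in enumerate(sources)
    )
    question = (
        "Compare these perspectives. Where do they agree? "
        "Where do they disagree? What's missing?"
    )
    return f"{numbered}\n\n{question}"
```

Paste the result into a single chat turn. Keeping everything in one prompt matters: the model can only pattern-match across sources it can see at the same time.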
This Week in AI
Google Cloud Launches Gemini Enterprise Agent Platform
At Google Cloud Next '26 (April 22), Google rolled out the Gemini Enterprise Agent Platform — a system for businesses to build, manage, and govern autonomous AI agents that handle multi-step processes. Eleven partners (Adobe, Atlassian, Deloitte, Oracle, Salesforce, ServiceNow, Workday, and more) shipped agents on the platform on day one. The signal: enterprise software is reorganizing around agents, not features. Read more →
AMD Ships Its First Dual 3D V-Cache Desktop Chip
On April 22, AMD launched the Ryzen 9 9950X3D2 Dual Edition — the first desktop processor with 3D V-Cache stacking on both core complexes (208 MB total cache). The launch is timed to a real shift: AI models in the 7–13 billion parameter range, the size most useful for local code generation and document processing, benefit a lot from expanded cache. Translation: running capable AI on your own machine just got meaningfully faster. Read more →
Anthropic Opens Sydney Office, Expands to APAC
On April 28, Anthropic officially opened its Sydney office and named former Snowflake executive Theo Hourmouzis as general manager for Australia & New Zealand. The Sydney office follows existing posts in Tokyo and Bengaluru, with Seoul next. The pattern matters more than the location: AI vendors are scaling like established software companies — geography by geography — which usually means enterprise pricing, regional support, and procurement become available shortly after. Read more →
🎓 CIRCUIT SCHOOL
AI 101 · Part 4 · Building your AI vocabulary from the ground up
Writing Prompts That Reduce Hallucinations
Last issue we covered what hallucinations are: confidently stated AI outputs that aren't true. This week, four prompt techniques that meaningfully reduce them.
1. Ground in sources. Paste the actual source text into the prompt before asking your question. The model is more likely to stick to what's in front of it than to invent something.
2. Ask for citations. "Cite the specific source for each claim. If you can't find a source, say so." This forces the model to either anchor to real text or admit uncertainty.
3. Ask for confidence levels. "For each claim, label it 'high confidence,' 'moderate confidence,' or 'guessing.' Don't pretend to be certain about things you don't know."
4. Ask what's missing. "What's the most important thing you don't know about this question?" Hallucinations thrive in gaps — making the model name the gaps reduces the urge to fill them.
None of these is perfect. AI still hallucinates with all four in place. But the rate goes from "noticeable" to "manageable" — and you build the habit of expecting AI to be uncertain, instead of expecting it to be right.
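All four techniques can live in a single prompt. Here's a minimal sketch that stacks them; the function name and wording are my own illustration, not a feature of any specific AI tool:

```python
def grounded_prompt(question: str, sources: list[str]) -> str:
    """Build a prompt combining the four techniques:
    grounding, citations, confidence labels, and naming the gaps."""
    # Technique 1: ground in sources, numbered so they can be cited.
    source_block = "\n\n".join(
        f"[{i + 1}] {text}" for i, text in enumerate(sources)
    )
    instructions = "\n".join([
        "Answer using ONLY the sources above.",
        # Technique 2: force citations or an admission of uncertainty.
        "Cite the bracketed source number for each claim. "
        "If you can't find a source, say so.",
        # Technique 3: ask for confidence levels.
        "Label each claim 'high confidence', 'moderate confidence', "
        "or 'guessing'.",
        # Technique 4: make the model name what's missing.
        "Finally, state the most important thing you don't know "
        "about this question.",
    ])
    return f"SOURCES:\n{source_block}\n\nQUESTION: {question}\n\n{instructions}"
```

Reusing a template like this is the point: the habit does more than any single technique.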
Next issue we'll cover the context window — what it is, why your AI sometimes "forgets" the start of a long conversation, and what to do about it.
HOW WE MADE THIS
The Workflow lead came from Rich's actual research process — the one he uses for vendor evaluations and competitive analysis. Claude helped tighten the framing and time-box each step. The news items were researched by Claude and selected by Rich for relevance to the audience.
See you next Tuesday.
Sponsored By
Neo Crucible
Thanks to Neo Crucible for being our first sponsor. Follow the Short Circuit Project to see what I make with AI and a new Creality 3D printer!