AI-Native Intelligence
The Principal and the Swarm
Everyone's talking about using AI to automate deal processes. Summarize the CIM faster. Screen more targets. That's fine — and it's already commoditizing. The real shift is different: AI agents make it free to build intelligence systems that no one would have funded, staffed, or subscribed to before. Not faster diligence. New diligence — categories of inquiry that didn't exist because the economics didn't work.
Here's the thing that gives me genuine hope: the job of building systems like these didn't exist two years ago. There's no degree for it, no job title, no credential. The only prerequisite is creativity — the ability to see a problem and imagine a system that solves it. And as these tools become accessible to everyone over the next year or two, that creativity is going to explode. Millions of people who've always known what questions to ask will suddenly have the means to build the answers.
I've been building these systems over the past several months, and the finding that keeps recurring is counterintuitive: as the cost of building drops to zero, the value of knowing what to build goes up. Intuition appreciates when execution depreciates.
This post is a tour through the operating system I built and the experiments it produced — each one a different mode of AI-native intelligence, built from scratch, for free, in days.
One person. A research firm's output.
The architecture borrows directly from PE. Principals make judgment calls. Associates do the legwork.
The principal triages every request: judgment, strategy, synthesis. Quick answers are handled directly (≤3 tool calls); complex work is delegated to associates with a brief.
Associates do the heavy token work: research, code, data pipelines. Up to 8 run concurrently, in isolated sessions, reporting back automatically.
Background agents handle market monitoring, self-learning cycles, and data ingestion. They run at 3 AM; no human needed.
The infrastructure is OpenClaw — open-source agent orchestration. Persistent memory. Multi-channel access. Sub-agent spawning with automatic QC loops. The principal reviews every associate's work and respawns with corrections — like marking up a first-year's memo.
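In code, that loop is small. Here's a minimal sketch in Python; the helper names (Brief, spawn_associate, principal_review) are hypothetical stand-ins rather than OpenClaw's actual API, and the model calls are replaced with canned strings so the skeleton runs on its own.

```python
# Minimal sketch of the principal/associate loop. Helper names are
# hypothetical stand-ins, not OpenClaw's real API; the shape is the point:
# delegate with a brief, review, respawn with corrections.
from dataclasses import dataclass, replace

@dataclass
class Brief:
    question: str
    corrections: str = ""

def spawn_associate(brief: Brief) -> str:
    """Stand-in for an isolated associate session (a model call in practice)."""
    return f"DRAFT: {brief.question} {brief.corrections}".strip()

def principal_review(draft: str) -> tuple[bool, str]:
    """Stand-in for the principal's QC pass: approve or return correction notes."""
    if "Corrections:" in draft:          # toy rule: approve any revised draft
        return True, ""
    return False, "Cite sources and quantify the claim."

def delegate(brief: Brief, max_rounds: int = 3) -> str:
    draft = spawn_associate(brief)
    for _ in range(max_rounds):
        approved, notes = principal_review(draft)
        if approved:
            return draft
        # Respawn with corrections, like marking up the first-year's memo
        brief = replace(brief, corrections=f"Corrections: {notes}")
        draft = spawn_associate(brief)
    return draft  # best effort after max_rounds

print(delegate(Brief("Where are Chicago renovation permits moving fastest?")))
```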
The entire system runs on a laptop. The expensive resource is the same one that's always been expensive in PE: judgment about what questions to ask.
Five probes into AI-native intelligence
Everything below started the same way: a question relevant to investing or business intelligence, described in plain English. Most went from idea to working MVP in a single session. None required hiring a developer, buying a subscription, or engaging a consultant.
The point isn't finished products. These are experiments — quick, free probes into whether AI agents can produce genuine intelligence in domains that previously required expensive infrastructure.
Experiment 1
Chicago Permit Velocity
Where public data meets investment intelligence
10,047 renovation permits ingested · 77 community areas mapped · citywide median processing time (days) · data window (days)
The question: Where are renovation permits moving fastest and slowest in Chicago — and is that changing? This matters for real estate underwriting and small business planning. Previously you'd call the permitting office or just guess.
The system ingested 10,047 permits across all 77 community areas from Chicago's public data portal. It built a live choropleth, trend layers with 6- and 12-month toggles, and neighborhood rankings. The Loop processes permits in roughly 15 days; Pullman takes 70-plus.
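The core of that pipeline is a few dozen lines. Here's a sketch against Chicago's Socrata open-data API; the dataset ID (ydr8-5enu), the permit-type filter, and the field names are my assumptions about the public building-permits dataset, so verify them against the portal before trusting the output.

```python
# Sketch of the permit-velocity pipeline. Dataset ID, permit-type value,
# and field names are assumptions about Chicago's building-permits dataset;
# check them against the data portal before relying on the results.
import pandas as pd
import requests

BASE = "https://data.cityofchicago.org/resource/ydr8-5enu.json"
params = {
    "$where": "permit_type = 'PERMIT - RENOVATION/ALTERATION'",
    "$limit": 50000,
}
df = pd.DataFrame(requests.get(BASE, params=params, timeout=60).json())

# Processing time: application date to issue date, in days
for col in ("application_start_date", "issue_date"):
    df[col] = pd.to_datetime(df[col], errors="coerce")
df["processing_days"] = (df["issue_date"] - df["application_start_date"]).dt.days

# Median processing time per community area, slowest first
ranking = (df.dropna(subset=["community_area", "processing_days"])
             .groupby("community_area")["processing_days"]
             .median()
             .sort_values(ascending=False))
print(f"Citywide median: {df['processing_days'].median():.0f} days")
print(ranking.head(10))
```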
Codex wrote the pipeline. Opus audited the methodology. Total software cost: zero.
The pattern
Any public data source that's technically accessible but practically ignored is now a free intelligence layer.
Experiment 2
100 AI Analysts Debate Peloton's Future
Mirofish — structured adversarial simulation
Peloton: 91% bearish, avg confidence 8.9 / 10 · Lululemon control: 47-46 split
100 AI agents. Eight distinct archetypes — from dedicated subscribers to short sellers. Five rounds of structured debate. The swarm converged 91% bearish on Peloton in June 2021. The Lululemon control split 47-46 and stayed split.
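Structurally, the swarm is simple: archetypes, agents, rounds, tally. In the sketch below, ask_agent stands in for the real per-agent model call (Gemini Flash in the original run) and simulates a stance with a biased coin so the skeleton runs without an API key; the archetype names and priors are illustrative, not the real prompts.

```python
# Structural sketch of the swarm: archetypes x agents x rounds, then a
# convergence tally. ask_agent replaces the real LLM call with a coin
# biased by the archetype's prior and by the debate so far.
import random
from collections import Counter

ARCHETYPES = {  # illustrative prior P(bearish) per archetype
    "dedicated_subscriber": 0.2, "growth_investor": 0.4, "value_investor": 0.6,
    "churn_analyst": 0.7, "supply_chain_skeptic": 0.7, "macro_tourist": 0.5,
    "competitive_strategist": 0.6, "short_seller": 0.9,
}

def ask_agent(archetype: str, transcript: list[str]) -> str:
    """One agent's stance this round, given its archetype and the debate so far."""
    peer_pressure = transcript.count("bearish") / max(len(transcript), 1)
    p = 0.7 * ARCHETYPES[archetype] + 0.3 * peer_pressure
    return "bearish" if random.random() < p else "bullish"

def debate(n_agents: int = 100, rounds: int = 5) -> Counter:
    agents = [random.choice(list(ARCHETYPES)) for _ in range(n_agents)]
    transcript: list[str] = []
    for _ in range(rounds):
        transcript = [ask_agent(a, transcript) for a in agents]
    return Counter(transcript)

print(debate())  # e.g. Counter({'bearish': 64, 'bullish': 36})
```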
The system doesn't just generate consensus. It distinguishes real fragility from real debate. That's the signal.
Gemini Flash ran the agents. Opus synthesized. Total cost: a few dollars.
The pattern
Agent swarms stress-test any thesis against structured diversity of perspectives — in an afternoon instead of a month.
Experiment 3
The System That Taught Itself to Sell Volatility
SPX Options — AI-directed trading with feedback loops
Three layers: AutoResearcher (365-day backtests), Regime Playbook (10 market regimes), LLM Override (real-time news + macro). Each layer can veto the one above.
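The veto chain itself is just sequential filters, where any layer can return "no trade." A sketch with illustrative stand-ins for the real backtester, regime classifier, and LLM call:

```python
# Sketch of the three-layer veto chain: each layer can kill the trade
# proposed above it. Layer rules here are illustrative stand-ins.
from typing import Optional

Trade = dict  # e.g. {"strategy": "credit_spread", "underlying": "SPX"}

def auto_researcher() -> Optional[Trade]:
    """Layer 1: propose the strategy that backtests best over 365 days."""
    return {"strategy": "credit_spread", "underlying": "SPX"}

def regime_playbook(trade: Trade, regime: str) -> Optional[Trade]:
    """Layer 2: veto strategies that historically fail in today's regime."""
    banned = {"crash": {"credit_spread"}}        # illustrative mapping
    return None if trade["strategy"] in banned.get(regime, set()) else trade

def llm_override(trade: Trade, headlines: list[str]) -> Optional[Trade]:
    """Layer 3: veto on real-time news and macro (an LLM call in practice)."""
    return None if any("FOMC" in h for h in headlines) else trade

def decide(regime: str, headlines: list[str]) -> Optional[Trade]:
    trade = auto_researcher()
    if trade is not None:
        trade = regime_playbook(trade, regime)
    if trade is not None:
        trade = llm_override(trade, headlines)
    return trade  # None means stand down today

print(decide("calm", ["Futures flat ahead of CPI"]))
```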
The system started with no strategy preference. Nightly self-learning cycles reviewed outcomes and recalibrated. Over time, credit spreads emerged as dominant. Cross-validated on BTC options: credit spreads hit a 91.8% win rate versus 40.2% for debit spreads. Same conclusion, uncorrelated market.
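The learning loop behind that convergence can be equally small: log every closed trade, recompute per-strategy win rates nightly, and let the dominant strategy surface. A toy version with made-up outcomes:

```python
# Toy version of the nightly recalibration: track outcomes per strategy
# and recompute win rates once there's enough sample.
from collections import defaultdict

results: dict[str, list[bool]] = defaultdict(list)

def record(strategy: str, won: bool) -> None:
    """Log one closed trade's outcome."""
    results[strategy].append(won)

def nightly_recalibration(min_trades: int = 20) -> dict[str, float]:
    """Win rate for every strategy with enough history to trust."""
    return {s: sum(r) / len(r) for s, r in results.items()
            if len(r) >= min_trades}

# Illustrative outcomes, not real trade data:
for _ in range(46):
    record("credit_spread", True)
for _ in range(4):
    record("credit_spread", False)
for _ in range(8):
    record("debit_spread", True)
for _ in range(12):
    record("debit_spread", False)
print(nightly_recalibration())  # {'credit_spread': 0.92, 'debit_spread': 0.4}
```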
The pattern
Any domain with measurable outcomes and repeatable decisions is a candidate for learning systems that compound over time.
Experiment 4
A Physics Paper Without a Physics Degree
Get Physics Done — AI-assisted research in unfamiliar domains
Hypothesis → 5 simulation models → novel finding
Using Get Physics Done (GPD), I explored a hypothesis connecting concepts from Hindu philosophy to quantum field theory: the idea that reality is one undifferentiated field with a universal tendency toward its ground state.
Five model variants. Literature mapped against QFT, false vacuum decay, Penrose's conformal cyclic cosmology, and Bohm's implicate order. The simulations surfaced a genuine finding: self-determined observation produces stable homeostasis, not a return to zero. I am not a physicist.
The pattern
Domain expertise barriers are falling. Any strategic question that touches technical territory is now accessible to anyone with clear thinking.
What's Next
The experiments continue
None of these are finished products. They're probes — fast, cheap tests of whether AI agents with human judgment can produce real intelligence.
The consistent finding: they can. And the marginal cost of each new experiment is approaching zero because the operating system is reusable. Every new question just needs a brief and a session.
The firms that build this capability will compound an information advantage with every deal, every quarter, every new question they think to ask. The ones waiting for someone to package it into a SaaS product will pay a subscription for yesterday's intelligence.
The tools are open source. The compute is cheap. The scarce resource is the same one it's always been: knowing what to ask.
Each experiment has its own post