The Pit Stop¶
A Pit Stop is a deliberate pause in development to review the entire codebase before writing another line of code. Stop building features. Stop fixing bugs. Instead: read through existing code, run every test, write new tests to cover gaps, hunt for duplicated logic, fix anything that's broken, and verify the overall architecture still makes sense.
Think of it like a pit stop in a race — the car comes in, the crew inspects everything, replaces what's worn, tightens what's loose, and only then does the car go back on the track. The goal is to clean the codebase and confirm its integrity so that the next round of changes starts from solid ground.
Why Pit Stops Matter¶
Every change to a codebase is local. You focus on one feature, one bug, one endpoint — and you move on. After 10–15 changes without stepping back, invisible problems accumulate:
- Duplicated logic across multiple files or flows.
- Tests that no longer match the actual behavior.
- Dead code — functions, vertices, or components that nothing uses.
- Missing annotations, missing version bumps, missing test coverage.
- Structural drift — flows that should be connected pipelines but ended up as isolated stubs.
A senior engineer catches this intuitively by periodically reading through the code. An AI agent doesn't — its context resets every session, so it never notices the slow decay. The Pit Stop formalizes what good engineers do naturally: stop, look at everything, and clean up before continuing.
The Rule¶
BACK RULE #3 / FRONT RULE #3
Every 3–5 changes (new vertices, new graph nodes, significant modifications), stop all development and perform a Pit Stop — a full, checklist-driven review of the entire system.
This is not optional. It is the single most important habit in HBIA-based development. Skip it, and you accumulate invisible debt that compounds with every subsequent change.
Backend Pit Stop Checklist¶
1. Lint All Flows¶
The linter catches structural problems: missing edges, disconnected vertices, undeclared handlers, invalid effect annotations.
2. Validate All Flows¶
Validation catches schema errors (malformed YAML, missing required fields) and graph errors (cycles, unresolvable bindings, dangling references).
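The two graph checks can be sketched as plain set and traversal logic. This is an illustrative TypeScript sketch, not the real validator: the schema checks (malformed YAML, required fields) are omitted, and the vertex-map shape is an assumption, keeping only the `next` edges the page describes.

```typescript
// A flow reduced to its graph shape: vertex name -> list of `next` targets.
// (Illustrative only; the real parsed flow structure is richer.)
type FlowGraph = Record<string, string[]>;

// Dangling references: a `next` entry naming a vertex the flow never defines.
function danglingRefs(flow: FlowGraph): string[] {
  const defined = new Set(Object.keys(flow));
  return Object.values(flow).flat().filter((target) => !defined.has(target));
}

// Cycle detection via depth-first search with "visiting"/"done" marking:
// revisiting a vertex that is still on the current path is a back edge.
function hasCycle(flow: FlowGraph): boolean {
  const state = new Map<string, "visiting" | "done">();
  const visit = (v: string): boolean => {
    if (state.get(v) === "visiting") return true; // back edge => cycle
    if (state.get(v) === "done") return false;
    state.set(v, "visiting");
    for (const next of flow[v] ?? []) {
      if (visit(next)) return true;
    }
    state.set(v, "done");
    return false;
  };
  return Object.keys(flow).some((v) => visit(v));
}
```

Running `hasCycle` on every flow and collecting `danglingRefs` gives exactly the two failure classes named above.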
3. Run the Full Test Suite¶
Run the entire suite, not just the tests for the feature you just added. Changes in one flow can affect data that downstream flows depend on.
4. Check Tech Debt¶
The health command provides an overview of the project's structural health: flow count, vertex count, test coverage, lint warnings.
5. List All Flows¶
Look at the output and ask: does this list make sense? Can you explain the purpose of every flow? If not, something has drifted.
6. Review for Structural Problems¶
This is the most important step. Look for:
- Isolated single-vertex flows (CRITICAL). If you see 4+ flows, each with one vertex and no `next`, the architecture is broken. Combine them into proper multi-step pipelines. This is the single most common mistake AI agents make with HBIA.
- Disconnected vertices. Vertices with no `next` pointing to them and no `next` pointing from them are dead code.
- Duplicate logic across flows. If two flows do similar things, they probably share vertices that should be extracted and reused.
- Missing `effect` annotations. Every vertex must declare whether it is `pure` or `side_effect`. Missing annotations mean the engine can't cache or reason about the vertex correctly.
- Missing `version` strings. If you changed a vertex's logic but didn't bump the version, the cache may serve stale results.
- Missing tests for new vertices. Every vertex should have at least one test. New vertices added since the last Pit Stop must be covered.
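The first two smells in the list are mechanical enough to script. The sketch below assumes a simplified parsed-flow shape (the interface names are invented for illustration); only the `next` field comes from the real schema.

```typescript
// Illustrative stand-ins for parsed flow definitions.
interface VertexSpec { name: string; next?: string[] }
interface FlowSpec { name: string; vertices: VertexSpec[] }

// Smell 1: flows consisting of a single vertex with no `next` edges.
function isolatedSingleVertexFlows(flows: FlowSpec[]): string[] {
  return flows
    .filter((f) => f.vertices.length === 1 && !(f.vertices[0].next?.length))
    .map((f) => f.name);
}

// Smell 2: vertices with no `next` pointing to them and none pointing
// from them -- unreachable and terminal at once, i.e. dead code.
function disconnectedVertices(flow: FlowSpec): string[] {
  const referenced = new Set(flow.vertices.flatMap((v) => v.next ?? []));
  return flow.vertices
    .filter((v) => !(v.next?.length) && !referenced.has(v.name))
    .map((v) => v.name);
}
```

If `isolatedSingleVertexFlows` returns four or more names, you are looking at the critical case described above.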
7. Document the Pit Stop¶
Leave a brief note — a commit message, a PR comment, or an entry in docs/pit_stops.md — summarizing what was checked and what was found. This creates an audit trail.
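One possible shape for such an entry (the format is purely illustrative; use whatever structure your team prefers):

```markdown
## Pit Stop: <date>

- Scope: changes since the last Pit Stop (list flows/domains touched)
- Lint: clean, or warnings found (list them)
- Validation: pass/fail, with details
- Tests: counts, plus any new tests added to cover gaps
- Structural review: issues found and fixed (e.g. merged single-vertex flows)
- Follow-ups: anything deferred, with an owner
```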
Frontend Pit Stop Checklist¶
1. Validate All Domains¶
2. Lint All Domains¶
3. Check for Circular Effect Chains¶
The linter detects these automatically. A circular chain like EffectA → mutates StateB → EffectB → mutates StateA causes infinite re-renders and must be fixed immediately.
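The check reduces to cycle detection over "mutates → watched-by" edges between effects. The sketch below assumes each effect declares `watch` and `mutate` lists, as this page describes; the traversal itself is illustrative, not the linter's actual implementation.

```typescript
// Minimal effect declaration: what it watches and what it mutates.
interface EffectNode { name: string; watch: string[]; mutate: string[] }

// Effect A triggers effect B when a field A mutates is a field B watches.
function triggers(a: EffectNode, b: EffectNode): boolean {
  return a.mutate.some((field) => b.watch.includes(field));
}

// Depth-first walk over mutate -> watch edges; returns the first chain
// that loops back to its starting effect, or null if none exists.
function findCircularChain(effects: EffectNode[]): string[] | null {
  for (const start of effects) {
    const stack: Array<[EffectNode, string[]]> = [[start, [start.name]]];
    while (stack.length > 0) {
      const [current, path] = stack.pop()!;
      for (const next of effects) {
        if (!triggers(current, next)) continue;
        if (next.name === start.name) return [...path, next.name];
        if (!path.includes(next.name)) stack.push([next, [...path, next.name]]);
      }
    }
  }
  return null;
}
```

For the EffectA/EffectB example above, this returns the chain `EffectA → EffectB → EffectA`; a self-triggering effect shows up as a length-two chain.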
4. Check for Unused State¶
State nodes that are declared but never read by any UI component or effect are dead weight. The linter flags these as warnings.
5. Check for Dead Events¶
Events declared in YAML but not emitted by any UI component are unreachable code.
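Steps 4 and 5 are both set differences between what the YAML declares and what the UI layer actually uses. A sketch, assuming a flattened domain summary (the field names on `Domain` are invented for illustration; `reads` and `watch` are the keys this page uses):

```typescript
// Illustrative flattened view of one domain's declarations and usages.
interface Domain {
  state: string[];          // declared state nodes
  events: string[];         // declared events
  componentReads: string[]; // fields in UI components' `reads` lists
  componentEmits: string[]; // events UI components emit
  effectWatches: string[];  // fields in effects' `watch` lists
}

// Unused state: declared but read by no component and watched by no effect.
function unusedState(d: Domain): string[] {
  const read = new Set([...d.componentReads, ...d.effectWatches]);
  return d.state.filter((s) => !read.has(s));
}

// Dead events: declared but emitted by no UI component.
function deadEvents(d: Domain): string[] {
  const emitted = new Set(d.componentEmits);
  return d.events.filter((e) => !emitted.has(e));
}
```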
6. Verify Hook Safety (Critical)¶
- Does every UI component's `reads` list match what it actually renders? Over-reading causes unnecessary re-renders.
- Does every effect `watch` the minimum set of fields it needs? Broad watches cause cascading effects.
- Are there effects that `mutate` state they also `watch`? This is a self-triggering loop and always an error.
- Are there effects with `debounce_ms: 0` watching rapidly-changing state? This causes render storms.
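The last two questions are simple predicates over an effect declaration. This sketch assumes the declaration shape below (only the `watch`, `mutate`, and `debounce_ms` keys come from this page); whether a watched field is "rapidly changing" still needs human judgment, so the second check only flags the risk.

```typescript
// Assumed shape of one effect declaration from the domain YAML.
interface EffectDecl {
  name: string;
  watch: string[];
  mutate: string[];
  debounce_ms?: number;
}

// Self-triggering loop: fields the effect both mutates and watches.
function selfTriggering(e: EffectDecl): string[] {
  return e.mutate.filter((field) => e.watch.includes(field));
}

// Render-storm risk: an undebounced effect that watches any field.
function undebounced(e: EffectDecl): boolean {
  return (e.debounce_ms ?? 0) === 0 && e.watch.length > 0;
}
```

Any non-empty result from `selfTriggering` is an error; a `true` from `undebounced` is a prompt to check what the watched fields do.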
7. Verify Zero localStorage Usage (Mandatory)¶
Search the entire codebase for localStorage, sessionStorage, and window.storage. Any occurrence is a blocking error. The linter catches this automatically, but verify during Pit Stops.
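The core of that search is a string scan. A minimal sketch of the check applied to one file's source text (in practice you would walk every file in the repo and run this on each):

```typescript
// The banned browser-storage APIs named in this checklist step.
const BANNED_STORAGE_APIS = ["localStorage", "sessionStorage", "window.storage"];

// Returns the banned APIs that appear anywhere in the given source text.
function storageViolations(source: string): string[] {
  return BANNED_STORAGE_APIS.filter((api) => source.includes(api));
}
```

Any non-empty result is a blocking error, per the rule above.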
8. Verify No Raw Errors in the UI¶
Check that all API calls use ServiceResult<T> from services. Confirm that error messages shown to users come from error.userMessage, never raw error objects or tracebacks.
9. Verify Reactive Chain Completeness¶
For every user-facing feature, trace the full chain:
UI event → Event YAML → mutation steps → State change →
Effect triggers → Side effects → UI re-render
If any link is missing, the feature is broken.
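The trace can be mechanized per feature as one predicate per link. The `Chain` summary below is an invented stand-in for what you would extract from the YAML and components; it is a sketch of the review, not tooling the framework provides.

```typescript
// Illustrative summary of one feature's reactive chain.
interface Chain {
  uiEmitsEvent: boolean;   // a UI component emits the event
  eventDeclared: boolean;  // the Event YAML exists
  mutationSteps: string[]; // state fields the event mutates
  watchedFields: string[]; // fields any effect watches
  renderedFields: string[]; // fields any UI component reads
}

// Returns the missing links; an empty array means the chain is complete.
function missingLinks(c: Chain): string[] {
  const gaps: string[] = [];
  if (!c.uiEmitsEvent) gaps.push("UI event");
  if (!c.eventDeclared) gaps.push("Event YAML");
  if (c.mutationSteps.length === 0) gaps.push("mutation steps");
  const consumed = new Set([...c.watchedFields, ...c.renderedFields]);
  if (c.mutationSteps.length > 0 && !c.mutationSteps.some((f) => consumed.has(f)))
    gaps.push("state change consumed by an effect or the UI");
  return gaps;
}
```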
10. Regenerate Runtime (If YAML Changed)¶
11. Run the Test Suite¶
12. Document the Pit Stop¶
Same as backend — leave a note summarizing what was checked and found.
When to Pit Stop¶
The rule of thumb is every 3–5 changes, but some situations demand an immediate Pit Stop:
| Trigger | Why |
|---|---|
| Added a new flow | New flows can create disconnected subgraphs. |
| Changed data bindings | Binding changes can break downstream vertices silently. |
| Added an atomic group | Atomicity boundaries affect execution order and failure semantics. |
| Added a new frontend domain | New domains need validation, linting, and codegen. |
| Significant refactor | Any structural change warrants a full review. |
| Before merging a PR | Never merge without a clean Pit Stop. |
The Pit Stop Mindset¶
A Pit Stop is not bureaucracy — it's maintenance. Code rots silently: a missing annotation here, a duplicated function there, a test that no longer matches behavior. Each one is trivial on its own. Left unchecked for twenty changes, they compound into a codebase that nobody — human or AI — can trust.
Regular Pit Stops catch problems when they're small and cheap to fix. The habit is simple: stop, review everything, clean up, confirm the tests pass, and only then get back to building.
The Pit Stop is the difference between a codebase that scales and one that collapses.