Every error produces a stable fingerprint. Every attempt to fix it produces an outcome. Fukura joins the two and ranks suggested fixes by measured success rate rather than guesswork.
A SolutionAttempt says: this fingerprint was encountered, and the user/agent’s next move ended in success, failure, or abandonment.
```json
{
  "attempt_id": "7c2b…",           // UUID, idempotency key
  "fingerprint": "blake3:7f9c…",   // the error being fixed
  "outcome": "success",            // or "failure" or "abandoned"
  "suggested_note_id": "sha256:…", // the note that inspired the fix
  "next_command": "cargo update",  // redacted, optional
  "agent_kind": "claude-code",     // null for human shells
  "occurred_at": "2026-04-18T10:22:31Z"
}
```

Once `fuku init` has been run in a project, the daemon and the zsh/bash hook observe every interactive command. When a failure is followed by a successful command, the pair is recorded as an attempt. Abandonment is inferred after a configurable timeout.
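The failure-then-fix pairing described above can be sketched as a small pure function. This is an illustrative sketch only, not the real state machine in `src/application/effectiveness.rs`: the `pair_attempts` helper, the `(command, exit_code, gap)` triples, and the `fp:` fingerprint format are all hypothetical.

```rust
// Hypothetical sketch of the failure→fix pairing loop.

#[derive(Debug, PartialEq)]
enum Outcome { Success, Failure, Abandoned }

struct Attempt {
    fingerprint: String,
    next_command: Option<String>,
    outcome: Outcome,
}

/// Pair each failing command with the result of the command that follows it.
/// `history` holds (command, exit_code, seconds_since_previous_command);
/// a gap longer than `timeout_secs` marks the attempt abandoned.
fn pair_attempts(history: &[(&str, i32, u64)], timeout_secs: u64) -> Vec<Attempt> {
    let mut attempts = Vec::new();
    for (i, &(cmd, code, _)) in history.iter().enumerate() {
        if code == 0 {
            continue; // only failures open an attempt
        }
        // Hypothetical fingerprint: the real one is a blake3 hash.
        let fingerprint = format!("fp:{cmd}");
        match history.get(i + 1) {
            Some(&(next, next_code, gap)) if gap <= timeout_secs => {
                attempts.push(Attempt {
                    fingerprint,
                    next_command: Some(next.to_string()),
                    outcome: if next_code == 0 { Outcome::Success } else { Outcome::Failure },
                });
            }
            // No follow-up command within the timeout: abandoned.
            _ => attempts.push(Attempt { fingerprint, next_command: None, outcome: Outcome::Abandoned }),
        }
    }
    attempts
}

fn main() {
    let history = [("cargo build", 101, 0), ("cargo update", 0, 12), ("cargo build", 0, 3)];
    for a in pair_attempts(&history, 600) {
        println!("{} -> {:?}", a.fingerprint, a.outcome);
    }
}
```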
```shell
# See attempts recorded in the current repo
fuku attempt stats

# Explicitly mark the last failing command as observed
fuku attempt observe

# Abandon (stop waiting for a fix)
fuku attempt abandon
```

Agents running through `fuku mcp` call `fukura_record_attempt` directly, usually right after they finish a tool-call chain that resolved (or failed to resolve) an error.
```json
{
  "jsonrpc": "2.0",
  "id": 42,
  "method": "tools/call",
  "params": {
    "name": "fukura_record_attempt",
    "arguments": {
      "fingerprint": "blake3:7f9c…",
      "outcome": "success",
      "suggested_note_id": "sha256:…",
      "agent_kind": "claude-code"
    }
  }
}
```

The attempt is stored locally and, when a hub is configured, batched up to the hub via `POST /v1/attempts`.
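Because each attempt carries an idempotency key, the upload path can safely drop duplicates before chunking the queue into POST-sized batches. A minimal sketch of that step, with hypothetical `QueuedAttempt` and `batch_for_upload` names (not Fukura's actual internals):

```rust
use std::collections::HashSet;

/// Hypothetical local queue entry; `attempt_id` doubles as the idempotency key.
#[derive(Clone)]
struct QueuedAttempt {
    attempt_id: String,
    fingerprint: String,
}

/// Drop duplicate attempt_ids, then split the queue into batches of
/// at most `batch_size` for upload.
fn batch_for_upload(queue: &[QueuedAttempt], batch_size: usize) -> Vec<Vec<QueuedAttempt>> {
    let mut seen = HashSet::new();
    let deduped: Vec<QueuedAttempt> = queue
        .iter()
        .filter(|a| seen.insert(a.attempt_id.clone())) // keep first occurrence only
        .cloned()
        .collect();
    deduped.chunks(batch_size).map(|c| c.to_vec()).collect()
}

fn main() {
    let queue = vec![
        QueuedAttempt { attempt_id: "a1".into(), fingerprint: "blake3:7f9c".into() },
        QueuedAttempt { attempt_id: "a1".into(), fingerprint: "blake3:7f9c".into() }, // duplicate
        QueuedAttempt { attempt_id: "a2".into(), fingerprint: "blake3:9d01".into() },
    ];
    let batches = batch_for_upload(&queue, 100);
    println!("{} batch(es) of {} attempt(s)", batches.len(), batches[0].len());
}
```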
`fukura_preflight` takes a command about to be executed, synthesises a likely fingerprint, and returns matching notes with their measured success rates. Agents use this to pick the highest-success fix rather than the first match.
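"Highest success rather than first match" boils down to sorting candidate notes by their measured rate, breaking ties toward the note with more evidence. A sketch under hypothetical names (`NoteStats`, `rank` are illustrative, not Fukura's API):

```rust
use std::cmp::Ordering;

/// Hypothetical per-note counters, as attached to search results.
struct NoteStats {
    note_id: String,
    success: u32,
    failure: u32,
}

impl NoteStats {
    fn total(&self) -> u32 {
        self.success + self.failure
    }

    fn success_rate(&self) -> f64 {
        if self.total() == 0 { 0.0 } else { self.success as f64 / self.total() as f64 }
    }
}

/// Order candidates best-first: highest success rate, then most evidence.
fn rank(mut notes: Vec<NoteStats>) -> Vec<NoteStats> {
    notes.sort_by(|a, b| {
        b.success_rate()
            .partial_cmp(&a.success_rate())
            .unwrap_or(Ordering::Equal)
            .then(b.total().cmp(&a.total()))
    });
    notes
}

fn main() {
    let ranked = rank(vec![
        NoteStats { note_id: "n1".into(), success: 12, failure: 3 },
        NoteStats { note_id: "n2".into(), success: 1, failure: 0 },
    ]);
    for n in &ranked {
        println!("{} {:.2}", n.note_id, n.success_rate());
    }
}
```

A production ranking would likely also discount notes with very few attempts (e.g. smoothing), which the tie-break toward higher totals only approximates.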
```shell
fuku search --fingerprint blake3:7f9c… --json
# Response includes:
# "effectiveness": { "success": 12, "failure": 3, "total": 15, "success_rate": 0.8 }
```

The local loop is useful on its own: a developer sees which fixes have worked on their own machine. The hub adds the cross-machine layer: `POST /v1/attempts` batches local attempts up, and `GET /v1/attempts/stats` returns aggregated success rates across an organisation.
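Cross-machine aggregation is just summing the per-machine counters for each fingerprint before computing the rate. A sketch under that assumption; the actual server-side shape of `GET /v1/attempts/stats` is not specified here:

```rust
use std::collections::HashMap;

/// Merge per-machine (fingerprint, success, failure) counts into
/// org-wide (success, failure, success_rate) stats.
fn aggregate(per_machine: &[(&str, u32, u32)]) -> HashMap<String, (u32, u32, f64)> {
    let mut totals: HashMap<String, (u32, u32)> = HashMap::new();
    for &(fingerprint, s, f) in per_machine {
        let entry = totals.entry(fingerprint.to_string()).or_insert((0, 0));
        entry.0 += s;
        entry.1 += f;
    }
    totals
        .into_iter()
        .map(|(fp, (s, f))| {
            let total = s + f;
            let rate = if total == 0 { 0.0 } else { s as f64 / total as f64 };
            (fp, (s, f, rate))
        })
        .collect()
}

fn main() {
    // Two machines reporting on the same fingerprint.
    let stats = aggregate(&[("blake3:7f9c", 10, 2), ("blake3:7f9c", 2, 1)]);
    let (s, f, rate) = stats["blake3:7f9c"];
    println!("success={s} failure={f} rate={rate}");
}
```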
`fuku hub sync-attempts` now talks to a server that answers. See the Hub API page for the per-section status matrix.
Upvotes measure popularity; success rate measures whether a fix actually worked. Because Fukura captures the (redacted) next command alongside the outcome, it can tell whether a suggested fix closed the loop. This is the wedge against wiki-style knowledge tools that rely on human judgment after the fact.
Source: `src/domain/attempt.rs` (schema), `src/application/effectiveness.rs` (state machine).