Use Claude Code Insights to debug how you use Claude Code
The most underrated trick I have landed on is treating Claude Code as a coach for using Claude Code. The tooling already sees how you work: sessions, prompts, tools, and where time goes. The part most people skip is closing the loop: turn that signal into habits, not just a pretty chart.
The workflow
1. Run /insights. Claude Code analyzes local session history and writes an interactive HTML report. Typical panels include dashboards, timelines, token and tool patterns, and satisfaction notes; the exact set depends on your build.
2. Open the report. On many installs it ends up under something like ~/.claude/usage-data/report.html (paths can change with updates, so if that folder is empty, use the path the command prints or your settings). A small locate-and-open sketch follows this list.
3. Start a fresh session, attach the report (or paste the relevant sections), and ask something concrete, for example: "Read this report. What are the five worst habits in how I use Claude Code, and what is one specific fix for each?"
You can punt the bookkeeping to Claude Cowork: give it a prompt with your cadence and what you want (run /insights, locate the HTML from step 2, bring it to your attention when it is ready). Cowork carries the repetition; you only show up for the review chat.
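If you would rather keep the bookkeeping in a plain script than hand it to Cowork, a rough stand-in looks like this. The review cadence and report directory are assumptions; adjust both to your setup.

```python
# A rough stand-in for the bookkeeping: nag yourself when the insights
# report is older than your review cadence. Cadence and path are
# assumptions, not anything the product prescribes.
import time
from pathlib import Path

REVIEW_EVERY_DAYS = 14  # hypothetical cadence
usage_dir = Path.home() / ".claude" / "usage-data"
reports = list(usage_dir.glob("*.html")) if usage_dir.exists() else []

if not reports:
    print("No insights report yet; run /insights inside Claude Code.")
else:
    newest = max(p.stat().st_mtime for p in reports)
    age_days = (time.time() - newest) / 86400
    if age_days > REVIEW_EVERY_DAYS:
        print(f"Report is {age_days:.0f} days old; run /insights and book a review chat.")
    else:
        print(f"Report is {age_days:.0f} days old; still fresh.")
```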
Every time I review, something new shows up: over-relying on one tool, burning context on restarts, prompts that fight the model instead of the problem, rules that never made it into CLAUDE.md. The report is the instrument panel; the follow-up chat is where you actually steer.
Why a new session
Mixing “analyze my usage” into the same thread where you are mid-feature invites two failures. You pollute the task context with meta discussion, and the model leans on what it already “knows” about this chat instead of treating the report as primary evidence. A clean session makes the file the source of truth.
What you are looking for in the report
Reports vary by version, but the useful themes are usually the same:
- Session shape: Are you fragmenting work across dozens of tiny threads or letting one conversation sprawl until it is expensive and confused? Both have fixes; the report makes the pattern visible.
- Tool and token patterns: Spikes often mean repeated back-and-forth that a better initial spec, a skill, or a smaller task split would have avoided.
- Friction notes: When the product surfaces "friction" or similar hints, it is pointing at places your workflow fought the tool. That is not necessarily about being "bad at prompting"; often your defaults are wrong for how you work.
- Rules and docs: Suggestions about CLAUDE.md, project rules, or features you never tried are low effort, high leverage. They are the closest thing to a personalized README for your repo and your habits.
Going further
Some people pipe the same data into small CLIs or scripts (for example community tools that parse insight exports) to generate tasks, rule drafts, or skill stubs. That is optional; the core idea does not depend on it. Even hand-copying two screens from the report into a planning note beats never looking.
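As one hedged example of that scripting step, the sketch below pulls headings and visible text out of the report HTML into a plain-text planning note you can paste into a fresh session. The report's structure and path are assumptions; adjust the tags and location to whatever your version emits.

```python
# Extract headings and visible text from the insights report into a
# planning note. The HTML structure and path are assumptions; adapt
# them to the report your version of /insights actually writes.
from html.parser import HTMLParser
from pathlib import Path

class ReportText(HTMLParser):
    def __init__(self):
        super().__init__()
        self.lines = []
        self._keep = False

    def handle_starttag(self, tag, attrs):
        # Keep only headings, paragraphs, and list items.
        self._keep = tag in {"h1", "h2", "h3", "p", "li"}

    def handle_endtag(self, tag):
        self._keep = False

    def handle_data(self, data):
        text = data.strip()
        if self._keep and text:
            self.lines.append(text)

report = Path.home() / ".claude" / "usage-data" / "report.html"  # assumed path
parser = ReportText()
parser.feed(report.read_text(encoding="utf-8"))
Path("insights-planning-note.txt").write_text("\n".join(parser.lines), encoding="utf-8")
print(f"Wrote {len(parser.lines)} lines to insights-planning-note.txt")
```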
The through-line
Most advice about AI coding tools is generic. /insights is specific: it reflects your usage over time. Feeding it back into the model turns a dashboard into a feedback loop: less ritual, more correction. If you only run the command and file the HTML away, you are leaving the best part on the table.
"Treat /insights like a retrospective, not a dashboard." -Rushi