© 2008 - 2026 Marko Bajlovic

Coding with Claude & Plugins

Jan 5, 2026

So, after quite a bit of reluctance, I started using Claude Code a while ago, combined with Superpowers and some essential plugins. It really is the ultimate always-on-call pair coder (when Claude isn't down, lol).


What Claude Code Actually Is

Claude Code is a command-line tool/agent. You install it, open it with `claude` in the directory of a codebase, and it can read files, write files, run shell commands, execute tests, and iterate. It shows each action it's about to take and asks for permission before doing it. It's closer to pairing with a really book-smart junior engineer.

The key mental shift: You are orchestrating, not prompting.

You set up the context (what files matter, what constraints exist, you define what "done" looks like) and then Claude Code works toward that target. You intervene when it drifts or when it's needed.

```shell
npm install -g @anthropic-ai/claude-code

# open claude code in the directory of your codebase
claude

# ** ready. Working directory: ~/Git/test
# Type a task or / for commands. Ctrl+C to exit.
```
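One concrete way to set up that context: Claude Code reads a CLAUDE.md at the repo root automatically, so you can seed it by hand before the first session. A minimal sketch, where the paths, constraints, and commands are all made-up examples for illustration:

```shell
# Hypothetical starter CLAUDE.md -- adjust paths and commands to your project
cat > CLAUDE.md <<'EOF'
# Project notes for Claude Code

- Source lives in src/, tests in tests/. Ignore vendor/ and dist/.
- Constraint: no new dependencies without asking first.
- "Done" means: npm test passes and npm run lint is clean.
EOF
```

The point isn't the specific rules; it's that "what files matter, what constraints exist, what done looks like" lives in a file Claude sees on every session, not in your head.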

Caveman

First off, install Caveman. This is a plugin for Claude Code that trims Claude's output, saving an average of 65% across the author's 10 test prompts. Save those precious tokens.

curl -fsSL https://raw.githubusercontent.com/JuliusBrussee/caveman/main/install.sh | bash

Now when Claude responds, it uses far fewer tokens than before. For example, this:

The reason your React component is re-rendering is likely because you're creating a new object reference on each render cycle. When you pass an inline object as a prop, React's shallow comparison sees it as a different object every time, which triggers a re-render. I'd recommend using useMemo to memoize the object.

to:

New object ref each render. Inline object prop = new ref = re-render. Wrap in useMemo.

Setting Up CLAUDE.md & Starter Skills

andrej-karpathy-skills

Claude sometimes likes to run off and do things on its own, without any direction. So I typically start a fresh project from a set of instructions provided by andrej-karpathy-skills.

It typically solves:

| Principle | Addresses |
| --- | --- |
| Think Before Coding | Wrong assumptions, hidden confusion, missing tradeoffs |
| Simplicity First | Overcomplication, bloated abstractions |
| Surgical Changes | Orthogonal edits, touching code you shouldn't |
| Goal-Driven Execution | Leverage through tests-first, verifiable success criteria |

To install this, inside a claude code session, run:

/plugin marketplace add forrestchang/andrej-karpathy-skills

Once the marketplace is added, install the skills with:

/plugin install andrej-karpathy-skills@karpathy-skills

or:

curl -o CLAUDE.md https://raw.githubusercontent.com/forrestchang/andrej-karpathy-skills/main/CLAUDE.md

mattpocock/skills

Depending on the depth and needs of the project, there's also mattpocock/skills. These small, composable skills offer flexibility, leveraging decades of engineering experience so you maintain full control when building and debugging.

To install mattpocock/skills:

Run the skills.sh installer:

npx skills@latest add mattpocock/skills

Pick the skills you want, and which coding agents you want to install them on. Make sure you select /setup-matt-pocock-skills.

Run /setup-matt-pocock-skills in your Claude session to start. It'll prompt you to answer a few questions about your project.

The skills I really like from mattpocock/skills are:

/grill-me #or /grill-with-docs

They help you sync up with Claude before getting started. It forces Claude to think deeply about the change you're making. Use them every time you want to make a change.

/diagnose

"Disciplined diagnosis loop for hard bugs and performance regressions: reproduce → minimise → hypothesise → instrument → fix → regression-test."

/handoff

Summarizes the conversation into a transition document for a seamless handoff to another agent/AI, or sheeeeeeeeeeeeet, even another human, lol.


Superpowers

Superpowers is the plugin layer that sits on top of Claude Code. Think of it as a curated set of system prompts on steroids. Each plugin gives Claude Code a specialized toolset and knowledge base for that particular kind of task.

If you want to install it, in a Claude Code session type /plugins, search for "superpowers", highlight it, and hit enter. If you'd like to use it right away, run /reload-plugins.

Once Superpowers is installed, run /using-superpowers at the start of a new session. This is the one I missed for the first few weeks and it made a real difference when I found it. It bootstraps the whole skill-discovery system, so Claude knows how to find and load the right plugin before it responds to anything, including clarifying questions. Without it, you can end up getting generic answers before the right context has loaded.

Claude will now trigger the appropriate skill before any response.

You'll need to make this a habit. First thing you type in a new session. Beyond that, there are a handful of slash commands worth knowing. Three of them are technically deprecated but still work, and honestly the old names are clearer than the new ones:

  • /superpowers:brainstorming kicks off an open-ended ideation session scoped to your codebase.
  • /superpowers:writing-plans was my go-to for laying out a multi-step refactor before touching any code. Same idea, Superpowers helps you produce a structured plan document you can then feed back into an execution session.
  • /superpowers:executing-plans is the other half of that pair. You hand it a plan, it works through the steps.
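For a sense of what such a plan document might look like, here's a hand-rolled sketch; the refactor, file names, and verify commands are all invented for illustration:

```shell
# Hypothetical plan file to write via /superpowers:writing-plans
# and later hand to /superpowers:executing-plans
cat > plan.md <<'EOF'
# Plan: extract payment logic into src/payments/

1. Add characterization tests around the current charge flow
   - verify: npm test passes
2. Move charge logic out of src/app.js into src/payments/charge.js
3. Update imports and run the full suite
   - verify: npm test && npm run lint
EOF
```

Small, numbered steps with explicit verify commands are what make the execution session reviewable at each checkpoint.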

The Superpowers Commands I Use Most

Most of Superpowers' value isn't in the plugins. It's in these six slash commands. They're not modes you switch into, they're more like protocols you invoke at specific moments in your workflow. Once I started using them in the right order, the whole thing clicked.

/using-superpowers

Run this first. Every session. Before anything else. What it actually does is establish skill discovery, meaning Claude won't just guess at a response based on its general training. It will find and trigger the right skill before replying to anything, including your first clarifying question.

(I skipped this for weeks because it felt ceremonial. It isn't. Without it you're getting generic Claude, not Superpowers-aware Claude).

/using-git-worktrees

Run this before any feature work that touches something you don't want to break mid-session. This creates an isolated git worktree for the work about to happen, with smart directory selection so it doesn't clobber your current workspace. The practical upside: Claude Code can work on the feature branch in isolation while you stay on main. If things go sideways you just discard the worktree. I now run this before executing any plan that touches more than two files.
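Under the hood this is plain git worktrees. Here's a self-contained sketch of the same idea, using a throwaway repo and made-up branch and directory names:

```shell
# Create a throwaway repo so the sketch runs anywhere
cd "$(mktemp -d)"
git init -q demo && cd demo
git -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m "init"

# Work happens on a new branch in an isolated directory;
# your current checkout stays untouched
git worktree add -q ../demo-feature -b feature/sketch

# If things go sideways, discard the worktree and the branch
git worktree remove ../demo-feature
git branch -q -D feature/sketch
```

The plugin's value is automating the directory selection and cleanup so Claude never clobbers the workspace you're sitting in.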

/executing-plans

This one pairs with the planning workflow. You write a plan first (in a separate session or via /superpowers:writing-plans), then hand it to this command to execute with review checkpoints between steps.

/verification-before-completion

This one I wish I'd had from the start. Run it before Claude declares anything done, fixed, or passing.

What it enforces is simple but important: Claude has to actually run the verification commands and show you the output before it can claim success. No more "the tests should be passing now." It has to prove it. I've caught a bunch of cases where it was about to commit something naughty/failing, purely because it was confident based on what it had written rather than what it had actually run.
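A rough sketch of that discipline as a shell gate; the checks here are stand-ins, not the plugin's actual implementation, so swap in your project's real test and lint commands:

```shell
# Only claim success from the actual exit status of the real checks.
run_checks() {
  echo "running checks..."
  true   # stand-in for: npm test && npm run lint
}

if run_checks; then
  echo "verified: checks actually passed"
else
  echo "NOT done: checks failed" >&2
  exit 1
fi
```

Same principle either way: "done" is an exit status you observed, not a sentence the model generated.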

Before most commits or PRs, I run this command.

/subagent-driven-development

I don't use this one much, to be honest; mostly when I have an implementation plan with tasks that can run independently of each other in the current session.

The difference from /executing-plans is subtle but real. This one spins up independent subagents per task rather than working sequentially. Parallelism, basically.

/systematic-debugging

Run this the moment you hit a bug, a test failure, or anything that behaves unexpectedly. Before you ask Claude to propose a fix.

Without this command, Claude's default instinct seems to be to just jump to a fix. With it, you get a structured investigation first: reproduce the issue, understand the actual failure, identify the root cause, then fix.


The Plugins I Actually Use

There are a lot of plugins in the ecosystem. Here are some that I reach for fairly regularly:

code-simplifier

My most-used plugin on legacy codebases. It untangles overcomplicated logic, extracts magic constants, and flags premature abstractions. It doesn't just make code shorter, it makes code legible to a human reading it six months later.

```javascript
// before
function processUserData(data) {
  const result = {}
  if (data !== null && data !== undefined) {
    if (data.user !== null && data.user !== undefined) {
      if (data.user.profile) {
        if (data.user.profile.name) {
          result.name = data.user.profile.name
        }
        if (data.user.profile.email) {
          result.email = data.user.profile.email
        }
      }
    }
  }
  return result
}
```

```javascript
// after code-simplifier
function processUserData(data) {
  const profile = data?.user?.profile ?? {}
  return {
    ...(profile.name && { name: profile.name }),
    ...(profile.email && { email: profile.email }),
  }
}
```

Functionally identical. Far fewer lines. Zero nested conditionals. Not as ugly.

code-review

I run this before every PR now. Not instead of human review, alongside it. The persona it adopts is deliberately critical. It's not trying to make you feel good; it's trying to find problems.

```
/superpower code-review
> Review the changes in git diff HEAD~1

# Reviewing 3 files, 147 additions, 32 deletions...

CRITICAL (1)
src/payments/charge.js:47 - Promise chain is not handling rejection.
  stripe.charges.create().then(handleSuccess) will silently drop errors.
  Add .catch() or convert to async/await with try/catch.

WARNINGS (3)
src/payments/charge.js:23 - hardcoded USD currency string.
src/users/update.js:88 - N+1 query risk in the loop starting here.
src/users/update.js:102 - function is 94 lines; consider splitting.

STYLE (2)
Inconsistent error message casing across the two files.
Missing return type hints on three exported functions.
```

The critical finding about the Promise rejection was real. I missed it initially. It would have caused silent payment failures in prod.

security-guidance

This plugin makes me feel slightly paranoid in a healthy way. It approaches every file as a potential attack surface.

A recent example output: "The user ID is taken directly from req.params.id and interpolated into the SQL query on line 34. Even with parameterised queries elsewhere in the file, this one isn't. It's string concatenation. Classic SQL injection vector. Fix: use db.query('SELECT * FROM users WHERE id = ?', [req.params.id])."

What I dig is that it doesn't just identify issues, it explains the attack scenario. "An attacker could pass 1 OR 1=1 and receive all rows" is far more motivating than "use parameterised queries." I find myself actually learning a lot from the fixes rather than cargo-culting them.
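To make that scenario concrete, here's a tiny sketch of the string-concatenation problem itself; the table and column names are invented, and this just builds the query text rather than hitting a real database:

```shell
# Attacker-controlled input, as it would arrive in req.params.id
user_id='1 OR 1=1'

# Naive concatenation splices the value straight into the SQL text
query="SELECT * FROM users WHERE id = $user_id"
echo "$query"
# The WHERE clause is now always true, so every row comes back.
# Parameterised form keeps the value out of the SQL text entirely:
#   db.query('SELECT * FROM users WHERE id = ?', [req.params.id])
```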

python-lsp

My Python code is aight. It's not perfect. I write it the way someone who primarily does JavaScript writes Python: syntactically correct, semantically loose, typed approximately never.

The python-lsp superpower essentially yells at me like a properly configured Pylance would, but in natural language, in context, with explanations I can actually learn from.

```python
# Original
def get_user(id):
    result = db.execute(f"SELECT * FROM users WHERE id = {id}")
    if result:
        return result["name"], result["email"]
    return None
```

```python
# after python-lsp feedback
from typing import Optional, Tuple

def get_user(user_id: int) -> Optional[Tuple[str, str]]:
    """Fetch a user's name and email by ID. Returns None if not found."""
    result = db.execute(
        "SELECT name, email FROM users WHERE id = %s",
        (user_id,),
    )
    if result:
        return result["name"], result["email"]
    return None
```

Three issues caught in one pass: missing type annotations, SQL injection via f-string, and selecting * when only two columns were needed. The superpower explained each one. I understood all three better afterward than I did before.


frontend-design

frontend-design creates distinctive, production-grade frontend interfaces using functional code that prioritizes creative aesthetics over generic patterns. It's pretty solid, to be honest.

To install frontend-design:

/plugin install frontend-design

from inside a Claude session.


The Rough Edges

I'd be lying if I said it was all smooth. A few honest frustrations after some time:

  • File context limits bite on large codebases. If your project has thousands of files, Claude Code needs to be guided about which directories matter. It won't read everything by default, and when it guesses wrong about file relevance, the output suffers.
  • Superpower switching resets conversational context. When you /superpower switch, the previous mode's personality goes away. This is actually correct behaviour (you want a clean slate for a security review) but it means you occasionally have to re-explain something you already discussed.
  • Some of the skills can work with or replace Superpowers; it all depends on how and what you're doing. Definitely experiment and adapt the workflows to match your own preferences.
  • The python-lsp superpower struggles with complex type inference. It's great at catching simple type issues but starts to break down with heavily generic code or complex decorator patterns. Think of it as a fast linter, not a full mypy replacement.
  • Anthropic recently introduced a frontend design tool over at claude.ai/design.
  • Shit uses A LOT of tokens. When prompting, you can't be too open-ended or broad.

Note

A lot of these 'skills' are just markdown, which can also be dropped into your project manually whenever you want, ideally at the start. Don't be afraid to make changes and customize them to your own workflow and preferences.

tags
Software · AI · Claude Code · Coding · Programming