Anyone landing in an unfamiliar repo, whether that’s a new contributor, a security scanner, or an AI coding agent, has to answer the same handful of questions before doing anything useful: what language is this, how do I install dependencies, what’s the test command, which linter do I run before committing, and for a security review, which functions in this stack are the dangerous ones.
The agent case just makes the cost of getting it wrong the most visible, because you can see Claude grep for package.json, read the Gemfile, try npm test, get told there’s no test script, try yarn test, discover it’s actually pnpm, and only then get to the work you asked for. The answers are identical for every Rails project or every Go module that has ever existed, and rediscovering them from scratch every time is wasted effort.
brief is a knowledge base of 516 tools across 54 language ecosystems, with a single Go binary in front of it that does the lookup and prints JSON when piped or a human summary on a TTY. The dataset is the part that doesn’t exist anywhere else: invocation commands, config-file locations, and taxonomy for five hundred tools under one machine-readable schema. CI templates, devcontainer generators, and editor onboarding flows were the closest I found, each carrying a slice of it with no shared upstream. I think of the CLI as one view onto that data and expect there to be others.
Point it at a directory, a git URL, or a registry coordinate like gem:rails or npm:express and it reports the toolchain across twenty categories, each with the command to run and the config files that drive it, plus whatever governance and community files (license with SPDX identifier, security policy, CODEOWNERS, FUNDING.yml, and so on) it finds in the usual places.
brief .             # local directory
brief gem:rails     # registry package, resolved to source repo
brief diff          # only tools touched by changed files
brief missing       # baseline categories with no tool configured
brief threat-model  # CWE/OWASP categories implied by the stack
brief sinks         # dangerous functions in detected tools
Checking all 516 definitions finishes in under 250ms, since anything that runs at the front of every session or pipeline step can’t afford to be the slow part; on this blog’s own repo it picks out Jekyll, Bundler, Rake, Dependabot and GitHub Actions in around 220ms, and on a Go project the output looks like:
$ brief .
Language: Go
Package Manager: Go Modules (go mod download)
Test: go test (go test ./...)
Lint: golangci-lint (golangci-lint run) [.golangci.yml]
Format: gofmt (gofmt -w .)
Build: GoReleaser (goreleaser release --clean)
Security: govulncheck (govulncheck ./...)
CI: GitHub Actions [.github/workflows/]
I run it myself as the first thing after cloning anything, and I have it wired into my global agent instructions so every Claude session opens with brief . before anything else. That onboards the agent to the repo in one tool call and saves the tokens it would otherwise burn on exploratory greps and wrong guesses. On a feature branch brief diff narrows the report to just the tools touched by the changed files, so whoever is reading it knows to run golangci-lint because a .go file changed without also being told about the Python linter in the monorepo’s other half.
Because the JSON output follows a published schema, it also works as a building block for other tooling. brief --json . | jq -r '.tools.test[0].command.run' gives a polyglot CI job the project’s test command without anyone writing per-language cases; the same lookup can drive a devcontainer or onboarding script; and the plan is to run it across every repo ecosyste.ms indexes, so that stack metadata is available for every package.
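As a sketch of what that looks like in practice, here is a minimal GitHub Actions job built around that jq lookup. It assumes the brief binary is already on PATH (the install step is omitted), and the only schema detail it relies on is the .tools.test[0].command.run path quoted above:

```yaml
# Sketch only: assumes `brief` is installed and on PATH.
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run the project's own test command
        run: |
          # Ask brief for the detected test command; jq prints "null"
          # if no test tool was detected, so guard before running it.
          cmd="$(brief --json . | jq -r '.tools.test[0].command.run')"
          if [ "$cmd" != "null" ]; then eval "$cmd"; fi
```

The same two-line shell fragment works outside CI for anything that needs a project’s test command without caring what language the project is in.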
The detection rules are TOML rather than Go, which means adding a tool is a single file under knowledge/ with no code changes: a name, a category, the files or dependency names that signal its presence, the command to run it, and optionally a set of oss-taxonomy tags describing what kind of thing it is. That taxonomy is a sibling project: it builds the vocabulary for what a tool is; brief detects which tools a project uses.
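To make that concrete, a definition might look something like the following. This is an illustrative sketch, not brief’s actual schema: the field names are my guesses at the shape described above (name, category, detection signals, command, taxonomy tags), using RSpec as the example tool:

```toml
# Hypothetical knowledge/ definition for RSpec.
# Field names are illustrative, not brief's real schema.
name = "RSpec"
category = "test"
ecosystem = "ruby"

[detect]
# Present if either a config file exists or the gem is in the bundle.
files = [".rspec", "spec/spec_helper.rb"]
dependencies = ["rspec-core"]

[command]
run = "bundle exec rspec"

[taxonomy]
tags = ["testing-framework"]
```

Whatever the real field names are, the point stands: a contribution is one small declarative file, reviewable without reading any Go.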
The dependency-name matching is driven by the same manifest parser as git-pkgs, so a tool definition can say “present if rspec-core is in the bundle” and brief already knows how to read Gemfiles, package.json, go.mod, Cargo.toml, and the other supported lockfile formats without any of that being reimplemented.
Those tags were originally there so the JSON output could say “web framework” rather than just “build tool”, but once a few hundred definitions carried them they mapped cleanly onto CWE and OWASP categories, and brief threat-model on a Rails project produces SQL injection, mass assignment, XSS, CSRF, and SSTI without scanning a line of code, because that’s what Rails and ActiveRecord are for. The definitions also carry the specific dangerous functions each tool exposes, around 700 across the dataset, which is a reasonable starting grep list for a security review of a stack you’ve never worked in:
$ brief sinks .
ActiveRecord:
  Arel.sql        sql_injection    CWE-89
  find_by_sql     sql_injection    CWE-89
  where           sql_injection    CWE-89    string interpolation only
Rails:
  html_safe       xss              CWE-79
  redirect_to     open_redirect    CWE-601   when target is from params
  render inline:  ssti             CWE-1336
Ruby:
  eval            code_injection   CWE-95
  Marshal.load    deserialization  CWE-502
brief missing inverts the check and reports which of five baseline categories (test, lint, format, typecheck, docs) have no tool configured for the detected ecosystems, naming the canonical choice for each gap. The detection engine is also importable as a Go library if you’d rather not shell out.
Tool definitions live in the knowledge/ directory and PRs adding new ones are the contributions I’m most interested in, particularly for ecosystems I don’t write every day. If you point it at a project and it gets something wrong, open an issue or find me on Mastodon.
brew install git-pkgs/git-pkgs/brief / github.com/git-pkgs/brief