
How pyr works

A look under the hood at a tool that manages Python without being Python.

The binary

pyr is written in TypeScript and compiled to a native binary with deno compile. This is a deliberate choice. The tool that manages your Python environment can't depend on Python to install itself — that's the circular dependency that makes Poetry painful to bootstrap. Rust (the route uv took) also solves this, but brings slow compile times and a steeper barrier to contribution. Deno gives you a single compiled binary with no runtime dependency, and the development loop stays fast.

The entire tool is two files: main.ts (the CLI dispatcher, ~170 lines) and lib.ts (everything else, ~1280 lines). There's no framework. The CLI is a hand-rolled command registry with a parse-dispatch loop. Dependencies are minimal: @std/cli for argument parsing and spinners, @std/toml for reading pyproject.toml.

Bootstrapping Python

On first use, pyr downloads a standalone CPython build from the python-build-standalone project into ~/.pyr/python/. These are self-contained CPython distributions — no system dependencies, no compilation, no brew or apt. pyr detects your platform and architecture, fetches the appropriate tarball, extracts it, and writes a .version stamp file.
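The platform detection step amounts to mapping an (OS, architecture) pair onto one of python-build-standalone's target triples. A minimal sketch, assuming Deno-style platform names ("darwin"/"linux"/"windows", "aarch64"/"x86_64"); the function name is illustrative, not pyr's actual code:

```typescript
// Map a (os, arch) pair to a python-build-standalone target triple.
// The triples follow python-build-standalone's release naming.
function pythonBuildTarget(os: string, arch: string): string {
  const table: Record<string, string> = {
    "darwin/aarch64": "aarch64-apple-darwin",
    "darwin/x86_64": "x86_64-apple-darwin",
    "linux/aarch64": "aarch64-unknown-linux-gnu",
    "linux/x86_64": "x86_64-unknown-linux-gnu",
    "windows/x86_64": "x86_64-pc-windows-msvc",
  };
  const triple = table[`${os}/${arch}`];
  if (!triple) throw new Error(`unsupported platform: ${os}/${arch}`);
  return triple;
}
```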

This happens once. Every project on your machine shares the same managed Python. When you run pyr upgrade --python, the old install is replaced and every project's venv rebuilds itself on next use (more on that below).

The bootstrap probes GitHub's API before doing anything destructive. If the network is down or the API is rate-limited, pyr bails with a message instead of leaving you without a working Python.

Venv management

Every project gets a .venv created from the managed Python via the standard python -m venv mechanism. pyr writes a stamp file at .venv/.pyr-python containing the CPython version that created the venv.

On every command that touches the venv, pyr compares this stamp against the managed Python's version. If they differ — because you ran pyr upgrade --python — pyr deletes the old venv, creates a new one from the updated Python, and reinstalls everything from requirements.txt. This happens silently. You don't manage venvs. pyr manages venvs.
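The decision itself is a one-line comparison. A sketch, with a hypothetical helper name (pyr's real code also has to read the stamp file and handle it being absent):

```typescript
// Given the contents of .venv/.pyr-python (or null if missing) and the
// managed Python's version, decide whether the venv must be rebuilt.
function venvNeedsRebuild(stamp: string | null, managedVersion: string): boolean {
  if (stamp === null) return true;         // no stamp: venv predates pyr or is corrupt
  return stamp.trim() !== managedVersion;  // version drift: rebuild from the new Python
}
```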

The venv is project-local (always .venv/ in the project root) and gitignored by default. It's disposable. Delete it, run any pyr command, and it rebuilds.

Dependency resolution

pyr does not implement a dependency resolver. It delegates to pip.

When you run pyr sync, pyr writes your top-level dependencies to a temp file and calls pip install --upgrade -r <tmpfile>. pip does the resolution, downloads the wheels, and installs them. pyr then captures the post-install state with pip freeze.
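The two pip invocations can be sketched as argv construction. These helper names are illustrative; the important detail is that pip runs through the venv's own interpreter, so installs land in the right site-packages:

```typescript
// Install/upgrade everything listed in the temp requirements file.
function pipInstallArgv(venvPython: string, reqTmp: string): string[] {
  return [venvPython, "-m", "pip", "install", "--upgrade", "-r", reqTmp];
}

// Capture the post-install state for the lockfile.
function pipFreezeArgv(venvPython: string): string[] {
  return [venvPython, "-m", "pip", "freeze"];
}
```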

This is a conscious trade-off. pip's resolver is slower than uv's custom Rust resolver. But it's battle-tested, handles edge cases, and means pyr doesn't have to maintain a resolver. For a v0 tool, this is the right call. The resolver is the one piece pyr doesn't own, and that boundary is clean.

Orphan pruning

After installing, pyr identifies packages that are installed but no longer needed. It does this by asking pip for the "leaf" packages — packages nothing else depends on — via pip list --not-required --format=json.

A leaf package is an orphan if it's not in the user's declared dependencies and not a protected package (pip, setuptools, wheel — removing these breaks the venv).

Removing a leaf can promote its former dependents to leaf status. pyr handles this by pruning iteratively: remove orphans, recompute leaves, remove new orphans, repeat. The loop is bounded at 16 iterations. In practice it converges in two or three.
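The iterative prune can be sketched over an in-memory dependency graph. The map shape and names here are illustrative; pyr gets this information by querying pip, not from a data structure like this:

```typescript
// Iteratively remove orphans: leaves that are neither declared nor protected.
// `deps` maps each installed package to the packages it depends on.
function pruneOrphans(
  deps: Map<string, string[]>,
  declared: Set<string>,
  protectedPkgs: Set<string>,
): string[] {
  const removed: string[] = [];
  for (let round = 0; round < 16; round++) {    // bounded, like pyr's loop
    const installed = new Set(deps.keys());
    const dependedOn = new Set<string>();
    for (const [, ds] of deps) {
      for (const d of ds) if (installed.has(d)) dependedOn.add(d);
    }
    const orphans = [...installed].filter(
      (p) => !dependedOn.has(p) && !declared.has(p) && !protectedPkgs.has(p),
    );
    if (orphans.length === 0) break;            // converged
    for (const p of orphans) {
      deps.delete(p);                           // removing a leaf may expose new leaves
      removed.push(p);
    }
  }
  return removed;
}
```

Removing a package never strands its dependencies: they become leaves in a later round and are swept up then.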

The lockfile

requirements.txt is the lockfile. It's generated by pyr, never hand-edited (though pyr won't stop you). The format is pip freeze output: one package per line, fully pinned with ==, alphabetically sorted, with a # generated by pyr header.

There are no hashes. No environment markers. No custom format. Any Python developer can read it. Any CI system can install from it with pip install -r requirements.txt. The lockfile is portable by design.

Protected packages (pip, setuptools, wheel) are filtered from the lock. Editable installs are filtered. What remains is exactly what your application depends on.
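Lock generation is then a filter-sort-join over the freeze output. A sketch following the rules above; the helper name is made up:

```typescript
// Packages whose removal would break the venv; never locked.
const PROTECTED = new Set(["pip", "setuptools", "wheel"]);

// Turn `pip freeze` lines into pyr's lockfile text: drop editable installs
// and protected packages, sort, and prepend the header.
function renderLock(freezeLines: string[]): string {
  const keep = freezeLines.filter((line) => {
    if (line.startsWith("-e ")) return false;       // editable installs
    const name = line.split("==")[0].toLowerCase();
    return !PROTECTED.has(name);
  });
  keep.sort((a, b) => a.localeCompare(b));
  return ["# generated by pyr", ...keep].join("\n") + "\n";
}
```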

TOML surgery

When pyr add or pyr remove modifies pyproject.toml, it doesn't parse the entire file into an AST, modify it, and serialize it back. That approach — used by most tools — destroys comments, reorders keys, and normalizes whitespace. Your carefully formatted pyproject.toml comes back looking like it was written by a machine.

Instead, pyr does surgical editing. It:

  1. Locates the [project] table header via regex, determining its byte span (from the header to the next table or EOF).
  2. Within that span, finds the dependencies = [ key and its opening bracket.
  3. Walks forward from the [ to find the matching ], using a state machine that correctly handles single-quoted strings, double-quoted strings, triple-quoted strings, escape sequences, comments, and nested brackets.
  4. Parses just the array contents into a list of dependency specs.
  5. Applies the mutation (add or remove).
  6. Formats the new array as multi-line TOML (4-space indent, double-quoted, trailing commas).
  7. Splices the new array back into the original file at the exact byte offsets.

Everything outside the dependencies array is untouched. Your comments survive. Your formatting survives. Your other sections survive. The splice is byte-perfect.
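The bracket-matching walk in step 3 is the subtle part. A simplified sketch: given the source and the index of an opening `[`, find its matching `]` while skipping brackets inside quoted strings and comments. Triple-quoted strings are omitted here for brevity; pyr's real state machine handles them too:

```typescript
// Return the index of the `]` matching the `[` at `open`, ignoring brackets
// inside strings and comments. Throws if the array is never closed.
function matchBracket(src: string, open: number): number {
  let depth = 0;
  let i = open;
  while (i < src.length) {
    const ch = src[i];
    if (ch === '"' || ch === "'") {               // enter a string, scan to its end
      const quote = ch;
      i++;
      while (i < src.length && src[i] !== quote) {
        if (quote === '"' && src[i] === "\\") i++; // escapes only in basic strings
        i++;
      }
    } else if (ch === "#") {                       // comment runs to end of line
      while (i < src.length && src[i] !== "\n") i++;
    } else if (ch === "[") {
      depth++;
    } else if (ch === "]" && --depth === 0) {
      return i;
    }
    i++;
  }
  throw new Error("unbalanced brackets");
}
```

Note the asymmetry between quote styles: TOML literal (single-quoted) strings have no escape sequences, so the backslash check applies only inside basic strings.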

If the [project] table exists but has no dependencies key, pyr inserts one immediately after the table header. If [project] is missing entirely, pyr errors — it doesn't own pyproject.toml, and creating structural sections would be overstepping.

Requirement parsing

pyr needs to extract canonical package names from requirement specs in several contexts: reading pyproject.toml entries, parsing lockfile lines, matching packages for add/remove operations. The parser handles:

  • Standard PEP 508 specs: requests, httpx[http2], rich>=13
  • Direct references: name @ https://...
  • VCS URLs with egg fragments: git+https://...#egg=name
  • Environment markers: requests; python_version >= "3.8"
  • Comments: requests # for API calls

Names are canonicalized per PEP 503: lowercased, with runs of hyphens, underscores, and dots collapsed to a single hyphen. So Foo-Bar, foo_bar, and foo.bar all become foo-bar.

Editable installs (-e ./path) and bare local paths are intentionally unparseable — the package name isn't derivable without reading the target's metadata, and pyr doesn't go that far.
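The canonicalization rule and the extraction order can be sketched as follows. The regexes are a simplification of what pyr does; real PEP 508 parsing has more corner cases:

```typescript
// PEP 503: lowercase, collapse runs of -, _, . into a single hyphen.
function canonicalize(name: string): string {
  return name.toLowerCase().replace(/[-_.]+/g, "-");
}

// Extract the canonical package name from a requirement spec, or null for
// editable installs and bare local paths.
function extractName(spec: string): string | null {
  const egg = spec.match(/#egg=([A-Za-z0-9._-]+)/);  // VCS URLs: name lives in the fragment
  if (egg) return canonicalize(egg[1]);
  let s = spec.split("#")[0];                        // otherwise, strip trailing comment
  if (s.startsWith("-e ") || s.startsWith("./") || s.startsWith("/")) return null;
  s = s.split(";")[0];                               // drop environment markers
  const m = s.match(/^\s*([A-Za-z0-9._-]+)/);        // name before extras, version, or @
  return m ? canonicalize(m[1]) : null;
}
```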

Self-upgrade

pyr upgrade replaces the running binary. It:

  1. Queries the GitHub releases API for the latest release.
  2. Compares the version tag against the current PYR_VERSION.
  3. Downloads the platform-appropriate zip asset.
  4. Extracts to a temp directory.
  5. Size-checks the extracted binary (zero-byte or missing = failed download).
  6. Executes the new binary with --version to verify it reports the expected version.
  7. Renames the new binary over the running one.

On Windows, you can't overwrite a running executable. pyr handles this by renaming the current binary to pyr.exe.old, placing the new binary at the original path, and cleaning up the .old file on the next run.

The GITHUB_TOKEN environment variable is respected for API requests, avoiding rate limits in CI environments or for frequent upgraders.
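The version comparison in step 2 reduces to normalizing the tag and comparing dotted numeric components. An illustrative sketch; pyr's actual comparison may differ in detail:

```typescript
// True if `tag` (e.g. "v0.4.0") is strictly newer than `current` (e.g. "0.3.9").
function isNewer(tag: string, current: string): boolean {
  const parse = (v: string) => v.replace(/^v/, "").split(".").map(Number);
  const [a, b] = [parse(tag), parse(current)];
  for (let i = 0; i < Math.max(a.length, b.length); i++) {
    const x = a[i] ?? 0, y = b[i] ?? 0;  // missing components compare as zero
    if (x !== y) return x > y;
  }
  return false;  // equal versions: no upgrade
}
```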

Auto-sync on run

pyr run checks whether pyproject.toml has been modified more recently than requirements.txt by comparing filesystem mtimes. If the toml is newer, it runs a quiet sync before executing the script.

This means you can hand-edit pyproject.toml, add a dependency to the [project].dependencies array, and immediately run pyr run. The dependency is resolved, installed, locked, and your script runs — in one command, with no manual sync step.

Platform support

pyr builds for five targets:

  • aarch64-apple-darwin (Apple Silicon Mac)
  • x86_64-apple-darwin (Intel Mac)
  • x86_64-unknown-linux-gnu (Linux x86_64)
  • aarch64-unknown-linux-gnu (Linux ARM64)
  • x86_64-pc-windows-msvc (Windows x86_64)

Each target is compiled on a native runner for its OS — macOS builds run on macOS, Linux builds on Linux, Windows builds on Windows. Architecture variants within the same OS use deno compile --target. The CI pipeline builds all five, smoke-tests three (macOS ARM, Linux x86_64, Windows x86_64), and publishes all five as release assets.

The install script (install.sh for Unix, install.ps1 for Windows) detects the platform, downloads the correct zip, extracts the binary to ~/.pyr/bin/, and prints the PATH line for your shell.

What pyr doesn't do

pyr doesn't build wheels. It doesn't publish to PyPI. It doesn't manage Python versions (it manages one: the latest). It doesn't run linters or formatters. It doesn't have a plugin system.

These aren't missing features. They're boundaries. pyr is a project manager for Python applications. It answers: how do I go from nothing to a running project with managed dependencies and a structure that scales? Everything else is someone else's job.