Self-hosted media archive manager for Sonarr and Radarr. Cold storage, offline drive tracking, and one-click restore. https://catalogarr.patserver.com/about

Catalogarr

https://catalogarr.patserver.com/about

A self-hosted media archive manager built for the Servarr ecosystem. Catalogarr sits alongside Sonarr and Radarr and handles the side of media management they don't — moving finished media off your main array, tracking what's on which drive, and getting it back when you need it.


What it does

Sonarr and Radarr are great at managing what you're actively watching. They have no concept of "done" — no way to move a finished show off your main array, keep the metadata intact, and bring it back someday without starting from scratch.

Catalogarr handles that. It indexes your archive drives, syncs with your ARRs, and gives you a clean way to move media between active and cold storage without losing artwork, NFO files, or your watch history.

The core workflow is straightforward:

  • Scan your drives and index everything on them
  • Pull your active library from Sonarr and Radarr so both are visible in one place
  • Archive with a few clicks — copies the full folder including metadata and artwork, removes it from Sonarr/Radarr, optionally deletes the source
  • Restore when you want to watch something again — picks a root folder, copies the files, triggers a rescan automatically
  • For TV shows, season-level restore lets you bring back only what you're missing rather than the whole series
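The archive step above can be sketched roughly like this. This is a minimal illustration, not Catalogarr's actual code: `archive_folder` is a hypothetical helper, and the Sonarr/Radarr removal is noted as a comment rather than implemented.

```python
import shutil
from pathlib import Path


def archive_folder(source: Path, archive_root: Path,
                   delete_source: bool = False) -> Path:
    """Copy a finished media folder to an archive drive intact."""
    dest = archive_root / source.name
    # copytree preserves the whole folder: video files, NFOs,
    # artwork, thumbnails — nothing is lost in the move.
    shutil.copytree(source, dest, dirs_exist_ok=True)
    # At this point the app would also remove the entry from
    # Sonarr/Radarr (keeping metadata in the archive copy), and only
    # then, optionally, delete the source after the user confirms.
    if delete_source:
        shutil.rmtree(source)
    return dest
```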

Screenshots

Dashboard

Archive Catalog

Active Catalog

Connectors

Settings

Tasks


Requirements

  • Python 3.11+
  • Redis
  • Sonarr and/or Radarr (optional, but most features assume at least one)
  • TMDB API key (optional, used as a metadata fallback)
  • Shoko Server (optional, for anime metadata)

Installation

git clone https://gitlab.patserver.com/patrick19368/Catalogarr.git
cd Catalogarr
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
cp env.template .env
python3 admin.py
python3 main.py

For production use the included Gunicorn config:

gunicorn --config gunicorn.conf.py main:app

Or run it as a systemd service:

sudo cp catalogarr.service /etc/systemd/system/
sudo systemctl enable --now catalogarr

Configuration

Copy env.template to .env and fill in your values:

APP_SECRET=        # python3 -c "import secrets; print(secrets.token_hex(32))"
REDIS_URL=         # redis://localhost:6379/0
SONARR_URL=        # http://192.168.1.x:8989
SONARR_API_KEY=
RADARR_URL=        # http://192.168.1.x:7878
RADARR_API_KEY=
TMDB_API_KEY=      # optional
SHOKO_URL=         # optional
SHOKO_API_KEY=     # generate in Shoko under Settings > API Keys

Scan paths are configured through the Settings page and saved to config.yaml. Each path can be flagged as an anime path, which routes metadata enrichment through Shoko instead of Sonarr.

If Catalogarr and your ARR apps are on separate machines, the paths configured in Sonarr/Radarr need to be accessible at the same location on the Catalogarr host via a shared mount or NFS.
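One way to sanity-check that requirement is to test whether each ARR root folder resolves locally. The helper below is hypothetical; the root-folder list would come from Sonarr/Radarr (their v3 API exposes it via /api/v3/rootfolder).

```python
from pathlib import Path


def check_arr_paths(root_folders: list[str]) -> dict[str, bool]:
    """Return which ARR root folders are visible at the same path
    on this host. A False value means the shared mount (NFS or
    otherwise) is missing or mounted at a different location."""
    return {p: Path(p).is_dir() for p in root_folders}
```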


Features

Archive catalog: Indexes everything on your configured drives. Shows which drive each item lives on, whether that drive is online or cold, and pulls metadata from NFO files, Sonarr/Radarr, or TMDB in that order.

Active catalog: Your live Sonarr/Radarr library in one view. Multi-select items and archive them in one shot.

Archive to drive: Copies the full media folder — video files, artwork, NFO metadata, thumbnails — to a chosen local drive. Options to remove from Sonarr/Radarr and delete the source files. Deleting source files requires typing the title to confirm.

Restore to live: Pick a root folder, copy the files, trigger a rescan. For TV shows it checks what Sonarr already has episode by episode and only shows you the seasons that are actually missing.
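The season-gap check could look something like the sketch below, which assumes episode records shaped like Sonarr's /api/v3/episode response (seasonNumber and hasFile fields); the function name and exact logic are illustrative.

```python
def missing_seasons(episodes: list[dict]) -> set[int]:
    """Seasons where at least one episode has no file on disk.

    `episodes` mirrors the shape of Sonarr's episode list: each
    dict carries seasonNumber and hasFile. Only these seasons
    would be offered for a season-level restore.
    """
    missing = set()
    for ep in episodes:
        if not ep.get("hasFile", False):
            missing.add(ep["seasonNumber"])
    return missing
```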

Drive management: Mark drives as cold storage to lock everything on them against restores. Drive status is also detected automatically — if the path disappears from the filesystem the drive shows as offline regardless of its stored status.
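That status logic reduces to a small precedence rule — filesystem reality overrides the stored flag. A minimal sketch (function name and status strings are assumptions, not Catalogarr's internals):

```python
from pathlib import Path


def drive_status(mount_path: str, marked_cold: bool) -> str:
    """Effective drive status: a missing mount point always wins
    over whatever status is stored in the database."""
    if not Path(mount_path).exists():
        return "offline"
    return "cold" if marked_cold else "online"
```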

Metadata enrichment: Providers are tried in waterfall order: local NFO files first, then Sonarr/Radarr, then TMDB. Anime paths go through Shoko before TMDB. API keys are read at runtime so changes to .env take effect without a restart.
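The waterfall pattern itself is just first-hit-wins over an ordered provider list. A sketch under that assumption (provider names here are illustrative, not Catalogarr's actual functions):

```python
def enrich(item, providers):
    """Try metadata providers in order and return the first hit.

    `providers` is an ordered list of callables, e.g.
    [read_nfo, query_arr, query_shoko, query_tmdb], each returning
    a metadata dict or None. Later providers are never called once
    one succeeds, which keeps API usage down.
    """
    for provider in providers:
        meta = provider(item)
        if meta:
            return meta
    return None
```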

Scheduled tasks: Background tasks run via APScheduler and can all be triggered manually from the Tasks page. Covers metadata enrichment, metadata refresh, poster caching, connector sync, drive deduplication, and duplicate media merging.

Merge duplicate media: Fixes episodes that got indexed as separate shows due to filenames the parser misread (common with episode numbers above E99). Runs daily and on demand.
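One way such phantom series can be surfaced is by normalizing titles: a misparsed filename like "Show 1015" creates an entry sitting next to the real "Show". The heuristic below is purely illustrative, not Catalogarr's actual matcher.

```python
import re
from collections import defaultdict


def group_duplicate_shows(titles: list[str]) -> list[list[str]]:
    """Group show entries that differ only by trailing episode-number
    noise (3+ digits), so they can be offered for merging."""
    groups = defaultdict(list)
    for title in titles:
        key = re.sub(r"\s+\d{3,}$", "", title).casefold().strip()
        groups[key].append(title)
    return [g for g in groups.values() if len(g) > 1]
```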


Notes

This is a personal project built for my own homelab. It runs on Ubuntu alongside Jellyfin, Sonarr, Radarr, and Shoko and handles everything I need for managing a library spread across multiple drives at different stages of the archive cycle.

The backend is Python/Flask with SQLite and Redis. The frontend is plain HTML/CSS/JS with no framework.

Disclosure: The frontend is roughly 90% AI-written — HTML and CSS are not where I spend my time. The backend is about 85% written by me with AI filling in the gaps. If that bothers you, fair enough. If you just want something that works, feel free to use it.

Bug reports and feature requests go in the issue tracker.


License

GNU General Public License v3.0. See the LICENSE file for details.