What My Project Does
PgQueuer turns any PostgreSQL database into a durable background-job queue and cron scheduler. It relies on LISTEN/NOTIFY for real-time worker wake-ups and FOR UPDATE SKIP LOCKED so many workers can claim jobs concurrently without blocking each other, which means you don’t need Redis, RabbitMQ, Celery, or any extra broker.
Everything—jobs, schedules, retries, statistics—lives as rows you can query.
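Those two Postgres features are the whole trick. Here is a minimal, hedged sketch of the underlying pattern using plain asyncpg; the table, columns, and notification channel (`jobs`, `status`, `job_events`) are illustrative names for this example, not PgQueuer’s actual schema or internals.

```python
# Sketch of the LISTEN/NOTIFY + FOR UPDATE SKIP LOCKED pattern described
# above. Illustrative schema only -- not PgQueuer's internals.
import asyncio

import asyncpg


async def claim_one_job(conn: asyncpg.Connection):
    # SKIP LOCKED lets many workers poll the same table without blocking
    # each other or claiming the same row twice.
    return await conn.fetchrow(
        """
        UPDATE jobs
        SET status = 'picked'
        WHERE id = (
            SELECT id FROM jobs
            WHERE status = 'queued'
            ORDER BY id
            FOR UPDATE SKIP LOCKED
            LIMIT 1
        )
        RETURNING id, payload
        """
    )


async def worker(dsn: str) -> None:
    conn = await asyncpg.connect(dsn)
    wakeup = asyncio.Event()

    # LISTEN/NOTIFY gives near-instant wake-ups instead of tight polling.
    await conn.add_listener("job_events", lambda *_: wakeup.set())

    while True:
        job = await claim_one_job(conn)
        if job is not None:
            print("processing", job["id"], job["payload"])
            continue
        wakeup.clear()
        # A real worker would also use a timeout here as a safety net.
        await wakeup.wait()
```

The combination matters: SKIP LOCKED makes concurrent claiming safe, and NOTIFY keeps latency low without hammering the database with polling queries.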
Highlights since my last post
Cron-style recurring jobs (* * * * *) with automatic next_run
Heartbeat API to re-queue tasks that die mid-run (idea sketched below)
Async and sync drivers (asyncpg & psycopg v3), plus a one-command CLI for install / upgrade / live dashboard
Pluggable executors with back-off helpers
Zero-downtime schema migrations (pgqueuer upgrade)
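To make the heartbeat item concrete: workers periodically touch a heartbeat timestamp while they run, and a sweep re-queues anything that has gone silent. The `jobs` table and its columns below are hypothetical names used for illustration; PgQueuer’s actual heartbeat API and schema differ, so check the docs.

```python
# Illustrative only: assumes a hypothetical `jobs` table with `status` and
# `heartbeat` columns. Not PgQueuer's real schema or heartbeat API.
import asyncpg


async def requeue_stale_jobs(conn: asyncpg.Connection) -> int:
    """Re-queue jobs whose worker has not heartbeated for 30 seconds."""
    tag = await conn.execute(
        """
        UPDATE jobs
        SET status = 'queued', heartbeat = NULL
        WHERE status = 'picked'
          AND heartbeat < now() - interval '30 seconds'
        """
    )
    # asyncpg returns a command tag such as 'UPDATE 3'.
    return int(tag.split()[-1])
```

Anything that crashes mid-run simply stops updating its heartbeat and gets picked up again on the next sweep.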
Source & docs → https://github.com/janbjorge/pgqueuer
Target Audience
Teams already running PostgreSQL who want one fewer moving part in production
Python devs who love async/await but need sync compatibility
Apps on Heroku/Fly.io/Railway or serverless platforms where running Redis isn’t practical
How PgQueuer Stands Out
Single-service architecture – everything runs inside the DB you already use
SQL-backed durability – jobs are ACID rows you can inspect and JOIN (see the query sketch after this list)
Extensible – swap in your own executor, customise retries, stream metrics from the stats table
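Because the queue is just tables, “inspect and JOIN” means ordinary SQL. Here is a hedged example of the kind of ad-hoc query this enables, again with illustrative table and column names rather than PgQueuer’s actual schema:

```python
# Ad-hoc queue inspection with plain SQL. Table/column names are illustrative.
import asyncio

import asyncpg


async def queue_depth(dsn: str) -> None:
    conn = await asyncpg.connect(dsn)
    try:
        rows = await conn.fetch(
            """
            SELECT entrypoint, status, count(*) AS n
            FROM jobs
            GROUP BY entrypoint, status
            ORDER BY entrypoint, status
            """
        )
        for row in rows:
            print(row["entrypoint"], row["status"], row["n"])
    finally:
        await conn.close()


if __name__ == "__main__":
    # Replace the DSN with your own database.
    asyncio.run(queue_depth("postgresql://localhost/postgres"))
```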
I’d Love Your Feedback 🙏
I’m drafting the 1.0 roadmap and would love to know which of these (or something else!) would make you adopt a Postgres-only queue:
Dead-letter queues / automatically park repeatedly failing jobs
Edit-in-flight: change priority or delay of queued jobs
Web dashboard (FastAPI/React) for ops
Auto-managed migrations
Helm chart / Docker images for quick deployments
Have another idea or pain-point? Drop a comment here or open an issue/PR on GitHub.