I've seen small teams fall into the same trap more times than I can count. Project's just getting started, and already the technical decisions are chasing the new — Serverless Functions here, a managed database there, third-party auth, another managed platform for the queue. Each piece looks reasonable in isolation. Together, they become a nightmare.
By the time you're actually debugging something, you've got four or five consoles open at once. Every platform has its own log format, its own permission system, environment variables scattered across half a dozen dashboards. Traffic picks up and the billing starts expanding from every direction simultaneously — and good luck figuring out where the money is actually going.
Consolidation Is the Right Move for Small Teams
When resources are limited, the more concentrated your system, the lower the cognitive load — and the shorter the path to figuring out what broke when something inevitably does.
Here's a setup I've found genuinely practical:
Backend on Hono — lightweight, fast, native TypeScript support, runs in Docker, not tied to any specific platform. Database on PostgreSQL, self-managed, data stays in your hands. Reverse proxy with Caddy — automatic HTTPS, configuration that's dramatically simpler than Nginx, approachable even if you're not a sysadmin. The whole stack deployed to a Hetzner VPS: a 2-core 4GB instance runs under €10/month and comfortably handles most early-stage small-to-medium workloads.
Backups go to Cloudflare R2 — pay-as-you-go, storage costs are minimal, no separate backup service to maintain.
For the frontend, TanStack Start works well, with edge logic on Cloudflare Workers for acceleration and lightweight request handling. Core compute stays on infrastructure you control; the edge layer does what it's actually good at.
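What that edge layer does can be sketched as a small Worker in front of the VPS; the origin URL and the edge-handled path here are placeholders, and the pass-through is a minimal sketch rather than a full proxy:

```typescript
// Hypothetical Worker: answer trivial requests at the edge, forward the rest
// to the origin (the Caddy + Hono stack on the VPS).
const ORIGIN = "https://yourdomain.com"; // placeholder origin

const worker = {
  async fetch(req: Request): Promise<Response> {
    const url = new URL(req.url);
    // Served entirely at the edge; never touches the VPS.
    if (url.pathname === "/healthz") {
      return new Response("ok", { status: 200 });
    }
    // Everything else passes through to the origin, path and query intact.
    return fetch(ORIGIN + url.pathname + url.search, {
      method: req.method,
      headers: req.headers,
      body: req.body,
    });
  },
};

export default worker;
```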
Total monthly cost for the whole thing: somewhere in the low tens of dollars. Structure is clear, and every layer has an obvious place to look when something goes wrong.
The Problem With Serverless Isn't the Technology — It's the Boundaries
Serverless isn't inherently bad. It genuinely fits certain scenarios: lightweight event-triggered tasks, workloads with extreme traffic variability, pure stateless function computation.
What it isn't suited for is being used as a default architecture for a product that's actively being built and iterated on. The reasons are specific:
Execution time limits mean complex tasks need to be split up or routed around. Cold starts affect user experience in ways you don't have much control over. Connection pooling is a real pain point — database connections in Serverless environments need explicit management, or you'll exhaust the connection limit faster than expected. And the most insidious problem: when your architecture starts bending itself to accommodate platform constraints, technical debt is already quietly accumulating.
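The usual workaround for the connection problem is to hold one client in module scope so warm invocations reuse it instead of opening a fresh connection per call. A runtime-agnostic sketch, where `connectDb` is a stand-in for your driver's connect function:

```typescript
type Db = { query: (sql: string) => Promise<unknown[]> };

let db: Db | null = null;
let opened = 0; // counts real connection attempts, purely for illustration

// Stand-in for a database driver's connect call (hypothetical).
async function connectDb(): Promise<Db> {
  opened++;
  return { query: async () => [] };
}

// Lazily create the client once; warm invocations hit the cached instance.
async function getDb(): Promise<Db> {
  if (!db) db = await connectDb();
  return db;
}

// A serverless-style handler: every invocation asks for the db, but only
// the cold start actually opens a connection.
export async function handler(): Promise<string> {
  const conn = await getDb();
  await conn.query("select 1");
  return "done";
}
```

Note that concurrent cold invocations can still race past the null check; real drivers' pool classes handle that case, and this sketch deliberately keeps it minimal.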
Then there's the cost reality. Early on, when traffic is low, Serverless looks cheaper than a VPS. But invocations, storage, bandwidth, database connections — each one billed separately. Once a project hits a growth phase, the bill tends to climb faster than you anticipated, and it's genuinely hard to model in advance.
A Concrete Deployment Reference
If you want to build along these lines, here's roughly how it goes:
Provision a Hetzner CX22 (2 cores, 4GB RAM, ~€4.5/month), install Ubuntu 22.04.
Install Docker and Docker Compose:
curl -fsSL https://get.docker.com | sh
apt install docker-compose-plugin -y
A basic docker-compose.yml wiring together the Hono backend, PostgreSQL, and Caddy:
services:
  app:
    image: your-hono-app
    restart: always
    environment:
      DATABASE_URL: postgresql://user:password@db:5432/mydb
    depends_on:
      - db
  db:
    image: postgres:16
    restart: always
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: mydb
    volumes:
      - pg_data:/var/lib/postgresql/data
  caddy:
    image: caddy:latest
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
    depends_on:
      - app

volumes:
  pg_data:
  caddy_data:
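The `your-hono-app` image is whatever you build from your backend. As one sketch, a multi-stage Dockerfile for a Node-based Hono project; the build script and `dist/index.js` entry point are assumptions about your project layout:

```dockerfile
# Build stage: install dependencies and compile TypeScript.
FROM node:20-slim AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build   # assumes a build script emitting dist/

# Runtime stage: ship only what the server needs.
FROM node:20-slim
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
COPY package.json .
EXPOSE 3000
CMD ["node", "dist/index.js"]
```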
The Caddyfile for reverse proxy and automatic HTTPS is just a few lines:
yourdomain.com {
    reverse_proxy app:3000
}
Caddy handles Let's Encrypt certificate provisioning and renewal automatically. No manual configuration needed.
For backups, connect rclone to Cloudflare R2 and push on a cron schedule. Since Postgres runs inside a container, run pg_dump through docker compose rather than on the host:
0 3 * * * docker compose --project-directory /path/to/app exec -T db pg_dump -U user mydb | gzip | rclone rcat r2:backup/db-$(date +%Y%m%d).sql.gz
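The `r2:` remote referenced in that cron line has to exist first. A sketch of the rclone config (typically `~/.config/rclone/rclone.conf`), with placeholder credentials taken from an R2 API token:

```ini
[r2]
type = s3
provider = Cloudflare
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
endpoint = https://YOUR_ACCOUNT_ID.r2.cloudflarestorage.com
```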
If you have a working Docker foundation, the whole thing comes together in a couple of hours from scratch.
What Technical Decisions Are Actually About
Architecture choices aren't about showing off. They're about serving the business.
For small teams focused on sustainable revenue and continuous product iteration, a stack that's controllable, consolidated, and lightly dependent on external services tends to outlast whatever's currently considered most modern. When something breaks, you know where to look. When you need to scale, you know which layer to upgrade. And cost growth tracks business growth linearly — not exponentially.
Fewer services means a clearer system. The projects that keep running long-term usually aren't the ones with the most cutting-edge stack. They're the ones that had the discipline to keep it simple.