My Raspberry Pi IT Rebirth: Migrating, Self‑Hosting, and Backups
A personal win: I proved I can move production projects between Raspberry Pis without losing data. Notes on what I built, why persistent data matters, backup ideas, and how to document the process for long‑term reliability.

I have a Pi problem, and I mean that in the best possible way. Over the last several months I've rebuilt a personal infrastructure on Raspberry Pis: a Pi Zero 2W access point running Pi-hole, a self-hosted webcam app, Pi 5s acting as Docker hosts behind Cloudflare tunnels, a Recalbox arcade cabinet, and a new Pi with a 1TB NVMe SSD earmarked for a VPN server and future projects. Today I hit a milestone: I redeployed production-level projects to another machine and preserved the database and application data from the original host. That felt like a real win.
Below I organize what I built, why the migration/persistence win matters, recommended documentation and backup practices, and some thoughts on when these kinds of migrations happen in production environments. I also include notes you can use on your resume and a short checklist for next steps.
The projects I built (short list)
- Pi Zero 2W as an access point + Pi-hole for ad blocking and a VPN gateway for a TV streaming app.
- Pi Zero 2W + Pi Camera Module 3 running a small self-hosted webcam app reachable remotely.
- Pi 5 to manage Docker containers and Cloudflare tunnels for multiple self-hosted projects and two live sites.
- Recalbox on a Pi 5 for an arcade cabinet.
- New Pi with 1TB NVMe SSD intended for a VPN server and more projects.
Each device has a role: network-level filtering, remote camera access, container hosting, entertainment, and a beefy server node for heavier workloads.
The big win: migrating/persisting data
Today I moved production projects between machines and preserved the data (database contents, app state, or Docker volumes). That means:
- I can rebuild a service on different hardware and not lose critical user or configuration data.
- I now have a path for continuity if a device or site goes down — failover is achievable.
- It validates that my backup/restore and migration processes can work end-to-end.
This is the foundational milestone for any resilient multi-site setup.
Why persistent data and migrations matter
- Stateless apps are easy to redeploy, but most real services (databases, user uploads, container volumes) are stateful. Losing data is unacceptable for production.
- Reliable backups + tested restores = confidence. Without testing restores, backups are just files on disk.
- Mobility — being able to move workloads across different Pis and networks — unlocks multi-site redundancy, maintenance flexibility, and safer upgrades.
Documentation you should create now
Documentation will save you hours later. Key docs to write and keep versioned (Git, a wiki, or markdown files):
- Architecture diagram(s)
- Network topology for each site (WAN, LAN, APs, Pi roles, Cloudflare tunnels).
- Service diagram showing containers, DBs, volumes, and ingress (reverse proxies, tunnels).
- Inventory
- Hostnames, IPs, Pi model, storage type, SSH keys, and physical location.
- Configuration backups
- Exact steps and commands to recreate each service (Docker Compose files, env files, systemd units).
- Backup & restore runbook
- How to create backups (db dumps, volume backups), where they are stored, and step-by-step restore instructions.
- Deployment playbook
- Fresh-machine bootstrap (OS, partitions, required packages), and automated deploy steps.
- Secrets management plan
- Where secrets live (avoid plaintext in repo), recommended tools (environment variables, Vault, SOPS).
- Testing and verification checklist
- Post-restore verification steps (smoke tests, DB integrity checks, app health endpoints).
- Maintenance windows & change logs
- Dates, versions, and results for major changes and restores.
Store these docs with the same gravity as code. Keep them in a repo and tag versions for major changes.
Backup strategies & practical tooling
Choose tooling to match your goals (space, restore speed, encryption, offsite):
- Files and volumes
- rsync for simple file-level syncs to another device.
- Borg or Restic for deduplicated, encrypted snapshots.
- rclone to push backups to cloud providers (S3, Backblaze B2, Wasabi).
- Databases
- Periodic logical dumps (mysqldump, pg_dump) + WAL/transaction log shipping for point-in-time recovery.
- For small DBs, regular export + automated transfer is often fine.
- Docker volumes
- Use docker/compose commands or volume plugin tools to snapshot volumes.
- Copy the volume directory, or archive it with a throwaway container:

```
# "myvolume" is a placeholder for your volume's name
docker run --rm -v myvolume:/volume:ro -v "$(pwd)":/backup alpine \
  tar czf /backup/volume.tar.gz -C /volume .
```
- Block device snapshots
- If you use LVM (or a filesystem like btrfs/ZFS), consider snapshots for faster, consistent backups.
- Off-site & redundancy
- Always have at least one copy off-site (another Pi at a different location, or cloud storage).
- Automation & retention
- Automate backup creation and pruning (define a daily/weekly/monthly retention policy).
- Monitor backup success/failure and alert on failures.
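To make the tooling above concrete, here is a minimal sketch of a date-stamped file backup with crude retention. Temp directories stand in for real paths so the sketch is self-contained; in practice you'd point it at your data directory and likely reach for Restic or Borg instead of plain tar for deduplication and encryption:

```shell
#!/bin/sh
# Minimal backup sketch: date-stamped archive plus a crude retention policy.
# SRC/DEST are temp-dir placeholders for your real data and backup paths.
set -eu

SRC=$(mktemp -d)    # stands in for /srv/app-data
DEST=$(mktemp -d)   # stands in for the backup target
echo "demo data" > "$SRC/data.txt"

# Create a compressed, date-stamped archive of the source directory.
STAMP=$(date +%F)
tar czf "$DEST/app-data-$STAMP.tar.gz" -C "$SRC" .

# Retention: keep only the 7 newest archives, delete the rest.
ls -1t "$DEST"/app-data-*.tar.gz | tail -n +8 | xargs -r rm --
```

Swapping the tar line for `restic backup` (and the retention line for `restic forget --prune`) gives you encrypted, deduplicated snapshots with the same nightly shape.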
How to test a restore (simple playbook)
- Provision a fresh Pi (same architecture ideally).
- Transfer the backup artifact (encrypted archive, DB dump, Docker volumes).
- Run the documented restore steps:
- Recreate filesystem layout.
- Restore volumes and DB dumps.
- Start containers/services.
- Run verification:
- Health endpoints, sample queries, and sanity-check the dataset.
- Time the whole process and iterate on what took too long or failed.
Run a restore drill every few months or after any major change.
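The playbook above can be sketched end-to-end for a plain file backup. Paths here are placeholders and the "fresh Pi" is just a clean directory; a real drill would also restore DB dumps and start containers before verifying:

```shell
#!/bin/sh
# Restore-drill sketch: unpack a backup into a fresh location, then verify
# the dataset before declaring success. Temp dirs stand in for real paths.
set -eu

# Stand-in for the transferred backup artifact.
WORK=$(mktemp -d)
mkdir "$WORK/src"
echo "critical row" > "$WORK/src/data.txt"
tar czf "$WORK/volume.tar.gz" -C "$WORK/src" .

# 1. Recreate the filesystem layout on the "fresh" machine.
RESTORE="$WORK/restore"
mkdir -p "$RESTORE"

# 2. Restore the backup.
tar xzf "$WORK/volume.tar.gz" -C "$RESTORE"

# 3. Verify before starting services: compare against the known-good copy.
cmp -s "$WORK/src/data.txt" "$RESTORE/data.txt" && echo "restore verified"
```

Timing this script (step 3 expanded with real health checks and sample queries) gives you the baseline number to improve on each drill.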
Real-world parallels: when this happens in production
- Planned migration or hardware upgrade (retire old hardware).
- Data center migrations and provider changes.
- Incident-driven failover (site power outage, physical damage).
- Blue/green or canary deployments where you need to roll back to a known good state.
- Compliance audits where you must demonstrate recoverability.
The difference in the real world is scale, SLAs, and often a stricter emphasis on automation, monitoring, and runbooks. Your home-lab is the perfect place to learn those practices safely.
Can you put this on your resume?
Yes. Framing tips:
- Focus on outcomes and responsibilities: "Designed and maintained a multi-node Raspberry Pi home-lab with self-hosted services, automated backups, and disaster recovery testing."
- Call out technologies and skills: Raspberry Pi hardware, Linux, Docker, backup tooling (Restic/Borg/rsync), Cloudflare Tunnels, Pi-hole, VPNs, database migration and recovery, monitoring.
- Quantify where possible: "Maintained 5+ devices, automated backup/restore drills, reduced failover time by X% (if you measure it)."
- Emphasize soft skills: problem solving, systems thinking, documentation, and reliability engineering.
Employers value practical, demonstrable projects that show ownership and technical depth.
Next steps and a simple checklist
- Commit architecture diagrams and runbooks to a Git repo.
- Automate backups (cron or systemd timers) and push one copy off-site.
- Script a reproducible bootstrap for a fresh Pi (ideally idempotent).
- Schedule and run a restore drill in a different physical location.
- Start instrumenting monitoring and alerting (Prometheus + Grafana or simpler uptime checks).
- Consider DNS failover or Cloudflare load-balancing for multi-site failover if you host public sites.
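For the automation step, a single crontab entry (via `crontab -e`) is enough to start; the script path below is a placeholder for whatever backup script you end up writing:

```
# m  h  dom mon dow  command — run the backup nightly at 02:30 and log output
30 2  *   *   *     /usr/local/bin/pi-backup.sh >> /var/log/pi-backup.log 2>&1
```

A systemd timer does the same job with better logging via `journalctl` and a `Persistent=true` option that runs missed jobs after a power loss, which is worth having on a Pi.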
Final thoughts
Celebrate wins — redeploying your services while preserving data is the turning point where "play" becomes "infrastructure." You'll discover lots of edge cases as you scale across sites (network topology, latency, replication, and security), but each drill and document update makes the system stronger. You've already crossed the biggest threshold: reproducibility.
Attribution
Attribution breakdown:
- Papa-Sean — 90%: original ideas, hands-on engineering, testing, and the real-world project work and decisions described in this post.
- Assistant (GitHub Copilot / ChatGPT) — 10%: organized your notes, structured the blog post, suggested documentation/backup strategies, wrote and formatted the markdown content above.
What I contributed: consolidated your narrative into a coherent post, added actionable documentation and backup recommendations, and supplied the resume-friendly framing.