Hardware · Assisted by GitHub Copilot / ChatGPT

My Raspberry Pi IT Rebirth: Migrating, Self‑Hosting, and Backups

A personal win: I proved I can move production projects between Raspberry Pis without losing data. Notes on what I built, why persistent data matters, backup ideas, and how to document the process for long‑term reliability.

November 13, 2025
raspberry-pi, home-lab, self-hosting, pihole, docker, vpn, backups, disaster-recovery, documentation

I have a Pi problem — and I mean that in the best possible way. Over the last several months I've rebuilt my personal infrastructure on Raspberry Pis: a Pi Zero 2W access point running Pi-hole and a VPN gateway, a self-hosted webcam app on a Pi Camera Module 3, a Pi 5 acting as a Docker host behind Cloudflare Tunnels, a Recalbox arcade cabinet, and a new Pi with a 1 TB NVMe SSD earmarked for a VPN server and future projects. Today I hit a milestone: I redeployed production-level projects to another machine while preserving the database and app data from the original host. That felt like a real win.

Below I organize what I built, why the migration/persistence win matters, recommended documentation and backup practices, and some thoughts on when these kinds of migrations happen in production environments. I've also included resume-friendly framing and a short checklist of next steps.


The projects I built (short list)

  • Pi Zero 2W as an access point + Pi-hole for ad blocking and a VPN gateway for a TV streaming app.
  • Pi Zero 2W + Pi Camera Module 3 running a small self-hosted webcam app reachable remotely.
  • Pi 5 to manage Docker containers and Cloudflare tunnels for multiple self-hosted projects and two live sites.
  • Recalbox on a Pi 5 for an arcade cabinet.
  • New Pi with 1TB NVMe SSD intended for a VPN server and more projects.

Each device has a role: network-level filtering, remote camera access, container hosting, entertainment, and a beefy server node for heavier workloads.


The big win: migrating/persisting data

Today I moved production projects between machines and preserved the data (database contents, app state, or Docker volumes). That means:

  • I can rebuild a service on different hardware and not lose critical user or configuration data.
  • I now have a path for continuity if a device or site goes down — failover is achievable.
  • It validates that my backup/restore and migration processes can work end-to-end.

This is the foundational milestone for any resilient multi-site setup.


Why persistent data and migrations matter

  • Stateless apps are easy to redeploy, but most real services (databases, user uploads, container volumes) are stateful. Losing data is unacceptable for production.
  • Reliable backups + tested restores = confidence. Without testing restores, backups are just files on disk.
  • Mobility — being able to move workloads across different Pis and networks — unlocks multi-site redundancy, maintenance flexibility, and safer upgrades.

Documentation you should create now

Documentation will save you hours later. Key docs to write and keep versioned (Git, a wiki, or markdown files):

  1. Architecture diagram(s)
    • Network topology for each site (WAN, LAN, APs, Pi roles, Cloudflare tunnels).
    • Service diagram showing containers, DBs, volumes, and ingress (reverse proxies, tunnels).
  2. Inventory
    • Hostnames, IPs, Pi model, storage type, SSH keys, and physical location.
  3. Configuration backups
    • Exact steps and commands to recreate each service (Docker Compose files, env files, systemd units).
  4. Backup & restore runbook
    • How to create backups (db dumps, volume backups), where they are stored, and step-by-step restore instructions.
  5. Deployment playbook
    • Fresh-machine bootstrap (OS, partitions, required packages), and automated deploy steps.
  6. Secrets management plan
    • Where secrets live (avoid plaintext in repo), recommended tools (environment variables, Vault, SOPS).
  7. Testing and verification checklist
    • Post-restore verification steps (smoke tests, DB integrity checks, app health endpoints).
  8. Maintenance windows & change logs
    • Dates, versions, and results for major changes and restores.

Store these docs with the same gravity as code. Keep them in a repo and tag versions for major changes.


Backup strategies & practical tooling

Choose tooling to match your goals (space, restore speed, encryption, offsite):

  • Files and volumes
    • rsync for simple block/file syncs to another device.
    • Borg or Restic for deduplicated, encrypted snapshots.
    • rclone to push backups to cloud providers (S3, Backblaze B2, Wasabi).
  • Databases
    • Periodic logical dumps (mysqldump, pg_dump) + WAL/transaction log shipping for point-in-time recovery.
    • For small DBs, regular export + automated transfer is often fine.
  • Docker volumes
    • Use docker/compose commands or volume plugin tools to snapshot volumes.
    • Copy the volume directory, or archive a named volume via a throwaway container: `docker run --rm -v myvolume:/volume -v "$(pwd)":/backup alpine tar czf /backup/myvolume.tar.gz /volume` (replace `myvolume` with your volume name).
  • Block device snapshots
    • If using NVMe or LVM, consider snapshots for faster backups.
  • Off-site & redundancy
    • Always have at least one copy off-site (another Pi at a different location, or cloud storage).
  • Automation & retention
    • Automate backup creation and pruning (keep weekly/monthly retention policy).
    • Monitor backup success/failure and alert on failures.
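
To make the volume-archive and retention ideas concrete, here's a minimal sketch. It assumes the data is reachable as a plain directory (a bind mount, or a path you already extracted with the docker/alpine trick above); the `backup_dir` and `prune_backups` helper names and paths are illustrative, not a standard tool:

```shell
#!/bin/sh
# Sketch: archive a data directory into a timestamped tarball, plus a
# retention helper. Paths and names are placeholders for illustration.
set -eu

backup_dir() {
  src="$1"   # directory to archive, e.g. /srv/app-data
  dest="$2"  # where tarballs accumulate, e.g. /srv/backups
  name="$(basename "$src")-$(date +%Y%m%d-%H%M%S).tar.gz"
  # -C keeps paths inside the archive relative, so a restore can land anywhere
  tar czf "$dest/$name" -C "$(dirname "$src")" "$(basename "$src")"
  printf '%s\n' "$dest/$name"
}

# Retention: delete archives older than N days (pair with cron/systemd).
prune_backups() {
  find "$1" -name '*.tar.gz' -type f -mtime +"$2" -delete
}
```

On a real host you'd point `backup_dir` at each volume path on a schedule, then hand the resulting archive to Restic/Borg/rclone for encryption and the off-site copy.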

How to test a restore (simple playbook)

  1. Provision a fresh Pi (ideally the same architecture).
  2. Transfer the backup artifact (encrypted archive, DB dump, Docker volumes).
  3. Run the documented restore steps:
    • Recreate filesystem layout.
    • Restore volumes and DB dumps.
    • Start containers/services.
  4. Run verification:
    • Health endpoints, sample queries, and sanity-check the dataset.
  5. Time the whole process and iterate on what took too long or failed.
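
Steps 2–4 can be sketched as a tiny drill script. The checksum manifest is my own assumption — any verification that compares restored data against a known-good fingerprint works; `restore_archive` and `verify_restore` are illustrative names:

```shell
#!/bin/sh
# Sketch of a restore drill: unpack a backup and verify its contents
# against a sha256 manifest created at backup time. The manifest
# convention and function names are illustrative.
set -eu

restore_archive() {
  archive="$1"; target="$2"
  mkdir -p "$target"
  tar xzf "$archive" -C "$target"
}

# Expects a manifest.sha256 inside the restored directory, written at
# backup time with something like: sha256sum db.sqlite > manifest.sha256
verify_restore() {
  ( cd "$1" && sha256sum --check --quiet manifest.sha256 )
}
```

After the integrity check passes, finish the drill with app-level smoke tests (health endpoints, a few sample queries) and note how long the whole run took.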

Run a restore drill every few months or after any major change.


Real-world parallels: when this happens in production

  • Planned migration or hardware upgrade (retire old hardware).
  • Data center migrations and provider changes.
  • Incident-driven failover (site power outage, physical damage).
  • Blue/green or canary deployments where you need to roll back to a known good state.
  • Compliance audits where you must demonstrate recoverability.

The difference in the real world is scale, SLAs, and often a stricter emphasis on automation, monitoring, and runbooks. Your home-lab is the perfect place to learn those practices safely.


Can you put this on your resume?

Yes. Framing tips:

  • Focus on outcomes and responsibilities: "Designed and maintained a multi-node Raspberry Pi home-lab with self-hosted services, automated backups, and disaster recovery testing."
  • Call out technologies and skills: Raspberry Pi hardware, Linux, Docker, backup tooling (Restic/Borg/rsync), Cloudflare Tunnels, Pi-hole, VPNs, database migration and recovery, monitoring.
  • Quantify where possible: "Maintained 5+ devices, automated backup/restore drills, reduced failover time by X% (if you measure it)."
  • Emphasize soft skills: problem solving, systems thinking, documentation, and reliability engineering.

Employers value practical, demonstrable projects that show ownership and technical depth.


Next steps and a simple checklist

  • Commit architecture diagrams and runbooks to a Git repo.
  • Automate backups (cron or systemd timers) and push one copy off-site.
  • Script a reproducible bootstrap for a fresh Pi (ideally idempotent).
  • Schedule and run a restore drill in a different physical location.
  • Start instrumenting monitoring and alerting (Prometheus + Grafana or simpler uptime checks).
  • Consider DNS failover or Cloudflare load-balancing for multi-site failover if you host public sites.
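
For the "automate backups" item, a systemd timer pair is a clean alternative to cron on Raspberry Pi OS, and `Persistent=true` catches runs missed while a Pi was powered off. Unit names and the script path below are placeholders:

```ini
# /etc/systemd/system/backup.service  (names and paths are placeholders)
[Unit]
Description=Nightly home-lab backup

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh

# /etc/systemd/system/backup.timer
[Unit]
Description=Run the nightly backup service

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with `systemctl enable --now backup.timer`, and check `systemctl list-timers` to confirm the next scheduled run.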

Final thoughts

Celebrate wins — redeploying your services while preserving data is the turning point where "play" becomes "infrastructure." You'll discover lots of edge cases as you scale across sites (network topology, latency, replication, and security), but each drill and document update makes the system stronger. You've already crossed the biggest threshold: reproducibility.


Attribution

Attribution breakdown:

  • Papa-Sean — 90%: original ideas, hands-on engineering, testing, and the real-world project work and decisions described in this post.
  • Assistant (GitHub Copilot / ChatGPT) — 10%: organized the notes, structured the blog post, suggested documentation/backup strategies, and wrote and formatted the markdown content above.

What the assistant contributed: consolidated the narrative into a coherent post, added actionable documentation and backup recommendations, and supplied the resume-friendly framing.

AI Transparency
Model: GitHub Copilot / ChatGPT
Prompt Used:
I have a pi problem... No, its not my diet instead I am addicted to raspberry pi's! I have created an access point on a pi zero 2w to allow pihole to block adds on a network and to gain access to a vpn for a streamin app on a tv. I created a webcam and small selfhost app to access that camera from anywhere using a pi camera module 3 and another pi zero 2w. I setup a pi 5 to manage docker containers and cloudflare tunnels to self host personal projects and 2 sites i've deployed! I built out an arcade cabinet and added a pi 5 running recalbox to play arcade and console games I grew up with. Most recently, I setup a new pi with 1tb nvme ssd which i plan to setup a self hosted vpn server along with more projects but today was the first win of my whole IT rebirth. Eventually I would like to host sites on multiple pi's across multiple networks and manage traffic between them. If my power/internet does down at one site, there will be another that can keep me live. I am working through the known and unknown unknowns but redeploying on another machine with the db data from the first is a HUGE win for me. Migrating/persisting data is pretty important i'd say and gameplanning how I can safely create/store backups is where I am going to spend my time ruminating in the days to come. For now the win is, I know I can move production level projects onto/off of a platform and not lose data, I'm curious what issues I will uncover as I go but for now I am happy! Any suggestions on how to better document the process? When do things like this happen in the real world? Can I put this on my resume? Haha input is welcome but these are questions I will continue to question as I go! Can you help me organize the thoughts above into a blog post? I need iputs for title, exercpt, tags, blog content in markdown format please? Be sure to include attribution breakdown in the footnote to give yourself credit for your part in the blog content

Comments (1)

Champagne James · 3 months ago

WHAT UP DOE?! This is amazing work, Papa! Keep the momentum going, brother!