For almost a decade, a little 2‑bay Synology NAS quietly handled most of my storage needs. I’d even treated it to a pair of new 6 TB drives not long ago. But as my self‑hosting ambitions grew—and as that NAS turned into a bottleneck—it became clear that it was time for a proper upgrade. I didn’t just want “a bit more storage.” I wanted a home server that could act as a NAS, run virtual machines and containers, and eventually let me replace a bunch of cloud subscriptions with self‑hosted alternatives like Nextcloud, Jellyfin, Pi‑hole, and more.
In this post, I’ll show you the path I took to get there. It’s not a universal recipe or best‑practices guide; it’s simply how I approached replacing my aging Synology with a new home lab server. Along the way I ran into a surprise spike in RAM prices, experimented with a mini‑PC plus an external USB drive enclosure, tested several operating systems, and rebuilt the whole thing more than once before finally landing on a Minisforum N5 running TrueNAS.
My starting point was a very basic setup: the old 2‑bay Synology plus an aging Mac mini doing double duty as a file and media server. The combination worked, but it felt increasingly fragile and underpowered. I wanted to consolidate into a single box that could handle bulk storage, run VMs and container apps, and give me room to grow into more “home lab” style projects over the next 5–10 years.
On paper, that translated into a few concrete requirements. For compute, I wanted at least an 8‑core CPU and 32–64 GB of RAM, ideally DDR5, so I could comfortably run multiple VMs and containers at once. For storage, I wanted at least four 3.5" HDD bays for inexpensive bulk storage, plus a couple of NVMe slots for fast SSDs for VMs and “hot” data. Networking‑wise, I didn’t want to be stuck on 1 GbE anymore, so at least one 2.5 GbE port was on the list, and while I didn’t plan to add a GPU immediately, I liked the idea of leaving room for one later.
To keep costs somewhat under control, I planned to reuse the two 6 TB drives from the Synology, add two more matching drives, and aim for roughly 16 TB of usable space in a RAIDZ1‑style layout. On top of that, I wanted at least one 4 TB NVMe SSD, with the option to add a second later for redundancy. My initial budget target for the whole project was around $1,500 USD.
Once I knew what I wanted, I looked at the main hardware paths other people take: buying a newer turnkey NAS from Synology or QNAP, picking up a used workstation or server and stuffing it with drives, using a decent desktop or mini‑PC with external storage, going for a NAS appliance like a Minisforum N5 or ZimaCube that’s designed for third‑party OSes, or building a fully custom PC from parts. All of them were viable, but I was initially most excited about the NAS appliance route—specifically the Minisforum N5.
That enthusiasm ran straight into RAM pricing. Filling an N5 with 64 GB of DDR5 at current prices would have cost six or seven hundred dollars, as much as or more than the N5 itself. Just the chassis plus memory would have eaten most of my original budget. That’s when I stumbled across a great deal on a mini‑PC: an 8‑core AMD Ryzen system with 64 GB of DDR5 for roughly the same price as the RAM alone. I rationalized it as “paying a lot for RAM, with a free mini‑PC attached,” and decided to pivot.
The new plan was to use that mini‑PC as the server, pair it with a 4‑bay USB drive enclosure, add two more 6 TB HDDs and a 4 TB NVMe SSD, and call it good. The idea was simple: the mini‑PC would handle compute, the external enclosure would handle bulk storage, and I could stay much closer to my original budget while still ticking most of my requirements.
With the mini‑PC + USB DAS combo assembled, the next big question was the operating system. I wanted something that could do serious NAS duties and still run containers and VMs without turning into a science project. On my short list were Proxmox, TrueNAS, Unraid, ZimaOS, and HexOS.
Proxmox is an excellent hypervisor for VMs and containers, and many home labbers pair it with something like TrueNAS running as a VM. In my case, I wanted to avoid that extra layer of complexity and didn’t especially love the UI for NAS‑style workloads, so I set Proxmox aside. ZimaOS, by contrast, has a beautiful interface and a nice catalog of container apps; there’s a free tier and a very reasonably priced $29 USD license if you want to support development. However, it still felt too immature for my primary data, and the fact that setting up ZFS required dropping to the terminal was a reminder that it has some growing up to do. HexOS was essentially a non‑starter: still in beta, with a roughly $200 USD price tag that didn’t make sense for me.
That left TrueNAS and Unraid as the serious contenders. Both are widely used, both can handle storage plus apps and VMs, and both have strong communities. My first attempt with TrueNAS on the mini‑PC + USB enclosure setup was rough: I ran into issues creating pools and generally had a frustrating time getting the storage configured the way I wanted. It turned out that the real problem wasn’t TrueNAS at all—it was the USB enclosure. TrueNAS really expects directly attached disks for its pools, and plenty of people strongly discourage using USB for primary ZFS storage. Unraid, on the same hardware, felt much smoother: the storage configuration was straightforward, the UI clicked with me, and the app catalog and plugin ecosystem made it easy to get things running. At that point, I was leaning heavily toward Unraid as the final choice for this build.
Even with Unraid performing well, I couldn’t shake the sense that the architecture itself was flawed. All four HDDs were sitting behind a single USB 3.0 connection. No matter how good the OS was, that cable would always be a potential bottleneck and a single point of failure. The more I thought about it, the more it felt like I’d made a compromise at exactly the wrong layer: the foundation.
Eventually I admitted to myself that I’d optimized around the wrong constraint. Saving money on the chassis and relying on an external USB enclosure for my primary pool might have been fine for a scratch volume or cold backups, but it wasn’t how I wanted my main storage to work. So I backtracked and went back to the original plan: I bought the Minisforum N5 anyway, even though it meant going over budget. The silver lining was that the mini‑PC hadn’t been a complete waste—its 64 GB of DDR5 RAM was compatible with the N5, and I could transplant the SSDs and HDDs as well. I was at Micro Center when they opened, grabbed the N5, brought it home, and rebuilt the server yet again.
On the N5, with proper internal SATA and NVMe instead of a USB enclosure, both TrueNAS and Unraid behaved exactly as you’d expect. Unraid was still very compelling: friendly UI, flexible storage model that doesn’t require matching drive sizes, good plugins, and plenty of features. TrueNAS, however, also clicked in a new way: clean interface, straightforward SMB share setup, integrated backup tools, and solid support for containers and VMs. The deciding factor for me was cost and philosophy. Unraid isn’t expensive, but it is a paid product with recurring update costs for something I’ll depend on for years. TrueNAS is free and open source, and the problems I’d had earlier were clearly caused by the USB enclosure, not the OS. So I rebuilt one last time, this time with TrueNAS on the N5, and that combination finally felt like the “right answer” for this project.
In the final build, I installed four 6 TB HDDs and two 4 TB SSDs in the N5. The hard drives live in a RAIDZ1 ZFS pool, giving me roughly 16 TB of usable space with one drive’s worth of parity. The two 4 TB SSDs are configured as a mirrored pool for fast, redundant storage for VMs, containers, and any data that benefits from SSD performance. On top of that, I created SMB shares for the main data categories and started copying (not moving) data over from the old systems so I’d still have fallback copies during the migration.
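If you’re wondering where “roughly 16 TB usable” comes from, here’s the back‑of‑the‑envelope math. This is just a sketch: it converts the drives’ marketed decimal terabytes into the binary tebibytes ZFS reports, and it ignores metadata and slop-space overhead, which shave off a bit more in practice.

```python
# Rough usable-capacity estimate for the two pools in the final build.
TB = 1000**4   # bytes in a marketed "terabyte" (decimal)
TiB = 1024**4  # bytes in a tebibyte (what ZFS reports)

def raidz1_usable(n_drives: int, drive_tb: float) -> float:
    """RAIDZ1 reserves one drive's worth of parity; the rest holds data."""
    data_bytes = (n_drives - 1) * drive_tb * TB
    return data_bytes / TiB

def mirror_usable(drive_tb: float) -> float:
    """A two-way mirror stores a single copy's worth of data."""
    return drive_tb * TB / TiB

hdd_pool = raidz1_usable(4, 6)   # four 6 TB HDDs in RAIDZ1
ssd_pool = mirror_usable(4)      # two 4 TB SSDs mirrored

print(f"HDD pool: ~{hdd_pool:.1f} TiB usable")  # ~16.4 TiB
print(f"SSD pool: ~{ssd_pool:.1f} TiB usable")  # ~3.6 TiB
```

So the “16 TB” figure is really about 16.4 TiB before ZFS overhead, which is why the pool shows up a little smaller than four drives’ sticker capacity might suggest.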
For protection, I set up nightly ZFS snapshots and cloud backups using Backblaze B2. TrueNAS made it easy to add my Backblaze credentials and schedule backup tasks, and having both local snapshots and an off‑site copy goes a long way toward making this feel like a “real” home server instead of just a fancy PC with a bunch of disks. Since this system will be running 24/7, I also measured power usage: in my case, the server idles around 35–40 W and climbs into the 70–80 W range under load, which works out to under 10 USD a month in electricity where I live. That’s a cost I’m comfortable with for what the system does.
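The electricity estimate is easy to reproduce. A minimal sketch, assuming an electricity rate of $0.15/kWh (my rate; substitute your own) and an average draw of about 50 W, since the box mostly idles near 40 W with occasional 70–80 W bursts:

```python
def monthly_cost(avg_watts: float, usd_per_kwh: float, hours: float = 730) -> float:
    """Cost of running a device 24/7 for an average month (~730 hours)."""
    kwh = avg_watts / 1000 * hours
    return kwh * usd_per_kwh

# ~50 W average draw at an assumed $0.15/kWh rate
cost = monthly_cost(avg_watts=50, usd_per_kwh=0.15)
print(f"~${cost:.2f}/month")  # ~$5.48/month
```

Even if you double the assumed rate, the bill stays comfortably under the “10 USD a month” ballpark mentioned above.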
All told, I blew past my original budget and landed somewhere just over $2,200 USD, though I do still have a perfectly usable mini‑PC that can be repurposed for another project. In return, I now have a flexible home lab server that can handle my storage needs, run VMs and containers, and serve as the foundation for a series of self‑hosting experiments. In upcoming videos and posts, I’ll be diving into Pi‑hole and AdGuard for DNS‑level ad blocking, NGINX as a reverse proxy with proper SSL, and services like Nextcloud, media servers, and note‑sync tools. If you’re considering a similar journey, from aging NAS to modern home server, I hope this walkthrough gives you a realistic sense of the trade‑offs and a few pitfalls to avoid.