I finally got around to building a NAS out of my home server.
I used MergerFS and SnapRAID to turn my ProxMox node into a single mountable network drive with redundancy, while still being able to pull a drive, throw it into another PC, and read the data with no overhead or reformatting. And STILL use that box for virtualization.
This is the best setup I’ve found in years, and I finally got around to setting it up!
The Journey (skip to next heading if you don’t care)
The journey went on a bit of a tangent from my original goal of pure data hoarding. Spinning up VMs, running containers, and experimenting with various corners of the open-source world to help organize my chaotic brain and life sort of took over. But as my homelab grew, I realized something: I was still sitting on a lot of unused local storage. Why had I still not turned one of my Proxmox nodes into a NAS to centralize my files, backups, and media?
It all started when I lost my one-and-only hard drive on my one-and-only computer, losing photos, notes, and more from over the years. I turned that into a multi-drive setup (over a decade or more ago now) where I had the main OS on a small SSD and the data on a HDD, with an external drive as backup. The fear of losing that external drive due to weather, handling, and more just got me thinking of redundancy more and more.
This turned into multiple drives, as many as I could find. Goodwill, eBay, shucking drives out of those WD external enclosures… Well, I didn’t know that most multi-drive setups like RAID require same or similar drives. And I had Windows. One of those Goodwill PCs became a second computer I wanted to turn into a NAS with all those drives, but again, Windows only. I hadn’t even heard of Linux at that point.
So, with Windows, I found StableBit DrivePool. And you know what, that was still probably one of the better setups. It didn’t care what drives you had, you could hot-swap them for all it cared, it duplicated files (not blocks), and I could use ALL of the drives for all they offered. It just worked. It turned all those drives into one mount point for my other computers.
Problem: it was Windows. Bleh. So much power, so much heat, just for a storage box I had to remote into to manage or run anything. I still hadn’t learned about virtualization yet. And oh man, when I learned about ProxMox and being able to VIRTUALIZE the Windows server and then also run Pi-hole and whatever the heck else? Well, that was a no-brainer.
The “actual” problem: StableBit DrivePool wasn’t on Linux.
So… It sat.
I got a Synology NAS to hold my data instead. “Throw money at it and the problem will go away.” “Get a proper setup.” “Everyone else has one.” All thoughts I had, thinking I’d finally done it right. And sure, not a huge loss on that. A buddy had a spare Synology with no drives, so I took a couple of drives and threw them in there. Good lord, it was slow. And only mirrored. And locked into the Synology OS, which was not my favorite but not bad, coming from not even knowing what a NAS actually was.
That freed up my former NAS box to become a Proxmox host!
What started as a weekend experiment has become one of the most useful parts of my setup. Not only could I host various apps, I finally, after about a decade of putting it off, turned that sucker into a LINUX StableBit DrivePool-esque NAS. (It’s not DrivePool, but we’ll get to that.) I ran it directly from the ProxMox host itself, and it solved all of my issues (other than off-premise backups…)
Here’s the story of how I did it, what I learned along the way, and a generic guide so you can try it yourself.
Why Use Proxmox as a NAS?
- Consolidation – One box can handle both virtualization and storage duties.
- Flexibility – You can choose your filesystem, RAID/ZFS-like setup, and sharing protocol. Hot-swap a drive into any other Proxmox/Linux box and still read it.
- Drive freedom – I can use ANY mix of drives, unlike stupid RAID pools.
- Expandability – Add the drive to the mount and off you go!
- Integration – Backups from other Proxmox nodes can land directly on the NAS node.
- Cost‑effective – No need for a separate dedicated NAS appliance. But, mount it into a VM and you can use a NAS OS if you choose.
Essentially, I had a HDD die once, so I know any hardware can fail at any time. My new gaming PC shipped with a bad 13th-gen Intel chip, so the whole computer was useless until it got replaced. I’ve had bad RAM. I’ve had a bad power supply. I’ve seen enough failures at this point to believe my drives have ironically been the longest-living hardware I own. So I wanted to be able to hot-swap them into any other PC and read the data, without needing every drive present, a rebuilt RAID array, or a reformat.
ProxMox is free, and under the hood it’s just Debian with some extra flavor on top. So it doesn’t even really need to be ProxMox; any Linux flavor will do.
MergerFS + SnapRAID = Happy me!
MergerFS takes all the drives and ‘merges’ them into one mount point. SnapRAID computes parity across those drives onto a dedicated parity drive, so any single failed drive can be rebuilt, and no one drive holds all the data. As I was saying earlier, my drives have actually been my longest-lasting hardware, so I’m not expecting to lose more than one drive at a time. Until a tornado strikes, I guess… Bigger problems at that point.
So my StableBit DrivePool experience can now be on Linux. And free!!!
Now… I just had to learn how to actually do that. I had only seen these words on Google and had no idea what any of it looked like in practice. So I’m glad I had my Synology to hold my stuff while I played with the setup, because I would have lost some data in this process…
How-To!
(Commands assume you’re running on your Proxmox node as root, so sudo is omitted. You may need sudo.)
This ProxMox server is one of many in a cluster. I went a little nuts with mini-PCs and 10-inch racks, as you’ll see in my other posts. So my intentions made my experience a little tricky when trying to get both NFS and SMB shares working with the right permissions. It’s otherwise rather straightforward.
How to Turn a Proxmox Node into a NAS
Note: These steps are intentionally generic — adapt them to your environment, IP scheme, and security requirements.
Install MergerFS and SnapRAID (this is the easy part)
apt update
apt install mergerfs snapraid -y
Step 1: Prepare the Storage Disks
Install and connect your drives to the Proxmox node.
List all the disks to see the device names Linux assigned:
lsblk -o NAME,SIZE,MOUNTPOINT,LABEL
If you have multiple drives of similar size, it’s hard to tell which physical disk is which. That matters later: when a drive dies, you want to walk up to the server and swap out the right one. You can get more detail with this:
blkid
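If you want to tie a device name to a physical drive, the serial number is usually your best friend, since it’s printed on the drive’s label. A couple of ways to pull it; note that smartctl comes from the smartmontools package, which may not be installed yet:

```shell
# List drives by model and serial number, skipping partition entries:
ls -l /dev/disk/by-id/ | grep -v part

# Or query one drive directly (sde as the example again):
smartctl -i /dev/sde | grep -i serial
```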
Step 2 – Format and Label the Disks, and Mount
There needs to be a parity disk, and it has to be at least as large as your largest data disk. The drives can be labeled in any order.
‘parted’ creates a GPT label and a single partition spanning 100% of the disk (the ext4 filesystem itself comes in the next step).
sde is the example drive; yours may be sda, sdc, and so on. Do this for each drive you want in the pool, changing “sdX” to your drive:
parted /dev/sde --script mklabel gpt
parted /dev/sde --script mkpart primary ext4 0% 100%
Do that for each disk.
Next, once the partitions are made, format each one with ext4, labeling one drive “parity” and the rest “diskX”.
For example, 8 drives could look like this. The order doesn’t matter, but it might to you:
mkfs.ext4 -L disk3 /dev/sda1
mkfs.ext4 -L disk2 /dev/sdb1
mkfs.ext4 -L disk5 /dev/sdc1
mkfs.ext4 -L disk7 /dev/sdd1
mkfs.ext4 -L parity /dev/sde1
mkfs.ext4 -L disk8 /dev/sdf1
mkfs.ext4 -L disk4 /dev/sdg1
mkfs.ext4 -L disk6 /dev/sdh1
Step 3 – Create Mount Points
I omitted disk1 because it made sense to me: disk1 is my parity drive. You might instead label disk1 as data and make the last drive parity, which might make more sense to you. It’s personal preference.
Make the directories for each mount point:
mkdir -p /mnt/parity
# mkdir -p /mnt/disk1
mkdir -p /mnt/disk2 /mnt/disk3 /mnt/disk4 /mnt/disk5 /mnt/disk6 /mnt/disk7 /mnt/disk8
Then mount those drives!
# mount /dev/disk/by-label/disk1 /mnt/disk1
mount /dev/disk/by-label/parity /mnt/parity
mount /dev/disk/by-label/disk2 /mnt/disk2
mount /dev/disk/by-label/disk3 /mnt/disk3
mount /dev/disk/by-label/disk4 /mnt/disk4
mount /dev/disk/by-label/disk5 /mnt/disk5
mount /dev/disk/by-label/disk6 /mnt/disk6
mount /dev/disk/by-label/disk7 /mnt/disk7
mount /dev/disk/by-label/disk8 /mnt/disk8
Step 4: Create MergerFS pool
Make the path you’d like to share. I’ll refer to mine as “store”
mkdir /mnt/store
and merge the disks you mounted earlier into the pool:
mergerfs /mnt/disk2:/mnt/disk3:/mnt/disk4:/mnt/disk5:/mnt/disk6:/mnt/disk7:/mnt/disk8 /mnt/store \
-o defaults,allow_other,use_ino,category.create=epmfs
I ran into an error when trying to use brace expansion for the drive numbers, so it’s here as an option, but I recommend listing the disks individually instead. Bash expands /mnt/disk{2..8} into space-separated paths, while mergerfs expects a single colon-separated list. Either way, make sure the disk labels match and DO NOT include the parity drive.
mergerfs /mnt/disk{2..8} /mnt/store \
-o defaults,allow_other,use_ino,category.create=epmfs
Step 4.5 – Make that persistent using fstab
edit your fstab configuration:
nano /etc/fstab
and under the commented area, place in this block:
# Persistent mounts for data drives
LABEL=disk2 /mnt/disk2 ext4 defaults,nofail 0 2
LABEL=disk3 /mnt/disk3 ext4 defaults,nofail 0 2
LABEL=disk4 /mnt/disk4 ext4 defaults,nofail 0 2
LABEL=disk5 /mnt/disk5 ext4 defaults,nofail 0 2
LABEL=disk6 /mnt/disk6 ext4 defaults,nofail 0 2
LABEL=disk7 /mnt/disk7 ext4 defaults,nofail 0 2
LABEL=disk8 /mnt/disk8 ext4 defaults,nofail 0 2
LABEL=parity /mnt/parity ext4 defaults,nofail 0 2
and then just under that, place in this block:
# mount the disks to /mnt/store with mergerfs
/mnt/disk2:/mnt/disk3:/mnt/disk4:/mnt/disk5:/mnt/disk6:/mnt/disk7:/mnt/disk8 /mnt/store fuse.mergerfs defaults,allow_other,use_ino,category.create=epmfs 0 0
It should look similar to the command you used earlier to create the pool, just with “fuse.mergerfs” as the filesystem type and a couple of “0 0” fields at the end.
Save and exit, then reload systemd and remount everything:
systemctl daemon-reload
mount -a
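A quick sanity check here can save a headache later. The pool should show up as one filesystem whose size is roughly the sum of the data disks (not counting parity):

```shell
# Confirm the MergerFS pool is mounted:
findmnt /mnt/store

# Compare the pool's capacity to the individual disks:
df -h /mnt/store /mnt/disk*
```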
Step 5 – SnapRAID setup: Edit /etc/snapraid.conf
To set up SnapRAID for redundancy and “RAID-like” protection, you just need to edit the snapraid.conf file:
nano /etc/snapraid.conf
Paste in this block, customized for all the disks in your setup:
# Parity file location (on the parity drive)
parity /mnt/parity/snapraid.parity
# Content files (used to track changes on each data disk)
content /mnt/disk2/snapraid.content
content /mnt/disk3/snapraid.content
content /mnt/disk4/snapraid.content
content /mnt/disk5/snapraid.content
content /mnt/disk6/snapraid.content
content /mnt/disk7/snapraid.content
content /mnt/disk8/snapraid.content
# Data disks (these are the actual files SnapRAID protects)
data d2 /mnt/disk2
data d3 /mnt/disk3
data d4 /mnt/disk4
data d5 /mnt/disk5
data d6 /mnt/disk6
data d7 /mnt/disk7
data d8 /mnt/disk8
# Exclude system and temp files
exclude *.log
exclude *.tmp
exclude *.bak
exclude /lost+found/
exclude .Trash-*
exclude .recycle/
exclude .snapshots/
exclude .DS_Store
exclude Thumbs.db
Save and exit that.
And run the first sync! You may hit an error if there’s no data yet, so make a test file in your MergerFS share first:
echo "test merger files" > /mnt/store/text.txt
snapraid sync
Scrub to verify data integrity (-p 5 checks 5% of the array):
snapraid scrub -p 5
And check status of snapraid
snapraid status
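Since the whole point of the parity drive is surviving a dead disk, here’s a rough sketch of what recovery looks like. This is illustration only; d3/disk3 are placeholders matching the config above, and you’ll want to check the SnapRAID manual before running it for real:

```shell
# 1. Replace the dead drive, then format and label it like the old one:
mkfs.ext4 -L disk3 /dev/sdX1   # sdX1 = the new drive's partition
mount /dev/disk/by-label/disk3 /mnt/disk3

# 2. Rebuild its contents from parity (d3 = the disk name in snapraid.conf):
snapraid -d d3 -l /root/fix.log fix

# 3. Verify the recovered files:
snapraid -d d3 check
```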
Step 6 (OPTIONAL) – Crontab syncs
This is useful in a home environment where files aren’t constantly changing, since each sync can take a few minutes and syncing mid-change can cause errors. So if your pool is mainly static storage, like documents, pictures, etc., you can set up automatic syncs:
crontab -e
# Sync every night at 2am; scrub 10% of blocks older than 2 days every Sunday at 4am
0 2 * * * /usr/bin/snapraid sync
0 4 * * 0 /usr/bin/snapraid scrub -p 10 -o 2
Hokay, so, that was pretty easy, yes?
We mounted your drives, we merged them, and we synced them. Now we want to share them!
You can choose NFS or SMB or however you would normally share a drive. You just have a single mount point now.
I’ll give examples for a combination of NFS and SMB because that’s what I used. Each instruction can be used independently for your setup. NFS is easy, SMB is easy, but the combination of both led me into some late nights and cold sweats. I’ll show why later.
Option 1 – NFS share
On the server, let’s install the NFS server:
apt update
apt install nfs-kernel-server -y
Then edit the /etc/exports file
nano /etc/exports
and paste this in
# exports with subfolders that are also shares for particular reasons
/mnt/store/ 192.168.0.0/24(rw,sync,no_subtree_check,no_root_squash,fsid=0)
/mnt/store/nodes 192.168.0.0/24(rw,sync,no_subtree_check,no_root_squash,fsid=1)
/mnt/store/vm 192.168.0.0/24(rw,sync,no_subtree_check,no_root_squash,fsid=2)
/mnt/store/media 192.168.0.0/24(rw,sync,no_subtree_check,no_root_squash,fsid=3)
Replace 192.168.0.0/24 with the IP SUBNET of the network you’d like to share it to.
I’ve included additional lines to show how to share particular folders inside the main share for specific purposes. For instance, using the mount as the backup target for Proxmox creates multiple directories that get ugly in the main share if other services use it too, so I placed those inside a “nodes” folder and shared that with the Proxmox nodes. Similarly, for my Plex server, I didn’t want all my media dispersed through the main share.
And for my VMs to have their own mount point for NextCloud — you get the idea.
For just one NFS share because you’re normal and awesome:
# or just one share
/mnt/store 192.168.0.0/24(rw,sync,no_subtree_check,no_root_squash,fsid=0)
Now export that share and restart NFS
exportfs -ra
systemctl restart nfs-kernel-server
Verify the active exports:
exportfs -v
On the client PC:
Install the NFS client:
apt update
apt install nfs-common
and mount the share (create the local mount point first if it doesn’t exist):
mount -t nfs <your-server-ip>:/mnt/store /mnt/store_remote
replace “your-server-ip” with your server IP.
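Like the server-side fstab step earlier, you can make the client mount survive reboots. A sketch, assuming 192.168.0.10 stands in for your server’s IP and nfs-common is installed:

```shell
# Create the local mount point and add a persistent NFS entry
# (nofail + _netdev keep a missing server from hanging boot):
mkdir -p /mnt/store_remote
echo "192.168.0.10:/mnt/store /mnt/store_remote nfs defaults,nofail,_netdev 0 0" >> /etc/fstab
mount -a
```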
So, this is where I ran into issues. Things like ProxMox want to run as root, things like Windows want authentication and a specific user, and containers and VMs use different users for different services, each with different permissions.
So, when you expose the NFS share to a subnet of Proxmox nodes, you don’t want to authenticate each node every time. The “no_root_squash” option should take care of this, but in case it doesn’t, you can change the ownership to “nobody:nogroup”, which is essentially opening it up to guest access.
# Try not to use nobody outside of testing
chown -R nobody:nogroup /mnt/store
chmod -R 775 /mnt/store
We’ll get into permissions after this.
Option 2 – Samba (SMB) for Windows/MacOS
on the server:
Install Samba:
apt update
apt install samba
edit /etc/samba/smb.conf
nano /etc/samba/smb.conf
and place in the following
[store]
path = /mnt/store
browseable = yes
read only = no
guest ok = yes
force user = nobody
and restart samba!
systemctl restart smbd
For more control, your config might look more like this:
[store]
comment = MergerFS Pooled Storage
path = /mnt/store
browseable = yes
read only = no
writable = yes
guest ok = no
valid users = @storageusers
force group = storageusers
create mask = 0664
directory mask = 2775
inherit permissions = yes
inherit acls = yes
vfs objects = acl_xattr
map acl inherit = yes
store dos attributes = yes
again, I’ll get into permissions after this option.
On the Windows PC to map the network drive: (sorry Mac I have no idea how you do this, so I won’t share it as it might be incorrect)
Windows:
Open File Explorer → \\your-server-ip\store
or open This PC, right-click in the blank space where your drives are shown, select “Map network drive”, and paste in the path:
\\your-server-ip\store
Permissions!!! UGHHHH!
We’re here! Permissions.
Windows might throw an error at you for trying to map to the monster of a NAS you just made. Don’t be discouraged; it’s likely you just don’t have an actual user for SMB yet! If SMB still doesn’t work after this, we can look at the server more in depth.
Let’s create a user on the server to use SMB. We’ll call him… SIMBA? Sure, Simba. idk dude I don’t name things.
adduser simba
smbpasswd -a simba
remember to update your smb.conf
nano /etc/samba/smb.conf
Have it include this additional line, with [store] representing the share section made earlier (don’t create a second [store] block; just add the line under the existing one):
[store]
valid users = simba
restart smb
systemctl restart smbd
Re-map it in Windows, hopefully that did it for ya. Took me a while because I didn’t understand how smb shares worked on linux to windows.
But!!! We haven’t worked on NFS permissions yet. We’re still running as guest, or “nobody” in Linux terms.
NFS Permissions
So, remember how I mentioned ProxMox runs things as root? Well, containers (specifically, what runs inside them) don’t. Windows doesn’t. Whatever else pops up will be different too.
So, to make things easy, we can make a group and append users to it, so you only need to grant access to the group. Alternatively, you can use ACLs (Access Control Lists)! These are a bit more complicated, but they let you grant access without changing the permissions directly. It’s more like adding a port to an IP instead of changing the IP. Or, for the artistic folk, adding a mask to a layer instead of painting directly on the file.
I’ll go with direct permissions for now:
Create the group and add the user to it (simba already exists from the SMB step, so use usermod rather than useradd):
groupadd storageusersgroup
usermod -aG storageusersgroup simba
Change the group name to whatever you like, and “simba” to your user.
Add the group permissions to the share:
# prep users for being added to share
chown -R simba:storageusersgroup /mnt/store
chmod -R 2775 /mnt/store
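If you’d rather try the ACL route mentioned above, a minimal sketch looks like this (same placeholder group name; the acl package may not be preinstalled):

```shell
# Grant the group read/write without changing the base ownership:
apt install acl -y
setfacl -R -m g:storageusersgroup:rwx /mnt/store

# Default ACLs make newly created files inherit the same access:
setfacl -R -d -m g:storageusersgroup:rwx /mnt/store

# Inspect the result:
getfacl /mnt/store
```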
This then can also be applied to your SMB setup:
nano /etc/samba/smb.conf
[store]
valid users = @storageusersgroup
force group = storageusersgroup
create mask = 0664
directory mask = 2775
inherit permissions = yes
inherit acls = yes
vfs objects = acl_xattr
map acl inherit = yes
store dos attributes = yes
systemctl restart smbd
And now, simba has access to both NFS and SMB. Well, actually, his GROUP does!
But, you can also include root in that game too.
# prep users and root for being added to share
chown -R root:storageusersgroup /mnt/store
chmod -R 2775 /mnt/store
This gives root ownership while keeping access for the group simba is in.
I’ll chirp the usual “don’t allow root access to your file shares”.
Summary
We formatted, mounted, merged, synced/balanced, shared over NFS and SMB, mounted those shares, and made sure permissions allowed us to do so.
This allows me (and you) to use any number of any type of any size of drives for one big file share that can be used for just about any device.
My next goal is to use this share to then mount to my Plex/Jellyfin servers, and link it up to Sonarr, Radarr, and all the other *arr tools.
To do this, we’ll be making sure things are locked down without root, ensuring nothing important/damaging on this server, proper monitoring, logs, and whatnot to be sure things are secure first.
I hope you also gained something from this! If you followed along, you now have a robust and flexible NAS that can kinda handle everything you throw at it (literally).