Andrea just released the latest version of SnapRAID. This version introduces split parity, which allows you to spread your parity files across multiple disks. This makes transitioning up to larger data disks much easier and can eliminate the need to buy larger disks just for parity. You can read more about its features here. Also, I have updated my SnapRAID tutorial to reflect the update to v11, and I have posted a new sync script to work with the new split parity.
from https://zackreed.me/snapraid-v11-released/
-------
I have SnapRAID set up to create a super flexible, reliable bulk media server. I have used SnapRAID for years across numerous versions of Ubuntu and a plethora of hardware. SnapRAID has been so reliable that I have updated hardware four times since I originally set it up, migrated through many versions of SnapRAID, added many data disks, added parity levels, and replaced disks, all without issue. All the while, it’s been super flexible and an awesome way to manage my bulk media. I currently have a ridiculously over-the-top server that you can read more about here. On it, I use three parity disks and 21 data disks.
The first thing I do after any new install is update the system and install my base packages.
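Something along these lines (a sketch; which base packages you grab is personal preference):

    sudo apt-get update
    sudo apt-get -y dist-upgrade
    sudo reboot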
After the reboot, let’s keep installing the packages we will need to build SnapRAID.
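At a minimum you need a compiler and make (exact package names may vary slightly by Ubuntu release):

    sudo apt-get install -y gcc make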
Finally, let’s install it.
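A sketch of the usual download/build/install dance; the version number here is just an example, so substitute whatever is current on the SnapRAID site:

    wget https://github.com/amadvance/snapraid/releases/download/v11.0/snapraid-11.0.tar.gz
    tar xzvf snapraid-11.0.tar.gz
    cd snapraid-11.0/
    ./configure
    make
    make check
    sudo make install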
Next, let’s clean up.
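Just remove the build directory and tarball:

    cd ..
    rm -rf snapraid-11.0 snapraid-11.0.tar.gz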
Next, I’m going to partition the disks, so I need to grab a couple packages.
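I use GPT partitions, so gdisk (which includes sgdisk) does the job:

    sudo apt-get install -y gdisk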
Let’s partition one, and copy the structure to the other disks.
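A sketch with example device names; check yours with lsblk before running anything destructive:

    # create a single Linux partition spanning the whole of /dev/sdb
    sudo sgdisk -n 1:0:0 -t 1:8300 /dev/sdb
    # replicate sdb's partition table onto the other disks,
    # then randomize each copy's GUIDs
    sudo sgdisk -R /dev/sdc /dev/sdb
    sudo sgdisk -G /dev/sdc
    sudo sgdisk -R /dev/sdd /dev/sdb
    sudo sgdisk -G /dev/sdd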
Now, we will make a place to mount the disks. I mount them via /etc/fstab, labeled by their device type and serial number as seen below. This makes the disks easier to identify in the event of a disk failure.
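The mount-point names are just my convention; use whatever you like:

    sudo mkdir -p /mnt/disk1 /mnt/disk2 /mnt/parity1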
Set up a filesystem on each data disk. (Note: I’m reserving 2% of the disk’s space so that the parity overhead can fit on the parity disk.) You can set the reserved space to 0% if your parity disk(s) are all larger than your data disks (i.e. you have 6TB parity disks and 5TB data disks).
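Using the example device names from above (-m sets the reserved percentage):

    sudo mkfs.ext4 -m 2 /dev/sdb1
    sudo mkfs.ext4 -m 2 /dev/sdc1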
Put a filesystem on the parity disk (here I’m reserving 0%, or letting it use the whole disk for parity).
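Again with the example parity device from above:

    sudo mkfs.ext4 -m 0 /dev/sdd1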
Get the device type and serial numbers like this, then add them to your /etc/fstab.
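The by-id links encode all of that information:

    ls -la /dev/disk/by-id/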
It should give you output like this.
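Something like this (the disk models and serial numbers here are made up for illustration):

    lrwxrwxrwx 1 root root 10 Jan  1 12:00 ata-WDC_WD60EFRX-68MYMN1_WD-WX00AA000000-part1 -> ../../sdb1
    lrwxrwxrwx 1 root root 10 Jan  1 12:00 ata-WDC_WD60EFRX-68MYMN1_WD-WX00AA000001-part1 -> ../../sdc1
    lrwxrwxrwx 1 root root 10 Jan  1 12:00 ata-WDC_WD60EFRX-68MYMN1_WD-WX00AA000002-part1 -> ../../sdd1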
Use the identifiers from the above output to add entries to /etc/fstab. It should look something like this.
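Continuing with the made-up identifiers from above:

    /dev/disk/by-id/ata-WDC_WD60EFRX-68MYMN1_WD-WX00AA000000-part1  /mnt/disk1    ext4  defaults  0  2
    /dev/disk/by-id/ata-WDC_WD60EFRX-68MYMN1_WD-WX00AA000001-part1  /mnt/disk2    ext4  defaults  0  2
    /dev/disk/by-id/ata-WDC_WD60EFRX-68MYMN1_WD-WX00AA000002-part1  /mnt/parity1  ext4  defaults  0  2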
As you can see, the identifier shows the type of connection (in this case SATA), the manufacturer of the disk, the part number of the disk, the serial number of the disk, and the partition we are using from the disk. This makes identifying disks in the event of a failure super easy.
Mount the disks after you add them to /etc/fstab.
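    sudo mount -a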
Next, you’ll want to configure SnapRAID.
This is how I configured mine.
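A minimal sketch of /etc/snapraid.conf using the mount points from above; the disk names, content file locations, and excludes are just examples, so adjust for your layout:

    parity /mnt/parity1/snapraid.parity

    content /var/snapraid/snapraid.content
    content /mnt/disk1/snapraid.content
    content /mnt/disk2/snapraid.content

    data d1 /mnt/disk1/
    data d2 /mnt/disk2/

    exclude *.unrecoverable
    exclude /tmp/
    exclude lost+found/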
Next, we need to create the path that we mentioned above for our local content file.
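In the sketch above, that path was /var/snapraid:

    sudo mkdir -p /var/snapraid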
Once that’s complete, you should sync your array.
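The first sync builds the parity from scratch, so it can take a while:

    sudo snapraid sync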
Since moving to SnapRAID 7.x, the above-mentioned script no longer works. I have revised the script to accommodate dual parity and to integrate the changes in the counters.
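The revised script is posted separately, but as a rough sketch of its shape: it diffs first, refuses to sync if a suspicious number of files vanished, and emails the results. The deletion threshold and the assumption that local mail works are mine here:

    #!/bin/bash
    # sketch of a scheduled SnapRAID sync job; the real revised script
    # parses the diff counters more carefully
    EMAIL="root"        # assumes local mail delivery is configured
    LOG=$(mktemp)

    # see what changed since the last sync
    snapraid diff > "$LOG" 2>&1

    # count deleted files in the diff output
    DEL=$(grep -c '^remove ' "$LOG")

    if [ "$DEL" -gt 100 ]; then
        # an unusually large deletion is probably a mistake; ask a human first
        mail -s "SnapRAID sync skipped: $DEL files removed" "$EMAIL" < "$LOG"
    else
        snapraid sync >> "$LOG" 2>&1
        mail -s "SnapRAID sync report" "$EMAIL" < "$LOG"
    fi

    rm "$LOG"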
Finally, I wanted something to pool these disks together. There are four options here (choose your own adventure). The nice part about any of these is that it’s very easy to change later if you run into something you don’t like.
1. The first option is mhddfs. It is super easy to set up and “just works”, but many people have run into random disconnects while writing to the pool (large rsync jobs were causing this for me). I have since updated my mhddfs tutorial with some new FUSE options that seem to remedy the disconnect issue. mhddfs runs via FUSE vs. a kernel driver for AUFS, so it’s not as fast as AUFS and it has more system overhead.
2. The second option is to use AUFS instead. The version bundled with Ubuntu has some weirdness with deletion and file moves with both its opaque and whiteout files. It also does not support exporting via NFS.
3. The third option is to use AUFS, but to compile your own versions to support the hnotify option and allow for export via NFS. This is where I landed for a few years after trying both of the above for many months/years.
4. The fourth option is MergerFS, the solution I’m currently using. Finally, a solution that performs well and is easy to use. It is FUSE based, but it’s fast and has create modes like AUFS. It’s also easy to install and, unlike AUFS, requires no compiling to get it working. It’s great and actively developed; a sample mount entry is sketched after this list.
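For example, a single fstab entry can pool the data disks into /storage (the option names here are typical MergerFS options at the time of writing; check the MergerFS docs for current ones):

    sudo mkdir -p /storage

and in /etc/fstab:

    /mnt/disk*  /storage  fuse.mergerfs  defaults,allow_other,use_ino,category.create=epmfs,minfreespace=20G  0  0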
After choosing one of the options above, you should now have a mount point at /storage that is pooling all of your disks into one large volume. You’ll still want to setup a UPS and SMART monitoring for your disks. Another thing I did was write up a simple BASH script to watch my disk usage, and email me if a disk gets over 90% used, so I can add another disk to the array.
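Something along these lines (a sketch; it assumes the data disks all live under /mnt/disk* and that local mail delivery works):

    #!/bin/bash
    # warn by email when any data disk crosses the usage threshold
    THRESHOLD=90
    EMAIL="root"    # assumes local mail delivery is configured

    df --output=pcent,target | grep '/mnt/disk' | while read -r pcent target; do
        usage=${pcent%\%}
        if [ "$usage" -ge "$THRESHOLD" ]; then
            echo "$target is at ${pcent} used" | \
                mail -s "Disk space warning: $target" "$EMAIL"
        fi
    done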
Next, I would strongly suggest you read my other articles to set up email for monitoring, SMART information monitoring, spinning down disks, setting up a UPS battery backup, and other RAID array actions. Being able to cope with drives failing is useful, but it’s nice to know that one has failed and be able to replace it too.
Updating in the future
You may wonder…”Hmm, I installed this fancy SnapRAID a while back, but the shiny new version of SnapRAID just came out, so how do I update?” The nice thing about SnapRAID is that it’s a standalone binary with no dependencies, so you can upgrade it in place. Just grab the latest version, untar, and install.
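For example (substitute whatever version is current):

    wget https://github.com/amadvance/snapraid/releases/download/v11.0/snapraid-11.0.tar.gz
    tar xzvf snapraid-11.0.tar.gz
    cd snapraid-11.0/
    ./configure
    make
    make check
    sudo make install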
You can check your version like this.
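    snapraid -V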
from https://zackreed.me/setting-up-snapraid-on-ubuntu/