This is a collection of (MIT-licensed) tools I’ve been working on for years to automate much of the major functionality you need when setting up and using new machines with many disks, SSDs, or NVMe devices.
The repo is here: https://github.com/joelandman/disk_test_setup . I will be adding SAS secure erase and formatting tools to it as well.
These tools wrap other, lower-level tools and automate the common tasks you face when setting up and testing a machine with many drives. Usage instructions are in a comment at the top of each script … I will eventually add better documentation.
Here is the current list of tools (note: they aren’t aware of LVM yet; let me know if you would like this):
* **disk_mkfsxfs.pl** : takes every disk or SSD that is not mounted and not part of an MD RAID, creates a file system on it, and creates a mount point under a path of your choosing (/data by default). This is used with the disk_fio.pl code.
* **disk_fio.pl** : generates simple fio test cases with which you can probe file system IO performance across many devices. The generated file names encode the test type (read, write, randread, randwrite, randrw), the block size, and the number of simultaneous IOs to each LUN. The tests use the mount point you provide (defaults to /data) and create test directories below it so you don't get collisions. To run the test cases, you need [fio](https://github.com/axboe/fio) installed.
* **disk_wipefs.pl** : removes any trace of file system metadata from a set of drives, provided they are not mounted and not part of an existing MD RAID.
* **ssd_condition.pl** : runs a conditioning write on a set of SSDs if they are not mounted and not part of an MD RAID. If you are setting up an SSD-based machine, you are, of course, running something akin to this before using the SSDs ... right? Otherwise, you'll get a nasty performance and latency shock when you transition from the mostly unallocated block scenario to the completely allocated one. It is especially painful when the block compression/garbage collection passes come through to help the FTL find more space to write your blocks; you can tell you are there if you see IO pauses after long sequences of writes. Conditioning also helps improve the overall life of the drive. See [this presentation, around slide 7 and beyond](http://www.snia.org/sites/default/education/tutorials/2011/fall/SolidState/EstherSpanjer_The_Why_How_SSD_Performance_Benchmarking.pdf) for more info.
* **sata_secure_erase.pl** : completely wipes an SSD (works with rotational media as well).
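Several of these tools share the same safety gate: a drive is touched only if it is not mounted and not part of an MD RAID. A rough sketch of how such a check can be done from `lsblk` output (this is my illustration of the idea, not the scripts' actual implementation):

```shell
# Sketch (an assumption, not the scripts' code): a whole disk qualifies
# only if nothing in its device tree is mounted or is an md RAID device.
# Input lines look like `lsblk -nr -o NAME,TYPE,MOUNTPOINT` output, where
# children follow their parent disk.
free_disks() {
    awk '
        $2 == "disk" { cur = $1; used[cur] = 0 }   # start a new whole-disk entry
        $3 != "" || $2 ~ /raid/ {                  # mounted, or raid device type
            if (cur != "") used[cur] = 1
        }
        END { for (d in used) if (!used[d]) print d }
    '
}

# Example: sda has a mounted partition, sdb is completely free.
printf 'sda disk\nsda1 part /\nsdb disk\n' | free_disks   # prints "sdb"
```

On a live system you would feed it `lsblk -nr -o NAME,TYPE,MOUNTPOINT` instead of the canned `printf`.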
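For a sense of what a generated test case looks like, here is a hand-written fio job file in the spirit of disk_fio.pl's naming scheme (the file name, paths, and parameter values are illustrative assumptions, not the script's exact output):

```shell
# Write an illustrative fio job: random write, 4 KiB blocks, 16 outstanding
# IOs, in a test directory below the /data mount tree. All names and values
# here are assumptions for illustration.
cat > randwrite-bs4k-iodepth16.fio <<'EOF'
[randwrite-bs4k-iodepth16]
directory=/data/disk0/fio-test
rw=randwrite
bs=4k
iodepth=16
ioengine=libaio
direct=1
size=1g
runtime=60
time_based
EOF
# Then run it with: fio randwrite-bs4k-iodepth16.fio
```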
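A conditioning pass typically means writing the entire device one or more times so the FTL sees a fully allocated drive. A hypothetical fio job for such a pass (again, an assumption about the general approach, not ssd_condition.pl's actual code):

```shell
# Hypothetical conditioning job: two sequential full-device write passes
# with 1 MiB blocks. Destructive if actually run against a real device,
# so /dev/sdX is left as a placeholder.
cat > condition-sdX.fio <<'EOF'
[condition]
filename=/dev/sdX
rw=write
bs=1m
iodepth=32
ioengine=libaio
direct=1
loops=2
EOF
# Review, substitute the real device, then run: fio condition-sdX.fio
```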
The user interfaces for these tools are admittedly spartan, and are documented at the top of the code itself; this will improve over time. Have at them (MIT licensed), and please let me know if you use them or find them useful.