Another itch scratched

So there you are, with many software RAIDs. You’ve been building and rebuilding them, and somewhere along the line you lost track of which devices were which. Somehow you didn’t clean up the last build right, and you thought you had a hot spare … until you looked at /proc/mdstat … and said … Oh …
So. I wanted to do the detailed accounting, in a simple way. I want the tool to tell me if I am missing a physical drive (e.g. a drive died), or if a disk thinks it is part of a RAID even though the OS doesn’t agree.
And yes, the latter can happen if you rebuild the array and omit one of the devices for whatever reason.
Like I did.
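The check itself boils down to a set comparison: the members the OS reports for the array versus the disks whose on-disk superblock claims membership. Here is a minimal sketch of that logic — to be clear, the sysfs path, the UUID matching via `mdadm --examine`, and every function name here are my assumptions, not lsswraid’s actual implementation:

```shell
# Devices the OS currently counts as members of an md array ($1 = md23).
os_members() {
    ls "/sys/block/$1/slaves" 2>/dev/null | sort
}

# Devices whose superblock carries the array's UUID, whether or not the
# OS agrees.  Matching on the Array UUID (rather than the array name)
# avoids false hits; note this also examines partitions if any match.
disk_members() {
    uuid=$(mdadm --detail "/dev/$1" | awk '/UUID/ {print $3}')
    for d in /dev/sd*; do
        mdadm --examine "$d" 2>/dev/null | grep -q "$uuid" \
            && basename "$d"
    done | sort
}

# Pure set difference: names present in the second sorted list ($2)
# but absent from the first ($1) -- the stale, uncleared members.
extra_devices() {
    comm -13 "$1" "$2"
}
```

With bash process substitution, `extra_devices <(os_members md23) <(disk_members md23)` would print any device the disks claim but the OS does not — which is the situation below.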
So …

root@usn-t60:/opt/scalable/sbin# ./lsswraid --raid=md23
N(OS)	= 14
N(disk)	= 15
More Physical disk RAID elements than OS RAID elements, likely you have a previously built element which has not been cleared.
The extra devices are: sdz
root@usn-t60:/opt/scalable/sbin# grep sdz /proc/mdstat

And to add this particular device back in as a hot spare … first wipe the stale RAID signature, then re-add it …

root@usn-t60:/opt/scalable/sbin# wipefs -a /dev/sdz
/dev/sdz: 4 bytes were erased at offset 0x00001000 (linux_raid_member): fc 4e 2b a9
root@usn-t60:/opt/scalable/sbin# mdadm /dev/md23 --add /dev/sdz
mdadm: added /dev/sdz
root@usn-t60:/opt/scalable/sbin# grep sdz /proc/mdstat
md23 : active raid6 sdz[16](S) sdap[14] sdar[13] sdas[12] sdau[11] sdat[10] sdaf[9] sdag[8] sdai[7] sdah[6] sdaj[5] sdak[4] sdam[3] sdal[2] sdaa[15]
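Once the detector has flagged a device, the two manual steps above (wipe, then re-add) could be wrapped in a small helper. A hedged sketch — the function name and the dry-run behavior are mine; the `wipefs` and `mdadm` invocations are the ones from the session above:

```shell
# Re-add a stale device to an md array as a hot spare.
# $1 = array name (e.g. md23), $2 = device name (e.g. sdz).
# Safety valve: by default only PRINTS the commands; set DO_IT=1
# in the environment to actually run them.
readd_spare() {
    run() {
        if [ "${DO_IT:-0}" = "1" ]; then "$@"; else echo "$@"; fi
    }
    run wipefs -a "/dev/$2"            # clear the stale superblock
    run mdadm "/dev/$1" --add "/dev/$2" # re-add; it joins as a spare
}
```

Calling `readd_spare md23 sdz` without `DO_IT=1` just echoes the two commands, which makes it easy to eyeball before committing to a destructive wipe.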