Disk, SSD, NVMe preparation tools cleaned up and on GitHub

This is a collection of (MIT-licensed) tools I’ve been working on for years to automate some of the major tasks one needs to handle when setting up and using new machines with lots of disks, SSDs, or NVMe devices.

The repo is here: https://github.com/joelandman/disk_test_setup . I will be adding some SAS secure-erase and formatting tools to it.

These tools wrap other, lower-level tools, and automate the common tasks you worry about when you are setting up and testing a machine with many drives. Usage instructions are at the top of the code … I will eventually add better documentation.

Here is the current list of tools (note: they aren’t aware of LVM yet; let me know if you would like this):

  • disk_mkfsxfs.pl : takes every disk or SSD that is not mounted or part of an MD RAID, creates a file system on it, and creates a mount point under a path of your choosing (/data by default). This is used with the disk_fio.pl code.
  • disk_fio.pl : generates simple fio test cases so you can probe I/O performance to the file system on many devices at once. The file names it generates encode the case type (read, write, randread, randwrite, randrw), block size, and number of simultaneous I/Os to each LUN. It uses the mount point you provide (defaults to /data) and creates test directories below it so you don’t get collisions; there is a rough sketch of the idea after this list. To run the test cases, you need fio installed.
  • disk_wipefs.pl : removes any trace of file system metadata from a set of drives, provided they are not mounted and not part of an existing MD RAID.
  • ssd_condition.pl : runs a conditioning write on a set of SSDs if they are not mounted or part of an MD RAID. If you are setting up an SSD-based machine, you are, of course, running something akin to this before using the SSDs … right? Otherwise, you’ll get a nasty performance and latency shock when you transition from the mostly unallocated block scenario to the completely allocated scenario. It is especially painful when the block compression/garbage collection passes come through to help the FTL find more space to write your blocks; you can tell you are there if you see I/O pauses after long sequences of writes. Conditioning also helps improve the overall life of the drive. See this presentation around slide 7 and beyond for more info.
  • sata_secure_erase.pl : completely wipes the SSD (works with rotational media as well).
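
Here is the rough sketch mentioned above of the general pattern these scripts automate: find disks that are neither mounted nor MD RAID members, then emit one fio command per eligible disk, with the test parameters encoded in the job name. This is purely illustrative and is not the code in the repo; the device-name patterns, the /proc parsing, and the fio parameters are my assumptions.

#!/usr/bin/env perl
# Illustrative sketch only -- not the repo code.
use strict;
use warnings;

my $mountbase = shift // '/data';

# Devices that are members of an MD RAID, from /proc/mdstat
my %in_md;
if ( open my $md, '<', '/proc/mdstat' ) {
    while (<$md>) {
        # e.g. "md0 : active raid1 sdb1[0] sdc1[1]"
        $in_md{$1} = 1 while /\b(sd[a-z]+|nvme\d+n\d+)\S*\[\d+\]/g;
    }
}

# Devices (or their partitions) that are mounted, from /proc/mounts
my %mounted;
if ( open my $mt, '<', '/proc/mounts' ) {
    while (<$mt>) {
        $mounted{$1} = 1 if m{^/dev/(sd[a-z]+|nvme\d+n\d+)};
    }
}

# Whole disks, from lsblk
my @disks = grep { length }
            map  { /^(\S+)\s+disk\b/ ? $1 : '' }
            `lsblk -rno NAME,TYPE`;

# Emit the commands: one fio run per eligible disk, parameters in the job name
for my $d ( grep { !$in_md{$_} && !$mounted{$_} } @disks ) {
    my ( $rw, $bs, $iodepth ) = ( 'randwrite', '4k', 32 );
    my $dir  = "$mountbase/$d/test";
    my $name = join '-', $d, $rw, $bs, "iodepth$iodepth";
    print "mkdir -p $dir\n";
    print "fio --name=$name --directory=$dir --rw=$rw --bs=$bs ",
          "--iodepth=$iodepth --ioengine=libaio --direct=1 ",
          "--size=4g --runtime=60 --time_based\n";
}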

The user interfaces for these tools are admittedly spartan and are documented at the top of the code itself. This will improve over time. Have at them; they are MIT licensed. Please let me know if you use them or find them useful.

Aria2c for the win!

I hadn’t heard of aria2c before today. It’s sort of a super wget, as far as I can tell: it does parallel transfers to reduce data-motion time where possible.

So I pulled it down, built it. I have some large data sets to move. And a nice storage area for them.

Ok.

Fire it up to pull down a 2GB file.

Much faster than wget on the same system over the same network. Wow.

Then the rest of the ML data set. About 120GB in all.

Yeah, this is a good tool. Need to make sure it is on all our platforms.

Sort of like gridftp but far more flexible.

Definitely a good tool.

Working on benchmarking ML frameworks

Nice machine we have here …

root@hermes:/data/tests# lspci | egrep -i '(AMD|NVidia)' | grep VGA
3b:00.0 VGA compatible controller: NVIDIA Corporation GP100GL (rev a1)
88:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Vega 10 XTX [Radeon Vega Frontier Edition]

I want to see how TensorFlow and many other frameworks run on each of the cards. The processor is no slouch either:

root@hermes:/data/tests# lscpu | grep "Model name"
Model name:            Intel(R) Xeon(R) Gold 6134 CPU @ 3.20GHz

Missing a few things for this, as Amazon is a bit late on shipping some of the needed parts, but hopefully soon, I’ll be able to get everything in there.

Looking at the integrated TensorFlow benchmarks, which require ImageNet, as well as others. Feel free to point more out to me … I’m happy to run some nice baseline/direct comparisons. I’d prefer open (sharable/distributable) benchmarks (alas, ImageNet isn’t precisely this; I’ve put in my request for download).

Everything else is fair game though. Planning on publishing what I find.

Oracle finally kills off Solaris and SPARC

This was making the rounds last week. Oracle seems to have a leak in its process: labels created for people’s packages triggered event notifications to those people. Solaris was decimated. More details at the links and at The Layoff.

Honestly I had expected them to reach this point. I am guessing that they were contractually obligated for at least 7 years to provide Solaris/SPARC support to US government purchasers. SGI went through a similar thing with IRIX. Had to maintain it for N years (N being something like 7) after EOL.

After that contractual obligation expired, the question was: would the divisions be able to pay for themselves and add positively to Oracle’s bottom line?

Generally, Oracle is in a very high-margin software business. Not hardware, which tends to be much lower margin. Yeah, they have Exadata (or is that now gone?), storage, and a few other things. But no one really looks to them anymore as a leader in any aspect of the market. They are a very large player, with a set of core products that produce most of their revenue. They are working now to create a cloud, though it will likely be running Linux as the basis for their offerings, and will likely be built by very low-cost providers (Quanta et al.).

The calculus for their hardware division has been obvious for a long time. For Solaris, it has been getting clearer over time. The world moved on from Sun hardware in the early 2000s. It moved on from Solaris in the mid-2000s. Linux has largely supplanted the other Unixes in many cases (yeah, I know, many aren’t happy with this).

Solaris, in some ways, survives through the illumos fork, SmartOS, etc. There is a bit of baggage from those roots, in terms of perception, OEM driver support, user space, etc. Some is admittedly self-inflicted … porting should be trivial, yet I keep running into library implementation differences … that effectively prevent me from moving code bases to it. Some of this is obviated by the LX-branded zone … running what appears to be a Linux kernel within a zone in the OS. Some is obviated by KVM. I remain hopeful that we’ll see these issues solved in illumos, as they are part of what killed Solaris as a viable platform. The world moved on, coalesced around a specific platform and API, for better or worse, and others fell off. Even Microsoft, after decades of battling the upstart, finally gave in and did something similar … implemented Ubuntu Linux within Windows.

This is a good direction if you spend the time and effort to maintain compatibility with the fast-changing kernel/environment, as well as to make the underlying system as performant as possible. But you have to run hard to do this.

A great example of what happens when you plant your flag and refuse to move it to adapt to platform changes is OS/2 and Win32 binaries. OS/2 refused to support a then-new Win32 technology (I forget what it was called). That resulted in code breaking … slowly at first, and then with gathering speed as developers adopted the new tech. I am hoping that illumos can move faster than this.

As for Solaris and SunOS, I used the latter first at MSU in the late 80s, around the time I used Vaxen and other systems, including some Ultrix boxen. Then at Wayne State, I used a buddy’s SPARCstation (and many other Unixes) to do my simulations in the early 90s. What got me off the SPARC was when the brand-new 486 unit I bought was shown to be about 2x the speed. The only advantage the SPARC had was RAM: 64MB, as I remember, versus 16MB in the 486. The Cray machine I ran on at PSC had quite a bit of RAM, and was also very fast.

Shortly after this, I started playing with SGIs and generally gave up on the SPARC units due to speed issues.

That said, I’ve always had a soft spot for real Unix and workstation systems. At SGI I competed with them. At Scalable I used them and many others to help customers build scale-out computing and storage systems.

Under Oracle, I thought they might have a chance if they were invested in. But apparently this didn’t quite happen. It is sad to see such an ignoble end to Solaris and SPARC, though it was not unexpected.

M&A and business things

First up, Tegile was acquired by Western Digital (WDC). This is in part due to WDC’s desire to be a one-stop-shop, vertically integrated supplier of storage parts, systems, etc. This is the direction all of the storage-parts OEMs needed to move in, though Seagate failed to execute it correctly, selling off part of its array business to Cray. Toshiba … well … they have some existential challenges right now, and are about to sell off their profitable flash and memory systems business, if they can just get everyone to agree …

This comes from the fact that spinning disk, while a venerable technology, has been effectively and completely commoditized. It is an effective replacement for tape systems … and yes, I disagree strongly with a number of people I tremendously respect on this particular matter. Tape is, apart from specific niches, dead. Tape vendors and tape drive vendors are not long for this world, and are struggling mightily … though maybe not as intelligently as they could.

Over time, flash (TLC and QLC) will, IMO, become cost-competitive per GB stored with spinning disk. Then the only thing holding back a full-on switchover to flash will be manufacturing capacity. Until then, spinning disk will likely continue to dominate the slower and colder tiers of storage.

When that crossover hits, disk will be the new tape.

Many years ago, in grad school, a buddy leaving the program wrote all his directories out to vax-tape, because, you know, vaxes … and tapes … are forever. I gotta ask him how that is working out. I have 30 year old floppies I can still (barely) read. Not so sure on vaxes. Or vax-tapes. I may have an old DLT cartridge from backups of my home Indy from 20+ years ago sitting around. No way to read that tape. Drives don’t exist anymore. I do have some ATA drives from that time period, and the capability to attach them and read them.

Which is more permanent?

Ok, back to the vertical integration. WDC seems to be doing some of the right things. They are investing in building an ecosystem in which a customer can purchase offerings across a wide range of capabilities. This is, IMO, a sound direction, though it will require careful work to avoid competing with one’s own customers.

Second, Coho Data closed. I’ve been saying this for years, but … storage arrays as a market are slowly drying up. It is somewhat of a long tail, but this is part of the reason EMC sold itself to Dell. The storage array market of all flavors and shapes (flash, disk, …) is contracting, and more competitors are pursuing larger pieces of a shrinking market. Part of what drove WDC to buy Tegile and build out the rest of its plan appears to be a desire not only to have a foot in that market and help control the flux into other markets where it also has a presence, but also to manage the changes in that market by helping to shape how it evolves.

Coho died because the market is crowded, and non-differentiated hardware combined with an effectively random software stack of dubious actual value (which is the approach of a fairly large number of these groups) is not a viable strategy. Though with founders of the right pedigree and connections, it appears you can liberate quite a bit of money from VCs in the process of failing.

Curiously, differentiated hardware, stuff that actually adds value to the software layers above it, has not seemed to be in vogue for the past decade or so. But the new hotness is NVMe-over-Fabrics. Basically, let’s take the SAN model … and stretch PCIe-attached NVMe across an interconnect. But … we need to encapsulate the NVMe traffic into InfiniBand or … Ethernet … packets first.

So now we have too many of those folks proudly making benchmark guesses as to where their systems will matter, and what they can deliver. Meanwhile, the actual numbers I see fall far short of what I was doing 2+ years ago. On a shoestring budget. With no venture backers.

I am not being bitter … ok … maybe a little … but I am being blunt. There are too many of these companies out there, without a valid defensible difference.

Yeah. I am not happy Coho died. It won’t be the last though.

We are into the culling phase of the market. Weaker organizations will be shut down. IP sold for a song.

Coho won’t be the last. Just on the leading edge.

Another aspect worth mentioning is the impact of clouds upon this. I’ll touch on this in a future post.

A completed project: mysqldump file to CSV converter

This was part of something else I’d worked on, but it never saw the light of day for a number of (rather silly) reasons. So rather than let these bits go to waste, I created a GitHub repo for posterity. Someone might be able to make effective use of them somewhere.

Repo is located here: https://github.com/joelandman/msd2csv

Pretty simple code: it does most of the work in-memory, using multiple regex passes to transform the dump and clean up the CSV.
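
To give a flavor of the approach, here is a minimal sketch of the idea (not the actual msd2csv code; the real tool’s table and quoting handling differ): slurp the dump in memory, pull the value tuples out of each INSERT statement, and clean each tuple into a CSV row with a few regex passes.

#!/usr/bin/env perl
# Sketch of a mysqldump -> CSV conversion via regex passes (illustrative only)
use strict;
use warnings;

local $/;                                    # slurp mode
my $dump = <>;                               # whole mysqldump file in memory

# grab the "(...),(...),..." value list from every INSERT statement
while ( $dump =~ /^INSERT INTO `[^`]+` VALUES (.+);$/gm ) {
    my $values = $1;

    # one parenthesized tuple per table row
    while ( $values =~ / \( ( (?: [^()'] | '(?:[^'\\]|\\.)*' )* ) \) /gx ) {
        my $row = $1;

        # split into fields: quoted strings or bare (numeric/NULL) values
        my @fields = $row =~ / ( '(?:[^'\\]|\\.)*' | [^,]+ ) /gx;

        for (@fields) {
            s/^'//; s/'$//;                  # drop surrounding quotes
            s/\\(['"\\])/$1/g;               # unescape \' \" \\
            $_ = '' if $_ eq 'NULL';         # NULL -> empty field
            if (/[",]/) {                    # minimal CSV quoting
                s/"/""/g;
                $_ = qq("$_");
            }
        }
        print join( ',', @fields ), "\n";
    }
}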

Finally got to use MCE::* in a project

There is a set of modules in the Perl universe that I’ve been looking for an excuse to use for a while: the MCE modules, which purportedly enable easy concurrency and parallelism, exploiting many-core CPUs via a number of techniques. Sure enough, I recently had a task to handle that required this.

I looked at many alternatives, and played with a few, including Parallel::Queue. I thought of writing my own with IPC::Run as I was already using it in the project, but I didn’t want to lose focus on the mission, and re-invent a wheel that already existed elsewhere.

Ok … so Parallel::Queue required that I alter my code significantly. I did so. And it didn’t work. Arguments were lost. I was able to launch things, but not what I intended, at least not without further hacking. E.g., the example test code didn’t work.

Remembering MCE, I explored that universe, and found MCE::Loop. Apart from the initialization block, I simply had to change 4 lines of my code. One of my outer foreach loops was converted to an

mce_loop {} @array

construct from the previous

foreach (@array) { ... }

construct. Two additional assignments to pull the data in … and …

It worked perfectly with the IPC::Run-based execution. Easily one of the best and fastest experiences I’ve had making a code concurrent/parallel.
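
For illustration, here is a minimal sketch of that kind of conversion (the task and names here are hypothetical, not the actual project code), assuming MCE::Loop with a chunk size of 1 and IPC::Run for the per-item work:

#!/usr/bin/env perl
# Sketch: foreach -> mce_loop conversion (illustrative only)
use strict;
use warnings;
use MCE::Loop;
use IPC::Run qw(run);

# the initialization block
MCE::Loop->init( max_workers => 'auto', chunk_size => 1 );

my @array = map { "job-$_" } 1 .. 100;        # stand-in work items

# was:  foreach my $job (@array) { ... }
mce_loop {
    my ( $mce, $chunk_ref, $chunk_id ) = @_;  # the extra assignments ...
    my $job = $chunk_ref->[0];                # ... to pull the data in

    my ( $out, $err );
    run [ '/bin/echo', $job ], '>', \$out, '2>', \$err;   # IPC::Run, as before
    print $out;
} @array;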

And yes, this was in Perl 5.24. The same code had to run identically across Linux and SmartOS. And it did.

Powerful tools make you more productive. I wish I could get Perl 6 built on SmartOS (I’m running into some bugs with recent versions of Rakudo Star). And I’m still working on getting Julia 0.6 to compile (dependency issues that I am working on resolving).

Cray “acquires” ClusterStor business unit from Seagate

Information at this link. It is being called a “strategic transaction”, though it likely came about via Seagate doing some profound and deep thinking about what business it is in.

Seagate has been weathering a storm, and has been working on re-orgs to deal with a declining disk market. It acquired ClusterStor as part of its earlier acquisition of Xyratex, and Xyratex was the basis for the Cray storage platforms (post Engenio).

So my guess (and no, I have no inside information at all, I’ve not spoken with friends at either organization) is that Seagate spoke with Cray and said something to the effect “look, we need to cut costs, focus on our core, trim the sails, batten down the hatches, and this business unit is on the chopping block. Do you want to take it off our hands? Along with the people?”

I am sure it wasn’t entirely like this.

Not entirely.

But it seems to approximately fit what I am seeing.

Generally, disk shipments are declining. We’ve likely hit peak disk. There will be a very long tail, but as with other industries in the past (buggy whips, tape drives and tape tech), I wouldn’t want to be only in that space.

Yeah yeah … some will argue with me that tape isn’t dead, or other things like that. They’ll pull out metrics showing incredible economics, longevity, and so forth. Amazon Glacier, they’ll say.

Nope.

Rigor mortis set in long ago in that market. There is a good reason why mostly-tape companies like Quantum are in a world of hurt with no real avenue of escape. This is buggy-whip manufacturing all over again.

This isn’t lost on Seagate (I presume … I know lots of smart people there). Not lost on Cray either (again with the smart people).

Likely there are lots of nice things in the deal. Specials on disk pricing, priority support and access. Lots of other goodness.

Let’s see what happens. But taken together with the changes at Intel around Lustre (getting out of that market), and the changes at a number of national labs that I am aware of, I think this is probably the right move for both orgs. Cray will have a real storage portfolio that it owns. Seagate will have reduced its cost base and head count while locking in a customer.

Could be a fun time to be at Cray.

More unix command line humor

Waaaay back in grad school in (mumble) late 80s/early 90s (/mumble), I started using Unix in earnest. Back then, my dad shared some funny Unix error messages which were double entendres … often quite entertaining, as the shell was effectively playing the straight man in a comedy duo. Without intentionally doing so (of course).

Nowadays, you can ask Siri about the airspeed of an unladen swallow and get something funny back, but that is because Siri has had that capability programmatically added. The Unix messages are funny because the humor is unintentional and ironic.

See this link for some of them.

With this background: late last week, I saw a reference to a BSD library function making the rounds at work. The call is ffs, and a ‘man 3 ffs’ on my Mac shows something like this.

FFS(3)                   BSD Library Functions Manual                   FFS(3)

NAME
     ffs, ffsl, ffsll, fls, flsl, flsll -- find first or last bit set in a bit
     string

LIBRARY
     Standard C Library (libc, -lc)

SYNOPSIS
     #include <strings.h>

     int
     ffs(int value);

Ok. This is part of the background. The other part is the common abbreviation indicating exasperation, which is FFS.

Now that this background is in place, let’s see if we can get some humor.

The man page shows up on my systems (Mac, SmartOS, Linux, etc.) under section 3. But I was given a man page section of 3c, so

landman@lightning:~$ man 3c ffs
No manual entry for ffs in section 3c

[For the Unix command-line-humor impaired: replace the ffs with the Urban Dictionary expansion and say that “No manual entry” line out loud with that substitution … not at work, or in front of small children, or on the phone …]

Thank you, thank you, I’ll be here all week.

What reduces risk … a great engineering and support team, or a brand name?

I’ve written about approved vendors and the “one throat to choke” concept in the past. The short take, from my vantage point as a small, not well-known, but highly differentiated builder of high performance storage and computing systems … was that this brand-specific focus was going to remove truly differentiated solutions from the market, while simultaneously lowering the quality and support of products in the market. The concept of a brand, and the marketing of a brand, is about erecting barriers to market entry against the smaller folk who might have something of interest, and the larger folk who might come in with a different ecosystem.

Remember “no one gets fired for buying IBM” ? Yeah, this is that.

The implication is that the same might not be true of vendors other than IBM.

This post is not about IBM BTW. Not even remotely.

It’s about the concept of risk reduction in vendor selection.

And what real risk reduction means.

Let’s look at this in terms of, say … RAID units. RAID, as a concept, is about distributing failure risk across N units, using a scheme that lets you survive and operate (albeit at reduced capability) in the event of a failure of a single unit, and in some cases, a failure of two units. RAID is not a backup (yeah, it is likely time I repost this warning). RAID is about giving operators time to replace a system component so that operations can continue.

Erasure coding is a somewhat more intensive version of this, but basically the same thing.

You make the (reasonable) assumption that you will have a failure. You have an architecture in place that is resilient to that failure. When a failure comes, within the design spec of that resiliency, you mitigate its impact if you follow protocol. Of course, if you have a failure outside of this spec, yes, you can lose data. Which is why we have layered protocols and systems for disaster recovery (replication, on- and off-site).

All of this matters. Whether you are building storage systems, large computing systems, clouds, etc.

The attention to detail, the base engineering, the ability to support … find problem root causes, and meaningful remediations/work arounds … all of this matters. And from my own experience, running a company that did these things, it matters far more than brands do.

A brand is meant to be an abstract mental concept … one that somehow represents how well a product should behave and the support/engineering behind it. However, a brand is rarely that. It is, really, just a name. There is little empirical evidence showing that slapping a particular label on the outside of a box does anything to make it better or more stable. And if I’m wrong, I’d love to see the peer-reviewed studies (a cursory Google search yielded a few popular anecdotal articles, with limited real analysis behind them).

My claim is that engineering matters. What you put into your design and implementation matters. What doesn’t matter, I claim, is the brand name on the box.

Sure, you can claim “they have better access to supply chain, OEMs, etc.”. And you may be right. Without revealing anything specific, I can tell you that this access doesn’t necessarily result in better outcomes.

Actually, if your boxes never have issues to begin with … well, you understand.

But more to the point. Architecture matters. Engineering matters. Support matters. Brand? Not so much.

If you or your company are making decisions based upon brands, it might be a good exercise to ask … “why” … is this being done? Is this risk reduction? If so, what risk can you quantifiably and empirically determine has been reduced? I am guessing this isn’t the real reason.

Is it comfort level with a vendor? That is, you know the brand names won’t go away, be sold off, or go into bankruptcy. Like IBM, Apple, Sun … er … oh wait.

What is the real reason that you have to buy vendor X?

And getting back to the RAID analogy above: in order to reduce risk, shouldn’t you have two vendors (at minimum) who can produce the same things with different parts? (It is possible that some parts may be in common … you can’t escape that; but hey, a VW and a Porsche both have pistons, and they are very different vehicles, engineered and built to different standards.)

It’s too late for my old company … though I am getting support requests to my personal email account even now … but in general, the question you need to ask yourself is: am I really reducing risk by concentrating risk? Will I really get better support from a behemoth that only wants to deal with massive customers, or from a smaller, dedicated team of experts who are highly focused on me, because … hey … I am core to their business, and they are invested in my success?

The question is, do you want a brand, or do you want solid engineering and support, invested in your success? Not every smaller company is like that.

Scalable was.

And we lost.

Because people wanted the single brand, single throat to choke.

On the other side of this now, I can see that the single throat to choke doesn’t result in better outcomes.

So … you are going to get failures. You should be engineering for this. Planning for this.

Who will support you better?

That is who you should buy from.

Anyone focusing on ease of procurement over quality of engineering needs to be pulled out of the decision and purchasing loop. Really.

My argument is that operational and project risk increases when you do this. I don’t have hard numbers to demonstrate this. Merely observations of this risk being realized in various forms. With the common aspect being the single large preferred vendor taking business that would likely have gone elsewhere.
