Cool bug on upgrade (not)

WordPress is an interesting beast. I spent hours on an upgrade working through issues I shouldn't have needed to, as some functions were deprecated.

Deprecated in an interesting way: by removing them outright and throwing an error. Which I found only by digging through a specific log.

So out goes that plugin. And the site is back.


#SC17

I’ve had numerous requests from friends and colleagues about whether I will be attending #SC17 this year. Sadly, this is not to be the case. $dayjob has me attending an onsite meeting that week in San Francisco, and the schedule was such that I could not attend the talks I was interested in.

I’d love for there to be a way to listen to the talks remotely. Maybe I’ll simply buy the DVD/USB stick of the talks if there is an online store for them.

Next year at #SC18 in Dallas if possible.

Enjoy, have fun @BeowulfBash, and please tweet/post what you see and hear.

And, for those who are not aware of some of the most awesome hardware out there for big data analytics and deep learning, have a look at @thedeadline and Basement Supercomputing. Best in market, designed and built by people who know how to use the machines, what they are used for, and why.

(unpaid/uncompensated endorsement … get out and support the small HPC guys, the ones who actually know what they are doing).


Disk, SSD, NVMe preparation tools cleaned up and on GitHub

These are a collection of (MIT licensed) tools I’ve been working on for years to automate some of the major functionality one needs when setting up/using new machines with lots of disks/SSD/NVMe.

The repo is here: https://github.com/joelandman/disk_test_setup . I will be adding some SAS secure erase and formatting tools to this.

These tools wrap other lower-level tools, and automate the common tasks you worry about when you are setting up and testing a machine with many drives. Usage instructions are in the code at the top … I will eventually add better documentation.

Here is the current list of tools (note: they aren't aware of LVM yet; let me know if you would like this):

  • disk_mkfsxfs.pl : takes every disk or SSD that is not mounted or part of an MD RAID, creates a file system on it, and creates a mount point under a path of your choosing (/data by default). This is used with the disk_fio.pl code.
  • disk_fio.pl : generates simple fio test cases, with which you can probe the I/O performance of the file systems on many devices (see the sketch after this list). The file names it generates include the case type (read, write, randread, randwrite, randrw), block size, and number of simultaneous IOs to each LUN. It uses the mount point you provide (defaults to /data) and creates test directories below this so you don't get collisions. To run the test cases, you need fio installed.
  • disk_wipefs.pl : removes any trace of file system metadata on a set of drives, if they are not mounted and not part of an existing MD RAID.
  • ssd_condition.pl : runs a conditioning write on a set of SSDs if they are not mounted or part of an MD RAID. If you are setting up an SSD-based machine, you are, of course, running something akin to this before using the SSDs … right? Otherwise, you'll get a nasty performance and latency shock after you transition from the mostly unallocated block scenario to the completely allocated scenario. This is especially painful when the block compression/garbage collection passes come through to help the FTL find more space to write your blocks. You can tell you are there if you see IO pauses after long sequences of writes. Also, conditioning helps improve the overall life of the drive. See this presentation around slide 7 and beyond for more info.
  • sata_secure_erase.pl : completely wipes the SSD (works with rotational media as well). A rough manual equivalent using hdparm is sketched at the end of this post.
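
To make the above a bit more concrete, here is roughly what these wrappers automate, expressed as the underlying commands. This is an illustrative sketch, not the scripts' actual output; the device name (/dev/sdX), mount point, and fio parameters below are placeholders.

# create a file system and mount point for one unused drive (what disk_mkfsxfs.pl does per device)
mkfs.xfs -f /dev/sdX
mkdir -p /data/sdX
mount /dev/sdX /data/sdX

# a representative job of the sort disk_fio.pl generates: random write, 4k blocks, 32 outstanding IOs
fio --name=randwrite-4k-32 --directory=/data/sdX --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio --direct=1 --size=16g

# a conditioning pass over a whole (unmounted!) SSD, akin to what ssd_condition.pl runs
fio --name=condition --filename=/dev/sdX --rw=write --bs=1m --iodepth=32 --ioengine=libaio --direct=1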

The user interfaces for these tools are admittedly spartan, and documented at the top of the code itself. This will improve over time. Have at them, MIT licensed, and please let me know if you use them or find them useful.
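
And regarding sata_secure_erase.pl: it presumably wraps something like the standard ATA security erase sequence that hdparm exposes. A minimal manual version (a sketch only; the drive must be unmounted and not security-frozen, and /dev/sdX plus the password are placeholders) looks roughly like:

# set a temporary security password, then issue the erase; this destroys all data on the drive
hdparm --user-master u --security-set-pass PASSWD /dev/sdX
hdparm --user-master u --security-erase PASSWD /dev/sdX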


Aria2c for the win!

I had not heard of aria2c before today. Sort of a super wget, as far as I can tell. It does parallel transfers to reduce data motion time, where possible.

So I pulled it down, built it. I have some large data sets to move. And a nice storage area for them.

Ok.

Fire it up to pull down a 2GB file.

Much faster than wget on the same system over the same network. Wow.
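
For the curious, the parallelism comes from the split/connection flags. A representative invocation (the URL and values here are placeholders taken from the aria2c documentation, not my exact command) looks like:

# up to 16 connections to the server, splitting the file into pieces of at least 1MB each
aria2c -x 16 -s 16 -k 1M https://example.com/some_large_dataset.tar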

Then I pulled down the rest of the ML data set. About 120GB in all.

Yeah, this is a good tool. Need to make sure it is on all our platforms.

Sort of like GridFTP, but far more flexible.

Definitely a good tool.


Working on benchmarking ML frameworks

Nice machine we have here …

root@hermes:/data/tests# lspci | egrep -i '(AMD|NVidia)' | grep VGA
3b:00.0 VGA compatible controller: NVIDIA Corporation GP100GL (rev a1)
88:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Vega 10 XTX [Radeon Vega Frontier Edition]

I want to see how TensorFlow and many others run on each of the cards. The processor is no slouch either:

root@hermes:/data/tests# lscpu | grep "Model name"
Model name:            Intel(R) Xeon(R) Gold 6134 CPU @ 3.20GHz

Missing a few things for this, as Amazon is a bit late on shipping some of the needed parts, but hopefully soon, I’ll be able to get everything in there.

Looking at the integrated TensorFlow benchmarks, which require ImageNet, as well as others. Feel free to point more out to me … happy to run some nice baseline/direct comparisons. I'd prefer open (shareable/distributable) benchmarks (alas, ImageNet isn't precisely this; I put in my request for download).
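
For the TensorFlow side, think of something along the lines of the tf_cnn_benchmarks scripts in the tensorflow/benchmarks repo. A representative invocation (the model and batch size are illustrative, and the script uses synthetic data unless you point --data_dir at ImageNet) looks something like:

python tf_cnn_benchmarks.py --model=resnet50 --batch_size=64 --num_gpus=1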

Everything else is fair game though. Planning on publishing what I find.


Oracle finally kills off Solaris and SPARC

This was making the rounds last week. Oracle seems to have a leak in its process, creating labels that triggered event notifications to people about their packages. Solaris was decimated. More details at the links and at The Layoff.

Honestly, I had expected them to reach this point. I am guessing that they were contractually obligated to provide Solaris/SPARC support to US government purchasers for at least 7 years. SGI went through a similar thing with IRIX: it had to maintain IRIX for N years (N being something like 7) after EOL.

After that contractual obligation expired, the question was: would the divisions be able to pay for themselves, and add positively to Oracle's bottom line?

Generally, Oracle is in a very high margin software business. Not hardware, which tends to be much lower margin. Yeah, they have Exadata (or is that now gone?), storage, and a few other things. But no one really looks to them any more as a leader in any aspect of the market. They are a very large player, with a set of core products that produce most of their revenue. They are working now to create a cloud, though it will likely be running Linux as the basis for their offerings, and likely be built by very low cost providers (Quanta et al.).

The calculus for their hardware division has been obvious for a long time. For Solaris, it has been getting clearer over time. The world moved on from Sun hardware in the early 2000s. It moved on from Solaris in the mid 2000s. Linux has largely supplanted the other Unixes in many cases (yeah, I know, many aren't happy with this).

Solaris, in some ways, survives through the illumos fork, SmartOS, etc. There is a bit of baggage from those roots, in terms of perception, OEM driver support, user space, etc. Some is admittedly self-inflicted … porting should be trivial, yet I keep running into library implementation differences … that effectively prevent me from moving code bases to it. Some of this is obviated by the LX-branded zone … running what appears to be a Linux kernel within a zone in the OS. Some is obviated by KVM. I remain hopeful that we'll see these issues solved in illumos, as they are part of what killed Solaris as a viable platform. The world moved on, coalesced around a specific platform and API, for better or worse, and others fell off. Even Microsoft, after decades of battling the upstart, finally gave in and did something similar … implemented Ubuntu Linux within Windows.

This is a good direction if you spend the time/effort to make sure you maintain compatibility with the fast changing kernel/environment, as well as make the underlying system as performant as possible. But you have to run hard to do this.

A great example of what happens when you set your flag and refuse to move it to adapt to platform changes is OS/2 and Win32 binaries. OS/2 refused to support a new Win32 technology at the time (I forget what it was called), which resulted in code breaking … slowly at first, and then with gathering speed as developers adopted the new tech. I am hoping that illumos can move faster than this.

As for Solaris, and SunOS, I used the latter first at MSU in the late 80s. Around the time I used Vaxen and other systems. Including some Ultrix boxen. Then at Wayne State, I used a buddy's SPARCstation (and many other unixes) to do my simulations in the early 90s. What got me off the SPARC was when the brand new 486 unit I bought was shown to be about 2x the speed. The only advantage the SPARC had was RAM: 64MB as I remember, versus 16MB in the 486. The Cray machine I ran on at PSC had quite a bit of RAM, and was also very fast.

Shortly after this, I started playing with SGIs and generally gave up on the SPARC units due to speed issues.

This said, I’ve always had a soft spot for real unix and workstation systems. At SGI I competed with them. At Scalable I used them and many others to help customers build scale out computing and storage systems.

Under Oracle, I thought they might have a chance if they were invested in. But apparently, this didn't quite happen. It is sad to see such an ignoble end to Solaris and SPARC. Though it was not unexpected.


M&A and business things

First up, Tegile was acquired by Western Digital (WDC). This is in part due to WDC’s desire to be a one stop shop vertically integrated supplier for storage parts, systems, etc. This is how all of the storage parts OEMs needed to move, though Seagate failed to execute this correctly, selling off their array business in part to Cray. Toshiba … well … they have some existential challenges right now, and are about to sell off their profitable flash and memory systems business, if they can just get everyone to agree …

This comes from the fact that spinning disk, while a venerable technology, has been effectively completely commoditized. It is an effective replacement for tape systems … and yes, I disagree strongly with a number of people I tremendously respect on this particular matter. Tape is, apart from specific niches, dead. Tape vendors and tape drive vendors are not long for this world, and are struggling mightily … though maybe not as intelligently as they could.

Over time, flash (TLC and QLC) will, IMO, get cost competitive per GB stored with spinning disk. Then the only thing holding back a full-on switch to flash will be manufacturing capacity. Until then, spinning disk will likely continue to dominate the slower and colder tiers of storage.

When that crossover hits, disk will be the new tape.

Many years ago, in grad school, a buddy leaving the program wrote all his directories out to vax-tape, because, you know, vaxes … and tapes … are forever. I gotta ask him how that is working out. I have 30 year old floppies I can still (barely) read. Not so sure on vaxes. Or vax-tapes. I may have an old DLT cartridge from backups of my home Indy from 20+ years ago sitting around. No way to read that tape. Drives don’t exist anymore. I do have some ATA drives from that time period, and the capability to attach them and read them.

Which is more permanent?

Ok, back to the vertical integration. WDC seems to be doing some of the right things. They are investing in building an ecosystem in which a customer can purchase offerings across a wide range of capabilities. This is IMO a sound direction, though it will require careful work to avoid competing with one's own customers.

Second, Coho Data closed. I've been saying this for years, but … storage arrays as a market are slowly drying up. There is somewhat of a long tail, but this is part of the reason EMC sold itself to Dell. The storage array market of all flavors and shapes (flash, disk, …) is contracting. More competitors are pursuing larger pieces of a shrinking market. Part of what drove WDC to buy Tegile and build out the rest of its plan appears to be a desire not only to have a foot in that market and help control the flux into other markets where it also has a presence, but also to manage the changes in that market by helping to shape how it evolves.

Coho died as the market is crowded, and non-differentiated hardware combined with an effectively random software stack of dubious actual value (which is the approach of a fairly large number of these groups) is not a viable strategy. Though with the right pedigree founders and connections, it appears you can liberate quite a bit of money from VCs in the process of failing.

Curiously, differentiated hardware, stuff that actually adds value to the software layers above it, does not seem to have been in vogue for the past decade or so. But the new hotness is NVMe-over-fabrics. Basically, let's take the SAN model … and use PCIe as an interconnect. But … we need to encapsulate the PCIe into InfiniBand or … Ethernet … packets first.

So now we have too many of those folks proudly making benchmark guesses as to where their systems will matter, and what they can deliver. Meanwhile, the actual numbers I see fall far short of what I was doing 2+ years ago. On a shoestring budget. With no venture backers.

I am not being bitter … ok … maybe a little … but I am being blunt. There are too many of these companies out there, without a valid defensible difference.

Yeah. I am not happy Coho died. It won’t be the last though.

We are into the culling phase of the market. Weaker organizations will be shut down. IP sold for a song.

Coho won’t be the last. Just on the leading edge.

Another aspect worth mentioning is the impact of clouds upon this. I’ll touch on this in a future post.


A completed project: mysqldump file to CSV converter

This was part of something else I'd worked on, but it never saw the light of day for a number of (rather silly) reasons. So rather than let these bits go to waste, I created a GitHub repo for posterity. Someone might be able to make effective use of them somewhere.

Repo is located here: https://github.com/joelandman/msd2csv

Pretty simple code: it does most of the work in memory, with multiple regex passes to transform and clean up the CSV.
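
The gist of the approach, as a sketch only (this is not the repo's actual code, and it will trip over values containing parentheses): pull each value tuple out of the INSERT statements with a regex pass, one tuple per CSV row, then clean up quoting in later passes.

perl -ne 'next unless /^INSERT INTO/; print "$1\n" while /\(([^)]*)\)/g' dump.sql > table.csv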
