Why doesn’t LinkedIn make removing a contact easy?

I don’t get this. Yeah, sure, your contacts are curated, and I don’t accept everyone. I need to see some aspect of a connection and be pretty sure they won’t spam me personally or try to spam my contacts.

So when I find out that this is what happens, I want to block their access to me. Which usually means un-connecting with them.

So why does LinkedIn make this effectively impossible in the phone apps? And why is it horrendously hard on their web page? Yeah, it’s causing me to think about this more than I like.

If someone does something I don’t want them to do, I want full control over the “off” button. But LinkedIn hides this and makes it hard for you to switch it off.

Maybe removing a connection makes their algos work harder to reconstruct the graphs?

I dunno. I wish they had a mute button or an off switch for problematic contacts (or ones who go rogue).

Where have you been all my life, FFI::Platypus?

Oh my … this is goodness I’ve been missing badly in Perl. Just learned about it this morning.

Short version: you want to mix programming languages in a project. One language makes development of some subset of functions very easy, while another language handles another part very well. You usually need some sort of layer to handle this, or a way to sanely map calls and data between the two. A foreign function interface (FFI) is the concept behind this … and while there is no mention of CORBA or XDR/RPC type things, this is the logical follow-on to those (in their time) groundbreaking technologies.

Python has had a mind-numbingly simple mechanism for FFI to C code for a while. The ctypes module made using external C libs trivial in the language.

I’d been wanting this for a long time for Perl. And now we have it with FFI::Platypus. We had something … well … roughly like this with the Inline:: modules for a while, but it did XS things, and XS is not to be trifled with. One needs to bring burnt offerings to the deities of XS (yeah, a pun), in order to use it correctly.

But the new library looks to make this very bloody simple. So I need to play with this.
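To get a feel for it, here is a minimal sketch along the lines of the module’s own synopsis … binding libc’s puts() from Perl, with no XS and no compile step. I haven’t beaten on this yet, so treat it as a sketch rather than gospel.

use strict;
use warnings;
use FFI::Platypus;

# Search the symbols already loaded into this process (i.e. libc), then
# bind puts() as an ordinary Perl sub.
my $ffi = FFI::Platypus->new;
$ffi->lib(undef);                             # undef = the current process
$ffi->attach( puts => ['string'] => 'int' );

puts('Hello from libc, via FFI::Platypus');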

Like a kid in a candy store I am …

[Update] Debunked … (was: IBM layoffs to hit 25% or so of the company)

[Update] As I had wondered, and others suggested to me, this number (25%) was likely a clickbait fabrication.

Forbes and others also “fell for it.”

I’ll admit I did as well. It was too large to ignore, but it also didn’t make sense. Close down mainframe and storage? Seriously?

Let’s call this what it is: an internet rumor that was busted.

Paraphrasing Mark Twain: “An internet rumor can travel around the world while the truth is still putting on its shoes.”

My apologies to IBM for repeating the rumor.

=== old and busted below ===

Finally, a desktop Linux that just works

I’ve been a user of Linux on the desktop, as my primary desktop, for the last 16 years. In that time, I’ve also had laptops with Windows flavors (95, XP, 2000, 7) and a Mac OS X desktop. Before that, the first laptop I bought (while working on my thesis) was a triple-boot job, with DOS, Windows 9x, and OS/2. I used the latter for when I was traveling and needed to write; the thesis was written in LaTeX and I could easily move everything back and forth between that and my Indy at home and my office Indigo.

During the SGI years, I used IRIX mostly for desktop stuff, and it was very nice. It was, IMO, the best user interface I’d seen to date, inclusive of Windows. Far better than the Mac of that era (really … no comparison). The text editors mostly sucked though … I wound up using nedit for almost everything.

After leaving SGI, I resolved that I would use desktop Linux in some form or other. I started out on a Dell laptop with Mandrake (the flavor of the day then). Moved on to SuSE (driven in part by a customer who used it). SuSE wasn’t actively unfriendly, it’s just that its UX was … well … not for the faint of heart.

None of these would be reasonable to give to my wife and daughter to use on their machines.

I moved from SuSE everywhere to CentOS on the servers and Ubuntu on the desktop and laptop around 2007 or so. CentOS seemed to make sense to me then for server bits. Ubuntu around 8.04 was really quite good.

But it started going downhill around 10.x. The UX sucked in 11.x and 12.x with the conversion to Unity.

I left the servers on CentOS, and moved the laptop and desktop to Linux Mint. This is an Ubuntu rebuild (which is itself a Debian rebuild). Mint is focused on a very easy UX: you shouldn’t have to worry about stuff, it should all just work. I previously had not had that experience with Linux. Nor Windows, for that matter.

I started out around Mint 12 with Cinnamon. That is a reworking of the Gnome desktop into a paradigm I find comfortable. They also have a MATE version, which is reminiscent of the SuSE interface, but I really didn’t like that.

Mint was much better than Ubuntu, but sometimes I had interesting and astounding failures. Mint doesn’t believe in upgrades, for one. Either you are on the long term support (LTS) release, or you are on the 6-month cycle. The latter is more “bleeding edge”, though you get support for up to 18 months. The former is “more stable” and you get longer support.

Some of the spectacular failures were around the NVidia graphics side. Nouveau, the open source NVidia driver, was not terribly good, and would, as often as not, hard-lock my machines. I had a devil of a time ripping it out of a few machines to replace it with the closed source but mostly working version.

I replaced the NVidia card in the office with an AMD card for a while, but AMD’s drivers were just terrible and quite unstable when used in accelerated mode. This appeared to not be Linux specific, but more related to driver quality.

I moved the desktop in the office over to LMDE, which is the Linux Mint built on the Debian base rather than the Ubuntu base. Slightly different basis, same experience. Generally very stable. Swapped in a newer NVidia card and drivers. Now it is rock solid.

I moved the home machine to Linux Mint 16 and still had some weird problems. It was annoying enough that it hit my productivity. Then 17 and 17.1 came out to rave reviews. I decided to update one of my machines.

Two weeks later, after very heavy use, I can say a number of things:

  1. Installation was a breeze. This is the first time I didn’t have to fiddle with boot line parameters to disable nouveau; it simply behaved correctly.
  2. It worked with everything, with no fuss, out of the box, with the bare minimum of configuration on my part.
  3. Stability. Oh … my … best … Linux … desktop … experience … ever

I can’t say enough good things about Linux Mint 17.1 Cinnamon edition. It really is the best desktop/laptop experience I’ve had to date, inclusive of the Mac OS X machines.

I’ve got one outstanding annoyance on one machine, but it’s minor enough for me not to care so much.

Server side, we are rolling everything over to Debian. Or possibly the Devuan rebuild if I can’t get systemd to behave … though Mint 17.1 uses systemd and it doesn’t seem to suck.

This is definitely one that would work well for my family to use.

Stateless booting

A problem I’ve been working on for a while has been the sad … well … no … terrible state of programmatically configured Linux systems, where the state is determined from a central (set of) source(s) via configuration databases, and NOT by local stateful configuration files. Madness lies in wait for those choosing the latter strategy, especially if you need to make changes.

All sorts of variations on this theme have been used over the last decade or so. Often programmatic tools like Chef or Puppet are there to push configuration to a system. This of course breaks terribly with new systems, and the corner cases they bring up.

Other approaches have been to mandate one particular OS and OS version, combined with a standard hardware configuration. Given how hardware is built by large vendors, that word “standard” is … interesting … to say the least.
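To make the idea concrete, here is a purely hypothetical sketch of what a stateless pull at boot might look like. The endpoint name and the JSON layout are invented for illustration; the point is simply that the node asks a central source for its state and keeps nothing locally.

#!/usr/bin/perl
# Hypothetical sketch only: at boot, ask a central source for this node's
# configuration instead of trusting stateful local files. The URL and the
# JSON shape below are made up for illustration.
use strict;
use warnings;
use HTTP::Tiny;
use JSON::PP qw(decode_json);
use Sys::Hostname qw(hostname);

my $host = hostname();
my $res  = HTTP::Tiny->new->get("http://config.example.internal/nodes/$host");
die "no central configuration found for $host\n" unless $res->{success};

my $config = decode_json( $res->{content} );

# Apply whatever the central source says; nothing is persisted locally, so a
# reboot (or a reimage) always converges to the same state.
for my $iface ( @{ $config->{interfaces} || [] } ) {
    print "would configure $iface->{name} -> $iface->{address}\n";
}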

Coraid may be going down

According to The Register. No real differentiation (AoE isn’t that good, and the Seagate/Hitachi network drives are going to completely obviate the need for such things).

We once used and sold Coraid to a customer. The Linux client side wasn’t stable. iSCSI was coming up and was actually quite a bit better. We moved over to it. This was during our build-vs-buy phase: we weren’t sure if we could build a better box. After getting one and using them for a customer, yeah, we were very sure ours were better.

On the performance side, they never really had anything significant.

Such is life. I hate watching companies go down, even if they are nominally competitors.

Anatomy of a #fail … the internet of broken software stacks

So I’ve been trying to diagnose a problem with my Android devices running their batteries down very quickly. And at the same time, I’ve been trying to understand why the address bar in Thunderbird has been taking a very long time to respond.

I made the connection earlier today when I noticed the 50k+ contacts in my contact list, of which maybe 2000 were unique.

I didn’t quite understand it. Why … no … where … were all these contacts coming from? And why were there so many duplicates?

In the brave new world of #IoT, we are going to have many interacting stacks. And these stacks are going to have bugs. And some of the failure modes are going to be … well … spectacular.

This is one such failure mode, that I happily caught in time.

Here is how I have pieced it together thus far. We had a runaway amplification of contacts due to some random buggy app. It may have been one of the sync bits in Thunderbird, or on my old iPhone and iPad, or on my new Android, or whatever.

It doesn’t matter what it was. What matters is what happened, and how the failure progressed.

And it shows why remarkably simple, and stupid (e.g. #IoT level) code can result in something akin to a positive feedback loop.

One of the buggy apps apparently either pulled an extra set of contacts from Google, or pushed an extra set to Google. Doesn’t matter which.

Google’s contact manager is dumb. It could be smarter. Far smarter. Say, for example, if another app attempts to push a duplicate contact to it, instead of accepting it, it should simply move on to the next contact. Rinse and repeat.

This would have stopped what amounted to a denial of service on my devices, cold.

But it didn’t.
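In code terms, the behavior I’m asking for is dedup-on-write … refuse an exact duplicate at the point where a client pushes it. A toy sketch of that idea (my illustration, not Google’s actual logic):

use strict;
use warnings;

my %seen;     # dedup key => already stored
my @store;    # the accepted contacts

# Accept a pushed contact only if an identical one is not already stored.
sub accept_contact {
    my ($contact) = @_;
    my $key = join "\0", map { lc( $contact->{$_} // '' ) } qw(name email phone);
    return if $seen{$key}++;          # exact duplicate: skip it, move on
    push @store, $contact;
}

# A buggy client can re-push the same contact ten thousand times and the
# store still only grows by one entry.
accept_contact( { name => 'Jane Doe', email => 'jane@example.com' } ) for 1 .. 10_000;
print scalar(@store), " contact stored\n";    # prints "1 contact stored"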

So some buggy app synced. And then resynced, and then resynced.

And the poor little androids and other devices spent more and more time syncing. And more and more battery syncing.

This is the ultimate in secondary denial of service attacks. Don’t attack the device, but cause it to run out its power by leveraging its normal functionality. The denial of service comes through a shutoff due to running out the battery. A second-order, or indirect, attack vector.

Neat, huh? This is what we have to look forward to.

For reasons beyond my comprehension, each sync resulted in a doubling of contacts. How often they synced is not known. What is known is that I had well over 10k contacts for one person, all identical. With a doubling per sync, it only takes about 14 rounds to go from one contact to more than 16,000 copies.

So I cleaned that out.

And later today, I found I had that again.

So I stopped everything from syncing against it. Everything. It’s now a one-way pull from Google’s contact manager.

Because Google’s contact manager is really, hilariously, stupid. The remove duplicates function, though? Good idea. Though, I dunno, why not make it automatic?

But that’s not the main point of this. The real point is that IoT is going to be ripe for unintended abuse, not to mention intentional abuse. Denial of service at a level not comprehended before.

’Tis a brave new world. Also known as: be careful what you wish for, you just might get it.

Software has eaten the world, and we might just regret letting this happen.

Drivers developed largely out of kernel, and infrequently synced

One of the other aspects of what we’ve been doing has been forward porting drivers into newer kernels, fixing the occasional bug, and often rewriting portions to track interface changes.

I’ve found that subsystem vendors seem to prefer to drop code into the kernel very infrequently. Sometimes they sync only once every few years. Which leads to distro kernels having often terribly broken device support. And often very unstable device support.

That may work fine for a web server or other lightly loaded cloud-like system, but when you push serious metal very hard, bad things happen to these kernels with their badly out of date device drivers. We know, we push them hard. So do our customers.

So I’ve been forward porting a number of drivers, and I gotta say … I really … really … am not having fun dealing with all the fail I see in the source. Our makefiles are chock full of patches to the kernels to handle these things.

I wish that a requirement for having a driver in the Linux source tree was that it be no more than 6 months out of date with the current vendor driver revisions.

In our kernels, these things will just work. We’ll offer the patches back to the driver folks, but I don’t think they’ll want them. Past experience suggests as much.

Parallel building Debian kernels … and why it’s not working … and how to make it work

So we build our own kernels. No great surprise, as we put our own patches in, our own drivers, etc. We have a nice build environment for RPMs and .debs. It works, quite well. Same source, same patches, same makefile driving everything. We get shiny new and happy kernels out the back end, ready for regression/performance/stability testing.

Works really well.

But …

but …

parallel builds (i.e. leveraging more than 1 CPU) work only for the RPM builds. The .deb builds, not so much.

Now the standard mechanism to build debian kernels involves some trickery including fakeroot, make-kpkg, and other things. These autogenerate Makefiles, targets, etc. based upon the rule sets.

Fine, no problem with this. I like autogenerated things. Actually I often like programmatically generated things better than human-generated things, as the latter invariably have crap you really don’t want in there. Not that the others don’t, but there is mysticism around the existence of some things in people’s build environments, versus empirical reality.

The canonical mechanism is to use CONCURRENCY_LEVEL=N for N=some integer.

Fine. Use it in the makefile. And …

We have a stubborn single-threaded build. It will not change.

Fine, let’s capture the output and make it verbose. Look for the concurrency level in the output. See if something is monkeying with it.

scalablekernel@build:~/kernel/3.18$ grep CONCUR out
export CONCURRENCY_LEVEL=8
cd linux-"3.18""" ; export CONCURRENCY_LEVEL=8 ; fakeroot make-kpkg -j8 --initrd --append-to-version=.scalable --added_modules=arcmsr,aacraid,igb,e1000,e1000e,ixgbe,virtio,virtio_blk,virtio_pci,virtio_net --overlay-dir=../ubuntu-package --verbose buildpackage --us 
DEB_BUILD_OPTIONS="" CONCURRENCY_LEVEL=1     \

/sigh

I look to see where that is coming from. Looks like debian/ruleset/targets/common.mk. Which is, in turn, coming from /usr/share/kernel-package/ruleset/targets/common.mk. Look for concurrency and you see this snippet:

debian/stamp/build/buildpackage: debian/stamp/pre-config-common
        $(REASON)
        @test -d debian/stamp      || mkdir debian/stamp
        @test -d debian/stamp/build || mkdir debian/stamp/build
        @echo "This is kernel package version $(kpkg_version)."
ifneq ($(strip $(HAVE_VERSION_MISMATCH)),)
        @echo "The changelog says we are creating $(saved_version)"
        @echo "However, I thought the version is $(KERNELRELEASE)"
        exit 1
endif
        echo 'Building Package' > stamp-building
# work around idiocy in recent kernel versions
# However, this makes it harder to use git versions of the kernel
        $(save_upstream_debianization)
        DEB_BUILD_OPTIONS="$(SERIAL_BUILD_OPTIONS)" CONCURRENCY_LEVEL=1     \
          dpkg-buildpackage $(strip $(int_root_cmd)) $(strip $(int_us))     \
            $(strip $(int_uc)) -j1 -k"$(pgp)"  -m"$(maintainer) < $(email)>"
        rm -f stamp-building
        $(restore_upstream_debianization)
        echo done >  $@

[starts banging head against desk again]

Force concurrency level to 1, AND then force -j1. Oh dear lord.

Let’s see if switching these back to 8 helps (8-core machine).
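Concretely, that means relaxing those two hard-coded values in /usr/share/kernel-package/ruleset/targets/common.mk. A quick and dirty sketch of the edit (back up the file first; this is not a supported fix, and a kernel-package upgrade will happily overwrite it):

# Hypothetical one-liner: let dpkg-buildpackage run 8-wide instead of being
# pinned to CONCURRENCY_LEVEL=1 and -j1. Keeps a .bak copy of the original.
perl -i.bak -pe 's/CONCURRENCY_LEVEL=1/CONCURRENCY_LEVEL=8/; s/-j1\b/-j8/' \
    /usr/share/kernel-package/ruleset/targets/common.mk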

Why … yes, yes it does …

Grrr

[Update] The deities of kernel building are not kind. It appears that a parallel build in Debian actually breaks other things that it should not break. I have a choice of a very slow (1+ hour) kernel + module build that works, or a fast (roughly 5-10 minute) kernel + module build that fails because of a causality violation (i.e. someone couldn’t figure out how to make their code run in parallel).

So, if I have time in the next month or so, I am going to find that very annoying serializer, take it out behind the barn, and put it out of my misery.

Amusing #fail

I use Mozilla’s Thunderbird mail client. For all its faults, it is still the best cross-platform email system around. Apple’s mail client is a bad joke and only runs on Apple devices (go figure). Linux’s many offerings are open source and portable, but most don’t run well on my Mac laptop. I no longer use Windows apart from running it in a VirtualBox environment. And I would never go back to Outlook anyway (used it once, 15 years ago or so … never again).

Since I am using Thunderbird, and our day job mail leverages Google’s Gmail system, I like to keep contacts in sync.

This is where the hilarity begins. And so does the #fail.

A long time ago, in a galaxy far, far away, contact management was easy. You had simple records, a single mail client or two. Everything sorta just worked … because, standards.

Then walled gardens arose. Keep the customer using your product. Prevent information outflow, but use information inflow. Break things in subtle ways.

Thus arose contact managers/importers, and things were again good in the world.

Until those in the walled gardens (Apple, Google) decided to break other things as, you know, they started to compete more.

Those contact importers for Thunderbird worked, but pretty soon the address bar slowed down. Type in an address and wait 10 seconds or so for it to autocomplete.

Mind you, this is on a desktop system with 24 physical processors, 48 GB of RAM, high-end NVidia graphics, two displays, an SSD OS drive, and 5 TB of local storage at 1 GB/s. This is not a slow machine. It’s actually bloody fast. One of our old Pegasus units we no longer build. Easily the best desktop that ever graced a market, but it failed as a product because people want cheap crap on their desktop, not good crap.

Damn it, I am grousing.

Ok, back to the story.

So there I am, wondering why it’s taking 10+ seconds to autocomplete an address. It’s a database lookup, dammit; it should be indexed and fast. Unless … unless … they are doing something INSANE like, I dunno, not using a database with indices. That would manifest as a long delay searching a large “database”. So let me look at my address book. Over 10+ years, I’ve curated about 3.5k addresses; I should see something not unlike that.

The two imported address groups from Google in the address book have … let’s see. 50k addresses between the two of them.

This is 50k of pure #fail.
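To see why 50k entries would make an unindexed autocomplete crawl, here is a toy of my own (emphatically not Thunderbird’s actual code): a linear scan touches every entry on every keystroke, so the delay grows with the size of the store, while even a crude index only touches the relevant bucket.

use strict;
use warnings;

# 50k fake addresses spread across a handful of name stems, standing in for
# my bloated, duplicate-ridden imported address books.
my @stems = qw(alice bob carol dave erin frank grace heidi ivan judy);
my @addresses = map { $stems[ $_ % @stems ] . sprintf( '.%05d@example.com', $_ ) } 1 .. 50_000;

# Unindexed: scan every entry on every keystroke. Cost grows with store size.
sub complete_linear {
    my ($prefix) = @_;
    return grep { index( $_, $prefix ) == 0 } @addresses;
}

# Crude index on the first two characters: a lookup only scans one bucket.
my %bucket;
push @{ $bucket{ substr $_, 0, 2 } }, $_ for @addresses;

sub complete_indexed {
    my ($prefix) = @_;
    my @candidates = @{ $bucket{ substr $prefix, 0, 2 } || [] };
    return grep { index( $_, $prefix ) == 0 } @candidates;
}

printf "linear: %d matches, indexed: %d matches\n",
    scalar( complete_linear('grace.0001') ), scalar( complete_indexed('grace.0001') );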

I don’t know whom to blame.

Happily, Google’s gmail has a find/merge duplicates function.

Start using that.

20 iterations later (it stops at 2500 copies of the same entry … go figure), this address book is down to 700 addresses, with no duplicates.

Oh. Dear. Lord.

So much #fail. So little time.

So I’m disabling the contact manager from updating Google’s contacts. Keeping these in sync is an unsolved, or poorly solved, problem.

Walled gardens suck.
