Need to look at MooseFS

Looks similar to a number of others, but what's interesting is that it keeps its metadata in RAM. How much of an impact that has on update performance depends upon the efficiency of the network stack, and how much safety it provides depends upon its ability to recover from unplanned outages … that is, it can't just run in RAM and occasionally update something on disk.
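The recovery concern above boils down to a well-known pattern: serve all metadata from RAM, but journal every mutation to an append-only changelog on disk before acknowledging it, so the in-memory state can be rebuilt after a crash. A minimal sketch of that pattern (this is the general technique, not MooseFS's actual implementation; all names here are hypothetical):

```python
import json
import os

class InRamMetadata:
    """Metadata served from RAM, made crash-recoverable via a changelog."""

    def __init__(self, changelog_path):
        self.path = changelog_path
        self.meta = {}  # all metadata lives in RAM
        # Crash recovery: replay the changelog to rebuild in-memory state.
        if os.path.exists(changelog_path):
            with open(changelog_path) as f:
                for line in f:
                    self._apply(json.loads(line))
        self.log = open(changelog_path, "a")

    def _apply(self, op):
        if op["action"] == "set":
            self.meta[op["name"]] = op["attrs"]
        elif op["action"] == "delete":
            self.meta.pop(op["name"], None)

    def set(self, name, attrs):
        op = {"action": "set", "name": name, "attrs": attrs}
        # Log durably *before* applying, so an ack never outruns the disk.
        self.log.write(json.dumps(op) + "\n")
        self.log.flush()
        os.fsync(self.log.fileno())
        self._apply(op)

    def get(self, name):
        return self.meta.get(name)  # reads never touch the disk
```

The fsync on every update is where the network stack and disk latency collide: reads are RAM-speed, but each update pays for durability, which is exactly the trade-off being weighed above.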

Gotta look at this more though, as it could be interesting as a front-end FS to something else on the back end.



3 thoughts on “Need to look at MooseFS”

  1. I don’t know if I mentioned it to you somewhere, but I’ve also taken a second look at MooseFS and it shows quite a bit of promise. While I’m not wild about any kind of single-metadata-server approach, theirs (unlike some I could name) does at least show some concern/awareness about recovery/downtime issues.

    Also of potential interest is OrangeFS. They were all over FAST’11 so I got a chance to talk with some of the people involved, and mostly liked what I heard. Again, there seems to be a greater-than-typical (or at least greater-than-before) awareness of the need for redundancy at the filesystem level, and of the fact that MPI-mediated access to large files isn’t the only access pattern worth worrying about.

    I’m hoping to compare GlusterFS, MooseFS, OrangeFS, and Ceph using the same workload on the same hardware some time in the next few months. I’ll keep you posted.

  2. @Jeff

    OrangeFS appears to be part of PVFS. I am also not enamored of the single resource sharing model, but it does look like Moose is getting beyond that.

    We have a nice design for our siCluster in terms of putting these bits as a presentation layer above the actual storage, so one of the things we can do (today) is to provide a single storage platform, and whatever file system is required. Even multiple, at once, on the same physical hardware. We are looking at this as a way to help people with migration, setup, etc. issues.

  3. OrangeFS isn’t just a part of PVFS; it’s the main branch of PVFS. Apparently the “blue” (Argonne-led) and “orange” (Clemson-led) branches were created a while ago to facilitate development in different directions, and they flipped which was the main branch in “fall 2010” (http://www.orangefs.org/documentation/releases/current/doc/pvfs2-faq/pvfs2-faq.php). Particular areas of focus seem to be large directories (based on the GIGA+ work at CMU), capability-based security, better caching for small/unaligned requests, and built-in redundancy. Throw in better documentation and IMO you have something much more relevant and useful for people outside the national labs.
