Not-so-modern file system errors in modern file systems

On a system in heavy production use, whose metadata service sits atop an ext4 file system, we see this:

kernel:  EXT4-fs warning: ext4_dx_add_entry:1992: Directory index full!

Ok, where does this come from?

Ext3 had a hard limit of about 32000 subdirectories per directory (its link-count limit); the dir_index feature sped up lookups in big directories, but didn’t lift that limit.

Ext4 theoretically has no limit. Well, it’s 65000 subdirectories if you don’t use dir_nlink. We do use dir_index, but that only makes lookups faster; the feature you really want here is dir_nlink.
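The subdirectory limit is visible in a directory’s hard-link count, since every child’s “..” entry is a link back to the parent. A quick sketch on a throwaway directory (my demo, not from the original post):

```shell
#!/bin/sh
# Each subdirectory's ".." is a hard link to its parent, so a directory's
# link count is 2 + (number of subdirectories).  With dir_nlink, ext4
# pins the count at 1 once it would pass 65000 instead of refusing mkdir.
d=$(mktemp -d)
mkdir "$d/sub1" "$d/sub2"
nlink=$(stat -c %h "$d")
echo "$nlink"        # 4: two links of its own, plus two subdirectories
rm -rf "$d"
```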

  -O [^]feature[,...]
              Set or clear the indicated filesystem features (options) in  the
              filesystem.   More than one filesystem feature can be cleared or
              set by separating features  with  commas.   Filesystem  features
              prefixed  with  a  caret  character ('^') will be cleared in the
              filesystem's superblock; filesystem features  without  a  prefix
              character  or prefixed with a plus character ('+') will be added
              to the filesystem.

              The following filesystem features can be set  or  cleared  using
              tune2fs:

              dir_index
                          Use  hashed  b-trees  to  speed  up lookups in large
                          directories.

              dir_nlink
                          Allow more than 65000 subdirectories per directory.
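Per that man page, flipping the feature on (or off, with the caret) would look like this — a sketch only, with /dev/sdXN standing in for the real device:

```shell
# Add dir_nlink to an existing filesystem (placeholder device name):
tune2fs -O dir_nlink /dev/sdXN

# The caret syntax from the man page clears it again:
tune2fs -O ^dir_nlink /dev/sdXN
```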

So, obviously we have to turn this on, right? Before we do that, a quick tune2fs -l /dev/$dev to see what is currently in place:

Filesystem features:      has_journal ext_attr resize_inode dir_index filetype
                          needs_recovery flex_bg sparse_super large_file huge_file 
                          uninit_bg dir_nlink extra_isize

So … it’s already on? And not working?

Sometimes you gotta say whiskey tango foxtrot.

Yet another reason to use xfs and ditch ext*.

(n.b. our new SIOS v2 images will also let you build/use zfs file systems, by building and installing the kernel module needed for this on demand … so yes, we could use zfs as well)


5 thoughts on “Not-so-modern file system errors in modern file systems”

  1. You haven’t run into the maximum subdirectory limit (which ext4 “doesn’t have” – it works around that limit by simply not counting subdirs anymore, after some point).
    What you’ve run into is a limit in the “htree” metadata format for ext4; it is a depth-1 tree, and as you have seen, it can run out. The limits depend on filesystem block size and average name length, but with 4k blocks and 100-char average filenames, you can get to about 9.5 million directory entries. How many did you have? 🙂
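The commenter’s figure can be sanity-checked with back-of-the-envelope arithmetic — a sketch where the constants are simplified approximations of the on-disk layout, not exact ext4 values:

```shell
#!/bin/sh
# Rough capacity of a depth-1 htree: root block -> index blocks -> leaves.
BLOCK=4096                             # filesystem block size
NAMELEN=100                            # average filename length
# Each leaf dirent is roughly an 8-byte header plus the name, padded to
# a 4-byte boundary:
DIRENT=$(( (8 + NAMELEN + 3) / 4 * 4 ))
LEAF=$(( BLOCK / DIRENT ))             # dirents per leaf block
IDX=$(( BLOCK / 8 ))                   # 8-byte hash entries per index block
TOTAL=$(( IDX * IDX * LEAF ))
echo "$TOTAL"                          # roughly 9.7 million entries
```

That lands right around the “about 9.5 million” the commenter quotes, which is reassuring for a rough model.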

  2. Not entirely sure … it’s a production machine and I am a little worried about running a

    find /path | wc -l

    This is underneath a parallel file system, and it is quite possible that the users are using it :/ by dumping a few million files in a directory. I did ask them something along these lines, and got a roughly affirmative answer.

    Next maint period, I’ll try to run that find, just to see what happens. Thanks!
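As an aside, for a single directory there is a gentler count than a recursive find: ls -f skips both sorting and per-file stat() calls, so it just streams the directory’s entries. A sketch on a throwaway directory (the real path would go in its place):

```shell
#!/bin/sh
# GNU ls -f: no sort, no per-entry stat(), and it implies -a, so it
# simply streams the directory contents -- far cheaper than a recursive
# find on a busy production machine.  Demo with a throwaway directory:
d=$(mktemp -d)
touch "$d/a" "$d/b" "$d/c"
n=$(ls -f "$d" | wc -l)
echo "$n"        # 5: three files plus "." and ".."
rm -rf "$d"
```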

Comments are closed.