In the current implementation of the BeeGFS metadata server, all files in a directory are stored on the same metadata server as their parent directory.
Imagine a very large directory containing tens of thousands of files (or more) that is visited frequently: the load on the metadata server holding this directory would become very high, possibly even causing response timeouts. Could the files in one directory be hashed into multiple shards and stored across multiple metadata servers? I think this would avoid the performance issue. As far as I know, Lustre supports this feature.
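To illustrate the idea, here is a minimal sketch of hash-based directory sharding. This is not BeeGFS code; `pick_metadata_server` and the server count are hypothetical, and the point is only that hashing each entry's name spreads a single directory's entries roughly evenly across several metadata shards instead of concentrating them on one server:

```python
import hashlib

def pick_metadata_server(filename: str, num_servers: int) -> int:
    """Map a file name to one of num_servers metadata shards.

    Hypothetical illustration only: hash the entry name and take it
    modulo the shard count, so entries of one directory spread out.
    """
    digest = hashlib.md5(filename.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_servers

# 10,000 files in one directory, spread over 4 hypothetical servers.
counts = [0] * 4
for i in range(10000):
    counts[pick_metadata_server("file_%05d" % i, 4)] += 1
print(counts)  # each shard receives roughly a quarter of the entries
```

The trade-off, of course, is that operations such as `readdir` or renames within the directory would then have to contact several servers instead of one.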