Class NamenodeFsck

java.lang.Object
org.apache.hadoop.hdfs.server.namenode.NamenodeFsck
All Implemented Interfaces:
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataEncryptionKeyFactory

@Private public class NamenodeFsck extends Object implements org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataEncryptionKeyFactory
This class provides rudimentary checking of DFS volumes for errors and sub-optimal conditions.

The tool scans all files and directories under an indicated root path. The following abnormal conditions are detected and handled:

  • files with blocks that are completely missing from all datanodes.
    In this case the tool can perform one of the following actions:
    • move corrupted files to the /lost+found directory on DFS (doMove); remaining data blocks are saved as block chains, each representing the longest consecutive series of valid blocks.
    • delete corrupted files (doDelete)
  • files with under-replicated or over-replicated blocks
Additionally, the tool collects detailed overall DFS statistics and can optionally print per-file statistics on block locations and replication factors.
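In practice this checker is driven through the `hdfs fsck` command line rather than instantiated directly. A minimal sketch of common invocations, assuming a running cluster; the paths below are illustrative:

```shell
# Report overall health of the namespace and list corrupt files, if any:
hdfs fsck / -list-corruptfileblocks

# Print per-file details, block IDs, and datanode locations:
hdfs fsck /user/data -files -blocks -locations

# For files with missing blocks, either salvage the remaining block
# chains into /lost+found (doMove) ...
hdfs fsck /user/data -move

# ... or delete the corrupted files outright (doDelete):
hdfs fsck /user/data -delete
```

`-move` and `-delete` are mutually exclusive; without either, fsck only reports the conditions it finds.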
  • Method Details

    • getAuditSource

      public String getAuditSource()
    • blockIdCK

      public void blockIdCK(String blockId)
      Check block information given a block ID.
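This lookup is exposed through the `-blockId` option of the fsck command line. A sketch, with an illustrative block ID:

```shell
# Look up which file a block belongs to and the state of its replicas,
# given a block ID (blk_1073741825 is a placeholder):
hdfs fsck / -blockId blk_1073741825
```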
    • fsck

      public void fsck() throws org.apache.hadoop.security.AccessControlException
      Check files on DFS, starting from the indicated path.
      Throws:
      org.apache.hadoop.security.AccessControlException
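This method is invoked on the NameNode when a client runs the fsck command; the scan is performed with the caller's HDFS permissions, which is presumably why it can throw AccessControlException for subtrees the caller cannot read. A sketch, with an illustrative path:

```shell
# Scan a specific subtree; fails with an AccessControlException-derived
# error if the invoking user lacks read access to /user/alice:
hdfs fsck /user/alice -files -blocks
```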
    • newDataEncryptionKey

      public org.apache.hadoop.hdfs.security.token.block.DataEncryptionKey newDataEncryptionKey() throws IOException
      Specified by:
      newDataEncryptionKey in interface org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataEncryptionKeyFactory
      Throws:
      IOException