Class BlockManager

java.lang.Object
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager
All Implemented Interfaces:
BlockStatsMXBean

@Private public class BlockManager extends Object implements BlockStatsMXBean
Keeps information related to the blocks stored in the Hadoop cluster. For block state management, it tries to maintain the safety property "# of live replicas == # of expected redundancy" under any event, such as decommission, namenode failover, or datanode failure.

The motivation for maintenance mode is to allow admins to repair nodes quickly without paying the cost of decommission. Thus, with maintenance mode, the # of live replicas does not have to equal the # of expected redundancy. If any replica is in maintenance mode, the safety property is extended as follows. These properties still hold in the case of zero maintenance replicas, so they can be used for all scenarios:

a. # of live replicas >= # of min replicas for maintenance.
b. # of live replicas <= # of expected redundancy.
c. # of live replicas + # of maintenance replicas >= # of expected redundancy.

For regular replication, the # of min live replicas for maintenance is determined by DFSConfigKeys.DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY. This number has to be <= DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_KEY. For erasure coding, the # of min live replicas for maintenance is BlockInfoStriped.getRealDataBlockNum().

Another safety property is to satisfy the block placement policy. While the policy is configurable, it is applied to the live replicas + maintenance replicas.
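The three maintenance-mode safety properties above can be sketched as a single boolean check. This is an illustration only; the class, method, and parameter names below are hypothetical stand-ins, not BlockManager members:

```java
// Illustrative sketch of the extended safety property described above.
// All names here (isSafe, live, maintenance, expected, minMaintenanceReplicas)
// are hypothetical, not part of the BlockManager API.
public class MaintenanceSafetyCheck {
    static boolean isSafe(int live, int maintenance,
                          int expected, int minMaintenanceReplicas) {
        return live >= minMaintenanceReplicas     // property (a)
            && live <= expected                   // property (b)
            && live + maintenance >= expected;    // property (c)
    }
}
```

Note that with zero maintenance replicas, properties (b) and (c) together reduce to the original invariant "# of live replicas == # of expected redundancy".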
  • Field Details

    • LOG

      public static final org.slf4j.Logger LOG
    • blockLog

      public static final org.slf4j.Logger blockLog
    • neededReconstruction

      public final org.apache.hadoop.hdfs.server.blockmanagement.LowRedundancyBlocks neededReconstruction
      Stores the set of blocks that need to be replicated one or more times, as well as pending reconstruction orders.
    • maxReplication

      public final short maxReplication
      The maximum number of replicas allowed for a block
    • minReplication

      public final short minReplication
      Minimum copies needed or else write is disallowed
    • defaultReplication

      public final int defaultReplication
      Default number of replicas
  • Constructor Details

    • BlockManager

      public BlockManager(Namesystem namesystem, boolean haEnabled, org.apache.hadoop.conf.Configuration conf) throws IOException
      Throws:
      IOException
  • Method Details

    • getPendingReconstructionBlocksCount

      public long getPendingReconstructionBlocksCount()
      Used by metrics
    • getLowRedundancyBlocksCount

      public long getLowRedundancyBlocksCount()
      Used by metrics
    • getCorruptReplicaBlocksCount

      public long getCorruptReplicaBlocksCount()
      Used by metrics
    • getScheduledReplicationBlocksCount

      public long getScheduledReplicationBlocksCount()
      Used by metrics
    • getPendingDeletionBlocksCount

      public long getPendingDeletionBlocksCount()
      Used by metrics
    • getStartupDelayBlockDeletionInMs

      public long getStartupDelayBlockDeletionInMs()
      Used by metrics
    • getExcessBlocksCount

      public long getExcessBlocksCount()
      Used by metrics
    • getPostponedMisreplicatedBlocksCount

      public long getPostponedMisreplicatedBlocksCount()
      Used by metrics
    • getPendingDataNodeMessageCount

      public int getPendingDataNodeMessageCount()
      Used by metrics
    • getNumTimedOutPendingReconstructions

      public long getNumTimedOutPendingReconstructions()
      Used by metrics.
    • getLowRedundancyBlocks

      public long getLowRedundancyBlocks()
      Used by metrics.
    • getCorruptBlocks

      public long getCorruptBlocks()
      Used by metrics.
    • getMissingBlocks

      public long getMissingBlocks()
      Used by metrics.
    • getMissingReplicationOneBlocks

      public long getMissingReplicationOneBlocks()
      Used by metrics.
    • getBadlyDistributedBlocks

      public long getBadlyDistributedBlocks()
      Used by metrics.
    • getPendingDeletionReplicatedBlocks

      public long getPendingDeletionReplicatedBlocks()
      Used by metrics.
    • getTotalReplicatedBlocks

      public long getTotalReplicatedBlocks()
      Used by metrics.
    • getLowRedundancyECBlockGroups

      public long getLowRedundancyECBlockGroups()
      Used by metrics.
    • getCorruptECBlockGroups

      public long getCorruptECBlockGroups()
      Used by metrics.
    • getMissingECBlockGroups

      public long getMissingECBlockGroups()
      Used by metrics.
    • getPendingDeletionECBlocks

      public long getPendingDeletionECBlocks()
      Used by metrics.
    • getTotalECBlockGroups

      public long getTotalECBlockGroups()
      Used by metrics.
    • getPendingSPSPaths

      public int getPendingSPSPaths()
      Used by metrics.
    • getStoragePolicy

      public org.apache.hadoop.hdfs.protocol.BlockStoragePolicy getStoragePolicy(String policyName)
    • getStoragePolicy

      public org.apache.hadoop.hdfs.protocol.BlockStoragePolicy getStoragePolicy(byte policyId)
    • getStoragePolicies

      public org.apache.hadoop.hdfs.protocol.BlockStoragePolicy[] getStoragePolicies()
    • setBlockPoolId

      public void setBlockPoolId(String blockPoolId)
    • getBlockPoolId

      public String getBlockPoolId()
    • getStoragePolicySuite

      public BlockStoragePolicySuite getStoragePolicySuite()
    • getBlockTokenSecretManager

      @VisibleForTesting public BlockTokenSecretManager getBlockTokenSecretManager()
      Returns:
      the BlockTokenSecretManager
    • isBlockTokenEnabled

      protected boolean isBlockTokenEnabled()
    • activate

      public void activate(org.apache.hadoop.conf.Configuration conf, long blockTotal)
    • close

      public void close()
    • getDatanodeManager

      public DatanodeManager getDatanodeManager()
      Returns:
      the datanodeManager
    • getBlockPlacementPolicy

      @VisibleForTesting public BlockPlacementPolicy getBlockPlacementPolicy()
    • getStriptedBlockPlacementPolicy

      @VisibleForTesting public BlockPlacementPolicy getStriptedBlockPlacementPolicy()
    • refreshBlockPlacementPolicy

      public void refreshBlockPlacementPolicy(org.apache.hadoop.conf.Configuration conf)
    • metaSave

      public void metaSave(PrintWriter out)
      Dump metadata to out.
    • getMaxReplicationStreams

      public int getMaxReplicationStreams()
      Returns the current setting for maxReplicationStreams, which is set by DFSConfigKeys.DFS_NAMENODE_REPLICATION_MAX_STREAMS_KEY.
      Returns:
      maxReplicationStreams
    • setMaxReplicationStreams

      @VisibleForTesting public void setMaxReplicationStreams(int newVal, boolean ensurePositiveInt)
      Updates the value used for maxReplicationStreams, which is set by DFSConfigKeys.DFS_NAMENODE_REPLICATION_MAX_STREAMS_KEY initially.
      Parameters:
      newVal - must be a positive non-zero integer.
    • setMaxReplicationStreams

      public void setMaxReplicationStreams(int newVal)
    • getReplicationStreamsHardLimit

      public int getReplicationStreamsHardLimit()
      Returns the current setting for maxReplicationStreamsHardLimit, set by DFSConfigKeys.DFS_NAMENODE_REPLICATION_STREAMS_HARD_LIMIT_KEY.
      Returns:
      maxReplicationStreamsHardLimit
    • setReplicationStreamsHardLimit

      public void setReplicationStreamsHardLimit(int newVal)
      Updates the value used for replicationStreamsHardLimit, which is set by DFSConfigKeys.DFS_NAMENODE_REPLICATION_STREAMS_HARD_LIMIT_KEY initially.
      Parameters:
      newVal - must be a positive non-zero integer.
    • getBlocksReplWorkMultiplier

      public int getBlocksReplWorkMultiplier()
      Returns the current setting for blocksReplWorkMultiplier, set by DFSConfigKeys.DFS_NAMENODE_REPLICATION_WORK_MULTIPLIER_PER_ITERATION.
      Returns:
      blocksReplWorkMultiplier
    • setBlocksReplWorkMultiplier

      public void setBlocksReplWorkMultiplier(int newVal)
      Updates the value used for blocksReplWorkMultiplier, set by DFSConfigKeys.DFS_NAMENODE_REPLICATION_WORK_MULTIPLIER_PER_ITERATION initially.
      Parameters:
      newVal - must be a positive non-zero integer.
    • setReconstructionPendingTimeout

      public void setReconstructionPendingTimeout(int newVal)
      Updates the value used for pendingReconstruction timeout, which is set by DFSConfigKeys.DFS_NAMENODE_RECONSTRUCTION_PENDING_TIMEOUT_SEC_KEY initially.
      Parameters:
      newVal - must be a positive non-zero integer.
    • getReconstructionPendingTimeout

      public int getReconstructionPendingTimeout()
      Returns the current setting for pendingReconstruction timeout, set by DFSConfigKeys.DFS_NAMENODE_RECONSTRUCTION_PENDING_TIMEOUT_SEC_KEY.
    • getDefaultStorageNum

      public int getDefaultStorageNum(BlockInfo block)
    • getMinReplication

      public short getMinReplication()
    • getMinStorageNum

      public short getMinStorageNum(BlockInfo block)
    • getMinReplicationToBeInMaintenance

      public short getMinReplicationToBeInMaintenance()
    • hasMinStorage

      public boolean hasMinStorage(BlockInfo block)
    • hasMinStorage

      public boolean hasMinStorage(BlockInfo block, int liveNum)
    • commitOrCompleteLastBlock

      public boolean commitOrCompleteLastBlock(BlockCollection bc, org.apache.hadoop.hdfs.protocol.Block commitBlock, INodesInPath iip) throws IOException
      Commit the last block of the file and mark it as complete if it meets the minimum redundancy requirement.
      Parameters:
      bc - block collection
      commitBlock - contains the client-reported block length and generation stamp
      iip - INodes in path to bc
      Returns:
      true if the last block is changed to committed state.
      Throws:
      IOException - if the block does not have at least a minimal number of replicas reported from data-nodes.
    • addExpectedReplicasToPending

      public void addExpectedReplicasToPending(BlockInfo blk)
      If an IBR has not yet been sent from the expected locations, add those datanodes to pendingReconstruction in order to keep the RedundancyMonitor from scheduling the block. For erasure-coded blocks, datanodes are added only if no node is missing.
    • forceCompleteBlock

      public void forceCompleteBlock(BlockInfo block) throws IOException
      Force the given block in the given file to be marked as complete, regardless of whether enough replicas are present. This is necessary when tailing edit logs as a Standby.
      Throws:
      IOException
    • convertLastBlockToUnderConstruction

      public org.apache.hadoop.hdfs.protocol.LocatedBlock convertLastBlockToUnderConstruction(BlockCollection bc, long bytesToRemove) throws IOException
      Convert the last block of the file to an under construction block.

      The block is converted only if the file has blocks and the last one is a partial block (its size is less than the preferred block size). The converted block is returned to the client. The client uses the returned block locations to form the data pipeline for this block.
      The method returns null if there is no partial block at the end. The client is supposed to allocate a new block with the next call.

      Parameters:
      bc - file
      bytesToRemove - num of bytes to remove from block
      Returns:
      the last block locations if the block is partial or null otherwise
      Throws:
      IOException
    • createLocatedBlocks

      public org.apache.hadoop.hdfs.protocol.LocatedBlocks createLocatedBlocks(BlockInfo[] blocks, long fileSizeExcludeBlocksUnderConstruction, boolean isFileUnderConstruction, long offset, long length, boolean needBlockToken, boolean inSnapshot, org.apache.hadoop.fs.FileEncryptionInfo feInfo, org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy ecPolicy) throws IOException
      Create a LocatedBlocks.
      Throws:
      IOException
    • getBlockKeys

      public ExportedBlockKeys getBlockKeys()
      Returns:
      current access keys.
    • setBlockToken

      public void setBlockToken(org.apache.hadoop.hdfs.protocol.LocatedBlock b, org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier.AccessMode mode) throws IOException
      Generate a block token for the located block.
      Throws:
      IOException
    • generateDataEncryptionKey

      public org.apache.hadoop.hdfs.security.token.block.DataEncryptionKey generateDataEncryptionKey()
    • adjustReplication

      public short adjustReplication(short replication)
      Clamp the specified replication between the minimum and the maximum replication levels.
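The clamp behavior described above can be sketched as a standalone method. This is an illustration, not the BlockManager implementation; minReplication and maxReplication are passed as parameters here, whereas the real class reads its own fields:

```java
// Hypothetical standalone version of the replication clamp described above.
public class ReplicationClamp {
    static short adjustReplication(short replication,
                                   short minReplication, short maxReplication) {
        // Raise values below the minimum, lower values above the maximum.
        if (replication < minReplication) return minReplication;
        if (replication > maxReplication) return maxReplication;
        return replication;
    }
}
```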
    • verifyReplication

      public void verifyReplication(String src, short replication, String clientName) throws IOException
      Check whether the replication parameter is within the range determined by system configuration and throw an exception if it's not.
      Parameters:
      src - the path to the target file
      replication - the requested replication factor
      clientName - the name of the client node making the request
      Throws:
      IOException - thrown if the requested replication factor is out of bounds
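A minimal sketch of this range check, assuming the bounds come from the configured minimum and maximum replication; the class name and exception message are illustrative, not the exact wording used by BlockManager:

```java
import java.io.IOException;

// Illustrative sketch of verifyReplication's range check. The names and
// message text are hypothetical; only the check itself mirrors the
// behavior described above.
public class ReplicationVerifier {
    static void verifyReplication(String src, short replication,
                                  short min, short max) throws IOException {
        if (replication < min || replication > max) {
            throw new IOException("Requested replication factor of " + replication
                + " for " + src + " is out of range [" + min + ", " + max + "]");
        }
        // In range: the request is allowed.
    }
}
```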
    • isSufficientlyReplicated

      public boolean isSufficientlyReplicated(BlockInfo b)
      Check if a block is replicated to at least the minimum replication.
    • getBlocksWithLocations

      public BlocksWithLocations getBlocksWithLocations(org.apache.hadoop.hdfs.protocol.DatanodeID datanode, long size, long minBlockSize, long timeInterval, org.apache.hadoop.fs.StorageType storageType) throws UnregisteredNodeException
      Get all blocks with location information from a datanode.
      Throws:
      UnregisteredNodeException
    • findAndMarkBlockAsCorrupt

      public void findAndMarkBlockAsCorrupt(org.apache.hadoop.hdfs.protocol.ExtendedBlock blk, org.apache.hadoop.hdfs.protocol.DatanodeInfo dn, String storageID, String reason) throws IOException
      Mark the block belonging to datanode as corrupt
      Parameters:
      blk - Block to be marked as corrupt
      dn - Datanode which holds the corrupt replica
      storageID - if known, null otherwise.
      reason - a textual reason why the block should be marked corrupt, for logging purposes
      Throws:
      IOException
    • setPostponeBlocksFromFuture

      public void setPostponeBlocksFromFuture(boolean postpone)
    • getUnderReplicatedNotMissingBlocks

      public int getUnderReplicatedNotMissingBlocks()
      Return the number of low-redundancy blocks, excluding missing blocks.
    • chooseTarget4WebHDFS

      public DatanodeStorageInfo[] chooseTarget4WebHDFS(String src, DatanodeDescriptor clientnode, Set<org.apache.hadoop.net.Node> excludes, long blocksize)
      Choose target for WebHDFS redirection.
    • chooseTarget4AdditionalDatanode

      public DatanodeStorageInfo[] chooseTarget4AdditionalDatanode(String src, int numAdditionalNodes, org.apache.hadoop.net.Node clientnode, List<DatanodeStorageInfo> chosen, Set<org.apache.hadoop.net.Node> excludes, long blocksize, byte storagePolicyID, org.apache.hadoop.hdfs.protocol.BlockType blockType)
      Choose target for getting additional datanodes for an existing pipeline.
    • chooseTarget4NewBlock

      public DatanodeStorageInfo[] chooseTarget4NewBlock(String src, int numOfReplicas, org.apache.hadoop.net.Node client, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, List<String> favoredNodes, byte storagePolicyID, org.apache.hadoop.hdfs.protocol.BlockType blockType, org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy ecPolicy, EnumSet<org.apache.hadoop.hdfs.AddBlockFlag> flags) throws IOException
      Choose target datanodes for creating a new block.
      Throws:
      IOException - if the number of targets < minimum replication.
      See Also:
      • BlockPlacementPolicy.chooseTarget(String, int, Node, Set, long, List, BlockStoragePolicy, EnumSet)
    • requestBlockReportLeaseId

      public long requestBlockReportLeaseId(DatanodeRegistration nodeReg)
    • registerDatanode

      public void registerDatanode(DatanodeRegistration nodeReg) throws IOException
      Throws:
      IOException
    • setBlockTotal

      public void setBlockTotal(long total)
      Set the total number of blocks in the system. If safe mode is not currently on, this is a no-op.
    • isInSafeMode

      public boolean isInSafeMode()
    • getSafeModeTip

      public String getSafeModeTip()
    • leaveSafeMode

      public boolean leaveSafeMode(boolean force)
    • checkSafeMode

      public void checkSafeMode()
    • getBytesInFuture

      public long getBytesInFuture()
    • getBytesInFutureReplicatedBlocks

      public long getBytesInFutureReplicatedBlocks()
    • getBytesInFutureECBlockGroups

      public long getBytesInFutureECBlockGroups()
    • removeBlocksAndUpdateSafemodeTotal

      public void removeBlocksAndUpdateSafemodeTotal(INode.BlocksMapUpdateInfo blocks)
      Removes the blocks from blocksmap and updates the safemode blocks total.
      Parameters:
      blocks - An instance of INode.BlocksMapUpdateInfo which contains a list of blocks that need to be removed from blocksMap
    • getProvidedCapacity

      public long getProvidedCapacity()
    • checkBlockReportLease

      public boolean checkBlockReportLease(BlockReportContext context, org.apache.hadoop.hdfs.protocol.DatanodeID nodeID) throws UnregisteredNodeException
      Check block report lease.
      Returns:
      true if the lease exists and has not expired
      Throws:
      UnregisteredNodeException
    • processReport

      public boolean processReport(org.apache.hadoop.hdfs.protocol.DatanodeID nodeID, org.apache.hadoop.hdfs.server.protocol.DatanodeStorage storage, BlockListAsLongs newReport, BlockReportContext context) throws IOException
      The given storage is reporting all its blocks. Update the (storage-->block list) and (block-->storage list) maps.
      Returns:
      true if all known storages of the given DN have finished reporting.
      Throws:
      IOException
    • removeBRLeaseIfNeeded

      public void removeBRLeaseIfNeeded(org.apache.hadoop.hdfs.protocol.DatanodeID nodeID, BlockReportContext context) throws IOException
      Throws:
      IOException
    • setExcessRedundancyTimeout

      public void setExcessRedundancyTimeout(long timeout)
      Sets the timeout (in seconds) for excess redundancy blocks; the value is converted to milliseconds internally. If the provided timeout is less than or equal to 0, the default value is used.
      Parameters:
      timeout - The time (in seconds) to set as the excess redundancy block timeout.
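The defaulting and unit-conversion rule above can be sketched as follows. DEFAULT_TIMEOUT_SEC is an assumed placeholder here, not the actual HDFS default:

```java
import java.util.concurrent.TimeUnit;

// Illustrative sketch of the rule described above: a non-positive timeout
// falls back to a default, and the stored value is in milliseconds.
public class ExcessRedundancyTimeout {
    static final long DEFAULT_TIMEOUT_SEC = 3600; // assumed placeholder value

    static long resolveTimeoutMs(long timeoutSec) {
        long effective = timeoutSec > 0 ? timeoutSec : DEFAULT_TIMEOUT_SEC;
        return TimeUnit.SECONDS.toMillis(effective);
    }
}
```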
    • setExcessRedundancyTimeoutCheckLimit

      public void setExcessRedundancyTimeoutCheckLimit(long limit)
      Sets the limit number of blocks for checking excess redundancy timeout. If the provided limit is less than or equal to 0, the default limit is used.
      Parameters:
      limit - The limit number of blocks used to check for excess redundancy timeout.
    • markBlockReplicasAsCorrupt

      public void markBlockReplicasAsCorrupt(org.apache.hadoop.hdfs.protocol.Block oldBlock, BlockInfo block, long oldGenerationStamp, long oldNumBytes, DatanodeStorageInfo[] newStorages) throws IOException
      Mark block replicas as corrupt except those on the storages in newStorages list.
      Throws:
      IOException
    • processQueuedMessagesForBlock

      public void processQueuedMessagesForBlock(org.apache.hadoop.hdfs.protocol.Block b) throws IOException
      Try to process any messages that were previously queued for the given block. This is called from FSEditLogLoader whenever a block's state in the namespace has changed or a new block has been created.
      Throws:
      IOException
    • processAllPendingDNMessages

      public void processAllPendingDNMessages() throws IOException
      Process any remaining queued datanode messages after entering active state. At this point they will not be re-queued since we are the definitive master node and thus should be up-to-date with the namespace information.
      Throws:
      IOException
    • processMisReplicatedBlocks

      public void processMisReplicatedBlocks()
      For each block in the name-node, verify whether it belongs to any file and whether it has extra or low redundancy, and place it into the respective queue.
    • stopReconstructionInitializer

      public void stopReconstructionInitializer()
    • getReconstructionQueuesInitProgress

      public float getReconstructionQueuesInitProgress()
      Get the progress of the reconstruction queues initialisation.
      Returns:
      a value between 0 and 1 indicating the progress.
    • hasNonEcBlockUsingStripedID

      public boolean hasNonEcBlockUsingStripedID()
      Check whether there are any non-EC blocks using a striped block ID.
      Returns:
      true if there are any non-EC blocks using a striped block ID.
    • processMisReplicatedBlocks

      public int processMisReplicatedBlocks(List<BlockInfo> blocks)
      Schedule replication work for a specified list of mis-replicated blocks and return total number of blocks scheduled for replication.
      Parameters:
      blocks - A list of blocks for which replication work needs to be scheduled.
      Returns:
      Total number of blocks for which replication work is scheduled.
    • setReplication

      public void setReplication(short oldRepl, short newRepl, BlockInfo b)
      Set replication for the blocks.
    • removeStoredBlock

      public void removeStoredBlock(BlockInfo storedBlock, DatanodeDescriptor node)
      Modify (block-->datanode) map. Possibly generate replication tasks, if the removed block is still valid.
    • addBlock

      @VisibleForTesting public void addBlock(DatanodeStorageInfo storageInfo, org.apache.hadoop.hdfs.protocol.Block block, String delHint) throws IOException
      The given node is reporting that it received a certain block.
      Throws:
      IOException
    • processIncrementalBlockReport

      public void processIncrementalBlockReport(org.apache.hadoop.hdfs.protocol.DatanodeID nodeID, StorageReceivedDeletedBlocks srdb) throws IOException
      The given node is reporting incremental information about some blocks. This includes blocks that are starting to be received, completed being received, or deleted. This method must be called with FSNamesystem lock held.
      Throws:
      IOException
    • countNodes

      public NumberReplicas countNodes(BlockInfo b)
      Return the number of nodes hosting a given block, grouped by the state of those replicas. For a striped block, this includes nodes storing blocks belonging to the striped block group. Note that duplicated internal block replicas are excluded when calculating NumberReplicas.liveReplicas(): if the replica on a decommissioning node is the same as the replica on a live node, the internal block for this replica is counted as live, not decommissioning.
    • isExcess

      public boolean isExcess(DatanodeDescriptor dn, BlockInfo blk)
    • getActiveBlockCount

      public int getActiveBlockCount()
    • getStorages

      public DatanodeStorageInfo[] getStorages(BlockInfo block)
    • getStorages

      public Iterable<DatanodeStorageInfo> getStorages(org.apache.hadoop.hdfs.protocol.Block block)
      Returns:
      an iterator of the datanodes.
    • getTotalBlocks

      public int getTotalBlocks()
    • removeBlock

      public void removeBlock(BlockInfo block)
    • getStoredBlock

      public BlockInfo getStoredBlock(org.apache.hadoop.hdfs.protocol.Block block)
    • updateLastBlock

      public void updateLastBlock(BlockInfo lastBlock, org.apache.hadoop.hdfs.protocol.ExtendedBlock newBlock)
    • checkRedundancy

      public void checkRedundancy(BlockCollection bc)
      Check for sufficient redundancy of the blocks in the collection. If any block needs reconstruction, insert it into the reconstruction queue; otherwise, if a block has more replicas than the expected replication factor, process it as an extra-redundancy block.
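The per-block decision described above can be sketched as a simple classification. The enum and method names are hypothetical stand-ins for BlockManager's internal queue handling:

```java
// Illustrative sketch of checkRedundancy's per-block decision. The names
// here are invented for this example and do not appear in BlockManager.
public class RedundancyCheck {
    enum Action { RECONSTRUCT, PROCESS_EXTRA, NONE }

    static Action classify(int liveReplicas, int expectedRedundancy) {
        if (liveReplicas < expectedRedundancy) {
            return Action.RECONSTRUCT;    // goes to the reconstruction queue
        }
        if (liveReplicas > expectedRedundancy) {
            return Action.PROCESS_EXTRA;  // handled as extra redundancy
        }
        return Action.NONE;               // exactly at the expected redundancy
    }
}
```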
    • containsInvalidateBlock

      @VisibleForTesting public boolean containsInvalidateBlock(org.apache.hadoop.hdfs.protocol.DatanodeInfo dn, org.apache.hadoop.hdfs.protocol.Block block)
    • getExpectedLiveRedundancyNum

      public short getExpectedLiveRedundancyNum(BlockInfo block, NumberReplicas numberReplicas)
    • getExpectedRedundancyNum

      public short getExpectedRedundancyNum(BlockInfo block)
    • getMissingBlocksCount

      public long getMissingBlocksCount()
    • getMissingReplOneBlocksCount

      public long getMissingReplOneBlocksCount()
    • getBadlyDistributedBlocksCount

      public long getBadlyDistributedBlocksCount()
    • getHighestPriorityReplicatedBlockCount

      public long getHighestPriorityReplicatedBlockCount()
    • getHighestPriorityECBlockCount

      public long getHighestPriorityECBlockCount()
    • addBlockCollection

      public BlockInfo addBlockCollection(BlockInfo block, BlockCollection bc)
    • addBlockCollectionWithCheck

      public BlockInfo addBlockCollectionWithCheck(BlockInfo block, BlockCollection bc)
      Perform checks when adding a block to the blocks map. For HDFS-7994, check whether the block is a NonEcBlockUsingStripedID.
    • numCorruptReplicas

      public int numCorruptReplicas(org.apache.hadoop.hdfs.protocol.Block block)
    • removeBlockFromMap

      public void removeBlockFromMap(BlockInfo block)
    • getCapacity

      public int getCapacity()
    • getCorruptReplicaBlockIterator

      public Iterator<BlockInfo> getCorruptReplicaBlockIterator()
      Return an iterator over the set of blocks for which there are no live replicas.
    • getCorruptReplicas

      public Collection<DatanodeDescriptor> getCorruptReplicas(org.apache.hadoop.hdfs.protocol.Block block)
      Get the replicas which are corrupt for a given block.
    • getCorruptReason

      public String getCorruptReason(org.apache.hadoop.hdfs.protocol.Block block, DatanodeDescriptor node)
      Get the reason why a replica of a given block on a given datanode was marked corrupt.
    • numOfUnderReplicatedBlocks

      public int numOfUnderReplicatedBlocks()
      Returns:
      the size of UnderReplicatedBlocks
    • getLastRedundancyMonitorTS

      @VisibleForTesting public long getLastRedundancyMonitorTS()
      Used ad hoc to check the timestamp of the last full cycle of the redundancy thread. JUnit tests use this to block until lastRedundancyCycleTS is updated.
      Returns:
      the current lastRedundancyCycleTS.
    • clearQueues

      public void clearQueues()
      Clear all queues that hold decisions previously made by this NameNode.
    • newLocatedBlock

      public static org.apache.hadoop.hdfs.protocol.LocatedBlock newLocatedBlock(org.apache.hadoop.hdfs.protocol.ExtendedBlock b, DatanodeStorageInfo[] storages, long startOffset, boolean corrupt)
    • newLocatedStripedBlock

      public static org.apache.hadoop.hdfs.protocol.LocatedStripedBlock newLocatedStripedBlock(org.apache.hadoop.hdfs.protocol.ExtendedBlock b, DatanodeStorageInfo[] storages, byte[] indices, long startOffset, boolean corrupt)
    • newLocatedBlock

      public static org.apache.hadoop.hdfs.protocol.LocatedBlock newLocatedBlock(org.apache.hadoop.hdfs.protocol.ExtendedBlock eb, BlockInfo info, DatanodeStorageInfo[] locs, long offset) throws IOException
      Throws:
      IOException
    • shutdown

      public void shutdown()
    • clear

      public void clear()
    • getBlockReportLeaseManager

      public org.apache.hadoop.hdfs.server.blockmanagement.BlockReportLeaseManager getBlockReportLeaseManager()
    • getStorageTypeStats

      public Map<org.apache.hadoop.fs.StorageType,StorageTypeStats> getStorageTypeStats()
      Description copied from interface: BlockStatsMXBean
      The statistics of storage types.
      Specified by:
      getStorageTypeStats in interface BlockStatsMXBean
      Returns:
      storage statistics per storage type
    • initializeReplQueues

      public void initializeReplQueues()
      Initialize replication queues.
    • isPopulatingReplQueues

      public boolean isPopulatingReplQueues()
      Check if replication queues are to be populated
      Returns:
      true when node is HAState.Active and not in the very first safemode
    • setInitializedReplQueues

      public void setInitializedReplQueues(boolean v)
    • shouldPopulateReplQueues

      public boolean shouldPopulateReplQueues()
    • enqueueBlockOp

      public void enqueueBlockOp(Runnable action) throws IOException
      Throws:
      IOException
    • runBlockOp

      public <T> T runBlockOp(Callable<T> action) throws IOException
      Throws:
      IOException
    • successfulBlockRecovery

      public void successfulBlockRecovery(BlockInfo block)
      Notification of a successful block recovery.
      Parameters:
      block - for which the recovery succeeded
    • addBlockRecoveryAttempt

      public boolean addBlockRecoveryAttempt(BlockInfo b)
      Checks whether a recovery attempt has been made for the given block. If so, checks whether that attempt has timed out.
      Parameters:
      b - block for which recovery is being attempted
      Returns:
      true if no recovery attempt has been made or the previous attempt timed out
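The bookkeeping described above can be sketched as a map from block ID to the time of the last attempt. The timeout value, the map representation, and the explicit clock parameter are assumptions made for this sketch, not BlockManager's actual implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of addBlockRecoveryAttempt's behavior: allow a new
// recovery attempt only if none was made or the previous one timed out.
public class RecoveryAttempts {
    private final Map<Long, Long> lastAttempt = new HashMap<>();
    private final long timeoutMs;

    RecoveryAttempts(long timeoutMs) {
        this.timeoutMs = timeoutMs;
    }

    boolean addBlockRecoveryAttempt(long blockId, long nowMs) {
        Long prev = lastAttempt.get(blockId);
        if (prev == null || nowMs - prev > timeoutMs) {
            lastAttempt.put(blockId, nowMs); // record the new attempt
            return true;
        }
        return false;                        // a recent attempt is still pending
    }
}
```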
    • flushBlockOps

      @VisibleForTesting public void flushBlockOps() throws IOException
      Throws:
      IOException
    • getBlockOpQueueLength

      public int getBlockOpQueueLength()
    • getBlockIdManager

      public BlockIdManager getBlockIdManager()
    • getMarkedDeleteQueue

      @VisibleForTesting public ConcurrentLinkedQueue<List<BlockInfo>> getMarkedDeleteQueue()
    • addBLocksToMarkedDeleteQueue

      public void addBLocksToMarkedDeleteQueue(List<BlockInfo> blockInfos)
    • nextGenerationStamp

      public long nextGenerationStamp(boolean legacyBlock) throws IOException
      Throws:
      IOException
    • isLegacyBlock

      public boolean isLegacyBlock(org.apache.hadoop.hdfs.protocol.Block block)
    • nextBlockId

      public long nextBlockId(org.apache.hadoop.hdfs.protocol.BlockType blockType)
    • setBlockRecoveryTimeout

      @VisibleForTesting public void setBlockRecoveryTimeout(long blockRecoveryTimeout)
    • getProvidedStorageMap

      @VisibleForTesting public ProvidedStorageMap getProvidedStorageMap()
    • createSPSManager

      public boolean createSPSManager(org.apache.hadoop.conf.Configuration conf, String spsMode)
      Create the SPS manager instance. It manages the user-invoked SPS paths and performs the movement.
      Parameters:
      conf - configuration
      spsMode - satisfier mode
      Returns:
      true if the instance is successfully created, false otherwise.
    • disableSPS

      public void disableSPS()
      Nullify the SPS manager, as this feature is fully disabled.
    • getSPSManager

      public StoragePolicySatisfyManager getSPSManager()
      Returns:
      sps manager.
    • setExcludeSlowNodesEnabled

      public void setExcludeSlowNodesEnabled(boolean enable)
    • getExcludeSlowNodesEnabled

      @VisibleForTesting public boolean getExcludeSlowNodesEnabled(org.apache.hadoop.hdfs.protocol.BlockType blockType)
    • setMinBlocksForWrite

      public void setMinBlocksForWrite(int minBlocksForWrite)
    • getMinBlocksForWrite

      @VisibleForTesting public int getMinBlocksForWrite(org.apache.hadoop.hdfs.protocol.BlockType blockType)