Class DatanodeDescriptor
java.lang.Object
org.apache.hadoop.hdfs.protocol.DatanodeID
org.apache.hadoop.hdfs.protocol.DatanodeInfo
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor
- All Implemented Interfaces:
Comparable<org.apache.hadoop.hdfs.protocol.DatanodeID>, org.apache.hadoop.net.Node
- Direct Known Subclasses:
ProvidedStorageMap.ProvidedDescriptor
@Private
@Evolving
public class DatanodeDescriptor
extends org.apache.hadoop.hdfs.protocol.DatanodeInfo
This class extends the DatanodeInfo class with ephemeral information (e.g. health, capacity, and what blocks are associated with the Datanode) that is
private to the Namenode, i.e. this class is not exposed to clients.
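To illustrate the layering described above, here is a minimal sketch; `InfoSketch` and `DescriptorSketch` are hypothetical stand-ins for DatanodeInfo and DatanodeDescriptor, not the real Hadoop types. The descriptor extends the client-visible info object with Namenode-private, ephemeral state such as liveness and a per-storage map.

```java
// Illustrative model only: InfoSketch and DescriptorSketch are hypothetical
// stand-ins for DatanodeInfo and DatanodeDescriptor, showing how the descriptor
// layers Namenode-private, ephemeral state on top of client-visible info.
class InfoSketch {
    final String datanodeUuid;   // client-visible identity
    long capacity;               // client-visible stats
    InfoSketch(String uuid) { this.datanodeUuid = uuid; }
}

class DescriptorSketch extends InfoSketch {
    // Ephemeral, Namenode-private state: never serialized to clients.
    private boolean alive;
    // storageID -> remaining bytes (stand-in for Map<String,DatanodeStorageInfo>)
    final java.util.Map<String, Long> storageMap = new java.util.HashMap<>();

    DescriptorSketch(String uuid) { super(uuid); }

    boolean isAlive() { return alive; }
    void setAlive(boolean isAlive) { this.alive = isAlive; }
}
```

Because the ephemeral fields live only in the subclass, dropping the descriptor (e.g. on datanode re-registration) discards the private state without touching the client-visible info.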
-
Nested Class Summary
Nested Classes
- static class DatanodeDescriptor.BlockTargetPair: Block and targets pair.
- static class DatanodeDescriptor.CachedBlocksList: A list of CachedBlock objects on this datanode.
- class DatanodeDescriptor.LeavingServiceStatus: Leaving service status.
Nested classes/interfaces inherited from class org.apache.hadoop.hdfs.protocol.DatanodeInfo:
org.apache.hadoop.hdfs.protocol.DatanodeInfo.AdminStates, org.apache.hadoop.hdfs.protocol.DatanodeInfo.DatanodeInfoBuilder -
Field Summary
Fields
- static final DatanodeDescriptor[] EMPTY_ARRAY
- static final org.slf4j.Logger LOG
- protected final Map<String,DatanodeStorageInfo> storageMap
Fields inherited from class org.apache.hadoop.hdfs.protocol.DatanodeInfo:
adminState
Fields inherited from class org.apache.hadoop.hdfs.protocol.DatanodeID:
EMPTY_DATANODE_ID -
Constructor Summary
Constructors
- DatanodeDescriptor(org.apache.hadoop.hdfs.protocol.DatanodeID nodeID): DatanodeDescriptor constructor
- DatanodeDescriptor(org.apache.hadoop.hdfs.protocol.DatanodeID nodeID, String networkLocation): DatanodeDescriptor constructor -
Method Summary
Methods
- void addBlockToBeReplicated(org.apache.hadoop.hdfs.protocol.Block block, DatanodeStorageInfo[] targets): Store block replication work.
- void addECBlockToBeReplicated(org.apache.hadoop.hdfs.protocol.Block block, DatanodeStorageInfo[] targets): Store EC block replication work.
- boolean checkBlockReportReceived()
- DatanodeStorageInfo chooseStorage4Block(org.apache.hadoop.fs.StorageType t, long blockSize, int minBlocksForWrite): Find whether the datanode contains good storage of the given type to place a block of size blockSize.
- void clearBlockQueues()
- boolean containsInvalidateBlock(org.apache.hadoop.hdfs.protocol.Block block)
- void decrementPendingReplicationWithoutTargets()
- String dumpDatanode()
- boolean equals(Object obj)
- long getBalancerBandwidth()
- int getBlocksScheduled()
- int getBlocksScheduled(org.apache.hadoop.fs.StorageType t)
- DatanodeDescriptor.CachedBlocksList getCached()
- List<BlockECReconstructionCommand.BlockECReconstructionInfo> getErasureCodeCommand(int maxTransfers)
- org.apache.hadoop.hdfs.protocol.Block[] getInvalidateBlocks(int maxblocks): Remove the specified number of blocks to be invalidated.
- long getLastCachingDirectiveSentTimeMs()
- getLeaseRecoveryCommand(int maxTransfers)
- DatanodeDescriptor.LeavingServiceStatus getLeavingServiceStatus()
- int getNumberOfBlocksToBeErasureCoded(): The number of work items that are pending to be reconstructed.
- int getNumberOfECBlocksToBeReplicated(): The number of EC work items that are pending to be replicated.
- int getNumberOfReplicateBlocks()
- int getNumVolumesAvailable(): Return the number of volumes that can be written.
- DatanodeDescriptor.CachedBlocksList getPendingCached()
- DatanodeDescriptor.CachedBlocksList getPendingUncached()
- DatanodeStorageInfo getStorageInfo(String storageID)
- DatanodeStorageInfo[] getStorageInfos()
- org.apache.hadoop.hdfs.server.protocol.StorageReport[] getStorageReports()
- EnumSet<org.apache.hadoop.fs.StorageType> getStorageTypes()
- int getVolumeFailures(): Returns the number of failed volumes in the datanode.
- getVolumeFailureSummary(): Returns info about volume failures.
- int hashCode()
- boolean hasStorageType(org.apache.hadoop.fs.StorageType type)
- void incrementPendingReplicationWithoutTargets()
- boolean isAlive()
- boolean isDisallowed(): Is the datanode disallowed from communicating with the namenode?
- boolean isHeartbeatedSinceRegistration()
- boolean isRegistered()
- boolean needKeyUpdate()
- int numBlocks()
- void resetBlocks()
- void setAlive(boolean isAlive)
- void setBalancerBandwidth(long bandwidth)
- void setDisallowed(boolean flag): Set the flag to indicate if this datanode is disallowed from communicating with the namenode.
- void setForceRegistration(boolean force)
- void setLastCachingDirectiveSentTimeMs(long time)
- void setNeedKeyUpdate(boolean needKeyUpdate)
- void updateRegInfo(org.apache.hadoop.hdfs.protocol.DatanodeID nodeReg)
Methods inherited from class org.apache.hadoop.hdfs.protocol.DatanodeInfo
addDependentHostName, getAdminState, getBlockPoolUsed, getBlockPoolUsedPercent, getCacheCapacity, getCacheRemaining, getCacheRemainingPercent, getCacheUsed, getCacheUsedPercent, getCapacity, getDatanodeReport, getDependentHostNames, getDfsUsed, getDfsUsedPercent, getLastBlockReportMonotonic, getLastBlockReportTime, getLastUpdate, getLastUpdateMonotonic, getLevel, getMaintenanceExpireTimeInMS, getName, getNetworkLocation, getNonDfsUsed, getNumBlocks, getParent, getRemaining, getRemainingPercent, getSoftwareVersion, getUpgradeDomain, getXceiverCount, isDecommissioned, isDecommissionInProgress, isEnteringMaintenance, isInMaintenance, isInService, isMaintenance, isStale, maintenanceExpired, maintenanceNotExpired, setAdminState, setBlockPoolUsed, setCacheCapacity, setCacheUsed, setCapacity, setDecommissioned, setDependentHostNames, setDfsUsed, setInMaintenance, setLastBlockReportMonotonic, setLastBlockReportTime, setLastUpdate, setLastUpdateMonotonic, setLevel, setMaintenanceExpireTimeInMS, setNetworkLocation, setNonDfsUsed, setNumBlocks, setParent, setRemaining, setSoftwareVersion, setUpgradeDomain, setXceiverCount, startDecommission, startMaintenance, stopDecommission, stopMaintenance
Methods inherited from class org.apache.hadoop.hdfs.protocol.DatanodeID
compareTo, getDatanodeUuid, getDatanodeUuidBytes, getHostName, getHostNameBytes, getInfoAddr, getInfoPort, getInfoSecureAddr, getInfoSecurePort, getIpAddr, getIpAddrBytes, getIpcAddr, getIpcPort, getPeerHostName, getResolvedAddress, getXferAddr, getXferAddr, getXferAddrWithHostname, getXferPort, setIpAddr, setPeerHostName, toString
-
Field Details
-
LOG
public static final org.slf4j.Logger LOG -
EMPTY_ARRAY
public static final DatanodeDescriptor[] EMPTY_ARRAY
-
storageMap
protected final Map<String,DatanodeStorageInfo> storageMap
-
-
Constructor Details
-
DatanodeDescriptor
public DatanodeDescriptor(org.apache.hadoop.hdfs.protocol.DatanodeID nodeID)
DatanodeDescriptor constructor.
- Parameters:
nodeID - id of the data node
-
DatanodeDescriptor
public DatanodeDescriptor(org.apache.hadoop.hdfs.protocol.DatanodeID nodeID, String networkLocation)
DatanodeDescriptor constructor.
- Parameters:
nodeID - id of the data node
networkLocation - location of the data node in the network
-
-
Method Details
-
getPendingCached
public DatanodeDescriptor.CachedBlocksList getPendingCached()
-
getCached
public DatanodeDescriptor.CachedBlocksList getCached()
-
getPendingUncached
public DatanodeDescriptor.CachedBlocksList getPendingUncached()
-
isAlive
public boolean isAlive() -
setAlive
public void setAlive(boolean isAlive) -
needKeyUpdate
public boolean needKeyUpdate() -
setNeedKeyUpdate
public void setNeedKeyUpdate(boolean needKeyUpdate) -
getLeavingServiceStatus
public DatanodeDescriptor.LeavingServiceStatus getLeavingServiceStatus()
-
isHeartbeatedSinceRegistration
@VisibleForTesting public boolean isHeartbeatedSinceRegistration() -
getStorageInfo
public DatanodeStorageInfo getStorageInfo(String storageID)
-
getStorageInfos
public DatanodeStorageInfo[] getStorageInfos()
-
getStorageTypes
public EnumSet<org.apache.hadoop.fs.StorageType> getStorageTypes()
-
getStorageReports
public org.apache.hadoop.hdfs.server.protocol.StorageReport[] getStorageReports() -
resetBlocks
public void resetBlocks() -
clearBlockQueues
public void clearBlockQueues() -
numBlocks
public int numBlocks() -
incrementPendingReplicationWithoutTargets
@VisibleForTesting public void incrementPendingReplicationWithoutTargets() -
decrementPendingReplicationWithoutTargets
@VisibleForTesting public void decrementPendingReplicationWithoutTargets() -
addBlockToBeReplicated
@VisibleForTesting public void addBlockToBeReplicated(org.apache.hadoop.hdfs.protocol.Block block, DatanodeStorageInfo[] targets) Store block replication work. -
addECBlockToBeReplicated
@VisibleForTesting public void addECBlockToBeReplicated(org.apache.hadoop.hdfs.protocol.Block block, DatanodeStorageInfo[] targets) Store EC block replication work. -
getNumberOfBlocksToBeErasureCoded
@VisibleForTesting public int getNumberOfBlocksToBeErasureCoded()
The number of work items that are pending to be reconstructed. -
getNumberOfECBlocksToBeReplicated
@VisibleForTesting public int getNumberOfECBlocksToBeReplicated()
The number of EC work items that are pending to be replicated. -
getNumberOfReplicateBlocks
@VisibleForTesting public int getNumberOfReplicateBlocks() -
getErasureCodeCommand
public List<BlockECReconstructionCommand.BlockECReconstructionInfo> getErasureCodeCommand(int maxTransfers) -
getLeaseRecoveryCommand
-
getInvalidateBlocks
public org.apache.hadoop.hdfs.protocol.Block[] getInvalidateBlocks(int maxblocks)
Remove the specified number of blocks to be invalidated. -
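The behavior described here, removing up to maxblocks pending invalidations, is a drain-up-to-N queue pattern. A minimal illustrative sketch (the `InvalidateQueueSketch` class is hypothetical, with String block IDs standing in for real Block objects):

```java
// Hedged sketch of the drain-up-to-N pattern behind getInvalidateBlocks:
// pull at most maxblocks entries off the pending-invalidation queue,
// removing them as we go. InvalidateQueueSketch is hypothetical; String
// block IDs stand in for real Block objects.
class InvalidateQueueSketch {
    private final java.util.ArrayDeque<String> pending = new java.util.ArrayDeque<>();

    void addBlockToInvalidate(String blockId) { pending.add(blockId); }

    /** Remove and return up to maxblocks pending invalidations. */
    java.util.List<String> getInvalidateBlocks(int maxblocks) {
        java.util.List<String> out = new java.util.ArrayList<>();
        while (out.size() < maxblocks && !pending.isEmpty()) {
            out.add(pending.poll()); // entries leave the queue once returned
        }
        return out;
    }
}
```

Capping the batch at maxblocks keeps each heartbeat-driven invalidation command bounded, so a large backlog is worked off over several heartbeats.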
containsInvalidateBlock
@VisibleForTesting public boolean containsInvalidateBlock(org.apache.hadoop.hdfs.protocol.Block block) -
chooseStorage4Block
public DatanodeStorageInfo chooseStorage4Block(org.apache.hadoop.fs.StorageType t, long blockSize, int minBlocksForWrite)
Find whether the datanode contains good storage of the given type to place a block of size blockSize. Currently the datanode only cares about the storage type; in this method, the first storage of the given type we see is returned.
- Parameters:
t - requested storage type
blockSize - requested block size
minBlocksForWrite - the requested minimum number of blocks
-
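The documented selection rule ("the first storage of the given type we see is returned") can be sketched as follows; `StorageChoiceSketch` and its nested Storage type are hypothetical stand-ins, not the real DatanodeStorageInfo API:

```java
// Hedged sketch of the documented rule in chooseStorage4Block: scan the
// storages and return the first one of the requested type that can hold
// the block. Hypothetical stand-in types, not the real Hadoop classes.
class StorageChoiceSketch {
    enum StorageType { DISK, SSD, ARCHIVE }

    static class Storage {
        final String id;
        final StorageType type;
        final long remaining; // bytes still writable on this storage
        Storage(String id, StorageType type, long remaining) {
            this.id = id; this.type = type; this.remaining = remaining;
        }
    }

    /** First storage of the given type with room for blockSize, else null. */
    static Storage choose(java.util.List<Storage> storages, StorageType t, long blockSize) {
        for (Storage s : storages) {
            if (s.type == t && s.remaining >= blockSize) {
                return s; // first match wins; no balancing across equal candidates
            }
        }
        return null;
    }
}
```

Because the first match wins, this method does no load balancing between equally suitable storages of the same type; that is left to higher-level placement policy.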
getBlocksScheduled
public int getBlocksScheduled(org.apache.hadoop.fs.StorageType t) - Returns:
- Approximate number of blocks currently scheduled to be written to the given storage type of this datanode.
-
getBlocksScheduled
public int getBlocksScheduled()
- Returns:
- Approximate number of blocks currently scheduled to be written to this datanode.
-
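Both getBlocksScheduled overloads return only approximate counts. A hedged sketch of the kind of bookkeeping involved, using a hypothetical `ScheduledBlocksSketch` class rather than the real implementation:

```java
// Hedged sketch (hypothetical ScheduledBlocksSketch, not the real class) of
// the bookkeeping behind getBlocksScheduled: a per-storage-type counter is
// incremented when a write is scheduled to this datanode and decremented when
// the replica is reported, so totals are only approximate at any instant.
class ScheduledBlocksSketch {
    enum StorageType { DISK, SSD, ARCHIVE }

    private final java.util.EnumMap<StorageType, Integer> scheduled =
        new java.util.EnumMap<>(StorageType.class);

    void incrementBlocksScheduled(StorageType t) { scheduled.merge(t, 1, Integer::sum); }

    void decrementBlocksScheduled(StorageType t) { scheduled.merge(t, -1, Integer::sum); }

    /** Approximate count for one storage type. */
    int getBlocksScheduled(StorageType t) { return scheduled.getOrDefault(t, 0); }

    /** Approximate count across all storage types. */
    int getBlocksScheduled() {
        int total = 0;
        for (int v : scheduled.values()) { total += v; }
        return total;
    }
}
```

Block placement can consult these counts to avoid piling new writes onto a datanode that already has many in flight.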
hashCode
public int hashCode()
- Overrides:
hashCode in class org.apache.hadoop.hdfs.protocol.DatanodeInfo
-
equals
public boolean equals(Object obj)
- Overrides:
equals in class org.apache.hadoop.hdfs.protocol.DatanodeInfo
-
setDisallowed
public void setDisallowed(boolean flag) Set the flag to indicate if this datanode is disallowed from communicating with the namenode. -
isDisallowed
public boolean isDisallowed()
Is the datanode disallowed from communicating with the namenode? -
getVolumeFailures
public int getVolumeFailures()- Returns:
- number of failed volumes in the datanode.
-
getVolumeFailureSummary
Returns info about volume failures.
- Returns:
- info about volume failures, possibly null
-
getNumVolumesAvailable
public int getNumVolumesAvailable()
Return the number of volumes that can be written.
- Returns:
- the number of volumes that can be written.
-
updateRegInfo
public void updateRegInfo(org.apache.hadoop.hdfs.protocol.DatanodeID nodeReg) - Overrides:
updateRegInfo in class org.apache.hadoop.hdfs.protocol.DatanodeID
- Parameters:
nodeReg - DatanodeID to update registration for.
-
getBalancerBandwidth
public long getBalancerBandwidth()- Returns:
- balancer bandwidth in bytes per second for this datanode
-
setBalancerBandwidth
public void setBalancerBandwidth(long bandwidth) - Parameters:
bandwidth- balancer bandwidth in bytes per second for this datanode
-
dumpDatanode
public String dumpDatanode()
- Overrides:
dumpDatanode in class org.apache.hadoop.hdfs.protocol.DatanodeInfo
-
getLastCachingDirectiveSentTimeMs
public long getLastCachingDirectiveSentTimeMs()- Returns:
- The time at which we last sent caching directives to this DataNode, in monotonic milliseconds.
-
setLastCachingDirectiveSentTimeMs
public void setLastCachingDirectiveSentTimeMs(long time) - Parameters:
time- The time at which we last sent caching directives to this DataNode, in monotonic milliseconds.
-
checkBlockReportReceived
public boolean checkBlockReportReceived()- Returns:
- whether at least first block report has been received
-
setForceRegistration
public void setForceRegistration(boolean force) -
isRegistered
public boolean isRegistered() -
hasStorageType
public boolean hasStorageType(org.apache.hadoop.fs.StorageType type)
-