Class DataNode

java.lang.Object
org.apache.hadoop.conf.Configured
org.apache.hadoop.conf.ReconfigurableBase
org.apache.hadoop.hdfs.server.datanode.DataNode
All Implemented Interfaces:
org.apache.hadoop.conf.Configurable, org.apache.hadoop.conf.Reconfigurable, org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol, org.apache.hadoop.hdfs.protocol.ReconfigurationProtocol, DataNodeMXBean, InterDatanodeProtocol

@Private public class DataNode extends org.apache.hadoop.conf.ReconfigurableBase implements InterDatanodeProtocol, org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol, DataNodeMXBean, org.apache.hadoop.hdfs.protocol.ReconfigurationProtocol
DataNode is a class (and program) that stores a set of blocks for a DFS deployment. A single deployment can have one or many DataNodes. Each DataNode communicates regularly with a single NameNode. It also communicates with client code and other DataNodes from time to time.

DataNodes store a series of named blocks. The DataNode allows client code to read these blocks, or to write new block data. The DataNode may also, in response to instructions from its NameNode, delete blocks or copy blocks to/from other DataNodes.

The DataNode maintains just one critical table: block -> stream of bytes (of BLOCK_SIZE or less). This info is stored on a local disk. The DataNode reports the table's contents to the NameNode upon startup and every so often afterwards.

DataNodes spend their lives in an endless loop of asking the NameNode for something to do. A NameNode cannot connect to a DataNode directly; a NameNode simply returns values from functions invoked by a DataNode.

DataNodes maintain an open server socket so that client code or other DataNodes can read/write data. The host/port for this server is reported to the NameNode, which then sends that information to clients or other DataNodes that might be interested.
  • Field Details

    • LOG

      public static final org.slf4j.Logger LOG
    • DN_CLIENTTRACE_FORMAT

      public static final String DN_CLIENTTRACE_FORMAT
    • MAX_VOLUME_FAILURE_TOLERATED_LIMIT

      public static final int MAX_VOLUME_FAILURE_TOLERATED_LIMIT
    • MAX_VOLUME_FAILURES_TOLERATED_MSG

      public static final String MAX_VOLUME_FAILURES_TOLERATED_MSG
    • METRICS_LOG_NAME

      public static final String METRICS_LOG_NAME
    • ipcServer

      public org.apache.hadoop.ipc.RPC.Server ipcServer
  • Method Details

    • createSocketAddr

      @Deprecated public static InetSocketAddress createSocketAddr(String target)
      Deprecated.
      Use NetUtils.createSocketAddr(String) instead.
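      A minimal replacement sketch using the non-deprecated helper; the "dn-host:9866" literal is a placeholder:

        // Resolves a "host:port" string the same way the deprecated method did.
        java.net.InetSocketAddress addr =
            org.apache.hadoop.net.NetUtils.createSocketAddr("dn-host:9866");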
    • getNewConf

      protected org.apache.hadoop.conf.Configuration getNewConf()
      Specified by:
      getNewConf in class org.apache.hadoop.conf.ReconfigurableBase
    • reconfigurePropertyImpl

      public String reconfigurePropertyImpl(String property, String newVal) throws org.apache.hadoop.conf.ReconfigurationException
      Specified by:
      reconfigurePropertyImpl in class org.apache.hadoop.conf.ReconfigurableBase
      Throws:
      org.apache.hadoop.conf.ReconfigurationException
    • getReconfigurableProperties

      public Collection<String> getReconfigurableProperties()
      Get a list of the keys of the re-configurable properties in configuration.
      Specified by:
      getReconfigurableProperties in interface org.apache.hadoop.conf.Reconfigurable
      Specified by:
      getReconfigurableProperties in class org.apache.hadoop.conf.ReconfigurableBase
    • getECN

      public org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.ECN getECN()
      The ECN bit for the DataNode. The DataNode should return:
      • ECN.DISABLED when ECN is disabled.
      • ECN.SUPPORTED when ECN is enabled but the DN still has capacity.
      • ECN.CONGESTED when ECN is enabled and the DN is congested.
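      A hedged usage sketch (dn is assumed to be a running DataNode instance):

        org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.ECN ecn = dn.getECN();
        switch (ecn) {
          case DISABLED:  // ECN is turned off
            break;
          case SUPPORTED: // ECN enabled and the DN still has capacity
            break;
          case CONGESTED: // ECN enabled and the DN is congested
            break;
          default:
            break;
        }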
    • getSLOWByBlockPoolId

      public org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.SLOW getSLOWByBlockPoolId(String bpId)
      The SLOW bit for the DataNode of the specified block pool. The DataNode should return:
      • SLOW.DISABLED when SLOW is disabled.
      • SLOW.NORMAL when SLOW is enabled and the DN is not a slow node.
      • SLOW.SLOW when SLOW is enabled and the DN is a slow node.
    • getFileIoProvider

      public FileIoProvider getFileIoProvider()
    • notifyNamenodeReceivedBlock

      public void notifyNamenodeReceivedBlock(org.apache.hadoop.hdfs.protocol.ExtendedBlock block, String delHint, String storageUuid, boolean isOnTransientStorage)
    • notifyNamenodeReceivingBlock

      protected void notifyNamenodeReceivingBlock(org.apache.hadoop.hdfs.protocol.ExtendedBlock block, String storageUuid)
    • notifyNamenodeDeletedBlock

      public void notifyNamenodeDeletedBlock(org.apache.hadoop.hdfs.protocol.ExtendedBlock block, String storageUuid)
      Notify the corresponding namenode to delete the block.
    • reportBadBlocks

      public void reportBadBlocks(org.apache.hadoop.hdfs.protocol.ExtendedBlock block) throws IOException
      Report a bad block which is hosted on the local DN.
      Throws:
      IOException
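      A hedged sketch of flagging a local replica as corrupt; the block pool ID, block ID, length and generation stamp below are placeholder values, and dn is an existing DataNode instance:

        org.apache.hadoop.hdfs.protocol.ExtendedBlock bad =
            new org.apache.hadoop.hdfs.protocol.ExtendedBlock(
                "BP-1234-127.0.0.1-1400000000000", 1073741825L, 134217728L, 1001L);
        dn.reportBadBlocks(bad);  // asks the corresponding NameNode to mark this replica corrupt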
    • reportBadBlocks

      public void reportBadBlocks(org.apache.hadoop.hdfs.protocol.ExtendedBlock block, FsVolumeSpi volume) throws IOException
      Report a bad block which is hosted on the local DN.
      Parameters:
      block - the bad block which is hosted on the local DN
      volume - the volume that the block is stored in; must not be null
      Throws:
      IOException
    • reportRemoteBadBlock

      public void reportRemoteBadBlock(org.apache.hadoop.hdfs.protocol.DatanodeInfo srcDataNode, org.apache.hadoop.hdfs.protocol.ExtendedBlock block) throws IOException
      Report a bad block on another DN (e.g. if we received a corrupt replica from a remote host).
      Parameters:
      srcDataNode - the DN hosting the bad block
      block - the block itself
      Throws:
      IOException
    • reportCorruptedBlocks

      public void reportCorruptedBlocks(org.apache.hadoop.hdfs.DFSUtilClient.CorruptedBlocks corruptedBlocks) throws IOException
      Throws:
      IOException
    • setHeartbeatsDisabledForTests

      @VisibleForTesting public void setHeartbeatsDisabledForTests(boolean heartbeatsDisabledForTests)
    • generateUuid

      public static String generateUuid()
    • getSaslClient

      public org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient getSaslClient()
    • getBpOsCount

      public int getBpOsCount()
    • getInfoAddr

      public static InetSocketAddress getInfoAddr(org.apache.hadoop.conf.Configuration conf)
      Determine the HTTP server's effective address.
    • getXferServer

      @VisibleForTesting public org.apache.hadoop.hdfs.server.datanode.DataXceiverServer getXferServer()
    • getXferPort

      @VisibleForTesting public int getXferPort()
    • getSaslServer

      @VisibleForTesting public SaslDataTransferServer getSaslServer()
    • getDisplayName

      public String getDisplayName()
      Returns:
      name useful for logging or display
    • getXferAddress

      public InetSocketAddress getXferAddress()
      NB: The datanode can perform data transfer on the streaming address; however, clients are given the IPC IP address for data transfer, and that may be a different address.
      Returns:
      socket address for data transfer
    • getIpcPort

      public int getIpcPort()
      Returns:
      the datanode's IPC port
    • getDNRegistrationForBP

      @VisibleForTesting public DatanodeRegistration getDNRegistrationForBP(String bpid) throws IOException
      Get the block pool (BP) registration for the given block pool ID.
      Returns:
      BP registration object
      Throws:
      IOException - on error
    • newSocket

      public Socket newSocket() throws IOException
      Creates either an NIO or a regular socket, depending on socketWriteTimeout.
      Throws:
      IOException
    • createInterDataNodeProtocolProxy

      public static InterDatanodeProtocol createInterDataNodeProtocolProxy(org.apache.hadoop.hdfs.protocol.DatanodeID datanodeid, org.apache.hadoop.conf.Configuration conf, int socketTimeout, boolean connectToDnViaHostname) throws IOException
      Throws:
      IOException
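      A hedged sketch of opening an inter-datanode proxy to a peer; the DatanodeID fields and the 60-second timeout are placeholders, and in practice the ID would come from a DatanodeInfo handed out by the NameNode:

        org.apache.hadoop.conf.Configuration conf = new org.apache.hadoop.hdfs.HdfsConfiguration();
        org.apache.hadoop.hdfs.protocol.DatanodeID remoteId =
            new org.apache.hadoop.hdfs.protocol.DatanodeID(
                "10.0.0.2", "dn2.example.com", "placeholder-uuid", 9866, 9864, 9865, 9867);
        InterDatanodeProtocol proxy = DataNode.createInterDataNodeProtocolProxy(
            remoteId, conf, 60000, /* connectToDnViaHostname */ false);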
    • getMetrics

      public DataNodeMetrics getMetrics()
    • getDiskMetrics

      public DataNodeDiskMetrics getDiskMetrics()
    • getPeerMetrics

      public DataNodePeerMetrics getPeerMetrics()
    • getMaxNumberOfBlocksToLog

      public long getMaxNumberOfBlocksToLog()
    • getBlockLocalPathInfo

      public org.apache.hadoop.hdfs.protocol.BlockLocalPathInfo getBlockLocalPathInfo(org.apache.hadoop.hdfs.protocol.ExtendedBlock block, org.apache.hadoop.security.token.Token<org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier> token) throws IOException
      Specified by:
      getBlockLocalPathInfo in interface org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
      Throws:
      IOException
    • shutdown

      public void shutdown()
      Shut down this instance of the datanode. Returns only after shutdown is complete. This method can only be called by the offerService thread. Otherwise, deadlock might occur.
    • checkDiskErrorAsync

      public void checkDiskErrorAsync(FsVolumeSpi volume)
      Check if there is a disk failure asynchronously and if so, handle the error.
    • getXceiverCount

      public int getXceiverCount()
      Number of concurrent xceivers per node.
      Specified by:
      getXceiverCount in interface DataNodeMXBean
    • getActiveTransferThreadCount

      public int getActiveTransferThreadCount()
      Description copied from interface: DataNodeMXBean
      Returns the number of Datanode threads actively transferring blocks.
      Specified by:
      getActiveTransferThreadCount in interface DataNodeMXBean
    • getDatanodeNetworkCounts

      public Map<String,Map<String,Long>> getDatanodeNetworkCounts()
      Description copied from interface: DataNodeMXBean
      Gets the network error counts on a per-Datanode basis.
      Specified by:
      getDatanodeNetworkCounts in interface DataNodeMXBean
    • getXmitsInProgress

      public int getXmitsInProgress()
      Description copied from interface: DataNodeMXBean
      Returns an estimate of the number of data replication/reconstruction tasks running currently.
      Specified by:
      getXmitsInProgress in interface DataNodeMXBean
    • incrementXmitsInProgress

      public void incrementXmitsInProgress()
      Increments the xmitsInProgress count. xmitsInProgress count represents the number of data replication/reconstruction tasks running currently.
    • incrementXmitsInProcess

      public void incrementXmitsInProcess(int delta)
      Increments the xmitsInProgress count by the given value.
      Parameters:
      delta - the amount of xmitsInProgress to increase.
    • decrementXmitsInProgress

      public void decrementXmitsInProgress()
      Decrements the xmitsInProgress count.
    • decrementXmitsInProgress

      public void decrementXmitsInProgress(int delta)
      Decrements the xmitsInProgress count by given value.
    • getBlockAccessToken

      public org.apache.hadoop.security.token.Token<org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier> getBlockAccessToken(org.apache.hadoop.hdfs.protocol.ExtendedBlock b, EnumSet<org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier.AccessMode> mode, org.apache.hadoop.fs.StorageType[] storageTypes, String[] storageIds) throws IOException
      Use BlockTokenSecretManager to generate a block token for the current user.
      Throws:
      IOException
    • getDataEncryptionKeyFactoryForBlock

      public org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataEncryptionKeyFactory getDataEncryptionKeyFactoryForBlock(org.apache.hadoop.hdfs.protocol.ExtendedBlock block)
      Returns a new DataEncryptionKeyFactory that generates a key from the BlockPoolTokenSecretManager, using the block pool ID of the given block.
      Parameters:
      block - for which the factory needs to create a key
      Returns:
      DataEncryptionKeyFactory for block's block pool ID
    • runDatanodeDaemon

      public void runDatanodeDaemon() throws IOException
      Start a single datanode daemon and wait for it to finish. If this thread is specifically interrupted, it will stop waiting.
      Throws:
      IOException
    • isDatanodeUp

      public boolean isDatanodeUp()
      A data node is considered to be up if one of the BP services is up.
    • instantiateDataNode

      public static DataNode instantiateDataNode(String[] args, org.apache.hadoop.conf.Configuration conf) throws IOException
      Instantiate a single datanode object. The returned instance must subsequently be started by invoking runDatanodeDaemon().
      Throws:
      IOException
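      A minimal embedding sketch, assuming the datanode storage directories and addresses are already present in the configuration; the empty args array means "no command-line options":

        org.apache.hadoop.conf.Configuration conf = new org.apache.hadoop.hdfs.HdfsConfiguration();
        DataNode dn = DataNode.instantiateDataNode(new String[0], conf);
        if (dn != null) {
          dn.runDatanodeDaemon();  // start the daemon threads; per runDatanodeDaemon() above, this waits for the datanode to finish
        }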
    • instantiateDataNode

      public static DataNode instantiateDataNode(String[] args, org.apache.hadoop.conf.Configuration conf, SecureDataNodeStarter.SecureResources resources) throws IOException
      Instantiate a single datanode object, along with its secure resources. The returned instance must subsequently be started by invoking runDatanodeDaemon().
      Throws:
      IOException
    • getStorageLocations

      public static List<StorageLocation> getStorageLocations(org.apache.hadoop.conf.Configuration conf)
    • createDataNode

      @VisibleForTesting public static DataNode createDataNode(String[] args, org.apache.hadoop.conf.Configuration conf) throws IOException
      Instantiate and start a single datanode daemon and wait for it to finish. If this thread is specifically interrupted, it will stop waiting.
      Throws:
      IOException
    • createDataNode

      @VisibleForTesting @Private public static DataNode createDataNode(String[] args, org.apache.hadoop.conf.Configuration conf, SecureDataNodeStarter.SecureResources resources) throws IOException
      Instantiate and start a single datanode daemon and wait for it to finish. If this thread is specifically interrupted, it will stop waiting.
      Throws:
      IOException
    • toString

      public String toString()
      Overrides:
      toString in class Object
    • scheduleAllBlockReport

      public void scheduleAllBlockReport(long delay)
      This method arranges for the data node to send the block report at the next heartbeat.
    • getFSDataset

      @VisibleForTesting public FsDatasetSpi<?> getFSDataset()
      Mainly used in tests, for example to add and delete blocks directly. The most common usage will be when the data node's storage is simulated.
      Returns:
      the fsdataset that stores the blocks
    • getBlockScanner

      @VisibleForTesting public BlockScanner getBlockScanner()
    • getBlockPoolTokenSecretManager

      @VisibleForTesting public BlockPoolTokenSecretManager getBlockPoolTokenSecretManager()
    • secureMain

      public static void secureMain(String[] args, SecureDataNodeStarter.SecureResources resources)
    • main

      public static void main(String[] args)
    • initReplicaRecovery

      public ReplicaRecoveryInfo initReplicaRecovery(BlockRecoveryCommand.RecoveringBlock rBlock) throws IOException
      Description copied from interface: InterDatanodeProtocol
      Initialize a replica recovery.
      Specified by:
      initReplicaRecovery in interface InterDatanodeProtocol
      Returns:
      actual state of the replica on this data-node or null if data-node does not have the replica.
      Throws:
      IOException
    • updateReplicaUnderRecovery

      public String updateReplicaUnderRecovery(org.apache.hadoop.hdfs.protocol.ExtendedBlock oldBlock, long recoveryId, long newBlockId, long newLength) throws IOException
      Update replica with the new generation stamp and length.
      Specified by:
      updateReplicaUnderRecovery in interface InterDatanodeProtocol
      Throws:
      IOException
    • getReplicaVisibleLength

      public long getReplicaVisibleLength(org.apache.hadoop.hdfs.protocol.ExtendedBlock block) throws IOException
      Specified by:
      getReplicaVisibleLength in interface org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
      Throws:
      IOException
    • getSoftwareVersion

      public String getSoftwareVersion()
      Description copied from interface: DataNodeMXBean
      Get the version of software running on the DataNode
      Specified by:
      getSoftwareVersion in interface DataNodeMXBean
      Returns:
      a string representing the version
    • getVersion

      public String getVersion()
      Description copied from interface: DataNodeMXBean
      Gets the version of Hadoop.
      Specified by:
      getVersion in interface DataNodeMXBean
      Returns:
      the version of Hadoop
    • getRpcPort

      public String getRpcPort()
      Description copied from interface: DataNodeMXBean
      Gets the rpc port.
      Specified by:
      getRpcPort in interface DataNodeMXBean
      Returns:
      the rpc port
    • getDataPort

      public String getDataPort()
      Description copied from interface: DataNodeMXBean
      Gets the data port.
      Specified by:
      getDataPort in interface DataNodeMXBean
      Returns:
      the data port
    • getHttpPort

      public String getHttpPort()
      Description copied from interface: DataNodeMXBean
      Gets the http port.
      Specified by:
      getHttpPort in interface DataNodeMXBean
      Returns:
      the http port
    • getDNStartedTimeInMillis

      public long getDNStartedTimeInMillis()
      Description copied from interface: DataNodeMXBean
      Get the start time of the DataNode.
      Specified by:
      getDNStartedTimeInMillis in interface DataNodeMXBean
      Returns:
      Start time of the DataNode.
    • getRevision

      public String getRevision()
    • getInfoPort

      public int getInfoPort()
      Returns:
      the datanode's http port
    • getInfoSecurePort

      public int getInfoSecurePort()
      Returns:
      the datanode's https port
    • getNamenodeAddresses

      public String getNamenodeAddresses()
      Returned information is a JSON representation of a map with the name node host name as the key and the block pool ID as the value. Note that, if there are multiple NNs in an HA nameservice, a given block pool may be represented twice.
      Specified by:
      getNamenodeAddresses in interface DataNodeMXBean
      Returns:
      the namenode IP addresses that the datanode is talking to
    • getDatanodeHostname

      public String getDatanodeHostname()
      Return hostname of the datanode.
      Specified by:
      getDatanodeHostname in interface DataNodeMXBean
      Returns:
      the datanode hostname for the datanode.
    • getBPServiceActorInfo

      public String getBPServiceActorInfo()
      Returned information is a JSON representation of an array; each element of the array is a map that contains information about a block pool service actor.
      Specified by:
      getBPServiceActorInfo in interface DataNodeMXBean
      Returns:
      block pool service actors info
    • getBPServiceActorInfoMap

      @VisibleForTesting public List<Map<String,String>> getBPServiceActorInfoMap()
    • getVolumeInfo

      public String getVolumeInfo()
      Returned information is a JSON representation of a map with the volume name as the key and, as the value, a map of volume attribute keys to their values.
      Specified by:
      getVolumeInfo in interface DataNodeMXBean
      Returns:
      the volume info
    • getClusterId

      public String getClusterId()
      Description copied from interface: DataNodeMXBean
      Gets the cluster id.
      Specified by:
      getClusterId in interface DataNodeMXBean
      Returns:
      the cluster id
    • getDiskBalancerStatus

      public String getDiskBalancerStatus()
      Description copied from interface: DataNodeMXBean
      Gets the diskBalancer Status. Please see implementation for the format of the returned information.
      Specified by:
      getDiskBalancerStatus in interface DataNodeMXBean
      Returns:
      DiskBalancer Status
    • isSecurityEnabled

      public boolean isSecurityEnabled()
      Description copied from interface: DataNodeMXBean
      Gets if security is enabled.
      Specified by:
      isSecurityEnabled in interface DataNodeMXBean
      Returns:
      true, if security is enabled.
    • refreshNamenodes

      public void refreshNamenodes(org.apache.hadoop.conf.Configuration conf) throws IOException
      Throws:
      IOException
    • refreshNamenodes

      public void refreshNamenodes() throws IOException
      Specified by:
      refreshNamenodes in interface org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
      Throws:
      IOException
    • deleteBlockPool

      public void deleteBlockPool(String blockPoolId, boolean force) throws IOException
      Specified by:
      deleteBlockPool in interface org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
      Throws:
      IOException
    • shutdownDatanode

      public void shutdownDatanode(boolean forUpgrade) throws IOException
      Specified by:
      shutdownDatanode in interface org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
      Throws:
      IOException
    • evictWriters

      public void evictWriters() throws IOException
      Specified by:
      evictWriters in interface org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
      Throws:
      IOException
    • getDatanodeInfo

      public org.apache.hadoop.hdfs.protocol.DatanodeLocalInfo getDatanodeInfo()
      Specified by:
      getDatanodeInfo in interface org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
    • startReconfiguration

      public void startReconfiguration() throws IOException
      Specified by:
      startReconfiguration in interface org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
      Specified by:
      startReconfiguration in interface org.apache.hadoop.hdfs.protocol.ReconfigurationProtocol
      Throws:
      IOException
    • getReconfigurationStatus

      public org.apache.hadoop.conf.ReconfigurationTaskStatus getReconfigurationStatus() throws IOException
      Specified by:
      getReconfigurationStatus in interface org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
      Specified by:
      getReconfigurationStatus in interface org.apache.hadoop.hdfs.protocol.ReconfigurationProtocol
      Throws:
      IOException
    • listReconfigurableProperties

      public List<String> listReconfigurableProperties() throws IOException
      Specified by:
      listReconfigurableProperties in interface org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
      Specified by:
      listReconfigurableProperties in interface org.apache.hadoop.hdfs.protocol.ReconfigurationProtocol
      Throws:
      IOException
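      A hedged sketch of the reconfiguration round trip provided by the three methods above (dn is an existing DataNode instance; the one-second poll interval is arbitrary and InterruptedException handling is omitted):

        System.out.println("Reconfigurable keys: " + dn.listReconfigurableProperties());
        dn.startReconfiguration();   // re-reads the on-disk configuration in the background
        org.apache.hadoop.conf.ReconfigurationTaskStatus status = dn.getReconfigurationStatus();
        while (status.hasTask() && !status.stopped()) {
          Thread.sleep(1000);        // throws InterruptedException; handle it in real code
          status = dn.getReconfigurationStatus();
        }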
    • triggerBlockReport

      public void triggerBlockReport(org.apache.hadoop.hdfs.client.BlockReportOptions options) throws IOException
      Specified by:
      triggerBlockReport in interface org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
      Throws:
      IOException
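      A hedged sketch of forcing an incremental block report, mirroring what the hdfs dfsadmin -triggerBlockReport command does over ClientDatanodeProtocol (dn is an existing DataNode instance):

        dn.triggerBlockReport(new org.apache.hadoop.hdfs.client.BlockReportOptions.Factory()
            .setIncremental(true)   // false would request a full block report instead
            .build());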
    • isConnectedToNN

      public boolean isConnectedToNN(InetSocketAddress addr)
      Parameters:
      addr - rpc address of the namenode
      Returns:
      true if the datanode is connected to a NameNode at the given address
    • isBPServiceAlive

      public boolean isBPServiceAlive(String bpid)
      Parameters:
      bpid - block pool Id
      Returns:
      true - if BPOfferService thread is alive
    • isDatanodeFullyStarted

      public boolean isDatanodeFullyStarted()
      A datanode is considered to be fully started if all the BP threads are alive and all the block pools are initialized.
      Returns:
      true - if the data node is fully started
    • isDatanodeFullyStarted

      public boolean isDatanodeFullyStarted(boolean checkConnectionToActiveNamenode)
      A datanode is considered to be fully started if all the BP threads are alive and all the block pools are initialized. If checkConnectionToActiveNamenode is true, the datanode is considered to be fully started if it is also heartbeating to active namenode in addition to the above-mentioned conditions.
      Parameters:
      checkConnectionToActiveNamenode - if true, performs additional check of whether datanode is heartbeating to active namenode.
      Returns:
      true if the datanode is fully started and also conditionally connected to active namenode, false otherwise.
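      A test-style waiting sketch under the same assumptions (dn is a freshly created DataNode; the 100 ms poll interval is arbitrary and InterruptedException handling is omitted):

        while (!dn.isDatanodeFullyStarted(true)) {
          Thread.sleep(100);  // wait until every block pool has registered and reached an active NameNode
        }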
    • getDatanodeId

      @VisibleForTesting public org.apache.hadoop.hdfs.protocol.DatanodeID getDatanodeId()
    • clearAllBlockSecretKeys

      @VisibleForTesting public void clearAllBlockSecretKeys()
    • getBalancerBandwidth

      public long getBalancerBandwidth()
      Specified by:
      getBalancerBandwidth in interface org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
    • getDnConf

      public DNConf getDnConf()
    • getDatanodeUuid

      public String getDatanodeUuid()
    • getShortCircuitRegistry

      public ShortCircuitRegistry getShortCircuitRegistry()
    • getEcReconstuctReadThrottler

      public DataTransferThrottler getEcReconstuctReadThrottler()
    • getEcReconstuctWriteThrottler

      public DataTransferThrottler getEcReconstuctWriteThrottler()
    • checkDiskError

      @VisibleForTesting public void checkDiskError() throws IOException
      Check the disk error synchronously.
      Throws:
      IOException
    • handleVolumeFailures

      @VisibleForTesting public void handleVolumeFailures(Set<FsVolumeSpi> unhealthyVolumes)
    • getLastDiskErrorCheck

      @VisibleForTesting public long getLastDiskErrorCheck()
    • getBlockRecoveryWorker

      public BlockRecoveryWorker getBlockRecoveryWorker()
    • getErasureCodingWorker

      public ErasureCodingWorker getErasureCodingWorker()
    • getOOBTimeout

      public long getOOBTimeout(org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.Status status) throws IOException
      Get the timeout to be used for transmitting the given OOB (out-of-band) status type.
      Returns:
      the timeout in milliseconds
      Throws:
      IOException
    • startMetricsLogger

      protected void startMetricsLogger()
      Start a timer to periodically write DataNode metrics to the log file. This behavior can be disabled by configuration.
    • stopMetricsLogger

      protected void stopMetricsLogger()
    • getTracer

      public org.apache.hadoop.tracing.Tracer getTracer()
    • submitDiskBalancerPlan

      public void submitDiskBalancerPlan(String planID, long planVersion, String planFile, String planData, boolean skipDateCheck) throws IOException
      Allows submission of a disk balancer Job.
      Specified by:
      submitDiskBalancerPlan in interface org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
      Parameters:
      planID - Hash value of the plan.
      planVersion - Plan version, reserved for future use. We have only version 1 now.
      planFile - Plan file name.
      planData - Actual plan data in JSON format.
      Throws:
      IOException
    • cancelDiskBalancePlan

      public void cancelDiskBalancePlan(String planID) throws IOException
      Cancels a running plan.
      Specified by:
      cancelDiskBalancePlan in interface org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
      Parameters:
      planID - Hash string that identifies a plan.
      Throws:
      IOException
    • queryDiskBalancerPlan

      public org.apache.hadoop.hdfs.server.datanode.DiskBalancerWorkStatus queryDiskBalancerPlan() throws IOException
      Returns the status of current or last executed work plan.
      Specified by:
      queryDiskBalancerPlan in interface org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
      Returns:
      DiskBalancerWorkStatus.
      Throws:
      IOException
    • getDiskBalancerSetting

      public String getDiskBalancerSetting(String key) throws IOException
      Gets a runtime configuration value from the DiskBalancer instance, for example the DiskBalancer bandwidth.
      Specified by:
      getDiskBalancerSetting in interface org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
      Parameters:
      key - String that represents the runtime key value.
      Returns:
      value of the key as a string.
      Throws:
      IOException - Thrown if there is no such key.
    • getSendPacketDownstreamAvgInfo

      public String getSendPacketDownstreamAvgInfo()
      Description copied from interface: DataNodeMXBean
      Gets the average info (e.g. time) of SendPacketDownstream when the DataNode acts as the penultimate (2nd to the last) node in pipeline.

      Example Json: {"[185.164.159.81:9801]RollingAvgTime":504.867, "[49.236.149.246:9801]RollingAvgTime":504.463, "[84.125.113.65:9801]RollingAvgTime":497.954}

      Specified by:
      getSendPacketDownstreamAvgInfo in interface DataNodeMXBean
    • getSlowDisks

      public String getSlowDisks()
      Description copied from interface: DataNodeMXBean
      Gets the slow disks in the Datanode.
      Specified by:
      getSlowDisks in interface DataNodeMXBean
      Returns:
      list of slow disks
    • getVolumeReport

      public List<org.apache.hadoop.hdfs.protocol.DatanodeVolumeInfo> getVolumeReport() throws IOException
      Specified by:
      getVolumeReport in interface org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
      Throws:
      IOException
    • getDiskBalancer

      @VisibleForTesting public DiskBalancer getDiskBalancer() throws IOException
      Throws:
      IOException
    • getDataSetLockManager

      public DataSetLockManager getDataSetLockManager()
    • getBlockPoolManager

      @VisibleForTesting public org.apache.hadoop.hdfs.server.datanode.BlockPoolManager getBlockPoolManager()