Uses of Class
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo
Packages that use DatanodeStorageInfo
- org.apache.hadoop.hdfs.server.blockmanagement
- org.apache.hadoop.hdfs.server.namenode
- org.apache.hadoop.hdfs.server.protocol
Uses of DatanodeStorageInfo in org.apache.hadoop.hdfs.server.blockmanagement
Fields in org.apache.hadoop.hdfs.server.blockmanagement declared as DatanodeStorageInfo
- static final DatanodeStorageInfo[] DatanodeStorageInfo.EMPTY_ARRAY
- final DatanodeStorageInfo[] DatanodeDescriptor.BlockTargetPair.targets

Fields in org.apache.hadoop.hdfs.server.blockmanagement with type parameters of type DatanodeStorageInfo
- protected final Map<String,DatanodeStorageInfo> DatanodeDescriptor.storageMap

Methods in org.apache.hadoop.hdfs.server.blockmanagement that return DatanodeStorageInfo
- protected DatanodeStorageInfo BlockPlacementPolicyDefault.chooseLocalOrFavoredStorage(org.apache.hadoop.net.Node localOrFavoredNode, boolean isFavoredNode, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, int maxNodesPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<org.apache.hadoop.fs.StorageType, Integer> storageTypes)
  Choose storage of the local or favored node.
- protected DatanodeStorageInfo BlockPlacementPolicyDefault.chooseLocalRack(org.apache.hadoop.net.Node localMachine, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, int maxNodesPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<org.apache.hadoop.fs.StorageType, Integer> storageTypes)
  Choose one node from the rack that localMachine is on.
- protected DatanodeStorageInfo BlockPlacementPolicyWithNodeGroup.chooseLocalRack(org.apache.hadoop.net.Node localMachine, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, int maxNodesPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<org.apache.hadoop.fs.StorageType, Integer> storageTypes)
- protected DatanodeStorageInfo AvailableSpaceBlockPlacementPolicy.chooseLocalStorage(org.apache.hadoop.net.Node localMachine, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, int maxNodesPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<org.apache.hadoop.fs.StorageType, Integer> storageTypes, boolean fallbackToLocalRack)
- protected DatanodeStorageInfo BlockPlacementPolicyDefault.chooseLocalStorage(org.apache.hadoop.net.Node localMachine, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, int maxNodesPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<org.apache.hadoop.fs.StorageType, Integer> storageTypes)
- protected DatanodeStorageInfo BlockPlacementPolicyDefault.chooseLocalStorage(org.apache.hadoop.net.Node localMachine, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, int maxNodesPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<org.apache.hadoop.fs.StorageType, Integer> storageTypes, boolean fallbackToLocalRack)
  Choose localMachine as the target.
- protected DatanodeStorageInfo BlockPlacementPolicyWithNodeGroup.chooseLocalStorage(org.apache.hadoop.net.Node localMachine, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, int maxNodesPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<org.apache.hadoop.fs.StorageType, Integer> storageTypes, boolean fallbackToNodeGroupAndLocalRack)
  Choose the local node of localMachine as the target.
- protected DatanodeStorageInfo BlockPlacementPolicyDefault.chooseRandom(int numOfReplicas, String scope, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, int maxNodesPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<org.apache.hadoop.fs.StorageType, Integer> storageTypes)
  Randomly choose numOfReplicas targets from the given scope.
- protected DatanodeStorageInfo BlockPlacementPolicyDefault.chooseRandom(String scope, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, int maxNodesPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<org.apache.hadoop.fs.StorageType, Integer> storageTypes)
  Randomly choose one target from the given scope.
- BlockPlacementPolicyDefault.chooseReplicaToDelete(Collection<DatanodeStorageInfo> moreThanOne, Collection<DatanodeStorageInfo> exactlyOne, List<org.apache.hadoop.fs.StorageType> excessTypes, Map<String, List<DatanodeStorageInfo>> rackMap)
  Decide whether deleting the specified replica of the block still makes the block conform to the configured block placement policy.
- DatanodeDescriptor.chooseStorage4Block(org.apache.hadoop.fs.StorageType t, long blockSize, int minBlocksForWrite)
  Find whether the datanode contains good storage of the given type to place a block of size blockSize.
- abstract DatanodeStorageInfo[] BlockPlacementPolicy.chooseTarget(String srcPath, int numOfReplicas, org.apache.hadoop.net.Node writer, List<DatanodeStorageInfo> chosen, boolean returnChosenNodes, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, org.apache.hadoop.hdfs.protocol.BlockStoragePolicy storagePolicy, EnumSet<org.apache.hadoop.hdfs.AddBlockFlag> flags)
  Choose numOfReplicas data nodes for writer to re-replicate a block of size blocksize; if that many cannot be chosen, return as many as possible.
- BlockPlacementPolicy.chooseTarget(String srcPath, int numOfReplicas, org.apache.hadoop.net.Node writer, List<DatanodeStorageInfo> chosen, boolean returnChosenNodes, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, org.apache.hadoop.hdfs.protocol.BlockStoragePolicy storagePolicy, EnumSet<org.apache.hadoop.hdfs.AddBlockFlag> flags, EnumMap<org.apache.hadoop.fs.StorageType, Integer> storageTypes)
- BlockPlacementPolicyDefault.chooseTarget(String srcPath, int numOfReplicas, org.apache.hadoop.net.Node writer, List<DatanodeStorageInfo> chosenNodes, boolean returnChosenNodes, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, org.apache.hadoop.hdfs.protocol.BlockStoragePolicy storagePolicy, EnumSet<org.apache.hadoop.hdfs.AddBlockFlag> flags)
- BlockPlacementPolicyDefault.chooseTarget(String srcPath, int numOfReplicas, org.apache.hadoop.net.Node writer, List<DatanodeStorageInfo> chosen, boolean returnChosenNodes, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, org.apache.hadoop.hdfs.protocol.BlockStoragePolicy storagePolicy, EnumSet<org.apache.hadoop.hdfs.AddBlockFlag> flags, EnumMap<org.apache.hadoop.fs.StorageType, Integer> storageTypes)
- BlockManager.chooseTarget4AdditionalDatanode(String src, int numAdditionalNodes, org.apache.hadoop.net.Node clientnode, List<DatanodeStorageInfo> chosen, Set<org.apache.hadoop.net.Node> excludes, long blocksize, byte storagePolicyID, org.apache.hadoop.hdfs.protocol.BlockType blockType)
  Choose targets for getting additional datanodes for an existing pipeline.
- BlockManager.chooseTarget4NewBlock(String src, int numOfReplicas, org.apache.hadoop.net.Node client, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, List<String> favoredNodes, byte storagePolicyID, org.apache.hadoop.hdfs.protocol.BlockType blockType, org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy ecPolicy, EnumSet<org.apache.hadoop.hdfs.AddBlockFlag> flags)
  Choose target datanodes for creating a new block.
- BlockManager.chooseTarget4WebHDFS(String src, DatanodeDescriptor clientnode, Set<org.apache.hadoop.net.Node> excludes, long blocksize)
  Choose a target for WebHDFS redirection.
- DatanodeManager.getDatanodeStorageInfos(org.apache.hadoop.hdfs.protocol.DatanodeID[] datanodeID, String[] storageIDs, String format, Object... args)
- BlockUnderConstructionFeature.getExpectedStorageLocations()
  Create an array of expected replica locations (as assigned by chooseTargets()).
- ProvidedStorageMap.getProvidedStorageInfo()
- BlockInfoStriped.StorageAndBlockIndex.getStorage()
- DatanodeDescriptor.getStorageInfo(String storageID)
- DatanodeDescriptor.getStorageInfos()
- BlockManager.getStorages(BlockInfo block)

Methods in org.apache.hadoop.hdfs.server.blockmanagement that return types with arguments of type DatanodeStorageInfo
- abstract List<DatanodeStorageInfo> BlockPlacementPolicy.chooseReplicasToDelete(Collection<DatanodeStorageInfo> availableReplicas, Collection<DatanodeStorageInfo> delCandidates, int expectedNumOfReplicas, List<org.apache.hadoop.fs.StorageType> excessTypes, DatanodeDescriptor addedNode, DatanodeDescriptor delNodeHint)
  Select the excess replica storages for deletion, based on either delNodeHint or the excess storage types.
- BlockPlacementPolicyDefault.chooseReplicasToDelete(Collection<DatanodeStorageInfo> availableReplicas, Collection<DatanodeStorageInfo> delCandidates, int expectedNumOfReplicas, List<org.apache.hadoop.fs.StorageType> excessTypes, DatanodeDescriptor addedNode, DatanodeDescriptor delNodeHint)
- BlockUnderConstructionFeature.getExpectedStorageLocationsIterator()
  Note that this iterator is not thread-safe.
- BlockInfo.getStorageInfos()
- BlockManager.getStorages(org.apache.hadoop.hdfs.protocol.Block block)
- protected Collection<DatanodeStorageInfo> BlockPlacementPolicyDefault.pickupReplicaSet(Collection<DatanodeStorageInfo> moreThanOne, Collection<DatanodeStorageInfo> exactlyOne, Map<String, List<DatanodeStorageInfo>> rackMap)
  Pick up the replica node set for deleting a replica as over-replicated.
- protected Collection<DatanodeStorageInfo> BlockPlacementPolicyRackFaultTolerant.pickupReplicaSet(Collection<DatanodeStorageInfo> moreThanOne, Collection<DatanodeStorageInfo> exactlyOne, Map<String, List<DatanodeStorageInfo>> rackMap)
- BlockPlacementPolicyWithNodeGroup.pickupReplicaSet(Collection<DatanodeStorageInfo> first, Collection<DatanodeStorageInfo> second, Map<String, List<DatanodeStorageInfo>> rackMap)
  Pick up the replica node set for deleting a replica as over-replicated.
- protected Collection<DatanodeStorageInfo> BlockPlacementPolicyWithUpgradeDomain.pickupReplicaSet(Collection<DatanodeStorageInfo> moreThanOne, Collection<DatanodeStorageInfo> exactlyOne, Map<String, List<DatanodeStorageInfo>> rackMap)

Methods in org.apache.hadoop.hdfs.server.blockmanagement with parameters of type DatanodeStorageInfo
- void BlockManager.addBlock(DatanodeStorageInfo storageInfo, org.apache.hadoop.hdfs.protocol.Block block, String delHint)
  The given node is reporting that it received a certain block.
- void DatanodeDescriptor.addBlockToBeReplicated(org.apache.hadoop.hdfs.protocol.Block block, DatanodeStorageInfo[] targets)
  Store block replication work.
- void ProvidedStorageMap.ProvidedDescriptor.addBlockToBeReplicated(org.apache.hadoop.hdfs.protocol.Block block, DatanodeStorageInfo[] targets)
- void DatanodeDescriptor.addECBlockToBeReplicated(org.apache.hadoop.hdfs.protocol.Block block, DatanodeStorageInfo[] targets)
  Store erasure-coded block replication work.
- void BlockPlacementPolicy.adjustSetsWithChosenReplica(Map<String, List<DatanodeStorageInfo>> rackMap, List<DatanodeStorageInfo> moreThanOne, List<DatanodeStorageInfo> exactlyOne, DatanodeStorageInfo cur)
  Adjust rackMap, moreThanOne, and exactlyOne after removing the replica on cur.
- void BlockCollection.convertLastBlockToUC(BlockInfo lastBlock, DatanodeStorageInfo[] targets)
  Convert the last block of the collection to an under-construction block and set the locations.
- void BlockInfo.convertToBlockUnderConstruction(HdfsServerConstants.BlockUCState s, DatanodeStorageInfo[] targets)
  Add/update the under-construction feature.
- static void DatanodeStorageInfo.decrementBlocksScheduled(DatanodeStorageInfo... storages)
  Decrement the number of blocks scheduled for each given storage.
- byte BlockInfoStriped.getStorageBlockIndex(DatanodeStorageInfo storage)
- static void DatanodeStorageInfo.incrementBlocksScheduled(DatanodeStorageInfo... storages)
  Increment the number of blocks scheduled for each given storage.
- void BlockManager.markBlockReplicasAsCorrupt(org.apache.hadoop.hdfs.protocol.Block oldBlock, BlockInfo block, long oldGenerationStamp, long oldNumBytes, DatanodeStorageInfo[] newStorages)
  Mark block replicas as corrupt except those on the storages in the newStorages list.
- BlockInfo.moveBlockToHead(BlockInfo head, DatanodeStorageInfo storage, int curIndex, int headIndex)
  Remove this block from the list of blocks related to the specified DatanodeDescriptor.
- static org.apache.hadoop.hdfs.protocol.LocatedBlock BlockManager.newLocatedBlock(org.apache.hadoop.hdfs.protocol.ExtendedBlock eb, BlockInfo info, DatanodeStorageInfo[] locs, long offset)
- static org.apache.hadoop.hdfs.protocol.LocatedBlock BlockManager.newLocatedBlock(org.apache.hadoop.hdfs.protocol.ExtendedBlock b, DatanodeStorageInfo[] storages, long startOffset, boolean corrupt)
- static org.apache.hadoop.hdfs.protocol.LocatedStripedBlock BlockManager.newLocatedStripedBlock(org.apache.hadoop.hdfs.protocol.ExtendedBlock b, DatanodeStorageInfo[] storages, byte[] indices, long startOffset, boolean corrupt)
- void BlockUnderConstructionFeature.setExpectedLocations(org.apache.hadoop.hdfs.protocol.Block block, DatanodeStorageInfo[] targets, org.apache.hadoop.hdfs.protocol.BlockType blockType)
  Set expected locations.
- static org.apache.hadoop.hdfs.protocol.DatanodeInfo[] DatanodeStorageInfo.toDatanodeInfos(DatanodeStorageInfo[] storages)
- static String[] DatanodeStorageInfo.toStorageIDs(DatanodeStorageInfo[] storages)
- static org.apache.hadoop.fs.StorageType[] DatanodeStorageInfo.toStorageTypes(DatanodeStorageInfo[] storages)

Method parameters in org.apache.hadoop.hdfs.server.blockmanagement with type arguments of type DatanodeStorageInfo
- void BlockPlacementPolicy.adjustSetsWithChosenReplica(Map<String, List<DatanodeStorageInfo>> rackMap, List<DatanodeStorageInfo> moreThanOne, List<DatanodeStorageInfo> exactlyOne, DatanodeStorageInfo cur)
  Adjust rackMap, moreThanOne, and exactlyOne after removing the replica on cur.
- protected DatanodeStorageInfo BlockPlacementPolicyDefault.chooseLocalOrFavoredStorage(org.apache.hadoop.net.Node localOrFavoredNode, boolean isFavoredNode, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, int maxNodesPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<org.apache.hadoop.fs.StorageType, Integer> storageTypes)
  Choose storage of the local or favored node.
- protected DatanodeStorageInfo BlockPlacementPolicyDefault.chooseLocalRack(org.apache.hadoop.net.Node localMachine, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, int maxNodesPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<org.apache.hadoop.fs.StorageType, Integer> storageTypes)
  Choose one node from the rack that localMachine is on.
- protected DatanodeStorageInfo BlockPlacementPolicyWithNodeGroup.chooseLocalRack(org.apache.hadoop.net.Node localMachine, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, int maxNodesPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<org.apache.hadoop.fs.StorageType, Integer> storageTypes)
- protected DatanodeStorageInfo AvailableSpaceBlockPlacementPolicy.chooseLocalStorage(org.apache.hadoop.net.Node localMachine, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, int maxNodesPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<org.apache.hadoop.fs.StorageType, Integer> storageTypes, boolean fallbackToLocalRack)
- protected DatanodeStorageInfo BlockPlacementPolicyDefault.chooseLocalStorage(org.apache.hadoop.net.Node localMachine, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, int maxNodesPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<org.apache.hadoop.fs.StorageType, Integer> storageTypes)
- protected DatanodeStorageInfo BlockPlacementPolicyDefault.chooseLocalStorage(org.apache.hadoop.net.Node localMachine, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, int maxNodesPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<org.apache.hadoop.fs.StorageType, Integer> storageTypes, boolean fallbackToLocalRack)
  Choose localMachine as the target.
- protected DatanodeStorageInfo BlockPlacementPolicyWithNodeGroup.chooseLocalStorage(org.apache.hadoop.net.Node localMachine, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, int maxNodesPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<org.apache.hadoop.fs.StorageType, Integer> storageTypes, boolean fallbackToNodeGroupAndLocalRack)
  Choose the local node of localMachine as the target.
- protected DatanodeStorageInfo BlockPlacementPolicyDefault.chooseRandom(int numOfReplicas, String scope, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, int maxNodesPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<org.apache.hadoop.fs.StorageType, Integer> storageTypes)
  Randomly choose numOfReplicas targets from the given scope.
- protected DatanodeStorageInfo BlockPlacementPolicyDefault.chooseRandom(String scope, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, int maxNodesPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<org.apache.hadoop.fs.StorageType, Integer> storageTypes)
  Randomly choose one target from the given scope.
- protected void BlockPlacementPolicyDefault.chooseRemoteRack(int numOfReplicas, DatanodeDescriptor localMachine, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, int maxReplicasPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<org.apache.hadoop.fs.StorageType, Integer> storageTypes)
  Choose numOfReplicas nodes from the racks that localMachine is NOT on.
- protected void BlockPlacementPolicyWithNodeGroup.chooseRemoteRack(int numOfReplicas, DatanodeDescriptor localMachine, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, int maxReplicasPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<org.apache.hadoop.fs.StorageType, Integer> storageTypes)
- abstract List<DatanodeStorageInfo> BlockPlacementPolicy.chooseReplicasToDelete(Collection<DatanodeStorageInfo> availableReplicas, Collection<DatanodeStorageInfo> delCandidates, int expectedNumOfReplicas, List<org.apache.hadoop.fs.StorageType> excessTypes, DatanodeDescriptor addedNode, DatanodeDescriptor delNodeHint)
  Select the excess replica storages for deletion, based on either delNodeHint or the excess storage types.
- BlockPlacementPolicyDefault.chooseReplicasToDelete(Collection<DatanodeStorageInfo> availableReplicas, Collection<DatanodeStorageInfo> delCandidates, int expectedNumOfReplicas, List<org.apache.hadoop.fs.StorageType> excessTypes, DatanodeDescriptor addedNode, DatanodeDescriptor delNodeHint)
- BlockPlacementPolicyDefault.chooseReplicaToDelete(Collection<DatanodeStorageInfo> moreThanOne, Collection<DatanodeStorageInfo> exactlyOne, List<org.apache.hadoop.fs.StorageType> excessTypes, Map<String, List<DatanodeStorageInfo>> rackMap)
  Decide whether deleting the specified replica of the block still makes the block conform to the configured block placement policy.
- abstract DatanodeStorageInfo[] BlockPlacementPolicy.chooseTarget(String srcPath, int numOfReplicas, org.apache.hadoop.net.Node writer, List<DatanodeStorageInfo> chosen, boolean returnChosenNodes, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, org.apache.hadoop.hdfs.protocol.BlockStoragePolicy storagePolicy, EnumSet<org.apache.hadoop.hdfs.AddBlockFlag> flags)
  Choose numOfReplicas data nodes for writer to re-replicate a block of size blocksize; if that many cannot be chosen, return as many as possible.
- BlockPlacementPolicy.chooseTarget(String srcPath, int numOfReplicas, org.apache.hadoop.net.Node writer, List<DatanodeStorageInfo> chosen, boolean returnChosenNodes, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, org.apache.hadoop.hdfs.protocol.BlockStoragePolicy storagePolicy, EnumSet<org.apache.hadoop.hdfs.AddBlockFlag> flags, EnumMap<org.apache.hadoop.fs.StorageType, Integer> storageTypes)
- BlockPlacementPolicyDefault.chooseTarget(String srcPath, int numOfReplicas, org.apache.hadoop.net.Node writer, List<DatanodeStorageInfo> chosenNodes, boolean returnChosenNodes, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, org.apache.hadoop.hdfs.protocol.BlockStoragePolicy storagePolicy, EnumSet<org.apache.hadoop.hdfs.AddBlockFlag> flags)
- BlockPlacementPolicyDefault.chooseTarget(String srcPath, int numOfReplicas, org.apache.hadoop.net.Node writer, List<DatanodeStorageInfo> chosen, boolean returnChosenNodes, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, org.apache.hadoop.hdfs.protocol.BlockStoragePolicy storagePolicy, EnumSet<org.apache.hadoop.hdfs.AddBlockFlag> flags, EnumMap<org.apache.hadoop.fs.StorageType, Integer> storageTypes)
- BlockManager.chooseTarget4AdditionalDatanode(String src, int numAdditionalNodes, org.apache.hadoop.net.Node clientnode, List<DatanodeStorageInfo> chosen, Set<org.apache.hadoop.net.Node> excludes, long blocksize, byte storagePolicyID, org.apache.hadoop.hdfs.protocol.BlockType blockType)
  Choose targets for getting additional datanodes for an existing pipeline.
- protected org.apache.hadoop.net.Node BlockPlacementPolicyDefault.chooseTargetInOrder(int numOfReplicas, org.apache.hadoop.net.Node writer, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, int maxNodesPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, boolean newBlock, EnumMap<org.apache.hadoop.fs.StorageType, Integer> storageTypes)
- protected org.apache.hadoop.net.Node BlockPlacementPolicyRackFaultTolerant.chooseTargetInOrder(int numOfReplicas, org.apache.hadoop.net.Node writer, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, int maxNodesPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, boolean newBlock, EnumMap<org.apache.hadoop.fs.StorageType, Integer> storageTypes)
  Choose numOfReplicas in order.
- protected boolean BlockPlacementPolicyWithUpgradeDomain.isGoodDatanode(DatanodeDescriptor node, int maxTargetPerRack, boolean considerLoad, List<DatanodeStorageInfo> results, boolean avoidStaleNodes)
- protected void DatanodeAdminManager.logBlockReplicationInfo(BlockInfo block, BlockCollection bc, DatanodeDescriptor srcNode, NumberReplicas num, Iterable<DatanodeStorageInfo> storages)
- protected Collection<DatanodeStorageInfo> BlockPlacementPolicyDefault.pickupReplicaSet(Collection<DatanodeStorageInfo> moreThanOne, Collection<DatanodeStorageInfo> exactlyOne, Map<String, List<DatanodeStorageInfo>> rackMap)
  Pick up the replica node set for deleting a replica as over-replicated.
- protected Collection<DatanodeStorageInfo> BlockPlacementPolicyRackFaultTolerant.pickupReplicaSet(Collection<DatanodeStorageInfo> moreThanOne, Collection<DatanodeStorageInfo> exactlyOne, Map<String, List<DatanodeStorageInfo>> rackMap)
- BlockPlacementPolicyWithNodeGroup.pickupReplicaSet(Collection<DatanodeStorageInfo> first, Collection<DatanodeStorageInfo> second, Map<String, List<DatanodeStorageInfo>> rackMap)
  Pick up the replica node set for deleting a replica as over-replicated.
- protected Collection<DatanodeStorageInfo> BlockPlacementPolicyWithUpgradeDomain.pickupReplicaSet(Collection<DatanodeStorageInfo> moreThanOne, Collection<DatanodeStorageInfo> exactlyOne, Map<String, List<DatanodeStorageInfo>> rackMap)

Constructors in org.apache.hadoop.hdfs.server.blockmanagement with parameters of type DatanodeStorageInfo
- BlockUnderConstructionFeature(org.apache.hadoop.hdfs.protocol.Block blk, HdfsServerConstants.BlockUCState state, DatanodeStorageInfo[] targets, org.apache.hadoop.hdfs.protocol.BlockType blockType)
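The chooseTarget/chooseLocalRack/chooseRemoteRack family above shares one shape: walk candidate storages, prefer spreading replicas across racks, and return as many targets as can be found. The following is a minimal, hypothetical sketch of that shape; StorageInfo, chooseTargets, and the two-pass rack-spreading rule are illustrative stand-ins, not Hadoop's real types or algorithm.

```java
import java.util.*;

public class PlacementSketch {
    // Illustrative stand-in for a DatanodeStorageInfo-like record.
    static final class StorageInfo {
        final String id;
        final String rack;
        final long remaining; // bytes free on this storage
        StorageInfo(String id, String rack, long remaining) {
            this.id = id; this.rack = rack; this.remaining = remaining;
        }
    }

    // Pick up to numOfReplicas storages with room for the block, preferring
    // distinct racks first, then filling from any rack ("return as many as we can").
    static List<StorageInfo> chooseTargets(List<StorageInfo> candidates,
                                           int numOfReplicas, long blocksize) {
        List<StorageInfo> results = new ArrayList<>();
        Set<String> usedRacks = new HashSet<>();
        // First pass: at most one replica per rack.
        for (StorageInfo s : candidates) {
            if (results.size() == numOfReplicas) break;
            if (s.remaining >= blocksize && usedRacks.add(s.rack)) results.add(s);
        }
        // Second pass: if distinct racks ran out, reuse racks but not storages.
        for (StorageInfo s : candidates) {
            if (results.size() == numOfReplicas) break;
            if (s.remaining >= blocksize && !results.contains(s)) results.add(s);
        }
        return results;
    }

    public static void main(String[] args) {
        List<StorageInfo> pool = new ArrayList<>(List.of(
            new StorageInfo("s1", "rack1", 1L << 30),
            new StorageInfo("s2", "rack1", 1L << 30),
            new StorageInfo("s3", "rack2", 1L << 30)));
        List<StorageInfo> picked = chooseTargets(pool, 3, 128L << 20);
        System.out.println(picked.size()); // 3: two racks used, one rack reused
    }
}
```

The real policies additionally weigh staleness, load, storage type, and per-rack caps (the excludedNodes, avoidStaleNodes, maxNodesPerRack, and storageTypes parameters above), but the two-pass spread-then-fill structure is the core idea.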
Uses of DatanodeStorageInfo in org.apache.hadoop.hdfs.server.namenode
Methods in org.apache.hadoop.hdfs.server.namenode with parameters of type DatanodeStorageInfo
- void INodeFile.convertLastBlockToUC(BlockInfo lastBlock, DatanodeStorageInfo[] locations)
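convertLastBlockToUC turns a file's completed last block into an under-construction block that carries its expected storage locations. A minimal, hypothetical model of that state change (the Block class, State enum, and method name here are illustrative, not HDFS's real types):

```java
public class UcSketch {
    enum State { COMPLETE, UNDER_CONSTRUCTION }

    // Illustrative stand-in for a block that can gain an under-construction feature.
    static final class Block {
        State state = State.COMPLETE;
        String[] expectedLocations = new String[0];

        // Mirror of the convertLastBlockToUC idea: flip the state and record
        // the target storages chosen by the placement policy.
        void convertToUnderConstruction(String[] targets) {
            this.state = State.UNDER_CONSTRUCTION;
            this.expectedLocations = targets.clone();
        }
    }

    public static void main(String[] args) {
        Block last = new Block();
        last.convertToUnderConstruction(new String[] {"storage-1", "storage-2"});
        System.out.println(last.state);                   // UNDER_CONSTRUCTION
        System.out.println(last.expectedLocations.length); // 2
    }
}
```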
Uses of DatanodeStorageInfo in org.apache.hadoop.hdfs.server.protocol
Constructors in org.apache.hadoop.hdfs.server.protocol with parameters of type DatanodeStorageInfo
- BlockECReconstructionInfo(org.apache.hadoop.hdfs.protocol.ExtendedBlock block, org.apache.hadoop.hdfs.protocol.DatanodeInfo[] sources, DatanodeStorageInfo[] targetDnStorageInfo, byte[] liveBlockIndices, byte[] excludeReconstructedIndices, org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy ecPolicy)
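The BlockECReconstructionInfo constructor bundles everything an erasure-coding reconstruction task needs: the block, the source datanodes holding live internal blocks, the target storages for the rebuilt parts, and which internal-block indices are still live. A minimal, hypothetical analogue (ReconstructionTask and its fields are illustrative stand-ins):

```java
public class EcSketch {
    // Illustrative bundle of the inputs a reconstruction worker would need.
    static final class ReconstructionTask {
        final String block;
        final String[] sources;        // datanodes holding live internal blocks
        final String[] targets;        // storages that receive rebuilt blocks
        final byte[] liveBlockIndices; // which internal-block indices survive

        ReconstructionTask(String block, String[] sources, String[] targets,
                           byte[] liveBlockIndices) {
            this.block = block;
            this.sources = sources;
            this.targets = targets;
            this.liveBlockIndices = liveBlockIndices;
        }

        // Number of internal blocks that must be rebuilt, given the total
        // data + parity count of the erasure-coding layout.
        int missingCount(int totalInternalBlocks) {
            return totalInternalBlocks - liveBlockIndices.length;
        }
    }

    public static void main(String[] args) {
        ReconstructionTask t = new ReconstructionTask("blk_1",
            new String[] {"dn1", "dn2", "dn3"},
            new String[] {"dn4"},
            new byte[] {0, 1, 2});
        System.out.println(t.missingCount(4)); // 1 block to rebuild
    }
}
```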