Class NameNodeRpcServer
java.lang.Object
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer
- All Implemented Interfaces:
org.apache.hadoop.ha.HAServiceProtocol, org.apache.hadoop.hdfs.protocol.ClientProtocol, org.apache.hadoop.hdfs.protocol.ReconfigurationProtocol, DatanodeLifelineProtocol, DatanodeProtocol, NamenodeProtocol, NamenodeProtocols, org.apache.hadoop.ipc.GenericRefreshProtocol, org.apache.hadoop.ipc.RefreshCallQueueProtocol, org.apache.hadoop.security.authorize.RefreshAuthorizationPolicyProtocol, org.apache.hadoop.security.RefreshUserMappingsProtocol, org.apache.hadoop.tools.GetUserMappingsProtocol
@Private
@VisibleForTesting
public class NameNodeRpcServer
extends Object
implements NamenodeProtocols
This class is responsible for handling all of the RPC calls to the NameNode.
It is created, started, and stopped by NameNode.
-
Nested Class Summary
Nested classes/interfaces inherited from interface org.apache.hadoop.ha.HAServiceProtocol
org.apache.hadoop.ha.HAServiceProtocol.HAServiceState, org.apache.hadoop.ha.HAServiceProtocol.RequestSource, org.apache.hadoop.ha.HAServiceProtocol.StateChangeRequestInfo
-
Field Summary
Fields
- protected final InetSocketAddress clientRpcAddress
- protected final org.apache.hadoop.ipc.RPC.Server clientRpcServer - The RPC server that listens to requests from clients
- protected final FSNamesystem namesystem
- protected final NameNode nn

Fields inherited from interface org.apache.hadoop.hdfs.protocol.ClientProtocol
GET_STATS_BYTES_IN_FUTURE_BLOCKS_IDX, GET_STATS_CAPACITY_IDX, GET_STATS_CORRUPT_BLOCKS_IDX, GET_STATS_LOW_REDUNDANCY_IDX, GET_STATS_MISSING_BLOCKS_IDX, GET_STATS_MISSING_REPL_ONE_BLOCKS_IDX, GET_STATS_PENDING_DELETION_BLOCKS_IDX, GET_STATS_REMAINING_IDX, GET_STATS_UNDER_REPLICATED_IDX, GET_STATS_USED_IDX, STATS_ARRAY_LENGTH, versionID
Fields inherited from interface org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol
DISK_ERROR, DNA_ACCESSKEYUPDATE, DNA_BALANCERBANDWIDTHUPDATE, DNA_BLOCK_STORAGE_MOVEMENT, DNA_CACHE, DNA_DROP_SPS_WORK_COMMAND, DNA_ERASURE_CODING_RECONSTRUCTION, DNA_FINALIZE, DNA_INVALIDATE, DNA_RECOVERBLOCK, DNA_REGISTER, DNA_SHUTDOWN, DNA_TRANSFER, DNA_UNCACHE, DNA_UNKNOWN, FATAL_DISK_ERROR, INVALID_BLOCK, NOTIFY, versionID
Fields inherited from interface org.apache.hadoop.ipc.GenericRefreshProtocol
versionID
Fields inherited from interface org.apache.hadoop.tools.GetUserMappingsProtocol
versionID
Fields inherited from interface org.apache.hadoop.ha.HAServiceProtocol
versionID
Fields inherited from interface org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol
ACT_CHECKPOINT, ACT_SHUTDOWN, ACT_UNKNOWN, FATAL, NOTIFY, versionID
Fields inherited from interface org.apache.hadoop.hdfs.protocol.ReconfigurationProtocol
VERSIONID
Fields inherited from interface org.apache.hadoop.security.authorize.RefreshAuthorizationPolicyProtocol
versionID
Fields inherited from interface org.apache.hadoop.ipc.RefreshCallQueueProtocol
versionID
Fields inherited from interface org.apache.hadoop.security.RefreshUserMappingsProtocol
versionID
-
Constructor Summary
Constructors -
Method Summary
- void abandonBlock(org.apache.hadoop.hdfs.protocol.ExtendedBlock b, long fileId, String src, String holder) - The client needs to give up on the block.
- org.apache.hadoop.hdfs.protocol.LocatedBlock addBlock(String src, String clientName, org.apache.hadoop.hdfs.protocol.ExtendedBlock previous, org.apache.hadoop.hdfs.protocol.DatanodeInfo[] excludedNodes, long fileId, String[] favoredNodes, EnumSet<org.apache.hadoop.hdfs.AddBlockFlag> addBlockFlags)
- long addCacheDirective(org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo path, EnumSet<org.apache.hadoop.fs.CacheFlag> flags)
- void addCachePool(org.apache.hadoop.hdfs.protocol.CachePoolInfo info)
- org.apache.hadoop.hdfs.protocol.AddErasureCodingPolicyResponse[] addErasureCodingPolicies(org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy[] policies)
- void allowSnapshot(String snapshotRoot)
- org.apache.hadoop.hdfs.protocol.LastBlockWithStatus append(String src, String clientName, org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag)
- void blockReceivedAndDeleted(DatanodeRegistration nodeReg, String poolId, StorageReceivedDeletedBlocks[] receivedAndDeletedBlocks) - blockReceivedAndDeleted() allows the DataNode to tell the NameNode about recently-received and -deleted block data.
- blockReport(DatanodeRegistration nodeReg, String poolId, StorageBlockReport[] reports, BlockReportContext context) - blockReport() tells the NameNode about all the locally-stored blocks.
- cacheReport(DatanodeRegistration nodeReg, String poolId, List<Long> blockIds) - Communicates the complete list of locally cached blocks to the NameNode.
- void cancelDelegationToken(org.apache.hadoop.security.token.Token<org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier> token)
- void checkAccess(String path, org.apache.hadoop.fs.permission.FsAction mode)
- void commitBlockSynchronization(org.apache.hadoop.hdfs.protocol.ExtendedBlock block, long newgenerationstamp, long newlength, boolean closeFile, boolean deleteblock, org.apache.hadoop.hdfs.protocol.DatanodeID[] newtargets, String[] newtargetstorages) - Commit block synchronization in lease recovery.
- boolean complete(String src, String clientName, org.apache.hadoop.hdfs.protocol.ExtendedBlock last, long fileId)
- org.apache.hadoop.hdfs.protocol.HdfsFileStatus create(String src, org.apache.hadoop.fs.permission.FsPermission masked, String clientName, org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag, boolean createParent, short replication, long blockSize, org.apache.hadoop.crypto.CryptoProtocolVersion[] supportedVersions, String ecPolicyName, String storagePolicy)
- void createEncryptionZone(String src, String keyName)
- createSnapshot(String snapshotRoot, String snapshotName)
- void createSymlink(String target, String link, org.apache.hadoop.fs.permission.FsPermission dirPerms, boolean createParent)
- void deleteSnapshot(String snapshotRoot, String snapshotName)
- void disableErasureCodingPolicy(String ecPolicyName)
- void disallowSnapshot(String snapshot)
- void enableErasureCodingPolicy(String ecPolicyName)
- void endCheckpoint(NamenodeRegistration registration, CheckpointSignature sig) - A request to the active name-node to finalize a previously started checkpoint.
- void errorReport(DatanodeRegistration nodeReg, int errorCode, String msg) - errorReport() tells the NameNode about something that has gone awry.
- void errorReport(NamenodeRegistration registration, int errorCode, String msg) - Report to the active name-node that an error occurred on a subordinate node.
- org.apache.hadoop.fs.permission.AclStatus getAclStatus(String src)
- org.apache.hadoop.hdfs.protocol.LocatedBlock getAdditionalDatanode(String src, long fileId, org.apache.hadoop.hdfs.protocol.ExtendedBlock blk, org.apache.hadoop.hdfs.protocol.DatanodeInfo[] existings, String[] existingStorageIDs, org.apache.hadoop.hdfs.protocol.DatanodeInfo[] excludes, int numAdditionalNodes, String clientName)
- org.apache.hadoop.hdfs.protocol.BatchedDirectoryListing getBatchedListing(String[] srcs, byte[] startAfter, boolean needLocation)
- getBlockKeys() - Get the current block keys.
- org.apache.hadoop.hdfs.protocol.LocatedBlocks getBlockLocations(String src, long offset, long length)
- getBlocks(org.apache.hadoop.hdfs.protocol.DatanodeInfo datanode, long size, long minBlockSize, long timeInterval, org.apache.hadoop.fs.StorageType storageType) - Get a list of blocks belonging to datanode whose total size equals size.
- org.apache.hadoop.ipc.RPC.Server getClientRpcServer() - Allow access to the client RPC server for testing.
- org.apache.hadoop.fs.ContentSummary getContentSummary(String path)
- org.apache.hadoop.hdfs.protocol.DatanodeInfo[] getDatanodeReport(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type)
- org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport[] getDatanodeStorageReport(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type)
- org.apache.hadoop.security.token.Token<org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier> getDelegationToken(org.apache.hadoop.io.Text renewer)
- org.apache.hadoop.hdfs.protocol.ECBlockGroupStats getECBlockGroupStats()
- org.apache.hadoop.hdfs.protocol.ECTopologyVerifierResult getECTopologyResultForPolicies(String... policyNames)
- getEditLogManifest(long sinceTxId) - Return a structure containing details about all edit logs available to be fetched from the NameNode.
- org.apache.hadoop.hdfs.inotify.EventBatchList getEditsFromTxid(long txid)
- org.apache.hadoop.fs.Path getEnclosingRoot(String src)
- org.apache.hadoop.hdfs.protocol.EncryptionZone getEZForPath(String src)
- org.apache.hadoop.hdfs.protocol.HdfsFileStatus getFileInfo(String src)
- org.apache.hadoop.hdfs.protocol.HdfsFileStatus getFileLinkInfo(String src)
- String[] getGroupsForUser(String user)
- org.apache.hadoop.ha.HAServiceProtocol.HAServiceState getHAServiceState()
- getLinkTarget(String path)
- org.apache.hadoop.hdfs.protocol.DirectoryListing getListing(String src, byte[] startAfter, boolean needLocation)
- org.apache.hadoop.hdfs.protocol.HdfsLocatedFileStatus getLocatedFileInfo(String src, boolean needBlockToken)
- long getMostRecentCheckpointTxId() - Get the transaction ID of the most recent checkpoint.
- long getMostRecentNameNodeFileTxId() - Get the transaction ID of the most recent checkpoint for the given NameNodeFile.
- long getPreferredBlockSize(String filename)
- org.apache.hadoop.fs.QuotaUsage getQuotaUsage(String path)
- org.apache.hadoop.hdfs.protocol.ReplicatedBlockStats getReplicatedBlockStats()
- org.apache.hadoop.fs.FsServerDefaults getServerDefaults()
- org.apache.hadoop.hdfs.protocol.SnapshotDiffReport getSnapshotDiffReport(String snapshotRoot, String earlierSnapshotName, String laterSnapshotName)
- org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing getSnapshotDiffReportListing(String snapshotRoot, String earlierSnapshotName, String laterSnapshotName, byte[] startPath, int index)
- org.apache.hadoop.hdfs.protocol.SnapshotStatus[] getSnapshotListing(String snapshotRoot)
- long[] getStats()
- org.apache.hadoop.hdfs.protocol.BlockStoragePolicy[] getStoragePolicies()
- org.apache.hadoop.hdfs.protocol.BlockStoragePolicy getStoragePolicy(String path)
- long getTransactionID()
- boolean isFileClosed(String src)
- boolean isRollingUpgrade() - Return whether a rolling upgrade is in progress on the Namenode (true) or not (false).
- org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.CacheDirectiveEntry> listCacheDirectives(long prevId, org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo filter)
- org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.CachePoolEntry> listCachePools(String prevKey)
- org.apache.hadoop.hdfs.protocol.CorruptFileBlocks listCorruptFileBlocks(String path, String cookie)
- org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.EncryptionZone> listEncryptionZones(long prevId)
- org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.OpenFileEntry> listOpenFiles(long prevId) - Deprecated.
- org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.OpenFileEntry> listOpenFiles(long prevId, EnumSet<org.apache.hadoop.hdfs.protocol.OpenFilesIterator.OpenFilesType> openFilesTypes, String path)
- org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.ZoneReencryptionStatus> listReencryptionStatus(long prevId)
- List<org.apache.hadoop.fs.XAttr> listXAttrs(String src)
- boolean mkdirs(String src, org.apache.hadoop.fs.permission.FsPermission masked, boolean createParent)
- void modifyAclEntries(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec)
- void modifyCacheDirective(org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo directive, EnumSet<org.apache.hadoop.fs.CacheFlag> flags)
- void modifyCachePool(org.apache.hadoop.hdfs.protocol.CachePoolInfo info)
- void msync()
- boolean recoverLease(String src, String clientName)
- void reencryptEncryptionZone(String zone, org.apache.hadoop.hdfs.protocol.HdfsConstants.ReencryptAction action)
- void registerDatanode(DatanodeRegistration nodeReg) - Register Datanode.
- registerSubordinateNamenode(NamenodeRegistration registration) - Register a subordinate name-node like the backup node.
- void removeAclEntries(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec)
- void removeCacheDirective(long id)
- void removeCachePool(String cachePoolName)
- void removeDefaultAcl(String src)
- void removeErasureCodingPolicy(String ecPolicyName)
- void removeXAttr(String src, org.apache.hadoop.fs.XAttr xAttr)
- boolean rename(...) - Deprecated.
- void renameSnapshot(String snapshotRoot, String snapshotOldName, String snapshotNewName)
- long renewDelegationToken(org.apache.hadoop.security.token.Token<org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier> token)
- void renewLease(String clientName, List<String> namespaces)
- void reportBadBlocks(org.apache.hadoop.hdfs.protocol.LocatedBlock[] blocks) - The client has detected an error on the specified located blocks and is reporting them to the server.
- rollEditLog() - Closes the current edit log and opens a new one.
- org.apache.hadoop.hdfs.protocol.RollingUpgradeInfo rollingUpgrade(org.apache.hadoop.hdfs.protocol.HdfsConstants.RollingUpgradeAction action)
- boolean saveNamespace(long timeWindow, long txGap)
- sendHeartbeat(DatanodeRegistration nodeReg, org.apache.hadoop.hdfs.server.protocol.StorageReport[] report, long dnCacheCapacity, long dnCacheUsed, int xmitsInProgress, int xceiverCount, int failedVolumes, VolumeFailureSummary volumeFailureSummary, boolean requestFullBlockReportLease, org.apache.hadoop.hdfs.server.protocol.SlowPeerReports slowPeers, org.apache.hadoop.hdfs.server.protocol.SlowDiskReports slowDisks) - sendHeartbeat() tells the NameNode that the DataNode is still alive and well.
- void sendLifeline(DatanodeRegistration nodeReg, org.apache.hadoop.hdfs.server.protocol.StorageReport[] report, long dnCacheCapacity, long dnCacheUsed, int xmitsInProgress, int xceiverCount, int failedVolumes, VolumeFailureSummary volumeFailureSummary)
- void setBalancerBandwidth(long bandwidth) - Tell all datanodes to use a new, non-persistent bandwidth value for dfs.datanode.balance.bandwidthPerSec.
- void setErasureCodingPolicy(String src, String ecPolicyName)
- void setPermission(String src, org.apache.hadoop.fs.permission.FsPermission permissions)
- void setQuota(String path, long namespaceQuota, long storagespaceQuota, org.apache.hadoop.fs.StorageType type)
- boolean setReplication(String src, short replication)
- boolean setSafeMode(org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction action, boolean isChecked)
- void setStoragePolicy(String src, String policyName)
- void setXAttr(String src, org.apache.hadoop.fs.XAttr xAttr, EnumSet<org.apache.hadoop.fs.XAttrSetFlag> flag)
- startCheckpoint(NamenodeRegistration registration) - A request to the active name-node to start a checkpoint.
- void transitionToActive(org.apache.hadoop.ha.HAServiceProtocol.StateChangeRequestInfo req)
- void transitionToObserver(org.apache.hadoop.ha.HAServiceProtocol.StateChangeRequestInfo req)
- void transitionToStandby(org.apache.hadoop.ha.HAServiceProtocol.StateChangeRequestInfo req)
- void unsetStoragePolicy(String src)
- org.apache.hadoop.hdfs.protocol.LocatedBlock updateBlockForPipeline(org.apache.hadoop.hdfs.protocol.ExtendedBlock block, String clientName)
- void updatePipeline(String clientName, org.apache.hadoop.hdfs.protocol.ExtendedBlock oldBlock, org.apache.hadoop.hdfs.protocol.ExtendedBlock newBlock, org.apache.hadoop.hdfs.protocol.DatanodeID[] newNodes, String[] newStorageIDs)
- versionRequest() - Request name-node version and storage information.
-
Field Details
-
namesystem
-
nn
-
clientRpcServer
protected final org.apache.hadoop.ipc.RPC.Server clientRpcServer
The RPC server that listens to requests from clients
-
clientRpcAddress
-
-
Constructor Details
-
NameNodeRpcServer
- Throws:
IOException
-
-
Method Details
-
getClientRpcServer
@VisibleForTesting
public org.apache.hadoop.ipc.RPC.Server getClientRpcServer()
Allow access to the client RPC server for testing
-
getRpcAddress
-
getAuxiliaryRpcAddresses
-
getBlocks
public BlocksWithLocations getBlocks(org.apache.hadoop.hdfs.protocol.DatanodeInfo datanode, long size, long minBlockSize, long timeInterval, org.apache.hadoop.fs.StorageType storageType) throws IOException
Description copied from interface: NamenodeProtocol
Get a list of blocks belonging to datanode whose total size equals size.
- Specified by:
getBlocks in interface NamenodeProtocol
- Parameters:
datanode - a data node
size - requested size
minBlockSize - each block should be of this minimum block size
timeInterval - prefer to get blocks which belong to the cold files accessed before the time interval
storageType - the given storage type StorageType
- Returns:
- BlocksWithLocations, a list of blocks & their locations
- Throws:
IOException - if size is less than or equal to 0, or the datanode does not exist
- See Also:
-
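The getBlocks() contract above (used by the Balancer) can be sketched in plain Java: accumulate blocks from one datanode until the requested total size is reached, skip blocks below minBlockSize, and reject non-positive size requests. This is an illustrative stand-in only; BlockPicker and its use of plain long block lengths are not Hadoop types.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Illustrative sketch of the getBlocks() selection contract.
 * Block lengths stand in for BlockWithLocations entries.
 */
public class BlockPicker {
    public static List<Long> pickBlocks(List<Long> datanodeBlocks, long size, long minBlockSize) {
        if (size <= 0) {
            // mirrors the documented IOException for size <= 0
            throw new IllegalArgumentException("Unexpected getBlocks request size: " + size);
        }
        List<Long> picked = new ArrayList<>();
        long total = 0;
        for (long len : datanodeBlocks) {
            if (len < minBlockSize) {
                continue; // below the minimum block size, not worth moving
            }
            picked.add(len);
            total += len;
            if (total >= size) {
                break; // reached the requested total size
            }
        }
        return picked;
    }
}
```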
getBlockKeys
Description copied from interface: NamenodeProtocol
Get the current block keys
- Specified by:
getBlockKeys in interface NamenodeProtocol
- Returns:
- ExportedBlockKeys containing current block keys
- Throws:
IOException
-
errorReport
public void errorReport(NamenodeRegistration registration, int errorCode, String msg) throws IOException
Description copied from interface: NamenodeProtocol
Report to the active name-node that an error occurred on a subordinate node. Depending on the error code, the active node may decide to unregister the reporting node.
- Specified by:
errorReport in interface NamenodeProtocol
- Parameters:
registration - requesting node
errorCode - indicates the error
msg - free text description of the error
- Throws:
IOException
-
registerSubordinateNamenode
public NamenodeRegistration registerSubordinateNamenode(NamenodeRegistration registration) throws IOException
Description copied from interface: NamenodeProtocol
Register a subordinate name-node like the backup node.
- Specified by:
registerSubordinateNamenode in interface NamenodeProtocol
- Returns:
NamenodeRegistration of the node, which this node has just registered with.
- Throws:
IOException
-
startCheckpoint
Description copied from interface: NamenodeProtocol
A request to the active name-node to start a checkpoint. The name-node should decide whether to admit or reject the request. The name-node also decides what should be done with the backup node image before and after the checkpoint.
- Specified by:
startCheckpoint in interface NamenodeProtocol
- Parameters:
registration - the requesting node
- Returns:
CheckpointCommand if checkpoint is allowed.
- Throws:
IOException
- See Also:
-
endCheckpoint
public void endCheckpoint(NamenodeRegistration registration, CheckpointSignature sig) throws IOException
Description copied from interface: NamenodeProtocol
A request to the active name-node to finalize a previously started checkpoint.
- Specified by:
endCheckpoint in interface NamenodeProtocol
- Parameters:
registration - the requesting node
sig - CheckpointSignature which identifies the checkpoint.
- Throws:
IOException
-
getDelegationToken
public org.apache.hadoop.security.token.Token<org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier> getDelegationToken(org.apache.hadoop.io.Text renewer) throws IOException - Specified by:
getDelegationToken in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
renewDelegationToken
public long renewDelegationToken(org.apache.hadoop.security.token.Token<org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier> token) throws org.apache.hadoop.security.token.SecretManager.InvalidToken, IOException - Specified by:
renewDelegationToken in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
org.apache.hadoop.security.token.SecretManager.InvalidToken
IOException
-
cancelDelegationToken
public void cancelDelegationToken(org.apache.hadoop.security.token.Token<org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier> token) throws IOException - Specified by:
cancelDelegationToken in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
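The three calls above (getDelegationToken, renewDelegationToken, cancelDelegationToken) form an issue/renew/cancel lifecycle: a token is issued with an expiry, renewal extends it and returns the new expiry (an invalid token is rejected, cf. SecretManager.InvalidToken), and cancellation invalidates it. A minimal sketch of that lifecycle in plain Java; TokenTable and its string tokens are illustrative stand-ins, not Hadoop types.

```java
import java.util.HashMap;
import java.util.Map;

/** Illustrative sketch of the delegation-token lifecycle. */
public class TokenTable {
    private final Map<String, Long> expiryByToken = new HashMap<>();
    private final long lifetimeMs;

    public TokenTable(long lifetimeMs) { this.lifetimeMs = lifetimeMs; }

    /** Issue a token valid for lifetimeMs from now. */
    public String getDelegationToken(String renewer, long now) {
        String token = "token-for-" + renewer;
        expiryByToken.put(token, now + lifetimeMs);
        return token;
    }

    /** Extend the token and return the new expiry time, mirroring the long result of renewDelegationToken(). */
    public long renewDelegationToken(String token, long now) {
        if (!expiryByToken.containsKey(token)) {
            throw new IllegalArgumentException("Invalid token: " + token); // cf. SecretManager.InvalidToken
        }
        long newExpiry = now + lifetimeMs;
        expiryByToken.put(token, newExpiry);
        return newExpiry;
    }

    /** Invalidate the token immediately. */
    public void cancelDelegationToken(String token) {
        expiryByToken.remove(token);
    }

    public boolean isValid(String token, long now) {
        Long expiry = expiryByToken.get(token);
        return expiry != null && now < expiry;
    }
}
```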
getBlockLocations
public org.apache.hadoop.hdfs.protocol.LocatedBlocks getBlockLocations(String src, long offset, long length) throws IOException - Specified by:
getBlockLocations in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
getServerDefaults
- Specified by:
getServerDefaults in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
create
public org.apache.hadoop.hdfs.protocol.HdfsFileStatus create(String src, org.apache.hadoop.fs.permission.FsPermission masked, String clientName, org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag, boolean createParent, short replication, long blockSize, org.apache.hadoop.crypto.CryptoProtocolVersion[] supportedVersions, String ecPolicyName, String storagePolicy) throws IOException - Specified by:
create in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
append
public org.apache.hadoop.hdfs.protocol.LastBlockWithStatus append(String src, String clientName, org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag) throws IOException - Specified by:
append in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
recoverLease
- Specified by:
recoverLease in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
setReplication
- Specified by:
setReplication in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
unsetStoragePolicy
- Specified by:
unsetStoragePolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
setStoragePolicy
- Specified by:
setStoragePolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
getStoragePolicy
public org.apache.hadoop.hdfs.protocol.BlockStoragePolicy getStoragePolicy(String path) throws IOException - Specified by:
getStoragePolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
getStoragePolicies
- Specified by:
getStoragePolicies in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
setPermission
public void setPermission(String src, org.apache.hadoop.fs.permission.FsPermission permissions) throws IOException - Specified by:
setPermission in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
setOwner
- Specified by:
setOwner in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
addBlock
public org.apache.hadoop.hdfs.protocol.LocatedBlock addBlock(String src, String clientName, org.apache.hadoop.hdfs.protocol.ExtendedBlock previous, org.apache.hadoop.hdfs.protocol.DatanodeInfo[] excludedNodes, long fileId, String[] favoredNodes, EnumSet<org.apache.hadoop.hdfs.AddBlockFlag> addBlockFlags) throws IOException - Specified by:
addBlock in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
getAdditionalDatanode
public org.apache.hadoop.hdfs.protocol.LocatedBlock getAdditionalDatanode(String src, long fileId, org.apache.hadoop.hdfs.protocol.ExtendedBlock blk, org.apache.hadoop.hdfs.protocol.DatanodeInfo[] existings, String[] existingStorageIDs, org.apache.hadoop.hdfs.protocol.DatanodeInfo[] excludes, int numAdditionalNodes, String clientName) throws IOException - Specified by:
getAdditionalDatanode in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
abandonBlock
public void abandonBlock(org.apache.hadoop.hdfs.protocol.ExtendedBlock b, long fileId, String src, String holder) throws IOException
The client needs to give up on the block.
- Specified by:
abandonBlock in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
complete
public boolean complete(String src, String clientName, org.apache.hadoop.hdfs.protocol.ExtendedBlock last, long fileId) throws IOException - Specified by:
complete in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
reportBadBlocks
public void reportBadBlocks(org.apache.hadoop.hdfs.protocol.LocatedBlock[] blocks) throws IOException
The client has detected an error on the specified located blocks and is reporting them to the server. For now, the namenode will mark the blocks as corrupt. In the future we might check whether the blocks are actually corrupt.
- Specified by:
reportBadBlocks in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Specified by:
reportBadBlocks in interface DatanodeProtocol
- Throws:
IOException
-
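The documented reportBadBlocks() behavior (record the reported blocks as corrupt without re-verifying them) can be sketched in plain Java. CorruptBlockRegistry and the use of plain block IDs in place of LocatedBlock are illustrative stand-ins, not Hadoop types.

```java
import java.util.HashSet;
import java.util.Set;

/** Illustrative sketch of the reportBadBlocks() contract. */
public class CorruptBlockRegistry {
    private final Set<Long> corrupt = new HashSet<>();

    public void reportBadBlocks(long[] blockIds) {
        for (long id : blockIds) {
            corrupt.add(id); // mark as corrupt without further checking, as documented
        }
    }

    public boolean isCorrupt(long blockId) {
        return corrupt.contains(blockId);
    }
}
```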
updateBlockForPipeline
public org.apache.hadoop.hdfs.protocol.LocatedBlock updateBlockForPipeline(org.apache.hadoop.hdfs.protocol.ExtendedBlock block, String clientName) throws IOException - Specified by:
updateBlockForPipeline in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
updatePipeline
public void updatePipeline(String clientName, org.apache.hadoop.hdfs.protocol.ExtendedBlock oldBlock, org.apache.hadoop.hdfs.protocol.ExtendedBlock newBlock, org.apache.hadoop.hdfs.protocol.DatanodeID[] newNodes, String[] newStorageIDs) throws IOException - Specified by:
updatePipeline in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
commitBlockSynchronization
public void commitBlockSynchronization(org.apache.hadoop.hdfs.protocol.ExtendedBlock block, long newgenerationstamp, long newlength, boolean closeFile, boolean deleteblock, org.apache.hadoop.hdfs.protocol.DatanodeID[] newtargets, String[] newtargetstorages) throws IOException
Description copied from interface: DatanodeProtocol
Commit block synchronization in lease recovery.
- Specified by:
commitBlockSynchronization in interface DatanodeProtocol
- Throws:
IOException
-
getPreferredBlockSize
- Specified by:
getPreferredBlockSize in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
rename
Deprecated.
- Specified by:
rename in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
concat
- Specified by:
concat in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
rename2
public void rename2(String src, String dst, org.apache.hadoop.fs.Options.Rename... options) throws IOException - Specified by:
rename2 in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
truncate
- Specified by:
truncate in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
delete
- Specified by:
delete in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
mkdirs
public boolean mkdirs(String src, org.apache.hadoop.fs.permission.FsPermission masked, boolean createParent) throws IOException - Specified by:
mkdirs in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
renewLease
- Specified by:
renewLease in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
getListing
public org.apache.hadoop.hdfs.protocol.DirectoryListing getListing(String src, byte[] startAfter, boolean needLocation) throws IOException - Specified by:
getListing in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
getBatchedListing
public org.apache.hadoop.hdfs.protocol.BatchedDirectoryListing getBatchedListing(String[] srcs, byte[] startAfter, boolean needLocation) throws IOException - Specified by:
getBatchedListing in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
getFileInfo
- Specified by:
getFileInfo in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
getLocatedFileInfo
public org.apache.hadoop.hdfs.protocol.HdfsLocatedFileStatus getLocatedFileInfo(String src, boolean needBlockToken) throws IOException - Specified by:
getLocatedFileInfo in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
isFileClosed
- Specified by:
isFileClosed in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
getFileLinkInfo
public org.apache.hadoop.hdfs.protocol.HdfsFileStatus getFileLinkInfo(String src) throws IOException - Specified by:
getFileLinkInfo in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
getStats
- Specified by:
getStats in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
getReplicatedBlockStats
public org.apache.hadoop.hdfs.protocol.ReplicatedBlockStats getReplicatedBlockStats() throws IOException
- Specified by:
getReplicatedBlockStats in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
getECBlockGroupStats
- Specified by:
getECBlockGroupStats in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
getDatanodeReport
public org.apache.hadoop.hdfs.protocol.DatanodeInfo[] getDatanodeReport(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type) throws IOException - Specified by:
getDatanodeReport in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
getDatanodeStorageReport
public org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport[] getDatanodeStorageReport(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type) throws IOException - Specified by:
getDatanodeStorageReport in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
setSafeMode
public boolean setSafeMode(org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction action, boolean isChecked) throws IOException - Specified by:
setSafeMode in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
restoreFailedStorage
- Specified by:
restoreFailedStorage in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
saveNamespace
- Specified by:
saveNamespace in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
rollEdits
- Specified by:
rollEdits in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
org.apache.hadoop.security.AccessControlException
IOException
-
refreshNodes
- Specified by:
refreshNodes in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
getTransactionID
- Specified by:
getTransactionID in interface NamenodeProtocol
- Returns:
- The most recent transaction ID that has been synced to persistent storage, or applied from persistent storage in the case of a non-active node.
- Throws:
IOException
-
getMostRecentCheckpointTxId
Description copied from interface: NamenodeProtocol
Get the transaction ID of the most recent checkpoint.
- Specified by:
getMostRecentCheckpointTxId in interface NamenodeProtocol
- Throws:
IOException
-
getMostRecentNameNodeFileTxId
Description copied from interface: NamenodeProtocol
Get the transaction ID of the most recent checkpoint for the given NameNodeFile.
- Specified by:
getMostRecentNameNodeFileTxId in interface NamenodeProtocol
- Throws:
IOException
-
rollEditLog
Description copied from interface: NamenodeProtocol
Closes the current edit log and opens a new one. The call fails if the file system is in SafeMode.
- Specified by:
rollEditLog in interface NamenodeProtocol
- Returns:
- a unique token to identify this transaction.
- Throws:
IOException
-
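The rollEditLog() contract above (finalize the current segment, open a new one, refuse while in safe mode) can be sketched in plain Java. EditLogRoller is an illustrative stand-in, and plain transaction IDs stand in for the CheckpointSignature-style token the real call returns.

```java
/** Illustrative sketch of rollEditLog(): roll segments, refuse in safe mode. */
public class EditLogRoller {
    private boolean safeMode;
    private long segmentStartTxId = 1;
    private long lastTxId = 0;

    public void setSafeMode(boolean on) { this.safeMode = on; }

    /** Record one edit transaction. */
    public void logEdit() { lastTxId++; }

    /** Returns the start txid of the new segment, a unique token for this roll. */
    public long rollEditLog() {
        if (safeMode) {
            // mirrors the documented failure when the file system is in SafeMode
            throw new IllegalStateException("Cannot roll edit log in safe mode");
        }
        segmentStartTxId = lastTxId + 1; // finalize the old segment, open a new one
        return segmentStartTxId;
    }
}
```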
getEditLogManifest
Description copied from interface: NamenodeProtocol
Return a structure containing details about all edit logs available to be fetched from the NameNode.
- Specified by:
getEditLogManifest in interface NamenodeProtocol
- Parameters:
sinceTxId - return only logs that contain transactions >= sinceTxId
- Throws:
IOException
-
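The sinceTxId filter documented above amounts to: from the available edit log segments, keep every segment that still contains transactions >= sinceTxId. A minimal sketch in plain Java; long[] {startTxId, endTxId} pairs are illustrative stand-ins for the RemoteEditLog entries in the real manifest.

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative sketch of the getEditLogManifest(sinceTxId) filter. */
public class EditLogManifest {
    /** Each segment is {startTxId, endTxId}; keep those whose range reaches sinceTxId. */
    public static List<long[]> manifest(List<long[]> segments, long sinceTxId) {
        List<long[]> out = new ArrayList<>();
        for (long[] seg : segments) {
            if (seg[1] >= sinceTxId) { // segment still contains txids the caller needs
                out.add(seg);
            }
        }
        return out;
    }
}
```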
isUpgradeFinalized
- Specified by:
isUpgradeFinalized in interface NamenodeProtocol
- Returns:
- Whether the NameNode is still in upgrade state (false) or the upgrade is finalized (true)
- Throws:
IOException
-
isRollingUpgrade
Description copied from interface: NamenodeProtocol
Return whether a rolling upgrade is in progress on the Namenode (true) or not (false).
- Specified by:
isRollingUpgrade in interface NamenodeProtocol
- Returns:
- Throws:
IOException
-
finalizeUpgrade
- Specified by:
finalizeUpgrade in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
upgradeStatus
- Specified by:
upgradeStatus in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
rollingUpgrade
public org.apache.hadoop.hdfs.protocol.RollingUpgradeInfo rollingUpgrade(org.apache.hadoop.hdfs.protocol.HdfsConstants.RollingUpgradeAction action) throws IOException
- Specified by:
rollingUpgrade in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
metaSave
- Specified by:
metaSave in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
listOpenFiles
@Deprecated
public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.OpenFileEntry> listOpenFiles(long prevId) throws IOException
Deprecated.
- Specified by:
listOpenFiles in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
listOpenFiles
public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.OpenFileEntry> listOpenFiles(long prevId, EnumSet<org.apache.hadoop.hdfs.protocol.OpenFilesIterator.OpenFilesType> openFilesTypes, String path) throws IOException
- Specified by:
listOpenFiles in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
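listOpenFiles is a batched call: each response carries a bounded number of entries, and the caller feeds the last seen id back in as prevId to fetch the next batch. A toy version of that cursor loop (the classes here are stand-ins for BatchedRemoteIterator.BatchedEntries, not the Hadoop types):

```java
import java.util.ArrayList;
import java.util.List;

// Toy cursor-style pagination in the shape of listOpenFiles(prevId, ...):
// the server returns entries with id > prevId, and the client resumes
// from the largest id it has seen so far.
public class BatchedListing {
    static final int BATCH_SIZE = 2;
    private final List<Long> ids;  // stand-in for open-file inode ids, sorted

    public BatchedListing(List<Long> ids) { this.ids = ids; }

    /** Server side: return up to BATCH_SIZE ids strictly greater than prevId. */
    public List<Long> batchAfter(long prevId) {
        List<Long> out = new ArrayList<>();
        for (long id : ids) {
            if (id > prevId && out.size() < BATCH_SIZE) out.add(id);
        }
        return out;
    }

    /** Client side: drain all batches by feeding back the last id seen. */
    public List<Long> listAll() {
        List<Long> all = new ArrayList<>();
        long prevId = 0;
        List<Long> batch;
        while (!(batch = batchAfter(prevId)).isEmpty()) {
            all.addAll(batch);
            prevId = batch.get(batch.size() - 1);
        }
        return all;
    }
}
```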
msync
- Specified by:
msync in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
getHAServiceState
- Specified by:
getHAServiceState in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
listCorruptFileBlocks
public org.apache.hadoop.hdfs.protocol.CorruptFileBlocks listCorruptFileBlocks(String path, String cookie) throws IOException
- Specified by:
listCorruptFileBlocks in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
setBalancerBandwidth
Tell all datanodes to use a new, non-persistent bandwidth value for dfs.datanode.balance.bandwidthPerSec.
- Specified by:
setBalancerBandwidth in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Parameters:
bandwidth - Balancer bandwidth in bytes per second for all datanodes.
- Throws:
IOException
-
getContentSummary
- Specified by:
getContentSummary in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
getQuotaUsage
- Specified by:
getQuotaUsage in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
satisfyStoragePolicy
- Specified by:
satisfyStoragePolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
getSlowDatanodeReport
- Specified by:
getSlowDatanodeReport in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
setQuota
public void setQuota(String path, long namespaceQuota, long storagespaceQuota, org.apache.hadoop.fs.StorageType type) throws IOException
- Specified by:
setQuota in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
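setQuota carries two limits: a namespace quota (a count of file and directory names) and a storagespace quota (bytes, scoped to one storage type when type is non-null). A hedged sketch of the enforcement arithmetic, illustrative only and not FSDirectory's actual check:

```java
// Illustrative quota check: namespace quota limits the number of names
// under a path, storagespace quota limits the bytes consumed (replication
// included). A negative quota here means "unlimited" in this sketch.
public class QuotaCheck {
    public static boolean withinQuota(long namespaceQuota, long usedNames,
                                      long storagespaceQuota, long usedBytes) {
        boolean nsOk = namespaceQuota < 0 || usedNames <= namespaceQuota;
        boolean ssOk = storagespaceQuota < 0 || usedBytes <= storagespaceQuota;
        return nsOk && ssOk;
    }
}
```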
fsync
public void fsync(String src, long fileId, String clientName, long lastBlockLength) throws IOException
- Specified by:
fsync in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
setTimes
- Specified by:
setTimes in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
createSymlink
public void createSymlink(String target, String link, org.apache.hadoop.fs.permission.FsPermission dirPerms, boolean createParent) throws IOException
- Specified by:
createSymlink in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
getLinkTarget
- Specified by:
getLinkTarget in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
registerDatanode
Description copied from interface: DatanodeProtocol
Register Datanode.
- Specified by:
registerDatanode in interface DatanodeProtocol
- Parameters:
nodeReg - datanode registration information
- Returns:
- the given DatanodeRegistration with updated registration information
- Throws:
IOException
- See Also:
-
FSNamesystem.registerDatanode(DatanodeRegistration)
-
sendHeartbeat
public HeartbeatResponse sendHeartbeat(DatanodeRegistration nodeReg, org.apache.hadoop.hdfs.server.protocol.StorageReport[] report, long dnCacheCapacity, long dnCacheUsed, int xmitsInProgress, int xceiverCount, int failedVolumes, VolumeFailureSummary volumeFailureSummary, boolean requestFullBlockReportLease, @Nonnull org.apache.hadoop.hdfs.server.protocol.SlowPeerReports slowPeers, @Nonnull org.apache.hadoop.hdfs.server.protocol.SlowDiskReports slowDisks) throws IOException
Description copied from interface: DatanodeProtocol
sendHeartbeat() tells the NameNode that the DataNode is still alive and well. Includes some status info, too. It also gives the NameNode a chance to return an array of "DatanodeCommand" objects in HeartbeatResponse. A DatanodeCommand tells the DataNode to invalidate local block(s), or to copy them to other DataNodes, etc.
- Specified by:
sendHeartbeat in interface DatanodeProtocol
- Parameters:
nodeReg - datanode registration information.
report - utilization report per storage.
dnCacheCapacity - the total cache capacity of the datanode (in bytes).
dnCacheUsed - the amount of cache used by the datanode (in bytes).
xmitsInProgress - number of transfers from this datanode to others.
xceiverCount - number of active transceiver threads.
failedVolumes - number of failed volumes.
volumeFailureSummary - info about volume failures.
requestFullBlockReportLease - whether to request a full block report lease.
slowPeers - Details of peer DataNodes that were detected as being slow to respond to packet writes. Empty report if no slow peers were detected by the DataNode.
slowDisks - Details of disks on DataNodes that were detected as being slow. Empty report if no slow disks were detected.
- Throws:
IOException - on error.
-
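sendHeartbeat is the liveness signal the NameNode uses to judge DataNode health. The NameNode-side expiry rule is commonly described as 2 * recheck-interval + 10 * heartbeat-interval (governed by dfs.namenode.heartbeat.recheck-interval and dfs.heartbeat.interval); treat that formula as an assumption in this sketch rather than a quote of DatanodeManager:

```java
// Sketch of a NameNode-side liveness test for a DataNode. A node is
// declared dead once no heartbeat has arrived within
// 2 * recheckIntervalMs + 10 * heartbeatIntervalMs (assumed formula).
public class HeartbeatMonitor {
    private final long expireMs;

    public HeartbeatMonitor(long recheckIntervalMs, long heartbeatIntervalMs) {
        this.expireMs = 2 * recheckIntervalMs + 10 * heartbeatIntervalMs;
    }

    public boolean isDead(long nowMs, long lastHeartbeatMs) {
        return nowMs - lastHeartbeatMs > expireMs;
    }

    public long expiryMs() { return expireMs; }
}
```

With the usual defaults (5-minute recheck, 3-second heartbeat) this works out to 10.5 minutes before a node is treated as dead.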
blockReport
public DatanodeCommand blockReport(DatanodeRegistration nodeReg, String poolId, StorageBlockReport[] reports, BlockReportContext context) throws IOException
Description copied from interface: DatanodeProtocol
blockReport() tells the NameNode about all the locally-stored blocks. The NameNode returns an array of Blocks that have become obsolete and should be deleted. This function is meant to upload *all* the locally-stored blocks. It's invoked upon startup and then infrequently afterwards.
- Specified by:
blockReport in interface DatanodeProtocol
- Parameters:
nodeReg - datanode registration
poolId - the block pool ID for the blocks
reports - report of blocks per storage. Each finalized block is represented as 3 longs. Each under-construction replica is represented as 4 longs. This is done instead of Block[] to reduce memory used by block reports.
context - Context information for this block report.
- Returns:
- the next command for DN to process.
- Throws:
IOException
-
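The parameter note above — each finalized block as 3 longs, each under-construction replica as 4 — is a memory optimization over sending Block[] objects. A sketch of that flat encoding (the field order here is illustrative; BlockListAsLongs is the real implementation):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a flat long[] block-report encoding: finalized replicas
// contribute (blockId, numBytes, genStamp); under-construction replicas
// additionally carry a replica-state code as a 4th long.
public class BlockReportEncoding {
    public static long[] encode(List<long[]> finalized, List<long[]> underConstruction) {
        List<Long> out = new ArrayList<>();
        for (long[] b : finalized) {            // 3 longs each
            out.add(b[0]); out.add(b[1]); out.add(b[2]);
        }
        for (long[] b : underConstruction) {    // 4 longs each
            out.add(b[0]); out.add(b[1]); out.add(b[2]); out.add(b[3]);
        }
        long[] arr = new long[out.size()];
        for (int i = 0; i < arr.length; i++) arr[i] = out.get(i);
        return arr;
    }
}
```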
cacheReport
public DatanodeCommand cacheReport(DatanodeRegistration nodeReg, String poolId, List<Long> blockIds) throws IOException
Description copied from interface: DatanodeProtocol
Communicates the complete list of locally cached blocks to the NameNode. This method is similar to DatanodeProtocol.blockReport(DatanodeRegistration, String, StorageBlockReport[], BlockReportContext), which is used to communicate blocks stored on disk.
- Specified by:
cacheReport in interface DatanodeProtocol
- Parameters:
nodeReg - The datanode registration.
poolId - The block pool ID for the blocks.
blockIds - A list of block IDs.
- Returns:
- The DatanodeCommand.
- Throws:
IOException
-
blockReceivedAndDeleted
public void blockReceivedAndDeleted(DatanodeRegistration nodeReg, String poolId, StorageReceivedDeletedBlocks[] receivedAndDeletedBlocks) throws IOException
Description copied from interface: DatanodeProtocol
blockReceivedAndDeleted() allows the DataNode to tell the NameNode about recently-received and recently-deleted block data. For received blocks, a hint is included for the preferred replica to delete when there are excess replicas. For example, whenever client code writes a new Block here, or another DataNode copies a Block to this DataNode, it will call blockReceived().
- Specified by:
blockReceivedAndDeleted in interface DatanodeProtocol
- Throws:
IOException
-
errorReport
Description copied from interface: DatanodeProtocol
errorReport() tells the NameNode about something that has gone awry. Useful for debugging.
- Specified by:
errorReport in interface DatanodeProtocol
- Throws:
IOException
-
versionRequest
Description copied from interface: NamenodeProtocol
Request name-node version and storage information.
- Specified by:
versionRequest in interface DatanodeProtocol
- Specified by:
versionRequest in interface NamenodeProtocol
- Returns:
NamespaceInfo identifying versions and storage information of the name-node
- Throws:
IOException
-
sendLifeline
public void sendLifeline(DatanodeRegistration nodeReg, org.apache.hadoop.hdfs.server.protocol.StorageReport[] report, long dnCacheCapacity, long dnCacheUsed, int xmitsInProgress, int xceiverCount, int failedVolumes, VolumeFailureSummary volumeFailureSummary) throws IOException
- Specified by:
sendLifeline in interface DatanodeLifelineProtocol
- Throws:
IOException
-
refreshServiceAcl
- Specified by:
refreshServiceAcl in interface org.apache.hadoop.security.authorize.RefreshAuthorizationPolicyProtocol
- Throws:
IOException
-
refreshUserToGroupsMappings
- Specified by:
refreshUserToGroupsMappings in interface org.apache.hadoop.security.RefreshUserMappingsProtocol
- Throws:
IOException
-
refreshSuperUserGroupsConfiguration
- Specified by:
refreshSuperUserGroupsConfiguration in interface org.apache.hadoop.security.RefreshUserMappingsProtocol
- Throws:
IOException
-
refreshCallQueue
- Specified by:
refreshCallQueue in interface org.apache.hadoop.ipc.RefreshCallQueueProtocol
- Throws:
IOException
-
refresh
- Specified by:
refresh in interface org.apache.hadoop.ipc.GenericRefreshProtocol
-
getGroupsForUser
- Specified by:
getGroupsForUser in interface org.apache.hadoop.tools.GetUserMappingsProtocol
- Throws:
IOException
-
monitorHealth
public void monitorHealth() throws org.apache.hadoop.ha.HealthCheckFailedException, org.apache.hadoop.security.AccessControlException, IOException
- Specified by:
monitorHealth in interface org.apache.hadoop.ha.HAServiceProtocol
- Throws:
org.apache.hadoop.ha.HealthCheckFailedException, org.apache.hadoop.security.AccessControlException, IOException
-
transitionToActive
public void transitionToActive(org.apache.hadoop.ha.HAServiceProtocol.StateChangeRequestInfo req) throws org.apache.hadoop.ha.ServiceFailedException, org.apache.hadoop.security.AccessControlException, IOException
- Specified by:
transitionToActive in interface org.apache.hadoop.ha.HAServiceProtocol
- Throws:
org.apache.hadoop.ha.ServiceFailedException, org.apache.hadoop.security.AccessControlException, IOException
-
transitionToStandby
public void transitionToStandby(org.apache.hadoop.ha.HAServiceProtocol.StateChangeRequestInfo req) throws org.apache.hadoop.ha.ServiceFailedException, org.apache.hadoop.security.AccessControlException, IOException
- Specified by:
transitionToStandby in interface org.apache.hadoop.ha.HAServiceProtocol
- Throws:
org.apache.hadoop.ha.ServiceFailedException, org.apache.hadoop.security.AccessControlException, IOException
-
transitionToObserver
public void transitionToObserver(org.apache.hadoop.ha.HAServiceProtocol.StateChangeRequestInfo req) throws org.apache.hadoop.ha.ServiceFailedException, org.apache.hadoop.security.AccessControlException, IOException
- Specified by:
transitionToObserver in interface org.apache.hadoop.ha.HAServiceProtocol
- Throws:
org.apache.hadoop.ha.ServiceFailedException, org.apache.hadoop.security.AccessControlException, IOException
-
getServiceStatus
public org.apache.hadoop.ha.HAServiceStatus getServiceStatus() throws org.apache.hadoop.security.AccessControlException, org.apache.hadoop.ha.ServiceFailedException, IOException
- Specified by:
getServiceStatus in interface org.apache.hadoop.ha.HAServiceProtocol
- Throws:
org.apache.hadoop.security.AccessControlException, org.apache.hadoop.ha.ServiceFailedException, IOException
-
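transitionToActive, transitionToStandby, and transitionToObserver together define the HA state machine for the NameNode. A hedged sketch of the allowed moves — assuming, as in the HDFS observer-read design, that a node enters and leaves OBSERVER only via STANDBY:

```java
// Illustrative HA state machine for a NameNode. The observer constraint
// (OBSERVER reachable only from STANDBY, and back) is an assumption taken
// from the HDFS observer-read design, not a quote of this class's code.
public class HaStateMachine {
    public enum State { ACTIVE, STANDBY, OBSERVER }

    public static boolean canTransition(State from, State to) {
        if (from == to) return false;  // no-op transitions rejected
        switch (to) {
            case ACTIVE:   return from == State.STANDBY;
            case STANDBY:  return true;   // any other state may fall back
            case OBSERVER: return from == State.STANDBY;
            default:       return false;
        }
    }
}
```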
getDataEncryptionKey
public org.apache.hadoop.hdfs.security.token.block.DataEncryptionKey getDataEncryptionKey() throws IOException
- Specified by:
getDataEncryptionKey in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
createSnapshot
- Specified by:
createSnapshot in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
deleteSnapshot
- Specified by:
deleteSnapshot in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
allowSnapshot
- Specified by:
allowSnapshot in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
disallowSnapshot
- Specified by:
disallowSnapshot in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
renameSnapshot
public void renameSnapshot(String snapshotRoot, String snapshotOldName, String snapshotNewName) throws IOException
- Specified by:
renameSnapshot in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
getSnapshottableDirListing
public org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus[] getSnapshottableDirListing() throws IOException
- Specified by:
getSnapshottableDirListing in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
getSnapshotListing
public org.apache.hadoop.hdfs.protocol.SnapshotStatus[] getSnapshotListing(String snapshotRoot) throws IOException
- Specified by:
getSnapshotListing in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
getSnapshotDiffReport
public org.apache.hadoop.hdfs.protocol.SnapshotDiffReport getSnapshotDiffReport(String snapshotRoot, String earlierSnapshotName, String laterSnapshotName) throws IOException
- Specified by:
getSnapshotDiffReport in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
getSnapshotDiffReportListing
public org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing getSnapshotDiffReportListing(String snapshotRoot, String earlierSnapshotName, String laterSnapshotName, byte[] startPath, int index) throws IOException
- Specified by:
getSnapshotDiffReportListing in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
addCacheDirective
public long addCacheDirective(org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo path, EnumSet<org.apache.hadoop.fs.CacheFlag> flags) throws IOException
- Specified by:
addCacheDirective in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
modifyCacheDirective
public void modifyCacheDirective(org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo directive, EnumSet<org.apache.hadoop.fs.CacheFlag> flags) throws IOException
- Specified by:
modifyCacheDirective in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
removeCacheDirective
- Specified by:
removeCacheDirective in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
listCacheDirectives
public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.CacheDirectiveEntry> listCacheDirectives(long prevId, org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo filter) throws IOException
- Specified by:
listCacheDirectives in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
addCachePool
- Specified by:
addCachePool in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
modifyCachePool
- Specified by:
modifyCachePool in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
removeCachePool
- Specified by:
removeCachePool in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
listCachePools
public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.CachePoolEntry> listCachePools(String prevKey) throws IOException
- Specified by:
listCachePools in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
modifyAclEntries
public void modifyAclEntries(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOException
- Specified by:
modifyAclEntries in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
removeAclEntries
public void removeAclEntries(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOException
- Specified by:
removeAclEntries in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
removeDefaultAcl
- Specified by:
removeDefaultAcl in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
removeAcl
- Specified by:
removeAcl in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
setAcl
public void setAcl(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOException
- Specified by:
setAcl in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
getAclStatus
- Specified by:
getAclStatus in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
createEncryptionZone
- Specified by:
createEncryptionZone in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
getEZForPath
- Specified by:
getEZForPath in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
listEncryptionZones
public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.EncryptionZone> listEncryptionZones(long prevId) throws IOException
- Specified by:
listEncryptionZones in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
reencryptEncryptionZone
public void reencryptEncryptionZone(String zone, org.apache.hadoop.hdfs.protocol.HdfsConstants.ReencryptAction action) throws IOException
- Specified by:
reencryptEncryptionZone in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
listReencryptionStatus
public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.ZoneReencryptionStatus> listReencryptionStatus(long prevId) throws IOException
- Specified by:
listReencryptionStatus in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
setErasureCodingPolicy
- Specified by:
setErasureCodingPolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
setXAttr
public void setXAttr(String src, org.apache.hadoop.fs.XAttr xAttr, EnumSet<org.apache.hadoop.fs.XAttrSetFlag> flag) throws IOException
- Specified by:
setXAttr in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
getXAttrs
public List<org.apache.hadoop.fs.XAttr> getXAttrs(String src, List<org.apache.hadoop.fs.XAttr> xAttrs) throws IOException
- Specified by:
getXAttrs in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
listXAttrs
- Specified by:
listXAttrs in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
removeXAttr
- Specified by:
removeXAttr in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
checkAccess
public void checkAccess(String path, org.apache.hadoop.fs.permission.FsAction mode) throws IOException
- Specified by:
checkAccess in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
getCurrentEditLogTxid
- Specified by:
getCurrentEditLogTxid in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
getEditsFromTxid
- Specified by:
getEditsFromTxid in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
getErasureCodingPolicies
public org.apache.hadoop.hdfs.protocol.ErasureCodingPolicyInfo[] getErasureCodingPolicies() throws IOException
- Specified by:
getErasureCodingPolicies in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
getErasureCodingCodecs
- Specified by:
getErasureCodingCodecs in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
getErasureCodingPolicy
public org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy getErasureCodingPolicy(String src) throws IOException
- Specified by:
getErasureCodingPolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
unsetErasureCodingPolicy
- Specified by:
unsetErasureCodingPolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
getECTopologyResultForPolicies
public org.apache.hadoop.hdfs.protocol.ECTopologyVerifierResult getECTopologyResultForPolicies(String... policyNames) throws IOException
- Specified by:
getECTopologyResultForPolicies in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
addErasureCodingPolicies
public org.apache.hadoop.hdfs.protocol.AddErasureCodingPolicyResponse[] addErasureCodingPolicies(org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy[] policies) throws IOException
- Specified by:
addErasureCodingPolicies in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
removeErasureCodingPolicy
- Specified by:
removeErasureCodingPolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
enableErasureCodingPolicy
- Specified by:
enableErasureCodingPolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
disableErasureCodingPolicy
- Specified by:
disableErasureCodingPolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-
startReconfiguration
- Specified by:
startReconfiguration in interface org.apache.hadoop.hdfs.protocol.ReconfigurationProtocol
- Throws:
IOException
-
getReconfigurationStatus
public org.apache.hadoop.conf.ReconfigurationTaskStatus getReconfigurationStatus() throws IOException
- Specified by:
getReconfigurationStatus in interface org.apache.hadoop.hdfs.protocol.ReconfigurationProtocol
- Throws:
IOException
-
listReconfigurableProperties
- Specified by:
listReconfigurableProperties in interface org.apache.hadoop.hdfs.protocol.ReconfigurationProtocol
- Throws:
IOException
-
getNextSPSPath
- Specified by:
getNextSPSPath in interface NamenodeProtocol
- Returns:
- The next available SPS path, or null if none. This API is used by the external SPS.
- Throws:
IOException
-
getEnclosingRoot
- Specified by:
getEnclosingRoot in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws:
IOException
-