Class RouterAsyncClientProtocol
java.lang.Object
org.apache.hadoop.hdfs.server.federation.router.RouterClientProtocol
org.apache.hadoop.hdfs.server.federation.router.async.RouterAsyncClientProtocol
- All Implemented Interfaces:
org.apache.hadoop.hdfs.protocol.ClientProtocol
Module that implements all the async RPC calls in ClientProtocol in the RouterRpcServer.
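As a rough illustration of the asynchronous-call pattern this class embodies (a hypothetical sketch, not the Hadoop implementation; `AsyncRpcSketch` and `getFileInfoAsync` are made-up names), an async RPC facade hands each call to a worker pool and returns a future instead of blocking the handler thread:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch of the async-call pattern: each RPC is submitted to a
// worker pool and the caller receives a future, so the handler thread that
// accepted the call is free to serve other requests.
class AsyncRpcSketch {
    private final ExecutorService pool = Executors.newFixedThreadPool(2);

    // Stand-in for an async ClientProtocol call such as getFileInfo(src).
    public CompletableFuture<String> getFileInfoAsync(String src) {
        return CompletableFuture.supplyAsync(() -> "status:" + src, pool);
    }

    public void shutdown() {
        pool.shutdown();
    }
}
```

The real router plumbing is more involved (it must route each call to the right subcluster and marshal protobuf responses), but the non-blocking shape is the same.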
Nested Class Summary
Nested classes/interfaces inherited from class org.apache.hadoop.hdfs.server.federation.router.RouterClientProtocol
RouterClientProtocol.GetListingComparator
Field Summary
Fields inherited from interface org.apache.hadoop.hdfs.protocol.ClientProtocol
GET_STATS_BYTES_IN_FUTURE_BLOCKS_IDX, GET_STATS_CAPACITY_IDX, GET_STATS_CORRUPT_BLOCKS_IDX, GET_STATS_LOW_REDUNDANCY_IDX, GET_STATS_MISSING_BLOCKS_IDX, GET_STATS_MISSING_REPL_ONE_BLOCKS_IDX, GET_STATS_PENDING_DELETION_BLOCKS_IDX, GET_STATS_REMAINING_IDX, GET_STATS_UNDER_REPLICATED_IDX, GET_STATS_USED_IDX, STATS_ARRAY_LENGTH, versionID
Constructor Summary
RouterAsyncClientProtocol(org.apache.hadoop.conf.Configuration conf, RouterRpcServer rpcServer)
Method Summary
- org.apache.hadoop.hdfs.protocol.LastBlockWithStatus append(String src, String clientName, org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag)
- void cancelDelegationToken(org.apache.hadoop.security.token.Token<org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier> token)
- void concat(String trg, String[] srcs)
- org.apache.hadoop.hdfs.protocol.HdfsFileStatus create(String src, org.apache.hadoop.fs.permission.FsPermission masked, String clientName, org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag, boolean createParent, short replication, long blockSize, org.apache.hadoop.crypto.CryptoProtocolVersion[] supportedVersions, String ecPolicyName, String storagePolicy)
- org.apache.hadoop.fs.ContentSummary getContentSummary(String path)
- long getCurrentEditLogTxid()
- org.apache.hadoop.hdfs.protocol.DatanodeInfo[] getDatanodeReport(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type)
- org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport[] getDatanodeStorageReport(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type)
- org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport[] getDatanodeStorageReport(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type, boolean requireResponse, long timeOutMs)
- org.apache.hadoop.security.token.Token<org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier> getDelegationToken(org.apache.hadoop.io.Text renewer)
- org.apache.hadoop.fs.Path getEnclosingRoot(String src)
- org.apache.hadoop.hdfs.protocol.HdfsFileStatus getFileInfo(String src)
- protected org.apache.hadoop.hdfs.protocol.HdfsFileStatus getFileInfoAll(List<RemoteLocation> locations, RemoteMethod method, long timeOutMs) - Get the file info from all the locations.
- RemoteLocation getFileRemoteLocation(String path)
- org.apache.hadoop.hdfs.protocol.DirectoryListing getListing(String src, byte[] startAfter, boolean needLocation)
- protected List<RemoteResult<RemoteLocation,org.apache.hadoop.hdfs.protocol.DirectoryListing>> getListingInt(String src, byte[] startAfter, boolean needLocation) - Get listing on remote locations.
- org.apache.hadoop.hdfs.protocol.HdfsFileStatus getMountPointStatus(String name, int childrenNum, long date, boolean setPath) - Create a new file status for a mount point.
- org.apache.hadoop.hdfs.protocol.ReplicatedBlockStats getReplicatedBlockStats()
- org.apache.hadoop.fs.FsServerDefaults getServerDefaults()
- org.apache.hadoop.hdfs.protocol.DatanodeInfo[] getSlowDatanodeReport()
- long[] getStats()
- boolean isMultiDestDirectory(String src) - Checks if the path is a directory and is supposed to be present in all subclusters.
- boolean mkdirs(String src, org.apache.hadoop.fs.permission.FsPermission masked, boolean createParent)
- void msync()
- boolean recoverLease(String src, String clientName)
- boolean rename(String src, String dst) - Deprecated.
- void rename2(String src, String dst, org.apache.hadoop.fs.Options.Rename... options)
- long renewDelegationToken(org.apache.hadoop.security.token.Token<org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier> token)
- boolean restoreFailedStorage(String arg)
- long rollEdits()
- org.apache.hadoop.hdfs.protocol.RollingUpgradeInfo rollingUpgrade(org.apache.hadoop.hdfs.protocol.HdfsConstants.RollingUpgradeAction action)
- boolean saveNamespace(long timeWindow, long txGap)
- boolean setReplication(String src, short replication)
- boolean setSafeMode(org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction action, boolean isChecked)
Methods inherited from class org.apache.hadoop.hdfs.server.federation.router.RouterClientProtocol
abandonBlock, addBlock, addCacheDirective, addCachePool, addErasureCodingPolicies, aggregateContentSummary, allowSnapshot, checkAccess, checkFaultTolerantRetry, complete, createEncryptionZone, createSnapshot, createSymlink, delete, deleteSnapshot, disableErasureCodingPolicy, disallowSnapshot, enableErasureCodingPolicy, finalizeUpgrade, fsync, getAclStatus, getAdditionalDatanode, getBatchedListing, getBlockLocations, getComparator, getDataEncryptionKey, getDelegationTokens, getECBlockGroupStats, getECTopologyResultForPolicies, getEditsFromTxid, getErasureCodingCodecs, getErasureCodingPolicies, getErasureCodingPolicy, getEZForPath, getFileInfoAll, getFileLinkInfo, getHAServiceState, getLinkTarget, getLocatedFileInfo, getLocationsForContentSummary, getMountPointDates, getMountPointStatus, getMountStatusTimeOut, getNamenodeResolver, getParentPermission, getPreferredBlockSize, getQuotaUsage, getRbfRename, getRenameDestinations, getRouterFederationRenameCount, getRpcClient, getRpcServer, getSecurityManager, getServerDefaultsLastUpdate, getServerDefaultsValidityPeriod, getSnapshotDiffReport, getSnapshotDiffReportListing, getSnapshotListing, getSnapshottableDirListing, getStoragePolicies, getStoragePolicy, getStoragePolicy, getSubclusterResolver, getSuperGroup, getSuperUser, getXAttrs, isAllowPartialList, isFileClosed, isUnavailableSubclusterException, listCacheDirectives, listCachePools, listCorruptFileBlocks, listEncryptionZones, listOpenFiles, listOpenFiles, listReencryptionStatus, listXAttrs, mergeDtanodeStorageReport, metaSave, modifyAclEntries, modifyCacheDirective, modifyCachePool, reencryptEncryptionZone, refreshNodes, removeAcl, removeAclEntries, removeCacheDirective, removeCachePool, removeDefaultAcl, removeErasureCodingPolicy, removeXAttr, renameSnapshot, renewLease, reportBadBlocks, satisfyStoragePolicy, setAcl, setBalancerBandwidth, setErasureCodingPolicy, setOwner, setPermission, setQuota, setServerDefaultsLastUpdate, setStoragePolicy, setTimes, setXAttr, 
shouldAddMountPoint, truncate, unsetErasureCodingPolicy, unsetStoragePolicy, updateBlockForPipeline, updatePipeline, upgradeStatus
-
Constructor Details
-
RouterAsyncClientProtocol
public RouterAsyncClientProtocol(org.apache.hadoop.conf.Configuration conf, RouterRpcServer rpcServer)
-
-
Method Details
-
getServerDefaults
public org.apache.hadoop.fs.FsServerDefaults getServerDefaults() throws IOException
- Specified by:
getServerDefaults in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Overrides:
getServerDefaults in class RouterClientProtocol
- Throws:
IOException
-
create
public org.apache.hadoop.hdfs.protocol.HdfsFileStatus create(String src, org.apache.hadoop.fs.permission.FsPermission masked, String clientName, org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag, boolean createParent, short replication, long blockSize, org.apache.hadoop.crypto.CryptoProtocolVersion[] supportedVersions, String ecPolicyName, String storagePolicy) throws IOException
- Specified by:
create in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Overrides:
create in class RouterClientProtocol
- Throws:
IOException
-
append
public org.apache.hadoop.hdfs.protocol.LastBlockWithStatus append(String src, String clientName, org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag) throws IOException
- Specified by:
append in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Overrides:
append in class RouterClientProtocol
- Throws:
IOException
-
rename
public boolean rename(String src, String dst) throws IOException
Deprecated.
- Specified by:
rename in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Overrides:
rename in class RouterClientProtocol
- Throws:
IOException
-
rename2
public void rename2(String src, String dst, org.apache.hadoop.fs.Options.Rename... options) throws IOException
- Specified by:
rename2 in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Overrides:
rename2 in class RouterClientProtocol
- Throws:
IOException
-
concat
public void concat(String trg, String[] srcs) throws IOException
- Specified by:
concat in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Overrides:
concat in class RouterClientProtocol
- Throws:
IOException
-
mkdirs
public boolean mkdirs(String src, org.apache.hadoop.fs.permission.FsPermission masked, boolean createParent) throws IOException
- Specified by:
mkdirs in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Overrides:
mkdirs in class RouterClientProtocol
- Throws:
IOException
-
getListing
public org.apache.hadoop.hdfs.protocol.DirectoryListing getListing(String src, byte[] startAfter, boolean needLocation) throws IOException
- Specified by:
getListing in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Overrides:
getListing in class RouterClientProtocol
- Throws:
IOException
-
getListingInt
protected List<RemoteResult<RemoteLocation,org.apache.hadoop.hdfs.protocol.DirectoryListing>> getListingInt(String src, byte[] startAfter, boolean needLocation) throws IOException Get listing on remote locations.- Overrides:
getListingIntin classRouterClientProtocol- Parameters:
src- the directory namestartAfter- the name to start afterneedLocation- if blockLocations need to be returned- Returns:
- a partial listing starting after startAfter
- Throws:
IOException- if other I/O error occurred
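Because getListingInt returns one partial listing per remote location, callers must combine the per-subcluster results. A minimal self-contained sketch of such a merge (hypothetical: plain String entry names stand in for DirectoryListing; this is not the Hadoop code): take the sorted, de-duplicated union of names and keep only entries after startAfter.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeSet;

// Hypothetical sketch: merge per-subcluster listings into a single sorted,
// de-duplicated listing, honoring the startAfter pagination marker.
class ListingMerger {
    public static List<String> merge(List<List<String>> perLocation, String startAfter) {
        TreeSet<String> names = new TreeSet<>();   // sorted union, duplicates collapse
        for (List<String> listing : perLocation) {
            names.addAll(listing);
        }
        List<String> out = new ArrayList<>();
        for (String name : names) {
            if (startAfter == null || name.compareTo(startAfter) > 0) {
                out.add(name);                     // keep only entries after the marker
            }
        }
        return out;
    }
}
```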
-
getFileInfo
public org.apache.hadoop.hdfs.protocol.HdfsFileStatus getFileInfo(String src) throws IOException
- Specified by:
getFileInfo in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Overrides:
getFileInfo in class RouterClientProtocol
- Throws:
IOException
-
getFileRemoteLocation
public RemoteLocation getFileRemoteLocation(String path) throws IOException
- Overrides:
getFileRemoteLocation in class RouterClientProtocol
- Throws:
IOException
-
getMountPointStatus
public org.apache.hadoop.hdfs.protocol.HdfsFileStatus getMountPointStatus(String name, int childrenNum, long date, boolean setPath)
Description copied from class: RouterClientProtocol
Create a new file status for a mount point.
- Overrides:
getMountPointStatus in class RouterClientProtocol
- Parameters:
name - Name of the mount point.
childrenNum - Number of children.
date - Map with the dates.
setPath - if true should set path in HdfsFileStatus
- Returns:
New HDFS file status representing a mount point.
-
getFileInfoAll
protected org.apache.hadoop.hdfs.protocol.HdfsFileStatus getFileInfoAll(List<RemoteLocation> locations, RemoteMethod method, long timeOutMs) throws IOException
Description copied from class: RouterClientProtocol
Get the file info from all the locations.
- Overrides:
getFileInfoAll in class RouterClientProtocol
- Parameters:
locations - Locations to check.
method - The file information method to run.
timeOutMs - Time out for the operation in milliseconds.
- Returns:
The first file info if it's a file, the directory if it's everywhere.
- Throws:
IOException - If all the locations throw an exception.
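The return contract above ("the first file info if it's a file, the directory if it's everywhere") can be sketched with plain stand-in types (hypothetical `Info` record instead of Hadoop's HdfsFileStatus; not the actual implementation):

```java
import java.util.List;

// Hypothetical sketch of the getFileInfoAll aggregation rule: the first
// location that reports a file wins; a directory is returned only when every
// location reported one; otherwise the result is null.
class FileInfoAggregator {
    public record Info(String path, boolean isDir) {}

    public static Info aggregate(List<Info> results) {
        Info firstDir = null;
        boolean allDirs = !results.isEmpty();
        for (Info r : results) {
            if (r != null && !r.isDir()) {
                return r;                // first file wins immediately
            }
            if (r == null) {
                allDirs = false;         // a location had nothing at this path
            } else if (firstDir == null) {
                firstDir = r;            // remember the first directory seen
            }
        }
        return allDirs ? firstDir : null;
    }
}
```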
-
recoverLease
public boolean recoverLease(String src, String clientName) throws IOException
- Specified by:
recoverLease in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Overrides:
recoverLease in class RouterClientProtocol
- Throws:
IOException
-
getStats
public long[] getStats() throws IOException
- Specified by:
getStats in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Overrides:
getStats in class RouterClientProtocol
- Throws:
IOException
-
getReplicatedBlockStats
public org.apache.hadoop.hdfs.protocol.ReplicatedBlockStats getReplicatedBlockStats() throws IOException
- Specified by:
getReplicatedBlockStats in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Overrides:
getReplicatedBlockStats in class RouterClientProtocol
- Throws:
IOException
-
getDatanodeReport
public org.apache.hadoop.hdfs.protocol.DatanodeInfo[] getDatanodeReport(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type) throws IOException
- Specified by:
getDatanodeReport in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Overrides:
getDatanodeReport in class RouterClientProtocol
- Throws:
IOException
-
getSlowDatanodeReport
public org.apache.hadoop.hdfs.protocol.DatanodeInfo[] getSlowDatanodeReport() throws IOException
- Specified by:
getSlowDatanodeReport in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Overrides:
getSlowDatanodeReport in class RouterClientProtocol
- Throws:
IOException
-
getDatanodeStorageReport
public org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport[] getDatanodeStorageReport(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type) throws IOException
- Specified by:
getDatanodeStorageReport in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Overrides:
getDatanodeStorageReport in class RouterClientProtocol
- Throws:
IOException
-
getDatanodeStorageReport
public org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport[] getDatanodeStorageReport(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type, boolean requireResponse, long timeOutMs) throws IOException
- Overrides:
getDatanodeStorageReport in class RouterClientProtocol
- Throws:
IOException
-
setSafeMode
public boolean setSafeMode(org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction action, boolean isChecked) throws IOException
- Specified by:
setSafeMode in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Overrides:
setSafeMode in class RouterClientProtocol
- Throws:
IOException
-
saveNamespace
public boolean saveNamespace(long timeWindow, long txGap) throws IOException
- Specified by:
saveNamespace in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Overrides:
saveNamespace in class RouterClientProtocol
- Throws:
IOException
-
rollEdits
public long rollEdits() throws IOException
- Specified by:
rollEdits in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Overrides:
rollEdits in class RouterClientProtocol
- Throws:
IOException
-
restoreFailedStorage
public boolean restoreFailedStorage(String arg) throws IOException
- Specified by:
restoreFailedStorage in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Overrides:
restoreFailedStorage in class RouterClientProtocol
- Throws:
IOException
-
rollingUpgrade
public org.apache.hadoop.hdfs.protocol.RollingUpgradeInfo rollingUpgrade(org.apache.hadoop.hdfs.protocol.HdfsConstants.RollingUpgradeAction action) throws IOException
- Specified by:
rollingUpgrade in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Overrides:
rollingUpgrade in class RouterClientProtocol
- Throws:
IOException
-
getContentSummary
public org.apache.hadoop.fs.ContentSummary getContentSummary(String path) throws IOException
- Specified by:
getContentSummary in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Overrides:
getContentSummary in class RouterClientProtocol
- Throws:
IOException
-
getCurrentEditLogTxid
public long getCurrentEditLogTxid() throws IOException
- Specified by:
getCurrentEditLogTxid in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Overrides:
getCurrentEditLogTxid in class RouterClientProtocol
- Throws:
IOException
-
msync
public void msync() throws IOException
- Specified by:
msync in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Overrides:
msync in class RouterClientProtocol
- Throws:
IOException
-
setReplication
public boolean setReplication(String src, short replication) throws IOException
- Specified by:
setReplication in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Overrides:
setReplication in class RouterClientProtocol
- Throws:
IOException
-
isMultiDestDirectory
public boolean isMultiDestDirectory(String src) throws IOException
Checks if the path is a directory and is supposed to be present in all subclusters.
- Overrides:
isMultiDestDirectory in class RouterClientProtocol
- Parameters:
src - the source path
- Returns:
true if the path is a directory and is supposed to be present in all subclusters, false in all other scenarios.
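The check described above combines two conditions. As a toy sketch (hypothetical inputs standing in for the mount-table resolution the router actually performs; not the Hadoop implementation), it holds only when the path resolves to more than one subcluster and is a directory:

```java
// Hypothetical sketch: a path counts as a "multi-destination directory" only
// when it maps to several subclusters AND is a directory in the namespace view.
class MultiDestCheck {
    public static boolean isMultiDestDirectory(int destinationCount, boolean isDirectory) {
        return destinationCount > 1 && isDirectory;
    }
}
```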
-
getEnclosingRoot
public org.apache.hadoop.fs.Path getEnclosingRoot(String src) throws IOException
- Specified by:
getEnclosingRoot in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Overrides:
getEnclosingRoot in class RouterClientProtocol
- Throws:
IOException
-
getDelegationToken
public org.apache.hadoop.security.token.Token<org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier> getDelegationToken(org.apache.hadoop.io.Text renewer) throws IOException
- Specified by:
getDelegationToken in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Overrides:
getDelegationToken in class RouterClientProtocol
- Throws:
IOException
-
renewDelegationToken
public long renewDelegationToken(org.apache.hadoop.security.token.Token<org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier> token) throws IOException
- Specified by:
renewDelegationToken in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Overrides:
renewDelegationToken in class RouterClientProtocol
- Throws:
IOException
-
cancelDelegationToken
public void cancelDelegationToken(org.apache.hadoop.security.token.Token<org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier> token) throws IOException
- Specified by:
cancelDelegationToken in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Overrides:
cancelDelegationToken in class RouterClientProtocol
- Throws:
IOException
-