Package org.apache.hadoop.hdfs
Class ViewDistributedFileSystem
java.lang.Object
org.apache.hadoop.conf.Configured
org.apache.hadoop.fs.FileSystem
org.apache.hadoop.hdfs.DistributedFileSystem
org.apache.hadoop.hdfs.ViewDistributedFileSystem
- All Implemented Interfaces:
Closeable, AutoCloseable, org.apache.hadoop.conf.Configurable, org.apache.hadoop.crypto.key.KeyProviderTokenIssuer, org.apache.hadoop.fs.BatchListingOperations, org.apache.hadoop.fs.BulkDeleteSource, org.apache.hadoop.fs.LeaseRecoverable, org.apache.hadoop.fs.PathCapabilities, org.apache.hadoop.fs.SafeMode, org.apache.hadoop.fs.WithErasureCoding, org.apache.hadoop.security.token.DelegationTokenIssuer
ViewDistributedFileSystem extends DistributedFileSystem with additional
mounting functionality. The goal is better API compatibility for HDFS users
when using a mounting filesystem (ViewFileSystemOverloadScheme).
ViewFileSystemOverloadScheme is a filesystem that inherits its mounting
functionality from ViewFileSystem.
Users who have been using ViewFileSystemOverloadScheme by setting
fs.hdfs.impl=org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme can now
set fs.hdfs.impl=org.apache.hadoop.hdfs.ViewDistributedFileSystem instead, so
that HDFS users get a closely compatible API together with mount
functionality. All other schemes can continue to use the
ViewFileSystemOverloadScheme class directly for mount functionality. Please
note that ViewFileSystemOverloadScheme provides only the ViewFileSystem APIs.
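As a sketch of the configuration described above, a core-site.xml fragment might look like the following. The mount-table name ns1, the cluster hosts, and the paths are hypothetical placeholders, not values taken from this page; the property-name convention follows the ViewFs mount-table configuration keys.

```xml
<!-- Hypothetical example: mount-table name, hosts, and paths are placeholders. -->
<property>
  <name>fs.hdfs.impl</name>
  <value>org.apache.hadoop.hdfs.ViewDistributedFileSystem</value>
</property>
<!-- Mount link: /data on mount table ns1 resolves to a second cluster. -->
<property>
  <name>fs.viewfs.mounttable.ns1.link./data</name>
  <value>hdfs://ns2/data</value>
</property>
<!-- Fallback for paths that match no mount link (the base cluster). -->
<property>
  <name>fs.viewfs.mounttable.ns1.linkFallback</name>
  <value>hdfs://ns1/</value>
</property>
```

With such a configuration, a client opening hdfs://ns1/data/file would be routed to hdfs://ns2/data/file, while paths outside the mount links go to the fallback cluster.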
If this class is configured but no mount points are configured, it simply
behaves like the existing DistributedFileSystem class. If fs.hdfs.impl points
to this class and mount configurations are also present, users can call the
APIs available in this class, which are DFS APIs, but the calls will be
delegated to viewfs functionality. Please note that APIs without any path in
their arguments (ex: isInSafeMode) will be delegated to the default
filesystem only, that is, the configured fallback link. If you want to make
these API calls on a specific child filesystem, you may want to initialize it
separately and call it directly. With ViewDistributedFileSystem, we strongly
recommend configuring linkFallback when you add mount links, and it is
recommended to point it to your base cluster, usually your current
fs.defaultFS if that is pointing to hdfs.
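The delegation behavior described above (path-based calls resolved against mount links, everything else going to the fallback link) can be illustrated with a small self-contained sketch. This is a conceptual model only, not the Hadoop implementation; the class name and target URIs below are invented for illustration.

```java
import java.util.Map;
import java.util.TreeMap;

// Conceptual sketch (not Hadoop code): resolve a path against mount links,
// falling back to the linkFallback target when no mount prefix matches.
// This mirrors how path-based calls are delegated to mount targets while
// unmatched paths go to the configured fallback filesystem.
class MountResolver {
    private final TreeMap<String, String> mounts = new TreeMap<>();
    private final String fallback;

    MountResolver(Map<String, String> links, String linkFallback) {
        this.mounts.putAll(links);
        this.fallback = linkFallback;
    }

    /** Returns the target URI for the longest matching mount prefix. */
    String resolve(String path) {
        // Descending key order visits a nested prefix like /a/b before /a.
        for (String prefix : mounts.descendingKeySet()) {
            if (path.equals(prefix) || path.startsWith(prefix + "/")) {
                return mounts.get(prefix) + path.substring(prefix.length());
            }
        }
        return fallback + path; // no mount matched: delegate to fallback
    }
}
```

For example, with a link /data -> hdfs://ns2/data and fallback hdfs://ns1, the path /data/file1 resolves to hdfs://ns2/data/file1 while /user/foo resolves to hdfs://ns1/user/foo.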
Nested Class Summary
Nested classes/interfaces inherited from class org.apache.hadoop.hdfs.DistributedFileSystem
DistributedFileSystem.HdfsDataOutputStreamBuilder
Nested classes/interfaces inherited from class org.apache.hadoop.fs.FileSystem
org.apache.hadoop.fs.FileSystem.DirectoryEntries, org.apache.hadoop.fs.FileSystem.Statistics -
Field Summary
Fields inherited from class org.apache.hadoop.fs.FileSystem
DEFAULT_FS, FS_DEFAULT_NAME_KEY, LOG, SHUTDOWN_HOOK_PRIORITY, statistics, TRASH_PREFIX, USER_HOME_PREFIX
Fields inherited from interface org.apache.hadoop.security.token.DelegationTokenIssuer
TOKEN_LOG -
Constructor Summary
Constructors
ViewDistributedFileSystem()
Method Summary
Modifier and Type / Method / Description
void access(org.apache.hadoop.fs.Path path, org.apache.hadoop.fs.permission.FsAction mode)
long addCacheDirective(CacheDirectiveInfo info) - Add a new CacheDirective.
long addCacheDirective(CacheDirectiveInfo info, EnumSet<CacheFlag> flags) - Add a new CacheDirective.
void addCachePool(CachePoolInfo info) - Add a cache pool.
addErasureCodingPolicies(ErasureCodingPolicy[] policies) - Add Erasure coding policies to HDFS.
void allowSnapshot(org.apache.hadoop.fs.Path path)
org.apache.hadoop.fs.FSDataOutputStream append(org.apache.hadoop.fs.Path f, int bufferSize, org.apache.hadoop.util.Progressable progress)
org.apache.hadoop.fs.FSDataOutputStream append(org.apache.hadoop.fs.Path f, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, int bufferSize, org.apache.hadoop.util.Progressable progress) - Append to an existing file (optional operation).
org.apache.hadoop.fs.FSDataOutputStream append(org.apache.hadoop.fs.Path f, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, int bufferSize, org.apache.hadoop.util.Progressable progress, InetSocketAddress[] favoredNodes) - Append to an existing file (optional operation).
appendFile(org.apache.hadoop.fs.Path path) - Create a DistributedFileSystem.HdfsDataOutputStreamBuilder to append a file on DFS.
org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.PartialListing<org.apache.hadoop.fs.LocatedFileStatus>> batchedListLocatedStatusIterator(List<org.apache.hadoop.fs.Path> paths)
org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.PartialListing<org.apache.hadoop.fs.FileStatus>> batchedListStatusIterator(List<org.apache.hadoop.fs.Path> paths)
protected URI canonicalizeUri(URI uri)
void close()
void concat(org.apache.hadoop.fs.Path trg, org.apache.hadoop.fs.Path[] psrcs) - Move blocks from srcs to trg and delete srcs afterwards.
org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, boolean overwrite, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress)
HdfsDataOutputStream create(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, boolean overwrite, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, InetSocketAddress[] favoredNodes) - Same as DistributedFileSystem.create(Path, FsPermission, boolean, int, short, long, Progressable) with the addition of favoredNodes, a hint to where the namenode should place the file blocks.
org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, EnumSet<org.apache.hadoop.fs.CreateFlag> cflags, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, org.apache.hadoop.fs.Options.ChecksumOpt checksumOpt)
void createEncryptionZone(org.apache.hadoop.fs.Path path, String keyName)
createFile(org.apache.hadoop.fs.Path path) - Create a HdfsDataOutputStreamBuilder to create a file on DFS.
org.apache.hadoop.fs.FSDataOutputStream createNonRecursive(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, EnumSet<org.apache.hadoop.fs.CreateFlag> flags, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress) - Same as create(), except fails if parent directory doesn't already exist.
protected HdfsPathHandle createPathHandle(org.apache.hadoop.fs.FileStatus st, org.apache.hadoop.fs.Options.HandleOpt... opts) - Create a handle to an HDFS file.
org.apache.hadoop.fs.Path createSnapshot(org.apache.hadoop.fs.Path path, String snapshotName)
void createSymlink(org.apache.hadoop.fs.Path target, org.apache.hadoop.fs.Path link, boolean createParent)
boolean delete(org.apache.hadoop.fs.Path f)
boolean delete(org.apache.hadoop.fs.Path f, boolean recursive)
void deleteSnapshot(org.apache.hadoop.fs.Path path, String snapshotName)
void disableErasureCodingPolicy(String ecPolicyName) - Disable erasure coding policy.
void disallowSnapshot(org.apache.hadoop.fs.Path path)
void enableErasureCodingPolicy(String ecPolicyName) - Enable erasure coding policy.
void finalizeUpgrade() - Finalize previously upgraded file system state.
protected org.apache.hadoop.fs.Path fixRelativePart(org.apache.hadoop.fs.Path p)
org.apache.hadoop.fs.permission.AclStatus getAclStatus(org.apache.hadoop.fs.Path path)
org.apache.hadoop.security.token.DelegationTokenIssuer[] getAdditionalTokenIssuers()
getAllErasureCodingCodecs() - Retrieve all the erasure coding codecs and coders supported by this file system.
getAllErasureCodingPolicies() - Gets all erasure coding policies from all available child file systems.
long getBytesWithFutureGenerationStamps() - Returns number of bytes within blocks with future generation stamp.
getCanonicalServiceName() - Get a canonical service name for this file system.
org.apache.hadoop.fs.FileSystem[] getChildFileSystems()
org.apache.hadoop.fs.ContentSummary getContentSummary(org.apache.hadoop.fs.Path f)
long getCorruptBlocksCount() - Returns count of blocks with at least one replica marked corrupt.
long getDefaultBlockSize(org.apache.hadoop.fs.Path f)
protected int getDefaultPort()
short getDefaultReplication(org.apache.hadoop.fs.Path f)
org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> getDelegationToken(String renewer) - If no mount points are configured, it works the same as DistributedFileSystem.getDelegationToken(String).
getECTopologyResultForPolicies(String... policyNames) - Verifies if the given policies are supported in the given cluster setup.
getErasureCodingPolicy(org.apache.hadoop.fs.Path path) - Get erasure coding policy information for the specified path.
getEZForPath(org.apache.hadoop.fs.Path path)
org.apache.hadoop.fs.BlockLocation[] getFileBlockLocations(org.apache.hadoop.fs.FileStatus fs, long start, long len)
org.apache.hadoop.fs.BlockLocation[] getFileBlockLocations(org.apache.hadoop.fs.Path p, long start, long len) - The returned BlockLocation will have different formats for replicated and erasure coded file.
org.apache.hadoop.fs.FileChecksum getFileChecksum(org.apache.hadoop.fs.Path f)
org.apache.hadoop.fs.FileChecksum getFileChecksum(org.apache.hadoop.fs.Path f, long length)
org.apache.hadoop.fs.FileEncryptionInfo getFileEncryptionInfo(org.apache.hadoop.fs.Path path)
org.apache.hadoop.fs.FileStatus getFileLinkStatus(org.apache.hadoop.fs.Path f)
org.apache.hadoop.fs.FileStatus getFileStatus(org.apache.hadoop.fs.Path f) - Returns the stat information about the file.
getHedgedReadMetrics() - Returns only default cluster getHedgedReadMetrics.
org.apache.hadoop.fs.Path getHomeDirectory()
getInotifyEventStream(long lastReadTxid)
org.apache.hadoop.crypto.key.KeyProvider getKeyProvider()
org.apache.hadoop.fs.Path getLinkTarget(org.apache.hadoop.fs.Path path)
long getLowRedundancyBlocksCount() - Returns aggregated count of blocks with less redundancy.
long getMissingBlocksCount() - Returns count of blocks with no good replicas left.
long getMissingReplOneBlocksCount() - Returns count of blocks with replication factor 1 and have lost the only replica.
org.apache.hadoop.fs.viewfs.ViewFileSystem.MountPoint[] getMountPoints()
long getPendingDeletionBlocksCount() - Returns count of blocks pending on deletion.
org.apache.hadoop.fs.QuotaUsage getQuotaUsage(org.apache.hadoop.fs.Path f)
getScheme() - Return the protocol scheme for the FileSystem.
org.apache.hadoop.fs.FsServerDefaults getServerDefaults()
org.apache.hadoop.fs.FsServerDefaults getServerDefaults(org.apache.hadoop.fs.Path f)
getSlowDatanodeStats() - Retrieve stats for slow running datanodes.
getSnapshotDiffReport(org.apache.hadoop.fs.Path snapshotDir, String fromSnapshot, String toSnapshot) - Get the difference between two snapshots, or between a snapshot and the current tree of a directory.
getSnapshottableDirListing() - Get the list of snapshottable directories that are owned by the current user.
org.apache.hadoop.fs.FsStatus getStatus()
org.apache.hadoop.fs.FsStatus getStatus(org.apache.hadoop.fs.Path p)
getStoragePolicies() - Deprecated.
org.apache.hadoop.fs.BlockStoragePolicySpi getStoragePolicy(org.apache.hadoop.fs.Path src)
org.apache.hadoop.fs.Path getTrashRoot(org.apache.hadoop.fs.Path path) - Get the root directory of Trash for a path in HDFS.
Collection<org.apache.hadoop.fs.FileStatus> getTrashRoots(boolean allUsers) - Get all the trash roots of HDFS for current user or for all the users.
URI getUri()
long getUsed()
org.apache.hadoop.fs.Path getWorkingDirectory()
byte[] getXAttr(org.apache.hadoop.fs.Path path, String name)
getXAttrs(org.apache.hadoop.fs.Path path)
boolean hasPathCapability(org.apache.hadoop.fs.Path path, String capability) - HDFS client capabilities.
void initialize(URI uri, org.apache.hadoop.conf.Configuration conf)
boolean isFileClosed(org.apache.hadoop.fs.Path src) - Get the close status of a file.
boolean isInSafeMode() - Utility function that returns if the NameNode is in safemode or not.
org.apache.hadoop.fs.RemoteIterator<CacheDirectiveEntry> listCacheDirectives() - List cache directives.
org.apache.hadoop.fs.RemoteIterator<CachePoolEntry> listCachePools() - List all cache pools.
org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.Path> listCorruptFileBlocks(org.apache.hadoop.fs.Path path)
org.apache.hadoop.fs.RemoteIterator<EncryptionZone> listEncryptionZones() - Returns the results from default DFS (fallback).
org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.LocatedFileStatus> listLocatedStatus(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.PathFilter filter) - The BlockLocation of returned LocatedFileStatus will have different formats for replicated and erasure coded file.
org.apache.hadoop.fs.RemoteIterator<OpenFileEntry> listOpenFiles() - Deprecated.
org.apache.hadoop.fs.RemoteIterator<OpenFileEntry> listOpenFiles(EnumSet<OpenFilesIterator.OpenFilesType> openFilesTypes) - Deprecated.
org.apache.hadoop.fs.RemoteIterator<OpenFileEntry> listOpenFiles(EnumSet<OpenFilesIterator.OpenFilesType> openFilesTypes, String path)
org.apache.hadoop.fs.RemoteIterator<ZoneReencryptionStatus> listReencryptionStatus() - Returns the results from default DFS (fallback).
org.apache.hadoop.fs.FileStatus[] listStatus(org.apache.hadoop.fs.Path p) - List all the entries of a directory. Note that this operation is not atomic for a large directory.
org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.FileStatus> listStatusIterator(org.apache.hadoop.fs.Path p) - Returns a remote iterator so that followup calls are made on demand while consuming the entries.
listXAttrs(org.apache.hadoop.fs.Path path)
void metaSave(String pathname)
boolean mkdir(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission) - Create a directory, only when the parent directories exist.
boolean mkdirs(org.apache.hadoop.fs.Path dir)
boolean mkdirs(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission) - Create a directory and its parent directories.
void modifyAclEntries(org.apache.hadoop.fs.Path path, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec)
void modifyCacheDirective(CacheDirectiveInfo info) - Modify a CacheDirective.
void modifyCacheDirective(CacheDirectiveInfo info, EnumSet<CacheFlag> flags) - Modify a CacheDirective.
void modifyCachePool(CachePoolInfo info) - Modify an existing cache pool.
org.apache.hadoop.fs.FSDataInputStream open(org.apache.hadoop.fs.PathHandle fd, int bufferSize) - Opens an FSDataInputStream with the indicated file ID extracted from the PathHandle.
org.apache.hadoop.fs.FSDataInputStream open(org.apache.hadoop.fs.Path f, int bufferSize)
protected HdfsDataOutputStream primitiveCreate(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission absolutePermission, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, org.apache.hadoop.fs.Options.ChecksumOpt checksumOpt)
protected boolean primitiveMkdir(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission absolutePermission)
void provisionEZTrash(org.apache.hadoop.fs.Path path, org.apache.hadoop.fs.permission.FsPermission trashPermission)
org.apache.hadoop.fs.Path provisionSnapshotTrash(org.apache.hadoop.fs.Path path, org.apache.hadoop.fs.permission.FsPermission trashPermission) - HDFS only.
boolean recoverLease(org.apache.hadoop.fs.Path f) - Start the lease recovery of a file.
void reencryptEncryptionZone(org.apache.hadoop.fs.Path zone, HdfsConstants.ReencryptAction action)
void refreshNodes() - Refreshes the list of hosts and excluded hosts from the configured files.
void removeAcl(org.apache.hadoop.fs.Path path)
void removeAclEntries(org.apache.hadoop.fs.Path path, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec)
void removeCacheDirective(long id) - Remove a CacheDirectiveInfo.
void removeCachePool(String poolName) - Remove a cache pool.
void removeDefaultAcl(org.apache.hadoop.fs.Path path)
void removeErasureCodingPolicy(String ecPolicyName) - Remove erasure coding policy.
void removeXAttr(org.apache.hadoop.fs.Path path, String name)
boolean rename(org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst)
void rename(org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst, org.apache.hadoop.fs.Options.Rename... options) - This rename operation is guaranteed to be atomic.
void renameSnapshot(org.apache.hadoop.fs.Path path, String snapshotOldName, String snapshotNewName)
protected org.apache.hadoop.fs.Path resolveLink(org.apache.hadoop.fs.Path f)
org.apache.hadoop.fs.Path resolvePath(org.apache.hadoop.fs.Path f)
boolean restoreFailedStorage(String arg) - Enable/disable/check restoreFailedStorage.
long rollEdits() - Rolls the edit log on the active NameNode.
rollingUpgrade(HdfsConstants.RollingUpgradeAction action) - Rolling upgrade: prepare/finalize/query.
void satisfyStoragePolicy(org.apache.hadoop.fs.Path src) - Set the source path to satisfy storage policy.
void saveNamespace() - Save namespace image.
boolean saveNamespace(long timeWindow, long txGap) - Save namespace image.
void setAcl(org.apache.hadoop.fs.Path path, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec)
void setBalancerBandwidth(long bandwidth) - Requests the namenode to tell all datanodes to use a new, non-persistent bandwidth value for dfs.datanode.balance.bandwidthPerSec.
void setErasureCodingPolicy(org.apache.hadoop.fs.Path path, String ecPolicyName) - Set the source path to the specified erasure coding policy.
void setOwner(org.apache.hadoop.fs.Path p, String username, String groupname)
void setPermission(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission)
void setQuota(org.apache.hadoop.fs.Path src, long namespaceQuota, long storagespaceQuota) - Set a directory's quotas.
void setQuotaByStorageType(org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.StorageType type, long quota) - Set the per type storage quota of a directory.
boolean setReplication(org.apache.hadoop.fs.Path f, short replication)
boolean setSafeMode(HdfsConstants.SafeModeAction action) - Enter, leave or get safe mode.
boolean setSafeMode(HdfsConstants.SafeModeAction action, boolean isChecked) - Enter, leave or get safe mode.
void setStoragePolicy(org.apache.hadoop.fs.Path src, String policyName) - Set the source path to the specified storage policy.
void setTimes(org.apache.hadoop.fs.Path f, long mtime, long atime)
void setVerifyChecksum(boolean verifyChecksum)
void setWorkingDirectory(org.apache.hadoop.fs.Path dir)
void setWriteChecksum(boolean writeChecksum)
void setXAttr(org.apache.hadoop.fs.Path path, String name, byte[] value, EnumSet<org.apache.hadoop.fs.XAttrSetFlag> flag)
org.apache.hadoop.fs.RemoteIterator<SnapshotDiffReportListing> snapshotDiffReportListingRemoteIterator(org.apache.hadoop.fs.Path snapshotDir, String fromSnapshot, String toSnapshot) - Returns a remote iterator so that followup calls are made on demand while consuming the SnapshotDiffReportListing entries.
boolean supportsSymlinks()
boolean truncate(org.apache.hadoop.fs.Path f, long newLength)
void unsetErasureCodingPolicy(org.apache.hadoop.fs.Path path) - Unset the erasure coding policy from the source path.
void unsetStoragePolicy(org.apache.hadoop.fs.Path src)
boolean upgradeStatus() - Get status of upgrade - finalized or not.
Methods inherited from class org.apache.hadoop.hdfs.DistributedFileSystem
append, createMultipartUploader, getDefaultBlockSize, getDefaultReplication, getEnclosingRoot, getErasureCodingPolicyName, getLocatedBlocks, getSnapshotDiffReportListing, getSnapshotListing, isSnapshotTrashRootEnabled, msync, setSafeMode, setSafeMode, toString
Methods inherited from class org.apache.hadoop.fs.FileSystem
append, append, append, areSymlinksEnabled, cancelDeleteOnExit, checkPath, clearStatistics, closeAll, closeAllForUGI, completeLocalOutput, copyFromLocalFile, copyFromLocalFile, copyFromLocalFile, copyFromLocalFile, copyToLocalFile, copyToLocalFile, copyToLocalFile, create, create, create, create, create, create, create, create, create, create, create, createBulkDelete, createDataInputStreamBuilder, createDataInputStreamBuilder, createDataOutputStreamBuilder, createNewFile, createNonRecursive, createNonRecursive, createSnapshot, deleteOnExit, enableSymlinks, exists, get, get, get, getAllStatistics, getBlockSize, getCanonicalUri, getDefaultUri, getFileSystemClass, getFSofPath, getGlobalStorageStatistics, getInitialWorkingDirectory, getLength, getLocal, getName, getNamed, getPathHandle, getReplication, getStatistics, getStatistics, getStorageStatistics, getUsed, globStatus, globStatus, isDirectory, isFile, listFiles, listLocatedStatus, listStatus, listStatus, listStatus, listStatusBatch, makeQualified, mkdirs, moveFromLocalFile, moveFromLocalFile, moveToLocalFile, newInstance, newInstance, newInstance, newInstanceLocal, open, open, openFile, openFile, openFileWithOptions, openFileWithOptions, primitiveMkdir, printStatistics, processDeleteOnExit, setDefaultUri, setDefaultUri, setXAttr, startLocalOutput
Methods inherited from class org.apache.hadoop.conf.Configured
getConf, setConf
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait
Methods inherited from interface org.apache.hadoop.security.token.DelegationTokenIssuer
addDelegationTokens
-
Constructor Details
-
ViewDistributedFileSystem
public ViewDistributedFileSystem()
-
-
Method Details
-
initialize
- Overrides:
initialize in class DistributedFileSystem
- Throws:
IOException
-
getUri
- Overrides:
getUri in class DistributedFileSystem
-
getScheme
Description copied from class: DistributedFileSystem
Return the protocol scheme for the FileSystem.
- Overrides:
getScheme in class DistributedFileSystem
- Returns:
hdfs
-
getWorkingDirectory
public org.apache.hadoop.fs.Path getWorkingDirectory()
- Overrides:
getWorkingDirectory in class DistributedFileSystem
-
setWorkingDirectory
public void setWorkingDirectory(org.apache.hadoop.fs.Path dir)
- Overrides:
setWorkingDirectory in class DistributedFileSystem
-
getHomeDirectory
public org.apache.hadoop.fs.Path getHomeDirectory()
- Overrides:
getHomeDirectory in class DistributedFileSystem
-
getHedgedReadMetrics
Returns only default cluster getHedgedReadMetrics.
- Overrides:
getHedgedReadMetrics in class DistributedFileSystem
- Returns:
- object of DFSHedgedReadMetrics
-
getFileBlockLocations
public org.apache.hadoop.fs.BlockLocation[] getFileBlockLocations(org.apache.hadoop.fs.FileStatus fs, long start, long len) throws IOException
- Overrides:
getFileBlockLocations in class DistributedFileSystem
- Throws:
IOException
-
getFileBlockLocations
public org.apache.hadoop.fs.BlockLocation[] getFileBlockLocations(org.apache.hadoop.fs.Path p, long start, long len) throws IOException
Description copied from class: DistributedFileSystem
The returned BlockLocation will have different formats for replicated and erasure coded file. Please refer to FileSystem.getFileBlockLocations(FileStatus, long, long) for more details.
- Overrides:
getFileBlockLocations in class DistributedFileSystem
- Throws:
IOException
-
setVerifyChecksum
public void setVerifyChecksum(boolean verifyChecksum)
- Overrides:
setVerifyChecksum in class DistributedFileSystem
-
recoverLease
Description copied from class: DistributedFileSystem
Start the lease recovery of a file.
- Specified by:
recoverLease in interface org.apache.hadoop.fs.LeaseRecoverable
- Overrides:
recoverLease in class DistributedFileSystem
- Parameters:
f - a file
- Returns:
true if the file is already closed
- Throws:
IOException - if an error occurs
-
open
public org.apache.hadoop.fs.FSDataInputStream open(org.apache.hadoop.fs.Path f, int bufferSize) throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, IOException
- Overrides:
open in class DistributedFileSystem
- Throws:
org.apache.hadoop.security.AccessControlException
FileNotFoundException
IOException
-
open
public org.apache.hadoop.fs.FSDataInputStream open(org.apache.hadoop.fs.PathHandle fd, int bufferSize) throws IOException
Description copied from class: DistributedFileSystem
Opens an FSDataInputStream with the indicated file ID extracted from the PathHandle.
- Overrides:
open in class DistributedFileSystem
- Parameters:
fd - Reference to entity in this FileSystem.
bufferSize - the size of the buffer to be used.
- Throws:
org.apache.hadoop.fs.InvalidPathHandleException - If PathHandle constraints do not hold
IOException - On I/O errors
-
createPathHandle
protected HdfsPathHandle createPathHandle(org.apache.hadoop.fs.FileStatus st, org.apache.hadoop.fs.Options.HandleOpt... opts)
Description copied from class: DistributedFileSystem
Create a handle to an HDFS file.
- Overrides:
createPathHandle in class DistributedFileSystem
- Parameters:
st - HdfsFileStatus instance from NameNode
opts - Standard handle arguments
- Returns:
A handle to the file.
-
append
public org.apache.hadoop.fs.FSDataOutputStream append(org.apache.hadoop.fs.Path f, int bufferSize, org.apache.hadoop.util.Progressable progress) throws IOException
- Overrides:
append in class DistributedFileSystem
- Throws:
IOException
-
append
public org.apache.hadoop.fs.FSDataOutputStream append(org.apache.hadoop.fs.Path f, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, int bufferSize, org.apache.hadoop.util.Progressable progress) throws IOException
Description copied from class: DistributedFileSystem
Append to an existing file (optional operation).
- Overrides:
append in class DistributedFileSystem
- Parameters:
f - the existing file to be appended.
flag - Flags for the Append operation. CreateFlag.APPEND is mandatory to be present.
bufferSize - the size of the buffer to be used.
progress - for reporting progress if it is not null.
- Returns:
Returns instance of FSDataOutputStream
- Throws:
IOException
-
append
public org.apache.hadoop.fs.FSDataOutputStream append(org.apache.hadoop.fs.Path f, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, int bufferSize, org.apache.hadoop.util.Progressable progress, InetSocketAddress[] favoredNodes) throws IOException
Description copied from class: DistributedFileSystem
Append to an existing file (optional operation).
- Overrides:
append in class DistributedFileSystem
- Parameters:
f - the existing file to be appended.
flag - Flags for the Append operation. CreateFlag.APPEND is mandatory to be present.
bufferSize - the size of the buffer to be used.
progress - for reporting progress if it is not null.
favoredNodes - Favored nodes for new blocks
- Returns:
Returns instance of FSDataOutputStream
- Throws:
IOException
-
create
public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, boolean overwrite, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress) throws IOException
- Overrides:
create in class DistributedFileSystem
- Throws:
IOException
-
create
public HdfsDataOutputStream create(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, boolean overwrite, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, InetSocketAddress[] favoredNodes) throws IOException
Description copied from class: DistributedFileSystem
Same as DistributedFileSystem.create(Path, FsPermission, boolean, int, short, long, Progressable) with the addition of favoredNodes, a hint to the namenode about where to place the file blocks. The favored nodes hint is not persisted in HDFS, hence it may be honored only at creation time. With favored nodes, blocks will be pinned on the datanodes to prevent the balancer from moving them, although HDFS may still move the blocks away from favored nodes during replication. A value of null means no favored nodes for this create.
- Overrides:
create in class DistributedFileSystem
- Throws:
IOException
-
create
public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, EnumSet<org.apache.hadoop.fs.CreateFlag> cflags, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, org.apache.hadoop.fs.Options.ChecksumOpt checksumOpt) throws IOException
- Overrides:
create in class DistributedFileSystem
- Throws:
IOException
-
primitiveCreate
protected HdfsDataOutputStream primitiveCreate(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission absolutePermission, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, org.apache.hadoop.fs.Options.ChecksumOpt checksumOpt) throws IOException
- Overrides:
primitiveCreate in class DistributedFileSystem
- Throws:
IOException
-
createNonRecursive
public org.apache.hadoop.fs.FSDataOutputStream createNonRecursive(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, EnumSet<org.apache.hadoop.fs.CreateFlag> flags, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress) throws IOException
Description copied from class: DistributedFileSystem
Same as create(), except fails if parent directory doesn't already exist.
- Overrides:
createNonRecursive in class DistributedFileSystem
- Throws:
IOException
-
setReplication
public boolean setReplication(org.apache.hadoop.fs.Path f, short replication) throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, IOException
- Overrides:
setReplication in class DistributedFileSystem
- Throws:
org.apache.hadoop.security.AccessControlException
FileNotFoundException
IOException
-
setStoragePolicy
Description copied from class: DistributedFileSystem
Set the source path to the specified storage policy.
- Overrides:
setStoragePolicy in class DistributedFileSystem
- Parameters:
src - The source path referring to either a directory or a file.
policyName - The name of the storage policy.
- Throws:
IOException
-
unsetStoragePolicy
- Overrides:
unsetStoragePolicy in class DistributedFileSystem
- Throws:
IOException
-
getStoragePolicy
public org.apache.hadoop.fs.BlockStoragePolicySpi getStoragePolicy(org.apache.hadoop.fs.Path src) throws IOException
- Overrides:
getStoragePolicy in class DistributedFileSystem
- Throws:
IOException
-
getAllStoragePolicies
- Overrides:
getAllStoragePolicies in class DistributedFileSystem
- Throws:
IOException
-
getBytesWithFutureGenerationStamps
Description copied from class: DistributedFileSystem
Returns number of bytes within blocks with future generation stamp. These are bytes that will be potentially deleted if we forceExit from safe mode.
- Overrides:
getBytesWithFutureGenerationStamps in class DistributedFileSystem
- Returns:
- number of bytes.
- Throws:
IOException
-
getStoragePolicies
Deprecated.
Description copied from class: DistributedFileSystem
Deprecated. Prefer FileSystem.getAllStoragePolicies()
- Overrides:
getStoragePolicies in class DistributedFileSystem
- Throws:
IOException
-
concat
public void concat(org.apache.hadoop.fs.Path trg, org.apache.hadoop.fs.Path[] psrcs) throws IOException
Description copied from class: DistributedFileSystem
Move blocks from srcs to trg and delete srcs afterwards. The file block sizes must be the same.
- Overrides:
concat in class DistributedFileSystem
- Parameters:
trg - existing file to append to
psrcs - list of files (same block size, same replication)
- Throws:
IOException
-
rename
public boolean rename(org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst) throws IOException
- Overrides:
rename in class DistributedFileSystem
- Throws:
IOException
-
rename
public void rename(org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst, org.apache.hadoop.fs.Options.Rename... options) throws IOException
Description copied from class: DistributedFileSystem
This rename operation is guaranteed to be atomic.
- Overrides:
rename in class DistributedFileSystem
- Throws:
IOException
-
truncate
- Overrides:
truncate in class DistributedFileSystem
- Throws:
IOException
-
delete
public boolean delete(org.apache.hadoop.fs.Path f, boolean recursive) throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, IOException
- Overrides:
delete in class DistributedFileSystem
- Throws:
org.apache.hadoop.security.AccessControlException
FileNotFoundException
IOException
-
getContentSummary
public org.apache.hadoop.fs.ContentSummary getContentSummary(org.apache.hadoop.fs.Path f) throws IOException
- Overrides:
getContentSummary in class DistributedFileSystem
- Throws:
IOException
-
getQuotaUsage
public org.apache.hadoop.fs.QuotaUsage getQuotaUsage(org.apache.hadoop.fs.Path f) throws IOException
- Overrides:
getQuotaUsage in class DistributedFileSystem
- Throws:
IOException
-
setQuota
public void setQuota(org.apache.hadoop.fs.Path src, long namespaceQuota, long storagespaceQuota) throws IOException
Description copied from class: DistributedFileSystem
Set a directory's quotas.
- Overrides:
setQuota in class DistributedFileSystem
- Throws:
IOException
- See Also:
-
setQuotaByStorageType
public void setQuotaByStorageType(org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.StorageType type, long quota) throws IOException
Description copied from class: DistributedFileSystem
Set the per type storage quota of a directory.
- Overrides:
setQuotaByStorageType in class DistributedFileSystem
- Parameters:
src - target directory whose quota is to be modified.
type - storage type of the specific storage type quota to be modified.
quota - value of the specific storage type quota to be modified. Maybe HdfsConstants.QUOTA_RESET to clear quota by storage type.
- Throws:
IOException
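A hedged sketch combining the quota calls above: the cluster URI, the directory path, and the chosen limits are hypothetical, and a running cluster with admin privileges is assumed.

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.QuotaUsage;
import org.apache.hadoop.fs.StorageType;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class QuotaSketch {
  public static void main(String[] args) throws Exception {
    try (FileSystem fs = FileSystem.get(URI.create("hdfs://cluster"),
        new Configuration())) {
      DistributedFileSystem dfs = (DistributedFileSystem) fs;
      Path dir = new Path("/projects/etl");
      // Limit the directory to 1M names and 10 TB of raw storage.
      dfs.setQuota(dir, 1_000_000L, 10L * 1024 * 1024 * 1024 * 1024);
      // Separately cap SSD usage; HdfsConstants.QUOTA_RESET would clear
      // a per-storage-type quota again.
      dfs.setQuotaByStorageType(dir, StorageType.SSD,
          1024L * 1024 * 1024 * 1024);
      QuotaUsage usage = dfs.getQuotaUsage(dir);
      System.out.println("names used: " + usage.getFileAndDirectoryCount());
    }
  }
}
```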
-
listStatus
Description copied from class: DistributedFileSystem
List all the entries of a directory. Note that this operation is not atomic for a large directory: the entries of a directory may be fetched from the NameNode multiple times. It only guarantees that each name occurs once if the directory undergoes changes between the calls. If any immediate child of the given path f is a symlink, the returned FileStatus object for that child represents the symlink itself. It is not resolved to the target path, so the target path's FileStatus object is not returned; the target path is available via getSymlink on that child's FileStatus object. Because it represents a symlink, isDirectory on that child's FileStatus returns false. To get the FileStatus of the target path for such a child, call getFileStatus with the child's symlink path. See DistributedFileSystem.getFileStatus(Path f).
- Overrides:
listStatus in class DistributedFileSystem
- Throws:
IOException
-
listLocatedStatus
public org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.LocatedFileStatus> listLocatedStatus(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.PathFilter filter) throws FileNotFoundException, IOException
Description copied from class: DistributedFileSystem
The BlockLocation of a returned LocatedFileStatus has different formats for replicated and erasure coded files. Please refer to FileSystem.getFileBlockLocations(FileStatus, long, long) for more details.
- Overrides:
listLocatedStatus in class DistributedFileSystem
- Throws:
FileNotFoundException, IOException
-
listStatusIterator
public org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.FileStatus> listStatusIterator(org.apache.hadoop.fs.Path p) throws IOException
Description copied from class: DistributedFileSystem
Returns a remote iterator so that followup calls are made on demand while consuming the entries. This reduces memory consumption during listing of a large directory.
- Overrides:
listStatusIterator in class DistributedFileSystem
- Parameters:
p - target path
- Returns:
remote iterator
- Throws:
IOException
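The on-demand iteration described above can be sketched as follows; the URI and the `/logs` path are hypothetical, and a running cluster is assumed.

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public class ListingSketch {
  public static void main(String[] args) throws Exception {
    try (FileSystem fs = FileSystem.get(URI.create("hdfs://cluster"),
        new Configuration())) {
      // Entries are fetched from the NameNode in batches on demand, so a
      // huge directory never has to be materialized in memory at once.
      RemoteIterator<FileStatus> it = fs.listStatusIterator(new Path("/logs"));
      while (it.hasNext()) {
        FileStatus st = it.next();
        System.out.println(st.getPath() + "\t" + st.getLen());
      }
    }
  }
}
```

Prefer this iterator over `listStatus` whenever the directory may be large; unlike a Java `Iterator`, `hasNext()`/`next()` may throw IOException.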
-
batchedListStatusIterator
public org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.PartialListing<org.apache.hadoop.fs.FileStatus>> batchedListStatusIterator(List<org.apache.hadoop.fs.Path> paths) throws IOException
- Specified by:
batchedListStatusIterator in interface org.apache.hadoop.fs.BatchListingOperations
- Overrides:
batchedListStatusIterator in class DistributedFileSystem
- Throws:
IOException
-
batchedListLocatedStatusIterator
public org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.PartialListing<org.apache.hadoop.fs.LocatedFileStatus>> batchedListLocatedStatusIterator(List<org.apache.hadoop.fs.Path> paths) throws IOException
- Specified by:
batchedListLocatedStatusIterator in interface org.apache.hadoop.fs.BatchListingOperations
- Overrides:
batchedListLocatedStatusIterator in class DistributedFileSystem
- Throws:
IOException
-
mkdir
public boolean mkdir(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission) throws IOException
Description copied from class: DistributedFileSystem
Create a directory, only when the parent directories exist. See FsPermission.applyUMask(FsPermission) for details of how the permission is applied.
- Overrides:
mkdir in class DistributedFileSystem
- Parameters:
f - The path to create
permission - The permission. See FsPermission#applyUMask for details about how this is used to calculate the effective permission.
- Throws:
IOException
-
mkdirs
public boolean mkdirs(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission) throws IOException
Description copied from class: DistributedFileSystem
Create a directory and its parent directories. See FsPermission.applyUMask(FsPermission) for details of how the permission is applied.
- Overrides:
mkdirs in class DistributedFileSystem
- Parameters:
f - The path to create
permission - The permission. See FsPermission#applyUMask for details about how this is used to calculate the effective permission.
- Throws:
IOException
-
primitiveMkdir
protected boolean primitiveMkdir(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission absolutePermission) throws IOException
- Overrides:
primitiveMkdir in class DistributedFileSystem
- Throws:
IOException
-
close
- Specified by:
close in interface AutoCloseable
- Specified by:
close in interface Closeable
- Overrides:
close in class DistributedFileSystem
- Throws:
IOException
-
getClient
- Overrides:
getClient in class DistributedFileSystem
-
getStatus
- Overrides:
getStatus in class DistributedFileSystem
- Throws:
IOException
-
getMissingBlocksCount
Description copied from class: DistributedFileSystem
Returns the count of blocks with no good replicas left. Normally this should be zero.
- Overrides:
getMissingBlocksCount in class DistributedFileSystem
- Throws:
IOException
-
getPendingDeletionBlocksCount
Description copied from class: DistributedFileSystem
Returns the count of blocks pending deletion.
- Overrides:
getPendingDeletionBlocksCount in class DistributedFileSystem
- Throws:
IOException
-
getMissingReplOneBlocksCount
Description copied from class: DistributedFileSystem
Returns the count of blocks with a replication factor of 1 that have lost their only replica.
- Overrides:
getMissingReplOneBlocksCount in class DistributedFileSystem
- Throws:
IOException
-
getLowRedundancyBlocksCount
Description copied from class: DistributedFileSystem
Returns the aggregated count of blocks with less redundancy.
- Overrides:
getLowRedundancyBlocksCount in class DistributedFileSystem
- Throws:
IOException
-
getCorruptBlocksCount
Description copied from class: DistributedFileSystem
Returns the count of blocks with at least one replica marked corrupt.
- Overrides:
getCorruptBlocksCount in class DistributedFileSystem
- Throws:
IOException
-
listCorruptFileBlocks
public org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.Path> listCorruptFileBlocks(org.apache.hadoop.fs.Path path) throws IOException
- Overrides:
listCorruptFileBlocks in class DistributedFileSystem
- Throws:
IOException
-
getDataNodeStats
- Overrides:
getDataNodeStats in class DistributedFileSystem
- Returns:
datanode statistics.
- Throws:
IOException
-
getDataNodeStats
- Overrides:
getDataNodeStats in class DistributedFileSystem
- Returns:
datanode statistics for the given type.
- Throws:
IOException
-
setSafeMode
Description copied from class: DistributedFileSystem
Enter, leave or get safe mode.
- Overrides:
setSafeMode in class DistributedFileSystem
- Throws:
IOException
-
setSafeMode
public boolean setSafeMode(HdfsConstants.SafeModeAction action, boolean isChecked) throws IOException
Description copied from class: DistributedFileSystem
Enter, leave or get safe mode.
- Overrides:
setSafeMode in class DistributedFileSystem
- Parameters:
action - One of SafeModeAction.ENTER, SafeModeAction.LEAVE and SafeModeAction.GET.
isChecked - If true check only the Active NNs' status, else check the first NN's status.
- Throws:
IOException
-
saveNamespace
Description copied from class: DistributedFileSystem
Save the namespace image.
- Overrides:
saveNamespace in class DistributedFileSystem
- Parameters:
timeWindow - The NameNode can ignore this command if the latest checkpoint was done within the given time period (in seconds).
- Returns:
true if a new checkpoint has been made
- Throws:
IOException
-
saveNamespace
Description copied from class: DistributedFileSystem
Save the namespace image. The NameNode always performs the checkpoint.
- Overrides:
saveNamespace in class DistributedFileSystem
- Throws:
IOException
-
rollEdits
Description copied from class: DistributedFileSystem
Rolls the edit log on the active NameNode. Requires super-user privileges.
- Overrides:
rollEdits in class DistributedFileSystem
- Returns:
the transaction ID of the newly created segment
- Throws:
IOException
-
restoreFailedStorage
Description copied from class: DistributedFileSystem
Enable, disable or check restoreFailedStorage.
- Overrides:
restoreFailedStorage in class DistributedFileSystem
- Throws:
IOException
-
refreshNodes
Description copied from class: DistributedFileSystem
Refreshes the list of hosts and excluded hosts from the configured files.
- Overrides:
refreshNodes in class DistributedFileSystem
- Throws:
IOException
-
finalizeUpgrade
Description copied from class: DistributedFileSystem
Finalize a previously upgraded file system state.
- Overrides:
finalizeUpgrade in class DistributedFileSystem
- Throws:
IOException
-
upgradeStatus
Description copied from class: DistributedFileSystem
Get the status of an upgrade - finalized or not.
- Overrides:
upgradeStatus in class DistributedFileSystem
- Returns:
true if the upgrade is finalized or if no upgrade is in progress, false otherwise.
- Throws:
IOException
-
rollingUpgrade
public RollingUpgradeInfo rollingUpgrade(HdfsConstants.RollingUpgradeAction action) throws IOException
Description copied from class: DistributedFileSystem
Rolling upgrade: prepare/finalize/query.
- Overrides:
rollingUpgrade in class DistributedFileSystem
- Throws:
IOException
-
metaSave
- Overrides:
metaSave in class DistributedFileSystem
- Throws:
IOException
-
getServerDefaults
- Overrides:
getServerDefaults in class DistributedFileSystem
- Throws:
IOException
-
getFileStatus
public org.apache.hadoop.fs.FileStatus getFileStatus(org.apache.hadoop.fs.Path f) throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, IOException
Description copied from class: DistributedFileSystem
Returns the stat information about the file. If the given path is a symlink, it is resolved to the target path and the resolved path's FileStatus object is returned. The result is not represented as a symlink, and isDirectory returns true if the resolved path is a directory, false otherwise.
- Overrides:
getFileStatus in class DistributedFileSystem
- Throws:
FileNotFoundException - if the file does not exist.
org.apache.hadoop.security.AccessControlException
IOException
-
createSymlink
public void createSymlink(org.apache.hadoop.fs.Path target, org.apache.hadoop.fs.Path link, boolean createParent) throws IOException
- Overrides:
createSymlink in class DistributedFileSystem
- Throws:
IOException
-
supportsSymlinks
public boolean supportsSymlinks()
- Overrides:
supportsSymlinks in class DistributedFileSystem
-
getFileLinkStatus
public org.apache.hadoop.fs.FileStatus getFileLinkStatus(org.apache.hadoop.fs.Path f) throws IOException
- Overrides:
getFileLinkStatus in class DistributedFileSystem
- Throws:
IOException
-
getLinkTarget
- Overrides:
getLinkTarget in class DistributedFileSystem
- Throws:
IOException
-
resolveLink
- Overrides:
resolveLink in class DistributedFileSystem
- Throws:
IOException
-
getFileChecksum
public org.apache.hadoop.fs.FileChecksum getFileChecksum(org.apache.hadoop.fs.Path f) throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, IOException
- Overrides:
getFileChecksum in class DistributedFileSystem
- Throws:
org.apache.hadoop.security.AccessControlException, FileNotFoundException, IOException
-
setPermission
public void setPermission(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission) throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, IOException
- Overrides:
setPermission in class DistributedFileSystem
- Throws:
org.apache.hadoop.security.AccessControlException, FileNotFoundException, IOException
-
setOwner
public void setOwner(org.apache.hadoop.fs.Path f, String username, String groupname) throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, IOException
- Overrides:
setOwner in class DistributedFileSystem
- Throws:
org.apache.hadoop.security.AccessControlException, FileNotFoundException, IOException
-
setTimes
public void setTimes(org.apache.hadoop.fs.Path f, long mtime, long atime) throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, IOException
- Overrides:
setTimes in class DistributedFileSystem
- Throws:
org.apache.hadoop.security.AccessControlException, FileNotFoundException, IOException
-
getDefaultPort
protected int getDefaultPort()
- Overrides:
getDefaultPort in class DistributedFileSystem
-
getDelegationToken
public org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> getDelegationToken(String renewer) throws IOException
If no mount points are configured, this works the same as DistributedFileSystem.getDelegationToken(String). If mount points are configured and a default fs (linkFallback) is configured, it returns the default fs delegation token. Otherwise it returns null.
- Specified by:
getDelegationToken in interface org.apache.hadoop.security.token.DelegationTokenIssuer
- Overrides:
getDelegationToken in class DistributedFileSystem
- Throws:
IOException
-
setBalancerBandwidth
Description copied from class: DistributedFileSystem
Requests the namenode to tell all datanodes to use a new, non-persistent bandwidth value for dfs.datanode.balance.bandwidthPerSec. The bandwidth parameter is the max number of bytes per second of network bandwidth to be used by a datanode during balancing.
- Overrides:
setBalancerBandwidth in class DistributedFileSystem
- Parameters:
bandwidth - Balancer bandwidth in bytes per second for all datanodes.
- Throws:
IOException
-
getCanonicalServiceName
Description copied from class: DistributedFileSystem
Get a canonical service name for this file system. If the URI is logical, the hostname part of the URI will be returned.
- Specified by:
getCanonicalServiceName in interface org.apache.hadoop.security.token.DelegationTokenIssuer
- Overrides:
getCanonicalServiceName in class DistributedFileSystem
- Returns:
a service string that uniquely identifies this file system.
-
canonicalizeUri
- Overrides:
canonicalizeUri in class DistributedFileSystem
-
isInSafeMode
Description copied from class: DistributedFileSystem
Utility function that returns whether the NameNode is in safemode or not. In HA mode, this API returns only the Active NN's safemode status.
- Overrides:
isInSafeMode in class DistributedFileSystem
- Returns:
true if the NameNode is in safemode, false otherwise.
- Throws:
IOException - when there is an issue communicating with the NameNode
-
allowSnapshot
- Overrides:
allowSnapshot in class DistributedFileSystem
- Throws:
IOException
-
disallowSnapshot
- Overrides:
disallowSnapshot in class DistributedFileSystem
- Throws:
IOException
-
createSnapshot
public org.apache.hadoop.fs.Path createSnapshot(org.apache.hadoop.fs.Path path, String snapshotName) throws IOException
- Overrides:
createSnapshot in class DistributedFileSystem
- Throws:
IOException
-
renameSnapshot
public void renameSnapshot(org.apache.hadoop.fs.Path path, String snapshotOldName, String snapshotNewName) throws IOException
- Overrides:
renameSnapshot in class DistributedFileSystem
- Throws:
IOException
-
getSnapshottableDirListing
Description copied from class: DistributedFileSystem
Get the list of snapshottable directories that are owned by the current user. Returns all the snapshottable directories if the current user is a super user.
- Overrides:
getSnapshottableDirListing in class DistributedFileSystem
- Returns:
The list of all the current snapshottable directories.
- Throws:
IOException - If an I/O error occurred.
-
deleteSnapshot
- Overrides:
deleteSnapshot in class DistributedFileSystem
- Throws:
IOException
-
snapshotDiffReportListingRemoteIterator
public org.apache.hadoop.fs.RemoteIterator<SnapshotDiffReportListing> snapshotDiffReportListingRemoteIterator(org.apache.hadoop.fs.Path snapshotDir, String fromSnapshot, String toSnapshot) throws IOException
Description copied from class: DistributedFileSystem
Returns a remote iterator so that followup calls are made on demand while consuming the SnapshotDiffReportListing entries. This reduces memory consumption overhead when the snapshot diff report is huge.
- Overrides:
snapshotDiffReportListingRemoteIterator in class DistributedFileSystem
- Parameters:
snapshotDir - full path of the directory where snapshots are taken
fromSnapshot - snapshot name of the from point. Null indicates the current tree.
toSnapshot - snapshot name of the to point. Null indicates the current tree.
- Returns:
Remote iterator
- Throws:
IOException
-
getSnapshotDiffReport
public SnapshotDiffReport getSnapshotDiffReport(org.apache.hadoop.fs.Path snapshotDir, String fromSnapshot, String toSnapshot) throws IOException
Description copied from class: DistributedFileSystem
Get the difference between two snapshots, or between a snapshot and the current tree of a directory.
- Overrides:
getSnapshotDiffReport in class DistributedFileSystem
- Throws:
IOException
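A hedged sketch of the snapshot workflow around the diff call above: the URI, the `/warehouse` path, and the snapshot names are hypothetical; a running cluster and sufficient privileges (allowSnapshot requires admin) are assumed.

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport;

public class SnapshotDiffSketch {
  public static void main(String[] args) throws Exception {
    try (FileSystem fs = FileSystem.get(URI.create("hdfs://cluster"),
        new Configuration())) {
      DistributedFileSystem dfs = (DistributedFileSystem) fs;
      Path dir = new Path("/warehouse");
      // Mark the directory snapshottable (usually a one-time admin step).
      dfs.allowSnapshot(dir);
      dfs.createSnapshot(dir, "s1");
      // ... files are added, deleted, or renamed under /warehouse ...
      dfs.createSnapshot(dir, "s2");
      // Diff two snapshots; null for toSnapshot would mean the current tree.
      SnapshotDiffReport report = dfs.getSnapshotDiffReport(dir, "s1", "s2");
      System.out.println(report);
    }
  }
}
```

For very large diffs, the iterator variant above (snapshotDiffReportListingRemoteIterator) avoids holding the whole report in memory.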
-
isFileClosed
Description copied from class: DistributedFileSystem
Get the close status of a file.
- Specified by:
isFileClosed in interface org.apache.hadoop.fs.LeaseRecoverable
- Overrides:
isFileClosed in class DistributedFileSystem
- Parameters:
src - The path to the file
- Returns:
true if the file is closed
- Throws:
FileNotFoundException - if the file does not exist.
IOException - If an I/O error occurred
-
addCacheDirective
- Overrides:
addCacheDirective in class DistributedFileSystem
- Throws:
IOException
-
addCacheDirective
Description copied from class: DistributedFileSystem
Add a new CacheDirective.
- Overrides:
addCacheDirective in class DistributedFileSystem
- Parameters:
info - Information about a directive to add.
flags - CacheFlags to use for this operation.
- Returns:
the ID of the directive that was created.
- Throws:
IOException - if the directive could not be added
-
modifyCacheDirective
- Overrides:
modifyCacheDirective in class DistributedFileSystem
- Throws:
IOException
-
modifyCacheDirective
public void modifyCacheDirective(CacheDirectiveInfo info, EnumSet<CacheFlag> flags) throws IOException
Description copied from class: DistributedFileSystem
Modify a CacheDirective.
- Overrides:
modifyCacheDirective in class DistributedFileSystem
- Parameters:
info - Information about the directive to modify. You must set the ID to indicate which CacheDirective you want to modify.
flags - CacheFlags to use for this operation.
- Throws:
IOException - if the directive could not be modified
-
removeCacheDirective
Description copied from class: DistributedFileSystem
Remove a CacheDirectiveInfo.
- Overrides:
removeCacheDirective in class DistributedFileSystem
- Parameters:
id - identifier of the CacheDirectiveInfo to remove
- Throws:
IOException - if the directive could not be removed
-
listCacheDirectives
public org.apache.hadoop.fs.RemoteIterator<CacheDirectiveEntry> listCacheDirectives(CacheDirectiveInfo filter) throws IOException
Description copied from class: DistributedFileSystem
List cache directives. Incrementally fetches results from the server.
- Overrides:
listCacheDirectives in class DistributedFileSystem
- Parameters:
filter - Filter parameters to use when listing the directives, null to list all directives visible to us.
- Returns:
A RemoteIterator which returns CacheDirectiveInfo objects.
- Throws:
IOException
-
addCachePool
Description copied from class: DistributedFileSystem
Add a cache pool.
- Overrides:
addCachePool in class DistributedFileSystem
- Parameters:
info - The request to add a cache pool.
- Throws:
IOException - If the request could not be completed.
-
modifyCachePool
Description copied from class: DistributedFileSystem
Modify an existing cache pool.
- Overrides:
modifyCachePool in class DistributedFileSystem
- Parameters:
info - The request to modify a cache pool.
- Throws:
IOException - If the request could not be completed.
-
removeCachePool
Description copied from class: DistributedFileSystem
Remove a cache pool.
- Overrides:
removeCachePool in class DistributedFileSystem
- Parameters:
poolName - Name of the cache pool to remove.
- Throws:
IOException - if the cache pool did not exist, or could not be removed.
-
listCachePools
Description copied from class: DistributedFileSystem
List all cache pools.
- Overrides:
listCachePools in class DistributedFileSystem
- Returns:
A remote iterator from which you can get CachePoolEntry objects. Requests will be made as needed.
- Throws:
IOException - If there was an error listing cache pools.
-
modifyAclEntries
public void modifyAclEntries(org.apache.hadoop.fs.Path path, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOException
- Overrides:
modifyAclEntries in class DistributedFileSystem
- Throws:
IOException
-
removeAclEntries
public void removeAclEntries(org.apache.hadoop.fs.Path path, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOException
- Overrides:
removeAclEntries in class DistributedFileSystem
- Throws:
IOException
-
removeDefaultAcl
- Overrides:
removeDefaultAcl in class DistributedFileSystem
- Throws:
IOException
-
removeAcl
- Overrides:
removeAcl in class DistributedFileSystem
- Throws:
IOException
-
setAcl
public void setAcl(org.apache.hadoop.fs.Path path, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOException
- Overrides:
setAcl in class DistributedFileSystem
- Throws:
IOException
-
getAclStatus
public org.apache.hadoop.fs.permission.AclStatus getAclStatus(org.apache.hadoop.fs.Path path) throws IOException
- Overrides:
getAclStatus in class DistributedFileSystem
- Throws:
IOException
-
createEncryptionZone
- Overrides:
createEncryptionZone in class DistributedFileSystem
- Throws:
IOException
-
getEZForPath
- Overrides:
getEZForPath in class DistributedFileSystem
- Throws:
IOException
-
listEncryptionZones
Returns the results from the default DFS (fallback). If you want the results from specific clusters, please invoke this on the child fs instances directly.
- Overrides:
listEncryptionZones in class DistributedFileSystem
- Throws:
IOException
-
reencryptEncryptionZone
public void reencryptEncryptionZone(org.apache.hadoop.fs.Path zone, HdfsConstants.ReencryptAction action) throws IOException
- Overrides:
reencryptEncryptionZone in class DistributedFileSystem
- Throws:
IOException
-
listReencryptionStatus
public org.apache.hadoop.fs.RemoteIterator<ZoneReencryptionStatus> listReencryptionStatus() throws IOException
Returns the results from the default DFS (fallback). If you want the results from specific clusters, please invoke this on the child fs instances directly.
- Overrides:
listReencryptionStatus in class DistributedFileSystem
- Throws:
IOException
-
getFileEncryptionInfo
public org.apache.hadoop.fs.FileEncryptionInfo getFileEncryptionInfo(org.apache.hadoop.fs.Path path) throws IOException
- Overrides:
getFileEncryptionInfo in class DistributedFileSystem
- Throws:
IOException
-
provisionEZTrash
public void provisionEZTrash(org.apache.hadoop.fs.Path path, org.apache.hadoop.fs.permission.FsPermission trashPermission) throws IOException
- Overrides:
provisionEZTrash in class DistributedFileSystem
- Throws:
IOException
-
provisionSnapshotTrash
public org.apache.hadoop.fs.Path provisionSnapshotTrash(org.apache.hadoop.fs.Path path, org.apache.hadoop.fs.permission.FsPermission trashPermission) throws IOException
Description copied from class: DistributedFileSystem
HDFS only. Provision snapshottable directory trash.
- Overrides:
provisionSnapshotTrash in class DistributedFileSystem
- Parameters:
path - Path to a snapshottable directory.
trashPermission - Expected FsPermission of the trash root.
- Returns:
Path of the provisioned trash root
- Throws:
IOException
-
setXAttr
public void setXAttr(org.apache.hadoop.fs.Path path, String name, byte[] value, EnumSet<org.apache.hadoop.fs.XAttrSetFlag> flag) throws IOException
- Overrides:
setXAttr in class DistributedFileSystem
- Throws:
IOException
-
getXAttr
- Overrides:
getXAttr in class DistributedFileSystem
- Throws:
IOException
-
getXAttrs
- Overrides:
getXAttrs in class DistributedFileSystem
- Throws:
IOException
-
getXAttrs
public Map<String,byte[]> getXAttrs(org.apache.hadoop.fs.Path path, List<String> names) throws IOException
- Overrides:
getXAttrs in class DistributedFileSystem
- Throws:
IOException
-
listXAttrs
- Overrides:
listXAttrs in class DistributedFileSystem
- Throws:
IOException
-
removeXAttr
- Overrides:
removeXAttr in class DistributedFileSystem
- Throws:
IOException
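The extended-attribute methods above can be sketched as a round trip; the URI, the file path, and the attribute name `user.origin` are hypothetical, and a running cluster is assumed.

```java
import java.net.URI;
import java.nio.charset.StandardCharsets;
import java.util.EnumSet;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.XAttrSetFlag;

public class XAttrSketch {
  public static void main(String[] args) throws Exception {
    try (FileSystem fs = FileSystem.get(URI.create("hdfs://cluster"),
        new Configuration())) {
      Path p = new Path("/data/report.csv");
      // CREATE fails if the attribute already exists; REPLACE fails if it
      // does not. User-visible attributes live in the "user." namespace.
      fs.setXAttr(p, "user.origin",
          "nightly-etl".getBytes(StandardCharsets.UTF_8),
          EnumSet.of(XAttrSetFlag.CREATE));
      byte[] value = fs.getXAttr(p, "user.origin");
      System.out.println(new String(value, StandardCharsets.UTF_8));
      fs.removeXAttr(p, "user.origin");
    }
  }
}
```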
-
access
public void access(org.apache.hadoop.fs.Path path, org.apache.hadoop.fs.permission.FsAction mode) throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, IOException
- Overrides:
access in class DistributedFileSystem
- Throws:
org.apache.hadoop.security.AccessControlException, FileNotFoundException, IOException
-
getKeyProviderUri
- Specified by:
getKeyProviderUri in interface org.apache.hadoop.crypto.key.KeyProviderTokenIssuer
- Overrides:
getKeyProviderUri in class DistributedFileSystem
- Throws:
IOException
-
getKeyProvider
- Specified by:
getKeyProvider in interface org.apache.hadoop.crypto.key.KeyProviderTokenIssuer
- Overrides:
getKeyProvider in class DistributedFileSystem
- Throws:
IOException
-
getAdditionalTokenIssuers
public org.apache.hadoop.security.token.DelegationTokenIssuer[] getAdditionalTokenIssuers() throws IOException
- Specified by:
getAdditionalTokenIssuers in interface org.apache.hadoop.security.token.DelegationTokenIssuer
- Overrides:
getAdditionalTokenIssuers in class DistributedFileSystem
- Throws:
IOException
-
getInotifyEventStream
- Overrides:
getInotifyEventStream in class DistributedFileSystem
- Throws:
IOException
-
getInotifyEventStream
- Overrides:
getInotifyEventStream in class DistributedFileSystem
- Throws:
IOException
-
setErasureCodingPolicy
public void setErasureCodingPolicy(org.apache.hadoop.fs.Path path, String ecPolicyName) throws IOException
Description copied from class: DistributedFileSystem
Set the source path to the specified erasure coding policy.
- Specified by:
setErasureCodingPolicy in interface org.apache.hadoop.fs.WithErasureCoding
- Overrides:
setErasureCodingPolicy in class DistributedFileSystem
- Parameters:
path - The directory to set the policy on
ecPolicyName - The erasure coding policy name.
- Throws:
IOException
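A hedged sketch of setting and reading back an erasure coding policy: the URI and the `/cold-archive` path are hypothetical; a running cluster where the built-in `RS-6-3-1024k` policy has been enabled is assumed.

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;

public class EcPolicySketch {
  public static void main(String[] args) throws Exception {
    try (FileSystem fs = FileSystem.get(URI.create("hdfs://cluster"),
        new Configuration())) {
      DistributedFileSystem dfs = (DistributedFileSystem) fs;
      Path dir = new Path("/cold-archive");
      // The named policy must already be enabled on the cluster; new files
      // written under the directory are then erasure coded.
      dfs.setErasureCodingPolicy(dir, "RS-6-3-1024k");
      ErasureCodingPolicy policy = dfs.getErasureCodingPolicy(dir);
      // As documented above, null means plain REPLICATION.
      if (policy != null) {
        System.out.println(policy.getName());
      }
    }
  }
}
```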
-
satisfyStoragePolicy
Description copied from class: DistributedFileSystem
Set the source path to satisfy the storage policy.
- Overrides:
satisfyStoragePolicy in class DistributedFileSystem
- Parameters:
src - The source path referring to either a directory or a file.
- Throws:
IOException
-
getErasureCodingPolicy
public ErasureCodingPolicy getErasureCodingPolicy(org.apache.hadoop.fs.Path path) throws IOException
Description copied from class: DistributedFileSystem
Get erasure coding policy information for the specified path.
- Overrides:
getErasureCodingPolicy in class DistributedFileSystem
- Parameters:
path - The path of the file or directory
- Returns:
the policy information if the file or directory on the path is erasure coded, null otherwise. Null is also returned if the directory or file has the REPLICATION policy.
- Throws:
IOException
-
getAllErasureCodingPolicies
Gets all erasure coding policies from all available child file systems.
- Overrides:
getAllErasureCodingPolicies in class DistributedFileSystem
- Returns:
all erasure coding policies supported by this file system.
- Throws:
IOException
-
getAllErasureCodingCodecs
Description copied from class: DistributedFileSystem
Retrieve all the erasure coding codecs and coders supported by this file system.
- Overrides:
getAllErasureCodingCodecs in class DistributedFileSystem
- Returns:
all erasure coding codecs and coders supported by this file system.
- Throws:
IOException
-
addErasureCodingPolicies
public AddErasureCodingPolicyResponse[] addErasureCodingPolicies(ErasureCodingPolicy[] policies) throws IOException
Description copied from class: DistributedFileSystem
Add erasure coding policies to HDFS. For each policy input, schema and cellSize are required; name and id are ignored. They will be automatically created and assigned by the Namenode once the policy is successfully added, and will be returned in the response; policy states will be set to DISABLED automatically.
- Overrides:
addErasureCodingPolicies in class DistributedFileSystem
- Parameters:
policies - The user defined ec policy list to add.
- Returns:
the response list of the adding operations.
- Throws:
IOException
-
removeErasureCodingPolicy
Description copied from class: DistributedFileSystem
Remove an erasure coding policy.
- Overrides:
removeErasureCodingPolicy in class DistributedFileSystem
- Parameters:
ecPolicyName - The name of the policy to be removed.
- Throws:
IOException
-
enableErasureCodingPolicy
Description copied from class: DistributedFileSystem
Enable an erasure coding policy.
- Overrides:
enableErasureCodingPolicy in class DistributedFileSystem
- Parameters:
ecPolicyName - The name of the policy to be enabled.
- Throws:
IOException
-
disableErasureCodingPolicy
Description copied from class: DistributedFileSystem
Disable an erasure coding policy.
- Overrides:
disableErasureCodingPolicy in class DistributedFileSystem
- Parameters:
ecPolicyName - The name of the policy to be disabled.
- Throws:
IOException
-
unsetErasureCodingPolicy
Description copied from class: DistributedFileSystem
Unset the erasure coding policy from the source path.
- Overrides:
unsetErasureCodingPolicy in class DistributedFileSystem
- Parameters:
path - The directory to unset the policy on
- Throws:
IOException
-
getECTopologyResultForPolicies
public ECTopologyVerifierResult getECTopologyResultForPolicies(String... policyNames) throws IOException
Description copied from class: DistributedFileSystem
Verifies whether the given policies are supported in the given cluster setup. If no policy is specified, checks all enabled policies.
- Overrides:
getECTopologyResultForPolicies in class DistributedFileSystem
- Parameters:
policyNames - names of the policies.
- Returns:
the result of whether the given policies are supported in the cluster setup
- Throws:
IOException
-
getTrashRoot
public org.apache.hadoop.fs.Path getTrashRoot(org.apache.hadoop.fs.Path path)
Description copied from class: DistributedFileSystem
Get the root directory of Trash for a path in HDFS. 1. A file in an encryption zone returns /ez1/.Trash/username. 2. A file in a snapshottable directory returns /snapdir1/.Trash/username if dfs.namenode.snapshot.trashroot.enabled is set to true. 3. In other cases, or if an exception is encountered when checking the encryption zone or the snapshot root of the path, returns /users/username/.Trash. The caller appends either Current or a checkpoint timestamp for the trash destination.
- Overrides:
getTrashRoot in class DistributedFileSystem
- Parameters:
path - the path whose trash root is to be determined.
- Returns:
trash root
-
getTrashRoots
Description copied from class: DistributedFileSystem
Get all the trash roots of HDFS for the current user or for all users. 1. A file deleted from an encryption zone, e.g. ez1 rooted at /ez1, has its trash root at /ez1/.Trash/$USER. 2. A file deleted from a snapshottable directory, if dfs.namenode.snapshot.trashroot.enabled is set to true, e.g. snapshottable directory /snapdir1, has its trash root at /snapdir1/.Trash/$USER. 3. A file deleted from other directories: /user/username/.Trash.
- Overrides:
getTrashRoots in class DistributedFileSystem
- Parameters:
allUsers - return trashRoots of all users if true; used by the emptier
- Returns:
trash roots of HDFS
-
fixRelativePart
protected org.apache.hadoop.fs.Path fixRelativePart(org.apache.hadoop.fs.Path p)
- Overrides:
fixRelativePart in class DistributedFileSystem
-
createFile
Description copied from class: DistributedFileSystem
Create an HdfsDataOutputStreamBuilder to create a file on DFS. Similar to FileSystem.create(Path), the file is overwritten by default.
- Overrides:
createFile in class DistributedFileSystem
- Parameters:
path - the path of the file to create.
- Returns:
An HdfsDataOutputStreamBuilder for creating a file.
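A hedged sketch of the builder-style create described above; the URI, the target path, and the replication value are hypothetical, and a running cluster is assumed.

```java
import java.net.URI;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CreateFileSketch {
  public static void main(String[] args) throws Exception {
    try (FileSystem fs = FileSystem.get(URI.create("hdfs://cluster"),
        new Configuration())) {
      // The builder mirrors FileSystem.create(Path): overwrite by default,
      // with per-file options set fluently before build().
      try (FSDataOutputStream out = fs.createFile(new Path("/tmp/hello.txt"))
          .replication((short) 2)
          .recursive()          // create missing parent directories
          .build()) {
        out.write("hello\n".getBytes(StandardCharsets.UTF_8));
      }
    }
  }
}
```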
-
listOpenFiles
@Deprecated public org.apache.hadoop.fs.RemoteIterator<OpenFileEntry> listOpenFiles() throws IOException
Deprecated.
Description copied from class: DistributedFileSystem. Returns a RemoteIterator which can be used to list all open files currently managed by the NameNode. For large numbers of open files, the iterator will fetch the list in batches of a configured size. Since the list is fetched in batches, it does not represent a consistent snapshot of all open files.
This method can only be called by HDFS superusers.
- Overrides:
listOpenFiles in class DistributedFileSystem
- Throws:
IOException
-
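The batched fetching that makes the listing inconsistent can be sketched without a cluster. Here fetchBatch is a hypothetical stand-in for the NameNode RPC, BATCH_SIZE for the configured batch size, and a plain java.util.Iterator stands in for RemoteIterator.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Sketch of batched listing: the iterator pulls one page at a time,
// so the full list is never materialized and is not a snapshot.
public class BatchedListingSketch {
    static final int BATCH_SIZE = 3; // stand-in for the configured size

    // Hypothetical server call: up to BATCH_SIZE entries from 'cursor'.
    static List<String> fetchBatch(List<String> server, int cursor) {
        int end = Math.min(cursor + BATCH_SIZE, server.size());
        return new ArrayList<>(server.subList(cursor, end));
    }

    public static Iterator<String> listOpenFiles(List<String> server) {
        return new Iterator<String>() {
            List<String> batch = fetchBatch(server, 0);
            int cursor = 0, i = 0;
            public boolean hasNext() {
                if (i < batch.size()) return true;
                cursor += batch.size();      // advance past the consumed page
                batch = fetchBatch(server, cursor);
                i = 0;
                return !batch.isEmpty();
            }
            public String next() { return batch.get(i++); }
        };
    }
}
```

Because each page is fetched separately, files opened or closed between page fetches may be missed or double-counted, which is exactly the caveat the Javadoc states.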
listOpenFiles
@Deprecated public org.apache.hadoop.fs.RemoteIterator<OpenFileEntry> listOpenFiles(EnumSet<OpenFilesIterator.OpenFilesType> openFilesTypes) throws IOException
Deprecated.
- Overrides:
listOpenFiles in class DistributedFileSystem
- Throws:
IOException
-
listOpenFiles
public org.apache.hadoop.fs.RemoteIterator<OpenFileEntry> listOpenFiles(EnumSet<OpenFilesIterator.OpenFilesType> openFilesTypes, String path) throws IOException
- Overrides:
listOpenFiles in class DistributedFileSystem
- Throws:
IOException
-
appendFile
Description copied from class: DistributedFileSystem. Create a DistributedFileSystem.HdfsDataOutputStreamBuilder to append a file on DFS.
- Overrides:
appendFile in class DistributedFileSystem
- Parameters:
path - file path.
- Returns:
- A DistributedFileSystem.HdfsDataOutputStreamBuilder for appending a file.
-
hasPathCapability
public boolean hasPathCapability(org.apache.hadoop.fs.Path path, String capability) throws IOException
Description copied from class: DistributedFileSystem. HDFS client capabilities. Uses DfsPathCapabilities to keep WebHdfsFileSystem in sync.
- Specified by:
hasPathCapability in interface org.apache.hadoop.fs.PathCapabilities
- Overrides:
hasPathCapability in class DistributedFileSystem
- Throws:
IOException
-
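The idea of centralizing the capability answers (so DistributedFileSystem and WebHdfsFileSystem stay in sync) can be sketched as a single static lookup table. The capability strings below match Hadoop's CommonPathCapabilities constants, but the lookup itself is a simplified stand-in, not the real DfsPathCapabilities logic.

```java
import java.util.Set;

// Sketch of a shared capability table: one static set answers
// hasPathCapability for every client class that consults it.
public class PathCapabilitiesSketch {
    static final Set<String> SUPPORTED = Set.of(
        "fs.capability.paths.append",
        "fs.capability.paths.checksums",
        "fs.capability.paths.snapshots");

    public static boolean hasPathCapability(String path, String capability) {
        // The real method also validates and normalizes the path; omitted here.
        return SUPPORTED.contains(capability.toLowerCase());
    }
}
```

A probe for an unknown capability simply returns false rather than throwing, which lets callers feature-detect before issuing the operation.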
resolvePath
- Overrides:
resolvePath in class org.apache.hadoop.fs.FileSystem
- Throws:
IOException
-
delete
public boolean delete(org.apache.hadoop.fs.Path f) throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, IOException
- Overrides:
delete in class org.apache.hadoop.fs.FileSystem
- Throws:
org.apache.hadoop.security.AccessControlException, FileNotFoundException, IOException
-
getFileChecksum
public org.apache.hadoop.fs.FileChecksum getFileChecksum(org.apache.hadoop.fs.Path f, long length) throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, IOException
- Overrides:
getFileChecksum in class DistributedFileSystem
- Throws:
org.apache.hadoop.security.AccessControlException, FileNotFoundException, IOException
-
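The length parameter restricts the checksum to a leading range of the file, so two files that agree on that prefix compare equal. A minimal sketch of that semantics, with CRC32 standing in for HDFS's actual MD5-of-block-checksums algorithm:

```java
import java.util.zip.CRC32;

// Sketch of a length-limited checksum: only the first 'length'
// bytes of the data contribute to the result.
public class RangedChecksumSketch {
    public static long checksum(byte[] data, long length) {
        CRC32 crc = new CRC32();
        int n = (int) Math.min(length, data.length);
        crc.update(data, 0, n); // bytes past 'length' are ignored
        return crc.getValue();
    }
}
```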
mkdirs
- Overrides:
mkdirs in class org.apache.hadoop.fs.FileSystem
- Throws:
IOException
-
getDefaultBlockSize
public long getDefaultBlockSize(org.apache.hadoop.fs.Path f)
- Overrides:
getDefaultBlockSize in class org.apache.hadoop.fs.FileSystem
-
getDefaultReplication
public short getDefaultReplication(org.apache.hadoop.fs.Path f)
- Overrides:
getDefaultReplication in class org.apache.hadoop.fs.FileSystem
-
getServerDefaults
public org.apache.hadoop.fs.FsServerDefaults getServerDefaults(org.apache.hadoop.fs.Path f) throws IOException
- Overrides:
getServerDefaults in class org.apache.hadoop.fs.FileSystem
- Throws:
IOException
-
setWriteChecksum
public void setWriteChecksum(boolean writeChecksum)
- Overrides:
setWriteChecksum in class org.apache.hadoop.fs.FileSystem
-
getChildFileSystems
public org.apache.hadoop.fs.FileSystem[] getChildFileSystems()
- Overrides:
getChildFileSystems in class org.apache.hadoop.fs.FileSystem
-
getMountPoints
public org.apache.hadoop.fs.viewfs.ViewFileSystem.MountPoint[] getMountPoints()
-
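Since this class exists to add mount functionality, it may help to sketch how a ViewFileSystem-style mount table resolves a path: the mount point with the longest matching prefix wins, and the remainder is handed to that target filesystem. The mount entries and the resolve helper below are illustrative, not the real API.

```java
import java.util.Map;
import java.util.TreeMap;

// Sketch of longest-prefix mount-point resolution.
public class MountTableSketch {
    // src prefix -> target URI (hypothetical entries, not real config)
    static final TreeMap<String, String> MOUNTS = new TreeMap<>(Map.of(
        "/user", "hdfs://nn1/user",
        "/data", "hdfs://nn2/data",
        "/data/archive", "hdfs://nn3/archive"));

    public static String resolve(String path) {
        String best = null;
        for (String src : MOUNTS.keySet()) {
            // A mount point matches on an exact path or a path-component boundary.
            if ((path.equals(src) || path.startsWith(src + "/"))
                    && (best == null || src.length() > best.length())) {
                best = src;
            }
        }
        if (best == null) return null; // falls through to the default filesystem
        return MOUNTS.get(best) + path.substring(best.length());
    }
}
```

With no matching mount point, ViewDistributedFileSystem behaves like a plain DistributedFileSystem, which corresponds to the null fall-through above.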
getStatus
- Overrides:
getStatus in class org.apache.hadoop.fs.FileSystem
- Throws:
IOException
-
getUsed
- Overrides:
getUsed in class org.apache.hadoop.fs.FileSystem
- Throws:
IOException
-
getSlowDatanodeStats
Description copied from class: DistributedFileSystem. Retrieve stats for slow-running datanodes.
- Overrides:
getSlowDatanodeStats in class DistributedFileSystem
- Returns:
- An array of slow datanode info.
- Throws:
IOException- If an I/O error occurs.
-