Class DistributedFileSystem

java.lang.Object
org.apache.hadoop.conf.Configured
org.apache.hadoop.fs.FileSystem
org.apache.hadoop.hdfs.DistributedFileSystem
All Implemented Interfaces:
Closeable, AutoCloseable, org.apache.hadoop.conf.Configurable, org.apache.hadoop.crypto.key.KeyProviderTokenIssuer, org.apache.hadoop.fs.BatchListingOperations, org.apache.hadoop.fs.BulkDeleteSource, org.apache.hadoop.fs.LeaseRecoverable, org.apache.hadoop.fs.PathCapabilities, org.apache.hadoop.fs.SafeMode, org.apache.hadoop.fs.WithErasureCoding, org.apache.hadoop.security.token.DelegationTokenIssuer
Direct Known Subclasses:
ViewDistributedFileSystem

@LimitedPrivate({"MapReduce","HBase"}) @Unstable public class DistributedFileSystem extends org.apache.hadoop.fs.FileSystem implements org.apache.hadoop.crypto.key.KeyProviderTokenIssuer, org.apache.hadoop.fs.BatchListingOperations, org.apache.hadoop.fs.LeaseRecoverable, org.apache.hadoop.fs.SafeMode, org.apache.hadoop.fs.WithErasureCoding
Implementation of the abstract FileSystem for the DFS system. This object is the way end-user code interacts with a Hadoop DistributedFileSystem.
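Because this class is annotated @LimitedPrivate/@Unstable, applications normally go through the stable FileSystem factory methods and downcast only when HDFS-specific operations are required. A minimal sketch, assuming a reachable cluster (the NameNode URI below is a placeholder):

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.hdfs.DistributedFileSystem;

    public class GetDfsExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder URI; in practice fs.defaultFS from core-site.xml is used.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode.example.com:8020"), conf);
        if (fs instanceof DistributedFileSystem) {
          DistributedFileSystem dfs = (DistributedFileSystem) fs;
          System.out.println("scheme = " + dfs.getScheme());   // prints "hdfs"
          System.out.println("home   = " + dfs.getHomeDirectory());
        }
      }
    }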
  • Constructor Details

    • DistributedFileSystem

      public DistributedFileSystem()
  • Method Details

    • getScheme

      public String getScheme()
      Return the protocol scheme for the FileSystem.
      Overrides:
      getScheme in class org.apache.hadoop.fs.FileSystem
      Returns:
      hdfs
    • getUri

      public URI getUri()
      Specified by:
      getUri in class org.apache.hadoop.fs.FileSystem
    • initialize

      public void initialize(URI uri, org.apache.hadoop.conf.Configuration conf) throws IOException
      Overrides:
      initialize in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • getWorkingDirectory

      public org.apache.hadoop.fs.Path getWorkingDirectory()
      Specified by:
      getWorkingDirectory in class org.apache.hadoop.fs.FileSystem
    • getDefaultBlockSize

      public long getDefaultBlockSize()
      Overrides:
      getDefaultBlockSize in class org.apache.hadoop.fs.FileSystem
    • getDefaultReplication

      public short getDefaultReplication()
      Overrides:
      getDefaultReplication in class org.apache.hadoop.fs.FileSystem
    • setWorkingDirectory

      public void setWorkingDirectory(org.apache.hadoop.fs.Path dir)
      Specified by:
      setWorkingDirectory in class org.apache.hadoop.fs.FileSystem
    • getHomeDirectory

      public org.apache.hadoop.fs.Path getHomeDirectory()
      Overrides:
      getHomeDirectory in class org.apache.hadoop.fs.FileSystem
    • getHedgedReadMetrics

      public DFSHedgedReadMetrics getHedgedReadMetrics()
      Returns the hedged read metrics object for this client.
      Returns:
      an instance of DFSHedgedReadMetrics.
    • getFileBlockLocations

      public org.apache.hadoop.fs.BlockLocation[] getFileBlockLocations(org.apache.hadoop.fs.FileStatus file, long start, long len) throws IOException
      Overrides:
      getFileBlockLocations in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • getFileBlockLocations

      public org.apache.hadoop.fs.BlockLocation[] getFileBlockLocations(org.apache.hadoop.fs.Path p, long start, long len) throws IOException
      The returned BlockLocation objects will have different formats for replicated and erasure-coded files. Please refer to FileSystem.getFileBlockLocations(FileStatus, long, long) for more details.
      Overrides:
      getFileBlockLocations in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
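      A minimal sketch of enumerating block locations, e.g. for locality-aware scheduling (the URI and path are placeholders):

          import java.net.URI;
          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.fs.BlockLocation;
          import org.apache.hadoop.fs.FileStatus;
          import org.apache.hadoop.fs.FileSystem;
          import org.apache.hadoop.fs.Path;

          public class BlockLocationsExample {
            public static void main(String[] args) throws Exception {
              FileSystem fs = FileSystem.get(
                  URI.create("hdfs://namenode.example.com:8020"), new Configuration());
              FileStatus stat = fs.getFileStatus(new Path("/data/example.txt"));
              // Ask for the locations of every block in the file.
              BlockLocation[] blocks = fs.getFileBlockLocations(stat, 0, stat.getLen());
              for (BlockLocation b : blocks) {
                System.out.printf("offset=%d len=%d hosts=%s%n",
                    b.getOffset(), b.getLength(), String.join(",", b.getHosts()));
              }
            }
          }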
    • setVerifyChecksum

      public void setVerifyChecksum(boolean verifyChecksum)
      Overrides:
      setVerifyChecksum in class org.apache.hadoop.fs.FileSystem
    • recoverLease

      public boolean recoverLease(org.apache.hadoop.fs.Path f) throws IOException
      Start the lease recovery of a file.
      Specified by:
      recoverLease in interface org.apache.hadoop.fs.LeaseRecoverable
      Parameters:
      f - a file
      Returns:
      true if the file is already closed
      Throws:
      IOException - if an error occurs
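      Since recovery completes asynchronously on the NameNode, a common pattern is to trigger it and poll isFileClosed(Path) until the file closes. A sketch (the URI and path are placeholders):

          import java.net.URI;
          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.fs.FileSystem;
          import org.apache.hadoop.fs.Path;
          import org.apache.hadoop.hdfs.DistributedFileSystem;

          public class RecoverLeaseExample {
            public static void main(String[] args) throws Exception {
              DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(
                  URI.create("hdfs://namenode.example.com:8020"), new Configuration());
              Path f = new Path("/logs/app.log");   // hypothetical file left open by a dead writer
              boolean closed = dfs.recoverLease(f); // true only if already closed
              while (!closed) {
                Thread.sleep(1000);                 // recovery completes asynchronously
                closed = dfs.isFileClosed(f);
              }
              System.out.println(f + " is closed and safe to read");
            }
          }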
    • open

      public org.apache.hadoop.fs.FSDataInputStream open(org.apache.hadoop.fs.Path f, int bufferSize) throws IOException
      Specified by:
      open in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • open

      public org.apache.hadoop.fs.FSDataInputStream open(org.apache.hadoop.fs.PathHandle fd, int bufferSize) throws IOException
      Opens an FSDataInputStream with the indicated file ID extracted from the PathHandle.
      Overrides:
      open in class org.apache.hadoop.fs.FileSystem
      Parameters:
      fd - Reference to entity in this FileSystem.
      bufferSize - the size of the buffer to be used.
      Throws:
      org.apache.hadoop.fs.InvalidPathHandleException - If PathHandle constraints do not hold
      IOException - On I/O errors
    • getErasureCodingPolicyName

      public String getErasureCodingPolicyName(org.apache.hadoop.fs.FileStatus fileStatus)
      Specified by:
      getErasureCodingPolicyName in interface org.apache.hadoop.fs.WithErasureCoding
    • createPathHandle

      protected HdfsPathHandle createPathHandle(org.apache.hadoop.fs.FileStatus st, org.apache.hadoop.fs.Options.HandleOpt... opts)
      Create a handle to an HDFS file.
      Overrides:
      createPathHandle in class org.apache.hadoop.fs.FileSystem
      Parameters:
      st - HdfsFileStatus instance from NameNode
      opts - Standard handle arguments
      Returns:
      A handle to the file.
      Throws:
      IllegalArgumentException - If the FileStatus instance refers to a directory, symlink, or another namesystem.
      UnsupportedOperationException - If opts are not specified or both data and location are not allowed to change.
    • append

      public org.apache.hadoop.fs.FSDataOutputStream append(org.apache.hadoop.fs.Path f, int bufferSize, org.apache.hadoop.util.Progressable progress) throws IOException
      Specified by:
      append in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • append

      public org.apache.hadoop.fs.FSDataOutputStream append(org.apache.hadoop.fs.Path f, int bufferSize, org.apache.hadoop.util.Progressable progress, boolean appendToNewBlock) throws IOException
      Overrides:
      append in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • append

      public org.apache.hadoop.fs.FSDataOutputStream append(org.apache.hadoop.fs.Path f, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, int bufferSize, org.apache.hadoop.util.Progressable progress) throws IOException
      Append to an existing file (optional operation).
      Parameters:
      f - the existing file to be appended.
      flag - Flags for the append operation; CreateFlag.APPEND must be present.
      bufferSize - the size of the buffer to be used.
      progress - for reporting progress if it is not null.
      Returns:
      an instance of FSDataOutputStream.
      Throws:
      IOException
    • append

      public org.apache.hadoop.fs.FSDataOutputStream append(org.apache.hadoop.fs.Path f, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, int bufferSize, org.apache.hadoop.util.Progressable progress, InetSocketAddress[] favoredNodes) throws IOException
      Append to an existing file (optional operation).
      Parameters:
      f - the existing file to be appended.
      flag - Flags for the append operation; CreateFlag.APPEND must be present.
      bufferSize - the size of the buffer to be used.
      progress - for reporting progress if it is not null.
      favoredNodes - Favored nodes for new blocks
      Returns:
      an instance of FSDataOutputStream.
      Throws:
      IOException
    • create

      public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, boolean overwrite, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress) throws IOException
      Specified by:
      create in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • create

      public HdfsDataOutputStream create(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, boolean overwrite, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, InetSocketAddress[] favoredNodes) throws IOException
      Same as create(Path, FsPermission, boolean, int, short, long, Progressable) with the addition of favoredNodes, a hint to the namenode about where to place the file's blocks. The favored-nodes hint is not persisted in HDFS, so it may be honored at creation time only. Blocks written with favored nodes are pinned on those datanodes so that balancing does not move them, although HDFS may still move blocks away from favored nodes during re-replication. A value of null means no favored nodes for this create.
      Throws:
      IOException
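      A hedged sketch of creating a file with favored-node hints; the hostnames and the DataNode transfer port 9866 are placeholders for an actual cluster:

          import java.net.InetSocketAddress;
          import java.net.URI;
          import java.nio.charset.StandardCharsets;
          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.fs.FileSystem;
          import org.apache.hadoop.fs.Path;
          import org.apache.hadoop.fs.permission.FsPermission;
          import org.apache.hadoop.hdfs.DistributedFileSystem;
          import org.apache.hadoop.hdfs.client.HdfsDataOutputStream;

          public class FavoredNodesExample {
            public static void main(String[] args) throws Exception {
              DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(
                  URI.create("hdfs://namenode.example.com:8020"), new Configuration());
              InetSocketAddress[] favored = {                 // placement hints only
                  new InetSocketAddress("dn1.example.com", 9866),
                  new InetSocketAddress("dn2.example.com", 9866)
              };
              try (HdfsDataOutputStream out = dfs.create(new Path("/tmp/pinned.dat"),
                  FsPermission.getFileDefault(), true /* overwrite */, 4096,
                  (short) 3, 128L * 1024 * 1024, null /* no progress */, favored)) {
                out.write("hello".getBytes(StandardCharsets.UTF_8));
              }
            }
          }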
    • create

      public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, EnumSet<org.apache.hadoop.fs.CreateFlag> cflags, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, org.apache.hadoop.fs.Options.ChecksumOpt checksumOpt) throws IOException
      Overrides:
      create in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • primitiveCreate

      protected HdfsDataOutputStream primitiveCreate(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission absolutePermission, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, org.apache.hadoop.fs.Options.ChecksumOpt checksumOpt) throws IOException
      Overrides:
      primitiveCreate in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • createNonRecursive

      public org.apache.hadoop.fs.FSDataOutputStream createNonRecursive(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress) throws IOException
      Same as create(), except fails if parent directory doesn't already exist.
      Overrides:
      createNonRecursive in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • setReplication

      public boolean setReplication(org.apache.hadoop.fs.Path src, short replication) throws IOException
      Overrides:
      setReplication in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • setStoragePolicy

      public void setStoragePolicy(org.apache.hadoop.fs.Path src, String policyName) throws IOException
      Set the source path to the specified storage policy.
      Overrides:
      setStoragePolicy in class org.apache.hadoop.fs.FileSystem
      Parameters:
      src - The source path referring to either a directory or a file.
      policyName - The name of the storage policy.
      Throws:
      IOException
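      A sketch of assigning the built-in COLD policy to a directory and reading it back (the URI and path are placeholders). Note that setting a policy does not move existing blocks; movement happens when the Mover tool runs or satisfyStoragePolicy(Path) is called:

          import java.net.URI;
          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.fs.BlockStoragePolicySpi;
          import org.apache.hadoop.fs.FileSystem;
          import org.apache.hadoop.fs.Path;
          import org.apache.hadoop.hdfs.DistributedFileSystem;

          public class StoragePolicyExample {
            public static void main(String[] args) throws Exception {
              DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(
                  URI.create("hdfs://namenode.example.com:8020"), new Configuration());
              Path dir = new Path("/archive");
              dfs.setStoragePolicy(dir, "COLD");   // COLD is one of the built-in policies
              BlockStoragePolicySpi policy = dfs.getStoragePolicy(dir);
              System.out.println("policy on " + dir + " = " + policy.getName());
            }
          }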
    • unsetStoragePolicy

      public void unsetStoragePolicy(org.apache.hadoop.fs.Path src) throws IOException
      Overrides:
      unsetStoragePolicy in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • getStoragePolicy

      public org.apache.hadoop.fs.BlockStoragePolicySpi getStoragePolicy(org.apache.hadoop.fs.Path path) throws IOException
      Overrides:
      getStoragePolicy in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • getAllStoragePolicies

      public Collection<BlockStoragePolicy> getAllStoragePolicies() throws IOException
      Overrides:
      getAllStoragePolicies in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • getBytesWithFutureGenerationStamps

      public long getBytesWithFutureGenerationStamps() throws IOException
      Returns the number of bytes within blocks with a future generation stamp. These are bytes that may be deleted if safe mode is force-exited.
      Returns:
      number of bytes.
      Throws:
      IOException
    • getStoragePolicies

      @Deprecated public BlockStoragePolicy[] getStoragePolicies() throws IOException
      Deprecated.
      Prefer FileSystem.getAllStoragePolicies().
      Throws:
      IOException
    • concat

      public void concat(org.apache.hadoop.fs.Path trg, org.apache.hadoop.fs.Path[] psrcs) throws IOException
      Move the blocks from the source files to trg and delete the sources afterwards. All files must have the same block size.
      Overrides:
      concat in class org.apache.hadoop.fs.FileSystem
      Parameters:
      trg - existing file to append to
      psrcs - list of files (same block size, same replication)
      Throws:
      IOException
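      A sketch of stitching part files into one without copying data (paths are placeholders; all files must share the same block size and replication):

          import java.net.URI;
          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.fs.FileSystem;
          import org.apache.hadoop.fs.Path;
          import org.apache.hadoop.hdfs.DistributedFileSystem;

          public class ConcatExample {
            public static void main(String[] args) throws Exception {
              DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(
                  URI.create("hdfs://namenode.example.com:8020"), new Configuration());
              Path target = new Path("/data/combined.dat");   // must already exist
              Path[] parts = { new Path("/data/part-00001"), new Path("/data/part-00002") };
              // Moves the parts' blocks onto target and deletes the parts; no data copy.
              dfs.concat(target, parts);
            }
          }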
    • rename

      public boolean rename(org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst) throws IOException
      Specified by:
      rename in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • rename

      public void rename(org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst, org.apache.hadoop.fs.Options.Rename... options) throws IOException
      This rename operation is guaranteed to be atomic.
      Overrides:
      rename in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • truncate

      public boolean truncate(org.apache.hadoop.fs.Path f, long newLength) throws IOException
      Overrides:
      truncate in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • delete

      public boolean delete(org.apache.hadoop.fs.Path f, boolean recursive) throws IOException
      Specified by:
      delete in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • getContentSummary

      public org.apache.hadoop.fs.ContentSummary getContentSummary(org.apache.hadoop.fs.Path f) throws IOException
      Overrides:
      getContentSummary in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • getQuotaUsage

      public org.apache.hadoop.fs.QuotaUsage getQuotaUsage(org.apache.hadoop.fs.Path f) throws IOException
      Overrides:
      getQuotaUsage in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • setQuota

      public void setQuota(org.apache.hadoop.fs.Path src, long namespaceQuota, long storagespaceQuota) throws IOException
      Set a directory's namespace quota and storage-space quota.
      Overrides:
      setQuota in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • setQuotaByStorageType

      public void setQuotaByStorageType(org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.StorageType type, long quota) throws IOException
      Set the per type storage quota of a directory.
      Overrides:
      setQuotaByStorageType in class org.apache.hadoop.fs.FileSystem
      Parameters:
      src - target directory whose quota is to be modified.
      type - storage type of the specific storage type quota to be modified.
      quota - value of the specific storage type quota to be modified. May be HdfsConstants.QUOTA_RESET to clear the quota for that storage type.
      Throws:
      IOException
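      A sketch combining both quota calls: a name quota, a raw storage-space quota, and an SSD-specific quota on one directory (the URI, path, and limits are placeholders):

          import java.net.URI;
          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.fs.FileSystem;
          import org.apache.hadoop.fs.Path;
          import org.apache.hadoop.fs.StorageType;
          import org.apache.hadoop.hdfs.DistributedFileSystem;

          public class QuotaExample {
            public static void main(String[] args) throws Exception {
              DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(
                  URI.create("hdfs://namenode.example.com:8020"), new Configuration());
              Path dir = new Path("/projects/teamA");
              // At most 1M names and 10 TB of raw storage under dir. HdfsConstants
              // QUOTA_DONT_SET leaves a quota unchanged; QUOTA_RESET clears it.
              dfs.setQuota(dir, 1_000_000L, 10L * 1024 * 1024 * 1024 * 1024);
              // Additionally cap SSD usage under the same directory at 1 TB.
              dfs.setQuotaByStorageType(dir, StorageType.SSD, 1024L * 1024 * 1024 * 1024);
              System.out.println(dfs.getQuotaUsage(dir));
            }
          }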
    • listStatus

      public org.apache.hadoop.fs.FileStatus[] listStatus(org.apache.hadoop.fs.Path p) throws IOException
      List all the entries of a directory. Note that this operation is not atomic for a large directory: the entries may be fetched from the NameNode multiple times, and the only guarantee is that each name occurs once if the directory undergoes changes between the calls. If any immediate child of the given path f is a symlink, the returned FileStatus for that child represents the symlink itself: it is not resolved to the target path, the target path is available via getSymlink on that child's FileStatus, and isDirectory on that FileStatus returns false. To get the FileStatus of the target path, call getFileStatus(Path) with the child's symlink path.
      Specified by:
      listStatus in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • listLocatedStatus

      protected org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.LocatedFileStatus> listLocatedStatus(org.apache.hadoop.fs.Path p, org.apache.hadoop.fs.PathFilter filter) throws IOException
      The BlockLocations of the returned LocatedFileStatus objects will have different formats for replicated and erasure-coded files. Please refer to FileSystem.getFileBlockLocations(FileStatus, long, long) for more details.
      Overrides:
      listLocatedStatus in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • listStatusIterator

      public org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.FileStatus> listStatusIterator(org.apache.hadoop.fs.Path p) throws IOException
      Returns a remote iterator so that follow-up calls are made on demand while consuming the entries. This reduces memory consumption when listing a large directory.
      Overrides:
      listStatusIterator in class org.apache.hadoop.fs.FileSystem
      Parameters:
      p - target path
      Returns:
      remote iterator
      Throws:
      IOException
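      A sketch of streaming a huge directory listing without holding it all in client memory (the URI and path are placeholders):

          import java.net.URI;
          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.fs.FileStatus;
          import org.apache.hadoop.fs.FileSystem;
          import org.apache.hadoop.fs.Path;
          import org.apache.hadoop.fs.RemoteIterator;

          public class ListLargeDirExample {
            public static void main(String[] args) throws Exception {
              FileSystem fs = FileSystem.get(
                  URI.create("hdfs://namenode.example.com:8020"), new Configuration());
              // Entries arrive from the NameNode in batches as the iterator advances.
              RemoteIterator<FileStatus> it = fs.listStatusIterator(new Path("/big/dir"));
              while (it.hasNext()) {
                FileStatus st = it.next();
                System.out.println(st.getPath() + "\t" + st.getLen());
              }
            }
          }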
    • batchedListStatusIterator

      public org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.PartialListing<org.apache.hadoop.fs.FileStatus>> batchedListStatusIterator(List<org.apache.hadoop.fs.Path> paths) throws IOException
      Specified by:
      batchedListStatusIterator in interface org.apache.hadoop.fs.BatchListingOperations
      Throws:
      IOException
    • batchedListLocatedStatusIterator

      public org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.PartialListing<org.apache.hadoop.fs.LocatedFileStatus>> batchedListLocatedStatusIterator(List<org.apache.hadoop.fs.Path> paths) throws IOException
      Specified by:
      batchedListLocatedStatusIterator in interface org.apache.hadoop.fs.BatchListingOperations
      Throws:
      IOException
    • mkdir

      public boolean mkdir(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission) throws IOException
      Create a directory, only when the parent directories exist. See FsPermission.applyUMask(FsPermission) for details of how the permission is applied.
      Parameters:
      f - The path to create
      permission - The permission. See FsPermission#applyUMask for details about how this is used to calculate the effective permission.
      Throws:
      IOException
    • mkdirs

      public boolean mkdirs(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission) throws IOException
      Create a directory and its parent directories. See FsPermission.applyUMask(FsPermission) for details of how the permission is applied.
      Specified by:
      mkdirs in class org.apache.hadoop.fs.FileSystem
      Parameters:
      f - The path to create
      permission - The permission. See FsPermission#applyUMask for details about how this is used to calculate the effective permission.
      Throws:
      IOException
    • primitiveMkdir

      protected boolean primitiveMkdir(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission absolutePermission) throws IOException
      Overrides:
      primitiveMkdir in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • close

      public void close() throws IOException
      Specified by:
      close in interface AutoCloseable
      Specified by:
      close in interface Closeable
      Overrides:
      close in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • toString

      public String toString()
      Overrides:
      toString in class Object
    • getClient

      @Private @VisibleForTesting public DFSClient getClient()
    • getStatus

      public org.apache.hadoop.fs.FsStatus getStatus(org.apache.hadoop.fs.Path p) throws IOException
      Overrides:
      getStatus in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • getMissingBlocksCount

      public long getMissingBlocksCount() throws IOException
      Returns count of blocks with no good replicas left. This should normally be zero.
      Throws:
      IOException
    • getPendingDeletionBlocksCount

      public long getPendingDeletionBlocksCount() throws IOException
      Returns count of blocks pending on deletion.
      Throws:
      IOException
    • getMissingReplOneBlocksCount

      public long getMissingReplOneBlocksCount() throws IOException
      Returns count of blocks with replication factor 1 that have lost their only replica.
      Throws:
      IOException
    • getLowRedundancyBlocksCount

      public long getLowRedundancyBlocksCount() throws IOException
      Returns aggregated count of blocks with lower redundancy than their replication target.
      Throws:
      IOException
    • getCorruptBlocksCount

      public long getCorruptBlocksCount() throws IOException
      Returns count of blocks with at least one replica marked corrupt.
      Throws:
      IOException
    • listCorruptFileBlocks

      public org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.Path> listCorruptFileBlocks(org.apache.hadoop.fs.Path path) throws IOException
      Overrides:
      listCorruptFileBlocks in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • getDataNodeStats

      public DatanodeInfo[] getDataNodeStats() throws IOException
      Returns:
      datanode statistics.
      Throws:
      IOException
    • getDataNodeStats

      public DatanodeInfo[] getDataNodeStats(HdfsConstants.DatanodeReportType type) throws IOException
      Returns:
      datanode statistics for the given type.
      Throws:
      IOException
    • setSafeMode

      public boolean setSafeMode(org.apache.hadoop.fs.SafeModeAction action) throws IOException
      Enter, leave or get safe mode.
      Specified by:
      setSafeMode in interface org.apache.hadoop.fs.SafeMode
      Throws:
      IOException
    • setSafeMode

      public boolean setSafeMode(org.apache.hadoop.fs.SafeModeAction action, boolean isChecked) throws IOException
      Enter, leave or get safe mode.
      Specified by:
      setSafeMode in interface org.apache.hadoop.fs.SafeMode
      Parameters:
      action - One of SafeModeAction.ENTER, SafeModeAction.LEAVE and SafeModeAction.GET.
      isChecked - If true, check only the Active NameNode's status; otherwise check the first NameNode's status.
      Throws:
      IOException
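      A sketch of querying safe mode and leaving it if set; GET is a read-only query, while ENTER and LEAVE require superuser privileges:

          import java.net.URI;
          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.fs.FileSystem;
          import org.apache.hadoop.fs.SafeModeAction;
          import org.apache.hadoop.hdfs.DistributedFileSystem;

          public class SafeModeExample {
            public static void main(String[] args) throws Exception {
              DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(
                  URI.create("hdfs://namenode.example.com:8020"), new Configuration());
              boolean inSafeMode = dfs.setSafeMode(SafeModeAction.GET);  // query only
              System.out.println("in safe mode: " + inSafeMode);
              if (inSafeMode) {
                dfs.setSafeMode(SafeModeAction.LEAVE);                   // admin operation
              }
            }
          }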
    • setSafeMode

      @Deprecated public boolean setSafeMode(HdfsConstants.SafeModeAction action) throws IOException
      Deprecated.
      Use setSafeMode(SafeModeAction) instead.
      Enter, leave or get safe mode.
      Throws:
      IOException
    • setSafeMode

      @Deprecated public boolean setSafeMode(HdfsConstants.SafeModeAction action, boolean isChecked) throws IOException
      Deprecated.
      Enter, leave or get safe mode.
      Parameters:
      action - One of SafeModeAction.ENTER, SafeModeAction.LEAVE and SafeModeAction.GET.
      isChecked - If true, check only the Active NameNode's status; otherwise check the first NameNode's status.
      Throws:
      IOException
    • saveNamespace

      public boolean saveNamespace(long timeWindow, long txGap) throws IOException
      Save namespace image.
      Parameters:
      timeWindow - NameNode can ignore this command if the latest checkpoint was done within the given time period (in seconds).
      txGap - NameNode can ignore this command if the latest checkpoint was done within the given transaction gap.
      Returns:
      true if a new checkpoint has been made
      Throws:
      IOException
    • saveNamespace

      public void saveNamespace() throws IOException
      Save namespace image. NameNode always does the checkpoint.
      Throws:
      IOException
    • rollEdits

      public long rollEdits() throws IOException
      Rolls the edit log on the active NameNode. Requires super-user privileges.
      Returns:
      the transaction ID of the newly created segment
      Throws:
      IOException
    • restoreFailedStorage

      public boolean restoreFailedStorage(String arg) throws IOException
      Enable/disable/check restoreFailedStorage.
      Throws:
      IOException
    • refreshNodes

      public void refreshNodes() throws IOException
      Refreshes the list of hosts and excluded hosts from the configured files.
      Throws:
      IOException
    • finalizeUpgrade

      public void finalizeUpgrade() throws IOException
      Finalize previously upgraded file system state.
      Throws:
      IOException
    • upgradeStatus

      public boolean upgradeStatus() throws IOException
      Get status of upgrade - finalized or not.
      Returns:
      true if upgrade is finalized or if no upgrade is in progress and false otherwise.
      Throws:
      IOException
    • rollingUpgrade

      public RollingUpgradeInfo rollingUpgrade(HdfsConstants.RollingUpgradeAction action) throws IOException
      Rolling upgrade: prepare/finalize/query.
      Throws:
      IOException
    • metaSave

      public void metaSave(String pathname) throws IOException
      Throws:
      IOException
    • getServerDefaults

      public org.apache.hadoop.fs.FsServerDefaults getServerDefaults() throws IOException
      Overrides:
      getServerDefaults in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • getFileStatus

      public org.apache.hadoop.fs.FileStatus getFileStatus(org.apache.hadoop.fs.Path f) throws IOException
      Returns the stat information about the file. If the given path is a symlink, it is resolved to the target path and the FileStatus of the resolved path is returned; the result is not represented as a symlink, and isDirectory returns true if the resolved path is a directory, false otherwise.
      Specified by:
      getFileStatus in class org.apache.hadoop.fs.FileSystem
      Throws:
      FileNotFoundException - if the file does not exist.
      IOException
    • msync

      public void msync() throws IOException
      Synchronize client metadata state with Active NameNode.

      In HA the client synchronizes its state with the Active NameNode in order to guarantee subsequent read consistency from Observer Nodes.

      Overrides:
      msync in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
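      With Observer NameNodes, a reader that must see another client's completed writes calls msync() first. A minimal sketch (the URI and path are placeholders):

          import java.net.URI;
          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.fs.FileSystem;
          import org.apache.hadoop.fs.Path;
          import org.apache.hadoop.hdfs.DistributedFileSystem;

          public class MsyncExample {
            public static void main(String[] args) throws Exception {
              DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(
                  URI.create("hdfs://namenode.example.com:8020"), new Configuration());
              // Sync this client's state with the Active NameNode so a
              // subsequent read through an Observer reflects earlier writes.
              dfs.msync();
              System.out.println(dfs.getFileStatus(new Path("/data/just-written.dat")));
            }
          }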
    • createSymlink

      public void createSymlink(org.apache.hadoop.fs.Path target, org.apache.hadoop.fs.Path link, boolean createParent) throws IOException
      Overrides:
      createSymlink in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • supportsSymlinks

      public boolean supportsSymlinks()
      Overrides:
      supportsSymlinks in class org.apache.hadoop.fs.FileSystem
    • getFileLinkStatus

      public org.apache.hadoop.fs.FileStatus getFileLinkStatus(org.apache.hadoop.fs.Path f) throws IOException
      Overrides:
      getFileLinkStatus in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • getLinkTarget

      public org.apache.hadoop.fs.Path getLinkTarget(org.apache.hadoop.fs.Path f) throws IOException
      Overrides:
      getLinkTarget in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • resolveLink

      protected org.apache.hadoop.fs.Path resolveLink(org.apache.hadoop.fs.Path f) throws IOException
      Overrides:
      resolveLink in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • getFileChecksum

      public org.apache.hadoop.fs.FileChecksum getFileChecksum(org.apache.hadoop.fs.Path f) throws IOException
      Overrides:
      getFileChecksum in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • getFileChecksum

      public org.apache.hadoop.fs.FileChecksum getFileChecksum(org.apache.hadoop.fs.Path f, long length) throws IOException
      Overrides:
      getFileChecksum in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • setPermission

      public void setPermission(org.apache.hadoop.fs.Path p, org.apache.hadoop.fs.permission.FsPermission permission) throws IOException
      Overrides:
      setPermission in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • setOwner

      public void setOwner(org.apache.hadoop.fs.Path p, String username, String groupname) throws IOException
      Overrides:
      setOwner in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • setTimes

      public void setTimes(org.apache.hadoop.fs.Path p, long mtime, long atime) throws IOException
      Overrides:
      setTimes in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • getDefaultPort

      protected int getDefaultPort()
      Overrides:
      getDefaultPort in class org.apache.hadoop.fs.FileSystem
    • getDelegationToken

      public org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> getDelegationToken(String renewer) throws IOException
      Specified by:
      getDelegationToken in interface org.apache.hadoop.security.token.DelegationTokenIssuer
      Overrides:
      getDelegationToken in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • setBalancerBandwidth

      public void setBalancerBandwidth(long bandwidth) throws IOException
      Requests the namenode to tell all datanodes to use a new, non-persistent bandwidth value for dfs.datanode.balance.bandwidthPerSec. The bandwidth parameter is the max number of bytes per second of network bandwidth to be used by a datanode during balancing.
      Parameters:
      bandwidth - Balancer bandwidth in bytes per second for all datanodes.
      Throws:
      IOException
    • getCanonicalServiceName

      public String getCanonicalServiceName()
      Get a canonical service name for this file system. If the URI is logical, the hostname part of the URI will be returned.
      Specified by:
      getCanonicalServiceName in interface org.apache.hadoop.security.token.DelegationTokenIssuer
      Overrides:
      getCanonicalServiceName in class org.apache.hadoop.fs.FileSystem
      Returns:
      a service string that uniquely identifies this file system.
    • canonicalizeUri

      protected URI canonicalizeUri(URI uri)
      Overrides:
      canonicalizeUri in class org.apache.hadoop.fs.FileSystem
    • isInSafeMode

      public boolean isInSafeMode() throws IOException
      Utility function that returns whether the NameNode is in safe mode. In HA mode, this API returns only the Active NameNode's safe mode status.
      Returns:
      true if NameNode is in safemode, false otherwise.
      Throws:
      IOException - when there is an issue communicating with the NameNode
    • isSnapshotTrashRootEnabled

      public boolean isSnapshotTrashRootEnabled() throws IOException
      HDFS only. Returns whether the NameNode has enabled the snapshot trash root configuration dfs.namenode.snapshot.trashroot.enabled.
      Returns:
      true if NameNode enabled snapshot trash root
      Throws:
      IOException - when there is an issue communicating with the NameNode
    • allowSnapshot

      public void allowSnapshot(org.apache.hadoop.fs.Path path) throws IOException
      Throws:
      IOException
    • disallowSnapshot

      public void disallowSnapshot(org.apache.hadoop.fs.Path path) throws IOException
      Throws:
      IOException
    • createSnapshot

      public org.apache.hadoop.fs.Path createSnapshot(org.apache.hadoop.fs.Path path, String snapshotName) throws IOException
      Overrides:
      createSnapshot in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • renameSnapshot

      public void renameSnapshot(org.apache.hadoop.fs.Path path, String snapshotOldName, String snapshotNewName) throws IOException
      Overrides:
      renameSnapshot in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • getSnapshottableDirListing

      public SnapshottableDirectoryStatus[] getSnapshottableDirListing() throws IOException
      Get the list of snapshottable directories that are owned by the current user. Return all the snapshottable directories if the current user is a super user.
      Returns:
      The list of all the current snapshottable directories.
      Throws:
      IOException - If an I/O error occurred.
    • getSnapshotListing

      public SnapshotStatus[] getSnapshotListing(org.apache.hadoop.fs.Path snapshotRoot) throws IOException
      Returns:
      all the snapshots for a snapshottable directory
      Throws:
      IOException
    • deleteSnapshot

      public void deleteSnapshot(org.apache.hadoop.fs.Path snapshotDir, String snapshotName) throws IOException
      Overrides:
      deleteSnapshot in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
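      A sketch of the snapshot lifecycle on one directory; allowSnapshot is an administrator operation, and the paths and snapshot names are placeholders:

          import java.net.URI;
          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.fs.FileSystem;
          import org.apache.hadoop.fs.Path;
          import org.apache.hadoop.hdfs.DistributedFileSystem;

          public class SnapshotExample {
            public static void main(String[] args) throws Exception {
              DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(
                  URI.create("hdfs://namenode.example.com:8020"), new Configuration());
              Path dir = new Path("/data/warehouse");
              dfs.allowSnapshot(dir);                        // superuser only
              Path snap = dfs.createSnapshot(dir, "s1");     // /data/warehouse/.snapshot/s1
              System.out.println("snapshot at " + snap);
              dfs.renameSnapshot(dir, "s1", "backup-20240101");
              dfs.deleteSnapshot(dir, "backup-20240101");
            }
          }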
    • snapshotDiffReportListingRemoteIterator

      public org.apache.hadoop.fs.RemoteIterator<SnapshotDiffReportListing> snapshotDiffReportListingRemoteIterator(org.apache.hadoop.fs.Path snapshotDir, String fromSnapshot, String toSnapshot) throws IOException
      Returns a remote iterator so that follow-up calls are made on demand while consuming the SnapshotDiffReportListing entries. This reduces memory consumption when the snapshot diff report is huge.
      Parameters:
      snapshotDir - full path of the directory where snapshots are taken
      fromSnapshot - snapshot name of the from point. Null indicates the current tree.
      toSnapshot - snapshot name of the to point. Null indicates the current tree.
      Returns:
      Remote iterator
      Throws:
      IOException
    • getSnapshotDiffReport

      public SnapshotDiffReport getSnapshotDiffReport(org.apache.hadoop.fs.Path snapshotDir, String fromSnapshot, String toSnapshot) throws IOException
      Get the difference between two snapshots, or between a snapshot and the current tree of a directory.
      Throws:
      IOException
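      A sketch of diffing two snapshots of a snapshottable directory with getSnapshotDiffReport; per the parameter notes above, null denotes the current tree (the URI, path, and snapshot names are placeholders):

          import java.net.URI;
          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.fs.FileSystem;
          import org.apache.hadoop.fs.Path;
          import org.apache.hadoop.hdfs.DistributedFileSystem;
          import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport;

          public class SnapshotDiffExample {
            public static void main(String[] args) throws Exception {
              DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(
                  URI.create("hdfs://namenode.example.com:8020"), new Configuration());
              Path dir = new Path("/data/warehouse");        // hypothetical snapshottable dir
              SnapshotDiffReport report = dfs.getSnapshotDiffReport(dir, "s1", "s2");
              System.out.println(report);   // created/deleted/modified/renamed entries
            }
          }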
    • getSnapshotDiffReportListing

      public SnapshotDiffReportListing getSnapshotDiffReportListing(org.apache.hadoop.fs.Path snapshotDir, String fromSnapshotName, String toSnapshotName, String snapshotDiffStartPath, int snapshotDiffIndex) throws IOException
      Get the difference between two snapshots of a directory iteratively.
      Parameters:
      snapshotDir - full path of the directory where snapshots are taken.
      fromSnapshotName - snapshot name of the from point. Null indicates the current tree.
      toSnapshotName - snapshot name of the to point. Null indicates the current tree.
      snapshotDiffStartPath - path relative to the snapshottable root directory from where the snapshotdiff computation needs to start.
      snapshotDiffIndex - index in the created or deleted list of the directory at which the snapshotdiff computation stopped during the last rpc call. -1 indicates the diff computation needs to start right from the start path.
      Returns:
      the difference report represented as a SnapshotDiffReportListing.
      Throws:
      IOException - if an I/O error occurred.
    • isFileClosed

      public boolean isFileClosed(org.apache.hadoop.fs.Path src) throws IOException
      Get the close status of a file.
      Specified by:
      isFileClosed in interface org.apache.hadoop.fs.LeaseRecoverable
      Parameters:
      src - The path to the file
      Returns:
      true if the file is closed.
      Throws:
      FileNotFoundException - if the file does not exist.
      IOException - If an I/O error occurred
    • addCacheDirective

      public long addCacheDirective(CacheDirectiveInfo info) throws IOException
      Throws:
      IOException
    • addCacheDirective

      public long addCacheDirective(CacheDirectiveInfo info, EnumSet<CacheFlag> flags) throws IOException
      Add a new CacheDirective.
      Parameters:
      info - Information about a directive to add.
      flags - CacheFlags to use for this operation.
      Returns:
      the ID of the directive that was created.
      Throws:
      IOException - if the directive could not be added
    • modifyCacheDirective

      public void modifyCacheDirective(CacheDirectiveInfo info) throws IOException
      Throws:
      IOException
    • modifyCacheDirective

      public void modifyCacheDirective(CacheDirectiveInfo info, EnumSet<CacheFlag> flags) throws IOException
      Modify a CacheDirective.
      Parameters:
      info - Information about the directive to modify. You must set the ID to indicate which CacheDirective you want to modify.
      flags - CacheFlags to use for this operation.
      Throws:
      IOException - if the directive could not be modified
    • removeCacheDirective

      public void removeCacheDirective(long id) throws IOException
      Remove a CacheDirectiveInfo.
      Parameters:
      id - identifier of the CacheDirectiveInfo to remove
      Throws:
      IOException - if the directive could not be removed
    • listCacheDirectives

      public org.apache.hadoop.fs.RemoteIterator<CacheDirectiveEntry> listCacheDirectives(CacheDirectiveInfo filter) throws IOException
      List cache directives. Incrementally fetches results from the server.
      Parameters:
      filter - Filter parameters to use when listing the directives, null to list all directives visible to us.
      Returns:
      A RemoteIterator which returns CacheDirectiveEntry objects.
      Throws:
      IOException
    • addCachePool

      public void addCachePool(CachePoolInfo info) throws IOException
      Add a cache pool.
      Parameters:
      info - The request to add a cache pool.
      Throws:
      IOException - If the request could not be completed.
    • modifyCachePool

      public void modifyCachePool(CachePoolInfo info) throws IOException
      Modify an existing cache pool.
      Parameters:
      info - The request to modify a cache pool.
      Throws:
      IOException - If the request could not be completed.
    • removeCachePool

      public void removeCachePool(String poolName) throws IOException
      Remove a cache pool.
      Parameters:
      poolName - Name of the cache pool to remove.
      Throws:
      IOException - if the cache pool did not exist, or could not be removed.
    • listCachePools

      public org.apache.hadoop.fs.RemoteIterator<CachePoolEntry> listCachePools() throws IOException
      List all cache pools.
      Returns:
      A remote iterator from which you can get CachePoolEntry objects. Requests will be made as needed.
      Throws:
      IOException - If there was an error listing cache pools.
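      A sketch tying pools and directives together: create a pool, pin one replica of a hot path in DataNode memory, then list and remove the directive. The pool name and path are placeholders, and pool creation needs administrator rights:

          import java.net.URI;
          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.fs.FileSystem;
          import org.apache.hadoop.fs.Path;
          import org.apache.hadoop.fs.RemoteIterator;
          import org.apache.hadoop.hdfs.DistributedFileSystem;
          import org.apache.hadoop.hdfs.protocol.CacheDirectiveEntry;
          import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo;
          import org.apache.hadoop.hdfs.protocol.CachePoolInfo;

          public class CachingExample {
            public static void main(String[] args) throws Exception {
              DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(
                  URI.create("hdfs://namenode.example.com:8020"), new Configuration());
              dfs.addCachePool(new CachePoolInfo("hot-tables"));
              long id = dfs.addCacheDirective(new CacheDirectiveInfo.Builder()
                  .setPath(new Path("/warehouse/dim_dates"))
                  .setPool("hot-tables")
                  .setReplication((short) 1)   // cache one replica in memory
                  .build());
              RemoteIterator<CacheDirectiveEntry> it = dfs.listCacheDirectives(null);
              while (it.hasNext()) {
                System.out.println(it.next().getInfo());
              }
              dfs.removeCacheDirective(id);
            }
          }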
    • modifyAclEntries

      public void modifyAclEntries(org.apache.hadoop.fs.Path path, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOException
      Overrides:
      modifyAclEntries in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • removeAclEntries

      public void removeAclEntries(org.apache.hadoop.fs.Path path, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOException
      Overrides:
      removeAclEntries in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • removeDefaultAcl

      public void removeDefaultAcl(org.apache.hadoop.fs.Path path) throws IOException
      Overrides:
      removeDefaultAcl in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • removeAcl

      public void removeAcl(org.apache.hadoop.fs.Path path) throws IOException
      Overrides:
      removeAcl in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • setAcl

      public void setAcl(org.apache.hadoop.fs.Path path, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOException
      Overrides:
      setAcl in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • getAclStatus

      public org.apache.hadoop.fs.permission.AclStatus getAclStatus(org.apache.hadoop.fs.Path path) throws IOException
      Overrides:
      getAclStatus in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • createEncryptionZone

      public void createEncryptionZone(org.apache.hadoop.fs.Path path, String keyName) throws IOException
      Throws:
      IOException
    • getEZForPath

      public EncryptionZone getEZForPath(org.apache.hadoop.fs.Path path) throws IOException
      Throws:
      IOException
    • listEncryptionZones

      public org.apache.hadoop.fs.RemoteIterator<EncryptionZone> listEncryptionZones() throws IOException
      Throws:
      IOException
    • reencryptEncryptionZone

      public void reencryptEncryptionZone(org.apache.hadoop.fs.Path zone, HdfsConstants.ReencryptAction action) throws IOException
      Throws:
      IOException
    • listReencryptionStatus

      public org.apache.hadoop.fs.RemoteIterator<ZoneReencryptionStatus> listReencryptionStatus() throws IOException
      Throws:
      IOException
    • getFileEncryptionInfo

      public org.apache.hadoop.fs.FileEncryptionInfo getFileEncryptionInfo(org.apache.hadoop.fs.Path path) throws IOException
      Throws:
      IOException
    • provisionEZTrash

      public void provisionEZTrash(org.apache.hadoop.fs.Path path, org.apache.hadoop.fs.permission.FsPermission trashPermission) throws IOException
      Throws:
      IOException
    • provisionSnapshotTrash

      public org.apache.hadoop.fs.Path provisionSnapshotTrash(org.apache.hadoop.fs.Path path, org.apache.hadoop.fs.permission.FsPermission trashPermission) throws IOException
      HDFS only. Provision snapshottable directory trash.
      Parameters:
      path - Path to a snapshottable directory.
      trashPermission - Expected FsPermission of the trash root.
      Returns:
      Path of the provisioned trash root
      Throws:
      IOException
    • setXAttr

      public void setXAttr(org.apache.hadoop.fs.Path path, String name, byte[] value, EnumSet<org.apache.hadoop.fs.XAttrSetFlag> flag) throws IOException
      Overrides:
      setXAttr in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • getXAttr

      public byte[] getXAttr(org.apache.hadoop.fs.Path path, String name) throws IOException
      Overrides:
      getXAttr in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • getXAttrs

      public Map<String,byte[]> getXAttrs(org.apache.hadoop.fs.Path path) throws IOException
      Overrides:
      getXAttrs in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • getXAttrs

      public Map<String,byte[]> getXAttrs(org.apache.hadoop.fs.Path path, List<String> names) throws IOException
      Overrides:
      getXAttrs in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • listXAttrs

      public List<String> listXAttrs(org.apache.hadoop.fs.Path path) throws IOException
      Overrides:
      listXAttrs in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • removeXAttr

      public void removeXAttr(org.apache.hadoop.fs.Path path, String name) throws IOException
      Overrides:
      removeXAttr in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
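      A sketch of round-tripping a user-namespace extended attribute; the attribute name and path are placeholders:

          import java.net.URI;
          import java.nio.charset.StandardCharsets;
          import java.util.EnumSet;
          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.fs.FileSystem;
          import org.apache.hadoop.fs.Path;
          import org.apache.hadoop.fs.XAttrSetFlag;

          public class XAttrExample {
            public static void main(String[] args) throws Exception {
              FileSystem fs = FileSystem.get(
                  URI.create("hdfs://namenode.example.com:8020"), new Configuration());
              Path file = new Path("/data/example.txt");
              // CREATE|REPLACE together means "create or overwrite".
              fs.setXAttr(file, "user.checksum-algo",
                  "sha256".getBytes(StandardCharsets.UTF_8),
                  EnumSet.of(XAttrSetFlag.CREATE, XAttrSetFlag.REPLACE));
              byte[] value = fs.getXAttr(file, "user.checksum-algo");
              System.out.println(new String(value, StandardCharsets.UTF_8));
              fs.removeXAttr(file, "user.checksum-algo");
            }
          }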
    • access

      public void access(org.apache.hadoop.fs.Path path, org.apache.hadoop.fs.permission.FsAction mode) throws IOException
      Overrides:
      access in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • getKeyProviderUri

      public URI getKeyProviderUri() throws IOException
      Specified by:
      getKeyProviderUri in interface org.apache.hadoop.crypto.key.KeyProviderTokenIssuer
      Throws:
      IOException
    • getKeyProvider

      public org.apache.hadoop.crypto.key.KeyProvider getKeyProvider() throws IOException
      Specified by:
      getKeyProvider in interface org.apache.hadoop.crypto.key.KeyProviderTokenIssuer
      Throws:
      IOException
    • getAdditionalTokenIssuers

      public org.apache.hadoop.security.token.DelegationTokenIssuer[] getAdditionalTokenIssuers() throws IOException
      Specified by:
      getAdditionalTokenIssuers in interface org.apache.hadoop.security.token.DelegationTokenIssuer
      Overrides:
      getAdditionalTokenIssuers in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • getInotifyEventStream

      public DFSInotifyEventInputStream getInotifyEventStream() throws IOException
      Throws:
      IOException
    • getInotifyEventStream

      public DFSInotifyEventInputStream getInotifyEventStream(long lastReadTxid) throws IOException
      Throws:
      IOException
    • setErasureCodingPolicy

      public void setErasureCodingPolicy(org.apache.hadoop.fs.Path path, String ecPolicyName) throws IOException
      Set the source path to the specified erasure coding policy.
      Specified by:
      setErasureCodingPolicy in interface org.apache.hadoop.fs.WithErasureCoding
      Parameters:
      path - The directory on which to set the policy.
      ecPolicyName - The erasure coding policy name.
      Throws:
      IOException
    • satisfyStoragePolicy

      public void satisfyStoragePolicy(org.apache.hadoop.fs.Path path) throws IOException
      Set the source path to satisfy storage policy.
      Overrides:
      satisfyStoragePolicy in class org.apache.hadoop.fs.FileSystem
      Parameters:
      path - The source path referring to either a directory or a file.
      Throws:
      IOException
    • getErasureCodingPolicy

      public ErasureCodingPolicy getErasureCodingPolicy(org.apache.hadoop.fs.Path path) throws IOException
      Get erasure coding policy information for the specified path.
      Parameters:
      path - The path of the file or directory
      Returns:
      The policy information if the file or directory at the path is erasure coded, null otherwise. Null is also returned if the directory or file has the REPLICATION policy.
      Throws:
      IOException
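      A sketch of enabling a built-in policy and applying it to a directory; newly written files under the directory then inherit it. The path is a placeholder and enabling a policy is an administrator operation:

          import java.net.URI;
          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.fs.FileSystem;
          import org.apache.hadoop.fs.Path;
          import org.apache.hadoop.hdfs.DistributedFileSystem;
          import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;

          public class ErasureCodingExample {
            public static void main(String[] args) throws Exception {
              DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(
                  URI.create("hdfs://namenode.example.com:8020"), new Configuration());
              Path dir = new Path("/archive/cold");
              dfs.enableErasureCodingPolicy("RS-6-3-1024k");   // built-in Reed-Solomon policy
              dfs.setErasureCodingPolicy(dir, "RS-6-3-1024k");
              ErasureCodingPolicy policy = dfs.getErasureCodingPolicy(dir);
              System.out.println(policy == null ? "replicated" : policy.getName());
            }
          }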
    • getAllErasureCodingPolicies

      public Collection<ErasureCodingPolicyInfo> getAllErasureCodingPolicies() throws IOException
      Retrieve all the erasure coding policies supported by this file system, including enabled, disabled and removed policies, but excluding REPLICATION policy.
      Returns:
      all erasure coding policies supported by this file system.
      Throws:
      IOException
    • getAllErasureCodingCodecs

      public Map<String,String> getAllErasureCodingCodecs() throws IOException
      Retrieve all the erasure coding codecs and coders supported by this file system.
      Returns:
      all erasure coding codecs and coders supported by this file system.
      Throws:
      IOException
    • addErasureCodingPolicies

      public AddErasureCodingPolicyResponse[] addErasureCodingPolicies(ErasureCodingPolicy[] policies) throws IOException
      Add erasure coding policies to HDFS. For each input policy, schema and cellSize are required while name and id are ignored; they are created and assigned by the NameNode once the policy is successfully added, and are returned in the response. Policy states are set to DISABLED automatically.
      Parameters:
      policies - The user defined ec policy list to add.
      Returns:
      Return the response list of adding operations.
      Throws:
      IOException
    • removeErasureCodingPolicy

      public void removeErasureCodingPolicy(String ecPolicyName) throws IOException
      Remove erasure coding policy.
      Parameters:
      ecPolicyName - The name of the policy to be removed.
      Throws:
      IOException
    • enableErasureCodingPolicy

      public void enableErasureCodingPolicy(String ecPolicyName) throws IOException
      Enable erasure coding policy.
      Parameters:
      ecPolicyName - The name of the policy to be enabled.
      Throws:
      IOException
    • disableErasureCodingPolicy

      public void disableErasureCodingPolicy(String ecPolicyName) throws IOException
      Disable erasure coding policy.
      Parameters:
      ecPolicyName - The name of the policy to be disabled.
      Throws:
      IOException
    • unsetErasureCodingPolicy

      public void unsetErasureCodingPolicy(org.apache.hadoop.fs.Path path) throws IOException
      Unset the erasure coding policy from the source path.
      Parameters:
      path - The directory from which to unset the policy.
      Throws:
      IOException
    • getECTopologyResultForPolicies

      public ECTopologyVerifierResult getECTopologyResultForPolicies(String... policyNames) throws IOException
      Verifies whether the given policies are supported in the given cluster setup. If no policy is specified, all enabled policies are checked.
      Parameters:
      policyNames - name of policies.
      Returns:
      the verification result for the given policies in the cluster setup.
      Throws:
      IOException
    • getTrashRoot

      public org.apache.hadoop.fs.Path getTrashRoot(org.apache.hadoop.fs.Path path)
      Get the root directory of Trash for a path in HDFS.
      1. A file in an encryption zone returns /ez1/.Trash/username.
      2. A file in a snapshottable directory returns /snapdir1/.Trash/username if dfs.namenode.snapshot.trashroot.enabled is set to true.
      3. In all other cases, or if an exception is encountered while checking the encryption zone or the snapshot root of the path, returns /users/username/.Trash.
      The caller appends either Current or a checkpoint timestamp to form the trash destination.
      Overrides:
      getTrashRoot in class org.apache.hadoop.fs.FileSystem
      Parameters:
      path - the path whose trash root is to be determined.
      Returns:
      trash root
    • getTrashRoots

      public Collection<org.apache.hadoop.fs.FileStatus> getTrashRoots(boolean allUsers)
      Get all the trash roots of HDFS for the current user or for all users.
      1. A file deleted from an encryption zone, e.g. ez1 rooted at /ez1, has its trash root at /ez1/.Trash/$USER.
      2. A file deleted from a snapshottable directory, e.g. /snapdir1, has its trash root at /snapdir1/.Trash/$USER if dfs.namenode.snapshot.trashroot.enabled is set to true.
      3. A file deleted from any other directory goes to /user/username/.Trash.
      Overrides:
      getTrashRoots in class org.apache.hadoop.fs.FileSystem
      Parameters:
      allUsers - return trashRoots of all users if true, used by emptier
      Returns:
      trash roots of HDFS
    • fixRelativePart

      protected org.apache.hadoop.fs.Path fixRelativePart(org.apache.hadoop.fs.Path p)
      Overrides:
      fixRelativePart in class org.apache.hadoop.fs.FileSystem
    • createFile

      public DistributedFileSystem.HdfsDataOutputStreamBuilder createFile(org.apache.hadoop.fs.Path path)
      Create an HdfsDataOutputStreamBuilder for creating a file on DFS. As with FileSystem.create(Path), the file is overwritten by default.
      Overrides:
      createFile in class org.apache.hadoop.fs.FileSystem
      Parameters:
      path - the path of the file to create.
      Returns:
      A HdfsDataOutputStreamBuilder for creating a file.
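      A sketch of the builder-style create: the builder collects options fluently and build() performs the create. The path and option values are placeholders:

          import java.net.URI;
          import java.nio.charset.StandardCharsets;
          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.fs.FSDataOutputStream;
          import org.apache.hadoop.fs.FileSystem;
          import org.apache.hadoop.fs.Path;
          import org.apache.hadoop.hdfs.DistributedFileSystem;

          public class BuilderCreateExample {
            public static void main(String[] args) throws Exception {
              DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(
                  URI.create("hdfs://namenode.example.com:8020"), new Configuration());
              try (FSDataOutputStream out = dfs.createFile(new Path("/tmp/built.dat"))
                  .replication((short) 2)
                  .blockSize(64L * 1024 * 1024)
                  .recursive()                  // create missing parent directories
                  .build()) {
                out.write("hello".getBytes(StandardCharsets.UTF_8));
              }
            }
          }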
    • listOpenFiles

      @Deprecated public org.apache.hadoop.fs.RemoteIterator<OpenFileEntry> listOpenFiles() throws IOException
      Deprecated.
      Returns a RemoteIterator which can be used to list all open files currently managed by the NameNode. For large numbers of open files, the iterator fetches the list in batches of a configured size.

      Since the list is fetched in batches, it does not represent a consistent snapshot of all open files.

      This method can only be called by HDFS superusers.

      Throws:
      IOException
    • listOpenFiles

      @Deprecated public org.apache.hadoop.fs.RemoteIterator<OpenFileEntry> listOpenFiles(EnumSet<OpenFilesIterator.OpenFilesType> openFilesTypes) throws IOException
      Deprecated.
      Throws:
      IOException
    • listOpenFiles

      public org.apache.hadoop.fs.RemoteIterator<OpenFileEntry> listOpenFiles(EnumSet<OpenFilesIterator.OpenFilesType> openFilesTypes, String path) throws IOException
      Throws:
      IOException
    • appendFile

      public DistributedFileSystem.HdfsDataOutputStreamBuilder appendFile(org.apache.hadoop.fs.Path path)
      Create a DistributedFileSystem.HdfsDataOutputStreamBuilder to append a file on DFS.
      Overrides:
      appendFile in class org.apache.hadoop.fs.FileSystem
      Parameters:
      path - file path.
      Returns:
      A DistributedFileSystem.HdfsDataOutputStreamBuilder for appending a file.
    • hasPathCapability

      public boolean hasPathCapability(org.apache.hadoop.fs.Path path, String capability) throws IOException
      HDFS client capabilities. Uses DfsPathCapabilities to keep WebHdfsFileSystem in sync.
      Specified by:
      hasPathCapability in interface org.apache.hadoop.fs.PathCapabilities
      Overrides:
      hasPathCapability in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
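      A sketch of probing capabilities instead of using instanceof checks; the CommonPathCapabilities constants used here are part of the public Hadoop API:

          import java.net.URI;
          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.fs.CommonPathCapabilities;
          import org.apache.hadoop.fs.FileSystem;
          import org.apache.hadoop.fs.Path;

          public class PathCapabilityExample {
            public static void main(String[] args) throws Exception {
              FileSystem fs = FileSystem.get(
                  URI.create("hdfs://namenode.example.com:8020"), new Configuration());
              Path root = new Path("/");
              System.out.println("acls: "
                  + fs.hasPathCapability(root, CommonPathCapabilities.FS_ACLS));
              System.out.println("snapshots: "
                  + fs.hasPathCapability(root, CommonPathCapabilities.FS_SNAPSHOTS));
            }
          }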
    • createMultipartUploader

      public org.apache.hadoop.fs.MultipartUploaderBuilder createMultipartUploader(org.apache.hadoop.fs.Path basePath) throws IOException
      Overrides:
      createMultipartUploader in class org.apache.hadoop.fs.FileSystem
      Throws:
      IOException
    • getSlowDatanodeStats

      public DatanodeInfo[] getSlowDatanodeStats() throws IOException
      Retrieve stats for slow running datanodes.
      Returns:
      An array of slow datanode info.
      Throws:
      IOException - If an I/O error occurs.
    • getLocatedBlocks

      public LocatedBlocks getLocatedBlocks(org.apache.hadoop.fs.Path p, long start, long len) throws IOException
      Returns LocatedBlocks of the corresponding HDFS file p from offset start for length len. This is similar to getFileBlockLocations(Path, long, long) except that it returns LocatedBlocks rather than BlockLocation array.
      Parameters:
      p - path representing the file of interest.
      start - offset
      len - length
      Returns:
      a LocatedBlocks object
      Throws:
      IOException
    • getEnclosingRoot

      public org.apache.hadoop.fs.Path getEnclosingRoot(org.apache.hadoop.fs.Path path) throws IOException
      Return the path of the enclosing root for a given path. The enclosing root path is a common ancestor that should be used for temp and staging dirs, as well as within encryption zones and other restricted directories.
      Overrides:
      getEnclosingRoot in class org.apache.hadoop.fs.FileSystem
      Parameters:
      path - file path to find the enclosing root path for
      Returns:
      a path to the enclosing root
      Throws:
      IOException - early checks such as failure to resolve the path cause IO failures.