Class FsVolumeImpl
java.lang.Object
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl
- All Implemented Interfaces:
Checkable<FsVolumeSpi.VolumeCheckContext,VolumeCheckResult>, FsVolumeSpi
The underlying volume used to store replicas. It uses the FsDatasetImpl object for synchronization.
-
Nested Class Summary
Nested Classes
Modifier and Type: static enum
    Filter for block file names stored on the file system volumes.
Nested classes/interfaces inherited from interface org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi:
FsVolumeSpi.BlockIterator, FsVolumeSpi.ScanInfo, FsVolumeSpi.VolumeCheckContext
-
Field Summary
Fields
protected ThreadPoolExecutor cacheExecutor
    Per-volume worker pool that processes new blocks to cache.
protected long configuredCapacity
static final org.slf4j.Logger LOG
-
Method Summary
Modifier and Type / Method / Description
ReplicaInfo activateSavedReplica(String bpid, ReplicaInfo replicaInfo, org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.RamDiskReplicaTracker.RamDiskReplica replicaState)
ReplicaInPipeline append(String bpid, ReplicaInfo replicaInfo, long newGS, long estimateBlockLen)
VolumeCheckResult check(FsVolumeSpi.VolumeCheckContext ignored)
    Query the health of this object.
void compileReport(String bpid, Collection<FsVolumeSpi.ScanInfo> report, DirectoryScanner.ReportCompiler reportCompiler)
    Compile a list of FsVolumeSpi.ScanInfo for the blocks in the block pool with id bpid.
ReplicaInPipeline convertTemporaryToRbw(org.apache.hadoop.hdfs.protocol.ExtendedBlock b, ReplicaInfo temp)
File[] copyBlockToLazyPersistLocation(String bpId, long blockId, long genStamp, ReplicaInfo replicaInfo, int smallBufferSize, org.apache.hadoop.conf.Configuration conf)
ReplicaInPipeline createRbw(org.apache.hadoop.hdfs.protocol.ExtendedBlock b)
ReplicaInPipeline createTemporary(org.apache.hadoop.hdfs.protocol.ExtendedBlock b)
long getAvailable()
    Calculate the available space of the filesystem, excluding space reserved for non-HDFS and space reserved for RBW.
String[] getBlockPoolList()
    Make a deep copy of the list of currently active BPIDs.
long getCapacity()
    Return either the configured capacity of the file system if configured; or the capacity of the file system excluding space reserved for non-HDFS.
FsDatasetSpi<? extends FsVolumeSpi> getDataset()
    Get the FsDatasetSpi which this volume is a part of.
long getDfsUsed()
long getDfUsed()
    This function is only used for mocking in tests.
File getFinalizedDir(String bpid)
protected File getLazyPersistDir(String bpid)
long getNonDfsUsed()
    Unplanned non-DFS usage, i.e. extra usage beyond reserved.
protected File getRbwDir(String bpid)
int getReferenceCount()
long getReservedForReplicas()
org.apache.hadoop.fs.StorageType getStorageType()
protected File getTmpDir(String bpid)
org.apache.hadoop.fs.DF getUsageStats(org.apache.hadoop.conf.Configuration conf)
ReplicaInfo hardLinkBlockToTmpLocation(org.apache.hadoop.hdfs.protocol.ExtendedBlock block, ReplicaInfo replicaInfo)
void incrNumBlocks(String bpid)
protected ThreadPoolExecutor initializeCacheExecutor(File parent)
boolean isRAMStorage()
    Returns true if the volume is backed by RAM storage.
boolean isTransientStorage()
    Returns true if the volume is NOT backed by persistent storage.
FsVolumeSpi.BlockIterator loadBlockIterator(String bpid, String name)
    Load a saved block iterator.
byte[] loadLastPartialChunkChecksum(File blockFile, File metaFile)
    Load last partial chunk checksum from checksum file.
ReplicaInfo moveBlockToTmpLocation(org.apache.hadoop.hdfs.protocol.ExtendedBlock block, ReplicaInfo replicaInfo, int smallBufferSize, org.apache.hadoop.conf.Configuration conf)
FsVolumeSpi.BlockIterator newBlockIterator(String bpid, String name)
    Create a new block iterator.
static String nextSorted(List<String> arr, String prev)
FsVolumeReference obtainReference()
    Obtain a reference object that increases the reference count of the volume by one.
void releaseLockedMemory(long bytesToRelease)
    Release reserved memory for an RBW block written to transient storage, i.e. RAM.
void releaseReservedSpace(long bytesToRelease)
    Release disk space previously reserved for a block opened for write.
void reserveSpaceForReplica(long bytesToReserve)
    Reserve disk space for a block (RBW or Re-replicating) so a writer does not run out of space before the block is full.
void resolveDuplicateReplicas(String bpid, ReplicaInfo memBlockInfo, ReplicaInfo diskBlockInfo, org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaMap volumeMap)
void setCapacityForTesting(long capacity)
    This function MUST NOT be used outside of tests.
String toString()
ReplicaInPipeline updateRURCopyOnTruncate(ReplicaInfo rur, String bpid, long newBlockId, long recoveryId, long newlength)
-
Field Details
-
LOG
public static final org.slf4j.Logger LOG
-
configuredCapacity
protected volatile long configuredCapacity
-
cacheExecutor
protected ThreadPoolExecutor cacheExecutor
Per-volume worker pool that processes new blocks to cache. The maximum number of workers per volume is bounded (configurable via dfs.datanode.fsdatasetcache.max.threads.per.volume) to limit resource contention.
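For illustration, a minimal sketch of how such a bounded per-volume pool can be built with java.util.concurrent. This is not the DataNode's actual initializeCacheExecutor; the helper name and the cap parameter are hypothetical stand-ins for the configured limit.

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    class CacheExecutorSketch {
        // Hypothetical helper: builds a pool whose worker count never
        // exceeds maxThreadsPerVolume; extra caching tasks wait in the queue.
        static ThreadPoolExecutor newCacheExecutor(int maxThreadsPerVolume) {
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                maxThreadsPerVolume, maxThreadsPerVolume, // core == max: hard cap
                60L, TimeUnit.SECONDS,                    // idle workers may exit
                new LinkedBlockingQueue<>());
            pool.allowCoreThreadTimeOut(true);            // shrink when idle
            return pool;
        }
    }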
-
-
Method Details
-
initializeCacheExecutor
-
obtainReference
Description copied from interface: FsVolumeSpi
Obtain a reference object that increases the reference count of the volume by one. It is the caller's responsibility to close the FsVolumeReference to decrease the reference count on the volume.
- Specified by:
obtainReference in interface FsVolumeSpi
- Throws:
ClosedChannelException
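A hedged usage sketch: since FsVolumeReference is Closeable, the reference taken here is released by try-with-resources. The method name and the work done inside are placeholders.

    import java.io.IOException;
    import java.nio.channels.ClosedChannelException;
    import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeReference;
    import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi;

    class ObtainReferenceSketch {
        // Pin the volume for the duration of some work, then release it.
        static void withVolume(FsVolumeSpi volume) throws IOException {
            try (FsVolumeReference ref = volume.obtainReference()) {
                FsVolumeSpi pinned = ref.getVolume();
                // ... use "pinned" while the reference count keeps it alive ...
            } catch (ClosedChannelException e) {
                // the volume was concurrently closed or removed; skip it
            }
        }
    }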
-
getReferenceCount
@VisibleForTesting public int getReferenceCount()
-
getCurrentDir
-
getRbwDir
- Throws:
IOException
-
getLazyPersistDir
- Throws:
IOException
-
getTmpDir
- Throws:
IOException
-
getDfsUsed
- Throws:
IOException
-
getCapacity
@VisibleForTesting public long getCapacity()
Return either the configured capacity of the file system if configured; or the capacity of the file system excluding space reserved for non-HDFS. When same-disk-tiering is turned on, the reported capacity takes the reservedForArchive value into consideration.
- Returns:
- the unreserved number of bytes left in this filesystem. May be zero.
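For illustration only, a sketch of the rule described above. The parameter names and the negative-means-unset convention are assumptions, and the same-disk-tiering adjustment is omitted.

    class CapacitySketch {
        // Illustrative only: configured capacity wins when set; otherwise
        // report the disk capacity minus the non-HDFS reservation.
        static long capacity(long configuredCapacity, long diskCapacity,
                             long reservedForNonHdfs) {
            if (configuredCapacity >= 0) {   // assume "unset" is negative
                return configuredCapacity;
            }
            return Math.max(diskCapacity - reservedForNonHdfs, 0L);
        }
    }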
-
setCapacityForTesting
@VisibleForTesting public void setCapacityForTesting(long capacity)
This function MUST NOT be used outside of tests.
- Parameters:
capacity -
-
getAvailable
Calculate the available space of the filesystem, excluding space reserved for non-HDFS and space reserved for RBW.
- Specified by:
getAvailable in interface FsVolumeSpi
- Returns:
- the available number of bytes left in this filesystem. May be zero.
- Throws:
IOException
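Under the same caveats as the capacity sketch above (hypothetical names, illustrative only), the available-space rule excludes both reservations:

    class AvailableSketch {
        // Illustrative only: free disk space minus the non-HDFS reservation
        // and minus bytes already promised to in-flight (RBW) replicas.
        static long available(long diskFree, long reservedForNonHdfs,
                              long reservedForReplicas) {
            return Math.max(diskFree - reservedForNonHdfs - reservedForReplicas, 0L);
        }
    }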
-
getDfUsed
@VisibleForTesting public long getDfUsed()
This function is only used for mocking in tests.
-
getNonDfsUsed
Unplanned non-DFS usage, i.e. extra usage beyond reserved.
- Returns:
- Disk usage excluding space used by HDFS and excluding space reserved for blocks open for write.
- Throws:
IOException
-
getReservedForReplicas
@VisibleForTesting public long getReservedForReplicas()
-
getBlockPoolSlices
-
getBaseURI
- Specified by:
getBaseURI in interface FsVolumeSpi
- Returns:
- the base path to the volume
-
getUsageStats
public org.apache.hadoop.fs.DF getUsageStats(org.apache.hadoop.conf.Configuration conf)
- Specified by:
getUsageStats in interface FsVolumeSpi
-
getStorageLocation
- Specified by:
getStorageLocation in interface FsVolumeSpi
- Returns:
- the StorageLocation to the volume
-
isTransientStorage
public boolean isTransientStorage()
Description copied from interface: FsVolumeSpi
Returns true if the volume is NOT backed by persistent storage.
- Specified by:
isTransientStorage in interface FsVolumeSpi
-
isRAMStorage
public boolean isRAMStorage()
Description copied from interface: FsVolumeSpi
Returns true if the volume is backed by RAM storage.
- Specified by:
isRAMStorage in interface FsVolumeSpi
-
getFinalizedDir
- Throws:
IOException
-
getBlockPoolList
Make a deep copy of the list of currently active BPIDs.
- Specified by:
getBlockPoolList in interface FsVolumeSpi
- Returns:
- a list of block pools.
-
reserveSpaceForReplica
public void reserveSpaceForReplica(long bytesToReserve)
Description copied from interface: FsVolumeSpi
Reserve disk space for a block (RBW or Re-replicating) so a writer does not run out of space before the block is full.
- Specified by:
reserveSpaceForReplica in interface FsVolumeSpi
-
releaseReservedSpace
public void releaseReservedSpace(long bytesToRelease)
Description copied from interface: FsVolumeSpi
Release disk space previously reserved for a block opened for write.
- Specified by:
releaseReservedSpace in interface FsVolumeSpi
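A sketch of how these two calls are meant to pair up on the write path. The surrounding writer code is hypothetical: reserve the worst-case remaining bytes before writing, and release in a finally block so a failed write does not leak the reservation.

    import java.io.IOException;
    import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi;

    class ReservationSketch {
        // Reserve before writing an RBW replica; always release afterwards.
        static void writeReplica(FsVolumeSpi volume, long blockSize)
                throws IOException {
            volume.reserveSpaceForReplica(blockSize); // worst-case remaining bytes
            try {
                // ... write the replica's data and metadata on this volume ...
            } finally {
                volume.releaseReservedSpace(blockSize);
            }
        }
    }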
-
releaseLockedMemory
public void releaseLockedMemory(long bytesToRelease)
Description copied from interface: FsVolumeSpi
Release reserved memory for an RBW block written to transient storage, i.e. RAM. bytesToRelease will be rounded down to the OS page size, since locked memory reservation must always be a multiple of the page size.
- Specified by:
releaseLockedMemory in interface FsVolumeSpi
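The page-size rounding described above amounts to integer truncation; a one-line illustration (the 4096-byte page size in the comment is just an example):

    class PageRoundingSketch {
        // Round bytes down to a multiple of the OS page size.
        static long roundDownToPageSize(long bytes, long pageSize) {
            return (bytes / pageSize) * pageSize; // e.g. 10000 -> 8192 for 4096-byte pages
        }
    }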
-
nextSorted
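No description is given for nextSorted. Judging from the signature, it appears to return the first entry of a sorted list that follows prev (or the first entry when prev is null), which is how a lexicographic scan can be resumed; that reading is an assumption, illustrated below.

    import java.util.Arrays;
    import java.util.List;
    import org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl;

    class NextSortedSketch {
        static void example() {
            List<String> sorted = Arrays.asList("blk_1", "blk_2", "blk_3");
            // Assumed contract, not documented above:
            String first = FsVolumeImpl.nextSorted(sorted, null);    // presumably "blk_1"
            String next  = FsVolumeImpl.nextSorted(sorted, "blk_1"); // presumably "blk_2"
        }
    }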
-
newBlockIterator
Description copied from interface: FsVolumeSpi
Create a new block iterator. It will start at the beginning of the block set.
- Specified by:
newBlockIterator in interface FsVolumeSpi
- Parameters:
bpid - The block pool id to iterate over.
name - The name of the block iterator to create.
- Returns:
- The new block iterator.
-
loadBlockIterator
Description copied from interface: FsVolumeSpi
Load a saved block iterator.
- Specified by:
loadBlockIterator in interface FsVolumeSpi
- Parameters:
bpid - The block pool id to iterate over.
name - The name of the block iterator to load.
- Returns:
- The saved block iterator.
- Throws:
IOException - If there was an IO error loading the saved block iterator.
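A hedged usage sketch tying newBlockIterator, save, and loadBlockIterator together. The iterator name "scanner" and the volume/bpid variables are placeholders, and the per-block work is elided.

    import java.io.IOException;
    import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
    import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi;

    class BlockIteratorSketch {
        // Scan a block pool and checkpoint progress so a later run can
        // resume from the same cursor via loadBlockIterator(bpid, "scanner").
        static void scan(FsVolumeSpi volume, String bpid) throws IOException {
            try (FsVolumeSpi.BlockIterator iter =
                     volume.newBlockIterator(bpid, "scanner")) {
                while (!iter.atEnd()) {
                    ExtendedBlock block = iter.nextBlock(); // may be null at the end
                    if (block == null) {
                        break;
                    }
                    // ... inspect the block ...
                }
                iter.save(); // persist the cursor
            }
        }
    }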
-
getDataset
Description copied from interface: FsVolumeSpi
Get the FsDatasetSpi which this volume is a part of.
- Specified by:
getDataset in interface FsVolumeSpi
-
check
public VolumeCheckResult check(FsVolumeSpi.VolumeCheckContext ignored) throws org.apache.hadoop.util.DiskChecker.DiskErrorException
Description copied from interface: Checkable
Query the health of this object. This method may hang indefinitely depending on the status of the target resource.
- Specified by:
check in interface Checkable<FsVolumeSpi.VolumeCheckContext,VolumeCheckResult>
- Parameters:
ignored - context for the probe operation. May be null depending on the implementation.
- Returns:
- result of the check operation.
- Throws:
org.apache.hadoop.util.DiskChecker.DiskErrorException
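A hedged usage sketch of the probe; the helper name and the treatment of a non-HEALTHY result are placeholders, not the DataNode's actual handling.

    import org.apache.hadoop.hdfs.server.datanode.checker.VolumeCheckResult;
    import org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl;
    import org.apache.hadoop.util.DiskChecker;

    class VolumeCheckSketch {
        // Probe a volume; a DiskErrorException or a non-HEALTHY result
        // both mean the volume should be treated as suspect.
        static boolean isHealthy(FsVolumeImpl volume) {
            try {
                return volume.check(null) == VolumeCheckResult.HEALTHY;
            } catch (DiskChecker.DiskErrorException e) {
                return false;
            }
        }
    }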
-
toString
-
getStorageID
- Specified by:
getStorageID in interface FsVolumeSpi
- Returns:
- the StorageUuid of the volume
-
getStorageType
public org.apache.hadoop.fs.StorageType getStorageType()
- Specified by:
getStorageType in interface FsVolumeSpi
- Returns:
- the StorageType of the volume
-
loadLastPartialChunkChecksum
Description copied from interface: FsVolumeSpi
Load the last partial chunk checksum from the checksum file. Needs to be called with the FsDataset lock acquired.
- Specified by:
loadLastPartialChunkChecksum in interface FsVolumeSpi
- Returns:
- the last partial checksum
- Throws:
IOException
-
append
public ReplicaInPipeline append(String bpid, ReplicaInfo replicaInfo, long newGS, long estimateBlockLen) throws IOException
- Throws:
IOException
-
createRbw
public ReplicaInPipeline createRbw(org.apache.hadoop.hdfs.protocol.ExtendedBlock b) throws IOException
- Throws:
IOException
-
convertTemporaryToRbw
public ReplicaInPipeline convertTemporaryToRbw(org.apache.hadoop.hdfs.protocol.ExtendedBlock b, ReplicaInfo temp) throws IOException
- Throws:
IOException
-
createTemporary
public ReplicaInPipeline createTemporary(org.apache.hadoop.hdfs.protocol.ExtendedBlock b) throws IOException
- Throws:
IOException
-
updateRURCopyOnTruncate
public ReplicaInPipeline updateRURCopyOnTruncate(ReplicaInfo rur, String bpid, long newBlockId, long recoveryId, long newlength) throws IOException
- Throws:
IOException
-
compileReport
public void compileReport(String bpid, Collection<FsVolumeSpi.ScanInfo> report, DirectoryScanner.ReportCompiler reportCompiler) throws InterruptedException, IOException
Description copied from interface: FsVolumeSpi
Compile a list of FsVolumeSpi.ScanInfo for the blocks in the block pool with id bpid.
- Specified by:
compileReport in interface FsVolumeSpi
- Parameters:
bpid - block pool id to scan
report - the list onto which block reports are placed
- Throws:
InterruptedException
IOException
-
getFileIoProvider
- Specified by:
getFileIoProvider in interface FsVolumeSpi
-
getMetrics
- Specified by:
getMetrics in interface FsVolumeSpi
-
moveBlockToTmpLocation
public ReplicaInfo moveBlockToTmpLocation(org.apache.hadoop.hdfs.protocol.ExtendedBlock block, ReplicaInfo replicaInfo, int smallBufferSize, org.apache.hadoop.conf.Configuration conf) throws IOException
- Throws:
IOException
-
hardLinkBlockToTmpLocation
public ReplicaInfo hardLinkBlockToTmpLocation(org.apache.hadoop.hdfs.protocol.ExtendedBlock block, ReplicaInfo replicaInfo) throws IOException
- Throws:
IOException
-
copyBlockToLazyPersistLocation
public File[] copyBlockToLazyPersistLocation(String bpId, long blockId, long genStamp, ReplicaInfo replicaInfo, int smallBufferSize, org.apache.hadoop.conf.Configuration conf) throws IOException
- Throws:
IOException
-
incrNumBlocks
- Throws:
IOException
-
resolveDuplicateReplicas
public void resolveDuplicateReplicas(String bpid, ReplicaInfo memBlockInfo, ReplicaInfo diskBlockInfo, org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaMap volumeMap) throws IOException
- Throws:
IOException
-
activateSavedReplica
public ReplicaInfo activateSavedReplica(String bpid, ReplicaInfo replicaInfo, org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.RamDiskReplicaTracker.RamDiskReplica replicaState) throws IOException
- Throws:
IOException
-