Cluster

The Cluster Object holds the information regarding the cluster’s configuration.

Cluster() is the top level of the Object hierarchy.

Cluster Objects support Callback Functions

Object Hierarchy

Fig 1. The Object hierarchy representing the PixStor file system under the Cluster() Object

Description

class arcapix.fs.gpfs.cluster.Cluster

Cluster class represents the GPFS cluster and its associated attributes.

Instantiates a Cluster object with the settings of the Cluster
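The attribute-access pattern can be sketched with a stand-in class (illustrative only - the real `Cluster` lives in `arcapix.fs.gpfs.cluster` and requires a live PixStor/GPFS cluster; the names mirror this page, the values are invented):

```python
# Illustrative stand-in for arcapix.fs.gpfs.cluster.Cluster, mirroring the
# read-only attributes documented on this page. Values are invented.
class FakeCluster:
    def __init__(self):
        self.name = 'px.example.com'           # cluster name
        self.clusterId = '823106758404973134'  # GPFS cluster ID

    @property
    def id(self):
        # 'id' is documented as shorthand for 'clusterId'
        return self.clusterId

cluster = FakeCluster()
assert cluster.id == cluster.clusterId
```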

id

Returns the GPFS cluster id

Shorthand for ‘clusterId’

Return type:str
name

Returns the name of the GPFS cluster

Return type:str
clusterId

Returns the GPFS cluster ID

Return type:str
UIDdomain

Returns the GPFS cluster UID domain

Return type:str
RemoteShellCmd

Returns the remote shell command

Return type:str
RemoteFileCopyCmd

Returns the remote file copy command

Return type:str
PrimaryConfigServer

Returns the name of the cluster’s primary configuration server

Return type:str
SecondaryConfigServer

Returns the name of the cluster’s secondary configuration server.

Default:None if there is no configured secondary config server
Return type:str
RepositoryType

Returns the repository type for this cluster - can be either ‘server-based’ or ‘CCR’

Only valid on 4.1 clusters - returns ‘server-based’ for 3.5

Return type:str
filesystems

Returns an instance of the Filesystems collection

Return type:Filesystems object
nodes

Returns an instance of the Nodes collection

Return type:Nodes Object
callbacks

Returns an instance of the Callbacks collection, containing all callbacks installed on the cluster.

Return type:Callbacks Object
nsds

Returns an instance of the Nsds collection

Return type:Nsds object
nodeclasses

Returns an instance of the Nodeclasses collection

Return type:Nodeclasses Object
manager

Returns the (fully-qualified) name of the current cluster manager node

Return type:str
shutdown(unmountTimeout=None)

Shuts down GPFS on all nodes in the cluster

Parameters:unmountTimeout (int) – Time to wait for filesystem(s) to unmount on the nodes. If unmount doesn’t complete within unmountTimeout, the GPFS daemon will shutdown anyway (default=60+3*(num nodes) seconds).
Return type:Nothing
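The default unmount timeout scales with cluster size; the documented formula can be sketched as a small helper (hypothetical name, formula from the parameter description above):

```python
def default_unmount_timeout(num_nodes):
    # Per the shutdown() docs: default = 60 + 3 * (number of nodes) seconds
    return 60 + 3 * num_nodes

# A 20-node cluster waits up to 120 seconds for unmounts by default:
assert default_unmount_timeout(20) == 120
```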
startup(async_=False, timeout=60, **kwargs)

Starts up GPFS on all nodes in the cluster

Parameters:
  • async_ (bool) – If True, do not wait for startup (default=False).
  • timeout (int) – Time to wait for the nodes to start up if not async (in seconds) (default=60s)
Return type:

Nothing

Raises:

GPFSError if not async and the nodes aren’t back up within the timeout period
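The non-async wait-and-raise behaviour can be modelled as a polling loop (a sketch only - `wait_for_startup` and `nodes_up` are hypothetical names, and the real implementation is internal to the API):

```python
import time

class GPFSError(Exception):
    """Stand-in for the API's GPFSError."""

def wait_for_startup(nodes_up, timeout=60, poll=1.0, sleep=time.sleep):
    # Poll until the cluster reports all nodes up; raise GPFSError if
    # they are not back within 'timeout' seconds (models startup()'s
    # documented behaviour when async_ is False).
    waited = 0.0
    while not nodes_up():
        if waited >= timeout:
            raise GPFSError("nodes not up within %ds" % timeout)
        sleep(poll)
        waited += poll
```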

change(**kwargs)

Changes cluster configuration attributes

Parameters:manager (str) – Node to be made the cluster manager
runPolicy(policy, target, **kwargs)

Run a management policy on a target filesystem or directory in the cluster

Parameters:
  • policy (ManagementPolicy object) – Management policy to run
  • target (Filesystem object or string) – target filesystem or directory
  • nodes – list of names of nodes to run on
  • action – ‘yes’, ‘defer’, ‘test’, ‘prepare’ (default=’yes’)
  • cleanup – remove any temporary policy file after completion (default=True)
afmAsyncDelay

Specifies (in seconds) the amount of time by which write operations are delayed (because write operations are asynchronous with respect to remote clusters). For write-intensive applications that keep writing to the same set of files, this delay is helpful because it replaces multiple writes to the home cluster with a single write containing the latest data. However, setting a very high value weakens the consistency of data on the remote cluster.

This configuration parameter is applicable only for writer caches (SW and IW), where data from cache is pushed to home.

Valid values are 1 through 2147483647.

Default:15
Return type:int
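The write-coalescing effect of the delay can be illustrated with a simplified model (hypothetical helper; the real batching logic inside AFM is more involved):

```python
def pushes_to_home(write_times, delay):
    # Simplified model of afmAsyncDelay: each push to home covers all
    # writes to a file that land within 'delay' seconds of the first
    # write in its window, so rapid rewrites collapse into one push.
    pushes = 0
    window_start = None
    for t in sorted(write_times):
        if window_start is None or t - window_start >= delay:
            pushes += 1
            window_start = t
    return pushes

# With the default 15s delay, three quick writes coalesce into one push:
assert pushes_to_home([0, 2, 5], delay=15) == 1
```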
afmAsyncOpWaitTimeout

Specifies the time (in seconds) that AFM or AFM DR waits for completion of any inflight asynchronous operation which is synchronizing with the home or primary cluster

Valid values are between 5 and 2147483647

New in GPFS 5.0.0 - older versions return default

Default:300
Return type:int
afmDirLookupRefreshInterval

Controls the frequency of data revalidations that are triggered by such lookup operations as ls or stat (specified in seconds). When a lookup operation is done on a directory, if the specified amount of time has passed, AFM sends a message to the home cluster to find out whether the metadata of that directory has been modified since the last time it was checked. If the time interval has not passed, AFM does not check the home cluster for updates to the metadata.

Valid values are 0 through 2147483647. In situations where home cluster data changes frequently, a value of 0 is recommended.

Default:60
Return type:int
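The same elapsed-time test governs all four refresh intervals (afmDirLookupRefreshInterval, afmDirOpenRefreshInterval, afmFileLookupRefreshInterval, afmFileOpenRefreshInterval); a minimal sketch of the decision (hypothetical function name):

```python
def needs_revalidation(last_checked, now, interval):
    # Per the refresh-interval semantics: contact home only if at least
    # 'interval' seconds have passed since the last check. An interval
    # of 0 means every operation revalidates against home.
    return (now - last_checked) >= interval

# With the default directory lookup interval of 60s:
assert not needs_revalidation(0, 30, 60)   # too soon - use cached metadata
assert needs_revalidation(0, 60, 60)       # interval elapsed - ask home
```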
afmDirOpenRefreshInterval

Controls the frequency of data revalidations that are triggered by such I/O operations as read or write (specified in seconds). After a directory has been cached, open requests resulting from I/O operations on that object are directed to the cached directory until the specified amount of time has passed. Once the specified amount of time has passed, the open request gets directed to a gateway node rather than to the cached directory.

Valid values are 0 through 2147483647. Setting a lower value guarantees a higher level of consistency.

Default:60
Return type:int
afmDisconnectTimeout

Controls how long a network outage between the cache and home clusters can continue before the data in the cache is considered out of sync with home. The cache waits until afmDisconnectTimeout expires before declaring a network outage. At that point, only cached data can be accessed; attempts to read uncached data result in I/O errors, and new updates remain in the queue waiting for a reconnect. The cached data remains available until afmExpirationTimeout also expires.

Valid values are 0 through 2147483647.

Default:60
Return type:int
afmEnableNFSSec

If enabled at cache/primary, exported paths from home/secondary with kerberos-enabled security levels are mounted at cache/primary in the increasing order of security level

New in GPFS 5.0.1 - older versions return default

Default:False
Return type:bool
afmExpirationTimeout

Controls how long a network outage between the cache and home clusters can continue before the data in the cache is considered out of sync with home. After afmDisconnectTimeout expires, cached data remains available until afmExpirationTimeout expires, at which point the cached data is considered expired and cannot be read until a reconnect occurs.

Valid values are 0 through 2147483647.

Can also be None, indicating a disabled state

Default:None (disabled)
Return type:int
afmFileLookupRefreshInterval

Controls the frequency of data revalidations that are triggered by such lookup operations as ls or stat (specified in seconds). When a lookup operation is done on a file, if the specified amount of time has passed, AFM sends a message to the home cluster to find out whether the metadata of the file has been modified since the last time it was checked. If the time interval has not passed, AFM does not check the home cluster for updates to the metadata.

Valid values are 0 through 2147483647. In situations where home cluster data changes frequently, a value of 0 is recommended.

Default:30
Return type:int
afmFileOpenRefreshInterval

Controls the frequency of data revalidations that are triggered by such I/O operations as read or write (specified in seconds). After a file has been cached, open requests resulting from I/O operations on that object are directed to the cached file until the specified amount of time has passed. Once the specified amount of time has passed, the open request gets directed to a gateway node rather than to the cached file.

Valid values are 0 through 2147483647. Setting a lower value guarantees a higher level of consistency.

Default:30
Return type:int
afmHardMemThreshold

Sets a limit to the maximum amount of memory that AFM can use on each gateway node to record changes to the file system

New in GPFS 4.2.1 - older versions return None

Return type:int
afmHashVersion

Specifies an older or newer version of the gateway node hashing algorithm (for example, afmHashVersion=2). This can be used to minimize the impact of gateway nodes joining or leaving the active cluster by running as few recoveries as possible.

Valid values are 1, 2 or 4. 5 is also allowed in GPFS 5.0.2+

New in GPFS 4.1.0.4 - older versions return default

Default:2
Return type:int
afmMaxParallelRecoveries

Specifies the number of filesets per gateway node on which event recovery is run

New in GPFS 4.2.3 - older versions return default

Default:0
Return type:int
afmNumReadThreads

The number of threads that can be used on each participating gateway node during parallel read. The default value of this parameter is 1; that is, one reader thread will be active on every gateway node for each big read operation qualifying for splitting per the parallel read threshold value.

The valid range of values is 1 to 64.

New in GPFS 4.1.0.4 - older versions return default

Default:1
Return type:int
afmNumWriteThreads

Defines the number of threads that can be used on each participating gateway node during parallel write. The default value of this parameter is 1; that is, one writer thread will be active on every gateway node for each big write operation qualifying for splitting per the parallel write threshold value.

Valid values can range from 1 to 64.

New in GPFS 4.1.0.4 - older versions return default

Default:1
Return type:int
afmParallelReadChunkSize

The minimum chunk size of the read that needs to be distributed among the gateway nodes during parallel reads. Values are interpreted in terms of BYTES.

The valid range of values is 0 to 2147483647.

It can also be set at fileset level.

New in GPFS 4.1.0.4 - older versions return default

Default:128 MB
Return type:int
afmParallelReadThreshold

The threshold beyond which parallel reads become effective.

Reads are split into chunks when file size exceeds this threshold value. Values are interpreted in terms of MB.

The valid range of values is 0 to 2147483647.

It can also be set at fileset level.

New in GPFS 4.1.0.4 - older versions return default

Default:1024 MB
Return type:int
afmParallelWriteChunkSize

The minimum chunk size of the write that needs to be distributed among the gateway nodes during parallel writes. Values are interpreted in terms of BYTES.

The valid range of values is 0 to 2147483647.

It can also be set at fileset level.

New in GPFS 4.1.0.4 - older versions return default

Default:128 MB
Return type:int
afmParallelWriteThreshold

The threshold beyond which parallel writes become effective.

Writes are split into chunks when file size exceeds this threshold value. Values are interpreted in terms of MB.

The valid range of values is 0 to 2147483647.

It can also be set at fileset level.

New in GPFS 4.1.0.4 - older versions return default

Default:1024 MB
Return type:int
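The interplay of the parallel-write threshold and chunk size can be sketched as follows (a simplified model with hypothetical names; the real chunk-size parameter is in bytes - MB is used here for brevity, and the real splitting also depends on gateway count):

```python
def split_for_parallel_write(file_size_mb, threshold_mb=1024, chunk_mb=128):
    # Files at or below the threshold transfer as a single piece; larger
    # files are split into chunk_mb-sized pieces to distribute across
    # gateway nodes (defaults match the documented 1024 MB / 128 MB).
    if file_size_mb <= threshold_mb:
        return [file_size_mb]
    chunks = []
    remaining = file_size_mb
    while remaining > 0:
        chunks.append(min(chunk_mb, remaining))
        remaining -= chunks[-1]
    return chunks

assert split_for_parallel_write(512) == [512]        # below threshold
assert len(split_for_parallel_write(1280)) == 10     # 10 x 128 MB chunks
```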
afmRPO

Specifies the recovery point objective (RPO) interval for an AFM DR fileset

This attribute is disabled by default. You can specify a value with the suffix M for minutes, H for hours, or W for weeks. If you do not add a suffix, the value is assumed to be in minutes.

Valid values are 720 minutes - 2147483647 minutes.

New in GPFS 5.0.0 - older versions return default

Default:None (disabled)
Return type:int
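The suffix rules above can be captured in a small parser (hypothetical helper, written from the M/H/W suffix description; the API itself returns the value as minutes):

```python
def rpo_to_minutes(value):
    # Parse an afmRPO setting: M = minutes, H = hours, W = weeks;
    # a bare number is interpreted as minutes.
    units = {'M': 1, 'H': 60, 'W': 7 * 24 * 60}
    s = str(value).strip()
    if s and s[-1].upper() in units:
        return int(s[:-1]) * units[s[-1].upper()]
    return int(s)

assert rpo_to_minutes('720') == 720    # minimum valid value
assert rpo_to_minutes('12H') == 720    # 12 hours == 720 minutes
```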
afmReadSparseThreshold

Specifies the file size beyond which sparseness of a file in cache is maintained.

File sparseness in cache is detected and maintained when the size of allocated blocks in the cache for a file exceeds afmReadSparseThreshold. If the size of a file is less than the threshold, sparseness is not maintained.

Default:128
Return type:int
afmRefreshAsync

Specifies whether cache data refresh operation is in asynchronous mode.

New in GPFS 5.0.3 - older versions return None

Default:False
Return type:bool
afmRevalOpWaitTimeout

Specifies the time that AFM waits for revalidation to get response from the home cluster

Valid values are between 5 and 2147483647

New in GPFS 5.0.0 - older versions return default

Default:180
Return type:int
afmSecondaryRW

Specifies if the secondary is read-write or not

New in GPFS 4.2.1 - older versions return default

Default:None
Return type:bool
afmShowHomeSnapshot

Controls the visibility of the home snapshot directory in cache. For this to be visible in cache, this variable has to be set to True, and the snapshot directory name in cache and home should not be the same.

Default:None
Return type:bool
afmSyncOpWaitTimeout

Specifies the time that AFM or AFM DR waits for completion of any inflight synchronous operation which is synchronizing with the home or primary cluster.

Valid values are between 5 and 2147483647

New in GPFS 5.0.0 - older versions return default

Default:180
Return type:int
FIPS1402mode

Controls whether GPFS operates in FIPS 140-2 mode, which requires using a FIPS-compliant encryption module for all encryption and decryption activity

New in GPFS 4.1.0.4 - older versions return default

Default:False
Return type:bool
adminMode

Specifies whether all nodes in the cluster are used for issuing GPFS administration commands or just a subset of the nodes

One of [allToAll, central]

Return type:str
atimeDeferredSeconds

Controls the update behavior of atime when the relatime option is enabled

Default:86400
Return type:int
autoBuildGPL

Causes IBM Spectrum Scale to detect when the GPFS portability layer (GPL) needs to be rebuilt and to rebuild it automatically.

Valid values are ‘yes’, ‘no’ or a combination of ‘quiet’ and ‘verbose’ e.g. ‘quiet’, ‘verbose’, ‘quiet-verbose’

New in GPFS 5.0.2 - older versions will return None

Default:no
Return type:str
autoload

Starts GPFS automatically whenever the nodes are rebooted

Return type:bool
automountDir

Specifies the directory to be used by the Linux automounter for GPFS file systems that are being mounted automatically

Default:/gpfs/automountdir
Return type:str
cesSharedRoot

Specifies a directory in a GPFS file system to be used by the Cluster Export Services (CES) subsystem.

New in GPFS 4.2.1 - older versions return None

Return type:str
cifsBypassTraversalChecking

Controls the GPFS behavior while performing access checks for directories.

New in GPFS 4.2.1 - older versions return default

Default:False
Return type:bool
cipherList

Sets the security mode for the cluster

Return type:str
clusterName

Name of the cluster

Return type:str
cnfsGrace

Specifies the number of seconds a CNFS node will deny new client requests after a node failover or failback, to allow clients with existing locks to reclaim them without the possibility of some other client being granted a conflicting access

Valid values are 10 through 600

Default:90
Return type:int
cnfsMountdPort

Specifies the port number to be used for rpc.mountd

Return type:str
cnfsNFSDprocs

Specifies the number of nfsd kernel threads

Default:32
Return type:int
cnfsReboot

Specifies whether the node will reboot when CNFS monitoring detects an unrecoverable problem that can only be handled by node failover.

New in GPFS 4.1.0.4 - older versions return default

Default:True
Return type:bool
cnfsSharedRoot

Specifies a directory in a GPFS file system to be used by the clustered NFS subsystem.

cnfsVIP

Obsolete since GPFS 4.1.0.4 - newer versions return None

cnfsVersions

Specifies a list of protocol versions that CNFS should start and monitor.

New in GPFS 4.1.0.4 - older versions return None

Return type:list of int
commandAudit

Controls the logging of audit messages for GPFS commands that change the configuration of the cluster. This attribute is not supported on Windows operating systems

One of [yes, no, syslogOnly]

New in GPFS 4.2.1 - older versions return default

Default:syslogOnly
Return type:str
dataDiskCacheProtectionMethod
Return type:int
dataDiskWaitTimeForRecovery

Specifies a period of time, in seconds, during which the recovery of dataOnly disks is suspended to give the disk subsystem a chance to correct itself

Allowed values are between 0 and 3600 seconds

Default:3600
Return type:int
dataStructureDump

Path for the storage of dumps

Return type:str
deadlockBreakupDelay

Specifies how long to wait after a deadlock is detected before attempting to break up the deadlock

New in GPFS 4.1.0.4 - older versions return default

Default:0
Return type:int
deadlockDataCollectionDailyLimit

Specifies the maximum number of times that debug data can be collected every 24 hours

New in GPFS 4.1.0.4 - older versions return default

Default:10
Return type:int
deadlockDataCollectionMinInterval

Specifies the minimum interval between two consecutive collections of debug data.

New in GPFS 4.2.1 - older versions return default

Default:300
Return type:int
deadlockDetectionThreshold

Specifies the deadlock detection threshold

New in GPFS 4.1.0.4 - older versions return default

Default:300
Return type:int
deadlockDetectionThresholdForShortWaiters

Specifies the deadlock detection threshold for short waiters that should never be long.

New in GPFS 4.2.1 - older versions return default

Default:60
Return type:int
deadlockDetectionThresholdIfOverloaded

Specifies the deadlock detection threshold to use when a cluster is overloaded.

New in GPFS 4.2.1 - older versions return default

Default:1800
Return type:int
deadlockOverloadThreshold

Specifies the threshold for detecting a cluster overload condition.

New in GPFS 4.2.1 - older versions return default

Default:1
Return type:int
debugDataControl

Controls the amount of debug data that is collected

One of [none, light, medium, heavy, verbose]

New in GPFS 4.2.1 - older versions return default

Default:light
Return type:str
defaultHelperNodes

For commands that distribute work among a set of nodes, the defaultHelperNodes parameter specifies the nodes to be used.

Return type:list
defaultMountDir

Specifies the default parent directory for GPFS file systems.

Default:/gpfs
Return type:str
disableInodeUpdateOnFdatasync

Controls the inode update on fdatasync for mtime and atime updates

Default:False
Return type:bool
dmapiDataEventRetry

Controls how GPFS handles data events that are enabled again immediately after the event is handled by the DMAPI application

Default:2
Return type:int
dmapiEventTimeout

Controls the blocking of file operation threads of NFS, while in the kernel waiting for the handling of a DMAPI synchronous event

Valid range 0-86400000

Default:86400000
Return type:int
dmapiMountEvent

Controls the generation of the mount, preunmount, and unmount events

One of [all, SessionNode, RemoteNode]

Default:all
Return type:str
dmapiMountTimeout

Controls the blocking of mount operations, waiting for a disposition for the mount event to be set

Valid range is 0-86400

Default:60
Return type:int
dmapiSessionFailureTimeout

Controls the blocking of file operation threads, while in the kernel, waiting for the handling of a DMAPI synchronous event that is enqueued on a session that has experienced a failure

Allowed values are 0-86400

Default:0
Return type:int
enableIPv6

Controls whether the GPFS daemon communicates through the IPv6 network

One of [yes, no, prepare, commit]

Return type:str
enforceFilesetQuotaOnRoot

Controls whether fileset quotas should be enforced for the root user the same way as for any other users

Default:False
Return type:bool
expelDataCollectionDailyLimit

Specifies the maximum number of times that debug data associated with expelling nodes can be collected in a 24-hour period

New in GPFS 4.2.1 - older versions return default

Default:10
Return type:int
expelDataCollectionMinInterval

Specifies the minimum interval, in seconds, between two consecutive expel-related data collection attempts on the same node.

New in GPFS 4.2.1 - older versions return default

Default:120
Return type:int
failureDetectionTime

Indicates to GPFS the amount of time it takes to detect that a node has failed.

Return type:int
fastestPolicyCmpThreshold

Indicates the disk comparison count threshold, above which GPFS forces selection of this disk as the preferred disk to read and update its current speed.

Valid values >= 3

New in GPFS 4.2.1 - older versions return default

Default:50
Return type:int
fastestPolicyMaxValidPeriod

Indicates the time period after which the disk’s current evaluation is considered invalid

Valid values >= 1

New in GPFS 4.2.1 - older versions return default

Default:600
Return type:int
fastestPolicyMinDiffPercent

A percentage value indicating how GPFS selects the fastest between two disks

Valid range 0-100

New in GPFS 4.2.1 - older versions return default

Default:50
Return type:int
fastestPolicyNumReadSamples

Controls how many read samples are taken to evaluate the disk’s recent speed.

Valid values are 3 through 100

New in GPFS 4.2.1 - older versions return default

Default:5
Return type:int
fileHeatLossPercent

Specifies the reduction rate of FILE_HEAT value for every fileHeatPeriodMinutes of file inactivity

Default:10
Return type:int
fileHeatPeriodMinutes

Specifies the inactivity time before a file starts to lose FILE_HEAT value

Default:0
Return type:int
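Taken together, fileHeatLossPercent and fileHeatPeriodMinutes define a geometric decay of FILE_HEAT; a simplified model (hypothetical helper name):

```python
def file_heat_after(heat, idle_periods, loss_percent=10):
    # FILE_HEAT drops by fileHeatLossPercent for every
    # fileHeatPeriodMinutes of inactivity (default loss is 10%).
    for _ in range(idle_periods):
        heat *= (1 - loss_percent / 100.0)
    return heat

# After two idle periods at the default 10% loss, heat falls to 81%:
assert abs(file_heat_after(100.0, 2) - 81.0) < 1e-9
```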
forceLogWriteOnFdatasync

Controls forcing log writes to disk

Default:True
Return type:bool
frequentLeaveCountThreshold

Specifies the number of times a node exits the cluster within the last frequentLeaveTimespanMinutes before autorecovery ignores the next exit of that node.

Valid values are between 0 and 10

New in GPFS 5.0.1 - older versions return default

Default:0
Return type:int
frequentLeaveTimespanMinutes

Specifies the time span that is used to calculate the exit frequency of a node.

Valid values are between 1 and 1440

New in GPFS 5.0.1 - older versions return default

Default:60
Return type:int
ignorePrefetchLUNCount

When set to True, GPFS ignores the LUN count and uses only the maxMBpS value to dynamically determine the number of prefetch threads to schedule.

New in GPFS 4.2.1 - older versions return default

Default:False
Return type:bool
indefiniteRetentionProtection
Return type:bool
linuxStatfsUnits

Controls the values that are returned by the Linux functions statfs and statvfs for f_bsize, f_rsize, f_blocks, and f_bfree

One of [posix, subblock, fullblock]

New in GPFS 5.0.3 - older versions return default

Default:fullblock
Return type:str
lrocData

Controls whether user data is populated into the local read-only cache

New in GPFS 4.1.0.4 - older versions return default

Default:True
Return type:bool
lrocDataMaxFileSize

Limits the data that may be saved in the local read-only cache to only the data from small files.

New in GPFS 4.1.0.4 - older versions return default

Default:0
Return type:int
lrocDataStubFileSize

Limits the data that may be saved in the local read-only cache to only the data from the first portion of all files

New in GPFS 4.1.0.4 - older versions return default

Default:0
Return type:int
lrocDirectories

Controls whether directory blocks are populated into the local read-only cache

New in GPFS 4.1.0.4 - older versions return default

Default:True
Return type:bool
lrocEnableStoringClearText

Controls whether encrypted file data can be read into a local read-only cache (LROC) device

New in GPFS 5.0.0 - older versions return default

Default:False
Return type:bool
lrocInodes

Controls whether inodes from open files are populated into the local read-only cache

New in GPFS 4.1.0.4 - older versions return default

Default:True
Return type:bool
maxActiveIallocSegs

Specifies the number of active inode allocation segments that are maintained on the specified nodes.

The valid range is 1 - 64

Default:1 for GPFS < 5.0.2, 8 for GPFS >= 5.0.2
Return type:int
maxBufferDescs

Each buffer descriptor caches maximum block size data for a file.

Valid values are from 512 to 10,000,000

New in GPFS 4.2.1 - older versions return default

Default:10 * maxFilesToCache up to pagepool size/16 KB
Return type:int
maxDownDisksForRecovery

Specifies the maximum number of disks that may experience a failure and still be subject to an automatic recovery attempt

Valid values are between 0 and 300

New in GPFS 4.2.1 - older versions return default

Default:16
Return type:int
maxFailedNodesForRecovery

Specifies the maximum number of nodes that may be unavailable before automatic disk recovery actions are cancelled.

Valid values are between 0 and 300

New in GPFS 4.2.1 - older versions return default

Default:3
Return type:int
maxFcntlRangesPerFile

Specifies the number of fcntl locks that are allowed per file

Allowed range 10 to 200000

Default:200
Return type:int
maxFilesToCache

Specifies the number of inodes to cache for recently used files that have been closed.

Default:4000
Return type:int
maxMBpS

Specifies an estimate of how many megabytes of data can be transferred per second into or out of a single node

Default:2048
Return type:int
maxMissedPingTimeout

Sets an upper limit on the calculated missedPingTimeout (MPT) - the time that pings sent from the Cluster Manager (CM) to a node that has not renewed its lease are allowed to fail.

New in GPFS 4.2.1 - older versions return default

Default:60
Return type:int
maxStatCache

Specifies the number of inodes to keep in the stat cache

Return type:int
maxblocksize

Changes the maximum file system block size.

Valid block sizes are 64 KiB, 128 KiB, 256 KiB, 512 KiB, 1 MiB, 2 MiB, 4 MiB, 8 MiB, and 16 MiB.

Default:1M
Return type:str
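Since maxblocksize is reported as a string like ‘1M’ or ‘64K’, converting it to bytes takes a small parser (hypothetical helper, covering only the K/M suffixes used by the valid values above):

```python
def blocksize_bytes(s):
    # Convert a block-size string such as '64K' or '1M' (the default)
    # into bytes; valid sizes are powers of two from 64 KiB to 16 MiB.
    units = {'K': 1024, 'M': 1024 ** 2}
    return int(s[:-1]) * units[s[-1].upper()]

assert blocksize_bytes('1M') == 1048576
assert blocksize_bytes('64K') == 65536
```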
metadataDiskWaitTimeForRecovery

Specifies a period of time, in seconds, during which the recovery of metadata disks is suspended to give the disk subsystem a chance to correct itself

Valid values are between 0 and 3600 seconds.

Default:2400
Return type:int
minDiskWaitTimeForRecovery

Specifies a period of time, in seconds, during which the recovery of disks is suspended to give the disk subsystem a chance to correct itself

Valid values 0-3600

New in GPFS 4.2.1 - older versions return default

Default:1800
Return type:int
minMissedPingTimeout

Sets a lower limit on the calculated missedPingTimeout (MPT) - the time that pings sent from the Cluster Manager (CM) to a node that has not renewed its lease are allowed to fail.

New in GPFS 4.2.1 - older versions return default

Default:5
Return type:int
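Reading the two limits together, minMissedPingTimeout and maxMissedPingTimeout appear to bound the MPT that GPFS computes; a sketch of that clamping (an assumption based on the ‘sets a limit on the calculation’ wording, not a confirmed implementation detail):

```python
def effective_mpt(computed, min_mpt=5, max_mpt=60):
    # Assumed behaviour: the computed missedPingTimeout is clamped to
    # [minMissedPingTimeout, maxMissedPingTimeout] (defaults 5 and 60).
    return max(min_mpt, min(computed, max_mpt))

assert effective_mpt(120) == 60   # capped by maxMissedPingTimeout
assert effective_mpt(1) == 5      # raised to minMissedPingTimeout
```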
minReleaseLevel

The minimum GPFS release level at which the cluster operates

Return type:str
mmapRangeLock

Specifies POSIX or non-POSIX mmap byte-range semantics

Default:True
Return type:bool
mmfsLogLevel

Specifies the logging level

Return type:str
mmfsLogTimeStampISO8601

Controls the time stamp format for GPFS log entries

New in GPFS 4.2.2 - older versions return default

Default:True
Return type:bool
nfsPrefetchStrategy

Defines a window of the number of blocks around the current position that are treated as fuzzy-sequential access.

Valid values are between 0 and 10

New in GPFS 4.2.1 - older versions return default

Default:0
Return type:int
nistCompliance

Controls whether GPFS operates in the NIST 800-131A mode

One of [off, SP800-131A]

New in GPFS 4.1.0.4 - older versions return default

Return type:str
noSpaceEventInterval

Specifies the time interval between calling a callback script of two noDiskSpace events of a file system.

Default:120
Return type:int
nsdBufSpace

This option specifies the percentage of the page pool reserved for the network transfer of NSD requests

Valid range 10-70

Default:30
Return type:int
nsdCksumTraditional

Enables checksum data-integrity checking between a traditional NSD client node and its NSD server

New in GPFS 5.0.1 - older versions return default

Default:False
Return type:bool
nsdDumpBuffersOnCksumError

Enables the dumping of the data buffer to a file when a checksum error occurs.

New in GPFS 5.0.1 - older versions return default

Default:False
Return type:bool
nsdInlineWriteMax

Specifies the maximum transaction size that can be sent as embedded data in an NSD-write RPC

Valid values are between 0 and 8M

New in GPFS 4.2.1 - older versions return default

Default:1024
Return type:int
nsdMaxWorkerThreads

Sets the maximum number of NSD threads that can be involved in NSD I/O operations on an NSD server to the storage system to which the server is connected

Valid values are between 8 and 8192

New in GPFS 4.2.1 - older versions return default

Default:512
Return type:int
nsdMinWorkerThreads

Sets a lower bound on number of active NSD I/O threads on an NSD server node that executes I/O operations against NSDs

Valid values are between 1 and 8192

New in GPFS 4.2.1 - older versions return default

Default:16
Return type:int
nsdMultiQueue

Sets the number of NSD queues.

Valid values are between 2 and 512

New in GPFS 4.2.1 - older versions return default

Default:256
Return type:int
nsdRAIDBufferPoolSizePct

This option specifies the percentage of the page pool that is used for the GPFS Native RAID vdisk buffer pool

Valid range 10-90

Default:50
Return type:int
nsdRAIDTracks

Specifies the number of tracks in the GPFS Native RAID buffer pool, or 0 if this node does not have a GPFS Native RAID vdisk buffer pool

Valid values are 0, or 256 and greater.

New in GPFS 4.1.0.4 - older versions return None

Return type:int
nsdServerWaitTimeForMount

When mounting a file system whose disks depend on NSD servers, this option specifies the number of seconds to wait for those servers to come up

Valid values are between 0 and 1200 seconds

Default:300
Return type:int
nsdServerWaitTimeWindowOnMount

Specifies a window of time (in seconds) during which a mount can wait for NSD servers

Valid values are between 1 and 1200 seconds

Default:600
Return type:int
numaMemoryInterleave

In a Linux NUMA environment, the default memory policy is to allocate memory from the local NUMA node of the CPU from which the allocation request was made.

This parameter is used to change to an interleave memory policy for GPFS

New in GPFS 4.1.0.4 - older versions return default

Default:False
Return type:bool
onAfmCmdRequeued

Returns a Callbacks collection, the members of which will be triggered during replication when messages are queued up again because of errors. These messages are retried after 15 minutes. (Local Event)

GPFS 4.2.1+ only

onAfmFilesetChange

Returns a Callbacks collection, the members of which will be triggered when an AFM fileset is changed. If a fileset is renamed the new name is part of %reason. (Local Event)

GPFS 4.2.3+ only.

onAfmFilesetCreate

Returns a Callbacks collection, the members of which will be triggered when an AFM fileset is created. (Local Event)

GPFS 4.2.3+ only.

onAfmFilesetDelete

Returns a Callbacks collection, the members of which will be triggered when an AFM fileset is deleted. (Local Event)

GPFS 5.0.0+ only.

onAfmFilesetExpired

Returns a Callbacks collection, the members of which will be triggered when the contents of a fileset expire. (Global Event)

onAfmFilesetLink

Returns a Callbacks collection, the members of which will be triggered when an AFM fileset is linked. (Local Event)

GPFS 4.2.3+ only.

onAfmFilesetUnexpired

Returns a Callbacks collection, the members of which will be triggered when the contents of a fileset become unexpired. (Global Event)

onAfmFilesetUnlink

Returns a Callbacks collection, the members of which will be triggered when an AFM fileset is unlinked. (Local Event)

GPFS 4.2.3+ only.

onAfmFilesetUnmounted

Returns a Callbacks collection, the members of which will be triggered when the fileset is moved to an Unmounted state because the NFS server is not reachable or the remote cluster mount is not available for the GPFS native protocol. (Local Event)

GPFS 4.1.0+ only

onAfmHomeConnected

Returns a Callbacks collection, the members of which will be triggered when a gateway node connects to the home NFS server of the fileset that it is serving. (Local Event)

onAfmHomeDisconnected

Returns a Callbacks collection, the members of which will be triggered when a gateway node gets disconnected from the home NFS server of the fileset that it is serving. (Local Event)

onAfmManualResyncComplete

Returns a Callbacks collection, the members of which will be triggered when a manual resync is completed. (Local Event)

onAfmPrepopEnd

Returns a Callbacks collection, the members of which will be triggered when all the files specified by a prefetch operation have been cached successfully. (Local Event)

onAfmQueueDropped

Returns a Callbacks collection, the members of which will be triggered when replication encounters an issue that cannot be corrected. After the queue is dropped, the next recovery action attempts to fix the error and continue replicating. (Local Event)

GPFS 4.2.1+ only

onAfmRPOMiss

Returns a Callbacks collection, the members of which will be triggered when a Recovery Point Objective (RPO) is missed on DR primary filesets; the RPO manager keeps retrying the snapshots. This event occurs when there is a lot of data to replicate before the RPO snapshot can be taken, or when an error, such as a deadlock, causes recovery to keep failing. (Local Event)

GPFS 4.2.1+ only.

onAfmRecoveryEnd

Returns a Callbacks collection, the members of which will be triggered when AFM recovery ends. (Local Event)

onAfmRecoveryFail

Returns a Callbacks collection, the members of which will be triggered when recovery fails. The recovery action is retried after 300 seconds. If recovery keeps failing, the fileset is moved to a resync state if the fileset mode allows it. (Local Event)

GPFS 4.2.1+ only

onAfmRecoveryStart

Returns a Callbacks collection, the members of which will be triggered when AFM recovery starts. (Local Event)

onCcrFileChange

Returns a Callbacks collection, the members of which will be triggered when a CCR fput operation takes place. (Local Event)

GPFS 4.2.1+ only.

onCcrVarChange

Returns a Callbacks collection, the members of which will be triggered when a CCR vput operation takes place. (Local Event)

GPFS 4.2.1+ only.

onClusterManagerTakeover

Returns a Callbacks collection, the members of which will be triggered when a new cluster manager node has been elected. (Global Event)

onDaRebuildFailed

Returns a Callbacks collection, the members of which will be triggered when the spare space in a declustered array has been exhausted, and vdisk tracks involving damaged pdisks can no longer be rebuilt. The occurrence of this event indicates that fault tolerance in the declustered array has become degraded and that disk maintenance should be performed immediately. The daRemainingRedundancy parameter indicates how much fault tolerance remains in the declustered array. (Local Event)

onDeadlockDetected

Returns a Callbacks collection, the members of which will be triggered when a node detects a potential deadlock. (Local Event)

onDeadlockOverload

Returns a Callbacks collection, the members of which will be triggered when a cluster is overloaded on the node detecting the overload condition. (Local Event)

onDiskFailure

Returns a Callbacks collection, the members of which will be triggered on the file system manager node when the disk status in the file system changes to down. (Local Event)

onDiskIOHang

Returns a Callbacks collection, the members of which will be triggered when the GPFS daemon detects that a local I/O request has been pending in the kernel for more than five minutes. (Local Event)

GPFS 5.0.2+ only

onFilesetLimitExceeded

Returns a Callbacks collection, the members of which will be triggered when the file system manager detects that a fileset quota has been exceeded. (Local Event)

Note

GPFS recommends using the softQuotaExceeded event instead.

onFsstruct

Returns a Callbacks collection, the members of which will be triggered when the file system manager detects a file system structure (FS Struct) error. (Local Event)

GPFS 4.2.1+ only

onHealthCollapse

Returns a Callbacks collection, the members of which will be triggered when the node health declines below the healthCollapseThreshold long enough for the health check thread to notice. (Local Event)

GPFS 4.2.1 and 4.2.2 only

onLowDiskSpace

Returns a Callbacks collection, the members of which will be triggered when the file system manager detects that disk space is running below the low threshold that is specified in the current policy rule. (Local Event)

Note

This event is triggered every two minutes until the condition is solved.

onMmProtocolTraceFileChange

Returns a Callbacks collection, the members of which will be triggered on each CES node to check for required tracing tasks when the trace file is changed within the CCR. Allows traces to be started, stopped, and monitored across a cluster. (Local Event)

onMount

Returns a Callbacks collection, the members of which will be triggered when a file system is mounted successfully. (Local Event)

onNoDiskSpace

Returns a Callbacks collection, the members of which will be triggered when the file system encounters a disk that ran out of space. (Local Event)

Note

This event is triggered every two minutes until the condition is solved.

onNodeJoin

Returns a Callbacks collection, the members of which will be triggered when one or more nodes join the cluster. (Global Event)

onNodeLeave

Returns a Callbacks collection, the members of which will be triggered when one or more nodes leave the cluster. (Global Event)

onNsdCksumMismatch

Returns a Callbacks collection, the members of which will be triggered whenever transmission of vdisk data by the NSD network layer fails to verify the data checksum. This can indicate problems in the network between the GPFS client node and a recovery group server. The first error between a given client and server generates the callback; subsequent callbacks are generated for each ckReportingInterval occurrence. (Local Event)

onPdFailed

Returns a Callbacks collection, the members of which will be triggered whenever a pdisk in a recovery group is marked as dead, missing, failed, or readonly. (Local Event)

onPdPathDown

Returns a Callbacks collection, the members of which will be triggered whenever one of the block device paths to a pdisk disappears or becomes inoperative. The occurrence of this event can indicate connectivity problems with the JBOD array in which the pdisk resides. (Local Event)

onPdRecovered

Returns a Callbacks collection, the members of which will be triggered whenever a missing pdisk is rediscovered. (Local Event)

onPdReplacePdisk

Returns a Callbacks collection, the members of which will be triggered whenever a pdisk is marked for replacement according to the replace threshold setting of the declustered array in which it resides. (Local Event)

onPostRGRelinquish

Returns a Callbacks collection, the members of which will be triggered on a recovery group server after it has relinquished serving recovery groups. (Local Event)

onPostRGTakeover

Returns a Callbacks collection, the members of which will be triggered on a recovery group server after it has checked, attempted, or begun to serve a recovery group. (Local Event)

onPreMount

Returns a Callbacks collection, the members of which will be triggered when a file system is about to be mounted. (Local Event)

onPreRGRelinquish

Returns a Callbacks collection, the members of which will be triggered on a recovery group server prior to relinquishing service of recovery groups. (Local Event)

onPreRGTakeover

Returns a Callbacks collection, the members of which will be triggered on a recovery group server prior to attempting to open and serve recovery groups. (Local Event)

onPreShutdown

Returns a Callbacks collection, the members of which will be triggered when GPFS detects a failure and is about to shut down. (Local Event)

onPreStartup

Returns a Callbacks collection, the members of which will be triggered after the GPFS daemon completes its internal initialization and joins the cluster, but before the node runs recovery for any VFS mount points that were already mounted, and before the node starts accepting user initiated sessions. (Local Event)

onPreUnmount

Returns a Callbacks collection, the members of which will be triggered when a file system is about to be unmounted. (Local Event)

onQuorumLoss

Returns a Callbacks collection, the members of which will be triggered when a quorum has been lost in the GPFS cluster. (Global Event)

onQuorumNodeJoin

Returns a Callbacks collection, the members of which will be triggered when one or more quorum nodes join the cluster. (Global Event)

onQuorumNodeLeave

Returns a Callbacks collection, the members of which will be triggered when one or more quorum nodes leave the cluster. (Global Event)

onQuorumReached

Returns a Callbacks collection, the members of which will be triggered when a quorum has been established in the GPFS cluster. This event is triggered only on the elected cluster manager node, not on all the nodes in the cluster. (Global Event)

onRgOpenFailed

Returns a Callbacks collection, the members of which will be triggered on a recovery group server when it fails to open a recovery group that it is attempting to serve. This may be due to loss of connectivity to some or all of the disks in the recovery group. (Local Event)

onRgPanic

Returns a Callbacks collection, the members of which will be triggered on a recovery group server when it is no longer able to continue serving a recovery group. This may be due to loss of connectivity to some or all of the disks in the recovery group. (Local Event)

onSendRequestToNodes

Returns a Callbacks collection, the members of which will be triggered when a node sends a request for collecting expel-related debug data to some nodes. (Local Event)

onShutdown

Returns a Callbacks collection, the members of which will be triggered when GPFS completes the shutdown. (Local Event)

onSnapshotCreated

Returns a Callbacks collection, the members of which will be triggered after a snapshot is created, and run before the file system is resumed. (Local Event)

onSoftQuotaExceeded

Returns a Callbacks collection, the members of which will be triggered when the file system manager detects that a soft quota limit (for either files or blocks) has been exceeded. (Local Event)

onStartup

Returns a Callbacks collection, the members of which will be triggered after a successful GPFS startup and when the node is ready for user initiated sessions. (Local Event)

onTiebreakerCheck

Returns a Callbacks collection, the members of which will be triggered when a quorum node detects loss of network connectivity but before GPFS runs the algorithm that decides if the node will remain in the cluster. This event is generated only in configurations that use quorum nodes with tiebreaker disks. (Local Event)

Note

Before you add or delete the tiebreakerCheck event, you must stop the GPFS daemon on all the nodes in the cluster.

onTraceConfigChanged

Returns a Callbacks collection, the members of which will be triggered when GPFS tracing configuration is changed. (Local Event)

GPFS 4.2.1+ only

onUnmount

Returns a Callbacks collection, the members of which will be triggered when a file system is unmounted successfully. (Local Event)

onUsageUnderSoftQuota

Returns a Callbacks collection, the members of which will be triggered when the file system manager detects that quota usage has dropped below soft limits and grace time is reset. (Local Event)

pagepool

Changes the size of the cache on each node

The default value is either one-third of the physical memory on the node or 1G, whichever is smaller

Return type:str
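Size-valued attributes such as pagepool (and, below, seqDiscardThreshold and writebehindThreshold) return GPFS size strings like '1G' or '512K' rather than plain integers. A minimal sketch of a converter for such strings, assuming the binary (1024-based) K/M/G suffix convention GPFS uses for configuration values; this helper is illustrative and not part of the API:

```python
def gpfs_size_to_bytes(value):
    """Convert a GPFS size string such as '512K', '1M', or '1G' to bytes.

    Illustrative helper; assumes binary (1024-based) suffixes, which is
    how GPFS interprets K/M/G in configuration values.
    """
    multipliers = {"K": 1024, "M": 1024 ** 2, "G": 1024 ** 3, "T": 1024 ** 4}
    value = value.strip().upper()
    if value and value[-1] in multipliers:
        return int(value[:-1]) * multipliers[value[-1]]
    return int(value)  # plain byte count, no suffix
```

For example, gpfs_size_to_bytes(Cluster().pagepool) would yield the page pool size in bytes.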
pagepoolMaxPhysMemPct

Percentage of physical memory that can be assigned to the page pool.

Valid values are 10 through 90 percent

Default:75
Return type:int
panicOnIOHang

Controls whether the GPFS daemon panics the node kernel when a local I/O request is pending in the kernel for more than five minutes.

This attribute applies only to disks that the node is directly attached to.

New in GPFS 5.0.2 - older versions return False

Default:False
Return type:bool
pitWorkerThreadsPerNode

Controls the maximum number of threads to be involved in parallel processing on each node that is serving as a Parallel Inode Traversal (PIT) worker.

The range of accepted values is 0 to 8192

New in GPFS 4.2.1 - older versions return None

Return type:int
prefetchPct

Limit on the page pool space that is to be used for prefetch and write-behind buffers for active sequential streams

Valid values are between 0 and 60

New in GPFS 4.2.1 - older versions return default

Default:20
Return type:int
prefetchThreads

Controls the maximum possible number of threads dedicated to prefetching data for files that are read sequentially, or to handle sequential write-behind.

Default:72
Return type:int
proactiveReconnect

When enabled, causes nodes to proactively close problematic TCP connections with other nodes and to reestablish new connections in their place.

New in GPFS 5.0.3 - older versions return default

Default:False
Return type:bool
profile

Specifies a predefined profile of attributes to be applied. System-defined profiles are located in /usr/lpp/mmfs/profiles/.

New in GPFS 4.2.1 - older versions return default

Default:system
Return type:str
readReplicaPolicy

Specifies the location from which the FPO policy is to read replicas.

New in GPFS 4.2.1 - older versions return None

Return type:str
restripeOnDiskFailure

Specifies whether a restripe will be performed on disk failures

Return type:bool
rpcPerfNumberDayIntervals

Controls the number of days that aggregated RPC data is saved

Valid range 4-60

New in GPFS 4.1.0.4 - older versions return default

Default:30
Return type:int
rpcPerfNumberHourIntervals

Controls the number of hours that aggregated RPC data is saved.

Allowed values are 4, 6, 8, 12, or 24.

New in GPFS 4.1.0.4 - older versions return default

Default:24
Return type:int
rpcPerfNumberMinuteIntervals

Controls the number of minutes that aggregated RPC data is saved

Allowed values are 4, 5, 6, 10, 12, 15, 20, 30, or 60

New in GPFS 4.1.0.4 - older versions return default

Default:60
Return type:int
rpcPerfNumberSecondIntervals

Controls the number of seconds that aggregated RPC data is saved

Allowed values are 4, 5, 6, 10, 12, 15, 20, 30, or 60

New in GPFS 4.1.0.4 - older versions return default

Default:60
Return type:int
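The four rpcPerfNumber* attributes each accept only the specific interval counts documented above. A sketch of a validator encoding those documented constraints; the table and function names here are illustrative, not part of the API:

```python
# Allowed values per the documented constraints for each rpcPerf* attribute.
RPC_PERF_ALLOWED = {
    "rpcPerfNumberDayIntervals": set(range(4, 61)),          # valid range 4-60
    "rpcPerfNumberHourIntervals": {4, 6, 8, 12, 24},
    "rpcPerfNumberMinuteIntervals": {4, 5, 6, 10, 12, 15, 20, 30, 60},
    "rpcPerfNumberSecondIntervals": {4, 5, 6, 10, 12, 15, 20, 30, 60},
}

def validate_rpc_perf(name, value):
    """Return True if `value` is valid for the given rpcPerf* attribute."""
    return value in RPC_PERF_ALLOWED[name]
```

Checking a value before applying it avoids a round trip that would fail on the cluster.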
rpcPerfRawExecBufferSize

Specifies the number of bytes to save in the buffer that stores raw RPC execution statistics

New in GPFS 4.1.0.4 - older versions return default

Default:2
Return type:int
rpcPerfRawStatBufferSize

Specifies the number of bytes to save in the buffer that stores raw RPC performance statistics

New in GPFS 4.1.0.4 - older versions return default

Default:6
Return type:int
seqDiscardThreshold

Specifies what has to be done with the page pool buffer after it is consumed or flushed by write-behind threads

New in GPFS 4.2.1 - older versions return default

Default:1M
Return type:str
sharedTmpDir

Specifies a default global work directory where the mmapplypolicy command or the mmbackup command can store the temporary files that it generates during its processing

New in GPFS 5.0.1 - older versions return None

Return type:str
sidAutoMapRangeLength

Controls the length of the reserved range for Windows SID to UNIX ID mapping

Return type:int
sidAutoMapRangeStart

Specifies the start of the reserved range for Windows SID to UNIX ID mapping

Return type:int
subnets

Specifies subnets used to communicate between nodes in a GPFS cluster or a remote GPFS cluster.

Return type:str
sudoUser

Specifies a non-root admin user ID to be used when sudo wrappers are enabled and a root-level background process calls an administration command directly instead of through sudo

New in GPFS 5.0.0 - older versions return None

Return type:str
syncBuffsPerIteration

This parameter is used to expedite buffer flush and the rename operations that are done by MapReduce jobs

New in GPFS 4.2.1 - older versions return default

Default:100
Return type:int
syncSambaMetadataOps

Specifies whether syncing of metadata operations that are issued by the SMB server is enabled

New in GPFS 4.2.1 - older versions return None

Return type:bool
systemLogLevel

Specifies the minimum severity level for messages sent to the system log

One of [alert, critical, error, warning, notice, configuration, informational, detail, debug]

New in GPFS 4.1.0.4 - older versions return None

Return type:str
tiebreakerDisks

Controls whether GPFS will use the node quorum with tiebreaker algorithm in place of the regular node-based quorum algorithm.

Return type:list of str
tmMaxPhysMemPct
Return type:int
tscCmdPortRange

Specifies the range of port numbers to be used for extra TCP/IP ports that some administration commands need for their processing

New in GPFS 4.2.3 - older versions return None

Return type:str
uidDomain

Specifies the UID domain name for the cluster.

Return type:str
unmountOnDiskFail

Controls how the daemon responds when it detects a disk failure

One of [yes, no, meta]

Return type:str
usePersistentReserve

Specifies whether to enable or disable Persistent Reserve (PR) on the disks

Default:False
Return type:bool
verbsPorts

Specifies the InfiniBand device names and port numbers used for RDMA transfers between an NSD client and server

Return type:list of str
verbsRdma

Enables or disables InfiniBand RDMA using the Verbs API for data transfers between an NSD client and NSD server

One of [enable, disable]

Return type:str
verbsRdmaCm

Enables or disables the RDMA Connection Manager (RDMA CM or RDMA_CM) using the RDMA_CM API for establishing connections between an NSD client and NSD server

One of [enable, disable]

Return type:str
verbsRdmaPkey

Specifies an InfiniBand partition key for a connection between the specified node and an Infiniband server that is included in an InfiniBand partition

New in GPFS 4.2.3 - older versions return None

Return type:str
verbsRdmaRoCEToS

Specifies the Type of Service (ToS) value for clusters using RDMA over Converged Ethernet (RoCE).

Acceptable values for this parameter are 0, 8, 16, and 24

New in GPFS 4.1.0.4 - older versions return default

Default:-1
Return type:int
verbsRdmaSend

Enables or disables the use of InfiniBand RDMA rather than TCP for most GPFS daemon-to-daemon communication

One of [enable, disable]

Return type:str
verbsRdmasPerConnection

Sets the maximum number of simultaneous RDMA data transfer requests allowed per connection

Obsolete since GPFS 5.0.0 - newer versions return default

Return type:int
verbsRdmasPerNode

Sets the maximum number of simultaneous RDMA data transfer requests allowed per node

Obsolete since GPFS 5.0.0 - newer versions return default

Default:0
Return type:int
verbsRecvBufferCount

Defines the number of RDMA recv buffers created for each RDMA connection that is enabled for RDMA send when verbsRdmaSend is enabled

New in GPFS 5.0.0 - older versions return default

Default:128
Return type:int
verbsRecvBufferSize

Defines the size, in bytes, of the RDMA send and recv buffers that are used for RDMA connections that are enabled for RDMA send when verbsRdmaSend is enabled

New in GPFS 5.0.0 - older versions return default

Default:4096
Return type:int
verbsSendBufferMemoryMB

Sets the amount of page pool memory (in MiB) to reserve as dedicated buffer space for use by the verbsRdmaSend feature

Obsolete since GPFS 5.0.0 - newer versions return default

Default:0
Return type:int
worker1Threads

Controls the maximum number of concurrent file operations at any one instant.

Default:48
Return type:int
workerThreads

Controls an integrated group of variables that tune file system performance

The valid range is 1-8192

New in GPFS 4.2.1 - older versions return default

Default:48
Return type:int
writebehindThreshold

Specifies the point at which GPFS starts flushing new data out of the page pool for a file that is being written sequentially

New in GPFS 4.2.1 - older versions return default

Default:512K
Return type:str
arcapix.fs.gpfs.cluster.setDisableClib()

disable lazy-import of clib

arcapix.fs.gpfs.cluster.setEnableClib()

re-enable lazy-import of clib

Example

Utilising the Cluster Object

>>> from arcapix.fs.gpfs import Cluster
>>>
>>> mycluster = Cluster()
>>>
>>> print(mycluster.RemoteShellCmd)
/usr/bin/ssh
>>>
>>> print(mycluster.PrimaryConfigServer)
pixstor-sn-001