.\" snfs_config.5: auto-generated, DO NOT EDIT .\" .\" Copyright 1997-2025. Quantum Corporation. All Rights Reserved. .\" StorNext is either a trademark or registered trademark of .\" Quantum Corporation in the US and/or other countries. .\" .\" Code start macro .de Cs .sp .ft C .in +0.3i .nf .. .\" Code end macro .de Ce .fi .in -0.3i .ft R .. .\" Example file macro .de Cf .sp .in 0i .nf .. .\" Deprecated message macro .de Dp \fINOTE\fR: This setting has been deprecated and is no longer supported. It will be ignored. .. .\" Not recommended message macro .de Nr \fINOTE\fR: Not intended for general use. Only use when recommended by Apple Support. .. .\" Option header definition (first argument bold, the rest space separated) .de Oh .sp .in -0.2i \(bu XML: .ft B \\$1 .ft R \\$2 .PP Old: .ft B \\$3 .ft R \\$4 .in +0.2i .. .de Dh .sp .in -0.2i \(bu Old: .ft B \\$1 .ft R \\$2 .in +0.2i .. .TH SNFS_CONFIG 5 "August 2025" "Xsan File System" .SH NAME snfs_config \- Xsan Volume Configuration File .SH SYNOPSIS .na .nh .HP .B /Library/Preferences/Xsan/*.cfg .ad .hy .SH DESCRIPTION The \fBXsan Volume\fR configuration file describes to the \fBFile System Manager\fR (\fBFSM\fR) the physical and logical layout of an individual volume. .SH FORMAT OPTIONS The Xsan Volume uses the XML format for the configuration file (see \fBsnfs.cfgx.5\fR). This is supported on Linux MDCs and is required when using the Storage Manager web-based GUI. If the GUI is not used or not available, the \fBsncfgedit(8)\fR utility should be used to create or change the XML configuration file. .PP The old non-XML format (see \fBsnfs.cfg.5\fR) used in previous versions of Xsan is required on Windows MDCs and is valid on Linux MDCs, but the Storage Manager GUI will not recognize it. .PP Linux MDCs will automatically have their volume configuration files converted to the XML format on upgrade, if necessary. Old config files will be retained in the .IR /Library/Logs/Xsan/data/ /config_history directory.
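.PP
For illustration only, the skeleton below sketches how global variables appear as elements of the globals section in the XML format; the exact schema, required attributes, and element ordering are defined by \fBsnfs.cfgx.5\fR, and \fBsncfgedit(8)\fR should be used to make changes.
.Cs
<globals>
    <allocationStrategy>round</allocationStrategy>
    <fileLocks>false</fileLocks>
    <quotas>false</quotas>
</globals>
.Ce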
.PP When a volume is created, the configuration file is stored in a compressed format in the metadata. Some Xsan components validate that if the configuration file has changed, it is still valid for the operation of that component. The components that do this are: \fBfsm(8)\fP, \fBcvupdatefs(1)\fP, and \fBcvfsck(1)\fP. If the configuration is invalid, the component terminates. If the configuration has changed and is valid, the old configuration is saved in .br .IR /Library/Logs/Xsan/data/ /config_history/*.cfg. and the new one replaces the old one in metadata. .PP This manpage describes the configuration file in general. Format-specific information can be found in \fBsnfs.cfgx.5\fR and \fBsnfs.cfg.5\fR. .SH GLOBAL VARIABLES The file system configuration has several global variables that affect the size, function and performance of the \fBXsan File System Manager\fR (\fBFSM\fR). (The \fBFSM\fR is the controlling program that tracks file allocation and consistency across the multiple clients that have access to the volume via a Storage Area Network.) The following global variables can be modified. .PP .Oh affinityPreference AffinityPreference .Ce .PP The \fBAffinityPreference\fR variable instructs the FSM how to allocate space to a file with an \fBAffinity\fR in low space conditions. If space cannot be allocated on a storage pool with a matching \fBAffinity\fR, the system normally fails with ENOSPC. This occurs even if the file system has remaining space that could satisfy the allocation request. If this variable is set to true (Yes), instead of returning ENOSPC, the system attempts to allocate space on another storage pool with an \fBAffinity\fR of 0. .PP With this preference mechanism, the file's \fBAffinity\fR is not changed, so a subsequent allocation request will still try to use the original \fBAffinity\fR before retrying with an \fBAffinity\fR of 0.
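.PP
For example, to have allocations fall back to a storage pool with an \fBAffinity\fR of 0 rather than fail with ENOSPC, the XML configuration would carry an entry like the following (shown as an isolated fragment; see \fBsnfs.cfgx.5\fR for the surrounding structure):
.Cs
<affinityPreference>true</affinityPreference>
.Ce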
.PP The default value of false (No) retains the behavior of returning ENOSPC instead of retrying the allocation request. .PP .Oh allocationStrategy AllocationStrategy .Ce .PP The \fBAllocationStrategy\fR variable selects a method for allocating new disk file blocks in the volume. There are four methods supported: \fBRound\fR, \fBBalance\fR, \fBStrictBalance\fR and \fBFill\fR. In conjunction with \fBInodeStripeWidth\fR and \fBAllocSessionReservationSize\fR, these methods specify how, for each file, the allocator chooses an initial storage pool to allocate blocks from, and how the allocator chooses a new storage pool when it cannot honor an allocation request from a file's current storage pool. .PP The default allocation strategy is \fBRound\fR. \fBRound\fR means that when there are multiple storage pools of similar classes (for example two storage pools for non-exclusive data), the allocator should alternate (round robin) new files through the available storage pools. .PP When the strategy is \fBBalance\fR, the available blocks of each storage pool are analyzed, and the storage pool with the most total free blocks is chosen. There are a few exceptions: if a stripe group is so fragmented that it doesn't have a single block large enough to cover a large allocation request, the allocator may choose another stripe group containing bigger free chunks. Similarly, when automatic stripe alignment is in use based on the file system having a non-zero value of \fBStripeAlignSize\fR, if there isn't a large enough chunk to produce an aligned allocation from the storage pool having the most free space, another stripe group may be chosen. In such cases, it is recommended that the stripe group be defragmented using \fBsgdefrag\fR or offloaded using \fBsgoffload\fR. If this is not feasible, an alternative is to use the strategy \fBStrictBalance\fR.
The \fBStrictBalance\fR strategy is similar to the \fBBalance\fR strategy but offers a way to maintain better balance among storage pools when free space within some storage pools becomes fragmented. Unlike the \fBBalance\fR strategy, the \fBStrictBalance\fR strategy allocates space from the storage pool with the most free space, even if that space is highly fragmented. However, this can lead to increased file fragmentation for newly added files at lower overall file system fill levels compared to the \fBBalance\fR strategy, potentially reducing performance. .PP When the strategy is \fBFill\fR, the allocator will initially choose the storage pool that has the least amount of total free space. .PP Regardless of the \fBAllocationStrategy\fR method, subsequent allocation requests for any one file are directed to the same storage pool as the initial allocation until either there is insufficient space in the storage pool (in which case the allocator will choose the next storage pool that can honor the allocation request using the original criteria) or until the current allocation reaches the configured size of \fBInodeStripeWidth\fR (in which case the allocator selects the next available stripe group in a round robin fashion). .PP Also, when the Allocation Session Reservation feature is enabled, the \fBRound\fR strategy must be used. .PP Based on the above, if the goal is to use \fBAllocationStrategy\fR to strictly adjust stripe group fill levels, both \fBInodeStripeWidth\fR and \fBAllocSessionReservationSize\fR should be set to zero, disabling these features. Refer to the description of these parameters for side effects. .PP .Oh fileLockResyncTimeOut BRLResyncTimeout .Ce .PP .Nr .PP .Oh allocSessionReservationSize AllocSessionReservationSize .Ce .PP The Allocation Session Reservation (ASR) feature allows a file system to benefit from optimized allocation behavior for certain rich media streaming applications and most other workloads. 
The feature also focuses on reducing free space fragmentation. .PP By default, this feature is enabled with a size of 1 GB (1073741824 bytes). .PP An old, deprecated parameter, \fBAllocSessionReservation\fR, when set to yes would use a 1 GB segment size with no rounding. This old parameter is now ignored but can generate some warnings. .PP \fBallocSessionReservationSize\fR allows you to specify the size this feature should use when allocating segments for a session. The value is expressed in bytes, so a value of 2147483648 is 2 GB. The value must be a multiple of 1 MB. The XML file format must be in bytes. The old configuration file format can use multipliers such as \fBm\fP for MB or \fBg\fP for GB. If the multiplier is omitted in the old configuration file, the value is interpreted as bytes as in the XML format. .PP A value of 0 turns off this capability and falls back on the base allocator. When enabled, the value can range from 128 MB (134217728) to 1 TB (1099511627776). (The largest value would indicate segments are 1 TB in size, which is extremely large.) The feature starts with the specified size and then may use rounding to better handle users' requests. See also \fBInodeStripeWidth\fR. .PP There are three session types: small, medium, and large. The type is determined by the file offset and requested allocation size. Small sessions are for sizes (offset plus allocation size) smaller than 1 MB. Medium sessions are for sizes from 1 MB through 1/10th of the \fBallocSessionReservationSize\fR. Large sessions are sizes bigger than medium. .PP Here is another way to think of these three types: small sessions collect or organize all small files into small session chunks; medium sessions collect medium sized files by chunks using their parent directory; and large files collect their own chunks and are allocated independently of other files. .PP All sessions are client specific. Multiple writers to the same directory or large file on different clients will use different sessions.
Small files from different clients use different chunks by client. .PP Small sessions use a smaller chunk size than the configured \fBallocSessionReservationSize\fR. The small chunk size is determined by dividing the configured size by 32. For 128 MB, the small chunk size is 4 MB. For 1 GB, the small chunk size is 32 MB. .PP Files can start using one session type and then move to another session type. If a file starts in a medium session and then becomes large, it "reserves" the remainder of the session chunk it was using for itself. After a session is reserved for a file, a new session segment will be allocated for any other medium files in that directory. .PP When allocating subsequent pieces for a session, they are rotated around to other stripe groups that can hold user data unless \fBInodeStripeWidth\fR is set to 0. When \fBInodeStripeWidth\fR is set, session chunks are rotated in a similar fashion to \fBInodeStripeWidth\fR. The direction of rotation is determined by a combination of the session key and the index of the client in the client table. The session key is based on the inode number, so odd inodes will rotate in a different direction from even inodes. Directory session keys are based on the inode number of the parent directory. .PP If this capability is enabled, \fBStripeAlignSize\fR is forced to 0. In fact, all stripe alignment requests are disabled because they can cause clipping and can lead to severe free-space fragmentation. .PP The old \fBAllocSessionReservation\fR parameter is deprecated and replaced by \fBallocSessionReservationSize\fR. .PP If any of the following "special" allocation functions are detected, \fBallocSessionReservationSize\fR is turned off for that allocation: \fBPerfectFit\fR, \fBMustFit\fR, or \fBGapped files\fR. .PP When this feature is enabled, \fBAllocationStrategy\fR must be set to \fBRound\fR. As of StorNext 6, this is enforced when creating and modifying file systems.
If a file system was created using a prior version of StorNext and ASR was enabled but \fBAllocationStrategy\fR was not set to \fBRound\fR, the FSM will run. However, the \fBAllocationStrategy\fR will be treated as \fBRound\fR and a warning will be issued whenever the configuration file is parsed. .PP .Oh bufferCacheSize BufferCacheSize .Ce .PP This variable defines how much memory to use in the FSM program for general metadata information caching. The amount of memory consumed is up to 2 times the value specified but typically less. .PP Increasing this value can improve performance of many metadata operations by performing a memory cache access to directory blocks, inode info and other metadata info. This is about 10 - 1000 times faster than performing I/O. .PP There are two buffer caches: the L1 cache and the L2 cache. If bufferCacheSize is configured as 1G or smaller, only the L1 cache is used. If bufferCacheSize is configured greater than 1G, the first 512M is used by the L1 cache and the remainder is used by the L2 cache. Blocks may reside in both caches. Blocks in the L2 cache are compressed by about a factor of 2.4, allowing for better memory utilization. For example, if bufferCacheSize is set to a value of 8G, the FSM will actually be able to cache about 7.5 * 2.4 = 18 G of metadata. Depending on the amount of RAM in the MDC and the number of allocated metadata blocks, in some cases it may be possible to keep all used metadata in cache which can dramatically improve performance for file system scanning. Cvfsck also uses the buffer cache and specifying a large enough value of bufferCacheSize to cover all metadata will result in a large speed increase. The cvadmin "metadata" command can be used to determine the value of bufferCacheSize required to cache all metadata. .PP Also see the useL2BufferCache configuration parameter. .PP .Oh caseInsensitive .Ce .PP The \fBcaseInsensitive\fR variable controls how the FSM reports case sensitivity to clients. 
Windows clients are always case insensitive, Mac clients default to case insensitive, but if the FSM is configured as case sensitive then they will operate in case sensitive mode. Linux clients will follow the configuration variable, but can operate in case insensitive mode on a case sensitive filesystem by using the caseinsensitive mount option. Linux clients must be at the 5.4 release or beyond to enable this behavior. .PP Note: You must stop the file system and run \fBcvupdatefs\fR once the config file has been updated in order to enable or disable case insensitivity. Clients must re-mount the file system to pick up the change. .PP When enabling case insensitivity, it is also strongly recommended that \fBcvfsck -A\fR be run to detect name case collisions. \fBCvupdatefs\fR will not enable case insensitivity when name case collisions are present in the file system. .PP .Oh cvRootDir CvRootDir .Ce .PP .Nr .PP The \fBCvRootDir\fR variable specifies the directory in the StorNext file system that will be mounted by clients. The specified path is an absolute pathname of a directory that will become the root of the mounted file system. The default value for the \fBCvRootDir\fR path is the root of the file system, "/". This feature is available only with Quantum StorNext Appliance products. .PP .Oh debug Debug .Ce .PP The \fBDebug\fR variable turns on debug functions for the FSM. The output is sent to .IR /Library/Logs/Xsan/data/ /log/cvfs_log . These data may be useful when a problem occurs. The following list shows which value turns on a specific debug trace. Multiple debugging options may be selected by calculating the bitwise OR of the options' values to use as debug_value. Output from the debugging options is accumulated into a single file.
.Cs 0x00000001 General Information 0x00000002 Sockets 0x00000004 Messages 0x00000008 Connections 0x00000010 File (VFS) requests 0x00000020 File file (VOPS) 0x00000040 Allocations 0x00000080 Inodes 0x00000100 Tokens 0x00000200 Directories 0x00000400 Attributes 0x00000800 Bandwidth Management 0x00001000 Quotas 0x00002000 Administrative Management 0x00004000 I/O 0x00008000 Data Migration 0x00010000 B+Trees 0x00020000 Transactions and Journal 0x00040000 REST API calls and data 0x00080000 Memory Management 0x00100000 QOS IO 0x00200000 External API 0x00400000 Windows Security 0x00800000 Journal Activity 0x01000000 Dump Statistics (Once Only) 0x02000000 Extended Buffers 0x04000000 Extended Directories 0x08000000 Queues 0x10000000 Extended Inodes 0x20000000 Metadata Archive 0x40000000 Xattr manipulation 0x80000000 Development debug .Ce .PP \fINOTE\fR: The performance of the volume is dramatically affected by turning on debugging traces. .PP .Oh dirWarp DirWarp .Ce .PP .Dp .PP .Oh disableRecycleBin DisableRecycleBin .Ce .PP When configured, this setting causes native StorNext Windows clients to not use the Recycle Bin when files are removed using File Explorer. Similarly, Xsan clients will not use the Trash when files are removed using macOS Finder. .PP When setting \fBDisableRecycleBin\fR on an existing volume, the hidden $RECYCLE.BIN folder in the root of the volume will be renamed $RECYCLE.BIN-saved but will continue to have the "hidden" attribute. Also a dangling symbolic link named $RECYCLE.BIN will be created in the root of the volume. This is intentional as the link causes Windows File Explorer to not use the Recycle Bin for the file system. .PP Similarly, if the volume has a .Trashes directory at the root of the volume for the macOS Finder Trash, it will be renamed .Trashes-saved and a file named .Trashes will be created in its place. This is intentional as having a regular file named .Trashes causes Finder to not use the Trash for the file system. 
.PP When \fBDisableRecycleBin\fR is active, attempts to delete or rename the $RECYCLE.BIN symbolic link or the .Trashes file will result in a "permission denied" error. .PP After setting \fBDisableRecycleBin\fR, the directories $RECYCLE.BIN-saved and .Trashes-saved should be removed if they are no longer needed. Also, StorNext clients need to remount the file system for the feature to take effect. .PP Also note that use of the \fBunixpermbits\fR security model implies \fBDisableRecycleBin=yes\fR behavior with respect to $RECYCLE.BIN but not .Trashes. .Oh enforceAcls EnforceACLs .Ce .PP Enables Access Control List enforcement on Xsan clients. On non-Xsan MDCs, \fBwindowsSecurity\fR should also be enabled for this feature to work with Xsan clients. .PP This variable is only applicable when \fBsecurityModel\fR is set to \fBlegacy\fR. It is ignored for other \fBsecurityModel\fR values. See \fBsecurityModel\fR for details. .PP .Oh enableSpotlight EnableSpotlight .Ce .PP Enable Spotlight indexing. .PP .Oh eventFiles EventFiles .Ce .PP .Nr .PP Enables processing of event files for Data Migration. .PP .Oh eventFileDir EventFileDir .Ce .PP .Nr .PP Specifies the location to put event files. .PP .Oh extentCountThreshold ExtentCountThreshold .Ce .PP When a file has this many extents, a RAS event is triggered to warn of fragmented files. The default value is 49152. A value of 0 or 1 disables the RAS event. This value must be between 0 and 33553408 (0x1FFFC00), inclusive. .PP .Oh fileLocks FileLocks .Ce .PP This variable enables or disables the tracking and enforcement of file-system-wide file locking. Enabling the \fBFileLocks\fR feature allows file locks to be tracked across all clients of the volume. The FileLocks feature supports both the POSIX file locking model and the Windows file locking model. .PP If enabled, byte-range file locks are coordinated through the FSM, allowing a lock set by one client to block overlapping locks by other clients.
If disabled, then byte-range locks are local to a client and do not prevent other clients from getting byte-range locks on a file; however, they do prevent overlapping lock attempts on the same client. .PP .Oh forcePerfectFit ForcePerfectFit .Ce .PP .Nr .PP Enables a specialized allocation mode where all files are automatically aligned and rounded to \fBPerfectFitSize\fR blocks. If this is enabled, \fBallocSessionReservationSize\fR is ignored. .PP .Oh fsBlockSize FsBlockSize .Ce .PP The File System Block Size defines the granularity of the volume's allocation size. The block size is fixed at 4K. When an older file system is upgraded to StorNext 5, if the block size is other than 4K, the file system is converted to a 4K block size. For these file systems, the original block size value remains in the config file. If a file system is remade that had a file system block size other than 4K, the config file is rewritten, changing the file system block size parameter value to 4K. .PP .Oh fsCapacityThreshold FsCapacityThreshold .Ce .PP When a file system is over \fBfsCapacityThreshold\fR percent full, a RAS event is sent to warn of this condition. This value must be between 0 and 100, inclusive. The default value is 0, which disables the RAS event for all file systems except the HA shared file system, which defaults to 85%. To disable this RAS event for the HA shared file system, set fsCapacityThreshold to 100. .PP .Oh globalShareMode GlobalShareMode .Ce .PP The \fBGlobalShareMode\fR variable enables or disables the enforcement of Windows Share Modes across StorNext clients. This feature is limited to StorNext clients running on Microsoft Windows platforms. See the Windows CreateFile documentation for the details on the behavior of share modes. When enabled, sharing violations will be detected between processes on different StorNext clients accessing the same file. Otherwise sharing violations will only be detected between processes on the same system.
The default of this variable is \fBfalse\fR. This value may be modified for existing volumes. .PP .Oh globalSuperUser GlobalSuperUser .Ce .PP The \fBGlobal Super User\fR variable allows the administrator to decide if any user with super-user privileges may use those privileges on the file system. When this variable is set to \fBtrue\fR, any super-user has global access rights on the volume. This may be equated to the \fBmaproot=0\fR directive in NFS. When the \fBGlobal Super User\fR variable is set to \fBfalse\fR, a super-user may only modify files where it has access rights as a normal user. This value may be modified for existing volumes. If \fBstorageManager\fR is enabled and this variable is set to \fBfalse\fR, the value will be overridden and set to \fBtrue\fR on storage manager nodes. A storage manager node is the MDC or a Distributed Data Mover client. Apple Xsan clients do not honor the setting of \fBglobalSuperUser\fP. .PP .Oh haFsType HaFsType .Ce .PP The \fBHaFsType\fR configuration item turns on Xsan High Availability (HA) protection for a file system, which prevents data corruption in split-brain scenarios. HA detects conditions where split brain is possible and triggers a hardware reset of the server to remove the possibility of a split-brain scenario. This occurs when an activated FSM is not properly maintaining its brand of an arbitration block (ARB) on the metadata LUN. Timers on the activated and standby FSMs coordinate the usurpation of the ARB so that the activated server will relinquish control or perform a hardware reset before the standby FSM can take over. It is very important to configure all file systems correctly and consistently between the two servers in the HA cluster. .PP There are currently three types of HA monitoring that are indicated by the \fBHaShared\fR, \fBHaManaged\fR, and \fBHaUnmanaged\fR configuration parameters.
.PP The \fBHaShared\fR dedicated file system holds shared data for the operation of the \fBStorNext File System\fR and \fBStorNext Storage Manager\fR (SNSM). There must be one and only one \fBHaShared\fR file system configured for these installations. The running of SNSM processes and the starting of managed file systems is triggered by activation of the \fBHaShared\fR file system. In addition to being monitored for ARB branding as described above, the exit of the \fBHaShared\fR FSM triggers a hardware reset to ensure that SNSM processes are stopped if the shared file system is not unmounted. .PP The \fBHaManaged\fR file systems are not started until the \fBHaShared\fR file system activates. This keeps all the managed file systems collocated with the SNSM processes. It also means that they cannot experience split-brain corruption because there is no redundant server to compete for control, so they are not monitored and cannot trigger a hardware reset. .PP The \fBHaUnmanaged\fR file systems are monitored. The minimum configuration necessary for an HA cluster is to: 1) place this type in all the FSMs, and 2) enter the peer server's IP address in the .BR ha_peer (4) file. Unmanaged FSMs can activate on either server and fail over to the peer server without a hardware reset under normal operating conditions. .PP On non-HA setups, the special \fBHaUnmonitored\fR type is used to indicate that no HA monitoring is done on the file systems. Note that setting HaFsType to HaUnmonitored disables the HA monitor timers used to guarantee against split brain. When two MDCs are configured to run as an HA pair but full HA protection is disabled in this way, it is possible in rare situations for file system metadata to become corrupt if there are lengthy delays or excessive loads in the LAN and SAN networks that prevent an active FSM from maintaining its branding of the ARB in a timely manner.
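.PP
As a sketch, an unmanaged file system in an HA cluster would carry an entry such as the following (fragment only; see \fBsnfs.cfgx.5\fR for the full file layout), together with the peer server's address in the \fBha_peer\fR(4) file:
.Cs
<haFsType>HaUnmanaged</haFsType>
.Ce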
.PP .Oh inodeCacheSize InodeCacheSize .Ce .PP This variable defines how many inodes can be cached in the FSM program. An in-core inode is approximately 800 - 1000 bytes per entry. .PP .Oh inodeDeleteMax InodeDeleteMax .Ce .PP .Nr .PP Sets the trickle delete rate of inodes that fall under the \fBPerfect Fit\fR check (see the \fBForce Perfect Fit\fR option for more information). If \fBInode Delete Max\fR is set to 0 or is excluded from the configuration file, it is set to an internally calculated value. .PP .Oh inodeExpandMin InodeExpandMin .PP .Oh inodeExpandInc InodeExpandInc .PP .Oh inodeExpandMax InodeExpandMax .Ce .PP The \fBinodeExpandMin\fR, \fBinodeExpandInc\fR and \fBinodeExpandMax\fR variables configure the floor, increment and ceiling, respectively, for the block allocation size of a dynamically expanding file. The new format requires this value be specified in bytes and multipliers are not supported. In the old format, when the value is specified without a multiplier suffix, it is a number of volume blocks; when specified with a multiplier, it is bytes. .PP The first time a file requires space, \fBinodeExpandMin\fR blocks are allocated. When an allocation is exhausted, a new set of blocks is allocated equal to the size of the previous allocation to this file plus \fBinodeExpandInc\fR additional blocks. Each new allocation size will increase until the allocations reach \fBinodeExpandMax\fR blocks. Any expansion that occurs thereafter will always use \fBinodeExpandMax\fR blocks per expansion. .PP \fINOTE\fR: when \fBinodeExpandInc\fR is not a factor of \fBinodeExpandMin\fR, all new allocation sizes will be rounded up to the next \fBinodeExpandMin\fR boundary. The allocation increment rules are still used, but the actual allocation size is always a multiple of \fBinodeExpandMin\fR.
.PP \fINOTE\fR: The explicit use of the configuration variables \fBinodeExpandMin\fR, \fBinodeExpandInc\fR and \fBinodeExpandMax\fR is being deprecated in favor of an internal table driven mechanism. Although they are still supported for backward compatibility, there may be warnings during the conversion of an old configuration file to an XML format. .PP .Oh inodeStripeWidth InodeStripeWidth .Ce .PP The \fBInode Stripe Width\fR variable defines how a file is striped across the volume's data storage pools. The default value is 4 GB (4294967296). After the initial placement policy has selected a storage pool for the first extent of the file, for each \fBInode Stripe Width\fR extent the allocation is changed to prefer the next storage pool allowed to contain file data. Next refers to the next numerical stripe group number going up or down. (The direction is determined using the inode number: odd inode numbers go up or increment, and even inode numbers go down or decrement). The rotation is modulo the number of stripe groups that can hold data. .PP When \fBInode Stripe Width\fR is not specified, file data allocations will typically attempt to use the same storage pool as the initial allocation to the file. .PP When used with an \fBAllocation Strategy\fR setting of \fBRound\fR, files will be spread around the allocation groups both in terms of where their initial allocation is and in how the file contents are spread out. .PP \fBInode Stripe Width\fR is intended for large files. The typical value would be many times the maximum \fBStripe Breadth\fR of the data storage pools. The value cannot be less than the maximum \fBStripe Breadth\fR of the data storage pools. Note that when some storage pools are full, this policy will start to prefer the storage pool logically following the full one. A typical value is 4 GB (4294967296) or 8 GB (8589934592). The size is capped at 1099511627776 (1 TB). .PP If this value is configured too small, fragmentation can occur.
Consider, for example, a setting of 1 MB with files as large as 100 GB: each 100 GB file would have 102,400 extents! .PP The new format requires this value be specified in bytes, and multipliers are not supported. In the old format, when the value is specified without a multiplier suffix, it is a number of volume blocks; when specified with a multiplier, it is bytes. .PP When \fBallocSessionReservationSize\fP is non-zero, this parameter is forced to be >= \fBallocSessionReservationSize\fP. .PP If \fBInode Stripe Width\fR is greater than \fBallocSessionReservationSize\fP, files larger than \fBallocSessionReservationSize\fP will use \fBInode Stripe Width\fR as their \fBallocSessionReservationSize\fP for allocations with an offset beyond \fBallocSessionReservationSize\fP. .PP .Oh ioTokens IoTokens .Ce .PP The \fBI/O Tokens\fR variable allows the administrator to select which coherency model should be used when different clients open the same file concurrently. With \fBioTokens\fP set to false, the coherency model uses three states: exclusive, shared, and shared write. If a file is exclusive, only one client is using the file. Shared indicates that multiple clients have the file open, but only for read. This allows clients to cache data in memory. Shared write indicates multiple clients have the file open and at least one client has the file open for write. With "Shared Write" mode, coherency is resolved by using DMA I/O and no caching of data. .PP A problem with DMA I/O is that small or unaligned I/Os need to do a read-modify-write. So, two racing clients can undo each other's writes since they could have data in memory. This occurs when a client reads into a buffer, modifies part of the buffer, and then writes it using DMA (after the other client's write that occurred before this client read into the buffer being written).
Different platforms have requirements on the granularity of DMA I/O, usually at least 512 bytes that must be written and also using a 512-byte or greater boundary for the start and end of the I/O. .PP If one sets \fBioTokens\fP to true (the default setting), each I/O performed by a client must have a token. Clients cache and can do many I/Os while they have the token. When the token is revoked, all data and associated attributes are flushed. .PP Customers who have multiple writers on a file should set \fBioTokens\fP to true, unless they know that the granularity and length of I/Os are safe for DMA. File locking does NOT prevent read-modify-write across lock boundaries. .PP The default for I/O Tokens is true. .PP For backward compatibility, if a client opens a file from a prior release that does not support \fBioTokens\fP, the coherency model drops back to the "Shared Write" model using DMA I/O (\fBioTokens\fP false), but on a file-by-file basis. .PP If \fBioTokens\fP is changed and the MDC is restarted, files that were open at that time continue to operate in the model before the change. To switch these files to the new value of \fBioTokens\fP, all applications must close the file, wait for a few seconds, and then re-open it. Or, if the value was switched from true to false, a new client can open the file and all clients will transparently be switched to the old model on that file. .PP .Oh journalSize JournalSize .Ce .PP Controls the size of the volume journal. .BR cvupdatefs (8) must be run after changing this value for it to take effect. The FSM will not activate if it detects that the journal size has been changed in the config file, but the metadata has not been updated. .PP .Oh maintenanceMode MaintenanceMode .Ce .PP The \fBmaintenanceMode\fR parameter enables or disables maintenance mode for the file system. In maintenance mode, all client mount requests are rejected by the FSM except from the client running on the same node as the FSM.
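.PP
For example, to reject mounts from all clients other than the FSM's own node during maintenance, the configuration would carry a fragment such as the following (illustrative only; see \fBsnfs.cfgx.5\fR for the full file layout):
.Cs
<maintenanceMode>true</maintenanceMode>
.Ce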
.PP .Nr .PP .Oh maxLogs MaxLogs .Ce .PP The \fBmaxLogs\fR variable defines the maximum number of logs an FSM can rotate through when they reach \fBmaxLogSize\fR. The current log file resides in .IR /Library/Logs/Xsan/data/ /log/cvlog . .PP .Oh maxLogSize MaxLogSize .Ce .PP The \fBmaxLogSize\fR variable defines the maximum number of bytes an FSM log file should grow to. The log file resides in .IR /Library/Logs/Xsan/data/ /log/cvlog . When the log file grows to the specified size, it is moved to \fBcvlog_\fR and a new \fBcvlog\fR is started. Therefore, the maximum space consumed will be \fBmaxLogs\fR multiplied by \fBmaxLogSize\fR. .PP .Oh namedStreams NamedStreams .Ce .PP The \fBnamedStreams\fR parameter enables or disables support for Apple Named Streams. Named Streams can be used by macOS clients to efficiently store resource forks and extended attributes directly in file system metadata instead of using Apple Double files. If namedStreams is not enabled when the file system is initialized, .BR cvupdatefs (8) must be run after enabling namedStreams for it to take effect. The FSM will not activate if it detects that namedStreams has been enabled in the config file, but the metadata has not been updated by \fBcvupdatefs\fR. Enabling namedStreams is meant to be a permanent operation. Once enabled, disabling namedStreams requires a special procedure, available only through technical support, that is not always feasible. Note that most "copy" programs on Windows and Linux do not preserve named streams. This includes Windows Explorer. Also note that this parameter applies to Apple Named Streams support in the file system only. The StorNext NAS SMB server has its own named streams option that must be activated separately. .PP .Oh opHangLimitSecs OpHangLimitSecs .Ce .PP This variable defines the time threshold used by the FSM program to discover hung operations. The default is 180. It can be disabled by specifying 0.
When the FSM program detects an I/O hang, it will stop execution in order to initiate failover to the backup system. .PP .Oh perfectFitSize PerfectFitSize .Ce .PP For files in perfect fit mode, all allocations will be rounded up to a multiple of the number of volume blocks set by this variable. Perfect fit mode can be enabled on an individual file by an application using the Xsan extended API, or for an entire file system by setting \fBforcePerfectFit\fR. .PP If \fBInodeStripeWidth\fP or \fBallocSessionReservationSize\fP is non-zero and perfect fit is not being applied to an allocation, this rounding is skipped. .PP .Oh quotas Quotas .Ce .PP The \fBquotas\fR variable enables or disables the enforcement of volume quotas. Enabling the quotas feature allows storage usage to be tracked for individual users and groups. Setting hard and soft quotas allows administrators to limit the amount of storage consumed by a particular user/group ID. See .BR snquota (1) for information on quotas feature commands. .PP \fINOTE\fR: Quotas are calculated differently on Windows and Linux systems. It is not possible to migrate a metadata controller running quotas between these platform types. .PP \fINOTE\fR: Quotas are not allowed when \fBsecurityModel\fR is set to legacy and \fBwindowsSecurity\fR is set to false. .PP \fINOTE\fR: When using a Windows MDC, quotas are not allowed if \fBsecurityModel\fR is set to unixpermbits. .PP .Oh quotaHistoryDays QuotaHistoryDays .Ce .PP When the \fBquotas\fR variable (see above) is turned on, there will be nightly logging of the current quota limits and values. The logs will be placed in the .IR /Library/Logs/Xsan/data/ /quota_history directory. This variable specifies the number of days of logs to keep. Valid values are 0 (no logs are kept) to 3650 (10 years of nightly logs are kept). The default is 7. .PP .Oh remoteNotification RemoteNotification .Ce .PP The \fBremoteNotification\fR variable controls the Windows Remote Directory Notification feature.
The default value is \fBno\fR, which disables the feature. Note: this option is not intended for general use. Only use when recommended by Apple Support. .PP .Oh renameTracking RenameTracking .Ce .PP The \fBrenameTracking\fR variable controls the \fBStorNext Storage Manager\fR (SNSM) rename tracking feature. This replaces the (global) Storage Manager configuration variable MICRO_RENAME that was present in older versions of StorNext. By default it is set to \fBfalse\fR. Note that this feature should ONLY be enabled at sites where Microsoft applications, or other similar applications, end up renaming operational files during their processing. See the fsrecover(1) man page for more information on the use of renameTracking. .PP .Oh reservedSpace ReservedSpace .Ce .PP \fINOTE\fR: Not intended for general use. Only use when recommended by Quantum Support. .PP The \fBreservedSpace\fR parameter allows the administrator to control the use of delayed allocations on clients. The default value is \fBtrue\fR. \fBreservedSpace\fR is a performance feature that allows clients to perform buffered writes on a file without first obtaining real allocations from the FSM. The allocations are later performed when the data are flushed to disk in the background by a daemon performing a periodic sync. .PP When \fBreservedSpace\fR is \fBtrue\fR, the FSM reserves enough disk space so that clients are able to safely perform these delayed allocations. The metadata server reserves a minimum of 4GB per stripe group and up to 280 megabytes per client per stripe group. .PP Setting \fBreservedSpace\fR to \fBfalse\fR allows slightly more disk space to be used, but adversely affects buffer cache performance and may result in serious fragmentation. .PP XML: metadataArchive .PP The \fBmetadataArchive\fR statement is used to enable or disable the Metadata Archive created by the FSM.
The Metadata Archive contains a copy of all file system metadata, including past history of metadata changes if \fBmetadataArchiveDays\fR is set to a value greater than zero. The Metadata Archive is used for disaster recovery, file system event notification, and file system auditing, among other features. .PP XML: metadataArchiveDir .PP The \fBmetadataArchiveDir\fR statement is used to change the path in which the Metadata Archive is created. The default path is .IR /System/Library/Filesystems/acfs.fs/Contents/database/mdarchives/ for all file systems except non-managed file systems not running in an HA environment, in which case the path is .IR /Library/Logs/Xsan/data/ / . .PP XML: metadataArchiveSearch .PP The \fBmetadataArchiveSearch\fR statement is used to enable or disable the Metadata Archive Search capability in Metadata Archive. If enabled, Metadata Archive supports advanced searching capabilities that are used by various other StorNext features. Metadata Archive Search is enabled by default and should only be turned off if performance issues are experienced. .PP XML: metadataArchiveCache .PP The \fBmetadataArchiveCache\fR statement is used to configure the size of the memory cache for the Metadata Archive. The minimum cache size is 1GB, the maximum is 500GB, and the default is 2GB. .PP XML: metadataArchiveDays .PP The \fBmetadataArchiveDays\fR statement is used to set the number of days of metadata history to keep available in the Metadata Archive. The default value is zero (no metadata history). .PP XML: audit .PP The \fBaudit\fR keyword controls whether the file system maintains extra metadata for use with the snaudit command and for tracking client activity on files. The default value is \fBfalse\fR. This feature requires that \fBmetadataArchive\fR be enabled. .PP XML: restAccess .PP Controls the presentation of a REST API for various file system capabilities on Linux systems. An HTTPS service is presented by the FSM if this is enabled.
Various utilities such as \fBsgmanage\fR and parts of the GUI make use of this. Some REST services also depend on \fBmetadataArchive\fR being enabled. When the mode is set to \fBprivileged\fR, the access information for the service is only available to privileged users. When the mode is \fBenabled\fR, any user may view the service. The service may be disabled completely by setting this to \fBdisabled\fR. The default is \fBprivileged\fR. .PP .Oh securityModel SecurityModel .Ce .PP The \fBsecurityModel\fR variable determines the security model to use on Xsan clients. \fBlegacy\fR is the default value. .PP When set to \fBlegacy\fR, the \fBwindowsSecurity\fR variable is checked to determine whether or not Windows clients should make use of the Windows Security Reference Monitor (ACLs). The \fBwindowsIdMapping\fR variable is ignored for this security model. .PP When set to \fBacl\fR, all Xsan clients (Windows and Unix) will make use of the Windows Security Reference Monitor (ACLs). The \fBwindowsSecurity\fR, \fBwindowsIdMapping\fR, and \fBenforceAcls\fR variables are ignored for this security model. .PP When set to \fBunixpermbits\fR, all Xsan clients (Unix and Windows) will use Unix permission bit settings when performing file access checks. When \fBunixpermbits\fR is specified, an additional variable, \fBwindowsIdMapping\fR, is used to control the method used to perform the Windows User to Unix User/Group ID mappings. See the \fBwindowsIdMapping\fR variable for additional information. The \fBwindowsSecurity\fR, \fBuseActiveDirectorySFU\fR, \fBenforceAcls\fR, and \fBunixIdFabricationOnWindows\fR variables are ignored for this security model. .PP \fINOTE\fR: The \fBunixpermbits\fR setting does not support the Windows NtCreateFile function FILE_OPEN_BY_FILE_ID option, which opens a file by inode number rather than by file name. .PP \fINOTE\fR: Using \fBunixpermbits\fR results in the Windows Recycle Bin being disabled.
See the description of the \fBdisableRecycleBin\fR option. .PP .Oh spotlightSearchLevel SpotlightSearchLevel .Ce .PP Sets the Spotlight search level. This option applies only when Xsan MDCs are used; it should not be used elsewhere, as it can interfere with Spotlight Proxy functionality. .PP .Oh spotlightUseProxy SpotlightUseProxy .Ce .PP Enables properly configured Xsan clients to act as proxy servers for macOS Spotlight search on Xsan. .PP .Oh stripeAlignSize StripeAlignSize .Ce .PP The \fBstripeAlignSize\fR statement causes the allocator to automatically attempt stripe alignment and rounding of allocations greater than or equal to this size. The new format requires this value be specified in bytes, and multipliers are not supported. In the old format, when the value is specified without a multiplier suffix, it is a number of volume blocks; when specified with a multiplier, it is bytes. If set to the default value (-1), it is internally set to the size of the largest \fBstripeBreadth\fR found for any \fBstripeGroup\fR that can hold user data. A value of 0 turns off automatic stripe alignment. Stripe-aligned allocations are rounded up so that allocations are one stripe breadth or larger. .PP If an allocation fails with stripe alignment enabled, another attempt is made to allocate the space without stripe alignment. .PP If \fBallocSessionReservationSize\fP is enabled, \fBstripeAlignSize\fR is set to 0 to reduce the fragmentation that occurs when allocations are clipped within segments. .PP .Oh trimOnClose TrimOnClose .Ce .PP .Nr .PP .Oh useL2BufferCache UseL2BufferCache .Ce .PP The \fBuseL2BufferCache\fR variable determines whether the FSM should use the compressed L2 metadata block cache when the bufferCacheSize is greater than 1GB. The default is true. Setting this variable to false may delay FSM startup when using a very large value for bufferCacheSize. .PP \fINOTE\fR: This variable may be removed in a future release. .PP \fINOTE\fR: Not intended for general use.
Only use when recommended by Apple Support. .Oh unixDirectoryCreationModeOnWindows UnixDirectoryCreationModeOnWindows .Ce .PP The \fBunixDirectoryCreationModeOnWindows\fR variable instructs the FSM to pass this value back to Microsoft Windows clients. The Windows Xsan clients will then use this value as the permission mode when creating a directory. The default value is 0755. This value must be between 0 and 0777, inclusive. .PP .Oh unixFileCreationModeOnWindows UnixFileCreationModeOnWindows .Ce .PP The \fBunixFileCreationModeOnWindows\fR variable instructs the FSM to pass this value back to Microsoft Windows clients. The Windows Xsan clients will then use this value as the permission mode when creating a file. The default value is 0644. This value must be between 0 and 0777, inclusive. .PP .Oh unixIdFabricationOnWindows UnixIdFabricationOnWindows .Ce .PP The \fBunixIdFabricationOnWindows\fR variable is simply passed back to a Microsoft Windows client. The client uses this information to turn on/off "fabrication" of uid/gids from a GUID obtained from Microsoft Active Directory for a given Windows user. A value of \fBtrue\fR will cause the client for this volume to fabricate the uid/gid and possibly override any specific uid/gid already in Microsoft Active Directory for the Windows user. This setting should only be enabled if it is necessary for compatibility with Apple macOS clients. The default is false, unless the metadata server is running on Apple macOS, in which case it is true. .PP This variable is only applicable when \fBsecurityModel\fR is set to \fBlegacy\fR or \fBacl\fR. It is ignored for other \fBsecurityModel\fR values. See \fBsecurityModel\fR for details. .PP .Oh unixIdMapping UnixIdMapping .Ce .PP When \fBsecurityModel\fR is set to \fBacl\fR, the \fBunixIdMapping\fR variable determines the method Linux and Unix clients use to perform Unix User/Group ID to Windows User mappings used by ACLs. This setting has no effect on Windows or Xsan clients.
.PP The default value of this variable is \fBnone\fR, which is incompatible with setting \fBsecurityModel\fR to \fBacl\fR. .PP A value of \fBwinbind\fR should be used when the environment contains Linux clients that are all bound to Active Directory using Winbind and running the winbind service. .PP A value of \fBmdc\fR should be used when the MDCs for a file system are bound to Active Directory using Winbind but one or more of the Linux clients in the environment are not running Winbind. For example, Linux clients may instead be bound to Active Directory using sssd. The use of \fBmdc\fR unixIdMapping allows such environments to be supported by having Linux clients forward ID mapping requests to the MDC for processing. .PP When \fBunixIdMapping\fR is set to \fBalgorithmic\fR, UIDs are mapped to SIDs using the following: .br RID(uid) = (2 * uid) + 1000 .br The RID is then appended to the Domain SID. For the \fBalgorithmic\fR unixIdMapping, the default value of the Domain SID is: .br S-5-21-3274805877-1740924817-4269325941 .br For example, a user having a UID of 400 will have the SID: .br S-5-21-3274805877-1740924817-4269325941-1800 .br GIDs are mapped to SIDs using the following: .br RID(gid) = (2 * gid) + 1001 .br The RID is then appended to the Domain SID. For example, a group having a GID of 300 will have the SID: .br S-5-21-3274805877-1740924817-4269325941-1601 .br Note: while commonly only required when using Open Directory, the Domain SID can be overridden using the StorNext domainsid(4) configuration file. .PP .Oh unixNobodyGidOnWindows UnixNobodyGidOnWindows .Ce .PP The \fBunixNobodyGidOnWindows\fR variable instructs the FSM to pass this value back to Microsoft Windows clients. The Windows Xsan clients will then use this value as the gid for a Windows user when no gid can be found using Microsoft Active Directory. The default value is 60001. This value must be between 0 and 2147483647, inclusive.
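As an illustrative example, the security-related globals described above might be combined as follows for an ACL-based deployment whose Linux clients are all bound to Active Directory with Winbind (element names follow the snfs.cfgx conventions; verify against \fBsnfs.cfgx\fR(5)):
.Cs
<securityModel>acl</securityModel>
<unixIdMapping>winbind</unixIdMapping>
.Ce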
.Oh unixNobodyUidOnWindows UnixNobodyUidOnWindows .Ce .PP The \fBunixNobodyUidOnWindows\fR variable instructs the FSM to pass this value back to Microsoft Windows clients. The Windows Xsan clients will then use this value as the uid for a Windows user when no uid can be found using Microsoft Active Directory. The default value is 60001. This value must be between 0 and 2147483647, inclusive. .PP .Oh useActiveDirectorySFU UseActiveDirectorySFU .Ce .PP The \fBuseActiveDirectorySFU\fR variable enables or disables the use of Microsoft's Active Directory Services for UNIX (SFU) on Windows based Xsan clients. (Note: Microsoft has changed the name "Services for UNIX" in recent releases of Windows. We are using the term SFU as a generic name for all similar Active Directory Unix services.) This variable does not affect the behavior of Unix clients. Active Directory SFU allows Windows-based clients to obtain the Windows user's Unix security credentials. By default, Xsan clients running on Windows query Active Directory to translate Windows SIDs to Unix uid, gid and mode values and store those credentials with newly created files. This is needed to set the proper Unix uid, gid and permissions on files. If there is no Active Directory mapping of a Windows user's SID to a Unix user, a file created in Windows will have its uid and gid owned by \fBNOBODY\fR in the Unix view (see \fBunixNobodyUidOnWindows\fR). .PP Always use Active Directory SFU in a mixed Windows/Unix environment, or if there is a possibility of moving to a mixed environment in the future. If \fBuseActiveDirectorySFU\fR is set to \fBfalse\fR, files created on Windows based Xsan clients will always have their uid and gid set to \fBNOBODY\fR with default permissions. .PP However, if it is unlikely a Unix client will ever access the Xsan volume, then you may get a small performance increase by setting \fBuseActiveDirectorySFU\fR to \fBfalse\fR.
The performance increase will be substantially higher only if you have more than 100 users concurrently accessing the volume via a single Windows Xsan client. .PP This variable is only applicable when \fBsecurityModel\fR is set to \fBlegacy\fR or \fBacl\fR. It is ignored for other \fBsecurityModel\fR values. See \fBsecurityModel\fR for details. .PP The default of this variable is \fBtrue\fR. This value may be modified for existing volumes. .PP .Oh windowsIdMapping WindowsIdMapping .Ce .PP The \fBwindowsIdMapping\fR variable determines the method Windows clients should use to perform the Windows User to Unix User/Group ID mappings. \fBldap\fR is the default value. .PP This variable is only applicable when \fBsecurityModel\fR is set to \fBunixpermbits\fR. It is ignored for other \fBsecurityModel\fR values. See \fBsecurityModel\fR for details. Note that due to caching, the effect of changing \fBwindowsIdMapping\fR may not be seen on Windows clients until 10-15 minutes after the FSM is restarted, unless StorNext is also subsequently restarted on the Windows clients. .PP When set to \fBldap\fR, Microsoft Active Directory is queried to obtain uid/gid values for the Windows User, including support for up to 32 supplemental GIDs. .PP When set to \fBmdc\fR, the Xsan MDC is queried to obtain uid/gid values for Windows users that are in the Active Directory domain that the system belongs to. This includes support for an unlimited number of supplemental GIDs. However, local users and groups are NOT mapped. The \fBmdc\fR setting is not valid on Windows MDCs. .PP When set to \fBmdcall\fR, ID mapping on Windows works the same as described above for the \fBmdc\fR type, except that locally created Windows accounts are also mapped. Note that with this setting, Windows systems that are not joined to any domain can still use MDC mapping. The \fBmdcall\fR setting is not valid on Windows MDCs.
.PP When set to \fBnone\fR, there is no specific Windows User to Unix User mapping (see the Windows control panel). In this case, files will be owned by NOBODY in the Unix view. .PP .Oh windowsSecurity WindowsSecurity .Ce .PP The \fBwindowsSecurity\fR variable enables or disables the use of the Windows Security Reference Monitor (ACLs) on Windows clients. This does not affect the behavior of Unix clients. In a mixed client environment where there is no specific Windows User to Unix User mapping (see the Windows control panel), files under Windows security will be owned by \fBNOBODY\fR in the Unix view. The default of this variable is \fBfalse\fR for configuration files using the old format and \fBtrue\fR when using the new XML format. This value may be modified for existing volumes. .PP This variable is only applicable when \fBsecurityModel\fR is set to \fBlegacy\fR. It is ignored for other \fBsecurityModel\fR values. See \fBsecurityModel\fR for details. .PP \fINOTE\fR: Once windowsSecurity has been enabled, the volume will track Windows access control lists (ACLs) for the life of the volume regardless of the \fBwindowsSecurity\fR value. .PP .SH AUTOAFFINITY DEFINITION An \fBautoAffinity\fP defines a mapping of file extension(s) to an \fBAffinity\fP. A \fBnoAffinity\fP defines a mapping of file extensions to an affinity of 0. The \fBAffinity\fP must exist in the storage pool section (see below). At file creation time, if the file has an extension in the list specified, it will be assigned the \fBAffinity\fP or 0. This is only done for regular files and not other types of files such as directories, devices, symbolic links, etc. An extension can only exist once across all \fBautoAffinity\fP and \fBnoAffinity\fP mappings. .PP Extensions in a file name are defined by all the characters following the last "." in the file name. The \fBextension\fP tag in the configuration file is followed by the characters in the extension without the ".".
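As an illustrative XML fragment, a mapping that assigns files ending in .dpx to the \fBMovies\fP affinity might be written as follows (the surrounding element and attribute names are assumptions based on the \fBextension\fP tag described above; verify against \fBsnfs.cfgx\fR(5)):
.Cs
<autoAffinities>
    <autoAffinity affinity="Movies">
        <extension>dpx</extension>
    </autoAffinity>
</autoAffinities>
.Ce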
There is one special extension that is defined by not specifying an extension. This is the "empty" extension; it causes file creation to map all files that do not match another extension to the \fBautoAffinity\fP or \fBnoAffinity\fP mapping that contains it. .PP For example, an administrator can map all files ending in .dpx to an affinity of \fBMovies\fP. Or, all remaining files could be mapped to an affinity of \fBOther\fP. .PP Customers can explicitly assign affinities to files and directories using the cvmkdir, cvmkfile, or cvaffinity commands. Or, files can be assigned affinities with library API calls from within applications. The automatic affinities defined in this section take precedence and override affinities set with cvmkdir/cvmkfile or via a library function. For example, suppose a directory exists with an affinity of \fBAudio\fP and a file with a .dpx extension is created in that directory with the above autoAffinity mapping in effect. The .dpx file is assigned the \fBMovies\fP affinity, overriding \fBAudio\fP. .PP The cvaffinity command can be used to later change the affinity of a file to some other value. .PP Some applications create temporary files before renaming them to their final name. Mappings of extension to affinity take effect only on the create call. So for these applications, the temporary file name determines the file's affinity. If the temporary file name has a different extension or no extension, the temporary name's extension is used for the mapping. If the file is later renamed to a different extension, the mapping is not affected. A typical example of this is Microsoft Word. .SH DISKTYPE DEFINITION A \fBdiskType\fR defines the number of sectors for a category of disk devices, and optionally the number of bytes per disk device sector. Since multiple disks used in a file system may have the same type of disk, it is easier to consolidate that information into a disk type definition rather than including it for each disk definition.
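An illustrative pair of equivalent disk type definitions in the old and XML formats follows (the sector count is a placeholder, and the XML attribute names are assumptions; verify against \fBsnfs.cfg\fR(5) and \fBsnfs.cfgx\fR(5)):
.Cs
[DiskType MetaDrive]
Sectors 99999999
SectorSize 512
.Ce
.Cs
<diskTypes>
    <diskType typeName="MetaDrive" sectors="99999999" sectorSize="512"/>
</diskTypes>
.Ce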
.PP For example, a 9.2GB Seagate Barracuda Fibre Channel ST19171FC disk has \fB17783112\fR total sectors. However, using most drivers, a portion of the disk device is used for the volume header. For example, when using a \fBPrisa\fR adapter and driver, the maximum number of sectors available to the volume is \fB17781064\fR. .PP When specified, the sector size must be 512 or 4096 bytes. The default sector size is 512 bytes. .SH DISK DEFINITION \fINote:\fR The XML format defines disks in the stripeGroup section. The old format defines disks in a separate section and then links to that definition with the \fBnode\fR variable in the stripe group. The general description below applies to both. .PP Each \fBdisk\fR defines a disk device that is in the Storage Area Network configuration. The name of each disk device must be entered into the disk device's volume header label using .BR cvlabel (8). Disk devices that the client cannot see will not be accessible, and any stripe group containing an inaccessible disk device will not be available, so plan stripe groups accordingly. Entire disks must be specified here; partitions may not be used. .PP The disk definition's \fBname\fR must be unique and is used by the volume administrator programs. .PP A disk's status may be up or down. When down, this device will not be accessible. Users may still be able to see directories, file names and other metadata if the disk is in a stripe group that only contains userdata, but attempts to open a file affected by the downed disk device will receive an \fBOperation Not Permitted (EPERM)\fR failure. When a volume contains down data storage pools, space reporting tools in the operating system will not count these storage pools in computing the total volume size and available free blocks. \fINOTE\fR: when files are removed that only contain extents on \fBdown\fR storage pools, the amount of available free space displayed will not change.
.PP Each disk definition has a type which must match one of the names from a previously defined \fBdiskType\fR. .PP \fINOTE\fR: In much older releases there was also a \fBDeviceName\fR option in the \fBDisk\fR section. The \fBDeviceName\fR was previously used to specify an operating system specific disk name, but this has been superseded by automatic volume recognition for some time and is no longer supported. This is now for internal use only. .SH STRIPEGROUP DEFINITION The \fBstripeGroup\fR defines individual storage pools. A storage pool is a collection of disk devices. A disk device may only be in one storage pool. .PP The \fBstripeGroup\fR has a name \fBname\fR that is used in subsequent system administration functions for the storage pool. .PP A storage pool can have its status set to up or down. If down, the storage pool is not used by the file system, and anything on that storage pool is inaccessible. This should normally be left up. .PP A storage pool can contain a combination of \fBmetadata\fR, \fBjournal\fR, or \fBuserdata\fR. There can only be one storage pool that contains a \fBjournal\fR per file system. Best performance is attained with a minimum of 2 stripe groups per file system, with one stripe group used exclusively for metadata/journal and the other for user data. Metadata has an I/O pattern of small random I/O, whereas user data is typically of much larger size. Splitting apart metadata and journal so there are 3 stripe groups is recommended, particularly if latency for file creation, removal and allocation of space is important. .PP When a collection of disk devices is assembled under a storage pool, each disk device is logically striped into chunks of disk blocks as defined by the \fBstripeBreadth\fR variable.
For example, with a 4k-byte block-size and a stripe breadth of 86 volume blocks, the first 352,256 bytes would be written or read from/to the first disk device in the storage pool, the second 352,256 bytes would be on the second disk device and so on. When the last disk device used its 352,256 bytes, the stripe would start again at drive zero. This allows for more than a single disk device's bandwidth to be realized by applications. .PP The allocator aligns an allocation that is greater than or equal to the largest \fBstripeBreadth\fR of any storage pool that can hold data. This is done if the allocation request is an extension of the file. .PP A storage pool can be marked up or down. When the storage pool is marked down, it is not available for data access. However, users may look at the directory and meta-data information. Attempts to open a file residing on a downed storage pool will receive a \fBPermission Denied\fR failure. .PP There is an option to turn off reads to a stripe group. .Nr .PP A storage pool can have write access denied. If writes are disabled, then any new allocations are disallowed as well. When a volume contains data storage pools with writes disabled, space reporting tools in the operating system will show all blocks for the storage pool as \fBused\fR. Note that when files are removed that only contain extents on write-disabled storage pools, the amount of available free space displayed will not change. This is typically only used during \fIDynamic Resource Allocation\fR procedures (see the StorNext User Guide for more details). .PP Allocations can be disabled on a storage pool. This would typically be done as a step towards retiring a stripe group. Unlike disabling writes, turning off allocations allows writes to a file which do not require a new allocation. On Linux systems, the stripe group management utilities \fBsgmanage\fR and \fBsgoffload\fR can be used to change this field, while the file system remains up and on-line. 
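Combining the elements described in this section, a minimal metadata/journal stripe group might look like the following illustrative XML fragment (the attribute names and values shown are assumptions for illustration; verify against \fBsnfs.cfgx\fR(5)):
.Cs
<stripeGroup index="0" name="MetaJournal" status="up"
             stripeBreadth="262144" metadata="true"
             journal="true" userdata="false">
    <disk index="0" diskLabel="CvfsDisk0" diskType="MetaDrive"/>
</stripeGroup>
.Ce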
.PP Affinities can be used to target allocations at specific stripe groups, and the stripe group can exclusively contain affinity targeted allocations or have affinity targeted allocations co-existing with other allocations. See .BR snfs.cfg (5) and .BR snfs.cfgx (5) for more details. .PP Each stripe group can define a multipath method, which controls the algorithm used to allocate disk I/Os on paths to the storage when the volume has multiple paths available to it. See .BR sgmanage (8) for details. .PP Various realtime I/O parameters can be specified on a per stripe group basis as well. These define the maximum number of I/O operations per second available to real-time applications for the stripe group using the \fBQuality of Service (QoS)\fR API. There is also the ability to specify I/Os that should be reserved for applications not using the QoS API. Realtime I/O functionality is off by default. .PP A stripe group contains one or more disks on which to put the metadata/journal/userdata. The disk has an \fBindex\fR that defines the ordinal position the disk has in the storage pool. This number must be in the range of zero to the number of disks in the storage pool minus one, and be unique within the storage pool. There must be one disk entry per disk, and the number of disk entries defines the stripe depth. For more information about disks, see the DISK DEFINITION section above. .PP \fINOTE\fR: The \fBStripeClusters\fR variable has been \fBdeprecated\fR. It was used to limit I/O submitted by a single process, but was removed when asynchronous I/O was added to the volume. .PP \fINOTE\fR: The \fBType\fR variable for Stripe Groups has been \fBdeprecated\fR. Several versions ago, the \fBType\fR parameter was used as a very coarse-grained affinity-like control of how data was laid out between stripe groups. The only valid value of \fBType\fR for several releases of SNFS has been \fBRegular\fR, and this is now deprecated as well for the XML configuration format.
\fBType\fR has been superseded by \fBAffinity\fR. .SH FILES .I /Library/Preferences/Xsan/*.cfgx .br .I /Library/Preferences/Xsan/*.cfg .SH "SEE ALSO" .BR snfs.cfgx (5), .BR snfs.cfg (5), .BR sncfgedit (8), .BR cnvt2ha.sh (8), .BR cvfs (8), .BR cvadmin (8), .BR cvlabel (8), .BR snldapd (8), .BR cvmkdir (1), .BR cvmkfile (1), .BR acldomain (4), .BR ha_peer (4), .BR mount_acfs (8), .BR sgmanage (8), .BR sgoffload (8)