OCI File Storage (FSS) Guide
Deploy shared file storage with OCI FSS: mount targets, exports, NFS mounting, snapshots, replication, and performance tuning.
Prerequisites
- Basic understanding of NFS and shared file systems
- OCI account with file storage permissions
Introduction to OCI File Storage
OCI File Storage Service (FSS) provides a fully managed, durable, and scalable network file system that supports the NFSv3 protocol. It enables multiple compute instances to share data through a common file system, making it ideal for shared application data, content management systems, home directories, development environments, and any workload that requires concurrent read/write access from multiple instances.
FSS file systems can grow to 8 exabytes without provisioning capacity upfront. You pay only for the storage you use, with no minimum commitment. The service handles all the operational complexity of managing NFS servers, including replication, snapshots, data protection, and capacity management.
This guide covers file system creation, mount target configuration, export paths, snapshots, cross-region replication, performance tuning, security configuration, and production best practices for deploying shared file storage on OCI.
File Storage Pricing
OCI File Storage is priced per GB-month of storage consumed. There are no charges for IOPS, throughput, or data access. The base price is approximately $0.0255/GB/month for standard storage. Snapshots consume additional storage only for changed blocks. Outbound data replication is charged at standard data transfer rates.
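As a back-of-envelope check, the per-GB rate above can be turned into a monthly estimate from a file system's metered-bytes value. The price constant is the approximate figure quoted above, not an authoritative rate; verify against current OCI pricing.

```shell
# Rough monthly cost estimate from metered bytes.
# PRICE_PER_GB_MONTH is an assumption taken from the text above.
PRICE_PER_GB_MONTH=0.0255
METERED_BYTES=2147483648000   # example value (~2 TB); in practice read it from:
                              # oci fs file-system get --file-system-id <fs-ocid> \
                              #   --query data.\"metered-bytes\" --raw-output
COST=$(awk -v b="$METERED_BYTES" -v p="$PRICE_PER_GB_MONTH" \
  'BEGIN { printf "%.2f", b / 1073741824 * p }')
GB=$(awk -v b="$METERED_BYTES" 'BEGIN { printf "%.2f", b / 1073741824 }')
echo "${GB} GB -> \$${COST}/month"
```

Remember that metered bytes include snapshot storage, so the estimate covers snapshots too.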
File System Architecture
OCI File Storage consists of three main components that work together:
File Systems: The actual storage containers that hold your files and directories. A file system is created in a specific availability domain and can grow dynamically as you add data.
Mount Targets: NFS endpoints with IP addresses in your VCN that clients use to access file systems. A mount target can serve multiple file systems through different export paths. Each mount target has an IP address in a specific subnet.
Exports: The mappings between a file system and a mount target. An export defines the path at which a file system is available (e.g., /shared-data) and includes export options that control client access permissions.
# Create a file system
oci fs file-system create \
--compartment-id $C \
--availability-domain $AD \
--display-name "shared-app-data" \
--wait-for-state ACTIVE
# Create a mount target in a subnet
oci fs mount-target create \
--compartment-id $C \
--availability-domain $AD \
--subnet-id <private-subnet-ocid> \
--display-name "app-mount-target" \
--wait-for-state ACTIVE
# Get the mount target IP address
oci fs mount-target get \
--mount-target-id <mount-target-ocid> \
--query 'data.{"private-ip-ids":"private-ip-ids"}'
# Get the private IP details
oci network private-ip get \
--private-ip-id <private-ip-ocid> \
--query 'data.{"ip-address":"ip-address"}'
# Create an export (map file system to mount target path)
oci fs export create \
--file-system-id <file-system-ocid> \
--export-set-id <export-set-ocid> \
--path "/shared-data" \
--wait-for-state ACTIVE
# List file systems
oci fs file-system list \
--compartment-id $C \
--availability-domain $AD \
--query 'data[].{"display-name":"display-name", id:id, "lifecycle-state":"lifecycle-state", "metered-bytes":"metered-bytes"}' \
--output table
# List mount targets
oci fs mount-target list \
--compartment-id $C \
--availability-domain $AD \
--query 'data[].{"display-name":"display-name", "lifecycle-state":"lifecycle-state", "subnet-id":"subnet-id"}' \
--output table
Mounting File Systems on Instances
Once the file system, mount target, and export are configured, you mount the file system on your compute instances using standard NFS mount commands. The mount target IP address and export path are the only information needed on the client side.
For persistent mounts that survive instance reboots, add an entry to /etc/fstab with the appropriate NFS options. The recommended mount options include nfsvers=3 for protocol version, rsize and wsize for I/O block size, and _netdev to ensure the network is available before mounting.
# Install NFS utilities (if not already installed)
# Oracle Linux / CentOS:
# sudo dnf install -y nfs-utils
# Ubuntu:
# sudo apt-get install -y nfs-common
# Create the mount point directory
# sudo mkdir -p /mnt/shared-data
# Mount the file system
# sudo mount -t nfs -o nfsvers=3 10.0.1.50:/shared-data /mnt/shared-data
# Verify the mount
# df -h /mnt/shared-data
# mount | grep shared-data
# Add to /etc/fstab for persistent mount
# echo "10.0.1.50:/shared-data /mnt/shared-data nfs nfsvers=3,rsize=65536,wsize=65536,hard,timeo=600,retrans=2,_netdev 0 0" | sudo tee -a /etc/fstab
# Test fstab entry (mount without reboot)
# sudo mount -a
# Unmount when no longer needed
# sudo umount /mnt/shared-data
Use Large Block Sizes for Better Performance
Set rsize=65536 and wsize=65536 (64 KB) in your NFS mount options for optimal throughput. Larger block sizes reduce the number of network round trips for sequential reads and writes. The default block size is often 32 KB or smaller, which can significantly limit performance for large file transfers and streaming workloads.
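After mounting, it is worth confirming the sizes the server actually negotiated, since the server may clamp what the client requested. One way is to parse /proc/mounts; the sketch below runs against a sample line so it works anywhere, but on a real client you would feed it the output of `grep shared-data /proc/mounts` (or use `nfsstat -m`).

```shell
# Extract the negotiated rsize/wsize from a /proc/mounts entry.
# The line below is a sample; on a real client, replace it with:
#   line=$(grep shared-data /proc/mounts)
line='10.0.1.50:/shared-data /mnt/shared-data nfs rw,nfsvers=3,rsize=65536,wsize=65536,hard 0 0'
rsize=$(echo "$line" | grep -o 'rsize=[0-9]*' | cut -d= -f2)
wsize=$(echo "$line" | grep -o 'wsize=[0-9]*' | cut -d= -f2)
echo "negotiated rsize=$rsize wsize=$wsize"
```

If the reported values are smaller than what you requested in the mount options, the server clamped them.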
Export Options and Access Control
Export options control which clients can access a file system and what operations they can perform. You can specify access rules based on source IP addresses or CIDR blocks, and configure read/write or read-only access, root squash settings, and identity mapping.
Root Squash: When enabled, the root user on the client (UID 0) is mapped to an anonymous user on the file system, preventing root-level operations. This is a security best practice for shared file systems where clients from different teams or environments access the same data.
Identity Squash: Options include NONE (no identity mapping), ROOT (map only root to anonymous), and ALL (map all users to anonymous).
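As a rough illustration of those three modes, the mapping logic can be simulated locally. This is not an NFS call, just a sketch of which client UID each mode presents to the file system; the anonymous UID 65534 matches the anonymousUid used in the export examples.

```shell
# Simulate identity squash: which UID does the file system see for a
# given client UID under each mode? (Local illustration only.)
squash_uid() {
  local uid=$1 mode=$2 anon=65534
  case "$mode" in
    NONE) echo "$uid" ;;                               # no mapping
    ROOT) [ "$uid" -eq 0 ] && echo "$anon" || echo "$uid" ;;  # squash root only
    ALL)  echo "$anon" ;;                              # squash everyone
  esac
}
squash_uid 0 ROOT      # root is mapped to the anonymous UID
squash_uid 1001 ROOT   # regular users pass through unchanged
squash_uid 1001 ALL    # everyone becomes the anonymous UID
```

With ROOT squash, files written by root on a client appear owned by the anonymous UID/GID on the share, which is why root-level tooling on clients cannot bypass POSIX permissions there.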
# Create an export with access control options
oci fs export create \
--file-system-id <file-system-ocid> \
--export-set-id <export-set-ocid> \
--path "/project-data" \
--export-options '[
{
"source": "10.0.1.0/24",
"requirePrivilegedSourcePort": true,
"access": "READ_WRITE",
"identitySquash": "ROOT",
"anonymousUid": 65534,
"anonymousGid": 65534
},
{
"source": "10.0.2.0/24",
"requirePrivilegedSourcePort": true,
"access": "READ_ONLY",
"identitySquash": "NONE"
}
]' \
--wait-for-state ACTIVE
# Update export options
oci fs export update \
--export-id <export-ocid> \
--export-options '[
{
"source": "10.0.0.0/16",
"requirePrivilegedSourcePort": true,
"access": "READ_WRITE",
"identitySquash": "ROOT",
"anonymousUid": 65534,
"anonymousGid": 65534
}
]'
# List exports
oci fs export list \
--compartment-id $C \
--query 'data[].{path:path, "file-system-id":"file-system-id", "lifecycle-state":"lifecycle-state"}' \
--output table
Snapshots
Snapshots provide point-in-time copies of your file system that you can use for backup, recovery, and data protection. Snapshots are space-efficient: the initial snapshot references the same data blocks as the live file system, and only changed blocks consume additional storage. You can create snapshots manually or on a schedule using automation.
Snapshots are accessible as read-only directories within the file system, under the hidden .snapshot directory. Users can browse and copy files from snapshots without administrator intervention, enabling self-service file recovery.
# Create a snapshot
oci fs snapshot create \
--file-system-id <file-system-ocid> \
--name "daily-backup-2026-03-14" \
--wait-for-state ACTIVE
# List snapshots
oci fs snapshot list \
--file-system-id <file-system-ocid> \
--query 'data[].{name:name, "time-created":"time-created", "lifecycle-state":"lifecycle-state"}' \
--output table
# Access snapshot data from a mounted file system:
# ls /mnt/shared-data/.snapshot/
# ls /mnt/shared-data/.snapshot/daily-backup-2026-03-14/
# cp /mnt/shared-data/.snapshot/daily-backup-2026-03-14/important-file.txt /mnt/shared-data/
# Create a new file system from a snapshot (clone)
oci fs file-system create \
--compartment-id $C \
--availability-domain $AD \
--display-name "restored-from-snapshot" \
--source-snapshot-id <snapshot-ocid> \
--wait-for-state ACTIVE
# Delete a snapshot
oci fs snapshot delete \
--snapshot-id <snapshot-ocid> \
--force
# Automated snapshot script (for cron):
# oci fs snapshot create \
#   --file-system-id <fs-ocid> \
#   --name "auto-snapshot-$(date +%Y-%m-%d-%H%M)"
#
# # Clean up snapshots older than 7 days
# CUTOFF=$(date -d '-7 days' -u +%Y-%m-%dT%H:%M:%SZ)
# for snap in $(oci fs snapshot list \
#   --file-system-id <fs-ocid> \
#   --query "data[?\"time-created\" < '$CUTOFF'].id | join(' ', @)" \
#   --raw-output); do
#   oci fs snapshot delete --snapshot-id "$snap" --force
# done
Cross-Region Replication
File system replication asynchronously copies data from a source file system in one availability domain or region to a target file system in another. Replication provides disaster recovery and geographic data distribution. The target file system is read-only during replication and can be activated as read-write if the source becomes unavailable.
Replication uses snapshot-based delta synchronization, meaning only changed blocks are transferred after the initial full copy. The replication interval is configurable in minutes (the example below uses 60 minutes); check the OCI documentation for the currently supported minimum and maximum.
# Create a replication target file system in another region
oci fs replication create \
--compartment-id $C \
--source-id <source-file-system-ocid> \
--target-id <target-file-system-ocid> \
--display-name "dr-replication" \
--replication-interval 60 \
--wait-for-state ACTIVE
# List replications
oci fs replication list \
--compartment-id $C \
--availability-domain $AD \
--query 'data[].{"display-name":"display-name", "lifecycle-state":"lifecycle-state", "replication-interval":"replication-interval", "delta-status":"delta-status"}' \
--output table
# Get replication details and sync status
oci fs replication get \
--replication-id <replication-ocid> \
--query 'data.{"delta-status":"delta-status", "delta-progress":"delta-progress", "last-snapshot-id":"last-snapshot-id"}'
# Failover: Delete replication and make target writable
oci fs replication delete \
--replication-id <replication-ocid> \
--force
# After deletion, the target file system becomes read-write
Performance Tuning
FSS performance scales automatically with file system size. Larger file systems receive more IOPS and throughput because the data is distributed across more storage servers. However, there are several client-side optimizations that can significantly impact performance.
NFS Mount Options: Use NFSv3 with 64 KB read/write sizes (rsize=65536, wsize=65536). NFSv3 is the protocol FSS supports, and larger transfer sizes reduce the number of network round trips for sequential I/O.
Concurrent Access: FSS performs best with multiple concurrent readers or writers. Single-threaded sequential access may not fully utilize the available bandwidth. Use tools such as GNU parallel, multiple rsync processes running concurrently, or multi-threaded applications to maximize throughput.
Small File Operations: NFS has higher per-operation overhead than local file systems, especially for small files. If your workload involves many small file operations, batch them together or use tar/zip archives for bulk transfers.
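A minimal sketch of the batching approach: archive the small files locally in one pass, then write the single archive to the share and extract it there, replacing thousands of per-file NFS operations with one large sequential write. The paths and file names below are illustrative.

```shell
# Batch many small files into one archive before copying to an NFS share.
SRC=$(mktemp -d)            # stand-in for a directory of small files
DEST=/mnt/shared-data       # NFS mount point (assumption from this guide)
touch "$SRC/a.txt" "$SRC/b.txt"   # demo files for the sketch

# One archive instead of many per-file round trips:
tar -czf /tmp/batch.tar.gz -C "$SRC" .

# On the share, extract in one sequential stream (commented: needs a real mount):
# tar -xzf /tmp/batch.tar.gz -C "$DEST"

COUNT=$(tar -tzf /tmp/batch.tar.gz | grep -c '\.txt$')
echo "archived $COUNT files"
```

The same idea applies in reverse: pull a whole directory tree off the share as one tar stream rather than copying file by file.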
# Optimal NFS mount options for throughput
# sudo mount -t nfs -o nfsvers=3,rsize=65536,wsize=65536,hard,timeo=600,retrans=2,_netdev,noatime \
# 10.0.1.50:/shared-data /mnt/shared-data
# Performance testing with fio
# sudo dnf install -y fio
#
# Sequential write test:
# fio --name=seq-write --ioengine=libaio --rw=write --bs=1M --numjobs=4 \
# --size=1G --runtime=60 --directory=/mnt/shared-data --group_reporting
#
# Sequential read test:
# fio --name=seq-read --ioengine=libaio --rw=read --bs=1M --numjobs=4 \
# --size=1G --runtime=60 --directory=/mnt/shared-data --group_reporting
#
# Random 4K read test:
# fio --name=rand-read --ioengine=libaio --rw=randread --bs=4k --numjobs=8 \
# --size=256M --runtime=60 --directory=/mnt/shared-data --group_reporting
# Monitor FSS metrics
oci monitoring metric-data summarize-metrics-data \
--compartment-id $C \
--namespace "oci_filestorage" \
--query-text 'FileSystemReadBytes[5m]{resourceId = "<fs-ocid>"}.sum()'
oci monitoring metric-data summarize-metrics-data \
--compartment-id $C \
--namespace "oci_filestorage" \
--query-text 'FileSystemWriteBytes[5m]{resourceId = "<fs-ocid>"}.sum()'
Production Best Practices
Deploying FSS in production requires attention to availability, security, and cost management:
High Availability: FSS data is replicated within the availability domain for durability. For cross-AD or cross-region protection, configure file system replication. Create mount targets in each subnet where clients need access.
Security: Use NSGs to restrict NFS port access (ports 111, 2048-2050) to only authorized client subnets. Configure export options with root squash enabled. Use POSIX permissions and ownership to control file-level access within the file system.
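A sketch of NSG ingress rules for those ports, restricted to a single client subnet. The CIDR and the NSG OCID are placeholders; note that NFSv3 clients may also need UDP on ports 111 and 2048, so confirm the full port list against the current OCI documentation.

```shell
# Build the security-rules JSON for NFS access from one client subnet.
CLIENT_CIDR="10.0.1.0/24"   # placeholder: your client subnet
RULES=$(cat <<EOF
[
  {"direction": "INGRESS", "protocol": "6", "source": "$CLIENT_CIDR",
   "tcpOptions": {"destinationPortRange": {"min": 111, "max": 111}}},
  {"direction": "INGRESS", "protocol": "6", "source": "$CLIENT_CIDR",
   "tcpOptions": {"destinationPortRange": {"min": 2048, "max": 2050}}}
]
EOF
)
echo "$RULES"
# Apply to the mount target's NSG (commented: needs a real NSG OCID):
# oci network nsg rules add --nsg-id <nsg-ocid> --security-rules "$RULES"
```

Attach the NSG to the mount target rather than relying on broad subnet-level security lists, so the rules travel with the endpoint.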
Snapshot Strategy: Create automated snapshots using a cron job or OCI Functions triggered by a schedule. Retain daily snapshots for 7 days, weekly snapshots for 4 weeks, and monthly snapshots for 12 months. Regularly test snapshot restoration.
Cost Management: Monitor file system size using the metered-bytes attribute. FSS charges for all stored data including snapshots. Old snapshots with large unique blocks can significantly increase costs. Implement data lifecycle policies to archive or delete old data.
Multiple Mount Targets: Create separate mount targets for different environments (production, staging) and use export options to enforce access boundaries. This provides network-level isolation between environments sharing the same subnet.
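The daily/weekly/monthly retention tiers described above can be sketched as a simple age-based classifier. This is pure shell with no OCI calls; the tier boundaries mirror the 7-day/4-week/12-month policy, and the weekly/monthly anchor days (Sunday, the 1st) are an assumption you would adapt to your own schedule.

```shell
# Classify a snapshot by age and calendar position: which retention tier
# keeps it, or should it be deleted?
classify() {
  local age_days=$1 day_of_week=$2 day_of_month=$3
  if [ "$age_days" -le 7 ]; then
    echo keep-daily                     # everything from the last week
  elif [ "$age_days" -le 28 ] && [ "$day_of_week" = "7" ]; then
    echo keep-weekly                    # Sunday snapshots for 4 weeks
  elif [ "$age_days" -le 365 ] && [ "$day_of_month" = "01" ]; then
    echo keep-monthly                   # 1st-of-month snapshots for 12 months
  else
    echo delete
  fi
}
classify 3 2 15    # recent snapshot: kept by the daily tier
classify 14 7 12   # two-week-old Sunday snapshot: kept by the weekly tier
classify 90 3 01   # 1st-of-month snapshot: kept by the monthly tier
classify 90 3 15   # mid-month, three months old: deleted
```

A cron job could feed this classifier from `oci fs snapshot list` output (deriving age and calendar fields from each time-created value with `date`) and delete only the snapshots classified as delete.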
Key Takeaways
- FSS provides fully managed NFSv3 file storage that scales to 8 exabytes without provisioning.
- Mount targets expose file systems as NFS endpoints within your VCN subnets.
- Snapshots enable point-in-time recovery with self-service access via the .snapshot directory.
- Cross-region replication provides disaster recovery with configurable sync intervals.
Written by CloudToolStack Team
Cloud engineers and architects with hands-on experience across AWS, Azure, and GCP. We write guides based on real-world production patterns, not just documentation rewrites.
Disclaimer: This guide is for educational purposes. Cloud services change frequently; always refer to official documentation for the latest information. AWS, Azure, and GCP are trademarks of their respective owners.