Amazon Web Services

EBS, EFS, S3FS


Elastic Block Store

The AWS EBS driver registers a storage driver named ebs with the libStorage service registry and is used to connect to and manage AWS Elastic Block Store (EBS) volumes attached to EC2 instances.

Note

For backwards compatibility, the driver also registers a storage driver named ec2. The use of ec2 in config files is deprecated but functional. The ec2 driver will be removed in 0.7.0, at which point all instances of ec2 in config files must use ebs instead.

Note

The EBS driver does not yet support snapshots or tags, as previously supported in REX-Ray v0.3.3.

Note

Due to device naming issues, it is currently not possible to run the rexray/ebs plugin on fifth-generation (C5 and M5) EC2 instances.

The EBS driver is made possible by the official AWS SDK for Go.

Requirements

Configuration

The following is an example with all possible fields configured. For a running example see the Examples section.

ebs:
  accessKey:      XXXXXXXXXX
  secretKey:      XXXXXXXXXX
  region:         us-east-1
  maxRetries:     10
  kmsKeyID:       arn:aws:kms:us-east-1:012345678910:key/abcd1234-a123-456a-a12b-a123b4cd56ef
  statusMaxAttempts:  10
  statusInitialDelay: 100ms
  statusTimeout:      2m

Configuration Notes

For information on the equivalent environment variable and CLI flag names, please see the section on how non-top-level configuration properties are transformed.
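
As a sketch of that transformation, the ebs properties above would typically be expressed as environment variables like the following. The exact names are defined by the referenced section, so treat these as assumptions:

# Hypothetical environment-variable equivalents of the ebs.* properties:
export EBS_ACCESSKEY=XXXXXXXXXX
export EBS_SECRETKEY=XXXXXXXXXX
export EBS_REGION=us-east-1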

Activating the Driver

To activate the AWS EBS driver please follow the instructions for activating storage drivers, using ebs as the driver name.

Troubleshooting

Examples

Below is a working config.yml file for AWS EBS.

libstorage:
  # The libstorage.service property tells a libStorage client which service
  # to send its requests to by default. It is not used by the server.
  service: ebs
  server:
    services:
      ebs:
        driver: ebs
        ebs:
          accessKey:      XXXXXXXXXX
          secretKey:      XXXXXXXXXX
          region:         us-east-1
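
With a configuration like the one above in place, a minimal smoke test might look like the following; the volume name and size are illustrative:

$ rexray service start
$ rexray volume create testvol --size=8
$ rexray volume ls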

NVMe Support

Support for NVMe requires a udev rule that aliases each NVMe device to the device path REX-Ray expects. A similar udev rule is already built into the Amazon Linux AMI and is trivial to add to other Linux distributions.

The following is an example of the udev rule that must be in place:

# /etc/udev/rules.d/999-aws-ebs-nvme.rules
# ebs nvme devices
KERNEL=="nvme[0-9]*n[0-9]*", ENV{DEVTYPE}=="disk", ATTRS{model}=="Amazon Elastic Block Store", PROGRAM="/usr/local/bin/ebs-nvme-mapping /dev/%k", SYMLINK+="%c"

The following helper script creates the device aliases that REX-Ray requires for NVMe support:

#!/bin/bash
# /usr/local/bin/ebs-nvme-mapping
# Extract the EBS block-device mapping name from the vendor-specific
# area of the NVMe controller identity data.
vol=$(/usr/sbin/nvme id-ctrl --raw-binary "${1}" | \
      cut -c3073-3104 | tr -s ' ' | sed 's/ $//g')
# Strip a leading /dev/ prefix, if present.
vol=${vol#/dev/}
# Emit both the sdX and xvdX aliases so udev can symlink whichever
# name the volume was attached under.
[ -n "${vol}" ] && echo "${vol/xvd/sd} ${vol/sd/xvd}"
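
After placing the script at the path referenced by the udev rule, mark it executable and reload the udev rules, for example:

$ sudo chmod +x /usr/local/bin/ebs-nvme-mapping
$ sudo udevadm control --reload-rules
$ sudo udevadm trigger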

Elastic File System

The AWS EFS driver registers a storage driver named efs with the libStorage service registry and is used to connect to and manage AWS Elastic File System (EFS) file systems.

Requirements

Configuration

The following is an example with all possible fields configured. For a running example see the Examples section.

efs:
  accessKey:      XXXXXXXXXX
  secretKey:      XXXXXXXXXX
  securityGroups:
  - sg-XXXXXXX
  - sg-XXXXXX0
  - sg-XXXXXX1
  region:              us-east-1
  tag:                 test
  disableSessionCache: false
  statusMaxAttempts:  6
  statusInitialDelay: 1s
  statusTimeout:      2m

Configuration Notes

For information on the equivalent environment variable and CLI flag names, please see the section on how non-top-level configuration properties are transformed.

Runtime Behavior

The AWS EFS storage driver creates one EFS file system per volume and exposes the root of that file system as an NFS mount point. Volumes are not attached to instances directly; instead, a volume is exposed to each VPC subnet by creating a MountPoint (an EFS mount target) in that subnet. Detaching a volume from an instance takes no action, because there is no reliable way to determine whether other instances in the same subnet are using the MountPoint being detached. MountPoints incur no charge, so they are removed only when the whole volume is deleted.

By default, all EFS file systems are provisioned in the generalPurpose performance mode. The maxIO performance mode can be selected by specifying maxIO as the volume type.
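
For example, when REX-Ray is used as a Docker volume driver, the performance mode could be requested at creation time; the volumeType opt name here is an assumption based on the flag described above:

$ docker volume create --driver rexray --name myefs --opt volumeType=maxIO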

It is possible to mount the same volume to multiple containers on a single EC2 instance, as well as to use a single volume across multiple EC2 instances at the same time.
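
A sketch of the single-instance case, using hypothetical names:

$ docker volume create --driver rexray --name shared
$ docker run -d --name app1 -v shared:/data alpine sleep 1d
$ docker run -d --name app2 -v shared:/data alpine sleep 1d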

Note

Each EFS file system can be accessed from only a single VPC at a time.

Activating the Driver

To activate the AWS EFS driver please follow the instructions for activating storage drivers, using efs as the driver name.

Troubleshooting

Examples

Below is a working config.yml file for AWS EFS.

libstorage:
  # The libstorage.service property tells a libStorage client which service
  # to send its requests to by default. It is not used by the server.
  service: efs
  server:
    services:
      efs:
        driver: efs
        efs:
          accessKey:      XXXXXXXXXX
          secretKey:      XXXXXXXXXX
          securityGroups:
          - sg-XXXXXXX
          - sg-XXXXXX0
          - sg-XXXXXX1
          region:         us-east-1
          tag:            test

Simple Storage Service

The AWS S3FS driver registers a storage driver named s3fs with the libStorage service registry and provides the ability to mount Amazon Simple Storage Service (S3) buckets as filesystems using the s3fs FUSE command.

Unlike the other AWS-related drivers, the S3FS storage driver does not need to be deployed to or used from an EC2 instance; any client can take advantage of Amazon's S3 buckets.

Requirements

Configuration

The following is an example with all possible fields configured. For a running example see the Examples section.

Server-Side Configuration

s3fs:
  region:           us-east-1
  accessKey:        XXXXXXXXXX
  secretKey:        XXXXXXXXXX
  disablePathStyle: false

Client-Side Configuration

s3fs:
  cmd:            s3fs
  options:
  - XXXX
  - XXXX
  accessKey:      XXXXXXXXXX
  secretKey:      XXXXXXXXXX

For information on the equivalent environment variable and CLI flag names please see the section on how non top-level configuration properties are transformed.

Runtime Behavior

The AWS S3FS storage driver can create new buckets as well as remove existing ones. Buckets are mounted to clients as filesystems using the s3fs FUSE command. For clients to correctly mount and unmount S3 buckets, the s3fs command should be in the executor's path or configured via the s3fs.cmd property in the client-side REX-Ray configuration file.
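
A quick way for a client to confirm the FUSE tool is available (the path shown is illustrative):

$ which s3fs
/usr/local/bin/s3fs
$ s3fs --version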

The client must also have access to the AWS credentials used for mounting and unmounting S3 buckets. These credentials can be stored in the client-side REX-Ray configuration file or via any means available to the s3fs command.
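
One such external mechanism is the standard s3fs-fuse credentials file, written as ACCESS_KEY:SECRET_KEY and readable only by its owner:

$ echo 'XXXXXXXXXX:XXXXXXXXXX' > ~/.passwd-s3fs
$ chmod 600 ~/.passwd-s3fs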

Activating the Driver

To activate the AWS S3FS driver please follow the instructions for activating storage drivers, using s3fs as the driver name.

Examples

Below is a working config.yml file for AWS S3FS.

libstorage:
  # The libstorage.service property tells a libStorage client which service
  # to send its requests to by default. It is not used by the server.
  service: s3fs
  server:
    services:
      s3fs:
        driver: s3fs
        s3fs:
          accessKey:      XXXXXXXXXX
          secretKey:      XXXXXXXXXX
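
With that configuration, and the s3fs command installed on the client, a bucket can be created and mounted as a volume; the bucket name below is illustrative:

$ rexray volume create mybucket
$ rexray volume mount mybucket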