RADOS Block Device (RBD)

The Ceph RBD driver registers a storage driver named rbd with the libStorage driver manager and is used to connect to and mount RADOS Block Devices from a Ceph cluster.

The following is an example with all possible fields configured. For a running example, see the Examples section.

  rbd:
    defaultPool: rbd
    testModule: true
    cephArgs: --cluster testcluster

Configuration Notes

Runtime behavior

The Ceph RBD driver only works when the client and server are on the same node. There is no way for a centralized libStorage server to attach volumes to clients; therefore, the libStorage server must be running on each node that wishes to mount RBD volumes.

The RBD driver uses the format <pool>.<name> for the volume ID, which allows the driver to work with multiple pools. During a volume create, if the volume ID is given as <pool>.<name>, a volume named <name> will be created in the storage pool <pool>. If no pool is referenced, the defaultPool will be used.

When the rbd.cephArgs config option is set and contains any of the flags --id, --user, -n, or --name, support for multiple pools is disabled, and only the pool defined by rbd.defaultPool will be used.

Both <pool> and <name> may contain only alphanumeric characters, underscores, and dashes.
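As an illustrative sketch (not the driver's actual code), the volume-ID convention above can be mimicked in plain shell; the mypool.myvol value is hypothetical, and the default pool mirrors rbd.defaultPool:

```shell
volume_id="mypool.myvol"   # hypothetical volume ID
default_pool="rbd"         # value of rbd.defaultPool

# Split "<pool>.<name>" on the first dot; fall back to the default pool
# when the ID contains no dot.
case "$volume_id" in
  *.*) pool="${volume_id%%.*}"; name="${volume_id#*.}" ;;
  *)   pool="$default_pool";    name="$volume_id" ;;
esac

# Both parts may contain only alphanumerics, underscores, and dashes.
for part in "$pool" "$name"; do
  printf '%s' "$part" | grep -Eq '^[0-9A-Za-z_-]+$' || {
    echo "invalid volume ID: $volume_id" >&2
    exit 1
  }
done

echo "$pool/$name"
```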

When querying volumes, the driver will return all RBDs present in all pools in the cluster, prefixing each volume with the appropriate <pool>. value.

All RBD images are created with the default 4MB object size and with only the "layering" feature bit enabled, to ensure the greatest compatibility with the kernel clients.
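For comparison, a manual image create with the rbd CLI that matches those settings might look like the following; the image name myvol and the 1 GiB size are hypothetical:

```shell
# Create a 1 GiB image in pool "rbd" with the default 4MB object size and
# only the "layering" feature bit enabled.
rbd create rbd/myvol --size 1024 --object-size 4M --image-feature layering

# Inspect the result; "layering" should be the only listed feature.
rbd info rbd/myvol
```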

Activating the Driver

To activate the Ceph RBD driver please follow the instructions for activating storage drivers, using rbd as the driver name.

Below is a full config.yml that works with the RBD driver:

  libstorage:
    # The libstorage.service property directs a libStorage client to direct
    # its requests to the given service by default. It is not used by the
    # server.
    service: rbd
    server:
      services:
        rbd:
          driver: rbd
          rbd:
            defaultPool: rbd
            cephArgs: --id myuser
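If REX-Ray is used as the libStorage client, volumes could then be managed with commands like the following; the pool and volume names are hypothetical:

```shell
# Create a 16 GiB RBD image named "myvol" in pool "mypool".
rexray volume create mypool.myvol --size=16

# Attach and mount the volume on the local node.
rexray volume mount mypool.myvol
```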