Cinder Configuration
Cinder, the block storage service for OpenStack, can be configured to use a variety of storage backends. This section guides you through setting up Cinder with different backend technologies, each of which might require specific configuration steps.
Cinder can be configured with multiple backends, all of which are defined inside the `cinder_helm_values.conf.backends` dictionary. The documentation below explains how to configure a specific backend, but you can enable several backends at once by adding additional entries to that dictionary.
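For example, a sketch of an inventory that enables two backends side by side might look like this; the names `backend-a` and `backend-b` are purely illustrative, and each entry takes the driver-specific settings described in the sections below:

```yaml
cinder_helm_values:
  conf:
    cinder:
      DEFAULT:
        # Backends must also be listed here for the scheduler to place volumes on them.
        enabled_backends: backend-a,backend-b
        default_volume_type: backend-a
    backends:
      backend-a:
        volume_backend_name: backend-a
        # ... driver-specific settings ...
      backend-b:
        volume_backend_name: backend-b
        # ... driver-specific settings ...
```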
Ceph RBD
When using the integrated Ceph cluster provided with Atmosphere, no additional configuration is needed for Cinder. The deployment process automatically configures Cinder to use Ceph as the backend, simplifying setup and integration.
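For reference, the integrated Ceph backend is defined as the `rbd1` entry under `conf.backends`, which is why the examples below set it to `null` to remove it when another driver takes its place:

```yaml
cinder_helm_values:
  conf:
    backends:
      rbd1: null # drops the default Ceph RBD backend
```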
Dell PowerStore
In order to use Dell PowerStore, you'll need to make sure that you set up the hosts inside your storage array. You'll also need to make sure they are not part of a host group; otherwise, individual attachments will not work.
You can enable the native PowerStore driver for Cinder with the following configuration inside your Ansible inventory:
```yaml
cinder_helm_values:
  storage: powerstore
  dependencies:
    static:
      api:
        jobs:
          - cinder-db-sync
          - cinder-ks-user
          - cinder-ks-endpoints
          - cinder-rabbit-init
      scheduler:
        jobs:
          - cinder-db-sync
          - cinder-ks-user
          - cinder-ks-endpoints
          - cinder-rabbit-init
      volume:
        jobs:
          - cinder-db-sync
          - cinder-ks-user
          - cinder-ks-endpoints
          - cinder-rabbit-init
      volume_usage_audit:
        jobs:
          - cinder-db-sync
          - cinder-ks-user
          - cinder-ks-endpoints
          - cinder-rabbit-init
  conf:
    cinder:
      DEFAULT:
        enabled_backends: powerstore
        default_volume_type: powerstore
    backends:
      rbd1: null
      powerstore:
        volume_backend_name: powerstore
        volume_driver: cinder.volume.drivers.dell_emc.powerstore.driver.PowerStoreDriver
        san_ip: <FILL IN>
        san_login: <FILL IN>
        san_password: <FILL IN>
        storage_protocol: <FILL IN> # FC or iSCSI
  manifests:
    deployment_backup: true
    job_backup_storage_init: true
    job_storage_init: false

nova_helm_values:
  conf:
    enable_iscsi: true
```
About `conf.enable_iscsi`

The `enable_iscsi` setting is required to allow the Nova instances to expose volumes by making the `/dev` devices available to the containers, not necessarily to use iSCSI as the storage protocol. In this case, the PowerStore driver will use the storage protocol specified inside Cinder.
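As an illustration, a completed iSCSI-based PowerStore entry might look like the following sketch; the address and login here are placeholders rather than values from a real deployment, and the password should come from a secret store instead of a plain-text inventory:

```yaml
cinder_helm_values:
  conf:
    backends:
      powerstore:
        volume_backend_name: powerstore
        volume_driver: cinder.volume.drivers.dell_emc.powerstore.driver.PowerStoreDriver
        san_ip: 192.0.2.20      # placeholder management address
        san_login: cinder       # placeholder service account
        san_password: <FILL IN> # keep secrets out of plain-text inventories
        storage_protocol: iSCSI # or FC, matching your array connectivity
```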
Pure Storage
Pure maintains a native Cinder driver that can be used to integrate with the Pure Storage FlashArray. To enable the Pure Storage driver for Cinder, you need to provide the necessary configuration settings in your Ansible inventory.
In order to use Pure Storage, you’ll need to have the following information available:
- Volume Driver (`volume_driver`): Use `cinder.volume.drivers.pure.PureISCSIDriver` for iSCSI, `cinder.volume.drivers.pure.PureFCDriver` for Fibre Channel, or `cinder.volume.drivers.pure.PureNVMEDriver` for NVMe connectivity. If using the NVMe driver, also set `pure_nvme_transport`; the supported values are `roce` and `tcp`.
- Pure API Endpoint (`san_ip`): The IP address of the Pure Storage array's management interface, or a domain name that resolves to it.
- Pure API Token (`pure_api_token`): A token generated by the Pure Storage array that allows the Cinder driver to authenticate with the array.
You can set any other configuration options specific to your needs by referring to the Cinder Pure Storage documentation.
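For instance, a backend entry using the NVMe driver over TCP might look like the following sketch, where the address is a placeholder and the API token remains a fill-in:

```yaml
purestorage:
  volume_backend_name: purestorage
  volume_driver: cinder.volume.drivers.pure.PureNVMEDriver
  san_ip: 192.0.2.30       # placeholder management address
  pure_api_token: <FILL IN>
  pure_nvme_transport: tcp # or roce, depending on your fabric
```

With those values in hand, the full configuration for your Ansible inventory looks like this: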
```yaml
cinder_helm_values:
  storage: pure
  pod:
    useHostNetwork:
      volume: true
      backup: true
    security_context:
      cinder_volume:
        container:
          cinder_volume:
            readOnlyRootFilesystem: true
            privileged: true
      cinder_backup:
        container:
          cinder_backup:
            privileged: true
  dependencies:
    static:
      api:
        jobs:
          - cinder-db-sync
          - cinder-ks-user
          - cinder-ks-endpoints
          - cinder-rabbit-init
      backup:
        jobs:
          - cinder-db-sync
          - cinder-ks-user
          - cinder-ks-endpoints
          - cinder-rabbit-init
      scheduler:
        jobs:
          - cinder-db-sync
          - cinder-ks-user
          - cinder-ks-endpoints
          - cinder-rabbit-init
      volume:
        jobs:
          - cinder-db-sync
          - cinder-ks-user
          - cinder-ks-endpoints
          - cinder-rabbit-init
      volume_usage_audit:
        jobs:
          - cinder-db-sync
          - cinder-ks-user
          - cinder-ks-endpoints
          - cinder-rabbit-init
  conf:
    enable_iscsi: true
    cinder:
      DEFAULT:
        default_volume_type: purestorage
        enabled_backends: purestorage
    backends:
      rbd1: null
      purestorage:
        volume_backend_name: purestorage
        volume_driver: <FILL IN>
        san_ip: <FILL IN>
        pure_api_token: <FILL IN>
        # pure_nvme_transport:
        use_multipath_for_image_xfer: true
        pure_eradicate_on_delete: true
  manifests:
    deployment_backup: false
    job_backup_storage_init: false
    job_storage_init: false

nova_helm_values:
  conf:
    enable_iscsi: true
```
About `conf.enable_iscsi`

The `enable_iscsi` setting is required to allow the Nova instances to expose volumes by making the `/dev` devices available to the containers, not necessarily to use iSCSI as the storage protocol. In this case, the Cinder instances will use the volume driver specified in `volume_driver`.
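Since the example above enables `use_multipath_for_image_xfer`, you will usually want Nova to attach volumes over multipath as well. Assuming the Nova chart passes `conf.nova` through to `nova.conf` the same way the Cinder chart does for its own configuration, a sketch of the corresponding option would be:

```yaml
nova_helm_values:
  conf:
    nova:
      libvirt:
        # Nova's libvirt driver option for multipath volume attachments.
        volume_use_multipath: true
```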
StorPool
Using StorPool as a storage backend requires additional configuration to ensure proper integration. These adjustments include network settings and file system mounts.
Configure Cinder to use StorPool by implementing the following settings:
```yaml
cinder_helm_values:
  storage: storpool
  pod:
    useHostNetwork:
      volume: true
    mounts:
      cinder_volume:
        volumeMounts:
          - name: etc-storpool-conf
            mountPath: /etc/storpool.conf
            readOnly: true
          - name: etc-storpool-conf-d
            mountPath: /etc/storpool.conf.d
            readOnly: true
        volumes:
          - name: etc-storpool-conf
            hostPath:
              type: File
              path: /etc/storpool.conf
          - name: etc-storpool-conf-d
            hostPath:
              type: Directory
              path: /etc/storpool.conf.d
  dependencies:
    static:
      api:
        jobs:
          - cinder-db-sync
          - cinder-ks-user
          - cinder-ks-endpoints
          - cinder-rabbit-init
      scheduler:
        jobs:
          - cinder-db-sync
          - cinder-ks-user
          - cinder-ks-endpoints
          - cinder-rabbit-init
      volume:
        jobs:
          - cinder-db-sync
          - cinder-ks-user
          - cinder-ks-endpoints
          - cinder-rabbit-init
      volume_usage_audit:
        jobs:
          - cinder-db-sync
          - cinder-ks-user
          - cinder-ks-endpoints
          - cinder-rabbit-init
  conf:
    cinder:
      DEFAULT:
        enabled_backends: hybrid-2ssd
        default_volume_type: hybrid-2ssd
    backends:
      rbd1: null
      hybrid-2ssd:
        volume_backend_name: hybrid-2ssd
        volume_driver: cinder.volume.drivers.storpool.StorPoolDriver
        storpool_template: hybrid-2ssd
        report_discard_supported: true
  manifests:
    deployment_backup: false
    job_backup_storage_init: false
    job_storage_init: false

nova_helm_values:
  conf:
    enable_iscsi: true
```
About `conf.enable_iscsi`

The `enable_iscsi` setting is required to allow the Nova instances to expose volumes by making the `/dev` devices available to the containers, not necessarily to use iSCSI as the storage protocol. In this case, the StorPool devices will be exposed as block devices to the containers.
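Since each StorPool template maps to its own Cinder backend, exposing a second template is just another entry in the same dictionary. A sketch with a hypothetical template named `nvme-3x` might look like this:

```yaml
cinder_helm_values:
  conf:
    cinder:
      DEFAULT:
        enabled_backends: hybrid-2ssd,nvme-3x
        default_volume_type: hybrid-2ssd
    backends:
      nvme-3x:
        volume_backend_name: nvme-3x
        volume_driver: cinder.volume.drivers.storpool.StorPoolDriver
        storpool_template: nvme-3x # hypothetical template name
        report_discard_supported: true
```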