OSD Encryption
For this tutorial we will use the replica-3 Ceph cluster shown in the following figure:
To create an encrypted pool we will execute the following steps:
- add new OSDs to the cluster using the --dmcrypt option;
- modify the crush map in order to create a root bucket containing the new OSDs and a new rule "encrypted_ruleset";
- create the pool "encrypted" and associate it with the rule "encrypted_ruleset".
The final setup of the cluster is shown in the following figure:
From the admin node (vm01), inside the working directory (cd ceph-cluster), run the following commands:
#!bash
ceph-deploy disk zap vm02:vdb
ceph-deploy disk zap vm03:vdb
ceph-deploy disk zap vm04:vdb
ceph-deploy osd create --dmcrypt vm02:vdb
ceph-deploy osd create --dmcrypt vm03:vdb
ceph-deploy osd create --dmcrypt vm04:vdb
Note: by default, Ceph stores the encryption keys in the folder /etc/ceph/dmcrypt-keys on the OSD nodes.
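You can quickly verify that the key files have been created, for example by listing that folder on one of the OSD nodes (this assumes passwordless SSH from the admin node, as already required by ceph-deploy):
#!bash
ssh vm02 ls -l /etc/ceph/dmcrypt-keys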
You can check the cluster status and the OSD tree map:
#!bash
ceph status
ceph osd tree
Get the crush map and decompile it using the following commands:
#!bash
ceph osd getcrushmap -o crushmap.compiled
crushtool -d crushmap.compiled -o crushmap.decompiled
Edit the crushmap.decompiled file, adding the following buckets:
host vm02-encr {
        id -5           # do not change unnecessarily
        # weight 0.080
        alg straw
        hash 0          # rjenkins1
        item osd.3 weight 0.040
}
host vm03-encr {
        id -6           # do not change unnecessarily
        # weight 0.080
        alg straw
        hash 0          # rjenkins1
        item osd.4 weight 0.040
}
host vm04-encr {
        id -7           # do not change unnecessarily
        # weight 0.080
        alg straw
        hash 0          # rjenkins1
        item osd.5 weight 0.040
}
root encrypted {
        id -8           # do not change unnecessarily
        # weight 0.120
        alg straw
        hash 0          # rjenkins1
        item vm02-encr weight 0.040
        item vm03-encr weight 0.040
        item vm04-encr weight 0.040
}
Moreover, add the following rule:
rule encrypted_ruleset {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take encrypted
        step chooseleaf firstn 0 type host
        step emit
}
Note: the ids of the added entities may differ in your setup; make sure that every id you assign is unique within the map.
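As a quick check before picking new ids, you can list the bucket ids already used in the decompiled map (a simple grep, shown here only as an illustration):
#!bash
grep "id -" crushmap.decompiled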
Now, you can apply the modified crush map:
#!bash
crushtool -c crushmap.decompiled -o crushmap.compiled
ceph osd setcrushmap -i crushmap.compiled
Note: you can prevent the OSD daemons from updating the crush map at startup by adding the following to ceph.conf:
[osd]
osd crush update on start = false
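If you edit ceph.conf on the admin node, remember to distribute it to the other nodes; with ceph-deploy this can be done, for example, as follows (host names as in this tutorial):
#!bash
ceph-deploy --overwrite-conf config push vm02 vm03 vm04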
Check the cluster status:
#!bash
ceph status
ceph osd tree
Create the pool "encrypted" and assign the rule "encrypted_ruleset" to the pool:
#!bash
ceph osd pool create encrypted 128
ceph osd pool set encrypted crush_ruleset 1
Check the results:
#!bash
ceph osd dump
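In the output of ceph osd dump, the pool 'encrypted' should be listed with crush_ruleset 1. You can also filter the output or query the value directly (the pool get command below is available in the Ceph releases this tutorial targets):
#!bash
ceph osd dump | grep encrypted
ceph osd pool get encrypted crush_ruleset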
Once the encrypted pool has been created, we can configure Cinder to use it. Edit the cinder-volume configuration file and add the new backend:
[rbdencrypted]
volume_backend_name=RBD-ENCR
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_pool=encrypted
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot=false
rbd_max_clone_depth=5
glance_api_version=2
rbd_user=cinder
rbd_secret_uuid=457eb676-33da-42ec-9a8c-9293d545c337
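Keep in mind that a backend section is only used if its name appears in enabled_backends in the [DEFAULT] section of the cinder configuration; the backend name 'rbd' below is just a placeholder for whatever backends are already enabled in your setup:
[DEFAULT]
enabled_backends=rbd,rbdencrypted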
Modify the capabilities of the 'cinder' user so that it can also read and write data in the new pool 'encrypted':
#!bash
ceph auth caps client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images, allow rwx pool=encrypted'
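You can check the resulting capabilities, for example with:
#!bash
ceph auth get client.cinder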
Restart the cinder-volume service:
#!bash
service cinder-volume restart
Then create a new volume type associated with the backend and create a test volume:
#!bash
cinder type-create encrypted
cinder type-key encrypted set volume_backend_name=RBD-ENCR
cinder create --volume_type encrypted --display_name vol-test-encrypted 1
Check the status and the details of the volume:
#!bash
cinder show vol-test-encrypted
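If everything is set up correctly, the volume is created as an RBD image inside the 'encrypted' pool; one possible way to double-check this from a node that has access to a suitable Ceph keyring (shown only as an illustration):
#!bash
rbd ls -p encrypted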

