Configuring docker to use rexray and Ceph for persistent storage

  • Post category: Docker

For various reasons I wanted to play with docker containers backed by persistent Ceph storage. rexray seemed like the way to do that, so here are my notes on getting that working...

First off, I needed to install rexray:

root@labosa:~/rexray# curl -sSL https://dl.bintray.com/emccode/rexray/install | sh
Selecting previously unselected package rexray.
(Reading database ... 177547 files and directories currently installed.)
Preparing to unpack rexray_0.9.0-1_amd64.deb ...
Unpacking rexray (0.9.0-1) ...
Setting up rexray (0.9.0-1) ...
rexray has been installed to /usr/bin/rexray

REX-Ray
-------
Binary: /usr/bin/rexray
Flavor: client+agent+controller
SemVer: 0.9.0
OsArch: Linux-x86_64
Branch: v0.9.0
Commit: 2a7458dd90a79c673463e14094377baf9fc8695e
Formed: Thu, 04 May 2017 07:38:11 AEST

libStorage
----------
SemVer: 0.6.0
OsArch: Linux-x86_64
Branch: v0.9.0
Commit: fa055d6da595602715bdfd5541b4aa6d4dcbcbd9
Formed: Thu, 04 May 2017 07:36:11 AEST

Which is of course horrid. What that script seems to have done is install a deb'd version of rexray based on an alien'd package:

root@labosa:~/rexray# dpkg -s rexray
Package: rexray
Status: install ok installed
Priority: extra
Section: alien
Installed-Size: 36140
Maintainer: Travis CI User <travis@testing-gce-7fbf00fc-f7cd-4e37-a584-810c64fdeeb1>
Architecture: amd64
Version: 0.9.0-1
Depends: libc6 (>= 2.3.2)
Description: Tool for managing remote & local storage.
 A guest based storage introspection tool that allows local visibility
 and management from cloud and storage platforms.
 .
 (Converted from a rpm…
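The excerpt cuts off before the Ceph-specific configuration, but the general shape of what follows is: point REX-Ray's libStorage layer at the rbd driver, restart the service, and then create docker volumes with the rexray volume driver. A minimal sketch of that is below, assuming a working Ceph client on the host; the pool name, volume name and mount point are my assumptions, not taken from the post.

# Sketch only: write a minimal REX-Ray config for the Ceph RBD driver.
# The pool name "rbd" is an assumption; use whatever pool your cluster has.
sudo tee /etc/rexray/config.yml <<'EOF'
libstorage:
  service: rbd
rbd:
  defaultPool: rbd
EOF

# Restart the service so it picks up the new config (assuming the deb
# registered a service called "rexray").
sudo systemctl restart rexray

# Create a Ceph-backed volume and attach it to a container.
docker volume create --driver rexray --opt size=1 testvol
docker run -it -v testvol:/mnt ubuntu bash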


So you want to setup a Ceph dev environment using OSA

  • Post category: OpenStack

Support for installing and configuring Ceph was added to openstack-ansible in Ocata, so now that I have a need for a Ceph development environment it seems logical to build it as an openstack-ansible Ocata AIO. There were a few gotchas there, so I want to explain the process I used.

First off, Ceph is enabled in an openstack-ansible AIO using a thing I've never seen before called a "Scenario". Basically this means that you need to export an environment variable called "SCENARIO" before running the AIO install. Something like this will do the trick:

export SCENARIO=ceph

Next you need to set the global pg_num in the ceph role or the install will fail. I did that with this patch:

--- /etc/ansible/roles/ceph.ceph-common/defaults/main.yml 2017-05-26 08:55:07.803635173 +1000
+++ /etc/ansible/roles/ceph.ceph-common/defaults/main.yml 2017-05-26 08:58:30.417019878 +1000
@@ -338,7 +338,9 @@
 #   foo: 1234
 #   bar: 5678
 #
-ceph_conf_overrides: {}
+ceph_conf_overrides:
+  global:
+    osd_pool_default_pg_num: 8


 #############
@@ -373,4 +375,4 @@
 # Set this to true to enable File access via NFS. Requires an MDS role.
 nfs_file_gw: true
 # Set this to true to enable Object access via NFS. Requires an RGW role.
-nfs_obj_gw: false
\ No newline at end of file
+nfs_obj_gw: false

That…
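The excerpt stops before the build itself, so for context this is roughly the AIO sequence of that era with the SCENARIO variable exported up front. The clone path, branch and script names here are from memory rather than from the post, so treat them as assumptions to verify against the Ocata openstack-ansible tree.

# Sketch only: an Ocata AIO build with the Ceph scenario enabled.
git clone https://git.openstack.org/openstack/openstack-ansible /opt/openstack-ansible
cd /opt/openstack-ansible
git checkout stable/ocata

# Export the scenario before running any of the AIO scripts so the Ceph
# pieces are pulled into the generated configuration.
export SCENARIO=ceph

# Install ansible and fetch the role requirements (this is what puts
# ceph.ceph-common under /etc/ansible/roles, where the patch above applies).
scripts/bootstrap-ansible.sh

# Prepare the host, write the AIO configuration, then deploy.
scripts/bootstrap-aio.sh
scripts/run-playbooks.sh

Once the playbooks finish, running ceph -s from one of the ceph containers on the AIO host is a quick sanity check that the cluster came up healthy.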

