
Ceph Installation


June 26, 2018

To test Ceph I'm using three virtual machines: one as the Mon (monitor) and Manager node, one as OSD0, and one as OSD1.

Each VM has 1 GB of RAM and a 60 GB disk.

OS environment: CentOS 7, minimal install plus the development tools group.

I'm recording the concrete steps here as I go and will keep updating this post.

1. Set the hostnames. Name the Mon/Manager node monman, the OSD0 node node0, and the OSD1 node node1 (lowercase, to match the /etc/hosts entries and prompts used later).

Run the corresponding command on each of the three nodes:


#hostnamectl set-hostname monman

#hostnamectl set-hostname node0

#hostnamectl set-hostname node1

2. Update the operating system on every node:


#yum -y update

3. After the update, it's best to reboot:


#reboot

 

4. Configure the hosts file on each node

Add the following three lines to /etc/hosts on all three nodes (monman, node0, node1):


192.168.1.38 monman
192.168.1.39 node0
192.168.1.40 node1

On every node, verify that each of these hostnames can be pinged.
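A quick way to check all three at once (a minimal sketch; adjust the hostname list if yours differ):


#for h in monman node0 node1; do ping -c 2 $h; done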

5. Install the deployment tool ceph-deploy

Ceph provides the ceph-deploy tool to make installing a Ceph cluster easier. It only needs to be installed on the ceph-deploy (admin) node, which in this setup is monman. Add the Ceph repository to that node, then install ceph-deploy. Since the system is CentOS 7, the configuration is as follows:

5.1 On the monman node, run the following (this configures the EPEL repository and other dependencies via yum):


#yum install -y yum-utils && sudo yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/ && sudo yum install --nogpgcheck -y epel-release && sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 && sudo rm /etc/yum.repos.d/dl.fedoraproject.org*

5.2 Add the Ceph repository


[root@monman ~]# vi /etc/yum.repos.d/ceph.repo

Add the following content:


[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-jewel/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1

5.3 Install ceph-deploy (on the monman node)


[root@monman ~]# yum update && yum install ceph-deploy

6. Install the NTP and SSH services

The official documentation recommends installing NTP on all Ceph nodes (especially the Ceph Monitor nodes) to avoid failures caused by clock drift.


#yum install ntp ntpdate ntp-doc -y

Synchronize the time on every node:


#ntpdate 0.cn.pool.ntp.org
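ntpdate only sets the clock once. To keep the clocks in sync continuously, you would normally also enable the ntpd service that ships with the ntp package (a sketch, assuming the default servers in /etc/ntp.conf are reachable):


#systemctl enable ntpd
#systemctl start ntpd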

If the SSH server is already installed, skip this step; otherwise install it:


#yum install openssh-server -y

7. Create a Ceph deployment user

The ceph-deploy tool must log in to the Ceph nodes as a regular user with passwordless sudo, because it installs packages and writes configuration files without being able to prompt for a password. The official documentation recommends creating a dedicated user for ceph-deploy on every Ceph node, and not naming it ceph. For convenience we use the account cephd as this dedicated user; it must be created on every node (monman, node0, node1) and given sudo privileges.

7.1 On every Ceph node, add the user and set its password:


#useradd -d /home/cephd -m cephd
#passwd cephd

7.2 Grant passwordless sudo:


#echo "cephd ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephd
#chmod 0440 /etc/sudoers.d/cephd
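A quick sanity check (a sketch): switch to the cephd user and confirm sudo works without a password prompt:


#su - cephd
$sudo whoami
root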

7.3 Next, on the ceph-deploy node (monman), switch to the cephd user, generate an SSH key pair, and distribute the public key to each Ceph node. Generate the key as cephd, and just press Enter when asked for a passphrase, because ceph-deploy needs to log in to the nodes without a password.

Run the following on the ceph-deploy node (monman):


[cephd@monman ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/cephd/.ssh/id_rsa): 
Created directory '/home/cephd/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/cephd/.ssh/id_rsa.
Your public key has been saved in /home/cephd/.ssh/id_rsa.pub.
The key fingerprint is:

Accept the defaults by pressing Enter.

Then copy the public key to the other nodes:


[cephd@monman ~]$ ssh-copy-id cephd@node0
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/cephd/.ssh/id_rsa.pub"
The authenticity of host 'node0 (192.168.1.39)' can't be established.
ECDSA key fingerprint is SHA256:HAqxF++F0XsqxJHXf/VUotzy5HuX5qvQGWf4RvGgynQ.
ECDSA key fingerprint is MD5:69:72:5e:dc:7b:70:f6:12:1f:39:e1:ea:7f:7c:c0:61.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
cephd@node0's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'cephd@node0'"
and check to make sure that only the key(s) you wanted were added.

[cephd@monman ~]$ ssh-copy-id cephd@node1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/cephd/.ssh/id_rsa.pub"
The authenticity of host 'node1 (192.168.1.40)' can't be established.
ECDSA key fingerprint is SHA256:pt8YFfEFdJoQg4ipU4wEB/vFdTcYnvymcSzkJ1ldnb8.
ECDSA key fingerprint is MD5:14:0e:db:bf:04:9b:42:65:c5:20:c8:fa:99:5a:2f:d9.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
cephd@node1's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'cephd@node1'"
and check to make sure that only the key(s) you wanted were added.

7.4 Test passwordless SSH to each node


[cephd@monman ~]$ ssh node0
[cephd@node0 ~]$ 
[cephd@node0 ~]$ hostname
node0
[cephd@node0 ~]$ 
[cephd@monman ~]$ ssh node1
[cephd@node1 ~]$ hostname
node1
[cephd@node1 ~]$ 

The tests pass. Next, edit the ~/.ssh/config file on the ceph-deploy admin node so that ceph-deploy does not need --username cephd on every invocation; this also simplifies ssh and scp usage.

Add the following lines:


$vi ~/.ssh/config

Host node0
   Hostname node0
   User cephd
Host node1
   Hostname node1
   User cephd

Note that if ssh node0 now fails with "Bad owner or permissions on /home/cephd/.ssh/config", the config file's permissions are wrong; fix them:


[cephd@monman ~]$ sudo chmod 600 ~/.ssh/config

That resolves the error.

8. Configure SELinux. On CentOS, SELinux defaults to Enforcing; to simplify the installation it is recommended to set it to Permissive or disabled.

8.1 To disable it temporarily, run:


$sudo setenforce 0
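To confirm the change took effect, getenforce should now report Permissive:


$getenforce
Permissive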

8.2 To disable it permanently, set SELINUX=disabled in /etc/selinux/config:


$sudo vi /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three two values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

For this test we set it to disabled. Do this on every node.

9. Configure the firewall (on every node)

Open the required ports: Ceph Monitors communicate on port 6789 by default, and OSDs communicate on ports in the 6800-7300 range, so adjust the firewall to allow the corresponding inbound connections.

9.1 Firewall setting: open port 6789


$sudo firewall-cmd --zone=public --add-port=6789/tcp --permanent
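On the OSD nodes (node0 and node1), the 6800-7300 port range mentioned above would also need to be opened if you keep the firewall running; a sketch (the --reload makes the permanent rules take effect immediately):


$sudo firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
$sudo firewall-cmd --reload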

9.2 Or simply turn the firewall off entirely:


$ sudo systemctl stop firewalld.service
$ sudo systemctl disable firewalld.service

The first command stops firewalld; the second disables it from starting at boot.

With all of the above configured, we can start installing Ceph.

10. As the cephd user, create a ceph-cluster directory and run the remaining steps from inside it. Since the monitor is planned to run on the monman node, execute the following:


[cephd@monman ~]$ mkdir ~/ceph-cluster && cd ~/ceph-cluster

Create the cluster:

Run ceph-deploy new monman, remembering to replace monman with your actual monitor hostname:


[cephd@monman ceph-cluster]$ ceph-deploy new monman
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephd/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.39): /usr/bin/ceph-deploy new monman
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0x7f39775c55f0>
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f3976d435f0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['monman']
[ceph_deploy.cli][INFO  ]  public_network                : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[monman][DEBUG ] connection detected need for sudo
[monman][DEBUG ] connected to host: monman 
[monman][DEBUG ] detect platform information from remote host
[monman][DEBUG ] detect machine type
[monman][DEBUG ] find the location of an executable
[monman][INFO  ] Running command: sudo /usr/sbin/ip link show
[monman][INFO  ] Running command: sudo /usr/sbin/ip addr show
[monman][DEBUG ] IP addresses found: [u'192.168.1.38']
[ceph_deploy.new][DEBUG ] Resolving host monman
[ceph_deploy.new][DEBUG ] Monitor monman at 192.168.1.38
[ceph_deploy.new][DEBUG ] Monitor initial members are ['monman']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.1.38']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...

At this point ceph-deploy has generated several files in the ceph-cluster directory: ceph.conf is the Ceph configuration file, ceph-deploy-ceph.log is the ceph-deploy log, and ceph.mon.keyring is the monitor keyring.


[cephd@monman ceph-cluster]$ ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring

Check the contents of ceph.conf:


[cephd@monman ceph-cluster]$ cat ceph.conf 
[global]
fsid = 6301daa8-3a5b-4d39-8507-4dfd509c1a8e
mon_initial_members = monman
mon_host = 192.168.1.38
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

Next, edit ceph.conf and set the default replica count to 2, since we only have two OSD nodes.


[cephd@monman ceph-cluster]$ vi ceph.conf
[global]
fsid = 6301daa8-3a5b-4d39-8507-4dfd509c1a8e
mon_initial_members = monman
mon_host = 192.168.1.38
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2

Then install Ceph on all nodes via ceph-deploy:


[cephd@monman ceph-cluster]$ ceph-deploy install monman node0 node1

[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephd/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.39): /usr/bin/ceph-deploy install monman node0 node1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  testing                       : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f8bf2f28cf8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  dev_commit                    : None
[ceph_deploy.cli][INFO  ]  install_mds                   : False
[ceph_deploy.cli][INFO  ]  stable                        : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  adjust_repos                  : True
[ceph_deploy.cli][INFO  ]  func                          : <function install at 0x7f8bf39f3d70>
[ceph_deploy.cli][INFO  ]  install_mgr                   : False
[ceph_deploy.cli][INFO  ]  install_all                   : False
[ceph_deploy.cli][INFO  ]  repo                          : False
[ceph_deploy.cli][INFO  ]  host                          : ['monman', 'node0', 'node1']
[ceph_deploy.cli][INFO  ]  install_rgw                   : False
[ceph_deploy.cli][INFO  ]  install_tests                 : False
[ceph_deploy.cli][INFO  ]  repo_url                      : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  install_osd                   : False
[ceph_deploy.cli][INFO  ]  version_kind                  : stable
[ceph_deploy.cli][INFO  ]  install_common                : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  dev                           : master
[ceph_deploy.cli][INFO  ]  nogpgcheck                    : False
[ceph_deploy.cli][INFO  ]  local_mirror                  : None
[ceph_deploy.cli][INFO  ]  release                       : None
[ceph_deploy.cli][INFO  ]  install_mon                   : False
[ceph_deploy.cli][INFO  ]  gpg_url                       : None
[ceph_deploy.install][DEBUG ] Installing stable version jewel on cluster ceph hosts monman node0 node1
[ceph_deploy.install][DEBUG ] Detecting platform for host monman ...
[monman][DEBUG ] connection detected need for sudo
[monman][DEBUG ] connected to host: monman 
[monman][DEBUG ] detect platform information from remote host
[monman][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: CentOS Linux 7.5.1804 Core
[monman][INFO  ] installing Ceph on monman
[monman][INFO  ] Running command: sudo yum clean all
[monman][DEBUG ] Loaded plugins: fastestmirror
[monman][DEBUG ] Cleaning repos: Ceph-noarch base epel extras updates
[monman][DEBUG ] Cleaning up everything
[monman][DEBUG ] Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos
[monman][DEBUG ] Cleaning up list of fastest mirrors
[monman][INFO  ] Running command: sudo yum -y install epel-release
[monman][DEBUG ] Loaded plugins: fastestmirror
[monman][DEBUG ] Determining fastest mirrors

PS: If the install fails partway through with an error like this:


[monman][WARNIN] ensuring that /etc/yum.repos.d/ceph.repo contains a high priority

[monman_deploy][ERROR ] RuntimeError: NoSectionError: No section: 'ceph'

The fix:


# yum remove ceph-release -y    

Then run the install again:


[cephd@monman ceph-cluster]$ ceph-deploy install monman node0 node1

Once the install completes, initialize the monitor node and gather all the keys:


[cephd@monman ceph-cluster]$ ceph-deploy mon create-initial

After this finishes, a set of keyrings is generated in the current directory; these are presumably the credentials the various Ceph components need to authenticate with each other.


[cephd@monman ceph-cluster]$ ll
total 17616
-rw------- 1 cephd cephd      113 Jul  3 10:47 ceph.bootstrap-mds.keyring
-rw------- 1 cephd cephd       71 Jul  3 10:47 ceph.bootstrap-mgr.keyring
-rw------- 1 cephd cephd      113 Jul  3 10:47 ceph.bootstrap-osd.keyring
-rw------- 1 cephd cephd      113 Jul  3 10:47 ceph.bootstrap-rgw.keyring
-rw------- 1 cephd cephd      129 Jul  3 10:47 ceph.client.admin.keyring
-rw-rw-r-- 1 cephd cephd      221 Jun 28 14:42 ceph.conf
-rw-rw-r-- 1 cephd cephd   655728 Jul  3 10:47 ceph-deploy-ceph.log
-rw------- 1 cephd cephd       73 Jun 28 14:40 ceph.mon.keyring

At this point the Ceph monitor is up and running. Next we create the OSDs, which are where the data is ultimately stored; we have two OSD nodes, which will hold osd.0 and osd.1. The official recommendation is to give each OSD and its journal a dedicated disk or partition, but these virtual machines don't have one, so instead we create directories on the local disk to serve as OSD storage.

Run the following from the monman node:


[cephd@monman ceph-cluster]$ ssh node0
Last login: Thu Jun 28 14:38:56 2018 from monman
[cephd@node0 ~]$ sudo mkdir /var/local/osd0
[cephd@node0 ~]$ sudo chown -R ceph:ceph /var/local/osd0/
[cephd@node0 ~]$ exit
logout
Connection to node0 closed.
[cephd@monman ceph-cluster]$ ssh node1
Last login: Thu Jun 28 14:39:00 2018 from monman
[cephd@node1 ~]$ sudo mkdir /var/local/osd1
[cephd@node1 ~]$ sudo chown -R ceph:ceph /var/local/osd1
[cephd@node1 ~]$ exit
logout
Connection to node1 closed.

Note the chown -R ceph:ceph step, which gives ownership of the osd0 and osd1 directories to ceph:ceph; without it, the later ceph-deploy osd activate ... step fails with a permission error.

Next, from the ceph-deploy node, run the prepare OSD step, which creates on each OSD node the metadata needed to activate the OSDs later.


[cephd@monman ceph-cluster]$ ceph-deploy osd prepare node0:/var/local/osd0 node1:/var/local/osd1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephd/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.39): /usr/bin/ceph-deploy osd prepare node0:/var/local/osd0 node1:/var/local/osd1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  disk                          : [('node0', '/var/local/osd0', None), ('node1', '/var/local/osd1', None)]
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : prepare
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f1b464efe60>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7f1b46544f50>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks node0:/var/local/osd0: node1:/var/local/osd1:
[node0][DEBUG ] connection detected need for sudo
[node0][DEBUG ] connected to host: node0 
[node0][DEBUG ] detect platform information from remote host
[node0][DEBUG ] detect machine type
[node0][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to node0
[node0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node0][WARNIN] osd keyring does not exist yet, creating one
[node0][DEBUG ] create a keyring file
[ceph_deploy.osd][DEBUG ] Preparing host node0 disk /var/local/osd0 journal None activate False
[node0][DEBUG ] find the location of an executable
[node0][INFO  ] Running command: sudo /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /var/local/osd0
[node0][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[node0][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[node0][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[node0][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[node0][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[node0][WARNIN] populate_data_path: Preparing osd data dir /var/local/osd0
[node0][WARNIN] command: Running command: /sbin/restorecon -R /var/local/osd0/ceph_fsid.1316.tmp
[node0][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/local/osd0/ceph_fsid.1316.tmp
[node0][WARNIN] command: Running command: /sbin/restorecon -R /var/local/osd0/fsid.1316.tmp
[node0][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/local/osd0/fsid.1316.tmp
[node0][WARNIN] command: Running command: /sbin/restorecon -R /var/local/osd0/magic.1316.tmp
[node0][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/local/osd0/magic.1316.tmp
[node0][INFO  ] checking OSD status...
[node0][DEBUG ] find the location of an executable
[node0][INFO  ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host node0 is now ready for osd use.
[node1][DEBUG ] connection detected need for sudo
[node1][DEBUG ] connected to host: node1 
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to node1
[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node1][WARNIN] osd keyring does not exist yet, creating one
[node1][DEBUG ] create a keyring file
[ceph_deploy.osd][DEBUG ] Preparing host node1 disk /var/local/osd1 journal None activate False
[node1][DEBUG ] find the location of an executable
[node1][INFO  ] Running command: sudo /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /var/local/osd1
[node1][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[node1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[node1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[node1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[node1][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[node1][WARNIN] populate_data_path: Preparing osd data dir /var/local/osd1
[node1][WARNIN] command: Running command: /sbin/restorecon -R /var/local/osd1/ceph_fsid.1282.tmp
[node1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/local/osd1/ceph_fsid.1282.tmp
[node1][WARNIN] command: Running command: /sbin/restorecon -R /var/local/osd1/fsid.1282.tmp
[node1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/local/osd1/fsid.1282.tmp
[node1][WARNIN] command: Running command: /sbin/restorecon -R /var/local/osd1/magic.1282.tmp
[node1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/local/osd1/magic.1282.tmp
[node1][INFO  ] checking OSD status...
[node1][DEBUG ] find the location of an executable
[node1][INFO  ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host node1 is now ready for osd use.

If no errors appear and you see the "Host ... is now ready for osd use" messages, the prepare step succeeded.

OK, next we need to activate the OSDs:


[cephd@monman ceph-cluster]$ ceph-deploy osd activate node0:/var/local/osd0 node1:/var/local/osd1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephd/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.39): /usr/bin/ceph-deploy osd activate node0:/var/local/osd0 node1:/var/local/osd1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : activate
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f241b726e60>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7f241b77bf50>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : [('node0', '/var/local/osd0', None), ('node1', '/var/local/osd1', None)]
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks node0:/var/local/osd0: node1:/var/local/osd1:
[node0][DEBUG ] connection detected need for sudo
[node0][DEBUG ] connected to host: node0 
[node0][DEBUG ] detect platform information from remote host
[node0][DEBUG ] detect machine type
[node0][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.osd][DEBUG ] activating host node0 disk /var/local/osd0
[ceph_deploy.osd][DEBUG ] will use init type: systemd
[node0][DEBUG ] find the location of an executable
[node0][INFO  ] Running command: sudo /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /var/local/osd0
[node0][WARNIN] main_activate: path = /var/local/osd0
[node0][WARNIN] activate: Cluster uuid is 895251dc-26af-496d-a8b2-10ab745f04b9
[node0][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[node0][WARNIN] activate: Cluster name is ceph
[node0][WARNIN] activate: OSD uuid is dd194f31-108f-43cf-b37a-e92a5cdccae6
[node0][WARNIN] allocate_osd_id: Allocating OSD id...
[node0][WARNIN] command: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concise dd194f31-108f-43cf-b37a-e92a5cdccae6
[node0][WARNIN] command: Running command: /sbin/restorecon -R /var/local/osd0/whoami.1411.tmp
[node0][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/local/osd0/whoami.1411.tmp
[node0][WARNIN] activate: OSD id is 0
[node0][WARNIN] activate: Initializing OSD...
[node0][WARNIN] command_check_call: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/local/osd0/activate.monmap
[node0][WARNIN] got monmap epoch 1
[node0][WARNIN] command: Running command: /usr/bin/timeout 300 ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /var/local/osd0/activate.monmap --osd-data /var/local/osd0 --osd-journal /var/local/osd0/journal --osd-uuid dd194f31-108f-43cf-b37a-e92a5cdccae6 --keyring /var/local/osd0/keyring --setuser ceph --setgroup ceph
[node0][WARNIN] activate: Marking with init system systemd
[node0][WARNIN] command: Running command: /sbin/restorecon -R /var/local/osd0/systemd
[node0][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/local/osd0/systemd
[node0][WARNIN] activate: Authorizing OSD key...
[node0][WARNIN] command_check_call: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring auth add osd.0 -i /var/local/osd0/keyring osd allow * mon allow profile osd
[node0][WARNIN] added key for osd.0
[node0][WARNIN] command: Running command: /sbin/restorecon -R /var/local/osd0/active.1411.tmp
[node0][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/local/osd0/active.1411.tmp
[node0][WARNIN] activate: ceph osd.0 data dir is ready at /var/local/osd0
[node0][WARNIN] activate_dir: Creating symlink /var/lib/ceph/osd/ceph-0 -> /var/local/osd0
[node0][WARNIN] start_daemon: Starting ceph osd.0...
[node0][WARNIN] command_check_call: Running command: /usr/bin/systemctl disable ceph-osd@0
[node0][WARNIN] command_check_call: Running command: /usr/bin/systemctl disable ceph-osd@0 --runtime
[node0][WARNIN] command_check_call: Running command: /usr/bin/systemctl enable ceph-osd@0
[node0][WARNIN] Created symlink from /etc/systemd/system/ceph-osd.target.wants/ceph-osd@0.service to /usr/lib/systemd/system/ceph-osd@.service.
[node0][WARNIN] command_check_call: Running command: /usr/bin/systemctl start ceph-osd@0
[node0][INFO  ] checking OSD status...
[node0][DEBUG ] find the location of an executable
[node0][INFO  ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
[node0][INFO  ] Running command: sudo systemctl enable ceph.target
[node1][DEBUG ] connection detected need for sudo
[node1][DEBUG ] connected to host: node1 
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.osd][DEBUG ] activating host node1 disk /var/local/osd1
[ceph_deploy.osd][DEBUG ] will use init type: systemd
[node1][DEBUG ] find the location of an executable
[node1][INFO  ] Running command: sudo /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /var/local/osd1
[node1][WARNIN] main_activate: path = /var/local/osd1
[node1][WARNIN] activate: Cluster uuid is 895251dc-26af-496d-a8b2-10ab745f04b9
[node1][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[node1][WARNIN] activate: Cluster name is ceph
[node1][WARNIN] activate: OSD uuid is 2764f66a-3d0b-40ee-a3b8-f46eed7d5e3f
[node1][WARNIN] allocate_osd_id: Allocating OSD id...
[node1][WARNIN] command: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concise 2764f66a-3d0b-40ee-a3b8-f46eed7d5e3f
[node1][WARNIN] command: Running command: /sbin/restorecon -R /var/local/osd1/whoami.1377.tmp
[node1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/local/osd1/whoami.1377.tmp
[node1][WARNIN] activate: OSD id is 1
[node1][WARNIN] activate: Initializing OSD...
[node1][WARNIN] command_check_call: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/local/osd1/activate.monmap
[node1][WARNIN] got monmap epoch 1
[node1][WARNIN] command: Running command: /usr/bin/timeout 300 ceph-osd --cluster ceph --mkfs --mkkey -i 1 --monmap /var/local/osd1/activate.monmap --osd-data /var/local/osd1 --osd-journal /var/local/osd1/journal --osd-uuid 2764f66a-3d0b-40ee-a3b8-f46eed7d5e3f --keyring /var/local/osd1/keyring --setuser ceph --setgroup ceph
[node1][WARNIN] activate: Marking with init system systemd
[node1][WARNIN] command: Running command: /sbin/restorecon -R /var/local/osd1/systemd
[node1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/local/osd1/systemd
[node1][WARNIN] activate: Authorizing OSD key...
[node1][WARNIN] command_check_call: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring auth add osd.1 -i /var/local/osd1/keyring osd allow * mon allow profile osd
[node1][WARNIN] added key for osd.1
[node1][WARNIN] command: Running command: /sbin/restorecon -R /var/local/osd1/active.1377.tmp
[node1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/local/osd1/active.1377.tmp
[node1][WARNIN] activate: ceph osd.1 data dir is ready at /var/local/osd1
[node1][WARNIN] activate_dir: Creating symlink /var/lib/ceph/osd/ceph-1 -> /var/local/osd1
[node1][WARNIN] start_daemon: Starting ceph osd.1...
[node1][WARNIN] command_check_call: Running command: /usr/bin/systemctl disable ceph-osd@1
[node1][WARNIN] command_check_call: Running command: /usr/bin/systemctl disable ceph-osd@1 --runtime
[node1][WARNIN] command_check_call: Running command: /usr/bin/systemctl enable ceph-osd@1
[node1][WARNIN] Created symlink from /etc/systemd/system/ceph-osd.target.wants/ceph-osd@1.service to /usr/lib/systemd/system/ceph-osd@.service.
[node1][WARNIN] command_check_call: Running command: /usr/bin/systemctl start ceph-osd@1
[node1][INFO  ] checking OSD status...
[node1][DEBUG ] find the location of an executable
[node1][INFO  ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
[node1][INFO  ] Running command: sudo systemctl enable ceph.target

No errors at the end, so the activate command succeeded as well.

The log shows the activation went fine. The last step is to use ceph-deploy admin to push the configuration file and admin key to every node, so that the ceph command can be used on any node without specifying the monitor address or the ceph.client.admin.keyring. (PS: remember to substitute your own node names.)


[cephd@monman ceph-cluster]$ ceph-deploy admin monman node0 node1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephd/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.39): /usr/bin/ceph-deploy admin monman node0 node1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f487f71ca70>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['monman', 'node0', 'node1']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0x7f48802309b0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to monman
[monman][DEBUG ] connection detected need for sudo
[monman][DEBUG ] connected to host: monman 
[monman][DEBUG ] detect platform information from remote host
[monman][DEBUG ] detect machine type
[monman][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node0
[node0][DEBUG ] connection detected need for sudo
[node0][DEBUG ] connected to host: node0 
[node0][DEBUG ] detect platform information from remote host
[node0][DEBUG ] detect machine type
[node0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node1
[node1][DEBUG ] connection detected need for sudo
[node1][DEBUG ] connected to host: node1 
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[cephd@monman ceph-cluster]$ 

Also, to make sure ceph.client.admin.keyring can be read, add read permission:


[cephd@monman ceph-cluster]$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring
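If you also plan to run ceph commands directly on node0 or node1, the keyring pushed to those nodes would presumably need the same read permission; a sketch:


[cephd@monman ceph-cluster]$ ssh node0 sudo chmod +r /etc/ceph/ceph.client.admin.keyring
[cephd@monman ceph-cluster]$ ssh node1 sudo chmod +r /etc/ceph/ceph.client.admin.keyring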

With that, the Ceph storage cluster is built; let's check whether it started successfully:


[cephd@monman ceph-cluster]$ ceph -s
    cluster 895251dc-26af-496d-a8b2-10ab745f04b9
     health HEALTH_OK
     monmap e1: 1 mons at {monman=192.168.1.38:6789/0}
            election epoch 3, quorum 0 monman
     osdmap e10: 2 osds: 2 up, 2 in
            flags sortbitwise,require_jewel_osds
      pgmap v23: 64 pgs, 1 pools, 0 bytes data, 0 objects
            14535 MB used, 87814 MB / 102350 MB avail
                  64 active+clean
[cephd@monman ceph-cluster]$ 

Check the cluster health:


[cephd@monman ceph-cluster]$ ceph health
HEALTH_OK

Check the cluster's OSD tree:


[cephd@monman ceph-cluster]$ ceph osd tree
ID WEIGHT  TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 0.09760 root default                                     
-2 0.04880     host node0                                   
 0 0.04880         osd.0       up  1.00000          1.00000 
-3 0.04880     host node1                                   
 1 0.04880         osd.1       up  1.00000          1.00000 

That's it. By following the official Ceph installation and cluster-setup documentation step by step, the whole storage cluster comes together; the docs are detailed and easy to follow. That wraps up this post. Next time we'll look at the object storage, block device, and filesystem layers of Ceph.
