OpenStack Train High-Availability Deployment on VMware (Updated)
This article uses ZeroTier virtual NICs installed inside the virtual machines so that VMs running on different VMware hosts can communicate with each other. Look up the detailed ZeroTier installation procedure yourself (e.g. on Baidu).
I. Node Planning
controller01
Specs:
6 GB RAM, 4 vCPUs, 30 GB disk, 1 NAT NIC, 1 ZeroTier virtual NIC
Network:
NAT: 192.168.200.10
ZeroTier: 192.168.100.10
controller02
Specs:
6 GB RAM, 4 vCPUs, 30 GB disk, 1 NAT NIC, 1 ZeroTier virtual NIC
Network:
NAT: 192.168.200.11
ZeroTier: 192.168.100.11
compute01
Specs:
4 GB RAM, 4 vCPUs, 30 GB disk, 1 NAT NIC, 1 ZeroTier virtual NIC
Network:
NAT: 192.168.200.20
ZeroTier: 192.168.100.20
compute02
Specs:
4 GB RAM, 4 vCPUs, 30 GB disk, 1 NAT NIC, 1 ZeroTier virtual NIC
Network:
NAT: 192.168.200.21
ZeroTier: 192.168.100.21
#High availability uses the pacemaker + haproxy model: pacemaker provides resource management and the VIP (virtual IP), haproxy provides reverse proxying and load balancing
Pacemaker high-availability VIP: 192.168.100.100
| Node | controller01 | controller02 | compute01 | compute02 |
| --- | --- | --- | --- | --- |
| Components | MySQL | MySQL | | |
| | Keepalived | Keepalived | | |
| | RabbitMQ | RabbitMQ | | |
| Service | User | Password |
| --- | --- | --- |
| MySQL database | root | 000000 |
| MySQL database | backup | backup |
| MySQL database | keystone | KEYSTONE_DBPASS |
| Pacemaker web https://192.168.100.10:2224/ | hacluster | 000000 |
| HAProxy web http://192.168.100.100:1080/ | admin | admin |
II. Basic High-Availability Environment Configuration
1. Environment initialization
First configure the NAT NIC address on all four nodes and make sure they can reach the Internet, then proceed as follows.
#####All nodes#####
#Set the hostname on the corresponding node
hostnamectl set-hostname controller01
hostnamectl set-hostname controller02
hostnamectl set-hostname compute01
hostnamectl set-hostname compute02
#Disable the firewall and SELinux
systemctl stop firewalld
systemctl disable firewalld
sed -i 's#SELINUX=enforcing#SELINUX=disabled#' /etc/selinux/config
setenforce 0
#Install basic tools
yum install vim wget net-tools -y
#Install and configure ZeroTier
curl -s https://install.zerotier.com | sudo bash
zerotier-cli join a0cbf4b62a1c903e
//replace this with your own network ID (requested on the ZeroTier website)
//after joining, the node must be approved in the ZeroTier network management console, otherwise it gets no IP address
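Once approved, a quick check (not part of the original write-up) confirms the node actually received its managed address:
zerotier-cli listnetworks //the network should show status OK and a managed IP in 192.168.100.0/24
ip addr | grep 192.168.100 //the ZeroTier interface should carry the planned address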
#Configure a moon to speed up connections
mkdir /var/lib/zerotier-one/moons.d
cd /var/lib/zerotier-one/moons.d
wget --no-check-certificate https://baimafeima1.coding.net/p/linux-openstack-jiaoben/d/openstack-T/git/raw/master/000000986a8a957f.moon
systemctl restart zerotier-one.service
systemctl enable zerotier-one.service
#Configure hosts
cat >> /etc/hosts << EOF
192.168.100.10 controller01
192.168.100.11 controller02
192.168.100.20 compute01
192.168.100.21 compute02
EOF
#Configure kernel parameters
echo 'net.bridge.bridge-nf-call-iptables = 1' >>/etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-ip6tables = 1' >>/etc/sysctl.conf
echo 'net.ipv4.ip_nonlocal_bind = 1' >>/etc/sysctl.conf
sysctl -p
#Install the Train release package repository
yum install centos-release-openstack-train -y
yum upgrade
yum clean all
yum makecache
yum install python-openstackclient -y
yum install openstack-utils -y
#openstack-selinux (not installed for the moment)
yum install openstack-selinux -y
If the output shows entries marked with [*] as in the screenshot, synchronization succeeded.
2. MariaDB + Keepalived dual-master high availability (MySQL-HA)
Dual master + keepalived
2.1 Install the database
#####controller#####
Install the database on all controller nodes
yum install mariadb mariadb-server python2-PyMySQL -y
systemctl restart mariadb.service
systemctl enable mariadb.service
#Install some required build dependencies
yum -y install gcc gcc-c++ gcc-g77 ncurses-devel bison libaio-devel cmake libnl* libpopt* popt-static openssl-devel
2.2 Initialize MariaDB; set the database password on all controller nodes
######All controllers#####
mysql_secure_installation
#Enter the current root password (press Enter for none)
Enter current password for root (enter for none):
#Set a root password?
Set root password? [Y/n] y
#New password:
New password:
#Re-enter the new password:
Re-enter new password:
#Remove anonymous users?
Remove anonymous users? [Y/n] y
#Disallow remote root login?
Disallow root login remotely? [Y/n] n
#Remove the test database and access to it?
Remove test database and access to it? [Y/n] y
#Reload privilege tables now?
Reload privilege tables now? [Y/n] y
2.3 Modify the MariaDB configuration file
Edit the configuration file /etc/my.cnf on all controller nodes.
######controller01######
Make sure /etc/my.cnf contains the following parameters; add them manually if missing, then restart the MySQL service.
[mysqld]
log-bin=mysql-bin #enable binary logging
server-id=1 #server ID (must differ between the two nodes)
systemctl restart mariadb #restart the database
mysql -uroot -p000000 #log in to the database
grant replication slave on *.* to 'backup'@'%' identified by 'backup'; flush privileges; #create a user for replication
show master status; #check the master status; note down File and Position, they are needed on controller02 to configure replication from controller01.
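For reference, the output looks roughly like this (your File and Position values will differ; the ones below are simply the values reused in the change master command further down):
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 |      639 |              |                  |
+------------------+----------+--------------+------------------+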
######Configure controller02######
Make sure /etc/my.cnf contains the following parameters; add them manually if missing, then restart the MySQL service.
[mysqld]
log-bin=mysql-bin #enable binary logging
server-id=2 #server ID (must differ between the two nodes)
systemctl restart mariadb #restart the database
mysql -uroot -p000000 #log in to the database
grant replication slave on *.* to 'backup'@'%' identified by 'backup'; flush privileges; #create a user for replication
show master status; #check the master status; note down File and Position, they will be needed on controller01 to configure replication from controller02.
change master to master_host='192.168.100.10',master_user='backup',master_password='backup',master_log_file='mysql-bin.000001',master_log_pos=639;
#configure controller02 to replicate from controller01; master_log_file='mysql-bin.000001',master_log_pos=639 are the values recorded from controller01 above
exit; #quit the database
systemctl restart mariadb #restart the database
mysql -uroot -p000000
show slave status\G #check the replication status
//Slave_IO_Running: Yes
//Slave_SQL_Running: Yes
//when both show Yes, controller02 is replicating successfully.
//At this point the master-slave setup with controller01 as master and controller02 as slave is working.
######controller02######
Log in to the database and check its master status
mysql -uroot -p000000
show master status;
######controller01######
#Make controller01 and controller02 masters of each other (i.e. dual master)
mysql -uroot -p000000
change master to master_host='192.168.100.11',master_user='backup',master_password='backup',master_log_file='mysql-bin.000002',master_log_pos=342;
#master_log_file='mysql-bin.000002',master_log_pos=342 are the values shown by controller02 above
exit;
######controller01#######
systemctl restart mariadb #restart the database
mysql -uroot -p000000
show slave status\G #check the replication status
2.4 Test replication (do it yourself)
Create a database on controller01 and check whether it appears on controller02; then create a database on controller02 and check whether it appears on controller01.
Replication works.
2.5 Test high availability with keepalived
#####controller01######
ip link set multicast on dev ztc3q6qjqu //enable multicast on the ZeroTier virtual NIC
wget http://www.keepalived.org/software/keepalived-2.1.5.tar.gz
tar -zxvf keepalived-2.1.5.tar.gz
cd keepalived-2.1.5
./configure --prefix=/usr/local/keepalived #install under /usr/local/keepalived
make && make install
#Edit the configuration file
Note: keepalived has a single configuration file, keepalived.conf, which contains the following main configuration areas:
global_defs, vrrp_instance and virtual_server.
global_defs: mainly configures who gets notified when a failure occurs, plus the machine identification.
vrrp_instance: defines the VIP area used to provide service to the outside and its related attributes.
virtual_server: virtual server definition
mkdir -p /etc/keepalived/
vim /etc/keepalived/keepalived.conf
#Write the following configuration
! Configuration File for keepalived
global_defs {
router_id MySQL-ha
}
vrrp_instance VI_1 {
state BACKUP #both nodes use BACKUP here
interface ztc3q6qjqu
virtual_router_id 51
priority 100 #priority; set to 90 on the other node
advert_int 1
nopreempt #no preemption; only set this on the higher-priority machine, not on the lower-priority one
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.100.200
}
}
virtual_server 192.168.100.200 3306 {
delay_loop 2 #check the real_server state every 2 seconds
lb_algo wrr #LVS algorithm
lb_kind DR #LVS mode
persistence_timeout 60 #session persistence time
protocol TCP
real_server 192.168.100.10 3306 {
weight 3
notify_down /usr/local/MySQL/bin/MySQL.sh #script executed when the service is detected as down
TCP_CHECK {
connect_timeout 10 #connection timeout
nb_get_retry 3 #number of retries
delay_before_retry 3 #interval between retries
connect_port 3306 #health-check port
}
}
}
#####controller01
#Write the health-check script
mkdir -p /usr/local/MySQL/bin/
vi /usr/local/MySQL/bin/MySQL.sh
#content as follows
#!/bin/sh
pkill keepalived
#Make it executable
chmod +x /usr/local/MySQL/bin/MySQL.sh
#Start keepalived
/usr/local/keepalived/sbin/keepalived -D
systemctl enable keepalived.service //enable at boot
#####controller02#####
#Install keepalived-2.1.5
ip link set multicast on dev ztc3q6qjqu //enable multicast on the ZeroTier virtual NIC
wget http://www.keepalived.org/software/keepalived-2.1.5.tar.gz
tar -zxvf keepalived-2.1.5.tar.gz
cd keepalived-2.1.5
./configure --prefix=/usr/local/keepalived #install under /usr/local/keepalived
make && make install
mkdir -p /etc/keepalived/
vim /etc/keepalived/keepalived.conf
#Write the following configuration
! Configuration File for keepalived
global_defs {
router_id MySQL-ha
}
vrrp_instance VI_1 {
state BACKUP
interface ztc3q6qjqu
virtual_router_id 51
priority 90
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.100.200
}
}
virtual_server 192.168.100.200 3306 {
delay_loop 2
lb_algo wrr
lb_kind DR
persistence_timeout 60
protocol TCP
real_server 192.168.100.11 3306 {
weight 3
notify_down /usr/local/MySQL/bin/MySQL.sh
TCP_CHECK {
connect_timeout 10
nb_get_retry 3
delay_before_retry 3
connect_port 3306
}
}
}
#####controller02#####
#Write the health-check script
mkdir -p /usr/local/MySQL/bin/
vi /usr/local/MySQL/bin/MySQL.sh
#content as follows
#!/bin/sh
pkill keepalived
#Make it executable
chmod +x /usr/local/MySQL/bin/MySQL.sh
#Start keepalived
/usr/local/keepalived/sbin/keepalived -D
systemctl enable keepalived.service //enable at boot
2.6 Test logging in to MySQL via the VIP
mysql -h192.168.100.200 -ubackup -pbackup
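As a rough failover check (not part of the original steps): stop MariaDB on the node currently holding the VIP; notify_down runs MySQL.sh, which kills keepalived there, and the VIP should move to the other controller.
systemctl stop mariadb //on controller01, the current VIP holder
ip a | grep 192.168.100.200 //on controller02: the VIP should now appear here
mysql -h192.168.100.200 -ubackup -pbackup -e "select @@hostname;" //should now answer from controller02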
#Stop keepalived
systemctl stop keepalived
systemctl disable keepalived
Database high availability is set up successfully.
This section only tests whether building high availability over ZeroTier is feasible; the components later in this article all use Pacemaker for the VIP, and keepalived is abandoned.
3. RabbitMQ cluster (controller nodes)
#####All controller nodes######
yum install erlang rabbitmq-server python-memcached -y
systemctl enable rabbitmq-server.service
#####controller01#####
systemctl start rabbitmq-server.service
rabbitmqctl cluster_status
scp /var/lib/rabbitmq/.erlang.cookie controller02:/var/lib/rabbitmq/
######controller02######
chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie #fix the owner/group of the .erlang.cookie file on controller02
systemctl start rabbitmq-server #start the rabbitmq service on controller02
#Build the cluster: controller02 joins as a ram node
rabbitmqctl stop_app
rabbitmqctl join_cluster --ram rabbit@controller01
rabbitmqctl start_app
#Check the cluster status
rabbitmqctl cluster_status
#####controller01#####
# Create the account and set its password on any node; controller01 is used here as the example
rabbitmqctl add_user openstack 000000
rabbitmqctl set_user_tags openstack administrator
rabbitmqctl set_permissions -p "/" openstack ".*" ".*" ".*"
#Enable high availability (mirroring) for the message queues
rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all"}'
#View the message queue policies
rabbitmqctl list_policies
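When the OpenStack services are configured later, they can reach this cluster either through the haproxy port defined below (192.168.100.100:5673) or by listing both nodes directly. A typical transport_url for the direct form, assuming the openstack/000000 account created above, would look like:
transport_url = rabbit://openstack:000000@controller01:5672,openstack:000000@controller02:5672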
4. Memcached (controller nodes)
#####All controller nodes
#Install the memcached packages
yum install memcached python-memcached -y
#Configure the listen addresses
//controller01
sed -i 's|OPTIONS="-l 127.0.0.1,::1"|OPTIONS="-l 127.0.0.1,::1,controller01"|' /etc/sysconfig/memcached
sed -i 's|CACHESIZE="64"|CACHESIZE="1024"|' /etc/sysconfig/memcached
//controller02
sed -i 's|OPTIONS="-l 127.0.0.1,::1"|OPTIONS="-l 127.0.0.1,::1,controller02"|' /etc/sysconfig/memcached
sed -i 's|CACHESIZE="64"|CACHESIZE="1024"|' /etc/sysconfig/memcached
#Enable at boot (all controller nodes)
systemctl enable memcached.service
systemctl start memcached.service
systemctl status memcached.service
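A quick sanity check (not from the original article) that each node is listening on its own hostname as well as localhost:
netstat -lntup | grep memcached //port 11211 should be bound to 127.0.0.1, ::1 and the node's ZeroTier address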
5. Etcd cluster (controller nodes)
#####All controller nodes#####
yum install -y etcd
cp -a /etc/etcd/etcd.conf{,.bak} //back up the configuration file
#####controller01#####
cat > /etc/etcd/etcd.conf << EOF
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.100.10:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.100.10:2379,http://127.0.0.1:2379"
ETCD_NAME="controller01"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.100.10:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.100.10:2379"
ETCD_INITIAL_CLUSTER="controller01=http://192.168.100.10:2380,controller02=http://192.168.100.11:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
#####controller02#####
cat > /etc/etcd/etcd.conf << EOF
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.100.11:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.100.11:2379,http://127.0.0.1:2379"
ETCD_NAME="controller02"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.100.11:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.100.11:2379"
ETCD_INITIAL_CLUSTER="controller01=http://192.168.100.10:2380,controller02=http://192.168.100.11:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
#####controller01######
#Modify etcd.service
vim /usr/lib/systemd/system/etcd.service
#Change it to the following
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/bin/etcd \
--name=\"${ETCD_NAME}\" \
--data-dir=\"${ETCD_DATA_DIR}\" \
--listen-peer-urls=\"${ETCD_LISTEN_PEER_URLS}\" \
--listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\" \
--initial-advertise-peer-urls=\"${ETCD_INITIAL_ADVERTISE_PEER_URLS}\" \
--advertise-client-urls=\"${ETCD_ADVERTISE_CLIENT_URLS}\" \
--initial-cluster=\"${ETCD_INITIAL_CLUSTER}\" \
--initial-cluster-token=\"${ETCD_INITIAL_CLUSTER_TOKEN}\" \
--initial-cluster-state=\"${ETCD_INITIAL_CLUSTER_STATE}\""
Restart=on-failure
LimitNOFILE=65536
#Copy this unit file to controller02
scp -rp /usr/lib/systemd/system/etcd.service controller02:/usr/lib/systemd/system/
#####All controllers#####
#Enable at boot
systemctl enable etcd
systemctl restart etcd
systemctl status etcd
#Verify
etcdctl cluster-health
etcdctl member list
Both nodes report healthy and controller01 has become the leader.
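As an optional smoke test (assuming the v2 etcdctl API that the cluster-health command above already implies), write a key on one controller and read it back on the other:
etcdctl set /ha-test ok //on controller01
etcdctl get /ha-test //on controller02, should print: ok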
6. Use the open-source Pacemaker cluster stack as the cluster HA resource management software
#####All controllers#####
yum install pacemaker pcs corosync fence-agents resource-agents -y
# Start the pcsd service
systemctl enable pcsd
systemctl start pcsd
#Set the password of the cluster administrator hacluster
echo 000000 | passwd --stdin hacluster
# Authentication (controller01)
#Authenticate the nodes and form the cluster, using the password set in the previous step
pcs cluster auth controller01 controller02 -u hacluster -p 000000 --force
#Create and name the cluster
pcs cluster setup --force --name openstack-cluster-01 controller01 controller02
Start the pacemaker cluster
#####controller01#####
pcs cluster start --all
pcs cluster enable --all
#Useful commands
pcs cluster status //check the cluster status
pcs status corosync //corosync is the underlying mechanism that synchronizes state between nodes
corosync-cmapctl | grep members //list the members
pcs resource //list the resources
Set the high-availability properties
#####controller01#####
#Set sensible limits for the stored policy-engine inputs, errors and warnings
pcs property set pe-warn-series-max=1000 \
pe-input-series-max=1000 \
pe-error-series-max=1000
#cluster-recheck-interval defaults to 15min as the interval at which certain pacemaker actions occur; 5min or 3min is recommended
pcs property set cluster-recheck-interval=5min
pcs property set stonith-enabled=false
#Because of resource constraints only two controller nodes are used, so quorum cannot be reached and the quorum policy must be ignored
pcs property set no-quorum-policy=ignore
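To confirm the properties took effect (an optional check, not in the original write-up):
pcs property list //should show stonith-enabled: false, no-quorum-policy: ignore, cluster-recheck-interval, etc.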
Configure the VIP
#####controller01#####
pcs resource create vip ocf:heartbeat:IPaddr2 ip=192.168.100.100 cidr_netmask=24 op monitor interval=30s
#Check the cluster resources and the created VIP
pcs resource
ip a
High-availability management
Access any controller node over the web: https://192.168.100.10:2224, account/password (the password set when building the cluster): hacluster/000000. Although the cluster was set up on the command line, the web UI does not display it by default; add the cluster manually. In practice it is enough to add any single node of the already-built cluster, as follows.
7. Deploy HAProxy
#####All controllers#####
yum install haproxy -y
#Enable logging
mkdir /var/log/haproxy
chmod a+w /var/log/haproxy
#Edit the configuration file
vim /etc/rsyslog.conf
Uncomment the following lines (the leading numbers are line numbers in /etc/rsyslog.conf):
15 $ModLoad imudp
16 $UDPServerRun 514
19 $ModLoad imtcp
20 $InputTCPServerRun 514
Add at the end:
local0.=info -/var/log/haproxy/haproxy-info.log
local0.=err -/var/log/haproxy/haproxy-err.log
local0.notice;local0.!=err -/var/log/haproxy/haproxy-notice.log
#Restart rsyslog
systemctl restart rsyslog
Configure haproxy entries for all components (all controller nodes)
VIP: 192.168.100.100
#####All controller nodes#####
cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
vim /etc/haproxy/haproxy.cfg
global
log 127.0.0.1 local0
chroot /var/lib/haproxy
daemon
group haproxy
user haproxy
maxconn 4000
pidfile /var/run/haproxy.pid
stats socket /var/lib/haproxy/stats
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 4000
# haproxy stats page
listen stats
bind 0.0.0.0:1080
mode http
stats enable
stats uri /
stats realm OpenStack\ Haproxy
stats auth admin:admin
stats refresh 30s
stats show-node
stats show-legends
stats hide-version
# horizon service
listen dashboard_cluster
bind 192.168.100.100:80
balance source
option tcpka
option httpchk
option tcplog
server controller01 192.168.100.10:80 check inter 2000 rise 2 fall 5
server controller02 192.168.100.11:80 check inter 2000 rise 2 fall 5
#Provide an HA cluster access port for rabbitmq, used by the OpenStack services;
#if the OpenStack services connect to the rabbitmq cluster directly, this rabbitmq load-balancing entry can be omitted
listen rabbitmq_cluster
bind 192.168.100.100:5673
mode tcp
option tcpka
balance roundrobin
timeout client 3h
timeout server 3h
option clitcpka
server controller01 192.168.100.10:5672 check inter 10s rise 2 fall 5
server controller02 192.168.100.11:5672 check inter 10s rise 2 fall 5
# glance_api service
listen glance_api_cluster
bind 192.168.100.100:9292
balance source
option tcpka
option httpchk
option tcplog
timeout client 3h
timeout server 3h
server controller01 192.168.100.10:9292 check inter 2000 rise 2 fall 5
server controller02 192.168.100.11:9292 check inter 2000 rise 2 fall 5
# keystone_public_api service
listen keystone_public_cluster
bind 192.168.100.100:5000
balance source
option tcpka
option httpchk
option tcplog
server controller01 192.168.100.10:5000 check inter 2000 rise 2 fall 5
server controller02 192.168.100.11:5000 check inter 2000 rise 2 fall 5
listen nova_compute_api_cluster
bind 192.168.100.100:8774
balance source
option tcpka
option httpchk
option tcplog
server controller01 192.168.100.10:8774 check inter 2000 rise 2 fall 5
server controller02 192.168.100.11:8774 check inter 2000 rise 2 fall 5
listen nova_placement_cluster
bind 192.168.100.100:8778
balance source
option tcpka
option tcplog
server controller01 192.168.100.10:8778 check inter 2000 rise 2 fall 5
server controller02 192.168.100.11:8778 check inter 2000 rise 2 fall 5
listen nova_metadata_api_cluster
bind 192.168.100.100:8775
balance source
option tcpka
option tcplog
server controller01 192.168.100.10:8775 check inter 2000 rise 2 fall 5
server controller02 192.168.100.11:8775 check inter 2000 rise 2 fall 5
listen nova_vncproxy_cluster
bind 192.168.100.100:6080
balance source
option tcpka
option tcplog
server controller01 192.168.100.10:6080 check inter 2000 rise 2 fall 5
server controller02 192.168.100.11:6080 check inter 2000 rise 2 fall 5
listen neutron_api_cluster
bind 192.168.100.100:9696
balance source
option tcpka
option httpchk
option tcplog
server controller01 192.168.100.10:9696 check inter 2000 rise 2 fall 5
server controller02 192.168.100.11:9696 check inter 2000 rise 2 fall 5
listen cinder_api_cluster
bind 192.168.100.100:8776
balance source
option tcpka
option httpchk
option tcplog
server controller01 192.168.100.10:8776 check inter 2000 rise 2 fall 5
server controller02 192.168.100.11:8776 check inter 2000 rise 2 fall 5
# mariadb service;
#controller01 is set as master and controller02 as backup; a one-master, multiple-backup layout avoids data inconsistency;
#the official example checks port 9200 (heartbeat), but in testing, with the mariadb service down, the /usr/bin/clustercheck script could no longer detect the service while the xinetd-controlled port 9200 still answered, so haproxy kept forwarding requests to the node whose mariadb was down; for now the check is changed to port 3306
#listen galera_cluster
# bind 192.168.100.100:3306
# balance source
# mode tcp
# server controller01 192.168.100.10:3306 check inter 2000 rise 2 fall 5
# server controller02 192.168.100.11:3306 backup check inter 2000 rise 2 fall 5
#Copy the configuration to controller02
scp /etc/haproxy/haproxy.cfg controller02:/etc/haproxy/haproxy.cfg
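Before starting the service it is worth validating the syntax on both nodes (an optional check):
haproxy -c -f /etc/haproxy/haproxy.cfg //"Configuration file is valid" means the file parses cleanly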
Configure kernel parameters
#####All controllers#####
#net.ipv4.ip_nonlocal_bind = 1: whether binding to a non-local IP is allowed; this determines whether the haproxy instances can bind the VIP and fail it over
#net.ipv4.ip_forward: whether forwarding is allowed
echo 'net.ipv4.ip_nonlocal_bind = 1' >>/etc/sysctl.conf
echo "net.ipv4.ip_forward = 1" >>/etc/sysctl.conf
sysctl -p
Start the service
#####All controllers######
systemctl enable haproxy
systemctl restart haproxy
systemctl status haproxy
netstat -lntup | grep haproxy
You should see all the VIP ports in the LISTEN state.
Visit http://192.168.100.100:1080, username/password: admin/admin
rabbitmq is already installed, so it shows green; the other services are not installed yet and will show red at this point.
Configure pcs resources
#####controller01#####
#Add the lb-haproxy-clone resource
pcs resource create lb-haproxy systemd:haproxy clone
pcs resource
#####controller01#####
#Set the resource start order: vip first, then lb-haproxy-clone;
pcs constraint order start vip then lb-haproxy-clone kind=Optional
#The official recommendation is to run the vip on the node where haproxy is active, by binding the lb-haproxy-clone and vip resources together,
#so both resources are constrained to one node; after the constraint, from the resource point of view, pcs stops haproxy on the nodes that do not currently hold the vip
pcs constraint colocation add lb-haproxy-clone with vip
pcs resource
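The resulting constraints can be reviewed with (optional check):
pcs constraint show //lists the order and colocation constraints created above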
Check these resource settings through the Pacemaker high-availability web UI.
The high-availability base (pacemaker & haproxy) is now deployed.
#####controller01#####
#Add the havip entry to hosts
sed -i '$a 192.168.100.100 havip' /etc/hosts
scp /etc/hosts controller02:/etc/hosts
III. Deployment of the OpenStack Train Components
1. Keystone deployment
1.1 Configure the keystone database
#####Any controller node (e.g. controller01)#####
mysql -u root -p000000
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';
flush privileges;
exit
Check on controller02 whether the database has been replicated.
Replication succeeded.
#####All controller nodes#####
yum install openstack-keystone httpd mod_wsgi -y
yum install openstack-utils -y
yum install python-openstackclient -y
cp /etc/keystone/keystone.conf{,.bak}
egrep -v '^$|^#' /etc/keystone/keystone.conf.bak >/etc/keystone/keystone.conf
openstack-config --set /etc/keystone/keystone.conf cache backend oslo_cache.memcache_pool
openstack-config --set /etc/keystone/keystone.conf cache enabled true
openstack-config --set /etc/keystone/keystone.conf cache memcache_servers controller01:11211,controller02:11211
openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:KEYSTONE_DBPASS@havip/keystone
openstack-config --set /etc/keystone/keystone.conf token provider fernet
1.2 Sync the database
#####controller01#####
#Sync the database
su -s /bin/sh -c "keystone-manage db_sync" keystone
#Check whether the sync succeeded
mysql -hhavip -ukeystone -pKEYSTONE_DBPASS keystone -e "show tables";
#####controller01#####
#Generate the key repositories and directories under /etc/keystone/
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
#Copy the initialized keys to the other controller node
scp -rp /etc/keystone/fernet-keys /etc/keystone/credential-keys controller02:/etc/keystone/
#####controller02#####
#After copying, fix the ownership of the key directories on controller02
chown -R keystone:keystone /etc/keystone/credential-keys/
chown -R keystone:keystone /etc/keystone/fernet-keys/
1.3 Bootstrap the Identity service
#####controller01#####
keystone-manage bootstrap --bootstrap-password admin \
--bootstrap-admin-url http://havip:5000/v3/ \
--bootstrap-internal-url http://havip:5000/v3/ \
--bootstrap-public-url http://havip:5000/v3/ \
--bootstrap-region-id RegionOne
1.4 Configure the HTTP server
#####controller01#####
cp /etc/httpd/conf/httpd.conf{,.bak}
sed -i "s/#ServerName www.example.com:80/ServerName ${HOSTNAME}/" /etc/httpd/conf/httpd.conf
sed -i "s/Listen\ 80/Listen\ 192.168.100.10:80/g" /etc/httpd/conf/httpd.conf
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
sed -i "s/Listen\ 5000/Listen\ 192.168.100.10:5000/g" /etc/httpd/conf.d/wsgi-keystone.conf
sed -i "s#*:5000#192.168.100.10:5000#g" /etc/httpd/conf.d/wsgi-keystone.conf
systemctl enable httpd.service
systemctl restart httpd.service
systemctl status httpd.service
#####controller02#####
cp /etc/httpd/conf/httpd.conf{,.bak}
sed -i "s/#ServerName www.example.com:80/ServerName ${HOSTNAME}/" /etc/httpd/conf/httpd.conf
sed -i "s/Listen\ 80/Listen\ 192.168.100.11:80/g" /etc/httpd/conf/httpd.conf
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
sed -i "s/Listen\ 5000/Listen\ 192.168.100.11:5000/g" /etc/httpd/conf.d/wsgi-keystone.conf
sed -i "s#*:5000#192.168.100.11:5000#g" /etc/httpd/conf.d/wsgi-keystone.conf
systemctl enable httpd.service
systemctl restart httpd.service
systemctl status httpd.service
1.5 Create the environment variable script
#####controller01#####
touch ~/admin-openrc.sh
cat >> ~/admin-openrc.sh<< EOF
#admin-openrc
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://havip:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
source ~/admin-openrc.sh
scp -rp ~/admin-openrc.sh controller02:~/
#Verify
openstack token issue
1.6 Create a domain, projects, users and roles
openstack project create --domain default --description "Service Project" service //create the service project in the default domain
openstack project create --domain default --description "Demo Project" myproject //create the myproject project in the default domain (for non-admin use)
openstack user create --domain default --password-prompt myuser //create the myuser user; a password must be set, e.g. myuser
openstack role create myrole //create the myrole role
openstack role add --project myproject --user myuser myrole
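As an optional verification (following the usual install-guide pattern), request a token as the new user; the password is whatever was entered at the prompt above:
unset OS_AUTH_URL OS_PASSWORD //clear the admin credentials first
openstack --os-auth-url http://havip:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name myproject --os-username myuser token issue //prompts for myuser's password and should return a token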