Object Storage: Using RadosGW
3. Object storage access models compared:
Amazon S3: provides user, bucket and object, representing users, storage buckets and objects. A bucket belongs to a user; access permissions on different bucket namespaces can be set per user, and different users may be granted access to the same bucket.

OpenStack Swift: provides user, container and object, corresponding to users, storage buckets and objects. It additionally gives user a parent component, account, which represents a project or tenant; one account can contain one or more users, which share the same set of containers, and the account provides the namespace for those containers.

RadosGW: provides user, subuser, bucket and object. Its user corresponds to the S3 user, while subuser corresponds to the Swift user. Neither user nor subuser provides a namespace for buckets, so buckets belonging to different users cannot share the same name. Since the Jewel release, however, RadosGW has had an optional tenant concept that provides a namespace for users and buckets. RadosGW uses ACLs to grant different users different levels of access, for example:
Read: read permission
Write: write permission
Readwrite: read and write permission
full-control: full control

4. Deploying the RadosGW service

cephadmin@ceph-deploy:~$ ceph osd pool ls
device_health_metrics
.rgw.root              # contains realm information, such as zone and zonegroup
default.rgw.log        # stores log information, used to record various kinds of logs
default.rgw.control    # system control pool; notifies other RGW instances to refresh their caches when data is updated
default.rgw.meta       # metadata pool; stores different rados objects in separate namespaces, including users.uid (user uid and bucket mapping), users.keys (user keys), users.email (user email addresses), users.swift (subusers) and root (buckets)
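These pools are created automatically the first time an RGW instance starts. The deployment step itself is not shown above; as a minimal sketch, assuming ceph-deploy is used from the deploy node and the radosgw package is available on the target node:

apt install radosgw                                        # on the RGW node (here ceph-mon01-mgr01)
ceph-deploy --overwrite-conf rgw create ceph-mon01-mgr01   # from the deploy node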
root@ceph-mon01-mgr01:~# ps -ef|grep rados
ceph  852  1  0 21:46 ?  00:00:06 /usr/bin/radosgw -f --cluster ceph --name client.rgw.ceph-mon01-mgr01 --setuser ceph --setgroup ceph
cephadmin@ceph-deploy:~$ curl http://192.168.192.172:7480    # 7480 is the default civetweb port of radosgw
Changing to a custom port
root@ceph-mon01-mgr01:~# cat /etc/ceph/ceph.conf
[global]
fsid = d2cca32b-57dc-409f-9605-b19a373ce759
public_network = 192.168.192.0/24
cluster_network = 192.168.227.0/24
mon_initial_members = ceph-mon01-mgr01
mon_host = 192.168.192.172
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

[client.rgw.ceph-mon01-mgr01]    # append a per-node section like this at the end of the file
rgw_host = ceph-mon01-mgr01
rgw_frontends = civetweb port=8000
systemctl restart ceph-radosgw@rgw.ceph-mon01-mgr01.service
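To confirm that the new port took effect, probe the service again (assuming the same node IP as above):

curl http://192.168.192.172:8000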
[root@localhost haproxy]# cat haproxy.cfg    # disable the firewall and SELinux on the HAProxy host first
listen ceph-rgw
    bind 192.168.192.129:80
    mode tcp
    server rgw1 192.168.192.172:8000 check inter 3s fall 3 rise 5
    server rgw2 192.168.192.173:8000 check inter 3s fall 3 rise 5
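A quick way to activate and test this, assuming 192.168.192.129 is the HAProxy VIP and rgw.chuan.net is the name used later for the certificate and the s3cmd endpoint (name resolution via /etc/hosts is only an assumption here; real DNS works just as well):

haproxy -f /etc/haproxy/haproxy.cfg -c               # syntax check
systemctl restart haproxy
echo "192.168.192.129 rgw.chuan.net" >> /etc/hosts   # on the client, point the RGW hostname at the VIP
curl http://rgw.chuan.net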
root@ceph-mon01-mgr01:~# radosgw-admin zone get --rgw-zone=default
Enabling SSL
Generate a self-signed certificate
cephadmin@ceph-mon01-mgr01:/etc/ceph/certs$ sudo openssl genrsa -out civetweb.key 2048
cephadmin@ceph-mon01-mgr01:/etc/ceph/certs$ touch /home/cephadmin/.rnd
cephadmin@ceph-mon01-mgr01:/etc/ceph/certs$ sudo openssl req -new -x509 -key civetweb.key -out civetweb.crt -subj "/CN=rgw.chuan.net"
root@ceph-mon01-mgr01:/etc/ceph/certs# cat civetweb.key civetweb.crt > civetweb.pem
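Optionally verify the certificate and copy the certs directory to the second RGW node (assuming ceph-node01 is the other RGW host, as in the HAProxy backend above):

openssl x509 -in civetweb.crt -noout -subject -dates    # check CN and validity
scp -r /etc/ceph/certs root@ceph-node01:/etc/ceph/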
SSL configuration
root@ceph-mon01-mgr01:/etc/ceph# cat ceph.conf    # add the following section and the pem file on both RGW nodes
[client.rgw.ceph-mon01-mgr01]    # per-node section at the end of the file; this frontends line replaces the earlier "civetweb port=8000" one
rgw_host = ceph-mon01-mgr01
rgw_frontends = "civetweb port=8000+8443s ssl_certificate=/etc/ceph/certs/civetweb.pem"

root@ceph-mon01-mgr01:/etc/ceph# systemctl restart ceph-radosgw@rgw.ceph-mon01-mgr01.service
root@ceph-node01:/etc/ceph# systemctl restart ceph-radosgw@rgw.ceph-node01.service    # restart the corresponding service on each RGW node
listen ceph-rgw
    bind 192.168.192.129:80
    mode tcp
    server rgw1 192.168.192.172:8000 check inter 3s fall 3 rise 5
    server rgw2 192.168.192.173:8000 check inter 3s fall 3 rise 5

listen ceph-rgws
    bind 192.168.192.129:8443
    mode tcp
    server rgw1 192.168.192.172:8443 check inter 3s fall 3 rise 5
    server rgw2 192.168.192.173:8443 check inter 3s fall 3 rise 5
root@ceph-node01:/etc/ceph# netstat -anp|grep 8443
tcp 0 0 0.0.0.0:8443 0.0.0.0:* LISTEN 7304/radosgw
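The HTTPS endpoint can be checked with curl; -k skips certificate verification because the certificate is self-signed:

curl -k https://192.168.192.172:8443
curl -k https://rgw.chuan.net:8443    # through the HAProxy 8443 listener, assuming the hosts entry added earlier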
Tuning the configuration
root@ceph-node01:/etc/ceph# mkdir /var/log/radosgw
root@ceph-node01:/etc/ceph# chown ceph.ceph /var/log/radosgw -R

[client.rgw.ceph-node01]    # per-node section appended at the end of the file
rgw_host = ceph-node01
rgw_frontends = "civetweb port=8000+8443s ssl_certificate=/etc/ceph/certs/civetweb.pem request_timeout_ms=30000 error_log_file=/var/log/radosgw/civetweb.error.log access_log_file=/var/log/radosgw/civetweb.access.log num_threads=100"

root@ceph-node01:/etc/ceph# systemctl restart ceph-radosgw@rgw.ceph-node01.service
Viewing the access log
root@ceph-node01:/var/log/radosgw# cat civetweb.access.log
root@ceph-mon01-mgr01:/var/log/radosgw# tail -f civetweb.access.log
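To keep these logs from growing without bound, a minimal logrotate sketch (the file path and retention values are assumptions; copytruncate avoids having to restart radosgw on rotation):

cat > /etc/logrotate.d/radosgw <<'EOF'
/var/log/radosgw/*.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    copytruncate
}
EOF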
root@ceph-mon01-mgr01:/var/log/radosgw# radosgw-admin user create --uid="user1" --display-name="chuan user1"
"keys": [ { "user": "”user1“", "access_key": "OBR7FTPC8OA0R4CBA402", "secret_key": "9ZxeGE0U6WRtvDTmkoTB0t2UJyNfPtZlN9J5SrsB" } ],
Installing the s3cmd client
s3cmd is a command-line client that accesses the Ceph RGW to create buckets and to upload, download and manage data in the object store.
root@ceph-deploy:~# apt install s3cmd
root@ceph-deploy:~# telnet rgw.chuan.net 8443
root@ceph-deploy:~# s3cmd --configure

Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key: OBR7FTPC8OA0R4CBA402                          # enter the access key
Secret Key: 9ZxeGE0U6WRtvDTmkoTB0t2UJyNfPtZlN9J5SrsB      # enter the secret key
Default Region [US]:                                      # press Enter

Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]: rgw.chuan.net:8000        # enter the RGW endpoint

Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: rgw.chuan.net:8000/%(bucket)    # enter

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:                                      # press Enter
Path to GPG program [/usr/bin/gpg]:                       # press Enter

When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]: No                              # HTTPS is not used here

On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name:

New settings:
  Access Key: OBR7FTPC8OA0R4CBA402
  Secret Key: 9ZxeGE0U6WRtvDTmkoTB0t2UJyNfPtZlN9J5SrsB
  Default Region: US
  S3 Endpoint: rgw.chuan.net:8000
  DNS-style bucket+hostname:port template for accessing a bucket: rgw.chuan.net:8000/%(bucket)
  Encryption password:
  Path to GPG program: /usr/bin/gpg
  Use HTTPS protocol: False
  HTTP Proxy server name:
  HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] Y            # test the connection
Please wait, attempting to list all buckets...
ERROR: Test failed: [Errno 111] Connection refused

Retry configuration? [Y/n] y
Save settings? [y/N] Y
Configuration saved to '/root/.s3cfg'
root@ceph-deploy:~# s3cmd ls    # no errors, so the environment is working
root@ceph-deploy:~# cat .s3cfg
[default]
access_key = OBR7FTPC8OA0R4CBA402
access_token =
add_encoding_exts =
add_headers =
bucket_location = US
ca_certs_file =
cache_file =
check_ssl_certificate = True
check_ssl_hostname = True
cloudfront_host = cloudfront.amazonaws.com
default_mime_type = binary/octet-stream
delay_updates = False
delete_after = False
delete_after_fetch = False
delete_removed = False
dry_run = False
enable_multipart = True
encoding = UTF-8
encrypt = False
expiry_date =
expiry_days =
expiry_prefix =
follow_symlinks = False
force = False
get_continue = False
gpg_command = /usr/bin/gpg
gpg_decrypt = %(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_encrypt = %(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_passphrase =
guess_mime_type = True
host_base = rgw.chuan.net:8000
host_bucket = rgw.chuan.net:8000/%(bucket)
human_readable_sizes = False
invalidate_default_index_on_cf = False
invalidate_default_index_root_on_cf = True
invalidate_on_cf = False
kms_key =
limit = -1
limitrate = 0
list_md5 = False
log_target_prefix =
long_listing = False
max_delete = -1
mime_type =
multipart_chunk_size_mb = 15
multipart_max_chunks = 10000
preserve_attrs = True
progress_meter = True
proxy_host =
proxy_port = 0
put_continue = False
recursive = False
recv_chunk = 65536
reduced_redundancy = False
requester_pays = False
restore_days = 1
restore_priority = Standard
secret_key = 9ZxeGE0U6WRtvDTmkoTB0t2UJyNfPtZlN9J5SrsB
send_chunk = 65536
server_side_encryption = False
signature_v2 = False
signurl_use_https = False
simpledb_host = sdb.amazonaws.com
skip_existing = False
socket_timeout = 300
stats = False
stop_on_error = False
storage_class =
urlencoding_mode = normal
use_http_expect = False
use_https = False
use_mime_magic = True
verbosity = WARNING
website_endpoint = http://%(bucket)s.s3-website-%(location)s.amazonaws.com/
website_error =
website_index = index.html
Creating a bucket to verify permissions
A bucket is the container used to store objects; before uploading an object of any type, create a bucket first.
root@ceph-deploy:~# s3cmd la
root@ceph-deploy:~# s3cmd mb s3://chuan
Bucket 's3://chuan/' created
root@ceph-deploy:~# s3cmd put /root/docker-binary-install.tar.gz s3://chuan/tar/    # upload data
upload: '/root/docker-binary-install.tar.gz' -> 's3://chuan/tar/docker-binary-install.tar.gz'  [part 1 of 5, 15MB] [1 of 1]
 15728640 of 15728640   100% in    2s     6.39 MB/s  done
upload: '/root/docker-binary-install.tar.gz' -> 's3://chuan/tar/docker-binary-install.tar.gz'  [part 2 of 5, 15MB] [1 of 1]
 15728640 of 15728640   100% in    0s    60.23 MB/s  done
upload: '/root/docker-binary-install.tar.gz' -> 's3://chuan/tar/docker-binary-install.tar.gz'  [part 3 of 5, 15MB] [1 of 1]
 15728640 of 15728640   100% in    0s    67.17 MB/s  done
upload: '/root/docker-binary-install.tar.gz' -> 's3://chuan/tar/docker-binary-install.tar.gz'  [part 4 of 5, 15MB] [1 of 1]
 15728640 of 15728640   100% in    0s    57.15 MB/s  done
upload: '/root/docker-binary-install.tar.gz' -> 's3://chuan/tar/docker-binary-install.tar.gz'  [part 5 of 5, 14MB] [1 of 1]
 15241880 of 15241880   100% in    0s    69.33 MB/s  done
root@ceph-deploy:~# ceph osd pool ls
device_health_metrics
.rgw.root
default.rgw.log
default.rgw.control
default.rgw.meta
default.rgw.buckets.index
default.rgw.buckets.non-ec
default.rgw.buckets.data
default.rgw.buckets.index     # stores the bucket-to-object index information
default.rgw.buckets.data      # stores the object data
default.rgw.buckets.non-ec    # pool for additional data information
root@ceph-deploy:~# s3cmd ls s3://chuan/tar/    # list the uploaded objects
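The ACLs mentioned earlier can also be exercised with s3cmd, for example by making the uploaded object public and fetching it anonymously through the HAProxy front end (the download path /tmp/test.tar.gz is just an example):

s3cmd setacl s3://chuan/tar/docker-binary-install.tar.gz --acl-public
curl -o /tmp/test.tar.gz http://rgw.chuan.net/chuan/tar/docker-binary-install.tar.gz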
root@ceph-deploy:~# ceph df
POOL                        ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
.rgw.root                    4   32  1.3 KiB        4   48 KiB      0    142 GiB
default.rgw.log              5   32  3.6 KiB      209  408 KiB      0    142 GiB
default.rgw.control          6   32      0 B        8      0 B      0    142 GiB
default.rgw.meta             7  128    930 B        5   48 KiB      0    142 GiB
default.rgw.buckets.index    8  128      0 B       11      0 B      0    142 GiB
default.rgw.buckets.non-ec   9   32      0 B        0      0 B      0    142 GiB
default.rgw.buckets.data    10   32   75 MiB       21  224 MiB   0.05    142 GiB
root@ceph-deploy:/opt# s3cmd get s3://chuan/tar/docker-binary-install.tar.gz    # download
s3cmd rm s3://chuan/tar/docker-binary-install.tar.gz    # delete the object
root@ceph-deploy:~# ceph pg ls-by-pool default.rgw.buckets.data|awk '{print $1,$2,$15}'
PG OBJECTS ACTING
10.0 1 [5,8,0]p5
10.1 1 [8,3,0]p8
10.2 0 [1,4,8]p1
10.3 2 [7,2,4]p7
10.4 1 [8,0,3]p8
10.5 1 [2,7,5]p2
10.6 2 [5,8,2]p5
10.7 2 [3,1,6]p3
10.8 3 [0,7,5]p0
10.9 1 [3,2,8]p3
10.a 1 [4,7,2]p4
10.b 1 [5,1,8]p5
10.c 0 [4,8,0]p4
10.d 1 [4,0,7]p4
10.e 1 [4,2,6]p4
10.f 2 [6,3,1]p6
10.10 1 [6,4,0]p6
10.11 0 [7,0,3]p7
10.12 0 [6,5,0]p6
10.13 5 [1,7,3]p1
10.14 1 [1,7,3]p1
10.15 0 [0,5,7]p0
10.16 2 [3,1,6]p3
10.17 2 [3,7,2]p3
10.18 2 [2,6,4]p2
10.19 0 [1,4,7]p1
10.1a 2 [3,8,2]p3
10.1b 1 [2,4,8]p2
10.1c 1 [5,1,8]p5
10.1d 0 [4,8,1]p4
10.1e 4 [6,0,4]p6
10.1f 0 [3,1,8]p3
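To see where a specific rados object of this pool lands, the object names can be listed with rados and then mapped to a PG and its OSDs (the object name below is a placeholder; use one returned by rados ls):

rados ls -p default.rgw.buckets.data | head -3
ceph osd map default.rgw.buckets.data <object-name>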
root@ceph-deploy:~# ceph osd pool get default.rgw.buckets.data size
size: 3
root@ceph-deploy:~# ceph osd pool get default.rgw.buckets.data crush_rule
crush_rule: replicated_rule
root@ceph-deploy:~# ceph osd pool get default.rgw.buckets.data pg_num
pg_num: 32
root@ceph-deploy:~# ceph osd pool get default.rgw.buckets.data pgp_num
pgp_num: 32
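If the data pool needs more placement groups later, both values can be raised (64 below is only an example; when the pg_autoscaler is enabled it may adjust these automatically):

ceph osd pool set default.rgw.buckets.data pg_num 64
ceph osd pool set default.rgw.buckets.data pgp_num 64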
s3cmd --recursive ls s3://chuan                   # list the bucket contents recursively
s3cmd sync --delete-removed ./ s3://chuan --force # sync the local directory to the bucket, deleting remote objects that no longer exist locally
s3cmd rb s3://chuan                               # remove the bucket
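Finally, the test account itself can be removed; --purge-data also deletes any data still owned by the user (a destructive step, shown here only as a cleanup sketch):

radosgw-admin user rm --uid=user1 --purge-data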