[Big Data Permission Management] Part 1: Kerberos
1. Kerberos Concepts
Kerberos is built around a Key Distribution Center (KDC), which manages user identity information and performs authentication.
2. Installation
Pick one host in the cluster (hadoop102) as the Kerberos server and install the KDC on it; every host also needs the Kerberos client.
Install the server package on hadoop102:
yum install -y krb5-server
Install the client packages on all machines:
yum install -y krb5-workstation krb5-libs
Edit the server configuration file:
vim /var/kerberos/krb5kdc/kdc.conf
[kdcdefaults]
 kdc_ports = 88
 kdc_tcp_ports = 88

[realms]
 EXAMPLE.COM = {
  #master_key_type = aes256-cts
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
 }
On all client hosts, edit the client configuration file:
vim /etc/krb5.conf
# Configuration snippets may be placed in this directory as well
includedir /etc/krb5.conf.d/

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 rdns = false
 pkinit_anchors = FILE:/etc/pki/tls/certs/ca-bundle.crt
 default_realm = EXAMPLE.COM
 #default_ccache_name = KEYRING:persistent:%{uid}

[realms]
 EXAMPLE.COM = {
  kdc = hadoop102
  admin_server = hadoop102
 }

[domain_realm]
# .example.com = EXAMPLE.COM
# example.com = EXAMPLE.COM
Initialize the KDC database (you will be prompted to set a master key password):
kdb5_util create -s
Edit the server ACL file (note the absolute path):
vim /var/kerberos/krb5kdc/kadm5.acl
*/admin@EXAMPLE.COM *
This grants full administrative privileges to every principal whose instance is admin (e.g. admin/admin).
Start the KDC service:
systemctl start krb5kdc
systemctl enable krb5kdc
Start kadmin, the administrative service that is the access point to the KDC database:
systemctl start kadmin
systemctl enable kadmin
On the server, create the administrator principal and enter a password when prompted:
kadmin.local -q "addprinc admin/admin"
3. Operating Kerberos
1) Local login requires no authentication:
kadmin.local
Remote login requires principal authentication:
kadmin
This reports an error:
Authenticating as principal root/admin@EXAMPLE.COM with password.
kadmin: Client 'root/admin@EXAMPLE.COM' not found in Kerberos database while initializing kadmin interface
The root/admin principal does not exist yet; only admin/admin was created.
Create one:
kadmin.local -q "addprinc root/admin"
Remote login now works.
2) Creating a Kerberos principal
After logging into the database, run:
kadmin.local: addprinc test
This creates the principal test.
List all principals:
kadmin.local: list_principals
3) Authentication
1. Password authentication. Run:
kinit test
Enter the password when prompted.
View the ticket cache:
klist
2. Keytab authentication
Export the keys of principal test to a keytab file at /root/test.keytab:
kadmin.local -q "xst -norandkey -k /root/test.keytab test@EXAMPLE.COM"
Authenticate using the keytab:
kinit -kt /root/test.keytab test
View the ticket cache:
klist
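In scripts it is handy to renew the ticket only when the cache no longer holds a valid one. A minimal sketch, reusing the keytab and principal from this example (klist -s exits non-zero when there is no valid ticket):

# Renew from the keytab only if the current ticket cache is invalid or empty.
if ! klist -s; then
    kinit -kt /root/test.keytab test
fi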
4. Creating Hadoop System Users
To run Hadoop with Kerberos enabled, each service needs its own dedicated user, and each service must be started as that user. Create the following group and users on all nodes.
Add the group on all nodes:
groupadd hadoop
On each node, add three users and put them in the hadoop group:
useradd hdfs -g hadoop
echo hdfs | passwd --stdin hdfs
useradd yarn -g hadoop
echo yarn | passwd --stdin yarn
useradd mapred -g hadoop
echo mapred | passwd --stdin mapred
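The same can be done from one host in a loop. A sketch, assuming passwordless root SSH to each node (the host list matches this guide):

for host in hadoop102 hadoop103 hadoop104; do
  ssh "$host" '
    groupadd -f hadoop
    for u in hdfs yarn mapred; do
      id "$u" &>/dev/null || useradd "$u" -g hadoop
      echo "$u" | passwd --stdin "$u"
    done
  '
done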
5. Adding Kerberos Principals for Hadoop Components
| Service | Host | Principal |
| --- | --- | --- |
| NameNode | hadoop102 | nn/hadoop102 |
| DataNode | hadoop102 | dn/hadoop102 |
| DataNode | hadoop103 | dn/hadoop103 |
| DataNode | hadoop104 | dn/hadoop104 |
| Secondary NameNode | hadoop104 | sn/hadoop104 |
| ResourceManager | hadoop103 | rm/hadoop103 |
| NodeManager | hadoop102 | nm/hadoop102 |
| NodeManager | hadoop103 | nm/hadoop103 |
| NodeManager | hadoop104 | nm/hadoop104 |
| JobHistory Server | hadoop102 | jhs/hadoop102 |
| Web UI | hadoop102 | HTTP/hadoop102 |
| Web UI | hadoop103 | HTTP/hadoop103 |
| Web UI | hadoop104 | HTTP/hadoop104 |
1. Prepare the keytab path on the server
mkdir /etc/security/keytab/
chown -R root:hadoop /etc/security/keytab/
chmod 770 /etc/security/keytab/
Authenticate as the administrator principal:
kinit admin/admin
Log into the database client:
kadmin
Create a principal and export its keys:
kadmin: addprinc -randkey test/test
kadmin: xst -k /etc/security/keytab/test.keytab test/test
What the export command means:
xst -k /etc/security/keytab/test.keytab test/test writes the principal's keys into a keytab file.
- xst: write the principal's keys into a keytab file
- -k /etc/security/keytab/test.keytab: the keytab file path and name
- test/test: the principal
All of the above can be condensed into:
[root@hadoop102 ~]# kadmin -padmin/admin -wadmin -q"addprinc -randkey test/test"
[root@hadoop102 ~]# kadmin -padmin/admin -wadmin -q"xst -k /etc/security/keytab/test.keytab test/test"
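To confirm the keys actually landed in the file, the keytab can be inspected with klist's keytab mode (the path is the one used above):

klist -kt /etc/security/keytab/test.keytab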
2. Create the keytab directory on all nodes
[root@hadoop102 ~]# mkdir /etc/security/keytab/
[root@hadoop102 ~]# chown -R root:hadoop /etc/security/keytab/
[root@hadoop102 ~]# chmod 770 /etc/security/keytab/
[root@hadoop103 ~]# mkdir /etc/security/keytab/
[root@hadoop103 ~]# chown -R root:hadoop /etc/security/keytab/
[root@hadoop103 ~]# chmod 770 /etc/security/keytab/
[root@hadoop104 ~]# mkdir /etc/security/keytab/
[root@hadoop104 ~]# chown -R root:hadoop /etc/security/keytab/
[root@hadoop104 ~]# chmod 770 /etc/security/keytab/
3. Create the principals and keytabs node by node, following the table above
Run the following on hadoop102:
[root@hadoop102 ~]# kadmin -padmin/admin -wadmin -q"addprinc -randkey nn/hadoop102"
[root@hadoop102 ~]# kadmin -padmin/admin -wadmin -q"xst -k /etc/security/keytab/nn.service.keytab nn/hadoop102"
[root@hadoop102 ~]# kadmin -padmin/admin -wadmin -q"addprinc -randkey dn/hadoop102"
[root@hadoop102 ~]# kadmin -padmin/admin -wadmin -q"xst -k /etc/security/keytab/dn.service.keytab dn/hadoop102"
[root@hadoop102 ~]# kadmin -padmin/admin -wadmin -q"addprinc -randkey nm/hadoop102"
[root@hadoop102 ~]# kadmin -padmin/admin -wadmin -q"xst -k /etc/security/keytab/nm.service.keytab nm/hadoop102"
[root@hadoop102 ~]# kadmin -padmin/admin -wadmin -q"addprinc -randkey jhs/hadoop102"
[root@hadoop102 ~]# kadmin -padmin/admin -wadmin -q"xst -k /etc/security/keytab/jhs.service.keytab jhs/hadoop102"
[root@hadoop102 ~]# kadmin -padmin/admin -wadmin -q"addprinc -randkey HTTP/hadoop102"
[root@hadoop102 ~]# kadmin -padmin/admin -wadmin -q"xst -k /etc/security/keytab/spnego.service.keytab HTTP/hadoop102"
Run the following on hadoop103:
[root@hadoop103 ~]# kadmin -padmin/admin -wadmin -q"addprinc -randkey rm/hadoop103"
[root@hadoop103 ~]# kadmin -padmin/admin -wadmin -q"xst -k /etc/security/keytab/rm.service.keytab rm/hadoop103"
[root@hadoop103 ~]# kadmin -padmin/admin -wadmin -q"addprinc -randkey dn/hadoop103"
[root@hadoop103 ~]# kadmin -padmin/admin -wadmin -q"xst -k /etc/security/keytab/dn.service.keytab dn/hadoop103"
[root@hadoop103 ~]# kadmin -padmin/admin -wadmin -q"addprinc -randkey nm/hadoop103"
[root@hadoop103 ~]# kadmin -padmin/admin -wadmin -q"xst -k /etc/security/keytab/nm.service.keytab nm/hadoop103"
[root@hadoop103 ~]# kadmin -padmin/admin -wadmin -q"addprinc -randkey HTTP/hadoop103"
[root@hadoop103 ~]# kadmin -padmin/admin -wadmin -q"xst -k /etc/security/keytab/spnego.service.keytab HTTP/hadoop103"
Run the following on hadoop104:
[root@hadoop104 ~]# kadmin -padmin/admin -wadmin -q"addprinc -randkey dn/hadoop104"
[root@hadoop104 ~]# kadmin -padmin/admin -wadmin -q"xst -k /etc/security/keytab/dn.service.keytab dn/hadoop104"
[root@hadoop104 ~]# kadmin -padmin/admin -wadmin -q"addprinc -randkey sn/hadoop104"
[root@hadoop104 ~]# kadmin -padmin/admin -wadmin -q"xst -k /etc/security/keytab/sn.service.keytab sn/hadoop104"
[root@hadoop104 ~]# kadmin -padmin/admin -wadmin -q"addprinc -randkey nm/hadoop104"
[root@hadoop104 ~]# kadmin -padmin/admin -wadmin -q"xst -k /etc/security/keytab/nm.service.keytab nm/hadoop104"
[root@hadoop104 ~]# kadmin -padmin/admin -wadmin -q"addprinc -randkey HTTP/hadoop104"
[root@hadoop104 ~]# kadmin -padmin/admin -wadmin -q"xst -k /etc/security/keytab/spnego.service.keytab HTTP/hadoop104"
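The pattern is mechanical, so it can also be written as a loop. A sketch for hadoop102; for hadoop103/104 adjust the service:keytab pair list according to the table above:

host=hadoop102
for pair in nn:nn dn:dn nm:nm jhs:jhs HTTP:spnego; do
  svc=${pair%%:*}; kt=${pair##*:}
  kadmin -padmin/admin -wadmin -q "addprinc -randkey ${svc}/${host}"
  kadmin -padmin/admin -wadmin -q "xst -k /etc/security/keytab/${kt}.service.keytab ${svc}/${host}"
done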
Adjust access permissions on all nodes:
[root@hadoop102 ~]# chown -R root:hadoop /etc/security/keytab/
[root@hadoop102 ~]# chmod 660 /etc/security/keytab/*
[root@hadoop103 ~]# chown -R root:hadoop /etc/security/keytab/
[root@hadoop103 ~]# chmod 660 /etc/security/keytab/*
[root@hadoop104 ~]# chown -R root:hadoop /etc/security/keytab/
[root@hadoop104 ~]# chmod 660 /etc/security/keytab/*
4. Modify the Hadoop configuration files and distribute them
core-site.xml
<property>
  <name>hadoop.security.auth_to_local.mechanism</name>
  <value>MIT</value>
</property>
<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
    RULE:[2:$1/$2@$0]([ndj]n\/.*@EXAMPLE\.COM)s/.*/hdfs/
    RULE:[2:$1/$2@$0]([rn]m\/.*@EXAMPLE\.COM)s/.*/yarn/
    RULE:[2:$1/$2@$0](jhs\/.*@EXAMPLE\.COM)s/.*/mapred/
    DEFAULT
  </value>
</property>
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>
<property>
  <name>hadoop.rpc.protection</name>
  <value>authentication</value>
</property>
hdfs-site.xml
<property>
  <name>dfs.block.access.token.enable</name>
  <value>true</value>
</property>
<property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>nn/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>dfs.namenode.keytab.file</name>
  <value>/etc/security/keytab/nn.service.keytab</value>
</property>
<property>
  <name>dfs.secondary.namenode.keytab.file</name>
  <value>/etc/security/keytab/sn.service.keytab</value>
</property>
<property>
  <name>dfs.secondary.namenode.kerberos.principal</name>
  <value>sn/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>dfs.namenode.kerberos.internal.spnego.principal</name>
  <value>HTTP/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>dfs.web.authentication.kerberos.principal</name>
  <value>HTTP/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>dfs.secondary.namenode.kerberos.internal.spnego.principal</name>
  <value>HTTP/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>dfs.web.authentication.kerberos.keytab</name>
  <value>/etc/security/keytab/spnego.service.keytab</value>
</property>
<property>
  <name>dfs.datanode.kerberos.principal</name>
  <value>dn/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>dfs.datanode.keytab.file</name>
  <value>/etc/security/keytab/dn.service.keytab</value>
</property>
<property>
  <name>dfs.http.policy</name>
  <value>HTTPS_ONLY</value>
</property>
<property>
  <name>dfs.data.transfer.protection</name>
  <value>authentication</value>
</property>
yarn-site.xml
<property>
  <name>yarn.resourcemanager.principal</name>
  <value>rm/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>yarn.resourcemanager.keytab</name>
  <value>/etc/security/keytab/rm.service.keytab</value>
</property>
<property>
  <name>yarn.nodemanager.principal</name>
  <value>nm/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>yarn.nodemanager.keytab</name>
  <value>/etc/security/keytab/nm.service.keytab</value>
</property>
mapred-site.xml
<property>
  <name>mapreduce.jobhistory.keytab</name>
  <value>/etc/security/keytab/jhs.service.keytab</value>
</property>
<property>
  <name>mapreduce.jobhistory.principal</name>
  <value>jhs/_HOST@EXAMPLE.COM</value>
</property>
Distribute the configuration files to every node in the cluster.
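A minimal sketch of the distribution step, assuming the xsync sync script used elsewhere in this guide is on the PATH:

for f in core-site.xml hdfs-site.xml yarn-site.xml mapred-site.xml; do
  xsync $HADOOP_HOME/etc/hadoop/$f
done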
5. Configure HDFS to use the HTTPS protocol
Generate a key pair:
keytool -keystore /etc/security/keytab/keystore -alias jetty -genkey -keyalg RSA
What the options mean:
keytool is Java's certificate management tool; it lets users manage their own public/private key pairs and the associated certificates.
- -keystore: the name and location of the keystore (the generated material is stored in this file)
- -genkey (or -genkeypair): generate a key pair
- -alias: an alias for the generated key pair (defaults to mykey if omitted)
- -keyalg: the key algorithm, RSA or DSA (the default is DSA)
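To confirm the key pair was created, the keystore contents can be listed (keytool will prompt for the keystore password):

keytool -list -keystore /etc/security/keytab/keystore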
Restrict access to the keystore:
chown -R root:hadoop /etc/security/keytab/keystore
chmod 660 /etc/security/keytab/keystore
Distribute the keystore to all nodes:
xsync /etc/security/keytab/keystore
Modify Hadoop's SSL server configuration:
mv $HADOOP_HOME/etc/hadoop/ssl-server.xml.example $HADOOP_HOME/etc/hadoop/ssl-server.xml
vim $HADOOP_HOME/etc/hadoop/ssl-server.xml
<property>
  <name>ssl.server.keystore.location</name>
  <value>/etc/security/keytab/keystore</value>
</property>
<property>
  <name>ssl.server.keystore.password</name>
  <value>123456</value>
</property>
<property>
  <name>ssl.server.truststore.location</name>
  <value>/etc/security/keytab/keystore</value>
</property>
<property>
  <name>ssl.server.keystore.keypassword</name>
  <value>123456</value>
</property>
<property>
  <name>ssl.server.truststore.password</name>
  <value>123456</value>
</property>
Distribute ssl-server.xml to every node.
6. Configure YARN to use LinuxContainerExecutor
On all nodes, change the owner and permissions of container-executor: the owner must be root, the group hadoop (the group of the yarn user that starts the NodeManager), and the mode 6050. Its default path is $HADOOP_HOME/bin.
[root@hadoop102 ~]# chown root:hadoop /opt/module/hadoop-3.1.3/bin/container-executor
[root@hadoop102 ~]# chmod 6050 /opt/module/hadoop-3.1.3/bin/container-executor
[root@hadoop103 ~]# chown root:hadoop /opt/module/hadoop-3.1.3/bin/container-executor
[root@hadoop103 ~]# chmod 6050 /opt/module/hadoop-3.1.3/bin/container-executor
[root@hadoop104 ~]# chown root:hadoop /opt/module/hadoop-3.1.3/bin/container-executor
[root@hadoop104 ~]# chmod 6050 /opt/module/hadoop-3.1.3/bin/container-executor
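A quick cross-node sanity check (a sketch assuming passwordless root SSH; every node should print root hadoop 6050):

for host in hadoop102 hadoop103 hadoop104; do
  ssh "$host" 'stat -c "%U %G %a %n" /opt/module/hadoop-3.1.3/bin/container-executor'
done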
On all nodes, change the owner and permissions of container-executor.cfg: the file and every ancestor directory must be owned by root with group hadoop (the group of the yarn user that starts the NodeManager), and the file mode must be 400. Its default path is $HADOOP_HOME/etc/hadoop.
[root@hadoop102 ~]# chown root:hadoop /opt/module/hadoop-3.1.3/etc/hadoop/container-executor.cfg
[root@hadoop102 ~]# chown root:hadoop /opt/module/hadoop-3.1.3/etc/hadoop
[root@hadoop102 ~]# chown root:hadoop /opt/module/hadoop-3.1.3/etc
[root@hadoop102 ~]# chown root:hadoop /opt/module/hadoop-3.1.3
[root@hadoop102 ~]# chown root:hadoop /opt/module
[root@hadoop102 ~]# chmod 400 /opt/module/hadoop-3.1.3/etc/hadoop/container-executor.cfg
[root@hadoop103 ~]# chown root:hadoop /opt/module/hadoop-3.1.3/etc/hadoop/container-executor.cfg
[root@hadoop103 ~]# chown root:hadoop /opt/module/hadoop-3.1.3/etc/hadoop
[root@hadoop103 ~]# chown root:hadoop /opt/module/hadoop-3.1.3/etc
[root@hadoop103 ~]# chown root:hadoop /opt/module/hadoop-3.1.3
[root@hadoop103 ~]# chown root:hadoop /opt/module
[root@hadoop103 ~]# chmod 400 /opt/module/hadoop-3.1.3/etc/hadoop/container-executor.cfg
[root@hadoop104 ~]# chown root:hadoop /opt/module/hadoop-3.1.3/etc/hadoop/container-executor.cfg
[root@hadoop104 ~]# chown root:hadoop /opt/module/hadoop-3.1.3/etc/hadoop
[root@hadoop104 ~]# chown root:hadoop /opt/module/hadoop-3.1.3/etc
[root@hadoop104 ~]# chown root:hadoop /opt/module/hadoop-3.1.3
[root@hadoop104 ~]# chown root:hadoop /opt/module
[root@hadoop104 ~]# chmod 400 /opt/module/hadoop-3.1.3/etc/hadoop/container-executor.cfg
vim $HADOOP_HOME/etc/hadoop/container-executor.cfg
yarn.nodemanager.linux-container-executor.group=hadoop
banned.users=hdfs,yarn,mapred
min.user.id=1000
allowed.system.users=
feature.tc.enabled=false
vim $HADOOP_HOME/etc/hadoop/yarn-site.xml
<property>
  <name>yarn.nodemanager.container-executor.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property>
<property>
  <name>yarn.nodemanager.linux-container-executor.group</name>
  <value>hadoop</value>
</property>
<property>
  <name>yarn.nodemanager.linux-container-executor.path</name>
  <value>/opt/module/hadoop-3.1.3/bin/container-executor</value>
</property>
Distribute container-executor.cfg and yarn-site.xml.
6. Starting the Cluster in Secure Mode
Before starting, the following local paths must have the owners and permissions listed below:

| Filesystem | Path | Owner:Group | Permissions |
| --- | --- | --- | --- |
| local | $HADOOP_LOG_DIR | hdfs:hadoop | drwxrwxr-x |
| local | dfs.namenode.name.dir | hdfs:hadoop | drwx------ |
| local | dfs.datanode.data.dir | hdfs:hadoop | drwx------ |
| local | dfs.namenode.checkpoint.dir | hdfs:hadoop | drwx------ |
| local | yarn.nodemanager.local-dirs | yarn:hadoop | drwxrwxr-x |
| local | yarn.nodemanager.log-dirs | yarn:hadoop | drwxrwxr-x |
$HADOOP_LOG_DIR (all nodes): this variable is set in hadoop-env.sh and defaults to ${HADOOP_HOME}/logs.
[root@hadoop102 ~]# chown hdfs:hadoop /opt/module/hadoop-3.1.3/logs/
[root@hadoop102 ~]# chmod 775 /opt/module/hadoop-3.1.3/logs/
[root@hadoop103 ~]# chown hdfs:hadoop /opt/module/hadoop-3.1.3/logs/
[root@hadoop103 ~]# chmod 775 /opt/module/hadoop-3.1.3/logs/
[root@hadoop104 ~]# chown hdfs:hadoop /opt/module/hadoop-3.1.3/logs/
[root@hadoop104 ~]# chmod 775 /opt/module/hadoop-3.1.3/logs/
dfs.namenode.name.dir (NameNode node): this parameter is set in hdfs-site.xml and defaults to file://${hadoop.tmp.dir}/dfs/name.
[root@hadoop102 ~]# chown -R hdfs:hadoop /opt/module/hadoop-3.1.3/data/dfs/name/
[root@hadoop102 ~]# chmod 700 /opt/module/hadoop-3.1.3/data/dfs/name/
dfs.datanode.data.dir (DataNode nodes): this parameter is set in hdfs-site.xml and defaults to file://${hadoop.tmp.dir}/dfs/data.
[root@hadoop102 ~]# chown -R hdfs:hadoop /opt/module/hadoop-3.1.3/data/dfs/data/
[root@hadoop102 ~]# chmod 700 /opt/module/hadoop-3.1.3/data/dfs/data/
[root@hadoop103 ~]# chown -R hdfs:hadoop /opt/module/hadoop-3.1.3/data/dfs/data/
[root@hadoop103 ~]# chmod 700 /opt/module/hadoop-3.1.3/data/dfs/data/
[root@hadoop104 ~]# chown -R hdfs:hadoop /opt/module/hadoop-3.1.3/data/dfs/data/
[root@hadoop104 ~]# chmod 700 /opt/module/hadoop-3.1.3/data/dfs/data/
dfs.namenode.checkpoint.dir (SecondaryNameNode node): this parameter is set in hdfs-site.xml and defaults to file://${hadoop.tmp.dir}/dfs/namesecondary.
[root@hadoop104 ~]# chown -R hdfs:hadoop /opt/module/hadoop-3.1.3/data/dfs/namesecondary/
[root@hadoop104 ~]# chmod 700 /opt/module/hadoop-3.1.3/data/dfs/namesecondary/
yarn.nodemanager.local-dirs (NodeManager nodes): this parameter is set in yarn-site.xml and defaults to file://${hadoop.tmp.dir}/nm-local-dir.
[root@hadoop102 ~]# chown -R yarn:hadoop /opt/module/hadoop-3.1.3/data/nm-local-dir/
[root@hadoop102 ~]# chmod -R 775 /opt/module/hadoop-3.1.3/data/nm-local-dir/
[root@hadoop103 ~]# chown -R yarn:hadoop /opt/module/hadoop-3.1.3/data/nm-local-dir/
[root@hadoop103 ~]# chmod -R 775 /opt/module/hadoop-3.1.3/data/nm-local-dir/
[root@hadoop104 ~]# chown -R yarn:hadoop /opt/module/hadoop-3.1.3/data/nm-local-dir/
[root@hadoop104 ~]# chmod -R 775 /opt/module/hadoop-3.1.3/data/nm-local-dir/
yarn.nodemanager.log-dirs (NodeManager nodes): this parameter is set in yarn-site.xml and defaults to $HADOOP_LOG_DIR/userlogs.
[root@hadoop102 ~]# chown yarn:hadoop /opt/module/hadoop-3.1.3/logs/userlogs/
[root@hadoop102 ~]# chmod 775 /opt/module/hadoop-3.1.3/logs/userlogs/
[root@hadoop103 ~]# chown yarn:hadoop /opt/module/hadoop-3.1.3/logs/userlogs/
[root@hadoop103 ~]# chmod 775 /opt/module/hadoop-3.1.3/logs/userlogs/
[root@hadoop104 ~]# chown yarn:hadoop /opt/module/hadoop-3.1.3/logs/userlogs/
[root@hadoop104 ~]# chmod 775 /opt/module/hadoop-3.1.3/logs/userlogs/
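For convenience, the per-node commands above can be gathered into one script run on every node (a sketch; paths assume the /opt/module/hadoop-3.1.3 layout used in this guide, and only the directories present on a given node are touched):

H=/opt/module/hadoop-3.1.3
chown hdfs:hadoop $H/logs/ && chmod 775 $H/logs/
if [ -d $H/data/dfs/name ]; then chown -R hdfs:hadoop $H/data/dfs/name/; chmod 700 $H/data/dfs/name/; fi
if [ -d $H/data/dfs/data ]; then chown -R hdfs:hadoop $H/data/dfs/data/; chmod 700 $H/data/dfs/data/; fi
if [ -d $H/data/dfs/namesecondary ]; then chown -R hdfs:hadoop $H/data/dfs/namesecondary/; chmod 700 $H/data/dfs/namesecondary/; fi
if [ -d $H/data/nm-local-dir ]; then chown -R yarn:hadoop $H/data/nm-local-dir/; chmod -R 775 $H/data/nm-local-dir/; fi
if [ -d $H/logs/userlogs ]; then chown yarn:hadoop $H/logs/userlogs/; chmod 775 $H/logs/userlogs/; fi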
Start HDFS
1) Starting daemons one by one:
[root@hadoop102 ~]# sudo -i -u hdfs hdfs --daemon start namenode
[root@hadoop102 ~]# sudo -i -u hdfs hdfs --daemon start datanode
[root@hadoop103 ~]# sudo -i -u hdfs hdfs --daemon start datanode
[root@hadoop104 ~]# sudo -i -u hdfs hdfs --daemon start datanode
[root@hadoop104 ~]# sudo -i -u hdfs hdfs --daemon start secondarynamenode
2) Starting the whole cluster at once
Passwordless SSH between the nodes must already be configured.
Then modify the startup script:
vim $HADOOP_HOME/sbin/start-dfs.sh
Add at the top:
HDFS_DATANODE_USER=hdfs
HDFS_NAMENODE_USER=hdfs
HDFS_SECONDARYNAMENODE_USER=hdfs
[root@hadoop102 ~]# start-dfs.sh
Check the web UI. With dfs.http.policy set to HTTPS_ONLY, the NameNode UI is served over HTTPS (by default on port 9871).
Modify access permissions on specific HDFS paths
| Filesystem | Path | Owner:Group | Permissions |
| --- | --- | --- | --- |
| hdfs | / | hdfs:hadoop | drwxr-xr-x |
| hdfs | /tmp | hdfs:hadoop | drwxrwxrwxt |
| hdfs | /user | hdfs:hadoop | drwxrwxr-x |
| hdfs | yarn.nodemanager.remote-app-log-dir | yarn:hadoop | drwxrwxrwxt |
| hdfs | mapreduce.jobhistory.intermediate-done-dir | mapred:hadoop | drwxrwxrwxt |
| hdfs | mapreduce.jobhistory.done-dir | mapred:hadoop | drwxrwx--- |
Create the hdfs/hadoop principal; run the following and enter a password when prompted:
[root@hadoop102 ~]# kadmin.local -q "addprinc hdfs/hadoop"
Authenticate as the hdfs/hadoop principal; run the following and enter the password when prompted:
[root@hadoop102 ~]# kinit hdfs/hadoop
Fix the /, /tmp and /user paths:
[root@hadoop102 ~]# hadoop fs -chown hdfs:hadoop / /tmp /user
[root@hadoop102 ~]# hadoop fs -chmod 755 /
[root@hadoop102 ~]# hadoop fs -chmod 1777 /tmp
[root@hadoop102 ~]# hadoop fs -chmod 775 /user
The parameter yarn.nodemanager.remote-app-log-dir is set in yarn-site.xml and defaults to /tmp/logs.
[root@hadoop102 ~]# hadoop fs -chown yarn:hadoop /tmp/logs
[root@hadoop102 ~]# hadoop fs -chmod 1777 /tmp/logs
The parameter mapreduce.jobhistory.intermediate-done-dir is set in mapred-site.xml and defaults to /tmp/hadoop-yarn/staging/history/done_intermediate. Every ancestor directory of this path (except /tmp) must be owned by mapred with group hadoop and mode 770.
[root@hadoop102 ~]# hadoop fs -chown -R mapred:hadoop /tmp/hadoop-yarn/staging/history/done_intermediate
[root@hadoop102 ~]# hadoop fs -chmod -R 1777 /tmp/hadoop-yarn/staging/history/done_intermediate
[root@hadoop102 ~]# hadoop fs -chown mapred:hadoop /tmp/hadoop-yarn/staging/history/
[root@hadoop102 ~]# hadoop fs -chown mapred:hadoop /tmp/hadoop-yarn/staging/
[root@hadoop102 ~]# hadoop fs -chown mapred:hadoop /tmp/hadoop-yarn/
[root@hadoop102 ~]# hadoop fs -chmod 770 /tmp/hadoop-yarn/staging/history/
[root@hadoop102 ~]# hadoop fs -chmod 770 /tmp/hadoop-yarn/staging/
[root@hadoop102 ~]# hadoop fs -chmod 770 /tmp/hadoop-yarn/
The parameter mapreduce.jobhistory.done-dir is set in mapred-site.xml and defaults to /tmp/hadoop-yarn/staging/history/done. Every ancestor directory of this path (except /tmp) must be owned by mapred with group hadoop and mode 770.
[root@hadoop102 ~]# hadoop fs -chown -R mapred:hadoop /tmp/hadoop-yarn/staging/history/done
[root@hadoop102 ~]# hadoop fs -chmod -R 750 /tmp/hadoop-yarn/staging/history/done
[root@hadoop102 ~]# hadoop fs -chown mapred:hadoop /tmp/hadoop-yarn/staging/history/
[root@hadoop102 ~]# hadoop fs -chown mapred:hadoop /tmp/hadoop-yarn/staging/
[root@hadoop102 ~]# hadoop fs -chown mapred:hadoop /tmp/hadoop-yarn/
[root@hadoop102 ~]# hadoop fs -chmod 770 /tmp/hadoop-yarn/staging/history/
[root@hadoop102 ~]# hadoop fs -chmod 770 /tmp/hadoop-yarn/staging/
[root@hadoop102 ~]# hadoop fs -chmod 770 /tmp/hadoop-yarn/
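To verify the resulting owners and modes in one pass (requires a valid ticket, e.g. the hdfs/hadoop authentication above; -d lists the directories themselves rather than their contents):

hadoop fs -ls -d / /tmp /user /tmp/logs /tmp/hadoop-yarn/staging/history/done /tmp/hadoop-yarn/staging/history/done_intermediate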
Start YARN
vim $HADOOP_HOME/sbin/start-yarn.sh
Add at the top:
YARN_RESOURCEMANAGER_USER=yarn
YARN_NODEMANAGER_USER=yarn
[root@hadoop103 ~]# start-yarn.sh
Start the history server:
[root@hadoop102 ~]# sudo -i -u mapred mapred --daemon start historyserver
7. Using the Secure Cluster