|NO.Z.00013|——————————|Deployment|——|Hadoop&OLAP Database Management System.v13|——|Kylin.v04|Spark Cluster Deployment|


1. Spark Installation
### --- Download and extract the software

~~~     # Download the Spark release package
[root@hadoop01 software]# wget https://archive.apache.org/dist/spark/spark-2.4.5/spark-2.4.5-bin-without-hadoop-scala-2.12.tgz
~~~     # Extract the package into the servers directory
[root@hadoop01 software]# tar -zxvf spark-2.4.5-bin-without-hadoop-scala-2.12.tgz -C ../servers/
~~~     # Rename the directory
[root@hadoop01 software]# cd ../servers/
[root@hadoop01 servers]# mv spark-2.4.5-bin-without-hadoop-scala-2.12/ spark-2.4.5
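As a quick optional check (not part of the original steps), the renamed directory should show the usual Spark layout:

~~~     # (Optional) bin, conf, sbin and jars should all be present
[root@hadoop01 servers]# ls spark-2.4.5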
2. Modify the Spark Configuration Files
### --- Modify the configuration: prepare the configuration files

~~~     # Copy the template configuration files
[root@hadoop01 ~]# cd $SPARK_HOME/conf

[root@hadoop01 conf]# cp slaves.template slaves
[root@hadoop01 conf]# cp spark-defaults.conf.template spark-defaults.conf
[root@hadoop01 conf]# cp spark-env.sh.template spark-env.sh
[root@hadoop01 conf]# cp log4j.properties.template log4j.properties
### --- Modify the slaves configuration file

~~~     # List the hostnames of all worker nodes; hadoop01 acts as the master and does not have to be listed (see the SSH note after the list)
[root@hadoop01 ~]# vim $SPARK_HOME/conf/slaves
~~~     # Append the following at the end of the file
hadoop01
hadoop02
hadoop03
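start-all.sh later starts a Worker on every host in this list over SSH, so passwordless SSH from hadoop01 to each node is assumed to be in place already (as set up for the Hadoop cluster). If it is not, a minimal sketch:

~~~     # (Only if needed) enable passwordless SSH from the master to all nodes
[root@hadoop01 ~]# ssh-keygen -t rsa
[root@hadoop01 ~]# ssh-copy-id hadoop01
[root@hadoop01 ~]# ssh-copy-id hadoop02
[root@hadoop01 ~]# ssh-copy-id hadoop03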
### --- Modify the spark-defaults.conf configuration file

[root@hadoop01 ~]# vim $SPARK_HOME/conf/spark-defaults.conf

~~~     # Append the following at the end of the file
spark.master                    spark://hadoop01:7077
spark.eventLog.enabled          true
spark.eventLog.dir              hdfs://hadoop01:9000/spark-eventlog
spark.serializer                org.apache.spark.serializer.KryoSerializer
spark.driver.memory             512m
~~~     # Create the HDFS directory for event logs: hdfs dfs -mkdir /spark-eventlog

[root@hadoop01 ~]# hdfs dfs -mkdir /spark-eventlog
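With spark.eventLog.enabled set to true, applications fail to start if this directory is missing, so an optional check:

~~~     # (Optional) /spark-eventlog should appear in the listing
[root@hadoop01 ~]# hdfs dfs -ls /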
### --- Modify the spark-env.sh configuration file

[root@hadoop01 ~]# vim $SPARK_HOME/conf/spark-env.sh

export JAVA_HOME=/opt/yanqi/servers/jdk1.8.0_231
export HADOOP_HOME=/opt/yanqi/servers/hadoop-2.9.2
export HADOOP_CONF_DIR=/opt/yanqi/servers/hadoop-2.9.2/etc/hadoop
export SPARK_DIST_CLASSPATH=$(/opt/yanqi/servers/hadoop-2.9.2/bin/hadoop classpath)
export SPARK_MASTER_HOST=hadoop01
export SPARK_MASTER_PORT=7077
export SPARK_EXECUTOR_MEMORY=512m
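Because this is the "without-hadoop" build, SPARK_DIST_CLASSPATH must resolve to the Hadoop jars; an optional check that the command substitution produces a non-empty classpath:

~~~     # (Optional) the output should list the Hadoop config and jar directories
[root@hadoop01 ~]# /opt/yanqi/servers/hadoop-2.9.2/bin/hadoop classpath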
3. Distribute the Package and Configure Environment Variables
### --- Distribute the Spark software to the cluster and modify the environment variables on the other nodes

~~~     # Send the Spark package to the other nodes
[root@hadoop01 ~]# rsync-script /opt/yanqi/servers/spark-2.4.5/
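rsync-script is the distribution helper created earlier in this series; if it is not available, an equivalent manual sketch (assuming passwordless SSH and rsync on every node) is:

~~~     # Manual alternative: push the directory to the other nodes
[root@hadoop01 ~]# rsync -av /opt/yanqi/servers/spark-2.4.5/ hadoop02:/opt/yanqi/servers/spark-2.4.5/
[root@hadoop01 ~]# rsync -av /opt/yanqi/servers/spark-2.4.5/ hadoop03:/opt/yanqi/servers/spark-2.4.5/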
### --- Set the environment variables and make them take effect

~~~     # Set the environment variables on hadoop01, hadoop02, and hadoop03
[root@hadoop01 ~]# vim /etc/profile
##SPARK_HOME
export SPARK_HOME=/opt/yanqi/servers/spark-2.4.5
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin
~~~     # Make the environment variables take effect

[root@hadoop01 ~]# source /etc/profile
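An optional check that the new variables are active on each node:

~~~     # (Optional) verify SPARK_HOME and that the Spark binaries are on PATH
[root@hadoop01 ~]# echo $SPARK_HOME
[root@hadoop01 ~]# spark-submit --version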
4. Start the Cluster
### --- Start the cluster

~~~     # Start the Spark services on hadoop01
[root@hadoop01 ~]# cd $SPARK_HOME/sbin
[root@hadoop01 sbin]# ./start-all.sh 
### --- Verify that the Spark cluster has started

~~~     # Run jps on hadoop01, hadoop02, and hadoop03; Spark is now running in Standalone mode
[root@hadoop01 ~]# jps
~~~ Processes on each node
hadoop01     Master     Worker
hadoop02                Worker
hadoop03                Worker
5. Access the Spark Web UI via Chrome at http://hadoop01:8080/
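If no browser is at hand, a command-line sketch for checking that the master UI is up:

~~~     # (Optional) the master UI should return HTTP 200
[root@hadoop01 ~]# curl -s -o /dev/null -w "%{http_code}\n" http://hadoop01:8080/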
6. Resolve the start-all.sh and stop-all.sh Conflict
### --- A whereis search shows that both Hadoop and Spark provide start-all.sh and stop-all.sh

~~~     # Rename the Spark scripts on hadoop01, hadoop02, and hadoop03
[root@hadoop01 ~]# cd $SPARK_HOME/sbin
[root@hadoop01 sbin]# mv start-all.sh start-all-spark.sh 
[root@hadoop01 sbin]# mv stop-all.sh stop-all-spark.sh
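To confirm the fix, the whereis search mentioned above should now find start-all.sh and stop-all.sh only under Hadoop (a rough check; whereis also scans $PATH, so the exact output depends on the local layout):

~~~     # (Optional) only the Hadoop copies should remain under the original names
[root@hadoop01 sbin]# whereis start-all.sh stop-all.sh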
7. Cluster Test
### --- Test the cluster

~~~     # Run a Spark computation: run-example SparkPi 10
[root@hadoop01 ~]# run-example SparkPi 10
~~~ Output
Pi is roughly 3.14003114003114
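run-example picks up spark.master from spark-defaults.conf; an equivalent explicit spark-submit call (a sketch; the examples jar path may differ slightly between builds) is:

~~~     # Submit the same example class directly to the standalone master
[root@hadoop01 ~]# spark-submit --master spark://hadoop01:7077 \
  --class org.apache.spark.examples.SparkPi \
  $SPARK_HOME/examples/jars/spark-examples_2.12-2.4.5.jar 10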
### --- Access the cluster via spark-shell
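The word count below reads /wcinput/wc.txt, which is assumed to exist on HDFS from the earlier Hadoop exercises; if it does not, a sketch for uploading a local test file (the printed counts will then differ):

~~~     # (Only if needed) upload a local wc.txt to HDFS
[root@hadoop01 ~]# hdfs dfs -mkdir -p /wcinput
[root@hadoop01 ~]# hdfs dfs -put wc.txt /wcinput/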

[root@hadoop02 ~]# spark-shell
 
scala> val lines = sc.textFile("/wcinput/wc.txt")
scala> lines.flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).collect().foreach(println)
~~~ Output
(#在文件中输入如下内容,1)                                                       
(yanqi,3)
(mapreduce,3)
(yarn,2)
(hadoop,2)
(hdfs,1)

                 