Chapter 29: Comprehensive Hadoop Tuning


I. Optimizing Small Files in Hadoop

1. Why small files hurt Hadoop

Every file on HDFS has a corresponding metadata entry on the NameNode, roughly 150 bytes each. When there are many small files, metadata entries proliferate: on one hand they consume a large amount of NameNode memory, and on the other the sheer number of entries slows down metadata lookups.

Too many small files also hurt MR jobs: they generate too many input splits, which launch too many MapTasks. Each MapTask then processes so little data that its processing time is shorter than its startup time, wasting resources.
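To get a feel for the scale, here is a quick back-of-the-envelope calculation using the ~150-byte figure above; the file count of 10 million is a hypothetical example:

```shell
# NameNode heap consumed by metadata alone, at ~150 bytes per entry,
# for a hypothetical 10 million small files
files=10000000
bytes_per_entry=150
total_mb=$(( files * bytes_per_entry / 1024 / 1024 ))
echo "${total_mb} MB"   # prints: 1430 MB (about 1.4 GB of NameNode heap)
```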

2. Solutions for small files

#1. Merge small files (or small batches of data) into larger files before uploading to HDFS (at the data source)

#2. Hadoop Archive (storage side)
An efficient archiving tool that packs small files into HDFS blocks: it bundles many small files into a single HAR file, reducing NameNode memory usage.
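As a sketch, archiving a directory of small files into a HAR file might look like the following; the paths /input, /output, and /unarchived are example paths, and a running cluster is assumed:

```shell
# Pack everything under the parent path /input into input.har,
# writing the archive to /output
hadoop archive -archiveName input.har -p /input /output

# The archive contents are addressable through the har:// scheme
hadoop fs -ls har:///output/input.har

# To unarchive, copy the files back out of the HAR
hadoop fs -cp har:///output/input.har/* /unarchived
```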

#3. CombineTextInputFormat (compute side)
CombineTextInputFormat merges multiple small files into a single split (or a small number of splits) during split generation.

#4. Enable uber mode for JVM reuse (compute side)
By default, every Task starts its own JVM. When the data each Task processes is very small, the Tasks of one Job can share a single JVM instead of paying the JVM startup cost per Task.
1) With uber mode disabled, upload several small files to /input and run the wordcount example
[delopy@hadoop102 hadoop]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar wordcount /input /output2

2) Observe the console output
2021-09-08 16:13:50,607 INFO mapreduce.Job: Job job_1613281510851_0002 running in uber mode : false

3) Check http://hadoop103:8088/cluster

4) Enable uber mode by adding the following to mapred-site.xml

<!-- Enable uber mode -->
<property>
  	<name>mapreduce.job.ubertask.enable</name>
  	<value>true</value>
</property>

<!-- Maximum number of MapTasks allowed in uber mode (may be lowered) -->
<property>
  	<name>mapreduce.job.ubertask.maxmaps</name>
  	<value>9</value>
</property>

<!-- Maximum number of ReduceTasks allowed in uber mode (may be lowered) -->
<property>
  	<name>mapreduce.job.ubertask.maxreduces</name>
  	<value>1</value>
</property>

<!-- Maximum input size for uber mode; left empty to use the default (dfs.blocksize), may be lowered -->
<property>
  	<name>mapreduce.job.ubertask.maxbytes</name>
  	<value></value>
</property>
5) Distribute the configuration
[delopy@hadoop102 hadoop]$ xsync mapred-site.xml

6) Run the wordcount program again
[delopy@hadoop102 hadoop]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar wordcount /input /output2

7) Observe the console output
2021-09-08 16:28:36,198 INFO mapreduce.Job: Job job_1613281510851_0003 running in uber mode : true

8) Check http://hadoop103:8088/cluster

II. Benchmarking MapReduce Performance

Use the Sort example to benchmark MapReduce.
Note: do not run this on a VM with less than 150 GB of disk.

#1. Use RandomWriter to generate random data: each node runs 10 Map tasks, and each Map produces roughly 1 GB of binary random data
[delopy@hadoop102 mapreduce]$ hadoop jar /opt/module/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar randomwriter random-data

#2. Run the Sort program
[delopy@hadoop102 mapreduce]$ hadoop jar /opt/module/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar sort random-data sorted-data

#3. Verify that the data is actually sorted
[delopy@hadoop102 mapreduce]$ hadoop jar /opt/module/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.3.1-tests.jar testmapredsort -sortInput random-data -sortOutput sorted-data

III. A Production-Style Case Study

1. Requirements

#1. Requirement: count the occurrences of each word in 1 GB of data. Three servers, each with 4 GB RAM, a 4-core CPU, and 4 threads.

#2. Analysis:
1 GB / 128 MB = 8 MapTasks; 1 ReduceTask; 1 MrAppMaster
On average: 10 tasks / 3 nodes ≈ 3-4 tasks per node (distributed 4 / 3 / 3)
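The split arithmetic above can be checked mechanically; the numbers come from the requirement, with 128 MB as the default HDFS block/split size:

```shell
data_mb=1024    # 1 GB of input data
split_mb=128    # default split size (one split per 128 MB block)
echo "$(( data_mb / split_mb )) MapTasks"   # prints: 8 MapTasks
```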

2. HDFS tuning

#1. Edit hadoop-env.sh
export HDFS_NAMENODE_OPTS="-Dhadoop.security.logger=INFO,RFAS -Xmx1024m"
export HDFS_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,RFAS -Xmx1024m"

#2. Edit hdfs-site.xml

<!-- NameNode RPC handler thread pool; the default is 10 -->
<property>
    <name>dfs.namenode.handler.count</name>
    <value>21</value>
</property>

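The value 21 follows a common rule of thumb, dfs.namenode.handler.count ≈ 20 × ln(cluster size). For a 3-node cluster:

```shell
# 20 * natural log of the node count, truncated to an integer
awk 'BEGIN { printf "%d\n", 20 * log(3) }'   # prints: 21
```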
#3. Edit core-site.xml

<!-- Keep deleted files in the trash for 60 minutes -->
<property>
    <name>fs.trash.interval</name>
    <value>60</value>
</property>
#4. Distribute the configuration
[delopy@hadoop102 hadoop]$ xsync hadoop-env.sh hdfs-site.xml core-site.xml

3. MapReduce tuning

#1. Edit mapred-site.xml

<!-- Size of the map-side circular sort buffer; default 100 MB -->
<property>
  <name>mapreduce.task.io.sort.mb</name>
  <value>100</value>
</property>

<!-- Spill threshold of the sort buffer; default 0.80 -->
<property>
  <name>mapreduce.map.sort.spill.percent</name>
  <value>0.80</value>
</property>

<!-- Number of streams merged at once during merges; default 10 -->
<property>
  <name>mapreduce.task.io.sort.factor</name>
  <value>10</value>
</property>

<!-- MapTask memory -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>-1</value>
  <description>The amount of memory to request from the scheduler for each
    map task. If this is not specified or is non-positive, it is inferred
    from mapreduce.map.java.opts and mapreduce.job.heap.memory-mb.ratio.
    If java-opts are also not specified, we set it to 1024.
  </description>
</property>

<!-- Number of CPU vcores per MapTask; default 1 -->
<property>
  <name>mapreduce.map.cpu.vcores</name>
  <value>1</value>
</property>

<!-- Maximum retries for a failed MapTask; default 4 -->
<property>
  <name>mapreduce.map.maxattempts</name>
  <value>4</value>
</property>

<!-- Number of parallel copiers each Reduce uses to fetch map output; default 5 -->
<property>
  <name>mapreduce.reduce.shuffle.parallelcopies</name>
  <value>5</value>
</property>

<!-- Fraction of Reduce memory used for the shuffle input buffer; default 0.70 -->
<property>
  <name>mapreduce.reduce.shuffle.input.buffer.percent</name>
  <value>0.70</value>
</property>

<!-- Buffer fill fraction at which merged data starts spilling to disk; default 0.66 -->
<property>
  <name>mapreduce.reduce.shuffle.merge.percent</name>
  <value>0.66</value>
</property>

<!-- ReduceTask memory -->
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>-1</value>
  <description>The amount of memory to request from the scheduler for each
    reduce task. If this is not specified or is non-positive, it is inferred
    from mapreduce.reduce.java.opts and mapreduce.job.heap.memory-mb.ratio.
    If java-opts are also not specified, we set it to 1024.
  </description>
</property>

<!-- Number of CPU vcores per ReduceTask; default 1 -->
<property>
  <name>mapreduce.reduce.cpu.vcores</name>
  <value>2</value>
</property>

<!-- Maximum retries for a failed ReduceTask; default 4 -->
<property>
  <name>mapreduce.reduce.maxattempts</name>
  <value>4</value>
</property>

<!-- Fraction of MapTasks that must complete before resources are requested for ReduceTasks; default 0.05 -->
<property>
  <name>mapreduce.job.reduce.slowstart.completedmaps</name>
  <value>0.05</value>
</property>

<!-- Task timeout in ms; a task that reports no progress within this window is killed. Default 600000 (10 minutes) -->
<property>
  <name>mapreduce.task.timeout</name>
  <value>600000</value>
</property>

#2. Distribute the configuration
[delopy@hadoop102 hadoop]$ xsync mapred-site.xml

4. YARN tuning

#1. Edit yarn-site.xml as follows:

<property>
	<description>The class to use as the resource scheduler.</description>
	<name>yarn.resourcemanager.scheduler.class</name>
	<value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>

<property>
	<description>Number of threads to handle scheduler interface.</description>
	<name>yarn.resourcemanager.scheduler.client.thread-count</name>
	<value>8</value>
</property>

<property>
	<description>Enable auto-detection of node capabilities such as
	memory and CPU.
	</description>
	<name>yarn.nodemanager.resource.detect-hardware-capabilities</name>
	<value>false</value>
</property>

<property>
	<description>Flag to determine if logical processors (such as
	hyperthreads) should be counted as cores. Only applicable on Linux
	when yarn.nodemanager.resource.cpu-vcores is set to -1 and
	yarn.nodemanager.resource.detect-hardware-capabilities is true.
	</description>
	<name>yarn.nodemanager.resource.count-logical-processors-as-cores</name>
	<value>false</value>
</property>

<property>
	<description>Multiplier to determine how to convert physical cores to
	vcores. This value is used if yarn.nodemanager.resource.cpu-vcores
	is set to -1 (which implies auto-calculate vcores) and
	yarn.nodemanager.resource.detect-hardware-capabilities is set to true.
	The number of vcores will be calculated as number of CPUs * multiplier.
	</description>
	<name>yarn.nodemanager.resource.pcores-vcores-multiplier</name>
	<value>1.0</value>
</property>

<property>
	<description>Amount of physical memory, in MB, that can be allocated
	for containers. If set to -1 and
	yarn.nodemanager.resource.detect-hardware-capabilities is true, it is
	automatically calculated (in case of Windows and Linux).
	In other cases, the default is 8192 MB.
	</description>
	<name>yarn.nodemanager.resource.memory-mb</name>
	<value>4096</value>
</property>

<property>
	<description>Number of vcores that can be allocated
	for containers. This is used by the RM scheduler when allocating
	resources for containers. This is not used to limit the number of
	CPUs used by YARN containers. If it is set to -1 and
	yarn.nodemanager.resource.detect-hardware-capabilities is true, it is
	automatically determined from the hardware in case of Windows and Linux.
	In other cases, the number of vcores is 8 by default.
	</description>
	<name>yarn.nodemanager.resource.cpu-vcores</name>
	<value>4</value>
</property>

<property>
	<description>The minimum allocation for every container request at the RM
	in MBs. Memory requests lower than this will be set to the value of this
	property. Additionally, a node manager that is configured to have less
	memory than this value will be shut down by the resource manager.
	</description>
	<name>yarn.scheduler.minimum-allocation-mb</name>
	<value>1024</value>
</property>

<property>
	<description>The maximum allocation for every container request at the RM
	in MBs. Memory requests higher than this will throw an
	InvalidResourceRequestException.
	</description>
	<name>yarn.scheduler.maximum-allocation-mb</name>
	<value>2048</value>
</property>

<property>
	<description>The minimum allocation for every container request at the RM
	in terms of virtual CPU cores. Requests lower than this will be set to the
	value of this property. Additionally, a node manager that is configured to
	have fewer virtual cores than this value will be shut down by the resource
	manager.
	</description>
	<name>yarn.scheduler.minimum-allocation-vcores</name>
	<value>1</value>
</property>

<property>
	<description>The maximum allocation for every container request at the RM
	in terms of virtual CPU cores. Requests higher than this will throw an
	InvalidResourceRequestException.
	</description>
	<name>yarn.scheduler.maximum-allocation-vcores</name>
	<value>2</value>
</property>

<property>
	<description>Whether virtual memory limits will be enforced for
	containers.
	</description>
	<name>yarn.nodemanager.vmem-check-enabled</name>
	<value>false</value>
</property>

<property>
	<description>Ratio between virtual memory to physical memory when
	setting memory limits for containers. Container allocations are
	expressed in terms of physical memory, and virtual memory usage
	is allowed to exceed this allocation by this ratio.
	</description>
	<name>yarn.nodemanager.vmem-pmem-ratio</name>
	<value>2.1</value>
</property>
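With yarn.scheduler.maximum-allocation-mb at 2048 and yarn.nodemanager.vmem-pmem-ratio at 2.1, a container that requests the 2048 MB maximum could use up to 2.1× that in virtual memory before the vmem check (disabled above) would trip:

```shell
# virtual-memory ceiling = physical allocation * vmem-pmem ratio
awk 'BEGIN { printf "%.1f MB\n", 2048 * 2.1 }'   # prints: 4300.8 MB
```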

#2. Distribute the configuration
[delopy@hadoop102 hadoop]$ xsync yarn-site.xml

IV. Running the Job

#1. Restart YARN (on hadoop103, where the ResourceManager runs)
[delopy@hadoop103 hadoop]$ sbin/stop-yarn.sh
[delopy@hadoop103 hadoop]$ sbin/start-yarn.sh

#2. Run the WordCount program
[delopy@hadoop102 hadoop]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar wordcount /input /output

#3. Observe the job on the YARN web UI
http://hadoop103:8088/cluster/apps
