A Simple ELK Lab


1> ES cluster 

[23:23:40 root@noise elasticsearch-head]#grep -Evn "#|^$" /etc/elasticsearch/elasticsearch.yml 
1:cluster.name: elastic.example.com
2:node.name: 10.0.0.201
3:node.master: true
4:node.data: true
5:path.data: /var/lib/elasticsearch
6:path.logs: /var/log/elasticsearch
7:network.host: 0.0.0.0
8:http.port: 9200
9:http.cors.enabled: true
10:http.cors.allow-origin: "*"
11:discovery.seed_hosts: ["10.0.0.201", "10.0.0.202","10.0.0.203"]
12:cluster.initial_master_nodes: ["10.0.0.201"]


[17:43:40 root@noise ~]#curl -XGET 10.0.0.202:9200/_cat/nodes
10.0.0.203 13 95 0 0.00 0.00 0.00 cdfhilmrstw - 10.0.0.203
10.0.0.201 16 70 4 0.24 0.12 0.09 cdfhilmrstw * 10.0.0.201
10.0.0.202 20 91 1 0.00 0.00 0.00 cdfhilmrstw - 10.0.0.202
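In the `_cat/nodes` output above, the ninth column marks the elected master (`*`) versus the other master-eligible nodes (`-`). A small sketch for pulling the master's name out of that output; the sample data is the capture above, and against a live cluster you would pipe in `curl -s 10.0.0.202:9200/_cat/nodes` instead:

```shell
# Sample _cat/nodes capture from above; column 9 is the master marker,
# column 10 the node name.
nodes='10.0.0.203 13 95 0 0.00 0.00 0.00 cdfhilmrstw - 10.0.0.203
10.0.0.201 16 70 4 0.24 0.12 0.09 cdfhilmrstw * 10.0.0.201
10.0.0.202 20 91 1 0.00 0.00 0.00 cdfhilmrstw - 10.0.0.202'

# Print the name of the elected master (the row whose marker is "*").
master=$(printf '%s\n' "$nodes" | awk '$9 == "*" { print $10 }')
echo "elected master: $master"
```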

A successful setup looks like this.
curl 10.0.0.202:9200 - output:
{
  "name" : "10.0.0.202",   --> each node reports its own IP here
  "cluster_name" : "elastic.example.com",
  "cluster_uuid" : "MSSP7MIjQoaZbbyLa0_Gqw",
  "version" : {
    "number" : "7.15.2",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "93d5a7f6192e8a1a12e154a2b81bf6fa7309da0c",
    "build_date" : "2021-11-04T14:04:42.515624022Z",
    "build_snapshot" : false,
    "lucene_version" : "8.9.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
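A quick sanity check is to pull the version number out of that reply. A minimal `sed` sketch, run here against a trimmed copy of the JSON above (on the real host you would feed it `curl -s 10.0.0.202:9200`):

```shell
# Trimmed copy of the node-info JSON captured above.
reply='{
  "name" : "10.0.0.202",
  "cluster_name" : "elastic.example.com",
  "version" : {
    "number" : "7.15.2"
  }
}'

# Extract the value of the "number" field without needing jq.
version=$(printf '%s\n' "$reply" | sed -n 's/.*"number" : "\([^"]*\)".*/\1/p')
echo "ES version: $version"
```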

  

Install head, a web UI plugin for ES


[23:21:23 root@noise elasticsearch-head]#elasticsearch-plugin install analysis-smartcn
[23:21:23 root@noise elasticsearch-head]#elasticsearch-plugin install analysis-icu
[23:21:23 root@noise elasticsearch-head]#apt install npm git grunt -y
[23:21:23 root@noise elasticsearch-head]#mkdir -p /data/es/plugins
[23:21:23 root@noise elasticsearch-head]#cd /data/es/plugins/
[23:21:23 root@noise elasticsearch-head]#git clone https://gitee.com/mynextlifestyle/elasticsearch-head.git
[23:21:23 root@noise elasticsearch-head]#cd elasticsearch-head/
[23:21:23 root@noise elasticsearch-head]#npm config set registry https://registry.npm.taobao.org
[23:21:23 root@noise elasticsearch-head]#npm install --force
[23:21:23 root@noise elasticsearch-head]#cd /data/es/plugins/elasticsearch-head/
[23:21:23 root@noise elasticsearch-head]#vim Gruntfile.js     # set the connect hostname to '*'
[23:21:23 root@noise elasticsearch-head]#vim _site/app.js     # point base_uri at a cluster node

[23:21:23 root@noise elasticsearch-head]#grep -n "hostname" /data/es/plugins/elasticsearch-head/Gruntfile.js
97:                    hostname: '*',
[23:22:04 root@noise elasticsearch-head]#grep -n "http:" /data/es/plugins/elasticsearch-head/_site/app.js
4388:			this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://10.0.0.201:9200";
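The two vim edits above can also be scripted with `sed`. A sketch of the app.js change, shown against a copy of the stock line (head ships pointing at localhost:9200 by default; the in-place version would be `sed -i` on `_site/app.js`):

```shell
# Stock default line from _site/app.js (head targets localhost out of the box).
line='this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://localhost:9200";'

# Rewrite the default base_uri so head talks to a cluster node instead.
patched=$(printf '%s\n' "$line" | sed 's#http://localhost:9200#http://10.0.0.201:9200#')
echo "$patched"
```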

2> Install Logstash and configure its yml
[17:39:14 root@dev ~]#logstash -e 'input { stdin { } } output { stdout { } }'
Using bundled JDK: /usr/share/logstash/jdk
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2021-11-21 17:41:26.578 [main] runner - Starting Logstash {"logstash.version"=>"7.15.2", "jruby.version"=>"jruby 9.2.19.0 (2.5.8) 2021-06-15 55810c552b OpenJDK 64-Bit Server VM 11.0.12+7 on 11.0.12+7 +indy +jit [linux-x86_64]"}
[INFO ] 2021-11-21 17:41:26.608 [main] writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[INFO ] 2021-11-21 17:41:26.622 [main] writabledirectory - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[WARN ] 2021-11-21 17:41:26.937 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2021-11-21 17:41:26.959 [LogStash::Runner] agent - No persistent UUID file found. Generating new UUID {:uuid=>"8b041cf4-08e5-47c1-8ed8-1203cc12c217", :path=>"/usr/share/logstash/data/uuid"}
[INFO ] 2021-11-21 17:41:28.219 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[INFO ] 2021-11-21 17:41:28.629 [Converge PipelineAction::Create] Reflections - Reflections took 66 ms to scan 1 urls, producing 120 keys and 417 values
[WARN ] 2021-11-21 17:41:29.265 [Converge PipelineAction::Create] line - Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[WARN ] 2021-11-21 17:41:29.295 [Converge PipelineAction::Create] stdin - Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[INFO ] 2021-11-21 17:41:29.552 [[main]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, "pipeline.sources"=>["config string"], :thread=>"#"}
[INFO ] 2021-11-21 17:41:30.274 [[main]-pipeline-manager] javapipeline - Pipeline Java execution initialization time {"seconds"=>0.72}
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.jrubystdinchannel.StdinChannelLibrary$Reader (file:/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/jruby-stdin-channel-0.2.0-java/lib/jruby_stdin_channel/jruby_stdin_channel.jar) to field java.io.FilterInputStream.in
WARNING: Please consider reporting this to the maintainers of com.jrubystdinchannel.StdinChannelLibrary$Reader
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
[INFO ] 2021-11-21 17:41:30.372 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
[INFO ] 2021-11-21 17:41:30.410 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
The stdin plugin is now waiting for input:
nihao,pengyou
{
          "host" => "dev.noisedu.cn",
      "@version" => "1",
       "message" => "nihao,pengyou",
    "@timestamp" => 2021-11-21T09:41:35.285Z
}
^C[WARN ] 2021-11-21 17:41:41.368 [SIGINT handler] runner - SIGINT received. Shutting down.
[INFO ] 2021-11-21 17:41:41.583 [[main]-pipeline-manager] javapipeline - Pipeline terminated {"pipeline.id"=>"main"}
[INFO ] 2021-11-21 17:41:42.490 [LogStash::Runner] runner - Logstash shut down.

[23:18:15 root@noise ~]#cat /etc/logstash/conf.d/logstash.conf
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["http://10.0.0.201:9200"]
    index => "logstash-nginx-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}

3> Install Filebeat and configure its yml
[23:13:02 root@noise ~]#cat /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  paths:
    - /var/log/nginx/access.log 
setup.template.settings:
  index.number_of_shards: 5

#output.elasticsearch:
output.logstash:
  hosts: ["10.0.0.204:5044"]
  template.name: "filebeat"
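To exercise the Filebeat -> Logstash -> ES pipeline without real traffic, one can append a synthetic entry in nginx's combined-log shape to the watched file. A sketch; it writes to /tmp here so as not to touch the real log, whereas on the host you would target /var/log/nginx/access.log:

```shell
# Append one fake combined-format access-log entry.
log=/tmp/access.log   # on the real host: /var/log/nginx/access.log
printf '%s - - [%s] "GET / HTTP/1.1" 200 612 "-" "curl/7.68.0"\n' \
    10.0.0.1 "$(date '+%d/%b/%Y:%H:%M:%S %z')" >> "$log"
tail -n 1 "$log"
```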

4> Install Kibana
[23:19:19 root@noise ~]#grep -Env "#|^$" /etc/kibana/kibana.yml 
1:server.port: 5601
2:server.host: "0.0.0.0"
3:elasticsearch.hosts: ["http://10.0.0.201:9200"]
4:kibana.index: ".kibana"
5:i18n.locale: "zh-CN"

5> Testing
Send a request to nginx. The new access-log entry is automatically picked up by Filebeat and shipped to Logstash on port 5044; Logstash forwards it to the ES cluster on port 9200. The data is then visible in head, and shortly afterwards in Kibana.
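The last hop can also be confirmed from the shell. A sketch: the `%{+YYYY.MM.dd}` pattern in logstash.conf means one index per calendar day, so build today's name and query any ES node for it (the curl commands are shown as comments since they need the live cluster):

```shell
# Today's index, per the logstash-nginx-%{+YYYY.MM.dd} pattern above.
index="logstash-nginx-$(date +%Y.%m.%d)"

# Against any ES node, once a request has hit nginx:
#   curl -s "http://10.0.0.201:9200/_cat/indices/$index?v"   # does the index exist?
#   curl -s "http://10.0.0.201:9200/$index/_count?pretty"    # are docs arriving?
echo "check index: $index"
```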