ELK Log Analysis
(compiled on zhimap.com)

kibana
Config file
config\kibana.yml
Startup
Sample data from the official site
curl -H 'Content-Type: application/x-ndjson' -XPOST 'localhost:9200/bank/account/_bulk?pretty' --data-binary @accounts.json
curl -H 'Content-Type: application/x-ndjson' -XPOST 'localhost:9200/shakespeare/doc/_bulk?pretty' --data-binary @shakespeare_6.0.json
curl -H 'Content-Type: application/x-ndjson' -XPOST 'localhost:9200/_bulk?pretty' --data-binary @logs.jsonl
curl -XGET 'localhost:9200/_cat/indices?v'
http://localhost:9200/_cat/indices?v
start.bat
windows command
@echo off
echo bin\kibana.bat
echo http://localhost:5601
bin\kibana.bat
Operations
Query
account_number:<100 AND balance:>47500
ip:185.124.182.126
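A few more Lucene-style query sketches of the same form (field names assume the official logs sample data loaded above; adjust to your own index):
response:200 AND extension:php
response:[400 TO 499]
geo.src:CN OR geo.dest:US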
elasticsearch
Config file
config\elasticsearch.yml
Startup
start.bat
windows command
@echo off
echo bin\elasticsearch.bat
echo http://localhost:9200/
bin\elasticsearch.bat
Stop
stop.bat
windows command
@echo off
echo netstat -aon ^| findstr "9200"
echo taskkill /f /pid ^<pid^>
logstash
Configuration
Pipeline
input
beats
input {
    beats {
        port => "5044"
    }
}
...
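The port 5044 here has to match the port configured under output.logstash in filebeat.yml (see the filebeat section below).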
filter
Plugins
grok
filter {
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    ...
}
grok pattern
| Information | Field Name |
| IP Address | clientip |
| User ID | ident |
| User Authentication | auth |
| timestamp | timestamp |
| HTTP Verb | verb |
| Request body | request |
| HTTP Version | httpversion |
| HTTP Status Code | response |
| Bytes served | bytes |
| Referrer URL | referrer |
| User agent | agent |
https://qbox.io/blog/logstash-grok-filter-tutorial-patterns
patterns_dir
patterns_dir => ["./patterns"]
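A minimal custom-pattern sketch, along the lines of the example in the official grok docs (the POSTFIX_QUEUEID pattern and field names are illustrative): put
POSTFIX_QUEUEID [0-9A-F]{10,11}
into a file under ./patterns, then reference the new pattern:
grok {
    patterns_dir => ["./patterns"]
    match => { "message" => "%{POSTFIX_QUEUEID:queue_id}: %{GREEDYDATA:syslog_message}" }
}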
kv
# kv {
#     source => "xxxx_message"
#     value_split => "="
#     field_split => "&?"
# }
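A sketch of the effect (hypothetical field content): if xxxx_message held the string user=alice&role=admin?active=true, the kv settings above would split on & and ? and add the fields user => "alice", role => "admin", active => "true" to the event.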
geoip
filter {
    ...
    geoip {
        source => "clientip"
    }
}
json
json {
    source => "xxxxsourcexxxx"
    target => "xxxxjsonroot"
}
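For illustration (hypothetical content): if xxxxsourcexxxx held {"user":"alice","code":200}, the parsed values would appear nested under the target field, e.g. [xxxxjsonroot][user] and [xxxxjsonroot][code].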
Drop non-matching events
# drop events that failed to match the grok pattern
if "_grokparsefailure" in [tags] {
    drop {}
}
# if [loglevel] != "ERROR" {
#     drop {}
# }
urldecode
# urldecode {
#     all_fields => true
# }
date
# date {
#     match => [ "time", "dd/MM/yyyy:HH:mm:ss Z" ]
#     locale => "en"
# }
useragent
# useragent {
#     source => "User_Agent"
#     target => "user_agent"
# }
output
output {
    # elasticsearch { hosts => ["localhost:9200"] }
    stdout { codec => rubydebug }
}
elasticsearch
output {
    elasticsearch {
        hosts => [ "localhost:9200" ]
    }
}
rubydebug
output {
    stdout { codec => rubydebug }
}
Multiple Pipelines
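Multiple pipelines are declared in config/pipelines.yml; a minimal sketch (the pipeline ids and file names are illustrative):
- pipeline.id: apache-logs
  path.config: "first-pipeline.conf"
- pipeline.id: app-logs
  path.config: "second-pipeline.conf"
Starting bin/logstash without a -f argument then runs every pipeline declared in that file.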
verify your configuration
bin/logstash -f first-pipeline.conf --config.test_and_exit
Startup
start.bat
windows
bin\logstash -f first-pipeline.conf --config.reload.automatic
linux
bin/logstash -f first-pipeline.conf --config.reload.automatic
filebeat
Config file
filebeat.yml
paths:
  #- /var/log/*.log
  #- c:\programdata\elasticsearch\logs\*
  - C:\Buffer\logs\*
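For context, this paths list sits under a log input in filebeat.yml; a minimal sketch (in 6.0-6.2 the top-level key is filebeat.prospectors, from 6.3 on it is filebeat.inputs):
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - C:\Buffer\logs\*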
Logstash
#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]
  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"
  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
or
Kibana
#============================== Kibana =====================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "localhost:5601"
Multiline matching
### Multiline options
# Multiline can be used for log messages spanning multiple lines. This is common
# for Java stack traces or C-line continuation.
# The regexp pattern that has to be matched. The example pattern matches all lines starting with [
#multiline.pattern: ^\[
multiline.pattern: ^\[
# Defines if the pattern set under pattern should be negated or not. Default is false.
#multiline.negate: false
# true: lines that do NOT match the pattern are merged into the previous line
multiline.negate: true
# Match can be set to "after" or "before". It is used to define if lines should be appended to a pattern
# that was (not) matched before or after, or as long as a pattern is not matched based on negate.
# Note: "after" is the equivalent of "previous" and "before" is the equivalent of "next" in Logstash.
# http://blog.csdn.net/u012516166/article/details/74946823
# after the pattern matches, merge the preceding (before) or following (after) lines into one log event
#multiline.match: after
multiline.match: after
Adjust these to your own log format.
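A worked sketch of the settings above (hypothetical log lines): with pattern ^\[, negate true, and match after, the three lines
[2018-01-01 10:00:00] ERROR something failed
java.lang.NullPointerException
    at com.example.Foo.bar(Foo.java:42)
are shipped as a single event, because the two continuation lines do not start with [ and are therefore appended to the preceding matching line.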
启动
Setup
./filebeat setup -e
start.bat
@echo off
echo filebeat -e -c filebeat.yml -d "publish"
filebeat -e -c filebeat.yml -d "publish"
Start with reload (re-read the logs from scratch)
@echo off
echo re-read the logs from scratch
echo 1. ctrl+c
echo 2. del /f /q data\registry
echo 3. filebeat -e -c filebeat.yml -d "publish"
del /f /q data\registry
filebeat -e -c filebeat.yml -d "publish"
Other
Log analysis
https://www.docker.elastic.co/
http://www.dahouduan.com/2016/10/17/bigdata-filebeat-elasticsearch-kibana-elk/
Log analysis tools
https://www.ibm.com/developerworks/cn/opensource/os-cn-elk-filebeat/index.html
https://www.elastic.co/products
logstash
filebeat
https://zhuanlan.zhihu.com/p/23049700
kibana
https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elk-stack-on-ubuntu-14-04
https://facingissuesonit.com/2017/05/29/integrate-filebeat-kafka-logstash-elasticsearch-and-kibana/
ELK environment setup and SpringBoot integration test
https://my.oschina.net/u/2477500/blog/1615611
SpringBoot test project source: https://github.com/YunDongTeng/springboot-es.git