Contents
- Preface
- Parameter descriptions
- 1. name
- 2. tags
- 3. fields
- 4. fields_under_root
- 5. processors
- 6. max_procs
Please credit the source when reprinting:
http://blog.csdn.net/qq_27818541/article/details/108138274
From: [BigManing's blog]
Preface
All Elastic Beats support these options. Because they are common to every Beat, they are not namespaced. The configurable parameters are shown below:
```yaml
#================================ General ======================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
# If this option is not defined, the hostname is used.
#name:

# The tags of the shipper are included in their own field with each
# transaction published. Tags make it easy to group servers by different
# logical properties.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output. Fields can be scalar values, arrays, dictionaries, or any nested
# combination of these.
#fields:
#  env: staging

# If this option is set to true, the custom fields are stored as top-level
# fields in the output document instead of being grouped under a fields
# sub-dictionary. Default is false.
#fields_under_root: false

# Internal queue configuration for buffering events to be published.
#queue:
  # Queue type by name (default 'mem')
  # The memory queue will present all available events (up to the outputs
  # bulk_max_size) to the output, the moment the output is ready to serve
  # another batch of events.
  #mem:
    # Max number of events the queue can buffer.
    #events: 4096

    # Hints the minimum number of events stored in the queue,
    # before providing a batch of events to the outputs.
    # The default value is set to 2048.
    # A value of 0 ensures events are immediately available
    # to be sent to the outputs.
    #flush.min_events: 2048

    # Maximum duration after which events are available to the outputs,
    # if the number of events stored in the queue is < `flush.min_events`.
    #flush.timeout: 1s

  # The spool queue will store events in a local spool file, before
  # forwarding the events to the outputs.
  #
  # Beta: spooling to disk is currently a beta feature. Use with care.
  #
  # The spool file is a circular buffer, which blocks once the file/buffer is full.
  # Events are put into a write buffer and flushed once the write buffer
  # is full or the flush_timeout is triggered.
  # Once ACKed by the output, events are removed immediately from the queue,
  # making space for new events to be persisted.
  #spool:
    # The file namespace configures the file path and the file creation settings.
    # Once the file exists, the `size`, `page_size` and `prealloc` settings
    # will have no more effect.
    #file:
      # Location of spool file. The default value is ${path.data}/spool.dat.
      #path: "${path.data}/spool.dat"

      # Configure file permissions if file is created. The default value is 0600.
      #permissions: 0600

      # File size hint. The spool blocks, once this limit is reached. The default value is 100 MiB.
      #size: 100MiB

      # The files page size. A file is split into multiple pages of the same size. The default value is 4KiB.
      #page_size: 4KiB

      # If prealloc is set, the required space for the file is reserved using
      # truncate. The default value is true.
      #prealloc: true

    # Spool writer settings
    # Events are serialized into a write buffer. The write buffer is flushed if:
    #  - The buffer limit has been reached.
    #  - The configured limit of buffered events is reached.
    #  - The flush timeout is triggered.
    #write:
      # Sets the write buffer size.
      #buffer_size: 1MiB

      # Maximum duration after which events are flushed if the write buffer
      # is not full yet. The default value is 1s.
      #flush.timeout: 1s

      # Number of maximum buffered events. The write buffer is flushed once the
      # limit is reached.
      #flush.events: 16384

      # Configure the on-disk event encoding. The encoding can be changed
      # between restarts.
      # Valid encodings are: json, ubjson, and cbor.
      #codec: cbor
    #read:
      # Reader flush timeout, waiting for more events to become available, so
      # to fill a complete batch as required by the outputs.
      # If flush_timeout is 0, all available events are forwarded to the
      # outputs immediately.
      # The default value is 0s.
      #flush.timeout: 0s

# Sets the maximum number of CPUs that can be executing simultaneously. The
# default is the number of logical CPUs available in the system.
#max_procs:
```
Parameter descriptions
1. name
Defaults to the server's hostname. The name is published with each event in the `agent.name` field.
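For example, to override the hostname-based default (the value "web-01" here is just an illustrative placeholder):

```yaml
# Publish events under a fixed shipper name instead of the hostname.
name: "web-01"
```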
2. tags
Defines tag labels for the collected log messages; the receiving end can use them to route events or apply matching logic.
tags: ["web-service","info-file"]
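With the configuration above, each published event carries the tags in a top-level `tags` array, roughly like this excerpt (timestamp and message values are illustrative):

```json
{
  "@timestamp": "2020-08-21T06:00:00.000Z",
  "tags": ["web-service", "info-file"],
  "message": "..."
}
```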
3. fields
Defines additional fields to add to the output, expressed as key/value pairs. By default they are nested under the `fields` key.
fields: {project: "credit", env: "prod"}
If `fields_under_root` is set to true, the custom key/value pairs instead appear as root-level fields in the published message.
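By default (without `fields_under_root`), an event produced with the setting above nests the custom keys under `fields`, roughly like this excerpt:

```json
{
  "fields": {
    "project": "credit",
    "env": "prod"
  }
}
```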
4. fields_under_root
Controls whether the custom fields are placed at the root of the output document. Example usage:
fields_under_root: true
fields: {project: "credit", env: "prod"}
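With `fields_under_root: true`, the same keys appear at the top level of the event instead of under `fields`, roughly like this excerpt:

```json
{
  "project": "credit",
  "env": "prod"
}
```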
5. processors
Processors provide Beats' data pre-processing capability. If multiple processors are defined, the Beat executes them in the order listed. See the official Beats documentation for the full list of available processors.
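As a minimal sketch, here are two standard Beats processors chained in order (the dropped field name is just an example):

```yaml
processors:
  # 1) Remove a noisy key from every event.
  - drop_fields:
      fields: ["log.offset"]
  # 2) Then enrich each event with host metadata; processors run in the listed order.
  - add_host_metadata: ~
```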
6. max_procs
Sets the maximum number of CPUs that can execute simultaneously. The default is the number of logical CPUs available on the system.
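For example, to cap the Beat at four CPUs (the value 4 is arbitrary):

```yaml
max_procs: 4
```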