emqx_conf_schema {

  cluster_name {
    desc {
      en: """Human-friendly name of the EMQX cluster."""
      zh: """EMQX集群名称。每个集群都有一个唯一的名称。服务发现时会用于做路径的一部分。"""
    }
    label {
      en: "Cluster Name"
      zh: "集群名称"
    }
  }

  process_limit {
    desc {
      en: """Maximum number of simultaneously existing processes for this Erlang system.
The actual maximum chosen may be much larger than the number configured.
For more information, see: https://www.erlang.org/doc/man/erl.html
"""
      zh: """Erlang系统同时存在的最大进程数。
实际选择的最大值可能比设置的数字大得多。
参考: https://www.erlang.org/doc/man/erl.html
"""
    }
    label {
      en: "Erlang Process Limit"
      zh: "Erlang 最大进程数"
    }
  }

  max_ports {
    desc {
      en: """Maximum number of simultaneously existing ports for this Erlang system.
The actual maximum chosen may be much larger than the number configured.
For more information, see: https://www.erlang.org/doc/man/erl.html
"""
      zh: """Erlang系统同时存在的最大端口数。
实际选择的最大值可能比设置的数字大得多。
参考: https://www.erlang.org/doc/man/erl.html
"""
    }
    label {
      en: "Erlang Port Limit"
      zh: "Erlang 最大端口数"
    }
  }

  dist_buffer_size {
    desc {
      en: """Erlang's distribution buffer busy limit in kilobytes."""
      zh: """Erlang分布式缓冲区的繁忙阈值,单位是KB。"""
    }
    label {
      en: "Erlang's dist buffer size (KB)"
      zh: "Erlang分布式缓冲区的繁忙阈值(KB)"
    }
  }

  max_ets_tables {
    desc {
      en: """Max number of ETS tables"""
      zh: """Erlang ETS 表的最大数量"""
    }
    label {
      en: "Max number of ETS tables"
      zh: "Erlang ETS 表的最大数量"
    }
  }

  cluster_discovery_strategy {
    desc {
      en: """Service discovery method for the cluster nodes."""
      zh: """集群节点发现方式。可选值为:
- manual: 手动加入集群<br/>
- static: 配置静态节点。配置几个固定的节点,新节点通过连接固定节点中的某一个来加入集群。<br/>
- mcast: 使用 UDP 多播的方式发现节点。<br/>
- dns: 使用 DNS A 记录的方式发现节点。<br/>
- etcd: 使用 etcd 发现节点。<br/>
- k8s: 使用 Kubernetes 发现节点。<br/>
"""
    }
    label {
      en: "Cluster Discovery Strategy"
      zh: "集群服务发现策略"
    }
  }

  cluster_autoclean {
    desc {
      en: """Remove disconnected nodes from the cluster after this interval."""
      zh: """指定多久之后从集群中删除离线节点。"""
    }
    label {
      en: "Cluster Auto Clean"
      zh: "自动删除离线节点时间"
    }
  }

  cluster_autoheal {
    desc {
      en: """If <code>true</code>, the node will try to heal network partitions automatically."""
      zh: """集群脑裂自动恢复机制开关。"""
    }
    label {
      en: "Cluster Auto Heal"
      zh: "节点脑裂自动修复机制"
    }
  }

  cluster_proto_dist {
    desc {
      en: """The Erlang distribution protocol for the cluster."""
      zh: """分布式 Erlang 集群协议类型。可选值为:
- inet_tcp: 使用 IPv4 <br/>
- inet6_tcp: 使用 IPv6 <br/>
- inet_tls: 使用 TLS,需要与 node.ssl_dist_optfile 配置一起使用。<br/>
"""
    }
    label {
      en: "Cluster Protocol Distribution"
      zh: "集群内部通信协议"
    }
  }

  cluster_static_seeds {
    desc {
      en: """List of EMQX node names in the static cluster. See <code>node.name</code>."""
      zh: """集群中的EMQX节点名称列表,
指定固定的节点列表,多个节点间使用逗号 , 分隔。
当 cluster.discovery_strategy 为 static 时,此配置项才有效。
适合于节点数量较少且固定的集群。
"""
    }
    label {
      en: "Cluster Static Seeds"
      zh: "集群静态节点"
    }
  }
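  # Illustrative example (not part of the schema): a static-discovery cluster
  # section composed from the fields above. The key layout
  # (cluster.discovery_strategy, cluster.static.seeds) is inferred from the
  # descriptions; node names are placeholders.
  #
  # cluster {
  #   name = emqxcl
  #   discovery_strategy = static
  #   static.seeds = ["emqx1@192.168.0.10", "emqx2@192.168.0.11"]
  # }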

  cluster_mcast_addr {
    desc {
      en: """Multicast IPv4 address."""
      zh: """指定多播 IPv4 地址。
当 cluster.discovery_strategy 为 mcast 时,此配置项才有效。
"""
    }
    label {
      en: "Cluster Multicast Address"
      zh: "多播地址"
    }
  }

  cluster_mcast_ports {
    desc {
      en: """List of UDP ports used for service discovery.<br/>
Note: probe messages are broadcast to all the specified ports.
"""
      zh: """指定多播端口。如有多个端口使用逗号 , 分隔。
当 cluster.discovery_strategy 为 mcast 时,此配置项才有效。
"""
    }
    label {
      en: "Cluster Multicast Ports"
      zh: "多播端口"
    }
  }

  cluster_mcast_iface {
    desc {
      en: """Local IP address the node discovery service needs to bind to."""
      zh: """指定节点发现服务需要绑定到本地 IP 地址。
当 cluster.discovery_strategy 为 mcast 时,此配置项才有效。
"""
    }
    label {
      en: "Cluster Multicast Interface"
      zh: "多播绑定地址"
    }
  }

  cluster_mcast_ttl {
    desc {
      en: """Time-to-live (TTL) for the outgoing UDP datagrams."""
      zh: """指定多播的 Time-To-Live 值。
当 cluster.discovery_strategy 为 mcast 时,此配置项才有效。
"""
    }
    label {
      en: "Cluster Multicast TTL"
      zh: "多播TTL"
    }
  }

  cluster_mcast_loop {
    desc {
      en: """If <code>true</code>, loop UDP datagrams back to the local socket."""
      zh: """设置多播的报文是否投递到本地回环地址。
当 cluster.discovery_strategy 为 mcast 时,此配置项才有效。
"""
    }
    label {
      en: "Cluster Multicast Loop"
      zh: "多播回环开关"
    }
  }

  cluster_mcast_sndbuf {
    desc {
      en: """Size of the kernel-level buffer for outgoing datagrams."""
      zh: """外发数据报的内核级缓冲区的大小。
当 cluster.discovery_strategy 为 mcast 时,此配置项才有效。
"""
    }
    label {
      en: "Cluster Multicast Sendbuf"
      zh: "多播发送缓存区"
    }
  }

  cluster_mcast_recbuf {
    desc {
      en: """Size of the kernel-level buffer for incoming datagrams."""
      zh: """接收数据报的内核级缓冲区的大小。
当 cluster.discovery_strategy 为 mcast 时,此配置项才有效。
"""
    }
    label {
      en: "Cluster Multicast Recbuf"
      zh: "多播接收数据缓冲区"
    }
  }

  cluster_mcast_buffer {
    desc {
      en: """Size of the user-level buffer."""
      zh: """用户级缓冲区的大小。
当 cluster.discovery_strategy 为 mcast 时,此配置项才有效。
"""
    }
    label {
      en: "Cluster Multicast Buffer"
      zh: "多播用户级缓冲区"
    }
  }

  cluster_dns_name {
    desc {
      en: """The domain name from which to discover peer EMQX nodes' IP addresses.
Applicable when <code>cluster.discovery_strategy = dns</code>
"""
      zh: """指定 DNS A 记录的名字。emqx 会通过访问这个 DNS A 记录来获取 IP 地址列表。
当<code>cluster.discovery_strategy</code> 为 <code>dns</code> 时有效。
"""
    }
    label {
      en: "Cluster DNS Name"
      zh: "DNS名称"
    }
  }

  cluster_dns_record_type {
    desc {
      en: """DNS record type."""
      zh: """DNS 记录类型。"""
    }
    label {
      en: "DNS Record Type"
      zh: "DNS记录类型"
    }
  }
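  # Illustrative example (assumed key paths, placeholder domain): DNS-based
  # discovery using the cluster_dns_name and cluster_dns_record_type fields above.
  #
  # cluster {
  #   discovery_strategy = dns
  #   dns.name = "emqx.example.com"
  #   dns.record_type = a
  # }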

  cluster_etcd_server {
    desc {
      en: """List of endpoint URLs of the etcd cluster"""
      zh: """指定 etcd 服务的地址。如有多个服务使用逗号 , 分隔。
当 cluster.discovery_strategy 为 etcd 时,此配置项才有效。
"""
    }
    label {
      en: "Cluster Etcd Server"
      zh: "Etcd 服务器地址"
    }
  }

  cluster_etcd_prefix {
    desc {
      en: """Key prefix used for EMQX service discovery."""
      zh: """指定 etcd 路径的前缀。每个节点在 etcd 中都会创建一个路径:
v2/keys/<prefix>/<cluster.name>/<node.name> <br/>
当 cluster.discovery_strategy 为 etcd 时,此配置项才有效。
"""
    }
    label {
      en: "Cluster Etcd Prefix"
      zh: "Etcd 路径前缀"
    }
  }

  cluster_etcd_node_ttl {
    desc {
      en: """Expiration time of the etcd key associated with the node.
It is refreshed automatically, as long as the node is alive.
"""
      zh: """指定 etcd 中节点信息的过期时间。
当 cluster.discovery_strategy 为 etcd 时,此配置项才有效。
"""
    }
    label {
      en: "Cluster Etcd Node TTL"
      zh: "Etcd 节点过期时间"
    }
  }

  cluster_etcd_ssl {
    desc {
      en: """Options for the TLS connection to the etcd cluster."""
      zh: """当使用 TLS 连接 etcd 时的配置选项。
当 cluster.discovery_strategy 为 etcd 时,此配置项才有效。
"""
    }
    label {
      en: "Cluster Etcd SSL Option"
      zh: "Etcd SSL 选项"
    }
  }
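  # Illustrative example (assumed key paths, placeholder endpoint): etcd-based
  # discovery using the cluster_etcd_* fields above.
  #
  # cluster {
  #   discovery_strategy = etcd
  #   etcd {
  #     server = "http://127.0.0.1:2379"
  #     prefix = emqxcl
  #     node_ttl = 1m
  #   }
  # }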

  cluster_k8s_apiserver {
    desc {
      en: """Kubernetes API endpoint URL."""
      zh: """指定 Kubernetes API Server。如有多个 Server 使用逗号 , 分隔。
当 cluster.discovery_strategy 为 k8s 时,此配置项才有效。
"""
    }
    label {
      en: "Cluster k8s ApiServer"
      zh: "K8s 服务地址"
    }
  }

  cluster_k8s_service_name {
    desc {
      en: """EMQX broker service name."""
      zh: """指定 Kubernetes 中 EMQX 的服务名。
当 cluster.discovery_strategy 为 k8s 时,此配置项才有效。
"""
    }
    label {
      en: "K8s Service Name"
      zh: "K8s 服务别名"
    }
  }

  cluster_k8s_address_type {
    desc {
      en: """Address type used for connecting to the discovered nodes.
Setting <code>cluster.k8s.address_type</code> to <code>ip</code> will
make EMQX discover IP addresses of peer nodes from the Kubernetes API.
"""
      zh: """当使用 k8s 方式集群时,address_type 用来从 Kubernetes 接口的应答里获取什么形式的 Host 列表。
指定 <code>cluster.k8s.address_type</code> 为 <code>ip</code>,则将从 Kubernetes 接口中获取集群中其他节点
的IP地址。
"""
    }
    label {
      en: "K8s Address Type"
      zh: "K8s 地址类型"
    }
  }

  cluster_k8s_namespace {
    desc {
      en: """Kubernetes namespace."""
      zh: """当使用 k8s 方式并且 cluster.k8s.address_type 指定为 dns 类型时,
可设置 emqx 节点名的命名空间。与 cluster.k8s.suffix 一起使用用以拼接得到节点名列表。
"""
    }
    label {
      en: "K8s Namespace"
      zh: "K8s 命名空间"
    }
  }

  cluster_k8s_suffix {
    desc {
      en: """Node name suffix.<br/>
Note: this parameter is only relevant when <code>address_type</code> is <code>dns</code>
or <code>hostname</code>."""
      zh: """当使用 k8s 方式并且 cluster.k8s.address_type 指定为 dns 类型时,可设置 emqx 节点名的后缀。
与 cluster.k8s.namespace 一起使用用以拼接得到节点名列表。
"""
    }
    label {
      en: "K8s Suffix"
      zh: "K8s 后缀"
    }
  }
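  # Illustrative example (assumed key paths, placeholder values): Kubernetes-based
  # discovery. With address_type = dns, namespace and suffix are joined to build
  # node names, as the descriptions above explain.
  #
  # cluster {
  #   discovery_strategy = k8s
  #   k8s {
  #     apiserver = "https://kubernetes.default.svc:443"
  #     service_name = emqx
  #     address_type = ip
  #     namespace = default
  #   }
  # }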

  node_name {
    desc {
      en: """Unique name of the EMQX node. It must follow the <code>%name%@FQDN</code> or
<code>%name%@IPv4</code> format.
"""
      zh: """节点名。格式为 \<name>@\<host>。其中 <host> 可以是 IP 地址,也可以是 FQDN。
详见 http://erlang.org/doc/reference_manual/distributed.html。
"""
    }
    label {
      en: "Node Name"
      zh: "节点名"
    }
  }

  node_cookie {
    desc {
      en: """Secret cookie is a random string that should be the same on all nodes in
the given EMQX cluster, but unique per EMQX cluster. It is used to prevent EMQX nodes that
belong to different clusters from accidentally connecting to each other."""
      zh: """分布式 Erlang 集群使用的 cookie 值。同一集群内所有节点需保持一致,不同集群之间应互不相同。"""
    }
    label {
      en: "Node Cookie"
      zh: "节点 Cookie"
    }
  }
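  # Illustrative example (placeholder values): node identity as described above.
  # The name must follow <name>@<FQDN-or-IPv4>, and the cookie must be identical
  # on every node of the same cluster.
  #
  # node {
  #   name = "emqx@127.0.0.1"
  #   cookie = "please-change-me"
  # }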

  node_data_dir {
    desc {
      en: """
Path to the persistent data directory.<br/>
Possible auto-created subdirectories are:<br/>
- `mnesia/<node_name>`: EMQX's built-in database directory.<br/>
For example, `mnesia/emqx@127.0.0.1`.<br/>
There should be only one such subdirectory.<br/>
This means that if the node is to be renamed (e.g. to `emqx@10.0.1.1`),<br/>
the old dir should be deleted first.<br/>
- `configs`: Generated configs at boot time, and cluster/local override configs.<br/>
- `patches`: Hot-patch beam files are to be placed here.<br/>
- `trace`: Trace log files.<br/>

**NOTE**: One data dir cannot be shared by two or more EMQX nodes.
"""
      zh: """
节点数据存放目录,可能会自动创建的子目录如下:<br/>
- `mnesia/<node_name>`。EMQX的内置数据库目录。例如,`mnesia/emqx@127.0.0.1`。<br/>
如果节点要被重新命名(例如,`emqx@10.0.1.1`),旧目录应该首先被删除。<br/>
- `configs`。在启动时生成的配置,以及集群/本地覆盖的配置。<br/>
- `patches`: 热补丁文件将被放在这里。<br/>
- `trace`: 日志跟踪文件。<br/>

**注意**: 一个数据dir不能被两个或更多的EMQX节点同时使用。
"""
    }
    label {
      en: "Node Data Dir"
      zh: "节点数据目录"
    }
  }

  node_config_files {
    desc {
      en: """List of configuration files that are read during startup. The order is
significant: later configuration files override the previous ones.
"""
      zh: """启动时读取的配置文件列表。后面的配置文件项覆盖前面的文件。"""
    }
    label {
      en: "Config Files"
      zh: "配置文件"
    }
  }

  node_global_gc_interval {
    desc {
      en: """Periodic garbage collection interval. Set to <code>disabled</code> to have it disabled."""
      zh: """系统调优参数,设置节点运行多久强制进行一次全局垃圾回收。禁用设置为 <code>disabled</code>。"""
    }
    label {
      en: "Global GC Interval"
      zh: "全局垃圾回收"
    }
  }

  node_crash_dump_file {
    desc {
      en: """Location of the crash dump file."""
      zh: """设置 Erlang crash_dump 文件的存储路径和文件名。"""
    }
    label {
      en: "Crash Dump File"
      zh: "节点崩溃时的Dump文件"
    }
  }

  node_crash_dump_seconds {
    desc {
      en: """The number of seconds that the broker is allowed to spend writing a crash dump."""
      zh: """保存崩溃文件最大允许时间,如果文件太大,在规定时间内没有保存完成,则会直接结束。"""
    }
    label {
      en: "Crash Dump Seconds"
      zh: "保存崩溃文件最长时间"
    }
  }

  node_crash_dump_bytes {
    desc {
      en: """The maximum size of a crash dump file in bytes."""
      zh: """限制崩溃文件的大小,当崩溃时节点内存太大,
如果为了保存现场,需要全部存到崩溃文件中,此处限制最多能保存多大的文件。
"""
    }
    label {
      en: "Crash Dump Bytes"
      zh: "崩溃文件最大容量"
    }
  }

  node_dist_net_ticktime {
    desc {
      en: """This is the approximate time an EMQX node may be unresponsive until it is considered down and thereby disconnected."""
      zh: """系统调优参数,此配置将覆盖 vm.args 文件里的 -kernel net_ticktime 参数。当一个节点持续无响应多久之后,认为其已经宕机并断开连接。
"""
    }
    label {
      en: "Dist Net TickTime"
      zh: "节点间心跳间隔"
    }
  }

  node_backtrace_depth {
    desc {
      en: """Maximum depth of the call stack printed in error messages and
<code>process_info</code>.
"""
      zh: """错误信息中打印的最大堆栈层数"""
    }
    label {
      en: "BackTrace Depth"
      zh: "最大堆栈层数"
    }
  }

  node_applications {
    desc {
      en: """List of Erlang applications that shall be rebooted when the EMQX broker joins the cluster.
"""
      zh: """当新的 EMQX 节点加入集群时,应重启的 Erlang 应用程序的列表。"""
    }
    label {
      en: "Application"
      zh: "应用"
    }
  }

  node_etc_dir {
    desc {
      en: """<code>etc</code> dir for the node"""
      zh: """<code>etc</code> 存放目录"""
    }
    label {
      en: "Etc Dir"
      zh: "Etc 目录"
    }
  }

  db_backend {
    desc {
      en: """
Select the backend for the embedded database.<br/>
<code>rlog</code> is the default backend,
which is suitable for very large clusters.<br/>
<code>mnesia</code> is a backend that offers decent performance in small clusters.
"""
      zh: """rlog 是默认的数据库,它适用于大规模的集群。
mnesia 是备选数据库,在小集群中提供了很好的性能。
"""
    }
    label {
      en: "DB Backend"
      zh: "内置数据库"
    }
  }

  db_role {
    desc {
      en: """
Select a node role.<br/>
<code>core</code> nodes provide durability of the data, and take care of writes.
It is recommended to place core nodes in different racks or different availability zones.<br/>
<code>replicant</code> nodes are ephemeral worker nodes. Removing them from the cluster
doesn't affect database redundancy.<br/>
It is recommended to have more replicant nodes than core nodes.<br/>
Note: this parameter only takes effect when the <code>backend</code> is set
to <code>rlog</code>.
"""
      zh: """
选择节点的角色。<br/>
<code>core</code> 节点提供数据的持久性,并负责写入。建议将核心节点放置在不同的机架或不同的可用区。<br/>
<code>replicant</code> 节点是临时工作节点。从集群中删除它们,不影响数据库冗余。<br/>
建议复制节点多于核心节点。<br/>
注意:该参数仅在 <code>backend</code> 设置为 <code>rlog</code> 时生效。
"""
    }
    label {
      en: "DB Role"
      zh: "数据库角色"
    }
  }

  db_core_nodes {
    desc {
      en: """
List of core nodes that the replicant will connect to.<br/>
Note: this parameter only takes effect when the <code>backend</code> is set
to <code>rlog</code> and the <code>role</code> is set to <code>replicant</code>.<br/>
This value needs to be defined for manual or static cluster discovery mechanisms.<br/>
If an automatic cluster discovery mechanism is being used (such as <code>etcd</code>),
there is no need to set this value.
"""
      zh: """当前节点连接的核心节点列表。<br/>
注意:该参数仅在 <code>backend</code> 设置为 <code>rlog</code>
且 <code>role</code> 设置为 <code>replicant</code> 时生效。<br/>
该值需要在手动或静态集群发现机制下设置。<br/>
如果使用了自动集群发现机制(如<code>etcd</code>),则不需要设置该值。
"""
    }
    label {
      en: "DB Core Node"
      zh: "数据库核心节点"
    }
  }
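  # Illustrative example (assumed key paths, placeholder node names): a replicant
  # node on the rlog backend, listing core nodes explicitly as required for
  # manual or static discovery.
  #
  # db {
  #   backend = rlog
  #   role = replicant
  #   core_nodes = ["emqx1@192.168.0.10", "emqx2@192.168.0.11"]
  # }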

  db_rpc_module {
    desc {
      en: """Protocol used for pushing transaction logs to the replicant nodes."""
      zh: """集群间推送事务日志到复制节点使用的协议。"""
    }
    label {
      en: "RPC Module"
      zh: "RPC协议"
    }
  }

  db_tlog_push_mode {
    desc {
      en: """
In sync mode the core node waits for an ack from the replicant nodes before sending the next
transaction log entry.
"""
      zh: """同步模式下,核心节点等待复制节点的确认信息,然后再发送下一条事务日志。"""
    }
    label {
      en: "Tlog Push Mode"
      zh: "Tlog推送模式"
    }
  }

  db_default_shard_transport {
    desc {
      en: """Defines the default transport for pushing transaction logs.<br/>
This may be overridden on a per-shard basis in <code>db.shard_transports</code>.
<code>gen_rpc</code> uses the <code>gen_rpc</code> library,
<code>distr</code> uses the Erlang distribution.<br/>"""
      zh: """
定义用于推送事务日志的默认传输方式。<br/>
可以在 <code>db.shard_transports</code> 中按分片逐个覆盖。
<code>gen_rpc</code> 使用 <code>gen_rpc</code> 库,
<code>distr</code> 使用 Erlang 自带的分布式通信。<br/>
"""
    }
    label {
      en: "Default Shard Transport"
      zh: "事务日志传输默认协议"
    }
  }

  db_shard_transports {
    desc {
      en: """Allows tuning the transport method used for transaction log replication, on a per-shard basis.<br/>
<code>gen_rpc</code> uses the <code>gen_rpc</code> library,
<code>distr</code> uses the Erlang distribution.<br/>If not specified,
the default is to use the value set in <code>db.default_shard_transport</code>."""
      zh: """允许为每个 shard 下的事务日志复制操作的传输方法进行调优。<br/>
<code>gen_rpc</code> 使用 <code>gen_rpc</code> 库,
<code>distr</code> 使用 Erlang 自带的分布式通信。<br/>如果未指定,
默认是使用 <code>db.default_shard_transport</code> 中设置的值。
"""
    }
    label {
      en: "Shard Transports"
      zh: "事务日志传输协议"
    }
  }

  cluster_call_retry_interval {
    desc {
      en: """Time interval to retry after a failed call."""
      zh: """当集群间调用出错时,多长时间重试一次。"""
    }
    label {
      en: "Cluster Call Retry Interval"
      zh: "重试时间间隔"
    }
  }

  cluster_call_max_history {
    desc {
      en: """The maximum number of completed transactions to retain (for queries)."""
      zh: """集群间调用最多保留的历史记录数。只用于排错时查看。"""
    }
    label {
      en: "Cluster Call Max History"
      zh: "最大历史记录"
    }
  }

  cluster_call_cleanup_interval {
    desc {
      en: """Time interval to clear completed but stale transactions.
Ensures that the number of completed transactions is less than the <code>max_history</code>."""
      zh: """清理过期事务的时间间隔"""
    }
    label {
      en: "Clean Up Interval"
      zh: "清理间隔"
    }
  }

  rpc_mode {
    desc {
      en: """In <code>sync</code> mode the sending side waits for the ack from the receiving side."""
      zh: """在 <code>sync</code> 模式下,发送端等待接收端的 ack 信号。"""
    }
    label {
      en: "RPC Mode"
      zh: "RPC 模式"
    }
  }

  rpc_driver {
    desc {
      en: """Transport protocol used for inter-broker communication"""
      zh: """集群间通信使用的传输协议。"""
    }
    label {
      en: "RPC Driver"
      zh: "RPC 驱动"
    }
  }

  rpc_async_batch_size {
    desc {
      en: """The maximum number of batch messages sent in asynchronous mode.
Note that this configuration does not work in synchronous mode.
"""
      zh: """异步模式下,发送的批量消息的最大数量。"""
    }
    label {
      en: "Async Batch Size"
      zh: "异步模式下的批量消息数量"
    }
  }

  rpc_port_discovery {
    desc {
      en: """<code>manual</code>: discover ports by <code>tcp_server_port</code>.<br/>
<code>stateless</code>: discover ports in a stateless manner, using the following algorithm.
If node name is <code>emqxN@127.0.0.1</code>, where N is an integer,
then the listening port will be 5370 + N."""
      zh: """<code>manual</code>: 通过 <code>tcp_server_port</code> 来发现端口。
<br/><code>stateless</code>: 使用无状态的方式来发现端口,使用如下算法。如果节点名称是 <code>
emqxN@127.0.0.1</code>, N 是一个数字,那么监听端口就是 5370 + N。
"""
    }
    label {
      en: "RPC Port Discovery"
      zh: "RPC 端口发现策略"
    }
  }
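  # Worked example of the stateless rule described above: for node
  # emqx2@127.0.0.1, N = 2, so the RPC listening port is 5370 + 2 = 5372.
  # With port_discovery = manual, the port is taken from rpc.tcp_server_port instead.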

  rpc_tcp_server_port {
    desc {
      en: """Listening port used by RPC local service.<br/>
Note that this config only takes effect when rpc.port_discovery is set to manual."""
      zh: """RPC 本地服务使用的 TCP 端口。<br/>
只有当 rpc.port_discovery 设置为 manual 时,此配置才会生效。
"""
    }
    label {
      en: "RPC TCP Server Port"
      zh: "RPC TCP 服务监听端口"
    }
  }

  rpc_ssl_server_port {
    desc {
      en: """Listening port used by RPC local service.<br/>
Note that this config only takes effect when rpc.port_discovery is set to manual
and <code>driver</code> is set to <code>ssl</code>."""
      zh: """RPC 本地服务使用的监听 SSL 端口。<br/>
只有当 rpc.port_discovery 设置为 manual 且 <code>driver</code> 设置为 <code>ssl</code> 时,
此配置才会生效。
"""
    }
    label {
      en: "RPC SSL Server Port"
      zh: "RPC SSL 服务监听端口"
    }
  }

  rpc_tcp_client_num {
    desc {
      en: """Set the maximum number of RPC communication channels initiated by this node to each remote node."""
      zh: """设置本节点与远程节点之间的 RPC 通信通道的最大数量。"""
    }
    label {
      en: "RPC TCP Client Num"
      zh: "RPC TCP 客户端数量"
    }
  }

  rpc_connect_timeout {
    desc {
      en: """Timeout for establishing an RPC connection."""
      zh: """建立 RPC 连接的超时时间。"""
    }
    label {
      en: "RPC Connect Timeout"
      zh: "RPC 连接超时时间"
    }
  }

  rpc_certfile {
    desc {
      en: """Path to TLS certificate file used to validate identity of the cluster nodes.
Note that this config only takes effect when <code>rpc.driver</code> is set to <code>ssl</code>.
"""
      zh: """TLS 证书文件的路径,用于验证集群节点的身份。
只有当 <code>rpc.driver</code> 设置为 <code>ssl</code> 时,此配置才会生效。
"""
    }
    label {
      en: "RPC Certfile"
      zh: "RPC 证书文件"
    }
  }

  rpc_keyfile {
    desc {
      en: """Path to the private key file for the <code>rpc.certfile</code>.<br/>
Note: contents of this file are secret, so it's necessary to set permissions to 600."""
      zh: """<code>rpc.certfile</code> 的私钥文件的路径。<br/>
注意:此文件内容是私钥,所以需要设置权限为 600。
"""
    }
    label {
      en: "RPC Keyfile"
      zh: "RPC 私钥文件"
    }
  }

  rpc_cacertfile {
    desc {
      en: """Path to certification authority TLS certificate file used to validate <code>rpc.certfile</code>.<br/>
Note: certificates of all nodes in the cluster must be signed by the same CA."""
      zh: """验证 <code>rpc.certfile</code> 的 CA 证书文件的路径。<br/>
注意:集群中所有节点的证书必须使用同一个 CA 签发。
"""
    }
    label {
      en: "RPC Cacertfile"
      zh: "RPC CA 证书文件"
    }
  }
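  # Illustrative example (placeholder paths): TLS-secured inter-node RPC using
  # the fields above; all nodes' certificates must be signed by the same CA.
  #
  # rpc {
  #   driver = ssl
  #   certfile = "etc/certs/cert.pem"
  #   keyfile = "etc/certs/key.pem"       # keep file permissions at 600
  #   cacertfile = "etc/certs/cacert.pem"
  # }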

  rpc_send_timeout {
    desc {
      en: """Timeout for sending the RPC request."""
      zh: """发送 RPC 请求的超时时间。"""
    }
    label {
      en: "RPC Send Timeout"
      zh: "RPC 发送超时时间"
    }
  }

  rpc_authentication_timeout {
    desc {
      en: """Timeout for the remote node authentication."""
      zh: """远程节点认证的超时时间。"""
    }
    label {
      en: "RPC Authentication Timeout"
      zh: "RPC 认证超时时间"
    }
  }

  rpc_call_receive_timeout {
    desc {
      en: """Timeout for the reply to a synchronous RPC."""
      zh: """同步 RPC 的回复超时时间。"""
    }
    label {
      en: "RPC Call Receive Timeout"
      zh: "RPC 调用接收超时时间"
    }
  }

  rpc_socket_keepalive_idle {
    desc {
      en: """How long the connections between the brokers should remain open after the last message is sent."""
      zh: """broker 之间的连接在最后一条消息发送后保持打开的时间。"""
    }
    label {
      en: "RPC Socket Keepalive Idle"
      zh: "RPC Socket Keepalive Idle"
    }
  }

  rpc_socket_keepalive_interval {
    desc {
      en: """The interval between keepalive messages."""
      zh: """keepalive 消息的间隔。"""
    }
    label {
      en: "RPC Socket Keepalive Interval"
      zh: "RPC Socket Keepalive 间隔"
    }
  }

  rpc_socket_keepalive_count {
    desc {
      en: """How many keepalive probe messages can go unanswered
before the RPC connection is considered lost."""
      zh: """keepalive 探测消息未收到回复的最大次数,超过后认为 RPC 连接已经断开。"""
    }
    label {
      en: "RPC Socket Keepalive Count"
      zh: "RPC Socket Keepalive 次数"
    }
  }

  rpc_socket_sndbuf {
    desc {
      en: """TCP tuning parameters. TCP sending buffer size."""
      zh: """TCP 调节参数。TCP 发送缓冲区大小。"""
    }
    label {
      en: "RPC Socket Sndbuf"
      zh: "RPC 套接字发送缓冲区大小"
    }
  }

  rpc_socket_recbuf {
    desc {
      en: """TCP tuning parameters. TCP receiving buffer size."""
      zh: """TCP 调节参数。TCP 接收缓冲区大小。"""
    }
    label {
      en: "RPC Socket Recbuf"
      zh: "RPC 套接字接收缓冲区大小"
    }
  }

  rpc_socket_buffer {
    desc {
      en: """TCP tuning parameters. Socket buffer size in user mode."""
      zh: """TCP 调节参数。用户模式套接字缓冲区大小。"""
    }
    label {
      en: "RPC Socket Buffer"
      zh: "RPC 套接字缓冲区大小"
    }
  }

  rpc_insecure_fallback {
    desc {
      en: """Enable compatibility with old RPC authentication."""
      zh: """兼容旧的无鉴权模式。"""
    }
    label {
      en: "RPC insecure fallback"
      zh: "向后兼容旧的无鉴权模式"
    }
  }

  log_file_handlers {
    desc {
      en: """File-based log handlers."""
      zh: """输出到文件的日志处理进程列表。"""
    }
    label {
      en: "File Handler"
      zh: "File Handler"
    }
  }

  common_handler_enable {
    desc {
      en: """Enable this log handler."""
      zh: """启用此日志处理进程。"""
    }
    label {
      en: "Enable Log Handler"
      zh: "启用日志处理进程"
    }
  }

  common_handler_level {
    desc {
      en: """
The log level for the current log handler.
Defaults to warning.
"""
      zh: """
当前日志处理进程的日志级别。
默认为 warning 级别。
"""
    }
    label {
      en: "Log Level"
      zh: "日志级别"
    }
  }

  common_handler_time_offset {
    desc {
      en: """
The time offset to be used when formatting the timestamp.
Can be one of:
- <code>system</code>: the time offset used by the local system
- <code>utc</code>: the UTC time offset
- <code>+-[hh]:[mm]</code>: user specified time offset, such as "-02:00" or "+00:00"
Defaults to: <code>system</code>.
"""
      zh: """
日志中的时间戳使用的时间偏移量。
可选值为:
- <code>system</code>: 本地系统使用的时区偏移量
- <code>utc</code>: 0 时区的偏移量
- <code>+-[hh]:[mm]</code>: 自定义偏移量,比如 "-02:00" 或者 "+00:00"
默认值为本地系统的时区偏移量:<code>system</code>。
"""
    }
    label {
      en: "Time Offset"
      zh: "时间偏移量"
    }
  }

  common_handler_chars_limit {
    desc {
      en: """
Set the maximum length of a single log message. If this length is exceeded, the log message will be truncated.
NOTE: When the formatter is JSON, truncation may produce incomplete JSON data, which is not recommended.
"""
      zh: """
设置单个日志消息的最大长度。如果超过此长度,则日志消息将被截断。最小可设置的长度为100。
注意:如果日志格式为 JSON,限制字符长度可能会导致截断不完整的 JSON 数据。
"""
    }
    label {
      en: "Single Log Max Length"
      zh: "单条日志长度限制"
    }
  }

  common_handler_formatter {
    desc {
      en: """Choose log formatter. <code>text</code> for free text, and <code>json</code> for structured logging."""
      zh: """选择日志格式类型。<code>text</code> 用于纯文本,<code>json</code> 用于结构化日志记录。"""
    }
    label {
      en: "Log Formatter"
      zh: "日志格式类型"
    }
  }

  common_handler_single_line {
    desc {
      en: """Print logs in a single line if set to true. Otherwise, log messages may span multiple lines."""
      zh: """如果设置为 true,则单行打印日志。否则,日志消息可能跨越多行。"""
    }
    label {
      en: "Single Line Mode"
      zh: "单行模式"
    }
  }

  common_handler_sync_mode_qlen {
    desc {
      en: """As long as the number of buffered log events is lower than this value,
all log events are handled asynchronously. This means that the client process sending the log event,
by calling a log function in the Logger API, does not wait for a response from the handler
but continues executing immediately after the event is sent.
It is not affected by the time it takes the handler to print the event to the log device.
If the message queue grows larger than this value,
the handler starts handling log events synchronously instead,
meaning that the client process sending the event must wait for a response.
When the handler reduces the message queue to a level below the sync_mode_qlen threshold,
asynchronous operation is resumed.
"""
      zh: """只要缓冲的日志事件的数量低于这个值,所有的日志事件都会被异步处理。
这意味着,日志落地速度不会影响正常的业务进程,因为它们不需要等待日志处理进程的响应。
如果消息队列的增长超过了这个值,处理程序开始同步处理日志事件。也就是说,发送事件的客户进程必须等待响应。
当处理程序将消息队列减少到低于sync_mode_qlen阈值的水平时,异步操作就会恢复。
默认为100条信息,当等待的日志事件大于100条时,就开始同步处理日志。"""
    }
    label {
      en: "Queue Length before Entering Sync Mode"
      zh: "进入同步模式的队列长度"
    }
  }

  common_handler_drop_mode_qlen {
    desc {
      en: """When the number of buffered log events is larger than this value, the new log events are dropped.
When drop mode is activated or deactivated, a message is printed in the logs."""
      zh: """当缓冲的日志事件数大于此值时,新的日志事件将被丢弃。起到过载保护的功能。
为了使过载保护算法正常工作必须要:<code> sync_mode_qlen =< drop_mode_qlen =< flush_qlen </code> 且 drop_mode_qlen > 1
要禁用某些模式,请执行以下操作。
- 如果sync_mode_qlen被设置为0,所有的日志事件都被同步处理。也就是说,异步日志被禁用。
- 如果sync_mode_qlen被设置为与drop_mode_qlen相同的值,同步模式被禁用。也就是说,处理程序总是以异步模式运行,除非调用drop或flush。
- 如果drop_mode_qlen被设置为与flush_qlen相同的值,则drop模式被禁用,永远不会发生。
"""
    }
    label {
      en: "Queue Length before Entering Drop Mode"
      zh: "进入丢弃模式的队列长度"
    }
  }

  common_handler_flush_qlen {
    desc {
      en: """If the number of buffered log events grows larger than this threshold, a flush (delete) operation takes place.
To flush events, the handler discards the buffered log messages without logging."""
      zh: """如果缓冲日志事件的数量增长大于此阈值,则会发生冲刷(删除)操作。日志处理进程会丢弃缓冲的日志消息,
以避免自身因内存暴涨而影响其它业务进程。日志内容会提醒有多少事件被删除。"""
    }
    label {
      en: "Flush Threshold"
      zh: "冲刷阈值"
    }
  }
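  # Worked example of the queue thresholds above (illustrative numbers only):
  # with sync_mode_qlen = 100, drop_mode_qlen = 3000 and flush_qlen = 8000, the
  # handler works asynchronously below 100 buffered events, synchronously between
  # 100 and 3000, drops new events above 3000, and flushes the whole buffer at 8000.
  # The constraint sync_mode_qlen =< drop_mode_qlen =< flush_qlen must hold.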

  common_handler_supervisor_reports {
    desc {
      en: """
Type of supervisor reports that are logged. Defaults to <code>error</code>.
- <code>error</code>: only log errors in the Erlang processes.
- <code>progress</code>: log process startup.
"""
      zh: """
Supervisor 报告的类型。默认为 error 类型。
- <code>error</code>:仅记录 Erlang 进程中的错误。
- <code>progress</code>:除了 error 信息外,还需要记录进程启动的详细信息。
"""
    }
    label {
      en: "Report Type"
      zh: "报告类型"
    }
  }

  common_handler_max_depth {
    desc {
      en: """Maximum depth for Erlang term log formatting and Erlang process message queue inspection."""
      zh: """Erlang 内部格式日志格式化和 Erlang 进程消息队列检查的最大深度。"""
    }
    label {
      en: "Max Depth"
      zh: "最大深度"
    }
  }

  log_file_handler_file {
    desc {
      en: """Name of the log file."""
      zh: """日志文件路径及名字。"""
    }
    label {
      en: "Log File Name"
      zh: "日志文件名字"
    }
  }

  log_file_handler_max_size {
    desc {
      en: """This parameter controls log file rotation. The value `infinity` means the log file will grow indefinitely, otherwise the log file will be rotated once it reaches `max_size` in bytes."""
      zh: """此参数控制日志文件轮换。`infinity` 意味着日志文件将无限增长,否则日志文件将在达到 `max_size`(以字节为单位)时进行轮换。
与 rotation count 配合使用。如果 count 为 10,则最多轮换 10 个文件。
"""
    }
    label {
      en: "Rotation Size"
      zh: "日志文件轮换大小"
    }
  }

  log_rotation_enable {
    desc {
      en: """Enable log rotation feature."""
      zh: """启用日志轮换功能。启动后生成日志文件后缀会加上对应的索引数字,比如:log/emqx.log.1。
系统会默认生成<code>*.siz/*.idx</code>用于记录日志位置,请不要手动修改这两个文件。
"""
    }
    label {
      en: "Rotation Enable"
      zh: "日志轮换"
    }
  }

  log_rotation_count {
    desc {
      en: """Maximum number of log files."""
      zh: """轮换的最大日志文件数。"""
    }
    label {
      en: "Max Log Files Number"
      zh: "最大日志文件数"
    }
  }
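  # Illustrative example (assumed handler name "default", placeholder values):
  # a file handler combining the file, max_size and rotation fields above.
  #
  # log.file_handlers.default {
  #   enable = true
  #   level = warning
  #   file = "log/emqx.log"
  #   max_size = 50MB
  #   rotation {enable = true, count = 10}
  # }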

  log_overload_kill_enable {
    desc {
      en: """Enable log handler overload kill feature."""
      zh: """日志处理进程过载时,强制杀死该进程,以保护节点上其它业务的正常运行。"""
    }
    label {
      en: "Log Handler Overload Kill"
      zh: "日志处理进程过载保护"
    }
  }

  log_overload_kill_mem_size {
    desc {
      en: """Maximum memory size that the log handler process is allowed to use."""
      zh: """日志处理进程允许使用的最大内存。"""
    }
    label {
      en: "Log Handler Max Memory Size"
      zh: "日志处理进程允许使用的最大内存"
    }
  }

  log_overload_kill_qlen {
    desc {
      en: """Maximum allowed queue length."""
      zh: """允许的最大队列长度。"""
    }
    label {
      en: "Max Queue Length"
      zh: "最大队列长度"
    }
  }

  log_overload_kill_restart_after {
    desc {
      en: """If the handler is terminated, it restarts automatically after a delay specified in milliseconds. The value `infinity` prevents restarts."""
      zh: """如果处理进程终止,它会在指定的延迟时间后自动重新启动。`infinity` 表示不自动重启。"""
    }
    label {
      en: "Handler Restart Timer"
      zh: "处理进程重启机制"
    }
  }

  log_burst_limit_enable {
    desc {
      en: """Enable log burst control feature."""
      zh: """启用日志限流保护机制。"""
    }
    label {
      en: "Enable Burst"
      zh: "日志限流保护"
    }
  }

  log_burst_limit_max_count {
    desc {
      en: """Maximum number of log events to handle within a `window_time` interval. After the limit is reached, successive events are dropped until the end of the `window_time`."""
      zh: """在 `window_time` 间隔内处理的最大日志事件数。达到限制后,将丢弃连续事件,直到 `window_time` 结束。"""
    }
    label {
      en: "Events Number"
      zh: "日志事件数"
    }
  }

  log_burst_limit_window_time {
    desc {
      en: """See <code>max_count</code>."""
      zh: """参考 <code>max_count</code>。"""
    }
    label {
      en: "Window Time"
      zh: "Window Time"
    }
  }
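  # Worked example of burst limiting (illustrative numbers, assumed nesting under
  # a file handler): with max_count = 10000 and window_time = 1s, at most 10000
  # log events are handled per second; further events in the same window are dropped.
  #
  # log.file_handlers.default.burst_limit {
  #   enable = true
  #   max_count = 10000
  #   window_time = 1s
  # }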

  authorization {
    desc {
      en: """
Authorization a.k.a. ACL.<br/>
In EMQX, MQTT client access control is extremely flexible.<br/>
An out-of-the-box set of authorization data sources are supported.
For example,<br/>
'file' source is to support concise and yet generic ACL rules in a file;<br/>
'built_in_database' source can be used to store per-client customizable rule sets,
natively in the EMQX node;<br/>
'http' source to make EMQX call an external HTTP API to make the decision;<br/>
'PostgreSQL' etc. to look up clients or rules from external databases;<br/>
"""
      zh: """授权(ACL)。EMQX 支持完整的客户端访问控制(ACL)。<br/>"""
    }
    label {
      en: "Authorization"
      zh: "授权"
    }
  }

  desc_cluster {
    desc {
      en: """EMQX nodes can form a cluster to scale up the total capacity.<br/>
Here holds the configs to instruct how individual nodes can discover each other."""
      zh: """EMQX 节点可以组成一个集群,以提高总容量。<br/> 这里指定了节点之间如何连接。"""
    }
    label {
      en: "Cluster"
      zh: "集群"
    }
  }

  desc_cluster_static {
    desc {
      en: """Service discovery via static nodes.
The new node joins the cluster by connecting to one of the bootstrap nodes."""
      zh: """静态节点服务发现。新节点通过连接一个节点来加入集群。"""
    }
    label {
      en: "Cluster Static"
      zh: "静态节点服务发现"
    }
  }

  desc_cluster_mcast {
    desc {
      en: """Service discovery via UDP multicast."""
      zh: """UDP 组播服务发现。"""
    }
    label {
      en: "Cluster Multicast"
      zh: "UDP 组播服务发现"
    }
  }

  desc_cluster_dns {
    desc {
      en: """Service discovery via DNS SRV records."""
      zh: """DNS SRV 记录服务发现。"""
    }
    label {
      en: "Cluster DNS"
      zh: "DNS SRV 记录服务发现"
    }
  }

  desc_cluster_etcd {
    desc {
      en: """Service discovery using 'etcd' service."""
      zh: """使用 'etcd' 服务的服务发现。"""
    }
    label {
      en: "Cluster Etcd"
      zh: "'etcd' 服务的服务发现"
    }
  }

  desc_cluster_k8s {
    desc {
      en: """Service discovery via Kubernetes API server."""
      zh: """Kubernetes 服务发现。"""
    }
    label {
      en: "Cluster Kubernetes"
      zh: "Kubernetes 服务发现"
    }
  }

  desc_node {
    desc {
      en: """Node name, cookie, config & data directories and the Erlang virtual machine (BEAM) boot parameters."""
      zh: """节点名称、Cookie、配置文件、数据目录和 Erlang 虚拟机(BEAM)启动参数。"""
    }
    label {
      en: "Node"
      zh: "节点"
    }
  }

  desc_db {
    desc {
      en: """Settings for the embedded database."""
      zh: """内置数据库的配置。"""
    }
    label {
      en: "Database"
      zh: "数据库"
    }
  }

  desc_cluster_call {
    desc {
      en: """Options for the 'cluster call' feature that allows to execute a callback on all nodes in the cluster."""
      zh: """集群调用功能的选项。"""
    }
    label {
      en: "Cluster Call"
      zh: "集群调用"
    }
  }

  desc_rpc {
    desc {
      en: """EMQX uses a library called <code>gen_rpc</code> for inter-broker communication.<br/>
Most of the time the default config should work,
but in case you need to do performance fine-tuning or experiment a bit,
this is where to look."""
      zh: """EMQX 使用 <code>gen_rpc</code> 库来实现跨节点通信。<br/>
大多数情况下,默认的配置应该可以工作,但如果你需要做一些性能优化或者实验,可以尝试调整这些参数。"""
    }
    label {
      en: "RPC"
      zh: "RPC"
    }
  }

  desc_log {
    desc {
      en: """EMQX logging supports multiple sinks for the log events.
Each sink is represented by a _log handler_, which can be configured independently."""
      zh: """EMQX 日志记录支持日志事件的多个接收器。每个接收器由一个_log handler_表示,可以独立配置。"""
    }
    label {
      en: "Log"
      zh: "日志"
    }
  }

  desc_console_handler {
    desc {
      en: """Log handler that prints log events to the EMQX console."""
      zh: """日志处理进程将日志事件打印到 EMQX 控制台。"""
    }
    label {
      en: "Console Handler"
      zh: "Console Handler"
    }
  }

  desc_log_file_handler {
    desc {
      en: """Log handler that prints log events to files."""
      zh: """日志处理进程将日志事件打印到文件。"""
    }
    label {
      en: "Files Log Handler"
      zh: "文件日志处理进程"
    }
  }

  desc_log_rotation {
    desc {
      en: """
By default, the logs are stored in `./log` directory (for installation from zip file) or in `/var/log/emqx` (for binary installation).<br/>
This section of the configuration controls the number of files kept for each log handler.
"""
      zh: """
默认情况下,日志存储在 `./log` 目录(用于从 zip 文件安装)或 `/var/log/emqx`(用于二进制安装)。<br/>
这部分配置,控制每个日志处理进程保留的文件数量。
"""
    }
    label {
      en: "Log Rotation"
      zh: "日志轮换"
    }
  }

  desc_log_overload_kill {
    desc {
      en: """
Log overload kill features an overload protection that activates when the log handlers use too much memory or have too many buffered log messages.<br/>
When the overload is detected, the log handler is terminated and restarted after a cooldown period.
"""
      zh: """
日志过载终止,具有过载保护功能。当日志处理进程使用过多内存,或者缓存的日志消息过多时该功能被激活。<br/>
检测到过载时,日志处理进程将终止,并在冷却期后重新启动。
"""
    }
    label {
      en: "Log Overload Kill"
      zh: "日志过载保护"
    }
  }

  desc_log_burst_limit {
    desc {
      en: """Large bursts of log events produced in a short time can potentially cause problems, such as:
- Log files grow very large
- Log files are rotated too quickly, and useful information gets overwritten
- Overall performance impact on the system

Log burst limit feature can temporarily disable logging to avoid these issues."""
      zh: """短时间内产生的大量日志事件可能会导致问题,例如:
- 日志文件变得非常大
- 日志文件轮换过快,有用信息被覆盖
- 对系统的整体性能影响

日志突发限制功能可以暂时禁用日志记录以避免这些问题。"""
    }
    label {
      en: "Log Burst Limit"
      zh: "日志突发限制"
    }
  }

  desc_authorization {
    desc {
      en: """Settings that control client authorization."""
      zh: """授权相关"""
    }
    label {
      en: "Authorization"
      zh: "授权"
    }
  }
}