Merge pull request #12079 from emqx/1201-docs-sync-i18n-changes

1201 docs sync i18n changes
This commit is contained in:
Zaiming (Stone) Shi 2023-12-01 16:53:25 +01:00 committed by GitHub
commit e5e8384515
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
8 changed files with 59 additions and 66 deletions

View File

@@ -95,6 +95,6 @@ desc_param_path_operation_on_node.desc:
 """Operations can be one of: stop, restart"""
 desc_param_path_operation_on_node.label:
-"""Node Operation """
+"""Node Operation"""
 }

View File

@@ -7,27 +7,27 @@ connect_timeout.label:
 """Connect Timeout"""
 producer_opts.desc:
-"""Local MQTT data source and Azure Event Hub bridge configs."""
+"""Local MQTT data source and Azure Event Hubs bridge configs."""
 producer_opts.label:
-"""MQTT to Azure Event Hub"""
+"""MQTT to Azure Event Hubs"""
 min_metadata_refresh_interval.desc:
-"""Minimum time interval the client has to wait before refreshing Azure Event Hub Kafka broker and topic metadata. Setting too small value may add extra load on Azure Event Hub."""
+"""Minimum time interval the client has to wait before refreshing Azure Event Hubs Kafka broker and topic metadata. Setting too small value may add extra load on Azure Event Hubs."""
 min_metadata_refresh_interval.label:
 """Min Metadata Refresh Interval"""
 kafka_producer.desc:
-"""Azure Event Hub Producer configuration."""
+"""Azure Event Hubs Producer configuration."""
 kafka_producer.label:
-"""Azure Event Hub Producer"""
+"""Azure Event Hubs Producer"""
 producer_buffer.desc:
 """Configure producer message buffer.
-Tell Azure Event Hub producer how to buffer messages when EMQX has more messages to send than Azure Event Hub can keep up, or when Azure Event Hub is down."""
+Tell Azure Event Hubs producer how to buffer messages when EMQX has more messages to send than Azure Event Hubs can keep up, or when Azure Event Hubs is down."""
 producer_buffer.label:
 """Message Buffer"""
@@ -45,7 +45,7 @@ socket_receive_buffer.label:
 """Socket Receive Buffer Size"""
 socket_tcp_keepalive.desc:
-"""Enable TCP keepalive for Azure Event Hub bridge connections.
+"""Enable TCP keepalive for Azure Event Hubs bridge connections.
 The value is three comma separated numbers in the format of 'Idle,Interval,Probes'
 - Idle: The number of seconds a connection needs to be idle before the server begins to send out keep-alive probes (Linux default 7200).
 - Interval: The number of seconds between TCP keep-alive probes (Linux default 75).
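The 'Idle,Interval,Probes' format described above is a single comma-separated string value. A minimal sketch of how it might look in a bridge config (the bridge name and key path are assumptions for illustration, not taken from this diff):

```hocon
# Hypothetical fragment; "my_eh_bridge" and the nesting are illustrative only.
bridges.azure_event_hub_producer.my_eh_bridge {
  socket_opts {
    # Idle,Interval,Probes: start probing after 240 s idle,
    # probe every 30 s, drop the connection after 5 unanswered probes
    tcp_keepalive = "240,30,5"
  }
}
```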
@@ -63,16 +63,16 @@ desc_name.label:
 """Bridge Name"""
 producer_kafka_opts.desc:
-"""Azure Event Hub producer configs."""
+"""Azure Event Hubs producer configs."""
 producer_kafka_opts.label:
-"""Azure Event Hub Producer"""
+"""Azure Event Hubs Producer"""
 kafka_topic.desc:
-"""Event Hub name"""
+"""Event Hubs name"""
 kafka_topic.label:
-"""Event Hub Name"""
+"""Event Hubs Name"""
 kafka_message_timestamp.desc:
 """Which timestamp to use. The timestamp is expected to be a millisecond precision Unix epoch which can be in string format, e.g. <code>1661326462115</code> or <code>'1661326462115'</code>. When the desired data field for this template is not found, or if the found data is not a valid integer, the current system timestamp will be used."""
@@ -97,21 +97,21 @@ socket_opts.label:
 """Socket Options"""
 partition_count_refresh_interval.desc:
-"""The time interval for Azure Event Hub producer to discover increased number of partitions.
-After the number of partitions is increased in Azure Event Hub, EMQX will start taking the
+"""The time interval for Azure Event Hubs producer to discover increased number of partitions.
+After the number of partitions is increased in Azure Event Hubs, EMQX will start taking the
 discovered partitions into account when dispatching messages per <code>partition_strategy</code>."""
 partition_count_refresh_interval.label:
 """Partition Count Refresh Interval"""
 max_batch_bytes.desc:
-"""Maximum bytes to collect in an Azure Event Hub message batch. Most of the Kafka brokers default to a limit of 1 MB batch size. EMQX's default value is less than 1 MB in order to compensate Kafka message encoding overheads (especially when each individual message is very small). When a single message is over the limit, it is still sent (as a single element batch)."""
+"""Maximum bytes to collect in an Azure Event Hubs message batch."""
 max_batch_bytes.label:
 """Max Batch Bytes"""
 required_acks.desc:
-"""Required acknowledgements for Azure Event Hub partition leader to wait for its followers before it sends back the acknowledgement to EMQX Azure Event Hub producer
+"""Required acknowledgements for Azure Event Hubs partition leader to wait for its followers before it sends back the acknowledgement to EMQX Azure Event Hubs producer
 <code>all_isr</code>: Require all in-sync replicas to acknowledge.
 <code>leader_only</code>: Require only the partition-leader's acknowledgement."""
@@ -120,7 +120,7 @@ required_acks.label:
 """Required Acks"""
 kafka_headers.desc:
-"""Please provide a placeholder to be used as Azure Event Hub Headers<br/>
+"""Please provide a placeholder to be used as Azure Event Hubs Headers<br/>
 e.g. <code>${pub_props}</code><br/>
 Notice that the value of the placeholder must either be an object:
 <code>{\"foo\": \"bar\"}</code>
@@ -128,39 +128,39 @@ or an array of key-value pairs:
 <code>[{\"key\": \"foo\", \"value\": \"bar\"}]</code>"""
 kafka_headers.label:
-"""Azure Event Hub Headers"""
+"""Azure Event Hubs Headers"""
 producer_kafka_ext_headers.desc:
-"""Please provide more key-value pairs for Azure Event Hub headers<br/>
+"""Please provide more key-value pairs for Azure Event Hubs headers<br/>
 The key-value pairs here will be combined with the
-value of <code>kafka_headers</code> field before sending to Azure Event Hub."""
+value of <code>kafka_headers</code> field before sending to Azure Event Hubs."""
 producer_kafka_ext_headers.label:
-"""Extra Azure Event Hub headers"""
+"""Extra Azure Event Hubs headers"""
 producer_kafka_ext_header_key.desc:
-"""Key of the Azure Event Hub header. Placeholders in format of ${var} are supported."""
+"""Key of the Azure Event Hubs header. Placeholders in format of ${var} are supported."""
 producer_kafka_ext_header_key.label:
-"""Azure Event Hub extra header key."""
+"""Azure Event Hubs extra header key."""
 producer_kafka_ext_header_value.desc:
-"""Value of the Azure Event Hub header. Placeholders in format of ${var} are supported."""
+"""Value of the Azure Event Hubs header. Placeholders in format of ${var} are supported."""
 producer_kafka_ext_header_value.label:
 """Value"""
 kafka_header_value_encode_mode.desc:
-"""Azure Event Hub headers value encode mode<br/>
-- NONE: only add binary values to Azure Event Hub headers;<br/>
-- JSON: only add JSON values to Azure Event Hub headers,
+"""Azure Event Hubs headers value encode mode<br/>
+- NONE: only add binary values to Azure Event Hubs headers;<br/>
+- JSON: only add JSON values to Azure Event Hubs headers,
 and encode it to JSON strings before sending."""
 kafka_header_value_encode_mode.label:
-"""Azure Event Hub headers value encode mode"""
+"""Azure Event Hubs headers value encode mode"""
 metadata_request_timeout.desc:
-"""Maximum wait time when fetching metadata from Azure Event Hub."""
+"""Maximum wait time when fetching metadata from Azure Event Hubs."""
 metadata_request_timeout.label:
 """Metadata Request Timeout"""
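Taken together, the header-related fields documented in this hunk might be combined roughly as follows. This is a hedged sketch: the config key names are inferred from the i18n keys above and may differ from the real schema.

```hocon
# Illustrative only; placeholder syntax (${var}) is from the descriptions above.
kafka_headers = "${pub_props}"         # must render to an object or key-value array
kafka_ext_headers = [
  # extra pair merged with kafka_headers before sending
  {kafka_ext_header_key = "clientid", kafka_ext_header_value = "${clientid}"}
]
kafka_header_value_encode_mode = json  # encode non-binary header values as JSON strings
```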
@@ -220,52 +220,52 @@ config_enable.label:
 """Enable or Disable"""
 desc_config.desc:
-"""Configuration for an Azure Event Hub bridge."""
+"""Configuration for an Azure Event Hubs bridge."""
 desc_config.label:
-"""Azure Event Hub Bridge Configuration"""
+"""Azure Event Hubs Bridge Configuration"""
 buffer_per_partition_limit.desc:
-"""Number of bytes allowed to buffer for each Azure Event Hub partition. When this limit is exceeded, old messages will be dropped in a trade for credits for new messages to be buffered."""
+"""Number of bytes allowed to buffer for each Azure Event Hubs partition. When this limit is exceeded, old messages will be dropped in a trade for credits for new messages to be buffered."""
 buffer_per_partition_limit.label:
 """Per-partition Buffer Limit"""
 bootstrap_hosts.desc:
-"""A comma separated list of Azure Event Hub Kafka <code>host[:port]</code> namespace endpoints to bootstrap the client. Default port number is 9093."""
+"""A comma separated list of Azure Event Hubs Kafka <code>host[:port]</code> namespace endpoints to bootstrap the client. Default port number is 9093."""
 bootstrap_hosts.label:
 """Bootstrap Hosts"""
 kafka_message_key.desc:
-"""Template to render Azure Event Hub message key. If the template is rendered into a NULL value (i.e. there is no such data field in Rule Engine context) then Azure Event Hub's <code>NULL</code> (but not empty string) is used."""
+"""Template to render Azure Event Hubs message key. If the template is rendered into a NULL value (i.e. there is no such data field in Rule Engine context) then Azure Event Hubs's <code>NULL</code> (but not empty string) is used."""
 kafka_message_key.label:
 """Message Key"""
 kafka_message.desc:
-"""Template to render an Azure Event Hub message."""
+"""Template to render an Azure Event Hubs message."""
 kafka_message.label:
-"""Azure Event Hub Message Template"""
+"""Azure Event Hubs Message Template"""
 mqtt_topic.desc:
-"""MQTT topic or topic filter as data source (bridge input). If rule action is used as data source, this config should be left empty, otherwise messages will be duplicated in Azure Event Hub."""
+"""MQTT topic or topic filter as data source (bridge input). If rule action is used as data source, this config should be left empty, otherwise messages will be duplicated in Azure Event Hubs."""
 mqtt_topic.label:
 """Source MQTT Topic"""
 kafka_message_value.desc:
-"""Template to render Azure Event Hub message value. If the template is rendered into a NULL value (i.e. there is no such data field in Rule Engine context) then Azure Event Hub's <code>NULL</code> (but not empty string) is used."""
+"""Template to render Azure Event Hubs message value. If the template is rendered into a NULL value (i.e. there is no such data field in Rule Engine context) then Azure Event Hubs' <code>NULL</code> (but not empty string) is used."""
 kafka_message_value.label:
 """Message Value"""
 partition_strategy.desc:
-"""Partition strategy is to tell the producer how to dispatch messages to Azure Event Hub partitions.
+"""Partition strategy is to tell the producer how to dispatch messages to Azure Event Hubs partitions.
 <code>random</code>: Randomly pick a partition for each message
-<code>key_dispatch</code>: Hash Azure Event Hub message key to a partition number"""
+<code>key_dispatch</code>: Hash Azure Event Hubs message key to a partition number"""
 partition_strategy.label:
 """Partition Strategy"""
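A sketch tying several of the fields in this hunk together. The endpoint, topic name, and field names are assumptions for illustration (field names follow the i18n keys above and may differ from the real bridge schema):

```hocon
# Hypothetical Event Hubs producer bridge fragment.
bootstrap_hosts = "myns.servicebus.windows.net:9093"  # comma separated host[:port] list
kafka_topic = "my-event-hub"                          # Event Hubs name
partition_strategy = key_dispatch                     # hash the message key to a partition
kafka_message_key = "${clientid}"                     # renders to NULL if the field is missing
```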
@@ -278,7 +278,7 @@ buffer_segment_bytes.label:
 """Segment File Bytes"""
 max_inflight.desc:
-"""Maximum number of batches allowed for Azure Event Hub producer (per-partition) to send before receiving acknowledgement from Azure Event Hub. Greater value typically means better throughput. However, there can be a risk of message reordering when this value is greater than 1."""
+"""Maximum number of batches allowed for Azure Event Hubs producer (per-partition) to send before receiving acknowledgement from Azure Event Hubs. Greater value typically means better throughput. However, there can be a risk of message reordering when this value is greater than 1."""
 max_inflight.label:
 """Max Inflight"""
@@ -308,25 +308,25 @@ auth_username_password.label:
 """Username/password Auth"""
 auth_sasl_password.desc:
-"""The Connection String for connecting to Azure Event Hub. Should be the "connection string-primary key" of a Namespace shared access policy."""
+"""The Connection String for connecting to Azure Event Hubs. Should be the "connection string-primary key" of a Namespace shared access policy."""
 auth_sasl_password.label:
 """Connection String"""
 producer_kafka_opts.desc:
-"""Azure Event Hub producer configs."""
+"""Azure Event Hubs producer configs."""
 producer_kafka_opts.label:
-"""Azure Event Hub Producer"""
+"""Azure Event Hubs Producer"""
 desc_config.desc:
-"""Configuration for an Azure Event Hub bridge."""
+"""Configuration for an Azure Event Hubs bridge."""
 desc_config.label:
-"""Azure Event Hub Bridge Configuration"""
+"""Azure Event Hubs Bridge Configuration"""
 ssl_client_opts.desc:
-"""TLS/SSL options for Azure Event Hub client."""
+"""TLS/SSL options for Azure Event Hubs client."""
 ssl_client_opts.label:
 """TLS/SSL options"""

View File

@@ -106,6 +106,6 @@ desc_param_path_operation_on_node.desc:
 """Operation can be one of: 'start'."""
 desc_param_path_operation_on_node.label:
-"""Node Operation """
+"""Node Operation"""
 }

View File

@@ -94,6 +94,6 @@ desc_param_path_operation_on_node.desc:
 """Operation can be one of: 'start'."""
 desc_param_path_operation_on_node.label:
-"""Node Operation """
+"""Node Operation"""
 }

View File

@@ -8,7 +8,7 @@ desc_connectors.label:
 connector_field.desc:
-"""Name of connector used to connect to the resource where the action is to be performed."""
+"""Name of the connector specified by the action, used for external resource selection."""
 connector_field.label:
 """Connector"""

View File

@@ -6,7 +6,7 @@ This is used to limit the connection rate for this node.
 Once the limit is reached, new connections will be deferred or refused.<br/>
 For example:<br/>
 - <code>1000/s</code> :: Only accepts 1000 connections per second<br/>
-- <code>1000/10s</code> :: Only accepts 1000 connections every 10 seconds"""
+- <code>1000/10s</code> :: Only accepts 1000 connections every 10 seconds."""
 max_conn_rate.label:
 """Maximum Connection Rate"""
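The `count/duration` rate format documented in this hunk could appear in a listener config roughly as follows. The listener path is an assumption for illustration:

```hocon
# Hypothetical listener fragment; "tcp.default" is illustrative.
listeners.tcp.default {
  max_conn_rate = "1000/10s"  # accept at most 1000 new connections per 10 seconds
}
```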

View File

@@ -12,13 +12,6 @@ batch_time.desc:
 batch_time.label:
 """Max batch wait time"""
-buffer_mode.desc:
-"""Buffer operation mode.
-<code>memory_only</mode>: Buffer all messages in memory.<code>volatile_offload</code>: Buffer message in memory first, when up to certain limit (see <code>buffer_seg_bytes</code> config for more information), then start offloading messages to disk"""
-buffer_mode.label:
-"""Buffer Mode"""
-buffer_seg_bytes.desc:
-"""Applicable when buffer mode is set to <code>volatile_offload</code>.
-This value is to specify the size of each on-disk buffer file."""

View File

@@ -573,7 +573,7 @@ fields_tcp_opts_buffer.label:
 """TCP user-space buffer"""
 server_ssl_opts_schema_honor_cipher_order.desc:
-"""An important security setting, it forces the cipher to be set based
+"""An important security setting. It forces the cipher to be set based
 on the server-specified order instead of the client-specified order,
 hence enforcing the (usually more properly configured) security
 ordering of the server administrator."""
@@ -1012,13 +1012,13 @@ fields_ws_opts_supported_subprotocols.label:
 broker_shared_subscription_strategy.desc:
 """Dispatch strategy for shared subscription.
-- `random`: dispatch the message to a random selected subscriber
-- `round_robin`: select the subscribers in a round-robin manner
-- `round_robin_per_group`: select the subscribers in round-robin fashion within each shared subscriber group
-- `local`: select random local subscriber otherwise select random cluster-wide
-- `sticky`: always use the last selected subscriber to dispatch, until the subscriber disconnects.
-- `hash_clientid`: select the subscribers by hashing the `clientIds`
-- `hash_topic`: select the subscribers by hashing the source topic"""
+- `random`: Randomly select a subscriber for dispatch;
+- `round_robin`: Messages from a single publisher are dispatched to subscribers in turn;
+- `round_robin_per_group`: All messages are dispatched to subscribers in turn;
+- `local`: Randomly select a subscriber on the current node, if there are no subscribers on the current node, then randomly select within the cluster;
+- `sticky`: Continuously dispatch messages to the initially selected subscriber until their session ends;
+- `hash_clientid`: Hash the publisher's client ID to select a subscriber;
+- `hash_topic`: Hash the publishing topic to select a subscriber."""
fields_deflate_opts_mem_level.desc:
"""Specifies the size of the compression state.<br/>
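The shared-subscription dispatch strategies documented above are selected by a single enum value. A sketch, with the exact config path being an assumption:

```hocon
# Hypothetical fragment; the "broker" path is illustrative.
broker {
  # one of: random | round_robin | round_robin_per_group | local
  #         | sticky | hash_clientid | hash_topic
  shared_subscription_strategy = round_robin_per_group
}
```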