Fixes https://emqx.atlassian.net/browse/EMQX-10654
Despite loading the application in `nodetool`, we need to invoke `:module_info()` to force
the module to actually be loaded, especially when `call_hocon` is used to generate
the initial configuration. Otherwise, it fails to list all the bridge schemas.
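A minimal sketch of the approach (the helper name and module list are illustrative): calling `Mod:module_info/0` makes the code server load the module as a side effect if it is not loaded yet.
```erlang
%% Force-load a list of schema modules before generating the initial
%% configuration; Mod:module_info() triggers code loading as a side effect.
%% (Illustrative helper; not the actual nodetool code.)
ensure_loaded(Mods) ->
    lists:foreach(fun(Mod) -> _ = Mod:module_info() end, Mods).
```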
Fixes https://emqx.atlassian.net/browse/EMQX-10629
During health checks, we verify whether the tables referenced in the SQL
statement exist. This check was done by asking the backend to parse the
statement as a named prepared statement. Concurrent health checks
could then result in the error:
```erlang
{error,{error,error,<<"42P05">>,duplicate_prepared_statement,<<"prepared statement \"get_status\" already exists">>,[{file,<<"prepare.c">>},{line,<<"451">>},{routine,<<"StorePreparedStatement">>},{severity,<<"ERROR">>}]}}
```
This could leave the driver process in an inconsistent state, causing it
to crash later when a message from the backend (`READY_FOR_QUERY`, "idle") arrived:
```
2023-07-24T13:05:58.892043+00:00 [error] Generic server <0.2134.0> terminating. Reason: {'module could not be loaded',[{undefined,handle_message,[90,<<"I">>,...
```
Added calls to `epgsql:sync/1` for functions that could return
`{error, sync_required}`.
Also, redundant calls to `parse2` were removed to reduce the number of requests.
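A sketch of the recovery pattern, assuming a hypothetical wrapper around the health-check call (not the actual driver code):
```erlang
%% If the backend reports that the protocol is out of sync, call
%% epgsql:sync/1 so the connection returns to a clean state before
%% the next request is issued.
parse_with_sync(Conn, SQL) ->
    case epgsql:parse(Conn, SQL) of
        {error, sync_required} = Error ->
            ok = epgsql:sync(Conn),
            Error;
        Result ->
            Result
    end.
```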
Fixes https://emqx.atlassian.net/browse/EMQX-10361
- Moves `lib-ee/emqx_ee_schema_registry` to `apps/emqx_schema_registry`.
- Removes the `_ee_` segment from module names.
- The exception is the table names, which are kept to avoid breaking backwards compatibility.
As `emqx_ee_schema_registry` uses Mria tables (`schema_registry_shard`),
a node joining a cluster needs to restart this application in order to
restart the relevant Mria shard processes.
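A minimal sketch of that restart, assuming a post-join hook (the hook and function names are illustrative; the actual mechanism may differ):
```erlang
%% Restarting the application restarts its supervision tree, which
%% re-creates the schema_registry_shard Mria processes on the node
%% that just joined the cluster.
on_node_joined(_Node) ->
    ok = application:stop(emqx_schema_registry),
    ok = application:start(emqx_schema_registry).
```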
Fixes https://emqx.atlassian.net/browse/EMQX-10408
From an old conversation with @kjellwinblad (a sketch of the resulting change follows the quote):
> There are 3 pool sizes:
> - The buffer workers pool size, just exposed here: https://github.com/emqx/emqx/pull/9742
> - The `topology.pool_size`, which controls the pool size for the `poolboy_pool` in Kjell's
>   diagram (on MongoDB's side).
> - The `pool_size` from `emqx_connector_mongo:mongo_fields`, which controls the ecpool pool
>   size (on EMQX's side).
>
> So we actually want to set `topology.pool_size = 1` and hide it from users.
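A sketch of the resulting option plumbing (the function name and option shape are assumptions):
```erlang
%% Pin the driver-side topology pool to a single worker regardless of
%% user input; concurrency on the EMQX side is still governed by the
%% ecpool pool_size.
topology_opts(UserOpts) when is_map(UserOpts) ->
    maps:merge(UserOpts, #{pool_size => 1}).
```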
Fixes https://emqx.atlassian.net/browse/EMQX-10279
Related: https://github.com/emqx/emqx/pull/11038
Since the wolff client has its own replayq that lives outside the management of the buffer
workers, we must not return a `disconnected` status for such bridges: otherwise, the resource
manager will eventually kill the producers and data may be lost.
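A sketch of the status mapping (the probe helper is hypothetical; `on_get_status/2` is the resource health-check callback):
```erlang
%% Report connecting instead of disconnected on a failed probe, so the
%% resource manager keeps the producers (and their replayq buffers)
%% alive while the connection recovers.
on_get_status(_InstId, State) ->
    case probe_client(State) of  %% hypothetical health probe
        ok -> connected;
        {error, _} -> connecting
    end.
```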
Fixes https://emqx.atlassian.net/browse/EMQX-10278
Since the Pulsar client has its own replayq that lives outside the management of the buffer
workers, we must not return a `disconnected` status for such bridges: otherwise, the resource
manager will eventually kill the producers and data may be lost.
Fixes https://emqx.atlassian.net/browse/EMQX-10228
This is a cosmetic fix for the Pulsar Producer bridge health check status.
The Pulsar connection process is asynchronous, so when a bridge of this type is
created or updated (which is the same as stopping and re-creating it), its immediate
status will be `connecting`, because it is indeed still connecting. The bridge will connect
very soon afterwards (assuming there is no true network/config issue), but having to
refresh the UI to see the new status and/or seeing the resource alarm might annoy users.
This workaround adds a few retries to account for that effect, reducing the probability of
seeing the `connecting` state on such happy paths.
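A sketch of the retry loop (names, retry budget, and delay are illustrative):
```erlang
%% Poll the status a few times before reporting it, so a freshly
%% (re)created bridge that is still finishing its asynchronous
%% connection is not immediately reported as connecting.
status_with_retries(State) ->
    status_with_retries(State, 5).

status_with_retries(State, 0) ->
    probe_status(State);  %% hypothetical status probe
status_with_retries(State, Retries) ->
    case probe_status(State) of
        connected -> connected;
        _ ->
            timer:sleep(100),
            status_with_retries(State, Retries - 1)
    end.
```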
The MongoDB connector currently does not support batching,
so the `batch_size` option has no effect.
However, we cannot remove the field, so we choose to hide it from
the schema.
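A sketch of such a hidden field (the exact field options are assumptions about the schema DSL):
```erlang
%% Keep batch_size in the schema for compatibility, but hide it and pin
%% the default to 1, since the MongoDB connector does not batch.
{batch_size,
 hoconsc:mk(pos_integer(),
            #{default => 1, importance => hidden})}
```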
Fixes https://emqx.atlassian.net/browse/EMQX-9603
Rather than relying on a JWT worker to produce and refresh tokens, we
can simply produce them on demand when pushing messages to GCP
PubSub. That may generate a bit of extra work (as multiple processes
might decide it's time to refresh the JWT and do so concurrently), but it
shouldn't be much. In return, we avoid any possibility of not having
a fresh JWT when pushing messages.
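A sketch of the lazy refresh (the state shape and helper name are illustrative):
```erlang
%% Refresh the JWT at publish time when it is close to expiry; several
%% workers may refresh concurrently, but a fresh token is always
%% available when a message is pushed.
ensure_jwt(#{jwt := JWT, expires_at := Exp} = State) ->
    Now = erlang:system_time(second),
    case Exp - Now > 60 of
        true ->
            {JWT, State};
        false ->
            {NewJWT, NewExp} = generate_jwt(State),  %% hypothetical helper
            {NewJWT, State#{jwt := NewJWT, expires_at := NewExp}}
    end.
```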
* release-50:
fix(pulsar): use a binary duration as default `health_check_interval`
docs: add changelog entry
docs: clarify description of bridge username and password
chore: bump to v5.0.25
fix(limiter): adjust type for compatibility
fix(limiter): fix that updating the node-level limiter config was not working
chore: upgrade dashboard to v1.2.4-1 for ce
chore: upgrade rulesql to 0.1.6 to fix invalid utf8 input
chore: add changelog for 10659
fix: crash when sysmon.os.mem_check_interval = disabled
chore: bump influxdb version && update changes
refactor(influxdb): move influxdb bridge into its own app
chore: add listener default changelog
fix: ocsp cache SUITE failed
fix: ensure atom key for emqx_config:get
fix: only fill certfile default on server side
fix: authn init is empty
fix: bad listeners default ssl_options