Fixes https://emqx.atlassian.net/browse/EMQX-10629
During health checks, we check whether the tables referenced in the SQL statement
exist. This check was done by asking the backend to parse the
statement as a named prepared statement. Concurrent health checks
could then result in the error:
```erlang
{error,{error,error,<<"42P05">>,duplicate_prepared_statement,<<"prepared statement \"get_status\" already exists">>,[{file,<<"prepare.c">>},{line,<<"451">>},{routine,<<"StorePreparedStatement">>},{severity,<<"ERROR">>}]}}
```
This could leave the driver process in an inconsistent state, and it
would crash later upon receiving a message from the backend (`READY_FOR_QUERY`, "idle"):
```
2023-07-24T13:05:58.892043+00:00 [error] Generic server <0.2134.0> terminating. Reason: {'module could not be loaded',[{undefined,handle_message,[90,<<"I">>,...
```
Added calls to `epgsql:sync/1` after calls to functions that could return
`{error, sync_required}`.
Also removed redundant calls to `parse2` to reduce the number of requests.
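A minimal sketch of the sync handling described above, not the actual patch (the wrapper name `parse_with_sync/3` is ours):
```erlang
%% Follow any epgsql call that can fail with {error, sync_required}
%% with epgsql:sync/1, so the connection leaves the failed
%% extended-query state before the next request is sent.
parse_with_sync(Conn, Name, SQL) ->
    case epgsql:parse(Conn, Name, SQL, []) of
        {ok, Statement} ->
            {ok, Statement};
        {error, _} = Error ->
            %% Re-synchronize the protocol stream; without this, the
            %% driver process may be left in an inconsistent state and
            %% crash on the next message from the backend.
            _ = epgsql:sync(Conn),
            Error
    end.
```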
Fixes https://emqx.atlassian.net/browse/EMQX-10361
- Moves `lib-ee/emqx_ee_schema_registry` to `apps/emqx_schema_registry`.
- Removes the `_ee_` segment from module names.
- The exceptions are the table names, which are kept to avoid backward incompatibilities.

Since `emqx_ee_schema_registry` uses Mria tables (`schema_registry_shard`),
a node joining a cluster needs to restart this application in order to
restart the relevant Mria shard processes.
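As a rough illustration (the exact hook run on cluster join is not shown here), the restart amounts to:
```erlang
%% Illustrative only: restart the renamed application so the Mria shard
%% (schema_registry_shard) processes are started again under the new
%% cluster topology.
restart_schema_registry() ->
    ok = application:stop(emqx_schema_registry),
    {ok, _} = application:ensure_all_started(emqx_schema_registry),
    ok.
```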
Fixes https://emqx.atlassian.net/browse/EMQX-10408
From an old conversation with @kjellwinblad:
> There are 3 pool sizes:
> - The buffer workers pool size, just exposed here: https://github.com/emqx/emqx/pull/9742
> - The `topology.pool_size`, which controls the pool size for the poolboy_pool in Kjell's
>   diagram (on MongoDB's side).
> - The `pool_size` from `emqx_connector_mongo:mongo_fields`, which controls the ecpool pool
>   size (on EMQX's side).
>
> So we actually want to set `topology.pool_size = 1` and hide it from users.
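A hypothetical sketch of the resulting behavior (the function name is ours): the user-facing `pool_size` keeps controlling ecpool, while the driver-side pool is pinned to a single worker:
```erlang
%% Pin the driver-side (MongoDB topology) pool to one worker;
%% concurrency comes from ecpool on the EMQX side instead.
topology_options(UserTopology) when is_map(UserTopology) ->
    maps:to_list(UserTopology#{pool_size => 1}).
```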
Fixes https://emqx.atlassian.net/browse/EMQX-10279
Related: https://github.com/emqx/emqx/pull/11038
Since the wolff client has its own replayq that lives outside the management of the buffer
workers, we must not return a `disconnected` status for such a bridge: otherwise, the
resource manager will eventually kill the producers, and buffered data may be lost.
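A minimal sketch of the intended health-check behavior, with a hypothetical `check_client/1` helper standing in for the real wolff client probe:
```erlang
%% Hypothetical stand-in for the real wolff client liveness probe.
check_client(#{client_pid := Pid}) when is_pid(Pid) ->
    case is_process_alive(Pid) of
        true -> ok;
        false -> {error, not_alive}
    end;
check_client(_State) ->
    {error, no_client}.

on_get_status(_InstId, State) ->
    case check_client(State) of
        ok ->
            connected;
        {error, _} ->
            %% Never `disconnected`: wolff keeps its own replayq, and the
            %% resource manager would kill the producers, losing buffered data.
            connecting
    end.
```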
Fixes https://emqx.atlassian.net/browse/EMQX-10278
Since the Pulsar client has its own replayq that lives outside the management of the buffer
workers, we must not return a `disconnected` status for such a bridge: otherwise, the
resource manager will eventually kill the producers, and buffered data may be lost.
Fixes https://emqx.atlassian.net/browse/EMQX-10228
This is a cosmetic fix for the Pulsar Producer bridge health check status.
Pulsar connection establishment is asynchronous; therefore, when a bridge of this type is
created or updated (which is the same as stopping and re-creating it), its immediate
status will be `connecting`, because it is indeed still connecting. The bridge will connect
very soon afterwards (assuming there are no real network/config issues), but having to
refresh the UI to see the new status and/or seeing the resource alarm might annoy users.
This workaround adds a few retries to account for that effect and reduce the probability of
seeing the `connecting` state on such happy paths.
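A minimal sketch of the retry workaround; the retry count and interval here are illustrative, not the values used in the patch:
```erlang
%% Poll the health check a few times before settling on `connecting`,
%% giving the asynchronous Pulsar connection time to complete.
status_with_retries(CheckFun) ->
    status_with_retries(CheckFun, 5).

status_with_retries(CheckFun, 0) ->
    CheckFun();
status_with_retries(CheckFun, Left) ->
    case CheckFun() of
        connected ->
            connected;
        connecting ->
            timer:sleep(200),
            status_with_retries(CheckFun, Left - 1)
    end.
```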
The MongoDB connector currently does not support batching,
so the `batch_size` option has no effect.
However, we cannot remove the field, so we choose to hide it from the
schema.
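A hypothetical sketch in the hocon-schema style used by EMQX (the `importance` flag and its value here are assumptions about how hiding is done):
```erlang
%% Keep batch_size in the schema for compatibility, but hide it.
fields(mongodb_action) ->
    [ {batch_size,
       hoconsc:mk(pos_integer(),
           #{ default => 1
            , importance => hidden  %% assumption: marks the field as hidden
            , desc => <<"No effect: the MongoDB connector does not batch requests.">>
            })}
    ].
```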