Since `emqx_ee_schema_registry` uses Mria tables (in the `schema_registry_shard` shard),
a node joining a cluster needs to restart this application in order to
restart the relevant Mria shard processes.
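For illustration, a minimal sketch of the idea using standard OTP calls; where this
restart is actually wired into the cluster-join flow is not shown, and the function
name is hypothetical:

```erlang
%% Sketch: restart emqx_ee_schema_registry after this node joins a cluster, so
%% its Mria shard (schema_registry_shard) processes come back up against the
%% new cluster. Where this is hooked into the join flow is not shown here.
restart_schema_registry_on_join() ->
    _ = application:stop(emqx_ee_schema_registry),
    {ok, _Started} = application:ensure_all_started(emqx_ee_schema_registry),
    ok.
```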
Fixes https://emqx.atlassian.net/browse/EMQX-10408
From an old conversation with @kjellwinblad:
> There are 3 pool_sizes
> - The buffer workers pool size, just exposed here: https://github.com/emqx/emqx/pull/9742
> - The topology.pool_size, which controls the pool size for the poolboy_pool in Kjell's
>   diagram (on mongodb's side).
> - The pool_size from emqx_connector_mongo:mongo_fields that controls the ecpool pool
>   size (on EMQX's side).
> So we actually want to set topology.pool_size = 1 and hide it from users.
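For illustration only, a sketch of the intended override, assuming the topology options
are carried as a proplist before being handed to the MongoDB driver; the helper name is
hypothetical:

```erlang
%% Sketch: pin the driver-side (poolboy) pool to a single worker and rely on
%% ecpool on the EMQX side for concurrency. `TopologyOpts` is assumed to be a
%% proplist of topology options; the function name is hypothetical.
force_single_driver_pool(TopologyOpts) ->
    [{pool_size, 1} | proplists:delete(pool_size, TopologyOpts)].
```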
Fixes https://emqx.atlassian.net/browse/EMQX-10279
Related: https://github.com/emqx/emqx/pull/11038
Since the wolff (Kafka producer) client has its own replayq that lives outside the
management of the buffer workers, we must not return a `disconnected` status for this
bridge type: otherwise, the resource manager will eventually kill the producers and
data may be lost.
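To illustrate the intent (this is not the actual resource callback), a hypothetical
status mapping in which a failed connectivity probe yields `connecting` rather than
`disconnected`; `check_kafka_connection/1` is an assumed helper:

```erlang
%% Sketch: wolff keeps buffering into its own replayq even while the Kafka
%% connection is down, so a failed probe maps to `connecting` instead of
%% `disconnected`; this keeps the resource manager from tearing down the
%% producers (and the data they still hold). `check_kafka_connection/1` is a
%% hypothetical probe.
health_status(State) ->
    case check_kafka_connection(State) of
        ok -> connected;
        {error, _Reason} -> connecting %% never `disconnected` for this bridge
    end.
```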
Fixes https://emqx.atlassian.net/browse/EMQX-10278
Since the Pulsar client has its own replayq that lives outside the management of the
buffer workers, we must not return a `disconnected` status for this bridge type:
otherwise, the resource manager will eventually kill the producers and data may be lost.
Fixes https://emqx.atlassian.net/browse/EMQX-10228
This is a cosmetic fix for the Pulsar Producer bridge health check status.
The Pulsar connection process is asynchronous; therefore, when a bridge of this type is
created or updated (which is the same as stopping and re-creating it), its immediate
status will be `connecting`, because it is indeed still connecting. The bridge will
connect very soon afterwards (assuming there is no real network/config issue), but having
to refresh the UI to see the new status and/or seeing the resource alarm might annoy users.
This workaround adds a few retries to account for that effect and reduce the probability
of seeing the `connecting` state on such happy paths.
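A sketch of the shape of the workaround, where `get_status/1` is a hypothetical probe
and the retry count and sleep interval are illustrative:

```erlang
%% Sketch: poll the bridge status a few times right after (re)creation so a
%% briefly-connecting Pulsar producer gets a chance to report `connected`
%% before the status is surfaced. `get_status/1` is a hypothetical probe;
%% the retry count and sleep are illustrative.
status_with_retries(Bridge) ->
    status_with_retries(Bridge, 3).

status_with_retries(_Bridge, 0) ->
    connecting;
status_with_retries(Bridge, Left) ->
    case get_status(Bridge) of
        connecting ->
            timer:sleep(200),
            status_with_retries(Bridge, Left - 1);
        Status ->
            Status
    end.
```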
The MongoDB connector currently does not support batching,
so the `batch_size` option has no effect.
However, we cannot remove the field, so we choose to hide it from
the schema instead.
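Roughly, the field stays in the schema so existing configurations keep validating, but
is flagged as hidden; the exact hocon_schema metadata EMQX uses for hiding is an
assumption here:

```erlang
%% Sketch: keep `batch_size` as a valid (but ignored) field so existing
%% configurations still load, while hiding it from the documented schema.
%% The exact hocon_schema metadata used for hiding is an assumption.
{batch_size,
    hoconsc:mk(pos_integer(), #{
        default => 1,
        importance => hidden, %% assumed hiding mechanism
        desc => <<"Has no effect: the MongoDB bridge does not support batching.">>
    })}
```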
Fixes https://emqx.atlassian.net/browse/EMQX-9603
Rather than relying on a JWT worker to produce and refresh tokens, we
can simply produce them on demand when pushing messages to GCP
PubSub. That can generate a bit of extra work (as multiple processes
might notice it's time to refresh the JWT and do so), but it
shouldn't be much. In return, we avoid any possibility of not having
a fresh JWT when pushing messages.
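A sketch of the on-demand approach, assuming a cached token with an expiry timestamp;
`generate_jwt/1` and the cache shape are hypothetical:

```erlang
%% Sketch: reuse the cached JWT while it is comfortably within its lifetime,
%% otherwise generate a fresh one right before pushing to GCP PubSub. Several
%% callers may refresh concurrently; that small duplicated effort is accepted.
%% `generate_jwt/1` and the cache shape are hypothetical.
-define(REFRESH_MARGIN_S, 60).

get_jwt(#{jwt := JWT, expires_at := ExpiresAt} = Cache, Config) ->
    Now = erlang:system_time(second),
    case ExpiresAt - Now > ?REFRESH_MARGIN_S of
        true ->
            {JWT, Cache};
        false ->
            {NewJWT, NewExpiresAt} = generate_jwt(Config),
            {NewJWT, Cache#{jwt := NewJWT, expires_at := NewExpiresAt}}
    end.
```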