The connector schemas are shared between authn, authz and bridges.
The difference is that authn and authz configs are unions like:
[
  {mongo_type = rs,
   ...
  },
  {another backend config}
]
However, the bridge config types are maps, for example:
mongodb_rs.$name {
  mongo_type = rs
  ...
}
In this case the type is already implied by the `mongodb_rs` prefix, so `mongo_type` is not required.
In order to keep the schema as static as possible,
this field is changed to 'required => false' with a default value.
However, for authn and authz, the union selector will still raise an
exception if no type is provided.
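For illustration, a rough hocon-schema-style sketch of the relaxed field
(function name and layout are made up here, not the exact EMQX schema code):

%% The selector field becomes optional with a default, so bridge configs
%% (mongodb_rs.$name) may omit it; authn/authz unions still select on it
%% when it is present.
mongo_type_field() ->
    {mongo_type,
     hoconsc:mk(hoconsc:enum([rs]),
                #{required => false,
                  default => rs})}.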
This makes the buffer/resource workers always use `replayq` for
queuing, and collect multiple requests in a single call. This avoids
long message queues in the buffer workers, relying instead on
`replayq`'s ability to offload to disk and to detect overflow.
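As a rough sketch of the pattern (option values and the sink call are
simplified assumptions, not the actual buffer worker code):

%% Every buffer worker owns a replayq; items are assumed to be already
%% encoded as binaries (replayq's default sizer).
init_queue(Dir) ->
    replayq:open(#{dir => Dir,
                   seg_bytes => 10 * 1024 * 1024,
                   max_total_bytes => 1024 * 1024 * 1024}).

enqueue(Q, RequestBin) ->
    replayq:append(Q, [RequestBin]).

%% Flushing pops up to a batch worth of requests in one call instead of
%% draining a long process message queue one message at a time.
flush(Q, BatchSize) ->
    {NewQ, AckRef, Batch} = replayq:pop(Q, #{count_limit => BatchSize}),
    ok = do_send_batch(Batch),   %% hypothetical call into the sink/connector
    ok = replayq:ack(NewQ, AckRef),
    NewQ.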
Also, this deprecates the `enable_batch` and `enable_queue` resource
creation options, because: i) queuing is now always enabled; ii)
batching is enabled if and only if `batch_size > 1`. The corresponding
metric `dropped.queue_not_enabled` is dropped, along with the
`batching` gauge: batching is too ephemeral to observe (especially
with the default batch time of 20 ms) and is not shown in the
dashboard, so it was removed.
https://emqx.atlassian.net/browse/EMQX-8548
Currently, we face several issues trying to keep resource metrics
reasonable. For example, when a resource is re-created and has its
metrics reset, its durable queue then resumes its previous work, which
leads to strange (often negative) metrics.
Instead of using `counters` shared by more than one worker to manage
gauges, we introduce an ETS table whose key is scoped not only by the
Resource ID, as before, but also by the worker ID. This way, when a
worker starts or terminates, it sets its own gauges to its own values
(often 0, or `replayq:count` when resuming from a queue). With this
scoping and initialization procedure, we hopefully avoid those strange
metrics scenarios and gain better control over the gauges.
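A minimal sketch of the idea, assuming illustrative table and function
names (not the actual emqx_resource metrics code):

%% Gauges are stored per {ResourceId, WorkerId, Gauge}; the reported value
%% is the sum of the per-worker entries for a resource.
create_gauge_table() ->
    ets:new(resource_gauges,
            [named_table, public, set, {write_concurrency, true}]).

%% A worker only ever writes its own entry, e.g. 0 on start/terminate, or
%% replayq:count(Q) when resuming from a durable queue.
set_gauge(ResId, WorkerId, Gauge, Value) ->
    ets:insert(resource_gauges, {{ResId, WorkerId, Gauge}, Value}).

get_gauge(ResId, Gauge) ->
    lists:sum(ets:select(resource_gauges,
                         [{{{ResId, '_', Gauge}, '$1'}, [], ['$1']}])).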
https://emqx.atlassian.net/browse/EMQX-8547
If a Kafka Producer bridge is given bad configuration (e.g. bad authn
credentials), the Wolff client process is started successfully, as it
does not attempt to connect at that point; but when the producer
process is then started, it fails (only then does the client try to
connect to Kafka). At this point an error was thrown, yet the
supervised client process remained running.
If the configuration was later fixed and the bridge updated, which
prompted its removal and recreation, the Wolff client would report
that it was "already started", so it would never pick up the new
(fixed) configuration, and the producers would perpetually fail to
start until the node was restarted.
We simply ensure the client is stopped before throwing the error,
unrolling the start-up procedure.
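A rough sketch of the unrolling logic, assuming the wolff supervised
client/producer helpers (simplified; not the exact bridge code):

start(ClientId, Hosts, ClientConfig, Topic, ProducerConfig) ->
    {ok, _Client} = wolff:ensure_supervised_client(ClientId, Hosts, ClientConfig),
    try
        {ok, Producers} = wolff:ensure_supervised_producers(ClientId, Topic, ProducerConfig),
        {ok, Producers}
    catch
        Kind:Reason:Stacktrace ->
            %% Unroll: stop the client before re-raising, so that a later
            %% bridge recreation with fixed configuration starts a fresh
            %% client instead of finding one that is "already started".
            _ = wolff:stop_and_delete_supervised_client(ClientId),
            erlang:raise(Kind, Reason, Stacktrace)
    end.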