This surprisingly simple change yields a big throughput improvement.
Under a test workload of 100 k concurrent publishers, each publishing
1 QoS 1 message per second, the previous commit achieves ~55 k
messages/s; the simple change in this commit improves that further to
~63 k messages/s.
Benchmarks indicated that evaluating one reply function is
consistently quite fast (~20 µs), which makes this performance gain
counterintuitive. Perhaps, although each call is cheap, `ehttpc`
calls several of them in a row when there are several sent requests,
and those costs might add up in latency.
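For a rough sense of scale (the pending-request count below is an
illustrative assumption; only the ~20 µs figure comes from the
benchmarks): if one `ehttpc` worker has 500 requests already sent and
awaiting replies, walking their reply functions at ~20 µs each costs
~10 ms per pass, which at this publish rate could easily dominate the
observed latency.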
This is a performance improvement for the webhook bridge.
Since this bridge is called using the `async` callback mode, and
`ehttpc` frequently returns errors of the form `normal` and
`{shutdown, normal}` that are retried "for free" by `ehttpc`, we add
this behavior to async requests as well. Other errors are retried
too, but they are not "free": at most 3 attempts are made.
This is important because, when using buffer workers, we should avoid
making them enter the `blocked` state, since that halts all progress
and makes throughput plummet.
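A minimal sketch of the retry decision described above, with
hypothetical module and function names (the real connector/bridge
code differs in detail):

```erlang
%% Sketch only: names here are illustrative, not actual EMQX code.
-module(async_retry_sketch).
-export([should_retry/2]).

-define(MAX_ATTEMPTS, 3).

%% Errors that ehttpc itself retries "for free" never consume an attempt.
is_free_retry_error(normal) -> true;
is_free_retry_error({shutdown, normal}) -> true;
is_free_retry_error(_) -> false.

should_retry(Error, Attempt) ->
    case is_free_retry_error(Error) of
        true -> {retry, Attempt};                        %% free retry: attempt count unchanged
        false when Attempt < ?MAX_ATTEMPTS -> {retry, Attempt + 1};
        false -> give_up
    end.
```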
* release-50:
fix(pulsar): use a binary duration as default `health_check_interval`
docs: add changelog entry
docs: clarify description of bridge username and password
chore: bump to v5.0.25
fix(limiter): adjust type for compatibility
fix(limiter): fix updating node-level limiter config not taking effect
chore: upgrade dashboard to v1.2.4-1 for ce
chore: upgrade rulesql to 0.1.6 to fix invalid utf8 input
chore: add changelog for 10659
fix: crash when sysmon.os.mem_check_interval = disabled
chore: bump influxdb version && update changes
refactor(influxdb): move influxdb bridge into its own app
chore: add listener default changelog
fix: ocsp cache SUITE failed
fix: ensure atom key for emqx_config:get
fix: only fill the cert file default on the server side
fix: authn init is empty
fix: bad listeners default ssl_options
The previous commit uncovered another bug that had been hidden until
then: `maybe_flush_after_async_reply` was sending a message to the
wrong PID. It sent the message to `self()`, meaning to target a
buffer worker, but `self()` in that context is never the buffer
worker; it is the connector's worker.
This change also revealed a race condition where the buffer workers
could stop flushing messages. So we piggy-backed on the atomic update
of the table size count to check if the buffer worker should be poked
to continue flushing. This allows us to get rid of
`maybe_flush_after_async_reply` altogether.
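Roughly, the new shape of the reply path looks like this (a sketch
with hypothetical names; the actual buffer worker code differs in
detail):

```erlang
%% Sketch only: names are illustrative, not the actual buffer worker module.
-module(buffer_poke_sketch).
-export([on_async_reply/2]).

%% Runs in the connector's worker process, so the buffer worker's pid is
%% passed in explicitly rather than derived from self().
on_async_reply(BufferWorkerPid, TableSizeRef) ->
    %% atomics:sub_get/3 decrements the table size counter and returns the
    %% new value atomically; piggy-back on that result to decide whether
    %% the buffer worker still has messages to flush and needs a poke.
    case atomics:sub_get(TableSizeRef, 1, 1) of
        Remaining when Remaining > 0 ->
            %% poke the *buffer worker* (the real code uses its
            %% flush_worker-style API), not self()
            BufferWorkerPid ! flush;
        _ ->
            ok
    end.
```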
Fixes https://emqx.atlassian.net/browse/EMQX-9902
When the buffer worker inflight window is full, we don't need to set a
timer to flush the messages again, because there's no more room
anyway, and one of the inflight requests, once its reply arrives, will
flush the buffer worker by calling `flush_worker`.
Currently, we do set the timer in that situation, and this, combined
with the default batch time of 0, yields a busy loop in which the CPU
spins heavily while the inflight requests have not yet been replied
to.
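In rough pseudo-Erlang, the intended behavior is the following (names
are illustrative, not the real buffer worker internals):

```erlang
%% Sketch only: illustrative names, not the actual buffer worker code.
-module(flush_timer_sketch).
-export([maybe_set_flush_timer/1]).

maybe_set_flush_timer(#{inflight_full := true} = St) ->
    %% No room in the inflight window: with the default batch_time of 0,
    %% a flush timer would fire immediately, find no room, re-arm, and
    %% spin the CPU.  Do nothing; a returning inflight request will call
    %% flush_worker and resume flushing.
    St;
maybe_set_flush_timer(#{batch_time := BatchTime} = St) ->
    %% There is still room: arm the usual flush timer.
    TRef = erlang:send_after(BatchTime, self(), flush),
    St#{flush_tref => TRef}.
```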