Compare commits

...

54 Commits

Author SHA1 Message Date
Ivan Dyachkov bcd63344b8
Merge pull request #13583 from id/20240807-sync-release-branches
sync release branches
2024-08-07 11:38:14 +02:00
Ivan Dyachkov cc3b26a3ac Merge remote-tracking branch 'upstream/release-58' into 20240807-sync-release-branches 2024-08-07 09:48:38 +02:00
Ivan Dyachkov dd686c24a0 Merge remote-tracking branch 'upstream/release-57' into 20240807-sync-release-branches 2024-08-07 09:44:38 +02:00
Ivan Dyachkov 592c4e0045
Merge pull request #12774 from emqx/dependabot/github_actions/dot-github/actions/package-macos/actions-package-macos-83d1e47aa6
chore(deps): bump the actions-package-macos group in /.github/actions/package-macos with 1 update
2024-08-07 09:43:53 +02:00
Ivan Dyachkov 073e3ea0a8
Merge pull request #13569 from emqx/dependabot/github_actions/actions-ef71aea555
chore(deps): bump the actions group across 1 directory with 8 updates
2024-08-07 09:35:52 +02:00
Ilia Averianov 6bfddd9952
Merge pull request #13565 from savonarola/0801-shared-subs-compact-structures
Reduce size of shared sub protocol structures
2024-08-06 19:56:08 +03:00
Ilya Averyanov 9ad65c6ac1 feat(queue): reduce logging levels 2024-08-06 18:45:15 +03:00
Ilya Averyanov e17becb84d feat(queue): compact protocol structures, organize formatting 2024-08-06 18:05:02 +03:00
Ivan Dyachkov 822ed71282 chore: release 5.7.2 2024-08-06 13:25:56 +02:00
Kinple b8fd5de2a5
Merge pull request #13577 from Kinplemelon/kinple/upgrade-dashboard
chore(dashboard): bump dashboard version to v1.9.2 & e1.7.2
2024-08-06 19:02:50 +08:00
Kinplemelon 3ee84d60ae chore(dashboard): bump dashboard version to v1.9.2 & e1.7.2 2024-08-06 18:11:35 +08:00
Andrew Mayorov 3b52b658cd
Merge pull request #13559 from keynslug/feat/EMQX-12309/raft-precond
feat(dsraft): support atomic batches + preconditions
2024-08-06 09:17:16 +02:00
Kinple caf1897979
Merge pull request #13574 from Kinplemelon/kinple/upgrade-dashboard
chore(dashboard): bump dashboard version to e1.7.2-beta.7
2024-08-06 10:51:03 +08:00
Kinplemelon dbbd5e1458 ci: update emqx docs link in dashboard 2024-08-06 09:33:20 +08:00
Kinplemelon 0ab31df9d2 chore(dashboard): bump dashboard version to v1.9.2-beta.1 & e1.7.2-beta.7 2024-08-06 09:32:17 +08:00
Andrew Mayorov b1a53568d6
test(ds): avoid side effects in check phase 2024-08-05 16:34:17 +02:00
Andrew Mayorov 382feab7d1
chore(dsraft): fix a few spelling errors
Co-Authored-By: Thales Macedo Garitezi <thalesmg@gmail.com>
2024-08-05 10:55:49 +02:00
Andrew Mayorov 6aad774075
chore(dsraft): fix a typespec 2024-08-05 10:55:49 +02:00
Andrew Mayorov 649cbf1c79
fix(dsraft): use local application environment 2024-08-05 10:55:49 +02:00
Andrew Mayorov 4cde5e98a3
chore(dslocal): refine a few typespecs 2024-08-05 10:55:48 +02:00
Andrew Mayorov d631b5b296
feat(ds): support deletions + precondition-related API in bitfield-lts 2024-08-05 10:55:48 +02:00
Andrew Mayorov 26ec69d5f4
test(ds): verify deletions work predictably 2024-08-05 10:55:48 +02:00
Andrew Mayorov 58b9ab0210
fix(dsbackend): unify timestamp resolution in operations / preconditions 2024-08-05 10:55:22 +02:00
lafirest 4644072fd8
Merge pull request #13570 from lafirest/fix/api_key_bootstrap
fix(api_key): do not crash boot when the bootstrap file does not exist
2024-08-05 16:33:43 +08:00
firest c9c4d1a196 fix(api_key): do not crash boot when the bootstrap file does not exist 2024-08-05 15:56:05 +08:00
dependabot[bot] 11546b72f4
chore(deps): bump the actions group across 1 directory with 8 updates
Bumps the actions group with 8 updates in the / directory:

| Package | From | To |
| --- | --- | --- |
| [actions/checkout](https://github.com/actions/checkout) | `4.1.2` | `4.1.7` |
| [actions/upload-artifact](https://github.com/actions/upload-artifact) | `4.3.3` | `4.3.5` |
| [actions/download-artifact](https://github.com/actions/download-artifact) | `4.1.7` | `4.1.8` |
| [docker/setup-qemu-action](https://github.com/docker/setup-qemu-action) | `3.0.0` | `3.2.0` |
| [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action) | `3.3.0` | `3.6.1` |
| [docker/login-action](https://github.com/docker/login-action) | `3.2.0` | `3.3.0` |
| [erlef/setup-beam](https://github.com/erlef/setup-beam) | `1.18.0` | `1.18.1` |
| [ossf/scorecard-action](https://github.com/ossf/scorecard-action) | `2.3.3` | `2.4.0` |



Updates `actions/checkout` from 4.1.2 to 4.1.7
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v4.1.2...692973e3d937129bcbf40652eb9f2f61becf3332)

Updates `actions/upload-artifact` from 4.3.3 to 4.3.5
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](65462800fd...89ef406dd8)

Updates `actions/download-artifact` from 4.1.7 to 4.1.8
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](65a9edc588...fa0a91b85d)

Updates `docker/setup-qemu-action` from 3.0.0 to 3.2.0
- [Release notes](https://github.com/docker/setup-qemu-action/releases)
- [Commits](68827325e0...49b3bc8e6b)

Updates `docker/setup-buildx-action` from 3.3.0 to 3.6.1
- [Release notes](https://github.com/docker/setup-buildx-action/releases)
- [Commits](d70bba72b1...988b5a0280)

Updates `docker/login-action` from 3.2.0 to 3.3.0
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](0d4c9c5ea7...9780b0c442)

Updates `erlef/setup-beam` from 1.18.0 to 1.18.1
- [Release notes](https://github.com/erlef/setup-beam/releases)
- [Commits](a6e26b2231...b9c58b0450)

Updates `ossf/scorecard-action` from 2.3.3 to 2.4.0
- [Release notes](https://github.com/ossf/scorecard-action/releases)
- [Changelog](https://github.com/ossf/scorecard-action/blob/main/RELEASE.md)
- [Commits](dc50aa9510...62b2cac7ed)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: actions
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: actions
- dependency-name: actions/download-artifact
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: actions
- dependency-name: docker/setup-qemu-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: actions
- dependency-name: docker/setup-buildx-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: actions
- dependency-name: docker/login-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: actions
- dependency-name: erlef/setup-beam
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: actions
- dependency-name: ossf/scorecard-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: actions
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-05 03:25:47 +00:00
dependabot[bot] bcb70a9fb9
chore(deps): bump the actions-package-macos group
Bumps the actions-package-macos group in /.github/actions/package-macos with 1 update: [actions/cache](https://github.com/actions/cache).


Updates `actions/cache` from 4.0.1 to 4.0.2
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](ab5e6d0c87...0c45773b62)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: actions-package-macos
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-05 03:17:26 +00:00
JimMoen 09ec31908b
Merge pull request #13357 from JimMoen/fix-utf8-frame-error-connack
Stop returning `CONNACK` or `DISCONNECT` to clients that sent malformed CONNECT packets.

- Only send `CONNACK` with the reason code `frame_too_large` for MQTT v5.0 connections, and only when the protocol version field in the CONNECT packet can be detected.
- Otherwise, **DO NOT** send any CONNACK or DISCONNECT packet.
2024-08-02 15:24:30 +08:00
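The rule above can be sketched as a small decision function (an illustrative sketch only, not EMQX's actual Erlang implementation; the function name and return shape are hypothetical):

```python
# MQTT v5.0 CONNACK reason code 0x95 ("Packet too large").
FRAME_TOO_LARGE = 0x95

def respond_to_malformed_connect(proto_ver):
    """Decide the reply to a client whose CONNECT packet was malformed.

    proto_ver is the protocol version parsed from the CONNECT variable
    header, or None when even that field could not be decoded.
    """
    if proto_ver == 5:
        # Version field was readable and is MQTT v5.0: send a CONNACK
        # carrying frame_too_large, then close the connection.
        return ("connack", FRAME_TOO_LARGE)
    # Version unknown or pre-5.0: send nothing and just close the socket.
    return ("close", None)
```

Pre-5.0 CONNACK packets have no comparable reason code, which is why those clients get no reply at all.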
lafirest b94ec4014f
Merge pull request #13563 from lafirest/fix/payload_encode
fix(log): respect payload encoding settings when formatting packets
2024-08-02 14:38:12 +08:00
firest 74c346f9d1 fix(log): respect payload encoding settings when formatting packets 2024-08-02 12:41:30 +08:00
zhongwencool 8a33ef8576
Merge pull request #13562 from zhongwencool/fix-deactivate-alarm
fix: deactivate alarm before creating resource
2024-08-02 12:08:27 +08:00
zhongwencool 6c2033ecbf fix: deactivate alarm before creating resource 2024-08-02 11:03:59 +08:00
zmstone 51530588ef ci: fix a typo in commented out docker-compose yaml file 2024-08-01 22:41:42 +02:00
Andrew Mayorov 810a4d3cf9
test(dsbackend): add shared tests for atomic batches + preconditions 2024-08-01 14:26:45 +02:00
Andrew Mayorov 7b243ef7ad
feat(ds): support operations + preconditions in skipstream-lts 2024-08-01 14:26:45 +02:00
Andrew Mayorov fcf76d28ba
feat(dsraft): support atomic batches + preconditions 2024-08-01 14:26:45 +02:00
Andrew Mayorov 3b5d98c1d9
feat(ds): adopt buffer interface to `emqx_ds:operation()` 2024-08-01 14:26:45 +02:00
Andrew Mayorov 451b03ff99
feat(ds): add generic preconditions implementation 2024-08-01 14:26:45 +02:00
JimMoen f792418a68
Merge pull request #13552 from JimMoen/fix-plugin-app-takes-too-long
fix: add a startup timeout limit for the plugin application
2024-08-01 16:46:09 +08:00
JimMoen 4915cc0da6
chore: add changelog entry for 13357 2024-08-01 15:23:58 +08:00
JimMoen 15b3f4deb0
fix: rm unused func and exports 2024-08-01 15:00:24 +08:00
JimMoen 7a251c9ead
test: handle frame error for CONNECT packets 2024-08-01 10:26:31 +08:00
JimMoen 37a89d0094
fix: enrich parse_state and connection serialize opts 2024-08-01 10:26:31 +08:00
JimMoen c313aa89f0
fix: try throw proto_ver and proto_name when parsing CONNECT packet 2024-08-01 10:26:31 +08:00
JimMoen 6db1c0a446
refactor: separate function to handle `frame_error` 2024-08-01 10:26:31 +08:00
JimMoen d4508a4f1d
chore: sync master `elvis.config` 2024-08-01 10:26:31 +08:00
Ivan Dyachkov 577f1a7d8a
Merge pull request #13553 from id/20240731-ci-fix-docker-build
ci: fix docker images build
2024-07-31 16:04:47 +02:00
Thales Macedo Garitezi 08c58cc319
Merge pull request #13543 from thalesmg/20240730-r57-sr-delete-protobuf-cache
fix(schema registry): clear protobuf code cache when deleting/updating serdes
2024-07-31 10:16:48 -03:00
Thales Macedo Garitezi 150fee87f1
Merge pull request #13541 from thalesmg/20240730-r57-unset-crl-check-listener
fix(crl): force remove CRL fields from SSL opts after listener update
2024-07-31 10:16:35 -03:00
JimMoen c658cfe269
fix: make static_check happy 2024-07-31 17:17:13 +08:00
JimMoen a246551914
fix: add a startup timeout limit for the plugin application 2024-07-31 17:17:11 +08:00
Ivan Dyachkov 8d8ff6cf5d ci: fix docker images build
/etc/docker/daemon.json requires root for read access
2024-07-31 10:27:04 +02:00
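The root cause and fix show up in the diff below: the broken step let `jq` open `/etc/docker/daemon.json` directly, which fails for a non-root user, while the fixed step reads the file with `sudo cat` first. The merge itself, equivalent to `jq '. += {"features": {"containerd-snapshotter": true}}'`, can be sketched in Python (an illustrative sketch; the helper name is hypothetical):

```python
import json

def enable_containerd_snapshotter(daemon_json_text):
    """Merge the containerd image-store flag into a Docker daemon.json
    document, mirroring jq's '. += {"features": {...}}' (which replaces
    the whole "features" key rather than deep-merging it)."""
    doc = json.loads(daemon_json_text) if daemon_json_text.strip() else {}
    doc["features"] = {"containerd-snapshotter": True}
    return json.dumps(doc)
```

In the workflow the transformed text is written to a scratch file and moved back with `sudo mv` before restarting Docker, since writing `/etc/docker/daemon.json` also requires root.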
Thales Macedo Garitezi ebb69f4ebf fix(crl): force remove crl fields from SSL opts after listener update
Fixes https://emqx.atlassian.net/browse/EMQX-12785
2024-07-30 14:00:24 -03:00
Thales Macedo Garitezi fd961f9da7 fix(schema registry): clear protobuf code cache when deleting/updating serde
Fixes https://emqx.atlassian.net/browse/EMQX-12789
2024-07-30 13:52:34 -03:00
99 changed files with 1062 additions and 563 deletions


@@ -10,7 +10,7 @@ services:
nofile: 1024
image: openldap
#ports:
# - 389:389
# - "389:389"
volumes:
- ./certs/ca.crt:/etc/certs/ca.crt
restart: always


@@ -51,7 +51,7 @@ runs:
echo "SELF_HOSTED=false" >> $GITHUB_OUTPUT
;;
esac
- uses: actions/cache@ab5e6d0c87105b4c9c2047343972218f562e4319 # v4.0.1
- uses: actions/cache@0c45773b623bea8c8e75f6c82b208c3cf94ea4f9 # v4.0.2
id: cache
if: steps.prepare.outputs.SELF_HOSTED != 'true'
with:


@@ -152,7 +152,7 @@ jobs:
echo "PROFILE=${PROFILE}" | tee -a .env
echo "PKG_VSN=$(./pkg-vsn.sh ${PROFILE})" | tee -a .env
zip -ryq -x@.github/workflows/.zipignore $PROFILE.zip .
- uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3
- uses: actions/upload-artifact@89ef406dd8d7e03cfd12d9e0a4a378f454709029 # v4.3.5
with:
name: ${{ matrix.profile }}
path: ${{ matrix.profile }}.zip


@@ -163,7 +163,7 @@ jobs:
echo "PROFILE=${PROFILE}" | tee -a .env
echo "PKG_VSN=$(./pkg-vsn.sh ${PROFILE})" | tee -a .env
zip -ryq -x@.github/workflows/.zipignore $PROFILE.zip .
- uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3
- uses: actions/upload-artifact@89ef406dd8d7e03cfd12d9e0a4a378f454709029 # v4.3.5
with:
name: ${{ matrix.profile }}
path: ${{ matrix.profile }}.zip


@@ -83,7 +83,7 @@ jobs:
id: build
run: |
make ${{ matrix.profile }}-tgz
- uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3
- uses: actions/upload-artifact@89ef406dd8d7e03cfd12d9e0a4a378f454709029 # v4.3.5
with:
name: "${{ matrix.profile }}-${{ matrix.arch }}.tar.gz"
path: "_packages/emqx*/emqx-*.tar.gz"
@@ -110,7 +110,7 @@ jobs:
- uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
ref: ${{ github.event.inputs.ref }}
- uses: actions/download-artifact@65a9edc5881444af0b9093a5e628f2fe47ea3b2e # v4.1.7
- uses: actions/download-artifact@fa0a91b85d4f404e444e00e005971372dc801d16 # v4.1.8
with:
pattern: "${{ matrix.profile[0] }}-*.tar.gz"
path: _packages
@@ -122,24 +122,25 @@ jobs:
run: |
ls -lR _packages/$PROFILE
mv _packages/$PROFILE/*.tar.gz ./
- name: Enable containerd image store on Docker Engine
run: |
echo "$(jq '. += {"features": {"containerd-snapshotter": true}}' /etc/docker/daemon.json)" > daemon.json
echo "$(sudo cat /etc/docker/daemon.json | jq '. += {"features": {"containerd-snapshotter": true}}')" > daemon.json
sudo mv daemon.json /etc/docker/daemon.json
sudo systemctl restart docker
- uses: docker/setup-qemu-action@68827325e0b33c7199eb31dd4e31fbe9023e06e3 # v3.0.0
- uses: docker/setup-buildx-action@d70bba72b1f3fd22344832f00baa16ece964efeb # v3.3.0
- uses: docker/setup-qemu-action@49b3bc8e6bdd4a60e6116a5414239cba5943d3cf # v3.2.0
- uses: docker/setup-buildx-action@988b5a0280414f521da01fcc63a27aeeb4b104db # v3.6.1
- name: Login to hub.docker.com
uses: docker/login-action@0d4c9c5ea7693da7b068278f7b52bda2a190a446 # v3.2.0
uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567 # v3.3.0
if: inputs.publish && contains(matrix.profile[1], 'docker.io')
with:
username: ${{ secrets.DOCKER_HUB_USER }}
password: ${{ secrets.DOCKER_HUB_TOKEN }}
- name: Login to AWS ECR
uses: docker/login-action@0d4c9c5ea7693da7b068278f7b52bda2a190a446 # v3.2.0
uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567 # v3.3.0
if: inputs.publish && contains(matrix.profile[1], 'public.ecr.aws')
with:
registry: public.ecr.aws


@@ -51,7 +51,7 @@ jobs:
if: always()
run: |
docker save $_EMQX_DOCKER_IMAGE_TAG | gzip > $EMQX_NAME-docker-$PKG_VSN.tar.gz
- uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3
- uses: actions/upload-artifact@89ef406dd8d7e03cfd12d9e0a4a378f454709029 # v4.3.5
with:
name: "${{ env.EMQX_NAME }}-docker"
path: "${{ env.EMQX_NAME }}-docker-${{ env.PKG_VSN }}.tar.gz"


@@ -95,7 +95,7 @@ jobs:
apple_developer_identity: ${{ secrets.APPLE_DEVELOPER_IDENTITY }}
apple_developer_id_bundle: ${{ secrets.APPLE_DEVELOPER_ID_BUNDLE }}
apple_developer_id_bundle_password: ${{ secrets.APPLE_DEVELOPER_ID_BUNDLE_PASSWORD }}
- uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3
- uses: actions/upload-artifact@89ef406dd8d7e03cfd12d9e0a4a378f454709029 # v4.3.5
if: success()
with:
name: ${{ matrix.profile }}-${{ matrix.os }}-${{ matrix.otp }}
@@ -180,7 +180,7 @@ jobs:
--builder $BUILDER \
--elixir $IS_ELIXIR \
--pkgtype pkg
- uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3
- uses: actions/upload-artifact@89ef406dd8d7e03cfd12d9e0a4a378f454709029 # v4.3.5
with:
name: ${{ matrix.profile }}-${{ matrix.os }}-${{ matrix.arch }}${{ matrix.with_elixir == 'yes' && '-elixir' || '' }}-${{ matrix.builder }}-${{ matrix.otp }}-${{ matrix.elixir }}
path: _packages/${{ matrix.profile }}/
@@ -198,7 +198,7 @@ jobs:
profile:
- ${{ inputs.profile }}
steps:
- uses: actions/download-artifact@65a9edc5881444af0b9093a5e628f2fe47ea3b2e # v4.1.7
- uses: actions/download-artifact@fa0a91b85d4f404e444e00e005971372dc801d16 # v4.1.8
with:
pattern: "${{ matrix.profile }}-*"
path: packages/${{ matrix.profile }}


@@ -54,7 +54,7 @@ jobs:
- name: build pkg
run: |
./scripts/buildx.sh --profile "$PROFILE" --pkgtype pkg --builder "$BUILDER"
- uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3
- uses: actions/upload-artifact@89ef406dd8d7e03cfd12d9e0a4a378f454709029 # v4.3.5
if: success()
with:
name: ${{ matrix.profile[0] }}-${{ matrix.profile[1] }}-${{ matrix.os }}
@@ -102,7 +102,7 @@ jobs:
apple_developer_identity: ${{ secrets.APPLE_DEVELOPER_IDENTITY }}
apple_developer_id_bundle: ${{ secrets.APPLE_DEVELOPER_ID_BUNDLE }}
apple_developer_id_bundle_password: ${{ secrets.APPLE_DEVELOPER_ID_BUNDLE_PASSWORD }}
- uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3
- uses: actions/upload-artifact@89ef406dd8d7e03cfd12d9e0a4a378f454709029 # v4.3.5
if: success()
with:
name: ${{ matrix.profile }}-${{ matrix.os }}


@@ -41,13 +41,13 @@ jobs:
- name: build pkg
run: |
./scripts/buildx.sh --profile $PROFILE --pkgtype pkg --elixir $ELIXIR --arch $ARCH
- uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3
- uses: actions/upload-artifact@89ef406dd8d7e03cfd12d9e0a4a378f454709029 # v4.3.5
with:
name: "${{ matrix.profile[0] }}-${{ matrix.profile[1] }}-${{ matrix.profile[2] }}"
path: _packages/${{ matrix.profile[0] }}/*
retention-days: 7
compression-level: 0
- uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3
- uses: actions/upload-artifact@89ef406dd8d7e03cfd12d9e0a4a378f454709029 # v4.3.5
with:
name: "${{ matrix.profile[0] }}-schema-dump-${{ matrix.profile[1] }}-${{ matrix.profile[2] }}"
path: |
@@ -84,7 +84,7 @@ jobs:
apple_developer_identity: ${{ secrets.APPLE_DEVELOPER_IDENTITY }}
apple_developer_id_bundle: ${{ secrets.APPLE_DEVELOPER_ID_BUNDLE }}
apple_developer_id_bundle_password: ${{ secrets.APPLE_DEVELOPER_ID_BUNDLE_PASSWORD }}
- uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3
- uses: actions/upload-artifact@89ef406dd8d7e03cfd12d9e0a4a378f454709029 # v4.3.5
with:
name: ${{ matrix.os }}
path: _packages/**/*


@@ -37,7 +37,7 @@ jobs:
- run: ./scripts/check-elixir-deps-discrepancies.exs
- run: ./scripts/check-elixir-applications.exs
- name: Upload produced lock files
uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3
uses: actions/upload-artifact@89ef406dd8d7e03cfd12d9e0a4a378f454709029 # v4.3.5
if: failure()
with:
name: ${{ matrix.profile }}_produced_lock_files


@@ -52,7 +52,7 @@ jobs:
id: package_file
run: |
echo "PACKAGE_FILE=$(find _packages/emqx -name 'emqx-*.deb' | head -n 1 | xargs basename)" >> $GITHUB_OUTPUT
- uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3
- uses: actions/upload-artifact@89ef406dd8d7e03cfd12d9e0a4a378f454709029 # v4.3.5
with:
name: emqx-ubuntu20.04
path: _packages/emqx/${{ steps.package_file.outputs.PACKAGE_FILE }}
@@ -77,7 +77,7 @@ jobs:
repository: emqx/tf-emqx-performance-test
path: tf-emqx-performance-test
ref: v0.2.3
- uses: actions/download-artifact@65a9edc5881444af0b9093a5e628f2fe47ea3b2e # v4.1.7
- uses: actions/download-artifact@fa0a91b85d4f404e444e00e005971372dc801d16 # v4.1.8
with:
name: emqx-ubuntu20.04
path: tf-emqx-performance-test/
@@ -113,13 +113,13 @@ jobs:
working-directory: ./tf-emqx-performance-test
run: |
terraform destroy -auto-approve
- uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3
- uses: actions/upload-artifact@89ef406dd8d7e03cfd12d9e0a4a378f454709029 # v4.3.5
if: success()
with:
name: metrics
path: |
"./tf-emqx-performance-test/*.tar.gz"
- uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3
- uses: actions/upload-artifact@89ef406dd8d7e03cfd12d9e0a4a378f454709029 # v4.3.5
if: failure()
with:
name: terraform
@@ -148,7 +148,7 @@ jobs:
repository: emqx/tf-emqx-performance-test
path: tf-emqx-performance-test
ref: v0.2.3
- uses: actions/download-artifact@65a9edc5881444af0b9093a5e628f2fe47ea3b2e # v4.1.7
- uses: actions/download-artifact@fa0a91b85d4f404e444e00e005971372dc801d16 # v4.1.8
with:
name: emqx-ubuntu20.04
path: tf-emqx-performance-test/
@@ -184,13 +184,13 @@ jobs:
working-directory: ./tf-emqx-performance-test
run: |
terraform destroy -auto-approve
- uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3
- uses: actions/upload-artifact@89ef406dd8d7e03cfd12d9e0a4a378f454709029 # v4.3.5
if: success()
with:
name: metrics
path: |
"./tf-emqx-performance-test/*.tar.gz"
- uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3
- uses: actions/upload-artifact@89ef406dd8d7e03cfd12d9e0a4a378f454709029 # v4.3.5
if: failure()
with:
name: terraform
@@ -220,7 +220,7 @@ jobs:
repository: emqx/tf-emqx-performance-test
path: tf-emqx-performance-test
ref: v0.2.3
- uses: actions/download-artifact@65a9edc5881444af0b9093a5e628f2fe47ea3b2e # v4.1.7
- uses: actions/download-artifact@fa0a91b85d4f404e444e00e005971372dc801d16 # v4.1.8
with:
name: emqx-ubuntu20.04
path: tf-emqx-performance-test/
@@ -257,13 +257,13 @@ jobs:
working-directory: ./tf-emqx-performance-test
run: |
terraform destroy -auto-approve
- uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3
- uses: actions/upload-artifact@89ef406dd8d7e03cfd12d9e0a4a378f454709029 # v4.3.5
if: success()
with:
name: metrics
path: |
"./tf-emqx-performance-test/*.tar.gz"
- uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3
- uses: actions/upload-artifact@89ef406dd8d7e03cfd12d9e0a4a378f454709029 # v4.3.5
if: failure()
with:
name: terraform
@@ -294,7 +294,7 @@ jobs:
repository: emqx/tf-emqx-performance-test
path: tf-emqx-performance-test
ref: v0.2.3
- uses: actions/download-artifact@65a9edc5881444af0b9093a5e628f2fe47ea3b2e # v4.1.7
- uses: actions/download-artifact@fa0a91b85d4f404e444e00e005971372dc801d16 # v4.1.8
with:
name: emqx-ubuntu20.04
path: tf-emqx-performance-test/
@@ -330,13 +330,13 @@ jobs:
working-directory: ./tf-emqx-performance-test
run: |
terraform destroy -auto-approve
- uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3
- uses: actions/upload-artifact@89ef406dd8d7e03cfd12d9e0a4a378f454709029 # v4.3.5
if: success()
with:
name: metrics
path: |
"./tf-emqx-performance-test/*.tar.gz"
- uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3
- uses: actions/upload-artifact@89ef406dd8d7e03cfd12d9e0a4a378f454709029 # v4.3.5
if: failure()
with:
name: terraform


@@ -25,7 +25,7 @@ jobs:
- emqx
- emqx-enterprise
steps:
- uses: actions/download-artifact@65a9edc5881444af0b9093a5e628f2fe47ea3b2e # v4.1.7
- uses: actions/download-artifact@fa0a91b85d4f404e444e00e005971372dc801d16 # v4.1.8
with:
name: ${{ matrix.profile }}
- name: extract artifact
@@ -40,7 +40,7 @@ jobs:
if: failure()
run: |
cat _build/${{ matrix.profile }}/rel/emqx/log/erlang.log.*
- uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3
- uses: actions/upload-artifact@89ef406dd8d7e03cfd12d9e0a4a378f454709029 # v4.3.5
if: failure()
with:
name: conftest-logs-${{ matrix.profile }}


@@ -35,7 +35,7 @@ jobs:
source env.sh
PKG_VSN=$(docker run --rm -v $(pwd):$(pwd) -w $(pwd) -u $(id -u) "$EMQX_BUILDER" ./pkg-vsn.sh "$EMQX_NAME")
echo "PKG_VSN=$PKG_VSN" >> "$GITHUB_ENV"
- uses: actions/download-artifact@65a9edc5881444af0b9093a5e628f2fe47ea3b2e # v4.1.7
- uses: actions/download-artifact@fa0a91b85d4f404e444e00e005971372dc801d16 # v4.1.8
with:
name: ${{ env.EMQX_NAME }}-docker
path: /tmp
@@ -90,7 +90,7 @@ jobs:
fi
PKG_VSN=$(docker run --rm -v $(pwd):$(pwd) -w $(pwd) -u $(id -u) "$EMQX_BUILDER" ./pkg-vsn.sh "$EMQX_NAME")
echo "PKG_VSN=$PKG_VSN" >> "$GITHUB_ENV"
- uses: actions/download-artifact@65a9edc5881444af0b9093a5e628f2fe47ea3b2e # v4.1.7
- uses: actions/download-artifact@fa0a91b85d4f404e444e00e005971372dc801d16 # v4.1.8
with:
name: ${{ env.EMQX_NAME }}-docker
path: /tmp


@@ -95,7 +95,7 @@ jobs:
echo "Suites: $SUITES"
./rebar3 as standalone_test ct --name 'test@127.0.0.1' -v --readable=true --suite="$SUITES"
fi
- uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3
- uses: actions/upload-artifact@89ef406dd8d7e03cfd12d9e0a4a378f454709029 # v4.3.5
if: failure()
with:
name: logs-emqx-app-tests-${{ matrix.type }}


@@ -44,7 +44,7 @@ jobs:
source env.sh
PKG_VSN=$(docker run --rm -v $(pwd):$(pwd) -w $(pwd) -u $(id -u) "$EMQX_BUILDER" ./pkg-vsn.sh "$EMQX_NAME")
echo "EMQX_TAG=$PKG_VSN" >> "$GITHUB_ENV"
- uses: actions/download-artifact@65a9edc5881444af0b9093a5e628f2fe47ea3b2e # v4.1.7
- uses: actions/download-artifact@fa0a91b85d4f404e444e00e005971372dc801d16 # v4.1.8
with:
name: "${{ env.EMQX_NAME }}-docker"
path: /tmp


@@ -31,7 +31,7 @@ jobs:
else
wget --no-verbose --no-check-certificate -O /tmp/apache-jmeter.tgz $ARCHIVE_URL
fi
- uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3
- uses: actions/upload-artifact@89ef406dd8d7e03cfd12d9e0a4a378f454709029 # v4.3.5
with:
name: apache-jmeter.tgz
path: /tmp/apache-jmeter.tgz
@@ -58,7 +58,7 @@ jobs:
source env.sh
PKG_VSN=$(docker run --rm -v $(pwd):$(pwd) -w $(pwd) -u $(id -u) "$EMQX_BUILDER" ./pkg-vsn.sh emqx)
echo "PKG_VSN=$PKG_VSN" >> "$GITHUB_ENV"
- uses: actions/download-artifact@65a9edc5881444af0b9093a5e628f2fe47ea3b2e # v4.1.7
- uses: actions/download-artifact@fa0a91b85d4f404e444e00e005971372dc801d16 # v4.1.8
with:
name: emqx-docker
path: /tmp
@@ -95,7 +95,7 @@ jobs:
echo "check logs failed"
exit 1
fi
- uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3
- uses: actions/upload-artifact@89ef406dd8d7e03cfd12d9e0a4a378f454709029 # v4.3.5
if: always()
with:
name: jmeter_logs-advanced_feat-${{ matrix.scripts_type }}
@@ -127,7 +127,7 @@ jobs:
source env.sh
PKG_VSN=$(docker run --rm -v $(pwd):$(pwd) -w $(pwd) -u $(id -u) "$EMQX_BUILDER" ./pkg-vsn.sh emqx)
echo "PKG_VSN=$PKG_VSN" >> "$GITHUB_ENV"
- uses: actions/download-artifact@65a9edc5881444af0b9093a5e628f2fe47ea3b2e # v4.1.7
- uses: actions/download-artifact@fa0a91b85d4f404e444e00e005971372dc801d16 # v4.1.8
with:
name: emqx-docker
path: /tmp
@@ -175,7 +175,7 @@ jobs:
if: failure()
run: |
docker compose -f .ci/docker-compose-file/docker-compose-emqx-cluster.yaml logs --no-color > ./jmeter_logs/emqx.log
- uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3
- uses: actions/upload-artifact@89ef406dd8d7e03cfd12d9e0a4a378f454709029 # v4.3.5
if: always()
with:
name: jmeter_logs-pgsql_authn_authz-${{ matrix.scripts_type }}_${{ matrix.pgsql_tag }}
@@ -204,7 +204,7 @@ jobs:
source env.sh
PKG_VSN=$(docker run --rm -v $(pwd):$(pwd) -w $(pwd) -u $(id -u) "$EMQX_BUILDER" ./pkg-vsn.sh emqx)
echo "PKG_VSN=$PKG_VSN" >> "$GITHUB_ENV"
- uses: actions/download-artifact@65a9edc5881444af0b9093a5e628f2fe47ea3b2e # v4.1.7
- uses: actions/download-artifact@fa0a91b85d4f404e444e00e005971372dc801d16 # v4.1.8
with:
name: emqx-docker
path: /tmp
@@ -248,7 +248,7 @@ jobs:
echo "check logs failed"
exit 1
fi
- uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3
- uses: actions/upload-artifact@89ef406dd8d7e03cfd12d9e0a4a378f454709029 # v4.3.5
if: always()
with:
name: jmeter_logs-mysql_authn_authz-${{ matrix.scripts_type }}_${{ matrix.mysql_tag }}
@@ -273,7 +273,7 @@ jobs:
source env.sh
PKG_VSN=$(docker run --rm -v $(pwd):$(pwd) -w $(pwd) -u $(id -u) "$EMQX_BUILDER" ./pkg-vsn.sh emqx)
echo "PKG_VSN=$PKG_VSN" >> "$GITHUB_ENV"
- uses: actions/download-artifact@65a9edc5881444af0b9093a5e628f2fe47ea3b2e # v4.1.7
- uses: actions/download-artifact@fa0a91b85d4f404e444e00e005971372dc801d16 # v4.1.8
with:
name: emqx-docker
path: /tmp
@@ -313,7 +313,7 @@ jobs:
echo "check logs failed"
exit 1
fi
- uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3
- uses: actions/upload-artifact@89ef406dd8d7e03cfd12d9e0a4a378f454709029 # v4.3.5
if: always()
with:
name: jmeter_logs-JWT_authn-${{ matrix.scripts_type }}
@@ -339,7 +339,7 @@ jobs:
source env.sh
PKG_VSN=$(docker run --rm -v $(pwd):$(pwd) -w $(pwd) -u $(id -u) "$EMQX_BUILDER" ./pkg-vsn.sh emqx)
echo "PKG_VSN=$PKG_VSN" >> "$GITHUB_ENV"
- uses: actions/download-artifact@65a9edc5881444af0b9093a5e628f2fe47ea3b2e # v4.1.7
- uses: actions/download-artifact@fa0a91b85d4f404e444e00e005971372dc801d16 # v4.1.8
with:
name: emqx-docker
path: /tmp
@@ -370,7 +370,7 @@ jobs:
echo "check logs failed"
exit 1
fi
- uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3
- uses: actions/upload-artifact@89ef406dd8d7e03cfd12d9e0a4a378f454709029 # v4.3.5
if: always()
with:
name: jmeter_logs-built_in_database_authn_authz-${{ matrix.scripts_type }}


@@ -25,7 +25,7 @@ jobs:
run:
shell: bash
steps:
- uses: actions/download-artifact@65a9edc5881444af0b9093a5e628f2fe47ea3b2e # v4.1.7
- uses: actions/download-artifact@fa0a91b85d4f404e444e00e005971372dc801d16 # v4.1.8
with:
name: emqx-enterprise
- name: extract artifact
@@ -45,7 +45,7 @@ jobs:
run: |
export PROFILE='emqx-enterprise'
make emqx-enterprise-tgz
- uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3
- uses: actions/upload-artifact@89ef406dd8d7e03cfd12d9e0a4a378f454709029 # v4.3.5
name: Upload built emqx and test scenario
with:
name: relup_tests_emqx_built
@@ -72,7 +72,7 @@ jobs:
run:
shell: bash
steps:
- uses: erlef/setup-beam@a6e26b22319003294c58386b6f25edbc7336819a # v1.18.0
- uses: erlef/setup-beam@b9c58b0450cd832ccdb3c17cc156a47065d2114f # v1.18.1
with:
otp-version: 26.2.5
- uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
@@ -88,7 +88,7 @@ jobs:
./configure
make
echo "$(pwd)/bin" >> $GITHUB_PATH
- uses: actions/download-artifact@65a9edc5881444af0b9093a5e628f2fe47ea3b2e # v4.1.7
- uses: actions/download-artifact@fa0a91b85d4f404e444e00e005971372dc801d16 # v4.1.8
name: Download built emqx and test scenario
with:
name: relup_tests_emqx_built
@@ -111,7 +111,7 @@ jobs:
docker logs node2.emqx.io | tee lux_logs/emqx2.log
exit 1
fi
- uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3
- uses: actions/upload-artifact@89ef406dd8d7e03cfd12d9e0a4a378f454709029 # v4.3.5
name: Save debug data
if: failure()
with:


@@ -46,7 +46,7 @@ jobs:
contents: read
steps:
- uses: actions/download-artifact@65a9edc5881444af0b9093a5e628f2fe47ea3b2e # v4.1.7
- uses: actions/download-artifact@fa0a91b85d4f404e444e00e005971372dc801d16 # v4.1.8
with:
name: ${{ matrix.profile }}
@@ -90,7 +90,7 @@ jobs:
contents: read
steps:
- uses: actions/download-artifact@65a9edc5881444af0b9093a5e628f2fe47ea3b2e # v4.1.7
- uses: actions/download-artifact@fa0a91b85d4f404e444e00e005971372dc801d16 # v4.1.8
with:
name: ${{ matrix.profile }}
- name: extract artifact
@@ -133,7 +133,7 @@ jobs:
if: failure()
run: tar -czf logs.tar.gz _build/test/logs
- uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3
- uses: actions/upload-artifact@89ef406dd8d7e03cfd12d9e0a4a378f454709029 # v4.3.5
if: failure()
with:
name: logs-${{ matrix.profile }}-${{ matrix.prefix }}-sg${{ matrix.suitegroup }}
@@ -164,7 +164,7 @@ jobs:
CT_COVER_EXPORT_PREFIX: ${{ matrix.profile }}-sg${{ matrix.suitegroup }}
steps:
- uses: actions/download-artifact@65a9edc5881444af0b9093a5e628f2fe47ea3b2e # v4.1.7
- uses: actions/download-artifact@fa0a91b85d4f404e444e00e005971372dc801d16 # v4.1.8
with:
name: ${{ matrix.profile }}
- name: extract artifact
@ -193,7 +193,7 @@ jobs:
if: failure()
run: tar -czf logs.tar.gz _build/test/logs
- uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3
- uses: actions/upload-artifact@89ef406dd8d7e03cfd12d9e0a4a378f454709029 # v4.3.5
if: failure()
with:
name: logs-${{ matrix.profile }}-${{ matrix.prefix }}-sg${{ matrix.suitegroup }}


@ -30,7 +30,7 @@ jobs:
persist-credentials: false
- name: "Run analysis"
uses: ossf/scorecard-action@dc50aa9510b46c811795eb24b2f1ba02a914e534 # v2.3.3
uses: ossf/scorecard-action@62b2cac7ed8198b15735ed49ab1e5cf35480ba46 # v2.4.0
with:
results_file: results.sarif
results_format: sarif
@ -40,7 +40,7 @@ jobs:
publish_results: true
- name: "Upload artifact"
uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3
uses: actions/upload-artifact@89ef406dd8d7e03cfd12d9e0a4a378f454709029 # v4.3.5
with:
name: SARIF file
path: results.sarif


@ -19,7 +19,7 @@ jobs:
- emqx-enterprise
runs-on: ${{ endsWith(github.repository, '/emqx') && 'ubuntu-22.04' || fromJSON('["self-hosted","ephemeral","linux","x64"]') }}
steps:
- uses: actions/download-artifact@65a9edc5881444af0b9093a5e628f2fe47ea3b2e # v4.1.7
- uses: actions/download-artifact@fa0a91b85d4f404e444e00e005971372dc801d16 # v4.1.8
with:
pattern: "${{ matrix.profile }}-schema-dump-*-x64"
merge-multiple: true


@ -30,7 +30,7 @@ jobs:
include: ${{ fromJson(inputs.ct-matrix) }}
container: "${{ inputs.builder }}"
steps:
- uses: actions/download-artifact@65a9edc5881444af0b9093a5e628f2fe47ea3b2e # v4.1.7
- uses: actions/download-artifact@fa0a91b85d4f404e444e00e005971372dc801d16 # v4.1.8
with:
name: ${{ matrix.profile }}
- name: extract artifact


@ -34,7 +34,7 @@ jobs:
pull-requests: write
steps:
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633 # v4.1.2
- uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
fetch-depth: 0


@ -683,6 +683,7 @@ end).
-define(FRAME_PARSE_ERROR, frame_parse_error).
-define(FRAME_SERIALIZE_ERROR, frame_serialize_error).
-define(THROW_FRAME_ERROR(Reason), erlang:throw({?FRAME_PARSE_ERROR, Reason})).
-define(THROW_SERIALIZE_ERROR(Reason), erlang:throw({?FRAME_SERIALIZE_ERROR, Reason})).


@ -91,7 +91,7 @@
?_DO_TRACE(Tag, Msg, Meta),
?SLOG(
Level,
(emqx_trace_formatter:format_meta_map(Meta))#{msg => Msg, tag => Tag},
(Meta)#{msg => Msg, tag => Tag},
#{is_trace => false}
)
end).


@ -2,7 +2,7 @@
{application, emqx, [
{id, "emqx"},
{description, "EMQX Core"},
{vsn, "5.3.3"},
{vsn, "5.3.4"},
{modules, []},
{registered, []},
{applications, [


@ -146,7 +146,9 @@
-type replies() :: emqx_types:packet() | reply() | [reply()].
-define(IS_MQTT_V5, #channel{conninfo = #{proto_ver := ?MQTT_PROTO_V5}}).
-define(IS_CONNECTED_OR_REAUTHENTICATING(ConnState),
((ConnState == connected) orelse (ConnState == reauthenticating))
).
-define(IS_COMMON_SESSION_TIMER(N),
((N == retry_delivery) orelse (N == expire_awaiting_rel))
).
@ -337,7 +339,7 @@ take_conn_info_fields(Fields, ClientInfo, ConnInfo) ->
| {shutdown, Reason :: term(), channel()}
| {shutdown, Reason :: term(), replies(), channel()}.
handle_in(?CONNECT_PACKET(), Channel = #channel{conn_state = ConnState}) when
ConnState =:= connected orelse ConnState =:= reauthenticating
?IS_CONNECTED_OR_REAUTHENTICATING(ConnState)
->
handle_out(disconnect, ?RC_PROTOCOL_ERROR, Channel);
handle_in(?CONNECT_PACKET(), Channel = #channel{conn_state = connecting}) ->
@ -567,29 +569,8 @@ handle_in(
process_disconnect(ReasonCode, Properties, NChannel);
handle_in(?AUTH_PACKET(), Channel) ->
handle_out(disconnect, ?RC_IMPLEMENTATION_SPECIFIC_ERROR, Channel);
handle_in({frame_error, Reason}, Channel = #channel{conn_state = idle}) ->
shutdown(shutdown_count(frame_error, Reason), Channel);
handle_in(
{frame_error, #{cause := frame_too_large} = R}, Channel = #channel{conn_state = connecting}
) ->
shutdown(
shutdown_count(frame_error, R), ?CONNACK_PACKET(?RC_PACKET_TOO_LARGE), Channel
);
handle_in({frame_error, Reason}, Channel = #channel{conn_state = connecting}) ->
shutdown(shutdown_count(frame_error, Reason), ?CONNACK_PACKET(?RC_MALFORMED_PACKET), Channel);
handle_in(
{frame_error, #{cause := frame_too_large}}, Channel = #channel{conn_state = ConnState}
) when
ConnState =:= connected orelse ConnState =:= reauthenticating
->
handle_out(disconnect, {?RC_PACKET_TOO_LARGE, frame_too_large}, Channel);
handle_in({frame_error, Reason}, Channel = #channel{conn_state = ConnState}) when
ConnState =:= connected orelse ConnState =:= reauthenticating
->
handle_out(disconnect, {?RC_MALFORMED_PACKET, Reason}, Channel);
handle_in({frame_error, Reason}, Channel = #channel{conn_state = disconnected}) ->
?SLOG(error, #{msg => "malformed_mqtt_message", reason => Reason}),
{ok, Channel};
handle_in({frame_error, Reason}, Channel) ->
handle_frame_error(Reason, Channel);
handle_in(Packet, Channel) ->
?SLOG(error, #{msg => "disconnecting_due_to_unexpected_message", packet => Packet}),
handle_out(disconnect, ?RC_PROTOCOL_ERROR, Channel).
@ -1021,6 +1002,68 @@ not_nacked({deliver, _Topic, Msg}) ->
true
end.
%%--------------------------------------------------------------------
%% Handle Frame Error
%%--------------------------------------------------------------------
handle_frame_error(
Reason = #{cause := frame_too_large},
Channel = #channel{conn_state = ConnState, conninfo = ConnInfo}
) when
?IS_CONNECTED_OR_REAUTHENTICATING(ConnState)
->
ShutdownCount = shutdown_count(frame_error, Reason),
case proto_ver(Reason, ConnInfo) of
?MQTT_PROTO_V5 ->
handle_out(disconnect, {?RC_PACKET_TOO_LARGE, frame_too_large}, Channel);
_ ->
shutdown(ShutdownCount, Channel)
end;
%% Only send CONNACK with reason code `frame_too_large` for MQTT-v5.0 when connecting,
%% otherwise DO NOT send any CONNACK or DISCONNECT packet.
handle_frame_error(
Reason,
Channel = #channel{conn_state = ConnState, conninfo = ConnInfo}
) when
is_map(Reason) andalso
(ConnState == idle orelse ConnState == connecting)
->
ShutdownCount = shutdown_count(frame_error, Reason),
ProtoVer = proto_ver(Reason, ConnInfo),
NChannel = Channel#channel{conninfo = ConnInfo#{proto_ver => ProtoVer}},
case ProtoVer of
?MQTT_PROTO_V5 ->
shutdown(ShutdownCount, ?CONNACK_PACKET(?RC_PACKET_TOO_LARGE), NChannel);
_ ->
shutdown(ShutdownCount, NChannel)
end;
handle_frame_error(
Reason,
Channel = #channel{conn_state = connecting}
) ->
shutdown(
shutdown_count(frame_error, Reason),
?CONNACK_PACKET(?RC_MALFORMED_PACKET),
Channel
);
handle_frame_error(
Reason,
Channel = #channel{conn_state = ConnState}
) when
?IS_CONNECTED_OR_REAUTHENTICATING(ConnState)
->
handle_out(
disconnect,
{?RC_MALFORMED_PACKET, Reason},
Channel
);
handle_frame_error(
Reason,
Channel = #channel{conn_state = disconnected}
) ->
?SLOG(error, #{msg => "malformed_mqtt_message", reason => Reason}),
{ok, Channel}.
%%--------------------------------------------------------------------
%% Handle outgoing packet
%%--------------------------------------------------------------------
@ -1289,7 +1332,7 @@ handle_info(
session = Session
}
) when
ConnState =:= connected orelse ConnState =:= reauthenticating
?IS_CONNECTED_OR_REAUTHENTICATING(ConnState)
->
{Intent, Session1} = session_disconnect(ClientInfo, ConnInfo, Session),
Channel1 = ensure_disconnected(Reason, maybe_publish_will_msg(sock_closed, Channel)),
@ -2636,8 +2679,7 @@ save_alias(outbound, AliasId, Topic, TopicAliases = #{outbound := Aliases}) ->
NAliases = maps:put(Topic, AliasId, Aliases),
TopicAliases#{outbound => NAliases}.
-compile({inline, [reply/2, shutdown/2, shutdown/3, sp/1, flag/1]}).
-compile({inline, [reply/2, shutdown/2, shutdown/3]}).
reply(Reply, Channel) ->
{reply, Reply, Channel}.
@ -2673,13 +2715,13 @@ disconnect_and_shutdown(
?IS_MQTT_V5 =
#channel{conn_state = ConnState}
) when
ConnState =:= connected orelse ConnState =:= reauthenticating
?IS_CONNECTED_OR_REAUTHENTICATING(ConnState)
->
NChannel = ensure_disconnected(Reason, Channel),
shutdown(Reason, Reply, ?DISCONNECT_PACKET(reason_code(Reason)), NChannel);
%% mqtt v3/v4 connected sessions
disconnect_and_shutdown(Reason, Reply, Channel = #channel{conn_state = ConnState}) when
ConnState =:= connected orelse ConnState =:= reauthenticating
?IS_CONNECTED_OR_REAUTHENTICATING(ConnState)
->
NChannel = ensure_disconnected(Reason, Channel),
shutdown(Reason, Reply, NChannel);
@ -2722,6 +2764,13 @@ is_durable_session(#channel{session = Session}) ->
false
end.
proto_ver(#{proto_ver := ProtoVer}, _ConnInfo) ->
ProtoVer;
proto_ver(_Reason, #{proto_ver := ProtoVer}) ->
ProtoVer;
proto_ver(_, _) ->
?MQTT_PROTO_V4.
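
The `proto_ver/2` fallback chain above reads: prefer the version recorded in the parse-error reason map, then the one in `conninfo`, and default to v4. A minimal standalone sketch (the numeric value 4 for `?MQTT_PROTO_V4` is assumed from `emqx_mqtt.hrl`; this is not the emqx module itself):

```erlang
-module(proto_ver_sketch).
-export([proto_ver/2]).

%% Assumed value, as defined in emqx_mqtt.hrl.
-define(MQTT_PROTO_V4, 4).

%% The parse-error reason map wins, then conninfo, then the v4 default.
proto_ver(#{proto_ver := ProtoVer}, _ConnInfo) -> ProtoVer;
proto_ver(_Reason, #{proto_ver := ProtoVer}) -> ProtoVer;
proto_ver(_, _) -> ?MQTT_PROTO_V4.
```

For example, `proto_ver(#{cause => frame_too_large}, #{proto_ver => 5})` falls back to the `conninfo` version, while two maps without the key yield the v4 default.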
%%--------------------------------------------------------------------
%% For CT tests
%%--------------------------------------------------------------------


@ -783,7 +783,8 @@ parse_incoming(Data, Packets, State = #state{parse_state = ParseState}) ->
input_bytes => Data,
parsed_packets => Packets
}),
{[{frame_error, Reason} | Packets], State};
NState = enrich_state(Reason, State),
{[{frame_error, Reason} | Packets], NState};
error:Reason:Stacktrace ->
?LOG(error, #{
at_state => emqx_frame:describe_state(ParseState),
@ -1227,6 +1228,12 @@ inc_counter(Key, Inc) ->
_ = emqx_pd:inc_counter(Key, Inc),
ok.
enrich_state(#{parse_state := NParseState}, State) ->
Serialize = emqx_frame:serialize_opts(NParseState),
State#state{parse_state = NParseState, serialize = Serialize};
enrich_state(_, State) ->
State.
set_tcp_keepalive({quic, _Listener}) ->
ok;
set_tcp_keepalive({Type, Id}) ->


@ -267,28 +267,50 @@ packet(Header, Variable) ->
packet(Header, Variable, Payload) ->
#mqtt_packet{header = Header, variable = Variable, payload = Payload}.
parse_connect(FrameBin, StrictMode) ->
{ProtoName, Rest} = parse_utf8_string_with_cause(FrameBin, StrictMode, invalid_proto_name),
case ProtoName of
<<"MQTT">> ->
ok;
<<"MQIsdp">> ->
ok;
_ ->
%% from spec: the server MAY send disconnect with reason code 0x84
%% we chose to close socket because the client is likely not talking MQTT anyway
?PARSE_ERR(#{
cause => invalid_proto_name,
expected => <<"'MQTT' or 'MQIsdp'">>,
received => ProtoName
})
end,
parse_connect2(ProtoName, Rest, StrictMode).
parse_connect(FrameBin, Options = #{strict_mode := StrictMode}) ->
{ProtoName, Rest0} = parse_utf8_string_with_cause(FrameBin, StrictMode, invalid_proto_name),
%% No need to parse and check proto_ver if proto_name is invalid; check proto_name first.
%% The consistency check between the `proto_name` and `proto_ver` fields is done later in `emqx_packet:check_proto_ver/2`
_ = validate_proto_name(ProtoName),
{IsBridge, ProtoVer, Rest2} = parse_connect_proto_ver(Rest0),
NOptions = Options#{version => ProtoVer},
try
do_parse_connect(ProtoName, IsBridge, ProtoVer, Rest2, StrictMode)
catch
throw:{?FRAME_PARSE_ERROR, ReasonM} when is_map(ReasonM) ->
?PARSE_ERR(
ReasonM#{
proto_ver => ProtoVer,
proto_name => ProtoName,
parse_state => ?NONE(NOptions)
}
);
throw:{?FRAME_PARSE_ERROR, Reason} ->
?PARSE_ERR(
#{
cause => Reason,
proto_ver => ProtoVer,
proto_name => ProtoName,
parse_state => ?NONE(NOptions)
}
)
end.
parse_connect2(
do_parse_connect(
ProtoName,
<<BridgeTag:4, ProtoVer:4, UsernameFlagB:1, PasswordFlagB:1, WillRetainB:1, WillQoS:2,
WillFlagB:1, CleanStart:1, Reserved:1, KeepAlive:16/big, Rest2/binary>>,
IsBridge,
ProtoVer,
<<
UsernameFlagB:1,
PasswordFlagB:1,
WillRetainB:1,
WillQoS:2,
WillFlagB:1,
CleanStart:1,
Reserved:1,
KeepAlive:16/big,
Rest/binary
>>,
StrictMode
) ->
_ = validate_connect_reserved(Reserved),
@ -303,14 +325,14 @@ parse_connect2(
UsernameFlag = bool(UsernameFlagB),
PasswordFlag = bool(PasswordFlagB)
),
{Properties, Rest3} = parse_properties(Rest2, ProtoVer, StrictMode),
{Properties, Rest3} = parse_properties(Rest, ProtoVer, StrictMode),
{ClientId, Rest4} = parse_utf8_string_with_cause(Rest3, StrictMode, invalid_clientid),
ConnPacket = #mqtt_packet_connect{
proto_name = ProtoName,
proto_ver = ProtoVer,
%% For bridge mode, non-standard implementation
%% Invented by mosquitto, named 'try_private': https://mosquitto.org/man/mosquitto-conf-5.html
is_bridge = (BridgeTag =:= 8),
is_bridge = IsBridge,
clean_start = bool(CleanStart),
will_flag = WillFlag,
will_qos = WillQoS,
@ -343,16 +365,16 @@ parse_connect2(
unexpected_trailing_bytes => size(Rest7)
})
end;
parse_connect2(_ProtoName, Bin, _StrictMode) ->
%% sent less than 32 bytes
do_parse_connect(_ProtoName, _IsBridge, _ProtoVer, Bin, _StrictMode) ->
%% sent less than 24 bytes
?PARSE_ERR(#{cause => malformed_connect, header_bytes => Bin}).
parse_packet(
#mqtt_packet_header{type = ?CONNECT},
FrameBin,
#{strict_mode := StrictMode}
Options
) ->
parse_connect(FrameBin, StrictMode);
parse_connect(FrameBin, Options);
parse_packet(
#mqtt_packet_header{type = ?CONNACK},
<<AckFlags:8, ReasonCode:8, Rest/binary>>,
@ -516,6 +538,12 @@ parse_packet_id(<<PacketId:16/big, Rest/binary>>) ->
parse_packet_id(_) ->
?PARSE_ERR(invalid_packet_id).
parse_connect_proto_ver(<<BridgeTag:4, ProtoVer:4, Rest/binary>>) ->
{_IsBridge = (BridgeTag =:= 8), ProtoVer, Rest};
parse_connect_proto_ver(Bin) ->
%% sent less than 1 byte or empty
?PARSE_ERR(#{cause => malformed_connect, header_bytes => Bin}).
parse_properties(Bin, Ver, _StrictMode) when Ver =/= ?MQTT_PROTO_V5 ->
{#{}, Bin};
%% TODO: version mess?
@ -739,6 +767,8 @@ serialize_fun(#{version := Ver, max_size := MaxSize, strict_mode := StrictMode})
initial_serialize_opts(Opts) ->
maps:merge(?DEFAULT_OPTIONS, Opts).
serialize_opts(?NONE(Options)) ->
maps:merge(?DEFAULT_OPTIONS, Options);
serialize_opts(#mqtt_packet_connect{proto_ver = ProtoVer, properties = ConnProps}) ->
MaxSize = get_property('Maximum-Packet-Size', ConnProps, ?MAX_PACKET_SIZE),
#{version => ProtoVer, max_size => MaxSize, strict_mode => false}.
@ -1157,18 +1187,34 @@ validate_subqos([3 | _]) -> ?PARSE_ERR(bad_subqos);
validate_subqos([_ | T]) -> validate_subqos(T);
validate_subqos([]) -> ok.
%% from spec: the server MAY send disconnect with reason code 0x84
%% we chose to close socket because the client is likely not talking MQTT anyway
validate_proto_name(<<"MQTT">>) ->
ok;
validate_proto_name(<<"MQIsdp">>) ->
ok;
validate_proto_name(ProtoName) ->
?PARSE_ERR(#{
cause => invalid_proto_name,
expected => <<"'MQTT' or 'MQIsdp'">>,
received => ProtoName
}).
%% MQTT-v3.1.1-[MQTT-3.1.2-3], MQTT-v5.0-[MQTT-3.1.2-3]
-compile({inline, [validate_connect_reserved/1]}).
validate_connect_reserved(0) -> ok;
validate_connect_reserved(1) -> ?PARSE_ERR(reserved_connect_flag).
-compile({inline, [validate_connect_will/3]}).
%% MQTT-v3.1.1-[MQTT-3.1.2-13], MQTT-v5.0-[MQTT-3.1.2-11]
validate_connect_will(false, _, WillQos) when WillQos > 0 -> ?PARSE_ERR(invalid_will_qos);
validate_connect_will(false, _, WillQoS) when WillQoS > 0 -> ?PARSE_ERR(invalid_will_qos);
%% MQTT-v3.1.1-[MQTT-3.1.2-14], MQTT-v5.0-[MQTT-3.1.2-12]
validate_connect_will(true, _, WillQoS) when WillQoS > 2 -> ?PARSE_ERR(invalid_will_qos);
%% MQTT-v3.1.1-[MQTT-3.1.2-15], MQTT-v5.0-[MQTT-3.1.2-13]
validate_connect_will(false, WillRetain, _) when WillRetain -> ?PARSE_ERR(invalid_will_retain);
validate_connect_will(_, _, _) -> ok.
-compile({inline, [validate_connect_password_flag/4]}).
%% MQTT-v3.1
%% Username flag and password flag are not strongly related
%% https://public.dhe.ibm.com/software/dw/webservices/ws-mqtt/mqtt-v3r1.html#connect
@ -1183,6 +1229,7 @@ validate_connect_password_flag(true, ?MQTT_PROTO_V5, _, _) ->
validate_connect_password_flag(_, _, _, _) ->
ok.
-compile({inline, [bool/1]}).
bool(0) -> false;
bool(1) -> true.


@ -432,7 +432,7 @@ do_start_listener(Type, Name, Id, #{bind := ListenOn} = Opts) when ?ESOCKD_LISTE
esockd:open(
Id,
ListenOn,
merge_default(esockd_opts(Id, Type, Name, Opts))
merge_default(esockd_opts(Id, Type, Name, Opts, _OldOpts = undefined))
);
%% Start MQTT/WS listener
do_start_listener(Type, Name, Id, Opts) when ?COWBOY_LISTENER(Type) ->
@ -476,7 +476,7 @@ do_update_listener(Type, Name, OldConf, NewConf = #{bind := ListenOn}) when
Id = listener_id(Type, Name),
case maps:get(bind, OldConf) of
ListenOn ->
esockd:set_options({Id, ListenOn}, esockd_opts(Id, Type, Name, NewConf));
esockd:set_options({Id, ListenOn}, esockd_opts(Id, Type, Name, NewConf, OldConf));
_Different ->
%% TODO
%% Again, we're not strictly required to drop live connections in this case.
@ -588,7 +588,7 @@ perform_listener_change(update, {{Type, Name, ConfOld}, {_, _, ConfNew}}) ->
perform_listener_change(stop, {Type, Name, Conf}) ->
stop_listener(Type, Name, Conf).
esockd_opts(ListenerId, Type, Name, Opts0) ->
esockd_opts(ListenerId, Type, Name, Opts0, OldOpts) ->
Opts1 = maps:with([acceptors, max_connections, proxy_protocol, proxy_protocol_timeout], Opts0),
Limiter = limiter(Opts0),
Opts2 =
@ -620,7 +620,7 @@ esockd_opts(ListenerId, Type, Name, Opts0) ->
tcp ->
Opts3#{tcp_options => tcp_opts(Opts0)};
ssl ->
OptsWithCRL = inject_crl_config(Opts0),
OptsWithCRL = inject_crl_config(Opts0, OldOpts),
OptsWithSNI = inject_sni_fun(ListenerId, OptsWithCRL),
OptsWithRootFun = inject_root_fun(OptsWithSNI),
OptsWithVerifyFun = inject_verify_fun(OptsWithRootFun),
@ -996,7 +996,7 @@ inject_sni_fun(_ListenerId, Conf) ->
Conf.
inject_crl_config(
Conf = #{ssl_options := #{enable_crl_check := true} = SSLOpts}
Conf = #{ssl_options := #{enable_crl_check := true} = SSLOpts}, _OldOpts
) ->
HTTPTimeout = emqx_config:get([crl_cache, http_timeout], timer:seconds(15)),
Conf#{
@ -1006,7 +1006,16 @@ inject_crl_config(
crl_cache => {emqx_ssl_crl_cache, {internal, [{http, HTTPTimeout}]}}
}
};
inject_crl_config(Conf) ->
inject_crl_config(#{ssl_options := SSLOpts0} = Conf0, #{} = OldOpts) ->
%% Note: we must set crl options to `undefined' to unset them. Otherwise,
%% `esockd' will retain such options when `esockd:merge_opts/2' is called and the SSL
%% options were previously enabled.
WasEnabled = emqx_utils_maps:deep_get([ssl_options, enable_crl_check], OldOpts, false),
Undefine = fun(Acc, K) -> emqx_utils_maps:put_if(Acc, K, undefined, WasEnabled) end,
SSLOpts1 = Undefine(SSLOpts0, crl_check),
SSLOpts = Undefine(SSLOpts1, crl_cache),
Conf0#{ssl_options := SSLOpts};
inject_crl_config(Conf, undefined = _OldOpts) ->
Conf.
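
The comment in `inject_crl_config/2` points at a subtle `esockd` behavior: `esockd:merge_opts/2` retains previously set SSL options, so disabling CRL checking must actively write `undefined` over `crl_check` and `crl_cache`. A sketch of the conditional-put helper that relies on (assuming `emqx_utils_maps:put_if/4` writes the key only when the flag is true):

```erlang
%% Assumed semantics of emqx_utils_maps:put_if/4:
%% only touch the map when CRL checking was previously enabled.
put_if(Map, Key, Value, true) -> Map#{Key => Value};
put_if(Map, _Key, _Value, false) -> Map.
```

So `put_if(#{crl_check => true}, crl_check, undefined, true)` yields `#{crl_check => undefined}`, which `esockd` then treats as an unset option, while a `false` flag leaves listeners that never enabled CRL checking untouched.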
maybe_unregister_ocsp_stapling_refresh(


@ -105,7 +105,7 @@ format(Msg, Meta, Config) ->
maybe_format_msg(undefined, _Meta, _Config) ->
#{};
maybe_format_msg({report, Report0} = Msg, #{report_cb := Cb} = Meta, Config) ->
Report = emqx_logger_textfmt:try_encode_payload(Report0, Config),
Report = emqx_logger_textfmt:try_encode_meta(Report0, Config),
case is_map(Report) andalso Cb =:= ?DEFAULT_FORMATTER of
true ->
%% reporting a map without a customised format function


@ -20,7 +20,7 @@
-export([format/2]).
-export([check_config/1]).
-export([try_format_unicode/1, try_encode_payload/2]).
-export([try_format_unicode/1, try_encode_meta/2]).
%% Used in the other log formatters
-export([evaluate_lazy_values_if_dbg_level/1, evaluate_lazy_values/1]).
@ -111,7 +111,7 @@ is_list_report_acceptable(_) ->
enrich_report(ReportRaw0, Meta, Config) ->
%% clientid and peername always in emqx_conn's process metadata.
%% topic and username can be put in meta using ?SLOG/3, or put in msg's report by ?SLOG/2
ReportRaw = try_encode_payload(ReportRaw0, Config),
ReportRaw = try_encode_meta(ReportRaw0, Config),
Topic =
case maps:get(topic, Meta, undefined) of
undefined -> maps:get(topic, ReportRaw, undefined);
@ -180,9 +180,22 @@ enrich_topic({Fmt, Args}, #{topic := Topic}) when is_list(Fmt) ->
enrich_topic(Msg, _) ->
Msg.
try_encode_payload(#{payload := Payload} = Report, #{payload_encode := Encode}) ->
try_encode_meta(Report, Config) ->
lists:foldl(
fun(Meta, Acc) ->
try_encode_meta(Meta, Acc, Config)
end,
Report,
[payload, packet]
).
try_encode_meta(payload, #{payload := Payload} = Report, #{payload_encode := Encode}) ->
Report#{payload := encode_payload(Payload, Encode)};
try_encode_payload(Report, _Config) ->
try_encode_meta(packet, #{packet := Packet} = Report, #{payload_encode := Encode}) when
is_tuple(Packet)
->
Report#{packet := emqx_packet:format(Packet, Encode)};
try_encode_meta(_, Report, _Config) ->
Report.
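
`try_encode_meta/2` above folds the encoders over the fixed key list `[payload, packet]`, leaving any key absent from the report untouched. A standalone sketch of the same fold shape (the `hex` clause is illustrative; the real code dispatches to `encode_payload/2` and `emqx_packet:format/2`):

```erlang
-module(encode_meta_sketch).
-export([try_encode_meta/2]).

%% Fold each known meta key through its encoder; unknown/absent keys pass through.
try_encode_meta(Report, Config) ->
    lists:foldl(
        fun(Key, Acc) -> encode_key(Key, Acc, Config) end,
        Report,
        [payload, packet]
    ).

%% Illustrative clause: hex-encode the payload when requested.
encode_key(payload, #{payload := Payload} = Report, #{payload_encode := hex}) ->
    Report#{payload := binary:encode_hex(Payload)};
encode_key(_Key, Report, _Config) ->
    Report.
```

A report without a `payload` key simply falls through to the catch-all clause unchanged, which is why the fold is safe to apply to every log report.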
encode_payload(Payload, text) ->
@ -190,4 +203,5 @@ encode_payload(Payload, text) ->
encode_payload(_Payload, hidden) ->
"******";
encode_payload(Payload, hex) ->
binary:encode_hex(Payload).
Bin = emqx_utils_conv:bin(Payload),
binary:encode_hex(Bin).


@ -51,7 +51,6 @@
]).
-export([
format/1,
format/2
]).
@ -481,10 +480,6 @@ will_msg(#mqtt_packet_connect{
headers = #{username => Username, properties => Props}
}.
%% @doc Format packet
-spec format(emqx_types:packet()) -> iolist().
format(Packet) -> format(Packet, emqx_trace_handler:payload_encode()).
%% @doc Format packet
-spec format(emqx_types:packet(), hex | text | hidden) -> iolist().
format(#mqtt_packet{header = Header, variable = Variable, payload = Payload}, PayloadEncode) ->


@ -56,6 +56,11 @@
cold_get_subscription/2
]).
-export([
format_lease_events/1,
format_stream_progresses/1
]).
-define(schedule_subscribe, schedule_subscribe).
-define(schedule_unsubscribe, schedule_unsubscribe).
@ -236,14 +241,14 @@ schedule_subscribe(
ScheduledActions1 = ScheduledActions0#{
ShareTopicFilter => ScheduledAction#{type => {?schedule_subscribe, SubOpts}}
},
?tp(warning, shared_subs_schedule_subscribe_override, #{
?tp(debug, shared_subs_schedule_subscribe_override, #{
share_topic_filter => ShareTopicFilter,
new_type => {?schedule_subscribe, SubOpts},
old_action => format_schedule_action(ScheduledAction)
}),
SharedSubS0#{scheduled_actions := ScheduledActions1};
_ ->
?tp(warning, shared_subs_schedule_subscribe_new, #{
?tp(debug, shared_subs_schedule_subscribe_new, #{
share_topic_filter => ShareTopicFilter, subopts => SubOpts
}),
Agent1 = emqx_persistent_session_ds_shared_subs_agent:on_subscribe(
@ -294,7 +299,7 @@ schedule_unsubscribe(
ScheduledActions1 = ScheduledActions0#{
ShareTopicFilter => ScheduledAction1
},
?tp(warning, shared_subs_schedule_unsubscribe_override, #{
?tp(debug, shared_subs_schedule_unsubscribe_override, #{
share_topic_filter => ShareTopicFilter,
new_type => ?schedule_unsubscribe,
old_action => format_schedule_action(ScheduledAction0)
@ -309,7 +314,7 @@ schedule_unsubscribe(
progresses => []
}
},
?tp(warning, shared_subs_schedule_unsubscribe_new, #{
?tp(debug, shared_subs_schedule_unsubscribe_new, #{
share_topic_filter => ShareTopicFilter,
stream_keys => format_stream_keys(StreamKeys)
}),
@ -334,7 +339,7 @@ renew_streams(S0, #{agent := Agent0, scheduled_actions := ScheduledActions} = Sh
Agent0
),
StreamLeaseEvents =/= [] andalso
?tp(warning, shared_subs_new_stream_lease_events, #{
?tp(debug, shared_subs_new_stream_lease_events, #{
stream_lease_events => format_lease_events(StreamLeaseEvents)
}),
S1 = lists:foldl(
@ -501,7 +506,7 @@ run_scheduled_action(
Progresses1 = stream_progresses(S, StreamKeysToWait0 -- StreamKeysToWait1) ++ Progresses0,
case StreamKeysToWait1 of
[] ->
?tp(warning, shared_subs_schedule_action_complete, #{
?tp(debug, shared_subs_schedule_action_complete, #{
share_topic_filter => ShareTopicFilter,
progresses => format_stream_progresses(Progresses1),
type => Type
@ -525,7 +530,7 @@ run_scheduled_action(
end;
_ ->
Action1 = Action#{stream_keys_to_wait => StreamKeysToWait1, progresses => Progresses1},
?tp(warning, shared_subs_schedule_action_continue, #{
?tp(debug, shared_subs_schedule_action_continue, #{
share_topic_filter => ShareTopicFilter,
new_action => format_schedule_action(Action1)
}),


@ -62,7 +62,7 @@
streams := [{pid(), quicer:stream_handle()}],
%% New stream opts
stream_opts := map(),
%% If conneciton is resumed from session ticket
%% If connection is resumed from session ticket
is_resumed => boolean(),
%% mqtt message serializer config
serialize => undefined,
@ -70,8 +70,8 @@
}.
-type cb_ret() :: quicer_lib:cb_ret().
%% @doc Data streams initializions are started in parallel with control streams, data streams are blocked
%% for the activation from control stream after it is accepted as a legit conneciton.
%% @doc Data streams initializations are started in parallel with control streams, data streams are blocked
%% for the activation from control stream after it is accepted as a legit connection.
%% For security, the initial number of allowed data streams from client should be limited by
%% 'peer_bidi_stream_count` & 'peer_unidi_stream_count`
-spec activate_data_streams(pid(), {
@ -80,7 +80,7 @@
activate_data_streams(ConnOwner, {PS, Serialize, Channel}) ->
gen_server:call(ConnOwner, {activate_data_streams, {PS, Serialize, Channel}}, infinity).
%% @doc conneciton owner init callback
%% @doc connection owner init callback
-spec init(map()) -> {ok, cb_state()}.
init(#{stream_opts := SOpts} = S) when is_list(SOpts) ->
init(S#{stream_opts := maps:from_list(SOpts)});


@ -589,6 +589,14 @@ ensure_valid_options(Options, Versions) ->
ensure_valid_options([], _, Acc) ->
lists:reverse(Acc);
ensure_valid_options([{K, undefined} | T], Versions, Acc) when
K =:= crl_check;
K =:= crl_cache
->
%% Note: we must set crl options to `undefined' to unset them. Otherwise,
%% `esockd' will retain such options when `esockd:merge_opts/2' is called and the SSL
%% options were previously enabled.
ensure_valid_options(T, Versions, [{K, undefined} | Acc]);
ensure_valid_options([{_, undefined} | T], Versions, Acc) ->
ensure_valid_options(T, Versions, Acc);
ensure_valid_options([{_, ""} | T], Versions, Acc) ->


@ -17,7 +17,6 @@
-include("emqx_mqtt.hrl").
-export([format/2]).
-export([format_meta_map/1]).
%% logger_formatter:config/0 is not exported.
-type config() :: map().
@ -43,10 +42,6 @@ format(
format(Event, Config) ->
emqx_logger_textfmt:format(Event, Config).
format_meta_map(Meta) ->
Encode = emqx_trace_handler:payload_encode(),
format_meta_map(Meta, Encode).
format_meta_map(Meta, Encode) ->
format_meta_map(Meta, Encode, [
{packet, fun format_packet/2},


@ -436,6 +436,7 @@ websocket_handle({Frame, _}, State) ->
%% TODO: should not close the ws connection
?LOG(error, #{msg => "unexpected_frame", frame => Frame}),
shutdown(unexpected_ws_frame, State).
websocket_info({call, From, Req}, State) ->
handle_call(From, Req, State);
websocket_info({cast, rate_limit}, State) ->
@ -737,7 +738,8 @@ parse_incoming(Data, Packets, State = #state{parse_state = ParseState}) ->
input_bytes => Data
}),
FrameError = {frame_error, Reason},
{[{incoming, FrameError} | Packets], State};
NState = enrich_state(Reason, State),
{[{incoming, FrameError} | Packets], NState};
error:Reason:Stacktrace ->
?LOG(error, #{
at_state => emqx_frame:describe_state(ParseState),
@ -830,7 +832,7 @@ serialize_and_inc_stats_fun(#state{serialize = Serialize}) ->
?LOG(warning, #{
msg => "packet_discarded",
reason => "frame_too_large",
packet => emqx_packet:format(Packet)
packet => Packet
}),
ok = emqx_metrics:inc('delivery.dropped.too_large'),
ok = emqx_metrics:inc('delivery.dropped'),
@ -1069,6 +1071,13 @@ check_max_connection(Type, Listener) ->
{denny, Reason}
end
end.
enrich_state(#{parse_state := NParseState}, State) ->
Serialize = emqx_frame:serialize_opts(NParseState),
State#state{parse_state = NParseState, serialize = Serialize};
enrich_state(_, State) ->
State.
%%--------------------------------------------------------------------
%% For CT tests
%%--------------------------------------------------------------------


@ -414,11 +414,18 @@ t_handle_in_auth(_) ->
emqx_channel:handle_in(?AUTH_PACKET(), Channel).
t_handle_in_frame_error(_) ->
IdleChannel = channel(#{conn_state => idle}),
{shutdown, #{shutdown_count := frame_too_large, cause := frame_too_large}, _Chan} =
emqx_channel:handle_in({frame_error, #{cause => frame_too_large}}, IdleChannel),
IdleChannelV5 = channel(#{conn_state => idle}),
%% no CONNACK packet for v4
?assertMatch(
{shutdown, #{shutdown_count := frame_too_large, cause := frame_too_large}, _Chan},
emqx_channel:handle_in(
{frame_error, #{cause => frame_too_large}}, v4(IdleChannelV5)
)
),
ConnectingChan = channel(#{conn_state => connecting}),
ConnackPacket = ?CONNACK_PACKET(?RC_PACKET_TOO_LARGE),
?assertMatch(
{shutdown,
#{
shutdown_count := frame_too_large,
@ -426,12 +433,13 @@ t_handle_in_frame_error(_) ->
limit := 100,
received := 101
},
ConnackPacket,
_} =
ConnackPacket, _},
emqx_channel:handle_in(
{frame_error, #{cause => frame_too_large, received => 101, limit => 100}},
ConnectingChan
)
),
DisconnectPacket = ?DISCONNECT_PACKET(?RC_PACKET_TOO_LARGE),
ConnectedChan = channel(#{conn_state => connected}),
?assertMatch(


@ -138,13 +138,14 @@ init_per_testcase(t_refresh_config = TestCase, Config) ->
];
init_per_testcase(TestCase, Config) when
TestCase =:= t_update_listener;
TestCase =:= t_update_listener_enable_disable;
TestCase =:= t_validations
->
ct:timetrap({seconds, 30}),
ok = snabbkaffe:start_trace(),
%% when running emqx standalone tests, we can't use those
%% features.
case does_module_exist(emqx_management) of
case does_module_exist(emqx_mgmt) of
true ->
DataDir = ?config(data_dir, Config),
CRLFile = filename:join([DataDir, "intermediate-revoked.crl.pem"]),
@ -165,7 +166,7 @@ init_per_testcase(TestCase, Config) when
{emqx_conf, #{config => #{listeners => #{ssl => #{default => ListenerConf}}}}},
emqx,
emqx_management,
{emqx_dashboard, "dashboard.listeners.http { enable = true, bind = 18083 }"}
emqx_mgmt_api_test_util:emqx_dashboard()
],
#{work_dir => emqx_cth_suite:work_dir(TestCase, Config)}
),
@ -206,6 +207,7 @@ read_crl(Filename) ->
end_per_testcase(TestCase, Config) when
TestCase =:= t_update_listener;
TestCase =:= t_update_listener_enable_disable;
TestCase =:= t_validations
->
Skip = proplists:get_bool(skip_does_not_apply, Config),
@ -1057,3 +1059,104 @@ do_t_validations(_Config) ->
),
ok.
%% Checks that if CRL is ever enabled and then disabled, clients can connect, even if they
%% would otherwise not have their corresponding CRLs cached and fail with `{bad_crls,
%% no_relevant_crls}`.
t_update_listener_enable_disable(Config) ->
case proplists:get_bool(skip_does_not_apply, Config) of
true ->
ct:pal("skipping as this test does not apply in this profile"),
ok;
false ->
do_t_update_listener_enable_disable(Config)
end.
do_t_update_listener_enable_disable(Config) ->
DataDir = ?config(data_dir, Config),
Keyfile = filename:join([DataDir, "server.key.pem"]),
Certfile = filename:join([DataDir, "server.cert.pem"]),
Cacertfile = filename:join([DataDir, "ca-chain.cert.pem"]),
ClientCert = filename:join(DataDir, "client.cert.pem"),
ClientKey = filename:join(DataDir, "client.key.pem"),
ListenerId = "ssl:default",
%% Enable CRL
{ok, {{_, 200, _}, _, ListenerData0}} = get_listener_via_api(ListenerId),
CRLConfig0 =
#{
<<"ssl_options">> =>
#{
<<"keyfile">> => Keyfile,
<<"certfile">> => Certfile,
<<"cacertfile">> => Cacertfile,
<<"enable_crl_check">> => true,
<<"fail_if_no_peer_cert">> => true
}
},
ListenerData1 = emqx_utils_maps:deep_merge(ListenerData0, CRLConfig0),
{ok, {_, _, ListenerData2}} = update_listener_via_api(ListenerId, ListenerData1),
?assertMatch(
#{
<<"ssl_options">> :=
#{
<<"enable_crl_check">> := true,
<<"verify">> := <<"verify_peer">>,
<<"fail_if_no_peer_cert">> := true
}
},
ListenerData2
),
%% Disable CRL
CRLConfig1 =
#{
<<"ssl_options">> =>
#{
<<"keyfile">> => Keyfile,
<<"certfile">> => Certfile,
<<"cacertfile">> => Cacertfile,
<<"enable_crl_check">> => false,
<<"fail_if_no_peer_cert">> => true
}
},
ListenerData3 = emqx_utils_maps:deep_merge(ListenerData2, CRLConfig1),
redbug:start(
[
"esockd_server:get_listener_prop -> return",
"esockd_server:set_listener_prop -> return",
"esockd:merge_opts -> return",
"esockd_listener_sup:set_options -> return",
"emqx_listeners:inject_crl_config -> return"
],
[{msgs, 100}]
),
{ok, {_, _, ListenerData4}} = update_listener_via_api(ListenerId, ListenerData3),
?assertMatch(
#{
<<"ssl_options">> :=
#{
<<"enable_crl_check">> := false,
<<"verify">> := <<"verify_peer">>,
<<"fail_if_no_peer_cert">> := true
}
},
ListenerData4
),
%% Now the client that would be blocked tries to connect and should now be allowed.
{ok, C} = emqtt:start_link([
{ssl, true},
{ssl_opts, [
{certfile, ClientCert},
{keyfile, ClientKey},
{verify, verify_none}
]},
{port, 8883}
]),
?assertMatch({ok, _}, emqtt:connect(C)),
emqtt:stop(C),
?assertNotReceive({http_get, _}),
ok.
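The test above toggles CRL checking by overlaying a partial `ssl_options` map onto the listener config fetched from the API, relying on `emqx_utils_maps:deep_merge/2` to keep unrelated keys intact. A minimal sketch of such a recursive map merge (this standalone `deep_merge/2` is an illustration, not the actual EMQX implementation):

```erlang
-module(deep_merge_sketch).
-export([deep_merge/2]).

%% Recursively merge Override into Base: nested maps are merged
%% key-by-key, any other value in Override replaces the one in Base.
deep_merge(Base, Override) when is_map(Base), is_map(Override) ->
    maps:fold(
        fun(K, OvrV, Acc) ->
            case Acc of
                #{K := BaseV} when is_map(BaseV), is_map(OvrV) ->
                    Acc#{K => deep_merge(BaseV, OvrV)};
                _ ->
                    Acc#{K => OvrV}
            end
        end,
        Base,
        Override
    ).
```

With semantics like these, merging `CRLConfig0` into `ListenerData0` overrides only the listed `ssl_options` entries while leaving siblings such as the bind address untouched.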


@ -63,6 +63,7 @@ groups() ->
t_parse_malformed_properties,
t_malformed_connect_header,
t_malformed_connect_data,
t_malformed_connect_data_proto_ver,
t_reserved_connect_flag,
t_invalid_clientid,
t_undefined_password,
@ -167,6 +168,8 @@ t_parse_malformed_utf8_string(_) ->
ParseState = emqx_frame:initial_parse_state(#{strict_mode => true}),
?ASSERT_FRAME_THROW(utf8_string_invalid, emqx_frame:parse(MalformedPacket, ParseState)).
%% TODO: parse v3 with 0 length clientid
t_serialize_parse_v3_connect(_) ->
Bin =
<<16, 37, 0, 6, 77, 81, 73, 115, 100, 112, 3, 2, 0, 60, 0, 23, 109, 111, 115, 113, 112, 117,
@ -324,7 +327,7 @@ t_serialize_parse_bridge_connect(_) ->
header = #mqtt_packet_header{type = ?CONNECT},
variable = #mqtt_packet_connect{
clientid = <<"C_00:0C:29:2B:77:52">>,
proto_ver = 16#03,
proto_ver = ?MQTT_PROTO_V3,
proto_name = <<"MQIsdp">>,
is_bridge = true,
will_retain = true,
@ -686,15 +689,36 @@ t_malformed_connect_header(_) ->
).
t_malformed_connect_data(_) ->
ProtoNameWithLen = <<0, 6, "MQIsdp">>,
ConnectFlags = <<2#00000000>>,
ClientIdwithLen = <<0, 1, "a">>,
UnexpectedRestBin = <<0, 1, 2>>,
?ASSERT_FRAME_THROW(
#{cause := malformed_connect, unexpected_trailing_bytes := _},
emqx_frame:parse(<<16, 15, 0, 6, 77, 81, 73, 115, 100, 112, 3, 0, 0, 0, 0, 0, 0>>)
#{cause := malformed_connect, unexpected_trailing_bytes := 3},
emqx_frame:parse(
<<16, 18, ProtoNameWithLen/binary, ?MQTT_PROTO_V3, ConnectFlags/binary, 0, 0,
ClientIdwithLen/binary, UnexpectedRestBin/binary>>
)
).
t_malformed_connect_data_proto_ver(_) ->
Proto3NameWithLen = <<0, 6, "MQIsdp">>,
?ASSERT_FRAME_THROW(
#{cause := malformed_connect, header_bytes := <<>>},
emqx_frame:parse(<<16, 8, Proto3NameWithLen/binary>>)
),
ProtoNameWithLen = <<0, 4, "MQTT">>,
?ASSERT_FRAME_THROW(
#{cause := malformed_connect, header_bytes := <<>>},
emqx_frame:parse(<<16, 6, ProtoNameWithLen/binary>>)
).
t_reserved_connect_flag(_) ->
?assertException(
throw,
{frame_parse_error, reserved_connect_flag},
{frame_parse_error, #{
cause := reserved_connect_flag, proto_ver := ?MQTT_PROTO_V3, proto_name := <<"MQIsdp">>
}},
emqx_frame:parse(<<16, 15, 0, 6, 77, 81, 73, 115, 100, 112, 3, 1, 0, 0, 1, 0, 0>>)
).
@ -726,7 +750,7 @@ t_undefined_password(_) ->
},
variable = #mqtt_packet_connect{
proto_name = <<"MQTT">>,
proto_ver = 4,
proto_ver = ?MQTT_PROTO_V4,
is_bridge = false,
clean_start = true,
will_flag = false,
@ -774,7 +798,9 @@ t_invalid_will_retain(_) ->
54, 75, 78, 112, 57, 0, 6, 68, 103, 55, 87, 87, 87>>,
?assertException(
throw,
{frame_parse_error, invalid_will_retain},
{frame_parse_error, #{
cause := invalid_will_retain, proto_ver := ?MQTT_PROTO_V5, proto_name := <<"MQTT">>
}},
emqx_frame:parse(ConnectBin)
),
ok.
@ -796,22 +822,30 @@ t_invalid_will_qos(_) ->
),
?assertException(
throw,
{frame_parse_error, invalid_will_qos},
{frame_parse_error, #{
cause := invalid_will_qos, proto_ver := ?MQTT_PROTO_V5, proto_name := <<"MQTT">>
}},
emqx_frame:parse(ConnectBinFun(Will_F_WillQoS1))
),
?assertException(
throw,
{frame_parse_error, invalid_will_qos},
{frame_parse_error, #{
cause := invalid_will_qos, proto_ver := ?MQTT_PROTO_V5, proto_name := <<"MQTT">>
}},
emqx_frame:parse(ConnectBinFun(Will_F_WillQoS2))
),
?assertException(
throw,
{frame_parse_error, invalid_will_qos},
{frame_parse_error, #{
cause := invalid_will_qos, proto_ver := ?MQTT_PROTO_V5, proto_name := <<"MQTT">>
}},
emqx_frame:parse(ConnectBinFun(Will_F_WillQoS3))
),
?assertException(
throw,
{frame_parse_error, invalid_will_qos},
{frame_parse_error, #{
cause := invalid_will_qos, proto_ver := ?MQTT_PROTO_V5, proto_name := <<"MQTT">>
}},
emqx_frame:parse(ConnectBinFun(Will_T_WillQoS3))
),
ok.
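The assertions above reflect the change from bare-atom errors (`{frame_parse_error, invalid_will_qos}`) to map-based errors carrying `cause`, `proto_ver`, and `proto_name`. A hedged sketch of how a caller might exploit the richer term (the wrapper function and log shape are assumptions, not EMQX's actual code):

```erlang
%% Illustrative handler for the map-based frame errors: branch on the
%% cause while logging the protocol context that now travels with it.
handle_frame(Bin, ParseState) ->
    try
        emqx_frame:parse(Bin, ParseState)
    catch
        throw:{frame_parse_error, #{cause := Cause} = Ctx} ->
            logger:warning("frame parse error ~p (proto ~p)",
                           [Cause, maps:get(proto_ver, Ctx, unknown)]),
            {error, Cause}
    end.
```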


@ -377,42 +377,60 @@ t_will_msg(_) ->
t_format(_) ->
io:format("~ts", [
emqx_packet:format(#mqtt_packet{
emqx_packet:format(
#mqtt_packet{
header = #mqtt_packet_header{type = ?CONNACK, retain = true, dup = 0},
variable = undefined
})
]),
io:format("~ts", [
emqx_packet:format(#mqtt_packet{
header = #mqtt_packet_header{type = ?CONNACK}, variable = 1, payload = <<"payload">>
})
},
text
)
]),
io:format(
"~ts",
[
emqx_packet:format(
#mqtt_packet{
header = #mqtt_packet_header{type = ?CONNACK},
variable = 1,
payload = <<"payload">>
},
text
)
]
),
io:format("~ts", [
emqx_packet:format(
?CONNECT_PACKET(#mqtt_packet_connect{
?CONNECT_PACKET(
#mqtt_packet_connect{
will_flag = true,
will_retain = true,
will_qos = ?QOS_2,
will_topic = <<"topic">>,
will_payload = <<"payload">>
})
}
),
text
)
]),
io:format("~ts", [
emqx_packet:format(?CONNECT_PACKET(#mqtt_packet_connect{password = password}))
emqx_packet:format(?CONNECT_PACKET(#mqtt_packet_connect{password = password}), text)
]),
io:format("~ts", [emqx_packet:format(?CONNACK_PACKET(?CONNACK_SERVER))]),
io:format("~ts", [emqx_packet:format(?PUBLISH_PACKET(?QOS_1, 1))]),
io:format("~ts", [emqx_packet:format(?PUBLISH_PACKET(?QOS_2, <<"topic">>, 10, <<"payload">>))]),
io:format("~ts", [emqx_packet:format(?PUBACK_PACKET(?PUBACK, 98))]),
io:format("~ts", [emqx_packet:format(?PUBREL_PACKET(99))]),
io:format("~ts", [emqx_packet:format(?CONNACK_PACKET(?CONNACK_SERVER), text)]),
io:format("~ts", [emqx_packet:format(?PUBLISH_PACKET(?QOS_1, 1), text)]),
io:format("~ts", [
emqx_packet:format(?SUBSCRIBE_PACKET(15, [{<<"topic">>, ?QOS_0}, {<<"topic1">>, ?QOS_1}]))
emqx_packet:format(?PUBLISH_PACKET(?QOS_2, <<"topic">>, 10, <<"payload">>), text)
]),
io:format("~ts", [emqx_packet:format(?SUBACK_PACKET(40, [?QOS_0, ?QOS_1]))]),
io:format("~ts", [emqx_packet:format(?UNSUBSCRIBE_PACKET(89, [<<"t">>, <<"t2">>]))]),
io:format("~ts", [emqx_packet:format(?UNSUBACK_PACKET(90))]),
io:format("~ts", [emqx_packet:format(?DISCONNECT_PACKET(128))]).
io:format("~ts", [emqx_packet:format(?PUBACK_PACKET(?PUBACK, 98), text)]),
io:format("~ts", [emqx_packet:format(?PUBREL_PACKET(99), text)]),
io:format("~ts", [
emqx_packet:format(
?SUBSCRIBE_PACKET(15, [{<<"topic">>, ?QOS_0}, {<<"topic1">>, ?QOS_1}]), text
)
]),
io:format("~ts", [emqx_packet:format(?SUBACK_PACKET(40, [?QOS_0, ?QOS_1]), text)]),
io:format("~ts", [emqx_packet:format(?UNSUBSCRIBE_PACKET(89, [<<"t">>, <<"t2">>]), text)]),
io:format("~ts", [emqx_packet:format(?UNSUBACK_PACKET(90), text)]),
io:format("~ts", [emqx_packet:format(?DISCONNECT_PACKET(128), text)]).
t_parse_empty_publish(_) ->
%% 52: 0011(type=PUBLISH) 0100 (QoS=2)


@ -1,7 +1,7 @@
%% -*- mode: erlang -*-
{application, emqx_auth, [
{description, "EMQX Authentication and authorization"},
{vsn, "0.3.3"},
{vsn, "0.3.4"},
{modules, []},
{registered, [emqx_auth_sup]},
{applications, [


@ -1,7 +1,7 @@
%% -*- mode: erlang -*-
{application, emqx_auth_http, [
{description, "EMQX External HTTP API Authentication and Authorization"},
{vsn, "0.3.0"},
{vsn, "0.3.1"},
{registered, []},
{mod, {emqx_auth_http_app, []}},
{applications, [


@ -1,7 +1,7 @@
%% -*- mode: erlang -*-
{application, emqx_auth_jwt, [
{description, "EMQX JWT Authentication and Authorization"},
{vsn, "0.3.2"},
{vsn, "0.3.3"},
{registered, []},
{mod, {emqx_auth_jwt_app, []}},
{applications, [


@ -1,7 +1,7 @@
%% -*- mode: erlang -*-
{application, emqx_auth_mnesia, [
{description, "EMQX Built-in Database Authentication and Authorization"},
{vsn, "0.1.6"},
{vsn, "0.1.7"},
{registered, []},
{mod, {emqx_auth_mnesia_app, []}},
{applications, [


@ -1,7 +1,7 @@
%% -*- mode: erlang -*-
{application, emqx_auth_mongodb, [
{description, "EMQX MongoDB Authentication and Authorization"},
{vsn, "0.2.1"},
{vsn, "0.2.2"},
{registered, []},
{mod, {emqx_auth_mongodb_app, []}},
{applications, [


@ -1,7 +1,7 @@
%% -*- mode: erlang -*-
{application, emqx_auth_mysql, [
{description, "EMQX MySQL Authentication and Authorization"},
{vsn, "0.2.1"},
{vsn, "0.2.2"},
{registered, []},
{mod, {emqx_auth_mysql_app, []}},
{applications, [


@ -1,7 +1,7 @@
%% -*- mode: erlang -*-
{application, emqx_auth_postgresql, [
{description, "EMQX PostgreSQL Authentication and Authorization"},
{vsn, "0.2.1"},
{vsn, "0.2.2"},
{registered, []},
{mod, {emqx_auth_postgresql_app, []}},
{applications, [


@ -1,7 +1,7 @@
%% -*- mode: erlang -*-
{application, emqx_auth_redis, [
{description, "EMQX Redis Authentication and Authorization"},
{vsn, "0.2.1"},
{vsn, "0.2.2"},
{registered, []},
{mod, {emqx_auth_redis_app, []}},
{applications, [


@ -1,7 +1,7 @@
%% -*- mode: erlang -*-
{application, emqx_bridge, [
{description, "EMQX bridges"},
{vsn, "0.2.3"},
{vsn, "0.2.4"},
{registered, [emqx_bridge_sup]},
{mod, {emqx_bridge_app, []}},
{applications, [


@ -1,6 +1,6 @@
{application, emqx_bridge_gcp_pubsub, [
{description, "EMQX Enterprise GCP Pub/Sub Bridge"},
{vsn, "0.3.2"},
{vsn, "0.3.3"},
{registered, []},
{applications, [
kernel,


@ -1,6 +1,6 @@
{application, emqx_bridge_http, [
{description, "EMQX HTTP Bridge and Connector Application"},
{vsn, "0.3.3"},
{vsn, "0.3.4"},
{registered, []},
{applications, [kernel, stdlib, emqx_resource, ehttpc]},
{env, [


@ -1,7 +1,7 @@
%% -*- mode: erlang -*-
{application, emqx_bridge_kafka, [
{description, "EMQX Enterprise Kafka Bridge"},
{vsn, "0.3.3"},
{vsn, "0.3.4"},
{registered, [emqx_bridge_kafka_consumer_sup]},
{applications, [
kernel,


@ -1,7 +1,7 @@
%% -*- mode: erlang -*-
{application, emqx_bridge_mqtt, [
{description, "EMQX MQTT Broker Bridge"},
{vsn, "0.2.3"},
{vsn, "0.2.4"},
{registered, []},
{applications, [
kernel,


@ -1,6 +1,6 @@
{application, emqx_bridge_pulsar, [
{description, "EMQX Pulsar Bridge"},
{vsn, "0.2.3"},
{vsn, "0.2.4"},
{registered, []},
{applications, [
kernel,


@ -1,6 +1,6 @@
{application, emqx_bridge_rabbitmq, [
{description, "EMQX Enterprise RabbitMQ Bridge"},
{vsn, "0.2.2"},
{vsn, "0.2.3"},
{registered, []},
{mod, {emqx_bridge_rabbitmq_app, []}},
{applications, [


@ -1,6 +1,6 @@
{application, emqx_bridge_s3, [
{description, "EMQX Enterprise S3 Bridge"},
{vsn, "0.1.5"},
{vsn, "0.1.6"},
{registered, []},
{applications, [
kernel,


@ -1,6 +1,6 @@
{application, emqx_bridge_sqlserver, [
{description, "EMQX Enterprise SQL Server Bridge"},
{vsn, "0.2.3"},
{vsn, "0.2.4"},
{registered, []},
{applications, [kernel, stdlib, emqx_resource, odbc]},
{env, [


@ -1,6 +1,6 @@
{application, emqx_bridge_syskeeper, [
{description, "EMQX Enterprise Data bridge for Syskeeper"},
{vsn, "0.1.4"},
{vsn, "0.1.5"},
{registered, []},
{applications, [
kernel,


@ -1,6 +1,6 @@
{application, emqx_conf, [
{description, "EMQX configuration management"},
{vsn, "0.2.3"},
{vsn, "0.2.4"},
{registered, []},
{mod, {emqx_conf_app, []}},
{applications, [kernel, stdlib]},


@ -1,7 +1,7 @@
%% -*- mode: erlang -*-
{application, emqx_connector, [
{description, "EMQX Data Integration Connectors"},
{vsn, "0.3.3"},
{vsn, "0.3.4"},
{registered, []},
{mod, {emqx_connector_app, []}},
{applications, [


@ -125,6 +125,7 @@ create(Type, Name, Conf0, Opts) ->
TypeBin = bin(Type),
ResourceId = resource_id(Type, Name),
Conf = Conf0#{connector_type => TypeBin, connector_name => Name},
_ = emqx_alarm:ensure_deactivated(ResourceId),
{ok, _Data} = emqx_resource:create_local(
ResourceId,
?CONNECTOR_RESOURCE_GROUP,
@ -132,7 +133,6 @@ create(Type, Name, Conf0, Opts) ->
parse_confs(TypeBin, Name, Conf),
parse_opts(Conf, Opts)
),
_ = emqx_alarm:ensure_deactivated(ResourceId),
ok.
update(ConnectorId, {OldConf, Conf}) ->


@ -2,7 +2,7 @@
{application, emqx_dashboard, [
{description, "EMQX Web Dashboard"},
% strict semver, bump manually!
{vsn, "5.1.3"},
{vsn, "5.1.4"},
{modules, []},
{registered, [emqx_dashboard_sup]},
{applications, [


@ -1,6 +1,6 @@
{application, emqx_dashboard_sso, [
{description, "EMQX Dashboard Single Sign-On"},
{vsn, "0.1.5"},
{vsn, "0.1.6"},
{registered, [emqx_dashboard_sso_sup]},
{applications, [
kernel,


@ -100,7 +100,7 @@ open(TopicSubscriptions, Opts) ->
State0 = init_state(Opts),
State1 = lists:foldl(
fun({ShareTopicFilter, #{}}, State) ->
?tp(warning, ds_agent_open_subscription, #{
?tp(debug, ds_agent_open_subscription, #{
topic_filter => ShareTopicFilter
}),
add_shared_subscription(State, ShareTopicFilter)
@ -120,7 +120,7 @@ can_subscribe(_State, _ShareTopicFilter, _SubOpts) ->
-spec on_subscribe(t(), share_topic_filter(), emqx_types:subopts()) -> t().
on_subscribe(State0, ShareTopicFilter, _SubOpts) ->
?tp(warning, ds_agent_on_subscribe, #{
?tp(debug, ds_agent_on_subscribe, #{
share_topic_filter => ShareTopicFilter
}),
add_shared_subscription(State0, ShareTopicFilter).
@ -163,7 +163,7 @@ on_disconnect(#{groups := Groups0} = State, StreamProgresses) ->
-spec on_info(t(), term()) -> t().
on_info(State, ?leader_lease_streams_match(GroupId, Leader, StreamProgresses, Version)) ->
?SLOG(info, #{
?SLOG(debug, #{
msg => leader_lease_streams,
group_id => GroupId,
streams => StreamProgresses,
@ -176,7 +176,7 @@ on_info(State, ?leader_lease_streams_match(GroupId, Leader, StreamProgresses, Ve
)
end);
on_info(State, ?leader_renew_stream_lease_match(GroupId, Version)) ->
?SLOG(info, #{
?SLOG(debug, #{
msg => leader_renew_stream_lease,
group_id => GroupId,
version => Version
@ -185,7 +185,7 @@ on_info(State, ?leader_renew_stream_lease_match(GroupId, Version)) ->
emqx_ds_shared_sub_group_sm:handle_leader_renew_stream_lease(GSM, Version)
end);
on_info(State, ?leader_renew_stream_lease_match(GroupId, VersionOld, VersionNew)) ->
?SLOG(info, #{
?SLOG(debug, #{
msg => leader_renew_stream_lease,
group_id => GroupId,
version_old => VersionOld,
@ -195,7 +195,7 @@ on_info(State, ?leader_renew_stream_lease_match(GroupId, VersionOld, VersionNew)
emqx_ds_shared_sub_group_sm:handle_leader_renew_stream_lease(GSM, VersionOld, VersionNew)
end);
on_info(State, ?leader_update_streams_match(GroupId, VersionOld, VersionNew, StreamsNew)) ->
?SLOG(info, #{
?SLOG(debug, #{
msg => leader_update_streams,
group_id => GroupId,
version_old => VersionOld,
@ -208,7 +208,7 @@ on_info(State, ?leader_update_streams_match(GroupId, VersionOld, VersionNew, Str
)
end);
on_info(State, ?leader_invalidate_match(GroupId)) ->
?SLOG(info, #{
?SLOG(debug, #{
msg => leader_invalidate,
group_id => GroupId
}),
@ -245,7 +245,7 @@ delete_shared_subscription(State, ShareTopicFilter, GroupProgress) ->
add_shared_subscription(
#{session_id := SessionId, groups := Groups0} = State0, ShareTopicFilter
) ->
?SLOG(info, #{
?SLOG(debug, #{
msg => agent_add_shared_subscription,
share_topic_filter => ShareTopicFilter
}),


@ -120,7 +120,7 @@ new(#{
send_after := SendAfter
}) ->
?SLOG(
info,
debug,
#{
msg => group_sm_new,
agent => Agent,
@ -133,7 +133,7 @@ new(#{
agent => Agent,
send_after => SendAfter
},
?tp(warning, group_sm_new, #{
?tp(debug, group_sm_new, #{
agent => Agent,
share_topic_filter => ShareTopicFilter
}),
@ -176,7 +176,7 @@ handle_disconnect(
%% Connecting state
handle_connecting(#{agent := Agent, share_topic_filter := ShareTopicFilter} = GSM) ->
?tp(warning, group_sm_enter_connecting, #{
?tp(debug, group_sm_enter_connecting, #{
agent => Agent,
share_topic_filter => ShareTopicFilter
}),
@ -264,11 +264,13 @@ handle_leader_update_streams(
VersionNew,
StreamProgresses
) ->
?tp(warning, shared_sub_group_sm_leader_update_streams, #{
?tp(debug, shared_sub_group_sm_leader_update_streams, #{
id => Id,
version_old => VersionOld,
version_new => VersionNew,
stream_progresses => emqx_ds_shared_sub_proto:format_stream_progresses(StreamProgresses)
stream_progresses => emqx_persistent_session_ds_shared_subs:format_stream_progresses(
StreamProgresses
)
}),
{AddEvents, Streams1} = lists:foldl(
fun(#{stream := Stream, progress := Progress}, {AddEventAcc, StreamsAcc}) ->
@ -303,9 +305,11 @@ handle_leader_update_streams(
maps:keys(Streams1)
),
StreamLeaseEvents = AddEvents ++ RevokeEvents,
?tp(warning, shared_sub_group_sm_leader_update_streams, #{
?tp(debug, shared_sub_group_sm_leader_update_streams, #{
id => Id,
stream_lease_events => emqx_ds_shared_sub_proto:format_lease_events(StreamLeaseEvents)
stream_lease_events => emqx_persistent_session_ds_shared_subs:format_lease_events(
StreamLeaseEvents
)
}),
transition(
GSM,
@ -431,24 +435,11 @@ handle_leader_invalidate(#{agent := Agent, share_topic_filter := ShareTopicFilte
%% Internal API
%%-----------------------------------------------------------------------
handle_state_timeout(
#{state := ?connecting, share_topic_filter := ShareTopicFilter} = GSM,
find_leader_timeout,
_Message
) ->
?tp(debug, find_leader_timeout, #{share_topic_filter => ShareTopicFilter}),
handle_state_timeout(#{state := ?connecting} = GSM, find_leader_timeout, _Message) ->
handle_find_leader_timeout(GSM);
handle_state_timeout(
#{state := ?replaying} = GSM,
renew_lease_timeout,
_Message
) ->
handle_state_timeout(#{state := ?replaying} = GSM, renew_lease_timeout, _Message) ->
handle_renew_lease_timeout(GSM);
handle_state_timeout(
GSM,
update_stream_state_timeout,
_Message
) ->
handle_state_timeout(GSM, update_stream_state_timeout, _Message) ->
?tp(debug, update_stream_state_timeout, #{}),
handle_stream_progress(GSM, []).


@ -164,7 +164,7 @@ handle_event({call, From}, #register{register_fun = Fun}, ?leader_waiting_regist
%%--------------------------------------------------------------------
%% replaying state
handle_event(enter, _OldState, ?leader_active, #{topic := Topic} = _Data) ->
?tp(warning, shared_sub_leader_enter_actve, #{topic => Topic}),
?tp(debug, shared_sub_leader_enter_actve, #{topic => Topic}),
{keep_state_and_data, [
{{timeout, #renew_streams{}}, 0, #renew_streams{}},
{{timeout, #renew_leases{}}, ?dq_config(leader_renew_lease_interval_ms), #renew_leases{}},
@ -174,7 +174,7 @@ handle_event(enter, _OldState, ?leader_active, #{topic := Topic} = _Data) ->
%% timers
%% renew_streams timer
handle_event({timeout, #renew_streams{}}, #renew_streams{}, ?leader_active, Data0) ->
% ?tp(warning, shared_sub_leader_timeout, #{timeout => renew_streams}),
?tp(debug, shared_sub_leader_timeout, #{timeout => renew_streams}),
Data1 = renew_streams(Data0),
{keep_state, Data1,
{
@ -184,7 +184,7 @@ handle_event({timeout, #renew_streams{}}, #renew_streams{}, ?leader_active, Data
}};
%% renew_leases timer
handle_event({timeout, #renew_leases{}}, #renew_leases{}, ?leader_active, Data0) ->
% ?tp(warning, shared_sub_leader_timeout, #{timeout => renew_leases}),
?tp(debug, shared_sub_leader_timeout, #{timeout => renew_leases}),
Data1 = renew_leases(Data0),
{keep_state, Data1,
{{timeout, #renew_leases{}}, ?dq_config(leader_renew_lease_interval_ms), #renew_leases{}}};
@ -279,7 +279,7 @@ renew_streams(
Data2 = Data1#{stream_states => NewStreamStates, rank_progress => RankProgress1},
Data3 = revoke_streams(Data2),
Data4 = assign_streams(Data3),
?SLOG(info, #{
?SLOG(debug, #{
msg => leader_renew_streams,
topic_filter => TopicFilter,
new_streams => length(NewStreamsWRanks)
@ -368,7 +368,7 @@ revoke_excess_streams_from_agent(Data0, Agent, DesiredCount) ->
false ->
AgentState0;
true ->
?tp(warning, shared_sub_leader_revoke_streams, #{
?tp(debug, shared_sub_leader_revoke_streams, #{
agent => Agent,
agent_stream_count => length(Streams0),
revoke_count => RevokeCount,
@ -421,7 +421,7 @@ assign_lacking_streams(Data0, Agent, DesiredCount) ->
false ->
Data0;
true ->
?tp(warning, shared_sub_leader_assign_streams, #{
?tp(debug, shared_sub_leader_assign_streams, #{
agent => Agent,
agent_stream_count => length(Streams0),
assign_count => AssignCount,
@ -449,7 +449,7 @@ select_streams_for_assign(Data0, _Agent, AssignCount) ->
%% renew_leases - send lease confirmations to agents
renew_leases(#{agents := AgentStates} = Data) ->
?tp(warning, shared_sub_leader_renew_leases, #{agents => maps:keys(AgentStates)}),
?tp(debug, shared_sub_leader_renew_leases, #{agents => maps:keys(AgentStates)}),
ok = lists:foreach(
fun({Agent, AgentState}) ->
renew_lease(Data, Agent, AgentState)
@ -492,7 +492,7 @@ drop_timeout_agents(#{agents := Agents} = Data) ->
(is_integer(NoReplayingDeadline) andalso NoReplayingDeadline < Now)
of
true ->
?SLOG(info, #{
?SLOG(debug, #{
msg => leader_agent_timeout,
now => Now,
update_deadline => UpdateDeadline,
@ -516,14 +516,14 @@ connect_agent(
Agent,
AgentMetadata
) ->
?SLOG(info, #{
?SLOG(debug, #{
msg => leader_agent_connected,
agent => Agent,
group_id => GroupId
}),
case Agents of
#{Agent := AgentState} ->
?tp(warning, shared_sub_leader_agent_already_connected, #{
?tp(debug, shared_sub_leader_agent_already_connected, #{
agent => Agent
}),
reconnect_agent(Data, Agent, AgentMetadata, AgentState);
@ -546,7 +546,7 @@ reconnect_agent(
AgentMetadata,
#{streams := OldStreams, revoked_streams := OldRevokedStreams} = _OldAgentState
) ->
?tp(warning, shared_sub_leader_agent_reconnect, #{
?tp(debug, shared_sub_leader_agent_reconnect, #{
agent => Agent,
agent_metadata => AgentMetadata,
inherited_streams => OldStreams
@ -767,7 +767,7 @@ update_agent_stream_states(Data0, Agent, AgentStreamProgresses, VersionOld, Vers
disconnect_agent(Data0, Agent, AgentStreamProgresses, Version) ->
case get_agent_state(Data0, Agent) of
#{version := Version} ->
?tp(warning, shared_sub_leader_disconnect_agent, #{
?tp(debug, shared_sub_leader_disconnect_agent, #{
agent => Agent,
version => Version
}),
@ -794,7 +794,7 @@ agent_transition_to_waiting_updating(
Streams,
RevokedStreams
) ->
?tp(warning, shared_sub_leader_agent_state_transition, #{
?tp(debug, shared_sub_leader_agent_state_transition, #{
agent => Agent,
old_state => OldState,
new_state => ?waiting_updating
@ -818,7 +818,7 @@ agent_transition_to_waiting_updating(
agent_transition_to_waiting_replaying(
#{group_id := GroupId} = _Data, Agent, #{state := OldState, version := Version} = AgentState0
) ->
?tp(warning, shared_sub_leader_agent_state_transition, #{
?tp(debug, shared_sub_leader_agent_state_transition, #{
agent => Agent,
old_state => OldState,
new_state => ?waiting_replaying
@ -833,7 +833,7 @@ agent_transition_to_waiting_replaying(
agent_transition_to_initial_waiting_replaying(
#{group_id := GroupId} = Data, Agent, AgentMetadata, InitialStreams
) ->
?tp(warning, shared_sub_leader_agent_state_transition, #{
?tp(debug, shared_sub_leader_agent_state_transition, #{
agent => Agent,
old_state => none,
new_state => ?waiting_replaying
@ -856,7 +856,7 @@ agent_transition_to_initial_waiting_replaying(
renew_no_replaying_deadline(AgentState).
agent_transition_to_replaying(Agent, #{state := ?waiting_replaying} = AgentState) ->
?tp(warning, shared_sub_leader_agent_state_transition, #{
?tp(debug, shared_sub_leader_agent_state_transition, #{
agent => Agent,
old_state => ?waiting_replaying,
new_state => ?replaying
@ -868,7 +868,7 @@ agent_transition_to_replaying(Agent, #{state := ?waiting_replaying} = AgentState
}.
agent_transition_to_updating(Agent, #{state := ?waiting_updating} = AgentState0) ->
?tp(warning, shared_sub_leader_agent_state_transition, #{
?tp(debug, shared_sub_leader_agent_state_transition, #{
agent => Agent,
old_state => ?waiting_updating,
new_state => ?updating
@ -995,7 +995,7 @@ drop_agent(#{agents := Agents} = Data0, Agent) ->
#{streams := Streams, revoked_streams := RevokedStreams} = AgentState,
AllStreams = Streams ++ RevokedStreams,
Data1 = unassign_streams(Data0, AllStreams),
?tp(warning, shared_sub_leader_drop_agent, #{agent => Agent}),
?tp(debug, shared_sub_leader_drop_agent, #{agent => Agent}),
Data1#{agents => maps:remove(Agent, Agents)}.
invalidate_agent(#{group_id := GroupId}, Agent) ->


@ -55,7 +55,7 @@ set_replayed({{RankX, RankY}, Stream}, State) ->
State#{RankX => #{min_y => MinY, ys => Ys2}};
_ ->
?SLOG(
warning,
debug,
#{
msg => leader_rank_progress_double_or_invalid_update,
rank_x => RankX,


@ -22,12 +22,6 @@
]).
-export([
format_stream_progresses/1,
format_stream_progress/1,
format_stream_key/1,
format_stream_keys/1,
format_lease_event/1,
format_lease_events/1,
agent/2
]).
@ -57,6 +51,20 @@
agent_metadata/0
]).
-define(log_agent_msg(ToLeader, Msg),
?tp(debug, shared_sub_proto_msg, #{
to_leader => ToLeader,
msg => emqx_ds_shared_sub_proto_format:format_agent_msg(Msg)
})
).
-define(log_leader_msg(ToAgent, Msg),
?tp(debug, shared_sub_proto_msg, #{
to_agent => ToAgent,
msg => emqx_ds_shared_sub_proto_format:format_leader_msg(Msg)
})
).
%%--------------------------------------------------------------------
%% API
%%--------------------------------------------------------------------
@ -67,15 +75,7 @@
agent_connect_leader(ToLeader, FromAgent, AgentMetadata, ShareTopicFilter) when
?is_local_leader(ToLeader)
->
?tp(warning, shared_sub_proto_msg, #{
type => agent_connect_leader,
to_leader => ToLeader,
from_agent => FromAgent,
agent_metadata => AgentMetadata,
share_topic_filter => ShareTopicFilter
}),
_ = erlang:send(ToLeader, ?agent_connect_leader(FromAgent, AgentMetadata, ShareTopicFilter)),
ok;
send_agent_msg(ToLeader, ?agent_connect_leader(FromAgent, AgentMetadata, ShareTopicFilter));
agent_connect_leader(ToLeader, FromAgent, AgentMetadata, ShareTopicFilter) ->
emqx_ds_shared_sub_proto_v1:agent_connect_leader(
?leader_node(ToLeader), ToLeader, FromAgent, AgentMetadata, ShareTopicFilter
@ -85,15 +85,7 @@ agent_connect_leader(ToLeader, FromAgent, AgentMetadata, ShareTopicFilter) ->
agent_update_stream_states(ToLeader, FromAgent, StreamProgresses, Version) when
?is_local_leader(ToLeader)
->
?tp(warning, shared_sub_proto_msg, #{
type => agent_update_stream_states,
to_leader => ToLeader,
from_agent => FromAgent,
stream_progresses => format_stream_progresses(StreamProgresses),
version => Version
}),
_ = erlang:send(ToLeader, ?agent_update_stream_states(FromAgent, StreamProgresses, Version)),
ok;
send_agent_msg(ToLeader, ?agent_update_stream_states(FromAgent, StreamProgresses, Version));
agent_update_stream_states(ToLeader, FromAgent, StreamProgresses, Version) ->
emqx_ds_shared_sub_proto_v1:agent_update_stream_states(
?leader_node(ToLeader), ToLeader, FromAgent, StreamProgresses, Version
@ -105,18 +97,9 @@ agent_update_stream_states(ToLeader, FromAgent, StreamProgresses, Version) ->
agent_update_stream_states(ToLeader, FromAgent, StreamProgresses, VersionOld, VersionNew) when
?is_local_leader(ToLeader)
->
?tp(warning, shared_sub_proto_msg, #{
type => agent_update_stream_states,
to_leader => ToLeader,
from_agent => FromAgent,
stream_progresses => format_stream_progresses(StreamProgresses),
version_old => VersionOld,
version_new => VersionNew
}),
_ = erlang:send(
send_agent_msg(
ToLeader, ?agent_update_stream_states(FromAgent, StreamProgresses, VersionOld, VersionNew)
),
ok;
);
agent_update_stream_states(ToLeader, FromAgent, StreamProgresses, VersionOld, VersionNew) ->
emqx_ds_shared_sub_proto_v1:agent_update_stream_states(
?leader_node(ToLeader), ToLeader, FromAgent, StreamProgresses, VersionOld, VersionNew
@ -125,15 +108,7 @@ agent_update_stream_states(ToLeader, FromAgent, StreamProgresses, VersionOld, Ve
agent_disconnect(ToLeader, FromAgent, StreamProgresses, Version) when
?is_local_leader(ToLeader)
->
?tp(warning, shared_sub_proto_msg, #{
type => agent_disconnect,
to_leader => ToLeader,
from_agent => FromAgent,
stream_progresses => format_stream_progresses(StreamProgresses),
version => Version
}),
_ = erlang:send(ToLeader, ?agent_disconnect(FromAgent, StreamProgresses, Version)),
ok;
send_agent_msg(ToLeader, ?agent_disconnect(FromAgent, StreamProgresses, Version));
agent_disconnect(ToLeader, FromAgent, StreamProgresses, Version) ->
emqx_ds_shared_sub_proto_v1:agent_disconnect(
?leader_node(ToLeader), ToLeader, FromAgent, StreamProgresses, Version
@ -144,19 +119,7 @@ agent_disconnect(ToLeader, FromAgent, StreamProgresses, Version) ->
-spec leader_lease_streams(agent(), group(), leader(), list(leader_stream_progress()), version()) ->
ok.
leader_lease_streams(ToAgent, OfGroup, Leader, Streams, Version) when ?is_local_agent(ToAgent) ->
?tp(warning, shared_sub_proto_msg, #{
type => leader_lease_streams,
to_agent => ToAgent,
of_group => OfGroup,
leader => Leader,
streams => format_stream_progresses(Streams),
version => Version
}),
_ = emqx_persistent_session_ds_shared_subs_agent:send(
?agent_pid(ToAgent),
?leader_lease_streams(OfGroup, Leader, Streams, Version)
),
ok;
send_leader_msg(ToAgent, ?leader_lease_streams(OfGroup, Leader, Streams, Version));
leader_lease_streams(ToAgent, OfGroup, Leader, Streams, Version) ->
emqx_ds_shared_sub_proto_v1:leader_lease_streams(
?agent_node(ToAgent), ToAgent, OfGroup, Leader, Streams, Version
@ -164,17 +127,7 @@ leader_lease_streams(ToAgent, OfGroup, Leader, Streams, Version) ->
-spec leader_renew_stream_lease(agent(), group(), version()) -> ok.
leader_renew_stream_lease(ToAgent, OfGroup, Version) when ?is_local_agent(ToAgent) ->
?tp(warning, shared_sub_proto_msg, #{
type => leader_renew_stream_lease,
to_agent => ToAgent,
of_group => OfGroup,
version => Version
}),
_ = emqx_persistent_session_ds_shared_subs_agent:send(
?agent_pid(ToAgent),
?leader_renew_stream_lease(OfGroup, Version)
),
ok;
send_leader_msg(ToAgent, ?leader_renew_stream_lease(OfGroup, Version));
leader_renew_stream_lease(ToAgent, OfGroup, Version) ->
emqx_ds_shared_sub_proto_v1:leader_renew_stream_lease(
?agent_node(ToAgent), ToAgent, OfGroup, Version
@ -182,18 +135,7 @@ leader_renew_stream_lease(ToAgent, OfGroup, Version) ->
-spec leader_renew_stream_lease(agent(), group(), version(), version()) -> ok.
leader_renew_stream_lease(ToAgent, OfGroup, VersionOld, VersionNew) when ?is_local_agent(ToAgent) ->
?tp(warning, shared_sub_proto_msg, #{
type => leader_renew_stream_lease,
to_agent => ToAgent,
of_group => OfGroup,
version_old => VersionOld,
version_new => VersionNew
}),
_ = emqx_persistent_session_ds_shared_subs_agent:send(
?agent_pid(ToAgent),
?leader_renew_stream_lease(OfGroup, VersionOld, VersionNew)
),
ok;
send_leader_msg(ToAgent, ?leader_renew_stream_lease(OfGroup, VersionOld, VersionNew));
leader_renew_stream_lease(ToAgent, OfGroup, VersionOld, VersionNew) ->
emqx_ds_shared_sub_proto_v1:leader_renew_stream_lease(
?agent_node(ToAgent), ToAgent, OfGroup, VersionOld, VersionNew
@ -204,19 +146,7 @@ leader_renew_stream_lease(ToAgent, OfGroup, VersionOld, VersionNew) ->
leader_update_streams(ToAgent, OfGroup, VersionOld, VersionNew, StreamsNew) when
?is_local_agent(ToAgent)
->
?tp(warning, shared_sub_proto_msg, #{
type => leader_update_streams,
to_agent => ToAgent,
of_group => OfGroup,
version_old => VersionOld,
version_new => VersionNew,
streams_new => format_stream_progresses(StreamsNew)
}),
_ = emqx_persistent_session_ds_shared_subs_agent:send(
?agent_pid(ToAgent),
?leader_update_streams(OfGroup, VersionOld, VersionNew, StreamsNew)
),
ok;
send_leader_msg(ToAgent, ?leader_update_streams(OfGroup, VersionOld, VersionNew, StreamsNew));
leader_update_streams(ToAgent, OfGroup, VersionOld, VersionNew, StreamsNew) ->
emqx_ds_shared_sub_proto_v1:leader_update_streams(
?agent_node(ToAgent), ToAgent, OfGroup, VersionOld, VersionNew, StreamsNew
@ -224,16 +154,7 @@ leader_update_streams(ToAgent, OfGroup, VersionOld, VersionNew, StreamsNew) ->
-spec leader_invalidate(agent(), group()) -> ok.
leader_invalidate(ToAgent, OfGroup) when ?is_local_agent(ToAgent) ->
?tp(warning, shared_sub_proto_msg, #{
type => leader_invalidate,
to_agent => ToAgent,
of_group => OfGroup
}),
_ = emqx_persistent_session_ds_shared_subs_agent:send(
?agent_pid(ToAgent),
?leader_invalidate(OfGroup)
),
ok;
send_leader_msg(ToAgent, ?leader_invalidate(OfGroup));
leader_invalidate(ToAgent, OfGroup) ->
emqx_ds_shared_sub_proto_v1:leader_invalidate(
?agent_node(ToAgent), ToAgent, OfGroup
@ -247,41 +168,12 @@ agent(Id, Pid) ->
_ = Id,
?agent(Id, Pid).
format_stream_progresses(Streams) ->
lists:map(
fun format_stream_progress/1,
Streams
).
send_agent_msg(ToLeader, Msg) ->
?log_agent_msg(ToLeader, Msg),
_ = erlang:send(ToLeader, Msg),
ok.
format_stream_progress(#{stream := Stream, progress := Progress} = Value) ->
Value#{stream => format_opaque(Stream), progress => format_progress(Progress)}.
format_progress(#{iterator := Iterator} = Progress) ->
Progress#{iterator => format_opaque(Iterator)}.
format_stream_key({SubId, Stream}) ->
{SubId, format_opaque(Stream)}.
format_stream_keys(StreamKeys) ->
lists:map(
fun format_stream_key/1,
StreamKeys
).
format_lease_events(Events) ->
lists:map(
fun format_lease_event/1,
Events
).
format_lease_event(#{stream := Stream, progress := Progress} = Event) ->
Event#{stream => format_opaque(Stream), progress => format_progress(Progress)};
format_lease_event(#{stream := Stream} = Event) ->
Event#{stream => format_opaque(Stream)}.
%%--------------------------------------------------------------------
%% Helpers
%%--------------------------------------------------------------------
format_opaque(Opaque) ->
erlang:phash2(Opaque).
send_leader_msg(ToAgent, Msg) ->
?log_leader_msg(ToAgent, Msg),
_ = emqx_persistent_session_ds_shared_subs_agent:send(?agent_pid(ToAgent), Msg),
ok.

View File

@ -12,146 +12,167 @@
%% agent messages, sent from agent side to the leader
-define(agent_connect_leader_msg, agent_connect_leader).
-define(agent_update_stream_states_msg, agent_update_stream_states).
-define(agent_connect_leader_timeout_msg, agent_connect_leader_timeout).
-define(agent_renew_stream_lease_timeout_msg, agent_renew_stream_lease_timeout).
-define(agent_disconnect_msg, agent_disconnect).
-define(agent_connect_leader_msg, 1).
-define(agent_update_stream_states_msg, 2).
-define(agent_connect_leader_timeout_msg, 3).
-define(agent_renew_stream_lease_timeout_msg, 4).
-define(agent_disconnect_msg, 5).
%% message keys (numeric, to avoid sending atoms over the network)
-define(agent_msg_type, 1).
-define(agent_msg_agent, 2).
-define(agent_msg_share_topic_filter, 3).
-define(agent_msg_agent_metadata, 4).
-define(agent_msg_stream_states, 5).
-define(agent_msg_version, 6).
-define(agent_msg_version_old, 7).
-define(agent_msg_version_new, 8).
%% Agent messages sent to the leader.
%% Leader talks to many agents, `agent` field is used to identify the sender.
-define(agent_connect_leader(Agent, AgentMetadata, ShareTopicFilter), #{
type => ?agent_connect_leader_msg,
share_topic_filter => ShareTopicFilter,
agent_metadata => AgentMetadata,
agent => Agent
?agent_msg_type => ?agent_connect_leader_msg,
?agent_msg_share_topic_filter => ShareTopicFilter,
?agent_msg_agent_metadata => AgentMetadata,
?agent_msg_agent => Agent
}).
-define(agent_connect_leader_match(Agent, AgentMetadata, ShareTopicFilter), #{
type := ?agent_connect_leader_msg,
share_topic_filter := ShareTopicFilter,
agent_metadata := AgentMetadata,
agent := Agent
?agent_msg_type := ?agent_connect_leader_msg,
?agent_msg_share_topic_filter := ShareTopicFilter,
?agent_msg_agent_metadata := AgentMetadata,
?agent_msg_agent := Agent
}).
-define(agent_update_stream_states(Agent, StreamStates, Version), #{
type => ?agent_update_stream_states_msg,
stream_states => StreamStates,
version => Version,
agent => Agent
?agent_msg_type => ?agent_update_stream_states_msg,
?agent_msg_stream_states => StreamStates,
?agent_msg_version => Version,
?agent_msg_agent => Agent
}).
-define(agent_update_stream_states_match(Agent, StreamStates, Version), #{
type := ?agent_update_stream_states_msg,
stream_states := StreamStates,
version := Version,
agent := Agent
?agent_msg_type := ?agent_update_stream_states_msg,
?agent_msg_stream_states := StreamStates,
?agent_msg_version := Version,
?agent_msg_agent := Agent
}).
-define(agent_update_stream_states(Agent, StreamStates, VersionOld, VersionNew), #{
type => ?agent_update_stream_states_msg,
stream_states => StreamStates,
version_old => VersionOld,
version_new => VersionNew,
agent => Agent
?agent_msg_type => ?agent_update_stream_states_msg,
?agent_msg_stream_states => StreamStates,
?agent_msg_version_old => VersionOld,
?agent_msg_version_new => VersionNew,
?agent_msg_agent => Agent
}).
-define(agent_update_stream_states_match(Agent, StreamStates, VersionOld, VersionNew), #{
type := ?agent_update_stream_states_msg,
stream_states := StreamStates,
version_old := VersionOld,
version_new := VersionNew,
agent := Agent
?agent_msg_type := ?agent_update_stream_states_msg,
?agent_msg_stream_states := StreamStates,
?agent_msg_version_old := VersionOld,
?agent_msg_version_new := VersionNew,
?agent_msg_agent := Agent
}).
-define(agent_disconnect(Agent, StreamStates, Version), #{
type => ?agent_disconnect_msg,
stream_states => StreamStates,
version => Version,
agent => Agent
?agent_msg_type => ?agent_disconnect_msg,
?agent_msg_stream_states => StreamStates,
?agent_msg_version => Version,
?agent_msg_agent => Agent
}).
-define(agent_disconnect_match(Agent, StreamStates, Version), #{
type := ?agent_disconnect_msg,
stream_states := StreamStates,
version := Version,
agent := Agent
?agent_msg_type := ?agent_disconnect_msg,
?agent_msg_stream_states := StreamStates,
?agent_msg_version := Version,
?agent_msg_agent := Agent
}).
%% leader messages, sent from the leader to the agent
%% Agent may have several shared subscriptions, so may talk to several leaders
%% `group_id` field is used to identify the leader.
-define(leader_lease_streams_msg, leader_lease_streams).
-define(leader_renew_stream_lease_msg, leader_renew_stream_lease).
-define(leader_lease_streams_msg, 101).
-define(leader_renew_stream_lease_msg, 102).
-define(leader_update_streams, 103).
-define(leader_invalidate, 104).
-define(leader_msg_type, 101).
-define(leader_msg_streams, 102).
-define(leader_msg_version, 103).
-define(leader_msg_version_old, 104).
-define(leader_msg_version_new, 105).
-define(leader_msg_streams_new, 106).
-define(leader_msg_leader, 107).
-define(leader_msg_group_id, 108).
-define(leader_lease_streams(GroupId, Leader, Streams, Version), #{
type => ?leader_lease_streams_msg,
streams => Streams,
version => Version,
leader => Leader,
group_id => GroupId
?leader_msg_type => ?leader_lease_streams_msg,
?leader_msg_streams => Streams,
?leader_msg_version => Version,
?leader_msg_leader => Leader,
?leader_msg_group_id => GroupId
}).
-define(leader_lease_streams_match(GroupId, Leader, Streams, Version), #{
type := ?leader_lease_streams_msg,
streams := Streams,
version := Version,
leader := Leader,
group_id := GroupId
?leader_msg_type := ?leader_lease_streams_msg,
?leader_msg_streams := Streams,
?leader_msg_version := Version,
?leader_msg_leader := Leader,
?leader_msg_group_id := GroupId
}).
-define(leader_renew_stream_lease(GroupId, Version), #{
type => ?leader_renew_stream_lease_msg,
version => Version,
group_id => GroupId
?leader_msg_type => ?leader_renew_stream_lease_msg,
?leader_msg_version => Version,
?leader_msg_group_id => GroupId
}).
-define(leader_renew_stream_lease_match(GroupId, Version), #{
type := ?leader_renew_stream_lease_msg,
version := Version,
group_id := GroupId
?leader_msg_type := ?leader_renew_stream_lease_msg,
?leader_msg_version := Version,
?leader_msg_group_id := GroupId
}).
-define(leader_renew_stream_lease(GroupId, VersionOld, VersionNew), #{
type => ?leader_renew_stream_lease_msg,
version_old => VersionOld,
version_new => VersionNew,
group_id => GroupId
?leader_msg_type => ?leader_renew_stream_lease_msg,
?leader_msg_version_old => VersionOld,
?leader_msg_version_new => VersionNew,
?leader_msg_group_id => GroupId
}).
-define(leader_renew_stream_lease_match(GroupId, VersionOld, VersionNew), #{
type := ?leader_renew_stream_lease_msg,
version_old := VersionOld,
version_new := VersionNew,
group_id := GroupId
?leader_msg_type := ?leader_renew_stream_lease_msg,
?leader_msg_version_old := VersionOld,
?leader_msg_version_new := VersionNew,
?leader_msg_group_id := GroupId
}).
-define(leader_update_streams(GroupId, VersionOld, VersionNew, StreamsNew), #{
type => leader_update_streams,
version_old => VersionOld,
version_new => VersionNew,
streams_new => StreamsNew,
group_id => GroupId
?leader_msg_type => ?leader_update_streams,
?leader_msg_version_old => VersionOld,
?leader_msg_version_new => VersionNew,
?leader_msg_streams_new => StreamsNew,
?leader_msg_group_id => GroupId
}).
-define(leader_update_streams_match(GroupId, VersionOld, VersionNew, StreamsNew), #{
type := leader_update_streams,
version_old := VersionOld,
version_new := VersionNew,
streams_new := StreamsNew,
group_id := GroupId
?leader_msg_type := ?leader_update_streams,
?leader_msg_version_old := VersionOld,
?leader_msg_version_new := VersionNew,
?leader_msg_streams_new := StreamsNew,
?leader_msg_group_id := GroupId
}).
-define(leader_invalidate(GroupId), #{
type => leader_invalidate,
group_id => GroupId
?leader_msg_type => ?leader_invalidate,
?leader_msg_group_id => GroupId
}).
-define(leader_invalidate_match(GroupId), #{
type := leader_invalidate,
group_id := GroupId
?leader_msg_type := ?leader_invalidate,
?leader_msg_group_id := GroupId
}).
%% Helpers

View File

@ -0,0 +1,82 @@
%%--------------------------------------------------------------------
%% Copyright (c) 2024 EMQ Technologies Co., Ltd. All Rights Reserved.
%%--------------------------------------------------------------------
-module(emqx_ds_shared_sub_proto_format).
-include("emqx_ds_shared_sub_proto.hrl").
-export([format_agent_msg/1, format_leader_msg/1]).
%%--------------------------------------------------------------------
%% API
%%--------------------------------------------------------------------
format_agent_msg(Msg) ->
maps:from_list(
lists:map(
fun({K, V}) ->
FormattedKey = agent_msg_key(K),
{FormattedKey, format_agent_msg_value(FormattedKey, V)}
end,
maps:to_list(Msg)
)
).
format_leader_msg(Msg) ->
maps:from_list(
lists:map(
fun({K, V}) ->
FormattedKey = leader_msg_key(K),
{FormattedKey, format_leader_msg_value(FormattedKey, V)}
end,
maps:to_list(Msg)
)
).
%%--------------------------------------------------------------------
%% Internal functions
%%--------------------------------------------------------------------
format_agent_msg_value(agent_msg_type, Type) ->
agent_msg_type(Type);
format_agent_msg_value(agent_msg_stream_states, StreamStates) ->
emqx_persistent_session_ds_shared_subs:format_stream_progresses(StreamStates);
format_agent_msg_value(_, Value) ->
Value.
format_leader_msg_value(leader_msg_type, Type) ->
leader_msg_type(Type);
format_leader_msg_value(leader_msg_streams, Streams) ->
emqx_persistent_session_ds_shared_subs:format_lease_events(Streams);
format_leader_msg_value(_, Value) ->
Value.
agent_msg_type(?agent_connect_leader_msg) -> agent_connect_leader_msg;
agent_msg_type(?agent_update_stream_states_msg) -> agent_update_stream_states_msg;
agent_msg_type(?agent_connect_leader_timeout_msg) -> agent_connect_leader_timeout_msg;
agent_msg_type(?agent_renew_stream_lease_timeout_msg) -> agent_renew_stream_lease_timeout_msg;
agent_msg_type(?agent_disconnect_msg) -> agent_disconnect_msg.
agent_msg_key(?agent_msg_type) -> agent_msg_type;
agent_msg_key(?agent_msg_agent) -> agent_msg_agent;
agent_msg_key(?agent_msg_share_topic_filter) -> agent_msg_share_topic_filter;
agent_msg_key(?agent_msg_agent_metadata) -> agent_msg_agent_metadata;
agent_msg_key(?agent_msg_stream_states) -> agent_msg_stream_states;
agent_msg_key(?agent_msg_version) -> agent_msg_version;
agent_msg_key(?agent_msg_version_old) -> agent_msg_version_old;
agent_msg_key(?agent_msg_version_new) -> agent_msg_version_new.
leader_msg_type(?leader_lease_streams_msg) -> leader_lease_streams_msg;
leader_msg_type(?leader_renew_stream_lease_msg) -> leader_renew_stream_lease_msg;
leader_msg_type(?leader_update_streams) -> leader_update_streams;
leader_msg_type(?leader_invalidate) -> leader_invalidate.
leader_msg_key(?leader_msg_type) -> leader_msg_type;
leader_msg_key(?leader_msg_streams) -> leader_msg_streams;
leader_msg_key(?leader_msg_version) -> leader_msg_version;
leader_msg_key(?leader_msg_version_old) -> leader_msg_version_old;
leader_msg_key(?leader_msg_version_new) -> leader_msg_version_new;
leader_msg_key(?leader_msg_streams_new) -> leader_msg_streams_new;
leader_msg_key(?leader_msg_leader) -> leader_msg_leader;
leader_msg_key(?leader_msg_group_id) -> leader_msg_group_id.

View File

@ -113,7 +113,7 @@ do_lookup_leader(Agent, AgentMetadata, ShareTopicFilter, State) ->
Pid ->
Pid
end,
?SLOG(info, #{
?SLOG(debug, #{
msg => lookup_leader,
agent => Agent,
share_topic_filter => ShareTopicFilter,

View File

@ -417,7 +417,7 @@ t_lease_reconnect(_Config) ->
?assertWaitEvent(
{ok, _, _} = emqtt:subscribe(ConnShared, <<"$share/gr2/topic2/#">>, 1),
#{?snk_kind := find_leader_timeout},
#{?snk_kind := group_sm_find_leader_timeout},
5_000
),

View File

@ -1,7 +1,7 @@
%% -*- mode: erlang -*-
{application, emqx_gateway_coap, [
{description, "CoAP Gateway"},
{vsn, "0.1.9"},
{vsn, "0.1.10"},
{registered, []},
{applications, [kernel, stdlib, emqx, emqx_gateway]},
{env, []},

View File

@ -1,7 +1,7 @@
%% -*- mode: erlang -*-
{application, emqx_gateway_exproto, [
{description, "ExProto Gateway"},
{vsn, "0.1.12"},
{vsn, "0.1.13"},
{registered, []},
{applications, [kernel, stdlib, grpc, emqx, emqx_gateway]},
{env, []},

View File

@ -1,7 +1,7 @@
%% -*- mode: erlang -*-
{application, emqx_gateway_gbt32960, [
{description, "GBT32960 Gateway"},
{vsn, "0.1.4"},
{vsn, "0.1.5"},
{registered, []},
{applications, [kernel, stdlib, emqx, emqx_gateway]},
{env, []},

View File

@ -1,7 +1,7 @@
%% -*- mode: erlang -*-
{application, emqx_gateway_jt808, [
{description, "JT/T 808 Gateway"},
{vsn, "0.1.0"},
{vsn, "0.1.1"},
{registered, []},
{applications, [kernel, stdlib, emqx, emqx_gateway]},
{env, []},

View File

@ -1,7 +1,7 @@
%% -*- mode: erlang -*-
{application, emqx_gateway_mqttsn, [
{description, "MQTT-SN Gateway"},
{vsn, "0.2.2"},
{vsn, "0.2.3"},
{registered, []},
{applications, [kernel, stdlib, emqx, emqx_gateway]},
{env, []},

View File

@ -3,7 +3,7 @@
{id, "emqx_machine"},
{description, "The EMQX Machine"},
% strict semver, bump manually!
{vsn, "0.3.3"},
{vsn, "0.3.4"},
{modules, []},
{registered, []},
{applications, [kernel, stdlib, emqx_ctl, redbug]},

View File

@ -2,7 +2,7 @@
{application, emqx_management, [
{description, "EMQX Management API and CLI"},
% strict semver, bump manually!
{vsn, "5.2.3"},
{vsn, "5.2.4"},
{modules, []},
{registered, [emqx_management_sup]},
{applications, [

View File

@ -29,13 +29,9 @@
start(_Type, _Args) ->
ok = mria:wait_for_tables(emqx_mgmt_auth:create_tables()),
case emqx_mgmt_auth:init_bootstrap_file() of
ok ->
emqx_mgmt_auth:try_init_bootstrap_file(),
emqx_conf:add_handler([api_key], emqx_mgmt_auth),
emqx_mgmt_sup:start_link();
{error, Reason} ->
{error, Reason}
end.
emqx_mgmt_sup:start_link().
stop(_State) ->
emqx_conf:remove_handler([api_key]),

View File

@ -32,7 +32,7 @@
update/5,
delete/1,
list/0,
init_bootstrap_file/0,
try_init_bootstrap_file/0,
format/1
]).
@ -52,6 +52,7 @@
-ifdef(TEST).
-export([create/7]).
-export([trans/2, force_create_app/1]).
-export([init_bootstrap_file/1]).
-endif.
-define(APP, emqx_app).
@ -114,11 +115,12 @@ post_config_update([api_key], _Req, NewConf, _OldConf, _AppEnvs) ->
end,
ok.
-spec init_bootstrap_file() -> ok | {error, _}.
init_bootstrap_file() ->
-spec try_init_bootstrap_file() -> ok | {error, _}.
try_init_bootstrap_file() ->
File = bootstrap_file(),
?SLOG(debug, #{msg => "init_bootstrap_api_keys_from_file", file => File}),
init_bootstrap_file(File).
_ = init_bootstrap_file(File),
ok.
create(Name, Enable, ExpiredAt, Desc, Role) ->
ApiKey = generate_unique_api_key(Name),
@ -357,10 +359,6 @@ init_bootstrap_file(File) ->
init_bootstrap_file(File, Dev, MP);
{error, Reason0} ->
Reason = emqx_utils:explain_posix(Reason0),
FmtReason = emqx_utils:format(
"load API bootstrap file failed, file:~ts, reason:~ts",
[File, Reason]
),
?SLOG(
error,
@ -371,7 +369,7 @@ init_bootstrap_file(File) ->
}
),
{error, FmtReason}
{error, Reason}
end.
init_bootstrap_file(File, Dev, MP) ->

View File

@ -100,7 +100,7 @@ t_bootstrap_file(_) ->
BadBin = <<"test-1:secret-11\ntest-2 secret-12">>,
ok = file:write_file(File, BadBin),
update_file(File),
?assertMatch({error, #{reason := "invalid_format"}}, emqx_mgmt_auth:init_bootstrap_file()),
?assertMatch({error, #{reason := "invalid_format"}}, emqx_mgmt_auth:init_bootstrap_file(File)),
?assertEqual(ok, auth_authorize(TestPath, <<"test-1">>, <<"secret-11">>)),
?assertMatch({error, _}, auth_authorize(TestPath, <<"test-2">>, <<"secret-12">>)),
update_file(<<>>),
@ -123,7 +123,7 @@ t_bootstrap_file_override(_) ->
ok = file:write_file(File, Bin),
update_file(File),
?assertEqual(ok, emqx_mgmt_auth:init_bootstrap_file()),
?assertEqual(ok, emqx_mgmt_auth:init_bootstrap_file(File)),
MatchFun = fun(ApiKey) -> mnesia:match_object(#?APP{api_key = ApiKey, _ = '_'}) end,
?assertMatch(
@ -156,7 +156,7 @@ t_bootstrap_file_dup_override(_) ->
File = "./bootstrap_api_keys.txt",
ok = file:write_file(File, Bin),
update_file(File),
?assertEqual(ok, emqx_mgmt_auth:init_bootstrap_file()),
?assertEqual(ok, emqx_mgmt_auth:init_bootstrap_file(File)),
SameAppWithDiffName = #?APP{
name = <<"name-1">>,
@ -190,7 +190,7 @@ t_bootstrap_file_dup_override(_) ->
%% Similar to loading bootstrap file at node startup
%% the duplicated apikey in mnesia will be cleaned up
?assertEqual(ok, emqx_mgmt_auth:init_bootstrap_file()),
?assertEqual(ok, emqx_mgmt_auth:init_bootstrap_file(File)),
?assertMatch(
{ok, [
#?APP{

View File

@ -1,7 +1,7 @@
%% -*- mode: erlang -*-
{application, emqx_modules, [
{description, "EMQX Modules"},
{vsn, "5.0.27"},
{vsn, "5.0.28"},
{modules, []},
{applications, [kernel, stdlib, emqx, emqx_ctl, observer_cli]},
{mod, {emqx_modules_app, []}},

View File

@ -1,6 +1,6 @@
{application, emqx_node_rebalance, [
{description, "EMQX Node Rebalance"},
{vsn, "5.0.9"},
{vsn, "5.0.10"},
{registered, [
emqx_node_rebalance_sup,
emqx_node_rebalance,

View File

@ -1,7 +1,7 @@
%% -*- mode: erlang -*-
{application, emqx_plugins, [
{description, "EMQX Plugin Management"},
{vsn, "0.2.2"},
{vsn, "0.2.3"},
{modules, []},
{mod, {emqx_plugins_app, []}},
{applications, [kernel, stdlib, emqx, erlavro]},

View File

@ -1049,19 +1049,22 @@ do_load_plugin_app(AppName, Ebin) ->
end.
start_app(App) ->
case application:ensure_all_started(App) of
{ok, Started} ->
case run_with_timeout(application, ensure_all_started, [App], 10_000) of
{ok, {ok, Started}} ->
case Started =/= [] of
true -> ?SLOG(debug, #{msg => "started_plugin_apps", apps => Started});
false -> ok
end,
?SLOG(debug, #{msg => "started_plugin_app", app => App}),
ok;
{error, {ErrApp, Reason}} ->
end;
{ok, {error, Reason}} ->
throw(#{
msg => "failed_to_start_app",
app => App,
reason => Reason
});
{error, Reason} ->
throw(#{
msg => "failed_to_start_plugin_app",
app => App,
err_app => ErrApp,
reason => Reason
})
end.
@ -1586,3 +1589,20 @@ bin(B) when is_binary(B) -> B.
wrap_to_list(Path) ->
binary_to_list(iolist_to_binary(Path)).
run_with_timeout(Module, Function, Args, Timeout) ->
Self = self(),
Fun = fun() ->
Result = apply(Module, Function, Args),
Self ! {self(), Result}
end,
Pid = spawn(Fun),
TimerRef = erlang:send_after(Timeout, self(), {timeout, Pid}),
receive
{Pid, Result} ->
_ = erlang:cancel_timer(TimerRef),
{ok, Result};
{timeout, Pid} ->
exit(Pid, kill),
{error, timeout}
end.

View File

@ -2,7 +2,7 @@
{application, emqx_prometheus, [
{description, "Prometheus for EMQX"},
% strict semver, bump manually!
{vsn, "5.2.3"},
{vsn, "5.2.4"},
{modules, []},
{registered, [emqx_prometheus_sup]},
{applications, [kernel, stdlib, prometheus, emqx, emqx_auth, emqx_resource, emqx_management]},

View File

@ -1,7 +1,7 @@
%% -*- mode: erlang -*-
{application, emqx_resource, [
{description, "Manager for all external resources"},
{vsn, "0.1.32"},
{vsn, "0.1.33"},
{registered, []},
{mod, {emqx_resource_app, []}},
{applications, [

View File

@ -34,7 +34,7 @@
type :: serde_type(),
eval_context :: term(),
%% for future use
extra = []
extra = #{}
}).
-type serde() :: #serde{}.

View File

@ -148,14 +148,19 @@ post_config_update(
post_config_update(
[?CONF_KEY_ROOT, schemas, NewName],
_Cmd,
NewSchemas,
%% undefined or OldSchemas
_,
NewSchema,
OldSchema,
_AppEnvs
) ->
case build_serdes([{NewName, NewSchemas}]) of
case OldSchema of
undefined ->
ok;
_ ->
ensure_serde_absent(NewName)
end,
case build_serdes([{NewName, NewSchema}]) of
ok ->
{ok, #{NewName => NewSchemas}};
{ok, #{NewName => NewSchema}};
{error, Reason, SerdesToRollback} ->
lists:foreach(fun ensure_serde_absent/1, SerdesToRollback),
{error, Reason}
@ -176,6 +181,7 @@ post_config_update(?CONF_KEY_PATH, _Cmd, NewConf = #{schemas := NewSchemas}, Old
async_delete_serdes(RemovedNames)
end,
SchemasToBuild = maps:to_list(maps:merge(Changed, Added)),
ok = lists:foreach(fun ensure_serde_absent/1, [N || {N, _} <- SchemasToBuild]),
case build_serdes(SchemasToBuild) of
ok ->
{ok, NewConf};

View File

@ -48,6 +48,10 @@
-type eval_context() :: term().
-type fingerprint() :: binary().
-type protobuf_cache_key() :: {schema_name(), fingerprint()}.
-export_type([serde_type/0]).
%%------------------------------------------------------------------------------
@ -175,11 +179,12 @@ make_serde(avro, Name, Source) ->
eval_context = Store
};
make_serde(protobuf, Name, Source) ->
SerdeMod = make_protobuf_serde_mod(Name, Source),
{CacheKey, SerdeMod} = make_protobuf_serde_mod(Name, Source),
#serde{
name = Name,
type = protobuf,
eval_context = SerdeMod
eval_context = SerdeMod,
extra = #{cache_key => CacheKey}
};
make_serde(json, Name, Source) ->
case json_decode(Source) of
@ -254,8 +259,9 @@ eval_encode(#serde{type = json, name = Name}, [Map]) ->
destroy(#serde{type = avro, name = _Name}) ->
?tp(serde_destroyed, #{type => avro, name => _Name}),
ok;
destroy(#serde{type = protobuf, name = _Name, eval_context = SerdeMod}) ->
destroy(#serde{type = protobuf, name = _Name, eval_context = SerdeMod} = Serde) ->
unload_code(SerdeMod),
destroy_protobuf_code(Serde),
?tp(serde_destroyed, #{type => protobuf, name => _Name}),
ok;
destroy(#serde{type = json, name = Name}) ->
@ -282,13 +288,14 @@ jesse_validate(Name, Map) ->
jesse_name(Str) ->
unicode:characters_to_list(Str).
-spec make_protobuf_serde_mod(schema_name(), schema_source()) -> module().
-spec make_protobuf_serde_mod(schema_name(), schema_source()) -> {protobuf_cache_key(), module()}.
make_protobuf_serde_mod(Name, Source) ->
{SerdeMod0, SerdeModFileName} = protobuf_serde_mod_name(Name),
case lazy_generate_protobuf_code(Name, SerdeMod0, Source) of
{ok, SerdeMod, ModBinary} ->
load_code(SerdeMod, SerdeModFileName, ModBinary),
SerdeMod;
CacheKey = protobuf_cache_key(Name, Source),
{CacheKey, SerdeMod};
{error, #{error := Error, warnings := Warnings}} ->
?SLOG(
warning,
@ -310,6 +317,13 @@ protobuf_serde_mod_name(Name) ->
SerdeModFileName = SerdeModName ++ ".memory",
{SerdeMod, SerdeModFileName}.
%% Fixme: we cannot uncomment the following typespec because Dialyzer complains that
%% `Source' should be `string()' due to `gpb_compile:string/3', but it does work fine with
%% binaries...
%% -spec protobuf_cache_key(schema_name(), schema_source()) -> {schema_name(), fingerprint()}.
protobuf_cache_key(Name, Source) ->
{Name, erlang:md5(Source)}.
-spec lazy_generate_protobuf_code(schema_name(), module(), schema_source()) ->
{ok, module(), binary()} | {error, #{error := term(), warnings := [term()]}}.
lazy_generate_protobuf_code(Name, SerdeMod0, Source) ->
@ -326,9 +340,9 @@ lazy_generate_protobuf_code(Name, SerdeMod0, Source) ->
-spec lazy_generate_protobuf_code_trans(schema_name(), module(), schema_source()) ->
{ok, module(), binary()} | {error, #{error := term(), warnings := [term()]}}.
lazy_generate_protobuf_code_trans(Name, SerdeMod0, Source) ->
Fingerprint = erlang:md5(Source),
_ = mnesia:lock({record, ?PROTOBUF_CACHE_TAB, Fingerprint}, write),
case mnesia:read(?PROTOBUF_CACHE_TAB, Fingerprint) of
CacheKey = protobuf_cache_key(Name, Source),
_ = mnesia:lock({record, ?PROTOBUF_CACHE_TAB, CacheKey}, write),
case mnesia:read(?PROTOBUF_CACHE_TAB, CacheKey) of
[#protobuf_cache{module = SerdeMod, module_binary = ModBinary}] ->
?tp(schema_registry_protobuf_cache_hit, #{name => Name}),
{ok, SerdeMod, ModBinary};
@ -337,7 +351,7 @@ lazy_generate_protobuf_code_trans(Name, SerdeMod0, Source) ->
case generate_protobuf_code(SerdeMod0, Source) of
{ok, SerdeMod, ModBinary} ->
CacheEntry = #protobuf_cache{
fingerprint = Fingerprint,
fingerprint = CacheKey,
module = SerdeMod,
module_binary = ModBinary
},
@ -345,7 +359,7 @@ lazy_generate_protobuf_code_trans(Name, SerdeMod0, Source) ->
{ok, SerdeMod, ModBinary};
{ok, SerdeMod, ModBinary, _Warnings} ->
CacheEntry = #protobuf_cache{
fingerprint = Fingerprint,
fingerprint = CacheKey,
module = SerdeMod,
module_binary = ModBinary
},
@ -390,6 +404,21 @@ unload_code(SerdeMod) ->
_ = code:delete(SerdeMod),
ok.
-spec destroy_protobuf_code(serde()) -> ok.
destroy_protobuf_code(Serde) ->
#serde{extra = #{cache_key := CacheKey}} = Serde,
{atomic, Res} = mria:transaction(
?SCHEMA_REGISTRY_SHARD,
fun destroy_protobuf_code_trans/1,
[CacheKey]
),
?tp("schema_registry_protobuf_cache_destroyed", #{name => Serde#serde.name}),
Res.
-spec destroy_protobuf_code_trans({schema_name(), fingerprint()}) -> ok.
destroy_protobuf_code_trans(CacheKey) ->
mnesia:delete(?PROTOBUF_CACHE_TAB, CacheKey, write).
-spec has_inner_type(serde_type(), eval_context(), [binary()]) ->
boolean().
has_inner_type(protobuf, _SerdeMod, [_, _ | _]) ->

View File

@ -207,6 +207,66 @@ t_protobuf_invalid_schema(_Config) ->
),
ok.
%% Checks that we unload code and clear code generation cache after destroying a protobuf
%% serde.
t_destroy_protobuf(_Config) ->
SerdeName = ?FUNCTION_NAME,
SerdeNameBin = atom_to_binary(SerdeName),
?check_trace(
#{timetrap => 5_000},
begin
Params = schema_params(protobuf),
ok = emqx_schema_registry:add_schema(SerdeName, Params),
{ok, {ok, _}} =
?wait_async_action(
emqx_schema_registry:delete_schema(SerdeName),
#{?snk_kind := serde_destroyed, name := SerdeNameBin}
),
%% Create again to check we don't hit the cache.
ok = emqx_schema_registry:add_schema(SerdeName, Params),
{ok, {ok, _}} =
?wait_async_action(
emqx_schema_registry:delete_schema(SerdeName),
#{?snk_kind := serde_destroyed, name := SerdeNameBin}
),
ok
end,
fun(Trace) ->
?assertMatch([], ?of_kind(schema_registry_protobuf_cache_hit, Trace)),
?assertMatch([_ | _], ?of_kind("schema_registry_protobuf_cache_destroyed", Trace)),
ok
end
),
ok.
%% Checks that we don't leave entries lingering in the protobuf code cache table when
%% updating the source of a serde.
t_update_protobuf_cache(_Config) ->
SerdeName = ?FUNCTION_NAME,
?check_trace(
#{timetrap => 5_000},
begin
#{source := Source0} = Params0 = schema_params(protobuf),
ok = emqx_schema_registry:add_schema(SerdeName, Params0),
%% Now we touch the source so protobuf needs to be recompiled.
Source1 = <<Source0/binary, "\n\n">>,
Params1 = Params0#{source := Source1},
{ok, {ok, _}} =
?wait_async_action(
emqx_schema_registry:add_schema(SerdeName, Params1),
#{?snk_kind := "schema_registry_protobuf_cache_destroyed"}
),
ok
end,
fun(Trace) ->
?assertMatch([], ?of_kind(schema_registry_protobuf_cache_hit, Trace)),
?assertMatch([_, _ | _], ?of_kind(schema_registry_protobuf_cache_miss, Trace)),
?assertMatch([_ | _], ?of_kind("schema_registry_protobuf_cache_destroyed", Trace)),
ok
end
),
ok.
t_json_invalid_schema(_Config) ->
SerdeName = invalid_json,
Params = schema_params(json),

View File

@ -2,7 +2,7 @@
{application, emqx_utils, [
{description, "Miscellaneous utilities for EMQX apps"},
% strict semver, bump manually!
{vsn, "5.2.3"},
{vsn, "5.2.4"},
{modules, [
emqx_utils,
emqx_utils_api,

View File

@ -0,0 +1,4 @@
Stop returning `CONNACK` or `DISCONNECT` to clients that sent malformed CONNECT packets.
- Only send `CONNACK` with reason code `frame_too_large` for MQTT v5.0 connections, and only if the protocol version field in the CONNECT packet can be detected.
- Otherwise, **do not** send any CONNACK or DISCONNECT packet.

View File

@ -0,0 +1 @@
Previously, if CRL checks were ever enabled for a listener, later disabling them via the configuration would not actually disable them until the listener restarted. This has been fixed.

View File

@ -0,0 +1,8 @@
Added a startup timeout limit for plugin applications; the timeout is currently 10 seconds.
Starting a bad plugin while EMQX is running now results in a thrown runtime error.
Previously, when EMQX was stopped and restarted, the main startup process could hang if a plugin application failed to start. This could happen, for example, after restarting with:
- a modified config file that enables a bad plugin, or
- a newly added plugin with a bad configuration.

View File

@ -0,0 +1 @@
Fixed an issue where the internal cache for Protobuf schemas in Schema Registry was not properly cleaned up after deleting or updating a schema.

changes/v5.7.2.en.md Normal file
View File

@ -0,0 +1,87 @@
## 5.7.2
*Release Date: 2024-08-06*
### Enhancements
- [#13317](https://github.com/emqx/emqx/pull/13317) Added a new per-authorization source metric type: `ignore`. This metric increments when an authorization source attempts to authorize a request but encounters scenarios where the authorization is not applicable or encounters an error, resulting in an undecidable outcome.
- [#13336](https://github.com/emqx/emqx/pull/13336) Added functionality to initialize authentication data in the built-in database of an empty EMQX node or cluster using a bootstrap file in CSV or JSON format. This feature introduces new configuration entries, `bootstrap_file` and `bootstrap_type`.
- [#13348](https://github.com/emqx/emqx/pull/13348) Added a new field `payload_encode` in the log configuration to determine the format of the payload in the log data.
- [#13436](https://github.com/emqx/emqx/pull/13436) Added the option to add custom request headers to JWKS requests.
- [#13507](https://github.com/emqx/emqx/pull/13507) Introduced a new built-in function `getenv` in the rule engine and variform expression to facilitate access to environment variables. This function adheres to the following constraints:
- Prefix `EMQXVAR_` is added before reading from OS environment variables. For example, `getenv('FOO_BAR')` reads `EMQXVAR_FOO_BAR`.
- These values are immutable once loaded from the OS environment.
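  As a quick illustration of the prefixing rule (the variable name here is hypothetical, not from the PR), the prefixed variable must be set in the EMQX node's OS environment before startup; the rule engine then reads it via the unprefixed name:

  ```shell
  # Hypothetical variable: getenv('FOO_BAR') in a rule/variform expression
  # would read the OS environment variable EMQXVAR_FOO_BAR.
  export EMQXVAR_FOO_BAR="hello"
  echo "${EMQXVAR_FOO_BAR}"
  ```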
- [#13521](https://github.com/emqx/emqx/pull/13521) Resolved an issue where LDAP query timeouts could cause the underlying connection to become unusable, potentially causing subsequent queries to return outdated results. The fix ensures the system reconnects automatically in case of a timeout.
- [#13528](https://github.com/emqx/emqx/pull/13528) Applied log throttling for the event of unrecoverable errors in data integrations.
- [#13548](https://github.com/emqx/emqx/pull/13548) EMQX can now optionally invoke the `on_config_changed/2` callback function when the plugin configuration is updated via the REST API. This callback function is assumed to be exported by the `<PluginName>_app` module.
For example, if the plugin name and version are `my_plugin-1.0.0`, then the callback function is assumed to be `my_plugin_app:on_config_changed/2`.
- [#13386](https://github.com/emqx/emqx/pull/13386) Added support for initializing a list of banned clients on an empty EMQX node or cluster with a bootstrap file in CSV format. The corresponding config entry to specify the file path is `banned.bootstrap_file`. This file is a CSV file with `,` as its delimiter. The first line of this file must be a header line. All valid headers are listed here:
- as :: required
- who :: required
- by :: optional
- reason :: optional
- at :: optional
- until :: optional
See the [Configuration Manual](https://docs.emqx.com/en/enterprise/v@EE_VERSION@/hocon/) for details on each field.
Each subsequent row must contain the same number of columns as the header line; a column may be left empty, in which case its value is `undefined`.
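  A minimal sketch of such a bootstrap file (all values hypothetical) might look like the following; note that the second row leaves every optional column empty:

  ```csv
  as,who,by,reason,at,until
  clientid,bad_client_1,admin,flooding,2024-08-06T00:00:00+08:00,2024-09-06T00:00:00+08:00
  clientid,bad_client_2,,,,
  ```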
### Bug Fixes
- [#13222](https://github.com/emqx/emqx/pull/13222) Resolved issues with flags checking and error handling associated with the Will message in the `CONNECT` packet.
For detailed specifications, refer to:
- MQTT-v3.1.1-[MQTT-3.1.2-13], MQTT-v5.0-[MQTT-3.1.2-11]
- MQTT-v3.1.1-[MQTT-3.1.2-14], MQTT-v5.0-[MQTT-3.1.2-12]
- MQTT-v3.1.1-[MQTT-3.1.2-15], MQTT-v5.0-[MQTT-3.1.2-13]
- [#13307](https://github.com/emqx/emqx/pull/13307) Updated `ekka` library to version 0.19.5. This version of `ekka` utilizes `mria` 0.8.8, enhancing auto-heal functionality. Previously, auto-heal worked only when all core nodes were reachable. This update allows auto-heal to proceed once a majority of core nodes are alive. For details, refer to the [Mria PR](https://github.com/emqx/mria/pull/180).
- [#13334](https://github.com/emqx/emqx/pull/13334) Implemented strict mode checking for the `PasswordFlag` in the MQTT v3.1.1 CONNECT packet to align with protocol specifications.
Note: To ensure bug-to-bug compatibility, this check is performed only in strict mode.
- [#13344](https://github.com/emqx/emqx/pull/13344) Resolved an issue where the `POST /clients/:clientid/subscribe/bulk` API would not function correctly if the node receiving the API request did not maintain the connection to the specified `clientid`.
- [#13358](https://github.com/emqx/emqx/pull/13358) Fixed an issue where the `reason` in the `authn_complete_event` event was displayed incorrectly.
- [#13375](https://github.com/emqx/emqx/pull/13375) The value `infinity` has been added as the default value for the listener configuration fields `max_conn_rate`, `messages_rate`, and `bytes_rate`.
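These fields live under each listener's configuration. A hedged HOCON sketch overriding the new `infinity` defaults (listener name and rate-string syntax are assumptions based on EMQX 5.x conventions; check the Configuration Manual for the exact format):
```hocon
listeners.tcp.default {
  bind = "0.0.0.0:1883"
  max_conn_rate = "1000/s"   # new-connection rate limit; defaults to infinity
  messages_rate = "500/s"    # per-client message ingress rate; defaults to infinity
  bytes_rate    = "64KB/s"   # per-client byte ingress rate; defaults to infinity
}
```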
- [#13382](https://github.com/emqx/emqx/pull/13382) Updated the `emqtt` library to version 0.4.14, which resolves an issue preventing `emqtt_pool`s from reusing pools that are in an inconsistent state.
- [#13389](https://github.com/emqx/emqx/pull/13389) Fixed an issue where the `Derived Key Length` for `pbkdf2` could be set to a negative integer.
- [#13389](https://github.com/emqx/emqx/pull/13389) Fixed an issue where topics in the authorization rules might be parsed incorrectly.
- [#13393](https://github.com/emqx/emqx/pull/13393) Fixed an issue where plugin applications failed to restart after a node joined a cluster, resulting in hooks not being properly installed and causing inconsistent states.
- [#13398](https://github.com/emqx/emqx/pull/13398) Fixed an issue where ACL rules were incorrectly cleared when reloading the built-in database for authorization using the command line.
- [#13403](https://github.com/emqx/emqx/pull/13403) Addressed a security issue where environment variable configuration overrides were inadvertently logging passwords. This fix ensures that passwords present in environment variables are not logged.
- [#13408](https://github.com/emqx/emqx/pull/13408) Resolved a `function_clause` crash triggered by authentication attempts with invalid salt or password types. This fix enhances error handling to better manage authentication failures involving incorrect salt or password types.
- [#13419](https://github.com/emqx/emqx/pull/13419) Resolved an issue where crash log messages from the `/configs` API were displaying garbled hints. This fix ensures that log messages related to API calls are clear and understandable.
- [#13422](https://github.com/emqx/emqx/pull/13422) Fixed an issue where the option `force_shutdown.max_heap_size` could not be set to 0 to disable this tuning.
- [#13442](https://github.com/emqx/emqx/pull/13442) Fixed an issue where the health check interval configuration for actions/sources was not being respected. Previously, EMQX ignored the specified health check interval for actions and used the connector's interval instead. The fix ensures that EMQX now correctly uses the health check interval configured for actions/sources, allowing for independent and accurate health monitoring frequencies.
- [#13503](https://github.com/emqx/emqx/pull/13503) Fixed an issue where connectors did not adhere to the configured health check interval upon initial startup, requiring an update or restart to apply the correct interval.
- [#13515](https://github.com/emqx/emqx/pull/13515) Fixed an issue where a client could not re-subscribe to the same exclusive topic after the node holding the subscription went down.
- [#13527](https://github.com/emqx/emqx/pull/13527) Fixed an issue in the Rule Engine where executing a SQL test for the Message Publish event would consistently return no results when a `$bridges/...` source was included in the `FROM` clause.
- [#13541](https://github.com/emqx/emqx/pull/13541) Fixed an issue where disabling CRL checks for a listener required a listener restart to take effect.
- [#13552](https://github.com/emqx/emqx/pull/13552) Added a startup timeout limit for EMQX plugins, with a default of 10 seconds. Previously, a problematic plugin could raise runtime errors during startup, potentially causing the main startup process to hang when EMQX was stopped and restarted.