Compare commits


52 Commits

Author SHA1 Message Date
github-actions[bot]
8a3940d8db Bump version to 3.3.2 on release-3.3 branch (#26550)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: reggie-k <19544836+reggie-k@users.noreply.github.com>
2026-02-22 14:29:30 +02:00
argo-cd-cherry-pick-bot[bot]
1bf62aea19 docs: instruct to enable ClientSideApplyMigration in 3.3.2 (cherry-pick #26547 for 3.3) (#26549)
Signed-off-by: reggie-k <regina.voloshin@codefresh.io>
Co-authored-by: Regina Voloshin <regina.voloshin@codefresh.io>
2026-02-22 12:50:31 +02:00
argo-cd-cherry-pick-bot[bot]
67c23193c4 fix: use csapgrade to patch managedFields for client-side apply migration (cherry-pick #26289 for 3.3) (#26516)
Signed-off-by: Peter Jiang <peterjiang823@gmail.com>
Co-authored-by: Peter Jiang <35584807+pjiang-dev@users.noreply.github.com>
2026-02-19 10:00:06 +02:00
github-actions[bot]
326a1dbd6b Bump version to 3.3.1 on release-3.3 branch (#26501)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: reggie-k <19544836+reggie-k@users.noreply.github.com>
2026-02-18 13:39:12 +02:00
argo-cd-cherry-pick-bot[bot]
d0b2a6cfd7 fix: Fix excessive ls-remote requests on monorepos with Auto Sync enabled apps (26277) (cherry-pick #26278 for 3.3) (#26372)
Signed-off-by: Eugene Doudine <eugene.doudine@octopus.com>
Co-authored-by: dudinea <eugene.doudine@octopus.com>
Co-authored-by: Dan Garfield <dan.garfield@octopus.com>
Co-authored-by: Regina Voloshin <regina.voloshin@codefresh.io>
2026-02-18 13:31:39 +02:00
argo-cd-cherry-pick-bot[bot]
e464f6ae43 fix: AppProject finalizer should consider apps in all allowed namespaces (#24347) (cherry-pick #26416 for 3.3) (#26480)
Signed-off-by: Dhruvang Makadia <dhruvang1@users.noreply.github.com>
Co-authored-by: Dhruvang Makadia <dhruvang1@users.noreply.github.com>
2026-02-17 23:15:37 -10:00
argo-cd-cherry-pick-bot[bot]
4b0a2c0ef2 chore: bumps ubuntu base docker image to 25.10 (cherry-pick #25758 for 3.3) (#26436)
Signed-off-by: Patroklos Papapetrou <ppapapetrou76@gmail.com>
Co-authored-by: Papapetrou Patroklos <1743100+ppapapetrou76@users.noreply.github.com>
2026-02-13 12:17:50 +05:30
argo-cd-cherry-pick-bot[bot]
8449d9a0f3 fix(server): OIDC config via secrets fails (#18269) (cherry-pick #26214 for 3.3) (#26423)
Signed-off-by: Valéry Fouques <48053275+BZValoche@users.noreply.github.com>
Co-authored-by: Valéry Fouques <48053275+BZValoche@users.noreply.github.com>
2026-02-13 11:40:27 +05:30
Kanika Rana
92df21cfc0 chore(appset): cherry-pick basic progressive sync e2e tests (#26092) (#26191)
Signed-off-by: krana limaDocker <krana-lima-argo@example.com>
Signed-off-by: Kanika Rana <krana@redhat.com>
Signed-off-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: krana limaDocker <krana-lima-argo@example.com>
Co-authored-by: Regina Voloshin <regina.voloshin@codefresh.io>
2026-02-12 10:33:42 -05:00
Kanika Rana
24493145a6 test(e2e): add isolation by ensuring unique name (cherry-pick #25724 for 3.3) (#26287)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Signed-off-by: Kanika Rana <krana@redhat.com>
Signed-off-by: Kanika Rana <46766610+ranakan19@users.noreply.github.com>
Signed-off-by: Oliver Gondža <ogondza@gmail.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Oliver Gondža <ogondza@gmail.com>
2026-02-12 14:31:21 +02:00
argo-cd-cherry-pick-bot[bot]
273683b647 chore: placate Sonar by ignoring testdata files (cherry-pick #26371 for 3.3) (#26377)
Signed-off-by: reggie-k <regina.voloshin@codefresh.io>
Co-authored-by: Regina Voloshin <regina.voloshin@codefresh.io>
2026-02-11 14:40:19 +02:00
argo-cd-cherry-pick-bot[bot]
a1d18559f5 test: flaky e2e tests with argocd-e2e-external ns not found - removed deletion of shared ns in e2e (cherry-pick #25731 for 3.3) (#26370)
Signed-off-by: reggie-k <regina.voloshin@codefresh.io>
Co-authored-by: Regina Voloshin <regina.voloshin@codefresh.io>
2026-02-11 12:07:41 +02:00
argo-cd-cherry-pick-bot[bot]
8df5e96981 test(e2e): CMP test fails locally on Mac (cherry-pick #25901 for 3.3) (#26340)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2026-02-11 12:06:52 +02:00
Kanika Rana
c4f0cd3e84 test(e2e): configurable tmp dir locally (#25780) (#26339)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: reggie-k <regina.voloshin@codefresh.io>
Signed-off-by: shubham singh mahar <shubhammahar1306@gmail.com>
Signed-off-by: CI <ci@argoproj.com>
Signed-off-by: Josh Soref <jsoref@gmail.com>
Signed-off-by: renovate[bot] <renovate[bot]@users.noreply.github.com>
Signed-off-by: Jakub Rudnik <jakub@rudnik.io>
Signed-off-by: ioleksiuk <ioleksiuk@users.noreply.github.com>
Signed-off-by: Illia Oleksiuk <ilya.oleksiuk@gmail.com>
Signed-off-by: Aya <ayia.hosni@gmail.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Pasha Kostohrys <pasha.kostohrys@gmail.com>
Co-authored-by: pasha <pasha.k@fyxt.com>
Co-authored-by: Regina Voloshin <regina.voloshin@codefresh.io>
Co-authored-by: Shubham Singh <shubhammahar1306@gmail.com>
Co-authored-by: shubham singh mahar <smahar@obmondo.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: CI <ci@argoproj.com>
Co-authored-by: Josh Soref <2119212+jsoref@users.noreply.github.com>
Co-authored-by: argoproj-renovate[bot] <161757507+argoproj-renovate[bot]@users.noreply.github.com>
Co-authored-by: Jakub Rudnik <jakub@rudnik.io>
Co-authored-by: Illia Oleksiuk <42911468+ioleksiuk@users.noreply.github.com>
Co-authored-by: Aya Hosni <ayia.hosni@gmail.com>
Co-authored-by: Nitish Kumar <justnitish06@gmail.com>
2026-02-11 12:05:24 +02:00
argo-cd-cherry-pick-bot[bot]
0038fce14d test(e2e): update local certs so they are valid on MacOS (cherry-pick #25864 for 3.3) (#26338)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2026-02-11 12:03:55 +02:00
argo-cd-cherry-pick-bot[bot]
6f270cc8f4 test(e2e): oras binary not found locally if not installed in path (cherry-pick #25751 for 3.3) (#26337)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2026-02-11 12:02:38 +02:00
argo-cd-cherry-pick-bot[bot]
61267982ab chore(deps): Upgrade Kustomize to 5.8.1 (cherry-pick #26367 for 3.3) (#26369)
Signed-off-by: reggie-k <regina.voloshin@codefresh.io>
Co-authored-by: Regina Voloshin <regina.voloshin@codefresh.io>
2026-02-10 11:11:54 -05:00
argo-cd-cherry-pick-bot[bot]
445916fdb0 fix: compressedLayerExtracterStore+isCompressedLayer - allow tar.gzip suffixes (cherry-pick #26355 for 3.3) (#26376)
Signed-off-by: erin liman <erin.liman@tiktokusds.com>
Co-authored-by: erin <6914822+nepeat@users.noreply.github.com>
2026-02-10 10:58:03 -05:00
argo-cd-cherry-pick-bot[bot]
54f29167a6 test(e2e): unstable CMP e2e test when running locally (cherry-pick #25752 for 3.3) (#26288)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2026-02-05 17:39:39 -05:00
argo-cd-cherry-pick-bot[bot]
55d0d09802 test(e2e): fix TestDeletionConfirmation flakiness (cherry-pick #25902 for 3.3) (#26284)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2026-02-05 23:10:55 +02:00
Kanika Rana
2502af402d test: cherry-pick - use unique app name per test (#25720) (#26246)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Signed-off-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2026-02-04 09:33:48 +02:00
github-actions[bot]
fd6b7d5b3c Bump version to 3.3.0 on release-3.3 branch (#26206)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: reggie-k <19544836+reggie-k@users.noreply.github.com>
2026-02-02 09:24:38 +02:00
Alexandre Gaudreault
f4e479e3f0 fix: consider Replace/Force sync option on live resource annotations (#26110) (#26189)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Signed-off-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-01-30 09:52:45 -05:00
github-actions[bot]
436da4e7d8 Bump version to 3.3.0-rc4 on release-3.3 branch (#26119)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2026-01-22 14:35:38 -05:00
Codey Jenkins
cd6a9aaf3f fix: cherry pick #25516 to release-3.3 (#26114)
Signed-off-by: Codey Jenkins <FourFifthsCode@users.noreply.github.com>
Signed-off-by: pbhatnagar-oss <pbhatifiwork@gmail.com>
Co-authored-by: pbhatnagar-oss <pbhatifiwork@gmail.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2026-01-22 13:34:47 -05:00
John Soutar
ac071b57a1 chore(deps): update notifications-engine to fix GitHub PR comments nil panic (cherry-pick #26065 for 3.3) (#26075)
Signed-off-by: John Soutar <john@tella.com>
2026-01-21 09:27:32 +02:00
argo-cd-cherry-pick-bot[bot]
3d64c21206 fix: invalid error message on health check failure (#26040) (cherry-pick #26039 for 3.3) (#26063)
Signed-off-by: Eugene Doudine <eugene.doudine@octopus.com>
Co-authored-by: dudinea <eugene.doudine@octopus.com>
2026-01-20 13:43:26 +02:00
argo-cd-cherry-pick-bot[bot]
0fa47b11b2 fix(hydrator): pass destination.namespace to manifest rendering (#25478) (cherry-pick #25699 for 3.3) (#26019)
Signed-off-by: Sean Liao <sean@liao.dev>
Co-authored-by: Sean Liao <seankhliao@gmail.com>
2026-01-15 16:40:33 -05:00
argo-cd-cherry-pick-bot[bot]
48a9dcc23b fix(hydrator): empty links for failed operation (#25025) (cherry-pick #26014 for 3.3) (#26018)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2026-01-15 16:26:49 -05:00
argo-cd-cherry-pick-bot[bot]
b52a0750b2 fix(hydrator): .gitattributes include deeply nested files (#25870) (cherry-pick #26011 for 3.3) (#26013)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2026-01-15 15:39:37 -05:00
argo-cd-cherry-pick-bot[bot]
8fbb44c336 fix: close response body on error paths to prevent connection leak (cherry-pick #25824 for 3.3) (#26005)
Signed-off-by: chentiewen <tiewen.chen@aminer.cn>
Co-authored-by: QingHe <634008786@qq.com>
Co-authored-by: chentiewen <tiewen.chen@aminer.cn>
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-15 14:35:13 +01:00
argo-cd-cherry-pick-bot[bot]
28e8472c69 fix: nil and empty ignoredifferences (cherry-pick #25980 for 3.3) (#26000)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
Co-authored-by: Blake Pettersson <blake.pettersson@gmail.com>
2026-01-15 11:30:38 +01:00
argo-cd-cherry-pick-bot[bot]
a6472c8393 fix: allow docker dhi helm charts to be used (cherry-pick #25835 for 3.3) (#25964)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
Co-authored-by: Blake Pettersson <blake.pettersson@gmail.com>
2026-01-13 20:00:40 +01:00
argo-cd-cherry-pick-bot[bot]
74de77a24c fix: Toggle automated.enabled to disable auto-sync for rollbacks (cherry-pick #25719 for 3.3) (#25943)
Signed-off-by: Daniel Moran <danxmoran@gmail.com>
Co-authored-by: Daniel Moran <danxmoran@gmail.com>
2026-01-12 18:06:27 -05:00
argo-cd-cherry-pick-bot[bot]
15568cb9d5 fix(appset): do not trigger reconciliation on appsets not part of allowed namespaces when updating a cluster secret (cherry-pick #25622 for 3.3) (#25909)
Signed-off-by: OpenGuidou <guillaume.doussin@gmail.com>
Co-authored-by: OpenGuidou <73480729+OpenGuidou@users.noreply.github.com>
2026-01-09 16:18:34 +01:00
argo-cd-cherry-pick-bot[bot]
32c32a67cb fix: Only show please update resource specification message when spec… (cherry-pick #25066 for 3.3) (#25894)
Signed-off-by: Josh Soref <jsoref@gmail.com>
Co-authored-by: Josh Soref <2119212+jsoref@users.noreply.github.com>
2026-01-07 10:11:03 -05:00
Nitish Kumar
675f8cfe3f chore(cherry-pick-3.3): bump expr to v1.17.7 (#25888)
Signed-off-by: nitishfy <justnitish06@gmail.com>
2026-01-07 13:31:03 +02:00
argo-cd-cherry-pick-bot[bot]
9ae26e4e74 ci: test against k8s 1.34.2 (cherry-pick #25856 for 3.3) (#25858)
Signed-off-by: reggie-k <regina.voloshin@codefresh.io>
Co-authored-by: Regina Voloshin <regina.voloshin@codefresh.io>
2026-01-05 17:30:41 +02:00
argo-cd-cherry-pick-bot[bot]
369fb7577e chore(deps): update notifications-engine to v0.5.1-0.20251223091026-8c0c96d8d530 (cherry-pick #25785 for 3.3) (#25853)
Co-authored-by: Pasha Kostohrys <pasha.kostohrys@gmail.com>
Co-authored-by: pasha <pasha.k@fyxt.com>
2026-01-05 11:08:19 +02:00
argo-cd-cherry-pick-bot[bot]
efca5b9144 chore: bumps golang version everywhere to the latest 1.25.5 (cherry-pick #25716 for 3.3) (#25808)
Signed-off-by: Patroklos Papapetrou <ppapapetrou76@gmail.com>
Signed-off-by: reggie-k <regina.voloshin@codefresh.io>
Co-authored-by: Papapetrou Patroklos <1743100+ppapapetrou76@users.noreply.github.com>
Co-authored-by: reggie-k <regina.voloshin@codefresh.io>
2026-01-04 13:57:55 +02:00
argo-cd-cherry-pick-bot[bot]
2c3bc6f991 test: fix flaky create repository test by resyncing informers (cherry-pick #25706 for 3.3) (#25794)
Signed-off-by: reggie-k <regina.voloshin@codefresh.io>
Co-authored-by: Regina Voloshin <regina.voloshin@codefresh.io>
2025-12-24 17:21:21 +02:00
Regina Voloshin
8639b7be5e docs: added Helm 3.19.4 upgrade to the upgrade guide (#25776)
Signed-off-by: Regina Voloshin <regina.voloshin@codefresh.io>
2025-12-22 12:28:43 +02:00
argo-cd-cherry-pick-bot[bot]
5de1e6472d chore(deps): update to helm 3.19.4 due to cve : https://github.com/helm/helm/releases/tag/v3.19.4 (cherry-pick #25769 for 3.3) (#25774)
Signed-off-by: Bryan Stenson <bryan@siliconvortex.com>
Co-authored-by: Bryan Stenson <bryan.stenson@gmail.com>
2025-12-22 12:16:17 +02:00
github-actions[bot]
51b595b1ee Bump version to 3.3.0-rc3 on release-3.3 branch (#25743)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-12-18 18:03:58 -05:00
argo-cd-cherry-pick-bot[bot]
fe0466de51 fix(hydrator): git fetch needs creds (#25727) (cherry-pick #25738 for 3.3) (#25742)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-12-18 17:39:16 -05:00
argo-cd-cherry-pick-bot[bot]
05b416906e fix(metrics): more consistent oci metrics (cherry-pick #25549 for 3.3) (#25728)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
Co-authored-by: Blake Pettersson <blake.pettersson@gmail.com>
2025-12-18 07:13:13 +02:00
argo-cd-cherry-pick-bot[bot]
20604f1b21 fix(server): update resourceVersion on Terminate retry (cherry-pick #25650 for 3.3) (#25717)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-12-18 00:43:50 +01:00
github-actions[bot]
fd2d0adae9 Bump version to 3.3.0-rc2 on release-3.3 branch (#25712)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-12-17 12:55:58 -05:00
argo-cd-cherry-pick-bot[bot]
708c63683c fix(hydrator): race when pushing notes (cherry-pick #25700 for 3.3) (#25709)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-12-17 11:30:44 -05:00
argo-cd-cherry-pick-bot[bot]
393cb97042 fix(hydrator): hydrated sha missing on no-ops (#25694) (cherry-pick #25695 for 3.3) (#25697)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-12-17 10:31:28 -05:00
argo-cd-cherry-pick-bot[bot]
99434863c9 chore(deps): update module k8s.io/kubernetes to v1.34.2 [security] (cherry-pick #25682 for 3.3) (#25683)
Signed-off-by: reggie-k <regina.voloshin@codefresh.io>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: reggie-k <regina.voloshin@codefresh.io>
2025-12-16 10:47:00 +02:00
github-actions[bot]
814db444c3 Bump version to 3.3.0-rc1 on release-3.3 branch (#25667)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: reggie-k <19544836+reggie-k@users.noreply.github.com>
2025-12-15 18:47:04 +02:00
267 changed files with 20751 additions and 24851 deletions


@@ -41,7 +41,6 @@ Target GA date: ___. __, ____
Thanks to all the folks who spent their time contributing to this release in any way possible!
```
- [ ] Monitor support channels for issues, cherry-picking bugfixes and docs fixes as appropriate during the RC period (or delegate this task to an Approver and coordinate timing)
-- [ ] After creating the RC, open a documentation PR for the next minor version using [this](../../docs/operator-manual/templates/minor_version_upgrade.md) template.
## GA Release Checklist


@@ -10,7 +10,7 @@ jobs:
contents: write # for peter-evans/create-pull-request to create branch
pull-requests: write # for peter-evans/create-pull-request to create a PR
name: Automatically update major version
-runs-on: ubuntu-24.04
+runs-on: ubuntu-22.04
steps:
- name: Checkout code
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
@@ -37,7 +37,7 @@ jobs:
working-directory: /home/runner/go/src/github.com/argoproj/argo-cd
- name: Setup Golang
-uses: actions/setup-go@7a3fe6cf4cb3a834922a1244abfce67bcef6a0c5 # v6.2.0
+uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Add ~/go/bin to PATH
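
The hunks above swap the commit SHA that each action is pinned to. Pinning third-party actions to a full, immutable commit SHA (with the release tag kept in a trailing comment) is the convention these workflows follow; a minimal sketch of the pattern, reusing pins that appear in these hunks:

```yaml
steps:
  # Pin to an immutable commit SHA rather than a mutable tag; the trailing
  # comment records the human-readable release for reviewers and for
  # automated dependency-update tooling.
  - name: Checkout code
    uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
  - name: Setup Golang
    uses: actions/setup-go@7a3fe6cf4cb3a834922a1244abfce67bcef6a0c5 # v6.2.0
    with:
      go-version: '1.25.6'
```

A tag like `v6.2.0` can be force-moved to different code after the fact; a 40-character SHA cannot, which is why version bumps show up in these diffs as SHA changes rather than tag changes.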


@@ -28,7 +28,7 @@ on:
jobs:
cherry-pick:
name: Cherry Pick to ${{ inputs.version_number }}
-runs-on: ubuntu-24.04
+runs-on: ubuntu-latest
steps:
- name: Generate a token
id: generate-token
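
This hunk (and several below) toggles between a pinned runner image and the floating `ubuntu-latest` alias; the two only diverge when GitHub retargets the alias to a newer image. A sketch of both forms, assuming a trivial step just to make the jobs runnable:

```yaml
jobs:
  pinned:
    runs-on: ubuntu-24.04   # fixed image; OS upgrades are explicit, reviewable changes
    steps:
      - run: lsb_release -a
  floating:
    runs-on: ubuntu-latest  # tracks whatever GitHub currently maps "latest" to
    steps:
      - run: lsb_release -a
```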


@@ -14,7 +14,7 @@ jobs:
(github.event.action == 'labeled' && startsWith(github.event.label.name, 'cherry-pick/')) ||
(github.event.action == 'closed' && contains(toJSON(github.event.pull_request.labels.*.name), 'cherry-pick/'))
)
-runs-on: ubuntu-24.04
+runs-on: ubuntu-latest
outputs:
labels: ${{ steps.extract-labels.outputs.labels }}
steps:


@@ -14,7 +14,7 @@ on:
env:
# Golang version to use across CI steps
# renovate: datasource=golang-version packageName=golang
-GOLANG_VERSION: '1.25.6'
+GOLANG_VERSION: '1.25.5'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
@@ -25,7 +25,7 @@ permissions:
jobs:
changes:
-runs-on: ubuntu-24.04
+runs-on: ubuntu-latest
outputs:
backend: ${{ steps.filter.outputs.backend_any_changed }}
frontend: ${{ steps.filter.outputs.frontend_any_changed }}
@@ -50,14 +50,14 @@ jobs:
check-go:
name: Ensure Go modules synchronicity
if: ${{ needs.changes.outputs.backend == 'true' }}
-runs-on: ubuntu-24.04
+runs-on: ubuntu-22.04
needs:
- changes
steps:
- name: Checkout code
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Setup Golang
-uses: actions/setup-go@7a3fe6cf4cb3a834922a1244abfce67bcef6a0c5 # v6.2.0
+uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Download all Go modules
@@ -70,18 +70,18 @@ jobs:
build-go:
name: Build & cache Go code
if: ${{ needs.changes.outputs.backend == 'true' }}
-runs-on: ubuntu-24.04
+runs-on: ubuntu-22.04
needs:
- changes
steps:
- name: Checkout code
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Setup Golang
-uses: actions/setup-go@7a3fe6cf4cb3a834922a1244abfce67bcef6a0c5 # v6.2.0
+uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Restore go build cache
-uses: actions/cache@8b402f58fbc84540c8b491a91e594a4576fec3d7 # v5.0.2
+uses: actions/cache@a7833574556fa59680c1b7cb190c1735db73ebf0 # v5.0.0
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
@@ -97,14 +97,14 @@ jobs:
pull-requests: read # for golangci/golangci-lint-action to fetch pull requests
name: Lint Go code
if: ${{ needs.changes.outputs.backend == 'true' }}
-runs-on: ubuntu-24.04
+runs-on: ubuntu-22.04
needs:
- changes
steps:
- name: Checkout code
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Setup Golang
-uses: actions/setup-go@7a3fe6cf4cb3a834922a1244abfce67bcef6a0c5 # v6.2.0
+uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Run golangci-lint
@@ -117,7 +117,7 @@ jobs:
test-go:
name: Run unit tests for Go packages
if: ${{ needs.changes.outputs.backend == 'true' }}
-runs-on: ubuntu-24.04
+runs-on: ubuntu-22.04
needs:
- build-go
- changes
@@ -132,7 +132,7 @@ jobs:
- name: Create symlink in GOPATH
run: ln -s $(pwd) ~/go/src/github.com/argoproj/argo-cd
- name: Setup Golang
-uses: actions/setup-go@7a3fe6cf4cb3a834922a1244abfce67bcef6a0c5 # v6.2.0
+uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Install required packages
@@ -152,7 +152,7 @@ jobs:
run: |
echo "/usr/local/bin" >> $GITHUB_PATH
- name: Restore go build cache
-uses: actions/cache@8b402f58fbc84540c8b491a91e594a4576fec3d7 # v5.0.2
+uses: actions/cache@a7833574556fa59680c1b7cb190c1735db73ebf0 # v5.0.0
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
@@ -173,7 +173,7 @@ jobs:
- name: Run all unit tests
run: make test-local
- name: Generate test results artifacts
-uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
+uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: test-results
path: test-results
@@ -181,7 +181,7 @@ jobs:
test-go-race:
name: Run unit tests with -race for Go packages
if: ${{ needs.changes.outputs.backend == 'true' }}
-runs-on: ubuntu-24.04
+runs-on: ubuntu-22.04
needs:
- build-go
- changes
@@ -196,7 +196,7 @@ jobs:
- name: Create symlink in GOPATH
run: ln -s $(pwd) ~/go/src/github.com/argoproj/argo-cd
- name: Setup Golang
-uses: actions/setup-go@7a3fe6cf4cb3a834922a1244abfce67bcef6a0c5 # v6.2.0
+uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Install required packages
@@ -216,7 +216,7 @@ jobs:
run: |
echo "/usr/local/bin" >> $GITHUB_PATH
- name: Restore go build cache
-uses: actions/cache@8b402f58fbc84540c8b491a91e594a4576fec3d7 # v5.0.2
+uses: actions/cache@a7833574556fa59680c1b7cb190c1735db73ebf0 # v5.0.0
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
@@ -237,7 +237,7 @@ jobs:
- name: Run all unit tests
run: make test-race-local
- name: Generate test results artifacts
-uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
+uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: race-results
path: test-results/
@@ -245,14 +245,14 @@ jobs:
codegen:
name: Check changes to generated code
if: ${{ needs.changes.outputs.backend == 'true' || needs.changes.outputs.docs == 'true'}}
-runs-on: ubuntu-24.04
+runs-on: ubuntu-22.04
needs:
- changes
steps:
- name: Checkout code
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Setup Golang
-uses: actions/setup-go@7a3fe6cf4cb3a834922a1244abfce67bcef6a0c5 # v6.2.0
+uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Create symlink in GOPATH
@@ -302,20 +302,20 @@ jobs:
name: Build, test & lint UI code
# We run UI logic for backend changes so that we have a complete set of coverage documents to send to codecov.
if: ${{ needs.changes.outputs.backend == 'true' || needs.changes.outputs.frontend == 'true' }}
-runs-on: ubuntu-24.04
+runs-on: ubuntu-22.04
needs:
- changes
steps:
- name: Checkout code
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Setup NodeJS
-uses: actions/setup-node@6044e13b5dc448c55e2357c09f80417699197238 # v6.2.0
+uses: actions/setup-node@395ad3262231945c25e8478fd5baf05154b1d79f # v6.1.0
with:
# renovate: datasource=node-version packageName=node versioning=node
node-version: '22.9.0'
- name: Restore node dependency cache
id: cache-dependencies
-uses: actions/cache@8b402f58fbc84540c8b491a91e594a4576fec3d7 # v5.0.2
+uses: actions/cache@a7833574556fa59680c1b7cb190c1735db73ebf0 # v5.0.0
with:
path: ui/node_modules
key: ${{ runner.os }}-node-dep-v2-${{ hashFiles('**/yarn.lock') }}
@@ -338,7 +338,7 @@ jobs:
working-directory: ui/
shellcheck:
-runs-on: ubuntu-24.04
+runs-on: ubuntu-latest
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- run: |
@@ -349,7 +349,7 @@ jobs:
analyze:
name: Process & analyze test artifacts
if: ${{ needs.changes.outputs.backend == 'true' || needs.changes.outputs.frontend == 'true' }}
-runs-on: ubuntu-24.04
+runs-on: ubuntu-22.04
needs:
- test-go
- build-ui
@@ -357,7 +357,6 @@ jobs:
- test-e2e
env:
sonar_secret: ${{ secrets.SONAR_TOKEN }}
-codecov_secret: ${{ secrets.CODECOV_TOKEN }}
steps:
- name: Checkout code
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
@@ -365,27 +364,23 @@ jobs:
fetch-depth: 0
- name: Restore node dependency cache
id: cache-dependencies
-uses: actions/cache@8b402f58fbc84540c8b491a91e594a4576fec3d7 # v5.0.2
+uses: actions/cache@a7833574556fa59680c1b7cb190c1735db73ebf0 # v5.0.0
with:
path: ui/node_modules
key: ${{ runner.os }}-node-dep-v2-${{ hashFiles('**/yarn.lock') }}
-if: env.codecov_secret != ''
- name: Remove other node_modules directory
run: |
rm -rf ui/node_modules/argo-ui/node_modules
-if: env.codecov_secret != ''
- name: Get e2e code coverage
-uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
+uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
with:
name: e2e-code-coverage
path: e2e-code-coverage
-if: env.codecov_secret != ''
- name: Get unit test code coverage
-uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
+uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
with:
name: test-results
path: test-results
-if: env.codecov_secret != ''
- name: combine-go-coverage
# We generate coverage reports for all Argo CD components, but only the applicationset-controller,
# app-controller, repo-server, and commit-server report contain coverage data. The other components currently
@@ -393,7 +388,6 @@ jobs:
# references to their coverage output directories.
run: |
go tool covdata percent -i=test-results,e2e-code-coverage/applicationset-controller,e2e-code-coverage/repo-server,e2e-code-coverage/app-controller,e2e-code-coverage/commit-server -o test-results/full-coverage.out
-if: env.codecov_secret != ''
- name: Upload code coverage information to codecov.io
uses: codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de # v5.5.2
with:
@@ -401,16 +395,13 @@ jobs:
fail_ci_if_error: true
env:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
-if: env.codecov_secret != ''
- name: Upload test results to Codecov
# Codecov uploads test results to Codecov.io on upstream master branch and on fork master branch if the token is configured.
if: env.codecov_secret != '' && github.ref == 'refs/heads/master' && github.event_name == 'push'
uses: codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de # v5.5.2
if: github.ref == 'refs/heads/master' && github.event_name == 'push' && github.repository == 'argoproj/argo-cd'
uses: codecov/test-results-action@47f89e9acb64b76debcd5ea40642d25a4adced9f # v1.1.1
with:
-files: test-results/junit.xml
+file: test-results/junit.xml
fail_ci_if_error: true
token: ${{ secrets.CODECOV_TOKEN }}
report_type: test_results
- name: Perform static code analysis using SonarCloud
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
@@ -420,7 +411,7 @@ jobs:
test-e2e:
name: Run end-to-end tests
if: ${{ needs.changes.outputs.backend == 'true' }}
runs-on: ${{ github.repository == 'argoproj/argo-cd' && 'oracle-vm-16cpu-64gb-x86-64' || 'ubuntu-24.04' }}
runs-on: ${{ github.repository == 'argoproj/argo-cd' && 'oracle-vm-16cpu-64gb-x86-64' || 'ubuntu-22.04' }}
strategy:
fail-fast: false
matrix:
@@ -458,7 +449,7 @@ jobs:
- name: Checkout code
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Setup Golang
uses: actions/setup-go@7a3fe6cf4cb3a834922a1244abfce67bcef6a0c5 # v6.2.0
uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Set GOPATH
@@ -480,7 +471,7 @@ jobs:
sudo chmod go-r $HOME/.kube/config
kubectl version
- name: Restore go build cache
uses: actions/cache@8b402f58fbc84540c8b491a91e594a4576fec3d7 # v5.0.2
uses: actions/cache@a7833574556fa59680c1b7cb190c1735db73ebf0 # v5.0.0
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
@@ -538,13 +529,13 @@ jobs:
goreman run stop-all || echo "goreman trouble"
sleep 30
- name: Upload e2e coverage report
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: e2e-code-coverage
path: /tmp/coverage
if: ${{ matrix.k3s.latest }}
- name: Upload e2e-server logs
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: e2e-server-k8s${{ matrix.k3s.version }}.log
path: /tmp/e2e-server.log
@@ -562,7 +553,7 @@ jobs:
needs:
- test-e2e
- changes
-runs-on: ubuntu-24.04
+runs-on: ubuntu-22.04
steps:
- run: |
result="${{ needs.test-e2e.result }}"
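
Several lines in the `analyze` job above implement the same guard: the workflow copies a secret into a job-level env variable (`codecov_secret: ${{ secrets.CODECOV_TOKEN }}`) and then each dependent step checks `if: env.codecov_secret != ''`. The indirection is needed because the `secrets` context is not available in step-level `if:` expressions, while `env` is; it also lets forks without the secret skip those steps cleanly. A minimal sketch of the pattern (the step body is a placeholder):

```yaml
jobs:
  analyze:
    runs-on: ubuntu-24.04
    env:
      # Expose the secret to if: expressions; empty string when unset (e.g. on forks).
      codecov_secret: ${{ secrets.CODECOV_TOKEN }}
    steps:
      - name: Upload coverage
        if: env.codecov_secret != ''   # silently skipped when the secret is absent
        run: echo "upload would run here"
```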


@@ -26,14 +26,14 @@ jobs:
if: github.repository == 'argoproj/argo-cd' || vars.enable_codeql
# CodeQL runs on ubuntu-latest and windows-latest
-runs-on: ubuntu-24.04
+runs-on: ubuntu-22.04
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
# Use correct go version. https://github.com/github/codeql-action/issues/1842#issuecomment-1704398087
- name: Setup Golang
-uses: actions/setup-go@7a3fe6cf4cb3a834922a1244abfce67bcef6a0c5 # v6.2.0
+uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
go-version-file: go.mod


@@ -51,8 +51,8 @@ jobs:
contents: read
packages: write # Used to push images to `ghcr.io` if used.
id-token: write # Needed to create an OIDC token for keyless signing
-runs-on: ubuntu-24.04
-outputs:
+runs-on: ubuntu-22.04
+outputs:
image-digest: ${{ steps.image.outputs.digest }}
steps:
- name: Checkout code
@@ -67,7 +67,7 @@ jobs:
if: ${{ github.ref_type != 'tag'}}
- name: Setup Golang
-uses: actions/setup-go@7a3fe6cf4cb3a834922a1244abfce67bcef6a0c5 # v6.2.0
+uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
go-version: ${{ inputs.go-version }}
cache: false
@@ -76,7 +76,7 @@ jobs:
uses: sigstore/cosign-installer@faadad0cce49287aee09b3a48701e75088a2c6ad # v4.0.0
- uses: docker/setup-qemu-action@29109295f81e9208d7d86ff1c6c12d2833863392 # v3.6.0
-uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3.12.0
+uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
- name: Setup tags for container image as a CSV type
run: |

View File

@@ -20,7 +20,7 @@ jobs:
permissions:
contents: read
# Always run to calculate variables - other jobs check outputs
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
outputs:
image-tag: ${{ steps.image.outputs.tag}}
platforms: ${{ steps.platforms.outputs.platforms }}
@@ -86,7 +86,7 @@ jobs:
with:
# Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
# renovate: datasource=golang-version packageName=golang
go-version: 1.25.6
go-version: 1.25.5
platforms: ${{ needs.set-vars.outputs.platforms }}
push: false
@@ -103,7 +103,7 @@ jobs:
ghcr_image_name: ${{ needs.set-vars.outputs.ghcr_image_name }}
# Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
# renovate: datasource=golang-version packageName=golang
go-version: 1.25.6
go-version: 1.25.5
platforms: ${{ needs.set-vars.outputs.platforms }}
push: true
secrets:
@@ -138,7 +138,7 @@ jobs:
contents: write # for git to push upgrade commit if not already deployed
packages: write # for pushing packages to GHCR, which is used by cd.apps.argoproj.io to avoid polluting Quay with tags
if: ${{ github.repository == 'argoproj/argo-cd' && github.event_name == 'push' }}
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- run: git clone "https://$TOKEN@github.com/argoproj/argoproj-deployments"

View File

@@ -20,7 +20,7 @@ jobs:
contents: write # for peter-evans/create-pull-request to create branch
pull-requests: write # for peter-evans/create-pull-request to create a PR
name: Automatically generate version and manifests on ${{ inputs.TARGET_BRANCH }}
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
env:
# Calculate image names with defaults, this will be used in the make manifests-local command
# to generate the correct image name in the manifests

View File

@@ -21,7 +21,7 @@ jobs:
contents: read
pull-requests: read
name: Validate PR Title
runs-on: ubuntu-24.04
runs-on: ubuntu-latest
steps:
- uses: thehanimo/pr-title-checker@7fbfe05602bdd86f926d3fb3bccb6f3aed43bc70 # v1.4.3
with:

View File

@@ -11,7 +11,7 @@ permissions: {}
env:
# renovate: datasource=golang-version packageName=golang
GOLANG_VERSION: '1.25.6' # Note: go-version must also be set in job argocd-image.with.go-version
GOLANG_VERSION: '1.25.5' # Note: go-version must also be set in job argocd-image.with.go-version
jobs:
argocd-image:
@@ -26,7 +26,7 @@ jobs:
quay_image_name: ${{ needs.setup-variables.outputs.quay_image_name }}
# Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
# renovate: datasource=golang-version packageName=golang
go-version: 1.25.6
go-version: 1.25.5
platforms: linux/amd64,linux/arm64,linux/s390x,linux/ppc64le
push: true
secrets:
@@ -36,7 +36,7 @@ jobs:
setup-variables:
name: Setup Release Variables
if: github.repository == 'argoproj/argo-cd' || (github.repository_owner != 'argoproj' && vars.ENABLE_FORK_RELEASES == 'true' && vars.IMAGE_NAMESPACE && vars.IMAGE_NAMESPACE != 'argoproj')
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
outputs:
is_pre_release: ${{ steps.var.outputs.is_pre_release }}
is_latest_release: ${{ steps.var.outputs.is_latest_release }}
@@ -117,7 +117,7 @@ jobs:
permissions:
contents: write # used for uploading assets
if: github.repository == 'argoproj/argo-cd' || needs.setup-variables.outputs.allow_fork_release == 'true'
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
env:
GORELEASER_MAKE_LATEST: ${{ needs.setup-variables.outputs.is_latest_release }}
outputs:
@@ -133,7 +133,7 @@ jobs:
run: git fetch --force --tags
- name: Setup Golang
uses: actions/setup-go@7a3fe6cf4cb3a834922a1244abfce67bcef6a0c5 # v6.2.0
uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
go-version: ${{ env.GOLANG_VERSION }}
cache: false
@@ -210,7 +210,7 @@ jobs:
outputs:
hashes: ${{ steps.sbom-hash.outputs.hashes }}
if: github.repository == 'argoproj/argo-cd' || needs.setup-variables.outputs.allow_fork_release == 'true'
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
steps:
- name: Checkout code
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
@@ -219,7 +219,7 @@ jobs:
token: ${{ secrets.GITHUB_TOKEN }}
- name: Setup Golang
uses: actions/setup-go@7a3fe6cf4cb3a834922a1244abfce67bcef6a0c5 # v6.2.0
uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
go-version: ${{ env.GOLANG_VERSION }}
cache: false
@@ -295,7 +295,7 @@ jobs:
contents: write # Needed to push commit to update stable tag
pull-requests: write # Needed to create PR for VERSION update.
if: github.repository == 'argoproj/argo-cd' || needs.setup-variables.outputs.allow_fork_release == 'true'
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
env:
TAG_STABLE: ${{ needs.setup-variables.outputs.is_latest_release }}
steps:

View File

@@ -9,7 +9,7 @@ permissions:
jobs:
renovate:
runs-on: ubuntu-24.04
runs-on: ubuntu-latest
if: github.repository == 'argoproj/argo-cd'
steps:
- name: Get token
@@ -24,13 +24,13 @@ jobs:
# Some codegen commands require Go to be setup
- name: Setup Golang
uses: actions/setup-go@7a3fe6cf4cb3a834922a1244abfce67bcef6a0c5 # v6.2.0
uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
# renovate: datasource=golang-version packageName=golang
go-version: 1.25.6
go-version: 1.25.5
- name: Self-hosted Renovate
uses: renovatebot/github-action@66387ab8c2464d575b933fa44e9e5a86b2822809 #44.2.4
uses: renovatebot/github-action@5712c6a41dea6cdf32c72d92a763bd417e6606aa #44.0.5
with:
configurationFile: .github/configs/renovate-config.js
token: '${{ steps.get_token.outputs.token }}'

View File

@@ -17,7 +17,7 @@ permissions: read-all
jobs:
analysis:
name: Scorecards analysis
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
permissions:
# Needed to upload the results to code-scanning dashboard.
security-events: write
@@ -54,7 +54,7 @@ jobs:
# Upload the results as artifacts (optional). Commenting out will disable uploads of run results in SARIF
# format to the repository Actions tab.
- name: "Upload artifact"
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: SARIF file
path: results.sarif

View File

@@ -14,7 +14,7 @@ jobs:
pull-requests: write
if: github.repository == 'argoproj/argo-cd'
name: Update Snyk report in the docs directory
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
steps:
- name: Checkout code
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1

View File

@@ -4,7 +4,7 @@ ARG BASE_IMAGE=docker.io/library/ubuntu:25.10@sha256:5922638447b1e3ba114332c896a
# Initial stage which pulls prepares build dependencies and CLI tooling we need for our final image
# Also used as the image in CI jobs so needs all dependencies
####################################################################################################
FROM docker.io/library/golang:1.25.6@sha256:fc24d3881a021e7b968a4610fc024fba749f98fe5c07d4f28e6cfa14dc65a84c AS builder
FROM docker.io/library/golang:1.25.5@sha256:31c1e53dfc1cc2d269deec9c83f58729fa3c53dc9a576f6426109d1e319e9e9a AS builder
WORKDIR /tmp
@@ -50,10 +50,10 @@ RUN groupadd -g $ARGOCD_USER_ID argocd && \
chmod g=u /home/argocd && \
apt-get update && \
apt-get dist-upgrade -y && \
apt-get install --no-install-recommends -y \
git git-lfs tini ca-certificates gpg gpg-agent tzdata connect-proxy openssh-client && \
apt-get install -y \
git git-lfs tini gpg tzdata connect-proxy && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* /usr/share/doc/*
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
COPY hack/gpg-wrapper.sh \
hack/git-verify-wrapper.sh \
@@ -103,7 +103,7 @@ RUN HOST_ARCH=$TARGETARCH NODE_ENV='production' NODE_ONLINE_ENV='online' NODE_OP
####################################################################################################
# Argo CD Build stage which performs the actual build of Argo CD binaries
####################################################################################################
FROM --platform=$BUILDPLATFORM docker.io/library/golang:1.25.6@sha256:fc24d3881a021e7b968a4610fc024fba749f98fe5c07d4f28e6cfa14dc65a84c AS argocd-build
FROM --platform=$BUILDPLATFORM docker.io/library/golang:1.25.5@sha256:31c1e53dfc1cc2d269deec9c83f58729fa3c53dc9a576f6426109d1e319e9e9a AS argocd-build
WORKDIR /go/src/github.com/argoproj/argo-cd

View File

@@ -1,4 +1,4 @@
FROM docker.io/library/golang:1.25.6@sha256:fc24d3881a021e7b968a4610fc024fba749f98fe5c07d4f28e6cfa14dc65a84c
FROM docker.io/library/golang:1.25.5@sha256:31c1e53dfc1cc2d269deec9c83f58729fa3c53dc9a576f6426109d1e319e9e9a
ENV DEBIAN_FRONTEND=noninteractive

View File

@@ -1,39 +0,0 @@
# Argo CD Maintainers
This document lists the maintainers of the Argo CD project.
## Maintainers
| Maintainer | GitHub ID | Project Roles | Affiliation |
|---------------------------|---------------------------------------------------------|----------------------|-------------------------------------------------|
| Zach Aller | [zachaller](https://github.com/zachaller) | Reviewer | [Intuit](https://www.github.com/intuit/) |
| Leonardo Luz Almeida | [leoluz](https://github.com/leoluz) | Approver | [Intuit](https://www.github.com/intuit/) |
| Chetan Banavikalmutt | [chetan-rns](https://github.com/chetan-rns) | Reviewer | [Red Hat](https://redhat.com/) |
| Keith Chong | [keithchong](https://github.com/keithchong) | Approver | [Red Hat](https://redhat.com/) |
| Alex Collins | [alexec](https://github.com/alexec) | Approver | [Intuit](https://www.github.com/intuit/) |
| Michael Crenshaw | [crenshaw-dev](https://github.com/crenshaw-dev) | Lead | [Intuit](https://www.github.com/intuit/) |
| Soumya Ghosh Dastidar | [gdsoumya](https://github.com/gdsoumya) | Approver | [Akuity](https://akuity.io/) |
| Eugene Doudine | [dudinea](https://github.com/dudinea) | Reviewer | [Octopus Deploy](https://octopus.com/) |
| Jann Fischer | [jannfis](https://github.com/jannfis) | Approver | [Red Hat](https://redhat.com/) |
| Dan Garfield | [todaywasawesome](https://github.com/todaywasawesome) | Approver(docs) | [Octopus Deploy](https://octopus.com/) |
| Alexandre Gaudreault | [agaudreault](https://github.com/agaudreault) | Approver | [Intuit](https://www.github.com/intuit/) |
| Christian Hernandez | [christianh814](https://github.com/christianh814) | Reviewer(docs) | [Akuity](https://akuity.io/) |
| Peter Jiang | [pjiang](https://github.com/pjiang) | Reviewer | [Intuit](https://www.intuit.com/) |
| Andrii Korotkov | [andrii-korotkov](https://github.com/andrii-korotkov) | Reviewer | [Verkada](https://www.verkada.com/) |
| Pasha Kostohrys | [pasha-codefresh](https://github.com/pasha-codefresh) | Approver | [Codefresh](https://www.github.com/codefresh/) |
| Nitish Kumar | [nitishfy](https://github.com/nitishfy) | Approver(cli,docs) | [Akuity](https://akuity.io/) |
| Justin Marquis | [34fathombelow](https://github.com/34fathombelow) | Approver(docs/ci) | [Akuity](https://akuity.io/) |
| Alexander Matyushentsev | [alexmt](https://github.com/alexmt) | Lead | [Akuity](https://akuity.io/) |
| Nicholas Morey | [morey-tech](https://github.com/morey-tech) | Reviewer(docs) | [Akuity](https://akuity.io/) |
| Papapetrou Patroklos | [ppapapetrou76](https://github.com/ppapapetrou76) | Reviewer | [Octopus Deploy](https://octopus.com/) |
| Blake Pettersson | [blakepettersson](https://github.com/blakepettersson) | Approver | [Akuity](https://akuity.io/) |
| Ishita Sequeira | [ishitasequeira](https://github.com/ishitasequeira) | Approver | [Red Hat](https://redhat.com/) |
| Ashutosh Singh | [ashutosh16](https://github.com/ashutosh16) | Approver(docs) | [Intuit](https://www.github.com/intuit/) |
| Linghao Su | [linghaoSu](https://github.com/linghaoSu) | Reviewer | [DaoCloud](https://daocloud.io) |
| Jesse Suen | [jessesuen](https://github.com/jessesuen) | Approver | [Akuity](https://akuity.io/) |
| Yuan Tang | [terrytangyuan](https://github.com/terrytangyuan) | Reviewer | [Red Hat](https://redhat.com/) |
| William Tam | [wtam2018](https://github.com/wtam2018) | Reviewer | [Red Hat](https://redhat.com/) |
| Ryan Umstead | [rumstead](https://github.com/rumstead) | Approver | [Black Rock](https://www.github.com/blackrock/) |
| Regina Voloshin | [reggie-k](https://github.com/reggie-k) | Approver | [Octopus Deploy](https://octopus.com/) |
| Hong Wang | [wanghong230](https://github.com/wanghong230) | Reviewer | [Akuity](https://akuity.io/) |
| Jonathan West | [jgwest](https://github.com/jgwest) | Approver | [Red Hat](https://redhat.com/) |

View File

@@ -85,6 +85,8 @@ ARGOCD_IN_CI?=false
ARGOCD_TEST_E2E?=true
ARGOCD_BIN_MODE?=true
ARGOCD_LINT_GOGC?=20
# Depending on where we are (legacy or non-legacy pwd), we need to use
# different Docker volume mounts for our source tree
LEGACY_PATH=$(GOPATH)/src/github.com/argoproj/argo-cd
@@ -144,6 +146,7 @@ define run-in-test-client
-e ARGOCD_E2E_K3S=$(ARGOCD_E2E_K3S) \
-e GITHUB_TOKEN \
-e GOCACHE=/tmp/go-build-cache \
-e ARGOCD_LINT_GOGC=$(ARGOCD_LINT_GOGC) \
-v ${DOCKER_SRC_MOUNT} \
-v ${GOPATH}/pkg/mod:/go/pkg/mod${VOLUME_MOUNT} \
-v ${GOCACHE}:/tmp/go-build-cache${VOLUME_MOUNT} \
@@ -202,35 +205,18 @@ else
IMAGE_TAG?=latest
endif
# defaults for building images and manifests
ifeq (${DOCKER_PUSH},true)
ifndef IMAGE_NAMESPACE
$(error IMAGE_NAMESPACE must be set to push images (e.g. IMAGE_NAMESPACE=argoproj))
endif
endif
# Construct prefix for docker image
# Note: keeping same logic as in hacks/update_manifests.sh
ifdef IMAGE_REGISTRY
ifdef IMAGE_NAMESPACE
IMAGE_PREFIX=${IMAGE_REGISTRY}/${IMAGE_NAMESPACE}/
else
$(error IMAGE_NAMESPACE must be set when IMAGE_REGISTRY is set (e.g. IMAGE_NAMESPACE=argoproj))
endif
else
ifdef IMAGE_NAMESPACE
# for backwards compatibility with the old way like IMAGE_NAMESPACE='quay.io/argoproj'
IMAGE_PREFIX=${IMAGE_NAMESPACE}/
else
# Neither namespace nor registry given - apply the default values
IMAGE_REGISTRY="quay.io"
IMAGE_NAMESPACE="argoproj"
IMAGE_PREFIX=${IMAGE_REGISTRY}/${IMAGE_NAMESPACE}/
endif
endif
ifndef IMAGE_REPOSITORY
IMAGE_REPOSITORY=argocd
ifndef IMAGE_REGISTRY
IMAGE_REGISTRY="quay.io"
endif
.PHONY: all
@@ -363,23 +349,23 @@ ifeq ($(DEV_IMAGE), true)
IMAGE_TAG="dev-$(shell git describe --always --dirty)"
image: build-ui
DOCKER_BUILDKIT=1 $(DOCKER) build --platform=$(TARGET_ARCH) -t argocd-base --target argocd-base .
GOOS=linux GOARCH=$(TARGET_ARCH:linux/%=%) GODEBUG="tarinsecurepath=0,zipinsecurepath=0" go build -v -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd ./cmd
CGO_ENABLED=${CGO_FLAG} GOOS=linux GOARCH=amd64 GODEBUG="tarinsecurepath=0,zipinsecurepath=0" go build -v -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd ./cmd
ln -sfn ${DIST_DIR}/argocd ${DIST_DIR}/argocd-server
ln -sfn ${DIST_DIR}/argocd ${DIST_DIR}/argocd-application-controller
ln -sfn ${DIST_DIR}/argocd ${DIST_DIR}/argocd-repo-server
ln -sfn ${DIST_DIR}/argocd ${DIST_DIR}/argocd-cmp-server
ln -sfn ${DIST_DIR}/argocd ${DIST_DIR}/argocd-dex
cp Dockerfile.dev dist
DOCKER_BUILDKIT=1 $(DOCKER) build --platform=$(TARGET_ARCH) -t $(IMAGE_PREFIX)$(IMAGE_REPOSITORY):$(IMAGE_TAG) -f dist/Dockerfile.dev dist
DOCKER_BUILDKIT=1 $(DOCKER) build --platform=$(TARGET_ARCH) -t $(IMAGE_PREFIX)argocd:$(IMAGE_TAG) -f dist/Dockerfile.dev dist
else
image:
DOCKER_BUILDKIT=1 $(DOCKER) build -t $(IMAGE_PREFIX)$(IMAGE_REPOSITORY):$(IMAGE_TAG) --platform=$(TARGET_ARCH) .
DOCKER_BUILDKIT=1 $(DOCKER) build -t $(IMAGE_PREFIX)argocd:$(IMAGE_TAG) --platform=$(TARGET_ARCH) .
endif
@if [ "$(DOCKER_PUSH)" = "true" ] ; then $(DOCKER) push $(IMAGE_PREFIX)$(IMAGE_REPOSITORY):$(IMAGE_TAG) ; fi
@if [ "$(DOCKER_PUSH)" = "true" ] ; then $(DOCKER) push $(IMAGE_PREFIX)argocd:$(IMAGE_TAG) ; fi
.PHONY: armimage
armimage:
$(DOCKER) build -t $(IMAGE_PREFIX)$(IMAGE_REPOSITORY):$(IMAGE_TAG)-arm .
$(DOCKER) build -t $(IMAGE_PREFIX)argocd:$(IMAGE_TAG)-arm .
.PHONY: builder-image
builder-image:
@@ -411,7 +397,9 @@ lint: test-tools-image
.PHONY: lint-local
lint-local:
golangci-lint --version
golangci-lint run --fix --verbose
# NOTE: If you get a "Killed" OOM message, try reducing the value of GOGC
# See https://github.com/golangci/golangci-lint#memory-usage-of-golangci-lint
GOGC=$(ARGOCD_LINT_GOGC) GOMAXPROCS=2 golangci-lint run --fix --verbose
.PHONY: lint-ui
lint-ui: test-tools-image
@@ -443,19 +431,13 @@ test: test-tools-image
# Run all unit tests (local version)
.PHONY: test-local
test-local: test-gitops-engine
test-local:
if test "$(TEST_MODULE)" = ""; then \
DIST_DIR=${DIST_DIR} RERUN_FAILS=0 PACKAGES=`go list ./... | grep -v 'test/e2e'` ./hack/test.sh -args -test.gocoverdir="$(PWD)/test-results"; \
else \
DIST_DIR=${DIST_DIR} RERUN_FAILS=0 PACKAGES="$(TEST_MODULE)" ./hack/test.sh -args -test.gocoverdir="$(PWD)/test-results" "$(TEST_MODULE)"; \
fi
# Run gitops-engine unit tests
.PHONY: test-gitops-engine
test-gitops-engine:
mkdir -p $(PWD)/test-results
cd gitops-engine && go test -race -cover ./... -args -test.gocoverdir="$(PWD)/test-results"
.PHONY: test-race
test-race: test-tools-image
mkdir -p $(GOCACHE)

View File
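The Makefile hunk above replaces the `DOCKER_PUSH`-time check with defaulting logic for `IMAGE_PREFIX`: registry plus namespace when both are set, namespace alone for backwards compatibility, `quay.io/argoproj` when neither is given, and an error when a registry is given without a namespace. A minimal shell sketch of that decision table (not the repo's Makefile itself, just an illustration of the same branches):

```shell
# image_prefix REGISTRY NAMESPACE -> echoes the resulting image prefix,
# mirroring the ifdef/else chain in the Makefile diff above.
image_prefix() {
  local registry="$1" namespace="$2"
  if [ -n "$registry" ] && [ -n "$namespace" ]; then
    # Both set: <registry>/<namespace>/
    echo "${registry}/${namespace}/"
  elif [ -n "$namespace" ]; then
    # Namespace only: backwards compatible with IMAGE_NAMESPACE='quay.io/argoproj'
    echo "${namespace}/"
  elif [ -z "$registry" ]; then
    # Neither set: apply the defaults
    echo "quay.io/argoproj/"
  else
    # Registry without namespace is an error in the Makefile
    echo "error: IMAGE_NAMESPACE must be set when IMAGE_REGISTRY is set" >&2
    return 1
  fi
}

image_prefix "" ""                  # quay.io/argoproj/
image_prefix "ghcr.io" "myorg"      # ghcr.io/myorg/
image_prefix "" "quay.io/argoproj"  # quay.io/argoproj/
```

The final image reference then becomes `$(IMAGE_PREFIX)$(IMAGE_REPOSITORY):$(IMAGE_TAG)`, with `IMAGE_REPOSITORY` defaulting to `argocd` per the same hunk.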

@@ -13,7 +13,6 @@
[![Twitter Follow](https://img.shields.io/twitter/follow/argoproj?style=social)](https://twitter.com/argoproj)
[![Slack](https://img.shields.io/badge/slack-argoproj-brightgreen.svg?logo=slack)](https://argoproj.github.io/community/join-slack)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-argoproj-blue.svg?logo=linkedin)](https://www.linkedin.com/company/argoproj/)
[![Bluesky](https://img.shields.io/badge/Bluesky-argoproj-blue.svg?style=social&logo=bluesky)](https://bsky.app/profile/argoproj.bsky.social)
# Argo CD - Declarative Continuous Delivery for Kubernetes

View File

@@ -3,9 +3,9 @@ header:
expiration-date: '2024-10-31T00:00:00.000Z' # One year from initial release.
last-updated: '2023-10-27'
last-reviewed: '2023-10-27'
commit-hash: 814db444c36503851dc3d45cf9c44394821ca1a4
commit-hash: 06ef059f9fc7cf9da2dfaef2a505ee1e3c693485
project-url: https://github.com/argoproj/argo-cd
project-release: v3.4.0
project-release: v3.3.0
changelog: https://github.com/argoproj/argo-cd/releases
license: https://github.com/argoproj/argo-cd/blob/master/LICENSE
project-lifecycle:

View File

@@ -208,7 +208,6 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Kurly](https://www.kurly.com/)
1. [Kvist](https://kvistsolutions.com)
1. [Kyriba](https://www.kyriba.com/)
1. [Lattice](https://lattice.com)
1. [LeFigaro](https://www.lefigaro.fr/)
1. [Lely](https://www.lely.com/)
1. [LexisNexis](https://www.lexisnexis.com/)
@@ -246,7 +245,6 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Municipality of The Hague](https://www.denhaag.nl/)
1. [My Job Glasses](https://myjobglasses.com)
1. [Natura &Co](https://naturaeco.com/)
1. [Netease Cloud Music](https://music.163.com/)
1. [Nethopper](https://nethopper.io)
1. [New Relic](https://newrelic.com/)
1. [Nextbasket](https://nextbasket.com)

View File

@@ -1 +1 @@
3.4.0
3.3.2

View File

@@ -57,7 +57,6 @@ import (
"github.com/argoproj/argo-cd/v3/common"
applog "github.com/argoproj/argo-cd/v3/util/app/log"
"github.com/argoproj/argo-cd/v3/util/db"
"github.com/argoproj/argo-cd/v3/util/settings"
argov1alpha1 "github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
argoutil "github.com/argoproj/argo-cd/v3/util/argo"
@@ -111,7 +110,6 @@ type ApplicationSetReconciler struct {
GlobalPreservedLabels []string
Metrics *metrics.ApplicationsetMetrics
MaxResourcesStatusCount int
ClusterInformer *settings.ClusterInformer
}
// +kubebuilder:rbac:groups=argoproj.io,resources=applicationsets,verbs=get;list;watch;create;update;patch;delete
@@ -827,7 +825,7 @@ func (r *ApplicationSetReconciler) getCurrentApplications(ctx context.Context, a
// deleteInCluster will delete Applications that are currently on the cluster, but not in appList.
// The function must be called after all generators had been called and generated applications
func (r *ApplicationSetReconciler) deleteInCluster(ctx context.Context, logCtx *log.Entry, applicationSet argov1alpha1.ApplicationSet, desiredApplications []argov1alpha1.Application) error {
clusterList, err := utils.ListClusters(r.ClusterInformer)
clusterList, err := utils.ListClusters(ctx, r.KubeClientset, r.ArgoCDNamespace)
if err != nil {
return fmt.Errorf("error listing clusters: %w", err)
}

View File

@@ -1,7 +1,6 @@
package controllers
import (
"context"
"encoding/json"
"errors"
"fmt"
@@ -20,7 +19,6 @@ import (
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
kubefake "k8s.io/client-go/kubernetes/fake"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/tools/record"
ctrl "sigs.k8s.io/controller-runtime"
crtclient "sigs.k8s.io/controller-runtime/pkg/client"
@@ -1190,8 +1188,6 @@ func TestRemoveFinalizerOnInvalidDestination_FinalizerTypes(t *testing.T) {
scheme := runtime.NewScheme()
err := v1alpha1.AddToScheme(scheme)
require.NoError(t, err)
err = corev1.AddToScheme(scheme)
require.NoError(t, err)
for _, c := range []struct {
// name is human-readable test name
@@ -1248,6 +1244,9 @@ func TestRemoveFinalizerOnInvalidDestination_FinalizerTypes(t *testing.T) {
},
}
initObjs := []crtclient.Object{&app, &appSet}
client := fake.NewClientBuilder().WithScheme(scheme).WithObjects(initObjs...).WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).Build()
secret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: "my-secret",
@@ -1265,12 +1264,8 @@ func TestRemoveFinalizerOnInvalidDestination_FinalizerTypes(t *testing.T) {
},
}
initObjs := []crtclient.Object{&app, &appSet, secret}
client := fake.NewClientBuilder().WithScheme(scheme).WithObjects(initObjs...).WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).Build()
objects := append([]runtime.Object{}, secret)
kubeclientset := kubefake.NewClientset(objects...)
kubeclientset := kubefake.NewSimpleClientset(objects...)
metrics := appsetmetrics.NewFakeAppsetMetrics()
settingsMgr := settings.NewSettingsManager(t.Context(), kubeclientset, "argocd")
@@ -1278,11 +1273,6 @@ func TestRemoveFinalizerOnInvalidDestination_FinalizerTypes(t *testing.T) {
_ = settingsMgr.ResyncInformers()
argodb := db.NewDB("argocd", settingsMgr, kubeclientset)
clusterInformer, err := settings.NewClusterInformer(kubeclientset, "namespace")
require.NoError(t, err)
defer startAndSyncInformer(t, clusterInformer)()
r := ApplicationSetReconciler{
Client: client,
Scheme: scheme,
@@ -1291,7 +1281,7 @@ func TestRemoveFinalizerOnInvalidDestination_FinalizerTypes(t *testing.T) {
Metrics: metrics,
ArgoDB: argodb,
}
clusterList, err := utils.ListClusters(clusterInformer)
clusterList, err := utils.ListClusters(t.Context(), kubeclientset, "namespace")
require.NoError(t, err)
appLog := log.WithFields(applog.GetAppLogFields(&app)).WithField("appSet", "")
@@ -1308,7 +1298,7 @@ func TestRemoveFinalizerOnInvalidDestination_FinalizerTypes(t *testing.T) {
// App on the cluster should have the expected finalizers
assert.ElementsMatch(t, c.expectedFinalizers, retrievedApp.Finalizers)
// App object passed in as a parameter should have the expected finalizers
// App object passed in as a parameter should have the expected finaliers
assert.ElementsMatch(t, c.expectedFinalizers, appInputParam.Finalizers)
bytes, _ := json.MarshalIndent(retrievedApp, "", " ")
@@ -1321,8 +1311,6 @@ func TestRemoveFinalizerOnInvalidDestination_DestinationTypes(t *testing.T) {
scheme := runtime.NewScheme()
err := v1alpha1.AddToScheme(scheme)
require.NoError(t, err)
err = corev1.AddToScheme(scheme)
require.NoError(t, err)
for _, c := range []struct {
// name is human-readable test name
@@ -1415,6 +1403,9 @@ func TestRemoveFinalizerOnInvalidDestination_DestinationTypes(t *testing.T) {
},
}
initObjs := []crtclient.Object{&app, &appSet}
client := fake.NewClientBuilder().WithScheme(scheme).WithObjects(initObjs...).WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).Build()
secret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: "my-secret",
@@ -1432,10 +1423,6 @@ func TestRemoveFinalizerOnInvalidDestination_DestinationTypes(t *testing.T) {
},
}
initObjs := []crtclient.Object{&app, &appSet, secret}
client := fake.NewClientBuilder().WithScheme(scheme).WithObjects(initObjs...).WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).Build()
kubeclientset := getDefaultTestClientSet(secret)
metrics := appsetmetrics.NewFakeAppsetMetrics()
@@ -1444,11 +1431,6 @@ func TestRemoveFinalizerOnInvalidDestination_DestinationTypes(t *testing.T) {
_ = settingsMgr.ResyncInformers()
argodb := db.NewDB("argocd", settingsMgr, kubeclientset)
clusterInformer, err := settings.NewClusterInformer(kubeclientset, "argocd")
require.NoError(t, err)
defer startAndSyncInformer(t, clusterInformer)()
r := ApplicationSetReconciler{
Client: client,
Scheme: scheme,
@@ -1458,7 +1440,7 @@ func TestRemoveFinalizerOnInvalidDestination_DestinationTypes(t *testing.T) {
ArgoDB: argodb,
}
clusterList, err := utils.ListClusters(clusterInformer)
clusterList, err := utils.ListClusters(t.Context(), kubeclientset, "argocd")
require.NoError(t, err)
appLog := log.WithFields(applog.GetAppLogFields(&app)).WithField("appSet", "")
@@ -1763,8 +1745,6 @@ func TestDeleteInCluster(t *testing.T) {
scheme := runtime.NewScheme()
err := v1alpha1.AddToScheme(scheme)
require.NoError(t, err)
err = corev1.AddToScheme(scheme)
require.NoError(t, err)
for _, c := range []struct {
// appSet is the application set on which the delete function is called
@@ -1877,19 +1857,12 @@ func TestDeleteInCluster(t *testing.T) {
client := fake.NewClientBuilder().WithScheme(scheme).WithObjects(initObjs...).WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).Build()
metrics := appsetmetrics.NewFakeAppsetMetrics()
kubeclientset := kubefake.NewClientset()
clusterInformer, err := settings.NewClusterInformer(kubeclientset, "namespace")
require.NoError(t, err)
defer startAndSyncInformer(t, clusterInformer)()
r := ApplicationSetReconciler{
Client: client,
Scheme: scheme,
Recorder: record.NewFakeRecorder(len(initObjs) + len(c.expected)),
KubeClientset: kubeclientset,
Metrics: metrics,
ClusterInformer: clusterInformer,
Client: client,
Scheme: scheme,
Recorder: record.NewFakeRecorder(len(initObjs) + len(c.expected)),
KubeClientset: kubefake.NewSimpleClientset(),
Metrics: metrics,
}
err = r.deleteInCluster(t.Context(), log.NewEntry(log.StandardLogger()), c.appSet, c.desiredApps)
@@ -2220,8 +2193,6 @@ func TestReconcilerValidationProjectErrorBehaviour(t *testing.T) {
scheme := runtime.NewScheme()
err := v1alpha1.AddToScheme(scheme)
require.NoError(t, err)
err = corev1.AddToScheme(scheme)
require.NoError(t, err)
project := v1alpha1.AppProject{
ObjectMeta: metav1.ObjectMeta{Name: "good-project", Namespace: "argocd"},
@@ -2265,9 +2236,6 @@ func TestReconcilerValidationProjectErrorBehaviour(t *testing.T) {
argodb := db.NewDB("argocd", settings.NewSettingsManager(t.Context(), kubeclientset, "argocd"), kubeclientset)
clusterInformer, err := settings.NewClusterInformer(kubeclientset, "argocd")
require.NoError(t, err)
r := ApplicationSetReconciler{
Client: client,
Scheme: scheme,
@@ -2281,7 +2249,6 @@ func TestReconcilerValidationProjectErrorBehaviour(t *testing.T) {
Policy: v1alpha1.ApplicationsSyncPolicySync,
ArgoCDNamespace: "argocd",
Metrics: metrics,
ClusterInformer: clusterInformer,
}
req := ctrl.Request{
@@ -2312,7 +2279,7 @@ func TestSetApplicationSetStatusCondition(t *testing.T) {
scheme := runtime.NewScheme()
err := v1alpha1.AddToScheme(scheme)
require.NoError(t, err)
kubeclientset := kubefake.NewClientset([]runtime.Object{}...)
kubeclientset := kubefake.NewSimpleClientset([]runtime.Object{}...)
someTime := &metav1.Time{Time: time.Now().Add(-5 * time.Minute)}
existingParameterGeneratedCondition := getParametersGeneratedCondition(true, "")
existingParameterGeneratedCondition.LastTransitionTime = someTime
@@ -2783,8 +2750,6 @@ func applicationsUpdateSyncPolicyTest(t *testing.T, applicationsSyncPolicy v1alp
scheme := runtime.NewScheme()
err := v1alpha1.AddToScheme(scheme)
require.NoError(t, err)
err = corev1.AddToScheme(scheme)
require.NoError(t, err)
defaultProject := v1alpha1.AppProject{
ObjectMeta: metav1.ObjectMeta{Name: "default", Namespace: "argocd"},
@@ -2841,14 +2806,10 @@ func applicationsUpdateSyncPolicyTest(t *testing.T, applicationsSyncPolicy v1alp
kubeclientset := getDefaultTestClientSet(secret)
client := fake.NewClientBuilder().WithScheme(scheme).WithObjects(&appSet, &defaultProject, secret).WithStatusSubresource(&appSet).WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).Build()
client := fake.NewClientBuilder().WithScheme(scheme).WithObjects(&appSet, &defaultProject).WithStatusSubresource(&appSet).WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).Build()
metrics := appsetmetrics.NewFakeAppsetMetrics()
argodb := db.NewDB("argocd", settings.NewSettingsManager(t.Context(), kubeclientset, "argocd"), kubeclientset)
clusterInformer, err := settings.NewClusterInformer(kubeclientset, "argocd")
require.NoError(t, err)
defer startAndSyncInformer(t, clusterInformer)()
r := ApplicationSetReconciler{
Client: client,
@@ -2864,7 +2825,6 @@ func applicationsUpdateSyncPolicyTest(t *testing.T, applicationsSyncPolicy v1alp
Policy: v1alpha1.ApplicationsSyncPolicySync,
EnablePolicyOverride: allowPolicyOverride,
Metrics: metrics,
ClusterInformer: clusterInformer,
}
req := ctrl.Request{
@@ -2965,8 +2925,6 @@ func applicationsDeleteSyncPolicyTest(t *testing.T, applicationsSyncPolicy v1alp
scheme := runtime.NewScheme()
err := v1alpha1.AddToScheme(scheme)
require.NoError(t, err)
err = corev1.AddToScheme(scheme)
require.NoError(t, err)
defaultProject := v1alpha1.AppProject{
ObjectMeta: metav1.ObjectMeta{Name: "default", Namespace: "argocd"},
@@ -3023,16 +2981,11 @@ func applicationsDeleteSyncPolicyTest(t *testing.T, applicationsSyncPolicy v1alp
kubeclientset := getDefaultTestClientSet(secret)
client := fake.NewClientBuilder().WithScheme(scheme).WithObjects(&appSet, &defaultProject, secret).WithStatusSubresource(&appSet).WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).Build()
client := fake.NewClientBuilder().WithScheme(scheme).WithObjects(&appSet, &defaultProject).WithStatusSubresource(&appSet).WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).Build()
metrics := appsetmetrics.NewFakeAppsetMetrics()
argodb := db.NewDB("argocd", settings.NewSettingsManager(t.Context(), kubeclientset, "argocd"), kubeclientset)
clusterInformer, err := settings.NewClusterInformer(kubeclientset, "argocd")
require.NoError(t, err)
defer startAndSyncInformer(t, clusterInformer)()
r := ApplicationSetReconciler{
Client: client,
Scheme: scheme,
@@ -3047,7 +3000,6 @@ func applicationsDeleteSyncPolicyTest(t *testing.T, applicationsSyncPolicy v1alp
Policy: v1alpha1.ApplicationsSyncPolicySync,
EnablePolicyOverride: allowPolicyOverride,
Metrics: metrics,
ClusterInformer: clusterInformer,
}
req := ctrl.Request{
@@ -3140,8 +3092,6 @@ func TestPolicies(t *testing.T) {
scheme := runtime.NewScheme()
err := v1alpha1.AddToScheme(scheme)
require.NoError(t, err)
err = corev1.AddToScheme(scheme)
require.NoError(t, err)
defaultProject := v1alpha1.AppProject{
ObjectMeta: metav1.ObjectMeta{Name: "default", Namespace: "argocd"},
@@ -3225,11 +3175,6 @@ func TestPolicies(t *testing.T) {
argodb := db.NewDB("argocd", settings.NewSettingsManager(t.Context(), kubeclientset, "argocd"), kubeclientset)
clusterInformer, err := settings.NewClusterInformer(kubeclientset, "argocd")
require.NoError(t, err)
defer startAndSyncInformer(t, clusterInformer)()
r := ApplicationSetReconciler{
Client: client,
Scheme: scheme,
@@ -3242,7 +3187,6 @@ func TestPolicies(t *testing.T) {
ArgoCDNamespace: "argocd",
KubeClientset: kubeclientset,
Policy: policy,
ClusterInformer: clusterInformer,
Metrics: metrics,
}
@@ -3252,27 +3196,27 @@ func TestPolicies(t *testing.T) {
Name: "name",
},
}
ctx := t.Context()
// Check if the application is created
res, err := r.Reconcile(ctx, req)
// Check if Application is created
res, err := r.Reconcile(t.Context(), req)
require.NoError(t, err)
assert.Equal(t, time.Duration(0), res.RequeueAfter)
var app v1alpha1.Application
err = r.Get(ctx, crtclient.ObjectKey{Namespace: "argocd", Name: "my-app"}, &app)
err = r.Get(t.Context(), crtclient.ObjectKey{Namespace: "argocd", Name: "my-app"}, &app)
require.NoError(t, err)
assert.Equal(t, "value", app.Annotations["key"])
// Check if the Application is updated
// Check if Application is updated
app.Annotations["key"] = "edited"
err = r.Update(ctx, &app)
err = r.Update(t.Context(), &app)
require.NoError(t, err)
res, err = r.Reconcile(ctx, req)
res, err = r.Reconcile(t.Context(), req)
require.NoError(t, err)
assert.Equal(t, time.Duration(0), res.RequeueAfter)
err = r.Get(ctx, crtclient.ObjectKey{Namespace: "argocd", Name: "my-app"}, &app)
err = r.Get(t.Context(), crtclient.ObjectKey{Namespace: "argocd", Name: "my-app"}, &app)
require.NoError(t, err)
if c.allowedUpdate {
@@ -3281,22 +3225,22 @@ func TestPolicies(t *testing.T) {
assert.Equal(t, "edited", app.Annotations["key"])
}
// Check if the Application is deleted
err = r.Get(ctx, crtclient.ObjectKey{Namespace: "argocd", Name: "name"}, &appSet)
// Check if Application is deleted
err = r.Get(t.Context(), crtclient.ObjectKey{Namespace: "argocd", Name: "name"}, &appSet)
require.NoError(t, err)
appSet.Spec.Generators[0] = v1alpha1.ApplicationSetGenerator{
List: &v1alpha1.ListGenerator{
Elements: []apiextensionsv1.JSON{},
},
}
err = r.Update(ctx, &appSet)
err = r.Update(t.Context(), &appSet)
require.NoError(t, err)
res, err = r.Reconcile(ctx, req)
res, err = r.Reconcile(t.Context(), req)
require.NoError(t, err)
assert.Equal(t, time.Duration(0), res.RequeueAfter)
err = r.Get(ctx, crtclient.ObjectKey{Namespace: "argocd", Name: "my-app"}, &app)
err = r.Get(t.Context(), crtclient.ObjectKey{Namespace: "argocd", Name: "my-app"}, &app)
require.NoError(t, err)
if c.allowedDelete {
assert.NotNil(t, app.DeletionTimestamp)
@@ -3312,7 +3256,7 @@ func TestSetApplicationSetApplicationStatus(t *testing.T) {
err := v1alpha1.AddToScheme(scheme)
require.NoError(t, err)
kubeclientset := kubefake.NewClientset([]runtime.Object{}...)
kubeclientset := kubefake.NewSimpleClientset([]runtime.Object{}...)
for _, cc := range []struct {
name string
@@ -4189,7 +4133,7 @@ func TestBuildAppDependencyList(t *testing.T) {
},
} {
t.Run(cc.name, func(t *testing.T) {
kubeclientset := kubefake.NewClientset([]runtime.Object{}...)
kubeclientset := kubefake.NewSimpleClientset([]runtime.Object{}...)
argodb := db.NewDB("argocd", settings.NewSettingsManager(t.Context(), kubeclientset, "argocd"), kubeclientset)
@@ -4624,7 +4568,7 @@ func TestGetAppsToSync(t *testing.T) {
},
} {
t.Run(cc.name, func(t *testing.T) {
kubeclientset := kubefake.NewClientset([]runtime.Object{}...)
kubeclientset := kubefake.NewSimpleClientset([]runtime.Object{}...)
argodb := db.NewDB("argocd", settings.NewSettingsManager(t.Context(), kubeclientset, "argocd"), kubeclientset)
@@ -5311,7 +5255,7 @@ func TestUpdateApplicationSetApplicationStatus(t *testing.T) {
},
} {
t.Run(cc.name, func(t *testing.T) {
kubeclientset := kubefake.NewClientset([]runtime.Object{}...)
kubeclientset := kubefake.NewSimpleClientset([]runtime.Object{}...)
client := fake.NewClientBuilder().WithScheme(scheme).WithObjects(&cc.appSet).WithStatusSubresource(&cc.appSet).Build()
metrics := appsetmetrics.NewFakeAppsetMetrics()
@@ -6064,7 +6008,7 @@ func TestUpdateApplicationSetApplicationStatusProgress(t *testing.T) {
},
} {
t.Run(cc.name, func(t *testing.T) {
kubeclientset := kubefake.NewClientset([]runtime.Object{}...)
kubeclientset := kubefake.NewSimpleClientset([]runtime.Object{}...)
client := fake.NewClientBuilder().WithScheme(scheme).WithObjects(&cc.appSet).WithStatusSubresource(&cc.appSet).Build()
metrics := appsetmetrics.NewFakeAppsetMetrics()
@@ -6337,7 +6281,7 @@ func TestUpdateResourceStatus(t *testing.T) {
},
} {
t.Run(cc.name, func(t *testing.T) {
kubeclientset := kubefake.NewClientset([]runtime.Object{}...)
kubeclientset := kubefake.NewSimpleClientset([]runtime.Object{}...)
client := fake.NewClientBuilder().WithScheme(scheme).WithStatusSubresource(&cc.appSet).WithObjects(&cc.appSet).Build()
metrics := appsetmetrics.NewFakeAppsetMetrics()
@@ -6427,7 +6371,7 @@ func TestResourceStatusAreOrdered(t *testing.T) {
},
} {
t.Run(cc.name, func(t *testing.T) {
kubeclientset := kubefake.NewClientset([]runtime.Object{}...)
kubeclientset := kubefake.NewSimpleClientset([]runtime.Object{}...)
client := fake.NewClientBuilder().WithScheme(scheme).WithStatusSubresource(&cc.appSet).WithObjects(&cc.appSet).Build()
metrics := appsetmetrics.NewFakeAppsetMetrics()
@@ -7632,7 +7576,7 @@ func TestReconcileProgressiveSyncDisabled(t *testing.T) {
err := v1alpha1.AddToScheme(scheme)
require.NoError(t, err)
kubeclientset := kubefake.NewClientset([]runtime.Object{}...)
kubeclientset := kubefake.NewSimpleClientset([]runtime.Object{}...)
for _, cc := range []struct {
name string
@@ -7705,14 +7649,3 @@ func TestReconcileProgressiveSyncDisabled(t *testing.T) {
})
}
}
func startAndSyncInformer(t *testing.T, informer cache.SharedIndexInformer) context.CancelFunc {
t.Helper()
ctx, cancel := context.WithCancel(t.Context())
go informer.Run(ctx.Done())
if !cache.WaitForCacheSync(ctx.Done(), informer.HasSynced) {
cancel()
t.Fatal("Timed out waiting for caches to sync")
}
return cancel
}

View File

@@ -19,7 +19,6 @@ import (
appsetmetrics "github.com/argoproj/argo-cd/v3/applicationset/metrics"
"github.com/argoproj/argo-cd/v3/applicationset/services/mocks"
argov1alpha1 "github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/v3/util/settings"
)
func TestRequeueAfter(t *testing.T) {
@@ -58,17 +57,12 @@ func TestRequeueAfter(t *testing.T) {
}
fakeDynClient := dynfake.NewSimpleDynamicClientWithCustomListKinds(runtime.NewScheme(), gvrToListKind, duckType)
scmConfig := generators.NewSCMConfig("", []string{""}, true, true, nil, true)
clusterInformer, err := settings.NewClusterInformer(appClientset, "argocd")
require.NoError(t, err)
defer startAndSyncInformer(t, clusterInformer)()
terminalGenerators := map[string]generators.Generator{
"List": generators.NewListGenerator(),
"Clusters": generators.NewClusterGenerator(k8sClient, "argocd"),
"Clusters": generators.NewClusterGenerator(ctx, k8sClient, appClientset, "argocd"),
"Git": generators.NewGitGenerator(mockServer, "namespace"),
"SCMProvider": generators.NewSCMProviderGenerator(fake.NewClientBuilder().WithObjects(&corev1.Secret{}).Build(), scmConfig),
"ClusterDecisionResource": generators.NewDuckTypeGenerator(ctx, fakeDynClient, appClientset, "argocd", clusterInformer),
"ClusterDecisionResource": generators.NewDuckTypeGenerator(ctx, fakeDynClient, appClientset, "argocd"),
"PullRequest": generators.NewPullRequestGenerator(k8sClient, scmConfig),
}

View File

@@ -9,6 +9,7 @@ import (
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"sigs.k8s.io/controller-runtime/pkg/client"
"github.com/argoproj/argo-cd/v3/applicationset/utils"
@@ -21,15 +22,19 @@ var _ Generator = (*ClusterGenerator)(nil)
// ClusterGenerator generates Applications for some or all clusters registered with ArgoCD.
type ClusterGenerator struct {
client.Client
ctx context.Context
clientset kubernetes.Interface
// namespace is the Argo CD namespace
namespace string
}
var render = &utils.Render{}
func NewClusterGenerator(c client.Client, namespace string) Generator {
func NewClusterGenerator(ctx context.Context, c client.Client, clientset kubernetes.Interface, namespace string) Generator {
g := &ClusterGenerator{
Client: c,
ctx: ctx,
clientset: clientset,
namespace: namespace,
}
return g
@@ -59,7 +64,16 @@ func (g *ClusterGenerator) GenerateParams(appSetGenerator *argoappsetv1alpha1.Ap
// - Since local clusters do not have secrets, they do not have labels to match against
ignoreLocalClusters := len(appSetGenerator.Clusters.Selector.MatchExpressions) > 0 || len(appSetGenerator.Clusters.Selector.MatchLabels) > 0
// Get cluster secrets using the cached controller-runtime client
// ListClusters will include the local cluster in the list of clusters
clustersFromArgoCD, err := utils.ListClusters(g.ctx, g.clientset, g.namespace)
if err != nil {
return nil, fmt.Errorf("error listing clusters: %w", err)
}
if clustersFromArgoCD == nil {
return nil, nil
}
clusterSecrets, err := g.getSecretsByClusterName(logCtx, appSetGenerator)
if err != nil {
return nil, fmt.Errorf("error getting cluster secrets: %w", err)
@@ -68,14 +82,32 @@ func (g *ClusterGenerator) GenerateParams(appSetGenerator *argoappsetv1alpha1.Ap
paramHolder := &paramHolder{isFlatMode: appSetGenerator.Clusters.FlatList}
logCtx.Debugf("Using flat mode = %t for cluster generator", paramHolder.isFlatMode)
// Convert map values to slice to check for an in-cluster secret
secretsList := make([]corev1.Secret, 0, len(clusterSecrets))
for _, secret := range clusterSecrets {
secretsList = append(secretsList, secret)
secretsFound := []corev1.Secret{}
for _, cluster := range clustersFromArgoCD {
// If there is a secret for this cluster, then it's a non-local cluster, so it will be
// handled by the next step.
if secretForCluster, exists := clusterSecrets[cluster.Name]; exists {
secretsFound = append(secretsFound, secretForCluster)
} else if !ignoreLocalClusters {
// If there is no secret for the cluster, it's the local cluster, so handle it here.
params := map[string]any{}
params["name"] = cluster.Name
params["nameNormalized"] = cluster.Name
params["server"] = cluster.Server
params["project"] = ""
err = appendTemplatedValues(appSetGenerator.Clusters.Values, params, appSet.Spec.GoTemplate, appSet.Spec.GoTemplateOptions)
if err != nil {
return nil, fmt.Errorf("error appending templated values for local cluster: %w", err)
}
paramHolder.append(params)
logCtx.WithField("cluster", "local cluster").Info("matched local cluster")
}
}
// For each matching cluster secret (non-local clusters only)
for _, cluster := range clusterSecrets {
for _, cluster := range secretsFound {
params := g.getClusterParameters(cluster, appSet)
err = appendTemplatedValues(appSetGenerator.Clusters.Values, params, appSet.Spec.GoTemplate, appSet.Spec.GoTemplateOptions)
@@ -87,23 +119,6 @@ func (g *ClusterGenerator) GenerateParams(appSetGenerator *argoappsetv1alpha1.Ap
logCtx.WithField("cluster", cluster.Name).Debug("matched cluster secret")
}
// Add the in-cluster last if it doesn't have a secret, and we're not ignoring in-cluster
if !ignoreLocalClusters && !utils.SecretsContainInClusterCredentials(secretsList) {
params := map[string]any{}
params["name"] = argoappsetv1alpha1.KubernetesInClusterName
params["nameNormalized"] = argoappsetv1alpha1.KubernetesInClusterName
params["server"] = argoappsetv1alpha1.KubernetesInternalAPIServerAddr
params["project"] = ""
err = appendTemplatedValues(appSetGenerator.Clusters.Values, params, appSet.Spec.GoTemplate, appSet.Spec.GoTemplateOptions)
if err != nil {
return nil, fmt.Errorf("error appending templated values for local cluster: %w", err)
}
paramHolder.append(params)
logCtx.WithField("cluster", "local cluster").Info("matched local cluster")
}
return paramHolder.consolidate(), nil
}
@@ -171,7 +186,7 @@ func (g *ClusterGenerator) getSecretsByClusterName(log *log.Entry, appSetGenerat
return nil, fmt.Errorf("error converting label selector: %w", err)
}
if err := g.List(context.Background(), clusterSecretList, client.InNamespace(g.namespace), client.MatchingLabelsSelector{Selector: secretSelector}); err != nil {
if err := g.List(context.Background(), clusterSecretList, client.MatchingLabelsSelector{Selector: secretSelector}); err != nil {
return nil, err
}
log.Debugf("clusters matching labels: %d", len(clusterSecretList.Items))

View File

@@ -7,9 +7,12 @@ import (
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
kubefake "k8s.io/client-go/kubernetes/fake"
"github.com/argoproj/argo-cd/v3/applicationset/utils"
argoprojiov1alpha1 "github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
@@ -296,15 +299,23 @@ func TestGenerateParams(t *testing.T) {
},
}
// convert []client.Object to []runtime.Object, for use by kubefake package
runtimeClusters := []runtime.Object{}
for _, clientCluster := range clusters {
runtimeClusters = append(runtimeClusters, clientCluster)
}
for _, testCase := range testCases {
t.Run(testCase.name, func(t *testing.T) {
appClientset := kubefake.NewSimpleClientset(runtimeClusters...)
fakeClient := fake.NewClientBuilder().WithObjects(clusters...).Build()
cl := &possiblyErroringFakeCtrlRuntimeClient{
fakeClient,
testCase.clientError,
}
clusterGenerator := NewClusterGenerator(cl, "namespace")
clusterGenerator := NewClusterGenerator(t.Context(), cl, appClientset, "namespace")
applicationSetInfo := argoprojiov1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
@@ -325,25 +336,12 @@ func TestGenerateParams(t *testing.T) {
require.EqualError(t, err, testCase.expectedError.Error())
} else {
require.NoError(t, err)
assertEqualParamsFlat(t, testCase.expected, got, testCase.isFlatMode)
assert.ElementsMatch(t, testCase.expected, got)
}
})
}
}
func assertEqualParamsFlat(t *testing.T, expected, got []map[string]any, isFlatMode bool) {
t.Helper()
if isFlatMode && len(expected) == 1 && len(got) == 1 {
expectedClusters, ok1 := expected[0]["clusters"].([]map[string]any)
gotClusters, ok2 := got[0]["clusters"].([]map[string]any)
if ok1 && ok2 {
assert.ElementsMatch(t, expectedClusters, gotClusters)
return
}
}
assert.ElementsMatch(t, expected, got)
}
func TestGenerateParamsGoTemplate(t *testing.T) {
clusters := []client.Object{
&corev1.Secret{
@@ -839,15 +837,23 @@ func TestGenerateParamsGoTemplate(t *testing.T) {
},
}
// convert []client.Object to []runtime.Object, for use by kubefake package
runtimeClusters := []runtime.Object{}
for _, clientCluster := range clusters {
runtimeClusters = append(runtimeClusters, clientCluster)
}
for _, testCase := range testCases {
t.Run(testCase.name, func(t *testing.T) {
appClientset := kubefake.NewSimpleClientset(runtimeClusters...)
fakeClient := fake.NewClientBuilder().WithObjects(clusters...).Build()
cl := &possiblyErroringFakeCtrlRuntimeClient{
fakeClient,
testCase.clientError,
}
clusterGenerator := NewClusterGenerator(cl, "namespace")
clusterGenerator := NewClusterGenerator(t.Context(), cl, appClientset, "namespace")
applicationSetInfo := argoprojiov1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
@@ -870,7 +876,7 @@ func TestGenerateParamsGoTemplate(t *testing.T) {
require.EqualError(t, err, testCase.expectedError.Error())
} else {
require.NoError(t, err)
assertEqualParamsFlat(t, testCase.expected, got, testCase.isFlatMode)
assert.ElementsMatch(t, testCase.expected, got)
}
})
}

View File

@@ -19,27 +19,24 @@ import (
"github.com/argoproj/argo-cd/v3/applicationset/utils"
argoprojiov1alpha1 "github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/v3/util/settings"
)
var _ Generator = (*DuckTypeGenerator)(nil)
// DuckTypeGenerator generates Applications for some or all clusters registered with ArgoCD.
type DuckTypeGenerator struct {
ctx context.Context
dynClient dynamic.Interface
clientset kubernetes.Interface
namespace string // namespace is the Argo CD namespace
clusterInformer *settings.ClusterInformer
ctx context.Context
dynClient dynamic.Interface
clientset kubernetes.Interface
namespace string // namespace is the Argo CD namespace
}
func NewDuckTypeGenerator(ctx context.Context, dynClient dynamic.Interface, clientset kubernetes.Interface, namespace string, clusterInformer *settings.ClusterInformer) Generator {
func NewDuckTypeGenerator(ctx context.Context, dynClient dynamic.Interface, clientset kubernetes.Interface, namespace string) Generator {
g := &DuckTypeGenerator{
ctx: ctx,
dynClient: dynClient,
clientset: clientset,
namespace: namespace,
clusterInformer: clusterInformer,
ctx: ctx,
dynClient: dynClient,
clientset: clientset,
namespace: namespace,
}
return g
}
@@ -68,7 +65,8 @@ func (g *DuckTypeGenerator) GenerateParams(appSetGenerator *argoprojiov1alpha1.A
return nil, ErrEmptyAppSetGenerator
}
clustersFromArgoCD, err := utils.ListClusters(g.clusterInformer)
// ListClusters from Argo CD's util package will include the local cluster in the list of clusters
clustersFromArgoCD, err := utils.ListClusters(g.ctx, g.clientset, g.namespace)
if err != nil {
return nil, fmt.Errorf("error listing clusters: %w", err)
}

View File

@@ -11,13 +11,11 @@ import (
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/client-go/dynamic/fake"
dynfake "k8s.io/client-go/dynamic/fake"
kubefake "k8s.io/client-go/kubernetes/fake"
"sigs.k8s.io/controller-runtime/pkg/client"
argoprojiov1alpha1 "github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/v3/test"
"github.com/argoproj/argo-cd/v3/util/settings"
)
const (
@@ -292,14 +290,9 @@ func TestGenerateParamsForDuckType(t *testing.T) {
Resource: "ducks",
}: "DuckList"}
fakeDynClient := fake.NewSimpleDynamicClientWithCustomListKinds(runtime.NewScheme(), gvrToListKind, testCase.resource)
fakeDynClient := dynfake.NewSimpleDynamicClientWithCustomListKinds(runtime.NewScheme(), gvrToListKind, testCase.resource)
clusterInformer, err := settings.NewClusterInformer(appClientset, "namespace")
require.NoError(t, err)
defer test.StartInformer(clusterInformer)()
duckTypeGenerator := NewDuckTypeGenerator(t.Context(), fakeDynClient, appClientset, "namespace", clusterInformer)
duckTypeGenerator := NewDuckTypeGenerator(t.Context(), fakeDynClient, appClientset, "namespace")
applicationSetInfo := argoprojiov1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
@@ -593,14 +586,9 @@ func TestGenerateParamsForDuckTypeGoTemplate(t *testing.T) {
Resource: "ducks",
}: "DuckList"}
fakeDynClient := fake.NewSimpleDynamicClientWithCustomListKinds(runtime.NewScheme(), gvrToListKind, testCase.resource)
fakeDynClient := dynfake.NewSimpleDynamicClientWithCustomListKinds(runtime.NewScheme(), gvrToListKind, testCase.resource)
clusterInformer, err := settings.NewClusterInformer(appClientset, "namespace")
require.NoError(t, err)
defer test.StartInformer(clusterInformer)()
duckTypeGenerator := NewDuckTypeGenerator(t.Context(), fakeDynClient, appClientset, "namespace", clusterInformer)
duckTypeGenerator := NewDuckTypeGenerator(t.Context(), fakeDynClient, appClientset, "namespace")
applicationSetInfo := argoprojiov1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{

View File

@@ -1,6 +1,7 @@
package generators
import (
"context"
"testing"
log "github.com/sirupsen/logrus"
@@ -15,6 +16,8 @@ import (
"github.com/stretchr/testify/mock"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/runtime"
kubefake "k8s.io/client-go/kubernetes/fake"
crtclient "sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
)
@@ -220,7 +223,7 @@ func TestTransForm(t *testing.T) {
for _, testCase := range testCases {
t.Run(testCase.name, func(t *testing.T) {
testGenerators := map[string]Generator{
"Clusters": getMockClusterGenerator(),
"Clusters": getMockClusterGenerator(t.Context()),
}
applicationSetInfo := argov1alpha1.ApplicationSet{
@@ -257,7 +260,7 @@ func emptyTemplate() argov1alpha1.ApplicationSetTemplate {
}
}
func getMockClusterGenerator() Generator {
func getMockClusterGenerator(ctx context.Context) Generator {
clusters := []crtclient.Object{
&corev1.Secret{
TypeMeta: metav1.TypeMeta{
@@ -332,8 +335,14 @@ func getMockClusterGenerator() Generator {
Type: corev1.SecretType("Opaque"),
},
}
runtimeClusters := []runtime.Object{}
for _, clientCluster := range clusters {
runtimeClusters = append(runtimeClusters, clientCluster)
}
appClientset := kubefake.NewSimpleClientset(runtimeClusters...)
fakeClient := fake.NewClientBuilder().WithObjects(clusters...).Build()
return NewClusterGenerator(fakeClient, "namespace")
return NewClusterGenerator(ctx, fakeClient, appClientset, "namespace")
}
func getMockGitGenerator() Generator {
@@ -345,7 +354,7 @@ func getMockGitGenerator() Generator {
func TestGetRelevantGenerators(t *testing.T) {
testGenerators := map[string]Generator{
"Clusters": getMockClusterGenerator(),
"Clusters": getMockClusterGenerator(t.Context()),
"Git": getMockGitGenerator(),
}

View File

@@ -316,7 +316,7 @@ func (g *GitGenerator) filterApps(directories []argoprojiov1alpha1.GitDirectoryG
appExclude = true
}
}
// Whenever there is a path with exclude: true it won't be included, even if it is included in a different path pattern
// Whenever there is a path with exclude: true it wont be included, even if it is included in a different path pattern
if appInclude && !appExclude {
res = append(res, appPath)
}

View File

@@ -8,6 +8,7 @@ import (
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
kubefake "k8s.io/client-go/kubernetes/fake"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
@@ -623,6 +624,11 @@ func TestInterpolatedMatrixGenerate(t *testing.T) {
Type: corev1.SecretType("Opaque"),
},
}
// convert []client.Object to []runtime.Object, for use by kubefake package
runtimeClusters := []runtime.Object{}
for _, clientCluster := range clusters {
runtimeClusters = append(runtimeClusters, clientCluster)
}
for _, testCase := range testCases {
testCaseCopy := testCase // Since tests may run in parallel
@@ -631,12 +637,13 @@ func TestInterpolatedMatrixGenerate(t *testing.T) {
genMock := &generatorsMock.Generator{}
appSet := &v1alpha1.ApplicationSet{}
appClientset := kubefake.NewSimpleClientset(runtimeClusters...)
fakeClient := fake.NewClientBuilder().WithObjects(clusters...).Build()
cl := &possiblyErroringFakeCtrlRuntimeClient{
fakeClient,
testCase.clientError,
}
clusterGenerator := NewClusterGenerator(cl, "namespace")
clusterGenerator := NewClusterGenerator(t.Context(), cl, appClientset, "namespace")
for _, g := range testCaseCopy.baseGenerators {
gitGeneratorSpec := v1alpha1.ApplicationSetGenerator{
@@ -796,6 +803,11 @@ func TestInterpolatedMatrixGenerateGoTemplate(t *testing.T) {
Type: corev1.SecretType("Opaque"),
},
}
// convert []client.Object to []runtime.Object, for use by kubefake package
runtimeClusters := []runtime.Object{}
for _, clientCluster := range clusters {
runtimeClusters = append(runtimeClusters, clientCluster)
}
for _, testCase := range testCases {
testCaseCopy := testCase // Since tests may run in parallel
@@ -808,12 +820,13 @@ func TestInterpolatedMatrixGenerateGoTemplate(t *testing.T) {
},
}
appClientset := kubefake.NewSimpleClientset(runtimeClusters...)
fakeClient := fake.NewClientBuilder().WithObjects(clusters...).Build()
cl := &possiblyErroringFakeCtrlRuntimeClient{
fakeClient,
testCase.clientError,
}
clusterGenerator := NewClusterGenerator(cl, "namespace")
clusterGenerator := NewClusterGenerator(t.Context(), cl, appClientset, "namespace")
for _, g := range testCaseCopy.baseGenerators {
gitGeneratorSpec := v1alpha1.ApplicationSetGenerator{

View File

@@ -8,16 +8,15 @@ import (
"sigs.k8s.io/controller-runtime/pkg/client"
"github.com/argoproj/argo-cd/v3/applicationset/services"
"github.com/argoproj/argo-cd/v3/util/settings"
)
func GetGenerators(ctx context.Context, c client.Client, k8sClient kubernetes.Interface, controllerNamespace string, argoCDService services.Repos, dynamicClient dynamic.Interface, scmConfig SCMConfig, clusterInformer *settings.ClusterInformer) map[string]Generator {
func GetGenerators(ctx context.Context, c client.Client, k8sClient kubernetes.Interface, controllerNamespace string, argoCDService services.Repos, dynamicClient dynamic.Interface, scmConfig SCMConfig) map[string]Generator {
terminalGenerators := map[string]Generator{
"List": NewListGenerator(),
"Clusters": NewClusterGenerator(c, controllerNamespace),
"Clusters": NewClusterGenerator(ctx, c, k8sClient, controllerNamespace),
"Git": NewGitGenerator(argoCDService, controllerNamespace),
"SCMProvider": NewSCMProviderGenerator(c, scmConfig),
"ClusterDecisionResource": NewDuckTypeGenerator(ctx, dynamicClient, k8sClient, controllerNamespace, clusterInformer),
"ClusterDecisionResource": NewDuckTypeGenerator(ctx, dynamicClient, k8sClient, controllerNamespace),
"PullRequest": NewPullRequestGenerator(c, scmConfig),
"Plugin": NewPluginGenerator(c, controllerNamespace),
}

View File

@@ -1,12 +1,15 @@
package utils
import (
"context"
"fmt"
corev1 "k8s.io/api/core/v1"
"github.com/argoproj/argo-cd/v3/common"
appv1 "github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/v3/util/settings"
"github.com/argoproj/argo-cd/v3/util/db"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
)
// ClusterSpecifier contains only the name and server URL of a cluster. We use this struct to avoid partially-populating
@@ -16,44 +19,42 @@ type ClusterSpecifier struct {
Server string
}
// SecretsContainInClusterCredentials checks if any of the provided secrets represent the in-cluster configuration.
func SecretsContainInClusterCredentials(secrets []corev1.Secret) bool {
for _, secret := range secrets {
if string(secret.Data["server"]) == appv1.KubernetesInternalAPIServerAddr {
return true
}
}
return false
}
// ListClusters returns a list of cluster specifiers using the ClusterInformer.
func ListClusters(clusterInformer *settings.ClusterInformer) ([]ClusterSpecifier, error) {
clusters, err := clusterInformer.ListClusters()
func ListClusters(ctx context.Context, clientset kubernetes.Interface, namespace string) ([]ClusterSpecifier, error) {
clusterSecretsList, err := clientset.CoreV1().Secrets(namespace).List(ctx,
metav1.ListOptions{LabelSelector: common.LabelKeySecretType + "=" + common.LabelValueSecretTypeCluster})
if err != nil {
return nil, fmt.Errorf("error listing clusters: %w", err)
return nil, err
}
// len of clusters +1 for the in cluster secret
clusterList := make([]ClusterSpecifier, 0, len(clusters)+1)
hasInCluster := false
for _, cluster := range clusters {
clusterList = append(clusterList, ClusterSpecifier{
if clusterSecretsList == nil {
return nil, nil
}
clusterSecrets := clusterSecretsList.Items
clusterList := make([]ClusterSpecifier, len(clusterSecrets))
hasInClusterCredentials := false
for i, clusterSecret := range clusterSecrets {
cluster, err := db.SecretToCluster(&clusterSecret)
if err != nil || cluster == nil {
return nil, fmt.Errorf("unable to convert cluster secret to cluster object '%s': %w", clusterSecret.Name, err)
}
clusterList[i] = ClusterSpecifier{
Name: cluster.Name,
Server: cluster.Server,
})
}
if cluster.Server == appv1.KubernetesInternalAPIServerAddr {
hasInCluster = true
hasInClusterCredentials = true
}
}
if !hasInCluster {
if !hasInClusterCredentials {
// There was no secret for the in-cluster config, so we add it here. We don't fully-populate the Cluster struct,
// since only the name and server fields are used by the generator.
clusterList = append(clusterList, ClusterSpecifier{
Name: appv1.KubernetesInClusterName,
Name: "in-cluster",
Server: appv1.KubernetesInternalAPIServerAddr,
})
}
return clusterList, nil
}

assets/swagger.json (generated, 43 changed lines)
View File

@@ -2265,44 +2265,6 @@
}
}
},
"/api/v1/applicationsets/{name}/events": {
"get": {
"tags": [
"ApplicationSetService"
],
"summary": "ListResourceEvents returns a list of event resources",
"operationId": "ApplicationSetService_ListResourceEvents",
"parameters": [
{
"type": "string",
"description": "the applicationsets's name",
"name": "name",
"in": "path",
"required": true
},
{
"type": "string",
"description": "The application set namespace. Default empty is argocd control plane namespace.",
"name": "appsetNamespace",
"in": "query"
}
],
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1EventList"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/runtimeError"
}
}
}
}
},
"/api/v1/applicationsets/{name}/resource-tree": {
"get": {
"tags": [
@@ -7299,7 +7261,7 @@
"type": "object",
"properties": {
"applyNestedSelectors": {
"description": "ApplyNestedSelectors enables selectors defined within the generators of two level-nested matrix or merge generators.\n\nDeprecated: This field is ignored, and the behavior is always enabled. The field will be removed in a future\nversion of the ApplicationSet CRD.",
"description": "ApplyNestedSelectors enables selectors defined within the generators of two level-nested matrix or merge generators\nDeprecated: This field is ignored, and the behavior is always enabled. The field will be removed in a future\nversion of the ApplicationSet CRD.",
"type": "boolean"
},
"generators": {
@@ -7357,9 +7319,6 @@
"$ref": "#/definitions/v1alpha1ApplicationSetCondition"
}
},
"health": {
"$ref": "#/definitions/v1alpha1HealthStatus"
},
"resources": {
"description": "Resources is a list of Applications resources managed by this application set.",
"type": "array",

View File

@@ -32,7 +32,6 @@ import (
"k8s.io/client-go/kubernetes"
clientgoscheme "k8s.io/client-go/kubernetes/scheme"
_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/tools/clientcmd"
ctrlcache "sigs.k8s.io/controller-runtime/pkg/cache"
ctrlclient "sigs.k8s.io/controller-runtime/pkg/client"
@@ -194,18 +193,6 @@ func NewCommand() *cobra.Command {
argoSettingsMgr := argosettings.NewSettingsManager(ctx, k8sClient, namespace)
argoCDDB := db.NewDB(namespace, argoSettingsMgr, k8sClient)
clusterInformer, err := argosettings.NewClusterInformer(k8sClient, namespace)
if err != nil {
log.Error(err, "unable to create cluster informer")
os.Exit(1)
}
go clusterInformer.Run(ctx.Done())
if !cache.WaitForCacheSync(ctx.Done(), clusterInformer.HasSynced) {
log.Error("Timed out waiting for cluster cache to sync")
os.Exit(1)
}
scmConfig := generators.NewSCMConfig(scmRootCAPath, allowedScmProviders, enableScmProviders, enableGitHubAPIMetrics, github_app.NewAuthCredentials(argoCDDB.(db.RepoCredsDB)), tokenRefStrictMode)
tlsConfig := apiclient.TLSConfiguration{
@@ -225,7 +212,7 @@ func NewCommand() *cobra.Command {
repoClientset := apiclient.NewRepoServerClientset(argocdRepoServer, repoServerTimeoutSeconds, tlsConfig)
argoCDService := services.NewArgoCDService(argoCDDB, gitSubmoduleEnabled, repoClientset, enableNewGitFileGlobbing)
topLevelGenerators := generators.GetGenerators(ctx, mgr.GetClient(), k8sClient, namespace, argoCDService, dynamicClient, scmConfig, clusterInformer)
topLevelGenerators := generators.GetGenerators(ctx, mgr.GetClient(), k8sClient, namespace, argoCDService, dynamicClient, scmConfig)
// start a webhook server that listens to incoming webhook payloads
webhookHandler, err := webhook.NewWebhookHandler(webhookParallelism, argoSettingsMgr, mgr.GetClient(), topLevelGenerators)
@@ -261,7 +248,6 @@ func NewCommand() *cobra.Command {
GlobalPreservedLabels: globalPreservedLabels,
Metrics: &metrics,
MaxResourcesStatusCount: maxResourcesStatusCount,
ClusterInformer: clusterInformer,
}).SetupWithManager(mgr, enableProgressiveSyncs, maxConcurrentReconciliations); err != nil {
log.Error(err, "unable to create controller", "controller", "ApplicationSet")
os.Exit(1)

View File

@@ -295,8 +295,7 @@ spec:
spec:
destination: {}
project: ""
status:
health: {}
status: {}
---
`,
},
@@ -326,8 +325,7 @@ spec:
spec:
destination: {}
project: ""
status:
health: {}
status: {}
---
`,
},

View File

@@ -60,7 +60,8 @@ func Test_loadClusters(t *testing.T) {
require.NoError(t, err)
for i := range clusters {
// This changes, nil it to avoid testing it.
clusters[i].Info.ConnectionState.ModifiedAt = nil
//nolint:staticcheck
clusters[i].ConnectionState.ModifiedAt = nil
}
expected := []ClusterWithInfo{{
@@ -68,13 +69,11 @@ func Test_loadClusters(t *testing.T) {
ID: "",
Server: "https://kubernetes.default.svc",
Name: "in-cluster",
Info: v1alpha1.ClusterInfo{
ConnectionState: v1alpha1.ConnectionState{
Status: "Successful",
},
ServerVersion: ".",
ConnectionState: v1alpha1.ConnectionState{
Status: "Successful",
},
Shard: ptr.To(int64(0)),
ServerVersion: ".",
Shard: ptr.To(int64(0)),
},
Namespaces: []string{"test"},
}}


@@ -17,8 +17,6 @@ import (
"time"
"unicode/utf8"
"golang.org/x/sync/errgroup"
"github.com/argoproj/gitops-engine/pkg/health"
"github.com/argoproj/gitops-engine/pkg/sync/common"
"github.com/argoproj/gitops-engine/pkg/sync/hook"
@@ -1277,32 +1275,24 @@ type objKeyLiveTarget struct {
target *unstructured.Unstructured
}
// addServerSideDiffPerfFlags adds server-side diff performance tuning flags to a command
func addServerSideDiffPerfFlags(command *cobra.Command, serverSideDiffConcurrency *int, serverSideDiffMaxBatchKB *int) {
command.Flags().IntVar(serverSideDiffConcurrency, "server-side-diff-concurrency", -1, "Max concurrent batches for server-side diff. -1 = unlimited, 1 = sequential, 2+ = concurrent (0 = invalid)")
command.Flags().IntVar(serverSideDiffMaxBatchKB, "server-side-diff-max-batch-kb", 250, "Max batch size in KB for server-side diff. Smaller values are safer for proxies")
}
// NewApplicationDiffCommand returns a new instance of an `argocd app diff` command
func NewApplicationDiffCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
refresh bool
hardRefresh bool
exitCode bool
diffExitCode int
local string
revision string
localRepoRoot string
serverSideGenerate bool
serverSideDiff bool
serverSideDiffConcurrency int
serverSideDiffMaxBatchKB int
localIncludes []string
appNamespace string
revisions []string
sourcePositions []int64
sourceNames []string
ignoreNormalizerOpts normalizers.IgnoreNormalizerOpts
refresh bool
hardRefresh bool
exitCode bool
diffExitCode int
local string
revision string
localRepoRoot string
serverSideGenerate bool
serverSideDiff bool
localIncludes []string
appNamespace string
revisions []string
sourcePositions []int64
sourceNames []string
ignoreNormalizerOpts normalizers.IgnoreNormalizerOpts
)
shortDesc := "Perform a diff against the target and live state."
command := &cobra.Command{
@@ -1433,7 +1423,7 @@ func NewApplicationDiffCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
}
proj := getProject(ctx, c, clientOpts, app.Spec.Project)
foundDiffs := findAndPrintDiff(ctx, app, proj.Project, resources, argoSettings, diffOption, ignoreNormalizerOpts, serverSideDiff, appIf, app.GetName(), app.GetNamespace(), serverSideDiffConcurrency, serverSideDiffMaxBatchKB)
foundDiffs := findAndPrintDiff(ctx, app, proj.Project, resources, argoSettings, diffOption, ignoreNormalizerOpts, serverSideDiff, appIf, app.GetName(), app.GetNamespace())
if foundDiffs && exitCode {
os.Exit(diffExitCode)
}
@@ -1448,7 +1438,6 @@ func NewApplicationDiffCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
command.Flags().StringVar(&localRepoRoot, "local-repo-root", "/", "Path to the repository root. Used together with --local allows setting the repository root")
command.Flags().BoolVar(&serverSideGenerate, "server-side-generate", false, "Used with --local, this will send your manifests to the server for diffing")
command.Flags().BoolVar(&serverSideDiff, "server-side-diff", false, "Use server-side diff to calculate the diff. This will default to true if the ServerSideDiff annotation is set on the application.")
addServerSideDiffPerfFlags(command, &serverSideDiffConcurrency, &serverSideDiffMaxBatchKB)
command.Flags().StringArrayVar(&localIncludes, "local-include", []string{"*.yaml", "*.yml", "*.json"}, "Used with --server-side-generate, specify patterns of filenames to send. Matching is based on filename and not path.")
command.Flags().StringVarP(&appNamespace, "app-namespace", "N", "", "Only render the difference in namespace")
command.Flags().StringArrayVar(&revisions, "revisions", []string{}, "Show manifests at specific revisions for source position in source-positions")
@@ -1465,14 +1454,7 @@ func printResourceDiff(group, kind, namespace, name string, live, target *unstru
}
// findAndPrintServerSideDiff performs a server-side diff by making requests to the API server and prints the response
func findAndPrintServerSideDiff(ctx context.Context, app *argoappv1.Application, items []objKeyLiveTarget, resources *application.ManagedResourcesResponse, appIf application.ApplicationServiceClient, appName, appNs string, maxConcurrency int, maxBatchSizeKB int) bool {
if maxConcurrency == 0 {
errors.CheckError(stderrors.New("invalid value for --server-side-diff-concurrency: 0 is not allowed (use -1 for unlimited, or a positive number to limit concurrency)"))
}
liveResources := make([]*argoappv1.ResourceDiff, 0, len(items))
targetManifests := make([]string, 0, len(items))
func findAndPrintServerSideDiff(ctx context.Context, app *argoappv1.Application, items []objKeyLiveTarget, resources *application.ManagedResourcesResponse, appIf application.ApplicationServiceClient, appName, appNs string) bool {
// Process each item for server-side diff
foundDiffs := false
for _, item := range items {
@@ -1506,7 +1488,6 @@ func findAndPrintServerSideDiff(ctx context.Context, app *argoappv1.Application,
Modified: true,
}
}
liveResources = append(liveResources, liveResource)
if item.target != nil {
jsonBytes, err := json.Marshal(item.target)
@@ -1515,63 +1496,23 @@ func findAndPrintServerSideDiff(ctx context.Context, app *argoappv1.Application,
}
targetManifest = string(jsonBytes)
}
targetManifests = append(targetManifests, targetManifest)
}
if len(liveResources) == 0 {
return false
}
// Batch by size to avoid proxy limits
maxBatchSize := maxBatchSizeKB * 1024
var batches []struct{ start, end int }
for i := 0; i < len(liveResources); {
start := i
size := 0
for i < len(liveResources) {
resourceSize := len(liveResources[i].LiveState) + len(targetManifests[i])
if size+resourceSize > maxBatchSize && i > start {
break
}
size += resourceSize
i++
// Call server-side diff for this individual resource
serverSideDiffQuery := &application.ApplicationServerSideDiffQuery{
AppName: &appName,
AppNamespace: &appNs,
Project: &app.Spec.Project,
LiveResources: []*argoappv1.ResourceDiff{liveResource},
TargetManifests: []string{targetManifest},
}
batches = append(batches, struct{ start, end int }{start, i})
}
// Process batches in parallel
g, errGroupCtx := errgroup.WithContext(ctx)
g.SetLimit(maxConcurrency)
serverSideDiffRes, err := appIf.ServerSideDiff(ctx, serverSideDiffQuery)
if err != nil {
errors.CheckError(err)
}
results := make([][]*argoappv1.ResourceDiff, len(batches))
for idx, batch := range batches {
i := idx
b := batch
g.Go(func() error {
// Call server-side diff for this batch of resources
serverSideDiffQuery := &application.ApplicationServerSideDiffQuery{
AppName: &appName,
AppNamespace: &appNs,
Project: &app.Spec.Project,
LiveResources: liveResources[b.start:b.end],
TargetManifests: targetManifests[b.start:b.end],
}
serverSideDiffRes, err := appIf.ServerSideDiff(errGroupCtx, serverSideDiffQuery)
if err != nil {
return err
}
results[i] = serverSideDiffRes.Items
return nil
})
}
if err := g.Wait(); err != nil {
errors.CheckError(err)
}
for _, items := range results {
for _, resultItem := range items {
// Extract diff for this resource
for _, resultItem := range serverSideDiffRes.Items {
if resultItem.Hook || (!resultItem.Modified && resultItem.TargetState != "" && resultItem.LiveState != "") {
continue
}
@@ -1581,12 +1522,13 @@ func findAndPrintServerSideDiff(ctx context.Context, app *argoappv1.Application,
if resultItem.TargetState != "" && resultItem.TargetState != "null" {
target = &unstructured.Unstructured{}
err := json.Unmarshal([]byte(resultItem.TargetState), target)
err = json.Unmarshal([]byte(resultItem.TargetState), target)
errors.CheckError(err)
}
if resultItem.LiveState != "" && resultItem.LiveState != "null" {
live = &unstructured.Unstructured{}
err := json.Unmarshal([]byte(resultItem.LiveState), live)
err = json.Unmarshal([]byte(resultItem.LiveState), live)
errors.CheckError(err)
}
@@ -1612,14 +1554,14 @@ type DifferenceOption struct {
}
// findAndPrintDiff prints the difference between the application's current state and the state stored in Git or locally. It returns true if a difference is found, false otherwise.
func findAndPrintDiff(ctx context.Context, app *argoappv1.Application, proj *argoappv1.AppProject, resources *application.ManagedResourcesResponse, argoSettings *settings.Settings, diffOptions *DifferenceOption, ignoreNormalizerOpts normalizers.IgnoreNormalizerOpts, useServerSideDiff bool, appIf application.ApplicationServiceClient, appName, appNs string, serverSideDiffConcurrency int, serverSideDiffMaxBatchKB int) bool {
func findAndPrintDiff(ctx context.Context, app *argoappv1.Application, proj *argoappv1.AppProject, resources *application.ManagedResourcesResponse, argoSettings *settings.Settings, diffOptions *DifferenceOption, ignoreNormalizerOpts normalizers.IgnoreNormalizerOpts, useServerSideDiff bool, appIf application.ApplicationServiceClient, appName, appNs string) bool {
var foundDiffs bool
items, err := prepareObjectsForDiff(ctx, app, proj, resources, argoSettings, diffOptions)
errors.CheckError(err)
if useServerSideDiff {
return findAndPrintServerSideDiff(ctx, app, items, resources, appIf, appName, appNs, serverSideDiffConcurrency, serverSideDiffMaxBatchKB)
return findAndPrintServerSideDiff(ctx, app, items, resources, appIf, appName, appNs)
}
for _, item := range items {
@@ -2133,38 +2075,36 @@ func printTreeViewDetailed(nodeMapping map[string]argoappv1.ResourceNode, parent
// NewApplicationSyncCommand returns a new instance of an `argocd app sync` command
func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
revision string
revisions []string
sourcePositions []int64
sourceNames []string
resources []string
labels []string
selector string
prune bool
dryRun bool
timeout uint
strategy string
force bool
replace bool
serverSideApply bool
applyOutOfSyncOnly bool
async bool
retryLimit int64
retryRefresh bool
retryBackoffDuration time.Duration
retryBackoffMaxDuration time.Duration
retryBackoffFactor int64
local string
localRepoRoot string
infos []string
diffChanges bool
diffChangesConfirm bool
projects []string
output string
appNamespace string
ignoreNormalizerOpts normalizers.IgnoreNormalizerOpts
serverSideDiffConcurrency int
serverSideDiffMaxBatchKB int
revision string
revisions []string
sourcePositions []int64
sourceNames []string
resources []string
labels []string
selector string
prune bool
dryRun bool
timeout uint
strategy string
force bool
replace bool
serverSideApply bool
applyOutOfSyncOnly bool
async bool
retryLimit int64
retryRefresh bool
retryBackoffDuration time.Duration
retryBackoffMaxDuration time.Duration
retryBackoffFactor int64
local string
localRepoRoot string
infos []string
diffChanges bool
diffChangesConfirm bool
projects []string
output string
appNamespace string
ignoreNormalizerOpts normalizers.IgnoreNormalizerOpts
)
command := &cobra.Command{
Use: "sync [APPNAME... | -l selector | --project project-name]",
@@ -2453,7 +2393,7 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
// Check if application has ServerSideDiff annotation
serverSideDiff := resourceutil.HasAnnotationOption(app, argocommon.AnnotationCompareOptions, "ServerSideDiff=true")
foundDiffs = findAndPrintDiff(ctx, app, proj.Project, resources, argoSettings, diffOption, ignoreNormalizerOpts, serverSideDiff, appIf, appName, appNs, serverSideDiffConcurrency, serverSideDiffMaxBatchKB)
foundDiffs = findAndPrintDiff(ctx, app, proj.Project, resources, argoSettings, diffOption, ignoreNormalizerOpts, serverSideDiff, appIf, appName, appNs)
if !foundDiffs {
fmt.Printf("====== No Differences found ======\n")
// if no differences found, then no need to sync
@@ -2518,7 +2458,6 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
command.Flags().StringArrayVar(&revisions, "revisions", []string{}, "Show manifests at specific revisions for source position in source-positions")
command.Flags().Int64SliceVar(&sourcePositions, "source-positions", []int64{}, "List of source positions. Default is empty array. Counting start at 1.")
command.Flags().StringArrayVar(&sourceNames, "source-names", []string{}, "List of source names. Default is an empty array.")
addServerSideDiffPerfFlags(command, &serverSideDiffConcurrency, &serverSideDiffMaxBatchKB)
return command
}
@@ -3283,7 +3222,8 @@ func NewApplicationManifestsCommand(clientOpts *argocdclient.ClientOptions) *cob
errors.CheckError(err)
proj := getProject(ctx, c, clientOpts, app.Spec.Project)
unstructureds = getLocalObjects(context.Background(), app, proj.Project, local, localRepoRoot, argoSettings.AppLabelKey, cluster.Info.ServerVersion, cluster.Info.APIVersions, argoSettings.KustomizeOptions, argoSettings.TrackingMethod)
//nolint:staticcheck
unstructureds = getLocalObjects(context.Background(), app, proj.Project, local, localRepoRoot, argoSettings.AppLabelKey, cluster.ServerVersion, cluster.Info.APIVersions, argoSettings.KustomizeOptions, argoSettings.TrackingMethod)
case len(revisions) > 0 && len(sourcePositions) > 0:
q := application.ApplicationManifestQuery{
Name: &appName,


@@ -395,12 +395,12 @@ func printApplicationSetNames(apps []arogappsetv1.ApplicationSet) {
func printApplicationSetTable(apps []arogappsetv1.ApplicationSet, output *string) {
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
var fmtStr string
headers := []any{"NAME", "PROJECT", "SYNCPOLICY", "HEALTH", "CONDITIONS"}
headers := []any{"NAME", "PROJECT", "SYNCPOLICY", "CONDITIONS"}
if *output == "wide" {
fmtStr = "%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\n"
fmtStr = "%s\t%s\t%s\t%s\t%s\t%s\t%s\n"
headers = append(headers, "REPO", "PATH", "TARGET")
} else {
fmtStr = "%s\t%s\t%s\t%s\t%s\n"
fmtStr = "%s\t%s\t%s\t%s\n"
}
_, _ = fmt.Fprintf(w, fmtStr, headers...)
for _, app := range apps {
@@ -414,7 +414,6 @@ func printApplicationSetTable(apps []arogappsetv1.ApplicationSet, output *string
app.QualifiedName(),
app.Spec.Template.Spec.Project,
app.Spec.SyncPolicy,
app.Status.Health.Status,
conditions,
}
if *output == "wide" {
@@ -438,7 +437,6 @@ func printAppSetSummaryTable(appSet *arogappsetv1.ApplicationSet) {
fmt.Printf(printOpFmtStr, "Project:", appSet.Spec.Template.Spec.GetProject())
fmt.Printf(printOpFmtStr, "Server:", getServerForAppSet(appSet))
fmt.Printf(printOpFmtStr, "Namespace:", appSet.Spec.Template.Spec.Destination.Namespace)
fmt.Printf(printOpFmtStr, "Health Status:", appSet.Status.Health.Status)
if !appSet.Spec.Template.Spec.HasMultipleSources() {
fmt.Println("Source:")
} else {


@@ -107,7 +107,7 @@ func TestPrintApplicationSetTable(t *testing.T) {
return nil
})
require.NoError(t, err)
expectation := "NAME PROJECT SYNCPOLICY HEALTH CONDITIONS\napp-name default nil [{ResourcesUpToDate <nil> True }]\nteam-two/app-name default nil [{ResourcesUpToDate <nil> True }]\n"
expectation := "NAME PROJECT SYNCPOLICY CONDITIONS\napp-name default nil [{ResourcesUpToDate <nil> True }]\nteam-two/app-name default nil [{ResourcesUpToDate <nil> True }]\n"
assert.Equal(t, expectation, output)
}
@@ -200,7 +200,6 @@ func TestPrintAppSetSummaryTable(t *testing.T) {
Project: default
Server:
Namespace:
Health Status:
Source:
- Repo:
Target:
@@ -214,7 +213,6 @@ SyncPolicy: <none>
Project: default
Server:
Namespace:
Health Status:
Source:
- Repo:
Target:
@@ -228,7 +226,6 @@ SyncPolicy: Automated
Project: default
Server:
Namespace:
Health Status:
Source:
- Repo:
Target:
@@ -242,7 +239,6 @@ SyncPolicy: Automated
Project: default
Server:
Namespace:
Health Status:
Source:
- Repo: test1
Target: master1
@@ -257,7 +253,6 @@ SyncPolicy: <none>
Project: default
Server:
Namespace:
Health Status:
Sources:
- Repo: test1
Target: master1


@@ -380,7 +380,8 @@ func printClusterDetails(clusters []argoappv1.Cluster) {
fmt.Printf("Cluster information\n\n")
fmt.Printf(" Server URL: %s\n", cluster.Server)
fmt.Printf(" Server Name: %s\n", strWithDefault(cluster.Name, "-"))
fmt.Printf(" Server Version: %s\n", cluster.Info.ServerVersion)
//nolint:staticcheck
fmt.Printf(" Server Version: %s\n", cluster.ServerVersion)
fmt.Printf(" Namespaces: %s\n", formatNamespaces(cluster))
fmt.Printf("\nTLS configuration\n\n")
fmt.Printf(" Client cert: %v\n", len(cluster.Config.CertData) != 0)
@@ -474,7 +475,8 @@ func printClusterTable(clusters []argoappv1.Cluster) {
if len(c.Namespaces) > 0 {
server = fmt.Sprintf("%s (%d namespaces)", c.Server, len(c.Namespaces))
}
_, _ = fmt.Fprintf(w, "%s\t%s\t%s\t%s\t%s\t%s\n", server, c.Name, c.Info.ServerVersion, c.Info.ConnectionState.Status, c.Info.ConnectionState.Message, c.Project)
//nolint:staticcheck
_, _ = fmt.Fprintf(w, "%s\t%s\t%s\t%s\t%s\t%s\n", server, c.Name, c.ServerVersion, c.ConnectionState.Status, c.ConnectionState.Message, c.Project)
}
_ = w.Flush()
}

View File

@@ -39,14 +39,12 @@ func Test_printClusterTable(_ *testing.T) {
AWSAuthConfig: nil,
DisableCompression: false,
},
Info: v1alpha1.ClusterInfo{
ConnectionState: v1alpha1.ConnectionState{
Status: "my-status",
Message: "my-message",
ModifiedAt: &metav1.Time{},
},
ServerVersion: "my-version",
ConnectionState: v1alpha1.ConnectionState{
Status: "my-status",
Message: "my-message",
ModifiedAt: &metav1.Time{},
},
ServerVersion: "my-version",
},
})
}


@@ -247,5 +247,9 @@ func AddNote(gitClient git.Client, drySha, commitSha string) error {
if err != nil {
return fmt.Errorf("failed to marshal commit note: %w", err)
}
return gitClient.AddAndPushNote(commitSha, NoteNamespace, string(jsonBytes)) // nolint:wrapcheck // wrapping the error wouldn't add any information
err = gitClient.AddAndPushNote(commitSha, NoteNamespace, string(jsonBytes))
if err != nil {
return fmt.Errorf("failed to add commit note: %w", err)
}
return nil
}


@@ -1137,13 +1137,13 @@ func (ctrl *ApplicationController) processProjectQueueItem() (processNext bool)
}
func (ctrl *ApplicationController) finalizeProjectDeletion(proj *appv1.AppProject) error {
apps, err := ctrl.appLister.Applications(ctrl.namespace).List(labels.Everything())
apps, err := ctrl.appLister.List(labels.Everything())
if err != nil {
return fmt.Errorf("error listing applications: %w", err)
}
appsCount := 0
for i := range apps {
if apps[i].Spec.GetProject() == proj.Name {
if apps[i].Spec.GetProject() == proj.Name && ctrl.isAppNamespaceAllowed(apps[i]) && proj.IsAppNamespacePermitted(apps[i], ctrl.namespace) {
appsCount++
}
}
@@ -1559,8 +1559,18 @@ func (ctrl *ApplicationController) processRequestedAppOperation(app *appv1.Appli
// if we just completed an operation, force a refresh so that UI will report up-to-date
// sync/health information
if _, err := cache.MetaNamespaceKeyFunc(app); err == nil {
// force app refresh using the CompareWithLatest comparison type and trigger the app reconciliation loop
ctrl.requestAppRefresh(app.QualifiedName(), CompareWithLatestForceResolve.Pointer(), nil)
var compareWith CompareWith
if state.Operation.InitiatedBy.Automated {
// Do not force revision resolution on automated operations because
// this would cause excessive ls-remote requests on monorepo commits
compareWith = CompareWithLatest
} else {
// Force app refresh using the most recent resolved revision after sync,
// so the UI won't show a just-synced application as out of sync if it was
// synced after the commit but before the app refresh (see #18153)
compareWith = CompareWithLatestForceResolve
}
ctrl.requestAppRefresh(app.QualifiedName(), compareWith.Pointer(), nil)
} else {
logCtx.WithError(err).Warn("Fails to requeue application")
}


@@ -2302,6 +2302,93 @@ func TestFinalizeProjectDeletion_DoesNotHaveApplications(t *testing.T) {
}, receivedPatch)
}
func TestFinalizeProjectDeletion_HasApplicationInOtherNamespace(t *testing.T) {
app := newFakeApp()
app.Namespace = "team-a"
proj := &v1alpha1.AppProject{
ObjectMeta: metav1.ObjectMeta{Name: "default", Namespace: test.FakeArgoCDNamespace},
Spec: v1alpha1.AppProjectSpec{
SourceNamespaces: []string{"team-a"},
},
}
ctrl := newFakeController(t.Context(), &fakeData{
apps: []runtime.Object{app, proj},
applicationNamespaces: []string{"team-a"},
}, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
patched := false
fakeAppCs.PrependReactor("patch", "*", func(_ kubetesting.Action) (handled bool, ret runtime.Object, err error) {
patched = true
return true, &v1alpha1.AppProject{}, nil
})
err := ctrl.finalizeProjectDeletion(proj)
require.NoError(t, err)
assert.False(t, patched)
}
func TestFinalizeProjectDeletion_IgnoresAppsInUnmonitoredNamespace(t *testing.T) {
app := newFakeApp()
app.Namespace = "team-b"
proj := &v1alpha1.AppProject{
ObjectMeta: metav1.ObjectMeta{Name: "default", Namespace: test.FakeArgoCDNamespace},
}
ctrl := newFakeController(t.Context(), &fakeData{
apps: []runtime.Object{app, proj},
applicationNamespaces: []string{"team-a"},
}, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
receivedPatch := map[string]any{}
fakeAppCs.PrependReactor("patch", "*", func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
if patchAction, ok := action.(kubetesting.PatchAction); ok {
require.NoError(t, json.Unmarshal(patchAction.GetPatch(), &receivedPatch))
}
return true, &v1alpha1.AppProject{}, nil
})
err := ctrl.finalizeProjectDeletion(proj)
require.NoError(t, err)
assert.Equal(t, map[string]any{
"metadata": map[string]any{
"finalizers": nil,
},
}, receivedPatch)
}
func TestFinalizeProjectDeletion_IgnoresAppsNotPermittedByProject(t *testing.T) {
app := newFakeApp()
app.Namespace = "team-b"
proj := &v1alpha1.AppProject{
ObjectMeta: metav1.ObjectMeta{Name: "default", Namespace: test.FakeArgoCDNamespace},
Spec: v1alpha1.AppProjectSpec{
SourceNamespaces: []string{"team-a"},
},
}
ctrl := newFakeController(t.Context(), &fakeData{
apps: []runtime.Object{app, proj},
applicationNamespaces: []string{"team-a", "team-b"},
}, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
receivedPatch := map[string]any{}
fakeAppCs.PrependReactor("patch", "*", func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
if patchAction, ok := action.(kubetesting.PatchAction); ok {
require.NoError(t, json.Unmarshal(patchAction.GetPatch(), &receivedPatch))
}
return true, &v1alpha1.AppProject{}, nil
})
err := ctrl.finalizeProjectDeletion(proj)
require.NoError(t, err)
assert.Equal(t, map[string]any{
"metadata": map[string]any{
"finalizers": nil,
},
}, receivedPatch)
}
func TestProcessRequestedAppOperation_FailedNoRetries(t *testing.T) {
app := newFakeApp()
app.Spec.Project = "default"
@@ -2546,6 +2633,41 @@ func TestProcessRequestedAppOperation_Successful(t *testing.T) {
assert.Equal(t, CompareWithLatestForceResolve, level)
}
func TestProcessRequestedAppAutomatedOperation_Successful(t *testing.T) {
app := newFakeApp()
app.Spec.Project = "default"
app.Operation = &v1alpha1.Operation{
Sync: &v1alpha1.SyncOperation{},
InitiatedBy: v1alpha1.OperationInitiator{
Automated: true,
},
}
ctrl := newFakeController(t.Context(), &fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponses: []*apiclient.ManifestResponse{{
Manifests: []string{},
}},
}, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
receivedPatch := map[string]any{}
fakeAppCs.PrependReactor("patch", "*", func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
if patchAction, ok := action.(kubetesting.PatchAction); ok {
require.NoError(t, json.Unmarshal(patchAction.GetPatch(), &receivedPatch))
}
return true, &v1alpha1.Application{}, nil
})
ctrl.processRequestedAppOperation(app)
phase, _, _ := unstructured.NestedString(receivedPatch, "status", "operationState", "phase")
message, _, _ := unstructured.NestedString(receivedPatch, "status", "operationState", "message")
assert.Equal(t, string(synccommon.OperationSucceeded), phase)
assert.Equal(t, "successfully synced (no more tasks)", message)
ok, level := ctrl.isRefreshRequested(ctrl.toAppKey(app.Name))
assert.True(t, ok)
assert.Equal(t, CompareWithLatest, level)
}
func TestProcessRequestedAppOperation_SyncTimeout(t *testing.T) {
testCases := []struct {
name string


@@ -8,6 +8,7 @@ import (
"github.com/argoproj/gitops-engine/pkg/sync/ignore"
kubeutil "github.com/argoproj/gitops-engine/pkg/utils/kube"
log "github.com/sirupsen/logrus"
"k8s.io/apimachinery/pkg/runtime/schema"
"github.com/argoproj/argo-cd/v3/common"
"github.com/argoproj/argo-cd/v3/pkg/apis/application"
@@ -20,35 +21,27 @@ import (
func setApplicationHealth(resources []managedResource, statuses []appv1.ResourceStatus, resourceOverrides map[string]appv1.ResourceOverride, app *appv1.Application, persistResourceHealth bool) (health.HealthStatusCode, error) {
var savedErr error
var errCount uint
var containsResources, containsLiveResources bool
appHealthStatus := health.HealthStatusHealthy
for i, res := range resources {
if res.Target != nil && hookutil.Skip(res.Target) {
continue
}
if res.Live != nil && (hookutil.IsHook(res.Live) || ignore.Ignore(res.Live)) {
if res.Live != nil && res.Live.GetAnnotations() != nil && res.Live.GetAnnotations()[common.AnnotationIgnoreHealthCheck] == "true" {
continue
}
// Contains actual resources that are not hooks
containsResources = true
if res.Live != nil {
containsLiveResources = true
}
// Do not aggregate the health of the resource if the annotation to ignore health check is set to true
if res.Live != nil && res.Live.GetAnnotations() != nil && res.Live.GetAnnotations()[common.AnnotationIgnoreHealthCheck] == "true" {
if res.Live != nil && (hookutil.IsHook(res.Live) || ignore.Ignore(res.Live)) {
continue
}
var healthStatus *health.HealthStatus
var err error
healthOverrides := lua.ResourceHealthOverrides(resourceOverrides)
gvk := schema.GroupVersionKind{Group: res.Group, Version: res.Version, Kind: res.Kind}
if res.Live == nil {
healthStatus = &health.HealthStatus{Status: health.HealthStatusMissing}
} else {
// App that manages itself should not affect own health
if isSelfReferencedApp(app, kubeutil.GetObjectRef(res.Live)) {
continue
}
@@ -72,8 +65,8 @@ func setApplicationHealth(resources []managedResource, statuses []appv1.Resource
statuses[i].Health = nil
}
// Missing resources should not affect parent app health - the OutOfSync status already indicates resources are missing
if res.Live == nil && healthStatus.Status == health.HealthStatusMissing {
// If the health status is missing but the resource has no built-in/custom health check, it should not affect parent app health
if _, hasOverride := healthOverrides[lua.GetConfigMapKey(gvk)]; healthStatus.Status == health.HealthStatusMissing && !hasOverride && health.GetHealthCheckFunc(gvk) == nil {
continue
}
@@ -86,12 +79,6 @@ func setApplicationHealth(resources []managedResource, statuses []appv1.Resource
appHealthStatus = healthStatus.Status
}
}
// If the app is expected to have resources but does not contain any live resources, set the app health to missing
if containsResources && !containsLiveResources && health.IsWorse(appHealthStatus, health.HealthStatusMissing) {
appHealthStatus = health.HealthStatusMissing
}
if persistResourceHealth {
app.Status.ResourceHealthSource = appv1.ResourceHealthLocationInline
} else {


@@ -16,7 +16,6 @@ import (
"k8s.io/apimachinery/pkg/runtime/schema"
"sigs.k8s.io/yaml"
"github.com/argoproj/argo-cd/v3/common"
"github.com/argoproj/argo-cd/v3/pkg/apis/application"
appv1 "github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/v3/util/lua"
@@ -104,103 +103,12 @@ func TestSetApplicationHealth_ResourceHealthNotPersisted(t *testing.T) {
assert.Nil(t, resourceStatuses[0].Health)
}
func TestSetApplicationHealth_NoResource(t *testing.T) {
resources := []managedResource{}
resourceStatuses := initStatuses(resources)
healthStatus, err := setApplicationHealth(resources, resourceStatuses, lua.ResourceHealthOverrides{}, app, true)
require.NoError(t, err)
assert.Equal(t, health.HealthStatusHealthy, healthStatus)
}
func TestSetApplicationHealth_OnlyHooks(t *testing.T) {
pod := resourceFromFile("./testdata/pod-running-restart-always.yaml")
pod.SetAnnotations(map[string]string{synccommon.AnnotationKeyHook: string(synccommon.HookTypeSync)})
resources := []managedResource{{
Group: "", Version: "v1", Kind: "Pod", Target: &pod, Live: &pod,
}}
resourceStatuses := initStatuses(resources)
healthStatus, err := setApplicationHealth(resources, resourceStatuses, lua.ResourceHealthOverrides{}, app, true)
require.NoError(t, err)
assert.Equal(t, health.HealthStatusHealthy, healthStatus)
}
func TestSetApplicationHealth_MissingResource(t *testing.T) {
pod := resourceFromFile("./testdata/pod-running-restart-always.yaml")
pod2 := pod.DeepCopy()
pod2.SetName("pod2")
resources := []managedResource{
{Group: "", Version: "v1", Kind: "Pod", Target: &pod},
{Group: "", Version: "v1", Kind: "Pod", Target: pod2, Live: pod2},
}
resourceStatuses := initStatuses(resources)
healthStatus, err := setApplicationHealth(resources, resourceStatuses, lua.ResourceHealthOverrides{}, app, true)
require.NoError(t, err)
assert.Equal(t, health.HealthStatusHealthy, healthStatus)
}
func TestSetApplicationHealth_MissingResource_WithIgnoreHealthcheck(t *testing.T) {
pod := resourceFromFile("./testdata/pod-running-restart-always.yaml")
pod2 := pod.DeepCopy()
pod2.SetName("pod2")
pod2.SetAnnotations(map[string]string{common.AnnotationIgnoreHealthCheck: "true"})
resources := []managedResource{
{Group: "", Version: "v1", Kind: "Pod", Target: &pod},
{Group: "", Version: "v1", Kind: "Pod", Target: pod2, Live: pod2},
}
resourceStatuses := initStatuses(resources)
healthStatus, err := setApplicationHealth(resources, resourceStatuses, lua.ResourceHealthOverrides{}, app, true)
require.NoError(t, err)
assert.Equal(t, health.HealthStatusHealthy, healthStatus)
}
func TestSetApplicationHealth_MissingResource_WithChildApp(t *testing.T) {
childApp := newAppLiveObj(health.HealthStatusUnknown)
pod := resourceFromFile("./testdata/pod-running-restart-always.yaml")
resources := []managedResource{
{Group: application.Group, Version: "v1alpha1", Kind: application.ApplicationKind, Target: childApp, Live: childApp},
{Group: "", Version: "v1", Kind: "Pod", Target: &pod},
}
resourceStatuses := initStatuses(resources)
healthStatus, err := setApplicationHealth(resources, resourceStatuses, lua.ResourceHealthOverrides{}, app, true)
require.NoError(t, err)
assert.Equal(t, health.HealthStatusHealthy, healthStatus)
}
func TestSetApplicationHealth_AllMissingResources(t *testing.T) {
pod := resourceFromFile("./testdata/pod-running-restart-always.yaml")
pod2 := pod.DeepCopy()
pod2.SetName("pod2")
resources := []managedResource{
{Group: "", Version: "v1", Kind: "Pod", Target: &pod},
{Group: "", Version: "v1", Kind: "Pod", Target: pod2},
}
resourceStatuses := initStatuses(resources)
healthStatus, err := setApplicationHealth(resources, resourceStatuses, lua.ResourceHealthOverrides{}, app, true)
require.NoError(t, err)
assert.Equal(t, health.HealthStatusMissing, healthStatus)
}
func TestSetApplicationHealth_AllMissingResources_WithHooks(t *testing.T) {
pod := resourceFromFile("./testdata/pod-running-restart-always.yaml")
pod2 := pod.DeepCopy()
pod2.SetName("pod2")
pod2.SetAnnotations(map[string]string{synccommon.AnnotationKeyHook: string(synccommon.HookTypeSync)})
resources := []managedResource{{
Group: "", Version: "v1", Kind: "Pod", Target: &pod,
}, {
Group: "", Version: "v1", Kind: "Pod", Target: pod2, Live: pod2,
}}
}, {}}
resourceStatuses := initStatuses(resources)
healthStatus, err := setApplicationHealth(resources, resourceStatuses, lua.ResourceHealthOverrides{}, app, true)
@@ -241,6 +149,32 @@ func TestSetApplicationHealth_HealthImproves(t *testing.T) {
}
}
func TestSetApplicationHealth_MissingResourceNoBuiltHealthCheck(t *testing.T) {
cm := resourceFromFile("./testdata/configmap.yaml")
resources := []managedResource{{
Group: "", Version: "v1", Kind: "ConfigMap", Target: &cm,
}}
resourceStatuses := initStatuses(resources)
t.Run("NoOverride", func(t *testing.T) {
healthStatus, err := setApplicationHealth(resources, resourceStatuses, lua.ResourceHealthOverrides{}, app, true)
require.NoError(t, err)
assert.Equal(t, health.HealthStatusHealthy, healthStatus)
assert.Equal(t, health.HealthStatusMissing, resourceStatuses[0].Health.Status)
})
t.Run("HasOverride", func(t *testing.T) {
healthStatus, err := setApplicationHealth(resources, resourceStatuses, lua.ResourceHealthOverrides{
lua.GetConfigMapKey(schema.GroupVersionKind{Version: "v1", Kind: "ConfigMap"}): appv1.ResourceOverride{
HealthLua: "some health check",
},
}, app, true)
require.NoError(t, err)
assert.Equal(t, health.HealthStatusMissing, healthStatus)
})
}
func newAppLiveObj(status health.HealthStatusCode) *unstructured.Unstructured {
app := appv1.Application{
ObjectMeta: metav1.ObjectMeta{
@@ -280,9 +214,9 @@ return hs`,
}
t.Run("ChildAppDegraded", func(t *testing.T) {
childApp := newAppLiveObj(health.HealthStatusDegraded)
degradedApp := newAppLiveObj(health.HealthStatusDegraded)
resources := []managedResource{{
Group: application.Group, Version: "v1alpha1", Kind: application.ApplicationKind, Live: childApp,
Group: application.Group, Version: "v1alpha1", Kind: application.ApplicationKind, Live: degradedApp,
}, {}}
resourceStatuses := initStatuses(resources)
@@ -292,21 +226,9 @@ return hs`,
})
t.Run("ChildAppMissing", func(t *testing.T) {
childApp := newAppLiveObj(health.HealthStatusMissing)
degradedApp := newAppLiveObj(health.HealthStatusMissing)
resources := []managedResource{{
Group: application.Group, Version: "v1alpha1", Kind: application.ApplicationKind, Live: childApp,
}, {}}
resourceStatuses := initStatuses(resources)
healthStatus, err := setApplicationHealth(resources, resourceStatuses, overrides, app, true)
require.NoError(t, err)
assert.Equal(t, health.HealthStatusHealthy, healthStatus)
})
t.Run("ChildAppUnknown", func(t *testing.T) {
childApp := newAppLiveObj(health.HealthStatusUnknown)
resources := []managedResource{{
Group: application.Group, Version: "v1alpha1", Kind: application.ApplicationKind, Live: childApp,
Group: application.Group, Version: "v1alpha1", Kind: application.ApplicationKind, Live: degradedApp,
}, {}}
resourceStatuses := initStatuses(resources)

View File

@@ -270,7 +270,7 @@ func (m *appStateManager) GetRepoObjs(ctx context.Context, app *v1alpha1.Applica
Revision: revision,
SyncedRevision: syncedRevision,
NoRevisionCache: noRevisionCache,
Paths: path.GetAppRefreshPaths(app),
Paths: path.GetSourceRefreshPaths(app, source),
AppLabelKey: appLabelKey,
AppName: app.InstanceName(m.namespace),
Namespace: appNamespace,
@@ -883,9 +883,7 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1
Hook: isHook(obj),
RequiresPruning: targetObj == nil && liveObj != nil && isSelfReferencedObj,
RequiresDeletionConfirmation: targetObj != nil && resourceutil.HasAnnotationOption(targetObj, synccommon.AnnotationSyncOptions, synccommon.SyncOptionDeleteRequireConfirm) ||
liveObj != nil && resourceutil.HasAnnotationOption(liveObj, synccommon.AnnotationSyncOptions, synccommon.SyncOptionDeleteRequireConfirm) ||
targetObj != nil && resourceutil.HasAnnotationOption(targetObj, synccommon.AnnotationSyncOptions, synccommon.SyncOptionPruneRequireConfirm) ||
liveObj != nil && resourceutil.HasAnnotationOption(liveObj, synccommon.AnnotationSyncOptions, synccommon.SyncOptionPruneRequireConfirm),
liveObj != nil && resourceutil.HasAnnotationOption(liveObj, synccommon.AnnotationSyncOptions, synccommon.SyncOptionDeleteRequireConfirm),
}
if targetObj != nil {
resState.SyncWave = int64(syncwaves.Wave(targetObj))

View File

@@ -416,55 +416,6 @@ func TestCompareAppStateSkipHook(t *testing.T) {
assert.Empty(t, app.Status.Conditions)
}
func TestCompareAppStateRequireDeletion(t *testing.T) {
obj1 := NewPod()
obj1.SetName("my-pod-1")
obj1.SetAnnotations(map[string]string{"argocd.argoproj.io/sync-options": "Delete=confirm"})
obj2 := NewPod()
obj2.SetName("my-pod-2")
obj2.SetAnnotations(map[string]string{"argocd.argoproj.io/sync-options": "Prune=confirm"})
obj3 := NewPod()
obj3.SetName("my-pod-3")
app := newFakeApp()
data := fakeData{
apps: []runtime.Object{app},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{toJSON(t, obj1), toJSON(t, obj2), toJSON(t, obj3)},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
Revision: "abc123",
},
managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{
kube.GetResourceKey(obj1): obj1,
kube.GetResourceKey(obj2): obj2,
kube.GetResourceKey(obj3): obj3,
},
}
ctrl := newFakeController(t.Context(), &data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
revisions = append(revisions, "")
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false)
require.NoError(t, err)
assert.NotNil(t, compRes)
assert.NotNil(t, compRes.syncStatus)
assert.Equal(t, v1alpha1.SyncStatusCodeOutOfSync, compRes.syncStatus.Status)
assert.Len(t, compRes.resources, 3)
assert.Len(t, compRes.managedResources, 3)
assert.Empty(t, app.Status.Conditions)
countRequireDeletion := 0
for _, res := range compRes.resources {
if res.RequiresDeletionConfirmation {
countRequireDeletion++
}
}
assert.Equal(t, 2, countRequireDeletion)
}
// checks that ignore resources are detected, but excluded from status
func TestCompareAppStateCompareOptionIgnoreExtraneous(t *testing.T) {
pod := NewPod()

View File

@@ -30,7 +30,7 @@ to understand our toolchain and our continuous integration processes. It contain
## Quick start
If you want a quick start contributing to Argo CD, take a look at issues that are labeled with
[help wanted](https://github.com/argoproj/argo-cd/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22)
or
[good first issue](https://github.com/argoproj/argo-cd/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22).

View File

@@ -21,7 +21,7 @@ These are the upcoming releases dates:
| v3.1 | Monday, Jun. 16, 2025 | Monday, Aug. 4, 2025 | [Christian Hernandez](https://github.com/christianh814) | [Alexandre Gaudreault](https://github.com/agaudreault) | [checklist](https://github.com/argoproj/argo-cd/issues/23347) |
| v3.2 | Monday, Sep. 15, 2025 | Monday, Nov. 3, 2025 | [Nitish Kumar](https://github.com/nitishfy) | [Michael Crenshaw](https://github.com/crenshaw-dev) | [checklist](https://github.com/argoproj/argo-cd/issues/24539) |
| v3.3 | Monday, Dec. 15, 2025 | Monday, Feb. 2, 2026 | [Peter Jiang](https://github.com/pjiang-dev) | [Regina Voloshin](https://github.com/reggie-k) | [checklist](https://github.com/argoproj/argo-cd/issues/25211) |
| v3.4 | Monday, Mar. 16, 2026 | Monday, May. 4, 2026 | [Codey Jenkins](https://github.com/FourFifthsCode) | |
| v3.5 | Monday, Jun. 15, 2026 | Monday, Aug. 3, 2026 | | |
Actual release dates might differ from the plan by a few days.

View File

@@ -98,15 +98,11 @@ checks to see if the release came out correctly:
### If something went wrong
A new Argo CD release results in:
- A new GitHub release created
- Stable Git tag pointing to the release (if the release is the latest release)
- The Go packages for the release are published, allowing Argo CD code to be used as a dependency
- Docker images and SBOM artifacts are published
If something went wrong, damage should be limited. Depending on the steps that
have been performed, you will need to manually clean up.
Because of all the above dependencies, if a release failed, it is not safe to delete and recreate it.
Instead, create the next patch release (for example, if 3.2.4 failed, create 3.2.5 after fixing the problem, but don't recreate 3.2.4).
Upon successful publishing of the fixed release (3.2.5 in our example), manually copy the full release notes from the failed release (3.2.4 in our example), then update the failed release's notes to state that it is invalid and should not be used.
### Manual releasing

View File

@@ -222,18 +222,6 @@ Then you can build & push the image in one step:
DOCKER_PUSH=true make image
```
To speed up building of images you may use the DEV_IMAGE option, which builds the argocd binaries in the user's desktop environment
(instead of building everything in Docker) and copies them into the resulting image:
```bash
DEV_IMAGE=true DOCKER_PUSH=true make image
```
> [!NOTE]
> The first run of this build task may take a long time because it needs to build the base image first; however,
> once it's done, the build process should be much faster than a regular full image build in Docker.
#### Configure manifests for your image
With `IMAGE_REGISTRY`, `IMAGE_NAMESPACE` and `IMAGE_TAG` still set, run:

View File

@@ -62,30 +62,21 @@ K3d is a minimal Kubernetes distribution, in docker. Because it's running in a d
The configuration you will need for Argo CD virtualized toolchain:
1. For most users, the following command works to find the host IP address.
    * If you have perl
    ```pl
    perl -e '
    use strict;
    use Socket;

    my $target = sockaddr_in(53, inet_aton("8.8.8.8"));
    socket(my $s, AF_INET, SOCK_DGRAM, getprotobyname("udp")) or die $!;
    connect($s, $target) or die $!;
    my $local_addr = getsockname($s) or die $!;
    my (undef, $ip) = sockaddr_in($local_addr);
    print "IP: ", inet_ntoa($ip), "\n";
    '
    ```
    * If you don't
        * Try `ip route get 8.8.8.8` on Linux
        * Try `ifconfig`/`ipconfig` (and pick the ip address that feels right -- look for `192.168.x.x` or `10.x.x.x` addresses)

    Note that `8.8.8.8` is Google's Public DNS server; in most places it's likely to be accessible and thus is a good proxy for "which outbound address would my computer use", but you can replace it with a different IP address if necessary.

Keep in mind that this IP is dynamically assigned by the router, so if your router restarts for any reason, your IP might change.

View File

@@ -441,33 +441,3 @@ If you can avoid using these features, you can avoid triggering the error. The o
Excluding mutation webhooks from the diff could cause undesired diffing behavior.
3. **Disable mutation webhooks when using server-side diff**: see [server-side diff docs](user-guide/diff-strategies.md#mutation-webhooks)
for details about that feature. Disabling mutation webhooks may have undesired effects on sync behavior.
### How do I fix `grpc: error while marshaling: string field contains invalid UTF-8`?
On Kubernetes v1.34.x clusters, Argo CD components may stop working and pods may
fail to start with errors such as:
```
Error: grpc: error while marshaling: string field contains invalid UTF-8
```
This issue typically affects pods that reference Kubernetes secrets via environment variables, e.g.
```yaml
env:
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: argocd-redis
key: auth
optional: false
```
Kubernetes environment variables must be valid UTF-8 strings. In affected clusters, the argocd-redis Secret contained non-UTF-8 (binary) data, while other clusters used
ASCII-only values.
#### How do I fix the issue?
Inspect the decoded Redis password
```bash
kubectl get -n argocd secret argocd-redis -o json \
| jq -r '.data.auth' | base64 --decode | xxd
```
If the output contains non-printable characters or bytes outside the UTF-8 range, the Secret is invalid for use as an
environment variable. It is recommended to regenerate the secret using a UTF-8-safe password.

View File

@@ -80,7 +80,7 @@ For additional details, see [architecture overview](operator-manual/architecture
* CLI for automation and CI integration
* Webhook integration (GitHub, BitBucket, GitLab)
* Access tokens for automation
* PreSync, Sync, PostSync hooks to support complex application rollouts (e.g. blue/green & canary upgrades)
* Audit trails for application events and API calls
* Prometheus metrics
* Parameter overrides for overriding helm parameters in Git

View File

@@ -217,7 +217,7 @@ If you want to access a private repository, you must also provide the credential
In case of Bitbucket App Token, go with `bearerToken` section.
* `tokenRef`: A `Secret` name and key containing the app token to use for requests.
In case of self-signed BitBucket Server certificates, the following options can be useful:
* `insecure`: By default (false) - Skip checking the validity of the SCM's certificate - useful for self-signed TLS certificates.
* `caRef`: Optional `ConfigMap` name and key containing the BitBucket server certificates to trust - useful for self-signed TLS certificates. Possibly reference the ArgoCD CM holding the trusted certs.

View File

@@ -221,7 +221,7 @@ If you want to access a private repository, you must also provide the credential
In case of Bitbucket App Token, go with `bearerToken` section.
* `tokenRef`: A `Secret` name and key containing the app token to use for requests.
In case of self-signed BitBucket Server certificates, the following options can be useful:
* `insecure`: By default (false) - Skip checking the validity of the SCM's certificate - useful for self-signed TLS certificates.
* `caRef`: Optional `ConfigMap` name and key containing the BitBucket server certificates to trust - useful for self-signed TLS certificates. Possibly reference the ArgoCD CM holding the trusted certs.

View File

@@ -155,7 +155,7 @@ In this example, when applications are deleted:
This deletion order is useful for scenarios where you need to tear down dependent services in the correct sequence, such as deleting frontend services before backend dependencies.
### Example
#### Example
The following example illustrates how to stage a progressive sync over Applications with explicitly configured environment labels.

View File

@@ -56,12 +56,9 @@ spec:
path: guestbook
repoURL: https://github.com/argoproj/argocd-example-apps
targetRevision: HEAD
syncPolicy:
automated:
prune: true
```
This example sets the sync policy to automated with pruning enabled, so child apps are automatically created, synced, and deleted when the parent app's manifest changes. You may wish to disable automated sync for more control over when changes are applied. The finalizer ensures that child app resources are properly cleaned up on deletion.
Pin the revision to a specific Git commit SHA to make sure that, even if the child apps repo changes, the apps will only change when the parent app changes that revision. Alternatively, you can set it to HEAD or a branch name.
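For example, a pinned source might look like this (repo URL and SHA are illustrative):

```yaml
source:
  repoURL: https://github.com/argoproj/argocd-example-apps
  path: apps
  targetRevision: 53e28ff20cc530b9ada2173fbbd64d48338583ba  # a specific commit SHA
```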

View File

@@ -504,7 +504,7 @@ stringData:
username: my-username
```
A note on noProxy: Argo CD uses exec to interact with different tools such as helm and kustomize. Not all of these tools support the same noProxy syntax as the [httpproxy go package](https://cs.opensource.google/go/x/net/+/internal-branch.go1.21-vendor:http/httpproxy/proxy.go;l=38-50) does. In case you run in trouble with noProxy not being respected you might want to try using the full domain instead of a wildcard pattern or IP range to find a common syntax that all tools support.
## Clusters
@@ -630,7 +630,7 @@ This setup requires:
3. A role created for each cluster being added to Argo CD that is assumable by the Argo CD management role
4. An [Access Entry](https://docs.aws.amazon.com/eks/latest/userguide/access-entries.html) within each EKS cluster added to Argo CD that gives the cluster's role (from point 3) RBAC permissions
to perform actions within the cluster
- Or, alternatively, an entry within the `aws-auth` ConfigMap within the cluster added to Argo CD ([deprecated by EKS](https://docs.aws.amazon.com/eks/latest/userguide/auth-configmap.html))
#### Argo CD Management Role
@@ -880,7 +880,7 @@ associated EKS cluster.
**AWS Auth (Deprecated)**
Instead of using Access Entries, you may need to use the deprecated `aws-auth`.
If so, the `roleARN` of each managed cluster needs to be added to each respective cluster's `aws-auth` config map (see
[Enabling IAM principal access to your cluster](https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html)), as
@@ -1368,7 +1368,7 @@ data:
## Resource Custom Labels
Custom Labels configured with `resource.customLabels` (comma separated string) will be displayed in the UI (for any resource that defines them). Note that this requires a restart to the Argo CD Application Controller to take effect.
## Labels on Application Events

View File

@@ -12,7 +12,7 @@ There are several ways how Ingress can be configured.
The Ambassador Edge Stack can be used as a Kubernetes ingress controller with [automatic TLS termination](https://www.getambassador.io/docs/latest/topics/running/tls/#host) and routing capabilities for both the CLI and the UI.
The API server should be run with TLS disabled. Edit the `argocd-server` deployment to add the `--insecure` flag to the argocd-server command, or simply set `server.insecure: "true"` in the `argocd-cmd-params-cm` ConfigMap [as described here](server-commands/additional-configuration-method.md). Given the `argocd` CLI includes the port number in the request `host` header, 2 Mappings are required.
Note: Disabling TLS is not required if you are using grpc-web
### Option 1: Mapping CRD for Host-based Routing
@@ -415,7 +415,7 @@ apiVersion: v1
kind: Service
metadata:
annotations:
alb.ingress.kubernetes.io/backend-protocol-version: GRPC # This tells AWS to send traffic from the ALB using GRPC. Plain HTTP2 can be used, but the health checks won't be available because argo currently downgrades non-grpc calls to HTTP1
labels:
app: argogrpc
name: argogrpc
@@ -494,7 +494,7 @@ resources:
patches:
- path: ./patch.yml
```
And following lines as patch.yml
@@ -878,15 +878,15 @@ http {
}
```
## Gateway API Example
## Cilium Gateway API Example
This section discusses using Gateway API to expose the Argo CD server in various TLS configurations,
accommodating both HTTP and gRPC traffic, possibly using HTTP/2.
This section provides a working example of using Cilium Gateway API with Argo CD, including HTTP and gRPC routes.
### TLS termination at the Gateway
### Prerequisites
Assume the following cluster-wide `Gateway` resource,
that terminates the TLS connection with a certificate stored in a `Secret` in the same namespace:
- API server run with TLS disabled (set `server.insecure: "true"` in argocd-cmd-params-cm ConfigMap)
### Gateway Example
```yaml
apiVersion: gateway.networking.k8s.io/v1
@@ -894,12 +894,17 @@ kind: Gateway
metadata:
name: cluster-gateway
namespace: gateway
annotations:
cert-manager.io/issuer: cloudflare-dns-issuer
spec:
gatewayClassName: example
gatewayClassName: cilium
addresses:
- type: IPAddress
value: "192.168.0.130"
listeners:
- protocol: HTTPS
port: 443
name: https
name: https-cluster
hostname: "*.local.example.com"
allowedRoutes:
namespaces:
@@ -912,38 +917,7 @@ spec:
group: ""
```
To automate certificate management, `cert-manager` supports [gateway annotations](https://cert-manager.io/docs/usage/gateway/).
#### Securing traffic between Argo CD and the gateway
If your security requirements allow it, the Argo CD API server can be run with TLS disabled: pass the `--insecure` flag to the `argocd-server` command,
or set `server.insecure: "true"` in the `argocd-cmd-params-cm` ConfigMap [as described here](server-commands/additional-configuration-method.md).
It is also possible to keep TLS enabled, encrypting traffic between the gateway and the Argo CD API server, by using a [BackendTLSPolicy](https://gateway-api.sigs.k8s.io/api-types/backendtlspolicy/).
Consult the [Upstream TLS](https://gateway-api.sigs.k8s.io/guides/tls/#upstream-tls) documentation for more details.
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: BackendTLSPolicy
metadata:
name: tls-upstream-auth
namespace: argocd
spec:
targetRefs:
- kind: Service
name: argocd-server
group: ""
validation:
caCertificateRefs:
- kind: ConfigMap
name: argocd-server-ca-cert
group: ""
hostname: argocd-server.argocd.svc.cluster.local
```
#### Routing HTTP requests
### HTTPRoute Example
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
@@ -954,7 +928,6 @@ spec:
parentRefs:
- name: cluster-gateway
namespace: gateway
sectionName: https
hostnames:
- "argocd.local.example.com"
rules:
@@ -967,45 +940,7 @@ spec:
value: /
```
#### Routing gRPC requests
The `argocd` CLI operates at full capability when using gRPC over HTTP/2 to communicate with the API server, falling back to HTTP/1.1 via the `--grpc-web` flag.
gRPC can be configured using a `GRPCRoute`, and HTTP/2 requested as the application protocol on the `argocd-server` service:
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: GRPCRoute
metadata:
name: argocd-grpc-route
namespace: argocd
spec:
parentRefs:
- name: cluster-gateway
namespace: gateway
sectionName: https
hostnames:
- "grpc.argocd.local.example.com"
rules:
- backendRefs:
- name: argocd-server
port: 443
```
And in Argo CD's `values.yaml` (or [directly](https://kubernetes.io/docs/concepts/services-networking/service/#application-protocol) in the service manifest):
```
server:
service:
# Enable gRPC over HTTP/2
servicePortHttpsAppProtocol: kubernetes.io/h2c
```
##### Routing gRPC and HTTP through the same domain
Although officially [discouraged](https://gateway-api.sigs.k8s.io/api-types/grpcroute/#cross-serving),
attaching the `HTTPRoute` and `GRPCRoute` to the same domain may be supported by some implementations.
Matching request headers becomes necessary to disambiguate the destination, as shown below:
### GRPCRoute Example
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: GRPCRoute
@@ -1017,7 +952,7 @@ spec:
- name: cluster-gateway
namespace: gateway
hostnames:
- "grpc.argocd.local.example.com"
- "argocd.local.example.com"
rules:
- backendRefs:
- name: argocd-server
@@ -1027,57 +962,4 @@ spec:
- name: Content-Type
type: RegularExpression
value: "^application/grpc.*$"
```
### TLS passthrough
TLS can also be configured to terminate at the Argo CD API server.
This requires attaching a `TLSRoute` to the gateway,
which is part of the [Experimental](https://gateway-api.sigs.k8s.io/reference/1.4/specx/) Gateway API CRDs.
```yaml
kind: Gateway
metadata:
name: cluster-gateway
namespace: gateway
spec:
gatewayClassName: example
listeners:
- name: tls
port: 443
protocol: TLS
hostname: "argocd.example.com"
allowedRoutes:
namespaces:
from: All
kinds:
- kind: TLSRoute
tls:
mode: Passthrough
```
```yaml
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TLSRoute
metadata:
namespace: argocd
name: argocd-server-tlsroute
spec:
parentRefs:
- name: cluster-gateway
namespace: gateway
sectionName: tls
hostnames:
- argocd.example.com
rules:
- backendRefs:
- name: argocd-server
port: 443
```
The TLS certificates are implicit here,
and found by the Argo CD server in the `argocd-server-tls` secret.
Note that `cert-manager` does not support generating certificates for passthrough gateway listeners.

View File

@@ -264,7 +264,7 @@ Scraped at the `argocd-repo-server:8084/metrics` endpoint.
| `argocd_redis_request_total` | counter | Number of Kubernetes requests executed during application reconciliation. |
| `argocd_repo_pending_request_total` | gauge | Number of pending requests requiring repository lock |
| `argocd_oci_request_total` | counter | Number of OCI requests performed by repo server |
| `argocd_oci_request_duration_seconds` | histogram | Duration of OCI requests performed by the repo server. |
| `argocd_oci_test_repo_fail_total` | counter | Number of OCI test repo requests failures by repo server |
| `argocd_oci_get_tags_fail_total` | counter | Number of OCI get tags requests failures by repo server |
| `argocd_oci_digest_metadata_fail_total` | counter | Number of OCI digest metadata failures by repo server |

View File

@@ -62,6 +62,8 @@ The parameters for the PagerDuty configuration in the template generally match w
* `group` - Logical grouping of components of a service.
* `class` - The class/type of the event.
* `url` - The URL that should be used for the link "View in ArgoCD" in PagerDuty.
* `dedupKey` - A string used by PagerDuty to deduplicate and correlate events. Events with the same `dedupKey` will be grouped into the same incident. If omitted, PagerDuty will create a new incident for each event.
The `timestamp` and `custom_details` parameters are not currently supported.
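As a sketch (assuming the standard `argocd-notifications-cm` layout and a `pagerdutyv2` template block; the trigger name and field values are illustrative), a template can set a stable `dedupKey` so repeated sync failures for the same app collapse into one incident:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  template.app-sync-failed: |
    pagerdutyv2:
      summary: "Sync failed for {{.app.metadata.name}}"
      severity: error
      source: "{{.app.metadata.name}}"
      dedupKey: "{{.app.metadata.name}}-sync-failed"
```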

View File

@@ -78,6 +78,29 @@ metadata:
notifications.argoproj.io/subscribe.<trigger-name>.<webhook-name>: ""
```
4. TLS configuration (optional)
If your webhook server uses a custom TLS certificate, you can configure the notification service to trust it by adding the certificate to the `argocd-tls-certs-cm` ConfigMap as shown below:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-tls-certs-cm
data:
<hostname>: |
-----BEGIN CERTIFICATE-----
<TLS DATA>
-----END CERTIFICATE-----
```
*NOTE:*
*If the custom certificate is not trusted, you may encounter errors such as:*
```
Put \"https://...\": x509: certificate signed by unknown authority
```
*Adding the server's certificate to `argocd-tls-certs-cm` resolves this issue.*
## Examples
### Set GitHub commit status

View File

@@ -126,7 +126,7 @@ data:
## Ignoring updates for untracked resources
ArgoCD will only apply `ignoreResourceUpdates` configuration to tracked resources of an application. This means dependent resources, such as a `ReplicaSet` and `Pod` created by a `Deployment`, will not ignore any updates and trigger a reconcile of the application for any changes.
If you want to apply the `ignoreResourceUpdates` configuration to an untracked resource, you can add the
`argocd.argoproj.io/ignore-resource-updates=true` annotation to the dependent resource's manifest.
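For instance, a sketch of the annotation on a dependent resource (names illustrative):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-deployment-replicaset
  annotations:
    argocd.argoproj.io/ignore-resource-updates: "true"
```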

View File

@@ -1,46 +0,0 @@
# vX.Y to X.Z
## Breaking Changes
<!-- What changed that could affect your setup:
Deprecated flags / behavior removals
Deprecated fields
Other incompatible behavior or semantics changes between versions
(Add list here) -->
## Behavioral Improvements / Fixes
<!-- Operational stability and performance fixes:
Minor stability, reconciler improvements, bug fixes -->
## API Changes
<!-- Changes in API surface:
Any removal of fields or changed API defaults -->
## Security Changes
<!-- Changes in security behavior:
Any removal of sensitive fields or changed authentication defaults -->
## Deprecated Items
<!-- Flags / behavior to be removed in future releases
Any deprecated config API or CLI flags -->
## Kustomize Upgraded
<!-- Bundled Kustomize version bump:
Any breaking behavior
Notes about specific upstream behavior changes (e.g., namespace propagation fixes). -->
## Helm Upgraded
<!-- Bundled Helm version bump:
Any breaking behavior
Verify if your charts depend on any features tied to Helm versions. -->
## Custom Healthchecks Added
<!-- New built-in health checks added in this release:
Add any new CRD health support added in this version. -->

View File

@@ -1,2 +1,5 @@
This page is populated for released Argo CD versions. Use the version selector to view this table for a specific
version.
| Argo CD version | Kubernetes versions |
|-----------------|---------------------|
| 3.3 | v1.34, v1.33, v1.32, v1.31 |
| 3.2 | v1.34, v1.33, v1.32, v1.31 |
| 3.1 | v1.34, v1.33, v1.32, v1.31 |

View File

@@ -31,10 +31,8 @@ their own dedicated Certificate Authority.
| Connection | Strict TLS Parameter | Plain Text Parameter | Default Behavior |
|------------|---------------------|---------------------|------------------|
| `argocd-server``argocd-repo-server` | `--repo-server-strict-tls` | `--repo-server-plaintext` | Non-validating TLS |
| `argocd-server``argocd-dex-server` | `--dex-server-strict-tls` | `--dex-server-plaintext` | Non-validating TLS |
| `argocd-application-controller``argocd-repo-server` | `--repo-server-strict-tls` | `--repo-server-plaintext` | Non-validating TLS |
| `argocd-applicationset-controller``argocd-repo-server` | `--repo-server-strict-tls` | `--repo-server-plaintext` | Non-validating TLS |
| `argocd-notifications-controller``argocd-repo-server` | `--argocd-repo-server-strict-tls` | `--argocd-repo-server-plaintext` | Non-validating TLS |
### Certificate Priority (argocd-server only)
@@ -182,28 +180,27 @@ on how your workloads connect to the repository server.
### Configuring TLS to argocd-repo-server
The componenets `argocd-server`, `argocd-application-controller`, `argocd-notifications-controller`,
and `argocd-applicationset-controller` communicate with the `argocd-repo-server`
using a gRPC API over TLS. By default, `argocd-repo-server` generates a non-persistent,
self-signed certificate to use for its gRPC endpoint on startup. Because the
`argocd-repo-server` has no means to connect to the K8s control plane API, this certificate
is not available to outside consumers for verification. These components will use a
non-validating connection to the `argocd-repo-server` for this reason.
Both `argocd-server` and `argocd-application-controller` communicate with the
`argocd-repo-server` using a gRPC API over TLS. By default,
`argocd-repo-server` generates a non-persistent, self-signed certificate
to use for its gRPC endpoint on startup. Because the `argocd-repo-server` has
no means to connect to the K8s control plane API, this certificate is not available
to outside consumers for verification. Both
`argocd-server` and `argocd-application-controller` will use a non-validating
connection to the `argocd-repo-server` for this reason.
To change this behavior to be more secure by having these components validate the TLS certificate of the
To change this behavior to be more secure by having the `argocd-server` and
`argocd-application-controller` validate the TLS certificate of the
`argocd-repo-server` endpoint, the following steps need to be performed:
* Create a persistent TLS certificate to be used by `argocd-repo-server`, as
shown above
* Restart the `argocd-repo-server` pod(s)
* Modify the pod startup parameters for `argocd-server`, `argocd-application-controller`,
and `argocd-applicationset-controller` to include the
`--repo-server-strict-tls` parameter.
* Modify the pod startup parameters for `argocd-notifications-controller` to include the
`--argocd-repo-server-strict-tls` parameter
* Modify the pod startup parameters for `argocd-server` and
`argocd-application-controller` to include the `--repo-server-strict-tls`
parameter.
The `argocd-server`, `argocd-application-controller`, `argocd-notifications-controller`,
and `argocd-applicationset-controller` workloads will now
The `argocd-server` and `argocd-application-controller` workloads will now
validate the TLS certificate of the `argocd-repo-server` by using the
certificate stored in the `argocd-repo-server-tls` secret.
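As a sketch, the strict-TLS flag from the steps above could be added to the `argocd-server` container arguments with a patch like the following (the deployment layout and existing args shown here are assumptions, not the literal manifest):

```yaml
# Hypothetical patch: make argocd-server validate the repo server's TLS certificate.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-server
spec:
  template:
    spec:
      containers:
        - name: argocd-server
          args:
            - /usr/local/bin/argocd-server
            - --repo-server-strict-tls
```

The same flag applies to `argocd-application-controller`; `argocd-notifications-controller` uses `--argocd-repo-server-strict-tls` instead.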
@@ -248,8 +245,7 @@ secret.
In some scenarios where mTLS through sidecar proxies is involved (e.g.
in a service mesh), you may want to configure the connections between the
`argocd-server`, `argocd-application-controller`, `argocd-notifications-controller`,
and `argocd-applicationset-controller` to `argocd-repo-server`
`argocd-server` and `argocd-application-controller` to `argocd-repo-server`
to not use TLS at all.
In this case, you will need to:
@@ -259,18 +255,14 @@ In this case, you will need to:
Also, consider restricting listening addresses to the loopback interface by specifying
`--listen 127.0.0.1` parameter, so that the insecure endpoint is not exposed on
the pod's network interfaces, but still available to the sidecar container.
* Configure `argocd-server`, `argocd-application-controller`,
and `argocd-applicationset-controller` to not use TLS
* Configure `argocd-server` and `argocd-application-controller` to not use TLS
for connections to the `argocd-repo-server` by specifying the parameter
`--repo-server-plaintext` to the pod container's startup arguments
* Modify the pod startup parameters for `argocd-notifications-controller` to include the
`--argocd-repo-server-plaintext` parameter
* Configure `argocd-server` and `argocd-application-controller` to connect to
the sidecar instead of directly to the `argocd-repo-server` service by
specifying its address via the `--repo-server <address>` parameter
After this change, `argocd-server`, `argocd-application-controller`, `argocd-notifications-controller`,
and `argocd-applicationset-controller` will
After this change, `argocd-server` and `argocd-application-controller` will
use a plain text connection to the sidecar proxy, which will handle all aspects
of TLS to `argocd-repo-server`'s TLS sidecar proxy.
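Putting the plaintext steps together, the container arguments might look like this (the sidecar address `127.0.0.1:8081` is an assumption and depends on your mesh configuration):

```yaml
# Hypothetical args for argocd-server behind an mTLS sidecar proxy.
args:
  - /usr/local/bin/argocd-server
  - --repo-server-plaintext       # no TLS from this container; the sidecar handles it
  - --repo-server
  - 127.0.0.1:8081                # local sidecar address instead of the service name
```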

View File

@@ -6,7 +6,7 @@
### Hydration paths must now be non-root
Source hydration (with [Source Hydrator](../../../user-guide/source-hydrator/)) now requires that every application specify a non-root path.
Source hydration now requires that every application specify a non-root path.
Using the repository root (for example, "" or ".") is no longer supported. This change ensures
that hydration outputs are isolated to a dedicated subdirectory and prevents accidental overwrites
or deletions of important files stored at the root, such as CI pipelines, documentation, or configuration files.
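To illustrate, a compliant `sourceHydrator` spec pins hydration output to a subdirectory; the repository URL, branch names, and paths below are placeholders:

```yaml
# Sketch of a non-root hydration path (repo and paths are examples).
spec:
  sourceHydrator:
    drySource:
      repoURL: https://github.com/example/repo
      targetRevision: main
      path: apps/dry
    syncSource:
      targetBranch: environments/dev
      path: apps/my-app   # must be a non-root path; "" or "." is no longer accepted
```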

View File

@@ -14,7 +14,6 @@ The `--force-conflicts` flag allows the apply operation to take ownership of fie
When Argo CD is upgraded using an Argo CD Application, Server-Side Apply must be enabled in the Application spec:
``` yaml
syncPolicy:
syncOptions:
- ServerSideApply=true
```
@@ -28,6 +27,12 @@ When Argo CD is upgraded manually using plain manifests or Kustomize overlays, i
Users upgrading Argo CD manually using `helm upgrade` are not impacted by this change, since Helm does not use client-side apply and does not result in creation of the `last-applied` annotation.
#### Users who previously upgraded to 3.3.0 or 3.3.1
In some cases, after upgrading to one of those versions and syncing with Server-Side Apply, the following error occurred:
`one or more synchronization tasks completed unsuccessfully, reason: Failed to perform client-side apply migration: failed to perform client-side apply migration on manager kubectl-client-side-apply: error when patching "/dev/shm/2047509016": CustomResourceDefinition.apiextensions.k8s.io "applicationsets.argoproj.io" is invalid: metadata.annotations: Too long: may not be more than 262144 bytes`.
Users who configured the sync option `ClientSideApplyMigration=false` as a temporary remediation for the above error should remove it after upgrading to `3.3.2`. Disabling `ClientSideApplyMigration` risks conflicts between Kubernetes field managers in the future.
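For reference, the temporary remediation looked roughly like this in the Application spec (sketch only), and the flagged line is what should be deleted once on `3.3.2`:

```yaml
syncPolicy:
  syncOptions:
    - ServerSideApply=true
    - ClientSideApplyMigration=false  # temporary workaround; remove after upgrading to 3.3.2
```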
### Source Hydrator Now Tracks Hydration State Using Git Notes
Previously, Argo CD's Source Hydrator pushed a new hydrated commit for every DRY (source) commit, regardless of whether any manifest files (`manifest.yaml`) actually changed. This was necessary for the hydrator to track which DRY commit had last been hydrated: it embedded this information in the `hydrator.metadata` file's `drySha` field in each hydrated commit.
@@ -85,18 +90,16 @@ From now onwards, the Kubernetes server-side timeout is controlled by a separate
The `--self-heal-backoff-cooldown-seconds` flag of the `argocd-application-controller` has been deprecated and will be
removed in a future release.
## Helm Upgraded to 3.19.2
## Helm Upgraded to 3.19.4
Argo CD v3.3 upgrades the bundled Helm version to 3.19.2. There are no breaking changes in Helm 3.19.2 according to the
Argo CD v3.3 upgrades the bundled Helm version to 3.19.4. There are no breaking changes in Helm 3.19.4 according to the
[release notes](https://github.com/helm/helm/releases/tag/v3.19.0).
Helm 2.x is no longer supported as we are dropping the `--client` flag in calls to `helm version`.
## Kustomize Upgraded to 5.8.1
## Kustomize Upgraded to 5.8.0
Argo CD v3.3 upgrades the bundled Kustomize version from v5.7.0 to v5.8.0. According to the
Argo CD v3.3 upgrades the bundled Kustomize version from v5.7.0 to v5.8.1. According to the
[5.7.1](https://github.com/kubernetes-sigs/kustomize/releases/tag/kustomize%2Fv5.7.1)
and [5.8.0](https://github.com/kubernetes-sigs/kustomize/releases/tag/kustomize%2Fv5.8.0) release notes, there are no breaking changes.
and [5.8.1](https://github.com/kubernetes-sigs/kustomize/releases/tag/kustomize%2Fv5.8.1) release notes, there are no breaking changes.
However, note that Kustomize 5.7.1 introduces code to replace the `shlex` library used for parsing arguments in exec plugins.
If any existing manifests become corrupted, please follow the instructions in the
@@ -115,4 +118,3 @@ If you rely on Helm charts within kustomization files, please review the details
* [services.cloud.sap.com/ServiceBinding](https://github.com/argoproj/argo-cd/commit/51c9add05d9bc8f8fafc1631968eb853db53a904)
* [services.cloud.sap.com/ServiceInstance](https://github.com/argoproj/argo-cd/commit/51c9add05d9bc8f8fafc1631968eb853db53a904)
* [\_.cnrm.cloud.google.com/\_](https://github.com/argoproj/argo-cd/commit/30abebda3d930d93065eec8864aac7e0d56ae119)
* [grafana-org-operator.kubitus-project.gitlab.io/\_](https://github.com/argoproj/argo-cd/commit/ac1a2f8536701cc23684632a2d1bc3b20408b1a8)

View File

@@ -1,19 +0,0 @@
# v3.3 to 3.4
## Applications with `Missing` health status
The behavior of Application health status has changed to be more consistent and informative. Previously, Applications would show `Missing` health status inconsistently depending on whether missing resources had built-in or custom health checks defined.
**New behavior:**
- Applications now show `Missing` health **only when ALL resources are missing** (e.g., before the first sync)
- Individual missing resources no longer affect the Application's overall health status
- The health status now reflects the aggregated health of **existing resources**
- The `OutOfSync` status already indicates when resources are missing, making the health status redundant for individual missing resources
- If the defined resource health check is explicitly returning `Missing` for an existing resource, that will still be reflected in the overall Application health
**Impact:**
- Applications with some missing resources will now show the health of their existing resources (e.g., `Healthy`, `Progressing`, `Degraded`) instead of `Missing`
- Automation relying on the Application Health status to detect missing resources should now check the Sync status for `OutOfSync` instead, and optionally inspect individual resource health if needed.
- Users can now distinguish between an Application that has never been synced (all resources missing = `Missing` health) vs. an Application with some resources deleted (shows health of remaining resources)
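Automation adapting to this change would key off the Sync status rather than health. A minimal shell sketch (the helper name is illustrative; in practice the status would come from something like `kubectl get application <name> -n argocd -o jsonpath='{.status.sync.status}'`):

```shell
# check_sync: decide whether to alert on missing or drifted resources using the
# Sync status, since Application health no longer flags individual missing resources.
check_sync() {
  if [ "$1" = "OutOfSync" ]; then
    echo "alert"
  else
    echo "ok"
  fi
}

check_sync "OutOfSync"   # prints "alert"
check_sync "Synced"      # prints "ok"
```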

View File

@@ -39,8 +39,6 @@ kubectl apply -n argocd --server-side --force-conflicts -f https://raw.githubuse
<hr/>
- [v3.3 to v3.4](./3.3-3.4.md)
- [v3.2 to v3.3](./3.2-3.3.md)
- [v3.1 to v3.2](./3.1-3.2.md)
- [v3.0 to v3.1](./3.0-3.1.md)
- [v2.14 to v3.0](./2.14-3.0.md)

View File

@@ -137,4 +137,4 @@ More info: [RBAC Configuration](../rbac.md)
> [!NOTE]
> Defining policies is not supported on ArgoCD v2.
> To define policies, please [upgrade](../upgrading/overview.md)
> to v3.0.0 or later.

View File

@@ -1,83 +0,0 @@
# GitLab CI
GitLab is an OAuth identity provider which can be used in GitLab CI
to generate tokens that identify the repository and where it runs.
See: <https://docs.gitlab.com/ci/secrets/id_token_authentication>
You need to use OAuth 2.0 Token Exchange. Some identity providers, such as Dex, support this
out of the box.
## Using Dex
Edit the `argocd-cm` and configure the `dex.config` section:
```yaml
dex.config: |
  connectors:
    - type: oidc
      id: gitlab-ci
      name: GitLab CI
      config:
        issuer: https://gitlab.com
        # If using GitLab self-hosted, then use your GitLab issuer
        scopes: [openid]
        userNameKey: sub
        insecureSkipEmailVerified: true
```
ArgoCD automatically generates a static client named `argo-cd-cli` that you can use to get your token from GitLab CI.
Here is an example GitLab CI job that retrieves a valid Argo CD authentication token from Dex and uses it to perform operations with the CLI:
```yaml
deploy:
  id_tokens:
    GITLAB_OIDC_TOKEN:
      aud: https://argocd.example.com # Your ArgoCD URL
  script:
    - apt-get update && apt-get install -y jq curl
    - curl -sSL -o argocd-linux-amd64 https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
    - install -m 555 argocd-linux-amd64 /usr/local/bin/argocd
    - rm argocd-linux-amd64
    - |
      # Exchange GitLab token for Dex token
      DEX_URL="https://argocd.example.com/api/dex/token"
      DEX_TOKEN_RESPONSE=$(curl -sSf \
        "$DEX_URL" \
        --user argo-cd-cli: \
        --data-urlencode "connector_id=gitlab-ci" \
        --data-urlencode "grant_type=urn:ietf:params:oauth:grant-type:token-exchange" \
        --data-urlencode "scope=openid email profile federated:id" \
        --data-urlencode "requested_token_type=urn:ietf:params:oauth:token-type:access_token" \
        --data-urlencode "subject_token=$GITLAB_OIDC_TOKEN" \
        --data-urlencode "subject_token_type=urn:ietf:params:oauth:token-type:id_token")
      DEX_TOKEN=$(echo "$DEX_TOKEN_RESPONSE" | jq -r .access_token)
      # Use with ArgoCD CLI
      export ARGOCD_SERVER="argocd.example.com"
      export ARGOCD_OPTS="--grpc-web"
      export ARGOCD_AUTH_TOKEN="$DEX_TOKEN"
      argocd version
      argocd account get-user-info
      argocd app list
```
## Configuring RBAC
When using the ArgoCD global RBAC config map, you can define your `policy.csv` like so:
```yaml
configs:
  rbac:
    policy.csv: |
      # Specific project(infra) for specific apps
      p, project_path:my-repo/my-project:*, applications, get, infra/*, allow
      # Only main branch can sync under production project
      p, project_path:my-repo/my-project:ref_type:branch:ref:main, applications, sync, production/*, allow
```
More info: [RBAC Configuration](../rbac.md)

View File

@@ -25,12 +25,7 @@ Kubernetes), then the user effectively has the same privileges as that ServiceAc
exec.enabled: "true"
```
2. Restart Argo CD
### Permissions for Kubernetes <1.31
Starting in Kubernetes 1.31, the `get` privilege is enough to exec into a container, so no additional permissions are required. Enabling the web terminal on Kubernetes versions before 1.31 requires additional RBAC permissions.
1. Patch the `argocd-server` Role (if using namespaced Argo) or ClusterRole (if using clustered Argo) to allow `argocd-server`
to `exec` into pods
- apiGroups:
@@ -50,7 +45,7 @@ to `exec` into pods
kubectl patch clusterrole <argocd-server-clusterrole-name> --type='json' -p='[{"op": "add", "path": "/rules/-", "value": {"apiGroups": ["*"], "resources": ["pods/exec"], "verbs": ["create"]}}]'
```
2. Add RBAC rules to allow your users to `create` the `exec` resource, i.e.
p, role:myrole, exec, create, */*, allow

View File

@@ -55,7 +55,7 @@ to reduce amount of data returned by the API server and improve the UI responsiv
**Pagination Cursor**
It is proposed to add `offset` and `limit` fields for pagination support in Application List API.
The Watch API is a bit more complex. Both the Argo CD user interface and the CLI rely on the Watch API to display real-time updates of Argo CD applications.
The Watch API currently supports filtering by a project and an application name. In order to effectively
implement server side pagination for the Watch API we cannot rely on the order of the applications returned by the API server. Instead of
relying on the order it is proposed to rely on the application name and use it as a cursor for pagination. Both the Applications List and Watch

View File

@@ -81,7 +81,7 @@ When you are running `v1.4.x`, you can upgrade to `v1.4.3` by simply changing th
tags for `argocd-server`, `argocd-repo-server` and `argocd-controller` to `v1.4.3`.
The `v1.4.3` release does not contain additional functional bug fixes.
Likewise, when you are running `v1.5.x`, you can upgrade to `v1.5.2` by simply changing
the image tags for `argocd-server`, `argocd-repo-server` and `argocd-controller` to `v1.5.2`.
The `v1.5.2` release does not contain additional functional bug fixes.
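If the installation is managed with Kustomize, pinning those tags might look like this (the image name shown is an assumption; use whatever registry path your manifests reference):

```yaml
# Hypothetical kustomization.yaml override pinning the patch release.
images:
  - name: argoproj/argocd
    newTag: v1.5.2
```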

View File

@@ -13,64 +13,51 @@ recent minor releases.
| | Critical | High | Medium | Low |
|---:|:--------:|:----:|:------:|:---:|
| [go.mod](master/argocd-test.html) | 0 | 0 | 0 | 0 |
| [go.mod](master/argocd-test.html) | 0 | 0 | 5 | 0 |
| [ui/yarn.lock](master/argocd-test.html) | 0 | 0 | 1 | 2 |
| [dex:v2.43.0](master/ghcr.io_dexidp_dex_v2.43.0.html) | 0 | 0 | 0 | 5 |
| [haproxy:3.0.8-alpine](master/public.ecr.aws_docker_library_haproxy_3.0.8-alpine.html) | 0 | 0 | 0 | 5 |
| [redis:8.2.3-alpine](master/public.ecr.aws_docker_library_redis_8.2.3-alpine.html) | 0 | 0 | 0 | 2 |
| [argocd:latest](master/quay.io_argoproj_argocd_latest.html) | 0 | 0 | 8 | 8 |
| [argocd:latest](master/quay.io_argoproj_argocd_latest.html) | 0 | 0 | 5 | 9 |
| [install.yaml](master/argocd-iac-install.html) | - | - | - | - |
| [namespace-install.yaml](master/argocd-iac-namespace-install.html) | - | - | - | - |
### v3.3.0-rc3
### v3.2.1
| | Critical | High | Medium | Low |
|---:|:--------:|:----:|:------:|:---:|
| [go.mod](v3.3.0-rc3/argocd-test.html) | 0 | 0 | 0 | 0 |
| [ui/yarn.lock](v3.3.0-rc3/argocd-test.html) | 0 | 1 | 1 | 2 |
| [dex:v2.43.0](v3.3.0-rc3/ghcr.io_dexidp_dex_v2.43.0.html) | 0 | 0 | 0 | 5 |
| [haproxy:3.0.8-alpine](v3.3.0-rc3/public.ecr.aws_docker_library_haproxy_3.0.8-alpine.html) | 0 | 0 | 0 | 5 |
| [redis:8.2.3-alpine](v3.3.0-rc3/public.ecr.aws_docker_library_redis_8.2.3-alpine.html) | 0 | 0 | 0 | 2 |
| [argocd:v3.3.0-rc3](v3.3.0-rc3/quay.io_argoproj_argocd_v3.3.0-rc3.html) | 0 | 1 | 6 | 11 |
| [install.yaml](v3.3.0-rc3/argocd-iac-install.html) | - | - | - | - |
| [namespace-install.yaml](v3.3.0-rc3/argocd-iac-namespace-install.html) | - | - | - | - |
| [go.mod](v3.2.1/argocd-test.html) | 0 | 1 | 7 | 0 |
| [ui/yarn.lock](v3.2.1/argocd-test.html) | 0 | 0 | 3 | 2 |
| [dex:v2.43.0](v3.2.1/ghcr.io_dexidp_dex_v2.43.0.html) | 0 | 0 | 0 | 5 |
| [haproxy:3.0.8-alpine](v3.2.1/public.ecr.aws_docker_library_haproxy_3.0.8-alpine.html) | 0 | 0 | 0 | 5 |
| [redis:8.2.2-alpine](v3.2.1/public.ecr.aws_docker_library_redis_8.2.2-alpine.html) | 0 | 0 | 0 | 2 |
| [argocd:v3.2.1](v3.2.1/quay.io_argoproj_argocd_v3.2.1.html) | 0 | 0 | 5 | 9 |
| [install.yaml](v3.2.1/argocd-iac-install.html) | - | - | - | - |
| [namespace-install.yaml](v3.2.1/argocd-iac-namespace-install.html) | - | - | - | - |
### v3.2.5
### v3.1.9
| | Critical | High | Medium | Low |
|---:|:--------:|:----:|:------:|:---:|
| [go.mod](v3.2.5/argocd-test.html) | 0 | 0 | 0 | 0 |
| [ui/yarn.lock](v3.2.5/argocd-test.html) | 0 | 1 | 3 | 2 |
| [dex:v2.43.0](v3.2.5/ghcr.io_dexidp_dex_v2.43.0.html) | 0 | 0 | 0 | 5 |
| [haproxy:3.0.8-alpine](v3.2.5/public.ecr.aws_docker_library_haproxy_3.0.8-alpine.html) | 0 | 0 | 0 | 5 |
| [redis:8.2.2-alpine](v3.2.5/public.ecr.aws_docker_library_redis_8.2.2-alpine.html) | 0 | 0 | 0 | 2 |
| [argocd:v3.2.5](v3.2.5/quay.io_argoproj_argocd_v3.2.5.html) | 0 | 0 | 6 | 11 |
| [install.yaml](v3.2.5/argocd-iac-install.html) | - | - | - | - |
| [namespace-install.yaml](v3.2.5/argocd-iac-namespace-install.html) | - | - | - | - |
| [go.mod](v3.1.9/argocd-test.html) | 0 | 1 | 7 | 0 |
| [ui/yarn.lock](v3.1.9/argocd-test.html) | 1 | 0 | 3 | 2 |
| [dex:v2.43.0](v3.1.9/ghcr.io_dexidp_dex_v2.43.0.html) | 0 | 0 | 0 | 5 |
| [haproxy:3.0.8-alpine](v3.1.9/public.ecr.aws_docker_library_haproxy_3.0.8-alpine.html) | 0 | 0 | 0 | 5 |
| [redis:7.2.11-alpine](v3.1.9/public.ecr.aws_docker_library_redis_7.2.11-alpine.html) | 0 | 0 | 0 | 2 |
| [argocd:v3.1.9](v3.1.9/quay.io_argoproj_argocd_v3.1.9.html) | 0 | 0 | 4 | 12 |
| [install.yaml](v3.1.9/argocd-iac-install.html) | - | - | - | - |
| [namespace-install.yaml](v3.1.9/argocd-iac-namespace-install.html) | - | - | - | - |
### v3.1.11
### v3.0.20
| | Critical | High | Medium | Low |
|---:|:--------:|:----:|:------:|:---:|
| [go.mod](v3.1.11/argocd-test.html) | 0 | 0 | 0 | 0 |
| [ui/yarn.lock](v3.1.11/argocd-test.html) | 1 | 1 | 3 | 2 |
| [dex:v2.43.0](v3.1.11/ghcr.io_dexidp_dex_v2.43.0.html) | 0 | 0 | 0 | 5 |
| [haproxy:3.0.8-alpine](v3.1.11/public.ecr.aws_docker_library_haproxy_3.0.8-alpine.html) | 0 | 0 | 0 | 5 |
| [redis:7.2.11-alpine](v3.1.11/public.ecr.aws_docker_library_redis_7.2.11-alpine.html) | 0 | 0 | 0 | 2 |
| [argocd:v3.1.11](v3.1.11/quay.io_argoproj_argocd_v3.1.11.html) | 0 | 0 | 7 | 15 |
| [install.yaml](v3.1.11/argocd-iac-install.html) | - | - | - | - |
| [namespace-install.yaml](v3.1.11/argocd-iac-namespace-install.html) | - | - | - | - |
### v3.0.22
| | Critical | High | Medium | Low |
|---:|:--------:|:----:|:------:|:---:|
| [go.mod](v3.0.22/argocd-test.html) | 0 | 0 | 0 | 0 |
| [ui/yarn.lock](v3.0.22/argocd-test.html) | 1 | 2 | 4 | 4 |
| [dex:v2.41.1](v3.0.22/ghcr.io_dexidp_dex_v2.41.1.html) | 0 | 2 | 0 | 8 |
| [haproxy:3.0.8-alpine](v3.0.22/public.ecr.aws_docker_library_haproxy_3.0.8-alpine.html) | 0 | 0 | 0 | 5 |
| [redis:7.2.11-alpine](v3.0.22/public.ecr.aws_docker_library_redis_7.2.11-alpine.html) | 0 | 0 | 0 | 2 |
| [argocd:v3.0.22](v3.0.22/quay.io_argoproj_argocd_v3.0.22.html) | 0 | 0 | 7 | 15 |
| [redis:7.2.11-alpine](v3.0.22/redis_7.2.11-alpine.html) | 0 | 0 | 0 | 2 |
| [install.yaml](v3.0.22/argocd-iac-install.html) | - | - | - | - |
| [namespace-install.yaml](v3.0.22/argocd-iac-namespace-install.html) | - | - | - | - |
| [go.mod](v3.0.20/argocd-test.html) | 0 | 4 | 7 | 0 |
| [ui/yarn.lock](v3.0.20/argocd-test.html) | 1 | 1 | 4 | 4 |
| [dex:v2.41.1](v3.0.20/ghcr.io_dexidp_dex_v2.41.1.html) | 0 | 2 | 0 | 8 |
| [haproxy:3.0.8-alpine](v3.0.20/public.ecr.aws_docker_library_haproxy_3.0.8-alpine.html) | 0 | 0 | 0 | 5 |
| [redis:7.2.11-alpine](v3.0.20/public.ecr.aws_docker_library_redis_7.2.11-alpine.html) | 0 | 0 | 0 | 2 |
| [argocd:v3.0.20](v3.0.20/quay.io_argoproj_argocd_v3.0.20.html) | 0 | 0 | 4 | 12 |
| [redis:7.2.11-alpine](v3.0.20/redis_7.2.11-alpine.html) | 0 | 0 | 0 | 2 |
| [install.yaml](v3.0.20/argocd-iac-install.html) | - | - | - | - |
| [namespace-install.yaml](v3.0.20/argocd-iac-namespace-install.html) | - | - | - | - |

View File

@@ -456,7 +456,7 @@
<div class="header-wrap">
<h1 class="project__header__title">Snyk test report</h1>
<p class="timestamp">January 18th 2026, 12:27:35 am (UTC+00:00)</p>
<p class="timestamp">December 14th 2025, 12:26:47 am (UTC+00:00)</p>
</div>
<div class="source-panel">
<span>Scanned the following path:</span>
@@ -507,7 +507,7 @@
</li>
<li class="card__meta__item">
Line number: 30946
Line number: 30936
</li>
</ul>
@@ -553,7 +553,7 @@
</li>
<li class="card__meta__item">
Line number: 30631
Line number: 30621
</li>
</ul>
@@ -599,7 +599,7 @@
</li>
<li class="card__meta__item">
Line number: 30719
Line number: 30709
</li>
</ul>
@@ -645,7 +645,7 @@
</li>
<li class="card__meta__item">
Line number: 30754
Line number: 30744
</li>
</ul>
@@ -691,7 +691,7 @@
</li>
<li class="card__meta__item">
Line number: 30784
Line number: 30774
</li>
</ul>
@@ -737,7 +737,7 @@
</li>
<li class="card__meta__item">
Line number: 30802
Line number: 30792
</li>
</ul>
@@ -783,7 +783,7 @@
</li>
<li class="card__meta__item">
Line number: 30820
Line number: 30810
</li>
</ul>
@@ -829,7 +829,7 @@
</li>
<li class="card__meta__item">
Line number: 30842
Line number: 30832
</li>
</ul>
@@ -881,7 +881,7 @@
</li>
<li class="card__meta__item">
Line number: 32049
Line number: 32039
</li>
</ul>
@@ -933,7 +933,7 @@
</li>
<li class="card__meta__item">
Line number: 32392
Line number: 32382
</li>
</ul>
@@ -991,7 +991,7 @@
</li>
<li class="card__meta__item">
Line number: 31529
Line number: 31519
</li>
</ul>
@@ -1049,7 +1049,7 @@
</li>
<li class="card__meta__item">
Line number: 31845
Line number: 31835
</li>
</ul>
@@ -1107,7 +1107,7 @@
</li>
<li class="card__meta__item">
Line number: 31793
Line number: 31783
</li>
</ul>
@@ -1165,7 +1165,7 @@
</li>
<li class="card__meta__item">
Line number: 31907
Line number: 31897
</li>
</ul>
@@ -1223,7 +1223,7 @@
</li>
<li class="card__meta__item">
Line number: 32020
Line number: 32010
</li>
</ul>
@@ -1281,7 +1281,7 @@
</li>
<li class="card__meta__item">
Line number: 32044
Line number: 32034
</li>
</ul>
@@ -1339,7 +1339,7 @@
</li>
<li class="card__meta__item">
Line number: 32392
Line number: 32382
</li>
</ul>
@@ -1397,7 +1397,7 @@
</li>
<li class="card__meta__item">
Line number: 32103
Line number: 32093
</li>
</ul>
@@ -1455,7 +1455,7 @@
</li>
<li class="card__meta__item">
Line number: 32480
Line number: 32470
</li>
</ul>
@@ -1513,7 +1513,7 @@
</li>
<li class="card__meta__item">
Line number: 32890
Line number: 32880
</li>
</ul>
@@ -1565,7 +1565,7 @@
</li>
<li class="card__meta__item">
Line number: 31825
Line number: 31815
</li>
</ul>
@@ -1617,7 +1617,7 @@
</li>
<li class="card__meta__item">
Line number: 31529
Line number: 31519
</li>
</ul>
@@ -1669,7 +1669,7 @@
</li>
<li class="card__meta__item">
Line number: 31793
Line number: 31783
</li>
</ul>
@@ -1721,7 +1721,7 @@
</li>
<li class="card__meta__item">
Line number: 32020
Line number: 32010
</li>
</ul>
@@ -1779,7 +1779,7 @@
</li>
<li class="card__meta__item">
Line number: 31529
Line number: 31519
</li>
</ul>
@@ -1837,7 +1837,7 @@
</li>
<li class="card__meta__item">
Line number: 31793
Line number: 31783
</li>
</ul>
@@ -1895,7 +1895,7 @@
</li>
<li class="card__meta__item">
Line number: 31845
Line number: 31835
</li>
</ul>
@@ -1953,7 +1953,7 @@
</li>
<li class="card__meta__item">
Line number: 31907
Line number: 31897
</li>
</ul>
@@ -2011,7 +2011,7 @@
</li>
<li class="card__meta__item">
Line number: 32020
Line number: 32010
</li>
</ul>
@@ -2069,7 +2069,7 @@
</li>
<li class="card__meta__item">
Line number: 32044
Line number: 32034
</li>
</ul>
@@ -2127,7 +2127,7 @@
</li>
<li class="card__meta__item">
Line number: 32392
Line number: 32382
</li>
</ul>
@@ -2185,7 +2185,7 @@
</li>
<li class="card__meta__item">
Line number: 32103
Line number: 32093
</li>
</ul>
@@ -2243,7 +2243,7 @@
</li>
<li class="card__meta__item">
Line number: 32480
Line number: 32470
</li>
</ul>
@@ -2301,7 +2301,7 @@
</li>
<li class="card__meta__item">
Line number: 32890
Line number: 32880
</li>
</ul>
@@ -2357,7 +2357,7 @@
</li>
<li class="card__meta__item">
Line number: 31706
Line number: 31696
</li>
</ul>
@@ -2413,7 +2413,7 @@
</li>
<li class="card__meta__item">
Line number: 31853
Line number: 31843
</li>
</ul>
@@ -2469,7 +2469,7 @@
</li>
<li class="card__meta__item">
Line number: 31828
Line number: 31818
</li>
</ul>
@@ -2525,7 +2525,7 @@
</li>
<li class="card__meta__item">
Line number: 31952
Line number: 31942
</li>
</ul>
@@ -2581,7 +2581,7 @@
</li>
<li class="card__meta__item">
Line number: 32037
Line number: 32027
</li>
</ul>
@@ -2637,7 +2637,7 @@
</li>
<li class="card__meta__item">
Line number: 32051
Line number: 32041
</li>
</ul>
@@ -2693,7 +2693,7 @@
</li>
<li class="card__meta__item">
Line number: 32400
Line number: 32390
</li>
</ul>
@@ -2749,7 +2749,7 @@
</li>
<li class="card__meta__item">
Line number: 32365
Line number: 32355
</li>
</ul>
@@ -2805,7 +2805,7 @@
</li>
<li class="card__meta__item">
Line number: 32789
Line number: 32779
</li>
</ul>
@@ -2861,7 +2861,7 @@
</li>
<li class="card__meta__item">
Line number: 33165
Line number: 33155
</li>
</ul>

View File

@@ -456,7 +456,7 @@
<div class="header-wrap">
<h1 class="project__header__title">Snyk test report</h1>
<p class="timestamp">January 18th 2026, 12:27:45 am (UTC+00:00)</p>
<p class="timestamp">December 14th 2025, 12:26:56 am (UTC+00:00)</p>
</div>
<div class="source-panel">
<span>Scanned the following path:</span>

View File

@@ -7,7 +7,7 @@
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<title>Snyk test report</title>
<meta name="description" content="3 known vulnerabilities found in 6 vulnerable dependency paths.">
<meta name="description" content="8 known vulnerabilities found in 29 vulnerable dependency paths.">
<base target="_blank">
<link rel="icon" type="image/png" href="https://res.cloudinary.com/snyk/image/upload/v1468845142/favicon/favicon.png"
sizes="194x194">
@@ -247,11 +247,6 @@
border-color: #88879E;
}
.card .label--exploit {
background-color: #8B5A96;
border-color: #8B5A96;
}
.severity--low {
border-color: #88879E;
}
@@ -492,11 +487,12 @@
<div class="header-wrap">
<h1 class="project__header__title">Snyk test report</h1>
<p class="timestamp">January 18th 2026, 12:25:09 am (UTC+00:00)</p>
<p class="timestamp">December 14th 2025, 12:24:32 am (UTC+00:00)</p>
</div>
<div class="source-panel">
<span>Scanned the following paths:</span>
<ul>
<li class="paths">/argo-cd/argoproj/argo-cd/v3/go.mod (gomodules)</li>
<li class="paths">/argo-cd/argoproj/gitops-engine/gitops-engine/go.mod (gomodules)</li>
<li class="paths">/argo-cd/argoproj/argo-cd/get-previous-release/hack/get-previous-release/go.mod (gomodules)</li>
<li class="paths">/argo-cd/ui/yarn.lock (yarn)</li>
@@ -504,9 +500,9 @@
</div>
<div class="meta-counts">
<div class="meta-count"><span>3</span> <span>known vulnerabilities</span></div>
<div class="meta-count"><span>6 vulnerable dependency paths</span></div>
<div class="meta-count"><span>1016</span> <span>dependencies</span></div>
<div class="meta-count"><span>8</span> <span>known vulnerabilities</span></div>
<div class="meta-count"><span>29 vulnerable dependency paths</span></div>
<div class="meta-count"><span>2868</span> <span>dependencies</span></div>
</div><!-- .meta-counts -->
</div><!-- .layout-container--short -->
</header><!-- .project__header -->
@@ -515,15 +511,592 @@
<div class="layout-container" style="padding-top: 35px;">
<div class="cards--vuln filter--patch filter--ignore">
<div class="card card--vuln disclosure--not-new severity--medium" data-snyk-test="medium">
<h2 class="card__title">Regular Expression Denial of Service (ReDoS)</h2>
<h2 class="card__title">MPL-2.0 license</h2>
<div class="card__section">
<div class="card__labels">
<div class="label label--medium">
<span class="label__text">medium severity</span>
</div>
<div class="label label--exploit">
<span class="label__text">Exploit: Proof of Concept</span>
</div>
<hr/>
<ul class="card__meta">
<li class="card__meta__item">
Manifest file: /argo-cd/argoproj/argo-cd/v3 <span class="list-paths__item__arrow"></span> go.mod
</li>
<li class="card__meta__item">
Package Manager: golang
</li>
<li class="card__meta__item">
Module:
github.com/r3labs/diff/v3
</li>
<li class="card__meta__item">Introduced through:
github.com/argoproj/argo-cd/v3@0.0.0 and github.com/r3labs/diff/v3@3.0.2
</li>
</ul>
<hr/>
<h3 class="card__section__title">Detailed paths</h3>
<ul class="card__meta__paths">
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/argoproj/argo-cd/v3@0.0.0
<span class="list-paths__item__arrow"></span>
github.com/r3labs/diff/v3@3.0.2
</span>
</li>
</ul><!-- .list-paths -->
</div><!-- .card__section -->
<hr/>
<!-- Overview -->
<p>MPL-2.0 license</p>
<hr/>
<div class="cta card__cta">
<p><a href="https://snyk.io/vuln/snyk:lic:golang:github.com:r3labs:diff:v3:MPL-2.0">More about this vulnerability</a></p>
</div>
</div><!-- .card -->
<div class="card card--vuln disclosure--not-new severity--medium" data-snyk-test="medium">
<h2 class="card__title">MPL-2.0 license</h2>
<div class="card__section">
<div class="card__labels">
<div class="label label--medium">
<span class="label__text">medium severity</span>
</div>
</div>
<hr/>
<ul class="card__meta">
<li class="card__meta__item">
Manifest file: /argo-cd/argoproj/argo-cd/v3 <span class="list-paths__item__arrow"></span> go.mod
</li>
<li class="card__meta__item">
Package Manager: golang
</li>
<li class="card__meta__item">
Module:
github.com/hashicorp/go-version
</li>
<li class="card__meta__item">Introduced through:
github.com/argoproj/argo-cd/v3@0.0.0, code.gitea.io/sdk/gitea@0.22.1 and others
</li>
</ul>
<hr/>
<h3 class="card__section__title">Detailed paths</h3>
<ul class="card__meta__paths">
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/argoproj/argo-cd/v3@0.0.0
<span class="list-paths__item__arrow"></span>
code.gitea.io/sdk/gitea@0.22.1
<span class="list-paths__item__arrow"></span>
github.com/hashicorp/go-version@1.7.0
</span>
</li>
</ul><!-- .list-paths -->
</div><!-- .card__section -->
<hr/>
<!-- Overview -->
<p>MPL-2.0 license</p>
<hr/>
<div class="cta card__cta">
<p><a href="https://snyk.io/vuln/snyk:lic:golang:github.com:hashicorp:go-version:MPL-2.0">More about this vulnerability</a></p>
</div>
</div><!-- .card -->
<div class="card card--vuln disclosure--not-new severity--medium" data-snyk-test="medium">
<h2 class="card__title">MPL-2.0 license</h2>
<div class="card__section">
<div class="card__labels">
<div class="label label--medium">
<span class="label__text">medium severity</span>
</div>
</div>
<hr/>
<ul class="card__meta">
<li class="card__meta__item">
Manifest file: /argo-cd/argoproj/argo-cd/v3 <span class="list-paths__item__arrow"></span> go.mod
</li>
<li class="card__meta__item">
Package Manager: golang
</li>
<li class="card__meta__item">
Module:
github.com/hashicorp/go-retryablehttp
</li>
<li class="card__meta__item">Introduced through:
github.com/argoproj/argo-cd/v3@0.0.0 and github.com/hashicorp/go-retryablehttp@0.7.8
</li>
</ul>
<hr/>
<h3 class="card__section__title">Detailed paths</h3>
<ul class="card__meta__paths">
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/argoproj/argo-cd/v3@0.0.0
<span class="list-paths__item__arrow"></span>
github.com/hashicorp/go-retryablehttp@0.7.8
</span>
</li>
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/argoproj/argo-cd/v3@0.0.0
<span class="list-paths__item__arrow"></span>
github.com/argoproj/notifications-engine/pkg/services@#e2e7fe18381a
<span class="list-paths__item__arrow"></span>
github.com/hashicorp/go-retryablehttp@0.7.8
</span>
</li>
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/argoproj/argo-cd/v3@0.0.0
<span class="list-paths__item__arrow"></span>
gitlab.com/gitlab-org/api/client-go@1.8.1
<span class="list-paths__item__arrow"></span>
github.com/hashicorp/go-retryablehttp@0.7.8
</span>
</li>
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/argoproj/argo-cd/v3@0.0.0
<span class="list-paths__item__arrow"></span>
github.com/argoproj/notifications-engine/pkg/subscriptions@#e2e7fe18381a
<span class="list-paths__item__arrow"></span>
github.com/argoproj/notifications-engine/pkg/services@#e2e7fe18381a
<span class="list-paths__item__arrow"></span>
github.com/hashicorp/go-retryablehttp@0.7.8
</span>
</li>
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/argoproj/argo-cd/v3@0.0.0
<span class="list-paths__item__arrow"></span>
github.com/argoproj/notifications-engine/pkg/cmd@#e2e7fe18381a
<span class="list-paths__item__arrow"></span>
github.com/argoproj/notifications-engine/pkg/services@#e2e7fe18381a
<span class="list-paths__item__arrow"></span>
github.com/hashicorp/go-retryablehttp@0.7.8
</span>
</li>
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/argoproj/argo-cd/v3@0.0.0
<span class="list-paths__item__arrow"></span>
github.com/argoproj/notifications-engine/pkg/services@#e2e7fe18381a
<span class="list-paths__item__arrow"></span>
github.com/opsgenie/opsgenie-go-sdk-v2/client@1.2.23
<span class="list-paths__item__arrow"></span>
github.com/hashicorp/go-retryablehttp@0.7.8
</span>
</li>
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/argoproj/argo-cd/v3@0.0.0
<span class="list-paths__item__arrow"></span>
github.com/argoproj/notifications-engine/pkg/api@#e2e7fe18381a
<span class="list-paths__item__arrow"></span>
github.com/argoproj/notifications-engine/pkg/subscriptions@#e2e7fe18381a
<span class="list-paths__item__arrow"></span>
github.com/argoproj/notifications-engine/pkg/services@#e2e7fe18381a
<span class="list-paths__item__arrow"></span>
github.com/hashicorp/go-retryablehttp@0.7.8
</span>
</li>
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/argoproj/argo-cd/v3@0.0.0
<span class="list-paths__item__arrow"></span>
github.com/argoproj/notifications-engine/pkg/controller@#e2e7fe18381a
<span class="list-paths__item__arrow"></span>
github.com/argoproj/notifications-engine/pkg/subscriptions@#e2e7fe18381a
<span class="list-paths__item__arrow"></span>
github.com/argoproj/notifications-engine/pkg/services@#e2e7fe18381a
<span class="list-paths__item__arrow"></span>
github.com/hashicorp/go-retryablehttp@0.7.8
</span>
</li>
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/argoproj/argo-cd/v3@0.0.0
<span class="list-paths__item__arrow"></span>
github.com/argoproj/notifications-engine/pkg/subscriptions@#e2e7fe18381a
<span class="list-paths__item__arrow"></span>
github.com/argoproj/notifications-engine/pkg/services@#e2e7fe18381a
<span class="list-paths__item__arrow"></span>
github.com/opsgenie/opsgenie-go-sdk-v2/client@1.2.23
<span class="list-paths__item__arrow"></span>
github.com/hashicorp/go-retryablehttp@0.7.8
</span>
</li>
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/argoproj/argo-cd/v3@0.0.0
<span class="list-paths__item__arrow"></span>
github.com/argoproj/notifications-engine/pkg/cmd@#e2e7fe18381a
<span class="list-paths__item__arrow"></span>
github.com/argoproj/notifications-engine/pkg/services@#e2e7fe18381a
<span class="list-paths__item__arrow"></span>
github.com/opsgenie/opsgenie-go-sdk-v2/client@1.2.23
<span class="list-paths__item__arrow"></span>
github.com/hashicorp/go-retryablehttp@0.7.8
</span>
</li>
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/argoproj/argo-cd/v3@0.0.0
<span class="list-paths__item__arrow"></span>
github.com/argoproj/notifications-engine/pkg/api@#e2e7fe18381a
<span class="list-paths__item__arrow"></span>
github.com/argoproj/notifications-engine/pkg/subscriptions@#e2e7fe18381a
<span class="list-paths__item__arrow"></span>
github.com/argoproj/notifications-engine/pkg/services@#e2e7fe18381a
<span class="list-paths__item__arrow"></span>
github.com/opsgenie/opsgenie-go-sdk-v2/client@1.2.23
<span class="list-paths__item__arrow"></span>
github.com/hashicorp/go-retryablehttp@0.7.8
</span>
</li>
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/argoproj/argo-cd/v3@0.0.0
<span class="list-paths__item__arrow"></span>
github.com/argoproj/notifications-engine/pkg/controller@#e2e7fe18381a
<span class="list-paths__item__arrow"></span>
github.com/argoproj/notifications-engine/pkg/subscriptions@#e2e7fe18381a
<span class="list-paths__item__arrow"></span>
github.com/argoproj/notifications-engine/pkg/services@#e2e7fe18381a
<span class="list-paths__item__arrow"></span>
github.com/opsgenie/opsgenie-go-sdk-v2/client@1.2.23
<span class="list-paths__item__arrow"></span>
github.com/hashicorp/go-retryablehttp@0.7.8
</span>
</li>
</ul><!-- .list-paths -->
</div><!-- .card__section -->
<hr/>
<!-- Overview -->
<p>MPL-2.0 license</p>
<hr/>
<div class="cta card__cta">
<p><a href="https://snyk.io/vuln/snyk:lic:golang:github.com:hashicorp:go-retryablehttp:MPL-2.0">More about this vulnerability</a></p>
</div>
</div><!-- .card -->
<div class="card card--vuln disclosure--not-new severity--medium" data-snyk-test="medium">
<h2 class="card__title">MPL-2.0 license</h2>
<div class="card__section">
<div class="card__labels">
<div class="label label--medium">
<span class="label__text">medium severity</span>
</div>
</div>
<hr/>
<ul class="card__meta">
<li class="card__meta__item">
Manifest file: /argo-cd/argoproj/argo-cd/v3 <span class="list-paths__item__arrow"></span> go.mod
</li>
<li class="card__meta__item">
Package Manager: golang
</li>
<li class="card__meta__item">
Module:
github.com/hashicorp/go-cleanhttp
</li>
<li class="card__meta__item">Introduced through:
github.com/argoproj/argo-cd/v3@0.0.0, github.com/hashicorp/go-retryablehttp@0.7.8 and others
</li>
</ul>
<hr/>
<h3 class="card__section__title">Detailed paths</h3>
<ul class="card__meta__paths">
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/argoproj/argo-cd/v3@0.0.0
<span class="list-paths__item__arrow"></span>
github.com/hashicorp/go-retryablehttp@0.7.8
<span class="list-paths__item__arrow"></span>
github.com/hashicorp/go-cleanhttp@0.5.2
</span>
</li>
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/argoproj/argo-cd/v3@0.0.0
<span class="list-paths__item__arrow"></span>
gitlab.com/gitlab-org/api/client-go@1.8.1
<span class="list-paths__item__arrow"></span>
github.com/hashicorp/go-cleanhttp@0.5.2
</span>
</li>
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/argoproj/argo-cd/v3@0.0.0
<span class="list-paths__item__arrow"></span>
gitlab.com/gitlab-org/api/client-go@1.8.1
<span class="list-paths__item__arrow"></span>
github.com/hashicorp/go-retryablehttp@0.7.8
<span class="list-paths__item__arrow"></span>
github.com/hashicorp/go-cleanhttp@0.5.2
</span>
</li>
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/argoproj/argo-cd/v3@0.0.0
<span class="list-paths__item__arrow"></span>
github.com/argoproj/notifications-engine/pkg/services@#e2e7fe18381a
<span class="list-paths__item__arrow"></span>
github.com/opsgenie/opsgenie-go-sdk-v2/client@1.2.23
<span class="list-paths__item__arrow"></span>
github.com/hashicorp/go-retryablehttp@0.7.8
<span class="list-paths__item__arrow"></span>
github.com/hashicorp/go-cleanhttp@0.5.2
</span>
</li>
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/argoproj/argo-cd/v3@0.0.0
<span class="list-paths__item__arrow"></span>
github.com/argoproj/notifications-engine/pkg/subscriptions@#e2e7fe18381a
<span class="list-paths__item__arrow"></span>
github.com/argoproj/notifications-engine/pkg/services@#e2e7fe18381a
<span class="list-paths__item__arrow"></span>
github.com/opsgenie/opsgenie-go-sdk-v2/client@1.2.23
<span class="list-paths__item__arrow"></span>
github.com/hashicorp/go-retryablehttp@0.7.8
<span class="list-paths__item__arrow"></span>
github.com/hashicorp/go-cleanhttp@0.5.2
</span>
</li>
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/argoproj/argo-cd/v3@0.0.0
<span class="list-paths__item__arrow"></span>
github.com/argoproj/notifications-engine/pkg/cmd@#e2e7fe18381a
<span class="list-paths__item__arrow"></span>
github.com/argoproj/notifications-engine/pkg/services@#e2e7fe18381a
<span class="list-paths__item__arrow"></span>
github.com/opsgenie/opsgenie-go-sdk-v2/client@1.2.23
<span class="list-paths__item__arrow"></span>
github.com/hashicorp/go-retryablehttp@0.7.8
<span class="list-paths__item__arrow"></span>
github.com/hashicorp/go-cleanhttp@0.5.2
</span>
</li>
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/argoproj/argo-cd/v3@0.0.0
<span class="list-paths__item__arrow"></span>
github.com/argoproj/notifications-engine/pkg/api@#e2e7fe18381a
<span class="list-paths__item__arrow"></span>
github.com/argoproj/notifications-engine/pkg/subscriptions@#e2e7fe18381a
<span class="list-paths__item__arrow"></span>
github.com/argoproj/notifications-engine/pkg/services@#e2e7fe18381a
<span class="list-paths__item__arrow"></span>
github.com/opsgenie/opsgenie-go-sdk-v2/client@1.2.23
<span class="list-paths__item__arrow"></span>
github.com/hashicorp/go-retryablehttp@0.7.8
<span class="list-paths__item__arrow"></span>
github.com/hashicorp/go-cleanhttp@0.5.2
</span>
</li>
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/argoproj/argo-cd/v3@0.0.0
<span class="list-paths__item__arrow"></span>
github.com/argoproj/notifications-engine/pkg/controller@#e2e7fe18381a
<span class="list-paths__item__arrow"></span>
github.com/argoproj/notifications-engine/pkg/subscriptions@#e2e7fe18381a
<span class="list-paths__item__arrow"></span>
github.com/argoproj/notifications-engine/pkg/services@#e2e7fe18381a
<span class="list-paths__item__arrow"></span>
github.com/opsgenie/opsgenie-go-sdk-v2/client@1.2.23
<span class="list-paths__item__arrow"></span>
github.com/hashicorp/go-retryablehttp@0.7.8
<span class="list-paths__item__arrow"></span>
github.com/hashicorp/go-cleanhttp@0.5.2
</span>
</li>
</ul><!-- .list-paths -->
</div><!-- .card__section -->
<hr/>
<!-- Overview -->
<p>MPL-2.0 license</p>
<hr/>
<div class="cta card__cta">
<p><a href="https://snyk.io/vuln/snyk:lic:golang:github.com:hashicorp:go-cleanhttp:MPL-2.0">More about this vulnerability</a></p>
</div>
</div><!-- .card -->
<div class="card card--vuln disclosure--not-new severity--medium" data-snyk-test="medium">
<h2 class="card__title">MPL-2.0 license</h2>
<div class="card__section">
<div class="card__labels">
<div class="label label--medium">
<span class="label__text">medium severity</span>
</div>
</div>
<hr/>
<ul class="card__meta">
<li class="card__meta__item">
Manifest file: /argo-cd/argoproj/argo-cd/v3 <span class="list-paths__item__arrow"></span> go.mod
</li>
<li class="card__meta__item">
Package Manager: golang
</li>
<li class="card__meta__item">
Module:
github.com/gosimple/slug
</li>
<li class="card__meta__item">Introduced through:
github.com/argoproj/argo-cd/v3@0.0.0 and github.com/gosimple/slug@1.15.0
</li>
</ul>
<hr/>
<h3 class="card__section__title">Detailed paths</h3>
<ul class="card__meta__paths">
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/argoproj/argo-cd/v3@0.0.0
<span class="list-paths__item__arrow"></span>
github.com/gosimple/slug@1.15.0
</span>
</li>
</ul><!-- .list-paths -->
</div><!-- .card__section -->
<hr/>
<!-- Overview -->
<p>MPL-2.0 license</p>
<hr/>
<div class="cta card__cta">
<p><a href="https://snyk.io/vuln/snyk:lic:golang:github.com:gosimple:slug:MPL-2.0">More about this vulnerability</a></p>
</div>
</div><!-- .card -->
<div class="card card--vuln disclosure--not-new severity--medium" data-snyk-test="medium">
<h2 class="card__title">Regular Expression Denial of Service (ReDoS)</h2>
<div class="card__section">
<div class="card__labels">
<div class="label label--medium">
<span class="label__text">medium severity</span>
</div>
</div>
@@ -672,9 +1245,6 @@
<div class="label label--low">
<span class="label__text">low severity</span>
</div>
<div class="label label--exploit">
<span class="label__text">Exploit: Proof of Concept</span>
</div>
</div>
<hr/>
@@ -747,9 +1317,6 @@
<div class="label label--low">
<span class="label__text">low severity</span>
</div>
<div class="label label--exploit">
<span class="label__text">Exploit: Proof of Concept</span>
</div>
</div>
<hr/>

File diff suppressed because it is too large.


@@ -247,11 +247,6 @@
border-color: #88879E;
}
.card .label--exploit {
background-color: #8B5A96;
border-color: #8B5A96;
}
.severity--low {
border-color: #88879E;
}
@@ -492,7 +487,7 @@
<div class="header-wrap">
<h1 class="project__header__title">Snyk test report</h1>
<p class="timestamp">January 18th 2026, 12:25:25 am (UTC+00:00)</p>
<p class="timestamp">December 14th 2025, 12:24:48 am (UTC+00:00)</p>
</div>
<div class="source-panel">
<span>Scanned the following path:</span>
@@ -529,9 +524,6 @@
<div class="label label--low">
<span class="label__text">low severity</span>
</div>
<div class="label label--exploit">
<span class="label__text">Exploit: Not Defined</span>
</div>
</div>
<hr/>
@@ -721,9 +713,6 @@
<div class="label label--low">
<span class="label__text">low severity</span>
</div>
<div class="label label--exploit">
<span class="label__text">Exploit: Not Defined</span>
</div>
</div>
<hr/>
@@ -910,9 +899,6 @@
<div class="label label--low">
<span class="label__text">low severity</span>
</div>
<div class="label label--exploit">
<span class="label__text">Exploit: Not Defined</span>
</div>
</div>
<hr/>
@@ -1104,9 +1090,6 @@
<div class="label label--low">
<span class="label__text">low severity</span>
</div>
<div class="label label--exploit">
<span class="label__text">Exploit: Not Defined</span>
</div>
</div>
<hr/>
@@ -1231,9 +1214,6 @@
<div class="label label--low">
<span class="label__text">low severity</span>
</div>
<div class="label label--exploit">
<span class="label__text">Exploit: Not Defined</span>
</div>
</div>
<hr/>


@@ -247,11 +247,6 @@
border-color: #88879E;
}
.card .label--exploit {
background-color: #8B5A96;
border-color: #8B5A96;
}
.severity--low {
border-color: #88879E;
}
@@ -492,7 +487,7 @@
<div class="header-wrap">
<h1 class="project__header__title">Snyk test report</h1>
<p class="timestamp">January 18th 2026, 12:25:32 am (UTC+00:00)</p>
<p class="timestamp">December 14th 2025, 12:24:55 am (UTC+00:00)</p>
</div>
<div class="source-panel">
<span>Scanned the following path:</span>
@@ -529,9 +524,6 @@
<div class="label label--low">
<span class="label__text">low severity</span>
</div>
<div class="label label--exploit">
<span class="label__text">Exploit: Not Defined</span>
</div>
</div>
<hr/>
@@ -646,9 +638,6 @@
<div class="label label--low">
<span class="label__text">low severity</span>
</div>
<div class="label label--exploit">
<span class="label__text">Exploit: Not Defined</span>
</div>
</div>
<hr/>

File diff suppressed because it is too large.

@@ -456,7 +456,7 @@
<div class="header-wrap">
<h1 class="project__header__title">Snyk test report</h1>
<p class="timestamp">January 18th 2026, 12:38:09 am (UTC+00:00)</p>
<p class="timestamp">December 14th 2025, 12:34:22 am (UTC+00:00)</p>
</div>
<div class="source-panel">
<span>Scanned the following path:</span>


@@ -456,7 +456,7 @@
<div class="header-wrap">
<h1 class="project__header__title">Snyk test report</h1>
<p class="timestamp">January 18th 2026, 12:38:20 am (UTC+00:00)</p>
<p class="timestamp">December 14th 2025, 12:34:31 am (UTC+00:00)</p>
</div>
<div class="source-panel">
<span>Scanned the following path:</span>

File diff suppressed because it is too large.

@@ -247,11 +247,6 @@
border-color: #88879E;
}
.card .label--exploit {
background-color: #8B5A96;
border-color: #8B5A96;
}
.severity--low {
border-color: #88879E;
}
@@ -492,7 +487,7 @@
<div class="header-wrap">
<h1 class="project__header__title">Snyk test report</h1>
<p class="timestamp">January 18th 2026, 12:33:43 am (UTC+00:00)</p>
<p class="timestamp">December 14th 2025, 12:32:25 am (UTC+00:00)</p>
</div>
<div class="source-panel">
<span>Scanned the following path:</span>
@@ -529,9 +524,6 @@
<div class="label label--low">
<span class="label__text">low severity</span>
</div>
<div class="label label--exploit">
<span class="label__text">Exploit: Not Defined</span>
</div>
</div>
<hr/>
@@ -721,9 +713,6 @@
<div class="label label--low">
<span class="label__text">low severity</span>
</div>
<div class="label label--exploit">
<span class="label__text">Exploit: Not Defined</span>
</div>
</div>
<hr/>
@@ -910,9 +899,6 @@
<div class="label label--low">
<span class="label__text">low severity</span>
</div>
<div class="label label--exploit">
<span class="label__text">Exploit: Not Defined</span>
</div>
</div>
<hr/>
@@ -1104,9 +1090,6 @@
<div class="label label--low">
<span class="label__text">low severity</span>
</div>
<div class="label label--exploit">
<span class="label__text">Exploit: Not Defined</span>
</div>
</div>
<hr/>
@@ -1231,9 +1214,6 @@
<div class="label label--low">
<span class="label__text">low severity</span>
</div>
<div class="label label--exploit">
<span class="label__text">Exploit: Not Defined</span>
</div>
</div>
<hr/>


@@ -247,11 +247,6 @@
border-color: #88879E;
}
.card .label--exploit {
background-color: #8B5A96;
border-color: #8B5A96;
}
.severity--low {
border-color: #88879E;
}
@@ -492,7 +487,7 @@
<div class="header-wrap">
<h1 class="project__header__title">Snyk test report</h1>
<p class="timestamp">January 18th 2026, 12:36:24 am (UTC+00:00)</p>
<p class="timestamp">December 14th 2025, 12:32:42 am (UTC+00:00)</p>
</div>
<div class="source-panel">
<span>Scanned the following paths:</span>
@@ -521,9 +516,6 @@
<div class="label label--low">
<span class="label__text">low severity</span>
</div>
<div class="label label--exploit">
<span class="label__text">Exploit: Not Defined</span>
</div>
</div>
<hr/>
@@ -637,9 +629,6 @@
<div class="label label--low">
<span class="label__text">low severity</span>
</div>
<div class="label label--exploit">
<span class="label__text">Exploit: Not Defined</span>
</div>
</div>
<hr/>


@@ -247,11 +247,6 @@
border-color: #88879E;
}
.card .label--exploit {
background-color: #8B5A96;
border-color: #8B5A96;
}
.severity--low {
border-color: #88879E;
}
@@ -492,7 +487,7 @@
<div class="header-wrap">
<h1 class="project__header__title">Snyk test report</h1>
<p class="timestamp">January 18th 2026, 12:36:51 am (UTC+00:00)</p>
<p class="timestamp">December 14th 2025, 12:33:08 am (UTC+00:00)</p>
</div>
<div class="source-panel">
<span>Scanned the following paths:</span>
@@ -521,9 +516,6 @@
<div class="label label--low">
<span class="label__text">low severity</span>
</div>
<div class="label label--exploit">
<span class="label__text">Exploit: Not Defined</span>
</div>
</div>
<hr/>
@@ -637,9 +629,6 @@
<div class="label label--low">
<span class="label__text">low severity</span>
</div>
<div class="label label--exploit">
<span class="label__text">Exploit: Not Defined</span>
</div>
</div>
<hr/>

File diff suppressed because it is too large.

@@ -456,7 +456,7 @@
<div class="header-wrap">
<h1 class="project__header__title">Snyk test report</h1>
<p class="timestamp">January 18th 2026, 12:35:37 am (UTC+00:00)</p>
<p class="timestamp">December 14th 2025, 12:31:44 am (UTC+00:00)</p>
</div>
<div class="source-panel">
<span>Scanned the following path:</span>


@@ -456,7 +456,7 @@
<div class="header-wrap">
<h1 class="project__header__title">Snyk test report</h1>
<p class="timestamp">January 18th 2026, 12:35:46 am (UTC+00:00)</p>
<p class="timestamp">December 14th 2025, 12:31:54 am (UTC+00:00)</p>
</div>
<div class="source-panel">
<span>Scanned the following path:</span>

File diff suppressed because it is too large.

Some files were not shown because too many files have changed in this diff.