Compare commits

...

73 Commits

Author SHA1 Message Date
github-actions[bot]
206a6eeca5 Bump version to 2.14.21 on release-2.14 branch (#25161)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-11-04 09:56:45 -05:00
Jakub Ciolek
8b31544069 fix: make webhook payload handlers recover from panics (cherry-pick #24862 for 2.14) (#24926)
Signed-off-by: Jakub Ciolek <jakub@ciolek.dev>
2025-10-14 14:14:22 -04:00
Carlos R.F.
9b7bf3e16d chore(deps): bump redis from 7.0.14 to 7.2.11 to address vuln (release-2.14) (#24892)
Signed-off-by: Carlos Rodriguez-Fernandez <carlosrodrifernandez@gmail.com>
2025-10-09 17:08:18 -04:00
github-actions[bot]
879895af78 Bump version to 2.14.20 on release-2.14 branch (#24796)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-09-30 11:32:00 -04:00
Ville Vesilehto
1f98e3f989 Merge commit from fork
Fixed a race condition in repository credentials handling by
implementing deep copying of secrets before modification.
This prevents concurrent map read/write panics when multiple
goroutines access the same secret.

The fix ensures thread-safe operations by always operating on
copies rather than shared objects.

Signed-off-by: Ville Vesilehto <ville@vesilehto.fi>
2025-09-30 10:45:59 -04:00
Michael Crenshaw
741f00e2e3 Merge commit from fork
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-09-30 10:45:32 -04:00
Michael Crenshaw
e889f0a7ff Merge commit from fork
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-09-30 10:07:24 -04:00
Michael Crenshaw
7b219ee97f Merge commit from fork
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-09-30 09:45:21 -04:00
argo-cd-cherry-pick-bot[bot]
4ab9cd45bf fix: allow for backwards compatibility of durations defined in days (cherry-pick #24769 for 2.14) (#24772)
Signed-off-by: lplazas <felipe.plazas10@gmail.com>
Co-authored-by: lplazas <luis.plazas@workato.com>
2025-09-29 09:08:15 -04:00
github-actions[bot]
ba5b938858 Bump version to 2.14.19 on release-2.14 branch (#24700)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: alexmt <426437+alexmt@users.noreply.github.com>
2025-09-22 15:14:10 -07:00
Alexander Matyushentsev
4a133ce41b fix: limit number of resources in appset status (#24690) (#24694)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2025-09-22 14:57:44 -07:00
Alexandre Gaudreault
376525eea2 ci(release): only set latest release in github when latest (#24525) (#24688)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2025-09-22 11:47:51 -04:00
github-actions[bot]
9fa9bb8c89 Bump version to 2.14.18 on release-2.14 branch (#24637)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: blakepettersson <1227954+blakepettersson@users.noreply.github.com>
2025-09-18 04:19:04 -10:00
Alexander Matyushentsev
4359b3c774 fix: use informer in webhook handler to reduce memory usage (#24622) (#24628)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2025-09-18 08:13:03 +02:00
argo-cd-cherry-pick-bot[bot]
4f6686fc3f fix: correct post-delete finalizer removal when cluster not found (cherry-pick #24415 for 2.14) (#24591)
Signed-off-by: Pavel Aborilov <aborilov@gmail.com>
Co-authored-by: Pavel <aborilov@gmail.com>
2025-09-16 16:14:18 -07:00
Anand Francis Joseph
caa4dc1bd2 fix(util): Fix default key exchange algorithms used for SSH connection to be FIPS compliant (#24499)
Signed-off-by: anandf <anjoseph@redhat.com>
2025-09-10 11:21:45 -04:00
Nitish Kumar
981e7f762a fix(2.14): change the appset namespace to server namespace when generating appset (#24481)
Signed-off-by: nitishfy <justnitish06@gmail.com>
2025-09-09 14:00:35 +03:00
Fox Piacenti
3d76aa5daa docs: Update URL for HA manifests to stable. (#24456)
Signed-off-by: Fox Danger Piacenti <fox@opencraft.com>
2025-09-09 12:35:30 +03:00
github-actions[bot]
fca3a7ed2a Bump version to 2.14.17 on release-2.14 branch (#24403)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-09-04 13:31:21 -04:00
github-actions[bot]
2bb617d02a Bump version to 2.14.16 on release-2.14 branch (#24397)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-09-04 11:53:37 -04:00
Michael Crenshaw
72e2387795 fix(security): repository.GetDetailedProject exposes repo secrets (#24389)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
Signed-off-by: Dan Garfield <dan.garfield@octopus.com>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
Co-authored-by: Dan Garfield <dan.garfield@octopus.com>
2025-09-04 11:33:04 -04:00
Adrian Berger
f8eba3e6e9 fix(cherry-pick-2.14): custom resource health for flux helm repository of type oci (#24339)
Signed-off-by: Adrian Berger <adrian.berger@bedag.ch>
2025-09-02 15:19:27 -04:00
Nitish Kumar
510b77546e chore(cherry-pick-2.14): replace bitnami images (#24289)
Signed-off-by: nitishfy <justnitish06@gmail.com>
2025-08-27 14:37:51 +02:00
gcp-cherry-pick-bot[bot]
d77ecdf113 chore: adds all components in goreman run script (cherry-pick #23777) (#23790)
Signed-off-by: Patroklos Papapetrou <ppapapetrou76@gmail.com>
Co-authored-by: Papapetrou Patroklos <1743100+ppapapetrou76@users.noreply.github.com>
2025-08-24 21:37:14 -04:00
Ville Vesilehto
f9bb3b608e chore: update Go to 1.24.6 (release-2.14) (#24091)
Signed-off-by: Ville Vesilehto <ville@vesilehto.fi>
2025-08-11 11:24:15 +02:00
rumstead
5d0a4f0dd3 fix(appset): When Appset is deleted, the controller should reconcile applicationset #23723 (cherry-pick #23823) (#23832)
Signed-off-by: rumstead <37445536+rumstead@users.noreply.github.com>
Co-authored-by: sangeer <86688098+sangdammad@users.noreply.github.com>
2025-07-17 11:56:24 -04:00
Michael Crenshaw
8a3b2fdd2b fix(server): infer resource status health for apps-in-any-ns (#22944) (#23707)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-07-09 11:50:18 -04:00
gcp-cherry-pick-bot[bot]
ddb6073e52 fix: improves the ui message when an operation is terminated due to controller sync timeout (cherry-pick #23657) (#23673)
Signed-off-by: Patroklos Papapetrou <ppapapetrou76@gmail.com>
Co-authored-by: Papapetrou Patroklos <1743100+ppapapetrou76@users.noreply.github.com>
2025-07-07 10:49:51 +03:00
gcp-cherry-pick-bot[bot]
d95b710055 fix(controller): get commit server url from env (cherry-pick #23536) (#23543)
Signed-off-by: Alexej Disterhoft <alexej.disterhoft@redcare-pharmacy.com>
Co-authored-by: Alexej Disterhoft <alexej@disterhoft.de>
2025-06-24 14:41:11 -04:00
github-actions[bot]
6c7d6940cd Bump version to 2.14.15 on release-2.14 branch (#23427)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: rumstead <37445536+rumstead@users.noreply.github.com>
2025-06-16 13:59:54 -04:00
rumstead
ec5198949e fix(applicationset): requeue applicationset when application status changes (#23413)
Signed-off-by: rumstead <37445536+rumstead@users.noreply.github.com>
Co-authored-by: Dmitry Shmelev <avikez@gmail.com>
2025-06-16 11:46:59 -04:00
Alexandre Gaudreault
da2ef7db67 fix(sync): auto-sync loop when FailOnSharedResource (#23357)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2025-06-12 11:58:04 -04:00
github-actions[bot]
c6166fd531 Bump version to 2.14.14 on release-2.14 branch (#23299)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: blakepettersson <1227954+blakepettersson@users.noreply.github.com>
2025-06-09 11:37:55 -04:00
Ville Vesilehto
3c68b26d7a chore: upgrade Go from 1.23.4 to 1.24.4 (release-2.14) (#23294)
Signed-off-by: Ville Vesilehto <ville@vesilehto.fi>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-06-08 20:00:00 +02:00
Ville Vesilehto
5f890629a9 chore: upgrade mockery to v2.53.4 (release-2.14) (#23316)
Signed-off-by: Ville Vesilehto <ville@vesilehto.fi>
2025-06-08 07:58:46 -04:00
Ville Vesilehto
e24ee58f28 chore: upgrade golangci-lint to v2 (release-2.14) (#23305)
Signed-off-by: Ville Vesilehto <ville@vesilehto.fi>
2025-06-06 16:54:43 -04:00
gcp-cherry-pick-bot[bot]
14fa0e0d9f fix: parse project with applicationset resource (cherry-pick #23252) (#23268)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
Co-authored-by: Blake Pettersson <blake.pettersson@gmail.com>
2025-06-04 23:16:30 -10:00
Siddhesh Ghadi
2aceb1dc44 fix: update broken yarn.lock (#23212)
Signed-off-by: Siddhesh Ghadi <sghadi1203@gmail.com>
2025-05-30 10:41:41 -06:00
Soumya Ghosh Dastidar
a2361bf850 fix: add cooldown to prevent resetting autoheal exp backoff preemptively (cherry-pick #23057) (#23188)
Signed-off-by: Soumya Ghosh Dastidar <gdsoumya@gmail.com>
2025-05-28 22:40:38 +02:00
github-actions[bot]
5ad281ef56 Bump version to 2.14.13 on release-2.14 branch (#23183)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-05-28 08:45:02 -06:00
Michael Crenshaw
24d57220ca Merge commit from fork
Fix shadowed variable name

Signed-off-by: Ry0taK <49341894+Ry0taK@users.noreply.github.com>
Co-authored-by: Ry0taK <49341894+Ry0taK@users.noreply.github.com>
2025-05-28 08:20:48 -06:00
Peter Jiang
d213c305e4 chore: bump gitops-engine ssd fix (#23072)
Signed-off-by: Peter Jiang <peterjiang823@gmail.com>
2025-05-20 21:10:33 -04:00
github-actions[bot]
25e327bb9a Bump version to 2.14.12 on release-2.14 branch (#23064)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-05-20 11:29:01 -04:00
nathanlaceyraft
efe5d2956e chore(deps): resolve CVE GO-2025-3540, GO-2025-3503, GO-2025-3487 within 2.14.10 (#22709)
Signed-off-by: Nathan Lacey <nlacey@teamraft.com>
2025-05-20 10:57:12 -04:00
Alexandre Gaudreault
5bc6f4722b fix: infinite reconciliation loop when app is in error (#23047)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-05-20 10:48:47 -04:00
gcp-cherry-pick-bot[bot]
25235fbc2d fix(test): broken e2e test (cherry-pick #22975) (#23052)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-05-20 09:49:13 -04:00
gcp-cherry-pick-bot[bot]
3a9ab77537 fix(commit-server): apply image override (cherry-pick #22916) (#22918)
2025-05-09 21:54:33 -04:00
gcp-cherry-pick-bot[bot]
ced6a7822e fix(health): handle nil lastTransitionTime (#22897) (cherry-pick #22900) (#22909)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-05-09 10:46:25 -04:00
Blake Pettersson
fe93963970 fix: do not normalize resource tracking on live crds (#22722) - cherrypick 2.14 (#22746)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
Co-authored-by: Dan Garfield <dan@codefresh.io>
2025-05-07 16:35:29 -07:00
Mike Bryant
78e61ba71c fix: Only port-forward to ready pods (#10610) (cherry-pick #22794) (#22826)
Signed-off-by: Mike Bryant <mike.bryant@mettle.co.uk>
Co-authored-by: Dan Garfield <dan@codefresh.io>
2025-05-07 15:24:52 -07:00
gcp-cherry-pick-bot[bot]
f7ad2adf50 fix(ApplicationSet): Check strategy type to verify it's a progressive sync (cherry-pick #22563) (#22833)
Signed-off-by: Fernando Crespo Gravalos <fcrespo@fastly.com>
Signed-off-by: Fernando Crespo Grávalos <59588094+fcrespofastly@users.noreply.github.com>
Co-authored-by: Fernando Crespo Grávalos <59588094+fcrespofastly@users.noreply.github.com>
Co-authored-by: rumstead <37445536+rumstead@users.noreply.github.com>
2025-04-30 12:37:32 -07:00
Peter Jiang
b163de0784 fix: remove project from cache key for project scoped credentials (#22816)
Signed-off-by: Peter Jiang <peterjiang823@gmail.com>
2025-04-28 16:41:57 -04:00
github-actions[bot]
82831155c2 Bump version to 2.14.11 on release-2.14 branch (#22755)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-04-22 08:25:09 -07:00
gcp-cherry-pick-bot[bot]
91f54459d4 feat(hydrator): handle sourceHydrator fields from webhook (#19397) (cherry-pick #22485) (#22754)
Signed-off-by: daengdaengLee <gunho1020@gmail.com>
Signed-off-by: Alexy Mantha <alexy@mantha.dev>
Co-authored-by: Alexy Mantha <alexy.mantha@goto.com>
Co-authored-by: Kunho Lee <gunho1020@gmail.com>
2025-04-22 08:20:58 -07:00
gcp-cherry-pick-bot[bot]
0451723be1 fix(appset): generated app errors should use the default requeue (#21887) (cherry-pick #21936) (#22672)
Signed-off-by: rumstead <37445536+rumstead@users.noreply.github.com>
Co-authored-by: rumstead <37445536+rumstead@users.noreply.github.com>
Co-authored-by: Ishita Sequeira <46771830+ishitasequeira@users.noreply.github.com>
Co-authored-by: Blake Pettersson <blake.pettersson@gmail.com>
2025-04-18 11:48:08 -07:00
gcp-cherry-pick-bot[bot]
f6f7d29c11 fix(ui): avoid spurious error on hydration (#22506) (cherry-pick #22711) (#22714)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-04-17 16:40:39 -07:00
github-actions[bot]
5feb8c21f2 Bump version to 2.14.10 on release-2.14 branch (#22670)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: alexmt <426437+alexmt@users.noreply.github.com>
2025-04-14 17:38:21 -07:00
Hesham Sherif
4826fb0ab8 chore(deps): Update github.com/expr-lang/expr to v1.17.0 fixing CVE-2025-29786 (#22651)
Signed-off-by: heshamelsherif97 <heshamelsherif97@gmail.com>
2025-04-14 15:05:10 -07:00
gcp-cherry-pick-bot[bot]
3b308d66e2 fix: respect delete confirmation for argocd app deletion (cherry-pick #22657) (#22664)
Signed-off-by: nitishfy <justnitish06@gmail.com>
Co-authored-by: Nitish Kumar <justnitish06@gmail.com>
2025-04-14 08:36:01 -07:00
gcp-cherry-pick-bot[bot]
b31d700188 fix(cli): wrong variable to store --no-proxy value (cherry-pick #21226) (#22590)
Signed-off-by: Nathanael Liechti <technat@technat.ch>
Co-authored-by: Nathanael Liechti <technat@technat.ch>
2025-04-10 07:08:55 -07:00
gcp-cherry-pick-bot[bot]
be81419f27 fix: login return_url doesn't work with custom server paths (cherry-pick #21588) (#22594)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
Co-authored-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2025-04-07 13:14:27 -07:00
Aaron Hoffman
6b15a04509 fix: [cherry-pick] selfhealattemptscount needs to be reset at times (#22095, #20978) (#22583)
Signed-off-by: Michal Ryšavý <michal.rysavy@ext.csas.cz>
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
Co-authored-by: Michal Ryšavý <mrysavy@users.noreply.github.com>
Co-authored-by: Michal Ryšavý <michal.rysavy@ext.csas.cz>
Co-authored-by: Blake Pettersson <blake.pettersson@gmail.com>
2025-04-07 06:32:03 -07:00
github-actions[bot]
38985bdcd6 Bump version to 2.14.9 on release-2.14 branch (#22549)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: rumstead <37445536+rumstead@users.noreply.github.com>
2025-04-02 12:29:11 -07:00
Peter Jiang
c868711d03 chore(dep): bump gitops-engine 2.14 (#22520)
Signed-off-by: Peter Jiang <peterjiang823@gmail.com>
2025-03-29 10:44:07 -04:00
gcp-cherry-pick-bot[bot]
31a554568a fix: Check for semver constraint matching in application webhook handler (cherry-pick #21648) (#22508)
Signed-off-by: eadred <eadred77@googlemail.com>
Co-authored-by: Eadred <eadred77@googlemail.com>
2025-03-27 11:27:09 -04:00
github-actions[bot]
a7178be1c1 Bump version to 2.14.8 on release-2.14 branch (#22469)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-03-24 15:38:39 -04:00
Michael Crenshaw
9a9e62d392 fix(server): fully populate app destination before project checks (#22408) (#22426)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Signed-off-by: rumstead <37445536+rumstead@users.noreply.github.com>
Co-authored-by: rumstead <37445536+rumstead@users.noreply.github.com>
2025-03-24 11:20:07 -07:00
Michael Crenshaw
9f832cd099 chore(deps): bump github.com/golang-jwt/jwt to 4.5.2/5.2.2 (#22465)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-03-24 14:06:41 -04:00
Blake Pettersson
ec45e33800 fix(ui, rbac): project-roles (#21829) (2.14 backport) (#22461)
Signed-off-by: wyttime04 <vanessa80332@gmail.com>
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
Co-authored-by: wyttime04 <vanessa80332@gmail.com>
2025-03-24 10:47:26 -07:00
Atif Ali
872319e8e7 fix: handle annotated git tags correctly in repo server cache (#21771) (#22424)
Signed-off-by: Atif Ali <atali@redhat.com>
2025-03-21 09:49:35 -04:00
nmirasch
7acdaa96e0 fix: CVE-2025-26791 upgrading redoc dep to 2.4.0 to avoid DOMPurify b… (#21997)
Signed-off-by: nmirasch <neus.miras@gmail.com>
2025-03-21 07:33:19 -04:00
github-actions[bot]
d107d4e41a Bump version to 2.14.7 on release-2.14 branch (#22412)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-03-19 13:04:41 -04:00
Michael Crenshaw
39407827d3 chore(deps): bump gitops engine (#22405)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-03-19 10:51:18 -04:00
165 changed files with 4403 additions and 2469 deletions

View File

@@ -14,7 +14,7 @@ on:
env:
# Golang version to use across CI steps
# renovate: datasource=golang-version packageName=golang
GOLANG_VERSION: '1.23.3'
GOLANG_VERSION: '1.24.6'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
@@ -109,10 +109,10 @@ jobs:
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Run golangci-lint
uses: golangci/golangci-lint-action@971e284b6050e8a5849b72094c50ab08da042db8 # v6.1.1
uses: golangci/golangci-lint-action@4afd733a84b1f43292c63897423277bb7f4313a9 # v8.0.0
with:
# renovate: datasource=go packageName=github.com/golangci/golangci-lint versioning=regex:^v(?<major>\d+)\.(?<minor>\d+)\.(?<patch>\d+)?$
version: v1.62.2
version: v2.1.6
args: --verbose
test-go:
@@ -486,7 +486,7 @@ jobs:
run: |
docker pull ghcr.io/dexidp/dex:v2.41.1
docker pull argoproj/argo-cd-ci-builder:v1.0.0
docker pull redis:7.0.15-alpine
docker pull redis:7.2.11-alpine
- name: Create target directory for binaries in the build-process
run: |
mkdir -p dist

View File

@@ -53,7 +53,7 @@ jobs:
with:
# Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
# renovate: datasource=golang-version packageName=golang
go-version: 1.23.3
go-version: 1.24.6
platforms: ${{ needs.set-vars.outputs.platforms }}
push: false
@@ -70,7 +70,7 @@ jobs:
ghcr_image_name: ghcr.io/argoproj/argo-cd/argocd:${{ needs.set-vars.outputs.image-tag }}
# Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
# renovate: datasource=golang-version packageName=golang
go-version: 1.23.3
go-version: 1.24.6
platforms: ${{ needs.set-vars.outputs.platforms }}
push: true
secrets:

View File

@@ -11,7 +11,7 @@ permissions: {}
env:
# renovate: datasource=golang-version packageName=golang
GOLANG_VERSION: '1.23.3' # Note: go-version must also be set in job argocd-image.with.go-version
GOLANG_VERSION: '1.24.6' # Note: go-version must also be set in job argocd-image.with.go-version
jobs:
argocd-image:
@@ -25,13 +25,49 @@ jobs:
quay_image_name: quay.io/argoproj/argocd:${{ github.ref_name }}
# Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
# renovate: datasource=golang-version packageName=golang
go-version: 1.23.3
go-version: 1.24.6
platforms: linux/amd64,linux/arm64,linux/s390x,linux/ppc64le
push: true
secrets:
quay_username: ${{ secrets.RELEASE_QUAY_USERNAME }}
quay_password: ${{ secrets.RELEASE_QUAY_TOKEN }}
setup-variables:
name: Setup Release Variables
if: github.repository == 'argoproj/argo-cd'
runs-on: ubuntu-22.04
outputs:
is_pre_release: ${{ steps.var.outputs.is_pre_release }}
is_latest_release: ${{ steps.var.outputs.is_latest_release }}
steps:
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
- name: Setup variables
id: var
run: |
set -xue
# Fetch all tag information
git fetch --prune --tags --force
LATEST_RELEASE_TAG=$(git -c 'versionsort.suffix=-rc' tag --list --sort=version:refname | grep -v '-' | tail -n1)
PRE_RELEASE=false
# Check if latest tag is a pre-release
if echo ${{ github.ref_name }} | grep -E -- '-rc[0-9]+$';then
PRE_RELEASE=true
fi
IS_LATEST=false
# Ensure latest release tag matches github.ref_name
if [[ $LATEST_RELEASE_TAG == ${{ github.ref_name }} ]];then
IS_LATEST=true
fi
echo "is_pre_release=$PRE_RELEASE" >> $GITHUB_OUTPUT
echo "is_latest_release=$IS_LATEST" >> $GITHUB_OUTPUT
argocd-image-provenance:
needs: [argocd-image]
permissions:
@@ -50,15 +86,17 @@ jobs:
goreleaser:
needs:
- setup-variables
- argocd-image
- argocd-image-provenance
permissions:
contents: write # used for uploading assets
if: github.repository == 'argoproj/argo-cd'
runs-on: ubuntu-22.04
env:
GORELEASER_MAKE_LATEST: ${{ needs.setup-variables.outputs.is_latest_release }}
outputs:
hashes: ${{ steps.hash.outputs.hashes }}
steps:
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
@@ -140,7 +178,7 @@ jobs:
permissions:
contents: write # Needed for release uploads
outputs:
hashes: ${{ steps.sbom-hash.outputs.hashes}}
hashes: ${{ steps.sbom-hash.outputs.hashes }}
if: github.repository == 'argoproj/argo-cd'
runs-on: ubuntu-22.04
steps:
@@ -218,6 +256,7 @@ jobs:
post-release:
needs:
- setup-variables
- argocd-image
- goreleaser
- generate-sbom
@@ -226,6 +265,8 @@ jobs:
pull-requests: write # Needed to create PR for VERSION update.
if: github.repository == 'argoproj/argo-cd'
runs-on: ubuntu-22.04
env:
TAG_STABLE: ${{ needs.setup-variables.outputs.is_latest_release }}
steps:
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
@@ -239,27 +280,6 @@ jobs:
git config --global user.email 'ci@argoproj.com'
git config --global user.name 'CI'
- name: Check if tag is the latest version and not a pre-release
run: |
set -xue
# Fetch all tag information
git fetch --prune --tags --force
LATEST_TAG=$(git -c 'versionsort.suffix=-rc' tag --list --sort=version:refname | tail -n1)
PRE_RELEASE=false
# Check if latest tag is a pre-release
if echo $LATEST_TAG | grep -E -- '-rc[0-9]+$';then
PRE_RELEASE=true
fi
# Ensure latest tag matches github.ref_name & not a pre-release
if [[ $LATEST_TAG == ${{ github.ref_name }} ]] && [[ $PRE_RELEASE != 'true' ]];then
echo "TAG_STABLE=true" >> $GITHUB_ENV
else
echo "TAG_STABLE=false" >> $GITHUB_ENV
fi
- name: Update stable tag to latest version
run: |
git tag -f stable ${{ github.ref_name }}

View File

@@ -1,20 +1,12 @@
issues:
exclude:
- SA5011
max-issues-per-linter: 0
max-same-issues: 0
exclude-rules:
- path: '(.+)_test\.go'
linters:
- unparam
version: "2"
run:
timeout: 50m
linters:
default: none
enable:
- errcheck
- errorlint
- gocritic
- gofumpt
- goimports
- gosimple
- govet
- ineffassign
- misspell
@@ -25,35 +17,85 @@ linters:
- unparam
- unused
- usestdlibvars
- whitespace
linters-settings:
gocritic:
disabled-checks:
- appendAssign
- assignOp # Keep it disabled for readability
- badCond
- commentFormatting
- exitAfterDefer
- ifElseChain
- mapKey
- singleCaseSwitch
- typeSwitchVar
goimports:
local-prefixes: github.com/argoproj/argo-cd/v2
perfsprint:
# Optimizes even if it requires an int or uint type cast.
int-conversion: true
# Optimizes into `err.Error()` even if it is only equivalent for non-nil errors.
err-error: false
# Optimizes `fmt.Errorf`.
errorf: false
# Optimizes `fmt.Sprintf` with only one argument.
sprintf1: true
# Optimizes into strings concatenation.
strconcat: false
testifylint:
enable-all: true
disable:
- go-require
run:
timeout: 50m
- whitespace
settings:
gocritic:
disabled-checks:
- appendAssign
- assignOp
- badCond
- commentFormatting
- exitAfterDefer
- ifElseChain
- mapKey
- singleCaseSwitch
- typeSwitchVar
perfsprint:
int-conversion: true
err-error: false
errorf: false
sprintf1: true
strconcat: false
staticcheck:
checks:
- "all"
- "-ST1001" # dot imports are discouraged
- "-ST1003" # poorly chosen identifier
- "-ST1005" # incorrectly formatted error string
- "-ST1011" # poorly chosen name for variable of type time.Duration
- "-ST1012" # poorly chosen name for an error variable
- "-ST1016" # use consistent method receiver names
- "-ST1017" # don't use Yoda conditions
- "-ST1019" # importing the same package multiple times
- "-ST1023" # redundant type in variable declaration
- "-QF1001" # apply De Morgans law
- "-QF1003" # convert if/else-if chain to tagged switch
- "-QF1006" # lift if+break into loop condition
- "-QF1007" # merge conditional assignment into variable declaration
- "-QF1008" # omit embedded fields from selector expression
- "-QF1009" # use time.Time.Equal instead of == operator
- "-QF1011" # omit redundant type from variable declaration
- "-QF1012" # use fmt.Fprintf(x, ...) instead of x.Write(fmt.Sprintf(...))
testifylint:
enable-all: true
disable:
- go-require
- equal-values
- empty
- len
- expected-actual
- formatter
exclusions:
generated: lax
presets:
- comments
- common-false-positives
- legacy
- std-error-handling
rules:
- linters:
- unparam
path: (.+)_test\.go
- path: (.+)\.go$
text: SA5011
paths:
- third_party$
- builtin$
- examples$
issues:
max-issues-per-linter: 0
max-same-issues: 0
formatters:
enable:
- gofumpt
- goimports
settings:
goimports:
local-prefixes:
- github.com/argoproj/argo-cd/v2
exclusions:
generated: lax
paths:
- third_party$
- builtin$
- examples$

View File

@@ -46,16 +46,17 @@ builds:
archives:
- id: argocd-archive
builds:
- argocd-cli
- argocd-cli
name_template: |-
{{ .ProjectName }}-{{ .Os }}-{{ .Arch }}
format: binary
formats: [binary]
checksum:
name_template: 'cli_checksums.txt'
algorithm: sha256
release:
make_latest: '{{ .Env.GORELEASER_MAKE_LATEST }}'
prerelease: auto
draft: false
header: |
@@ -85,16 +86,14 @@ release:
If upgrading from a different minor version, be sure to read the [upgrading](https://argo-cd.readthedocs.io/en/stable/operator-manual/upgrading/overview/) documentation.
footer: |
**Full Changelog**: https://github.com/argoproj/argo-cd/compare/{{ .PreviousTag }}...{{ .Tag }}
<a href="https://argoproj.github.io/cd/"><img src="https://raw.githubusercontent.com/argoproj/argo-site/master/content/pages/cd/gitops-cd.png" width="25%" ></a>
snapshot: #### To be removed for PR
name_template: "2.6.0"
name_template: '2.6.0'
changelog:
use:
github
use: github
sort: asc
abbrev: 0
groups: # Regex use RE2 syntax as defined here: https://github.com/google/re2/wiki/Syntax.
@@ -117,7 +116,4 @@ changelog:
- '^test:'
- '^.*?Bump(\([[:word:]]+\))?.+$'
- '^.*?\[Bot\](\([[:word:]]+\))?.+$'
# yaml-language-server: $schema=https://goreleaser.com/static/schema.json

View File

@@ -4,7 +4,7 @@ ARG BASE_IMAGE=docker.io/library/ubuntu:24.04@sha256:3f85b7caad41a95462cf5b787d8
# Initial stage which pulls prepares build dependencies and CLI tooling we need for our final image
# Also used as the image in CI jobs so needs all dependencies
####################################################################################################
FROM docker.io/library/golang:1.23.3@sha256:d56c3e08fe5b27729ee3834854ae8f7015af48fd651cd25d1e3bcf3c19830174 AS builder
FROM docker.io/library/golang:1.24.6@sha256:2c89c41fb9efc3807029b59af69645867cfe978d2b877d475be0d72f6c6ce6f6 AS builder
RUN echo 'deb http://archive.debian.org/debian buster-backports main' >> /etc/apt/sources.list
@@ -101,7 +101,7 @@ RUN HOST_ARCH=$TARGETARCH NODE_ENV='production' NODE_ONLINE_ENV='online' NODE_OP
####################################################################################################
# Argo CD Build stage which performs the actual build of Argo CD binaries
####################################################################################################
FROM --platform=$BUILDPLATFORM docker.io/library/golang:1.23.3@sha256:d56c3e08fe5b27729ee3834854ae8f7015af48fd651cd25d1e3bcf3c19830174 AS argocd-build
FROM --platform=$BUILDPLATFORM docker.io/library/golang:1.24.6@sha256:2c89c41fb9efc3807029b59af69645867cfe978d2b877d475be0d72f6c6ce6f6 AS argocd-build
WORKDIR /go/src/github.com/argoproj/argo-cd

View File

@@ -1 +1 @@
2.14.6
2.14.21

View File

@@ -91,6 +91,7 @@ type ApplicationSetReconciler struct {
GlobalPreservedAnnotations []string
GlobalPreservedLabels []string
Metrics *metrics.ApplicationsetMetrics
MaxResourcesStatusCount int
}
// +kubebuilder:rbac:groups=argoproj.io,resources=applicationsets,verbs=get;list;watch;create;update;patch;delete
@@ -127,8 +128,8 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
}()
// Do not attempt to further reconcile the ApplicationSet if it is being deleted.
if applicationSetInfo.ObjectMeta.DeletionTimestamp != nil {
appsetName := applicationSetInfo.ObjectMeta.Name
if applicationSetInfo.DeletionTimestamp != nil {
appsetName := applicationSetInfo.Name
logCtx.Debugf("DeletionTimestamp is set on %s", appsetName)
deleteAllowed := utils.DefaultPolicy(applicationSetInfo.Spec.SyncPolicy, r.Policy, r.EnablePolicyOverride).AllowDelete()
if !deleteAllowed {
@@ -155,6 +156,7 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
// desiredApplications is the main list of all expected Applications from all generators in this appset.
desiredApplications, applicationSetReason, err := template.GenerateApplications(logCtx, applicationSetInfo, r.Generators, r.Renderer, r.Client)
if err != nil {
logCtx.Errorf("unable to generate applications: %v", err)
_ = r.setApplicationSetStatusCondition(ctx,
&applicationSetInfo,
argov1alpha1.ApplicationSetCondition{
@@ -164,7 +166,8 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
Status: argov1alpha1.ApplicationSetConditionStatusTrue,
}, parametersGenerated,
)
return ctrl.Result{RequeueAfter: ReconcileRequeueOnValidationError}, err
// In order for the controller SDK to respect RequeueAfter, the error must be nil
return ctrl.Result{RequeueAfter: ReconcileRequeueOnValidationError}, nil
}
parametersGenerated = true
@@ -208,16 +211,16 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
appSyncMap := map[string]bool{}
if r.EnableProgressiveSyncs {
if applicationSetInfo.Spec.Strategy == nil && len(applicationSetInfo.Status.ApplicationStatus) > 0 {
// If appset used progressive sync but stopped, clean up the progressive sync application statuses
if !isRollingSyncStrategy(&applicationSetInfo) && len(applicationSetInfo.Status.ApplicationStatus) > 0 {
// If an appset was previously syncing with a `RollingSync` strategy but it has switched to the default strategy, clean up the progressive sync application statuses
logCtx.Infof("Removing %v unnecessary AppStatus entries from ApplicationSet %v", len(applicationSetInfo.Status.ApplicationStatus), applicationSetInfo.Name)
err := r.setAppSetApplicationStatus(ctx, logCtx, &applicationSetInfo, []argov1alpha1.ApplicationSetApplicationStatus{})
if err != nil {
return ctrl.Result{}, fmt.Errorf("failed to clear previous AppSet application statuses for %v: %w", applicationSetInfo.Name, err)
}
} else if applicationSetInfo.Spec.Strategy != nil {
// appset uses progressive sync
} else if isRollingSyncStrategy(&applicationSetInfo) {
// The appset uses progressive sync with `RollingSync` strategy
for _, app := range currentApplications {
appMap[app.Name] = app
}
@@ -312,7 +315,7 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
if applicationSetInfo.RefreshRequired() {
delete(applicationSetInfo.Annotations, common.AnnotationApplicationSetRefresh)
err := r.Client.Update(ctx, &applicationSetInfo)
err := r.Update(ctx, &applicationSetInfo)
if err != nil {
logCtx.Warnf("error occurred while updating ApplicationSet: %v", err)
_ = r.setApplicationSetStatusCondition(ctx,
@@ -487,7 +490,7 @@ func (r *ApplicationSetReconciler) validateGeneratedApplications(ctx context.Con
}
appProject := &argov1alpha1.AppProject{}
err := r.Client.Get(ctx, types.NamespacedName{Name: app.Spec.Project, Namespace: r.ArgoCDNamespace}, appProject)
err := r.Get(ctx, types.NamespacedName{Name: app.Spec.Project, Namespace: r.ArgoCDNamespace}, appProject)
if err != nil {
if apierr.IsNotFound(err) {
errorsByIndex[i] = fmt.Errorf("application references project %s which does not exist", app.Spec.Project)
@@ -551,12 +554,13 @@ func (r *ApplicationSetReconciler) SetupWithManager(mgr ctrl.Manager, enableProg
return fmt.Errorf("error setting up with manager: %w", err)
}
ownsHandler := getOwnsHandlerPredicates(enableProgressiveSyncs)
appOwnsHandler := getApplicationOwnsHandler(enableProgressiveSyncs)
appSetOwnsHandler := getApplicationSetOwnsHandler(enableProgressiveSyncs)
return ctrl.NewControllerManagedBy(mgr).WithOptions(controller.Options{
MaxConcurrentReconciles: maxConcurrentReconciliations,
}).For(&argov1alpha1.ApplicationSet{}).
Owns(&argov1alpha1.Application{}, builder.WithPredicates(ownsHandler)).
}).For(&argov1alpha1.ApplicationSet{}, builder.WithPredicates(appSetOwnsHandler)).
Owns(&argov1alpha1.Application{}, builder.WithPredicates(appOwnsHandler)).
WithEventFilter(ignoreNotAllowedNamespaces(r.ApplicationSetNamespaces)).
Watches(
&corev1.Secret{},
@@ -564,7 +568,6 @@ func (r *ApplicationSetReconciler) SetupWithManager(mgr ctrl.Manager, enableProg
Client: mgr.GetClient(),
Log: log.WithField("type", "createSecretEventHandler"),
}).
// TODO: also watch Applications and respond on changes if we own them.
Complete(r)
}
@@ -623,7 +626,7 @@ func (r *ApplicationSetReconciler) createOrUpdateInCluster(ctx context.Context,
preservedAnnotations = append(preservedAnnotations, defaultPreservedAnnotations...)
for _, key := range preservedAnnotations {
if state, exists := found.ObjectMeta.Annotations[key]; exists {
if state, exists := found.Annotations[key]; exists {
if generatedApp.Annotations == nil {
generatedApp.Annotations = map[string]string{}
}
@@ -632,7 +635,7 @@ func (r *ApplicationSetReconciler) createOrUpdateInCluster(ctx context.Context,
}
for _, key := range preservedLabels {
if state, exists := found.ObjectMeta.Labels[key]; exists {
if state, exists := found.Labels[key]; exists {
if generatedApp.Labels == nil {
generatedApp.Labels = map[string]string{}
}
@@ -642,7 +645,7 @@ func (r *ApplicationSetReconciler) createOrUpdateInCluster(ctx context.Context,
// Preserve post-delete finalizers:
// https://github.com/argoproj/argo-cd/issues/17181
for _, finalizer := range found.ObjectMeta.Finalizers {
for _, finalizer := range found.Finalizers {
if strings.HasPrefix(finalizer, argov1alpha1.PostDeleteFinalizerName) {
if generatedApp.Finalizers == nil {
generatedApp.Finalizers = []string{}
@@ -651,10 +654,10 @@ func (r *ApplicationSetReconciler) createOrUpdateInCluster(ctx context.Context,
}
}
found.ObjectMeta.Annotations = generatedApp.Annotations
found.Annotations = generatedApp.Annotations
found.ObjectMeta.Finalizers = generatedApp.Finalizers
found.ObjectMeta.Labels = generatedApp.Labels
found.Finalizers = generatedApp.Finalizers
found.Labels = generatedApp.Labels
return controllerutil.SetControllerReference(&applicationSet, found, r.Scheme)
})
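The preservation loop above copies selected annotation/label keys from the existing Application back onto the generated one so user-managed metadata survives regeneration. A minimal, self-contained sketch of that merge, using plain string maps and an illustrative helper name (`preserve` is not a real controller function):

```go
package main

import "fmt"

// preserve copies the values of selected keys from the existing (found) map
// into the generated map, creating the generated map lazily — mirroring the
// shape of the controller's preserved-annotations loop. Names are illustrative.
func preserve(found, generated map[string]string, preservedKeys []string) map[string]string {
	for _, key := range preservedKeys {
		if v, exists := found[key]; exists {
			if generated == nil {
				generated = map[string]string{}
			}
			generated[key] = v
		}
	}
	return generated
}

func main() {
	found := map[string]string{"example.io/refresh": "normal", "team": "a"}
	generated := map[string]string{"team": "b"}
	out := preserve(found, generated, []string{"example.io/refresh"})
	fmt.Println(out["example.io/refresh"], out["team"]) // prints: normal b
}
```

Note the lazy map initialization: the generated Application may have nil metadata maps, so the copy must allocate before writing, exactly as the diff does.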
@@ -708,7 +711,7 @@ func (r *ApplicationSetReconciler) createInCluster(ctx context.Context, logCtx *
func (r *ApplicationSetReconciler) getCurrentApplications(ctx context.Context, applicationSet argov1alpha1.ApplicationSet) ([]argov1alpha1.Application, error) {
var current argov1alpha1.ApplicationList
err := r.Client.List(ctx, &current, client.MatchingFields{".metadata.controller": applicationSet.Name}, client.InNamespace(applicationSet.Namespace))
err := r.List(ctx, &current, client.MatchingFields{".metadata.controller": applicationSet.Name}, client.InNamespace(applicationSet.Namespace))
if err != nil {
return nil, fmt.Errorf("error retrieving applications: %w", err)
}
@@ -719,9 +722,6 @@ func (r *ApplicationSetReconciler) getCurrentApplications(ctx context.Context, a
// deleteInCluster will delete Applications that are currently on the cluster, but not in appList.
// The function must be called after all generators have been called and have generated applications
func (r *ApplicationSetReconciler) deleteInCluster(ctx context.Context, logCtx *log.Entry, applicationSet argov1alpha1.ApplicationSet, desiredApplications []argov1alpha1.Application) error {
// settingsMgr := settings.NewSettingsManager(context.TODO(), r.KubeClientset, applicationSet.Namespace)
// argoDB := db.NewDB(applicationSet.Namespace, settingsMgr, r.KubeClientset)
// clusterList, err := argoDB.ListClusters(ctx)
clusterList, err := utils.ListClusters(ctx, r.KubeClientset, r.ArgoCDNamespace)
if err != nil {
return fmt.Errorf("error listing clusters: %w", err)
@@ -756,7 +756,7 @@ func (r *ApplicationSetReconciler) deleteInCluster(ctx context.Context, logCtx *
continue
}
err = r.Client.Delete(ctx, &app)
err = r.Delete(ctx, &app)
if err != nil {
logCtx.WithError(err).Error("failed to delete Application")
if firstError != nil {
@@ -828,7 +828,7 @@ func (r *ApplicationSetReconciler) removeFinalizerOnInvalidDestination(ctx conte
if log.IsLevelEnabled(log.DebugLevel) {
utils.LogPatch(appLog, patch, updated)
}
if err := r.Client.Patch(ctx, updated, patch); err != nil {
if err := r.Patch(ctx, updated, patch); err != nil {
return fmt.Errorf("error updating finalizers: %w", err)
}
// Application must have updated list of finalizers
@@ -850,7 +850,7 @@ func (r *ApplicationSetReconciler) removeOwnerReferencesOnDeleteAppSet(ctx conte
for _, app := range applications {
app.SetOwnerReferences([]metav1.OwnerReference{})
err := r.Client.Update(ctx, &app)
err := r.Update(ctx, &app)
if err != nil {
return fmt.Errorf("error updating application: %w", err)
}
@@ -1006,8 +1006,14 @@ func appSyncEnabledForNextStep(appset *argov1alpha1.ApplicationSet, app argov1al
return true
}
func isRollingSyncStrategy(appset *argov1alpha1.ApplicationSet) bool {
// It's only RollingSync if the type specifically sets it
return appset.Spec.Strategy != nil && appset.Spec.Strategy.Type == "RollingSync" && appset.Spec.Strategy.RollingSync != nil
}
func progressiveSyncsRollingSyncStrategyEnabled(appset *argov1alpha1.ApplicationSet) bool {
return appset.Spec.Strategy != nil && appset.Spec.Strategy.RollingSync != nil && appset.Spec.Strategy.Type == "RollingSync" && len(appset.Spec.Strategy.RollingSync.Steps) > 0
// Progressive sync is enabled if the strategy is set to `RollingSync` and the steps slice is not empty
return isRollingSyncStrategy(appset) && len(appset.Spec.Strategy.RollingSync.Steps) > 0
}
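The two guards above layer on each other: a strategy only counts as `RollingSync` when the type is set explicitly and the `RollingSync` block exists, and progressive sync is only active when that strategy also has at least one step. A runnable sketch with simplified stand-in types (field and type names here are assumptions, not the real API types):

```go
package main

import "fmt"

// Minimal stand-ins for the ApplicationSet strategy fields.
type RollingSync struct{ Steps []string }

type Strategy struct {
	Type        string
	RollingSync *RollingSync
}

type AppSet struct{ Strategy *Strategy }

// isRollingSync mirrors the first guard: the strategy must be non-nil,
// explicitly typed "RollingSync", and carry a RollingSync block.
func isRollingSync(a *AppSet) bool {
	return a.Strategy != nil && a.Strategy.Type == "RollingSync" && a.Strategy.RollingSync != nil
}

// progressiveSyncEnabled additionally requires a non-empty steps slice.
func progressiveSyncEnabled(a *AppSet) bool {
	return isRollingSync(a) && len(a.Strategy.RollingSync.Steps) > 0
}

func main() {
	noSteps := &AppSet{Strategy: &Strategy{Type: "RollingSync", RollingSync: &RollingSync{}}}
	fmt.Println(isRollingSync(noSteps), progressiveSyncEnabled(noSteps)) // prints: true false
}
```

Splitting the check this way lets the reconciler clean up stale progressive-sync statuses (`!isRollingSyncStrategy`) even when the strategy block is present but no longer `RollingSync`.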
func isApplicationHealthy(app argov1alpha1.Application) bool {
@@ -1309,6 +1315,11 @@ func (r *ApplicationSetReconciler) updateResourcesStatus(ctx context.Context, lo
sort.Slice(statuses, func(i, j int) bool {
return statuses[i].Name < statuses[j].Name
})
if r.MaxResourcesStatusCount > 0 && len(statuses) > r.MaxResourcesStatusCount {
logCtx.Warnf("Truncating ApplicationSet %s resource status from %d to max allowed %d entries", appset.Name, len(statuses), r.MaxResourcesStatusCount)
statuses = statuses[:r.MaxResourcesStatusCount]
}
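The new `MaxResourcesStatusCount` guard caps the sorted status list before it is written to `.status.resources`, keeping very large ApplicationSets from bloating the status subresource. The truncation itself is a one-liner over the sorted slice; a hedged sketch (the helper name is illustrative):

```go
package main

import "fmt"

// truncateStatuses caps an already-sorted status slice at max entries,
// matching the shape of the MaxResourcesStatusCount guard above.
// A max of 0 (or negative) means no limit.
func truncateStatuses(statuses []string, max int) []string {
	if max > 0 && len(statuses) > max {
		return statuses[:max]
	}
	return statuses
}

func main() {
	fmt.Println(truncateStatuses([]string{"app1", "app2", "app3"}, 2)) // prints: [app1 app2]
	fmt.Println(truncateStatuses([]string{"app1"}, 0))                 // prints: [app1]
}
```

Because the slice is sorted by name first, truncation is deterministic: the same prefix of applications survives on every reconcile rather than an arbitrary subset.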
appset.Status.Resources = statuses
// DefaultRetry will retry 5 times with a backoff factor of 1, jitter of 0.1 and a duration of 10ms
err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
@@ -1456,7 +1467,7 @@ func syncApplication(application argov1alpha1.Application, prune bool) argov1alp
return application
}
func getOwnsHandlerPredicates(enableProgressiveSyncs bool) predicate.Funcs {
func getApplicationOwnsHandler(enableProgressiveSyncs bool) predicate.Funcs {
return predicate.Funcs{
CreateFunc: func(e event.CreateEvent) bool {
// if we are the owner and there is a create event, we most likely created it and do not need to
@@ -1493,8 +1504,8 @@ func getOwnsHandlerPredicates(enableProgressiveSyncs bool) predicate.Funcs {
if !isApp {
return false
}
requeue := shouldRequeueApplicationSet(appOld, appNew, enableProgressiveSyncs)
logCtx.WithField("requeue", requeue).Debugf("requeue: %t caused by application %s", requeue, appNew.Name)
requeue := shouldRequeueForApplication(appOld, appNew, enableProgressiveSyncs)
logCtx.WithField("requeue", requeue).Debugf("requeue caused by application %s", appNew.Name)
return requeue
},
GenericFunc: func(e event.GenericEvent) bool {
@@ -1511,13 +1522,13 @@ func getOwnsHandlerPredicates(enableProgressiveSyncs bool) predicate.Funcs {
}
}
// shouldRequeueApplicationSet determines when we want to requeue an ApplicationSet for reconciling based on an owned
// shouldRequeueForApplication determines when we want to requeue an ApplicationSet for reconciling based on an owned
// application change
// The applicationset controller owns a subset of the Application CR.
// We do not need to re-reconcile if parts of the application change outside the applicationset's control.
// For example, Application.Status.ReconciledAt, which gets updated by the application controller.
// Additionally, Application.ObjectMeta.ResourceVersion and Application.ObjectMeta.Generation, which are set by K8s.
func shouldRequeueApplicationSet(appOld *argov1alpha1.Application, appNew *argov1alpha1.Application, enableProgressiveSyncs bool) bool {
func shouldRequeueForApplication(appOld *argov1alpha1.Application, appNew *argov1alpha1.Application, enableProgressiveSyncs bool) bool {
if appOld == nil || appNew == nil {
return false
}
@@ -1527,9 +1538,9 @@ func shouldRequeueApplicationSet(appOld *argov1alpha1.Application, appNew *argov
// https://pkg.go.dev/reflect#DeepEqual
// ApplicationDestination has an unexported field so we can just use the == for comparison
if !cmp.Equal(appOld.Spec, appNew.Spec, cmpopts.EquateEmpty(), cmpopts.EquateComparable(argov1alpha1.ApplicationDestination{})) ||
!cmp.Equal(appOld.ObjectMeta.GetAnnotations(), appNew.ObjectMeta.GetAnnotations(), cmpopts.EquateEmpty()) ||
!cmp.Equal(appOld.ObjectMeta.GetLabels(), appNew.ObjectMeta.GetLabels(), cmpopts.EquateEmpty()) ||
!cmp.Equal(appOld.ObjectMeta.GetFinalizers(), appNew.ObjectMeta.GetFinalizers(), cmpopts.EquateEmpty()) {
!cmp.Equal(appOld.GetAnnotations(), appNew.GetAnnotations(), cmpopts.EquateEmpty()) ||
!cmp.Equal(appOld.GetLabels(), appNew.GetLabels(), cmpopts.EquateEmpty()) ||
!cmp.Equal(appOld.GetFinalizers(), appNew.GetFinalizers(), cmpopts.EquateEmpty()) {
return true
}
@@ -1550,4 +1561,90 @@ func shouldRequeueApplicationSet(appOld *argov1alpha1.Application, appNew *argov
return false
}
func getApplicationSetOwnsHandler(enableProgressiveSyncs bool) predicate.Funcs {
return predicate.Funcs{
CreateFunc: func(e event.CreateEvent) bool {
appSet, isApp := e.Object.(*argov1alpha1.ApplicationSet)
if !isApp {
return false
}
log.WithField("applicationset", appSet.QualifiedName()).Debugln("received create event")
// Always queue a new applicationset
return true
},
DeleteFunc: func(e event.DeleteEvent) bool {
appSet, isApp := e.Object.(*argov1alpha1.ApplicationSet)
if !isApp {
return false
}
log.WithField("applicationset", appSet.QualifiedName()).Debugln("received delete event")
// Always queue for the deletion of an applicationset
return true
},
UpdateFunc: func(e event.UpdateEvent) bool {
appSetOld, isAppSet := e.ObjectOld.(*argov1alpha1.ApplicationSet)
if !isAppSet {
return false
}
appSetNew, isAppSet := e.ObjectNew.(*argov1alpha1.ApplicationSet)
if !isAppSet {
return false
}
requeue := shouldRequeueForApplicationSet(appSetOld, appSetNew, enableProgressiveSyncs)
log.WithField("applicationset", appSetNew.QualifiedName()).
WithField("requeue", requeue).Debugln("received update event")
return requeue
},
GenericFunc: func(e event.GenericEvent) bool {
appSet, isApp := e.Object.(*argov1alpha1.ApplicationSet)
if !isApp {
return false
}
log.WithField("applicationset", appSet.QualifiedName()).Debugln("received generic event")
// Always requeue on a generic event for an applicationset
return true
},
}
}
// shouldRequeueForApplicationSet determines when we need to requeue an applicationset
func shouldRequeueForApplicationSet(appSetOld, appSetNew *argov1alpha1.ApplicationSet, enableProgressiveSyncs bool) bool {
if appSetOld == nil || appSetNew == nil {
return false
}
// Requeue if any ApplicationStatus.Status changed while the progressive sync strategy is enabled
if enableProgressiveSyncs {
if !cmp.Equal(appSetOld.Status.ApplicationStatus, appSetNew.Status.ApplicationStatus, cmpopts.EquateEmpty()) {
return true
}
}
// Only compare the applicationset spec, labels, finalizers, and deletionTimestamp, specifically avoiding
// the status field (annotations are handled separately below). Status is owned by the applicationset
// controller, and we do not need to requeue when it does bookkeeping
// NB: the ApplicationDestination comes from the ApplicationSpec being embedded
// in the ApplicationSetTemplate from the generators
if !cmp.Equal(appSetOld.Spec, appSetNew.Spec, cmpopts.EquateEmpty(), cmpopts.EquateComparable(argov1alpha1.ApplicationDestination{})) ||
!cmp.Equal(appSetOld.GetLabels(), appSetNew.GetLabels(), cmpopts.EquateEmpty()) ||
!cmp.Equal(appSetOld.GetFinalizers(), appSetNew.GetFinalizers(), cmpopts.EquateEmpty()) ||
!cmp.Equal(appSetOld.DeletionTimestamp, appSetNew.DeletionTimestamp, cmpopts.EquateEmpty()) {
return true
}
// Requeue only when the refresh annotation is newly added to the ApplicationSet.
// Changes to other annotations made simultaneously might be missed, but such cases are rare.
if !cmp.Equal(appSetOld.GetAnnotations(), appSetNew.GetAnnotations(), cmpopts.EquateEmpty()) {
_, oldHasRefreshAnnotation := appSetOld.Annotations[common.AnnotationApplicationSetRefresh]
_, newHasRefreshAnnotation := appSetNew.Annotations[common.AnnotationApplicationSetRefresh]
if oldHasRefreshAnnotation && !newHasRefreshAnnotation {
return false
}
return true
}
return false
}
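The annotation branch above deserves a closer look: when annotations differ, the predicate requeues except in the one case where the refresh annotation was just removed, because the controller itself strips that annotation after handling a refresh and must not requeue on its own cleanup. A runnable sketch of just that decision (the annotation key string is an assumption standing in for `common.AnnotationApplicationSetRefresh`):

```go
package main

import "fmt"

// Illustrative key; the real value comes from common.AnnotationApplicationSetRefresh.
const refreshAnnotation = "example.io/application-set-refresh"

func mapsEqual(a, b map[string]string) bool {
	if len(a) != len(b) {
		return false
	}
	for k, v := range a {
		if b[k] != v {
			return false
		}
	}
	return true
}

// shouldRequeueOnAnnotations reproduces the annotation branch: no change means
// no requeue; the refresh annotation being removed means no requeue (the
// controller removed it itself); any other annotation change requeues.
func shouldRequeueOnAnnotations(oldAnn, newAnn map[string]string) bool {
	if mapsEqual(oldAnn, newAnn) {
		return false
	}
	_, oldHas := oldAnn[refreshAnnotation]
	_, newHas := newAnn[refreshAnnotation]
	if oldHas && !newHas {
		return false
	}
	return true
}

func main() {
	added := shouldRequeueOnAnnotations(map[string]string{}, map[string]string{refreshAnnotation: "true"})
	removed := shouldRequeueOnAnnotations(map[string]string{refreshAnnotation: "true"}, map[string]string{})
	fmt.Println(added, removed) // prints: true false
}
```

As the comment in the diff notes, this trades a small blind spot (other annotations changed in the same update that removes the refresh annotation) for not looping on the controller's own writes.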
var _ handler.EventHandler = &clusterSecretEventHandler{}


@@ -1885,7 +1885,7 @@ func TestRequeueGeneratorFails(t *testing.T) {
}
res, err := r.Reconcile(context.Background(), req)
require.Error(t, err)
require.NoError(t, err)
assert.Equal(t, ReconcileRequeueOnValidationError, res.RequeueAfter)
}
@@ -6097,10 +6097,11 @@ func TestUpdateResourceStatus(t *testing.T) {
require.NoError(t, err)
for _, cc := range []struct {
name string
appSet v1alpha1.ApplicationSet
apps []v1alpha1.Application
expectedResources []v1alpha1.ResourceStatus
name string
appSet v1alpha1.ApplicationSet
apps []v1alpha1.Application
expectedResources []v1alpha1.ResourceStatus
maxResourcesStatusCount int
}{
{
name: "handles an empty application list",
@@ -6271,6 +6272,73 @@ func TestUpdateResourceStatus(t *testing.T) {
apps: []v1alpha1.Application{},
expectedResources: nil,
},
{
name: "truncates resources status list to the max allowed count",
appSet: v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "name",
Namespace: "argocd",
},
Status: v1alpha1.ApplicationSetStatus{
Resources: []v1alpha1.ResourceStatus{
{
Name: "app1",
Status: v1alpha1.SyncStatusCodeOutOfSync,
Health: &v1alpha1.HealthStatus{
Status: health.HealthStatusProgressing,
Message: "this is progressing",
},
},
{
Name: "app2",
Status: v1alpha1.SyncStatusCodeOutOfSync,
Health: &v1alpha1.HealthStatus{
Status: health.HealthStatusProgressing,
Message: "this is progressing",
},
},
},
},
},
apps: []v1alpha1.Application{
{
ObjectMeta: metav1.ObjectMeta{
Name: "app1",
},
Status: v1alpha1.ApplicationStatus{
Sync: v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeSynced,
},
Health: v1alpha1.HealthStatus{
Status: health.HealthStatusHealthy,
},
},
},
{
ObjectMeta: metav1.ObjectMeta{
Name: "app2",
},
Status: v1alpha1.ApplicationStatus{
Sync: v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeSynced,
},
Health: v1alpha1.HealthStatus{
Status: health.HealthStatusHealthy,
},
},
},
},
expectedResources: []v1alpha1.ResourceStatus{
{
Name: "app1",
Status: v1alpha1.SyncStatusCodeSynced,
Health: &v1alpha1.HealthStatus{
Status: health.HealthStatusHealthy,
},
},
},
maxResourcesStatusCount: 1,
},
} {
t.Run(cc.name, func(t *testing.T) {
kubeclientset := kubefake.NewSimpleClientset([]runtime.Object{}...)
@@ -6279,13 +6347,14 @@ func TestUpdateResourceStatus(t *testing.T) {
metrics := appsetmetrics.NewFakeAppsetMetrics(client)
r := ApplicationSetReconciler{
Client: client,
Scheme: scheme,
Recorder: record.NewFakeRecorder(1),
Generators: map[string]generators.Generator{},
ArgoDB: &dbmocks.ArgoDB{},
KubeClientset: kubeclientset,
Metrics: metrics,
Client: client,
Scheme: scheme,
Recorder: record.NewFakeRecorder(1),
Generators: map[string]generators.Generator{},
ArgoDB: &dbmocks.ArgoDB{},
KubeClientset: kubeclientset,
Metrics: metrics,
MaxResourcesStatusCount: cc.maxResourcesStatusCount,
}
err := r.updateResourcesStatus(context.TODO(), log.NewEntry(log.StandardLogger()), &cc.appSet, cc.apps)
@@ -6393,11 +6462,11 @@ func TestResourceStatusAreOrdered(t *testing.T) {
func TestOwnsHandler(t *testing.T) {
// progressive syncs do not affect create, delete, or generic
ownsHandler := getOwnsHandlerPredicates(true)
ownsHandler := getApplicationOwnsHandler(true)
assert.False(t, ownsHandler.CreateFunc(event.CreateEvent{}))
assert.True(t, ownsHandler.DeleteFunc(event.DeleteEvent{}))
assert.True(t, ownsHandler.GenericFunc(event.GenericEvent{}))
ownsHandler = getOwnsHandlerPredicates(false)
ownsHandler = getApplicationOwnsHandler(false)
assert.False(t, ownsHandler.CreateFunc(event.CreateEvent{}))
assert.True(t, ownsHandler.DeleteFunc(event.DeleteEvent{}))
assert.True(t, ownsHandler.GenericFunc(event.GenericEvent{}))
@@ -6577,7 +6646,7 @@ func TestOwnsHandler(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ownsHandler = getOwnsHandlerPredicates(tt.args.enableProgressiveSyncs)
ownsHandler = getApplicationOwnsHandler(tt.args.enableProgressiveSyncs)
assert.Equalf(t, tt.want, ownsHandler.UpdateFunc(tt.args.e), "UpdateFunc(%v)", tt.args.e)
})
}
@@ -6654,6 +6723,358 @@ func TestMigrateStatus(t *testing.T) {
}
}
func TestApplicationSetOwnsHandlerUpdate(t *testing.T) {
buildAppSet := func(annotations map[string]string) *v1alpha1.ApplicationSet {
return &v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Annotations: annotations,
},
}
}
tests := []struct {
name string
appSetOld crtclient.Object
appSetNew crtclient.Object
enableProgressiveSyncs bool
want bool
}{
{
name: "Different Spec",
appSetOld: &v1alpha1.ApplicationSet{
Spec: v1alpha1.ApplicationSetSpec{
Generators: []v1alpha1.ApplicationSetGenerator{
{List: &v1alpha1.ListGenerator{}},
},
},
},
appSetNew: &v1alpha1.ApplicationSet{
Spec: v1alpha1.ApplicationSetSpec{
Generators: []v1alpha1.ApplicationSetGenerator{
{Git: &v1alpha1.GitGenerator{}},
},
},
},
enableProgressiveSyncs: false,
want: true,
},
{
name: "Different Annotations",
appSetOld: buildAppSet(map[string]string{"key1": "value1"}),
appSetNew: buildAppSet(map[string]string{"key1": "value2"}),
enableProgressiveSyncs: false,
want: true,
},
{
name: "Different Labels",
appSetOld: &v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{"key1": "value1"},
},
},
appSetNew: &v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{"key1": "value2"},
},
},
enableProgressiveSyncs: false,
want: true,
},
{
name: "Different Finalizers",
appSetOld: &v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Finalizers: []string{"finalizer1"},
},
},
appSetNew: &v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Finalizers: []string{"finalizer2"},
},
},
enableProgressiveSyncs: false,
want: true,
},
{
name: "No Changes",
appSetOld: &v1alpha1.ApplicationSet{
Spec: v1alpha1.ApplicationSetSpec{
Generators: []v1alpha1.ApplicationSetGenerator{
{List: &v1alpha1.ListGenerator{}},
},
},
ObjectMeta: metav1.ObjectMeta{
Annotations: map[string]string{"key1": "value1"},
Labels: map[string]string{"key1": "value1"},
Finalizers: []string{"finalizer1"},
},
},
appSetNew: &v1alpha1.ApplicationSet{
Spec: v1alpha1.ApplicationSetSpec{
Generators: []v1alpha1.ApplicationSetGenerator{
{List: &v1alpha1.ListGenerator{}},
},
},
ObjectMeta: metav1.ObjectMeta{
Annotations: map[string]string{"key1": "value1"},
Labels: map[string]string{"key1": "value1"},
Finalizers: []string{"finalizer1"},
},
},
enableProgressiveSyncs: false,
want: false,
},
{
name: "annotation removed",
appSetOld: buildAppSet(map[string]string{
argocommon.AnnotationApplicationSetRefresh: "true",
}),
appSetNew: buildAppSet(map[string]string{}),
enableProgressiveSyncs: false,
want: false,
},
{
name: "annotation not removed",
appSetOld: buildAppSet(map[string]string{
argocommon.AnnotationApplicationSetRefresh: "true",
}),
appSetNew: buildAppSet(map[string]string{
argocommon.AnnotationApplicationSetRefresh: "true",
}),
enableProgressiveSyncs: false,
want: false,
},
{
name: "annotation added",
appSetOld: buildAppSet(map[string]string{}),
appSetNew: buildAppSet(map[string]string{
argocommon.AnnotationApplicationSetRefresh: "true",
}),
enableProgressiveSyncs: false,
want: true,
},
{
name: "old object is not an appset",
appSetOld: &v1alpha1.Application{},
appSetNew: buildAppSet(map[string]string{}),
enableProgressiveSyncs: false,
want: false,
},
{
name: "new object is not an appset",
appSetOld: buildAppSet(map[string]string{}),
appSetNew: &v1alpha1.Application{},
enableProgressiveSyncs: false,
want: false,
},
{
name: "deletionTimestamp present when progressive sync enabled",
appSetOld: buildAppSet(map[string]string{}),
appSetNew: &v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
DeletionTimestamp: &metav1.Time{Time: time.Now()},
},
},
enableProgressiveSyncs: true,
want: true,
},
{
name: "deletionTimestamp present when progressive sync disabled",
appSetOld: buildAppSet(map[string]string{}),
appSetNew: &v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
DeletionTimestamp: &metav1.Time{Time: time.Now()},
},
},
enableProgressiveSyncs: false,
want: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ownsHandler := getApplicationSetOwnsHandler(tt.enableProgressiveSyncs)
requeue := ownsHandler.UpdateFunc(event.UpdateEvent{
ObjectOld: tt.appSetOld,
ObjectNew: tt.appSetNew,
})
assert.Equalf(t, tt.want, requeue, "ownsHandler.UpdateFunc(%v, %v, %t)", tt.appSetOld, tt.appSetNew, tt.enableProgressiveSyncs)
})
}
}
func TestApplicationSetOwnsHandlerGeneric(t *testing.T) {
ownsHandler := getApplicationSetOwnsHandler(false)
tests := []struct {
name string
obj crtclient.Object
want bool
}{
{
name: "Object is ApplicationSet",
obj: &v1alpha1.ApplicationSet{},
want: true,
},
{
name: "Object is not ApplicationSet",
obj: &v1alpha1.Application{},
want: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
requeue := ownsHandler.GenericFunc(event.GenericEvent{
Object: tt.obj,
})
assert.Equalf(t, tt.want, requeue, "ownsHandler.GenericFunc(%v)", tt.obj)
})
}
}
func TestApplicationSetOwnsHandlerCreate(t *testing.T) {
ownsHandler := getApplicationSetOwnsHandler(false)
tests := []struct {
name string
obj crtclient.Object
want bool
}{
{
name: "Object is ApplicationSet",
obj: &v1alpha1.ApplicationSet{},
want: true,
},
{
name: "Object is not ApplicationSet",
obj: &v1alpha1.Application{},
want: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
requeue := ownsHandler.CreateFunc(event.CreateEvent{
Object: tt.obj,
})
assert.Equalf(t, tt.want, requeue, "ownsHandler.CreateFunc(%v)", tt.obj)
})
}
}
func TestApplicationSetOwnsHandlerDelete(t *testing.T) {
ownsHandler := getApplicationSetOwnsHandler(false)
tests := []struct {
name string
obj crtclient.Object
want bool
}{
{
name: "Object is ApplicationSet",
obj: &v1alpha1.ApplicationSet{},
want: true,
},
{
name: "Object is not ApplicationSet",
obj: &v1alpha1.Application{},
want: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
requeue := ownsHandler.DeleteFunc(event.DeleteEvent{
Object: tt.obj,
})
assert.Equalf(t, tt.want, requeue, "ownsHandler.DeleteFunc(%v)", tt.obj)
})
}
}
func TestShouldRequeueForApplicationSet(t *testing.T) {
type args struct {
appSetOld *v1alpha1.ApplicationSet
appSetNew *v1alpha1.ApplicationSet
enableProgressiveSyncs bool
}
tests := []struct {
name string
args args
want bool
}{
{
name: "NilAppSet",
args: args{
appSetNew: &v1alpha1.ApplicationSet{},
appSetOld: nil,
enableProgressiveSyncs: false,
},
want: false,
},
{
name: "ApplicationSetApplicationStatusChanged",
args: args{
appSetOld: &v1alpha1.ApplicationSet{
Status: v1alpha1.ApplicationSetStatus{
ApplicationStatus: []v1alpha1.ApplicationSetApplicationStatus{
{
Application: "app1",
Status: "Healthy",
},
},
},
},
appSetNew: &v1alpha1.ApplicationSet{
Status: v1alpha1.ApplicationSetStatus{
ApplicationStatus: []v1alpha1.ApplicationSetApplicationStatus{
{
Application: "app1",
Status: "Waiting",
},
},
},
},
enableProgressiveSyncs: true,
},
want: true,
},
{
name: "ApplicationSetWithDeletionTimestamp",
args: args{
appSetOld: &v1alpha1.ApplicationSet{
Status: v1alpha1.ApplicationSetStatus{
ApplicationStatus: []v1alpha1.ApplicationSetApplicationStatus{
{
Application: "app1",
Status: "Healthy",
},
},
},
},
appSetNew: &v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
DeletionTimestamp: &metav1.Time{Time: time.Now()},
},
Status: v1alpha1.ApplicationSetStatus{
ApplicationStatus: []v1alpha1.ApplicationSetApplicationStatus{
{
Application: "app1",
Status: "Waiting",
},
},
},
},
enableProgressiveSyncs: false,
},
want: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
assert.Equalf(t, tt.want, shouldRequeueForApplicationSet(tt.args.appSetOld, tt.args.appSetNew, tt.args.enableProgressiveSyncs), "shouldRequeueForApplicationSet(%v, %v)", tt.args.appSetOld, tt.args.appSetNew)
})
}
}
func TestIgnoreNotAllowedNamespaces(t *testing.T) {
tests := []struct {
name string
@@ -6736,3 +7157,62 @@ func TestIgnoreNotAllowedNamespaces(t *testing.T) {
})
}
}
func TestIsRollingSyncStrategy(t *testing.T) {
tests := []struct {
name string
appset *v1alpha1.ApplicationSet
expected bool
}{
{
name: "RollingSync strategy is explicitly set",
appset: &v1alpha1.ApplicationSet{
Spec: v1alpha1.ApplicationSetSpec{
Strategy: &v1alpha1.ApplicationSetStrategy{
Type: "RollingSync",
RollingSync: &v1alpha1.ApplicationSetRolloutStrategy{
Steps: []v1alpha1.ApplicationSetRolloutStep{},
},
},
},
},
expected: true,
},
{
name: "AllAtOnce strategy is explicitly set",
appset: &v1alpha1.ApplicationSet{
Spec: v1alpha1.ApplicationSetSpec{
Strategy: &v1alpha1.ApplicationSetStrategy{
Type: "AllAtOnce",
},
},
},
expected: false,
},
{
name: "Strategy is empty",
appset: &v1alpha1.ApplicationSet{
Spec: v1alpha1.ApplicationSetSpec{
Strategy: &v1alpha1.ApplicationSetStrategy{},
},
},
expected: false,
},
{
name: "Strategy is nil",
appset: &v1alpha1.ApplicationSet{
Spec: v1alpha1.ApplicationSetSpec{
Strategy: nil,
},
},
expected: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := isRollingSyncStrategy(tt.appset)
assert.Equal(t, tt.expected, result)
})
}
}


@@ -28,10 +28,11 @@ type GitGenerator struct {
namespace string
}
func NewGitGenerator(repos services.Repos, namespace string) Generator {
// NewGitGenerator creates a new instance of Git Generator
func NewGitGenerator(repos services.Repos, controllerNamespace string) Generator {
g := &GitGenerator{
repos: repos,
namespace: namespace,
namespace: controllerNamespace,
}
return g
@@ -70,11 +71,11 @@ func (g *GitGenerator) GenerateParams(appSetGenerator *argoprojiov1alpha1.Applic
if !strings.Contains(appSet.Spec.Template.Spec.Project, "{{") {
project := appSet.Spec.Template.Spec.Project
appProject := &argoprojiov1alpha1.AppProject{}
namespace := g.namespace
if namespace == "" {
namespace = appSet.Namespace
controllerNamespace := g.namespace
if controllerNamespace == "" {
controllerNamespace = appSet.Namespace
}
if err := client.Get(context.TODO(), types.NamespacedName{Name: project, Namespace: namespace}, appProject); err != nil {
if err := client.Get(context.TODO(), types.NamespacedName{Name: project, Namespace: controllerNamespace}, appProject); err != nil {
return nil, fmt.Errorf("error getting project %s: %w", project, err)
}
// we need to verify the signature on the Git revision if GPG is enabled

View File

@@ -1,4 +1,4 @@
// Code generated by mockery v2.43.2. DO NOT EDIT.
// Code generated by mockery v2.53.4. DO NOT EDIT.
package mocks


@@ -10,15 +10,15 @@ import (
"github.com/argoproj/argo-cd/v2/applicationset/services"
)
func GetGenerators(ctx context.Context, c client.Client, k8sClient kubernetes.Interface, namespace string, argoCDService services.Repos, dynamicClient dynamic.Interface, scmConfig SCMConfig) map[string]Generator {
func GetGenerators(ctx context.Context, c client.Client, k8sClient kubernetes.Interface, controllerNamespace string, argoCDService services.Repos, dynamicClient dynamic.Interface, scmConfig SCMConfig) map[string]Generator {
terminalGenerators := map[string]Generator{
"List": NewListGenerator(),
"Clusters": NewClusterGenerator(c, ctx, k8sClient, namespace),
"Git": NewGitGenerator(argoCDService, namespace),
"Clusters": NewClusterGenerator(c, ctx, k8sClient, controllerNamespace),
"Git": NewGitGenerator(argoCDService, controllerNamespace),
"SCMProvider": NewSCMProviderGenerator(c, scmConfig),
"ClusterDecisionResource": NewDuckTypeGenerator(ctx, dynamicClient, k8sClient, namespace),
"ClusterDecisionResource": NewDuckTypeGenerator(ctx, dynamicClient, k8sClient, controllerNamespace),
"PullRequest": NewPullRequestGenerator(c, scmConfig),
"Plugin": NewPluginGenerator(c, ctx, k8sClient, namespace),
"Plugin": NewPluginGenerator(c, ctx, k8sClient, controllerNamespace),
}
nestedGenerators := map[string]Generator{


@@ -1,4 +1,4 @@
// Code generated by mockery v2.43.2. DO NOT EDIT.
// Code generated by mockery v2.53.4. DO NOT EDIT.
package mocks


@@ -1,4 +1,4 @@
// Code generated by mockery v2.43.2. DO NOT EDIT.
// Code generated by mockery v2.53.4. DO NOT EDIT.
package mocks


@@ -1,4 +1,4 @@
// Code generated by mockery v2.43.2. DO NOT EDIT.
// Code generated by mockery v2.53.4. DO NOT EDIT.
package mocks


@@ -1,4 +1,4 @@
// Code generated by mockery v2.43.2. DO NOT EDIT.
// Code generated by mockery v2.53.4. DO NOT EDIT.
package mocks


@@ -1,4 +1,4 @@
// Code generated by mockery v2.43.2. DO NOT EDIT.
// Code generated by mockery v2.53.4. DO NOT EDIT.
package mocks


@@ -25,10 +25,14 @@ import (
"github.com/go-playground/webhooks/v6/github"
"github.com/go-playground/webhooks/v6/gitlab"
log "github.com/sirupsen/logrus"
"github.com/argoproj/argo-cd/v2/util/guard"
)
const payloadQueueSize = 50000
const panicMsgAppSet = "panic while processing applicationset-controller webhook event"
type WebhookHandler struct {
sync.WaitGroup // for testing
namespace string
@@ -103,6 +107,7 @@ func NewWebhookHandler(namespace string, webhookParallelism int, argocdSettingsM
}
func (h *WebhookHandler) startWorkerPool(webhookParallelism int) {
compLog := log.WithField("component", "applicationset-webhook")
for i := 0; i < webhookParallelism; i++ {
h.Add(1)
go func() {
@@ -112,7 +117,7 @@ func (h *WebhookHandler) startWorkerPool(webhookParallelism int) {
if !ok {
return
}
h.HandleEvent(payload)
guard.RecoverAndLog(func() { h.HandleEvent(payload) }, compLog, panicMsgAppSet)
}
}()
}

View File

@@ -64,6 +64,7 @@ func NewCommand() *cobra.Command {
selfHealBackoffTimeoutSeconds int
selfHealBackoffFactor int
selfHealBackoffCapSeconds int
selfHealBackoffCooldownSeconds int
syncTimeout int
statusProcessors int
operationProcessors int
@@ -196,6 +197,7 @@ func NewCommand() *cobra.Command {
time.Duration(appResyncJitter)*time.Second,
time.Duration(selfHealTimeoutSeconds)*time.Second,
selfHealBackoff,
time.Duration(selfHealBackoffCooldownSeconds)*time.Second,
time.Duration(syncTimeout)*time.Second,
time.Duration(repoErrorGracePeriod)*time.Second,
metricsPort,
@@ -266,6 +268,7 @@ func NewCommand() *cobra.Command {
command.Flags().IntVar(&selfHealBackoffTimeoutSeconds, "self-heal-backoff-timeout-seconds", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_BACKOFF_TIMEOUT_SECONDS", 2, 0, math.MaxInt32), "Specifies initial timeout of exponential backoff between self heal attempts")
command.Flags().IntVar(&selfHealBackoffFactor, "self-heal-backoff-factor", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_BACKOFF_FACTOR", 3, 0, math.MaxInt32), "Specifies factor of exponential timeout between application self heal attempts")
command.Flags().IntVar(&selfHealBackoffCapSeconds, "self-heal-backoff-cap-seconds", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_BACKOFF_CAP_SECONDS", 300, 0, math.MaxInt32), "Specifies max timeout of exponential backoff between application self heal attempts")
command.Flags().IntVar(&selfHealBackoffCooldownSeconds, "self-heal-backoff-cooldown-seconds", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_BACKOFF_COOLDOWN_SECONDS", 330, 0, math.MaxInt32), "Specifies period of time the app needs to stay synced before the self heal backoff can reset")
command.Flags().IntVar(&syncTimeout, "sync-timeout", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_SYNC_TIMEOUT", 0, 0, math.MaxInt32), "Specifies the timeout after which a sync would be terminated. 0 means no timeout (default 0).")
command.Flags().Int64Var(&kubectlParallelismLimit, "kubectl-parallelism-limit", env.ParseInt64FromEnv("ARGOCD_APPLICATION_CONTROLLER_KUBECTL_PARALLELISM_LIMIT", 20, 0, math.MaxInt64), "Number of allowed concurrent kubectl fork/execs. Any value less than 1 means no limit.")
command.Flags().BoolVar(&repoServerPlaintext, "repo-server-plaintext", env.ParseBoolFromEnv("ARGOCD_APPLICATION_CONTROLLER_REPO_SERVER_PLAINTEXT", false), "Disable TLS on connections to repo server")

View File

@@ -74,6 +74,7 @@ func NewCommand() *cobra.Command {
enableScmProviders bool
webhookParallelism int
tokenRefStrictMode bool
maxResourcesStatusCount int
)
scheme := runtime.NewScheme()
_ = clientgoscheme.AddToScheme(scheme)
@@ -227,6 +228,7 @@ func NewCommand() *cobra.Command {
GlobalPreservedAnnotations: globalPreservedAnnotations,
GlobalPreservedLabels: globalPreservedLabels,
Metrics: &metrics,
MaxResourcesStatusCount: maxResourcesStatusCount,
}).SetupWithManager(mgr, enableProgressiveSyncs, maxConcurrentReconciliations); err != nil {
log.Error(err, "unable to create controller", "controller", "ApplicationSet")
os.Exit(1)
@@ -270,6 +272,7 @@ func NewCommand() *cobra.Command {
command.Flags().StringSliceVar(&globalPreservedLabels, "preserved-labels", env.StringsFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_GLOBAL_PRESERVED_LABELS", []string{}, ","), "Sets global preserved field values for labels")
command.Flags().IntVar(&webhookParallelism, "webhook-parallelism-limit", env.ParseNumFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_WEBHOOK_PARALLELISM_LIMIT", 50, 1, 1000), "Number of webhook requests processed concurrently")
command.Flags().StringSliceVar(&metricsAplicationsetLabels, "metrics-applicationset-labels", []string{}, "List of Application labels that will be added to the argocd_applicationset_labels metric")
command.Flags().IntVar(&maxResourcesStatusCount, "max-resources-status-count", env.ParseNumFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT", 0, 0, math.MaxInt), "Max number of resources stored in appset status.")
return &command
}

View File

@@ -16,12 +16,13 @@ import (
"golang.org/x/term"
"sigs.k8s.io/yaml"
"github.com/argoproj/argo-cd/v2/util/rbac"
"github.com/argoproj/argo-cd/v2/cmd/argocd/commands/headless"
"github.com/argoproj/argo-cd/v2/cmd/argocd/commands/utils"
argocdclient "github.com/argoproj/argo-cd/v2/pkg/apiclient"
accountpkg "github.com/argoproj/argo-cd/v2/pkg/apiclient/account"
"github.com/argoproj/argo-cd/v2/pkg/apiclient/session"
"github.com/argoproj/argo-cd/v2/server/rbacpolicy"
"github.com/argoproj/argo-cd/v2/util/cli"
"github.com/argoproj/argo-cd/v2/util/errors"
"github.com/argoproj/argo-cd/v2/util/io"
@@ -218,7 +219,7 @@ argocd account can-i create clusters '*'
Actions: %v
Resources: %v
`, rbacpolicy.Actions, rbacpolicy.Resources),
`, rbac.Actions, rbac.Resources),
Run: func(c *cobra.Command, args []string) {
ctx := c.Context()
@@ -262,7 +263,7 @@ func NewAccountListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comman
Use: "list",
Short: "List accounts",
Example: "argocd account list",
Run: func(c *cobra.Command, args []string) {
Run: func(c *cobra.Command, _ []string) {
ctx := c.Context()
conn, client := headless.NewClientOrDie(clientOpts, c).NewAccountClientOrDie()
@@ -309,7 +310,7 @@ argocd account get
# Get details for an account by name
argocd account get --account <account-name>`,
Run: func(c *cobra.Command, args []string) {
Run: func(c *cobra.Command, _ []string) {
ctx := c.Context()
clientset := headless.NewClientOrDie(clientOpts, c)
@@ -358,7 +359,7 @@ func printAccountDetails(acc *accountpkg.Account) {
expiresAt := time.Unix(t.ExpiresAt, 0)
expiresAtFormatted = expiresAt.Format(time.RFC3339)
if expiresAt.Before(time.Now()) {
expiresAtFormatted = fmt.Sprintf("%s (expired)", expiresAtFormatted)
expiresAtFormatted = expiresAtFormatted + " (expired)"
}
}
@@ -382,7 +383,7 @@ argocd account generate-token
# Generate token for the account with the specified name
argocd account generate-token --account <account-name>`,
Run: func(c *cobra.Command, args []string) {
Run: func(c *cobra.Command, _ []string) {
ctx := c.Context()
clientset := headless.NewClientOrDie(clientOpts, c)

View File

@@ -9,13 +9,12 @@ import (
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
corev1 "k8s.io/api/core/v1"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
"sigs.k8s.io/yaml"
"github.com/argoproj/argo-cd/v2/common"
"github.com/argoproj/argo-cd/v2/server/rbacpolicy"
"github.com/argoproj/argo-cd/v2/util/assets"
"github.com/argoproj/argo-cd/v2/util/cli"
"github.com/argoproj/argo-cd/v2/util/errors"
@@ -28,94 +27,85 @@ type rbacTrait struct {
allowPath bool
}
// Provide a mapping of short-hand resource names to their RBAC counterparts
// Provide a mapping of shorthand resource names to their RBAC counterparts
var resourceMap = map[string]string{
"account": rbacpolicy.ResourceAccounts,
"app": rbacpolicy.ResourceApplications,
"apps": rbacpolicy.ResourceApplications,
"application": rbacpolicy.ResourceApplications,
"applicationsets": rbacpolicy.ResourceApplicationSets,
"cert": rbacpolicy.ResourceCertificates,
"certs": rbacpolicy.ResourceCertificates,
"certificate": rbacpolicy.ResourceCertificates,
"cluster": rbacpolicy.ResourceClusters,
"extension": rbacpolicy.ResourceExtensions,
"gpgkey": rbacpolicy.ResourceGPGKeys,
"key": rbacpolicy.ResourceGPGKeys,
"log": rbacpolicy.ResourceLogs,
"logs": rbacpolicy.ResourceLogs,
"exec": rbacpolicy.ResourceExec,
"proj": rbacpolicy.ResourceProjects,
"projs": rbacpolicy.ResourceProjects,
"project": rbacpolicy.ResourceProjects,
"repo": rbacpolicy.ResourceRepositories,
"repos": rbacpolicy.ResourceRepositories,
"repository": rbacpolicy.ResourceRepositories,
}
var projectScoped = map[string]bool{
rbacpolicy.ResourceApplications: true,
rbacpolicy.ResourceApplicationSets: true,
rbacpolicy.ResourceLogs: true,
rbacpolicy.ResourceExec: true,
rbacpolicy.ResourceClusters: true,
rbacpolicy.ResourceRepositories: true,
"account": rbac.ResourceAccounts,
"app": rbac.ResourceApplications,
"apps": rbac.ResourceApplications,
"application": rbac.ResourceApplications,
"applicationsets": rbac.ResourceApplicationSets,
"cert": rbac.ResourceCertificates,
"certs": rbac.ResourceCertificates,
"certificate": rbac.ResourceCertificates,
"cluster": rbac.ResourceClusters,
"extension": rbac.ResourceExtensions,
"gpgkey": rbac.ResourceGPGKeys,
"key": rbac.ResourceGPGKeys,
"log": rbac.ResourceLogs,
"logs": rbac.ResourceLogs,
"exec": rbac.ResourceExec,
"proj": rbac.ResourceProjects,
"projs": rbac.ResourceProjects,
"project": rbac.ResourceProjects,
"repo": rbac.ResourceRepositories,
"repos": rbac.ResourceRepositories,
"repository": rbac.ResourceRepositories,
}
// List of allowed RBAC resources
var validRBACResourcesActions = map[string]actionTraitMap{
rbacpolicy.ResourceAccounts: accountsActions,
rbacpolicy.ResourceApplications: applicationsActions,
rbacpolicy.ResourceApplicationSets: defaultCRUDActions,
rbacpolicy.ResourceCertificates: defaultCRDActions,
rbacpolicy.ResourceClusters: defaultCRUDActions,
rbacpolicy.ResourceExtensions: extensionActions,
rbacpolicy.ResourceGPGKeys: defaultCRDActions,
rbacpolicy.ResourceLogs: logsActions,
rbacpolicy.ResourceExec: execActions,
rbacpolicy.ResourceProjects: defaultCRUDActions,
rbacpolicy.ResourceRepositories: defaultCRUDActions,
rbac.ResourceAccounts: accountsActions,
rbac.ResourceApplications: applicationsActions,
rbac.ResourceApplicationSets: defaultCRUDActions,
rbac.ResourceCertificates: defaultCRDActions,
rbac.ResourceClusters: defaultCRUDActions,
rbac.ResourceExtensions: extensionActions,
rbac.ResourceGPGKeys: defaultCRDActions,
rbac.ResourceLogs: logsActions,
rbac.ResourceExec: execActions,
rbac.ResourceProjects: defaultCRUDActions,
rbac.ResourceRepositories: defaultCRUDActions,
}
// List of allowed RBAC actions
var defaultCRUDActions = actionTraitMap{
rbacpolicy.ActionCreate: rbacTrait{},
rbacpolicy.ActionGet: rbacTrait{},
rbacpolicy.ActionUpdate: rbacTrait{},
rbacpolicy.ActionDelete: rbacTrait{},
rbac.ActionCreate: rbacTrait{},
rbac.ActionGet: rbacTrait{},
rbac.ActionUpdate: rbacTrait{},
rbac.ActionDelete: rbacTrait{},
}
var defaultCRDActions = actionTraitMap{
rbacpolicy.ActionCreate: rbacTrait{},
rbacpolicy.ActionGet: rbacTrait{},
rbacpolicy.ActionDelete: rbacTrait{},
rbac.ActionCreate: rbacTrait{},
rbac.ActionGet: rbacTrait{},
rbac.ActionDelete: rbacTrait{},
}
var applicationsActions = actionTraitMap{
rbacpolicy.ActionCreate: rbacTrait{},
rbacpolicy.ActionGet: rbacTrait{},
rbacpolicy.ActionUpdate: rbacTrait{allowPath: true},
rbacpolicy.ActionDelete: rbacTrait{allowPath: true},
rbacpolicy.ActionAction: rbacTrait{allowPath: true},
rbacpolicy.ActionOverride: rbacTrait{},
rbacpolicy.ActionSync: rbacTrait{},
rbac.ActionCreate: rbacTrait{},
rbac.ActionGet: rbacTrait{},
rbac.ActionUpdate: rbacTrait{allowPath: true},
rbac.ActionDelete: rbacTrait{allowPath: true},
rbac.ActionAction: rbacTrait{allowPath: true},
rbac.ActionOverride: rbacTrait{},
rbac.ActionSync: rbacTrait{},
}
var accountsActions = actionTraitMap{
rbacpolicy.ActionCreate: rbacTrait{},
rbacpolicy.ActionUpdate: rbacTrait{},
rbac.ActionCreate: rbacTrait{},
rbac.ActionUpdate: rbacTrait{},
}
var execActions = actionTraitMap{
rbacpolicy.ActionCreate: rbacTrait{},
rbac.ActionCreate: rbacTrait{},
}
var logsActions = actionTraitMap{
rbacpolicy.ActionGet: rbacTrait{},
rbac.ActionGet: rbacTrait{},
}
var extensionActions = actionTraitMap{
rbacpolicy.ActionInvoke: rbacTrait{},
rbac.ActionInvoke: rbacTrait{},
}
// NewRBACCommand is the command for 'rbac'
@@ -226,7 +216,7 @@ argocd admin settings rbac can someuser create application 'default/app' --defau
// even if there is no explicit RBAC allow, or if there is an explicit RBAC deny)
var isLogRbacEnforced func() bool
if nsOverride && policyFile == "" {
if resolveRBACResourceName(resource) == rbacpolicy.ResourceLogs {
if resolveRBACResourceName(resource) == rbac.ResourceLogs {
isLogRbacEnforced = func() bool {
if opts, ok := cmdCtx.(*settingsOpts); ok {
opts.loadClusterSettings = true
@@ -248,12 +238,11 @@ argocd admin settings rbac can someuser create application 'default/app' --defau
fmt.Println("Yes")
}
os.Exit(0)
} else {
if !quiet {
fmt.Println("No")
}
os.Exit(1)
}
if !quiet {
fmt.Println("No")
}
os.Exit(1)
},
}
clientConfig = cli.AddKubectlFlagsToCmd(command)
@@ -321,13 +310,11 @@ argocd admin settings rbac validate --namespace argocd
if err := rbac.ValidatePolicy(userPolicy); err == nil {
fmt.Printf("Policy is valid.\n")
os.Exit(0)
} else {
fmt.Printf("Policy is invalid: %v\n", err)
os.Exit(1)
}
} else {
log.Fatalf("Policy is empty or could not be loaded.")
fmt.Printf("Policy is invalid: %v\n", err)
os.Exit(1)
}
log.Fatalf("Policy is empty or could not be loaded.")
},
}
clientConfig = cli.AddKubectlFlagsToCmd(command)
@@ -402,7 +389,7 @@ func getPolicyFromConfigMap(cm *corev1.ConfigMap) (string, string, string) {
// getPolicyConfigMap fetches the RBAC config map from K8s cluster
func getPolicyConfigMap(ctx context.Context, client kubernetes.Interface, namespace string) (*corev1.ConfigMap, error) {
cm, err := client.CoreV1().ConfigMaps(namespace).Get(ctx, common.ArgoCDRBACConfigMapName, v1.GetOptions{})
cm, err := client.CoreV1().ConfigMaps(namespace).Get(ctx, common.ArgoCDRBACConfigMapName, metav1.GetOptions{})
if err != nil {
return nil, err
}
@@ -448,12 +435,12 @@ func checkPolicy(subject, action, resource, subResource, builtinPolicy, userPoli
// Some project scoped resources have a special notation - for simplicity's sake,
// if user gives no sub-resource (or specifies simple '*'), we construct
// the required notation by setting subresource to '*/*'.
if projectScoped[realResource] {
if rbac.ProjectScoped[realResource] {
if subResource == "*" || subResource == "" {
subResource = "*/*"
}
}
if realResource == rbacpolicy.ResourceLogs {
if realResource == rbac.ResourceLogs {
if isLogRbacEnforced != nil && !isLogRbacEnforced() {
return true
}
@@ -466,9 +453,8 @@ func checkPolicy(subject, action, resource, subResource, builtinPolicy, userPoli
func resolveRBACResourceName(name string) string {
if res, ok := resourceMap[name]; ok {
return res
} else {
return name
}
return name
}
// validateRBACResourceAction checks whether a given resource is a valid RBAC resource.

View File

@@ -7,14 +7,15 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
v1 "k8s.io/api/core/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes/fake"
restclient "k8s.io/client-go/rest"
"k8s.io/client-go/tools/clientcmd"
clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
"github.com/argoproj/argo-cd/v2/server/rbacpolicy"
"github.com/argoproj/argo-cd/v2/util/rbac"
"github.com/argoproj/argo-cd/v2/util/assets"
)
@@ -56,8 +57,8 @@ func Test_validateRBACResourceAction(t *testing.T) {
{
name: "Test valid resource and action",
args: args{
resource: rbacpolicy.ResourceApplications,
action: rbacpolicy.ActionCreate,
resource: rbac.ResourceApplications,
action: rbac.ActionCreate,
},
valid: true,
},
@@ -71,7 +72,7 @@ func Test_validateRBACResourceAction(t *testing.T) {
{
name: "Test invalid action",
args: args{
resource: rbacpolicy.ResourceApplications,
resource: rbac.ResourceApplications,
action: "invalid",
},
valid: false,
@@ -79,24 +80,24 @@ func Test_validateRBACResourceAction(t *testing.T) {
{
name: "Test invalid action for resource",
args: args{
resource: rbacpolicy.ResourceLogs,
action: rbacpolicy.ActionCreate,
resource: rbac.ResourceLogs,
action: rbac.ActionCreate,
},
valid: false,
},
{
name: "Test valid action with path",
args: args{
resource: rbacpolicy.ResourceApplications,
action: rbacpolicy.ActionAction + "/apps/Deployment/restart",
resource: rbac.ResourceApplications,
action: rbac.ActionAction + "/apps/Deployment/restart",
},
valid: true,
},
{
name: "Test invalid action with path",
args: args{
resource: rbacpolicy.ResourceApplications,
action: rbacpolicy.ActionGet + "/apps/Deployment/restart",
resource: rbac.ResourceApplications,
action: rbac.ActionGet + "/apps/Deployment/restart",
},
valid: false,
},
@@ -147,7 +148,7 @@ func Test_PolicyFromK8s(t *testing.T) {
ctx := context.Background()
require.NoError(t, err)
kubeclientset := fake.NewClientset(&v1.ConfigMap{
kubeclientset := fake.NewClientset(&corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Name: "argocd-rbac-cm",
Namespace: "argocd",
@@ -280,7 +281,7 @@ p, role:user, logs, get, .*/.*, allow
p, role:user, exec, create, .*/.*, allow
`
kubeclientset := fake.NewClientset(&v1.ConfigMap{
kubeclientset := fake.NewClientset(&corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Name: "argocd-rbac-cm",
Namespace: "argocd",

View File

@@ -69,7 +69,7 @@ func newSettingsManager(data map[string]string) *settings.SettingsManager {
type fakeCmdContext struct {
mgr *settings.SettingsManager
// nolint:unused,structcheck
// nolint:unused
out bytes.Buffer
}

View File

@@ -53,7 +53,7 @@ func (p *Prompt) ConfirmBaseOnCount(messageForSingle string, messageForArray str
}
if count == 1 {
return p.Confirm(messageForSingle), true
return p.Confirm(messageForSingle), false
}
return p.ConfirmAll(messageForArray)

View File

@@ -38,11 +38,47 @@ func TestConfirmBaseOnCountPromptDisabled(t *testing.T) {
assert.True(t, result2)
}
func TestConfirmBaseOnCountZeroApps(t *testing.T) {
p := &Prompt{enabled: true}
result1, result2 := p.ConfirmBaseOnCount("Proceed?", "Process all?", 0)
assert.True(t, result1)
assert.True(t, result2)
func TestConfirmBaseOnCount(t *testing.T) {
tests := []struct {
input string
output bool
count int
}{
{
input: "y\n",
output: true,
count: 0,
},
{
input: "y\n",
output: true,
count: 1,
},
{
input: "n\n",
output: false,
count: 1,
},
}
origStdin := os.Stdin
for _, tt := range tests {
tmpFile, err := writeToStdin(tt.input)
require.NoError(t, err)
p := &Prompt{enabled: true}
result1, result2 := p.ConfirmBaseOnCount("Proceed?", "Proceed all?", tt.count)
assert.Equal(t, tt.output, result1)
if tt.count == 1 {
assert.False(t, result2)
} else {
assert.Equal(t, tt.output, result2)
}
_ = tmpFile.Close()
os.Remove(tmpFile.Name())
}
os.Stdin = origStdin
}
func TestConfirmPrompt(t *testing.T) {
@@ -62,8 +98,8 @@ func TestConfirmPrompt(t *testing.T) {
p := &Prompt{enabled: true}
result := p.Confirm("Are you sure you want to run this command? (y/n) \n")
assert.Equal(t, c.output, result)
os.Remove(tmpFile.Name())
_ = tmpFile.Close()
os.Remove(tmpFile.Name())
}
os.Stdin = origStdin
@@ -89,8 +125,8 @@ func TestConfirmAllPrompt(t *testing.T) {
confirm, confirmAll := p.ConfirmAll("Are you sure you want to run this command? (y/n) \n")
assert.Equal(t, c.confirm, confirm)
assert.Equal(t, c.confirmAll, confirmAll)
os.Remove(tmpFile.Name())
_ = tmpFile.Close()
os.Remove(tmpFile.Name())
}
os.Stdin = origStdin

View File

@@ -18,11 +18,13 @@ func TestProjectOpts_ResourceLists(t *testing.T) {
}
assert.ElementsMatch(t,
[]v1.GroupKind{{Kind: "ConfigMap"}}, opts.GetAllowedNamespacedResources(),
[]v1.GroupKind{{Group: "apps", Kind: "DaemonSet"}}, opts.GetDeniedNamespacedResources(),
[]v1.GroupKind{{Group: "apiextensions.k8s.io", Kind: "CustomResourceDefinition"}}, opts.GetAllowedClusterResources(),
[]v1.GroupKind{{Group: "rbac.authorization.k8s.io", Kind: "ClusterRole"}}, opts.GetDeniedClusterResources(),
)
[]v1.GroupKind{{Kind: "ConfigMap"}}, opts.GetAllowedNamespacedResources())
assert.ElementsMatch(t,
[]v1.GroupKind{{Group: "apps", Kind: "DaemonSet"}}, opts.GetDeniedNamespacedResources())
assert.ElementsMatch(t,
[]v1.GroupKind{{Group: "apiextensions.k8s.io", Kind: "CustomResourceDefinition"}}, opts.GetAllowedClusterResources())
assert.ElementsMatch(t,
[]v1.GroupKind{{Group: "rbac.authorization.k8s.io", Kind: "ClusterRole"}}, opts.GetDeniedClusterResources())
}
func TestProjectOpts_GetDestinationServiceAccounts(t *testing.T) {

View File

@@ -45,7 +45,7 @@ func AddRepoFlags(command *cobra.Command, opts *RepoOptions) {
command.Flags().StringVar(&opts.GithubAppPrivateKeyPath, "github-app-private-key-path", "", "private key of the GitHub Application")
command.Flags().StringVar(&opts.GitHubAppEnterpriseBaseURL, "github-app-enterprise-base-url", "", "base url to use when using GitHub Enterprise (e.g. https://ghe.example.com/api/v3")
command.Flags().StringVar(&opts.Proxy, "proxy", "", "use proxy to access repository")
command.Flags().StringVar(&opts.Proxy, "no-proxy", "", "don't access these targets via proxy")
command.Flags().StringVar(&opts.NoProxy, "no-proxy", "", "don't access these targets via proxy")
command.Flags().StringVar(&opts.GCPServiceAccountKeyPath, "gcp-service-account-key-path", "", "service account key for the Google Cloud Platform")
command.Flags().BoolVar(&opts.ForceHttpBasicAuth, "force-http-basic-auth", false, "whether to force use of basic auth when connecting repository via HTTP")
}

View File

@@ -1,4 +1,4 @@
// Code generated by mockery v2.43.2. DO NOT EDIT.
// Code generated by mockery v2.53.4. DO NOT EDIT.
package mocks
@@ -14,7 +14,7 @@ type Clientset struct {
mock.Mock
}
// NewCommitServerClient provides a mock function with given fields:
// NewCommitServerClient provides a mock function with no fields
func (_m *Clientset) NewCommitServerClient() (io.Closer, apiclient.CommitServiceClient, error) {
ret := _m.Called()

View File

@@ -1,4 +1,4 @@
// Code generated by mockery v2.43.2. DO NOT EDIT.
// Code generated by mockery v2.53.4. DO NOT EDIT.
package mocks

View File

@@ -1,4 +1,4 @@
// Code generated by mockery v2.43.2. DO NOT EDIT.
// Code generated by mockery v2.53.4. DO NOT EDIT.
package mocks

View File

@@ -41,6 +41,7 @@ import (
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/util/workqueue"
"k8s.io/utils/ptr"
commitclient "github.com/argoproj/argo-cd/v2/commitserver/apiclient"
"github.com/argoproj/argo-cd/v2/common"
@@ -135,6 +136,7 @@ type ApplicationController struct {
statusRefreshJitter time.Duration
selfHealTimeout time.Duration
selfHealBackOff *wait.Backoff
selfHealBackoffCooldown time.Duration
syncTimeout time.Duration
db db.ArgoDB
settingsMgr *settings_util.SettingsManager
@@ -169,6 +171,7 @@ func NewApplicationController(
appResyncJitter time.Duration,
selfHealTimeout time.Duration,
selfHealBackoff *wait.Backoff,
selfHealBackoffCooldown time.Duration,
syncTimeout time.Duration,
repoErrorGracePeriod time.Duration,
metricsPort int,
@@ -214,6 +217,7 @@ func NewApplicationController(
settingsMgr: settingsMgr,
selfHealTimeout: selfHealTimeout,
selfHealBackOff: selfHealBackoff,
selfHealBackoffCooldown: selfHealBackoffCooldown,
syncTimeout: syncTimeout,
clusterSharding: clusterSharding,
projByNameCache: sync.Map{},
@@ -1193,7 +1197,7 @@ func (ctrl *ApplicationController) finalizeApplicationDeletion(app *appv1.Applic
isValid, cluster := ctrl.isValidDestination(app)
if !isValid {
app.UnSetCascadedDeletion()
app.UnSetPostDeleteFinalizer()
app.UnSetPostDeleteFinalizerAll()
if err := ctrl.updateFinalizers(app); err != nil {
return err
}
@@ -1472,7 +1476,7 @@ func (ctrl *ApplicationController) processRequestedAppOperation(app *appv1.Appli
} else {
state.Phase = synccommon.OperationRunning
state.RetryCount++
state.Message = fmt.Sprintf("%s. Retrying attempt #%d at %s.", state.Message, state.RetryCount, retryAt.Format(time.Kitchen))
state.Message = fmt.Sprintf("%s due to application controller sync timeout. Retrying attempt #%d at %s.", state.Message, state.RetryCount, retryAt.Format(time.Kitchen))
}
} else if state.RetryCount > 0 {
state.Message = fmt.Sprintf("%s (retried %d times).", state.Message, state.RetryCount)
@@ -1674,11 +1678,9 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
project, hasErrors := ctrl.refreshAppConditions(app)
ts.AddCheckpoint("refresh_app_conditions_ms")
now := metav1.Now()
if hasErrors {
app.Status.Sync.Status = appv1.SyncStatusCodeUnknown
app.Status.Health.Status = health.HealthStatusUnknown
app.Status.Health.LastTransitionTime = &now
patchMs = ctrl.persistAppStatus(origApp, &app.Status)
if err := ctrl.cache.SetAppResourcesTree(app.InstanceName(ctrl.namespace), &appv1.ApplicationTree{}); err != nil {
@@ -1769,6 +1771,7 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
ts.AddCheckpoint("auto_sync_ms")
if app.Status.ReconciledAt == nil || comparisonLevel >= CompareWithLatest {
now := metav1.Now()
app.Status.ReconciledAt = &now
}
app.Status.Sync = *compareResult.syncStatus
@@ -1999,9 +2002,15 @@ func (ctrl *ApplicationController) persistAppStatus(orig *appv1.Application, new
ctrl.logAppEvent(orig, argo.EventInfo{Reason: argo.EventReasonResourceUpdated, Type: v1.EventTypeNormal}, message, context.TODO())
}
if orig.Status.Health.Status != newStatus.Health.Status {
now := metav1.Now()
newStatus.Health.LastTransitionTime = &now
message := fmt.Sprintf("Updated health status: %s -> %s", orig.Status.Health.Status, newStatus.Health.Status)
ctrl.logAppEvent(orig, argo.EventInfo{Reason: argo.EventReasonResourceUpdated, Type: v1.EventTypeNormal}, message, context.TODO())
} else {
// make sure the last transition time is the same and populated if the health is the same
newStatus.Health.LastTransitionTime = orig.Status.Health.LastTransitionTime
}
var newAnnotations map[string]string
if orig.GetAnnotations() != nil {
newAnnotations = make(map[string]string)
@@ -2103,9 +2112,7 @@ func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *
InitiatedBy: appv1.OperationInitiator{Automated: true},
Retry: appv1.RetryStrategy{Limit: 5},
}
if app.Status.OperationState != nil && app.Status.OperationState.Operation.Sync != nil {
op.Sync.SelfHealAttemptsCount = app.Status.OperationState.Operation.Sync.SelfHealAttemptsCount
}
if app.Spec.SyncPolicy.Retry != nil {
op.Retry = *app.Spec.SyncPolicy.Retry
}
@@ -2121,8 +2128,18 @@ func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *
}
logCtx.Infof("Skipping auto-sync: most recent sync already to %s", desiredCommitSHA)
return nil, 0
} else if alreadyAttempted && selfHeal {
if shouldSelfHeal, retryAfter := ctrl.shouldSelfHeal(app); shouldSelfHeal {
} else if selfHeal {
shouldSelfHeal, retryAfter := ctrl.shouldSelfHeal(app, alreadyAttempted)
if app.Status.OperationState != nil && app.Status.OperationState.Operation.Sync != nil {
op.Sync.SelfHealAttemptsCount = app.Status.OperationState.Operation.Sync.SelfHealAttemptsCount
}
if alreadyAttempted {
if !shouldSelfHeal {
logCtx.Infof("Skipping auto-sync: already attempted sync to %s with timeout %v (retrying in %v)", desiredCommitSHA, ctrl.selfHealTimeout, retryAfter)
ctrl.requestAppRefresh(app.QualifiedName(), CompareWithLatest.Pointer(), &retryAfter)
return nil, 0
}
op.Sync.SelfHealAttemptsCount++
for _, resource := range resources {
if resource.Status != appv1.SyncStatusCodeSynced {
@@ -2133,10 +2150,6 @@ func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *
})
}
}
} else {
logCtx.Infof("Skipping auto-sync: already attempted sync to %s with timeout %v (retrying in %v)", desiredCommitSHA, ctrl.selfHealTimeout, retryAfter)
ctrl.requestAppRefresh(app.QualifiedName(), CompareWithLatest.Pointer(), &retryAfter)
return nil, 0
}
}
ts.AddCheckpoint("already_attempted_check_ms")
@@ -2220,29 +2233,41 @@ func alreadyAttemptedSync(app *appv1.Application, commitSHA string, commitSHAsMS
}
}
func (ctrl *ApplicationController) shouldSelfHeal(app *appv1.Application) (bool, time.Duration) {
func (ctrl *ApplicationController) shouldSelfHeal(app *appv1.Application, alreadyAttempted bool) (bool, time.Duration) {
if app.Status.OperationState == nil {
return true, time.Duration(0)
}
var timeSinceOperation *time.Duration
if app.Status.OperationState.FinishedAt != nil {
timeSinceOperation = ptr.To(time.Since(app.Status.OperationState.FinishedAt.Time))
}
// Reset counter if the prior sync was successful and the cooldown period is over OR if the revision has changed
if !alreadyAttempted || (timeSinceOperation != nil && *timeSinceOperation >= ctrl.selfHealBackoffCooldown && app.Status.Sync.Status == appv1.SyncStatusCodeSynced) {
app.Status.OperationState.Operation.Sync.SelfHealAttemptsCount = 0
}
var retryAfter time.Duration
if ctrl.selfHealBackOff == nil {
if app.Status.OperationState.FinishedAt == nil {
if timeSinceOperation == nil {
retryAfter = ctrl.selfHealTimeout
} else {
retryAfter = ctrl.selfHealTimeout - time.Since(app.Status.OperationState.FinishedAt.Time)
retryAfter = ctrl.selfHealTimeout - *timeSinceOperation
}
} else {
backOff := *ctrl.selfHealBackOff
backOff.Steps = int(app.Status.OperationState.Operation.Sync.SelfHealAttemptsCount)
var delay time.Duration
for backOff.Steps > 0 {
steps := backOff.Steps
for i := 0; i < steps; i++ {
delay = backOff.Step()
}
if app.Status.OperationState.FinishedAt == nil {
if timeSinceOperation == nil {
retryAfter = delay
} else {
retryAfter = delay - time.Since(app.Status.OperationState.FinishedAt.Time)
retryAfter = delay - *timeSinceOperation
}
}
return retryAfter <= 0, retryAfter

View File

@@ -171,6 +171,7 @@ func newFakeControllerWithResync(data *fakeData, appResyncPeriod time.Duration,
time.Second,
time.Minute,
nil,
time.Minute,
0,
time.Second*10,
common.DefaultPortArgoCDMetrics,
@@ -1837,7 +1838,7 @@ apps/Deployment:
hs = {}
hs.status = ""
hs.message = ""
if obj.metadata ~= nil then
if obj.metadata.labels ~= nil then
current_status = obj.metadata.labels["status"]
@@ -2061,7 +2062,7 @@ func TestProcessRequestedAppOperation_FailedHasRetries(t *testing.T) {
phase, _, _ := unstructured.NestedString(receivedPatch, "status", "operationState", "phase")
assert.Equal(t, string(synccommon.OperationRunning), phase)
message, _, _ := unstructured.NestedString(receivedPatch, "status", "operationState", "message")
assert.Contains(t, message, "Retrying attempt #1")
assert.Contains(t, message, "due to application controller sync timeout. Retrying attempt #1")
retryCount, _, _ := unstructured.NestedFloat64(receivedPatch, "status", "operationState", "retryCount")
assert.InEpsilon(t, float64(1), retryCount, 0.0001)
}
@@ -2533,7 +2534,7 @@ func TestSelfHealExponentialBackoff(t *testing.T) {
ctrl.selfHealBackOff = &wait.Backoff{
Factor: 3,
Duration: 2 * time.Second,
Cap: 5 * time.Minute,
Cap: 2 * time.Minute,
}
app := &v1alpha1.Application{
@@ -2548,29 +2549,92 @@ func TestSelfHealExponentialBackoff(t *testing.T) {
testCases := []struct {
attempts int64
expectedAttempts int64
finishedAt *metav1.Time
expectedDuration time.Duration
shouldSelfHeal bool
alreadyAttempted bool
syncStatus v1alpha1.SyncStatusCode
}{{
attempts: 0,
finishedAt: ptr.To(metav1.Now()),
expectedDuration: 0,
shouldSelfHeal: true,
alreadyAttempted: true,
expectedAttempts: 0,
syncStatus: v1alpha1.SyncStatusCodeOutOfSync,
}, {
attempts: 1,
finishedAt: ptr.To(metav1.Now()),
expectedDuration: 2 * time.Second,
shouldSelfHeal: false,
alreadyAttempted: true,
expectedAttempts: 1,
syncStatus: v1alpha1.SyncStatusCodeOutOfSync,
}, {
attempts: 2,
finishedAt: ptr.To(metav1.Now()),
expectedDuration: 6 * time.Second,
shouldSelfHeal: false,
alreadyAttempted: true,
expectedAttempts: 2,
syncStatus: v1alpha1.SyncStatusCodeOutOfSync,
}, {
attempts: 3,
finishedAt: nil,
expectedDuration: 18 * time.Second,
shouldSelfHeal: false,
alreadyAttempted: true,
expectedAttempts: 3,
syncStatus: v1alpha1.SyncStatusCodeOutOfSync,
}, {
attempts: 4,
finishedAt: nil,
expectedDuration: 54 * time.Second,
shouldSelfHeal: false,
alreadyAttempted: true,
expectedAttempts: 4,
syncStatus: v1alpha1.SyncStatusCodeOutOfSync,
}, {
attempts: 5,
finishedAt: nil,
expectedDuration: 120 * time.Second,
shouldSelfHeal: false,
alreadyAttempted: true,
expectedAttempts: 5,
syncStatus: v1alpha1.SyncStatusCodeOutOfSync,
}, {
attempts: 6,
finishedAt: nil,
expectedDuration: 120 * time.Second,
shouldSelfHeal: false,
alreadyAttempted: true,
expectedAttempts: 6,
syncStatus: v1alpha1.SyncStatusCodeOutOfSync,
}, {
attempts: 6,
finishedAt: nil,
expectedDuration: 0,
shouldSelfHeal: true,
alreadyAttempted: false,
expectedAttempts: 0,
syncStatus: v1alpha1.SyncStatusCodeOutOfSync,
}, { // backoff will not reset as finished time isn't >= cooldown
attempts: 6,
finishedAt: ptr.To(metav1.Now()),
expectedDuration: 120 * time.Second,
shouldSelfHeal: false,
alreadyAttempted: true,
expectedAttempts: 6,
syncStatus: v1alpha1.SyncStatusCodeSynced,
}, { // backoff will reset as finished time is >= cooldown
attempts: 40,
finishedAt: &metav1.Time{Time: time.Now().Add(-(1 * time.Minute))},
expectedDuration: -60 * time.Second,
shouldSelfHeal: true,
alreadyAttempted: true,
expectedAttempts: 0,
syncStatus: v1alpha1.SyncStatusCodeSynced,
}}
for i := range testCases {
@@ -2578,8 +2642,10 @@ func TestSelfHealExponentialBackoff(t *testing.T) {
t.Run(fmt.Sprintf("test case %d", i), func(t *testing.T) {
app.Status.OperationState.Operation.Sync.SelfHealAttemptsCount = tc.attempts
app.Status.OperationState.FinishedAt = tc.finishedAt
ok, duration := ctrl.shouldSelfHeal(app)
app.Status.Sync.Status = tc.syncStatus
ok, duration := ctrl.shouldSelfHeal(app, tc.alreadyAttempted)
require.Equal(t, ok, tc.shouldSelfHeal)
require.Equal(t, tc.expectedAttempts, app.Status.OperationState.Operation.Sync.SelfHealAttemptsCount)
assertDurationAround(t, tc.expectedDuration, duration)
})
}



@@ -1,4 +1,4 @@
// Code generated by mockery v2.43.2. DO NOT EDIT.
// Code generated by mockery v2.53.4. DO NOT EDIT.
package mocks
@@ -55,7 +55,7 @@ func (_m *LiveStateCache) GetClusterCache(server string) (cache.ClusterCache, er
return r0, r1
}
// GetClustersInfo provides a mock function with given fields:
// GetClustersInfo provides a mock function with no fields
func (_m *LiveStateCache) GetClustersInfo() []cache.ClusterInfo {
ret := _m.Called()
@@ -172,7 +172,7 @@ func (_m *LiveStateCache) GetVersionsInfo(serverURL string) (string, []kube.APIR
return r0, r1, r2
}
// Init provides a mock function with given fields:
// Init provides a mock function with no fields
func (_m *LiveStateCache) Init() error {
ret := _m.Called()


@@ -8,7 +8,6 @@ import (
"github.com/argoproj/gitops-engine/pkg/sync/ignore"
kubeutil "github.com/argoproj/gitops-engine/pkg/utils/kube"
log "github.com/sirupsen/logrus"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime/schema"
"github.com/argoproj/argo-cd/v2/common"
@@ -22,7 +21,9 @@ func setApplicationHealth(resources []managedResource, statuses []appv1.Resource
var savedErr error
var errCount uint
appHealth := appv1.HealthStatus{Status: health.HealthStatusHealthy}
appHealth := app.Status.Health.DeepCopy()
appHealth.Status = health.HealthStatusHealthy
for i, res := range resources {
if res.Target != nil && hookutil.Skip(res.Target) {
continue
@@ -82,18 +83,11 @@ func setApplicationHealth(resources []managedResource, statuses []appv1.Resource
}
if persistResourceHealth {
app.Status.ResourceHealthSource = appv1.ResourceHealthLocationInline
// if the status didn't change, don't update the timestamp
if app.Status.Health.Status == appHealth.Status && app.Status.Health.LastTransitionTime != nil {
appHealth.LastTransitionTime = app.Status.Health.LastTransitionTime
} else {
now := metav1.Now()
appHealth.LastTransitionTime = &now
}
} else {
app.Status.ResourceHealthSource = appv1.ResourceHealthLocationAppTree
}
if savedErr != nil && errCount > 1 {
savedErr = fmt.Errorf("see application-controller logs for %d other errors; most recent error was: %w", errCount-1, savedErr)
}
return &appHealth, savedErr
return appHealth, savedErr
}


@@ -73,7 +73,6 @@ func TestSetApplicationHealth(t *testing.T) {
assert.NotNil(t, healthStatus.LastTransitionTime)
assert.Nil(t, resourceStatuses[0].Health.LastTransitionTime)
assert.Nil(t, resourceStatuses[1].Health.LastTransitionTime)
previousLastTransitionTime := healthStatus.LastTransitionTime
app.Status.Health = *healthStatus
// now mark the job as a hook and retry. it should ignore the hook and consider the app healthy
@@ -81,9 +80,8 @@ func TestSetApplicationHealth(t *testing.T) {
healthStatus, err = setApplicationHealth(resources, resourceStatuses, nil, app, true)
require.NoError(t, err)
assert.Equal(t, health.HealthStatusHealthy, healthStatus.Status)
// change in health, timestamp should change
assert.NotEqual(t, *previousLastTransitionTime, *healthStatus.LastTransitionTime)
previousLastTransitionTime = healthStatus.LastTransitionTime
// timestamp should be the same in case health did not change
assert.Equal(t, app.Status.Health.LastTransitionTime, healthStatus.LastTransitionTime)
app.Status.Health = *healthStatus
// now we set the `argocd.argoproj.io/ignore-healthcheck: "true"` annotation on the job's target.
@@ -94,8 +92,7 @@ func TestSetApplicationHealth(t *testing.T) {
healthStatus, err = setApplicationHealth(resources, resourceStatuses, nil, app, true)
require.NoError(t, err)
assert.Equal(t, health.HealthStatusHealthy, healthStatus.Status)
// no change in health, timestamp shouldn't change
assert.Equal(t, *previousLastTransitionTime, *healthStatus.LastTransitionTime)
assert.Equal(t, app.Status.Health.LastTransitionTime, healthStatus.LastTransitionTime)
}
func TestSetApplicationHealth_ResourceHealthNotPersisted(t *testing.T) {
@@ -124,7 +121,7 @@ func TestSetApplicationHealth_MissingResource(t *testing.T) {
healthStatus, err := setApplicationHealth(resources, resourceStatuses, lua.ResourceHealthOverrides{}, app, true)
require.NoError(t, err)
assert.Equal(t, health.HealthStatusMissing, healthStatus.Status)
assert.False(t, healthStatus.LastTransitionTime.IsZero())
assert.Equal(t, app.Status.Health.LastTransitionTime, healthStatus.LastTransitionTime)
}
func TestSetApplicationHealth_HealthImproves(t *testing.T) {
@@ -156,7 +153,7 @@ func TestSetApplicationHealth_HealthImproves(t *testing.T) {
healthStatus, err := setApplicationHealth(resources, resourceStatuses, overrides, app, true)
require.NoError(t, err)
assert.Equal(t, tc.newStatus, healthStatus.Status)
assert.NotEqual(t, testTimestamp, *healthStatus.LastTransitionTime)
assert.Equal(t, app.Status.Health.LastTransitionTime, healthStatus.LastTransitionTime)
})
}
}
@@ -173,6 +170,7 @@ func TestSetApplicationHealth_MissingResourceNoBuiltHealthCheck(t *testing.T) {
healthStatus, err := setApplicationHealth(resources, resourceStatuses, lua.ResourceHealthOverrides{}, app, true)
require.NoError(t, err)
assert.Equal(t, health.HealthStatusHealthy, healthStatus.Status)
assert.Equal(t, app.Status.Health.LastTransitionTime, healthStatus.LastTransitionTime)
assert.Equal(t, health.HealthStatusMissing, resourceStatuses[0].Health.Status)
})
@@ -184,7 +182,7 @@ func TestSetApplicationHealth_MissingResourceNoBuiltHealthCheck(t *testing.T) {
}, app, true)
require.NoError(t, err)
assert.Equal(t, health.HealthStatusMissing, healthStatus.Status)
assert.False(t, healthStatus.LastTransitionTime.IsZero())
assert.Equal(t, app.Status.Health.LastTransitionTime, healthStatus.LastTransitionTime)
})
}


@@ -481,7 +481,6 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1
// return unknown comparison result if basic comparison settings cannot be loaded
if err != nil {
now := metav1.Now()
if hasMultipleSources {
return &comparisonResult{
syncStatus: &v1alpha1.SyncStatus{
@@ -489,7 +488,7 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1
Status: v1alpha1.SyncStatusCodeUnknown,
Revisions: revisions,
},
healthStatus: &v1alpha1.HealthStatus{Status: health.HealthStatusUnknown, LastTransitionTime: &now},
healthStatus: &v1alpha1.HealthStatus{Status: health.HealthStatusUnknown},
}, nil
} else {
return &comparisonResult{
@@ -498,7 +497,7 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1
Status: v1alpha1.SyncStatusCodeUnknown,
Revision: revisions[0],
},
healthStatus: &v1alpha1.HealthStatus{Status: health.HealthStatusUnknown, LastTransitionTime: &now},
healthStatus: &v1alpha1.HealthStatus{Status: health.HealthStatusUnknown},
}, nil
}
}


@@ -715,7 +715,7 @@ func TestSetHealth(t *testing.T) {
require.NoError(t, err)
assert.Equal(t, health.HealthStatusHealthy, compRes.healthStatus.Status)
assert.False(t, compRes.healthStatus.LastTransitionTime.IsZero())
assert.Equal(t, app.Status.Health.LastTransitionTime, compRes.healthStatus.LastTransitionTime)
}
func TestPreserveStatusTimestamp(t *testing.T) {
@@ -790,7 +790,7 @@ func TestSetHealthSelfReferencedApp(t *testing.T) {
require.NoError(t, err)
assert.Equal(t, health.HealthStatusHealthy, compRes.healthStatus.Status)
assert.False(t, compRes.healthStatus.LastTransitionTime.IsZero())
assert.Equal(t, app.Status.Health.LastTransitionTime, compRes.healthStatus.LastTransitionTime)
}
func TestSetManagedResourcesWithOrphanedResources(t *testing.T) {
@@ -866,7 +866,7 @@ func TestReturnUnknownComparisonStateOnSettingLoadError(t *testing.T) {
require.NoError(t, err)
assert.Equal(t, health.HealthStatusUnknown, compRes.healthStatus.Status)
assert.False(t, compRes.healthStatus.LastTransitionTime.IsZero())
assert.Equal(t, app.Status.Health.LastTransitionTime, compRes.healthStatus.LastTransitionTime)
assert.Equal(t, argoappv1.SyncStatusCodeUnknown, compRes.syncStatus.Status)
}


@@ -114,15 +114,6 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, state *v1alpha
}
syncOp = *state.Operation.Sync
// validates if it should fail the sync if it finds shared resources
hasSharedResource, sharedResourceMessage := hasSharedResourceCondition(app)
if syncOp.SyncOptions.HasOption("FailOnSharedResource=true") &&
hasSharedResource {
state.Phase = common.OperationFailed
state.Message = fmt.Sprintf("Shared resource found: %s", sharedResourceMessage)
return
}
isMultiSourceRevision := app.Spec.HasMultipleSources()
rollback := len(syncOp.Sources) > 0 || syncOp.Source != nil
if rollback {
@@ -213,6 +204,15 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, state *v1alpha
syncRes.Revision = compareResult.syncStatus.Revision
syncRes.Revisions = compareResult.syncStatus.Revisions
// validates if it should fail the sync if it finds shared resources
hasSharedResource, sharedResourceMessage := hasSharedResourceCondition(app)
if syncOp.SyncOptions.HasOption("FailOnSharedResource=true") &&
hasSharedResource {
state.Phase = common.OperationFailed
state.Message = fmt.Sprintf("Shared resource found: %s", sharedResourceMessage)
return
}
// If there are any comparison or spec error conditions, do not perform the operation
if errConditions := app.Status.GetConditions(map[v1alpha1.ApplicationConditionType]bool{
v1alpha1.ApplicationConditionComparisonError: true,

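The relocated check above fails the sync when a `SharedResourceWarning` condition is present, i.e. when another application already manages one of the app's resources. A minimal sketch of the condition scan, with an illustrative `ApplicationCondition` stand-in rather than the real `v1alpha1` type:

```go
package main

import "fmt"

// ApplicationCondition is a simplified stand-in for
// v1alpha1.ApplicationCondition; field names here are illustrative.
type ApplicationCondition struct {
	Type    string
	Message string
}

const conditionSharedResourceWarning = "SharedResourceWarning"

// hasSharedResourceCondition scans the app's conditions for a shared
// resource warning and returns its message for the failure text.
func hasSharedResourceCondition(conds []ApplicationCondition) (bool, string) {
	for _, c := range conds {
		if c.Type == conditionSharedResourceWarning {
			return true, c.Message
		}
	}
	return false, ""
}

func main() {
	conds := []ApplicationCondition{
		{Type: conditionSharedResourceWarning, Message: "ConfigMap/configmap1 is part of applications a and b"},
	}
	found, msg := hasSharedResourceCondition(conds)
	fmt.Println(found, msg)
}
```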

@@ -15,6 +15,7 @@ import (
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
argocommon "github.com/argoproj/argo-cd/v2/common"
"github.com/argoproj/argo-cd/v2/controller/testdata"
"github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/v2/reposerver/apiclient"
@@ -190,17 +191,23 @@ func TestSyncComparisonError(t *testing.T) {
}
func TestAppStateManager_SyncAppState(t *testing.T) {
t.Parallel()
type fixture struct {
project *v1alpha1.AppProject
application *v1alpha1.Application
project *v1alpha1.AppProject
controller *ApplicationController
}
setup := func() *fixture {
setup := func(liveObjects map[kube.ResourceKey]*unstructured.Unstructured) *fixture {
app := newFakeApp()
app.Status.OperationState = nil
app.Status.History = nil
if liveObjects == nil {
liveObjects = make(map[kube.ResourceKey]*unstructured.Unstructured)
}
project := &v1alpha1.AppProject{
ObjectMeta: v1.ObjectMeta{
Namespace: test.FakeArgoCDNamespace,
@@ -208,6 +215,12 @@ func TestAppStateManager_SyncAppState(t *testing.T) {
},
Spec: v1alpha1.AppProjectSpec{
SignatureKeys: []v1alpha1.SignatureKey{{KeyID: "test"}},
Destinations: []v1alpha1.ApplicationDestination{
{
Namespace: "*",
Server: "*",
},
},
},
}
data := fakeData{
@@ -218,13 +231,13 @@ func TestAppStateManager_SyncAppState(t *testing.T) {
Server: test.FakeClusterURL,
Revision: "abc123",
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
managedLiveObjs: liveObjects,
}
ctrl := newFakeController(&data, nil)
return &fixture{
project: project,
application: app,
project: project,
controller: ctrl,
}
}
@@ -232,13 +245,23 @@ func TestAppStateManager_SyncAppState(t *testing.T) {
t.Run("will fail the sync if finds shared resources", func(t *testing.T) {
// given
t.Parallel()
f := setup()
syncErrorMsg := "deployment already applied by another application"
condition := v1alpha1.ApplicationCondition{
Type: v1alpha1.ApplicationConditionSharedResourceWarning,
Message: syncErrorMsg,
}
f.application.Status.Conditions = append(f.application.Status.Conditions, condition)
sharedObject := kube.MustToUnstructured(&corev1.ConfigMap{
TypeMeta: v1.TypeMeta{
APIVersion: "v1",
Kind: "ConfigMap",
},
ObjectMeta: v1.ObjectMeta{
Name: "configmap1",
Namespace: "default",
Labels: map[string]string{
argocommon.LabelKeyAppInstance: "another-app",
},
},
})
liveObjects := make(map[kube.ResourceKey]*unstructured.Unstructured)
liveObjects[kube.GetResourceKey(sharedObject)] = sharedObject
f := setup(liveObjects)
// Sync with source unspecified
opState := &v1alpha1.OperationState{Operation: v1alpha1.Operation{
@@ -253,7 +276,7 @@ func TestAppStateManager_SyncAppState(t *testing.T) {
// then
assert.Equal(t, common.OperationFailed, opState.Phase)
assert.Contains(t, opState.Message, syncErrorMsg)
assert.Contains(t, opState.Message, "ConfigMap/configmap1 is part of applications fake-argocd-ns/my-app and another-app")
})
}

View File

@@ -272,6 +272,8 @@ data:
applicationsetcontroller.requeue.after: "3m"
# Enable strict mode for tokenRef in ApplicationSet resources. When enabled, the referenced secret must have a label `argocd.argoproj.io/secret-type` with value `scm-creds`.
applicationsetcontroller.enable.tokenref.strict.mode: "false"
# The maximum number of resources stored in the status of an ApplicationSet. This is a safeguard to prevent the status from growing too large.
applicationsetcontroller.status.max.resources.count: "5000"
## Argo CD Notifications Controller Properties
# Set the logging level. One of: debug|info|warn|error (default "info")


@@ -2,7 +2,7 @@
Argo CD is largely stateless. All data is persisted as Kubernetes objects, which in turn are stored in Kubernetes' etcd. Redis is only used as a throw-away cache and can be lost. When lost, it will be rebuilt without loss of service.
A set of [HA manifests](https://github.com/argoproj/argo-cd/tree/master/manifests/ha) are provided for users who wish to run Argo CD in a highly available manner. This runs more containers, and runs Redis in HA mode.
A set of [HA manifests](https://github.com/argoproj/argo-cd/tree/stable/manifests/ha) are provided for users who wish to run Argo CD in a highly available manner. This runs more containers, and runs Redis in HA mode.
> **NOTE:** The HA installation will require at least three different nodes due to pod anti-affinity rules in the
> specs. Additionally, IPv6 only clusters are not supported.


@@ -70,6 +70,7 @@ argocd-application-controller [flags]
--repo-server-timeout-seconds int Repo server RPC call timeout seconds. (default 60)
--request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
--self-heal-backoff-cap-seconds int Specifies max timeout of exponential backoff between application self heal attempts (default 300)
--self-heal-backoff-cooldown-seconds int Specifies period of time the app needs to stay synced before the self heal backoff can reset (default 330)
--self-heal-backoff-factor int Specifies factor of exponential timeout between application self heal attempts (default 3)
--self-heal-backoff-timeout-seconds int Specifies initial timeout of exponential backoff between self heal attempts (default 2)
--self-heal-timeout-seconds int Specifies timeout between application self heal attempts


@@ -3,4 +3,12 @@
## Upgraded Helm Version
Helm was upgraded to 3.16.2 and the skipSchemaValidation Flag was added to
the [CLI and Application CR](https://argo-cd.readthedocs.io/en/latest/user-guide/helm/#helm-skip-schema-validation).
the [CLI and Application CR](https://argo-cd.readthedocs.io/en/latest/user-guide/helm/#helm-skip-schema-validation).
## Breaking Changes
### Sanitized project API response
Due to security reasons ([GHSA-786q-9hcg-v9ff](https://github.com/argoproj/argo-cd/security/advisories/GHSA-786q-9hcg-v9ff)),
the project API response was sanitized to remove sensitive information. This includes
credentials of project-scoped repositories and clusters.

go.mod

@@ -1,6 +1,6 @@
module github.com/argoproj/argo-cd/v2
go 1.22.0
go 1.24.6
require (
code.gitea.io/sdk/gitea v0.19.0
@@ -10,7 +10,7 @@ require (
github.com/TomOnTime/utfutil v0.0.0-20180511104225-09c41003ee1d
github.com/alicebob/miniredis/v2 v2.33.0
github.com/antonmedv/expr v1.15.1
github.com/argoproj/gitops-engine v0.7.1-0.20250304190342-43fce7ce19f1
github.com/argoproj/gitops-engine v0.7.1-0.20250521000818-c08b0a72c1f1
github.com/argoproj/notifications-engine v0.4.1-0.20241007194503-2fef5c9049fd
github.com/argoproj/pkg v0.13.7-0.20230626144333-d56162821bd1
github.com/aws/aws-sdk-go v1.55.5
@@ -25,7 +25,7 @@ require (
github.com/cyphar/filepath-securejoin v0.3.6
github.com/dustin/go-humanize v1.0.1
github.com/evanphx/json-patch v5.9.0+incompatible
github.com/expr-lang/expr v1.16.9
github.com/expr-lang/expr v1.17.0
github.com/felixge/httpsnoop v1.0.4
github.com/fsnotify/fsnotify v1.8.0
github.com/gfleury/go-bitbucket-v1 v0.0.0-20220301131131-8e7ed04b843e
@@ -39,7 +39,7 @@ require (
github.com/gobwas/glob v0.2.3
github.com/gogits/go-gogs-client v0.0.0-20200905025246-8bb8a50cb355
github.com/gogo/protobuf v1.3.2
github.com/golang-jwt/jwt/v4 v4.5.1
github.com/golang-jwt/jwt/v4 v4.5.2
github.com/golang/protobuf v1.5.4
github.com/google/btree v1.1.3
github.com/google/go-cmp v0.6.0
@@ -68,7 +68,7 @@ require (
github.com/patrickmn/go-cache v2.1.0+incompatible
github.com/prometheus/client_golang v1.20.5
github.com/r3labs/diff v1.1.0
github.com/redis/go-redis/v9 v9.7.1
github.com/redis/go-redis/v9 v9.7.3
github.com/robfig/cron/v3 v3.0.1
github.com/sirupsen/logrus v1.9.3
github.com/skratchdot/open-golang v0.0.0-20160302144031-75fb7ed4208c
@@ -83,12 +83,12 @@ require (
go.opentelemetry.io/otel v1.33.0
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.32.0
go.opentelemetry.io/otel/sdk v1.33.0
golang.org/x/crypto v0.32.0
golang.org/x/crypto v0.37.0
golang.org/x/exp v0.0.0-20241108190413-2d47ceb2692f
golang.org/x/net v0.34.0
golang.org/x/net v0.39.0
golang.org/x/oauth2 v0.24.0
golang.org/x/sync v0.10.0
golang.org/x/term v0.28.0
golang.org/x/sync v0.13.0
golang.org/x/term v0.31.0
golang.org/x/time v0.8.0
google.golang.org/genproto/googleapis/api v0.0.0-20241104194629-dd2ea8efbc28
google.golang.org/grpc v1.68.1
@@ -137,7 +137,7 @@ require (
github.com/fxamacker/cbor/v2 v2.7.0 // indirect
github.com/go-fed/httpsig v1.1.0 // indirect
github.com/go-jose/go-jose/v4 v4.0.2 // indirect
github.com/golang-jwt/jwt/v5 v5.2.1 // indirect
github.com/golang-jwt/jwt/v5 v5.2.2 // indirect
github.com/google/gnostic-models v0.6.8 // indirect
github.com/google/s2a-go v0.1.7 // indirect
github.com/googleapis/enterprise-certificate-proxy v0.3.2 // indirect
@@ -151,8 +151,8 @@ require (
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.53.0 // indirect
go.starlark.net v0.0.0-20230525235612-a134d8f9ddca // indirect
golang.org/x/mod v0.22.0 // indirect
golang.org/x/sys v0.29.0 // indirect
golang.org/x/text v0.21.0 // indirect
golang.org/x/sys v0.32.0 // indirect
golang.org/x/text v0.24.0 // indirect
golang.org/x/tools v0.27.0 // indirect
google.golang.org/api v0.171.0 // indirect
google.golang.org/genproto v0.0.0-20240213162025-012b6fc9bca9 // indirect

go.sum

@@ -88,8 +88,8 @@ github.com/antonmedv/expr v1.15.1/go.mod h1:0E/6TxnOlRNp81GMzX9QfDPAmHo2Phg00y4J
github.com/apache/thrift v0.12.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
github.com/apache/thrift v0.13.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
github.com/appscode/go v0.0.0-20191119085241-0887d8ec2ecc/go.mod h1:OawnOmAL4ZX3YaPdN+8HTNwBveT1jMsqP74moa9XUbE=
github.com/argoproj/gitops-engine v0.7.1-0.20250304190342-43fce7ce19f1 h1:3qG1uQNtjCIunZ5b3qmO1JS8OBLCssll41VwBYYlKio=
github.com/argoproj/gitops-engine v0.7.1-0.20250304190342-43fce7ce19f1/go.mod h1:WsnykM8idYRUnneeT31cM/Fq/ZsjkefCbjiD8ioCJkU=
github.com/argoproj/gitops-engine v0.7.1-0.20250521000818-c08b0a72c1f1 h1:Ze4U6kV49vSzlUBhH10HkO52bYKAIXS4tHr/MlNDfdU=
github.com/argoproj/gitops-engine v0.7.1-0.20250521000818-c08b0a72c1f1/go.mod h1:WsnykM8idYRUnneeT31cM/Fq/ZsjkefCbjiD8ioCJkU=
github.com/argoproj/notifications-engine v0.4.1-0.20241007194503-2fef5c9049fd h1:lOVVoK89j9Nd4+JYJiKAaMNYC1402C0jICROOfUPWn0=
github.com/argoproj/notifications-engine v0.4.1-0.20241007194503-2fef5c9049fd/go.mod h1:N0A4sEws2soZjEpY4hgZpQS8mRIEw6otzwfkgc3g9uQ=
github.com/argoproj/pkg v0.13.7-0.20230626144333-d56162821bd1 h1:qsHwwOJ21K2Ao0xPju1sNuqphyMnMYkyB3ZLoLtxWpo=
@@ -251,8 +251,8 @@ github.com/evanphx/json-patch/v5 v5.9.0 h1:kcBlZQbplgElYIlo/n1hJbls2z/1awpXxpRi0
github.com/evanphx/json-patch/v5 v5.9.0/go.mod h1:VNkHZ/282BpEyt/tObQO8s5CMPmYYq14uClGH4abBuQ=
github.com/exponent-io/jsonpath v0.0.0-20151013193312-d6023ce2651d h1:105gxyaGwCFad8crR9dcMQWvV9Hvulu6hwUh4tWPJnM=
github.com/exponent-io/jsonpath v0.0.0-20151013193312-d6023ce2651d/go.mod h1:ZZMPRZwes7CROmyNKgQzC3XPs6L/G2EJLHddWejkmf4=
github.com/expr-lang/expr v1.16.9 h1:WUAzmR0JNI9JCiF0/ewwHB1gmcGw5wW7nWt8gc6PpCI=
github.com/expr-lang/expr v1.16.9/go.mod h1:8/vRC7+7HBzESEqt5kKpYXxrxkr31SaO8r40VO/1IT4=
github.com/expr-lang/expr v1.17.0 h1:+vpszOyzKLQXC9VF+wA8cVA0tlA984/Wabc/1hF9Whg=
github.com/expr-lang/expr v1.17.0/go.mod h1:8/vRC7+7HBzESEqt5kKpYXxrxkr31SaO8r40VO/1IT4=
github.com/facebookgo/ensure v0.0.0-20160127193407-b4ab57deab51/go.mod h1:Yg+htXGokKKdzcwhuNDwVvN+uBxDGXJ7G/VN1d8fa64=
github.com/facebookgo/stack v0.0.0-20160209184415-751773369052/go.mod h1:UbMTZqLaRiH3MsBH8va0n7s1pQYcu3uTb8G4tygF4Zg=
github.com/facebookgo/subset v0.0.0-20150612182917-8dac2c3c4870/go.mod h1:5tD+neXqOorC30/tWg0LCSkrqj/AR6gu8yY8/fpw1q0=
@@ -390,10 +390,10 @@ github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang-jwt/jwt/v4 v4.0.0/go.mod h1:/xlHOz8bRuivTWchD4jCa+NbatV+wEUSzwAxVc6locg=
github.com/golang-jwt/jwt/v4 v4.5.0/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
github.com/golang-jwt/jwt/v4 v4.5.1 h1:JdqV9zKUdtaa9gdPlywC3aeoEsR681PlKC+4F5gQgeo=
github.com/golang-jwt/jwt/v4 v4.5.1/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
github.com/golang-jwt/jwt/v5 v5.2.1 h1:OuVbFODueb089Lh128TAcimifWaLhJwVflnrgM17wHk=
github.com/golang-jwt/jwt/v5 v5.2.1/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
github.com/golang-jwt/jwt/v4 v4.5.2 h1:YtQM7lnr8iZ+j5q71MGKkNw9Mn7AjHM68uc9g5fXeUI=
github.com/golang-jwt/jwt/v4 v4.5.2/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
github.com/golang-jwt/jwt/v5 v5.2.2 h1:Rl4B7itRWVtYIHFrSNd7vhTiz9UpLdi6gZhZ3wEeDy8=
github.com/golang-jwt/jwt/v5 v5.2.2/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/glog v1.2.2 h1:1+mZ9upx1Dh6FmUTFR1naJ77miKiXgALjWOZ3NVFPmY=
github.com/golang/glog v1.2.2/go.mod h1:6AhwSGph0fcJtXVM/PEHPqZlFeoLxhs7/t5UDAwmO+w=
@@ -843,8 +843,8 @@ github.com/r3labs/diff v1.1.0 h1:V53xhrbTHrWFWq3gI4b94AjgEJOerO1+1l0xyHOBi8M=
github.com/r3labs/diff v1.1.0/go.mod h1:7WjXasNzi0vJetRcB/RqNl5dlIsmXcTTLmF5IoH6Xig=
github.com/rcrowley/go-metrics v0.0.0-20181016184325-3113b8401b8a/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
github.com/redis/go-redis/v9 v9.0.0-rc.4/go.mod h1:Vo3EsyWnicKnSKCA7HhgnvnyA74wOA69Cd2Meli5mmA=
github.com/redis/go-redis/v9 v9.7.1 h1:4LhKRCIduqXqtvCUlaq9c8bdHOkICjDMrr1+Zb3osAc=
github.com/redis/go-redis/v9 v9.7.1/go.mod h1:f6zhXITC7JUJIlPEiBOTXxJgPLdZcA93GewI7inzyWw=
github.com/redis/go-redis/v9 v9.7.3 h1:YpPyAayJV+XErNsatSElgRZZVCwXX9QzkKYNvO7x0wM=
github.com/redis/go-redis/v9 v9.7.3/go.mod h1:bGUrSggJ9X9GUmZpZNEOQKaANxSGgOEBRltRTZHSvrA=
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ=
github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88=
@@ -1046,8 +1046,8 @@ golang.org/x/crypto v0.21.0/go.mod h1:0BP7YvVV9gBbVKyeTG0Gyn+gZm94bibOW5BjDEYAOM
golang.org/x/crypto v0.22.0/go.mod h1:vr6Su+7cTlO45qkww3VDJlzDn0ctJvRgYbC2NvXHt+M=
golang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v8=
golang.org/x/crypto v0.24.0/go.mod h1:Z1PMYSOR5nyMcyAVAIQSKCDwalqy85Aqn1x3Ws4L5DM=
golang.org/x/crypto v0.32.0 h1:euUpcYgM8WcP71gNpTqQCn6rC2t6ULUPiOzfWaXVVfc=
golang.org/x/crypto v0.32.0/go.mod h1:ZnnJkOaASj8g0AjIduWNlq2NRxL0PlBrbKVyZ6V/Ugc=
golang.org/x/crypto v0.37.0 h1:kJNSjF/Xp7kU0iB2Z+9viTPMW4EqqsrywMXLJOOsXSE=
golang.org/x/crypto v0.37.0/go.mod h1:vg+k43peMZ0pUMhYmVAWysMK35e6ioLh3wB8ZCAfbVc=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20200331195152-e8c3332aa8e5/go.mod h1:4M0jN8W1tt0AVLNr8HDosyJCDCDuyL9N9+3m7wDWgKw=
@@ -1132,8 +1132,8 @@ golang.org/x/net v0.23.0/go.mod h1:JKghWKKOSdJwpW2GEx0Ja7fmaKnMsbu+MWVZTokSYmg=
golang.org/x/net v0.24.0/go.mod h1:2Q7sJY5mzlzWjKtYUEXSlBWCdyaioyXzRB2RtU8KVE8=
golang.org/x/net v0.25.0/go.mod h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM=
golang.org/x/net v0.26.0/go.mod h1:5YKkiSynbBIh3p6iOc/vibscux0x38BZDkn8sCUPxHE=
golang.org/x/net v0.34.0 h1:Mb7Mrk043xzHgnRM88suvJFwzVrRfHEHJEl5/71CKw0=
golang.org/x/net v0.34.0/go.mod h1:di0qlW3YNM5oh6GqDGQr92MyTozJPmybPK4Ev/Gm31k=
golang.org/x/net v0.39.0 h1:ZCu7HMWDxpXpaiKdhzIfaltL9Lp31x/3fCP11bc6/fY=
golang.org/x/net v0.39.0/go.mod h1:X7NRbYVEA+ewNkCNyJ513WmMdQ3BineSwVtN2zD/d+E=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -1157,8 +1157,8 @@ golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y=
golang.org/x/sync v0.5.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.6.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.7.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.10.0 h1:3NQrjDixjgGwUOCaF8w2+VYHv0Ve/vGYSbdkTa98gmQ=
golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.13.0 h1:AauUjRAJ9OSnvULf/ARrrVywoJDy0YS2AwQ98I37610=
golang.org/x/sync v0.13.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -1228,8 +1228,8 @@ golang.org/x/sys v0.18.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.19.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.20.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.21.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.29.0 h1:TPYlXGxvx1MGTn2GiZDhnjPA9wZzZeGKHHmKhHYvgaU=
golang.org/x/sys v0.29.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.32.0 h1:s77OFDvIQeibCmezSnk/q6iAfkdiQaJi4VzroCFrN20=
golang.org/x/sys v0.32.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/telemetry v0.0.0-20240208230135-b75ee8823808/go.mod h1:KG1lNk5ZFNssSZLrpVb4sMXKMpGwGXOxSG3rnu2gZQQ=
golang.org/x/telemetry v0.0.0-20240228155512-f48c80bd79b2/go.mod h1:TeRTkGYfJXctD9OcfyVLyj2J3IxLnKwHJR8f4D8a3YE=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
@@ -1254,8 +1254,8 @@ golang.org/x/term v0.18.0/go.mod h1:ILwASektA3OnRv7amZ1xhE/KTR+u50pbXfZ03+6Nx58=
golang.org/x/term v0.19.0/go.mod h1:2CuTdWZ7KHSQwUzKva0cbMg6q2DMI3Mmxp+gKJbskEk=
golang.org/x/term v0.20.0/go.mod h1:8UkIAJTvZgivsXaD6/pH6U9ecQzZ45awqEOzuCvwpFY=
golang.org/x/term v0.21.0/go.mod h1:ooXLefLobQVslOqselCNF4SxFAaoS6KujMbsGzSDmX0=
golang.org/x/term v0.28.0 h1:/Ts8HFuMR2E6IP/jlo7QVLZHggjKQbhu/7H0LJFr3Gg=
golang.org/x/term v0.28.0/go.mod h1:Sw/lC2IAUZ92udQNf3WodGtn4k/XoLyZoh8v/8uiwek=
golang.org/x/term v0.31.0 h1:erwDkOK1Msy6offm1mOgvspSkslFnIGsFnxOKoufg3o=
golang.org/x/term v0.31.0/go.mod h1:R4BeIy7D95HzImkxGkTW1UQTtP54tio2RyHz7PwK0aw=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
@@ -1274,8 +1274,8 @@ golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.15.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.16.0/go.mod h1:GhwF1Be+LQoKShO3cGOHzqOgRrGaYc9AvblQOmPVHnI=
golang.org/x/text v0.21.0 h1:zyQAAkrwaneQ066sspRyJaG9VNi/YJ1NfzcGB3hZ/qo=
golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
golang.org/x/text v0.24.0 h1:dd5Bzh4yt5KYA8f9CJHCP4FB4D51c2c6JvN37xJJkJ0=
golang.org/x/text v0.24.0/go.mod h1:L8rBsPeo2pSS+xqN0d5u2ikmjtmoJbDBT1b7nHvFCdU=
golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.3.0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=


@@ -1,6 +1,6 @@
#!/usr/bin/env bash
declare -a services=("controller" "api-server" "redis" "repo-server" "ui")
declare -a services=("controller" "api-server" "redis" "repo-server" "cmp-server" "ui" "applicationset-controller" "commit-server" "notification" "dex" "git-server" "helm-registry" "dev-mounter")
EXCLUDE=$exclude


@@ -54,4 +54,4 @@ go install github.com/go-swagger/go-swagger/cmd/swagger@v0.28.0
go install golang.org/x/tools/cmd/goimports@v0.1.8
# mockery is used to generate mock
go install github.com/vektra/mockery/v2@v2.43.2
go install github.com/vektra/mockery/v2@v2.53.4


@@ -2,6 +2,6 @@
set -eux -o pipefail
# renovate: datasource=go packageName=github.com/golangci/golangci-lint
GOLANGCI_LINT_VERSION=1.62.2
GOLANGCI_LINT_VERSION=2.1.6
GO111MODULE=on go install "github.com/golangci/golangci-lint/cmd/golangci-lint@v${GOLANGCI_LINT_VERSION}"


@@ -35,6 +35,10 @@ cd ${SRCROOT}/manifests/base && $KUSTOMIZE edit set image quay.io/argoproj/argoc
cd ${SRCROOT}/manifests/ha/base && $KUSTOMIZE edit set image quay.io/argoproj/argocd=${IMAGE_NAMESPACE}/argocd:${IMAGE_TAG}
cd ${SRCROOT}/manifests/core-install && $KUSTOMIZE edit set image quay.io/argoproj/argocd=${IMAGE_NAMESPACE}/argocd:${IMAGE_TAG}
+# Because commit-server is added as a resource outside the base, we have to explicitly set the image override here.
+# If/when commit-server is added to the base, this can be removed.
+cd ${SRCROOT}/manifests/base/commit-server && $KUSTOMIZE edit set image quay.io/argoproj/argocd=${IMAGE_NAMESPACE}/argocd:${IMAGE_TAG}
echo "${AUTOGENMSG}" > "${SRCROOT}/manifests/install.yaml"
$KUSTOMIZE build "${SRCROOT}/manifests/cluster-install" >> "${SRCROOT}/manifests/install.yaml"


@@ -115,6 +115,12 @@ spec:
name: argocd-cmd-params-cm
key: controller.self.heal.backoff.cap.seconds
optional: true
+- name: ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_BACKOFF_COOLDOWN_SECONDS
+valueFrom:
+configMapKeyRef:
+name: argocd-cmd-params-cm
+key: controller.self.heal.backoff.cooldown.seconds
+optional: true
- name: ARGOCD_APPLICATION_CONTROLLER_SYNC_TIMEOUT
valueFrom:
configMapKeyRef:
@@ -241,6 +247,12 @@ spec:
name: argocd-cmd-params-cm
key: controller.cluster.cache.events.processing.interval
optional: true
+- name: ARGOCD_APPLICATION_CONTROLLER_COMMIT_SERVER
+valueFrom:
+configMapKeyRef:
+name: argocd-cmd-params-cm
+key: commit.server
+optional: true
image: quay.io/argoproj/argocd:latest
imagePullPolicy: Always
name: argocd-application-controller


@@ -118,6 +118,12 @@ spec:
name: argocd-cmd-params-cm
key: controller.self.heal.backoff.cap.seconds
optional: true
+- name: ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_BACKOFF_COOLDOWN_SECONDS
+valueFrom:
+configMapKeyRef:
+name: argocd-cmd-params-cm
+key: controller.self.heal.backoff.cooldown.seconds
+optional: true
- name: ARGOCD_APPLICATION_CONTROLLER_SYNC_TIMEOUT
valueFrom:
configMapKeyRef:
@@ -250,6 +256,12 @@ spec:
name: argocd-cmd-params-cm
key: controller.cluster.cache.events.processing.interval
optional: true
+- name: ARGOCD_APPLICATION_CONTROLLER_COMMIT_SERVER
+valueFrom:
+configMapKeyRef:
+name: argocd-cmd-params-cm
+key: commit.server
+optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:latest


@@ -175,6 +175,12 @@ spec:
name: argocd-cmd-params-cm
key: applicationsetcontroller.requeue.after
optional: true
+- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
+valueFrom:
+configMapKeyRef:
+name: argocd-cmd-params-cm
+key: applicationsetcontroller.status.max.resources.count
+optional: true
volumeMounts:
- mountPath: /app/config/ssh
name: ssh-known-hosts


@@ -6,3 +6,10 @@ resources:
- argocd-commit-server-deployment.yaml
- argocd-commit-server-service.yaml
- argocd-commit-server-network-policy.yaml
+# Because commit-server is added as a resource outside the base, we have to explicitly set the image override here.
+# If/when commit-server is added to the base, this can be removed.
+images:
+- name: quay.io/argoproj/argocd
+newName: quay.io/argoproj/argocd
+newTag: v2.14.21


@@ -5,7 +5,7 @@ kind: Kustomization
images:
- name: quay.io/argoproj/argocd
newName: quay.io/argoproj/argocd
-newTag: v2.14.6
+newTag: v2.14.21
resources:
- ./application-controller
- ./dex


@@ -40,7 +40,7 @@ spec:
serviceAccountName: argocd-redis
containers:
- name: redis
-image: redis:7.0.15-alpine
+image: redis:7.2.11-alpine
imagePullPolicy: Always
args:
- "--save"


@@ -24165,7 +24165,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
-image: quay.io/argoproj/argocd:v2.14.6
+- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
+valueFrom:
+configMapKeyRef:
+key: applicationsetcontroller.status.max.resources.count
+name: argocd-cmd-params-cm
+optional: true
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -24285,7 +24291,7 @@ spec:
key: commitserver.log.level
name: argocd-cmd-params-cm
optional: true
-image: quay.io/argoproj/argocd:latest
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -24331,7 +24337,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
-image: quay.io/argoproj/argocd:latest
+image: quay.io/argoproj/argocd:v2.14.21
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -24419,7 +24425,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
-image: redis:7.0.15-alpine
+image: redis:7.2.11-alpine
imagePullPolicy: Always
name: redis
ports:
@@ -24435,7 +24441,7 @@ spec:
- argocd
- admin
- redis-initial-password
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -24696,7 +24702,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -24748,7 +24754,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -24932,6 +24938,12 @@ spec:
key: controller.self.heal.backoff.cap.seconds
name: argocd-cmd-params-cm
optional: true
+- name: ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_BACKOFF_COOLDOWN_SECONDS
+valueFrom:
+configMapKeyRef:
+key: controller.self.heal.backoff.cooldown.seconds
+name: argocd-cmd-params-cm
+optional: true
- name: ARGOCD_APPLICATION_CONTROLLER_SYNC_TIMEOUT
valueFrom:
configMapKeyRef:
@@ -25064,9 +25076,15 @@ spec:
key: controller.cluster.cache.events.processing.interval
name: argocd-cmd-params-cm
optional: true
+- name: ARGOCD_APPLICATION_CONTROLLER_COMMIT_SERVER
+valueFrom:
+configMapKeyRef:
+key: commit.server
+name: argocd-cmd-params-cm
+optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
name: argocd-application-controller
ports:


@@ -24133,7 +24133,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
-image: quay.io/argoproj/argocd:v2.14.6
+- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
+valueFrom:
+configMapKeyRef:
+key: applicationsetcontroller.status.max.resources.count
+name: argocd-cmd-params-cm
+optional: true
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -24237,7 +24243,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
-image: redis:7.0.15-alpine
+image: redis:7.2.11-alpine
imagePullPolicy: Always
name: redis
ports:
@@ -24253,7 +24259,7 @@ spec:
- argocd
- admin
- redis-initial-password
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -24514,7 +24520,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -24566,7 +24572,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -24750,6 +24756,12 @@ spec:
key: controller.self.heal.backoff.cap.seconds
name: argocd-cmd-params-cm
optional: true
+- name: ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_BACKOFF_COOLDOWN_SECONDS
+valueFrom:
+configMapKeyRef:
+key: controller.self.heal.backoff.cooldown.seconds
+name: argocd-cmd-params-cm
+optional: true
- name: ARGOCD_APPLICATION_CONTROLLER_SYNC_TIMEOUT
valueFrom:
configMapKeyRef:
@@ -24882,9 +24894,15 @@ spec:
key: controller.cluster.cache.events.processing.interval
name: argocd-cmd-params-cm
optional: true
+- name: ARGOCD_APPLICATION_CONTROLLER_COMMIT_SERVER
+valueFrom:
+configMapKeyRef:
+key: commit.server
+name: argocd-cmd-params-cm
+optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
name: argocd-application-controller
ports:


@@ -12,4 +12,4 @@ resources:
images:
- name: quay.io/argoproj/argocd
newName: quay.io/argoproj/argocd
-newTag: v2.14.6
+newTag: v2.14.21


@@ -12,7 +12,7 @@ patches:
images:
- name: quay.io/argoproj/argocd
newName: quay.io/argoproj/argocd
-newTag: v2.14.6
+newTag: v2.14.21
resources:
- ../../base/application-controller
- ../../base/applicationset-controller


@@ -1219,7 +1219,7 @@ spec:
automountServiceAccountToken: false
initContainers:
- name: config-init
-image: public.ecr.aws/docker/library/redis:7.0.15-alpine
+image: public.ecr.aws/docker/library/redis:7.2.11-alpine
imagePullPolicy: IfNotPresent
resources:
{}
@@ -1258,7 +1258,7 @@ spec:
containers:
- name: redis
-image: public.ecr.aws/docker/library/redis:7.0.15-alpine
+image: public.ecr.aws/docker/library/redis:7.2.11-alpine
imagePullPolicy: IfNotPresent
command:
- redis-server
@@ -1321,7 +1321,7 @@ spec:
- /bin/sh
- /readonly-config/trigger-failover-if-master.sh
- name: sentinel
-image: public.ecr.aws/docker/library/redis:7.0.15-alpine
+image: public.ecr.aws/docker/library/redis:7.2.11-alpine
imagePullPolicy: IfNotPresent
command:
- redis-sentinel
@@ -1383,7 +1383,7 @@ spec:
- sleep 30; redis-cli -p 26379 sentinel reset argocd
- name: split-brain-fix
-image: public.ecr.aws/docker/library/redis:7.0.15-alpine
+image: public.ecr.aws/docker/library/redis:7.2.11-alpine
imagePullPolicy: IfNotPresent
command:
- sh


@@ -23,7 +23,7 @@ redis-ha:
metrics:
enabled: true
image:
-tag: 7.0.15-alpine
+tag: 7.2.11-alpine
containerSecurityContext: null
sentinel:
bind: '0.0.0.0'


@@ -25506,7 +25506,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
-image: quay.io/argoproj/argocd:v2.14.6
+- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
+valueFrom:
+configMapKeyRef:
+key: applicationsetcontroller.status.max.resources.count
+name: argocd-cmd-params-cm
+optional: true
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -25626,7 +25632,7 @@ spec:
key: commitserver.log.level
name: argocd-cmd-params-cm
optional: true
-image: quay.io/argoproj/argocd:latest
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -25672,7 +25678,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
-image: quay.io/argoproj/argocd:latest
+image: quay.io/argoproj/argocd:v2.14.21
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -25793,7 +25799,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -25883,7 +25889,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -26004,7 +26010,7 @@ spec:
- argocd
- admin
- redis-initial-password
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -26291,7 +26297,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -26343,7 +26349,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -26705,7 +26711,7 @@ spec:
key: hydrator.enabled
name: argocd-cmd-params-cm
optional: true
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -26925,6 +26931,12 @@ spec:
key: controller.self.heal.backoff.cap.seconds
name: argocd-cmd-params-cm
optional: true
+- name: ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_BACKOFF_COOLDOWN_SECONDS
+valueFrom:
+configMapKeyRef:
+key: controller.self.heal.backoff.cooldown.seconds
+name: argocd-cmd-params-cm
+optional: true
- name: ARGOCD_APPLICATION_CONTROLLER_SYNC_TIMEOUT
valueFrom:
configMapKeyRef:
@@ -27057,9 +27069,15 @@ spec:
key: controller.cluster.cache.events.processing.interval
name: argocd-cmd-params-cm
optional: true
+- name: ARGOCD_APPLICATION_CONTROLLER_COMMIT_SERVER
+valueFrom:
+configMapKeyRef:
+key: commit.server
+name: argocd-cmd-params-cm
+optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
name: argocd-application-controller
ports:
@@ -27157,7 +27175,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
-image: public.ecr.aws/docker/library/redis:7.0.15-alpine
+image: public.ecr.aws/docker/library/redis:7.2.11-alpine
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
@@ -27217,7 +27235,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
-image: public.ecr.aws/docker/library/redis:7.0.15-alpine
+image: public.ecr.aws/docker/library/redis:7.2.11-alpine
imagePullPolicy: IfNotPresent
lifecycle:
postStart:
@@ -27281,7 +27299,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
-image: public.ecr.aws/docker/library/redis:7.0.15-alpine
+image: public.ecr.aws/docker/library/redis:7.2.11-alpine
imagePullPolicy: IfNotPresent
name: split-brain-fix
resources: {}
@@ -27316,7 +27334,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
-image: public.ecr.aws/docker/library/redis:7.0.15-alpine
+image: public.ecr.aws/docker/library/redis:7.2.11-alpine
imagePullPolicy: IfNotPresent
name: config-init
securityContext:


@@ -25476,7 +25476,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
-image: quay.io/argoproj/argocd:v2.14.6
+- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
+valueFrom:
+configMapKeyRef:
+key: applicationsetcontroller.status.max.resources.count
+name: argocd-cmd-params-cm
+optional: true
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -25613,7 +25619,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -25703,7 +25709,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -25824,7 +25830,7 @@ spec:
- argocd
- admin
- redis-initial-password
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -26111,7 +26117,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -26163,7 +26169,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -26525,7 +26531,7 @@ spec:
key: hydrator.enabled
name: argocd-cmd-params-cm
optional: true
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -26745,6 +26751,12 @@ spec:
key: controller.self.heal.backoff.cap.seconds
name: argocd-cmd-params-cm
optional: true
+- name: ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_BACKOFF_COOLDOWN_SECONDS
+valueFrom:
+configMapKeyRef:
+key: controller.self.heal.backoff.cooldown.seconds
+name: argocd-cmd-params-cm
+optional: true
- name: ARGOCD_APPLICATION_CONTROLLER_SYNC_TIMEOUT
valueFrom:
configMapKeyRef:
@@ -26877,9 +26889,15 @@ spec:
key: controller.cluster.cache.events.processing.interval
name: argocd-cmd-params-cm
optional: true
+- name: ARGOCD_APPLICATION_CONTROLLER_COMMIT_SERVER
+valueFrom:
+configMapKeyRef:
+key: commit.server
+name: argocd-cmd-params-cm
+optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
name: argocd-application-controller
ports:
@@ -26977,7 +26995,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
-image: public.ecr.aws/docker/library/redis:7.0.15-alpine
+image: public.ecr.aws/docker/library/redis:7.2.11-alpine
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
@@ -27037,7 +27055,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
-image: public.ecr.aws/docker/library/redis:7.0.15-alpine
+image: public.ecr.aws/docker/library/redis:7.2.11-alpine
imagePullPolicy: IfNotPresent
lifecycle:
postStart:
@@ -27101,7 +27119,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
-image: public.ecr.aws/docker/library/redis:7.0.15-alpine
+image: public.ecr.aws/docker/library/redis:7.2.11-alpine
imagePullPolicy: IfNotPresent
name: split-brain-fix
resources: {}
@@ -27136,7 +27154,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
-image: public.ecr.aws/docker/library/redis:7.0.15-alpine
+image: public.ecr.aws/docker/library/redis:7.2.11-alpine
imagePullPolicy: IfNotPresent
name: config-init
securityContext:


@@ -1736,7 +1736,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
-image: quay.io/argoproj/argocd:v2.14.6
+- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
+valueFrom:
+configMapKeyRef:
+key: applicationsetcontroller.status.max.resources.count
+name: argocd-cmd-params-cm
+optional: true
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -1856,7 +1862,7 @@ spec:
key: commitserver.log.level
name: argocd-cmd-params-cm
optional: true
-image: quay.io/argoproj/argocd:latest
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -1902,7 +1908,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
-image: quay.io/argoproj/argocd:latest
+image: quay.io/argoproj/argocd:v2.14.21
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -2023,7 +2029,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -2113,7 +2119,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -2234,7 +2240,7 @@ spec:
- argocd
- admin
- redis-initial-password
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -2521,7 +2527,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -2573,7 +2579,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -2935,7 +2941,7 @@ spec:
key: hydrator.enabled
name: argocd-cmd-params-cm
optional: true
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -3155,6 +3161,12 @@ spec:
key: controller.self.heal.backoff.cap.seconds
name: argocd-cmd-params-cm
optional: true
+- name: ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_BACKOFF_COOLDOWN_SECONDS
+valueFrom:
+configMapKeyRef:
+key: controller.self.heal.backoff.cooldown.seconds
+name: argocd-cmd-params-cm
+optional: true
- name: ARGOCD_APPLICATION_CONTROLLER_SYNC_TIMEOUT
valueFrom:
configMapKeyRef:
@@ -3287,9 +3299,15 @@ spec:
key: controller.cluster.cache.events.processing.interval
name: argocd-cmd-params-cm
optional: true
+- name: ARGOCD_APPLICATION_CONTROLLER_COMMIT_SERVER
+valueFrom:
+configMapKeyRef:
+key: commit.server
+name: argocd-cmd-params-cm
+optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
name: argocd-application-controller
ports:
@@ -3387,7 +3405,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
-image: public.ecr.aws/docker/library/redis:7.0.15-alpine
+image: public.ecr.aws/docker/library/redis:7.2.11-alpine
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
@@ -3447,7 +3465,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
-image: public.ecr.aws/docker/library/redis:7.0.15-alpine
+image: public.ecr.aws/docker/library/redis:7.2.11-alpine
imagePullPolicy: IfNotPresent
lifecycle:
postStart:
@@ -3511,7 +3529,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
-image: public.ecr.aws/docker/library/redis:7.0.15-alpine
+image: public.ecr.aws/docker/library/redis:7.2.11-alpine
imagePullPolicy: IfNotPresent
name: split-brain-fix
resources: {}
@@ -3546,7 +3564,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
-image: public.ecr.aws/docker/library/redis:7.0.15-alpine
+image: public.ecr.aws/docker/library/redis:7.2.11-alpine
imagePullPolicy: IfNotPresent
name: config-init
securityContext:


@@ -1706,7 +1706,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
-image: quay.io/argoproj/argocd:v2.14.6
+- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
+valueFrom:
+configMapKeyRef:
+key: applicationsetcontroller.status.max.resources.count
+name: argocd-cmd-params-cm
+optional: true
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -1843,7 +1849,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -1933,7 +1939,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -2054,7 +2060,7 @@ spec:
- argocd
- admin
- redis-initial-password
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -2341,7 +2347,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -2393,7 +2399,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -2755,7 +2761,7 @@ spec:
key: hydrator.enabled
name: argocd-cmd-params-cm
optional: true
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -2975,6 +2981,12 @@ spec:
key: controller.self.heal.backoff.cap.seconds
name: argocd-cmd-params-cm
optional: true
+- name: ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_BACKOFF_COOLDOWN_SECONDS
+valueFrom:
+configMapKeyRef:
+key: controller.self.heal.backoff.cooldown.seconds
+name: argocd-cmd-params-cm
+optional: true
- name: ARGOCD_APPLICATION_CONTROLLER_SYNC_TIMEOUT
valueFrom:
configMapKeyRef:
@@ -3107,9 +3119,15 @@ spec:
key: controller.cluster.cache.events.processing.interval
name: argocd-cmd-params-cm
optional: true
+- name: ARGOCD_APPLICATION_CONTROLLER_COMMIT_SERVER
+valueFrom:
+configMapKeyRef:
+key: commit.server
+name: argocd-cmd-params-cm
+optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
name: argocd-application-controller
ports:
@@ -3207,7 +3225,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
-image: public.ecr.aws/docker/library/redis:7.0.15-alpine
+image: public.ecr.aws/docker/library/redis:7.2.11-alpine
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
@@ -3267,7 +3285,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
-image: public.ecr.aws/docker/library/redis:7.0.15-alpine
+image: public.ecr.aws/docker/library/redis:7.2.11-alpine
imagePullPolicy: IfNotPresent
lifecycle:
postStart:
@@ -3331,7 +3349,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
-image: public.ecr.aws/docker/library/redis:7.0.15-alpine
+image: public.ecr.aws/docker/library/redis:7.2.11-alpine
imagePullPolicy: IfNotPresent
name: split-brain-fix
resources: {}
@@ -3366,7 +3384,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
-image: public.ecr.aws/docker/library/redis:7.0.15-alpine
+image: public.ecr.aws/docker/library/redis:7.2.11-alpine
imagePullPolicy: IfNotPresent
name: config-init
securityContext:


@@ -24625,7 +24625,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
-image: quay.io/argoproj/argocd:v2.14.6
+- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
+valueFrom:
+configMapKeyRef:
+key: applicationsetcontroller.status.max.resources.count
+name: argocd-cmd-params-cm
+optional: true
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -24745,7 +24751,7 @@ spec:
key: commitserver.log.level
name: argocd-cmd-params-cm
optional: true
-image: quay.io/argoproj/argocd:latest
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -24791,7 +24797,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
-image: quay.io/argoproj/argocd:latest
+image: quay.io/argoproj/argocd:v2.14.21
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -24912,7 +24918,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -25002,7 +25008,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -25088,7 +25094,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
-image: redis:7.0.15-alpine
+image: redis:7.2.11-alpine
imagePullPolicy: Always
name: redis
ports:
@@ -25104,7 +25110,7 @@ spec:
- argocd
- admin
- redis-initial-password
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -25365,7 +25371,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -25417,7 +25423,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -25777,7 +25783,7 @@ spec:
key: hydrator.enabled
name: argocd-cmd-params-cm
optional: true
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -25997,6 +26003,12 @@ spec:
key: controller.self.heal.backoff.cap.seconds
name: argocd-cmd-params-cm
optional: true
+- name: ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_BACKOFF_COOLDOWN_SECONDS
+valueFrom:
+configMapKeyRef:
+key: controller.self.heal.backoff.cooldown.seconds
+name: argocd-cmd-params-cm
+optional: true
- name: ARGOCD_APPLICATION_CONTROLLER_SYNC_TIMEOUT
valueFrom:
configMapKeyRef:
@@ -26129,9 +26141,15 @@ spec:
key: controller.cluster.cache.events.processing.interval
name: argocd-cmd-params-cm
optional: true
+- name: ARGOCD_APPLICATION_CONTROLLER_COMMIT_SERVER
+valueFrom:
+configMapKeyRef:
+key: commit.server
+name: argocd-cmd-params-cm
+optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
name: argocd-application-controller
ports:

manifests/install.yaml generated

@@ -24593,7 +24593,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
-image: quay.io/argoproj/argocd:v2.14.6
+- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
+valueFrom:
+configMapKeyRef:
+key: applicationsetcontroller.status.max.resources.count
+name: argocd-cmd-params-cm
+optional: true
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -24730,7 +24736,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -24820,7 +24826,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -24906,7 +24912,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
-image: redis:7.0.15-alpine
+image: redis:7.2.11-alpine
imagePullPolicy: Always
name: redis
ports:
@@ -24922,7 +24928,7 @@ spec:
- argocd
- admin
- redis-initial-password
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -25183,7 +25189,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -25235,7 +25241,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -25595,7 +25601,7 @@ spec:
key: hydrator.enabled
name: argocd-cmd-params-cm
optional: true
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -25815,6 +25821,12 @@ spec:
key: controller.self.heal.backoff.cap.seconds
name: argocd-cmd-params-cm
optional: true
+- name: ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_BACKOFF_COOLDOWN_SECONDS
+valueFrom:
+configMapKeyRef:
+key: controller.self.heal.backoff.cooldown.seconds
+name: argocd-cmd-params-cm
+optional: true
- name: ARGOCD_APPLICATION_CONTROLLER_SYNC_TIMEOUT
valueFrom:
configMapKeyRef:
@@ -25947,9 +25959,15 @@ spec:
key: controller.cluster.cache.events.processing.interval
name: argocd-cmd-params-cm
optional: true
+- name: ARGOCD_APPLICATION_CONTROLLER_COMMIT_SERVER
+valueFrom:
+configMapKeyRef:
+key: commit.server
+name: argocd-cmd-params-cm
+optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
-image: quay.io/argoproj/argocd:v2.14.6
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
name: argocd-application-controller
ports:


@@ -855,7 +855,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
-image: quay.io/argoproj/argocd:v2.14.6
+- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
+valueFrom:
+configMapKeyRef:
+key: applicationsetcontroller.status.max.resources.count
+name: argocd-cmd-params-cm
+optional: true
+image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -975,7 +981,7 @@ spec:
key: commitserver.log.level
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -1021,7 +1027,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:latest
image: quay.io/argoproj/argocd:v2.14.21
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -1142,7 +1148,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:v2.14.6
image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -1232,7 +1238,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v2.14.6
image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -1318,7 +1324,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: redis:7.0.15-alpine
image: redis:7.2.11-alpine
imagePullPolicy: Always
name: redis
ports:
@@ -1334,7 +1340,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:v2.14.6
image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -1595,7 +1601,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:v2.14.6
image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -1647,7 +1653,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:v2.14.6
image: quay.io/argoproj/argocd:v2.14.21
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -2007,7 +2013,7 @@ spec:
key: hydrator.enabled
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v2.14.6
image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -2227,6 +2233,12 @@ spec:
key: controller.self.heal.backoff.cap.seconds
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_BACKOFF_COOLDOWN_SECONDS
valueFrom:
configMapKeyRef:
key: controller.self.heal.backoff.cooldown.seconds
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_APPLICATION_CONTROLLER_SYNC_TIMEOUT
valueFrom:
configMapKeyRef:
@@ -2359,9 +2371,15 @@ spec:
key: controller.cluster.cache.events.processing.interval
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_APPLICATION_CONTROLLER_COMMIT_SERVER
valueFrom:
configMapKeyRef:
key: commit.server
name: argocd-cmd-params-cm
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:v2.14.6
image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
name: argocd-application-controller
ports:


@@ -823,7 +823,13 @@ spec:
key: applicationsetcontroller.requeue.after
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v2.14.6
- name: ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT
valueFrom:
configMapKeyRef:
key: applicationsetcontroller.status.max.resources.count
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
name: argocd-applicationset-controller
ports:
@@ -960,7 +966,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:v2.14.6
image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
name: copyutil
securityContext:
@@ -1050,7 +1056,7 @@ spec:
key: notificationscontroller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v2.14.6
image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
livenessProbe:
tcpSocket:
@@ -1136,7 +1142,7 @@ spec:
secretKeyRef:
key: auth
name: argocd-redis
image: redis:7.0.15-alpine
image: redis:7.2.11-alpine
imagePullPolicy: Always
name: redis
ports:
@@ -1152,7 +1158,7 @@ spec:
- argocd
- admin
- redis-initial-password
image: quay.io/argoproj/argocd:v2.14.6
image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: IfNotPresent
name: secret-init
securityContext:
@@ -1413,7 +1419,7 @@ spec:
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:v2.14.6
image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
@@ -1465,7 +1471,7 @@ spec:
- -n
- /usr/local/bin/argocd
- /var/run/argocd/argocd-cmp-server
image: quay.io/argoproj/argocd:v2.14.6
image: quay.io/argoproj/argocd:v2.14.21
name: copyutil
securityContext:
allowPrivilegeEscalation: false
@@ -1825,7 +1831,7 @@ spec:
key: hydrator.enabled
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v2.14.6
image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -2045,6 +2051,12 @@ spec:
key: controller.self.heal.backoff.cap.seconds
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_BACKOFF_COOLDOWN_SECONDS
valueFrom:
configMapKeyRef:
key: controller.self.heal.backoff.cooldown.seconds
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_APPLICATION_CONTROLLER_SYNC_TIMEOUT
valueFrom:
configMapKeyRef:
@@ -2177,9 +2189,15 @@ spec:
key: controller.cluster.cache.events.processing.interval
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_APPLICATION_CONTROLLER_COMMIT_SERVER
valueFrom:
configMapKeyRef:
key: commit.server
name: argocd-cmd-params-cm
optional: true
- name: KUBECACHEDIR
value: /tmp/kubecache
image: quay.io/argoproj/argocd:v2.14.6
image: quay.io/argoproj/argocd:v2.14.21
imagePullPolicy: Always
name: argocd-application-controller
ports:


@@ -1,4 +1,4 @@
// Code generated by mockery v2.43.2. DO NOT EDIT.
// Code generated by mockery v2.53.4. DO NOT EDIT.
package mocks


@@ -1,4 +1,4 @@
// Code generated by mockery v2.43.2. DO NOT EDIT.
// Code generated by mockery v2.53.4. DO NOT EDIT.
package mocks


@@ -1,4 +1,4 @@
// Code generated by mockery v2.43.2. DO NOT EDIT.
// Code generated by mockery v2.53.4. DO NOT EDIT.
package mocks


@@ -290,7 +290,6 @@ func (m *Repository) Sanitized() *Repository {
Repo: m.Repo,
Type: m.Type,
Name: m.Name,
Username: m.Username,
Insecure: m.IsInsecure(),
EnableLFS: m.EnableLFS,
EnableOCI: m.EnableOCI,


@@ -2,6 +2,7 @@ package v1alpha1
import (
"encoding/json"
"errors"
"fmt"
"maps"
"math"
@@ -25,7 +26,7 @@ import (
log "github.com/sirupsen/logrus"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
v1 "k8s.io/api/core/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
@@ -38,6 +39,8 @@ import (
"k8s.io/client-go/tools/clientcmd/api"
"sigs.k8s.io/yaml"
"github.com/argoproj/argo-cd/v2/util/rbac"
"github.com/argoproj/argo-cd/v2/common"
"github.com/argoproj/argo-cd/v2/util/env"
"github.com/argoproj/argo-cd/v2/util/helm"
@@ -166,9 +169,8 @@ func (e Env) Envsubst(s string) string {
// allow escaping $ with $$
if s == "$" {
return "$"
} else {
return valByEnv[s]
}
return valByEnv[s]
})
}
@@ -201,12 +203,12 @@ type ApplicationSource struct {
// ApplicationSources contains list of required information about the sources of an application
type ApplicationSources []ApplicationSource
func (s ApplicationSources) Equals(other ApplicationSources) bool {
if len(s) != len(other) {
func (a ApplicationSources) Equals(other ApplicationSources) bool {
if len(a) != len(other) {
return false
}
for i := range s {
if !s[i].Equals(&other[i]) {
for i := range a {
if !a[i].Equals(&other[i]) {
return false
}
}
@@ -218,154 +220,153 @@ func (a ApplicationSources) IsZero() bool {
return len(a) == 0
}
func (a *ApplicationSpec) GetSource() ApplicationSource {
if a.SourceHydrator != nil {
return a.SourceHydrator.GetSyncSource()
func (spec *ApplicationSpec) GetSource() ApplicationSource {
if spec.SourceHydrator != nil {
return spec.SourceHydrator.GetSyncSource()
}
// if Application has multiple sources, return the first source in sources
if a.HasMultipleSources() {
return a.Sources[0]
if spec.HasMultipleSources() {
return spec.Sources[0]
}
if a.Source != nil {
return *a.Source
if spec.Source != nil {
return *spec.Source
}
return ApplicationSource{}
}
// GetHydrateToSource returns the hydrateTo source if it exists, otherwise returns the sync source.
func (a *ApplicationSpec) GetHydrateToSource() ApplicationSource {
if a.SourceHydrator != nil {
targetRevision := a.SourceHydrator.SyncSource.TargetBranch
if a.SourceHydrator.HydrateTo != nil {
targetRevision = a.SourceHydrator.HydrateTo.TargetBranch
func (spec *ApplicationSpec) GetHydrateToSource() ApplicationSource {
if spec.SourceHydrator != nil {
targetRevision := spec.SourceHydrator.SyncSource.TargetBranch
if spec.SourceHydrator.HydrateTo != nil {
targetRevision = spec.SourceHydrator.HydrateTo.TargetBranch
}
return ApplicationSource{
RepoURL: a.SourceHydrator.DrySource.RepoURL,
Path: a.SourceHydrator.SyncSource.Path,
RepoURL: spec.SourceHydrator.DrySource.RepoURL,
Path: spec.SourceHydrator.SyncSource.Path,
TargetRevision: targetRevision,
}
}
return ApplicationSource{}
}
func (a *ApplicationSpec) GetSources() ApplicationSources {
if a.SourceHydrator != nil {
return ApplicationSources{a.SourceHydrator.GetSyncSource()}
func (spec *ApplicationSpec) GetSources() ApplicationSources {
if spec.SourceHydrator != nil {
return ApplicationSources{spec.SourceHydrator.GetSyncSource()}
}
if a.HasMultipleSources() {
return a.Sources
if spec.HasMultipleSources() {
return spec.Sources
}
if a.Source != nil {
return ApplicationSources{*a.Source}
if spec.Source != nil {
return ApplicationSources{*spec.Source}
}
return ApplicationSources{}
}
func (a *ApplicationSpec) HasMultipleSources() bool {
return a.SourceHydrator == nil && len(a.Sources) > 0
func (spec *ApplicationSpec) HasMultipleSources() bool {
return spec.SourceHydrator == nil && len(spec.Sources) > 0
}
func (a *ApplicationSpec) GetSourcePtrByPosition(sourcePosition int) *ApplicationSource {
func (spec *ApplicationSpec) GetSourcePtrByPosition(sourcePosition int) *ApplicationSource {
// if Application has multiple sources, return the first source in sources
return a.GetSourcePtrByIndex(sourcePosition - 1)
return spec.GetSourcePtrByIndex(sourcePosition - 1)
}
func (a *ApplicationSpec) GetSourcePtrByIndex(sourceIndex int) *ApplicationSource {
if a.SourceHydrator != nil {
source := a.SourceHydrator.GetSyncSource()
func (spec *ApplicationSpec) GetSourcePtrByIndex(sourceIndex int) *ApplicationSource {
if spec.SourceHydrator != nil {
source := spec.SourceHydrator.GetSyncSource()
return &source
}
// if Application has multiple sources, return the first source in sources
if a.HasMultipleSources() {
if spec.HasMultipleSources() {
if sourceIndex > 0 {
return &a.Sources[sourceIndex]
return &spec.Sources[sourceIndex]
}
return &a.Sources[0]
return &spec.Sources[0]
}
return a.Source
return spec.Source
}
// AllowsConcurrentProcessing returns true if given application source can be processed concurrently
func (a *ApplicationSource) AllowsConcurrentProcessing() bool {
switch {
func (source *ApplicationSource) AllowsConcurrentProcessing() bool {
// Kustomize with parameters requires changing kustomization.yaml file
case a.Kustomize != nil:
return a.Kustomize.AllowsConcurrentProcessing()
if source.Kustomize != nil {
return source.Kustomize.AllowsConcurrentProcessing()
}
return true
}
// IsRef returns true when the application source is of type Ref
func (a *ApplicationSource) IsRef() bool {
return a.Ref != ""
func (source *ApplicationSource) IsRef() bool {
return source.Ref != ""
}
// IsHelm returns true when the application source is of type Helm
func (a *ApplicationSource) IsHelm() bool {
return a.Chart != ""
func (source *ApplicationSource) IsHelm() bool {
return source.Chart != ""
}
// IsHelmOci returns true when the application source is of type Helm OCI
func (a *ApplicationSource) IsHelmOci() bool {
if a.Chart == "" {
func (source *ApplicationSource) IsHelmOci() bool {
if source.Chart == "" {
return false
}
return helm.IsHelmOciRepo(a.RepoURL)
return helm.IsHelmOciRepo(source.RepoURL)
}
// IsZero returns true if the application source is considered empty
func (a *ApplicationSource) IsZero() bool {
return a == nil ||
a.RepoURL == "" &&
a.Path == "" &&
a.TargetRevision == "" &&
a.Helm.IsZero() &&
a.Kustomize.IsZero() &&
a.Directory.IsZero() &&
a.Plugin.IsZero()
func (source *ApplicationSource) IsZero() bool {
return source == nil ||
source.RepoURL == "" &&
source.Path == "" &&
source.TargetRevision == "" &&
source.Helm.IsZero() &&
source.Kustomize.IsZero() &&
source.Directory.IsZero() &&
source.Plugin.IsZero()
}
// GetNamespaceOrDefault gets the static namespace configured in the source. If none is configured, returns the given
// default.
func (a *ApplicationSource) GetNamespaceOrDefault(defaultNamespace string) string {
if a == nil {
func (source *ApplicationSource) GetNamespaceOrDefault(defaultNamespace string) string {
if source == nil {
return defaultNamespace
}
if a.Helm != nil && a.Helm.Namespace != "" {
return a.Helm.Namespace
if source.Helm != nil && source.Helm.Namespace != "" {
return source.Helm.Namespace
}
if a.Kustomize != nil && a.Kustomize.Namespace != "" {
return a.Kustomize.Namespace
if source.Kustomize != nil && source.Kustomize.Namespace != "" {
return source.Kustomize.Namespace
}
return defaultNamespace
}
// GetKubeVersionOrDefault gets the static Kubernetes API version configured in the source. If none is configured,
// returns the given default.
func (a *ApplicationSource) GetKubeVersionOrDefault(defaultKubeVersion string) string {
if a == nil {
func (source *ApplicationSource) GetKubeVersionOrDefault(defaultKubeVersion string) string {
if source == nil {
return defaultKubeVersion
}
if a.Helm != nil && a.Helm.KubeVersion != "" {
return a.Helm.KubeVersion
if source.Helm != nil && source.Helm.KubeVersion != "" {
return source.Helm.KubeVersion
}
if a.Kustomize != nil && a.Kustomize.KubeVersion != "" {
return a.Kustomize.KubeVersion
if source.Kustomize != nil && source.Kustomize.KubeVersion != "" {
return source.Kustomize.KubeVersion
}
return defaultKubeVersion
}
// GetAPIVersionsOrDefault gets the static API versions list configured in the source. If none is configured, returns
// the given default.
func (a *ApplicationSource) GetAPIVersionsOrDefault(defaultAPIVersions []string) []string {
if a == nil {
func (source *ApplicationSource) GetAPIVersionsOrDefault(defaultAPIVersions []string) []string {
if source == nil {
return defaultAPIVersions
}
if a.Helm != nil && len(a.Helm.APIVersions) > 0 {
return a.Helm.APIVersions
if source.Helm != nil && len(source.Helm.APIVersions) > 0 {
return source.Helm.APIVersions
}
if a.Kustomize != nil && len(a.Kustomize.APIVersions) > 0 {
return a.Kustomize.APIVersions
if source.Kustomize != nil && len(source.Kustomize.APIVersions) > 0 {
return source.Kustomize.APIVersions
}
return defaultAPIVersions
}
@@ -564,39 +565,39 @@ func NewHelmFileParameter(text string) (*HelmFileParameter, error) {
// AddParameter adds a HelmParameter to the application source. If a parameter with the same name already
// exists, its value will be overwritten. Otherwise, the HelmParameter will be appended as a new entry.
func (in *ApplicationSourceHelm) AddParameter(p HelmParameter) {
func (ash *ApplicationSourceHelm) AddParameter(p HelmParameter) {
found := false
for i, cp := range in.Parameters {
for i, cp := range ash.Parameters {
if cp.Name == p.Name {
found = true
in.Parameters[i] = p
ash.Parameters[i] = p
break
}
}
if !found {
in.Parameters = append(in.Parameters, p)
ash.Parameters = append(ash.Parameters, p)
}
}
// AddFileParameter adds a HelmFileParameter to the application source. If a file parameter with the same name already
// exists, its value will be overwritten. Otherwise, the HelmFileParameter will be appended as a new entry.
func (in *ApplicationSourceHelm) AddFileParameter(p HelmFileParameter) {
func (ash *ApplicationSourceHelm) AddFileParameter(p HelmFileParameter) {
found := false
for i, cp := range in.FileParameters {
for i, cp := range ash.FileParameters {
if cp.Name == p.Name {
found = true
in.FileParameters[i] = p
ash.FileParameters[i] = p
break
}
}
if !found {
in.FileParameters = append(in.FileParameters, p)
ash.FileParameters = append(ash.FileParameters, p)
}
}
// IsZero Returns true if the Helm options in an application source are considered zero
func (h *ApplicationSourceHelm) IsZero() bool {
return h == nil || (h.Version == "") && (h.ReleaseName == "") && len(h.ValueFiles) == 0 && len(h.Parameters) == 0 && len(h.FileParameters) == 0 && h.ValuesIsEmpty() && !h.PassCredentials && !h.IgnoreMissingValueFiles && !h.SkipCrds && !h.SkipTests && !h.SkipSchemaValidation && h.KubeVersion == "" && len(h.APIVersions) == 0 && h.Namespace == ""
func (ash *ApplicationSourceHelm) IsZero() bool {
return ash == nil || (ash.Version == "") && (ash.ReleaseName == "") && len(ash.ValueFiles) == 0 && len(ash.Parameters) == 0 && len(ash.FileParameters) == 0 && ash.ValuesIsEmpty() && !ash.PassCredentials && !ash.IgnoreMissingValueFiles && !ash.SkipCrds && !ash.SkipTests && !ash.SkipSchemaValidation && ash.KubeVersion == "" && len(ash.APIVersions) == 0 && ash.Namespace == ""
}
// KustomizeImage represents a Kustomize image definition in the format [old_image_name=]<image_name>:<image_tag>
@@ -684,14 +685,13 @@ type KustomizeReplicas []KustomizeReplica
// If parsing error occurs, returns 0 and error.
func (kr KustomizeReplica) GetIntCount() (int, error) {
if kr.Count.Type == intstr.String {
if count, err := strconv.Atoi(kr.Count.StrVal); err != nil {
count, err := strconv.Atoi(kr.Count.StrVal)
if err != nil {
return 0, fmt.Errorf("expected integer value for count. Received: %s", kr.Count.StrVal)
} else {
return count, nil
}
} else {
return kr.Count.IntValue(), nil
return count, nil
}
return kr.Count.IntValue(), nil
}
// NewKustomizeReplica parses a string in format name=count into a KustomizeReplica object and returns it
@@ -821,9 +821,8 @@ func NewJsonnetVar(s string, code bool) JsonnetVar {
parts := strings.SplitN(s, "=", 2)
if len(parts) == 2 {
return JsonnetVar{Name: parts[0], Value: parts[1], Code: code}
} else {
return JsonnetVar{Name: s, Code: code}
}
return JsonnetVar{Name: s, Code: code}
}
// ApplicationSourceJsonnet holds options specific to applications of type Jsonnet
@@ -935,7 +934,7 @@ type ApplicationSourcePluginParameter struct {
// Name is the name identifying a parameter.
Name string `json:"name,omitempty" protobuf:"bytes,1,opt,name=name"`
// String_ is the value of a string type parameter.
String_ *string `json:"string,omitempty" protobuf:"bytes,5,opt,name=string"`
String_ *string `json:"string,omitempty" protobuf:"bytes,5,opt,name=string"` //nolint:revive //FIXME(var-naming)
// Map is the value of a map type parameter.
*OptionalMap `json:",omitempty" protobuf:"bytes,3,rep,name=map"`
// Array is the value of an array type parameter.
@@ -959,7 +958,7 @@ func (p ApplicationSourcePluginParameter) Equals(other ApplicationSourcePluginPa
//
// There are efforts to change things upstream, but nothing has been merged yet. See https://github.com/golang/go/issues/37711
func (p ApplicationSourcePluginParameter) MarshalJSON() ([]byte, error) {
out := map[string]interface{}{}
out := map[string]any{}
out["name"] = p.Name
if p.String_ != nil {
out["string"] = p.String_
@@ -1016,12 +1015,12 @@ func (p ApplicationSourcePluginParameters) Environ() ([]string, error) {
if err != nil {
return nil, fmt.Errorf("failed to marshal plugin parameters: %w", err)
}
jsonParam := fmt.Sprintf("ARGOCD_APP_PARAMETERS=%s", string(out))
jsonParam := "ARGOCD_APP_PARAMETERS=" + string(out)
env := []string{jsonParam}
for _, param := range p {
envBaseName := fmt.Sprintf("PARAM_%s", escaped(param.Name))
envBaseName := "PARAM_" + escaped(param.Name)
if param.String_ != nil {
env = append(env, fmt.Sprintf("%s=%s", envBaseName, *param.String_))
}
@@ -1175,9 +1174,9 @@ type SourceHydratorStatus struct {
CurrentOperation *HydrateOperation `json:"currentOperation,omitempty" protobuf:"bytes,2,opt,name=currentOperation"`
}
func (a *ApplicationStatus) FindResource(key kube.ResourceKey) (*ResourceStatus, bool) {
for i := range a.Resources {
res := a.Resources[i]
func (status *ApplicationStatus) FindResource(key kube.ResourceKey) (*ResourceStatus, bool) {
for i := range status.Resources {
res := status.Resources[i]
if kube.NewResourceKey(res.Group, res.Kind, res.Namespace, res.Name) == key {
return &res, true
}
@@ -1227,12 +1226,12 @@ const (
// If app has multisources, it will return all corresponding revisions preserving
// order from the app.spec.sources. If app has only one source, it will return a
// single revision in the list.
func (a *ApplicationStatus) GetRevisions() []string {
func (status *ApplicationStatus) GetRevisions() []string {
revisions := []string{}
if len(a.Sync.Revisions) > 0 {
revisions = a.Sync.Revisions
} else if a.Sync.Revision != "" {
revisions = append(revisions, a.Sync.Revision)
if len(status.Sync.Revisions) > 0 {
revisions = status.Sync.Revisions
} else if status.Sync.Revision != "" {
revisions = append(revisions, status.Sync.Revision)
}
return revisions
}
@@ -1528,15 +1527,15 @@ type SyncStrategy struct {
// Force returns true if the sync strategy specifies to perform a forced sync
func (m *SyncStrategy) Force() bool {
if m == nil {
switch {
case m == nil:
return false
} else if m.Apply != nil {
case m.Apply != nil:
return m.Apply.Force
} else if m.Hook != nil {
case m.Hook != nil:
return m.Hook.Force
} else {
return false
}
return false
}
// SyncStrategyApply uses `kubectl apply` to perform the apply
@@ -1796,29 +1795,29 @@ type InfoItem struct {
// ResourceNetworkingInfo holds networking resource related information
// TODO: describe members of this type
type ResourceNetworkingInfo struct {
TargetLabels map[string]string `json:"targetLabels,omitempty" protobuf:"bytes,1,opt,name=targetLabels"`
TargetRefs []ResourceRef `json:"targetRefs,omitempty" protobuf:"bytes,2,opt,name=targetRefs"`
Labels map[string]string `json:"labels,omitempty" protobuf:"bytes,3,opt,name=labels"`
Ingress []v1.LoadBalancerIngress `json:"ingress,omitempty" protobuf:"bytes,4,opt,name=ingress"`
TargetLabels map[string]string `json:"targetLabels,omitempty" protobuf:"bytes,1,opt,name=targetLabels"`
TargetRefs []ResourceRef `json:"targetRefs,omitempty" protobuf:"bytes,2,opt,name=targetRefs"`
Labels map[string]string `json:"labels,omitempty" protobuf:"bytes,3,opt,name=labels"`
Ingress []corev1.LoadBalancerIngress `json:"ingress,omitempty" protobuf:"bytes,4,opt,name=ingress"`
// ExternalURLs holds list of URLs which should be available externally. List is populated for ingress resources using rules hostnames.
ExternalURLs []string `json:"externalURLs,omitempty" protobuf:"bytes,5,opt,name=externalURLs"`
}
// TODO: describe this type
type HostResourceInfo struct {
ResourceName v1.ResourceName `json:"resourceName,omitempty" protobuf:"bytes,1,name=resourceName"`
RequestedByApp int64 `json:"requestedByApp,omitempty" protobuf:"bytes,2,name=requestedByApp"`
RequestedByNeighbors int64 `json:"requestedByNeighbors,omitempty" protobuf:"bytes,3,name=requestedByNeighbors"`
Capacity int64 `json:"capacity,omitempty" protobuf:"bytes,4,name=capacity"`
ResourceName corev1.ResourceName `json:"resourceName,omitempty" protobuf:"bytes,1,name=resourceName"`
RequestedByApp int64 `json:"requestedByApp,omitempty" protobuf:"bytes,2,name=requestedByApp"`
RequestedByNeighbors int64 `json:"requestedByNeighbors,omitempty" protobuf:"bytes,3,name=requestedByNeighbors"`
Capacity int64 `json:"capacity,omitempty" protobuf:"bytes,4,name=capacity"`
}
// HostInfo holds host name and resources metrics
// TODO: describe purpose of this type
// TODO: describe members of this type
type HostInfo struct {
Name string `json:"name,omitempty" protobuf:"bytes,1,name=name"`
ResourcesInfo []HostResourceInfo `json:"resourcesInfo,omitempty" protobuf:"bytes,2,name=resourcesInfo"`
SystemInfo v1.NodeSystemInfo `json:"systemInfo,omitempty" protobuf:"bytes,3,opt,name=systemInfo"`
Name string `json:"name,omitempty" protobuf:"bytes,1,name=name"`
ResourcesInfo []HostResourceInfo `json:"resourcesInfo,omitempty" protobuf:"bytes,2,name=resourcesInfo"`
SystemInfo corev1.NodeSystemInfo `json:"systemInfo,omitempty" protobuf:"bytes,3,opt,name=systemInfo"`
}
// ApplicationTree holds nodes which belongs to the application
@@ -1938,7 +1937,8 @@ func (t *ApplicationTree) GetSummary(app *Application) ApplicationSummary {
urlsSet[v] = true
}
}
urls := make([]string, 0)
urls := make([]string, 0, len(urlsSet))
for url := range urlsSet {
urls = append(urls, url)
}
@@ -2099,6 +2099,32 @@ type Cluster struct {
Annotations map[string]string `json:"annotations,omitempty" protobuf:"bytes,13,opt,name=annotations"`
}
func (c *Cluster) Sanitized() *Cluster {
return &Cluster{
ID: c.ID,
Server: c.Server,
Name: c.Name,
Project: c.Project,
Namespaces: c.Namespaces,
Shard: c.Shard,
Labels: c.Labels,
Annotations: c.Annotations,
ClusterResources: c.ClusterResources,
ConnectionState: c.ConnectionState,
ServerVersion: c.ServerVersion,
Info: c.Info,
RefreshRequestedAt: c.RefreshRequestedAt,
Config: ClusterConfig{
AWSAuthConfig: c.Config.AWSAuthConfig,
ProxyUrl: c.Config.ProxyUrl,
DisableCompression: c.Config.DisableCompression,
TLSClientConfig: TLSClientConfig{
Insecure: c.Config.Insecure,
},
},
}
}
// Equals returns true if two cluster objects are considered to be equal
func (c *Cluster) Equals(other *Cluster) bool {
if c.Server != other.Server {
@@ -2155,7 +2181,7 @@ func (c *ClusterInfo) GetKubeVersion() string {
return c.ServerVersion
}
func (c *ClusterInfo) GetApiVersions() []string {
func (c *ClusterInfo) GetApiVersions() []string { //nolint:revive //FIXME(var-naming)
return c.APIVersions
}
@@ -2231,7 +2257,7 @@ type ClusterConfig struct {
DisableCompression bool `json:"disableCompression,omitempty" protobuf:"bytes,7,opt,name=disableCompression"`
// ProxyURL is the URL to the proxy to be used for all requests send to the server
ProxyUrl string `json:"proxyUrl,omitempty" protobuf:"bytes,8,opt,name=proxyUrl"`
ProxyUrl string `json:"proxyUrl,omitempty" protobuf:"bytes,8,opt,name=proxyUrl"` //nolint:revive //FIXME(var-naming)
}
// TLSClientConfig contains settings to enable transport layer security
@@ -2293,21 +2319,23 @@ type ResourceOverride struct {
KnownTypeFields []KnownTypeField `protobuf:"bytes,4,opt,name=knownTypeFields"`
}
// TODO: describe this method
func (s *ResourceOverride) UnmarshalJSON(data []byte) error {
// UnmarshalJSON unmarshals a JSON byte slice into a ResourceOverride object.
// It parses the raw input data and handles special processing for `IgnoreDifferences`
// and `IgnoreResourceUpdates` fields using YAML format.
func (ro *ResourceOverride) UnmarshalJSON(data []byte) error {
raw := &rawResourceOverride{}
if err := json.Unmarshal(data, &raw); err != nil {
return err
}
s.KnownTypeFields = raw.KnownTypeFields
s.HealthLua = raw.HealthLua
s.UseOpenLibs = raw.UseOpenLibs
s.Actions = raw.Actions
err := yaml.Unmarshal([]byte(raw.IgnoreDifferences), &s.IgnoreDifferences)
ro.KnownTypeFields = raw.KnownTypeFields
ro.HealthLua = raw.HealthLua
ro.UseOpenLibs = raw.UseOpenLibs
ro.Actions = raw.Actions
err := yaml.Unmarshal([]byte(raw.IgnoreDifferences), &ro.IgnoreDifferences)
if err != nil {
return err
}
err = yaml.Unmarshal([]byte(raw.IgnoreResourceUpdates), &s.IgnoreResourceUpdates)
err = yaml.Unmarshal([]byte(raw.IgnoreResourceUpdates), &ro.IgnoreResourceUpdates)
if err != nil {
return err
}
@@ -2372,7 +2400,7 @@ type ResourceActionParam struct {
Default string `json:"default,omitempty" protobuf:"bytes,4,opt,name=default"`
}
// TODO: refactor to use rbacpolicy.ActionGet, rbacpolicy.ActionCreate, without import cycle
// TODO: refactor to use rbac.ActionGet, rbac.ActionCreate, without import cycle
var validActions = map[string]bool{
"get": true,
"create": true,
@@ -2401,19 +2429,6 @@ func isValidAction(action string) bool {
return false
}
// TODO: same as validActions, refacotor to use rbacpolicy.ResourceApplications etc.
var validResources = map[string]bool{
"applications": true,
"repositories": true,
"clusters": true,
"exec": true,
"logs": true,
}
func isValidResource(resource string) bool {
return validResources[resource]
}
func isValidObject(proj string, object string) bool {
// match against <PROJECT>[/<NAMESPACE>]/<APPLICATION>
objectRegexp, err := regexp.Compile(fmt.Sprintf(`^%s(/[*\w-.]+)?/[*\w-.]+$`, regexp.QuoteMeta(proj)))
@@ -2433,8 +2448,8 @@ func validatePolicy(proj string, role string, policy string) error {
}
// resource
resource := strings.Trim(policyComponents[2], " ")
if !isValidResource(resource) {
return status.Errorf(codes.InvalidArgument, "invalid policy rule '%s': project resource must be: 'applications', 'repositories' or 'clusters', not '%s'", policy, resource)
if !rbac.ProjectScoped[resource] {
return status.Errorf(codes.InvalidArgument, "invalid policy rule '%s': project resource must be: 'applications', 'applicationsets', 'repositories', 'exec', 'logs' or 'clusters', not '%s'", policy, resource)
}
// action
action := strings.Trim(policyComponents[3], " ")
@@ -2571,24 +2586,24 @@ type SyncWindow struct {
}
// HasWindows returns true if SyncWindows has one or more SyncWindow
func (s *SyncWindows) HasWindows() bool {
return s != nil && len(*s) > 0
func (w *SyncWindows) HasWindows() bool {
return w != nil && len(*w) > 0
}
// Active returns a list of sync windows that are currently active
func (s *SyncWindows) Active() (*SyncWindows, error) {
return s.active(time.Now())
func (w *SyncWindows) Active() (*SyncWindows, error) {
return w.active(time.Now())
}
func (s *SyncWindows) active(currentTime time.Time) (*SyncWindows, error) {
func (w *SyncWindows) active(currentTime time.Time) (*SyncWindows, error) {
// If SyncWindows.Active() is called outside of a UTC locale, it should be
// first converted to UTC before we scan through the SyncWindows.
currentTime = currentTime.In(time.UTC)
if s.HasWindows() {
if w.HasWindows() {
var active SyncWindows
specParser := cron.NewParser(cron.Minute | cron.Hour | cron.Dom | cron.Month | cron.Dow)
for _, w := range *s {
for _, w := range *w {
schedule, sErr := specParser.Parse(w.Schedule)
if sErr != nil {
return nil, fmt.Errorf("cannot parse schedule '%s': %w", w.Schedule, sErr)
@@ -2615,19 +2630,19 @@ func (s *SyncWindows) active(currentTime time.Time) (*SyncWindows, error) {
// InactiveAllows will iterate over the SyncWindows and return all inactive allow windows
// for the current time. If the current time is in an inactive allow window, syncs will
// be denied.
func (s *SyncWindows) InactiveAllows() (*SyncWindows, error) {
return s.inactiveAllows(time.Now())
func (w *SyncWindows) InactiveAllows() (*SyncWindows, error) {
return w.inactiveAllows(time.Now())
}
func (s *SyncWindows) inactiveAllows(currentTime time.Time) (*SyncWindows, error) {
func (w *SyncWindows) inactiveAllows(currentTime time.Time) (*SyncWindows, error) {
// If SyncWindows.InactiveAllows() is called outside of a UTC locale, it should be
// first converted to UTC before we scan through the SyncWindows.
currentTime = currentTime.In(time.UTC)
if s.HasWindows() {
if w.HasWindows() {
var inactive SyncWindows
specParser := cron.NewParser(cron.Minute | cron.Hour | cron.Dom | cron.Month | cron.Dow)
for _, w := range *s {
for _, w := range *w {
if w.Kind == "allow" {
schedule, sErr := specParser.Parse(w.Schedule)
if sErr != nil {
@@ -2664,9 +2679,9 @@ func (w *SyncWindow) scheduleOffsetByTimeZone() time.Duration {
}
// AddWindow adds a sync window with the given parameters to the AppProject
func (s *AppProjectSpec) AddWindow(knd string, sch string, dur string, app []string, ns []string, cl []string, ms bool, timeZone string) error {
func (spec *AppProjectSpec) AddWindow(knd string, sch string, dur string, app []string, ns []string, cl []string, ms bool, timeZone string) error {
if len(knd) == 0 || len(sch) == 0 || len(dur) == 0 {
return fmt.Errorf("cannot create window: require kind, schedule, duration and one or more of applications, namespaces and clusters")
return errors.New("cannot create window: require kind, schedule, duration and one or more of applications, namespaces and clusters")
}
window := &SyncWindow{
@@ -2692,18 +2707,18 @@ func (s *AppProjectSpec) AddWindow(knd string, sch string, dur string, app []str
return err
}
s.SyncWindows = append(s.SyncWindows, window)
spec.SyncWindows = append(spec.SyncWindows, window)
return nil
}
// DeleteWindow deletes a sync window with the given id from the AppProject
func (s *AppProjectSpec) DeleteWindow(id int) error {
func (spec *AppProjectSpec) DeleteWindow(id int) error {
var exists bool
for i := range s.SyncWindows {
for i := range spec.SyncWindows {
if i == id {
exists = true
s.SyncWindows = append(s.SyncWindows[:i], s.SyncWindows[i+1:]...)
spec.SyncWindows = append(spec.SyncWindows[:i], spec.SyncWindows[i+1:]...)
break
}
}
@@ -2768,9 +2783,8 @@ func (w *SyncWindows) CanSync(isManual bool) (bool, error) {
if hasActiveDeny {
if isManual && manualEnabled {
return true, nil
} else {
return false, nil
}
return false, nil
}
if active.hasAllow() {
@@ -2784,9 +2798,8 @@ func (w *SyncWindows) CanSync(isManual bool) (bool, error) {
if inactiveAllows.HasWindows() {
if isManual && inactiveAllows.manualEnabled() {
return true, nil
} else {
return false, nil
}
return false, nil
}
return true, nil
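The refactored branches above (early returns replacing `else` blocks) implement this rule: an active deny window blocks syncs unless the window enables manual syncs and the request is manual. A simplified sketch of just that rule, with hypothetical names; allow-window and inactive-allow handling from the surrounding code are omitted:

```go
package main

import "fmt"

// canSync sketches the deny-window rule above: an active deny window
// blocks syncs unless the window enables manual syncs and this sync
// is manual. (Allow-window and inactive-allow handling are omitted.)
func canSync(hasActiveDeny, manualEnabled, isManual bool) bool {
	if hasActiveDeny {
		return isManual && manualEnabled
	}
	return true
}

func main() {
	fmt.Println(canSync(true, true, true))    // true: manual sync overrides the deny window
	fmt.Println(canSync(true, true, false))   // false: automated sync stays blocked
	fmt.Println(canSync(false, false, false)) // true: no active deny window
}
```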
@@ -2873,7 +2886,7 @@ func (w SyncWindow) active(currentTime time.Time) (bool, error) {
// Update updates a sync window's settings with the given parameter
func (w *SyncWindow) Update(s string, d string, a []string, n []string, c []string, tz string) error {
if len(s) == 0 && len(d) == 0 && len(a) == 0 && len(n) == 0 && len(c) == 0 {
return fmt.Errorf("cannot update: require one or more of schedule, duration, application, namespace, or cluster")
return errors.New("cannot update: require one or more of schedule, duration, application, namespace, or cluster")
}
if len(s) > 0 {
@@ -2927,10 +2940,10 @@ func (w *SyncWindow) Validate() error {
}
// DestinationClusters returns a list of cluster URLs allowed as destination in an AppProject
func (d AppProjectSpec) DestinationClusters() []string {
func (spec AppProjectSpec) DestinationClusters() []string {
servers := make([]string, 0)
for _, d := range d.Destinations {
for _, d := range spec.Destinations {
servers = append(servers, d.Server)
}
@@ -3047,6 +3060,14 @@ func (app *Application) SetPostDeleteFinalizer(stage ...string) {
setFinalizer(&app.ObjectMeta, strings.Join(append([]string{PostDeleteFinalizerName}, stage...), "/"), true)
}
func (app *Application) UnSetPostDeleteFinalizerAll() {
for _, finalizer := range app.Finalizers {
if strings.HasPrefix(finalizer, PostDeleteFinalizerName) {
setFinalizer(&app.ObjectMeta, finalizer, false)
}
}
}
func (app *Application) UnSetPostDeleteFinalizer(stage ...string) {
setFinalizer(&app.ObjectMeta, strings.Join(append([]string{PostDeleteFinalizerName}, stage...), "/"), false)
}
@@ -3349,7 +3370,7 @@ func SetK8SConfigDefaults(config *rest.Config) error {
}
// ParseProxyUrl returns a parsed url and verifies that schema is correct
func ParseProxyUrl(proxyUrl string) (*url.URL, error) {
func ParseProxyUrl(proxyUrl string) (*url.URL, error) { //nolint:revive //FIXME(var-naming)
u, err := url.Parse(proxyUrl)
if err != nil {
return nil, err
@@ -3366,7 +3387,9 @@ func ParseProxyUrl(proxyUrl string) (*url.URL, error) {
func (c *Cluster) RawRestConfig() (*rest.Config, error) {
var config *rest.Config
var err error
if c.Server == KubernetesInternalAPIServerAddr && env.ParseBoolFromEnv(EnvVarFakeInClusterConfig, false) {
switch {
case c.Server == KubernetesInternalAPIServerAddr && env.ParseBoolFromEnv(EnvVarFakeInClusterConfig, false):
conf, exists := os.LookupEnv("KUBECONFIG")
if exists {
config, err = clientcmd.BuildConfigFromFlags("", conf)
@@ -3378,9 +3401,9 @@ func (c *Cluster) RawRestConfig() (*rest.Config, error) {
}
config, err = clientcmd.BuildConfigFromFlags("", filepath.Join(homeDir, ".kube", "config"))
}
} else if c.Server == KubernetesInternalAPIServerAddr && c.Config.Username == "" && c.Config.Password == "" && c.Config.BearerToken == "" {
case c.Server == KubernetesInternalAPIServerAddr && c.Config.Username == "" && c.Config.Password == "" && c.Config.BearerToken == "":
config, err = rest.InClusterConfig()
} else if c.Server == KubernetesInternalAPIServerAddr {
case c.Server == KubernetesInternalAPIServerAddr:
config, err = rest.InClusterConfig()
if err == nil {
config.Username = c.Config.Username
@@ -3388,7 +3411,7 @@ func (c *Cluster) RawRestConfig() (*rest.Config, error) {
config.BearerToken = c.Config.BearerToken
config.BearerTokenFile = ""
}
} else {
default:
tlsClientConfig := rest.TLSClientConfig{
Insecure: c.Config.TLSClientConfig.Insecure,
ServerName: c.Config.TLSClientConfig.ServerName,
@@ -3396,7 +3419,8 @@ func (c *Cluster) RawRestConfig() (*rest.Config, error) {
KeyData: c.Config.TLSClientConfig.KeyData,
CAData: c.Config.TLSClientConfig.CAData,
}
if c.Config.AWSAuthConfig != nil {
switch {
case c.Config.AWSAuthConfig != nil:
args := []string{"aws", "--cluster-name", c.Config.AWSAuthConfig.ClusterName}
if c.Config.AWSAuthConfig.RoleARN != "" {
args = append(args, "--role-arn", c.Config.AWSAuthConfig.RoleARN)
@@ -3414,7 +3438,7 @@ func (c *Cluster) RawRestConfig() (*rest.Config, error) {
InteractiveMode: api.NeverExecInteractiveMode,
},
}
} else if c.Config.ExecProviderConfig != nil {
case c.Config.ExecProviderConfig != nil:
var env []api.ExecEnvVar
if c.Config.ExecProviderConfig.Env != nil {
for key, value := range c.Config.ExecProviderConfig.Env {
@@ -3436,7 +3460,7 @@ func (c *Cluster) RawRestConfig() (*rest.Config, error) {
InteractiveMode: api.NeverExecInteractiveMode,
},
}
} else {
default:
config = &rest.Config{
Host: c.Server,
Username: c.Config.Username,
@@ -3545,37 +3569,36 @@ func (d *ApplicationDestination) MarshalJSON() ([]byte, error) {
// tracking values, i.e. in the format <namespace>_<name>. When the namespace

// of the application is equal to the value of defaultNs, only the name of
// the application is returned to keep backwards compatibility.
func (a *Application) InstanceName(defaultNs string) string {
func (app *Application) InstanceName(defaultNs string) string {
// When app has no namespace set, or the namespace is the default ns, we
// return just the application name
if a.Namespace == "" || a.Namespace == defaultNs {
return a.Name
if app.Namespace == "" || app.Namespace == defaultNs {
return app.Name
}
return a.Namespace + "_" + a.Name
return app.Namespace + "_" + app.Name
}
// QualifiedName returns the full qualified name of the application, including
// the name of the namespace it is created in delimited by a forward slash,
// i.e. <namespace>/<appname>
func (a *Application) QualifiedName() string {
if a.Namespace == "" {
return a.Name
} else {
return a.Namespace + "/" + a.Name
func (app *Application) QualifiedName() string {
if app.Namespace == "" {
return app.Name
}
return app.Namespace + "/" + app.Name
}
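The two naming helpers above (receiver renamed `a` to `app` in this diff) can be restated standalone to show the two formats side by side; the sample namespace and app names are illustrative:

```go
package main

import "fmt"

// qualifiedName restates QualifiedName above: "<namespace>/<name>",
// or the bare name when no namespace is set.
func qualifiedName(namespace, name string) string {
	if namespace == "" {
		return name
	}
	return namespace + "/" + name
}

// instanceName restates InstanceName: "<namespace>_<name>", falling
// back to the bare name in the default namespace for compatibility.
func instanceName(namespace, name, defaultNs string) string {
	if namespace == "" || namespace == defaultNs {
		return name
	}
	return namespace + "_" + name
}

func main() {
	fmt.Println(qualifiedName("team-a", "guestbook"))          // team-a/guestbook
	fmt.Println(instanceName("team-a", "guestbook", "argocd")) // team-a_guestbook
	fmt.Println(instanceName("argocd", "guestbook", "argocd")) // guestbook
}
```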
// RBACName returns the full qualified RBAC resource name for the application
// in a backwards-compatible way.
func (a *Application) RBACName(defaultNS string) string {
return security.RBACName(defaultNS, a.Spec.GetProject(), a.Namespace, a.Name)
func (app *Application) RBACName(defaultNS string) string {
return security.RBACName(defaultNS, app.Spec.GetProject(), app.Namespace, app.Name)
}
// GetAnnotation returns the value of the specified annotation if it exists,
// e.g., a.GetAnnotation("argocd.argoproj.io/manifest-generate-paths").
// If the annotation does not exist, it returns an empty string.
func (a *Application) GetAnnotation(annotation string) string {
v, exists := a.Annotations[annotation]
func (app *Application) GetAnnotation(annotation string) string {
v, exists := app.Annotations[annotation]
if !exists {
return ""
}


@@ -3513,6 +3513,8 @@ func Test_validatePolicy_projIsNotRegex(t *testing.T) {
func Test_validatePolicy_ValidResource(t *testing.T) {
err := validatePolicy("some-project", "org-admin", "p, proj:some-project:org-admin, applications, *, some-project/*, allow")
require.NoError(t, err)
err = validatePolicy("some-project", "org-admin", "p, proj:some-project:org-admin, applicationsets, *, some-project/*, allow")
require.NoError(t, err)
err = validatePolicy("some-project", "org-admin", "p, proj:some-project:org-admin, repositories, *, some-project/*, allow")
require.NoError(t, err)
err = validatePolicy("some-project", "org-admin", "p, proj:some-project:org-admin, clusters, *, some-project/*, allow")
@@ -4346,3 +4348,58 @@ func TestCluster_ParseProxyUrl(t *testing.T) {
}
}
}
func TestSanitized(t *testing.T) {
now := metav1.Now()
cluster := &Cluster{
ID: "123",
Server: "https://example.com",
Name: "example",
ServerVersion: "v1.0.0",
Namespaces: []string{"default", "kube-system"},
Project: "default",
Labels: map[string]string{
"env": "production",
},
Annotations: map[string]string{
"annotation-key": "annotation-value",
},
ConnectionState: ConnectionState{
Status: ConnectionStatusSuccessful,
Message: "Connection successful",
ModifiedAt: &now,
},
Config: ClusterConfig{
Username: "admin",
Password: "password123",
BearerToken: "abc",
TLSClientConfig: TLSClientConfig{
Insecure: true,
},
ExecProviderConfig: &ExecProviderConfig{
Command: "test",
},
},
}
assert.Equal(t, &Cluster{
ID: "123",
Server: "https://example.com",
Name: "example",
ServerVersion: "v1.0.0",
Namespaces: []string{"default", "kube-system"},
Project: "default",
Labels: map[string]string{"env": "production"},
Annotations: map[string]string{"annotation-key": "annotation-value"},
ConnectionState: ConnectionState{
Status: ConnectionStatusSuccessful,
Message: "Connection successful",
ModifiedAt: &now,
},
Config: ClusterConfig{
TLSClientConfig: TLSClientConfig{
Insecure: true,
},
},
}, cluster.Sanitized())
}
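The test above asserts that `Sanitized()` blanks credentials (Username, Password, BearerToken, ExecProviderConfig) while keeping non-sensitive settings such as the `Insecure` flag. A minimal sketch of that behavior using stand-in types, not the real `Cluster` type:

```go
package main

import "fmt"

// Minimal stand-ins for the cluster types; field names follow the test above.
type TLSClientConfig struct {
	Insecure bool
}

type ClusterConfig struct {
	Username, Password, BearerToken string
	TLSClientConfig                 TLSClientConfig
}

type Cluster struct {
	Server string
	Config ClusterConfig
}

// sanitized sketches what the test asserts: credentials are blanked,
// non-sensitive settings such as the Insecure flag survive. The value
// receiver means the caller's Cluster is left untouched.
func (c Cluster) sanitized() Cluster {
	c.Config = ClusterConfig{TLSClientConfig: c.Config.TLSClientConfig}
	return c
}

func main() {
	c := Cluster{Server: "https://example.com", Config: ClusterConfig{
		Username: "admin", Password: "password123", BearerToken: "abc",
		TLSClientConfig: TLSClientConfig{Insecure: true},
	}}
	s := c.sanitized()
	fmt.Println(s.Config.Password == "")           // true: credential blanked
	fmt.Println(s.Config.TLSClientConfig.Insecure) // true: non-sensitive flag kept
}
```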


@@ -1,4 +1,4 @@
// Code generated by mockery v2.52.4. DO NOT EDIT.
// Code generated by mockery v2.53.4. DO NOT EDIT.
package mocks


@@ -1,4 +1,4 @@
// Code generated by mockery v2.43.2. DO NOT EDIT.
// Code generated by mockery v2.53.4. DO NOT EDIT.
package mocks


@@ -1,4 +1,4 @@
// Code generated by mockery v2.43.2. DO NOT EDIT.
// Code generated by mockery v2.53.4. DO NOT EDIT.
package mocks
@@ -17,7 +17,7 @@ type RepoServerService_GenerateManifestWithFilesClient struct {
mock.Mock
}
// CloseAndRecv provides a mock function with given fields:
// CloseAndRecv provides a mock function with no fields
func (_m *RepoServerService_GenerateManifestWithFilesClient) CloseAndRecv() (*apiclient.ManifestResponse, error) {
ret := _m.Called()
@@ -47,7 +47,7 @@ func (_m *RepoServerService_GenerateManifestWithFilesClient) CloseAndRecv() (*ap
return r0, r1
}
// CloseSend provides a mock function with given fields:
// CloseSend provides a mock function with no fields
func (_m *RepoServerService_GenerateManifestWithFilesClient) CloseSend() error {
ret := _m.Called()
@@ -65,7 +65,7 @@ func (_m *RepoServerService_GenerateManifestWithFilesClient) CloseSend() error {
return r0
}
// Context provides a mock function with given fields:
// Context provides a mock function with no fields
func (_m *RepoServerService_GenerateManifestWithFilesClient) Context() context.Context {
ret := _m.Called()
@@ -85,7 +85,7 @@ func (_m *RepoServerService_GenerateManifestWithFilesClient) Context() context.C
return r0
}
// Header provides a mock function with given fields:
// Header provides a mock function with no fields
func (_m *RepoServerService_GenerateManifestWithFilesClient) Header() (metadata.MD, error) {
ret := _m.Called()
@@ -169,7 +169,7 @@ func (_m *RepoServerService_GenerateManifestWithFilesClient) SendMsg(m interface
return r0
}
// Trailer provides a mock function with given fields:
// Trailer provides a mock function with no fields
func (_m *RepoServerService_GenerateManifestWithFilesClient) Trailer() metadata.MD {
ret := _m.Called()


@@ -345,7 +345,7 @@ func (s *Service) runRepoOperation(
if source.IsHelm() {
if settings.noCache {
err = helmClient.CleanChartCache(source.Chart, revision, repo.Project)
err = helmClient.CleanChartCache(source.Chart, revision)
if err != nil {
return err
}
@@ -354,7 +354,7 @@ func (s *Service) runRepoOperation(
if source.Helm != nil {
helmPassCredentials = source.Helm.PassCredentials
}
chartPath, closer, err := helmClient.ExtractChart(source.Chart, revision, repo.Project, helmPassCredentials, s.initConstants.HelmManifestMaxExtractedSize, s.initConstants.DisableHelmManifestMaxExtractedSize)
chartPath, closer, err := helmClient.ExtractChart(source.Chart, revision, helmPassCredentials, s.initConstants.HelmManifestMaxExtractedSize, s.initConstants.DisableHelmManifestMaxExtractedSize)
if err != nil {
return err
}
@@ -1180,7 +1180,7 @@ func helmTemplate(appPath string, repoRoot string, env *v1alpha1.Env, q *apiclie
referencedSource := getReferencedSource(p.Path, q.RefSources)
if referencedSource != nil {
// If the $-prefixed path appears to reference another source, do env substitution _after_ resolving the source
resolvedPath, err = getResolvedRefValueFile(p.Path, env, q.GetValuesFileSchemes(), referencedSource.Repo.Repo, gitRepoPaths, referencedSource.Repo.Project)
resolvedPath, err = getResolvedRefValueFile(p.Path, env, q.GetValuesFileSchemes(), referencedSource.Repo.Repo, gitRepoPaths)
if err != nil {
return nil, "", fmt.Errorf("error resolving set-file path: %w", err)
}
@@ -1306,7 +1306,7 @@ func getResolvedValueFiles(
referencedSource := getReferencedSource(rawValueFile, refSources)
if referencedSource != nil {
// If the $-prefixed path appears to reference another source, do env substitution _after_ resolving that source.
resolvedPath, err = getResolvedRefValueFile(rawValueFile, env, allowedValueFilesSchemas, referencedSource.Repo.Repo, gitRepoPaths, referencedSource.Repo.Project)
resolvedPath, err = getResolvedRefValueFile(rawValueFile, env, allowedValueFilesSchemas, referencedSource.Repo.Repo, gitRepoPaths)
if err != nil {
return nil, fmt.Errorf("error resolving value file path: %w", err)
}
@@ -1339,15 +1339,9 @@ func getResolvedRefValueFile(
allowedValueFilesSchemas []string,
refSourceRepo string,
gitRepoPaths io.TempPaths,
project string,
) (pathutil.ResolvedFilePath, error) {
pathStrings := strings.Split(rawValueFile, "/")
keyData, err := json.Marshal(map[string]string{"url": git.NormalizeGitURL(refSourceRepo), "project": project})
if err != nil {
return "", err
}
repoPath := gitRepoPaths.GetPathIfExists(string(keyData))
repoPath := gitRepoPaths.GetPathIfExists(git.NormalizeGitURL(refSourceRepo))
if repoPath == "" {
return "", fmt.Errorf("failed to find repo %q", refSourceRepo)
}
@@ -2382,7 +2376,7 @@ func (s *Service) GetRevisionChartDetails(ctx context.Context, q *apiclient.Repo
if err != nil {
return nil, fmt.Errorf("helm client error: %w", err)
}
chartPath, closer, err := helmClient.ExtractChart(q.Name, revision, q.Repo.Project, false, s.initConstants.HelmManifestMaxExtractedSize, s.initConstants.DisableHelmManifestMaxExtractedSize)
chartPath, closer, err := helmClient.ExtractChart(q.Name, revision, false, s.initConstants.HelmManifestMaxExtractedSize, s.initConstants.DisableHelmManifestMaxExtractedSize)
if err != nil {
return nil, fmt.Errorf("error extracting chart: %w", err)
}
@@ -2412,11 +2406,7 @@ func fileParameters(q *apiclient.RepoServerAppDetailsQuery) []v1alpha1.HelmFileP
}
func (s *Service) newClient(repo *v1alpha1.Repository, opts ...git.ClientOpts) (git.Client, error) {
keyData, err := json.Marshal(map[string]string{"url": git.NormalizeGitURL(repo.Repo), "project": repo.Project})
if err != nil {
return nil, err
}
repoPath, err := s.gitRepoPaths.GetPath(string(keyData))
repoPath, err := s.gitRepoPaths.GetPath(git.NormalizeGitURL(repo.Repo))
if err != nil {
return nil, err
}
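The removed lines built the repo-path cache key by JSON-marshaling `{url, project}`; the new code keys by the normalized URL alone. `encoding/json` writes map keys in sorted order, so the old composite key was deterministic. A sketch of both key shapes (the URL is illustrative, and the `NormalizeGitURL` step is omitted):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// compositeKey reproduces the removed key scheme. encoding/json
// writes map keys in sorted order, so the key is deterministic.
func compositeKey(url, project string) string {
	b, _ := json.Marshal(map[string]string{"url": url, "project": project})
	return string(b)
}

func main() {
	url := "https://github.com/org/repo1" // illustrative; real code normalizes the URL first
	fmt.Println(compositeKey(url, "default")) // old key: {"project":"default","url":"https://github.com/org/repo1"}
	fmt.Println(url)                          // new key: the normalized URL itself
}
```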


@@ -127,10 +127,10 @@ func newServiceWithMocks(t *testing.T, root string, signed bool) (*Service, *git
chart: {{Version: "1.0.0"}, {Version: version}},
oobChart: {{Version: "1.0.0"}, {Version: version}},
}}, nil)
helmClient.On("ExtractChart", chart, version, "", false, int64(0), false).Return("./testdata/my-chart", io.NopCloser, nil)
helmClient.On("ExtractChart", oobChart, version, "", false, int64(0), false).Return("./testdata2/out-of-bounds-chart", io.NopCloser, nil)
helmClient.On("CleanChartCache", chart, version, "").Return(nil)
helmClient.On("CleanChartCache", oobChart, version, "").Return(nil)
helmClient.On("ExtractChart", chart, version, false, int64(0), false).Return("./testdata/my-chart", io.NopCloser, nil)
helmClient.On("ExtractChart", oobChart, version, false, int64(0), false).Return("./testdata2/out-of-bounds-chart", io.NopCloser, nil)
helmClient.On("CleanChartCache", chart, version).Return(nil)
helmClient.On("CleanChartCache", oobChart, version).Return(nil)
helmClient.On("DependencyBuild").Return(nil)
paths.On("Add", mock.Anything, mock.Anything).Return(root, nil)
@@ -3283,8 +3283,7 @@ func Test_getResolvedValueFiles(t *testing.T) {
tempDir := t.TempDir()
paths := io.NewRandomizedTempPaths(tempDir)
key, _ := json.Marshal(map[string]string{"url": git.NormalizeGitURL("https://github.com/org/repo1"), "project": ""})
paths.Add(string(key), path.Join(tempDir, "repo1"))
paths.Add(git.NormalizeGitURL("https://github.com/org/repo1"), path.Join(tempDir, "repo1"))
testCases := []struct {
name string


@@ -7,8 +7,18 @@ if obj.status == nil or obj.status.conditions == nil then
end
-- Sort conditions by lastTransitionTime, from old to new.
-- Ensure that conditions with nil lastTransitionTime are always sorted after those with non-nil values.
table.sort(obj.status.conditions, function(a, b)
return a.lastTransitionTime < b.lastTransitionTime
-- Nil values are considered "less than" non-nil values.
-- This means that conditions with nil lastTransitionTime will be sorted to the end.
if a.lastTransitionTime == nil then
return false
elseif b.lastTransitionTime == nil then
return true
else
-- If both have non-nil lastTransitionTime, compare them normally.
return a.lastTransitionTime < b.lastTransitionTime
end
end)
for _, condition in ipairs(obj.status.conditions) do
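The new Lua comparator can be mirrored in Go to show the intended order: conditions with a nil lastTransitionTime sort after those with a value, and non-nil RFC 3339 strings compare lexically, which is also chronological. The sample timestamps echo the testdata; the helper is a standalone sketch:

```go
package main

import (
	"fmt"
	"sort"
)

// sortConditions mirrors the Lua comparator above: nil timestamps
// sort last; non-nil RFC 3339 strings sort lexically, which for
// this format is also chronological order.
func sortConditions(times []*string) {
	sort.SliceStable(times, func(i, j int) bool {
		if times[i] == nil {
			return false // nil is "greater": pushed to the end
		}
		if times[j] == nil {
			return true
		}
		return *times[i] < *times[j]
	})
}

func main() {
	a, b := "2025-05-06T12:00:00Z", "2024-01-01T00:00:00Z"
	times := []*string{nil, &a, &b}
	sortConditions(times)
	fmt.Println(*times[0], *times[1], times[2]) // 2024-01-01T00:00:00Z 2025-05-06T12:00:00Z <nil>
}
```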


@@ -14,4 +14,8 @@ tests:
- healthStatus:
status: Degraded
message: "Has Errors: Waiting for foo/keycloak-1 due to CrashLoopBackOff: back-off 10s"
inputPath: testdata/degraded.yaml
inputPath: testdata/degraded.yaml
- healthStatus:
status: Healthy
message: ""
inputPath: testdata/nil_last_transition_time.yaml


@@ -0,0 +1,13 @@
apiVersion: k8s.keycloak.org/v1alpha1
kind: Keycloak
metadata:
name: keycloak-23
namespace: keycloak
status:
conditions:
- type: Ready
status: "True"
lastTransitionTime: "2025-05-06T12:00:00Z" # Non-nil lastTransitionTime
- type: HasErrors
status: "False"
lastTransitionTime: null # Nil lastTransitionTime


@@ -4,6 +4,13 @@ if obj.spec.suspend ~= nil and obj.spec.suspend == true then
hs.status = "Suspended"
return hs
end
-- Helm repositories of type "oci" do not contain any information in the status
-- https://fluxcd.io/flux/components/source/helmrepositories/#helmrepository-status
if obj.spec.type ~= nil and obj.spec.type == "oci" then
hs.message = "Helm repositories of type 'oci' do not contain any information in the status."
hs.status = "Healthy"
return hs
end
if obj.status ~= nil then
if obj.status.conditions ~= nil then
local numProgressing = 0


@@ -11,3 +11,7 @@ tests:
status: Healthy
message: Succeeded
inputPath: testdata/healthy.yaml
- healthStatus:
status: Healthy
message: "Helm repositories of type 'oci' do not contain any information in the status."
inputPath: testdata/oci.yaml


@@ -0,0 +1,10 @@
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
name: podinfo
namespace: default
spec:
type: "oci"
interval: 5m0s
url: oci://ghcr.io/stefanprodan/charts
status: {}


@@ -47,7 +47,7 @@ func (s *Server) UpdatePassword(ctx context.Context, q *account.UpdatePasswordRe
// check for permission if user is trying to change someone else's password
// assuming user is trying to update someone else if username is different or issuer is not Argo CD
if updatedUsername != username || issuer != session.SessionManagerClaimsIssuer {
if err := s.enf.EnforceErr(ctx.Value("claims"), rbacpolicy.ResourceAccounts, rbacpolicy.ActionUpdate, q.Name); err != nil {
if err := s.enf.EnforceErr(ctx.Value("claims"), rbac.ResourceAccounts, rbac.ActionUpdate, q.Name); err != nil {
return nil, fmt.Errorf("permission denied: %w", err)
}
}
@@ -118,11 +118,11 @@ func (s *Server) UpdatePassword(ctx context.Context, q *account.UpdatePasswordRe
// CanI checks if the current account has permission to perform an action
func (s *Server) CanI(ctx context.Context, r *account.CanIRequest) (*account.CanIResponse, error) {
if !slice.ContainsString(rbacpolicy.Actions, r.Action, nil) {
return nil, status.Errorf(codes.InvalidArgument, "%v does not contain %s", rbacpolicy.Actions, r.Action)
if !slice.ContainsString(rbac.Actions, r.Action, nil) {
return nil, status.Errorf(codes.InvalidArgument, "%v does not contain %s", rbac.Actions, r.Action)
}
if !slice.ContainsString(rbacpolicy.Resources, r.Resource, nil) {
return nil, status.Errorf(codes.InvalidArgument, "%v does not contain %s", rbacpolicy.Resources, r.Resource)
if !slice.ContainsString(rbac.Resources, r.Resource, nil) {
return nil, status.Errorf(codes.InvalidArgument, "%v does not contain %s", rbac.Resources, r.Resource)
}
// Logs RBAC will be enforced only if an internal var serverRBACLogEnforceEnable (representing server.rbac.log.enforce.enable env var)
@@ -143,12 +143,11 @@ func (s *Server) CanI(ctx context.Context, r *account.CanIRequest) (*account.Can
ok := s.enf.Enforce(ctx.Value("claims"), r.Resource, r.Action, r.Subresource)
if ok {
return &account.CanIResponse{Value: "yes"}, nil
} else {
return &account.CanIResponse{Value: "no"}, nil
}
return &account.CanIResponse{Value: "no"}, nil
}
func toApiAccount(name string, a settings.Account) *account.Account {
func toAPIAccount(name string, a settings.Account) *account.Account {
var capabilities []string
for _, c := range a.Capabilities {
capabilities = append(capabilities, string(c))
@@ -173,22 +172,22 @@ func (s *Server) ensureHasAccountPermission(ctx context.Context, action string,
if session.Sub(ctx) == account && session.Iss(ctx) == session.SessionManagerClaimsIssuer {
return nil
}
if err := s.enf.EnforceErr(ctx.Value("claims"), rbacpolicy.ResourceAccounts, action, account); err != nil {
if err := s.enf.EnforceErr(ctx.Value("claims"), rbac.ResourceAccounts, action, account); err != nil {
return fmt.Errorf("permission denied for account %s with action %s: %w", account, action, err)
}
return nil
}
// ListAccounts returns the list of accounts
func (s *Server) ListAccounts(ctx context.Context, r *account.ListAccountRequest) (*account.AccountsList, error) {
func (s *Server) ListAccounts(ctx context.Context, _ *account.ListAccountRequest) (*account.AccountsList, error) {
resp := account.AccountsList{}
accounts, err := s.settingsMgr.GetAccounts()
if err != nil {
return nil, fmt.Errorf("failed to get accounts: %w", err)
}
for name, a := range accounts {
if err := s.ensureHasAccountPermission(ctx, rbacpolicy.ActionGet, name); err == nil {
resp.Items = append(resp.Items, toApiAccount(name, a))
if err := s.ensureHasAccountPermission(ctx, rbac.ActionGet, name); err == nil {
resp.Items = append(resp.Items, toAPIAccount(name, a))
}
}
sort.Slice(resp.Items, func(i, j int) bool {
@@ -199,19 +198,19 @@ func (s *Server) ListAccounts(ctx context.Context, r *account.ListAccountRequest
// GetAccount returns an account
func (s *Server) GetAccount(ctx context.Context, r *account.GetAccountRequest) (*account.Account, error) {
if err := s.ensureHasAccountPermission(ctx, rbacpolicy.ActionGet, r.Name); err != nil {
if err := s.ensureHasAccountPermission(ctx, rbac.ActionGet, r.Name); err != nil {
return nil, fmt.Errorf("permission denied to get account %s: %w", r.Name, err)
}
a, err := s.settingsMgr.GetAccount(r.Name)
if err != nil {
return nil, fmt.Errorf("failed to get account %s: %w", r.Name, err)
}
return toApiAccount(r.Name, *a), nil
return toAPIAccount(r.Name, *a), nil
}
// CreateToken creates a token
func (s *Server) CreateToken(ctx context.Context, r *account.CreateTokenRequest) (*account.CreateTokenResponse, error) {
if err := s.ensureHasAccountPermission(ctx, rbacpolicy.ActionUpdate, r.Name); err != nil {
if err := s.ensureHasAccountPermission(ctx, rbac.ActionUpdate, r.Name); err != nil {
return nil, fmt.Errorf("permission denied to create token for account %s: %w", r.Name, err)
}
@@ -259,7 +258,7 @@ func (s *Server) CreateToken(ctx context.Context, r *account.CreateTokenRequest)
// DeleteToken deletes a token
func (s *Server) DeleteToken(ctx context.Context, r *account.DeleteTokenRequest) (*account.EmptyResponse, error) {
if err := s.ensureHasAccountPermission(ctx, rbacpolicy.ActionUpdate, r.Name); err != nil {
if err := s.ensureHasAccountPermission(ctx, rbac.ActionUpdate, r.Name); err != nil {
return nil, fmt.Errorf("permission denied to delete account %s: %w", r.Name, err)
}


@@ -22,8 +22,8 @@ import (
log "github.com/sirupsen/logrus"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
v1 "k8s.io/api/core/v1"
apierr "k8s.io/apimachinery/pkg/api/errors"
corev1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/fields"
@@ -45,7 +45,6 @@ import (
"github.com/argoproj/argo-cd/v2/reposerver/apiclient"
servercache "github.com/argoproj/argo-cd/v2/server/cache"
"github.com/argoproj/argo-cd/v2/server/deeplinks"
"github.com/argoproj/argo-cd/v2/server/rbacpolicy"
"github.com/argoproj/argo-cd/v2/util/argo"
argoutil "github.com/argoproj/argo-cd/v2/util/argo"
cacheutil "github.com/argoproj/argo-cd/v2/util/cache"
@@ -159,7 +158,7 @@ func (s *Server) getAppEnforceRBAC(ctx context.Context, action, project, namespa
if user == "" {
user = "Unknown user"
}
logCtx := log.WithFields(map[string]interface{}{
logCtx := log.WithFields(map[string]any{
"user": user,
"application": name,
"namespace": namespace,
@@ -167,8 +166,8 @@ func (s *Server) getAppEnforceRBAC(ctx context.Context, action, project, namespa
if project != "" {
// The user has provided everything we need to perform an initial RBAC check.
givenRBACName := security.RBACName(s.ns, project, namespace, name)
if err := s.enf.EnforceErr(ctx.Value("claims"), rbacpolicy.ResourceApplications, action, givenRBACName); err != nil {
logCtx.WithFields(map[string]interface{}{
if err := s.enf.EnforceErr(ctx.Value("claims"), rbac.ResourceApplications, action, givenRBACName); err != nil {
logCtx.WithFields(map[string]any{
"project": project,
argocommon.SecurityField: argocommon.SecurityMedium,
}).Warnf("user tried to %s application which they do not have access to: %s", action, err)
@@ -181,10 +180,10 @@ func (s *Server) getAppEnforceRBAC(ctx context.Context, action, project, namespa
}
a, err := getApp()
if err != nil {
if apierr.IsNotFound(err) {
if apierrors.IsNotFound(err) {
if project != "" {
// We know that the user was allowed to get the Application, but the Application does not exist. Return 404.
return nil, nil, status.Error(codes.NotFound, apierr.NewNotFound(schema.GroupResource{Group: "argoproj.io", Resource: "applications"}, name).Error())
return nil, nil, status.Error(codes.NotFound, apierrors.NewNotFound(schema.GroupResource{Group: "argoproj.io", Resource: "applications"}, name).Error())
}
// We don't know if the user was allowed to get the Application, and we don't want to leak information about
// the Application's existence. Return 403.
@@ -197,8 +196,8 @@ func (s *Server) getAppEnforceRBAC(ctx context.Context, action, project, namespa
// Even if we performed an initial RBAC check (because the request was fully parameterized), we still need to
// perform a second RBAC check to ensure that the user has access to the actual Application's project (not just the
// project they specified in the request).
if err := s.enf.EnforceErr(ctx.Value("claims"), rbacpolicy.ResourceApplications, action, a.RBACName(s.ns)); err != nil {
logCtx.WithFields(map[string]interface{}{
if err := s.enf.EnforceErr(ctx.Value("claims"), rbac.ResourceApplications, action, a.RBACName(s.ns)); err != nil {
logCtx.WithFields(map[string]any{
"project": a.Spec.Project,
argocommon.SecurityField: argocommon.SecurityMedium,
}).Warnf("user tried to %s application which they do not have access to: %s", action, err)
@@ -206,7 +205,7 @@ func (s *Server) getAppEnforceRBAC(ctx context.Context, action, project, namespa
// The user specified a project. We would have returned a 404 if the user had access to the app, but the app
// did not exist. So we have to return a 404 when the app does exist, but the user does not have access.
// Otherwise, they could infer that the app exists based on the error code.
return nil, nil, status.Error(codes.NotFound, apierr.NewNotFound(schema.GroupResource{Group: "argoproj.io", Resource: "applications"}, name).Error())
return nil, nil, status.Error(codes.NotFound, apierrors.NewNotFound(schema.GroupResource{Group: "argoproj.io", Resource: "applications"}, name).Error())
}
// The user didn't specify a project. We always return permission denied for both lack of access and lack of
// existence.
@@ -217,13 +216,13 @@ func (s *Server) getAppEnforceRBAC(ctx context.Context, action, project, namespa
effectiveProject = a.Spec.Project
}
if project != "" && effectiveProject != project {
logCtx.WithFields(map[string]interface{}{
logCtx.WithFields(map[string]any{
"project": a.Spec.Project,
argocommon.SecurityField: argocommon.SecurityMedium,
}).Warnf("user tried to %s application in project %s, but the application is in project %s", action, project, effectiveProject)
// The user has access to the app, but the app is in a different project. Return 404, meaning "app doesn't
// exist in that project".
return nil, nil, status.Error(codes.NotFound, apierr.NewNotFound(schema.GroupResource{Group: "argoproj.io", Resource: "applications"}, name).Error())
return nil, nil, status.Error(codes.NotFound, apierrors.NewNotFound(schema.GroupResource{Group: "argoproj.io", Resource: "applications"}, name).Error())
}
// Get the app's associated project, and make sure all project restrictions are enforced.
proj, err := s.getAppProject(ctx, a, logCtx)
@@ -293,7 +292,7 @@ func (s *Server) List(ctx context.Context, q *application.ApplicationQuery) (*ap
if !s.isNamespaceEnabled(a.Namespace) {
continue
}
if s.enf.Enforce(ctx.Value("claims"), rbacpolicy.ResourceApplications, rbacpolicy.ActionGet, a.RBACName(s.ns)) {
if s.enf.Enforce(ctx.Value("claims"), rbac.ResourceApplications, rbac.ActionGet, a.RBACName(s.ns)) {
newItems = append(newItems, *a)
}
}
@@ -315,11 +314,11 @@ func (s *Server) List(ctx context.Context, q *application.ApplicationQuery) (*ap
// Create creates an application
func (s *Server) Create(ctx context.Context, q *application.ApplicationCreateRequest) (*appv1.Application, error) {
if q.GetApplication() == nil {
return nil, fmt.Errorf("error creating application: application is nil in request")
return nil, errors.New("error creating application: application is nil in request")
}
a := q.GetApplication()
if err := s.enf.EnforceErr(ctx.Value("claims"), rbacpolicy.ResourceApplications, rbacpolicy.ActionCreate, a.RBACName(s.ns)); err != nil {
if err := s.enf.EnforceErr(ctx.Value("claims"), rbac.ResourceApplications, rbac.ActionCreate, a.RBACName(s.ns)); err != nil {
return nil, err
}
@@ -362,7 +361,7 @@ func (s *Server) Create(ctx context.Context, q *application.ApplicationCreateReq
s.waitSync(created)
return created, nil
}
if !apierr.IsAlreadyExists(err) {
if !apierrors.IsAlreadyExists(err) {
return nil, fmt.Errorf("error creating application: %w", err)
}
@@ -388,7 +387,7 @@ func (s *Server) Create(ctx context.Context, q *application.ApplicationCreateReq
if q.Upsert == nil || !*q.Upsert {
return nil, status.Errorf(codes.InvalidArgument, "existing application spec is different, use upsert flag to force update")
}
-if err := s.enf.EnforceErr(ctx.Value("claims"), rbacpolicy.ResourceApplications, rbacpolicy.ActionUpdate, a.RBACName(s.ns)); err != nil {
+if err := s.enf.EnforceErr(ctx.Value("claims"), rbac.ResourceApplications, rbac.ActionUpdate, a.RBACName(s.ns)); err != nil {
return nil, err
}
updated, err := s.updateApp(existing, a, ctx, true)
@@ -443,9 +442,9 @@ func (s *Server) queryRepoServer(ctx context.Context, proj *appv1.AppProject, ac
// GetManifests returns application manifests
func (s *Server) GetManifests(ctx context.Context, q *application.ApplicationManifestQuery) (*apiclient.ManifestResponse, error) {
if q.Name == nil || *q.Name == "" {
-return nil, fmt.Errorf("invalid request: application name is missing")
+return nil, errors.New("invalid request: application name is missing")
}
-a, proj, err := s.getApplicationEnforceRBACInformer(ctx, rbacpolicy.ActionGet, q.GetProject(), q.GetAppNamespace(), q.GetName())
+a, proj, err := s.getApplicationEnforceRBACInformer(ctx, rbac.ActionGet, q.GetProject(), q.GetAppNamespace(), q.GetName())
if err != nil {
return nil, err
}
@@ -590,10 +589,10 @@ func (s *Server) GetManifestsWithFiles(stream application.ApplicationService_Get
}
if query.Name == nil || *query.Name == "" {
-return fmt.Errorf("invalid request: application name is missing")
+return errors.New("invalid request: application name is missing")
}
-a, proj, err := s.getApplicationEnforceRBACInformer(ctx, rbacpolicy.ActionGet, query.GetProject(), query.GetAppNamespace(), query.GetName())
+a, proj, err := s.getApplicationEnforceRBACInformer(ctx, rbac.ActionGet, query.GetProject(), query.GetAppNamespace(), query.GetName())
if err != nil {
return err
}
@@ -724,7 +723,7 @@ func (s *Server) Get(ctx context.Context, q *application.ApplicationQuery) (*app
// We must use a client Get instead of an informer Get, because it's common to call Get immediately
// following a Watch (which is not yet powered by an informer), and the Get must reflect what was
// previously seen by the client.
-a, proj, err := s.getApplicationEnforceRBACClient(ctx, rbacpolicy.ActionGet, project, appNs, appName, q.GetResourceVersion())
+a, proj, err := s.getApplicationEnforceRBACClient(ctx, rbac.ActionGet, project, appNs, appName, q.GetResourceVersion())
if err != nil {
return nil, err
}
@@ -800,7 +799,7 @@ func (s *Server) Get(ctx context.Context, q *application.ApplicationQuery) (*app
for {
select {
case <-ctx.Done():
-return nil, fmt.Errorf("application refresh deadline exceeded")
+return nil, errors.New("application refresh deadline exceeded")
case event := <-events:
if appVersion, err := strconv.Atoi(event.Application.ResourceVersion); err == nil && appVersion > minVersion {
annotations := event.Application.GetAnnotations()
@@ -816,8 +815,8 @@ func (s *Server) Get(ctx context.Context, q *application.ApplicationQuery) (*app
}
// ListResourceEvents returns a list of event resources
func (s *Server) ListResourceEvents(ctx context.Context, q *application.ApplicationResourceEventsQuery) (*v1.EventList, error) {
a, _, err := s.getApplicationEnforceRBACInformer(ctx, rbacpolicy.ActionGet, q.GetProject(), q.GetAppNamespace(), q.GetName())
func (s *Server) ListResourceEvents(ctx context.Context, q *application.ApplicationResourceEventsQuery) (*corev1.EventList, error) {
a, _, err := s.getApplicationEnforceRBACInformer(ctx, rbac.ActionGet, q.GetProject(), q.GetAppNamespace(), q.GetName())
if err != nil {
return nil, err
}
@@ -954,7 +953,7 @@ func (s *Server) updateApp(app *appv1.Application, newApp *appv1.Application, ct
s.waitSync(res)
return res, nil
}
-if !apierr.IsConflict(err) {
+if !apierrors.IsConflict(err) {
return nil, err
}
@@ -970,10 +969,10 @@ func (s *Server) updateApp(app *appv1.Application, newApp *appv1.Application, ct
// Update updates an application
func (s *Server) Update(ctx context.Context, q *application.ApplicationUpdateRequest) (*appv1.Application, error) {
if q.GetApplication() == nil {
-return nil, fmt.Errorf("error updating application: application is nil in request")
+return nil, errors.New("error updating application: application is nil in request")
}
a := q.GetApplication()
-if err := s.enf.EnforceErr(ctx.Value("claims"), rbacpolicy.ResourceApplications, rbacpolicy.ActionUpdate, a.RBACName(s.ns)); err != nil {
+if err := s.enf.EnforceErr(ctx.Value("claims"), rbac.ResourceApplications, rbac.ActionUpdate, a.RBACName(s.ns)); err != nil {
return nil, err
}
@@ -981,15 +980,15 @@ func (s *Server) Update(ctx context.Context, q *application.ApplicationUpdateReq
if q.Validate != nil {
validate = *q.Validate
}
-return s.validateAndUpdateApp(ctx, q.Application, false, validate, rbacpolicy.ActionUpdate, q.GetProject())
+return s.validateAndUpdateApp(ctx, q.Application, false, validate, rbac.ActionUpdate, q.GetProject())
}
// UpdateSpec updates an application spec and filters out any invalid parameter overrides
func (s *Server) UpdateSpec(ctx context.Context, q *application.ApplicationUpdateSpecRequest) (*appv1.ApplicationSpec, error) {
if q.GetSpec() == nil {
-return nil, fmt.Errorf("error updating application spec: spec is nil in request")
+return nil, errors.New("error updating application spec: spec is nil in request")
}
-a, _, err := s.getApplicationEnforceRBACClient(ctx, rbacpolicy.ActionUpdate, q.GetProject(), q.GetAppNamespace(), q.GetName(), "")
+a, _, err := s.getApplicationEnforceRBACClient(ctx, rbac.ActionUpdate, q.GetProject(), q.GetAppNamespace(), q.GetName(), "")
if err != nil {
return nil, err
}
@@ -999,7 +998,7 @@ func (s *Server) UpdateSpec(ctx context.Context, q *application.ApplicationUpdat
if q.Validate != nil {
validate = *q.Validate
}
-a, err = s.validateAndUpdateApp(ctx, a, false, validate, rbacpolicy.ActionUpdate, q.GetProject())
+a, err = s.validateAndUpdateApp(ctx, a, false, validate, rbac.ActionUpdate, q.GetProject())
if err != nil {
return nil, fmt.Errorf("error validating and updating app: %w", err)
}
@@ -1008,12 +1007,12 @@ func (s *Server) UpdateSpec(ctx context.Context, q *application.ApplicationUpdat
// Patch patches an application
func (s *Server) Patch(ctx context.Context, q *application.ApplicationPatchRequest) (*appv1.Application, error) {
-app, _, err := s.getApplicationEnforceRBACClient(ctx, rbacpolicy.ActionGet, q.GetProject(), q.GetAppNamespace(), q.GetName(), "")
+app, _, err := s.getApplicationEnforceRBACClient(ctx, rbac.ActionGet, q.GetProject(), q.GetAppNamespace(), q.GetName(), "")
if err != nil {
return nil, err
}
-if err = s.enf.EnforceErr(ctx.Value("claims"), rbacpolicy.ResourceApplications, rbacpolicy.ActionUpdate, app.RBACName(s.ns)); err != nil {
+if err = s.enf.EnforceErr(ctx.Value("claims"), rbac.ResourceApplications, rbac.ActionUpdate, app.RBACName(s.ns)); err != nil {
return nil, err
}
@@ -1048,7 +1047,7 @@ func (s *Server) Patch(ctx context.Context, q *application.ApplicationPatchReque
if err != nil {
return nil, fmt.Errorf("error unmarshaling patched app: %w", err)
}
-return s.validateAndUpdateApp(ctx, newApp, false, true, rbacpolicy.ActionUpdate, q.GetProject())
+return s.validateAndUpdateApp(ctx, newApp, false, true, rbac.ActionUpdate, q.GetProject())
}
func (s *Server) getAppProject(ctx context.Context, a *appv1.Application, logCtx *log.Entry) (*appv1.AppProject, error) {
@@ -1060,13 +1059,13 @@ func (s *Server) getAppProject(ctx context.Context, a *appv1.Application, logCtx
// If there's a permission issue or the app doesn't exist, return a vague error to avoid letting the user enumerate project names.
vagueError := status.Errorf(codes.InvalidArgument, "app is not allowed in project %q, or the project does not exist", a.Spec.Project)
-if apierr.IsNotFound(err) {
+if apierrors.IsNotFound(err) {
return nil, vagueError
}
var applicationNotAllowedToUseProjectErr *appv1.ErrApplicationNotAllowedToUseProject
if errors.As(err, &applicationNotAllowedToUseProjectErr) {
-logCtx.WithFields(map[string]interface{}{
+logCtx.WithFields(map[string]any{
"project": a.Spec.Project,
argocommon.SecurityField: argocommon.SecurityMedium,
}).Warnf("error getting app project: %s", err)
@@ -1080,7 +1079,7 @@ func (s *Server) getAppProject(ctx context.Context, a *appv1.Application, logCtx
func (s *Server) Delete(ctx context.Context, q *application.ApplicationDeleteRequest) (*application.ApplicationResponse, error) {
appName := q.GetName()
appNs := s.appNamespaceOrDefault(q.GetAppNamespace())
-a, _, err := s.getApplicationEnforceRBACClient(ctx, rbacpolicy.ActionGet, q.GetProject(), appNs, appName, "")
+a, _, err := s.getApplicationEnforceRBACClient(ctx, rbac.ActionGet, q.GetProject(), appNs, appName, "")
if err != nil {
return nil, err
}
@@ -1088,7 +1087,7 @@ func (s *Server) Delete(ctx context.Context, q *application.ApplicationDeleteReq
s.projectLock.RLock(a.Spec.Project)
defer s.projectLock.RUnlock(a.Spec.Project)
-if err := s.enf.EnforceErr(ctx.Value("claims"), rbacpolicy.ResourceApplications, rbacpolicy.ActionDelete, a.RBACName(s.ns)); err != nil {
+if err := s.enf.EnforceErr(ctx.Value("claims"), rbac.ResourceApplications, rbac.ActionDelete, a.RBACName(s.ns)); err != nil {
return nil, err
}
@@ -1116,8 +1115,8 @@ func (s *Server) Delete(ctx context.Context, q *application.ApplicationDeleteReq
// Although the cascaded deletion/propagation policy finalizer is not set when apps are created via
// API, they will often be set by the user as part of declarative config. As part of a delete
// request, we always calculate the patch to see if we need to set/unset the finalizer.
-patch, err := json.Marshal(map[string]interface{}{
-"metadata": map[string]interface{}{
+patch, err := json.Marshal(map[string]any{
+"metadata": map[string]any{
"finalizers": a.Finalizers,
},
})
@@ -1155,7 +1154,7 @@ func (s *Server) isApplicationPermitted(selector labels.Selector, minVersion int
return false
}
-if !s.enf.Enforce(claims, rbacpolicy.ResourceApplications, rbacpolicy.ActionGet, a.RBACName(s.ns)) {
+if !s.enf.Enforce(claims, rbac.ResourceApplications, rbac.ActionGet, a.RBACName(s.ns)) {
// do not emit apps user does not have accessing
return false
}
@@ -1235,7 +1234,7 @@ func (s *Server) Watch(q *application.ApplicationQuery, ws application.Applicati
func (s *Server) validateAndNormalizeApp(ctx context.Context, app *appv1.Application, proj *appv1.AppProject, validate bool) error {
if app.GetName() == "" {
-return fmt.Errorf("resource name may not be empty")
+return errors.New("resource name may not be empty")
}
// ensure sources names are unique
@@ -1252,7 +1251,7 @@ func (s *Server) validateAndNormalizeApp(ctx context.Context, app *appv1.Applica
appNs := s.appNamespaceOrDefault(app.Namespace)
currApp, err := s.appclientset.ArgoprojV1alpha1().Applications(appNs).Get(ctx, app.Name, metav1.GetOptions{})
if err != nil {
-if !apierr.IsNotFound(err) {
+if !apierrors.IsNotFound(err) {
return fmt.Errorf("error getting application by name: %w", err)
}
// Kubernetes go-client will return a pointer to a zero-value app instead of nil, even
@@ -1262,11 +1261,11 @@ func (s *Server) validateAndNormalizeApp(ctx context.Context, app *appv1.Applica
if currApp != nil && currApp.Spec.GetProject() != app.Spec.GetProject() {
// When changing projects, caller must have application create & update privileges in new project
// NOTE: the update check was already verified in the caller to this function
-if err := s.enf.EnforceErr(ctx.Value("claims"), rbacpolicy.ResourceApplications, rbacpolicy.ActionCreate, app.RBACName(s.ns)); err != nil {
+if err := s.enf.EnforceErr(ctx.Value("claims"), rbac.ResourceApplications, rbac.ActionCreate, app.RBACName(s.ns)); err != nil {
return err
}
// They also need 'update' privileges in the old project
-if err := s.enf.EnforceErr(ctx.Value("claims"), rbacpolicy.ResourceApplications, rbacpolicy.ActionUpdate, currApp.RBACName(s.ns)); err != nil {
+if err := s.enf.EnforceErr(ctx.Value("claims"), rbac.ResourceApplications, rbac.ActionUpdate, currApp.RBACName(s.ns)); err != nil {
return err
}
}
@@ -1361,11 +1360,11 @@ func (s *Server) getAppLiveResource(ctx context.Context, action string, q *appli
return nil, nil, nil, err
}
-if fineGrainedInheritanceDisabled && (action == rbacpolicy.ActionDelete || action == rbacpolicy.ActionUpdate) {
+if fineGrainedInheritanceDisabled && (action == rbac.ActionDelete || action == rbac.ActionUpdate) {
action = fmt.Sprintf("%s/%s/%s/%s/%s", action, q.GetGroup(), q.GetKind(), q.GetNamespace(), q.GetResourceName())
}
a, _, err := s.getApplicationEnforceRBACInformer(ctx, action, q.GetProject(), q.GetAppNamespace(), q.GetName())
-if !fineGrainedInheritanceDisabled && err != nil && errors.Is(err, argocommon.PermissionDeniedAPIError) && (action == rbacpolicy.ActionDelete || action == rbacpolicy.ActionUpdate) {
+if !fineGrainedInheritanceDisabled && err != nil && errors.Is(err, argocommon.PermissionDeniedAPIError) && (action == rbac.ActionDelete || action == rbac.ActionUpdate) {
action = fmt.Sprintf("%s/%s/%s/%s/%s", action, q.GetGroup(), q.GetKind(), q.GetNamespace(), q.GetResourceName())
a, _, err = s.getApplicationEnforceRBACInformer(ctx, action, q.GetProject(), q.GetAppNamespace(), q.GetName())
}
@@ -1390,7 +1389,7 @@ func (s *Server) getAppLiveResource(ctx context.Context, action string, q *appli
}
func (s *Server) GetResource(ctx context.Context, q *application.ApplicationResourceRequest) (*application.ApplicationResourceResponse, error) {
-res, config, _, err := s.getAppLiveResource(ctx, rbacpolicy.ActionGet, q)
+res, config, _, err := s.getAppLiveResource(ctx, rbac.ActionGet, q)
if err != nil {
return nil, err
}
@@ -1438,7 +1437,7 @@ func (s *Server) PatchResource(ctx context.Context, q *application.ApplicationRe
Group: q.Group,
Project: q.Project,
}
-res, config, a, err := s.getAppLiveResource(ctx, rbacpolicy.ActionUpdate, resourceRequest)
+res, config, a, err := s.getAppLiveResource(ctx, rbac.ActionUpdate, resourceRequest)
if err != nil {
return nil, err
}
@@ -1452,7 +1451,7 @@ func (s *Server) PatchResource(ctx context.Context, q *application.ApplicationRe
return nil, fmt.Errorf("error patching resource: %w", err)
}
if manifest == nil {
-return nil, fmt.Errorf("failed to patch resource: manifest was nil")
+return nil, errors.New("failed to patch resource: manifest was nil")
}
manifest, err = s.replaceSecretValues(manifest)
if err != nil {
@@ -1481,19 +1480,20 @@ func (s *Server) DeleteResource(ctx context.Context, q *application.ApplicationR
Group: q.Group,
Project: q.Project,
}
-res, config, a, err := s.getAppLiveResource(ctx, rbacpolicy.ActionDelete, resourceRequest)
+res, config, a, err := s.getAppLiveResource(ctx, rbac.ActionDelete, resourceRequest)
if err != nil {
return nil, err
}
var deleteOption metav1.DeleteOptions
-if q.GetOrphan() {
+switch {
+case q.GetOrphan():
propagationPolicy := metav1.DeletePropagationOrphan
deleteOption = metav1.DeleteOptions{PropagationPolicy: &propagationPolicy}
-} else if q.GetForce() {
+case q.GetForce():
propagationPolicy := metav1.DeletePropagationBackground
zeroGracePeriod := int64(0)
deleteOption = metav1.DeleteOptions{PropagationPolicy: &propagationPolicy, GracePeriodSeconds: &zeroGracePeriod}
-} else {
+default:
propagationPolicy := metav1.DeletePropagationForeground
deleteOption = metav1.DeleteOptions{PropagationPolicy: &propagationPolicy}
}
@@ -1506,7 +1506,7 @@ func (s *Server) DeleteResource(ctx context.Context, q *application.ApplicationR
}
func (s *Server) ResourceTree(ctx context.Context, q *application.ResourcesQuery) (*appv1.ApplicationTree, error) {
-a, _, err := s.getApplicationEnforceRBACInformer(ctx, rbacpolicy.ActionGet, q.GetProject(), q.GetAppNamespace(), q.GetApplicationName())
+a, _, err := s.getApplicationEnforceRBACInformer(ctx, rbac.ActionGet, q.GetProject(), q.GetAppNamespace(), q.GetApplicationName())
if err != nil {
return nil, err
}
@@ -1515,7 +1515,7 @@ func (s *Server) ResourceTree(ctx context.Context, q *application.ResourcesQuery
}
func (s *Server) WatchResourceTree(q *application.ResourcesQuery, ws application.ApplicationService_WatchResourceTreeServer) error {
-_, _, err := s.getApplicationEnforceRBACInformer(ws.Context(), rbacpolicy.ActionGet, q.GetProject(), q.GetAppNamespace(), q.GetApplicationName())
+_, _, err := s.getApplicationEnforceRBACInformer(ws.Context(), rbac.ActionGet, q.GetProject(), q.GetAppNamespace(), q.GetApplicationName())
if err != nil {
return err
}
@@ -1532,7 +1532,7 @@ func (s *Server) WatchResourceTree(q *application.ResourcesQuery, ws application
}
func (s *Server) RevisionMetadata(ctx context.Context, q *application.RevisionMetadataQuery) (*appv1.RevisionMetadata, error) {
-a, proj, err := s.getApplicationEnforceRBACInformer(ctx, rbacpolicy.ActionGet, q.GetProject(), q.GetAppNamespace(), q.GetName())
+a, proj, err := s.getApplicationEnforceRBACInformer(ctx, rbac.ActionGet, q.GetProject(), q.GetAppNamespace(), q.GetName())
if err != nil {
return nil, err
}
@@ -1560,7 +1560,7 @@ func (s *Server) RevisionMetadata(ctx context.Context, q *application.RevisionMe
// RevisionChartDetails returns the helm chart metadata, as fetched from the reposerver
func (s *Server) RevisionChartDetails(ctx context.Context, q *application.RevisionMetadataQuery) (*appv1.ChartDetails, error) {
-a, _, err := s.getApplicationEnforceRBACInformer(ctx, rbacpolicy.ActionGet, q.GetProject(), q.GetAppNamespace(), q.GetName())
+a, _, err := s.getApplicationEnforceRBACInformer(ctx, rbac.ActionGet, q.GetProject(), q.GetAppNamespace(), q.GetName())
if err != nil {
return nil, err
}
@@ -1667,7 +1667,7 @@ func isMatchingResource(q *application.ResourcesQuery, key kube.ResourceKey) boo
}
func (s *Server) ManagedResources(ctx context.Context, q *application.ResourcesQuery) (*application.ManagedResourcesResponse, error) {
-a, _, err := s.getApplicationEnforceRBACInformer(ctx, rbacpolicy.ActionGet, q.GetProject(), q.GetAppNamespace(), q.GetApplicationName())
+a, _, err := s.getApplicationEnforceRBACInformer(ctx, rbac.ActionGet, q.GetProject(), q.GetAppNamespace(), q.GetApplicationName())
if err != nil {
return nil, err
}
@@ -1706,12 +1706,12 @@ func (s *Server) PodLogs(q *application.ApplicationPodLogsQuery, ws application.
}
var untilTime *metav1.Time
if q.GetUntilTime() != "" {
-if val, err := time.Parse(time.RFC3339Nano, q.GetUntilTime()); err != nil {
+val, err := time.Parse(time.RFC3339Nano, q.GetUntilTime())
+if err != nil {
return fmt.Errorf("invalid untilTime parameter value: %w", err)
-} else {
-untilTimeVal := metav1.NewTime(val)
-untilTime = &untilTimeVal
}
+untilTimeVal := metav1.NewTime(val)
+untilTime = &untilTimeVal
}
literal := ""
@@ -1724,7 +1724,7 @@ func (s *Server) PodLogs(q *application.ApplicationPodLogsQuery, ws application.
}
}
-a, _, err := s.getApplicationEnforceRBACInformer(ws.Context(), rbacpolicy.ActionGet, q.GetProject(), q.GetAppNamespace(), q.GetName())
+a, _, err := s.getApplicationEnforceRBACInformer(ws.Context(), rbac.ActionGet, q.GetProject(), q.GetAppNamespace(), q.GetName())
if err != nil {
return err
}
@@ -1739,7 +1739,7 @@ func (s *Server) PodLogs(q *application.ApplicationPodLogsQuery, ws application.
}
if serverRBACLogEnforceEnable {
-if err := s.enf.EnforceErr(ws.Context().Value("claims"), rbacpolicy.ResourceLogs, rbacpolicy.ActionGet, a.RBACName(s.ns)); err != nil {
+if err := s.enf.EnforceErr(ws.Context().Value("claims"), rbac.ResourceLogs, rbac.ActionGet, a.RBACName(s.ns)); err != nil {
return err
}
}
@@ -1777,7 +1777,7 @@ func (s *Server) PodLogs(q *application.ApplicationPodLogsQuery, ws application.
var streams []chan logEntry
for _, pod := range pods {
-stream, err := kubeClientset.CoreV1().Pods(pod.Namespace).GetLogs(pod.Name, &v1.PodLogOptions{
+stream, err := kubeClientset.CoreV1().Pods(pod.Namespace).GetLogs(pod.Name, &corev1.PodLogOptions{
Container: q.GetContainer(),
Follow: q.GetFollow(),
Timestamps: true,
@@ -1813,37 +1813,35 @@ func (s *Server) PodLogs(q *application.ApplicationPodLogsQuery, ws application.
if entry.err != nil {
done <- entry.err
return
-} else {
-if q.Filter != nil {
-lineContainsFilter := strings.Contains(entry.line, literal)
-if (inverse && lineContainsFilter) || (!inverse && !lineContainsFilter) {
-continue
-}
-}
-ts := metav1.NewTime(entry.timeStamp)
-if untilTime != nil && entry.timeStamp.After(untilTime.Time) {
-done <- ws.Send(&application.LogEntry{
-Last: ptr.To(true),
-PodName: &entry.podName,
-Content: &entry.line,
-TimeStampStr: ptr.To(entry.timeStamp.Format(time.RFC3339Nano)),
-TimeStamp: &ts,
-})
-return
-} else {
-sentCount++
-if err := ws.Send(&application.LogEntry{
-PodName: &entry.podName,
-Content: &entry.line,
-TimeStampStr: ptr.To(entry.timeStamp.Format(time.RFC3339Nano)),
-TimeStamp: &ts,
-Last: ptr.To(false),
-}); err != nil {
-done <- err
-break
-}
-}
+if q.Filter != nil {
+lineContainsFilter := strings.Contains(entry.line, literal)
+if (inverse && lineContainsFilter) || (!inverse && !lineContainsFilter) {
+continue
+}
+}
+ts := metav1.NewTime(entry.timeStamp)
+if untilTime != nil && entry.timeStamp.After(untilTime.Time) {
+done <- ws.Send(&application.LogEntry{
+Last: ptr.To(true),
+PodName: &entry.podName,
+Content: &entry.line,
+TimeStampStr: ptr.To(entry.timeStamp.Format(time.RFC3339Nano)),
+TimeStamp: &ts,
+})
+return
+}
+sentCount++
+if err := ws.Send(&application.LogEntry{
+PodName: &entry.podName,
+Content: &entry.line,
+TimeStampStr: ptr.To(entry.timeStamp.Format(time.RFC3339Nano)),
+TimeStamp: &ts,
+Last: ptr.To(false),
+}); err != nil {
+done <- err
+break
+}
}
now := time.Now()
nowTS := metav1.NewTime(now)
@@ -1921,7 +1919,7 @@ func isTheSelectedOne(currentNode *appv1.ResourceNode, q *application.Applicatio
// Sync syncs an application to its target state
func (s *Server) Sync(ctx context.Context, syncReq *application.ApplicationSyncRequest) (*appv1.Application, error) {
-a, proj, err := s.getApplicationEnforceRBACClient(ctx, rbacpolicy.ActionGet, syncReq.GetProject(), syncReq.GetAppNamespace(), syncReq.GetName(), "")
+a, proj, err := s.getApplicationEnforceRBACClient(ctx, rbac.ActionGet, syncReq.GetProject(), syncReq.GetAppNamespace(), syncReq.GetName(), "")
if err != nil {
return nil, err
}
@@ -1936,12 +1934,12 @@ func (s *Server) Sync(ctx context.Context, syncReq *application.ApplicationSyncR
return a, status.Errorf(codes.PermissionDenied, "cannot sync: blocked by sync window")
}
-if err := s.enf.EnforceErr(ctx.Value("claims"), rbacpolicy.ResourceApplications, rbacpolicy.ActionSync, a.RBACName(s.ns)); err != nil {
+if err := s.enf.EnforceErr(ctx.Value("claims"), rbac.ResourceApplications, rbac.ActionSync, a.RBACName(s.ns)); err != nil {
return nil, err
}
if syncReq.Manifests != nil {
-if err := s.enf.EnforceErr(ctx.Value("claims"), rbacpolicy.ResourceApplications, rbacpolicy.ActionOverride, a.RBACName(s.ns)); err != nil {
+if err := s.enf.EnforceErr(ctx.Value("claims"), rbac.ResourceApplications, rbac.ActionOverride, a.RBACName(s.ns)); err != nil {
return nil, err
}
if a.Spec.SyncPolicy != nil && a.Spec.SyncPolicy.Automated != nil && !syncReq.GetDryRun() {
@@ -2035,7 +2033,7 @@ func (s *Server) resolveSourceRevisions(ctx context.Context, a *appv1.Applicatio
sources := a.Spec.GetSources()
for i, pos := range syncReq.SourcePositions {
if pos <= 0 || pos > numOfSources {
-return "", "", nil, nil, fmt.Errorf("source position is out of range")
+return "", "", nil, nil, errors.New("source position is out of range")
}
sources[pos-1].TargetRevision = syncReq.Revisions[i]
}
@@ -2053,23 +2051,22 @@ func (s *Server) resolveSourceRevisions(ctx context.Context, a *appv1.Applicatio
displayRevisions[index] = displayRevision
}
return "", "", sourceRevisions, displayRevisions, nil
-} else {
-source := a.Spec.GetSource()
-if a.Spec.SyncPolicy != nil && a.Spec.SyncPolicy.Automated != nil && !syncReq.GetDryRun() {
-if syncReq.GetRevision() != "" && syncReq.GetRevision() != text.FirstNonEmpty(source.TargetRevision, "HEAD") {
-return "", "", nil, nil, status.Errorf(codes.FailedPrecondition, "Cannot sync to %s: auto-sync currently set to %s", syncReq.GetRevision(), source.TargetRevision)
-}
-}
-revision, displayRevision, err := s.resolveRevision(ctx, a, syncReq, -1)
-if err != nil {
-return "", "", nil, nil, status.Error(codes.FailedPrecondition, err.Error())
-}
-return revision, displayRevision, nil, nil, nil
-}
+source := a.Spec.GetSource()
+if a.Spec.SyncPolicy != nil && a.Spec.SyncPolicy.Automated != nil && !syncReq.GetDryRun() {
+if syncReq.GetRevision() != "" && syncReq.GetRevision() != text.FirstNonEmpty(source.TargetRevision, "HEAD") {
+return "", "", nil, nil, status.Errorf(codes.FailedPrecondition, "Cannot sync to %s: auto-sync currently set to %s", syncReq.GetRevision(), source.TargetRevision)
+}
+}
+revision, displayRevision, err := s.resolveRevision(ctx, a, syncReq, -1)
+if err != nil {
+return "", "", nil, nil, status.Error(codes.FailedPrecondition, err.Error())
+}
+return revision, displayRevision, nil, nil, nil
}
func (s *Server) Rollback(ctx context.Context, rollbackReq *application.ApplicationRollbackRequest) (*appv1.Application, error) {
-a, _, err := s.getApplicationEnforceRBACClient(ctx, rbacpolicy.ActionSync, rollbackReq.GetProject(), rollbackReq.GetAppNamespace(), rollbackReq.GetName(), "")
+a, _, err := s.getApplicationEnforceRBACClient(ctx, rbac.ActionSync, rollbackReq.GetProject(), rollbackReq.GetAppNamespace(), rollbackReq.GetName(), "")
if err != nil {
return nil, err
}
@@ -2131,7 +2128,7 @@ func (s *Server) Rollback(ctx context.Context, rollbackReq *application.Applicat
}
func (s *Server) ListLinks(ctx context.Context, req *application.ListAppLinksRequest) (*application.LinksResponse, error) {
-a, proj, err := s.getApplicationEnforceRBACClient(ctx, rbacpolicy.ActionGet, req.GetProject(), req.GetNamespace(), req.GetName(), "")
+a, proj, err := s.getApplicationEnforceRBACClient(ctx, rbac.ActionGet, req.GetProject(), req.GetNamespace(), req.GetName(), "")
if err != nil {
return nil, err
}
@@ -2205,7 +2202,7 @@ func (s *Server) getObjectsForDeepLinks(ctx context.Context, app *appv1.Applicat
}
func (s *Server) ListResourceLinks(ctx context.Context, req *application.ApplicationResourceRequest) (*application.LinksResponse, error) {
-obj, _, app, _, err := s.getUnstructuredLiveResourceOrApp(ctx, rbacpolicy.ActionGet, req)
+obj, _, app, _, err := s.getUnstructuredLiveResourceOrApp(ctx, rbac.ActionGet, req)
if err != nil {
return nil, err
}
@@ -2272,12 +2269,12 @@ func (s *Server) resolveRevision(ctx context.Context, app *appv1.Application, sy
ambiguousRevision := getAmbiguousRevision(app, syncReq, sourceIndex)
-repoUrl := app.Spec.GetSource().RepoURL
+repoURL := app.Spec.GetSource().RepoURL
if app.Spec.HasMultipleSources() {
-repoUrl = app.Spec.Sources[sourceIndex].RepoURL
+repoURL = app.Spec.Sources[sourceIndex].RepoURL
}
-repo, err := s.db.GetRepository(ctx, repoUrl, app.Spec.Project)
+repo, err := s.db.GetRepository(ctx, repoURL, app.Spec.Project)
if err != nil {
return "", "", fmt.Errorf("error getting repository by URL: %w", err)
}
@@ -2310,7 +2307,7 @@ func (s *Server) resolveRevision(ctx context.Context, app *appv1.Application, sy
func (s *Server) TerminateOperation(ctx context.Context, termOpReq *application.OperationTerminateRequest) (*application.OperationTerminateResponse, error) {
appName := termOpReq.GetName()
appNs := s.appNamespaceOrDefault(termOpReq.GetAppNamespace())
-a, _, err := s.getApplicationEnforceRBACClient(ctx, rbacpolicy.ActionSync, termOpReq.GetProject(), appNs, appName, "")
+a, _, err := s.getApplicationEnforceRBACClient(ctx, rbac.ActionSync, termOpReq.GetProject(), appNs, appName, "")
if err != nil {
return nil, err
}
@@ -2326,7 +2323,7 @@ func (s *Server) TerminateOperation(ctx context.Context, termOpReq *application.
s.logAppEvent(a, ctx, argo.EventReasonResourceUpdated, "terminated running operation")
return &application.OperationTerminateResponse{}, nil
}
-if !apierr.IsConflict(err) {
+if !apierrors.IsConflict(err) {
return nil, fmt.Errorf("error updating application: %w", err)
}
log.Warnf("failed to set operation for app %q due to update conflict. retrying again...", *termOpReq.Name)
@@ -2340,7 +2337,7 @@ func (s *Server) TerminateOperation(ctx context.Context, termOpReq *application.
}
func (s *Server) logAppEvent(a *appv1.Application, ctx context.Context, reason string, action string) {
-eventInfo := argo.EventInfo{Type: v1.EventTypeNormal, Reason: reason}
+eventInfo := argo.EventInfo{Type: corev1.EventTypeNormal, Reason: reason}
user := session.Username(ctx)
if user == "" {
user = "Unknown user"
@@ -2351,7 +2348,7 @@ func (s *Server) logAppEvent(a *appv1.Application, ctx context.Context, reason s
}
func (s *Server) logResourceEvent(res *appv1.ResourceNode, ctx context.Context, reason string, action string) {
-eventInfo := argo.EventInfo{Type: v1.EventTypeNormal, Reason: reason}
+eventInfo := argo.EventInfo{Type: corev1.EventTypeNormal, Reason: reason}
user := session.Username(ctx)
if user == "" {
user = "Unknown user"
@@ -2361,7 +2358,7 @@ func (s *Server) logResourceEvent(res *appv1.ResourceNode, ctx context.Context,
}
func (s *Server) ListResourceActions(ctx context.Context, q *application.ApplicationResourceRequest) (*application.ResourceActionsListResponse, error) {
-obj, _, _, _, err := s.getUnstructuredLiveResourceOrApp(ctx, rbacpolicy.ActionGet, q)
+obj, _, _, _, err := s.getUnstructuredLiveResourceOrApp(ctx, rbac.ActionGet, q)
if err != nil {
return nil, err
}
@@ -2388,7 +2385,7 @@ func (s *Server) getUnstructuredLiveResourceOrApp(ctx context.Context, rbacReque
if err != nil {
return nil, nil, nil, nil, err
}
-if err = s.enf.EnforceErr(ctx.Value("claims"), rbacpolicy.ResourceApplications, rbacRequest, app.RBACName(s.ns)); err != nil {
+if err = s.enf.EnforceErr(ctx.Value("claims"), rbac.ResourceApplications, rbacRequest, app.RBACName(s.ns)); err != nil {
return nil, nil, nil, nil, err
}
config, err = s.getApplicationClusterConfig(ctx, app)
@@ -2439,7 +2436,7 @@ func (s *Server) RunResourceAction(ctx context.Context, q *application.ResourceA
Group: q.Group,
Project: q.Project,
}
-actionRequest := fmt.Sprintf("%s/%s/%s/%s", rbacpolicy.ActionAction, q.GetGroup(), q.GetKind(), q.GetAction())
+actionRequest := fmt.Sprintf("%s/%s/%s/%s", rbac.ActionAction, q.GetGroup(), q.GetKind(), q.GetAction())
liveObj, res, a, config, err := s.getUnstructuredLiveResourceOrApp(ctx, actionRequest, resourceRequest)
if err != nil {
return nil, err
@@ -2483,6 +2480,12 @@ func (s *Server) RunResourceAction(ctx context.Context, q *application.ResourceA
return nil, err
}
+// Validate the destination, which has the side effect of populating the cluster name and server URL in the app
+// spec. This ensures that calls to `verifyResourcePermitted` below act on a fully-populated app spec.
+if err = argo.ValidateDestination(ctx, &app.Spec.Destination, s.db); err != nil {
+return nil, fmt.Errorf("error validating destination cluster: %w", err)
+}
// First, make sure all the returned resources are permitted, for each operation.
// Also perform create with dry-runs for all create-operation resources.
// This is performed separately to reduce the risk of only some of the resources being successfully created later.
@@ -2494,8 +2497,7 @@ func (s *Server) RunResourceAction(ctx context.Context, q *application.ResourceA
if err != nil {
return nil, err
}
-switch impactedResource.K8SOperation {
-case lua.CreateOperation:
+if impactedResource.K8SOperation == lua.CreateOperation {
createOptions := metav1.CreateOptions{DryRun: []string{"All"}}
_, err := s.kubectl.CreateResource(ctx, config, newObj.GroupVersionKind(), newObj.GetName(), newObj.GetNamespace(), newObj, createOptions)
if err != nil {
@@ -2563,7 +2565,7 @@ func (s *Server) patchResource(ctx context.Context, config *rest.Config, liveObj
if statusPatch != nil {
_, err = s.kubectl.PatchResource(ctx, config, newObj.GroupVersionKind(), newObj.GetName(), newObj.GetNamespace(), types.MergePatchType, diffBytes, "status")
if err != nil {
-if !apierr.IsNotFound(err) {
+if !apierrors.IsNotFound(err) {
return nil, fmt.Errorf("error patching resource: %w", err)
}
// K8s API server returns 404 NotFound when the CRD does not support the status subresource
@@ -2612,7 +2614,7 @@ func (s *Server) createResource(ctx context.Context, config *rest.Config, newObj
// splitStatusPatch splits a patch into two: one for a non-status patch, and the status-only patch.
// Returns nil for either if the patch doesn't have modifications to non-status, or status, respectively.
func splitStatusPatch(patch []byte) ([]byte, []byte, error) {
var obj map[string]interface{}
var obj map[string]any
err := json.Unmarshal(patch, &obj)
if err != nil {
return nil, nil, err
@@ -2620,7 +2622,7 @@ func splitStatusPatch(patch []byte) ([]byte, []byte, error) {
var nonStatusPatch, statusPatch []byte
if statusVal, ok := obj["status"]; ok {
// calculate the status-only patch
statusObj := map[string]interface{}{
statusObj := map[string]any{
"status": statusVal,
}
statusPatch, err = json.Marshal(statusObj)
@@ -2643,7 +2645,7 @@ func splitStatusPatch(patch []byte) ([]byte, []byte, error) {
}
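The `splitStatusPatch` helper above separates a merge patch into a non-status patch and a status-only patch, returning nil for whichever half has no modifications. A minimal, self-contained sketch of the same idea (simplified for illustration; not the Argo CD implementation):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// splitStatus splits a JSON merge patch into a non-status patch and a
// status-only patch. Either result is nil when that half is empty.
func splitStatus(patch []byte) (nonStatus, status []byte, err error) {
	var obj map[string]any
	if err := json.Unmarshal(patch, &obj); err != nil {
		return nil, nil, err
	}
	if statusVal, ok := obj["status"]; ok {
		// Re-wrap the status value so the patch stays rooted at "status".
		status, err = json.Marshal(map[string]any{"status": statusVal})
		if err != nil {
			return nil, nil, err
		}
		delete(obj, "status")
	}
	if len(obj) > 0 {
		nonStatus, err = json.Marshal(obj)
		if err != nil {
			return nil, nil, err
		}
	}
	return nonStatus, status, nil
}

func main() {
	patch := []byte(`{"spec":{"replicas":2},"status":{"ready":true}}`)
	nonStatus, status, _ := splitStatus(patch)
	fmt.Println(string(nonStatus)) // {"spec":{"replicas":2}}
	fmt.Println(string(status))    // {"status":{"ready":true}}
}
```

Splitting matters because the status subresource, when a CRD enables it, must be patched separately from the rest of the object.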
func (s *Server) GetApplicationSyncWindows(ctx context.Context, q *application.ApplicationSyncWindowsQuery) (*application.ApplicationSyncWindowsResponse, error) {
a, proj, err := s.getApplicationEnforceRBACClient(ctx, rbacpolicy.ActionGet, q.GetProject(), q.GetAppNamespace(), q.GetName(), "")
a, proj, err := s.getApplicationEnforceRBACClient(ctx, rbac.ActionGet, q.GetProject(), q.GetAppNamespace(), q.GetName(), "")
if err != nil {
return nil, err
}
@@ -2670,7 +2672,7 @@ func (s *Server) GetApplicationSyncWindows(ctx context.Context, q *application.A
func (s *Server) inferResourcesStatusHealth(app *appv1.Application) {
if app.Status.ResourceHealthSource == appv1.ResourceHealthLocationAppTree {
tree := &appv1.ApplicationTree{}
if err := s.cache.GetAppResourcesTree(app.Name, tree); err == nil {
if err := s.cache.GetAppResourcesTree(app.InstanceName(s.ns), tree); err == nil {
healthByKey := map[kube.ResourceKey]*appv1.HealthStatus{}
for _, node := range tree.Nodes {
healthByKey[kube.NewResourceKey(node.Group, node.Kind, node.Namespace, node.Name)] = node.Health
@@ -2718,9 +2720,8 @@ func getPropagationPolicyFinalizer(policy string) string {
func (s *Server) appNamespaceOrDefault(appNs string) string {
if appNs == "" {
return s.ns
} else {
return appNs
}
return appNs
}
func (s *Server) isNamespaceEnabled(namespace string) bool {


@@ -12,8 +12,6 @@ import (
"testing"
"time"
"k8s.io/apimachinery/pkg/labels"
"github.com/argoproj/gitops-engine/pkg/health"
synccommon "github.com/argoproj/gitops-engine/pkg/sync/common"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
@@ -32,6 +30,7 @@ import (
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/watch"
@@ -3462,3 +3461,171 @@ func Test_DeepCopyInformers(t *testing.T) {
assert.NotSame(t, p, &spList[i])
}
}
func Test_RunResourceActionDestinationInference(t *testing.T) {
t.Parallel()
deployment := k8sappsv1.Deployment{
TypeMeta: metav1.TypeMeta{
APIVersion: "apps/v1",
Kind: "Deployment",
},
ObjectMeta: metav1.ObjectMeta{
Name: "test",
Namespace: "default",
},
}
testServer := newTestApp(func(app *appsv1.Application) {
app.ObjectMeta.Namespace = "default"
app.Name = "test-server"
app.Status.Resources = []appsv1.ResourceStatus{
{
Group: deployment.GroupVersionKind().Group,
Kind: deployment.GroupVersionKind().Kind,
Version: deployment.GroupVersionKind().Version,
Name: deployment.Name,
Namespace: deployment.Namespace,
Status: "Synced",
},
}
app.Status.History = []appsv1.RevisionHistory{
{
ID: 0,
Source: appsv1.ApplicationSource{
TargetRevision: "something-old",
},
},
}
app.Spec.Destination = appsv1.ApplicationDestination{
Server: "https://cluster-api.example.com",
Namespace: "default",
Name: "",
}
})
testName := newTestApp(func(app *appsv1.Application) {
app.ObjectMeta.Namespace = "default"
app.Name = "test-name"
app.Status.Resources = []appsv1.ResourceStatus{
{
Group: deployment.GroupVersionKind().Group,
Kind: deployment.GroupVersionKind().Kind,
Version: deployment.GroupVersionKind().Version,
Name: deployment.Name,
Namespace: deployment.Namespace,
Status: "Synced",
},
}
app.Status.History = []appsv1.RevisionHistory{
{
ID: 0,
Source: appsv1.ApplicationSource{
TargetRevision: "something-old",
},
},
}
app.Spec.Destination = appsv1.ApplicationDestination{
Server: "",
Namespace: "default",
Name: "fake-cluster",
}
})
testBoth := newTestApp(func(app *appsv1.Application) {
app.ObjectMeta.Namespace = "default"
app.Name = "test-both"
app.Status.Resources = []appsv1.ResourceStatus{
{
Group: deployment.GroupVersionKind().Group,
Kind: deployment.GroupVersionKind().Kind,
Version: deployment.GroupVersionKind().Version,
Name: deployment.Name,
Namespace: deployment.Namespace,
Status: "Synced",
},
}
app.Status.History = []appsv1.RevisionHistory{
{
ID: 0,
Source: appsv1.ApplicationSource{
TargetRevision: "something-old",
},
},
}
app.Spec.Destination = appsv1.ApplicationDestination{
Server: "https://cluster-api.example.com",
Namespace: "default",
Name: "fake-cluster",
}
})
testDeployment := kube.MustToUnstructured(&deployment)
server := newTestAppServer(t, testServer, testName, testBoth, testDeployment)
// nolint:staticcheck
adminCtx := context.WithValue(context.Background(), "claims", &jwt.MapClaims{"groups": []string{"admin"}})
type fields struct {
Server
}
type args struct {
q *application.ResourceActionRunRequest
}
tests := []struct {
name string
fields fields
args args
want *application.ApplicationResponse
wantErr assert.ErrorAssertionFunc
}{
{
name: "InferServerUrl", fields: fields{*server}, args: args{
q: &application.ResourceActionRunRequest{
Name: ptr.To(testName.Name),
Namespace: ptr.To(testName.Namespace),
ResourceName: ptr.To(deployment.Name),
Group: ptr.To(deployment.GroupVersionKind().Group),
Kind: ptr.To(deployment.GroupVersionKind().Kind),
Version: ptr.To(deployment.GroupVersionKind().Version),
Action: ptr.To("restart"),
},
},
want: &application.ApplicationResponse{}, wantErr: assert.NoError,
},
{
name: "InferName", fields: fields{*server}, args: args{
q: &application.ResourceActionRunRequest{
Name: ptr.To(testServer.Name),
Namespace: ptr.To(testServer.Namespace),
ResourceName: ptr.To(deployment.Name),
Group: ptr.To(deployment.GroupVersionKind().Group),
Kind: ptr.To(deployment.GroupVersionKind().Kind),
Version: ptr.To(deployment.GroupVersionKind().Version),
Action: ptr.To("restart"),
},
},
want: &application.ApplicationResponse{}, wantErr: assert.NoError,
},
{
name: "ErrorOnBoth", fields: fields{*server}, args: args{
q: &application.ResourceActionRunRequest{
Name: ptr.To(testBoth.Name),
Namespace: ptr.To(testBoth.Namespace),
ResourceName: ptr.To(deployment.Name),
Group: ptr.To(deployment.GroupVersionKind().Group),
Kind: ptr.To(deployment.GroupVersionKind().Kind),
Version: ptr.To(deployment.GroupVersionKind().Version),
Action: ptr.To("restart"),
},
},
want: nil, wantErr: func(t assert.TestingT, err error, i ...interface{}) bool {
return assert.ErrorContains(t, err, "application destination can't have both name and server defined")
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := tt.fields.Server.RunResourceAction(adminCtx, tt.args.q)
if !tt.wantErr(t, err, fmt.Sprintf("RunResourceAction(%v)", tt.args.q)) {
return
}
assert.Equalf(t, tt.want, got, "RunResourceAction(%v)", tt.args.q)
})
}
}


@@ -1,4 +1,4 @@
// Code generated by mockery v2.43.2. DO NOT EDIT.
// Code generated by mockery v2.53.4. DO NOT EDIT.
package mocks


@@ -8,8 +8,8 @@ import (
"github.com/argoproj/gitops-engine/pkg/utils/kube"
log "github.com/sirupsen/logrus"
v1 "k8s.io/api/core/v1"
apierr "k8s.io/apimachinery/pkg/api/errors"
corev1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/kubernetes/scheme"
@@ -21,7 +21,6 @@ import (
appv1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
applisters "github.com/argoproj/argo-cd/v2/pkg/client/listers/application/v1alpha1"
servercache "github.com/argoproj/argo-cd/v2/server/cache"
"github.com/argoproj/argo-cd/v2/server/rbacpolicy"
"github.com/argoproj/argo-cd/v2/util/argo"
"github.com/argoproj/argo-cd/v2/util/db"
"github.com/argoproj/argo-cd/v2/util/rbac"
@@ -153,12 +152,12 @@ func (s *terminalHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
appRBACName := security.RBACName(s.namespace, project, appNamespace, app)
if err := s.terminalOptions.Enf.EnforceErr(ctx.Value("claims"), rbacpolicy.ResourceApplications, rbacpolicy.ActionGet, appRBACName); err != nil {
if err := s.terminalOptions.Enf.EnforceErr(ctx.Value("claims"), rbac.ResourceApplications, rbac.ActionGet, appRBACName); err != nil {
http.Error(w, err.Error(), http.StatusUnauthorized)
return
}
if err := s.terminalOptions.Enf.EnforceErr(ctx.Value("claims"), rbacpolicy.ResourceExec, rbacpolicy.ActionCreate, appRBACName); err != nil {
if err := s.terminalOptions.Enf.EnforceErr(ctx.Value("claims"), rbac.ResourceExec, rbac.ActionCreate, appRBACName); err != nil {
http.Error(w, err.Error(), http.StatusUnauthorized)
return
}
@@ -170,7 +169,7 @@ func (s *terminalHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
a, err := s.appLister.Applications(ns).Get(app)
if err != nil {
if apierr.IsNotFound(err) {
if apierrors.IsNotFound(err) {
http.Error(w, "App not found", http.StatusNotFound)
return
}
@@ -216,7 +215,7 @@ func (s *terminalHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
return
}
if pod.Status.Phase != v1.PodRunning {
if pod.Status.Phase != corev1.PodRunning {
http.Error(w, "Pod not running", http.StatusBadRequest)
return
}
@@ -309,7 +308,7 @@ func startProcess(k8sClient kubernetes.Interface, cfg *rest.Config, namespace, p
Namespace(namespace).
SubResource("exec")
req.VersionedParams(&v1.PodExecOptions{
req.VersionedParams(&corev1.PodExecOptions{
Container: containerName,
Command: cmd,
Stdin: true,


@@ -9,8 +9,8 @@ import (
"time"
"github.com/argoproj/argo-cd/v2/common"
"github.com/argoproj/argo-cd/v2/server/rbacpolicy"
httputil "github.com/argoproj/argo-cd/v2/util/http"
"github.com/argoproj/argo-cd/v2/util/rbac"
util_session "github.com/argoproj/argo-cd/v2/util/session"
"github.com/gorilla/websocket"
@@ -26,7 +26,7 @@ const (
var upgrader = func() websocket.Upgrader {
upgrader := websocket.Upgrader{}
upgrader.HandshakeTimeout = time.Second * 2
upgrader.CheckOrigin = func(r *http.Request) bool {
upgrader.CheckOrigin = func(_ *http.Request) bool {
return true
}
return upgrader
@@ -139,7 +139,7 @@ func (t *terminalSession) validatePermissions(p []byte) (int, error) {
Operation: "stdout",
Data: "Permission denied",
})
if err := t.terminalOpts.Enf.EnforceErr(t.ctx.Value("claims"), rbacpolicy.ResourceApplications, rbacpolicy.ActionGet, t.appRBACName); err != nil {
if err := t.terminalOpts.Enf.EnforceErr(t.ctx.Value("claims"), rbac.ResourceApplications, rbac.ActionGet, t.appRBACName); err != nil {
err = t.wsConn.WriteMessage(websocket.TextMessage, permissionDeniedMessage)
if err != nil {
log.Errorf("permission denied message err: %v", err)
@@ -147,7 +147,7 @@ func (t *terminalSession) validatePermissions(p []byte) (int, error) {
return copy(p, EndOfTransmission), permissionDeniedErr
}
if err := t.terminalOpts.Enf.EnforceErr(t.ctx.Value("claims"), rbacpolicy.ResourceExec, rbacpolicy.ActionCreate, t.appRBACName); err != nil {
if err := t.terminalOpts.Enf.EnforceErr(t.ctx.Value("claims"), rbac.ResourceExec, rbac.ActionCreate, t.appRBACName); err != nil {
err = t.wsConn.WriteMessage(websocket.TextMessage, permissionDeniedMessage)
if err != nil {
log.Errorf("permission denied message err: %v", err)
@@ -178,7 +178,7 @@ func (t *terminalSession) performValidationsAndReconnect(p []byte) (int, error)
return 0, nil
}
// Read called in a loop from remotecommand as long as the process is running
// Read called in a loop from remote command as long as the process is running
func (t *terminalSession) Read(p []byte) (int, error) {
code, err := t.performValidationsAndReconnect(p)
if err != nil {
@@ -219,7 +219,7 @@ func (t *terminalSession) Ping() error {
return err
}
// Write called from remotecommand whenever there is any output
// Write called from remote command whenever there is any output
func (t *terminalSession) Write(p []byte) (int, error) {
msg, err := json.Marshal(TerminalMessage{
Operation: "stdout",


@@ -3,6 +3,7 @@ package applicationset
import (
"bytes"
"context"
"errors"
"fmt"
"reflect"
"sort"
@@ -14,8 +15,8 @@ import (
log "github.com/sirupsen/logrus"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
v1 "k8s.io/api/core/v1"
apierr "k8s.io/apimachinery/pkg/api/errors"
corev1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/client-go/dynamic"
@@ -33,7 +34,6 @@ import (
appclientset "github.com/argoproj/argo-cd/v2/pkg/client/clientset/versioned"
applisters "github.com/argoproj/argo-cd/v2/pkg/client/listers/application/v1alpha1"
repoapiclient "github.com/argoproj/argo-cd/v2/reposerver/apiclient"
"github.com/argoproj/argo-cd/v2/server/rbacpolicy"
"github.com/argoproj/argo-cd/v2/util/argo"
"github.com/argoproj/argo-cd/v2/util/collections"
"github.com/argoproj/argo-cd/v2/util/db"
@@ -126,7 +126,7 @@ func (s *Server) Get(ctx context.Context, q *applicationset.ApplicationSetGetQue
if err != nil {
return nil, fmt.Errorf("error getting ApplicationSet: %w", err)
}
if err = s.enf.EnforceErr(ctx.Value("claims"), rbacpolicy.ResourceApplicationSets, rbacpolicy.ActionGet, a.RBACName(s.ns)); err != nil {
if err = s.enf.EnforceErr(ctx.Value("claims"), rbac.ResourceApplicationSets, rbac.ActionGet, a.RBACName(s.ns)); err != nil {
return nil, err
}
@@ -159,7 +159,7 @@ func (s *Server) List(ctx context.Context, q *applicationset.ApplicationSetListQ
continue
}
if s.enf.Enforce(ctx.Value("claims"), rbacpolicy.ResourceApplicationSets, rbacpolicy.ActionGet, a.RBACName(s.ns)) {
if s.enf.Enforce(ctx.Value("claims"), rbac.ResourceApplicationSets, rbac.ActionGet, a.RBACName(s.ns)) {
newItems = append(newItems, *a)
}
}
@@ -184,7 +184,7 @@ func (s *Server) Create(ctx context.Context, q *applicationset.ApplicationSetCre
appset := q.GetApplicationset()
if appset == nil {
return nil, fmt.Errorf("error creating ApplicationSets: ApplicationSets is nil in request")
return nil, errors.New("error creating ApplicationSets: ApplicationSets is nil in request")
}
projectName, err := s.validateAppSet(appset)
@@ -203,7 +203,7 @@ func (s *Server) Create(ctx context.Context, q *applicationset.ApplicationSetCre
}
if q.GetDryRun() {
apps, err := s.generateApplicationSetApps(ctx, log.WithField("applicationset", appset.Name), *appset, namespace)
apps, err := s.generateApplicationSetApps(ctx, log.WithField("applicationset", appset.Name), *appset)
if err != nil {
return nil, fmt.Errorf("unable to generate Applications of ApplicationSet: %w", err)
}
@@ -229,7 +229,7 @@ func (s *Server) Create(ctx context.Context, q *applicationset.ApplicationSetCre
return created, nil
}
if !apierr.IsAlreadyExists(err) {
if !apierrors.IsAlreadyExists(err) {
return nil, fmt.Errorf("error creating ApplicationSet: %w", err)
}
// act idempotent if existing spec matches new spec
@@ -252,7 +252,7 @@ func (s *Server) Create(ctx context.Context, q *applicationset.ApplicationSetCre
if !q.Upsert {
return nil, status.Errorf(codes.InvalidArgument, "existing ApplicationSet spec is different, use upsert flag to force update")
}
if err = s.enf.EnforceErr(ctx.Value("claims"), rbacpolicy.ResourceApplicationSets, rbacpolicy.ActionUpdate, appset.RBACName(s.ns)); err != nil {
if err = s.enf.EnforceErr(ctx.Value("claims"), rbac.ResourceApplicationSets, rbac.ActionUpdate, appset.RBACName(s.ns)); err != nil {
return nil, err
}
updated, err := s.updateAppSet(existing, appset, ctx, true)
@@ -262,7 +262,7 @@ func (s *Server) Create(ctx context.Context, q *applicationset.ApplicationSetCre
return updated, nil
}
func (s *Server) generateApplicationSetApps(ctx context.Context, logEntry *log.Entry, appset v1alpha1.ApplicationSet, namespace string) ([]v1alpha1.Application, error) {
func (s *Server) generateApplicationSetApps(ctx context.Context, logEntry *log.Entry, appset v1alpha1.ApplicationSet) ([]v1alpha1.Application, error) {
argoCDDB := s.db
scmConfig := generators.NewSCMConfig(s.ScmRootCAPath, s.AllowedScmProviders, s.EnableScmProviders, github_app.NewAuthCredentials(argoCDDB.(db.RepoCredsDB)), true)
@@ -275,7 +275,7 @@ func (s *Server) generateApplicationSetApps(ctx context.Context, logEntry *log.E
return nil, fmt.Errorf("error creating ArgoCDService: %w", err)
}
appSetGenerators := generators.GetGenerators(ctx, s.client, s.k8sClient, namespace, argoCDService, s.dynamicClient, scmConfig)
appSetGenerators := generators.GetGenerators(ctx, s.client, s.k8sClient, s.ns, argoCDService, s.dynamicClient, scmConfig)
apps, _, err := appsettemplate.GenerateApplications(logEntry, appset, appSetGenerators, &appsetutils.Render{}, s.client)
if err != nil {
@@ -288,11 +288,11 @@ func (s *Server) updateAppSet(appset *v1alpha1.ApplicationSet, newAppset *v1alph
if appset != nil && appset.Spec.Template.Spec.Project != newAppset.Spec.Template.Spec.Project {
// When changing projects, caller must have applicationset create and update privileges in new project
// NOTE: the update check was already verified in the caller to this function
if err := s.enf.EnforceErr(ctx.Value("claims"), rbacpolicy.ResourceApplicationSets, rbacpolicy.ActionCreate, newAppset.RBACName(s.ns)); err != nil {
if err := s.enf.EnforceErr(ctx.Value("claims"), rbac.ResourceApplicationSets, rbac.ActionCreate, newAppset.RBACName(s.ns)); err != nil {
return nil, err
}
// They also need 'update' privileges in the old project
if err := s.enf.EnforceErr(ctx.Value("claims"), rbacpolicy.ResourceApplicationSets, rbacpolicy.ActionUpdate, appset.RBACName(s.ns)); err != nil {
if err := s.enf.EnforceErr(ctx.Value("claims"), rbac.ResourceApplicationSets, rbac.ActionUpdate, appset.RBACName(s.ns)); err != nil {
return nil, err
}
}
@@ -313,7 +313,7 @@ func (s *Server) updateAppSet(appset *v1alpha1.ApplicationSet, newAppset *v1alph
s.waitSync(res)
return res, nil
}
if !apierr.IsConflict(err) {
if !apierrors.IsConflict(err) {
return nil, err
}
@@ -333,7 +333,7 @@ func (s *Server) Delete(ctx context.Context, q *applicationset.ApplicationSetDel
return nil, fmt.Errorf("error getting ApplicationSets: %w", err)
}
if err := s.enf.EnforceErr(ctx.Value("claims"), rbacpolicy.ResourceApplicationSets, rbacpolicy.ActionDelete, appset.RBACName(s.ns)); err != nil {
if err := s.enf.EnforceErr(ctx.Value("claims"), rbac.ResourceApplicationSets, rbac.ActionDelete, appset.RBACName(s.ns)); err != nil {
return nil, err
}
@@ -359,7 +359,7 @@ func (s *Server) ResourceTree(ctx context.Context, q *applicationset.Application
if err != nil {
return nil, fmt.Errorf("error getting ApplicationSet: %w", err)
}
if err = s.enf.EnforceErr(ctx.Value("claims"), rbacpolicy.ResourceApplicationSets, rbacpolicy.ActionGet, a.RBACName(s.ns)); err != nil {
if err = s.enf.EnforceErr(ctx.Value("claims"), rbac.ResourceApplicationSets, rbac.ActionGet, a.RBACName(s.ns)); err != nil {
return nil, err
}
@@ -370,13 +370,17 @@ func (s *Server) Generate(ctx context.Context, q *applicationset.ApplicationSetG
appset := q.GetApplicationSet()
if appset == nil {
return nil, fmt.Errorf("error creating ApplicationSets: ApplicationSets is nil in request")
return nil, errors.New("error creating ApplicationSets: ApplicationSets is nil in request")
}
namespace := s.appsetNamespaceOrDefault(appset.Namespace)
// The RBAC check needs to be performed against the appset namespace
// However, when trying to generate params, the server namespace needs
// to be passed.
namespace := s.appsetNamespaceOrDefault(appset.Namespace)
if !s.isNamespaceEnabled(namespace) {
return nil, security.NamespaceNotPermittedError(namespace)
}
projectName, err := s.validateAppSet(appset)
if err != nil {
return nil, fmt.Errorf("error validating ApplicationSets: %w", err)
@@ -389,7 +393,16 @@ func (s *Server) Generate(ctx context.Context, q *applicationset.ApplicationSetG
logger := log.New()
logger.SetOutput(logs)
apps, err := s.generateApplicationSetApps(ctx, logger.WithField("applicationset", appset.Name), *appset, namespace)
// The server namespace will be used in the function
// since this is the exact namespace that is being used
// to generate parameters (especially for git generator).
//
// In case of the Git generator, if the namespace is set to the
// appset namespace, we'll look for a project in the appset
// namespace, which would lead to an error when generating params
// with the "appsets in any namespace" feature.
// See https://github.com/argoproj/argo-cd/issues/22942
apps, err := s.generateApplicationSetApps(ctx, logger.WithField("applicationset", appset.Name), *appset)
if err != nil {
return nil, fmt.Errorf("unable to generate Applications of ApplicationSet: %w\n%s", err, logs.String())
}
@@ -429,13 +442,13 @@ func (s *Server) buildApplicationSetTree(a *v1alpha1.ApplicationSet) (*v1alpha1.
func (s *Server) validateAppSet(appset *v1alpha1.ApplicationSet) (string, error) {
if appset == nil {
return "", fmt.Errorf("ApplicationSet cannot be validated for nil value")
return "", errors.New("ApplicationSet cannot be validated for nil value")
}
projectName := appset.Spec.Template.Spec.Project
if strings.Contains(projectName, "{{") {
return "", fmt.Errorf("the Argo CD API does not currently support creating ApplicationSets with templated `project` fields")
return "", errors.New("the Argo CD API does not currently support creating ApplicationSets with templated `project` fields")
}
if err := appsetutils.CheckInvalidGenerators(appset); err != nil {
@@ -446,13 +459,13 @@ func (s *Server) validateAppSet(appset *v1alpha1.ApplicationSet) (string, error)
}
func (s *Server) checkCreatePermissions(ctx context.Context, appset *v1alpha1.ApplicationSet, projectName string) error {
if err := s.enf.EnforceErr(ctx.Value("claims"), rbacpolicy.ResourceApplicationSets, rbacpolicy.ActionCreate, appset.RBACName(s.ns)); err != nil {
if err := s.enf.EnforceErr(ctx.Value("claims"), rbac.ResourceApplicationSets, rbac.ActionCreate, appset.RBACName(s.ns)); err != nil {
return err
}
_, err := s.appclientset.ArgoprojV1alpha1().AppProjects(s.ns).Get(ctx, projectName, metav1.GetOptions{})
if err != nil {
if apierr.IsNotFound(err) {
if apierrors.IsNotFound(err) {
return status.Errorf(codes.InvalidArgument, "ApplicationSet references project %s which does not exist", projectName)
}
return fmt.Errorf("error getting ApplicationSet's project %q: %w", projectName, err)
@@ -495,7 +508,7 @@ func (s *Server) waitSync(appset *v1alpha1.ApplicationSet) {
}
func (s *Server) logAppSetEvent(a *v1alpha1.ApplicationSet, ctx context.Context, reason string, action string) {
eventInfo := argo.EventInfo{Type: v1.EventTypeNormal, Reason: reason}
eventInfo := argo.EventInfo{Type: corev1.EventTypeNormal, Reason: reason}
user := session.Username(ctx)
if user == "" {
user = "Unknown user"
@@ -507,9 +520,8 @@ func (s *Server) logAppSetEvent(a *v1alpha1.ApplicationSet, ctx context.Context,
func (s *Server) appsetNamespaceOrDefault(appNs string) string {
if appNs == "" {
return s.ns
} else {
return appNs
}
return appNs
}
func (s *Server) isNamespaceEnabled(namespace string) bool {


@@ -5,6 +5,9 @@ import (
"sort"
"testing"
"sigs.k8s.io/controller-runtime/pkg/client"
cr_fake "sigs.k8s.io/controller-runtime/pkg/client/fake"
"github.com/argoproj/gitops-engine/pkg/health"
"github.com/argoproj/pkg/sync"
"github.com/stretchr/testify/assert"
@@ -52,26 +55,29 @@ func fakeCluster() *appsv1.Cluster {
}
// return an ApplicationServiceServer which returns fake data
func newTestAppSetServer(objects ...runtime.Object) *Server {
func newTestAppSetServer(t *testing.T, objects ...client.Object) *Server {
t.Helper()
f := func(enf *rbac.Enforcer) {
_ = enf.SetBuiltinPolicy(assets.BuiltinPolicyCSV)
enf.SetDefaultRole("role:admin")
}
scopedNamespaces := ""
return newTestAppSetServerWithEnforcerConfigure(f, scopedNamespaces, objects...)
return newTestAppSetServerWithEnforcerConfigure(t, f, scopedNamespaces, objects...)
}
// return an ApplicationServiceServer which returns fake data
func newTestNamespacedAppSetServer(objects ...runtime.Object) *Server {
func newTestNamespacedAppSetServer(t *testing.T, objects ...client.Object) *Server {
t.Helper()
f := func(enf *rbac.Enforcer) {
_ = enf.SetBuiltinPolicy(assets.BuiltinPolicyCSV)
enf.SetDefaultRole("role:admin")
}
scopedNamespaces := "argocd"
return newTestAppSetServerWithEnforcerConfigure(f, scopedNamespaces, objects...)
return newTestAppSetServerWithEnforcerConfigure(t, f, scopedNamespaces, objects...)
}
func newTestAppSetServerWithEnforcerConfigure(f func(*rbac.Enforcer), namespace string, objects ...runtime.Object) *Server {
func newTestAppSetServerWithEnforcerConfigure(t *testing.T, f func(*rbac.Enforcer), namespace string, objects ...client.Object) *Server {
t.Helper()
kubeclientset := fake.NewClientset(&v1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Namespace: testNamespace,
@@ -114,8 +120,12 @@ func newTestAppSetServerWithEnforcerConfigure(f func(*rbac.Enforcer), namespace
objects = append(objects, defaultProj, myProj)
fakeAppsClientset := apps.NewSimpleClientset(objects...)
factory := appinformer.NewSharedInformerFactoryWithOptions(fakeAppsClientset, 0, appinformer.WithNamespace(namespace), appinformer.WithTweakListOptions(func(options *metav1.ListOptions) {}))
runtimeObjects := make([]runtime.Object, len(objects))
for i := range objects {
runtimeObjects[i] = objects[i]
}
fakeAppsClientset := apps.NewSimpleClientset(runtimeObjects...)
factory := appinformer.NewSharedInformerFactoryWithOptions(fakeAppsClientset, 0, appinformer.WithNamespace(namespace), appinformer.WithTweakListOptions(func(_ *metav1.ListOptions) {}))
fakeProjLister := factory.Argoproj().V1alpha1().AppProjects().Lister().AppProjects(testNamespace)
enforcer := rbac.NewEnforcer(kubeclientset, testNamespace, common.ArgoCDRBACConfigMapName, nil)
@@ -139,6 +149,13 @@ func newTestAppSetServerWithEnforcerConfigure(f func(*rbac.Enforcer), namespace
panic("Timed out waiting for caches to sync")
}
scheme := runtime.NewScheme()
err = appsv1.AddToScheme(scheme)
require.NoError(t, err)
err = v1.AddToScheme(scheme)
require.NoError(t, err)
crClient := cr_fake.NewClientBuilder().WithScheme(scheme).WithObjects(objects...).Build()
projInformer := factory.Argoproj().V1alpha1().AppProjects().Informer()
go projInformer.Run(ctx.Done())
if !k8scache.WaitForCacheSync(ctx.Done(), projInformer.HasSynced) {
@@ -149,7 +166,7 @@ func newTestAppSetServerWithEnforcerConfigure(f func(*rbac.Enforcer), namespace
db,
kubeclientset,
nil,
nil,
crClient,
enforcer,
nil,
fakeAppsClientset,
@@ -274,17 +291,17 @@ func testListAppsetsWithLabels(t *testing.T, appsetQuery applicationset.Applicat
func TestListAppSetsInNamespaceWithLabels(t *testing.T) {
testNamespace := "test-namespace"
appSetServer := newTestAppSetServer(newTestAppSet(func(appset *appsv1.ApplicationSet) {
appSetServer := newTestAppSetServer(t, newTestAppSet(func(appset *appsv1.ApplicationSet) {
appset.Name = "AppSet1"
appset.ObjectMeta.Namespace = testNamespace
appset.Namespace = testNamespace
appset.SetLabels(map[string]string{"key1": "value1", "key2": "value1"})
}), newTestAppSet(func(appset *appsv1.ApplicationSet) {
appset.Name = "AppSet2"
appset.ObjectMeta.Namespace = testNamespace
appset.Namespace = testNamespace
appset.SetLabels(map[string]string{"key1": "value2"})
}), newTestAppSet(func(appset *appsv1.ApplicationSet) {
appset.Name = "AppSet3"
appset.ObjectMeta.Namespace = testNamespace
appset.Namespace = testNamespace
appset.SetLabels(map[string]string{"key1": "value3"})
}))
appSetServer.enabledNamespaces = []string{testNamespace}
@@ -294,7 +311,7 @@ func TestListAppSetsInNamespaceWithLabels(t *testing.T) {
}
func TestListAppSetsInDefaultNSWithLabels(t *testing.T) {
appSetServer := newTestAppSetServer(newTestAppSet(func(appset *appsv1.ApplicationSet) {
appSetServer := newTestAppSetServer(t, newTestAppSet(func(appset *appsv1.ApplicationSet) {
appset.Name = "AppSet1"
appset.SetLabels(map[string]string{"key1": "value1", "key2": "value1"})
}), newTestAppSet(func(appset *appsv1.ApplicationSet) {
@@ -314,30 +331,30 @@ func TestListAppSetsInDefaultNSWithLabels(t *testing.T) {
// default namespace must be used and not all the namespaces
func TestListAppSetsWithoutNamespace(t *testing.T) {
testNamespace := "test-namespace"
appSetServer := newTestNamespacedAppSetServer(newTestAppSet(func(appset *appsv1.ApplicationSet) {
appSetServer := newTestNamespacedAppSetServer(t, newTestAppSet(func(appset *appsv1.ApplicationSet) {
appset.Name = "AppSet1"
appset.ObjectMeta.Namespace = testNamespace
appset.Namespace = testNamespace
appset.SetLabels(map[string]string{"key1": "value1", "key2": "value1"})
}), newTestAppSet(func(appset *appsv1.ApplicationSet) {
appset.Name = "AppSet2"
appset.ObjectMeta.Namespace = testNamespace
appset.Namespace = testNamespace
appset.SetLabels(map[string]string{"key1": "value2"})
}), newTestAppSet(func(appset *appsv1.ApplicationSet) {
appset.Name = "AppSet3"
appset.ObjectMeta.Namespace = testNamespace
appset.Namespace = testNamespace
appset.SetLabels(map[string]string{"key1": "value3"})
}))
appSetServer.enabledNamespaces = []string{testNamespace}
appsetQuery := applicationset.ApplicationSetListQuery{}
res, err := appSetServer.List(context.Background(), &appsetQuery)
res, err := appSetServer.List(t.Context(), &appsetQuery)
require.NoError(t, err)
assert.Empty(t, res.Items)
}
func TestCreateAppSet(t *testing.T) {
testAppSet := newTestAppSet()
appServer := newTestAppSetServer()
appServer := newTestAppSetServer(t)
testAppSet.Spec.Generators = []appsv1.ApplicationSetGenerator{
{
List: &appsv1.ListGenerator{},
@@ -346,36 +363,36 @@ func TestCreateAppSet(t *testing.T) {
createReq := applicationset.ApplicationSetCreateRequest{
Applicationset: testAppSet,
}
_, err := appServer.Create(context.Background(), &createReq)
_, err := appServer.Create(t.Context(), &createReq)
require.NoError(t, err)
}
func TestCreateAppSetTemplatedProject(t *testing.T) {
testAppSet := newTestAppSet()
appServer := newTestAppSetServer()
appServer := newTestAppSetServer(t)
testAppSet.Spec.Template.Spec.Project = "{{ .project }}"
createReq := applicationset.ApplicationSetCreateRequest{
Applicationset: testAppSet,
}
_, err := appServer.Create(context.Background(), &createReq)
_, err := appServer.Create(t.Context(), &createReq)
assert.EqualError(t, err, "error validating ApplicationSets: the Argo CD API does not currently support creating ApplicationSets with templated `project` fields")
}
func TestCreateAppSetWrongNamespace(t *testing.T) {
testAppSet := newTestAppSet()
appServer := newTestAppSetServer()
testAppSet.ObjectMeta.Namespace = "NOT-ALLOWED"
appServer := newTestAppSetServer(t)
testAppSet.Namespace = "NOT-ALLOWED"
createReq := applicationset.ApplicationSetCreateRequest{
Applicationset: testAppSet,
}
_, err := appServer.Create(context.Background(), &createReq)
_, err := appServer.Create(t.Context(), &createReq)
assert.EqualError(t, err, "namespace 'NOT-ALLOWED' is not permitted")
}
func TestCreateAppSetDryRun(t *testing.T) {
testAppSet := newTestAppSet()
appServer := newTestAppSetServer()
appServer := newTestAppSetServer(t)
testAppSet.Spec.Template.Name = "{{name}}"
testAppSet.Spec.Generators = []appsv1.ApplicationSetGenerator{
{
@@ -388,7 +405,7 @@ func TestCreateAppSetDryRun(t *testing.T) {
Applicationset: testAppSet,
DryRun: true,
}
result, err := appServer.Create(context.Background(), &createReq)
result, err := appServer.Create(t.Context(), &createReq)
require.NoError(t, err)
assert.Len(t, result.Status.Resources, 2)
@@ -406,7 +423,7 @@ func TestCreateAppSetDryRun(t *testing.T) {
func TestCreateAppSetDryRunWithDuplicate(t *testing.T) {
testAppSet := newTestAppSet()
appServer := newTestAppSetServer()
appServer := newTestAppSetServer(t)
testAppSet.Spec.Template.Name = "{{name}}"
testAppSet.Spec.Generators = []appsv1.ApplicationSetGenerator{
{
@@ -419,7 +436,7 @@ func TestCreateAppSetDryRunWithDuplicate(t *testing.T) {
Applicationset: testAppSet,
DryRun: true,
}
- result, err := appServer.Create(context.Background(), &createReq)
+ result, err := appServer.Create(t.Context(), &createReq)
require.NoError(t, err)
assert.Len(t, result.Status.Resources, 1)
@@ -441,7 +458,7 @@ func TestGetAppSet(t *testing.T) {
})
t.Run("Get in default namespace", func(t *testing.T) {
- appSetServer := newTestAppSetServer(appSet1, appSet2, appSet3)
+ appSetServer := newTestAppSetServer(t, appSet1, appSet2, appSet3)
appsetQuery := applicationset.ApplicationSetGetQuery{Name: "AppSet1"}
@@ -451,7 +468,7 @@ func TestGetAppSet(t *testing.T) {
})
t.Run("Get in named namespace", func(t *testing.T) {
- appSetServer := newTestAppSetServer(appSet1, appSet2, appSet3)
+ appSetServer := newTestAppSetServer(t, appSet1, appSet2, appSet3)
appsetQuery := applicationset.ApplicationSetGetQuery{Name: "AppSet1", AppsetNamespace: testNamespace}
@@ -461,7 +478,7 @@ func TestGetAppSet(t *testing.T) {
})
t.Run("Get in not allowed namespace", func(t *testing.T) {
- appSetServer := newTestAppSetServer(appSet1, appSet2, appSet3)
+ appSetServer := newTestAppSetServer(t, appSet1, appSet2, appSet3)
appsetQuery := applicationset.ApplicationSetGetQuery{Name: "AppSet1", AppsetNamespace: "NOT-ALLOWED"}
@@ -484,7 +501,7 @@ func TestDeleteAppSet(t *testing.T) {
})
t.Run("Delete in default namespace", func(t *testing.T) {
- appSetServer := newTestAppSetServer(appSet1, appSet2, appSet3)
+ appSetServer := newTestAppSetServer(t, appSet1, appSet2, appSet3)
appsetQuery := applicationset.ApplicationSetDeleteRequest{Name: "AppSet1"}
@@ -494,7 +511,7 @@ func TestDeleteAppSet(t *testing.T) {
})
t.Run("Delete in named namespace", func(t *testing.T) {
- appSetServer := newTestAppSetServer(appSet1, appSet2, appSet3)
+ appSetServer := newTestAppSetServer(t, appSet1, appSet2, appSet3)
appsetQuery := applicationset.ApplicationSetDeleteRequest{Name: "AppSet1", AppsetNamespace: testNamespace}
@@ -526,7 +543,7 @@ func TestUpdateAppSet(t *testing.T) {
})
t.Run("Update merge", func(t *testing.T) {
- appServer := newTestAppSetServer(appSet)
+ appServer := newTestAppSetServer(t, appSet)
updated, err := appServer.updateAppSet(appSet, newAppSet, context.Background(), true)
@@ -542,7 +559,7 @@ func TestUpdateAppSet(t *testing.T) {
})
t.Run("Update no merge", func(t *testing.T) {
- appServer := newTestAppSetServer(appSet)
+ appServer := newTestAppSetServer(t, appSet)
updated, err := appServer.updateAppSet(appSet, newAppSet, context.Background(), false)
@@ -611,7 +628,7 @@ func TestResourceTree(t *testing.T) {
}
t.Run("ResourceTree in default namespace", func(t *testing.T) {
- appSetServer := newTestAppSetServer(appSet1, appSet2, appSet3)
+ appSetServer := newTestAppSetServer(t, appSet1, appSet2, appSet3)
appsetQuery := applicationset.ApplicationSetTreeQuery{Name: "AppSet1"}
@@ -621,7 +638,7 @@ func TestResourceTree(t *testing.T) {
})
t.Run("ResourceTree in named namespace", func(t *testing.T) {
- appSetServer := newTestAppSetServer(appSet1, appSet2, appSet3)
+ appSetServer := newTestAppSetServer(t, appSet1, appSet2, appSet3)
appsetQuery := applicationset.ApplicationSetTreeQuery{Name: "AppSet1", AppsetNamespace: testNamespace}
@@ -631,7 +648,7 @@ func TestResourceTree(t *testing.T) {
})
t.Run("ResourceTree in not allowed namespace", func(t *testing.T) {
- appSetServer := newTestAppSetServer(appSet1, appSet2, appSet3)
+ appSetServer := newTestAppSetServer(t, appSet1, appSet2, appSet3)
appsetQuery := applicationset.ApplicationSetTreeQuery{Name: "AppSet1", AppsetNamespace: "NOT-ALLOWED"}
@@ -639,3 +656,54 @@ func TestResourceTree(t *testing.T) {
assert.EqualError(t, err, "namespace 'NOT-ALLOWED' is not permitted")
})
}
+ func TestAppSet_Generate_Cluster(t *testing.T) {
+ appSet1 := newTestAppSet(func(appset *appsv1.ApplicationSet) {
+ appset.Name = "AppSet1"
+ appset.Spec.Template.Name = "{{name}}"
+ appset.Spec.Generators = []appsv1.ApplicationSetGenerator{
+ {
+ Clusters: &appsv1.ClusterGenerator{},
+ },
+ }
+ })
+ t.Run("Generate in default namespace", func(t *testing.T) {
+ appSetServer := newTestAppSetServer(t, appSet1)
+ appsetQuery := applicationset.ApplicationSetGenerateRequest{
+ ApplicationSet: appSet1,
+ }
+ res, err := appSetServer.Generate(t.Context(), &appsetQuery)
+ require.NoError(t, err)
+ require.Len(t, res.Applications, 2)
+ assert.Equal(t, "fake-cluster", res.Applications[0].Name)
+ assert.Equal(t, "in-cluster", res.Applications[1].Name)
+ })
+ t.Run("Generate in different namespace", func(t *testing.T) {
+ appSetServer := newTestAppSetServer(t, appSet1)
+ appSet1Ns := appSet1.DeepCopy()
+ appSet1Ns.Namespace = "external-namespace"
+ appsetQuery := applicationset.ApplicationSetGenerateRequest{ApplicationSet: appSet1Ns}
+ res, err := appSetServer.Generate(t.Context(), &appsetQuery)
+ require.NoError(t, err)
+ require.Len(t, res.Applications, 2)
+ assert.Equal(t, "fake-cluster", res.Applications[0].Name)
+ assert.Equal(t, "in-cluster", res.Applications[1].Name)
+ })
+ t.Run("Generate in not allowed namespace", func(t *testing.T) {
+ appSetServer := newTestAppSetServer(t, appSet1)
+ appSet1Ns := appSet1.DeepCopy()
+ appSet1Ns.Namespace = "NOT-ALLOWED"
+ appsetQuery := applicationset.ApplicationSetGenerateRequest{ApplicationSet: appSet1Ns}
+ _, err := appSetServer.Generate(t.Context(), &appsetQuery)
+ assert.EqualError(t, err, "namespace 'NOT-ALLOWED' is not permitted")
+ })
+ }


@@ -6,7 +6,6 @@ import (
certificatepkg "github.com/argoproj/argo-cd/v2/pkg/apiclient/certificate"
appsv1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/v2/reposerver/apiclient"
"github.com/argoproj/argo-cd/v2/server/rbacpolicy"
"github.com/argoproj/argo-cd/v2/util/db"
"github.com/argoproj/argo-cd/v2/util/rbac"
)
@@ -37,7 +36,7 @@ func NewServer(
// Returns a list of configured certificates that match the query
func (s *Server) ListCertificates(ctx context.Context, q *certificatepkg.RepositoryCertificateQuery) (*appsv1.RepositoryCertificateList, error) {
if err := s.enf.EnforceErr(ctx.Value("claims"), rbacpolicy.ResourceCertificates, rbacpolicy.ActionGet, ""); err != nil {
if err := s.enf.EnforceErr(ctx.Value("claims"), rbac.ResourceCertificates, rbac.ActionGet, ""); err != nil {
return nil, err
}
certList, err := s.db.ListRepoCertificates(ctx, &db.CertificateListSelector{
@@ -53,7 +52,7 @@ func (s *Server) ListCertificates(ctx context.Context, q *certificatepkg.Reposit
// Batch creates certificates for verifying repositories
func (s *Server) CreateCertificate(ctx context.Context, q *certificatepkg.RepositoryCertificateCreateRequest) (*appsv1.RepositoryCertificateList, error) {
if err := s.enf.EnforceErr(ctx.Value("claims"), rbacpolicy.ResourceCertificates, rbacpolicy.ActionCreate, ""); err != nil {
if err := s.enf.EnforceErr(ctx.Value("claims"), rbac.ResourceCertificates, rbac.ActionCreate, ""); err != nil {
return nil, err
}
certs, err := s.db.CreateRepoCertificate(ctx, q.Certificates, q.Upsert)
@@ -66,7 +65,7 @@ func (s *Server) CreateCertificate(ctx context.Context, q *certificatepkg.Reposi
// Batch deletes a list of certificates that match the query
func (s *Server) DeleteCertificate(ctx context.Context, q *certificatepkg.RepositoryCertificateQuery) (*appsv1.RepositoryCertificateList, error) {
if err := s.enf.EnforceErr(ctx.Value("claims"), rbacpolicy.ResourceCertificates, rbacpolicy.ActionDelete, ""); err != nil {
if err := s.enf.EnforceErr(ctx.Value("claims"), rbac.ResourceCertificates, rbac.ActionDelete, ""); err != nil {
return nil, err
}
certs, err := s.db.RemoveRepoCertificates(ctx, &db.CertificateListSelector{


@@ -10,7 +10,7 @@ import (
log "github.com/sirupsen/logrus"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/sets"
"k8s.io/client-go/kubernetes"
@@ -18,7 +18,6 @@ import (
"github.com/argoproj/argo-cd/v2/pkg/apiclient/cluster"
appv1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
servercache "github.com/argoproj/argo-cd/v2/server/cache"
"github.com/argoproj/argo-cd/v2/server/rbacpolicy"
"github.com/argoproj/argo-cd/v2/util/argo"
"github.com/argoproj/argo-cd/v2/util/clusterauth"
"github.com/argoproj/argo-cd/v2/util/db"
@@ -72,7 +71,7 @@ func (s *Server) List(ctx context.Context, q *cluster.ClusterQuery) (*appv1.Clus
items := make([]appv1.Cluster, 0)
for _, clust := range filteredItems {
if s.enf.Enforce(ctx.Value("claims"), rbacpolicy.ResourceClusters, rbacpolicy.ActionGet, CreateClusterRBACObject(clust.Project, clust.Server)) {
if s.enf.Enforce(ctx.Value("claims"), rbac.ResourceClusters, rbac.ActionGet, CreateClusterRBACObject(clust.Project, clust.Server)) {
items = append(items, clust)
}
}
@@ -143,7 +142,7 @@ func filterClustersByServer(clusters []appv1.Cluster, server string) []appv1.Clu
// Create creates a cluster
func (s *Server) Create(ctx context.Context, q *cluster.ClusterCreateRequest) (*appv1.Cluster, error) {
if err := s.enf.EnforceErr(ctx.Value("claims"), rbacpolicy.ResourceClusters, rbacpolicy.ActionCreate, CreateClusterRBACObject(q.Cluster.Project, q.Cluster.Server)); err != nil {
if err := s.enf.EnforceErr(ctx.Value("claims"), rbac.ResourceClusters, rbac.ActionCreate, CreateClusterRBACObject(q.Cluster.Project, q.Cluster.Server)); err != nil {
return nil, fmt.Errorf("permission denied while creating cluster: %w", err)
}
c := q.Cluster
@@ -182,7 +181,7 @@ func (s *Server) Create(ctx context.Context, q *cluster.ClusterCreateRequest) (*
ServerVersion: serverVersion,
ConnectionState: appv1.ConnectionState{
Status: appv1.ConnectionStatusSuccessful,
- ModifiedAt: &v1.Time{Time: time.Now()},
+ ModifiedAt: &metav1.Time{Time: time.Now()},
},
})
if err != nil {
@@ -193,7 +192,7 @@ func (s *Server) Create(ctx context.Context, q *cluster.ClusterCreateRequest) (*
// Get returns a cluster from a query
func (s *Server) Get(ctx context.Context, q *cluster.ClusterQuery) (*appv1.Cluster, error) {
- c, err := s.getClusterAndVerifyAccess(ctx, q, rbacpolicy.ActionGet)
+ c, err := s.getClusterAndVerifyAccess(ctx, q, rbac.ActionGet)
if err != nil {
return nil, fmt.Errorf("error verifying access to update cluster: %w", err)
}
@@ -216,7 +215,7 @@ func (s *Server) getClusterAndVerifyAccess(ctx context.Context, q *cluster.Clust
}
// verify that user can do the specified action inside project where cluster is located
if !s.enf.Enforce(ctx.Value("claims"), rbacpolicy.ResourceClusters, action, CreateClusterRBACObject(c.Project, c.Server)) {
if !s.enf.Enforce(ctx.Value("claims"), rbac.ResourceClusters, action, CreateClusterRBACObject(c.Project, c.Server)) {
log.WithField("cluster", q.Server).Warnf("encountered permissions issue while processing request: %v", err)
return nil, common.PermissionDeniedAPIError
}
@@ -228,15 +227,16 @@ func (s *Server) getCluster(ctx context.Context, q *cluster.ClusterQuery) (*appv
if q.Id != nil {
q.Server = ""
q.Name = ""
if q.Id.Type == "name" {
switch q.Id.Type {
case "name":
q.Name = q.Id.Value
} else if q.Id.Type == "name_escaped" {
case "name_escaped":
nameUnescaped, err := url.QueryUnescape(q.Id.Value)
if err != nil {
return nil, fmt.Errorf("failed to unescape cluster name: %w", err)
}
q.Name = nameUnescaped
- } else {
+ default:
q.Server = q.Id.Value
}
}
@@ -299,14 +299,14 @@ func (s *Server) Update(ctx context.Context, q *cluster.ClusterUpdateRequest) (*
Server: q.Cluster.Server,
Name: q.Cluster.Name,
Id: q.Id,
- }, rbacpolicy.ActionUpdate)
+ }, rbac.ActionUpdate)
if err != nil {
return nil, fmt.Errorf("failed to verify access for updating cluster: %w", err)
}
if len(q.UpdatedFields) == 0 || sets.NewString(q.UpdatedFields...).Has("project") {
// verify that user can do update inside project where cluster will be located
if !s.enf.Enforce(ctx.Value("claims"), rbacpolicy.ResourceClusters, rbacpolicy.ActionUpdate, CreateClusterRBACObject(q.Cluster.Project, c.Server)) {
if !s.enf.Enforce(ctx.Value("claims"), rbac.ResourceClusters, rbac.ActionUpdate, CreateClusterRBACObject(q.Cluster.Project, c.Server)) {
return nil, common.PermissionDeniedAPIError
}
}
@@ -338,7 +338,7 @@ func (s *Server) Update(ctx context.Context, q *cluster.ClusterUpdateRequest) (*
ServerVersion: serverVersion,
ConnectionState: appv1.ConnectionState{
Status: appv1.ConnectionStatusSuccessful,
- ModifiedAt: &v1.Time{Time: time.Now()},
+ ModifiedAt: &metav1.Time{Time: time.Now()},
},
})
if err != nil {
@@ -375,14 +375,11 @@ func (s *Server) Delete(ctx context.Context, q *cluster.ClusterQuery) (*cluster.
}
func enforceAndDelete(s *Server, ctx context.Context, server, project string) error {
if err := s.enf.EnforceErr(ctx.Value("claims"), rbacpolicy.ResourceClusters, rbacpolicy.ActionDelete, CreateClusterRBACObject(project, server)); err != nil {
if err := s.enf.EnforceErr(ctx.Value("claims"), rbac.ResourceClusters, rbac.ActionDelete, CreateClusterRBACObject(project, server)); err != nil {
log.WithField("cluster", server).Warnf("encountered permissions issue while processing request: %v", err)
return common.PermissionDeniedAPIError
}
- if err := s.db.DeleteCluster(ctx, server); err != nil {
- return err
- }
- return nil
+ return s.db.DeleteCluster(ctx, server)
}
// RotateAuth rotates the bearer token used for a cluster
@@ -400,13 +397,13 @@ func (s *Server) RotateAuth(ctx context.Context, q *cluster.ClusterQuery) (*clus
return nil, common.PermissionDeniedAPIError
}
for _, server := range servers {
if err := s.enf.EnforceErr(ctx.Value("claims"), rbacpolicy.ResourceClusters, rbacpolicy.ActionUpdate, CreateClusterRBACObject(clust.Project, server)); err != nil {
if err := s.enf.EnforceErr(ctx.Value("claims"), rbac.ResourceClusters, rbac.ActionUpdate, CreateClusterRBACObject(clust.Project, server)); err != nil {
log.WithField("cluster", server).Warnf("encountered permissions issue while processing request: %v", err)
return nil, common.PermissionDeniedAPIError
}
}
} else {
if err := s.enf.EnforceErr(ctx.Value("claims"), rbacpolicy.ResourceClusters, rbacpolicy.ActionUpdate, CreateClusterRBACObject(clust.Project, q.Server)); err != nil {
if err := s.enf.EnforceErr(ctx.Value("claims"), rbac.ResourceClusters, rbac.ActionUpdate, CreateClusterRBACObject(clust.Project, q.Server)); err != nil {
log.WithField("cluster", q.Server).Warnf("encountered permissions issue while processing request: %v", err)
return nil, common.PermissionDeniedAPIError
}
@@ -458,7 +455,7 @@ func (s *Server) RotateAuth(ctx context.Context, q *cluster.ClusterQuery) (*clus
ServerVersion: serverVersion,
ConnectionState: appv1.ConnectionState{
Status: appv1.ConnectionStatusSuccessful,
- ModifiedAt: &v1.Time{Time: time.Now()},
+ ModifiedAt: &metav1.Time{Time: time.Now()},
},
})
if err != nil {
@@ -474,19 +471,8 @@ func (s *Server) RotateAuth(ctx context.Context, q *cluster.ClusterQuery) (*clus
}
func (s *Server) toAPIResponse(clust *appv1.Cluster) *appv1.Cluster {
clust = clust.Sanitized()
_ = s.cache.GetClusterInfo(clust.Server, &clust.Info)
clust.Config.Password = ""
clust.Config.BearerToken = ""
clust.Config.TLSClientConfig.KeyData = nil
if clust.Config.ExecProviderConfig != nil {
// We can't know what the user has put into args or
// env vars on the exec provider that might be sensitive
// (e.g. --private-key=XXX, PASSWORD=XXX)
// Implicitly assumes the command executable name is non-sensitive
clust.Config.ExecProviderConfig.Env = make(map[string]string)
clust.Config.ExecProviderConfig.Args = nil
}
// populate deprecated fields for backward compatibility
// nolint:staticcheck
clust.ServerVersion = clust.Info.ServerVersion
@@ -497,11 +483,11 @@ func (s *Server) toAPIResponse(clust *appv1.Cluster) *appv1.Cluster {
// InvalidateCache invalidates cluster cache
func (s *Server) InvalidateCache(ctx context.Context, q *cluster.ClusterQuery) (*appv1.Cluster, error) {
- cls, err := s.getClusterAndVerifyAccess(ctx, q, rbacpolicy.ActionUpdate)
+ cls, err := s.getClusterAndVerifyAccess(ctx, q, rbac.ActionUpdate)
if err != nil {
return nil, fmt.Errorf("failed to verify access for cluster: %w", err)
}
- now := v1.Now()
+ now := metav1.Now()
cls.RefreshRequestedAt = &now
cls, err = s.db.UpdateCluster(ctx, cls)
if err != nil {


@@ -10,7 +10,6 @@ import (
"github.com/golang-jwt/jwt/v4"
"github.com/argoproj/argo-cd/v2/server/rbacpolicy"
"github.com/argoproj/argo-cd/v2/util/assets"
"github.com/argoproj/gitops-engine/pkg/utils/kube/kubetest"
@@ -19,14 +18,12 @@ import (
"github.com/stretchr/testify/require"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/kubernetes/fake"
"k8s.io/utils/ptr"
"github.com/argoproj/argo-cd/v2/common"
"github.com/argoproj/argo-cd/v2/pkg/apiclient/cluster"
clusterapi "github.com/argoproj/argo-cd/v2/pkg/apiclient/cluster"
"github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
appv1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
servercache "github.com/argoproj/argo-cd/v2/server/cache"
@@ -129,7 +126,7 @@ func newEnforcer() *rbac.Enforcer {
enforcer := rbac.NewEnforcer(fake.NewClientset(test.NewFakeConfigMap()), test.FakeArgoCDNamespace, common.ArgoCDRBACConfigMapName, nil)
_ = enforcer.SetBuiltinPolicy(assets.BuiltinPolicyCSV)
enforcer.SetDefaultRole("role:test")
- enforcer.SetClaimsEnforcerFunc(func(claims jwt.Claims, rvals ...interface{}) bool {
+ enforcer.SetClaimsEnforcerFunc(func(_ jwt.Claims, _ ...any) bool {
return true
})
return enforcer
@@ -138,23 +135,23 @@ func newEnforcer() *rbac.Enforcer {
func TestUpdateCluster_RejectInvalidParams(t *testing.T) {
testCases := []struct {
name string
- request clusterapi.ClusterUpdateRequest
+ request cluster.ClusterUpdateRequest
}{
{
name: "allowed cluster URL in body, disallowed cluster URL in query",
request: clusterapi.ClusterUpdateRequest{Cluster: &v1alpha1.Cluster{Name: "", Server: "https://127.0.0.1", Project: "", ClusterResources: true}, Id: &clusterapi.ClusterID{Type: "", Value: "https://127.0.0.2"}, UpdatedFields: []string{"clusterResources", "project"}},
request: cluster.ClusterUpdateRequest{Cluster: &v1alpha1.Cluster{Name: "", Server: "https://127.0.0.1", Project: "", ClusterResources: true}, Id: &cluster.ClusterID{Type: "", Value: "https://127.0.0.2"}, UpdatedFields: []string{"clusterResources", "project"}},
},
{
name: "allowed cluster URL in body, disallowed cluster name in query",
request: clusterapi.ClusterUpdateRequest{Cluster: &v1alpha1.Cluster{Name: "", Server: "https://127.0.0.1", Project: "", ClusterResources: true}, Id: &clusterapi.ClusterID{Type: "name", Value: "disallowed-unscoped"}, UpdatedFields: []string{"clusterResources", "project"}},
request: cluster.ClusterUpdateRequest{Cluster: &v1alpha1.Cluster{Name: "", Server: "https://127.0.0.1", Project: "", ClusterResources: true}, Id: &cluster.ClusterID{Type: "name", Value: "disallowed-unscoped"}, UpdatedFields: []string{"clusterResources", "project"}},
},
{
name: "allowed cluster URL in body, disallowed cluster name in query, changing unscoped to scoped",
request: clusterapi.ClusterUpdateRequest{Cluster: &v1alpha1.Cluster{Name: "", Server: "https://127.0.0.1", Project: "allowed-project", ClusterResources: true}, Id: &clusterapi.ClusterID{Type: "", Value: "https://127.0.0.2"}, UpdatedFields: []string{"clusterResources", "project"}},
request: cluster.ClusterUpdateRequest{Cluster: &v1alpha1.Cluster{Name: "", Server: "https://127.0.0.1", Project: "allowed-project", ClusterResources: true}, Id: &cluster.ClusterID{Type: "", Value: "https://127.0.0.2"}, UpdatedFields: []string{"clusterResources", "project"}},
},
{
name: "allowed cluster URL in body, disallowed cluster URL in query, changing unscoped to scoped",
request: clusterapi.ClusterUpdateRequest{Cluster: &v1alpha1.Cluster{Name: "", Server: "https://127.0.0.1", Project: "allowed-project", ClusterResources: true}, Id: &clusterapi.ClusterID{Type: "name", Value: "disallowed-unscoped"}, UpdatedFields: []string{"clusterResources", "project"}},
request: cluster.ClusterUpdateRequest{Cluster: &v1alpha1.Cluster{Name: "", Server: "https://127.0.0.1", Project: "allowed-project", ClusterResources: true}, Id: &cluster.ClusterID{Type: "name", Value: "disallowed-unscoped"}, UpdatedFields: []string{"clusterResources", "project"}},
},
}
@@ -182,18 +179,18 @@ func TestUpdateCluster_RejectInvalidParams(t *testing.T) {
}
db.On("ListClusters", mock.Anything).Return(
- func(ctx context.Context) *v1alpha1.ClusterList {
+ func(_ context.Context) *v1alpha1.ClusterList {
return &v1alpha1.ClusterList{
- ListMeta: v1.ListMeta{},
+ ListMeta: metav1.ListMeta{},
Items: clusters,
}
},
- func(ctx context.Context) error {
+ func(_ context.Context) error {
return nil
},
)
db.On("UpdateCluster", mock.Anything, mock.Anything).Return(
- func(ctx context.Context, c *v1alpha1.Cluster) *v1alpha1.Cluster {
+ func(_ context.Context, c *v1alpha1.Cluster) *v1alpha1.Cluster {
for _, cluster := range clusters {
if c.Server == cluster.Server {
return c
@@ -201,7 +198,7 @@ func TestUpdateCluster_RejectInvalidParams(t *testing.T) {
}
return nil
},
- func(ctx context.Context, c *v1alpha1.Cluster) error {
+ func(_ context.Context, c *v1alpha1.Cluster) error {
for _, cluster := range clusters {
if c.Server == cluster.Server {
return nil
@@ -211,7 +208,7 @@ func TestUpdateCluster_RejectInvalidParams(t *testing.T) {
},
)
db.On("GetCluster", mock.Anything, mock.Anything).Return(
- func(ctx context.Context, server string) *v1alpha1.Cluster {
+ func(_ context.Context, server string) *v1alpha1.Cluster {
for _, cluster := range clusters {
if server == cluster.Server {
return &cluster
@@ -219,7 +216,7 @@ func TestUpdateCluster_RejectInvalidParams(t *testing.T) {
}
return nil
},
- func(ctx context.Context, server string) error {
+ func(_ context.Context, server string) error {
for _, cluster := range clusters {
if server == cluster.Server {
return nil
@@ -255,7 +252,7 @@ func TestGetCluster_UrlEncodedName(t *testing.T) {
Namespaces: []string{"default", "kube-system"},
}
mockClusterList := v1alpha1.ClusterList{
- ListMeta: v1.ListMeta{},
+ ListMeta: metav1.ListMeta{},
Items: []v1alpha1.Cluster{
mockCluster,
},
@@ -265,15 +262,15 @@ func TestGetCluster_UrlEncodedName(t *testing.T) {
server := NewServer(db, newNoopEnforcer(), newServerInMemoryCache(), &kubetest.MockKubectlCmd{})
- cluster, err := server.Get(context.Background(), &clusterapi.ClusterQuery{
- Id: &clusterapi.ClusterID{
+ localCluster, err := server.Get(context.Background(), &cluster.ClusterQuery{
+ Id: &cluster.ClusterID{
Type: "name_escaped",
Value: "test%2fing",
},
})
require.NoError(t, err)
assert.Equal(t, "test/ing", cluster.Name)
assert.Equal(t, "test/ing", localCluster.Name)
}
func TestGetCluster_NameWithUrlEncodingButShouldNotBeUnescaped(t *testing.T) {
@@ -285,7 +282,7 @@ func TestGetCluster_NameWithUrlEncodingButShouldNotBeUnescaped(t *testing.T) {
Namespaces: []string{"default", "kube-system"},
}
mockClusterList := v1alpha1.ClusterList{
- ListMeta: v1.ListMeta{},
+ ListMeta: metav1.ListMeta{},
Items: []v1alpha1.Cluster{
mockCluster,
},
@@ -295,20 +292,20 @@ func TestGetCluster_NameWithUrlEncodingButShouldNotBeUnescaped(t *testing.T) {
server := NewServer(db, newNoopEnforcer(), newServerInMemoryCache(), &kubetest.MockKubectlCmd{})
- cluster, err := server.Get(context.Background(), &clusterapi.ClusterQuery{
- Id: &clusterapi.ClusterID{
+ localCluster, err := server.Get(context.Background(), &cluster.ClusterQuery{
+ Id: &cluster.ClusterID{
Type: "name",
Value: "test%2fing",
},
})
require.NoError(t, err)
assert.Equal(t, "test%2fing", cluster.Name)
assert.Equal(t, "test%2fing", localCluster.Name)
}
func TestGetCluster_CannotSetCADataAndInsecureTrue(t *testing.T) {
testNamespace := "default"
- cluster := &v1alpha1.Cluster{
+ localCluster := &v1alpha1.Cluster{
Name: "my-cluster-name",
Server: "https://my-cluster-server",
Namespaces: []string{testNamespace},
@@ -326,17 +323,17 @@ func TestGetCluster_CannotSetCADataAndInsecureTrue(t *testing.T) {
server := NewServer(db, newNoopEnforcer(), newServerInMemoryCache(), &kubetest.MockKubectlCmd{})
t.Run("Create Fails When CAData is Set and Insecure is True", func(t *testing.T) {
- _, err := server.Create(context.Background(), &clusterapi.ClusterCreateRequest{
- Cluster: cluster,
+ _, err := server.Create(context.Background(), &cluster.ClusterCreateRequest{
+ Cluster: localCluster,
})
assert.EqualError(t, err, `error getting REST config: Unable to apply K8s REST config defaults: specifying a root certificates file with the insecure flag is not allowed`)
})
- cluster.Config.TLSClientConfig.CAData = nil
+ localCluster.Config.TLSClientConfig.CAData = nil
t.Run("Create Succeeds When CAData is nil and Insecure is True", func(t *testing.T) {
- _, err := server.Create(context.Background(), &clusterapi.ClusterCreateRequest{
- Cluster: cluster,
+ _, err := server.Create(context.Background(), &cluster.ClusterCreateRequest{
+ Cluster: localCluster,
})
require.NoError(t, err)
})
@@ -355,7 +352,7 @@ func TestUpdateCluster_NoFieldsPaths(t *testing.T) {
}
clusterList := v1alpha1.ClusterList{
- ListMeta: v1.ListMeta{},
+ ListMeta: metav1.ListMeta{},
Items: clusters,
}
@@ -367,7 +364,7 @@ func TestUpdateCluster_NoFieldsPaths(t *testing.T) {
server := NewServer(db, newNoopEnforcer(), newServerInMemoryCache(), &kubetest.MockKubectlCmd{})
- _, err := server.Update(context.Background(), &clusterapi.ClusterUpdateRequest{
+ _, err := server.Update(context.Background(), &cluster.ClusterUpdateRequest{
Cluster: &v1alpha1.Cluster{
Name: "minikube",
Namespaces: []string{"default", "kube-system"},
@@ -395,7 +392,7 @@ func TestUpdateCluster_FieldsPathSet(t *testing.T) {
server := NewServer(db, newNoopEnforcer(), newServerInMemoryCache(), &kubetest.MockKubectlCmd{})
- _, err := server.Update(context.Background(), &clusterapi.ClusterUpdateRequest{
+ _, err := server.Update(context.Background(), &cluster.ClusterUpdateRequest{
Cluster: &v1alpha1.Cluster{
Server: "https://127.0.0.1",
Shard: ptr.To(int64(1)),
@@ -412,7 +409,7 @@ func TestUpdateCluster_FieldsPathSet(t *testing.T) {
labelEnv := map[string]string{
"env": "qa",
}
- _, err = server.Update(context.Background(), &clusterapi.ClusterUpdateRequest{
+ _, err = server.Update(context.Background(), &cluster.ClusterUpdateRequest{
Cluster: &v1alpha1.Cluster{
Server: "https://127.0.0.1",
Labels: labelEnv,
@@ -429,7 +426,7 @@ func TestUpdateCluster_FieldsPathSet(t *testing.T) {
annotationEnv := map[string]string{
"env": "qa",
}
- _, err = server.Update(context.Background(), &clusterapi.ClusterUpdateRequest{
+ _, err = server.Update(context.Background(), &cluster.ClusterUpdateRequest{
Cluster: &v1alpha1.Cluster{
Server: "https://127.0.0.1",
Annotations: annotationEnv,
@@ -443,7 +440,7 @@ func TestUpdateCluster_FieldsPathSet(t *testing.T) {
assert.Equal(t, []string{"default", "kube-system"}, updated.Namespaces)
assert.Equal(t, updated.Annotations, annotationEnv)
- _, err = server.Update(context.Background(), &clusterapi.ClusterUpdateRequest{
+ _, err = server.Update(context.Background(), &cluster.ClusterUpdateRequest{
Cluster: &v1alpha1.Cluster{
Server: "https://127.0.0.1",
Project: "new-project",
@@ -481,7 +478,7 @@ func TestDeleteClusterByName(t *testing.T) {
server := NewServer(db, newNoopEnforcer(), newServerInMemoryCache(), &kubetest.MockKubectlCmd{})
t.Run("Delete Fails When Deleting by Unknown Name", func(t *testing.T) {
- _, err := server.Delete(context.Background(), &clusterapi.ClusterQuery{
+ _, err := server.Delete(context.Background(), &cluster.ClusterQuery{
Name: "foo",
})
@@ -489,7 +486,7 @@ func TestDeleteClusterByName(t *testing.T) {
})
t.Run("Delete Succeeds When Deleting by Name", func(t *testing.T) {
- _, err := server.Delete(context.Background(), &clusterapi.ClusterQuery{
+ _, err := server.Delete(context.Background(), &cluster.ClusterQuery{
Name: "my-cluster-name",
})
require.NoError(t, err)
@@ -558,7 +555,7 @@ func TestRotateAuth(t *testing.T) {
server := NewServer(db, newNoopEnforcer(), newServerInMemoryCache(), &kubetest.MockKubectlCmd{})
t.Run("RotateAuth by Unknown Name", func(t *testing.T) {
- _, err := server.RotateAuth(context.Background(), &clusterapi.ClusterQuery{
+ _, err := server.RotateAuth(context.Background(), &cluster.ClusterQuery{
Name: "foo",
})
@@ -569,7 +566,7 @@ func TestRotateAuth(t *testing.T) {
// demonstrate the proper mapping of cluster names/server to server info (i.e. my-cluster-name
// results in https://my-cluster-name info being used and https://my-cluster-name results in https://my-cluster-name).
t.Run("RotateAuth by Name - Error from no such host", func(t *testing.T) {
- _, err := server.RotateAuth(context.Background(), &clusterapi.ClusterQuery{
+ _, err := server.RotateAuth(context.Background(), &cluster.ClusterQuery{
Name: "my-cluster-name",
})
@@ -577,7 +574,7 @@ func TestRotateAuth(t *testing.T) {
})
t.Run("RotateAuth by Server - Error from no such host", func(t *testing.T) {
- _, err := server.RotateAuth(context.Background(), &clusterapi.ClusterQuery{
+ _, err := server.RotateAuth(context.Background(), &cluster.ClusterQuery{
Server: "https://my-cluster-name",
})
@@ -629,7 +626,7 @@ func TestListCluster(t *testing.T) {
}
mockClusterList := v1alpha1.ClusterList{
- ListMeta: v1.ListMeta{},
+ ListMeta: metav1.ListMeta{},
Items: []v1alpha1.Cluster{fooCluster, barCluster, bazCluster},
}
@@ -645,60 +642,60 @@ func TestListCluster(t *testing.T) {
}{
{
name: "filter by name",
- q: &clusterapi.ClusterQuery{
+ q: &cluster.ClusterQuery{
Name: fooCluster.Name,
},
want: &v1alpha1.ClusterList{
- ListMeta: v1.ListMeta{},
+ ListMeta: metav1.ListMeta{},
Items: []v1alpha1.Cluster{fooCluster},
},
},
{
name: "filter by server",
- q: &clusterapi.ClusterQuery{
+ q: &cluster.ClusterQuery{
Server: barCluster.Server,
},
want: &v1alpha1.ClusterList{
- ListMeta: v1.ListMeta{},
+ ListMeta: metav1.ListMeta{},
Items: []v1alpha1.Cluster{barCluster},
},
},
{
name: "filter by id - name",
- q: &clusterapi.ClusterQuery{
- Id: &clusterapi.ClusterID{
+ q: &cluster.ClusterQuery{
+ Id: &cluster.ClusterID{
Type: "name",
Value: fooCluster.Name,
},
},
want: &v1alpha1.ClusterList{
- ListMeta: v1.ListMeta{},
+ ListMeta: metav1.ListMeta{},
Items: []v1alpha1.Cluster{fooCluster},
},
},
{
name: "filter by id - name_escaped",
- q: &clusterapi.ClusterQuery{
- Id: &clusterapi.ClusterID{
+ q: &cluster.ClusterQuery{
+ Id: &cluster.ClusterID{
Type: "name_escaped",
Value: "test%2fing",
},
},
want: &v1alpha1.ClusterList{
- ListMeta: v1.ListMeta{},
+ ListMeta: metav1.ListMeta{},
Items: []v1alpha1.Cluster{bazCluster},
},
},
{
name: "filter by id - server",
- q: &clusterapi.ClusterQuery{
- Id: &clusterapi.ClusterID{
+ q: &cluster.ClusterQuery{
+ Id: &cluster.ClusterID{
Type: "server",
Value: barCluster.Server,
},
},
want: &v1alpha1.ClusterList{
- ListMeta: v1.ListMeta{},
+ ListMeta: metav1.ListMeta{},
Items: []v1alpha1.Cluster{barCluster},
},
},
@@ -710,11 +707,12 @@ func TestListCluster(t *testing.T) {
t.Parallel()
got, err := s.List(context.Background(), tt.q)
- if (err != nil) != tt.wantErr {
- t.Errorf("Server.List() error = %v, wantErr %v", err, tt.wantErr)
- return
+ if tt.wantErr {
+ assert.Error(t, err, "Server.List()")
+ } else {
+ require.NoError(t, err)
+ assert.Truef(t, reflect.DeepEqual(got, tt.want), "Server.List() = %v, want %v", got, tt.want)
}
- assert.Truef(t, reflect.DeepEqual(got, tt.want), "Server.List() = %v, want %v", got, tt.want)
})
}
}
@@ -729,7 +727,7 @@ func TestGetClusterAndVerifyAccess(t *testing.T) {
Namespaces: []string{"default", "kube-system"},
}
mockClusterList := v1alpha1.ClusterList{
- ListMeta: v1.ListMeta{},
+ ListMeta: metav1.ListMeta{},
Items: []v1alpha1.Cluster{
mockCluster,
},
@@ -738,11 +736,11 @@ func TestGetClusterAndVerifyAccess(t *testing.T) {
db.On("ListClusters", mock.Anything).Return(&mockClusterList, nil)
server := NewServer(db, newNoopEnforcer(), newServerInMemoryCache(), &kubetest.MockKubectlCmd{})
- cluster, err := server.getClusterAndVerifyAccess(context.Background(), &clusterapi.ClusterQuery{
+ localCluster, err := server.getClusterAndVerifyAccess(context.Background(), &cluster.ClusterQuery{
Name: "test/not-exists",
- }, rbacpolicy.ActionGet)
+ }, rbac.ActionGet)
- assert.Nil(t, cluster)
+ assert.Nil(t, localCluster)
assert.ErrorIs(t, err, common.PermissionDeniedAPIError)
})
@@ -755,7 +753,7 @@ func TestGetClusterAndVerifyAccess(t *testing.T) {
Namespaces: []string{"default", "kube-system"},
}
mockClusterList := v1alpha1.ClusterList{
ListMeta: v1.ListMeta{},
ListMeta: metav1.ListMeta{},
Items: []v1alpha1.Cluster{
mockCluster,
},
@@ -764,11 +762,11 @@ func TestGetClusterAndVerifyAccess(t *testing.T) {
db.On("ListClusters", mock.Anything).Return(&mockClusterList, nil)
server := NewServer(db, newEnforcer(), newServerInMemoryCache(), &kubetest.MockKubectlCmd{})
cluster, err := server.getClusterAndVerifyAccess(context.Background(), &clusterapi.ClusterQuery{
localCluster, err := server.getClusterAndVerifyAccess(context.Background(), &cluster.ClusterQuery{
Name: "test/ing",
}, rbacpolicy.ActionGet)
}, rbac.ActionGet)
assert.Nil(t, cluster)
assert.Nil(t, localCluster)
assert.ErrorIs(t, err, common.PermissionDeniedAPIError)
})
}
@@ -782,7 +780,7 @@ func TestNoClusterEnumeration(t *testing.T) {
Namespaces: []string{"default", "kube-system"},
}
mockClusterList := v1alpha1.ClusterList{
ListMeta: v1.ListMeta{},
ListMeta: metav1.ListMeta{},
Items: []v1alpha1.Cluster{
mockCluster,
},
@@ -794,26 +792,26 @@ func TestNoClusterEnumeration(t *testing.T) {
server := NewServer(db, newEnforcer(), newServerInMemoryCache(), &kubetest.MockKubectlCmd{})
t.Run("Get", func(t *testing.T) {
_, err := server.Get(context.Background(), &clusterapi.ClusterQuery{
_, err := server.Get(context.Background(), &cluster.ClusterQuery{
Name: "cluster-not-exists",
})
require.ErrorIs(t, err, common.PermissionDeniedAPIError, "error message must be _only_ the permission error, to avoid leaking information about cluster existence")
_, err = server.Get(context.Background(), &clusterapi.ClusterQuery{
_, err = server.Get(context.Background(), &cluster.ClusterQuery{
Name: "test/ing",
})
assert.ErrorIs(t, err, common.PermissionDeniedAPIError, "error message must be _only_ the permission error, to avoid leaking information about cluster existence")
})
t.Run("Update", func(t *testing.T) {
_, err := server.Update(context.Background(), &clusterapi.ClusterUpdateRequest{
_, err := server.Update(context.Background(), &cluster.ClusterUpdateRequest{
Cluster: &v1alpha1.Cluster{
Name: "cluster-not-exists",
},
})
require.ErrorIs(t, err, common.PermissionDeniedAPIError, "error message must be _only_ the permission error, to avoid leaking information about cluster existence")
_, err = server.Update(context.Background(), &clusterapi.ClusterUpdateRequest{
_, err = server.Update(context.Background(), &cluster.ClusterUpdateRequest{
Cluster: &v1alpha1.Cluster{
Name: "test/ing",
},
@@ -822,36 +820,36 @@ func TestNoClusterEnumeration(t *testing.T) {
})
t.Run("Delete", func(t *testing.T) {
_, err := server.Delete(context.Background(), &clusterapi.ClusterQuery{
_, err := server.Delete(context.Background(), &cluster.ClusterQuery{
Server: "https://127.0.0.2",
})
require.ErrorIs(t, err, common.PermissionDeniedAPIError, "error message must be _only_ the permission error, to avoid leaking information about cluster existence")
_, err = server.Delete(context.Background(), &clusterapi.ClusterQuery{
_, err = server.Delete(context.Background(), &cluster.ClusterQuery{
Server: "https://127.0.0.1",
})
assert.ErrorIs(t, err, common.PermissionDeniedAPIError, "error message must be _only_ the permission error, to avoid leaking information about cluster existence")
})
t.Run("RotateAuth", func(t *testing.T) {
_, err := server.RotateAuth(context.Background(), &clusterapi.ClusterQuery{
_, err := server.RotateAuth(context.Background(), &cluster.ClusterQuery{
Server: "https://127.0.0.2",
})
require.ErrorIs(t, err, common.PermissionDeniedAPIError, "error message must be _only_ the permission error, to avoid leaking information about cluster existence")
_, err = server.RotateAuth(context.Background(), &clusterapi.ClusterQuery{
_, err = server.RotateAuth(context.Background(), &cluster.ClusterQuery{
Server: "https://127.0.0.1",
})
assert.ErrorIs(t, err, common.PermissionDeniedAPIError, "error message must be _only_ the permission error, to avoid leaking information about cluster existence")
})
t.Run("InvalidateCache", func(t *testing.T) {
_, err := server.InvalidateCache(context.Background(), &clusterapi.ClusterQuery{
_, err := server.InvalidateCache(context.Background(), &cluster.ClusterQuery{
Server: "https://127.0.0.2",
})
require.ErrorIs(t, err, common.PermissionDeniedAPIError, "error message must be _only_ the permission error, to avoid leaking information about cluster existence")
_, err = server.InvalidateCache(context.Background(), &clusterapi.ClusterQuery{
_, err = server.InvalidateCache(context.Background(), &cluster.ClusterQuery{
Server: "https://127.0.0.1",
})
assert.ErrorIs(t, err, common.PermissionDeniedAPIError, "error message must be _only_ the permission error, to avoid leaking information about cluster existence")


@@ -16,6 +16,8 @@ import (
log "github.com/sirupsen/logrus"
"gopkg.in/yaml.v3"
"github.com/argoproj/argo-cd/v2/util/rbac"
"github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
applisters "github.com/argoproj/argo-cd/v2/pkg/client/listers/application/v1alpha1"
"github.com/argoproj/argo-cd/v2/server/rbacpolicy"
@@ -332,7 +334,7 @@ func (a *DefaultApplicationGetter) Get(ns, name string) (*v1alpha1.Application,
// RbacEnforcer defines the contract to enforce rbac rules
type RbacEnforcer interface {
EnforceErr(rvals ...interface{}) error
EnforceErr(rvals ...any) error
}
// Manager is the object that will be responsible for registering
@@ -411,12 +413,12 @@ func proxyKey(extName, cName, cServer string) ProxyKey {
func parseAndValidateConfig(s *settings.ArgoCDSettings) (*ExtensionConfigs, error) {
if len(s.ExtensionConfig) == 0 {
return nil, fmt.Errorf("no extensions configurations found")
return nil, errors.New("no extensions configurations found")
}
configs := ExtensionConfigs{}
for extName, extConfig := range s.ExtensionConfig {
extConfigMap := map[string]interface{}{}
extConfigMap := map[string]any{}
err := yaml.Unmarshal([]byte(extConfig), &extConfigMap)
if err != nil {
return nil, fmt.Errorf("invalid extension config: %w", err)
@@ -461,10 +463,10 @@ func validateConfigs(configs *ExtensionConfigs) error {
exts := make(map[string]struct{})
for _, ext := range configs.Extensions {
if ext.Name == "" {
return fmt.Errorf("extensions.name must be configured")
return errors.New("extensions.name must be configured")
}
if !nameSafeRegex.MatchString(ext.Name) {
return fmt.Errorf("invalid extensions.name: only alphanumeric characters, hyphens, and underscores are allowed")
return errors.New("invalid extensions.name: only alphanumeric characters, hyphens, and underscores are allowed")
}
if _, found := exts[ext.Name]; found {
return fmt.Errorf("duplicated extension found in the configs for %q", ext.Name)
@@ -476,23 +478,23 @@ func validateConfigs(configs *ExtensionConfigs) error {
}
for _, svc := range ext.Backend.Services {
if svc.URL == "" {
return fmt.Errorf("extensions.backend.services.url must be configured")
return errors.New("extensions.backend.services.url must be configured")
}
if svcTotal > 1 && svc.Cluster == nil {
return fmt.Errorf("extensions.backend.services.cluster must be configured when defining more than one service per extension")
return errors.New("extensions.backend.services.cluster must be configured when defining more than one service per extension")
}
if svc.Cluster != nil {
if svc.Cluster.Name == "" && svc.Cluster.Server == "" {
return fmt.Errorf("cluster.name or cluster.server must be defined when cluster is provided in the configuration")
return errors.New("cluster.name or cluster.server must be defined when cluster is provided in the configuration")
}
}
if len(svc.Headers) > 0 {
for _, header := range svc.Headers {
if header.Name == "" {
return fmt.Errorf("header.name must be defined when providing service headers in the configuration")
return errors.New("header.name must be defined when providing service headers in the configuration")
}
if header.Value == "" {
return fmt.Errorf("header.value must be defined when providing service headers in the configuration")
return errors.New("header.value must be defined when providing service headers in the configuration")
}
}
}
@@ -651,7 +653,7 @@ func appendProxy(registry ProxyRegistry,
return nil
}
// authorize will enforce rbac rules are satified for the given RequestResources.
// authorize will enforce rbac rules are satisfied for the given RequestResources.
// The following validations are executed:
// - enforce the subject has permission to read application/project provided
// in HeaderArgoCDApplicationName and HeaderArgoCDProjectName.
@@ -659,17 +661,17 @@ func appendProxy(registry ProxyRegistry,
// extName.
// - enforce that the project has permission to access the destination cluster.
//
// If all validations are satified it will return the Application resource
// If all validations are satisfied it will return the Application resource
func (m *Manager) authorize(ctx context.Context, rr *RequestResources, extName string) (*v1alpha1.Application, error) {
if m.rbac == nil {
return nil, fmt.Errorf("rbac enforcer not set in extension manager")
return nil, errors.New("rbac enforcer not set in extension manager")
}
appRBACName := security.RBACName(rr.ApplicationNamespace, rr.ProjectName, rr.ApplicationNamespace, rr.ApplicationName)
if err := m.rbac.EnforceErr(ctx.Value("claims"), rbacpolicy.ResourceApplications, rbacpolicy.ActionGet, appRBACName); err != nil {
if err := m.rbac.EnforceErr(ctx.Value("claims"), rbac.ResourceApplications, rbac.ActionGet, appRBACName); err != nil {
return nil, fmt.Errorf("application authorization error: %w", err)
}
if err := m.rbac.EnforceErr(ctx.Value("claims"), rbacpolicy.ResourceExtensions, rbacpolicy.ActionInvoke, extName); err != nil {
if err := m.rbac.EnforceErr(ctx.Value("claims"), rbac.ResourceExtensions, rbac.ActionInvoke, extName); err != nil {
return nil, fmt.Errorf("unauthorized to invoke extension %q: %w", extName, err)
}
@@ -698,7 +700,7 @@ func (m *Manager) authorize(ctx context.Context, rr *RequestResources, extName s
return nil, fmt.Errorf("error validating project destinations: %w", err)
}
if !permitted {
return nil, fmt.Errorf("the provided project is not allowed to access the cluster configured in the Application destination")
return nil, errors.New("the provided project is not allowed to access the cluster configured in the Application destination")
}
return app, nil
@@ -737,7 +739,7 @@ func (m *Manager) CallExtension() func(http.ResponseWriter, *http.Request) {
return func(w http.ResponseWriter, r *http.Request) {
segments := strings.Split(strings.TrimPrefix(r.URL.Path, "/"), "/")
if segments[0] != "extensions" {
http.Error(w, fmt.Sprintf("Invalid URL: first segment must be %s", URLPrefix), http.StatusBadRequest)
http.Error(w, "Invalid URL: first segment must be "+URLPrefix, http.StatusBadRequest)
return
}
extName := segments[1]


@@ -15,12 +15,13 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/argoproj/argo-cd/v2/util/rbac"
"github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/v2/server/extension"
"github.com/argoproj/argo-cd/v2/server/extension/mocks"
"github.com/argoproj/argo-cd/v2/server/rbacpolicy"
"github.com/argoproj/argo-cd/v2/util/settings"
)
@@ -274,8 +275,8 @@ func TestCallExtension(t *testing.T) {
getApp := func(destName, destServer, projName string) *v1alpha1.Application {
return &v1alpha1.Application{
TypeMeta: v1.TypeMeta{},
ObjectMeta: v1.ObjectMeta{},
TypeMeta: metav1.TypeMeta{},
ObjectMeta: metav1.ObjectMeta{},
Spec: v1alpha1.ApplicationSpec{
Destination: v1alpha1.ApplicationDestination{
Name: destName,
@@ -312,7 +313,7 @@ func TestCallExtension(t *testing.T) {
destinations = append(destinations, destination)
}
return &v1alpha1.AppProject{
ObjectMeta: v1.ObjectMeta{
ObjectMeta: metav1.ObjectMeta{
Name: prjName,
},
Spec: v1alpha1.AppProjectSpec{
@@ -339,8 +340,8 @@ func TestCallExtension(t *testing.T) {
if !allowExt {
extAccessError = errors.New("no extension permission")
}
f.rbacMock.On("EnforceErr", mock.Anything, rbacpolicy.ResourceApplications, rbacpolicy.ActionGet, mock.Anything).Return(appAccessError)
f.rbacMock.On("EnforceErr", mock.Anything, rbacpolicy.ResourceExtensions, rbacpolicy.ActionInvoke, mock.Anything).Return(extAccessError)
f.rbacMock.On("EnforceErr", mock.Anything, rbac.ResourceApplications, rbac.ActionGet, mock.Anything).Return(appAccessError)
f.rbacMock.On("EnforceErr", mock.Anything, rbac.ResourceExtensions, rbac.ActionInvoke, mock.Anything).Return(extAccessError)
}
withUser := func(f *fixture, username string, groups []string) {

Some files were not shown because too many files have changed in this diff.