Compare commits

..

109 Commits

Author SHA1 Message Date
github-actions[bot]
ed35bc3204 Bump version to 3.1.13 on release-3.1 branch (#27007)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: reggie-k <19544836+reggie-k@users.noreply.github.com>
2026-03-25 16:18:08 +02:00
dudinea
beb9860db5 fix: mitigation of grpc-go CVE-2026-33186 for release-3.1 (#26982)
Signed-off-by: Eugene Doudine <eugene.doudine@octopus.com>
2026-03-25 15:30:42 +02:00
argo-cd-cherry-pick-bot[bot]
7a7dd4e5a5 fix(UI): show RollingSync step clearly when labels match no step (cherry-pick #26877 for 3.1) (#26885)
Signed-off-by: Atif Ali <atali@redhat.com>
Co-authored-by: Atif Ali <atali@redhat.com>
2026-03-17 21:46:39 -04:00
argo-cd-cherry-pick-bot[bot]
12c8d42f4f chore: use base ref for cherry-pick prs (cherry-pick #26551 for 3.1) (#26555)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
Co-authored-by: Blake Pettersson <blake.pettersson@gmail.com>
2026-02-22 21:34:49 +02:00
nmirasch
c19d63446d chore(deps): bump lodash from 4.17.21 to 4.17.23 (Cherry-pick 3.1) (#26210)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-02-10 17:49:44 +02:00
github-actions[bot]
a46152e850 Bump version to 3.1.12 on release-3.1 branch (#26117)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2026-01-22 14:34:09 -05:00
Codey Jenkins
28b618775d fix: cherry pick #25516 to release-3.1 (#26116)
Signed-off-by: Codey Jenkins <FourFifthsCode@users.noreply.github.com>
Signed-off-by: pbhatnagar-oss <pbhatifiwork@gmail.com>
Co-authored-by: pbhatnagar-oss <pbhatifiwork@gmail.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2026-01-22 13:33:42 -05:00
dudinea
1b7cfc0c06 fix: invalid error message on health check failure (#26040) (cherry pick #26039 for 3.1) (#26071)
Signed-off-by: Eugene Doudine <eugene.doudine@octopus.com>
2026-01-20 18:35:15 +02:00
argo-cd-cherry-pick-bot[bot]
0262c8ff97 fix(hydrator): empty links for failed operation (#25025) (cherry-pick #26014 for 3.1) (#26017)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2026-01-15 16:26:23 -05:00
Nitish Kumar
cc053b2eeb fix(ci): bump golang.x/tools to v0.35.0 (#25955)
Signed-off-by: nitishfy <justnitish06@gmail.com>
2026-01-13 13:03:04 +02:00
github-actions[bot]
f9adb4e7f4 Bump version to 3.1.11 on release-3.1 branch (#25953)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: reggie-k <19544836+reggie-k@users.noreply.github.com>
2026-01-13 09:59:27 +02:00
Keith Chong
6c3f3eab5d fix: Selecting repoType in dropdown doesn't do anything (#23747) (#25945)
Signed-off-by: Keith Chong <kykchong@redhat.com>
2026-01-13 08:25:28 +01:00
Nitish Kumar
f22ccc2192 chore(cherry-pick-3.1): bump go to 1.25.5 and bump expr to v1.17.7 (#25890)
Signed-off-by: nitishfy <justnitish06@gmail.com>
2026-01-12 10:57:44 -05:00
argo-cd-cherry-pick-bot[bot]
b424210b52 fix(appset): do not trigger reconciliation on appsets not part of allowed namespaces when updating a cluster secret (cherry-pick #25622 for 3.1) (#25910)
Signed-off-by: OpenGuidou <guillaume.doussin@gmail.com>
Co-authored-by: OpenGuidou <73480729+OpenGuidou@users.noreply.github.com>
2026-01-09 16:18:10 +01:00
Atif Ali
955ea1b1df chore(cherry-pick-3.1): bump expr version from v1.17.5 to v1.17.7 (#25907)
Signed-off-by: Atif Ali <atali@redhat.com>
2026-01-08 13:44:26 -05:00
argo-cd-cherry-pick-bot[bot]
836d47f311 fix: Only show please update resource specification message when spec… (cherry-pick #25066 for 3.1) (#25896)
Signed-off-by: Josh Soref <jsoref@gmail.com>
Co-authored-by: Josh Soref <2119212+jsoref@users.noreply.github.com>
2026-01-07 10:11:13 -05:00
argo-cd-cherry-pick-bot[bot]
db2004f2e2 ci: test against k8s 1.34.2 (cherry-pick #25856 for 3.1) (#25861)
Signed-off-by: reggie-k <regina.voloshin@codefresh.io>
Co-authored-by: Regina Voloshin <regina.voloshin@codefresh.io>
2026-01-05 13:40:38 -05:00
github-actions[bot]
b68c964321 Bump version to 3.1.10 on release-3.1 branch (#25793)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: reggie-k <19544836+reggie-k@users.noreply.github.com>
2025-12-24 13:43:56 +02:00
Anand Francis Joseph
f4f21b3642 chore(deps): bump golang.org/x/crypto from 0.39.0 to 0.46.0 (#25792)
Signed-off-by: anandf <anjoseph@redhat.com>
2025-12-24 13:30:01 +02:00
argo-cd-cherry-pick-bot[bot]
38c15ada45 fix(hydrator): appset should preserve annotation when hydration is requested (cherry-pick #25644 for 3.1) (#25653)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2025-12-15 08:54:48 +01:00
argo-cd-cherry-pick-bot[bot]
11e7758ca9 docs: Document usage of ?. in notifications triggers and fix examples (#25352) (cherry-pick #25418 for 3.1) (#25422)
Signed-off-by: Eugene Doudine <eugene.doudine@octopus.com>
Co-authored-by: dudinea <eugene.doudine@octopus.com>
2025-11-26 09:42:45 +02:00
Regina Voloshin
f2f4f4579b docs: Improve switch to annotation tracking docs, clarifying that a new Git commit may be needed to avoid orphan resources - (cherry-pick #25309 for 3.1) (#25337)
Signed-off-by: Regina Voloshin <regina.voloshin@codefresh.io>
2025-11-19 11:46:40 +01:00
argo-cd-cherry-pick-bot[bot]
f9ada0403d fix: Allow the ISVC to be healthy when the Stopped Condition is False (cherry-pick #25312 for 3.1) (#25317)
Signed-off-by: Hannah DeFazio <h2defazio@gmail.com>
Co-authored-by: Hannah DeFazio <h2defazio@gmail.com>
2025-11-17 23:50:11 -10:00
jwinters01
b6660a2a7a fix:(ui) don't render ApplicationSelector unless the panel is showing (Cherry pick release 3.1) (#25218)
Signed-off-by: Jonathan Winters <wintersjonathan0@gmail.com>
2025-11-07 13:37:51 -05:00
argo-cd-cherry-pick-bot[bot]
787f3ec6a2 fix(ui): add null-safe handling for assignedWindows in status panel (cherry-pick #25128 for 3.1) (#25181)
Signed-off-by: choejwoo <jaewoo45@gmail.com>
Co-authored-by: Jaewoo Choi <jaewoo45@gmail.com>
2025-11-05 01:12:26 -10:00
argo-cd-cherry-pick-bot[bot]
49ee0040c4 fix: capture stderr in executil RunWithExecRunOpts (cherry-pick #25139 for 3.1) (#25141)
Signed-off-by: Eugene Doudine <eugene.doudine@octopus.com>
Co-authored-by: dudinea <eugene.doudine@octopus.com>
2025-11-03 08:14:38 +01:00
Alexander Lindeskär
7eca62c2d6 fix: Health status for HTTPRoute with multiple generations (#24958) (cherry-pick #24959 for 3.1) (#25037)
Signed-off-by: Alexander Lindeskär <lindeskar@users.noreply.github.com>
2025-10-23 07:08:03 +03:00
github-actions[bot]
8665140f96 Bump version to 3.1.9 on release-3.1 branch (#25000)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: alexmt <426437+alexmt@users.noreply.github.com>
2025-10-17 14:33:09 -07:00
argo-cd-cherry-pick-bot[bot]
a419e477e6 fix: don't show error about missing appset (cherry-pick #24995 for 3.1) (#24996)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
Co-authored-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2025-10-17 12:28:31 -07:00
argo-cd-cherry-pick-bot[bot]
e53196f9fd fix: make webhook payload handlers recover from panics (cherry-pick #24862 for 3.1) (#24914)
Signed-off-by: Jakub Ciolek <jakub@ciolek.dev>
Co-authored-by: Jakub Ciolek <66125090+jake-ciolek@users.noreply.github.com>
2025-10-14 14:14:53 -04:00
Carlos R.F.
16ba5f9c43 chore(deps): bump redis from 7.2.7 to 7.2.11 to address vuln (release-3.1) (#24886)
Signed-off-by: Carlos Rodriguez-Fernandez <carlosrodrifernandez@gmail.com>
2025-10-09 17:07:43 -04:00
argo-cd-cherry-pick-bot[bot]
1904de5065 fix(server): ensure resource health status is inferred on application retrieval (#24832) (cherry-pick #24851 for 3.1) (#24867)
Signed-off-by: Viacheslav Rianov <rianovviacheslav@gmail.com>
Co-authored-by: Rianov Viacheslav <55545103+vr009@users.noreply.github.com>
2025-10-06 16:58:09 -04:00
github-actions[bot]
becb020064 Bump version to 3.1.8 on release-3.1 branch (#24794)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-09-30 11:29:59 -04:00
Steve Ramage
c63c2d8909 fix(docs): include v3.1 upgrade docs (#23529) [backport] (#24799)
Signed-off-by: Mubarak Jama <mubarak.jama@gmail.com>
Signed-off-by: Steve Ramage <gitcommits@sjrx.net>
Co-authored-by: Mubarak Jama <83465122+mubarak-j@users.noreply.github.com>
2025-09-30 05:24:54 -10:00
Ville Vesilehto
e20828f869 Merge commit from fork
Fixed a race condition in repository credentials handling by
implementing deep copying of secrets before modification.
This prevents concurrent map read/write panics when multiple
goroutines access the same secret.

The fix ensures thread-safe operations by always operating on
copies rather than shared objects.

Signed-off-by: Ville Vesilehto <ville@vesilehto.fi>
2025-09-30 10:45:59 -04:00
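The fix described in that commit message is a standard Go concurrency pattern: objects served from a shared cache must be deep-copied before mutation, because Go maps are unsafe for concurrent read/write. A minimal illustrative sketch follows; the Secret type and names are hypothetical stand-ins, not Argo CD's actual code, which deep-copies Kubernetes secret objects in the repository-credentials path:

```go
package main

import (
	"fmt"
	"sync"
)

// Secret is a hypothetical stand-in for a cached Kubernetes secret;
// the real fix operates on objects returned from a shared informer cache.
type Secret struct {
	Name string
	Data map[string][]byte
}

// DeepCopy returns a fully independent copy, including the Data map,
// so callers can mutate the result without touching the shared object.
func (s *Secret) DeepCopy() *Secret {
	out := &Secret{Name: s.Name, Data: make(map[string][]byte, len(s.Data))}
	for k, v := range s.Data {
		out.Data[k] = append([]byte(nil), v...)
	}
	return out
}

func main() {
	shared := &Secret{
		Name: "repo-creds",
		Data: map[string][]byte{"url": []byte("https://git.example.com/repo.git")},
	}

	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			// Mutating shared.Data directly from several goroutines would
			// eventually panic with "concurrent map read and map write".
			// Operating on a private deep copy keeps each goroutine isolated.
			local := shared.DeepCopy()
			local.Data["password"] = []byte(fmt.Sprintf("token-%d", i))
		}(i)
	}
	wg.Wait()
	fmt.Println("done; the shared secret was never mutated")
}
```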
Michael Crenshaw
761fc27068 Merge commit from fork
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-09-30 10:45:32 -04:00
Michael Crenshaw
1a023f1ca7 Merge commit from fork
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-09-30 10:07:24 -04:00
Michael Crenshaw
5c466a4e39 Merge commit from fork
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-09-30 09:45:21 -04:00
argo-cd-cherry-pick-bot[bot]
b2fa7dcde6 fix: #24781 update crossplane healthchecks to V2 version (cherry-pick #24782 for 3.1) (#24783)
Signed-off-by: Jonasz Łasut-Balcerzak <jonasz.lasut@gmail.com>
Co-authored-by: Jonasz Łasut-Balcerzak <jonasz.lasut@gmail.com>
2025-09-30 18:04:57 +05:30
argo-cd-cherry-pick-bot[bot]
38808d03cd fix: allow for backwards compatibility of durations defined in days (cherry-pick #24769 for 3.1) (#24771)
Signed-off-by: lplazas <felipe.plazas10@gmail.com>
Co-authored-by: lplazas <luis.plazas@workato.com>
2025-09-29 15:23:09 +05:30
Atif Ali
41eac62eac fix: Clear ApplicationSet applicationStatus when ProgressiveSync is d… (#24715)
Signed-off-by: Atif Ali <atali@redhat.com>
2025-09-26 10:18:37 -04:00
argo-cd-cherry-pick-bot[bot]
54bab39a80 fix: update ExternalSecret discovery.lua to also include the refreshPolicy (cherry-pick #24707 for 3.1) (#24712)
Signed-off-by: AvivGuiser <avivguiser@gmail.com>
Co-authored-by: AvivGuiser <avivguiser@gmail.com>
2025-09-23 13:38:55 -04:00
github-actions[bot]
511ebd799e Bump version to 3.1.7 on release-3.1 branch (#24702)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: alexmt <426437+alexmt@users.noreply.github.com>
2025-09-22 15:15:25 -07:00
Alexander Matyushentsev
2e4458b91a fix: limit number of resources in appset status (#24690) (#24696)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2025-09-22 14:57:55 -07:00
argo-cd-cherry-pick-bot[bot]
7f92418a9c ci(release): only set latest release in github when latest (cherry-pick #24525 for 3.1) (#24685)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2025-09-22 11:46:48 -04:00
argo-cd-cherry-pick-bot[bot]
f3d59b0bb7 fix: resolve argocdService initialization issue in notifications CLI (cherry-pick #24664 for 3.1) (#24681)
Signed-off-by: puretension <rlrlfhtm5@gmail.com>
Co-authored-by: DOHYEONG LEE <rlrlfhtm5@gmail.com>
2025-09-22 04:43:49 -10:00
argo-cd-cherry-pick-bot[bot]
4081e2983a fix(server): validate new project on update (#23970) (cherry-pick #23973 for 3.1) (#24662)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2025-09-19 11:21:57 -04:00
argo-cd-cherry-pick-bot[bot]
c26cd5502b fix: Progress Sync Unknown in UI (cherry-pick #24202 for 3.1) (#24643)
Signed-off-by: Atif Ali <atali@redhat.com>
Co-authored-by: Atif Ali <56743004+aali309@users.noreply.github.com>
2025-09-18 14:30:42 -04:00
github-actions[bot]
96797ba846 Bump version to 3.1.6 on release-3.1 branch (#24635)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: blakepettersson <1227954+blakepettersson@users.noreply.github.com>
2025-09-18 06:28:14 -10:00
argo-cd-cherry-pick-bot[bot]
b46a57ab82 fix(oci): loosen up layer restrictions (cherry-pick #24640 for 3.1) (#24649)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
Co-authored-by: Blake Pettersson <blake.pettersson@gmail.com>
2025-09-18 06:01:17 -10:00
Alexander Matyushentsev
2b3df7f5a8 fix: use informer in webhook handler to reduce memory usage (#24622) (#24626)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2025-09-18 12:43:06 +05:30
argo-cd-cherry-pick-bot[bot]
4ef56634b4 docs: Delete dangling word in Source Hydrator docs (cherry-pick #24601 for 3.1) (#24603)
Signed-off-by: José Maia <josecbmaia@hotmail.com>
Co-authored-by: José Maia <josecbmaia@hotmail.com>
2025-09-17 11:36:10 -04:00
argo-cd-cherry-pick-bot[bot]
cb9574597e fix: correct post-delete finalizer removal when cluster not found (cherry-pick #24415 for 3.1) (#24590)
Signed-off-by: Pavel Aborilov <aborilov@gmail.com>
Co-authored-by: Pavel <aborilov@gmail.com>
2025-09-16 16:14:23 -07:00
Linus Ehlers
468870f65d fix: Ensure that symlink targets are not made absolute on extracting a tar (#24145) - backport/cherry-pick to 3.1 (#24519)
Signed-off-by: Linus Ehlers <Linus.Ehlers@ppi.de>
2025-09-11 10:48:02 -04:00
github-actions[bot]
cfeed49105 Bump version to 3.1.5 on release-3.1 branch (#24503)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-09-10 11:30:13 -04:00
Codey Jenkins
c21141a51f fix(cherry pick 3.1): RunResourceAction: error getting Lua resource action: built-in script does not exist #24491 (#24500)
Signed-off-by: Codey Jenkins <FourFifthsCode@users.noreply.github.com>
2025-09-10 11:28:01 -04:00
Fox Piacenti
0415c60af9 docs: Update URL for HA manifests to stable. (#24454)
Signed-off-by: Fox Danger Piacenti <fox@opencraft.com>
2025-09-09 12:36:21 +03:00
Nitish Kumar
9a3235ef92 fix(3.1): change the appset namespace to server namespace when generating appset (#24478)
Signed-off-by: nitishfy <justnitish06@gmail.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2025-09-09 10:45:31 +03:00
OpenGuidou
3320f1ed7a fix(cherry-pick-3.1): Do not block project update when a cluster referenced in an App doesn't exist (#24450)
Signed-off-by: OpenGuidou <guillaume.doussin@gmail.com>
2025-09-08 11:38:25 -04:00
github-actions[bot]
20dd73af34 Bump version to 3.1.4 on release-3.1 branch (#24424)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: agaudreault <47184027+agaudreault@users.noreply.github.com>
2025-09-05 14:56:15 -04:00
Alexandre Gaudreault
206d57b0de chore(deps): bump gitops-engine (#24418)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2025-09-05 13:29:18 -04:00
github-actions[bot]
c1467b81bc Bump version to 3.1.3 on release-3.1 branch (#24401)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-09-04 13:30:54 -04:00
github-actions[bot]
791b036d98 Bump version to 3.1.2 on release-3.1 branch (#24395)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-09-04 11:45:55 -04:00
Michael Crenshaw
60c62a944b fix(security): repository.GetDetailedProject exposes repo secrets (#24391)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2025-09-04 11:32:13 -04:00
Michael Crenshaw
fe6efec8f4 fix(appset): add applicationsets to the built-in readonly role (#24190) (#24318) (#24321)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-09-03 21:45:58 -04:00
Peter Jiang
6de4f7739b fix(cherry-pick-3.1): handle missing resources on UI (#24357)
Signed-off-by: Peter Jiang <peterjiang823@gmail.com>
2025-09-03 09:36:05 -04:00
Adrian Berger
ed9149beea fix(cherry-pick-3.1): custom resource health for flux helm repository of type oci (#24341)
Signed-off-by: Adrian Berger <adrian.berger@bedag.ch>
2025-09-02 15:18:58 -04:00
Blake Pettersson
20447f7f57 fix: downgrade go-git (#24288) (release-3.1) (#24317)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
2025-08-29 09:53:30 -04:00
jan-mrm
7982a74600 fix(discovery): add missing lua syntax and return to discovery (fixes #24257) - 3.1 (#24268)
Signed-off-by: jan-mrm <67435696+jan-mrm@users.noreply.github.com>
2025-08-28 11:35:37 -04:00
Nitish Kumar
b3ad040b2c chore(cherry-pick-3.1): replace bitnami images (#24101) (#24286)
Signed-off-by: nitishfy <justnitish06@gmail.com>
2025-08-27 14:03:19 +02:00
Anand Francis Joseph
30d8ce66e2 fix(appset): prevent idle connection buildup by cloning http.DefaultTransport in Bitbucket SCM/PR generator (#24264)
Signed-off-by: portly-halicore-76 <170707699+portly-halicore-76@users.noreply.github.com>
Signed-off-by: anandf <anjoseph@redhat.com>
Co-authored-by: portly-halicore-76 <170707699+portly-halicore-76@users.noreply.github.com>
2025-08-26 09:53:30 -04:00
github-actions[bot]
fa342d153e Bump version to 3.1.1 on release-3.1 branch (#24260)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-08-25 11:30:18 -04:00
raweber
c140eb27f8 fix: resolve argocd ui error for externalSecrets, fixes #23886 (#24232) (#24236)
Signed-off-by: ralf.weber <ralf.weber@cistec.com>
Co-authored-by: ralf.weber <ralf.weber@cistec.com>
2025-08-22 23:00:48 +02:00
Codey Jenkins
70dde2c27b chore: cherry pick #24235 to release-3.1 (#24238)
Signed-off-by: Codey Jenkins <FourFifthsCode@users.noreply.github.com>
Co-authored-by: Matthew Bennett <mtbennett@godaddy.com>
2025-08-22 16:29:13 -04:00
rumstead
eb72a0bd3b fix(server): Send Azure DevOps token via git extra headers (#23478) (#23631) (#24223)
Signed-off-by: Mike Bordon <mikebordon@gmail.com>
Signed-off-by: rumstead <37445536+rumstead@users.noreply.github.com>
Co-authored-by: mikebordon <31316193+mikebordon@users.noreply.github.com>
2025-08-21 16:43:05 -04:00
Anand Francis Joseph
fdd099181c fix(util): Fix default key exchange algorthims used for SSH connection to be FIPS compliant (#24086) (cherry-pick 3.1) (#24166)
Signed-off-by: anandf <anjoseph@redhat.com>
2025-08-15 11:32:30 +02:00
Blake Pettersson
a0f065316b chore: add oci env vars to manifests (#24113) (cherry-pick 3.1) (#24153)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
2025-08-14 13:59:13 +02:00
Alexandre Gaudreault
b22566d001 fix(lua): allow actions to add items to array (#24137)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2025-08-13 17:20:16 -04:00
github-actions[bot]
03c4ad854c Bump version to 3.1.0 on release-3.1 branch (#24135)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: agaudreault <47184027+agaudreault@users.noreply.github.com>
2025-08-13 16:36:22 -04:00
Alexandre Gaudreault
a9a0c7bf57 fix: revert kubeVersion change to preserve trailing + (cherry-pick #24066) (#24104)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2025-08-13 10:33:51 -04:00
Blake Pettersson
1c1c17663e fix: kustomize edit add component check (#24100) (cherry-pick 3.1) (#24102)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
2025-08-11 12:02:46 +02:00
Ville Vesilehto
61d2a05fd0 chore: update Go to 1.24.6 (release-3.1) (#24093)
Signed-off-by: Ville Vesilehto <ville@vesilehto.fi>
2025-08-11 10:38:22 +02:00
gcp-cherry-pick-bot[bot]
64e94af771 docs(actions): document parameterized resource actions (cherry-pick #24007) (#24009)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-07-31 14:43:04 -07:00
gcp-cherry-pick-bot[bot]
269e213889 docs: 3.0 migration - added remediation for explicitly syncing apps that use ApplyOutOfSyncOnly=true (cherry-pick #23918) (#23959)
Signed-off-by: reggie-k <regina.voloshin@codefresh.io>
Co-authored-by: Regina Voloshin <regina.voloshin@codefresh.io>
2025-07-30 07:24:42 +03:00
gcp-cherry-pick-bot[bot]
26c63b9283 fix: helm GetTags cache writing (cherry-pick #23865) (#23952)
Signed-off-by: Matthew Clarke <mclarke@spotify.com>
Co-authored-by: Matthew Clarke <matthewclarke47@gmail.com>
Co-authored-by: Linghao Su <linghao.su@daocloud.io>
2025-07-28 23:42:54 +02:00
github-actions[bot]
1ad527f094 Bump version to 3.1.0-rc4 on release-3.1 branch (#23935)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-07-25 10:34:10 -07:00
Regina Voloshin
b7d39b57fc docs: Improve developer guide cherry-pick (#23916)
Signed-off-by: reggie-k <regina.voloshin@codefresh.io>
Signed-off-by: Regina Voloshin <regina.voloshin@codefresh.io>
Co-authored-by: Dan Garfield <dan@codefresh.io>
Co-authored-by: Nitish Kumar <justnitish06@gmail.com>
2025-07-24 12:58:18 +03:00
pbhatnagar-oss
9f18ff1fcd fix(metrics): Cherrypick grpc stats fix release 3.1 (#23890)
Signed-off-by: pbhatnagar-oss <pbhatifiwork@gmail.com>
2025-07-23 07:51:19 -07:00
rumstead
66d06c0b23 fix(appset): When Appset is deleted, the controller should reconcile applicationset #23723 (cherry-pick ##23823) (#23835)
Signed-off-by: rumstead <37445536+rumstead@users.noreply.github.com>
Co-authored-by: sangeer <86688098+sangdammad@users.noreply.github.com>
2025-07-17 11:56:38 -04:00
github-actions[bot]
007a5aea6e Bump version to 3.1.0-rc3 on release-3.1 branch (#23764)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-07-11 13:59:43 -04:00
gcp-cherry-pick-bot[bot]
36cc2d1b86 fix(health): CRD health check message (#23690) (cherry-pick #23691) (#23738)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-07-10 10:18:17 -04:00
github-actions[bot]
9db5e25e03 Bump version to 3.1.0-rc2 on release-3.1 branch (#23743)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-07-10 10:17:47 -04:00
gcp-cherry-pick-bot[bot]
8eaccb7ea0 feat(helm): upgrading helm to 3.18.4 (cherry-pick #23724) (#23731)
Signed-off-by: Mubarak Jama <mubarak.jama@gmail.com>
Co-authored-by: Mubarak Jama <83465122+mubarak-j@users.noreply.github.com>
2025-07-09 21:29:09 -10:00
gcp-cherry-pick-bot[bot]
fed347da29 docs(images): add a note about missing images for 3.0 releases (#23612) (cherry-pick #23712) (#23713)
Signed-off-by: rumstead <37445536+rumstead@users.noreply.github.com>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: rumstead <37445536+rumstead@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-07-09 13:56:43 -04:00
gcp-cherry-pick-bot[bot]
1cbd28cb30 fix(server): make parameterized resource actions backwards-compatible (cherry-pick #23695) (#23709)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-07-09 11:21:17 -04:00
gcp-cherry-pick-bot[bot]
6fe5ec794a fix: improves the ui message when an operation is terminated due to controller sync timeout (cherry-pick #23657) (#23672)
Signed-off-by: Patroklos Papapetrou <ppapapetrou76@gmail.com>
Co-authored-by: Papapetrou Patroklos <1743100+ppapapetrou76@users.noreply.github.com>
2025-07-09 06:37:58 +03:00
Alexandre Gaudreault
6142c5bf56 fix(sync): auto-sync loop when FailOnSharedResource (#23357) (#23640)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2025-07-02 15:00:37 -04:00
Michael Crenshaw
563d45b8db feat(kustomize): upgrade to 5.7.0 (#23619) (#23625)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-07-02 10:54:37 -04:00
gcp-cherry-pick-bot[bot]
37f2793e95 fix(hydrator): omit Argocd- trailers from hydrator.metadata (cherry-pick #23463) (#23621)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-07-01 13:21:19 -04:00
gcp-cherry-pick-bot[bot]
7224a15502 feat(helm): upgrade to 3.18.3 (cherry-pick #23618) (#23620)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-07-01 11:52:47 -04:00
gcp-cherry-pick-bot[bot]
dd675feb33 fix: UI error with ApplicationSet in any namespace (cherry-pick #23601) (#23604)
Signed-off-by: jaqxues <32979131+jaqxues@users.noreply.github.com>
Co-authored-by: jaqxues <32979131+jaqxues@users.noreply.github.com>
2025-06-30 00:41:55 -10:00
gcp-cherry-pick-bot[bot]
c215dbf202 fix: OCI client, avoid calling tags/list if revision is not a constraint #23580 (cherry-pick #23581) (#23582)
Signed-off-by: Diego Erdody <diego.erdody@dexterity.ai>
Co-authored-by: Diego Erdody <erdody@gmail.com>
Co-authored-by: Diego Erdody <diego.erdody@dexterity.ai>
2025-06-26 23:55:19 +02:00
gcp-cherry-pick-bot[bot]
75f7016d89 fix(controller): get commit server url from env (cherry-pick #23536) (#23541)
Signed-off-by: Alexej Disterhoft <alexej.disterhoft@redcare-pharmacy.com>
Co-authored-by: Alexej Disterhoft <alexej@disterhoft.de>
2025-06-24 13:31:39 -04:00
gcp-cherry-pick-bot[bot]
4a7e581080 fix: kustomize components + monorepos (cherry-pick #23486) (#23540)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
Co-authored-by: Blake Pettersson <blake.pettersson@gmail.com>
2025-06-24 18:07:20 +02:00
gcp-cherry-pick-bot[bot]
320f46f06b fix(darwin): remove the need for cgo when building a darwin binary on linux (cherry-pick #23507) (#23526)
Signed-off-by: rumstead <37445536+rumstead@users.noreply.github.com>
Co-authored-by: rumstead <37445536+rumstead@users.noreply.github.com>
2025-06-23 12:07:29 -04:00
gcp-cherry-pick-bot[bot]
d69639a073 fix(controller): impersonation with destination name (#23309) (cherry-pick #23504) (#23519)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-06-23 09:52:54 -04:00
github-actions[bot]
b75f532a91 Bump version to 3.1.0-rc1 on release-3.1 branch (#23498)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: agaudreault <47184027+agaudreault@users.noreply.github.com>
2025-06-19 13:08:02 -04:00
gcp-cherry-pick-bot[bot]
200dc1d499 ci: fix supported-version script (cherry-pick #23496) (#23497)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2025-06-19 13:03:51 -04:00
Christian Hernandez
3f1d9bf5a2 chore: Updated Blog Link for v3.1 (#23494)
Signed-off-by: Christian Hernandez <christian@chernand.io>
2025-06-19 17:27:33 +02:00
593 changed files with 12076 additions and 39442 deletions

View File

@@ -1,15 +0,0 @@
module.exports = {
platform: 'github',
gitAuthor: 'renovate[bot] <renovate[bot]@users.noreply.github.com>',
autodiscover: false,
allowPostUpgradeCommandTemplating: true,
allowedPostUpgradeCommands: ["make mockgen"],
extends: [
"github>argoproj/argo-cd//renovate-presets/commons.json5",
"github>argoproj/argo-cd//renovate-presets/custom-managers/shell.json5",
"github>argoproj/argo-cd//renovate-presets/custom-managers/yaml.json5",
"github>argoproj/argo-cd//renovate-presets/fix/disable-all-updates.json5",
"github>argoproj/argo-cd//renovate-presets/devtool.json5",
"github>argoproj/argo-cd//renovate-presets/docs.json5"
]
}

View File

@@ -8,7 +8,7 @@ Checklist:
* [ ] Either (a) I've created an [enhancement proposal](https://github.com/argoproj/argo-cd/issues/new/choose) and discussed it with the community, (b) this is a bug fix, or (c) this does not need to be in the release notes.
* [ ] The title of the PR states what changed and the related issues number (used for the release note).
* [ ] The title of the PR conforms to the [Title of the PR](https://argo-cd.readthedocs.io/en/latest/developer-guide/submit-your-pr/#title-of-the-pr)
* [ ] The title of the PR conforms to the [Toolchain Guide](https://argo-cd.readthedocs.io/en/latest/developer-guide/toolchain-guide/#title-of-the-pr)
* [ ] I've included "Closes [ISSUE #]" or "Fixes [ISSUE #]" in the description to automatically close the associated issue.
* [ ] I've updated both the CLI and UI to expose my feature, or I plan to submit a second PR with them.
* [ ] Does this PR require documentation updates?

View File

@@ -14,7 +14,7 @@ on:
env:
# Golang version to use across CI steps
# renovate: datasource=golang-version packageName=golang
GOLANG_VERSION: '1.25.0'
GOLANG_VERSION: '1.25.5'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
@@ -82,7 +82,7 @@ jobs:
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Restore go build cache
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
@@ -103,16 +103,16 @@ jobs:
- changes
steps:
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Setup Golang
uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Run golangci-lint
uses: golangci/golangci-lint-action@4afd733a84b1f43292c63897423277bb7f4313a9 # v8.0.0
uses: golangci/golangci-lint-action@1e7e51e771db61008b38414a730f564565cf7c20 # v9.2.0
with:
# renovate: datasource=go packageName=github.com/golangci/golangci-lint versioning=regex:^v(?<major>\d+)\.(?<minor>\d+)\.(?<patch>\d+)?$
version: v2.4.0
version: v2.5.0
args: --verbose
test-go:
@@ -153,7 +153,7 @@ jobs:
run: |
echo "/usr/local/bin" >> $GITHUB_PATH
- name: Restore go build cache
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
@@ -217,7 +217,7 @@ jobs:
run: |
echo "/usr/local/bin" >> $GITHUB_PATH
- name: Restore go build cache
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
@@ -311,7 +311,7 @@ jobs:
node-version: '22.9.0'
- name: Restore node dependency cache
id: cache-dependencies
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: ui/node_modules
key: ${{ runner.os }}-node-dep-v2-${{ hashFiles('**/yarn.lock') }}
@@ -339,7 +339,7 @@ jobs:
- uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
- run: |
sudo apt-get install shellcheck
shellcheck -e SC2059 -e SC2154 -e SC2034 -e SC2016 -e SC1091 $(find . -type f -name '*.sh' | grep -v './ui/node_modules') | tee sc.log
shellcheck -e SC2086 -e SC2046 -e SC2068 -e SC2206 -e SC2048 -e SC2059 -e SC2154 -e SC2034 -e SC2016 -e SC2128 -e SC1091 -e SC2207 $(find . -type f -name '*.sh') | tee sc.log
test ! -s sc.log
analyze:
@@ -360,7 +360,7 @@ jobs:
fetch-depth: 0
- name: Restore node dependency cache
id: cache-dependencies
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: ui/node_modules
key: ${{ runner.os }}-node-dep-v2-${{ hashFiles('**/yarn.lock') }}
@@ -368,12 +368,12 @@ jobs:
run: |
rm -rf ui/node_modules/argo-ui/node_modules
- name: Get e2e code coverage
uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: e2e-code-coverage
path: e2e-code-coverage
- name: Get unit test code coverage
uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: test-results
path: test-results
@@ -385,7 +385,7 @@ jobs:
run: |
go tool covdata percent -i=test-results,e2e-code-coverage/applicationset-controller,e2e-code-coverage/repo-server,e2e-code-coverage/app-controller,e2e-code-coverage/commit-server -o test-results/full-coverage.out
- name: Upload code coverage information to codecov.io
uses: codecov/codecov-action@fdcc8476540edceab3de004e990f80d881c6cc00 # v5.5.0
uses: codecov/codecov-action@18283e04ce6e62d37312384ff67231eb8fd56d24 # v5.4.3
with:
files: test-results/full-coverage.out
fail_ci_if_error: true
@@ -402,7 +402,7 @@ jobs:
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
uses: SonarSource/sonarqube-scan-action@8c71dc039c2dd71d3821e89a2b58ecc7fee6ced9 # v5.3.0
uses: SonarSource/sonarqube-scan-action@2500896589ef8f7247069a56136f8dc177c27ccf # v5.2.0
if: env.sonar_secret != ''
test-e2e:
name: Run end-to-end tests
@@ -414,14 +414,14 @@ jobs:
# latest: true means that this version mush upload the coverage report to codecov.io
# We designate the latest version because we only collect code coverage for that version.
k3s:
-  - version: v1.33.1
+  - version: v1.34.2
     latest: true
+  - version: v1.33.1
+    latest: false
   - version: v1.32.1
     latest: false
   - version: v1.31.0
     latest: false
-  - version: v1.30.4
-    latest: false
needs:
- build-go
- changes
@@ -468,7 +468,7 @@ jobs:
sudo chmod go-r $HOME/.kube/config
kubectl version
- name: Restore go build cache
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
@@ -496,7 +496,7 @@ jobs:
run: |
docker pull ghcr.io/dexidp/dex:v2.43.0
docker pull argoproj/argo-cd-ci-builder:v1.0.0
docker pull redis:7.2.7-alpine
docker pull redis:7.2.11-alpine
- name: Create target directory for binaries in the build-process
run: |
mkdir -p dist

View File

@@ -73,7 +73,7 @@ jobs:
cache: false
- name: Install cosign
uses: sigstore/cosign-installer@d58896d6a1865668819e1d91763c7751a165e159 # v3.9.2
uses: sigstore/cosign-installer@fb28c2b6339dcd94da6e4cbcbc5e888961f6f8c3 # v3.9.0
- uses: docker/setup-qemu-action@29109295f81e9208d7d86ff1c6c12d2833863392 # v3.6.0
- uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
@@ -103,7 +103,7 @@ jobs:
echo 'EOF' >> $GITHUB_ENV
- name: Login to Quay.io
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: quay.io
username: ${{ secrets.quay_username }}
@@ -111,7 +111,7 @@ jobs:
if: ${{ inputs.quay_image_name && inputs.push }}
- name: Login to GitHub Container Registry
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: ghcr.io
username: ${{ secrets.ghcr_username }}
@@ -119,7 +119,7 @@ jobs:
if: ${{ inputs.ghcr_image_name && inputs.push }}
- name: Login to dockerhub Container Registry
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
username: ${{ secrets.docker_username }}
password: ${{ secrets.docker_password }}

View File

@@ -53,7 +53,7 @@ jobs:
with:
# Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
# renovate: datasource=golang-version packageName=golang
go-version: 1.25.0
go-version: 1.25.5
platforms: ${{ needs.set-vars.outputs.platforms }}
push: false
@@ -70,7 +70,7 @@ jobs:
ghcr_image_name: ghcr.io/argoproj/argo-cd/argocd:${{ needs.set-vars.outputs.image-tag }}
# Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
# renovate: datasource=golang-version packageName=golang
go-version: 1.25.0
go-version: 1.25.5
platforms: ${{ needs.set-vars.outputs.platforms }}
push: true
secrets:

View File

@@ -11,7 +11,7 @@ permissions: {}
env:
# renovate: datasource=golang-version packageName=golang
GOLANG_VERSION: '1.25.0' # Note: go-version must also be set in job argocd-image.with.go-version
GOLANG_VERSION: '1.25.5' # Note: go-version must also be set in job argocd-image.with.go-version
jobs:
argocd-image:
@@ -25,13 +25,49 @@ jobs:
quay_image_name: quay.io/argoproj/argocd:${{ github.ref_name }}
# Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
# renovate: datasource=golang-version packageName=golang
go-version: 1.25.0
go-version: 1.25.5
platforms: linux/amd64,linux/arm64,linux/s390x,linux/ppc64le
push: true
secrets:
quay_username: ${{ secrets.RELEASE_QUAY_USERNAME }}
quay_password: ${{ secrets.RELEASE_QUAY_TOKEN }}
setup-variables:
name: Setup Release Variables
if: github.repository == 'argoproj/argo-cd'
runs-on: ubuntu-22.04
outputs:
is_pre_release: ${{ steps.var.outputs.is_pre_release }}
is_latest_release: ${{ steps.var.outputs.is_latest_release }}
steps:
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
- name: Setup variables
id: var
run: |
set -xue
# Fetch all tag information
git fetch --prune --tags --force
LATEST_RELEASE_TAG=$(git -c 'versionsort.suffix=-rc' tag --list --sort=version:refname | grep -v '-' | tail -n1)
PRE_RELEASE=false
# Check if latest tag is a pre-release
if echo ${{ github.ref_name }} | grep -E -- '-rc[0-9]+$';then
PRE_RELEASE=true
fi
IS_LATEST=false
# Ensure latest release tag matches github.ref_name
if [[ $LATEST_RELEASE_TAG == ${{ github.ref_name }} ]];then
IS_LATEST=true
fi
echo "is_pre_release=$PRE_RELEASE" >> $GITHUB_OUTPUT
echo "is_latest_release=$IS_LATEST" >> $GITHUB_OUTPUT
argocd-image-provenance:
needs: [argocd-image]
permissions:
@@ -50,15 +86,17 @@ jobs:
goreleaser:
needs:
- setup-variables
- argocd-image
- argocd-image-provenance
permissions:
contents: write # used for uploading assets
if: github.repository == 'argoproj/argo-cd'
runs-on: ubuntu-22.04
env:
GORELEASER_MAKE_LATEST: ${{ needs.setup-variables.outputs.is_latest_release }}
outputs:
hashes: ${{ steps.hash.outputs.hashes }}
steps:
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
@@ -96,7 +134,7 @@ jobs:
tool-cache: false
- name: Run GoReleaser
uses: goreleaser/goreleaser-action@e435ccd777264be153ace6237001ef4d979d3a7a # v6.4.0
uses: goreleaser/goreleaser-action@9c156ee8a17a598857849441385a2041ef570552 # v6.3.0
id: run-goreleaser
with:
version: latest
@@ -142,7 +180,7 @@ jobs:
permissions:
contents: write # Needed for release uploads
outputs:
hashes: ${{ steps.sbom-hash.outputs.hashes}}
hashes: ${{ steps.sbom-hash.outputs.hashes }}
if: github.repository == 'argoproj/argo-cd'
runs-on: ubuntu-22.04
steps:
@@ -221,6 +259,7 @@ jobs:
post-release:
needs:
- setup-variables
- argocd-image
- goreleaser
- generate-sbom
@@ -229,6 +268,8 @@ jobs:
pull-requests: write # Needed to create PR for VERSION update.
if: github.repository == 'argoproj/argo-cd'
runs-on: ubuntu-22.04
env:
TAG_STABLE: ${{ needs.setup-variables.outputs.is_latest_release }}
steps:
- name: Checkout code
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
@@ -242,27 +283,6 @@ jobs:
git config --global user.email 'ci@argoproj.com'
git config --global user.name 'CI'
- name: Check if tag is the latest version and not a pre-release
run: |
set -xue
# Fetch all tag information
git fetch --prune --tags --force
LATEST_TAG=$(git -c 'versionsort.suffix=-rc' tag --list --sort=version:refname | tail -n1)
PRE_RELEASE=false
# Check if latest tag is a pre-release
if echo $LATEST_TAG | grep -E -- '-rc[0-9]+$';then
PRE_RELEASE=true
fi
# Ensure latest tag matches github.ref_name & not a pre-release
if [[ $LATEST_TAG == ${{ github.ref_name }} ]] && [[ $PRE_RELEASE != 'true' ]];then
echo "TAG_STABLE=true" >> $GITHUB_ENV
else
echo "TAG_STABLE=false" >> $GITHUB_ENV
fi
- name: Update stable tag to latest version
run: |
git tag -f stable ${{ github.ref_name }}

View File

@@ -1,31 +0,0 @@
name: Renovate
on:
schedule:
- cron: '0 * * * *'
workflow_dispatch: {}
permissions:
contents: read
jobs:
renovate:
runs-on: ubuntu-latest
steps:
- name: Get token
id: get_token
uses: actions/create-github-app-token@a8d616148505b5069dccd32f177bb87d7f39123b # v2.1.1
with:
app-id: ${{ vars.RENOVATE_APP_ID }}
private-key: ${{ secrets.RENOVATE_APP_PRIVATE_KEY }}
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # 4.2.2
- name: Self-hosted Renovate
uses: renovatebot/github-action@b11417b9eaac3145fe9a8544cee66503724e32b6 #43.0.8
with:
configurationFile: .github/configs/renovate-config.js
token: '${{ steps.get_token.outputs.token }}'
env:
LOG_LEVEL: 'debug'
RENOVATE_REPOSITORIES: '${{ github.repository }}'

View File

@@ -58,6 +58,7 @@ linters:
- commentedOutCode
- deferInLoop
- exitAfterDefer
- exposedSyncMutex
- hugeParam
- importShadow
- paramTypeCombine # Leave disabled, there are too many failures to be worth fixing.

View File

@@ -49,13 +49,14 @@ archives:
- argocd-cli
name_template: |-
{{ .ProjectName }}-{{ .Os }}-{{ .Arch }}
formats: [ binary ]
formats: [binary]
checksum:
name_template: 'cli_checksums.txt'
algorithm: sha256
release:
make_latest: '{{ .Env.GORELEASER_MAKE_LATEST }}'
prerelease: auto
draft: false
header: |
@@ -80,7 +81,7 @@ release:
All Argo CD container images are signed by cosign. A Provenance is generated for container images and CLI binaries which meet the SLSA Level 3 specifications. See the [documentation](https://argo-cd.readthedocs.io/en/stable/operator-manual/signed-release-assets) on how to verify.
## Release Notes Blog Post
For a detailed breakdown of the key changes and improvements in this release, check out the [official blog post](https://blog.argoproj.io/argo-cd-v3-0-release-candidate-a0b933f4e58f)
For a detailed breakdown of the key changes and improvements in this release, check out the [official blog post](https://blog.argoproj.io/announcing-argo-cd-v3-1-f4389bc783c8)
## Upgrading

View File

@@ -32,9 +32,6 @@ packages:
github.com/argoproj/argo-cd/v3/controller/cache:
interfaces:
LiveStateCache: {}
github.com/argoproj/argo-cd/v3/controller/hydrator:
interfaces:
Dependencies: {}
github.com/argoproj/argo-cd/v3/pkg/apiclient/cluster:
interfaces:
ClusterServiceServer: {}

View File

@@ -1,10 +1,10 @@
ARG BASE_IMAGE=docker.io/library/ubuntu:25.04@sha256:10bb10bb062de665d4dc3e0ea36715270ead632cfcb74d08ca2273712a0dfb42
ARG BASE_IMAGE=docker.io/library/ubuntu:24.04@sha256:80dd3c3b9c6cecb9f1667e9290b3bc61b78c2678c02cbdae5f0fea92cc6734ab
####################################################################################################
# Builder image
# Initial stage which pulls prepares build dependencies and CLI tooling we need for our final image
# Also used as the image in CI jobs so needs all dependencies
####################################################################################################
FROM --platform=$BUILDPLATFORM docker.io/library/golang:1.25.0@sha256:9e56f0d0f043a68bb8c47c819e47dc29f6e8f5129b8885bed9d43f058f7f3ed6 AS builder
FROM docker.io/library/golang:1.25.5@sha256:6cc2338c038bc20f96ab32848da2b5c0641bb9bb5363f2c33e9b7c8838f9a208 AS builder
WORKDIR /tmp
@@ -103,7 +103,7 @@ RUN HOST_ARCH=$TARGETARCH NODE_ENV='production' NODE_ONLINE_ENV='online' NODE_OP
####################################################################################################
# Argo CD Build stage which performs the actual build of Argo CD binaries
####################################################################################################
FROM --platform=$BUILDPLATFORM docker.io/library/golang:1.25.0@sha256:9e56f0d0f043a68bb8c47c819e47dc29f6e8f5129b8885bed9d43f058f7f3ed6 AS argocd-build
FROM --platform=$BUILDPLATFORM docker.io/library/golang:1.25.5@sha256:6cc2338c038bc20f96ab32848da2b5c0641bb9bb5363f2c33e9b7c8838f9a208 AS argocd-build
WORKDIR /go/src/github.com/argoproj/argo-cd

View File

@@ -1,4 +1,4 @@
FROM docker.io/library/golang:1.25.0@sha256:9e56f0d0f043a68bb8c47c819e47dc29f6e8f5129b8885bed9d43f058f7f3ed6
FROM docker.io/library/golang:1.24.1@sha256:c5adecdb7b3f8c5ca3c88648a861882849cc8b02fed68ece31e25de88ad13418
ENV DEBIAN_FRONTEND=noninteractive

View File

@@ -113,6 +113,7 @@ define run-in-test-server
-v ${GOPATH}/pkg/mod:/go/pkg/mod${VOLUME_MOUNT} \
-v ${GOCACHE}:/tmp/go-build-cache${VOLUME_MOUNT} \
-v ${HOME}/.kube:/home/user/.kube${VOLUME_MOUNT} \
-v /tmp:/tmp${VOLUME_MOUNT} \
-w ${DOCKER_WORKDIR} \
-p ${ARGOCD_E2E_APISERVER_PORT}:8080 \
-p 4000:4000 \
@@ -137,6 +138,7 @@ define run-in-test-client
-v ${GOPATH}/pkg/mod:/go/pkg/mod${VOLUME_MOUNT} \
-v ${GOCACHE}:/tmp/go-build-cache${VOLUME_MOUNT} \
-v ${HOME}/.kube:/home/user/.kube${VOLUME_MOUNT} \
-v /tmp:/tmp${VOLUME_MOUNT} \
-w ${DOCKER_WORKDIR} \
$(PODMAN_ARGS) \
$(TEST_TOOLS_PREFIX)$(TEST_TOOLS_IMAGE):$(TEST_TOOLS_TAG) \
@@ -602,7 +604,6 @@ install-test-tools-local:
.PHONY: install-codegen-tools-local
install-codegen-tools-local:
./hack/install.sh codegen-tools
./hack/install.sh codegen-go-tools
# Installs all tools required for running codegen (Go packages)
.PHONY: install-go-tools-local

View File

@@ -3,9 +3,9 @@ header:
expiration-date: '2024-10-31T00:00:00.000Z' # One year from initial release.
last-updated: '2023-10-27'
last-reviewed: '2023-10-27'
commit-hash: 320f46f06beaf75f9c406e3a47e2e09d36e2047a
commit-hash: 226a670fe6b3c6769ff6d18e6839298a58e4577d
project-url: https://github.com/argoproj/argo-cd
project-release: v3.2.0
project-release: v3.1.0
changelog: https://github.com/argoproj/argo-cd/releases
license: https://github.com/argoproj/argo-cd/blob/master/LICENSE
project-lifecycle:

View File

@@ -69,7 +69,7 @@ docker_build_with_restart(
],
platform=platform,
live_update=[
sync('.tilt-bin/argocd_linux', '/usr/local/bin/argocd'),
sync('.tilt-bin/argocd_linux_amd64', '/usr/local/bin/argocd'),
],
only=[
'.tilt-bin',
@@ -260,7 +260,6 @@ local_resource(
'make lint-local',
deps = code_deps,
allow_parallel=True,
resource_deps=['vendor']
)
local_resource(

View File

@@ -5,10 +5,8 @@ PR with your organization name if you are using Argo CD.
Currently, the following organizations are **officially** using Argo CD:
1. [100ms](https://www.100ms.ai/)
1. [127Labs](https://127labs.com/)
1. [3Rein](https://www.3rein.com/)
1. [42 School](https://42.fr/)
1. [4data](https://4data.ch/)
1. [7shifts](https://www.7shifts.com/)
1. [Adevinta](https://www.adevinta.com/)
@@ -42,7 +40,6 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Back Market](https://www.backmarket.com)
1. [Bajaj Finserv Health Ltd.](https://www.bajajfinservhealth.in)
1. [Baloise](https://www.baloise.com)
1. [Batumbu](https://batumbu.id)
1. [BCDevExchange DevOps Platform](https://bcdevexchange.org/DevOpsPlatform)
1. [Beat](https://thebeat.co/en/)
1. [Beez Innovation Labs](https://www.beezlabs.com/)
@@ -74,7 +71,6 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Chime](https://www.chime.com)
1. [Chronicle Labs](https://chroniclelabs.org)
1. [Cisco ET&I](https://eti.cisco.com/)
1. [Close](https://www.close.com/)
1. [Cloud Posse](https://www.cloudposse.com/)
1. [Cloud Scale](https://cloudscaleinc.com/)
1. [CloudScript](https://www.cloudscript.com.br/)
@@ -164,7 +160,6 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Hiya](https://hiya.com)
1. [Honestbank](https://honestbank.com)
1. [Hostinger](https://www.hostinger.com)
1. [Hotjar](https://www.hotjar.com)
1. [IABAI](https://www.iab.ai)
1. [IBM](https://www.ibm.com/)
1. [Ibotta](https://home.ibotta.com)
@@ -178,7 +173,6 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Info Support](https://www.infosupport.com/)
1. [InsideBoard](https://www.insideboard.com)
1. [Instruqt](https://www.instruqt.com)
1. [Intel](https://www.intel.com)
1. [Intuit](https://www.intuit.com/)
1. [Jellysmack](https://www.jellysmack.com)
1. [Joblift](https://joblift.com/)
@@ -327,7 +321,6 @@ Currently, the following organizations are **officially** using Argo CD:
1. [SEKAI](https://www.sekai.io/)
1. [Semgrep](https://semgrep.com)
1. [Shield](https://shield.com)
1. [Shipfox](https://www.shipfox.io)
1. [SI Analytics](https://si-analytics.ai)
1. [Sidewalk Entertainment](https://sidewalkplay.com/)
1. [Skit](https://skit.ai/)
@@ -340,7 +333,6 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Snapp](https://snapp.ir/)
1. [Snyk](https://snyk.io/)
1. [Softway Medical](https://www.softwaymedical.fr/)
1. [Sophotech](https://sopho.tech)
1. [South China Morning Post (SCMP)](https://www.scmp.com/)
1. [Speee](https://speee.jp/)
1. [Spendesk](https://spendesk.com/)

View File

@@ -1 +1 @@
3.2.0
3.1.13

View File

@@ -16,7 +16,6 @@ package controllers
import (
"context"
"errors"
"fmt"
"reflect"
"runtime/debug"
@@ -68,18 +67,12 @@ const (
// https://github.com/argoproj-labs/argocd-notifications/blob/33d345fa838829bb50fca5c08523aba380d2c12b/pkg/controller/state.go#L17
NotifiedAnnotationKey = "notified.notifications.argoproj.io"
ReconcileRequeueOnValidationError = time.Minute * 3
ReverseDeletionOrder = "Reverse"
AllAtOnceDeletionOrder = "AllAtOnce"
)
var defaultPreservedAnnotations = []string{
	NotifiedAnnotationKey,
	argov1alpha1.AnnotationKeyRefresh,
-}
-
-type deleteInOrder struct {
-	AppName string
-	Step    int
+	argov1alpha1.AnnotationKeyHydrate,
}
// ApplicationSetReconciler reconciles a ApplicationSet object
@@ -100,6 +93,7 @@ type ApplicationSetReconciler struct {
GlobalPreservedAnnotations []string
GlobalPreservedLabels []string
Metrics *metrics.ApplicationsetMetrics
MaxResourcesStatusCount int
}
// +kubebuilder:rbac:groups=argoproj.io,resources=applicationsets,verbs=get;list;watch;create;update;patch;delete
@@ -147,19 +141,6 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
}
logCtx.Debugf("ownerReferences referring %s is deleted from generated applications", appsetName)
}
if isProgressiveSyncDeletionOrderReversed(&applicationSetInfo) {
logCtx.Debugf("DeletionOrder is set as Reverse on %s", appsetName)
currentApplications, err := r.getCurrentApplications(ctx, applicationSetInfo)
if err != nil {
return ctrl.Result{}, err
}
requeueTime, err := r.performReverseDeletion(ctx, logCtx, applicationSetInfo, currentApplications)
if err != nil {
return ctrl.Result{}, err
} else if requeueTime > 0 {
return ctrl.Result{RequeueAfter: requeueTime}, err
}
}
controllerutil.RemoveFinalizer(&applicationSetInfo, argov1alpha1.ResourcesFinalizerName)
if err := r.Update(ctx, &applicationSetInfo); err != nil {
return ctrl.Result{}, err
@@ -175,7 +156,7 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
// Log a warning if there are unrecognized generators
_ = utils.CheckInvalidGenerators(&applicationSetInfo)
// desiredApplications is the main list of all expected Applications from all generators in this appset.
generatedApplications, applicationSetReason, err := template.GenerateApplications(logCtx, applicationSetInfo, r.Generators, r.Renderer, r.Client)
desiredApplications, applicationSetReason, err := template.GenerateApplications(logCtx, applicationSetInfo, r.Generators, r.Renderer, r.Client)
if err != nil {
logCtx.Errorf("unable to generate applications: %v", err)
_ = r.setApplicationSetStatusCondition(ctx,
@@ -193,7 +174,7 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
parametersGenerated = true
validateErrors, err := r.validateGeneratedApplications(ctx, generatedApplications, applicationSetInfo)
validateErrors, err := r.validateGeneratedApplications(ctx, desiredApplications, applicationSetInfo)
if err != nil {
// While some generators may return an error that requires user intervention,
// other generators reference external resources that may change to cause
@@ -246,31 +227,35 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
appMap[app.Name] = app
}
appSyncMap, err = r.performProgressiveSyncs(ctx, logCtx, applicationSetInfo, currentApplications, generatedApplications, appMap)
appSyncMap, err = r.performProgressiveSyncs(ctx, logCtx, applicationSetInfo, currentApplications, desiredApplications, appMap)
if err != nil {
return ctrl.Result{}, fmt.Errorf("failed to perform progressive sync reconciliation for application set: %w", err)
}
}
} else {
// Progressive Sync is disabled, clear any existing applicationStatus to prevent stale data
if len(applicationSetInfo.Status.ApplicationStatus) > 0 {
logCtx.Infof("Progressive Sync disabled, removing %v AppStatus entries from ApplicationSet %v", len(applicationSetInfo.Status.ApplicationStatus), applicationSetInfo.Name)
err := r.setAppSetApplicationStatus(ctx, logCtx, &applicationSetInfo, []argov1alpha1.ApplicationSetApplicationStatus{})
if err != nil {
return ctrl.Result{}, fmt.Errorf("failed to clear AppSet application statuses when Progressive Sync is disabled for %v: %w", applicationSetInfo.Name, err)
}
}
}
	var validApps []argov1alpha1.Application
-	for i := range generatedApplications {
-		if validateErrors[generatedApplications[i].QualifiedName()] == nil {
-			validApps = append(validApps, generatedApplications[i])
+	for i := range desiredApplications {
+		if validateErrors[i] == nil {
+			validApps = append(validApps, desiredApplications[i])
		}
	}
	if len(validateErrors) > 0 {
-		errorApps := make([]string, 0, len(validateErrors))
-		for key := range validateErrors {
-			errorApps = append(errorApps, key)
-		}
-		sort.Strings(errorApps)
		var message string
-		for _, appName := range errorApps {
-			message = validateErrors[appName].Error()
-			logCtx.WithField("application", appName).Errorf("validation error found during application validation: %s", message)
+		for _, v := range validateErrors {
+			message = v.Error()
+			logCtx.Errorf("validation error found during application validation: %s", message)
		}
if len(validateErrors) > 1 {
// Only the last message gets added to the appset status, to keep the size reasonable.
@@ -325,12 +310,12 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
}
if utils.DefaultPolicy(applicationSetInfo.Spec.SyncPolicy, r.Policy, r.EnablePolicyOverride).AllowDelete() {
err = r.deleteInCluster(ctx, logCtx, applicationSetInfo, generatedApplications)
err = r.deleteInCluster(ctx, logCtx, applicationSetInfo, desiredApplications)
if err != nil {
_ = r.setApplicationSetStatusCondition(ctx,
&applicationSetInfo,
argov1alpha1.ApplicationSetCondition{
Type: argov1alpha1.ApplicationSetConditionErrorOccurred,
Type: argov1alpha1.ApplicationSetConditionResourcesUpToDate,
Message: err.Error(),
Reason: argov1alpha1.ApplicationSetReasonDeleteApplicationError,
Status: argov1alpha1.ApplicationSetConditionStatusTrue,
@@ -384,169 +369,120 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
}, nil
}
func (r *ApplicationSetReconciler) performReverseDeletion(ctx context.Context, logCtx *log.Entry, appset argov1alpha1.ApplicationSet, currentApps []argov1alpha1.Application) (time.Duration, error) {
requeueTime := 10 * time.Second
stepLength := len(appset.Spec.Strategy.RollingSync.Steps)
// map applications by name using current applications
appMap := make(map[string]*argov1alpha1.Application)
for _, app := range currentApps {
appMap[app.Name] = &app
}
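One subtlety in the appMap loop above: taking the address of the range variable (&app) only yields a distinct pointer per iteration on Go 1.22 and later; on earlier toolchains every map entry would alias the same loop variable. A minimal sketch of the version-safe idiom, using a stub Application type in place of the real API struct:

package main

import "fmt"

// Application is a stub standing in for argov1alpha1.Application.
type Application struct{ Name string }

func main() {
	currentApps := []Application{{Name: "a"}, {Name: "b"}}

	appMap := make(map[string]*Application)
	for _, app := range currentApps {
		app := app // fresh copy per iteration; implicit on Go 1.22+, required before that
		appMap[app.Name] = &app
	}

	// Each map entry now points at its own copy.
	fmt.Println(appMap["a"].Name, appMap["b"].Name) // prints: a b
}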
// Get Rolling Sync Step Maps
_, appStepMap := r.buildAppDependencyList(logCtx, appset, currentApps)
// reverse the AppStepMap to perform deletion
var reverseDeleteAppSteps []deleteInOrder
for appName, appStep := range appStepMap {
reverseDeleteAppSteps = append(reverseDeleteAppSteps, deleteInOrder{appName, stepLength - appStep - 1})
}
sort.Slice(reverseDeleteAppSteps, func(i, j int) bool {
return reverseDeleteAppSteps[i].Step < reverseDeleteAppSteps[j].Step
})
for _, step := range reverseDeleteAppSteps {
logCtx.Infof("step %v : app %v", step.Step, step.AppName)
app := appMap[step.AppName]
retrievedApp := argov1alpha1.Application{}
if err := r.Get(ctx, types.NamespacedName{Name: app.Name, Namespace: app.Namespace}, &retrievedApp); err != nil {
if apierrors.IsNotFound(err) {
logCtx.Infof("application %s successfully deleted", step.AppName)
continue
}
}
// Check if the application is already being deleted
if retrievedApp.DeletionTimestamp != nil {
logCtx.Infof("application %s has been marked for deletion, but object not removed yet", step.AppName)
if time.Since(retrievedApp.DeletionTimestamp.Time) > 2*time.Minute {
return 0, errors.New("application has not been deleted in over 2 minutes")
}
}
// The application has not been deleted yet, trigger its deletion
if err := r.Delete(ctx, &retrievedApp); err != nil {
return 0, err
}
return requeueTime, nil
}
logCtx.Infof("completed reverse deletion for ApplicationSet %v", appset.Name)
return 0, nil
}
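To make the reverse ordering concrete: with N rolling-sync steps, an application deployed in step s (0-indexed) is deleted in wave N-s-1, so the last rollout step is torn down first. A self-contained sketch of that mapping, with the deleteInOrder helper reproduced from the code above:

package main

import (
	"fmt"
	"sort"
)

// deleteInOrder pairs an application with the deletion wave it belongs to.
type deleteInOrder struct {
	AppName string
	Step    int
}

func main() {
	stepLength := 3
	// appStepMap records the rollout step each app was deployed in (0-indexed).
	appStepMap := map[string]int{"frontend": 0, "backend": 1, "db": 2}

	var waves []deleteInOrder
	for name, step := range appStepMap {
		waves = append(waves, deleteInOrder{name, stepLength - step - 1})
	}
	sort.Slice(waves, func(i, j int) bool { return waves[i].Step < waves[j].Step })

	for _, w := range waves {
		fmt.Printf("wave %d: delete %s\n", w.Step, w.AppName)
	}
	// db (deployed last) is deleted first; frontend (deployed first) is deleted last.
}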
func getParametersGeneratedCondition(parametersGenerated bool, message string) argov1alpha1.ApplicationSetCondition {
var parametersGeneratedCondition argov1alpha1.ApplicationSetCondition
var paramtersGeneratedCondition argov1alpha1.ApplicationSetCondition
if parametersGenerated {
parametersGeneratedCondition = argov1alpha1.ApplicationSetCondition{
paramtersGeneratedCondition = argov1alpha1.ApplicationSetCondition{
Type: argov1alpha1.ApplicationSetConditionParametersGenerated,
Message: "Successfully generated parameters for all Applications",
Reason: argov1alpha1.ApplicationSetReasonParametersGenerated,
Status: argov1alpha1.ApplicationSetConditionStatusTrue,
}
} else {
parametersGeneratedCondition = argov1alpha1.ApplicationSetCondition{
paramtersGeneratedCondition = argov1alpha1.ApplicationSetCondition{
Type: argov1alpha1.ApplicationSetConditionParametersGenerated,
Message: message,
Reason: argov1alpha1.ApplicationSetReasonErrorOccurred,
Status: argov1alpha1.ApplicationSetConditionStatusFalse,
}
}
return parametersGeneratedCondition
return paramtersGeneratedCondition
}
func (r *ApplicationSetReconciler) setApplicationSetStatusCondition(ctx context.Context, applicationSet *argov1alpha1.ApplicationSet, condition argov1alpha1.ApplicationSetCondition, parametersGenerated bool) error {
// Initialize the default condition types that this method evaluates
func getResourceUpToDateCondition(errorOccurred bool, message string, reason string) argov1alpha1.ApplicationSetCondition {
var resourceUpToDateCondition argov1alpha1.ApplicationSetCondition
if errorOccurred {
resourceUpToDateCondition = argov1alpha1.ApplicationSetCondition{
Type: argov1alpha1.ApplicationSetConditionResourcesUpToDate,
Message: message,
Reason: reason,
Status: argov1alpha1.ApplicationSetConditionStatusFalse,
}
} else {
resourceUpToDateCondition = argov1alpha1.ApplicationSetCondition{
Type: argov1alpha1.ApplicationSetConditionResourcesUpToDate,
Message: "ApplicationSet up to date",
Reason: argov1alpha1.ApplicationSetReasonApplicationSetUpToDate,
Status: argov1alpha1.ApplicationSetConditionStatusTrue,
}
}
return resourceUpToDateCondition
}
func (r *ApplicationSetReconciler) setApplicationSetStatusCondition(ctx context.Context, applicationSet *argov1alpha1.ApplicationSet, condition argov1alpha1.ApplicationSetCondition, paramtersGenerated bool) error {
// check if error occurred during reconcile process
errOccurred := condition.Type == argov1alpha1.ApplicationSetConditionErrorOccurred
var errOccurredCondition argov1alpha1.ApplicationSetCondition
if errOccurred {
errOccurredCondition = condition
} else {
errOccurredCondition = argov1alpha1.ApplicationSetCondition{
Type: argov1alpha1.ApplicationSetConditionErrorOccurred,
Message: "Successfully generated parameters for all Applications",
Reason: argov1alpha1.ApplicationSetReasonApplicationSetUpToDate,
Status: argov1alpha1.ApplicationSetConditionStatusFalse,
}
}
paramtersGeneratedCondition := getParametersGeneratedCondition(paramtersGenerated, condition.Message)
resourceUpToDateCondition := getResourceUpToDateCondition(errOccurred, condition.Message, condition.Reason)
evaluatedTypes := map[argov1alpha1.ApplicationSetConditionType]bool{
argov1alpha1.ApplicationSetConditionErrorOccurred: true,
argov1alpha1.ApplicationSetConditionParametersGenerated: true,
argov1alpha1.ApplicationSetConditionErrorOccurred: false,
argov1alpha1.ApplicationSetConditionResourcesUpToDate: false,
argov1alpha1.ApplicationSetConditionRolloutProgressing: false,
argov1alpha1.ApplicationSetConditionResourcesUpToDate: true,
}
// Evaluate current condition
evaluatedTypes[condition.Type] = true
newConditions := []argov1alpha1.ApplicationSetCondition{condition}
newConditions := []argov1alpha1.ApplicationSetCondition{errOccurredCondition, paramtersGeneratedCondition, resourceUpToDateCondition}
if !isRollingSyncStrategy(applicationSet) {
// Progressive Sync is always evaluated so its conditions are removed when it is not enabled
if progressiveSyncsRollingSyncStrategyEnabled(applicationSet) {
evaluatedTypes[argov1alpha1.ApplicationSetConditionRolloutProgressing] = true
}
// Evaluate ParametersGenerated since it is always provided
if condition.Type != argov1alpha1.ApplicationSetConditionParametersGenerated {
newConditions = append(newConditions, getParametersGeneratedCondition(parametersGenerated, condition.Message))
}
// Evaluate dependencies between conditions.
switch condition.Type {
case argov1alpha1.ApplicationSetConditionResourcesUpToDate:
if condition.Status == argov1alpha1.ApplicationSetConditionStatusTrue {
// If the resources are up to date, we know there were no errors
evaluatedTypes[argov1alpha1.ApplicationSetConditionErrorOccurred] = true
newConditions = append(newConditions, argov1alpha1.ApplicationSetCondition{
Type: argov1alpha1.ApplicationSetConditionErrorOccurred,
Status: argov1alpha1.ApplicationSetConditionStatusFalse,
Reason: condition.Reason,
Message: condition.Message,
})
}
case argov1alpha1.ApplicationSetConditionErrorOccurred:
if condition.Status == argov1alpha1.ApplicationSetConditionStatusTrue {
// If there is an error anywhere in the reconciliation, we cannot consider the resources up to date
evaluatedTypes[argov1alpha1.ApplicationSetConditionResourcesUpToDate] = true
newConditions = append(newConditions, argov1alpha1.ApplicationSetCondition{
Type: argov1alpha1.ApplicationSetConditionResourcesUpToDate,
Status: argov1alpha1.ApplicationSetConditionStatusFalse,
Reason: argov1alpha1.ApplicationSetReasonErrorOccurred,
Message: condition.Message,
})
}
case argov1alpha1.ApplicationSetConditionRolloutProgressing:
if !isRollingSyncStrategy(applicationSet) {
// if the condition is a rolling sync and it is disabled, ignore it
evaluatedTypes[condition.Type] = false
if condition.Type == argov1alpha1.ApplicationSetConditionRolloutProgressing {
newConditions = append(newConditions, condition)
}
}
// Update the applicationSet conditions
previousConditions := applicationSet.Status.Conditions
applicationSet.Status.SetConditions(newConditions, evaluatedTypes)
// Try not to call get/update if nothing has changed
needToUpdateConditions := len(applicationSet.Status.Conditions) != len(previousConditions)
if !needToUpdateConditions {
for i, c := range applicationSet.Status.Conditions {
previous := previousConditions[i]
if c.Type != previous.Type || c.Reason != previous.Reason || c.Status != previous.Status || c.Message != previous.Message {
needToUpdateConditions := false
for _, condition := range newConditions {
// do nothing if appset already has same condition
for _, c := range applicationSet.Status.Conditions {
if c.Type == condition.Type && (c.Reason != condition.Reason || c.Status != condition.Status || c.Message != condition.Message) {
needToUpdateConditions = true
break
}
}
}
if !needToUpdateConditions {
return nil
}
// DefaultRetry will retry 5 times with a backoff factor of 1, jitter of 0.1 and a duration of 10ms
err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
updatedAppset := &argov1alpha1.ApplicationSet{}
if err := r.Get(ctx, types.NamespacedName{Namespace: applicationSet.Namespace, Name: applicationSet.Name}, updatedAppset); err != nil {
if client.IgnoreNotFound(err) != nil {
return nil
if needToUpdateConditions || len(applicationSet.Status.Conditions) < len(newConditions) {
// fetch updated Application Set object before updating it
// DefaultRetry will retry 5 times with a backoff factor of 1, jitter of 0.1 and a duration of 10ms
err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
namespacedName := types.NamespacedName{Namespace: applicationSet.Namespace, Name: applicationSet.Name}
updatedAppset := &argov1alpha1.ApplicationSet{}
if err := r.Get(ctx, namespacedName, updatedAppset); err != nil {
if client.IgnoreNotFound(err) != nil {
return nil
}
return fmt.Errorf("error fetching updated application set: %w", err)
}
return fmt.Errorf("error fetching updated application set: %w", err)
}
updatedAppset.Status.SetConditions(newConditions, evaluatedTypes)
updatedAppset.Status.SetConditions(
newConditions, evaluatedTypes,
)
// Update the newly fetched object with new set of conditions
err := r.Client.Status().Update(ctx, updatedAppset)
if err != nil {
return err
// Update the newly fetched object with new set of conditions
err := r.Client.Status().Update(ctx, updatedAppset)
if err != nil {
return err
}
updatedAppset.DeepCopyInto(applicationSet)
return nil
})
if err != nil && !apierrors.IsNotFound(err) {
return fmt.Errorf("unable to set application set condition: %w", err)
}
updatedAppset.DeepCopyInto(applicationSet)
return nil
})
if err != nil && !apierrors.IsNotFound(err) {
return fmt.Errorf("unable to set application set condition: %w", err)
}
return nil
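The status update above leans on client-go's retry.RetryOnConflict, which re-runs the closure whenever the API server reports a resource-version conflict. A minimal sketch of that get-mutate-update pattern against a controller-runtime client; the helper name and signature are illustrative, not the controller's actual code:

package example

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/util/retry"
	"sigs.k8s.io/controller-runtime/pkg/client"

	argov1alpha1 "github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
)

// updateAppSetStatus is a hypothetical helper showing the conflict-retry loop.
func updateAppSetStatus(ctx context.Context, c client.Client, name types.NamespacedName, mutate func(*argov1alpha1.ApplicationSet)) error {
	err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Re-fetch on every attempt so we mutate the latest resource version.
		appset := &argov1alpha1.ApplicationSet{}
		if err := c.Get(ctx, name, appset); err != nil {
			return fmt.Errorf("error fetching application set: %w", err)
		}
		mutate(appset)
		return c.Status().Update(ctx, appset)
	})
	if err != nil && !apierrors.IsNotFound(err) {
		return fmt.Errorf("unable to update application set status: %w", err)
	}
	return nil
}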
@@ -554,33 +490,33 @@ func (r *ApplicationSetReconciler) setApplicationSetStatusCondition(ctx context.
// validateGeneratedApplications uses the Argo CD validation functions to verify the correctness of the
// generated applications.
func (r *ApplicationSetReconciler) validateGeneratedApplications(ctx context.Context, desiredApplications []argov1alpha1.Application, applicationSetInfo argov1alpha1.ApplicationSet) (map[string]error, error) {
errorsByApp := map[string]error{}
func (r *ApplicationSetReconciler) validateGeneratedApplications(ctx context.Context, desiredApplications []argov1alpha1.Application, applicationSetInfo argov1alpha1.ApplicationSet) (map[int]error, error) {
errorsByIndex := map[int]error{}
namesSet := map[string]bool{}
for i := range desiredApplications {
app := &desiredApplications[i]
for i, app := range desiredApplications {
if namesSet[app.Name] {
errorsByApp[app.QualifiedName()] = fmt.Errorf("ApplicationSet %s contains applications with duplicate name: %s", applicationSetInfo.Name, app.Name)
errorsByIndex[i] = fmt.Errorf("ApplicationSet %s contains applications with duplicate name: %s", applicationSetInfo.Name, app.Name)
continue
}
namesSet[app.Name] = true
appProject := &argov1alpha1.AppProject{}
err := r.Get(ctx, types.NamespacedName{Name: app.Spec.Project, Namespace: r.ArgoCDNamespace}, appProject)
if err != nil {
if apierrors.IsNotFound(err) {
errorsByApp[app.QualifiedName()] = fmt.Errorf("application references project %s which does not exist", app.Spec.Project)
errorsByIndex[i] = fmt.Errorf("application references project %s which does not exist", app.Spec.Project)
continue
}
return nil, err
}
if _, err = argoutil.GetDestinationCluster(ctx, app.Spec.Destination, r.ArgoDB); err != nil {
errorsByApp[app.QualifiedName()] = fmt.Errorf("application destination spec is invalid: %s", err.Error())
errorsByIndex[i] = fmt.Errorf("application destination spec is invalid: %s", err.Error())
continue
}
}
return errorsByApp, nil
return errorsByIndex, nil
}
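Keying validation errors by slice index rather than qualified name means two generated apps that collide on the same name no longer collide on the same map key, so the first occurrence can still be treated as valid. A small sketch of how a caller filters with the index-keyed map, using a stub type:

package main

import (
	"errors"
	"fmt"
)

type Application struct{ Name string }

func main() {
	desired := []Application{{"a"}, {"b"}, {"a"}} // note the duplicate name
	validateErrors := map[int]error{
		2: errors.New("duplicate name: a"), // only the second "a" is flagged
	}

	var valid []Application
	for i := range desired {
		if validateErrors[i] == nil { // a missing key yields nil, i.e. valid
			valid = append(valid, desired[i])
		}
	}
	fmt.Println(len(valid)) // 2: the first "a" and "b" survive
}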
func (r *ApplicationSetReconciler) getMinRequeueAfter(applicationSetInfo *argov1alpha1.ApplicationSet) time.Duration {
@@ -640,8 +576,9 @@ func (r *ApplicationSetReconciler) SetupWithManager(mgr ctrl.Manager, enableProg
Watches(
&corev1.Secret{},
&clusterSecretEventHandler{
Client: mgr.GetClient(),
Log: log.WithField("type", "createSecretEventHandler"),
Client: mgr.GetClient(),
Log: log.WithField("type", "createSecretEventHandler"),
ApplicationSetNamespaces: r.ApplicationSetNamespaces,
}).
Complete(r)
}
@@ -808,7 +745,7 @@ func (r *ApplicationSetReconciler) deleteInCluster(ctx context.Context, logCtx *
return fmt.Errorf("error getting current applications: %w", err)
}
m := make(map[string]bool) // will hold the app names in appList for the deletion process
m := make(map[string]bool) // Will hold the app names in appList for the deletion process
for _, app := range desiredApplications {
m[app.Name] = true
@@ -876,7 +813,7 @@ func (r *ApplicationSetReconciler) removeFinalizerOnInvalidDestination(ctx conte
}
if !matchingCluster {
appLog.Warnf("A match for the destination cluster for %s, by server url, couldn't be found", app.Name)
appLog.Warnf("A match for the destination cluster for %s, by server url, couldn't be found.", app.Name)
}
validDestination = matchingCluster
@@ -1087,11 +1024,6 @@ func progressiveSyncsRollingSyncStrategyEnabled(appset *argov1alpha1.Application
return isRollingSyncStrategy(appset) && len(appset.Spec.Strategy.RollingSync.Steps) > 0
}
func isProgressiveSyncDeletionOrderReversed(appset *argov1alpha1.ApplicationSet) bool {
// When progressive sync is enabled + deletionOrder is set to Reverse (case-insensitive)
return progressiveSyncsRollingSyncStrategyEnabled(appset) && strings.EqualFold(appset.Spec.Strategy.DeletionOrder, ReverseDeletionOrder)
}
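The case-insensitive comparison relies on strings.EqualFold; a short illustration (assuming the ReverseDeletionOrder constant is the string "Reverse"):

package main

import (
	"fmt"
	"strings"
)

func main() {
	const ReverseDeletionOrder = "Reverse" // assumed value of the constant

	fmt.Println(strings.EqualFold("reverse", ReverseDeletionOrder)) // true
	fmt.Println(strings.EqualFold("REVERSE", ReverseDeletionOrder)) // true
	fmt.Println(strings.EqualFold("forward", ReverseDeletionOrder)) // false
}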
func isApplicationHealthy(app argov1alpha1.Application) bool {
healthStatusString, syncStatusString, operationPhaseString := statusStrings(app)
@@ -1221,10 +1153,15 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusProgress
// if we have no RollingUpdate steps, clear out the existing ApplicationStatus entries
if progressiveSyncsRollingSyncStrategyEnabled(applicationSet) {
updateCountMap := []int{}
totalCountMap := []int{}
length := len(applicationSet.Spec.Strategy.RollingSync.Steps)
updateCountMap := make([]int, length)
totalCountMap := make([]int, length)
for s := 0; s < length; s++ {
updateCountMap = append(updateCountMap, 0)
totalCountMap = append(totalCountMap, 0)
}
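A note on the two allocation styles in this hunk: make([]int, length) already zero-fills the slice, so following it with a loop that appends zeros (as the other side of the diff does against an empty slice) would double it to 2*length entries. A quick illustration:

package main

import "fmt"

func main() {
	length := 3

	zeroed := make([]int, length) // len 3, every element already 0
	fmt.Println(len(zeroed), zeroed) // 3 [0 0 0]

	doubled := make([]int, length)
	for s := 0; s < length; s++ {
		doubled = append(doubled, 0) // grows past the zeroed prefix
	}
	fmt.Println(len(doubled), doubled) // 6 [0 0 0 0 0 0]
}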
// populate updateCountMap with counts of existing Pending and Progressing Applications
for _, appStatus := range applicationSet.Status.ApplicationStatus {
@@ -1283,56 +1220,44 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusProgress
}
func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusConditions(ctx context.Context, applicationSet *argov1alpha1.ApplicationSet) []argov1alpha1.ApplicationSetCondition {
if !isRollingSyncStrategy(applicationSet) {
return applicationSet.Status.Conditions
}
completedWaves := map[string]bool{}
appSetProgressing := false
for _, appStatus := range applicationSet.Status.ApplicationStatus {
if v, ok := completedWaves[appStatus.Step]; !ok {
completedWaves[appStatus.Step] = appStatus.Status == "Healthy"
} else {
completedWaves[appStatus.Step] = v && appStatus.Status == "Healthy"
}
}
isProgressing := false
progressingStep := ""
for i := range applicationSet.Spec.Strategy.RollingSync.Steps {
step := strconv.Itoa(i + 1)
isCompleted, ok := completedWaves[step]
if !ok {
// Step has no applications, so it is completed
continue
}
if !isCompleted {
isProgressing = true
progressingStep = step
if appStatus.Status != "Healthy" {
appSetProgressing = true
break
}
}
if isProgressing {
appSetConditionProgressing := false
for _, appSetCondition := range applicationSet.Status.Conditions {
if appSetCondition.Type == argov1alpha1.ApplicationSetConditionRolloutProgressing && appSetCondition.Status == argov1alpha1.ApplicationSetConditionStatusTrue {
appSetConditionProgressing = true
break
}
}
if appSetProgressing && !appSetConditionProgressing {
_ = r.setApplicationSetStatusCondition(ctx,
applicationSet,
argov1alpha1.ApplicationSetCondition{
Type: argov1alpha1.ApplicationSetConditionRolloutProgressing,
Message: "ApplicationSet is performing rollout of step " + progressingStep,
Message: "ApplicationSet Rollout Rollout started",
Reason: argov1alpha1.ApplicationSetReasonApplicationSetModified,
Status: argov1alpha1.ApplicationSetConditionStatusTrue,
}, true,
)
} else {
} else if !appSetProgressing && appSetConditionProgressing {
_ = r.setApplicationSetStatusCondition(ctx,
applicationSet,
argov1alpha1.ApplicationSetCondition{
Type: argov1alpha1.ApplicationSetConditionRolloutProgressing,
Message: "ApplicationSet Rollout has completed",
Message: "ApplicationSet Rollout Rollout complete",
Reason: argov1alpha1.ApplicationSetReasonApplicationSetRolloutComplete,
Status: argov1alpha1.ApplicationSetConditionStatusFalse,
}, true,
)
}
return applicationSet.Status.Conditions
}
@@ -1398,6 +1323,11 @@ func (r *ApplicationSetReconciler) updateResourcesStatus(ctx context.Context, lo
sort.Slice(statuses, func(i, j int) bool {
return statuses[i].Name < statuses[j].Name
})
if r.MaxResourcesStatusCount > 0 && len(statuses) > r.MaxResourcesStatusCount {
logCtx.Warnf("Truncating ApplicationSet %s resource status from %d to max allowed %d entries", appset.Name, len(statuses), r.MaxResourcesStatusCount)
statuses = statuses[:r.MaxResourcesStatusCount]
}
appset.Status.Resources = statuses
// DefaultRetry will retry 5 times with a backoff factor of 1, jitter of 0.1 and a duration of 10ms
err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
@@ -1433,37 +1363,17 @@ func (r *ApplicationSetReconciler) setAppSetApplicationStatus(ctx context.Contex
needToUpdateStatus := false
if len(applicationStatuses) != len(applicationSet.Status.ApplicationStatus) {
logCtx.WithFields(log.Fields{
"current_count": len(applicationSet.Status.ApplicationStatus),
"expected_count": len(applicationStatuses),
}).Debug("application status count changed")
needToUpdateStatus = true
} else {
for i := range applicationStatuses {
appStatus := applicationStatuses[i]
idx := findApplicationStatusIndex(applicationSet.Status.ApplicationStatus, appStatus.Application)
if idx == -1 {
logCtx.WithFields(log.Fields{"application": appStatus.Application}).Debug("application not found in current status")
needToUpdateStatus = true
break
}
currentStatus := applicationSet.Status.ApplicationStatus[idx]
statusChanged := currentStatus.Status != appStatus.Status
stepChanged := currentStatus.Step != appStatus.Step
messageChanged := currentStatus.Message != appStatus.Message
if statusChanged || stepChanged || messageChanged {
if statusChanged {
logCtx.WithFields(log.Fields{"application": appStatus.Application, "previous_status": currentStatus.Status, "new_status": appStatus.Status}).
Debug("application status changed")
}
if stepChanged {
logCtx.WithFields(log.Fields{"application": appStatus.Application, "previous_step": currentStatus.Step, "new_step": appStatus.Step}).
Debug("application step changed")
}
if messageChanged {
logCtx.WithFields(log.Fields{"application": appStatus.Application}).Debug("application message changed")
}
if currentStatus.Message != appStatus.Message || currentStatus.Status != appStatus.Status || currentStatus.Step != appStatus.Step {
needToUpdateStatus = true
break
}
@@ -1471,17 +1381,17 @@ func (r *ApplicationSetReconciler) setAppSetApplicationStatus(ctx context.Contex
}
if needToUpdateStatus {
// sort to make sure the array is always in the same order
applicationSet.Status.ApplicationStatus = make([]argov1alpha1.ApplicationSetApplicationStatus, len(applicationStatuses))
copy(applicationSet.Status.ApplicationStatus, applicationStatuses)
sort.Slice(applicationSet.Status.ApplicationStatus, func(i, j int) bool {
return applicationSet.Status.ApplicationStatus[i].Application < applicationSet.Status.ApplicationStatus[j].Application
})
namespacedName := types.NamespacedName{Namespace: applicationSet.Namespace, Name: applicationSet.Name}
// Rebuild ApplicationStatus from scratch; we don't need any previous status history
applicationSet.Status.ApplicationStatus = []argov1alpha1.ApplicationSetApplicationStatus{}
for i := range applicationStatuses {
applicationSet.Status.SetApplicationStatus(applicationStatuses[i])
}
// DefaultRetry will retry 5 times with a backoff factor of 1, jitter of 0.1 and a duration of 10ms
err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
updatedAppset := &argov1alpha1.ApplicationSet{}
if err := r.Get(ctx, types.NamespacedName{Namespace: applicationSet.Namespace, Name: applicationSet.Name}, updatedAppset); err != nil {
if err := r.Get(ctx, namespacedName, updatedAppset); err != nil {
if client.IgnoreNotFound(err) != nil {
return nil
}
@@ -1545,7 +1455,7 @@ func syncApplication(application argov1alpha1.Application, prune bool) argov1alp
Info: []*argov1alpha1.Info{
{
Name: "Reason",
Value: "ApplicationSet RollingSync triggered a sync of this Application resource",
Value: "ApplicationSet RollingSync triggered a sync of this Application resource.",
},
},
Sync: &argov1alpha1.SyncOperation{},

File diff suppressed because it is too large

View File

@@ -14,6 +14,7 @@ import (
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/event"
"github.com/argoproj/argo-cd/v3/applicationset/utils"
"github.com/argoproj/argo-cd/v3/common"
argoprojiov1alpha1 "github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
)
@@ -22,8 +23,9 @@ import (
// requeue any related ApplicationSets.
type clusterSecretEventHandler struct {
// handler.EnqueueRequestForOwner
Log log.FieldLogger
Client client.Client
Log log.FieldLogger
Client client.Client
ApplicationSetNamespaces []string
}
func (h *clusterSecretEventHandler) Create(ctx context.Context, e event.CreateEvent, q workqueue.TypedRateLimitingInterface[reconcile.Request]) {
@@ -68,6 +70,10 @@ func (h *clusterSecretEventHandler) queueRelatedAppGenerators(ctx context.Contex
h.Log.WithField("count", len(appSetList.Items)).Info("listed ApplicationSets")
for _, appSet := range appSetList.Items {
if !utils.IsNamespaceAllowed(h.ApplicationSetNamespaces, appSet.GetNamespace()) {
// Ignore it, as it is not part of the allowed list of namespaces in which to watch AppSets
continue
}
foundClusterGenerator := false
for _, generator := range appSet.Spec.Generators {
if generator.Clusters != nil {
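The namespace guard above calls utils.IsNamespaceAllowed, whose body is not part of this diff. A plausible sketch of glob-style namespace allow-listing, written here as an assumption rather than the project's actual implementation:

package main

import (
	"fmt"
	"path"
)

// isNamespaceAllowed is a hypothetical stand-in for utils.IsNamespaceAllowed:
// a namespace passes if it matches any allowed pattern (glob support assumed).
func isNamespaceAllowed(allowed []string, namespace string) bool {
	for _, pattern := range allowed {
		if ok, _ := path.Match(pattern, namespace); ok {
			return true
		}
	}
	return false
}

func main() {
	allowed := []string{"argocd", "team-*"}
	fmt.Println(isNamespaceAllowed(allowed, "argocd"))     // true
	fmt.Println(isNamespaceAllowed(allowed, "team-a"))     // true
	fmt.Println(isNamespaceAllowed(allowed, "another-ns")) // false
}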

View File

@@ -137,7 +137,7 @@ func TestClusterEventHandler(t *testing.T) {
{
ObjectMeta: metav1.ObjectMeta{
Name: "my-app-set",
Namespace: "another-namespace",
Namespace: "argocd",
},
Spec: argov1alpha1.ApplicationSetSpec{
Generators: []argov1alpha1.ApplicationSetGenerator{
@@ -171,9 +171,37 @@ func TestClusterEventHandler(t *testing.T) {
},
},
expectedRequests: []reconcile.Request{
{NamespacedName: types.NamespacedName{Namespace: "another-namespace", Name: "my-app-set"}},
{NamespacedName: types.NamespacedName{Namespace: "argocd", Name: "my-app-set"}},
},
},
{
name: "cluster generators in other namespaces should not match",
items: []argov1alpha1.ApplicationSet{
{
ObjectMeta: metav1.ObjectMeta{
Name: "my-app-set",
Namespace: "my-namespace-not-allowed",
},
Spec: argov1alpha1.ApplicationSetSpec{
Generators: []argov1alpha1.ApplicationSetGenerator{
{
Clusters: &argov1alpha1.ClusterGenerator{},
},
},
},
},
},
secret: corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Namespace: "argocd",
Name: "my-secret",
Labels: map[string]string{
argocommon.LabelKeySecretType: argocommon.LabelValueSecretTypeCluster,
},
},
},
expectedRequests: []reconcile.Request{},
},
{
name: "non-argo cd secret should not match",
items: []argov1alpha1.ApplicationSet{
@@ -552,8 +580,9 @@ func TestClusterEventHandler(t *testing.T) {
fakeClient := fake.NewClientBuilder().WithScheme(scheme).WithLists(&appSetList).Build()
handler := &clusterSecretEventHandler{
Client: fakeClient,
Log: log.WithField("type", "createSecretEventHandler"),
Client: fakeClient,
Log: log.WithField("type", "createSecretEventHandler"),
ApplicationSetNamespaces: []string{"argocd"},
}
mockAddRateLimitingInterface := mockAddRateLimitingInterface{}

View File

@@ -79,10 +79,14 @@ func (g *ClusterGenerator) GenerateParams(appSetGenerator *argoappsetv1alpha1.Ap
return nil, fmt.Errorf("error getting cluster secrets: %w", err)
}
paramHolder := &paramHolder{isFlatMode: appSetGenerator.Clusters.FlatList}
logCtx.Debugf("Using flat mode = %t for cluster generator", paramHolder.isFlatMode)
res := []map[string]any{}
secretsFound := []corev1.Secret{}
isFlatMode := appSetGenerator.Clusters.FlatList
logCtx.Debugf("Using flat mode = %t for cluster generator", isFlatMode)
clustersParams := make([]map[string]any, 0)
for _, cluster := range clustersFromArgoCD {
// If there is a secret for this cluster, then it's a non-local cluster, so it will be
// handled by the next step.
@@ -101,80 +105,72 @@ func (g *ClusterGenerator) GenerateParams(appSetGenerator *argoappsetv1alpha1.Ap
return nil, fmt.Errorf("error appending templated values for local cluster: %w", err)
}
paramHolder.append(params)
if isFlatMode {
clustersParams = append(clustersParams, params)
} else {
res = append(res, params)
}
logCtx.WithField("cluster", "local cluster").Info("matched local cluster")
}
}
// For each matching cluster secret (non-local clusters only)
for _, cluster := range secretsFound {
params := g.getClusterParameters(cluster, appSet)
params := map[string]any{}
params["name"] = string(cluster.Data["name"])
params["nameNormalized"] = utils.SanitizeName(string(cluster.Data["name"]))
params["server"] = string(cluster.Data["server"])
project, ok := cluster.Data["project"]
if ok {
params["project"] = string(project)
} else {
params["project"] = ""
}
if appSet.Spec.GoTemplate {
meta := map[string]any{}
if len(cluster.Annotations) > 0 {
meta["annotations"] = cluster.Annotations
}
if len(cluster.Labels) > 0 {
meta["labels"] = cluster.Labels
}
params["metadata"] = meta
} else {
for key, value := range cluster.Annotations {
params["metadata.annotations."+key] = value
}
for key, value := range cluster.Labels {
params["metadata.labels."+key] = value
}
}
err = appendTemplatedValues(appSetGenerator.Clusters.Values, params, appSet.Spec.GoTemplate, appSet.Spec.GoTemplateOptions)
if err != nil {
return nil, fmt.Errorf("error appending templated values for cluster: %w", err)
}
paramHolder.append(params)
if isFlatMode {
clustersParams = append(clustersParams, params)
} else {
res = append(res, params)
}
logCtx.WithField("cluster", cluster.Name).Debug("matched cluster secret")
}
return paramHolder.consolidate(), nil
}
type paramHolder struct {
isFlatMode bool
params []map[string]any
}
func (p *paramHolder) append(params map[string]any) {
p.params = append(p.params, params)
}
func (p *paramHolder) consolidate() []map[string]any {
if p.isFlatMode {
p.params = []map[string]any{
{"clusters": p.params},
}
if isFlatMode {
res = append(res, map[string]any{
"clusters": clustersParams,
})
}
return p.params
}
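The paramHolder refactor moves the flat-mode branching out of each append site: callers always append, and consolidate alone decides the output shape. A compact usage sketch of the type as defined above:

package main

import "fmt"

type paramHolder struct {
	isFlatMode bool
	params     []map[string]any
}

func (p *paramHolder) append(params map[string]any) {
	p.params = append(p.params, params)
}

func (p *paramHolder) consolidate() []map[string]any {
	if p.isFlatMode {
		p.params = []map[string]any{{"clusters": p.params}}
	}
	return p.params
}

func main() {
	for _, flat := range []bool{false, true} {
		h := &paramHolder{isFlatMode: flat}
		h.append(map[string]any{"name": "in-cluster"})
		h.append(map[string]any{"name": "prod"})
		// flat=false: two param sets, one Application per cluster.
		// flat=true: one param set whose "clusters" key holds both clusters.
		fmt.Println(flat, len(h.consolidate()))
	}
}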
func (g *ClusterGenerator) getClusterParameters(cluster corev1.Secret, appSet *argoappsetv1alpha1.ApplicationSet) map[string]any {
params := map[string]any{}
params["name"] = string(cluster.Data["name"])
params["nameNormalized"] = utils.SanitizeName(string(cluster.Data["name"]))
params["server"] = string(cluster.Data["server"])
project, ok := cluster.Data["project"]
if ok {
params["project"] = string(project)
} else {
params["project"] = ""
}
if appSet.Spec.GoTemplate {
meta := map[string]any{}
if len(cluster.Annotations) > 0 {
meta["annotations"] = cluster.Annotations
}
if len(cluster.Labels) > 0 {
meta["labels"] = cluster.Labels
}
params["metadata"] = meta
} else {
for key, value := range cluster.Annotations {
params["metadata.annotations."+key] = value
}
for key, value := range cluster.Labels {
params["metadata.labels."+key] = value
}
}
return params
return res, nil
}
func (g *ClusterGenerator) getSecretsByClusterName(log *log.Entry, appSetGenerator *argoappsetv1alpha1.ApplicationSetGenerator) (map[string]corev1.Secret, error) {

View File

@@ -29,10 +29,10 @@ type GitGenerator struct {
}
// NewGitGenerator creates a new instance of Git Generator
func NewGitGenerator(repos services.Repos, namespace string) Generator {
func NewGitGenerator(repos services.Repos, controllerNamespace string) Generator {
g := &GitGenerator{
repos: repos,
namespace: namespace,
namespace: controllerNamespace,
}
return g
@@ -78,11 +78,11 @@ func (g *GitGenerator) GenerateParams(appSetGenerator *argoprojiov1alpha1.Applic
if !strings.Contains(appSet.Spec.Template.Spec.Project, "{{") {
project := appSet.Spec.Template.Spec.Project
appProject := &argoprojiov1alpha1.AppProject{}
namespace := g.namespace
if namespace == "" {
namespace = appSet.Namespace
controllerNamespace := g.namespace
if controllerNamespace == "" {
controllerNamespace = appSet.Namespace
}
if err := client.Get(context.TODO(), types.NamespacedName{Name: project, Namespace: namespace}, appProject); err != nil {
if err := client.Get(context.TODO(), types.NamespacedName{Name: project, Namespace: controllerNamespace}, appProject); err != nil {
return nil, fmt.Errorf("error getting project %s: %w", project, err)
}
// we need to verify the signature on the Git revision if GPG is enabled
@@ -222,18 +222,19 @@ func (g *GitGenerator) generateParamsForGitFiles(appSetGenerator *argoprojiov1al
func (g *GitGenerator) generateParamsFromGitFile(filePath string, fileContent []byte, values map[string]string, useGoTemplate bool, goTemplateOptions []string, pathParamPrefix string) ([]map[string]any, error) {
objectsFound := []map[string]any{}
// First, we attempt to parse as a single object.
// This will also succeed for empty files.
singleObj := map[string]any{}
err := yaml.Unmarshal(fileContent, &singleObj)
if err == nil {
objectsFound = append(objectsFound, singleObj)
} else {
// If unable to parse as an object, try to parse as an array
err = yaml.Unmarshal(fileContent, &objectsFound)
// First, we attempt to parse as an array
err := yaml.Unmarshal(fileContent, &objectsFound)
if err != nil {
// If unable to parse as an array, attempt to parse as a single object
singleObj := make(map[string]any)
err = yaml.Unmarshal(fileContent, &singleObj)
if err != nil {
return nil, fmt.Errorf("unable to parse file: %w", err)
}
objectsFound = append(objectsFound, singleObj)
} else if len(objectsFound) == 0 {
// If file is valid but empty, add a default empty item
objectsFound = append(objectsFound, map[string]any{})
}
res := []map[string]any{}
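The parse order matters because yaml.Unmarshal into a map succeeds for an empty file but fails for a top-level array, while unmarshalling into a slice fails for a single object, so trying the array first changes which inputs reach the fallback. A trimmed sketch of the array-first behavior using sigs.k8s.io/yaml, leaving out the generator's surrounding parameter handling:

package main

import (
	"fmt"

	"sigs.k8s.io/yaml"
)

func parse(content []byte) ([]map[string]any, error) {
	// Try a top-level array first, mirroring the new parse order.
	var objects []map[string]any
	if err := yaml.Unmarshal(content, &objects); err != nil {
		// Fall back to a single object.
		single := map[string]any{}
		if err := yaml.Unmarshal(content, &single); err != nil {
			return nil, fmt.Errorf("unable to parse file: %w", err)
		}
		return []map[string]any{single}, nil
	}
	if len(objects) == 0 {
		// Valid but empty input yields one default empty param set.
		objects = append(objects, map[string]any{})
	}
	return objects, nil
}

func main() {
	for _, in := range []string{"- a: 1\n- a: 2", "a: 1", ""} {
		out, err := parse([]byte(in))
		fmt.Printf("%q -> %d object(s), err=%v\n", in, len(out), err)
	}
}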

View File

@@ -825,7 +825,7 @@ func TestGitGenerateParamsFromFiles(t *testing.T) {
},
repoPathsError: nil,
expected: []map[string]any{},
expectedError: errors.New("error generating params from git: unable to process file 'cluster-config/production/config.json': unable to parse file: error unmarshaling JSON: while decoding JSON: json: cannot unmarshal string into Go value of type []map[string]interface {}"),
expectedError: errors.New("error generating params from git: unable to process file 'cluster-config/production/config.json': unable to parse file: error unmarshaling JSON: while decoding JSON: json: cannot unmarshal string into Go value of type map[string]interface {}"),
},
{
name: "test JSON array",
@@ -982,16 +982,6 @@ cluster:
},
expectedError: nil,
},
{
name: "test empty YAML array",
files: []v1alpha1.GitFileGeneratorItem{{Path: "**/config.yaml"}},
repoFileContents: map[string][]byte{
"cluster-config/production/config.yaml": []byte(`[]`),
},
repoPathsError: nil,
expected: []map[string]any{},
expectedError: nil,
},
}
for _, testCase := range cases {
@@ -2070,7 +2060,7 @@ func TestGitGenerateParamsFromFilesGoTemplate(t *testing.T) {
},
repoPathsError: nil,
expected: []map[string]any{},
expectedError: errors.New("error generating params from git: unable to process file 'cluster-config/production/config.json': unable to parse file: error unmarshaling JSON: while decoding JSON: json: cannot unmarshal string into Go value of type []map[string]interface {}"),
expectedError: errors.New("error generating params from git: unable to process file 'cluster-config/production/config.json': unable to parse file: error unmarshaling JSON: while decoding JSON: json: cannot unmarshal string into Go value of type map[string]interface {}"),
},
{
name: "test JSON array",

View File

@@ -11,7 +11,6 @@ import (
"sigs.k8s.io/controller-runtime/pkg/client"
"github.com/gosimple/slug"
log "github.com/sirupsen/logrus"
"github.com/argoproj/argo-cd/v3/applicationset/services"
pullrequest "github.com/argoproj/argo-cd/v3/applicationset/services/pull_request"
@@ -19,6 +18,8 @@ import (
argoprojiov1alpha1 "github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
)
var _ Generator = (*PullRequestGenerator)(nil)
const (
DefaultPullRequestRequeueAfter = 30 * time.Minute
)
@@ -48,10 +49,6 @@ func (g *PullRequestGenerator) GetRequeueAfter(appSetGenerator *argoprojiov1alph
return DefaultPullRequestRequeueAfter
}
func (g *PullRequestGenerator) GetContinueOnRepoNotFoundError(appSetGenerator *argoprojiov1alpha1.ApplicationSetGenerator) bool {
return appSetGenerator.PullRequest.ContinueOnRepoNotFoundError
}
func (g *PullRequestGenerator) GetTemplate(appSetGenerator *argoprojiov1alpha1.ApplicationSetGenerator) *argoprojiov1alpha1.ApplicationSetTemplate {
return &appSetGenerator.PullRequest.Template
}
@@ -72,15 +69,10 @@ func (g *PullRequestGenerator) GenerateParams(appSetGenerator *argoprojiov1alpha
}
pulls, err := pullrequest.ListPullRequests(ctx, svc, appSetGenerator.PullRequest.Filters)
params := make([]map[string]any, 0, len(pulls))
if err != nil {
if pullrequest.IsRepositoryNotFoundError(err) && g.GetContinueOnRepoNotFoundError(appSetGenerator) {
log.WithError(err).WithField("generator", g).
Warn("Skipping params generation for this repository since it was not found.")
return params, nil
}
return nil, fmt.Errorf("error listing repos: %w", err)
}
params := make([]map[string]any, 0, len(pulls))
// In order to follow the DNS label standard as defined in RFC 1123,
// we need to limit the 'branch' to 50 characters to leave room for appending a suffix to it

View File

@@ -16,12 +16,11 @@ import (
func TestPullRequestGithubGenerateParams(t *testing.T) {
ctx := t.Context()
cases := []struct {
selectFunc func(context.Context, *argoprojiov1alpha1.PullRequestGenerator, *argoprojiov1alpha1.ApplicationSet) (pullrequest.PullRequestService, error)
values map[string]string
expected []map[string]any
expectedErr error
applicationSet argoprojiov1alpha1.ApplicationSet
continueOnRepoNotFoundError bool
selectFunc func(context.Context, *argoprojiov1alpha1.PullRequestGenerator, *argoprojiov1alpha1.ApplicationSet) (pullrequest.PullRequestService, error)
values map[string]string
expected []map[string]any
expectedErr error
applicationSet argoprojiov1alpha1.ApplicationSet
}{
{
selectFunc: func(context.Context, *argoprojiov1alpha1.PullRequestGenerator, *argoprojiov1alpha1.ApplicationSet) (pullrequest.PullRequestService, error) {
@@ -172,30 +171,6 @@ func TestPullRequestGithubGenerateParams(t *testing.T) {
expected: nil,
expectedErr: errors.New("error listing repos: fake error"),
},
{
selectFunc: func(context.Context, *argoprojiov1alpha1.PullRequestGenerator, *argoprojiov1alpha1.ApplicationSet) (pullrequest.PullRequestService, error) {
return pullrequest.NewFakeService(
ctx,
nil,
pullrequest.NewRepositoryNotFoundError(errors.New("repository not found")),
)
},
expected: []map[string]any{},
expectedErr: nil,
continueOnRepoNotFoundError: true,
},
{
selectFunc: func(context.Context, *argoprojiov1alpha1.PullRequestGenerator, *argoprojiov1alpha1.ApplicationSet) (pullrequest.PullRequestService, error) {
return pullrequest.NewFakeService(
ctx,
nil,
pullrequest.NewRepositoryNotFoundError(errors.New("repository not found")),
)
},
expected: nil,
expectedErr: errors.New("error listing repos: repository not found"),
continueOnRepoNotFoundError: false,
},
{
selectFunc: func(context.Context, *argoprojiov1alpha1.PullRequestGenerator, *argoprojiov1alpha1.ApplicationSet) (pullrequest.PullRequestService, error) {
return pullrequest.NewFakeService(
@@ -285,8 +260,7 @@ func TestPullRequestGithubGenerateParams(t *testing.T) {
}
generatorConfig := argoprojiov1alpha1.ApplicationSetGenerator{
PullRequest: &argoprojiov1alpha1.PullRequestGenerator{
Values: c.values,
ContinueOnRepoNotFoundError: c.continueOnRepoNotFoundError,
Values: c.values,
},
}

View File

@@ -10,15 +10,15 @@ import (
"github.com/argoproj/argo-cd/v3/applicationset/services"
)
func GetGenerators(ctx context.Context, c client.Client, k8sClient kubernetes.Interface, namespace string, argoCDService services.Repos, dynamicClient dynamic.Interface, scmConfig SCMConfig) map[string]Generator {
func GetGenerators(ctx context.Context, c client.Client, k8sClient kubernetes.Interface, controllerNamespace string, argoCDService services.Repos, dynamicClient dynamic.Interface, scmConfig SCMConfig) map[string]Generator {
terminalGenerators := map[string]Generator{
"List": NewListGenerator(),
"Clusters": NewClusterGenerator(ctx, c, k8sClient, namespace),
"Git": NewGitGenerator(argoCDService, namespace),
"Clusters": NewClusterGenerator(ctx, c, k8sClient, controllerNamespace),
"Git": NewGitGenerator(argoCDService, controllerNamespace),
"SCMProvider": NewSCMProviderGenerator(c, scmConfig),
"ClusterDecisionResource": NewDuckTypeGenerator(ctx, dynamicClient, k8sClient, namespace),
"ClusterDecisionResource": NewDuckTypeGenerator(ctx, dynamicClient, k8sClient, controllerNamespace),
"PullRequest": NewPullRequestGenerator(c, scmConfig),
"Plugin": NewPluginGenerator(c, namespace),
"Plugin": NewPluginGenerator(c, controllerNamespace),
}
nestedGenerators := map[string]Generator{

View File

@@ -10,10 +10,7 @@ import (
"github.com/microsoft/azure-devops-go-api/azuredevops/v7/git"
)
const (
AZURE_DEVOPS_DEFAULT_URL = "https://dev.azure.com"
AZURE_DEVOPS_PROJECT_NOT_FOUND_ERROR = "The following project does not exist"
)
const AZURE_DEVOPS_DEFAULT_URL = "https://dev.azure.com"
type AzureDevOpsClientFactory interface {
// Returns an Azure Devops Client interface.
@@ -73,22 +70,13 @@ func (a *AzureDevOpsService) List(ctx context.Context) ([]*PullRequest, error) {
SearchCriteria: &git.GitPullRequestSearchCriteria{},
}
pullRequests := []*PullRequest{}
azurePullRequests, err := client.GetPullRequestsByProject(ctx, args)
if err != nil {
// A standard HTTP 404 error is not returned for Azure DevOps,
// so we check the error message for a specific pattern.
// NOTE: Since the repos are filtered later, only the existence of the project
// is relevant for Azure DevOps
if strings.Contains(err.Error(), AZURE_DEVOPS_PROJECT_NOT_FOUND_ERROR) {
// return a custom error indicating that the repository is not found,
// but also return the empty result since the decision to continue or not in this case is made by the caller
return pullRequests, NewRepositoryNotFoundError(err)
}
return nil, fmt.Errorf("failed to get pull requests by project: %w", err)
}
pullRequests := []*PullRequest{}
for _, pr := range *azurePullRequests {
if pr.Repository == nil ||
pr.Repository.Name == nil ||

View File

@@ -2,7 +2,6 @@ package pull_request
import (
"context"
"errors"
"testing"
"github.com/microsoft/azure-devops-go-api/azuredevops/v7/core"
@@ -236,36 +235,3 @@ func TestBuildURL(t *testing.T) {
})
}
}
func TestAzureDevOpsListReturnsRepositoryNotFoundError(t *testing.T) {
args := git.GetPullRequestsByProjectArgs{
Project: createStringPtr("nonexistent"),
SearchCriteria: &git.GitPullRequestSearchCriteria{},
}
pullRequestMock := []git.GitPullRequest{}
gitClientMock := azureMock.Client{}
clientFactoryMock := &AzureClientFactoryMock{mock: &mock.Mock{}}
clientFactoryMock.mock.On("GetClient", mock.Anything).Return(&gitClientMock, nil)
// Mock GetPullRequestsByProject to return the project-not-found error message
gitClientMock.On("GetPullRequestsByProject", t.Context(), args).Return(&pullRequestMock,
errors.New("The following project does not exist:"))
provider := AzureDevOpsService{
clientFactory: clientFactoryMock,
project: "nonexistent",
repo: "nonexistent",
labels: nil,
}
prs, err := provider.List(t.Context())
// Should return empty pull requests list
assert.Empty(t, prs)
// Should return RepositoryNotFoundError
require.Error(t, err)
assert.True(t, IsRepositoryNotFoundError(err), "Expected RepositoryNotFoundError but got: %v", err)
}

View File

@@ -6,7 +6,6 @@ import (
"errors"
"fmt"
"net/url"
"strings"
"github.com/ktrysmt/go-bitbucket"
)
@@ -118,17 +117,8 @@ func (b *BitbucketCloudService) List(_ context.Context) ([]*PullRequest, error)
RepoSlug: b.repositorySlug,
}
pullRequests := []*PullRequest{}
response, err := b.client.Repositories.PullRequests.Gets(opts)
if err != nil {
// A standard HTTP 404 error is not returned for Bitbucket Cloud,
// so we check the error message for a specific pattern
if strings.Contains(err.Error(), "404 Not Found") {
// return a custom error indicating that the repository is not found,
// but also return the empty result since the decision to continue or not in this case is made by the caller
return pullRequests, NewRepositoryNotFoundError(err)
}
return nil, fmt.Errorf("error listing pull requests for %s/%s: %w", b.owner, b.repositorySlug, err)
}
@@ -152,6 +142,7 @@ func (b *BitbucketCloudService) List(_ context.Context) ([]*PullRequest, error)
return nil, fmt.Errorf("error unmarshalling json to type '[]BitbucketCloudPullRequest': %w", err)
}
pullRequests := []*PullRequest{}
for _, pull := range pulls {
pullRequests = append(pullRequests, &PullRequest{
Number: pull.ID,

View File

@@ -492,29 +492,3 @@ func TestListPullRequestBranchMatchCloud(t *testing.T) {
TargetBranch: "branch-200",
}, *pullRequests[0])
}
func TestBitbucketCloudListReturnsRepositoryNotFoundError(t *testing.T) {
mux := http.NewServeMux()
server := httptest.NewServer(mux)
defer server.Close()
path := "/repositories/nonexistent/nonexistent/pullrequests/"
mux.HandleFunc(path, func(w http.ResponseWriter, _ *http.Request) {
// Return 404 status to simulate repository not found
w.WriteHeader(http.StatusNotFound)
_, _ = w.Write([]byte(`{"message": "404 Project Not Found"}`))
})
svc, err := NewBitbucketCloudServiceNoAuth(server.URL, "nonexistent", "nonexistent")
require.NoError(t, err)
prs, err := svc.List(t.Context())
// Should return empty pull requests list
assert.Empty(t, prs)
// Should return RepositoryNotFoundError
require.Error(t, err)
assert.True(t, IsRepositoryNotFoundError(err), "Expected RepositoryNotFoundError but got: %v", err)
}

View File

@@ -3,7 +3,6 @@ package pull_request
import (
"context"
"fmt"
"net/http"
bitbucketv1 "github.com/gfleury/go-bitbucket-v1"
log "github.com/sirupsen/logrus"
@@ -67,11 +66,6 @@ func (b *BitbucketService) List(_ context.Context) ([]*PullRequest, error) {
for {
response, err := b.client.DefaultApi.GetPullRequestsPage(b.projectKey, b.repositorySlug, paged)
if err != nil {
if response != nil && response.Response != nil && response.StatusCode == http.StatusNotFound {
// return a custom error indicating that the repository is not found,
// but also return the empty result since the decision to continue or not in this case is made by the caller
return pullRequests, NewRepositoryNotFoundError(err)
}
return nil, fmt.Errorf("error listing pull requests for %s/%s: %w", b.projectKey, b.repositorySlug, err)
}
pulls, err := bitbucketv1.GetPullRequestsResponse(response)

View File

@@ -510,29 +510,3 @@ func TestListPullRequestBranchMatch(t *testing.T) {
})
require.Error(t, err)
}
func TestBitbucketServerListReturnsRepositoryNotFoundError(t *testing.T) {
mux := http.NewServeMux()
server := httptest.NewServer(mux)
defer server.Close()
path := "/rest/api/1.0/projects/nonexistent/repos/nonexistent/pull-requests?limit=100"
mux.HandleFunc(path, func(w http.ResponseWriter, _ *http.Request) {
// Return 404 status to simulate repository not found
w.WriteHeader(http.StatusNotFound)
_, _ = w.Write([]byte(`{"message": "404 Project Not Found"}`))
})
svc, err := NewBitbucketServiceNoAuth(t.Context(), server.URL, "nonexistent", "nonexistent", "", false, nil)
require.NoError(t, err)
prs, err := svc.List(t.Context())
// Should return empty pull requests list
assert.Empty(t, prs)
// Should return RepositoryNotFoundError
require.Error(t, err)
assert.True(t, IsRepositoryNotFoundError(err), "Expected RepositoryNotFoundError but got: %v", err)
}

View File

@@ -1,23 +0,0 @@
package pull_request
import "errors"
// RepositoryNotFoundError represents an error when a repository is not found by a pull request provider
type RepositoryNotFoundError struct {
causingError error
}
func (e *RepositoryNotFoundError) Error() string {
return e.causingError.Error()
}
// NewRepositoryNotFoundError creates a new repository not found error
func NewRepositoryNotFoundError(err error) error {
return &RepositoryNotFoundError{causingError: err}
}
// IsRepositoryNotFoundError checks if the given error is a repository not found error
func IsRepositoryNotFoundError(err error) bool {
var repoErr *RepositoryNotFoundError
return errors.As(err, &repoErr)
}
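Because detection goes through errors.As, a RepositoryNotFoundError survives fmt.Errorf wrapping with %w but not plain string concatenation, which is what the removed test file below exercised. A compact illustration with a local copy of the type:

package main

import (
	"errors"
	"fmt"
)

type RepositoryNotFoundError struct{ causingError error }

func (e *RepositoryNotFoundError) Error() string { return e.causingError.Error() }

func main() {
	base := &RepositoryNotFoundError{causingError: errors.New("repository not found")}

	wrapped := fmt.Errorf("error listing repos: %w", base) // %w keeps the chain intact
	concat := errors.New("error listing repos: " + base.Error())

	var target *RepositoryNotFoundError
	fmt.Println(errors.As(wrapped, &target)) // true
	fmt.Println(errors.As(concat, &target))  // false
}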

View File

@@ -1,48 +0,0 @@
package pull_request
import (
"errors"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestRepositoryNotFoundError(t *testing.T) {
t.Run("NewRepositoryNotFoundError creates correct error type", func(t *testing.T) {
originalErr := errors.New("repository does not exist")
repoNotFoundErr := NewRepositoryNotFoundError(originalErr)
require.Error(t, repoNotFoundErr)
assert.Equal(t, "repository does not exist", repoNotFoundErr.Error())
})
t.Run("IsRepositoryNotFoundError identifies RepositoryNotFoundError", func(t *testing.T) {
originalErr := errors.New("repository does not exist")
repoNotFoundErr := NewRepositoryNotFoundError(originalErr)
assert.True(t, IsRepositoryNotFoundError(repoNotFoundErr))
})
t.Run("IsRepositoryNotFoundError returns false for regular errors", func(t *testing.T) {
regularErr := errors.New("some other error")
assert.False(t, IsRepositoryNotFoundError(regularErr))
})
t.Run("IsRepositoryNotFoundError returns false for nil error", func(t *testing.T) {
assert.False(t, IsRepositoryNotFoundError(nil))
})
t.Run("IsRepositoryNotFoundError works with wrapped errors", func(t *testing.T) {
originalErr := errors.New("repository does not exist")
repoNotFoundErr := NewRepositoryNotFoundError(originalErr)
wrappedErr := errors.New("wrapped: " + repoNotFoundErr.Error())
// Direct RepositoryNotFoundError should be identified
assert.True(t, IsRepositoryNotFoundError(repoNotFoundErr))
// Wrapped string error should not be identified (this is expected behavior)
assert.False(t, IsRepositoryNotFoundError(wrappedErr))
})
}

View File

@@ -52,17 +52,11 @@ func (g *GiteaService) List(ctx context.Context) ([]*PullRequest, error) {
State: gitea.StateOpen,
}
g.client.SetContext(ctx)
list := []*PullRequest{}
prs, resp, err := g.client.ListRepoPullRequests(g.owner, g.repo, opts)
prs, _, err := g.client.ListRepoPullRequests(g.owner, g.repo, opts)
if err != nil {
if resp != nil && resp.StatusCode == http.StatusNotFound {
// return a custom error indicating that the repository is not found,
// but also returning the empty result since the decision to continue or not in this case is made by the caller
return list, NewRepositoryNotFoundError(err)
}
return nil, err
}
list := []*PullRequest{}
for _, pr := range prs {
if !giteaContainLabels(g.labels, pr.Labels) {
continue

View File

@@ -339,35 +339,3 @@ func TestGetGiteaPRLabelNames(t *testing.T) {
})
}
}
func TestGiteaListReturnsRepositoryNotFoundError(t *testing.T) {
mux := http.NewServeMux()
server := httptest.NewServer(mux)
defer server.Close()
// Handle version endpoint that Gitea client calls first
mux.HandleFunc("/api/v1/version", func(w http.ResponseWriter, _ *http.Request) {
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{"version":"1.17.0+dev-452-g1f0541780"}`))
})
path := "/api/v1/repos/nonexistent/nonexistent/pulls?limit=0&page=1&state=open"
mux.HandleFunc(path, func(w http.ResponseWriter, _ *http.Request) {
// Return 404 status to simulate repository not found
w.WriteHeader(http.StatusNotFound)
_, _ = w.Write([]byte(`{"message": "404 Project Not Found"}`))
})
svc, err := NewGiteaService("", server.URL, "nonexistent", "nonexistent", []string{}, false)
require.NoError(t, err)
prs, err := svc.List(t.Context())
// Should return empty pull requests list
assert.Empty(t, prs)
// Should return RepositoryNotFoundError
require.Error(t, err)
assert.True(t, IsRepositoryNotFoundError(err), "Expected RepositoryNotFoundError but got: %v", err)
}

View File

@@ -64,11 +64,6 @@ func (g *GithubService) List(ctx context.Context) ([]*PullRequest, error) {
for {
pulls, resp, err := g.client.PullRequests.List(ctx, g.owner, g.repo, opts)
if err != nil {
if resp != nil && resp.StatusCode == http.StatusNotFound {
// return a custom error indicating that the repository is not found,
// but also returning the empty result since the decision to continue or not in this case is made by the caller
return pullRequests, NewRepositoryNotFoundError(err)
}
return nil, fmt.Errorf("error listing pull requests for %s/%s: %w", g.owner, g.repo, err)
}
for _, pull := range pulls {

View File

@@ -1,12 +1,9 @@
package pull_request
import (
"net/http"
"net/http/httptest"
"testing"
"github.com/google/go-github/v69/github"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@@ -89,29 +86,3 @@ func TestGetGitHubPRLabelNames(t *testing.T) {
})
}
}
func TestGitHubListReturnsRepositoryNotFoundError(t *testing.T) {
mux := http.NewServeMux()
server := httptest.NewServer(mux)
defer server.Close()
path := "/repos/nonexistent/nonexistent/pulls"
mux.HandleFunc(path, func(w http.ResponseWriter, _ *http.Request) {
// Return 404 status to simulate repository not found
w.WriteHeader(http.StatusNotFound)
_, _ = w.Write([]byte(`{"message": "404 Project Not Found"}`))
})
svc, err := NewGithubService("", server.URL, "nonexistent", "nonexistent", []string{}, nil)
require.NoError(t, err)
prs, err := svc.List(t.Context())
// Should return empty pull requests list
assert.Empty(t, prs)
// Should return RepositoryNotFoundError
require.Error(t, err)
assert.True(t, IsRepositoryNotFoundError(err), "Expected RepositoryNotFoundError but got: %v", err)
}

View File

@@ -76,11 +76,6 @@ func (g *GitLabService) List(ctx context.Context) ([]*PullRequest, error) {
for {
mrs, resp, err := g.client.MergeRequests.ListProjectMergeRequests(g.project, opts, gitlab.WithContext(ctx))
if err != nil {
if resp != nil && resp.StatusCode == http.StatusNotFound {
// return a custom error indicating that the repository is not found,
// but also returning the empty result since the decision to continue or not in this case is made by the caller
return pullRequests, NewRepositoryNotFoundError(err)
}
return nil, fmt.Errorf("error listing merge requests for project '%s': %w", g.project, err)
}
for _, mr := range mrs {

View File

@@ -191,29 +191,3 @@ func TestListWithStateTLS(t *testing.T) {
})
}
}
func TestGitLabListReturnsRepositoryNotFoundError(t *testing.T) {
mux := http.NewServeMux()
server := httptest.NewServer(mux)
defer server.Close()
path := "/api/v4/projects/nonexistent/merge_requests"
mux.HandleFunc(path, func(w http.ResponseWriter, _ *http.Request) {
// Return 404 status to simulate repository not found
w.WriteHeader(http.StatusNotFound)
_, _ = w.Write([]byte(`{"message": "404 Project Not Found"}`))
})
svc, err := NewGitLabService("", server.URL, "nonexistent", []string{}, "", "", false, nil)
require.NoError(t, err)
prs, err := svc.List(t.Context())
// Should return empty pull requests list
assert.Empty(t, prs)
// Should return RepositoryNotFoundError
require.Error(t, err)
assert.True(t, IsRepositoryNotFoundError(err), "Expected RepositoryNotFoundError but got: %v", err)
}

View File

@@ -30,5 +30,4 @@ type PullRequestService interface {
type Filter struct {
BranchMatch *regexp.Regexp
TargetBranchMatch *regexp.Regexp
TitleMatch *regexp.Regexp
}

View File

@@ -25,12 +25,6 @@ func compileFilters(filters []argoprojiov1alpha1.PullRequestGeneratorFilter) ([]
return nil, fmt.Errorf("error compiling TargetBranchMatch regexp %q: %w", *filter.TargetBranchMatch, err)
}
}
if filter.TitleMatch != nil {
outFilter.TitleMatch, err = regexp.Compile(*filter.TitleMatch)
if err != nil {
return nil, fmt.Errorf("error compiling TitleMatch regexp %q: %w", *filter.TitleMatch, err)
}
}
outFilters = append(outFilters, outFilter)
}
return outFilters, nil
@@ -43,9 +37,6 @@ func matchFilter(pullRequest *PullRequest, filter *Filter) bool {
if filter.TargetBranchMatch != nil && !filter.TargetBranchMatch.MatchString(pullRequest.TargetBranch) {
return false
}
if filter.TitleMatch != nil && !filter.TitleMatch.MatchString(pullRequest.Title) {
return false
}
return true
}
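As compileFilters and matchFilter show, filters combine as OR across entries and AND within a single entry: a pull request is kept if at least one filter matches on all of its set fields. A minimal sketch of that evaluation with regexp-backed fields, mirroring the Filter struct above:

package main

import (
	"fmt"
	"regexp"
)

type PullRequest struct{ Branch, TargetBranch string }

type Filter struct {
	BranchMatch       *regexp.Regexp
	TargetBranchMatch *regexp.Regexp
}

// matchFilter ANDs the set fields of one filter.
func matchFilter(pr *PullRequest, f *Filter) bool {
	if f.BranchMatch != nil && !f.BranchMatch.MatchString(pr.Branch) {
		return false
	}
	if f.TargetBranchMatch != nil && !f.TargetBranchMatch.MatchString(pr.TargetBranch) {
		return false
	}
	return true
}

// keep ORs across all filters; no filters means everything is kept.
func keep(pr *PullRequest, filters []*Filter) bool {
	if len(filters) == 0 {
		return true
	}
	for _, f := range filters {
		if matchFilter(pr, f) {
			return true
		}
	}
	return false
}

func main() {
	pr := &PullRequest{Branch: "feature-x", TargetBranch: "main"}
	filters := []*Filter{
		{BranchMatch: regexp.MustCompile("^hotfix-")},     // does not match
		{TargetBranchMatch: regexp.MustCompile("^main$")}, // matches
	}
	fmt.Println(keep(pr, filters)) // true: the second filter matched
}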

View File

@@ -137,110 +137,6 @@ func TestFilterTargetBranchMatch(t *testing.T) {
assert.Equal(t, "two", pullRequests[0].Branch)
}
func TestFilterTitleMatch(t *testing.T) {
provider, _ := NewFakeService(
t.Context(),
[]*PullRequest{
{
Number: 1,
Title: "PR one - filter",
Branch: "one",
TargetBranch: "master",
HeadSHA: "189d92cbf9ff857a39e6feccd32798ca700fb958",
Author: "name1",
},
{
Number: 2,
Title: "PR two - ignore",
Branch: "two",
TargetBranch: "branch1",
HeadSHA: "289d92cbf9ff857a39e6feccd32798ca700fb958",
Author: "name2",
},
{
Number: 3,
Title: "[filter] PR three",
Branch: "three",
TargetBranch: "branch2",
HeadSHA: "389d92cbf9ff857a39e6feccd32798ca700fb958",
Author: "name3",
},
{
Number: 4,
Title: "[ignore] PR four",
Branch: "four",
TargetBranch: "branch3",
HeadSHA: "489d92cbf9ff857a39e6feccd32798ca700fb958",
Author: "name4",
},
},
nil,
)
filters := []argoprojiov1alpha1.PullRequestGeneratorFilter{
{
TitleMatch: strp("\\[filter]"),
},
}
pullRequests, err := ListPullRequests(t.Context(), provider, filters)
require.NoError(t, err)
assert.Len(t, pullRequests, 1)
assert.Equal(t, "three", pullRequests[0].Branch)
}
func TestMultiFilterOrWithTitle(t *testing.T) {
provider, _ := NewFakeService(
t.Context(),
[]*PullRequest{
{
Number: 1,
Title: "PR one - filter",
Branch: "one",
TargetBranch: "master",
HeadSHA: "189d92cbf9ff857a39e6feccd32798ca700fb958",
Author: "name1",
},
{
Number: 2,
Title: "PR two - ignore",
Branch: "two",
TargetBranch: "branch1",
HeadSHA: "289d92cbf9ff857a39e6feccd32798ca700fb958",
Author: "name2",
},
{
Number: 3,
Title: "[filter] PR three",
Branch: "three",
TargetBranch: "branch2",
HeadSHA: "389d92cbf9ff857a39e6feccd32798ca700fb958",
Author: "name3",
},
{
Number: 4,
Title: "[ignore] PR four",
Branch: "four",
TargetBranch: "branch3",
HeadSHA: "489d92cbf9ff857a39e6feccd32798ca700fb958",
Author: "name4",
},
},
nil,
)
filters := []argoprojiov1alpha1.PullRequestGeneratorFilter{
{
TitleMatch: strp("\\[filter]"),
},
{
TitleMatch: strp("- filter"),
},
}
pullRequests, err := ListPullRequests(t.Context(), provider, filters)
require.NoError(t, err)
assert.Len(t, pullRequests, 2)
assert.Equal(t, "one", pullRequests[0].Branch)
assert.Equal(t, "three", pullRequests[1].Branch)
}
func TestMultiFilterOr(t *testing.T) {
provider, _ := NewFakeService(
t.Context(),
@@ -296,7 +192,7 @@ func TestMultiFilterOr(t *testing.T) {
assert.Equal(t, "four", pullRequests[2].Branch)
}
func TestMultiFilterOrWithTargetBranchFilterOrWithTitleFilter(t *testing.T) {
func TestMultiFilterOrWithTargetBranchFilter(t *testing.T) {
provider, _ := NewFakeService(
t.Context(),
[]*PullRequest{
@@ -332,14 +228,6 @@ func TestMultiFilterOrWithTargetBranchFilterOrWithTitleFilter(t *testing.T) {
HeadSHA: "489d92cbf9ff857a39e6feccd32798ca700fb958",
Author: "name4",
},
{
Number: 5,
Title: "PR title is different than branch name",
Branch: "five",
TargetBranch: "branch3",
HeadSHA: "489d92cbf9ff857a39e6feccd32798ca700fb958",
Author: "name5",
},
},
nil,
)
@@ -352,21 +240,12 @@ func TestMultiFilterOrWithTargetBranchFilterOrWithTitleFilter(t *testing.T) {
BranchMatch: strp("r"),
TargetBranchMatch: strp("3"),
},
{
TitleMatch: strp("two"),
},
{
BranchMatch: strp("five"),
TitleMatch: strp("PR title is different than branch name"),
},
}
pullRequests, err := ListPullRequests(t.Context(), provider, filters)
require.NoError(t, err)
assert.Len(t, pullRequests, 3)
assert.Len(t, pullRequests, 2)
assert.Equal(t, "two", pullRequests[0].Branch)
assert.Equal(t, "four", pullRequests[1].Branch)
assert.Equal(t, "five", pullRequests[2].Branch)
assert.Equal(t, "PR title is different than branch name", pullRequests[2].Title)
}
func TestNoFilters(t *testing.T) {

View File

@@ -1,7 +1,6 @@
package services
import (
"context"
"crypto/tls"
"net/http"
"testing"
@@ -12,7 +11,7 @@ import (
)
func TestSetupBitbucketClient(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
cfg := &bitbucketv1.Configuration{}
// Act
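This hunk swaps context.Background() for t.Context(). Introduced in Go 1.24, testing's t.Context returns a per-test context that the framework cancels just before the test's cleanup functions run, so work tied to it cannot leak past the test. For example:

    func TestUsesTestContext(t *testing.T) {
        ctx := t.Context() // canceled automatically when the test finishes
        if err := ctx.Err(); err != nil {
            t.Fatalf("context unexpectedly done during the test: %v", err)
        }
    }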

View File

@@ -14,7 +14,7 @@ import (
var ErrDisallowedSecretAccess = fmt.Errorf("secret must have label %q=%q", common.LabelKeySecretType, common.LabelValueSecretTypeSCMCreds)
// GetSecretRef gets the value of the key for the specified Secret resource.
// getSecretRef gets the value of the key for the specified Secret resource.
func GetSecretRef(ctx context.Context, k8sClient client.Client, ref *argoprojiov1alpha1.SecretRef, namespace string, tokenRefStrictMode bool) (string, error) {
if ref == nil {
return "", nil

View File

@@ -26,10 +26,14 @@ import (
"github.com/go-playground/webhooks/v6/github"
"github.com/go-playground/webhooks/v6/gitlab"
log "github.com/sirupsen/logrus"
"github.com/argoproj/argo-cd/v3/util/guard"
)
const payloadQueueSize = 50000
const panicMsgAppSet = "panic while processing applicationset-controller webhook event"
type WebhookHandler struct {
sync.WaitGroup // for testing
github *github.Webhook
@@ -74,15 +78,15 @@ func NewWebhookHandler(webhookParallelism int, argocdSettingsMgr *argosettings.S
if err != nil {
return nil, fmt.Errorf("failed to get argocd settings: %w", err)
}
githubHandler, err := github.New(github.Options.Secret(argocdSettings.GetWebhookGitHubSecret()))
githubHandler, err := github.New(github.Options.Secret(argocdSettings.WebhookGitHubSecret))
if err != nil {
return nil, fmt.Errorf("unable to init GitHub webhook: %w", err)
}
gitlabHandler, err := gitlab.New(gitlab.Options.Secret(argocdSettings.GetWebhookGitLabSecret()))
gitlabHandler, err := gitlab.New(gitlab.Options.Secret(argocdSettings.WebhookGitLabSecret))
if err != nil {
return nil, fmt.Errorf("unable to init GitLab webhook: %w", err)
}
azuredevopsHandler, err := azuredevops.New(azuredevops.Options.BasicAuth(argocdSettings.GetWebhookAzureDevOpsUsername(), argocdSettings.GetWebhookAzureDevOpsPassword()))
azuredevopsHandler, err := azuredevops.New(azuredevops.Options.BasicAuth(argocdSettings.WebhookAzureDevOpsUsername, argocdSettings.WebhookAzureDevOpsPassword))
if err != nil {
return nil, fmt.Errorf("unable to init Azure DevOps webhook: %w", err)
}
@@ -102,6 +106,7 @@ func NewWebhookHandler(webhookParallelism int, argocdSettingsMgr *argosettings.S
}
func (h *WebhookHandler) startWorkerPool(webhookParallelism int) {
compLog := log.WithField("component", "applicationset-webhook")
for i := 0; i < webhookParallelism; i++ {
h.Add(1)
go func() {
@@ -111,7 +116,7 @@ func (h *WebhookHandler) startWorkerPool(webhookParallelism int) {
if !ok {
return
}
h.HandleEvent(payload)
guard.RecoverAndLog(func() { h.HandleEvent(payload) }, compLog, panicMsgAppSet)
}
}()
}
@@ -339,7 +344,7 @@ func genRevisionHasChanged(gen *v1alpha1.GitGenerator, revision string, touchedH
func gitGeneratorUsesURL(gen *v1alpha1.GitGenerator, webURL string, repoRegexp *regexp.Regexp) bool {
if !repoRegexp.MatchString(gen.RepoURL) {
log.Warnf("%s does not match %s", gen.RepoURL, repoRegexp.String())
log.Debugf("%s does not match %s", gen.RepoURL, repoRegexp.String())
return false
}
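Wrapping HandleEvent in guard.RecoverAndLog keeps a panicking payload from killing a worker goroutine and silently shrinking the pool for the life of the process. The guard's implementation is not part of this diff; a plausible shape for the pattern, using the file's logrus import:

    func recoverAndLog(fn func(), logger *log.Entry, panicMsg string) {
        defer func() {
            if r := recover(); r != nil {
                logger.WithField("recovered", r).Error(panicMsg)
            }
        }()
        fn()
    }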

assets/swagger.json (generated, 113 changes)
View File

@@ -374,56 +374,6 @@
}
}
},
"/api/v1/applications/{appName}/server-side-diff": {
"get": {
"tags": [
"ApplicationService"
],
"summary": "ServerSideDiff performs server-side diff calculation using dry-run apply",
"operationId": "ApplicationService_ServerSideDiff",
"parameters": [
{
"type": "string",
"name": "appName",
"in": "path",
"required": true
},
{
"type": "string",
"name": "appNamespace",
"in": "query"
},
{
"type": "string",
"name": "project",
"in": "query"
},
{
"type": "array",
"items": {
"type": "string"
},
"collectionFormat": "multi",
"name": "targetManifests",
"in": "query"
}
],
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/applicationApplicationServerSideDiffResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/runtimeError"
}
}
}
}
},
"/api/v1/applications/{application.metadata.name}": {
"put": {
"tags": [
@@ -5069,20 +5019,6 @@
}
}
},
"applicationApplicationServerSideDiffResponse": {
"type": "object",
"properties": {
"items": {
"type": "array",
"items": {
"$ref": "#/definitions/v1alpha1ResourceDiff"
}
},
"modified": {
"type": "boolean"
}
}
},
"applicationApplicationSyncRequest": {
"type": "object",
"title": "ApplicationSyncRequest is a request to apply the config state to live state",
@@ -7324,10 +7260,6 @@
"description": "ApplicationSetStrategy configures how generated Applications are updated in sequence.",
"type": "object",
"properties": {
"deletionOrder": {
"type": "string",
"title": "DeletionOrder allows specifying the order for deleting generated apps when progressive sync is enabled.\naccepts values \"AllAtOnce\" and \"Reverse\""
},
"rollingSync": {
"$ref": "#/definitions/v1alpha1ApplicationSetRolloutStrategy"
},
@@ -8703,20 +8635,12 @@
"title": "KustomizeOptions are options for kustomize to use when building manifests",
"properties": {
"binaryPath": {
"description": "Deprecated: Use settings.Settings instead. See: settings.Settings.KustomizeVersions.\nIf this field is set, it will be used as the Kustomize binary path.\nOtherwise, Versions is used.",
"type": "string",
"title": "BinaryPath holds optional path to kustomize binary"
},
"buildOptions": {
"type": "string",
"title": "BuildOptions is a string of build parameters to use when calling `kustomize build`"
},
"versions": {
"description": "Versions is a list of Kustomize versions and their corresponding binary paths and build options.",
"type": "array",
"items": {
"$ref": "#/definitions/v1alpha1KustomizeVersion"
}
}
}
},
@@ -8780,24 +8704,6 @@
}
}
},
"v1alpha1KustomizeVersion": {
"type": "object",
"title": "KustomizeVersion holds information about additional Kustomize versions",
"properties": {
"buildOptions": {
"type": "string",
"title": "BuildOptions that are specific to a Kustomize version"
},
"name": {
"type": "string",
"title": "Name holds Kustomize version name"
},
"path": {
"type": "string",
"title": "Path holds the corresponding binary path"
}
}
},
"v1alpha1ListGenerator": {
"type": "object",
"title": "ListGenerator include items info",
@@ -9119,10 +9025,6 @@
"bitbucketServer": {
"$ref": "#/definitions/v1alpha1PullRequestGeneratorBitbucketServer"
},
"continueOnRepoNotFoundError": {
"description": "ContinueOnRepoNotFoundError is a flag to continue the ApplicationSet Pull Request generator parameters generation even if the repository is not found.",
"type": "boolean"
},
"filters": {
"description": "Filters for which pull requests should be considered.",
"type": "array",
@@ -9252,9 +9154,6 @@
},
"targetBranchMatch": {
"type": "string"
},
"titleMatch": {
"type": "string"
}
}
},
@@ -9658,9 +9557,21 @@
"description": "ResourceActionParam represents a parameter for a resource action.\nIt includes a name, value, type, and an optional default value for the parameter.",
"type": "object",
"properties": {
"default": {
"description": "Default is the default value of the parameter, if any.",
"type": "string"
},
"name": {
"description": "Name is the name of the parameter.",
"type": "string"
},
"type": {
"description": "Type is the type of the parameter (e.g., string, integer).",
"type": "string"
},
"value": {
"description": "Value is the value of the parameter.",
"type": "string"
}
}
},

View File

@@ -79,6 +79,7 @@ func NewCommand() *cobra.Command {
enableScmProviders bool
webhookParallelism int
tokenRefStrictMode bool
maxResourcesStatusCount int
)
scheme := runtime.NewScheme()
_ = clientgoscheme.AddToScheme(scheme)
@@ -231,6 +232,7 @@ func NewCommand() *cobra.Command {
GlobalPreservedAnnotations: globalPreservedAnnotations,
GlobalPreservedLabels: globalPreservedLabels,
Metrics: &metrics,
MaxResourcesStatusCount: maxResourcesStatusCount,
}).SetupWithManager(mgr, enableProgressiveSyncs, maxConcurrentReconciliations); err != nil {
log.Error(err, "unable to create controller", "controller", "ApplicationSet")
os.Exit(1)
@@ -268,13 +270,14 @@ func NewCommand() *cobra.Command {
command.Flags().BoolVar(&repoServerPlaintext, "repo-server-plaintext", env.ParseBoolFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_REPO_SERVER_PLAINTEXT", false), "Disable TLS on connections to repo server")
command.Flags().BoolVar(&repoServerStrictTLS, "repo-server-strict-tls", env.ParseBoolFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_REPO_SERVER_STRICT_TLS", false), "Whether to use strict validation of the TLS cert presented by the repo server")
command.Flags().IntVar(&repoServerTimeoutSeconds, "repo-server-timeout-seconds", env.ParseNumFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_REPO_SERVER_TIMEOUT_SECONDS", 60, 0, math.MaxInt64), "Repo server RPC call timeout seconds.")
command.Flags().IntVar(&maxConcurrentReconciliations, "concurrent-reconciliations", env.ParseNumFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_CONCURRENT_RECONCILIATIONS", 10, 1, math.MaxInt), "Max concurrent reconciliations limit for the controller")
command.Flags().IntVar(&maxConcurrentReconciliations, "concurrent-reconciliations", env.ParseNumFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_CONCURRENT_RECONCILIATIONS", 10, 1, 100), "Max concurrent reconciliations limit for the controller")
command.Flags().StringVar(&scmRootCAPath, "scm-root-ca-path", env.StringFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_SCM_ROOT_CA_PATH", ""), "Provide Root CA Path for self-signed TLS Certificates")
command.Flags().StringSliceVar(&globalPreservedAnnotations, "preserved-annotations", env.StringsFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_GLOBAL_PRESERVED_ANNOTATIONS", []string{}, ","), "Sets global preserved field values for annotations")
command.Flags().StringSliceVar(&globalPreservedLabels, "preserved-labels", env.StringsFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_GLOBAL_PRESERVED_LABELS", []string{}, ","), "Sets global preserved field values for labels")
command.Flags().IntVar(&webhookParallelism, "webhook-parallelism-limit", env.ParseNumFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_WEBHOOK_PARALLELISM_LIMIT", 50, 1, 1000), "Number of webhook requests processed concurrently")
command.Flags().StringSliceVar(&metricsAplicationsetLabels, "metrics-applicationset-labels", []string{}, "List of Application labels that will be added to the argocd_applicationset_labels metric")
command.Flags().BoolVar(&enableGitHubAPIMetrics, "enable-github-api-metrics", env.ParseBoolFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_GITHUB_API_METRICS", false), "Enable GitHub API metrics for generators that use the GitHub API")
command.Flags().IntVar(&maxResourcesStatusCount, "max-resources-status-count", env.ParseNumFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT", 0, 0, math.MaxInt), "Max number of resources stored in appset status.")
return &command
}
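Most of these flag defaults come from environment variables with bounds, via env.ParseNumFromEnv(name, default, min, max). The helper itself is not shown in this diff; a stand-alone equivalent of the pattern:

    import (
        "os"
        "strconv"
    )

    func parseNumFromEnv(name string, def, min, max int) int {
        raw, ok := os.LookupEnv(name)
        if !ok || raw == "" {
            return def
        }
        n, err := strconv.Atoi(raw)
        if err != nil || n < min || n > max {
            return def // fall back to the default on unparsable or out-of-range input
        }
        return n
    }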

View File

@@ -50,6 +50,7 @@ func NewCommand() *cobra.Command {
var (
clientConfig clientcmd.ClientConfig
processorsCount int
namespace string
appLabelSelector string
logLevel string
logFormat string
@@ -177,6 +178,7 @@ func NewCommand() *cobra.Command {
clientConfig = addK8SFlagsToCmd(&command)
command.Flags().IntVar(&processorsCount, "processors-count", 1, "Processors count.")
command.Flags().StringVar(&appLabelSelector, "app-label-selector", "", "App label selector.")
command.Flags().StringVar(&namespace, "namespace", "", "Namespace which controller handles. Current namespace if empty.")
command.Flags().StringVar(&logLevel, "loglevel", env.StringFromEnv("ARGOCD_NOTIFICATIONS_CONTROLLER_LOGLEVEL", "info"), "Set the logging level. One of: debug|info|warn|error")
command.Flags().StringVar(&logFormat, "logformat", env.StringFromEnv("ARGOCD_NOTIFICATIONS_CONTROLLER_LOGFORMAT", "json"), "Set the logging format. One of: json|text")
command.Flags().IntVar(&metricsPort, "metrics-port", defaultMetricsPort, "Metrics port")

View File

@@ -415,6 +415,7 @@ func reconcileApplications(
},
settingsMgr,
stateCache,
projInformer,
server,
cache,
time.Second,
@@ -463,7 +464,7 @@ func reconcileApplications(
sources = append(sources, app.Spec.GetSource())
revisions = append(revisions, app.Spec.GetSource().TargetRevision)
res, err := appStateManager.CompareAppState(&app, proj, revisions, sources, false, false, nil, false)
res, err := appStateManager.CompareAppState(&app, proj, revisions, sources, false, false, nil, false, false)
if err != nil {
return nil, fmt.Errorf("error comparing app states: %w", err)
}

View File

@@ -14,7 +14,6 @@ import (
"github.com/redis/go-redis/v9"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/kubernetes/fake"
@@ -609,31 +608,7 @@ func NewGenClusterConfigCommand(pathOpts *clientcmd.PathOptions) *cobra.Command
clientConfig := clientcmd.NewDefaultClientConfig(*cfgAccess, &overrides)
conf, err := clientConfig.ClientConfig()
errors.CheckError(err)
// Seed a minimal in-memory Argo CD environment so settings retrieval succeeds
argoCDCM := &corev1.ConfigMap{
TypeMeta: metav1.TypeMeta{Kind: "ConfigMap", APIVersion: "v1"},
ObjectMeta: metav1.ObjectMeta{
Name: common.ArgoCDConfigMapName,
Namespace: ArgoCDNamespace,
Labels: map[string]string{
"app.kubernetes.io/part-of": "argocd",
},
},
}
argoCDSecret := &corev1.Secret{
TypeMeta: metav1.TypeMeta{Kind: "Secret", APIVersion: "v1"},
ObjectMeta: metav1.ObjectMeta{
Name: common.ArgoCDSecretName,
Namespace: ArgoCDNamespace,
Labels: map[string]string{
"app.kubernetes.io/part-of": "argocd",
},
},
Data: map[string][]byte{
"server.secretkey": []byte("test"),
},
}
kubeClientset := fake.NewClientset(argoCDCM, argoCDSecret)
kubeClientset := fake.NewClientset()
var awsAuthConf *v1alpha1.AWSAuthConfig
var execProviderConf *v1alpha1.ExecProviderConfig

View File

@@ -24,24 +24,24 @@ func TestRun_SignalHandling_GracefulShutdown(t *testing.T) {
},
}
var runErr error
var err error
doneCh := make(chan struct{})
go func() {
runErr = d.Run(t.Context(), &DashboardConfig{ClientOpts: &apiclient.ClientOptions{}})
err = d.Run(t.Context(), &DashboardConfig{ClientOpts: &apiclient.ClientOptions{}})
close(doneCh)
}()
// Allow some time for the dashboard to register the signal handler
time.Sleep(50 * time.Millisecond)
proc, procErr := os.FindProcess(os.Getpid())
require.NoErrorf(t, procErr, "failed to find process: %v", procErr)
sigErr := proc.Signal(syscall.SIGINT)
require.NoErrorf(t, sigErr, "failed to send SIGINT: %v", sigErr)
proc, err := os.FindProcess(os.Getpid())
require.NoErrorf(t, err, "failed to find process: %v", err)
err = proc.Signal(syscall.SIGINT)
require.NoErrorf(t, err, "failed to send SIGINT: %v", err)
select {
case <-doneCh:
require.NoError(t, runErr)
require.NoError(t, err)
case <-time.After(500 * time.Millisecond):
t.Fatal("timeout: dashboard.Run did not exit after SIGINT")
}
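The runErr variant gives the goroutine a result variable that only it writes, instead of sharing err with the test body, where the later proc, err := ... would otherwise reuse it across goroutines. The general shape of the safe pattern:

    func waitForRun(run func() error) error {
        var runErr error
        done := make(chan struct{})
        go func() {
            runErr = run() // written only by this goroutine
            close(done)    // the close happens before the receive below
        }()
        <-done
        return runErr // safe to read: the receive ordered it after the write
    }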

View File

@@ -30,11 +30,12 @@ func NewNotificationsCommand() *cobra.Command {
)
var argocdService service.Service
toolsCommand := cmd.NewToolsCommand(
"notifications",
"argocd admin notifications",
applications,
settings.GetFactorySettingsForCLI(argocdService, "argocd-notifications-secret", "argocd-notifications-cm", false),
settings.GetFactorySettingsForCLI(func() service.Service { return argocdService }, "argocd-notifications-secret", "argocd-notifications-cm", false),
func(clientConfig clientcmd.ClientConfig) {
k8sCfg, err := clientConfig.ClientConfig()
if err != nil {
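Passing func() service.Service { return argocdService } instead of the value defers the lookup to each call, so the factory can be constructed before argocdService is wired up. The pattern in miniature, with a hypothetical Service interface:

    type Service interface {
        Send(msg string) error
    }

    func newSender(getSvc func() Service) func(string) error {
        return func(msg string) error {
            // getSvc() is evaluated per call, so a Service assigned after
            // newSender ran is still picked up here
            return getSvc().Send(msg)
        }
    }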

View File

@@ -39,13 +39,9 @@ import (
"github.com/argoproj/argo-cd/v3/cmd/argocd/commands/headless"
"github.com/argoproj/argo-cd/v3/cmd/argocd/commands/utils"
cmdutil "github.com/argoproj/argo-cd/v3/cmd/util"
argocommon "github.com/argoproj/argo-cd/v3/common"
"github.com/argoproj/argo-cd/v3/controller"
argocdclient "github.com/argoproj/argo-cd/v3/pkg/apiclient"
"github.com/argoproj/argo-cd/v3/pkg/apiclient/application"
resourceutil "github.com/argoproj/gitops-engine/pkg/sync/resource"
clusterpkg "github.com/argoproj/argo-cd/v3/pkg/apiclient/cluster"
projectpkg "github.com/argoproj/argo-cd/v3/pkg/apiclient/project"
"github.com/argoproj/argo-cd/v3/pkg/apiclient/settings"
@@ -99,7 +95,6 @@ func NewApplicationCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comman
command.AddCommand(NewApplicationTerminateOpCommand(clientOpts))
command.AddCommand(NewApplicationEditCommand(clientOpts))
command.AddCommand(NewApplicationPatchCommand(clientOpts))
command.AddCommand(NewApplicationGetResourceCommand(clientOpts))
command.AddCommand(NewApplicationPatchResourceCommand(clientOpts))
command.AddCommand(NewApplicationDeleteResourceCommand(clientOpts))
command.AddCommand(NewApplicationResourceActionsCommand(clientOpts))
@@ -1286,7 +1281,6 @@ func NewApplicationDiffCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
revision string
localRepoRoot string
serverSideGenerate bool
serverSideDiff bool
localIncludes []string
appNamespace string
revisions []string
@@ -1349,22 +1343,6 @@ func NewApplicationDiffCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
argoSettings, err := settingsIf.Get(ctx, &settings.SettingsQuery{})
errors.CheckError(err)
diffOption := &DifferenceOption{}
hasServerSideDiffAnnotation := resourceutil.HasAnnotationOption(app, argocommon.AnnotationCompareOptions, "ServerSideDiff=true")
// Use annotation if flag not explicitly set
if !c.Flags().Changed("server-side-diff") {
serverSideDiff = hasServerSideDiffAnnotation
} else if serverSideDiff && !hasServerSideDiffAnnotation {
// Flag explicitly set to true, but app annotation is not set
fmt.Fprintf(os.Stderr, "Warning: Application does not have ServerSideDiff=true annotation.\n")
}
// Server side diff with local requires server side generate to be set as there will be a mismatch with client-generated manifests.
if serverSideDiff && local != "" && !serverSideGenerate {
log.Fatal("--server-side-diff with --local requires --server-side-generate.")
}
switch {
case app.Spec.HasMultipleSources() && len(revisions) > 0 && len(sourcePositions) > 0:
numOfSources := int64(len(app.Spec.GetSources()))
@@ -1420,8 +1398,7 @@ func NewApplicationDiffCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
}
}
proj := getProject(ctx, c, clientOpts, app.Spec.Project)
foundDiffs := findAndPrintDiff(ctx, app, proj.Project, resources, argoSettings, diffOption, ignoreNormalizerOpts, serverSideDiff, appIf, app.GetName(), app.GetNamespace())
foundDiffs := findandPrintDiff(ctx, app, proj.Project, resources, argoSettings, diffOption, ignoreNormalizerOpts)
if foundDiffs && exitCode {
os.Exit(diffExitCode)
}
@@ -1430,12 +1407,11 @@ func NewApplicationDiffCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
command.Flags().BoolVar(&refresh, "refresh", false, "Refresh application data when retrieving")
command.Flags().BoolVar(&hardRefresh, "hard-refresh", false, "Refresh application data as well as target manifests cache")
command.Flags().BoolVar(&exitCode, "exit-code", true, "Return non-zero exit code when there is a diff. May also return non-zero exit code if there is an error.")
command.Flags().IntVar(&diffExitCode, "diff-exit-code", 1, "Return specified exit code when there is a diff. Typical error code is 20 but use another exit code if you want to differentiate from the generic exit code (20) returned by all CLI commands.")
command.Flags().IntVar(&diffExitCode, "diff-exit-code", 1, "Return specified exit code when there is a diff. Typical error code is 20.")
command.Flags().StringVar(&local, "local", "", "Compare live app to a local manifests")
command.Flags().StringVar(&revision, "revision", "", "Compare live app to a particular revision")
command.Flags().StringVar(&localRepoRoot, "local-repo-root", "/", "Path to the repository root. Used together with --local allows setting the repository root")
command.Flags().BoolVar(&serverSideGenerate, "server-side-generate", false, "Used with --local, this will send your manifests to the server for diffing")
command.Flags().BoolVar(&serverSideDiff, "server-side-diff", false, "Use server-side diff to calculate the diff. This will default to true if the ServerSideDiff annotation is set on the application.")
command.Flags().StringArrayVar(&localIncludes, "local-include", []string{"*.yaml", "*.yml", "*.json"}, "Used with --server-side-generate, specify patterns of filenames to send. Matching is based on filename and not path.")
command.Flags().StringVarP(&appNamespace, "app-namespace", "N", "", "Only render the difference in namespace")
command.Flags().StringArrayVar(&revisions, "revisions", []string{}, "Show manifests at specific revisions for source position in source-positions")
@@ -1445,101 +1421,6 @@ func NewApplicationDiffCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
return command
}
// printResourceDiff prints the diff header and calls cli.PrintDiff for a resource
func printResourceDiff(group, kind, namespace, name string, live, target *unstructured.Unstructured) {
fmt.Printf("\n===== %s/%s %s/%s ======\n", group, kind, namespace, name)
_ = cli.PrintDiff(name, live, target)
}
// findAndPrintServerSideDiff performs a server-side diff by making requests to the api server and prints the response
func findAndPrintServerSideDiff(ctx context.Context, app *argoappv1.Application, items []objKeyLiveTarget, resources *application.ManagedResourcesResponse, appIf application.ApplicationServiceClient, appName, appNs string) bool {
// Process each item for server-side diff
foundDiffs := false
for _, item := range items {
if item.target != nil && hook.IsHook(item.target) || item.live != nil && hook.IsHook(item.live) {
continue
}
// For server-side diff, we need to create aligned arrays for this specific resource
var liveResource *argoappv1.ResourceDiff
var targetManifest string
if item.live != nil {
for _, res := range resources.Items {
if res.Group == item.key.Group && res.Kind == item.key.Kind &&
res.Namespace == item.key.Namespace && res.Name == item.key.Name {
liveResource = res
break
}
}
}
if liveResource == nil {
// Create empty live resource for creation case
liveResource = &argoappv1.ResourceDiff{
Group: item.key.Group,
Kind: item.key.Kind,
Namespace: item.key.Namespace,
Name: item.key.Name,
LiveState: "",
TargetState: "",
Modified: true,
}
}
if item.target != nil {
jsonBytes, err := json.Marshal(item.target)
if err != nil {
errors.CheckError(fmt.Errorf("error marshaling target object: %w", err))
}
targetManifest = string(jsonBytes)
}
// Call server-side diff for this individual resource
serverSideDiffQuery := &application.ApplicationServerSideDiffQuery{
AppName: &appName,
AppNamespace: &appNs,
Project: &app.Spec.Project,
LiveResources: []*argoappv1.ResourceDiff{liveResource},
TargetManifests: []string{targetManifest},
}
serverSideDiffRes, err := appIf.ServerSideDiff(ctx, serverSideDiffQuery)
if err != nil {
errors.CheckError(err)
}
// Extract diff for this resource
for _, resultItem := range serverSideDiffRes.Items {
if resultItem.Hook || (!resultItem.Modified && resultItem.TargetState != "" && resultItem.LiveState != "") {
continue
}
if resultItem.Modified || resultItem.TargetState == "" || resultItem.LiveState == "" {
var live, target *unstructured.Unstructured
if resultItem.TargetState != "" && resultItem.TargetState != "null" {
target = &unstructured.Unstructured{}
err = json.Unmarshal([]byte(resultItem.TargetState), target)
errors.CheckError(err)
}
if resultItem.LiveState != "" && resultItem.LiveState != "null" {
live = &unstructured.Unstructured{}
err = json.Unmarshal([]byte(resultItem.LiveState), live)
errors.CheckError(err)
}
// Print resulting diff for this resource
foundDiffs = true
printResourceDiff(resultItem.Group, resultItem.Kind, resultItem.Namespace, resultItem.Name, live, target)
}
}
}
return foundDiffs
}
// DifferenceOption struct to store diff options
type DifferenceOption struct {
local string
@@ -1551,15 +1432,47 @@ type DifferenceOption struct {
revisions []string
}
// findAndPrintDiff ... Prints difference between application current state and state stored in git or locally, returns boolean as true if difference is found else returns false
func findAndPrintDiff(ctx context.Context, app *argoappv1.Application, proj *argoappv1.AppProject, resources *application.ManagedResourcesResponse, argoSettings *settings.Settings, diffOptions *DifferenceOption, ignoreNormalizerOpts normalizers.IgnoreNormalizerOpts, useServerSideDiff bool, appIf application.ApplicationServiceClient, appName, appNs string) bool {
// findandPrintDiff ... Prints difference between application current state and state stored in git or locally, returns boolean as true if difference is found else returns false
func findandPrintDiff(ctx context.Context, app *argoappv1.Application, proj *argoappv1.AppProject, resources *application.ManagedResourcesResponse, argoSettings *settings.Settings, diffOptions *DifferenceOption, ignoreNormalizerOpts normalizers.IgnoreNormalizerOpts) bool {
var foundDiffs bool
items, err := prepareObjectsForDiff(ctx, app, proj, resources, argoSettings, diffOptions)
liveObjs, err := cmdutil.LiveObjects(resources.Items)
errors.CheckError(err)
items := make([]objKeyLiveTarget, 0)
switch {
case diffOptions.local != "":
localObjs := groupObjsByKey(getLocalObjects(ctx, app, proj, diffOptions.local, diffOptions.localRepoRoot, argoSettings.AppLabelKey, diffOptions.cluster.Info.ServerVersion, diffOptions.cluster.Info.APIVersions, argoSettings.KustomizeOptions, argoSettings.TrackingMethod), liveObjs, app.Spec.Destination.Namespace)
items = groupObjsForDiff(resources, localObjs, items, argoSettings, app.InstanceName(argoSettings.ControllerNamespace), app.Spec.Destination.Namespace)
case diffOptions.revision != "" || len(diffOptions.revisions) > 0:
var unstructureds []*unstructured.Unstructured
for _, mfst := range diffOptions.res.Manifests {
obj, err := argoappv1.UnmarshalToUnstructured(mfst)
errors.CheckError(err)
unstructureds = append(unstructureds, obj)
}
groupedObjs := groupObjsByKey(unstructureds, liveObjs, app.Spec.Destination.Namespace)
items = groupObjsForDiff(resources, groupedObjs, items, argoSettings, app.InstanceName(argoSettings.ControllerNamespace), app.Spec.Destination.Namespace)
case diffOptions.serversideRes != nil:
var unstructureds []*unstructured.Unstructured
for _, mfst := range diffOptions.serversideRes.Manifests {
obj, err := argoappv1.UnmarshalToUnstructured(mfst)
errors.CheckError(err)
unstructureds = append(unstructureds, obj)
}
groupedObjs := groupObjsByKey(unstructureds, liveObjs, app.Spec.Destination.Namespace)
items = groupObjsForDiff(resources, groupedObjs, items, argoSettings, app.InstanceName(argoSettings.ControllerNamespace), app.Spec.Destination.Namespace)
default:
for i := range resources.Items {
res := resources.Items[i]
live := &unstructured.Unstructured{}
err := json.Unmarshal([]byte(res.NormalizedLiveState), &live)
errors.CheckError(err)
if useServerSideDiff {
return findAndPrintServerSideDiff(ctx, app, items, resources, appIf, appName, appNs)
target := &unstructured.Unstructured{}
err = json.Unmarshal([]byte(res.TargetState), &target)
errors.CheckError(err)
items = append(items, objKeyLiveTarget{kube.NewResourceKey(res.Group, res.Kind, res.Namespace, res.Name), live, target})
}
}
for _, item := range items {
@@ -1586,6 +1499,7 @@ func findAndPrintDiff(ctx context.Context, app *argoappv1.Application, proj *arg
errors.CheckError(err)
if diffRes.Modified || item.target == nil || item.live == nil {
fmt.Printf("\n===== %s/%s %s/%s ======\n", item.key.Group, item.key.Kind, item.key.Namespace, item.key.Name)
var live *unstructured.Unstructured
var target *unstructured.Unstructured
if item.target != nil && item.live != nil {
@@ -1597,8 +1511,10 @@ func findAndPrintDiff(ctx context.Context, app *argoappv1.Application, proj *arg
live = item.live
target = item.target
}
foundDiffs = true
printResourceDiff(item.key.Group, item.key.Kind, item.key.Namespace, item.key.Name, live, target)
if !foundDiffs {
foundDiffs = true
}
_ = cli.PrintDiff(item.key.Name, live, target)
}
}
return foundDiffs
@@ -2380,11 +2296,7 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
fmt.Printf("====== Previewing differences between live and desired state of application %s ======\n", appQualifiedName)
proj := getProject(ctx, c, clientOpts, app.Spec.Project)
// Check if application has ServerSideDiff annotation
serverSideDiff := resourceutil.HasAnnotationOption(app, argocommon.AnnotationCompareOptions, "ServerSideDiff=true")
foundDiffs = findAndPrintDiff(ctx, app, proj.Project, resources, argoSettings, diffOption, ignoreNormalizerOpts, serverSideDiff, appIf, appName, appNs)
foundDiffs = findandPrintDiff(ctx, app, proj.Project, resources, argoSettings, diffOption, ignoreNormalizerOpts)
if !foundDiffs {
fmt.Printf("====== No Differences found ======\n")
// if no differences found, then no need to sync
@@ -3607,60 +3519,3 @@ func NewApplicationConfirmDeletionCommand(clientOpts *argocdclient.ClientOptions
command.Flags().StringVarP(&appNamespace, "app-namespace", "N", "", "Namespace of the target application where the source will be appended")
return command
}
// prepareObjectsForDiff prepares objects for diffing using the switch statement
// to handle different diff options and building the objKeyLiveTarget items
func prepareObjectsForDiff(ctx context.Context, app *argoappv1.Application, proj *argoappv1.AppProject, resources *application.ManagedResourcesResponse, argoSettings *settings.Settings, diffOptions *DifferenceOption) ([]objKeyLiveTarget, error) {
liveObjs, err := cmdutil.LiveObjects(resources.Items)
if err != nil {
return nil, err
}
items := make([]objKeyLiveTarget, 0)
switch {
case diffOptions.local != "":
localObjs := groupObjsByKey(getLocalObjects(ctx, app, proj, diffOptions.local, diffOptions.localRepoRoot, argoSettings.AppLabelKey, diffOptions.cluster.Info.ServerVersion, diffOptions.cluster.Info.APIVersions, argoSettings.KustomizeOptions, argoSettings.TrackingMethod), liveObjs, app.Spec.Destination.Namespace)
items = groupObjsForDiff(resources, localObjs, items, argoSettings, app.InstanceName(argoSettings.ControllerNamespace), app.Spec.Destination.Namespace)
case diffOptions.revision != "" || len(diffOptions.revisions) > 0:
var unstructureds []*unstructured.Unstructured
for _, mfst := range diffOptions.res.Manifests {
obj, err := argoappv1.UnmarshalToUnstructured(mfst)
if err != nil {
return nil, err
}
unstructureds = append(unstructureds, obj)
}
groupedObjs := groupObjsByKey(unstructureds, liveObjs, app.Spec.Destination.Namespace)
items = groupObjsForDiff(resources, groupedObjs, items, argoSettings, app.InstanceName(argoSettings.ControllerNamespace), app.Spec.Destination.Namespace)
case diffOptions.serversideRes != nil:
var unstructureds []*unstructured.Unstructured
for _, mfst := range diffOptions.serversideRes.Manifests {
obj, err := argoappv1.UnmarshalToUnstructured(mfst)
if err != nil {
return nil, err
}
unstructureds = append(unstructureds, obj)
}
groupedObjs := groupObjsByKey(unstructureds, liveObjs, app.Spec.Destination.Namespace)
items = groupObjsForDiff(resources, groupedObjs, items, argoSettings, app.InstanceName(argoSettings.ControllerNamespace), app.Spec.Destination.Namespace)
default:
for i := range resources.Items {
res := resources.Items[i]
live := &unstructured.Unstructured{}
err := json.Unmarshal([]byte(res.NormalizedLiveState), &live)
if err != nil {
return nil, err
}
target := &unstructured.Unstructured{}
err = json.Unmarshal([]byte(res.TargetState), &target)
if err != nil {
return nil, err
}
items = append(items, objKeyLiveTarget{kube.NewResourceKey(res.Group, res.Kind, res.Namespace, res.Name), live, target})
}
}
return items, nil
}
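One side of this hunk extracts the item-building switch into prepareObjectsForDiff, which returns errors to its caller instead of invoking errors.CheckError (log and exit) at each step. In general form, with stand-in types:

    func buildItem(raw string) (map[string]any, error) {
        var obj map[string]any
        if err := json.Unmarshal([]byte(raw), &obj); err != nil {
            return nil, fmt.Errorf("unmarshal state: %w", err) // caller decides, no exit
        }
        return obj, nil
    }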

View File

@@ -2,14 +2,11 @@ package commands
import (
"bytes"
"encoding/json"
"strings"
"testing"
"text/tabwriter"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
)
@@ -120,561 +117,3 @@ func TestPrintResourcesTree(t *testing.T) {
assert.Equal(t, expectation, output)
}
func TestFilterFieldsFromObject(t *testing.T) {
tests := []struct {
name string
obj unstructured.Unstructured
filteredFields []string
expectedFields []string
unexpectedFields []string
}{
{
name: "filter nested field",
obj: unstructured.Unstructured{
Object: map[string]any{
"apiVersion": "vX",
"kind": "kind",
"metadata": map[string]any{
"name": "test",
},
"spec": map[string]any{
"testfield": map[string]any{
"nestedtest": "test",
},
"testfield2": "test",
},
},
},
filteredFields: []string{"spec.testfield.nestedtest"},
expectedFields: []string{"spec.testfield.nestedtest"},
unexpectedFields: []string{"spec.testfield2"},
},
{
name: "filter multiple fields",
obj: unstructured.Unstructured{
Object: map[string]any{
"apiVersion": "vX",
"kind": "kind",
"metadata": map[string]any{
"name": "test",
},
"spec": map[string]any{
"testfield": map[string]any{
"nestedtest": "test",
},
"testfield2": "test",
"testfield3": "deleteme",
},
},
},
filteredFields: []string{"spec.testfield.nestedtest", "spec.testfield3"},
expectedFields: []string{"spec.testfield.nestedtest"},
unexpectedFields: []string{"spec.testfield2"},
},
{
name: "filter nested list object",
obj: unstructured.Unstructured{
Object: map[string]any{
"apiVersion": "vX",
"kind": "kind",
"metadata": map[string]any{
"name": "test",
},
"spec": map[string]any{
"testfield": map[string]any{
"nestedtest": "test",
},
"testfield2": "test",
},
},
},
filteredFields: []string{"spec.testfield.nestedtest"},
expectedFields: []string{"spec.testfield.nestedtest"},
unexpectedFields: []string{"spec.testfield2"},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
tt.obj.SetName("test-object")
filtered := filterFieldsFromObject(&tt.obj, tt.filteredFields)
for _, field := range tt.expectedFields {
fieldPath := strings.Split(field, ".")
_, exists, err := unstructured.NestedFieldCopy(filtered.Object, fieldPath...)
require.NoError(t, err)
assert.True(t, exists, "Expected field %s to exist", field)
}
for _, field := range tt.unexpectedFields {
fieldPath := strings.Split(field, ".")
_, exists, err := unstructured.NestedFieldCopy(filtered.Object, fieldPath...)
require.NoError(t, err)
assert.False(t, exists, "Expected field %s to not exist", field)
}
assert.Equal(t, tt.obj.GetName(), filtered.GetName())
})
}
}
func TestExtractNestedItem(t *testing.T) {
tests := []struct {
name string
obj map[string]any
fields []string
depth int
expected map[string]any
}{
{
name: "extract simple nested item",
obj: map[string]any{
"listofitems": []any{
map[string]any{
"extract": "123",
"dontextract": "abc",
},
map[string]any{
"extract": "456",
"dontextract": "def",
},
map[string]any{
"extract": "789",
"dontextract": "ghi",
},
},
},
fields: []string{"listofitems", "extract"},
depth: 0,
expected: map[string]any{
"listofitems": []any{
map[string]any{
"extract": "123",
},
map[string]any{
"extract": "456",
},
map[string]any{
"extract": "789",
},
},
},
},
{
name: "double nested list of objects",
obj: map[string]any{
"listofitems": []any{
map[string]any{
"doublenested": []any{
map[string]any{
"extract": "123",
},
},
"dontextract": "abc",
},
map[string]any{
"doublenested": []any{
map[string]any{
"extract": "456",
},
},
"dontextract": "def",
},
map[string]any{
"doublenested": []any{
map[string]any{
"extract": "789",
},
},
"dontextract": "ghi",
},
},
},
fields: []string{"listofitems", "doublenested", "extract"},
depth: 0,
expected: map[string]any{
"listofitems": []any{
map[string]any{
"doublenested": []any{
map[string]any{
"extract": "123",
},
},
},
map[string]any{
"doublenested": []any{
map[string]any{
"extract": "456",
},
},
},
map[string]any{
"doublenested": []any{
map[string]any{
"extract": "789",
},
},
},
},
},
},
{
name: "depth is greater then list of field size",
obj: map[string]any{"test1": "1234567890"},
fields: []string{"test1"},
depth: 4,
expected: nil,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
filteredObj := extractNestedItem(tt.obj, tt.fields, tt.depth)
assert.Equal(t, tt.expected, filteredObj, "Did not get the correct filtered obj")
})
}
}
func TestExtractItemsFromList(t *testing.T) {
tests := []struct {
name string
list []any
fields []string
expected []any
}{
{
name: "test simple field",
list: []any{
map[string]any{"extract": "value1", "dontextract": "valueA"},
map[string]any{"extract": "value2", "dontextract": "valueB"},
map[string]any{"extract": "value3", "dontextract": "valueC"},
},
fields: []string{"extract"},
expected: []any{
map[string]any{"extract": "value1"},
map[string]any{"extract": "value2"},
map[string]any{"extract": "value3"},
},
},
{
name: "test simple field with some depth",
list: []any{
map[string]any{
"test1": map[string]any{
"test2": map[string]any{
"extract": "123",
"dontextract": "abc",
},
},
},
map[string]any{
"test1": map[string]any{
"test2": map[string]any{
"extract": "456",
"dontextract": "def",
},
},
},
map[string]any{
"test1": map[string]any{
"test2": map[string]any{
"extract": "789",
"dontextract": "ghi",
},
},
},
},
fields: []string{"test1", "test2", "extract"},
expected: []any{
map[string]any{
"test1": map[string]any{
"test2": map[string]any{
"extract": "123",
},
},
},
map[string]any{
"test1": map[string]any{
"test2": map[string]any{
"extract": "456",
},
},
},
map[string]any{
"test1": map[string]any{
"test2": map[string]any{
"extract": "789",
},
},
},
},
},
{
name: "test a missing field",
list: []any{
map[string]any{"test1": "123"},
map[string]any{"test1": "456"},
map[string]any{"test1": "789"},
},
fields: []string{"test2"},
expected: nil,
},
{
name: "test getting an object",
list: []any{
map[string]any{
"extract": map[string]any{
"keyA": "valueA",
"keyB": "valueB",
"keyC": "valueC",
},
"dontextract": map[string]any{
"key1": "value1",
"key2": "value2",
"key3": "value3",
},
},
map[string]any{
"extract": map[string]any{
"keyD": "valueD",
"keyE": "valueE",
"keyF": "valueF",
},
"dontextract": map[string]any{
"key4": "value4",
"key5": "value5",
"key6": "value6",
},
},
},
fields: []string{"extract"},
expected: []any{
map[string]any{
"extract": map[string]any{
"keyA": "valueA",
"keyB": "valueB",
"keyC": "valueC",
},
},
map[string]any{
"extract": map[string]any{
"keyD": "valueD",
"keyE": "valueE",
"keyF": "valueF",
},
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
extractedList := extractItemsFromList(tt.list, tt.fields)
assert.Equal(t, tt.expected, extractedList, "Lists were not equal")
})
}
}
func TestReconstructObject(t *testing.T) {
tests := []struct {
name string
extracted []any
fields []string
depth int
expected map[string]any
}{
{
name: "simple single field at depth 0",
extracted: []any{"value1", "value2"},
fields: []string{"items"},
depth: 0,
expected: map[string]any{
"items": []any{"value1", "value2"},
},
},
{
name: "object nested at depth 1",
extracted: []any{map[string]any{"key": "value"}},
fields: []string{"test1", "test2"},
depth: 1,
expected: map[string]any{
"test1": map[string]any{
"test2": []any{map[string]any{"key": "value"}},
},
},
},
{
name: "empty list of extracted items",
extracted: []any{},
fields: []string{"test1"},
depth: 0,
expected: map[string]any{
"test1": []any{},
},
},
{
name: "complex object nesteed at depth 2",
extracted: []any{map[string]any{
"obj1": map[string]any{
"key1": "value1",
"key2": "value2",
},
"obj2": map[string]any{
"keyA": "valueA",
"keyB": "valueB",
},
}},
fields: []string{"test1", "test2", "test3"},
depth: 2,
expected: map[string]any{
"test1": map[string]any{
"test2": map[string]any{
"test3": []any{
map[string]any{
"obj1": map[string]any{
"key1": "value1",
"key2": "value2",
},
"obj2": map[string]any{
"keyA": "valueA",
"keyB": "valueB",
},
},
},
},
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
filteredObj := reconstructObject(tt.extracted, tt.fields, tt.depth)
assert.Equal(t, tt.expected, filteredObj, "objects were not equal")
})
}
}
func TestPrintManifests(t *testing.T) {
obj := unstructured.Unstructured{
Object: map[string]any{
"apiVersion": "vX",
"kind": "test",
"metadata": map[string]any{
"name": "unit-test",
},
"spec": map[string]any{
"testfield": "testvalue",
},
},
}
expectedYAML := `apiVersion: vX
kind: test
metadata:
name: unit-test
spec:
testfield: testvalue
`
output, _ := captureOutput(func() error {
printManifests(&[]unstructured.Unstructured{obj}, false, true, "yaml")
return nil
})
assert.Equal(t, expectedYAML+"\n", output, "Incorrect yaml output for printManifests")
output, _ = captureOutput(func() error {
printManifests(&[]unstructured.Unstructured{obj, obj}, false, true, "yaml")
return nil
})
assert.Equal(t, expectedYAML+"\n---\n"+expectedYAML+"\n", output, "Incorrect yaml output with multiple objs.")
expectedJSON := `{
"apiVersion": "vX",
"kind": "test",
"metadata": {
"name": "unit-test"
},
"spec": {
"testfield": "testvalue"
}
}`
output, _ = captureOutput(func() error {
printManifests(&[]unstructured.Unstructured{obj}, false, true, "json")
return nil
})
assert.Equal(t, expectedJSON+"\n", output, "Incorrect json output.")
output, _ = captureOutput(func() error {
printManifests(&[]unstructured.Unstructured{obj, obj}, false, true, "json")
return nil
})
assert.Equal(t, expectedJSON+"\n---\n"+expectedJSON+"\n", output, "Incorrect json output with multiple objs.")
output, _ = captureOutput(func() error {
printManifests(&[]unstructured.Unstructured{obj}, true, true, "wide")
return nil
})
assert.Contains(t, output, "FIELD RESOURCE NAME VALUE", "Missing or incorrect header line for table print with showing names.")
assert.Contains(t, output, "apiVersion unit-test vX", "Missing or incorrect row in table related to apiVersion with showing names.")
assert.Contains(t, output, "kind unit-test test", "Missing or incorrect line in the table related to kind with showing names.")
assert.Contains(t, output, "spec.testfield unit-test testvalue", "Missing or incorrect line in the table related to spec.testfield with showing names.")
assert.NotContains(t, output, "metadata.name unit-test testvalue", "Missing or incorrect line in the table related to metadata.name with showing names.")
output, _ = captureOutput(func() error {
printManifests(&[]unstructured.Unstructured{obj}, true, false, "wide")
return nil
})
assert.Contains(t, output, "FIELD VALUE", "Missing or incorrect header line for table print with not showing names.")
assert.Contains(t, output, "apiVersion vX", "Missing or incorrect row in table related to apiVersion with not showing names.")
assert.Contains(t, output, "kind test", "Missing or incorrect row in the table related to kind with not showing names.")
assert.Contains(t, output, "spec.testfield testvalue", "Missing or incorrect row in the table related to spec.testefield with not showing names.")
assert.NotContains(t, output, "metadata.name testvalue", "Missing or incorrect row in the tbale related to metadata.name with not showing names.")
}
func TestPrintManifests_FilterNestedListObject_Wide(t *testing.T) {
obj := unstructured.Unstructured{
Object: map[string]any{
"apiVersion": "vX",
"kind": "kind",
"metadata": map[string]any{
"name": "unit-test",
},
"status": map[string]any{
"podIPs": []map[string]any{
{
"IP": "127.0.0.1",
},
{
"IP": "127.0.0.2",
},
},
},
},
}
output, _ := captureOutput(func() error {
v, err := json.Marshal(&obj)
if err != nil {
return nil
}
var obj2 *unstructured.Unstructured
err = json.Unmarshal([]byte(v), &obj2)
if err != nil {
return nil
}
printManifests(&[]unstructured.Unstructured{*obj2}, false, true, "wide")
return nil
})
// Verify table header
assert.Contains(t, output, "FIELD RESOURCE NAME VALUE", "Missing a line in the table")
assert.Contains(t, output, "apiVersion unit-test vX", "Test for apiVersion field failed for wide output")
assert.Contains(t, output, "kind unit-test kind", "Test for kind field failed for wide output")
assert.Contains(t, output, "status.podIPs[0].IP unit-test 127.0.0.1", "Test for podIP array index 0 field failed for wide output")
assert.Contains(t, output, "status.podIPs[1].IP unit-test 127.0.0.2", "Test for podIP array index 1 field failed for wide output")
}

View File

@@ -1,22 +1,16 @@
package commands
import (
"encoding/json"
"fmt"
"os"
"strconv"
"strings"
"text/tabwriter"
"gopkg.in/yaml.v3"
"github.com/argoproj/argo-cd/v3/cmd/argocd/commands/utils"
"github.com/argoproj/argo-cd/v3/cmd/util"
"github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/types"
"k8s.io/utils/ptr"
@@ -28,273 +22,15 @@ import (
utilio "github.com/argoproj/argo-cd/v3/util/io"
)
// NewApplicationGetResourceCommand returns a new instance of the `app get-resource` command
func NewApplicationGetResourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
resourceName string
kind string
project string
filteredFields []string
showManagedFields bool
output string
)
command := &cobra.Command{
Use: "get-resource APPNAME",
Short: "Get details about the live Kubernetes manifests of a resource in an application. The filter-fields flag can be used to only display fields you want to see.",
Example: `
# Get a specific resource, Pod my-app-pod, in 'my-app' by name in wide format
argocd app get-resource my-app --kind Pod --resource-name my-app-pod
# Get a specific resource, Pod my-app-pod, in 'my-app' by name in yaml format
argocd app get-resource my-app --kind Pod --resource-name my-app-pod -o yaml
# Get a specific resource, Pod my-app-pod, in 'my-app' by name in json format
argocd app get-resource my-app --kind Pod --resource-name my-app-pod -o json
# Get details about all Pods in the application
argocd app get-resource my-app --kind Pod
# Get a specific resource with managed fields, Pod my-app-pod, in 'my-app' by name in wide format
argocd app get-resource my-app --kind Pod --resource-name my-app-pod --show-managed-fields
# Get the details of a specific field in a resource in 'my-app' in the wide format
argocd app get-resource my-app --kind Pod --filter-fields status.podIP
# Get the details of multiple specific fields in a specific resource in 'my-app' in the wide format
argocd app get-resource my-app --kind Pod --resource-name my-app-pod --filter-fields status.podIP,status.hostIP`,
}
command.Run = func(c *cobra.Command, args []string) {
ctx := c.Context()
if len(args) != 1 {
c.HelpFunc()(c, args)
os.Exit(1)
}
appName, appNs := argo.ParseFromQualifiedName(args[0], "")
conn, appIf := headless.NewClientOrDie(clientOpts, c).NewApplicationClientOrDie()
defer utilio.Close(conn)
tree, err := appIf.ResourceTree(ctx, &applicationpkg.ResourcesQuery{
ApplicationName: &appName,
AppNamespace: &appNs,
})
errors.CheckError(err)
// Get manifests of resources
// If resource name is "" find all resources of that kind
var resources []unstructured.Unstructured
var fetchedStr string
for _, r := range tree.Nodes {
if (resourceName != "" && r.Name != resourceName) || r.Kind != kind {
continue
}
resource, err := appIf.GetResource(ctx, &applicationpkg.ApplicationResourceRequest{
Name: &appName,
AppNamespace: &appNs,
Group: &r.Group,
Kind: &r.Kind,
Namespace: &r.Namespace,
Project: &project,
ResourceName: &r.Name,
Version: &r.Version,
})
errors.CheckError(err)
manifest := resource.GetManifest()
var obj *unstructured.Unstructured
err = json.Unmarshal([]byte(manifest), &obj)
errors.CheckError(err)
if !showManagedFields {
unstructured.RemoveNestedField(obj.Object, "metadata", "managedFields")
}
if len(filteredFields) != 0 {
obj = filterFieldsFromObject(obj, filteredFields)
}
fetchedStr += obj.GetName() + ", "
resources = append(resources, *obj)
}
printManifests(&resources, len(filteredFields) > 0, resourceName == "", output)
if fetchedStr != "" {
fetchedStr = strings.TrimSuffix(fetchedStr, ", ")
}
log.Infof("Resources '%s' fetched", fetchedStr)
}
command.Flags().StringVar(&resourceName, "resource-name", "", "Name of the resource; if omitted, outputs details of all resources of the specified kind")
command.Flags().StringVar(&kind, "kind", "", "Kind of resource [REQUIRED]")
err := command.MarkFlagRequired("kind")
errors.CheckError(err)
command.Flags().StringVar(&project, "project", "", "Project of resource")
command.Flags().StringSliceVar(&filteredFields, "filter-fields", nil, "A comma-separated list of fields to display; if not provided, the entire manifest is output")
command.Flags().BoolVar(&showManagedFields, "show-managed-fields", false, "Show managed fields in the output manifest")
command.Flags().StringVarP(&output, "output", "o", "wide", "Format of the output, wide, yaml, or json")
return command
}
// filterFieldsFromObject creates a new unstructured object containing only the specified fields from the source object.
func filterFieldsFromObject(obj *unstructured.Unstructured, filteredFields []string) *unstructured.Unstructured {
var filteredObj unstructured.Unstructured
filteredObj.Object = make(map[string]any)
for _, f := range filteredFields {
fields := strings.Split(f, ".")
value, exists, err := unstructured.NestedFieldCopy(obj.Object, fields...)
if exists {
errors.CheckError(err)
err = unstructured.SetNestedField(filteredObj.Object, value, fields...)
errors.CheckError(err)
} else {
// If it doesn't exist, assume it is nested inside a list of objects
value := extractNestedItem(obj.Object, fields, 0)
filteredObj.Object = value
}
}
filteredObj.SetName(obj.GetName())
return &filteredObj
}
// extractNestedItem recursively extracts an item that may be nested inside a list of objects.
func extractNestedItem(obj map[string]any, fields []string, depth int) map[string]any {
if depth >= len(fields) {
return nil
}
value, exists, _ := unstructured.NestedFieldCopy(obj, fields[:depth+1]...)
list, ok := value.([]any)
if !exists || !ok {
return extractNestedItem(obj, fields, depth+1)
}
extractedItems := extractItemsFromList(list, fields[depth+1:])
if len(extractedItems) == 0 {
for _, e := range list {
if o, ok := e.(map[string]any); ok {
result := extractNestedItem(o, fields[depth+1:], 0)
extractedItems = append(extractedItems, result)
}
}
}
filteredObj := reconstructObject(extractedItems, fields, depth)
return filteredObj
}
// extractItemsFromList processes a list of objects and extracts specific fields from each item.
func extractItemsFromList(list []any, fields []string) []any {
var extractedObjs []any
for _, e := range list {
extractedObj := make(map[string]any)
if o, ok := e.(map[string]any); ok {
value, exists, _ := unstructured.NestedFieldCopy(o, fields...)
if !exists {
continue
}
err := unstructured.SetNestedField(extractedObj, value, fields...)
errors.CheckError(err)
extractedObjs = append(extractedObjs, extractedObj)
}
}
return extractedObjs
}
// reconstructObject rebuilds the original object structure by placing extracted items back into their proper nested location.
func reconstructObject(extracted []any, fields []string, depth int) map[string]any {
obj := make(map[string]any)
err := unstructured.SetNestedField(obj, extracted, fields[:depth+1]...)
errors.CheckError(err)
return obj
}
// printManifests outputs resource manifests in the specified format (wide, JSON, or YAML).
func printManifests(objs *[]unstructured.Unstructured, filteredFields bool, showName bool, output string) {
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
if showName {
fmt.Fprintf(w, "FIELD\tRESOURCE NAME\tVALUE\n")
} else {
fmt.Fprintf(w, "FIELD\tVALUE\n")
}
for i, o := range *objs {
if output == "json" || output == "yaml" {
var formattedManifest []byte
var err error
if output == "json" {
formattedManifest, err = json.MarshalIndent(o.Object, "", " ")
} else {
formattedManifest, err = yaml.Marshal(o.Object)
}
errors.CheckError(err)
fmt.Println(string(formattedManifest))
if len(*objs) > 1 && i != len(*objs)-1 {
fmt.Println("---")
}
} else {
name := o.GetName()
if filteredFields {
unstructured.RemoveNestedField(o.Object, "metadata", "name")
}
printManifestAsTable(w, name, showName, o.Object, "")
}
}
if output != "json" && output != "yaml" {
err := w.Flush()
errors.CheckError(err)
}
}
// printManifestAsTable recursively prints a manifest object as a tabular view with nested fields flattened.
func printManifestAsTable(w *tabwriter.Writer, name string, showName bool, obj map[string]any, parentField string) {
for key, value := range obj {
field := parentField + key
switch v := value.(type) {
case map[string]any:
printManifestAsTable(w, name, showName, v, field+".")
case []any:
for i, e := range v {
index := "[" + strconv.Itoa(i) + "]"
if innerObj, ok := e.(map[string]any); ok {
printManifestAsTable(w, name, showName, innerObj, field+index+".")
} else {
if showName {
fmt.Fprintf(w, "%v\t%v\t%v\n", field+index, name, e)
} else {
fmt.Fprintf(w, "%v\t%v\n", field+index, e)
}
}
}
default:
if showName {
fmt.Fprintf(w, "%v\t%v\t%v\n", field, name, v)
} else {
fmt.Fprintf(w, "%v\t%v\n", field, v)
}
}
}
}
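The wide output above is plain text/tabwriter. The same writer configuration in isolation, runnable as-is:

    package main

    import (
        "fmt"
        "os"
        "text/tabwriter"
    )

    func main() {
        w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
        fmt.Fprintln(w, "FIELD\tRESOURCE NAME\tVALUE")
        fmt.Fprintln(w, "spec.testfield\tunit-test\ttestvalue")
        fmt.Fprintln(w, "status.podIPs[0].IP\tunit-test\t127.0.0.1")
        _ = w.Flush() // columns align only once the writer is flushed
    }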
func NewApplicationPatchResourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
patch string
patchType string
resourceName string
namespace string
kind string
group string
all bool
project string
)
var patch string
var patchType string
var resourceName string
var namespace string
var kind string
var group string
var all bool
var project string
command := &cobra.Command{
Use: "patch-resource APPNAME",
Short: "Patch resource in an application",
@@ -354,16 +90,14 @@ func NewApplicationPatchResourceCommand(clientOpts *argocdclient.ClientOptions)
}
func NewApplicationDeleteResourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
resourceName string
namespace string
kind string
group string
force bool
orphan bool
all bool
project string
)
var resourceName string
var namespace string
var kind string
var group string
var force bool
var orphan bool
var all bool
var project string
command := &cobra.Command{
Use: "delete-resource APPNAME",
Short: "Delete resource in an application",
@@ -519,16 +253,13 @@ func printResources(listAll bool, orphaned bool, appResourceTree *v1alpha1.Appli
}
}
}
err := w.Flush()
errors.CheckError(err)
_ = w.Flush()
}
func NewApplicationListResourcesCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
orphaned bool
output string
project string
)
var orphaned bool
var output string
var project string
command := &cobra.Command{
Use: "resources APPNAME",
Short: "List resource of application",

View File

@@ -2253,10 +2253,6 @@ func (c *fakeAppServiceClient) ListResourceLinks(_ context.Context, _ *applicati
return nil, nil
}
func (c *fakeAppServiceClient) ServerSideDiff(_ context.Context, _ *applicationpkg.ApplicationServerSideDiffQuery, _ ...grpc.CallOption) (*applicationpkg.ApplicationServerSideDiffResponse, error) {
return nil, nil
}
type fakeAcdClient struct {
simulateTimeout uint
}

View File

@@ -6,8 +6,6 @@ import (
"github.com/spf13/cobra"
"golang.org/x/crypto/bcrypt"
"github.com/argoproj/argo-cd/v3/util/cli"
)
// NewBcryptCmd represents the bcrypt command
@@ -17,25 +15,22 @@ func NewBcryptCmd() *cobra.Command {
Use: "bcrypt",
Short: "Generate bcrypt hash for any password",
Example: `# Generate bcrypt hash for any password
argocd account bcrypt --password YOUR_PASSWORD
# Prompt for password input
argocd account bcrypt
# Read password from stdin
echo -e "password" | argocd account bcrypt`,
argocd account bcrypt --password YOUR_PASSWORD`,
Run: func(cmd *cobra.Command, _ []string) {
password = cli.PromptPassword(password)
bytePassword := []byte(password)
// Hashing the password
hash, err := bcrypt.GenerateFromPassword(bytePassword, bcrypt.DefaultCost)
if err != nil {
log.Fatalf("Failed to generate bcrypt hash: %v", err)
log.Fatalf("Failed to genarate bcrypt hash: %v", err)
}
fmt.Fprint(cmd.OutOrStdout(), string(hash))
},
}
bcryptCmd.Flags().StringVar(&password, "password", "", "Password for which bcrypt hash is generated")
err := bcryptCmd.MarkFlagRequired("password")
if err != nil {
return nil
}
return bcryptCmd
}
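The counterpart to bcrypt.GenerateFromPassword is bcrypt.CompareHashAndPassword, which is how the command's tests validate its output:

    hash, err := bcrypt.GenerateFromPassword([]byte("YOUR_PASSWORD"), bcrypt.DefaultCost)
    if err != nil {
        log.Fatalf("failed to generate bcrypt hash: %v", err)
    }
    // CompareHashAndPassword returns nil only when the password matches the hash.
    if err := bcrypt.CompareHashAndPassword(hash, []byte("YOUR_PASSWORD")); err != nil {
        log.Fatalf("hash does not match: %v", err)
    }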

View File

@@ -2,11 +2,9 @@ package commands
import (
"bytes"
"os"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"golang.org/x/crypto/bcrypt"
)
@@ -22,27 +20,3 @@ func TestGeneratePassword(t *testing.T) {
err = bcrypt.CompareHashAndPassword(output.Bytes(), []byte("abc"))
assert.NoError(t, err)
}
func TestGeneratePasswordWithStdin(t *testing.T) {
oldStdin := os.Stdin
defer func() {
os.Stdin = oldStdin
}()
input := bytes.NewBufferString("abc\n")
r, w, _ := os.Pipe()
_, _ = w.Write(input.Bytes())
w.Close()
os.Stdin = r
bcryptCmd := NewBcryptCmd()
bcryptCmd.SetArgs([]string{})
output := new(bytes.Buffer)
bcryptCmd.SetOut(output)
err := bcryptCmd.Execute()
require.NoError(t, err)
err = bcrypt.CompareHashAndPassword(output.Bytes(), []byte("abc"))
assert.NoError(t, err)
}

View File

@@ -2,7 +2,6 @@ package commands
import (
"fmt"
"os"
"strconv"
"github.com/spf13/cobra"
@@ -28,10 +27,6 @@ argocd configure --prompts-enabled=false`,
Run: func(_ *cobra.Command, _ []string) {
localCfg, err := localconfig.ReadLocalConfig(globalClientOpts.ConfigPath)
errors.CheckError(err)
if localCfg == nil {
fmt.Println("No local configuration found")
os.Exit(1)
}
localCfg.PromptsEnabled = promptsEnabled

View File

@@ -185,7 +185,8 @@ argocd login cd.argoproj.io --core`,
command.Flags().StringVar(&password, "password", "", "The password of an account to authenticate")
command.Flags().BoolVar(&sso, "sso", false, "Perform SSO login")
command.Flags().IntVar(&ssoPort, "sso-port", DefaultSSOLocalPort, "Port to run local OAuth2 login application")
command.Flags().BoolVar(&skipTestTLS, "skip-test-tls", false, "Skip testing whether the server is configured with TLS (this can help when the command hangs for no apparent reason)")
command.Flags().
BoolVar(&skipTestTLS, "skip-test-tls", false, "Skip testing whether the server is configured with TLS (this can help when the command hangs for no apparent reason)")
command.Flags().BoolVar(&ssoLaunchBrowser, "sso-launch-browser", true, "Automatically launch the system default browser when performing SSO login")
return command
}

View File

@@ -19,12 +19,9 @@ func NewLogoutCommand(globalClientOpts *argocdclient.ClientOptions) *cobra.Comma
Use: "logout CONTEXT",
Short: "Log out from Argo CD",
Long: "Log out from Argo CD",
Example: `# Logout from the active Argo CD context
Example: `# To log out of argocd
$ argocd logout
# This can be helpful for security reasons or when you want to switch between different Argo CD contexts or accounts.
argocd logout CONTEXT
# Logout from a specific context named 'cd.argoproj.io'
argocd logout cd.argoproj.io
`,
Run: func(c *cobra.Command, args []string) {
if len(args) == 0 {

View File

@@ -270,19 +270,6 @@ func NewRepoRemoveCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command
command := &cobra.Command{
Use: "rm REPO ...",
Short: "Remove configured repositories",
Example: `
# Remove a single repository
argocd repo rm https://github.com/yourusername/your-repo.git
# Remove multiple repositories
argocd repo rm https://github.com/yourusername/your-repo.git https://git.example.com/repo2.git
# Remove repositories for a specific project
argocd repo rm https://github.com/yourusername/your-repo.git --project myproject
# Remove repository using SSH URL
argocd repo rm git@github.com:yourusername/your-repo.git
`,
Run: func(c *cobra.Command, args []string) {
ctx := c.Context()
@@ -343,25 +330,6 @@ func NewRepoListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
command := &cobra.Command{
Use: "list",
Short: "List configured repositories",
Example: `
# List all repositories
argocd repo list
# List repositories in wide format
argocd repo list -o wide
# List repositories in YAML format
argocd repo list -o yaml
# List repositories in JSON format
argocd repo list -o json
# List urls of repositories
argocd repo list -o url
# Force refresh of cached repository connection status
argocd repo list --refresh hard
`,
Run: func(c *cobra.Command, _ []string) {
ctx := c.Context()
@@ -404,26 +372,9 @@ func NewRepoGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
refresh string
project string
)
// For better readability and easier formatting
repoGetExamples := `
# Get Git or Helm repository details in wide format (default, '-o wide')
argocd repo get https://git.example.com/repos/repo
# Get repository details in YAML format
argocd repo get https://git.example.com/repos/repo -o yaml
# Get repository details in JSON format
argocd repo get https://git.example.com/repos/repo -o json
# Get repository URL
argocd repo get https://git.example.com/repos/repo -o url
`
command := &cobra.Command{
Use: "get REPO",
Short: "Get a configured repository by URL",
Example: repoGetExamples,
Use: "get REPO",
Short: "Get a configured repository by URL",
Run: func(c *cobra.Command, args []string) {
ctx := c.Context()

View File

@@ -24,7 +24,7 @@ func extractHealthStatusAndReason(node v1alpha1.ResourceNode) (healthStatus heal
healthStatus = node.Health.Status
reason = node.Health.Message
}
return
return healthStatus, reason
}
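The hunk above swaps a bare return for an explicit return of the named results; the two forms behave identically, the explicit one just reads better. A minimal sketch of the pattern:
package main
import "fmt"
// extract mirrors the named-return style of extractHealthStatusAndReason:
// assigning to the named results and returning them explicitly is
// equivalent to a bare `return`.
func extract() (healthStatus string, reason string) {
	healthStatus, reason = "Healthy", "all checks passed"
	return healthStatus, reason
}
func main() {
	fmt.Println(extract())
}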
func treeViewAppGet(prefix string, uidToNodeMap map[string]v1alpha1.ResourceNode, parentToChildMap map[string][]string, parent v1alpha1.ResourceNode, mapNodeNameToResourceState map[string]*resourceState, w *tabwriter.Writer) {

View File

@@ -20,6 +20,7 @@ import (
reposerver "github.com/argoproj/argo-cd/v3/cmd/argocd-repo-server/commands"
apiserver "github.com/argoproj/argo-cd/v3/cmd/argocd-server/commands"
cli "github.com/argoproj/argo-cd/v3/cmd/argocd/commands"
"github.com/argoproj/argo-cd/v3/cmd/util"
"github.com/argoproj/argo-cd/v3/util/log"
)
@@ -73,6 +74,7 @@ func main() {
command = cli.NewCommand()
isArgocdCLI = true
}
util.SetAutoMaxProcs(isArgocdCLI)
if isArgocdCLI {
// silence errors and usages since we'll be printing them manually.

View File

@@ -10,6 +10,8 @@ import (
"strings"
"time"
"go.uber.org/automaxprocs/maxprocs"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
@@ -100,6 +102,19 @@ type AppOptions struct {
hydrateToBranch string
}
// SetAutoMaxProcs sets the GOMAXPROCS value based on the binary name.
// It suppresses logs for CLI binaries and logs the setting for services.
func SetAutoMaxProcs(isCLI bool) {
if isCLI {
_, _ = maxprocs.Set() // Intentionally ignore errors for CLI binaries
} else {
_, err := maxprocs.Set(maxprocs.Logger(log.Infof))
if err != nil {
log.Errorf("Error setting GOMAXPROCS: %v", err)
}
}
}
func AddAppFlags(command *cobra.Command, opts *AppOptions) {
command.Flags().StringVar(&opts.repoURL, "repo", "", "Repository URL, ignored if a file is set")
command.Flags().StringVar(&opts.appPath, "path", "", "Path in repository to the app directory, ignored if a file is set")
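For context, maxprocs.Set returns an undo function alongside the error; SetAutoMaxProcs above discards it. A minimal standalone sketch of the service-style call:
package main
import (
	"log"
	"go.uber.org/automaxprocs/maxprocs"
)
func main() {
	// maxprocs.Logger routes the library's informational message (the GOMAXPROCS
	// value derived from the container CPU quota) to the given printf-style logger.
	undo, err := maxprocs.Set(maxprocs.Logger(log.Printf))
	defer undo()
	if err != nil {
		log.Printf("Error setting GOMAXPROCS: %v", err)
	}
}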

View File

@@ -1,6 +1,7 @@
package util
import (
"bytes"
"log"
"os"
"testing"
@@ -572,3 +573,27 @@ func TestFilterResources(t *testing.T) {
assert.Nil(t, filteredResources)
})
}
func TestSetAutoMaxProcs(t *testing.T) {
t.Run("CLI mode ignores errors", func(t *testing.T) {
logBuffer := &bytes.Buffer{}
oldLogger := log.Default()
log.SetOutput(logBuffer)
defer log.SetOutput(oldLogger.Writer())
SetAutoMaxProcs(true)
assert.Empty(t, logBuffer.String(), "Expected no log output when isCLI is true")
})
t.Run("Non-CLI mode logs error on failure", func(t *testing.T) {
logBuffer := &bytes.Buffer{}
oldLogger := log.Default()
log.SetOutput(logBuffer)
defer log.SetOutput(oldLogger.Writer())
SetAutoMaxProcs(false)
assert.NotContains(t, logBuffer.String(), "Error setting GOMAXPROCS", "Unexpected log output detected")
})
}

View File

@@ -10,7 +10,6 @@ import (
grpc_retry "github.com/grpc-ecosystem/go-grpc-middleware/v2/interceptors/retry"
log "github.com/sirupsen/logrus"
"go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials/insecure"
@@ -45,10 +44,11 @@ func NewConnection(address string) (*grpc.ClientConn, error) {
}
unaryInterceptors := []grpc.UnaryClientInterceptor{grpc_retry.UnaryClientInterceptor(retryOpts...)}
dialOpts := []grpc.DialOption{
grpc.WithStreamInterceptor(grpc_util.RetryOnlyForServerStreamInterceptor(retryOpts...)),
grpc.WithStreamInterceptor(grpc_retry.StreamClientInterceptor(retryOpts...)),
grpc.WithChainUnaryInterceptor(unaryInterceptors...),
grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(MaxGRPCMessageSize), grpc.MaxCallSendMsgSize(MaxGRPCMessageSize)),
grpc.WithStatsHandler(otelgrpc.NewClientHandler()),
grpc.WithUnaryInterceptor(grpc_util.OTELUnaryClientInterceptor()),
grpc.WithStreamInterceptor(grpc_util.OTELStreamClientInterceptor()),
}
dialOpts = append(dialOpts, grpc.WithTransportCredentials(insecure.NewCredentials()))
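For reference, the stats-handler wiring shown above can be reproduced in isolation; otelgrpc's per-RPC interceptors are deprecated upstream in favor of stats handlers. A minimal sketch, assuming a plaintext server at localhost:8080:
package main
import (
	"log"
	"go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)
func main() {
	// The stats handler traces every RPC made on this connection.
	conn, err := grpc.NewClient("localhost:8080",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithStatsHandler(otelgrpc.NewClientHandler()),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
}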

View File

@@ -49,11 +49,13 @@ func NewServer(initConstants plugin.CMPServerInitConstants) (*ArgoCDCMPServer, e
serverLog := log.NewEntry(log.StandardLogger())
streamInterceptors := []grpc.StreamServerInterceptor{
otelgrpc.StreamServerInterceptor(), //nolint:staticcheck // TODO: ignore SA1019 for deprecation: see https://github.com/argoproj/argo-cd/issues/18258
logging.StreamServerInterceptor(grpc_util.InterceptorLogger(serverLog)),
serverMetrics.StreamServerInterceptor(),
recovery.StreamServerInterceptor(recovery.WithRecoveryHandler(grpc_util.LoggerRecoveryHandler(serverLog))),
}
unaryInterceptors := []grpc.UnaryServerInterceptor{
otelgrpc.UnaryServerInterceptor(), //nolint:staticcheck // TODO: ignore SA1019 for deprecation: see https://github.com/argoproj/argo-cd/issues/18258
logging.UnaryServerInterceptor(grpc_util.InterceptorLogger(serverLog)),
serverMetrics.UnaryServerInterceptor(),
recovery.UnaryServerInterceptor(recovery.WithRecoveryHandler(grpc_util.LoggerRecoveryHandler(serverLog))),
@@ -69,7 +71,6 @@ func NewServer(initConstants plugin.CMPServerInitConstants) (*ArgoCDCMPServer, e
MinTime: common.GetGRPCKeepAliveEnforcementMinimum(),
},
),
grpc.StatsHandler(otelgrpc.NewServerHandler()),
}
return &ArgoCDCMPServer{

View File

@@ -118,21 +118,10 @@ func (s *Service) handleCommitRequest(logCtx *log.Entry, r *apiclient.CommitHydr
return out, "", fmt.Errorf("failed to checkout target branch: %w", err)
}
logCtx.Debug("Clearing and preparing paths")
var pathsToClear []string
for _, p := range r.Paths {
if p.Path == "" || p.Path == "." {
logCtx.Debug("Using root directory for manifests, no directory removal needed")
} else {
pathsToClear = append(pathsToClear, p.Path)
}
}
if len(pathsToClear) > 0 {
logCtx.Debugf("Clearing paths: %v", pathsToClear)
out, err := gitClient.RemoveContents(pathsToClear)
if err != nil {
return out, "", fmt.Errorf("failed to clear paths %v: %w", pathsToClear, err)
}
logCtx.Debug("Clearing repo contents")
out, err = gitClient.RemoveContents()
if err != nil {
return out, "", fmt.Errorf("failed to clear repo: %w", err)
}
logCtx.Debug("Writing manifests")

View File

@@ -99,6 +99,7 @@ func Test_CommitHydratedManifests(t *testing.T) {
mockGitClient.On("SetAuthor", "Argo CD", "argo-cd@example.com").Return("", nil).Once()
mockGitClient.On("CheckoutOrOrphan", "env/test", false).Return("", nil).Once()
mockGitClient.On("CheckoutOrNew", "main", "env/test", false).Return("", nil).Once()
mockGitClient.On("RemoveContents").Return("", nil).Once()
mockGitClient.On("CommitAndPush", "main", "test commit message").Return("", nil).Once()
mockGitClient.On("CommitSHA").Return("it-worked!", nil).Once()
mockRepoClientFactory.On("NewClient", mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
@@ -108,178 +109,6 @@ func Test_CommitHydratedManifests(t *testing.T) {
require.NotNil(t, resp)
assert.Equal(t, "it-worked!", resp.HydratedSha)
})
t.Run("root path with dot and blank - no directory removal", func(t *testing.T) {
t.Parallel()
service, mockRepoClientFactory := newServiceWithMocks(t)
mockGitClient := gitmocks.NewClient(t)
mockGitClient.On("Init").Return(nil).Once()
mockGitClient.On("Fetch", mock.Anything).Return(nil).Once()
mockGitClient.On("SetAuthor", "Argo CD", "argo-cd@example.com").Return("", nil).Once()
mockGitClient.On("CheckoutOrOrphan", "env/test", false).Return("", nil).Once()
mockGitClient.On("CheckoutOrNew", "main", "env/test", false).Return("", nil).Once()
mockGitClient.On("CommitAndPush", "main", "test commit message").Return("", nil).Once()
mockGitClient.On("CommitSHA").Return("root-and-blank-sha", nil).Once()
mockRepoClientFactory.On("NewClient", mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
requestWithRootAndBlank := &apiclient.CommitHydratedManifestsRequest{
Repo: &v1alpha1.Repository{
Repo: "https://github.com/argoproj/argocd-example-apps.git",
},
TargetBranch: "main",
SyncBranch: "env/test",
CommitMessage: "test commit message",
Paths: []*apiclient.PathDetails{
{
Path: ".",
Manifests: []*apiclient.HydratedManifestDetails{
{
ManifestJSON: `{"apiVersion":"v1","kind":"ConfigMap","metadata":{"name":"test-dot"}}`,
},
},
},
{
Path: "",
Manifests: []*apiclient.HydratedManifestDetails{
{
ManifestJSON: `{"apiVersion":"v1","kind":"ConfigMap","metadata":{"name":"test-blank"}}`,
},
},
},
},
}
resp, err := service.CommitHydratedManifests(t.Context(), requestWithRootAndBlank)
require.NoError(t, err)
require.NotNil(t, resp)
assert.Equal(t, "root-and-blank-sha", resp.HydratedSha)
})
t.Run("subdirectory path - triggers directory removal", func(t *testing.T) {
t.Parallel()
service, mockRepoClientFactory := newServiceWithMocks(t)
mockGitClient := gitmocks.NewClient(t)
mockGitClient.On("Init").Return(nil).Once()
mockGitClient.On("Fetch", mock.Anything).Return(nil).Once()
mockGitClient.On("SetAuthor", "Argo CD", "argo-cd@example.com").Return("", nil).Once()
mockGitClient.On("CheckoutOrOrphan", "env/test", false).Return("", nil).Once()
mockGitClient.On("CheckoutOrNew", "main", "env/test", false).Return("", nil).Once()
mockGitClient.On("RemoveContents", []string{"apps/staging"}).Return("", nil).Once()
mockGitClient.On("CommitAndPush", "main", "test commit message").Return("", nil).Once()
mockGitClient.On("CommitSHA").Return("subdir-path-sha", nil).Once()
mockRepoClientFactory.On("NewClient", mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
requestWithSubdirPath := &apiclient.CommitHydratedManifestsRequest{
Repo: &v1alpha1.Repository{
Repo: "https://github.com/argoproj/argocd-example-apps.git",
},
TargetBranch: "main",
SyncBranch: "env/test",
CommitMessage: "test commit message",
Paths: []*apiclient.PathDetails{
{
Path: "apps/staging", // subdirectory path
Manifests: []*apiclient.HydratedManifestDetails{
{
ManifestJSON: `{"apiVersion":"v1","kind":"Deployment","metadata":{"name":"test-app"}}`,
},
},
},
},
}
resp, err := service.CommitHydratedManifests(t.Context(), requestWithSubdirPath)
require.NoError(t, err)
require.NotNil(t, resp)
assert.Equal(t, "subdir-path-sha", resp.HydratedSha)
})
t.Run("mixed paths - root and subdirectory", func(t *testing.T) {
t.Parallel()
service, mockRepoClientFactory := newServiceWithMocks(t)
mockGitClient := gitmocks.NewClient(t)
mockGitClient.On("Init").Return(nil).Once()
mockGitClient.On("Fetch", mock.Anything).Return(nil).Once()
mockGitClient.On("SetAuthor", "Argo CD", "argo-cd@example.com").Return("", nil).Once()
mockGitClient.On("CheckoutOrOrphan", "env/test", false).Return("", nil).Once()
mockGitClient.On("CheckoutOrNew", "main", "env/test", false).Return("", nil).Once()
mockGitClient.On("RemoveContents", []string{"apps/production", "apps/staging"}).Return("", nil).Once()
mockGitClient.On("CommitAndPush", "main", "test commit message").Return("", nil).Once()
mockGitClient.On("CommitSHA").Return("mixed-paths-sha", nil).Once()
mockRepoClientFactory.On("NewClient", mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
requestWithMixedPaths := &apiclient.CommitHydratedManifestsRequest{
Repo: &v1alpha1.Repository{
Repo: "https://github.com/argoproj/argocd-example-apps.git",
},
TargetBranch: "main",
SyncBranch: "env/test",
CommitMessage: "test commit message",
Paths: []*apiclient.PathDetails{
{
Path: ".", // root path - should NOT trigger removal
Manifests: []*apiclient.HydratedManifestDetails{
{
ManifestJSON: `{"apiVersion":"v1","kind":"ConfigMap","metadata":{"name":"global-config"}}`,
},
},
},
{
Path: "apps/production", // subdirectory path - SHOULD trigger removal
Manifests: []*apiclient.HydratedManifestDetails{
{
ManifestJSON: `{"apiVersion":"v1","kind":"Deployment","metadata":{"name":"prod-app"}}`,
},
},
},
{
Path: "apps/staging", // another subdirectory path - SHOULD trigger removal
Manifests: []*apiclient.HydratedManifestDetails{
{
ManifestJSON: `{"apiVersion":"v1","kind":"Deployment","metadata":{"name":"staging-app"}}`,
},
},
},
},
}
resp, err := service.CommitHydratedManifests(t.Context(), requestWithMixedPaths)
require.NoError(t, err)
require.NotNil(t, resp)
assert.Equal(t, "mixed-paths-sha", resp.HydratedSha)
})
t.Run("empty paths array", func(t *testing.T) {
t.Parallel()
service, mockRepoClientFactory := newServiceWithMocks(t)
mockGitClient := gitmocks.NewClient(t)
mockGitClient.On("Init").Return(nil).Once()
mockGitClient.On("Fetch", mock.Anything).Return(nil).Once()
mockGitClient.On("SetAuthor", "Argo CD", "argo-cd@example.com").Return("", nil).Once()
mockGitClient.On("CheckoutOrOrphan", "env/test", false).Return("", nil).Once()
mockGitClient.On("CheckoutOrNew", "main", "env/test", false).Return("", nil).Once()
mockGitClient.On("CommitAndPush", "main", "test commit message").Return("", nil).Once()
mockGitClient.On("CommitSHA").Return("it-worked!", nil).Once()
mockRepoClientFactory.On("NewClient", mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
requestWithEmptyPaths := &apiclient.CommitHydratedManifestsRequest{
Repo: &v1alpha1.Repository{
Repo: "https://github.com/argoproj/argocd-example-apps.git",
},
TargetBranch: "main",
SyncBranch: "env/test",
CommitMessage: "test commit message",
}
resp, err := service.CommitHydratedManifests(t.Context(), requestWithEmptyPaths)
require.NoError(t, err)
require.NotNil(t, resp)
assert.Equal(t, "it-worked!", resp.HydratedSha)
})
}
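The expectation style used throughout these tests is testify's On/Return/Once. A minimal sketch with a hypothetical Trimmer mock (not part of the codebase):
package main
import (
	"fmt"
	"github.com/stretchr/testify/mock"
)
// Trimmer is a hypothetical stand-in for the mocked git client.
type Trimmer struct{ mock.Mock }
func (m *Trimmer) RemoveContents() (string, error) {
	args := m.Called()
	return args.String(0), args.Error(1)
}
func main() {
	m := &Trimmer{}
	// Once() makes the canned response single-use, so an unexpected second
	// call would fail the expectation, mirroring the .Once() calls above.
	m.On("RemoveContents").Return("", nil).Once()
	out, err := m.RemoveContents()
	fmt.Println(out, err)
}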
func newServiceWithMocks(t *testing.T) (*Service, *mocks.RepoClientFactory) {

View File

@@ -2,7 +2,9 @@ package commit
import (
"encoding/json"
"errors"
"fmt"
"io/fs"
"os"
"path/filepath"
"strings"
@@ -15,7 +17,6 @@ import (
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"github.com/argoproj/argo-cd/v3/commitserver/apiclient"
"github.com/argoproj/argo-cd/v3/common"
appv1 "github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/v3/util/git"
"github.com/argoproj/argo-cd/v3/util/io"
@@ -23,9 +24,6 @@ import (
var sprigFuncMap = sprig.GenericFuncMap() // a singleton for better performance
const gitAttributesContents = `*/README.md linguist-generated=true
*/hydrator.metadata linguist-generated=true`
func init() {
// Avoid allowing the user to learn things about the environment.
delete(sprigFuncMap, "env")
@@ -59,19 +57,13 @@ func WriteForPaths(root *os.Root, repoUrl, drySha string, dryCommitMetadata *app
return fmt.Errorf("failed to write top-level hydrator metadata: %w", err)
}
// Write .gitattributes
err = writeGitAttributes(root)
if err != nil {
return fmt.Errorf("failed to write git attributes: %w", err)
}
for _, p := range paths {
hydratePath := p.Path
if hydratePath == "." {
hydratePath = ""
}
err = root.MkdirAll(hydratePath, 0o755)
err = mkdirAll(root, hydratePath)
if err != nil {
return fmt.Errorf("failed to create path: %w", err)
}
@@ -145,30 +137,6 @@ func writeReadme(root *os.Root, dirPath string, metadata hydratorMetadataFile) e
return nil
}
func writeGitAttributes(root *os.Root) error {
gitAttributesFile, err := root.Create(".gitattributes")
if err != nil {
return fmt.Errorf("failed to create git attributes file: %w", err)
}
defer func() {
err = gitAttributesFile.Close()
if err != nil {
log.WithFields(log.Fields{
common.SecurityField: common.SecurityMedium,
common.SecurityCWEField: common.SecurityCWEMissingReleaseOfFileDescriptor,
}).Errorf("error closing file %q: %v", gitAttributesFile.Name(), err)
}
}()
_, err = gitAttributesFile.WriteString(gitAttributesContents)
if err != nil {
return fmt.Errorf("failed to write git attributes: %w", err)
}
return nil
}
// writeManifests writes the manifests to the manifest.yaml file, truncating the file if it exists and appending the
// manifests in the order they are provided.
func writeManifests(root *os.Root, dirPath string, manifests []*apiclient.HydratedManifestDetails) error {
@@ -210,3 +178,25 @@ func writeManifests(root *os.Root, dirPath string, manifests []*apiclient.Hydrat
return nil
}
// mkdirAll creates the directory and all its parents if they do not exist. It returns an error if the directory
// cannot be created.
func mkdirAll(root *os.Root, dirPath string) error {
parts := strings.Split(dirPath, string(os.PathSeparator))
builtPath := ""
for _, part := range parts {
if part == "" {
continue
}
builtPath = filepath.Join(builtPath, part)
err := root.Mkdir(builtPath, os.ModePerm)
if err != nil {
if errors.Is(err, fs.ErrExist) {
log.WithError(err).Warnf("path %s already exists, skipping", dirPath)
continue
}
return fmt.Errorf("failed to create path: %w", err)
}
}
return nil
}
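The mkdirAll helper above creates the path one segment at a time because os.Root (Go 1.24+) confines every operation to the opened directory. A minimal sketch of the underlying API, assuming /tmp/hydrate already exists:
package main
import (
	"errors"
	"io/fs"
	"log"
	"os"
)
func main() {
	// Paths passed to root methods cannot escape /tmp/hydrate, even via "..".
	root, err := os.OpenRoot("/tmp/hydrate")
	if err != nil {
		log.Fatal(err)
	}
	defer root.Close()
	if err := root.Mkdir("apps", 0o755); err != nil && !errors.Is(err, fs.ErrExist) {
		log.Fatal(err)
	}
	log.Println("created apps/ inside the root")
}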

View File

@@ -223,16 +223,3 @@ func TestWriteManifests(t *testing.T) {
require.NoError(t, err)
assert.Contains(t, string(manifestBytes), "kind")
}
func TestWriteGitAttributes(t *testing.T) {
root := tempRoot(t)
err := writeGitAttributes(root)
require.NoError(t, err)
gitAttributesPath := filepath.Join(root.Name(), ".gitattributes")
gitAttributesBytes, err := os.ReadFile(gitAttributesPath)
require.NoError(t, err)
assert.Contains(t, string(gitAttributesBytes), "*/README.md linguist-generated=true")
assert.Contains(t, string(gitAttributesBytes), "*/hydrator.metadata linguist-generated=true")
}

View File

@@ -100,12 +100,6 @@ const (
PluginConfigFileName = "plugin.yaml"
)
// consts for podrequests metrics in cache/info
const (
PodRequestsCPU = "cpu"
PodRequestsMEM = "memory"
)
// Argo CD application related constants
const (
@@ -192,8 +186,6 @@ const (
LabelValueSecretTypeRepoCreds = "repo-creds"
// LabelValueSecretTypeRepositoryWrite indicates a secret type of repository credentials for writing
LabelValueSecretTypeRepositoryWrite = "repository-write"
// LabelValueSecretTypeRepoCredsWrite indicates a secret type of repository credentials for writing for templating
LabelValueSecretTypeRepoCredsWrite = "repo-write-creds"
// LabelValueSecretTypeSCMCreds indicates a secret type of SCM credentials
LabelValueSecretTypeSCMCreds = "scm-creds"

View File

@@ -1,82 +0,0 @@
package common
import (
"runtime"
"testing"
"github.com/stretchr/testify/assert"
)
func TestGetVersion(t *testing.T) {
tests := []struct {
name string
inputGitCommit string
inputGitTag string
inputTreeState string
inputVersion string
expected string
}{
{
name: "Official release with tag and clean state",
inputGitCommit: "abcdef123456",
inputGitTag: "v1.2.3",
inputTreeState: "clean",
inputVersion: "1.2.3",
expected: "v1.2.3",
},
{
name: "Dirty state with commit",
inputGitCommit: "deadbeefcafebabe",
inputGitTag: "",
inputTreeState: "dirty",
inputVersion: "2.0.1",
expected: "v2.0.1+deadbee.dirty",
},
{
name: "Clean state with commit, no tag",
inputGitCommit: "cafebabedeadbeef",
inputGitTag: "",
inputTreeState: "clean",
inputVersion: "2.1.0",
expected: "v2.1.0+cafebab",
},
{
name: "Missing commit and tag",
inputGitCommit: "",
inputGitTag: "",
inputTreeState: "clean",
inputVersion: "3.1.0",
expected: "v3.1.0+unknown",
},
{
name: "Short commit",
inputGitCommit: "abc",
inputGitTag: "",
inputTreeState: "clean",
inputVersion: "4.0.0",
expected: "v4.0.0+unknown",
},
}
for _, tt := range tests {
gitCommit = tt.inputGitCommit
gitTag = tt.inputGitTag
gitTreeState = tt.inputTreeState
version = tt.inputVersion
buildDate = "2025-06-26"
kubectlVersion = "v1.30.0"
extraBuildInfo = "test-build"
got := GetVersion()
assert.Equal(t, tt.expected, got.Version)
assert.Equal(t, buildDate, got.BuildDate)
assert.Equal(t, tt.inputGitCommit, got.GitCommit)
assert.Equal(t, tt.inputGitTag, got.GitTag)
assert.Equal(t, tt.inputTreeState, got.GitTreeState)
assert.Equal(t, runtime.Version(), got.GoVersion)
assert.Equal(t, runtime.Compiler, got.Compiler)
assert.Equal(t, runtime.GOOS+"/"+runtime.GOARCH, got.Platform)
assert.Equal(t, kubectlVersion, got.KubectlVersion)
assert.Equal(t, extraBuildInfo, got.ExtraBuildInfo)
}
}
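The expectations in the deleted table imply a simple formatting rule. A hypothetical re-implementation for illustration only, not the actual common.GetVersion code:
package main
import "fmt"
// versionString sketches the rule the table implies: tagged clean builds
// return the tag; otherwise version plus a 7-character SHA, with ".dirty"
// appended for a dirty tree and "+unknown" when no usable commit exists.
func versionString(version, gitCommit, gitTag, treeState string) string {
	if gitTag != "" && treeState == "clean" {
		return gitTag
	}
	if len(gitCommit) >= 7 {
		v := fmt.Sprintf("v%s+%s", version, gitCommit[:7])
		if treeState == "dirty" {
			v += ".dirty"
		}
		return v
	}
	return fmt.Sprintf("v%s+unknown", version)
}
func main() {
	fmt.Println(versionString("2.0.1", "deadbeefcafebabe", "", "dirty")) // v2.0.1+deadbee.dirty
}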

View File

@@ -47,7 +47,6 @@ import (
"github.com/argoproj/argo-cd/v3/common"
statecache "github.com/argoproj/argo-cd/v3/controller/cache"
"github.com/argoproj/argo-cd/v3/controller/hydrator"
hydratortypes "github.com/argoproj/argo-cd/v3/controller/hydrator/types"
"github.com/argoproj/argo-cd/v3/controller/metrics"
"github.com/argoproj/argo-cd/v3/controller/sharding"
"github.com/argoproj/argo-cd/v3/pkg/apis/application"
@@ -116,7 +115,7 @@ type ApplicationController struct {
appOperationQueue workqueue.TypedRateLimitingInterface[string]
projectRefreshQueue workqueue.TypedRateLimitingInterface[string]
appHydrateQueue workqueue.TypedRateLimitingInterface[string]
hydrationQueue workqueue.TypedRateLimitingInterface[hydratortypes.HydrationQueueKey]
hydrationQueue workqueue.TypedRateLimitingInterface[hydrator.HydrationQueueKey]
appInformer cache.SharedIndexInformer
appLister applisters.ApplicationLister
projInformer cache.SharedIndexInformer
@@ -126,7 +125,7 @@ type ApplicationController struct {
statusHardRefreshTimeout time.Duration
statusRefreshJitter time.Duration
selfHealTimeout time.Duration
selfHealBackoff *wait.Backoff
selfHealBackOff *wait.Backoff
selfHealBackoffCooldown time.Duration
syncTimeout time.Duration
db db.ArgoDB
@@ -199,7 +198,7 @@ func NewApplicationController(
projectRefreshQueue: workqueue.NewTypedRateLimitingQueueWithConfig(ratelimiter.NewCustomAppControllerRateLimiter[string](rateLimiterConfig), workqueue.TypedRateLimitingQueueConfig[string]{Name: "project_reconciliation_queue"}),
appComparisonTypeRefreshQueue: workqueue.NewTypedRateLimitingQueue(ratelimiter.NewCustomAppControllerRateLimiter[string](rateLimiterConfig)),
appHydrateQueue: workqueue.NewTypedRateLimitingQueueWithConfig(ratelimiter.NewCustomAppControllerRateLimiter[string](rateLimiterConfig), workqueue.TypedRateLimitingQueueConfig[string]{Name: "app_hydration_queue"}),
hydrationQueue: workqueue.NewTypedRateLimitingQueueWithConfig(ratelimiter.NewCustomAppControllerRateLimiter[hydratortypes.HydrationQueueKey](rateLimiterConfig), workqueue.TypedRateLimitingQueueConfig[hydratortypes.HydrationQueueKey]{Name: "manifest_hydration_queue"}),
hydrationQueue: workqueue.NewTypedRateLimitingQueueWithConfig(ratelimiter.NewCustomAppControllerRateLimiter[hydrator.HydrationQueueKey](rateLimiterConfig), workqueue.TypedRateLimitingQueueConfig[hydrator.HydrationQueueKey]{Name: "manifest_hydration_queue"}),
db: db,
statusRefreshTimeout: appResyncPeriod,
statusHardRefreshTimeout: appHardResyncPeriod,
@@ -209,7 +208,7 @@ func NewApplicationController(
auditLogger: argo.NewAuditLogger(kubeClientset, common.ApplicationController, enableK8sEvent),
settingsMgr: settingsMgr,
selfHealTimeout: selfHealTimeout,
selfHealBackoff: selfHealBackoff,
selfHealBackOff: selfHealBackoff,
selfHealBackoffCooldown: selfHealBackoffCooldown,
syncTimeout: syncTimeout,
clusterSharding: clusterSharding,
@@ -329,7 +328,7 @@ func NewApplicationController(
}
}
stateCache := statecache.NewLiveStateCache(db, appInformer, ctrl.settingsMgr, ctrl.metricsServer, ctrl.handleObjectUpdated, clusterSharding, argo.NewResourceTracking())
appStateManager := NewAppStateManager(db, applicationClientset, repoClientset, namespace, kubectl, ctrl.onKubectlRun, ctrl.settingsMgr, stateCache, ctrl.metricsServer, argoCache, ctrl.statusRefreshTimeout, argo.NewResourceTracking(), persistResourceHealth, repoErrorGracePeriod, serverSideDiff, ignoreNormalizerOpts)
appStateManager := NewAppStateManager(db, applicationClientset, repoClientset, namespace, kubectl, ctrl.onKubectlRun, ctrl.settingsMgr, stateCache, projInformer, ctrl.metricsServer, argoCache, ctrl.statusRefreshTimeout, argo.NewResourceTracking(), persistResourceHealth, repoErrorGracePeriod, serverSideDiff, ignoreNormalizerOpts)
ctrl.appInformer = appInformer
ctrl.appLister = appLister
ctrl.projInformer = projInformer
@@ -603,9 +602,6 @@ func (ctrl *ApplicationController) getResourceTree(destCluster *appv1.Cluster, a
Group: managedResource.Group,
Namespace: managedResource.Namespace,
},
Health: &appv1.HealthStatus{
Status: health.HealthStatusMissing,
},
})
} else {
managedResourcesKeys = append(managedResourcesKeys, kube.GetResourceKey(live))
@@ -1000,7 +996,7 @@ func (ctrl *ApplicationController) processAppOperationQueueItem() (processNext b
appKey, shutdown := ctrl.appOperationQueue.Get()
if shutdown {
processNext = false
return
return processNext
}
processNext = true
defer func() {
@@ -1013,16 +1009,16 @@ func (ctrl *ApplicationController) processAppOperationQueueItem() (processNext b
obj, exists, err := ctrl.appInformer.GetIndexer().GetByKey(appKey)
if err != nil {
log.Errorf("Failed to get application '%s' from informer index: %+v", appKey, err)
return
return processNext
}
if !exists {
// This happens after app was deleted, but the work queue still had an entry for it.
return
return processNext
}
origApp, ok := obj.(*appv1.Application)
if !ok {
log.Warnf("Key '%s' in index is not an application", appKey)
return
return processNext
}
app := origApp.DeepCopy()
logCtx := log.WithFields(applog.GetAppLogFields(app))
@@ -1042,7 +1038,7 @@ func (ctrl *ApplicationController) processAppOperationQueueItem() (processNext b
freshApp, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.ObjectMeta.Namespace).Get(context.Background(), app.Name, metav1.GetOptions{})
if err != nil {
logCtx.Errorf("Failed to retrieve latest application state: %v", err)
return
return processNext
}
app = freshApp
}
@@ -1064,7 +1060,7 @@ func (ctrl *ApplicationController) processAppOperationQueueItem() (processNext b
}
ts.AddCheckpoint("finalize_application_deletion_ms")
}
return
return processNext
}
func (ctrl *ApplicationController) processAppComparisonTypeQueueItem() (processNext bool) {
@@ -1079,7 +1075,7 @@ func (ctrl *ApplicationController) processAppComparisonTypeQueueItem() (processN
}()
if shutdown {
processNext = false
return
return processNext
}
if parts := strings.Split(key, "/"); len(parts) != 3 {
@@ -1088,11 +1084,11 @@ func (ctrl *ApplicationController) processAppComparisonTypeQueueItem() (processN
compareWith, err := strconv.Atoi(parts[2])
if err != nil {
log.Warnf("Unable to parse comparison type: %v", err)
return
return processNext
}
ctrl.requestAppRefresh(ctrl.toAppQualifiedName(parts[1], parts[0]), CompareWith(compareWith).Pointer(), nil)
}
return
return processNext
}
func (ctrl *ApplicationController) processProjectQueueItem() (processNext bool) {
@@ -1107,21 +1103,21 @@ func (ctrl *ApplicationController) processProjectQueueItem() (processNext bool)
}()
if shutdown {
processNext = false
return
return processNext
}
obj, exists, err := ctrl.projInformer.GetIndexer().GetByKey(key)
if err != nil {
log.Errorf("Failed to get project '%s' from informer index: %+v", key, err)
return
return processNext
}
if !exists {
// This happens after appproj was deleted, but the work queue still had an entry for it.
return
return processNext
}
origProj, ok := obj.(*appv1.AppProject)
if !ok {
log.Warnf("Key '%s' in index is not an appproject", key)
return
return processNext
}
if origProj.DeletionTimestamp != nil && origProj.HasFinalizer() {
@@ -1129,7 +1125,7 @@ func (ctrl *ApplicationController) processProjectQueueItem() (processNext bool)
log.Warnf("Failed to finalize project deletion: %v", err)
}
}
return
return processNext
}
func (ctrl *ApplicationController) finalizeProjectDeletion(proj *appv1.AppProject) error {
@@ -1206,7 +1202,7 @@ func (ctrl *ApplicationController) finalizeApplicationDeletion(app *appv1.Applic
if err != nil {
logCtx.Warnf("Unable to get destination cluster: %v", err)
app.UnSetCascadedDeletion()
app.UnSetPostDeleteFinalizer()
app.UnSetPostDeleteFinalizerAll()
if err := ctrl.updateFinalizers(app); err != nil {
return err
}
@@ -1395,23 +1391,16 @@ func (ctrl *ApplicationController) processRequestedAppOperation(app *appv1.Appli
logCtx = logCtx.WithField("time_ms", time.Since(ts.StartTime).Milliseconds())
logCtx.Debug("Finished processing requested app operation")
}()
terminatingCause := ""
terminating := false
if isOperationInProgress(app) {
state = app.Status.OperationState.DeepCopy()
terminating = state.Phase == synccommon.OperationTerminating
// Failed operation with retry strategy might be in-progress and has completion time
switch {
case state.Phase == synccommon.OperationTerminating:
logCtx.Infof("Resuming in-progress operation. phase: %s, message: %s", state.Phase, state.Message)
case ctrl.syncTimeout != time.Duration(0) && time.Now().After(state.StartedAt.Add(ctrl.syncTimeout)):
state.Phase = synccommon.OperationTerminating
state.Message = "operation is terminating due to timeout"
terminatingCause = "controller sync timeout"
ctrl.setOperationState(app, state)
logCtx.Infof("Terminating in-progress operation due to timeout. Started at: %v, timeout: %v", state.StartedAt, ctrl.syncTimeout)
case state.Phase == synccommon.OperationRunning && state.FinishedAt != nil:
// Failed operation with retry strategy might be in-progress and has completion time
case state.FinishedAt != nil && !terminating:
retryAt, err := app.Status.OperationState.Operation.Retry.NextRetryAt(state.FinishedAt.Time, state.RetryCount)
if err != nil {
state.Phase = synccommon.OperationError
state.Phase = synccommon.OperationFailed
state.Message = err.Error()
ctrl.setOperationState(app, state)
return
@@ -1422,18 +1411,22 @@ func (ctrl *ApplicationController) processRequestedAppOperation(app *appv1.Appli
ctrl.requestAppRefresh(app.QualifiedName(), CompareWithLatest.Pointer(), &retryAfter)
return
}
// Get rid of sync results and null out previous operation completion time
// This will start the retry attempt
state.Message = fmt.Sprintf("Retrying operation. Attempt #%d", state.RetryCount)
// retrying operation. remove previous failure time in app since it is used as a trigger
// that the previous attempt failed and the operation should be retried
state.FinishedAt = nil
state.SyncResult = nil
ctrl.setOperationState(app, state)
logCtx.Infof("Retrying operation. Attempt #%d", state.RetryCount)
// Get rid of sync results and null out previous operation completion time
state.SyncResult = nil
case ctrl.syncTimeout != time.Duration(0) && time.Now().After(state.StartedAt.Add(ctrl.syncTimeout)) && !terminating:
state.Phase = synccommon.OperationTerminating
state.Message = "operation is terminating due to timeout"
ctrl.setOperationState(app, state)
logCtx.Infof("Terminating in-progress operation due to timeout. Started at: %v, timeout: %v", state.StartedAt, ctrl.syncTimeout)
default:
logCtx.Infof("Resuming in-progress operation. phase: %s, message: %s", state.Phase, state.Message)
}
} else {
state = NewOperationState(*app.Operation)
state = &appv1.OperationState{Phase: synccommon.OperationRunning, Operation: *app.Operation, StartedAt: metav1.Now()}
ctrl.setOperationState(app, state)
if ctrl.syncTimeout != time.Duration(0) {
// Schedule a check during which the timeout would be checked.
@@ -1443,16 +1436,22 @@ func (ctrl *ApplicationController) processRequestedAppOperation(app *appv1.Appli
}
ts.AddCheckpoint("initial_operation_stage_ms")
terminating := state.Phase == synccommon.OperationTerminating
project, err := ctrl.getAppProj(app)
if err == nil {
// Start or resume the sync
ctrl.appStateManager.SyncAppState(app, project, state)
// Call GetDestinationCluster to validate the destination cluster.
if _, err := argo.GetDestinationCluster(context.Background(), app.Spec.Destination, ctrl.db); err != nil {
state.Phase = synccommon.OperationFailed
state.Message = err.Error()
} else {
state.Phase = synccommon.OperationError
state.Message = fmt.Sprintf("Failed to load application project: %v", err)
ctrl.appStateManager.SyncAppState(app, state)
}
ts.AddCheckpoint("validate_and_sync_app_state_ms")
// Check whether application is allowed to use project
_, err := ctrl.getAppProj(app)
ts.AddCheckpoint("get_app_proj_ms")
if err != nil {
state.Phase = synccommon.OperationError
state.Message = err.Error()
}
ts.AddCheckpoint("sync_app_state_ms")
switch state.Phase {
case synccommon.OperationRunning:
@@ -1460,6 +1459,12 @@ func (ctrl *ApplicationController) processRequestedAppOperation(app *appv1.Appli
// to clobber the Terminated state with Running. Get the latest app state to check for this.
freshApp, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.Namespace).Get(context.Background(), app.Name, metav1.GetOptions{})
if err == nil {
// App may have lost permissions to use the project meanwhile.
_, err = ctrl.getAppProj(freshApp)
if err != nil {
state.Phase = synccommon.OperationFailed
state.Message = fmt.Sprintf("operation not allowed: %v", err)
}
if freshApp.Status.OperationState != nil && freshApp.Status.OperationState.Phase == synccommon.OperationTerminating {
state.Phase = synccommon.OperationTerminating
state.Message = "operation is terminating"
@@ -1471,24 +1476,17 @@ func (ctrl *ApplicationController) processRequestedAppOperation(app *appv1.Appli
case synccommon.OperationFailed, synccommon.OperationError:
if !terminating && (state.RetryCount < state.Operation.Retry.Limit || state.Operation.Retry.Limit < 0) {
now := metav1.Now()
state.FinishedAt = &now
if retryAt, err := state.Operation.Retry.NextRetryAt(now.Time, state.RetryCount); err != nil {
state.Phase = synccommon.OperationError
state.Phase = synccommon.OperationFailed
state.Message = fmt.Sprintf("%s (failed to retry: %v)", state.Message, err)
} else {
// Set FinishedAt explicitly on a Running phase. This is a unique condition that will allow this
// function to perform a retry the next time the operation is processed.
state.Phase = synccommon.OperationRunning
state.FinishedAt = &now
state.RetryCount++
state.Message = fmt.Sprintf("%s. Retrying attempt #%d at %s.", state.Message, state.RetryCount, retryAt.Format(time.Kitchen))
}
} else {
if terminating && terminatingCause != "" {
state.Message = fmt.Sprintf("%s, triggered by %s", state.Message, terminatingCause)
}
if state.RetryCount > 0 {
state.Message = fmt.Sprintf("%s (retried %d times).", state.Message, state.RetryCount)
state.Message = fmt.Sprintf("%s due to application controller sync timeout. Retrying attempt #%d at %s.", state.Message, state.RetryCount, retryAt.Format(time.Kitchen))
}
} else if state.RetryCount > 0 {
state.Message = fmt.Sprintf("%s (retried %d times).", state.Message, state.RetryCount)
}
}
@@ -1620,7 +1618,7 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
appKey, shutdown := ctrl.appRefreshQueue.Get()
if shutdown {
processNext = false
return
return processNext
}
processNext = true
defer func() {
@@ -1635,22 +1633,22 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
obj, exists, err := ctrl.appInformer.GetIndexer().GetByKey(appKey)
if err != nil {
log.Errorf("Failed to get application '%s' from informer index: %+v", appKey, err)
return
return processNext
}
if !exists {
// This happens after app was deleted, but the work queue still had an entry for it.
return
return processNext
}
origApp, ok := obj.(*appv1.Application)
if !ok {
log.Warnf("Key '%s' in index is not an application", appKey)
return
return processNext
}
origApp = origApp.DeepCopy()
needRefresh, refreshType, comparisonLevel := ctrl.needRefreshAppStatus(origApp, ctrl.statusRefreshTimeout, ctrl.statusHardRefreshTimeout)
if !needRefresh {
return
return processNext
}
app := origApp.DeepCopy()
logCtx := log.WithFields(applog.GetAppLogFields(app)).WithFields(log.Fields{
@@ -1693,12 +1691,12 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
app.Status.Summary = tree.GetSummary(app)
if err := ctrl.cache.SetAppResourcesTree(app.InstanceName(ctrl.namespace), tree); err != nil {
logCtx.Errorf("Failed to cache resources tree: %v", err)
return
return processNext
}
}
patchDuration = ctrl.persistAppStatus(origApp, &app.Status)
return
return processNext
}
logCtx.Warnf("Failed to get cached managed resources for tree reconciliation, fall back to full reconciliation")
}
@@ -1720,14 +1718,14 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
logCtx.Warnf("failed to set app managed resources tree: %v", err)
}
ts.AddCheckpoint("process_refresh_app_conditions_errors_ms")
return
return processNext
}
destCluster, err = argo.GetDestinationCluster(context.Background(), app.Spec.Destination, ctrl.db)
if err != nil {
logCtx.Errorf("Failed to get destination cluster: %v", err)
// exit the reconciliation. ctrl.refreshAppConditions should have caught the error
return
return processNext
}
var localManifests []string
@@ -1762,13 +1760,13 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
sources = append(sources, app.Spec.GetSource())
}
compareResult, err := ctrl.appStateManager.CompareAppState(app, project, revisions, sources, refreshType == appv1.RefreshTypeHard, comparisonLevel == CompareWithLatestForceResolve, localManifests, hasMultipleSources)
compareResult, err := ctrl.appStateManager.CompareAppState(app, project, revisions, sources, refreshType == appv1.RefreshTypeHard, comparisonLevel == CompareWithLatestForceResolve, localManifests, hasMultipleSources, false)
ts.AddCheckpoint("compare_app_state_ms")
if stderrors.Is(err, ErrCompareStateRepo) {
logCtx.Warnf("Ignoring temporary failed attempt to compare app state against repo: %v", err)
return // short circuit if git error is encountered
return processNext // short circuit if git error is encountered
}
for k, v := range compareResult.timings {
@@ -1788,7 +1786,7 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
canSync, _ := project.Spec.SyncWindows.Matches(app).CanSync(false)
if canSync {
syncErrCond, opDuration := ctrl.autoSync(app, compareResult.syncStatus, compareResult.resources, compareResult.revisionsMayHaveChanges)
syncErrCond, opDuration := ctrl.autoSync(app, compareResult.syncStatus, compareResult.resources, compareResult.revisionUpdated)
setOpDuration = opDuration
if syncErrCond != nil {
app.Status.SetConditions(
@@ -1837,14 +1835,14 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
}
}
ts.AddCheckpoint("process_finalizers_ms")
return
return processNext
}
func (ctrl *ApplicationController) processAppHydrateQueueItem() (processNext bool) {
appKey, shutdown := ctrl.appHydrateQueue.Get()
if shutdown {
processNext = false
return
return processNext
}
processNext = true
defer func() {
@@ -1856,29 +1854,29 @@ func (ctrl *ApplicationController) processAppHydrateQueueItem() (processNext boo
obj, exists, err := ctrl.appInformer.GetIndexer().GetByKey(appKey)
if err != nil {
log.Errorf("Failed to get application '%s' from informer index: %+v", appKey, err)
return
return processNext
}
if !exists {
// This happens after app was deleted, but the work queue still had an entry for it.
return
return processNext
}
origApp, ok := obj.(*appv1.Application)
if !ok {
log.Warnf("Key '%s' in index is not an application", appKey)
return
return processNext
}
ctrl.hydrator.ProcessAppHydrateQueueItem(origApp)
log.WithFields(applog.GetAppLogFields(origApp)).Debug("Successfully processed app hydrate queue item")
return
return processNext
}
func (ctrl *ApplicationController) processHydrationQueueItem() (processNext bool) {
hydrationKey, shutdown := ctrl.hydrationQueue.Get()
if shutdown {
processNext = false
return
return processNext
}
processNext = true
defer func() {
@@ -1899,7 +1897,7 @@ func (ctrl *ApplicationController) processHydrationQueueItem() (processNext bool
ctrl.hydrator.ProcessHydrationQueueItem(hydrationKey)
logCtx.Debug("Successfully processed hydration queue item")
return
return processNext
}
func resourceStatusKey(res appv1.ResourceStatus) string {
@@ -2062,11 +2060,11 @@ func (ctrl *ApplicationController) persistAppStatus(orig *appv1.Application, new
&appv1.Application{ObjectMeta: metav1.ObjectMeta{Annotations: newAnnotations}, Status: *newStatus})
if err != nil {
logCtx.Errorf("Error constructing app status patch: %v", err)
return
return patchDuration
}
if !modified {
logCtx.Infof("No status changes. Skipping patch")
return
return patchDuration
}
// calculate time for path call
start := time.Now()
@@ -2083,7 +2081,7 @@ func (ctrl *ApplicationController) persistAppStatus(orig *appv1.Application, new
}
// autoSync will initiate a sync operation for an application configured with automated sync
func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *appv1.SyncStatus, resources []appv1.ResourceStatus, shouldCompareRevisions bool) (*appv1.ApplicationCondition, time.Duration) {
func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *appv1.SyncStatus, resources []appv1.ResourceStatus, revisionUpdated bool) (*appv1.ApplicationCondition, time.Duration) {
logCtx := log.WithFields(applog.GetAppLogFields(app))
ts := stats.NewTimingStats()
defer func() {
@@ -2127,66 +2125,65 @@ func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *
}
}
desiredRevisions := []string{syncStatus.Revision}
if app.Spec.HasMultipleSources() {
desiredRevisions = syncStatus.Revisions
selfHeal := app.Spec.SyncPolicy.Automated.SelfHeal
// Multi-Source Apps with selfHeal disabled should not trigger an autosync if
// the last sync revision and the new sync revision are the same.
if app.Spec.HasMultipleSources() && !selfHeal && reflect.DeepEqual(app.Status.Sync.Revisions, syncStatus.Revisions) {
logCtx.Infof("Skipping auto-sync: selfHeal disabled and sync caused by object update")
return nil, 0
}
desiredCommitSHA := syncStatus.Revision
desiredCommitSHAsMS := syncStatus.Revisions
alreadyAttempted, attemptPhase := alreadyAttemptedSync(app, desiredCommitSHA, desiredCommitSHAsMS, app.Spec.HasMultipleSources(), revisionUpdated)
ts.AddCheckpoint("already_attempted_sync_ms")
op := appv1.Operation{
Sync: &appv1.SyncOperation{
Revision: syncStatus.Revision,
Revision: desiredCommitSHA,
Prune: app.Spec.SyncPolicy.Automated.Prune,
SyncOptions: app.Spec.SyncPolicy.SyncOptions,
Revisions: syncStatus.Revisions,
Revisions: desiredCommitSHAsMS,
},
InitiatedBy: appv1.OperationInitiator{Automated: true},
Retry: appv1.RetryStrategy{Limit: 5},
}
if app.Spec.SyncPolicy.Retry != nil {
op.Retry = *app.Spec.SyncPolicy.Retry
}
// It is possible for manifests to remain OutOfSync even after a sync/kubectl apply (e.g.
// auto-sync with pruning disabled). We need to ensure that we do not keep Syncing an
// application in an infinite loop. To detect this, we only attempt the Sync if the revision
// and parameter overrides are different from our most recent sync operation.
alreadyAttempted, lastAttemptedRevisions, lastAttemptedPhase := alreadyAttemptedSync(app, desiredRevisions, shouldCompareRevisions)
ts.AddCheckpoint("already_attempted_sync_ms")
if alreadyAttempted {
if !lastAttemptedPhase.Successful() {
logCtx.Warnf("Skipping auto-sync: failed previous sync attempt to %s and will not retry for %s", lastAttemptedRevisions, desiredRevisions)
message := fmt.Sprintf("Failed last sync attempt to %s: %s", lastAttemptedRevisions, app.Status.OperationState.Message)
if alreadyAttempted && (!selfHeal || !attemptPhase.Successful()) {
if !attemptPhase.Successful() {
logCtx.Warnf("Skipping auto-sync: failed previous sync attempt to %s", desiredCommitSHA)
message := fmt.Sprintf("Failed sync attempt to %s: %s", desiredCommitSHA, app.Status.OperationState.Message)
return &appv1.ApplicationCondition{Type: appv1.ApplicationConditionSyncError, Message: message}, 0
}
if !app.Spec.SyncPolicy.Automated.SelfHeal {
logCtx.Infof("Skipping auto-sync: most recent sync already to %s", desiredRevisions)
return nil, 0
logCtx.Infof("Skipping auto-sync: most recent sync already to %s", desiredCommitSHA)
return nil, 0
} else if selfHeal {
shouldSelfHeal, retryAfter := ctrl.shouldSelfHeal(app, alreadyAttempted)
if app.Status.OperationState != nil && app.Status.OperationState.Operation.Sync != nil {
op.Sync.SelfHealAttemptsCount = app.Status.OperationState.Operation.Sync.SelfHealAttemptsCount
}
// Self heal will trigger a new sync operation when the desired state changes and causes the application to
// be OutOfSync after it was previously synced successfully. This means SelfHeal should only ever be attempted
// when the revisions have not changed, and when the previous sync to these revisions was successful
// Only carry SelfHealAttemptsCount forward, to be increased, when the selfHealBackoffCooldown has not yet elapsed
if !ctrl.selfHealBackoffCooldownElapsed(app) {
if app.Status.OperationState != nil && app.Status.OperationState.Operation.Sync != nil {
op.Sync.SelfHealAttemptsCount = app.Status.OperationState.Operation.Sync.SelfHealAttemptsCount
if alreadyAttempted {
if !shouldSelfHeal {
logCtx.Infof("Skipping auto-sync: already attempted sync to %s with timeout %v (retrying in %v)", desiredCommitSHA, ctrl.selfHealTimeout, retryAfter)
ctrl.requestAppRefresh(app.QualifiedName(), CompareWithLatest.Pointer(), &retryAfter)
return nil, 0
}
}
if remainingTime := ctrl.selfHealRemainingBackoff(app, int(op.Sync.SelfHealAttemptsCount)); remainingTime > 0 {
logCtx.Infof("Skipping auto-sync: already attempted sync to %s with timeout %v (retrying in %v)", lastAttemptedRevisions, ctrl.selfHealTimeout, remainingTime)
ctrl.requestAppRefresh(app.QualifiedName(), CompareWithLatest.Pointer(), &remainingTime)
return nil, 0
}
op.Sync.SelfHealAttemptsCount++
for _, resource := range resources {
if resource.Status != appv1.SyncStatusCodeSynced {
op.Sync.Resources = append(op.Sync.Resources, appv1.SyncOperationResource{
Kind: resource.Kind,
Group: resource.Group,
Name: resource.Name,
})
op.Sync.SelfHealAttemptsCount++
for _, resource := range resources {
if resource.Status != appv1.SyncStatusCodeSynced {
op.Sync.Resources = append(op.Sync.Resources, appv1.SyncOperationResource{
Kind: resource.Kind,
Group: resource.Group,
Name: resource.Name,
})
}
}
}
}
@@ -2200,7 +2197,7 @@ func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *
}
}
if bAllNeedPrune {
message := fmt.Sprintf("Skipping sync attempt to %s: auto-sync will wipe out all resources", desiredRevisions)
message := fmt.Sprintf("Skipping sync attempt to %s: auto-sync will wipe out all resources", desiredCommitSHA)
logCtx.Warn(message)
return &appv1.ApplicationCondition{Type: appv1.ApplicationConditionSyncError, Message: message}, 0
}
@@ -2216,65 +2213,62 @@ func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *
if stderrors.Is(err, argo.ErrAnotherOperationInProgress) {
// skipping auto-sync because another operation is in progress and was not noticed due to stale data in informer
// it is safe to skip auto-sync because it is already running
logCtx.Warnf("Failed to initiate auto-sync to %s: %v", desiredRevisions, err)
logCtx.Warnf("Failed to initiate auto-sync to %s: %v", desiredCommitSHA, err)
return nil, 0
}
logCtx.Errorf("Failed to initiate auto-sync to %s: %v", desiredRevisions, err)
logCtx.Errorf("Failed to initiate auto-sync to %s: %v", desiredCommitSHA, err)
return &appv1.ApplicationCondition{Type: appv1.ApplicationConditionSyncError, Message: err.Error()}, setOpTime
}
ctrl.writeBackToInformer(updatedApp)
ts.AddCheckpoint("write_back_to_informer_ms")
message := fmt.Sprintf("Initiated automated sync to %s", desiredRevisions)
var target string
if updatedApp.Spec.HasMultipleSources() {
target = strings.Join(desiredCommitSHAsMS, ", ")
} else {
target = desiredCommitSHA
}
message := fmt.Sprintf("Initiated automated sync to '%s'", target)
ctrl.logAppEvent(context.TODO(), app, argo.EventInfo{Reason: argo.EventReasonOperationStarted, Type: corev1.EventTypeNormal}, message)
logCtx.Info(message)
return nil, setOpTime
}
// alreadyAttemptedSync returns whether the most recently synced revision(s) exactly match the given desiredRevisions
// for the same application source. If the revision(s) have changed or the Application source configuration has been updated,
// it will return false, indicating that a new sync should be attempted.
// When newRevisionHasChanges is false (because the commits make no direct changes to the application), it compares only the sources, not the revision(s).
// It also returns the last synced revisions, if any, and the result of that last sync operation.
func alreadyAttemptedSync(app *appv1.Application, desiredRevisions []string, newRevisionHasChanges bool) (bool, []string, synccommon.OperationPhase) {
if app.Status.OperationState == nil {
// The operation state may be removed when new operations are triggered
return false, []string{}, ""
// alreadyAttemptedSync returns whether the most recent sync was performed against the
// commitSHA and with the same app source config which are currently set in the app.
func alreadyAttemptedSync(app *appv1.Application, commitSHA string, commitSHAsMS []string, hasMultipleSources bool, revisionUpdated bool) (bool, synccommon.OperationPhase) {
if app.Status.OperationState == nil || app.Status.OperationState.Operation.Sync == nil || app.Status.OperationState.SyncResult == nil {
return false, ""
}
if app.Status.OperationState.SyncResult == nil {
// If the sync has completed without result, it is very likely that an error happened
// We don't want to resync with auto-sync indefinitely. We should have retried the configured amount of time already
// In this case, a manual action to restore the app may be required
log.WithFields(applog.GetAppLogFields(app)).Warn("Already attempted sync: sync does not have any results")
return app.Status.OperationState.Phase.Completed(), []string{}, app.Status.OperationState.Phase
}
if newRevisionHasChanges {
log.WithFields(applog.GetAppLogFields(app)).Infof("Already attempted sync: comparing synced revisions to %s", desiredRevisions)
if app.Spec.HasMultipleSources() {
if !reflect.DeepEqual(app.Status.OperationState.SyncResult.Revisions, desiredRevisions) {
return false, app.Status.OperationState.SyncResult.Revisions, app.Status.OperationState.Phase
if hasMultipleSources {
if revisionUpdated {
if !reflect.DeepEqual(app.Status.OperationState.SyncResult.Revisions, commitSHAsMS) {
return false, ""
}
} else {
if len(desiredRevisions) != 1 || app.Status.OperationState.SyncResult.Revision != desiredRevisions[0] {
return false, []string{app.Status.OperationState.SyncResult.Revision}, app.Status.OperationState.Phase
}
log.WithFields(applog.GetAppLogFields(app)).Debugf("Skipping auto-sync: commitSHA %s has no changes", commitSHA)
}
} else {
log.WithFields(applog.GetAppLogFields(app)).Debugf("Already attempted sync: revisions %s have no changes", desiredRevisions)
if revisionUpdated {
log.WithFields(applog.GetAppLogFields(app)).Infof("Executing compare of syncResult.Revision and commitSha because manifest changed: %v", commitSHA)
if app.Status.OperationState.SyncResult.Revision != commitSHA {
return false, ""
}
} else {
log.WithFields(applog.GetAppLogFields(app)).Debugf("Skipping auto-sync: commitSHA %s has no changes", commitSHA)
}
}
log.WithFields(applog.GetAppLogFields(app)).Debug("Already attempted sync: comparing sources")
if app.Spec.HasMultipleSources() {
return reflect.DeepEqual(app.Spec.Sources, app.Status.OperationState.SyncResult.Sources), app.Status.OperationState.SyncResult.Revisions, app.Status.OperationState.Phase
if hasMultipleSources {
return reflect.DeepEqual(app.Spec.Sources, app.Status.OperationState.SyncResult.Sources), app.Status.OperationState.Phase
}
return reflect.DeepEqual(app.Spec.GetSource(), app.Status.OperationState.SyncResult.Source), []string{app.Status.OperationState.SyncResult.Revision}, app.Status.OperationState.Phase
return reflect.DeepEqual(app.Spec.GetSource(), app.Status.OperationState.SyncResult.Source), app.Status.OperationState.Phase
}
func (ctrl *ApplicationController) selfHealRemainingBackoff(app *appv1.Application, selfHealAttemptsCount int) time.Duration {
func (ctrl *ApplicationController) shouldSelfHeal(app *appv1.Application, alreadyAttempted bool) (bool, time.Duration) {
if app.Status.OperationState == nil {
return time.Duration(0)
return true, time.Duration(0)
}
var timeSinceOperation *time.Duration
@@ -2282,41 +2276,34 @@ func (ctrl *ApplicationController) selfHealRemainingBackoff(app *appv1.Applicati
timeSinceOperation = ptr.To(time.Since(app.Status.OperationState.FinishedAt.Time))
}
// Reset counter if the prior sync was successful and the cooldown period is over OR if the revision has changed
if !alreadyAttempted || (timeSinceOperation != nil && *timeSinceOperation >= ctrl.selfHealBackoffCooldown && app.Status.Sync.Status == appv1.SyncStatusCodeSynced) {
app.Status.OperationState.Operation.Sync.SelfHealAttemptsCount = 0
}
var retryAfter time.Duration
if ctrl.selfHealBackoff == nil {
if ctrl.selfHealBackOff == nil {
if timeSinceOperation == nil {
retryAfter = ctrl.selfHealTimeout
} else {
retryAfter = ctrl.selfHealTimeout - *timeSinceOperation
}
} else {
backOff := *ctrl.selfHealBackoff
backOff.Steps = selfHealAttemptsCount
backOff := *ctrl.selfHealBackOff
backOff.Steps = int(app.Status.OperationState.Operation.Sync.SelfHealAttemptsCount)
var delay time.Duration
steps := backOff.Steps
for i := 0; i < steps; i++ {
delay = backOff.Step()
}
if timeSinceOperation == nil {
retryAfter = delay
} else {
retryAfter = delay - *timeSinceOperation
}
}
return retryAfter
}
// selfHealBackoffCooldownElapsed returns true when the last successful sync occurred longer ago
// than the self heal cooldown. This means that the application has been in sync for long enough to
// reset the self healing backoff to its initial state
func (ctrl *ApplicationController) selfHealBackoffCooldownElapsed(app *appv1.Application) bool {
if app.Status.OperationState == nil || app.Status.OperationState.FinishedAt == nil {
// Something is in progress, or about to be. In that case, the self-heal attempt count should be zero anyway
return true
}
timeSinceLastOperation := time.Since(app.Status.OperationState.FinishedAt.Time)
return timeSinceLastOperation >= ctrl.selfHealBackoffCooldown && app.Status.OperationState.Phase.Successful()
return retryAfter <= 0, retryAfter
}
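The delay computation above replays wait.Backoff.Step once per recorded self-heal attempt. A minimal standalone sketch with the same Factor/Duration/Cap as the test further down reproduces the expected schedule (2s, 6s, 18s, 54s, then capped at 2m):

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	attempts := 6
	backOff := wait.Backoff{Factor: 3, Duration: 2 * time.Second, Cap: 2 * time.Minute, Steps: attempts}
	var delay time.Duration
	for i := 0; i < attempts; i++ {
		delay = backOff.Step() // each call advances the backoff state
		fmt.Printf("after attempt %d: wait %s\n", i+1, delay)
	}
}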
// isAppNamespaceAllowed returns whether the application is allowed in the

View File

@@ -348,13 +348,10 @@ status:
- cccccccccccccccccccccccccccccccccccccccc
sources:
- path: some/path
helm:
valueFiles:
- $values_test/values.yaml
repoURL: https://github.com/argoproj/argocd-example-apps.git
- path: some/other/path
repoURL: https://github.com/argoproj/argocd-example-apps-fake.git
- ref: values_test
- path: some/other/path
repoURL: https://github.com/argoproj/argocd-example-apps-fake-ref.git
`
@@ -628,13 +625,13 @@ func TestAutoSyncEnabledSetToTrue(t *testing.T) {
assert.False(t, app.Operation.Sync.Prune)
}
func TestAutoSyncMultiSourceWithoutSelfHeal(t *testing.T) {
func TestMultiSourceSelfHeal(t *testing.T) {
// Simulate an OutOfSync status caused by an object change in the cluster,
// so the sync result revisions and the sync status revisions should be deeply equal
t.Run("ClusterObjectChangeShouldNotTriggerAutoSync", func(t *testing.T) {
app := newFakeMultiSourceApp()
app.Spec.SyncPolicy.Automated.SelfHeal = false
app.Status.OperationState.SyncResult.Revisions = []string{"z", "x", "v"}
app.Status.Sync.Revisions = []string{"z", "x", "v"}
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
@@ -646,14 +643,15 @@ func TestAutoSyncMultiSourceWithoutSelfHeal(t *testing.T) {
require.NoError(t, err)
assert.Nil(t, app.Operation)
})
t.Run("NewRevisionChangeShouldTriggerAutoSync", func(t *testing.T) {
app := newFakeMultiSourceApp()
app.Spec.SyncPolicy.Automated.SelfHeal = false
app.Status.OperationState.SyncResult.Revisions = []string{"z", "x", "v"}
app.Status.Sync.Revisions = []string{"a", "b", "c"}
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revisions: []string{"a", "b", "c"},
Revisions: []string{"z", "x", "v"},
}
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{{Name: "guestbook-1", Kind: kube.DeploymentKind, Status: v1alpha1.SyncStatusCodeOutOfSync}}, true)
assert.Nil(t, cond)
@@ -796,30 +794,6 @@ func TestSkipAutoSync(t *testing.T) {
assert.Nil(t, app.Operation)
})
t.Run("PreviousSyncAttemptError", func(t *testing.T) {
app := newFakeApp()
app.Status.OperationState = &v1alpha1.OperationState{
Operation: v1alpha1.Operation{
Sync: &v1alpha1.SyncOperation{},
},
Phase: synccommon.OperationError,
SyncResult: &v1alpha1.SyncOperationResult{
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
Source: *app.Spec.Source.DeepCopy(),
},
}
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{{Name: "guestbook", Kind: kube.DeploymentKind, Status: v1alpha1.SyncStatusCodeOutOfSync}}, true)
assert.NotNil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get(t.Context(), "my-app", metav1.GetOptions{})
require.NoError(t, err)
assert.Nil(t, app.Operation)
})
t.Run("NeedsToPruneResourcesOnlyButAutomatedPruneDisabled", func(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
@@ -874,78 +848,45 @@ func TestAutoSyncIndicateError(t *testing.T) {
// TestAutoSyncParameterOverrides verifies we auto-sync if revision is same but parameter overrides are different
func TestAutoSyncParameterOverrides(t *testing.T) {
t.Run("Single source", func(t *testing.T) {
app := newFakeApp()
app.Spec.Source.Helm = &v1alpha1.ApplicationSourceHelm{
Parameters: []v1alpha1.HelmParameter{
{
Name: "a",
Value: "1",
},
app := newFakeApp()
app.Spec.Source.Helm = &v1alpha1.ApplicationSourceHelm{
Parameters: []v1alpha1.HelmParameter{
{
Name: "a",
Value: "1",
},
}
app.Status.OperationState = &v1alpha1.OperationState{
Operation: v1alpha1.Operation{
Sync: &v1alpha1.SyncOperation{
Source: &v1alpha1.ApplicationSource{
Helm: &v1alpha1.ApplicationSourceHelm{
Parameters: []v1alpha1.HelmParameter{
{
Name: "a",
Value: "2", // this value changed
},
},
}
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
}
app.Status.OperationState = &v1alpha1.OperationState{
Operation: v1alpha1.Operation{
Sync: &v1alpha1.SyncOperation{
Source: &v1alpha1.ApplicationSource{
Helm: &v1alpha1.ApplicationSourceHelm{
Parameters: []v1alpha1.HelmParameter{
{
Name: "a",
Value: "2", // this value changed
},
},
},
},
},
Phase: synccommon.OperationFailed,
SyncResult: &v1alpha1.SyncOperationResult{
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
},
}
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
},
Phase: synccommon.OperationFailed,
SyncResult: &v1alpha1.SyncOperationResult{
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
}
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{{Name: "guestbook", Kind: kube.DeploymentKind, Status: v1alpha1.SyncStatusCodeOutOfSync}}, true)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get(t.Context(), "my-app", metav1.GetOptions{})
require.NoError(t, err)
assert.NotNil(t, app.Operation)
})
t.Run("Multi sources", func(t *testing.T) {
app := newFakeMultiSourceApp()
app.Spec.Sources[0].Helm = &v1alpha1.ApplicationSourceHelm{
Parameters: []v1alpha1.HelmParameter{
{
Name: "a",
Value: "1",
},
},
}
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
app.Status.OperationState.SyncResult.Revisions = []string{"z", "x", "v"}
app.Status.OperationState.SyncResult.Sources[0].Helm = &v1alpha1.ApplicationSourceHelm{
Parameters: []v1alpha1.HelmParameter{
{
Name: "a",
Value: "2", // this value changed
},
},
}
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revisions: []string{"z", "x", "v"},
}
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{{Name: "guestbook", Kind: kube.DeploymentKind, Status: v1alpha1.SyncStatusCodeOutOfSync}}, true)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get(t.Context(), "my-app", metav1.GetOptions{})
require.NoError(t, err)
assert.NotNil(t, app.Operation)
})
},
}
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{{Name: "guestbook", Kind: kube.DeploymentKind, Status: v1alpha1.SyncStatusCodeOutOfSync}}, true)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get(t.Context(), "my-app", metav1.GetOptions{})
require.NoError(t, err)
assert.NotNil(t, app.Operation)
}
// TestFinalizeAppDeletion verifies application deletion
@@ -1373,9 +1314,6 @@ func TestGetResourceTree_HasOrphanedResources(t *testing.T) {
managedDeploy := v1alpha1.ResourceNode{
ResourceRef: v1alpha1.ResourceRef{Group: "apps", Kind: "Deployment", Namespace: "default", Name: "nginx-deployment", Version: "v1"},
Health: &v1alpha1.HealthStatus{
Status: health.HealthStatusMissing,
},
}
orphanedDeploy1 := v1alpha1.ResourceNode{
ResourceRef: v1alpha1.ResourceRef{Group: "apps", Kind: "Deployment", Namespace: "default", Name: "deploy1"},
@@ -1928,7 +1866,7 @@ apps/Deployment:
hs = {}
hs.status = ""
hs.message = ""
if obj.metadata ~= nil then
if obj.metadata.labels ~= nil then
current_status = obj.metadata.labels["status"]
@@ -2096,9 +2034,7 @@ func TestProcessRequestedAppOperation_FailedNoRetries(t *testing.T) {
ctrl.processRequestedAppOperation(app)
phase, _, _ := unstructured.NestedString(receivedPatch, "status", "operationState", "phase")
message, _, _ := unstructured.NestedString(receivedPatch, "status", "operationState", "message")
assert.Equal(t, string(synccommon.OperationError), phase)
assert.Equal(t, "Failed to load application project: error getting app project \"default\": appproject.argoproj.io \"default\" not found", message)
}
func TestProcessRequestedAppOperation_InvalidDestination(t *testing.T) {
@@ -2127,8 +2063,8 @@ func TestProcessRequestedAppOperation_InvalidDestination(t *testing.T) {
ctrl.processRequestedAppOperation(app)
phase, _, _ := unstructured.NestedString(receivedPatch, "status", "operationState", "phase")
assert.Equal(t, string(synccommon.OperationFailed), phase)
message, _, _ := unstructured.NestedString(receivedPatch, "status", "operationState", "message")
assert.Equal(t, string(synccommon.OperationError), phase)
assert.Contains(t, message, "application destination can't have both name and server defined: another-cluster https://localhost:6443")
}
@@ -2152,24 +2088,20 @@ func TestProcessRequestedAppOperation_FailedHasRetries(t *testing.T) {
ctrl.processRequestedAppOperation(app)
phase, _, _ := unstructured.NestedString(receivedPatch, "status", "operationState", "phase")
message, _, _ := unstructured.NestedString(receivedPatch, "status", "operationState", "message")
retryCount, _, _ := unstructured.NestedFloat64(receivedPatch, "status", "operationState", "retryCount")
assert.Equal(t, string(synccommon.OperationRunning), phase)
assert.Contains(t, message, "Failed to load application project: error getting app project \"invalid-project\": appproject.argoproj.io \"invalid-project\" not found. Retrying attempt #1")
message, _, _ := unstructured.NestedString(receivedPatch, "status", "operationState", "message")
assert.Contains(t, message, "due to application controller sync timeout. Retrying attempt #1")
retryCount, _, _ := unstructured.NestedFloat64(receivedPatch, "status", "operationState", "retryCount")
assert.InEpsilon(t, float64(1), retryCount, 0.0001)
}
func TestProcessRequestedAppOperation_RunningPreviouslyFailed(t *testing.T) {
failedAttemptFinishedAt := time.Now().Add(-time.Minute * 5)
app := newFakeApp()
app.Operation = &v1alpha1.Operation{
Sync: &v1alpha1.SyncOperation{},
Retry: v1alpha1.RetryStrategy{Limit: 1},
}
app.Status.OperationState.Operation = *app.Operation
app.Status.OperationState.Phase = synccommon.OperationRunning
app.Status.OperationState.RetryCount = 1
app.Status.OperationState.FinishedAt = &metav1.Time{Time: failedAttemptFinishedAt}
app.Status.OperationState.SyncResult.Resources = []*v1alpha1.ResourceResult{{
Name: "guestbook",
Kind: "Deployment",
@@ -2199,58 +2131,7 @@ func TestProcessRequestedAppOperation_RunningPreviouslyFailed(t *testing.T) {
ctrl.processRequestedAppOperation(app)
phase, _, _ := unstructured.NestedString(receivedPatch, "status", "operationState", "phase")
message, _, _ := unstructured.NestedString(receivedPatch, "status", "operationState", "message")
finishedAtStr, _, _ := unstructured.NestedString(receivedPatch, "status", "operationState", "finishedAt")
finishedAt, err := time.Parse(time.RFC3339, finishedAtStr)
require.NoError(t, err)
assert.Equal(t, string(synccommon.OperationSucceeded), phase)
assert.Equal(t, "successfully synced (no more tasks)", message)
assert.Truef(t, finishedAt.After(failedAttemptFinishedAt), "finishedAt was expected to be updated. The retry was not performed.")
}
func TestProcessRequestedAppOperation_RunningPreviouslyFailedBackoff(t *testing.T) {
failedAttemptFinishedAt := time.Now().Add(-time.Second)
app := newFakeApp()
app.Operation = &v1alpha1.Operation{
Sync: &v1alpha1.SyncOperation{},
Retry: v1alpha1.RetryStrategy{
Limit: 1,
Backoff: &v1alpha1.Backoff{
Duration: "1h",
Factor: ptr.To(int64(100)),
MaxDuration: "1h",
},
},
}
app.Status.OperationState.Operation = *app.Operation
app.Status.OperationState.Phase = synccommon.OperationRunning
app.Status.OperationState.Message = "pending retry"
app.Status.OperationState.RetryCount = 1
app.Status.OperationState.FinishedAt = &metav1.Time{Time: failedAttemptFinishedAt}
app.Status.OperationState.SyncResult.Resources = []*v1alpha1.ResourceResult{{
Name: "guestbook",
Kind: "Deployment",
Group: "apps",
Status: synccommon.ResultCodeSyncFailed,
}}
data := &fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
Revision: "abc123",
},
}
ctrl := newFakeController(data, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
fakeAppCs.PrependReactor("patch", "*", func(_ kubetesting.Action) (handled bool, ret runtime.Object, err error) {
require.FailNow(t, "A patch should not have been called if the backoff has not passed")
return true, &v1alpha1.Application{}, nil
})
ctrl.processRequestedAppOperation(app)
}
func TestProcessRequestedAppOperation_HasRetriesTerminated(t *testing.T) {
@@ -2259,7 +2140,6 @@ func TestProcessRequestedAppOperation_HasRetriesTerminated(t *testing.T) {
Sync: &v1alpha1.SyncOperation{},
Retry: v1alpha1.RetryStrategy{Limit: 10},
}
app.Status.OperationState.Operation = *app.Operation
app.Status.OperationState.Phase = synccommon.OperationTerminating
data := &fakeData{
@@ -2284,9 +2164,7 @@ func TestProcessRequestedAppOperation_HasRetriesTerminated(t *testing.T) {
ctrl.processRequestedAppOperation(app)
phase, _, _ := unstructured.NestedString(receivedPatch, "status", "operationState", "phase")
message, _, _ := unstructured.NestedString(receivedPatch, "status", "operationState", "message")
assert.Equal(t, string(synccommon.OperationFailed), phase)
assert.Equal(t, "Operation terminated", message)
}
func TestProcessRequestedAppOperation_Successful(t *testing.T) {
@@ -2313,91 +2191,12 @@ func TestProcessRequestedAppOperation_Successful(t *testing.T) {
ctrl.processRequestedAppOperation(app)
phase, _, _ := unstructured.NestedString(receivedPatch, "status", "operationState", "phase")
message, _, _ := unstructured.NestedString(receivedPatch, "status", "operationState", "message")
assert.Equal(t, string(synccommon.OperationSucceeded), phase)
assert.Equal(t, "successfully synced (no more tasks)", message)
ok, level := ctrl.isRefreshRequested(ctrl.toAppKey(app.Name))
assert.True(t, ok)
assert.Equal(t, CompareWithLatestForceResolve, level)
}
func TestProcessRequestedAppOperation_SyncTimeout(t *testing.T) {
testCases := []struct {
name string
startedSince time.Duration
syncTimeout time.Duration
retryAttempt int
currentPhase synccommon.OperationPhase
expectedPhase synccommon.OperationPhase
expectedMessage string
}{{
name: "Continue when running operation has not exceeded timeout",
syncTimeout: time.Minute,
startedSince: 30 * time.Second,
currentPhase: synccommon.OperationRunning,
expectedPhase: synccommon.OperationSucceeded,
expectedMessage: "successfully synced (no more tasks)",
}, {
name: "Continue when terminating operation has exceeded timeout",
syncTimeout: time.Minute,
startedSince: 2 * time.Minute,
currentPhase: synccommon.OperationTerminating,
expectedPhase: synccommon.OperationFailed,
expectedMessage: "Operation terminated",
}, {
name: "Terminate when running operation exceeded timeout",
syncTimeout: time.Minute,
startedSince: 2 * time.Minute,
currentPhase: synccommon.OperationRunning,
expectedPhase: synccommon.OperationFailed,
expectedMessage: "Operation terminated, triggered by controller sync timeout",
}, {
name: "Terminate when retried operation exceeded timeout",
syncTimeout: time.Minute,
startedSince: 15 * time.Minute,
currentPhase: synccommon.OperationRunning,
retryAttempt: 1,
expectedPhase: synccommon.OperationFailed,
expectedMessage: "Operation terminated, triggered by controller sync timeout (retried 1 times).",
}}
for i := range testCases {
tc := testCases[i]
t.Run(fmt.Sprintf("case %d: %s", i, tc.name), func(t *testing.T) {
app := newFakeApp()
app.Spec.Project = "default"
app.Operation = &v1alpha1.Operation{
Sync: &v1alpha1.SyncOperation{
Revision: "HEAD",
},
}
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponses: []*apiclient.ManifestResponse{{
Manifests: []string{},
}},
}, nil)
ctrl.syncTimeout = tc.syncTimeout
app.Status.OperationState = &v1alpha1.OperationState{
Operation: *app.Operation,
Phase: tc.currentPhase,
StartedAt: metav1.NewTime(time.Now().Add(-tc.startedSince)),
}
if tc.retryAttempt > 0 {
app.Status.OperationState.FinishedAt = ptr.To(metav1.NewTime(time.Now().Add(-tc.startedSince)))
app.Status.OperationState.RetryCount = int64(tc.retryAttempt)
}
ctrl.processRequestedAppOperation(app)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.ObjectMeta.Namespace).Get(t.Context(), app.Name, metav1.GetOptions{})
require.NoError(t, err)
assert.Equal(t, tc.expectedPhase, app.Status.OperationState.Phase)
assert.Equal(t, tc.expectedMessage, app.Status.OperationState.Message)
})
}
}
func TestGetAppHosts(t *testing.T) {
app := newFakeApp()
data := &fakeData{
@@ -2664,71 +2463,35 @@ func TestAppStatusIsReplaced(t *testing.T) {
func TestAlreadyAttemptSync(t *testing.T) {
app := newFakeApp()
defaultRevision := app.Status.OperationState.SyncResult.Revision
t.Run("no operation state", func(t *testing.T) {
app := app.DeepCopy()
app.Status.OperationState = nil
attempted, _, _ := alreadyAttemptedSync(app, []string{defaultRevision}, true)
attempted, _ := alreadyAttemptedSync(app, "", []string{}, false, false)
assert.False(t, attempted)
})
t.Run("no sync result for running sync", func(t *testing.T) {
t.Run("no sync operation", func(t *testing.T) {
app := app.DeepCopy()
app.Status.OperationState.SyncResult = nil
app.Status.OperationState.Phase = synccommon.OperationRunning
attempted, _, _ := alreadyAttemptedSync(app, []string{defaultRevision}, true)
app.Status.OperationState.Operation.Sync = nil
attempted, _ := alreadyAttemptedSync(app, "", []string{}, false, false)
assert.False(t, attempted)
})
t.Run("no sync result for completed sync", func(t *testing.T) {
t.Run("no sync result", func(t *testing.T) {
app := app.DeepCopy()
app.Status.OperationState.SyncResult = nil
app.Status.OperationState.Phase = synccommon.OperationError
attempted, _, _ := alreadyAttemptedSync(app, []string{defaultRevision}, true)
assert.True(t, attempted)
attempted, _ := alreadyAttemptedSync(app, "", []string{}, false, false)
assert.False(t, attempted)
})
t.Run("single source", func(t *testing.T) {
t.Run("no revision", func(t *testing.T) {
attempted, _, _ := alreadyAttemptedSync(app, []string{}, true)
assert.False(t, attempted)
})
t.Run("empty revision", func(t *testing.T) {
attempted, _, _ := alreadyAttemptedSync(app, []string{""}, true)
assert.False(t, attempted)
})
t.Run("too many revision", func(t *testing.T) {
app := app.DeepCopy()
app.Status.OperationState.SyncResult.Revision = "sha"
attempted, _, _ := alreadyAttemptedSync(app, []string{"sha", "sha2"}, true)
assert.False(t, attempted)
})
t.Run("same manifest, same SHA with changes", func(t *testing.T) {
app := app.DeepCopy()
app.Status.OperationState.SyncResult.Revision = "sha"
attempted, _, _ := alreadyAttemptedSync(app, []string{"sha"}, true)
t.Run("same manifest with sync result", func(t *testing.T) {
attempted, _ := alreadyAttemptedSync(app, "sha", []string{}, false, false)
assert.True(t, attempted)
})
t.Run("same manifest, different SHA with changes", func(t *testing.T) {
app := app.DeepCopy()
app.Status.OperationState.SyncResult.Revision = "sha1"
attempted, _, _ := alreadyAttemptedSync(app, []string{"sha2"}, true)
assert.False(t, attempted)
})
t.Run("same manifest, different SHA without changes", func(t *testing.T) {
app := app.DeepCopy()
app.Status.OperationState.SyncResult.Revision = "sha1"
attempted, _, _ := alreadyAttemptedSync(app, []string{"sha2"}, false)
assert.True(t, attempted)
})
t.Run("different manifest, same SHA with changes", func(t *testing.T) {
t.Run("same manifest with sync result different targetRevision, same SHA", func(t *testing.T) {
// This test represents the case where the user changed a source's target revision to a new branch, but it
// points to the same revision as the old branch. We currently do not consider this as having been "already
// attempted." In the future we may want to short-circuit the auto-sync in these cases.
@@ -2736,101 +2499,55 @@ func TestAlreadyAttemptSync(t *testing.T) {
app.Status.OperationState.SyncResult.Source = v1alpha1.ApplicationSource{TargetRevision: "branch1"}
app.Spec.Source = &v1alpha1.ApplicationSource{TargetRevision: "branch2"}
app.Status.OperationState.SyncResult.Revision = "sha"
attempted, _, _ := alreadyAttemptedSync(app, []string{"sha"}, true)
attempted, _ := alreadyAttemptedSync(app, "sha", []string{}, false, false)
assert.False(t, attempted)
})
t.Run("different manifest, different SHA with changes", func(t *testing.T) {
t.Run("different manifest with sync result, different SHA", func(t *testing.T) {
app := app.DeepCopy()
app.Status.OperationState.SyncResult.Source = v1alpha1.ApplicationSource{Path: "folder1"}
app.Spec.Source = &v1alpha1.ApplicationSource{Path: "folder2"}
app.Status.OperationState.SyncResult.Revision = "sha1"
attempted, _, _ := alreadyAttemptedSync(app, []string{"sha2"}, true)
attempted, _ := alreadyAttemptedSync(app, "sha2", []string{}, false, true)
assert.False(t, attempted)
})
t.Run("different manifest, different SHA without changes", func(t *testing.T) {
t.Run("different manifest with sync result, same SHA", func(t *testing.T) {
app := app.DeepCopy()
app.Status.OperationState.SyncResult.Source = v1alpha1.ApplicationSource{Path: "folder1"}
app.Spec.Source = &v1alpha1.ApplicationSource{Path: "folder2"}
app.Status.OperationState.SyncResult.Revision = "sha1"
attempted, _, _ := alreadyAttemptedSync(app, []string{"sha2"}, false)
assert.False(t, attempted)
})
t.Run("different manifest, same SHA without changes", func(t *testing.T) {
app := app.DeepCopy()
app.Status.OperationState.SyncResult.Source = v1alpha1.ApplicationSource{Path: "folder1"}
app.Spec.Source = &v1alpha1.ApplicationSource{Path: "folder2"}
app.Status.OperationState.SyncResult.Revision = "sha"
attempted, _, _ := alreadyAttemptedSync(app, []string{"sha"}, false)
assert.False(t, attempted)
attempted, _ := alreadyAttemptedSync(app, "sha", []string{}, false, true)
assert.True(t, attempted)
})
})
t.Run("multi-source", func(t *testing.T) {
app := app.DeepCopy()
app.Status.OperationState.SyncResult.Sources = []v1alpha1.ApplicationSource{{Path: "folder1"}, {Path: "folder2"}}
app.Spec.Sources = []v1alpha1.ApplicationSource{{Path: "folder1"}, {Path: "folder2"}}
t.Run("same manifest, same SHAs with changes", func(t *testing.T) {
app := app.DeepCopy()
app.Status.OperationState.SyncResult.Revisions = []string{"sha_a", "sha_b"}
attempted, _, _ := alreadyAttemptedSync(app, []string{"sha_a", "sha_b"}, true)
t.Run("same manifest with sync result", func(t *testing.T) {
attempted, _ := alreadyAttemptedSync(app, "", []string{"sha"}, true, false)
assert.True(t, attempted)
})
t.Run("same manifest, different SHAs with changes", func(t *testing.T) {
app := app.DeepCopy()
app.Status.OperationState.SyncResult.Revisions = []string{"sha_a_=", "sha_b_1"}
attempted, _, _ := alreadyAttemptedSync(app, []string{"sha_a_2", "sha_b_2"}, true)
assert.False(t, attempted)
})
t.Run("same manifest, different SHA without changes", func(t *testing.T) {
app := app.DeepCopy()
app.Status.OperationState.SyncResult.Revisions = []string{"sha_a_=", "sha_b_1"}
attempted, _, _ := alreadyAttemptedSync(app, []string{"sha_a_2", "sha_b_2"}, false)
assert.True(t, attempted)
})
t.Run("different manifest, same SHA with changes", func(t *testing.T) {
t.Run("same manifest with sync result, different targetRevision, same SHA", func(t *testing.T) {
// This test represents the case where the user changed a source's target revision to a new branch, but it
// points to the same revision as the old branch. We currently do not consider this as having been "already
// attempted." In the future we may want to short-circuit the auto-sync in these cases.
app := app.DeepCopy()
app.Status.OperationState.SyncResult.Sources = []v1alpha1.ApplicationSource{{TargetRevision: "branch1"}, {TargetRevision: "branch2"}}
app.Spec.Sources = []v1alpha1.ApplicationSource{{TargetRevision: "branch1"}, {TargetRevision: "branch3"}}
app.Status.OperationState.SyncResult.Revisions = []string{"sha_a_2", "sha_b_2"}
attempted, _, _ := alreadyAttemptedSync(app, []string{"sha_a_2", "sha_b_2"}, false)
app.Status.OperationState.SyncResult.Sources = []v1alpha1.ApplicationSource{{TargetRevision: "branch1"}}
app.Spec.Sources = []v1alpha1.ApplicationSource{{TargetRevision: "branch2"}}
app.Status.OperationState.SyncResult.Revisions = []string{"sha"}
attempted, _ := alreadyAttemptedSync(app, "", []string{"sha"}, true, false)
assert.False(t, attempted)
})
t.Run("different manifest, different SHA with changes", func(t *testing.T) {
t.Run("different manifest with sync result, different SHAs", func(t *testing.T) {
app := app.DeepCopy()
app.Status.OperationState.SyncResult.Sources = []v1alpha1.ApplicationSource{{Path: "folder1"}, {Path: "folder2"}}
app.Spec.Sources = []v1alpha1.ApplicationSource{{Path: "folder1"}, {Path: "folder3"}}
app.Status.OperationState.SyncResult.Revisions = []string{"sha_a", "sha_b"}
attempted, _, _ := alreadyAttemptedSync(app, []string{"sha_a", "sha_b_2"}, true)
app.Status.OperationState.SyncResult.Revisions = []string{"sha_a_=", "sha_b_1"}
attempted, _ := alreadyAttemptedSync(app, "", []string{"sha_a_2", "sha_b_2"}, true, true)
assert.False(t, attempted)
})
t.Run("different manifest, different SHA without changes", func(t *testing.T) {
t.Run("different manifest with sync result, same SHAs", func(t *testing.T) {
app := app.DeepCopy()
app.Status.OperationState.SyncResult.Sources = []v1alpha1.ApplicationSource{{Path: "folder1"}, {Path: "folder2"}}
app.Spec.Sources = []v1alpha1.ApplicationSource{{Path: "folder1"}, {Path: "folder3"}}
app.Status.OperationState.SyncResult.Revisions = []string{"sha_a", "sha_b"}
attempted, _, _ := alreadyAttemptedSync(app, []string{"sha_a", "sha_b_2"}, false)
assert.False(t, attempted)
})
t.Run("different manifest, same SHA without changes", func(t *testing.T) {
app := app.DeepCopy()
app.Status.OperationState.SyncResult.Sources = []v1alpha1.ApplicationSource{{Path: "folder1"}, {Path: "folder2"}}
app.Spec.Sources = []v1alpha1.ApplicationSource{{Path: "folder1"}, {Path: "folder3"}}
app.Status.OperationState.SyncResult.Revisions = []string{"sha_a", "sha_b"}
attempted, _, _ := alreadyAttemptedSync(app, []string{"sha_a", "sha_b"}, false)
assert.False(t, attempted)
attempted, _ := alreadyAttemptedSync(app, "", []string{"sha_a", "sha_b"}, true, true)
assert.True(t, attempted)
})
})
}
@@ -2842,13 +2559,14 @@ func assertDurationAround(t *testing.T, expected time.Duration, actual time.Dura
assert.LessOrEqual(t, expected, actual+delta)
}
func TestSelfHealRemainingBackoff(t *testing.T) {
func TestSelfHealExponentialBackoff(t *testing.T) {
ctrl := newFakeController(&fakeData{}, nil)
ctrl.selfHealBackoff = &wait.Backoff{
ctrl.selfHealBackOff = &wait.Backoff{
Factor: 3,
Duration: 2 * time.Second,
Cap: 2 * time.Minute,
}
app := &v1alpha1.Application{
Status: v1alpha1.ApplicationStatus{
OperationState: &v1alpha1.OperationState{
@@ -2860,108 +2578,156 @@ func TestSelfHealRemainingBackoff(t *testing.T) {
}
testCases := []struct {
attempts int
attempts int64
expectedAttempts int64
finishedAt *metav1.Time
expectedDuration time.Duration
shouldSelfHeal bool
alreadyAttempted bool
syncStatus v1alpha1.SyncStatusCode
}{{
attempts: 0,
finishedAt: ptr.To(metav1.Now()),
expectedDuration: 0,
shouldSelfHeal: true,
alreadyAttempted: true,
expectedAttempts: 0,
syncStatus: v1alpha1.SyncStatusCodeOutOfSync,
}, {
attempts: 1,
finishedAt: ptr.To(metav1.Now()),
expectedDuration: 2 * time.Second,
shouldSelfHeal: false,
alreadyAttempted: true,
expectedAttempts: 1,
syncStatus: v1alpha1.SyncStatusCodeOutOfSync,
}, {
attempts: 2,
finishedAt: ptr.To(metav1.Now()),
expectedDuration: 6 * time.Second,
shouldSelfHeal: false,
alreadyAttempted: true,
expectedAttempts: 2,
syncStatus: v1alpha1.SyncStatusCodeOutOfSync,
}, {
attempts: 3,
finishedAt: nil,
expectedDuration: 18 * time.Second,
shouldSelfHeal: false,
alreadyAttempted: true,
expectedAttempts: 3,
syncStatus: v1alpha1.SyncStatusCodeOutOfSync,
}, {
attempts: 4,
finishedAt: nil,
expectedDuration: 54 * time.Second,
shouldSelfHeal: false,
alreadyAttempted: true,
expectedAttempts: 4,
syncStatus: v1alpha1.SyncStatusCodeOutOfSync,
}, {
attempts: 5,
finishedAt: nil,
expectedDuration: 120 * time.Second,
shouldSelfHeal: false,
alreadyAttempted: true,
expectedAttempts: 5,
syncStatus: v1alpha1.SyncStatusCodeOutOfSync,
}, {
attempts: 6,
finishedAt: nil,
expectedDuration: 120 * time.Second,
shouldSelfHeal: false,
alreadyAttempted: true,
expectedAttempts: 6,
syncStatus: v1alpha1.SyncStatusCodeOutOfSync,
}, {
attempts: 6,
finishedAt: nil,
expectedDuration: 0,
shouldSelfHeal: true,
alreadyAttempted: false,
expectedAttempts: 0,
syncStatus: v1alpha1.SyncStatusCodeOutOfSync,
}, { // backoff will not reset as finished time isn't >= cooldown
attempts: 6,
finishedAt: ptr.To(metav1.Now()),
expectedDuration: 120 * time.Second,
shouldSelfHeal: false,
}, {
alreadyAttempted: true,
expectedAttempts: 6,
syncStatus: v1alpha1.SyncStatusCodeSynced,
}, { // backoff will reset as finished time is >= cooldown
attempts: 40,
finishedAt: &metav1.Time{Time: time.Now().Add(-1 * time.Minute)},
expectedDuration: 60 * time.Second,
shouldSelfHeal: false,
finishedAt: &metav1.Time{Time: time.Now().Add(-(1 * time.Minute))},
expectedDuration: -60 * time.Second,
shouldSelfHeal: true,
alreadyAttempted: true,
expectedAttempts: 0,
syncStatus: v1alpha1.SyncStatusCodeSynced,
}}
for i := range testCases {
tc := testCases[i]
t.Run(fmt.Sprintf("test case %d", i), func(t *testing.T) {
app.Status.OperationState.Operation.Sync.SelfHealAttemptsCount = tc.attempts
app.Status.OperationState.FinishedAt = tc.finishedAt
duration := ctrl.selfHealRemainingBackoff(app, tc.attempts)
shouldSelfHeal := duration <= 0
require.Equal(t, tc.shouldSelfHeal, shouldSelfHeal)
app.Status.Sync.Status = tc.syncStatus
ok, duration := ctrl.shouldSelfHeal(app, tc.alreadyAttempted)
require.Equal(t, tc.shouldSelfHeal, ok)
require.Equal(t, tc.expectedAttempts, app.Status.OperationState.Operation.Sync.SelfHealAttemptsCount)
assertDurationAround(t, tc.expectedDuration, duration)
})
}
}
func TestSelfHealBackoffCooldownElapsed(t *testing.T) {
cooldown := time.Second * 30
ctrl := newFakeController(&fakeData{}, nil)
ctrl.selfHealBackoffCooldown = cooldown
func TestSyncTimeout(t *testing.T) {
testCases := []struct {
delta time.Duration
expectedPhase synccommon.OperationPhase
expectedMessage string
}{{
delta: 2 * time.Minute,
expectedPhase: synccommon.OperationFailed,
expectedMessage: "Operation terminated",
}, {
delta: 30 * time.Second,
expectedPhase: synccommon.OperationSucceeded,
expectedMessage: "successfully synced (no more tasks)",
}}
for i := range testCases {
tc := testCases[i]
t.Run(fmt.Sprintf("test case %d", i), func(t *testing.T) {
app := newFakeApp()
app.Spec.Project = "default"
app.Operation = &v1alpha1.Operation{
Sync: &v1alpha1.SyncOperation{
Revision: "HEAD",
},
}
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponses: []*apiclient.ManifestResponse{{
Manifests: []string{},
}},
}, nil)
app := &v1alpha1.Application{
Status: v1alpha1.ApplicationStatus{
OperationState: &v1alpha1.OperationState{
Phase: synccommon.OperationSucceeded,
},
},
ctrl.syncTimeout = time.Minute
app.Status.OperationState = &v1alpha1.OperationState{
Operation: v1alpha1.Operation{
Sync: &v1alpha1.SyncOperation{
Revision: "HEAD",
},
},
Phase: synccommon.OperationRunning,
StartedAt: metav1.NewTime(time.Now().Add(-tc.delta)),
}
ctrl.processRequestedAppOperation(app)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.ObjectMeta.Namespace).Get(t.Context(), app.Name, metav1.GetOptions{})
require.NoError(t, err)
require.Equal(t, tc.expectedPhase, app.Status.OperationState.Phase)
require.Equal(t, tc.expectedMessage, app.Status.OperationState.Message)
})
}
t.Run("operation not completed", func(t *testing.T) {
app := app.DeepCopy()
app.Status.OperationState.FinishedAt = nil
elapsed := ctrl.selfHealBackoffCooldownElapsed(app)
assert.True(t, elapsed)
})
t.Run("successful operation finised after cooldown", func(t *testing.T) {
app := app.DeepCopy()
app.Status.OperationState.FinishedAt = &metav1.Time{Time: time.Now().Add(-cooldown)}
elapsed := ctrl.selfHealBackoffCooldownElapsed(app)
assert.True(t, elapsed)
})
t.Run("unsuccessful operation finised after cooldown", func(t *testing.T) {
app := app.DeepCopy()
app.Status.OperationState.Phase = synccommon.OperationFailed
app.Status.OperationState.FinishedAt = &metav1.Time{Time: time.Now().Add(-cooldown)}
elapsed := ctrl.selfHealBackoffCooldownElapsed(app)
assert.False(t, elapsed)
})
t.Run("successful operation finised before cooldown", func(t *testing.T) {
app := app.DeepCopy()
app.Status.OperationState.FinishedAt = &metav1.Time{Time: time.Now()}
elapsed := ctrl.selfHealBackoffCooldownElapsed(app)
assert.False(t, elapsed)
})
}
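A minimal sketch of the cooldown rule these cases exercise (hypothetical helper; the controller's version reads the phase and timestamp from OperationState):

package main

import (
	"fmt"
	"time"
)

// cooldownElapsed mirrors selfHealBackoffCooldownElapsed: the backoff resets
// only once a successful sync has stayed finished for the whole cooldown.
func cooldownElapsed(finishedAt *time.Time, successful bool, cooldown time.Duration) bool {
	if finishedAt == nil {
		return true // nothing completed yet; the attempt counter should be zero anyway
	}
	return time.Since(*finishedAt) >= cooldown && successful
}

func main() {
	cooldown := 30 * time.Second
	old := time.Now().Add(-cooldown)
	fmt.Println(cooldownElapsed(nil, true, cooldown))   // true
	fmt.Println(cooldownElapsed(&old, true, cooldown))  // true
	fmt.Println(cooldownElapsed(&old, false, cooldown)) // false: last sync failed
}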

View File

@@ -137,6 +137,8 @@ type LiveStateCache interface {
IsNamespaced(server *appv1.Cluster, gk schema.GroupKind) (bool, error)
// Returns synced cluster cache
GetClusterCache(server *appv1.Cluster) (clustercache.ClusterCache, error)
// Executes the given callback against the resource specified by the key and all its children
IterateHierarchy(server *appv1.Cluster, key kube.ResourceKey, action func(child appv1.ResourceNode, appName string) bool) error
// Executes the given callback against the resources specified by the keys and all their children
IterateHierarchyV2(server *appv1.Cluster, keys []kube.ResourceKey, action func(child appv1.ResourceNode, appName string) bool) error
// Returns the state of live nodes which correspond to target nodes of the specified application.
@@ -667,6 +669,17 @@ func (c *liveStateCache) IsNamespaced(server *appv1.Cluster, gk schema.GroupKind
return clusterInfo.IsNamespaced(gk)
}
func (c *liveStateCache) IterateHierarchy(server *appv1.Cluster, key kube.ResourceKey, action func(child appv1.ResourceNode, appName string) bool) error {
clusterInfo, err := c.getSyncedCluster(server)
if err != nil {
return err
}
clusterInfo.IterateHierarchy(key, func(resource *clustercache.Resource, namespaceResources map[kube.ResourceKey]*clustercache.Resource) bool {
return action(asResourceNode(resource), getApp(resource, namespaceResources))
})
return nil
}
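A minimal usage sketch for the method added in this hunk (cluster and resource names are hypothetical; it assumes the package's existing imports):

// collectSubtreeNames gathers the name of a Deployment and every resource it
// transitively owns, as seen by the synced cluster cache.
func collectSubtreeNames(c LiveStateCache, cluster *appv1.Cluster) ([]string, error) {
	var names []string
	err := c.IterateHierarchy(cluster, kube.ResourceKey{
		Group: "apps", Kind: "Deployment", Namespace: "default", Name: "nginx-deployment",
	}, func(child appv1.ResourceNode, appName string) bool {
		names = append(names, child.Name)
		return true // returning false stops descending below this node
	})
	return names, err
}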
func (c *liveStateCache) IterateHierarchyV2(server *appv1.Cluster, keys []kube.ResourceKey, action func(child appv1.ResourceNode, appName string) bool) error {
clusterInfo, err := c.getSyncedCluster(server)
if err != nil {

View File

@@ -1,7 +1,6 @@
package cache
import (
"context"
"errors"
"net"
"net/url"
@@ -40,36 +39,6 @@ func (n netError) Error() string { return string(n) }
func (n netError) Timeout() bool { return false }
func (n netError) Temporary() bool { return false }
func fixtures(data map[string]string, opts ...func(secret *corev1.Secret)) (*fake.Clientset, *argosettings.SettingsManager) {
cm := &corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Name: common.ArgoCDConfigMapName,
Namespace: "default",
Labels: map[string]string{
"app.kubernetes.io/part-of": "argocd",
},
},
Data: data,
}
secret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: common.ArgoCDSecretName,
Namespace: "default",
Labels: map[string]string{
"app.kubernetes.io/part-of": "argocd",
},
},
Data: map[string][]byte{},
}
for i := range opts {
opts[i](secret)
}
kubeClient := fake.NewClientset(cm, secret)
settingsManager := argosettings.NewSettingsManager(context.Background(), kubeClient, "default")
return kubeClient, settingsManager
}
func TestHandleModEvent_HasChanges(_ *testing.T) {
clusterCache := &mocks.ClusterCache{}
clusterCache.On("Invalidate", mock.Anything, mock.Anything).Return(nil).Once()
@@ -776,72 +745,3 @@ func Test_GetVersionsInfo_error_redacted(t *testing.T) {
require.Error(t, err)
assert.NotContains(t, err.Error(), "password")
}
func TestLoadCacheSettings(t *testing.T) {
_, settingsManager := fixtures(map[string]string{
"application.instanceLabelKey": "testLabel",
"application.resourceTrackingMethod": string(appv1.TrackingMethodLabel),
"installationID": "123456789",
})
ch := liveStateCache{
settingsMgr: settingsManager,
}
label, err := settingsManager.GetAppInstanceLabelKey()
require.NoError(t, err)
trackingMethod, err := settingsManager.GetTrackingMethod()
require.NoError(t, err)
res, err := ch.loadCacheSettings()
require.NoError(t, err)
assert.Equal(t, label, res.appInstanceLabelKey)
assert.Equal(t, string(appv1.TrackingMethodLabel), trackingMethod)
assert.Equal(t, "123456789", res.installationID)
// By default the values won't be nil
assert.NotNil(t, res.resourceOverrides)
assert.NotNil(t, res.clusterSettings)
assert.True(t, res.ignoreResourceUpdatesEnabled)
}
func Test_ownerRefGV(t *testing.T) {
tests := []struct {
name string
input metav1.OwnerReference
expected schema.GroupVersion
}{
{
name: "valid API Version",
input: metav1.OwnerReference{
APIVersion: "apps/v1",
},
expected: schema.GroupVersion{
Group: "apps",
Version: "v1",
},
},
{
name: "custom defined version",
input: metav1.OwnerReference{
APIVersion: "custom-version",
},
expected: schema.GroupVersion{
Version: "custom-version",
Group: "",
},
},
{
name: "empty APIVersion",
input: metav1.OwnerReference{
APIVersion: "",
},
expected: schema.GroupVersion{},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
res := ownerRefGV(tt.input)
assert.Equal(t, tt.expected, res)
})
}
}

View File

@@ -446,7 +446,6 @@ func populatePodInfo(un *unstructured.Unstructured, res *ResourceInfo) {
}
req, _ := resourcehelper.PodRequestsAndLimits(&pod)
res.PodInfo = &PodInfo{NodeName: pod.Spec.NodeName, ResourceRequests: req, Phase: pod.Status.Phase}
res.Info = append(res.Info, v1alpha1.InfoItem{Name: "Node", Value: pod.Spec.NodeName})
@@ -455,17 +454,6 @@ func populatePodInfo(un *unstructured.Unstructured, res *ResourceInfo) {
res.Info = append(res.Info, v1alpha1.InfoItem{Name: "Restart Count", Value: strconv.Itoa(restarts)})
}
// Requests are relevant even for pods in the init phase or pending state (e.g., due to insufficient resources),
// as they help with diagnosing scheduling and startup issues.
// Requests are released for terminated pods, whether they succeeded or failed.
if !isPodPhaseTerminal(pod.Status.Phase) {
CPUReq := req[corev1.ResourceCPU]
MemoryReq := req[corev1.ResourceMemory]
res.Info = append(res.Info, v1alpha1.InfoItem{Name: common.PodRequestsCPU, Value: strconv.FormatInt(CPUReq.MilliValue(), 10)})
res.Info = append(res.Info, v1alpha1.InfoItem{Name: common.PodRequestsMEM, Value: strconv.FormatInt(MemoryReq.MilliValue(), 10)})
}
var urls []string
if res.NetworkingInfo != nil {
urls = res.NetworkingInfo.ExternalURLs
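The PodRequests Info values in this hunk come from resource.Quantity.MilliValue, i.e. thousandths of the base unit, which is why the test expectations below show a "100m" CPU request as 100 and 128Mi of memory as 134217728000. A small standalone sketch:

package main

import (
	"fmt"
	"strconv"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	cpu := resource.MustParse("100m")  // 100 millicores
	mem := resource.MustParse("128Mi") // 134217728 bytes
	fmt.Println(strconv.FormatInt(cpu.MilliValue(), 10)) // "100"
	fmt.Println(strconv.FormatInt(mem.MilliValue(), 10)) // "134217728000" (milli-bytes)
}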

View File

@@ -12,7 +12,6 @@ import (
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"sigs.k8s.io/yaml"
"github.com/argoproj/argo-cd/v3/common"
"github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/v3/util/argo/normalizers"
"github.com/argoproj/argo-cd/v3/util/errors"
@@ -305,8 +304,6 @@ func TestGetPodInfo(t *testing.T) {
assert.Equal(t, []v1alpha1.InfoItem{
{Name: "Node", Value: "minikube"},
{Name: "Containers", Value: "0/1"},
{Name: common.PodRequestsCPU, Value: "0"}, // strings imported from common
{Name: common.PodRequestsMEM, Value: "134217728000"},
}, info.Info)
assert.Equal(t, []string{"bar"}, info.Images)
assert.Equal(t, &PodInfo{
@@ -368,81 +365,9 @@ func TestGetPodInfo(t *testing.T) {
{Name: "Status Reason", Value: "Running"},
{Name: "Node", Value: "minikube"},
{Name: "Containers", Value: "1/1"},
{Name: common.PodRequestsCPU, Value: "0"},
{Name: common.PodRequestsMEM, Value: "0"},
}, info.Info)
})
t.Run("TestGetPodWithInitialContainerInfoWithResources", func(t *testing.T) {
pod := strToUnstructured(`
apiVersion: "v1"
kind: "Pod"
metadata:
labels:
app: "app-with-initial-container"
name: "app-with-initial-container-5f46976fdb-vd6rv"
namespace: "default"
ownerReferences:
- apiVersion: "apps/v1"
kind: "ReplicaSet"
name: "app-with-initial-container-5f46976fdb"
spec:
containers:
- image: "alpine:latest"
imagePullPolicy: "Always"
name: "app-with-initial-container"
resources:
requests:
cpu: "100m"
memory: "128Mi"
limits:
cpu: "500m"
memory: "512Mi"
initContainers:
- image: "alpine:latest"
imagePullPolicy: "Always"
name: "app-with-initial-container-logshipper"
resources:
requests:
cpu: "50m"
memory: "64Mi"
limits:
cpu: "250m"
memory: "256Mi"
nodeName: "minikube"
status:
containerStatuses:
- image: "alpine:latest"
name: "app-with-initial-container"
ready: true
restartCount: 0
started: true
state:
running:
startedAt: "2024-10-08T08:44:25Z"
initContainerStatuses:
- image: "alpine:latest"
name: "app-with-initial-container-logshipper"
ready: true
restartCount: 0
started: false
state:
terminated:
exitCode: 0
reason: "Completed"
phase: "Running"
`)
info := &ResourceInfo{}
populateNodeInfo(pod, info, []string{})
assert.Equal(t, []v1alpha1.InfoItem{
{Name: "Status Reason", Value: "Running"},
{Name: "Node", Value: "minikube"},
{Name: "Containers", Value: "1/1"},
{Name: common.PodRequestsCPU, Value: "100"},
{Name: common.PodRequestsMEM, Value: "134217728000"},
}, info.Info)
})
t.Run("TestGetPodInfoWithSidecar", func(t *testing.T) {
t.Parallel()
@@ -497,8 +422,6 @@ func TestGetPodInfo(t *testing.T) {
{Name: "Status Reason", Value: "Running"},
{Name: "Node", Value: "minikube"},
{Name: "Containers", Value: "2/2"},
{Name: common.PodRequestsCPU, Value: "0"},
{Name: common.PodRequestsMEM, Value: "0"},
}, info.Info)
})
@@ -557,8 +480,6 @@ func TestGetPodInfo(t *testing.T) {
{Name: "Status Reason", Value: "Init:0/1"},
{Name: "Node", Value: "minikube"},
{Name: "Containers", Value: "0/1"},
{Name: common.PodRequestsCPU, Value: "0"},
{Name: common.PodRequestsMEM, Value: "0"},
}, info.Info)
})
@@ -616,8 +537,6 @@ func TestGetPodInfo(t *testing.T) {
{Name: "Node", Value: "minikube"},
{Name: "Containers", Value: "0/3"},
{Name: "Restart Count", Value: "3"},
{Name: common.PodRequestsCPU, Value: "0"},
{Name: common.PodRequestsMEM, Value: "0"},
}, info.Info)
})
@@ -675,8 +594,6 @@ func TestGetPodInfo(t *testing.T) {
{Name: "Node", Value: "minikube"},
{Name: "Containers", Value: "0/3"},
{Name: "Restart Count", Value: "3"},
{Name: common.PodRequestsCPU, Value: "0"},
{Name: common.PodRequestsMEM, Value: "0"},
}, info.Info)
})
@@ -737,8 +654,6 @@ func TestGetPodInfo(t *testing.T) {
{Name: "Node", Value: "minikube"},
{Name: "Containers", Value: "1/3"},
{Name: "Restart Count", Value: "7"},
{Name: common.PodRequestsCPU, Value: "0"},
{Name: common.PodRequestsMEM, Value: "0"},
}, info.Info)
})
@@ -781,8 +696,6 @@ func TestGetPodInfo(t *testing.T) {
{Name: "Node", Value: "minikube"},
{Name: "Containers", Value: "0/1"},
{Name: "Restart Count", Value: "3"},
{Name: common.PodRequestsCPU, Value: "0"},
{Name: common.PodRequestsMEM, Value: "0"},
}, info.Info)
})
@@ -818,45 +731,6 @@ func TestGetPodInfo(t *testing.T) {
}, info.Info)
})
// Test pod condition succeeded with some allocated resources
t.Run("TestPodConditionSucceededWithResources", func(t *testing.T) {
t.Parallel()
pod := strToUnstructured(`
apiVersion: v1
kind: Pod
metadata:
name: test8
spec:
nodeName: minikube
containers:
- name: container
resources:
requests:
cpu: "50m"
memory: "64Mi"
limits:
cpu: "250m"
memory: "256Mi"
status:
phase: Succeeded
containerStatuses:
- ready: false
restartCount: 0
state:
terminated:
reason: Completed
exitCode: 0
`)
info := &ResourceInfo{}
populateNodeInfo(pod, info, []string{})
assert.Equal(t, []v1alpha1.InfoItem{
{Name: "Status Reason", Value: "Completed"},
{Name: "Node", Value: "minikube"},
{Name: "Containers", Value: "0/1"},
}, info.Info)
})
// Test pod condition failed
t.Run("TestPodConditionFailed", func(t *testing.T) {
t.Parallel()
@@ -889,46 +763,6 @@ func TestGetPodInfo(t *testing.T) {
}, info.Info)
})
// Test pod condition failed with allocated resources
t.Run("TestPodConditionFailedWithResources", func(t *testing.T) {
t.Parallel()
pod := strToUnstructured(`
apiVersion: v1
kind: Pod
metadata:
name: test9
spec:
nodeName: minikube
containers:
- name: container
resources:
requests:
cpu: "50m"
memory: "64Mi"
limits:
cpu: "250m"
memory: "256Mi"
status:
phase: Failed
containerStatuses:
- ready: false
restartCount: 0
state:
terminated:
reason: Error
exitCode: 1
`)
info := &ResourceInfo{}
populateNodeInfo(pod, info, []string{})
assert.Equal(t, []v1alpha1.InfoItem{
{Name: "Status Reason", Value: "Error"},
{Name: "Node", Value: "minikube"},
{Name: "Containers", Value: "0/1"},
}, info.Info)
})
// Test pod condition succeeded with deletion
t.Run("TestPodConditionSucceededWithDeletion", func(t *testing.T) {
t.Parallel()
@@ -990,8 +824,6 @@ func TestGetPodInfo(t *testing.T) {
{Name: "Status Reason", Value: "Terminating"},
{Name: "Node", Value: "minikube"},
{Name: "Containers", Value: "0/1"},
{Name: common.PodRequestsCPU, Value: "0"},
{Name: common.PodRequestsMEM, Value: "0"},
}, info.Info)
})
@@ -1018,8 +850,6 @@ func TestGetPodInfo(t *testing.T) {
{Name: "Status Reason", Value: "Terminating"},
{Name: "Node", Value: "minikube"},
{Name: "Containers", Value: "0/1"},
{Name: common.PodRequestsCPU, Value: "0"},
{Name: common.PodRequestsMEM, Value: "0"},
}, info.Info)
})
@@ -1050,8 +880,6 @@ func TestGetPodInfo(t *testing.T) {
{Name: "Status Reason", Value: "SchedulingGated"},
{Name: "Node", Value: "minikube"},
{Name: "Containers", Value: "0/2"},
{Name: common.PodRequestsCPU, Value: "0"},
{Name: common.PodRequestsMEM, Value: "0"},
}, info.Info)
})
}

View File

@@ -471,6 +471,69 @@ func (_c *LiveStateCache_IsNamespaced_Call) RunAndReturn(run func(server *v1alph
return _c
}
// IterateHierarchy provides a mock function for the type LiveStateCache
func (_mock *LiveStateCache) IterateHierarchy(server *v1alpha1.Cluster, key kube.ResourceKey, action func(child v1alpha1.ResourceNode, appName string) bool) error {
ret := _mock.Called(server, key, action)
if len(ret) == 0 {
panic("no return value specified for IterateHierarchy")
}
var r0 error
if returnFunc, ok := ret.Get(0).(func(*v1alpha1.Cluster, kube.ResourceKey, func(child v1alpha1.ResourceNode, appName string) bool) error); ok {
r0 = returnFunc(server, key, action)
} else {
r0 = ret.Error(0)
}
return r0
}
// LiveStateCache_IterateHierarchy_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'IterateHierarchy'
type LiveStateCache_IterateHierarchy_Call struct {
*mock.Call
}
// IterateHierarchy is a helper method to define mock.On call
// - server *v1alpha1.Cluster
// - key kube.ResourceKey
// - action func(child v1alpha1.ResourceNode, appName string) bool
func (_e *LiveStateCache_Expecter) IterateHierarchy(server interface{}, key interface{}, action interface{}) *LiveStateCache_IterateHierarchy_Call {
return &LiveStateCache_IterateHierarchy_Call{Call: _e.mock.On("IterateHierarchy", server, key, action)}
}
func (_c *LiveStateCache_IterateHierarchy_Call) Run(run func(server *v1alpha1.Cluster, key kube.ResourceKey, action func(child v1alpha1.ResourceNode, appName string) bool)) *LiveStateCache_IterateHierarchy_Call {
_c.Call.Run(func(args mock.Arguments) {
var arg0 *v1alpha1.Cluster
if args[0] != nil {
arg0 = args[0].(*v1alpha1.Cluster)
}
var arg1 kube.ResourceKey
if args[1] != nil {
arg1 = args[1].(kube.ResourceKey)
}
var arg2 func(child v1alpha1.ResourceNode, appName string) bool
if args[2] != nil {
arg2 = args[2].(func(child v1alpha1.ResourceNode, appName string) bool)
}
run(
arg0,
arg1,
arg2,
)
})
return _c
}
func (_c *LiveStateCache_IterateHierarchy_Call) Return(err error) *LiveStateCache_IterateHierarchy_Call {
_c.Call.Return(err)
return _c
}
func (_c *LiveStateCache_IterateHierarchy_Call) RunAndReturn(run func(server *v1alpha1.Cluster, key kube.ResourceKey, action func(child v1alpha1.ResourceNode, appName string) bool) error) *LiveStateCache_IterateHierarchy_Call {
_c.Call.Return(run)
return _c
}
// IterateHierarchyV2 provides a mock function for the type LiveStateCache
func (_mock *LiveStateCache) IterateHierarchyV2(server *v1alpha1.Cluster, keys []kube.ResourceKey, action func(child v1alpha1.ResourceNode, appName string) bool) error {
ret := _mock.Called(server, keys, action)

View File

@@ -27,9 +27,10 @@ func setApplicationHealth(resources []managedResource, statuses []appv1.Resource
if res.Target != nil && hookutil.Skip(res.Target) {
continue
}
if res.Live != nil && res.Live.GetAnnotations() != nil && res.Live.GetAnnotations()[common.AnnotationIgnoreHealthCheck] == "true" {
if res.Target != nil && res.Target.GetAnnotations() != nil && res.Target.GetAnnotations()[common.AnnotationIgnoreHealthCheck] == "true" {
continue
}
if res.Live != nil && (hookutil.IsHook(res.Live) || ignore.Ignore(res.Live)) {
continue
}
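This hunk moves the ignore-healthcheck lookup from the live object to the target (desired) object, so the opt-out follows what is declared in Git. A minimal sketch of the lookup (annotation key assumed to match common.AnnotationIgnoreHealthCheck):

// skipHealthCheck reports whether the desired manifest opts out of health assessment.
func skipHealthCheck(target *unstructured.Unstructured) bool {
	if target == nil || target.GetAnnotations() == nil {
		return false
	}
	return target.GetAnnotations()["argocd.argoproj.io/ignore-healthcheck"] == "true"
}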

View File

@@ -82,7 +82,7 @@ func TestSetApplicationHealth(t *testing.T) {
// The app is considered healthy
failedJob.SetAnnotations(nil)
failedJobIgnoreHealthcheck := resourceFromFile("./testdata/job-failed-ignore-healthcheck.yaml")
resources[1].Live = &failedJobIgnoreHealthcheck
resources[1].Target = &failedJobIgnoreHealthcheck
healthStatus, err = setApplicationHealth(resources, resourceStatuses, nil, app, true)
require.NoError(t, err)
assert.Equal(t, health.HealthStatusHealthy, healthStatus)

View File

@@ -51,7 +51,7 @@ func (ctrl *ApplicationController) executePostDeleteHooks(app *v1alpha1.Applicat
revisions = append(revisions, src.TargetRevision)
}
targets, _, _, err := ctrl.appStateManager.GetRepoObjs(app, app.Spec.GetSources(), appLabelKey, revisions, false, false, false, proj, true)
targets, _, _, err := ctrl.appStateManager.GetRepoObjs(app, app.Spec.GetSources(), appLabelKey, revisions, false, false, false, proj, false, true)
if err != nil {
return false, err
}

View File

@@ -11,16 +11,12 @@ import (
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
commitclient "github.com/argoproj/argo-cd/v3/commitserver/apiclient"
"github.com/argoproj/argo-cd/v3/controller/hydrator/types"
appv1 "github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/v3/reposerver/apiclient"
applog "github.com/argoproj/argo-cd/v3/util/app/log"
"github.com/argoproj/argo-cd/v3/util/git"
utilio "github.com/argoproj/argo-cd/v3/util/io"
)
// RepoGetter is an interface that defines methods for getting repository objects. It's a subset of the DB interface to
// avoid granting access to things we don't need.
type RepoGetter interface {
// GetRepository returns a repository by its URL and project name.
GetRepository(ctx context.Context, repoURL, project string) (*appv1.Repository, error)
@@ -32,37 +28,16 @@ type RepoGetter interface {
type Dependencies interface {
// TODO: determine if we actually need to get the app, or if all the stuff we need the app for is done already on
// the app controller side.
// GetProcessableAppProj returns the AppProject for the given application. It should only return projects that are
// processable by the controller, meaning that the project is not deleted and the application is in a namespace
// permitted by the project.
GetProcessableAppProj(app *appv1.Application) (*appv1.AppProject, error)
// GetProcessableApps returns a list of applications that are processable by the controller.
GetProcessableApps() (*appv1.ApplicationList, error)
// GetRepoObjs returns the repository objects for the given application, source, and revision. It calls the repo-
// server and gets the manifests (objects).
GetRepoObjs(app *appv1.Application, source appv1.ApplicationSource, revision string, project *appv1.AppProject) ([]*unstructured.Unstructured, *apiclient.ManifestResponse, error)
// GetWriteCredentials returns the repository credentials for the given repository URL and project. These are to be
// sent to the commit server to write the hydrated manifests.
GetWriteCredentials(ctx context.Context, repoURL string, project string) (*appv1.Repository, error)
// RequestAppRefresh requests a refresh of the application with the given name and namespace. This is used to
// trigger a refresh after the application has been hydrated and a new commit has been pushed.
RequestAppRefresh(appName string, appNamespace string) error
// PersistAppHydratorStatus persists the application status for the source hydrator.
// TODO: only allow access to the hydrator status
PersistAppHydratorStatus(orig *appv1.Application, newStatus *appv1.SourceHydratorStatus)
// AddHydrationQueueItem adds a hydration queue item to the queue. This is used to trigger the hydration process for
// a group of applications which are hydrating to the same repo and target branch.
AddHydrationQueueItem(key types.HydrationQueueKey)
AddHydrationQueueItem(key HydrationQueueKey)
}
// Hydrator is the main struct that implements the hydration logic. It uses the Dependencies interface to access the
// app controller's functionality without directly depending on it.
type Hydrator struct {
dependencies Dependencies
statusRefreshTimeout time.Duration
@@ -71,9 +46,6 @@ type Hydrator struct {
repoGetter RepoGetter
}
// NewHydrator creates a new Hydrator instance with the given dependencies, status refresh timeout, commit clientset,
// repo clientset, and repo getter. The refresh timeout determines how often the hydrator checks if an application
// needs to be hydrated.
func NewHydrator(dependencies Dependencies, statusRefreshTimeout time.Duration, commitClientset commitclient.Clientset, repoClientset apiclient.Clientset, repoGetter RepoGetter) *Hydrator {
return &Hydrator{
dependencies: dependencies,
@@ -84,12 +56,6 @@ func NewHydrator(dependencies Dependencies, statusRefreshTimeout time.Duration,
}
}
// ProcessAppHydrateQueueItem processes an application hydrate queue item. It checks if the application needs hydration
// and if so, it updates the application's status to indicate that hydration is in progress. It then adds the
// hydration queue item to the queue for further processing.
//
// It's likely that multiple applications will trigger hydration at the same time. The hydration queue key is meant to
// dedupe these requests.
func (h *Hydrator) ProcessAppHydrateQueueItem(origApp *appv1.Application) {
origApp = origApp.DeepCopy()
app := origApp.DeepCopy()
@@ -123,24 +89,38 @@ func (h *Hydrator) ProcessAppHydrateQueueItem(origApp *appv1.Application) {
logCtx.Debug("Successfully processed app hydrate queue item")
}
func getHydrationQueueKey(app *appv1.Application) types.HydrationQueueKey {
func getHydrationQueueKey(app *appv1.Application) HydrationQueueKey {
destinationBranch := app.Spec.SourceHydrator.SyncSource.TargetBranch
if app.Spec.SourceHydrator.HydrateTo != nil {
destinationBranch = app.Spec.SourceHydrator.HydrateTo.TargetBranch
}
key := types.HydrationQueueKey{
SourceRepoURL: git.NormalizeGitURLAllowInvalid(app.Spec.SourceHydrator.DrySource.RepoURL),
key := HydrationQueueKey{
SourceRepoURL: app.Spec.SourceHydrator.DrySource.RepoURL,
SourceTargetRevision: app.Spec.SourceHydrator.DrySource.TargetRevision,
DestinationBranch: destinationBranch,
}
return key
}
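Because the queue key is a comparable struct, applications hydrating from the same repo and revision to the same branch collapse into one queue item. A minimal standalone sketch of that dedupe (hypothetical URLs):

package main

import "fmt"

type hydrationQueueKey struct {
	SourceRepoURL        string
	SourceTargetRevision string
	DestinationBranch    string
}

func main() {
	seen := map[hydrationQueueKey]bool{}
	requests := []hydrationQueueKey{
		{"https://example.com/repo.git", "main", "env/prod"},
		{"https://example.com/repo.git", "main", "env/prod"}, // duplicate request
		{"https://example.com/repo.git", "main", "env/dev"},
	}
	enqueued := 0
	for _, k := range requests {
		if !seen[k] {
			seen[k] = true
			enqueued++
		}
	}
	fmt.Println(enqueued) // 2: the duplicate collapsed into one item
}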
// ProcessHydrationQueueItem processes a hydration queue item. It retrieves the relevant applications for the given
// hydration key, hydrates their latest commit, and updates their status accordingly. If the hydration fails, it marks
// the operation as failed and logs the error. If successful, it updates the operation to indicate that hydration was
// successful and requests a refresh of the applications to pick up the new hydrated commit.
func (h *Hydrator) ProcessHydrationQueueItem(hydrationKey types.HydrationQueueKey) (processNext bool) {
type HydrationQueueKey struct {
SourceRepoURL string
SourceTargetRevision string
DestinationBranch string
}
// uniqueHydrationDestination is used to detect duplicate hydration destinations.
type uniqueHydrationDestination struct {
//nolint:unused // used as part of a map key
sourceRepoURL string
//nolint:unused // used as part of a map key
sourceTargetRevision string
//nolint:unused // used as part of a map key
destinationBranch string
//nolint:unused // used as part of a map key
destinationPath string
}
func (h *Hydrator) ProcessHydrationQueueItem(hydrationKey HydrationQueueKey) (processNext bool) {
logCtx := log.WithFields(log.Fields{
"sourceRepoURL": hydrationKey.SourceRepoURL,
"sourceTargetRevision": hydrationKey.SourceTargetRevision,
@@ -166,7 +146,7 @@ func (h *Hydrator) ProcessHydrationQueueItem(hydrationKey types.HydrationQueueKe
logCtx = logCtx.WithFields(applog.GetAppLogFields(app))
logCtx.Errorf("Failed to hydrate app: %v", err)
}
return
return processNext
}
logCtx.WithField("appCount", len(relevantApps)).Debug("Successfully hydrated apps")
finishedAt := metav1.Now()
@@ -194,10 +174,10 @@ func (h *Hydrator) ProcessHydrationQueueItem(hydrationKey types.HydrationQueueKe
logCtx.WithField("app", app.QualifiedName()).WithError(err).Error("Failed to request app refresh after hydration")
}
}
return
return processNext
}
func (h *Hydrator) hydrateAppsLatestCommit(logCtx *log.Entry, hydrationKey types.HydrationQueueKey) ([]*appv1.Application, string, string, error) {
func (h *Hydrator) hydrateAppsLatestCommit(logCtx *log.Entry, hydrationKey HydrationQueueKey) ([]*appv1.Application, string, string, error) {
relevantApps, err := h.getRelevantAppsForHydration(logCtx, hydrationKey)
if err != nil {
return nil, "", "", fmt.Errorf("failed to get relevant apps for hydration: %w", err)
@@ -211,7 +191,7 @@ func (h *Hydrator) hydrateAppsLatestCommit(logCtx *log.Entry, hydrationKey types
return relevantApps, dryRevision, hydratedRevision, nil
}
func (h *Hydrator) getRelevantAppsForHydration(logCtx *log.Entry, hydrationKey types.HydrationQueueKey) ([]*appv1.Application, error) {
func (h *Hydrator) getRelevantAppsForHydration(logCtx *log.Entry, hydrationKey HydrationQueueKey) ([]*appv1.Application, error) {
// Get all apps
apps, err := h.dependencies.GetProcessableApps()
if err != nil {
@@ -219,13 +199,13 @@ func (h *Hydrator) getRelevantAppsForHydration(logCtx *log.Entry, hydrationKey t
}
var relevantApps []*appv1.Application
uniquePaths := make(map[string]bool, len(apps.Items))
uniqueDestinations := make(map[uniqueHydrationDestination]bool, len(apps.Items))
for _, app := range apps.Items {
if app.Spec.SourceHydrator == nil {
continue
}
if !git.SameURL(app.Spec.SourceHydrator.DrySource.RepoURL, hydrationKey.SourceRepoURL) ||
if app.Spec.SourceHydrator.DrySource.RepoURL != hydrationKey.SourceRepoURL ||
app.Spec.SourceHydrator.DrySource.TargetRevision != hydrationKey.SourceTargetRevision {
continue
}
@@ -249,12 +229,17 @@ func (h *Hydrator) getRelevantAppsForHydration(logCtx *log.Entry, hydrationKey t
continue
}
// TODO: test the dupe detection
// TODO: normalize the path so that "path/.." is not treated as different from "."
if _, ok := uniquePaths[app.Spec.SourceHydrator.SyncSource.Path]; ok {
return nil, fmt.Errorf("multiple app hydrators use the same destination: %v", app.Spec.SourceHydrator.SyncSource.Path)
uniqueDestinationKey := uniqueHydrationDestination{
sourceRepoURL: app.Spec.SourceHydrator.DrySource.RepoURL,
sourceTargetRevision: app.Spec.SourceHydrator.DrySource.TargetRevision,
destinationBranch: destinationBranch,
destinationPath: app.Spec.SourceHydrator.SyncSource.Path,
}
uniquePaths[app.Spec.SourceHydrator.SyncSource.Path] = true
// TODO: test the dupe detection
if _, ok := uniqueDestinations[uniqueDestinationKey]; ok {
return nil, fmt.Errorf("multiple app hydrators use the same destination: %v", uniqueDestinationKey)
}
uniqueDestinations[uniqueDestinationKey] = true
relevantApps = append(relevantApps, &app)
}
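The path-normalization TODO above could plausibly be addressed with the standard library's path.Clean; a hedged sketch (the helper name is hypothetical and not part of this change):

import "path"

// canonicalDestPath normalizes a sync-source path so that "app/.." and "."
// collapse to the same uniqueHydrationDestination key.
func canonicalDestPath(p string) string {
	// path.Clean("app/..") == "." and path.Clean("") == ".", so equivalent
	// destinations compare equal after cleaning.
	return path.Clean(p)
}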

View File

@@ -4,14 +4,9 @@ import (
"testing"
"time"
log "github.com/sirupsen/logrus"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/argoproj/argo-cd/v3/controller/hydrator/mocks"
"github.com/argoproj/argo-cd/v3/controller/hydrator/types"
"github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
)
@@ -106,64 +101,3 @@ func Test_appNeedsHydration(t *testing.T) {
})
}
}
func Test_getRelevantAppsForHydration_RepoURLNormalization(t *testing.T) {
t.Parallel()
d := mocks.NewDependencies(t)
d.On("GetProcessableApps").Return(&v1alpha1.ApplicationList{
Items: []v1alpha1.Application{
{
Spec: v1alpha1.ApplicationSpec{
Project: "project",
SourceHydrator: &v1alpha1.SourceHydrator{
DrySource: v1alpha1.DrySource{
RepoURL: "https://example.com/repo.git",
TargetRevision: "main",
Path: "app1",
},
SyncSource: v1alpha1.SyncSource{
TargetBranch: "main",
Path: "app1",
},
},
},
},
{
Spec: v1alpha1.ApplicationSpec{
Project: "project",
SourceHydrator: &v1alpha1.SourceHydrator{
DrySource: v1alpha1.DrySource{
RepoURL: "https://example.com/repo",
TargetRevision: "main",
Path: "app2",
},
SyncSource: v1alpha1.SyncSource{
TargetBranch: "main",
Path: "app2",
},
},
},
},
},
}, nil)
d.On("GetProcessableAppProj", mock.Anything).Return(&v1alpha1.AppProject{
Spec: v1alpha1.AppProjectSpec{
SourceRepos: []string{"https://example.com/*"},
},
}, nil)
hydrator := &Hydrator{dependencies: d}
hydrationKey := types.HydrationQueueKey{
SourceRepoURL: "https://example.com/repo",
SourceTargetRevision: "main",
DestinationBranch: "main",
}
logCtx := log.WithField("test", "RepoURLNormalization")
relevantApps, err := hydrator.getRelevantAppsForHydration(logCtx, hydrationKey)
require.NoError(t, err)
assert.Len(t, relevantApps, 2, "Expected both apps to be considered relevant despite URL differences")
}

View File

@@ -1,464 +0,0 @@
// Code generated by mockery; DO NOT EDIT.
// github.com/vektra/mockery
// template: testify
package mocks
import (
"context"
"github.com/argoproj/argo-cd/v3/controller/hydrator/types"
"github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/v3/reposerver/apiclient"
mock "github.com/stretchr/testify/mock"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)
// NewDependencies creates a new instance of Dependencies. It also registers a testing interface on the mock and a cleanup function to assert the mock's expectations.
// The first argument is typically a *testing.T value.
func NewDependencies(t interface {
mock.TestingT
Cleanup(func())
}) *Dependencies {
mock := &Dependencies{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
// Dependencies is an autogenerated mock type for the Dependencies type
type Dependencies struct {
mock.Mock
}
type Dependencies_Expecter struct {
mock *mock.Mock
}
func (_m *Dependencies) EXPECT() *Dependencies_Expecter {
return &Dependencies_Expecter{mock: &_m.Mock}
}
// AddHydrationQueueItem provides a mock function for the type Dependencies
func (_mock *Dependencies) AddHydrationQueueItem(key types.HydrationQueueKey) {
_mock.Called(key)
return
}
// Dependencies_AddHydrationQueueItem_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'AddHydrationQueueItem'
type Dependencies_AddHydrationQueueItem_Call struct {
*mock.Call
}
// AddHydrationQueueItem is a helper method to define mock.On call
// - key types.HydrationQueueKey
func (_e *Dependencies_Expecter) AddHydrationQueueItem(key interface{}) *Dependencies_AddHydrationQueueItem_Call {
return &Dependencies_AddHydrationQueueItem_Call{Call: _e.mock.On("AddHydrationQueueItem", key)}
}
func (_c *Dependencies_AddHydrationQueueItem_Call) Run(run func(key types.HydrationQueueKey)) *Dependencies_AddHydrationQueueItem_Call {
_c.Call.Run(func(args mock.Arguments) {
var arg0 types.HydrationQueueKey
if args[0] != nil {
arg0 = args[0].(types.HydrationQueueKey)
}
run(
arg0,
)
})
return _c
}
func (_c *Dependencies_AddHydrationQueueItem_Call) Return() *Dependencies_AddHydrationQueueItem_Call {
_c.Call.Return()
return _c
}
func (_c *Dependencies_AddHydrationQueueItem_Call) RunAndReturn(run func(key types.HydrationQueueKey)) *Dependencies_AddHydrationQueueItem_Call {
_c.Run(run)
return _c
}
// GetProcessableAppProj provides a mock function for the type Dependencies
func (_mock *Dependencies) GetProcessableAppProj(app *v1alpha1.Application) (*v1alpha1.AppProject, error) {
ret := _mock.Called(app)
if len(ret) == 0 {
panic("no return value specified for GetProcessableAppProj")
}
var r0 *v1alpha1.AppProject
var r1 error
if returnFunc, ok := ret.Get(0).(func(*v1alpha1.Application) (*v1alpha1.AppProject, error)); ok {
return returnFunc(app)
}
if returnFunc, ok := ret.Get(0).(func(*v1alpha1.Application) *v1alpha1.AppProject); ok {
r0 = returnFunc(app)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*v1alpha1.AppProject)
}
}
if returnFunc, ok := ret.Get(1).(func(*v1alpha1.Application) error); ok {
r1 = returnFunc(app)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// Dependencies_GetProcessableAppProj_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'GetProcessableAppProj'
type Dependencies_GetProcessableAppProj_Call struct {
*mock.Call
}
// GetProcessableAppProj is a helper method to define mock.On call
// - app *v1alpha1.Application
func (_e *Dependencies_Expecter) GetProcessableAppProj(app interface{}) *Dependencies_GetProcessableAppProj_Call {
return &Dependencies_GetProcessableAppProj_Call{Call: _e.mock.On("GetProcessableAppProj", app)}
}
func (_c *Dependencies_GetProcessableAppProj_Call) Run(run func(app *v1alpha1.Application)) *Dependencies_GetProcessableAppProj_Call {
_c.Call.Run(func(args mock.Arguments) {
var arg0 *v1alpha1.Application
if args[0] != nil {
arg0 = args[0].(*v1alpha1.Application)
}
run(
arg0,
)
})
return _c
}
func (_c *Dependencies_GetProcessableAppProj_Call) Return(appProject *v1alpha1.AppProject, err error) *Dependencies_GetProcessableAppProj_Call {
_c.Call.Return(appProject, err)
return _c
}
func (_c *Dependencies_GetProcessableAppProj_Call) RunAndReturn(run func(app *v1alpha1.Application) (*v1alpha1.AppProject, error)) *Dependencies_GetProcessableAppProj_Call {
_c.Call.Return(run)
return _c
}
// GetProcessableApps provides a mock function for the type Dependencies
func (_mock *Dependencies) GetProcessableApps() (*v1alpha1.ApplicationList, error) {
ret := _mock.Called()
if len(ret) == 0 {
panic("no return value specified for GetProcessableApps")
}
var r0 *v1alpha1.ApplicationList
var r1 error
if returnFunc, ok := ret.Get(0).(func() (*v1alpha1.ApplicationList, error)); ok {
return returnFunc()
}
if returnFunc, ok := ret.Get(0).(func() *v1alpha1.ApplicationList); ok {
r0 = returnFunc()
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*v1alpha1.ApplicationList)
}
}
if returnFunc, ok := ret.Get(1).(func() error); ok {
r1 = returnFunc()
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// Dependencies_GetProcessableApps_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'GetProcessableApps'
type Dependencies_GetProcessableApps_Call struct {
*mock.Call
}
// GetProcessableApps is a helper method to define mock.On call
func (_e *Dependencies_Expecter) GetProcessableApps() *Dependencies_GetProcessableApps_Call {
return &Dependencies_GetProcessableApps_Call{Call: _e.mock.On("GetProcessableApps")}
}
func (_c *Dependencies_GetProcessableApps_Call) Run(run func()) *Dependencies_GetProcessableApps_Call {
_c.Call.Run(func(args mock.Arguments) {
run()
})
return _c
}
func (_c *Dependencies_GetProcessableApps_Call) Return(applicationList *v1alpha1.ApplicationList, err error) *Dependencies_GetProcessableApps_Call {
_c.Call.Return(applicationList, err)
return _c
}
func (_c *Dependencies_GetProcessableApps_Call) RunAndReturn(run func() (*v1alpha1.ApplicationList, error)) *Dependencies_GetProcessableApps_Call {
_c.Call.Return(run)
return _c
}
// GetRepoObjs provides a mock function for the type Dependencies
func (_mock *Dependencies) GetRepoObjs(app *v1alpha1.Application, source v1alpha1.ApplicationSource, revision string, project *v1alpha1.AppProject) ([]*unstructured.Unstructured, *apiclient.ManifestResponse, error) {
ret := _mock.Called(app, source, revision, project)
if len(ret) == 0 {
panic("no return value specified for GetRepoObjs")
}
var r0 []*unstructured.Unstructured
var r1 *apiclient.ManifestResponse
var r2 error
if returnFunc, ok := ret.Get(0).(func(*v1alpha1.Application, v1alpha1.ApplicationSource, string, *v1alpha1.AppProject) ([]*unstructured.Unstructured, *apiclient.ManifestResponse, error)); ok {
return returnFunc(app, source, revision, project)
}
if returnFunc, ok := ret.Get(0).(func(*v1alpha1.Application, v1alpha1.ApplicationSource, string, *v1alpha1.AppProject) []*unstructured.Unstructured); ok {
r0 = returnFunc(app, source, revision, project)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).([]*unstructured.Unstructured)
}
}
if returnFunc, ok := ret.Get(1).(func(*v1alpha1.Application, v1alpha1.ApplicationSource, string, *v1alpha1.AppProject) *apiclient.ManifestResponse); ok {
r1 = returnFunc(app, source, revision, project)
} else {
if ret.Get(1) != nil {
r1 = ret.Get(1).(*apiclient.ManifestResponse)
}
}
if returnFunc, ok := ret.Get(2).(func(*v1alpha1.Application, v1alpha1.ApplicationSource, string, *v1alpha1.AppProject) error); ok {
r2 = returnFunc(app, source, revision, project)
} else {
r2 = ret.Error(2)
}
return r0, r1, r2
}
// Dependencies_GetRepoObjs_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'GetRepoObjs'
type Dependencies_GetRepoObjs_Call struct {
*mock.Call
}
// GetRepoObjs is a helper method to define mock.On call
// - app *v1alpha1.Application
// - source v1alpha1.ApplicationSource
// - revision string
// - project *v1alpha1.AppProject
func (_e *Dependencies_Expecter) GetRepoObjs(app interface{}, source interface{}, revision interface{}, project interface{}) *Dependencies_GetRepoObjs_Call {
return &Dependencies_GetRepoObjs_Call{Call: _e.mock.On("GetRepoObjs", app, source, revision, project)}
}
func (_c *Dependencies_GetRepoObjs_Call) Run(run func(app *v1alpha1.Application, source v1alpha1.ApplicationSource, revision string, project *v1alpha1.AppProject)) *Dependencies_GetRepoObjs_Call {
_c.Call.Run(func(args mock.Arguments) {
var arg0 *v1alpha1.Application
if args[0] != nil {
arg0 = args[0].(*v1alpha1.Application)
}
var arg1 v1alpha1.ApplicationSource
if args[1] != nil {
arg1 = args[1].(v1alpha1.ApplicationSource)
}
var arg2 string
if args[2] != nil {
arg2 = args[2].(string)
}
var arg3 *v1alpha1.AppProject
if args[3] != nil {
arg3 = args[3].(*v1alpha1.AppProject)
}
run(
arg0,
arg1,
arg2,
arg3,
)
})
return _c
}
func (_c *Dependencies_GetRepoObjs_Call) Return(unstructureds []*unstructured.Unstructured, manifestResponse *apiclient.ManifestResponse, err error) *Dependencies_GetRepoObjs_Call {
_c.Call.Return(unstructureds, manifestResponse, err)
return _c
}
func (_c *Dependencies_GetRepoObjs_Call) RunAndReturn(run func(app *v1alpha1.Application, source v1alpha1.ApplicationSource, revision string, project *v1alpha1.AppProject) ([]*unstructured.Unstructured, *apiclient.ManifestResponse, error)) *Dependencies_GetRepoObjs_Call {
_c.Call.Return(run)
return _c
}
// GetWriteCredentials provides a mock function for the type Dependencies
func (_mock *Dependencies) GetWriteCredentials(ctx context.Context, repoURL string, project string) (*v1alpha1.Repository, error) {
ret := _mock.Called(ctx, repoURL, project)
if len(ret) == 0 {
panic("no return value specified for GetWriteCredentials")
}
var r0 *v1alpha1.Repository
var r1 error
if returnFunc, ok := ret.Get(0).(func(context.Context, string, string) (*v1alpha1.Repository, error)); ok {
return returnFunc(ctx, repoURL, project)
}
if returnFunc, ok := ret.Get(0).(func(context.Context, string, string) *v1alpha1.Repository); ok {
r0 = returnFunc(ctx, repoURL, project)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*v1alpha1.Repository)
}
}
if returnFunc, ok := ret.Get(1).(func(context.Context, string, string) error); ok {
r1 = returnFunc(ctx, repoURL, project)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// Dependencies_GetWriteCredentials_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'GetWriteCredentials'
type Dependencies_GetWriteCredentials_Call struct {
*mock.Call
}
// GetWriteCredentials is a helper method to define mock.On call
// - ctx context.Context
// - repoURL string
// - project string
func (_e *Dependencies_Expecter) GetWriteCredentials(ctx interface{}, repoURL interface{}, project interface{}) *Dependencies_GetWriteCredentials_Call {
return &Dependencies_GetWriteCredentials_Call{Call: _e.mock.On("GetWriteCredentials", ctx, repoURL, project)}
}
func (_c *Dependencies_GetWriteCredentials_Call) Run(run func(ctx context.Context, repoURL string, project string)) *Dependencies_GetWriteCredentials_Call {
_c.Call.Run(func(args mock.Arguments) {
var arg0 context.Context
if args[0] != nil {
arg0 = args[0].(context.Context)
}
var arg1 string
if args[1] != nil {
arg1 = args[1].(string)
}
var arg2 string
if args[2] != nil {
arg2 = args[2].(string)
}
run(
arg0,
arg1,
arg2,
)
})
return _c
}
func (_c *Dependencies_GetWriteCredentials_Call) Return(repository *v1alpha1.Repository, err error) *Dependencies_GetWriteCredentials_Call {
_c.Call.Return(repository, err)
return _c
}
func (_c *Dependencies_GetWriteCredentials_Call) RunAndReturn(run func(ctx context.Context, repoURL string, project string) (*v1alpha1.Repository, error)) *Dependencies_GetWriteCredentials_Call {
_c.Call.Return(run)
return _c
}
// PersistAppHydratorStatus provides a mock function for the type Dependencies
func (_mock *Dependencies) PersistAppHydratorStatus(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
_mock.Called(orig, newStatus)
return
}
// Dependencies_PersistAppHydratorStatus_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'PersistAppHydratorStatus'
type Dependencies_PersistAppHydratorStatus_Call struct {
*mock.Call
}
// PersistAppHydratorStatus is a helper method to define mock.On call
// - orig *v1alpha1.Application
// - newStatus *v1alpha1.SourceHydratorStatus
func (_e *Dependencies_Expecter) PersistAppHydratorStatus(orig interface{}, newStatus interface{}) *Dependencies_PersistAppHydratorStatus_Call {
return &Dependencies_PersistAppHydratorStatus_Call{Call: _e.mock.On("PersistAppHydratorStatus", orig, newStatus)}
}
func (_c *Dependencies_PersistAppHydratorStatus_Call) Run(run func(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus)) *Dependencies_PersistAppHydratorStatus_Call {
_c.Call.Run(func(args mock.Arguments) {
var arg0 *v1alpha1.Application
if args[0] != nil {
arg0 = args[0].(*v1alpha1.Application)
}
var arg1 *v1alpha1.SourceHydratorStatus
if args[1] != nil {
arg1 = args[1].(*v1alpha1.SourceHydratorStatus)
}
run(
arg0,
arg1,
)
})
return _c
}
func (_c *Dependencies_PersistAppHydratorStatus_Call) Return() *Dependencies_PersistAppHydratorStatus_Call {
_c.Call.Return()
return _c
}
func (_c *Dependencies_PersistAppHydratorStatus_Call) RunAndReturn(run func(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus)) *Dependencies_PersistAppHydratorStatus_Call {
_c.Run(run)
return _c
}
// RequestAppRefresh provides a mock function for the type Dependencies
func (_mock *Dependencies) RequestAppRefresh(appName string, appNamespace string) error {
ret := _mock.Called(appName, appNamespace)
if len(ret) == 0 {
panic("no return value specified for RequestAppRefresh")
}
var r0 error
if returnFunc, ok := ret.Get(0).(func(string, string) error); ok {
r0 = returnFunc(appName, appNamespace)
} else {
r0 = ret.Error(0)
}
return r0
}
// Dependencies_RequestAppRefresh_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'RequestAppRefresh'
type Dependencies_RequestAppRefresh_Call struct {
*mock.Call
}
// RequestAppRefresh is a helper method to define mock.On call
// - appName string
// - appNamespace string
func (_e *Dependencies_Expecter) RequestAppRefresh(appName interface{}, appNamespace interface{}) *Dependencies_RequestAppRefresh_Call {
return &Dependencies_RequestAppRefresh_Call{Call: _e.mock.On("RequestAppRefresh", appName, appNamespace)}
}
func (_c *Dependencies_RequestAppRefresh_Call) Run(run func(appName string, appNamespace string)) *Dependencies_RequestAppRefresh_Call {
_c.Call.Run(func(args mock.Arguments) {
var arg0 string
if args[0] != nil {
arg0 = args[0].(string)
}
var arg1 string
if args[1] != nil {
arg1 = args[1].(string)
}
run(
arg0,
arg1,
)
})
return _c
}
func (_c *Dependencies_RequestAppRefresh_Call) Return(err error) *Dependencies_RequestAppRefresh_Call {
_c.Call.Return(err)
return _c
}
func (_c *Dependencies_RequestAppRefresh_Call) RunAndReturn(run func(appName string, appNamespace string) error) *Dependencies_RequestAppRefresh_Call {
_c.Call.Return(run)
return _c
}
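The generated expecter API above gives type-safe expectations in place of string-based mock.On calls; a hedged usage sketch (the test itself is hypothetical, but the calls are the ones generated above):

import (
	"testing"

	"github.com/argoproj/argo-cd/v3/controller/hydrator/mocks"
	"github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
)

func TestDependenciesExpecterSketch(t *testing.T) {
	d := mocks.NewDependencies(t) // registers a cleanup that asserts expectations
	// Type-safe alternative to d.On("GetProcessableApps").Return(...).
	d.EXPECT().GetProcessableApps().Return(&v1alpha1.ApplicationList{}, nil)

	apps, err := d.GetProcessableApps()
	if err != nil || len(apps.Items) != 0 {
		t.Fatalf("unexpected result: %v, %v", apps, err)
	}
}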

View File

@@ -1,11 +0,0 @@
package types
// HydrationQueueKey is used to uniquely identify a hydration operation in the queue. If several applications request
// hydration, but they have the same queue key, only one hydration operation will be performed.
type HydrationQueueKey struct {
// SourceRepoURL must be normalized with git.NormalizeGitURL to ensure that we don't double-queue a single hydration
// operation because two apps have different URL formats.
SourceRepoURL string
SourceTargetRevision string
DestinationBranch string
}
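As the comment above requires, callers normalize the repo URL before building the key, so that spelling variants of one repository collapse into a single hydration. A hedged sketch (URLs hypothetical, and assuming git.NormalizeGitURL collapses such variants, which is the point of the comment above):

import (
	"github.com/argoproj/argo-cd/v3/controller/hydrator/types"
	"github.com/argoproj/argo-cd/v3/util/git"
)

// keysCollapse shows that two spellings of one repo should produce equal keys
// once normalized, so both apps share a single queue entry.
func keysCollapse() bool {
	k1 := types.HydrationQueueKey{
		SourceRepoURL:        git.NormalizeGitURL("https://example.com/repo.git"),
		SourceTargetRevision: "main",
		DestinationBranch:    "env/prod",
	}
	k2 := types.HydrationQueueKey{
		SourceRepoURL:        git.NormalizeGitURL("https://example.com/repo"),
		SourceTargetRevision: "main",
		DestinationBranch:    "env/prod",
	}
	return k1 == k2 // expected: true
}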

View File

@@ -4,7 +4,7 @@ import (
"context"
"fmt"
"github.com/argoproj/argo-cd/v3/controller/hydrator/types"
"github.com/argoproj/argo-cd/v3/controller/hydrator"
appv1 "github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/v3/reposerver/apiclient"
argoutil "github.com/argoproj/argo-cd/v3/util/argo"
@@ -50,19 +50,10 @@ func (ctrl *ApplicationController) GetRepoObjs(origApp *appv1.Application, drySo
delete(app.Annotations, appv1.AnnotationKeyManifestGeneratePaths)
// FIXME: use cache and revision cache
objs, resp, _, err := ctrl.appStateManager.GetRepoObjs(app, drySources, appLabelKey, dryRevisions, true, true, false, project, false)
objs, resp, _, err := ctrl.appStateManager.GetRepoObjs(app, drySources, appLabelKey, dryRevisions, true, true, false, project, false, false)
if err != nil {
return nil, nil, fmt.Errorf("failed to get repo objects: %w", err)
}
trackingMethod, err := ctrl.settingsMgr.GetTrackingMethod()
if err != nil {
return nil, nil, fmt.Errorf("failed to get tracking method: %w", err)
}
for _, obj := range objs {
if err := argoutil.NewResourceTracking().RemoveAppInstance(obj, trackingMethod); err != nil {
return nil, nil, fmt.Errorf("failed to remove the app instance value: %w", err)
}
}
if len(resp) != 1 {
return nil, nil, fmt.Errorf("expected one manifest response, got %d", len(resp))
@@ -94,6 +85,6 @@ func (ctrl *ApplicationController) PersistAppHydratorStatus(orig *appv1.Applicat
ctrl.persistAppStatus(orig, status)
}
func (ctrl *ApplicationController) AddHydrationQueueItem(key types.HydrationQueueKey) {
func (ctrl *ApplicationController) AddHydrationQueueItem(key hydrator.HydrationQueueKey) {
ctrl.hydrationQueue.AddRateLimited(key)
}
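AddRateLimited suggests hydrationQueue is a client-go rate-limited workqueue; a hedged sketch of the consuming worker loop (the structure and the ctrl.hydrator field are assumptions, not shown in this diff):

// runHydrationWorker is a hypothetical consumer for the queue fed above.
func (ctrl *ApplicationController) runHydrationWorker() {
	for {
		key, shutdown := ctrl.hydrationQueue.Get()
		if shutdown {
			return
		}
		// ProcessHydrationQueueItem reports whether the worker should keep going.
		processNext := ctrl.hydrator.ProcessHydrationQueueItem(key)
		ctrl.hydrationQueue.Done(key)
		if !processNext {
			return
		}
	}
}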

View File

@@ -1,79 +0,0 @@
package controller
import (
"encoding/json"
"errors"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/argoproj/argo-cd/v3/common"
"github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/v3/reposerver/apiclient"
"github.com/argoproj/argo-cd/v3/test"
)
func TestGetRepoObjs(t *testing.T) {
cm := test.NewConfigMap()
cm.SetAnnotations(map[string]string{
"custom-annotation": "custom-value",
common.AnnotationInstallationID: "id", // tracking annotation should be removed
common.AnnotationKeyAppInstance: "my-app", // tracking annotation should be removed
})
cmBytes, _ := json.Marshal(cm)
app := newFakeApp()
// Enable the manifest-generate-paths annotation and set a synced revision
app.SetAnnotations(map[string]string{v1alpha1.AnnotationKeyManifestGeneratePaths: "."})
app.Status.Sync = v1alpha1.SyncStatus{
Revision: "abc123",
Status: v1alpha1.SyncStatusCodeSynced,
}
data := fakeData{
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{string(cmBytes)},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
Revision: "abc123",
},
}
ctrl := newFakeControllerWithResync(&data, time.Minute, nil, errors.New("this should not be called"))
source := app.Spec.GetSource()
source.RepoURL = "oci://example.com/argo/argo-cd"
objs, resp, err := ctrl.GetRepoObjs(app, source, "abc123", &v1alpha1.AppProject{
ObjectMeta: metav1.ObjectMeta{
Name: "default",
Namespace: test.FakeArgoCDNamespace,
},
Spec: v1alpha1.AppProjectSpec{
SourceRepos: []string{"*"},
Destinations: []v1alpha1.ApplicationDestination{
{
Server: "*",
Namespace: "*",
},
},
},
})
require.NoError(t, err)
assert.NotNil(t, resp)
assert.Equal(t, "abc123", resp.Revision)
assert.Len(t, objs, 1)
annotations := objs[0].GetAnnotations()
// only the tracking annotations set by Argo CD should be removed,
// not the custom annotations set by the user
require.NotNil(t, annotations)
assert.Equal(t, "custom-value", annotations["custom-annotation"])
assert.NotContains(t, annotations, common.AnnotationInstallationID)
assert.NotContains(t, annotations, common.AnnotationKeyAppInstance)
assert.Equal(t, "ConfigMap", objs[0].GetKind())
}

View File

@@ -37,7 +37,8 @@ type Consistent struct {
loadMap map[string]*Host
totalLoad int64
replicationFactor int
lock sync.RWMutex
sync.RWMutex
}
type item struct {
@@ -67,8 +68,8 @@ func NewWithReplicationFactor(replicationFactor int) *Consistent {
}
func (c *Consistent) Add(server string) {
c.lock.Lock()
defer c.lock.Unlock()
c.Lock()
defer c.Unlock()
if _, ok := c.loadMap[server]; ok {
return
@@ -86,8 +87,8 @@ func (c *Consistent) Add(server string) {
// As described in https://en.wikipedia.org/wiki/Consistent_hashing
// It returns ErrNoHosts if the ring has no servers in it.
func (c *Consistent) Get(client string) (string, error) {
c.lock.RLock()
defer c.lock.RUnlock()
c.RLock()
defer c.RUnlock()
if c.clients.Len() == 0 {
return "", ErrNoHosts
@@ -115,8 +116,8 @@ func (c *Consistent) Get(client string) (string, error) {
// https://research.googleblog.com/2017/04/consistent-hashing-with-bounded-loads.html
// It returns ErrNoHosts if the ring has no hosts in it.
func (c *Consistent) GetLeast(client string) (string, error) {
c.lock.RLock()
defer c.lock.RUnlock()
c.RLock()
defer c.RUnlock()
if c.clients.Len() == 0 {
return "", ErrNoHosts
@@ -150,8 +151,8 @@ func (c *Consistent) GetLeast(client string) (string, error) {
// Sets the load of `server` to the given `load`
func (c *Consistent) UpdateLoad(server string, load int64) {
c.lock.Lock()
defer c.lock.Unlock()
c.Lock()
defer c.Unlock()
if _, ok := c.loadMap[server]; !ok {
return
@@ -165,8 +166,8 @@ func (c *Consistent) UpdateLoad(server string, load int64) {
//
// should only be used if you obtained a host with GetLeast
func (c *Consistent) Inc(server string) {
c.lock.Lock()
defer c.lock.Unlock()
c.Lock()
defer c.Unlock()
if _, ok := c.loadMap[server]; !ok {
return
@@ -179,8 +180,8 @@ func (c *Consistent) Inc(server string) {
//
// should only be used if you obtained a host with GetLeast
func (c *Consistent) Done(server string) {
c.lock.Lock()
defer c.lock.Unlock()
c.Lock()
defer c.Unlock()
if _, ok := c.loadMap[server]; !ok {
return
@@ -191,8 +192,8 @@ func (c *Consistent) Done(server string) {
// Deletes host from the ring
func (c *Consistent) Remove(server string) bool {
c.lock.Lock()
defer c.lock.Unlock()
c.Lock()
defer c.Unlock()
for i := 0; i < c.replicationFactor; i++ {
h := c.hash(fmt.Sprintf("%s%d", server, i))
@@ -205,8 +206,8 @@ func (c *Consistent) Remove(server string) bool {
// Returns the list of servers in the ring
func (c *Consistent) Servers() (servers []string) {
c.lock.RLock()
defer c.lock.RUnlock()
c.RLock()
defer c.RUnlock()
for k := range c.loadMap {
servers = append(servers, k)
}
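Taken together, the methods above form a bounded-load consistent-hashing API; a hedged sketch of the intended call sequence (the helper and its names are hypothetical):

// pickHost is a hypothetical helper showing the intended call order.
func pickHost(c *Consistent, clientKey string) (string, error) {
	host, err := c.GetLeast(clientKey) // least-loaded host per the bounded-loads variant
	if err != nil {
		return "", err // ErrNoHosts when no server has been Add()ed to the ring
	}
	c.Inc(host) // count the in-flight assignment; call c.Done(host) when finished
	return host, nil
}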

View File

@@ -27,6 +27,7 @@ import (
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/tools/cache"
"github.com/argoproj/argo-cd/v3/common"
statecache "github.com/argoproj/argo-cd/v3/controller/cache"
@@ -70,9 +71,9 @@ type managedResource struct {
// AppStateManager defines methods for comparing the application spec with the actual application state.
type AppStateManager interface {
CompareAppState(app *v1alpha1.Application, project *v1alpha1.AppProject, revisions []string, sources []v1alpha1.ApplicationSource, noCache bool, noRevisionCache bool, localObjects []string, hasMultipleSources bool) (*comparisonResult, error)
SyncAppState(app *v1alpha1.Application, project *v1alpha1.AppProject, state *v1alpha1.OperationState)
GetRepoObjs(app *v1alpha1.Application, sources []v1alpha1.ApplicationSource, appLabelKey string, revisions []string, noCache, noRevisionCache, verifySignature bool, proj *v1alpha1.AppProject, sendRuntimeState bool) ([]*unstructured.Unstructured, []*apiclient.ManifestResponse, bool, error)
CompareAppState(app *v1alpha1.Application, project *v1alpha1.AppProject, revisions []string, sources []v1alpha1.ApplicationSource, noCache bool, noRevisionCache bool, localObjects []string, hasMultipleSources bool, rollback bool) (*comparisonResult, error)
SyncAppState(app *v1alpha1.Application, state *v1alpha1.OperationState)
GetRepoObjs(app *v1alpha1.Application, sources []v1alpha1.ApplicationSource, appLabelKey string, revisions []string, noCache, noRevisionCache, verifySignature bool, proj *v1alpha1.AppProject, rollback, sendRuntimeState bool) ([]*unstructured.Unstructured, []*apiclient.ManifestResponse, bool, error)
}
// comparisonResult holds the state of an application after the reconciliation
@@ -90,8 +91,7 @@ type comparisonResult struct {
timings map[string]time.Duration
diffResultList *diff.DiffResultList
hasPostDeleteHooks bool
// revisionsMayHaveChanges indicates whether the revisions may contain changes
revisionsMayHaveChanges bool
revisionUpdated bool
}
func (res *comparisonResult) GetSyncStatus() *v1alpha1.SyncStatus {
@@ -108,6 +108,7 @@ type appStateManager struct {
db db.ArgoDB
settingsMgr *settings.SettingsManager
appclientset appclientset.Interface
projInformer cache.SharedIndexInformer
kubectl kubeutil.Kubectl
onKubectlRun kubeutil.OnKubectlRunFunc
repoClientset apiclient.Clientset
@@ -127,7 +128,7 @@ type appStateManager struct {
// task to the repo-server. It returns the list of generated manifests as unstructured
// objects. It also returns the full response from all calls to the repo server as the
// second argument.
func (m *appStateManager) GetRepoObjs(app *v1alpha1.Application, sources []v1alpha1.ApplicationSource, appLabelKey string, revisions []string, noCache, noRevisionCache, verifySignature bool, proj *v1alpha1.AppProject, sendRuntimeState bool) ([]*unstructured.Unstructured, []*apiclient.ManifestResponse, bool, error) {
func (m *appStateManager) GetRepoObjs(app *v1alpha1.Application, sources []v1alpha1.ApplicationSource, appLabelKey string, revisions []string, noCache, noRevisionCache, verifySignature bool, proj *v1alpha1.AppProject, rollback, sendRuntimeState bool) ([]*unstructured.Unstructured, []*apiclient.ManifestResponse, bool, error) {
ts := stats.NewTimingStats()
helmRepos, err := m.db.ListHelmRepositories(context.Background())
if err != nil {
@@ -218,12 +219,14 @@ func (m *appStateManager) GetRepoObjs(app *v1alpha1.Application, sources []v1alp
// Store all sources that have a ref field into a map, for applications using the sources field
// If it's for a rollback process, the refSources[*].targetRevision fields are the desired
// revisions for the rollback
refSources, err := argo.GetRefSources(context.Background(), sources, app.Spec.Project, m.db.GetRepository, revisions)
refSources, err := argo.GetRefSources(context.Background(), sources, app.Spec.Project, m.db.GetRepository, revisions, rollback)
if err != nil {
return nil, nil, false, fmt.Errorf("failed to get ref sources: %w", err)
}
revisionsMayHaveChanges := false
revisionUpdated := false
atLeastOneRevisionIsNotPossibleToBeUpdated := false
keyManifestGenerateAnnotationVal, keyManifestGenerateAnnotationExists := app.Annotations[v1alpha1.AnnotationKeyManifestGeneratePaths]
@@ -235,6 +238,10 @@ func (m *appStateManager) GetRepoObjs(app *v1alpha1.Application, sources []v1alp
if err != nil {
return nil, nil, false, fmt.Errorf("failed to get repo %q: %w", source.RepoURL, err)
}
kustomizeOptions, err := kustomizeSettings.GetOptions(source)
if err != nil {
return nil, nil, false, fmt.Errorf("failed to get Kustomize options for source %d of %d: %w", i+1, len(sources), err)
}
syncedRevision := app.Status.Sync.Revision
if app.Spec.HasMultipleSources() {
@@ -260,7 +267,7 @@ func (m *appStateManager) GetRepoObjs(app *v1alpha1.Application, sources []v1alp
Revision: revision,
SyncedRevision: syncedRevision,
NoRevisionCache: noRevisionCache,
Paths: path.GetAppRefreshPaths(app),
Paths: path.GetSourceRefreshPaths(app, source),
AppLabelKey: appLabelKey,
AppName: app.InstanceName(m.namespace),
Namespace: appNamespace,
@@ -276,7 +283,7 @@ func (m *appStateManager) GetRepoObjs(app *v1alpha1.Application, sources []v1alp
return nil, nil, false, fmt.Errorf("failed to compare revisions for source %d of %d: %w", i+1, len(sources), err)
}
if updateRevisionResult.Changes {
revisionsMayHaveChanges = true
revisionUpdated = true
}
// Manifest generation should use the same revision as updateRevisionForPaths, because the HEAD revision may differ between these two calls
@@ -284,8 +291,8 @@ func (m *appStateManager) GetRepoObjs(app *v1alpha1.Application, sources []v1alp
revision = updateRevisionResult.Revision
}
} else {
// revisionsMayHaveChanges is set to true if at least one revision is not possible to be updated
revisionsMayHaveChanges = true
// Record that this revision cannot be updated; revisionUpdated is forced to true for this case below.
atLeastOneRevisionIsNotPossibleToBeUpdated = true
}
repos := permittedHelmRepos
@@ -311,7 +318,7 @@ func (m *appStateManager) GetRepoObjs(app *v1alpha1.Application, sources []v1alp
AppName: app.InstanceName(m.namespace),
Namespace: appNamespace,
ApplicationSource: &source,
KustomizeOptions: kustomizeSettings,
KustomizeOptions: kustomizeOptions,
KubeVersion: serverVersion,
ApiVersions: apiVersions,
VerifySignature: verifySignature,
@@ -346,7 +353,13 @@ func (m *appStateManager) GetRepoObjs(app *v1alpha1.Application, sources []v1alp
logCtx = logCtx.WithField("time_ms", time.Since(ts.StartTime).Milliseconds())
logCtx.Info("GetRepoObjs stats")
return targetObjs, manifestInfos, revisionsMayHaveChanges, nil
// If a revision in any of the sources cannot be updated,
// we should trigger self-healing whenever there are changes to the manifests.
if atLeastOneRevisionIsNotPossibleToBeUpdated {
revisionUpdated = true
}
return targetObjs, manifestInfos, revisionUpdated, nil
}
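The flag plumbing above reduces to a simple rule; a hedged distillation (the helper is hypothetical):

// effectiveRevisionUpdated mirrors the logic above: a source whose revision
// cannot be re-resolved is treated as potentially changed, so self-healing
// still triggers when the manifests differ.
func effectiveRevisionUpdated(anyRevisionChanged, anyRevisionNotUpdatable bool) bool {
	return anyRevisionChanged || anyRevisionNotUpdatable
}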
// ResolveGitRevision will resolve the given revision to a full commit SHA. Only works for git.
@@ -529,37 +542,32 @@ func isManagedNamespace(ns *unstructured.Unstructured, app *v1alpha1.Application
// CompareAppState compares application git state to the live app state, using the specified
// revision and supplied source. If the revision or overrides are empty, it compares against the
// revision and overrides in the app spec.
func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1alpha1.AppProject, revisions []string, sources []v1alpha1.ApplicationSource, noCache bool, noRevisionCache bool, localManifests []string, hasMultipleSources bool) (*comparisonResult, error) {
func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1alpha1.AppProject, revisions []string, sources []v1alpha1.ApplicationSource, noCache bool, noRevisionCache bool, localManifests []string, hasMultipleSources bool, rollback bool) (*comparisonResult, error) {
ts := stats.NewTimingStats()
logCtx := log.WithFields(applog.GetAppLogFields(app))
// Build initial sync status
syncStatus := &v1alpha1.SyncStatus{
ComparedTo: v1alpha1.ComparedTo{
Destination: app.Spec.Destination,
IgnoreDifferences: app.Spec.IgnoreDifferences,
},
Status: v1alpha1.SyncStatusCodeUnknown,
}
if hasMultipleSources {
syncStatus.ComparedTo.Sources = sources
syncStatus.Revisions = revisions
} else {
if len(sources) > 0 {
syncStatus.ComparedTo.Source = sources[0]
} else {
logCtx.Warn("CompareAppState: sources should not be empty")
}
if len(revisions) > 0 {
syncStatus.Revision = revisions[0]
}
}
appLabelKey, resourceOverrides, resFilter, installationID, trackingMethod, err := m.getComparisonSettings()
ts.AddCheckpoint("settings_ms")
// return unknown comparison result if basic comparison settings cannot be loaded
if err != nil {
// return unknown comparison result if basic comparison settings cannot be loaded
return &comparisonResult{syncStatus: syncStatus, healthStatus: health.HealthStatusUnknown}, nil
if hasMultipleSources {
return &comparisonResult{
syncStatus: &v1alpha1.SyncStatus{
ComparedTo: app.Spec.BuildComparedToStatus(),
Status: v1alpha1.SyncStatusCodeUnknown,
Revisions: revisions,
},
healthStatus: health.HealthStatusUnknown,
}, nil
}
return &comparisonResult{
syncStatus: &v1alpha1.SyncStatus{
ComparedTo: app.Spec.BuildComparedToStatus(),
Status: v1alpha1.SyncStatusCodeUnknown,
Revision: revisions[0],
},
healthStatus: health.HealthStatusUnknown,
}, nil
}
// When signature keys are defined in the project spec, we need to verify the signature on the Git revision
@@ -574,6 +582,7 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1
return nil, err
}
logCtx := log.WithFields(applog.GetAppLogFields(app))
logCtx.Infof("Comparing app state (cluster: %s, namespace: %s)", app.Spec.Destination.Server, app.Spec.Destination.Namespace)
var targetObjs []*unstructured.Unstructured
@@ -582,7 +591,7 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1
var manifestInfos []*apiclient.ManifestResponse
targetNsExists := false
var revisionsMayHaveChanges bool
var revisionUpdated bool
if len(localManifests) == 0 {
// If the length of revisions is not same as the length of sources,
@@ -594,7 +603,7 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1
}
}
targetObjs, manifestInfos, revisionsMayHaveChanges, err = m.GetRepoObjs(app, sources, appLabelKey, revisions, noCache, noRevisionCache, verifySignature, project, true)
targetObjs, manifestInfos, revisionUpdated, err = m.GetRepoObjs(app, sources, appLabelKey, revisions, noCache, noRevisionCache, verifySignature, project, rollback, true)
if err != nil {
targetObjs = make([]*unstructured.Unstructured, 0)
msg := "Failed to load target state: " + err.Error()
@@ -942,14 +951,32 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1
} else if app.HasChangedManagedNamespaceMetadata() {
syncCode = v1alpha1.SyncStatusCodeOutOfSync
}
var revision string
syncStatus.Status = syncCode
// Update the initial revision to the resolved manifest SHA
if !hasMultipleSources && len(manifestRevisions) > 0 {
revision = manifestRevisions[0]
}
var syncStatus v1alpha1.SyncStatus
if hasMultipleSources {
syncStatus.Revisions = manifestRevisions
} else if len(manifestRevisions) > 0 {
syncStatus.Revision = manifestRevisions[0]
syncStatus = v1alpha1.SyncStatus{
ComparedTo: v1alpha1.ComparedTo{
Destination: app.Spec.Destination,
Sources: sources,
IgnoreDifferences: app.Spec.IgnoreDifferences,
},
Status: syncCode,
Revisions: manifestRevisions,
}
} else {
syncStatus = v1alpha1.SyncStatus{
ComparedTo: v1alpha1.ComparedTo{
Destination: app.Spec.Destination,
Source: app.Spec.GetSource(),
IgnoreDifferences: app.Spec.IgnoreDifferences,
},
Status: syncCode,
Revision: revision,
}
}
ts.AddCheckpoint("sync_ms")
@@ -969,15 +996,15 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1
}
compRes := comparisonResult{
syncStatus: syncStatus,
healthStatus: healthStatus,
resources: resourceSummaries,
managedResources: managedResources,
reconciliationResult: reconciliation,
diffConfig: diffConfig,
diffResultList: diffResults,
hasPostDeleteHooks: hasPostDeleteHooks,
revisionsMayHaveChanges: revisionsMayHaveChanges,
syncStatus: &syncStatus,
healthStatus: healthStatus,
resources: resourceSummaries,
managedResources: managedResources,
reconciliationResult: reconciliation,
diffConfig: diffConfig,
diffResultList: diffResults,
hasPostDeleteHooks: hasPostDeleteHooks,
revisionUpdated: revisionUpdated,
}
if hasMultipleSources {
@@ -1035,7 +1062,7 @@ func useDiffCache(noCache bool, manifestInfos []*apiclient.ManifestResponse, sou
return false
}
if !specEqualsCompareTo(app.Spec, sources, app.Status.Sync.ComparedTo) {
if !specEqualsCompareTo(app.Spec, app.Status.Sync.ComparedTo) {
log.WithField("useDiffCache", "false").Debug("specChanged")
return false
}
@@ -1046,11 +1073,11 @@ func useDiffCache(noCache bool, manifestInfos []*apiclient.ManifestResponse, sou
// specEqualsCompareTo compares the application spec to the comparedTo status. It normalizes the destination to match
// the comparedTo destination before comparing. It does not mutate the original spec or comparedTo.
func specEqualsCompareTo(spec v1alpha1.ApplicationSpec, sources []v1alpha1.ApplicationSource, comparedTo v1alpha1.ComparedTo) bool {
func specEqualsCompareTo(spec v1alpha1.ApplicationSpec, comparedTo v1alpha1.ComparedTo) bool {
// Make a copy to be sure we don't mutate the original.
specCopy := spec.DeepCopy()
compareToSpec := specCopy.BuildComparedToStatus(sources)
return reflect.DeepEqual(comparedTo, compareToSpec)
currentSpec := specCopy.BuildComparedToStatus()
return reflect.DeepEqual(comparedTo, currentSpec)
}
func (m *appStateManager) persistRevisionHistory(
@@ -1112,6 +1139,7 @@ func NewAppStateManager(
onKubectlRun kubeutil.OnKubectlRunFunc,
settingsMgr *settings.SettingsManager,
liveStateCache statecache.LiveStateCache,
projInformer cache.SharedIndexInformer,
metricsServer *metrics.MetricsServer,
cache *appstatecache.Cache,
statusRefreshTimeout time.Duration,
@@ -1131,6 +1159,7 @@ func NewAppStateManager(
repoClientset: repoClientset,
namespace: namespace,
settingsMgr: settingsMgr,
projInformer: projInformer,
metricsServer: metricsServer,
statusRefreshTimeout: statusRefreshTimeout,
resourceTracking: resourceTracking,

View File

@@ -25,7 +25,6 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/utils/ptr"
"github.com/argoproj/argo-cd/v3/common"
"github.com/argoproj/argo-cd/v3/controller/testdata"
@@ -53,7 +52,7 @@ func TestCompareAppStateEmpty(t *testing.T) {
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
revisions = append(revisions, "")
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false)
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false, false)
require.NoError(t, err)
assert.NotNil(t, compRes)
assert.NotNil(t, compRes.syncStatus)
@@ -71,18 +70,18 @@ func TestCompareAppStateRepoError(t *testing.T) {
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
revisions = append(revisions, "")
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false)
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false, false)
assert.Nil(t, compRes)
require.EqualError(t, err, ErrCompareStateRepo.Error())
// expect to still get compare state error to as inside grace period
compRes, err = ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false)
compRes, err = ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false, false)
assert.Nil(t, compRes)
require.EqualError(t, err, ErrCompareStateRepo.Error())
time.Sleep(10 * time.Second)
// expect to not get error as outside of grace period, but status should be unknown
compRes, err = ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false)
compRes, err = ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false, false)
assert.NotNil(t, compRes)
require.NoError(t, err)
assert.Equal(t, v1alpha1.SyncStatusCodeUnknown, compRes.syncStatus.Status)
@@ -117,7 +116,7 @@ func TestCompareAppStateNamespaceMetadataDiffers(t *testing.T) {
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
revisions = append(revisions, "")
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false)
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false, false)
require.NoError(t, err)
assert.NotNil(t, compRes)
assert.NotNil(t, compRes.syncStatus)
@@ -166,7 +165,7 @@ func TestCompareAppStateNamespaceMetadataDiffersToManifest(t *testing.T) {
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
revisions = append(revisions, "")
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false)
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false, false)
require.NoError(t, err)
assert.NotNil(t, compRes)
assert.NotNil(t, compRes.syncStatus)
@@ -224,7 +223,7 @@ func TestCompareAppStateNamespaceMetadata(t *testing.T) {
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
revisions = append(revisions, "")
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false)
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false, false)
require.NoError(t, err)
assert.NotNil(t, compRes)
assert.NotNil(t, compRes.syncStatus)
@@ -283,7 +282,7 @@ func TestCompareAppStateNamespaceMetadataIsTheSame(t *testing.T) {
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
revisions = append(revisions, "")
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false)
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false, false)
require.NoError(t, err)
assert.NotNil(t, compRes)
assert.NotNil(t, compRes.syncStatus)
@@ -311,7 +310,7 @@ func TestCompareAppStateMissing(t *testing.T) {
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
revisions = append(revisions, "")
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false)
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false, false)
require.NoError(t, err)
assert.NotNil(t, compRes)
assert.NotNil(t, compRes.syncStatus)
@@ -343,7 +342,7 @@ func TestCompareAppStateExtra(t *testing.T) {
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
revisions = append(revisions, "")
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false)
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false, false)
require.NoError(t, err)
assert.NotNil(t, compRes)
assert.Equal(t, v1alpha1.SyncStatusCodeOutOfSync, compRes.syncStatus.Status)
@@ -374,7 +373,7 @@ func TestCompareAppStateHook(t *testing.T) {
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
revisions = append(revisions, "")
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false)
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false, false)
require.NoError(t, err)
assert.NotNil(t, compRes)
assert.Equal(t, v1alpha1.SyncStatusCodeSynced, compRes.syncStatus.Status)
@@ -406,7 +405,7 @@ func TestCompareAppStateSkipHook(t *testing.T) {
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
revisions = append(revisions, "")
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false)
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false, false)
require.NoError(t, err)
assert.NotNil(t, compRes)
assert.Equal(t, v1alpha1.SyncStatusCodeSynced, compRes.syncStatus.Status)
@@ -437,7 +436,7 @@ func TestCompareAppStateCompareOptionIgnoreExtraneous(t *testing.T) {
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
revisions = append(revisions, "")
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false)
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false, false)
require.NoError(t, err)
assert.NotNil(t, compRes)
@@ -470,7 +469,7 @@ func TestCompareAppStateExtraHook(t *testing.T) {
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
revisions = append(revisions, "")
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false)
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false, false)
require.NoError(t, err)
assert.NotNil(t, compRes)
@@ -499,7 +498,7 @@ func TestAppRevisionsSingleSource(t *testing.T) {
app := newFakeApp()
revisions := make([]string, 0)
revisions = append(revisions, "")
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, app.Spec.GetSources(), false, false, nil, app.Spec.HasMultipleSources())
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, app.Spec.GetSources(), false, false, nil, app.Spec.HasMultipleSources(), false)
require.NoError(t, err)
assert.NotNil(t, compRes)
assert.NotNil(t, compRes.syncStatus)
@@ -539,7 +538,7 @@ func TestAppRevisionsMultiSource(t *testing.T) {
app := newFakeMultiSourceApp()
revisions := make([]string, 0)
revisions = append(revisions, "")
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, app.Spec.GetSources(), false, false, nil, app.Spec.HasMultipleSources())
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, app.Spec.GetSources(), false, false, nil, app.Spec.HasMultipleSources(), false)
require.NoError(t, err)
assert.NotNil(t, compRes)
assert.NotNil(t, compRes.syncStatus)
@@ -588,7 +587,7 @@ func TestCompareAppStateDuplicatedNamespacedResources(t *testing.T) {
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
revisions = append(revisions, "")
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false)
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false, false)
require.NoError(t, err)
assert.NotNil(t, compRes)
@@ -625,7 +624,7 @@ func TestCompareAppStateManagedNamespaceMetadataWithLiveNsDoesNotGetPruned(t *te
},
}
ctrl := newFakeController(&data, nil)
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, []string{}, app.Spec.Sources, false, false, nil, false)
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, []string{}, app.Spec.Sources, false, false, nil, false, false)
require.NoError(t, err)
assert.NotNil(t, compRes)
@@ -679,7 +678,7 @@ func TestCompareAppStateWithManifestGeneratePath(t *testing.T) {
ctrl := newFakeController(&data, nil)
revisions := make([]string, 0)
revisions = append(revisions, "abc123")
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, app.Spec.GetSources(), false, false, nil, false)
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, app.Spec.GetSources(), false, false, nil, false, false)
require.NoError(t, err)
assert.NotNil(t, compRes)
assert.Equal(t, v1alpha1.SyncStatusCodeSynced, compRes.syncStatus.Status)
@@ -715,7 +714,7 @@ func TestSetHealth(t *testing.T) {
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
revisions = append(revisions, "")
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false)
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false, false)
require.NoError(t, err)
assert.Equal(t, health.HealthStatusHealthy, compRes.healthStatus)
@@ -751,7 +750,7 @@ func TestPreserveStatusTimestamp(t *testing.T) {
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
revisions = append(revisions, "")
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false)
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false, false)
require.NoError(t, err)
assert.Equal(t, health.HealthStatusHealthy, compRes.healthStatus)
@@ -788,7 +787,7 @@ func TestSetHealthSelfReferencedApp(t *testing.T) {
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
revisions = append(revisions, "")
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false)
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false, false)
require.NoError(t, err)
assert.Equal(t, health.HealthStatusHealthy, compRes.healthStatus)
@@ -863,7 +862,7 @@ func TestReturnUnknownComparisonStateOnSettingLoadError(t *testing.T) {
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
revisions = append(revisions, "")
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false)
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false, false)
require.NoError(t, err)
assert.Equal(t, health.HealthStatusUnknown, compRes.healthStatus)
@@ -1012,7 +1011,7 @@ func TestSignedResponseNoSignatureRequired(t *testing.T) {
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
revisions = append(revisions, "")
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false)
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false, false)
require.NoError(t, err)
assert.NotNil(t, compRes)
assert.NotNil(t, compRes.syncStatus)
@@ -1039,7 +1038,7 @@ func TestSignedResponseNoSignatureRequired(t *testing.T) {
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
revisions = append(revisions, "")
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false)
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false, false)
require.NoError(t, err)
assert.NotNil(t, compRes)
assert.NotNil(t, compRes.syncStatus)
@@ -1071,7 +1070,7 @@ func TestSignedResponseSignatureRequired(t *testing.T) {
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
revisions = append(revisions, "")
compRes, err := ctrl.appStateManager.CompareAppState(app, &signedProj, revisions, sources, false, false, nil, false)
compRes, err := ctrl.appStateManager.CompareAppState(app, &signedProj, revisions, sources, false, false, nil, false, false)
require.NoError(t, err)
assert.NotNil(t, compRes)
assert.NotNil(t, compRes.syncStatus)
@@ -1098,7 +1097,7 @@ func TestSignedResponseSignatureRequired(t *testing.T) {
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
revisions = append(revisions, "abc123")
compRes, err := ctrl.appStateManager.CompareAppState(app, &signedProj, revisions, sources, false, false, nil, false)
compRes, err := ctrl.appStateManager.CompareAppState(app, &signedProj, revisions, sources, false, false, nil, false, false)
require.NoError(t, err)
assert.NotNil(t, compRes)
assert.NotNil(t, compRes.syncStatus)
@@ -1125,7 +1124,7 @@ func TestSignedResponseSignatureRequired(t *testing.T) {
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
revisions = append(revisions, "abc123")
compRes, err := ctrl.appStateManager.CompareAppState(app, &signedProj, revisions, sources, false, false, nil, false)
compRes, err := ctrl.appStateManager.CompareAppState(app, &signedProj, revisions, sources, false, false, nil, false, false)
require.NoError(t, err)
assert.NotNil(t, compRes)
assert.NotNil(t, compRes.syncStatus)
@@ -1152,7 +1151,7 @@ func TestSignedResponseSignatureRequired(t *testing.T) {
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
revisions = append(revisions, "abc123")
compRes, err := ctrl.appStateManager.CompareAppState(app, &signedProj, revisions, sources, false, false, nil, false)
compRes, err := ctrl.appStateManager.CompareAppState(app, &signedProj, revisions, sources, false, false, nil, false, false)
require.NoError(t, err)
assert.NotNil(t, compRes)
assert.NotNil(t, compRes.syncStatus)
@@ -1182,7 +1181,7 @@ func TestSignedResponseSignatureRequired(t *testing.T) {
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
revisions = append(revisions, "abc123")
compRes, err := ctrl.appStateManager.CompareAppState(app, &testProj, revisions, sources, false, false, nil, false)
compRes, err := ctrl.appStateManager.CompareAppState(app, &testProj, revisions, sources, false, false, nil, false, false)
require.NoError(t, err)
assert.NotNil(t, compRes)
assert.NotNil(t, compRes.syncStatus)
@@ -1212,7 +1211,7 @@ func TestSignedResponseSignatureRequired(t *testing.T) {
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
revisions = append(revisions, "abc123")
compRes, err := ctrl.appStateManager.CompareAppState(app, &signedProj, revisions, sources, false, false, localManifests, false)
compRes, err := ctrl.appStateManager.CompareAppState(app, &signedProj, revisions, sources, false, false, localManifests, false, false)
require.NoError(t, err)
assert.NotNil(t, compRes)
assert.NotNil(t, compRes.syncStatus)
@@ -1242,7 +1241,7 @@ func TestSignedResponseSignatureRequired(t *testing.T) {
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
revisions = append(revisions, "abc123")
compRes, err := ctrl.appStateManager.CompareAppState(app, &signedProj, revisions, sources, false, false, nil, false)
compRes, err := ctrl.appStateManager.CompareAppState(app, &signedProj, revisions, sources, false, false, nil, false, false)
require.NoError(t, err)
assert.NotNil(t, compRes)
assert.NotNil(t, compRes.syncStatus)
@@ -1272,7 +1271,7 @@ func TestSignedResponseSignatureRequired(t *testing.T) {
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
revisions = append(revisions, "abc123")
compRes, err := ctrl.appStateManager.CompareAppState(app, &signedProj, revisions, sources, false, false, localManifests, false)
compRes, err := ctrl.appStateManager.CompareAppState(app, &signedProj, revisions, sources, false, false, localManifests, false, false)
require.NoError(t, err)
assert.NotNil(t, compRes)
assert.NotNil(t, compRes.syncStatus)
@@ -1482,6 +1481,7 @@ func TestIsLiveResourceManaged(t *testing.T) {
func TestUseDiffCache(t *testing.T) {
t.Parallel()
type fixture struct {
testName string
noCache bool
@@ -1493,6 +1493,7 @@ func TestUseDiffCache(t *testing.T) {
expectedUseCache bool
serverSideDiff bool
}
manifestInfos := func(revision string) []*apiclient.ManifestResponse {
return []*apiclient.ManifestResponse{
{
@@ -1508,15 +1509,14 @@ func TestUseDiffCache(t *testing.T) {
},
}
}
source := func() v1alpha1.ApplicationSource {
return v1alpha1.ApplicationSource{
RepoURL: "https://some-repo.com",
Path: "argocd/httpbin",
TargetRevision: "HEAD",
}
}
sources := func() []v1alpha1.ApplicationSource {
return []v1alpha1.ApplicationSource{source()}
return []v1alpha1.ApplicationSource{
{
RepoURL: "https://some-repo.com",
Path: "argocd/httpbin",
TargetRevision: "HEAD",
},
}
}
app := func(namespace string, revision string, refresh bool, a *v1alpha1.Application) *v1alpha1.Application {
@@ -1526,7 +1526,11 @@ func TestUseDiffCache(t *testing.T) {
Namespace: namespace,
},
Spec: v1alpha1.ApplicationSpec{
Source: ptr.To(source()),
Source: &v1alpha1.ApplicationSource{
RepoURL: "https://some-repo.com",
Path: "argocd/httpbin",
TargetRevision: "HEAD",
},
Destination: v1alpha1.ApplicationDestination{
Server: "https://kubernetes.default.svc",
Namespace: "httpbin",
@@ -1544,7 +1548,11 @@ func TestUseDiffCache(t *testing.T) {
Sync: v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeSynced,
ComparedTo: v1alpha1.ComparedTo{
Source: source(),
Source: v1alpha1.ApplicationSource{
RepoURL: "https://some-repo.com",
Path: "argocd/httpbin",
TargetRevision: "HEAD",
},
Destination: v1alpha1.ApplicationDestination{
Server: "https://kubernetes.default.svc",
Namespace: "httpbin",
@@ -1569,6 +1577,7 @@ func TestUseDiffCache(t *testing.T) {
}
return app
}
cases := []fixture{
{
testName: "will use diff cache",
@@ -1585,7 +1594,7 @@ func TestUseDiffCache(t *testing.T) {
testName: "will use diff cache with sync policy",
noCache: false,
manifestInfos: manifestInfos("rev1"),
sources: []v1alpha1.ApplicationSource{test.YamlToApplication(testdata.DiffCacheYaml).Status.Sync.ComparedTo.Source},
sources: sources(),
app: test.YamlToApplication(testdata.DiffCacheYaml),
manifestRevisions: []string{"rev1"},
statusRefreshTimeout: time.Hour * 24,
@@ -1595,15 +1604,8 @@ func TestUseDiffCache(t *testing.T) {
{
testName: "will use diff cache for multisource",
noCache: false,
manifestInfos: append(manifestInfos("rev1"), manifestInfos("rev2")...),
sources: v1alpha1.ApplicationSources{
{
RepoURL: "multisource repo1",
},
{
RepoURL: "multisource repo2",
},
},
manifestInfos: manifestInfos("rev1"),
sources: sources(),
app: app("httpbin", "", false, &v1alpha1.Application{
Spec: v1alpha1.ApplicationSpec{
Source: nil,
@@ -1741,13 +1743,16 @@ func TestUseDiffCache(t *testing.T) {
}
for _, tc := range cases {
tc := tc
t.Run(tc.testName, func(t *testing.T) {
// Given
t.Parallel()
logger, _ := logrustest.NewNullLogger()
log := logrus.NewEntry(logger)
// When
useDiffCache := useDiffCache(tc.noCache, tc.manifestInfos, tc.sources, tc.app, tc.manifestRevisions, tc.statusRefreshTimeout, tc.serverSideDiff, log)
// Then
assert.Equal(t, tc.expectedUseCache, useDiffCache)
})
@@ -1770,11 +1775,11 @@ func TestCompareAppStateDefaultRevisionUpdated(t *testing.T) {
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
revisions = append(revisions, "")
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false)
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false, false)
require.NoError(t, err)
assert.NotNil(t, compRes)
assert.NotNil(t, compRes.syncStatus)
assert.True(t, compRes.revisionsMayHaveChanges)
assert.True(t, compRes.revisionUpdated)
}
func TestCompareAppStateRevisionUpdatedWithHelmSource(t *testing.T) {
@@ -1793,11 +1798,11 @@ func TestCompareAppStateRevisionUpdatedWithHelmSource(t *testing.T) {
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
revisions = append(revisions, "")
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false)
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false, false)
require.NoError(t, err)
assert.NotNil(t, compRes)
assert.NotNil(t, compRes.syncStatus)
assert.True(t, compRes.revisionsMayHaveChanges)
assert.True(t, compRes.revisionUpdated)
}
func Test_normalizeClusterScopeTracking(t *testing.T) {
@@ -1845,6 +1850,6 @@ func TestCompareAppState_DoesNotCallUpdateRevisionForPaths_ForOCI(t *testing.T)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, source)
_, _, _, err := ctrl.appStateManager.GetRepoObjs(app, sources, "abc123", []string{"123456"}, false, false, false, &defaultProj, false)
_, _, _, err := ctrl.appStateManager.GetRepoObjs(app, sources, "abc123", []string{"123456"}, false, false, false, &defaultProj, false, false)
require.NoError(t, err)
}
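
Every test hunk above makes the same mechanical change: `CompareAppState` (and, in the OCI test, `GetRepoObjs`) gains one extra trailing boolean. A minimal, self-contained sketch of what that trailing flag means at the call site; the parameter names here are assumptions inferred from the controller hunk further down, not taken from the test file:

```go
package main

import "fmt"

// compareAppState is a stand-in for the controller method whose call sites
// change in the hunks above. The parameter names (noCache, noRevisionCache,
// hasMultipleSources, rollback) are inferred from the controller diff below
// and are assumptions, not confirmed API.
func compareAppState(noCache, noRevisionCache bool, localManifests []string, hasMultipleSources, rollback bool) string {
	return fmt.Sprintf("noCache=%v noRevisionCache=%v localManifests=%d multiSource=%v rollback=%v",
		noCache, noRevisionCache, len(localManifests), hasMultipleSources, rollback)
}

func main() {
	// Each test previously ended the argument list with a single "false";
	// the extra trailing "false" seen in every hunk is the new rollback flag.
	fmt.Println(compareAppState(false, false, nil, false, false))
}
```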

View File

@@ -30,6 +30,7 @@ import (
"github.com/argoproj/argo-cd/v3/controller/metrics"
"github.com/argoproj/argo-cd/v3/controller/syncid"
"github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
listersv1alpha1 "github.com/argoproj/argo-cd/v3/pkg/client/listers/application/v1alpha1"
applog "github.com/argoproj/argo-cd/v3/util/app/log"
"github.com/argoproj/argo-cd/v3/util/argo"
"github.com/argoproj/argo-cd/v3/util/argo/diff"
@@ -86,95 +87,123 @@ func (m *appStateManager) getServerSideDiffDryRunApplier(cluster *v1alpha1.Clust
return ops, cleanup, nil
}
func NewOperationState(operation v1alpha1.Operation) *v1alpha1.OperationState {
return &v1alpha1.OperationState{
Phase: common.OperationRunning,
Operation: operation,
StartedAt: metav1.Now(),
}
}
func newSyncOperationResult(app *v1alpha1.Application, op v1alpha1.SyncOperation) *v1alpha1.SyncOperationResult {
syncRes := &v1alpha1.SyncOperationResult{}
if len(op.Sources) > 0 || op.Source != nil {
// specific source specified in the SyncOperation
if op.Source != nil {
syncRes.Source = *op.Source
}
syncRes.Sources = op.Sources
} else {
// normal sync case, get sources from the spec
syncRes.Sources = app.Spec.Sources
syncRes.Source = app.Spec.GetSource()
}
// Sync requests might be requested with ambiguous revisions (e.g. master, HEAD, v1.2.3).
// This can change meaning when resuming operations (e.g. a hook sync). After calculating a
// concrete git commit SHA, the revision of the SyncOperationResult will be updated with the SHA
syncRes.Revision = op.Revision
syncRes.Revisions = op.Revisions
return syncRes
}
// concrete git commit SHA, the SHA is remembered in the status.operationState.syncResult field.
// This ensures that when resuming an operation, we sync to the same revision that we initially
// started with.
func (m *appStateManager) SyncAppState(app *v1alpha1.Application, state *v1alpha1.OperationState) {
func (m *appStateManager) SyncAppState(app *v1alpha1.Application, project *v1alpha1.AppProject, state *v1alpha1.OperationState) {
syncId, err := syncid.Generate()
if err != nil {
state.Phase = common.OperationError
state.Message = fmt.Sprintf("Failed to generate sync ID: %v", err)
return
}
logEntry := log.WithFields(applog.GetAppLogFields(app)).WithField("syncId", syncId)
var revision string
var syncOp v1alpha1.SyncOperation
var syncRes *v1alpha1.SyncOperationResult
var source v1alpha1.ApplicationSource
var sources []v1alpha1.ApplicationSource
revisions := make([]string, 0)
if state.Operation.Sync == nil {
state.Phase = common.OperationError
state.Phase = common.OperationFailed
state.Message = "Invalid operation request: no operation specified"
return
}
syncOp = *state.Operation.Sync
syncOp := *state.Operation.Sync
if state.SyncResult == nil {
state.SyncResult = newSyncOperationResult(app, syncOp)
}
if isBlocked, err := syncWindowPreventsSync(app, project); isBlocked {
// If the operation is currently running, simply let the user know the sync is blocked by a current sync window
if state.Phase == common.OperationRunning {
state.Message = "Sync operation blocked by sync window"
if err != nil {
state.Message = fmt.Sprintf("%s: %v", state.Message, err)
}
isMultiSourceRevision := app.Spec.HasMultipleSources()
rollback := len(syncOp.Sources) > 0 || syncOp.Source != nil
if rollback {
// rollback case
if len(state.Operation.Sync.Sources) > 0 {
sources = state.Operation.Sync.Sources
isMultiSourceRevision = true
} else {
source = *state.Operation.Sync.Source
sources = make([]v1alpha1.ApplicationSource, 0)
isMultiSourceRevision = false
}
} else {
// normal sync case (where source is taken from app.spec.sources)
if app.Spec.HasMultipleSources() {
sources = app.Spec.Sources
} else {
// normal sync case (where source is taken from app.spec.source)
source = app.Spec.GetSource()
sources = make([]v1alpha1.ApplicationSource, 0)
}
return
}
revisions := state.SyncResult.Revisions
sources := state.SyncResult.Sources
isMultiSourceSync := len(sources) > 0
if !isMultiSourceSync {
sources = []v1alpha1.ApplicationSource{state.SyncResult.Source}
revisions = []string{state.SyncResult.Revision}
if state.SyncResult != nil {
syncRes = state.SyncResult
revision = state.SyncResult.Revision
revisions = append(revisions, state.SyncResult.Revisions...)
} else {
syncRes = &v1alpha1.SyncOperationResult{}
// status.operationState.syncResult.source must be set properly since auto-sync relies
// on this information to decide if it should sync (if source is different than the last
// sync attempt)
if isMultiSourceRevision {
syncRes.Sources = sources
} else {
syncRes.Source = source
}
state.SyncResult = syncRes
}
// if we get here, it means we did not remember a commit SHA which we should be syncing to.
// This typically indicates we are just about to begin a brand new sync/rollback operation.
// Take the value in the requested operation. We will resolve this to a SHA later.
if isMultiSourceRevision {
if len(revisions) != len(sources) {
revisions = syncOp.Revisions
}
} else {
if revision == "" {
revision = syncOp.Revision
}
}
proj, err := argo.GetAppProject(context.TODO(), app, listersv1alpha1.NewAppProjectLister(m.projInformer.GetIndexer()), m.namespace, m.settingsMgr, m.db)
if err != nil {
state.Phase = common.OperationError
state.Message = fmt.Sprintf("Failed to load application project: %v", err)
return
} else {
isBlocked, err := syncWindowPreventsSync(app, proj)
if isBlocked {
// If the operation is currently running, simply let the user know the sync is blocked by a current sync window
if state.Phase == common.OperationRunning {
state.Message = "Sync operation blocked by sync window"
if err != nil {
state.Message = fmt.Sprintf("%s: %v", state.Message, err)
}
}
return
}
}
if !isMultiSourceRevision {
sources = []v1alpha1.ApplicationSource{source}
revisions = []string{revision}
}
// ignore error if CompareStateRepoError, this shouldn't happen as noRevisionCache is true
compareResult, err := m.CompareAppState(app, project, revisions, sources, false, true, syncOp.Manifests, isMultiSourceSync)
compareResult, err := m.CompareAppState(app, proj, revisions, sources, false, true, syncOp.Manifests, isMultiSourceRevision, rollback)
if err != nil && !stderrors.Is(err, ErrCompareStateRepo) {
state.Phase = common.OperationError
state.Message = err.Error()
return
}
// We are now guaranteed to have a concrete commit SHA. Save this in the sync result revision so that we remember
// We now have a concrete commit SHA. Save this in the sync result revision so that we remember
// what we should be syncing to when resuming operations.
state.SyncResult.Revision = compareResult.syncStatus.Revision
state.SyncResult.Revisions = compareResult.syncStatus.Revisions
// validates if it should fail the sync on that revision if it finds shared resources
syncRes.Revision = compareResult.syncStatus.Revision
syncRes.Revisions = compareResult.syncStatus.Revisions
// validates if it should fail the sync if it finds shared resources
hasSharedResource, sharedResourceMessage := hasSharedResourceCondition(app)
if syncOp.SyncOptions.HasOption("FailOnSharedResource=true") && hasSharedResource {
if syncOp.SyncOptions.HasOption("FailOnSharedResource=true") &&
hasSharedResource {
state.Phase = common.OperationFailed
state.Message = "Shared resource found: " + sharedResourceMessage
state.Message = "Shared resource found: %s" + sharedResourceMessage
return
}
@@ -217,8 +246,15 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, project *v1alp
return
}
initialResourcesRes := make([]common.ResourceSyncResult, len(state.SyncResult.Resources))
for i, res := range state.SyncResult.Resources {
syncId, err := syncid.Generate()
if err != nil {
state.Phase = common.OperationError
state.Message = fmt.Sprintf("Failed to generate sync ID: %v", err)
return
}
logEntry := log.WithFields(applog.GetAppLogFields(app)).WithField("syncId", syncId)
initialResourcesRes := make([]common.ResourceSyncResult, len(syncRes.Resources))
for i, res := range syncRes.Resources {
key := kube.ResourceKey{Group: res.Group, Kind: res.Kind, Namespace: res.Namespace, Name: res.Name}
initialResourcesRes[i] = common.ResourceSyncResult{
ResourceKey: key,
@@ -288,7 +324,7 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, project *v1alp
return
}
if impersonationEnabled {
serviceAccountToImpersonate, err := deriveServiceAccountToImpersonate(project, app, destCluster)
serviceAccountToImpersonate, err := deriveServiceAccountToImpersonate(proj, app, destCluster)
if err != nil {
state.Phase = common.OperationError
state.Message = fmt.Sprintf("failed to find a matching service account to impersonate: %v", err)
@@ -308,11 +344,11 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, project *v1alp
sync.WithLogr(logutils.NewLogrusLogger(logEntry)),
sync.WithHealthOverride(lua.ResourceHealthOverrides(resourceOverrides)),
sync.WithPermissionValidator(func(un *unstructured.Unstructured, res *metav1.APIResource) error {
if !project.IsGroupKindPermitted(un.GroupVersionKind().GroupKind(), res.Namespaced) {
return fmt.Errorf("resource %s:%s is not permitted in project %s", un.GroupVersionKind().Group, un.GroupVersionKind().Kind, project.Name)
if !proj.IsGroupKindPermitted(un.GroupVersionKind().GroupKind(), res.Namespaced) {
return fmt.Errorf("resource %s:%s is not permitted in project %s", un.GroupVersionKind().Group, un.GroupVersionKind().Kind, proj.Name)
}
if res.Namespaced {
permitted, err := project.IsDestinationPermitted(destCluster, un.GetNamespace(), func(project string) ([]*v1alpha1.Cluster, error) {
permitted, err := proj.IsDestinationPermitted(destCluster, un.GetNamespace(), func(project string) ([]*v1alpha1.Cluster, error) {
return m.db.GetProjectClusters(context.TODO(), project)
})
if err != nil {
@@ -320,7 +356,7 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, project *v1alp
}
if !permitted {
return fmt.Errorf("namespace %v is not permitted in project '%s'", un.GetNamespace(), project.Name)
return fmt.Errorf("namespace %v is not permitted in project '%s'", un.GetNamespace(), proj.Name)
}
}
return nil
@@ -422,7 +458,7 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, project *v1alp
logEntry.WithField("duration", time.Since(start)).Info("sync/terminate complete")
if !syncOp.DryRun && len(syncOp.Resources) == 0 && state.Phase.Successful() {
err := m.persistRevisionHistory(app, compareResult.syncStatus.Revision, compareResult.syncStatus.ComparedTo.Source, compareResult.syncStatus.Revisions, compareResult.syncStatus.ComparedTo.Sources, isMultiSourceSync, state.StartedAt, state.Operation.InitiatedBy)
err := m.persistRevisionHistory(app, compareResult.syncStatus.Revision, source, compareResult.syncStatus.Revisions, compareResult.syncStatus.ComparedTo.Sources, isMultiSourceRevision, state.StartedAt, state.Operation.InitiatedBy)
if err != nil {
state.Phase = common.OperationError
state.Message = fmt.Sprintf("failed to record sync to history: %v", err)

View File

@@ -50,7 +50,7 @@ func TestPersistRevisionHistory(t *testing.T) {
opState := &v1alpha1.OperationState{Operation: v1alpha1.Operation{
Sync: &v1alpha1.SyncOperation{},
}}
ctrl.appStateManager.SyncAppState(app, defaultProject, opState)
ctrl.appStateManager.SyncAppState(app, opState)
// Ensure we record spec.source into sync result
assert.Equal(t, app.Spec.GetSource(), opState.SyncResult.Source)
@@ -96,7 +96,7 @@ func TestPersistManagedNamespaceMetadataState(t *testing.T) {
opState := &v1alpha1.OperationState{Operation: v1alpha1.Operation{
Sync: &v1alpha1.SyncOperation{},
}}
ctrl.appStateManager.SyncAppState(app, defaultProject, opState)
ctrl.appStateManager.SyncAppState(app, opState)
// Ensure we record spec.syncPolicy.managedNamespaceMetadata into sync result
assert.Equal(t, app.Spec.SyncPolicy.ManagedNamespaceMetadata, opState.SyncResult.ManagedNamespaceMetadata)
}
@@ -139,7 +139,7 @@ func TestPersistRevisionHistoryRollback(t *testing.T) {
Source: &source,
},
}}
ctrl.appStateManager.SyncAppState(app, defaultProject, opState)
ctrl.appStateManager.SyncAppState(app, opState)
// Ensure we record opState's source into sync result
assert.Equal(t, source, opState.SyncResult.Source)
@@ -182,7 +182,7 @@ func TestSyncComparisonError(t *testing.T) {
Sync: &v1alpha1.SyncOperation{},
}}
t.Setenv("ARGOCD_GPG_ENABLED", "true")
ctrl.appStateManager.SyncAppState(app, defaultProject, opState)
ctrl.appStateManager.SyncAppState(app, opState)
conditions := app.Status.GetConditions(map[v1alpha1.ApplicationConditionType]bool{v1alpha1.ApplicationConditionComparisonError: true})
assert.NotEmpty(t, conditions)
@@ -194,7 +194,6 @@ func TestAppStateManager_SyncAppState(t *testing.T) {
type fixture struct {
application *v1alpha1.Application
project *v1alpha1.AppProject
controller *ApplicationController
}
@@ -236,7 +235,6 @@ func TestAppStateManager_SyncAppState(t *testing.T) {
return &fixture{
application: app,
project: project,
controller: ctrl,
}
}
@@ -271,7 +269,7 @@ func TestAppStateManager_SyncAppState(t *testing.T) {
}}
// when
f.controller.appStateManager.SyncAppState(f.application, f.project, opState)
f.controller.appStateManager.SyncAppState(f.application, opState)
// then
assert.Equal(t, synccommon.OperationFailed, opState.Phase)
@@ -284,7 +282,6 @@ func TestSyncWindowDeniesSync(t *testing.T) {
type fixture struct {
application *v1alpha1.Application
project *v1alpha1.AppProject
controller *ApplicationController
}
@@ -323,7 +320,6 @@ func TestSyncWindowDeniesSync(t *testing.T) {
return &fixture{
application: app,
project: project,
controller: ctrl,
}
}
@@ -343,7 +339,7 @@ func TestSyncWindowDeniesSync(t *testing.T) {
Phase: synccommon.OperationRunning,
}
// when
f.controller.appStateManager.SyncAppState(f.application, f.project, opState)
f.controller.appStateManager.SyncAppState(f.application, opState)
// then
assert.Equal(t, synccommon.OperationRunning, opState.Phase)
@@ -1366,7 +1362,6 @@ func TestDeriveServiceAccountMatchingServers(t *testing.T) {
func TestSyncWithImpersonate(t *testing.T) {
type fixture struct {
application *v1alpha1.Application
project *v1alpha1.AppProject
controller *ApplicationController
}
@@ -1416,7 +1411,6 @@ func TestSyncWithImpersonate(t *testing.T) {
ctrl := newFakeController(&data, nil)
return &fixture{
application: app,
project: project,
controller: ctrl,
}
}
@@ -1435,7 +1429,7 @@ func TestSyncWithImpersonate(t *testing.T) {
Phase: synccommon.OperationRunning,
}
// when
f.controller.appStateManager.SyncAppState(f.application, f.project, opState)
f.controller.appStateManager.SyncAppState(f.application, opState)
// then, app sync should fail with expected error message in operation state
assert.Equal(t, synccommon.OperationError, opState.Phase)
@@ -1456,7 +1450,7 @@ func TestSyncWithImpersonate(t *testing.T) {
Phase: synccommon.OperationRunning,
}
// when
f.controller.appStateManager.SyncAppState(f.application, f.project, opState)
f.controller.appStateManager.SyncAppState(f.application, opState)
// then app sync should fail with expected error message in operation state
assert.Equal(t, synccommon.OperationError, opState.Phase)
@@ -1477,7 +1471,7 @@ func TestSyncWithImpersonate(t *testing.T) {
Phase: synccommon.OperationRunning,
}
// when
f.controller.appStateManager.SyncAppState(f.application, f.project, opState)
f.controller.appStateManager.SyncAppState(f.application, opState)
// then app sync should not fail
assert.Equal(t, synccommon.OperationSucceeded, opState.Phase)
@@ -1498,7 +1492,7 @@ func TestSyncWithImpersonate(t *testing.T) {
Phase: synccommon.OperationRunning,
}
// when
f.controller.appStateManager.SyncAppState(f.application, f.project, opState)
f.controller.appStateManager.SyncAppState(f.application, opState)
// then application sync should pass using the control plane service account
assert.Equal(t, synccommon.OperationSucceeded, opState.Phase)
@@ -1523,7 +1517,7 @@ func TestSyncWithImpersonate(t *testing.T) {
f.application.Spec.Destination.Name = "minikube"
// when
f.controller.appStateManager.SyncAppState(f.application, f.project, opState)
f.controller.appStateManager.SyncAppState(f.application, opState)
// then app sync should not fail
assert.Equal(t, synccommon.OperationSucceeded, opState.Phase)
@@ -1536,7 +1530,6 @@ func TestClientSideApplyMigration(t *testing.T) {
type fixture struct {
application *v1alpha1.Application
project *v1alpha1.AppProject
controller *ApplicationController
}
@@ -1577,7 +1570,6 @@ func TestClientSideApplyMigration(t *testing.T) {
return &fixture{
application: app,
project: project,
controller: ctrl,
}
}
@@ -1593,7 +1585,7 @@ func TestClientSideApplyMigration(t *testing.T) {
Source: &v1alpha1.ApplicationSource{},
},
}}
f.controller.appStateManager.SyncAppState(f.application, f.project, opState)
f.controller.appStateManager.SyncAppState(f.application, opState)
// then
assert.Equal(t, synccommon.OperationSucceeded, opState.Phase)
@@ -1611,7 +1603,7 @@ func TestClientSideApplyMigration(t *testing.T) {
Source: &v1alpha1.ApplicationSource{},
},
}}
f.controller.appStateManager.SyncAppState(f.application, f.project, opState)
f.controller.appStateManager.SyncAppState(f.application, opState)
// then
assert.Equal(t, synccommon.OperationSucceeded, opState.Phase)
@@ -1629,7 +1621,7 @@ func TestClientSideApplyMigration(t *testing.T) {
Source: &v1alpha1.ApplicationSource{},
},
}}
f.controller.appStateManager.SyncAppState(f.application, f.project, opState)
f.controller.appStateManager.SyncAppState(f.application, opState)
// then
assert.Equal(t, synccommon.OperationSucceeded, opState.Phase)

View File

@@ -7,7 +7,7 @@ You can find the Swagger docs by setting the path to `/swagger-ui` in your Argo
You'll need to authorize your API requests using a bearer token. To get a token:
```bash
$ curl -H "Content-Type: application/json" $ARGOCD_SERVER/api/v1/session -d $'{"username":"admin","password":"password"}'
$ curl $ARGOCD_SERVER/api/v1/session -d $'{"username":"admin","password":"password"}'
{"token":"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpYXQiOjE1Njc4MTIzODcsImlzcyI6ImFyZ29jZCIsIm5iZiI6MTU2NzgxMjM4Nywic3ViIjoiYWRtaW4ifQ.ejyTgFxLhuY9mOBtKhcnvobg3QZXJ4_RusN_KIdVwao"}
```
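
The returned token is then sent as a bearer token on subsequent requests, for example (assuming the token from the response above has been stored in `$ARGOCD_TOKEN`):

```bash
$ curl $ARGOCD_SERVER/api/v1/applications -H "Authorization: Bearer $ARGOCD_TOKEN"
```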
@@ -29,4 +29,4 @@ is specified, and the specified Application does not exist, the API will return
Additionally, if the `project` query string parameter is specified and the Application exists but is not in
the given `project`, the API will return a `403` error. This is to prevent leaking information about the
existence of Applications to users who do not have access to them.
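
For example, a lookup pinned to a project might look like the following; the application name `guestbook` and project name `default` are placeholders:

```bash
$ curl "$ARGOCD_SERVER/api/v1/applications/guestbook?project=default" \
    -H "Authorization: Bearer $ARGOCD_TOKEN"
```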

View File

@@ -6,21 +6,21 @@
These are the upcoming release dates:
| Release | Release Candidate 1 | General Availability | Release Champion | Release Approver | Checklist |
|---------|-----------------------|----------------------|---------------------------------------------------------|--------------------------------------------------------|---------------------------------------------------------------|
| v2.6 | Monday, Dec. 19, 2022 | Monday, Feb. 6, 2023 | [William Tam](https://github.com/wtam2018) | [William Tam](https://github.com/wtam2018) | [checklist](https://github.com/argoproj/argo-cd/issues/11563) |
| v2.7 | Monday, Mar. 20, 2023 | Monday, May 1, 2023 | [Pavel Kostohrys](https://github.com/pasha-codefresh) | [Pavel Kostohrys](https://github.com/pasha-codefresh) | [checklist](https://github.com/argoproj/argo-cd/issues/12762) |
| v2.8 | Monday, Jun. 26, 2023 | Monday, Aug. 7, 2023 | [Keith Chong](https://github.com/keithchong) | [Keith Chong](https://github.com/keithchong) | [checklist](https://github.com/argoproj/argo-cd/issues/13742) |
| v2.9 | Monday, Sep. 18, 2023 | Monday, Nov. 6, 2023 | [Leonardo Almeida](https://github.com/leoluz) | [Leonardo Almeida](https://github.com/leoluz) | [checklist](https://github.com/argoproj/argo-cd/issues/14078) |
| v2.10 | Monday, Dec. 18, 2023 | Monday, Feb. 5, 2024 | [Katie Lamkin](https://github.com/kmlamkin9) | | [checklist](https://github.com/argoproj/argo-cd/issues/16339) |
| v2.11 | Friday, Apr. 5, 2024 | Monday, May 6, 2024 | [Pavel Kostohrys](https://github.com/pasha-codefresh) | [Pavel Kostohrys](https://github.com/pasha-codefresh) | [checklist](https://github.com/argoproj/argo-cd/issues/17726) |
| v2.12 | Monday, Jun. 17, 2024 | Monday, Aug. 5, 2024 | [Ishita Sequeira](https://github.com/ishitasequeira) | [Pavel Kostohrys](https://github.com/pasha-codefresh) | [checklist](https://github.com/argoproj/argo-cd/issues/19063) |
| v2.13 | Monday, Sep. 16, 2024 | Monday, Nov. 4, 2024 | [Regina Voloshin](https://github.com/reggie-k) | [Pavel Kostohrys](https://github.com/pasha-codefresh) | [checklist](https://github.com/argoproj/argo-cd/issues/19513) |
| v2.14 | Monday, Dec. 16, 2024 | Monday, Feb. 3, 2025 | [Ryan Umstead](https://github.com/rumstead) | [Pavel Kostohrys](https://github.com/pasha-codefresh) | [checklist](https://github.com/argoproj/argo-cd/issues/20869) |
| v3.0 | Monday, Mar. 17, 2025 | Tuesday, May 6, 2025 | [Regina Voloshin](https://github.com/reggie-k) | | [checklist](https://github.com/argoproj/argo-cd/issues/21735) |
| v3.1 | Monday, Jun. 16, 2025 | Monday, Aug. 4, 2025 | [Christian Hernandez](https://github.com/christianh814) | [Alexandre Gaudreault](https://github.com/agaudreault) | [checklist](#) |
| v3.2 | Monday, Sep. 15, 2025 | Monday, Nov. 3, 2025 | [Nitish Kumar](https://github.com/nitishfy) | | [checklist](#) |
| v3.2 | Monday, Sep. 15, 2025 | Monday, Nov. 3, 2025 | | | |
| v3.3 | Monday, Dec. 15, 2025 | Monday, Feb. 2, 2026 | | | |
Actual release dates might differ from the plan by a few days.

View File

@@ -23,6 +23,7 @@ The Docker container for the virtualized toolchain will use the following local
* `~/go/src` - Your Go workspace's source directory (modifications expected)
* `~/.cache/go-build` - Your Go build cache (modifications expected)
* `~/.kube` - Your Kubernetes client configuration (no modifications)
* `/tmp` - Your system's temp directory (modifications expected)
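
Roughly, the toolchain container is started with bind mounts along these lines. This is an illustrative sketch only: the image name and the container-side paths are placeholders, and the real invocation is generated by the repository's Makefile.

```bash
# Placeholder image name and container paths; shown only to illustrate
# which host directories are mounted and which may be modified.
$ docker run --rm -it \
    -v ~/go/src:/go/src \
    -v ~/.cache/go-build:/root/.cache/go-build \
    -v ~/.kube:/root/.kube \
    -v /tmp:/tmp \
    argocd-test-tools:latest bash
```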
#### Known issues on macOS
[Known issues](mac-users.md)

Some files were not shown because too many files have changed in this diff.