Compare commits

87 Commits

Author SHA1 Message Date
github-actions[bot]
48549a2035 Bump version to 3.2.7 on release-3.2 branch (#26503)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: reggie-k <19544836+reggie-k@users.noreply.github.com>
2026-02-18 14:35:48 +02:00
dudinea
10c3fd02f4 fix: Fix excessive ls-remote requests on monorepos with Auto Sync enabled apps (26277) (cherry-pick #26278 for 3.2) (#26502)
Signed-off-by: Eugene Doudine <eugene.doudine@octopus.com>
Co-authored-by: Dan Garfield <dan.garfield@octopus.com>
2026-02-18 14:26:28 +02:00
argo-cd-cherry-pick-bot[bot]
ca08f90e96 fix(server): OIDC config via secrets fails (#18269) (cherry-pick #26214 for 3.2) (#26389)
Signed-off-by: Valéry Fouques <48053275+BZValoche@users.noreply.github.com>
Co-authored-by: Valéry Fouques <48053275+BZValoche@users.noreply.github.com>
2026-02-16 22:08:24 -10:00
argo-cd-cherry-pick-bot[bot]
1f03b27fd5 ci: exclude testdata from sonar.exclusions (cherry-pick #26398 and #26371 for 3.2) (#26424)
Signed-off-by: reggie-k <regina.voloshin@codefresh.io>
Co-authored-by: Regina Voloshin <regina.voloshin@codefresh.io>
2026-02-12 17:42:27 +02:00
argo-cd-cherry-pick-bot[bot]
9c128e2d4c fix: compressedLayerExtracterStore+isCompressedLayer - allow tar.gzip suffixes (cherry-pick #26355 for 3.2) (#26375)
Signed-off-by: erin liman <erin.liman@tiktokusds.com>
Co-authored-by: erin <6914822+nepeat@users.noreply.github.com>
2026-02-10 10:58:06 -05:00
Nitish Kumar
75eddbd910 chore(deps): update group golang to v1.25.6 (cherry-pick release-3.2) (#26291)
Signed-off-by: nitishfy <justnitish06@gmail.com>
2026-02-06 00:30:10 -10:00
github-actions[bot]
65b029342d Bump version to 3.2.6 on release-3.2 branch (#26120)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2026-01-22 14:34:29 -05:00
Codey Jenkins
2ff406ae33 fix: cherry pick #25516 to release-3.2 (#26115)
Signed-off-by: Codey Jenkins <FourFifthsCode@users.noreply.github.com>
Signed-off-by: pbhatnagar-oss <pbhatifiwork@gmail.com>
Co-authored-by: pbhatnagar-oss <pbhatifiwork@gmail.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2026-01-22 13:34:06 -05:00
John Soutar
76fc92f655 chore(deps): update notifications-engine to fix GitHub PR comments nil panic (cherry-pick #26065 for 3.2) (#26074)
Signed-off-by: John Soutar <john@tella.com>
2026-01-21 09:21:20 +02:00
dudinea
ad117b88a6 fix: invalid error message on health check failure (#26040) (cherry pick #26039 for 3.2) (#26070)
Signed-off-by: Eugene Doudine <eugene.doudine@octopus.com>
2026-01-20 17:34:00 +02:00
argo-cd-cherry-pick-bot[bot]
508da9c791 fix(hydrator): empty links for failed operation (#25025) (cherry-pick #26014 for 3.2) (#26016)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2026-01-15 16:26:37 -05:00
argo-cd-cherry-pick-bot[bot]
20866f4557 fix(hydrator): .gitattributes include deeply nested files (#25870) (cherry-pick #26011 for 3.2) (#26012)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2026-01-15 15:39:26 -05:00
argo-cd-cherry-pick-bot[bot]
e3b108b738 fix: close response body on error paths to prevent connection leak (cherry-pick #25824 for 3.2) (#26006)
Signed-off-by: chentiewen <tiewen.chen@aminer.cn>
Co-authored-by: QingHe <634008786@qq.com>
Co-authored-by: chentiewen <tiewen.chen@aminer.cn>
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-15 14:34:59 +01:00
github-actions[bot]
c56f4400f2 Bump version to 3.2.5 on release-3.2 branch (#25982)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: reggie-k <19544836+reggie-k@users.noreply.github.com>
2026-01-14 18:10:41 +02:00
Regina Voloshin
e9d03a633e docs: Run make codegen for notifications engine changes (#25958)
Signed-off-by: reggie-k <regina.voloshin@codefresh.io>
2026-01-13 14:00:42 +02:00
github-actions[bot]
030b4f982b Bump version to 3.2.4 on release-3.2 branch (#25954)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: reggie-k <19544836+reggie-k@users.noreply.github.com>
2026-01-13 10:02:02 +02:00
Regina Voloshin
fafbd44489 feat: Cherry-pick to 3.2 update notifications engine to v0.5.1 0.20251223091026 8c0c96d8d530 (#25930)
Signed-off-by: reggie-k <regina.voloshin@codefresh.io>
2026-01-12 17:06:36 +05:30
argo-cd-cherry-pick-bot[bot]
d7d9674e33 fix(appset): do not trigger reconciliation on appsets not part of allowed namespaces when updating a cluster secret (cherry-pick #25622 for 3.2) (#25911)
Signed-off-by: OpenGuidou <guillaume.doussin@gmail.com>
Co-authored-by: OpenGuidou <73480729+OpenGuidou@users.noreply.github.com>
2026-01-09 16:17:42 +01:00
argo-cd-cherry-pick-bot[bot]
e6f54030f0 fix: Only show please update resource specification message when spec… (cherry-pick #25066 for 3.2) (#25895)
Signed-off-by: Josh Soref <jsoref@gmail.com>
Co-authored-by: Josh Soref <2119212+jsoref@users.noreply.github.com>
2026-01-07 10:11:25 -05:00
Nitish Kumar
b4146969ed chore(cherry-pick-3.2): bump expr to v1.17.7 (#25889)
Signed-off-by: nitishfy <justnitish06@gmail.com>
2026-01-07 13:31:41 +02:00
argo-cd-cherry-pick-bot[bot]
51c6375130 ci: test against k8s 1.34.2 (cherry-pick #25856 for 3.2) (#25859)
Signed-off-by: reggie-k <regina.voloshin@codefresh.io>
Co-authored-by: Regina Voloshin <regina.voloshin@codefresh.io>
2026-01-05 18:47:45 +02:00
argo-cd-cherry-pick-bot[bot]
b67eb40a45 docs: link to source hydrator (cherry-pick #25813 for 3.2) (#25814)
Signed-off-by: Josh Soref <jsoref@gmail.com>
Co-authored-by: Josh Soref <2119212+jsoref@users.noreply.github.com>
2026-01-05 10:52:46 +02:00
Nitish Kumar
8a0633b74a chore(deps): bump go to 1.25.5 (cherry-pick) (#25805)
Signed-off-by: nitishfy <justnitish06@gmail.com>
Co-authored-by: Papapetrou Patroklos <1743100+ppapapetrou76@users.noreply.github.com>
2026-01-05 10:36:04 +02:00
argo-cd-cherry-pick-bot[bot]
0d4f505954 test: fix flaky create repository test by resyncing informers (cherry-pick #25706 for 3.2) (#25795)
Signed-off-by: reggie-k <regina.voloshin@codefresh.io>
Co-authored-by: Regina Voloshin <regina.voloshin@codefresh.io>
2025-12-24 17:45:02 +02:00
github-actions[bot]
2b6251dfed Bump version to 3.2.3 on release-3.2 branch (#25796)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: reggie-k <19544836+reggie-k@users.noreply.github.com>
2025-12-24 14:06:14 +02:00
Anand Francis Joseph
8f903c3a11 chore(deps): bump golang.org/x/crypto from 0.42.0 to 0.46.0 (#25791)
Signed-off-by: anandf <anjoseph@redhat.com>
2025-12-24 14:01:26 +02:00
github-actions[bot]
8d0dde1388 Bump version to 3.2.2 on release-3.2 branch (#25729)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: reggie-k <19544836+reggie-k@users.noreply.github.com>
2025-12-18 11:39:51 +02:00
argo-cd-cherry-pick-bot[bot]
784f62ca6d fix(server): update resourceVersion on Terminate retry (cherry-pick #25650 for 3.2) (#25718)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-12-18 01:15:13 +01:00
Marco Maurer (-Kilchhofer)
33b5043405 fix(oidc): check userinfo endpoint in AuthMiddleware (cherry-pick #23586 for 3.2) (#25415)
Signed-off-by: Nathanael Liechti <technat@technat.ch>
Co-authored-by: Nathanael Liechti <technat@technat.ch>
2025-12-17 18:48:23 -05:00
Regina Voloshin
88fe638aff chore(deps):bumped gitops-engine to v0.7.1-0.20251217140045-5baed5604d2d with bumped k8s.io/kubernetes to 1.34.2 (#25708)
Signed-off-by: reggie-k <regina.voloshin@codefresh.io>
2025-12-17 11:23:22 -05:00
argo-cd-cherry-pick-bot[bot]
a29703877e test(controller): avoid race in test (cherry-pick #25655 for 3.2) (#25691)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-12-16 21:06:06 +02:00
Regina Voloshin
95e7cdb16f chore(deps): bumped k8s.io/kubernetes v1.34.0 to v1.34.2 - manual cherry-pick of 25682 for 3-2 (#25687)
Signed-off-by: reggie-k <regina.voloshin@codefresh.io>
2025-12-16 18:07:01 +02:00
argo-cd-cherry-pick-bot[bot]
122f4db3db fix(hydrator): appset should preserve annotation when hydration is requested (cherry-pick #25644 for 3.2) (#25654)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Regina Voloshin <regina.voloshin@codefresh.io>
2025-12-16 10:05:22 -05:00
argo-cd-cherry-pick-bot[bot]
2d65b26420 test: fix flaky impersonation test (cherry-pick #25641 for 3.2) (#25688)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-12-16 16:08:22 +02:00
Andreas Lindhé
0ace9bb9a3 docs: sync-waves guide: Use markdown formatting (cherry-pick #25372 for 3.2) (#25659)
Signed-off-by: Andreas Lindhé <7773090+lindhe@users.noreply.github.com>
Co-authored-by: Dov Murik <dov.murik@gmail.com>
2025-12-16 08:07:21 +02:00
argo-cd-cherry-pick-bot[bot]
6398ec3dcb chore: release champ 3.3 (cherry-pick #25202 for 3.2) (#25663)
Signed-off-by: Peter Jiang <35584807+pjiang-dev@users.noreply.github.com>
Co-authored-by: Peter Jiang <35584807+pjiang-dev@users.noreply.github.com>
2025-12-15 17:05:43 +02:00
argo-cd-cherry-pick-bot[bot]
732b16fb2a fix: create read and write secret for same url (cherry-pick #25581 for 3.2) (#25589)
Signed-off-by: emirot <emirot.nolan@gmail.com>
Co-authored-by: Nolan Emirot <emirot.nolan@gmail.com>
2025-12-10 11:17:58 +02:00
Ivan Pedersen
024c7e6020 chore: reference gitops-engine fork with nil pointer fix (#25522)
Signed-off-by: Ivan Pedersen <ivan.pedersen@volvocars.com>
2025-12-04 17:41:59 -05:00
argo-cd-cherry-pick-bot[bot]
26b7fb2c61 docs: add added healthchecks to upgrade docs (cherry-pick #25487 for 3.2) (#25490)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
Co-authored-by: Blake Pettersson <blake.pettersson@gmail.com>
2025-12-03 13:48:57 +01:00
github-actions[bot]
8c4ab63a9c Bump version to 3.2.1 on release-3.2 branch (#25449)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: reggie-k <19544836+reggie-k@users.noreply.github.com>
2025-11-30 13:37:38 +02:00
dudinea
29f869c82f fix: the concurrency issue with git detached processing in Repo Server (#25101) (cherry-pick #25127 for 3.2) (#25448)
Signed-off-by: Eugene Doudine <eugene.doudine@octopus.com>
2025-11-30 13:24:37 +02:00
argo-cd-cherry-pick-bot[bot]
c11e67d4bf docs: Document usage of ?. in notifications triggers and fix examples (#25352) (cherry-pick #25418 for 3.2) (#25421)
Signed-off-by: Eugene Doudine <eugene.doudine@octopus.com>
Co-authored-by: dudinea <eugene.doudine@octopus.com>
2025-11-26 09:41:59 +02:00
Regina Voloshin
a0a18438ab docs: Improve switch to annotation tracking docs, clarifying that a new Git commit may be needed to avoid orphan resources - (cherry-pick #25309 for 3.2) (#25338)
Signed-off-by: Regina Voloshin <regina.voloshin@codefresh.io>
2025-11-19 11:46:19 +01:00
Jaewoo Choi
dabdf39772 fix(ui): overlapping UI elements and add resource units to tooltips (cherry-pick #24717 for 3.2) (#25225)
Signed-off-by: choejwoo <jaewoo45@gmail.com>
2025-11-18 14:17:12 -08:00
argo-cd-cherry-pick-bot[bot]
cd8df1721c fix: Allow the ISVC to be healthy when the Stopped Condition is False (cherry-pick #25312 for 3.2) (#25318)
Signed-off-by: Hannah DeFazio <h2defazio@gmail.com>
Co-authored-by: Hannah DeFazio <h2defazio@gmail.com>
2025-11-17 23:20:41 -10:00
argo-cd-cherry-pick-bot[bot]
27c5065308 fix: revert #24197 (cherry-pick #25294 for 3.2) (#25314)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
Co-authored-by: Blake Pettersson <blake.pettersson@gmail.com>
2025-11-17 23:19:48 -10:00
Peter Jiang
1545390cd8 fix(cherry-pick): bump gitops-engine ssd regression (#25226)
Signed-off-by: Peter Jiang <peterjiang823@gmail.com>
2025-11-08 19:13:02 -05:00
argo-cd-cherry-pick-bot[bot]
7bd02d7f02 fix:(ui) don't render ApplicationSelector unless the panel is showing (cherry-pick #25201 for 3.2) (#25208)
Signed-off-by: Jonathan Winters <wintersjonathan0@gmail.com>
Co-authored-by: jwinters01 <34199886+jwinters01@users.noreply.github.com>
2025-11-06 17:55:27 -05:00
argo-cd-cherry-pick-bot[bot]
86c9994394 docs: update user content for deleting applications (cherry-pick #25124 for 3.2) (#25174)
Signed-off-by: Atif Ali <atali@redhat.com>
Co-authored-by: Atif Ali <atali@redhat.com>
Co-authored-by: Dan Garfield <dan@codefresh.io>
2025-11-05 01:15:04 -10:00
argo-cd-cherry-pick-bot[bot]
6dd5e7a6d2 fix(ui): add null-safe handling for assignedWindows in status panel (cherry-pick #25128 for 3.2) (#25180)
Signed-off-by: choejwoo <jaewoo45@gmail.com>
Co-authored-by: Jaewoo Choi <jaewoo45@gmail.com>
2025-11-05 01:12:47 -10:00
github-actions[bot]
66b2f302d9 Bump version to 3.2.0 on release-3.2 branch (#25160)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-11-04 09:50:07 -05:00
argo-cd-cherry-pick-bot[bot]
a1df57df93 fix: capture stderr in executil RunWithExecRunOpts (cherry-pick #25139 for 3.2) (#25140)
Signed-off-by: Eugene Doudine <eugene.doudine@octopus.com>
Co-authored-by: dudinea <eugene.doudine@octopus.com>
2025-11-02 16:52:25 +01:00
argo-cd-cherry-pick-bot[bot]
8884b27381 fix(ui): Improve Delete Dialog Behaviour when deleting child apps in the app-of-app pattern (cherry-pick #24802 for 3.2) (#25123)
Signed-off-by: Atif Ali <atali@redhat.com>
Co-authored-by: Atif Ali <atali@redhat.com>
2025-11-02 08:16:39 +02:00
rumstead
be8e79eb31 feat(appset): add pprof endpoints (cherry-pick #25044 for 3.2) (#25051)
Signed-off-by: rumstead <37445536+rumstead@users.noreply.github.com>
2025-10-27 10:37:39 -04:00
Alexander Lindeskär
6aa9c20e47 fix: Health status for HTTPRoute with multiple generations (#24958) (cherry-pick #24959 for 3.2) (#25039)
Signed-off-by: Alexander Lindeskär <lindeskar@users.noreply.github.com>
2025-10-23 07:09:06 +03:00
github-actions[bot]
1963030721 Bump version to 3.2.0-rc4 on release-3.2 branch (#25010)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-10-20 17:13:42 -04:00
argo-cd-cherry-pick-bot[bot]
b227ef1559 fix: don't show error about missing appset (cherry-pick #24995 for 3.2) (#24997)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
Co-authored-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2025-10-17 12:28:38 -07:00
argo-cd-cherry-pick-bot[bot]
d1251f407a fix(health): use promotion resource Ready condition regardless of reason (cherry-pick #24971 for 3.2) (#24973)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-10-15 22:28:47 -04:00
argo-cd-cherry-pick-bot[bot]
3db95b1fbe fix: make webhook payload handlers recover from panics (cherry-pick #24862 for 3.2) (#24912)
Signed-off-by: Jakub Ciolek <jakub@ciolek.dev>
Co-authored-by: Jakub Ciolek <66125090+jake-ciolek@users.noreply.github.com>
2025-10-14 14:15:16 -04:00
Carlos R.F.
7628473802 chore(deps): bump redis from 8.2.1 to 8.2.2 to address vuln (release-3.2) (#24891)
Signed-off-by: Carlos Rodriguez-Fernandez <carlosrodrifernandez@gmail.com>
2025-10-09 17:12:30 -04:00
github-actions[bot]
059e8d220e Bump version to 3.2.0-rc3 on release-3.2 branch (#24885)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-10-07 13:33:02 -04:00
argo-cd-cherry-pick-bot[bot]
1ba3929520 fix(server): ensure resource health status is inferred on application retrieval (#24832) (cherry-pick #24851 for 3.2) (#24865)
Signed-off-by: Viacheslav Rianov <rianovviacheslav@gmail.com>
Co-authored-by: Rianov Viacheslav <55545103+vr009@users.noreply.github.com>
2025-10-06 16:57:56 -04:00
Peter Jiang
a42ccaeeca chore: bump gitops engine (#24864)
Signed-off-by: Peter Jiang <peterjiang823@gmail.com>
2025-10-06 14:58:54 -04:00
Peter Jiang
d75bcfd7b2 fix(cherry-pick): server-side diff shows duplicate containerPorts (#24842)
Signed-off-by: Peter Jiang <peterjiang823@gmail.com>
2025-10-03 17:25:06 -04:00
argo-cd-cherry-pick-bot[bot]
35e3897f61 fix(health): incorrect reason in PullRequest script (cherry-pick #24826 for 3.2) (#24828)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-10-02 17:32:19 -04:00
argo-cd-cherry-pick-bot[bot]
dc309cbe0d docs: fix typo in hydrator commit message template documentation (cherry-pick #24822 for 3.2) (#24827)
Signed-off-by: gyu-young-park <gyoue200125@gmail.com>
Co-authored-by: gyu-young-park <44598664+gyu-young-park@users.noreply.github.com>
2025-10-02 17:06:47 -04:00
Alexandre Gaudreault
a1f42488d9 fix: hydration errors not set on applications (#24755) (#24809)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-10-01 11:46:15 -04:00
github-actions[bot]
973eccee0a Bump version to 3.2.0-rc2 on release-3.2 branch (#24797)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-09-30 11:32:15 -04:00
Ville Vesilehto
8f8a1ecacb Merge commit from fork
Fixed a race condition in repository credentials handling by
implementing deep copying of secrets before modification.
This prevents concurrent map read/write panics when multiple
goroutines access the same secret.

The fix ensures thread-safe operations by always operating on
copies rather than shared objects.

Signed-off-by: Ville Vesilehto <ville@vesilehto.fi>
2025-09-30 10:45:59 -04:00
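The commit message above describes the fix pattern: take a deep copy of the shared secret before mutating it, so goroutines reading the cached object never race with a writer. A minimal Go sketch of that pattern, assuming a Kubernetes corev1.Secret; the helper name withOverriddenUsername is hypothetical, not the actual Argo CD code:

```go
package repocreds // hypothetical package, for illustration only

import (
	corev1 "k8s.io/api/core/v1"
)

// withOverriddenUsername illustrates the deep-copy pattern from the commit
// message: mutate a private copy of a cached secret instead of the shared
// object, so a concurrent reader never observes a map write (which would
// panic with "concurrent map read and map write").
func withOverriddenUsername(cached *corev1.Secret, username string) *corev1.Secret {
	s := cached.DeepCopy() // generated deep copy; the shared object stays untouched
	if s.Data == nil {
		s.Data = map[string][]byte{}
	}
	s.Data["username"] = []byte(username) // safe: touches only the copy
	return s
}
```

The key design point is that the copy happens before any write, so correctness does not depend on callers remembering to lock around reads of the cached object.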
Michael Crenshaw
46409ae734 Merge commit from fork
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-09-30 10:45:32 -04:00
Michael Crenshaw
5f5d46c78b Merge commit from fork
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-09-30 10:07:24 -04:00
Michael Crenshaw
722036d447 Merge commit from fork
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-09-30 09:45:21 -04:00
argo-cd-cherry-pick-bot[bot]
001bfda068 fix: #24781 update crossplane healthchecks to V2 version (cherry-pick #24782 for 3.2) (#24784)
Signed-off-by: Jonasz Łasut-Balcerzak <jonasz.lasut@gmail.com>
Co-authored-by: Jonasz Łasut-Balcerzak <jonasz.lasut@gmail.com>
2025-09-30 18:04:50 +05:30
argo-cd-cherry-pick-bot[bot]
4821d71e3d fix(health): typo in PromotionStrategy health.lua (cherry-pick #24726 for 3.2) (#24760)
Co-authored-by: Leonardo Luz Almeida <leoluz@users.noreply.github.com>
2025-09-27 23:45:10 +02:00
Atif Ali
ef8ac49807 fix: Clear ApplicationSet applicationStatus when ProgressiveSync is disabled (cherry-pick #24587 for 3.2 (#24716)
Signed-off-by: Atif Ali <atali@redhat.com>
2025-09-26 10:19:45 -04:00
Alexander Matyushentsev
feab307df3 feat: add status.resourcesCount field to appset and change limit default (#24698) (#24711)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2025-09-24 17:35:03 +05:30
argo-cd-cherry-pick-bot[bot]
087378c669 fix: update ExternalSecret discovery.lua to also include the refreshPolicy (cherry-pick #24707 for 3.2) (#24713)
Signed-off-by: AvivGuiser <avivguiser@gmail.com>
Co-authored-by: AvivGuiser <avivguiser@gmail.com>
2025-09-23 13:38:41 -04:00
Alexander Matyushentsev
f3c8e1d5e3 fix: limit number of resources in appset status (#24690) (#24697)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2025-09-22 14:58:00 -07:00
argo-cd-cherry-pick-bot[bot]
28510cdda6 fix: resolve argocdService initialization issue in notifications CLI (cherry-pick #24664 for 3.2) (#24680)
Signed-off-by: puretension <rlrlfhtm5@gmail.com>
Co-authored-by: DOHYEONG LEE <rlrlfhtm5@gmail.com>
2025-09-22 19:40:48 +02:00
argo-cd-cherry-pick-bot[bot]
6a2df4380a ci(release): only set latest release in github when latest (cherry-pick #24525 for 3.2) (#24686)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2025-09-22 11:46:55 -04:00
Michael Crenshaw
cd87a13a0d chore(ci): update github runners to oci gh arc runners (3.2) (#24632) (#24653)
Signed-off-by: Koray Oksay <koray.oksay@gmail.com>
Co-authored-by: Koray Oksay <koray.oksay@gmail.com>
2025-09-18 20:01:54 -04:00
argo-cd-cherry-pick-bot[bot]
1453367645 fix: Progress Sync Unknown in UI (cherry-pick #24202 for 3.2) (#24641)
Signed-off-by: Atif Ali <atali@redhat.com>
Co-authored-by: Atif Ali <56743004+aali309@users.noreply.github.com>
2025-09-18 14:37:39 -04:00
argo-cd-cherry-pick-bot[bot]
50531e6ab3 fix(oci): loosen up layer restrictions (cherry-pick #24640 for 3.2) (#24648)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
Co-authored-by: Blake Pettersson <blake.pettersson@gmail.com>
2025-09-18 06:25:30 -10:00
argo-cd-cherry-pick-bot[bot]
bf9f927d55 fix: use informer in webhook handler to reduce memory usage (cherry-pick #24622 for 3.2) (#24623)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
Co-authored-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2025-09-17 14:51:08 -07:00
argo-cd-cherry-pick-bot[bot]
ee0de13be4 docs: Delete dangling word in Source Hydrator docs (cherry-pick #24601 for 3.2) (#24604)
Signed-off-by: José Maia <josecbmaia@hotmail.com>
Co-authored-by: José Maia <josecbmaia@hotmail.com>
2025-09-17 11:36:06 -04:00
argo-cd-cherry-pick-bot[bot]
4ac3f920d5 chore: bumps redis version to 8.2.1 (cherry-pick #24523 for 3.2) (#24582)
Signed-off-by: Patroklos Papapetrou <ppapapetrou76@gmail.com>
Co-authored-by: Papapetrou Patroklos <1743100+ppapapetrou76@users.noreply.github.com>
2025-09-16 12:22:10 -04:00
github-actions[bot]
06ef059f9f Bump version to 3.2.0-rc1 on release-3.2 branch (#24581)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-09-16 09:37:35 -04:00
831 changed files with 17077 additions and 153657 deletions

View File

@@ -22,6 +22,5 @@ Target GA date: ___. __, ____
- [ ] At release date, evaluate if any bugs justify delaying the release. If not, cut the release (or delegate this task to an Approver and coordinate timing)
- [ ] If unreleased changes are on the release branch for {current minor version minus 3}, cut a final patch release for that series (or delegate this task to an Approver and coordinate timing)
- [ ] After the release, post in #argo-cd that the {current minor version minus 3} has reached EOL (example: https://cloud-native.slack.com/archives/C01TSERG0KZ/p1667336234059729)
- [ ] Update the last patch release of the EOL minor release series to say that the version is EOL
- [ ] (For the next release champion) Review the [items scheduled for the next release](https://github.com/orgs/argoproj/projects/25). If any item does not have an assignee who can commit to finish the feature, move it to the next release.
- [ ] (For the next release champion) Schedule a time mid-way through the release cycle to review items again.
- [ ] (For the next release champion) Schedule a time mid-way through the release cycle to review items again.

View File

@@ -13,7 +13,7 @@ jobs:
runs-on: ubuntu-22.04
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -38,7 +38,7 @@ jobs:
private-key: ${{ secrets.CHERRYPICK_APP_PRIVATE_KEY }}
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
with:
fetch-depth: 0
token: ${{ steps.generate-token.outputs.token }}
@@ -91,17 +91,11 @@ jobs:
- name: Create Pull Request
run: |
# Create cherry-pick PR
TITLE="${PR_TITLE} (cherry-pick #${{ inputs.pr_number }} for ${{ inputs.version_number }})"
BODY=$(cat <<EOF
Cherry-picked ${PR_TITLE} (#${{ inputs.pr_number }})
${{ steps.cherry-pick.outputs.signoff }}
EOF
)
gh pr create \
--title "$TITLE" \
--body "$BODY" \
--title "${{ inputs.pr_title }} (cherry-pick #${{ inputs.pr_number }} for ${{ inputs.version_number }})" \
--body "Cherry-picked ${{ inputs.pr_title }} (#${{ inputs.pr_number }})
${{ steps.cherry-pick.outputs.signoff }}" \
--base "${{ steps.cherry-pick.outputs.target_branch }}" \
--head "${{ steps.cherry-pick.outputs.branch_name }}"
@@ -109,13 +103,12 @@ jobs:
gh pr comment ${{ inputs.pr_number }} \
--body "🍒 Cherry-pick PR created for ${{ inputs.version_number }}: #$(gh pr list --head ${{ steps.cherry-pick.outputs.branch_name }} --json number --jq '.[0].number')"
env:
PR_TITLE: ${{ inputs.pr_title }}
GH_TOKEN: ${{ steps.generate-token.outputs.token }}
- name: Comment on failure
if: failure()
run: |
gh pr comment ${{ inputs.pr_number }} \
--body "❌ Cherry-pick failed for ${{ inputs.version_number }}. Please check the [workflow logs](https://github.com/argoproj/argo-cd/actions/runs/${{ github.run_id }}) for details."
--body "❌ Cherry-pick failed for ${{ inputs.version_number }}. Please check the workflow logs for details."
env:
GH_TOKEN: ${{ steps.generate-token.outputs.token }}
GH_TOKEN: ${{ steps.generate-token.outputs.token }}

View File

@@ -14,7 +14,7 @@ on:
env:
# Golang version to use across CI steps
# renovate: datasource=golang-version packageName=golang
GOLANG_VERSION: '1.25.3'
GOLANG_VERSION: '1.25.6'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
@@ -31,7 +31,7 @@ jobs:
frontend: ${{ steps.filter.outputs.frontend_any_changed }}
docs: ${{ steps.filter.outputs.docs_any_changed }}
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
- uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
id: filter
with:
@@ -55,7 +55,7 @@ jobs:
- changes
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
- name: Setup Golang
uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
with:
@@ -67,6 +67,7 @@ jobs:
run: |
go mod tidy
git diff --exit-code -- .
build-go:
name: Build & cache Go code
if: ${{ needs.changes.outputs.backend == 'true' }}
@@ -75,13 +76,13 @@ jobs:
- changes
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
- name: Setup Golang
uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Restore go build cache
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
@@ -102,7 +103,7 @@ jobs:
- changes
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
- name: Setup Golang
uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
with:
@@ -111,7 +112,7 @@ jobs:
uses: golangci/golangci-lint-action@4afd733a84b1f43292c63897423277bb7f4313a9 # v8.0.0
with:
# renovate: datasource=go packageName=github.com/golangci/golangci-lint versioning=regex:^v(?<major>\d+)\.(?<minor>\d+)\.(?<patch>\d+)?$
version: v2.5.0
version: v2.4.0
args: --verbose
test-go:
@@ -128,7 +129,7 @@ jobs:
- name: Create checkout directory
run: mkdir -p ~/go/src/github.com/argoproj
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
- name: Create symlink in GOPATH
run: ln -s $(pwd) ~/go/src/github.com/argoproj/argo-cd
- name: Setup Golang
@@ -152,7 +153,7 @@ jobs:
run: |
echo "/usr/local/bin" >> $GITHUB_PATH
- name: Restore go build cache
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
@@ -173,7 +174,7 @@ jobs:
- name: Run all unit tests
run: make test-local
- name: Generate test results artifacts
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: test-results
path: test-results
@@ -192,7 +193,7 @@ jobs:
- name: Create checkout directory
run: mkdir -p ~/go/src/github.com/argoproj
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
- name: Create symlink in GOPATH
run: ln -s $(pwd) ~/go/src/github.com/argoproj/argo-cd
- name: Setup Golang
@@ -216,7 +217,7 @@ jobs:
run: |
echo "/usr/local/bin" >> $GITHUB_PATH
- name: Restore go build cache
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
@@ -237,7 +238,7 @@ jobs:
- name: Run all unit tests
run: make test-race-local
- name: Generate test results artifacts
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: race-results
path: test-results/
@@ -250,7 +251,7 @@ jobs:
- changes
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
- name: Setup Golang
uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
with:
@@ -302,15 +303,15 @@ jobs:
- changes
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
- name: Setup NodeJS
uses: actions/setup-node@2028fbc5c25fe9cf00d9f06a71cc4710d4507903 # v6.0.0
uses: actions/setup-node@a0853c24544627f65ddf259abe73b1d18a591444 # v5.0.0
with:
# renovate: datasource=node-version packageName=node versioning=node
node-version: '22.9.0'
- name: Restore node dependency cache
id: cache-dependencies
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
path: ui/node_modules
key: ${{ runner.os }}-node-dep-v2-${{ hashFiles('**/yarn.lock') }}
@@ -335,7 +336,7 @@ jobs:
shellcheck:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
- run: |
sudo apt-get install shellcheck
shellcheck -e SC2059 -e SC2154 -e SC2034 -e SC2016 -e SC1091 $(find . -type f -name '*.sh' | grep -v './ui/node_modules') | tee sc.log
@@ -354,12 +355,12 @@ jobs:
sonar_secret: ${{ secrets.SONAR_TOKEN }}
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
with:
fetch-depth: 0
- name: Restore node dependency cache
id: cache-dependencies
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
path: ui/node_modules
key: ${{ runner.os }}-node-dep-v2-${{ hashFiles('**/yarn.lock') }}
@@ -367,12 +368,12 @@ jobs:
run: |
rm -rf ui/node_modules/argo-ui/node_modules
- name: Get e2e code coverage
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
with:
name: e2e-code-coverage
path: e2e-code-coverage
- name: Get unit test code coverage
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
with:
name: test-results
path: test-results
@@ -413,14 +414,14 @@ jobs:
# latest: true means that this version must upload the coverage report to codecov.io
# We designate the latest version because we only collect code coverage for that version.
k3s:
- version: v1.33.1
- version: v1.34.2
latest: true
- version: v1.33.1
latest: false
- version: v1.32.1
latest: false
- version: v1.31.0
latest: false
- version: v1.30.4
latest: false
needs:
- build-go
- changes
@@ -446,7 +447,7 @@ jobs:
swap-storage: false
tool-cache: false
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
- name: Setup Golang
uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
with:
@@ -467,7 +468,7 @@ jobs:
sudo chmod go-r $HOME/.kube/config
kubectl version
- name: Restore go build cache
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
@@ -495,7 +496,7 @@ jobs:
run: |
docker pull ghcr.io/dexidp/dex:v2.43.0
docker pull argoproj/argo-cd-ci-builder:v1.0.0
docker pull redis:8.2.1-alpine
docker pull redis:8.2.2-alpine
- name: Create target directory for binaries in the build-process
run: |
mkdir -p dist
@@ -525,13 +526,13 @@ jobs:
goreman run stop-all || echo "goreman trouble"
sleep 30
- name: Upload e2e coverage report
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: e2e-code-coverage
path: /tmp/coverage
if: ${{ matrix.k3s.latest }}
- name: Upload e2e-server logs
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: e2e-server-k8s${{ matrix.k3s.version }}.log
path: /tmp/e2e-server.log

View File

@@ -29,7 +29,7 @@ jobs:
runs-on: ubuntu-22.04
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
# Use correct go version. https://github.com/github/codeql-action/issues/1842#issuecomment-1704398087
- name: Setup Golang

View File

@@ -56,14 +56,14 @@ jobs:
image-digest: ${{ steps.image.outputs.digest }}
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
if: ${{ github.ref_type == 'tag'}}
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
if: ${{ github.ref_type != 'tag'}}
- name: Setup Golang
@@ -73,7 +73,7 @@ jobs:
cache: false
- name: Install cosign
uses: sigstore/cosign-installer@faadad0cce49287aee09b3a48701e75088a2c6ad # v4.0.0
uses: sigstore/cosign-installer@d7543c93d881b35a8faa02e8e3605f69b7a1ce62 # v3.10.0
- uses: docker/setup-qemu-action@29109295f81e9208d7d86ff1c6c12d2833863392 # v3.6.0
- uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
@@ -103,7 +103,7 @@ jobs:
echo 'EOF' >> $GITHUB_ENV
- name: Login to Quay.io
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
with:
registry: quay.io
username: ${{ secrets.quay_username }}
@@ -111,7 +111,7 @@ jobs:
if: ${{ inputs.quay_image_name && inputs.push }}
- name: Login to GitHub Container Registry
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
with:
registry: ghcr.io
username: ${{ secrets.ghcr_username }}
@@ -119,7 +119,7 @@ jobs:
if: ${{ inputs.ghcr_image_name && inputs.push }}
- name: Login to dockerhub Container Registry
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
with:
username: ${{ secrets.docker_username }}
password: ${{ secrets.docker_password }}

View File

@@ -25,7 +25,7 @@ jobs:
image-tag: ${{ steps.image.outputs.tag}}
platforms: ${{ steps.platforms.outputs.platforms }}
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
- name: Set image tag for ghcr
run: echo "tag=$(cat ./VERSION)-${GITHUB_SHA::8}" >> $GITHUB_OUTPUT
@@ -53,7 +53,7 @@ jobs:
with:
# Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
# renovate: datasource=golang-version packageName=golang
go-version: 1.25.3
go-version: 1.25.6
platforms: ${{ needs.set-vars.outputs.platforms }}
push: false
@@ -70,7 +70,7 @@ jobs:
ghcr_image_name: ghcr.io/argoproj/argo-cd/argocd:${{ needs.set-vars.outputs.image-tag }}
# Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
# renovate: datasource=golang-version packageName=golang
go-version: 1.25.3
go-version: 1.25.6
platforms: ${{ needs.set-vars.outputs.platforms }}
push: true
secrets:
@@ -106,7 +106,7 @@ jobs:
if: ${{ github.repository == 'argoproj/argo-cd' && github.event_name == 'push' }}
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
- run: git clone "https://$TOKEN@github.com/argoproj/argoproj-deployments"
env:
TOKEN: ${{ secrets.TOKEN }}

View File

@@ -23,7 +23,7 @@ jobs:
runs-on: ubuntu-22.04
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -11,7 +11,7 @@ permissions: {}
env:
# renovate: datasource=golang-version packageName=golang
GOLANG_VERSION: '1.25.3' # Note: go-version must also be set in job argocd-image.with.go-version
GOLANG_VERSION: '1.25.6' # Note: go-version must also be set in job argocd-image.with.go-version
jobs:
argocd-image:
@@ -25,7 +25,7 @@ jobs:
quay_image_name: quay.io/argoproj/argocd:${{ github.ref_name }}
# Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
# renovate: datasource=golang-version packageName=golang
go-version: 1.25.3
go-version: 1.25.6
platforms: linux/amd64,linux/arm64,linux/s390x,linux/ppc64le
push: true
secrets:
@@ -41,7 +41,7 @@ jobs:
is_latest_release: ${{ steps.var.outputs.is_latest_release }}
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
@@ -99,7 +99,7 @@ jobs:
hashes: ${{ steps.hash.outputs.hashes }}
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
@@ -185,7 +185,7 @@ jobs:
runs-on: ubuntu-22.04
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
@@ -236,7 +236,7 @@ jobs:
echo "hashes=$(sha256sum /tmp/sbom.tar.gz | base64 -w0)" >> "$GITHUB_OUTPUT"
- name: Upload SBOM
uses: softprops/action-gh-release@6da8fa9354ddfdc4aeace5fc48d7f679b5214090 # v2.4.1
uses: softprops/action-gh-release@6cbd405e2c4e67a21c47fa9e383d020e4e28b836 # v2.3.3
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
@@ -272,7 +272,7 @@ jobs:
TAG_STABLE: ${{ needs.setup-variables.outputs.is_latest_release }}
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -10,7 +10,6 @@ permissions:
jobs:
renovate:
runs-on: ubuntu-latest
if: github.repository == 'argoproj/argo-cd'
steps:
- name: Get token
id: get_token
@@ -20,17 +19,17 @@ jobs:
private-key: ${{ secrets.RENOVATE_APP_PRIVATE_KEY }}
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # 5.0.0
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # 6.0.1
# Some codegen commands require Go to be setup
- name: Setup Golang
uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
uses: actions/setup-go@7a3fe6cf4cb3a834922a1244abfce67bcef6a0c5 # v6.2.0
with:
# renovate: datasource=golang-version packageName=golang
go-version: 1.25.3
go-version: 1.25.6
- name: Self-hosted Renovate
uses: renovatebot/github-action@ea850436a5fe75c0925d583c7a02c60a5865461d #43.0.20
uses: renovatebot/github-action@f8af9272cd94a4637c29f60dea8731afd3134473 #43.0.12
with:
configurationFile: .github/configs/renovate-config.js
token: '${{ steps.get_token.outputs.token }}'

View File

@@ -30,12 +30,12 @@ jobs:
steps:
- name: "Checkout code"
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
with:
persist-credentials: false
- name: "Run analysis"
uses: ossf/scorecard-action@4eaacf0543bb3f2c246792bd56e8cdeffafb205a # v2.4.3
uses: ossf/scorecard-action@05b42c624433fc40578a4040d5cf5e36ddca8cde # v2.4.2
with:
results_file: results.sarif
results_format: sarif
@@ -54,7 +54,7 @@ jobs:
# Upload the results as artifacts (optional). Commenting out will disable uploads of run results in SARIF
# format to the repository Actions tab.
- name: "Upload artifact"
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: SARIF file
path: results.sarif

View File

@@ -17,7 +17,7 @@ jobs:
runs-on: ubuntu-22.04
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
with:
token: ${{ secrets.GITHUB_TOKEN }}
- name: Build reports

View File

@@ -22,7 +22,6 @@ linters:
- govet
- importas
- misspell
- noctx
- perfsprint
- revive
- staticcheck

View File

@@ -1,5 +1,11 @@
dir: '{{.InterfaceDir}}/mocks'
structname: '{{.InterfaceName}}'
filename: '{{.InterfaceName}}.go'
pkgname: mocks
template-data:
unroll-variadic: true
packages:
github.com/argoproj/argo-cd/v3/applicationset/generators:
interfaces:
@@ -8,10 +14,11 @@ packages:
interfaces:
Repos: {}
github.com/argoproj/argo-cd/v3/applicationset/services/scm_provider:
config:
dir: applicationset/services/scm_provider/aws_codecommit/mocks
interfaces:
AWSCodeCommitClient: {}
AWSTaggingClient: {}
AzureDevOpsClientFactory: {}
github.com/argoproj/argo-cd/v3/applicationset/utils:
interfaces:
Renderer: {}
@@ -40,8 +47,8 @@ packages:
AppProjectInterface: {}
github.com/argoproj/argo-cd/v3/reposerver/apiclient:
interfaces:
RepoServerServiceClient: {}
RepoServerService_GenerateManifestWithFilesClient: {}
RepoServerServiceClient: {}
github.com/argoproj/argo-cd/v3/server/application:
interfaces:
Broadcaster: {}
@@ -56,37 +63,26 @@ packages:
github.com/argoproj/argo-cd/v3/util/db:
interfaces:
ArgoDB: {}
RepoCredsDB: {}
github.com/argoproj/argo-cd/v3/util/git:
interfaces:
Client: {}
github.com/argoproj/argo-cd/v3/util/helm:
interfaces:
Client: {}
github.com/argoproj/argo-cd/v3/util/oci:
interfaces:
Client: {}
github.com/argoproj/argo-cd/v3/util/io:
interfaces:
TempPaths: {}
github.com/argoproj/argo-cd/v3/util/notification/argocd:
interfaces:
Service: {}
github.com/argoproj/argo-cd/v3/util/oci:
interfaces:
Client: {}
github.com/argoproj/argo-cd/v3/util/workloadidentity:
interfaces:
TokenProvider: {}
github.com/argoproj/gitops-engine/pkg/cache:
interfaces:
ClusterCache: {}
github.com/argoproj/gitops-engine/pkg/diff:
interfaces:
ServerSideDryRunner: {}
github.com/microsoft/azure-devops-go-api/azuredevops/v7/git:
config:
dir: applicationset/services/scm_provider/azure_devops/git/mocks
interfaces:
Client: {}
pkgname: mocks
structname: '{{.InterfaceName}}'
template-data:
unroll-variadic: true

View File

@@ -16,5 +16,4 @@
# CLI
/cmd/argocd/** @argoproj/argocd-approvers @argoproj/argocd-approvers-cli
/cmd/main.go @argoproj/argocd-approvers @argoproj/argocd-approvers-cli
# Also include @argoproj/argocd-approvers-docs to avoid requiring CLI approvers for docs-only PRs.
/docs/operator-manual/ @argoproj/argocd-approvers @argoproj/argocd-approvers-docs @argoproj/argocd-approvers-cli
/docs/operator-manual/ @argoproj/argocd-approvers @argoproj/argocd-approvers-cli

View File

@@ -1,10 +1,10 @@
ARG BASE_IMAGE=docker.io/library/ubuntu:25.04@sha256:27771fb7b40a58237c98e8d3e6b9ecdd9289cec69a857fccfb85ff36294dac20
ARG BASE_IMAGE=docker.io/library/ubuntu:25.04@sha256:10bb10bb062de665d4dc3e0ea36715270ead632cfcb74d08ca2273712a0dfb42
####################################################################################################
# Builder image
# Initial stage which pulls prepares build dependencies and CLI tooling we need for our final image
# Also used as the image in CI jobs so needs all dependencies
####################################################################################################
FROM docker.io/library/golang:1.25.3@sha256:6bac879c5b77e0fc9c556a5ed8920e89dab1709bd510a854903509c828f67f96 AS builder
FROM docker.io/library/golang:1.25.6@sha256:fc24d3881a021e7b968a4610fc024fba749f98fe5c07d4f28e6cfa14dc65a84c AS builder
WORKDIR /tmp
@@ -103,13 +103,11 @@ RUN HOST_ARCH=$TARGETARCH NODE_ENV='production' NODE_ONLINE_ENV='online' NODE_OP
####################################################################################################
# Argo CD Build stage which performs the actual build of Argo CD binaries
####################################################################################################
FROM --platform=$BUILDPLATFORM docker.io/library/golang:1.25.3@sha256:6bac879c5b77e0fc9c556a5ed8920e89dab1709bd510a854903509c828f67f96 AS argocd-build
FROM --platform=$BUILDPLATFORM docker.io/library/golang:1.25.6@sha256:fc24d3881a021e7b968a4610fc024fba749f98fe5c07d4f28e6cfa14dc65a84c AS argocd-build
WORKDIR /go/src/github.com/argoproj/argo-cd
COPY go.* ./
RUN mkdir -p gitops-engine
COPY gitops-engine/go.* ./gitops-engine
RUN go mod download
# Perform the build

View File

@@ -1,4 +1,4 @@
FROM docker.io/library/golang:1.25.3@sha256:6bac879c5b77e0fc9c556a5ed8920e89dab1709bd510a854903509c828f67f96
FROM docker.io/library/golang:1.25.6@sha256:fc24d3881a021e7b968a4610fc024fba749f98fe5c07d4f28e6cfa14dc65a84c
ENV DEBIAN_FRONTEND=noninteractive

View File

@@ -261,12 +261,8 @@ clidocsgen:
actionsdocsgen:
hack/generate-actions-list.sh
.PHONY: resourceiconsgen
resourceiconsgen:
hack/generate-icons-typescript.sh
.PHONY: codegen-local
codegen-local: mod-vendor-local mockgen gogen protogen clientgen openapigen clidocsgen actionsdocsgen resourceiconsgen manifests-local notification-docs notification-catalog
codegen-local: mod-vendor-local mockgen gogen protogen clientgen openapigen clidocsgen actionsdocsgen manifests-local notification-docs notification-catalog
rm -rf vendor/
.PHONY: codegen-local-fast

View File

@@ -9,6 +9,6 @@ ui: sh -c 'cd ui && ${ARGOCD_E2E_YARN_CMD:-yarn} start'
git-server: test/fixture/testrepos/start-git.sh
helm-registry: test/fixture/testrepos/start-helm-registry.sh
oci-registry: test/fixture/testrepos/start-authenticated-helm-registry.sh
dev-mounter: [ "$ARGOCD_E2E_TEST" != "true" ] && go run hack/dev-mounter/main.go --configmap argocd-ssh-known-hosts-cm=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} --configmap argocd-tls-certs-cm=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} --configmap argocd-gpg-keys-cm=${ARGOCD_GPG_DATA_PATH:-/tmp/argocd-local/gpg/source}
dev-mounter: [[ "$ARGOCD_E2E_TEST" != "true" ]] && go run hack/dev-mounter/main.go --configmap argocd-ssh-known-hosts-cm=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} --configmap argocd-tls-certs-cm=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} --configmap argocd-gpg-keys-cm=${ARGOCD_GPG_DATA_PATH:-/tmp/argocd-local/gpg/source}
applicationset-controller: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/applicationset-controller} FORCE_LOG_COLORS=4 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-applicationset-controller $COMMAND --loglevel debug --metrics-addr localhost:12345 --probe-addr localhost:12346 --argocd-repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081}"
notification: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/notification} FORCE_LOG_COLORS=4 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_BINARY_NAME=argocd-notifications $COMMAND --loglevel debug --application-namespaces=${ARGOCD_APPLICATION_NAMESPACES:-''} --self-service-notification-enabled=${ARGOCD_NOTIFICATION_CONTROLLER_SELF_SERVICE_NOTIFICATION_ENABLED:-'false'}"

View File

@@ -3,9 +3,9 @@ header:
expiration-date: '2024-10-31T00:00:00.000Z' # One year from initial release.
last-updated: '2023-10-27'
last-reviewed: '2023-10-27'
commit-hash: 06ef059f9fc7cf9da2dfaef2a505ee1e3c693485
commit-hash: 320f46f06beaf75f9c406e3a47e2e09d36e2047a
project-url: https://github.com/argoproj/argo-cd
project-release: v3.3.0
project-release: v3.2.0
changelog: https://github.com/argoproj/argo-cd/releases
license: https://github.com/argoproj/argo-cd/blob/master/LICENSE
project-lifecycle:

View File

@@ -63,7 +63,6 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Camptocamp](https://camptocamp.com)
1. [Candis](https://www.candis.io)
1. [Capital One](https://www.capitalone.com)
1. [Capptain LTD](https://capptain.co/)
1. [CARFAX Europe](https://www.carfax.eu)
1. [CARFAX](https://www.carfax.com)
1. [Carrefour Group](https://www.carrefour.com)
@@ -310,7 +309,7 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Relex Solutions](https://www.relexsolutions.com/)
1. [RightRev](https://rightrev.com/)
1. [Rijkswaterstaat](https://www.rijkswaterstaat.nl/en)
1. Rise
1. [Rise](https://www.risecard.eu/)
1. [Riskified](https://www.riskified.com/)
1. [Robotinfra](https://www.robotinfra.com)
1. [Rocket.Chat](https://rocket.chat)

View File

@@ -1 +1 @@
3.3.0
3.2.7

View File

@@ -37,7 +37,6 @@ import (
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/record"
"k8s.io/client-go/util/retry"
"k8s.io/utils/ptr"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/builder"
"sigs.k8s.io/controller-runtime/pkg/client"
@@ -47,8 +46,6 @@ import (
"sigs.k8s.io/controller-runtime/pkg/handler"
"sigs.k8s.io/controller-runtime/pkg/predicate"
"github.com/argoproj/gitops-engine/pkg/health"
"github.com/argoproj/argo-cd/v3/applicationset/controllers/template"
"github.com/argoproj/argo-cd/v3/applicationset/generators"
"github.com/argoproj/argo-cd/v3/applicationset/metrics"
@@ -78,6 +75,7 @@ const (
var defaultPreservedAnnotations = []string{
NotifiedAnnotationKey,
argov1alpha1.AnnotationKeyRefresh,
argov1alpha1.AnnotationKeyHydrate,
}
type deleteInOrder struct {
@@ -230,6 +228,8 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
return ctrl.Result{}, fmt.Errorf("failed to get update resources status for application set: %w", err)
}
// appMap is a name->app collection of Applications in this ApplicationSet.
appMap := map[string]argov1alpha1.Application{}
// appSyncMap tracks which apps will be synced during this reconciliation.
appSyncMap := map[string]bool{}
@@ -243,7 +243,12 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
return ctrl.Result{}, fmt.Errorf("failed to clear previous AppSet application statuses for %v: %w", applicationSetInfo.Name, err)
}
} else if isRollingSyncStrategy(&applicationSetInfo) {
appSyncMap, err = r.performProgressiveSyncs(ctx, logCtx, applicationSetInfo, currentApplications, generatedApplications)
// The appset uses progressive sync with `RollingSync` strategy
for _, app := range currentApplications {
appMap[app.Name] = app
}
appSyncMap, err = r.performProgressiveSyncs(ctx, logCtx, applicationSetInfo, currentApplications, generatedApplications, appMap)
if err != nil {
return ctrl.Result{}, fmt.Errorf("failed to perform progressive sync reconciliation for application set: %w", err)
}
@@ -260,6 +265,13 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
}
}
var validApps []argov1alpha1.Application
for i := range generatedApplications {
if validateErrors[generatedApplications[i].QualifiedName()] == nil {
validApps = append(validApps, generatedApplications[i])
}
}
if len(validateErrors) > 0 {
errorApps := make([]string, 0, len(validateErrors))
for key := range validateErrors {
@@ -287,25 +299,13 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
)
}
var validApps []argov1alpha1.Application
for i := range generatedApplications {
if validateErrors[generatedApplications[i].QualifiedName()] == nil {
validApps = append(validApps, generatedApplications[i])
}
}
if r.EnableProgressiveSyncs {
// trigger appropriate application syncs if RollingSync strategy is enabled
if progressiveSyncsRollingSyncStrategyEnabled(&applicationSetInfo) {
validApps = r.syncDesiredApplications(logCtx, &applicationSetInfo, appSyncMap, validApps)
validApps = r.syncValidApplications(logCtx, &applicationSetInfo, appSyncMap, appMap, validApps)
}
}
// Sort apps by name so they are updated/created in the same order, and condition errors are the same
sort.Slice(validApps, func(i, j int) bool {
return validApps[i].Name < validApps[j].Name
})
if utils.DefaultPolicy(applicationSetInfo.Spec.SyncPolicy, r.Policy, r.EnablePolicyOverride).AllowUpdate() {
err = r.createOrUpdateInCluster(ctx, logCtx, applicationSetInfo, validApps)
if err != nil {
@@ -337,7 +337,6 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
}
if utils.DefaultPolicy(applicationSetInfo.Spec.SyncPolicy, r.Policy, r.EnablePolicyOverride).AllowDelete() {
// Delete the generatedApplications instead of the validApps because we want to be able to delete applications in error/invalid state
err = r.deleteInCluster(ctx, logCtx, applicationSetInfo, generatedApplications)
if err != nil {
_ = r.setApplicationSetStatusCondition(ctx,
@@ -653,8 +652,9 @@ func (r *ApplicationSetReconciler) SetupWithManager(mgr ctrl.Manager, enableProg
Watches(
&corev1.Secret{},
&clusterSecretEventHandler{
Client: mgr.GetClient(),
Log: log.WithField("type", "createSecretEventHandler"),
Client: mgr.GetClient(),
Log: log.WithField("type", "createSecretEventHandler"),
ApplicationSetNamespaces: r.ApplicationSetNamespaces,
}).
Complete(r)
}
@@ -944,7 +944,7 @@ func (r *ApplicationSetReconciler) removeOwnerReferencesOnDeleteAppSet(ctx conte
return nil
}
func (r *ApplicationSetReconciler) performProgressiveSyncs(ctx context.Context, logCtx *log.Entry, appset argov1alpha1.ApplicationSet, applications []argov1alpha1.Application, desiredApplications []argov1alpha1.Application) (map[string]bool, error) {
func (r *ApplicationSetReconciler) performProgressiveSyncs(ctx context.Context, logCtx *log.Entry, appset argov1alpha1.ApplicationSet, applications []argov1alpha1.Application, desiredApplications []argov1alpha1.Application, appMap map[string]argov1alpha1.Application) (map[string]bool, error) {
appDependencyList, appStepMap := r.buildAppDependencyList(logCtx, appset, desiredApplications)
_, err := r.updateApplicationSetApplicationStatus(ctx, logCtx, &appset, applications, appStepMap)
@@ -953,21 +953,21 @@ func (r *ApplicationSetReconciler) performProgressiveSyncs(ctx context.Context,
}
logCtx.Infof("ApplicationSet %v step list:", appset.Name)
for stepIndex, applicationNames := range appDependencyList {
logCtx.Infof("step %v: %+v", stepIndex+1, applicationNames)
for i, step := range appDependencyList {
logCtx.Infof("step %v: %+v", i+1, step)
}
appsToSync := r.getAppsToSync(appset, appDependencyList, applications)
logCtx.Infof("Application allowed to sync before maxUpdate?: %+v", appsToSync)
appSyncMap := r.buildAppSyncMap(appset, appDependencyList, appMap)
logCtx.Infof("Application allowed to sync before maxUpdate?: %+v", appSyncMap)
_, err = r.updateApplicationSetApplicationStatusProgress(ctx, logCtx, &appset, appsToSync, appStepMap)
_, err = r.updateApplicationSetApplicationStatusProgress(ctx, logCtx, &appset, appSyncMap, appStepMap)
if err != nil {
return nil, fmt.Errorf("failed to update applicationset application status progress: %w", err)
}
_ = r.updateApplicationSetApplicationStatusConditions(ctx, &appset)
return appsToSync, nil
return appSyncMap, nil
}
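Condensed, a RollingSync reconcile pass runs through these calls (method names as in this file; the numbering is ours):
// 1. buildAppDependencyList: group the desired Applications into RollingSync steps
// 2. updateApplicationSetApplicationStatus: refresh each app's recorded status ("Waiting", "Pending", ...)
// 3. buildAppSyncMap: compute map[appName]bool of apps whose step may sync this pass
// 4. updateApplicationSetApplicationStatusProgress: promote "Waiting" apps to "Pending"
//    within each step's maxUpdate budget
// 5. updateApplicationSetApplicationStatusConditions: roll per-app statuses up into
//    ApplicationSet conditions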
// this list tracks which Applications belong to each RollingUpdate step
@@ -1041,53 +1041,55 @@ func labelMatchedExpression(logCtx *log.Entry, val string, matchExpression argov
return valueMatched
}
// getAppsToSync returns a Map of Applications that should be synced in this progressive sync wave
func (r *ApplicationSetReconciler) getAppsToSync(applicationSet argov1alpha1.ApplicationSet, appDependencyList [][]string, currentApplications []argov1alpha1.Application) map[string]bool {
// this map is used to determine which stages of Applications are ready to be updated in the reconciler loop
func (r *ApplicationSetReconciler) buildAppSyncMap(applicationSet argov1alpha1.ApplicationSet, appDependencyList [][]string, appMap map[string]argov1alpha1.Application) map[string]bool {
appSyncMap := map[string]bool{}
currentAppsMap := map[string]bool{}
syncEnabled := true
for _, app := range currentApplications {
currentAppsMap[app.Name] = true
}
// healthy stages and the first non-healthy stage should have sync enabled
// every stage after that should have sync disabled
for stepIndex := range appDependencyList {
for i := range appDependencyList {
// set the syncEnabled boolean for every Application in the current step
for _, appName := range appDependencyList[stepIndex] {
appSyncMap[appName] = true
for _, appName := range appDependencyList[i] {
appSyncMap[appName] = syncEnabled
}
// evaluate if we need to sync next waves
syncNextWave := true
for _, appName := range appDependencyList[stepIndex] {
// Check whether the application has been created and is managed by this AppSet; if it is not created yet, we cannot progress
if _, ok := currentAppsMap[appName]; !ok {
syncNextWave = false
break
}
// detect if we need to halt before progressing to the next step
for _, appName := range appDependencyList[i] {
idx := findApplicationStatusIndex(applicationSet.Status.ApplicationStatus, appName)
if idx == -1 {
// No Application status found, likely because the Application is being newly created
// This means this wave is not yet completed
syncNextWave = false
// no Application status found, likely because the Application is being newly created
syncEnabled = false
break
}
appStatus := applicationSet.Status.ApplicationStatus[idx]
if appStatus.Status != argov1alpha1.ProgressiveSyncHealthy {
// At least one application in this wave is not yet healthy. We cannot proceed to the next wave
syncNextWave = false
app, ok := appMap[appName]
if !ok {
// application name not found in the list of applications managed by this ApplicationSet, maybe because it's being deleted
syncEnabled = false
break
}
syncEnabled = appSyncEnabledForNextStep(&applicationSet, app, appStatus)
if !syncEnabled {
break
}
}
if !syncNextWave {
break
}
}
return appSyncMap
}
func appSyncEnabledForNextStep(appset *argov1alpha1.ApplicationSet, app argov1alpha1.Application, appStatus argov1alpha1.ApplicationSetApplicationStatus) bool {
if progressiveSyncsRollingSyncStrategyEnabled(appset) {
// we still need to complete the current step if the Application is not yet Healthy or there are still pending Application changes
return isApplicationHealthy(app) && appStatus.Status == "Healthy"
}
return true
}
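A minimal standalone sketch of the gating rule above, with hypothetical step and health data (buildSyncGate, steps, and healthy are illustrative names, not controller code): every app in a step inherits the current gate value, and the gate closes at the first step that is not fully healthy, so that step still syncs while later steps wait.
package main

import "fmt"

// buildSyncGate mirrors buildAppSyncMap's loop: mark every app in the step with
// the current gate value, then close the gate if any app in the step is unhealthy.
func buildSyncGate(steps [][]string, healthy map[string]bool) map[string]bool {
	gate := map[string]bool{}
	open := true
	for _, step := range steps {
		for _, app := range step {
			gate[app] = open
		}
		for _, app := range step {
			if !healthy[app] {
				open = false
				break
			}
		}
	}
	return gate
}

func main() {
	steps := [][]string{{"dev"}, {"staging"}, {"prod"}}
	healthy := map[string]bool{"dev": true, "staging": false}
	fmt.Println(buildSyncGate(steps, healthy))
	// map[dev:true prod:false staging:true]: prod waits until staging is healthy
}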
func isRollingSyncStrategy(appset *argov1alpha1.ApplicationSet) bool {
// It's only RollingSync if the type specifically sets it
return appset.Spec.Strategy != nil && appset.Spec.Strategy.Type == "RollingSync" && appset.Spec.Strategy.RollingSync != nil
@@ -1098,21 +1100,29 @@ func progressiveSyncsRollingSyncStrategyEnabled(appset *argov1alpha1.Application
return isRollingSyncStrategy(appset) && len(appset.Spec.Strategy.RollingSync.Steps) > 0
}
func isApplicationWithError(app argov1alpha1.Application) bool {
for _, condition := range app.Status.Conditions {
if condition.Type == argov1alpha1.ApplicationConditionInvalidSpecError {
return true
}
if condition.Type == argov1alpha1.ApplicationConditionUnknownError {
return true
}
func isProgressiveSyncDeletionOrderReversed(appset *argov1alpha1.ApplicationSet) bool {
// When progressive sync is enabled + deletionOrder is set to Reverse (case-insensitive)
return progressiveSyncsRollingSyncStrategyEnabled(appset) && strings.EqualFold(appset.Spec.Strategy.DeletionOrder, ReverseDeletionOrder)
}
func isApplicationHealthy(app argov1alpha1.Application) bool {
healthStatusString, syncStatusString, operationPhaseString := statusStrings(app)
if healthStatusString == "Healthy" && syncStatusString != "OutOfSync" && (operationPhaseString == "Succeeded" || operationPhaseString == "") {
return true
}
return false
}
func isProgressiveSyncDeletionOrderReversed(appset *argov1alpha1.ApplicationSet) bool {
// When progressive sync is enabled + deletionOrder is set to Reverse (case-insensitive)
return progressiveSyncsRollingSyncStrategyEnabled(appset) && strings.EqualFold(appset.Spec.Strategy.DeletionOrder, ReverseDeletionOrder)
func statusStrings(app argov1alpha1.Application) (string, string, string) {
healthStatusString := string(app.Status.Health.Status)
syncStatusString := string(app.Status.Sync.Status)
operationPhaseString := ""
if app.Status.OperationState != nil {
operationPhaseString = string(app.Status.OperationState.Phase)
}
return healthStatusString, syncStatusString, operationPhaseString
}
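For reference, isApplicationHealthy requires all three strings to line up. A hedged sketch inside this package, using imports already present in the file (health from gitops-engine, argov1alpha1 for the API types):
var app argov1alpha1.Application
app.Status.Health.Status = health.HealthStatusHealthy      // "Healthy"
app.Status.Sync.Status = argov1alpha1.SyncStatusCodeSynced // "Synced", i.e. not "OutOfSync"
// OperationState is left nil, so operationPhaseString is "", which is also accepted
// (covers apps that have never run an operation)
_ = isApplicationHealthy(app) // true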
func getAppStep(appName string, appStepMap map[string]int) int {
@@ -1131,112 +1141,81 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx con
appStatuses := make([]argov1alpha1.ApplicationSetApplicationStatus, 0, len(applications))
for _, app := range applications {
appHealthStatus := app.Status.Health.Status
appSyncStatus := app.Status.Sync.Status
healthStatusString, syncStatusString, operationPhaseString := statusStrings(app)
idx := findApplicationStatusIndex(applicationSet.Status.ApplicationStatus, app.Name)
currentAppStatus := argov1alpha1.ApplicationSetApplicationStatus{}
idx := findApplicationStatusIndex(applicationSet.Status.ApplicationStatus, app.Name)
if idx == -1 {
// AppStatus not found, set default status of "Waiting"
currentAppStatus = argov1alpha1.ApplicationSetApplicationStatus{
Application: app.Name,
TargetRevisions: app.Status.GetRevisions(),
LastTransitionTime: &now,
Message: "No Application status found, defaulting status to Waiting",
Status: argov1alpha1.ProgressiveSyncWaiting,
Message: "No Application status found, defaulting status to Waiting.",
Status: "Waiting",
Step: strconv.Itoa(getAppStep(app.Name, appStepMap)),
}
} else {
// we have an existing AppStatus
currentAppStatus = applicationSet.Status.ApplicationStatus[idx]
if !reflect.DeepEqual(currentAppStatus.TargetRevisions, app.Status.GetRevisions()) {
currentAppStatus.Message = "Application has pending changes, setting status to Waiting."
}
}
statusLogCtx := logCtx.WithFields(log.Fields{
"app.name": currentAppStatus.Application,
"app.health": appHealthStatus,
"app.sync": appSyncStatus,
"status.status": currentAppStatus.Status,
"status.message": currentAppStatus.Message,
"status.step": currentAppStatus.Step,
"status.targetRevisions": strings.Join(currentAppStatus.TargetRevisions, ","),
})
newAppStatus := currentAppStatus.DeepCopy()
newAppStatus.Step = strconv.Itoa(getAppStep(newAppStatus.Application, appStepMap))
if !reflect.DeepEqual(currentAppStatus.TargetRevisions, app.Status.GetRevisions()) {
// A new version is available in the application and we need to re-sync the application
newAppStatus.TargetRevisions = app.Status.GetRevisions()
newAppStatus.Message = "Application has pending changes, setting status to Waiting"
newAppStatus.Status = argov1alpha1.ProgressiveSyncWaiting
newAppStatus.LastTransitionTime = &now
currentAppStatus.TargetRevisions = app.Status.GetRevisions()
currentAppStatus.Status = "Waiting"
currentAppStatus.LastTransitionTime = &now
currentAppStatus.Step = strconv.Itoa(getAppStep(currentAppStatus.Application, appStepMap))
}
if newAppStatus.Status == argov1alpha1.ProgressiveSyncWaiting {
// App has changed to Waiting because the TargetRevisions changed or it is a newly selected app
// This does not mean we should always sync the app. The app may not be OutOfSync
// and may not require a sync if it does not have differences.
if appSyncStatus == argov1alpha1.SyncStatusCodeSynced {
if app.Status.Health.Status == health.HealthStatusHealthy {
newAppStatus.LastTransitionTime = &now
newAppStatus.Status = argov1alpha1.ProgressiveSyncHealthy
newAppStatus.Message = "Application resource has synced, updating status to Healthy"
} else {
newAppStatus.LastTransitionTime = &now
newAppStatus.Status = argov1alpha1.ProgressiveSyncProgressing
newAppStatus.Message = "Application resource has synced, updating status to Progressing"
}
}
} else {
// The target revision is the same, so we need to evaluate the current revision progress
if currentAppStatus.Status == argov1alpha1.ProgressiveSyncPending {
// No need to evaluate status health further if the application did not change since our last transition
if app.Status.ReconciledAt == nil || (newAppStatus.LastTransitionTime != nil && app.Status.ReconciledAt.After(newAppStatus.LastTransitionTime.Time)) {
// Validate that at least one sync was triggered after the pending transition time
if app.Status.OperationState != nil && app.Status.OperationState.StartedAt.After(currentAppStatus.LastTransitionTime.Time) {
statusLogCtx = statusLogCtx.WithField("app.operation", app.Status.OperationState.Phase)
newAppStatus.LastTransitionTime = &now
newAppStatus.Status = argov1alpha1.ProgressiveSyncProgressing
appOutdated := false
if progressiveSyncsRollingSyncStrategyEnabled(applicationSet) {
appOutdated = syncStatusString == "OutOfSync"
}
switch {
case app.Status.OperationState.Phase.Successful():
newAppStatus.Message = "Application resource completed a sync successfully, updating status from Pending to Progressing"
case app.Status.OperationState.Phase.Completed():
newAppStatus.Message = "Application resource completed a sync, updating status from Pending to Progressing"
default:
// If a sync fails or has errors, the Application should be configured with retry. It is not the appset's job to retry failed syncs
newAppStatus.Message = "Application resource became Progressing, updating status from Pending to Progressing"
}
} else if isApplicationWithError(app) {
// Check whether the application has errors preventing it from being reconciled and performing syncs.
// If it does, we move it to Progressing.
newAppStatus.LastTransitionTime = &now
newAppStatus.Status = argov1alpha1.ProgressiveSyncProgressing
newAppStatus.Message = "Application resource has error and cannot sync, updating status to Progressing"
}
}
}
if appOutdated && currentAppStatus.Status != "Waiting" && currentAppStatus.Status != "Pending" {
logCtx.Infof("Application %v is outdated, updating its ApplicationSet status to Waiting", app.Name)
currentAppStatus.LastTransitionTime = &now
currentAppStatus.Status = "Waiting"
currentAppStatus.Message = "Application has pending changes, setting status to Waiting."
currentAppStatus.Step = strconv.Itoa(getAppStep(currentAppStatus.Application, appStepMap))
}
if currentAppStatus.Status == argov1alpha1.ProgressiveSyncProgressing {
// If the status has reached progressing, we know a sync has been triggered. No matter the result of that operation,
// we want the app to reach the Healthy state for the current revision.
if appHealthStatus == health.HealthStatusHealthy && appSyncStatus == argov1alpha1.SyncStatusCodeSynced {
newAppStatus.LastTransitionTime = &now
newAppStatus.Status = argov1alpha1.ProgressiveSyncHealthy
newAppStatus.Message = "Application resource became Healthy, updating status from Progressing to Healthy"
}
if currentAppStatus.Status == "Pending" {
if !appOutdated && operationPhaseString == "Succeeded" {
logCtx.Infof("Application %v has completed a sync successfully, updating its ApplicationSet status to Progressing", app.Name)
currentAppStatus.LastTransitionTime = &now
currentAppStatus.Status = "Progressing"
currentAppStatus.Message = "Application resource completed a sync successfully, updating status from Pending to Progressing."
currentAppStatus.Step = strconv.Itoa(getAppStep(currentAppStatus.Application, appStepMap))
} else if operationPhaseString == "Running" || healthStatusString == "Progressing" {
logCtx.Infof("Application %v has entered Progressing status, updating its ApplicationSet status to Progressing", app.Name)
currentAppStatus.LastTransitionTime = &now
currentAppStatus.Status = "Progressing"
currentAppStatus.Message = "Application resource became Progressing, updating status from Pending to Progressing."
currentAppStatus.Step = strconv.Itoa(getAppStep(currentAppStatus.Application, appStepMap))
}
}
if newAppStatus.LastTransitionTime == &now {
statusLogCtx.WithFields(log.Fields{
"new_status.status": newAppStatus.Status,
"new_status.message": newAppStatus.Message,
"new_status.step": newAppStatus.Step,
"new_status.targetRevisions": strings.Join(newAppStatus.TargetRevisions, ","),
}).Info("Progressive sync application changed status")
if currentAppStatus.Status == "Waiting" && isApplicationHealthy(app) {
logCtx.Infof("Application %v is already synced and healthy, updating its ApplicationSet status to Healthy", app.Name)
currentAppStatus.LastTransitionTime = &now
currentAppStatus.Status = healthStatusString
currentAppStatus.Message = "Application resource is already Healthy, updating status from Waiting to Healthy."
currentAppStatus.Step = strconv.Itoa(getAppStep(currentAppStatus.Application, appStepMap))
}
appStatuses = append(appStatuses, *newAppStatus)
if currentAppStatus.Status == "Progressing" && isApplicationHealthy(app) {
logCtx.Infof("Application %v has completed Progressing status, updating its ApplicationSet status to Healthy", app.Name)
currentAppStatus.LastTransitionTime = &now
currentAppStatus.Status = healthStatusString
currentAppStatus.Message = "Application resource became Healthy, updating status from Progressing to Healthy."
currentAppStatus.Step = strconv.Itoa(getAppStep(currentAppStatus.Application, appStepMap))
}
appStatuses = append(appStatuses, currentAppStatus)
}
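Taken together, the loop above implements this per-app progression (the statuses are the literal strings used in this version of the controller):
// (none)        -> "Waiting"     no status yet, or TargetRevisions changed / app OutOfSync
// "Waiting"     -> "Pending"     done in updateApplicationSetApplicationStatusProgress,
//                                when the step gate and the maxUpdate budget allow it
// "Pending"     -> "Progressing" the sync operation Succeeded, or the app started Progressing
// "Progressing" -> "Healthy"     isApplicationHealthy(app) is true
// "Waiting"     -> "Healthy"     shortcut when the app is already synced and healthy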
err := r.setAppSetApplicationStatus(ctx, logCtx, applicationSet, appStatuses)
@@ -1248,7 +1227,7 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx con
}
// check Applications that are in Waiting status and promote them to Pending if needed
func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusProgress(ctx context.Context, logCtx *log.Entry, applicationSet *argov1alpha1.ApplicationSet, appsToSync map[string]bool, appStepMap map[string]int) ([]argov1alpha1.ApplicationSetApplicationStatus, error) {
func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusProgress(ctx context.Context, logCtx *log.Entry, applicationSet *argov1alpha1.ApplicationSet, appSyncMap map[string]bool, appStepMap map[string]int) ([]argov1alpha1.ApplicationSetApplicationStatus, error) {
now := metav1.Now()
appStatuses := make([]argov1alpha1.ApplicationSetApplicationStatus, 0, len(applicationSet.Status.ApplicationStatus))
@@ -1264,20 +1243,12 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusProgress
for _, appStatus := range applicationSet.Status.ApplicationStatus {
totalCountMap[appStepMap[appStatus.Application]]++
if appStatus.Status == argov1alpha1.ProgressiveSyncPending || appStatus.Status == argov1alpha1.ProgressiveSyncProgressing {
if appStatus.Status == "Pending" || appStatus.Status == "Progressing" {
updateCountMap[appStepMap[appStatus.Application]]++
}
}
for _, appStatus := range applicationSet.Status.ApplicationStatus {
statusLogCtx := logCtx.WithFields(log.Fields{
"app.name": appStatus.Application,
"status.status": appStatus.Status,
"status.message": appStatus.Message,
"status.step": appStatus.Step,
"status.targetRevisions": strings.Join(appStatus.TargetRevisions, ","),
})
maxUpdateAllowed := true
maxUpdate := &intstr.IntOrString{}
if progressiveSyncsRollingSyncStrategyEnabled(applicationSet) {
@@ -1288,7 +1259,7 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusProgress
if maxUpdate != nil {
maxUpdateVal, err := intstr.GetScaledValueFromIntOrPercent(maxUpdate, totalCountMap[appStepMap[appStatus.Application]], false)
if err != nil {
statusLogCtx.Warnf("AppSet has a invalid maxUpdate value '%+v', ignoring maxUpdate logic for this step: %v", maxUpdate, err)
logCtx.Warnf("AppSet '%v' has a invalid maxUpdate value '%+v', ignoring maxUpdate logic for this step: %v", applicationSet.Name, maxUpdate, err)
}
// ensure that percentage values greater than 0% always result in at least 1 Application being selected
@@ -1298,21 +1269,16 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusProgress
if updateCountMap[appStepMap[appStatus.Application]] >= maxUpdateVal {
maxUpdateAllowed = false
statusLogCtx.Infof("Application is not allowed to update yet, %v/%v Applications already updating in step %v", updateCountMap[appStepMap[appStatus.Application]], maxUpdateVal, getAppStep(appStatus.Application, appStepMap))
logCtx.Infof("Application %v is not allowed to update yet, %v/%v Applications already updating in step %v in AppSet %v", appStatus.Application, updateCountMap[appStepMap[appStatus.Application]], maxUpdateVal, getAppStep(appStatus.Application, appStepMap), applicationSet.Name)
}
}
if appStatus.Status == argov1alpha1.ProgressiveSyncWaiting && appsToSync[appStatus.Application] && maxUpdateAllowed {
if appStatus.Status == "Waiting" && appSyncMap[appStatus.Application] && maxUpdateAllowed {
logCtx.Infof("Application %v moved to Pending status, watching for the Application to start Progressing", appStatus.Application)
appStatus.LastTransitionTime = &now
appStatus.Status = argov1alpha1.ProgressiveSyncPending
appStatus.Message = "Application moved to Pending status, watching for the Application resource to start Progressing"
statusLogCtx.WithFields(log.Fields{
"new_status.status": appStatus.Status,
"new_status.message": appStatus.Message,
"new_status.step": appStatus.Step,
"new_status.targetRevisions": strings.Join(appStatus.TargetRevisions, ","),
}).Info("Progressive sync application changed status")
appStatus.Status = "Pending"
appStatus.Message = "Application moved to Pending status, watching for the Application resource to start Progressing."
appStatus.Step = strconv.Itoa(getAppStep(appStatus.Application, appStepMap))
updateCountMap[appStepMap[appStatus.Application]]++
}
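The maxUpdate arithmetic uses intstr.GetScaledValueFromIntOrPercent from k8s.io/apimachinery/pkg/util/intstr. A quick sketch of why the post-scaling bump described in the comment above matters (the values are hypothetical):
maxUpdate := intstr.FromString("10%")
val, _ := intstr.GetScaledValueFromIntOrPercent(&maxUpdate, 3, false) // 10% of 3 apps, rounded down: 0
// the reconciler then raises 0 to 1, so any non-zero percentage still lets one
// app per step move from "Waiting" to "Pending"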
@@ -1337,9 +1303,9 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusConditio
completedWaves := map[string]bool{}
for _, appStatus := range applicationSet.Status.ApplicationStatus {
if v, ok := completedWaves[appStatus.Step]; !ok {
completedWaves[appStatus.Step] = appStatus.Status == argov1alpha1.ProgressiveSyncHealthy
completedWaves[appStatus.Step] = appStatus.Status == "Healthy"
} else {
completedWaves[appStatus.Step] = v && appStatus.Status == argov1alpha1.ProgressiveSyncHealthy
completedWaves[appStatus.Step] = v && appStatus.Status == "Healthy"
}
}
@@ -1561,31 +1527,30 @@ func (r *ApplicationSetReconciler) setAppSetApplicationStatus(ctx context.Contex
return nil
}
func (r *ApplicationSetReconciler) syncDesiredApplications(logCtx *log.Entry, applicationSet *argov1alpha1.ApplicationSet, appsToSync map[string]bool, desiredApplications []argov1alpha1.Application) []argov1alpha1.Application {
func (r *ApplicationSetReconciler) syncValidApplications(logCtx *log.Entry, applicationSet *argov1alpha1.ApplicationSet, appSyncMap map[string]bool, appMap map[string]argov1alpha1.Application, validApps []argov1alpha1.Application) []argov1alpha1.Application {
rolloutApps := []argov1alpha1.Application{}
for i := range desiredApplications {
for i := range validApps {
pruneEnabled := false
// ensure that Applications generated with RollingSync do not have an automated sync policy, since the AppSet controller will handle triggering the sync operation instead
if desiredApplications[i].Spec.SyncPolicy != nil && desiredApplications[i].Spec.SyncPolicy.IsAutomatedSyncEnabled() {
pruneEnabled = desiredApplications[i].Spec.SyncPolicy.Automated.Prune
desiredApplications[i].Spec.SyncPolicy.Automated.Enabled = ptr.To(false)
if validApps[i].Spec.SyncPolicy != nil && validApps[i].Spec.SyncPolicy.IsAutomatedSyncEnabled() {
pruneEnabled = validApps[i].Spec.SyncPolicy.Automated.Prune
validApps[i].Spec.SyncPolicy.Automated = nil
}
appSetStatusPending := false
idx := findApplicationStatusIndex(applicationSet.Status.ApplicationStatus, desiredApplications[i].Name)
if idx > -1 && applicationSet.Status.ApplicationStatus[idx].Status == argov1alpha1.ProgressiveSyncPending {
idx := findApplicationStatusIndex(applicationSet.Status.ApplicationStatus, validApps[i].Name)
if idx > -1 && applicationSet.Status.ApplicationStatus[idx].Status == "Pending" {
// only trigger a sync for Applications that are in Pending status, since this is governed by maxUpdate
appSetStatusPending = true
}
// check appsToSync to determine which Applications are ready to be updated and which should be skipped
if appsToSync[desiredApplications[i].Name] && appSetStatusPending {
logCtx.Infof("triggering sync for application: %v, prune enabled: %v", desiredApplications[i].Name, pruneEnabled)
desiredApplications[i] = syncApplication(desiredApplications[i], pruneEnabled)
// check appSyncMap to determine which Applications are ready to be updated and which should be skipped
if appSyncMap[validApps[i].Name] && appMap[validApps[i].Name].Status.Sync.Status == "OutOfSync" && appSetStatusPending {
logCtx.Infof("triggering sync for application: %v, prune enabled: %v", validApps[i].Name, pruneEnabled)
validApps[i] = syncApplication(validApps[i], pruneEnabled)
}
rolloutApps = append(rolloutApps, desiredApplications[i])
rolloutApps = append(rolloutApps, validApps[i])
}
return rolloutApps
}
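An Application reaches syncApplication only when three conditions hold at once. A condensed restatement with a hypothetical app name ("guestbook"); syncApplication's body is not shown in this diff, so the trailing comment only paraphrases its role:
allowed := appSyncMap["guestbook"]                                 // its RollingSync step is open
outOfSync := appMap["guestbook"].Status.Sync.Status == "OutOfSync" // there is something to sync
pending := true                                                    // AppSet status for the app is "Pending" (maxUpdate budget)
if allowed && outOfSync && pending {
	// syncApplication is expected to stamp a sync operation on the Application,
	// carrying over the prune setting from the automated policy removed above
}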

File diff suppressed because it is too large


@@ -14,6 +14,7 @@ import (
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/event"
"github.com/argoproj/argo-cd/v3/applicationset/utils"
"github.com/argoproj/argo-cd/v3/common"
argoprojiov1alpha1 "github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
)
@@ -22,8 +23,9 @@ import (
// requeue any related ApplicationSets.
type clusterSecretEventHandler struct {
// handler.EnqueueRequestForOwner
Log log.FieldLogger
Client client.Client
Log log.FieldLogger
Client client.Client
ApplicationSetNamespaces []string
}
func (h *clusterSecretEventHandler) Create(ctx context.Context, e event.CreateEvent, q workqueue.TypedRateLimitingInterface[reconcile.Request]) {
@@ -68,6 +70,10 @@ func (h *clusterSecretEventHandler) queueRelatedAppGenerators(ctx context.Contex
h.Log.WithField("count", len(appSetList.Items)).Info("listed ApplicationSets")
for _, appSet := range appSetList.Items {
if !utils.IsNamespaceAllowed(h.ApplicationSetNamespaces, appSet.GetNamespace()) {
// Ignore it, as it is not in the allowed list of namespaces in which to watch AppSets
continue
}
foundClusterGenerator := false
for _, generator := range appSet.Spec.Generators {
if generator.Clusters != nil {
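The new guard delegates the decision to utils.IsNamespaceAllowed. A hedged sketch of the intended behavior; the wildcard case is an assumption based on how ApplicationSet namespace allow-lists are configured elsewhere in Argo CD, not something this diff shows:
allowed := []string{"argocd", "team-*"}
_ = utils.IsNamespaceAllowed(allowed, "argocd")    // true: listed explicitly
_ = utils.IsNamespaceAllowed(allowed, "team-a")    // true, assuming glob patterns are honored
_ = utils.IsNamespaceAllowed(allowed, "elsewhere") // false: the ApplicationSet is skipped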


@@ -137,7 +137,7 @@ func TestClusterEventHandler(t *testing.T) {
{
ObjectMeta: metav1.ObjectMeta{
Name: "my-app-set",
Namespace: "another-namespace",
Namespace: "argocd",
},
Spec: argov1alpha1.ApplicationSetSpec{
Generators: []argov1alpha1.ApplicationSetGenerator{
@@ -171,9 +171,37 @@ func TestClusterEventHandler(t *testing.T) {
},
},
expectedRequests: []reconcile.Request{
{NamespacedName: types.NamespacedName{Namespace: "another-namespace", Name: "my-app-set"}},
{NamespacedName: types.NamespacedName{Namespace: "argocd", Name: "my-app-set"}},
},
},
{
name: "cluster generators in other namespaces should not match",
items: []argov1alpha1.ApplicationSet{
{
ObjectMeta: metav1.ObjectMeta{
Name: "my-app-set",
Namespace: "my-namespace-not-allowed",
},
Spec: argov1alpha1.ApplicationSetSpec{
Generators: []argov1alpha1.ApplicationSetGenerator{
{
Clusters: &argov1alpha1.ClusterGenerator{},
},
},
},
},
},
secret: corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Namespace: "argocd",
Name: "my-secret",
Labels: map[string]string{
argocommon.LabelKeySecretType: argocommon.LabelValueSecretTypeCluster,
},
},
},
expectedRequests: []reconcile.Request{},
},
{
name: "non-argo cd secret should not match",
items: []argov1alpha1.ApplicationSet{
@@ -552,8 +580,9 @@ func TestClusterEventHandler(t *testing.T) {
fakeClient := fake.NewClientBuilder().WithScheme(scheme).WithLists(&appSetList).Build()
handler := &clusterSecretEventHandler{
Client: fakeClient,
Log: log.WithField("type", "createSecretEventHandler"),
Client: fakeClient,
Log: log.WithField("type", "createSecretEventHandler"),
ApplicationSetNamespaces: []string{"argocd"},
}
mockAddRateLimitingInterface := mockAddRateLimitingInterface{}


@@ -86,28 +86,28 @@ func TestGenerateApplications(t *testing.T) {
}
t.Run(cc.name, func(t *testing.T) {
generatorMock := &genmock.Generator{}
generatorMock := genmock.Generator{}
generator := v1alpha1.ApplicationSetGenerator{
List: &v1alpha1.ListGenerator{},
}
generatorMock.EXPECT().GenerateParams(&generator, mock.AnythingOfType("*v1alpha1.ApplicationSet"), mock.Anything).
generatorMock.On("GenerateParams", &generator, mock.AnythingOfType("*v1alpha1.ApplicationSet"), mock.Anything).
Return(cc.params, cc.generateParamsError)
generatorMock.EXPECT().GetTemplate(&generator).
generatorMock.On("GetTemplate", &generator).
Return(&v1alpha1.ApplicationSetTemplate{})
rendererMock := &rendmock.Renderer{}
rendererMock := rendmock.Renderer{}
var expectedApps []v1alpha1.Application
if cc.generateParamsError == nil {
for _, p := range cc.params {
if cc.rendererError != nil {
rendererMock.EXPECT().RenderTemplateParams(GetTempApplication(cc.template), mock.AnythingOfType("*v1alpha1.ApplicationSetSyncPolicy"), p, false, []string(nil)).
rendererMock.On("RenderTemplateParams", GetTempApplication(cc.template), mock.AnythingOfType("*v1alpha1.ApplicationSetSyncPolicy"), p, false, []string(nil)).
Return(nil, cc.rendererError)
} else {
rendererMock.EXPECT().RenderTemplateParams(GetTempApplication(cc.template), mock.AnythingOfType("*v1alpha1.ApplicationSetSyncPolicy"), p, false, []string(nil)).
rendererMock.On("RenderTemplateParams", GetTempApplication(cc.template), mock.AnythingOfType("*v1alpha1.ApplicationSetSyncPolicy"), p, false, []string(nil)).
Return(&app, nil)
expectedApps = append(expectedApps, app)
}
@@ -115,9 +115,9 @@ func TestGenerateApplications(t *testing.T) {
}
generators := map[string]generators.Generator{
"List": generatorMock,
"List": &generatorMock,
}
renderer := rendererMock
renderer := &rendererMock
got, reason, err := GenerateApplications(log.NewEntry(log.StandardLogger()), v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
@@ -200,26 +200,26 @@ func TestMergeTemplateApplications(t *testing.T) {
cc := c
t.Run(cc.name, func(t *testing.T) {
generatorMock := &genmock.Generator{}
generatorMock := genmock.Generator{}
generator := v1alpha1.ApplicationSetGenerator{
List: &v1alpha1.ListGenerator{},
}
generatorMock.EXPECT().GenerateParams(&generator, mock.AnythingOfType("*v1alpha1.ApplicationSet"), mock.Anything).
generatorMock.On("GenerateParams", &generator, mock.AnythingOfType("*v1alpha1.ApplicationSet"), mock.Anything).
Return(cc.params, nil)
generatorMock.EXPECT().GetTemplate(&generator).
generatorMock.On("GetTemplate", &generator).
Return(&cc.overrideTemplate)
rendererMock := &rendmock.Renderer{}
rendererMock := rendmock.Renderer{}
rendererMock.EXPECT().RenderTemplateParams(GetTempApplication(cc.expectedMerged), mock.AnythingOfType("*v1alpha1.ApplicationSetSyncPolicy"), cc.params[0], false, []string(nil)).
rendererMock.On("RenderTemplateParams", GetTempApplication(cc.expectedMerged), mock.AnythingOfType("*v1alpha1.ApplicationSetSyncPolicy"), cc.params[0], false, []string(nil)).
Return(&cc.expectedApps[0], nil)
generators := map[string]generators.Generator{
"List": generatorMock,
"List": &generatorMock,
}
renderer := rendererMock
renderer := &rendererMock
got, _, _ := GenerateApplications(log.NewEntry(log.StandardLogger()), v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
@@ -312,19 +312,19 @@ func TestGenerateAppsUsingPullRequestGenerator(t *testing.T) {
},
} {
t.Run(cases.name, func(t *testing.T) {
generatorMock := &genmock.Generator{}
generatorMock := genmock.Generator{}
generator := v1alpha1.ApplicationSetGenerator{
PullRequest: &v1alpha1.PullRequestGenerator{},
}
generatorMock.EXPECT().GenerateParams(&generator, mock.AnythingOfType("*v1alpha1.ApplicationSet"), mock.Anything).
generatorMock.On("GenerateParams", &generator, mock.AnythingOfType("*v1alpha1.ApplicationSet"), mock.Anything).
Return(cases.params, nil)
generatorMock.EXPECT().GetTemplate(&generator).
Return(&cases.template)
generatorMock.On("GetTemplate", &generator).
Return(&cases.template, nil)
generators := map[string]generators.Generator{
"PullRequest": generatorMock,
"PullRequest": &generatorMock,
}
renderer := &utils.Render{}


@@ -223,7 +223,7 @@ func TestTransForm(t *testing.T) {
for _, testCase := range testCases {
t.Run(testCase.name, func(t *testing.T) {
testGenerators := map[string]Generator{
"Clusters": getMockClusterGenerator(t.Context()),
"Clusters": getMockClusterGenerator(),
}
applicationSetInfo := argov1alpha1.ApplicationSet{
@@ -260,7 +260,7 @@ func emptyTemplate() argov1alpha1.ApplicationSetTemplate {
}
}
func getMockClusterGenerator(ctx context.Context) Generator {
func getMockClusterGenerator() Generator {
clusters := []crtclient.Object{
&corev1.Secret{
TypeMeta: metav1.TypeMeta{
@@ -342,19 +342,19 @@ func getMockClusterGenerator(ctx context.Context) Generator {
appClientset := kubefake.NewSimpleClientset(runtimeClusters...)
fakeClient := fake.NewClientBuilder().WithObjects(clusters...).Build()
return NewClusterGenerator(ctx, fakeClient, appClientset, "namespace")
return NewClusterGenerator(context.Background(), fakeClient, appClientset, "namespace")
}
func getMockGitGenerator() Generator {
argoCDServiceMock := &mocks.Repos{}
argoCDServiceMock.EXPECT().GetDirectories(mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return([]string{"app1", "app2", "app_3", "p1/app4"}, nil)
gitGenerator := NewGitGenerator(argoCDServiceMock, "namespace")
argoCDServiceMock := mocks.Repos{}
argoCDServiceMock.On("GetDirectories", mock.Anything, mock.Anything, mock.Anything).Return([]string{"app1", "app2", "app_3", "p1/app4"}, nil)
gitGenerator := NewGitGenerator(&argoCDServiceMock, "namespace")
return gitGenerator
}
func TestGetRelevantGenerators(t *testing.T) {
testGenerators := map[string]Generator{
"Clusters": getMockClusterGenerator(t.Context()),
"Clusters": getMockClusterGenerator(),
"Git": getMockGitGenerator(),
}


@@ -320,11 +320,11 @@ func TestGitGenerateParamsFromDirectories(t *testing.T) {
t.Run(testCaseCopy.name, func(t *testing.T) {
t.Parallel()
argoCDServiceMock := mocks.NewRepos(t)
argoCDServiceMock := mocks.Repos{}
argoCDServiceMock.EXPECT().GetDirectories(mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(testCaseCopy.repoApps, testCaseCopy.repoError)
argoCDServiceMock.On("GetDirectories", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(testCaseCopy.repoApps, testCaseCopy.repoError)
gitGenerator := NewGitGenerator(argoCDServiceMock, "")
gitGenerator := NewGitGenerator(&argoCDServiceMock, "")
applicationSetInfo := v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
@@ -357,6 +357,8 @@ func TestGitGenerateParamsFromDirectories(t *testing.T) {
require.NoError(t, err)
assert.Equal(t, testCaseCopy.expected, got)
}
argoCDServiceMock.AssertExpectations(t)
})
}
}
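The mock churn across these generator tests follows a single pattern: the mockery constructor mocks.NewRepos(t) with the typed .EXPECT() API is replaced by a plain mocks.Repos{} value with string-based .On(...) expectations, which is why each test now also gains an explicit AssertExpectations(t) call at the end. Side by side (a sketch, not lines from this diff):
// plain testify mock: expectations are not verified automatically
m := mocks.Repos{}
m.On("GetDirectories", mock.Anything, mock.Anything, mock.Anything,
	mock.Anything, mock.Anything, mock.Anything).Return([]string{"app1"}, nil)
gitGenerator := NewGitGenerator(&m, "")
// ... exercise gitGenerator ...
m.AssertExpectations(t) // must be called explicitly in this style

// mockery constructor: NewRepos(t) registers the same assertion via t.Cleanup,
// and .EXPECT() provides compile-time-checked expectation builders
// m := mocks.NewRepos(t)
// m.EXPECT().GetDirectories(...).Return([]string{"app1"}, nil)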
@@ -621,11 +623,11 @@ func TestGitGenerateParamsFromDirectoriesGoTemplate(t *testing.T) {
t.Run(testCaseCopy.name, func(t *testing.T) {
t.Parallel()
argoCDServiceMock := mocks.NewRepos(t)
argoCDServiceMock := mocks.Repos{}
argoCDServiceMock.EXPECT().GetDirectories(mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(testCaseCopy.repoApps, testCaseCopy.repoError)
argoCDServiceMock.On("GetDirectories", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(testCaseCopy.repoApps, testCaseCopy.repoError)
gitGenerator := NewGitGenerator(argoCDServiceMock, "")
gitGenerator := NewGitGenerator(&argoCDServiceMock, "")
applicationSetInfo := v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
@@ -658,6 +660,8 @@ func TestGitGenerateParamsFromDirectoriesGoTemplate(t *testing.T) {
require.NoError(t, err)
assert.Equal(t, testCaseCopy.expected, got)
}
argoCDServiceMock.AssertExpectations(t)
})
}
}
@@ -996,11 +1000,11 @@ cluster:
t.Run(testCaseCopy.name, func(t *testing.T) {
t.Parallel()
argoCDServiceMock := mocks.NewRepos(t)
argoCDServiceMock.EXPECT().GetFiles(mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
argoCDServiceMock := mocks.Repos{}
argoCDServiceMock.On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
Return(testCaseCopy.repoFileContents, testCaseCopy.repoPathsError)
gitGenerator := NewGitGenerator(argoCDServiceMock, "")
gitGenerator := NewGitGenerator(&argoCDServiceMock, "")
applicationSetInfo := v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
@@ -1032,6 +1036,8 @@ cluster:
require.NoError(t, err)
assert.ElementsMatch(t, testCaseCopy.expected, got)
}
argoCDServiceMock.AssertExpectations(t)
})
}
}
@@ -1325,7 +1331,7 @@ env: testing
t.Run(testCaseCopy.name, func(t *testing.T) {
t.Parallel()
argoCDServiceMock := mocks.NewRepos(t)
argoCDServiceMock := mocks.Repos{}
// IMPORTANT: we try to get the files from the repo server that match the patterns
// If we find files that also satisfy the exclude pattern, we remove them from the map
@@ -1333,16 +1339,18 @@ env: testing
// With the below mock setup, we make sure that if the GetFiles() function gets called
// for an include or exclude pattern, it always returns the includeFiles or excludeFiles.
for _, pattern := range testCaseCopy.excludePattern {
argoCDServiceMock.EXPECT().GetFiles(mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
argoCDServiceMock.
On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
Return(testCaseCopy.excludeFiles, testCaseCopy.repoPathsError)
}
for _, pattern := range testCaseCopy.includePattern {
argoCDServiceMock.EXPECT().GetFiles(mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
argoCDServiceMock.
On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
Return(testCaseCopy.includeFiles, testCaseCopy.repoPathsError)
}
gitGenerator := NewGitGenerator(argoCDServiceMock, "")
gitGenerator := NewGitGenerator(&argoCDServiceMock, "")
applicationSetInfo := v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
@@ -1374,6 +1382,8 @@ env: testing
require.NoError(t, err)
assert.ElementsMatch(t, testCaseCopy.expected, got)
}
argoCDServiceMock.AssertExpectations(t)
})
}
}
@@ -1662,7 +1672,7 @@ env: testing
t.Run(testCaseCopy.name, func(t *testing.T) {
t.Parallel()
argoCDServiceMock := mocks.NewRepos(t)
argoCDServiceMock := mocks.Repos{}
// IMPORTANT: we try to get the files from the repo server that match the patterns
// If we find files that also satisfy the exclude pattern, we remove them from the map
@@ -1670,16 +1680,18 @@ env: testing
// With the below mock setup, we make sure that if the GetFiles() function gets called
// for an include or exclude pattern, it always returns the includeFiles or excludeFiles.
for _, pattern := range testCaseCopy.excludePattern {
argoCDServiceMock.EXPECT().GetFiles(mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
argoCDServiceMock.
On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
Return(testCaseCopy.excludeFiles, testCaseCopy.repoPathsError)
}
for _, pattern := range testCaseCopy.includePattern {
argoCDServiceMock.EXPECT().GetFiles(mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
argoCDServiceMock.
On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
Return(testCaseCopy.includeFiles, testCaseCopy.repoPathsError)
}
gitGenerator := NewGitGenerator(argoCDServiceMock, "")
gitGenerator := NewGitGenerator(&argoCDServiceMock, "")
applicationSetInfo := v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
@@ -1711,6 +1723,8 @@ env: testing
require.NoError(t, err)
assert.ElementsMatch(t, testCaseCopy.expected, got)
}
argoCDServiceMock.AssertExpectations(t)
})
}
}
@@ -1894,23 +1908,25 @@ func TestGitGeneratorParamsFromFilesWithExcludeOptionGoTemplate(t *testing.T) {
t.Run(testCaseCopy.name, func(t *testing.T) {
t.Parallel()
argoCDServiceMock := mocks.NewRepos(t)
argoCDServiceMock := mocks.Repos{}
// IMPORTANT: we try to get the files from the repo server that match the patterns
// If we find files that also satisfy the exclude pattern, we remove them from the map
// This is generally done by the g.repos.GetFiles() function.
// With the below mock setup, we make sure that if the GetFiles() function gets called
// for an include or exclude pattern, it always returns the includeFiles or excludeFiles.
for _, pattern := range testCaseCopy.excludePattern {
argoCDServiceMock.EXPECT().GetFiles(mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
argoCDServiceMock.
On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
Return(testCaseCopy.excludeFiles, testCaseCopy.repoPathsError)
}
for _, pattern := range testCaseCopy.includePattern {
argoCDServiceMock.EXPECT().GetFiles(mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
argoCDServiceMock.
On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, pattern, mock.Anything, mock.Anything).
Return(testCaseCopy.includeFiles, testCaseCopy.repoPathsError)
}
gitGenerator := NewGitGenerator(argoCDServiceMock, "")
gitGenerator := NewGitGenerator(&argoCDServiceMock, "")
applicationSetInfo := v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
@@ -1942,6 +1958,8 @@ func TestGitGeneratorParamsFromFilesWithExcludeOptionGoTemplate(t *testing.T) {
require.NoError(t, err)
assert.ElementsMatch(t, testCaseCopy.expected, got)
}
argoCDServiceMock.AssertExpectations(t)
})
}
}
@@ -2261,11 +2279,11 @@ cluster:
t.Run(testCaseCopy.name, func(t *testing.T) {
t.Parallel()
argoCDServiceMock := mocks.NewRepos(t)
argoCDServiceMock.EXPECT().GetFiles(mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
argoCDServiceMock := mocks.Repos{}
argoCDServiceMock.On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
Return(testCaseCopy.repoFileContents, testCaseCopy.repoPathsError)
gitGenerator := NewGitGenerator(argoCDServiceMock, "")
gitGenerator := NewGitGenerator(&argoCDServiceMock, "")
applicationSetInfo := v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
@@ -2297,6 +2315,8 @@ cluster:
require.NoError(t, err)
assert.ElementsMatch(t, testCaseCopy.expected, got)
}
argoCDServiceMock.AssertExpectations(t)
})
}
}
@@ -2462,7 +2482,7 @@ func TestGitGenerator_GenerateParams(t *testing.T) {
},
}
for _, testCase := range cases {
argoCDServiceMock := mocks.NewRepos(t)
argoCDServiceMock := mocks.Repos{}
if testCase.callGetDirectories {
var project any
@@ -2472,9 +2492,9 @@ func TestGitGenerator_GenerateParams(t *testing.T) {
project = mock.Anything
}
argoCDServiceMock.EXPECT().GetDirectories(mock.Anything, mock.Anything, mock.Anything, project, mock.Anything, mock.Anything).Return(testCase.repoApps, testCase.repoPathsError)
argoCDServiceMock.On("GetDirectories", mock.Anything, mock.Anything, mock.Anything, project, mock.Anything, mock.Anything).Return(testCase.repoApps, testCase.repoPathsError)
}
gitGenerator := NewGitGenerator(argoCDServiceMock, "argocd")
gitGenerator := NewGitGenerator(&argoCDServiceMock, "argocd")
scheme := runtime.NewScheme()
err := v1alpha1.AddToScheme(scheme)
@@ -2490,5 +2510,7 @@ func TestGitGenerator_GenerateParams(t *testing.T) {
require.NoError(t, err)
assert.Equal(t, testCase.expected, got)
}
argoCDServiceMock.AssertExpectations(t)
}
}


@@ -12,12 +12,12 @@ import (
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
"github.com/argoproj/argo-cd/v3/applicationset/services/mocks"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
generatorsMock "github.com/argoproj/argo-cd/v3/applicationset/generators/mocks"
servicesMocks "github.com/argoproj/argo-cd/v3/applicationset/services/mocks"
"github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
)
@@ -136,7 +136,7 @@ func TestMatrixGenerate(t *testing.T) {
testCaseCopy := testCase // Since tests may run in parallel
t.Run(testCaseCopy.name, func(t *testing.T) {
genMock := &generatorsMock.Generator{}
genMock := &generatorMock{}
appSet := &v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
@@ -149,7 +149,7 @@ func TestMatrixGenerate(t *testing.T) {
Git: g.Git,
List: g.List,
}
genMock.EXPECT().GenerateParams(mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet, mock.Anything).Return([]map[string]any{
genMock.On("GenerateParams", mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet, mock.Anything).Return([]map[string]any{
{
"path": "app1",
"path.basename": "app1",
@@ -162,7 +162,7 @@ func TestMatrixGenerate(t *testing.T) {
},
}, nil)
genMock.EXPECT().GetTemplate(&gitGeneratorSpec).
genMock.On("GetTemplate", &gitGeneratorSpec).
Return(&v1alpha1.ApplicationSetTemplate{})
}
@@ -343,7 +343,7 @@ func TestMatrixGenerateGoTemplate(t *testing.T) {
testCaseCopy := testCase // Since tests may run in parallel
t.Run(testCaseCopy.name, func(t *testing.T) {
genMock := &generatorsMock.Generator{}
genMock := &generatorMock{}
appSet := &v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
@@ -358,7 +358,7 @@ func TestMatrixGenerateGoTemplate(t *testing.T) {
Git: g.Git,
List: g.List,
}
genMock.EXPECT().GenerateParams(mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet, mock.Anything).Return([]map[string]any{
genMock.On("GenerateParams", mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet, mock.Anything).Return([]map[string]any{
{
"path": map[string]string{
"path": "app1",
@@ -375,7 +375,7 @@ func TestMatrixGenerateGoTemplate(t *testing.T) {
},
}, nil)
genMock.EXPECT().GetTemplate(&gitGeneratorSpec).
genMock.On("GetTemplate", &gitGeneratorSpec).
Return(&v1alpha1.ApplicationSetTemplate{})
}
@@ -507,7 +507,7 @@ func TestMatrixGetRequeueAfter(t *testing.T) {
testCaseCopy := testCase // Since tests may run in parallel
t.Run(testCaseCopy.name, func(t *testing.T) {
mock := &generatorsMock.Generator{}
mock := &generatorMock{}
for _, g := range testCaseCopy.baseGenerators {
gitGeneratorSpec := v1alpha1.ApplicationSetGenerator{
@@ -517,7 +517,7 @@ func TestMatrixGetRequeueAfter(t *testing.T) {
SCMProvider: g.SCMProvider,
ClusterDecisionResource: g.ClusterDecisionResource,
}
mock.EXPECT().GetRequeueAfter(&gitGeneratorSpec).Return(testCaseCopy.gitGetRequeueAfter)
mock.On("GetRequeueAfter", &gitGeneratorSpec).Return(testCaseCopy.gitGetRequeueAfter, nil)
}
matrixGenerator := NewMatrixGenerator(
@@ -634,7 +634,7 @@ func TestInterpolatedMatrixGenerate(t *testing.T) {
testCaseCopy := testCase // Since tests may run in parallel
t.Run(testCaseCopy.name, func(t *testing.T) {
genMock := &generatorsMock.Generator{}
genMock := &generatorMock{}
appSet := &v1alpha1.ApplicationSet{}
appClientset := kubefake.NewSimpleClientset(runtimeClusters...)
@@ -650,7 +650,7 @@ func TestInterpolatedMatrixGenerate(t *testing.T) {
Git: g.Git,
Clusters: g.Clusters,
}
genMock.EXPECT().GenerateParams(mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet, mock.Anything).Return([]map[string]any{
genMock.On("GenerateParams", mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet).Return([]map[string]any{
{
"path": "examples/git-generator-files-discovery/cluster-config/dev/config.json",
"path.basename": "dev",
@@ -662,7 +662,7 @@ func TestInterpolatedMatrixGenerate(t *testing.T) {
"path.basenameNormalized": "prod",
},
}, nil)
genMock.EXPECT().GetTemplate(&gitGeneratorSpec).
genMock.On("GetTemplate", &gitGeneratorSpec).
Return(&v1alpha1.ApplicationSetTemplate{})
}
matrixGenerator := NewMatrixGenerator(
@@ -813,7 +813,7 @@ func TestInterpolatedMatrixGenerateGoTemplate(t *testing.T) {
testCaseCopy := testCase // Since tests may run in parallel
t.Run(testCaseCopy.name, func(t *testing.T) {
genMock := &generatorsMock.Generator{}
genMock := &generatorMock{}
appSet := &v1alpha1.ApplicationSet{
Spec: v1alpha1.ApplicationSetSpec{
GoTemplate: true,
@@ -833,7 +833,7 @@ func TestInterpolatedMatrixGenerateGoTemplate(t *testing.T) {
Git: g.Git,
Clusters: g.Clusters,
}
genMock.EXPECT().GenerateParams(mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet, mock.Anything).Return([]map[string]any{
genMock.On("GenerateParams", mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet).Return([]map[string]any{
{
"path": map[string]string{
"path": "examples/git-generator-files-discovery/cluster-config/dev/config.json",
@@ -849,7 +849,7 @@ func TestInterpolatedMatrixGenerateGoTemplate(t *testing.T) {
},
},
}, nil)
genMock.EXPECT().GetTemplate(&gitGeneratorSpec).
genMock.On("GetTemplate", &gitGeneratorSpec).
Return(&v1alpha1.ApplicationSetTemplate{})
}
matrixGenerator := NewMatrixGenerator(
@@ -969,7 +969,7 @@ func TestMatrixGenerateListElementsYaml(t *testing.T) {
testCaseCopy := testCase // Since tests may run in parallel
t.Run(testCaseCopy.name, func(t *testing.T) {
genMock := &generatorsMock.Generator{}
genMock := &generatorMock{}
appSet := &v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "set",
@@ -984,7 +984,7 @@ func TestMatrixGenerateListElementsYaml(t *testing.T) {
Git: g.Git,
List: g.List,
}
genMock.EXPECT().GenerateParams(mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet, mock.Anything).Return([]map[string]any{{
genMock.On("GenerateParams", mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet).Return([]map[string]any{{
"foo": map[string]any{
"bar": []any{
map[string]any{
@@ -1009,7 +1009,7 @@ func TestMatrixGenerateListElementsYaml(t *testing.T) {
},
},
}}, nil)
genMock.EXPECT().GetTemplate(&gitGeneratorSpec).
genMock.On("GetTemplate", &gitGeneratorSpec).
Return(&v1alpha1.ApplicationSetTemplate{})
}
@@ -1037,6 +1037,28 @@ func TestMatrixGenerateListElementsYaml(t *testing.T) {
}
}
type generatorMock struct {
mock.Mock
}
func (g *generatorMock) GetTemplate(appSetGenerator *v1alpha1.ApplicationSetGenerator) *v1alpha1.ApplicationSetTemplate {
args := g.Called(appSetGenerator)
return args.Get(0).(*v1alpha1.ApplicationSetTemplate)
}
func (g *generatorMock) GenerateParams(appSetGenerator *v1alpha1.ApplicationSetGenerator, appSet *v1alpha1.ApplicationSet, _ client.Client) ([]map[string]any, error) {
args := g.Called(appSetGenerator, appSet)
return args.Get(0).([]map[string]any), args.Error(1)
}
func (g *generatorMock) GetRequeueAfter(appSetGenerator *v1alpha1.ApplicationSetGenerator) time.Duration {
args := g.Called(appSetGenerator)
return args.Get(0).(time.Duration)
}
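One subtlety in this hand-rolled mock: GenerateParams forwards only two of its three arguments to Called, dropping the client. testify matches an expectation only when the recorded and registered argument counts agree, so .On(...) calls for it line up only with exactly two matchers, as in the interpolated-matrix tests above:
genMock.On("GenerateParams", mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), appSet).
	Return(params, nil) // a third matcher would never match: Called records two args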
func TestGitGenerator_GenerateParams_list_x_git_matrix_generator(t *testing.T) {
// Given a matrix generator over a list generator and a git files generator, the nested git files generator should
// be treated as a files generator, and it should produce parameters.
@@ -1050,11 +1072,11 @@ func TestGitGenerator_GenerateParams_list_x_git_matrix_generator(t *testing.T) {
// Now instead of checking for nil, we check whether the field is a non-empty slice. This test prevents a regression
// of that bug.
listGeneratorMock := &generatorsMock.Generator{}
listGeneratorMock.EXPECT().GenerateParams(mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), mock.AnythingOfType("*v1alpha1.ApplicationSet"), mock.Anything).Return([]map[string]any{
listGeneratorMock := &generatorMock{}
listGeneratorMock.On("GenerateParams", mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator"), mock.AnythingOfType("*v1alpha1.ApplicationSet"), mock.Anything).Return([]map[string]any{
{"some": "value"},
}, nil)
listGeneratorMock.EXPECT().GetTemplate(mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator")).Return(&v1alpha1.ApplicationSetTemplate{})
listGeneratorMock.On("GetTemplate", mock.AnythingOfType("*v1alpha1.ApplicationSetGenerator")).Return(&v1alpha1.ApplicationSetTemplate{})
gitGeneratorSpec := &v1alpha1.GitGenerator{
RepoURL: "https://git.example.com",
@@ -1063,10 +1085,10 @@ func TestGitGenerator_GenerateParams_list_x_git_matrix_generator(t *testing.T) {
},
}
repoServiceMock := &servicesMocks.Repos{}
repoServiceMock.EXPECT().GetFiles(mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(map[string][]byte{
repoServiceMock := &mocks.Repos{}
repoServiceMock.On("GetFiles", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(map[string][]byte{
"some/path.json": []byte("test: content"),
}, nil).Maybe()
}, nil)
gitGenerator := NewGitGenerator(repoServiceMock, "")
matrixGenerator := NewMatrixGenerator(map[string]Generator{


@@ -174,7 +174,7 @@ func TestApplicationsetCollector(t *testing.T) {
appsetCollector := newAppsetCollector(utils.NewAppsetLister(client), collectedLabels, filter)
metrics.Registry.MustRegister(appsetCollector)
req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
require.NoError(t, err)
rr := httptest.NewRecorder()
handler := promhttp.HandlerFor(metrics.Registry, promhttp.HandlerOpts{})
@@ -216,7 +216,7 @@ func TestObserveReconcile(t *testing.T) {
appsetMetrics := NewApplicationsetMetrics(utils.NewAppsetLister(client), collectedLabels, filter)
req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
require.NoError(t, err)
rr := httptest.NewRecorder()
handler := promhttp.HandlerFor(metrics.Registry, promhttp.HandlerOpts{})


@@ -97,9 +97,7 @@ func TestGitHubMetrics_CollectorApproach_Success(t *testing.T) {
),
}
ctx := t.Context()
req, _ := http.NewRequestWithContext(ctx, http.MethodGet, ts.URL+URL, http.NoBody)
req, _ := http.NewRequest(http.MethodGet, ts.URL+URL, http.NoBody)
resp, err := client.Do(req)
if err != nil {
t.Fatalf("unexpected error: %v", err)
@@ -111,11 +109,7 @@ func TestGitHubMetrics_CollectorApproach_Success(t *testing.T) {
server := httptest.NewServer(handler)
defer server.Close()
req, err = http.NewRequestWithContext(ctx, http.MethodGet, server.URL, http.NoBody)
if err != nil {
t.Fatalf("failed to create request: %v", err)
}
resp, err = http.DefaultClient.Do(req)
resp, err = http.Get(server.URL)
if err != nil {
t.Fatalf("failed to scrape metrics: %v", err)
}
@@ -157,23 +151,15 @@ func TestGitHubMetrics_CollectorApproach_NoRateLimitMetricsOnNilResponse(t *test
metrics: metrics,
},
}
ctx := t.Context()
req, err := http.NewRequestWithContext(ctx, http.MethodGet, URL, http.NoBody)
if err != nil {
t.Fatalf("failed to create request: %v", err)
}
req, _ := http.NewRequest(http.MethodGet, URL, http.NoBody)
_, _ = client.Do(req)
handler := promhttp.HandlerFor(reg, promhttp.HandlerOpts{})
server := httptest.NewServer(handler)
defer server.Close()
req, err = http.NewRequestWithContext(ctx, http.MethodGet, server.URL, http.NoBody)
if err != nil {
t.Fatalf("failed to create request: %v", err)
}
resp, err := http.DefaultClient.Do(req)
resp, err := http.Get(server.URL)
if err != nil {
t.Fatalf("failed to scrape metrics: %v", err)
}
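Both metrics test files make the same swap: http.NewRequestWithContext(t.Context(), ...) followed by http.DefaultClient.Do collapses back to http.NewRequest / http.Get, which build the request on context.Background() internally. The two forms, shown as alternatives (ctx is assumed to come from t.Context()):
// shorthand: the request carries context.Background()
resp, err := http.Get(server.URL)

// explicit: ctx propagates cancellation and deadlines into the request
req, err := http.NewRequestWithContext(ctx, http.MethodGet, server.URL, http.NoBody)
resp, err = http.DefaultClient.Do(req)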


@@ -1,6 +1,7 @@
package pull_request
import (
"context"
"errors"
"testing"
@@ -12,7 +13,6 @@ import (
"github.com/stretchr/testify/require"
azureMock "github.com/argoproj/argo-cd/v3/applicationset/services/scm_provider/azure_devops/git/mocks"
"github.com/argoproj/argo-cd/v3/applicationset/services/scm_provider/mocks"
)
func createBoolPtr(x bool) *bool {
@@ -35,6 +35,29 @@ func createUniqueNamePtr(x string) *string {
return &x
}
type AzureClientFactoryMock struct {
mock *mock.Mock
}
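// GetClient returns whatever was registered via mock.On("GetClient", ...): the
// first canned value is type-asserted to git.Client when non-nil, and the second,
// when present, is returned as the error.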
func (m *AzureClientFactoryMock) GetClient(ctx context.Context) (git.Client, error) {
args := m.mock.Called(ctx)
var client git.Client
c := args.Get(0)
if c != nil {
client = c.(git.Client)
}
var err error
if len(args) > 1 {
if e, ok := args.Get(1).(error); ok {
err = e
}
}
return client, err
}
func TestListPullRequest(t *testing.T) {
teamProject := "myorg_project"
repoName := "myorg_project_repo"
@@ -68,10 +91,10 @@ func TestListPullRequest(t *testing.T) {
SearchCriteria: &git.GitPullRequestSearchCriteria{},
}
gitClientMock := &azureMock.Client{}
clientFactoryMock := &mocks.AzureDevOpsClientFactory{}
clientFactoryMock.EXPECT().GetClient(mock.Anything).Return(gitClientMock, nil)
gitClientMock.EXPECT().GetPullRequestsByProject(mock.Anything, args).Return(&pullRequestMock, nil)
gitClientMock := azureMock.Client{}
clientFactoryMock := &AzureClientFactoryMock{mock: &mock.Mock{}}
clientFactoryMock.mock.On("GetClient", mock.Anything).Return(&gitClientMock, nil)
gitClientMock.On("GetPullRequestsByProject", ctx, args).Return(&pullRequestMock, nil)
provider := AzureDevOpsService{
clientFactory: clientFactoryMock,
@@ -222,12 +245,12 @@ func TestAzureDevOpsListReturnsRepositoryNotFoundError(t *testing.T) {
pullRequestMock := []git.GitPullRequest{}
gitClientMock := &azureMock.Client{}
clientFactoryMock := &mocks.AzureDevOpsClientFactory{}
clientFactoryMock.EXPECT().GetClient(mock.Anything).Return(gitClientMock, nil)
gitClientMock := azureMock.Client{}
clientFactoryMock := &AzureClientFactoryMock{mock: &mock.Mock{}}
clientFactoryMock.mock.On("GetClient", mock.Anything).Return(&gitClientMock, nil)
// Mock GetPullRequestsByProject to return a "project does not exist" error
gitClientMock.EXPECT().GetPullRequestsByProject(mock.Anything, args).Return(&pullRequestMock,
gitClientMock.On("GetPullRequestsByProject", t.Context(), args).Return(&pullRequestMock,
errors.New("The following project does not exist:"))
provider := AzureDevOpsService{


@@ -12,7 +12,7 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
"github.com/argoproj/argo-cd/v3/applicationset/services/scm_provider/mocks"
"github.com/argoproj/argo-cd/v3/applicationset/services/scm_provider/aws_codecommit/mocks"
"github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
)
@@ -177,8 +177,9 @@ func TestAWSCodeCommitListRepos(t *testing.T) {
if repo.getRepositoryNilMetadata {
repoMetadata = nil
}
codeCommitClient.EXPECT().GetRepositoryWithContext(mock.Anything, &codecommit.GetRepositoryInput{RepositoryName: aws.String(repo.name)}).
Return(&codecommit.GetRepositoryOutput{RepositoryMetadata: repoMetadata}, repo.getRepositoryError).Maybe()
codeCommitClient.
On("GetRepositoryWithContext", ctx, &codecommit.GetRepositoryInput{RepositoryName: aws.String(repo.name)}).
Return(&codecommit.GetRepositoryOutput{RepositoryMetadata: repoMetadata}, repo.getRepositoryError)
codecommitRepoNameIdPairs = append(codecommitRepoNameIdPairs, &codecommit.RepositoryNameIdPair{
RepositoryId: aws.String(repo.id),
RepositoryName: aws.String(repo.name),
@@ -192,18 +193,20 @@ func TestAWSCodeCommitListRepos(t *testing.T) {
}
if testCase.expectListAtCodeCommit {
codeCommitClient.EXPECT().ListRepositoriesWithContext(mock.Anything, &codecommit.ListRepositoriesInput{}).
codeCommitClient.
On("ListRepositoriesWithContext", ctx, &codecommit.ListRepositoriesInput{}).
Return(&codecommit.ListRepositoriesOutput{
Repositories: codecommitRepoNameIdPairs,
}, testCase.listRepositoryError).Maybe()
}, testCase.listRepositoryError)
} else {
taggingClient.EXPECT().GetResourcesWithContext(mock.Anything, mock.MatchedBy(equalIgnoringTagFilterOrder(&resourcegroupstaggingapi.GetResourcesInput{
TagFilters: testCase.expectTagFilters,
ResourceTypeFilters: aws.StringSlice([]string{resourceTypeCodeCommitRepository}),
}))).
taggingClient.
On("GetResourcesWithContext", ctx, mock.MatchedBy(equalIgnoringTagFilterOrder(&resourcegroupstaggingapi.GetResourcesInput{
TagFilters: testCase.expectTagFilters,
ResourceTypeFilters: aws.StringSlice([]string{resourceTypeCodeCommitRepository}),
}))).
Return(&resourcegroupstaggingapi.GetResourcesOutput{
ResourceTagMappingList: resourceTaggings,
}, testCase.listRepositoryError).Maybe()
}, testCase.listRepositoryError)
}
provider := &AWSCodeCommitProvider{
@@ -347,12 +350,13 @@ func TestAWSCodeCommitRepoHasPath(t *testing.T) {
taggingClient := mocks.NewAWSTaggingClient(t)
ctx := t.Context()
if testCase.expectedGetFolderPath != "" {
codeCommitClient.EXPECT().GetFolderWithContext(mock.Anything, &codecommit.GetFolderInput{
CommitSpecifier: aws.String(branch),
FolderPath: aws.String(testCase.expectedGetFolderPath),
RepositoryName: aws.String(repoName),
}).
Return(testCase.getFolderOutput, testCase.getFolderError).Maybe()
codeCommitClient.
On("GetFolderWithContext", ctx, &codecommit.GetFolderInput{
CommitSpecifier: aws.String(branch),
FolderPath: aws.String(testCase.expectedGetFolderPath),
RepositoryName: aws.String(repoName),
}).
Return(testCase.getFolderOutput, testCase.getFolderError)
}
provider := &AWSCodeCommitProvider{
codeCommitClient: codeCommitClient,
@@ -419,16 +423,18 @@ func TestAWSCodeCommitGetBranches(t *testing.T) {
taggingClient := mocks.NewAWSTaggingClient(t)
ctx := t.Context()
if testCase.allBranches {
codeCommitClient.EXPECT().ListBranchesWithContext(mock.Anything, &codecommit.ListBranchesInput{
RepositoryName: aws.String(name),
}).
Return(&codecommit.ListBranchesOutput{Branches: aws.StringSlice(testCase.branches)}, testCase.apiError).Maybe()
codeCommitClient.
On("ListBranchesWithContext", ctx, &codecommit.ListBranchesInput{
RepositoryName: aws.String(name),
}).
Return(&codecommit.ListBranchesOutput{Branches: aws.StringSlice(testCase.branches)}, testCase.apiError)
} else {
codeCommitClient.EXPECT().GetRepositoryWithContext(mock.Anything, &codecommit.GetRepositoryInput{RepositoryName: aws.String(name)}).
codeCommitClient.
On("GetRepositoryWithContext", ctx, &codecommit.GetRepositoryInput{RepositoryName: aws.String(name)}).
Return(&codecommit.GetRepositoryOutput{RepositoryMetadata: &codecommit.RepositoryMetadata{
AccountId: aws.String(organization),
DefaultBranch: aws.String(defaultBranch),
}}, testCase.apiError).Maybe()
}}, testCase.apiError)
}
provider := &AWSCodeCommitProvider{
codeCommitClient: codeCommitClient,
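
One behavioral difference worth noting in this conversion: mockery's .Maybe() marks an expectation as optional, while an expectation without it is asserted when AssertExpectations runs (which NewXxx(t) constructors register automatically via t.Cleanup). A small sketch of the distinction on a hand-rolled mock; svcMock is hypothetical:

package maybedemo

import (
	"testing"

	"github.com/stretchr/testify/mock"
)

type svcMock struct{ mock.Mock }

func (m *svcMock) Ping() error { return m.Called().Error(0) }

func TestMaybeVsStrict(t *testing.T) {
	m := &svcMock{}

	// Optional: the test passes whether or not Ping is ever invoked.
	m.On("Ping").Return(nil).Maybe()

	// Strict alternative: with .Once() instead, AssertExpectations
	// would fail unless Ping ran exactly one time.
	// m.On("Ping").Return(nil).Once()

	m.AssertExpectations(t) // succeeds; the Maybe() call was never required
}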

View File

@@ -1,6 +1,7 @@
package scm_provider
import (
"context"
"errors"
"fmt"
"testing"
@@ -15,7 +16,6 @@ import (
azureGit "github.com/microsoft/azure-devops-go-api/azuredevops/v7/git"
azureMock "github.com/argoproj/argo-cd/v3/applicationset/services/scm_provider/azure_devops/git/mocks"
"github.com/argoproj/argo-cd/v3/applicationset/services/scm_provider/mocks"
)
func s(input string) *string {
@@ -78,13 +78,13 @@ func TestAzureDevopsRepoHasPath(t *testing.T) {
for _, testCase := range testCases {
t.Run(testCase.name, func(t *testing.T) {
gitClientMock := &azureMock.Client{}
gitClientMock := azureMock.Client{}
clientFactoryMock := &mocks.AzureDevOpsClientFactory{}
clientFactoryMock.EXPECT().GetClient(mock.Anything).Return(gitClientMock, testCase.clientError)
clientFactoryMock := &AzureClientFactoryMock{mock: &mock.Mock{}}
clientFactoryMock.mock.On("GetClient", mock.Anything).Return(&gitClientMock, testCase.clientError)
repoId := &uuid
gitClientMock.EXPECT().GetItem(mock.Anything, azureGit.GetItemArgs{Project: &teamProject, Path: &path, VersionDescriptor: &azureGit.GitVersionDescriptor{Version: &branchName}, RepositoryId: repoId}).Return(nil, testCase.azureDevopsError)
gitClientMock.On("GetItem", ctx, azureGit.GetItemArgs{Project: &teamProject, Path: &path, VersionDescriptor: &azureGit.GitVersionDescriptor{Version: &branchName}, RepositoryId: repoId}).Return(nil, testCase.azureDevopsError)
provider := AzureDevOpsProvider{organization: organization, teamProject: teamProject, clientFactory: clientFactoryMock}
@@ -143,12 +143,12 @@ func TestGetDefaultBranchOnDisabledRepo(t *testing.T) {
t.Run(testCase.name, func(t *testing.T) {
uuid := uuid.New().String()
gitClientMock := azureMock.NewClient(t)
gitClientMock := azureMock.Client{}
clientFactoryMock := &mocks.AzureDevOpsClientFactory{}
clientFactoryMock.EXPECT().GetClient(mock.Anything).Return(gitClientMock, nil)
clientFactoryMock := &AzureClientFactoryMock{mock: &mock.Mock{}}
clientFactoryMock.mock.On("GetClient", mock.Anything).Return(&gitClientMock, nil)
gitClientMock.EXPECT().GetBranch(mock.Anything, azureGit.GetBranchArgs{RepositoryId: &repoName, Project: &teamProject, Name: &defaultBranch}).Return(nil, testCase.azureDevOpsError)
gitClientMock.On("GetBranch", ctx, azureGit.GetBranchArgs{RepositoryId: &repoName, Project: &teamProject, Name: &defaultBranch}).Return(nil, testCase.azureDevOpsError)
repo := &Repository{Organization: organization, Repository: repoName, RepositoryId: uuid, Branch: defaultBranch}
@@ -162,6 +162,8 @@ func TestGetDefaultBranchOnDisabledRepo(t *testing.T) {
}
assert.Empty(t, branches)
gitClientMock.AssertExpectations(t)
})
}
}
@@ -200,12 +202,12 @@ func TestGetAllBranchesOnDisabledRepo(t *testing.T) {
t.Run(testCase.name, func(t *testing.T) {
uuid := uuid.New().String()
gitClientMock := azureMock.NewClient(t)
gitClientMock := azureMock.Client{}
clientFactoryMock := &mocks.AzureDevOpsClientFactory{}
clientFactoryMock.EXPECT().GetClient(mock.Anything).Return(gitClientMock, nil)
clientFactoryMock := &AzureClientFactoryMock{mock: &mock.Mock{}}
clientFactoryMock.mock.On("GetClient", mock.Anything).Return(&gitClientMock, nil)
gitClientMock.EXPECT().GetBranches(mock.Anything, azureGit.GetBranchesArgs{RepositoryId: &repoName, Project: &teamProject}).Return(nil, testCase.azureDevOpsError)
gitClientMock.On("GetBranches", ctx, azureGit.GetBranchesArgs{RepositoryId: &repoName, Project: &teamProject}).Return(nil, testCase.azureDevOpsError)
repo := &Repository{Organization: organization, Repository: repoName, RepositoryId: uuid, Branch: defaultBranch}
@@ -219,6 +221,8 @@ func TestGetAllBranchesOnDisabledRepo(t *testing.T) {
}
assert.Empty(t, branches)
gitClientMock.AssertExpectations(t)
})
}
}
@@ -237,12 +241,12 @@ func TestAzureDevOpsGetDefaultBranchStripsRefsName(t *testing.T) {
branchReturn := &azureGit.GitBranchStats{Name: &strippedBranchName, Commit: &azureGit.GitCommitRef{CommitId: s("abc123233223")}}
repo := &Repository{Organization: organization, Repository: repoName, RepositoryId: uuid, Branch: defaultBranch}
gitClientMock := &azureMock.Client{}
gitClientMock := azureMock.Client{}
clientFactoryMock := &mocks.AzureDevOpsClientFactory{}
clientFactoryMock.EXPECT().GetClient(mock.Anything).Return(gitClientMock, nil)
clientFactoryMock := &AzureClientFactoryMock{mock: &mock.Mock{}}
clientFactoryMock.mock.On("GetClient", mock.Anything).Return(&gitClientMock, nil)
gitClientMock.EXPECT().GetBranch(mock.Anything, azureGit.GetBranchArgs{RepositoryId: &repoName, Project: &teamProject, Name: &strippedBranchName}).Return(branchReturn, nil).Maybe()
gitClientMock.On("GetBranch", ctx, azureGit.GetBranchArgs{RepositoryId: &repoName, Project: &teamProject, Name: &strippedBranchName}).Return(branchReturn, nil)
provider := AzureDevOpsProvider{organization: organization, teamProject: teamProject, clientFactory: clientFactoryMock, allBranches: false}
branches, err := provider.GetBranches(ctx, repo)
@@ -291,12 +295,12 @@ func TestAzureDevOpsGetBranchesDefultBranchOnly(t *testing.T) {
for _, testCase := range testCases {
t.Run(testCase.name, func(t *testing.T) {
gitClientMock := &azureMock.Client{}
gitClientMock := azureMock.Client{}
clientFactoryMock := &mocks.AzureDevOpsClientFactory{}
clientFactoryMock.EXPECT().GetClient(mock.Anything).Return(gitClientMock, testCase.clientError)
clientFactoryMock := &AzureClientFactoryMock{mock: &mock.Mock{}}
clientFactoryMock.mock.On("GetClient", mock.Anything).Return(&gitClientMock, testCase.clientError)
gitClientMock.EXPECT().GetBranch(mock.Anything, azureGit.GetBranchArgs{RepositoryId: &repoName, Project: &teamProject, Name: &defaultBranch}).Return(testCase.expectedBranch, testCase.getBranchesAPIError)
gitClientMock.On("GetBranch", ctx, azureGit.GetBranchArgs{RepositoryId: &repoName, Project: &teamProject, Name: &defaultBranch}).Return(testCase.expectedBranch, testCase.getBranchesAPIError)
repo := &Repository{Organization: organization, Repository: repoName, RepositoryId: uuid, Branch: defaultBranch}
@@ -375,12 +379,12 @@ func TestAzureDevopsGetBranches(t *testing.T) {
for _, testCase := range testCases {
t.Run(testCase.name, func(t *testing.T) {
gitClientMock := &azureMock.Client{}
gitClientMock := azureMock.Client{}
clientFactoryMock := &mocks.AzureDevOpsClientFactory{}
clientFactoryMock.EXPECT().GetClient(mock.Anything).Return(gitClientMock, testCase.clientError)
clientFactoryMock := &AzureClientFactoryMock{mock: &mock.Mock{}}
clientFactoryMock.mock.On("GetClient", mock.Anything).Return(&gitClientMock, testCase.clientError)
gitClientMock.EXPECT().GetBranches(mock.Anything, azureGit.GetBranchesArgs{RepositoryId: &repoName, Project: &teamProject}).Return(testCase.expectedBranches, testCase.getBranchesAPIError)
gitClientMock.On("GetBranches", ctx, azureGit.GetBranchesArgs{RepositoryId: &repoName, Project: &teamProject}).Return(testCase.expectedBranches, testCase.getBranchesAPIError)
repo := &Repository{Organization: organization, Repository: repoName, RepositoryId: uuid}
@@ -423,6 +427,7 @@ func TestGetAzureDevopsRepositories(t *testing.T) {
teamProject := "myorg_project"
uuid := uuid.New()
ctx := t.Context()
repoId := &uuid
@@ -472,15 +477,15 @@ func TestGetAzureDevopsRepositories(t *testing.T) {
for _, testCase := range testCases {
t.Run(testCase.name, func(t *testing.T) {
gitClientMock := azureMock.NewClient(t)
gitClientMock.EXPECT().GetRepositories(mock.Anything, azureGit.GetRepositoriesArgs{Project: s(teamProject)}).Return(&testCase.repositories, testCase.getRepositoriesError)
gitClientMock := azureMock.Client{}
gitClientMock.On("GetRepositories", ctx, azureGit.GetRepositoriesArgs{Project: s(teamProject)}).Return(&testCase.repositories, testCase.getRepositoriesError)
clientFactoryMock := &mocks.AzureDevOpsClientFactory{}
clientFactoryMock.EXPECT().GetClient(mock.Anything).Return(gitClientMock, nil)
clientFactoryMock := &AzureClientFactoryMock{mock: &mock.Mock{}}
clientFactoryMock.mock.On("GetClient", mock.Anything).Return(&gitClientMock)
provider := AzureDevOpsProvider{organization: organization, teamProject: teamProject, clientFactory: clientFactoryMock}
repositories, err := provider.ListRepos(t.Context(), "https")
repositories, err := provider.ListRepos(ctx, "https")
if testCase.getRepositoriesError != nil {
require.Error(t, err, "Expected an error from test case %v", testCase.name)
@@ -492,6 +497,31 @@ func TestGetAzureDevopsRepositories(t *testing.T) {
assert.NotEmpty(t, repositories)
assert.Len(t, repositories, testCase.expectedNumberOfRepos)
}
gitClientMock.AssertExpectations(t)
})
}
}
type AzureClientFactoryMock struct {
mock *mock.Mock
}
func (m *AzureClientFactoryMock) GetClient(ctx context.Context) (azureGit.Client, error) {
args := m.mock.Called(ctx)
var client azureGit.Client
c := args.Get(0)
if c != nil {
client = c.(azureGit.Client)
}
var err error
if len(args) > 1 {
if e, ok := args.Get(1).(error); ok {
err = e
}
}
return client, err
}

View File

@@ -30,7 +30,7 @@ func (c *ExtendedClient) GetContents(repo *Repository, path string) (bool, error
urlStr += fmt.Sprintf("/repositories/%s/%s/src/%s/%s?format=meta", c.owner, repo.Repository, repo.SHA, path)
body := strings.NewReader("")
req, err := http.NewRequestWithContext(context.Background(), http.MethodGet, urlStr, body)
req, err := http.NewRequest(http.MethodGet, urlStr, body)
if err != nil {
return false, err
}
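
The two request constructors in this hunk differ only in cancellation: http.NewRequest implicitly binds context.Background(), while http.NewRequestWithContext lets a caller's deadline propagate into the HTTP round trip. A minimal sketch, with the URL and timeout as assumptions:

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

func fetchMeta(ctx context.Context, url string) (int, error) {
	// Abandon the request if the caller's context expires mid-flight.
	ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, http.NoBody)
	if err != nil {
		return 0, err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	return resp.StatusCode, nil
}

func main() {
	code, err := fetchMeta(context.Background(), "https://example.com")
	fmt.Println(code, err)
}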

View File

@@ -1,101 +0,0 @@
// Code generated by mockery; DO NOT EDIT.
// github.com/vektra/mockery
// template: testify
package mocks
import (
"context"
"github.com/microsoft/azure-devops-go-api/azuredevops/v7/git"
mock "github.com/stretchr/testify/mock"
)
// NewAzureDevOpsClientFactory creates a new instance of AzureDevOpsClientFactory. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
// The first argument is typically a *testing.T value.
func NewAzureDevOpsClientFactory(t interface {
mock.TestingT
Cleanup(func())
}) *AzureDevOpsClientFactory {
mock := &AzureDevOpsClientFactory{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
// AzureDevOpsClientFactory is an autogenerated mock type for the AzureDevOpsClientFactory type
type AzureDevOpsClientFactory struct {
mock.Mock
}
type AzureDevOpsClientFactory_Expecter struct {
mock *mock.Mock
}
func (_m *AzureDevOpsClientFactory) EXPECT() *AzureDevOpsClientFactory_Expecter {
return &AzureDevOpsClientFactory_Expecter{mock: &_m.Mock}
}
// GetClient provides a mock function for the type AzureDevOpsClientFactory
func (_mock *AzureDevOpsClientFactory) GetClient(ctx context.Context) (git.Client, error) {
ret := _mock.Called(ctx)
if len(ret) == 0 {
panic("no return value specified for GetClient")
}
var r0 git.Client
var r1 error
if returnFunc, ok := ret.Get(0).(func(context.Context) (git.Client, error)); ok {
return returnFunc(ctx)
}
if returnFunc, ok := ret.Get(0).(func(context.Context) git.Client); ok {
r0 = returnFunc(ctx)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(git.Client)
}
}
if returnFunc, ok := ret.Get(1).(func(context.Context) error); ok {
r1 = returnFunc(ctx)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// AzureDevOpsClientFactory_GetClient_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'GetClient'
type AzureDevOpsClientFactory_GetClient_Call struct {
*mock.Call
}
// GetClient is a helper method to define mock.On call
// - ctx context.Context
func (_e *AzureDevOpsClientFactory_Expecter) GetClient(ctx interface{}) *AzureDevOpsClientFactory_GetClient_Call {
return &AzureDevOpsClientFactory_GetClient_Call{Call: _e.mock.On("GetClient", ctx)}
}
func (_c *AzureDevOpsClientFactory_GetClient_Call) Run(run func(ctx context.Context)) *AzureDevOpsClientFactory_GetClient_Call {
_c.Call.Run(func(args mock.Arguments) {
var arg0 context.Context
if args[0] != nil {
arg0 = args[0].(context.Context)
}
run(
arg0,
)
})
return _c
}
func (_c *AzureDevOpsClientFactory_GetClient_Call) Return(client git.Client, err error) *AzureDevOpsClientFactory_GetClient_Call {
_c.Call.Return(client, err)
return _c
}
func (_c *AzureDevOpsClientFactory_GetClient_Call) RunAndReturn(run func(ctx context.Context) (git.Client, error)) *AzureDevOpsClientFactory_GetClient_Call {
_c.Call.Return(run)
return _c
}

View File

@@ -1,6 +1,7 @@
package services
import (
"context"
"crypto/tls"
"net/http"
"testing"
@@ -11,7 +12,7 @@ import (
)
func TestSetupBitbucketClient(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
cfg := &bitbucketv1.Configuration{}
// Act
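
The swap between t.Context() and context.Background() is also behavioral: t.Context() (Go 1.24+) returns a context that is canceled just before the test's cleanup functions run, while context.Background() never cancels. A short sketch of that lifetime:

package ctxdemo

import "testing"

func TestContextLifetime(t *testing.T) {
	ctx := t.Context() // canceled just before t.Cleanup callbacks run

	t.Cleanup(func() {
		// By the time cleanups execute, ctx is already done.
		if ctx.Err() == nil {
			t.Error("expected test context to be canceled during cleanup")
		}
	})

	// During the test body the context is still live.
	if ctx.Err() != nil {
		t.Fatal("context should not be canceled mid-test")
	}
}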

assets/swagger.json (generated)
View File

@@ -5619,9 +5619,6 @@
"statusBadgeRootUrl": {
"type": "string"
},
"syncWithReplaceAllowed": {
"type": "boolean"
},
"trackingMethod": {
"type": "string"
},
@@ -7080,7 +7077,7 @@
},
"status": {
"type": "string",
"title": "Status contains the AppSet's perceived status of the managed Application resource"
"title": "Status contains the AppSet's perceived status of the managed Application resource: (Waiting, Pending, Progressing, Healthy)"
},
"step": {
"type": "string",
@@ -8257,7 +8254,7 @@
}
},
"v1alpha1ConfigMapKeyRef": {
"description": "ConfigMapKeyRef struct for a reference to a configmap key.",
"description": "Utility struct for a reference to a configmap key.",
"type": "object",
"properties": {
"configMapName": {
@@ -9339,7 +9336,7 @@
}
},
"v1alpha1PullRequestGeneratorGithub": {
"description": "PullRequestGeneratorGithub defines connection info specific to GitHub.",
"description": "PullRequestGenerator defines connection info specific to GitHub.",
"type": "object",
"properties": {
"api": {
@@ -9480,11 +9477,6 @@
"connectionState": {
"$ref": "#/definitions/v1alpha1ConnectionState"
},
"depth": {
"description": "Depth specifies the depth for shallow clones. A value of 0 or omitting the field indicates a full clone.",
"type": "integer",
"format": "int64"
},
"enableLfs": {
"description": "EnableLFS specifies whether git-lfs support should be enabled for this repo. Only valid for Git repositories.",
"type": "boolean"
@@ -10348,7 +10340,7 @@
}
},
"v1alpha1SecretRef": {
"description": "SecretRef struct for a reference to a secret key.",
"description": "Utility struct for a reference to a secret key.",
"type": "object",
"properties": {
"key": {

View File

@@ -38,7 +38,7 @@ func NewCommand() *cobra.Command {
Use: "argocd-commit-server",
Short: "Run Argo CD Commit Server",
Long: "Argo CD Commit Server is an internal service which commits and pushes hydrated manifests to git. This command runs Commit Server in the foreground.",
RunE: func(cmd *cobra.Command, _ []string) error {
RunE: func(_ *cobra.Command, _ []string) error {
vers := common.GetVersion()
vers.LogStartupInfo(
"Argo CD Commit Server",
@@ -59,10 +59,8 @@ func NewCommand() *cobra.Command {
server := commitserver.NewServer(askPassServer, metricsServer)
grpc := server.CreateGRPC()
ctx := cmd.Context()
lc := &net.ListenConfig{}
listener, err := lc.Listen(ctx, "tcp", fmt.Sprintf("%s:%d", listenHost, listenPort))
listener, err := net.Listen("tcp", fmt.Sprintf("%s:%d", listenHost, listenPort))
errors.CheckError(err)
healthz.ServeHealthCheck(http.DefaultServeMux, func(r *http.Request) error {
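
The two listen forms in this hunk differ only in context plumbing: net.Listen cannot be interrupted while binding, whereas net.ListenConfig.Listen threads a context through the bind. A minimal sketch, with host and port handling assumed:

package main

import (
	"context"
	"fmt"
	"net"
)

func listen(ctx context.Context, host string, port int) (net.Listener, error) {
	lc := &net.ListenConfig{}
	// Binding is abandoned if ctx is canceled before the socket is ready.
	return lc.Listen(ctx, "tcp", fmt.Sprintf("%s:%d", host, port))
}

func main() {
	ln, err := listen(context.Background(), "127.0.0.1", 0) // port 0: any free port
	if err != nil {
		panic(err)
	}
	defer ln.Close()
	fmt.Println("listening on", ln.Addr())
}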

View File

@@ -115,7 +115,7 @@ func NewRunDexCommand() *cobra.Command {
err = os.WriteFile("/tmp/dex.yaml", dexCfgBytes, 0o644)
errors.CheckError(err)
log.Debug(redactor(string(dexCfgBytes)))
cmd = exec.CommandContext(ctx, "dex", "serve", "/tmp/dex.yaml")
cmd = exec.Command("dex", "serve", "/tmp/dex.yaml")
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
err = cmd.Start()
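
Likewise, exec.CommandContext kills the child process when the context is canceled, while exec.Command leaves process lifetime entirely to the caller. A hedged sketch of the context-aware form being dropped here, using sleep as a stand-in for dex:

package main

import (
	"context"
	"os"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// The child receives SIGKILL if ctx expires before it exits.
	cmd := exec.CommandContext(ctx, "sleep", "30")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		// On timeout, ctx.Err() reports the deadline; the command error
		// itself reads "signal: killed".
		os.Exit(1)
	}
}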

View File

@@ -80,6 +80,7 @@ func NewCommand() *cobra.Command {
includeHiddenDirectories bool
cmpUseManifestGeneratePaths bool
ociMediaTypes []string
enableBuiltinGitConfig bool
)
command := cobra.Command{
Use: cliName,
@@ -155,6 +156,7 @@ func NewCommand() *cobra.Command {
IncludeHiddenDirectories: includeHiddenDirectories,
CMPUseManifestGeneratePaths: cmpUseManifestGeneratePaths,
OCIMediaTypes: ociMediaTypes,
EnableBuiltinGitConfig: enableBuiltinGitConfig,
}, askPassServer)
errors.CheckError(err)
@@ -169,8 +171,7 @@ func NewCommand() *cobra.Command {
}
grpc := server.CreateGRPC()
lc := &net.ListenConfig{}
listener, err := lc.Listen(ctx, "tcp", fmt.Sprintf("%s:%d", listenHost, listenPort))
listener, err := net.Listen("tcp", fmt.Sprintf("%s:%d", listenHost, listenPort))
errors.CheckError(err)
healthz.ServeHealthCheck(http.DefaultServeMux, func(r *http.Request) error {
@@ -265,6 +266,7 @@ func NewCommand() *cobra.Command {
command.Flags().BoolVar(&includeHiddenDirectories, "include-hidden-directories", env.ParseBoolFromEnv("ARGOCD_REPO_SERVER_INCLUDE_HIDDEN_DIRECTORIES", false), "Include hidden directories from Git")
command.Flags().BoolVar(&cmpUseManifestGeneratePaths, "plugin-use-manifest-generate-paths", env.ParseBoolFromEnv("ARGOCD_REPO_SERVER_PLUGIN_USE_MANIFEST_GENERATE_PATHS", false), "Pass the resources described in argocd.argoproj.io/manifest-generate-paths value to the cmpserver to generate the application manifests.")
command.Flags().StringSliceVar(&ociMediaTypes, "oci-layer-media-types", env.StringsFromEnv("ARGOCD_REPO_SERVER_OCI_LAYER_MEDIA_TYPES", []string{"application/vnd.oci.image.layer.v1.tar", "application/vnd.oci.image.layer.v1.tar+gzip", "application/vnd.cncf.helm.chart.content.v1.tar+gzip"}, ","), "Comma-separated list of allowed OCI media types. This only accounts for media types within layers.")
command.Flags().BoolVar(&enableBuiltinGitConfig, "enable-builtin-git-config", env.ParseBoolFromEnv("ARGOCD_REPO_SERVER_ENABLE_BUILTIN_GIT_CONFIG", true), "Enable builtin git configuration options that are required for correct argocd-repo-server operation.")
tlsConfigCustomizerSrc = tls.AddTLSFlagsToCmd(&command)
cacheSrc = reposervercache.AddCacheFlagsToCmd(&command, cacheutil.Options{
OnClientCreated: func(client *redis.Client) {
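
The new enable-builtin-git-config flag follows the file's existing convention of defaulting flags from environment variables. A reduced sketch of that pattern; parseBoolFromEnv is a local stand-in for the util/env helper used in the hunk:

package main

import (
	"fmt"
	"os"
	"strconv"

	"github.com/spf13/cobra"
)

// parseBoolFromEnv mirrors env.ParseBoolFromEnv: the environment variable,
// when set and parseable, overrides the compiled-in default.
func parseBoolFromEnv(key string, def bool) bool {
	if v, ok := os.LookupEnv(key); ok {
		if b, err := strconv.ParseBool(v); err == nil {
			return b
		}
	}
	return def
}

func main() {
	var enableBuiltinGitConfig bool
	cmd := &cobra.Command{
		Use: "demo",
		RunE: func(_ *cobra.Command, _ []string) error {
			fmt.Println("builtin git config enabled:", enableBuiltinGitConfig)
			return nil
		},
	}
	cmd.Flags().BoolVar(&enableBuiltinGitConfig, "enable-builtin-git-config",
		parseBoolFromEnv("ARGOCD_REPO_SERVER_ENABLE_BUILTIN_GIT_CONFIG", true),
		"Enable builtin git configuration required for correct operation.")
	_ = cmd.Execute()
}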

View File

@@ -105,26 +105,26 @@ func TestGetReconcileResults_Refresh(t *testing.T) {
appClientset := appfake.NewSimpleClientset(app, proj)
deployment := test.NewDeployment()
kubeClientset := kubefake.NewClientset(deployment, argoCM, argoCDSecret)
clusterCache := &clustermocks.ClusterCache{}
clusterCache.EXPECT().IsNamespaced(mock.Anything).Return(true, nil)
clusterCache.EXPECT().GetGVKParser().Return(nil)
repoServerClient := &mocks.RepoServerServiceClient{}
repoServerClient.EXPECT().GenerateManifest(mock.Anything, mock.Anything).Return(&argocdclient.ManifestResponse{
clusterCache := clustermocks.ClusterCache{}
clusterCache.On("IsNamespaced", mock.Anything).Return(true, nil)
clusterCache.On("GetGVKParser", mock.Anything).Return(nil)
repoServerClient := mocks.RepoServerServiceClient{}
repoServerClient.On("GenerateManifest", mock.Anything, mock.Anything).Return(&argocdclient.ManifestResponse{
Manifests: []string{test.DeploymentManifest},
}, nil)
repoServerClientset := &mocks.Clientset{RepoServerServiceClient: repoServerClient}
liveStateCache := &cachemocks.LiveStateCache{}
liveStateCache.EXPECT().GetManagedLiveObjs(mock.Anything, mock.Anything, mock.Anything).Return(map[kube.ResourceKey]*unstructured.Unstructured{
repoServerClientset := mocks.Clientset{RepoServerServiceClient: &repoServerClient}
liveStateCache := cachemocks.LiveStateCache{}
liveStateCache.On("GetManagedLiveObjs", mock.Anything, mock.Anything, mock.Anything).Return(map[kube.ResourceKey]*unstructured.Unstructured{
kube.GetResourceKey(deployment): deployment,
}, nil)
liveStateCache.EXPECT().GetVersionsInfo(mock.Anything).Return("v1.2.3", nil, nil)
liveStateCache.EXPECT().Init().Return(nil)
liveStateCache.EXPECT().GetClusterCache(mock.Anything).Return(clusterCache, nil)
liveStateCache.EXPECT().IsNamespaced(mock.Anything, mock.Anything).Return(true, nil)
liveStateCache.On("GetVersionsInfo", mock.Anything).Return("v1.2.3", nil, nil)
liveStateCache.On("Init").Return(nil, nil)
liveStateCache.On("GetClusterCache", mock.Anything).Return(&clusterCache, nil)
liveStateCache.On("IsNamespaced", mock.Anything, mock.Anything).Return(true, nil)
result, err := reconcileApplications(ctx, kubeClientset, appClientset, "default", repoServerClientset, "",
result, err := reconcileApplications(ctx, kubeClientset, appClientset, "default", &repoServerClientset, "",
func(_ db.ArgoDB, _ cache.SharedIndexInformer, _ *settings.SettingsManager, _ *metrics.MetricsServer) statecache.LiveStateCache {
return liveStateCache
return &liveStateCache
},
false,
normalizers.IgnoreNormalizerOpts{},

View File

@@ -40,7 +40,9 @@ func captureStdout(callback func()) (string, error) {
return string(data), err
}
func newSettingsManager(ctx context.Context, data map[string]string) *settings.SettingsManager {
func newSettingsManager(data map[string]string) *settings.SettingsManager {
ctx := context.Background()
clientset := fake.NewClientset(&corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Namespace: "default",
@@ -67,8 +69,8 @@ type fakeCmdContext struct {
mgr *settings.SettingsManager
}
func newCmdContext(ctx context.Context, data map[string]string) *fakeCmdContext {
return &fakeCmdContext{mgr: newSettingsManager(ctx, data)}
func newCmdContext(data map[string]string) *fakeCmdContext {
return &fakeCmdContext{mgr: newSettingsManager(data)}
}
func (ctx *fakeCmdContext) createSettingsManager(context.Context) (*settings.SettingsManager, error) {
@@ -180,7 +182,7 @@ admissionregistration.k8s.io/MutatingWebhookConfiguration:
if !assert.True(t, ok) {
return
}
summary, err := validator(newSettingsManager(t.Context(), tc.data))
summary, err := validator(newSettingsManager(tc.data))
if tc.containsSummary != "" {
require.NoError(t, err)
assert.Contains(t, summary, tc.containsSummary)
@@ -247,7 +249,7 @@ func tempFile(content string) (string, io.Closer, error) {
}
func TestValidateSettingsCommand_NoErrors(t *testing.T) {
cmd := NewValidateSettingsCommand(newCmdContext(t.Context(), map[string]string{}))
cmd := NewValidateSettingsCommand(newCmdContext(map[string]string{}))
out, err := captureStdout(func() {
err := cmd.Execute()
require.NoError(t, err)
@@ -265,7 +267,7 @@ func TestResourceOverrideIgnoreDifferences(t *testing.T) {
defer utilio.Close(closer)
t.Run("NoOverridesConfigured", func(t *testing.T) {
cmd := NewResourceOverridesCommand(newCmdContext(t.Context(), map[string]string{}))
cmd := NewResourceOverridesCommand(newCmdContext(map[string]string{}))
out, err := captureStdout(func() {
cmd.SetArgs([]string{"ignore-differences", f})
err := cmd.Execute()
@@ -276,7 +278,7 @@ func TestResourceOverrideIgnoreDifferences(t *testing.T) {
})
t.Run("DataIgnored", func(t *testing.T) {
cmd := NewResourceOverridesCommand(newCmdContext(t.Context(), map[string]string{
cmd := NewResourceOverridesCommand(newCmdContext(map[string]string{
"resource.customizations": `apps/Deployment:
ignoreDifferences: |
jsonPointers:
@@ -298,7 +300,7 @@ func TestResourceOverrideHealth(t *testing.T) {
defer utilio.Close(closer)
t.Run("NoHealthAssessment", func(t *testing.T) {
cmd := NewResourceOverridesCommand(newCmdContext(t.Context(), map[string]string{
cmd := NewResourceOverridesCommand(newCmdContext(map[string]string{
"resource.customizations": `example.com/ExampleResource: {}`,
}))
out, err := captureStdout(func() {
@@ -311,7 +313,7 @@ func TestResourceOverrideHealth(t *testing.T) {
})
t.Run("HealthAssessmentConfigured", func(t *testing.T) {
cmd := NewResourceOverridesCommand(newCmdContext(t.Context(), map[string]string{
cmd := NewResourceOverridesCommand(newCmdContext(map[string]string{
"resource.customizations": `example.com/ExampleResource:
health.lua: |
return { status = "Progressing" }
@@ -327,7 +329,7 @@ func TestResourceOverrideHealth(t *testing.T) {
})
t.Run("HealthAssessmentConfiguredWildcard", func(t *testing.T) {
cmd := NewResourceOverridesCommand(newCmdContext(t.Context(), map[string]string{
cmd := NewResourceOverridesCommand(newCmdContext(map[string]string{
"resource.customizations": `example.com/*:
health.lua: |
return { status = "Progressing" }
@@ -353,7 +355,7 @@ func TestResourceOverrideAction(t *testing.T) {
defer utilio.Close(closer)
t.Run("NoActions", func(t *testing.T) {
cmd := NewResourceOverridesCommand(newCmdContext(t.Context(), map[string]string{
cmd := NewResourceOverridesCommand(newCmdContext(map[string]string{
"resource.customizations": `apps/Deployment: {}`,
}))
out, err := captureStdout(func() {
@@ -366,7 +368,7 @@ func TestResourceOverrideAction(t *testing.T) {
})
t.Run("OldStyleActionConfigured", func(t *testing.T) {
cmd := NewResourceOverridesCommand(newCmdContext(t.Context(), map[string]string{
cmd := NewResourceOverridesCommand(newCmdContext(map[string]string{
"resource.customizations": `apps/Deployment:
actions: |
discovery.lua: |
@@ -402,7 +404,7 @@ resume false
})
t.Run("NewStyleActionConfigured", func(t *testing.T) {
cmd := NewResourceOverridesCommand(newCmdContext(t.Context(), map[string]string{
cmd := NewResourceOverridesCommand(newCmdContext(map[string]string{
"resource.customizations": `batch/CronJob:
actions: |
discovery.lua: |

View File

@@ -1794,7 +1794,6 @@ func NewApplicationListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
repo string
appNamespace string
cluster string
path string
)
command := &cobra.Command{
Use: "list",
@@ -1830,9 +1829,6 @@ func NewApplicationListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
if cluster != "" {
appList = argo.FilterByCluster(appList, cluster)
}
if path != "" {
appList = argo.FilterByPath(appList, path)
}
switch output {
case "yaml", "json":
@@ -1853,7 +1849,6 @@ func NewApplicationListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
command.Flags().StringVarP(&repo, "repo", "r", "", "List apps by source repo URL")
command.Flags().StringVarP(&appNamespace, "app-namespace", "N", "", "Only list applications in namespace")
command.Flags().StringVarP(&cluster, "cluster", "c", "", "List apps by cluster name or url")
command.Flags().StringVarP(&path, "path", "P", "", "List apps by path")
return command
}

View File

@@ -57,7 +57,7 @@ func NewApplicationGetResourceCommand(clientOpts *argocdclient.ClientOptions) *c
# Get a specific resource with managed fields, Pod my-app-pod, in 'my-app' by name in wide format
argocd app get-resource my-app --kind Pod --resource-name my-app-pod --show-managed-fields
# Get the details of a specific field in a resource in 'my-app' in the wide format
argocd app get-resource my-app --kind Pod --filter-fields status.podIP
# Get the details of multiple specific fields in a specific resource in 'my-app' in the wide format

View File

@@ -1,103 +0,0 @@
package commands
import (
"regexp"
"strings"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
appsv1 "github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
)
// splitColumns splits a line produced by tabwriter using runs of 2 or more spaces
// as delimiters to obtain logical columns regardless of alignment padding.
func splitColumns(line string) []string {
re := regexp.MustCompile(`\s{2,}`)
return re.Split(strings.TrimSpace(line), -1)
}
func Test_printKeyTable_Empty(t *testing.T) {
out, err := captureOutput(func() error {
printKeyTable([]appsv1.GnuPGPublicKey{})
return nil
})
require.NoError(t, err)
lines := strings.Split(strings.TrimRight(out, "\n"), "\n")
require.Len(t, lines, 1)
headerCols := splitColumns(lines[0])
assert.Equal(t, []string{"KEYID", "TYPE", "IDENTITY"}, headerCols)
}
func Test_printKeyTable_Single(t *testing.T) {
keys := []appsv1.GnuPGPublicKey{
{
KeyID: "ABCDEF1234567890",
SubType: "rsa4096",
Owner: "Alice <alice@example.com>",
},
}
out, err := captureOutput(func() error {
printKeyTable(keys)
return nil
})
require.NoError(t, err)
lines := strings.Split(strings.TrimRight(out, "\n"), "\n")
require.Len(t, lines, 2)
// Header
assert.Equal(t, []string{"KEYID", "TYPE", "IDENTITY"}, splitColumns(lines[0]))
// Row
row := splitColumns(lines[1])
require.Len(t, row, 3)
assert.Equal(t, "ABCDEF1234567890", row[0])
assert.Equal(t, "RSA4096", row[1]) // subtype upper-cased
assert.Equal(t, "Alice <alice@example.com>", row[2])
}
func Test_printKeyTable_Multiple(t *testing.T) {
keys := []appsv1.GnuPGPublicKey{
{
KeyID: "ABCD",
SubType: "ed25519",
Owner: "User One <one@example.com>",
},
{
KeyID: "0123456789ABCDEF",
SubType: "rsa2048",
Owner: "Second User <second@example.com>",
},
}
out, err := captureOutput(func() error {
printKeyTable(keys)
return nil
})
require.NoError(t, err)
lines := strings.Split(strings.TrimRight(out, "\n"), "\n")
require.Len(t, lines, 3)
// Header
assert.Equal(t, []string{"KEYID", "TYPE", "IDENTITY"}, splitColumns(lines[0]))
// First row
row1 := splitColumns(lines[1])
require.Len(t, row1, 3)
assert.Equal(t, "ABCD", row1[0])
assert.Equal(t, "ED25519", row1[1])
assert.Equal(t, "User One <one@example.com>", row1[2])
// Second row
row2 := splitColumns(lines[2])
require.Len(t, row2, 3)
assert.Equal(t, "0123456789ABCDEF", row2[0])
assert.Equal(t, "RSA2048", row2[1])
assert.Equal(t, "Second User <second@example.com>", row2[2])
}
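
The splitColumns helper in the deleted test leans on text/tabwriter padding columns with at least two spaces, which makes runs of two-or-more spaces a safe column delimiter even when a cell contains single spaces. A quick sketch of why:

package main

import (
	"fmt"
	"regexp"
	"strings"
	"text/tabwriter"
)

func main() {
	var sb strings.Builder
	// minwidth 0, tabwidth 0, padding 2: columns end up separated by >= 2 spaces.
	w := tabwriter.NewWriter(&sb, 0, 0, 2, ' ', 0)
	fmt.Fprintln(w, "KEYID\tTYPE\tIDENTITY")
	fmt.Fprintln(w, "ABCD\tED25519\tUser One <one@example.com>")
	w.Flush()

	re := regexp.MustCompile(`\s{2,}`)
	for _, line := range strings.Split(strings.TrimRight(sb.String(), "\n"), "\n") {
		// "User One <one@example.com>" survives as one column: it only
		// contains single spaces.
		fmt.Println(re.Split(strings.TrimSpace(line), -1))
	}
}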

View File

@@ -213,8 +213,7 @@ func MaybeStartLocalServer(ctx context.Context, clientOpts *apiclient.ClientOpti
}
if port == nil || *port == 0 {
addr := *address + ":0"
lc := &net.ListenConfig{}
ln, err := lc.Listen(ctx, "tcp", addr)
ln, err := net.Listen("tcp", addr)
if err != nil {
return nil, fmt.Errorf("failed to listen on %q: %w", addr, err)
}

View File

@@ -3,11 +3,9 @@ package commands
import (
"errors"
"fmt"
"maps"
"os"
"os/exec"
"path/filepath"
"slices"
"strings"
"github.com/argoproj/argo-cd/v3/util/cli"
@@ -15,17 +13,18 @@ import (
log "github.com/sirupsen/logrus"
)
const prefix = "argocd"
// DefaultPluginHandler implements the PluginHandler interface
type DefaultPluginHandler struct {
lookPath func(file string) (string, error)
run func(cmd *exec.Cmd) error
ValidPrefixes []string
lookPath func(file string) (string, error)
run func(cmd *exec.Cmd) error
}
// NewDefaultPluginHandler instantiates a DefaultPluginHandler
func NewDefaultPluginHandler() *DefaultPluginHandler {
// NewDefaultPluginHandler instantiates the DefaultPluginHandler
func NewDefaultPluginHandler(validPrefixes []string) *DefaultPluginHandler {
return &DefaultPluginHandler{
lookPath: exec.LookPath,
ValidPrefixes: validPrefixes,
lookPath: exec.LookPath,
run: func(cmd *exec.Cmd) error {
return cmd.Run()
},
@@ -33,8 +32,8 @@ func NewDefaultPluginHandler() *DefaultPluginHandler {
}
// HandleCommandExecutionError processes the error returned from executing the command.
// It handles both standard Argo CD commands and plugin commands. We don't require returning
// an error, but we are doing it to cover various test scenarios.
// It handles both standard Argo CD commands and plugin commands. We don't require returning
// an error, but we do so to cover various test scenarios.
func (h *DefaultPluginHandler) HandleCommandExecutionError(err error, isArgocdCLI bool, args []string) error {
// the log level needs to be setup manually here since the initConfig()
// set by the cobra.OnInitialize() was never executed because cmd.Execute()
@@ -86,24 +85,27 @@ func (h *DefaultPluginHandler) handlePluginCommand(cmdArgs []string) (string, er
// lookForPlugin looks for a plugin in the PATH that starts with argocd prefix
func (h *DefaultPluginHandler) lookForPlugin(filename string) (string, bool) {
pluginName := fmt.Sprintf("%s-%s", prefix, filename)
path, err := h.lookPath(pluginName)
if err != nil {
// error if a plugin is found in a relative path
if errors.Is(err, exec.ErrDot) {
log.Errorf("Plugin '%s' found in relative path: %v", pluginName, err)
} else {
log.Warnf("error looking for plugin '%s': %v", pluginName, err)
for _, prefix := range h.ValidPrefixes {
pluginName := fmt.Sprintf("%s-%s", prefix, filename)
path, err := h.lookPath(pluginName)
if err != nil {
// error if a plugin is found in a relative path
if errors.Is(err, exec.ErrDot) {
log.Errorf("Plugin '%s' found in relative path: %v", pluginName, err)
} else {
log.Warnf("error looking for plugin '%s': %v", pluginName, err)
}
continue
}
return "", false
if path == "" {
return "", false
}
return path, true
}
if path == "" {
return "", false
}
return path, true
return "", false
}
// executePlugin implements PluginHandler and executes a plugin found
@@ -139,56 +141,3 @@ func (h *DefaultPluginHandler) command(name string, arg ...string) *exec.Cmd {
}
return cmd
}
// ListAvailablePlugins returns a list of plugin names that are available in the user's PATH
// for tab completion. It searches for executables matching the ValidPrefixes pattern.
func (h *DefaultPluginHandler) ListAvailablePlugins() []string {
// Track seen plugin names to avoid duplicates
seenPlugins := make(map[string]bool)
// Search through each directory in PATH
for _, dir := range filepath.SplitList(os.Getenv("PATH")) {
// Skip empty directories
if dir == "" {
continue
}
// Read directory contents
entries, err := os.ReadDir(dir)
if err != nil {
continue
}
// Check each file in the directory
for _, entry := range entries {
// Skip directories and non-executable files
if entry.IsDir() {
continue
}
name := entry.Name()
// Check if the file is a valid argocd plugin
pluginPrefix := prefix + "-"
if strings.HasPrefix(name, pluginPrefix) {
// Extract the plugin command name (everything after the prefix)
pluginName := strings.TrimPrefix(name, pluginPrefix)
// Skip empty plugin names or names with path separators
if pluginName == "" || strings.Contains(pluginName, "/") || strings.Contains(pluginName, "\\") {
continue
}
// Check if the file is executable
if info, err := entry.Info(); err == nil {
// On Unix-like systems, check executable bit
if info.Mode()&0o111 != 0 {
seenPlugins[pluginName] = true
}
}
}
}
}
return slices.Sorted(maps.Keys(seenPlugins))
}
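
Both versions of lookForPlugin special-case exec.ErrDot, which exec.LookPath returns (since Go 1.19) when the only match sits in a relative PATH entry such as ".". A minimal sketch of that guard:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func findPlugin(name string) (string, bool) {
	path, err := exec.LookPath(name)
	if err != nil {
		if errors.Is(err, exec.ErrDot) {
			// Found, but only via a relative PATH entry. Refuse it,
			// matching the handler's behavior above.
			fmt.Printf("plugin %q found in relative path, ignoring\n", name)
		}
		return "", false
	}
	return path, true
}

func main() {
	if path, ok := findPlugin("argocd-demo"); ok {
		fmt.Println("would execute", path)
	}
}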

View File

@@ -28,7 +28,7 @@ func setupPluginPath(t *testing.T) {
func TestNormalCommandWithPlugin(t *testing.T) {
setupPluginPath(t)
_ = NewDefaultPluginHandler()
_ = NewDefaultPluginHandler([]string{"argocd"})
args := []string{"argocd", "version", "--short", "--client"}
buf := new(bytes.Buffer)
cmd := NewVersionCmd(&argocdclient.ClientOptions{}, nil)
@@ -47,7 +47,7 @@ func TestNormalCommandWithPlugin(t *testing.T) {
func TestPluginExecution(t *testing.T) {
setupPluginPath(t)
pluginHandler := NewDefaultPluginHandler()
pluginHandler := NewDefaultPluginHandler([]string{"argocd"})
cmd := NewCommand()
cmd.SilenceErrors = true
cmd.SilenceUsage = true
@@ -101,7 +101,7 @@ func TestPluginExecution(t *testing.T) {
func TestNormalCommandError(t *testing.T) {
setupPluginPath(t)
pluginHandler := NewDefaultPluginHandler()
pluginHandler := NewDefaultPluginHandler([]string{"argocd"})
args := []string{"argocd", "version", "--non-existent-flag"}
cmd := NewVersionCmd(&argocdclient.ClientOptions{}, nil)
cmd.SetArgs(args[1:])
@@ -118,7 +118,7 @@ func TestNormalCommandError(t *testing.T) {
// TestUnknownCommandNoPlugin tests the scenario when the command is neither a normal ArgoCD command
// nor exists as a plugin
func TestUnknownCommandNoPlugin(t *testing.T) {
pluginHandler := NewDefaultPluginHandler()
pluginHandler := NewDefaultPluginHandler([]string{"argocd"})
cmd := NewCommand()
cmd.SilenceErrors = true
cmd.SilenceUsage = true
@@ -137,7 +137,7 @@ func TestUnknownCommandNoPlugin(t *testing.T) {
func TestPluginNoExecutePermission(t *testing.T) {
setupPluginPath(t)
pluginHandler := NewDefaultPluginHandler()
pluginHandler := NewDefaultPluginHandler([]string{"argocd"})
cmd := NewCommand()
cmd.SilenceErrors = true
cmd.SilenceUsage = true
@@ -156,7 +156,7 @@ func TestPluginNoExecutePermission(t *testing.T) {
func TestPluginExecutionError(t *testing.T) {
setupPluginPath(t)
pluginHandler := NewDefaultPluginHandler()
pluginHandler := NewDefaultPluginHandler([]string{"argocd"})
cmd := NewCommand()
cmd.SilenceErrors = true
cmd.SilenceUsage = true
@@ -187,7 +187,7 @@ func TestPluginInRelativePathIgnored(t *testing.T) {
t.Setenv("PATH", os.Getenv("PATH")+string(os.PathListSeparator)+relativePath)
pluginHandler := NewDefaultPluginHandler()
pluginHandler := NewDefaultPluginHandler([]string{"argocd"})
cmd := NewCommand()
cmd.SilenceErrors = true
cmd.SilenceUsage = true
@@ -206,7 +206,7 @@ func TestPluginInRelativePathIgnored(t *testing.T) {
func TestPluginFlagParsing(t *testing.T) {
setupPluginPath(t)
pluginHandler := NewDefaultPluginHandler()
pluginHandler := NewDefaultPluginHandler([]string{"argocd"})
tests := []struct {
name string
@@ -255,7 +255,7 @@ func TestPluginFlagParsing(t *testing.T) {
func TestPluginStatusCode(t *testing.T) {
setupPluginPath(t)
pluginHandler := NewDefaultPluginHandler()
pluginHandler := NewDefaultPluginHandler([]string{"argocd"})
tests := []struct {
name string
@@ -309,76 +309,3 @@ func TestPluginStatusCode(t *testing.T) {
})
}
}
// TestListAvailablePlugins tests the plugin discovery functionality for tab completion
func TestListAvailablePlugins(t *testing.T) {
setupPluginPath(t)
tests := []struct {
name string
validPrefix []string
expected []string
}{
{
name: "Standard argocd prefix finds plugins",
expected: []string{"demo_plugin", "error", "foo", "status-code-plugin", "test-plugin", "version"},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
pluginHandler := NewDefaultPluginHandler()
plugins := pluginHandler.ListAvailablePlugins()
assert.Equal(t, tt.expected, plugins)
})
}
}
// TestListAvailablePluginsEmptyPath tests plugin discovery when PATH is empty
func TestListAvailablePluginsEmptyPath(t *testing.T) {
// Set empty PATH
t.Setenv("PATH", "")
pluginHandler := NewDefaultPluginHandler()
plugins := pluginHandler.ListAvailablePlugins()
assert.Empty(t, plugins, "Should return empty list when PATH is empty")
}
// TestListAvailablePluginsNonExecutableFiles tests that non-executable files are ignored
func TestListAvailablePluginsNonExecutableFiles(t *testing.T) {
setupPluginPath(t)
pluginHandler := NewDefaultPluginHandler()
plugins := pluginHandler.ListAvailablePlugins()
// Should not include 'no-permission' since it's not executable
assert.NotContains(t, plugins, "no-permission")
}
// TestListAvailablePluginsDeduplication tests that duplicate plugins from different PATH dirs are handled
func TestListAvailablePluginsDeduplication(t *testing.T) {
// Create two temporary directories with the same plugin
dir1 := t.TempDir()
dir2 := t.TempDir()
// Create the same plugin in both directories
plugin1 := filepath.Join(dir1, "argocd-duplicate")
plugin2 := filepath.Join(dir2, "argocd-duplicate")
err := os.WriteFile(plugin1, []byte("#!/bin/bash\necho 'plugin1'\n"), 0o755)
require.NoError(t, err)
err = os.WriteFile(plugin2, []byte("#!/bin/bash\necho 'plugin2'\n"), 0o755)
require.NoError(t, err)
// Set PATH to include both directories
testPath := dir1 + string(os.PathListSeparator) + dir2
t.Setenv("PATH", testPath)
pluginHandler := NewDefaultPluginHandler()
plugins := pluginHandler.ListAvailablePlugins()
assert.Equal(t, []string{"duplicate"}, plugins)
}

View File

@@ -191,7 +191,6 @@ func NewRepoAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
repoOpts.Repo.NoProxy = repoOpts.NoProxy
repoOpts.Repo.ForceHttpBasicAuth = repoOpts.ForceHttpBasicAuth
repoOpts.Repo.UseAzureWorkloadIdentity = repoOpts.UseAzureWorkloadIdentity
repoOpts.Repo.Depth = repoOpts.Depth
if repoOpts.Repo.Type == "helm" && repoOpts.Repo.Name == "" {
errors.Fatal(errors.ErrorGeneric, "Must specify --name for repos of type 'helm'")

View File

@@ -44,11 +44,6 @@ func NewCommand() *cobra.Command {
},
DisableAutoGenTag: true,
SilenceUsage: true,
ValidArgsFunction: func(_ *cobra.Command, _ []string, _ string) ([]string, cobra.ShellCompDirective) {
// Return available plugin commands for tab completion
plugins := NewDefaultPluginHandler().ListAvailablePlugins()
return plugins, cobra.ShellCompDirectiveNoFileComp
},
}
command.AddCommand(NewCompletionCommand())

View File

@@ -24,7 +24,7 @@ func extractHealthStatusAndReason(node v1alpha1.ResourceNode) (healthStatus heal
healthStatus = node.Health.Status
reason = node.Health.Message
}
return healthStatus, reason
return
}
func treeViewAppGet(prefix string, uidToNodeMap map[string]v1alpha1.ResourceNode, parentToChildMap map[string][]string, parent v1alpha1.ResourceNode, mapNodeNameToResourceState map[string]*resourceState, w *tabwriter.Writer) {
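
This hunk trades a naked return for explicit values; with named results the two forms compile identically, the explicit one is just easier to scan. A tiny illustration:

package main

import "fmt"

// With named results, a bare `return` yields the current values of
// healthStatus and reason -- the style being replaced in the hunk.
func extract(ok bool) (healthStatus string, reason string) {
	if ok {
		healthStatus = "Healthy"
		reason = "all checks passed"
	}
	return healthStatus, reason // explicit form: same behavior
}

func main() {
	fmt.Println(extract(true))
}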

View File

@@ -83,11 +83,12 @@ func main() {
}
err := command.Execute()
// if an error is present, try to look for various scenarios
// if the err is non-nil, try to look for various scenarios
// such as if the error is from the execution of a normal argocd command,
// unknown command error or any other.
if err != nil {
pluginErr := cli.NewDefaultPluginHandler().HandleCommandExecutionError(err, isArgocdCLI, os.Args)
pluginHandler := cli.NewDefaultPluginHandler([]string{"argocd"})
pluginErr := pluginHandler.HandleCommandExecutionError(err, isArgocdCLI, os.Args)
if pluginErr != nil {
var exitErr *exec.ExitError
if errors.As(pluginErr, &exitErr) {
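
The truncated tail of this hunk unwraps exec.ExitError so the CLI can mirror the plugin's exit status. A sketch of that idiom (an assumed continuation, not the verbatim source):

package main

import (
	"errors"
	"os"
	"os/exec"
)

func main() {
	err := exec.Command("sh", "-c", "exit 3").Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Mirror the child's exit code instead of a generic failure.
		os.Exit(exitErr.ExitCode())
	}
	if err != nil {
		os.Exit(1)
	}
}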

View File

@@ -136,9 +136,9 @@ func AddAppFlags(command *cobra.Command, opts *AppOptions) {
command.Flags().StringVar(&opts.project, "project", "", "Application project name")
command.Flags().StringVar(&opts.syncPolicy, "sync-policy", "", "Set the sync policy (one of: manual (aliases of manual: none), automated (aliases of automated: auto, automatic))")
command.Flags().StringArrayVar(&opts.syncOptions, "sync-option", []string{}, "Add or remove a sync option, e.g add `Prune=false`. Remove using `!` prefix, e.g. `!Prune=false`")
command.Flags().BoolVar(&opts.autoPrune, "auto-prune", false, "Set automatic pruning for automated sync policy")
command.Flags().BoolVar(&opts.selfHeal, "self-heal", false, "Set self healing for automated sync policy")
command.Flags().BoolVar(&opts.allowEmpty, "allow-empty", false, "Set allow zero live resources for automated sync policy")
command.Flags().BoolVar(&opts.autoPrune, "auto-prune", false, "Set automatic pruning when sync is automated")
command.Flags().BoolVar(&opts.selfHeal, "self-heal", false, "Set self healing when sync is automated")
command.Flags().BoolVar(&opts.allowEmpty, "allow-empty", false, "Set allow zero live resources when sync is automated")
command.Flags().StringVar(&opts.namePrefix, "nameprefix", "", "Kustomize nameprefix")
command.Flags().StringVar(&opts.nameSuffix, "namesuffix", "", "Kustomize namesuffix")
command.Flags().StringVar(&opts.kustomizeVersion, "kustomize-version", "", "Kustomize version")
@@ -284,26 +284,25 @@ func SetAppSpecOptions(flags *pflag.FlagSet, spec *argoappv1.ApplicationSpec, ap
spec.SyncPolicy.Retry.Refresh = appOpts.retryRefresh
}
})
if flags.Changed("auto-prune") || flags.Changed("self-heal") || flags.Changed("allow-empty") {
if spec.SyncPolicy == nil {
spec.SyncPolicy = &argoappv1.SyncPolicy{}
}
if spec.SyncPolicy.Automated == nil {
disabled := false
spec.SyncPolicy.Automated = &argoappv1.SyncPolicyAutomated{Enabled: &disabled}
}
if flags.Changed("auto-prune") {
spec.SyncPolicy.Automated.Prune = appOpts.autoPrune
}
if flags.Changed("self-heal") {
spec.SyncPolicy.Automated.SelfHeal = appOpts.selfHeal
}
if flags.Changed("allow-empty") {
spec.SyncPolicy.Automated.AllowEmpty = appOpts.allowEmpty
if flags.Changed("auto-prune") {
if spec.SyncPolicy == nil || !spec.SyncPolicy.IsAutomatedSyncEnabled() {
log.Fatal("Cannot set --auto-prune: application not configured with automatic sync")
}
spec.SyncPolicy.Automated.Prune = appOpts.autoPrune
}
if flags.Changed("self-heal") {
if spec.SyncPolicy == nil || !spec.SyncPolicy.IsAutomatedSyncEnabled() {
log.Fatal("Cannot set --self-heal: application not configured with automatic sync")
}
spec.SyncPolicy.Automated.SelfHeal = appOpts.selfHeal
}
if flags.Changed("allow-empty") {
if spec.SyncPolicy == nil || !spec.SyncPolicy.IsAutomatedSyncEnabled() {
log.Fatal("Cannot set --allow-empty: application not configured with automatic sync")
}
spec.SyncPolicy.Automated.AllowEmpty = appOpts.allowEmpty
}
return visited
}
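
The rewritten branches above only act when the user explicitly passed a flag, which is exactly what pflag's Changed reports: true if the flag appeared on the command line, even when set to its default value. A reduced sketch of the guard; syncPolicy is a stand-in type:

package main

import (
	"fmt"

	"github.com/spf13/pflag"
)

type syncPolicy struct{ prune bool }

func main() {
	flags := pflag.NewFlagSet("app", pflag.ExitOnError)
	autoPrune := flags.Bool("auto-prune", false, "Set automatic pruning when sync is automated")
	_ = flags.Parse([]string{"--auto-prune=false"}) // explicitly set to the default

	var policy *syncPolicy // nil: automation not configured
	if flags.Changed("auto-prune") {
		if policy == nil {
			// Matches the hunk's guard: reject the flag rather than
			// silently enabling automation.
			fmt.Println("Cannot set --auto-prune: application not configured with automatic sync")
			return
		}
		policy.prune = *autoPrune
	}
}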

View File

@@ -267,47 +267,6 @@ func Test_setAppSpecOptions(t *testing.T) {
require.NoError(t, f.SetFlag("sync-option", "!a=1"))
assert.Nil(t, f.spec.SyncPolicy)
})
t.Run("AutoPruneFlag", func(t *testing.T) {
f := newAppOptionsFixture()
// syncPolicy is nil (automated.enabled = false)
require.NoError(t, f.SetFlag("auto-prune", "true"))
require.NotNil(t, f.spec.SyncPolicy.Automated.Enabled)
assert.False(t, *f.spec.SyncPolicy.Automated.Enabled)
assert.True(t, f.spec.SyncPolicy.Automated.Prune)
// automated.enabled = true
*f.spec.SyncPolicy.Automated.Enabled = true
require.NoError(t, f.SetFlag("auto-prune", "false"))
assert.True(t, *f.spec.SyncPolicy.Automated.Enabled)
assert.False(t, f.spec.SyncPolicy.Automated.Prune)
})
t.Run("SelfHealFlag", func(t *testing.T) {
f := newAppOptionsFixture()
require.NoError(t, f.SetFlag("self-heal", "true"))
require.NotNil(t, f.spec.SyncPolicy.Automated.Enabled)
assert.False(t, *f.spec.SyncPolicy.Automated.Enabled)
assert.True(t, f.spec.SyncPolicy.Automated.SelfHeal)
*f.spec.SyncPolicy.Automated.Enabled = true
require.NoError(t, f.SetFlag("self-heal", "false"))
assert.True(t, *f.spec.SyncPolicy.Automated.Enabled)
assert.False(t, f.spec.SyncPolicy.Automated.SelfHeal)
})
t.Run("AllowEmptyFlag", func(t *testing.T) {
f := newAppOptionsFixture()
require.NoError(t, f.SetFlag("allow-empty", "true"))
require.NotNil(t, f.spec.SyncPolicy.Automated.Enabled)
assert.False(t, *f.spec.SyncPolicy.Automated.Enabled)
assert.True(t, f.spec.SyncPolicy.Automated.AllowEmpty)
*f.spec.SyncPolicy.Automated.Enabled = true
require.NoError(t, f.SetFlag("allow-empty", "false"))
assert.True(t, *f.spec.SyncPolicy.Automated.Enabled)
assert.False(t, f.spec.SyncPolicy.Automated.AllowEmpty)
})
t.Run("RetryLimit", func(t *testing.T) {
require.NoError(t, f.SetFlag("sync-retry-limit", "5"))
assert.Equal(t, int64(5), f.spec.SyncPolicy.Retry.Limit)

View File

@@ -27,7 +27,6 @@ type RepoOptions struct {
GCPServiceAccountKeyPath string
ForceHttpBasicAuth bool //nolint:revive //FIXME(var-naming)
UseAzureWorkloadIdentity bool
Depth int64
}
func AddRepoFlags(command *cobra.Command, opts *RepoOptions) {
@@ -54,5 +53,4 @@ func AddRepoFlags(command *cobra.Command, opts *RepoOptions) {
command.Flags().BoolVar(&opts.ForceHttpBasicAuth, "force-http-basic-auth", false, "whether to force use of basic auth when connecting repository via HTTP")
command.Flags().BoolVar(&opts.UseAzureWorkloadIdentity, "use-azure-workload-identity", false, "whether to use azure workload identity for authentication")
command.Flags().BoolVar(&opts.InsecureOCIForceHTTP, "insecure-oci-force-http", false, "Use http when accessing an OCI repository")
command.Flags().Int64Var(&opts.Depth, "depth", 0, "Specify a custom depth for git clone operations. Unless specified, a full clone is performed (depth 0)")
}

View File

@@ -1,7 +1,6 @@
package cmpserver
import (
"context"
"fmt"
"net"
"os"
@@ -86,8 +85,7 @@ func (a *ArgoCDCMPServer) Run() {
// Listen on the socket address
_ = os.Remove(config.Address())
lc := &net.ListenConfig{}
listener, err := lc.Listen(context.Background(), "unix", config.Address())
listener, err := net.Listen("unix", config.Address())
errors.CheckError(err)
log.Infof("argocd-cmp-server %s serving on %s", common.GetVersion(), listener.Addr())

View File

@@ -229,7 +229,7 @@ func (s *Service) initGitClient(logCtx *log.Entry, r *apiclient.CommitHydratedMa
}
logCtx.Debugf("Fetching repo %s", r.Repo.Repo)
err = gitClient.Fetch("", 0)
err = gitClient.Fetch("")
if err != nil {
cleanupOrLog()
return nil, "", nil, fmt.Errorf("failed to clone repo: %w", err)

View File

@@ -82,7 +82,7 @@ func Test_CommitHydratedManifests(t *testing.T) {
t.Parallel()
service, mockRepoClientFactory := newServiceWithMocks(t)
mockRepoClientFactory.EXPECT().NewClient(mock.Anything, mock.Anything).Return(nil, assert.AnError).Once()
mockRepoClientFactory.On("NewClient", mock.Anything, mock.Anything).Return(nil, assert.AnError).Once()
_, err := service.CommitHydratedManifests(t.Context(), validRequest)
require.Error(t, err)
@@ -94,14 +94,14 @@ func Test_CommitHydratedManifests(t *testing.T) {
service, mockRepoClientFactory := newServiceWithMocks(t)
mockGitClient := gitmocks.NewClient(t)
mockGitClient.EXPECT().Init().Return(nil).Once()
mockGitClient.EXPECT().Fetch(mock.Anything, mock.Anything).Return(nil).Once()
mockGitClient.EXPECT().SetAuthor("Argo CD", "argo-cd@example.com").Return("", nil).Once()
mockGitClient.EXPECT().CheckoutOrOrphan("env/test", false).Return("", nil).Once()
mockGitClient.EXPECT().CheckoutOrNew("main", "env/test", false).Return("", nil).Once()
mockGitClient.EXPECT().CommitAndPush("main", "test commit message").Return("", nil).Once()
mockGitClient.EXPECT().CommitSHA().Return("it-worked!", nil).Once()
mockRepoClientFactory.EXPECT().NewClient(mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
mockGitClient.On("Init").Return(nil).Once()
mockGitClient.On("Fetch", mock.Anything).Return(nil).Once()
mockGitClient.On("SetAuthor", "Argo CD", "argo-cd@example.com").Return("", nil).Once()
mockGitClient.On("CheckoutOrOrphan", "env/test", false).Return("", nil).Once()
mockGitClient.On("CheckoutOrNew", "main", "env/test", false).Return("", nil).Once()
mockGitClient.On("CommitAndPush", "main", "test commit message").Return("", nil).Once()
mockGitClient.On("CommitSHA").Return("it-worked!", nil).Once()
mockRepoClientFactory.On("NewClient", mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
resp, err := service.CommitHydratedManifests(t.Context(), validRequest)
require.NoError(t, err)
@@ -114,14 +114,14 @@ func Test_CommitHydratedManifests(t *testing.T) {
service, mockRepoClientFactory := newServiceWithMocks(t)
mockGitClient := gitmocks.NewClient(t)
mockGitClient.EXPECT().Init().Return(nil).Once()
mockGitClient.EXPECT().Fetch(mock.Anything, mock.Anything).Return(nil).Once()
mockGitClient.EXPECT().SetAuthor("Argo CD", "argo-cd@example.com").Return("", nil).Once()
mockGitClient.EXPECT().CheckoutOrOrphan("env/test", false).Return("", nil).Once()
mockGitClient.EXPECT().CheckoutOrNew("main", "env/test", false).Return("", nil).Once()
mockGitClient.EXPECT().CommitAndPush("main", "test commit message").Return("", nil).Once()
mockGitClient.EXPECT().CommitSHA().Return("root-and-blank-sha", nil).Once()
mockRepoClientFactory.EXPECT().NewClient(mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
mockGitClient.On("Init").Return(nil).Once()
mockGitClient.On("Fetch", mock.Anything).Return(nil).Once()
mockGitClient.On("SetAuthor", "Argo CD", "argo-cd@example.com").Return("", nil).Once()
mockGitClient.On("CheckoutOrOrphan", "env/test", false).Return("", nil).Once()
mockGitClient.On("CheckoutOrNew", "main", "env/test", false).Return("", nil).Once()
mockGitClient.On("CommitAndPush", "main", "test commit message").Return("", nil).Once()
mockGitClient.On("CommitSHA").Return("root-and-blank-sha", nil).Once()
mockRepoClientFactory.On("NewClient", mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
requestWithRootAndBlank := &apiclient.CommitHydratedManifestsRequest{
Repo: &v1alpha1.Repository{
@@ -161,15 +161,15 @@ func Test_CommitHydratedManifests(t *testing.T) {
service, mockRepoClientFactory := newServiceWithMocks(t)
mockGitClient := gitmocks.NewClient(t)
mockGitClient.EXPECT().Init().Return(nil).Once()
mockGitClient.EXPECT().Fetch(mock.Anything, mock.Anything).Return(nil).Once()
mockGitClient.EXPECT().SetAuthor("Argo CD", "argo-cd@example.com").Return("", nil).Once()
mockGitClient.EXPECT().CheckoutOrOrphan("env/test", false).Return("", nil).Once()
mockGitClient.EXPECT().CheckoutOrNew("main", "env/test", false).Return("", nil).Once()
mockGitClient.EXPECT().RemoveContents([]string{"apps/staging"}).Return("", nil).Once()
mockGitClient.EXPECT().CommitAndPush("main", "test commit message").Return("", nil).Once()
mockGitClient.EXPECT().CommitSHA().Return("subdir-path-sha", nil).Once()
mockRepoClientFactory.EXPECT().NewClient(mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
mockGitClient.On("Init").Return(nil).Once()
mockGitClient.On("Fetch", mock.Anything).Return(nil).Once()
mockGitClient.On("SetAuthor", "Argo CD", "argo-cd@example.com").Return("", nil).Once()
mockGitClient.On("CheckoutOrOrphan", "env/test", false).Return("", nil).Once()
mockGitClient.On("CheckoutOrNew", "main", "env/test", false).Return("", nil).Once()
mockGitClient.On("RemoveContents", []string{"apps/staging"}).Return("", nil).Once()
mockGitClient.On("CommitAndPush", "main", "test commit message").Return("", nil).Once()
mockGitClient.On("CommitSHA").Return("subdir-path-sha", nil).Once()
mockRepoClientFactory.On("NewClient", mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
requestWithSubdirPath := &apiclient.CommitHydratedManifestsRequest{
Repo: &v1alpha1.Repository{
@@ -201,15 +201,15 @@ func Test_CommitHydratedManifests(t *testing.T) {
service, mockRepoClientFactory := newServiceWithMocks(t)
mockGitClient := gitmocks.NewClient(t)
mockGitClient.EXPECT().Init().Return(nil).Once()
mockGitClient.EXPECT().Fetch(mock.Anything, mock.Anything).Return(nil).Once()
mockGitClient.EXPECT().SetAuthor("Argo CD", "argo-cd@example.com").Return("", nil).Once()
mockGitClient.EXPECT().CheckoutOrOrphan("env/test", false).Return("", nil).Once()
mockGitClient.EXPECT().CheckoutOrNew("main", "env/test", false).Return("", nil).Once()
mockGitClient.EXPECT().RemoveContents([]string{"apps/production", "apps/staging"}).Return("", nil).Once()
mockGitClient.EXPECT().CommitAndPush("main", "test commit message").Return("", nil).Once()
mockGitClient.EXPECT().CommitSHA().Return("mixed-paths-sha", nil).Once()
mockRepoClientFactory.EXPECT().NewClient(mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
mockGitClient.On("Init").Return(nil).Once()
mockGitClient.On("Fetch", mock.Anything).Return(nil).Once()
mockGitClient.On("SetAuthor", "Argo CD", "argo-cd@example.com").Return("", nil).Once()
mockGitClient.On("CheckoutOrOrphan", "env/test", false).Return("", nil).Once()
mockGitClient.On("CheckoutOrNew", "main", "env/test", false).Return("", nil).Once()
mockGitClient.On("RemoveContents", []string{"apps/production", "apps/staging"}).Return("", nil).Once()
mockGitClient.On("CommitAndPush", "main", "test commit message").Return("", nil).Once()
mockGitClient.On("CommitSHA").Return("mixed-paths-sha", nil).Once()
mockRepoClientFactory.On("NewClient", mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
requestWithMixedPaths := &apiclient.CommitHydratedManifestsRequest{
Repo: &v1alpha1.Repository{
@@ -257,14 +257,14 @@ func Test_CommitHydratedManifests(t *testing.T) {
service, mockRepoClientFactory := newServiceWithMocks(t)
mockGitClient := gitmocks.NewClient(t)
mockGitClient.EXPECT().Init().Return(nil).Once()
mockGitClient.EXPECT().Fetch(mock.Anything, mock.Anything).Return(nil).Once()
mockGitClient.EXPECT().SetAuthor("Argo CD", "argo-cd@example.com").Return("", nil).Once()
mockGitClient.EXPECT().CheckoutOrOrphan("env/test", false).Return("", nil).Once()
mockGitClient.EXPECT().CheckoutOrNew("main", "env/test", false).Return("", nil).Once()
mockGitClient.EXPECT().CommitAndPush("main", "test commit message").Return("", nil).Once()
mockGitClient.EXPECT().CommitSHA().Return("it-worked!", nil).Once()
mockRepoClientFactory.EXPECT().NewClient(mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
mockGitClient.On("Init").Return(nil).Once()
mockGitClient.On("Fetch", mock.Anything).Return(nil).Once()
mockGitClient.On("SetAuthor", "Argo CD", "argo-cd@example.com").Return("", nil).Once()
mockGitClient.On("CheckoutOrOrphan", "env/test", false).Return("", nil).Once()
mockGitClient.On("CheckoutOrNew", "main", "env/test", false).Return("", nil).Once()
mockGitClient.On("CommitAndPush", "main", "test commit message").Return("", nil).Once()
mockGitClient.On("CommitSHA").Return("it-worked!", nil).Once()
mockRepoClientFactory.On("NewClient", mock.Anything, mock.Anything).Return(mockGitClient, nil).Once()
requestWithEmptyPaths := &apiclient.CommitHydratedManifestsRequest{
Repo: &v1alpha1.Repository{

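The paired hunks above swap between mockery's two expectation styles. They are equivalent at runtime: the generated .EXPECT() helpers are compile-time-checked wrappers over testify's reflection-based .On() machinery. A self-contained sketch of the underlying API (Greeter and GreeterMock are illustrations, not types from this repo):

package main

import (
	"fmt"

	"github.com/stretchr/testify/mock"
)

// GreeterMock is a hypothetical hand-written testify mock; mockery generates
// the same plumbing plus a typed EXPECT() facade on top of it.
type GreeterMock struct{ mock.Mock }

func (m *GreeterMock) Greet(name string) (string, error) {
	args := m.Called(name)
	return args.String(0), args.Error(1)
}

func main() {
	m := &GreeterMock{}
	// String-based expectation, validated only when the call happens:
	m.On("Greet", "world").Return("hello", nil).Once()
	out, err := m.Greet("world")
	fmt.Println(out, err) // hello <nil>
}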
View File

@@ -21,8 +21,8 @@ import (
var sprigFuncMap = sprig.GenericFuncMap() // a singleton for better performance
const gitAttributesContents = `*/README.md linguist-generated=true
*/hydrator.metadata linguist-generated=true`
const gitAttributesContents = `**/README.md linguist-generated=true
**/hydrator.metadata linguist-generated=true`
func init() {
// Avoid allowing the user to learn things about the environment.

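This one-character pattern change is what the new test below exercises: gitattributes globs follow gitignore rules, where "*/x" matches x exactly one directory deep while "**/x" matches at any depth, including the repository root. A throwaway sketch of that behavior (assumes git is on PATH; the helper name is made up):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// checkAttr overwrites .gitattributes with a single pattern, then asks git
// how it classifies the given path.
func checkAttr(dir, pattern, file string) string {
	_ = os.WriteFile(filepath.Join(dir, ".gitattributes"), []byte(pattern+" linguist-generated=true\n"), 0o644)
	out, _ := exec.Command("git", "-C", dir, "check-attr", "linguist-generated", file).Output()
	return string(out)
}

func main() {
	dir, _ := os.MkdirTemp("", "attr-demo")
	defer os.RemoveAll(dir)
	_ = exec.Command("git", "-C", dir, "init", "-q").Run()
	fmt.Print(checkAttr(dir, "*/hydrator.metadata", "hydrator.metadata"))  // unspecified: "*/" needs one directory level
	fmt.Print(checkAttr(dir, "**/hydrator.metadata", "hydrator.metadata")) // true: "**/" also matches at the root
}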
View File

@@ -7,8 +7,10 @@ import (
"encoding/json"
"fmt"
"os"
"os/exec"
"path"
"path/filepath"
"strings"
"testing"
"time"
@@ -234,6 +236,74 @@ func TestWriteGitAttributes(t *testing.T) {
gitAttributesPath := filepath.Join(root.Name(), ".gitattributes")
gitAttributesBytes, err := os.ReadFile(gitAttributesPath)
require.NoError(t, err)
assert.Contains(t, string(gitAttributesBytes), "*/README.md linguist-generated=true")
assert.Contains(t, string(gitAttributesBytes), "*/hydrator.metadata linguist-generated=true")
assert.Contains(t, string(gitAttributesBytes), "README.md linguist-generated=true")
assert.Contains(t, string(gitAttributesBytes), "hydrator.metadata linguist-generated=true")
}
func TestWriteGitAttributes_MatchesAllDepths(t *testing.T) {
root := tempRoot(t)
err := writeGitAttributes(root)
require.NoError(t, err)
// The gitattributes pattern needs to match files at all depths:
// - hydrator.metadata (root level)
// - path1/hydrator.metadata (one level deep)
// - path1/nested/deep/hydrator.metadata (multiple levels deep)
// Same for README.md files
//
// The pattern "**/hydrator.metadata" matches at any depth including root
// The pattern "*/hydrator.metadata" only matches exactly one directory level deep
// Test actual Git behavior using git check-attr
// Initialize a git repo
ctx := t.Context()
repoPath := root.Name()
cmd := exec.CommandContext(ctx, "git", "init")
cmd.Dir = repoPath
output, err := cmd.CombinedOutput()
require.NoError(t, err, "Failed to init git repo: %s", string(output))
// Test files at different depths
testCases := []struct {
path string
shouldMatch bool
description string
}{
{"hydrator.metadata", true, "root level hydrator.metadata"},
{"README.md", true, "root level README.md"},
{"path1/hydrator.metadata", true, "one level deep hydrator.metadata"},
{"path1/README.md", true, "one level deep README.md"},
{"path1/nested/hydrator.metadata", true, "two levels deep hydrator.metadata"},
{"path1/nested/README.md", true, "two levels deep README.md"},
{"path1/nested/deep/hydrator.metadata", true, "three levels deep hydrator.metadata"},
{"path1/nested/deep/README.md", true, "three levels deep README.md"},
{"manifest.yaml", false, "manifest.yaml should not match"},
{"path1/manifest.yaml", false, "nested manifest.yaml should not match"},
}
for _, tc := range testCases {
t.Run(tc.description, func(t *testing.T) {
// Use git check-attr to verify if linguist-generated attribute is set
cmd := exec.CommandContext(ctx, "git", "check-attr", "linguist-generated", tc.path)
cmd.Dir = repoPath
output, err := cmd.CombinedOutput()
require.NoError(t, err, "Failed to run git check-attr: %s", string(output))
// Output format: <path>: <attribute>: <value>
// Example: "hydrator.metadata: linguist-generated: true"
outputStr := strings.TrimSpace(string(output))
if tc.shouldMatch {
expectedOutput := tc.path + ": linguist-generated: true"
assert.Equal(t, expectedOutput, outputStr,
"File %s should have linguist-generated=true attribute", tc.path)
} else {
// Attribute should be unspecified
expectedOutput := tc.path + ": linguist-generated: unspecified"
assert.Equal(t, expectedOutput, outputStr,
"File %s should not have linguist-generated=true attribute", tc.path)
}
})
}
}

View File

@@ -229,7 +229,6 @@ const (
// AnnotationKeyAppSkipReconcile tells the Application to skip the Application controller reconcile.
// Skip reconcile when the value is "true" or any other string values that can be strconv.ParseBool() to be true.
AnnotationKeyAppSkipReconcile = "argocd.argoproj.io/skip-reconcile"
// LabelKeyComponentRepoServer is the label key to identify the component as repo-server
LabelKeyComponentRepoServer = "app.kubernetes.io/component"
// LabelValueComponentRepoServer is the label value for the repo-server component

View File

@@ -39,7 +39,6 @@ import (
"k8s.io/client-go/informers"
informerv1 "k8s.io/client-go/informers/apps/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/util/workqueue"
"k8s.io/utils/ptr"
@@ -467,7 +466,7 @@ func (ctrl *ApplicationController) handleObjectUpdated(managedByApp map[string]b
// Enforce application's permission for the source namespace
_, err = ctrl.getAppProj(app)
if err != nil {
logCtx.WithError(err).Errorf("Unable to determine project for app")
logCtx.Errorf("Unable to determine project for app '%s': %v", app.QualifiedName(), err)
continue
}
@@ -903,12 +902,12 @@ func (ctrl *ApplicationController) Run(ctx context.Context, statusProcessors int
clusters, err := ctrl.db.ListClusters(ctx)
if err != nil {
log.WithError(err).Warn("Cannot init sharding. Error while querying clusters list from database")
log.Warnf("Cannot init sharding. Error while querying clusters list from database: %v", err)
} else {
appItems, err := ctrl.getAppList(metav1.ListOptions{})
if err != nil {
log.WithError(err).Warn("Cannot init sharding. Error while querying application list from database")
log.Warnf("Cannot init sharding. Error while querying application list from database: %v", err)
} else {
ctrl.clusterSharding.Init(clusters, appItems)
}
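The two log lines above set the pattern for most hunks in this file: the diff swaps between structured logrus calls (WithField/WithError) and Printf-style messages that fold everything into the string. A side-by-side sketch with placeholder values:

package main

import (
	"errors"

	log "github.com/sirupsen/logrus"
)

func main() {
	err := errors.New("boom")
	appKey := "argocd/my-app" // placeholder
	// Structured style: appkey and the error travel as separate fields
	// that log pipelines can filter on.
	log.WithField("appkey", appKey).WithError(err).Error("Failed to get application from informer index")
	// Printf style: the same information folded into the message string.
	log.Errorf("Failed to get application '%s' from informer index: %+v", appKey, err)
}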
@@ -1001,29 +1000,29 @@ func (ctrl *ApplicationController) processAppOperationQueueItem() (processNext b
appKey, shutdown := ctrl.appOperationQueue.Get()
if shutdown {
processNext = false
return processNext
return
}
processNext = true
defer func() {
if r := recover(); r != nil {
log.WithField("appkey", appKey).Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
}
ctrl.appOperationQueue.Done(appKey)
}()
obj, exists, err := ctrl.appInformer.GetIndexer().GetByKey(appKey)
if err != nil {
log.WithField("appkey", appKey).WithError(err).Error("Failed to get application from informer index")
return processNext
log.Errorf("Failed to get application '%s' from informer index: %+v", appKey, err)
return
}
if !exists {
// This happens after app was deleted, but the work queue still had an entry for it.
return processNext
return
}
origApp, ok := obj.(*appv1.Application)
if !ok {
log.WithField("appkey", appKey).Warn("Key in index is not an application")
return processNext
log.Warnf("Key '%s' in index is not an application", appKey)
return
}
app := origApp.DeepCopy()
logCtx := log.WithFields(applog.GetAppLogFields(app))
@@ -1042,8 +1041,8 @@ func (ctrl *ApplicationController) processAppOperationQueueItem() (processNext b
// We cannot rely on informer since applications might be updated by both application controller and api server.
freshApp, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.ObjectMeta.Namespace).Get(context.Background(), app.Name, metav1.GetOptions{})
if err != nil {
logCtx.WithError(err).Error("Failed to retrieve latest application state")
return processNext
logCtx.Errorf("Failed to retrieve latest application state: %v", err)
return
}
app = freshApp
}
@@ -1065,7 +1064,7 @@ func (ctrl *ApplicationController) processAppOperationQueueItem() (processNext b
}
ts.AddCheckpoint("finalize_application_deletion_ms")
}
return processNext
return
}
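The return processNext → bare return rewrites in this function (and the queue handlers below) rely on Go's named results: processNext is declared in the signature, so once it is assigned, a bare return yields its current value and the two forms behave identically. A minimal sketch:

package main

import "fmt"

// process mirrors the queue-item handlers: processNext is a named result,
// so every bare return still hands back its latest assigned value.
func process(shutdown bool) (processNext bool) {
	if shutdown {
		processNext = false
		return // returns false
	}
	processNext = true
	return // returns true
}

func main() {
	fmt.Println(process(false), process(true)) // true false
}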
func (ctrl *ApplicationController) processAppComparisonTypeQueueItem() (processNext bool) {
@@ -1074,26 +1073,26 @@ func (ctrl *ApplicationController) processAppComparisonTypeQueueItem() (processN
defer func() {
if r := recover(); r != nil {
log.WithField("appkey", key).Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
}
ctrl.appComparisonTypeRefreshQueue.Done(key)
}()
if shutdown {
processNext = false
return processNext
return
}
if parts := strings.Split(key, "/"); len(parts) != 3 {
log.WithField("appkey", key).Warn("Unexpected key format in appComparisonTypeRefreshTypeQueue. Key should consist of namespace/name/comparisonType")
log.Warnf("Unexpected key format in appComparisonTypeRefreshTypeQueue. Key should consists of namespace/name/comparisonType but got: %s", key)
} else {
compareWith, err := strconv.Atoi(parts[2])
if err != nil {
log.WithField("appkey", key).WithError(err).Warn("Unable to parse comparison type")
return processNext
log.Warnf("Unable to parse comparison type: %v", err)
return
}
ctrl.requestAppRefresh(ctrl.toAppQualifiedName(parts[1], parts[0]), CompareWith(compareWith).Pointer(), nil)
}
return processNext
return
}
func (ctrl *ApplicationController) processProjectQueueItem() (processNext bool) {
@@ -1102,35 +1101,35 @@ func (ctrl *ApplicationController) processProjectQueueItem() (processNext bool)
defer func() {
if r := recover(); r != nil {
log.WithField("key", key).Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
}
ctrl.projectRefreshQueue.Done(key)
}()
if shutdown {
processNext = false
return processNext
return
}
obj, exists, err := ctrl.projInformer.GetIndexer().GetByKey(key)
if err != nil {
log.WithField("key", key).WithError(err).Error("Failed to get project from informer index")
return processNext
log.Errorf("Failed to get project '%s' from informer index: %+v", key, err)
return
}
if !exists {
// This happens after appproj was deleted, but the work queue still had an entry for it.
return processNext
return
}
origProj, ok := obj.(*appv1.AppProject)
if !ok {
log.WithField("key", key).Warnf("Key in index is not an appproject")
return processNext
log.Warnf("Key '%s' in index is not an appproject", key)
return
}
if origProj.DeletionTimestamp != nil && origProj.HasFinalizer() {
if err := ctrl.finalizeProjectDeletion(origProj.DeepCopy()); err != nil {
log.WithError(err).Warn("Failed to finalize project deletion")
log.Warnf("Failed to finalize project deletion: %v", err)
}
}
return processNext
return
}
func (ctrl *ApplicationController) finalizeProjectDeletion(proj *appv1.AppProject) error {
@@ -1195,7 +1194,7 @@ func (ctrl *ApplicationController) finalizeApplicationDeletion(app *appv1.Applic
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.Namespace).Get(context.Background(), app.Name, metav1.GetOptions{})
if err != nil {
if !apierrors.IsNotFound(err) {
logCtx.WithError(err).Error("Unable to get refreshed application info prior deleting resources")
logCtx.Errorf("Unable to get refreshed application info prior deleting resources: %v", err)
}
return nil
}
@@ -1205,7 +1204,7 @@ func (ctrl *ApplicationController) finalizeApplicationDeletion(app *appv1.Applic
}
destCluster, err := argo.GetDestinationCluster(context.Background(), app.Spec.Destination, ctrl.db)
if err != nil {
logCtx.WithError(err).Warn("Unable to get destination cluster")
logCtx.Warnf("Unable to get destination cluster: %v", err)
app.UnSetCascadedDeletion()
app.UnSetPostDeleteFinalizerAll()
if err := ctrl.updateFinalizers(app); err != nil {
@@ -1220,11 +1219,6 @@ func (ctrl *ApplicationController) finalizeApplicationDeletion(app *appv1.Applic
}
config := metrics.AddMetricsTransportWrapper(ctrl.metricsServer, app, clusterRESTConfig)
// Apply impersonation config if necessary
if err := ctrl.applyImpersonationConfig(config, proj, app, destCluster); err != nil {
return fmt.Errorf("cannot apply impersonation: %w", err)
}
if app.CascadedDeletion() {
deletionApproved := app.IsDeletionConfirmed(app.DeletionTimestamp.Time)
@@ -1373,7 +1367,7 @@ func (ctrl *ApplicationController) setAppCondition(app *appv1.Application, condi
_, err = ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.Namespace).Patch(context.Background(), app.Name, types.MergePatchType, patch, metav1.PatchOptions{})
}
if err != nil {
logCtx.WithError(err).Error("Unable to set application condition")
logCtx.Errorf("Unable to set application condition: %v", err)
}
}
@@ -1514,10 +1508,20 @@ func (ctrl *ApplicationController) processRequestedAppOperation(app *appv1.Appli
// if we just completed an operation, force a refresh so that the UI will report up-to-date
// sync/health information
if _, err := cache.MetaNamespaceKeyFunc(app); err == nil {
// force app refresh using the CompareWithLatest comparison type and trigger the app reconciliation loop
ctrl.requestAppRefresh(app.QualifiedName(), CompareWithLatestForceResolve.Pointer(), nil)
var compareWith CompareWith
if state.Operation.InitiatedBy.Automated {
// Do not force revision resolution on automated operations, because
// doing so would cause excessive ls-remote requests on monorepo commits
compareWith = CompareWithLatest
} else {
// Force an app refresh using the most recently resolved revision after the sync,
// so the UI won't show a just-synced application as out of sync if it was
// synced after the commit but before the app refresh (see #18153)
compareWith = CompareWithLatestForceResolve
}
ctrl.requestAppRefresh(app.QualifiedName(), compareWith.Pointer(), nil)
} else {
logCtx.WithError(err).Warn("Fails to requeue application")
logCtx.Warnf("Fails to requeue application: %v", err)
}
}
ts.AddCheckpoint("request_app_refresh_ms")
@@ -1549,13 +1553,13 @@ func (ctrl *ApplicationController) setOperationState(app *appv1.Application, sta
}
patchJSON, err := json.Marshal(patch)
if err != nil {
logCtx.WithError(err).Error("error marshaling json")
logCtx.Errorf("error marshaling json: %v", err)
return
}
if app.Status.OperationState != nil && app.Status.OperationState.FinishedAt != nil && state.FinishedAt == nil {
patchJSON, err = jsonpatch.MergeMergePatches(patchJSON, []byte(`{"status": {"operationState": {"finishedAt": null}}}`))
if err != nil {
logCtx.WithError(err).Error("error merging operation state patch")
logCtx.Errorf("error merging operation state patch: %v", err)
return
}
}
@@ -1569,7 +1573,7 @@ func (ctrl *ApplicationController) setOperationState(app *appv1.Application, sta
}
// kube.RetryUntilSucceed logs failed attempts at "debug" level, but we want to know if this fails. Log a
// warning.
logCtx.WithError(err).Warn("error patching application with operation state")
logCtx.Warnf("error patching application with operation state: %v", err)
return fmt.Errorf("error patching application with operation state: %w", err)
}
return nil
@@ -1598,7 +1602,7 @@ func (ctrl *ApplicationController) setOperationState(app *appv1.Application, sta
destCluster, err := argo.GetDestinationCluster(context.Background(), app.Spec.Destination, ctrl.db)
if err != nil {
logCtx.WithError(err).Warn("Unable to get destination cluster, setting dest_server label to empty string in sync metric")
logCtx.Warnf("Unable to get destination cluster, setting dest_server label to empty string in sync metric: %v", err)
}
destServer := ""
if destCluster != nil {
@@ -1615,7 +1619,7 @@ func (ctrl *ApplicationController) writeBackToInformer(app *appv1.Application) {
logCtx := log.WithFields(applog.GetAppLogFields(app)).WithField("informer-writeBack", true)
err := ctrl.appInformer.GetStore().Update(app)
if err != nil {
logCtx.WithError(err).Error("failed to update informer store")
logCtx.Errorf("failed to update informer store: %v", err)
return
}
}
@@ -1636,12 +1640,12 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
appKey, shutdown := ctrl.appRefreshQueue.Get()
if shutdown {
processNext = false
return processNext
return
}
processNext = true
defer func() {
if r := recover(); r != nil {
log.WithField("appkey", appKey).Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
}
// We want the app operation update to happen after the sync, so there's no race condition
// that leaves app updates not proceeding. See https://github.com/argoproj/argo-cd/issues/18500.
@@ -1650,23 +1654,23 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
}()
obj, exists, err := ctrl.appInformer.GetIndexer().GetByKey(appKey)
if err != nil {
log.WithField("appkey", appKey).WithError(err).Error("Failed to get application from informer index")
return processNext
log.Errorf("Failed to get application '%s' from informer index: %+v", appKey, err)
return
}
if !exists {
// This happens after app was deleted, but the work queue still had an entry for it.
return processNext
return
}
origApp, ok := obj.(*appv1.Application)
if !ok {
log.WithField("appkey", appKey).Warn("Key in index is not an application")
return processNext
log.Warnf("Key '%s' in index is not an application", appKey)
return
}
origApp = origApp.DeepCopy()
needRefresh, refreshType, comparisonLevel := ctrl.needRefreshAppStatus(origApp, ctrl.statusRefreshTimeout, ctrl.statusHardRefreshTimeout)
if !needRefresh {
return processNext
return
}
app := origApp.DeepCopy()
logCtx := log.WithFields(applog.GetAppLogFields(app)).WithFields(log.Fields{
@@ -1708,13 +1712,13 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
if tree, err = ctrl.getResourceTree(destCluster, app, managedResources); err == nil {
app.Status.Summary = tree.GetSummary(app)
if err := ctrl.cache.SetAppResourcesTree(app.InstanceName(ctrl.namespace), tree); err != nil {
logCtx.WithError(err).Error("Failed to cache resources tree")
return processNext
logCtx.Errorf("Failed to cache resources tree: %v", err)
return
}
}
patchDuration = ctrl.persistAppStatus(origApp, &app.Status)
return processNext
return
}
logCtx.Warnf("Failed to get cached managed resources for tree reconciliation, fall back to full reconciliation")
}
@@ -1730,20 +1734,20 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
patchDuration = ctrl.persistAppStatus(origApp, &app.Status)
if err := ctrl.cache.SetAppResourcesTree(app.InstanceName(ctrl.namespace), &appv1.ApplicationTree{}); err != nil {
logCtx.WithError(err).Warn("failed to set app resource tree")
logCtx.Warnf("failed to set app resource tree: %v", err)
}
if err := ctrl.cache.SetAppManagedResources(app.InstanceName(ctrl.namespace), nil); err != nil {
logCtx.WithError(err).Warn("failed to set app managed resources tree")
logCtx.Warnf("failed to set app managed resources tree: %v", err)
}
ts.AddCheckpoint("process_refresh_app_conditions_errors_ms")
return processNext
return
}
destCluster, err = argo.GetDestinationCluster(context.Background(), app.Spec.Destination, ctrl.db)
if err != nil {
logCtx.WithError(err).Error("Failed to get destination cluster")
logCtx.Errorf("Failed to get destination cluster: %v", err)
// exit the reconciliation. ctrl.refreshAppConditions should have caught the error
return processNext
return
}
var localManifests []string
@@ -1783,8 +1787,8 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
ts.AddCheckpoint("compare_app_state_ms")
if stderrors.Is(err, ErrCompareStateRepo) {
logCtx.WithError(err).Warn("Ignoring temporary failed attempt to compare app state against repo")
return processNext // short circuit if git error is encountered
logCtx.Warnf("Ignoring temporary failed attempt to compare app state against repo: %v", err)
return // short circuit if git error is encountered
}
for k, v := range compareResult.timings {
@@ -1797,7 +1801,7 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
tree, err := ctrl.setAppManagedResources(destCluster, app, compareResult)
ts.AddCheckpoint("set_app_managed_resources_ms")
if err != nil {
logCtx.WithError(err).Error("Failed to cache app resources")
logCtx.Errorf("Failed to cache app resources: %v", err)
} else {
app.Status.Summary = tree.GetSummary(app)
}
@@ -1849,54 +1853,60 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
}
if err := ctrl.updateFinalizers(app); err != nil {
logCtx.WithError(err).Error("Failed to update finalizers")
logCtx.Errorf("Failed to update finalizers: %v", err)
}
}
ts.AddCheckpoint("process_finalizers_ms")
return processNext
return
}
func (ctrl *ApplicationController) processAppHydrateQueueItem() (processNext bool) {
appKey, shutdown := ctrl.appHydrateQueue.Get()
if shutdown {
processNext = false
return processNext
return
}
processNext = true
defer func() {
if r := recover(); r != nil {
log.WithField("appkey", appKey).Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
}
ctrl.appHydrateQueue.Done(appKey)
}()
obj, exists, err := ctrl.appInformer.GetIndexer().GetByKey(appKey)
if err != nil {
log.WithField("appkey", appKey).WithError(err).Error("Failed to get application from informer index")
return processNext
log.Errorf("Failed to get application '%s' from informer index: %+v", appKey, err)
return
}
if !exists {
// This happens after app was deleted, but the work queue still had an entry for it.
return processNext
return
}
origApp, ok := obj.(*appv1.Application)
if !ok {
log.WithField("appkey", appKey).Warn("Key in index is not an application")
return processNext
log.Warnf("Key '%s' in index is not an application", appKey)
return
}
ctrl.hydrator.ProcessAppHydrateQueueItem(origApp.DeepCopy())
log.WithFields(applog.GetAppLogFields(origApp)).Debug("Successfully processed app hydrate queue item")
return processNext
return
}
func (ctrl *ApplicationController) processHydrationQueueItem() (processNext bool) {
hydrationKey, shutdown := ctrl.hydrationQueue.Get()
if shutdown {
processNext = false
return processNext
return
}
processNext = true
defer func() {
if r := recover(); r != nil {
log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
}
ctrl.hydrationQueue.Done(hydrationKey)
}()
logCtx := log.WithFields(log.Fields{
"sourceRepoURL": hydrationKey.SourceRepoURL,
@@ -1904,19 +1914,12 @@ func (ctrl *ApplicationController) processHydrationQueueItem() (processNext bool
"destinationBranch": hydrationKey.DestinationBranch,
})
defer func() {
if r := recover(); r != nil {
logCtx.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
}
ctrl.hydrationQueue.Done(hydrationKey)
}()
logCtx.Debug("Processing hydration queue item")
ctrl.hydrator.ProcessHydrationQueueItem(hydrationKey)
logCtx.Debug("Successfully processed hydration queue item")
return processNext
return
}
func resourceStatusKey(res appv1.ResourceStatus) string {
@@ -2019,11 +2022,11 @@ func (ctrl *ApplicationController) normalizeApplication(orig, app *appv1.Applica
patch, modified, err := diff.CreateTwoWayMergePatch(orig, app, appv1.Application{})
if err != nil {
logCtx.WithError(err).Error("error constructing app spec patch")
logCtx.Errorf("error constructing app spec patch: %v", err)
} else if modified {
_, err := ctrl.PatchAppWithWriteBack(context.Background(), app.Name, app.Namespace, types.MergePatchType, patch, metav1.PatchOptions{})
if err != nil {
logCtx.WithError(err).Error("Error persisting normalized application spec")
logCtx.Errorf("Error persisting normalized application spec: %v", err)
} else {
logCtx.Infof("Normalized app spec: %s", string(patch))
}
@@ -2078,12 +2081,12 @@ func (ctrl *ApplicationController) persistAppStatus(orig *appv1.Application, new
&appv1.Application{ObjectMeta: metav1.ObjectMeta{Annotations: orig.GetAnnotations()}, Status: orig.Status},
&appv1.Application{ObjectMeta: metav1.ObjectMeta{Annotations: newAnnotations}, Status: *newStatus})
if err != nil {
logCtx.WithError(err).Error("Error constructing app status patch")
return patchDuration
logCtx.Errorf("Error constructing app status patch: %v", err)
return
}
if !modified {
logCtx.Infof("No status changes. Skipping patch")
return patchDuration
return
}
// calculate time for patch call
start := time.Now()
@@ -2092,7 +2095,7 @@ func (ctrl *ApplicationController) persistAppStatus(orig *appv1.Application, new
}()
_, err = ctrl.PatchAppWithWriteBack(context.Background(), orig.Name, orig.Namespace, types.MergePatchType, patch, metav1.PatchOptions{})
if err != nil {
logCtx.WithError(err).Warn("Error updating application")
logCtx.Warnf("Error updating application: %v", err)
} else {
logCtx.Infof("Update successful")
}
@@ -2237,11 +2240,11 @@ func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *
if stderrors.Is(err, argo.ErrAnotherOperationInProgress) {
// skipping auto-sync because another operation is in progress and was not noticed due to stale data in the informer;
// it is safe to skip auto-sync because it is already running
logCtx.WithError(err).Warnf("Failed to initiate auto-sync to %s", desiredRevisions)
logCtx.Warnf("Failed to initiate auto-sync to %s: %v", desiredRevisions, err)
return nil, 0
}
logCtx.WithError(err).Errorf("Failed to initiate auto-sync to %s", desiredRevisions)
logCtx.Errorf("Failed to initiate auto-sync to %s: %v", desiredRevisions, err)
return &appv1.ApplicationCondition{Type: appv1.ApplicationConditionSyncError, Message: err.Error()}, setOpTime
}
ctrl.writeBackToInformer(updatedApp)
@@ -2367,7 +2370,7 @@ func (ctrl *ApplicationController) canProcessApp(obj any) bool {
return false
}
} else {
logCtx.WithError(err).Debugf("Unable to determine if Application should skip reconcile based on annotation %s", common.AnnotationKeyAppSkipReconcile)
logCtx.Debugf("Unable to determine if Application should skip reconcile based on annotation %s: %v", common.AnnotationKeyAppSkipReconcile, err)
}
}
}
@@ -2622,22 +2625,4 @@ func (ctrl *ApplicationController) logAppEvent(ctx context.Context, a *appv1.App
ctrl.auditLogger.LogAppEvent(a, eventInfo, message, "", eventLabels)
}
func (ctrl *ApplicationController) applyImpersonationConfig(config *rest.Config, proj *appv1.AppProject, app *appv1.Application, destCluster *appv1.Cluster) error {
impersonationEnabled, err := ctrl.settingsMgr.IsImpersonationEnabled()
if err != nil {
return fmt.Errorf("error getting impersonation setting: %w", err)
}
if !impersonationEnabled {
return nil
}
user, err := deriveServiceAccountToImpersonate(proj, app, destCluster)
if err != nil {
return fmt.Errorf("error deriving service account to impersonate: %w", err)
}
config.Impersonate = rest.ImpersonationConfig{
UserName: user,
}
return nil
}
type ClusterFilterFunction func(c *appv1.Cluster, distributionFunction sharding.DistributionFunction) bool

View File

@@ -5,7 +5,6 @@ import (
"encoding/json"
"errors"
"fmt"
"strconv"
"testing"
"time"
@@ -95,11 +94,11 @@ func (m *MockKubectl) DeleteResource(ctx context.Context, config *rest.Config, g
return m.Kubectl.DeleteResource(ctx, config, gvk, name, namespace, deleteOptions)
}
func newFakeController(ctx context.Context, data *fakeData, repoErr error) *ApplicationController {
return newFakeControllerWithResync(ctx, data, time.Minute, repoErr, nil)
func newFakeController(data *fakeData, repoErr error) *ApplicationController {
return newFakeControllerWithResync(data, time.Minute, repoErr, nil)
}
func newFakeControllerWithResync(ctx context.Context, data *fakeData, appResyncPeriod time.Duration, repoErr, revisionPathsErr error) *ApplicationController {
func newFakeControllerWithResync(data *fakeData, appResyncPeriod time.Duration, repoErr, revisionPathsErr error) *ApplicationController {
var clust corev1.Secret
err := yaml.Unmarshal([]byte(fakeCluster), &clust)
if err != nil {
@@ -107,33 +106,33 @@ func newFakeControllerWithResync(ctx context.Context, data *fakeData, appResyncP
}
// Mock out call to GenerateManifest
mockRepoClient := &mockrepoclient.RepoServerServiceClient{}
mockRepoClient := mockrepoclient.RepoServerServiceClient{}
if len(data.manifestResponses) > 0 {
for _, response := range data.manifestResponses {
if repoErr != nil {
mockRepoClient.EXPECT().GenerateManifest(mock.Anything, mock.Anything).Return(response, repoErr).Once()
mockRepoClient.On("GenerateManifest", mock.Anything, mock.Anything).Return(response, repoErr).Once()
} else {
mockRepoClient.EXPECT().GenerateManifest(mock.Anything, mock.Anything).Return(response, nil).Once()
mockRepoClient.On("GenerateManifest", mock.Anything, mock.Anything).Return(response, nil).Once()
}
}
} else {
if repoErr != nil {
mockRepoClient.EXPECT().GenerateManifest(mock.Anything, mock.Anything).Return(data.manifestResponse, repoErr).Once()
mockRepoClient.On("GenerateManifest", mock.Anything, mock.Anything).Return(data.manifestResponse, repoErr).Once()
} else {
mockRepoClient.EXPECT().GenerateManifest(mock.Anything, mock.Anything).Return(data.manifestResponse, nil).Once()
mockRepoClient.On("GenerateManifest", mock.Anything, mock.Anything).Return(data.manifestResponse, nil).Once()
}
}
if revisionPathsErr != nil {
mockRepoClient.EXPECT().UpdateRevisionForPaths(mock.Anything, mock.Anything).Return(nil, revisionPathsErr)
mockRepoClient.On("UpdateRevisionForPaths", mock.Anything, mock.Anything).Return(nil, revisionPathsErr)
} else {
mockRepoClient.EXPECT().UpdateRevisionForPaths(mock.Anything, mock.Anything).Return(data.updateRevisionForPathsResponse, nil)
mockRepoClient.On("UpdateRevisionForPaths", mock.Anything, mock.Anything).Return(data.updateRevisionForPathsResponse, nil)
}
mockRepoClientset := &mockrepoclient.Clientset{RepoServerServiceClient: mockRepoClient}
mockRepoClientset := mockrepoclient.Clientset{RepoServerServiceClient: &mockRepoClient}
mockCommitClientset := &mockcommitclient.Clientset{}
mockCommitClientset := mockcommitclient.Clientset{}
secret := corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
@@ -158,15 +157,15 @@ func newFakeControllerWithResync(ctx context.Context, data *fakeData, appResyncP
runtimeObjs := []runtime.Object{&clust, &secret, &cm}
runtimeObjs = append(runtimeObjs, data.additionalObjs...)
kubeClient := fake.NewClientset(runtimeObjs...)
settingsMgr := settings.NewSettingsManager(ctx, kubeClient, test.FakeArgoCDNamespace)
settingsMgr := settings.NewSettingsManager(context.Background(), kubeClient, test.FakeArgoCDNamespace)
kubectl := &MockKubectl{Kubectl: &kubetest.MockKubectlCmd{}}
ctrl, err := NewApplicationController(
test.FakeArgoCDNamespace,
settingsMgr,
kubeClient,
appclientset.NewSimpleClientset(data.apps...),
mockRepoClientset,
mockCommitClientset,
&mockRepoClientset,
&mockCommitClientset,
appstatecache.NewCache(
cacheutil.NewCache(cacheutil.NewInMemoryCache(1*time.Minute)),
1*time.Minute,
@@ -197,7 +196,7 @@ func newFakeControllerWithResync(ctx context.Context, data *fakeData, appResyncP
false,
)
db := &dbmocks.ArgoDB{}
db.EXPECT().GetApplicationControllerReplicas().Return(1).Maybe()
db.On("GetApplicationControllerReplicas").Return(1)
// Setting a default sharding algorithm for the tests where we cannot set it.
ctrl.clusterSharding = sharding.NewClusterSharding(db, 0, 1, common.DefaultShardingAlgorithm)
if err != nil {
@@ -207,25 +206,27 @@ func newFakeControllerWithResync(ctx context.Context, data *fakeData, appResyncP
defer cancelProj()
cancelApp := test.StartInformer(ctrl.appInformer)
defer cancelApp()
clusterCacheMock := &mocks.ClusterCache{}
clusterCacheMock.EXPECT().IsNamespaced(mock.Anything).Return(true, nil)
clusterCacheMock.EXPECT().GetOpenAPISchema().Return(nil)
clusterCacheMock.EXPECT().GetGVKParser().Return(nil)
clusterCacheMock := mocks.ClusterCache{}
clusterCacheMock.On("IsNamespaced", mock.Anything).Return(true, nil)
clusterCacheMock.On("GetOpenAPISchema").Return(nil, nil)
clusterCacheMock.On("GetGVKParser").Return(nil)
mockStateCache := &mockstatecache.LiveStateCache{}
ctrl.appStateManager.(*appStateManager).liveStateCache = mockStateCache
ctrl.stateCache = mockStateCache
mockStateCache.EXPECT().IsNamespaced(mock.Anything, mock.Anything).Return(true, nil)
mockStateCache.EXPECT().GetManagedLiveObjs(mock.Anything, mock.Anything, mock.Anything).Return(data.managedLiveObjs, nil)
mockStateCache.EXPECT().GetVersionsInfo(mock.Anything).Return("v1.2.3", nil, nil)
mockStateCache := mockstatecache.LiveStateCache{}
ctrl.appStateManager.(*appStateManager).liveStateCache = &mockStateCache
ctrl.stateCache = &mockStateCache
mockStateCache.On("IsNamespaced", mock.Anything, mock.Anything).Return(true, nil)
mockStateCache.On("GetManagedLiveObjs", mock.Anything, mock.Anything, mock.Anything).Return(data.managedLiveObjs, nil)
mockStateCache.On("GetVersionsInfo", mock.Anything).Return("v1.2.3", nil, nil)
response := make(map[kube.ResourceKey]v1alpha1.ResourceNode)
for k, v := range data.namespacedResources {
response[k] = v.ResourceNode
}
mockStateCache.EXPECT().GetNamespaceTopLevelResources(mock.Anything, mock.Anything).Return(response, nil)
mockStateCache.EXPECT().IterateResources(mock.Anything, mock.Anything).Return(nil)
mockStateCache.EXPECT().GetClusterCache(mock.Anything).Return(clusterCacheMock, nil)
mockStateCache.EXPECT().IterateHierarchyV2(mock.Anything, mock.Anything, mock.Anything).Run(func(_ *v1alpha1.Cluster, keys []kube.ResourceKey, action func(_ v1alpha1.ResourceNode, _ string) bool) {
mockStateCache.On("GetNamespaceTopLevelResources", mock.Anything, mock.Anything).Return(response, nil)
mockStateCache.On("IterateResources", mock.Anything, mock.Anything).Return(nil)
mockStateCache.On("GetClusterCache", mock.Anything).Return(&clusterCacheMock, nil)
mockStateCache.On("IterateHierarchyV2", mock.Anything, mock.Anything, mock.Anything).Run(func(args mock.Arguments) {
keys := args[1].([]kube.ResourceKey)
action := args[2].(func(child v1alpha1.ResourceNode, appName string) bool)
for _, key := range keys {
appName := ""
if res, ok := data.namespacedResources[key]; ok {
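The IterateHierarchyV2 stubs above show the trade-off between the two Run hooks: the EXPECT() variant takes a typed callback, while the On() variant receives untyped mock.Arguments that must be type-asserted back out. A self-contained sketch of the untyped form (Walker and WalkerMock are illustrations, not repo types):

package main

import (
	"fmt"

	"github.com/stretchr/testify/mock"
)

// WalkerMock is a hypothetical hand-written testify mock.
type WalkerMock struct{ mock.Mock }

func (m *WalkerMock) Walk(keys []string, visit func(key string) bool) error {
	return m.Called(keys, visit).Error(0)
}

func main() {
	m := &WalkerMock{}
	// Run receives raw mock.Arguments; each argument is recovered with a
	// type assertion, mirroring the stub above.
	m.On("Walk", mock.Anything, mock.Anything).Run(func(args mock.Arguments) {
		keys := args[0].([]string)
		visit := args[1].(func(key string) bool)
		for _, k := range keys {
			visit(k)
		}
	}).Return(nil)
	_ = m.Walk([]string{"a", "b"}, func(k string) bool { fmt.Println("visited", k); return true })
}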
@@ -595,7 +596,7 @@ func newFakeServiceAccount() map[string]any {
func TestAutoSync(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -613,7 +614,7 @@ func TestAutoSyncEnabledSetToTrue(t *testing.T) {
app := newFakeApp()
enable := true
app.Spec.SyncPolicy.Automated = &v1alpha1.SyncPolicyAutomated{Enabled: &enable}
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -634,7 +635,7 @@ func TestAutoSyncMultiSourceWithoutSelfHeal(t *testing.T) {
app := newFakeMultiSourceApp()
app.Spec.SyncPolicy.Automated.SelfHeal = false
app.Status.OperationState.SyncResult.Revisions = []string{"z", "x", "v"}
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revisions: []string{"z", "x", "v"},
@@ -649,7 +650,7 @@ func TestAutoSyncMultiSourceWithoutSelfHeal(t *testing.T) {
app := newFakeMultiSourceApp()
app.Spec.SyncPolicy.Automated.SelfHeal = false
app.Status.OperationState.SyncResult.Revisions = []string{"z", "x", "v"}
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revisions: []string{"a", "b", "c"},
@@ -665,7 +666,7 @@ func TestAutoSyncMultiSourceWithoutSelfHeal(t *testing.T) {
func TestAutoSyncNotAllowEmpty(t *testing.T) {
app := newFakeApp()
app.Spec.SyncPolicy.Automated.Prune = true
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -678,7 +679,7 @@ func TestAutoSyncAllowEmpty(t *testing.T) {
app := newFakeApp()
app.Spec.SyncPolicy.Automated.Prune = true
app.Spec.SyncPolicy.Automated.AllowEmpty = true
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -692,7 +693,7 @@ func TestSkipAutoSync(t *testing.T) {
// Set current to 'aaaaa', desired to 'aaaa' and mark system OutOfSync
t.Run("PreviouslySyncedToRevision", func(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
@@ -707,7 +708,7 @@ func TestSkipAutoSync(t *testing.T) {
// Verify we skip when we are already Synced (even if revision is different)
t.Run("AlreadyInSyncedState", func(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeSynced,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -723,7 +724,7 @@ func TestSkipAutoSync(t *testing.T) {
t.Run("AutoSyncIsDisabled", func(t *testing.T) {
app := newFakeApp()
app.Spec.SyncPolicy = nil
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -740,7 +741,7 @@ func TestSkipAutoSync(t *testing.T) {
app := newFakeApp()
enable := false
app.Spec.SyncPolicy.Automated = &v1alpha1.SyncPolicyAutomated{Enabled: &enable}
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -757,7 +758,7 @@ func TestSkipAutoSync(t *testing.T) {
app := newFakeApp()
now := metav1.Now()
app.DeletionTimestamp = &now
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -783,7 +784,7 @@ func TestSkipAutoSync(t *testing.T) {
Source: *app.Spec.Source.DeepCopy(),
},
}
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -807,7 +808,7 @@ func TestSkipAutoSync(t *testing.T) {
Source: *app.Spec.Source.DeepCopy(),
},
}
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -821,7 +822,7 @@ func TestSkipAutoSync(t *testing.T) {
t.Run("NeedsToPruneResourcesOnlyButAutomatedPruneDisabled", func(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
@@ -847,7 +848,7 @@ func TestAutoSyncIndicateError(t *testing.T) {
},
},
}
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
@@ -907,7 +908,7 @@ func TestAutoSyncParameterOverrides(t *testing.T) {
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
}
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{{Name: "guestbook", Kind: kube.DeploymentKind, Status: v1alpha1.SyncStatusCodeOutOfSync}}, true)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get(t.Context(), "my-app", metav1.GetOptions{})
@@ -925,7 +926,7 @@ func TestAutoSyncParameterOverrides(t *testing.T) {
},
},
}
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
app.Status.OperationState.SyncResult.Revisions = []string{"z", "x", "v"}
app.Status.OperationState.SyncResult.Sources[0].Helm = &v1alpha1.ApplicationSourceHelm{
Parameters: []v1alpha1.HelmParameter{
@@ -972,7 +973,7 @@ func TestFinalizeAppDeletion(t *testing.T) {
app.SetCascadedDeletion(v1alpha1.ResourcesFinalizerName)
app.DeletionTimestamp = &now
app.Spec.Destination.Namespace = test.FakeArgoCDNamespace
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app, &defaultProj}, managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app, &defaultProj}, managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{}}, nil)
patched := false
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
defaultReactor := fakeAppCs.ReactionChain[0]
@@ -1017,7 +1018,7 @@ func TestFinalizeAppDeletion(t *testing.T) {
appObj := kube.MustToUnstructured(&app)
cm := newFakeCM()
strayObj := kube.MustToUnstructured(&cm)
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, &defaultProj, &restrictedProj},
managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{
kube.GetResourceKey(appObj): appObj,
@@ -1058,7 +1059,7 @@ func TestFinalizeAppDeletion(t *testing.T) {
app := newFakeAppWithDestName()
app.SetCascadedDeletion(v1alpha1.ResourcesFinalizerName)
app.DeletionTimestamp = &now
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app, &defaultProj}, managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app, &defaultProj}, managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{}}, nil)
patched := false
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
defaultReactor := fakeAppCs.ReactionChain[0]
@@ -1084,7 +1085,7 @@ func TestFinalizeAppDeletion(t *testing.T) {
testShouldDelete := func(app *v1alpha1.Application) {
appObj := kube.MustToUnstructured(&app)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app, &defaultProj}, managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app, &defaultProj}, managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{
kube.GetResourceKey(appObj): appObj,
}}, nil)
@@ -1118,7 +1119,7 @@ func TestFinalizeAppDeletion(t *testing.T) {
app := newFakeApp()
app.SetPostDeleteFinalizer()
app.Spec.Destination.Namespace = test.FakeArgoCDNamespace
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
manifestResponses: []*apiclient.ManifestResponse{{
Manifests: []string{fakePostDeleteHook},
}},
@@ -1160,7 +1161,7 @@ func TestFinalizeAppDeletion(t *testing.T) {
},
}
require.NoError(t, unstructured.SetNestedField(liveHook.Object, conditions, "status", "conditions"))
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
manifestResponses: []*apiclient.ManifestResponse{{
Manifests: []string{fakePostDeleteHook},
}},
@@ -1204,7 +1205,7 @@ func TestFinalizeAppDeletion(t *testing.T) {
},
}
require.NoError(t, unstructured.SetNestedField(liveHook.Object, conditions, "status", "conditions"))
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
manifestResponses: []*apiclient.ManifestResponse{{
Manifests: []string{fakeRoleBinding, fakeRole, fakeServiceAccount, fakePostDeleteHook},
}},
@@ -1245,117 +1246,6 @@ func TestFinalizeAppDeletion(t *testing.T) {
})
}
func TestFinalizeAppDeletionWithImpersonation(t *testing.T) {
type fixture struct {
application *v1alpha1.Application
controller *ApplicationController
}
setup := func(destinationNamespace, serviceAccountName string) *fixture {
app := newFakeApp()
app.Status.OperationState = nil
app.Status.History = nil
now := metav1.Now()
app.DeletionTimestamp = &now
project := &v1alpha1.AppProject{
ObjectMeta: metav1.ObjectMeta{
Namespace: test.FakeArgoCDNamespace,
Name: "default",
},
Spec: v1alpha1.AppProjectSpec{
SourceRepos: []string{"*"},
Destinations: []v1alpha1.ApplicationDestination{
{
Server: "*",
Namespace: "*",
},
},
DestinationServiceAccounts: []v1alpha1.ApplicationDestinationServiceAccount{
{
Server: "https://localhost:6443",
Namespace: destinationNamespace,
DefaultServiceAccount: serviceAccountName,
},
},
},
}
additionalObjs := []runtime.Object{}
if serviceAccountName != "" {
syncServiceAccount := &corev1.ServiceAccount{
ObjectMeta: metav1.ObjectMeta{
Name: serviceAccountName,
Namespace: test.FakeDestNamespace,
},
}
additionalObjs = append(additionalObjs, syncServiceAccount)
}
data := fakeData{
apps: []runtime.Object{app, project},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
Namespace: test.FakeDestNamespace,
Server: "https://localhost:6443",
Revision: "abc123",
},
managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{},
configMapData: map[string]string{
"application.sync.impersonation.enabled": strconv.FormatBool(true),
},
additionalObjs: additionalObjs,
}
ctrl := newFakeController(t.Context(), &data, nil)
return &fixture{
application: app,
controller: ctrl,
}
}
t.Run("no matching impersonation service account is configured", func(t *testing.T) {
// given impersonation is enabled but no matching service account exists
f := setup(test.FakeDestNamespace, "")
// when
err := f.controller.finalizeApplicationDeletion(f.application, func(_ string) ([]*v1alpha1.Cluster, error) {
return []*v1alpha1.Cluster{}, nil
})
// then deletion should fail due to impersonation error
require.Error(t, err)
assert.Contains(t, err.Error(), "error deriving service account to impersonate")
})
t.Run("valid impersonation service account is configured", func(t *testing.T) {
// given impersonation is enabled with valid service account
f := setup(test.FakeDestNamespace, "test-sa")
// when
err := f.controller.finalizeApplicationDeletion(f.application, func(_ string) ([]*v1alpha1.Cluster, error) {
return []*v1alpha1.Cluster{}, nil
})
// then deletion should succeed
require.NoError(t, err)
})
t.Run("invalid application destination cluster", func(t *testing.T) {
// given impersonation is enabled but destination cluster does not exist
f := setup(test.FakeDestNamespace, "test-sa")
f.application.Spec.Destination.Server = "https://invalid-cluster:6443"
f.application.Spec.Destination.Name = "invalid"
// when
err := f.controller.finalizeApplicationDeletion(f.application, func(_ string) ([]*v1alpha1.Cluster, error) {
return []*v1alpha1.Cluster{}, nil
})
// then deletion should still succeed by removing finalizers
require.NoError(t, err)
})
}
// TestNormalizeApplication verifies we normalize an application during reconciliation
func TestNormalizeApplication(t *testing.T) {
defaultProj := v1alpha1.AppProject{
@@ -1389,7 +1279,7 @@ func TestNormalizeApplication(t *testing.T) {
{
// Verify we normalize the app because project is missing
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
key, _ := cache.MetaNamespaceKeyFunc(app)
ctrl.appRefreshQueue.AddRateLimited(key)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
@@ -1411,7 +1301,7 @@ func TestNormalizeApplication(t *testing.T) {
// Verify we don't unnecessarily normalize app when project is set
app.Spec.Project = "default"
data.apps[0] = app
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
key, _ := cache.MetaNamespaceKeyFunc(app)
ctrl.appRefreshQueue.AddRateLimited(key)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
@@ -1436,7 +1326,7 @@ func TestHandleAppUpdated(t *testing.T) {
app.Spec.Destination.Server = v1alpha1.KubernetesInternalAPIServerAddr
proj := defaultProj.DeepCopy()
proj.Spec.SourceNamespaces = []string{test.FakeArgoCDNamespace}
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app, proj}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app, proj}}, nil)
ctrl.handleObjectUpdated(map[string]bool{app.InstanceName(ctrl.namespace): true}, kube.GetObjectRef(kube.MustToUnstructured(app)))
isRequested, level := ctrl.isRefreshRequested(app.QualifiedName())
@@ -1463,7 +1353,7 @@ func TestHandleOrphanedResourceUpdated(t *testing.T) {
proj := defaultProj.DeepCopy()
proj.Spec.OrphanedResources = &v1alpha1.OrphanedResourcesMonitorSettings{}
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app1, app2, proj}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app1, app2, proj}}, nil)
ctrl.handleObjectUpdated(map[string]bool{}, corev1.ObjectReference{UID: "test", Kind: kube.DeploymentKind, Name: "test", Namespace: test.FakeArgoCDNamespace})
@@ -1494,7 +1384,7 @@ func TestGetResourceTree_HasOrphanedResources(t *testing.T) {
ResourceRef: v1alpha1.ResourceRef{Group: "apps", Kind: "Deployment", Namespace: "default", Name: "deploy2"},
}
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, proj},
namespacedResources: map[kube.ResourceKey]namespacedResource{
kube.NewResourceKey("apps", "Deployment", "default", "nginx-deployment"): {ResourceNode: managedDeploy},
@@ -1517,7 +1407,7 @@ func TestGetResourceTree_HasOrphanedResources(t *testing.T) {
}
func TestSetOperationStateOnDeletedApp(t *testing.T) {
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{}}, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
fakeAppCs.ReactionChain = nil
patched := false
@@ -1535,7 +1425,7 @@ func TestSetOperationStateLogRetries(t *testing.T) {
t.Cleanup(func() {
logrus.StandardLogger().ReplaceHooks(logrus.LevelHooks{})
})
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{}}, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
fakeAppCs.ReactionChain = nil
patched := false
@@ -1548,12 +1438,7 @@ func TestSetOperationStateLogRetries(t *testing.T) {
})
ctrl.setOperationState(newFakeApp(), &v1alpha1.OperationState{Phase: synccommon.OperationSucceeded})
assert.True(t, patched)
require.GreaterOrEqual(t, len(hook.Entries), 1)
entry := hook.Entries[0]
require.Contains(t, entry.Data, "error")
errorVal, ok := entry.Data["error"].(error)
require.True(t, ok, "error field should be of type error")
assert.Contains(t, errorVal.Error(), "fake error")
assert.Contains(t, hook.Entries[0].Message, "fake error")
}
func TestNeedRefreshAppStatus(t *testing.T) {
@@ -1591,7 +1476,7 @@ func TestNeedRefreshAppStatus(t *testing.T) {
app.Status.Sync.ComparedTo.Source = app.Spec.GetSource()
}
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{}}, nil)
t.Run("no need to refresh just reconciled application", func(t *testing.T) {
needRefresh, _, _ := ctrl.needRefreshAppStatus(app, 1*time.Hour, 2*time.Hour)
@@ -1603,7 +1488,7 @@ func TestNeedRefreshAppStatus(t *testing.T) {
assert.False(t, needRefresh)
// use a one-off controller so other tests don't have a manual refresh request
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{}}, nil)
// refresh app using the 'deepest' requested comparison level
ctrl.requestAppRefresh(app.Name, CompareWithRecent.Pointer(), nil)
@@ -1620,7 +1505,7 @@ func TestNeedRefreshAppStatus(t *testing.T) {
assert.False(t, needRefresh)
// use a one-off controller so other tests don't have a manual refresh request
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{}}, nil)
// refresh app with a non-nil delay
// use zero-second delay to test the add later logic without waiting in the test
@@ -1650,7 +1535,7 @@ func TestNeedRefreshAppStatus(t *testing.T) {
app := app.DeepCopy()
// use a one-off controller so other tests don't have a manual refresh request
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{}}, nil)
needRefresh, _, _ := ctrl.needRefreshAppStatus(app, 1*time.Hour, 2*time.Hour)
assert.False(t, needRefresh)
@@ -1680,7 +1565,7 @@ func TestNeedRefreshAppStatus(t *testing.T) {
}
// use a one-off controller so other tests don't have a manual refresh request
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{}}, nil)
needRefresh, _, _ := ctrl.needRefreshAppStatus(app, 1*time.Hour, 2*time.Hour)
assert.False(t, needRefresh)
@@ -1760,7 +1645,7 @@ func TestNeedRefreshAppStatus(t *testing.T) {
}
func TestUpdatedManagedNamespaceMetadata(t *testing.T) {
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{}}, nil)
app := newFakeApp()
app.Spec.SyncPolicy.ManagedNamespaceMetadata = &v1alpha1.ManagedNamespaceMetadata{
Labels: map[string]string{
@@ -1784,7 +1669,7 @@ func TestUpdatedManagedNamespaceMetadata(t *testing.T) {
}
func TestUnchangedManagedNamespaceMetadata(t *testing.T) {
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{}}, nil)
app := newFakeApp()
app.Spec.SyncPolicy.ManagedNamespaceMetadata = &v1alpha1.ManagedNamespaceMetadata{
Labels: map[string]string{
@@ -1827,7 +1712,7 @@ func TestRefreshAppConditions(t *testing.T) {
t.Run("NoErrorConditions", func(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app, &defaultProj}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app, &defaultProj}}, nil)
_, hasErrors := ctrl.refreshAppConditions(app)
assert.False(t, hasErrors)
@@ -1838,7 +1723,7 @@ func TestRefreshAppConditions(t *testing.T) {
app := newFakeApp()
app.Status.SetConditions([]v1alpha1.ApplicationCondition{{Type: v1alpha1.ApplicationConditionExcludedResourceWarning}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app, &defaultProj}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app, &defaultProj}}, nil)
_, hasErrors := ctrl.refreshAppConditions(app)
assert.False(t, hasErrors)
@@ -1851,7 +1736,7 @@ func TestRefreshAppConditions(t *testing.T) {
app.Spec.Project = "wrong project"
app.Status.SetConditions([]v1alpha1.ApplicationCondition{{Type: v1alpha1.ApplicationConditionInvalidSpecError, Message: "old message"}}, nil)
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app, &defaultProj}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app, &defaultProj}}, nil)
_, hasErrors := ctrl.refreshAppConditions(app)
assert.True(t, hasErrors)
@@ -1866,7 +1751,7 @@ func TestUpdateReconciledAt(t *testing.T) {
reconciledAt := metav1.NewTime(time.Now().Add(-1 * time.Second))
app.Status = v1alpha1.ApplicationStatus{ReconciledAt: &reconciledAt}
app.Status.Sync = v1alpha1.SyncStatus{ComparedTo: v1alpha1.ComparedTo{Source: app.Spec.GetSource(), Destination: app.Spec.Destination, IgnoreDifferences: app.Spec.IgnoreDifferences}}
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
@@ -1997,7 +1882,7 @@ apps/Deployment:
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{tc.app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
@@ -2061,7 +1946,7 @@ apps/Deployment:
return hs`,
}
ctrl := newFakeControllerWithResync(t.Context(), &fakeData{
ctrl := newFakeControllerWithResync(&fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
@@ -2128,7 +2013,7 @@ apps/Deployment:
func TestProjectErrorToCondition(t *testing.T) {
app := newFakeApp()
app.Spec.Project = "wrong project"
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
@@ -2156,7 +2041,7 @@ func TestProjectErrorToCondition(t *testing.T) {
func TestFinalizeProjectDeletion_HasApplications(t *testing.T) {
app := newFakeApp()
proj := &v1alpha1.AppProject{ObjectMeta: metav1.ObjectMeta{Name: "default", Namespace: test.FakeArgoCDNamespace}}
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app, proj}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app, proj}}, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
patched := false
@@ -2172,7 +2057,7 @@ func TestFinalizeProjectDeletion_HasApplications(t *testing.T) {
func TestFinalizeProjectDeletion_DoesNotHaveApplications(t *testing.T) {
proj := &v1alpha1.AppProject{ObjectMeta: metav1.ObjectMeta{Name: "default", Namespace: test.FakeArgoCDNamespace}}
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{&defaultProj}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{&defaultProj}}, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
receivedPatch := map[string]any{}
@@ -2198,7 +2083,7 @@ func TestProcessRequestedAppOperation_FailedNoRetries(t *testing.T) {
app.Operation = &v1alpha1.Operation{
Sync: &v1alpha1.SyncOperation{},
}
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
receivedPatch := map[string]any{}
fakeAppCs.PrependReactor("patch", "*", func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
@@ -2225,7 +2110,7 @@ func TestProcessRequestedAppOperation_InvalidDestination(t *testing.T) {
proj := defaultProj
proj.Name = "test-project"
proj.Spec.SourceNamespaces = []string{test.FakeArgoCDNamespace}
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app, &proj}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app, &proj}}, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
receivedPatch := map[string]any{}
func() {
@@ -2254,7 +2139,7 @@ func TestProcessRequestedAppOperation_FailedHasRetries(t *testing.T) {
Sync: &v1alpha1.SyncOperation{},
Retry: v1alpha1.RetryStrategy{Limit: 1},
}
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
receivedPatch := map[string]any{}
fakeAppCs.PrependReactor("patch", "*", func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
@@ -2301,7 +2186,7 @@ func TestProcessRequestedAppOperation_RunningPreviouslyFailed(t *testing.T) {
Revision: "abc123",
},
}
ctrl := newFakeController(t.Context(), data, nil)
ctrl := newFakeController(data, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
receivedPatch := map[string]any{}
fakeAppCs.PrependReactor("patch", "*", func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
@@ -2358,7 +2243,7 @@ func TestProcessRequestedAppOperation_RunningPreviouslyFailedBackoff(t *testing.T) {
Revision: "abc123",
},
}
ctrl := newFakeController(t.Context(), data, nil)
ctrl := newFakeController(data, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
fakeAppCs.PrependReactor("patch", "*", func(_ kubetesting.Action) (handled bool, ret runtime.Object, err error) {
require.FailNow(t, "A patch should not have been called if the backoff has not passed")
@@ -2386,7 +2271,7 @@ func TestProcessRequestedAppOperation_HasRetriesTerminated(t *testing.T) {
Revision: "abc123",
},
}
ctrl := newFakeController(t.Context(), data, nil)
ctrl := newFakeController(data, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
receivedPatch := map[string]any{}
fakeAppCs.PrependReactor("patch", "*", func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
@@ -2410,7 +2295,7 @@ func TestProcessRequestedAppOperation_Successful(t *testing.T) {
app.Operation = &v1alpha1.Operation{
Sync: &v1alpha1.SyncOperation{},
}
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponses: []*apiclient.ManifestResponse{{
Manifests: []string{},
@@ -2436,6 +2321,41 @@ func TestProcessRequestedAppOperation_Successful(t *testing.T) {
assert.Equal(t, CompareWithLatestForceResolve, level)
}
func TestProcessRequestedAppAutomatedOperation_Successful(t *testing.T) {
app := newFakeApp()
app.Spec.Project = "default"
app.Operation = &v1alpha1.Operation{
Sync: &v1alpha1.SyncOperation{},
InitiatedBy: v1alpha1.OperationInitiator{
Automated: true,
},
}
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponses: []*apiclient.ManifestResponse{{
Manifests: []string{},
}},
}, nil)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
receivedPatch := map[string]any{}
fakeAppCs.PrependReactor("patch", "*", func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
if patchAction, ok := action.(kubetesting.PatchAction); ok {
require.NoError(t, json.Unmarshal(patchAction.GetPatch(), &receivedPatch))
}
return true, &v1alpha1.Application{}, nil
})
ctrl.processRequestedAppOperation(app)
phase, _, _ := unstructured.NestedString(receivedPatch, "status", "operationState", "phase")
message, _, _ := unstructured.NestedString(receivedPatch, "status", "operationState", "message")
assert.Equal(t, string(synccommon.OperationSucceeded), phase)
assert.Equal(t, "successfully synced (no more tasks)", message)
ok, level := ctrl.isRefreshRequested(ctrl.toAppKey(app.Name))
assert.True(t, ok)
assert.Equal(t, CompareWithLatest, level)
}
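
The test added above leans on two patterns worth spelling out: a PrependReactor on the fake clientset to capture the status patch, and unstructured.NestedString to read nested fields out of the decoded JSON. A self-contained sketch of the same flow, using the core Kubernetes fake client so it runs standalone (the names and the configmaps resource are stand-ins; the test itself does this against Argo CD's generated application clientset):

package example

import (
	"context"
	"encoding/json"
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes/fake"
	kubetesting "k8s.io/client-go/testing"
)

func TestPatchCaptureSketch(t *testing.T) {
	client := fake.NewClientset()
	receivedPatch := map[string]any{}

	// Runs ahead of the default reactors and records the raw patch bytes.
	client.PrependReactor("patch", "configmaps", func(action kubetesting.Action) (bool, runtime.Object, error) {
		if patchAction, ok := action.(kubetesting.PatchAction); ok {
			require.NoError(t, json.Unmarshal(patchAction.GetPatch(), &receivedPatch))
		}
		return true, nil, nil
	})

	patch := []byte(`{"metadata":{"labels":{"team":"argo"}}}`)
	_, err := client.CoreV1().ConfigMaps("default").
		Patch(context.Background(), "cm", types.MergePatchType, patch, metav1.PatchOptions{})
	require.NoError(t, err)

	// The same field-walking the test does for status.operationState.phase.
	team, found, err := unstructured.NestedString(receivedPatch, "metadata", "labels", "team")
	require.NoError(t, err)
	assert.True(t, found)
	assert.Equal(t, "argo", team)
}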
func TestProcessRequestedAppOperation_SyncTimeout(t *testing.T) {
testCases := []struct {
name string
@@ -2485,7 +2405,7 @@ func TestProcessRequestedAppOperation_SyncTimeout(t *testing.T) {
Revision: "HEAD",
},
}
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponses: []*apiclient.ManifestResponse{{
Manifests: []string{},
@@ -2527,9 +2447,9 @@ func TestGetAppHosts(t *testing.T) {
"application.allowedNodeLabels": "label1,label2",
},
}
ctrl := newFakeController(t.Context(), data, nil)
ctrl := newFakeController(data, nil)
mockStateCache := &mockstatecache.LiveStateCache{}
mockStateCache.EXPECT().IterateResources(mock.Anything, mock.MatchedBy(func(callback func(res *clustercache.Resource, info *statecache.ResourceInfo)) bool {
mockStateCache.On("IterateResources", mock.Anything, mock.MatchedBy(func(callback func(res *clustercache.Resource, info *statecache.ResourceInfo)) bool {
// node resource
callback(&clustercache.Resource{
Ref: corev1.ObjectReference{Name: "minikube", Kind: "Node", APIVersion: "v1"},
@@ -2555,7 +2475,7 @@ func TestGetAppHosts(t *testing.T) {
ResourceRequests: map[corev1.ResourceName]resource.Quantity{corev1.ResourceCPU: resource.MustParse("2")},
}})
return true
})).Return(nil).Maybe()
})).Return(nil)
ctrl.stateCache = mockStateCache
hosts, err := ctrl.getAppHosts(&v1alpha1.Cluster{Server: "test", Name: "test"}, app, []v1alpha1.ResourceNode{{
@@ -2582,15 +2502,15 @@ func TestGetAppHosts(t *testing.T) {
func TestMetricsExpiration(t *testing.T) {
app := newFakeApp()
// Check expiration is disabled by default
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
assert.False(t, ctrl.metricsServer.HasExpiration())
// Check expiration is enabled if set
ctrl = newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}, metricsCacheExpiration: 10 * time.Second}, nil)
ctrl = newFakeController(&fakeData{apps: []runtime.Object{app}, metricsCacheExpiration: 10 * time.Second}, nil)
assert.True(t, ctrl.metricsServer.HasExpiration())
}
func TestToAppKey(t *testing.T) {
ctrl := newFakeController(t.Context(), &fakeData{}, nil)
ctrl := newFakeController(&fakeData{}, nil)
tests := []struct {
name string
input string
@@ -2610,7 +2530,7 @@ func TestToAppKey(t *testing.T) {
func Test_canProcessApp(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
ctrl.applicationNamespaces = []string{"good"}
t.Run("without cluster filter, good namespace", func(t *testing.T) {
app.Namespace = "good"
@@ -2641,7 +2561,7 @@ func Test_canProcessAppSkipReconcileAnnotation(t *testing.T) {
appSkipReconcileFalse.Annotations = map[string]string{common.AnnotationKeyAppSkipReconcile: "false"}
appSkipReconcileTrue := newFakeApp()
appSkipReconcileTrue.Annotations = map[string]string{common.AnnotationKeyAppSkipReconcile: "true"}
ctrl := newFakeController(t.Context(), &fakeData{}, nil)
ctrl := newFakeController(&fakeData{}, nil)
tests := []struct {
name string
input any
@@ -2662,7 +2582,7 @@ func Test_canProcessAppSkipReconcileAnnotation(t *testing.T) {
func Test_syncDeleteOption(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}}, nil)
cm := newFakeCM()
t.Run("without delete option object is deleted", func(t *testing.T) {
cmObj := kube.MustToUnstructured(&cm)
@@ -2683,7 +2603,7 @@ func Test_syncDeleteOption(t *testing.T) {
func TestAddControllerNamespace(t *testing.T) {
t.Run("set controllerNamespace when the app is in the controller namespace", func(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{},
}, nil)
@@ -2701,7 +2621,7 @@ func TestAddControllerNamespace(t *testing.T) {
app.Namespace = appNamespace
proj := defaultProj
proj.Spec.SourceNamespaces = []string{appNamespace}
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, &proj},
manifestResponse: &apiclient.ManifestResponse{},
applicationNamespaces: []string{appNamespace},
@@ -2958,7 +2878,7 @@ func assertDurationAround(t *testing.T, expected time.Duration, actual time.Duration) {
}
func TestSelfHealRemainingBackoff(t *testing.T) {
ctrl := newFakeController(t.Context(), &fakeData{}, nil)
ctrl := newFakeController(&fakeData{}, nil)
ctrl.selfHealBackoff = &wait.Backoff{
Factor: 3,
Duration: 2 * time.Second,
@@ -3040,7 +2960,7 @@ func TestSelfHealRemainingBackoff(t *testing.T) {
func TestSelfHealBackoffCooldownElapsed(t *testing.T) {
cooldown := time.Second * 30
ctrl := newFakeController(t.Context(), &fakeData{}, nil)
ctrl := newFakeController(&fakeData{}, nil)
ctrl.selfHealBackoffCooldown = cooldown
app := &v1alpha1.Application{

@@ -40,7 +40,7 @@ func (n netError) Error() string { return string(n) }
func (n netError) Timeout() bool { return false }
func (n netError) Temporary() bool { return false }
func fixtures(ctx context.Context, data map[string]string, opts ...func(secret *corev1.Secret)) (*fake.Clientset, *argosettings.SettingsManager) {
func fixtures(data map[string]string, opts ...func(secret *corev1.Secret)) (*fake.Clientset, *argosettings.SettingsManager) {
cm := &corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Name: common.ArgoCDConfigMapName,
@@ -65,17 +65,17 @@ func fixtures(ctx context.Context, data map[string]string, opts ...func(secret *corev1.Secret)) (*fake.Clientset, *argosettings.SettingsManager) {
opts[i](secret)
}
kubeClient := fake.NewClientset(cm, secret)
settingsManager := argosettings.NewSettingsManager(ctx, kubeClient, "default")
settingsManager := argosettings.NewSettingsManager(context.Background(), kubeClient, "default")
return kubeClient, settingsManager
}
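
This compare repeatedly rewrites t.Context() into context.Background() and, as in fixtures above, drops the ctx parameter from test helpers entirely. t.Context() was added in Go 1.24 and returns a context canceled just before the test's Cleanup functions run; context.Background() never expires, so work that must not outlive the test needs an explicit bound. The compare itself doesn't state the motivation for the conversion, so treat that as unstated; a side-by-side sketch of the two styles:

package example

import (
	"context"
	"testing"
	"time"
)

func TestContextStyles(t *testing.T) {
	// One side of the diff: scoped to the test, canceled before Cleanups
	// run (requires Go 1.24+).
	ctxA := t.Context()

	// The other side: never canceled on its own, so bound it explicitly
	// if the code under test should not outlive the test.
	ctxB, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	_, _ = ctxA, ctxB
}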
func TestHandleModEvent_HasChanges(_ *testing.T) {
clusterCache := &mocks.ClusterCache{}
clusterCache.EXPECT().Invalidate(mock.Anything, mock.Anything).Return().Once()
clusterCache.EXPECT().EnsureSynced().Return(nil).Once()
clusterCache.On("Invalidate", mock.Anything, mock.Anything).Return(nil).Once()
clusterCache.On("EnsureSynced").Return(nil).Once()
db := &dbmocks.ArgoDB{}
db.EXPECT().GetApplicationControllerReplicas().Return(1)
db.On("GetApplicationControllerReplicas").Return(1)
clustersCache := liveStateCache{
clusters: map[string]cache.ClusterCache{
"https://mycluster": clusterCache,
@@ -95,10 +95,10 @@ func TestHandleModEvent_HasChanges(_ *testing.T) {
func TestHandleModEvent_ClusterExcluded(t *testing.T) {
clusterCache := &mocks.ClusterCache{}
clusterCache.EXPECT().Invalidate(mock.Anything, mock.Anything).Return().Once()
clusterCache.EXPECT().EnsureSynced().Return(nil).Once()
clusterCache.On("Invalidate", mock.Anything, mock.Anything).Return(nil).Once()
clusterCache.On("EnsureSynced").Return(nil).Once()
db := &dbmocks.ArgoDB{}
db.EXPECT().GetApplicationControllerReplicas().Return(1).Maybe()
db.On("GetApplicationControllerReplicas").Return(1)
clustersCache := liveStateCache{
db: nil,
appInformer: nil,
@@ -128,10 +128,10 @@ func TestHandleModEvent_ClusterExcluded(t *testing.T) {
func TestHandleModEvent_NoChanges(_ *testing.T) {
clusterCache := &mocks.ClusterCache{}
clusterCache.EXPECT().Invalidate(mock.Anything).Panic("should not invalidate").Maybe()
clusterCache.EXPECT().EnsureSynced().Return(nil).Panic("should not re-sync").Maybe()
clusterCache.On("Invalidate", mock.Anything).Panic("should not invalidate")
clusterCache.On("EnsureSynced").Return(nil).Panic("should not re-sync")
db := &dbmocks.ArgoDB{}
db.EXPECT().GetApplicationControllerReplicas().Return(1).Maybe()
db.On("GetApplicationControllerReplicas").Return(1)
clustersCache := liveStateCache{
clusters: map[string]cache.ClusterCache{
"https://mycluster": clusterCache,
@@ -150,7 +150,7 @@ func TestHandleModEvent_NoChanges(_ *testing.T) {
func TestHandleAddEvent_ClusterExcluded(t *testing.T) {
db := &dbmocks.ArgoDB{}
db.EXPECT().GetApplicationControllerReplicas().Return(1).Maybe()
db.On("GetApplicationControllerReplicas").Return(1)
clustersCache := liveStateCache{
clusters: map[string]cache.ClusterCache{},
clusterSharding: sharding.NewClusterSharding(db, 0, 2, common.DefaultShardingAlgorithm),
@@ -169,9 +169,10 @@ func TestHandleDeleteEvent_CacheDeadlock(t *testing.T) {
Config: appv1.ClusterConfig{Username: "bar"},
}
db := &dbmocks.ArgoDB{}
db.EXPECT().GetApplicationControllerReplicas().Return(1)
db.On("GetApplicationControllerReplicas").Return(1)
fakeClient := fake.NewClientset()
settingsMgr := argosettings.NewSettingsManager(t.Context(), fakeClient, "argocd")
liveStateCacheLock := sync.RWMutex{}
gitopsEngineClusterCache := &mocks.ClusterCache{}
clustersCache := liveStateCache{
clusters: map[string]cache.ClusterCache{
@@ -179,7 +180,9 @@ func TestHandleDeleteEvent_CacheDeadlock(t *testing.T) {
},
clusterSharding: sharding.NewClusterSharding(db, 0, 1, common.DefaultShardingAlgorithm),
settingsMgr: settingsMgr,
lock: sync.RWMutex{},
// Set the lock here so we can reference it later
//nolint:govet // We need to overwrite here to have access to the lock
lock: liveStateCacheLock,
}
channel := make(chan string)
// Mocked lock held by the gitops-engine cluster cache
@@ -200,7 +203,7 @@ func TestHandleDeleteEvent_CacheDeadlock(t *testing.T) {
handleDeleteWasCalled.Lock()
engineHoldsEngineLock.Lock()
gitopsEngineClusterCache.EXPECT().EnsureSynced().Run(func() {
gitopsEngineClusterCache.On("EnsureSynced").Run(func(_ mock.Arguments) {
gitopsEngineClusterCacheLock.Lock()
t.Log("EnsureSynced: Engine has engine lock")
engineHoldsEngineLock.Unlock()
@@ -214,7 +217,7 @@ func TestHandleDeleteEvent_CacheDeadlock(t *testing.T) {
ensureSyncedCompleted.Unlock()
}).Return(nil).Once()
gitopsEngineClusterCache.EXPECT().Invalidate().Run(func(_ ...cache.UpdateSettingsFunc) {
gitopsEngineClusterCache.On("Invalidate").Run(func(_ mock.Arguments) {
// Allow EnsureSynced to continue now that we're in the deadlock condition
handleDeleteWasCalled.Unlock()
// Wait until gitops engine holds the gitops lock
@@ -227,7 +230,7 @@ func TestHandleDeleteEvent_CacheDeadlock(t *testing.T) {
t.Log("Invalidate: Invalidate has engine lock")
gitopsEngineClusterCacheLock.Unlock()
invalidateCompleted.Unlock()
}).Return().Maybe()
}).Return()
go func() {
// Start the gitops-engine lock holds
go func() {
@@ -775,7 +778,7 @@ func Test_GetVersionsInfo_error_redacted(t *testing.T) {
}
func TestLoadCacheSettings(t *testing.T) {
_, settingsManager := fixtures(t.Context(), map[string]string{
_, settingsManager := fixtures(map[string]string{
"application.instanceLabelKey": "testLabel",
"application.resourceTrackingMethod": string(appv1.TrackingMethodLabel),
"installationID": "123456789",

@@ -68,10 +68,6 @@ func populateNodeInfo(un *unstructured.Unstructured, res *ResourceInfo, customLabels []string) {
case "ServiceEntry":
populateIstioServiceEntryInfo(un, res)
}
case "argoproj.io":
if gvk.Kind == "Application" {
populateApplicationInfo(un, res)
}
}
}
@@ -492,13 +488,6 @@ func populateHostNodeInfo(un *unstructured.Unstructured, res *ResourceInfo) {
}
}
func populateApplicationInfo(un *unstructured.Unstructured, res *ResourceInfo) {
// Add managed-by-url annotation to info if present
if managedByURL, ok := un.GetAnnotations()[v1alpha1.AnnotationKeyManagedByURL]; ok {
res.Info = append(res.Info, v1alpha1.InfoItem{Name: "managed-by-url", Value: managedByURL})
}
}
func generateManifestHash(un *unstructured.Unstructured, ignores []v1alpha1.ResourceIgnoreDifferences, overrides map[string]v1alpha1.ResourceOverride, opts normalizers.IgnoreNormalizerOpts) (string, error) {
normalizer, err := normalizers.NewIgnoreNormalizer(ignores, overrides, opts)
if err != nil {

@@ -11,7 +11,6 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
"google.golang.org/grpc"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
@@ -125,7 +124,7 @@ func Test_getAppsForHydrationKey_RepoURLNormalization(t *testing.T) {
t.Parallel()
d := mocks.NewDependencies(t)
d.EXPECT().GetProcessableApps().Return(&v1alpha1.ApplicationList{
d.On("GetProcessableApps").Return(&v1alpha1.ApplicationList{
Items: []v1alpha1.Application{
{
Spec: v1alpha1.ApplicationSpec{
@@ -277,7 +276,7 @@ func Test_validateApplications_RootPathSkipped(t *testing.T) {
},
}
d.EXPECT().GetProcessableAppProj(mock.Anything).Return(&v1alpha1.AppProject{
d.On("GetProcessableAppProj", mock.Anything).Return(&v1alpha1.AppProject{
Spec: v1alpha1.AppProjectSpec{
SourceRepos: []string{"https://example.com/*"},
},
@@ -395,10 +394,10 @@ func TestProcessAppHydrateQueueItem_HydrationNeeded(t *testing.T) {
app.Status.SourceHydrator.CurrentOperation = nil
var persistedStatus *v1alpha1.SourceHydratorStatus
d.EXPECT().PersistAppHydratorStatus(mock.Anything, mock.Anything).Run(func(_ *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
persistedStatus = newStatus
d.On("PersistAppHydratorStatus", mock.Anything, mock.Anything).Run(func(args mock.Arguments) {
persistedStatus = args.Get(1).(*v1alpha1.SourceHydratorStatus)
}).Return().Once()
d.EXPECT().AddHydrationQueueItem(mock.Anything).Return().Once()
d.On("AddHydrationQueueItem", mock.Anything).Return().Once()
h := &Hydrator{
dependencies: d,
@@ -433,7 +432,7 @@ func TestProcessAppHydrateQueueItem_HydrationPassedTimeout(t *testing.T) {
},
},
}
d.EXPECT().AddHydrationQueueItem(mock.Anything).Return().Once()
d.On("AddHydrationQueueItem", mock.Anything).Return().Once()
h := &Hydrator{
dependencies: d,
@@ -496,21 +495,20 @@ func TestProcessHydrationQueueItem_ValidationFails(t *testing.T) {
hydrationKey := getHydrationQueueKey(app1)
// getAppsForHydrationKey returns two apps
d.EXPECT().GetProcessableApps().Return(&v1alpha1.ApplicationList{Items: []v1alpha1.Application{*app1, *app2}}, nil)
d.EXPECT().GetProcessableAppProj(mock.Anything).Return(nil, errors.New("test error")).Once()
d.EXPECT().GetProcessableAppProj(mock.Anything).Return(newTestProject(), nil).Once()
d.On("GetProcessableApps").Return(&v1alpha1.ApplicationList{Items: []v1alpha1.Application{*app1, *app2}}, nil)
d.On("GetProcessableAppProj", mock.Anything).Return(nil, errors.New("test error")).Once()
d.On("GetProcessableAppProj", mock.Anything).Return(newTestProject(), nil).Once()
h := &Hydrator{dependencies: d}
// Expect setAppHydratorError to be called
var persistedStatus1 *v1alpha1.SourceHydratorStatus
var persistedStatus2 *v1alpha1.SourceHydratorStatus
d.EXPECT().PersistAppHydratorStatus(mock.Anything, mock.Anything).Run(func(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
switch orig.Name {
case app1.Name:
persistedStatus1 = newStatus
case app2.Name:
persistedStatus2 = newStatus
d.On("PersistAppHydratorStatus", mock.Anything, mock.Anything).Run(func(args mock.Arguments) {
if args.Get(0).(*v1alpha1.Application).Name == app1.Name {
persistedStatus1 = args.Get(1).(*v1alpha1.SourceHydratorStatus)
} else if args.Get(0).(*v1alpha1.Application).Name == app2.Name {
persistedStatus2 = args.Get(1).(*v1alpha1.SourceHydratorStatus)
}
}).Return().Twice()
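
The hydrator hunks here and below convert mockery's typed expecter API, d.EXPECT().PersistAppHydratorStatus(...).Run(func(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {...}), into testify's stringly-typed d.On(...) form, where the Run callback receives untyped mock.Arguments and must cast by position. A minimal sketch of the difference with a hand-rolled mock (the struct and method names are stand-ins, not the project's generated mocks):

package example

import (
	"testing"

	"github.com/stretchr/testify/mock"
)

// Hand-rolled stand-in for a generated mock with one void method.
type depsMock struct{ mock.Mock }

func (m *depsMock) PersistStatus(app string, status *string) {
	m.Called(app, status)
}

func TestMockArgumentCapture(t *testing.T) {
	m := &depsMock{}
	var persisted *string

	// .On style (this side of the diff): the method is named by string and
	// the Run callback unpacks untyped mock.Arguments by index.
	m.On("PersistStatus", mock.Anything, mock.Anything).Run(func(args mock.Arguments) {
		persisted = args.Get(1).(*string)
	}).Return().Once()

	// The expecter style on the other side would instead be:
	//   m.EXPECT().PersistStatus(mock.Anything, mock.Anything).
	//       Run(func(app string, status *string) { persisted = status })
	// with typed callback parameters and no casting.

	phase := "Hydrating"
	m.PersistStatus("test-app", &phase)
	m.AssertExpectations(t)

	if persisted == nil || *persisted != "Hydrating" {
		t.Fatalf("expected captured status %q, got %v", phase, persisted)
	}
}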
@@ -538,23 +536,22 @@ func TestProcessHydrationQueueItem_HydrateFails_AppSpecificError(t *testing.T) {
app2 = setTestAppPhase(app2, v1alpha1.HydrateOperationPhaseHydrating)
hydrationKey := getHydrationQueueKey(app1)
d.EXPECT().GetProcessableApps().Return(&v1alpha1.ApplicationList{Items: []v1alpha1.Application{*app1, *app2}}, nil)
d.EXPECT().GetProcessableAppProj(mock.Anything).Return(newTestProject(), nil)
d.On("GetProcessableApps").Return(&v1alpha1.ApplicationList{Items: []v1alpha1.Application{*app1, *app2}}, nil)
d.On("GetProcessableAppProj", mock.Anything).Return(newTestProject(), nil)
h := &Hydrator{dependencies: d}
// Make hydrate return app-specific error
d.EXPECT().GetRepoObjs(mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(nil, nil, errors.New("hydrate error"))
d.On("GetRepoObjs", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(nil, nil, errors.New("hydrate error"))
// Expect setAppHydratorError to be called
var persistedStatus1 *v1alpha1.SourceHydratorStatus
var persistedStatus2 *v1alpha1.SourceHydratorStatus
d.EXPECT().PersistAppHydratorStatus(mock.Anything, mock.Anything).Run(func(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
switch orig.Name {
case app1.Name:
persistedStatus1 = newStatus
case app2.Name:
persistedStatus2 = newStatus
d.On("PersistAppHydratorStatus", mock.Anything, mock.Anything).Run(func(args mock.Arguments) {
if args.Get(0).(*v1alpha1.Application).Name == app1.Name {
persistedStatus1 = args.Get(1).(*v1alpha1.SourceHydratorStatus)
} else if args.Get(0).(*v1alpha1.Application).Name == app2.Name {
persistedStatus2 = args.Get(1).(*v1alpha1.SourceHydratorStatus)
}
}).Return().Twice()
@@ -582,24 +579,23 @@ func TestProcessHydrationQueueItem_HydrateFails_CommonError(t *testing.T) {
app2.Spec.SourceHydrator.SyncSource.Path = "something/else"
app2 = setTestAppPhase(app2, v1alpha1.HydrateOperationPhaseHydrating)
hydrationKey := getHydrationQueueKey(app1)
d.EXPECT().GetProcessableApps().Return(&v1alpha1.ApplicationList{Items: []v1alpha1.Application{*app1, *app2}}, nil)
d.EXPECT().GetProcessableAppProj(mock.Anything).Return(newTestProject(), nil)
d.On("GetProcessableApps").Return(&v1alpha1.ApplicationList{Items: []v1alpha1.Application{*app1, *app2}}, nil)
d.On("GetProcessableAppProj", mock.Anything).Return(newTestProject(), nil)
h := &Hydrator{dependencies: d, repoGetter: r}
d.EXPECT().GetRepoObjs(mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(nil, &repoclient.ManifestResponse{
d.On("GetRepoObjs", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(nil, &repoclient.ManifestResponse{
Revision: "abc123",
}, nil)
r.EXPECT().GetRepository(mock.Anything, mock.Anything, mock.Anything).Return(nil, errors.New("repo error"))
r.On("GetRepository", mock.Anything, mock.Anything, mock.Anything).Return(nil, errors.New("repo error"))
// Expect setAppHydratorError to be called
var persistedStatus1 *v1alpha1.SourceHydratorStatus
var persistedStatus2 *v1alpha1.SourceHydratorStatus
d.EXPECT().PersistAppHydratorStatus(mock.Anything, mock.Anything).Run(func(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
switch orig.Name {
case app1.Name:
persistedStatus1 = newStatus
case app2.Name:
persistedStatus2 = newStatus
d.On("PersistAppHydratorStatus", mock.Anything, mock.Anything).Run(func(args mock.Arguments) {
if args.Get(0).(*v1alpha1.Application).Name == app1.Name {
persistedStatus1 = args.Get(1).(*v1alpha1.SourceHydratorStatus)
} else if args.Get(0).(*v1alpha1.Application).Name == app2.Name {
persistedStatus2 = args.Get(1).(*v1alpha1.SourceHydratorStatus)
}
}).Return().Twice()
@@ -628,24 +624,24 @@ func TestProcessHydrationQueueItem_SuccessfulHydration(t *testing.T) {
cc := commitservermocks.NewCommitServiceClient(t)
app := setTestAppPhase(newTestApp("test-app"), v1alpha1.HydrateOperationPhaseHydrating)
hydrationKey := getHydrationQueueKey(app)
d.EXPECT().GetProcessableApps().Return(&v1alpha1.ApplicationList{Items: []v1alpha1.Application{*app}}, nil)
d.EXPECT().GetProcessableAppProj(mock.Anything).Return(newTestProject(), nil)
d.On("GetProcessableApps").Return(&v1alpha1.ApplicationList{Items: []v1alpha1.Application{*app}}, nil)
d.On("GetProcessableAppProj", mock.Anything).Return(newTestProject(), nil)
h := &Hydrator{dependencies: d, repoGetter: r, commitClientset: &commitservermocks.Clientset{CommitServiceClient: cc}, repoClientset: &reposervermocks.Clientset{RepoServerServiceClient: rc}}
// Expect setAppHydratorError to be called
var persistedStatus *v1alpha1.SourceHydratorStatus
d.EXPECT().PersistAppHydratorStatus(mock.Anything, mock.Anything).Run(func(_ *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
persistedStatus = newStatus
d.On("PersistAppHydratorStatus", mock.Anything, mock.Anything).Run(func(args mock.Arguments) {
persistedStatus = args.Get(1).(*v1alpha1.SourceHydratorStatus)
}).Return().Once()
d.EXPECT().RequestAppRefresh(app.Name, app.Namespace).Return(nil).Once()
d.EXPECT().GetRepoObjs(mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(nil, &repoclient.ManifestResponse{
d.On("RequestAppRefresh", app.Name, app.Namespace).Return(nil).Once()
d.On("GetRepoObjs", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(nil, &repoclient.ManifestResponse{
Revision: "abc123",
}, nil).Once()
r.EXPECT().GetRepository(mock.Anything, "https://example.com/repo", "test-project").Return(nil, nil).Once()
rc.EXPECT().GetRevisionMetadata(mock.Anything, mock.Anything).Return(nil, nil).Once()
d.EXPECT().GetWriteCredentials(mock.Anything, "https://example.com/repo", "test-project").Return(nil, nil).Once()
d.EXPECT().GetHydratorCommitMessageTemplate().Return("commit message", nil).Once()
cc.EXPECT().CommitHydratedManifests(mock.Anything, mock.Anything).Return(&commitclient.CommitHydratedManifestsResponse{HydratedSha: "def456"}, nil).Once()
r.On("GetRepository", mock.Anything, "https://example.com/repo", "test-project").Return(nil, nil).Once()
rc.On("GetRevisionMetadata", mock.Anything, mock.Anything).Return(nil, nil).Once()
d.On("GetWriteCredentials", mock.Anything, "https://example.com/repo", "test-project").Return(nil, nil).Once()
d.On("GetHydratorCommitMessageTemplate").Return("commit message", nil).Once()
cc.On("CommitHydratedManifests", mock.Anything, mock.Anything).Return(&commitclient.CommitHydratedManifestsResponse{HydratedSha: "def456"}, nil).Once()
h.ProcessHydrationQueueItem(hydrationKey)
@@ -669,7 +665,7 @@ func TestValidateApplications_ProjectError(t *testing.T) {
t.Parallel()
d := mocks.NewDependencies(t)
app := newTestApp("test-app")
d.EXPECT().GetProcessableAppProj(app).Return(nil, errors.New("project error")).Once()
d.On("GetProcessableAppProj", app).Return(nil, errors.New("project error")).Once()
h := &Hydrator{dependencies: d}
projects, errs := h.validateApplications([]*v1alpha1.Application{app})
@@ -684,7 +680,7 @@ func TestValidateApplications_SourceNotPermitted(t *testing.T) {
app := newTestApp("test-app")
proj := newTestProject()
proj.Spec.SourceRepos = []string{"not-allowed"}
d.EXPECT().GetProcessableAppProj(app).Return(proj, nil).Once()
d.On("GetProcessableAppProj", app).Return(proj, nil).Once()
h := &Hydrator{dependencies: d}
projects, errs := h.validateApplications([]*v1alpha1.Application{app})
@@ -699,7 +695,7 @@ func TestValidateApplications_RootPath(t *testing.T) {
app := newTestApp("test-app")
app.Spec.SourceHydrator.SyncSource.Path = "."
proj := newTestProject()
d.EXPECT().GetProcessableAppProj(app).Return(proj, nil).Once()
d.On("GetProcessableAppProj", app).Return(proj, nil).Once()
h := &Hydrator{dependencies: d}
projects, errs := h.validateApplications([]*v1alpha1.Application{app})
@@ -715,8 +711,8 @@ func TestValidateApplications_DuplicateDestination(t *testing.T) {
app2 := newTestApp("app2")
app2.Spec.SourceHydrator.SyncSource.Path = app1.Spec.SourceHydrator.SyncSource.Path // duplicate path
proj := newTestProject()
d.EXPECT().GetProcessableAppProj(app1).Return(proj, nil).Once()
d.EXPECT().GetProcessableAppProj(app2).Return(proj, nil).Once()
d.On("GetProcessableAppProj", app1).Return(proj, nil).Once()
d.On("GetProcessableAppProj", app2).Return(proj, nil).Once()
h := &Hydrator{dependencies: d}
projects, errs := h.validateApplications([]*v1alpha1.Application{app1, app2})
@@ -733,8 +729,8 @@ func TestValidateApplications_Success(t *testing.T) {
app2 := newTestApp("app2")
app2.Spec.SourceHydrator.SyncSource.Path = "other-path"
proj := newTestProject()
d.EXPECT().GetProcessableAppProj(app1).Return(proj, nil).Once()
d.EXPECT().GetProcessableAppProj(app2).Return(proj, nil).Once()
d.On("GetProcessableAppProj", app1).Return(proj, nil).Once()
d.On("GetProcessableAppProj", app2).Return(proj, nil).Once()
h := &Hydrator{dependencies: d}
projects, errs := h.validateApplications([]*v1alpha1.Application{app1, app2})
@@ -795,25 +791,27 @@ func TestHydrator_hydrate_Success(t *testing.T) {
readRepo := &v1alpha1.Repository{Repo: "https://example.com/repo"}
writeRepo := &v1alpha1.Repository{Repo: "https://example.com/repo"}
d.EXPECT().GetRepoObjs(mock.Anything, app1, app1.Spec.SourceHydrator.GetDrySource(), "main", proj).Return(nil, &repoclient.ManifestResponse{Revision: "sha123"}, nil)
d.EXPECT().GetRepoObjs(mock.Anything, app2, app2.Spec.SourceHydrator.GetDrySource(), "sha123", proj).Return(nil, &repoclient.ManifestResponse{Revision: "sha123"}, nil)
r.EXPECT().GetRepository(mock.Anything, readRepo.Repo, proj.Name).Return(readRepo, nil)
rc.EXPECT().GetRevisionMetadata(mock.Anything, mock.Anything).Return(&v1alpha1.RevisionMetadata{Message: "metadata"}, nil).Run(func(_ context.Context, in *repoclient.RepoServerRevisionMetadataRequest, _ ...grpc.CallOption) {
assert.Equal(t, readRepo, in.Repo)
assert.Equal(t, "sha123", in.Revision)
d.On("GetRepoObjs", mock.Anything, app1, app1.Spec.SourceHydrator.GetDrySource(), "main", proj).Return(nil, &repoclient.ManifestResponse{Revision: "sha123"}, nil)
d.On("GetRepoObjs", mock.Anything, app2, app2.Spec.SourceHydrator.GetDrySource(), "sha123", proj).Return(nil, &repoclient.ManifestResponse{Revision: "sha123"}, nil)
r.On("GetRepository", mock.Anything, readRepo.Repo, proj.Name).Return(readRepo, nil)
rc.On("GetRevisionMetadata", mock.Anything, mock.Anything).Return(&v1alpha1.RevisionMetadata{Message: "metadata"}, nil).Run(func(args mock.Arguments) {
r := args.Get(1).(*repoclient.RepoServerRevisionMetadataRequest)
assert.Equal(t, readRepo, r.Repo)
assert.Equal(t, "sha123", r.Revision)
})
d.EXPECT().GetWriteCredentials(mock.Anything, readRepo.Repo, proj.Name).Return(writeRepo, nil)
d.EXPECT().GetHydratorCommitMessageTemplate().Return("commit message", nil)
cc.EXPECT().CommitHydratedManifests(mock.Anything, mock.Anything).Return(&commitclient.CommitHydratedManifestsResponse{HydratedSha: "hydrated123"}, nil).Run(func(_ context.Context, in *commitclient.CommitHydratedManifestsRequest, _ ...grpc.CallOption) {
assert.Equal(t, "commit message", in.CommitMessage)
assert.Equal(t, "hydrated", in.SyncBranch)
assert.Equal(t, "hydrated-next", in.TargetBranch)
assert.Equal(t, "sha123", in.DrySha)
assert.Equal(t, writeRepo, in.Repo)
assert.Len(t, in.Paths, 2)
assert.Equal(t, app1.Spec.SourceHydrator.SyncSource.Path, in.Paths[0].Path)
assert.Equal(t, app2.Spec.SourceHydrator.SyncSource.Path, in.Paths[1].Path)
assert.Equal(t, "metadata", in.DryCommitMetadata.Message)
d.On("GetWriteCredentials", mock.Anything, readRepo.Repo, proj.Name).Return(writeRepo, nil)
d.On("GetHydratorCommitMessageTemplate").Return("commit message", nil)
cc.On("CommitHydratedManifests", mock.Anything, mock.Anything).Return(&commitclient.CommitHydratedManifestsResponse{HydratedSha: "hydrated123"}, nil).Run(func(args mock.Arguments) {
r := args.Get(1).(*commitclient.CommitHydratedManifestsRequest)
assert.Equal(t, "commit message", r.CommitMessage)
assert.Equal(t, "hydrated", r.SyncBranch)
assert.Equal(t, "hydrated-next", r.TargetBranch)
assert.Equal(t, "sha123", r.DrySha)
assert.Equal(t, writeRepo, r.Repo)
assert.Len(t, r.Paths, 2)
assert.Equal(t, app1.Spec.SourceHydrator.SyncSource.Path, r.Paths[0].Path)
assert.Equal(t, app2.Spec.SourceHydrator.SyncSource.Path, r.Paths[1].Path)
assert.Equal(t, "metadata", r.DryCommitMetadata.Message)
})
logCtx := log.NewEntry(log.StandardLogger())
@@ -843,7 +841,7 @@ func TestHydrator_hydrate_GetManifestsError(t *testing.T) {
proj := newTestProject()
projects := map[string]*v1alpha1.AppProject{app.Spec.Project: proj}
d.EXPECT().GetRepoObjs(mock.Anything, app, mock.Anything, mock.Anything, proj).Return(nil, nil, errors.New("manifests error"))
d.On("GetRepoObjs", mock.Anything, app, mock.Anything, mock.Anything, proj).Return(nil, nil, errors.New("manifests error"))
logCtx := log.NewEntry(log.StandardLogger())
sha, hydratedSha, errs, err := h.hydrate(logCtx, []*v1alpha1.Application{app}, projects)
@@ -873,9 +871,9 @@ func TestHydrator_hydrate_RevisionMetadataError(t *testing.T) {
proj := newTestProject()
projects := map[string]*v1alpha1.AppProject{app.Spec.Project: proj}
d.EXPECT().GetRepoObjs(mock.Anything, app, mock.Anything, mock.Anything, proj).Return(nil, &repoclient.ManifestResponse{Revision: "sha123"}, nil)
r.EXPECT().GetRepository(mock.Anything, mock.Anything, mock.Anything).Return(&v1alpha1.Repository{Repo: "https://example.com/repo"}, nil)
rc.EXPECT().GetRevisionMetadata(mock.Anything, mock.Anything).Return(nil, errors.New("metadata error"))
d.On("GetRepoObjs", mock.Anything, app, mock.Anything, mock.Anything, proj).Return(nil, &repoclient.ManifestResponse{Revision: "sha123"}, nil)
r.On("GetRepository", mock.Anything, mock.Anything, mock.Anything).Return(&v1alpha1.Repository{Repo: "https://example.com/repo"}, nil)
rc.On("GetRevisionMetadata", mock.Anything, mock.Anything).Return(nil, errors.New("metadata error"))
logCtx := log.NewEntry(log.StandardLogger())
sha, hydratedSha, errs, err := h.hydrate(logCtx, []*v1alpha1.Application{app}, projects)
@@ -905,10 +903,10 @@ func TestHydrator_hydrate_GetWriteCredentialsError(t *testing.T) {
proj := newTestProject()
projects := map[string]*v1alpha1.AppProject{app.Spec.Project: proj}
d.EXPECT().GetRepoObjs(mock.Anything, app, mock.Anything, mock.Anything, proj).Return(nil, &repoclient.ManifestResponse{Revision: "sha123"}, nil)
r.EXPECT().GetRepository(mock.Anything, mock.Anything, mock.Anything).Return(&v1alpha1.Repository{Repo: "https://example.com/repo"}, nil)
rc.EXPECT().GetRevisionMetadata(mock.Anything, mock.Anything).Return(&v1alpha1.RevisionMetadata{}, nil)
d.EXPECT().GetWriteCredentials(mock.Anything, mock.Anything, mock.Anything).Return(nil, errors.New("creds error"))
d.On("GetRepoObjs", mock.Anything, app, mock.Anything, mock.Anything, proj).Return(nil, &repoclient.ManifestResponse{Revision: "sha123"}, nil)
r.On("GetRepository", mock.Anything, mock.Anything, mock.Anything).Return(&v1alpha1.Repository{Repo: "https://example.com/repo"}, nil)
rc.On("GetRevisionMetadata", mock.Anything, mock.Anything).Return(&v1alpha1.RevisionMetadata{}, nil)
d.On("GetWriteCredentials", mock.Anything, mock.Anything, mock.Anything).Return(nil, errors.New("creds error"))
logCtx := log.NewEntry(log.StandardLogger())
sha, hydratedSha, errs, err := h.hydrate(logCtx, []*v1alpha1.Application{app}, projects)
@@ -938,11 +936,11 @@ func TestHydrator_hydrate_CommitMessageTemplateError(t *testing.T) {
proj := newTestProject()
projects := map[string]*v1alpha1.AppProject{app.Spec.Project: proj}
d.EXPECT().GetRepoObjs(mock.Anything, app, mock.Anything, mock.Anything, proj).Return(nil, &repoclient.ManifestResponse{Revision: "sha123"}, nil)
r.EXPECT().GetRepository(mock.Anything, mock.Anything, mock.Anything).Return(&v1alpha1.Repository{Repo: "https://example.com/repo"}, nil)
rc.EXPECT().GetRevisionMetadata(mock.Anything, mock.Anything).Return(&v1alpha1.RevisionMetadata{}, nil)
d.EXPECT().GetWriteCredentials(mock.Anything, mock.Anything, mock.Anything).Return(&v1alpha1.Repository{Repo: "https://example.com/repo"}, nil)
d.EXPECT().GetHydratorCommitMessageTemplate().Return("", errors.New("template error"))
d.On("GetRepoObjs", mock.Anything, app, mock.Anything, mock.Anything, proj).Return(nil, &repoclient.ManifestResponse{Revision: "sha123"}, nil)
r.On("GetRepository", mock.Anything, mock.Anything, mock.Anything).Return(&v1alpha1.Repository{Repo: "https://example.com/repo"}, nil)
rc.On("GetRevisionMetadata", mock.Anything, mock.Anything).Return(&v1alpha1.RevisionMetadata{}, nil)
d.On("GetWriteCredentials", mock.Anything, mock.Anything, mock.Anything).Return(&v1alpha1.Repository{Repo: "https://example.com/repo"}, nil)
d.On("GetHydratorCommitMessageTemplate").Return("", errors.New("template error"))
logCtx := log.NewEntry(log.StandardLogger())
sha, hydratedSha, errs, err := h.hydrate(logCtx, []*v1alpha1.Application{app}, projects)
@@ -972,11 +970,11 @@ func TestHydrator_hydrate_TemplatedCommitMessageError(t *testing.T) {
proj := newTestProject()
projects := map[string]*v1alpha1.AppProject{app.Spec.Project: proj}
d.EXPECT().GetRepoObjs(mock.Anything, app, mock.Anything, mock.Anything, proj).Return(nil, &repoclient.ManifestResponse{Revision: "sha123"}, nil)
r.EXPECT().GetRepository(mock.Anything, mock.Anything, mock.Anything).Return(&v1alpha1.Repository{Repo: "https://example.com/repo"}, nil)
rc.EXPECT().GetRevisionMetadata(mock.Anything, mock.Anything).Return(&v1alpha1.RevisionMetadata{}, nil)
d.EXPECT().GetWriteCredentials(mock.Anything, mock.Anything, mock.Anything).Return(&v1alpha1.Repository{Repo: "https://example.com/repo"}, nil)
d.EXPECT().GetHydratorCommitMessageTemplate().Return("{{ notAFunction }} template", nil)
d.On("GetRepoObjs", mock.Anything, app, mock.Anything, mock.Anything, proj).Return(nil, &repoclient.ManifestResponse{Revision: "sha123"}, nil)
r.On("GetRepository", mock.Anything, mock.Anything, mock.Anything).Return(&v1alpha1.Repository{Repo: "https://example.com/repo"}, nil)
rc.On("GetRevisionMetadata", mock.Anything, mock.Anything).Return(&v1alpha1.RevisionMetadata{}, nil)
d.On("GetWriteCredentials", mock.Anything, mock.Anything, mock.Anything).Return(&v1alpha1.Repository{Repo: "https://example.com/repo"}, nil)
d.On("GetHydratorCommitMessageTemplate").Return("{{ notAFunction }} template", nil)
logCtx := log.NewEntry(log.StandardLogger())
sha, hydratedSha, errs, err := h.hydrate(logCtx, []*v1alpha1.Application{app}, projects)
@@ -1006,12 +1004,12 @@ func TestHydrator_hydrate_CommitHydratedManifestsError(t *testing.T) {
proj := newTestProject()
projects := map[string]*v1alpha1.AppProject{app.Spec.Project: proj}
d.EXPECT().GetRepoObjs(mock.Anything, app, mock.Anything, mock.Anything, proj).Return(nil, &repoclient.ManifestResponse{Revision: "sha123"}, nil)
r.EXPECT().GetRepository(mock.Anything, mock.Anything, mock.Anything).Return(&v1alpha1.Repository{Repo: "https://example.com/repo"}, nil)
rc.EXPECT().GetRevisionMetadata(mock.Anything, mock.Anything).Return(&v1alpha1.RevisionMetadata{}, nil)
d.EXPECT().GetWriteCredentials(mock.Anything, mock.Anything, mock.Anything).Return(&v1alpha1.Repository{Repo: "https://example.com/repo"}, nil)
d.EXPECT().GetHydratorCommitMessageTemplate().Return("commit message", nil)
cc.EXPECT().CommitHydratedManifests(mock.Anything, mock.Anything).Return(nil, errors.New("commit error"))
d.On("GetRepoObjs", mock.Anything, app, mock.Anything, mock.Anything, proj).Return(nil, &repoclient.ManifestResponse{Revision: "sha123"}, nil)
r.On("GetRepository", mock.Anything, mock.Anything, mock.Anything).Return(&v1alpha1.Repository{Repo: "https://example.com/repo"}, nil)
rc.On("GetRevisionMetadata", mock.Anything, mock.Anything).Return(&v1alpha1.RevisionMetadata{}, nil)
d.On("GetWriteCredentials", mock.Anything, mock.Anything, mock.Anything).Return(&v1alpha1.Repository{Repo: "https://example.com/repo"}, nil)
d.On("GetHydratorCommitMessageTemplate").Return("commit message", nil)
cc.On("CommitHydratedManifests", mock.Anything, mock.Anything).Return(nil, errors.New("commit error"))
logCtx := log.NewEntry(log.StandardLogger())
sha, hydratedSha, errs, err := h.hydrate(logCtx, []*v1alpha1.Application{app}, projects)
@@ -1050,12 +1048,12 @@ func TestHydrator_getManifests_Success(t *testing.T) {
},
})
d.EXPECT().GetRepoObjs(mock.Anything, app, app.Spec.SourceHydrator.GetDrySource(), "sha123", proj).Return([]*unstructured.Unstructured{cm}, &repoclient.ManifestResponse{
d.On("GetRepoObjs", mock.Anything, app, app.Spec.SourceHydrator.GetDrySource(), "sha123", proj).Return([]*unstructured.Unstructured{cm}, &repoclient.ManifestResponse{
Revision: "sha123",
Commands: []string{"cmd1", "cmd2"},
}, nil)
rev, pathDetails, err := h.getManifests(t.Context(), app, "sha123", proj)
rev, pathDetails, err := h.getManifests(context.Background(), app, "sha123", proj)
require.NoError(t, err)
assert.Equal(t, "sha123", rev)
assert.Equal(t, app.Spec.SourceHydrator.SyncSource.Path, pathDetails.Path)
@@ -1071,9 +1069,9 @@ func TestHydrator_getManifests_EmptyTargetRevision(t *testing.T) {
app := newTestApp("test-app")
proj := newTestProject()
d.EXPECT().GetRepoObjs(mock.Anything, app, mock.Anything, "main", proj).Return([]*unstructured.Unstructured{}, &repoclient.ManifestResponse{Revision: "sha123"}, nil)
d.On("GetRepoObjs", mock.Anything, app, mock.Anything, "main", proj).Return([]*unstructured.Unstructured{}, &repoclient.ManifestResponse{Revision: "sha123"}, nil)
rev, pathDetails, err := h.getManifests(t.Context(), app, "", proj)
rev, pathDetails, err := h.getManifests(context.Background(), app, "", proj)
require.NoError(t, err)
assert.Equal(t, "sha123", rev)
assert.NotNil(t, pathDetails)
@@ -1086,9 +1084,9 @@ func TestHydrator_getManifests_GetRepoObjsError(t *testing.T) {
app := newTestApp("test-app")
proj := newTestProject()
d.EXPECT().GetRepoObjs(mock.Anything, app, mock.Anything, "main", proj).Return(nil, nil, errors.New("repo error"))
d.On("GetRepoObjs", mock.Anything, app, mock.Anything, "main", proj).Return(nil, nil, errors.New("repo error"))
rev, pathDetails, err := h.getManifests(t.Context(), app, "main", proj)
rev, pathDetails, err := h.getManifests(context.Background(), app, "main", proj)
require.Error(t, err)
assert.Contains(t, err.Error(), "repo error")
assert.Empty(t, rev)

@@ -43,7 +43,7 @@ func TestGetRepoObjs(t *testing.T) {
},
}
ctrl := newFakeControllerWithResync(t.Context(), &data, time.Minute, nil, errors.New("this should not be called"))
ctrl := newFakeControllerWithResync(&data, time.Minute, nil, errors.New("this should not be called"))
source := app.Spec.GetSource()
source.RepoURL = "oci://example.com/argo/argo-cd"
@@ -92,7 +92,7 @@ func TestGetHydratorCommitMessageTemplate_WhenTemplateisNotDefined_FallbackToDef
},
}
ctrl := newFakeControllerWithResync(t.Context(), &data, time.Minute, nil, errors.New("this should not be called"))
ctrl := newFakeControllerWithResync(&data, time.Minute, nil, errors.New("this should not be called"))
tmpl, err := ctrl.GetHydratorCommitMessageTemplate()
require.NoError(t, err)
@@ -115,7 +115,7 @@ func TestGetHydratorCommitMessageTemplate(t *testing.T) {
configMapData: cm.Data,
}
ctrl := newFakeControllerWithResync(t.Context(), &data, time.Minute, nil, errors.New("this should not be called"))
ctrl := newFakeControllerWithResync(&data, time.Minute, nil, errors.New("this should not be called"))
tmpl, err := ctrl.GetHydratorCommitMessageTemplate()
require.NoError(t, err)

@@ -13,12 +13,12 @@ import (
)
func TestMetricClusterConnectivity(t *testing.T) {
db := &dbmocks.ArgoDB{}
db := dbmocks.ArgoDB{}
cluster1 := v1alpha1.Cluster{Name: "cluster1", Server: "server1", Labels: map[string]string{"env": "dev", "team": "team1"}}
cluster2 := v1alpha1.Cluster{Name: "cluster2", Server: "server2", Labels: map[string]string{"env": "staging", "team": "team2"}}
cluster3 := v1alpha1.Cluster{Name: "cluster3", Server: "server3", Labels: map[string]string{"env": "production", "team": "team3"}}
clusterList := &v1alpha1.ClusterList{Items: []v1alpha1.Cluster{cluster1, cluster2, cluster3}}
db.EXPECT().ListClusters(mock.Anything).Return(clusterList, nil)
db.On("ListClusters", mock.Anything).Return(clusterList, nil)
type testCases struct {
testCombination

@@ -263,8 +263,8 @@ func newFakeApp(fakeAppYAML string) *argoappv1.Application {
return &app
}
func newFakeLister(ctx context.Context, fakeAppYAMLs ...string) (context.CancelFunc, applister.ApplicationLister) {
ctx, cancel := context.WithCancel(ctx)
func newFakeLister(fakeAppYAMLs ...string) (context.CancelFunc, applister.ApplicationLister) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
var fakeApps []runtime.Object
for _, appYAML := range fakeAppYAMLs {
@@ -319,11 +319,11 @@ func testMetricServer(t *testing.T, fakeAppYAMLs []string, expectedResponse stri
func runTest(t *testing.T, cfg TestMetricServerConfig) {
t.Helper()
cancel, appLister := newFakeLister(t.Context(), cfg.FakeAppYAMLs...)
cancel, appLister := newFakeLister(cfg.FakeAppYAMLs...)
defer cancel()
mockDB := mocks.NewArgoDB(t)
mockDB.EXPECT().GetClusterServersByName(mock.Anything, "cluster1").Return([]string{"https://localhost:6443"}, nil).Maybe()
mockDB.EXPECT().GetCluster(mock.Anything, "https://localhost:6443").Return(&argoappv1.Cluster{Name: "cluster1", Server: "https://localhost:6443"}, nil).Maybe()
mockDB.On("GetClusterServersByName", mock.Anything, "cluster1").Return([]string{"https://localhost:6443"}, nil)
mockDB.On("GetCluster", mock.Anything, "https://localhost:6443").Return(&argoappv1.Cluster{Name: "cluster1", Server: "https://localhost:6443"}, nil)
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, cfg.AppLabels, cfg.AppConditions, mockDB)
require.NoError(t, err)
@@ -333,7 +333,7 @@ func runTest(t *testing.T, cfg TestMetricServerConfig) {
metricsServ.registry.MustRegister(collector)
}
req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
require.NoError(t, err)
rr := httptest.NewRecorder()
metricsServ.Handler.ServeHTTP(rr, req)
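
Throughout the metrics tests, http.NewRequestWithContext(t.Context(), ...) becomes plain http.NewRequest, which carries context.Background(). The scrape pattern itself is unchanged: build a GET /metrics request, serve it into an httptest recorder, and assert on the body. A self-contained sketch with a stub handler standing in for metricsServ.Handler:

package example

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"strings"
	"testing"
)

func TestMetricsScrapeSketch(t *testing.T) {
	// Stub handler; the real tests serve the metrics server's registry.
	handler := http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
		fmt.Fprintln(w, `argocd_app_info{name="my-app"} 1`)
	})

	// Plain NewRequest carries context.Background(); the NewRequestWithContext
	// form ties the request to the test's lifetime instead.
	req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
	if err != nil {
		t.Fatal(err)
	}

	rr := httptest.NewRecorder()
	handler.ServeHTTP(rr, req)

	if !strings.Contains(rr.Body.String(), "argocd_app_info") {
		t.Fatalf("expected metric in scrape output, got: %s", rr.Body.String())
	}
}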
@@ -472,7 +472,7 @@ argocd_app_condition{condition="ExcludedResourceWarning",name="my-app-4",namespa
}
func TestMetricsSyncCounter(t *testing.T) {
cancel, appLister := newFakeLister(t.Context())
cancel, appLister := newFakeLister()
defer cancel()
mockDB := mocks.NewArgoDB(t)
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{}, []string{}, mockDB)
@@ -493,7 +493,7 @@ argocd_app_sync_total{dest_server="https://localhost:6443",dry_run="false",name=
metricsServ.IncSync(fakeApp, "https://localhost:6443", &argoappv1.OperationState{Phase: common.OperationSucceeded})
metricsServ.IncSync(fakeApp, "https://localhost:6443", &argoappv1.OperationState{Phase: common.OperationSucceeded})
req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
require.NoError(t, err)
rr := httptest.NewRecorder()
metricsServ.Handler.ServeHTTP(rr, req)
@@ -526,7 +526,7 @@ func assertMetricsNotPrinted(t *testing.T, expectedLines, body string) {
}
func TestMetricsSyncDuration(t *testing.T) {
cancel, appLister := newFakeLister(t.Context())
cancel, appLister := newFakeLister()
defer cancel()
mockDB := mocks.NewArgoDB(t)
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{}, []string{}, mockDB)
@@ -536,7 +536,7 @@ func TestMetricsSyncDuration(t *testing.T) {
fakeAppOperationRunning := newFakeApp(fakeAppOperationRunning)
metricsServ.IncAppSyncDuration(fakeAppOperationRunning, "https://localhost:6443", fakeAppOperationRunning.Status.OperationState)
req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
require.NoError(t, err)
rr := httptest.NewRecorder()
metricsServ.Handler.ServeHTTP(rr, req)
@@ -550,7 +550,7 @@ func TestMetricsSyncDuration(t *testing.T) {
fakeAppOperationFinished := newFakeApp(fakeAppOperationFinished)
metricsServ.IncAppSyncDuration(fakeAppOperationFinished, "https://localhost:6443", fakeAppOperationFinished.Status.OperationState)
req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
require.NoError(t, err)
rr := httptest.NewRecorder()
metricsServ.Handler.ServeHTTP(rr, req)
@@ -567,7 +567,7 @@ argocd_app_sync_duration_seconds_total{dest_server="https://localhost:6443",name
}
func TestReconcileMetrics(t *testing.T) {
cancel, appLister := newFakeLister(t.Context())
cancel, appLister := newFakeLister()
defer cancel()
mockDB := mocks.NewArgoDB(t)
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{}, []string{}, mockDB)
@@ -590,7 +590,7 @@ argocd_app_reconcile_count{dest_server="https://localhost:6443",namespace="argoc
fakeApp := newFakeApp(fakeApp)
metricsServ.IncReconcile(fakeApp, "https://localhost:6443", 5*time.Second)
req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
require.NoError(t, err)
rr := httptest.NewRecorder()
metricsServ.Handler.ServeHTTP(rr, req)
@@ -601,7 +601,7 @@ argocd_app_reconcile_count{dest_server="https://localhost:6443",namespace="argoc
}
func TestOrphanedResourcesMetric(t *testing.T) {
cancel, appLister := newFakeLister(t.Context())
cancel, appLister := newFakeLister()
defer cancel()
mockDB := mocks.NewArgoDB(t)
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{}, []string{}, mockDB)
@@ -616,7 +616,7 @@ argocd_app_orphaned_resources_count{name="my-app-4",namespace="argocd",project="
numOrphanedResources := 1
metricsServ.SetOrphanedResourcesMetric(app, numOrphanedResources)
req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
require.NoError(t, err)
rr := httptest.NewRecorder()
metricsServ.Handler.ServeHTTP(rr, req)
@@ -627,7 +627,7 @@ argocd_app_orphaned_resources_count{name="my-app-4",namespace="argocd",project="
}
func TestMetricsReset(t *testing.T) {
cancel, appLister := newFakeLister(t.Context())
cancel, appLister := newFakeLister()
defer cancel()
mockDB := mocks.NewArgoDB(t)
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{}, []string{}, mockDB)
@@ -641,7 +641,7 @@ argocd_app_sync_total{dest_server="https://localhost:6443",dry_run="false",name=
argocd_app_sync_total{dest_server="https://localhost:6443",dry_run="false",name="my-app",namespace="argocd",phase="Succeeded",project="important-project"} 2
`
req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
require.NoError(t, err)
rr := httptest.NewRecorder()
metricsServ.Handler.ServeHTTP(rr, req)
@@ -652,7 +652,7 @@ argocd_app_sync_total{dest_server="https://localhost:6443",dry_run="false",name=
err = metricsServ.SetExpiration(time.Second)
require.NoError(t, err)
time.Sleep(2 * time.Second)
req, err = http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
req, err = http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
require.NoError(t, err)
rr = httptest.NewRecorder()
metricsServ.Handler.ServeHTTP(rr, req)
@@ -665,7 +665,7 @@ argocd_app_sync_total{dest_server="https://localhost:6443",dry_run="false",name=
}
func TestWorkqueueMetrics(t *testing.T) {
cancel, appLister := newFakeLister(t.Context())
cancel, appLister := newFakeLister()
defer cancel()
mockDB := mocks.NewArgoDB(t)
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{}, []string{}, mockDB)
@@ -685,7 +685,7 @@ workqueue_unfinished_work_seconds{controller="test",name="test"}
`
workqueue.NewNamed("test")
req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
require.NoError(t, err)
rr := httptest.NewRecorder()
metricsServ.Handler.ServeHTTP(rr, req)
@@ -696,7 +696,7 @@ workqueue_unfinished_work_seconds{controller="test",name="test"}
}
func TestGoMetrics(t *testing.T) {
cancel, appLister := newFakeLister(t.Context())
cancel, appLister := newFakeLister()
defer cancel()
mockDB := mocks.NewArgoDB(t)
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{}, []string{}, mockDB)
@@ -718,7 +718,7 @@ go_memstats_sys_bytes
go_threads
`
req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
req, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
require.NoError(t, err)
rr := httptest.NewRecorder()
metricsServ.Handler.ServeHTTP(rr, req)
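Both sides of the hunks above build the same GET request against the /metrics endpoint; the only difference is whether the request carries the test's context. A minimal sketch of the two constructors, using a hypothetical stand-in handler rather than the real Argo CD metrics server:

```go
package metrics_test

import (
	"net/http"
	"net/http/httptest"
	"testing"

	"github.com/stretchr/testify/require"
)

func TestMetricsRequestSketch(t *testing.T) {
	// Hypothetical handler standing in for metricsServ.Handler.
	handler := http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
		_, _ = w.Write([]byte("argocd_app_info 1\n"))
	})

	// Context-bound form (t.Context() requires Go 1.24+): the request is
	// cancelled automatically when the test finishes.
	req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, "/metrics", http.NoBody)
	require.NoError(t, err)

	// Plain form: equivalent for a synchronous httptest round trip.
	req2, err := http.NewRequest(http.MethodGet, "/metrics", http.NoBody)
	require.NoError(t, err)

	for _, r := range []*http.Request{req, req2} {
		rr := httptest.NewRecorder()
		handler.ServeHTTP(rr, r)
		require.Equal(t, http.StatusOK, rr.Code)
	}
}
```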

View File

@@ -28,7 +28,7 @@ import (
func TestGetShardByID_NotEmptyID(t *testing.T) {
db := &dbmocks.ArgoDB{}
replicasCount := 1
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Maybe()
db.On("GetApplicationControllerReplicas").Return(replicasCount)
assert.Equal(t, 0, LegacyDistributionFunction(replicasCount)(&v1alpha1.Cluster{ID: "1"}))
assert.Equal(t, 0, LegacyDistributionFunction(replicasCount)(&v1alpha1.Cluster{ID: "2"}))
assert.Equal(t, 0, LegacyDistributionFunction(replicasCount)(&v1alpha1.Cluster{ID: "3"}))
@@ -38,7 +38,7 @@ func TestGetShardByID_NotEmptyID(t *testing.T) {
func TestGetShardByID_EmptyID(t *testing.T) {
db := &dbmocks.ArgoDB{}
replicasCount := 1
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Maybe()
db.On("GetApplicationControllerReplicas").Return(replicasCount)
distributionFunction := LegacyDistributionFunction
shard := distributionFunction(replicasCount)(&v1alpha1.Cluster{})
assert.Equal(t, 0, shard)
@@ -46,7 +46,7 @@ func TestGetShardByID_EmptyID(t *testing.T) {
func TestGetShardByID_NoReplicas(t *testing.T) {
db := &dbmocks.ArgoDB{}
db.EXPECT().GetApplicationControllerReplicas().Return(0).Maybe()
db.On("GetApplicationControllerReplicas").Return(0)
distributionFunction := LegacyDistributionFunction
shard := distributionFunction(0)(&v1alpha1.Cluster{})
assert.Equal(t, -1, shard)
@@ -54,7 +54,7 @@ func TestGetShardByID_NoReplicas(t *testing.T) {
func TestGetShardByID_NoReplicasUsingHashDistributionFunction(t *testing.T) {
db := &dbmocks.ArgoDB{}
db.EXPECT().GetApplicationControllerReplicas().Return(0).Maybe()
db.On("GetApplicationControllerReplicas").Return(0)
distributionFunction := LegacyDistributionFunction
shard := distributionFunction(0)(&v1alpha1.Cluster{})
assert.Equal(t, -1, shard)
@@ -63,7 +63,7 @@ func TestGetShardByID_NoReplicasUsingHashDistributionFunction(t *testing.T) {
func TestGetShardByID_NoReplicasUsingHashDistributionFunctionWithClusters(t *testing.T) {
clusters, db, cluster1, cluster2, cluster3, cluster4, cluster5 := createTestClusters()
// Test with replicas set to 0
db.EXPECT().GetApplicationControllerReplicas().Return(0).Maybe()
db.On("GetApplicationControllerReplicas").Return(0)
t.Setenv(common.EnvControllerShardingAlgorithm, common.RoundRobinShardingAlgorithm)
distributionFunction := RoundRobinDistributionFunction(clusters, 0)
assert.Equal(t, -1, distributionFunction(nil))
@@ -91,7 +91,7 @@ func TestGetClusterFilterLegacy(t *testing.T) {
// shardIndex := 1 // ensuring that a shard with index 1 will process all the clusters with an "even" id (2,4,6,...)
clusterAccessor, db, cluster1, cluster2, cluster3, cluster4, _ := createTestClusters()
replicasCount := 2
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Maybe()
db.On("GetApplicationControllerReplicas").Return(replicasCount)
t.Setenv(common.EnvControllerShardingAlgorithm, common.LegacyShardingAlgorithm)
distributionFunction := RoundRobinDistributionFunction(clusterAccessor, replicasCount)
assert.Equal(t, 0, distributionFunction(nil))
@@ -109,7 +109,7 @@ func TestGetClusterFilterUnknown(t *testing.T) {
os.Unsetenv(common.EnvControllerShardingAlgorithm)
t.Setenv(common.EnvControllerShardingAlgorithm, "unknown")
replicasCount := 2
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Maybe()
db.On("GetApplicationControllerReplicas").Return(replicasCount)
distributionFunction := GetDistributionFunction(clusterAccessor, appAccessor, "unknown", replicasCount)
assert.Equal(t, 0, distributionFunction(nil))
assert.Equal(t, 0, distributionFunction(&cluster1))
@@ -124,7 +124,7 @@ func TestLegacyGetClusterFilterWithFixedShard(t *testing.T) {
clusterAccessor, db, cluster1, cluster2, cluster3, cluster4, _ := createTestClusters()
appAccessor, _, _, _, _, _ := createTestApps()
replicasCount := 5
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Maybe()
db.On("GetApplicationControllerReplicas").Return(replicasCount)
filter := GetDistributionFunction(clusterAccessor, appAccessor, common.DefaultShardingAlgorithm, replicasCount)
assert.Equal(t, 0, filter(nil))
assert.Equal(t, 4, filter(&cluster1))
@@ -151,7 +151,7 @@ func TestRoundRobinGetClusterFilterWithFixedShard(t *testing.T) {
clusterAccessor, db, cluster1, cluster2, cluster3, cluster4, _ := createTestClusters()
appAccessor, _, _, _, _, _ := createTestApps()
replicasCount := 4
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Maybe()
db.On("GetApplicationControllerReplicas").Return(replicasCount)
filter := GetDistributionFunction(clusterAccessor, appAccessor, common.RoundRobinShardingAlgorithm, replicasCount)
assert.Equal(t, 0, filter(nil))
@@ -182,7 +182,7 @@ func TestGetShardByIndexModuloReplicasCountDistributionFunction2(t *testing.T) {
t.Run("replicas set to 1", func(t *testing.T) {
replicasCount := 1
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Once()
db.On("GetApplicationControllerReplicas").Return(replicasCount).Once()
distributionFunction := RoundRobinDistributionFunction(clusters, replicasCount)
assert.Equal(t, 0, distributionFunction(nil))
assert.Equal(t, 0, distributionFunction(&cluster1))
@@ -194,7 +194,7 @@ func TestGetShardByIndexModuloReplicasCountDistributionFunction2(t *testing.T) {
t.Run("replicas set to 2", func(t *testing.T) {
replicasCount := 2
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Once()
db.On("GetApplicationControllerReplicas").Return(replicasCount).Once()
distributionFunction := RoundRobinDistributionFunction(clusters, replicasCount)
assert.Equal(t, 0, distributionFunction(nil))
assert.Equal(t, 0, distributionFunction(&cluster1))
@@ -206,7 +206,7 @@ func TestGetShardByIndexModuloReplicasCountDistributionFunction2(t *testing.T) {
t.Run("replicas set to 3", func(t *testing.T) {
replicasCount := 3
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Once()
db.On("GetApplicationControllerReplicas").Return(replicasCount).Once()
distributionFunction := RoundRobinDistributionFunction(clusters, replicasCount)
assert.Equal(t, 0, distributionFunction(nil))
assert.Equal(t, 0, distributionFunction(&cluster1))
@@ -232,7 +232,7 @@ func TestGetShardByIndexModuloReplicasCountDistributionFunctionWhenClusterNumber
t.Setenv(common.EnvControllerReplicas, strconv.Itoa(replicasCount))
_, db, _, _, _, _, _ := createTestClusters()
clusterAccessor := func() []*v1alpha1.Cluster { return clusterPointers }
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Maybe()
db.On("GetApplicationControllerReplicas").Return(replicasCount)
distributionFunction := RoundRobinDistributionFunction(clusterAccessor, replicasCount)
for i, c := range clusterPointers {
assert.Equal(t, i%2, distributionFunction(c))
@@ -240,7 +240,7 @@ func TestGetShardByIndexModuloReplicasCountDistributionFunctionWhenClusterNumber
}
func TestGetShardByIndexModuloReplicasCountDistributionFunctionWhenClusterIsAddedAndRemoved(t *testing.T) {
db := &dbmocks.ArgoDB{}
db := dbmocks.ArgoDB{}
cluster1 := createCluster("cluster1", "1")
cluster2 := createCluster("cluster2", "2")
cluster3 := createCluster("cluster3", "3")
@@ -252,10 +252,10 @@ func TestGetShardByIndexModuloReplicasCountDistributionFunctionWhenClusterIsAdde
clusterAccessor := getClusterAccessor(clusters)
clusterList := &v1alpha1.ClusterList{Items: []v1alpha1.Cluster{cluster1, cluster2, cluster3, cluster4, cluster5}}
db.EXPECT().ListClusters(mock.Anything).Return(clusterList, nil)
db.On("ListClusters", mock.Anything).Return(clusterList, nil)
// Test with replicas set to 2
replicasCount := 2
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount)
db.On("GetApplicationControllerReplicas").Return(replicasCount)
distributionFunction := RoundRobinDistributionFunction(clusterAccessor, replicasCount)
assert.Equal(t, 0, distributionFunction(nil))
assert.Equal(t, 0, distributionFunction(&cluster1))
@@ -277,7 +277,7 @@ func TestGetShardByIndexModuloReplicasCountDistributionFunctionWhenClusterIsAdde
}
func TestConsistentHashingWhenClusterIsAddedAndRemoved(t *testing.T) {
db := &dbmocks.ArgoDB{}
db := dbmocks.ArgoDB{}
clusterCount := 133
prefix := "cluster"
@@ -290,10 +290,10 @@ func TestConsistentHashingWhenClusterIsAddedAndRemoved(t *testing.T) {
clusterAccessor := getClusterAccessor(clusters)
appAccessor, _, _, _, _, _ := createTestApps()
clusterList := &v1alpha1.ClusterList{Items: clusters}
db.EXPECT().ListClusters(mock.Anything).Return(clusterList, nil)
db.On("ListClusters", mock.Anything).Return(clusterList, nil)
// Test with replicas set to 3
replicasCount := 3
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount)
db.On("GetApplicationControllerReplicas").Return(replicasCount)
distributionFunction := ConsistentHashingWithBoundedLoadsDistributionFunction(clusterAccessor, appAccessor, replicasCount)
assert.Equal(t, 0, distributionFunction(nil))
distributionMap := map[int]int{}
@@ -347,32 +347,32 @@ func TestConsistentHashingWhenClusterIsAddedAndRemoved(t *testing.T) {
}
func TestConsistentHashingWhenClusterWithZeroReplicas(t *testing.T) {
db := &dbmocks.ArgoDB{}
db := dbmocks.ArgoDB{}
clusters := []v1alpha1.Cluster{createCluster("cluster-01", "01")}
clusterAccessor := getClusterAccessor(clusters)
clusterList := &v1alpha1.ClusterList{Items: clusters}
db.EXPECT().ListClusters(mock.Anything).Return(clusterList, nil)
db.On("ListClusters", mock.Anything).Return(clusterList, nil)
appAccessor, _, _, _, _, _ := createTestApps()
// Test with replicas set to 0
replicasCount := 0
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount)
db.On("GetApplicationControllerReplicas").Return(replicasCount)
distributionFunction := ConsistentHashingWithBoundedLoadsDistributionFunction(clusterAccessor, appAccessor, replicasCount)
assert.Equal(t, -1, distributionFunction(nil))
}
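Every zero-replica case above expects shard -1, and the non-zero cases distribute cluster IDs across the replicas. A sketch of that contract, assuming the legacy algorithm hashes the cluster ID modulo the replica count (the production LegacyDistributionFunction may hash differently):

```go
package sharding

import "hash/fnv"

// legacyShard mirrors the behaviour the assertions above pin down:
// zero replicas means no shard (-1); otherwise the cluster ID maps to
// a shard by hashing modulo the replica count.
func legacyShard(clusterID string, replicas int) int {
	if replicas == 0 {
		return -1
	}
	h := fnv.New32a()
	_, _ = h.Write([]byte(clusterID))
	return int(h.Sum32() % uint32(replicas))
}
```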
func TestConsistentHashingWhenClusterWithFixedShard(t *testing.T) {
db := &dbmocks.ArgoDB{}
db := dbmocks.ArgoDB{}
var fixedShard int64 = 1
cluster := &v1alpha1.Cluster{ID: "1", Shard: &fixedShard}
clusters := []v1alpha1.Cluster{*cluster}
clusterAccessor := getClusterAccessor(clusters)
clusterList := &v1alpha1.ClusterList{Items: clusters}
db.EXPECT().ListClusters(mock.Anything).Return(clusterList, nil)
db.On("ListClusters", mock.Anything).Return(clusterList, nil)
// Test with replicas set to 5
replicasCount := 5
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount)
db.On("GetApplicationControllerReplicas").Return(replicasCount)
appAccessor, _, _, _, _, _ := createTestApps()
distributionFunction := ConsistentHashingWithBoundedLoadsDistributionFunction(clusterAccessor, appAccessor, replicasCount)
assert.Equal(t, fixedShard, int64(distributionFunction(cluster)))
@@ -381,7 +381,7 @@ func TestConsistentHashingWhenClusterWithFixedShard(t *testing.T) {
func TestGetShardByIndexModuloReplicasCountDistributionFunction(t *testing.T) {
clusters, db, cluster1, cluster2, _, _, _ := createTestClusters()
replicasCount := 2
db.EXPECT().GetApplicationControllerReplicas().Return(replicasCount).Maybe()
db.On("GetApplicationControllerReplicas").Return(replicasCount)
distributionFunction := RoundRobinDistributionFunction(clusters, replicasCount)
// Test that the function returns the correct shard for cluster1 and cluster2
@@ -419,7 +419,7 @@ func TestInferShard(t *testing.T) {
}
func createTestClusters() (clusterAccessor, *dbmocks.ArgoDB, v1alpha1.Cluster, v1alpha1.Cluster, v1alpha1.Cluster, v1alpha1.Cluster, v1alpha1.Cluster) {
db := &dbmocks.ArgoDB{}
db := dbmocks.ArgoDB{}
cluster1 := createCluster("cluster1", "1")
cluster2 := createCluster("cluster2", "2")
cluster3 := createCluster("cluster3", "3")
@@ -428,10 +428,10 @@ func createTestClusters() (clusterAccessor, *dbmocks.ArgoDB, v1alpha1.Cluster, v
clusters := []v1alpha1.Cluster{cluster1, cluster2, cluster3, cluster4, cluster5}
db.EXPECT().ListClusters(mock.Anything).Return(&v1alpha1.ClusterList{Items: []v1alpha1.Cluster{
db.On("ListClusters", mock.Anything).Return(&v1alpha1.ClusterList{Items: []v1alpha1.Cluster{
cluster1, cluster2, cluster3, cluster4, cluster5,
}}, nil)
return getClusterAccessor(clusters), db, cluster1, cluster2, cluster3, cluster4, cluster5
return getClusterAccessor(clusters), &db, cluster1, cluster2, cluster3, cluster4, cluster5
}
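The hunks in this file toggle between testify's string-based expectations (db.On) and mockery's generated, type-checked expecter (db.EXPECT, with cardinality helpers like Once() and Maybe()). A minimal sketch of both styles side by side; the mocks import path is assumed from the surrounding code:

```go
package sharding_test

import (
	"testing"

	"github.com/stretchr/testify/assert"

	dbmocks "github.com/argoproj/argo-cd/v3/util/db/mocks"
)

func TestExpectationStylesSketch(t *testing.T) {
	// testify style: the method is named by a string, so a typo only
	// surfaces at runtime.
	onStyle := &dbmocks.ArgoDB{}
	onStyle.On("GetApplicationControllerReplicas").Return(3)
	assert.Equal(t, 3, onStyle.GetApplicationControllerReplicas())

	// mockery expecter style: compile-time checked; Maybe() marks the
	// call optional so an unmet expectation does not fail the test.
	expectStyle := &dbmocks.ArgoDB{}
	expectStyle.EXPECT().GetApplicationControllerReplicas().Return(3).Maybe()
	assert.Equal(t, 3, expectStyle.GetApplicationControllerReplicas())
}
```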
func getClusterAccessor(clusters []v1alpha1.Cluster) clusterAccessor {

View File

@@ -16,14 +16,14 @@ import (
func TestLargeShuffle(t *testing.T) {
t.Skip()
db := &dbmocks.ArgoDB{}
db := dbmocks.ArgoDB{}
clusterList := &v1alpha1.ClusterList{Items: []v1alpha1.Cluster{}}
for i := 0; i < math.MaxInt/4096; i += 256 {
// fmt.Fprintf(os.Stdout, "%d", i)
cluster := createCluster(fmt.Sprintf("cluster-%d", i), strconv.Itoa(i))
clusterList.Items = append(clusterList.Items, cluster)
}
db.EXPECT().ListClusters(mock.Anything).Return(clusterList, nil)
db.On("ListClusters", mock.Anything).Return(clusterList, nil)
clusterAccessor := getClusterAccessor(clusterList.Items)
// Test with replicas set to 256
replicasCount := 256
@@ -36,7 +36,7 @@ func TestLargeShuffle(t *testing.T) {
func TestShuffle(t *testing.T) {
t.Skip()
db := &dbmocks.ArgoDB{}
db := dbmocks.ArgoDB{}
cluster1 := createCluster("cluster1", "10")
cluster2 := createCluster("cluster2", "20")
cluster3 := createCluster("cluster3", "30")
@@ -46,7 +46,7 @@ func TestShuffle(t *testing.T) {
cluster25 := createCluster("cluster6", "25")
clusterList := &v1alpha1.ClusterList{Items: []v1alpha1.Cluster{cluster1, cluster2, cluster3, cluster4, cluster5, cluster6}}
db.EXPECT().ListClusters(mock.Anything).Return(clusterList, nil)
db.On("ListClusters", mock.Anything).Return(clusterList, nil)
clusterAccessor := getClusterAccessor(clusterList.Items)
// Test with replicas set to 3
t.Setenv(common.EnvControllerReplicas, "3")

View File

@@ -41,18 +41,13 @@ import (
"github.com/argoproj/argo-cd/v3/util/argo/normalizers"
appstatecache "github.com/argoproj/argo-cd/v3/util/cache/appstate"
"github.com/argoproj/argo-cd/v3/util/db"
"github.com/argoproj/argo-cd/v3/util/env"
"github.com/argoproj/argo-cd/v3/util/gpg"
utilio "github.com/argoproj/argo-cd/v3/util/io"
"github.com/argoproj/argo-cd/v3/util/settings"
"github.com/argoproj/argo-cd/v3/util/stats"
)
var (
ErrCompareStateRepo = errors.New("failed to get repo objects")
processManifestGeneratePathsEnabled = env.ParseBoolFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_PROCESS_MANIFEST_GENERATE_PATHS", true)
)
var ErrCompareStateRepo = errors.New("failed to get repo objects")
type resourceInfoProviderStub struct{}
@@ -75,7 +70,7 @@ type managedResource struct {
// AppStateManager defines methods which allow to compare application spec and actual application state.
type AppStateManager interface {
CompareAppState(app *v1alpha1.Application, project *v1alpha1.AppProject, revisions []string, sources []v1alpha1.ApplicationSource, noCache, noRevisionCache bool, localObjects []string, hasMultipleSources bool) (*comparisonResult, error)
CompareAppState(app *v1alpha1.Application, project *v1alpha1.AppProject, revisions []string, sources []v1alpha1.ApplicationSource, noCache bool, noRevisionCache bool, localObjects []string, hasMultipleSources bool) (*comparisonResult, error)
SyncAppState(app *v1alpha1.Application, project *v1alpha1.AppProject, state *v1alpha1.OperationState)
GetRepoObjs(ctx context.Context, app *v1alpha1.Application, sources []v1alpha1.ApplicationSource, appLabelKey string, revisions []string, noCache, noRevisionCache, verifySignature bool, proj *v1alpha1.AppProject, sendRuntimeState bool) ([]*unstructured.Unstructured, []*apiclient.ManifestResponse, bool, error)
}
@@ -258,21 +253,14 @@ func (m *appStateManager) GetRepoObjs(ctx context.Context, app *v1alpha1.Applica
appNamespace = ""
}
updateRevisions := processManifestGeneratePathsEnabled &&
// updating revisions result is not required if automated sync is not enabled
app.Spec.SyncPolicy != nil && app.Spec.SyncPolicy.Automated != nil &&
// using updating revisions gains performance only if manifest generation is required.
// just reading pre-generated manifests is comparable to updating revisions time-wise
app.Status.SourceType != v1alpha1.ApplicationSourceTypeDirectory
if updateRevisions && repo.Depth == 0 && !source.IsHelm() && !source.IsOCI() && syncedRevision != "" && syncedRevision != revision && keyManifestGenerateAnnotationExists && keyManifestGenerateAnnotationVal != "" {
if !source.IsHelm() && !source.IsOCI() && syncedRevision != "" && keyManifestGenerateAnnotationExists && keyManifestGenerateAnnotationVal != "" {
// Validate the manifest-generate-path annotation to avoid generating manifests if it has not changed.
updateRevisionResult, err := repoClient.UpdateRevisionForPaths(ctx, &apiclient.UpdateRevisionForPathsRequest{
Repo: repo,
Revision: revision,
SyncedRevision: syncedRevision,
NoRevisionCache: noRevisionCache,
Paths: path.GetAppRefreshPaths(app),
Paths: path.GetSourceRefreshPaths(app, source),
AppLabelKey: appLabelKey,
AppName: app.InstanceName(m.namespace),
Namespace: appNamespace,
@@ -362,7 +350,7 @@ func (m *appStateManager) GetRepoObjs(ctx context.Context, app *v1alpha1.Applica
}
// ResolveGitRevision will resolve the given revision to a full commit SHA. Only works for git.
func (m *appStateManager) ResolveGitRevision(repoURL, revision string) (string, error) {
func (m *appStateManager) ResolveGitRevision(repoURL string, revision string) (string, error) {
conn, repoClient, err := m.repoClientset.NewRepoServerClient()
if err != nil {
return "", fmt.Errorf("failed to connect to repo server: %w", err)
@@ -541,7 +529,7 @@ func isManagedNamespace(ns *unstructured.Unstructured, app *v1alpha1.Application
// CompareAppState compares application git state to the live app state, using the specified
// revision and supplied source. If revision or overrides are empty, then compares against
// revision and overrides in the app spec.
func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1alpha1.AppProject, revisions []string, sources []v1alpha1.ApplicationSource, noCache, noRevisionCache bool, localManifests []string, hasMultipleSources bool) (*comparisonResult, error) {
func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1alpha1.AppProject, revisions []string, sources []v1alpha1.ApplicationSource, noCache bool, noRevisionCache bool, localManifests []string, hasMultipleSources bool) (*comparisonResult, error) {
ts := stats.NewTimingStats()
logCtx := log.WithFields(applog.GetAppLogFields(app))

View File

@@ -48,7 +48,7 @@ func TestCompareAppStateEmpty(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -66,7 +66,7 @@ func TestCompareAppStateEmpty(t *testing.T) {
// TestCompareAppStateRepoError tests the case when CompareAppState notices a repo error
func TestCompareAppStateRepoError(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(t.Context(), &fakeData{manifestResponses: make([]*apiclient.ManifestResponse, 3)}, errors.New("test repo error"))
ctrl := newFakeController(&fakeData{manifestResponses: make([]*apiclient.ManifestResponse, 3)}, errors.New("test repo error"))
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -112,7 +112,7 @@ func TestCompareAppStateNamespaceMetadataDiffers(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -161,7 +161,7 @@ func TestCompareAppStateNamespaceMetadataDiffersToManifest(t *testing.T) {
kube.GetResourceKey(ns): ns,
},
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -219,7 +219,7 @@ func TestCompareAppStateNamespaceMetadata(t *testing.T) {
kube.GetResourceKey(ns): ns,
},
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -278,7 +278,7 @@ func TestCompareAppStateNamespaceMetadataIsTheSame(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -306,7 +306,7 @@ func TestCompareAppStateMissing(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -338,7 +338,7 @@ func TestCompareAppStateExtra(t *testing.T) {
key: pod,
},
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -369,7 +369,7 @@ func TestCompareAppStateHook(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -401,7 +401,7 @@ func TestCompareAppStateSkipHook(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -431,7 +431,7 @@ func TestCompareAppStateCompareOptionIgnoreExtraneous(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
@@ -465,7 +465,7 @@ func TestCompareAppStateExtraHook(t *testing.T) {
key: pod,
},
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -494,7 +494,7 @@ func TestAppRevisionsSingleSource(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
app := newFakeApp()
revisions := make([]string, 0)
@@ -534,7 +534,7 @@ func TestAppRevisionsMultiSource(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
app := newFakeMultiSourceApp()
revisions := make([]string, 0)
@@ -583,7 +583,7 @@ func TestCompareAppStateDuplicatedNamespacedResources(t *testing.T) {
kube.GetResourceKey(obj3): obj3,
},
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -624,7 +624,7 @@ func TestCompareAppStateManagedNamespaceMetadataWithLiveNsDoesNotGetPruned(t *te
kube.GetResourceKey(ns): ns,
},
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, []string{}, app.Spec.Sources, false, false, nil, false)
require.NoError(t, err)
@@ -676,7 +676,7 @@ func TestCompareAppStateWithManifestGeneratePath(t *testing.T) {
updateRevisionForPathsResponse: &apiclient.UpdateRevisionForPathsResponse{},
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
revisions := make([]string, 0)
revisions = append(revisions, "abc123")
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, app.Spec.GetSources(), false, false, nil, false)
@@ -698,7 +698,7 @@ func TestSetHealth(t *testing.T) {
Namespace: "default",
},
})
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
@@ -734,7 +734,7 @@ func TestPreserveStatusTimestamp(t *testing.T) {
Namespace: "default",
},
})
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
@@ -770,7 +770,7 @@ func TestSetHealthSelfReferencedApp(t *testing.T) {
Namespace: "default",
},
})
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
@@ -799,7 +799,7 @@ func TestSetManagedResourcesWithOrphanedResources(t *testing.T) {
proj.Spec.OrphanedResources = &v1alpha1.OrphanedResourcesMonitorSettings{}
app := newFakeApp()
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, proj},
namespacedResources: map[kube.ResourceKey]namespacedResource{
kube.NewResourceKey("apps", kube.DeploymentKind, app.Namespace, "guestbook"): {
@@ -828,7 +828,7 @@ func TestSetManagedResourcesWithResourcesOfAnotherApp(t *testing.T) {
app2 := newFakeApp()
app2.Name = "app2"
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app1, app2, proj},
namespacedResources: map[kube.ResourceKey]namespacedResource{
kube.NewResourceKey("apps", kube.DeploymentKind, app2.Namespace, "guestbook"): {
@@ -852,7 +852,7 @@ func TestReturnUnknownComparisonStateOnSettingLoadError(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, proj},
configMapData: map[string]string{
"resource.customizations": "invalid setting",
@@ -878,7 +878,7 @@ func TestSetManagedResourcesKnownOrphanedResourceExceptions(t *testing.T) {
app := newFakeApp()
app.Namespace = "default"
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, proj},
namespacedResources: map[kube.ResourceKey]namespacedResource{
kube.NewResourceKey("apps", kube.DeploymentKind, app.Namespace, "guestbook"): {
@@ -902,7 +902,7 @@ func TestSetManagedResourcesKnownOrphanedResourceExceptions(t *testing.T) {
func Test_appStateManager_persistRevisionHistory(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app},
}, nil)
manager := ctrl.appStateManager.(*appStateManager)
@@ -1007,7 +1007,7 @@ func TestSignedResponseNoSignatureRequired(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1034,7 +1034,7 @@ func TestSignedResponseNoSignatureRequired(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1066,7 +1066,7 @@ func TestSignedResponseSignatureRequired(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1093,7 +1093,7 @@ func TestSignedResponseSignatureRequired(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1120,7 +1120,7 @@ func TestSignedResponseSignatureRequired(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1147,7 +1147,7 @@ func TestSignedResponseSignatureRequired(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1175,7 +1175,7 @@ func TestSignedResponseSignatureRequired(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
testProj := signedProj
testProj.Spec.SignatureKeys[0].KeyID = "4AEE18F83AFDEB24"
sources := make([]v1alpha1.ApplicationSource, 0)
@@ -1207,7 +1207,7 @@ func TestSignedResponseSignatureRequired(t *testing.T) {
}
// it doesn't matter for our test whether local manifests are valid
localManifests := []string{"foobar"}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1237,7 +1237,7 @@ func TestSignedResponseSignatureRequired(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1267,7 +1267,7 @@ func TestSignedResponseSignatureRequired(t *testing.T) {
}
// it doesn't matter for our test whether local manifests are valid
localManifests := []string{""}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1395,7 +1395,7 @@ func TestIsLiveResourceManaged(t *testing.T) {
},
},
})
ctrl := newFakeController(t.Context(), &fakeData{
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
@@ -1765,7 +1765,7 @@ func TestCompareAppStateDefaultRevisionUpdated(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1788,7 +1788,7 @@ func TestCompareAppStateRevisionUpdatedWithHelmSource(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
sources := make([]v1alpha1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
@@ -1807,10 +1807,10 @@ func Test_normalizeClusterScopeTracking(t *testing.T) {
Namespace: "test",
},
})
c := &cachemocks.ClusterCache{}
c.EXPECT().IsNamespaced(mock.Anything).Return(false, nil)
c := cachemocks.ClusterCache{}
c.On("IsNamespaced", mock.Anything).Return(false, nil)
var called bool
err := normalizeClusterScopeTracking([]*unstructured.Unstructured{obj}, c, func(u *unstructured.Unstructured) error {
err := normalizeClusterScopeTracking([]*unstructured.Unstructured{obj}, &c, func(u *unstructured.Unstructured) error {
// We expect that the normalization function will call this callback with an obj that has had the namespace set
// to empty.
called = true
@@ -1838,7 +1838,7 @@ func TestCompareAppState_DoesNotCallUpdateRevisionForPaths_ForOCI(t *testing.T)
Revision: "abc123",
},
}
ctrl := newFakeControllerWithResync(t.Context(), &data, time.Minute, nil, errors.New("this should not be called"))
ctrl := newFakeControllerWithResync(&data, time.Minute, nil, errors.New("this should not be called"))
source := app.Spec.GetSource()
source.RepoURL = "oci://example.com/argo/argo-cd"

View File

@@ -2,7 +2,6 @@ package controller
import (
"context"
"encoding/json"
stderrors "errors"
"fmt"
"os"
@@ -263,7 +262,7 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, project *v1alp
// resources which in this case applies the live values in the configured
// ignore differences fields.
if syncOp.SyncOptions.HasOption("RespectIgnoreDifferences=true") {
patchedTargets, err := normalizeTargetResources(openAPISchema, compareResult)
patchedTargets, err := normalizeTargetResources(compareResult)
if err != nil {
state.Phase = common.OperationError
state.Message = fmt.Sprintf("Failed to normalize target resources: %s", err)
@@ -435,65 +434,53 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, project *v1alp
// - applies normalization to the target resources based on the live resources
// - copies ignored fields from the matching live resources: apply normalizer to the live resource,
// calculates the patch performed by normalizer and applies the patch to the target resource
func normalizeTargetResources(openAPISchema openapi.Resources, cr *comparisonResult) ([]*unstructured.Unstructured, error) {
// Normalize live and target resources (cleaning or aligning them)
func normalizeTargetResources(cr *comparisonResult) ([]*unstructured.Unstructured, error) {
// normalize live and target resources
normalized, err := diff.Normalize(cr.reconciliationResult.Live, cr.reconciliationResult.Target, cr.diffConfig)
if err != nil {
return nil, err
}
patchedTargets := []*unstructured.Unstructured{}
for idx, live := range cr.reconciliationResult.Live {
normalizedTarget := normalized.Targets[idx]
if normalizedTarget == nil {
patchedTargets = append(patchedTargets, nil)
continue
}
gvk := normalizedTarget.GroupVersionKind()
originalTarget := cr.reconciliationResult.Target[idx]
if live == nil {
// No live resource, just use target
patchedTargets = append(patchedTargets, originalTarget)
continue
}
var (
lookupPatchMeta strategicpatch.LookupPatchMeta
versionedObject any
)
// Load patch meta struct or OpenAPI schema for CRDs
if versionedObject, err = scheme.Scheme.New(gvk); err == nil {
if lookupPatchMeta, err = strategicpatch.NewPatchMetaFromStruct(versionedObject); err != nil {
var lookupPatchMeta *strategicpatch.PatchMetaFromStruct
versionedObject, err := scheme.Scheme.New(normalizedTarget.GroupVersionKind())
if err == nil {
meta, err := strategicpatch.NewPatchMetaFromStruct(versionedObject)
if err != nil {
return nil, err
}
} else if crdSchema := openAPISchema.LookupResource(gvk); crdSchema != nil {
lookupPatchMeta = strategicpatch.NewPatchMetaFromOpenAPI(crdSchema)
lookupPatchMeta = &meta
}
// Calculate live patch
livePatch, err := getMergePatch(normalized.Lives[idx], live, lookupPatchMeta)
if err != nil {
return nil, err
}
// Apply the patch to the normalized target
// This ensures ignored fields in live are restored into the target before syncing
normalizedTarget, err = applyMergePatch(normalizedTarget, livePatch, versionedObject, lookupPatchMeta)
normalizedTarget, err = applyMergePatch(normalizedTarget, livePatch, versionedObject)
if err != nil {
return nil, err
}
patchedTargets = append(patchedTargets, normalizedTarget)
}
return patchedTargets, nil
}
// getMergePatch calculates and returns the patch between the original and the
// modified unstructures.
func getMergePatch(original, modified *unstructured.Unstructured, lookupPatchMeta strategicpatch.LookupPatchMeta) ([]byte, error) {
func getMergePatch(original, modified *unstructured.Unstructured, lookupPatchMeta *strategicpatch.PatchMetaFromStruct) ([]byte, error) {
originalJSON, err := original.MarshalJSON()
if err != nil {
return nil, err
@@ -509,35 +496,18 @@ func getMergePatch(original, modified *unstructured.Unstructured, lookupPatchMet
return jsonpatch.CreateMergePatch(originalJSON, modifiedJSON)
}
// applyMergePatch will apply the given patch in the obj and return the patched unstructure.
func applyMergePatch(obj *unstructured.Unstructured, patch []byte, versionedObject any, meta strategicpatch.LookupPatchMeta) (*unstructured.Unstructured, error) {
// applyMergePatch will apply the given patch in the obj and return the patched
// unstructure.
func applyMergePatch(obj *unstructured.Unstructured, patch []byte, versionedObject any) (*unstructured.Unstructured, error) {
originalJSON, err := obj.MarshalJSON()
if err != nil {
return nil, err
}
var patchedJSON []byte
switch {
case versionedObject != nil:
patchedJSON, err = strategicpatch.StrategicMergePatch(originalJSON, patch, versionedObject)
case meta != nil:
var originalMap, patchMap map[string]any
if err := json.Unmarshal(originalJSON, &originalMap); err != nil {
return nil, err
}
if err := json.Unmarshal(patch, &patchMap); err != nil {
return nil, err
}
patchedMap, err := strategicpatch.StrategicMergeMapPatchUsingLookupPatchMeta(originalMap, patchMap, meta)
if err != nil {
return nil, err
}
patchedJSON, err = json.Marshal(patchedMap)
if err != nil {
return nil, err
}
default:
if versionedObject == nil {
patchedJSON, err = jsonpatch.MergePatch(originalJSON, patch)
} else {
patchedJSON, err = strategicpatch.StrategicMergePatch(originalJSON, patch, versionedObject)
}
if err != nil {
return nil, err
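When scheme.Scheme.New cannot produce a typed object (the CRD case), the code above falls back to a plain RFC 7386 merge patch. A self-contained sketch of the compute-then-apply flow using the same evanphx/json-patch calls; the JSON documents are invented for illustration:

```go
package main

import (
	"fmt"

	jsonpatch "github.com/evanphx/json-patch"
)

func main() {
	// Live object after normalization (ignored fields stripped) versus
	// the raw live object and the rendered target.
	normalizedLive := []byte(`{"spec":{"replicas":1}}`)
	live := []byte(`{"spec":{"paused":true,"replicas":4}}`)
	target := []byte(`{"spec":{"paused":false,"replicas":1}}`)

	// Merge patch that reintroduces the normalized-away live values.
	patch, err := jsonpatch.CreateMergePatch(normalizedLive, live)
	if err != nil {
		panic(err)
	}
	fmt.Printf("patch: %s\n", patch)

	// Applying it to the target is the CRD branch of applyMergePatch
	// above; built-in kinds go through strategicpatch.StrategicMergePatch.
	patched, err := jsonpatch.MergePatch(target, patch)
	if err != nil {
		panic(err)
	}
	fmt.Printf("patched target: %s\n", patched)
}
```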

View File

@@ -1,17 +1,9 @@
package controller
import (
"fmt"
"os"
"strconv"
"testing"
openapi_v2 "github.com/google/gnostic-models/openapiv2"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/kubectl/pkg/util/openapi"
"sigs.k8s.io/yaml"
"github.com/argoproj/gitops-engine/pkg/sync"
synccommon "github.com/argoproj/gitops-engine/pkg/sync/common"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
@@ -31,29 +23,6 @@ import (
"github.com/argoproj/argo-cd/v3/util/argo/normalizers"
)
type fakeDiscovery struct {
schema *openapi_v2.Document
}
func (f *fakeDiscovery) OpenAPISchema() (*openapi_v2.Document, error) {
return f.schema, nil
}
func loadCRDSchema(t *testing.T, path string) *openapi_v2.Document {
t.Helper()
data, err := os.ReadFile(path)
require.NoError(t, err)
jsonData, err := yaml.YAMLToJSON(data)
require.NoError(t, err)
doc, err := openapi_v2.ParseDocument(jsonData)
require.NoError(t, err)
return doc
}
func TestPersistRevisionHistory(t *testing.T) {
app := newFakeApp()
app.Status.OperationState = nil
@@ -75,7 +44,7 @@ func TestPersistRevisionHistory(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
// Sync with source unspecified
opState := &v1alpha1.OperationState{Operation: v1alpha1.Operation{
@@ -121,7 +90,7 @@ func TestPersistManagedNamespaceMetadataState(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
// Sync with source unspecified
opState := &v1alpha1.OperationState{Operation: v1alpha1.Operation{
@@ -152,7 +121,7 @@ func TestPersistRevisionHistoryRollback(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
// Sync with source specified
source := v1alpha1.ApplicationSource{
@@ -206,7 +175,7 @@ func TestSyncComparisonError(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
// Sync with source unspecified
opState := &v1alpha1.OperationState{Operation: v1alpha1.Operation{
@@ -263,7 +232,7 @@ func TestAppStateManager_SyncAppState(t *testing.T) {
},
managedLiveObjs: liveObjects,
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
return &fixture{
application: app,
@@ -350,7 +319,7 @@ func TestSyncWindowDeniesSync(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
return &fixture{
application: app,
@@ -416,7 +385,7 @@ func TestNormalizeTargetResources(t *testing.T) {
f := setup(t, ignores)
// when
targets, err := normalizeTargetResources(nil, f.comparisonResult)
targets, err := normalizeTargetResources(f.comparisonResult)
// then
require.NoError(t, err)
@@ -429,7 +398,7 @@ func TestNormalizeTargetResources(t *testing.T) {
f := setup(t, []v1alpha1.ResourceIgnoreDifferences{})
// when
targets, err := normalizeTargetResources(nil, f.comparisonResult)
targets, err := normalizeTargetResources(f.comparisonResult)
// then
require.NoError(t, err)
@@ -449,7 +418,7 @@ func TestNormalizeTargetResources(t *testing.T) {
unstructured.RemoveNestedField(live.Object, "metadata", "annotations", "iksm-version")
// when
targets, err := normalizeTargetResources(nil, f.comparisonResult)
targets, err := normalizeTargetResources(f.comparisonResult)
// then
require.NoError(t, err)
@@ -474,7 +443,7 @@ func TestNormalizeTargetResources(t *testing.T) {
f := setup(t, ignores)
// when
targets, err := normalizeTargetResources(nil, f.comparisonResult)
targets, err := normalizeTargetResources(f.comparisonResult)
// then
require.NoError(t, err)
@@ -489,6 +458,7 @@ func TestNormalizeTargetResources(t *testing.T) {
assert.Equal(t, int64(4), replicas)
})
t.Run("will keep new array entries not found in live state if not ignored", func(t *testing.T) {
t.Skip("limitation in the current implementation")
// given
ignores := []v1alpha1.ResourceIgnoreDifferences{
{
@@ -502,7 +472,7 @@ func TestNormalizeTargetResources(t *testing.T) {
f.comparisonResult.reconciliationResult.Target = []*unstructured.Unstructured{target}
// when
targets, err := normalizeTargetResources(nil, f.comparisonResult)
targets, err := normalizeTargetResources(f.comparisonResult)
// then
require.NoError(t, err)
@@ -539,11 +509,6 @@ func TestNormalizeTargetResourcesWithList(t *testing.T) {
}
t.Run("will properly ignore nested fields within arrays", func(t *testing.T) {
doc := loadCRDSchema(t, "testdata/schemas/httpproxy_openapi_v2.yaml")
disco := &fakeDiscovery{schema: doc}
oapiGetter := openapi.NewOpenAPIGetter(disco)
oapiResources, err := openapi.NewOpenAPIParser(oapiGetter).Parse()
require.NoError(t, err)
// given
ignores := []v1alpha1.ResourceIgnoreDifferences{
{
@@ -557,11 +522,8 @@ func TestNormalizeTargetResourcesWithList(t *testing.T) {
target := test.YamlToUnstructured(testdata.TargetHTTPProxy)
f.comparisonResult.reconciliationResult.Target = []*unstructured.Unstructured{target}
gvk := schema.GroupVersionKind{Group: "projectcontour.io", Version: "v1", Kind: "HTTPProxy"}
fmt.Printf("LookupResource result: %+v\n", oapiResources.LookupResource(gvk))
// when
patchedTargets, err := normalizeTargetResources(oapiResources, f.comparisonResult)
patchedTargets, err := normalizeTargetResources(f.comparisonResult)
// then
require.NoError(t, err)
@@ -600,7 +562,7 @@ func TestNormalizeTargetResourcesWithList(t *testing.T) {
f.comparisonResult.reconciliationResult.Target = []*unstructured.Unstructured{target}
// when
targets, err := normalizeTargetResources(nil, f.comparisonResult)
targets, err := normalizeTargetResources(f.comparisonResult)
// then
require.NoError(t, err)
@@ -652,7 +614,7 @@ func TestNormalizeTargetResourcesWithList(t *testing.T) {
f.comparisonResult.reconciliationResult.Target = []*unstructured.Unstructured{target}
// when
targets, err := normalizeTargetResources(nil, f.comparisonResult)
targets, err := normalizeTargetResources(f.comparisonResult)
// then
require.NoError(t, err)
@@ -706,175 +668,6 @@ func TestNormalizeTargetResourcesWithList(t *testing.T) {
assert.Equal(t, "EV", env0["name"])
assert.Equal(t, "here", env0["value"])
})
t.Run("patches ignored differences in individual array elements of HTTPProxy CRD", func(t *testing.T) {
doc := loadCRDSchema(t, "testdata/schemas/httpproxy_openapi_v2.yaml")
disco := &fakeDiscovery{schema: doc}
oapiGetter := openapi.NewOpenAPIGetter(disco)
oapiResources, err := openapi.NewOpenAPIParser(oapiGetter).Parse()
require.NoError(t, err)
ignores := []v1alpha1.ResourceIgnoreDifferences{
{
Group: "projectcontour.io",
Kind: "HTTPProxy",
JQPathExpressions: []string{".spec.routes[].rateLimitPolicy.global.descriptors[].entries[]"},
},
}
f := setupHTTPProxy(t, ignores)
target := test.YamlToUnstructured(testdata.TargetHTTPProxy)
f.comparisonResult.reconciliationResult.Target = []*unstructured.Unstructured{target}
live := test.YamlToUnstructured(testdata.LiveHTTPProxy)
f.comparisonResult.reconciliationResult.Live = []*unstructured.Unstructured{live}
patchedTargets, err := normalizeTargetResources(oapiResources, f.comparisonResult)
require.NoError(t, err)
require.Len(t, patchedTargets, 1)
patched := patchedTargets[0]
// verify descriptors array in patched target
descriptors := dig(patched.Object, "spec", "routes", 0, "rateLimitPolicy", "global", "descriptors").([]any)
require.Len(t, descriptors, 1) // Only the descriptors with ignored entries should remain
// verify individual entries array inside the descriptor
entriesArr := dig(patched.Object, "spec", "routes", 0, "rateLimitPolicy", "global", "descriptors", 0, "entries").([]any)
require.Len(t, entriesArr, 1) // Only the ignored entry should be patched
// verify the content of the entry is preserved correctly
entry := entriesArr[0].(map[string]any)
requestHeader := entry["requestHeader"].(map[string]any)
assert.Equal(t, "sample-header", requestHeader["headerName"])
assert.Equal(t, "sample-key", requestHeader["descriptorKey"])
})
}
func TestNormalizeTargetResourcesCRDs(t *testing.T) {
type fixture struct {
comparisonResult *comparisonResult
}
setupHTTPProxy := func(t *testing.T, ignores []v1alpha1.ResourceIgnoreDifferences) *fixture {
t.Helper()
dc, err := diff.NewDiffConfigBuilder().
WithDiffSettings(ignores, nil, true, normalizers.IgnoreNormalizerOpts{}).
WithNoCache().
Build()
require.NoError(t, err)
live := test.YamlToUnstructured(testdata.SimpleAppLiveYaml)
target := test.YamlToUnstructured(testdata.SimpleAppTargetYaml)
return &fixture{
&comparisonResult{
reconciliationResult: sync.ReconciliationResult{
Live: []*unstructured.Unstructured{live},
Target: []*unstructured.Unstructured{target},
},
diffConfig: dc,
},
}
}
t.Run("sample-app", func(t *testing.T) {
doc := loadCRDSchema(t, "testdata/schemas/simple-app.yaml")
disco := &fakeDiscovery{schema: doc}
oapiGetter := openapi.NewOpenAPIGetter(disco)
oapiResources, err := openapi.NewOpenAPIParser(oapiGetter).Parse()
require.NoError(t, err)
ignores := []v1alpha1.ResourceIgnoreDifferences{
{
Group: "example.com",
Kind: "SimpleApp",
JQPathExpressions: []string{".spec.servers[1].enabled", ".spec.servers[0].port"},
},
}
f := setupHTTPProxy(t, ignores)
target := test.YamlToUnstructured(testdata.SimpleAppTargetYaml)
f.comparisonResult.reconciliationResult.Target = []*unstructured.Unstructured{target}
live := test.YamlToUnstructured(testdata.SimpleAppLiveYaml)
f.comparisonResult.reconciliationResult.Live = []*unstructured.Unstructured{live}
patchedTargets, err := normalizeTargetResources(oapiResources, f.comparisonResult)
require.NoError(t, err)
require.Len(t, patchedTargets, 1)
patched := patchedTargets[0]
require.NotNil(t, patched)
// 'spec.servers' array has length 2
servers := dig(patched.Object, "spec", "servers").([]any)
require.Len(t, servers, 2)
// first server's 'name' is 'server1'
name1 := dig(patched.Object, "spec", "servers", 0, "name").(string)
assert.Equal(t, "server1", name1)
assert.Equal(t, int64(8081), dig(patched.Object, "spec", "servers", 0, "port").(int64))
assert.Equal(t, int64(9090), dig(patched.Object, "spec", "servers", 1, "port").(int64))
// first server's 'enabled' should be true
enabled1 := dig(patched.Object, "spec", "servers", 0, "enabled").(bool)
assert.True(t, enabled1)
// second server's 'name' should be 'server2'
name2 := dig(patched.Object, "spec", "servers", 1, "name").(string)
assert.Equal(t, "server2", name2)
// second server's 'enabled' should be true (respected from live due to ignoreDifferences)
enabled2 := dig(patched.Object, "spec", "servers", 1, "enabled").(bool)
assert.True(t, enabled2)
})
t.Run("rollout-obj", func(t *testing.T) {
// Load Rollout CRD schema like SimpleApp
doc := loadCRDSchema(t, "testdata/schemas/rollout-schema.yaml")
disco := &fakeDiscovery{schema: doc}
oapiGetter := openapi.NewOpenAPIGetter(disco)
oapiResources, err := openapi.NewOpenAPIParser(oapiGetter).Parse()
require.NoError(t, err)
ignores := []v1alpha1.ResourceIgnoreDifferences{
{
Group: "argoproj.io",
Kind: "Rollout",
JQPathExpressions: []string{`.spec.template.spec.containers[] | select(.name == "init") | .image`},
},
}
f := setupHTTPProxy(t, ignores)
live := test.YamlToUnstructured(testdata.LiveRolloutYaml)
target := test.YamlToUnstructured(testdata.TargetRolloutYaml)
f.comparisonResult.reconciliationResult.Live = []*unstructured.Unstructured{live}
f.comparisonResult.reconciliationResult.Target = []*unstructured.Unstructured{target}
targets, err := normalizeTargetResources(oapiResources, f.comparisonResult)
require.NoError(t, err)
require.Len(t, targets, 1)
patched := targets[0]
require.NotNil(t, patched)
containers := dig(patched.Object, "spec", "template", "spec", "containers").([]any)
require.Len(t, containers, 2)
initContainer := containers[0].(map[string]any)
mainContainer := containers[1].(map[string]any)
// Assert init container image is preserved (ignoreDifferences works)
initImage := dig(initContainer, "image").(string)
assert.Equal(t, "init-container:v1", initImage)
// Assert main container fields as expected
mainName := dig(mainContainer, "name").(string)
assert.Equal(t, "main", mainName)
mainImage := dig(mainContainer, "image").(string)
assert.Equal(t, "main-container:v1", mainImage)
})
}
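For reference, the assertions in these tests index into unstructured content through a small dig helper. A sketch consistent with the call sites shown here; the real helper in the test file may differ:

```go
// dig walks a nested unstructured value (map[string]any / []any, as
// produced by unstructured.Unstructured.Object) using mixed string
// keys and integer indices, panicking on a type mismatch just as the
// type assertions in the tests would.
func dig(obj any, path ...any) any {
	cur := obj
	for _, seg := range path {
		switch key := seg.(type) {
		case string:
			cur = cur.(map[string]any)[key]
		case int:
			cur = cur.([]any)[key]
		}
	}
	return cur
}
```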
func TestDeriveServiceAccountMatchingNamespaces(t *testing.T) {
@@ -1620,7 +1413,7 @@ func TestSyncWithImpersonate(t *testing.T) {
},
additionalObjs: additionalObjs,
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
return &fixture{
application: app,
project: project,
@@ -1780,7 +1573,7 @@ func TestClientSideApplyMigration(t *testing.T) {
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(t.Context(), &data, nil)
ctrl := newFakeController(&data, nil)
return &fixture{
application: app,

View File

@@ -32,16 +32,4 @@ var (
//go:embed additional-image-replicas-deployment.yaml
AdditionalImageReplicaDeploymentYaml string
//go:embed simple-app-live.yaml
SimpleAppLiveYaml string
//go:embed simple-app-target.yaml
SimpleAppTargetYaml string
//go:embed target-rollout.yaml
TargetRolloutYaml string
//go:embed live-rollout.yaml
LiveRolloutYaml string
)

View File

@@ -1,25 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
name: rollout-sample
spec:
replicas: 2
strategy:
canary:
steps:
- setWeight: 20
selector:
matchLabels:
app: rollout-sample
template:
metadata:
labels:
app: rollout-sample
spec:
containers:
- name: init
image: init-container:v1
livenessProbe:
initialDelaySeconds: 10
- name: main
image: main-container:v1

View File

@@ -1,62 +0,0 @@
swagger: "2.0"
info:
title: HTTPProxy
version: "v1"
paths: {}
definitions:
io.projectcontour.v1.HTTPProxy:
type: object
x-kubernetes-group-version-kind:
- group: projectcontour.io
version: v1
kind: HTTPProxy
properties:
spec:
type: object
properties:
routes:
type: array
items:
type: object
properties:
rateLimitPolicy:
type: object
properties:
global:
type: object
properties:
descriptors:
type: array
x-kubernetes-list-map-keys:
- entries
items:
type: object
properties:
entries:
type: array
x-kubernetes-list-map-keys:
- headerName
items:
type: object
properties:
requestHeader:
type: object
properties:
descriptorKey:
type: string
headerName:
type: string
requestHeaderValueMatch:
type: object
properties:
headers:
type: array
items:
type: object
properties:
name:
type: string
contains:
type: string
value:
type: string

View File

@@ -1,67 +0,0 @@
swagger: "2.0"
info:
title: Rollout
version: "v1alpha1"
paths: {}
definitions:
argoproj.io.v1alpha1.Rollout:
type: object
x-kubernetes-group-version-kind:
- group: argoproj.io
version: v1alpha1
kind: Rollout
properties:
spec:
type: object
properties:
replicas:
type: integer
strategy:
type: object
properties:
canary:
type: object
properties:
steps:
type: array
items:
type: object
properties:
setWeight:
type: integer
selector:
type: object
properties:
matchLabels:
type: object
additionalProperties:
type: string
template:
type: object
properties:
metadata:
type: object
properties:
labels:
type: object
additionalProperties:
type: string
spec:
type: object
properties:
containers:
type: array
x-kubernetes-list-map-keys:
- name
items:
type: object
properties:
name:
type: string
image:
type: string
livenessProbe:
type: object
properties:
initialDelaySeconds:
type: integer

View File

@@ -1,29 +0,0 @@
swagger: "2.0"
info:
title: SimpleApp
version: "v1"
paths: {}
definitions:
example.com.v1.SimpleApp:
type: object
x-kubernetes-group-version-kind:
- group: example.com
version: v1
kind: SimpleApp
properties:
spec:
type: object
properties:
servers:
type: array
x-kubernetes-list-map-keys:
- name
items:
type: object
properties:
name:
type: string
port:
type: integer
enabled:
type: boolean

View File

@@ -1,12 +0,0 @@
apiVersion: example.com/v1
kind: SimpleApp
metadata:
name: simpleapp-sample
spec:
servers:
- name: server1
port: 8081 # port changed in live from 8080
enabled: true
- name: server2
port: 9090
enabled: true # enabled changed in live from false

View File

@@ -1,12 +0,0 @@
apiVersion: example.com/v1
kind: SimpleApp
metadata:
name: simpleapp-sample
spec:
servers:
- name: server1
port: 8080
enabled: true
- name: server2
port: 9090
enabled: false

View File

@@ -1,25 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
name: rollout-sample
spec:
replicas: 2
strategy:
canary:
steps:
- setWeight: 20
selector:
matchLabels:
app: rollout-sample
template:
metadata:
labels:
app: rollout-sample
spec:
containers:
- name: init
image: init-container:v1
livenessProbe:
initialDelaySeconds: 15
- name: main
image: main-container:v1

Binary file not shown (deleted image, 228 KiB).

View File

@@ -68,9 +68,9 @@ make builder-image IMAGE_NAMESPACE=argoproj IMAGE_TAG=v1.0.0
Every commit to master is built and published to `ghcr.io/argoproj/argo-cd/argocd:<version>-<short-sha>`. The list of images is available at
[https://github.com/argoproj/argo-cd/packages](https://github.com/argoproj/argo-cd/packages).
> [!NOTE]
> GitHub docker registry [requires](https://github.community/t5/GitHub-Actions/docker-pull-from-public-GitHub-Package-Registry-fail-with-quot/m-p/32888#M1294) authentication to read
> even publicly available packages. Follow the steps from Kubernetes [documentation](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry)
> to configure image pull secret if you want to use `ghcr.io/argoproj/argo-cd/argocd` image.
!!! note
GitHub docker registry [requires](https://github.community/t5/GitHub-Actions/docker-pull-from-public-GitHub-Package-Registry-fail-with-quot/m-p/32888#M1294) authentication to read
even publicly available packages. Follow the steps from Kubernetes [documentation](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry)
to configure image pull secret if you want to use `ghcr.io/argoproj/argo-cd/argocd` image.
The image is automatically deployed to the dev Argo CD instance: [https://cd.apps.argoproj.io/](https://cd.apps.argoproj.io/)
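
For reference, a minimal sketch of creating such an image pull secret; the secret name is a placeholder, and the username and token are your own GitHub credentials:

```shell
kubectl create secret docker-registry ghcr-pull-secret \
  --docker-server=ghcr.io \
  --docker-username=<github-username> \
  --docker-password=<github-token>
```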

@@ -1,18 +0,0 @@
The Argo CD UI displays icons for various Kubernetes resource types to help users quickly identify them. Argo CD
includes a set of built-in icons for common resource types.
You can contribute additional icons for custom resource types by following these steps:
1. Ensure the license is compatible with Apache 2.0.
2. Add the icon file to the `ui/src/assets/images/resources/<group>/icon.svg` path in the Argo CD repository.
3. Modify the SVG to use the correct color, `#8fa4b1`.
4. Run `make resourceiconsgen` to update the generated typescript file that lists all available icons.
5. Create a pull request to the Argo CD repository with your changes.
`<group>` is the API group of the custom resource. For example, if you are adding an icon for a custom resource with the
API group `example.com`, you would place the icon at `ui/src/assets/images/resources/example.com/icon.svg`.
If you want the same icon to apply to resources in multiple API groups with the same suffix, you can create a directory
prefixed with an underscore. The underscore will be interpreted as a wildcard. For example, to apply the same icon to
resources in the `example.com` and `another.example.com` API groups, you would place the icon at
`ui/src/assets/images/resources/_.example.com/icon.svg`.
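
Putting the steps above together, a minimal sketch for the `example.com` case, assuming you already have a suitably licensed `icon.svg` recolored to `#8fa4b1`:

```shell
mkdir -p ui/src/assets/images/resources/example.com
cp icon.svg ui/src/assets/images/resources/example.com/icon.svg
make resourceiconsgen
```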

@@ -26,8 +26,8 @@ api-server: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run
```
This configuration example will be used as the basis for the next steps.
> [!NOTE]
> The Procfile for a component may change with time. Please go through the Procfile and make sure you use the latest configuration for debugging.
!!! note
The Procfile for a component may change with time. Please go through the Procfile and make sure you use the latest configuration for debugging.
### Configure component env variables
The component that you will run in your IDE for debugging (`api-server` in our case) will need env variables. Copy the env variables from `Procfile`, located in the `argo-cd` root folder of your development branch. The env variables are located before the `$COMMAND` section in the `sh -c` section of the component run command.
@@ -112,8 +112,8 @@ Example for an `api-server` launch configuration snippet, based on our above exa
</component>
```
> [!NOTE]
> As an alternative to importing the above file to Goland, you can create a Run/Debug Configuration using the official [Goland docs](https://www.jetbrains.com/help/go/go-build.html) and just copy the `parameters`, `directory` and `PATH` sections from the example above (specifying `Run kind` as `Directory` in the Run/Debug Configurations wizard)
!!! note
As an alternative to importing the above file to Goland, you can create a Run/Debug Configuration using the official [Goland docs](https://www.jetbrains.com/help/go/go-build.html) and just copy the `parameters`, `directory` and `PATH` sections from the example above (specifying `Run kind` as `Directory` in the Run/Debug Configurations wizard)
## Run Argo CD without the debugged component
Next, we need to run all Argo CD components, except for the debugged component (because we will run this component separately in the IDE).
@@ -143,4 +143,4 @@ To debug the `api-server`, run:
Finally, run the component you wish to debug from your IDE and make sure it does not have any errors.
## Important
When running Argo CD components separately, ensure components aren't creating conflicts - each component needs to be up exactly once, be it running locally with the local toolchain or running from your IDE. Otherwise you may get errors about ports not available or even debugging a process that does not contain your code changes.
When running Argo CD components separately, ensure components aren't creating conflicts - each component needs to be up exactly once, be it running locally with the local toolchain or running from your IDE. Otherwise you may get errors about ports not available or even debugging a process that does not contain your code changes.

@@ -21,7 +21,7 @@ curl -sSfL https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/i
Connect to one of the services. For example, to debug the main ArgoCD server, run:
```shell
kubectl config set-context --current --namespace argocd
telepresence helm install --set-json agent.securityContext={} # Installs telepresence into your cluster
telepresence helm install --set agent.securityContext={} # Installs telepresence into your cluster
telepresence connect # Starts the connection to your cluster (bound to the current namespace)
telepresence intercept argocd-server --port 8080:http --env-file .envrc.remote # Starts the interception
```
@@ -37,7 +37,7 @@ telepresence status
Stop the intercept using:
```shell
telepresence leave argocd-server
telepresence leave argocd-server-argocd
```
And uninstall telepresence from your cluster:

@@ -1,5 +1,38 @@
# Managing Dependencies
## GitOps Engine (`github.com/argoproj/gitops-engine`)
### Repository
https://github.com/argoproj/gitops-engine
### Pulling changes from `gitops-engine`
After your GitOps Engine PR has been merged, ArgoCD needs to be updated to pull in the version of the GitOps engine that contains your change. Here are the steps:
- Retrieve the SHA hash for your commit. You will use this in the next step.
- From the `argo-cd` folder, run the following command
`go get github.com/argoproj/gitops-engine@<git-commit-sha>`
If you get the error message `invalid version: unknown revision`, the SHA hash is wrong
- Run:
`go mod tidy`
- The following files are changed:
- `go.mod`
- `go.sum`
- Create an ArgoCD PR with a `refactor:` type in its title for the two file changes (the full sequence is sketched below).
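
Taken together, the steps above amount to the following sketch (`<git-commit-sha>` is your commit's SHA):

```shell
go get github.com/argoproj/gitops-engine@<git-commit-sha>
go mod tidy
git status   # go.mod and go.sum should be the only modified files
```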
### Tips:
- See https://github.com/argoproj/argo-cd/pull/4434 as an example
- The PR might require additional, dependent changes in ArgoCD that are directly impacted by the changes made in the engine.
## Notifications Engine (`github.com/argoproj/notifications-engine`)
### Repository

@@ -7,7 +7,7 @@
## Preface
When you have developed and possibly manually tested the code you want to contribute, you should ensure that everything builds correctly. Commit your changes locally and perform the following steps; for each step, the commands for both the local and the virtualized toolchain are listed.
### Docker privileges for virtualized toolchain users
### Docker priviliges for virtualized toolchain users
[These instructions](toolchain-guide.md#docker-privileges) are relevant for most of the steps below
### Using Podman for virtualized toolchain users
@@ -29,6 +29,11 @@ As build dependencies change over time, you have to synchronize your development
* `make dep-ui` or `make dep-ui-local`
Argo CD recently migrated to Go modules. Usually, dependencies will be downloaded at build time, but the Makefile provides two targets to download and vendor all dependencies:
* `make mod-download` or `make mod-download-local` will download all required Go modules and
* `make mod-vendor` or `make mod-vendor-local` will vendor those dependencies into the Argo CD source tree
### Generate API glue code and other assets
Argo CD relies on Google's [Protocol Buffers](https://developers.google.com/protocol-buffers) for its API, which makes heavy use of auto-generated glue code and stubs. Whenever you touch parts of the API code, you must re-generate the auto-generated code.
@@ -37,8 +42,8 @@ Argo CD relies on Google's [Protocol Buffers](https://developers.google.com/prot
* Check if something has changed by running `git status` or `git diff`
* Commit any changes to your local Git branch; an appropriate commit message would be, for example, `Changes from codegen`.
> [!NOTE]
> There are a few non-obvious assets that are auto-generated. You should not change the autogenerated assets, as they will be overwritten by a subsequent run of `make codegen`. Instead, change their source files. Prominent examples of non-obvious auto-generated code are `swagger.json` or the installation manifest YAMLs.
!!!note
There are a few non-obvious assets that are auto-generated. You should not change the autogenerated assets, as they will be overwritten by a subsequent run of `make codegen`. Instead, change their source files. Prominent examples of non-obvious auto-generated code are `swagger.json` or the installation manifest YAMLs.
### Build your code and run unit tests

@@ -38,7 +38,7 @@ If you want to build and test the site directly on your local machine without th
## Analytics
> [!TIP]
> Don't forget to disable your ad-blocker when testing.
!!! tip
Don't forget to disable your ad-blocker when testing.
We collect [Google Analytics](https://analytics.google.com/analytics/web/#/report-home/a105170809w198079555p192782995).

@@ -1,10 +1,9 @@
# Proxy Extensions
> [!WARNING]
> **Beta Feature (Since 2.7.0)**
>
> This feature is in the [Beta](https://github.com/argoproj/argoproj/blob/main/community/feature-status.md#beta) stage.
> It is generally considered stable, but there may be unhandled edge cases.
!!! warning "Beta Feature (Since 2.7.0)"
This feature is in the [Beta](https://github.com/argoproj/argoproj/blob/main/community/feature-status.md#beta) stage.
It is generally considered stable, but there may be unhandled edge cases.
## Overview

@@ -129,30 +129,33 @@ It is also possible to add an optional flyout widget to your extension. It can b
Below is an example of an extension using the flyout widget:
```javascript
((window) => {
const component = (props: { openFlyout: () => any }) => {
const component = (props: {
openFlyout: () => any
}) => {
return React.createElement(
"div",
{
style: { padding: "10px" },
onClick: () => props.openFlyout(),
},
"Hello World"
"div",
{
style: { padding: "10px" },
onClick: () => props.openFlyout()
},
"Hello World"
);
};
const flyout = () => {
return React.createElement(
"div",
{ style: { padding: "10px" } },
"This is a flyout"
"div",
{ style: { padding: "10px" } },
"This is a flyout"
);
};
window.extensionsAPI.registerStatusPanelExtension(
component,
"My Extension",
"my_extension",
flyout
component,
"My Extension",
"my_extension",
flyout
);
})(window);
```
@@ -180,9 +183,7 @@ The callback function `shouldDisplay` should return true if the extension should
```typescript
const shouldDisplay = (app: Application) => {
return (
application.metadata?.labels?.["application.environmentLabelKey"] === "prd"
);
return application.metadata?.labels?.['application.environmentLabelKey'] === "prd";
};
```
@@ -195,64 +196,28 @@ Below is an example of a simple extension with a flyout widget:
};
const flyout = () => {
return React.createElement(
"div",
{ style: { padding: "10px" } },
"This is a flyout"
"div",
{ style: { padding: "10px" } },
"This is a flyout"
);
};
const component = () => {
return React.createElement(
"div",
{
onClick: () => flyout(),
},
"Toolbar Extension Test"
"div",
{
onClick: () => flyout()
},
"Toolbar Extension Test"
);
};
window.extensionsAPI.registerTopBarActionMenuExt(
component,
"Toolbar Extension Test",
"Toolbar_Extension_Test",
flyout,
shouldDisplay,
"",
true
component,
"Toolbar Extension Test",
"Toolbar_Extension_Test",
flyout,
shouldDisplay,
'',
true
);
})(window);
```
## App View Extensions
App View extensions allow you to create a new Application Details View for an application. This view would be selectable alongside the other views like the Node Tree, Pod, and Network views. When the extension's icon is clicked, the extension's component is rendered as the main content of the application view.
Register this extension through the `extensionsAPI.registerAppViewExtension` method.
```typescript
registerAppViewExtension(
component: ExtensionComponent, // the component to be rendered
title: string, // the title of the page once the component is rendered
icon: string, // the favicon classname for the icon tab
)
```
Below is an example of a simple extension:
```javascript
((window) => {
const component = () => {
return React.createElement(
"div",
{ style: { padding: "10px" } },
"Hello World"
);
};
window.extensionsAPI.registerAppViewExtension(
component,
"My Extension",
"fa-question-circle"
);
})(window);
```
```
Example rendered extension:
![destination](../../assets/application-view-extension.png)

@@ -6,8 +6,8 @@
Sure thing! You can either open an Enhancement Proposal in our GitHub issue tracker or you can [join us on Slack](https://argoproj.github.io/community/join-slack) in channel #argo-contributors to discuss your ideas and get guidance for submitting a PR.
> [!NOTE]
> Regular [contributor meetings](https://argo-cd.readthedocs.io/en/latest/developer-guide/code-contributions/#regular-contributor-meeting) are held weekly. Please follow the link for more details.
!!! note
Regular [contributor meetings](https://argo-cd.readthedocs.io/en/latest/developer-guide/code-contributions/#regular-contributor-meeting) are held weekly. Please follow the link for more details.
### No one has looked at my PR yet. Why?

@@ -1,12 +1,10 @@
# Overview
> [!WARNING]
> **As an Argo CD user, you probably don't want to be reading this section of the docs.**
>
> This part of the manual is aimed at helping people contribute to Argo CD, documentation, or to develop third-party applications that interact with Argo CD, e.g.
>
> * A chat bot
> * A Slack integration
!!! warning "As an Argo CD user, you probably don't want to be reading this section of the docs."
This part of the manual is aimed at helping people contribute to Argo CD, documentation, or to develop third-party applications that interact with Argo CD, e.g.
* A chat bot
* A Slack integration
## Preface
#### Understand the [Code Contribution Guide](code-contributions.md)
@@ -28,7 +26,7 @@ For backend and frontend contributions, that require a full building-testing-run
## Contributing to Argo CD Notifications documentation
This guide will help you get started quickly with contributing documentation changes, performing the minimum setup you'll need.
The notifications docs are located in [notifications-engine](https://github.com/argoproj/notifications-engine) Git repository and require 2 pull requests: one for the `notifications-engine` repo and one for the `argo-cd` repo.
The notificaions docs are located in [notifications-engine](https://github.com/argoproj/notifications-engine) Git repository and require 2 pull requests: one for the `notifications-engine` repo and one for the `argo-cd` repo.
For backend and frontend contributions, that require a full building-testing-running-locally cycle, please refer to [Contributing to Argo CD backend and frontend ](index.md#contributing-to-argo-cd-backend-and-frontend)
### Fork and clone Argo CD repository
@@ -102,4 +100,4 @@ Need help? Start with the [Contributors FAQ](faq/)
* [Config Management Plugins](../operator-manual/config-management-plugins/)
## Contributing to Argo Website
The Argo website is maintained in the [argo-site](https://github.com/argoproj/argo-site) repository.
The Argo website is maintained in the [argo-site](https://github.com/argoproj/argo-site) repository.

@@ -18,9 +18,11 @@ These are the upcoming releases dates:
| v2.13 | Monday, Sep. 16, 2024 | Monday, Nov. 4, 2024 | [Regina Voloshin](https://github.com/reggie-k) | [Pavel Kostohrys](https://github.com/pasha-codefresh) | [checklist](https://github.com/argoproj/argo-cd/issues/19513) |
| v2.14 | Monday, Dec. 16, 2024 | Monday, Feb. 3, 2025 | [Ryan Umstead](https://github.com/rumstead) | [Pavel Kostohrys](https://github.com/pasha-codefresh) | [checklist](https://github.com/argoproj/argo-cd/issues/20869) |
| v3.0 | Monday, Mar. 17, 2025 | Tuesday, May 6, 2025 | [Regina Voloshin](https://github.com/reggie-k) | | [checklist](https://github.com/argoproj/argo-cd/issues/21735) |
| v3.1 | Monday, Jun. 16, 2025 | Monday, Aug. 4, 2025 | [Christian Hernandez](https://github.com/christianh814) | [Alexandre Gaudreault](https://github.com/agaudreault) | [checklist](#) |
| v3.2 | Monday, Sep. 15, 2025 | Monday, Nov. 3, 2025 | [Nitish Kumar](https://github.com/nitishfy) | | [checklist](#) |
| v3.3 | Monday, Dec. 15, 2025 | Monday, Feb. 2, 2026 | | |
| v3.1 | Monday, Jun. 16, 2025 | Monday, Aug. 4, 2025 | [Christian Hernandez](https://github.com/christianh814) | [Alexandre Gaudreault](https://github.com/agaudreault) | [checklist](https://github.com/argoproj/argo-cd/issues/23347) |
| v3.2 | Monday, Sep. 15, 2025 | Monday, Nov. 3, 2025 | [Nitish Kumar](https://github.com/nitishfy) | [Michael Crenshaw](https://github.com/crenshaw-dev) | [checklist](https://github.com/argoproj/argo-cd/issues/24539) |
| v3.3 | Monday, Dec. 15, 2025 | Monday, Feb. 2, 2026 | [Peter Jiang](https://github.com/pjiang-dev) | [Regina Voloshin](https://github.com/reggie-k) | [checklist](https://github.com/argoproj/argo-cd/issues/25211) |
| v3.4 | Monday, Mar. 16, 2026 | Monday, May. 4, 2026 | | |
| v3.5 | Monday, Jun. 15, 2026 | Monday, Aug. 3, 2026 | | |
Actual release dates might differ from the plan by a few days.

Some files were not shown because too many files have changed in this diff.