Compare commits

129 Commits

Author SHA1 Message Date
github-actions[bot]
2bb617d02a Bump version to 2.14.16 on release-2.14 branch (#24397)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-09-04 11:53:37 -04:00
Michael Crenshaw
72e2387795 fix(security): repository.GetDetailedProject exposes repo secrets (#24389)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
Signed-off-by: Dan Garfield <dan.garfield@octopus.com>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
Co-authored-by: Dan Garfield <dan.garfield@octopus.com>
2025-09-04 11:33:04 -04:00
Adrian Berger
f8eba3e6e9 fix(cherry-pick-2.14): custom resource health for flux helm repository of type oci (#24339)
Signed-off-by: Adrian Berger <adrian.berger@bedag.ch>
2025-09-02 15:19:27 -04:00
Nitish Kumar
510b77546e chore(cherry-pick-2.14): replace bitnami images (#24289)
Signed-off-by: nitishfy <justnitish06@gmail.com>
2025-08-27 14:37:51 +02:00
gcp-cherry-pick-bot[bot]
d77ecdf113 chore: adds all components in goreman run script (cherry-pick #23777) (#23790)
Signed-off-by: Patroklos Papapetrou <ppapapetrou76@gmail.com>
Co-authored-by: Papapetrou Patroklos <1743100+ppapapetrou76@users.noreply.github.com>
2025-08-24 21:37:14 -04:00
Ville Vesilehto
f9bb3b608e chore: update Go to 1.24.6 (release-2.14) (#24091)
Signed-off-by: Ville Vesilehto <ville@vesilehto.fi>
2025-08-11 11:24:15 +02:00
rumstead
5d0a4f0dd3 fix(appset): When Appset is deleted, the controller should reconcile applicationset #23723 (cherry-pick #23823) (#23832)
Signed-off-by: rumstead <37445536+rumstead@users.noreply.github.com>
Co-authored-by: sangeer <86688098+sangdammad@users.noreply.github.com>
2025-07-17 11:56:24 -04:00
Michael Crenshaw
8a3b2fdd2b fix(server): infer resource status health for apps-in-any-ns (#22944) (#23707)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-07-09 11:50:18 -04:00
gcp-cherry-pick-bot[bot]
ddb6073e52 fix: improves the ui message when an operation is terminated due to controller sync timeout (cherry-pick #23657) (#23673)
Signed-off-by: Patroklos Papapetrou <ppapapetrou76@gmail.com>
Co-authored-by: Papapetrou Patroklos <1743100+ppapapetrou76@users.noreply.github.com>
2025-07-07 10:49:51 +03:00
gcp-cherry-pick-bot[bot]
d95b710055 fix(controller): get commit server url from env (cherry-pick #23536) (#23543)
Signed-off-by: Alexej Disterhoft <alexej.disterhoft@redcare-pharmacy.com>
Co-authored-by: Alexej Disterhoft <alexej@disterhoft.de>
2025-06-24 14:41:11 -04:00
github-actions[bot]
6c7d6940cd Bump version to 2.14.15 on release-2.14 branch (#23427)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: rumstead <37445536+rumstead@users.noreply.github.com>
2025-06-16 13:59:54 -04:00
rumstead
ec5198949e fix(applicationset): requeue applicationset when application status changes (#23413)
Signed-off-by: rumstead <37445536+rumstead@users.noreply.github.com>
Co-authored-by: Dmitry Shmelev <avikez@gmail.com>
2025-06-16 11:46:59 -04:00
Alexandre Gaudreault
da2ef7db67 fix(sync): auto-sync loop when FailOnSharedResource (#23357)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2025-06-12 11:58:04 -04:00
github-actions[bot]
c6166fd531 Bump version to 2.14.14 on release-2.14 branch (#23299)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: blakepettersson <1227954+blakepettersson@users.noreply.github.com>
2025-06-09 11:37:55 -04:00
Ville Vesilehto
3c68b26d7a chore: upgrade Go from 1.23.4 to 1.24.4 (release-2.14) (#23294)
Signed-off-by: Ville Vesilehto <ville@vesilehto.fi>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-06-08 20:00:00 +02:00
Ville Vesilehto
5f890629a9 chore: upgrade mockery to v2.53.4 (release-2.14) (#23316)
Signed-off-by: Ville Vesilehto <ville@vesilehto.fi>
2025-06-08 07:58:46 -04:00
Ville Vesilehto
e24ee58f28 chore: upgrade golangci-lint to v2 (release-2.14) (#23305)
Signed-off-by: Ville Vesilehto <ville@vesilehto.fi>
2025-06-06 16:54:43 -04:00
gcp-cherry-pick-bot[bot]
14fa0e0d9f fix: parse project with applicationset resource (cherry-pick #23252) (#23268)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
Co-authored-by: Blake Pettersson <blake.pettersson@gmail.com>
2025-06-04 23:16:30 -10:00
Siddhesh Ghadi
2aceb1dc44 fix: update broken yarn.lock (#23212)
Signed-off-by: Siddhesh Ghadi <sghadi1203@gmail.com>
2025-05-30 10:41:41 -06:00
Soumya Ghosh Dastidar
a2361bf850 fix: add cooldown to prevent resetting autoheal exp backoff preemptively (cherry-pick #23057) (#23188)
Signed-off-by: Soumya Ghosh Dastidar <gdsoumya@gmail.com>
2025-05-28 22:40:38 +02:00
github-actions[bot]
5ad281ef56 Bump version to 2.14.13 on release-2.14 branch (#23183)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-05-28 08:45:02 -06:00
Michael Crenshaw
24d57220ca Merge commit from fork
Fix shadowed variable name

Signed-off-by: Ry0taK <49341894+Ry0taK@users.noreply.github.com>
Co-authored-by: Ry0taK <49341894+Ry0taK@users.noreply.github.com>
2025-05-28 08:20:48 -06:00
Peter Jiang
d213c305e4 chore: bump gitops-engine ssd fix (#23072)
Signed-off-by: Peter Jiang <peterjiang823@gmail.com>
2025-05-20 21:10:33 -04:00
github-actions[bot]
25e327bb9a Bump version to 2.14.12 on release-2.14 branch (#23064)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-05-20 11:29:01 -04:00
nathanlaceyraft
efe5d2956e chore(deps): resolve CVE GO-2025-3540, GO-2025-3503, GO-2025-3487 within 2.14.10 (#22709)
Signed-off-by: Nathan Lacey <nlacey@teamraft.com>
2025-05-20 10:57:12 -04:00
Alexandre Gaudreault
5bc6f4722b fix: infinite reconciliation loop when app is in error (#23047)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-05-20 10:48:47 -04:00
gcp-cherry-pick-bot[bot]
25235fbc2d fix(test): broken e2e test (cherry-pick #22975) (#23052)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-05-20 09:49:13 -04:00
gcp-cherry-pick-bot[bot]
3a9ab77537 fix(commit-server): apply image override (cherry-pick #22916) (#22918) 2025-05-09 21:54:33 -04:00
gcp-cherry-pick-bot[bot]
ced6a7822e fix(health): handle nil lastTransitionTime (#22897) (cherry-pick #22900) (#22909)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-05-09 10:46:25 -04:00
Blake Pettersson
fe93963970 fix: do not normalize resource tracking on live crds (#22722) - cherrypick 2.14 (#22746)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
Co-authored-by: Dan Garfield <dan@codefresh.io>
2025-05-07 16:35:29 -07:00
Mike Bryant
78e61ba71c fix: Only port-forward to ready pods (#10610) (cherry-pick #22794) (#22826)
Signed-off-by: Mike Bryant <mike.bryant@mettle.co.uk>
Co-authored-by: Dan Garfield <dan@codefresh.io>
2025-05-07 15:24:52 -07:00
gcp-cherry-pick-bot[bot]
f7ad2adf50 fix(ApplicationSet): Check strategy type to verify it's a progressive sync (cherry-pick #22563) (#22833)
Signed-off-by: Fernando Crespo Gravalos <fcrespo@fastly.com>
Signed-off-by: Fernando Crespo Grávalos <59588094+fcrespofastly@users.noreply.github.com>
Co-authored-by: Fernando Crespo Grávalos <59588094+fcrespofastly@users.noreply.github.com>
Co-authored-by: rumstead <37445536+rumstead@users.noreply.github.com>
2025-04-30 12:37:32 -07:00
Peter Jiang
b163de0784 fix: remove project from cache key for project scoped credentials (#22816)
Signed-off-by: Peter Jiang <peterjiang823@gmail.com>
2025-04-28 16:41:57 -04:00
github-actions[bot]
82831155c2 Bump version to 2.14.11 on release-2.14 branch (#22755)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-04-22 08:25:09 -07:00
gcp-cherry-pick-bot[bot]
91f54459d4 feat(hydrator): handle sourceHydrator fields from webhook (#19397) (cherry-pick #22485) (#22754)
Signed-off-by: daengdaengLee <gunho1020@gmail.com>
Signed-off-by: Alexy Mantha <alexy@mantha.dev>
Co-authored-by: Alexy Mantha <alexy.mantha@goto.com>
Co-authored-by: Kunho Lee <gunho1020@gmail.com>
2025-04-22 08:20:58 -07:00
gcp-cherry-pick-bot[bot]
0451723be1 fix(appset): generated app errors should use the default requeue (#21887) (cherry-pick #21936) (#22672)
Signed-off-by: rumstead <37445536+rumstead@users.noreply.github.com>
Co-authored-by: rumstead <37445536+rumstead@users.noreply.github.com>
Co-authored-by: Ishita Sequeira <46771830+ishitasequeira@users.noreply.github.com>
Co-authored-by: Blake Pettersson <blake.pettersson@gmail.com>
2025-04-18 11:48:08 -07:00
gcp-cherry-pick-bot[bot]
f6f7d29c11 fix(ui): avoid spurious error on hydration (#22506) (cherry-pick #22711) (#22714)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-04-17 16:40:39 -07:00
github-actions[bot]
5feb8c21f2 Bump version to 2.14.10 on release-2.14 branch (#22670)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: alexmt <426437+alexmt@users.noreply.github.com>
2025-04-14 17:38:21 -07:00
Hesham Sherif
4826fb0ab8 chore(deps): Update github.com/expr-lang/expr to v1.17.0 fixing CVE-2025-29786 (#22651)
Signed-off-by: heshamelsherif97 <heshamelsherif97@gmail.com>
2025-04-14 15:05:10 -07:00
gcp-cherry-pick-bot[bot]
3b308d66e2 fix: respect delete confirmation for argocd app deletion (cherry-pick #22657) (#22664)
Signed-off-by: nitishfy <justnitish06@gmail.com>
Co-authored-by: Nitish Kumar <justnitish06@gmail.com>
2025-04-14 08:36:01 -07:00
gcp-cherry-pick-bot[bot]
b31d700188 fix(cli): wrong variable to store --no-proxy value (cherry-pick #21226) (#22590)
Signed-off-by: Nathanael Liechti <technat@technat.ch>
Co-authored-by: Nathanael Liechti <technat@technat.ch>
2025-04-10 07:08:55 -07:00
gcp-cherry-pick-bot[bot]
be81419f27 fix: login return_url doesn't work with custom server paths (cherry-pick #21588) (#22594)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
Co-authored-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2025-04-07 13:14:27 -07:00
Aaron Hoffman
6b15a04509 fix: [cherry-pick] selfhealattemptscount needs to be reset at times (#22095, #20978) (#22583)
Signed-off-by: Michal Ryšavý <michal.rysavy@ext.csas.cz>
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
Co-authored-by: Michal Ryšavý <mrysavy@users.noreply.github.com>
Co-authored-by: Michal Ryšavý <michal.rysavy@ext.csas.cz>
Co-authored-by: Blake Pettersson <blake.pettersson@gmail.com>
2025-04-07 06:32:03 -07:00
github-actions[bot]
38985bdcd6 Bump version to 2.14.9 on release-2.14 branch (#22549)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: rumstead <37445536+rumstead@users.noreply.github.com>
2025-04-02 12:29:11 -07:00
Peter Jiang
c868711d03 chore(dep): bump gitops-engine 2.14 (#22520)
Signed-off-by: Peter Jiang <peterjiang823@gmail.com>
2025-03-29 10:44:07 -04:00
gcp-cherry-pick-bot[bot]
31a554568a fix: Check for semver constraint matching in application webhook handler (cherry-pick #21648) (#22508)
Signed-off-by: eadred <eadred77@googlemail.com>
Co-authored-by: Eadred <eadred77@googlemail.com>
2025-03-27 11:27:09 -04:00
github-actions[bot]
a7178be1c1 Bump version to 2.14.8 on release-2.14 branch (#22469)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-03-24 15:38:39 -04:00
Michael Crenshaw
9a9e62d392 fix(server): fully populate app destination before project checks (#22408) (#22426)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Signed-off-by: rumstead <37445536+rumstead@users.noreply.github.com>
Co-authored-by: rumstead <37445536+rumstead@users.noreply.github.com>
2025-03-24 11:20:07 -07:00
Michael Crenshaw
9f832cd099 chore(deps): bump github.com/golang-jwt/jwt to 4.5.2/5.2.2 (#22465)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-03-24 14:06:41 -04:00
Blake Pettersson
ec45e33800 fix(ui, rbac): project-roles (#21829) (2.14 backport) (#22461)
Signed-off-by: wyttime04 <vanessa80332@gmail.com>
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
Co-authored-by: wyttime04 <vanessa80332@gmail.com>
2025-03-24 10:47:26 -07:00
Atif Ali
872319e8e7 fix: handle annotated git tags correctly in repo server cache (#21771) (#22424)
Signed-off-by: Atif Ali <atali@redhat.com>
2025-03-21 09:49:35 -04:00
nmirasch
7acdaa96e0 fix: CVE-2025-26791 upgrading redoc dep to 2.4.0 to avoid DOMPurify b… (#21997)
Signed-off-by: nmirasch <neus.miras@gmail.com>
2025-03-21 07:33:19 -04:00
github-actions[bot]
d107d4e41a Bump version to 2.14.7 on release-2.14 branch (#22412)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-03-19 13:04:41 -04:00
Michael Crenshaw
39407827d3 chore(deps): bump gitops engine (#22405)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-03-19 10:51:18 -04:00
github-actions[bot]
fe2a6e91b6 Bump version to 2.14.6 on release-2.14 branch (#22393)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: rumstead <37445536+rumstead@users.noreply.github.com>
2025-03-18 06:23:50 -07:00
rumstead
38c03769af feat(server): make deep copies of objects returned by informers (#22173) (#22179) (#22340)
Signed-off-by: rumstead <37445536+rumstead@users.noreply.github.com>
2025-03-13 11:56:50 -07:00
Anand Francis Joseph
defd4be943 chore(deps): Update go-git from 5.12.0 to 5.13.2 to include several CVE fixes (#22313)
Signed-off-by: anandf <anjoseph@redhat.com>
2025-03-13 09:49:44 -04:00
github-actions[bot]
f463a945d5 Bump version to 2.14.5 on release-2.14 branch (#22292)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: ishitasequeira <46771830+ishitasequeira@users.noreply.github.com>
2025-03-10 23:11:20 -04:00
Anand Francis Joseph
ed242b9eee chore(deps): bump github.com/redis/go-redis/v9 from 9.7.0 to 9.7.1 (#21957) (#22255)
Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: anandf <anjoseph@redhat.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-09 19:39:46 +02:00
github-actions[bot]
3d901f2037 Bump version to 2.14.4 on release-2.14 branch (#22176)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-03-04 15:31:29 -05:00
Michael Crenshaw
2b1e829986 chore(deps): switch gitops-engine back to release-2.14 branch (#22163)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-03-04 15:23:48 -05:00
gcp-cherry-pick-bot[bot]
52231dbc09 fix(actions): don't run empty Lua scripts (#22084) (cherry-pick #22161) (#22172)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-03-04 15:23:32 -05:00
Michael Crenshaw
2eab10a3cb chore(deps): revert accidental upgrade of go.mod packages (#22162)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-03-04 14:08:08 -05:00
gcp-cherry-pick-bot[bot]
962d7a9ad9 fix(ci): use pinned Helm version for init-release (#22164) (cherry-pick #22165) (#22171)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-03-04 13:51:52 -05:00
gcp-cherry-pick-bot[bot]
54170a4fd8 fix: make codegen permissions (cherry-pick #21667) (#22145)
Signed-off-by: Brett C. Dudo <brett@dudo.io>
Signed-off-by: Brett Dudo <brett@dudo.io>
Co-authored-by: Brett Dudo <brett@dudo.io>
Co-authored-by: Ishita Sequeira <46771830+ishitasequeira@users.noreply.github.com>
2025-03-04 09:57:17 +05:30
gcp-cherry-pick-bot[bot]
71fd4e501d fix: Check placement exists before length check (#22060) (cherry-pick #22057) (#22089)
Signed-off-by: Dale Haiducek <19750917+dhaiducek@users.noreply.github.com>
Co-authored-by: Dale Haiducek <19750917+dhaiducek@users.noreply.github.com>
2025-02-28 11:41:20 -05:00
Nitish Kumar
cb1df5d35f fix: correct lookup for the kustomization file when applying patches (cherry-pick #22024) (#22086)
Signed-off-by: nitishfy <justnitish06@gmail.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-02-28 21:36:55 +05:30
github-actions[bot]
66db4b6876 Bump version to 2.14.3 on release-2.14 branch (#22070)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-02-27 15:53:50 -05:00
Michael Crenshaw
3adb83c1df fix(hydrator): refresh by annotation instead of work queue (#22016) (#22067)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-02-27 15:30:14 -05:00
Michael Crenshaw
63edc3eb9c fix: accidental v3 imports (#22068)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-02-27 15:29:47 -05:00
gcp-cherry-pick-bot[bot]
2dd70dede8 fix(hydrator): don't use manifest-generate-paths (#22039) (cherry-pick #22015) (#22061)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-02-27 13:01:13 -05:00
Michael Crenshaw
d79185a4fe fix(hydrator): don't get cluster or API versions for hydrator (#21985) (#22038)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-02-26 15:33:09 -05:00
gcp-cherry-pick-bot[bot]
8f925c6754 fix: fetch syncedRevision in UpdateRevisionForPaths (#21014) (cherry-pick #21015) (#22011)
Signed-off-by: toyamagu2021 <toyamagu2021@gmail.com>
Signed-off-by: toyamagu-2021 <toyamagu2021@gmail.com>
Co-authored-by: gussan <83329336+toyamagu-2021@users.noreply.github.com>
2025-02-26 20:12:02 +05:30
gcp-cherry-pick-bot[bot]
b5be1df890 docs: document source hydrator maturity (cherry-pick #21969) (#21970)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-02-24 10:41:39 -05:00
gcp-cherry-pick-bot[bot]
92a3c3d727 fix: correctly set compareWith when requesting app refresh with delay (fixes #18998) (cherry-pick #21298) (#21952)
Signed-off-by: Xiaonan Shen <s@sxn.dev>
Co-authored-by: Xiaonan Shen <s@sxn.dev>
Co-authored-by: 沈啸楠 <sxn@shenxiaonandeMacBook-Pro.local>
2025-02-23 22:51:30 +05:30
gcp-cherry-pick-bot[bot]
aaed35c6d4 fix(applicationset): ApplicationSets with rolling sync stuck in Pending (cherry-pick #20230) (#21948)
Signed-off-by: Fabián Sellés <fabian.selles@adevinta.com>
Co-authored-by: Fabián Sellés Rosa <1088313+Fsero@users.noreply.github.com>
Co-authored-by: Thibault Jamet <tjamet@users.noreply.github.com>
Co-authored-by: Carlos Rejano <carlosrejanoromeu@gmail.com>
2025-02-22 15:07:42 -05:00
Nitish Kumar
2b422d2c70 chore: add cherry pick for v2.14 (#21901)
Signed-off-by: nitishfy <justnitish06@gmail.com>
2025-02-18 11:34:01 -08:00
Andrii Korotkov
896a461ae6 fix: New kube applier for server side diff dry run with refactoring (#21488) (#21819)
Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>
2025-02-07 17:57:27 -05:00
github-actions[bot]
ad2724661b Bump version to 2.14.2 on release-2.14 branch (#21797)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: leoluz <44989+leoluz@users.noreply.github.com>
2025-02-05 18:39:56 -05:00
gcp-cherry-pick-bot[bot]
efd9c32e25 fix: Add proxy registry key by dest server + name (cherry-pick #21791) (#21794)
Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>
Co-authored-by: Leonardo Luz Almeida <leoluz@users.noreply.github.com>
2025-02-05 16:13:27 -05:00
github-actions[bot]
3345d05a43 Bump version to 2.14.1 on release-2.14 branch (#21758)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-02-03 16:21:20 -05:00
rumstead
4745e08d4f docs(2.14): adding basic upgrading docs for 2.14 (#21744) (#21752) 2025-02-03 21:22:47 +02:00
gcp-cherry-pick-bot[bot]
46f494592c fix(ui): Solve issue with navigating with dropdown from an application's page (cherry-pick #21737) (#21746)
Signed-off-by: Amit Oren <amit@coralogix.com>
Co-authored-by: Amit Oren <amit@coralogix.com>
2025-02-03 11:13:52 -05:00
github-actions[bot]
5964abd6af Bump version to 2.14.0-rc7 on release-2.14 branch (#21712)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: ishitasequeira <46771830+ishitasequeira@users.noreply.github.com>
2025-01-29 15:21:23 -05:00
Siddhesh Ghadi
d59c85c5eb Merge commit from fork
Signed-off-by: Siddhesh Ghadi <sghadi1203@gmail.com>
2025-01-29 13:41:18 -05:00
Alexandre Gaudreault
e4599e1a90 feat(rbac): add disable fine-grained inheritance flag (#20600) (#21553)
Signed-off-by: Matt Finkel <finkel.matt@gmail.com>
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Matt Finkel <finkel.matt@gmail.com>
2025-01-24 17:04:41 -05:00
Ishita Sequeira
67b2336cac chore(deps): fix bump golang.org/x/net from 0.32.0 to 0.34.0 - CVE-2024-45338 (#21628)
Signed-off-by: Ishita Sequeira <ishiseq29@gmail.com>
2025-01-22 11:08:15 -05:00
gcp-cherry-pick-bot[bot]
8a8fc37f3c fix: Policy/policy.open-cluster-management.io stuck in progressing status when no clusters match the policy (#21296) (cherry-pick #21297) (#21614)
Signed-off-by: Michele Baldessari <michele@acksyn.org>
Co-authored-by: Michele Baldessari <michele@acksyn.org>
2025-01-21 19:36:28 -05:00
github-actions[bot]
2ef67d3e5c Bump version to 2.14.0-rc6 on release-2.14 branch (#21611)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-01-21 13:38:57 -05:00
gcp-cherry-pick-bot[bot]
479b182552 fix(appset): reverted Gitlab SCM HasPath search and consider 404 errors as file not found (#16253) (cherry-pick #21597) (#21602)
Signed-off-by: Prune <prune@lecentre.net>
Co-authored-by: Prune Sebastien THOMAS <prune@lecentre.net>
2025-01-21 13:14:35 -05:00
gcp-cherry-pick-bot[bot]
bb8185e2ec docs: add mkdocs configuration stanza to .readthedocs.yaml (cherry-pick #21475) (#21608)
Signed-off-by: reggie-k <regina.voloshin@codefresh.io>
Co-authored-by: Regina Voloshin <regina.voloshin@codefresh.io>
2025-01-21 13:13:55 -05:00
gcp-cherry-pick-bot[bot]
70ea86523e fix: resolve the failing e2e appset tests for ksonnet applications (cherry-pick #21580) (#21604)
Signed-off-by: reggie-k <regina.voloshin@codefresh.io>
Co-authored-by: Regina Voloshin <regina.voloshin@codefresh.io>
2025-01-21 13:13:04 -05:00
gcp-cherry-pick-bot[bot]
35174dc196 fix(hydrator): UI nil checks (cherry-pick #21598) (#21601)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-01-21 11:56:46 -05:00
gcp-cherry-pick-bot[bot]
bab2c41e10 docs(hydrator): document signature verification limitation (cherry-pick #21504) (#21585)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-01-20 23:44:01 -05:00
Eadred
bd755104ed fix(appset): events not honouring configured namespaces (#21219) (#21241) (#21519)
* fix: 21219 Honour ARGOCD_APPLICATIONSET_CONTROLLER_NAMESPACES for all ApplicationSet events

Namespace filtering is applied to Update, Delete and Generic events.

Fixes https://github.com/argoproj/argo-cd/issues/21219



* fix: 21219 Add tests for ignoreNotAllowedNamespaces



* fix: 21219 Remove redundant package import



---------

Signed-off-by: eadred <eadred77@googlemail.com>
2025-01-17 10:59:37 -05:00
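
To make the fix above concrete: a minimal sketch, not the actual Argo CD code, of a controller-runtime predicate that applies one namespace filter to every ApplicationSet event type. The function name mirrors the helper mentioned in the commit body ("tests for ignoreNotAllowedNamespaces"); the map of allowed namespaces and the package name are assumptions here.

// A minimal sketch (not the actual Argo CD code): one namespace filter
// covering Create, Update, Delete and Generic events, so ApplicationSet
// events outside the configured namespaces never trigger reconciliation.
package appsetns

import (
	"sigs.k8s.io/controller-runtime/pkg/event"
	"sigs.k8s.io/controller-runtime/pkg/predicate"
)

// ignoreNotAllowedNamespaces returns a predicate that rejects events whose
// object lives outside the allowed set (the map is an assumption here).
func ignoreNotAllowedNamespaces(allowed map[string]bool) predicate.Predicate {
	return predicate.Funcs{
		CreateFunc:  func(e event.CreateEvent) bool { return allowed[e.Object.GetNamespace()] },
		UpdateFunc:  func(e event.UpdateEvent) bool { return allowed[e.ObjectNew.GetNamespace()] },
		DeleteFunc:  func(e event.DeleteEvent) bool { return allowed[e.Object.GetNamespace()] },
		GenericFunc: func(e event.GenericEvent) bool { return allowed[e.Object.GetNamespace()] },
	}
}

A predicate like this is registered once on the controller's watch, so Update, Delete and Generic events get the same treatment the bug report said only some event types were getting.
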
gcp-cherry-pick-bot[bot]
2bf5dc6ed1 Fix application url for custom base href (#21377) (#21516)
Signed-off-by: Amit Oren <amit@coralogix.com>
Co-authored-by: Amit Oren <amit@coralogix.com>
2025-01-16 00:46:38 -05:00
gcp-cherry-pick-bot[bot]
ebf754e3ab fix(appset): update gitlab SCM provider to search on parent folder (#16253) (#21491) (#21503)
* fix(appset): update gitlab SCM provider to search on parent folder

fix https://github.com/argoproj/argo-cd/issues/16253



* adding test-case that replicated the new Gitlab API behaviour



* add comments to the case



---------

Signed-off-by: Prune <prune@lecentre.net>
Co-authored-by: Prune Sebastien THOMAS <prune@lecentre.net>
2025-01-15 14:59:39 -05:00
github-actions[bot]
97704acded Bump version to 2.14.0-rc5 (#21424)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-01-08 14:35:06 -05:00
gcp-cherry-pick-bot[bot]
51471b3b8b fix(controller): rename cluster batch param and add to argocd-cmd-params-cm (#21402) (#21419)
* fix(controller): rename cluster batch param and add to argocd-cmd-params-cm



* parameterize deployment too



* consistency



---------

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-01-08 10:27:58 -05:00
gcp-cherry-pick-bot[bot]
c13c9c1be3 fix(ci): updating action-gh-release after upstream fix (#21407) (#21408)
Signed-off-by: rumstead <37445536+rumstead@users.noreply.github.com>
Co-authored-by: rumstead <37445536+rumstead@users.noreply.github.com>
2025-01-07 16:40:42 -05:00
github-actions[bot]
a4c1bffbea Bump version to 2.14.0-rc4 (#21349)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-01-03 10:05:22 -05:00
gcp-cherry-pick-bot[bot]
e2eb655e41 chore: Fix data race detection failures in application tests (#21271) (#21302)
* chore: Fix race detection failures in application tests



* Fix failing TestGetCachedAppState tests



---------

Signed-off-by: eadred <eadred77@googlemail.com>
Co-authored-by: Eadred <eadred77@googlemail.com>
2024-12-23 13:05:32 -05:00
gcp-cherry-pick-bot[bot]
0a26e0f465 fix: Change applicationset generate HTTP method to avoid route conflicts (#20758) (#21299)
* Change applicationset generate HTTP method to avoid route conflicts



* Update server/applicationset/applicationset.proto




* Codegen



---------

Signed-off-by: Amit Oren <amit@coralogix.com>
Signed-off-by: Amit Oren <github@amitoren.dev>
Co-authored-by: Amit Oren <amit@coralogix.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2024-12-23 13:04:41 -05:00
github-actions[bot]
90146498fe Bump version to 2.14.0-rc3 (#21246)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2024-12-18 13:33:48 -05:00
gcp-cherry-pick-bot[bot]
018014c4b0 chore: Graceful shutdown for API Server (#18642) (#21224) (#21229)
* fix: Graceful shutdown for the API server (#18642) (#20981)

* fix: Graceful shutdown for the API server (#18642)

Closes #18642

Implements a graceful shutdown of the API server. Without this, the Argo CD API server will eventually return 502 during a rolling update. With this change, the health check returns 503 while the server is terminating.





* Init server only once, but keep re-initializing listeners



* Check error for SetParamInSettingConfigMap as needed after fresh master



* Prevent a data race



* Remove unused variable, don't pass lock when not necessary



* Try overriding URL instead of additional URLs



* Use a more specific url



---------





* Use a custom signal for graceful restart



* Re-run tests



---------

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>
Co-authored-by: Andrii Korotkov <137232734+andrii-korotkov-verkada@users.noreply.github.com>
Co-authored-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-12-18 15:11:03 +02:00
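
To illustrate the mechanism described in the commit body: a minimal sketch of graceful shutdown with a draining health check, assuming a plain net/http server rather than Argo CD's actual gRPC/HTTP stack (the endpoint path, port, and timeout below are illustrative, not taken from the code).

// A minimal sketch: once SIGTERM arrives the health endpoint returns 503
// so the load balancer stops routing new traffic, then in-flight requests
// are drained instead of being dropped as 502s mid-rollout.
package main

import (
	"context"
	"net/http"
	"os"
	"os/signal"
	"sync/atomic"
	"syscall"
	"time"
)

func main() {
	var terminating atomic.Bool

	mux := http.NewServeMux()
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, _ *http.Request) {
		if terminating.Load() {
			http.Error(w, "terminating", http.StatusServiceUnavailable) // 503 during shutdown
			return
		}
		w.WriteHeader(http.StatusOK)
	})

	srv := &http.Server{Addr: ":8080", Handler: mux}
	go func() { _ = srv.ListenAndServe() }()

	// Block until the pod is asked to stop, then drain gracefully.
	stop := make(chan os.Signal, 1)
	signal.Notify(stop, syscall.SIGTERM)
	<-stop
	terminating.Store(true)

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	_ = srv.Shutdown(ctx) // waits for in-flight requests instead of dropping them
}
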
github-actions[bot]
a89d01266b Bump version to 2.14.0-rc2 (#21223)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: pasha-codefresh <39732895+pasha-codefresh@users.noreply.github.com>
2024-12-17 19:49:32 +02:00
gcp-cherry-pick-bot[bot]
684ee0bceb Revert "fix: Graceful shutdown for the API server (#18642) (#20981)" (#21221) (#21222) 2024-12-17 18:57:11 +02:00
github-actions[bot]
2ac03b5152 Bump version to 2.14.0-rc1 (#21218)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: pasha-codefresh <39732895+pasha-codefresh@users.noreply.github.com>
2024-12-17 17:08:19 +02:00
Michael Crenshaw
9203dd16af chore(server): simplify project validation logic (#21191)
* chore(server): simplify project validation logic

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* improve tests

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

---------

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-12-17 13:01:06 +05:30
Michael Crenshaw
0de5f60cdc chore(appset): reduce dupe code w/ DB (#21192)
* chore(appset): reduce dupe code w/ DB

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* fix imports

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

---------

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-12-17 12:57:37 +05:30
AvivGuiser
1a69663a70 docs: add link to sprig website in the template docs site (#21184)
* add link to sprig website in the template docs site

Signed-off-by: AvivGuiser <avivguiser@gmail.com>

* Update docs/operator-manual/notifications/templates.md

Co-authored-by: Nitish Kumar <justnitish06@gmail.com>
Signed-off-by: AvivGuiser <avivguiser@gmail.com>

---------

Signed-off-by: AvivGuiser <avivguiser@gmail.com>
Co-authored-by: Nitish Kumar <justnitish06@gmail.com>
2024-12-16 16:34:09 -07:00
Michael Crenshaw
433b317c35 feat: source hydrator (#20345)
* feat(hydrator): add sourceHydrator types

Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Omer Azmon <omer_azmon@intuit.com>
Co-authored-by: daengdaengLee <gunho1020@gmail.com>
Co-authored-by: Juwon Hwang (Kevin) <juwon8891@gmail.com>
Co-authored-by: thisishwan2 <feel000617@gmail.com>
Co-authored-by: mirageoasis <kimhw0820@naver.com>
Co-authored-by: Robin Lieb <robin.j.lieb@gmail.com>
Co-authored-by: miiiinju1 <gms07073@ynu.ac.kr>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

fix(codegen): use kube_codegen.sh deepcopy and client gen correctly

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

deepcopy gen

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* feat(hydrator): add commit-server component

Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Omer Azmon <omer_azmon@intuit.com>
Co-authored-by: daengdaengLee <gunho1020@gmail.com>
Co-authored-by: Juwon Hwang (Kevin) <juwon8891@gmail.com>
Co-authored-by: thisishwan2 <feel000617@gmail.com>
Co-authored-by: mirageoasis <kimhw0820@naver.com>
Co-authored-by: Robin Lieb <robin.j.lieb@gmail.com>
Co-authored-by: miiiinju1 <gms07073@ynu.ac.kr>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

go mod tidy

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

one test file for both implementations

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

simplify

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

fix test for linux

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

fix git client mock

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

fix git client mock

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

address comments

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

unit tests

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

lint

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

fix image, fix health checks, fix merge issue

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

fix lint issues

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

remove code that doesn't work for GHE

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

changes from comments

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* feat(hydrator): enable controller

Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Omer Azmon <omer_azmon@intuit.com>
Co-authored-by: daengdaengLee <gunho1020@gmail.com>
Co-authored-by: Juwon Hwang (Kevin) <juwon8891@gmail.com>
Co-authored-by: thisishwan2 <feel000617@gmail.com>
Co-authored-by: mirageoasis <kimhw0820@naver.com>
Co-authored-by: Robin Lieb <robin.j.lieb@gmail.com>
Co-authored-by: miiiinju1 <gms07073@ynu.ac.kr>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

feat(hydrator): enable controller

Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Omer Azmon <omer_azmon@intuit.com>
Co-authored-by: daengdaengLee <gunho1020@gmail.com>
Co-authored-by: Juwon Hwang (Kevin) <juwon8891@gmail.com>
Co-authored-by: thisishwan2 <feel000617@gmail.com>
Co-authored-by: mirageoasis <kimhw0820@naver.com>
Co-authored-by: Robin Lieb <robin.j.lieb@gmail.com>
Co-authored-by: miiiinju1 <gms07073@ynu.ac.kr>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

allow opt-in

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

separation between app controller and hydrator

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

simplify diff

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

todos

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

simplify

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

add dry sha to logs

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

add app name to logs

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

more logging, no caching

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

fix cluster install

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

don't interrupt an ongoing hydrate operation

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

revert hydrate loop fix

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

handle project-scoped repo creds

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

codegen

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

improve docs

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

fixes from comments

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* set hydrator enabled key when using hydrator manifests

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

fix manifests

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

improve docs

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* feat(hydrator): enable controller

Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Omer Azmon <omer_azmon@intuit.com>
Co-authored-by: daengdaengLee <gunho1020@gmail.com>
Co-authored-by: Juwon Hwang (Kevin) <juwon8891@gmail.com>
Co-authored-by: thisishwan2 <feel000617@gmail.com>
Co-authored-by: mirageoasis <kimhw0820@naver.com>
Co-authored-by: Robin Lieb <robin.j.lieb@gmail.com>
Co-authored-by: miiiinju1 <gms07073@ynu.ac.kr>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

feat(hydrator): enable controller

Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Omer Azmon <omer_azmon@intuit.com>
Co-authored-by: daengdaengLee <gunho1020@gmail.com>
Co-authored-by: Juwon Hwang (Kevin) <juwon8891@gmail.com>
Co-authored-by: thisishwan2 <feel000617@gmail.com>
Co-authored-by: mirageoasis <kimhw0820@naver.com>
Co-authored-by: Robin Lieb <robin.j.lieb@gmail.com>
Co-authored-by: miiiinju1 <gms07073@ynu.ac.kr>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

allow opt-in

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

separation between app controller and hydrator

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

simplify diff

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

todos

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

simplify

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

add dry sha to logs

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

add app name to logs

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

more logging, no caching

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

fix cluster install

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

don't interrupt an ongoing hydrate operation

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

revert hydrate loop fix

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

handle project-scoped repo creds

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

codegen

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

improve docs

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

fixes from comments

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* feat(hydrator): add sourceHydrator types

Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Omer Azmon <omer_azmon@intuit.com>
Co-authored-by: daengdaengLee <gunho1020@gmail.com>
Co-authored-by: Juwon Hwang (Kevin) <juwon8891@gmail.com>
Co-authored-by: thisishwan2 <feel000617@gmail.com>
Co-authored-by: mirageoasis <kimhw0820@naver.com>
Co-authored-by: Robin Lieb <robin.j.lieb@gmail.com>
Co-authored-by: miiiinju1 <gms07073@ynu.ac.kr>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

fix(codegen): use kube_codegen.sh deepcopy and client gen correctly

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

deepcopy gen

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* feat(hydrator): enable controller

Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Omer Azmon <omer_azmon@intuit.com>
Co-authored-by: daengdaengLee <gunho1020@gmail.com>
Co-authored-by: Juwon Hwang (Kevin) <juwon8891@gmail.com>
Co-authored-by: thisishwan2 <feel000617@gmail.com>
Co-authored-by: mirageoasis <kimhw0820@naver.com>
Co-authored-by: Robin Lieb <robin.j.lieb@gmail.com>
Co-authored-by: miiiinju1 <gms07073@ynu.ac.kr>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

feat(hydrator): enable controller

Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Omer Azmon <omer_azmon@intuit.com>
Co-authored-by: daengdaengLee <gunho1020@gmail.com>
Co-authored-by: Juwon Hwang (Kevin) <juwon8891@gmail.com>
Co-authored-by: thisishwan2 <feel000617@gmail.com>
Co-authored-by: mirageoasis <kimhw0820@naver.com>
Co-authored-by: Robin Lieb <robin.j.lieb@gmail.com>
Co-authored-by: miiiinju1 <gms07073@ynu.ac.kr>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

allow opt-in

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

separation between app controller and hydrator

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

simplify diff

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

todos

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

simplify

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

add dry sha to logs

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

add app name to logs

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

more logging, no caching

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

fix cluster install

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

don't interrupt an ongoing hydrate operation

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

revert hydrate loop fix

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

handle project-scoped repo creds

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

codegen

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* feat(hydrator): write credentials handling + UI

Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Omer Azmon <omer_azmon@intuit.com>
Co-authored-by: daengdaengLee <gunho1020@gmail.com>
Co-authored-by: Juwon Hwang (Kevin) <juwon8891@gmail.com>
Co-authored-by: thisishwan2 <feel000617@gmail.com>
Co-authored-by: mirageoasis <kimhw0820@naver.com>
Co-authored-by: Robin Lieb <robin.j.lieb@gmail.com>
Co-authored-by: miiiinju1 <gms07073@ynu.ac.kr>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

feat(hydrator): enable controller

Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Omer Azmon <omer_azmon@intuit.com>
Co-authored-by: daengdaengLee <gunho1020@gmail.com>
Co-authored-by: Juwon Hwang (Kevin) <juwon8891@gmail.com>
Co-authored-by: thisishwan2 <feel000617@gmail.com>
Co-authored-by: mirageoasis <kimhw0820@naver.com>
Co-authored-by: Robin Lieb <robin.j.lieb@gmail.com>
Co-authored-by: miiiinju1 <gms07073@ynu.ac.kr>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

allow opt-in

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

separation between app controller and hydrator

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

simplify diff

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

todos

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

simplify

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

add dry sha to logs

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

add app name to logs

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

more logging, no caching

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

fix cluster install

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

don't interrupt an ongoing hydrate operation

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

revert hydrate loop fix

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

feat(hydrator): write credentials handling + UI

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

WIP: add new APIs for write creds

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

write api and template api

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

fix time function

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

fix lint issues

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

don't enrich with read creds

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

revert tls change

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

don't disable buttons in UI

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

ask repo server for specific revision

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

fixes

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

lint ui

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

remove unnecessary change

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

fix test and lint

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

lint

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

enable hydrator for e2e tests

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* free disk space for e2e tests

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

don't free disk space

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* free disk space for e2e tests

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* remove comment that breaks auth

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* try removing extra function

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* cleanup from comments

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* fix test

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

---------

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-12-16 16:59:09 -05:00
Mykola Pelekh
dc3f40c31e fix: avoid resources lock contention (#8172) (#20329)
* fix: avoid resources lock contention

Signed-off-by: Mykola Pelekh <mpelekh@demonware.net>

* feat: allow enabling batch events processing

Signed-off-by: Mykola Pelekh <mpelekh@demonware.net>

* fix: update ParseDurationFromEnv to handle duration in ms

Signed-off-by: Mykola Pelekh <mpelekh@demonware.net>

* feat: make eventProcessingInterval option configurable (default is 0.1s)

Signed-off-by: Mykola Pelekh <mpelekh@demonware.net>

* use upstream gitops-engine

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

---------

Signed-off-by: Mykola Pelekh <mpelekh@demonware.net>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-12-16 12:18:11 -05:00
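
To sketch the batching idea behind this lock-contention fix: instead of taking the resources lock once per watch event, events are drained into a batch and handled together on a configurable interval (0.1s by default, per the commit message). The real change lives in gitops-engine; the clusterEvent type and handler below are hypothetical.

// A minimal sketch of batched event processing under the assumptions above.
package eventbatch

import "time"

type clusterEvent struct{ key string }

func processBatches(events <-chan clusterEvent, interval time.Duration, handle func([]clusterEvent)) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	var pending []clusterEvent
	for {
		select {
		case e, ok := <-events:
			if !ok { // channel closed: flush what's left and stop
				if len(pending) > 0 {
					handle(pending)
				}
				return
			}
			pending = append(pending, e)
		case <-ticker.C:
			if len(pending) > 0 {
				handle(pending) // one lock acquisition for the whole batch
				pending = nil
			}
		}
	}
}
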
Suraj yadav
c090f849b0 pruned-icon-changed-to-trash (#21088)
Signed-off-by: Surajyadav <harrypotter1108@gmail.com>
2024-12-16 16:59:31 +05:30
Suraj yadav
a94a07ecd6 feat(ui): Added title label for filters (#21149)
* added-filter-title

Signed-off-by: Surajyadav <harrypotter1108@gmail.com>

* text-color

Signed-off-by: Surajyadav <harrypotter1108@gmail.com>

---------

Signed-off-by: Surajyadav <harrypotter1108@gmail.com>
2024-12-16 16:55:43 +05:30
dependabot[bot]
065700c5e1 chore(deps): bump nanoid from 3.3.7 to 3.3.8 in /ui (#21131)
Bumps [nanoid](https://github.com/ai/nanoid) from 3.3.7 to 3.3.8.
- [Release notes](https://github.com/ai/nanoid/releases)
- [Changelog](https://github.com/ai/nanoid/blob/main/CHANGELOG.md)
- [Commits](https://github.com/ai/nanoid/compare/3.3.7...3.3.8)

---
updated-dependencies:
- dependency-name: nanoid
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-16 15:44:49 +05:30
dependabot[bot]
8d4ae26686 chore(deps): bump library/busybox in /test/e2e/multiarch-container (#21145)
Bumps library/busybox from `768e5c6` to `2919d01`.

---
updated-dependencies:
- dependency-name: library/busybox
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-16 15:44:15 +05:30
GuySaar8
8a9de6a8d3 fix(ui): ArgoCD history tab shows latest values in all recent releases (#13006) (#21161)
* fix(ISSUE-13006): ArgoCD history tab shows latest values in all recent releases

Signed-off-by: Guy Saar <guysaar8@gmail.com>

* chore: added org to USER.md

Signed-off-by: Guy Saar <guysaar8@gmail.com>

chore: added org to USER.md

Signed-off-by: Guy Saar <guysaar8@gmail.com>

* chore: update USER.md based on PR review

Signed-off-by: Guy Saar <guysaar8@gmail.com>

chore: added newline to USER.md

Signed-off-by: Guy Saar <guysaar8@gmail.com>

---------

Signed-off-by: Guy Saar <guysaar8@gmail.com>
2024-12-16 15:43:34 +05:30
Yusuke Abe
4d17bf3d8b docs: update sync-wave documentation (#21155)
Signed-off-by: chansuke <moonset20@gmail.com>
2024-12-16 15:41:38 +05:30
dependabot[bot]
75b0b3c8ee chore(deps): bump go.opentelemetry.io/otel/sdk from 1.32.0 to 1.33.0 (#21165)
Bumps [go.opentelemetry.io/otel/sdk](https://github.com/open-telemetry/opentelemetry-go) from 1.32.0 to 1.33.0.
- [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/v1.32.0...v1.33.0)

---
updated-dependencies:
- dependency-name: go.opentelemetry.io/otel/sdk
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-16 15:37:55 +05:30
OpenGuidou
bce16e9daf fix(appset): Fix appset generate in --core mode for cluster gen (#21170)
Signed-off-by: OpenGuidou <guillaume.doussin@gmail.com>
2024-12-16 14:52:44 +05:30
Michael Crenshaw
e878ad5f31 chore: remove unused defaults from image workflow (#21183)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-12-16 14:51:00 +05:30
dependabot[bot]
19eaeb9aca chore(deps): bump github.com/Azure/kubelogin from 0.1.5 to 0.1.6 (#21193)
Bumps [github.com/Azure/kubelogin](https://github.com/Azure/kubelogin) from 0.1.5 to 0.1.6.
- [Release notes](https://github.com/Azure/kubelogin/releases)
- [Changelog](https://github.com/Azure/kubelogin/blob/main/CHANGELOG.md)
- [Commits](https://github.com/Azure/kubelogin/compare/v0.1.5...v0.1.6)

---
updated-dependencies:
- dependency-name: github.com/Azure/kubelogin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-16 14:50:09 +05:30
Michael Crenshaw
5cdb1a0a15 chore: use new fake k8s client constructor (#21186)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-12-16 14:49:30 +05:30
Michael Crenshaw
4471603de2 fix(api): send to closed channel in mergeLogStreams (#7006) (#21178)
* fix(api): send to closed channel in mergeLogStreams (#7006)

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* more intense test

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* even more intense

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* remove unnecessary comment

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* fix the race condition

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

---------

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-12-15 19:44:34 -05:00
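
To show the class of bug this fixes: mergeLogStreams panics if the merged channel is closed while a sender is still running. A minimal sketch of the safe pattern, assuming plain string streams rather than Argo CD's timestamped log entries: close the output only after every forwarding goroutine has exited.

// A minimal sketch of merging two streams without the
// "send on closed channel" panic.
package logmerge

import "sync"

func merge(a, b <-chan string) <-chan string {
	out := make(chan string)
	var wg sync.WaitGroup
	forward := func(in <-chan string) {
		defer wg.Done()
		for entry := range in {
			out <- entry
		}
	}
	wg.Add(2)
	go forward(a)
	go forward(b)
	go func() {
		wg.Wait() // every sender has stopped; now closing is safe
		close(out)
	}()
	return out
}
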
Clint Chester
99efafb55a feat: Confluent Connector Resource Health Checker - #17695 (#17697)
* Adding Synergy as an ArgoCD user

Signed-off-by: GitHub <noreply@github.com>

* Health checking Kafka Connector resources

Signed-off-by: Clint Chester <clint.chester@synergy.net.au>

* Includes Kafka Connect Task Failures

Signed-off-by: Clint Chester <clint.chester@synergy.net.au>

---------

Signed-off-by: GitHub <noreply@github.com>
Signed-off-by: Clint Chester <clint.chester@synergy.net.au>
2024-12-15 15:58:30 -05:00
1102
fdf539dc6a feat: add health check for ClusterResourceSet (#20746)
Signed-off-by: nueavv <nuguni@kakao.com>
2024-12-15 15:56:34 -05:00
github-actions[bot]
22fe65b4eb [Bot] docs: Update Snyk reports (#21180)
Signed-off-by: CI <ci@argoproj.com>
Co-authored-by: CI <ci@argoproj.com>
2024-12-15 20:49:57 +00:00
Michael Crenshaw
b60d28c71a docs(proposal): manifest hydrator (#17755)
* docs(proposal): manifest hydrator

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* whitespace

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* whitespace

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* remove old references to drySources as an array

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* rename fields

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* opinions

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* document limitations

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* updates

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* updates

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* multi-source is nondeterministic

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* Update docs/proposals/manifest-hydrator/commit-server/README.md

Co-authored-by: joe miller <joeym@joeym.net>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

---------

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: joe miller <joeym@joeym.net>
2024-12-15 15:45:10 -05:00
497 changed files with 100038 additions and 4237 deletions

@@ -14,7 +14,7 @@ on:
 env:
 # Golang version to use across CI steps
 # renovate: datasource=golang-version packageName=golang
-GOLANG_VERSION: '1.23.3'
+GOLANG_VERSION: '1.24.6'
 concurrency:
 group: ${{ github.workflow }}-${{ github.ref }}
@@ -109,10 +109,10 @@ jobs:
 with:
 go-version: ${{ env.GOLANG_VERSION }}
 - name: Run golangci-lint
-uses: golangci/golangci-lint-action@971e284b6050e8a5849b72094c50ab08da042db8 # v6.1.1
+uses: golangci/golangci-lint-action@4afd733a84b1f43292c63897423277bb7f4313a9 # v8.0.0
 with:
 # renovate: datasource=go packageName=github.com/golangci/golangci-lint versioning=regex:^v(?<major>\d+)\.(?<minor>\d+)\.(?<patch>\d+)?$
-version: v1.62.2
+version: v2.1.6
 args: --verbose
 test-go:
@@ -393,7 +393,7 @@
 env:
 GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
 SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
-uses: SonarSource/sonarqube-scan-action@1b442ee39ac3fa7c2acdd410208dcb2bcfaae6c4 # v4.1.0
+uses: SonarSource/sonarqube-scan-action@bfd4e558cda28cda6b5defafb9232d191be8c203 # v4.2.1
 if: env.sonar_secret != ''
 test-e2e:
 name: Run end-to-end tests
@@ -429,6 +429,13 @@
 GITHUB_TOKEN: ${{ secrets.E2E_TEST_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
 GITLAB_TOKEN: ${{ secrets.E2E_TEST_GITLAB_TOKEN }}
 steps:
+- name: Free Disk Space (Ubuntu)
+uses: jlumbroso/free-disk-space@54081f138730dfa15788a46383842cd2f914a1be
+with:
+large-packages: false
+docker-images: false
+swap-storage: false
+tool-cache: false
 - name: Checkout code
 uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
 - name: Setup Golang
@@ -542,4 +549,4 @@
 exit 0
 else
 exit 1
-fi
\ No newline at end of file
+fi

View File

@@ -17,11 +17,9 @@ on:
platforms:
required: true
type: string
default: linux/amd64
push:
required: true
type: boolean
default: false
target:
required: false
type: string

View File

@@ -53,7 +53,7 @@ jobs:
with:
# Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
# renovate: datasource=golang-version packageName=golang
go-version: 1.23.3
go-version: 1.24.6
platforms: ${{ needs.set-vars.outputs.platforms }}
push: false
@@ -70,7 +70,7 @@ jobs:
ghcr_image_name: ghcr.io/argoproj/argo-cd/argocd:${{ needs.set-vars.outputs.image-tag }}
# Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
# renovate: datasource=golang-version packageName=golang
go-version: 1.23.3
go-version: 1.24.6
platforms: ${{ needs.set-vars.outputs.platforms }}
push: true
secrets:

View File

@@ -11,7 +11,7 @@ permissions: {}
env:
# renovate: datasource=golang-version packageName=golang
GOLANG_VERSION: '1.23.3' # Note: go-version must also be set in job argocd-image.with.go-version
GOLANG_VERSION: '1.24.6' # Note: go-version must also be set in job argocd-image.with.go-version
jobs:
argocd-image:
@@ -25,7 +25,7 @@ jobs:
quay_image_name: quay.io/argoproj/argocd:${{ github.ref_name }}
# Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
# renovate: datasource=golang-version packageName=golang
go-version: 1.23.3
go-version: 1.24.6
platforms: linux/amd64,linux/arm64,linux/s390x,linux/ppc64le
push: true
secrets:
@@ -195,7 +195,7 @@ jobs:
echo "hashes=$(sha256sum /tmp/sbom.tar.gz | base64 -w0)" >> "$GITHUB_OUTPUT"
- name: Upload SBOM
uses: softprops/action-gh-release@7b4da11513bf3f43f9999e90eabced41ab8bb048 # v2.2.0
uses: softprops/action-gh-release@c95fe1489396fe8a9eb87c0abf8aa5b2ef267fda # v2.2.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:

View File

@@ -1,20 +1,12 @@
issues:
exclude:
- SA5011
max-issues-per-linter: 0
max-same-issues: 0
exclude-rules:
- path: '(.+)_test\.go'
linters:
- unparam
version: "2"
run:
timeout: 50m
linters:
default: none
enable:
- errcheck
- errorlint
- gocritic
- gofumpt
- goimports
- gosimple
- govet
- ineffassign
- misspell
@@ -25,35 +17,85 @@ linters:
- unparam
- unused
- usestdlibvars
- whitespace
linters-settings:
gocritic:
disabled-checks:
- appendAssign
- assignOp # Keep it disabled for readability
- badCond
- commentFormatting
- exitAfterDefer
- ifElseChain
- mapKey
- singleCaseSwitch
- typeSwitchVar
goimports:
local-prefixes: github.com/argoproj/argo-cd/v2
perfsprint:
# Optimizes even if it requires an int or uint type cast.
int-conversion: true
# Optimizes into `err.Error()` even if it is only equivalent for non-nil errors.
err-error: false
# Optimizes `fmt.Errorf`.
errorf: false
# Optimizes `fmt.Sprintf` with only one argument.
sprintf1: true
# Optimizes into strings concatenation.
strconcat: false
testifylint:
enable-all: true
disable:
- go-require
run:
timeout: 50m
- whitespace
settings:
gocritic:
disabled-checks:
- appendAssign
- assignOp
- badCond
- commentFormatting
- exitAfterDefer
- ifElseChain
- mapKey
- singleCaseSwitch
- typeSwitchVar
perfsprint:
int-conversion: true
err-error: false
errorf: false
sprintf1: true
strconcat: false
staticcheck:
checks:
- "all"
- "-ST1001" # dot imports are discouraged
- "-ST1003" # poorly chosen identifier
- "-ST1005" # incorrectly formatted error string
- "-ST1011" # poorly chosen name for variable of type time.Duration
- "-ST1012" # poorly chosen name for an error variable
- "-ST1016" # use consistent method receiver names
- "-ST1017" # don't use Yoda conditions
- "-ST1019" # importing the same package multiple times
- "-ST1023" # redundant type in variable declaration
- "-QF1001" # apply De Morgans law
- "-QF1003" # convert if/else-if chain to tagged switch
- "-QF1006" # lift if+break into loop condition
- "-QF1007" # merge conditional assignment into variable declaration
- "-QF1008" # omit embedded fields from selector expression
- "-QF1009" # use time.Time.Equal instead of == operator
- "-QF1011" # omit redundant type from variable declaration
- "-QF1012" # use fmt.Fprintf(x, ...) instead of x.Write(fmt.Sprintf(...))
testifylint:
enable-all: true
disable:
- go-require
- equal-values
- empty
- len
- expected-actual
- formatter
exclusions:
generated: lax
presets:
- comments
- common-false-positives
- legacy
- std-error-handling
rules:
- linters:
- unparam
path: (.+)_test\.go
- path: (.+)\.go$
text: SA5011
paths:
- third_party$
- builtin$
- examples$
issues:
max-issues-per-linter: 0
max-same-issues: 0
formatters:
enable:
- gofumpt
- goimports
settings:
goimports:
local-prefixes:
- github.com/argoproj/argo-cd/v2
exclusions:
generated: lax
paths:
- third_party$
- builtin$
- examples$
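
Context for the hunk above: the v2 schema folds the old gosimple and stylecheck linters into staticcheck (which is why the ST* and QF* identifiers are now tuned under settings.staticcheck.checks), moves per-linter knobs under linters.settings, and gives formatters (gofumpt, goimports) their own top-level section. For a feel of what the carried-over perfsprint settings act on, a minimal hedged sketch — illustrative names only, not code from this repo:

package sketch

import (
    "fmt"
    "strconv"
)

func perfsprintExamples(id int32, name string, err error) {
    // sprintf1: true — fmt.Sprintf with only one argument is flagged;
    // the literal can be used directly.
    _ = fmt.Sprintf("static string") // suggested: "static string"

    // int-conversion: true — flagged even though an int cast is needed.
    _ = fmt.Sprint(id) // suggested: strconv.Itoa(int(id))
    _ = strconv.Itoa(int(id))

    // err-error: false — fmt.Sprint(err) is left alone, since
    // err.Error() is only equivalent for non-nil errors.
    _ = fmt.Sprint(err)

    // strconcat: false — no rewrite into "prefix-" + name.
    _ = fmt.Sprintf("prefix-%s", name)
}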

View File

@@ -74,3 +74,6 @@ packages:
github.com/argoproj/argo-cd/v2/pkg/apiclient/cluster:
interfaces:
ClusterServiceServer:
github.com/argoproj/argo-cd/v2/pkg/client/clientset/versioned/typed/application/v1alpha1:
interfaces:
AppProjectInterface:

View File

@@ -2,6 +2,7 @@ version: 2
formats: all
mkdocs:
fail_on_warning: false
configuration: mkdocs.yml
python:
install:
- requirements: docs/requirements.txt

View File

@@ -4,7 +4,7 @@ ARG BASE_IMAGE=docker.io/library/ubuntu:24.04@sha256:3f85b7caad41a95462cf5b787d8
# Initial stage which pulls and prepares build dependencies and CLI tooling we need for our final image
# Also used as the image in CI jobs so needs all dependencies
####################################################################################################
FROM docker.io/library/golang:1.23.3@sha256:d56c3e08fe5b27729ee3834854ae8f7015af48fd651cd25d1e3bcf3c19830174 AS builder
FROM docker.io/library/golang:1.24.6@sha256:2c89c41fb9efc3807029b59af69645867cfe978d2b877d475be0d72f6c6ce6f6 AS builder
RUN echo 'deb http://archive.debian.org/debian buster-backports main' >> /etc/apt/sources.list
@@ -101,7 +101,7 @@ RUN HOST_ARCH=$TARGETARCH NODE_ENV='production' NODE_ONLINE_ENV='online' NODE_OP
####################################################################################################
# Argo CD Build stage which performs the actual build of Argo CD binaries
####################################################################################################
FROM --platform=$BUILDPLATFORM docker.io/library/golang:1.23.3@sha256:d56c3e08fe5b27729ee3834854ae8f7015af48fd651cd25d1e3bcf3c19830174 AS argocd-build
FROM --platform=$BUILDPLATFORM docker.io/library/golang:1.24.6@sha256:2c89c41fb9efc3807029b59af69645867cfe978d2b877d475be0d72f6c6ce6f6 AS argocd-build
WORKDIR /go/src/github.com/argoproj/argo-cd

View File

@@ -490,6 +490,7 @@ start-e2e-local: mod-vendor-local dep-ui-local cli-local
ARGOCD_APPLICATIONSET_CONTROLLER_TOKENREF_STRICT_MODE=true \
ARGOCD_APPLICATIONSET_CONTROLLER_ALLOWED_SCM_PROVIDERS=http://127.0.0.1:8341,http://127.0.0.1:8342,http://127.0.0.1:8343,http://127.0.0.1:8344 \
ARGOCD_E2E_TEST=true \
ARGOCD_HYDRATOR_ENABLED=true \
goreman -f $(ARGOCD_PROCFILE) start ${ARGOCD_START}
ls -lrt /tmp/coverage

View File

@@ -1,5 +1,5 @@
controller: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/app-controller} HOSTNAME=testappcontroller-1 FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-application-controller $COMMAND --loglevel debug --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081} --otlp-address=${ARGOCD_OTLP_ADDRESS} --application-namespaces=${ARGOCD_APPLICATION_NAMESPACES:-''} --server-side-diff-enabled=${ARGOCD_APPLICATION_CONTROLLER_SERVER_SIDE_DIFF:-'false'}"
api-server: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/api-server} FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-server $COMMAND --loglevel debug --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --disable-auth=${ARGOCD_E2E_DISABLE_AUTH:-'true'} --insecure --dex-server http://localhost:${ARGOCD_E2E_DEX_PORT:-5556} --repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081} --port ${ARGOCD_E2E_APISERVER_PORT:-8080} --otlp-address=${ARGOCD_OTLP_ADDRESS} --application-namespaces=${ARGOCD_APPLICATION_NAMESPACES:-''}"
controller: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/app-controller} HOSTNAME=testappcontroller-1 FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-application-controller $COMMAND --loglevel debug --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081} --commit-server localhost:${ARGOCD_E2E_COMMITSERVER_PORT:-8086} --otlp-address=${ARGOCD_OTLP_ADDRESS} --application-namespaces=${ARGOCD_APPLICATION_NAMESPACES:-''} --server-side-diff-enabled=${ARGOCD_APPLICATION_CONTROLLER_SERVER_SIDE_DIFF:-'false'} --hydrator-enabled=${ARGOCD_HYDRATOR_ENABLED:='false'}"
api-server: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/api-server} FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-server $COMMAND --loglevel debug --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --disable-auth=${ARGOCD_E2E_DISABLE_AUTH:-'true'} --insecure --dex-server http://localhost:${ARGOCD_E2E_DEX_PORT:-5556} --repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081} --port ${ARGOCD_E2E_APISERVER_PORT:-8080} --otlp-address=${ARGOCD_OTLP_ADDRESS} --application-namespaces=${ARGOCD_APPLICATION_NAMESPACES:-''} --hydrator-enabled=${ARGOCD_HYDRATOR_ENABLED:='false'}"
dex: sh -c "ARGOCD_BINARY_NAME=argocd-dex go run github.com/argoproj/argo-cd/v2/cmd gendexcfg -o `pwd`/dist/dex.yaml && (test -f dist/dex.yaml || { echo 'Failed to generate dex configuration'; exit 1; }) && docker run --rm -p ${ARGOCD_E2E_DEX_PORT:-5556}:${ARGOCD_E2E_DEX_PORT:-5556} -v `pwd`/dist/dex.yaml:/dex.yaml ghcr.io/dexidp/dex:$(grep "image: ghcr.io/dexidp/dex" manifests/base/dex/argocd-dex-server-deployment.yaml | cut -d':' -f3) dex serve /dex.yaml"
redis: hack/start-redis-with-password.sh
repo-server: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/repo-server} FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_GNUPGHOME=${ARGOCD_GNUPGHOME:-/tmp/argocd-local/gpg/keys} ARGOCD_PLUGINSOCKFILEPATH=${ARGOCD_PLUGINSOCKFILEPATH:-./test/cmp} ARGOCD_GPG_DATA_PATH=${ARGOCD_GPG_DATA_PATH:-/tmp/argocd-local/gpg/source} ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-repo-server ARGOCD_GPG_ENABLED=${ARGOCD_GPG_ENABLED:-false} $COMMAND --loglevel debug --port ${ARGOCD_E2E_REPOSERVER_PORT:-8081} --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --otlp-address=${ARGOCD_OTLP_ADDRESS}"

View File

@@ -335,6 +335,7 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Swisscom](https://www.swisscom.ch)
1. [Swissquote](https://github.com/swissquote)
1. [Syncier](https://syncier.com/)
1. [Synergy](https://synergy.net.au)
1. [Syself](https://syself.com)
1. [TableCheck](https://tablecheck.com/)
1. [Tailor Brands](https://www.tailorbrands.com)

View File

@@ -1 +1 @@
2.14.0
2.14.16

View File

@@ -127,8 +127,8 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
}()
// Do not attempt to further reconcile the ApplicationSet if it is being deleted.
if applicationSetInfo.ObjectMeta.DeletionTimestamp != nil {
appsetName := applicationSetInfo.ObjectMeta.Name
if applicationSetInfo.DeletionTimestamp != nil {
appsetName := applicationSetInfo.Name
logCtx.Debugf("DeletionTimestamp is set on %s", appsetName)
deleteAllowed := utils.DefaultPolicy(applicationSetInfo.Spec.SyncPolicy, r.Policy, r.EnablePolicyOverride).AllowDelete()
if !deleteAllowed {
@@ -155,6 +155,7 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
// desiredApplications is the main list of all expected Applications from all generators in this appset.
desiredApplications, applicationSetReason, err := template.GenerateApplications(logCtx, applicationSetInfo, r.Generators, r.Renderer, r.Client)
if err != nil {
logCtx.Errorf("unable to generate applications: %v", err)
_ = r.setApplicationSetStatusCondition(ctx,
&applicationSetInfo,
argov1alpha1.ApplicationSetCondition{
@@ -164,7 +165,8 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
Status: argov1alpha1.ApplicationSetConditionStatusTrue,
}, parametersGenerated,
)
return ctrl.Result{RequeueAfter: ReconcileRequeueOnValidationError}, err
// In order for the controller SDK to respect RequeueAfter, the error must be nil
return ctrl.Result{RequeueAfter: ReconcileRequeueOnValidationError}, nil
}
parametersGenerated = true
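
Context for the return-value change above: controller-runtime ignores the Result (including RequeueAfter) whenever Reconcile returns a non-nil error, and instead requeues with its own exponential backoff. A minimal hedged sketch of the pattern — the reconciler and generate helper here are illustrative stand-ins, not the real types:

package sketch

import (
    "context"
    "time"

    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/log"
)

type reconciler struct{}

// generate is a hypothetical stand-in for the appset's generator step.
func (r *reconciler) generate(ctx context.Context, req ctrl.Request) error { return nil }

func (r *reconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    if err := r.generate(ctx, req); err != nil {
        // Log the error here, because returning it would make
        // controller-runtime discard RequeueAfter and apply backoff.
        log.FromContext(ctx).Error(err, "unable to generate applications")
        return ctrl.Result{RequeueAfter: 5 * time.Minute}, nil
    }
    return ctrl.Result{}, nil
}

The matching test change further down (require.NoError plus the ReconcileRequeueOnValidationError assertion) pins exactly this contract.
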
@@ -208,16 +210,16 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
appSyncMap := map[string]bool{}
if r.EnableProgressiveSyncs {
if applicationSetInfo.Spec.Strategy == nil && len(applicationSetInfo.Status.ApplicationStatus) > 0 {
// If appset used progressive sync but stopped, clean up the progressive sync application statuses
if !isRollingSyncStrategy(&applicationSetInfo) && len(applicationSetInfo.Status.ApplicationStatus) > 0 {
// If an appset was previously syncing with a `RollingSync` strategy but it has switched to the default strategy, clean up the progressive sync application statuses
logCtx.Infof("Removing %v unnecessary AppStatus entries from ApplicationSet %v", len(applicationSetInfo.Status.ApplicationStatus), applicationSetInfo.Name)
err := r.setAppSetApplicationStatus(ctx, logCtx, &applicationSetInfo, []argov1alpha1.ApplicationSetApplicationStatus{})
if err != nil {
return ctrl.Result{}, fmt.Errorf("failed to clear previous AppSet application statuses for %v: %w", applicationSetInfo.Name, err)
}
} else if applicationSetInfo.Spec.Strategy != nil {
// appset uses progressive sync
} else if isRollingSyncStrategy(&applicationSetInfo) {
// The appset uses progressive sync with `RollingSync` strategy
for _, app := range currentApplications {
appMap[app.Name] = app
}
@@ -312,7 +314,7 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
if applicationSetInfo.RefreshRequired() {
delete(applicationSetInfo.Annotations, common.AnnotationApplicationSetRefresh)
err := r.Client.Update(ctx, &applicationSetInfo)
err := r.Update(ctx, &applicationSetInfo)
if err != nil {
logCtx.Warnf("error occurred while updating ApplicationSet: %v", err)
_ = r.setApplicationSetStatusCondition(ctx,
@@ -487,7 +489,7 @@ func (r *ApplicationSetReconciler) validateGeneratedApplications(ctx context.Con
}
appProject := &argov1alpha1.AppProject{}
err := r.Client.Get(ctx, types.NamespacedName{Name: app.Spec.Project, Namespace: r.ArgoCDNamespace}, appProject)
err := r.Get(ctx, types.NamespacedName{Name: app.Spec.Project, Namespace: r.ArgoCDNamespace}, appProject)
if err != nil {
if apierr.IsNotFound(err) {
errorsByIndex[i] = fmt.Errorf("application references project %s which does not exist", app.Spec.Project)
@@ -525,11 +527,9 @@ func (r *ApplicationSetReconciler) getMinRequeueAfter(applicationSetInfo *argov1
}
func ignoreNotAllowedNamespaces(namespaces []string) predicate.Predicate {
return predicate.Funcs{
CreateFunc: func(e event.CreateEvent) bool {
return utils.IsNamespaceAllowed(namespaces, e.Object.GetNamespace())
},
}
return predicate.NewPredicateFuncs(func(object client.Object) bool {
return utils.IsNamespaceAllowed(namespaces, object.GetNamespace())
})
}
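
The predicate swap above is behavior-relevant: the old hand-rolled predicate.Funcs set only CreateFunc, and unset callbacks in Funcs default to true, so updates, deletes, and generic events bypassed the namespace filter. predicate.NewPredicateFuncs applies one filter to all four event types. A hedged sketch, with the filter as a stand-in:

package sketch

import (
    "sigs.k8s.io/controller-runtime/pkg/client"
    "sigs.k8s.io/controller-runtime/pkg/event"
    "sigs.k8s.io/controller-runtime/pkg/predicate"
)

// allowed is a stand-in for utils.IsNamespaceAllowed.
func allowed(obj client.Object) bool {
    return obj != nil && obj.GetNamespace() == "argocd"
}

func compare() (bool, bool) {
    // Old shape: only creates were filtered.
    old := predicate.Funcs{
        CreateFunc: func(e event.CreateEvent) bool { return allowed(e.Object) },
    }
    // New shape: the same filter runs on create, update, delete, and generic.
    filtered := predicate.NewPredicateFuncs(allowed)

    u := event.UpdateEvent{} // no object set
    return old.Update(u), // true: UpdateFunc unset, defaults to allow
        filtered.Update(u) // false: the filter runs against a nil ObjectNew
}

TestIgnoreNotAllowedNamespaces later in this diff exercises the same predicate across Create, Update, Delete, and Generic.
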
func appControllerIndexer(rawObj client.Object) []string {
@@ -553,12 +553,13 @@ func (r *ApplicationSetReconciler) SetupWithManager(mgr ctrl.Manager, enableProg
return fmt.Errorf("error setting up with manager: %w", err)
}
ownsHandler := getOwnsHandlerPredicates(enableProgressiveSyncs)
appOwnsHandler := getApplicationOwnsHandler(enableProgressiveSyncs)
appSetOwnsHandler := getApplicationSetOwnsHandler(enableProgressiveSyncs)
return ctrl.NewControllerManagedBy(mgr).WithOptions(controller.Options{
MaxConcurrentReconciles: maxConcurrentReconciliations,
}).For(&argov1alpha1.ApplicationSet{}).
Owns(&argov1alpha1.Application{}, builder.WithPredicates(ownsHandler)).
}).For(&argov1alpha1.ApplicationSet{}, builder.WithPredicates(appSetOwnsHandler)).
Owns(&argov1alpha1.Application{}, builder.WithPredicates(appOwnsHandler)).
WithEventFilter(ignoreNotAllowedNamespaces(r.ApplicationSetNamespaces)).
Watches(
&corev1.Secret{},
@@ -566,7 +567,6 @@ func (r *ApplicationSetReconciler) SetupWithManager(mgr ctrl.Manager, enableProg
Client: mgr.GetClient(),
Log: log.WithField("type", "createSecretEventHandler"),
}).
// TODO: also watch Applications and respond on changes if we own them.
Complete(r)
}
@@ -625,7 +625,7 @@ func (r *ApplicationSetReconciler) createOrUpdateInCluster(ctx context.Context,
preservedAnnotations = append(preservedAnnotations, defaultPreservedAnnotations...)
for _, key := range preservedAnnotations {
if state, exists := found.ObjectMeta.Annotations[key]; exists {
if state, exists := found.Annotations[key]; exists {
if generatedApp.Annotations == nil {
generatedApp.Annotations = map[string]string{}
}
@@ -634,7 +634,7 @@ func (r *ApplicationSetReconciler) createOrUpdateInCluster(ctx context.Context,
}
for _, key := range preservedLabels {
if state, exists := found.ObjectMeta.Labels[key]; exists {
if state, exists := found.Labels[key]; exists {
if generatedApp.Labels == nil {
generatedApp.Labels = map[string]string{}
}
@@ -644,7 +644,7 @@ func (r *ApplicationSetReconciler) createOrUpdateInCluster(ctx context.Context,
// Preserve post-delete finalizers:
// https://github.com/argoproj/argo-cd/issues/17181
for _, finalizer := range found.ObjectMeta.Finalizers {
for _, finalizer := range found.Finalizers {
if strings.HasPrefix(finalizer, argov1alpha1.PostDeleteFinalizerName) {
if generatedApp.Finalizers == nil {
generatedApp.Finalizers = []string{}
@@ -653,10 +653,10 @@ func (r *ApplicationSetReconciler) createOrUpdateInCluster(ctx context.Context,
}
}
found.ObjectMeta.Annotations = generatedApp.Annotations
found.Annotations = generatedApp.Annotations
found.ObjectMeta.Finalizers = generatedApp.Finalizers
found.ObjectMeta.Labels = generatedApp.Labels
found.Finalizers = generatedApp.Finalizers
found.Labels = generatedApp.Labels
return controllerutil.SetControllerReference(&applicationSet, found, r.Scheme)
})
@@ -710,7 +710,7 @@ func (r *ApplicationSetReconciler) createInCluster(ctx context.Context, logCtx *
func (r *ApplicationSetReconciler) getCurrentApplications(ctx context.Context, applicationSet argov1alpha1.ApplicationSet) ([]argov1alpha1.Application, error) {
var current argov1alpha1.ApplicationList
err := r.Client.List(ctx, &current, client.MatchingFields{".metadata.controller": applicationSet.Name}, client.InNamespace(applicationSet.Namespace))
err := r.List(ctx, &current, client.MatchingFields{".metadata.controller": applicationSet.Name}, client.InNamespace(applicationSet.Namespace))
if err != nil {
return nil, fmt.Errorf("error retrieving applications: %w", err)
}
@@ -721,9 +721,6 @@ func (r *ApplicationSetReconciler) getCurrentApplications(ctx context.Context, a
// deleteInCluster will delete Applications that are currently on the cluster, but not in appList.
// The function must be called after all generators have been called and applications have been generated
func (r *ApplicationSetReconciler) deleteInCluster(ctx context.Context, logCtx *log.Entry, applicationSet argov1alpha1.ApplicationSet, desiredApplications []argov1alpha1.Application) error {
// settingsMgr := settings.NewSettingsManager(context.TODO(), r.KubeClientset, applicationSet.Namespace)
// argoDB := db.NewDB(applicationSet.Namespace, settingsMgr, r.KubeClientset)
// clusterList, err := argoDB.ListClusters(ctx)
clusterList, err := utils.ListClusters(ctx, r.KubeClientset, r.ArgoCDNamespace)
if err != nil {
return fmt.Errorf("error listing clusters: %w", err)
@@ -758,7 +755,7 @@ func (r *ApplicationSetReconciler) deleteInCluster(ctx context.Context, logCtx *
continue
}
err = r.Client.Delete(ctx, &app)
err = r.Delete(ctx, &app)
if err != nil {
logCtx.WithError(err).Error("failed to delete Application")
if firstError != nil {
@@ -830,7 +827,7 @@ func (r *ApplicationSetReconciler) removeFinalizerOnInvalidDestination(ctx conte
if log.IsLevelEnabled(log.DebugLevel) {
utils.LogPatch(appLog, patch, updated)
}
if err := r.Client.Patch(ctx, updated, patch); err != nil {
if err := r.Patch(ctx, updated, patch); err != nil {
return fmt.Errorf("error updating finalizers: %w", err)
}
// Application must have updated list of finalizers
@@ -852,7 +849,7 @@ func (r *ApplicationSetReconciler) removeOwnerReferencesOnDeleteAppSet(ctx conte
for _, app := range applications {
app.SetOwnerReferences([]metav1.OwnerReference{})
err := r.Client.Update(ctx, &app)
err := r.Update(ctx, &app)
if err != nil {
return fmt.Errorf("error updating application: %w", err)
}
@@ -1008,8 +1005,14 @@ func appSyncEnabledForNextStep(appset *argov1alpha1.ApplicationSet, app argov1al
return true
}
func isRollingSyncStrategy(appset *argov1alpha1.ApplicationSet) bool {
// It's only RollingSync if the type specifically sets it
return appset.Spec.Strategy != nil && appset.Spec.Strategy.Type == "RollingSync" && appset.Spec.Strategy.RollingSync != nil
}
func progressiveSyncsRollingSyncStrategyEnabled(appset *argov1alpha1.ApplicationSet) bool {
return appset.Spec.Strategy != nil && appset.Spec.Strategy.RollingSync != nil && appset.Spec.Strategy.Type == "RollingSync" && len(appset.Spec.Strategy.RollingSync.Steps) > 0
// ProgressiveSync is enabled if the strategy is set to `RollingSync` and the steps slice is not empty
return isRollingSyncStrategy(appset) && len(appset.Spec.Strategy.RollingSync.Steps) > 0
}
func isApplicationHealthy(app argov1alpha1.Application) bool {
@@ -1062,19 +1065,20 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx con
Message: "No Application status found, defaulting status to Waiting.",
Status: "Waiting",
Step: strconv.Itoa(getAppStep(app.Name, appStepMap)),
TargetRevisions: app.Status.GetRevisions(),
}
} else {
// we have an existing AppStatus
currentAppStatus = applicationSet.Status.ApplicationStatus[idx]
// upgrade any existing AppStatus that might have been set by an older argo-cd version
// note: currentAppStatus.TargetRevisions may be set to empty list earlier during migrations,
// to prevent other usages of r.Client.Status().Update from failing before reaching here.
if len(currentAppStatus.TargetRevisions) == 0 {
currentAppStatus.TargetRevisions = app.Status.GetRevisions()
if !reflect.DeepEqual(currentAppStatus.TargetRevisions, app.Status.GetRevisions()) {
currentAppStatus.Message = "Application has pending changes, setting status to Waiting."
}
}
if !reflect.DeepEqual(currentAppStatus.TargetRevisions, app.Status.GetRevisions()) {
currentAppStatus.TargetRevisions = app.Status.GetRevisions()
currentAppStatus.Status = "Waiting"
currentAppStatus.LastTransitionTime = &now
currentAppStatus.Step = strconv.Itoa(getAppStep(currentAppStatus.Application, appStepMap))
}
appOutdated := false
if progressiveSyncsRollingSyncStrategyEnabled(applicationSet) {
@@ -1087,25 +1091,15 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx con
currentAppStatus.Status = "Waiting"
currentAppStatus.Message = "Application has pending changes, setting status to Waiting."
currentAppStatus.Step = strconv.Itoa(getAppStep(currentAppStatus.Application, appStepMap))
currentAppStatus.TargetRevisions = app.Status.GetRevisions()
}
if currentAppStatus.Status == "Pending" {
if operationPhaseString == "Succeeded" {
revisions := []string{}
if len(app.Status.OperationState.SyncResult.Revisions) > 0 {
revisions = app.Status.OperationState.SyncResult.Revisions
} else if app.Status.OperationState.SyncResult.Revision != "" {
revisions = append(revisions, app.Status.OperationState.SyncResult.Revision)
}
if reflect.DeepEqual(currentAppStatus.TargetRevisions, revisions) {
logCtx.Infof("Application %v has completed a sync successfully, updating its ApplicationSet status to Progressing", app.Name)
currentAppStatus.LastTransitionTime = &now
currentAppStatus.Status = "Progressing"
currentAppStatus.Message = "Application resource completed a sync successfully, updating status from Pending to Progressing."
currentAppStatus.Step = strconv.Itoa(getAppStep(currentAppStatus.Application, appStepMap))
}
if !appOutdated && operationPhaseString == "Succeeded" {
logCtx.Infof("Application %v has completed a sync successfully, updating its ApplicationSet status to Progressing", app.Name)
currentAppStatus.LastTransitionTime = &now
currentAppStatus.Status = "Progressing"
currentAppStatus.Message = "Application resource completed a sync successfully, updating status from Pending to Progressing."
currentAppStatus.Step = strconv.Itoa(getAppStep(currentAppStatus.Application, appStepMap))
} else if operationPhaseString == "Running" || healthStatusString == "Progressing" {
logCtx.Infof("Application %v has entered Progressing status, updating its ApplicationSet status to Progressing", app.Name)
currentAppStatus.LastTransitionTime = &now
@@ -1467,7 +1461,7 @@ func syncApplication(application argov1alpha1.Application, prune bool) argov1alp
return application
}
func getOwnsHandlerPredicates(enableProgressiveSyncs bool) predicate.Funcs {
func getApplicationOwnsHandler(enableProgressiveSyncs bool) predicate.Funcs {
return predicate.Funcs{
CreateFunc: func(e event.CreateEvent) bool {
// if we are the owner and there is a create event, we most likely created it and do not need to
@@ -1504,8 +1498,8 @@ func getOwnsHandlerPredicates(enableProgressiveSyncs bool) predicate.Funcs {
if !isApp {
return false
}
requeue := shouldRequeueApplicationSet(appOld, appNew, enableProgressiveSyncs)
logCtx.WithField("requeue", requeue).Debugf("requeue: %t caused by application %s", requeue, appNew.Name)
requeue := shouldRequeueForApplication(appOld, appNew, enableProgressiveSyncs)
logCtx.WithField("requeue", requeue).Debugf("requeue caused by application %s", appNew.Name)
return requeue
},
GenericFunc: func(e event.GenericEvent) bool {
@@ -1522,13 +1516,13 @@ func getOwnsHandlerPredicates(enableProgressiveSyncs bool) predicate.Funcs {
}
}
// shouldRequeueApplicationSet determines when we want to requeue an ApplicationSet for reconciling based on an owned
// shouldRequeueForApplication determines when we want to requeue an ApplicationSet for reconciling based on an owned
// application change
// The applicationset controller owns a subset of the Application CR.
// We do not need to re-reconcile if parts of the application change outside the applicationset's control.
// An example is Application.ApplicationStatus.ReconciledAt, which gets updated by the application controller.
// Another is Application.ObjectMeta.ResourceVersion and Application.ObjectMeta.Generation, which are set by K8s.
func shouldRequeueApplicationSet(appOld *argov1alpha1.Application, appNew *argov1alpha1.Application, enableProgressiveSyncs bool) bool {
func shouldRequeueForApplication(appOld *argov1alpha1.Application, appNew *argov1alpha1.Application, enableProgressiveSyncs bool) bool {
if appOld == nil || appNew == nil {
return false
}
@@ -1538,9 +1532,9 @@ func shouldRequeueApplicationSet(appOld *argov1alpha1.Application, appNew *argov
// https://pkg.go.dev/reflect#DeepEqual
// ApplicationDestination has an unexported field, so we can just use == for comparison
if !cmp.Equal(appOld.Spec, appNew.Spec, cmpopts.EquateEmpty(), cmpopts.EquateComparable(argov1alpha1.ApplicationDestination{})) ||
!cmp.Equal(appOld.ObjectMeta.GetAnnotations(), appNew.ObjectMeta.GetAnnotations(), cmpopts.EquateEmpty()) ||
!cmp.Equal(appOld.ObjectMeta.GetLabels(), appNew.ObjectMeta.GetLabels(), cmpopts.EquateEmpty()) ||
!cmp.Equal(appOld.ObjectMeta.GetFinalizers(), appNew.ObjectMeta.GetFinalizers(), cmpopts.EquateEmpty()) {
!cmp.Equal(appOld.GetAnnotations(), appNew.GetAnnotations(), cmpopts.EquateEmpty()) ||
!cmp.Equal(appOld.GetLabels(), appNew.GetLabels(), cmpopts.EquateEmpty()) ||
!cmp.Equal(appOld.GetFinalizers(), appNew.GetFinalizers(), cmpopts.EquateEmpty()) {
return true
}
@@ -1561,4 +1555,90 @@ func shouldRequeueApplicationSet(appOld *argov1alpha1.Application, appNew *argov
return false
}
func getApplicationSetOwnsHandler(enableProgressiveSyncs bool) predicate.Funcs {
return predicate.Funcs{
CreateFunc: func(e event.CreateEvent) bool {
appSet, isApp := e.Object.(*argov1alpha1.ApplicationSet)
if !isApp {
return false
}
log.WithField("applicationset", appSet.QualifiedName()).Debugln("received create event")
// Always queue a new applicationset
return true
},
DeleteFunc: func(e event.DeleteEvent) bool {
appSet, isApp := e.Object.(*argov1alpha1.ApplicationSet)
if !isApp {
return false
}
log.WithField("applicationset", appSet.QualifiedName()).Debugln("received delete event")
// Always queue for the deletion of an applicationset
return true
},
UpdateFunc: func(e event.UpdateEvent) bool {
appSetOld, isAppSet := e.ObjectOld.(*argov1alpha1.ApplicationSet)
if !isAppSet {
return false
}
appSetNew, isAppSet := e.ObjectNew.(*argov1alpha1.ApplicationSet)
if !isAppSet {
return false
}
requeue := shouldRequeueForApplicationSet(appSetOld, appSetNew, enableProgressiveSyncs)
log.WithField("applicationset", appSetNew.QualifiedName()).
WithField("requeue", requeue).Debugln("received update event")
return requeue
},
GenericFunc: func(e event.GenericEvent) bool {
appSet, isApp := e.Object.(*argov1alpha1.ApplicationSet)
if !isApp {
return false
}
log.WithField("applicationset", appSet.QualifiedName()).Debugln("received generic event")
// Always queue for a generic event on an applicationset
return true
},
}
}
// shouldRequeueForApplicationSet determines when we need to requeue an applicationset
func shouldRequeueForApplicationSet(appSetOld, appSetNew *argov1alpha1.ApplicationSet, enableProgressiveSyncs bool) bool {
if appSetOld == nil || appSetNew == nil {
return false
}
// Requeue if any ApplicationStatus.Status changed for Progressive sync strategy
if enableProgressiveSyncs {
if !cmp.Equal(appSetOld.Status.ApplicationStatus, appSetNew.Status.ApplicationStatus, cmpopts.EquateEmpty()) {
return true
}
}
// only compare the applicationset spec, annotations, labels, finalizers, and deletionTimestamp, specifically avoiding
// the status field. Status is owned by the applicationset controller,
// and we do not need to requeue when it does bookkeeping.
// NB: the ApplicationDestination comes from the ApplicationSpec being embedded
// in the ApplicationSetTemplate from the generators
if !cmp.Equal(appSetOld.Spec, appSetNew.Spec, cmpopts.EquateEmpty(), cmpopts.EquateComparable(argov1alpha1.ApplicationDestination{})) ||
!cmp.Equal(appSetOld.GetLabels(), appSetNew.GetLabels(), cmpopts.EquateEmpty()) ||
!cmp.Equal(appSetOld.GetFinalizers(), appSetNew.GetFinalizers(), cmpopts.EquateEmpty()) ||
!cmp.Equal(appSetOld.DeletionTimestamp, appSetNew.DeletionTimestamp, cmpopts.EquateEmpty()) {
return true
}
// Requeue only when the refresh annotation is newly added to the ApplicationSet.
// Changes to other annotations made simultaneously might be missed, but such cases are rare.
if !cmp.Equal(appSetOld.GetAnnotations(), appSetNew.GetAnnotations(), cmpopts.EquateEmpty()) {
_, oldHasRefreshAnnotation := appSetOld.Annotations[common.AnnotationApplicationSetRefresh]
_, newHasRefreshAnnotation := appSetNew.Annotations[common.AnnotationApplicationSetRefresh]
if oldHasRefreshAnnotation && !newHasRefreshAnnotation {
return false
}
return true
}
return false
}
var _ handler.EventHandler = &clusterSecretEventHandler{}
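
One subtlety in shouldRequeueForApplicationSet above: an annotation diff normally requeues, but when the only relevant change is the refresh annotation disappearing — the reconciler's own cleanup after it processes a refresh — the function returns false so the controller does not requeue on its own bookkeeping. A hedged sketch of just that branch; note that in the real code it only runs after cmp.Equal has found the annotation maps unequal, and the key below is a hypothetical stand-in for common.AnnotationApplicationSetRefresh:

package sketch

// refreshKey is a hypothetical stand-in for common.AnnotationApplicationSetRefresh.
const refreshKey = "example.argoproj.io/application-set-refresh"

// requeueOnAnnotationChange mirrors the annotation branch: requeue on any
// annotation change except the refresh annotation being removed.
func requeueOnAnnotationChange(oldAnn, newAnn map[string]string) bool {
    _, oldHas := oldAnn[refreshKey]
    _, newHas := newAnn[refreshKey]
    if oldHas && !newHas {
        return false // the controller just consumed the refresh request
    }
    return true
}

func examples() (bool, bool, bool) {
    added := requeueOnAnnotationChange(nil, map[string]string{refreshKey: "true"})   // true
    removed := requeueOnAnnotationChange(map[string]string{refreshKey: "true"}, nil) // false
    unrelated := requeueOnAnnotationChange(
        map[string]string{"a": "1"}, map[string]string{"a": "2"}) // true
    return added, removed, unrelated
}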

View File

@@ -1885,7 +1885,7 @@ func TestRequeueGeneratorFails(t *testing.T) {
}
res, err := r.Reconcile(context.Background(), req)
require.Error(t, err)
require.NoError(t, err)
assert.Equal(t, ReconcileRequeueOnValidationError, res.RequeueAfter)
}
@@ -4733,6 +4733,9 @@ func TestUpdateApplicationSetApplicationStatus(t *testing.T) {
Health: v1alpha1.HealthStatus{
Status: health.HealthStatusProgressing,
},
Sync: v1alpha1.SyncStatus{
Revision: "Next",
},
},
},
},
@@ -4796,7 +4799,8 @@ func TestUpdateApplicationSetApplicationStatus(t *testing.T) {
Phase: common.OperationRunning,
},
Sync: v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeSynced,
Status: v1alpha1.SyncStatusCodeSynced,
Revision: "Current",
},
},
},
@@ -4861,7 +4865,8 @@ func TestUpdateApplicationSetApplicationStatus(t *testing.T) {
Phase: common.OperationSucceeded,
},
Sync: v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeSynced,
Status: v1alpha1.SyncStatusCodeSynced,
Revision: "Next",
},
},
},
@@ -4926,7 +4931,8 @@ func TestUpdateApplicationSetApplicationStatus(t *testing.T) {
Phase: common.OperationSucceeded,
},
Sync: v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeSynced,
Revision: "Current",
Status: v1alpha1.SyncStatusCodeSynced,
},
},
},
@@ -5165,86 +5171,6 @@ func TestUpdateApplicationSetApplicationStatus(t *testing.T) {
},
},
},
{
name: "does not progresses a pending application with a successful sync triggered by controller with invalid revision to progressing",
appSet: v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "name",
Namespace: "argocd",
},
Spec: v1alpha1.ApplicationSetSpec{
Strategy: &v1alpha1.ApplicationSetStrategy{
Type: "RollingSync",
RollingSync: &v1alpha1.ApplicationSetRolloutStrategy{
Steps: []v1alpha1.ApplicationSetRolloutStep{
{
MatchExpressions: []v1alpha1.ApplicationMatchExpression{},
},
{
MatchExpressions: []v1alpha1.ApplicationMatchExpression{},
},
},
},
},
},
Status: v1alpha1.ApplicationSetStatus{
ApplicationStatus: []v1alpha1.ApplicationSetApplicationStatus{
{
Application: "app1",
LastTransitionTime: &metav1.Time{
Time: time.Now().Add(time.Duration(-1) * time.Minute),
},
Message: "",
Status: "Pending",
Step: "1",
TargetRevisions: []string{"Next"},
},
},
},
},
apps: []v1alpha1.Application{
{
ObjectMeta: metav1.ObjectMeta{
Name: "app1",
},
Status: v1alpha1.ApplicationStatus{
Health: v1alpha1.HealthStatus{
Status: health.HealthStatusDegraded,
},
OperationState: &v1alpha1.OperationState{
Phase: common.OperationSucceeded,
StartedAt: metav1.Time{
Time: time.Now(),
},
Operation: v1alpha1.Operation{
InitiatedBy: v1alpha1.OperationInitiator{
Username: "applicationset-controller",
Automated: true,
},
},
SyncResult: &v1alpha1.SyncOperationResult{
Revision: "Previous",
},
},
Sync: v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeSynced,
},
},
},
},
appStepMap: map[string]int{
"app1": 0,
},
expectedAppStatus: []v1alpha1.ApplicationSetApplicationStatus{
{
Application: "app1",
Message: "",
Status: "Pending",
Step: "1",
TargetRevisions: []string{"Next"},
},
},
},
{
name: "removes the appStatus for applications that no longer exist",
appSet: v1alpha1.ApplicationSet{
@@ -5299,7 +5225,77 @@ func TestUpdateApplicationSetApplicationStatus(t *testing.T) {
Phase: common.OperationSucceeded,
},
Sync: v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeSynced,
Status: v1alpha1.SyncStatusCodeSynced,
Revision: "Current",
},
},
},
},
appStepMap: map[string]int{
"app1": 0,
},
expectedAppStatus: []v1alpha1.ApplicationSetApplicationStatus{
{
Application: "app1",
Message: "Application resource is already Healthy, updating status from Waiting to Healthy.",
Status: "Healthy",
Step: "1",
TargetRevisions: []string{"Current"},
},
},
},
{
name: "progresses a pending synced application with an old revision to progressing with the Current one",
appSet: v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "name",
Namespace: "argocd",
},
Spec: v1alpha1.ApplicationSetSpec{
Strategy: &v1alpha1.ApplicationSetStrategy{
Type: "RollingSync",
RollingSync: &v1alpha1.ApplicationSetRolloutStrategy{
Steps: []v1alpha1.ApplicationSetRolloutStep{
{
MatchExpressions: []v1alpha1.ApplicationMatchExpression{},
},
{
MatchExpressions: []v1alpha1.ApplicationMatchExpression{},
},
},
},
},
},
Status: v1alpha1.ApplicationSetStatus{
ApplicationStatus: []v1alpha1.ApplicationSetApplicationStatus{
{
Application: "app1",
Message: "",
Status: "Pending",
Step: "1",
TargetRevisions: []string{"Old"},
},
},
},
},
apps: []v1alpha1.Application{
{
ObjectMeta: metav1.ObjectMeta{
Name: "app1",
},
Status: v1alpha1.ApplicationStatus{
Health: v1alpha1.HealthStatus{
Status: health.HealthStatusHealthy,
},
OperationState: &v1alpha1.OperationState{
Phase: common.OperationSucceeded,
SyncResult: &v1alpha1.SyncOperationResult{
Revision: "Current",
},
},
Sync: v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeSynced,
Revisions: []string{"Current"},
},
},
},
@@ -6397,11 +6393,11 @@ func TestResourceStatusAreOrdered(t *testing.T) {
func TestOwnsHandler(t *testing.T) {
// progressive syncs do not affect create, delete, or generic
ownsHandler := getOwnsHandlerPredicates(true)
ownsHandler := getApplicationOwnsHandler(true)
assert.False(t, ownsHandler.CreateFunc(event.CreateEvent{}))
assert.True(t, ownsHandler.DeleteFunc(event.DeleteEvent{}))
assert.True(t, ownsHandler.GenericFunc(event.GenericEvent{}))
ownsHandler = getOwnsHandlerPredicates(false)
ownsHandler = getApplicationOwnsHandler(false)
assert.False(t, ownsHandler.CreateFunc(event.CreateEvent{}))
assert.True(t, ownsHandler.DeleteFunc(event.DeleteEvent{}))
assert.True(t, ownsHandler.GenericFunc(event.GenericEvent{}))
@@ -6581,7 +6577,7 @@ func TestOwnsHandler(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ownsHandler = getOwnsHandlerPredicates(tt.args.enableProgressiveSyncs)
ownsHandler = getApplicationOwnsHandler(tt.args.enableProgressiveSyncs)
assert.Equalf(t, tt.want, ownsHandler.UpdateFunc(tt.args.e), "UpdateFunc(%v)", tt.args.e)
})
}
@@ -6657,3 +6653,497 @@ func TestMigrateStatus(t *testing.T) {
})
}
}
func TestApplicationSetOwnsHandlerUpdate(t *testing.T) {
buildAppSet := func(annotations map[string]string) *v1alpha1.ApplicationSet {
return &v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Annotations: annotations,
},
}
}
tests := []struct {
name string
appSetOld crtclient.Object
appSetNew crtclient.Object
enableProgressiveSyncs bool
want bool
}{
{
name: "Different Spec",
appSetOld: &v1alpha1.ApplicationSet{
Spec: v1alpha1.ApplicationSetSpec{
Generators: []v1alpha1.ApplicationSetGenerator{
{List: &v1alpha1.ListGenerator{}},
},
},
},
appSetNew: &v1alpha1.ApplicationSet{
Spec: v1alpha1.ApplicationSetSpec{
Generators: []v1alpha1.ApplicationSetGenerator{
{Git: &v1alpha1.GitGenerator{}},
},
},
},
enableProgressiveSyncs: false,
want: true,
},
{
name: "Different Annotations",
appSetOld: buildAppSet(map[string]string{"key1": "value1"}),
appSetNew: buildAppSet(map[string]string{"key1": "value2"}),
enableProgressiveSyncs: false,
want: true,
},
{
name: "Different Labels",
appSetOld: &v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{"key1": "value1"},
},
},
appSetNew: &v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{"key1": "value2"},
},
},
enableProgressiveSyncs: false,
want: true,
},
{
name: "Different Finalizers",
appSetOld: &v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Finalizers: []string{"finalizer1"},
},
},
appSetNew: &v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Finalizers: []string{"finalizer2"},
},
},
enableProgressiveSyncs: false,
want: true,
},
{
name: "No Changes",
appSetOld: &v1alpha1.ApplicationSet{
Spec: v1alpha1.ApplicationSetSpec{
Generators: []v1alpha1.ApplicationSetGenerator{
{List: &v1alpha1.ListGenerator{}},
},
},
ObjectMeta: metav1.ObjectMeta{
Annotations: map[string]string{"key1": "value1"},
Labels: map[string]string{"key1": "value1"},
Finalizers: []string{"finalizer1"},
},
},
appSetNew: &v1alpha1.ApplicationSet{
Spec: v1alpha1.ApplicationSetSpec{
Generators: []v1alpha1.ApplicationSetGenerator{
{List: &v1alpha1.ListGenerator{}},
},
},
ObjectMeta: metav1.ObjectMeta{
Annotations: map[string]string{"key1": "value1"},
Labels: map[string]string{"key1": "value1"},
Finalizers: []string{"finalizer1"},
},
},
enableProgressiveSyncs: false,
want: false,
},
{
name: "annotation removed",
appSetOld: buildAppSet(map[string]string{
argocommon.AnnotationApplicationSetRefresh: "true",
}),
appSetNew: buildAppSet(map[string]string{}),
enableProgressiveSyncs: false,
want: false,
},
{
name: "annotation not removed",
appSetOld: buildAppSet(map[string]string{
argocommon.AnnotationApplicationSetRefresh: "true",
}),
appSetNew: buildAppSet(map[string]string{
argocommon.AnnotationApplicationSetRefresh: "true",
}),
enableProgressiveSyncs: false,
want: false,
},
{
name: "annotation added",
appSetOld: buildAppSet(map[string]string{}),
appSetNew: buildAppSet(map[string]string{
argocommon.AnnotationApplicationSetRefresh: "true",
}),
enableProgressiveSyncs: false,
want: true,
},
{
name: "old object is not an appset",
appSetOld: &v1alpha1.Application{},
appSetNew: buildAppSet(map[string]string{}),
enableProgressiveSyncs: false,
want: false,
},
{
name: "new object is not an appset",
appSetOld: buildAppSet(map[string]string{}),
appSetNew: &v1alpha1.Application{},
enableProgressiveSyncs: false,
want: false,
},
{
name: "deletionTimestamp present when progressive sync enabled",
appSetOld: buildAppSet(map[string]string{}),
appSetNew: &v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
DeletionTimestamp: &metav1.Time{Time: time.Now()},
},
},
enableProgressiveSyncs: true,
want: true,
},
{
name: "deletionTimestamp present when progressive sync disabled",
appSetOld: buildAppSet(map[string]string{}),
appSetNew: &v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
DeletionTimestamp: &metav1.Time{Time: time.Now()},
},
},
enableProgressiveSyncs: false,
want: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ownsHandler := getApplicationSetOwnsHandler(tt.enableProgressiveSyncs)
requeue := ownsHandler.UpdateFunc(event.UpdateEvent{
ObjectOld: tt.appSetOld,
ObjectNew: tt.appSetNew,
})
assert.Equalf(t, tt.want, requeue, "ownsHandler.UpdateFunc(%v, %v, %t)", tt.appSetOld, tt.appSetNew, tt.enableProgressiveSyncs)
})
}
}
func TestApplicationSetOwnsHandlerGeneric(t *testing.T) {
ownsHandler := getApplicationSetOwnsHandler(false)
tests := []struct {
name string
obj crtclient.Object
want bool
}{
{
name: "Object is ApplicationSet",
obj: &v1alpha1.ApplicationSet{},
want: true,
},
{
name: "Object is not ApplicationSet",
obj: &v1alpha1.Application{},
want: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
requeue := ownsHandler.GenericFunc(event.GenericEvent{
Object: tt.obj,
})
assert.Equalf(t, tt.want, requeue, "ownsHandler.GenericFunc(%v)", tt.obj)
})
}
}
func TestApplicationSetOwnsHandlerCreate(t *testing.T) {
ownsHandler := getApplicationSetOwnsHandler(false)
tests := []struct {
name string
obj crtclient.Object
want bool
}{
{
name: "Object is ApplicationSet",
obj: &v1alpha1.ApplicationSet{},
want: true,
},
{
name: "Object is not ApplicationSet",
obj: &v1alpha1.Application{},
want: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
requeue := ownsHandler.CreateFunc(event.CreateEvent{
Object: tt.obj,
})
assert.Equalf(t, tt.want, requeue, "ownsHandler.CreateFunc(%v)", tt.obj)
})
}
}
func TestApplicationSetOwnsHandlerDelete(t *testing.T) {
ownsHandler := getApplicationSetOwnsHandler(false)
tests := []struct {
name string
obj crtclient.Object
want bool
}{
{
name: "Object is ApplicationSet",
obj: &v1alpha1.ApplicationSet{},
want: true,
},
{
name: "Object is not ApplicationSet",
obj: &v1alpha1.Application{},
want: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
requeue := ownsHandler.DeleteFunc(event.DeleteEvent{
Object: tt.obj,
})
assert.Equalf(t, tt.want, requeue, "ownsHandler.DeleteFunc(%v)", tt.obj)
})
}
}
func TestShouldRequeueForApplicationSet(t *testing.T) {
type args struct {
appSetOld *v1alpha1.ApplicationSet
appSetNew *v1alpha1.ApplicationSet
enableProgressiveSyncs bool
}
tests := []struct {
name string
args args
want bool
}{
{
name: "NilAppSet",
args: args{
appSetNew: &v1alpha1.ApplicationSet{},
appSetOld: nil,
enableProgressiveSyncs: false,
},
want: false,
},
{
name: "ApplicationSetApplicationStatusChanged",
args: args{
appSetOld: &v1alpha1.ApplicationSet{
Status: v1alpha1.ApplicationSetStatus{
ApplicationStatus: []v1alpha1.ApplicationSetApplicationStatus{
{
Application: "app1",
Status: "Healthy",
},
},
},
},
appSetNew: &v1alpha1.ApplicationSet{
Status: v1alpha1.ApplicationSetStatus{
ApplicationStatus: []v1alpha1.ApplicationSetApplicationStatus{
{
Application: "app1",
Status: "Waiting",
},
},
},
},
enableProgressiveSyncs: true,
},
want: true,
},
{
name: "ApplicationSetWithDeletionTimestamp",
args: args{
appSetOld: &v1alpha1.ApplicationSet{
Status: v1alpha1.ApplicationSetStatus{
ApplicationStatus: []v1alpha1.ApplicationSetApplicationStatus{
{
Application: "app1",
Status: "Healthy",
},
},
},
},
appSetNew: &v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
DeletionTimestamp: &metav1.Time{Time: time.Now()},
},
Status: v1alpha1.ApplicationSetStatus{
ApplicationStatus: []v1alpha1.ApplicationSetApplicationStatus{
{
Application: "app1",
Status: "Waiting",
},
},
},
},
enableProgressiveSyncs: false,
},
want: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
assert.Equalf(t, tt.want, shouldRequeueForApplicationSet(tt.args.appSetOld, tt.args.appSetNew, tt.args.enableProgressiveSyncs), "shouldRequeueForApplicationSet(%v, %v)", tt.args.appSetOld, tt.args.appSetNew)
})
}
}
func TestIgnoreNotAllowedNamespaces(t *testing.T) {
tests := []struct {
name string
namespaces []string
objectNS string
expected bool
}{
{
name: "Namespace allowed",
namespaces: []string{"allowed-namespace"},
objectNS: "allowed-namespace",
expected: true,
},
{
name: "Namespace not allowed",
namespaces: []string{"allowed-namespace"},
objectNS: "not-allowed-namespace",
expected: false,
},
{
name: "Empty allowed namespaces",
namespaces: []string{},
objectNS: "any-namespace",
expected: false,
},
{
name: "Multiple allowed namespaces",
namespaces: []string{"allowed-namespace-1", "allowed-namespace-2"},
objectNS: "allowed-namespace-2",
expected: true,
},
{
name: "Namespace not in multiple allowed namespaces",
namespaces: []string{"allowed-namespace-1", "allowed-namespace-2"},
objectNS: "not-allowed-namespace",
expected: false,
},
{
name: "Namespace matched by glob pattern",
namespaces: []string{"allowed-namespace-*"},
objectNS: "allowed-namespace-1",
expected: true,
},
{
name: "Namespace matched by regex pattern",
namespaces: []string{"/^allowed-namespace-[^-]+$/"},
objectNS: "allowed-namespace-1",
expected: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
predicate := ignoreNotAllowedNamespaces(tt.namespaces)
object := &v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Namespace: tt.objectNS,
},
}
t.Run(tt.name+":Create", func(t *testing.T) {
result := predicate.Create(event.CreateEvent{Object: object})
assert.Equal(t, tt.expected, result)
})
t.Run(tt.name+":Update", func(t *testing.T) {
result := predicate.Update(event.UpdateEvent{ObjectNew: object})
assert.Equal(t, tt.expected, result)
})
t.Run(tt.name+":Delete", func(t *testing.T) {
result := predicate.Delete(event.DeleteEvent{Object: object})
assert.Equal(t, tt.expected, result)
})
t.Run(tt.name+":Generic", func(t *testing.T) {
result := predicate.Generic(event.GenericEvent{Object: object})
assert.Equal(t, tt.expected, result)
})
})
}
}
func TestIsRollingSyncStrategy(t *testing.T) {
tests := []struct {
name string
appset *v1alpha1.ApplicationSet
expected bool
}{
{
name: "RollingSync strategy is explicitly set",
appset: &v1alpha1.ApplicationSet{
Spec: v1alpha1.ApplicationSetSpec{
Strategy: &v1alpha1.ApplicationSetStrategy{
Type: "RollingSync",
RollingSync: &v1alpha1.ApplicationSetRolloutStrategy{
Steps: []v1alpha1.ApplicationSetRolloutStep{},
},
},
},
},
expected: true,
},
{
name: "AllAtOnce strategy is explicitly set",
appset: &v1alpha1.ApplicationSet{
Spec: v1alpha1.ApplicationSetSpec{
Strategy: &v1alpha1.ApplicationSetStrategy{
Type: "AllAtOnce",
},
},
},
expected: false,
},
{
name: "Strategy is empty",
appset: &v1alpha1.ApplicationSet{
Spec: v1alpha1.ApplicationSetSpec{
Strategy: &v1alpha1.ApplicationSetStrategy{},
},
},
expected: false,
},
{
name: "Strategy is nil",
appset: &v1alpha1.ApplicationSet{
Spec: v1alpha1.ApplicationSetSpec{
Strategy: nil,
},
},
expected: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := isRollingSyncStrategy(tt.appset)
assert.Equal(t, tt.expected, result)
})
}
}

View File

@@ -1,4 +1,4 @@
// Code generated by mockery v2.43.2. DO NOT EDIT.
// Code generated by mockery v2.53.4. DO NOT EDIT.
package mocks

View File

@@ -1,4 +1,4 @@
// Code generated by mockery v2.43.2. DO NOT EDIT.
// Code generated by mockery v2.53.4. DO NOT EDIT.
package mocks

View File

@@ -1,4 +1,4 @@
// Code generated by mockery v2.43.2. DO NOT EDIT.
// Code generated by mockery v2.53.4. DO NOT EDIT.
package mocks

View File

@@ -1,4 +1,4 @@
// Code generated by mockery v2.43.2. DO NOT EDIT.
// Code generated by mockery v2.53.4. DO NOT EDIT.
package mocks

View File

@@ -1,4 +1,4 @@
// Code generated by mockery v2.43.2. DO NOT EDIT.
// Code generated by mockery v2.53.4. DO NOT EDIT.
package mocks

View File

@@ -2,10 +2,10 @@ package scm_provider
import (
"context"
"errors"
"fmt"
"net/http"
"os"
pathpkg "path"
"github.com/hashicorp/go-retryablehttp"
"github.com/xanzy/go-gitlab"
@@ -129,40 +129,31 @@ func (g *GitlabProvider) ListRepos(ctx context.Context, cloneProtocol string) ([
func (g *GitlabProvider) RepoHasPath(_ context.Context, repo *Repository, path string) (bool, error) {
p, _, err := g.client.Projects.GetProject(repo.Organization+"/"+repo.Repository, nil)
if err != nil {
return false, err
return false, fmt.Errorf("error getting Project Info: %w", err)
}
directories := []string{
path,
pathpkg.Dir(path),
}
for _, directory := range directories {
options := gitlab.ListTreeOptions{
Path: &directory,
Ref: &repo.Branch,
}
for {
treeNode, resp, err := g.client.Repositories.ListTree(p.ID, &options)
// check whether the path is a file that exists in the repo
fileOptions := gitlab.GetFileOptions{Ref: &repo.Branch}
_, _, err = g.client.RepositoryFiles.GetFile(p.ID, path, &fileOptions)
if err != nil {
if errors.Is(err, gitlab.ErrNotFound) {
// no file found, check for a directory
options := gitlab.ListTreeOptions{
Path: &path,
Ref: &repo.Branch,
}
_, _, err := g.client.Repositories.ListTree(p.ID, &options)
if err != nil {
if errors.Is(err, gitlab.ErrNotFound) {
return false, nil // no file or directory found
}
return false, err
}
if path == directory {
if resp.TotalItems > 0 {
return true, nil
}
}
for i := range treeNode {
if treeNode[i].Path == path {
return true, nil
}
}
if resp.NextPage == 0 {
// no future pages
break
}
options.Page = resp.NextPage
return true, nil // directory found
}
return false, err
}
return false, nil
return true, nil // file found
}
func (g *GitlabProvider) listBranches(_ context.Context, repo *Repository) ([]gitlab.Branch, error) {
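
Distilling the refactored RepoHasPath above: the provider now asks the repository-files API first and only falls back to the tree API when the file lookup 404s, treating gitlab.ErrNotFound as "not found" at both steps. A compact hedged sketch of the same flow against go-gitlab — hasPath is an illustrative name, not the repo's:

package sketch

import (
    "errors"
    "fmt"

    "github.com/xanzy/go-gitlab"
)

// hasPath reports whether path exists in the project as a file or directory.
func hasPath(client *gitlab.Client, projectID any, path, ref string) (bool, error) {
    // 1) Is it a file?
    if _, _, err := client.RepositoryFiles.GetFile(projectID, path, &gitlab.GetFileOptions{Ref: &ref}); err == nil {
        return true, nil
    } else if !errors.Is(err, gitlab.ErrNotFound) {
        return false, fmt.Errorf("error getting file: %w", err)
    }
    // 2) Not a file — is it a directory? (GitLab 17.7+ returns 404 from
    // the tree API for file paths, hence the explicit two-step check.)
    _, _, err := client.Repositories.ListTree(projectID, &gitlab.ListTreeOptions{Path: &path, Ref: &ref})
    if err != nil {
        if errors.Is(err, gitlab.ErrNotFound) {
            return false, nil
        }
        return false, fmt.Errorf("error listing tree: %w", err)
    }
    return true, nil
}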

View File

@@ -20,6 +20,7 @@ func gitlabMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
t.Helper()
return func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
fmt.Println(r.RequestURI)
switch r.RequestURI {
case "/api/v4":
fmt.Println("here1")
@@ -1040,6 +1041,32 @@ func gitlabMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
if err != nil {
t.Fail()
}
// In recent versions of the GitLab API (v17.7+), ListTree returns 404 not only when a file doesn't exist, but also
// when the path points to a file instead of a directory. The code was refactored to explicitly search for the file,
// then for the directory, treating 404 errors as "file not found".
case "/api/v4/projects/27084533/repository/files/argocd?ref=master":
w.WriteHeader(http.StatusNotFound)
case "/api/v4/projects/27084533/repository/files/argocd%2Finstall%2Eyaml?ref=master":
_, err := io.WriteString(w, `{"file_name":"install.yaml","file_path":"argocd/install.yaml","size":0,"encoding":"base64","content_sha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855","ref":"main","blob_id":"e69de29bb2d1d6434b8b29ae775ad8c2e48c5391","commit_id":"6d4c0f9d34534ccc73aa3f3180b25e2aebe630eb","last_commit_id":"b50eb63f9c0e09bfdb070db26fd32c7210291f52","execute_filemode":false,"content":""}`)
if err != nil {
t.Fail()
}
case "/api/v4/projects/27084533/repository/files/notathing?ref=master":
w.WriteHeader(http.StatusNotFound)
case "/api/v4/projects/27084533/repository/tree?path=notathing&ref=master":
w.WriteHeader(http.StatusNotFound)
case "/api/v4/projects/27084533/repository/files/argocd%2Fnotathing%2Eyaml?ref=master":
w.WriteHeader(http.StatusNotFound)
case "/api/v4/projects/27084533/repository/tree?path=argocd%2Fnotathing.yaml&ref=master":
w.WriteHeader(http.StatusNotFound)
case "/api/v4/projects/27084533/repository/files/notathing%2Fnotathing%2Eyaml?ref=master":
w.WriteHeader(http.StatusNotFound)
case "/api/v4/projects/27084533/repository/tree?path=notathing%2Fnotathing.yaml&ref=master":
w.WriteHeader(http.StatusNotFound)
case "/api/v4/projects/27084533/repository/files/notathing%2Fnotathing%2Fnotathing%2Eyaml?ref=master":
w.WriteHeader(http.StatusNotFound)
case "/api/v4/projects/27084533/repository/tree?path=notathing%2Fnotathing%2Fnotathing.yaml&ref=master":
w.WriteHeader(http.StatusNotFound)
case "/api/v4/projects/27084533/repository/branches/foo":
w.WriteHeader(http.StatusNotFound)
default:
@@ -1194,6 +1221,16 @@ func TestGitlabHasPath(t *testing.T) {
path: "argocd/notathing.yaml",
exists: false,
},
{
name: "noexistent file in noexistent directory",
path: "notathing/notathing.yaml",
exists: false,
},
{
name: "noexistent file in nested noexistent directory",
path: "notathing/notathing/notathing.yaml",
exists: false,
},
}
for _, c := range cases {

View File

@@ -2,22 +2,15 @@ package utils
import (
"context"
"encoding/json"
"fmt"
"strconv"
"strings"
"sync"
"time"
log "github.com/sirupsen/logrus"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/argoproj/argo-cd/v2/common"
appv1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/v2/util/db"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/utils/ptr"
)
// The contents of this file are from
@@ -126,11 +119,15 @@ func ListClusters(ctx context.Context, clientset kubernetes.Interface, namespace
hasInClusterCredentials := false
for i, clusterSecret := range clusterSecrets {
// This line has changed from the original Argo CD code: now receives an error, and handles it
cluster, err := secretToCluster(&clusterSecret)
cluster, err := db.SecretToCluster(&clusterSecret)
if err != nil || cluster == nil {
return nil, fmt.Errorf("unable to convert cluster secret to cluster object '%s': %w", clusterSecret.Name, err)
}
// db.SecretToCluster populates these, but they're not meant to be available to the caller.
cluster.Labels = nil
cluster.Annotations = nil
clusterList.Items[i] = *cluster
if cluster.Server == appv1.KubernetesInternalAPIServerAddr {
hasInClusterCredentials = true
@@ -167,48 +164,3 @@ func getLocalCluster(clientset kubernetes.Interface) *appv1.Cluster {
cluster.ConnectionState.ModifiedAt = &now
return cluster
}
// secretToCluster converts a secret into a Cluster object
func secretToCluster(s *corev1.Secret) (*appv1.Cluster, error) {
var config appv1.ClusterConfig
if len(s.Data["config"]) > 0 {
if err := json.Unmarshal(s.Data["config"], &config); err != nil {
// This line has changed from the original Argo CD: now returns an error rather than panicking.
return nil, err
}
}
var namespaces []string
for _, ns := range strings.Split(string(s.Data["namespaces"]), ",") {
if ns = strings.TrimSpace(ns); ns != "" {
namespaces = append(namespaces, ns)
}
}
var refreshRequestedAt *metav1.Time
if v, found := s.Annotations[appv1.AnnotationKeyRefresh]; found {
requestedAt, err := time.Parse(time.RFC3339, v)
if err != nil {
log.Warnf("Error while parsing date in cluster secret '%s': %v", s.Name, err)
} else {
refreshRequestedAt = &metav1.Time{Time: requestedAt}
}
}
var shard *int64
if shardStr := s.Data["shard"]; shardStr != nil {
if val, err := strconv.Atoi(string(shardStr)); err != nil {
log.Warnf("Error while parsing shard in cluster secret '%s': %v", s.Name, err)
} else {
shard = ptr.To(int64(val))
}
}
cluster := appv1.Cluster{
ID: string(s.UID),
Server: strings.TrimRight(string(s.Data["server"]), "/"),
Name: string(s.Data["name"]),
Namespaces: namespaces,
Config: config,
RefreshRequestedAt: refreshRequestedAt,
Shard: shard,
}
return &cluster, nil
}
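The refactor above replaces the local copy of secretToCluster with the shared db.SecretToCluster helper, then clears the Labels and Annotations the helper populates. A minimal usage sketch, assuming the argo-cd v2 module is available on the import path:

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	"github.com/argoproj/argo-cd/v2/util/db"
)

func exampleSecretToCluster() error {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "mycluster"},
		Data: map[string][]byte{
			"name":   []byte("test"),
			"server": []byte("https://mycluster.example.com"),
			"config": []byte(`{"username":"foo"}`),
		},
	}
	cluster, err := db.SecretToCluster(secret)
	if err != nil {
		return err
	}
	// As in ListClusters above: the helper fills these fields, but they are
	// not meant to be exposed to callers.
	cluster.Labels = nil
	cluster.Annotations = nil
	return nil
}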

View File

@@ -20,51 +20,6 @@ const (
fakeNamespace = "fake-ns"
)
// From Argo CD util/db/cluster_test.go
func Test_secretToCluster(t *testing.T) {
secret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: "mycluster",
Namespace: fakeNamespace,
},
Data: map[string][]byte{
"name": []byte("test"),
"server": []byte("http://mycluster"),
"config": []byte("{\"username\":\"foo\", \"disableCompression\":true}"),
},
}
cluster, err := secretToCluster(secret)
require.NoError(t, err)
assert.Equal(t, argoappv1.Cluster{
Name: "test",
Server: "http://mycluster",
Config: argoappv1.ClusterConfig{
Username: "foo",
DisableCompression: true,
},
}, *cluster)
}
// From Argo CD util/db/cluster_test.go
func Test_secretToCluster_NoConfig(t *testing.T) {
secret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: "mycluster",
Namespace: fakeNamespace,
},
Data: map[string][]byte{
"name": []byte("test"),
"server": []byte("http://mycluster"),
},
}
cluster, err := secretToCluster(secret)
require.NoError(t, err)
assert.Equal(t, argoappv1.Cluster{
Name: "test",
Server: "http://mycluster",
}, *cluster)
}
func createClusterSecret(secretName string, clusterName string, clusterServer string) *corev1.Secret {
secret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{

View File

@@ -1,4 +1,4 @@
// Code generated by mockery v2.43.2. DO NOT EDIT.
// Code generated by mockery v2.53.4. DO NOT EDIT.
package mocks

View File

@@ -10,6 +10,7 @@ p, role:readonly, applications, get, */*, allow
p, role:readonly, certificates, get, *, allow
p, role:readonly, clusters, get, *, allow
p, role:readonly, repositories, get, *, allow
p, role:readonly, write-repositories, get, *, allow
p, role:readonly, projects, get, *, allow
p, role:readonly, accounts, get, *, allow
p, role:readonly, gpgkeys, get, *, allow
@@ -17,7 +18,9 @@ p, role:readonly, logs, get, */*, allow
p, role:admin, applications, create, */*, allow
p, role:admin, applications, update, */*, allow
p, role:admin, applications, update/*, */*, allow
p, role:admin, applications, delete, */*, allow
p, role:admin, applications, delete/*, */*, allow
p, role:admin, applications, sync, */*, allow
p, role:admin, applications, override, */*, allow
p, role:admin, applications, action/*, */*, allow
@@ -34,6 +37,9 @@ p, role:admin, clusters, delete, *, allow
p, role:admin, repositories, create, *, allow
p, role:admin, repositories, update, *, allow
p, role:admin, repositories, delete, *, allow
p, role:admin, write-repositories, create, *, allow
p, role:admin, write-repositories, update, *, allow
p, role:admin, write-repositories, delete, *, allow
p, role:admin, projects, create, *, allow
p, role:admin, projects, update, *, allow
p, role:admin, projects, delete, *, allow
@@ -43,4 +49,4 @@ p, role:admin, gpgkeys, delete, *, allow
p, role:admin, exec, create, */*, allow
g, role:admin, role:readonly
g, admin, role:admin
g, admin, role:admin
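The new write-repositories rules above follow the same grammar as the existing repositories entries, so write access can be granted independently of read access. An illustrative example for a hypothetical role:ci (not part of the built-in policy):

p, role:ci, write-repositories, get, *, allow
p, role:ci, write-repositories, create, *, allow
g, ci-bot, role:ci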
assets/swagger.json generated
View File

@@ -1990,6 +1990,39 @@
}
}
},
"/api/v1/applicationsets/generate": {
"post": {
"tags": [
"ApplicationSetService"
],
"summary": "Generate generates",
"operationId": "ApplicationSetService_Generate",
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/applicationsetApplicationSetGenerateRequest"
}
}
],
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/applicationsetApplicationSetGenerateResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/runtimeError"
}
}
}
}
},
"/api/v1/applicationsets/{name}": {
"get": {
"tags": [
@@ -4084,6 +4117,504 @@
}
}
},
"/api/v1/write-repocreds": {
"get": {
"tags": [
"RepoCredsService"
],
"summary": "ListWriteRepositoryCredentials gets a list of all configured repository credential sets that have write access",
"operationId": "RepoCredsService_ListWriteRepositoryCredentials",
"parameters": [
{
"type": "string",
"description": "Repo URL for query.",
"name": "url",
"in": "query"
}
],
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1alpha1RepoCredsList"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/runtimeError"
}
}
}
},
"post": {
"tags": [
"RepoCredsService"
],
"summary": "CreateWriteRepositoryCredentials creates a new repository credential set with write access",
"operationId": "RepoCredsService_CreateWriteRepositoryCredentials",
"parameters": [
{
"description": "Repository definition",
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/v1alpha1RepoCreds"
}
},
{
"type": "boolean",
"description": "Whether to create in upsert mode.",
"name": "upsert",
"in": "query"
}
],
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1alpha1RepoCreds"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/runtimeError"
}
}
}
}
},
"/api/v1/write-repocreds/{creds.url}": {
"put": {
"tags": [
"RepoCredsService"
],
"summary": "UpdateWriteRepositoryCredentials updates a repository credential set with write access",
"operationId": "RepoCredsService_UpdateWriteRepositoryCredentials",
"parameters": [
{
"type": "string",
"description": "URL is the URL to which these credentials match",
"name": "creds.url",
"in": "path",
"required": true
},
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/v1alpha1RepoCreds"
}
}
],
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1alpha1RepoCreds"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/runtimeError"
}
}
}
}
},
"/api/v1/write-repocreds/{url}": {
"delete": {
"tags": [
"RepoCredsService"
],
"summary": "DeleteWriteRepositoryCredentials deletes a repository credential set with write access from the configuration",
"operationId": "RepoCredsService_DeleteWriteRepositoryCredentials",
"parameters": [
{
"type": "string",
"name": "url",
"in": "path",
"required": true
}
],
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/repocredsRepoCredsResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/runtimeError"
}
}
}
}
},
"/api/v1/write-repositories": {
"get": {
"tags": [
"RepositoryService"
],
"summary": "ListWriteRepositories gets a list of all configured write repositories",
"operationId": "RepositoryService_ListWriteRepositories",
"parameters": [
{
"type": "string",
"description": "Repo URL for query.",
"name": "repo",
"in": "query"
},
{
"type": "boolean",
"description": "Whether to force a cache refresh on repo's connection state.",
"name": "forceRefresh",
"in": "query"
},
{
"type": "string",
"description": "App project for query.",
"name": "appProject",
"in": "query"
}
],
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1alpha1RepositoryList"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/runtimeError"
}
}
}
},
"post": {
"tags": [
"RepositoryService"
],
"summary": "CreateWriteRepository creates a new write repository configuration",
"operationId": "RepositoryService_CreateWriteRepository",
"parameters": [
{
"description": "Repository definition",
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/v1alpha1Repository"
}
},
{
"type": "boolean",
"description": "Whether to create in upsert mode.",
"name": "upsert",
"in": "query"
},
{
"type": "boolean",
"description": "Whether to operate on credential set instead of repository.",
"name": "credsOnly",
"in": "query"
}
],
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1alpha1Repository"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/runtimeError"
}
}
}
}
},
"/api/v1/write-repositories/{repo.repo}": {
"put": {
"tags": [
"RepositoryService"
],
"summary": "UpdateWriteRepository updates a write repository configuration",
"operationId": "RepositoryService_UpdateWriteRepository",
"parameters": [
{
"type": "string",
"description": "Repo contains the URL to the remote repository",
"name": "repo.repo",
"in": "path",
"required": true
},
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/v1alpha1Repository"
}
}
],
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1alpha1Repository"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/runtimeError"
}
}
}
}
},
"/api/v1/write-repositories/{repo}": {
"get": {
"tags": [
"RepositoryService"
],
"summary": "GetWrite returns a repository or its write credentials",
"operationId": "RepositoryService_GetWrite",
"parameters": [
{
"type": "string",
"description": "Repo URL for query",
"name": "repo",
"in": "path",
"required": true
},
{
"type": "boolean",
"description": "Whether to force a cache refresh on repo's connection state.",
"name": "forceRefresh",
"in": "query"
},
{
"type": "string",
"description": "App project for query.",
"name": "appProject",
"in": "query"
}
],
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1alpha1Repository"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/runtimeError"
}
}
}
},
"delete": {
"tags": [
"RepositoryService"
],
"summary": "DeleteWriteRepository deletes a write repository from the configuration",
"operationId": "RepositoryService_DeleteWriteRepository",
"parameters": [
{
"type": "string",
"description": "Repo URL for query",
"name": "repo",
"in": "path",
"required": true
},
{
"type": "boolean",
"description": "Whether to force a cache refresh on repo's connection state.",
"name": "forceRefresh",
"in": "query"
},
{
"type": "string",
"description": "App project for query.",
"name": "appProject",
"in": "query"
}
],
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/repositoryRepoResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/runtimeError"
}
}
}
}
},
"/api/v1/write-repositories/{repo}/validate": {
"post": {
"tags": [
"RepositoryService"
],
"summary": "ValidateWriteAccess validates write access to a repository with given parameters",
"operationId": "RepositoryService_ValidateWriteAccess",
"parameters": [
{
"type": "string",
"description": "The URL to the repo",
"name": "repo",
"in": "path",
"required": true
},
{
"description": "The URL to the repo",
"name": "body",
"in": "body",
"required": true,
"schema": {
"type": "string"
}
},
{
"type": "string",
"description": "Username for accessing repo.",
"name": "username",
"in": "query"
},
{
"type": "string",
"description": "Password for accessing repo.",
"name": "password",
"in": "query"
},
{
"type": "string",
"description": "Private key data for accessing SSH repository.",
"name": "sshPrivateKey",
"in": "query"
},
{
"type": "boolean",
"description": "Whether to skip certificate or host key validation.",
"name": "insecure",
"in": "query"
},
{
"type": "string",
"description": "TLS client cert data for accessing HTTPS repository.",
"name": "tlsClientCertData",
"in": "query"
},
{
"type": "string",
"description": "TLS client cert key for accessing HTTPS repository.",
"name": "tlsClientCertKey",
"in": "query"
},
{
"type": "string",
"description": "The type of the repo.",
"name": "type",
"in": "query"
},
{
"type": "string",
"description": "The name of the repo.",
"name": "name",
"in": "query"
},
{
"type": "boolean",
"description": "Whether helm-oci support should be enabled for this repo.",
"name": "enableOci",
"in": "query"
},
{
"type": "string",
"description": "Github App Private Key PEM data.",
"name": "githubAppPrivateKey",
"in": "query"
},
{
"type": "string",
"format": "int64",
"description": "Github App ID of the app used to access the repo.",
"name": "githubAppID",
"in": "query"
},
{
"type": "string",
"format": "int64",
"description": "Github App Installation ID of the installed GitHub App.",
"name": "githubAppInstallationID",
"in": "query"
},
{
"type": "string",
"description": "Github App Enterprise base url if empty will default to https://api.github.com.",
"name": "githubAppEnterpriseBaseUrl",
"in": "query"
},
{
"type": "string",
"description": "HTTP/HTTPS proxy to access the repository.",
"name": "proxy",
"in": "query"
},
{
"type": "string",
"description": "Reference between project and repository that allow you automatically to be added as item inside SourceRepos project entity.",
"name": "project",
"in": "query"
},
{
"type": "string",
"description": "Google Cloud Platform service account key.",
"name": "gcpServiceAccountKey",
"in": "query"
},
{
"type": "boolean",
"description": "Whether to force HTTP basic auth.",
"name": "forceHttpBasicAuth",
"in": "query"
}
],
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/repositoryRepoResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/runtimeError"
}
}
}
}
},
"/api/version": {
"get": {
"tags": [
@@ -4725,6 +5256,9 @@
"help": {
"$ref": "#/definitions/clusterHelp"
},
"hydratorEnabled": {
"type": "boolean"
},
"impersonationEnabled": {
"type": "boolean"
},
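The generated spec above adds write-repository endpoints mirroring the existing RepositoryService routes. A minimal client sketch in Go; the server address and token are placeholders, not values from this change:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Assumed: a reachable Argo CD API server and a valid bearer token.
	req, err := http.NewRequest(http.MethodGet,
		"https://argocd.example.com/api/v1/write-repositories", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer <token>")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body)) // a v1alpha1RepositoryList on success
}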

View File

@@ -19,6 +19,7 @@ import (
"k8s.io/client-go/tools/clientcmd"
cmdutil "github.com/argoproj/argo-cd/v2/cmd/util"
commitclient "github.com/argoproj/argo-cd/v2/commitserver/apiclient"
"github.com/argoproj/argo-cd/v2/common"
"github.com/argoproj/argo-cd/v2/controller"
"github.com/argoproj/argo-cd/v2/controller/sharding"
@@ -58,10 +59,12 @@ func NewCommand() *cobra.Command {
repoErrorGracePeriod int64
repoServerAddress string
repoServerTimeoutSeconds int
commitServerAddress string
selfHealTimeoutSeconds int
selfHealBackoffTimeoutSeconds int
selfHealBackoffFactor int
selfHealBackoffCapSeconds int
selfHealBackoffCooldownSeconds int
syncTimeout int
statusProcessors int
operationProcessors int
@@ -87,7 +90,8 @@ func NewCommand() *cobra.Command {
ignoreNormalizerOpts normalizers.IgnoreNormalizerOpts
// argocd k8s event logging flag
enableK8sEvent []string
enableK8sEvent []string
hydratorEnabled bool
)
command := cobra.Command{
Use: cliName,
@@ -157,6 +161,8 @@ func NewCommand() *cobra.Command {
repoClientset := apiclient.NewRepoServerClientset(repoServerAddress, repoServerTimeoutSeconds, tlsConfig)
commitClientset := commitclient.NewCommitServerClientset(commitServerAddress)
cache, err := cacheSource()
errors.CheckError(err)
cache.Cache.SetClient(cacheutil.NewTwoLevelClient(cache.Cache.GetClient(), 10*time.Minute))
@@ -183,6 +189,7 @@ func NewCommand() *cobra.Command {
kubeClient,
appClient,
repoClientset,
commitClientset,
cache,
kubectl,
resyncDuration,
@@ -190,6 +197,7 @@ func NewCommand() *cobra.Command {
time.Duration(appResyncJitter)*time.Second,
time.Duration(selfHealTimeoutSeconds)*time.Second,
selfHealBackoff,
time.Duration(selfHealBackoffCooldownSeconds)*time.Second,
time.Duration(syncTimeout)*time.Second,
time.Duration(repoErrorGracePeriod)*time.Second,
metricsPort,
@@ -205,6 +213,7 @@ func NewCommand() *cobra.Command {
enableDynamicClusterDistribution,
ignoreNormalizerOpts,
enableK8sEvent,
hydratorEnabled,
)
errors.CheckError(err)
cacheutil.CollectMetrics(redisClient, appController.GetMetricsServer(), nil)
@@ -247,6 +256,7 @@ func NewCommand() *cobra.Command {
command.Flags().Int64Var(&repoErrorGracePeriod, "repo-error-grace-period-seconds", int64(env.ParseDurationFromEnv("ARGOCD_REPO_ERROR_GRACE_PERIOD_SECONDS", defaultAppResyncPeriod*time.Second, 0, math.MaxInt64).Seconds()), "Grace period in seconds for ignoring consecutive errors while communicating with repo server.")
command.Flags().StringVar(&repoServerAddress, "repo-server", env.StringFromEnv("ARGOCD_APPLICATION_CONTROLLER_REPO_SERVER", common.DefaultRepoServerAddr), "Repo server address.")
command.Flags().IntVar(&repoServerTimeoutSeconds, "repo-server-timeout-seconds", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_REPO_SERVER_TIMEOUT_SECONDS", 60, 0, math.MaxInt64), "Repo server RPC call timeout seconds.")
command.Flags().StringVar(&commitServerAddress, "commit-server", env.StringFromEnv("ARGOCD_APPLICATION_CONTROLLER_COMMIT_SERVER", common.DefaultCommitServerAddr), "Commit server address.")
command.Flags().IntVar(&statusProcessors, "status-processors", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_STATUS_PROCESSORS", 20, 0, math.MaxInt32), "Number of application status processors")
command.Flags().IntVar(&operationProcessors, "operation-processors", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_OPERATION_PROCESSORS", 10, 0, math.MaxInt32), "Number of application operation processors")
command.Flags().StringVar(&cmdutil.LogFormat, "logformat", env.StringFromEnv("ARGOCD_APPLICATION_CONTROLLER_LOGFORMAT", "text"), "Set the logging format. One of: text|json")
@@ -258,6 +268,7 @@ func NewCommand() *cobra.Command {
command.Flags().IntVar(&selfHealBackoffTimeoutSeconds, "self-heal-backoff-timeout-seconds", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_BACKOFF_TIMEOUT_SECONDS", 2, 0, math.MaxInt32), "Specifies initial timeout of exponential backoff between self heal attempts")
command.Flags().IntVar(&selfHealBackoffFactor, "self-heal-backoff-factor", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_BACKOFF_FACTOR", 3, 0, math.MaxInt32), "Specifies factor of exponential timeout between application self heal attempts")
command.Flags().IntVar(&selfHealBackoffCapSeconds, "self-heal-backoff-cap-seconds", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_BACKOFF_CAP_SECONDS", 300, 0, math.MaxInt32), "Specifies max timeout of exponential backoff between application self heal attempts")
command.Flags().IntVar(&selfHealBackoffCooldownSeconds, "self-heal-backoff-cooldown-seconds", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_BACKOFF_COOLDOWN_SECONDS", 330, 0, math.MaxInt32), "Specifies period of time the app needs to stay synced before the self heal backoff can reset")
command.Flags().IntVar(&syncTimeout, "sync-timeout", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_SYNC_TIMEOUT", 0, 0, math.MaxInt32), "Specifies the timeout after which a sync would be terminated. 0 means no timeout (default 0).")
command.Flags().Int64Var(&kubectlParallelismLimit, "kubectl-parallelism-limit", env.ParseInt64FromEnv("ARGOCD_APPLICATION_CONTROLLER_KUBECTL_PARALLELISM_LIMIT", 20, 0, math.MaxInt64), "Number of allowed concurrent kubectl fork/execs. Any value less than 1 means no limit.")
command.Flags().BoolVar(&repoServerPlaintext, "repo-server-plaintext", env.ParseBoolFromEnv("ARGOCD_APPLICATION_CONTROLLER_REPO_SERVER_PLAINTEXT", false), "Disable TLS on connections to repo server")
@@ -285,7 +296,7 @@ func NewCommand() *cobra.Command {
command.Flags().DurationVar(&ignoreNormalizerOpts.JQExecutionTimeout, "ignore-normalizer-jq-execution-timeout-seconds", env.ParseDurationFromEnv("ARGOCD_IGNORE_NORMALIZER_JQ_TIMEOUT", 0*time.Second, 0, math.MaxInt64), "Set ignore normalizer JQ execution timeout")
// argocd k8s event logging flag
command.Flags().StringSliceVar(&enableK8sEvent, "enable-k8s-event", env.StringsFromEnv("ARGOCD_ENABLE_K8S_EVENT", argo.DefaultEnableEventList(), ","), "Enable ArgoCD to use k8s event. For disabling all events, set the value as `none`. (e.g --enable-k8s-event=none), For enabling specific events, set the value as `event reason`. (e.g --enable-k8s-event=StatusRefreshed,ResourceCreated)")
command.Flags().BoolVar(&hydratorEnabled, "hydrator-enabled", env.ParseBoolFromEnv("ARGOCD_HYDRATOR_ENABLED", false), "Feature flag to enable Hydrator. Default (\"false\")")
cacheSource = appstatecache.AddCacheFlagsToCmd(&command, cacheutil.Options{
OnClientCreated: func(client *redis.Client) {
redisClient = client

View File

@@ -87,6 +87,7 @@ func NewCommand() *cobra.Command {
applicationNamespaces []string
enableProxyExtension bool
webhookParallelism int
hydratorEnabled bool
// ApplicationSet
enableNewGitFileGlobbing bool
@@ -243,6 +244,7 @@ func NewCommand() *cobra.Command {
EnableProxyExtension: enableProxyExtension,
WebhookParallelism: webhookParallelism,
EnableK8sEvent: enableK8sEvent,
HydratorEnabled: hydratorEnabled,
}
appsetOpts := server.ApplicationSetOpts{
@@ -321,6 +323,7 @@ func NewCommand() *cobra.Command {
command.Flags().BoolVar(&enableProxyExtension, "enable-proxy-extension", env.ParseBoolFromEnv("ARGOCD_SERVER_ENABLE_PROXY_EXTENSION", false), "Enable Proxy Extension feature")
command.Flags().IntVar(&webhookParallelism, "webhook-parallelism-limit", env.ParseNumFromEnv("ARGOCD_SERVER_WEBHOOK_PARALLELISM_LIMIT", 50, 1, 1000), "Number of webhook requests processed concurrently")
command.Flags().StringSliceVar(&enableK8sEvent, "enable-k8s-event", env.StringsFromEnv("ARGOCD_ENABLE_K8S_EVENT", argo.DefaultEnableEventList(), ","), "Enable ArgoCD to use k8s event. For disabling all events, set the value as `none`. (e.g --enable-k8s-event=none), For enabling specific events, set the value as `event reason`. (e.g --enable-k8s-event=StatusRefreshed,ResourceCreated)")
command.Flags().BoolVar(&hydratorEnabled, "hydrator-enabled", env.ParseBoolFromEnv("ARGOCD_HYDRATOR_ENABLED", false), "Feature flag to enable Hydrator. Default (\"false\")")
// Flags related to the applicationSet component.
command.Flags().StringVar(&scmRootCAPath, "appset-scm-root-ca-path", env.StringFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_SCM_ROOT_CA_PATH", ""), "Provide Root CA Path for self-signed TLS Certificates")
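Both the application controller and the API server gate the feature behind the same flag, with the default read from ARGOCD_HYDRATOR_ENABLED. A simplified sketch of the flag-plus-environment pattern; parseBoolFromEnv below stands in for the project's env.ParseBoolFromEnv helper:

package main

import (
	"fmt"
	"os"
	"strconv"

	"github.com/spf13/cobra"
)

// parseBoolFromEnv is a simplified stand-in for env.ParseBoolFromEnv.
func parseBoolFromEnv(key string, def bool) bool {
	if v, ok := os.LookupEnv(key); ok {
		if b, err := strconv.ParseBool(v); err == nil {
			return b
		}
	}
	return def
}

func main() {
	var hydratorEnabled bool
	cmd := &cobra.Command{
		Use: "demo",
		Run: func(_ *cobra.Command, _ []string) {
			fmt.Println("hydrator enabled:", hydratorEnabled)
		},
	}
	cmd.Flags().BoolVar(&hydratorEnabled, "hydrator-enabled",
		parseBoolFromEnv("ARGOCD_HYDRATOR_ENABLED", false),
		"Feature flag to enable Hydrator")
	_ = cmd.Execute()
}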

View File

@@ -16,12 +16,13 @@ import (
"golang.org/x/term"
"sigs.k8s.io/yaml"
"github.com/argoproj/argo-cd/v2/util/rbac"
"github.com/argoproj/argo-cd/v2/cmd/argocd/commands/headless"
"github.com/argoproj/argo-cd/v2/cmd/argocd/commands/utils"
argocdclient "github.com/argoproj/argo-cd/v2/pkg/apiclient"
accountpkg "github.com/argoproj/argo-cd/v2/pkg/apiclient/account"
"github.com/argoproj/argo-cd/v2/pkg/apiclient/session"
"github.com/argoproj/argo-cd/v2/server/rbacpolicy"
"github.com/argoproj/argo-cd/v2/util/cli"
"github.com/argoproj/argo-cd/v2/util/errors"
"github.com/argoproj/argo-cd/v2/util/io"
@@ -218,7 +219,7 @@ argocd account can-i create clusters '*'
Actions: %v
Resources: %v
`, rbacpolicy.Actions, rbacpolicy.Resources),
`, rbac.Actions, rbac.Resources),
Run: func(c *cobra.Command, args []string) {
ctx := c.Context()
@@ -262,7 +263,7 @@ func NewAccountListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comman
Use: "list",
Short: "List accounts",
Example: "argocd account list",
Run: func(c *cobra.Command, args []string) {
Run: func(c *cobra.Command, _ []string) {
ctx := c.Context()
conn, client := headless.NewClientOrDie(clientOpts, c).NewAccountClientOrDie()
@@ -309,7 +310,7 @@ argocd account get
# Get details for an account by name
argocd account get --account <account-name>`,
Run: func(c *cobra.Command, args []string) {
Run: func(c *cobra.Command, _ []string) {
ctx := c.Context()
clientset := headless.NewClientOrDie(clientOpts, c)
@@ -358,7 +359,7 @@ func printAccountDetails(acc *accountpkg.Account) {
expiresAt := time.Unix(t.ExpiresAt, 0)
expiresAtFormatted = expiresAt.Format(time.RFC3339)
if expiresAt.Before(time.Now()) {
expiresAtFormatted = fmt.Sprintf("%s (expired)", expiresAtFormatted)
expiresAtFormatted = expiresAtFormatted + " (expired)"
}
}
@@ -382,7 +383,7 @@ argocd account generate-token
# Generate token for the account with the specified name
argocd account generate-token --account <account-name>`,
Run: func(c *cobra.Command, args []string) {
Run: func(c *cobra.Command, _ []string) {
ctx := c.Context()
clientset := headless.NewClientOrDie(clientOpts, c)

View File

@@ -9,6 +9,7 @@ import (
"sort"
"time"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
"github.com/spf13/cobra"
apiv1 "k8s.io/api/core/v1"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -402,7 +403,26 @@ func reconcileApplications(
)
appStateManager := controller.NewAppStateManager(
argoDB, appClientset, repoServerClient, namespace, kubeutil.NewKubectl(), settingsMgr, stateCache, projInformer, server, cache, time.Second, argo.NewResourceTracking(), false, 0, serverSideDiff, ignoreNormalizerOpts)
argoDB,
appClientset,
repoServerClient,
namespace,
kubeutil.NewKubectl(),
func(_ string) (kube.CleanupFunc, error) {
return func() {}, nil
},
settingsMgr,
stateCache,
projInformer,
server,
cache,
time.Second,
argo.NewResourceTracking(),
false,
0,
serverSideDiff,
ignoreNormalizerOpts,
)
appsList, err := appClientset.ArgoprojV1alpha1().Applications(namespace).List(ctx, v1.ListOptions{LabelSelector: selector})
if err != nil {

View File

@@ -91,7 +91,7 @@ func TestGetReconcileResults_Refresh(t *testing.T) {
appClientset := appfake.NewSimpleClientset(app, proj)
deployment := test.NewDeployment()
kubeClientset := kubefake.NewSimpleClientset(deployment, &cm)
kubeClientset := kubefake.NewClientset(deployment, &cm)
clusterCache := clustermocks.ClusterCache{}
clusterCache.On("IsNamespaced", mock.Anything).Return(true, nil)
clusterCache.On("GetGVKParser", mock.Anything).Return(nil)

View File

@@ -183,13 +183,12 @@ func getControllerReplicas(ctx context.Context, kubeClient *kubernetes.Clientset
func NewClusterShardsCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
shard int
replicas int
shardingAlgorithm string
clientConfig clientcmd.ClientConfig
cacheSrc func() (*appstatecache.Cache, error)
portForwardRedis bool
redisCompressionStr string
shard int
replicas int
shardingAlgorithm string
clientConfig clientcmd.ClientConfig
cacheSrc func() (*appstatecache.Cache, error)
portForwardRedis bool
)
command := cobra.Command{
Use: "shards",
@@ -213,7 +212,7 @@ func NewClusterShardsCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comm
if replicas == 0 {
return
}
clusters, err := loadClusters(ctx, kubeClient, appClient, replicas, shardingAlgorithm, namespace, portForwardRedis, cacheSrc, shard, clientOpts.RedisName, clientOpts.RedisHaProxyName, redisCompressionStr)
clusters, err := loadClusters(ctx, kubeClient, appClient, replicas, shardingAlgorithm, namespace, portForwardRedis, cacheSrc, shard, clientOpts.RedisName, clientOpts.RedisHaProxyName, clientOpts.RedisCompression)
errors.CheckError(err)
if len(clusters) == 0 {
return
@@ -234,7 +233,6 @@ func NewClusterShardsCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comm
// we can ignore unchecked error here as the command will be parsed again and checked when command.Execute() is run later
// nolint:errcheck
command.ParseFlags(os.Args[1:])
redisCompressionStr, _ = command.Flags().GetString(cacheutil.CLIFlagRedisCompress)
return &command
}
@@ -466,13 +464,12 @@ func NewClusterDisableNamespacedMode() *cobra.Command {
func NewClusterStatsCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
shard int
replicas int
shardingAlgorithm string
clientConfig clientcmd.ClientConfig
cacheSrc func() (*appstatecache.Cache, error)
portForwardRedis bool
redisCompressionStr string
shard int
replicas int
shardingAlgorithm string
clientConfig clientcmd.ClientConfig
cacheSrc func() (*appstatecache.Cache, error)
portForwardRedis bool
)
command := cobra.Command{
Use: "stats",
@@ -502,7 +499,7 @@ argocd admin cluster stats target-cluster`,
replicas, err = getControllerReplicas(ctx, kubeClient, namespace, clientOpts.AppControllerName)
errors.CheckError(err)
}
clusters, err := loadClusters(ctx, kubeClient, appClient, replicas, shardingAlgorithm, namespace, portForwardRedis, cacheSrc, shard, clientOpts.RedisName, clientOpts.RedisHaProxyName, redisCompressionStr)
clusters, err := loadClusters(ctx, kubeClient, appClient, replicas, shardingAlgorithm, namespace, portForwardRedis, cacheSrc, shard, clientOpts.RedisName, clientOpts.RedisHaProxyName, clientOpts.RedisCompression)
errors.CheckError(err)
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
@@ -524,7 +521,6 @@ argocd admin cluster stats target-cluster`,
// we can ignore unchecked error here as the command will be parsed again and checked when command.Execute() is run later
// nolint:errcheck
command.ParseFlags(os.Args[1:])
redisCompressionStr, _ = command.Flags().GetString(cacheutil.CLIFlagRedisCompress)
return &command
}
@@ -617,7 +613,7 @@ func NewGenClusterConfigCommand(pathOpts *clientcmd.PathOptions) *cobra.Command
clientConfig := clientcmd.NewDefaultClientConfig(*cfgAccess, &overrides)
conf, err := clientConfig.ClientConfig()
errors.CheckError(err)
kubeClientset := fake.NewSimpleClientset()
kubeClientset := fake.NewClientset()
var awsAuthConf *v1alpha1.AWSAuthConfig
var execProviderConf *v1alpha1.ExecProviderConfig

View File

@@ -12,17 +12,14 @@ import (
"github.com/argoproj/argo-cd/v2/cmd/argocd/commands/initialize"
"github.com/argoproj/argo-cd/v2/common"
argocdclient "github.com/argoproj/argo-cd/v2/pkg/apiclient"
"github.com/argoproj/argo-cd/v2/util/cache"
"github.com/argoproj/argo-cd/v2/util/env"
"github.com/argoproj/argo-cd/v2/util/errors"
)
func NewDashboardCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
port int
address string
compressionStr string
clientConfig clientcmd.ClientConfig
port int
address string
clientConfig clientcmd.ClientConfig
)
cmd := &cobra.Command{
Use: "dashboard",
@@ -30,10 +27,8 @@ func NewDashboardCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command
Run: func(cmd *cobra.Command, args []string) {
ctx := cmd.Context()
compression, err := cache.CompressionTypeFromString(compressionStr)
errors.CheckError(err)
clientOpts.Core = true
errors.CheckError(headless.MaybeStartLocalServer(ctx, clientOpts, initialize.RetrieveContextIfChanged(cmd.Flag("context")), &port, &address, compression, clientConfig))
errors.CheckError(headless.MaybeStartLocalServer(ctx, clientOpts, initialize.RetrieveContextIfChanged(cmd.Flag("context")), &port, &address, clientConfig))
println(fmt.Sprintf("Argo CD UI is available at http://%s:%d", address, port))
<-ctx.Done()
},
@@ -50,6 +45,5 @@ $ argocd admin dashboard --redis-compress gzip
clientConfig = cli.AddKubectlFlagsToSet(cmd.Flags())
cmd.Flags().IntVar(&port, "port", common.DefaultPortAPIServer, "Listen on given port")
cmd.Flags().StringVar(&address, "address", common.DefaultAddressAdminDashboard, "Listen on given address")
cmd.Flags().StringVar(&compressionStr, "redis-compress", env.StringFromEnv("REDIS_COMPRESSION", string(cache.RedisCompressionGZip)), "Enable this if the application controller is configured with redis compression enabled. (possible values: gzip, none)")
return cmd
}

View File

@@ -150,7 +150,7 @@ func NewGenRepoSpecCommand() *cobra.Command {
},
},
}
kubeClientset := fake.NewSimpleClientset(argoCDCM)
kubeClientset := fake.NewClientset(argoCDCM)
settingsMgr := settings.NewSettingsManager(ctx, kubeClientset, ArgoCDNamespace)
argoDB := db.NewDB(ArgoCDNamespace, settingsMgr, kubeClientset)

View File

@@ -119,7 +119,7 @@ func (opts *settingsOpts) createSettingsManager(ctx context.Context) (*settings.
}
}
setSettingsMeta(argocdSecret)
clientset := fake.NewSimpleClientset(argocdSecret, argocdCM)
clientset := fake.NewClientset(argocdSecret, argocdCM)
manager := settings.NewSettingsManager(ctx, clientset, "default")
errors.CheckError(manager.ResyncInformers())

View File

@@ -9,13 +9,12 @@ import (
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
corev1 "k8s.io/api/core/v1"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
"sigs.k8s.io/yaml"
"github.com/argoproj/argo-cd/v2/common"
"github.com/argoproj/argo-cd/v2/server/rbacpolicy"
"github.com/argoproj/argo-cd/v2/util/assets"
"github.com/argoproj/argo-cd/v2/util/cli"
"github.com/argoproj/argo-cd/v2/util/errors"
@@ -28,94 +27,85 @@ type rbacTrait struct {
allowPath bool
}
// Provide a mapping of short-hand resource names to their RBAC counterparts
// Provide a mapping of shorthand resource names to their RBAC counterparts
var resourceMap = map[string]string{
"account": rbacpolicy.ResourceAccounts,
"app": rbacpolicy.ResourceApplications,
"apps": rbacpolicy.ResourceApplications,
"application": rbacpolicy.ResourceApplications,
"applicationsets": rbacpolicy.ResourceApplicationSets,
"cert": rbacpolicy.ResourceCertificates,
"certs": rbacpolicy.ResourceCertificates,
"certificate": rbacpolicy.ResourceCertificates,
"cluster": rbacpolicy.ResourceClusters,
"extension": rbacpolicy.ResourceExtensions,
"gpgkey": rbacpolicy.ResourceGPGKeys,
"key": rbacpolicy.ResourceGPGKeys,
"log": rbacpolicy.ResourceLogs,
"logs": rbacpolicy.ResourceLogs,
"exec": rbacpolicy.ResourceExec,
"proj": rbacpolicy.ResourceProjects,
"projs": rbacpolicy.ResourceProjects,
"project": rbacpolicy.ResourceProjects,
"repo": rbacpolicy.ResourceRepositories,
"repos": rbacpolicy.ResourceRepositories,
"repository": rbacpolicy.ResourceRepositories,
}
var projectScoped = map[string]bool{
rbacpolicy.ResourceApplications: true,
rbacpolicy.ResourceApplicationSets: true,
rbacpolicy.ResourceLogs: true,
rbacpolicy.ResourceExec: true,
rbacpolicy.ResourceClusters: true,
rbacpolicy.ResourceRepositories: true,
"account": rbac.ResourceAccounts,
"app": rbac.ResourceApplications,
"apps": rbac.ResourceApplications,
"application": rbac.ResourceApplications,
"applicationsets": rbac.ResourceApplicationSets,
"cert": rbac.ResourceCertificates,
"certs": rbac.ResourceCertificates,
"certificate": rbac.ResourceCertificates,
"cluster": rbac.ResourceClusters,
"extension": rbac.ResourceExtensions,
"gpgkey": rbac.ResourceGPGKeys,
"key": rbac.ResourceGPGKeys,
"log": rbac.ResourceLogs,
"logs": rbac.ResourceLogs,
"exec": rbac.ResourceExec,
"proj": rbac.ResourceProjects,
"projs": rbac.ResourceProjects,
"project": rbac.ResourceProjects,
"repo": rbac.ResourceRepositories,
"repos": rbac.ResourceRepositories,
"repository": rbac.ResourceRepositories,
}
// List of allowed RBAC resources
var validRBACResourcesActions = map[string]actionTraitMap{
rbacpolicy.ResourceAccounts: accountsActions,
rbacpolicy.ResourceApplications: applicationsActions,
rbacpolicy.ResourceApplicationSets: defaultCRUDActions,
rbacpolicy.ResourceCertificates: defaultCRDActions,
rbacpolicy.ResourceClusters: defaultCRUDActions,
rbacpolicy.ResourceExtensions: extensionActions,
rbacpolicy.ResourceGPGKeys: defaultCRDActions,
rbacpolicy.ResourceLogs: logsActions,
rbacpolicy.ResourceExec: execActions,
rbacpolicy.ResourceProjects: defaultCRUDActions,
rbacpolicy.ResourceRepositories: defaultCRUDActions,
rbac.ResourceAccounts: accountsActions,
rbac.ResourceApplications: applicationsActions,
rbac.ResourceApplicationSets: defaultCRUDActions,
rbac.ResourceCertificates: defaultCRDActions,
rbac.ResourceClusters: defaultCRUDActions,
rbac.ResourceExtensions: extensionActions,
rbac.ResourceGPGKeys: defaultCRDActions,
rbac.ResourceLogs: logsActions,
rbac.ResourceExec: execActions,
rbac.ResourceProjects: defaultCRUDActions,
rbac.ResourceRepositories: defaultCRUDActions,
}
// List of allowed RBAC actions
var defaultCRUDActions = actionTraitMap{
rbacpolicy.ActionCreate: rbacTrait{},
rbacpolicy.ActionGet: rbacTrait{},
rbacpolicy.ActionUpdate: rbacTrait{},
rbacpolicy.ActionDelete: rbacTrait{},
rbac.ActionCreate: rbacTrait{},
rbac.ActionGet: rbacTrait{},
rbac.ActionUpdate: rbacTrait{},
rbac.ActionDelete: rbacTrait{},
}
var defaultCRDActions = actionTraitMap{
rbacpolicy.ActionCreate: rbacTrait{},
rbacpolicy.ActionGet: rbacTrait{},
rbacpolicy.ActionDelete: rbacTrait{},
rbac.ActionCreate: rbacTrait{},
rbac.ActionGet: rbacTrait{},
rbac.ActionDelete: rbacTrait{},
}
var applicationsActions = actionTraitMap{
rbacpolicy.ActionCreate: rbacTrait{},
rbacpolicy.ActionGet: rbacTrait{},
rbacpolicy.ActionUpdate: rbacTrait{allowPath: true},
rbacpolicy.ActionDelete: rbacTrait{allowPath: true},
rbacpolicy.ActionAction: rbacTrait{allowPath: true},
rbacpolicy.ActionOverride: rbacTrait{},
rbacpolicy.ActionSync: rbacTrait{},
rbac.ActionCreate: rbacTrait{},
rbac.ActionGet: rbacTrait{},
rbac.ActionUpdate: rbacTrait{allowPath: true},
rbac.ActionDelete: rbacTrait{allowPath: true},
rbac.ActionAction: rbacTrait{allowPath: true},
rbac.ActionOverride: rbacTrait{},
rbac.ActionSync: rbacTrait{},
}
var accountsActions = actionTraitMap{
rbacpolicy.ActionCreate: rbacTrait{},
rbacpolicy.ActionUpdate: rbacTrait{},
rbac.ActionCreate: rbacTrait{},
rbac.ActionUpdate: rbacTrait{},
}
var execActions = actionTraitMap{
rbacpolicy.ActionCreate: rbacTrait{},
rbac.ActionCreate: rbacTrait{},
}
var logsActions = actionTraitMap{
rbacpolicy.ActionGet: rbacTrait{},
rbac.ActionGet: rbacTrait{},
}
var extensionActions = actionTraitMap{
rbacpolicy.ActionInvoke: rbacTrait{},
rbac.ActionInvoke: rbacTrait{},
}
// NewRBACCommand is the command for 'rbac'
@@ -226,7 +216,7 @@ argocd admin settings rbac can someuser create application 'default/app' --defau
// even if there is no explicit RBAC allow, or if there is an explicit RBAC deny)
var isLogRbacEnforced func() bool
if nsOverride && policyFile == "" {
if resolveRBACResourceName(resource) == rbacpolicy.ResourceLogs {
if resolveRBACResourceName(resource) == rbac.ResourceLogs {
isLogRbacEnforced = func() bool {
if opts, ok := cmdCtx.(*settingsOpts); ok {
opts.loadClusterSettings = true
@@ -248,12 +238,11 @@ argocd admin settings rbac can someuser create application 'default/app' --defau
fmt.Println("Yes")
}
os.Exit(0)
} else {
if !quiet {
fmt.Println("No")
}
os.Exit(1)
}
if !quiet {
fmt.Println("No")
}
os.Exit(1)
},
}
clientConfig = cli.AddKubectlFlagsToCmd(command)
@@ -321,13 +310,11 @@ argocd admin settings rbac validate --namespace argocd
if err := rbac.ValidatePolicy(userPolicy); err == nil {
fmt.Printf("Policy is valid.\n")
os.Exit(0)
} else {
fmt.Printf("Policy is invalid: %v\n", err)
os.Exit(1)
}
} else {
log.Fatalf("Policy is empty or could not be loaded.")
fmt.Printf("Policy is invalid: %v\n", err)
os.Exit(1)
}
log.Fatalf("Policy is empty or could not be loaded.")
},
}
clientConfig = cli.AddKubectlFlagsToCmd(command)
@@ -402,7 +389,7 @@ func getPolicyFromConfigMap(cm *corev1.ConfigMap) (string, string, string) {
// getPolicyConfigMap fetches the RBAC config map from K8s cluster
func getPolicyConfigMap(ctx context.Context, client kubernetes.Interface, namespace string) (*corev1.ConfigMap, error) {
cm, err := client.CoreV1().ConfigMaps(namespace).Get(ctx, common.ArgoCDRBACConfigMapName, v1.GetOptions{})
cm, err := client.CoreV1().ConfigMaps(namespace).Get(ctx, common.ArgoCDRBACConfigMapName, metav1.GetOptions{})
if err != nil {
return nil, err
}
@@ -448,12 +435,12 @@ func checkPolicy(subject, action, resource, subResource, builtinPolicy, userPoli
// Some project scoped resources have a special notation - for simplicity's sake,
// if user gives no sub-resource (or specifies simple '*'), we construct
// the required notation by setting subresource to '*/*'.
if projectScoped[realResource] {
if rbac.ProjectScoped[realResource] {
if subResource == "*" || subResource == "" {
subResource = "*/*"
}
}
if realResource == rbacpolicy.ResourceLogs {
if realResource == rbac.ResourceLogs {
if isLogRbacEnforced != nil && !isLogRbacEnforced() {
return true
}
@@ -466,9 +453,8 @@ func checkPolicy(subject, action, resource, subResource, builtinPolicy, userPoli
func resolveRBACResourceName(name string) string {
if res, ok := resourceMap[name]; ok {
return res
} else {
return name
}
return name
}
// validateRBACResourceAction checks whether a given resource is a valid RBAC resource.

View File

@@ -7,14 +7,15 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
v1 "k8s.io/api/core/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes/fake"
restclient "k8s.io/client-go/rest"
"k8s.io/client-go/tools/clientcmd"
clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
"github.com/argoproj/argo-cd/v2/server/rbacpolicy"
"github.com/argoproj/argo-cd/v2/util/rbac"
"github.com/argoproj/argo-cd/v2/util/assets"
)
@@ -56,8 +57,8 @@ func Test_validateRBACResourceAction(t *testing.T) {
{
name: "Test valid resource and action",
args: args{
resource: rbacpolicy.ResourceApplications,
action: rbacpolicy.ActionCreate,
resource: rbac.ResourceApplications,
action: rbac.ActionCreate,
},
valid: true,
},
@@ -71,7 +72,7 @@ func Test_validateRBACResourceAction(t *testing.T) {
{
name: "Test invalid action",
args: args{
resource: rbacpolicy.ResourceApplications,
resource: rbac.ResourceApplications,
action: "invalid",
},
valid: false,
@@ -79,24 +80,24 @@ func Test_validateRBACResourceAction(t *testing.T) {
{
name: "Test invalid action for resource",
args: args{
resource: rbacpolicy.ResourceLogs,
action: rbacpolicy.ActionCreate,
resource: rbac.ResourceLogs,
action: rbac.ActionCreate,
},
valid: false,
},
{
name: "Test valid action with path",
args: args{
resource: rbacpolicy.ResourceApplications,
action: rbacpolicy.ActionAction + "/apps/Deployment/restart",
resource: rbac.ResourceApplications,
action: rbac.ActionAction + "/apps/Deployment/restart",
},
valid: true,
},
{
name: "Test invalid action with path",
args: args{
resource: rbacpolicy.ResourceApplications,
action: rbacpolicy.ActionGet + "/apps/Deployment/restart",
resource: rbac.ResourceApplications,
action: rbac.ActionGet + "/apps/Deployment/restart",
},
valid: false,
},
@@ -147,7 +148,7 @@ func Test_PolicyFromK8s(t *testing.T) {
ctx := context.Background()
require.NoError(t, err)
kubeclientset := fake.NewSimpleClientset(&v1.ConfigMap{
kubeclientset := fake.NewClientset(&corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Name: "argocd-rbac-cm",
Namespace: "argocd",
@@ -280,7 +281,7 @@ p, role:user, logs, get, .*/.*, allow
p, role:user, exec, create, .*/.*, allow
`
kubeclientset := fake.NewSimpleClientset(&v1.ConfigMap{
kubeclientset := fake.NewClientset(&corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Name: "argocd-rbac-cm",
Namespace: "argocd",

View File

@@ -45,7 +45,7 @@ func captureStdout(callback func()) (string, error) {
func newSettingsManager(data map[string]string) *settings.SettingsManager {
ctx := context.Background()
clientset := fake.NewSimpleClientset(&v1.ConfigMap{
clientset := fake.NewClientset(&v1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Namespace: "default",
Name: common.ArgoCDConfigMapName,
@@ -69,7 +69,7 @@ func newSettingsManager(data map[string]string) *settings.SettingsManager {
type fakeCmdContext struct {
mgr *settings.SettingsManager
// nolint:unused,structcheck
// nolint:unused
out bytes.Buffer
}

View File

@@ -112,6 +112,7 @@ type watchOpts struct {
suspended bool
degraded bool
delete bool
hydrated bool
}
// NewApplicationCreateCommand returns a new instance of an `argocd app create` command
@@ -1883,6 +1884,7 @@ func NewApplicationWaitCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
command.Flags().BoolVar(&watch.suspended, "suspended", false, "Wait for suspended")
command.Flags().BoolVar(&watch.degraded, "degraded", false, "Wait for degraded")
command.Flags().BoolVar(&watch.delete, "delete", false, "Wait for delete")
command.Flags().BoolVar(&watch.hydrated, "hydrated", false, "Wait for hydration operations")
command.Flags().StringVarP(&selector, "selector", "l", "", "Wait for apps by label. Supports '=', '==', '!=', in, notin, exists & not exists. Matching apps must satisfy all of the specified label constraints.")
command.Flags().StringArrayVar(&resources, "resource", []string{}, fmt.Sprintf("Sync only specific resources as GROUP%[1]sKIND%[1]sNAME or %[2]sGROUP%[1]sKIND%[1]sNAME. Fields may be blank and '*' can be used. This option may be specified repeatedly", resourceFieldDelimiter, resourceExcludeIndicator))
command.Flags().BoolVar(&watch.operation, "operation", false, "Wait for pending operations")
@@ -2450,7 +2452,7 @@ func groupResourceStates(app *argoappv1.Application, selectedResources []*argoap
}
// check if resource health, sync and operation statuses matches watch options
func checkResourceStatus(watch watchOpts, healthStatus string, syncStatus string, operationStatus *argoappv1.Operation) bool {
func checkResourceStatus(watch watchOpts, healthStatus string, syncStatus string, operationStatus *argoappv1.Operation, hydrationFinished bool) bool {
if watch.delete {
return false
}
@@ -2480,7 +2482,8 @@ func checkResourceStatus(watch watchOpts, healthStatus string, syncStatus string
synced := !watch.sync || syncStatus == string(argoappv1.SyncStatusCodeSynced)
operational := !watch.operation || operationStatus == nil
return synced && healthCheckPassed && operational
hydrated := !watch.hydrated || hydrationFinished
return synced && healthCheckPassed && operational && hydrated
}
// resourceParentChild gets the latest state of the app and the latest state of the app's resource tree and then
@@ -2644,13 +2647,15 @@ func waitOnApplicationStatus(ctx context.Context, acdClient argocdclient.Client,
}
}
hydrationFinished := app.Status.SourceHydrator.CurrentOperation != nil && app.Status.SourceHydrator.CurrentOperation.Phase == argoappv1.HydrateOperationPhaseHydrated && app.Status.SourceHydrator.CurrentOperation.SourceHydrator.DeepEquals(app.Status.SourceHydrator.LastSuccessfulOperation.SourceHydrator) && app.Status.SourceHydrator.CurrentOperation.DrySHA == app.Status.SourceHydrator.LastSuccessfulOperation.DrySHA
var selectedResourcesAreReady bool
// If selected resources are included, wait only on those resources, otherwise wait on the application as a whole.
if len(selectedResources) > 0 {
selectedResourcesAreReady = true
for _, state := range getResourceStates(app, selectedResources) {
resourceIsReady := checkResourceStatus(watch, state.Health, state.Status, appEvent.Application.Operation)
resourceIsReady := checkResourceStatus(watch, state.Health, state.Status, appEvent.Application.Operation, hydrationFinished)
if !resourceIsReady {
selectedResourcesAreReady = false
break
@@ -2658,7 +2663,7 @@ func waitOnApplicationStatus(ctx context.Context, acdClient argocdclient.Client,
}
} else {
// Wait on the application as a whole
selectedResourcesAreReady = checkResourceStatus(watch, string(app.Status.Health.Status), string(app.Status.Sync.Status), appEvent.Application.Operation)
selectedResourcesAreReady = checkResourceStatus(watch, string(app.Status.Health.Status), string(app.Status.Sync.Status), appEvent.Application.Operation, hydrationFinished)
}
if selectedResourcesAreReady && (!operationInProgress || !watch.operation) {
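The hydrationFinished predicate above packs four conditions into one line. Restated as a helper for readability: a sketch using the field names from the diff, with nil checks added on the assumption that both operation fields are pointers:

// hydrationFinished reports whether the current hydrate operation completed
// and matches the last successful one.
func hydrationFinished(app *argoappv1.Application) bool {
	cur := app.Status.SourceHydrator.CurrentOperation
	last := app.Status.SourceHydrator.LastSuccessfulOperation
	if cur == nil || last == nil {
		return false
	}
	return cur.Phase == argoappv1.HydrateOperationPhaseHydrated &&
		cur.SourceHydrator.DeepEquals(last.SourceHydrator) &&
		cur.DrySHA == last.DrySHA
}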

View File

@@ -1705,7 +1705,7 @@ func TestCheckResourceStatus(t *testing.T) {
suspended: true,
health: true,
degraded: true,
}, string(health.HealthStatusHealthy), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{})
}, string(health.HealthStatusHealthy), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{}, true)
assert.True(t, res)
})
t.Run("Degraded, Suspended and health status failed", func(t *testing.T) {
@@ -1713,57 +1713,57 @@ func TestCheckResourceStatus(t *testing.T) {
suspended: true,
health: true,
degraded: true,
}, string(health.HealthStatusProgressing), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{})
}, string(health.HealthStatusProgressing), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{}, true)
assert.False(t, res)
})
t.Run("Suspended and health status passed", func(t *testing.T) {
res := checkResourceStatus(watchOpts{
suspended: true,
health: true,
}, string(health.HealthStatusHealthy), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{})
}, string(health.HealthStatusHealthy), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{}, true)
assert.True(t, res)
})
t.Run("Suspended and health status failed", func(t *testing.T) {
res := checkResourceStatus(watchOpts{
suspended: true,
health: true,
}, string(health.HealthStatusProgressing), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{})
}, string(health.HealthStatusProgressing), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{}, true)
assert.False(t, res)
})
t.Run("Suspended passed", func(t *testing.T) {
res := checkResourceStatus(watchOpts{
suspended: true,
health: false,
}, string(health.HealthStatusSuspended), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{})
}, string(health.HealthStatusSuspended), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{}, true)
assert.True(t, res)
})
t.Run("Suspended failed", func(t *testing.T) {
res := checkResourceStatus(watchOpts{
suspended: true,
health: false,
}, string(health.HealthStatusProgressing), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{})
}, string(health.HealthStatusProgressing), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{}, true)
assert.False(t, res)
})
t.Run("Health passed", func(t *testing.T) {
res := checkResourceStatus(watchOpts{
suspended: false,
health: true,
}, string(health.HealthStatusHealthy), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{})
}, string(health.HealthStatusHealthy), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{}, true)
assert.True(t, res)
})
t.Run("Health failed", func(t *testing.T) {
res := checkResourceStatus(watchOpts{
suspended: false,
health: true,
}, string(health.HealthStatusProgressing), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{})
}, string(health.HealthStatusProgressing), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{}, true)
assert.False(t, res)
})
t.Run("Synced passed", func(t *testing.T) {
res := checkResourceStatus(watchOpts{}, string(health.HealthStatusProgressing), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{})
res := checkResourceStatus(watchOpts{}, string(health.HealthStatusProgressing), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{}, true)
assert.True(t, res)
})
t.Run("Synced failed", func(t *testing.T) {
res := checkResourceStatus(watchOpts{}, string(health.HealthStatusProgressing), string(v1alpha1.SyncStatusCodeOutOfSync), &v1alpha1.Operation{})
res := checkResourceStatus(watchOpts{}, string(health.HealthStatusProgressing), string(v1alpha1.SyncStatusCodeOutOfSync), &v1alpha1.Operation{}, true)
assert.True(t, res)
})
t.Run("Degraded passed", func(t *testing.T) {
@@ -1771,7 +1771,7 @@ func TestCheckResourceStatus(t *testing.T) {
suspended: false,
health: false,
degraded: true,
}, string(health.HealthStatusDegraded), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{})
}, string(health.HealthStatusDegraded), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{}, true)
assert.True(t, res)
})
t.Run("Degraded failed", func(t *testing.T) {
@@ -1779,7 +1779,7 @@ func TestCheckResourceStatus(t *testing.T) {
suspended: false,
health: false,
degraded: true,
}, string(health.HealthStatusProgressing), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{})
}, string(health.HealthStatusProgressing), string(v1alpha1.SyncStatusCodeSynced), &v1alpha1.Operation{}, true)
assert.False(t, res)
})
}

View File

@@ -14,7 +14,8 @@ import (
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
corev1 "k8s.io/api/core/v1"
metaV1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
runtimeUtil "k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/client-go/dynamic"
@@ -128,7 +129,7 @@ func (c *forwardRepoClientset) NewRepoServerClient() (io.Closer, repoapiclient.R
}
repoServerName := c.repoServerName
repoServererviceLabelSelector := common.LabelKeyComponentRepoServer + "=" + common.LabelValueComponentRepoServer
repoServerServices, err := c.kubeClientset.CoreV1().Services(c.namespace).List(context.Background(), v1.ListOptions{LabelSelector: repoServererviceLabelSelector})
repoServerServices, err := c.kubeClientset.CoreV1().Services(c.namespace).List(context.Background(), metaV1.ListOptions{LabelSelector: repoServererviceLabelSelector})
if err != nil {
c.err = err
return
@@ -176,7 +177,7 @@ func testAPI(ctx context.Context, clientOpts *apiclient.ClientOptions) error {
//
// If the clientOpts enables core mode, but the local config does not have core mode enabled, this function will
// not start the local server.
func MaybeStartLocalServer(ctx context.Context, clientOpts *apiclient.ClientOptions, ctxStr string, port *int, address *string, compression cache.RedisCompressionType, clientConfig clientcmd.ClientConfig) error {
func MaybeStartLocalServer(ctx context.Context, clientOpts *apiclient.ClientOptions, ctxStr string, port *int, address *string, clientConfig clientcmd.ClientConfig) error {
if clientConfig == nil {
flags := pflag.NewFlagSet("tmp", pflag.ContinueOnError)
clientConfig = cli.AddKubectlFlagsToSet(flags)
@@ -243,6 +244,10 @@ func MaybeStartLocalServer(ctx context.Context, clientOpts *apiclient.ClientOpti
if err != nil {
return fmt.Errorf("error adding argo resources to scheme: %w", err)
}
err = corev1.AddToScheme(scheme)
if err != nil {
return fmt.Errorf("error adding corev1 resources to scheme: %w", err)
}
controllerClientset, err := client.New(restConfig, client.Options{
Scheme: scheme,
})
@@ -265,7 +270,7 @@ func MaybeStartLocalServer(ctx context.Context, clientOpts *apiclient.ClientOpti
log.Warnf("Failed to fetch & set redis password for namespace %s: %v", namespace, err)
}
appstateCache := appstatecache.NewCache(cache.NewCache(&forwardCacheClient{namespace: namespace, context: ctxStr, compression: compression, redisHaProxyName: clientOpts.RedisHaProxyName, redisName: clientOpts.RedisName, redisPassword: redisOptions.Password}), time.Hour)
appstateCache := appstatecache.NewCache(cache.NewCache(&forwardCacheClient{namespace: namespace, context: ctxStr, compression: cache.RedisCompressionType(clientOpts.RedisCompression), redisHaProxyName: clientOpts.RedisHaProxyName, redisName: clientOpts.RedisName, redisPassword: redisOptions.Password}), time.Hour)
srv := server.NewServer(ctx, server.ArgoCDServerOpts{
EnableGZip: false,
Namespace: namespace,
@@ -316,7 +321,7 @@ func NewClientOrDie(opts *apiclient.ClientOptions, c *cobra.Command) apiclient.C
ctxStr := initialize.RetrieveContextIfChanged(c.Flag("context"))
// If we're in core mode, start the API server on the fly and configure the client `opts` to use it.
// If we're not in core mode, this function call will do nothing.
err := MaybeStartLocalServer(ctx, opts, ctxStr, nil, nil, cache.RedisCompressionNone, nil)
err := MaybeStartLocalServer(ctx, opts, ctxStr, nil, nil, nil)
if err != nil {
log.Fatal(err)
}
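
For reference, a minimal sketch of the new call path under the changed signature, assuming opts and ctxStr are prepared as in NewClientOrDie; the Redis compression mode now rides on the client options struct instead of a dedicated parameter:

// compression is read from clientOpts.RedisCompression inside MaybeStartLocalServer
opts.RedisCompression = string(cache.RedisCompressionGZip) // or "none"
if err := MaybeStartLocalServer(ctx, opts, ctxStr, nil, nil, nil); err != nil {
	log.Fatal(err)
}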

View File

@@ -11,6 +11,7 @@ import (
cmdutil "github.com/argoproj/argo-cd/v2/cmd/util"
"github.com/argoproj/argo-cd/v2/common"
argocdclient "github.com/argoproj/argo-cd/v2/pkg/apiclient"
"github.com/argoproj/argo-cd/v2/util/cache"
"github.com/argoproj/argo-cd/v2/util/cli"
"github.com/argoproj/argo-cd/v2/util/config"
"github.com/argoproj/argo-cd/v2/util/env"
@@ -87,6 +88,7 @@ func NewCommand() *cobra.Command {
command.PersistentFlags().StringVar(&clientOpts.RedisHaProxyName, "redis-haproxy-name", env.StringFromEnv(common.EnvRedisHaProxyName, common.DefaultRedisHaProxyName), fmt.Sprintf("Name of the Redis HA Proxy; set this or the %s environment variable when the HA Proxy's name label differs from the default, for example when installing via the Helm chart", common.EnvRedisHaProxyName))
command.PersistentFlags().StringVar(&clientOpts.RedisName, "redis-name", env.StringFromEnv(common.EnvRedisName, common.DefaultRedisName), fmt.Sprintf("Name of the Redis deployment; set this or the %s environment variable when the Redis's name label differs from the default, for example when installing via the Helm chart", common.EnvRedisName))
command.PersistentFlags().StringVar(&clientOpts.RepoServerName, "repo-server-name", env.StringFromEnv(common.EnvRepoServerName, common.DefaultRepoServerName), fmt.Sprintf("Name of the Argo CD Repo server; set this or the %s environment variable when the server's name label differs from the default, for example when installing via the Helm chart", common.EnvRepoServerName))
command.PersistentFlags().StringVar(&clientOpts.RedisCompression, "redis-compress", env.StringFromEnv("REDIS_COMPRESSION", string(cache.RedisCompressionGZip)), "Enable this if the application controller is configured with redis compression enabled. (possible values: gzip, none)")
command.PersistentFlags().BoolVar(&clientOpts.PromptsEnabled, "prompts-enabled", localconfig.GetPromptsEnabled(true), "Force optional interactive prompts to be enabled or disabled, overriding local configuration. If not specified, the local configuration value will be used, which is false by default.")
clientOpts.KubeOverrides = &clientcmd.ConfigOverrides{}

View File

@@ -53,7 +53,7 @@ func (p *Prompt) ConfirmBaseOnCount(messageForSingle string, messageForArray str
}
if count == 1 {
return p.Confirm(messageForSingle), true
return p.Confirm(messageForSingle), false
}
return p.ConfirmAll(messageForArray)
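
A usage sketch of the corrected semantics, assuming a Prompt with prompts enabled: the second return value reports whether the answer applies to all items, so a single-item prompt now returns false for it.

p := &Prompt{enabled: true}
confirmed, appliesToAll := p.ConfirmBaseOnCount("Proceed?", "Proceed all?", 1)
// confirmed carries the user's y/n answer; appliesToAll is always false when count == 1
_ = confirmed
_ = appliesToAll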

View File

@@ -38,11 +38,47 @@ func TestConfirmBaseOnCountPromptDisabled(t *testing.T) {
assert.True(t, result2)
}
func TestConfirmBaseOnCountZeroApps(t *testing.T) {
p := &Prompt{enabled: true}
result1, result2 := p.ConfirmBaseOnCount("Proceed?", "Process all?", 0)
assert.True(t, result1)
assert.True(t, result2)
func TestConfirmBaseOnCount(t *testing.T) {
tests := []struct {
input string
output bool
count int
}{
{
input: "y\n",
output: true,
count: 0,
},
{
input: "y\n",
output: true,
count: 1,
},
{
input: "n\n",
output: false,
count: 1,
},
}
origStdin := os.Stdin
for _, tt := range tests {
tmpFile, err := writeToStdin(tt.input)
require.NoError(t, err)
p := &Prompt{enabled: true}
result1, result2 := p.ConfirmBaseOnCount("Proceed?", "Proceed all?", tt.count)
assert.Equal(t, tt.output, result1)
if tt.count == 1 {
assert.False(t, result2)
} else {
assert.Equal(t, tt.output, result2)
}
_ = tmpFile.Close()
os.Remove(tmpFile.Name())
}
os.Stdin = origStdin
}
func TestConfirmPrompt(t *testing.T) {
@@ -62,8 +98,8 @@ func TestConfirmPrompt(t *testing.T) {
p := &Prompt{enabled: true}
result := p.Confirm("Are you sure you want to run this command? (y/n) \n")
assert.Equal(t, c.output, result)
os.Remove(tmpFile.Name())
_ = tmpFile.Close()
os.Remove(tmpFile.Name())
}
os.Stdin = origStdin
@@ -89,8 +125,8 @@ func TestConfirmAllPrompt(t *testing.T) {
confirm, confirmAll := p.ConfirmAll("Are you sure you want to run this command? (y/n) \n")
assert.Equal(t, c.confirm, confirm)
assert.Equal(t, c.confirmAll, confirmAll)
os.Remove(tmpFile.Name())
_ = tmpFile.Close()
os.Remove(tmpFile.Name())
}
os.Stdin = origStdin

View File

@@ -91,6 +91,12 @@ type AppOptions struct {
retryBackoffFactor int64
ref string
SourceName string
drySourceRepo string
drySourceRevision string
drySourcePath string
syncSourceBranch string
syncSourcePath string
hydrateToBranch string
}
// SetAutoMaxProcs sets the GOMAXPROCS value based on the binary name.
@@ -112,6 +118,12 @@ func AddAppFlags(command *cobra.Command, opts *AppOptions) {
command.Flags().StringVar(&opts.chart, "helm-chart", "", "Helm Chart name")
command.Flags().StringVar(&opts.env, "env", "", "Application environment to monitor")
command.Flags().StringVar(&opts.revision, "revision", "", "The tracking source branch, tag, commit or Helm chart version the application will sync to")
command.Flags().StringVar(&opts.drySourceRepo, "dry-source-repo", "", "Repository URL of the app dry source")
command.Flags().StringVar(&opts.drySourceRevision, "dry-source-revision", "", "Revision of the app dry source")
command.Flags().StringVar(&opts.drySourcePath, "dry-source-path", "", "Path in repository to the app directory for the dry source")
command.Flags().StringVar(&opts.syncSourceBranch, "sync-source-branch", "", "The branch from which the app will sync")
command.Flags().StringVar(&opts.syncSourcePath, "sync-source-path", "", "The path in the repository from which the app will sync")
command.Flags().StringVar(&opts.hydrateToBranch, "hydrate-to-branch", "", "The branch to hydrate the app to")
command.Flags().IntVar(&opts.revisionHistoryLimit, "revision-history-limit", argoappv1.RevisionHistoryLimit, "How many items to keep in revision history")
command.Flags().StringVar(&opts.destServer, "dest-server", "", "K8s cluster URL (e.g. https://kubernetes.default.svc)")
command.Flags().StringVar(&opts.destName, "dest-name", "", "K8s cluster Name (e.g. minikube)")
@@ -175,21 +187,27 @@ func SetAppSpecOptions(flags *pflag.FlagSet, spec *argoappv1.ApplicationSpec, ap
if flags == nil {
return visited
}
source := spec.GetSourcePtrByPosition(sourcePosition)
if source == nil {
source = &argoappv1.ApplicationSource{}
}
source, visited = ConstructSource(source, *appOpts, flags)
if spec.HasMultipleSources() {
if sourcePosition == 0 {
spec.Sources[sourcePosition] = *source
} else if sourcePosition > 0 {
spec.Sources[sourcePosition-1] = *source
} else {
spec.Sources = append(spec.Sources, *source)
}
var h *argoappv1.SourceHydrator
h, hasHydratorFlag := constructSourceHydrator(spec.SourceHydrator, *appOpts, flags)
if hasHydratorFlag {
spec.SourceHydrator = h
} else {
spec.Source = source
source := spec.GetSourcePtrByPosition(sourcePosition)
if source == nil {
source = &argoappv1.ApplicationSource{}
}
source, visited = ConstructSource(source, *appOpts, flags)
if spec.HasMultipleSources() {
if sourcePosition == 0 {
spec.Sources[sourcePosition] = *source
} else if sourcePosition > 0 {
spec.Sources[sourcePosition-1] = *source
} else {
spec.Sources = append(spec.Sources, *source)
}
} else {
spec.Source = source
}
}
flags.Visit(func(f *pflag.Flag) {
visited++
@@ -592,9 +610,7 @@ func constructAppsBaseOnName(appName string, labels, annotations, args []string,
Name: appName,
Namespace: appNs,
},
Spec: argoappv1.ApplicationSpec{
Source: &argoappv1.ApplicationSource{},
},
Spec: argoappv1.ApplicationSpec{},
}
SetAppSpecOptions(flags, &app.Spec, &appOpts, 0)
SetParameterOverrides(app, appOpts.Parameters, 0)
@@ -768,6 +784,47 @@ func ConstructSource(source *argoappv1.ApplicationSource, appOpts AppOptions, fl
return source, visited
}
// constructSourceHydrator constructs a source hydrator from the command line flags. It returns the modified source
// hydrator and a boolean indicating if any hydrator flags were set. We return instead of just modifying the source
// hydrator in place because the given hydrator `h` might be nil. In that case, we need to create a new source hydrator
// and return it.
func constructSourceHydrator(h *argoappv1.SourceHydrator, appOpts AppOptions, flags *pflag.FlagSet) (*argoappv1.SourceHydrator, bool) {
hasHydratorFlag := false
ensureNotNil := func(notEmpty bool) {
hasHydratorFlag = true
if notEmpty && h == nil {
h = &argoappv1.SourceHydrator{}
}
}
flags.Visit(func(f *pflag.Flag) {
switch f.Name {
case "dry-source-repo":
ensureNotNil(appOpts.drySourceRepo != "")
h.DrySource.RepoURL = appOpts.drySourceRepo
case "dry-source-path":
ensureNotNil(appOpts.drySourcePath != "")
h.DrySource.Path = appOpts.drySourcePath
case "dry-source-revision":
ensureNotNil(appOpts.drySourceRevision != "")
h.DrySource.TargetRevision = appOpts.drySourceRevision
case "sync-source-branch":
ensureNotNil(appOpts.syncSourceBranch != "")
h.SyncSource.TargetBranch = appOpts.syncSourceBranch
case "sync-source-path":
ensureNotNil(appOpts.syncSourcePath != "")
h.SyncSource.Path = appOpts.syncSourcePath
case "hydrate-to-branch":
ensureNotNil(appOpts.hydrateToBranch != "")
if appOpts.hydrateToBranch == "" {
h.HydrateTo = nil
} else {
h.HydrateTo = &argoappv1.HydrateTo{TargetBranch: appOpts.hydrateToBranch}
}
}
})
return h, hasHydratorFlag
}
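
To make the flag-to-spec mapping concrete, a sketch of the SourceHydrator the new flags produce (field names taken from the accesses in constructSourceHydrator above; the repository URL, paths, and branches are illustrative values echoed by the test further below):

// --dry-source-repo/--dry-source-path/--dry-source-revision,
// --sync-source-branch/--sync-source-path and --hydrate-to-branch
// together yield:
spec.SourceHydrator = &argoappv1.SourceHydrator{
	DrySource: argoappv1.DrySource{
		RepoURL:        "https://github.com/argoproj/argocd-example-apps",
		Path:           "apps",
		TargetRevision: "HEAD",
	},
	SyncSource: argoappv1.SyncSource{
		TargetBranch: "env/test",
		Path:         "apps",
	},
	HydrateTo: &argoappv1.HydrateTo{TargetBranch: "env/test-next"},
}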
func mergeLabels(app *argoappv1.Application, labels []string) {
mapLabels, err := label.Parse(labels)
errors.CheckError(err)

View File

@@ -295,6 +295,28 @@ func Test_setAppSpecOptions(t *testing.T) {
require.NoError(t, f.SetFlag("helm-api-versions", "v2"))
assert.Equal(t, []string{"v1", "v2"}, f.spec.Source.Helm.APIVersions)
})
t.Run("source hydrator", func(t *testing.T) {
require.NoError(t, f.SetFlag("dry-source-repo", "https://github.com/argoproj/argocd-example-apps"))
assert.Equal(t, "https://github.com/argoproj/argocd-example-apps", f.spec.SourceHydrator.DrySource.RepoURL)
require.NoError(t, f.SetFlag("dry-source-path", "apps"))
assert.Equal(t, "apps", f.spec.SourceHydrator.DrySource.Path)
require.NoError(t, f.SetFlag("dry-source-revision", "HEAD"))
assert.Equal(t, "HEAD", f.spec.SourceHydrator.DrySource.TargetRevision)
require.NoError(t, f.SetFlag("sync-source-branch", "env/test"))
assert.Equal(t, "env/test", f.spec.SourceHydrator.SyncSource.TargetBranch)
require.NoError(t, f.SetFlag("sync-source-path", "apps"))
assert.Equal(t, "apps", f.spec.SourceHydrator.SyncSource.Path)
require.NoError(t, f.SetFlag("hydrate-to-branch", "env/test-next"))
assert.Equal(t, "env/test-next", f.spec.SourceHydrator.HydrateTo.TargetBranch)
require.NoError(t, f.SetFlag("hydrate-to-branch", ""))
assert.Nil(t, f.spec.SourceHydrator.HydrateTo)
})
}
func newMultiSourceAppOptionsFixture() *appOptionsFixture {

View File

@@ -162,7 +162,7 @@ func TestGetKubePublicEndpoint(t *testing.T) {
if tc.clusterInfo != nil {
objects = append(objects, tc.clusterInfo)
}
clientset := fake.NewSimpleClientset(objects...)
clientset := fake.NewClientset(objects...)
endpoint, err := GetKubePublicEndpoint(clientset)
if tc.expectError {
require.Error(t, err)

View File

@@ -18,11 +18,13 @@ func TestProjectOpts_ResourceLists(t *testing.T) {
}
assert.ElementsMatch(t,
[]v1.GroupKind{{Kind: "ConfigMap"}}, opts.GetAllowedNamespacedResources(),
[]v1.GroupKind{{Group: "apps", Kind: "DaemonSet"}}, opts.GetDeniedNamespacedResources(),
[]v1.GroupKind{{Group: "apiextensions.k8s.io", Kind: "CustomResourceDefinition"}}, opts.GetAllowedClusterResources(),
[]v1.GroupKind{{Group: "rbac.authorization.k8s.io", Kind: "ClusterRole"}}, opts.GetDeniedClusterResources(),
)
[]v1.GroupKind{{Kind: "ConfigMap"}}, opts.GetAllowedNamespacedResources())
assert.ElementsMatch(t,
[]v1.GroupKind{{Group: "apps", Kind: "DaemonSet"}}, opts.GetDeniedNamespacedResources())
assert.ElementsMatch(t,
[]v1.GroupKind{{Group: "apiextensions.k8s.io", Kind: "CustomResourceDefinition"}}, opts.GetAllowedClusterResources())
assert.ElementsMatch(t,
[]v1.GroupKind{{Group: "rbac.authorization.k8s.io", Kind: "ClusterRole"}}, opts.GetDeniedClusterResources())
}
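
The split above is needed because assert.ElementsMatch compares exactly two lists; in the old single call the extra slices were silently consumed as message-and-args rather than asserted. A minimal sketch of the corrected pattern, with hypothetical expected slices:

// one assertion per expected/actual pair
assert.ElementsMatch(t, expectedAllowed, opts.GetAllowedNamespacedResources())
assert.ElementsMatch(t, expectedDenied, opts.GetDeniedNamespacedResources())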
func TestProjectOpts_GetDestinationServiceAccounts(t *testing.T) {

View File

@@ -45,7 +45,7 @@ func AddRepoFlags(command *cobra.Command, opts *RepoOptions) {
command.Flags().StringVar(&opts.GithubAppPrivateKeyPath, "github-app-private-key-path", "", "private key of the GitHub Application")
command.Flags().StringVar(&opts.GitHubAppEnterpriseBaseURL, "github-app-enterprise-base-url", "", "base url to use when using GitHub Enterprise (e.g. https://ghe.example.com/api/v3)")
command.Flags().StringVar(&opts.Proxy, "proxy", "", "use proxy to access repository")
command.Flags().StringVar(&opts.Proxy, "no-proxy", "", "don't access these targets via proxy")
command.Flags().StringVar(&opts.NoProxy, "no-proxy", "", "don't access these targets via proxy")
command.Flags().StringVar(&opts.GCPServiceAccountKeyPath, "gcp-service-account-key-path", "", "service account key for the Google Cloud Platform")
command.Flags().BoolVar(&opts.ForceHttpBasicAuth, "force-http-basic-auth", false, "whether to force use of basic auth when connecting repository via HTTP")
}

View File

@@ -1,4 +1,4 @@
// Code generated by mockery v2.43.2. DO NOT EDIT.
// Code generated by mockery v2.53.4. DO NOT EDIT.
package mocks
@@ -14,7 +14,7 @@ type Clientset struct {
mock.Mock
}
// NewCommitServerClient provides a mock function with given fields:
// NewCommitServerClient provides a mock function with no fields
func (_m *Clientset) NewCommitServerClient() (io.Closer, apiclient.CommitServiceClient, error) {
ret := _m.Called()

View File

@@ -1,4 +1,4 @@
// Code generated by mockery v2.43.2. DO NOT EDIT.
// Code generated by mockery v2.53.4. DO NOT EDIT.
package mocks

View File

@@ -1,4 +1,4 @@
// Code generated by mockery v2.43.2. DO NOT EDIT.
// Code generated by mockery v2.53.4. DO NOT EDIT.
package mocks

View File

@@ -26,6 +26,8 @@ const (
const (
// DefaultRepoServerAddr is the gRPC address of the Argo CD repo server
DefaultRepoServerAddr = "argocd-repo-server:8081"
// DefaultCommitServerAddr is the gRPC address of the Argo CD commit server
DefaultCommitServerAddr = "argocd-commit-server:8086"
// DefaultDexServerAddr is the HTTP address of the Dex OIDC server, which we run a reverse proxy against
DefaultDexServerAddr = "argocd-dex-server:5556"
// DefaultRedisAddr is the default redis address
@@ -179,6 +181,8 @@ const (
LabelValueSecretTypeRepository = "repository"
// LabelValueSecretTypeRepoCreds indicates a secret type of repository credentials
LabelValueSecretTypeRepoCreds = "repo-creds"
// LabelValueSecretTypeRepositoryWrite indicates a secret type of repository credentials for writing
LabelValueSecretTypeRepositoryWrite = "repository-write"
// LabelValueSecretTypeSCMCreds indicates a secret type of SCM credentials
LabelValueSecretTypeSCMCreds = "scm-creds"

View File

@@ -92,7 +92,7 @@ func TestSetOptionalRedisPasswordFromKubeConfig(t *testing.T) {
t.Parallel()
var (
ctx = context.TODO()
kubeClient = kubefake.NewSimpleClientset()
kubeClient = kubefake.NewClientset()
redisOptions = &redis.Options{}
)
if tc.secret != nil {

View File

@@ -41,9 +41,12 @@ import (
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/util/workqueue"
"k8s.io/utils/ptr"
commitclient "github.com/argoproj/argo-cd/v2/commitserver/apiclient"
"github.com/argoproj/argo-cd/v2/common"
statecache "github.com/argoproj/argo-cd/v2/controller/cache"
"github.com/argoproj/argo-cd/v2/controller/hydrator"
"github.com/argoproj/argo-cd/v2/controller/metrics"
"github.com/argoproj/argo-cd/v2/controller/sharding"
"github.com/argoproj/argo-cd/v2/pkg/apis/application"
@@ -121,6 +124,8 @@ type ApplicationController struct {
appComparisonTypeRefreshQueue workqueue.TypedRateLimitingInterface[string]
appOperationQueue workqueue.TypedRateLimitingInterface[string]
projectRefreshQueue workqueue.TypedRateLimitingInterface[string]
appHydrateQueue workqueue.TypedRateLimitingInterface[string]
hydrationQueue workqueue.TypedRateLimitingInterface[hydrator.HydrationQueueKey]
appInformer cache.SharedIndexInformer
appLister applisters.ApplicationLister
projInformer cache.SharedIndexInformer
@@ -131,6 +136,7 @@ type ApplicationController struct {
statusRefreshJitter time.Duration
selfHealTimeout time.Duration
selfHealBackOff *wait.Backoff
selfHealBackoffCooldown time.Duration
syncTimeout time.Duration
db db.ArgoDB
settingsMgr *settings_util.SettingsManager
@@ -146,6 +152,8 @@ type ApplicationController struct {
// dynamicClusterDistributionEnabled if disabled deploymentInformer is never initialized
dynamicClusterDistributionEnabled bool
deploymentInformer informerv1.DeploymentInformer
hydrator *hydrator.Hydrator
}
// NewApplicationController creates new instance of ApplicationController.
@@ -155,6 +163,7 @@ func NewApplicationController(
kubeClientset kubernetes.Interface,
applicationClientset appclientset.Interface,
repoClientset apiclient.Clientset,
commitClientset commitclient.Clientset,
argoCache *appstatecache.Cache,
kubectl kube.Kubectl,
appResyncPeriod time.Duration,
@@ -162,6 +171,7 @@ func NewApplicationController(
appResyncJitter time.Duration,
selfHealTimeout time.Duration,
selfHealBackoff *wait.Backoff,
selfHealBackoffCooldown time.Duration,
syncTimeout time.Duration,
repoErrorGracePeriod time.Duration,
metricsPort int,
@@ -177,6 +187,7 @@ func NewApplicationController(
dynamicClusterDistributionEnabled bool,
ignoreNormalizerOpts normalizers.IgnoreNormalizerOpts,
enableK8sEvent []string,
hydratorEnabled bool,
) (*ApplicationController, error) {
log.Infof("appResyncPeriod=%v, appHardResyncPeriod=%v, appResyncJitter=%v", appResyncPeriod, appHardResyncPeriod, appResyncJitter)
db := db.NewDB(namespace, settingsMgr, kubeClientset)
@@ -190,10 +201,12 @@ func NewApplicationController(
kubeClientset: kubeClientset,
kubectl: kubectl,
applicationClientset: applicationClientset,
appRefreshQueue: workqueue.NewTypedRateLimitingQueueWithConfig(ratelimiter.NewCustomAppControllerRateLimiter(rateLimiterConfig), workqueue.TypedRateLimitingQueueConfig[string]{Name: "app_reconciliation_queue"}),
appOperationQueue: workqueue.NewTypedRateLimitingQueueWithConfig(ratelimiter.NewCustomAppControllerRateLimiter(rateLimiterConfig), workqueue.TypedRateLimitingQueueConfig[string]{Name: "app_operation_processing_queue"}),
projectRefreshQueue: workqueue.NewTypedRateLimitingQueueWithConfig(ratelimiter.NewCustomAppControllerRateLimiter(rateLimiterConfig), workqueue.TypedRateLimitingQueueConfig[string]{Name: "project_reconciliation_queue"}),
appComparisonTypeRefreshQueue: workqueue.NewTypedRateLimitingQueue(ratelimiter.NewCustomAppControllerRateLimiter(rateLimiterConfig)),
appRefreshQueue: workqueue.NewTypedRateLimitingQueueWithConfig(ratelimiter.NewCustomAppControllerRateLimiter[string](rateLimiterConfig), workqueue.TypedRateLimitingQueueConfig[string]{Name: "app_reconciliation_queue"}),
appOperationQueue: workqueue.NewTypedRateLimitingQueueWithConfig(ratelimiter.NewCustomAppControllerRateLimiter[string](rateLimiterConfig), workqueue.TypedRateLimitingQueueConfig[string]{Name: "app_operation_processing_queue"}),
projectRefreshQueue: workqueue.NewTypedRateLimitingQueueWithConfig(ratelimiter.NewCustomAppControllerRateLimiter[string](rateLimiterConfig), workqueue.TypedRateLimitingQueueConfig[string]{Name: "project_reconciliation_queue"}),
appComparisonTypeRefreshQueue: workqueue.NewTypedRateLimitingQueue(ratelimiter.NewCustomAppControllerRateLimiter[string](rateLimiterConfig)),
appHydrateQueue: workqueue.NewTypedRateLimitingQueueWithConfig(ratelimiter.NewCustomAppControllerRateLimiter[string](rateLimiterConfig), workqueue.TypedRateLimitingQueueConfig[string]{Name: "app_hydration_queue"}),
hydrationQueue: workqueue.NewTypedRateLimitingQueueWithConfig(ratelimiter.NewCustomAppControllerRateLimiter[hydrator.HydrationQueueKey](rateLimiterConfig), workqueue.TypedRateLimitingQueueConfig[hydrator.HydrationQueueKey]{Name: "manifest_hydration_queue"}),
db: db,
statusRefreshTimeout: appResyncPeriod,
statusHardRefreshTimeout: appHardResyncPeriod,
@@ -204,6 +217,7 @@ func NewApplicationController(
settingsMgr: settingsMgr,
selfHealTimeout: selfHealTimeout,
selfHealBackOff: selfHealBackoff,
selfHealBackoffCooldown: selfHealBackoffCooldown,
syncTimeout: syncTimeout,
clusterSharding: clusterSharding,
projByNameCache: sync.Map{},
@@ -211,6 +225,9 @@ func NewApplicationController(
dynamicClusterDistributionEnabled: dynamicClusterDistributionEnabled,
ignoreNormalizerOpts: ignoreNormalizerOpts,
}
if hydratorEnabled {
ctrl.hydrator = hydrator.NewHydrator(&ctrl, appResyncPeriod, commitClientset)
}
if kubectlParallelismLimit > 0 {
ctrl.kubectlSemaphore = semaphore.NewWeighted(kubectlParallelismLimit)
}
@@ -296,7 +313,7 @@ func NewApplicationController(
}
}
stateCache := statecache.NewLiveStateCache(db, appInformer, ctrl.settingsMgr, kubectl, ctrl.metricsServer, ctrl.handleObjectUpdated, clusterSharding, argo.NewResourceTracking())
appStateManager := NewAppStateManager(db, applicationClientset, repoClientset, namespace, kubectl, ctrl.settingsMgr, stateCache, projInformer, ctrl.metricsServer, argoCache, ctrl.statusRefreshTimeout, argo.NewResourceTracking(), persistResourceHealth, repoErrorGracePeriod, serverSideDiff, ignoreNormalizerOpts)
appStateManager := NewAppStateManager(db, applicationClientset, repoClientset, namespace, kubectl, ctrl.onKubectlRun, ctrl.settingsMgr, stateCache, projInformer, ctrl.metricsServer, argoCache, ctrl.statusRefreshTimeout, argo.NewResourceTracking(), persistResourceHealth, repoErrorGracePeriod, serverSideDiff, ignoreNormalizerOpts)
ctrl.appInformer = appInformer
ctrl.appLister = appLister
ctrl.projInformer = projInformer
@@ -845,6 +862,8 @@ func (ctrl *ApplicationController) Run(ctx context.Context, statusProcessors int
defer ctrl.appComparisonTypeRefreshQueue.ShutDown()
defer ctrl.appOperationQueue.ShutDown()
defer ctrl.projectRefreshQueue.ShutDown()
defer ctrl.appHydrateQueue.ShutDown()
defer ctrl.hydrationQueue.ShutDown()
ctrl.metricsServer.RegisterClustersInfoSource(ctx, ctrl.stateCache)
ctrl.RegisterClusterSecretUpdater(ctx)
@@ -903,6 +922,19 @@ func (ctrl *ApplicationController) Run(ctx context.Context, statusProcessors int
for ctrl.processProjectQueueItem() {
}
}, time.Second, ctx.Done())
if ctrl.hydrator != nil {
go wait.Until(func() {
for ctrl.processAppHydrateQueueItem() {
}
}, time.Second, ctx.Done())
go wait.Until(func() {
for ctrl.processHydrationQueueItem() {
}
}, time.Second, ctx.Done())
}
<-ctx.Done()
}
@@ -912,7 +944,7 @@ func (ctrl *ApplicationController) requestAppRefresh(appName string, compareWith
key := ctrl.toAppKey(appName)
if compareWith != nil && after != nil {
ctrl.appComparisonTypeRefreshQueue.AddAfter(fmt.Sprintf("%s/%d", key, compareWith), *after)
ctrl.appComparisonTypeRefreshQueue.AddAfter(fmt.Sprintf("%s/%d", key, *compareWith), *after)
} else {
if compareWith != nil {
ctrl.refreshRequestedAppsMutex.Lock()
@@ -1444,7 +1476,7 @@ func (ctrl *ApplicationController) processRequestedAppOperation(app *appv1.Appli
} else {
state.Phase = synccommon.OperationRunning
state.RetryCount++
state.Message = fmt.Sprintf("%s. Retrying attempt #%d at %s.", state.Message, state.RetryCount, retryAt.Format(time.Kitchen))
state.Message = fmt.Sprintf("%s due to application controller sync timeout. Retrying attempt #%d at %s.", state.Message, state.RetryCount, retryAt.Format(time.Kitchen))
}
} else if state.RetryCount > 0 {
state.Message = fmt.Sprintf("%s (retried %d times).", state.Message, state.RetryCount)
@@ -1646,11 +1678,9 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
project, hasErrors := ctrl.refreshAppConditions(app)
ts.AddCheckpoint("refresh_app_conditions_ms")
now := metav1.Now()
if hasErrors {
app.Status.Sync.Status = appv1.SyncStatusCodeUnknown
app.Status.Health.Status = health.HealthStatusUnknown
app.Status.Health.LastTransitionTime = &now
patchMs = ctrl.persistAppStatus(origApp, &app.Status)
if err := ctrl.cache.SetAppResourcesTree(app.InstanceName(ctrl.namespace), &appv1.ApplicationTree{}); err != nil {
@@ -1741,6 +1771,7 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
ts.AddCheckpoint("auto_sync_ms")
if app.Status.ReconciledAt == nil || comparisonLevel >= CompareWithLatest {
now := metav1.Now()
app.Status.ReconciledAt = &now
}
app.Status.Sync = *compareResult.syncStatus
@@ -1774,6 +1805,68 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
return
}
func (ctrl *ApplicationController) processAppHydrateQueueItem() (processNext bool) {
appKey, shutdown := ctrl.appHydrateQueue.Get()
if shutdown {
processNext = false
return
}
processNext = true
defer func() {
if r := recover(); r != nil {
log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
}
ctrl.appHydrateQueue.Done(appKey)
}()
obj, exists, err := ctrl.appInformer.GetIndexer().GetByKey(appKey)
if err != nil {
log.Errorf("Failed to get application '%s' from informer index: %+v", appKey, err)
return
}
if !exists {
// This happens after app was deleted, but the work queue still had an entry for it.
return
}
origApp, ok := obj.(*appv1.Application)
if !ok {
log.Warnf("Key '%s' in index is not an application", appKey)
return
}
ctrl.hydrator.ProcessAppHydrateQueueItem(origApp)
getAppLog(origApp).Debug("Successfully processed app hydrate queue item")
return
}
func (ctrl *ApplicationController) processHydrationQueueItem() (processNext bool) {
hydrationKey, shutdown := ctrl.hydrationQueue.Get()
if shutdown {
processNext = false
return
}
processNext = true
defer func() {
if r := recover(); r != nil {
log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
}
ctrl.hydrationQueue.Done(hydrationKey)
}()
logCtx := log.WithFields(log.Fields{
"sourceRepoURL": hydrationKey.SourceRepoURL,
"sourceTargetRevision": hydrationKey.SourceTargetRevision,
"destinationBranch": hydrationKey.DestinationBranch,
})
logCtx.Debug("Processing hydration queue item")
ctrl.hydrator.ProcessHydrationQueueItem(hydrationKey)
logCtx.Debug("Successfully processed hydration queue item")
return
}
func resourceStatusKey(res appv1.ResourceStatus) string {
return strings.Join([]string{res.Group, res.Kind, res.Namespace, res.Name}, "/")
}
@@ -1782,7 +1875,8 @@ func currentSourceEqualsSyncedSource(app *appv1.Application) bool {
if app.Spec.HasMultipleSources() {
return app.Spec.Sources.Equals(app.Status.Sync.ComparedTo.Sources)
}
return app.Spec.Source.Equals(&app.Status.Sync.ComparedTo.Source)
source := app.Spec.GetSource()
return source.Equals(&app.Status.Sync.ComparedTo.Source)
}
// needRefreshAppStatus answers if application status needs to be refreshed.
@@ -1908,9 +2002,15 @@ func (ctrl *ApplicationController) persistAppStatus(orig *appv1.Application, new
ctrl.logAppEvent(orig, argo.EventInfo{Reason: argo.EventReasonResourceUpdated, Type: v1.EventTypeNormal}, message, context.TODO())
}
if orig.Status.Health.Status != newStatus.Health.Status {
now := metav1.Now()
newStatus.Health.LastTransitionTime = &now
message := fmt.Sprintf("Updated health status: %s -> %s", orig.Status.Health.Status, newStatus.Health.Status)
ctrl.logAppEvent(orig, argo.EventInfo{Reason: argo.EventReasonResourceUpdated, Type: v1.EventTypeNormal}, message, context.TODO())
} else {
// if the health status is unchanged, carry the existing last transition time forward
newStatus.Health.LastTransitionTime = orig.Status.Health.LastTransitionTime
}
var newAnnotations map[string]string
if orig.GetAnnotations() != nil {
newAnnotations = make(map[string]string)
@@ -1918,6 +2018,7 @@ func (ctrl *ApplicationController) persistAppStatus(orig *appv1.Application, new
newAnnotations[k] = v
}
delete(newAnnotations, appv1.AnnotationKeyRefresh)
delete(newAnnotations, appv1.AnnotationKeyHydrate)
}
patch, modified, err := createMergePatch(
&appv1.Application{ObjectMeta: metav1.ObjectMeta{Annotations: orig.GetAnnotations()}, Status: orig.Status},
@@ -2011,9 +2112,7 @@ func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *
InitiatedBy: appv1.OperationInitiator{Automated: true},
Retry: appv1.RetryStrategy{Limit: 5},
}
if app.Status.OperationState != nil && app.Status.OperationState.Operation.Sync != nil {
op.Sync.SelfHealAttemptsCount = app.Status.OperationState.Operation.Sync.SelfHealAttemptsCount
}
if app.Spec.SyncPolicy.Retry != nil {
op.Retry = *app.Spec.SyncPolicy.Retry
}
@@ -2029,8 +2128,18 @@ func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *
}
logCtx.Infof("Skipping auto-sync: most recent sync already to %s", desiredCommitSHA)
return nil, 0
} else if alreadyAttempted && selfHeal {
if shouldSelfHeal, retryAfter := ctrl.shouldSelfHeal(app); shouldSelfHeal {
} else if selfHeal {
shouldSelfHeal, retryAfter := ctrl.shouldSelfHeal(app, alreadyAttempted)
if app.Status.OperationState != nil && app.Status.OperationState.Operation.Sync != nil {
op.Sync.SelfHealAttemptsCount = app.Status.OperationState.Operation.Sync.SelfHealAttemptsCount
}
if alreadyAttempted {
if !shouldSelfHeal {
logCtx.Infof("Skipping auto-sync: already attempted sync to %s with timeout %v (retrying in %v)", desiredCommitSHA, ctrl.selfHealTimeout, retryAfter)
ctrl.requestAppRefresh(app.QualifiedName(), CompareWithLatest.Pointer(), &retryAfter)
return nil, 0
}
op.Sync.SelfHealAttemptsCount++
for _, resource := range resources {
if resource.Status != appv1.SyncStatusCodeSynced {
@@ -2041,10 +2150,6 @@ func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *
})
}
}
} else {
logCtx.Infof("Skipping auto-sync: already attempted sync to %s with timeout %v (retrying in %v)", desiredCommitSHA, ctrl.selfHealTimeout, retryAfter)
ctrl.requestAppRefresh(app.QualifiedName(), CompareWithLatest.Pointer(), &retryAfter)
return nil, 0
}
}
ts.AddCheckpoint("already_attempted_check_ms")
@@ -2128,29 +2233,41 @@ func alreadyAttemptedSync(app *appv1.Application, commitSHA string, commitSHAsMS
}
}
func (ctrl *ApplicationController) shouldSelfHeal(app *appv1.Application) (bool, time.Duration) {
func (ctrl *ApplicationController) shouldSelfHeal(app *appv1.Application, alreadyAttempted bool) (bool, time.Duration) {
if app.Status.OperationState == nil {
return true, time.Duration(0)
}
var timeSinceOperation *time.Duration
if app.Status.OperationState.FinishedAt != nil {
timeSinceOperation = ptr.To(time.Since(app.Status.OperationState.FinishedAt.Time))
}
// Reset counter if the prior sync was successful and the cooldown period is over OR if the revision has changed
if !alreadyAttempted || (timeSinceOperation != nil && *timeSinceOperation >= ctrl.selfHealBackoffCooldown && app.Status.Sync.Status == appv1.SyncStatusCodeSynced) {
app.Status.OperationState.Operation.Sync.SelfHealAttemptsCount = 0
}
var retryAfter time.Duration
if ctrl.selfHealBackOff == nil {
if app.Status.OperationState.FinishedAt == nil {
if timeSinceOperation == nil {
retryAfter = ctrl.selfHealTimeout
} else {
retryAfter = ctrl.selfHealTimeout - time.Since(app.Status.OperationState.FinishedAt.Time)
retryAfter = ctrl.selfHealTimeout - *timeSinceOperation
}
} else {
backOff := *ctrl.selfHealBackOff
backOff.Steps = int(app.Status.OperationState.Operation.Sync.SelfHealAttemptsCount)
var delay time.Duration
for backOff.Steps > 0 {
steps := backOff.Steps
for i := 0; i < steps; i++ {
delay = backOff.Step()
}
if app.Status.OperationState.FinishedAt == nil {
if timeSinceOperation == nil {
retryAfter = delay
} else {
retryAfter = delay - time.Since(app.Status.OperationState.FinishedAt.Time)
retryAfter = delay - *timeSinceOperation
}
}
return retryAfter <= 0, retryAfter
@@ -2325,6 +2442,9 @@ func (ctrl *ApplicationController) newApplicationInformerAndLister() (cache.Shar
if !newOK || (delay != nil && *delay != time.Duration(0)) {
ctrl.appOperationQueue.AddRateLimited(key)
}
if ctrl.hydrator != nil {
ctrl.appHydrateQueue.AddRateLimited(newApp.QualifiedName())
}
ctrl.clusterSharding.UpdateApp(newApp)
},
DeleteFunc: func(obj interface{}) {

View File

@@ -42,6 +42,7 @@ import (
dbmocks "github.com/argoproj/argo-cd/v2/util/db/mocks"
mockcommitclient "github.com/argoproj/argo-cd/v2/commitserver/apiclient/mocks"
mockstatecache "github.com/argoproj/argo-cd/v2/controller/cache/mocks"
"github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
appclientset "github.com/argoproj/argo-cd/v2/pkg/client/clientset/versioned/fake"
@@ -126,6 +127,8 @@ func newFakeControllerWithResync(data *fakeData, appResyncPeriod time.Duration,
mockRepoClientset := mockrepoclient.Clientset{RepoServerServiceClient: &mockRepoClient}
mockCommitClientset := mockcommitclient.Clientset{}
secret := corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: "argocd-secret",
@@ -148,7 +151,7 @@ func newFakeControllerWithResync(data *fakeData, appResyncPeriod time.Duration,
}
runtimeObjs := []runtime.Object{&clust, &secret, &cm}
runtimeObjs = append(runtimeObjs, data.additionalObjs...)
kubeClient := fake.NewSimpleClientset(runtimeObjs...)
kubeClient := fake.NewClientset(runtimeObjs...)
settingsMgr := settings.NewSettingsManager(context.Background(), kubeClient, test.FakeArgoCDNamespace)
kubectl := &MockKubectl{Kubectl: &kubetest.MockKubectlCmd{}}
ctrl, err := NewApplicationController(
@@ -157,6 +160,7 @@ func newFakeControllerWithResync(data *fakeData, appResyncPeriod time.Duration,
kubeClient,
appclientset.NewSimpleClientset(data.apps...),
&mockRepoClientset,
&mockCommitClientset,
appstatecache.NewCache(
cacheutil.NewCache(cacheutil.NewInMemoryCache(1*time.Minute)),
1*time.Minute,
@@ -167,6 +171,7 @@ func newFakeControllerWithResync(data *fakeData, appResyncPeriod time.Duration,
time.Second,
time.Minute,
nil,
time.Minute,
0,
time.Second*10,
common.DefaultPortArgoCDMetrics,
@@ -182,6 +187,7 @@ func newFakeControllerWithResync(data *fakeData, appResyncPeriod time.Duration,
false,
normalizers.IgnoreNormalizerOpts{},
testEnableEventList,
false,
)
db := &dbmocks.ArgoDB{}
db.On("GetApplicationControllerReplicas").Return(1)
@@ -1404,6 +1410,25 @@ func TestNeedRefreshAppStatus(t *testing.T) {
assert.Equal(t, CompareWithRecent, compareWith)
})
t.Run("requesting refresh with delay gives correct compression level", func(t *testing.T) {
needRefresh, _, _ := ctrl.needRefreshAppStatus(app, 1*time.Hour, 2*time.Hour)
assert.False(t, needRefresh)
// use a one-off controller so other tests don't have a manual refresh request
ctrl := newFakeController(&fakeData{apps: []runtime.Object{}}, nil)
// refresh app with a non-nil delay
// use zero-second delay to test the add later logic without waiting in the test
delay := time.Duration(0)
ctrl.requestAppRefresh(app.Name, CompareWithRecent.Pointer(), &delay)
ctrl.processAppComparisonTypeQueueItem()
needRefresh, refreshType, compareWith := ctrl.needRefreshAppStatus(app, 1*time.Hour, 2*time.Hour)
assert.True(t, needRefresh)
assert.Equal(t, v1alpha1.RefreshTypeNormal, refreshType)
assert.Equal(t, CompareWithRecent, compareWith)
})
t.Run("refresh application which status is not reconciled using latest commit", func(t *testing.T) {
app := app.DeepCopy()
needRefresh, _, _ := ctrl.needRefreshAppStatus(app, 1*time.Hour, 2*time.Hour)
@@ -1813,7 +1838,7 @@ apps/Deployment:
hs = {}
hs.status = ""
hs.message = ""
if obj.metadata ~= nil then
if obj.metadata.labels ~= nil then
current_status = obj.metadata.labels["status"]
@@ -2037,7 +2062,7 @@ func TestProcessRequestedAppOperation_FailedHasRetries(t *testing.T) {
phase, _, _ := unstructured.NestedString(receivedPatch, "status", "operationState", "phase")
assert.Equal(t, string(synccommon.OperationRunning), phase)
message, _, _ := unstructured.NestedString(receivedPatch, "status", "operationState", "message")
assert.Contains(t, message, "Retrying attempt #1")
assert.Contains(t, message, "due to application controller sync timeout. Retrying attempt #1")
retryCount, _, _ := unstructured.NestedFloat64(receivedPatch, "status", "operationState", "retryCount")
assert.InEpsilon(t, float64(1), retryCount, 0.0001)
}
@@ -2509,7 +2534,7 @@ func TestSelfHealExponentialBackoff(t *testing.T) {
ctrl.selfHealBackOff = &wait.Backoff{
Factor: 3,
Duration: 2 * time.Second,
Cap: 5 * time.Minute,
Cap: 2 * time.Minute,
}
app := &v1alpha1.Application{
@@ -2524,29 +2549,92 @@ func TestSelfHealExponentialBackoff(t *testing.T) {
testCases := []struct {
attempts int64
expectedAttempts int64
finishedAt *metav1.Time
expectedDuration time.Duration
shouldSelfHeal bool
alreadyAttempted bool
syncStatus v1alpha1.SyncStatusCode
}{{
attempts: 0,
finishedAt: ptr.To(metav1.Now()),
expectedDuration: 0,
shouldSelfHeal: true,
alreadyAttempted: true,
expectedAttempts: 0,
syncStatus: v1alpha1.SyncStatusCodeOutOfSync,
}, {
attempts: 1,
finishedAt: ptr.To(metav1.Now()),
expectedDuration: 2 * time.Second,
shouldSelfHeal: false,
alreadyAttempted: true,
expectedAttempts: 1,
syncStatus: v1alpha1.SyncStatusCodeOutOfSync,
}, {
attempts: 2,
finishedAt: ptr.To(metav1.Now()),
expectedDuration: 6 * time.Second,
shouldSelfHeal: false,
alreadyAttempted: true,
expectedAttempts: 2,
syncStatus: v1alpha1.SyncStatusCodeOutOfSync,
}, {
attempts: 3,
finishedAt: nil,
expectedDuration: 18 * time.Second,
shouldSelfHeal: false,
alreadyAttempted: true,
expectedAttempts: 3,
syncStatus: v1alpha1.SyncStatusCodeOutOfSync,
}, {
attempts: 4,
finishedAt: nil,
expectedDuration: 54 * time.Second,
shouldSelfHeal: false,
alreadyAttempted: true,
expectedAttempts: 4,
syncStatus: v1alpha1.SyncStatusCodeOutOfSync,
}, {
attempts: 5,
finishedAt: nil,
expectedDuration: 120 * time.Second,
shouldSelfHeal: false,
alreadyAttempted: true,
expectedAttempts: 5,
syncStatus: v1alpha1.SyncStatusCodeOutOfSync,
}, {
attempts: 6,
finishedAt: nil,
expectedDuration: 120 * time.Second,
shouldSelfHeal: false,
alreadyAttempted: true,
expectedAttempts: 6,
syncStatus: v1alpha1.SyncStatusCodeOutOfSync,
}, {
attempts: 6,
finishedAt: nil,
expectedDuration: 0,
shouldSelfHeal: true,
alreadyAttempted: false,
expectedAttempts: 0,
syncStatus: v1alpha1.SyncStatusCodeOutOfSync,
}, { // backoff will not reset as the finished time isn't >= cooldown
attempts: 6,
finishedAt: ptr.To(metav1.Now()),
expectedDuration: 120 * time.Second,
shouldSelfHeal: false,
alreadyAttempted: true,
expectedAttempts: 6,
syncStatus: v1alpha1.SyncStatusCodeSynced,
}, { // backoff will reset as finished time is >= cooldown
attempts: 40,
finishedAt: &metav1.Time{Time: time.Now().Add(-(1 * time.Minute))},
expectedDuration: -60 * time.Second,
shouldSelfHeal: true,
alreadyAttempted: true,
expectedAttempts: 0,
syncStatus: v1alpha1.SyncStatusCodeSynced,
}}
for i := range testCases {
@@ -2554,8 +2642,10 @@ func TestSelfHealExponentialBackoff(t *testing.T) {
t.Run(fmt.Sprintf("test case %d", i), func(t *testing.T) {
app.Status.OperationState.Operation.Sync.SelfHealAttemptsCount = tc.attempts
app.Status.OperationState.FinishedAt = tc.finishedAt
ok, duration := ctrl.shouldSelfHeal(app)
app.Status.Sync.Status = tc.syncStatus
ok, duration := ctrl.shouldSelfHeal(app, tc.alreadyAttempted)
require.Equal(t, ok, tc.shouldSelfHeal)
require.Equal(t, tc.expectedAttempts, app.Status.OperationState.Operation.Sync.SelfHealAttemptsCount)
assertDurationAround(t, tc.expectedDuration, duration)
})
}
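
The expected durations in the table above follow directly from wait.Backoff stepping; a standalone sketch of that progression, assuming the same Duration 2s, Factor 3, Cap 2m configuration from k8s.io/apimachinery/pkg/util/wait:

backOff := wait.Backoff{Duration: 2 * time.Second, Factor: 3, Cap: 2 * time.Minute}
backOff.Steps = 6
var delay time.Duration
for i := 0; i < 6; i++ {
	// successive values: 2s, 6s, 18s, 54s, 2m, 2m (cap reached)
	delay = backOff.Step()
}
_ = delay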

View File

@@ -69,6 +69,12 @@ const (
// EnvClusterCacheRetryUseBackoff is the env variable to control whether to use a backoff strategy with the retry during cluster cache sync
EnvClusterCacheRetryUseBackoff = "ARGOCD_CLUSTER_CACHE_RETRY_USE_BACKOFF"
// EnvClusterCacheBatchEventsProcessing is the env variable to control whether to enable batch events processing
EnvClusterCacheBatchEventsProcessing = "ARGOCD_CLUSTER_CACHE_BATCH_EVENTS_PROCESSING"
// EnvClusterCacheEventsProcessingInterval is the env variable to control the interval between processing events when BatchEventsProcessing is enabled
EnvClusterCacheEventsProcessingInterval = "ARGOCD_CLUSTER_CACHE_EVENTS_PROCESSING_INTERVAL"
// AnnotationIgnoreResourceUpdates when set to true on an untracked resource,
// argo will apply `ignoreResourceUpdates` configuration on it.
AnnotationIgnoreResourceUpdates = "argocd.argoproj.io/ignore-resource-updates"
@@ -103,6 +109,12 @@ var (
// clusterCacheRetryUseBackoff specifies whether to use a backoff strategy on cluster cache sync, if retry is enabled
clusterCacheRetryUseBackoff bool = false
// clusterCacheBatchEventsProcessing specifies whether to enable batch events processing
clusterCacheBatchEventsProcessing bool = false
// clusterCacheEventsProcessingInterval specifies the interval between processing events when BatchEventsProcessing is enabled
clusterCacheEventsProcessingInterval = 100 * time.Millisecond
)
func init() {
@@ -114,6 +126,8 @@ func init() {
clusterCacheListSemaphoreSize = env.ParseInt64FromEnv(EnvClusterCacheListSemaphore, clusterCacheListSemaphoreSize, 0, math.MaxInt64)
clusterCacheAttemptLimit = int32(env.ParseNumFromEnv(EnvClusterCacheAttemptLimit, int(clusterCacheAttemptLimit), 1, math.MaxInt32))
clusterCacheRetryUseBackoff = env.ParseBoolFromEnv(EnvClusterCacheRetryUseBackoff, false)
clusterCacheBatchEventsProcessing = env.ParseBoolFromEnv(EnvClusterCacheBatchEventsProcessing, false)
clusterCacheEventsProcessingInterval = env.ParseDurationFromEnv(EnvClusterCacheEventsProcessingInterval, clusterCacheEventsProcessingInterval, 0, math.MaxInt64)
}
type LiveStateCache interface {
@@ -554,6 +568,8 @@ func (c *liveStateCache) getCluster(server string) (clustercache.ClusterCache, e
clustercache.SetLogr(logutils.NewLogrusLogger(log.WithField("server", cluster.Server))),
clustercache.SetRetryOptions(clusterCacheAttemptLimit, clusterCacheRetryUseBackoff, isRetryableError),
clustercache.SetRespectRBAC(respectRBAC),
clustercache.SetBatchEventsProcessing(clusterCacheBatchEventsProcessing),
clustercache.SetEventProcessingInterval(clusterCacheEventsProcessingInterval),
}
clusterCache = clustercache.NewClusterCache(clusterCacheConfig, clusterCacheOpts...)
@@ -608,6 +624,10 @@ func (c *liveStateCache) getCluster(server string) (clustercache.ClusterCache, e
c.metricsServer.IncClusterEventsCount(cluster.Server, gvk.Group, gvk.Kind)
})
_ = clusterCache.OnProcessEventsHandler(func(duration time.Duration, processedEventsNumber int) {
c.metricsServer.ObserveResourceEventsProcessingDuration(cluster.Server, duration, processedEventsNumber)
})
c.clusters[server] = clusterCache
return clusterCache, nil
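
A minimal sketch of wiring the new knobs when building a cluster cache directly, assuming a *rest.Config named cfg; both options come from gitops-engine's clustercache package as used above:

clusterCache := clustercache.NewClusterCache(cfg,
	clustercache.SetBatchEventsProcessing(true),                   // ARGOCD_CLUSTER_CACHE_BATCH_EVENTS_PROCESSING
	clustercache.SetEventProcessingInterval(100*time.Millisecond), // ARGOCD_CLUSTER_CACHE_EVENTS_PROCESSING_INTERVAL default
)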

View File

@@ -140,7 +140,7 @@ func TestHandleDeleteEvent_CacheDeadlock(t *testing.T) {
}
db := &dbmocks.ArgoDB{}
db.On("GetApplicationControllerReplicas").Return(1)
fakeClient := fake.NewSimpleClientset()
fakeClient := fake.NewClientset()
settingsMgr := argosettings.NewSettingsManager(context.TODO(), fakeClient, "argocd")
liveStateCacheLock := sync.RWMutex{}
gitopsEngineClusterCache := &mocks.ClusterCache{}

View File

@@ -1,4 +1,4 @@
// Code generated by mockery v2.43.2. DO NOT EDIT.
// Code generated by mockery v2.53.4. DO NOT EDIT.
package mocks
@@ -55,7 +55,7 @@ func (_m *LiveStateCache) GetClusterCache(server string) (cache.ClusterCache, er
return r0, r1
}
// GetClustersInfo provides a mock function with given fields:
// GetClustersInfo provides a mock function with no fields
func (_m *LiveStateCache) GetClustersInfo() []cache.ClusterInfo {
ret := _m.Called()
@@ -172,7 +172,7 @@ func (_m *LiveStateCache) GetVersionsInfo(serverURL string) (string, []kube.APIR
return r0, r1, r2
}
// Init provides a mock function with given fields:
// Init provides a mock function with no fields
func (_m *LiveStateCache) Init() error {
ret := _m.Called()

View File

@@ -67,7 +67,7 @@ func TestClusterSecretUpdater(t *testing.T) {
"server.secretkey": nil,
},
}
kubeclientset := fake.NewSimpleClientset(emptyArgoCDConfigMap, argoCDSecret)
kubeclientset := fake.NewClientset(emptyArgoCDConfigMap, argoCDSecret)
appclientset := appsfake.NewSimpleClientset()
appInformer := appinformers.NewApplicationInformer(appclientset, "", time.Minute, cache.Indexers{})
settingsManager := settings.NewSettingsManager(context.Background(), kubeclientset, fakeNamespace)

View File

@@ -8,7 +8,6 @@ import (
"github.com/argoproj/gitops-engine/pkg/sync/ignore"
kubeutil "github.com/argoproj/gitops-engine/pkg/utils/kube"
log "github.com/sirupsen/logrus"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime/schema"
"github.com/argoproj/argo-cd/v2/common"
@@ -22,7 +21,9 @@ func setApplicationHealth(resources []managedResource, statuses []appv1.Resource
var savedErr error
var errCount uint
appHealth := appv1.HealthStatus{Status: health.HealthStatusHealthy}
appHealth := app.Status.Health.DeepCopy()
appHealth.Status = health.HealthStatusHealthy
for i, res := range resources {
if res.Target != nil && hookutil.Skip(res.Target) {
continue
@@ -82,18 +83,11 @@ func setApplicationHealth(resources []managedResource, statuses []appv1.Resource
}
if persistResourceHealth {
app.Status.ResourceHealthSource = appv1.ResourceHealthLocationInline
// if the status didn't change, don't update the timestamp
if app.Status.Health.Status == appHealth.Status && app.Status.Health.LastTransitionTime != nil {
appHealth.LastTransitionTime = app.Status.Health.LastTransitionTime
} else {
now := metav1.Now()
appHealth.LastTransitionTime = &now
}
} else {
app.Status.ResourceHealthSource = appv1.ResourceHealthLocationAppTree
}
if savedErr != nil && errCount > 1 {
savedErr = fmt.Errorf("see application-controller logs for %d other errors; most recent error was: %w", errCount-1, savedErr)
}
return &appHealth, savedErr
return appHealth, savedErr
}
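
Since setApplicationHealth no longer stamps the timestamp, that responsibility now lives in persistAppStatus (see the controller diff above); condensed, the handoff looks like:

// stamp a new transition time only on a real health change; otherwise
// carry the previous timestamp forward
if orig.Status.Health.Status != newStatus.Health.Status {
	now := metav1.Now()
	newStatus.Health.LastTransitionTime = &now
} else {
	newStatus.Health.LastTransitionTime = orig.Status.Health.LastTransitionTime
}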

View File

@@ -73,7 +73,6 @@ func TestSetApplicationHealth(t *testing.T) {
assert.NotNil(t, healthStatus.LastTransitionTime)
assert.Nil(t, resourceStatuses[0].Health.LastTransitionTime)
assert.Nil(t, resourceStatuses[1].Health.LastTransitionTime)
previousLastTransitionTime := healthStatus.LastTransitionTime
app.Status.Health = *healthStatus
// now mark the job as a hook and retry. it should ignore the hook and consider the app healthy
@@ -81,9 +80,8 @@ func TestSetApplicationHealth(t *testing.T) {
healthStatus, err = setApplicationHealth(resources, resourceStatuses, nil, app, true)
require.NoError(t, err)
assert.Equal(t, health.HealthStatusHealthy, healthStatus.Status)
// change in health, timestamp should change
assert.NotEqual(t, *previousLastTransitionTime, *healthStatus.LastTransitionTime)
previousLastTransitionTime = healthStatus.LastTransitionTime
// timestamp should be the same in case health did not change
assert.Equal(t, app.Status.Health.LastTransitionTime, healthStatus.LastTransitionTime)
app.Status.Health = *healthStatus
// now we set the `argocd.argoproj.io/ignore-healthcheck: "true"` annotation on the job's target.
@@ -94,8 +92,7 @@ func TestSetApplicationHealth(t *testing.T) {
healthStatus, err = setApplicationHealth(resources, resourceStatuses, nil, app, true)
require.NoError(t, err)
assert.Equal(t, health.HealthStatusHealthy, healthStatus.Status)
// no change in health, timestamp shouldn't change
assert.Equal(t, *previousLastTransitionTime, *healthStatus.LastTransitionTime)
assert.Equal(t, app.Status.Health.LastTransitionTime, healthStatus.LastTransitionTime)
}
func TestSetApplicationHealth_ResourceHealthNotPersisted(t *testing.T) {
@@ -124,7 +121,7 @@ func TestSetApplicationHealth_MissingResource(t *testing.T) {
healthStatus, err := setApplicationHealth(resources, resourceStatuses, lua.ResourceHealthOverrides{}, app, true)
require.NoError(t, err)
assert.Equal(t, health.HealthStatusMissing, healthStatus.Status)
assert.False(t, healthStatus.LastTransitionTime.IsZero())
assert.Equal(t, app.Status.Health.LastTransitionTime, healthStatus.LastTransitionTime)
}
func TestSetApplicationHealth_HealthImproves(t *testing.T) {
@@ -156,7 +153,7 @@ func TestSetApplicationHealth_HealthImproves(t *testing.T) {
healthStatus, err := setApplicationHealth(resources, resourceStatuses, overrides, app, true)
require.NoError(t, err)
assert.Equal(t, tc.newStatus, healthStatus.Status)
assert.NotEqual(t, testTimestamp, *healthStatus.LastTransitionTime)
assert.Equal(t, app.Status.Health.LastTransitionTime, healthStatus.LastTransitionTime)
})
}
}
@@ -173,6 +170,7 @@ func TestSetApplicationHealth_MissingResourceNoBuiltHealthCheck(t *testing.T) {
healthStatus, err := setApplicationHealth(resources, resourceStatuses, lua.ResourceHealthOverrides{}, app, true)
require.NoError(t, err)
assert.Equal(t, health.HealthStatusHealthy, healthStatus.Status)
assert.Equal(t, app.Status.Health.LastTransitionTime, healthStatus.LastTransitionTime)
assert.Equal(t, health.HealthStatusMissing, resourceStatuses[0].Health.Status)
})
@@ -184,7 +182,7 @@ func TestSetApplicationHealth_MissingResourceNoBuiltHealthCheck(t *testing.T) {
}, app, true)
require.NoError(t, err)
assert.Equal(t, health.HealthStatusMissing, healthStatus.Status)
assert.False(t, healthStatus.LastTransitionTime.IsZero())
assert.Equal(t, app.Status.Health.LastTransitionTime, healthStatus.LastTransitionTime)
})
}

View File

@@ -51,7 +51,7 @@ func (ctrl *ApplicationController) executePostDeleteHooks(app *v1alpha1.Applicat
revisions = append(revisions, src.TargetRevision)
}
targets, _, _, err := ctrl.appStateManager.GetRepoObjs(app, app.Spec.GetSources(), appLabelKey, revisions, false, false, false, proj, false)
targets, _, _, err := ctrl.appStateManager.GetRepoObjs(app, app.Spec.GetSources(), appLabelKey, revisions, false, false, false, proj, false, true)
if err != nil {
return false, err
}

View File

@@ -0,0 +1,355 @@
package hydrator
import (
"context"
"encoding/json"
"fmt"
"time"
log "github.com/sirupsen/logrus"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
commitclient "github.com/argoproj/argo-cd/v2/commitserver/apiclient"
"github.com/argoproj/argo-cd/v2/controller/utils"
appv1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/v2/reposerver/apiclient"
argoio "github.com/argoproj/argo-cd/v2/util/io"
)
// Dependencies is the interface for the dependencies of the Hydrator. It serves two purposes: 1) it prevents the
// hydrator from having direct access to the app controller, and 2) it allows for easy mocking of dependencies in tests.
// If you add something here, be sure that it is something the app controller needs to provide to the hydrator.
type Dependencies interface {
// TODO: determine if we actually need to get the app, or if all the stuff we need the app for is done already on
// the app controller side.
GetProcessableAppProj(app *appv1.Application) (*appv1.AppProject, error)
GetProcessableApps() (*appv1.ApplicationList, error)
GetRepoObjs(app *appv1.Application, source appv1.ApplicationSource, revision string, project *appv1.AppProject) ([]*unstructured.Unstructured, *apiclient.ManifestResponse, error)
GetWriteCredentials(ctx context.Context, repoURL string, project string) (*appv1.Repository, error)
RequestAppRefresh(appName string, appNamespace string) error
// TODO: only allow access to the hydrator status
PersistAppHydratorStatus(orig *appv1.Application, newStatus *appv1.SourceHydratorStatus)
AddHydrationQueueItem(key HydrationQueueKey)
}
type Hydrator struct {
dependencies Dependencies
statusRefreshTimeout time.Duration
commitClientset commitclient.Clientset
}
func NewHydrator(dependencies Dependencies, statusRefreshTimeout time.Duration, commitClientset commitclient.Clientset) *Hydrator {
return &Hydrator{
dependencies: dependencies,
statusRefreshTimeout: statusRefreshTimeout,
commitClientset: commitClientset,
}
}
func (h *Hydrator) ProcessAppHydrateQueueItem(origApp *appv1.Application) {
origApp = origApp.DeepCopy()
app := origApp.DeepCopy()
if app.Spec.SourceHydrator == nil {
return
}
logCtx := utils.GetAppLog(app)
logCtx.Debug("Processing app hydrate queue item")
// TODO: don't reuse statusRefreshTimeout. Create a new timeout for hydration.
needsHydration, reason := appNeedsHydration(origApp, h.statusRefreshTimeout)
if !needsHydration {
return
}
logCtx.WithField("reason", reason).Info("Hydrating app")
app.Status.SourceHydrator.CurrentOperation = &appv1.HydrateOperation{
StartedAt: metav1.Now(),
FinishedAt: nil,
Phase: appv1.HydrateOperationPhaseHydrating,
SourceHydrator: *app.Spec.SourceHydrator,
}
h.dependencies.PersistAppHydratorStatus(origApp, &app.Status.SourceHydrator)
origApp.Status.SourceHydrator = app.Status.SourceHydrator
h.dependencies.AddHydrationQueueItem(getHydrationQueueKey(app))
logCtx.Debug("Successfully processed app hydrate queue item")
}
func getHydrationQueueKey(app *appv1.Application) HydrationQueueKey {
destinationBranch := app.Spec.SourceHydrator.SyncSource.TargetBranch
if app.Spec.SourceHydrator.HydrateTo != nil {
destinationBranch = app.Spec.SourceHydrator.HydrateTo.TargetBranch
}
key := HydrationQueueKey{
SourceRepoURL: app.Spec.SourceHydrator.DrySource.RepoURL,
SourceTargetRevision: app.Spec.SourceHydrator.DrySource.TargetRevision,
DestinationBranch: destinationBranch,
}
return key
}
type HydrationQueueKey struct {
SourceRepoURL string
SourceTargetRevision string
DestinationBranch string
}
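
A sketch of the key computed by getHydrationQueueKey above, with an illustrative repo URL: when HydrateTo is set, it overrides the sync-source branch as the hydration destination.

app := &appv1.Application{Spec: appv1.ApplicationSpec{SourceHydrator: &appv1.SourceHydrator{
	DrySource:  appv1.DrySource{RepoURL: "https://example.com/repo.git", TargetRevision: "main"},
	SyncSource: appv1.SyncSource{TargetBranch: "env/test", Path: "apps"},
	HydrateTo:  &appv1.HydrateTo{TargetBranch: "env/test-next"},
}}}
key := getHydrationQueueKey(app)
// key.DestinationBranch == "env/test-next" (HydrateTo wins over SyncSource.TargetBranch)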
// uniqueHydrationDestination is used to detect duplicate hydrate destinations.
type uniqueHydrationDestination struct {
sourceRepoURL string
sourceTargetRevision string
destinationBranch string
destinationPath string
}
func (h *Hydrator) ProcessHydrationQueueItem(hydrationKey HydrationQueueKey) (processNext bool) {
logCtx := log.WithFields(log.Fields{
"sourceRepoURL": hydrationKey.SourceRepoURL,
"sourceTargetRevision": hydrationKey.SourceTargetRevision,
"destinationBranch": hydrationKey.DestinationBranch,
})
relevantApps, drySHA, hydratedSHA, err := h.hydrateAppsLatestCommit(logCtx, hydrationKey)
if drySHA != "" {
logCtx = logCtx.WithField("drySHA", drySHA)
}
if err != nil {
logCtx.WithField("appCount", len(relevantApps)).WithError(err).Error("Failed to hydrate apps")
for _, app := range relevantApps {
origApp := app.DeepCopy()
app.Status.SourceHydrator.CurrentOperation.Phase = appv1.HydrateOperationPhaseFailed
failedAt := metav1.Now()
app.Status.SourceHydrator.CurrentOperation.FinishedAt = &failedAt
app.Status.SourceHydrator.CurrentOperation.Message = fmt.Sprintf("Failed to hydrate revision %q: %v", drySHA, err.Error())
// We may or may not have gotten far enough in the hydration process to get a non-empty SHA, but set it just
// in case we did.
app.Status.SourceHydrator.CurrentOperation.DrySHA = drySHA
h.dependencies.PersistAppHydratorStatus(origApp, &app.Status.SourceHydrator)
logCtx = logCtx.WithField("app", app.QualifiedName())
logCtx.Errorf("Failed to hydrate app: %v", err)
}
return
}
logCtx.WithField("appCount", len(relevantApps)).Debug("Successfully hydrated apps")
finishedAt := metav1.Now()
for _, app := range relevantApps {
origApp := app.DeepCopy()
operation := &appv1.HydrateOperation{
StartedAt: app.Status.SourceHydrator.CurrentOperation.StartedAt,
FinishedAt: &finishedAt,
Phase: appv1.HydrateOperationPhaseHydrated,
Message: "",
DrySHA: drySHA,
HydratedSHA: hydratedSHA,
SourceHydrator: app.Status.SourceHydrator.CurrentOperation.SourceHydrator,
}
app.Status.SourceHydrator.CurrentOperation = operation
app.Status.SourceHydrator.LastSuccessfulOperation = &appv1.SuccessfulHydrateOperation{
DrySHA: drySHA,
HydratedSHA: hydratedSHA,
SourceHydrator: app.Status.SourceHydrator.CurrentOperation.SourceHydrator,
}
h.dependencies.PersistAppHydratorStatus(origApp, &app.Status.SourceHydrator)
// Request a refresh since we pushed a new commit.
err := h.dependencies.RequestAppRefresh(app.Name, app.Namespace)
if err != nil {
logCtx.WithField("app", app.QualifiedName()).WithError(err).Error("Failed to request app refresh after hydration")
}
}
return
}
func (h *Hydrator) hydrateAppsLatestCommit(logCtx *log.Entry, hydrationKey HydrationQueueKey) ([]*appv1.Application, string, string, error) {
relevantApps, err := h.getRelevantAppsForHydration(logCtx, hydrationKey)
if err != nil {
return nil, "", "", fmt.Errorf("failed to get relevant apps for hydration: %w", err)
}
dryRevision, hydratedRevision, err := h.hydrate(logCtx, relevantApps)
if err != nil {
return relevantApps, dryRevision, "", fmt.Errorf("failed to hydrate apps: %w", err)
}
return relevantApps, dryRevision, hydratedRevision, nil
}
func (h *Hydrator) getRelevantAppsForHydration(logCtx *log.Entry, hydrationKey HydrationQueueKey) ([]*appv1.Application, error) {
// Get all apps
apps, err := h.dependencies.GetProcessableApps()
if err != nil {
return nil, fmt.Errorf("failed to list apps: %w", err)
}
var relevantApps []*appv1.Application
uniqueDestinations := make(map[uniqueHydrationDestination]bool, len(apps.Items))
for _, app := range apps.Items {
if app.Spec.SourceHydrator == nil {
continue
}
if app.Spec.SourceHydrator.DrySource.RepoURL != hydrationKey.SourceRepoURL ||
app.Spec.SourceHydrator.DrySource.TargetRevision != hydrationKey.SourceTargetRevision {
continue
}
destinationBranch := app.Spec.SourceHydrator.SyncSource.TargetBranch
if app.Spec.SourceHydrator.HydrateTo != nil {
destinationBranch = app.Spec.SourceHydrator.HydrateTo.TargetBranch
}
if destinationBranch != hydrationKey.DestinationBranch {
continue
}
var proj *appv1.AppProject
proj, err = h.dependencies.GetProcessableAppProj(&app)
if err != nil {
return nil, fmt.Errorf("failed to get project %q for app %q: %w", app.Spec.Project, app.QualifiedName(), err)
}
source := app.Spec.GetSource()
permitted := proj.IsSourcePermitted(source)
if !permitted {
// Log and skip. We don't want to fail the entire operation because of one app.
logCtx.Warnf("App %q is not permitted to use source %q", app.QualifiedName(), source.String())
continue
}
uniqueDestinationKey := uniqueHydrationDestination{
sourceRepoURL: app.Spec.SourceHydrator.DrySource.RepoURL,
sourceTargetRevision: app.Spec.SourceHydrator.DrySource.TargetRevision,
destinationBranch: destinationBranch,
destinationPath: app.Spec.SourceHydrator.SyncSource.Path,
}
// TODO: test the dupe detection
if _, ok := uniqueDestinations[uniqueDestinationKey]; ok {
return nil, fmt.Errorf("multiple app hydrators use the same destination: %v", uniqueDestinationKey)
}
uniqueDestinations[uniqueDestinationKey] = true
relevantApps = append(relevantApps, &app)
}
return relevantApps, nil
}
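// hydrate generates manifests for the given apps from their shared dry source and commits the result to the
// destination branch. It returns the dry SHA that was hydrated and the hydrated SHA that was committed.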
func (h *Hydrator) hydrate(logCtx *log.Entry, apps []*appv1.Application) (string, string, error) {
if len(apps) == 0 {
return "", "", nil
}
repoURL := apps[0].Spec.SourceHydrator.DrySource.RepoURL
syncBranch := apps[0].Spec.SourceHydrator.SyncSource.TargetBranch
targetBranch := apps[0].Spec.GetHydrateToSource().TargetRevision
var paths []*commitclient.PathDetails
projects := make(map[string]bool, len(apps))
var targetRevision string
// TODO: parallelize this loop
for _, app := range apps {
project, err := h.dependencies.GetProcessableAppProj(app)
if err != nil {
return "", "", fmt.Errorf("failed to get project: %w", err)
}
projects[project.Name] = true
drySource := appv1.ApplicationSource{
RepoURL: app.Spec.SourceHydrator.DrySource.RepoURL,
Path: app.Spec.SourceHydrator.DrySource.Path,
TargetRevision: app.Spec.SourceHydrator.DrySource.TargetRevision,
}
if targetRevision == "" {
targetRevision = app.Spec.SourceHydrator.DrySource.TargetRevision
}
// TODO: enable signature verification
objs, resp, err := h.dependencies.GetRepoObjs(app, drySource, targetRevision, project)
if err != nil {
return "", "", fmt.Errorf("failed to get repo objects: %w", err)
}
// This should be the DRY SHA. We set it here so that after processing the first app, all apps are hydrated
// using the same SHA.
targetRevision = resp.Revision
// Set up a ManifestsRequest
manifestDetails := make([]*commitclient.HydratedManifestDetails, len(objs))
for i, obj := range objs {
objJson, err := json.Marshal(obj)
if err != nil {
return "", "", fmt.Errorf("failed to marshal object: %w", err)
}
manifestDetails[i] = &commitclient.HydratedManifestDetails{ManifestJSON: string(objJson)}
}
paths = append(paths, &commitclient.PathDetails{
Path: app.Spec.SourceHydrator.SyncSource.Path,
Manifests: manifestDetails,
Commands: resp.Commands,
})
}
// If all the apps are under the same project, use that project. Otherwise, use an empty string to indicate that we
// need global creds.
project := ""
if len(projects) == 1 {
for p := range projects {
project = p
}
}
repo, err := h.dependencies.GetWriteCredentials(context.Background(), repoURL, project)
if err != nil {
return "", "", fmt.Errorf("failed to get hydrator credentials: %w", err)
}
if repo == nil {
// Try without credentials.
repo = &appv1.Repository{
Repo: repoURL,
}
logCtx.Warn("no credentials found for repo, continuing without credentials")
}
manifestsRequest := commitclient.CommitHydratedManifestsRequest{
Repo: repo,
SyncBranch: syncBranch,
TargetBranch: targetBranch,
DrySha: targetRevision,
CommitMessage: fmt.Sprintf("[Argo CD Bot] hydrate %s", targetRevision),
Paths: paths,
}
closer, commitService, err := h.commitClientset.NewCommitServerClient()
if err != nil {
return targetRevision, "", fmt.Errorf("failed to create commit service: %w", err)
}
defer argoio.Close(closer)
resp, err := commitService.CommitHydratedManifests(context.Background(), &manifestsRequest)
if err != nil {
return targetRevision, "", fmt.Errorf("failed to commit hydrated manifests: %w", err)
}
return targetRevision, resp.HydratedSha, nil
}
// appNeedsHydration answers if application needs manifests hydrated.
func appNeedsHydration(app *appv1.Application, statusHydrateTimeout time.Duration) (needsHydration bool, reason string) {
if app.Spec.SourceHydrator == nil {
return false, "source hydrator not configured"
}
var hydratedAt *metav1.Time
if app.Status.SourceHydrator.CurrentOperation != nil {
hydratedAt = &app.Status.SourceHydrator.CurrentOperation.StartedAt
}
if app.IsHydrateRequested() {
return true, "hydrate requested"
} else if app.Status.SourceHydrator.CurrentOperation == nil {
return true, "no previous hydrate operation"
} else if !app.Spec.SourceHydrator.DeepEquals(app.Status.SourceHydrator.CurrentOperation.SourceHydrator) {
return true, "spec.sourceHydrator differs"
} else if app.Status.SourceHydrator.CurrentOperation.Phase == appv1.HydrateOperationPhaseFailed && metav1.Now().Sub(app.Status.SourceHydrator.CurrentOperation.FinishedAt.Time) > 2*time.Minute {
return true, "previous hydrate operation failed more than 2 minutes ago"
} else if hydratedAt == nil || hydratedAt.Add(statusHydrateTimeout).Before(time.Now().UTC()) {
return true, "hydration expired"
}
return false, ""
}


@@ -0,0 +1,103 @@
package hydrator
import (
"testing"
"time"
"github.com/stretchr/testify/assert"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
)
func Test_appNeedsHydration(t *testing.T) {
t.Parallel()
now := metav1.NewTime(time.Now())
oneHourAgo := metav1.NewTime(now.Add(-1 * time.Hour))
testCases := []struct {
name string
app *v1alpha1.Application
timeout time.Duration
expectedNeedsHydration bool
expectedMessage string
}{
{
name: "source hydrator not configured",
app: &v1alpha1.Application{},
expectedNeedsHydration: false,
expectedMessage: "source hydrator not configured",
},
{
name: "hydrate requested",
app: &v1alpha1.Application{
ObjectMeta: metav1.ObjectMeta{Annotations: map[string]string{v1alpha1.AnnotationKeyHydrate: "normal"}},
Spec: v1alpha1.ApplicationSpec{SourceHydrator: &v1alpha1.SourceHydrator{}},
},
timeout: 1 * time.Hour,
expectedNeedsHydration: true,
expectedMessage: "hydrate requested",
},
{
name: "no previous hydrate operation",
app: &v1alpha1.Application{
Spec: v1alpha1.ApplicationSpec{SourceHydrator: &v1alpha1.SourceHydrator{}},
},
timeout: 1 * time.Hour,
expectedNeedsHydration: true,
expectedMessage: "no previous hydrate operation",
},
{
name: "spec.sourceHydrator differs",
app: &v1alpha1.Application{
Spec: v1alpha1.ApplicationSpec{SourceHydrator: &v1alpha1.SourceHydrator{}},
Status: v1alpha1.ApplicationStatus{SourceHydrator: v1alpha1.SourceHydratorStatus{CurrentOperation: &v1alpha1.HydrateOperation{
SourceHydrator: v1alpha1.SourceHydrator{DrySource: v1alpha1.DrySource{RepoURL: "something new"}},
}}},
},
timeout: 1 * time.Hour,
expectedNeedsHydration: true,
expectedMessage: "spec.sourceHydrator differs",
},
{
name: "hydration failed more than two minutes ago",
app: &v1alpha1.Application{
Spec: v1alpha1.ApplicationSpec{SourceHydrator: &v1alpha1.SourceHydrator{}},
Status: v1alpha1.ApplicationStatus{SourceHydrator: v1alpha1.SourceHydratorStatus{CurrentOperation: &v1alpha1.HydrateOperation{DrySHA: "abc123", FinishedAt: &oneHourAgo, Phase: v1alpha1.HydrateOperationPhaseFailed}}},
},
timeout: 1 * time.Hour,
expectedNeedsHydration: true,
expectedMessage: "previous hydrate operation failed more than 2 minutes ago",
},
{
name: "timeout reached",
app: &v1alpha1.Application{
Spec: v1alpha1.ApplicationSpec{SourceHydrator: &v1alpha1.SourceHydrator{}},
Status: v1alpha1.ApplicationStatus{SourceHydrator: v1alpha1.SourceHydratorStatus{CurrentOperation: &v1alpha1.HydrateOperation{StartedAt: oneHourAgo}}},
},
timeout: 1 * time.Minute,
expectedNeedsHydration: true,
expectedMessage: "hydration expired",
},
{
name: "hydrate not needed",
app: &v1alpha1.Application{
Spec: v1alpha1.ApplicationSpec{SourceHydrator: &v1alpha1.SourceHydrator{}},
Status: v1alpha1.ApplicationStatus{SourceHydrator: v1alpha1.SourceHydratorStatus{CurrentOperation: &v1alpha1.HydrateOperation{DrySHA: "abc123", StartedAt: now, FinishedAt: &now, Phase: v1alpha1.HydrateOperationPhaseFailed}}},
},
timeout: 1 * time.Hour,
expectedNeedsHydration: false,
expectedMessage: "",
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
needsHydration, message := appNeedsHydration(tc.app, tc.timeout)
assert.Equal(t, tc.expectedNeedsHydration, needsHydration)
assert.Equal(t, tc.expectedMessage, message)
})
}
}


@@ -0,0 +1,90 @@
package controller
import (
"context"
"fmt"
"github.com/argoproj/argo-cd/v2/controller/hydrator"
appv1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/v2/reposerver/apiclient"
argoutil "github.com/argoproj/argo-cd/v2/util/argo"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)
/**
This file implements the hydrator.Dependencies interface for the ApplicationController.
Hydration logic does not belong in this file. The methods here should be "bookkeeping" methods that keep hydration work
in the hydrator and app controller work in the app controller. The only purpose of this file is to provide the hydrator
safe, minimal access to certain app controller functionality to avoid duplicate code.
*/
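// GetProcessableAppProj returns the AppProject for the given application.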
func (ctrl *ApplicationController) GetProcessableAppProj(app *appv1.Application) (*appv1.AppProject, error) {
return ctrl.getAppProj(app)
}
// GetProcessableApps returns a list of applications that are processable by the controller.
func (ctrl *ApplicationController) GetProcessableApps() (*appv1.ApplicationList, error) {
// getAppList already filters out applications that are not processable by the controller.
return ctrl.getAppList(metav1.ListOptions{})
}
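// GetRepoObjs renders manifests for the given dry source at the given revision and returns the parsed objects
// along with the repo-server's manifest response.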
func (ctrl *ApplicationController) GetRepoObjs(origApp *appv1.Application, drySource appv1.ApplicationSource, revision string, project *appv1.AppProject) ([]*unstructured.Unstructured, *apiclient.ManifestResponse, error) {
drySources := []appv1.ApplicationSource{drySource}
dryRevisions := []string{revision}
appLabelKey, err := ctrl.settingsMgr.GetAppInstanceLabelKey()
if err != nil {
return nil, nil, fmt.Errorf("failed to get app instance label key: %w", err)
}
app := origApp.DeepCopy()
// Remove the manifest generate path annotation, because the feature will misbehave for apps using source hydrator.
// Setting this annotation causes GetRepoObjs to compare the dry source commit to the most recent synced commit. The
// problem is that the most recent synced commit is likely on the hydrated branch, not the dry branch. The
// comparison will throw an error and break hydration.
//
// The long-term solution will probably be to persist the synced _dry_ revision and use that for the comparison.
delete(app.Annotations, appv1.AnnotationKeyManifestGeneratePaths)
// FIXME: use cache and revision cache
objs, resp, _, err := ctrl.appStateManager.GetRepoObjs(app, drySources, appLabelKey, dryRevisions, true, true, false, project, false, false)
if err != nil {
return nil, nil, fmt.Errorf("failed to get repo objects: %w", err)
}
if len(resp) != 1 {
return nil, nil, fmt.Errorf("expected one manifest response, got %d", len(resp))
}
return objs, resp[0], nil
}
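// GetWriteCredentials returns the repository credentials with push access for the given repo URL and project,
// or nil if none are configured.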
func (ctrl *ApplicationController) GetWriteCredentials(ctx context.Context, repoURL string, project string) (*appv1.Repository, error) {
return ctrl.db.GetWriteRepository(ctx, repoURL, project)
}
func (ctrl *ApplicationController) RequestAppRefresh(appName string, appNamespace string) error {
// We request a refresh by setting the annotation instead of by adding it to the refresh queue, because there is no
// guarantee that the hydrator is running on the same controller shard as is processing the application.
// This function is called for each app after a hydrate operation is completed so that the app controller can pick
// up the newly-hydrated changes. So we set hydrate=false to avoid a hydrate loop.
_, err := argoutil.RefreshApp(ctrl.applicationClientset.ArgoprojV1alpha1().Applications(appNamespace), appName, appv1.RefreshTypeNormal, false)
if err != nil {
return fmt.Errorf("failed to request app refresh: %w", err)
}
return nil
}
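// PersistAppHydratorStatus persists the given source hydrator status to the app's status, leaving the rest of the
// status unchanged.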
func (ctrl *ApplicationController) PersistAppHydratorStatus(orig *appv1.Application, newStatus *appv1.SourceHydratorStatus) {
status := orig.Status.DeepCopy()
status.SourceHydrator = *newStatus
ctrl.persistAppStatus(orig, status)
}
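// AddHydrationQueueItem adds the given key to the controller's hydration queue, subject to rate limiting.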
func (ctrl *ApplicationController) AddHydrationQueueItem(key hydrator.HydrationQueueKey) {
ctrl.hydrationQueue.AddRateLimited(key)
}


@@ -30,18 +30,20 @@ import (
type MetricsServer struct {
*http.Server
syncCounter *prometheus.CounterVec
kubectlExecCounter *prometheus.CounterVec
kubectlExecPendingGauge *prometheus.GaugeVec
orphanedResourcesGauge *prometheus.GaugeVec
k8sRequestCounter *prometheus.CounterVec
clusterEventsCounter *prometheus.CounterVec
redisRequestCounter *prometheus.CounterVec
reconcileHistogram *prometheus.HistogramVec
redisRequestHistogram *prometheus.HistogramVec
registry *prometheus.Registry
hostname string
cron *cron.Cron
syncCounter *prometheus.CounterVec
kubectlExecCounter *prometheus.CounterVec
kubectlExecPendingGauge *prometheus.GaugeVec
orphanedResourcesGauge *prometheus.GaugeVec
k8sRequestCounter *prometheus.CounterVec
clusterEventsCounter *prometheus.CounterVec
redisRequestCounter *prometheus.CounterVec
reconcileHistogram *prometheus.HistogramVec
redisRequestHistogram *prometheus.HistogramVec
resourceEventsProcessingHistogram *prometheus.HistogramVec
resourceEventsNumberGauge *prometheus.GaugeVec
registry *prometheus.Registry
hostname string
cron *cron.Cron
}
const (
@@ -153,6 +155,20 @@ var (
},
descAppDefaultLabels,
)
resourceEventsProcessingHistogram = prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Name: "argocd_resource_events_processing",
Help: "Time to process resource events in seconds.",
Buckets: []float64{0.25, 0.5, 1, 2, 4, 8, 16},
},
[]string{"server"},
)
resourceEventsNumberGauge = prometheus.NewGaugeVec(prometheus.GaugeOpts{
Name: "argocd_resource_events_processed_in_batch",
Help: "Number of resource events processed in batch",
}, []string{"server"})
)
// NewMetricsServer returns a new prometheus server which collects application metrics
@@ -202,6 +218,8 @@ func NewMetricsServer(addr string, appLister applister.ApplicationLister, appFil
registry.MustRegister(clusterEventsCounter)
registry.MustRegister(redisRequestCounter)
registry.MustRegister(redisRequestHistogram)
registry.MustRegister(resourceEventsProcessingHistogram)
registry.MustRegister(resourceEventsNumberGauge)
return &MetricsServer{
registry: registry,
@@ -209,16 +227,18 @@ func NewMetricsServer(addr string, appLister applister.ApplicationLister, appFil
Addr: addr,
Handler: mux,
},
syncCounter: syncCounter,
k8sRequestCounter: k8sRequestCounter,
kubectlExecCounter: kubectlExecCounter,
kubectlExecPendingGauge: kubectlExecPendingGauge,
orphanedResourcesGauge: orphanedResourcesGauge,
reconcileHistogram: reconcileHistogram,
clusterEventsCounter: clusterEventsCounter,
redisRequestCounter: redisRequestCounter,
redisRequestHistogram: redisRequestHistogram,
hostname: hostname,
syncCounter: syncCounter,
k8sRequestCounter: k8sRequestCounter,
kubectlExecCounter: kubectlExecCounter,
kubectlExecPendingGauge: kubectlExecPendingGauge,
orphanedResourcesGauge: orphanedResourcesGauge,
reconcileHistogram: reconcileHistogram,
clusterEventsCounter: clusterEventsCounter,
redisRequestCounter: redisRequestCounter,
redisRequestHistogram: redisRequestHistogram,
resourceEventsProcessingHistogram: resourceEventsProcessingHistogram,
resourceEventsNumberGauge: resourceEventsNumberGauge,
hostname: hostname,
// This cron is used to expire the metrics cache.
// Currently clearing the metrics cache is logging and deleting from the map
// so there is no possibility of panic, but we will add a chain to keep robfig/cron v1 behavior.
@@ -284,6 +304,12 @@ func (m *MetricsServer) ObserveRedisRequestDuration(duration time.Duration) {
m.redisRequestHistogram.WithLabelValues(m.hostname, common.ApplicationController).Observe(duration.Seconds())
}
// ObserveResourceEventsProcessingDuration observes resource events processing duration
func (m *MetricsServer) ObserveResourceEventsProcessingDuration(server string, duration time.Duration, processedEventsNumber int) {
m.resourceEventsProcessingHistogram.WithLabelValues(server).Observe(duration.Seconds())
m.resourceEventsNumberGauge.WithLabelValues(server).Set(float64(processedEventsNumber))
}
// IncReconcile increments the reconcile counter for an application
func (m *MetricsServer) IncReconcile(app *argoappv1.Application, duration time.Duration) {
m.reconcileHistogram.WithLabelValues(app.Namespace, app.Spec.Destination.Server).Observe(duration.Seconds())
@@ -311,6 +337,8 @@ func (m *MetricsServer) SetExpiration(cacheExpiration time.Duration) error {
m.redisRequestCounter.Reset()
m.reconcileHistogram.Reset()
m.redisRequestHistogram.Reset()
m.resourceEventsProcessingHistogram.Reset()
m.resourceEventsNumberGauge.Reset()
})
if err != nil {
return err


@@ -71,7 +71,7 @@ type managedResource struct {
type AppStateManager interface {
CompareAppState(app *v1alpha1.Application, project *v1alpha1.AppProject, revisions []string, sources []v1alpha1.ApplicationSource, noCache bool, noRevisionCache bool, localObjects []string, hasMultipleSources bool, rollback bool) (*comparisonResult, error)
SyncAppState(app *v1alpha1.Application, state *v1alpha1.OperationState)
GetRepoObjs(app *v1alpha1.Application, sources []v1alpha1.ApplicationSource, appLabelKey string, revisions []string, noCache, noRevisionCache, verifySignature bool, proj *v1alpha1.AppProject, rollback bool) ([]*unstructured.Unstructured, []*apiclient.ManifestResponse, bool, error)
GetRepoObjs(app *v1alpha1.Application, sources []v1alpha1.ApplicationSource, appLabelKey string, revisions []string, noCache, noRevisionCache, verifySignature bool, proj *v1alpha1.AppProject, rollback, sendRuntimeState bool) ([]*unstructured.Unstructured, []*apiclient.ManifestResponse, bool, error)
}
// comparisonResult holds the state of an application after the reconciliation
@@ -108,6 +108,7 @@ type appStateManager struct {
appclientset appclientset.Interface
projInformer cache.SharedIndexInformer
kubectl kubeutil.Kubectl
onKubectlRun kubeutil.OnKubectlRunFunc
repoClientset apiclient.Clientset
liveStateCache statecache.LiveStateCache
cache *appstatecache.Cache
@@ -125,7 +126,7 @@ type appStateManager struct {
// task to the repo-server. It returns the list of generated manifests as unstructured
// objects. It also returns the full response from all calls to the repo server as the
// second argument.
func (m *appStateManager) GetRepoObjs(app *v1alpha1.Application, sources []v1alpha1.ApplicationSource, appLabelKey string, revisions []string, noCache, noRevisionCache, verifySignature bool, proj *v1alpha1.AppProject, rollback bool) ([]*unstructured.Unstructured, []*apiclient.ManifestResponse, bool, error) {
func (m *appStateManager) GetRepoObjs(app *v1alpha1.Application, sources []v1alpha1.ApplicationSource, appLabelKey string, revisions []string, noCache, noRevisionCache, verifySignature bool, proj *v1alpha1.AppProject, rollback, sendRuntimeState bool) ([]*unstructured.Unstructured, []*apiclient.ManifestResponse, bool, error) {
ts := stats.NewTimingStats()
helmRepos, err := m.db.ListHelmRepositories(context.Background())
if err != nil {
@@ -168,9 +169,13 @@ func (m *appStateManager) GetRepoObjs(app *v1alpha1.Application, sources []v1alp
}
ts.AddCheckpoint("build_options_ms")
serverVersion, apiResources, err := m.liveStateCache.GetVersionsInfo(app.Spec.Destination.Server)
if err != nil {
return nil, nil, false, fmt.Errorf("failed to get cluster version for cluster %q: %w", app.Spec.Destination.Server, err)
var serverVersion string
var apiResources []kubeutil.APIResourceInfo
if sendRuntimeState {
serverVersion, apiResources, err = m.liveStateCache.GetVersionsInfo(app.Spec.Destination.Server)
if err != nil {
return nil, nil, false, fmt.Errorf("failed to get cluster version for cluster %q: %w", app.Spec.Destination.Server, err)
}
}
conn, repoClient, err := m.repoClientset.NewRepoServerClient()
if err != nil {
@@ -219,6 +224,12 @@ func (m *appStateManager) GetRepoObjs(app *v1alpha1.Application, sources []v1alp
revision := revisions[i]
appNamespace := app.Spec.Destination.Namespace
apiVersions := argo.APIResourcesToStrings(apiResources, true)
if !sendRuntimeState {
appNamespace = ""
}
if !source.IsHelm() && syncedRevision != "" && keyManifestGenerateAnnotationExists && keyManifestGenerateAnnotationVal != "" {
// Validate the manifest-generate-path annotation to avoid generating manifests if it has not changed.
updateRevisionResult, err := repoClient.UpdateRevisionForPaths(context.Background(), &apiclient.UpdateRevisionForPathsRequest{
@@ -229,10 +240,10 @@ func (m *appStateManager) GetRepoObjs(app *v1alpha1.Application, sources []v1alp
Paths: path.GetAppRefreshPaths(app),
AppLabelKey: appLabelKey,
AppName: app.InstanceName(m.namespace),
Namespace: app.Spec.Destination.Namespace,
Namespace: appNamespace,
ApplicationSource: &source,
KubeVersion: serverVersion,
ApiVersions: argo.APIResourcesToStrings(apiResources, true),
ApiVersions: apiVersions,
TrackingMethod: string(argo.GetTrackingMethod(m.settingsMgr)),
RefSources: refSources,
HasMultipleSources: app.Spec.HasMultipleSources(),
@@ -263,11 +274,11 @@ func (m *appStateManager) GetRepoObjs(app *v1alpha1.Application, sources []v1alp
NoRevisionCache: noRevisionCache,
AppLabelKey: appLabelKey,
AppName: app.InstanceName(m.namespace),
Namespace: app.Spec.Destination.Namespace,
Namespace: appNamespace,
ApplicationSource: &source,
KustomizeOptions: kustomizeOptions,
KubeVersion: serverVersion,
ApiVersions: argo.APIResourcesToStrings(apiResources, true),
ApiVersions: apiVersions,
VerifySignature: verifySignature,
HelmRepoCreds: permittedHelmCredentials,
TrackingMethod: string(argo.GetTrackingMethod(m.settingsMgr)),
@@ -309,6 +320,39 @@ func (m *appStateManager) GetRepoObjs(app *v1alpha1.Application, sources []v1alp
return targetObjs, manifestInfos, revisionUpdated, nil
}
// ResolveGitRevision will resolve the given revision to a full commit SHA. Only works for git.
func (m *appStateManager) ResolveGitRevision(repoURL string, revision string) (string, error) {
conn, repoClient, err := m.repoClientset.NewRepoServerClient()
if err != nil {
return "", fmt.Errorf("failed to connect to repo server: %w", err)
}
defer io.Close(conn)
repo, err := m.db.GetRepository(context.Background(), repoURL, "")
if err != nil {
return "", fmt.Errorf("failed to get repo %q: %w", repoURL, err)
}
// Mock the app. The repo-server only needs to know whether the "chart" field is populated.
app := &v1alpha1.Application{
Spec: v1alpha1.ApplicationSpec{
Source: &v1alpha1.ApplicationSource{
RepoURL: repoURL,
TargetRevision: revision,
},
},
}
resp, err := repoClient.ResolveRevision(context.Background(), &apiclient.ResolveRevisionRequest{
Repo: repo,
App: app,
AmbiguousRevision: revision,
})
if err != nil {
return "", fmt.Errorf("failed to determine whether the dry source has changed: %w", err)
}
return resp.Revision, nil
}
func unmarshalManifests(manifests []string) ([]*unstructured.Unstructured, error) {
targetObjs := make([]*unstructured.Unstructured, 0)
for _, manifest := range manifests {
@@ -437,24 +481,23 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1
// return unknown comparison result if basic comparison settings cannot be loaded
if err != nil {
now := metav1.Now()
if hasMultipleSources {
return &comparisonResult{
syncStatus: &v1alpha1.SyncStatus{
ComparedTo: v1alpha1.ComparedTo{Destination: app.Spec.Destination, Sources: sources, IgnoreDifferences: app.Spec.IgnoreDifferences},
ComparedTo: app.Spec.BuildComparedToStatus(),
Status: v1alpha1.SyncStatusCodeUnknown,
Revisions: revisions,
},
healthStatus: &v1alpha1.HealthStatus{Status: health.HealthStatusUnknown, LastTransitionTime: &now},
healthStatus: &v1alpha1.HealthStatus{Status: health.HealthStatusUnknown},
}, nil
} else {
return &comparisonResult{
syncStatus: &v1alpha1.SyncStatus{
ComparedTo: v1alpha1.ComparedTo{Source: sources[0], Destination: app.Spec.Destination, IgnoreDifferences: app.Spec.IgnoreDifferences},
ComparedTo: app.Spec.BuildComparedToStatus(),
Status: v1alpha1.SyncStatusCodeUnknown,
Revision: revisions[0],
},
healthStatus: &v1alpha1.HealthStatus{Status: health.HealthStatusUnknown, LastTransitionTime: &now},
healthStatus: &v1alpha1.HealthStatus{Status: health.HealthStatusUnknown},
}, nil
}
}
@@ -490,7 +533,7 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1
}
}
targetObjs, manifestInfos, revisionUpdated, err = m.GetRepoObjs(app, sources, appLabelKey, revisions, noCache, noRevisionCache, verifySignature, project, rollback)
targetObjs, manifestInfos, revisionUpdated, err = m.GetRepoObjs(app, sources, appLabelKey, revisions, noCache, noRevisionCache, verifySignature, project, rollback, true)
if err != nil {
targetObjs = make([]*unstructured.Unstructured, 0)
msg := fmt.Sprintf("Failed to load target state: %s", err.Error())
@@ -698,13 +741,13 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1
diffConfigBuilder.WithServerSideDiff(serverSideDiff)
if serverSideDiff {
resourceOps, cleanup, err := m.getResourceOperations(app.Spec.Destination.Server)
applier, cleanup, err := m.getServerSideDiffDryRunApplier(app.Spec.Destination.Server)
if err != nil {
log.Errorf("CompareAppState error getting resource operations: %s", err)
log.Errorf("CompareAppState error getting server side diff dry run applier: %s", err)
conditions = append(conditions, v1alpha1.ApplicationCondition{Type: v1alpha1.ApplicationConditionUnknownError, Message: err.Error(), LastTransitionTime: &now})
}
defer cleanup()
diffConfigBuilder.WithServerSideDryRunner(diff.NewK8sServerSideDryRunner(resourceOps))
diffConfigBuilder.WithServerSideDryRunner(diff.NewK8sServerSideDryRunner(applier))
}
// enable structured merge diff if application syncs with server-side apply
@@ -1029,6 +1072,7 @@ func NewAppStateManager(
repoClientset apiclient.Clientset,
namespace string,
kubectl kubeutil.Kubectl,
onKubectlRun kubeutil.OnKubectlRunFunc,
settingsMgr *settings.SettingsManager,
liveStateCache statecache.LiveStateCache,
projInformer cache.SharedIndexInformer,
@@ -1047,6 +1091,7 @@ func NewAppStateManager(
db: db,
appclientset: appclientset,
kubectl: kubectl,
onKubectlRun: onKubectlRun,
repoClientset: repoClientset,
namespace: namespace,
settingsMgr: settingsMgr,


@@ -715,7 +715,7 @@ func TestSetHealth(t *testing.T) {
require.NoError(t, err)
assert.Equal(t, health.HealthStatusHealthy, compRes.healthStatus.Status)
assert.False(t, compRes.healthStatus.LastTransitionTime.IsZero())
assert.Equal(t, app.Status.Health.LastTransitionTime, compRes.healthStatus.LastTransitionTime)
}
func TestPreserveStatusTimestamp(t *testing.T) {
@@ -790,7 +790,7 @@ func TestSetHealthSelfReferencedApp(t *testing.T) {
require.NoError(t, err)
assert.Equal(t, health.HealthStatusHealthy, compRes.healthStatus.Status)
assert.False(t, compRes.healthStatus.LastTransitionTime.IsZero())
assert.Equal(t, app.Status.Health.LastTransitionTime, compRes.healthStatus.LastTransitionTime)
}
func TestSetManagedResourcesWithOrphanedResources(t *testing.T) {
@@ -866,7 +866,7 @@ func TestReturnUnknownComparisonStateOnSettingLoadError(t *testing.T) {
require.NoError(t, err)
assert.Equal(t, health.HealthStatusUnknown, compRes.healthStatus.Status)
assert.False(t, compRes.healthStatus.LastTransitionTime.IsZero())
assert.Equal(t, app.Status.Health.LastTransitionTime, compRes.healthStatus.LastTransitionTime)
assert.Equal(t, argoappv1.SyncStatusCodeUnknown, compRes.syncStatus.Status)
}


@@ -14,6 +14,7 @@ import (
cdcommon "github.com/argoproj/argo-cd/v2/common"
gitopsDiff "github.com/argoproj/gitops-engine/pkg/diff"
"github.com/argoproj/gitops-engine/pkg/sync"
"github.com/argoproj/gitops-engine/pkg/sync/common"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
@@ -33,6 +34,7 @@ import (
"github.com/argoproj/argo-cd/v2/util/argo"
"github.com/argoproj/argo-cd/v2/util/argo/diff"
"github.com/argoproj/argo-cd/v2/util/glob"
kubeutil "github.com/argoproj/argo-cd/v2/util/kube"
logutils "github.com/argoproj/argo-cd/v2/util/log"
"github.com/argoproj/argo-cd/v2/util/lua"
"github.com/argoproj/argo-cd/v2/util/rand"
@@ -66,11 +68,11 @@ func (m *appStateManager) getGVKParser(server string) (*managedfields.GvkParser,
return cluster.GetGVKParser(), nil
}
// getResourceOperations will return the kubectl implementation of the ResourceOperations
// interface that provides functionality to manage kubernetes resources. Returns a
// getServerSideDiffDryRunApplier will return the kubectl implementation of the KubeApplier
// interface that provides functionality to dry run apply kubernetes resources. Returns a
// cleanup function that must be called to remove the generated kube config for this
// server.
func (m *appStateManager) getResourceOperations(server string) (kube.ResourceOperations, func(), error) {
func (m *appStateManager) getServerSideDiffDryRunApplier(server string) (gitopsDiff.KubeApplier, func(), error) {
clusterCache, err := m.liveStateCache.GetClusterCache(server)
if err != nil {
return nil, nil, fmt.Errorf("error getting cluster cache: %w", err)
@@ -85,7 +87,7 @@ func (m *appStateManager) getResourceOperations(server string) (kube.ResourceOpe
if err != nil {
return nil, nil, fmt.Errorf("error getting cluster REST config: %w", err)
}
ops, cleanup, err := m.kubectl.ManageResources(rawConfig, clusterCache.GetOpenAPISchema())
ops, cleanup, err := kubeutil.ManageServerSideDiffDryRuns(rawConfig, clusterCache.GetOpenAPISchema(), m.onKubectlRun)
if err != nil {
return nil, nil, fmt.Errorf("error creating kubectl ResourceOperations: %w", err)
}
@@ -112,15 +114,6 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, state *v1alpha
}
syncOp = *state.Operation.Sync
// validates if it should fail the sync if it finds shared resources
hasSharedResource, sharedResourceMessage := hasSharedResourceCondition(app)
if syncOp.SyncOptions.HasOption("FailOnSharedResource=true") &&
hasSharedResource {
state.Phase = common.OperationFailed
state.Message = fmt.Sprintf("Shared resource found: %s", sharedResourceMessage)
return
}
isMultiSourceRevision := app.Spec.HasMultipleSources()
rollback := len(syncOp.Sources) > 0 || syncOp.Source != nil
if rollback {
@@ -211,6 +204,15 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, state *v1alpha
syncRes.Revision = compareResult.syncStatus.Revision
syncRes.Revisions = compareResult.syncStatus.Revisions
// validates if it should fail the sync if it finds shared resources
hasSharedResource, sharedResourceMessage := hasSharedResourceCondition(app)
if syncOp.SyncOptions.HasOption("FailOnSharedResource=true") &&
hasSharedResource {
state.Phase = common.OperationFailed
state.Message = fmt.Sprintf("Shared resource found: %s", sharedResourceMessage)
return
}
// If there are any comparison or spec errors error conditions do not perform the operation
if errConditions := app.Status.GetConditions(map[v1alpha1.ApplicationConditionType]bool{
v1alpha1.ApplicationConditionComparisonError: true,


@@ -15,6 +15,7 @@ import (
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
argocommon "github.com/argoproj/argo-cd/v2/common"
"github.com/argoproj/argo-cd/v2/controller/testdata"
"github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/v2/reposerver/apiclient"
@@ -56,7 +57,7 @@ func TestPersistRevisionHistory(t *testing.T) {
updatedApp, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.Namespace).Get(context.Background(), app.Name, v1.GetOptions{})
require.NoError(t, err)
assert.Len(t, updatedApp.Status.History, 1)
require.Len(t, updatedApp.Status.History, 1)
assert.Equal(t, app.Spec.GetSource(), updatedApp.Status.History[0].Source)
assert.Equal(t, "abc123", updatedApp.Status.History[0].Revision)
}
@@ -190,17 +191,23 @@ func TestSyncComparisonError(t *testing.T) {
}
func TestAppStateManager_SyncAppState(t *testing.T) {
t.Parallel()
type fixture struct {
project *v1alpha1.AppProject
application *v1alpha1.Application
project *v1alpha1.AppProject
controller *ApplicationController
}
setup := func() *fixture {
setup := func(liveObjects map[kube.ResourceKey]*unstructured.Unstructured) *fixture {
app := newFakeApp()
app.Status.OperationState = nil
app.Status.History = nil
if liveObjects == nil {
liveObjects = make(map[kube.ResourceKey]*unstructured.Unstructured)
}
project := &v1alpha1.AppProject{
ObjectMeta: v1.ObjectMeta{
Namespace: test.FakeArgoCDNamespace,
@@ -208,6 +215,12 @@ func TestAppStateManager_SyncAppState(t *testing.T) {
},
Spec: v1alpha1.AppProjectSpec{
SignatureKeys: []v1alpha1.SignatureKey{{KeyID: "test"}},
Destinations: []v1alpha1.ApplicationDestination{
{
Namespace: "*",
Server: "*",
},
},
},
}
data := fakeData{
@@ -218,13 +231,13 @@ func TestAppStateManager_SyncAppState(t *testing.T) {
Server: test.FakeClusterURL,
Revision: "abc123",
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
managedLiveObjs: liveObjects,
}
ctrl := newFakeController(&data, nil)
return &fixture{
project: project,
application: app,
project: project,
controller: ctrl,
}
}
@@ -232,13 +245,23 @@ func TestAppStateManager_SyncAppState(t *testing.T) {
t.Run("will fail the sync if finds shared resources", func(t *testing.T) {
// given
t.Parallel()
f := setup()
syncErrorMsg := "deployment already applied by another application"
condition := v1alpha1.ApplicationCondition{
Type: v1alpha1.ApplicationConditionSharedResourceWarning,
Message: syncErrorMsg,
}
f.application.Status.Conditions = append(f.application.Status.Conditions, condition)
sharedObject := kube.MustToUnstructured(&corev1.ConfigMap{
TypeMeta: v1.TypeMeta{
APIVersion: "v1",
Kind: "ConfigMap",
},
ObjectMeta: v1.ObjectMeta{
Name: "configmap1",
Namespace: "default",
Labels: map[string]string{
argocommon.LabelKeyAppInstance: "another-app",
},
},
})
liveObjects := make(map[kube.ResourceKey]*unstructured.Unstructured)
liveObjects[kube.GetResourceKey(sharedObject)] = sharedObject
f := setup(liveObjects)
// Sync with source unspecified
opState := &v1alpha1.OperationState{Operation: v1alpha1.Operation{
@@ -253,7 +276,7 @@ func TestAppStateManager_SyncAppState(t *testing.T) {
// then
assert.Equal(t, common.OperationFailed, opState.Phase)
assert.Contains(t, opState.Message, syncErrorMsg)
assert.Contains(t, opState.Message, "ConfigMap/configmap1 is part of applications fake-argocd-ns/my-app and another-app")
})
}

controller/utils/log.go

@@ -0,0 +1,17 @@
package utils
import (
"github.com/sirupsen/logrus"
"github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
)
// GetAppLog returns a logrus entry with fields set for the given application.
func GetAppLog(app *v1alpha1.Application) *logrus.Entry {
return logrus.WithFields(logrus.Fields{
"application": app.Name,
"app-namespace": app.Namespace,
"app-qualified-name": app.QualifiedName(),
"project": app.Spec.Project,
})
}
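A minimal usage sketch (assuming a caller which already has an application object in hand):

```go
package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	"github.com/argoproj/argo-cd/v2/controller/utils"
	"github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
)

func main() {
	app := &v1alpha1.Application{
		ObjectMeta: metav1.ObjectMeta{Name: "my-app", Namespace: "argocd"},
	}
	// The returned entry carries the app's identifying fields, so every
	// subsequent log line is tagged consistently.
	logCtx := utils.GetAppLog(app)
	logCtx.Info("processing application")
}
```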


@@ -184,12 +184,11 @@ the argocd-secret with key 'some.argocd.secret.key'.
If provided, and multiple services are configured, will have to match
the application destination name or server to have requests properly
forwarded to this service URL. If there are multiple backends for the
same extension this field is required. In this case at least one of
the two will be required: name or server. It is better to provide both
values to avoid problems with applications unable to send requests to
the proper backend service. If only one backend service is
configured, this field is ignored, and all requests are forwarded to
the configured one.
same extension this field is required. In this case, it is necessary
to provide both values to avoid problems with applications unable to
send requests to the proper backend service. If only one backend
service is configured, this field is ignored, and all requests are
forwarded to the configured one.
#### `extensions.backend.services.cluster.name` (*string*)
(optional)


@@ -9,6 +9,9 @@ data:
# Repo server address. (default "argocd-repo-server:8081")
repo.server: "argocd-repo-server:8081"
# Commit server address. (default "argocd-commit-server:8086")
commit.server: "argocd-commit-server:8086"
# Redis server hostname and port (e.g. argocd-redis:6379)
redis.server: "argocd-redis:6379"
# Enable compression for data sent to Redis with the required compression algorithm. (default 'gzip')
@@ -16,6 +19,9 @@ data:
# Redis database
redis.db:
# Enables the alpha "manifest hydrator" feature. (default "false")
hydrator.enabled: "false"
# Open-Telemetry collector address: (e.g. "otel-collector:4317")
otlp.address: ""
# Open-Telemetry collector insecure: (e.g. "true")
@@ -79,6 +85,13 @@ data:
controller.diff.server.side: "false"
# Enables profile endpoint on the internal metrics port
controller.profile.enabled: "false"
# Enables batch-processing mode in the controller's cluster cache. This can help improve performance for clusters that
# have high "churn," i.e. lots of resource modifications.
controller.cluster.cache.batch.events.processing: "false"
# This sets the interval at which the controller's cluster cache processes a batch of cluster events. A lower value
# will increase the speed at which Argo CD becomes aware of external cluster state. A higher value will reduce cluster
# cache lock contention and better handle high-churn clusters.
controller.cluster.cache.events.processing.interval: "100ms"
## Server properties
# Listen on given address for incoming connections (default "0.0.0.0")
@@ -195,6 +208,15 @@ data:
# Include hidden directories from Git
reposerver.include.hidden.directories: "false"
## Commit-server properties
# Listen on given address for incoming connections (default "0.0.0.0")
commitserver.listen.address: "0.0.0.0"
# Set the logging format. One of: text|json (default "text")
commitserver.log.format: "text"
# Set the logging level. One of: debug|info|warn|error (default "info")
commitserver.log.level: "info"
# Listen on given address for metrics (default "0.0.0.0")
commitserver.metrics.listen.address: "0.0.0.0"
# Set the logging format. One of: text|json (default "text")
dexserver.log.format: "text"


@@ -15,7 +15,7 @@ to indicate their stability and maturity. These are the statuses of non-stable f
## Overview
| Feature | Introduced | Status |
| ----------------------------------------- | ---------- | ------ |
|-------------------------------------------|------------|--------|
| [AppSet Progressive Syncs][2] | v2.6.0 | Alpha |
| [Proxy Extensions][3] | v2.7.0 | Alpha |
| [Skip Application Reconcile][4] | v2.7.0 | Alpha |
@@ -25,6 +25,7 @@ to indicate their stability and maturity. These are the statuses of non-stable f
| [Server Side Diff][8] | v2.10.0 | Beta |
| [Cluster Sharding: consistent-hashing][9] | v2.12.0 | Alpha |
| [Service Account Impersonation][10] | v2.13.0 | Alpha |
| [Source Hydrator][11] | v2.14.0 | Alpha |
## Unstable Configurations
@@ -83,3 +84,4 @@ to indicate their stability and maturity. These are the statuses of non-stable f
[8]: ../user-guide/diff-strategies.md#server-side-diff
[9]: ./high_availability.md#argocd-application-controller
[10]: app-sync-using-impersonation.md
[11]: ../user-guide/source-hydrator.md


@@ -130,6 +130,15 @@ stringData:
count (grouped by k8s api version, the granule of parallelism for list operations). In this case, all resources will
be buffered in memory -- no api server request will be blocked by processing.
* `ARGOCD_CLUSTER_CACHE_BATCH_EVENTS_PROCESSING` - environment variable that enables the controller to collect events
for Kubernetes resources and process them in a batch. This is useful when the cluster contains a large number of resources,
and the controller is overwhelmed by the number of events. The default value is `false`, which means that the controller
processes events one by one.
* `ARGOCD_CLUSTER_CACHE_EVENTS_PROCESSING_INTERVAL` - environment variable controlling the interval at which batches of events are processed.
The value must be a valid Go duration string, e.g. `1ms`, `1s`, `1m`, `1h`. The default value is `100ms`.
The variable is used only when `ARGOCD_CLUSTER_CACHE_BATCH_EVENTS_PROCESSING` is set to `true`.
* `ARGOCD_APPLICATION_TREE_SHARD_SIZE` - environment variable controlling the max number of resources stored in one Redis
key. Splitting application tree into multiple keys helps to reduce the amount of traffic between the controller and Redis.
The default value is 0, which means that the application tree is stored in a single Redis key. The reasonable value is 100.


@@ -24,6 +24,8 @@ Metrics about applications. Scraped at the `argocd-metrics:8082/metrics` endpoin
| `argocd_kubectl_exec_total` | counter | Number of kubectl executions |
| `argocd_redis_request_duration` | histogram | Redis requests duration. |
| `argocd_redis_request_total` | counter | Number of redis requests executed during application reconciliation |
| `argocd_resource_events_processing` | histogram | Time taken to process a batch of resource events, in seconds |
| `argocd_resource_events_processed_in_batch` | gauge | Number of resource events processed in the last batch |
If your Argo CD instance handles a lot of application and project creation and deletion,
the metrics page will keep your application and project history in its cache.


@@ -57,7 +57,7 @@ kind: Secret
metadata:
name: argocd-notifications-secret
stringData:
sampleWebhookToken: secret-token
sampleWebhookToken: secret-token
type: Opaque
```
@@ -112,7 +112,7 @@ You can change the timezone to show in notifications as follows.
## Functions
Templates have access to the set of built-in functions:
Templates have access to a set of built-in functions, including those provided by the [Sprig](https://masterminds.github.io/sprig/) package:
```yaml
apiVersion: v1


@@ -130,9 +130,9 @@ p, example-user, applications, delete/*/Pod/*/*, default/prod-app, allow
Argo CD RBAC does not use `/` as a separator when evaluating glob patterns. So the pattern `delete/*/kind/*`
will match `delete/<group>/kind/<namespace>/<name>` but also `delete/<group>/<kind>/kind/<name>`.
The fact that both of these match will generally not be a problem, because resource kinds generally contain capital
letters, and namespaces cannot contain capital letters. However, it is possible for a resource kind to be lowercase.
So it is better to just always include all the parts of the resource in the pattern (in other words, always use four
The fact that both of these match will generally not be a problem, because resource kinds generally contain capital
letters, and namespaces cannot contain capital letters. However, it is possible for a resource kind to be lowercase.
So it is better to just always include all the parts of the resource in the pattern (in other words, always use four
slashes).
If we want to grant access to the user to update all resources of an application, but not the application itself:
@@ -148,9 +148,9 @@ p, example-user, applications, delete, default/prod-app, deny
p, example-user, applications, delete/*/Pod/*/*, default/prod-app, allow
```
!!! note
!!! note "Disable Application permission Inheritance"
It is not possible to deny fine-grained permissions for a sub-resource if the action was **explicitly allowed on the application**.
By default, it is not possible to deny fine-grained permissions for a sub-resource if the action was **explicitly allowed on the application**.
For instance, the following policies will **allow** a user to delete the Pod and any other resources in the application:
```csv
@@ -158,6 +158,20 @@ p, example-user, applications, delete/*/Pod/*/*, default/prod-app, allow
p, example-user, applications, delete/*/Pod/*/*, default/prod-app, deny
```
To change this behavior, you can set the config value
`server.rbac.disableApplicationFineGrainedRBACInheritance` to `true` in
the Argo CD ConfigMap `argocd-cm`.
When inheritance is disabled, it is now possible to deny fine-grained permissions for a sub-resource
if the action was **explicitly allowed on the application**.
For instance, if we want to explicitly allow updates to the application, but deny updates to any sub-resources:
```csv
p, example-user, applications, update, default/prod-app, allow
p, example-user, applications, update/*, default/prod-app, deny
```
#### The `action` action
The `action` action corresponds to either built-in resource customizations defined


@@ -27,6 +27,7 @@ argocd-application-controller [flags]
--client-certificate string Path to a client certificate file for TLS
--client-key string Path to a client key file for TLS
--cluster string The name of the kubeconfig cluster to use
--commit-server string Commit server address. (default "argocd-commit-server:8086")
--context string The name of the kubeconfig context to use
--default-cache-expiration duration Cache expiration default (default 24h0m0s)
--disable-compression If true, opt-out of response compression for all requests to the server
@@ -34,6 +35,7 @@ argocd-application-controller [flags]
--enable-k8s-event none Enable ArgoCD to use k8s event. For disabling all events, set the value as none. (e.g --enable-k8s-event=none), For enabling specific events, set the value as `event reason`. (e.g --enable-k8s-event=StatusRefreshed,ResourceCreated) (default [all])
--gloglevel int Set the glog logging level
-h, --help help for argocd-application-controller
--hydrator-enabled Feature flag to enable Hydrator. Default ("false")
--ignore-normalizer-jq-execution-timeout-seconds duration Set ignore normalizer JQ execution timeout
--insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
--kubeconfig string Path to a kube config. Only required if out-of-cluster
@@ -68,6 +70,7 @@ argocd-application-controller [flags]
--repo-server-timeout-seconds int Repo server RPC call timeout seconds. (default 60)
--request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
--self-heal-backoff-cap-seconds int Specifies max timeout of exponential backoff between application self heal attempts (default 300)
--self-heal-backoff-cooldown-seconds int Specifies period of time the app needs to stay synced before the self heal backoff can reset (default 330)
--self-heal-backoff-factor int Specifies factor of exponential timeout between application self heal attempts (default 3)
--self-heal-backoff-timeout-seconds int Specifies initial timeout of exponential backoff between self heal attempts (default 2)
--self-heal-timeout-seconds int Specifies timeout between application self heal attempts


@@ -55,6 +55,7 @@ argocd-server [flags]
--enable-proxy-extension Enable Proxy Extension feature
--gloglevel int Set the glog logging level
-h, --help help for argocd-server
--hydrator-enabled Feature flag to enable Hydrator. Default ("false")
--insecure Run server without TLS
--insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
--kubeconfig string Path to a kube config. Only required if out-of-cluster


@@ -1,2 +1,5 @@
This page is populated for released Argo CD versions. Use the version selector to view this table for a specific
version.
| Argo CD version | Kubernetes versions |
|-----------------|---------------------|
| 2.14 | v1.31, v1.30, v1.29, v1.28 |
| 2.13 | v1.30, v1.29, v1.28, v1.27 |
| 2.12 | v1.29, v1.28, v1.27, v1.26 |


@@ -0,0 +1,14 @@
# v2.13 to 2.14
## Upgraded Helm Version
Helm was upgraded to 3.16.2, and the `skipSchemaValidation` flag was added to
the [CLI and Application CR](https://argo-cd.readthedocs.io/en/latest/user-guide/helm/#helm-skip-schema-validation).
## Breaking Changes
### Sanitized project API response
For security reasons ([GHSA-786q-9hcg-v9ff](https://github.com/argoproj/argo-cd/security/advisories/GHSA-786q-9hcg-v9ff)),
the project API response was sanitized to remove sensitive information. This includes
credentials of project-scoped repositories and clusters.


@@ -0,0 +1,538 @@
---
title: Manifest Hydrator
authors:
- "@crenshaw-dev"
- "@zachaller"
sponsors:
- TBD # List all interested parties here.
reviewers:
- TBD
approvers:
- TBD
creation-date: 2024-03-26
last-updated: 2024-03-26
---
# Manifest Hydrator
This proposal describes making manifest hydration (i.e. the "rendered manifests" pattern) a first-class feature of Argo CD.
## Terms
* dry manifests: DRY ("Don't Repeat Yourself") sources - things like Kustomize overlays and Helm charts that produce Kubernetes manifests but are not themselves Kubernetes manifests
* hydrated manifests: the output from dry manifest tools, i.e. plain Kubernetes manifests
## Summary
Manifest hydration tools like Helm and Kustomize are indispensable in GitOps. These tools transform "dry" (Don't Repeat Yourself) sources into plain Kubernetes manifests. The effects of a change to dry sources are not always obvious. So storing only dry sources in git leaves the user with an incomplete and confusing history of their application. This undercuts some of the main benefits of GitOps.
The "rendered manifests" pattern has emerged as a way to mitigate the downsides of using hydration tools in GitOps. Today, developers use CI tools to automatically hydrate manifests and push to separate branches. They then configure Argo CD to deploy from the hydrated branches. (For more information, see the awesome [blog post](https://akuity.io/blog/the-rendered-manifests-pattern/) and [ArgoCon talk](https://www.youtube.com/watch?v=TonN-369Qfo) by Nicholas Morey.)
This proposal describes manifest hydration and pushing to git as a first-class feature of Argo CD.
It offers two modes of operation: push-to-deploy and push-to-stage. In push-to-deploy, hydrated manifests are pushed to the same branch from which Argo CD deploys. In push-to-stage, manifests are pushed to a different branch, and Argo CD relies on some external system to move changes to the deployment branch; this provides an integration point for automated environment promotion systems.
### Opinions
This proposal is opinionated. It is based on the belief that, in order to reap the full benefits of GitOps, every change to an application's desired state must originate from a commit to a single GitOps repository. In other words, the full history of the application's desired state must be visible as the commit history on a git repository.
This requirement is incompatible with tooling which injects nondeterministic configuration into the desired state before it is deployed by the GitOps controller. Examples of nondeterministic external configuration are:
1) Helm chart dependencies on unpinned chart versions
2) Kustomize remote bases to unpinned git revisions
3) Config tool parameter overrides in the Argo CD Application `spec.source` fields
4) Multiple sources referenced in the same application (knowledge of combination of source versions is held externally to git)
Injecting nondeterministic configuration makes it impossible to know the complete history of an application by looking at a git branch history. Even if the nondeterministic output is recorded somewhere (for example, in a hydrated source branch in git), it is impossible for developers to confidently make changes to desired state, because they cannot know ahead of time what other configuration will be injected at deploy time.
We believe that the problems of injecting external configuration are best solved by asking these two questions:
1) Does the configuration belong in the developer's interface (i.e. the dry manifests)?
2) Does the configuration need to be mutable at runtime, or only at deploy time?
If the configuration belongs in the developer's interface, write a tool to push the information to git. Image tags are a good example of such configuration, and the Argo CD Image Updater is a good example of such tooling.
If the configuration doesn't belong in the developer's interface, and it needs to be updated at runtime, write a controller. The developer shouldn't be expected to maintain configuration which is not an immediate part of their desired state. An example would be an auto-sizing controller which eliminates the need for the developer to manage their own autoscaler config.
If the configuration doesn't belong in the developer's interface and doesn't need to be updated at runtime (only at deploy time), write a mutating webhook. This is a great option for injecting cluster-specific configuration that the developer doesn't need to directly control.
With these three options available (git-pushers, controllers, and mutating webhooks), we believe that it is not generally necessary to inject nondeterministic configuration into the manifest hydration process. Instead, we can have a full history of the developer's minimal intent (dry branch) and the full expression of that intent (hydrated branch) completely recorded in a series of commits on a git branch.
By respecting these limitations, we unlock the ability to manage change promotion/reversion entirely via git. Change lineage is fully represented as a series of dry commit hashes. This makes it possible to write reliable rules around how these hashes are promoted to different environments and how they are reverted (i.e. we can meaningfully say "`prod` may never be more than one dry hash ahead of `test`"). If information about the lineage of an application is scattered among multiple sources, it is difficult or even impossible to meaningfully define rules about how one environment's lineage must relate to that of another environment.
Being opinionated unlocks the full benefits of GitOps as well as the ability to build a reasonable, reliable preview/promotion/reversion system.
These opinions will lock out use cases where configuration injection cannot be avoided by writing git-pushers, controllers, or mutating webhooks. We believe that the benefits of making an opinionated system outweigh the costs of compromising those opinions.
## Motivation
Many organizations have implemented their own manifest hydration system. By implementing it in Argo CD, we can lower the cost to our users of maintaining those systems, and we can encourage best practices related to the pattern.
### Goals
1) Make manifest hydration easy and intuitive for Argo CD users
2) Make it possible to implement a promotion system which relies on the manifest hydration's push-to-stage mode
3) Emphasize maintaining as much of the system's state as possible in git rather than in the Application CR (e.g. source hydrator config values, such as Helm values)
4) Every deployed change must have a corresponding dry commit - i.e. git is always the source of any changes
5) Developers should be able to easily reproduce the manifest hydration process locally, i.e. by running some commands
#### Hydration Reproducibility
One goal of this proposal is to make hydration reproducibility easy. Reproducibility brings a couple of benefits: easy iteration/debugging and reliable previews.
##### Easy Iteration/Debugging
The hydration system should enable developers to easily reproduce the hydration process locally. The developer should be able to run a short series of commands and perform the exact same tasks that Argo CD would take to hydrate their manifests. This allows the developer to verify that Argo CD is behaving as expected and to quickly tweak inputs and see the results. This lets them iterate quickly and improves developer satisfaction and change velocity.
To provide this experience, the hydrator needs to provide the developer with a few pieces of information:
1) The input repo URL, path, and commit SHA
2) The hydration tool CLI version(s) (for example, the version of the Helm CLI used for hydration)
3) A series of commands and arguments which the developer can run locally
Equipped with this information, the developer can perform the exact same steps as Argo CD and be confident that their dry manifest changes will produce the desired output.
Ensuring that hydration is deterministic assures the developer that the output for a given dry state will be the same next week as it is today.
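As a purely illustrative sketch (not part of this proposal's API), the three pieces of information above could be captured in a small metadata record committed alongside the hydrated manifests; the type and field names here are hypothetical:

```go
package hydrator

// HydrationMetadata is a hypothetical record committed next to the hydrated
// manifests so that a developer can reproduce the hydration locally.
type HydrationMetadata struct {
	RepoURL      string            `json:"repoURL"`      // dry source repository URL
	Path         string            `json:"path"`         // path of the dry source within the repo
	DrySHA       string            `json:"drySHA"`       // exact dry commit that was hydrated
	ToolVersions map[string]string `json:"toolVersions"` // e.g. {"helm": "v3.16.2"}
	Commands     []string          `json:"commands"`     // commands a developer can run to reproduce the output
}
```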
###### Avoiding Esoteric Behavior
We should avoid the developer needing to know Argo CD-specific behavior in order to reproduce hydration. Tools like Helm, Kustomize, etc. have excellent public-facing documentation which the developer should be able to take advantage of without needing to know the quirks of Argo CD.
##### Reliable Previews
Deterministic hydration output allows Argo CD to produce a reliable change preview when a developer proposes a change to the dry manifests via a PR.
If output is not deterministic, then a preview generated today might not be valid/correct a week, day, or even hour later. Non-determinism makes it so that developers can't trust that the change they review will be the change actually applied.
### Non-Goals
1) Implementing a change promotion system
## Open Questions
* The `sourceHydrator` field is mutually exclusive with the `source` and `sources` fields. Should we throw an error if more than one is configured, or should we just pick one and ignore the others?
* How will/should this feature relate to the image updater? Is there an opportunity to share code, since both tools involve pushing to git?
* Should we enforce a naming convention for hydrated manifest branches, e.g. `argo/...`? This would make it easier to recommend branch protection rules, for example, only allowing pushes to `argo/*` from the Argo bot.
* Should we enforce setting a `sourceHydrator.syncSource.path` to something besides `.`? Setting a path makes it easier to add/remove other apps later if desired.
## Proposal
Today, Argo CD watches one or more git repositories (configured in the `spec.source` or `spec.sources` field). When a new commit appears, Argo CD updates the desired state by rendering the manifests with the configured manifest hydration tool. If auto-sync is enabled, Argo CD applies the new manifests to the cluster.
With the introduction of this change, Argo CD will watch two revisions in the same git repository: the first is the "dry source," i.e. the git repo/revision where the un-rendered manifests reside, and the second is the "hydrated source," where the rendered manifests are placed and from which they are retrieved for syncing to the cluster.
### New `spec.sourceHydrator` Application Field
A `sourceHydrator` field will be added to the Argo CD Application spec:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example
spec:
  # The sourceHydrator field is mutually exclusive with `source` and with `sources`. If this field is configured, we
  # should either throw an error or ignore the other two.
  sourceHydrator:
    drySource:
      repoURL: https://github.com/argoproj/argocd-example-apps
      targetRevision: main
      # This assumes the Application's environments are modeled as directories.
      path: environments/e2e
    syncSource:
      targetBranch: environments/e2e
      path: .
    # The hydrateTo field is optional. If specified, Argo CD will write hydrated manifests to this branch instead of
    # the syncSource.targetBranch. This allows the user to "stage" a hydrated commit before actually deploying the
    # changes by merging them into the syncSource branch. A complete change promotion system can be built around this
    # feature.
    hydrateTo:
      targetBranch: environments/e2e-next
      # The path is assumed to be the same as that in syncSource.
```
When the Argo CD application controller detects a new commit on the `drySource`, it will queue up the hydration process.
When the application controller detects a new (hydrated) commit on the `syncSource.targetBranch`, it will sync the manifests.
### Processing a New Dry Commit
On noticing a new dry commit, Argo CD will first collect all Applications which have the same `drySource` repo and targetRevision.
Argo CD will then group those Applications by the configured `syncSource` targetBranch.
```go
package hydrator

import "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"

// DrySource identifies the repo/revision holding the un-rendered manifests.
type DrySource struct {
	repoURL        string
	targetRevision string
}

// SyncSource identifies the branch to which hydrated manifests are pushed.
type SyncSource struct {
	targetBranch string
}

// appGroups buckets Applications first by dry source, then by target branch,
// so that each bucket can be hydrated and committed together.
var appGroups map[DrySource]map[SyncSource][]v1alpha1.Application
```
Then Argo CD will loop over the groups. For each group, it will run manifest hydration on the configured `drySource.path` of each app and write the result to the configured `syncSource.path`. After looping over all apps in the group and writing all their manifests, it will commit the changes to the configured `syncSource` repoURL and targetBranch (or, if configured, the `hydrateTo` targetBranch). Finally, it will push those changes to git. Then it will repeat this process for the remaining groups.
The actual push operation should be delegated to the [commit server](./manifest-hydrator/commit-server/README.md).
To understand how this would work for a simple dev/test/prod setup with two regions, consider this example:
```yaml
### DEV APPS ###
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: dev-west
spec:
  sourceHydrator:
    drySource:
      repoURL: https://github.com/argoproj/argocd-example-apps
      targetRevision: main
      path: environments/dev/west
    syncSource:
      targetBranch: environments/dev
      path: west
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: dev-east
spec:
  sourceHydrator:
    drySource:
      repoURL: https://github.com/argoproj/argocd-example-apps
      targetRevision: main
      path: environments/dev/east
    syncSource:
      targetBranch: environments/dev
      path: east
---
### TEST APPS ###
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: test-west
spec:
  sourceHydrator:
    drySource:
      repoURL: https://github.com/argoproj/argocd-example-apps
      targetRevision: main
      path: environments/test/west
    syncSource:
      targetBranch: environments/test
      path: west
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: test-east
spec:
  sourceHydrator:
    drySource:
      repoURL: https://github.com/argoproj/argocd-example-apps
      targetRevision: main
      path: environments/test/east
    syncSource:
      targetBranch: environments/test
      path: east
---
### PROD APPS ###
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: prod-west
spec:
  sourceHydrator:
    drySource:
      repoURL: https://github.com/argoproj/argocd-example-apps
      targetRevision: main
      path: environments/prod/west
    syncSource:
      targetBranch: environments/prod
      path: west
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: prod-east
spec:
  sourceHydrator:
    drySource:
      repoURL: https://github.com/argoproj/argocd-example-apps
      targetRevision: main
      path: environments/prod/east
    syncSource:
      targetBranch: environments/prod
      path: east
```
Each commit to the dry branch will result in a commit to up to three branches. Each commit to an environment branch will contain changes for west, east, or both (depending on which is affected). Changes originating from a single dry commit are always grouped into a single hydrated commit.
### Handling External Values Files
Since only one source may be used as the dry source, the multi-source approach to external Helm values files will not work here. Instead, we'll recommend that users use the umbrella chart approach. The main reasons for preferring multi-source were convenience (no need to maintain the parent chart) and resolving issues with authentication to dependency charts. We believe the simplification is worth the cost in convenience, and we can address the auth issues as standalone bugs.
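To sketch the umbrella chart approach (the chart names, versions, and repository URL below are illustrative, not part of this proposal), the dry source would contain a thin parent chart that declares the real chart as a dependency, with the external values folded into the parent chart's `values.yaml`:
```yaml
# Chart.yaml for a hypothetical umbrella chart
apiVersion: v2
name: my-app-umbrella
version: 0.1.0
dependencies:
  - name: my-upstream-chart
    version: 32.1.12
    repository: https://charts.example.com
---
# values.yaml — values for the dependency are nested under its name
my-upstream-chart:
  replicaCount: 3
```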
An earlier iteration of this proposal attempted to preserve the multi-source style of external value file inclusion by introducing a "magic" `.argocd-hydrator.yaml` file containing `additionalSources` to reference the Helm chart. In the end, it felt like we were re-implementing Helm's dependencies feature or git submodules. It's better to just rely on one of those existing tools.
### `.argocd-source.yaml` Support
The `spec.sourceHydrator.drySource` field contains only three fields: `repoURL`, `targetRevision`, and `path`.
`spec.source` contains a number of fields for configuring manifest hydration tools (`helm`, `kustomize`, and `directory`). That functionality is still available for `spec.sourceHydrator`. But instead of being configured in the Application CR, those values are set in `.argocd-source.yaml`, an existing "override" mechanism for `spec.source`. By requiring that this configuration be set in `.argocd-source.yaml`, we respect the principle that all changes must be made in git instead of in the Application CR.
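For illustration, a `.argocd-source.yaml` committed at the `drySource.path` might look like the following sketch (the image name and parameter are hypothetical; the available fields mirror the tool-specific fields of `spec.source`):
```yaml
# .argocd-source.yaml
kustomize:
  images:
    - my-app=registry.example.com/my-app:v0.0.2
helm:
  parameters:
    - name: image.tag
      value: v0.0.2
```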
### `spec.destination.namespace` Behavior
The Application `spec.destination.namespace` field is used to set the `metadata.namespace` field of any namespaced resources for which that field is not set in the manifests.
The hydrator will not inject `metadata.namespace` into the hydrated manifests pushed to git. Instead, Argo CD will continue its current behavior of injecting that value immediately before applying the manifests from the `spec.sourceHydrator.syncSource` to the cluster.
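To illustrate (the resource name here is hypothetical), a manifest committed to the hydrated branch stays namespace-free, and the namespace is filled in only at apply time:
```yaml
# west/manifest.yaml as committed to git — no metadata.namespace is injected
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  # At apply time, Argo CD sets metadata.namespace from the Application's
  # spec.destination.namespace, just as it does today for spec.source.
```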
### Build Environment Support
For sources specified in `spec.source` or `spec.sources`, Argo CD [sets certain environment variables](https://argo-cd.readthedocs.io/en/stable/user-guide/build-environment/) before running the manifest hydration tool.
Some of these environment variables may change independently of the dry source and therefore break the reproducibility of manifest hydration (see the [Opinions](#opinions) section). Therefore, only some environment variables will be populated for the `spec.sourceHydrator` source.
These environment variables will **not** be set:
* `ARGOCD_APP_NAME`
* `ARGOCD_APP_NAMESPACE`
* `KUBE_VERSION`
* `KUBE_API_VERSIONS`
These environment variables will be set because they are commit SHAs and are directly and immutably tied to the dry manifest commit:
* `ARGOCD_APP_REVISION`
* `ARGOCD_APP_REVISION_SHORT`
These environment variables will be set because they are inherently tied to the manifest hydrator configuration. If these fields (set in `spec.sourceHydrator.drySource`) change, we are breaking the connection to the original hydrator configuration anyway:
* `ARGOCD_APP_SOURCE_PATH`
* `ARGOCD_APP_SOURCE_REPO_URL`
* `ARGOCD_APP_SOURCE_TARGET_REVISION`
### Support for Helm-Specific Features
#### App Name / Release Name
By default, Argo CD's `source` and `sources` fields use the Application's name as the release name when hydrating Helm manifests.
To centralize the source of truth when using `spec.sourceHydrator`, the default release name will be an empty string, and any other desired release name should be specified in the `helm.releaseName` field in `.argocd-source.yaml`.
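For example (a sketch; the release name is illustrative):
```yaml
# .argocd-source.yaml
helm:
  releaseName: my-app
```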
#### Kube API Versions
`helm install` supports dynamically reading Kube API versions from the destination cluster to adjust manifest output. `helm template` accepts a list of Kube API versions to simulate the same behavior, and Argo CD's `spec.source` and `spec.sources` fields set those API versions when running `helm template`.
To centralize the source of truth when using `spec.sourceHydrator`, the Kube API versions will not be populated by default.
Instead, a new field will be added to the Application's `spec.source.helm` field:
```yaml
kind: Application
spec:
  source:
    helm:
      apiVersions:
        - admissionregistration.k8s.io/v1/MutatingWebhookConfiguration
        - admissionregistration.k8s.io/v1/ValidatingWebhookConfiguration
        - ... etc.
```
That field will also be available in `.argocd-source.yaml`:
```yaml
helm:
  apiVersions:
    - admissionregistration.k8s.io/v1/MutatingWebhookConfiguration
    - admissionregistration.k8s.io/v1/ValidatingWebhookConfiguration
    - ... etc.
```
So the appropriate way to set Kube API versions for the source hydrator will be to populate the `.argocd-source.yaml` file.
#### Hydrated Environment Branches
Representing the dry manifests of environments as branches has well-documented downsides for developer experience. Specifically, it's toilsome for developers to manage moving changes from one branch to another and avoid drift.
So environments-as-directories has emerged as the standard for good GitOps practices. Change management across directories in a single branch is much easier to perform and reason about.
**This proposal does not suggest using branches to represent the dry manifests of environments.** As a matter of fact, this proposal codifies the current best practice of representing the dry manifests as directories in a single branch.
This proposal recommends using different branches for the _hydrated_ representation of environments only. Using different branches has some benefits:
1) Intuitive grouping of "changes to ship at once" - for example, if you have app-1-east and app-1-west, it makes sense to merge a single hydrated PR to deploy to both of those apps at once
2) Easy-to-read history of a single environment via the commit history
3) Easy comparison between environments using the SCMs' "compare" interfaces
In other words, branches make a very nice _read_ interface for _hydrated_ manifests while preserving the best-practice of using _directories_ for the _write_ interface.
### Commit Metadata
Each output directory should contain two files: `manifest.yaml` and `README.md`. `manifest.yaml` should contain the plain hydrated manifests. The resources should be sorted by namespace, name, group, and kind (in that order).
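For the dev/test/prod example above, the `environments/dev` branch would then contain a tree like this (a sketch based on the example paths):
```
environments/dev (hydrated branch)
├── west
│   ├── manifest.yaml
│   └── README.md
└── east
    ├── manifest.yaml
    └── README.md
```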
The README will be built using the following template:
````gotemplate
{{ if eq (len .applications) 1 }}
{{ $appName := (index .applications 0).metadata.name }}
# {{ $appName }} Manifests
[manifest.yaml](./manifest.yaml) contains the hydrated manifests for the {{ $appName }} application.
{{ end }}
{{ if gt (len .applications) 1 }}
# Manifests for {{ len .applications }} Applications
[manifest.yaml](./manifest.yaml) contains the hydrated manifests for these applications:
{{ range $i, $app := .applications }}
- {{ $app.metadata.name }}
{{ end }}
{{ end }}
These are the details of the most recent change:
* Author: {{ .commitAuthor }}
* Message: {{ .commitMessage }}
* Time: {{ .commitTime }}
To reproduce the manifest hydration, do the following:
```
git clone {{ .repoURL }}
cd {{ .repoName }}
git checkout {{ .dryShortSHA }}
{{ range $i, $command := .commands }}
{{ $command }}
{{ end }}
```
````
This template should be admin-configurable.
Example output might look like this:
````markdown
# dev-west Manifests
[manifest.yaml](./manifest.yaml) contains the hydrated manifests for the dev-west application.
These are the details of the most recent change:
* Author: Michael Crenshaw <michael@example.com>
* Message: chore: bumped image tag to v0.0.2
* Time: 2024-03-27 10:32:04 UTC
To reproduce the manifest hydration, do the following:
```
git clone https://github.com/argoproj/argocd-example-apps
cd argocd-example-apps
git checkout ab2382f
kustomize edit set image my-app:v0.0.2
kustomize build environments/dev/west
```
````
The hydrator will also write a `hydrator.metadata` file containing a JSON representation of all the values available for README templating. This metadata can be used by external systems (e.g. a PR-based promoter system) to generate contextual information about the hydrated manifest's provenance.
```json
{
"commands": ["kustomize edit set image my-app:v0.0.2", "kustomize build ."],
"drySHA": "ab2382f",
"commitAuthor": "Michael Crenshaw <michael@example.com>",
"commitMessage": "chore: bump Helm dependency chart to 32.1.12",
"repoURL": "https://github.com/argoproj/argocd-example-apps"
}
```
To request a commit to the hydrated branch, the application controller will make a call to the CommitManifests service.
A single call will bundle all the changes destined for a given targetBranch.
It's the application controller's job to ensure that the user has write access to the repo before making the call.
```protobuf
// CommitManifests represents the caller's request for some Kubernetes manifests to be pushed to a git repository.
message CommitManifests {
  // repoURL is the URL of the repo we're pushing to. HTTPS or SSH URLs are acceptable.
  required string repoURL = 1;
  // targetBranch is the name of the branch we're pushing to.
  required string targetBranch = 2;
  // drySHA is the full SHA of the "dry commit" from which the manifests were hydrated.
  required string drySHA = 3;
  // commitAuthor is the name of the author of the dry commit.
  required string commitAuthor = 4;
  // commitMessage is the short commit message from the dry commit.
  required string commitMessage = 5;
  // commitTime is the dry commit timestamp.
  required string commitTime = 6;
  // details holds the information about the actual hydrated manifests.
  repeated CommitPathDetails details = 7;
}

// CommitPathDetails represents the details about the manifests to be written to a single directory.
message CommitPathDetails {
  // path is the path to the directory to which these manifests should be written.
  required string path = 1;
  // manifests is a list of JSON documents representing the Kubernetes manifests.
  repeated string manifests = 2;
  // readme is a string which will be written to a README.md alongside the manifest.yaml.
  required string readme = 3;
}

message CommitManifestsResponse {
}
```
### Push access
The hydrator will need to push to the git repository. This will require a secret containing the git credentials.
Write access will be configured via a Kubernetes secret with the following structure:
```yaml
apiVersion: v1
kind: Secret
metadata:
  labels:
    argocd.argoproj.io/secret-type: repository-write
stringData:
  url: 'https://github.com/argoproj/argocd-example-apps'
  githubAppID: '123456'
  githubInstallationID: '123456'
  githubAppPrivateKey: |
    -----
```
### Use cases
#### Use case 1:
An organization with strong requirements around change auditing might enable manifest hydration in order to generate a full history of changes.
#### Use case 2:
### Implementation Details/Notes/Constraints
### Detailed examples
### Security Considerations
This proposal would involve introducing a component capable of pushing to git.
We'll need to consider what git permissions setup to recommend, what security features we should recommend enabling (e.g. branch protection), etc.
We'll also need to consider how to store the git push secrets. It's probable that they'll need to be stored in a namespace separate from the other Argo CD components to provide a bit of extra protection.
### Risks and Mitigations
### Upgrade / Downgrade Strategy
## Drawbacks
## Alternatives


@@ -0,0 +1,44 @@
# Argo CD Manifest Hydrator
Most Argo CD Applications don't directly use plain Kubernetes manifests. They reference a Helm chart or some Kustomize manifests, and then Argo CD transforms those sources into their final form (plain Kubernetes manifests).
Having Argo CD quietly do this transformation behind the scenes is convenient. But it can make it harder for developers to understand the full state of their application, both current and past. Hydrating (also known as "rendering") the sources and pushing the hydrated manifests to git is a common technique to preserve a full history of an Application's state.
Argo CD provides first-class tooling to hydrate manifests and push them to git. This document explains how to take advantage of that tooling.
## Setting up git Push Access
To use Argo CD's source hydration tooling, you have to grant Argo CD push access to all the repositories for apps using the source hydrator.
### Security Considerations
Argo CD stores git push secrets separately from the main Argo CD components and separately from git pull credentials to minimize the possibility of a malicious actor stealing the secrets or hijacking Argo CD components to push malicious changes.
Pushing hydrated manifests to git can improve security by ensuring that all state changes are stored and auditable. If a malicious actor does manage to produce malicious changes in manifests, those changes will be discoverable in git instead of living only in the live cluster state.
You should use your SCM's security mechanisms to ensure that Argo CD can only push to the allowed repositories and branches.
### Adding the Access Credentials
To set up push access, add a secret to the `argocd-push` namespace with the following format:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: argocd-example-apps
  labels:
    # Note that this is "repository-push" instead of "repository". The same secret should never be used for both push
    # and pull access.
    argocd.argoproj.io/secret-type: repository-push
type: Opaque
stringData:
  url: https://github.com/argoproj/argocd-example-apps.git
  username: '****'
  password: '****'
```
Once the secret is available, any Application which has pull access to a given repo will be able to use the source hydration tooling to also push to that repo.
## Using the `sourceHydrator` Field
## Migrating from the `source` or `sources` Field


@@ -0,0 +1,38 @@
# Commit Server
The Argo CD Commit Server provides push access to git repositories for hydrated manifests.
The server exposes a gRPC service which accepts requests to push hydrated manifests to a git repository. This is the interface:
```protobuf
// CommitManifests represents the caller's request for some Kubernetes manifests to be pushed to a git repository.
message CommitManifests {
  // repoURL is the URL of the repo we're pushing to. HTTPS or SSH URLs are acceptable.
  required string repoURL = 1;
  // targetBranch is the name of the branch we're pushing to.
  required string targetBranch = 2;
  // drySHA is the full SHA of the "dry commit" from which the manifests were hydrated.
  required string drySHA = 3;
  // commitAuthor is the name of the author of the dry commit.
  required string commitAuthor = 4;
  // commitMessage is the short commit message from the dry commit.
  required string commitMessage = 5;
  // commitTime is the dry commit timestamp.
  required string commitTime = 6;
  // details holds the information about the actual hydrated manifests.
  repeated CommitPathDetails details = 7;
}

// CommitPathDetails represents the details about the manifests to be written to a single directory.
message CommitPathDetails {
  // path is the path to the directory to which these manifests should be written.
  required string path = 1;
  // manifests is a list of JSON documents representing the Kubernetes manifests.
  repeated string manifests = 2;
  // readme is a string which will be written to a README.md alongside the manifest.yaml.
  required string readme = 3;
}

message CommitManifestsResponse {
}
```


@@ -23,39 +23,39 @@ recent minor releases.
| [install.yaml](master/argocd-iac-install.html) | - | - | - | - |
| [namespace-install.yaml](master/argocd-iac-namespace-install.html) | - | - | - | - |
### v2.13.2
| | Critical | High | Medium | Low |
|---:|:--------:|:----:|:------:|:---:|
| [go.mod](v2.13.2/argocd-test.html) | 1 | 0 | 7 | 2 |
| [ui/yarn.lock](v2.13.2/argocd-test.html) | 0 | 0 | 1 | 0 |
| [dex:v2.41.1](v2.13.2/ghcr.io_dexidp_dex_v2.41.1.html) | 0 | 0 | 0 | 2 |
| [haproxy:2.6.17-alpine](v2.13.2/public.ecr.aws_docker_library_haproxy_2.6.17-alpine.html) | 0 | 0 | 2 | 4 |
| [redis:7.0.15-alpine](v2.13.2/public.ecr.aws_docker_library_redis_7.0.15-alpine.html) | 0 | 0 | 0 | 1 |
| [argocd:v2.13.2](v2.13.2/quay.io_argoproj_argocd_v2.13.2.html) | 0 | 0 | 3 | 10 |
| [redis:7.0.15-alpine](v2.13.2/redis_7.0.15-alpine.html) | 0 | 0 | 0 | 1 |
| [install.yaml](v2.13.2/argocd-iac-install.html) | - | - | - | - |
| [namespace-install.yaml](v2.13.2/argocd-iac-namespace-install.html) | - | - | - | - |
### v2.12.8
| | Critical | High | Medium | Low |
|---:|:--------:|:----:|:------:|:---:|
| [go.mod](v2.12.8/argocd-test.html) | 1 | 0 | 8 | 2 |
| [ui/yarn.lock](v2.12.8/argocd-test.html) | 0 | 0 | 1 | 0 |
| [dex:v2.38.0](v2.12.8/ghcr.io_dexidp_dex_v2.38.0.html) | 0 | 0 | 6 | 7 |
| [haproxy:2.6.17-alpine](v2.12.8/public.ecr.aws_docker_library_haproxy_2.6.17-alpine.html) | 0 | 0 | 2 | 4 |
| [redis:7.0.15-alpine](v2.12.8/public.ecr.aws_docker_library_redis_7.0.15-alpine.html) | 0 | 0 | 0 | 1 |
| [argocd:v2.12.8](v2.12.8/quay.io_argoproj_argocd_v2.12.8.html) | 0 | 0 | 3 | 10 |
| [redis:7.0.15-alpine](v2.12.8/redis_7.0.15-alpine.html) | 0 | 0 | 0 | 1 |
| [install.yaml](v2.12.8/argocd-iac-install.html) | - | - | - | - |
| [namespace-install.yaml](v2.12.8/argocd-iac-namespace-install.html) | - | - | - | - |
### v2.11.12
| | Critical | High | Medium | Low |
|---:|:--------:|:----:|:------:|:---:|
| [go.mod](v2.11.12/argocd-test.html) | 1 | 2 | 9 | 2 |
| [ui/yarn.lock](v2.11.12/argocd-test.html) | 0 | 0 | 1 | 0 |
| [dex:v2.38.0](v2.11.12/ghcr.io_dexidp_dex_v2.38.0.html) | 0 | 0 | 6 | 7 |
| [haproxy:2.6.14-alpine](v2.11.12/haproxy_2.6.14-alpine.html) | 0 | 1 | 7 | 7 |


@@ -456,7 +456,7 @@
<div class="header-wrap">
<h1 class="project__header__title">Snyk test report</h1>
<p class="timestamp">December 8th 2024, 12:23:04 am (UTC+00:00)</p>
<p class="timestamp">December 15th 2024, 12:23:55 am (UTC+00:00)</p>
</div>
<div class="source-panel">
<span>Scanned the following path:</span>
@@ -2861,7 +2861,7 @@
</li>
<li class="card__meta__item">
Line number: 24846
</li>
</ul>


@@ -456,7 +456,7 @@
<div class="header-wrap">
<h1 class="project__header__title">Snyk test report</h1>
<p class="timestamp">December 8th 2024, 12:23:14 am (UTC+00:00)</p>
<p class="timestamp">December 15th 2024, 12:24:05 am (UTC+00:00)</p>
</div>
<div class="source-panel">
<span>Scanned the following path:</span>
@@ -2815,7 +2815,7 @@
</li>
<li class="card__meta__item">
Line number: 2169
</li>
</ul>


@@ -456,7 +456,7 @@
<div class="header-wrap">
<h1 class="project__header__title">Snyk test report</h1>
<p class="timestamp">December 8th 2024, 12:20:56 am (UTC+00:00)</p>
<p class="timestamp">December 15th 2024, 12:21:36 am (UTC+00:00)</p>
</div>
<div class="source-panel">
<span>Scanned the following paths:</span>
@@ -470,7 +470,7 @@
<div class="meta-counts">
<div class="meta-count"><span>7</span> <span>known vulnerabilities</span></div>
<div class="meta-count"><span>26 vulnerable dependency paths</span></div>
<div class="meta-count"><span>2158</span> <span>dependencies</span></div>
<div class="meta-count"><span>2160</span> <span>dependencies</span></div>
</div><!-- .meta-counts -->
</div><!-- .layout-container--short -->
</header><!-- .project__header -->


@@ -7,7 +7,7 @@
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<title>Snyk test report</title>
<meta name="description" content="22 known vulnerabilities found in 43 vulnerable dependency paths.">
<meta name="description" content="23 known vulnerabilities found in 44 vulnerable dependency paths.">
<base target="_blank">
<link rel="icon" type="image/png" href="https://res.cloudinary.com/snyk/image/upload/v1468845142/favicon/favicon.png"
sizes="194x194">
@@ -456,7 +456,7 @@
<div class="header-wrap">
<h1 class="project__header__title">Snyk test report</h1>
<p class="timestamp">December 8th 2024, 12:21:06 am (UTC+00:00)</p>
<p class="timestamp">December 15th 2024, 12:21:47 am (UTC+00:00)</p>
</div>
<div class="source-panel">
<span>Scanned the following paths:</span>
@@ -469,8 +469,8 @@
</div>
<div class="meta-counts">
<div class="meta-count"><span>22</span> <span>known vulnerabilities</span></div>
<div class="meta-count"><span>43 vulnerable dependency paths</span></div>
<div class="meta-count"><span>23</span> <span>known vulnerabilities</span></div>
<div class="meta-count"><span>44 vulnerable dependency paths</span></div>
<div class="meta-count"><span>969</span> <span>dependencies</span></div>
</div><!-- .meta-counts -->
</div><!-- .layout-container--short -->
@@ -479,6 +479,80 @@
<div class="layout-container" style="padding-top: 35px;">
<div class="cards--vuln filter--patch filter--ignore">
<div class="card card--vuln disclosure--not-new severity--critical" data-snyk-test="critical">
<h2 class="card__title">Incorrect Implementation of Authentication Algorithm</h2>
<div class="card__section">
<div class="label label--critical">
<span class="label__text">critical severity</span>
</div>
<hr/>
<ul class="card__meta">
<li class="card__meta__item">
Manifest file: ghcr.io/dexidp/dex:v2.41.1/hairyhenderson/gomplate/v4 <span class="list-paths__item__arrow"></span> /usr/local/bin/gomplate
</li>
<li class="card__meta__item">
Package Manager: golang
</li>
<li class="card__meta__item">
Vulnerable module:
golang.org/x/crypto/ssh
</li>
<li class="card__meta__item">Introduced through:
github.com/hairyhenderson/gomplate/v4@* and golang.org/x/crypto/ssh@v0.24.0
</li>
</ul>
<hr/>
<h3 class="card__section__title">Detailed paths</h3>
<ul class="card__meta__paths">
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/hairyhenderson/gomplate/v4@*
<span class="list-paths__item__arrow"></span>
golang.org/x/crypto/ssh@v0.24.0
</span>
</li>
</ul><!-- .list-paths -->
</div><!-- .card__section -->
<hr/>
<!-- Overview -->
<h2 id="overview">Overview</h2>
<p><a href="https://pkg.go.dev/golang.org/x/crypto/ssh?tab=doc">golang.org/x/crypto/ssh</a> is a SSH client and server</p>
<p>Affected versions of this package are vulnerable to Incorrect Implementation of Authentication Algorithm when the key passed in the last call before a connection is established is assumed to be the key used for authentication. It is not necessarily the authentication key in use, and this allows attackers who can control the key cache by making their own carefully-timed connections to bypass authorization with subsequent legitimate <code>ServerConfig.PublicKeyCallback</code> callbacks.</p>
<p><strong>Note:</strong> The assumed caching behavior of this callback is not documented and is therefore considered human error, but the project maintainers have observed reliance on it for authorization decisions in production. In fact, the assumption is negated in the documentation, which states &quot;A call to this function does not guarantee that the key offered is in fact used to authenticate.&quot; The behavior after upgrading still allows the possibility of an attacker forcing their own key to be the one in the cache when the callback is invoked if the client is using a different authentication method such as <code>PasswordCallback</code>, <code>KeyboardInteractiveCallback</code>, or <code>NoClientAuth</code>. It is therefore recommended to rely on the return values of the connection itself, found in <code>ServerConn.Permissions</code> for further authorization steps.</p>
<h2 id="remediation">Remediation</h2>
<p>Upgrade <code>golang.org/x/crypto/ssh</code> to version 0.31.0 or higher.</p>
<h2 id="references">References</h2>
<ul>
<li><a href="https://github.com/golang/crypto/commit/b4f1988a35dee11ec3e05d6bf3e90b695fbd8909">GitHub Commit</a></li>
<li><a href="https://github.com/golang/go/issues/20094">GitHub Issue</a></li>
<li><a href="https://go.dev/cl/635315">go.dev Commit</a></li>
<li><a href="https://go.dev/issue/70779">go.dev Issue</a></li>
<li><a href="https://groups.google.com/g/golang-announce/c/-nPEi39gI4Q/m/cGVPJCqdAQAJ">Google Groups Forum</a></li>
<li><a href="https://pkg.go.dev/vuln/GO-2024-3321">Go Vulnerability Database</a></li>
</ul>
<hr/>
<div class="cta card__cta">
<p><a href="https://snyk.io/vuln/SNYK-GOLANG-GOLANGORGXCRYPTOSSH-8496611">More about this vulnerability</a></p>
</div>
</div><!-- .card -->
<div class="card card--vuln disclosure--not-new severity--medium" data-snyk-test="medium">
<h2 class="card__title">Insertion of Sensitive Information into Log File</h2>
<div class="card__section">


@@ -456,7 +456,7 @@
<div class="header-wrap">
<h1 class="project__header__title">Snyk test report</h1>
<p class="timestamp">December 8th 2024, 12:21:11 am (UTC+00:00)</p>
<p class="timestamp">December 15th 2024, 12:21:52 am (UTC+00:00)</p>
</div>
<div class="source-panel">
<span>Scanned the following path:</span>


@@ -456,7 +456,7 @@
<div class="header-wrap">
<h1 class="project__header__title">Snyk test report</h1>
<p class="timestamp">December 8th 2024, 12:21:15 am (UTC+00:00)</p>
<p class="timestamp">December 15th 2024, 12:22:00 am (UTC+00:00)</p>
</div>
<div class="source-panel">
<span>Scanned the following paths:</span>


@@ -456,7 +456,7 @@
<div class="header-wrap">
<h1 class="project__header__title">Snyk test report</h1>
<p class="timestamp">December 8th 2024, 12:21:33 am (UTC+00:00)</p>
<p class="timestamp">December 15th 2024, 12:22:20 am (UTC+00:00)</p>
</div>
<div class="source-panel">
<span>Scanned the following paths:</span>
@@ -472,7 +472,7 @@
<div class="meta-counts">
<div class="meta-count"><span>20</span> <span>known vulnerabilities</span></div>
<div class="meta-count"><span>100 vulnerable dependency paths</span></div>
<div class="meta-count"><span>2378</span> <span>dependencies</span></div>
<div class="meta-count"><span>2380</span> <span>dependencies</span></div>
</div><!-- .meta-counts -->
</div><!-- .layout-container--short -->
</header><!-- .project__header -->


@@ -456,7 +456,7 @@
<div class="header-wrap">
<h1 class="project__header__title">Snyk test report</h1>
<p class="timestamp">December 8th 2024, 12:21:38 am (UTC+00:00)</p>
<p class="timestamp">December 15th 2024, 12:22:25 am (UTC+00:00)</p>
</div>
<div class="source-panel">
<span>Scanned the following paths:</span>


@@ -456,7 +456,7 @@
<div class="header-wrap">
<h1 class="project__header__title">Snyk test report</h1>
<p class="timestamp">December 8th 2024, 12:30:13 am (UTC+00:00)</p>
<p class="timestamp">December 15th 2024, 12:31:10 am (UTC+00:00)</p>
</div>
<div class="source-panel">
<span>Scanned the following path:</span>


@@ -456,7 +456,7 @@
<div class="header-wrap">
<h1 class="project__header__title">Snyk test report</h1>
<p class="timestamp">December 8th 2024, 12:30:22 am (UTC+00:00)</p>
<p class="timestamp">December 15th 2024, 12:31:19 am (UTC+00:00)</p>
</div>
<div class="source-panel">
<span>Scanned the following path:</span>


@@ -7,7 +7,7 @@
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<title>Snyk test report</title>
<meta name="description" content="14 known vulnerabilities found in 1075 vulnerable dependency paths.">
<meta name="description" content="15 known vulnerabilities found in 1089 vulnerable dependency paths.">
<base target="_blank">
<link rel="icon" type="image/png" href="https://res.cloudinary.com/snyk/image/upload/v1468845142/favicon/favicon.png"
sizes="194x194">
@@ -456,7 +456,7 @@
<div class="header-wrap">
<h1 class="project__header__title">Snyk test report</h1>
<p class="timestamp">December 8th 2024, 12:28:19 am (UTC+00:00)</p>
<p class="timestamp">December 15th 2024, 12:29:14 am (UTC+00:00)</p>
</div>
<div class="source-panel">
<span>Scanned the following paths:</span>
@@ -467,8 +467,8 @@
</div>
<div class="meta-counts">
<div class="meta-count"><span>14</span> <span>known vulnerabilities</span></div>
<div class="meta-count"><span>1075 vulnerable dependency paths</span></div>
<div class="meta-count"><span>15</span> <span>known vulnerabilities</span></div>
<div class="meta-count"><span>1089 vulnerable dependency paths</span></div>
<div class="meta-count"><span>2041</span> <span>dependencies</span></div>
</div><!-- .meta-counts -->
</div><!-- .layout-container--short -->
@@ -477,6 +477,277 @@
<div class="layout-container" style="padding-top: 35px;">
<div class="cards--vuln filter--patch filter--ignore">
<div class="card card--vuln disclosure--not-new severity--critical" data-snyk-test="critical">
<h2 class="card__title">Incorrect Implementation of Authentication Algorithm</h2>
<div class="card__section">
<div class="label label--critical">
<span class="label__text">critical severity</span>
</div>
<hr/>
<ul class="card__meta">
<li class="card__meta__item">
Manifest file: /argo-cd/argoproj/argo-cd/v2 <span class="list-paths__item__arrow"></span> go.mod
</li>
<li class="card__meta__item">
Package Manager: golang
</li>
<li class="card__meta__item">
Vulnerable module:
golang.org/x/crypto/ssh
</li>
<li class="card__meta__item">Introduced through:
github.com/argoproj/argo-cd/v2@0.0.0 and golang.org/x/crypto/ssh@0.19.0
</li>
</ul>
<hr/>
<h3 class="card__section__title">Detailed paths</h3>
<ul class="card__meta__paths">
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/argoproj/argo-cd/v2@0.0.0
<span class="list-paths__item__arrow"></span>
golang.org/x/crypto/ssh@0.19.0
</span>
</li>
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/argoproj/argo-cd/v2@0.0.0
<span class="list-paths__item__arrow"></span>
golang.org/x/crypto/ssh/knownhosts@0.19.0
<span class="list-paths__item__arrow"></span>
golang.org/x/crypto/ssh@0.19.0
</span>
</li>
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/argoproj/argo-cd/v2@0.0.0
<span class="list-paths__item__arrow"></span>
github.com/go-git/go-git/v5/plumbing/transport/ssh@5.11.0
<span class="list-paths__item__arrow"></span>
golang.org/x/crypto/ssh@0.19.0
</span>
</li>
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/argoproj/argo-cd/v2@0.0.0
<span class="list-paths__item__arrow"></span>
github.com/go-git/go-git/v5/plumbing/transport/ssh@5.11.0
<span class="list-paths__item__arrow"></span>
github.com/skeema/knownhosts@1.2.2
<span class="list-paths__item__arrow"></span>
golang.org/x/crypto/ssh@0.19.0
</span>
</li>
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/argoproj/argo-cd/v2@0.0.0
<span class="list-paths__item__arrow"></span>
github.com/go-git/go-git/v5/plumbing/transport/client@5.11.0
<span class="list-paths__item__arrow"></span>
github.com/go-git/go-git/v5/plumbing/transport/ssh@5.11.0
<span class="list-paths__item__arrow"></span>
golang.org/x/crypto/ssh@0.19.0
</span>
</li>
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/argoproj/argo-cd/v2@0.0.0
<span class="list-paths__item__arrow"></span>
github.com/go-git/go-git/v5/plumbing/transport/ssh@5.11.0
<span class="list-paths__item__arrow"></span>
github.com/skeema/knownhosts@1.2.2
<span class="list-paths__item__arrow"></span>
golang.org/x/crypto/ssh/knownhosts@0.19.0
<span class="list-paths__item__arrow"></span>
golang.org/x/crypto/ssh@0.19.0
</span>
</li>
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/argoproj/argo-cd/v2@0.0.0
<span class="list-paths__item__arrow"></span>
github.com/go-git/go-git/v5/plumbing/transport/client@5.11.0
<span class="list-paths__item__arrow"></span>
github.com/go-git/go-git/v5/plumbing/transport/ssh@5.11.0
<span class="list-paths__item__arrow"></span>
github.com/skeema/knownhosts@1.2.2
<span class="list-paths__item__arrow"></span>
golang.org/x/crypto/ssh@0.19.0
</span>
</li>
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/argoproj/argo-cd/v2@0.0.0
<span class="list-paths__item__arrow"></span>
github.com/go-git/go-git/v5/plumbing/transport/ssh@5.11.0
<span class="list-paths__item__arrow"></span>
github.com/xanzy/ssh-agent@0.3.3
<span class="list-paths__item__arrow"></span>
golang.org/x/crypto/ssh/agent@0.19.0
<span class="list-paths__item__arrow"></span>
golang.org/x/crypto/ssh@0.19.0
</span>
</li>
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/argoproj/argo-cd/v2@0.0.0
<span class="list-paths__item__arrow"></span>
github.com/go-git/go-git/v5@5.11.0
<span class="list-paths__item__arrow"></span>
github.com/go-git/go-git/v5/plumbing/transport/client@5.11.0
<span class="list-paths__item__arrow"></span>
github.com/go-git/go-git/v5/plumbing/transport/ssh@5.11.0
<span class="list-paths__item__arrow"></span>
golang.org/x/crypto/ssh@0.19.0
</span>
</li>
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/argoproj/argo-cd/v2@0.0.0
<span class="list-paths__item__arrow"></span>
github.com/go-git/go-git/v5/plumbing/transport/client@5.11.0
<span class="list-paths__item__arrow"></span>
github.com/go-git/go-git/v5/plumbing/transport/ssh@5.11.0
<span class="list-paths__item__arrow"></span>
github.com/skeema/knownhosts@1.2.2
<span class="list-paths__item__arrow"></span>
golang.org/x/crypto/ssh/knownhosts@0.19.0
<span class="list-paths__item__arrow"></span>
golang.org/x/crypto/ssh@0.19.0
</span>
</li>
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/argoproj/argo-cd/v2@0.0.0
<span class="list-paths__item__arrow"></span>
github.com/go-git/go-git/v5@5.11.0
<span class="list-paths__item__arrow"></span>
github.com/go-git/go-git/v5/plumbing/transport/client@5.11.0
<span class="list-paths__item__arrow"></span>
github.com/go-git/go-git/v5/plumbing/transport/ssh@5.11.0
<span class="list-paths__item__arrow"></span>
github.com/skeema/knownhosts@1.2.2
<span class="list-paths__item__arrow"></span>
golang.org/x/crypto/ssh@0.19.0
</span>
</li>
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/argoproj/argo-cd/v2@0.0.0
<span class="list-paths__item__arrow"></span>
github.com/go-git/go-git/v5/plumbing/transport/client@5.11.0
<span class="list-paths__item__arrow"></span>
github.com/go-git/go-git/v5/plumbing/transport/ssh@5.11.0
<span class="list-paths__item__arrow"></span>
github.com/xanzy/ssh-agent@0.3.3
<span class="list-paths__item__arrow"></span>
golang.org/x/crypto/ssh/agent@0.19.0
<span class="list-paths__item__arrow"></span>
golang.org/x/crypto/ssh@0.19.0
</span>
</li>
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/argoproj/argo-cd/v2@0.0.0
<span class="list-paths__item__arrow"></span>
github.com/go-git/go-git/v5@5.11.0
<span class="list-paths__item__arrow"></span>
github.com/go-git/go-git/v5/plumbing/transport/client@5.11.0
<span class="list-paths__item__arrow"></span>
github.com/go-git/go-git/v5/plumbing/transport/ssh@5.11.0
<span class="list-paths__item__arrow"></span>
github.com/skeema/knownhosts@1.2.2
<span class="list-paths__item__arrow"></span>
golang.org/x/crypto/ssh/knownhosts@0.19.0
<span class="list-paths__item__arrow"></span>
golang.org/x/crypto/ssh@0.19.0
</span>
</li>
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/argoproj/argo-cd/v2@0.0.0
<span class="list-paths__item__arrow"></span>
github.com/go-git/go-git/v5@5.11.0
<span class="list-paths__item__arrow"></span>
github.com/go-git/go-git/v5/plumbing/transport/client@5.11.0
<span class="list-paths__item__arrow"></span>
github.com/go-git/go-git/v5/plumbing/transport/ssh@5.11.0
<span class="list-paths__item__arrow"></span>
github.com/xanzy/ssh-agent@0.3.3
<span class="list-paths__item__arrow"></span>
golang.org/x/crypto/ssh/agent@0.19.0
<span class="list-paths__item__arrow"></span>
golang.org/x/crypto/ssh@0.19.0
</span>
</li>
</ul><!-- .list-paths -->
</div><!-- .card__section -->
<hr/>
<!-- Overview -->
<h2 id="overview">Overview</h2>
<p><a href="https://pkg.go.dev/golang.org/x/crypto/ssh?tab=doc">golang.org/x/crypto/ssh</a> is a SSH client and server</p>
<p>Affected versions of this package are vulnerable to Incorrect Implementation of Authentication Algorithm when the key passed in the last call before a connection is established is assumed to be the key used for authentication. It is not necessarily the authentication key in use, and this allows attackers who can control the key cache by making their own carefully-timed connections to bypass authorization with subsequent legitimate <code>ServerConfig.PublicKeyCallback</code> callbacks.</p>
<p><strong>Note:</strong> The assumed caching behavior of this callback is not documented and is therefore considered human error, but the project maintainers have observed reliance on it for authorization decisions in production. In fact, the assumption is negated in the documentation, which states &quot;A call to this function does not guarantee that the key offered is in fact used to authenticate.&quot; The behavior after upgrading still allows the possibility of an attacker forcing their own key to be the one in the cache when the callback is invoked if the client is using a different authentication method such as <code>PasswordCallback</code>, <code>KeyboardInteractiveCallback</code>, or <code>NoClientAuth</code>. It is therefore recommended to rely on the return values of the connection itself, found in <code>ServerConn.Permissions</code> for further authorization steps.</p>
<h2 id="remediation">Remediation</h2>
<p>Upgrade <code>golang.org/x/crypto/ssh</code> to version 0.31.0 or higher.</p>
<h2 id="references">References</h2>
<ul>
<li><a href="https://github.com/golang/crypto/commit/b4f1988a35dee11ec3e05d6bf3e90b695fbd8909">GitHub Commit</a></li>
<li><a href="https://github.com/golang/go/issues/20094">GitHub Issue</a></li>
<li><a href="https://go.dev/cl/635315">go.dev Commit</a></li>
<li><a href="https://go.dev/issue/70779">go.dev Issue</a></li>
<li><a href="https://groups.google.com/g/golang-announce/c/-nPEi39gI4Q/m/cGVPJCqdAQAJ">Google Groups Forum</a></li>
<li><a href="https://pkg.go.dev/vuln/GO-2024-3321">Go Vulnerability Database</a></li>
</ul>
<hr/>
<div class="cta card__cta">
<p><a href="https://snyk.io/vuln/SNYK-GOLANG-GOLANGORGXCRYPTOSSH-8496611">More about this vulnerability</a></p>
</div>
</div><!-- .card -->
<div class="card card--vuln disclosure--not-new severity--high" data-snyk-test="high">
<h2 class="card__title">Denial of Service (DoS)</h2>
<div class="card__section">


@@ -7,7 +7,7 @@
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<title>Snyk test report</title>
<meta name="description" content="41 known vulnerabilities found in 129 vulnerable dependency paths.">
<meta name="description" content="42 known vulnerabilities found in 130 vulnerable dependency paths.">
<base target="_blank">
<link rel="icon" type="image/png" href="https://res.cloudinary.com/snyk/image/upload/v1468845142/favicon/favicon.png"
sizes="194x194">
@@ -456,7 +456,7 @@
<div class="header-wrap">
<h1 class="project__header__title">Snyk test report</h1>
<p class="timestamp">December 8th 2024, 12:28:28 am (UTC+00:00)</p>
<p class="timestamp">December 15th 2024, 12:29:22 am (UTC+00:00)</p>
</div>
<div class="source-panel">
<span>Scanned the following paths:</span>
@@ -469,8 +469,8 @@
</div>
<div class="meta-counts">
<div class="meta-count"><span>41</span> <span>known vulnerabilities</span></div>
<div class="meta-count"><span>129 vulnerable dependency paths</span></div>
<div class="meta-count"><span>42</span> <span>known vulnerabilities</span></div>
<div class="meta-count"><span>130 vulnerable dependency paths</span></div>
<div class="meta-count"><span>829</span> <span>dependencies</span></div>
</div><!-- .meta-counts -->
</div><!-- .layout-container--short -->
@@ -479,6 +479,80 @@
<div class="layout-container" style="padding-top: 35px;">
<div class="cards--vuln filter--patch filter--ignore">
<div class="card card--vuln disclosure--not-new severity--critical" data-snyk-test="critical">
<h2 class="card__title">Incorrect Implementation of Authentication Algorithm</h2>
<div class="card__section">
<div class="label label--critical">
<span class="label__text">critical severity</span>
</div>
<hr/>
<ul class="card__meta">
<li class="card__meta__item">
Manifest file: ghcr.io/dexidp/dex:v2.38.0/hairyhenderson/gomplate/v3 <span class="list-paths__item__arrow"></span> /usr/local/bin/gomplate
</li>
<li class="card__meta__item">
Package Manager: golang
</li>
<li class="card__meta__item">
Vulnerable module:
golang.org/x/crypto/ssh
</li>
<li class="card__meta__item">Introduced through:
github.com/hairyhenderson/gomplate/v3@* and golang.org/x/crypto/ssh@v0.18.0
</li>
</ul>
<hr/>
<h3 class="card__section__title">Detailed paths</h3>
<ul class="card__meta__paths">
<li>
<span class="list-paths__item__introduced"><em>Introduced through</em>:
github.com/hairyhenderson/gomplate/v3@*
<span class="list-paths__item__arrow"></span>
golang.org/x/crypto/ssh@v0.18.0
</span>
</li>
</ul><!-- .list-paths -->
</div><!-- .card__section -->
<hr/>
<!-- Overview -->
<h2 id="overview">Overview</h2>
<p><a href="https://pkg.go.dev/golang.org/x/crypto/ssh?tab=doc">golang.org/x/crypto/ssh</a> is a SSH client and server</p>
<p>Affected versions of this package are vulnerable to Incorrect Implementation of Authentication Algorithm when the key passed in the last call before a connection is established is assumed to be the key used for authentication. It is not necessarily the authentication key in use, and this allows attackers who can control the key cache by making their own carefully-timed connections to bypass authorization with subsequent legitimate <code>ServerConfig.PublicKeyCallback</code> callbacks.</p>
<p><strong>Note:</strong> The assumed caching behavior of this callback is not documented and is therefore considered human error, but the project maintainers have observed reliance on it for authorization decisions in production. In fact, the assumption is negated in the documentation, which states &quot;A call to this function does not guarantee that the key offered is in fact used to authenticate.&quot; The behavior after upgrading still allows the possibility of an attacker forcing their own key to be the one in the cache when the callback is invoked if the client is using a different authentication method such as <code>PasswordCallback</code>, <code>KeyboardInteractiveCallback</code>, or <code>NoClientAuth</code>. It is therefore recommended to rely on the return values of the connection itself, found in <code>ServerConn.Permissions</code> for further authorization steps.</p>
<h2 id="remediation">Remediation</h2>
<p>Upgrade <code>golang.org/x/crypto/ssh</code> to version 0.31.0 or higher.</p>
<h2 id="references">References</h2>
<ul>
<li><a href="https://github.com/golang/crypto/commit/b4f1988a35dee11ec3e05d6bf3e90b695fbd8909">GitHub Commit</a></li>
<li><a href="https://github.com/golang/go/issues/20094">GitHub Issue</a></li>
<li><a href="https://go.dev/cl/635315">go.dev Commit</a></li>
<li><a href="https://go.dev/issue/70779">go.dev Issue</a></li>
<li><a href="https://groups.google.com/g/golang-announce/c/-nPEi39gI4Q/m/cGVPJCqdAQAJ">Google Groups Forum</a></li>
<li><a href="https://pkg.go.dev/vuln/GO-2024-3321">Go Vulnerability Database</a></li>
</ul>
<hr/>
<div class="cta card__cta">
<p><a href="https://snyk.io/vuln/SNYK-GOLANG-GOLANGORGXCRYPTOSSH-8496611">More about this vulnerability</a></p>
</div>
</div><!-- .card -->
<div class="card card--vuln disclosure--not-new severity--high" data-snyk-test="high">
<h2 class="card__title">Allocation of Resources Without Limits or Throttling</h2>
<div class="card__section">


@@ -456,7 +456,7 @@
<div class="header-wrap">
<h1 class="project__header__title">Snyk test report</h1>
<p class="timestamp">December 8th 2024, 12:28:32 am (UTC+00:00)</p>
<p class="timestamp">December 15th 2024, 12:29:27 am (UTC+00:00)</p>
</div>
<div class="source-panel">
<span>Scanned the following path:</span>

Some files were not shown because too many files have changed in this diff.