Compare commits

...

186 Commits

Author SHA1 Message Date
github-actions[bot]
a70b2293a0 Bump version to 2.13.9 on release-2.13 branch (#24404)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-09-04 13:30:39 -04:00
Michael Crenshaw
b8ac09d326 fix(security): repository.GetDetailedProject exposes repo secrets (#24388)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2025-09-04 13:30:14 -04:00
Michael Crenshaw
bfa724719a fix(server): infer resource status health for apps-in-any-ns (#22944) (#23708)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-07-09 17:07:56 -04:00
Blake Pettersson
df347d0d56 fix: do not normalize resource tracking on live crds (#22722) - cherrypick 2.13 (#22747)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2025-06-16 13:48:07 -04:00
github-actions[bot]
612d345cd9 Bump version to 2.13.8 on release-2.13 branch (#23184)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-05-28 08:45:24 -06:00
Michael Crenshaw
d508e3be50 Merge commit from fork
Fix shadowed variable name

Signed-off-by: Ry0taK <49341894+Ry0taK@users.noreply.github.com>
Co-authored-by: Ry0taK <49341894+Ry0taK@users.noreply.github.com>
2025-05-28 08:20:48 -06:00
Nitish Kumar
6612b7b017 chore: replace heptio-images with argocd-e2e-container (cherry-pick #23040) (#23056)
Signed-off-by: nitishfy <justnitish06@gmail.com>
2025-05-27 14:19:10 +03:00
gcp-cherry-pick-bot[bot]
41ab259bea fix(test): broken e2e test (cherry-pick #22975) (#23053)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-05-20 09:49:36 -04:00
gcp-cherry-pick-bot[bot]
97fd4ac877 fix(appset): generated app errors should use the default requeue (#21887) (cherry-pick #21936) (#22673)
Signed-off-by: rumstead <37445536+rumstead@users.noreply.github.com>
Co-authored-by: rumstead <37445536+rumstead@users.noreply.github.com>
Co-authored-by: Ishita Sequeira <46771830+ishitasequeira@users.noreply.github.com>
Co-authored-by: Blake Pettersson <blake.pettersson@gmail.com>
2025-04-18 11:41:33 -07:00
github-actions[bot]
a33bcbaeeb Bump version to 2.13.7 on release-2.13 branch (#22669)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: alexmt <426437+alexmt@users.noreply.github.com>
2025-04-14 17:36:25 -07:00
gcp-cherry-pick-bot[bot]
0f65a3f46e fix(cli): wrong variable to store --no-proxy value (cherry-pick #21226) (#22591)
Signed-off-by: Nathanael Liechti <technat@technat.ch>
Co-authored-by: Nathanael Liechti <technat@technat.ch>
2025-04-10 07:09:17 -07:00
Atif Ali
4b1180076e chore(deps): update github.com/expr-lang/expr to v1.17.0 (#22610)
Signed-off-by: Atif Ali <atali@redhat.com>
2025-04-09 07:01:44 -04:00
Atif Ali
fb5624c925 chore(deps): update go-jose library from 4.0.2 to 4.0.5 (#22560)
Signed-off-by: Atif Ali <atali@redhat.com>
2025-04-09 06:57:42 -04:00
Atif Ali
bb70a1f4f5 fix: Check placement exists before length check (#22060) (#22057) (#22505)
Signed-off-by: Dale Haiducek <19750917+dhaiducek@users.noreply.github.com>
Signed-off-by: Atif Ali <atali@redhat.com>
Co-authored-by: Dale Haiducek <19750917+dhaiducek@users.noreply.github.com>
2025-03-27 13:33:00 -04:00
github-actions[bot]
accae32b1c Bump version to 2.13.6 on release-2.13 branch (#22474)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-03-24 16:53:08 -04:00
Michael Crenshaw
43f3cff4ca fix(ci): use pinned Helm version for init-release (#22164) (#22472)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-03-24 16:49:15 -04:00
Michael Crenshaw
58ded15863 chore(deps): bump github.com/golang-jwt/jwt to 4.5.2 (#22466)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-03-24 15:07:54 -04:00
Keith Chong
8d0279895c chore: Update change log for 2.13.6 (#22438)
Signed-off-by: Keith Chong <kykchong@redhat.com>
2025-03-21 15:38:55 -04:00
Atif Ali
6207fd0040 fix: handle annotated git tags correctly in repo server cache (#21771) (#22397)
Signed-off-by: Atif Ali <atali@redhat.com>
2025-03-21 10:16:50 -04:00
nmirasch
3875dde5cc fix: CVE-2025-26791 upgrading redoc dep to 2.4.0, DOMPurify before 3.2.4 (#21966)
Signed-off-by: nmirasch <neus.miras@gmail.com>
2025-03-21 07:32:44 -04:00
Anand Francis Joseph
17a535f6d4 fix(server): Fix server crash due to race condition in go-redis triggered by DNS instability (#22251)
Signed-off-by: anandf <anjoseph@redhat.com>
2025-03-10 11:18:58 -04:00
Nitish Kumar
180d6890af chore: cherry-pick #21786 for v2.13 (#21906)
Signed-off-by: nitishfy <justnitish06@gmail.com>
2025-02-28 13:06:19 -05:00
Nitish Kumar
6ef7f61d9a fix: correct lookup for the kustomization file when applying patches (cherry-pick #22024) (#22087)
Signed-off-by: nitishfy <justnitish06@gmail.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-02-28 21:37:35 +05:30
gcp-cherry-pick-bot[bot]
c7937f101c fix: correctly set compareWith when requesting app refresh with delay (fixes #18998) (cherry-pick #21298) (#21953)
Signed-off-by: Xiaonan Shen <s@sxn.dev>
Co-authored-by: Xiaonan Shen <s@sxn.dev>
Co-authored-by: 沈啸楠 <sxn@shenxiaonandeMacBook-Pro.local>
2025-02-23 22:52:07 +05:30
github-actions[bot]
03600ae7ac Bump version to 2.13.5 on release-2.13 branch (#21795)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: leoluz <44989+leoluz@users.noreply.github.com>
2025-02-05 16:25:12 -05:00
gcp-cherry-pick-bot[bot]
c6112c05fe fix: Add proxy registry key by dest server + name (cherry-pick #21791) (#21793)
Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>
Co-authored-by: Leonardo Luz Almeida <leoluz@users.noreply.github.com>
2025-02-05 16:13:45 -05:00
gcp-cherry-pick-bot[bot]
49771c1d4f fix(ui): Solve issue with navigating with dropdown from an application's page (cherry-pick #21737) (#21747)
Signed-off-by: Amit Oren <amit@coralogix.com>
Co-authored-by: Amit Oren <amit@coralogix.com>
2025-02-03 11:14:09 -05:00
github-actions[bot]
102853d31a Bump version to 2.13.4 on release-2.13 branch (#21709)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: ishitasequeira <46771830+ishitasequeira@users.noreply.github.com>
2025-01-29 15:15:49 -05:00
Siddhesh Ghadi
10b9589f1c Merge commit from fork
Signed-off-by: Siddhesh Ghadi <sghadi1203@gmail.com>
2025-01-29 13:41:18 -05:00
Esteban Molina-Estolano
53dc116353 fix: oras-go client should fallback to docker config if no credentials specified (cherry-pick 2.13 #18133) (#20872)
Signed-off-by: Tony Au-Yeung <tony@elevenlabs.io>
Co-authored-by: Tony Au-Yeung <tonyay163@gmail.com>
2025-01-22 10:48:00 -05:00
gcp-cherry-pick-bot[bot]
99aaf43bdb fix: Policy/policy.open-cluster-management.io stuck in progressing status when no clusters match the policy (#21296) (cherry-pick #21297) (#21594)
Signed-off-by: Michele Baldessari <michele@acksyn.org>
Co-authored-by: Michele Baldessari <michele@acksyn.org>
2025-01-21 13:44:55 -05:00
gcp-cherry-pick-bot[bot]
c8a62bb162 docs: add mkdocs configuration stanza to .readthedocs.yaml (cherry-pick #21475) (#21609)
Signed-off-by: reggie-k <regina.voloshin@codefresh.io>
Co-authored-by: Regina Voloshin <regina.voloshin@codefresh.io>
2025-01-21 13:14:06 -05:00
gcp-cherry-pick-bot[bot]
fd67e4970f fix: resolve the failing e2e appset tests for ksonnet applications (cherry-pick #21580) (#21605)
Signed-off-by: reggie-k <regina.voloshin@codefresh.io>
Co-authored-by: Regina Voloshin <regina.voloshin@codefresh.io>
2025-01-21 13:13:17 -05:00
gcp-cherry-pick-bot[bot]
2618ccca2d fix: login return_url doesn't work with custom server paths (cherry-pick #21588) (#21603)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
Co-authored-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2025-01-21 09:25:39 -08:00
Atif Ali
38e02ab9e8 chore(deps): bump go-git version to go-git/v5 5.13.1 (#21551)
Signed-off-by: Atif Ali <atali@redhat.com>
2025-01-20 20:08:16 -05:00
Eadred
2fe4536ed2 fix(appset): events not honouring configured namespaces (#21219) (#21241) (#21520)
* fix: 21219 Honour ARGOCD_APPLICATIONSET_CONTROLLER_NAMESPACES for all ApplicationSet events

Namespace filtering is applied to Update, Delete and Generic events.

Fixes https://github.com/argoproj/argo-cd/issues/21219



* fix: 21219 Add tests for ignoreNotAllowedNamespaces



* fix: 21219 Remove redundant package import



---------

Signed-off-by: eadred <eadred77@googlemail.com>
2025-01-17 10:58:14 -05:00
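A minimal sketch of the namespace filtering described in the commit above, assuming a controller-runtime predicate and an exact-match namespace set; the function and variable names are illustrative, not Argo CD's actual implementation (which also supports glob patterns).

```go
package controller

import (
	"sigs.k8s.io/controller-runtime/pkg/event"
	"sigs.k8s.io/controller-runtime/pkg/predicate"
)

// namespaceFilter applies the same allowed-namespace check to Create, Update,
// Delete and Generic events. "allowed" would be derived from
// ARGOCD_APPLICATIONSET_CONTROLLER_NAMESPACES in this sketch.
func namespaceFilter(allowed map[string]bool) predicate.Funcs {
	inAllowed := func(ns string) bool { return allowed[ns] }
	return predicate.Funcs{
		CreateFunc:  func(e event.CreateEvent) bool { return inAllowed(e.Object.GetNamespace()) },
		UpdateFunc:  func(e event.UpdateEvent) bool { return inAllowed(e.ObjectNew.GetNamespace()) },
		DeleteFunc:  func(e event.DeleteEvent) bool { return inAllowed(e.Object.GetNamespace()) },
		GenericFunc: func(e event.GenericEvent) bool { return inAllowed(e.Object.GetNamespace()) },
	}
}
```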
gcp-cherry-pick-bot[bot]
49163b09b1 Fix application url for custom base href (#21377) (#21515)
Signed-off-by: Amit Oren <amit@coralogix.com>
Co-authored-by: Amit Oren <amit@coralogix.com>
2025-01-16 00:46:32 -05:00
gcp-cherry-pick-bot[bot]
c0f847f301 docs: Update Screenshot in Orphaned Resources Monitoring Section #20510 (cherry-pick #20533) (#21489)
Signed-off-by: jaehanbyun <awbrg789@naver.com>
2025-01-14 11:30:01 -05:00
gcp-cherry-pick-bot[bot]
2e794fbbc5 chore(deps): bump github.com/go-git/go-git/v5 from 5.12.0 to 5.13.0 (cherry-pick #21329) (#21401)
* chore(deps): bump github.com/go-git/go-git/v5 from 5.12.0 to 5.13.0 (#21329)

Bumps [github.com/go-git/go-git/v5](https://github.com/go-git/go-git) from 5.12.0 to 5.13.0.
- [Release notes](https://github.com/go-git/go-git/releases)
- [Commits](https://github.com/go-git/go-git/compare/v5.12.0...v5.13.0)

---
updated-dependencies:
- dependency-name: github.com/go-git/go-git/v5
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* tidy

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-01-13 09:51:16 -05:00
github-actions[bot]
a25c8a0eef Bump version to 2.13.3 (#21359)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2025-01-03 13:39:40 -05:00
gcp-cherry-pick-bot[bot]
c76a131b17 fix: Change applicationset generate HTTP method to avoid route conflicts (#20758) (#21300)
* Change applicationset generate HTTP method to avoid route conflicts



* Update server/applicationset/applicationset.proto




* Codegen



---------

Signed-off-by: Amit Oren <amit@coralogix.com>
Signed-off-by: Amit Oren <github@amitoren.dev>
Co-authored-by: Amit Oren <amit@coralogix.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2024-12-30 11:54:06 +02:00
Michael Crenshaw
64a14a08e0 fix(ui): add optional check to avoid undefined reference in project detail (#20044) (#21263)
Signed-off-by: linghaoSu <linghao.su@daocloud.io>
Co-authored-by: Linghao Su <linghao.su@daocloud.io>
2024-12-19 15:37:36 -05:00
gcp-cherry-pick-bot[bot]
09eede0c17 fix(appset): Fix appset generate in --core mode for cluster gen (#21170) (#21236)
Signed-off-by: OpenGuidou <guillaume.doussin@gmail.com>
Co-authored-by: OpenGuidou <73480729+OpenGuidou@users.noreply.github.com>
2024-12-18 14:48:12 +05:30
gcp-cherry-pick-bot[bot]
f260510f38 fix(api): send to closed channel in mergeLogStreams (#7006) (#21178) (#21187)
* fix(api): send to closed channel in mergeLogStreams (#7006)



* more intense test



* even more intense



* remove unnecessary comment



* fix the race condition



---------

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-12-15 20:58:05 -05:00
adriananeci
079754c639 fix: Populate destination name when destination server is specified (#21063) (cherry-pick 2.13) (#21176)
* fix: Populate destination name when destination server is specified (#21063)

* Populate destination name when destination server is specified

Signed-off-by: Adrian Aneci <aneci@adobe.com>

---------

Signed-off-by: Adrian Aneci <aneci@adobe.com>

* Go lint fixes

Signed-off-by: Adrian Aneci <aneci@adobe.com>

---------

Signed-off-by: Adrian Aneci <aneci@adobe.com>
2024-12-14 14:18:36 +05:30
github-actions[bot]
dc43124058 Bump version to 2.13.2 (#21133)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: ishitasequeira <46771830+ishitasequeira@users.noreply.github.com>
2024-12-11 20:26:41 +05:30
Blake Pettersson
b6af657295 fix: add missing fields in listrepositories (#20991) (#21129)
This fixes a regression in 2.12. Before 2.12, `githubAppInstallationID`,
`githubAppID` and `gitHubAppEnterpriseBaseURL` were returned, but with
2.12 the repo is retrieved with `getRepositories`, which does not
include those fields.

To fix this, add those missing fields to `ListRepositories`.

Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
2024-12-11 12:17:21 +01:00
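A hedged sketch of the kind of mapping where a regression like the one above can occur; the struct and field names here are illustrative, not Argo CD's actual types.

```go
package repository

// Illustrative types only; Argo CD's real Repository type lives elsewhere.
type storedRepository struct {
	Repo                       string
	Type                       string
	GithubAppID                int64
	GithubAppInstallationID    int64
	GitHubAppEnterpriseBaseURL string
	// credentials and other fields omitted
}

type repositoryListItem struct {
	Repo                       string
	Type                       string
	GithubAppID                int64
	GithubAppInstallationID    int64
	GitHubAppEnterpriseBaseURL string
}

// toListItem sketches the fix: when building the list returned to clients,
// the GitHub App fields must be copied onto the list item again.
func toListItem(r storedRepository) repositoryListItem {
	return repositoryListItem{
		Repo:                       r.Repo,
		Type:                       r.Type,
		GithubAppID:                r.GithubAppID,
		GithubAppInstallationID:    r.GithubAppInstallationID,
		GitHubAppEnterpriseBaseURL: r.GitHubAppEnterpriseBaseURL,
	}
}
```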
gcp-cherry-pick-bot[bot]
a3624a3f20 fix: Allow to delete repos with invalid urls (#20921) (#20975) (#21116)
* fix: Allow to delete repos with invalid urls (#20921)

Fixes #20921

Normalization of the repo URL is attempted during repo deletion, before comparison with existing repos. Before this change, it would fail on invalid URLs and return "", resulting in permission denied. This ended up in a state where people could add repos with invalid URLs but couldn't delete them. Now the raw repo URL is returned if URL parsing fails. Add a test to validate that the deletion can be performed, and make sure it fails without the main change.

This needs to be cherry-picked to v2.11-2.13



* Don't modify the original NormalizeGitURL



---------

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>
Co-authored-by: Andrii Korotkov <137232734+andrii-korotkov-verkada@users.noreply.github.com>
2024-12-10 16:28:27 +02:00
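A minimal sketch of the fallback described above, assuming a simplified normalizer rather than Argo CD's actual NormalizeGitURL: if URL parsing fails, compare using the raw repo URL instead of an empty string, so a repo stored with an invalid URL can still be matched and deleted.

```go
package git

import (
	"net/url"
	"strings"
)

// normalizeOrRaw is an illustrative stand-in for a URL normalizer. If the URL
// cannot be parsed, it returns the raw input instead of "" so that comparison
// with an existing (invalid) repo URL still works.
func normalizeOrRaw(rawURL string) string {
	u, err := url.Parse(strings.TrimSpace(rawURL))
	if err != nil {
		return rawURL // fall back to the raw URL on parse failure
	}
	u.Path = strings.TrimSuffix(u.Path, ".git")
	return strings.ToLower(u.String())
}

// sameRepoURL compares two repo URLs using the fallback-aware normalizer.
func sameRepoURL(a, b string) bool {
	return normalizeOrRaw(a) == normalizeOrRaw(b)
}
```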
gcp-cherry-pick-bot[bot]
01ae20d1b3 fix: 20791 - sync multi-source application out of order source syncs (cherry-pick #21071) (#21077)
Signed-off-by: Ishita Sequeira <ishiseq29@gmail.com>
Co-authored-by: Ishita Sequeira <46771830+ishitasequeira@users.noreply.github.com>
2024-12-05 11:14:25 -07:00
gcp-cherry-pick-bot[bot]
89ef3563db fix: Bitbucket Cloud PR Author is processed correctly (#20769) (#20990) (#21039)
Fixes #20769

The author field there is a struct, not a string. Use the nickname from that struct as the author name.

Let's cherry pick to 2.11-2.13

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>
Co-authored-by: Andrii Korotkov <137232734+andrii-korotkov-verkada@users.noreply.github.com>
2024-12-03 17:02:24 +05:30
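A hedged sketch of the payload shape the fix above relies on: in the Bitbucket Cloud response the author is an object, so the PR author name is taken from its nickname field. The structs below model only the fields relevant here and are illustrative, not Argo CD's actual types.

```go
package scm

import "encoding/json"

// bitbucketAuthor models only the fields needed for this sketch; the real
// Bitbucket Cloud payload contains many more.
type bitbucketAuthor struct {
	Nickname    string `json:"nickname"`
	DisplayName string `json:"display_name"`
}

type bitbucketPullRequest struct {
	Title  string          `json:"title"`
	Author bitbucketAuthor `json:"author"` // an object, not a plain string
}

// prAuthor returns the author name the way the fix describes: using the
// nickname from the author object.
func prAuthor(raw []byte) (string, error) {
	var pr bitbucketPullRequest
	if err := json.Unmarshal(raw, &pr); err != nil {
		return "", err
	}
	return pr.Author.Nickname, nil
}
```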
gcp-cherry-pick-bot[bot]
831e4525c3 fix: API server should not attempt to read secrets in all namespaces (#20950) (#20960)
* fix: api server is trying to list cluster wide secrets while generating apps



* Revert "fix: Fix argocd appset generate failure due to missing clusterrole (#20162)"

This reverts commit fad534bcfe.

---------

Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
Co-authored-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2024-11-26 19:43:12 +05:30
gcp-cherry-pick-bot[bot]
f8d6665c67 fix: Memory leak in repo-server (#20876) (#20894)
Signed-off-by: Adam Chandler <adam_chandler@trimble.com>
Co-authored-by: Adam Chandler <31355738+AJChandler@users.noreply.github.com>
2024-11-21 10:08:04 -05:00
gcp-cherry-pick-bot[bot]
0680ddbdf9 chore(deps): bump http-proxy-middleware from 2.0.4 to 2.0.7 in /ui (#20518) (#20892)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-21 08:44:53 -05:00
gcp-cherry-pick-bot[bot]
ad36916ec4 fix(cli): Fix appset generate in --core mode (#20717) (#20883)
Signed-off-by: OpenGuidou <guillaume.doussin@gmail.com>
Co-authored-by: OpenGuidou <73480729+OpenGuidou@users.noreply.github.com>
2024-11-21 17:36:31 +05:30
github-actions[bot]
af54ef8db5 Bump version to 2.13.1 (#20865)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: pasha-codefresh <39732895+pasha-codefresh@users.noreply.github.com>
2024-11-20 17:45:30 +02:00
gcp-cherry-pick-bot[bot]
68606c6caf fix: Fix repeated 403 due to app namespace being undefined (#20699) (#20819) (#20860)
Fixes #20699

The constructor may not get called every time the application changes, so the previous `this.appNamespace` could be stale. But updating it to use `this.props.match.params.appnamespace` could also fail if that value is undefined.
As a fix, create and use a helper function `getAppNamespace` which has special-case handling for undefined.

Also, use the namespaced endpoint when the namespace is not undefined.

It needs to be cherry-picked to v2.11-2.13.

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>
Co-authored-by: Andrii Korotkov <137232734+andrii-korotkov-verkada@users.noreply.github.com>
2024-11-20 13:13:11 +02:00
Cayde6
6a8cb6eff0 feat: option to disable writing k8s events(#18205) (#18441) (#20788)
* feat: option to disable writing k8s events

Adds options controlling whether Kubernetes events are written.
Each is passed as an environment variable and defaults to true;
disabling it requires explicitly setting the option to false.

---------

Signed-off-by: Jack-R-lantern <tjdfkr2421@gmail.com>
2024-11-14 13:07:04 +05:30
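A minimal sketch of the env-var pattern the commit above describes, each switch defaulting to true and only turning off when explicitly set to false; the variable name below is illustrative, not the real Argo CD setting.

```go
package env

import (
	"os"
	"strconv"
)

// boolFromEnv returns true unless the variable is explicitly set to a value
// that parses as false (e.g. "false" or "0"), matching the default-on
// behaviour described above.
func boolFromEnv(name string) bool {
	v, ok := os.LookupEnv(name)
	if !ok {
		return true // default: events are written
	}
	b, err := strconv.ParseBool(v)
	if err != nil {
		return true // unparsable values keep the default
	}
	return b
}

// EXAMPLE_WRITE_K8S_EVENTS is an illustrative name, not Argo CD's actual variable.
var writeK8sEvents = boolFromEnv("EXAMPLE_WRITE_K8S_EVENTS")
```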
Pasha Kostohrys
d03ccf305c fix: disable automaxprocs logging (#20069) - cherry-pick 2.13 (#20718)
* fix: disable automaxprocs logging (#20069)

* disable automaxprocs logging

Signed-off-by: nitishfy <justnitish06@gmail.com>

fix lint checks

Signed-off-by: nitishfy <justnitish06@gmail.com>

move maxprocs to main.go

Signed-off-by: nitishfy <justnitish06@gmail.com>

move set auto max procs to a function

Signed-off-by: nitishfy <justnitish06@gmail.com>

add info log

Signed-off-by: nitishfy <justnitish06@gmail.com>

* add info log

Signed-off-by: nitishfy <justnitish06@gmail.com>

* fix lint checks

Signed-off-by: nitishfy <justnitish06@gmail.com>

* fix lint checks

Signed-off-by: nitishfy <justnitish06@gmail.com>

* add unit test

Signed-off-by: nitishfy <justnitish06@gmail.com>

* fix lint issues

Signed-off-by: nitishfy <justnitish06@gmail.com>

---------

Signed-off-by: nitishfy <justnitish06@gmail.com>
(cherry picked from commit cfa1c89c43)

* fix(ci): ignore temporary files when checking for out of bound symlinks (#20527)

Signed-off-by: cef <moncef.abboud95@gmail.com>

---------

Signed-off-by: cef <moncef.abboud95@gmail.com>
Co-authored-by: Nitish Kumar <justnitish06@gmail.com>
Co-authored-by: ABBOUD Moncef <moncef.abboud95@gmail.com>
2024-11-08 19:46:49 +02:00
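A hedged sketch of quieting automaxprocs by routing its output through a chosen logger; maxprocs.Set with the Logger option is the documented go.uber.org/automaxprocs API, while the choice of log function here is illustrative and not necessarily what Argo CD uses.

```go
package main

import (
	"log"

	"go.uber.org/automaxprocs/maxprocs"
)

func main() {
	// Route automaxprocs messages through the standard logger (or silence
	// them entirely by passing a no-op function) instead of its default output.
	undo, err := maxprocs.Set(maxprocs.Logger(log.Printf))
	defer undo()
	if err != nil {
		log.Printf("failed to set GOMAXPROCS: %v", err)
	}
}
```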
gcp-cherry-pick-bot[bot]
7f45c9e093 chore: Don't degrade PDB on InsufficientPods (#20171) (#20665) (#20694)
Closes #20171

There are valid use cases for InsufficientPods, e.g. running some jobs or tests that shouldn't be terminated. See the discussion on the issue for more details. So don't degrade the PDB on the InsufficientPods condition.

This needs to be cherry picked to 2.13, as many people may not upgrade otherwise.

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>
Co-authored-by: Andrii Korotkov <137232734+andrii-korotkov-verkada@users.noreply.github.com>
2024-11-07 09:11:01 -08:00
austin5219
449e6939b2 fix(pkce): 20202 Backport PKCE auth flow fix for basehref and reauth (#20675)
* fix(pkce): 20111 PKCE auth flow does not return user to previous path like dex auth flow (#20202)

* Adding non-default basehref support for PKCE auth flow

Signed-off-by: austin5219 <3936059+austin5219@users.noreply.github.com>

* Adding ; for linting

Signed-off-by: austin5219 <3936059+austin5219@users.noreply.github.com>

* removing hook function

Signed-off-by: austin5219 <3936059+austin5219@users.noreply.github.com>

* Moving unauthorized error handling to class component to access context for error handling within 401 error

Signed-off-by: austin5219 <3936059+austin5219@users.noreply.github.com>

* Store the subscription handle to close in unmount

Signed-off-by: austin5219 <3936059+austin5219@users.noreply.github.com>

* reorder imports

Signed-off-by: austin5219 <3936059+austin5219@users.noreply.github.com>

* Actually saving the subscriptions now

Signed-off-by: austin5219 <3936059+austin5219@users.noreply.github.com>

* returning the 401 subscription from helper function

Signed-off-by: austin5219 <3936059+austin5219@users.noreply.github.com>

* Handle the promise of a subscription

Signed-off-by: austin5219 <3936059+austin5219@users.noreply.github.com>

* Removing then from non async subscribe

Signed-off-by: austin5219 <3936059+austin5219@users.noreply.github.com>

* Linter fixes

Signed-off-by: austin5219 <3936059+austin5219@users.noreply.github.com>

* Adding path caching to sessionStorage on pkceLogin and redirect step to cached path if available in pkceCallback to mirror Dex functionality

Signed-off-by: austin5219 <3936059+austin5219@users.noreply.github.com>

---------

Signed-off-by: austin5219 <3936059+austin5219@users.noreply.github.com>

* Merge Conflict fix

Signed-off-by: austin5219 <3936059+austin5219@users.noreply.github.com>

---------

Signed-off-by: austin5219 <3936059+austin5219@users.noreply.github.com>
2024-11-07 15:47:09 +02:00
gcp-cherry-pick-bot[bot]
99aab9a5f3 fix: check for source position when --show-params is set (#20682) (#20689)
Signed-off-by: Soumya Ghosh Dastidar <gdsoumya@gmail.com>
Co-authored-by: Soumya Ghosh Dastidar <44349253+gdsoumya@users.noreply.github.com>
2024-11-07 12:51:49 +05:30
github-actions[bot]
347f221adb Bump version to 2.13.0 (#20652)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: pasha-codefresh <39732895+pasha-codefresh@users.noreply.github.com>
2024-11-04 14:05:53 +02:00
gcp-cherry-pick-bot[bot]
1fcbe3f511 fix(ui): fix the slider tansition (#20641) (#20642)
Co-authored-by: Ashu <11219262+ashutosh16@users.noreply.github.com>
2024-11-01 16:53:43 -04:00
gcp-cherry-pick-bot[bot]
d417417c21 docs(rbac): clarify glob pattern behavior for fine-grain RBAC (#20624) (#20626)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-10-31 14:11:13 -04:00
gcp-cherry-pick-bot[bot]
e7f98814a9 fix(diff): avoid cache miss in server-side diff (#20605) (#20607)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-10-30 21:36:12 -04:00
gcp-cherry-pick-bot[bot]
3f708b8b14 rerender when extensions update (#20559) (#20601)
Signed-off-by: Yiwei Gong <imwithye@gmail.com>
Co-authored-by: Yiwei Gong <imwithye@gmail.com>
2024-10-30 11:01:54 -07:00
Amit Oren
7bc333d193 fix(ui): fix open application detail in new tab when basehref is set (#20004) (#20571)
Signed-off-by: Amit Oren <amit@coralogix.com>
Co-authored-by: Henry Liu <henry.liu@daocloud.io>
2024-10-29 13:59:40 +02:00
Leonardo Luz Almeida
e3b1d9327d feat: allow individual extension configs (#20491) (#20525)
* feat: allow individual extension configs



* fix test



* update ext docs



* + docs



* pr review



* address review comments



---------

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>
2024-10-24 16:26:03 -04:00
gcp-cherry-pick-bot[bot]
deb07ee698 fix(appset): avoid panic when no steps in rollingSync (#20357) (#20492)
Signed-off-by: cef <moncef.abboud95@gmail.com>
Co-authored-by: ABBOUD Moncef <moncef.abboud95@gmail.com>
2024-10-24 07:57:04 -04:00
Alexander Matyushentsev
435989c07e fix: support managing cluster with multiple argocd instances and annotation based tracking (#20222) (#20488)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2024-10-22 06:28:20 -07:00
Alexander Matyushentsev
2503eb32af feat: support using exponential backoff between self heal attempts (#20275) (#20480)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2024-10-22 00:51:32 -07:00
Alexander Matyushentsev
be57dfe1fa fix: support managing cluster with multiple argocd instances and annotation based tracking (#20222) (#20483)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2024-10-22 00:59:42 -04:00
github-actions[bot]
e99c8b754b Bump version to 2.13.0-rc5 (#20457)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2024-10-18 13:26:40 -04:00
gcp-cherry-pick-bot[bot]
2076b4f73c fix: don't disable buttons for multi-source apps (#20446) (#20447)
With #20381 multi-source apps were not taken into account 🤦

Fixes #20445.

Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
Co-authored-by: Blake Pettersson <blake.pettersson@gmail.com>
2024-10-18 11:15:03 -04:00
gcp-cherry-pick-bot[bot]
5c595d8410 fix(diff): avoid cache miss in server-side diff (#20423) (#20424) (#20449)
* fix(diff): avoid cache miss in server-side diff (#20423)



* fix silly mistakes



---------

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-10-18 11:12:31 -04:00
github-actions[bot]
8340e1e43f Bump version to 2.13.0-rc4 (#20439)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2024-10-17 16:34:49 -04:00
gcp-cherry-pick-bot[bot]
1cddb8e607 fix(ci): handle new k3s test version matrix (#20223) (#20427) (#20433)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-10-17 16:06:06 -04:00
gcp-cherry-pick-bot[bot]
262c8fa529 fix(ui): source can in fact be undefined (#20381) (#20418)
* fix(ui): source can in fact be `undefined`

The assumption that a source is always there is not always true. To
repro, create an app-of-apps containing a single app without any `source`
present. In the UI this will crash, horribly. This PR fixes that so
that instead of crashing the user will get useful info indicating what
is wrong with the app.



* chore(ui): some cr tweaks



* chore(ui): some cr tweaks



---------

Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
Co-authored-by: Blake Pettersson <blake.pettersson@gmail.com>
2024-10-17 07:59:45 -04:00
Linghao Su
97a49a24cc fix(controller/ui): fix pod with sidecar state (#19843) (#20393)
* fix(controller): change pod status calculate with sidecar



* fix(controller): add restartable sidecar count in total container display



* fix(controller): update info test case conditions




* fix(controller): add more test case to cover more conditions



* fix(ui): check is condition exist before for of



---------

Signed-off-by: linghaoSu <linghao.su@daocloud.io>
Signed-off-by: Linghao Su <slh001@live.cn>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-10-16 21:50:50 -04:00
gcp-cherry-pick-bot[bot]
a9a8d0e45f fix: check err before use schedule and duration (#20043) (#20353)
* fix: check err before use schedule and duration



* test: add tests for invalid schedule and duration



* feat: change to return error when sync window is invalid



* fix: use assert.Error or assert.NoError



* fix: use require instead of assert



---------

Signed-off-by: daengdaengLee <gunho1020@gmail.com>
Co-authored-by: Kunho Lee <gunho1020@gmail.com>
2024-10-12 13:33:55 -04:00
pasha-codefresh
92de225ce5 feat(ui): display sha's revision in every history release (#19962) - release-2.13 (#20310)
Signed-off-by: pashakostohrys <pavel@codefresh.io>
2024-10-09 14:25:43 -04:00
Alexandre Gaudreault
a713e5023a prevent crash during timer expiration after stream is closed (#19917) (#20272)
Reorder the ticker stop and the close of merge to prevent a send(true) from happening after merge is closed, in the rare situation where the timer expires exactly between close(merge) and ticker.Stop().

Signed-off-by: morapet <peter@moran.sk>
Co-authored-by: morapet <peter@moran.sk>
2024-10-07 13:53:11 -04:00
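A minimal, self-contained sketch of the shutdown-ordering problem the commit above describes, with illustrative names rather than Argo CD's actual code: stop the ticker and the sending goroutine first, wait for it to exit, and only then close the channel, so a tick can never turn into a send on a closed channel.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	merge := make(chan bool)
	done := make(chan struct{})
	ticker := time.NewTicker(10 * time.Millisecond)

	var wg sync.WaitGroup
	wg.Add(1)
	go func() { // producer: forwards ticks onto merge
		defer wg.Done()
		for {
			select {
			case <-done:
				return
			case <-ticker.C:
				select {
				case merge <- true:
				case <-done:
					return
				}
			}
		}
	}()

	go func() { // consumer
		for range merge {
			fmt.Println("tick forwarded")
		}
	}()

	time.Sleep(50 * time.Millisecond)

	// Shutdown order matters: stop the ticker and signal the producer, wait
	// for it to exit, and only then close merge. Closing merge while the
	// producer can still send is the "send on closed channel" race being fixed.
	ticker.Stop()
	close(done)
	wg.Wait()
	close(merge)
}
```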
github-actions[bot]
ec60abd4d8 Bump version to 2.13.0-rc3 (#20268)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: crenshaw-dev <350466+crenshaw-dev@users.noreply.github.com>
2024-10-07 09:32:14 -04:00
gcp-cherry-pick-bot[bot]
c6d9d50ee9 fix: Rework git tag semver resolution (#20083) (#20096) (#20213)
* Write initial tests



* Improve git tag semver resolution



* Add company to list of users



* Fix broken error string check



* Fix incorrect semver test assumption



* switch to debug statement



* Add more testcases for review



* review comments



---------

Signed-off-by: Paul Larsen <pnvlarsen@gmail.com>
Co-authored-by: Paul Larsen <pnvlarsen@gmail.com>
2024-10-06 23:49:01 -04:00
gcp-cherry-pick-bot[bot]
7244b8b40f fix: Policy/policy.open-cluster-management.io health check is broken (#20108) (#20109) (#20258)
Tried using the health check as listed here, but it gave an error:

| error setting app health: failed to get resource health for "Policy" with name "XXXX" in namespace "local-cluster": <string>:35: invalid value (nil) at index 1 in table for concat stack traceback: [G]: in function 'concat' <string>:35: in main chunk [G]: ?

This change fixes the error by updating how the noncompliant clusters are tracked and counted, to use the latest Lua recommendations.

Signed-off-by: Ian Tewksbury <itewk@redhat.com>
Co-authored-by: Ian Tewksbury <itewk@redhat.com>
2024-10-06 15:35:08 -04:00
gcp-cherry-pick-bot[bot]
8e81bb6c80 Fixes minor typo which lead to using the bearer token as api URL and was obviously not working. (#20169) (#20170)
Signed-off-by: Dan Garfield <dan@codefresh.io>
2024-10-06 13:00:11 +03:00
gcp-cherry-pick-bot[bot]
3bc2e1ae4c fix(health): only consider non-empty health checks (#20232) (#20235)
* fix(health): only consider non-empty health checks

For wildcard health checks, only consider wildcards with a non-empty
health check. Fixes #16905 (at least partially).



* test: renaming test case for clarity



* refactor: add clarity as to what the function is supposed to do



* Update docs/operator-manual/health.md




* test: add test case for `*/*` override with empty healthcheck



---------

Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
Co-authored-by: Blake Pettersson <blake.pettersson@gmail.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-10-04 15:35:47 -04:00
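A hedged sketch of the selection rule described in the commit above, with illustrative types rather than Argo CD's real override resolution (which, for example, gives exact matches precedence): when resolving wildcard resource overrides, skip any override whose health script is empty, so an empty `*/*` entry cannot shadow real health checks.

```go
package lua

import "path/filepath"

// resourceOverride is an illustrative stand-in for Argo CD's override type;
// only the health script matters for this sketch.
type resourceOverride struct {
	HealthLua string
}

// findHealthScript returns the first override whose key glob matches the
// group/kind and whose health script is non-empty, mirroring the rule that
// empty wildcard health checks are ignored.
func findHealthScript(overrides map[string]resourceOverride, groupKind string) (string, bool) {
	for key, ov := range overrides {
		matched, err := filepath.Match(key, groupKind)
		if err != nil || !matched {
			continue
		}
		if ov.HealthLua == "" {
			continue // an empty wildcard health check should not count
		}
		return ov.HealthLua, true
	}
	return "", false
}
```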
gcp-cherry-pick-bot[bot]
61f63f35ae chore: Added unit tests and fix e2e tests for application sync decoupling feature (#19966) (#20219)
* fixed doc comments and added unit tests



* Added comments for the newly added unit tests



* Refactored method name to deriveServiceAccountToImpersonate



* Using const name in return value



* Added unit tests for argocd proj add-destination-service-accounts



* Fixed failing e2e tests



* Fix linting errors



* Using require package instead of assert and fixed code generation



* Removed parallel execution of tests for sync with impersonate



* Added err checks for glob validations



* Fixed e2e tests for sync impersonation



* Using consistently based expects in E2E tests



* Added more unit tests and fixed go generate



* Fixed failed lint errors, unit and e2e test failures



* Fixed goimports linter issue



* Added code comments and added few missing unit tests



* Added missing unit test for GetDestinationServiceAccounts method



* Fixed goimports formatting with local for project_test.go



* Corrected typo in a field name additionalObjs



* Fixed failing unit tests



---------

Signed-off-by: anandf <anjoseph@redhat.com>
Co-authored-by: Anand Francis Joseph <anjoseph@redhat.com>
2024-10-03 15:38:17 -04:00
gcp-cherry-pick-bot[bot]
5eb1f9bd16 fix: update health check to support modelmesh (#20142) (#20218)
Signed-off-by: Trevor Royer <troyer@redhat.com>
Co-authored-by: Trevor Royer <troyer@redhat.com>
Co-authored-by: Dan Garfield <dan@codefresh.io>
2024-10-03 14:08:11 -04:00
gcp-cherry-pick-bot[bot]
4149f484bf fix: Fix false positive in plugin application discovery (#20196) (#20214)
* fix: fix false positive in plugin application discovery



* fix: apply suggestion to return immediately if discovery is not configured for unnamed plugin



---------

Signed-off-by: Pradithya Aria <pradithya.pura@gojek.com>
Co-authored-by: aria <pradithya.pura@gojek.com>
2024-10-03 13:25:32 -04:00
gcp-cherry-pick-bot[bot]
0b2895977e docs(ui): sorting version (#20181) (#20203)
Signed-off-by: nueavv <nuguni@kakao.com>
Co-authored-by: 1102 <90682513+nueavv@users.noreply.github.com>
2024-10-02 13:41:17 -04:00
gcp-cherry-pick-bot[bot]
99b30a87a6 fix: Fix argocd appset generate failure due to missing clusterrole (#20162) (#20164)
* fix: FIx argocd-server clusterrole to allow argocd appset generate using cluster generator



* fix: update generated code



---------

Signed-off-by: Pradithya Aria <pradithya.pura@gojek.com>
Co-authored-by: aria <pradithya.pura@gojek.com>
2024-09-30 09:42:43 -04:00
gcp-cherry-pick-bot[bot]
9fc6ec116d fix(extension): add header to support apps-in-any-namespace (#20123) (#20126)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2024-09-27 11:54:38 -04:00
gcp-cherry-pick-bot[bot]
f7f553f675 chore(deps): bump Helm from 3.15.2 to 3.15.4 (#20135) (#20137)
* sec: upgrade helm version in order to fix critical vulnerability



* sec: upgrade helm version in order to fix critical vulnerability



---------

Signed-off-by: pashakostohrys <pavel@codefresh.io>
Co-authored-by: pasha-codefresh <pavel@codefresh.io>
2024-09-27 09:48:59 -04:00
gcp-cherry-pick-bot[bot]
a9d9d07edd fix: CVE-2024-45296 Backtracking regular expressions cause ReDoS by upgrading path-to-regexp from 1.8.0 to 1.9.0 (#20087) (#20089)
Signed-off-by: Cheng Fang <cfang@redhat.com>
Co-authored-by: Cheng Fang <cfang@redhat.com>
2024-09-24 23:28:18 -04:00
github-actions[bot]
0f083c9e58 Bump version to 2.13.0-rc2 (#20029)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: pasha-codefresh <39732895+pasha-codefresh@users.noreply.github.com>
2024-09-20 14:36:29 +03:00
gcp-cherry-pick-bot[bot]
5392ca7e79 chore(deps): bump dompurify from 2.3.6 to 2.5.6 in /ui (#19955) (#20015)
Bumps [dompurify](https://github.com/cure53/DOMPurify) from 2.3.6 to 2.5.6.
- [Release notes](https://github.com/cure53/DOMPurify/releases)
- [Commits](https://github.com/cure53/DOMPurify/compare/2.3.6...2.5.6)

---
updated-dependencies:
- dependency-name: dompurify
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-19 21:18:56 -04:00
gcp-cherry-pick-bot[bot]
243ecc2f25 fix: notification controller crash loop in 2.13 RC1 (#19984) (#19986)
Signed-off-by: pashakostohrys <pavel@codefresh.io>
Co-authored-by: pasha-codefresh <pavel@codefresh.io>
2024-09-18 23:01:51 +03:00
gcp-cherry-pick-bot[bot]
425b4087f3 fix: Add redis password to forwardCacheClient struct (#19599) (#19977)
Signed-off-by: Netanel Kadosh <kadoshnetanel@gmail.com>
Co-authored-by: Netanel Kadosh <kadoshnetanel@gmail.com>
2024-09-18 12:53:35 +03:00
github-actions[bot]
74a367d10e Bump version to 2.13.0-rc1 (#19943)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: pasha-codefresh <39732895+pasha-codefresh@users.noreply.github.com>
2024-09-16 09:54:17 +03:00
dependabot[bot]
e67a7b6674 chore(deps-dev): bump @types/node from 22.5.4 to 22.5.5 in /ui-test (#19941)
Bumps [@types/node](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node) from 22.5.4 to 22.5.5.
- [Release notes](https://github.com/DefinitelyTyped/DefinitelyTyped/releases)
- [Commits](https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/node)

---
updated-dependencies:
- dependency-name: "@types/node"
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-16 04:53:11 +00:00
dependabot[bot]
ddf337e893 chore(deps): bump bitnami/kubectl in /test/container (#19939)
Bumps bitnami/kubectl from `7779e58` to `27e5f50`.

---
updated-dependencies:
- dependency-name: bitnami/kubectl
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-16 00:37:53 -04:00
dependabot[bot]
5540c37f3a chore(deps): bump github.com/cyphar/filepath-securejoin (#19940)
Bumps [github.com/cyphar/filepath-securejoin](https://github.com/cyphar/filepath-securejoin) from 0.3.1 to 0.3.2.
- [Release notes](https://github.com/cyphar/filepath-securejoin/releases)
- [Changelog](https://github.com/cyphar/filepath-securejoin/blob/main/CHANGELOG.md)
- [Commits](https://github.com/cyphar/filepath-securejoin/compare/v0.3.1...v0.3.2)

---
updated-dependencies:
- dependency-name: github.com/cyphar/filepath-securejoin
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-16 00:37:35 -04:00
dependabot[bot]
60df9eb384 chore(deps-dev): bump mocha and @types/mocha in /ui-test (#19923)
Bumps [mocha](https://github.com/mochajs/mocha) and [@types/mocha](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/mocha). These dependencies needed to be updated together.

Updates `mocha` from 10.4.0 to 10.7.3
- [Release notes](https://github.com/mochajs/mocha/releases)
- [Changelog](https://github.com/mochajs/mocha/blob/main/CHANGELOG.md)
- [Commits](https://github.com/mochajs/mocha/compare/v10.4.0...v10.7.3)

Updates `@types/mocha` from 10.0.6 to 10.0.8
- [Release notes](https://github.com/DefinitelyTyped/DefinitelyTyped/releases)
- [Commits](https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/mocha)

---
updated-dependencies:
- dependency-name: mocha
  dependency-type: direct:development
  update-type: version-update:semver-minor
- dependency-name: "@types/mocha"
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-13 12:26:46 -07:00
dependabot[bot]
c6a414c7db chore(deps): bump chromedriver from 128.0.1 to 128.0.3 in /ui-test (#19924)
Bumps [chromedriver](https://github.com/giggio/node-chromedriver) from 128.0.1 to 128.0.3.
- [Commits](https://github.com/giggio/node-chromedriver/compare/128.0.1...128.0.3)

---
updated-dependencies:
- dependency-name: chromedriver
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-13 12:26:06 -07:00
dependabot[bot]
d49e175c53 chore(deps): bump library/busybox in /test/e2e/multiarch-container (#19925)
Bumps library/busybox from `34b191d` to `c230832`.

---
updated-dependencies:
- dependency-name: library/busybox
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-13 12:25:55 -07:00
Thibault Jamet
42c001dd14 fix(appset): Fix perpetual appset reconciliation (#19822)
Go maps do not guarantee the order of the application resources
from the ApplicationSet, which causes rapid sync activity for the ApplicationSet,
as the objects, and hence their resourceVersions, are updated after each reconcile loop.

This then triggers reconciliation of all objects watching the
ApplicationSet.

In order to prevent this behaviour, ensure that the ApplicationSet
reconciler provides an idempotent list of resources, ensuring objects
are not updated.

Fixes: #19757

Signed-off-by: Thibault Jamet <thibault.jamet@adevinta.com>
Signed-off-by: Fabián Sellés <fabian.selles@adevinta.com>
Co-authored-by: Fabian Selles <fabian.sellesrosa@gmail.com>
Co-authored-by: Ariadna Rouco <ariadna.rouco@adevinta.com>
2024-09-13 13:43:53 -04:00
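A minimal sketch, with illustrative names, of making a reconciler's resource list idempotent as the commit above describes: since Go map iteration order is randomized, sort the keys before building the list so repeated reconciles produce identical output and stop churning resourceVersions.

```go
package controller

import "sort"

// deterministicResourceList builds the same slice every time for the same
// input map, regardless of Go's randomized map iteration order.
func deterministicResourceList[T any](byName map[string]T) []T {
	names := make([]string, 0, len(byName))
	for name := range byName {
		names = append(names, name)
	}
	sort.Strings(names)

	out := make([]T, 0, len(names))
	for _, name := range names {
		out = append(out, byName[name])
	}
	return out
}
```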
dependabot[bot]
ccc66cc54d chore(deps): bump peter-evans/create-pull-request from 7.0.1 to 7.0.2 (#19922)
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 7.0.1 to 7.0.2.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](8867c4aba1...d121e62763)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-13 16:30:20 +01:00
Vikram Sethi
f22c332d92 Add Adobe to USERS.md (#19927)
Adobe is actively using all four Argo projects

Signed-off-by: Vikram Sethi <vsethi@adobe.com>
2024-09-12 23:49:17 -10:00
rumstead
cb6fbbfdea fix(docs): adding links for appset matrix example (#19914)
* fix(docs): adding links for appset matrix example

Signed-off-by: rumstead <37445536+rumstead@users.noreply.github.com>

* fix(docs): adding links for appset matrix example

Signed-off-by: rumstead <37445536+rumstead@users.noreply.github.com>

---------

Signed-off-by: rumstead <37445536+rumstead@users.noreply.github.com>
2024-09-12 22:55:41 -06:00
Kostis (Codefresh)
81de487cf6 docs: Application sets metrics documentation (#19892)
* docs: fixed wrong formatting of yaml

Signed-off-by: Kostis (Codefresh) <39800303+kostis-codefresh@users.noreply.github.com>

* docs: metrics for application sets

Signed-off-by: Kostis (Codefresh) <39800303+kostis-codefresh@users.noreply.github.com>

* docs: applicationset metrics suggestions from code review

Co-authored-by: Dan Garfield <dan@codefresh.io>
Signed-off-by: Kostis (Codefresh) <39800303+kostis-codefresh@users.noreply.github.com>

* docs: minor formatting fix

Co-authored-by: Nitish Kumar <justnitish06@gmail.com>
Signed-off-by: Kostis (Codefresh) <39800303+kostis-codefresh@users.noreply.github.com>

---------

Signed-off-by: Kostis (Codefresh) <39800303+kostis-codefresh@users.noreply.github.com>
Co-authored-by: Dan Garfield <dan@codefresh.io>
Co-authored-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
Co-authored-by: Nitish Kumar <justnitish06@gmail.com>
2024-09-12 15:02:13 +00:00
dependabot[bot]
28f424f8f9 chore(deps): bump library/golang from 4a3c2bc to 2fe82a3 (#19905)
Bumps library/golang from `4a3c2bc` to `2fe82a3`.

---
updated-dependencies:
- dependency-name: library/golang
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-12 06:25:54 +00:00
dependabot[bot]
bf02881374 chore(deps): bump library/redis in /test/container (#19900)
Bumps library/redis from `fbff2d8` to `eadf354`.

---
updated-dependencies:
- dependency-name: library/redis
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-12 01:47:33 -04:00
dependabot[bot]
393f7fc7c1 chore(deps): bump library/golang in /test/container (#19901)
Bumps library/golang from `4a3c2bc` to `2fe82a3`.

---
updated-dependencies:
- dependency-name: library/golang
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-12 01:47:19 -04:00
dependabot[bot]
48a03a9884 chore(deps): bump google.golang.org/grpc from 1.66.1 to 1.66.2 (#19902)
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.66.1 to 1.66.2.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.66.1...v1.66.2)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-12 01:47:05 -04:00
dependabot[bot]
7abdd88d81 chore(deps): bump go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc (#19903)
Bumps [go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc](https://github.com/open-telemetry/opentelemetry-go-contrib) from 0.54.0 to 0.55.0.
- [Release notes](https://github.com/open-telemetry/opentelemetry-go-contrib/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go-contrib/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go-contrib/compare/zpages/v0.54.0...zpages/v0.55.0)

---
updated-dependencies:
- dependency-name: go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-12 01:46:50 -04:00
dependabot[bot]
c20734df37 chore(deps): bump gitpod/workspace-full from fbff2dc to 230285e (#19904)
Bumps gitpod/workspace-full from `fbff2dc` to `230285e`.

---
updated-dependencies:
- dependency-name: gitpod/workspace-full
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-12 05:27:18 +00:00
dependabot[bot]
f5a202abb3 chore(deps): bump library/golang in /test/remote (#19899)
Bumps library/golang from `4a3c2bc` to `2fe82a3`.

---
updated-dependencies:
- dependency-name: library/golang
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2024-09-12 04:53:03 +00:00
Linghao Su
20e7f8edca feat(ui): add health status and message in sync status list (#19875)
Signed-off-by: linghaoSu <linghao.su@daocloud.io>
Co-authored-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2024-09-11 20:26:44 -07:00
dependabot[bot]
ddab959958 chore(deps): bump library/redis from 7.2.5 to 7.4.0 in /test/container (#19294)
Bumps library/redis from 7.2.5 to 7.4.0.

---
updated-dependencies:
- dependency-name: library/redis
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-11 20:25:17 -07:00
Suraj yadav
aeb8b55fc0 fix(ui): Re-fix help-icon in the summary section (#19833)
* refix-icon

Signed-off-by: Surajyadav <harrypotter1108@gmail.com>

* lint

Signed-off-by: Surajyadav <harrypotter1108@gmail.com>

---------

Signed-off-by: Surajyadav <harrypotter1108@gmail.com>
2024-09-11 22:10:27 -04:00
Seolhui Lee
c4709fbf5f feat: Add graceful shutdown handling in notification (#19368)
Signed-off-by: LeeSeolHui <lsh81550@gmail.com>
2024-09-11 22:08:25 -04:00
Thiago Perrotta
022c4fd061 docs(sync windows): rename Sunday-Saturday (#19885) (#19886)
* fix(sync windows): rename Sunday-Saturday

Sunday-Saturday is ambiguous. It could mean:

- sunday and saturday ONLY
- from sunday to saturday (=every day of the week)

In order to disambiguate, we could change the label to one of the
following:

- Every Day of the Week
- Sunday to Saturday
- From Sunday to Saturday

Signed-off-by: Thiago Perrotta <tbperrotta@gmail.com>

* Update ui/src/app/settings/components/project-sync-windows-edit/project-sync-windows-edit.tsx

Co-authored-by: Dan Garfield <dan@codefresh.io>
Signed-off-by: Thiago Perrotta <tbperrotta@gmail.com>

---------

Signed-off-by: Thiago Perrotta <tbperrotta@gmail.com>
Co-authored-by: Dan Garfield <dan@codefresh.io>
2024-09-11 21:37:35 -04:00
1102
02df74192f docs: Clarify AWS profile mounting locations for EKS cluster addition (#19853)
Signed-off-by: nueavv <nuguni@kakao.com>
2024-09-11 20:28:39 -04:00
1102
ad399c0a88 docs: Add Installation Warning and Kustomize Guide (#19874)
* Update installation docs: Add warning for ClusterRoleBinding and custom namespace

Signed-off-by: nueavv <nuguni@kakao.com>

* Add support for installing Argo CD in custom namespace using Kustomize

Signed-off-by: nueavv <nuguni@kakao.com>

* remove clusterrolebinding name

Signed-off-by: nueavv <nuguni@kakao.com>

---------

Signed-off-by: nueavv <nuguni@kakao.com>
2024-09-11 16:54:44 -06:00
Jingchao
f980187f17 fix: openkruise health check npe error #19545 (#19660)
* test: add broken unit test data

Signed-off-by: Jingchao <alswlx@gmail.com>

* fix: npe error in kruise ds health-check

Signed-off-by: Jingchao <alswlx@gmail.com>

---------

Signed-off-by: Jingchao <alswlx@gmail.com>
2024-09-11 15:32:37 -07:00
dependabot[bot]
da118ad6aa chore(deps): bump express from 4.19.2 to 4.20.0 in /ui (#19883)
Bumps [express](https://github.com/expressjs/express) from 4.19.2 to 4.20.0.
- [Release notes](https://github.com/expressjs/express/releases)
- [Changelog](https://github.com/expressjs/express/blob/master/History.md)
- [Commits](https://github.com/expressjs/express/compare/4.19.2...4.20.0)

---
updated-dependencies:
- dependency-name: express
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-11 15:27:59 -07:00
dependabot[bot]
44d56954b7 chore(deps): bump library/golang from 1.22.6 to 1.23.1 (#19838)
Bumps library/golang from 1.22.6 to 1.23.1.

---
updated-dependencies:
- dependency-name: library/golang
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-11 15:27:08 -07:00
afarbos
e86258d8a5 feat: Implement PodDisruptionBudget CRD health checks (#19826)
Signed-off-by: Arnaud Farbos <farbos.arnaud@gmail.com>
2024-09-11 15:24:59 -07:00
dependabot[bot]
8487a93931 chore(deps): bump library/golang from 1.22.0 to 1.23.1 in /test/remote (#19840)
Bumps library/golang from 1.22.0 to 1.23.1.

---
updated-dependencies:
- dependency-name: library/golang
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-11 15:20:21 -07:00
dependabot[bot]
76870db199 chore(deps): bump library/golang in /test/container (#19841)
Bumps library/golang from `80cf6f9` to `4a3c2bc`.

---
updated-dependencies:
- dependency-name: library/golang
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-11 15:19:51 -07:00
dependabot[bot]
d60f8d8ba2 chore(deps): bump go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc (#19877)
Bumps [go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc](https://github.com/open-telemetry/opentelemetry-go) from 1.27.0 to 1.30.0.
- [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/v1.27.0...v1.30.0)

---
updated-dependencies:
- dependency-name: go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-11 15:18:46 -07:00
Nitish Kumar
5e55d1d502 docs: mention information about where to set the ARGOCD_SYNC_WAVE_DELAY environment variable (#19879)
* Update sync-waves.md

Signed-off-by: Nitish Kumar <justnitish06@gmail.com>

* Update docs/user-guide/sync-waves.md

Co-authored-by: Dan Garfield <dan@codefresh.io>
Signed-off-by: Nitish Kumar <justnitish06@gmail.com>

---------

Signed-off-by: Nitish Kumar <justnitish06@gmail.com>
Co-authored-by: Dan Garfield <dan@codefresh.io>
2024-09-11 16:11:26 -06:00
Nitish Kumar
ebbd3d1321 feat: add --source-position flag to argocd get app command to show parameter changes for multi-source application (#19887)
Signed-off-by: nitishfy <justnitish06@gmail.com>
2024-09-11 22:47:31 +05:30
KangManJoo
b098f2152e chore: improve error logs (#10592) (#19743)
Signed-off-by: KangManJoo <eogns47@konkuk.ac.kr>
2024-09-11 13:14:47 -04:00
dependabot[bot]
a7bc623fef chore(deps): bump go.opentelemetry.io/otel/sdk from 1.29.0 to 1.30.0 (#19878)
Bumps [go.opentelemetry.io/otel/sdk](https://github.com/open-telemetry/opentelemetry-go) from 1.29.0 to 1.30.0.
- [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/v1.29.0...v1.30.0)

---
updated-dependencies:
- dependency-name: go.opentelemetry.io/otel/sdk
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-11 13:06:56 -04:00
mmuzh
1de5f3b7fc Add Augury as part of argocd users (#19890)
Signed-off-by: mmuzh <96480964+MaxMuzh@users.noreply.github.com>
2024-09-11 13:53:33 +02:00
dependabot[bot]
14c1da6e40 chore(deps): bump github.com/xanzy/go-gitlab from 0.108.0 to 0.109.0 (#19839)
Bumps [github.com/xanzy/go-gitlab](https://github.com/xanzy/go-gitlab) from 0.108.0 to 0.109.0.
- [Release notes](https://github.com/xanzy/go-gitlab/releases)
- [Changelog](https://github.com/xanzy/go-gitlab/blob/main/releases_test.go)
- [Commits](https://github.com/xanzy/go-gitlab/compare/v0.108.0...v0.109.0)

---
updated-dependencies:
- dependency-name: github.com/xanzy/go-gitlab
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-11 12:11:26 +03:00
mhaoda
bc4c4757fd fix: proxy url arg for repocreds command. (#19805)
* Add proxy url arg for repocreds command.

Co-authored-by: Li Wang <li.wang3@fmr.com>
Signed-off-by: Miao Haoda <Haoda.Miao@fmr.com>

* commit the results of clidocsgen

Signed-off-by: Miao Haoda <Haoda.Miao@fmr.com>

---------

Signed-off-by: Miao Haoda <Haoda.Miao@fmr.com>
Co-authored-by: Li Wang <li.wang3@fmr.com>
2024-09-11 10:11:30 +02:00
Alexandre Gaudreault
ca7a08eb95 fix(deeplinks): do not evaluate template when condition is false (#19625) (#19868)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2024-09-10 16:34:51 -04:00
Ashu
5776554819 feat(lua actions): add a flag to Include builtin actions with resource overrides (#19708)
* feat: include prebuilt action with overrides

Signed-off-by: ashutosh16 <11219262+ashutosh16@users.noreply.github.com>


Signed-off-by: ashutosh16 <11219262+ashutosh16@users.noreply.github.com>
Signed-off-by: Ashu <11219262+ashutosh16@users.noreply.github.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2024-09-10 11:57:29 -07:00
Leonardo Luz Almeida
878494f037 feat: Send user groups to proxy extensions (#19855)
* feat: Send user groups to proxy extensions

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* address review comments

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

---------

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>
2024-09-10 15:37:00 +00:00
Marios Andreopoulos
d8c773dd3d docs: fix Helm --set-file example (#19864)
Signed-off-by: Marios Andreopoulos <opensource@andmarios.com>
2024-09-10 09:08:39 -04:00
dependabot[bot]
d2d9a37a0c chore(deps): bump google.golang.org/grpc from 1.66.0 to 1.66.1 (#19860)
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.66.0 to 1.66.1.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.66.0...v1.66.1)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-10 11:15:41 +03:00
dependabot[bot]
ccc528aa9a chore(deps-dev): bump typescript from 5.5.4 to 5.6.2 in /ui-test (#19857)
Bumps [typescript](https://github.com/microsoft/TypeScript) from 5.5.4 to 5.6.2.
- [Release notes](https://github.com/microsoft/TypeScript/releases)
- [Changelog](https://github.com/microsoft/TypeScript/blob/main/azure-pipelines.release.yml)
- [Commits](https://github.com/microsoft/TypeScript/compare/v5.5.4...v5.6.2)

---
updated-dependencies:
- dependency-name: typescript
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-10 11:13:00 +03:00
Rafal
031fb88fbb fix(ui): Container Selector in Pods doesn't work (#19856)
Signed-off-by: Rafal Pelczar <rafal@akuity.io>
2024-09-10 09:22:11 +05:30
Alexandre Gaudreault
21a364158e feat(cli): ignore tracking annotation on backup restore (#18960)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2024-09-09 15:50:00 -04:00
Andrea Cervesato
47c7e46405 Missing close ``` in kustomize documentation (#19850)
As per the title: a closing ``` is missing in the first object.

Signed-off-by: Andrea Cervesato <andrea.cervesato@gmail.com>
2024-09-09 12:42:09 +02:00
Gergely Fábián
cb926d004d chore: bump go version to 1.22.7 (#19845)
Signed-off-by: Gergely Fábián <gergo.fb@gmail.com>
2024-09-09 12:33:16 +03:00
dependabot[bot]
a2aaf7fd1d chore(deps): bump library/node from c6add15 to bd00c03 in /ui-test (#19844)
Bumps library/node from `c6add15` to `bd00c03`.

---
updated-dependencies:
- dependency-name: library/node
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-09 12:30:04 +03:00
dependabot[bot]
06237b3fee chore(deps): bump library/registry in /test/container (#19842)
Bumps library/registry from `1212042` to `ac0192b`.

---
updated-dependencies:
- dependency-name: library/registry
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-09 12:29:48 +03:00
Tchoupinax
be90cc04fb feat: add a button to show parameter details (#12183) (#16871) 2024-09-08 17:20:48 +03:00
github-actions[bot]
5af95b1350 [Bot] docs: Update Snyk reports (#19831) 2024-09-08 13:54:17 +03:00
pasha-codefresh
aa990d6696 always execute sync if, for at least one revision, we can identify whether it was changed or not (#19828)
Signed-off-by: pashakostohrys <pavel@codefresh.io>
2024-09-07 12:12:33 -04:00
carlosrejano
71bbdccacf fix(appset): Retry on conflict when updating status (#19663)
* fix(appset): Retry on conflict when updating status

  # Context:
  When updating the status of the applicationset object, the update can
  fail with a conflict because the resourceVersion has changed due to a
  different update. This makes the reconcile fail, and we have to wait
  for the following reconcile loop to update the relevant status fields
  and hope that the update calls don't fail again due to a conflict. It
  can even happen that reconciliation gets stuck constantly because of
  these errors.

  A better approach is to retry on a conflict error with the newest
  version of the object, so that we always update the object based on
  its latest version.

  This has been raised in issue #19535: failing due to conflicts can
  prevent the reconcile from proceeding.

  # What does this PR do?
  - Wraps all the `Update().Status` calls inside a retry function that
    retries when the update fails due to a conflict (see the sketch
    after this list).
  - Adds the appset to the fake client subresources; otherwise the
    client cannot correctly determine the status subresource. Refer to:
    https://github.com/kubernetes-sigs/controller-runtime/issues/2386,
    and
    https://github.com/kubernetes-sigs/controller-runtime/issues/2362.
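
  For readers less familiar with the pattern, a minimal sketch of such a
  retry wrapper follows. The helper name and surrounding wiring are
  illustrative assumptions, not the exact code from this PR; it assumes a
  controller-runtime client and the generated ApplicationSet type.

  package controllers

  import (
      "context"

      "k8s.io/client-go/util/retry"
      "sigs.k8s.io/controller-runtime/pkg/client"

      argov1alpha1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
  )

  // updateStatusWithRetry re-fetches the ApplicationSet and retries the status
  // update whenever the API server reports a resourceVersion conflict.
  func updateStatusWithRetry(ctx context.Context, c client.Client, appset *argov1alpha1.ApplicationSet) error {
      return retry.RetryOnConflict(retry.DefaultRetry, func() error {
          latest := &argov1alpha1.ApplicationSet{}
          if err := c.Get(ctx, client.ObjectKeyFromObject(appset), latest); err != nil {
              return err
          }
          // Carry the desired status over to the freshly fetched object so the
          // update is based on the current resourceVersion.
          latest.Status = appset.Status
          if err := c.Status().Update(ctx, latest); err != nil {
              return err
          }
          // Keep the caller's copy in sync with what is now on the cluster.
          latest.DeepCopyInto(appset)
          return nil
      })
  }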

Signed-off-by: Carlos Rejano <carlos.rejano@adevinta.com>

* fixup! fix(appset): Retry on conflict when updating status

---------

Signed-off-by: Carlos Rejano <carlos.rejano@adevinta.com>
Signed-off-by: carlosrejano <59321132+carlosrejano@users.noreply.github.com>
Co-authored-by: Carlos Rejano <carlos.rejano@adevinta.com>
2024-09-07 20:55:39 +05:30
pasha-codefresh
473665795c fix: manifest-generate-paths with autosync causes an undesirable refresh sync (#19799)
Signed-off-by: pashakostohrys <pavel@codefresh.io>
2024-09-06 11:40:48 -04:00
dependabot[bot]
3661f09456 chore(deps): bump library/node from 8ec0232 to c6add15 in /ui-test (#19807)
Bumps library/node from `8ec0232` to `c6add15`.

---
updated-dependencies:
- dependency-name: library/node
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-06 16:59:48 +03:00
dependabot[bot]
1759a4406b chore(deps): bump github.com/prometheus/client_golang (#19810)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.20.2 to 1.20.3.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/v1.20.3/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.20.2...v1.20.3)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-06 16:59:35 +03:00
dependabot[bot]
cc42d5f92d chore(deps): bump library/golang in /test/container (#19811)
Bumps library/golang from `1a6db32` to `80cf6f9`.

---
updated-dependencies:
- dependency-name: library/golang
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-06 16:58:40 +03:00
dependabot[bot]
3136d08f44 chore(deps): bump bitnami/kubectl in /test/container (#19812)
Bumps bitnami/kubectl from `664bf2a` to `7779e58`.

---
updated-dependencies:
- dependency-name: bitnami/kubectl
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-06 16:58:26 +03:00
dependabot[bot]
6533a6f686 chore(deps): bump library/node in /test/container (#19813)
Bumps library/node from `8ec0232` to `bd00c03`.

---
updated-dependencies:
- dependency-name: library/node
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-06 16:58:12 +03:00
dependabot[bot]
3f5b80f626 chore(deps): bump library/node from 8ec0232 to bd00c03 (#19814)
Bumps library/node from `8ec0232` to `bd00c03`.

---
updated-dependencies:
- dependency-name: library/node
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-06 16:55:47 +03:00
dependabot[bot]
3d66b05899 chore(deps): bump tj-actions/changed-files from 44.5.7 to 45.0.1 (#19750)
Bumps [tj-actions/changed-files](https://github.com/tj-actions/changed-files) from 44.5.7 to 45.0.1.
- [Release notes](https://github.com/tj-actions/changed-files/releases)
- [Changelog](https://github.com/tj-actions/changed-files/blob/main/HISTORY.md)
- [Commits](c65cd88342...e9772d1404)

---
updated-dependencies:
- dependency-name: tj-actions/changed-files
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-06 09:24:00 +02:00
dependabot[bot]
b84f01eb3d chore(deps): bump peter-evans/create-pull-request from 7.0.0 to 7.0.1 (#19806)
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 7.0.0 to 7.0.1.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](4320041ed3...8867c4aba1)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-06 09:23:07 +02:00
dependabot[bot]
09fdec4c6b chore(deps): bump golang.org/x/oauth2 from 0.22.0 to 0.23.0 (#19790)
Bumps [golang.org/x/oauth2](https://github.com/golang/oauth2) from 0.22.0 to 0.23.0.
- [Commits](https://github.com/golang/oauth2/compare/v0.22.0...v0.23.0)

---
updated-dependencies:
- dependency-name: golang.org/x/oauth2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-06 09:22:14 +02:00
dependabot[bot]
01bbd91c9d chore(deps): bump golang.org/x/net from 0.28.0 to 0.29.0 (#19808)
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.28.0 to 0.29.0.
- [Commits](https://github.com/golang/net/compare/v0.28.0...v0.29.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-06 04:57:18 +00:00
Dan Garfield
d28229dc1c Fix broken link from overview, previous merge conflict (#19801)
Signed-off-by: todaywasawesome <dan@codefresh.io>
2024-09-05 14:54:50 -04:00
Dustin Lactin
9d3409f7d5 docs: Add Mozilla to USERS.md (#19802)
Signed-off-by: Dustin Lactin <dlactin@mozilla.com>
2024-09-05 06:44:58 -10:00
Alexander Matyushentsev
ba67abed40 docs: proposal to introduce 'Prune/Delete=confirm' sync option value (#19520)
* docs: proposal to introduce 'Prune/Delete=confirm' sync option value

Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>

* add clarifications

Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>

---------

Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
Co-authored-by: Dan Garfield <dan@codefresh.io>
2024-09-05 19:10:39 +03:00
Keith Chong
6dc7405cf9 fix: Delete button should be disabled when one source remains (#18804) 2024-09-05 05:37:59 -04:00
dependabot[bot]
c27091cb4f chore(deps): bump golang.org/x/term from 0.23.0 to 0.24.0 (#19789)
Bumps [golang.org/x/term](https://github.com/golang/term) from 0.23.0 to 0.24.0.
- [Commits](https://github.com/golang/term/compare/v0.23.0...v0.24.0)

---
updated-dependencies:
- dependency-name: golang.org/x/term
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-05 10:47:25 +03:00
dependabot[bot]
bd93902325 chore(deps): bump library/busybox in /test/e2e/multiarch-container (#19791)
Bumps library/busybox from `8274294` to `34b191d`.

---
updated-dependencies:
- dependency-name: library/busybox
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-05 09:42:53 +03:00
dependabot[bot]
d9bda34605 chore(deps): bump bitnami/kubectl in /test/container (#19792)
Bumps bitnami/kubectl from `96ef4d3` to `664bf2a`.

---
updated-dependencies:
- dependency-name: bitnami/kubectl
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-05 09:42:29 +03:00
dependabot[bot]
ece68bd143 chore(deps): bump library/golang in /test/container (#19793)
Bumps library/golang from `613a108` to `1a6db32`.

---
updated-dependencies:
- dependency-name: library/golang
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-05 09:42:09 +03:00
foyerunix
de35745fc0 feat: Add metric to expose Applications conditions (#19438)
Closes #13096

Implement a new metric exposing Application conditions.
This is particularly useful for SRE teams that want to set up alerts on
issues that aren't surfaced via "health_status" and "sync_status" in the
"argocd_app_info" metric.

Signed-off-by: Foyer Unix <foyerunix@foyer.lu>
Co-authored-by: Foyer Unix <foyerunix@foyer.lu>
2024-09-05 09:41:35 +03:00
dependabot[bot]
bb43c5a83d chore(deps-dev): bump @types/node from 22.5.3 to 22.5.4 in /ui-test (#19794) 2024-09-05 09:12:41 +03:00
rumstead
01874d64de fix(appset): allow for shorthand git refs in git generators #15427 (#19783)
* fix(appset): allow for shorthand git refs in git generators

Signed-off-by: rumstead <37445536+rumstead@users.noreply.github.com>

* Retrigger CI pipeline

Signed-off-by: rumstead <37445536+rumstead@users.noreply.github.com>

* attempt to fix goimports

Signed-off-by: rumstead <37445536+rumstead@users.noreply.github.com>

* attempt to fix goimports

Signed-off-by: rumstead <37445536+rumstead@users.noreply.github.com>

* remove redundant test

Signed-off-by: rumstead <37445536+rumstead@users.noreply.github.com>

---------

Signed-off-by: rumstead <37445536+rumstead@users.noreply.github.com>
2024-09-05 10:13:30 +05:30
Xiaopeng Han
aa2bafd812 fix(cli): admin settings rbac can has inconsistency among project resources (#17805)
* fix admin can inconsistency among resources

Signed-off-by: xiaopeng <hanxiaop8@outlook.com>

* revise logic and fix lint

Signed-off-by: xiaopeng <hanxiaop8@outlook.com>

---------

Signed-off-by: xiaopeng <hanxiaop8@outlook.com>
2024-09-05 10:07:13 +05:30
Anand Francis Joseph
d3fbeec825 Fixed go.mod to remove the replace construct added for gitops-engine (#19788)
Signed-off-by: anandf <anjoseph@redhat.com>
2024-09-05 02:16:50 +00:00
dependabot[bot]
63b6565079 chore(deps): bump google.golang.org/grpc from 1.65.0 to 1.66.0 (#19719)
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.65.0 to 1.66.0.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.65.0...v1.66.0)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-04 20:32:58 -04:00
sxt90128
ef41eebd10 fix: issue 19395 change delete icon color in dark mode (#19747)
* fix-19395 change delete icon color in dark mode

Signed-off-by: Esther Shen <xingtong.shen@fmr.com>

* fix-19395 revert formatting changes

Signed-off-by: Esther Shen <xingtong.shen@fmr.com>

* fix-19395 revert formatting changes

Signed-off-by: Esther Shen <xingtong.shen@fmr.com>

* fix-19395 revert formatting changes

Signed-off-by: Esther Shen <xingtong.shen@fmr.com>

---------

Signed-off-by: Esther Shen <xingtong.shen@fmr.com>
2024-09-04 20:24:40 -04:00
Dan Garfield
832fefb533 Add links to index page (#19786)
Signed-off-by: todaywasawesome <dan@codefresh.io>
2024-09-04 20:18:32 -04:00
Dan Garfield
9c47a709fb Fix local guide for building and testing docs (#19785)
Signed-off-by: todaywasawesome <dan@codefresh.io>
2024-09-04 20:13:53 -04:00
Anand Francis Joseph
1028808bb7 feat: Decoupling application sync using impersonation (#17403)
* Implementation of app sync with impersonation support

Signed-off-by: anandf <anjoseph@redhat.com>

* negation test

Signed-off-by: Mangaal <angommeeteimangaal@gmail.com>

* Update doc comments to remove server name as it's not supported.

Co-authored-by: Ishita Sequeira <46771830+ishitasequeira@users.noreply.github.com>
Signed-off-by: Anand Francis Joseph <anandfrancis.joseph@gmail.com>

* Update glob pattern check for matching destinations.

Co-authored-by: Ishita Sequeira <46771830+ishitasequeira@users.noreply.github.com>
Signed-off-by: Anand Francis Joseph <anandfrancis.joseph@gmail.com>

* Corrected the code comments for namespace field and destination matching logic

Signed-off-by: anandf <anjoseph@redhat.com>

* Added missing generated files

Signed-off-by: anandf <anjoseph@redhat.com>

* Fixed golint errors caused due to gofumpt validations

Signed-off-by: anandf <anjoseph@redhat.com>

* Fix golint errors with unit test code

Signed-off-by: anandf <anjoseph@redhat.com>

* Updated the go import ordering with local packages at the end

Signed-off-by: anandf <anjoseph@redhat.com>

* Addressed review comments

Signed-off-by: anandf <anjoseph@redhat.com>

* Fixed ES lint error caused due to missing class

Signed-off-by: anandf <anjoseph@redhat.com>

* Updated the documentation to address the review comments

Signed-off-by: anandf <anjoseph@redhat.com>

* Simplified the sync code and improved logs and error handling

Signed-off-by: anandf <anjoseph@redhat.com>

* Fixed E2E tests to fail when no sa is configured

Signed-off-by: anandf <anjoseph@redhat.com>

* Updated help message generated for CLI commands

Signed-off-by: anandf <anjoseph@redhat.com>

* Fixed failing tests due to default service account not used for sync operation

Signed-off-by: anandf <anjoseph@redhat.com>

* Fixed the error message when sync fails due to no matching sa

Signed-off-by: anandf <anjoseph@redhat.com>

* Removed repeating logs and added impersonation fields to logger

Signed-off-by: anandf <anjoseph@redhat.com>

* Made changes in the proposal to match the behaviour when no matching sa is found

Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>

---------

Signed-off-by: anandf <anjoseph@redhat.com>
Signed-off-by: Mangaal <angommeeteimangaal@gmail.com>
Signed-off-by: Anand Francis Joseph <anandfrancis.joseph@gmail.com>
Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>
Co-authored-by: Mangaal <angommeeteimangaal@gmail.com>
Co-authored-by: Ishita Sequeira <46771830+ishitasequeira@users.noreply.github.com>
2024-09-04 14:18:47 -04:00
Dan Garfield
f071fdcfa3 Update pygments to 2.15.1 (#19782)
Signed-off-by: todaywasawesome <dan@codefresh.io>
2024-09-04 18:14:37 +03:00
Cheng Fang
e3e02f0064 chore(lint): errors reported by golangci-lint: S1009: should omit nil check; printf: non-constant format string (#19773)
Signed-off-by: Cheng Fang <cfang@redhat.com>
Co-authored-by: pasha-codefresh <pavel@codefresh.io>
2024-09-04 14:58:15 +00:00
529 changed files with 15743 additions and 12094 deletions

View File

@@ -31,7 +31,7 @@ jobs:
docs: ${{ steps.filter.outputs.docs_any_changed }}
steps:
- uses: actions/checkout@8410ad0602e1e429cee44a835ae9f77f654a6694 # v4.0.0
- uses: tj-actions/changed-files@c65cd883420fd2eb864698a825fc4162dd94482c # v44.5.7
- uses: tj-actions/changed-files@e9772d140489982e0e3704fea5ee93d536f1e275 # v45.0.1
id: filter
with:
# Any file which is not under docs/, ui/ or is not a markdown file is counted as a backend file
@@ -81,7 +81,7 @@ jobs:
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Restore go build cache
uses: actions/cache@0c45773b623bea8c8e75f6c82b208c3cf94ea4f9 # v4.0.2
uses: actions/cache@1bd1e32a3bdc45362d1e726936510720a7c30a57 # v4.2.0
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
@@ -151,7 +151,7 @@ jobs:
run: |
echo "/usr/local/bin" >> $GITHUB_PATH
- name: Restore go build cache
uses: actions/cache@0c45773b623bea8c8e75f6c82b208c3cf94ea4f9 # v4.0.2
uses: actions/cache@1bd1e32a3bdc45362d1e726936510720a7c30a57 # v4.2.0
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
@@ -215,7 +215,7 @@ jobs:
run: |
echo "/usr/local/bin" >> $GITHUB_PATH
- name: Restore go build cache
uses: actions/cache@0c45773b623bea8c8e75f6c82b208c3cf94ea4f9 # v4.0.2
uses: actions/cache@1bd1e32a3bdc45362d1e726936510720a7c30a57 # v4.2.0
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
@@ -308,7 +308,7 @@ jobs:
node-version: '21.6.1'
- name: Restore node dependency cache
id: cache-dependencies
uses: actions/cache@0c45773b623bea8c8e75f6c82b208c3cf94ea4f9 # v4.0.2
uses: actions/cache@1bd1e32a3bdc45362d1e726936510720a7c30a57 # v4.2.0
with:
path: ui/node_modules
key: ${{ runner.os }}-node-dep-v2-${{ hashFiles('**/yarn.lock') }}
@@ -348,7 +348,7 @@ jobs:
fetch-depth: 0
- name: Restore node dependency cache
id: cache-dependencies
uses: actions/cache@0c45773b623bea8c8e75f6c82b208c3cf94ea4f9 # v4.0.2
uses: actions/cache@1bd1e32a3bdc45362d1e726936510720a7c30a57 # v4.2.0
with:
path: ui/node_modules
key: ${{ runner.os }}-node-dep-v2-${{ hashFiles('**/yarn.lock') }}
@@ -448,7 +448,7 @@ jobs:
sudo chmod go-r $HOME/.kube/config
kubectl version
- name: Restore go build cache
uses: actions/cache@0c45773b623bea8c8e75f6c82b208c3cf94ea4f9 # v4.0.2
uses: actions/cache@1bd1e32a3bdc45362d1e726936510720a7c30a57 # v4.2.0
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
@@ -539,4 +539,4 @@ jobs:
exit 0
else
exit 1
fi
fi

View File

@@ -64,7 +64,7 @@ jobs:
git stash pop
- name: Create pull request
uses: peter-evans/create-pull-request@4320041ed380b20e97d388d56a7fb4f9b8c20e79 # v7.0.0
uses: peter-evans/create-pull-request@d121e62763d8cc35b5fb1710e887d6e69a52d3a4 # v7.0.2
with:
commit-message: "Bump version to ${{ inputs.TARGET_VERSION }}"
title: "Bump version to ${{ inputs.TARGET_VERSION }} on ${{ inputs.TARGET_BRANCH }} branch"

View File

@@ -295,7 +295,7 @@ jobs:
if: ${{ env.UPDATE_VERSION == 'true' }}
- name: Create PR to update VERSION on master branch
uses: peter-evans/create-pull-request@4320041ed380b20e97d388d56a7fb4f9b8c20e79 # v7.0.0
uses: peter-evans/create-pull-request@d121e62763d8cc35b5fb1710e887d6e69a52d3a4 # v7.0.2
with:
commit-message: Bump version in master
title: "chore: Bump version in master"

.gitpod.Dockerfile (vendored): 2 changed lines
View File

@@ -1,4 +1,4 @@
FROM gitpod/workspace-full@sha256:fbff2dce4236535b96de0e94622bbe9a44fba954ca064862004c34e3e08904df
FROM gitpod/workspace-full@sha256:230285e0b949e6d728d384b2029a4111db7b9c87c182f22f32a0be9e36b225df
USER root

View File

@@ -43,6 +43,7 @@ packages:
ProjectGetter:
RbacEnforcer:
SettingsGetter:
UserGetter:
github.com/argoproj/argo-cd/v2/util/db:
interfaces:
ArgoDB:
@@ -65,4 +66,4 @@ packages:
SessionServiceClient:
github.com/argoproj/argo-cd/v2/pkg/apiclient/cluster:
interfaces:
ClusterServiceServer:
ClusterServiceServer:

View File

@@ -2,6 +2,7 @@ version: 2
formats: all
mkdocs:
fail_on_warning: false
configuration: mkdocs.yml
python:
install:
- requirements: docs/requirements.txt

View File

@@ -1,5 +1,10 @@
# Changelog
## v2.13.6 (2025-03-21)
### Bug fixes
- fix: handle annotated git tags correctly in repo server cache (#21548) (#21771)
## v2.4.8 (2022-07-29)
### Bug fixes

View File

@@ -4,7 +4,7 @@ ARG BASE_IMAGE=docker.io/library/ubuntu:24.04@sha256:3f85b7caad41a95462cf5b787d8
# Initial stage which pulls prepares build dependencies and CLI tooling we need for our final image
# Also used as the image in CI jobs so needs all dependencies
####################################################################################################
FROM docker.io/library/golang:1.22.6@sha256:2bd56f00ff47baf33e64eae7996b65846c7cb5e0a46e0a882ef179fd89654afa AS builder
FROM docker.io/library/golang:1.23.1@sha256:2fe82a3f3e006b4f2a316c6a21f62b66e1330ae211d039bb8d1128e12ed57bf1 AS builder
RUN echo 'deb http://archive.debian.org/debian buster-backports main' >> /etc/apt/sources.list
@@ -83,7 +83,7 @@ WORKDIR /home/argocd
####################################################################################################
# Argo CD UI stage
####################################################################################################
FROM --platform=$BUILDPLATFORM docker.io/library/node:22.8.0@sha256:8ec02324cb37718197de92e51677781be9f1345c709f31a1f44440c6036d24a2 AS argocd-ui
FROM --platform=$BUILDPLATFORM docker.io/library/node:22.8.0@sha256:bd00c03095f7586432805dbf7989be10361d27987f93de904b1fc003949a4794 AS argocd-ui
WORKDIR /src
COPY ["ui/package.json", "ui/yarn.lock", "./"]
@@ -101,7 +101,7 @@ RUN HOST_ARCH=$TARGETARCH NODE_ENV='production' NODE_ONLINE_ENV='online' NODE_OP
####################################################################################################
# Argo CD Build stage which performs the actual build of Argo CD binaries
####################################################################################################
FROM --platform=$BUILDPLATFORM docker.io/library/golang:1.22.6@sha256:2bd56f00ff47baf33e64eae7996b65846c7cb5e0a46e0a882ef179fd89654afa AS argocd-build
FROM --platform=$BUILDPLATFORM docker.io/library/golang:1.23.1@sha256:2fe82a3f3e006b4f2a316c6a21f62b66e1330ae211d039bb8d1128e12ed57bf1 AS argocd-build
WORKDIR /go/src/github.com/argoproj/argo-cd

View File

@@ -553,7 +553,7 @@ build-docs-local:
.PHONY: build-docs
build-docs:
$(DOCKER) run ${MKDOCS_RUN_ARGS} --rm -it -v ${CURRENT_DIR}:/docs -w /docs --entrypoint "" ${MKDOCS_DOCKER_IMAGE} sh -c 'pip install -r docs/requirements.txt; mkdocs build'
$(DOCKER) run ${MKDOCS_RUN_ARGS} --rm -it -v ${CURRENT_DIR}:/docs -w /docs --entrypoint "" ${MKDOCS_DOCKER_IMAGE} sh -c 'pip install mkdocs; pip install $$(mkdocs get-deps); mkdocs build'
.PHONY: serve-docs-local
serve-docs-local:
@@ -561,7 +561,7 @@ serve-docs-local:
.PHONY: serve-docs
serve-docs:
$(DOCKER) run ${MKDOCS_RUN_ARGS} --rm -it -p 8000:8000 -v ${CURRENT_DIR}:/docs -w /docs --entrypoint "" ${MKDOCS_DOCKER_IMAGE} sh -c 'pip install -r docs/requirements.txt; mkdocs serve -a $$(ip route get 1 | awk '\''{print $$7}'\''):8000'
$(DOCKER) run ${MKDOCS_RUN_ARGS} --rm -it -p 8000:8000 -v ${CURRENT_DIR}:/docs -w /docs --entrypoint "" ${MKDOCS_DOCKER_IMAGE} sh -c 'pip install mkdocs; pip install $$(mkdocs get-deps); mkdocs serve -a $$(ip route get 1 | awk '\''{print $$7}'\''):8000'
# Verify that kubectl can connect to your K8s cluster from Docker
.PHONY: verify-kube-connect

View File

@@ -11,6 +11,7 @@ Currently, the following organizations are **officially** using Argo CD:
1. [7shifts](https://www.7shifts.com/)
1. [Adevinta](https://www.adevinta.com/)
1. [Adfinis](https://adfinis.com)
1. [Adobe](https://www.adobe.com/)
1. [Adventure](https://jp.adventurekk.com/)
1. [Adyen](https://www.adyen.com)
1. [AirQo](https://airqo.net/)
@@ -29,6 +30,7 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Arctiq Inc.](https://www.arctiq.ca)
2. [Arturia](https://www.arturia.com)
1. [ARZ Allgemeines Rechenzentrum GmbH](https://www.arz.at/)
1. [Augury](https://www.augury.com/)
1. [Autodesk](https://www.autodesk.com)
1. [Axians ACSP](https://www.axians.fr)
1. [Axual B.V.](https://axual.com)
@@ -39,6 +41,7 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Beez Innovation Labs](https://www.beezlabs.com/)
1. [Bedag Informatik AG](https://www.bedag.ch/)
1. [Beleza Na Web](https://www.belezanaweb.com.br/)
1. [Believable Bots](https://believablebots.io)
1. [BigPanda](https://bigpanda.io)
1. [BioBox Analytics](https://biobox.io)
1. [BMW Group](https://www.bmwgroup.com/)
@@ -207,6 +210,7 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Moengage](https://www.moengage.com/)
1. [Money Forward](https://corp.moneyforward.com/en/)
1. [MOO Print](https://www.moo.com/)
1. [Mozilla](https://www.mozilla.org)
1. [MTN Group](https://www.mtn.com/)
1. [Municipality of The Hague](https://www.denhaag.nl/)
1. [My Job Glasses](https://myjobglasses.com)

View File

@@ -1 +1 @@
2.13.0
2.13.9

View File

@@ -18,6 +18,7 @@ import (
"context"
"fmt"
"reflect"
"sort"
"strings"
"time"
@@ -32,6 +33,7 @@ import (
"k8s.io/apimachinery/pkg/util/intstr"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/record"
"k8s.io/client-go/util/retry"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/builder"
"sigs.k8s.io/controller-runtime/pkg/client"
@@ -140,6 +142,7 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
// desiredApplications is the main list of all expected Applications from all generators in this appset.
desiredApplications, applicationSetReason, err := template.GenerateApplications(logCtx, applicationSetInfo, r.Generators, r.Renderer, r.Client)
if err != nil {
logCtx.Errorf("unable to generate applications: %v", err)
_ = r.setApplicationSetStatusCondition(ctx,
&applicationSetInfo,
argov1alpha1.ApplicationSetCondition{
@@ -149,7 +152,8 @@ func (r *ApplicationSetReconciler) Reconcile(ctx context.Context, req ctrl.Reque
Status: argov1alpha1.ApplicationSetConditionStatusTrue,
}, parametersGenerated,
)
return ctrl.Result{RequeueAfter: ReconcileRequeueOnValidationError}, err
// In order for the controller SDK to respect RequeueAfter, the error must be nil
return ctrl.Result{RequeueAfter: ReconcileRequeueOnValidationError}, nil
}
parametersGenerated = true
@@ -427,20 +431,29 @@ func (r *ApplicationSetReconciler) setApplicationSetStatusCondition(ctx context.
if needToUpdateConditions || len(applicationSet.Status.Conditions) < len(newConditions) {
// fetch updated Application Set object before updating it
namespacedName := types.NamespacedName{Namespace: applicationSet.Namespace, Name: applicationSet.Name}
if err := r.Get(ctx, namespacedName, applicationSet); err != nil {
if client.IgnoreNotFound(err) != nil {
return nil
// DefaultRetry will retry 5 times with a backoff factor of 1, jitter of 0.1 and a duration of 10ms
err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
namespacedName := types.NamespacedName{Namespace: applicationSet.Namespace, Name: applicationSet.Name}
updatedAppset := &argov1alpha1.ApplicationSet{}
if err := r.Get(ctx, namespacedName, updatedAppset); err != nil {
if client.IgnoreNotFound(err) != nil {
return nil
}
return fmt.Errorf("error fetching updated application set: %w", err)
}
return fmt.Errorf("error fetching updated application set: %w", err)
}
applicationSet.Status.SetConditions(
newConditions, evaluatedTypes,
)
updatedAppset.Status.SetConditions(
newConditions, evaluatedTypes,
)
// Update the newly fetched object with new set of conditions
err := r.Client.Status().Update(ctx, applicationSet)
// Update the newly fetched object with new set of conditions
err := r.Client.Status().Update(ctx, updatedAppset)
if err != nil {
return err
}
updatedAppset.DeepCopyInto(applicationSet)
return nil
})
if err != nil && !apierr.IsNotFound(err) {
return fmt.Errorf("unable to set application set condition: %w", err)
}
@@ -499,11 +512,9 @@ func (r *ApplicationSetReconciler) getMinRequeueAfter(applicationSetInfo *argov1
}
func ignoreNotAllowedNamespaces(namespaces []string) predicate.Predicate {
return predicate.Funcs{
CreateFunc: func(e event.CreateEvent) bool {
return utils.IsNamespaceAllowed(namespaces, e.Object.GetNamespace())
},
}
return predicate.NewPredicateFuncs(func(object client.Object) bool {
return utils.IsNamespaceAllowed(namespaces, object.GetNamespace())
})
}
func appControllerIndexer(rawObj client.Object) []string {
@@ -983,7 +994,7 @@ func appSyncEnabledForNextStep(appset *argov1alpha1.ApplicationSet, app argov1al
}
func progressiveSyncsRollingSyncStrategyEnabled(appset *argov1alpha1.ApplicationSet) bool {
return appset.Spec.Strategy != nil && appset.Spec.Strategy.RollingSync != nil && appset.Spec.Strategy.Type == "RollingSync"
return appset.Spec.Strategy != nil && appset.Spec.Strategy.RollingSync != nil && appset.Spec.Strategy.Type == "RollingSync" && len(appset.Spec.Strategy.RollingSync.Steps) > 0
}
func isApplicationHealthy(app argov1alpha1.Application) bool {
@@ -1006,6 +1017,16 @@ func statusStrings(app argov1alpha1.Application) (string, string, string) {
return healthStatusString, syncStatusString, operationPhaseString
}
func getAppStep(appName string, appStepMap map[string]int) int {
// if an application is not selected by any match expression, it defaults to step -1
step := -1
if appStep, ok := appStepMap[appName]; ok {
// 1-based indexing
step = appStep + 1
}
return step
}
// check the status of each Application's status and promote Applications to the next status if needed
func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx context.Context, logCtx *log.Entry, applicationSet *argov1alpha1.ApplicationSet, applications []argov1alpha1.Application, appStepMap map[string]int) ([]argov1alpha1.ApplicationSetApplicationStatus, error) {
now := metav1.Now()
@@ -1025,7 +1046,7 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx con
LastTransitionTime: &now,
Message: "No Application status found, defaulting status to Waiting.",
Status: "Waiting",
Step: fmt.Sprint(appStepMap[app.Name] + 1),
Step: fmt.Sprint(getAppStep(app.Name, appStepMap)),
TargetRevisions: app.Status.GetRevisions(),
}
} else {
@@ -1035,7 +1056,7 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx con
// upgrade any existing AppStatus that might have been set by an older argo-cd version
// note: currentAppStatus.TargetRevisions may be set to empty list earlier during migrations,
// to prevent other usage of r.Client.Status().Update to fail before reaching here.
if currentAppStatus.TargetRevisions == nil || len(currentAppStatus.TargetRevisions) == 0 {
if len(currentAppStatus.TargetRevisions) == 0 {
currentAppStatus.TargetRevisions = app.Status.GetRevisions()
}
}
@@ -1050,7 +1071,7 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx con
currentAppStatus.LastTransitionTime = &now
currentAppStatus.Status = "Waiting"
currentAppStatus.Message = "Application has pending changes, setting status to Waiting."
currentAppStatus.Step = fmt.Sprint(appStepMap[currentAppStatus.Application] + 1)
currentAppStatus.Step = fmt.Sprint(getAppStep(currentAppStatus.Application, appStepMap))
currentAppStatus.TargetRevisions = app.Status.GetRevisions()
}
@@ -1068,14 +1089,14 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx con
currentAppStatus.LastTransitionTime = &now
currentAppStatus.Status = "Progressing"
currentAppStatus.Message = "Application resource completed a sync successfully, updating status from Pending to Progressing."
currentAppStatus.Step = fmt.Sprint(appStepMap[currentAppStatus.Application] + 1)
currentAppStatus.Step = fmt.Sprint(getAppStep(currentAppStatus.Application, appStepMap))
}
} else if operationPhaseString == "Running" || healthStatusString == "Progressing" {
logCtx.Infof("Application %v has entered Progressing status, updating its ApplicationSet status to Progressing", app.Name)
currentAppStatus.LastTransitionTime = &now
currentAppStatus.Status = "Progressing"
currentAppStatus.Message = "Application resource became Progressing, updating status from Pending to Progressing."
currentAppStatus.Step = fmt.Sprint(appStepMap[currentAppStatus.Application] + 1)
currentAppStatus.Step = fmt.Sprint(getAppStep(currentAppStatus.Application, appStepMap))
}
}
@@ -1084,7 +1105,7 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx con
currentAppStatus.LastTransitionTime = &now
currentAppStatus.Status = healthStatusString
currentAppStatus.Message = "Application resource is already Healthy, updating status from Waiting to Healthy."
currentAppStatus.Step = fmt.Sprint(appStepMap[currentAppStatus.Application] + 1)
currentAppStatus.Step = fmt.Sprint(getAppStep(currentAppStatus.Application, appStepMap))
}
if currentAppStatus.Status == "Progressing" && isApplicationHealthy(app) {
@@ -1092,7 +1113,7 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx con
currentAppStatus.LastTransitionTime = &now
currentAppStatus.Status = healthStatusString
currentAppStatus.Message = "Application resource became Healthy, updating status from Progressing to Healthy."
currentAppStatus.Step = fmt.Sprint(appStepMap[currentAppStatus.Application] + 1)
currentAppStatus.Step = fmt.Sprint(getAppStep(currentAppStatus.Application, appStepMap))
}
appStatuses = append(appStatuses, currentAppStatus)
@@ -1113,14 +1134,12 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusProgress
appStatuses := make([]argov1alpha1.ApplicationSetApplicationStatus, 0, len(applicationSet.Status.ApplicationStatus))
// if we have no RollingUpdate steps, clear out the existing ApplicationStatus entries
if applicationSet.Spec.Strategy != nil && applicationSet.Spec.Strategy.Type != "" && applicationSet.Spec.Strategy.Type != "AllAtOnce" {
if progressiveSyncsRollingSyncStrategyEnabled(applicationSet) {
updateCountMap := []int{}
totalCountMap := []int{}
length := 0
if progressiveSyncsRollingSyncStrategyEnabled(applicationSet) {
length = len(applicationSet.Spec.Strategy.RollingSync.Steps)
}
length := len(applicationSet.Spec.Strategy.RollingSync.Steps)
for s := 0; s < length; s++ {
updateCountMap = append(updateCountMap, 0)
totalCountMap = append(totalCountMap, 0)
@@ -1130,10 +1149,8 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusProgress
for _, appStatus := range applicationSet.Status.ApplicationStatus {
totalCountMap[appStepMap[appStatus.Application]] += 1
if progressiveSyncsRollingSyncStrategyEnabled(applicationSet) {
if appStatus.Status == "Pending" || appStatus.Status == "Progressing" {
updateCountMap[appStepMap[appStatus.Application]] += 1
}
if appStatus.Status == "Pending" || appStatus.Status == "Progressing" {
updateCountMap[appStepMap[appStatus.Application]] += 1
}
}
@@ -1158,7 +1175,7 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusProgress
if updateCountMap[appStepMap[appStatus.Application]] >= maxUpdateVal {
maxUpdateAllowed = false
logCtx.Infof("Application %v is not allowed to update yet, %v/%v Applications already updating in step %v in AppSet %v", appStatus.Application, updateCountMap[appStepMap[appStatus.Application]], maxUpdateVal, appStepMap[appStatus.Application]+1, applicationSet.Name)
logCtx.Infof("Application %v is not allowed to update yet, %v/%v Applications already updating in step %v in AppSet %v", appStatus.Application, updateCountMap[appStepMap[appStatus.Application]], maxUpdateVal, getAppStep(appStatus.Application, appStepMap), applicationSet.Name)
}
}
@@ -1167,7 +1184,7 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatusProgress
appStatus.LastTransitionTime = &now
appStatus.Status = "Pending"
appStatus.Message = "Application moved to Pending status, watching for the Application resource to start Progressing."
appStatus.Step = fmt.Sprint(appStepMap[appStatus.Application] + 1)
appStatus.Step = fmt.Sprint(getAppStep(appStatus.Application, appStepMap))
updateCountMap[appStepMap[appStatus.Application]] += 1
}
@@ -1249,15 +1266,29 @@ func (r *ApplicationSetReconciler) migrateStatus(ctx context.Context, appset *ar
}
if update {
namespacedName := types.NamespacedName{Namespace: appset.Namespace, Name: appset.Name}
if err := r.Client.Status().Update(ctx, appset); err != nil {
return fmt.Errorf("unable to set application set status: %w", err)
}
if err := r.Get(ctx, namespacedName, appset); err != nil {
if client.IgnoreNotFound(err) != nil {
return nil
// DefaultRetry will retry 5 times with a backoff factor of 1, jitter of 0.1 and a duration of 10ms
err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
namespacedName := types.NamespacedName{Namespace: appset.Namespace, Name: appset.Name}
updatedAppset := &argov1alpha1.ApplicationSet{}
if err := r.Get(ctx, namespacedName, updatedAppset); err != nil {
if client.IgnoreNotFound(err) != nil {
return nil
}
return fmt.Errorf("error fetching updated application set: %w", err)
}
return fmt.Errorf("error fetching updated application set: %w", err)
updatedAppset.Status.ApplicationStatus = appset.Status.ApplicationStatus
// Update the newly fetched object with new set of ApplicationStatus
err := r.Client.Status().Update(ctx, updatedAppset)
if err != nil {
return err
}
updatedAppset.DeepCopyInto(appset)
return nil
})
if err != nil && !apierr.IsNotFound(err) {
return fmt.Errorf("unable to set application set condition: %w", err)
}
}
return nil
@@ -1271,22 +1302,35 @@ func (r *ApplicationSetReconciler) updateResourcesStatus(ctx context.Context, lo
for _, status := range statusMap {
statuses = append(statuses, status)
}
sort.Slice(statuses, func(i, j int) bool {
return statuses[i].Name < statuses[j].Name
})
appset.Status.Resources = statuses
// DefaultRetry will retry 5 times with a backoff factor of 1, jitter of 0.1 and a duration of 10ms
err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
namespacedName := types.NamespacedName{Namespace: appset.Namespace, Name: appset.Name}
updatedAppset := &argov1alpha1.ApplicationSet{}
if err := r.Get(ctx, namespacedName, updatedAppset); err != nil {
if client.IgnoreNotFound(err) != nil {
return nil
}
return fmt.Errorf("error fetching updated application set: %w", err)
}
namespacedName := types.NamespacedName{Namespace: appset.Namespace, Name: appset.Name}
err := r.Client.Status().Update(ctx, appset)
updatedAppset.Status.Resources = appset.Status.Resources
// Update the newly fetched object with new status resources
err := r.Client.Status().Update(ctx, updatedAppset)
if err != nil {
return err
}
updatedAppset.DeepCopyInto(appset)
return nil
})
if err != nil {
logCtx.Errorf("unable to set application set status: %v", err)
return fmt.Errorf("unable to set application set status: %w", err)
}
if err := r.Get(ctx, namespacedName, appset); err != nil {
if client.IgnoreNotFound(err) != nil {
return nil
}
return fmt.Errorf("error fetching updated application set: %w", err)
}
return nil
}
@@ -1321,20 +1365,30 @@ func (r *ApplicationSetReconciler) setAppSetApplicationStatus(ctx context.Contex
for i := range applicationStatuses {
applicationSet.Status.SetApplicationStatus(applicationStatuses[i])
}
// DefaultRetry will retry 5 times with a backoff factor of 1, jitter of 0.1 and a duration of 10ms
err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
updatedAppset := &argov1alpha1.ApplicationSet{}
if err := r.Get(ctx, namespacedName, updatedAppset); err != nil {
if client.IgnoreNotFound(err) != nil {
return nil
}
return fmt.Errorf("error fetching updated application set: %w", err)
}
// Update the newly fetched object with new set of ApplicationStatus
err := r.Client.Status().Update(ctx, applicationSet)
updatedAppset.Status.ApplicationStatus = applicationSet.Status.ApplicationStatus
// Update the newly fetched object with new set of ApplicationStatus
err := r.Client.Status().Update(ctx, updatedAppset)
if err != nil {
return err
}
updatedAppset.DeepCopyInto(applicationSet)
return nil
})
if err != nil {
logCtx.Errorf("unable to set application set status: %v", err)
return fmt.Errorf("unable to set application set status: %w", err)
}
if err := r.Get(ctx, namespacedName, applicationSet); err != nil {
if client.IgnoreNotFound(err) != nil {
return nil
}
return fmt.Errorf("error fetching updated application set: %w", err)
}
}
return nil

File diff suppressed because it is too large

View File

@@ -14,7 +14,7 @@ spec:
app: guestbook-ui
spec:
containers:
- image: gcr.io/heptio-images/ks-guestbook-demo:0.2
- image: quay.io/argoprojlabs/argocd-e2e-container:0.2
name: guestbook-ui
ports:
- containerPort: 80

View File

@@ -14,7 +14,7 @@ spec:
app: guestbook-ui
spec:
containers:
- image: gcr.io/heptio-images/ks-guestbook-demo:0.2
- image: quay.io/argoprojlabs/argocd-e2e-container:0.2
name: guestbook-ui
ports:
- containerPort: 80

View File

@@ -5,7 +5,7 @@
replicaCount: 1
image:
repository: gcr.io/heptio-images/ks-guestbook-demo
repository: quay.io/argoprojlabs/argocd-e2e-container
tag: 0.1
pullPolicy: IfNotPresent

View File

@@ -14,7 +14,7 @@ spec:
app: guestbook-ui
spec:
containers:
- image: gcr.io/heptio-images/ks-guestbook-demo:0.2
- image: quay.io/argoprojlabs/argocd-e2e-container:0.2
name: guestbook-ui
ports:
- containerPort: 80

View File

@@ -14,7 +14,7 @@ spec:
app: guestbook-ui
spec:
containers:
- image: gcr.io/heptio-images/ks-guestbook-demo:0.2
- image: quay.io/argoprojlabs/argocd-e2e-container:0.2
name: guestbook-ui
ports:
- containerPort: 80

View File

@@ -14,7 +14,7 @@ spec:
app: guestbook-ui
spec:
containers:
- image: gcr.io/heptio-images/ks-guestbook-demo:0.2
- image: quay.io/argoprojlabs/argocd-e2e-container:0.2
name: guestbook-ui
ports:
- containerPort: 80

View File

@@ -14,7 +14,7 @@ spec:
app: guestbook-ui
spec:
containers:
- image: gcr.io/heptio-images/ks-guestbook-demo:0.2
- image: quay.io/argoprojlabs/argocd-e2e-container:0.2
name: guestbook-ui
ports:
- containerPort: 80

View File

@@ -14,7 +14,7 @@ spec:
app: guestbook-ui
spec:
containers:
- image: gcr.io/heptio-images/ks-guestbook-demo:0.2
- image: quay.io/argoprojlabs/argocd-e2e-container:0.2
name: guestbook-ui
ports:
- containerPort: 80

View File

@@ -218,7 +218,7 @@ func (g *DuckTypeGenerator) GenerateParams(appSetGenerator *argoprojiov1alpha1.A
res = append(res, params)
}
} else {
log.Warningf("clusterDecisionResource status." + statusListKey + " missing")
log.Warningf("clusterDecisionResource status.%s missing", statusListKey)
return nil, nil
}

View File

@@ -78,7 +78,7 @@ func (g *GitGenerator) GenerateParams(appSetGenerator *argoprojiov1alpha1.Applic
return nil, fmt.Errorf("error getting project %s: %w", project, err)
}
// we need to verify the signature on the Git revision if GPG is enabled
verifyCommit = appProject.Spec.SignatureKeys != nil && len(appProject.Spec.SignatureKeys) > 0 && gpg.IsGPGEnabled()
verifyCommit = len(appProject.Spec.SignatureKeys) > 0 && gpg.IsGPGEnabled()
}
var err error

View File

@@ -694,7 +694,7 @@ func TestPluginGenerateParams(t *testing.T) {
require.NoError(t, err)
gotJson, err := json.Marshal(got)
require.NoError(t, err)
assert.Equal(t, string(expectedJson), string(gotJson))
assert.JSONEq(t, string(expectedJson), string(gotJson))
}
})
}

View File

@@ -168,7 +168,7 @@ func (g *PullRequestGenerator) selectServiceProvider(ctx context.Context, genera
if err != nil {
return nil, fmt.Errorf("error fetching Secret Bearer token: %w", err)
}
return pullrequest.NewBitbucketServiceBearerToken(ctx, providerConfig.API, appToken, providerConfig.Project, providerConfig.Repo, g.scmRootCAPath, providerConfig.Insecure, caCerts)
return pullrequest.NewBitbucketServiceBearerToken(ctx, appToken, providerConfig.API, providerConfig.Project, providerConfig.Repo, g.scmRootCAPath, providerConfig.Insecure, caCerts)
} else if providerConfig.BasicAuth != nil {
password, err := utils.GetSecretRef(ctx, g.client, providerConfig.BasicAuth.PasswordRef, applicationSetInfo.Namespace)
if err != nil {

View File

@@ -19,7 +19,7 @@ type BitbucketCloudPullRequest struct {
ID int `json:"id"`
Title string `json:"title"`
Source BitbucketCloudPullRequestSource `json:"source"`
Author string `json:"author"`
Author BitbucketCloudPullRequestAuthor `json:"author"`
}
type BitbucketCloudPullRequestSource struct {
@@ -35,6 +35,11 @@ type BitbucketCloudPullRequestSourceCommit struct {
Hash string `json:"hash"`
}
// Also have display_name and uuid, but don't plan to use them.
type BitbucketCloudPullRequestAuthor struct {
Nickname string `json:"nickname"`
}
type PullRequestResponse struct {
Page int32 `json:"page"`
Size int32 `json:"size"`
@@ -134,7 +139,7 @@ func (b *BitbucketCloudService) List(_ context.Context) ([]*PullRequest, error)
Title: pull.Title,
Branch: pull.Source.Branch.Name,
HeadSHA: pull.Source.Commit.Hash,
Author: pull.Author,
Author: pull.Author.Nickname,
})
}

View File

@@ -37,7 +37,9 @@ func defaultHandlerCloud(t *testing.T) func(http.ResponseWriter, *http.Request)
"hash": "1a8dd249c04a"
}
},
"author": "testName"
"author": {
"nickname": "testName"
}
}
]
}`)
@@ -154,7 +156,9 @@ func TestListPullRequestPaginationCloud(t *testing.T) {
"hash": "1a8dd249c04a"
}
},
"author": "testName"
"author": {
"nickname": "testName"
}
},
{
"id": 102,
@@ -168,7 +172,9 @@ func TestListPullRequestPaginationCloud(t *testing.T) {
"hash": "4cf807e67a6d"
}
},
"author": "testName"
"author": {
"nickname": "testName"
}
}
]
}`, r.Host))
@@ -191,7 +197,9 @@ func TestListPullRequestPaginationCloud(t *testing.T) {
"hash": "6344d9623e3b"
}
},
"author": "testName"
"author": {
"nickname": "testName"
}
}
]
}`, r.Host))
@@ -339,7 +347,9 @@ func TestListPullRequestBranchMatchCloud(t *testing.T) {
"hash": "1a8dd249c04a"
}
},
"author": "testName"
"author": {
"nickname": "testName"
}
},
{
"id": 200,
@@ -353,7 +363,9 @@ func TestListPullRequestBranchMatchCloud(t *testing.T) {
"hash": "4cf807e67a6d"
}
},
"author": "testName"
"author": {
"nickname": "testName"
}
}
]
}`, r.Host))
@@ -376,7 +388,9 @@ func TestListPullRequestBranchMatchCloud(t *testing.T) {
"hash": "6344d9623e3b"
}
},
"author": "testName"
"author": {
"nickname": "testName"
}
}
]
}`, r.Host))

View File

@@ -46,7 +46,7 @@ func (c *ExtendedClient) GetContents(repo *Repository, path string) (bool, error
return true, nil
}
return false, fmt.Errorf(resp.Status)
return false, fmt.Errorf("%s", resp.Status)
}
var _ SCMProviderService = &BitBucketCloudProvider{}

View File

@@ -51,9 +51,12 @@ const (
// if we used destination name we infer the server url
// if we used both name and server then we return an invalid spec error
func ValidateDestination(ctx context.Context, dest *appv1.ApplicationDestination, clientset kubernetes.Interface, argoCDNamespace string) error {
if dest.IsServerInferred() && dest.IsNameInferred() {
return fmt.Errorf("application destination can't have both name and server inferred: %s %s", dest.Name, dest.Server)
}
if dest.Name != "" {
if dest.Server == "" {
server, err := getDestinationServer(ctx, dest.Name, clientset, argoCDNamespace)
server, err := getDestinationBy(ctx, dest.Name, clientset, argoCDNamespace, true)
if err != nil {
return fmt.Errorf("unable to find destination server: %w", err)
}
@@ -61,14 +64,25 @@ func ValidateDestination(ctx context.Context, dest *appv1.ApplicationDestination
return fmt.Errorf("application references destination cluster %s which does not exist", dest.Name)
}
dest.SetInferredServer(server)
} else if !dest.IsServerInferred() {
} else if !dest.IsServerInferred() && !dest.IsNameInferred() {
return fmt.Errorf("application destination can't have both name and server defined: %s %s", dest.Name, dest.Server)
}
} else if dest.Server != "" {
if dest.Name == "" {
serverName, err := getDestinationBy(ctx, dest.Server, clientset, argoCDNamespace, false)
if err != nil {
return fmt.Errorf("unable to find destination server: %w", err)
}
if serverName == "" {
return fmt.Errorf("application references destination cluster %s which does not exist", dest.Server)
}
dest.SetInferredName(serverName)
}
}
return nil
}
func getDestinationServer(ctx context.Context, clusterName string, clientset kubernetes.Interface, argoCDNamespace string) (string, error) {
func getDestinationBy(ctx context.Context, cluster string, clientset kubernetes.Interface, argoCDNamespace string, byName bool) (string, error) {
// settingsMgr := settings.NewSettingsManager(context.TODO(), clientset, namespace)
// argoDB := db.NewDB(namespace, settingsMgr, clientset)
// clusterList, err := argoDB.ListClusters(ctx)
@@ -78,14 +92,17 @@ func getDestinationServer(ctx context.Context, clusterName string, clientset kub
}
var servers []string
for _, c := range clusterList.Items {
if c.Name == clusterName {
if byName && c.Name == cluster {
servers = append(servers, c.Server)
}
if !byName && c.Server == cluster {
servers = append(servers, c.Name)
}
}
if len(servers) > 1 {
return "", fmt.Errorf("there are %d clusters with the same name: %v", len(servers), servers)
} else if len(servers) == 0 {
return "", fmt.Errorf("there are no clusters with this name: %s", clusterName)
return "", fmt.Errorf("there are no clusters with this name: %s", cluster)
}
return servers[0], nil
}

View File

@@ -92,7 +92,12 @@ func TestValidateDestination(t *testing.T) {
Namespace: "default",
}
appCond := ValidateDestination(context.Background(), &dest, nil, fakeNamespace)
secret := createClusterSecret("my-secret", "minikube", "https://127.0.0.1:6443")
objects := []runtime.Object{}
objects = append(objects, secret)
kubeclientset := fake.NewSimpleClientset(objects...)
appCond := ValidateDestination(context.Background(), &dest, kubeclientset, fakeNamespace)
require.NoError(t, appCond)
assert.False(t, dest.IsServerInferred())
})

View File

@@ -229,7 +229,7 @@ spec:
require.NoError(t, err)
yamlExpected, err := yaml.Marshal(tc.expectedApp)
require.NoError(t, err)
assert.Equal(t, string(yamlExpected), string(yamlFound))
assert.YAMLEq(t, string(yamlExpected), string(yamlFound))
})
}
}

View File

@@ -273,7 +273,7 @@ func (r *Render) RenderTemplateParams(tmpl *argoappsv1.Application, syncPolicy *
// b) there IS a syncPolicy, but preserveResourcesOnDeletion is set to false
// See TestRenderTemplateParamsFinalizers in util_test.go for test-based definition of behaviour
if (syncPolicy == nil || !syncPolicy.PreserveResourcesOnDeletion) &&
(replacedTmpl.ObjectMeta.Finalizers == nil || len(replacedTmpl.ObjectMeta.Finalizers) == 0) {
len(replacedTmpl.ObjectMeta.Finalizers) == 0 {
replacedTmpl.ObjectMeta.Finalizers = []string{"resources-finalizer.argocd.argoproj.io"}
}

View File

@@ -0,0 +1,186 @@
{
"ref": "refs/heads/env/dev",
"before": "d5c1ffa8e294bc18c639bfb4e0df499251034414",
"after": "63738bb582c8b540af7bcfc18f87c575c3ed66e0",
"created": false,
"deleted": false,
"forced": true,
"base_ref": null,
"compare": "https://github.com/org/repo/compare/d5c1ffa8e294...63738bb582c8",
"commits": [
{
"id": "63738bb582c8b540af7bcfc18f87c575c3ed66e0",
"tree_id": "64897da445207e409ad05af93b1f349ad0a4ee19",
"distinct": true,
"message": "Add staging-argocd-demo environment",
"timestamp": "2018-05-04T15:40:02-07:00",
"url": "https://github.com/org/repo/commit/63738bb582c8b540af7bcfc18f87c575c3ed66e0",
"author": {
"name": "Jesse Suen",
"email": "Jesse_Suen@example.com",
"username": "org"
},
"committer": {
"name": "Jesse Suen",
"email": "Jesse_Suen@example.com",
"username": "org"
},
"added": [
"ksapps/test-app/environments/staging-argocd-demo/main.jsonnet",
"ksapps/test-app/environments/staging-argocd-demo/params.libsonnet"
],
"removed": [
],
"modified": [
"ksapps/test-app/app.yaml"
]
}
],
"head_commit": {
"id": "63738bb582c8b540af7bcfc18f87c575c3ed66e0",
"tree_id": "64897da445207e409ad05af93b1f349ad0a4ee19",
"distinct": true,
"message": "Add staging-argocd-demo environment",
"timestamp": "2018-05-04T15:40:02-07:00",
"url": "https://github.com/org/repo/commit/63738bb582c8b540af7bcfc18f87c575c3ed66e0",
"author": {
"name": "Jesse Suen",
"email": "Jesse_Suen@example.com",
"username": "org"
},
"committer": {
"name": "Jesse Suen",
"email": "Jesse_Suen@example.com",
"username": "org"
},
"added": [
"ksapps/test-app/environments/staging-argocd-demo/main.jsonnet",
"ksapps/test-app/environments/staging-argocd-demo/params.libsonnet"
],
"removed": [
],
"modified": [
"ksapps/test-app/app.yaml"
]
},
"repository": {
"id": 123060978,
"name": "repo",
"full_name": "org/repo",
"owner": {
"name": "org",
"email": "org@users.noreply.github.com",
"login": "org",
"id": 12677113,
"avatar_url": "https://avatars0.githubusercontent.com/u/12677113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/org",
"html_url": "https://github.com/org",
"followers_url": "https://api.github.com/users/org/followers",
"following_url": "https://api.github.com/users/org/following{/other_user}",
"gists_url": "https://api.github.com/users/org/gists{/gist_id}",
"starred_url": "https://api.github.com/users/org/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/org/subscriptions",
"organizations_url": "https://api.github.com/users/org/orgs",
"repos_url": "https://api.github.com/users/org/repos",
"events_url": "https://api.github.com/users/org/events{/privacy}",
"received_events_url": "https://api.github.com/users/org/received_events",
"type": "User",
"site_admin": false
},
"private": false,
"html_url": "https://github.com/org/repo",
"description": "Test Repository",
"fork": false,
"url": "https://github.com/org/repo",
"forks_url": "https://api.github.com/repos/org/repo/forks",
"keys_url": "https://api.github.com/repos/org/repo/keys{/key_id}",
"collaborators_url": "https://api.github.com/repos/org/repo/collaborators{/collaborator}",
"teams_url": "https://api.github.com/repos/org/repo/teams",
"hooks_url": "https://api.github.com/repos/org/repo/hooks",
"issue_events_url": "https://api.github.com/repos/org/repo/issues/events{/number}",
"events_url": "https://api.github.com/repos/org/repo/events",
"assignees_url": "https://api.github.com/repos/org/repo/assignees{/user}",
"branches_url": "https://api.github.com/repos/org/repo/branches{/branch}",
"tags_url": "https://api.github.com/repos/org/repo/tags",
"blobs_url": "https://api.github.com/repos/org/repo/git/blobs{/sha}",
"git_tags_url": "https://api.github.com/repos/org/repo/git/tags{/sha}",
"git_refs_url": "https://api.github.com/repos/org/repo/git/refs{/sha}",
"trees_url": "https://api.github.com/repos/org/repo/git/trees{/sha}",
"statuses_url": "https://api.github.com/repos/org/repo/statuses/{sha}",
"languages_url": "https://api.github.com/repos/org/repo/languages",
"stargazers_url": "https://api.github.com/repos/org/repo/stargazers",
"contributors_url": "https://api.github.com/repos/org/repo/contributors",
"subscribers_url": "https://api.github.com/repos/org/repo/subscribers",
"subscription_url": "https://api.github.com/repos/org/repo/subscription",
"commits_url": "https://api.github.com/repos/org/repo/commits{/sha}",
"git_commits_url": "https://api.github.com/repos/org/repo/git/commits{/sha}",
"comments_url": "https://api.github.com/repos/org/repo/comments{/number}",
"issue_comment_url": "https://api.github.com/repos/org/repo/issues/comments{/number}",
"contents_url": "https://api.github.com/repos/org/repo/contents/{+path}",
"compare_url": "https://api.github.com/repos/org/repo/compare/{base}...{head}",
"merges_url": "https://api.github.com/repos/org/repo/merges",
"archive_url": "https://api.github.com/repos/org/repo/{archive_format}{/ref}",
"downloads_url": "https://api.github.com/repos/org/repo/downloads",
"issues_url": "https://api.github.com/repos/org/repo/issues{/number}",
"pulls_url": "https://api.github.com/repos/org/repo/pulls{/number}",
"milestones_url": "https://api.github.com/repos/org/repo/milestones{/number}",
"notifications_url": "https://api.github.com/repos/org/repo/notifications{?since,all,participating}",
"labels_url": "https://api.github.com/repos/org/repo/labels{/name}",
"releases_url": "https://api.github.com/repos/org/repo/releases{/id}",
"deployments_url": "https://api.github.com/repos/org/repo/deployments",
"created_at": 1519698615,
"updated_at": "2018-05-04T22:37:55Z",
"pushed_at": 1525473610,
"git_url": "git://github.com/org/repo.git",
"ssh_url": "git@github.com:org/repo.git",
"clone_url": "https://github.com/org/repo.git",
"svn_url": "https://github.com/org/repo",
"homepage": null,
"size": 538,
"stargazers_count": 0,
"watchers_count": 0,
"language": null,
"has_issues": true,
"has_projects": true,
"has_downloads": true,
"has_wiki": true,
"has_pages": false,
"forks_count": 1,
"mirror_url": null,
"archived": false,
"open_issues_count": 0,
"license": null,
"forks": 1,
"open_issues": 0,
"watchers": 0,
"default_branch": "master",
"stargazers": 0,
"master_branch": "master"
},
"pusher": {
"name": "org",
"email": "org@users.noreply.github.com"
},
"sender": {
"login": "org",
"id": 12677113,
"avatar_url": "https://avatars0.githubusercontent.com/u/12677113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/org",
"html_url": "https://github.com/org",
"followers_url": "https://api.github.com/users/org/followers",
"following_url": "https://api.github.com/users/org/following{/other_user}",
"gists_url": "https://api.github.com/users/org/gists{/gist_id}",
"starred_url": "https://api.github.com/users/org/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/org/subscriptions",
"organizations_url": "https://api.github.com/users/org/orgs",
"repos_url": "https://api.github.com/users/org/repos",
"events_url": "https://api.github.com/users/org/events{/privacy}",
"received_events_url": "https://api.github.com/users/org/received_events",
"type": "User",
"site_admin": false
}
}


@@ -19,6 +19,7 @@ import (
"github.com/argoproj/argo-cd/v2/common"
"github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
argosettings "github.com/argoproj/argo-cd/v2/util/settings"
"github.com/argoproj/argo-cd/v2/util/webhook"
"github.com/go-playground/webhooks/v6/azuredevops"
"github.com/go-playground/webhooks/v6/github"
@@ -190,11 +191,6 @@ func (h *WebhookHandler) Handler(w http.ResponseWriter, r *http.Request) {
}
}
func parseRevision(ref string) string {
refParts := strings.SplitN(ref, "/", 3)
return refParts[len(refParts)-1]
}
func getGitGeneratorInfo(payload interface{}) *gitGeneratorInfo {
var (
webURL string
@@ -204,16 +200,16 @@ func getGitGeneratorInfo(payload interface{}) *gitGeneratorInfo {
switch payload := payload.(type) {
case github.PushPayload:
webURL = payload.Repository.HTMLURL
revision = parseRevision(payload.Ref)
revision = webhook.ParseRevision(payload.Ref)
touchedHead = payload.Repository.DefaultBranch == revision
case gitlab.PushEventPayload:
webURL = payload.Project.WebURL
revision = parseRevision(payload.Ref)
revision = webhook.ParseRevision(payload.Ref)
touchedHead = payload.Project.DefaultBranch == revision
case azuredevops.GitPushEvent:
// See: https://learn.microsoft.com/en-us/azure/devops/service-hooks/events?view=azure-devops#git.push
webURL = payload.Resource.Repository.RemoteURL
revision = parseRevision(payload.Resource.RefUpdates[0].Name)
revision = webhook.ParseRevision(payload.Resource.RefUpdates[0].Name)
touchedHead = payload.Resource.RefUpdates[0].Name == payload.Resource.Repository.DefaultBranch
// unfortunately, Azure DevOps doesn't provide a list of changed files
default:
@@ -373,12 +369,12 @@ func shouldRefreshPluginGenerator(gen *v1alpha1.PluginGenerator) bool {
}
func genRevisionHasChanged(gen *v1alpha1.GitGenerator, revision string, touchedHead bool) bool {
targetRev := parseRevision(gen.Revision)
targetRev := webhook.ParseRevision(gen.Revision)
if targetRev == "HEAD" || targetRev == "" { // revision is head
return touchedHead
}
return targetRev == revision
return targetRev == revision || gen.Revision == revision
}
func gitGeneratorUsesURL(gen *v1alpha1.GitGenerator, webURL string, repoRegexp *regexp.Regexp) bool {
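
The local parseRevision helper is replaced by the shared webhook.ParseRevision, and the added gen.Revision == revision clause lets a generator configured with a shorthand such as env/dev match a push to refs/heads/env/dev. Assuming the shared helper behaves like the one removed above (strings.SplitN on "/", keep the last part), the reduction looks like this sketch:

package main

import (
    "fmt"
    "strings"
)

// parseRevision mirrors the helper removed in this hunk: keep everything after
// the second "/", so "refs/heads/env/dev" becomes "env/dev".
func parseRevision(ref string) string {
    parts := strings.SplitN(ref, "/", 3)
    return parts[len(parts)-1]
}

func main() {
    fmt.Println(parseRevision("refs/heads/env/dev")) // env/dev
    fmt.Println(parseRevision("refs/tags/v3.14.1"))  // v3.14.1
    fmt.Println(parseRevision("main"))               // main (nothing to strip)
}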


@@ -67,6 +67,15 @@ func TestWebhookHandler(t *testing.T) {
expectedStatusCode: http.StatusOK,
expectedRefresh: true,
},
{
desc: "WebHook from a GitHub repository via Commit shorthand",
headerKey: "X-GitHub-Event",
headerValue: "push",
payloadFile: "github-commit-event-feature-branch.json",
effectedAppSets: []string{"github-shorthand", "matrix-pull-request-github-plugin", "plugin"},
expectedStatusCode: http.StatusOK,
expectedRefresh: true,
},
{
desc: "WebHook from a GitHub repository via Commit to branch",
headerKey: "X-GitHub-Event",
@@ -192,6 +201,7 @@ func TestWebhookHandler(t *testing.T) {
fakeAppWithGitGenerator("git-github", namespace, "https://github.com/org/repo"),
fakeAppWithGitGenerator("git-gitlab", namespace, "https://gitlab/group/name"),
fakeAppWithGitGenerator("git-azure-devops", namespace, "https://dev.azure.com/fabrikam-fiber-inc/DefaultCollection/_git/Fabrikam-Fiber-Git"),
fakeAppWithGitGeneratorWithRevision("github-shorthand", namespace, "https://github.com/org/repo", "env/dev"),
fakeAppWithGithubPullRequestGenerator("pull-request-github", namespace, "CodErTOcat", "Hello-World"),
fakeAppWithGitlabPullRequestGenerator("pull-request-gitlab", namespace, "100500"),
fakeAppWithAzureDevOpsPullRequestGenerator("pull-request-azure-devops", namespace, "DefaultCollection", "Fabrikam"),
@@ -302,14 +312,62 @@ func mockGenerators() map[string]generators.Generator {
}
func TestGenRevisionHasChanged(t *testing.T) {
assert.True(t, genRevisionHasChanged(&v1alpha1.GitGenerator{}, "master", true))
assert.False(t, genRevisionHasChanged(&v1alpha1.GitGenerator{}, "master", false))
assert.True(t, genRevisionHasChanged(&v1alpha1.GitGenerator{Revision: "dev"}, "dev", true))
assert.False(t, genRevisionHasChanged(&v1alpha1.GitGenerator{Revision: "dev"}, "master", false))
assert.True(t, genRevisionHasChanged(&v1alpha1.GitGenerator{Revision: "refs/heads/dev"}, "dev", true))
assert.False(t, genRevisionHasChanged(&v1alpha1.GitGenerator{Revision: "refs/heads/dev"}, "master", false))
type args struct {
gen *v1alpha1.GitGenerator
revision string
touchedHead bool
}
tests := []struct {
name string
args args
want bool
}{
{name: "touchedHead", args: args{
gen: &v1alpha1.GitGenerator{},
revision: "main",
touchedHead: true,
}, want: true},
{name: "didntTouchHead", args: args{
gen: &v1alpha1.GitGenerator{},
revision: "main",
touchedHead: false,
}, want: false},
{name: "foundEqualShort", args: args{
gen: &v1alpha1.GitGenerator{Revision: "dev"},
revision: "dev",
touchedHead: true,
}, want: true},
{name: "foundEqualLongGen", args: args{
gen: &v1alpha1.GitGenerator{Revision: "refs/heads/dev"},
revision: "dev",
touchedHead: true,
}, want: true},
{name: "foundNotEqualLongGen", args: args{
gen: &v1alpha1.GitGenerator{Revision: "refs/heads/dev"},
revision: "main",
touchedHead: true,
}, want: false},
{name: "foundNotEqualShort", args: args{
gen: &v1alpha1.GitGenerator{Revision: "dev"},
revision: "main",
touchedHead: false,
}, want: false},
{name: "foundEqualTag", args: args{
gen: &v1alpha1.GitGenerator{Revision: "v3.14.1"},
revision: "v3.14.1",
touchedHead: false,
}, want: true},
{name: "foundEqualTagLongGen", args: args{
gen: &v1alpha1.GitGenerator{Revision: "refs/tags/v3.14.1"},
revision: "v3.14.1",
touchedHead: false,
}, want: true},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
assert.Equalf(t, tt.want, genRevisionHasChanged(tt.args.gen, tt.args.revision, tt.args.touchedHead), "genRevisionHasChanged(%v, %v, %v)", tt.args.gen, tt.args.revision, tt.args.touchedHead)
})
}
}
func fakeAppWithGitGenerator(name, namespace, repo string) *v1alpha1.ApplicationSet {
@@ -331,6 +389,12 @@ func fakeAppWithGitGenerator(name, namespace, repo string) *v1alpha1.Application
}
}
func fakeAppWithGitGeneratorWithRevision(name, namespace, repo, revision string) *v1alpha1.ApplicationSet {
appSet := fakeAppWithGitGenerator(name, namespace, repo)
appSet.Spec.Generators[0].Git.Revision = revision
return appSet
}
func fakeAppWithGitlabPullRequestGenerator(name, namespace, projectId string) *v1alpha1.ApplicationSet {
return &v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
@@ -711,7 +775,7 @@ func fakeAppWithMatrixAndPullRequestGeneratorWithPluginGenerator(name, namespace
func newFakeClient(ns string) *kubefake.Clientset {
s := runtime.NewScheme()
s.AddKnownTypes(v1alpha1.SchemeGroupVersion, &v1alpha1.ApplicationSet{})
return kubefake.NewSimpleClientset(&corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{Name: "argocd-cm", Namespace: ns, Labels: map[string]string{
return kubefake.NewClientset(&corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{Name: "argocd-cm", Namespace: ns, Labels: map[string]string{
"app.kubernetes.io/part-of": "argocd",
}}}, &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{

assets/swagger.json (generated)

@@ -1990,6 +1990,39 @@
}
}
},
"/api/v1/applicationsets/generate": {
"post": {
"tags": [
"ApplicationSetService"
],
"summary": "Generate generates",
"operationId": "ApplicationSetService_Generate",
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/applicationsetApplicationSetGenerateRequest"
}
}
],
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/applicationsetApplicationSetGenerateResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/runtimeError"
}
}
}
}
},
"/api/v1/applicationsets/{name}": {
"get": {
"tags": [
@@ -4716,6 +4749,12 @@
"help": {
"$ref": "#/definitions/clusterHelp"
},
"impersonationEnabled": {
"type": "boolean"
},
"installationID": {
"type": "string"
},
"kustomizeOptions": {
"$ref": "#/definitions/v1alpha1KustomizeOptions"
},
@@ -5937,6 +5976,13 @@
"type": "string",
"title": "Description contains optional project description"
},
"destinationServiceAccounts": {
"description": "DestinationServiceAccounts holds information about the service accounts to be impersonated for the application sync operation for each destination.",
"type": "array",
"items": {
"$ref": "#/definitions/v1alpha1ApplicationDestinationServiceAccount"
}
},
"destinations": {
"type": "array",
"title": "Destinations contains list of destinations available for deployment",
@@ -6068,6 +6114,24 @@
}
}
},
"v1alpha1ApplicationDestinationServiceAccount": {
"description": "ApplicationDestinationServiceAccount holds information about the service account to be impersonated for the application sync operation.",
"type": "object",
"properties": {
"defaultServiceAccount": {
"type": "string",
"title": "DefaultServiceAccount to be used for impersonation during the sync operation"
},
"namespace": {
"description": "Namespace specifies the target namespace for the application's resources.",
"type": "string"
},
"server": {
"description": "Server specifies the URL of the target cluster's Kubernetes control plane API.",
"type": "string"
}
}
},
"v1alpha1ApplicationList": {
"type": "object",
"title": "ApplicationList is list of Application resources\n+k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object",
@@ -9139,6 +9203,11 @@
"description": "SyncOperation contains details about a sync operation.",
"type": "object",
"properties": {
"autoHealAttemptsCount": {
"type": "integer",
"format": "int64",
"title": "SelfHealAttemptsCount contains the number of auto-heal attempts"
},
"dryRun": {
"type": "boolean",
"title": "DryRun specifies to perform a `kubectl apply --dry-run` without actually performing the sync"


@@ -13,6 +13,7 @@ import (
"github.com/redis/go-redis/v9"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
@@ -24,6 +25,7 @@ import (
appclientset "github.com/argoproj/argo-cd/v2/pkg/client/clientset/versioned"
"github.com/argoproj/argo-cd/v2/pkg/ratelimiter"
"github.com/argoproj/argo-cd/v2/reposerver/apiclient"
"github.com/argoproj/argo-cd/v2/util/argo"
"github.com/argoproj/argo-cd/v2/util/argo/normalizers"
cacheutil "github.com/argoproj/argo-cd/v2/util/cache"
appstatecache "github.com/argoproj/argo-cd/v2/util/cache/appstate"
@@ -56,12 +58,16 @@ func NewCommand() *cobra.Command {
repoServerAddress string
repoServerTimeoutSeconds int
selfHealTimeoutSeconds int
selfHealBackoffTimeoutSeconds int
selfHealBackoffFactor int
selfHealBackoffCapSeconds int
statusProcessors int
operationProcessors int
glogLevel int
metricsPort int
metricsCacheExpiration time.Duration
metricsAplicationLabels []string
metricsAplicationConditions []string
kubectlParallelismLimit int64
cacheSource func() (*appstatecache.Cache, error)
redisClient *redis.Client
@@ -77,6 +83,9 @@ func NewCommand() *cobra.Command {
enableDynamicClusterDistribution bool
serverSideDiff bool
ignoreNormalizerOpts normalizers.IgnoreNormalizerOpts
// argocd k8s event logging flag
enableK8sEvent []string
)
command := cobra.Command{
Use: cliName,
@@ -151,6 +160,14 @@ func NewCommand() *cobra.Command {
kubectl := kubeutil.NewKubectl()
clusterSharding, err := sharding.GetClusterSharding(kubeClient, settingsMgr, shardingAlgorithm, enableDynamicClusterDistribution)
errors.CheckError(err)
var selfHealBackoff *wait.Backoff
if selfHealBackoffTimeoutSeconds != 0 {
selfHealBackoff = &wait.Backoff{
Duration: time.Duration(selfHealBackoffTimeoutSeconds) * time.Second,
Factor: float64(selfHealBackoffFactor),
Cap: time.Duration(selfHealBackoffCapSeconds) * time.Second,
}
}
appController, err = controller.NewApplicationController(
namespace,
settingsMgr,
@@ -163,10 +180,12 @@ func NewCommand() *cobra.Command {
hardResyncDuration,
time.Duration(appResyncJitter)*time.Second,
time.Duration(selfHealTimeoutSeconds)*time.Second,
selfHealBackoff,
time.Duration(repoErrorGracePeriod)*time.Second,
metricsPort,
metricsCacheExpiration,
metricsAplicationLabels,
metricsAplicationConditions,
kubectlParallelismLimit,
persistResourceHealth,
clusterSharding,
@@ -175,6 +194,7 @@ func NewCommand() *cobra.Command {
serverSideDiff,
enableDynamicClusterDistribution,
ignoreNormalizerOpts,
enableK8sEvent,
)
errors.CheckError(err)
cacheutil.CollectMetrics(redisClient, appController.GetMetricsServer())
@@ -224,11 +244,15 @@ func NewCommand() *cobra.Command {
command.Flags().IntVar(&glogLevel, "gloglevel", 0, "Set the glog logging level")
command.Flags().IntVar(&metricsPort, "metrics-port", common.DefaultPortArgoCDMetrics, "Start metrics server on given port")
command.Flags().DurationVar(&metricsCacheExpiration, "metrics-cache-expiration", env.ParseDurationFromEnv("ARGOCD_APPLICATION_CONTROLLER_METRICS_CACHE_EXPIRATION", 0*time.Second, 0, math.MaxInt64), "Prometheus metrics cache expiration (disabled by default. e.g. 24h0m0s)")
command.Flags().IntVar(&selfHealTimeoutSeconds, "self-heal-timeout-seconds", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_TIMEOUT_SECONDS", 5, 0, math.MaxInt32), "Specifies timeout between application self heal attempts")
command.Flags().IntVar(&selfHealTimeoutSeconds, "self-heal-timeout-seconds", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_TIMEOUT_SECONDS", 0, 0, math.MaxInt32), "Specifies timeout between application self heal attempts")
command.Flags().IntVar(&selfHealBackoffTimeoutSeconds, "self-heal-backoff-timeout-seconds", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_BACKOFF_TIMEOUT_SECONDS", 2, 0, math.MaxInt32), "Specifies initial timeout of exponential backoff between self heal attempts")
command.Flags().IntVar(&selfHealBackoffFactor, "self-heal-backoff-factor", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_BACKOFF_FACTOR", 3, 0, math.MaxInt32), "Specifies factor of exponential timeout between application self heal attempts")
command.Flags().IntVar(&selfHealBackoffCapSeconds, "self-heal-backoff-cap-seconds", env.ParseNumFromEnv("ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_BACKOFF_CAP_SECONDS", 300, 0, math.MaxInt32), "Specifies max timeout of exponential backoff between application self heal attempts")
command.Flags().Int64Var(&kubectlParallelismLimit, "kubectl-parallelism-limit", env.ParseInt64FromEnv("ARGOCD_APPLICATION_CONTROLLER_KUBECTL_PARALLELISM_LIMIT", 20, 0, math.MaxInt64), "Number of allowed concurrent kubectl fork/execs. Any value less than 1 means no limit.")
command.Flags().BoolVar(&repoServerPlaintext, "repo-server-plaintext", env.ParseBoolFromEnv("ARGOCD_APPLICATION_CONTROLLER_REPO_SERVER_PLAINTEXT", false), "Disable TLS on connections to repo server")
command.Flags().BoolVar(&repoServerStrictTLS, "repo-server-strict-tls", env.ParseBoolFromEnv("ARGOCD_APPLICATION_CONTROLLER_REPO_SERVER_STRICT_TLS", false), "Whether to use strict validation of the TLS cert presented by the repo server")
command.Flags().StringSliceVar(&metricsAplicationLabels, "metrics-application-labels", []string{}, "List of Application labels that will be added to the argocd_application_labels metric")
command.Flags().StringSliceVar(&metricsAplicationConditions, "metrics-application-conditions", []string{}, "List of Application conditions that will be added to the argocd_application_conditions metric")
command.Flags().StringVar(&otlpAddress, "otlp-address", env.StringFromEnv("ARGOCD_APPLICATION_CONTROLLER_OTLP_ADDRESS", ""), "OpenTelemetry collector address to send traces to")
command.Flags().BoolVar(&otlpInsecure, "otlp-insecure", env.ParseBoolFromEnv("ARGOCD_APPLICATION_CONTROLLER_OTLP_INSECURE", true), "OpenTelemetry collector insecure mode")
command.Flags().StringToStringVar(&otlpHeaders, "otlp-headers", env.ParseStringToStringFromEnv("ARGOCD_APPLICATION_CONTROLLER_OTLP_HEADERS", map[string]string{}, ","), "List of OpenTelemetry collector extra headers sent with traces, headers are comma-separated key-value pairs(e.g. key1=value1,key2=value2)")
@@ -248,6 +272,9 @@ func NewCommand() *cobra.Command {
command.Flags().BoolVar(&enableDynamicClusterDistribution, "dynamic-cluster-distribution-enabled", env.ParseBoolFromEnv(common.EnvEnableDynamicClusterDistribution, false), "Enables dynamic cluster distribution.")
command.Flags().BoolVar(&serverSideDiff, "server-side-diff-enabled", env.ParseBoolFromEnv(common.EnvServerSideDiff, false), "Feature flag to enable ServerSide diff. Default (\"false\")")
command.Flags().DurationVar(&ignoreNormalizerOpts.JQExecutionTimeout, "ignore-normalizer-jq-execution-timeout-seconds", env.ParseDurationFromEnv("ARGOCD_IGNORE_NORMALIZER_JQ_TIMEOUT", 0*time.Second, 0, math.MaxInt64), "Set ignore normalizer JQ execution timeout")
// argocd k8s event logging flag
command.Flags().StringSliceVar(&enableK8sEvent, "enable-k8s-event", env.StringsFromEnv("ARGOCD_ENABLE_K8S_EVENT", argo.DefaultEnableEventList(), ","), "Enable ArgoCD to use k8s event. For disabling all events, set the value as `none`. (e.g --enable-k8s-event=none), For enabling specific events, set the value as `event reason`. (e.g --enable-k8s-event=StatusRefreshed,ResourceCreated)")
cacheSource = appstatecache.AddCacheFlagsToCmd(&command, cacheutil.Options{
OnClientCreated: func(client *redis.Client) {
redisClient = client
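
The new self-heal flags feed a k8s.io/apimachinery wait.Backoff (2s initial duration, factor 3, 300s cap by default). Below is a standalone sketch of how such a backoff grows between attempts; the Steps field is set here only so Step() keeps multiplying and is not part of the diff.

package main

import (
    "fmt"
    "time"

    "k8s.io/apimachinery/pkg/util/wait"
)

func main() {
    // Values mirror the flag defaults in the hunk: 2s initial, factor 3, 300s cap.
    // Steps is an assumption made for this sketch so Step() keeps growing.
    backoff := wait.Backoff{
        Duration: 2 * time.Second,
        Factor:   3,
        Cap:      300 * time.Second,
        Steps:    6,
    }
    for i := 0; i < 6; i++ {
        fmt.Printf("self-heal attempt %d: wait %v\n", i+1, backoff.Step())
    }
    // Prints 2s, 6s, 18s, 54s, 2m42s, then the 5m0s cap.
}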


@@ -270,7 +270,7 @@ func startWebhookServer(webhookHandler *webhook.WebhookHandler, webhookAddr stri
mux := http.NewServeMux()
mux.HandleFunc("/api/webhook", webhookHandler.Handler)
go func() {
log.Info("Starting webhook server")
log.Infof("Starting webhook server %s", webhookAddr)
err := http.ListenAndServe(webhookAddr, mux)
if err != nil {
log.Error(err, "failed to start webhook server")


@@ -1,10 +1,14 @@
package commands
import (
"context"
"fmt"
"net/http"
"os"
"os/signal"
"strings"
"sync"
"syscall"
"github.com/argoproj/argo-cd/v2/common"
"github.com/argoproj/argo-cd/v2/reposerver/apiclient"
@@ -62,7 +66,8 @@ func NewCommand() *cobra.Command {
Use: "controller",
Short: "Starts Argo CD Notifications controller",
RunE: func(c *cobra.Command, args []string) error {
ctx := c.Context()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
vers := common.GetVersion()
namespace, _, err := clientConfig.Namespace()
@@ -146,6 +151,17 @@ func NewCommand() *cobra.Command {
return fmt.Errorf("failed to initialize controller: %w", err)
}
sigCh := make(chan os.Signal, 1)
signal.Notify(sigCh, os.Interrupt, syscall.SIGTERM)
wg := sync.WaitGroup{}
wg.Add(1)
go func() {
defer wg.Done()
s := <-sigCh
log.Printf("got signal %v, attempting graceful shutdown", s)
cancel()
}()
go ctrl.Run(ctx, processorsCount)
<-ctx.Done()
return nil


@@ -27,6 +27,7 @@ import (
reposervercache "github.com/argoproj/argo-cd/v2/reposerver/cache"
"github.com/argoproj/argo-cd/v2/server"
servercache "github.com/argoproj/argo-cd/v2/server/cache"
"github.com/argoproj/argo-cd/v2/util/argo"
cacheutil "github.com/argoproj/argo-cd/v2/util/cache"
"github.com/argoproj/argo-cd/v2/util/cli"
"github.com/argoproj/argo-cd/v2/util/dex"
@@ -91,6 +92,9 @@ func NewCommand() *cobra.Command {
scmRootCAPath string
allowedScmProviders []string
enableScmProviders bool
// argocd k8s event logging flag
enableK8sEvent []string
)
command := &cobra.Command{
Use: cliName,
@@ -151,6 +155,7 @@ func NewCommand() *cobra.Command {
controllerClient, err := client.New(config, client.Options{Scheme: scheme})
errors.CheckError(err)
controllerClient = client.NewDryRunClient(controllerClient)
controllerClient = client.NewNamespacedClient(controllerClient, namespace)
// Load CA information to use for validating connections to the
// repository server, if strict TLS validation was requested.
@@ -229,6 +234,7 @@ func NewCommand() *cobra.Command {
ApplicationNamespaces: applicationNamespaces,
EnableProxyExtension: enableProxyExtension,
WebhookParallelism: webhookParallelism,
EnableK8sEvent: enableK8sEvent,
}
appsetOpts := server.ApplicationSetOpts{
@@ -303,6 +309,7 @@ func NewCommand() *cobra.Command {
command.Flags().StringSliceVar(&applicationNamespaces, "application-namespaces", env.StringsFromEnv("ARGOCD_APPLICATION_NAMESPACES", []string{}, ","), "List of additional namespaces where application resources can be managed in")
command.Flags().BoolVar(&enableProxyExtension, "enable-proxy-extension", env.ParseBoolFromEnv("ARGOCD_SERVER_ENABLE_PROXY_EXTENSION", false), "Enable Proxy Extension feature")
command.Flags().IntVar(&webhookParallelism, "webhook-parallelism-limit", env.ParseNumFromEnv("ARGOCD_SERVER_WEBHOOK_PARALLELISM_LIMIT", 50, 1, 1000), "Number of webhook requests processed concurrently")
command.Flags().StringSliceVar(&enableK8sEvent, "enable-k8s-event", env.StringsFromEnv("ARGOCD_ENABLE_K8S_EVENT", argo.DefaultEnableEventList(), ","), "Enable ArgoCD to use k8s event. For disabling all events, set the value as `none`. (e.g --enable-k8s-event=none), For enabling specific events, set the value as `event reason`. (e.g --enable-k8s-event=StatusRefreshed,ResourceCreated)")
// Flags related to the applicationSet component.
command.Flags().StringVar(&scmRootCAPath, "appset-scm-root-ca-path", env.StringFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_SCM_ROOT_CA_PATH", ""), "Provide Root CA Path for self-signed TLS Certificates")


@@ -190,7 +190,11 @@ func isArgoCDConfigMap(name string) bool {
// specsEqual returns if the spec, data, labels, annotations, and finalizers of the two
// supplied objects are equal, indicating that no update is necessary during importing
func specsEqual(left, right unstructured.Unstructured) bool {
if !reflect.DeepEqual(left.GetAnnotations(), right.GetAnnotations()) {
leftAnnotation := left.GetAnnotations()
rightAnnotation := right.GetAnnotations()
delete(leftAnnotation, apiv1.LastAppliedConfigAnnotation)
delete(rightAnnotation, apiv1.LastAppliedConfigAnnotation)
if !reflect.DeepEqual(leftAnnotation, rightAnnotation) {
return false
}
if !reflect.DeepEqual(left.GetLabels(), right.GetLabels()) {
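
Ignoring kubectl's last-applied-configuration annotation keeps import from reporting spurious diffs on objects that were once applied with kubectl. Two Go details make the unguarded deletes safe: GetAnnotations can return nil, and delete on a nil map is a no-op. A small illustration with plain maps:

package main

import (
    "fmt"
    "reflect"

    apiv1 "k8s.io/api/core/v1"
)

func main() {
    left := map[string]string{
        "team":                            "platform",
        apiv1.LastAppliedConfigAnnotation: `{"apiVersion":"v1","kind":"ConfigMap"}`,
    }
    var right map[string]string // nil, e.g. an object with no annotations at all

    // delete on a nil map is a no-op, so neither branch needs a nil check.
    delete(left, apiv1.LastAppliedConfigAnnotation)
    delete(right, apiv1.LastAppliedConfigAnnotation)

    fmt.Println(reflect.DeepEqual(left, right)) // false: "team" still differs
}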


@@ -97,7 +97,7 @@ func NewGenAppSpecCommand() *cobra.Command {
argocd admin app generate-spec nginx-ingress --repo https://charts.helm.sh/stable --helm-chart nginx-ingress --revision 1.24.3 --dest-namespace default --dest-server https://kubernetes.default.svc
# Generate declarative config for a Kustomize app
argocd admin app generate-spec kustomize-guestbook --repo https://github.com/argoproj/argocd-example-apps.git --path kustomize-guestbook --dest-namespace default --dest-server https://kubernetes.default.svc --kustomize-image gcr.io/heptio-images/ks-guestbook-demo:0.1
argocd admin app generate-spec kustomize-guestbook --repo https://github.com/argoproj/argocd-example-apps.git --path kustomize-guestbook --dest-namespace default --dest-server https://kubernetes.default.svc --kustomize-image quay.io/argoprojlabs/argocd-e2e-container:0.1
# Generate declarative config for a app using a custom tool:
argocd admin app generate-spec kasane --repo https://github.com/argoproj/argocd-example-apps.git --path plugins/kasane --dest-namespace default --dest-server https://kubernetes.default.svc --config-management-plugin kasane
@@ -387,7 +387,7 @@ func reconcileApplications(
return true
}, func(r *http.Request) error {
return nil
}, []string{})
}, []string{}, []string{})
if err != nil {
return nil, err
}


@@ -136,6 +136,7 @@ func NewImportCommand() *cobra.Command {
dryRun bool
verbose bool
stopOperation bool
ignoreTracking bool
applicationNamespaces []string
applicationsetNamespaces []string
)
@@ -264,6 +265,13 @@ func NewImportCommand() *cobra.Command {
continue
}
}
// If there is a live object, remove the tracking annotations/label that might conflict
// when argo is managed with an application.
if ignoreTracking && exists {
updateTracking(bakObj, &liveObj)
}
if !exists {
isForbidden := false
if !dryRun {
@@ -349,6 +357,7 @@ func NewImportCommand() *cobra.Command {
clientConfig = cli.AddKubectlFlagsToCmd(&command)
command.Flags().BoolVar(&dryRun, "dry-run", false, "Print what will be performed")
command.Flags().BoolVar(&prune, "prune", false, "Prune secrets, applications and projects which do not appear in the backup")
command.Flags().BoolVar(&ignoreTracking, "ignore-tracking", false, "Do not update the tracking annotation if the resource is already tracked")
command.Flags().BoolVar(&verbose, "verbose", false, "Verbose output (versus only changed output)")
command.Flags().BoolVar(&stopOperation, "stop-operation", false, "Stop any existing operations")
command.Flags().StringSliceVarP(&applicationNamespaces, "application-namespaces", "", []string{}, fmt.Sprintf("Comma separated list of namespace globs to which import of applications is allowed. If not provided value from '%s' in %s will be used,if it's not defined only applications without an explicit namespace will be imported to the Argo CD namespace", applicationNamespacesCmdParamsKey, common.ArgoCDCmdParamsConfigMapName))
@@ -422,3 +431,32 @@ func updateLive(bak, live *unstructured.Unstructured, stopOperation bool) *unstr
}
return newLive
}
// updateTracking will update the tracking label and annotation in the bak resources to the
// value of the live resource.
func updateTracking(bak, live *unstructured.Unstructured) {
// update the common annotation
bakAnnotations := bak.GetAnnotations()
liveAnnotations := live.GetAnnotations()
if liveAnnotations != nil && bakAnnotations != nil {
if v, ok := liveAnnotations[common.AnnotationKeyAppInstance]; ok {
if _, ok := bakAnnotations[common.AnnotationKeyAppInstance]; ok {
bakAnnotations[common.AnnotationKeyAppInstance] = v
bak.SetAnnotations(bakAnnotations)
}
}
}
// update the common label
// A custom label can be set, but it is impossible to know which instance is managing the application
bakLabels := bak.GetLabels()
liveLabels := live.GetLabels()
if liveLabels != nil && bakLabels != nil {
if v, ok := liveLabels[common.LabelKeyAppInstance]; ok {
if _, ok := bakLabels[common.LabelKeyAppInstance]; ok {
bakLabels[common.LabelKeyAppInstance] = v
bak.SetLabels(bakLabels)
}
}
}
}


@@ -0,0 +1,87 @@
package admin
import (
"testing"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
"github.com/stretchr/testify/assert"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"github.com/argoproj/argo-cd/v2/common"
)
func newBackupObject(trackingValue string, trackingLabel bool, trackingAnnotation bool) *unstructured.Unstructured {
cm := v1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Name: "my-configmap",
Namespace: "namespace",
},
Data: map[string]string{
"foo": "bar",
},
}
if trackingLabel {
cm.SetLabels(map[string]string{
common.LabelKeyAppInstance: trackingValue,
})
}
if trackingAnnotation {
cm.SetAnnotations(map[string]string{
common.AnnotationKeyAppInstance: trackingValue,
})
}
return kube.MustToUnstructured(&cm)
}
func Test_updateTracking(t *testing.T) {
type args struct {
bak *unstructured.Unstructured
live *unstructured.Unstructured
}
tests := []struct {
name string
args args
expected *unstructured.Unstructured
}{
{
name: "update annotation when present in live",
args: args{
bak: newBackupObject("bak", false, true),
live: newBackupObject("live", false, true),
},
expected: newBackupObject("live", false, true),
},
{
name: "update default label when present in live",
args: args{
bak: newBackupObject("bak", true, true),
live: newBackupObject("live", true, true),
},
expected: newBackupObject("live", true, true),
},
{
name: "do not update if live object does not have tracking",
args: args{
bak: newBackupObject("bak", true, true),
live: newBackupObject("live", false, false),
},
expected: newBackupObject("bak", true, true),
},
{
name: "do not update if bak object does not have tracking",
args: args{
bak: newBackupObject("bak", false, false),
live: newBackupObject("live", true, true),
},
expected: newBackupObject("bak", false, false),
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
updateTracking(tt.args.bak, tt.args.live)
assert.Equal(t, tt.expected, tt.args.bak)
})
}
}


@@ -183,13 +183,12 @@ func getControllerReplicas(ctx context.Context, kubeClient *kubernetes.Clientset
func NewClusterShardsCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
shard int
replicas int
shardingAlgorithm string
clientConfig clientcmd.ClientConfig
cacheSrc func() (*appstatecache.Cache, error)
portForwardRedis bool
redisCompressionStr string
shard int
replicas int
shardingAlgorithm string
clientConfig clientcmd.ClientConfig
cacheSrc func() (*appstatecache.Cache, error)
portForwardRedis bool
)
command := cobra.Command{
Use: "shards",
@@ -213,7 +212,7 @@ func NewClusterShardsCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comm
if replicas == 0 {
return
}
clusters, err := loadClusters(ctx, kubeClient, appClient, replicas, shardingAlgorithm, namespace, portForwardRedis, cacheSrc, shard, clientOpts.RedisName, clientOpts.RedisHaProxyName, redisCompressionStr)
clusters, err := loadClusters(ctx, kubeClient, appClient, replicas, shardingAlgorithm, namespace, portForwardRedis, cacheSrc, shard, clientOpts.RedisName, clientOpts.RedisHaProxyName, clientOpts.RedisCompression)
errors.CheckError(err)
if len(clusters) == 0 {
return
@@ -234,7 +233,6 @@ func NewClusterShardsCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comm
// we can ignore unchecked error here as the command will be parsed again and checked when command.Execute() is run later
// nolint:errcheck
command.ParseFlags(os.Args[1:])
redisCompressionStr, _ = command.Flags().GetString(cacheutil.CLIFlagRedisCompress)
return &command
}
@@ -466,13 +464,12 @@ func NewClusterDisableNamespacedMode() *cobra.Command {
func NewClusterStatsCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
shard int
replicas int
shardingAlgorithm string
clientConfig clientcmd.ClientConfig
cacheSrc func() (*appstatecache.Cache, error)
portForwardRedis bool
redisCompressionStr string
shard int
replicas int
shardingAlgorithm string
clientConfig clientcmd.ClientConfig
cacheSrc func() (*appstatecache.Cache, error)
portForwardRedis bool
)
command := cobra.Command{
Use: "stats",
@@ -502,7 +499,7 @@ argocd admin cluster stats target-cluster`,
replicas, err = getControllerReplicas(ctx, kubeClient, namespace, clientOpts.AppControllerName)
errors.CheckError(err)
}
clusters, err := loadClusters(ctx, kubeClient, appClient, replicas, shardingAlgorithm, namespace, portForwardRedis, cacheSrc, shard, clientOpts.RedisName, clientOpts.RedisHaProxyName, redisCompressionStr)
clusters, err := loadClusters(ctx, kubeClient, appClient, replicas, shardingAlgorithm, namespace, portForwardRedis, cacheSrc, shard, clientOpts.RedisName, clientOpts.RedisHaProxyName, clientOpts.RedisCompression)
errors.CheckError(err)
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
@@ -524,7 +521,6 @@ argocd admin cluster stats target-cluster`,
// we can ignore unchecked error here as the command will be parsed again and checked when command.Execute() is run later
// nolint:errcheck
command.ParseFlags(os.Args[1:])
redisCompressionStr, _ = command.Flags().GetString(cacheutil.CLIFlagRedisCompress)
return &command
}


@@ -12,17 +12,14 @@ import (
"github.com/argoproj/argo-cd/v2/cmd/argocd/commands/initialize"
"github.com/argoproj/argo-cd/v2/common"
argocdclient "github.com/argoproj/argo-cd/v2/pkg/apiclient"
"github.com/argoproj/argo-cd/v2/util/cache"
"github.com/argoproj/argo-cd/v2/util/env"
"github.com/argoproj/argo-cd/v2/util/errors"
)
func NewDashboardCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
port int
address string
compressionStr string
clientConfig clientcmd.ClientConfig
port int
address string
clientConfig clientcmd.ClientConfig
)
cmd := &cobra.Command{
Use: "dashboard",
@@ -30,10 +27,8 @@ func NewDashboardCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command
Run: func(cmd *cobra.Command, args []string) {
ctx := cmd.Context()
compression, err := cache.CompressionTypeFromString(compressionStr)
errors.CheckError(err)
clientOpts.Core = true
errors.CheckError(headless.MaybeStartLocalServer(ctx, clientOpts, initialize.RetrieveContextIfChanged(cmd.Flag("context")), &port, &address, compression, clientConfig))
errors.CheckError(headless.MaybeStartLocalServer(ctx, clientOpts, initialize.RetrieveContextIfChanged(cmd.Flag("context")), &port, &address, clientConfig))
println(fmt.Sprintf("Argo CD UI is available at http://%s:%d", address, port))
<-ctx.Done()
},
@@ -50,6 +45,5 @@ $ argocd admin dashboard --redis-compress gzip
clientConfig = cli.AddKubectlFlagsToSet(cmd.Flags())
cmd.Flags().IntVar(&port, "port", common.DefaultPortAPIServer, "Listen on given port")
cmd.Flags().StringVar(&address, "address", common.DefaultAddressAdminDashboard, "Listen on given address")
cmd.Flags().StringVar(&compressionStr, "redis-compress", env.StringFromEnv("REDIS_COMPRESSION", string(cache.RedisCompressionGZip)), "Enable this if the application controller is configured with redis compression enabled. (possible values: gzip, none)")
return cmd
}


@@ -29,7 +29,7 @@ type rbacTrait struct {
}
// Provide a mapping of short-hand resource names to their RBAC counterparts
var resourceMap map[string]string = map[string]string{
var resourceMap = map[string]string{
"account": rbacpolicy.ResourceAccounts,
"app": rbacpolicy.ResourceApplications,
"apps": rbacpolicy.ResourceApplications,
@@ -53,8 +53,17 @@ var resourceMap map[string]string = map[string]string{
"repository": rbacpolicy.ResourceRepositories,
}
var projectScoped = map[string]bool{
rbacpolicy.ResourceApplications: true,
rbacpolicy.ResourceApplicationSets: true,
rbacpolicy.ResourceLogs: true,
rbacpolicy.ResourceExec: true,
rbacpolicy.ResourceClusters: true,
rbacpolicy.ResourceRepositories: true,
}
// List of allowed RBAC resources
var validRBACResourcesActions map[string]actionTraitMap = map[string]actionTraitMap{
var validRBACResourcesActions = map[string]actionTraitMap{
rbacpolicy.ResourceAccounts: accountsActions,
rbacpolicy.ResourceApplications: applicationsActions,
rbacpolicy.ResourceApplicationSets: defaultCRUDActions,
@@ -436,14 +445,15 @@ func checkPolicy(subject, action, resource, subResource, builtinPolicy, userPoli
}
}
// Application resources have a special notation - for simplicity's sake,
// Some project scoped resources have a special notation - for simplicity's sake,
// if user gives no sub-resource (or specifies simple '*'), we construct
// the required notation by setting subresource to '*/*'.
if realResource == rbacpolicy.ResourceApplications {
if projectScoped[realResource] {
if subResource == "*" || subResource == "" {
subResource = "*/*"
}
} else if realResource == rbacpolicy.ResourceLogs {
}
if realResource == rbacpolicy.ResourceLogs {
if isLogRbacEnforced != nil && !isLogRbacEnforced() {
return true
}


@@ -235,6 +235,14 @@ func Test_PolicyFromK8s(t *testing.T) {
ok := checkPolicy("log-allow-user", "get", "logs", "*/*", assets.BuiltinPolicyCSV, uPol, dRole, "", true, falseLogRbacEnforce)
require.True(t, ok)
})
t.Run("get logs", func(t *testing.T) {
ok := checkPolicy("role:test", "get", "logs", "*", assets.BuiltinPolicyCSV, uPol, dRole, "", true, nil)
require.True(t, ok)
})
t.Run("get logs", func(t *testing.T) {
ok := checkPolicy("role:test", "get", "logs", "", assets.BuiltinPolicyCSV, uPol, dRole, "", true, nil)
require.True(t, ok)
})
t.Run("create exec", func(t *testing.T) {
ok := checkPolicy("role:test", "create", "exec", "*/*", assets.BuiltinPolicyCSV, uPol, dRole, "", true, nil)
require.True(t, ok)


@@ -138,7 +138,7 @@ func NewApplicationCreateCommand(clientOpts *argocdclient.ClientOptions) *cobra.
argocd app create nginx-ingress --repo https://charts.helm.sh/stable --helm-chart nginx-ingress --revision 1.24.3 --dest-namespace default --dest-server https://kubernetes.default.svc
# Create a Kustomize app
argocd app create kustomize-guestbook --repo https://github.com/argoproj/argocd-example-apps.git --path kustomize-guestbook --dest-namespace default --dest-server https://kubernetes.default.svc --kustomize-image gcr.io/heptio-images/ks-guestbook-demo:0.1
argocd app create kustomize-guestbook --repo https://github.com/argoproj/argocd-example-apps.git --path kustomize-guestbook --dest-namespace default --dest-server https://kubernetes.default.svc --kustomize-image quay.io/argoprojlabs/argocd-e2e-container:0.1
# Create a MultiSource app while yaml file contains an application with multiple sources
argocd app create guestbook --file <path-to-yaml-file>
@@ -294,7 +294,7 @@ func parentChildDetails(appIf application.ApplicationServiceClient, ctx context.
return mapUidToNode, mapParentToChild, parentNode
}
func printHeader(acdClient argocdclient.Client, app *argoappv1.Application, ctx context.Context, windows *argoappv1.SyncWindows, showOperation bool, showParams bool) {
func printHeader(acdClient argocdclient.Client, app *argoappv1.Application, ctx context.Context, windows *argoappv1.SyncWindows, showOperation bool, showParams bool, sourcePosition int) {
aURL := appURL(ctx, acdClient, app.Name)
printAppSummaryTable(app, aURL, windows)
@@ -309,20 +309,21 @@ func printHeader(acdClient argocdclient.Client, app *argoappv1.Application, ctx
fmt.Println()
printOperationResult(app.Status.OperationState)
}
if !app.Spec.HasMultipleSources() && showParams {
printParams(app)
if showParams {
printParams(app, sourcePosition)
}
}
// NewApplicationGetCommand returns a new instance of an `argocd app get` command
func NewApplicationGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
refresh bool
hardRefresh bool
output string
showParams bool
showOperation bool
appNamespace string
refresh bool
hardRefresh bool
output string
showParams bool
showOperation bool
appNamespace string
sourcePosition int
)
command := &cobra.Command{
Use: "get APPNAME",
@@ -343,6 +344,9 @@ func NewApplicationGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
# Show application parameters and overrides
argocd app get my-app --show-params
# Show application parameters and overrides for a source at position 1 under spec.sources of app my-app
argocd app get my-app --show-params --source-position 1
# Refresh application data when retrieving
argocd app get my-app --refresh
@@ -373,9 +377,18 @@ func NewApplicationGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
Refresh: getRefreshType(refresh, hardRefresh),
AppNamespace: &appNs,
})
errors.CheckError(err)
// check for source position if --show-params is set
if app.Spec.HasMultipleSources() && showParams {
if sourcePosition <= 0 {
errors.CheckError(fmt.Errorf("Source position should be specified and must be greater than 0 for applications with multiple sources"))
}
if len(app.Spec.GetSources()) < sourcePosition {
errors.CheckError(fmt.Errorf("Source position should be less than the number of sources in the application"))
}
}
pConn, projIf := headless.NewClientOrDie(clientOpts, c).NewProjectClientOrDie()
defer argoio.Close(pConn)
proj, err := projIf.Get(ctx, &projectpkg.ProjectQuery{Name: app.Spec.Project})
@@ -388,7 +401,7 @@ func NewApplicationGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
err := PrintResource(app, output)
errors.CheckError(err)
case "wide", "":
printHeader(acdClient, app, ctx, windows, showOperation, showParams)
printHeader(acdClient, app, ctx, windows, showOperation, showParams, sourcePosition)
if len(app.Status.Resources) > 0 {
fmt.Println()
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
@@ -396,14 +409,14 @@ func NewApplicationGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
_ = w.Flush()
}
case "tree":
printHeader(acdClient, app, ctx, windows, showOperation, showParams)
printHeader(acdClient, app, ctx, windows, showOperation, showParams, sourcePosition)
mapUidToNode, mapParentToChild, parentNode, mapNodeNameToResourceState := resourceParentChild(ctx, acdClient, appName, appNs)
if len(mapUidToNode) > 0 {
fmt.Println()
printTreeView(mapUidToNode, mapParentToChild, parentNode, mapNodeNameToResourceState)
}
case "tree=detailed":
printHeader(acdClient, app, ctx, windows, showOperation, showParams)
printHeader(acdClient, app, ctx, windows, showOperation, showParams, sourcePosition)
mapUidToNode, mapParentToChild, parentNode, mapNodeNameToResourceState := resourceParentChild(ctx, acdClient, appName, appNs)
if len(mapUidToNode) > 0 {
fmt.Println()
@@ -420,6 +433,7 @@ func NewApplicationGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
command.Flags().BoolVar(&refresh, "refresh", false, "Refresh application data when retrieving")
command.Flags().BoolVar(&hardRefresh, "hard-refresh", false, "Refresh application data as well as target manifests cache")
command.Flags().StringVarP(&appNamespace, "app-namespace", "N", "", "Only get application from namespace")
command.Flags().IntVar(&sourcePosition, "source-position", -1, "Position of the source from the list of sources of the app. Counting starts at 1.")
return command
}
@@ -572,8 +586,8 @@ func printAppSummaryTable(app *argoappv1.Application, appURL string, windows *ar
var status string
var allow, deny, inactiveAllows bool
if windows.HasWindows() {
active := windows.Active()
if active.HasWindows() {
active, err := windows.Active()
if err == nil && active.HasWindows() {
for _, w := range *active {
if w.Kind == "deny" {
deny = true
@@ -582,13 +596,14 @@ func printAppSummaryTable(app *argoappv1.Application, appURL string, windows *ar
}
}
}
if windows.InactiveAllows().HasWindows() {
inactiveAllowWindows, err := windows.InactiveAllows()
if err == nil && inactiveAllowWindows.HasWindows() {
inactiveAllows = true
}
s := windows.CanSync(true)
if deny || !deny && !allow && inactiveAllows {
if s {
s, err := windows.CanSync(true)
if err == nil && s {
status = "Manual Allowed"
} else {
status = "Sync Denied"
@@ -701,9 +716,22 @@ func truncateString(str string, num int) string {
}
// printParams prints parameters and overrides
func printParams(app *argoappv1.Application) {
if app.Spec.GetSource().Helm != nil {
printHelmParams(app.Spec.GetSource().Helm)
func printParams(app *argoappv1.Application, sourcePosition int) {
var source *argoappv1.ApplicationSource
if app.Spec.HasMultipleSources() {
// Get the source by the sourcePosition whose params you'd like to print
source = app.Spec.GetSourcePtrByPosition(sourcePosition)
if source == nil {
source = &argoappv1.ApplicationSource{}
}
} else {
src := app.Spec.GetSource()
source = &src
}
if source.Helm != nil {
printHelmParams(source.Helm)
}
}
@@ -793,9 +821,9 @@ func NewApplicationSetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
errors.CheckError(err)
},
}
command.Flags().IntVar(&sourcePosition, "source-position", -1, "Position of the source from the list of sources of the app. Counting starts at 1.")
cmdutil.AddAppFlags(command, &appOpts)
command.Flags().StringVarP(&appNamespace, "app-namespace", "N", "", "Set application parameters in namespace")
command.Flags().IntVar(&sourcePosition, "source-position", -1, "Position of the source from the list of sources of the app. Counting starts at 1.")
return command
}
@@ -1255,7 +1283,7 @@ func findandPrintDiff(ctx context.Context, app *argoappv1.Application, proj *arg
if diffOptions.local != "" {
localObjs := groupObjsByKey(getLocalObjects(ctx, app, proj, diffOptions.local, diffOptions.localRepoRoot, argoSettings.AppLabelKey, diffOptions.cluster.Info.ServerVersion, diffOptions.cluster.Info.APIVersions, argoSettings.KustomizeOptions, argoSettings.TrackingMethod), liveObjs, app.Spec.Destination.Namespace)
items = groupObjsForDiff(resources, localObjs, items, argoSettings, app.InstanceName(argoSettings.ControllerNamespace), app.Spec.Destination.Namespace)
} else if diffOptions.revision != "" || (diffOptions.revisions != nil && len(diffOptions.revisions) > 0) {
} else if diffOptions.revision != "" || len(diffOptions.revisions) > 0 {
var unstructureds []*unstructured.Unstructured
for _, mfst := range diffOptions.res.Manifests {
obj, err := argoappv1.UnmarshalToUnstructured(mfst)
@@ -1348,7 +1376,7 @@ func groupObjsForDiff(resources *application.ManagedResourcesResponse, objs map[
}
if local, ok := objs[key]; ok || live != nil {
if local != nil && !kube.IsCRD(local) {
err = resourceTracking.SetAppInstance(local, argoSettings.AppLabelKey, appName, namespace, argoappv1.TrackingMethod(argoSettings.GetTrackingMethod()))
err = resourceTracking.SetAppInstance(local, argoSettings.AppLabelKey, appName, namespace, argoappv1.TrackingMethod(argoSettings.GetTrackingMethod()), argoSettings.GetInstallationID())
errors.CheckError(err)
}
@@ -1906,7 +1934,7 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
if len(projects) != 0 {
errMsg += fmt.Sprintf(" projects %v", projects)
}
log.Fatalf(errMsg)
log.Fatal(errMsg)
}
for _, i := range list.Items {


@@ -918,35 +918,83 @@ func TestPrintAppConditions(t *testing.T) {
}
func TestPrintParams(t *testing.T) {
output, _ := captureOutput(func() error {
app := &v1alpha1.Application{
Spec: v1alpha1.ApplicationSpec{
Source: &v1alpha1.ApplicationSource{
Helm: &v1alpha1.ApplicationSourceHelm{
Parameters: []v1alpha1.HelmParameter{
{
Name: "name1",
Value: "value1",
},
{
Name: "name2",
Value: "value2",
},
{
Name: "name3",
Value: "value3",
testCases := []struct {
name string
app *v1alpha1.Application
sourcePosition int
expectedOutput string
}{
{
name: "Single Source application with valid helm parameters",
app: &v1alpha1.Application{
Spec: v1alpha1.ApplicationSpec{
Source: &v1alpha1.ApplicationSource{
Helm: &v1alpha1.ApplicationSourceHelm{
Parameters: []v1alpha1.HelmParameter{
{
Name: "name1",
Value: "value1",
},
{
Name: "name2",
Value: "value2",
},
{
Name: "name3",
Value: "value3",
},
},
},
},
},
},
}
printParams(app)
return nil
})
expectation := "\n\nNAME VALUE\nname1 value1\nname2 value2\nname3 value3\n"
if output != expectation {
t.Fatalf("Incorrect print params output %q, should be %q", output, expectation)
sourcePosition: -1,
expectedOutput: "\n\nNAME VALUE\nname1 value1\nname2 value2\nname3 value3\n",
},
{
name: "Multi-source application with a valid Source Position",
app: &v1alpha1.Application{
Spec: v1alpha1.ApplicationSpec{
Sources: []v1alpha1.ApplicationSource{
{
Helm: &v1alpha1.ApplicationSourceHelm{
Parameters: []v1alpha1.HelmParameter{
{
Name: "nameA",
Value: "valueA",
},
},
},
},
{
Helm: &v1alpha1.ApplicationSourceHelm{
Parameters: []v1alpha1.HelmParameter{
{
Name: "nameB",
Value: "valueB",
},
},
},
},
},
},
},
sourcePosition: 1,
expectedOutput: "\n\nNAME VALUE\nnameA valueA\n",
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
output, _ := captureOutput(func() error {
printParams(tc.app, tc.sourcePosition)
return nil
})
if output != tc.expectedOutput {
t.Fatalf("Incorrect print params output %q, should be %q\n", output, tc.expectedOutput)
}
})
}
}


@@ -81,14 +81,14 @@ func Test_PrintResource(t *testing.T) {
return err
})
require.NoError(t, err)
assert.Equal(t, expectYamlSingle, str)
assert.YAMLEq(t, expectYamlSingle, str)
str, err = captureOutput(func() error {
err := PrintResource(testResource, "json")
return err
})
require.NoError(t, err)
assert.Equal(t, expectJsonSingle, str)
assert.JSONEq(t, expectJsonSingle, str)
err = PrintResource(testResource, "unknown")
require.Error(t, err)
@@ -116,28 +116,28 @@ func Test_PrintResourceList(t *testing.T) {
return err
})
require.NoError(t, err)
assert.Equal(t, expectYamlList, str)
assert.YAMLEq(t, expectYamlList, str)
str, err = captureOutput(func() error {
err := PrintResourceList(testResource, "json", false)
return err
})
require.NoError(t, err)
assert.Equal(t, expectJsonList, str)
assert.JSONEq(t, expectJsonList, str)
str, err = captureOutput(func() error {
err := PrintResourceList(testResource2, "yaml", true)
return err
})
require.NoError(t, err)
assert.Equal(t, expectYamlSingle, str)
assert.YAMLEq(t, expectYamlSingle, str)
str, err = captureOutput(func() error {
err := PrintResourceList(testResource2, "json", true)
return err
})
require.NoError(t, err)
assert.Equal(t, expectJsonSingle, str)
assert.JSONEq(t, expectJsonSingle, str)
err = PrintResourceList(testResource, "unknown", false)
require.Error(t, err)


@@ -14,8 +14,10 @@ import (
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/runtime"
corev1 "k8s.io/api/core/v1"
metaV1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
runtimeUtil "k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/client-go/dynamic"
"k8s.io/client-go/kubernetes"
cache2 "k8s.io/client-go/tools/cache"
@@ -48,6 +50,7 @@ type forwardCacheClient struct {
err error
redisHaProxyName string
redisName string
redisPassword string
}
func (c *forwardCacheClient) doLazy(action func(client cache.CacheClient) error) error {
@@ -64,7 +67,7 @@ func (c *forwardCacheClient) doLazy(action func(client cache.CacheClient) error)
return
}
redisClient := redis.NewClient(&redis.Options{Addr: fmt.Sprintf("localhost:%d", redisPort)})
redisClient := redis.NewClient(&redis.Options{Addr: fmt.Sprintf("localhost:%d", redisPort), Password: c.redisPassword})
c.client = cache.NewRedisCache(redisClient, time.Hour, c.compression)
})
if c.err != nil {
@@ -126,7 +129,7 @@ func (c *forwardRepoClientset) NewRepoServerClient() (io.Closer, repoapiclient.R
}
repoServerName := c.repoServerName
repoServererviceLabelSelector := common.LabelKeyComponentRepoServer + "=" + common.LabelValueComponentRepoServer
repoServerServices, err := c.kubeClientset.CoreV1().Services(c.namespace).List(context.Background(), v1.ListOptions{LabelSelector: repoServererviceLabelSelector})
repoServerServices, err := c.kubeClientset.CoreV1().Services(c.namespace).List(context.Background(), metaV1.ListOptions{LabelSelector: repoServererviceLabelSelector})
if err != nil {
c.err = err
return
@@ -174,7 +177,7 @@ func testAPI(ctx context.Context, clientOpts *apiclient.ClientOptions) error {
//
// If the clientOpts enables core mode, but the local config does not have core mode enabled, this function will
// not start the local server.
func MaybeStartLocalServer(ctx context.Context, clientOpts *apiclient.ClientOptions, ctxStr string, port *int, address *string, compression cache.RedisCompressionType, clientConfig clientcmd.ClientConfig) error {
func MaybeStartLocalServer(ctx context.Context, clientOpts *apiclient.ClientOptions, ctxStr string, port *int, address *string, clientConfig clientcmd.ClientConfig) error {
if clientConfig == nil {
flags := pflag.NewFlagSet("tmp", pflag.ContinueOnError)
clientConfig = cli.AddKubectlFlagsToSet(flags)
@@ -201,7 +204,7 @@ func MaybeStartLocalServer(ctx context.Context, clientOpts *apiclient.ClientOpti
}
// get rid of logging error handler
runtime.ErrorHandlers = runtime.ErrorHandlers[1:]
runtimeUtil.ErrorHandlers = runtimeUtil.ErrorHandlers[1:]
cli.SetLogLevel(log.ErrorLevel.String())
log.SetLevel(log.ErrorLevel)
os.Setenv(v1alpha1.EnvVarFakeInClusterConfig, "true")
@@ -236,7 +239,18 @@ func MaybeStartLocalServer(ctx context.Context, clientOpts *apiclient.ClientOpti
return fmt.Errorf("error creating kubernetes dynamic clientset: %w", err)
}
controllerClientset, err := client.New(restConfig, client.Options{})
scheme := runtime.NewScheme()
err = v1alpha1.AddToScheme(scheme)
if err != nil {
return fmt.Errorf("error adding argo resources to scheme: %w", err)
}
err = corev1.AddToScheme(scheme)
if err != nil {
return fmt.Errorf("error adding corev1 resources to scheme: %w", err)
}
controllerClientset, err := client.New(restConfig, client.Options{
Scheme: scheme,
})
if err != nil {
return fmt.Errorf("error creating kubernetes controller clientset: %w", err)
}
@@ -251,12 +265,12 @@ func MaybeStartLocalServer(ctx context.Context, clientOpts *apiclient.ClientOpti
if err != nil {
return fmt.Errorf("error running miniredis: %w", err)
}
appstateCache := appstatecache.NewCache(cache.NewCache(&forwardCacheClient{namespace: namespace, context: ctxStr, compression: compression, redisHaProxyName: clientOpts.RedisHaProxyName, redisName: clientOpts.RedisName}), time.Hour)
redisOptions := &redis.Options{Addr: mr.Addr()}
if err = common.SetOptionalRedisPasswordFromKubeConfig(ctx, kubeClientset, namespace, redisOptions); err != nil {
log.Warnf("Failed to fetch & set redis password for namespace %s: %v", namespace, err)
}
appstateCache := appstatecache.NewCache(cache.NewCache(&forwardCacheClient{namespace: namespace, context: ctxStr, compression: cache.RedisCompressionType(clientOpts.RedisCompression), redisHaProxyName: clientOpts.RedisHaProxyName, redisName: clientOpts.RedisName, redisPassword: redisOptions.Password}), time.Hour)
srv := server.NewServer(ctx, server.ArgoCDServerOpts{
EnableGZip: false,
Namespace: namespace,
@@ -307,7 +321,7 @@ func NewClientOrDie(opts *apiclient.ClientOptions, c *cobra.Command) apiclient.C
ctxStr := initialize.RetrieveContextIfChanged(c.Flag("context"))
// If we're in core mode, start the API server on the fly and configure the client `opts` to use it.
// If we're not in core mode, this function call will do nothing.
err := MaybeStartLocalServer(ctx, opts, ctxStr, nil, nil, cache.RedisCompressionNone, nil)
err := MaybeStartLocalServer(ctx, opts, ctxStr, nil, nil, nil)
if err != nil {
log.Fatal(err)
}

View File

@@ -6,6 +6,7 @@ import (
"fmt"
"io"
"os"
"slices"
"strings"
"text/tabwriter"
"time"
@@ -80,6 +81,8 @@ func NewProjectCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
command.AddCommand(NewProjectRemoveOrphanedIgnoreCommand(clientOpts))
command.AddCommand(NewProjectAddSourceNamespace(clientOpts))
command.AddCommand(NewProjectRemoveSourceNamespace(clientOpts))
command.AddCommand(NewProjectAddDestinationServiceAccountCommand(clientOpts))
command.AddCommand(NewProjectRemoveDestinationServiceAccountCommand(clientOpts))
return command
}
@@ -799,7 +802,7 @@ func printProjectNames(projects []v1alpha1.AppProject) {
// Print table of project info
func printProjectTable(projects []v1alpha1.AppProject) {
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "NAME\tDESCRIPTION\tDESTINATIONS\tSOURCES\tCLUSTER-RESOURCE-WHITELIST\tNAMESPACE-RESOURCE-BLACKLIST\tSIGNATURE-KEYS\tORPHANED-RESOURCES\n")
fmt.Fprintf(w, "NAME\tDESCRIPTION\tDESTINATIONS\tSOURCES\tCLUSTER-RESOURCE-WHITELIST\tNAMESPACE-RESOURCE-BLACKLIST\tSIGNATURE-KEYS\tORPHANED-RESOURCES\tDESTINATION-SERVICE-ACCOUNTS\n")
for _, p := range projects {
printProjectLine(w, &p)
}
@@ -855,7 +858,7 @@ func formatOrphanedResources(p *v1alpha1.AppProject) string {
}
func printProjectLine(w io.Writer, p *v1alpha1.AppProject) {
var destinations, sourceRepos, clusterWhitelist, namespaceBlacklist, signatureKeys string
var destinations, destinationServiceAccounts, sourceRepos, clusterWhitelist, namespaceBlacklist, signatureKeys string
switch len(p.Spec.Destinations) {
case 0:
destinations = "<none>"
@@ -864,6 +867,14 @@ func printProjectLine(w io.Writer, p *v1alpha1.AppProject) {
default:
destinations = fmt.Sprintf("%d destinations", len(p.Spec.Destinations))
}
switch len(p.Spec.DestinationServiceAccounts) {
case 0:
destinationServiceAccounts = "<none>"
case 1:
destinationServiceAccounts = fmt.Sprintf("%s,%s,%s", p.Spec.DestinationServiceAccounts[0].Server, p.Spec.DestinationServiceAccounts[0].Namespace, p.Spec.DestinationServiceAccounts[0].DefaultServiceAccount)
default:
destinationServiceAccounts = fmt.Sprintf("%d destinationServiceAccounts", len(p.Spec.DestinationServiceAccounts))
}
switch len(p.Spec.SourceRepos) {
case 0:
sourceRepos = "<none>"
@@ -892,7 +903,7 @@ func printProjectLine(w io.Writer, p *v1alpha1.AppProject) {
default:
signatureKeys = fmt.Sprintf("%d key(s)", len(p.Spec.SignatureKeys))
}
fmt.Fprintf(w, "%s\t%s\t%v\t%v\t%v\t%v\t%v\t%v\n", p.Name, p.Spec.Description, destinations, sourceRepos, clusterWhitelist, namespaceBlacklist, signatureKeys, formatOrphanedResources(p))
fmt.Fprintf(w, "%s\t%s\t%v\t%v\t%v\t%v\t%v\t%v\t%v\n", p.Name, p.Spec.Description, destinations, sourceRepos, clusterWhitelist, namespaceBlacklist, signatureKeys, formatOrphanedResources(p), destinationServiceAccounts)
}
func printProject(p *v1alpha1.AppProject, scopedRepositories []*v1alpha1.Repository, scopedClusters []*v1alpha1.Cluster) {
@@ -1082,3 +1093,122 @@ func NewProjectEditCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comman
}
return command
}
// NewProjectAddDestinationServiceAccountCommand returns a new instance of an `argocd proj add-destination-service-account` command
func NewProjectAddDestinationServiceAccountCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var serviceAccountNamespace string
buildApplicationDestinationServiceAccount := func(destination string, namespace string, serviceAccount string, serviceAccountNamespace string) v1alpha1.ApplicationDestinationServiceAccount {
if serviceAccountNamespace != "" {
return v1alpha1.ApplicationDestinationServiceAccount{
Server: destination,
Namespace: namespace,
DefaultServiceAccount: fmt.Sprintf("%s:%s", serviceAccountNamespace, serviceAccount),
}
} else {
return v1alpha1.ApplicationDestinationServiceAccount{
Server: destination,
Namespace: namespace,
DefaultServiceAccount: serviceAccount,
}
}
}
command := &cobra.Command{
Use: "add-destination-service-account PROJECT SERVER NAMESPACE SERVICE_ACCOUNT",
Short: "Add project destination's default service account",
Example: templates.Examples(`
# Add project destination service account (SERVICE_ACCOUNT) for a server URL (SERVER) in the specified namespace (NAMESPACE) on the project with name PROJECT
argocd proj add-destination-service-account PROJECT SERVER NAMESPACE SERVICE_ACCOUNT
# Add project destination service account (SERVICE_ACCOUNT) from a different namespace
argocd proj add-destination-service-account PROJECT SERVER NAMESPACE SERVICE_ACCOUNT --service-account-namespace <service_account_namespace>
`),
Run: func(c *cobra.Command, args []string) {
ctx := c.Context()
if len(args) != 4 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
server := args[1]
namespace := args[2]
serviceAccount := args[3]
if strings.Contains(serviceAccountNamespace, "*") {
log.Fatal("service-account-namespace for DestinationServiceAccount must not contain wildcards")
}
if strings.Contains(serviceAccount, "*") {
log.Fatal("ServiceAccount for DestinationServiceAccount must not contain wildcards")
}
destinationServiceAccount := buildApplicationDestinationServiceAccount(server, namespace, serviceAccount, serviceAccountNamespace)
conn, projIf := headless.NewClientOrDie(clientOpts, c).NewProjectClientOrDie()
defer argoio.Close(conn)
proj, err := projIf.Get(ctx, &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
for _, dest := range proj.Spec.DestinationServiceAccounts {
dstServerExist := destinationServiceAccount.Server != "" && dest.Server == destinationServiceAccount.Server
dstServiceAccountExist := destinationServiceAccount.DefaultServiceAccount != "" && dest.DefaultServiceAccount == destinationServiceAccount.DefaultServiceAccount
if dest.Namespace == destinationServiceAccount.Namespace && dstServerExist && dstServiceAccountExist {
log.Fatal("Specified destination service account is already defined in project")
}
}
proj.Spec.DestinationServiceAccounts = append(proj.Spec.DestinationServiceAccounts, destinationServiceAccount)
_, err = projIf.Update(ctx, &projectpkg.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
},
}
command.Flags().StringVar(&serviceAccountNamespace, "service-account-namespace", "", "Use service-account-namespace as namespace where the service account is present")
return command
}
// NewProjectRemoveDestinationServiceAccountCommand returns a new instance of an `argocd proj remove-destination-service-account` command
func NewProjectRemoveDestinationServiceAccountCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
command := &cobra.Command{
Use: "remove-destination-service-account PROJECT SERVER NAMESPACE SERVICE_ACCOUNT",
Short: "Remove default destination service account from the project",
Example: templates.Examples(`
# Remove the destination service account (SERVICE_ACCOUNT) from the specified destination (SERVER and NAMESPACE combination) on the project with name PROJECT
argocd proj remove-destination-service-account PROJECT SERVER NAMESPACE SERVICE_ACCOUNT
`),
Run: func(c *cobra.Command, args []string) {
ctx := c.Context()
if len(args) != 4 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
server := args[1]
namespace := args[2]
serviceAccount := args[3]
conn, projIf := headless.NewClientOrDie(clientOpts, c).NewProjectClientOrDie()
defer argoio.Close(conn)
proj, err := projIf.Get(ctx, &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
originalLength := len(proj.Spec.DestinationServiceAccounts)
proj.Spec.DestinationServiceAccounts = slices.DeleteFunc(proj.Spec.DestinationServiceAccounts,
func(destServiceAccount v1alpha1.ApplicationDestinationServiceAccount) bool {
return destServiceAccount.Namespace == namespace &&
destServiceAccount.Server == server &&
destServiceAccount.DefaultServiceAccount == serviceAccount
},
)
if originalLength != len(proj.Spec.DestinationServiceAccounts) {
_, err = projIf.Update(ctx, &projectpkg.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
} else {
log.Fatal("Specified destination service account does not exist in project")
}
},
}
return command
}
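For quick reference, a hedged usage sketch of the two new subcommands (project name, cluster URL, namespace and service account below are placeholders; add --service-account-namespace when the service account lives outside the destination namespace):
argocd proj add-destination-service-account my-project https://kubernetes.default.svc guestbook guestbook-deployer
argocd proj remove-destination-service-account my-project https://kubernetes.default.svc guestbook guestbook-deployer
Note that wildcards are rejected for both the service account and its namespace, and adding a duplicate entry or removing a non-existent one is a fatal error.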

View File

@@ -352,9 +352,10 @@ func printSyncWindows(proj *v1alpha1.AppProject) {
fmt.Fprintf(w, fmtStr, headers...)
if proj.Spec.SyncWindows.HasWindows() {
for i, window := range proj.Spec.SyncWindows {
isActive, _ := window.Active()
vals := []interface{}{
strconv.Itoa(i),
formatBoolOutput(window.Active()),
formatBoolOutput(isActive),
window.Kind,
window.Schedule,
window.Duration,

View File

@@ -187,6 +187,7 @@ func NewRepoCredsAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comma
command.Flags().StringVar(&repo.Type, "type", common.DefaultRepoType, "type of the repository, \"git\" or \"helm\"")
command.Flags().StringVar(&gcpServiceAccountKeyPath, "gcp-service-account-key-path", "", "service account key for the Google Cloud Platform")
command.Flags().BoolVar(&repo.ForceHttpBasicAuth, "force-http-basic-auth", false, "whether to force basic auth when connecting via HTTP")
command.Flags().StringVar(&repo.Proxy, "proxy-url", "", "If provided, this URL will be used to connect via proxy")
return command
}
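A hedged example of the new --proxy-url option (repository URL, credentials and proxy address are placeholders; the surrounding repocreds add syntax is assumed from the existing command):
argocd repocreds add https://git.example.com/org --username ci-bot --password '<token>' --proxy-url http://proxy.example.com:3128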

View File

@@ -3,6 +3,8 @@ package commands
import (
"fmt"
"github.com/argoproj/argo-cd/v2/util/cache"
"github.com/spf13/cobra"
"k8s.io/client-go/tools/clientcmd"
@@ -87,6 +89,8 @@ func NewCommand() *cobra.Command {
command.PersistentFlags().StringVar(&clientOpts.RedisName, "redis-name", env.StringFromEnv(common.EnvRedisName, common.DefaultRedisName), fmt.Sprintf("Name of the Redis deployment; set this or the %s environment variable when the Redis's name label differs from the default, for example when installing via the Helm chart", common.EnvRedisName))
command.PersistentFlags().StringVar(&clientOpts.RepoServerName, "repo-server-name", env.StringFromEnv(common.EnvRepoServerName, common.DefaultRepoServerName), fmt.Sprintf("Name of the Argo CD Repo server; set this or the %s environment variable when the server's name label differs from the default, for example when installing via the Helm chart", common.EnvRepoServerName))
command.PersistentFlags().StringVar(&clientOpts.RedisCompression, "redis-compress", env.StringFromEnv("REDIS_COMPRESSION", string(cache.RedisCompressionGZip)), "Enable this if the application controller is configured with redis compression enabled. (possible values: gzip, none)")
clientOpts.KubeOverrides = &clientcmd.ConfigOverrides{}
command.PersistentFlags().StringVar(&clientOpts.KubeOverrides.CurrentContext, "kube-context", "", "Directs the command to the given kube-context")
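Because headless/core mode port-forwards to the application controller's Redis, the CLI's compression setting has to match the controller's. A hedged example, assuming the existing --core flag for headless mode:
argocd --core --redis-compress none app list
REDIS_COMPRESSION=none argocd --core app list   # equivalent via the environment variable (default: gzip)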

View File

@@ -4,9 +4,9 @@ import (
"os"
"path/filepath"
"github.com/spf13/cobra"
"github.com/argoproj/argo-cd/v2/cmd/util"
_ "go.uber.org/automaxprocs"
"github.com/spf13/cobra"
appcontroller "github.com/argoproj/argo-cd/v2/cmd/argocd-application-controller/commands"
applicationset "github.com/argoproj/argo-cd/v2/cmd/argocd-applicationset-controller/commands"
@@ -31,9 +31,12 @@ func main() {
if val := os.Getenv(binaryNameEnv); val != "" {
binaryName = val
}
isCLI := false
switch binaryName {
case "argocd", "argocd-linux-amd64", "argocd-darwin-amd64", "argocd-windows-amd64.exe":
command = cli.NewCommand()
isCLI = true
case "argocd-server":
command = apiserver.NewCommand()
case "argocd-application-controller":
@@ -42,19 +45,24 @@ func main() {
command = reposerver.NewCommand()
case "argocd-cmp-server":
command = cmpserver.NewCommand()
isCLI = true
case "argocd-dex":
command = dex.NewCommand()
case "argocd-notifications":
command = notification.NewCommand()
case "argocd-git-ask-pass":
command = gitaskpass.NewCommand()
isCLI = true
case "argocd-applicationset-controller":
command = applicationset.NewCommand()
case "argocd-k8s-auth":
command = k8sauth.NewCommand()
isCLI = true
default:
command = cli.NewCommand()
isCLI = true
}
util.SetAutoMaxProcs(isCLI)
if err := command.Execute(); err != nil {
os.Exit(1)

View File

@@ -9,6 +9,8 @@ import (
"strings"
"time"
"go.uber.org/automaxprocs/maxprocs"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
@@ -88,6 +90,19 @@ type AppOptions struct {
ref string
}
// SetAutoMaxProcs sets the GOMAXPROCS value based on whether the binary runs as a CLI or as a service.
// It suppresses logs for CLI binaries and logs the setting for services.
func SetAutoMaxProcs(isCLI bool) {
if isCLI {
_, _ = maxprocs.Set() // Intentionally ignore errors for CLI binaries
} else {
_, err := maxprocs.Set(maxprocs.Logger(log.Infof))
if err != nil {
log.Errorf("Error setting GOMAXPROCS: %v", err)
}
}
}
func AddAppFlags(command *cobra.Command, opts *AppOptions) {
command.Flags().StringVar(&opts.repoURL, "repo", "", "Repository URL, ignored if a file is set")
command.Flags().StringVar(&opts.appPath, "path", "", "Path in repository to the app directory, ignored if a file is set")
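A minimal caller sketch of SetAutoMaxProcs (a hypothetical standalone main, not part of this change), mirroring how the dispatcher in main.go above wires it up:
package main

import "github.com/argoproj/argo-cd/v2/cmd/util"

func main() {
	// CLI binaries: adjust GOMAXPROCS silently, ignoring errors.
	util.SetAutoMaxProcs(true)

	// Long-running services would instead pass false, so the chosen value
	// is logged and any error is reported via logrus:
	// util.SetAutoMaxProcs(false)
}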

View File

@@ -1,10 +1,11 @@
package util
import (
"bytes"
"log"
"os"
"testing"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@@ -529,3 +530,27 @@ func TestFilterResources(t *testing.T) {
assert.Nil(t, filteredResources)
})
}
func TestSetAutoMaxProcs(t *testing.T) {
t.Run("CLI mode ignores errors", func(t *testing.T) {
logBuffer := &bytes.Buffer{}
oldLogger := log.Default()
log.SetOutput(logBuffer)
defer log.SetOutput(oldLogger.Writer())
SetAutoMaxProcs(true)
assert.Empty(t, logBuffer.String(), "Expected no log output when isCLI is true")
})
t.Run("Non-CLI mode logs error on failure", func(t *testing.T) {
logBuffer := &bytes.Buffer{}
oldLogger := log.Default()
log.SetOutput(logBuffer)
defer log.SetOutput(oldLogger.Writer())
SetAutoMaxProcs(false)
assert.NotContains(t, logBuffer.String(), "Error setting GOMAXPROCS", "Unexpected log output detected")
})
}

View File

@@ -20,11 +20,12 @@ import (
)
type ProjectOpts struct {
Description string
destinations []string
Sources []string
SignatureKeys []string
SourceNamespaces []string
Description string
destinations []string
destinationServiceAccounts []string
Sources []string
SignatureKeys []string
SourceNamespaces []string
orphanedResourcesEnabled bool
orphanedResourcesWarn bool
@@ -47,6 +48,8 @@ func AddProjFlags(command *cobra.Command, opts *ProjectOpts) {
command.Flags().StringArrayVar(&opts.allowedNamespacedResources, "allow-namespaced-resource", []string{}, "List of allowed namespaced resources")
command.Flags().StringArrayVar(&opts.deniedNamespacedResources, "deny-namespaced-resource", []string{}, "List of denied namespaced resources")
command.Flags().StringSliceVar(&opts.SourceNamespaces, "source-namespaces", []string{}, "List of source namespaces for applications")
command.Flags().StringArrayVar(&opts.destinationServiceAccounts, "dest-service-accounts", []string{},
"Destination server, namespace and target service account (e.g. https://192.168.99.100:8443,default,default-sa)")
}
func getGroupKindList(values []string) []v1.GroupKind {
@@ -93,6 +96,23 @@ func (opts *ProjectOpts) GetDestinations() []v1alpha1.ApplicationDestination {
return destinations
}
func (opts *ProjectOpts) GetDestinationServiceAccounts() []v1alpha1.ApplicationDestinationServiceAccount {
destinationServiceAccounts := make([]v1alpha1.ApplicationDestinationServiceAccount, 0)
for _, destStr := range opts.destinationServiceAccounts {
parts := strings.Split(destStr, ",")
if len(parts) != 3 {
log.Fatalf("Expected destination service account of the form: server,namespace, defaultServiceAccount. Received: %s", destStr)
} else {
destinationServiceAccounts = append(destinationServiceAccounts, v1alpha1.ApplicationDestinationServiceAccount{
Server: parts[0],
Namespace: parts[1],
DefaultServiceAccount: parts[2],
})
}
}
return destinationServiceAccounts
}
// GetSignatureKeys TODO: Get configured keys and emit warning when a key is specified that is not configured
func (opts *ProjectOpts) GetSignatureKeys() []v1alpha1.SignatureKey {
signatureKeys := make([]v1alpha1.SignatureKey, 0)
@@ -166,6 +186,8 @@ func SetProjSpecOptions(flags *pflag.FlagSet, spec *v1alpha1.AppProjectSpec, pro
spec.NamespaceResourceBlacklist = projOpts.GetDeniedNamespacedResources()
case "source-namespaces":
spec.SourceNamespaces = projOpts.GetSourceNamespaces()
case "dest-service-accounts":
spec.DestinationServiceAccounts = projOpts.GetDestinationServiceAccounts()
}
})
if flags.Changed("orphaned-resources") || flags.Changed("orphaned-resources-warn") {

View File

@@ -5,6 +5,8 @@ import (
"github.com/stretchr/testify/assert"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
)
func TestProjectOpts_ResourceLists(t *testing.T) {
@@ -22,3 +24,27 @@ func TestProjectOpts_ResourceLists(t *testing.T) {
[]v1.GroupKind{{Group: "rbac.authorization.k8s.io", Kind: "ClusterRole"}}, opts.GetDeniedClusterResources(),
)
}
func TestProjectOpts_GetDestinationServiceAccounts(t *testing.T) {
opts := ProjectOpts{
destinationServiceAccounts: []string{
"https://192.168.99.100:8443,test-ns,test-sa",
"https://kubernetes.default.svc.local:6443,guestbook,guestbook-sa",
},
}
assert.ElementsMatch(t,
[]v1alpha1.ApplicationDestinationServiceAccount{
{
Server: "https://192.168.99.100:8443",
Namespace: "test-ns",
DefaultServiceAccount: "test-sa",
},
{
Server: "https://kubernetes.default.svc.local:6443",
Namespace: "guestbook",
DefaultServiceAccount: "guestbook-sa",
},
}, opts.GetDestinationServiceAccounts(),
)
}

View File

@@ -45,7 +45,7 @@ func AddRepoFlags(command *cobra.Command, opts *RepoOptions) {
command.Flags().StringVar(&opts.GithubAppPrivateKeyPath, "github-app-private-key-path", "", "private key of the GitHub Application")
command.Flags().StringVar(&opts.GitHubAppEnterpriseBaseURL, "github-app-enterprise-base-url", "", "base url to use when using GitHub Enterprise (e.g. https://ghe.example.com/api/v3)")
command.Flags().StringVar(&opts.Proxy, "proxy", "", "use proxy to access repository")
command.Flags().StringVar(&opts.Proxy, "no-proxy", "", "don't access these targets via proxy")
command.Flags().StringVar(&opts.NoProxy, "no-proxy", "", "don't access these targets via proxy")
command.Flags().StringVar(&opts.GCPServiceAccountKeyPath, "gcp-service-account-key-path", "", "service account key for the Google Cloud Platform")
command.Flags().BoolVar(&opts.ForceHttpBasicAuth, "force-http-basic-auth", false, "whether to force use of basic auth when connecting repository via HTTP")
}
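With this fix the value of --no-proxy lands in opts.NoProxy instead of silently overwriting opts.Proxy. A hedged example (URLs are placeholders):
argocd repo add https://git.example.com/org/repo.git --proxy http://proxy.example.com:3128 --no-proxy "10.0.0.0/8,.svc.cluster.local"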

View File

@@ -178,6 +178,7 @@ const (
// AnnotationKeyAppInstance is the Argo CD application name, used as the instance name
AnnotationKeyAppInstance = "argocd.argoproj.io/tracking-id"
AnnotationInstallationID = "argocd.argoproj.io/installation-id"
// AnnotationCompareOptions is a comma-separated list of options for comparison
AnnotationCompareOptions = "argocd.argoproj.io/compare-options"

View File

@@ -130,6 +130,7 @@ type ApplicationController struct {
statusHardRefreshTimeout time.Duration
statusRefreshJitter time.Duration
selfHealTimeout time.Duration
selfHealBackOff *wait.Backoff
repoClientset apiclient.Clientset
db db.ArgoDB
settingsMgr *settings_util.SettingsManager
@@ -160,10 +161,12 @@ func NewApplicationController(
appHardResyncPeriod time.Duration,
appResyncJitter time.Duration,
selfHealTimeout time.Duration,
selfHealBackoff *wait.Backoff,
repoErrorGracePeriod time.Duration,
metricsPort int,
metricsCacheExpiration time.Duration,
metricsApplicationLabels []string,
metricsApplicationConditions []string,
kubectlParallelismLimit int64,
persistResourceHealth bool,
clusterSharding sharding.ClusterShardingCache,
@@ -172,6 +175,7 @@ func NewApplicationController(
serverSideDiff bool,
dynamicClusterDistributionEnabled bool,
ignoreNormalizerOpts normalizers.IgnoreNormalizerOpts,
enableK8sEvent []string,
) (*ApplicationController, error) {
log.Infof("appResyncPeriod=%v, appHardResyncPeriod=%v, appResyncJitter=%v", appResyncPeriod, appHardResyncPeriod, appResyncJitter)
db := db.NewDB(namespace, settingsMgr, kubeClientset)
@@ -196,9 +200,10 @@ func NewApplicationController(
statusRefreshJitter: appResyncJitter,
refreshRequestedApps: make(map[string]CompareWith),
refreshRequestedAppsMutex: &sync.Mutex{},
auditLogger: argo.NewAuditLogger(namespace, kubeClientset, common.ApplicationController),
auditLogger: argo.NewAuditLogger(namespace, kubeClientset, common.ApplicationController, enableK8sEvent),
settingsMgr: settingsMgr,
selfHealTimeout: selfHealTimeout,
selfHealBackOff: selfHealBackoff,
clusterSharding: clusterSharding,
projByNameCache: sync.Map{},
applicationNamespaces: applicationNamespaces,
@@ -279,7 +284,7 @@ func NewApplicationController(
metricsAddr := fmt.Sprintf("0.0.0.0:%d", metricsPort)
ctrl.metricsServer, err = metrics.NewMetricsServer(metricsAddr, appLister, ctrl.canProcessApp, readinessHealthCheck, metricsApplicationLabels)
ctrl.metricsServer, err = metrics.NewMetricsServer(metricsAddr, appLister, ctrl.canProcessApp, readinessHealthCheck, metricsApplicationLabels, metricsApplicationConditions)
if err != nil {
return nil, err
}
@@ -901,7 +906,7 @@ func (ctrl *ApplicationController) requestAppRefresh(appName string, compareWith
key := ctrl.toAppKey(appName)
if compareWith != nil && after != nil {
ctrl.appComparisonTypeRefreshQueue.AddAfter(fmt.Sprintf("%s/%d", key, compareWith), *after)
ctrl.appComparisonTypeRefreshQueue.AddAfter(fmt.Sprintf("%s/%d", key, *compareWith), *after)
} else {
if compareWith != nil {
ctrl.refreshRequestedAppsMutex.Lock()
@@ -1689,8 +1694,9 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
app.Status.Summary = tree.GetSummary(app)
}
if project.Spec.SyncWindows.Matches(app).CanSync(false) {
syncErrCond, opMS := ctrl.autoSync(app, compareResult.syncStatus, compareResult.resources)
canSync, _ := project.Spec.SyncWindows.Matches(app).CanSync(false)
if canSync {
syncErrCond, opMS := ctrl.autoSync(app, compareResult.syncStatus, compareResult.resources, compareResult.revisionUpdated)
setOpMs = opMS
if syncErrCond != nil {
app.Status.SetConditions(
@@ -1913,7 +1919,7 @@ func (ctrl *ApplicationController) persistAppStatus(orig *appv1.Application, new
}
// autoSync will initiate a sync operation for an application configured with automated sync
func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *appv1.SyncStatus, resources []appv1.ResourceStatus) (*appv1.ApplicationCondition, time.Duration) {
func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *appv1.SyncStatus, resources []appv1.ResourceStatus, revisionUpdated bool) (*appv1.ApplicationCondition, time.Duration) {
logCtx := getAppLog(app)
ts := stats.NewTimingStats()
defer func() {
@@ -1967,7 +1973,7 @@ func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *
desiredCommitSHA := syncStatus.Revision
desiredCommitSHAsMS := syncStatus.Revisions
alreadyAttempted, attemptPhase := alreadyAttemptedSync(app, desiredCommitSHA, desiredCommitSHAsMS, app.Spec.HasMultipleSources())
alreadyAttempted, attemptPhase := alreadyAttemptedSync(app, desiredCommitSHA, desiredCommitSHAsMS, app.Spec.HasMultipleSources(), revisionUpdated)
ts.AddCheckpoint("already_attempted_sync_ms")
op := appv1.Operation{
Sync: &appv1.SyncOperation{
@@ -1979,6 +1985,9 @@ func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *
InitiatedBy: appv1.OperationInitiator{Automated: true},
Retry: appv1.RetryStrategy{Limit: 5},
}
if app.Status.OperationState != nil && app.Status.OperationState.Operation.Sync != nil {
op.Sync.SelfHealAttemptsCount = app.Status.OperationState.Operation.Sync.SelfHealAttemptsCount
}
if app.Spec.SyncPolicy.Retry != nil {
op.Retry = *app.Spec.SyncPolicy.Retry
}
@@ -1996,6 +2005,7 @@ func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *
return nil, 0
} else if alreadyAttempted && selfHeal {
if shouldSelfHeal, retryAfter := ctrl.shouldSelfHeal(app); shouldSelfHeal {
op.Sync.SelfHealAttemptsCount++
for _, resource := range resources {
if resource.Status != appv1.SyncStatusCodeSynced {
op.Sync.Resources = append(op.Sync.Resources, appv1.SyncOperationResource{
@@ -2022,7 +2032,7 @@ func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *
}
if bAllNeedPrune {
message := fmt.Sprintf("Skipping sync attempt to %s: auto-sync will wipe out all resources", desiredCommitSHA)
logCtx.Warnf(message)
logCtx.Warn(message)
return &appv1.ApplicationCondition{Type: appv1.ApplicationConditionSyncError, Message: message}, 0
}
}
@@ -2062,17 +2072,26 @@ func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *
// alreadyAttemptedSync returns whether the most recent sync was performed against the
// commitSHA and with the same app source config which are currently set in the app
func alreadyAttemptedSync(app *appv1.Application, commitSHA string, commitSHAsMS []string, hasMultipleSources bool) (bool, synccommon.OperationPhase) {
func alreadyAttemptedSync(app *appv1.Application, commitSHA string, commitSHAsMS []string, hasMultipleSources bool, revisionUpdated bool) (bool, synccommon.OperationPhase) {
if app.Status.OperationState == nil || app.Status.OperationState.Operation.Sync == nil || app.Status.OperationState.SyncResult == nil {
return false, ""
}
if hasMultipleSources {
if !reflect.DeepEqual(app.Status.OperationState.SyncResult.Revisions, commitSHAsMS) {
return false, ""
if revisionUpdated {
if !reflect.DeepEqual(app.Status.OperationState.SyncResult.Revisions, commitSHAsMS) {
return false, ""
}
} else {
log.WithField("application", app.Name).Debugf("Skipping auto-sync: commitSHA %s has no changes", commitSHA)
}
} else {
if app.Status.OperationState.SyncResult.Revision != commitSHA {
return false, ""
if revisionUpdated {
log.WithField("application", app.Name).Infof("Executing compare of syncResult.Revision and commitSha because manifest changed: %v", commitSHA)
if app.Status.OperationState.SyncResult.Revision != commitSHA {
return false, ""
}
} else {
log.WithField("application", app.Name).Debugf("Skipping auto-sync: commitSHA %s has no changes", commitSHA)
}
}
@@ -2105,10 +2124,24 @@ func (ctrl *ApplicationController) shouldSelfHeal(app *appv1.Application) (bool,
}
var retryAfter time.Duration
if app.Status.OperationState.FinishedAt == nil {
retryAfter = ctrl.selfHealTimeout
if ctrl.selfHealBackOff == nil {
if app.Status.OperationState.FinishedAt == nil {
retryAfter = ctrl.selfHealTimeout
} else {
retryAfter = ctrl.selfHealTimeout - time.Since(app.Status.OperationState.FinishedAt.Time)
}
} else {
retryAfter = ctrl.selfHealTimeout - time.Since(app.Status.OperationState.FinishedAt.Time)
backOff := *ctrl.selfHealBackOff
backOff.Steps = int(app.Status.OperationState.Operation.Sync.SelfHealAttemptsCount)
var delay time.Duration
for backOff.Steps > 0 {
delay = backOff.Step()
}
if app.Status.OperationState.FinishedAt == nil {
retryAfter = delay
} else {
retryAfter = delay - time.Since(app.Status.OperationState.FinishedAt.Time)
}
}
return retryAfter <= 0, retryAfter
}
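A minimal, self-contained sketch of the delay progression the new selfHealBackOff branch produces, assuming the standard k8s.io/apimachinery wait.Backoff semantics; the values line up with TestSelfHealExponentialBackoff further down:
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// selfHealDelay mirrors the controller logic: Steps is set to the number of
// self-heal attempts so far and Step() is drained until the budget is used up.
func selfHealDelay(attempts int) time.Duration {
	backOff := wait.Backoff{Factor: 3, Duration: 2 * time.Second, Cap: 5 * time.Minute}
	backOff.Steps = attempts
	var delay time.Duration
	for backOff.Steps > 0 {
		delay = backOff.Step() // returns the current duration, then multiplies it by Factor
	}
	return delay
}

func main() {
	for attempts := 1; attempts <= 4; attempts++ {
		fmt.Printf("attempt %d -> %v\n", attempts, selfHealDelay(attempts)) // 2s, 6s, 18s, 54s
	}
}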

View File

@@ -4,16 +4,18 @@ import (
"context"
"encoding/json"
"errors"
"fmt"
"testing"
"time"
clustercache "github.com/argoproj/gitops-engine/pkg/cache"
"github.com/argoproj/gitops-engine/pkg/utils/kube/kubetest"
"github.com/sirupsen/logrus"
"github.com/stretchr/testify/require"
"k8s.io/apimachinery/pkg/api/resource"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/rest"
clustercache "github.com/argoproj/gitops-engine/pkg/cache"
"k8s.io/utils/ptr"
"github.com/argoproj/argo-cd/v2/common"
statecache "github.com/argoproj/argo-cd/v2/controller/cache"
@@ -43,12 +45,15 @@ import (
"github.com/argoproj/argo-cd/v2/reposerver/apiclient"
mockrepoclient "github.com/argoproj/argo-cd/v2/reposerver/apiclient/mocks"
"github.com/argoproj/argo-cd/v2/test"
"github.com/argoproj/argo-cd/v2/util/argo"
"github.com/argoproj/argo-cd/v2/util/argo/normalizers"
cacheutil "github.com/argoproj/argo-cd/v2/util/cache"
appstatecache "github.com/argoproj/argo-cd/v2/util/cache/appstate"
"github.com/argoproj/argo-cd/v2/util/settings"
)
var testEnableEventList []string = argo.DefaultEnableEventList()
type namespacedResource struct {
v1alpha1.ResourceNode
AppName string
@@ -64,6 +69,7 @@ type fakeData struct {
metricsCacheExpiration time.Duration
applicationNamespaces []string
updateRevisionForPathsResponse *apiclient.UpdateRevisionForPathsResponse
additionalObjs []runtime.Object
}
type MockKubectl struct {
@@ -133,7 +139,9 @@ func newFakeController(data *fakeData, repoErr error) *ApplicationController {
},
Data: data.configMapData,
}
kubeClient := fake.NewSimpleClientset(&clust, &cm, &secret)
runtimeObjs := []runtime.Object{&clust, &secret, &cm}
runtimeObjs = append(runtimeObjs, data.additionalObjs...)
kubeClient := fake.NewSimpleClientset(runtimeObjs...)
settingsMgr := settings.NewSettingsManager(context.Background(), kubeClient, test.FakeArgoCDNamespace)
kubectl := &MockKubectl{Kubectl: &kubetest.MockKubectlCmd{}}
ctrl, err := NewApplicationController(
@@ -151,10 +159,12 @@ func newFakeController(data *fakeData, repoErr error) *ApplicationController {
time.Hour,
time.Second,
time.Minute,
nil,
time.Second*10,
common.DefaultPortArgoCDMetrics,
data.metricsCacheExpiration,
[]string{},
[]string{},
0,
true,
nil,
@@ -163,6 +173,7 @@ func newFakeController(data *fakeData, repoErr error) *ApplicationController {
false,
false,
normalizers.IgnoreNormalizerOpts{},
testEnableEventList,
)
db := &dbmocks.ArgoDB{}
db.On("GetApplicationControllerReplicas").Return(1)
@@ -554,7 +565,7 @@ func TestAutoSync(t *testing.T) {
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{{Name: "guestbook", Kind: kube.DeploymentKind, Status: v1alpha1.SyncStatusCodeOutOfSync}})
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{{Name: "guestbook", Kind: kube.DeploymentKind, Status: v1alpha1.SyncStatusCodeOutOfSync}}, true)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get(context.Background(), "my-app", metav1.GetOptions{})
require.NoError(t, err)
@@ -575,7 +586,7 @@ func TestMultiSourceSelfHeal(t *testing.T) {
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revisions: []string{"z", "x", "v"},
}
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{{Name: "guestbook-1", Kind: kube.DeploymentKind, Status: v1alpha1.SyncStatusCodeOutOfSync}})
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{{Name: "guestbook-1", Kind: kube.DeploymentKind, Status: v1alpha1.SyncStatusCodeOutOfSync}}, true)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get(context.Background(), "my-app", metav1.GetOptions{})
require.NoError(t, err)
@@ -591,7 +602,7 @@ func TestMultiSourceSelfHeal(t *testing.T) {
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revisions: []string{"z", "x", "v"},
}
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{{Name: "guestbook-1", Kind: kube.DeploymentKind, Status: v1alpha1.SyncStatusCodeOutOfSync}})
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{{Name: "guestbook-1", Kind: kube.DeploymentKind, Status: v1alpha1.SyncStatusCodeOutOfSync}}, true)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get(context.Background(), "my-app", metav1.GetOptions{})
require.NoError(t, err)
@@ -607,7 +618,7 @@ func TestAutoSyncNotAllowEmpty(t *testing.T) {
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{})
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{}, true)
assert.NotNil(t, cond)
}
@@ -620,7 +631,7 @@ func TestAutoSyncAllowEmpty(t *testing.T) {
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{})
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{}, true)
assert.Nil(t, cond)
}
@@ -634,7 +645,7 @@ func TestSkipAutoSync(t *testing.T) {
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
}
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{})
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{}, true)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get(context.Background(), "my-app", metav1.GetOptions{})
require.NoError(t, err)
@@ -649,7 +660,7 @@ func TestSkipAutoSync(t *testing.T) {
Status: v1alpha1.SyncStatusCodeSynced,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{})
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{}, true)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get(context.Background(), "my-app", metav1.GetOptions{})
require.NoError(t, err)
@@ -665,7 +676,7 @@ func TestSkipAutoSync(t *testing.T) {
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{})
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{}, true)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get(context.Background(), "my-app", metav1.GetOptions{})
require.NoError(t, err)
@@ -682,7 +693,7 @@ func TestSkipAutoSync(t *testing.T) {
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{})
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{}, true)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get(context.Background(), "my-app", metav1.GetOptions{})
require.NoError(t, err)
@@ -708,7 +719,7 @@ func TestSkipAutoSync(t *testing.T) {
Status: v1alpha1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{{Name: "guestbook", Kind: kube.DeploymentKind, Status: v1alpha1.SyncStatusCodeOutOfSync}})
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{{Name: "guestbook", Kind: kube.DeploymentKind, Status: v1alpha1.SyncStatusCodeOutOfSync}}, true)
assert.NotNil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get(context.Background(), "my-app", metav1.GetOptions{})
require.NoError(t, err)
@@ -724,7 +735,7 @@ func TestSkipAutoSync(t *testing.T) {
}
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{
{Name: "guestbook", Kind: kube.DeploymentKind, Status: v1alpha1.SyncStatusCodeOutOfSync, RequiresPruning: true},
})
}, true)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get(context.Background(), "my-app", metav1.GetOptions{})
require.NoError(t, err)
@@ -760,7 +771,7 @@ func TestAutoSyncIndicateError(t *testing.T) {
Source: *app.Spec.Source.DeepCopy(),
},
}
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{{Name: "guestbook", Kind: kube.DeploymentKind, Status: v1alpha1.SyncStatusCodeOutOfSync}})
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{{Name: "guestbook", Kind: kube.DeploymentKind, Status: v1alpha1.SyncStatusCodeOutOfSync}}, true)
assert.NotNil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get(context.Background(), "my-app", metav1.GetOptions{})
require.NoError(t, err)
@@ -803,7 +814,7 @@ func TestAutoSyncParameterOverrides(t *testing.T) {
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
},
}
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{{Name: "guestbook", Kind: kube.DeploymentKind, Status: v1alpha1.SyncStatusCodeOutOfSync}})
cond, _ := ctrl.autoSync(app, &syncStatus, []v1alpha1.ResourceStatus{{Name: "guestbook", Kind: kube.DeploymentKind, Status: v1alpha1.SyncStatusCodeOutOfSync}}, true)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get(context.Background(), "my-app", metav1.GetOptions{})
require.NoError(t, err)
@@ -1374,6 +1385,25 @@ func TestNeedRefreshAppStatus(t *testing.T) {
assert.Equal(t, CompareWithRecent, compareWith)
})
t.Run("requesting refresh with delay gives correct compression level", func(t *testing.T) {
needRefresh, _, _ := ctrl.needRefreshAppStatus(app, 1*time.Hour, 2*time.Hour)
assert.False(t, needRefresh)
// use a one-off controller so other tests don't have a manual refresh request
ctrl := newFakeController(&fakeData{apps: []runtime.Object{}}, nil)
// refresh app with a non-nil delay
// use zero-second delay to test the add later logic without waiting in the test
delay := time.Duration(0)
ctrl.requestAppRefresh(app.Name, CompareWithRecent.Pointer(), &delay)
ctrl.processAppComparisonTypeQueueItem()
needRefresh, refreshType, compareWith := ctrl.needRefreshAppStatus(app, 1*time.Hour, 2*time.Hour)
assert.True(t, needRefresh)
assert.Equal(t, v1alpha1.RefreshTypeNormal, refreshType)
assert.Equal(t, CompareWithRecent, compareWith)
})
t.Run("refresh application which status is not reconciled using latest commit", func(t *testing.T) {
app := app.DeepCopy()
needRefresh, _, _ := ctrl.needRefreshAppStatus(app, 1*time.Hour, 2*time.Hour)
@@ -2138,7 +2168,7 @@ func TestHelmValuesObjectHasReplaceStrategy(t *testing.T) {
app,
appModified)
require.NoError(t, err)
assert.Equal(t, `{"status":{"sync":{"comparedTo":{"source":{"helm":{"valuesObject":{"key":["value-modified1"]}}}}}}}`, string(patch))
assert.JSONEq(t, `{"status":{"sync":{"comparedTo":{"source":{"helm":{"valuesObject":{"key":["value-modified1"]}}}}}}}`, string(patch))
}
func TestAppStatusIsReplaced(t *testing.T) {
@@ -2170,3 +2200,79 @@ func TestAppStatusIsReplaced(t *testing.T) {
require.True(t, has)
require.Nil(t, val)
}
func TestAlreadyAttemptSync(t *testing.T) {
app := newFakeApp()
t.Run("same manifest with sync result", func(t *testing.T) {
attempted, _ := alreadyAttemptedSync(app, "sha", []string{}, false, false)
assert.True(t, attempted)
})
t.Run("different manifest with sync result", func(t *testing.T) {
attempted, _ := alreadyAttemptedSync(app, "sha", []string{}, false, true)
assert.False(t, attempted)
})
}
func assertDurationAround(t *testing.T, expected time.Duration, actual time.Duration) {
delta := time.Second / 2
assert.GreaterOrEqual(t, expected, actual-delta)
assert.LessOrEqual(t, expected, actual+delta)
}
func TestSelfHealExponentialBackoff(t *testing.T) {
ctrl := newFakeController(&fakeData{}, nil)
ctrl.selfHealBackOff = &wait.Backoff{
Factor: 3,
Duration: 2 * time.Second,
Cap: 5 * time.Minute,
}
app := &v1alpha1.Application{
Status: v1alpha1.ApplicationStatus{
OperationState: &v1alpha1.OperationState{
Operation: v1alpha1.Operation{
Sync: &v1alpha1.SyncOperation{},
},
},
},
}
testCases := []struct {
attempts int64
finishedAt *metav1.Time
expectedDuration time.Duration
shouldSelfHeal bool
}{{
attempts: 0,
finishedAt: ptr.To(metav1.Now()),
expectedDuration: 0,
shouldSelfHeal: true,
}, {
attempts: 1,
finishedAt: ptr.To(metav1.Now()),
expectedDuration: 2 * time.Second,
shouldSelfHeal: false,
}, {
attempts: 2,
finishedAt: ptr.To(metav1.Now()),
expectedDuration: 6 * time.Second,
shouldSelfHeal: false,
}, {
attempts: 3,
finishedAt: nil,
expectedDuration: 18 * time.Second,
shouldSelfHeal: false,
}}
for i := range testCases {
tc := testCases[i]
t.Run(fmt.Sprintf("test case %d", i), func(t *testing.T) {
app.Status.OperationState.Operation.Sync.SelfHealAttemptsCount = tc.attempts
app.Status.OperationState.FinishedAt = tc.finishedAt
ok, duration := ctrl.shouldSelfHeal(app)
require.Equal(t, ok, tc.shouldSelfHeal)
assertDurationAround(t, tc.expectedDuration, duration)
})
}
}

View File

@@ -197,6 +197,7 @@ type cacheSettings struct {
clusterSettings clustercache.Settings
appInstanceLabelKey string
trackingMethod appv1.TrackingMethod
installationID string
// resourceOverrides provides a list of ignored differences to ignore watched resource updates
resourceOverrides map[string]appv1.ResourceOverride
@@ -225,6 +226,10 @@ func (c *liveStateCache) loadCacheSettings() (*cacheSettings, error) {
if err != nil {
return nil, err
}
installationID, err := c.settingsMgr.GetInstallationID()
if err != nil {
return nil, err
}
resourceUpdatesOverrides, err := c.settingsMgr.GetIgnoreResourceUpdatesOverrides()
if err != nil {
return nil, err
@@ -246,7 +251,7 @@ func (c *liveStateCache) loadCacheSettings() (*cacheSettings, error) {
ResourcesFilter: resourcesFilter,
}
return &cacheSettings{clusterSettings, appInstanceLabelKey, argo.GetTrackingMethod(c.settingsMgr), resourceUpdatesOverrides, ignoreResourceUpdatesEnabled}, nil
return &cacheSettings{clusterSettings, appInstanceLabelKey, argo.GetTrackingMethod(c.settingsMgr), installationID, resourceUpdatesOverrides, ignoreResourceUpdatesEnabled}, nil
}
func asResourceNode(r *clustercache.Resource) appv1.ResourceNode {
@@ -523,7 +528,7 @@ func (c *liveStateCache) getCluster(server string) (clustercache.ClusterCache, e
res.Health, _ = health.GetResourceHealth(un, cacheSettings.clusterSettings.ResourceHealthOverride)
appName := c.resourceTracking.GetAppName(un, cacheSettings.appInstanceLabelKey, cacheSettings.trackingMethod)
appName := c.resourceTracking.GetAppName(un, cacheSettings.appInstanceLabelKey, cacheSettings.trackingMethod, cacheSettings.installationID)
if isRoot && appName != "" {
res.AppName = appName
}

View File

@@ -278,6 +278,32 @@ func populateIstioVirtualServiceInfo(un *unstructured.Unstructured, res *Resourc
res.NetworkingInfo = &v1alpha1.ResourceNetworkingInfo{TargetRefs: targets, ExternalURLs: urls}
}
func isPodInitializedConditionTrue(status *v1.PodStatus) bool {
for _, condition := range status.Conditions {
if condition.Type != v1.PodInitialized {
continue
}
return condition.Status == v1.ConditionTrue
}
return false
}
func isRestartableInitContainer(initContainer *v1.Container) bool {
if initContainer == nil {
return false
}
if initContainer.RestartPolicy == nil {
return false
}
return *initContainer.RestartPolicy == v1.ContainerRestartPolicyAlways
}
func isPodPhaseTerminal(phase v1.PodPhase) bool {
return phase == v1.PodFailed || phase == v1.PodSucceeded
}
func populatePodInfo(un *unstructured.Unstructured, res *ResourceInfo) {
pod := v1.Pod{}
err := runtime.DefaultUnstructuredConverter.FromUnstructured(un.Object, &pod)
@@ -288,7 +314,8 @@ func populatePodInfo(un *unstructured.Unstructured, res *ResourceInfo) {
totalContainers := len(pod.Spec.Containers)
readyContainers := 0
reason := string(pod.Status.Phase)
podPhase := pod.Status.Phase
reason := string(podPhase)
if pod.Status.Reason != "" {
reason = pod.Status.Reason
}
@@ -306,6 +333,21 @@ func populatePodInfo(un *unstructured.Unstructured, res *ResourceInfo) {
res.Images = append(res.Images, image)
}
// If the Pod carries {type:PodScheduled, reason:SchedulingGated}, set reason to 'SchedulingGated'.
for _, condition := range pod.Status.Conditions {
if condition.Type == v1.PodScheduled && condition.Reason == v1.PodReasonSchedulingGated {
reason = v1.PodReasonSchedulingGated
}
}
initContainers := make(map[string]*v1.Container)
for i := range pod.Spec.InitContainers {
initContainers[pod.Spec.InitContainers[i].Name] = &pod.Spec.InitContainers[i]
if isRestartableInitContainer(&pod.Spec.InitContainers[i]) {
totalContainers++
}
}
initializing := false
for i := range pod.Status.InitContainerStatuses {
container := pod.Status.InitContainerStatuses[i]
@@ -313,6 +355,12 @@ func populatePodInfo(un *unstructured.Unstructured, res *ResourceInfo) {
switch {
case container.State.Terminated != nil && container.State.Terminated.ExitCode == 0:
continue
case isRestartableInitContainer(initContainers[container.Name]) &&
container.Started != nil && *container.Started:
if container.Ready {
readyContainers++
}
continue
case container.State.Terminated != nil:
// initialization is failed
if len(container.State.Terminated.Reason) == 0 {
@@ -334,8 +382,7 @@ func populatePodInfo(un *unstructured.Unstructured, res *ResourceInfo) {
}
break
}
if !initializing {
restarts = 0
if !initializing || isPodInitializedConditionTrue(&pod.Status) {
hasRunning := false
for i := len(pod.Status.ContainerStatuses) - 1; i >= 0; i-- {
container := pod.Status.ContainerStatuses[i]
@@ -370,7 +417,9 @@ func populatePodInfo(un *unstructured.Unstructured, res *ResourceInfo) {
// and https://github.com/kubernetes/kubernetes/issues/90358#issuecomment-617859364
if pod.DeletionTimestamp != nil && pod.Status.Reason == "NodeLost" {
reason = "Unknown"
} else if pod.DeletionTimestamp != nil {
// If the pod is being deleted and the pod phase is not succeeded or failed, set the reason to "Terminating".
// See https://github.com/kubernetes/kubectl/issues/1595#issuecomment-2080001023
} else if pod.DeletionTimestamp != nil && !isPodPhaseTerminal(podPhase) {
reason = "Terminating"
}

View File

@@ -285,6 +285,552 @@ func TestGetPodInfo(t *testing.T) {
assert.Equal(t, &v1alpha1.ResourceNetworkingInfo{Labels: map[string]string{"app": "guestbook"}}, info.NetworkingInfo)
}
func TestGetPodWithInitialContainerInfo(t *testing.T) {
pod := strToUnstructured(`
apiVersion: "v1"
kind: "Pod"
metadata:
labels:
app: "app-with-initial-container"
name: "app-with-initial-container-5f46976fdb-vd6rv"
namespace: "default"
ownerReferences:
- apiVersion: "apps/v1"
kind: "ReplicaSet"
name: "app-with-initial-container-5f46976fdb"
spec:
containers:
- image: "alpine:latest"
imagePullPolicy: "Always"
name: "app-with-initial-container"
initContainers:
- image: "alpine:latest"
imagePullPolicy: "Always"
name: "app-with-initial-container-logshipper"
nodeName: "minikube"
status:
containerStatuses:
- image: "alpine:latest"
name: "app-with-initial-container"
ready: true
restartCount: 0
started: true
state:
running:
startedAt: "2024-10-08T08:44:25Z"
initContainerStatuses:
- image: "alpine:latest"
name: "app-with-initial-container-logshipper"
ready: true
restartCount: 0
started: false
state:
terminated:
exitCode: 0
reason: "Completed"
phase: "Running"
`)
info := &ResourceInfo{}
populateNodeInfo(pod, info, []string{})
assert.Equal(t, []v1alpha1.InfoItem{
{Name: "Status Reason", Value: "Running"},
{Name: "Node", Value: "minikube"},
{Name: "Containers", Value: "1/1"},
}, info.Info)
}
func TestGetPodInfoWithSidecar(t *testing.T) {
pod := strToUnstructured(`
apiVersion: v1
kind: Pod
metadata:
labels:
app: app-with-sidecar
name: app-with-sidecar-6664cc788c-lqlrp
namespace: default
ownerReferences:
- apiVersion: apps/v1
kind: ReplicaSet
name: app-with-sidecar-6664cc788c
spec:
containers:
- image: 'docker.m.daocloud.io/library/alpine:latest'
imagePullPolicy: Always
name: app-with-sidecar
initContainers:
- image: 'docker.m.daocloud.io/library/alpine:latest'
imagePullPolicy: Always
name: logshipper
restartPolicy: Always
nodeName: minikube
status:
containerStatuses:
- image: 'docker.m.daocloud.io/library/alpine:latest'
name: app-with-sidecar
ready: true
restartCount: 0
started: true
state:
running:
startedAt: '2024-10-08T08:39:43Z'
initContainerStatuses:
- image: 'docker.m.daocloud.io/library/alpine:latest'
name: logshipper
ready: true
restartCount: 0
started: true
state:
running:
startedAt: '2024-10-08T08:39:40Z'
phase: Running
`)
info := &ResourceInfo{}
populateNodeInfo(pod, info, []string{})
assert.Equal(t, []v1alpha1.InfoItem{
{Name: "Status Reason", Value: "Running"},
{Name: "Node", Value: "minikube"},
{Name: "Containers", Value: "2/2"},
}, info.Info)
}
func TestGetPodInfoWithInitialContainer(t *testing.T) {
pod := strToUnstructured(`
apiVersion: v1
kind: Pod
metadata:
generateName: myapp-long-exist-56b7d8794d-
labels:
app: myapp-long-exist
name: myapp-long-exist-56b7d8794d-pbgrd
namespace: linghao
ownerReferences:
- apiVersion: apps/v1
kind: ReplicaSet
name: myapp-long-exist-56b7d8794d
spec:
containers:
- image: alpine:latest
imagePullPolicy: Always
name: myapp-long-exist
initContainers:
- image: alpine:latest
imagePullPolicy: Always
name: myapp-long-exist-logshipper
nodeName: minikube
status:
containerStatuses:
- image: alpine:latest
name: myapp-long-exist
ready: false
restartCount: 0
started: false
state:
waiting:
reason: PodInitializing
initContainerStatuses:
- image: alpine:latest
name: myapp-long-exist-logshipper
ready: false
restartCount: 0
started: true
state:
running:
startedAt: '2024-10-09T08:03:45Z'
phase: Pending
startTime: '2024-10-09T08:02:39Z'
`)
info := &ResourceInfo{}
populateNodeInfo(pod, info, []string{})
assert.Equal(t, []v1alpha1.InfoItem{
{Name: "Status Reason", Value: "Init:0/1"},
{Name: "Node", Value: "minikube"},
{Name: "Containers", Value: "0/1"},
}, info.Info)
}
// Test pod has 2 restartable init containers, the first one running but not started.
func TestGetPodInfoWithRestartableInitContainer(t *testing.T) {
pod := strToUnstructured(`
apiVersion: v1
kind: Pod
metadata:
name: test1
spec:
initContainers:
- name: restartable-init-1
restartPolicy: Always
- name: restartable-init-2
restartPolicy: Always
containers:
- name: container
nodeName: minikube
status:
phase: Pending
initContainerStatuses:
- name: restartable-init-1
ready: false
restartCount: 3
state:
running: {}
started: false
lastTerminationState:
terminated:
finishedAt: "2023-10-01T00:00:00Z" # Replace with actual time
- name: restartable-init-2
ready: false
state:
waiting: {}
started: false
containerStatuses:
- ready: false
restartCount: 0
state:
waiting: {}
conditions:
- type: ContainersReady
status: "False"
- type: Initialized
status: "False"
`)
info := &ResourceInfo{}
populateNodeInfo(pod, info, []string{})
assert.Equal(t, []v1alpha1.InfoItem{
{Name: "Status Reason", Value: "Init:0/2"},
{Name: "Node", Value: "minikube"},
{Name: "Containers", Value: "0/3"},
{Name: "Restart Count", Value: "3"},
}, info.Info)
}
// Test pod has 2 restartable init containers, the first one started and the second one running but not started.
func TestGetPodInfoWithPartiallyStartedInitContainers(t *testing.T) {
pod := strToUnstructured(`
apiVersion: v1
kind: Pod
metadata:
name: test1
spec:
initContainers:
- name: restartable-init-1
restartPolicy: Always
- name: restartable-init-2
restartPolicy: Always
containers:
- name: container
nodeName: minikube
status:
phase: Pending
initContainerStatuses:
- name: restartable-init-1
ready: false
restartCount: 3
state:
running: {}
started: true
lastTerminationState:
terminated:
finishedAt: "2023-10-01T00:00:00Z" # Replace with actual time
- name: restartable-init-2
ready: false
state:
running: {}
started: false
containerStatuses:
- ready: false
restartCount: 0
state:
waiting: {}
conditions:
- type: ContainersReady
status: "False"
- type: Initialized
status: "False"
`)
info := &ResourceInfo{}
populateNodeInfo(pod, info, []string{})
assert.Equal(t, []v1alpha1.InfoItem{
{Name: "Status Reason", Value: "Init:1/2"},
{Name: "Node", Value: "minikube"},
{Name: "Containers", Value: "0/3"},
{Name: "Restart Count", Value: "3"},
}, info.Info)
}
// Test pod has 2 restartable init containers started and 1 container running
func TestGetPodInfoWithStartedInitContainers(t *testing.T) {
pod := strToUnstructured(`
apiVersion: v1
kind: Pod
metadata:
name: test2
spec:
initContainers:
- name: restartable-init-1
restartPolicy: Always
- name: restartable-init-2
restartPolicy: Always
containers:
- name: container
nodeName: minikube
status:
phase: Running
initContainerStatuses:
- name: restartable-init-1
ready: false
restartCount: 3
state:
running: {}
started: true
lastTerminationState:
terminated:
finishedAt: "2023-10-01T00:00:00Z" # Replace with actual time
- name: restartable-init-2
ready: false
state:
running: {}
started: true
containerStatuses:
- ready: true
restartCount: 4
state:
running: {}
lastTerminationState:
terminated:
finishedAt: "2023-10-01T00:00:00Z" # Replace with actual time
conditions:
- type: ContainersReady
status: "False"
- type: Initialized
status: "True"
`)
info := &ResourceInfo{}
populateNodeInfo(pod, info, []string{})
assert.Equal(t, []v1alpha1.InfoItem{
{Name: "Status Reason", Value: "Running"},
{Name: "Node", Value: "minikube"},
{Name: "Containers", Value: "1/3"},
{Name: "Restart Count", Value: "7"},
}, info.Info)
}
// Test pod has 1 init container restarting and 1 container not running
func TestGetPodInfoWithNormalInitContainer(t *testing.T) {
pod := strToUnstructured(`
apiVersion: v1
kind: Pod
metadata:
name: test7
spec:
initContainers:
- name: init-container
containers:
- name: main-container
nodeName: minikube
status:
phase: podPhase
initContainerStatuses:
- ready: false
restartCount: 3
state:
running: {}
lastTerminationState:
terminated:
finishedAt: "2023-10-01T00:00:00Z" # Replace with the actual time
containerStatuses:
- ready: false
restartCount: 0
state:
waiting: {}
`)
info := &ResourceInfo{}
populateNodeInfo(pod, info, []string{})
assert.Equal(t, []v1alpha1.InfoItem{
{Name: "Status Reason", Value: "Init:0/1"},
{Name: "Node", Value: "minikube"},
{Name: "Containers", Value: "0/1"},
{Name: "Restart Count", Value: "3"},
}, info.Info)
}
// Test pod condition succeed
func TestPodConditionSucceeded(t *testing.T) {
pod := strToUnstructured(`
apiVersion: v1
kind: Pod
metadata:
name: test8
spec:
nodeName: minikube
containers:
- name: container
status:
phase: Succeeded
containerStatuses:
- ready: false
restartCount: 0
state:
terminated:
reason: Completed
exitCode: 0
`)
info := &ResourceInfo{}
populateNodeInfo(pod, info, []string{})
assert.Equal(t, []v1alpha1.InfoItem{
{Name: "Status Reason", Value: "Completed"},
{Name: "Node", Value: "minikube"},
{Name: "Containers", Value: "0/1"},
}, info.Info)
}
// Test pod condition failed
func TestPodConditionFailed(t *testing.T) {
pod := strToUnstructured(`
apiVersion: v1
kind: Pod
metadata:
name: test9
spec:
nodeName: minikube
containers:
- name: container
status:
phase: Failed
containerStatuses:
- ready: false
restartCount: 0
state:
terminated:
reason: Error
exitCode: 1
`)
info := &ResourceInfo{}
populateNodeInfo(pod, info, []string{})
assert.Equal(t, []v1alpha1.InfoItem{
{Name: "Status Reason", Value: "Error"},
{Name: "Node", Value: "minikube"},
{Name: "Containers", Value: "0/1"},
}, info.Info)
}
// Test pod condition succeed with deletion
func TestPodConditionSucceededWithDeletion(t *testing.T) {
pod := strToUnstructured(`
apiVersion: v1
kind: Pod
metadata:
name: test10
deletionTimestamp: "2023-10-01T00:00:00Z"
spec:
nodeName: minikube
containers:
- name: container
status:
phase: Succeeded
containerStatuses:
- ready: false
restartCount: 0
state:
terminated:
reason: Completed
exitCode: 0
`)
info := &ResourceInfo{}
populateNodeInfo(pod, info, []string{})
assert.Equal(t, []v1alpha1.InfoItem{
{Name: "Status Reason", Value: "Completed"},
{Name: "Node", Value: "minikube"},
{Name: "Containers", Value: "0/1"},
}, info.Info)
}
// Test pod condition running with deletion
func TestPodConditionRunningWithDeletion(t *testing.T) {
pod := strToUnstructured(`
apiVersion: v1
kind: Pod
metadata:
name: test11
deletionTimestamp: "2023-10-01T00:00:00Z"
spec:
nodeName: minikube
containers:
- name: container
status:
phase: Running
containerStatuses:
- ready: false
restartCount: 0
state:
running: {}
`)
info := &ResourceInfo{}
populateNodeInfo(pod, info, []string{})
assert.Equal(t, []v1alpha1.InfoItem{
{Name: "Status Reason", Value: "Terminating"},
{Name: "Node", Value: "minikube"},
{Name: "Containers", Value: "0/1"},
}, info.Info)
}
// Test pod condition pending with deletion
func TestPodConditionPendingWithDeletion(t *testing.T) {
pod := strToUnstructured(`
apiVersion: v1
kind: Pod
metadata:
name: test12
deletionTimestamp: "2023-10-01T00:00:00Z"
spec:
nodeName: minikube
containers:
- name: container
status:
phase: Pending
`)
info := &ResourceInfo{}
populateNodeInfo(pod, info, []string{})
assert.Equal(t, []v1alpha1.InfoItem{
{Name: "Status Reason", Value: "Terminating"},
{Name: "Node", Value: "minikube"},
{Name: "Containers", Value: "0/1"},
}, info.Info)
}
// Test PodScheduled condition with reason SchedulingGated
func TestPodScheduledWithSchedulingGated(t *testing.T) {
pod := strToUnstructured(`
apiVersion: v1
kind: Pod
metadata:
name: test13
spec:
nodeName: minikube
containers:
- name: container1
- name: container2
status:
phase: podPhase
conditions:
- type: PodScheduled
status: "False"
reason: SchedulingGated
`)
info := &ResourceInfo{}
populateNodeInfo(pod, info, []string{})
assert.Equal(t, []v1alpha1.InfoItem{
{Name: "Status Reason", Value: "SchedulingGated"},
{Name: "Node", Value: "minikube"},
{Name: "Containers", Value: "0/2"},
}, info.Info)
}
func TestGetNodeInfo(t *testing.T) {
node := strToUnstructured(`
apiVersion: v1

View File

@@ -51,7 +51,7 @@ func (ctrl *ApplicationController) executePostDeleteHooks(app *v1alpha1.Applicat
revisions = append(revisions, src.TargetRevision)
}
targets, _, err := ctrl.appStateManager.GetRepoObjs(app, app.Spec.GetSources(), appLabelKey, revisions, false, false, false, proj, false)
targets, _, _, err := ctrl.appStateManager.GetRepoObjs(app, app.Spec.GetSources(), appLabelKey, revisions, false, false, false, proj, false)
if err != nil {
return false, err
}

View File

@@ -6,6 +6,7 @@ import (
"fmt"
"net/http"
"os"
"slices"
"strconv"
"time"
@@ -54,7 +55,8 @@ const (
var (
descAppDefaultLabels = []string{"namespace", "name", "project"}
descAppLabels *prometheus.Desc
descAppLabels *prometheus.Desc
descAppConditions *prometheus.Desc
descAppInfo = prometheus.NewDesc(
"argocd_app_info",
@@ -62,6 +64,7 @@ var (
append(descAppDefaultLabels, "autosync_enabled", "repo", "dest_server", "dest_namespace", "sync_status", "health_status", "operation"),
nil,
)
// Deprecated
descAppCreated = prometheus.NewDesc(
"argocd_app_created_time",
@@ -144,7 +147,7 @@ var (
)
// NewMetricsServer returns a new prometheus server which collects application metrics
func NewMetricsServer(addr string, appLister applister.ApplicationLister, appFilter func(obj interface{}) bool, healthCheck func(r *http.Request) error, appLabels []string) (*MetricsServer, error) {
func NewMetricsServer(addr string, appLister applister.ApplicationLister, appFilter func(obj interface{}) bool, healthCheck func(r *http.Request) error, appLabels []string, appConditions []string) (*MetricsServer, error) {
hostname, err := os.Hostname()
if err != nil {
return nil, err
@@ -160,8 +163,17 @@ func NewMetricsServer(addr string, appLister applister.ApplicationLister, appFil
)
}
if len(appConditions) > 0 {
descAppConditions = prometheus.NewDesc(
"argocd_app_condition",
"Report application conditions.",
append(descAppDefaultLabels, "condition"),
nil,
)
}
mux := http.NewServeMux()
registry := NewAppRegistry(appLister, appFilter, appLabels)
registry := NewAppRegistry(appLister, appFilter, appLabels, appConditions)
mux.Handle(MetricsPath, promhttp.HandlerFor(prometheus.Gatherers{
// contains app controller specific metrics
@@ -293,24 +305,26 @@ func (m *MetricsServer) SetExpiration(cacheExpiration time.Duration) error {
}
type appCollector struct {
store applister.ApplicationLister
appFilter func(obj interface{}) bool
appLabels []string
store applister.ApplicationLister
appFilter func(obj interface{}) bool
appLabels []string
appConditions []string
}
// NewAppCollector returns a prometheus collector for application metrics
func NewAppCollector(appLister applister.ApplicationLister, appFilter func(obj interface{}) bool, appLabels []string) prometheus.Collector {
func NewAppCollector(appLister applister.ApplicationLister, appFilter func(obj interface{}) bool, appLabels []string, appConditions []string) prometheus.Collector {
return &appCollector{
store: appLister,
appFilter: appFilter,
appLabels: appLabels,
store: appLister,
appFilter: appFilter,
appLabels: appLabels,
appConditions: appConditions,
}
}
// NewAppRegistry creates a new prometheus registry that collects applications
func NewAppRegistry(appLister applister.ApplicationLister, appFilter func(obj interface{}) bool, appLabels []string) *prometheus.Registry {
func NewAppRegistry(appLister applister.ApplicationLister, appFilter func(obj interface{}) bool, appLabels []string, appConditions []string) *prometheus.Registry {
registry := prometheus.NewRegistry()
registry.MustRegister(NewAppCollector(appLister, appFilter, appLabels))
registry.MustRegister(NewAppCollector(appLister, appFilter, appLabels, appConditions))
return registry
}
@@ -319,6 +333,9 @@ func (c *appCollector) Describe(ch chan<- *prometheus.Desc) {
if len(c.appLabels) > 0 {
ch <- descAppLabels
}
if len(c.appConditions) > 0 {
ch <- descAppConditions
}
ch <- descAppInfo
ch <- descAppSyncStatusCode
ch <- descAppHealthStatus
@@ -383,6 +400,19 @@ func (c *appCollector) collectApps(ch chan<- prometheus.Metric, app *argoappv1.A
addGauge(descAppLabels, 1, labelValues...)
}
if len(c.appConditions) > 0 {
conditionCount := make(map[string]int)
for _, condition := range app.Status.Conditions {
if slices.Contains(c.appConditions, condition.Type) {
conditionCount[condition.Type]++
}
}
for conditionType, count := range conditionCount {
addGauge(descAppConditions, float64(count), conditionType)
}
}
// Deprecated controller metrics
if os.Getenv(EnvVarLegacyControllerMetrics) == "true" {
addGauge(descAppCreated, float64(app.CreationTimestamp.Unix()))
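As an aside (not part of the change), the condition aggregation above can be reproduced in isolation; a minimal runnable sketch in which the condition names mirror fakeApp4 from the tests below, showing how repeated conditions of an opted-in type collapse into one gauge value per type:

package main

import (
    "fmt"
    "slices"
)

func main() {
    // Condition types the operator opted into reporting (the appConditions argument).
    reported := []string{"OrphanedResourceWarning", "ExcludedResourceWarning"}
    // Condition types present on the application status, mirroring fakeApp4.
    statusConditions := []string{
        "OrphanedResourceWarning",
        "ExcludedResourceWarning",
        "ExcludedResourceWarning",
    }

    // Same counting strategy as collectApps: one bucket per opted-in condition type.
    conditionCount := make(map[string]int)
    for _, c := range statusConditions {
        if slices.Contains(reported, c) {
            conditionCount[c]++
        }
    }
    for conditionType, count := range conditionCount {
        // Each bucket becomes one argocd_app_condition sample with a "condition" label.
        fmt.Printf("argocd_app_condition{condition=%q} %d\n", conditionType, count)
    }
}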

View File

@@ -116,6 +116,41 @@ status:
status: Degraded
`
const fakeApp4 = `
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: my-app-4
namespace: argocd
labels:
team-name: my-team
team-bu: bu-id
argoproj.io/cluster: test-cluster
spec:
destination:
namespace: dummy-namespace
server: https://localhost:6443
project: important-project
source:
path: some/path
repoURL: https://github.com/argoproj/argocd-example-apps.git
status:
sync:
status: OutOfSync
health:
status: Degraded
conditions:
- lastTransitionTime: "2024-08-07T12:25:40Z"
message: Application has 1 orphaned resources
type: OrphanedResourceWarning
- lastTransitionTime: "2024-08-07T12:25:40Z"
message: Resource Pod standalone-pod is excluded in the settings
type: ExcludedResourceWarning
- lastTransitionTime: "2024-08-07T12:25:40Z"
message: Resource Endpoint raw-endpoint is excluded in the settings
type: ExcludedResourceWarning
`
const fakeDefaultApp = `
apiVersion: argoproj.io/v1alpha1
kind: Application
@@ -179,7 +214,7 @@ func newFakeLister(fakeAppYAMLs ...string) (context.CancelFunc, applister.Applic
func testApp(t *testing.T, fakeAppYAMLs []string, expectedResponse string) {
t.Helper()
testMetricServer(t, fakeAppYAMLs, expectedResponse, []string{})
testMetricServer(t, fakeAppYAMLs, expectedResponse, []string{}, []string{})
}
type fakeClusterInfo struct {
@@ -194,15 +229,17 @@ type TestMetricServerConfig struct {
FakeAppYAMLs []string
ExpectedResponse string
AppLabels []string
AppConditions []string
ClustersInfo []gitopsCache.ClusterInfo
}
func testMetricServer(t *testing.T, fakeAppYAMLs []string, expectedResponse string, appLabels []string) {
func testMetricServer(t *testing.T, fakeAppYAMLs []string, expectedResponse string, appLabels []string, appConditions []string) {
t.Helper()
cfg := TestMetricServerConfig{
FakeAppYAMLs: fakeAppYAMLs,
ExpectedResponse: expectedResponse,
AppLabels: appLabels,
AppConditions: appConditions,
ClustersInfo: []gitopsCache.ClusterInfo{},
}
runTest(t, cfg)
@@ -212,7 +249,7 @@ func runTest(t *testing.T, cfg TestMetricServerConfig) {
t.Helper()
cancel, appLister := newFakeLister(cfg.FakeAppYAMLs...)
defer cancel()
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, cfg.AppLabels)
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, cfg.AppLabels, cfg.AppConditions)
require.NoError(t, err)
if len(cfg.ClustersInfo) > 0 {
@@ -303,7 +340,61 @@ argocd_app_labels{label_non_existing="",name="my-app-3",namespace="argocd",proje
for _, c := range cases {
c := c
t.Run(c.description, func(t *testing.T) {
testMetricServer(t, c.applications, c.responseContains, c.metricLabels)
testMetricServer(t, c.applications, c.responseContains, c.metricLabels, []string{})
})
}
}
func TestMetricConditions(t *testing.T) {
type testCases struct {
testCombination
description string
metricConditions []string
}
cases := []testCases{
{
description: "metric will only output OrphanedResourceWarning",
metricConditions: []string{"OrphanedResourceWarning"},
testCombination: testCombination{
applications: []string{fakeApp4},
responseContains: `
# HELP argocd_app_condition Report application conditions.
# TYPE argocd_app_condition gauge
argocd_app_condition{condition="OrphanedResourceWarning",name="my-app-4",namespace="argocd",project="important-project"} 1
`,
},
},
{
description: "metric will only output ExcludedResourceWarning",
metricConditions: []string{"ExcludedResourceWarning"},
testCombination: testCombination{
applications: []string{fakeApp4},
responseContains: `
# HELP argocd_app_condition Report application conditions.
# TYPE argocd_app_condition gauge
argocd_app_condition{condition="ExcludedResourceWarning",name="my-app-4",namespace="argocd",project="important-project"} 2
`,
},
},
{
description: "metric will output both OrphanedResourceWarning and ExcludedResourceWarning",
metricConditions: []string{"ExcludedResourceWarning", "OrphanedResourceWarning"},
testCombination: testCombination{
applications: []string{fakeApp4},
responseContains: `
# HELP argocd_app_condition Report application conditions.
# TYPE argocd_app_condition gauge
argocd_app_condition{condition="OrphanedResourceWarning",name="my-app-4",namespace="argocd",project="important-project"} 1
argocd_app_condition{condition="ExcludedResourceWarning",name="my-app-4",namespace="argocd",project="important-project"} 2
`,
},
},
}
for _, c := range cases {
c := c
t.Run(c.description, func(t *testing.T) {
testMetricServer(t, c.applications, c.responseContains, []string{}, c.metricConditions)
})
}
}
@@ -335,7 +426,7 @@ argocd_app_sync_status{name="my-app",namespace="argocd",project="important-proje
func TestMetricsSyncCounter(t *testing.T) {
cancel, appLister := newFakeLister()
defer cancel()
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{})
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{}, []string{})
require.NoError(t, err)
appSyncTotal := `
@@ -387,7 +478,7 @@ func assertMetricsNotPrinted(t *testing.T, expectedLines, body string) {
func TestReconcileMetrics(t *testing.T) {
cancel, appLister := newFakeLister()
defer cancel()
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{})
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{}, []string{})
require.NoError(t, err)
appReconcileMetrics := `
@@ -420,7 +511,7 @@ argocd_app_reconcile_count{dest_server="https://localhost:6443",namespace="argoc
func TestMetricsReset(t *testing.T) {
cancel, appLister := newFakeLister()
defer cancel()
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{})
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{}, []string{})
require.NoError(t, err)
appSyncTotal := `
@@ -457,7 +548,7 @@ argocd_app_sync_total{dest_server="https://localhost:6443",name="my-app",namespa
func TestWorkqueueMetrics(t *testing.T) {
cancel, appLister := newFakeLister()
defer cancel()
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{})
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{}, []string{})
require.NoError(t, err)
expectedMetrics := `
@@ -492,7 +583,7 @@ workqueue_unfinished_work_seconds{controller="test",name="test"}
func TestGoMetrics(t *testing.T) {
cancel, appLister := newFakeLister()
defer cancel()
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{})
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{}, []string{})
require.NoError(t, err)
expectedMetrics := `

View File

@@ -70,7 +70,7 @@ type managedResource struct {
type AppStateManager interface {
CompareAppState(app *v1alpha1.Application, project *v1alpha1.AppProject, revisions []string, sources []v1alpha1.ApplicationSource, noCache bool, noRevisionCache bool, localObjects []string, hasMultipleSources bool, rollback bool) (*comparisonResult, error)
SyncAppState(app *v1alpha1.Application, state *v1alpha1.OperationState)
GetRepoObjs(app *v1alpha1.Application, sources []v1alpha1.ApplicationSource, appLabelKey string, revisions []string, noCache, noRevisionCache, verifySignature bool, proj *v1alpha1.AppProject, rollback bool) ([]*unstructured.Unstructured, []*apiclient.ManifestResponse, error)
GetRepoObjs(app *v1alpha1.Application, sources []v1alpha1.ApplicationSource, appLabelKey string, revisions []string, noCache, noRevisionCache, verifySignature bool, proj *v1alpha1.AppProject, rollback bool) ([]*unstructured.Unstructured, []*apiclient.ManifestResponse, bool, error)
}
// comparisonResult holds the state of an application after the reconciliation
@@ -88,6 +88,7 @@ type comparisonResult struct {
timings map[string]time.Duration
diffResultList *diff.DiffResultList
hasPostDeleteHooks bool
revisionUpdated bool
}
func (res *comparisonResult) GetSyncStatus() *v1alpha1.SyncStatus {
@@ -123,51 +124,56 @@ type appStateManager struct {
// task to the repo-server. It returns the list of generated manifests as unstructured
// objects. It also returns the full response from all calls to the repo server as the
// second argument.
func (m *appStateManager) GetRepoObjs(app *v1alpha1.Application, sources []v1alpha1.ApplicationSource, appLabelKey string, revisions []string, noCache, noRevisionCache, verifySignature bool, proj *v1alpha1.AppProject, rollback bool) ([]*unstructured.Unstructured, []*apiclient.ManifestResponse, error) {
func (m *appStateManager) GetRepoObjs(app *v1alpha1.Application, sources []v1alpha1.ApplicationSource, appLabelKey string, revisions []string, noCache, noRevisionCache, verifySignature bool, proj *v1alpha1.AppProject, rollback bool) ([]*unstructured.Unstructured, []*apiclient.ManifestResponse, bool, error) {
ts := stats.NewTimingStats()
helmRepos, err := m.db.ListHelmRepositories(context.Background())
if err != nil {
return nil, nil, fmt.Errorf("failed to list Helm repositories: %w", err)
return nil, nil, false, fmt.Errorf("failed to list Helm repositories: %w", err)
}
permittedHelmRepos, err := argo.GetPermittedRepos(proj, helmRepos)
if err != nil {
return nil, nil, fmt.Errorf("failed to get permitted Helm repositories for project %q: %w", proj.Name, err)
return nil, nil, false, fmt.Errorf("failed to get permitted Helm repositories for project %q: %w", proj.Name, err)
}
ts.AddCheckpoint("repo_ms")
helmRepositoryCredentials, err := m.db.GetAllHelmRepositoryCredentials(context.Background())
if err != nil {
return nil, nil, fmt.Errorf("failed to get Helm credentials: %w", err)
return nil, nil, false, fmt.Errorf("failed to get Helm credentials: %w", err)
}
permittedHelmCredentials, err := argo.GetPermittedReposCredentials(proj, helmRepositoryCredentials)
if err != nil {
return nil, nil, fmt.Errorf("failed to get permitted Helm credentials for project %q: %w", proj.Name, err)
return nil, nil, false, fmt.Errorf("failed to get permitted Helm credentials for project %q: %w", proj.Name, err)
}
enabledSourceTypes, err := m.settingsMgr.GetEnabledSourceTypes()
if err != nil {
return nil, nil, fmt.Errorf("failed to get enabled source types: %w", err)
return nil, nil, false, fmt.Errorf("failed to get enabled source types: %w", err)
}
ts.AddCheckpoint("plugins_ms")
kustomizeSettings, err := m.settingsMgr.GetKustomizeSettings()
if err != nil {
return nil, nil, fmt.Errorf("failed to get Kustomize settings: %w", err)
return nil, nil, false, fmt.Errorf("failed to get Kustomize settings: %w", err)
}
helmOptions, err := m.settingsMgr.GetHelmSettings()
if err != nil {
return nil, nil, fmt.Errorf("failed to get Helm settings: %w", err)
return nil, nil, false, fmt.Errorf("failed to get Helm settings: %w", err)
}
installationID, err := m.settingsMgr.GetInstallationID()
if err != nil {
return nil, nil, false, fmt.Errorf("failed to get installation ID: %w", err)
}
ts.AddCheckpoint("build_options_ms")
serverVersion, apiResources, err := m.liveStateCache.GetVersionsInfo(app.Spec.Destination.Server)
if err != nil {
return nil, nil, fmt.Errorf("failed to get cluster version for cluster %q: %w", app.Spec.Destination.Server, err)
return nil, nil, false, fmt.Errorf("failed to get cluster version for cluster %q: %w", app.Spec.Destination.Server, err)
}
conn, repoClient, err := m.repoClientset.NewRepoServerClient()
if err != nil {
return nil, nil, fmt.Errorf("failed to connect to repo server: %w", err)
return nil, nil, false, fmt.Errorf("failed to connect to repo server: %w", err)
}
defer io.Close(conn)
@@ -179,20 +185,26 @@ func (m *appStateManager) GetRepoObjs(app *v1alpha1.Application, sources []v1alp
// revisions for the rollback
refSources, err := argo.GetRefSources(context.Background(), sources, app.Spec.Project, m.db.GetRepository, revisions, rollback)
if err != nil {
return nil, nil, fmt.Errorf("failed to get ref sources: %w", err)
return nil, nil, false, fmt.Errorf("failed to get ref sources: %w", err)
}
revisionUpdated := false
atLeastOneRevisionIsNotPossibleToBeUpdated := false
keyManifestGenerateAnnotationVal, keyManifestGenerateAnnotationExists := app.Annotations[v1alpha1.AnnotationKeyManifestGeneratePaths]
for i, source := range sources {
if len(revisions) < len(sources) || revisions[i] == "" {
revisions[i] = source.TargetRevision
}
repo, err := m.db.GetRepository(context.Background(), source.RepoURL, proj.Name)
if err != nil {
return nil, nil, fmt.Errorf("failed to get repo %q: %w", source.RepoURL, err)
return nil, nil, false, fmt.Errorf("failed to get repo %q: %w", source.RepoURL, err)
}
kustomizeOptions, err := kustomizeSettings.GetOptions(source)
if err != nil {
return nil, nil, fmt.Errorf("failed to get Kustomize options for source %d of %d: %w", i+1, len(sources), err)
return nil, nil, false, fmt.Errorf("failed to get Kustomize options for source %d of %d: %w", i+1, len(sources), err)
}
syncedRevision := app.Status.Sync.Revision
@@ -204,13 +216,15 @@ func (m *appStateManager) GetRepoObjs(app *v1alpha1.Application, sources []v1alp
}
}
val, ok := app.Annotations[v1alpha1.AnnotationKeyManifestGeneratePaths]
if !source.IsHelm() && syncedRevision != "" && ok && val != "" {
revision := revisions[i]
if !source.IsHelm() && syncedRevision != "" && keyManifestGenerateAnnotationExists && keyManifestGenerateAnnotationVal != "" {
// Validate the manifest-generate-paths annotation so manifest generation can be skipped when the referenced paths have not changed.
_, err = repoClient.UpdateRevisionForPaths(context.Background(), &apiclient.UpdateRevisionForPathsRequest{
updateRevisionResult, err := repoClient.UpdateRevisionForPaths(context.Background(), &apiclient.UpdateRevisionForPathsRequest{
Repo: repo,
Revision: revisions[i],
Revision: revision,
SyncedRevision: syncedRevision,
NoRevisionCache: noRevisionCache,
Paths: path.GetAppRefreshPaths(app),
AppLabelKey: appLabelKey,
AppName: app.InstanceName(m.namespace),
@@ -221,17 +235,29 @@ func (m *appStateManager) GetRepoObjs(app *v1alpha1.Application, sources []v1alp
TrackingMethod: string(argo.GetTrackingMethod(m.settingsMgr)),
RefSources: refSources,
HasMultipleSources: app.Spec.HasMultipleSources(),
InstallationID: installationID,
})
if err != nil {
return nil, nil, fmt.Errorf("failed to compare revisions for source %d of %d: %w", i+1, len(sources), err)
return nil, nil, false, fmt.Errorf("failed to compare revisions for source %d of %d: %w", i+1, len(sources), err)
}
if updateRevisionResult.Changes {
revisionUpdated = true
}
// Manifest generation should use the same revision as UpdateRevisionForPaths, because the HEAD revision may differ between the two calls
if updateRevisionResult.Revision != "" {
revision = updateRevisionResult.Revision
}
} else {
// At least one revision could not be updated via the path-based check; revisionUpdated is forced to true after the loop.
atLeastOneRevisionIsNotPossibleToBeUpdated = true
}
log.Debugf("Generating Manifest for source %s revision %s", source, revisions[i])
log.Debugf("Generating Manifest for source %s revision %s", source, revision)
manifestInfo, err := repoClient.GenerateManifest(context.Background(), &apiclient.ManifestRequest{
Repo: repo,
Repos: permittedHelmRepos,
Revision: revisions[i],
Revision: revision,
NoCache: noCache,
NoRevisionCache: noRevisionCache,
AppLabelKey: appLabelKey,
@@ -250,14 +276,15 @@ func (m *appStateManager) GetRepoObjs(app *v1alpha1.Application, sources []v1alp
RefSources: refSources,
ProjectName: proj.Name,
ProjectSourceRepos: proj.Spec.SourceRepos,
InstallationID: installationID,
})
if err != nil {
return nil, nil, fmt.Errorf("failed to generate manifest for source %d of %d: %w", i+1, len(sources), err)
return nil, nil, false, fmt.Errorf("failed to generate manifest for source %d of %d: %w", i+1, len(sources), err)
}
targetObj, err := unmarshalManifests(manifestInfo.Manifests)
if err != nil {
return nil, nil, fmt.Errorf("failed to unmarshal manifests for source %d of %d: %w", i+1, len(sources), err)
return nil, nil, false, fmt.Errorf("failed to unmarshal manifests for source %d of %d: %w", i+1, len(sources), err)
}
targetObjs = append(targetObjs, targetObj...)
manifestInfos = append(manifestInfos, manifestInfo)
@@ -270,7 +297,13 @@ func (m *appStateManager) GetRepoObjs(app *v1alpha1.Application, sources []v1alp
}
logCtx = logCtx.WithField("time_ms", time.Since(ts.StartTime).Milliseconds())
logCtx.Info("GetRepoObjs stats")
return targetObjs, manifestInfos, nil
// If the manifest-generate-paths annotation does not exist, always report the revision as updated so self-heal runs when manifests change.
if atLeastOneRevisionIsNotPossibleToBeUpdated {
revisionUpdated = true
}
return targetObjs, manifestInfos, revisionUpdated, nil
}
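In short, the new boolean return answers "could the manifests have changed?": it is true whenever the path-based check reported changes, or whenever that check could not run for some source (Helm source, empty synced revision, or no manifest-generate-paths annotation). A rough, illustrative condensation of that rule, not part of the change:

package main

import "fmt"

// revisionUpdatedFor condenses the flag-setting logic above: checkRan is true only
// when UpdateRevisionForPaths could be called for every source; pathsChanged reports
// whether any of those calls saw changes under the watched paths.
func revisionUpdatedFor(checkRan, pathsChanged bool) bool {
    if !checkRan {
        // Without the annotation (or with a Helm source / empty synced revision),
        // assume the manifests may have changed so self-heal is never skipped.
        return true
    }
    return pathsChanged
}

func main() {
    fmt.Println(revisionUpdatedFor(true, false))  // false: watched paths unchanged
    fmt.Println(revisionUpdatedFor(false, false)) // true: check skipped, treat as updated
}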
func unmarshalManifests(manifests []string) ([]*unstructured.Unstructured, error) {
@@ -327,20 +360,24 @@ func DeduplicateTargetObjects(
// getComparisonSettings will return the system level settings related to the
// diff/normalization process.
func (m *appStateManager) getComparisonSettings() (string, map[string]v1alpha1.ResourceOverride, *settings.ResourcesFilter, error) {
func (m *appStateManager) getComparisonSettings() (string, map[string]v1alpha1.ResourceOverride, *settings.ResourcesFilter, string, error) {
resourceOverrides, err := m.settingsMgr.GetResourceOverrides()
if err != nil {
return "", nil, nil, err
return "", nil, nil, "", err
}
appLabelKey, err := m.settingsMgr.GetAppInstanceLabelKey()
if err != nil {
return "", nil, nil, err
return "", nil, nil, "", err
}
resFilter, err := m.settingsMgr.GetResourcesFilter()
if err != nil {
return "", nil, nil, err
return "", nil, nil, "", err
}
return appLabelKey, resourceOverrides, resFilter, nil
installationID, err := m.settingsMgr.GetInstallationID()
if err != nil {
return "", nil, nil, "", err
}
return appLabelKey, resourceOverrides, resFilter, installationID, nil
}
// verifyGnuPGSignature verifies the result of a GnuPG operation for a given git
@@ -391,7 +428,7 @@ func isManagedNamespace(ns *unstructured.Unstructured, app *v1alpha1.Application
// revision and overrides in the app spec.
func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1alpha1.AppProject, revisions []string, sources []v1alpha1.ApplicationSource, noCache bool, noRevisionCache bool, localManifests []string, hasMultipleSources bool, rollback bool) (*comparisonResult, error) {
ts := stats.NewTimingStats()
appLabelKey, resourceOverrides, resFilter, err := m.getComparisonSettings()
appLabelKey, resourceOverrides, resFilter, installationID, err := m.getComparisonSettings()
ts.AddCheckpoint("settings_ms")
@@ -420,7 +457,7 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1
// When signature keys are defined in the project spec, we need to verify the signature on the Git revision
verifySignature := false
if project.Spec.SignatureKeys != nil && len(project.Spec.SignatureKeys) > 0 && gpg.IsGPGEnabled() {
if len(project.Spec.SignatureKeys) > 0 && gpg.IsGPGEnabled() {
verifySignature = true
}
@@ -437,6 +474,8 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1
var manifestInfos []*apiclient.ManifestResponse
targetNsExists := false
var revisionUpdated bool
if len(localManifests) == 0 {
// If the length of revisions is not same as the length of sources,
// we take the revisions from the sources directly for all the sources.
@@ -447,7 +486,7 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1
}
}
targetObjs, manifestInfos, err = m.GetRepoObjs(app, sources, appLabelKey, revisions, noCache, noRevisionCache, verifySignature, project, rollback)
targetObjs, manifestInfos, revisionUpdated, err = m.GetRepoObjs(app, sources, appLabelKey, revisions, noCache, noRevisionCache, verifySignature, project, rollback)
if err != nil {
targetObjs = make([]*unstructured.Unstructured, 0)
msg := fmt.Sprintf("Failed to load target state: %s", err.Error())
@@ -557,7 +596,7 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1
for _, liveObj := range liveObjByKey {
if liveObj != nil {
appInstanceName := m.resourceTracking.GetAppName(liveObj, appLabelKey, trackingMethod)
appInstanceName := m.resourceTracking.GetAppName(liveObj, appLabelKey, trackingMethod, installationID)
if appInstanceName != "" && appInstanceName != app.InstanceName(m.namespace) {
fqInstanceName := strings.ReplaceAll(appInstanceName, "_", "/")
conditions = append(conditions, v1alpha1.ApplicationCondition{
@@ -696,7 +735,7 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1
}
gvk := obj.GroupVersionKind()
isSelfReferencedObj := m.isSelfReferencedObj(liveObj, targetObj, app.GetName(), appLabelKey, trackingMethod)
isSelfReferencedObj := m.isSelfReferencedObj(liveObj, targetObj, app.GetName(), appLabelKey, trackingMethod, installationID)
resState := v1alpha1.ResourceStatus{
Namespace: obj.GetNamespace(),
@@ -838,6 +877,7 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1
diffConfig: diffConfig,
diffResultList: diffResults,
hasPostDeleteHooks: hasPostDeleteHooks,
revisionUpdated: revisionUpdated,
}
if hasMultipleSources {
@@ -895,9 +935,7 @@ func useDiffCache(noCache bool, manifestInfos []*apiclient.ManifestResponse, sou
return false
}
currentSpec := app.BuildComparedToStatus()
specChanged := !reflect.DeepEqual(app.Status.Sync.ComparedTo, currentSpec)
if specChanged {
if !specEqualsCompareTo(app.Spec, app.Status.Sync.ComparedTo) {
log.WithField("useDiffCache", "false").Debug("specChanged")
return false
}
@@ -906,6 +944,29 @@ func useDiffCache(noCache bool, manifestInfos []*apiclient.ManifestResponse, sou
return true
}
// specEqualsCompareTo compares the application spec to the comparedTo status. It normalizes the destination to match
// the comparedTo destination before comparing. It does not mutate the original spec or comparedTo.
func specEqualsCompareTo(spec v1alpha1.ApplicationSpec, comparedTo v1alpha1.ComparedTo) bool {
// Make a copy to be sure we don't mutate the original.
specCopy := spec.DeepCopy()
currentSpec := specCopy.BuildComparedToStatus()
// The spec might have been augmented to include both server and name, so change it to match the comparedTo before
// comparing.
if comparedTo.Destination.Server == "" {
currentSpec.Destination.Server = ""
}
if comparedTo.Destination.Name == "" {
currentSpec.Destination.Name = ""
}
// Set IsServerInferred to false on both, because that field is not important for comparison.
comparedTo.Destination.SetIsServerInferred(false)
currentSpec.Destination.SetIsServerInferred(false)
return reflect.DeepEqual(comparedTo, currentSpec)
}
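For intuition, a small self-contained sketch of the normalization above, using simplified stand-in structs rather than the real v1alpha1 types (values are illustrative): a spec whose destination was augmented with both server and name still matches a comparedTo that recorded only the server.

package main

import (
    "fmt"
    "reflect"
)

// destination is a simplified stand-in for the destination portion of ApplicationSpec / ComparedTo.
type destination struct {
    Server    string
    Name      string
    Namespace string
}

func main() {
    // Controller-augmented spec: both Server and Name are populated.
    current := destination{
        Server:    "https://kubernetes.default.svc",
        Name:      "httpbin",
        Namespace: "httpbin",
    }
    // comparedTo recorded only the server at comparison time.
    recorded := destination{
        Server:    "https://kubernetes.default.svc",
        Namespace: "httpbin",
    }

    // Blank out whichever of server/name the recorded comparedTo never had, then compare.
    if recorded.Server == "" {
        current.Server = ""
    }
    if recorded.Name == "" {
        current.Name = ""
    }
    fmt.Println(reflect.DeepEqual(recorded, current)) // true: the diff cache can be reused
}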
func (m *appStateManager) persistRevisionHistory(
app *v1alpha1.Application,
revision string,
@@ -1000,7 +1061,7 @@ func NewAppStateManager(
// group and kind) match the properties of the live object, or if the tracking method
// used does not provide the required properties for matching.
// Reference: https://github.com/argoproj/argo-cd/issues/8683
func (m *appStateManager) isSelfReferencedObj(live, config *unstructured.Unstructured, appName, appLabelKey string, trackingMethod v1alpha1.TrackingMethod) bool {
func (m *appStateManager) isSelfReferencedObj(live, config *unstructured.Unstructured, appName, appLabelKey string, trackingMethod v1alpha1.TrackingMethod, installationID string) bool {
if live == nil {
return true
}
@@ -1033,7 +1094,7 @@ func (m *appStateManager) isSelfReferencedObj(live, config *unstructured.Unstruc
// to match the properties from the live object. Cluster scoped objects
// carry the app's destination namespace in the tracking annotation,
// but are unique in GVK + name combination.
appInstance := m.resourceTracking.GetAppInstance(live, appLabelKey, trackingMethod)
appInstance := m.resourceTracking.GetAppInstance(live, appLabelKey, trackingMethod, installationID)
if appInstance != nil {
return isSelfReferencedObj(live, *appInstance)
}

View File

@@ -1372,8 +1372,8 @@ func TestIsLiveResourceManaged(t *testing.T) {
configObj := managedObj.DeepCopy()
// then
assert.True(t, manager.isSelfReferencedObj(managedObj, configObj, appName, common.AnnotationKeyAppInstance, argo.TrackingMethodLabel))
assert.True(t, manager.isSelfReferencedObj(managedObj, configObj, appName, common.AnnotationKeyAppInstance, argo.TrackingMethodAnnotation))
assert.True(t, manager.isSelfReferencedObj(managedObj, configObj, appName, common.AnnotationKeyAppInstance, argo.TrackingMethodLabel, ""))
assert.True(t, manager.isSelfReferencedObj(managedObj, configObj, appName, common.AnnotationKeyAppInstance, argo.TrackingMethodAnnotation, ""))
})
t.Run("will return true if tracked with label", func(t *testing.T) {
// given
@@ -1381,43 +1381,43 @@ func TestIsLiveResourceManaged(t *testing.T) {
configObj := managedObjWithLabel.DeepCopy()
// then
assert.True(t, manager.isSelfReferencedObj(managedObjWithLabel, configObj, appName, common.AnnotationKeyAppInstance, argo.TrackingMethodLabel))
assert.True(t, manager.isSelfReferencedObj(managedObjWithLabel, configObj, appName, common.AnnotationKeyAppInstance, argo.TrackingMethodLabel, ""))
})
t.Run("will handle if trackingId has wrong resource name and config is nil", func(t *testing.T) {
// given
t.Parallel()
// then
assert.True(t, manager.isSelfReferencedObj(unmanagedObjWrongName, nil, appName, common.AnnotationKeyAppInstance, argo.TrackingMethodLabel))
assert.False(t, manager.isSelfReferencedObj(unmanagedObjWrongName, nil, appName, common.AnnotationKeyAppInstance, argo.TrackingMethodAnnotation))
assert.True(t, manager.isSelfReferencedObj(unmanagedObjWrongName, nil, appName, common.AnnotationKeyAppInstance, argo.TrackingMethodLabel, ""))
assert.False(t, manager.isSelfReferencedObj(unmanagedObjWrongName, nil, appName, common.AnnotationKeyAppInstance, argo.TrackingMethodAnnotation, ""))
})
t.Run("will handle if trackingId has wrong resource group and config is nil", func(t *testing.T) {
// given
t.Parallel()
// then
assert.True(t, manager.isSelfReferencedObj(unmanagedObjWrongGroup, nil, appName, common.AnnotationKeyAppInstance, argo.TrackingMethodLabel))
assert.False(t, manager.isSelfReferencedObj(unmanagedObjWrongGroup, nil, appName, common.AnnotationKeyAppInstance, argo.TrackingMethodAnnotation))
assert.True(t, manager.isSelfReferencedObj(unmanagedObjWrongGroup, nil, appName, common.AnnotationKeyAppInstance, argo.TrackingMethodLabel, ""))
assert.False(t, manager.isSelfReferencedObj(unmanagedObjWrongGroup, nil, appName, common.AnnotationKeyAppInstance, argo.TrackingMethodAnnotation, ""))
})
t.Run("will handle if trackingId has wrong kind and config is nil", func(t *testing.T) {
// given
t.Parallel()
// then
assert.True(t, manager.isSelfReferencedObj(unmanagedObjWrongKind, nil, appName, common.AnnotationKeyAppInstance, argo.TrackingMethodLabel))
assert.False(t, manager.isSelfReferencedObj(unmanagedObjWrongKind, nil, appName, common.AnnotationKeyAppInstance, argo.TrackingMethodAnnotation))
assert.True(t, manager.isSelfReferencedObj(unmanagedObjWrongKind, nil, appName, common.AnnotationKeyAppInstance, argo.TrackingMethodLabel, ""))
assert.False(t, manager.isSelfReferencedObj(unmanagedObjWrongKind, nil, appName, common.AnnotationKeyAppInstance, argo.TrackingMethodAnnotation, ""))
})
t.Run("will handle if trackingId has wrong namespace and config is nil", func(t *testing.T) {
// given
t.Parallel()
// then
assert.True(t, manager.isSelfReferencedObj(unmanagedObjWrongNamespace, nil, appName, common.AnnotationKeyAppInstance, argo.TrackingMethodLabel))
assert.False(t, manager.isSelfReferencedObj(unmanagedObjWrongNamespace, nil, appName, common.AnnotationKeyAppInstance, argo.TrackingMethodAnnotationAndLabel))
assert.True(t, manager.isSelfReferencedObj(unmanagedObjWrongNamespace, nil, appName, common.AnnotationKeyAppInstance, argo.TrackingMethodLabel, ""))
assert.False(t, manager.isSelfReferencedObj(unmanagedObjWrongNamespace, nil, appName, common.AnnotationKeyAppInstance, argo.TrackingMethodAnnotationAndLabel, ""))
})
t.Run("will return true if live is nil", func(t *testing.T) {
t.Parallel()
assert.True(t, manager.isSelfReferencedObj(nil, nil, appName, common.AnnotationKeyAppInstance, argo.TrackingMethodAnnotation))
assert.True(t, manager.isSelfReferencedObj(nil, nil, appName, common.AnnotationKeyAppInstance, argo.TrackingMethodAnnotation, ""))
})
t.Run("will handle upgrade in desired state APIGroup", func(t *testing.T) {
@@ -1427,11 +1427,13 @@ func TestIsLiveResourceManaged(t *testing.T) {
delete(config.GetAnnotations(), common.AnnotationKeyAppInstance)
// then
assert.True(t, manager.isSelfReferencedObj(managedWrongAPIGroup, config, appName, common.AnnotationKeyAppInstance, argo.TrackingMethodAnnotation))
assert.True(t, manager.isSelfReferencedObj(managedWrongAPIGroup, config, appName, common.AnnotationKeyAppInstance, argo.TrackingMethodAnnotation, ""))
})
}
func TestUseDiffCache(t *testing.T) {
t.Parallel()
type fixture struct {
testName string
noCache bool
@@ -1527,6 +1529,10 @@ func TestUseDiffCache(t *testing.T) {
t.Fatalf("error merging app: %s", err)
}
}
if app.Spec.Destination.Name != "" && app.Spec.Destination.Server != "" {
// Simulate the controller's process for populating both of these fields.
app.Spec.Destination.SetInferredServer(app.Spec.Destination.Server)
}
return app
}
@@ -1692,6 +1698,44 @@ func TestUseDiffCache(t *testing.T) {
expectedUseCache: false,
serverSideDiff: false,
},
{
// There are code paths that modify the ApplicationSpec and augment the destination field with both the
// destination server and name. Since both fields are populated in the app spec but not in the comparedTo,
// we need to make sure we correctly compare the fields and don't miss the cache.
testName: "will return true if the app spec destination contains both server and name, but otherwise matches comparedTo",
noCache: false,
manifestInfos: manifestInfos("rev1"),
sources: sources(),
app: app("httpbin", "rev1", false, &argoappv1.Application{
Spec: argoappv1.ApplicationSpec{
Destination: argoappv1.ApplicationDestination{
Server: "https://kubernetes.default.svc",
Name: "httpbin",
Namespace: "httpbin",
},
},
Status: argoappv1.ApplicationStatus{
Resources: []argoappv1.ResourceStatus{},
Sync: argoappv1.SyncStatus{
Status: argoappv1.SyncStatusCodeSynced,
ComparedTo: argoappv1.ComparedTo{
Destination: argoappv1.ApplicationDestination{
Server: "https://kubernetes.default.svc",
Namespace: "httpbin",
},
},
Revision: "rev1",
},
ReconciledAt: &metav1.Time{
Time: time.Now().Add(-time.Hour),
},
},
}),
manifestRevisions: []string{"rev1"},
statusRefreshTimeout: time.Hour * 24,
expectedUseCache: true,
serverSideDiff: true,
},
}
for _, tc := range cases {
@@ -1710,3 +1754,49 @@ func TestUseDiffCache(t *testing.T) {
})
}
}
func TestCompareAppStateDefaultRevisionUpdated(t *testing.T) {
app := newFakeApp()
data := fakeData{
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
Revision: "abc123",
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
sources := make([]argoappv1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
revisions = append(revisions, "")
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false, false)
require.NoError(t, err)
assert.NotNil(t, compRes)
assert.NotNil(t, compRes.syncStatus)
assert.True(t, compRes.revisionUpdated)
}
func TestCompareAppStateRevisionUpdatedWithHelmSource(t *testing.T) {
app := newFakeMultiSourceApp()
data := fakeData{
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
Revision: "abc123",
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data, nil)
sources := make([]argoappv1.ApplicationSource, 0)
sources = append(sources, app.Spec.GetSource())
revisions := make([]string, 0)
revisions = append(revisions, "")
compRes, err := ctrl.appStateManager.CompareAppState(app, &defaultProj, revisions, sources, false, false, nil, false, false)
require.NoError(t, err)
assert.NotNil(t, compRes)
assert.NotNil(t, compRes.syncStatus)
assert.True(t, compRes.revisionUpdated)
}

View File

@@ -6,6 +6,7 @@ import (
"fmt"
"os"
"strconv"
"strings"
"sync/atomic"
"time"
@@ -23,6 +24,7 @@ import (
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/util/managedfields"
"k8s.io/client-go/kubernetes/scheme"
"k8s.io/client-go/rest"
"k8s.io/kubectl/pkg/util/openapi"
"github.com/argoproj/argo-cd/v2/controller/metrics"
@@ -30,6 +32,7 @@ import (
listersv1alpha1 "github.com/argoproj/argo-cd/v2/pkg/client/listers/application/v1alpha1"
"github.com/argoproj/argo-cd/v2/util/argo"
"github.com/argoproj/argo-cd/v2/util/argo/diff"
"github.com/argoproj/argo-cd/v2/util/glob"
logutils "github.com/argoproj/argo-cd/v2/util/log"
"github.com/argoproj/argo-cd/v2/util/lua"
"github.com/argoproj/argo-cd/v2/util/rand"
@@ -41,6 +44,10 @@ const (
// EnvVarSyncWaveDelay is an environment variable which controls the delay in seconds between
// each sync-wave
EnvVarSyncWaveDelay = "ARGOCD_SYNC_WAVE_DELAY"
// serviceAccountDisallowedCharSet contains the characters that are not allowed to be present
// in a DefaultServiceAccount configured for a DestinationServiceAccount
serviceAccountDisallowedCharSet = "!*[]{}\\/"
)
func (m *appStateManager) getOpenAPISchema(server string) (openapi.Resources, error) {
@@ -167,12 +174,18 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, state *v1alpha
state.Phase = common.OperationError
state.Message = fmt.Sprintf("Failed to load application project: %v", err)
return
} else if syncWindowPreventsSync(app, proj) {
// If the operation is currently running, simply let the user know the sync is blocked by a current sync window
if state.Phase == common.OperationRunning {
state.Message = "Sync operation blocked by sync window"
} else {
isBlocked, err := syncWindowPreventsSync(app, proj)
if isBlocked {
// If the operation is currently running, simply let the user know the sync is blocked by a current sync window
if state.Phase == common.OperationRunning {
state.Message = "Sync operation blocked by sync window"
if err != nil {
state.Message = fmt.Sprintf("%s: %v", state.Message, err)
}
}
return
}
return
}
if !isMultiSourceRevision {
@@ -282,8 +295,35 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, state *v1alpha
log.Errorf("Could not get appInstanceLabelKey: %v", err)
return
}
installationID, err := m.settingsMgr.GetInstallationID()
if err != nil {
log.Errorf("Could not get installation ID: %v", err)
return
}
trackingMethod := argo.GetTrackingMethod(m.settingsMgr)
impersonationEnabled, err := m.settingsMgr.IsImpersonationEnabled()
if err != nil {
log.Errorf("could not get impersonation feature flag: %v", err)
return
}
if impersonationEnabled {
serviceAccountToImpersonate, err := deriveServiceAccountToImpersonate(proj, app)
if err != nil {
state.Phase = common.OperationError
state.Message = fmt.Sprintf("failed to find a matching service account to impersonate: %v", err)
return
}
logEntry = logEntry.WithFields(log.Fields{"impersonationEnabled": "true", "serviceAccount": serviceAccountToImpersonate})
// set the impersonation headers.
rawConfig.Impersonate = rest.ImpersonationConfig{
UserName: serviceAccountToImpersonate,
}
restConfig.Impersonate = rest.ImpersonationConfig{
UserName: serviceAccountToImpersonate,
}
}
opts := []sync.SyncOpt{
sync.WithLogr(logutils.NewLogrusLogger(logEntry)),
sync.WithHealthOverride(lua.ResourceHealthOverrides(resourceOverrides)),
@@ -311,7 +351,7 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, state *v1alpha
return (len(syncOp.Resources) == 0 ||
isPostDeleteHook(target) ||
argo.ContainsSyncResource(key.Name, key.Namespace, schema.GroupVersionKind{Kind: key.Kind, Group: key.Group}, syncOp.Resources)) &&
m.isSelfReferencedObj(live, target, app.GetName(), appLabelKey, trackingMethod)
m.isSelfReferencedObj(live, target, app.GetName(), appLabelKey, trackingMethod, installationID)
}),
sync.WithManifestValidation(!syncOp.SyncOptions.HasOption(common.SyncOptionsDisableValidation)),
sync.WithSyncWaveHook(delayBetweenSyncWaves),
@@ -528,11 +568,52 @@ func delayBetweenSyncWaves(phase common.SyncPhase, wave int, finalWave bool) err
return nil
}
func syncWindowPreventsSync(app *v1alpha1.Application, proj *v1alpha1.AppProject) bool {
func syncWindowPreventsSync(app *v1alpha1.Application, proj *v1alpha1.AppProject) (bool, error) {
window := proj.Spec.SyncWindows.Matches(app)
isManual := false
if app.Status.OperationState != nil {
isManual = !app.Status.OperationState.Operation.InitiatedBy.Automated
}
return !window.CanSync(isManual)
canSync, err := window.CanSync(isManual)
if err != nil {
// prevent the sync because evaluating the sync window returned an error
return true, err
}
return !canSync, nil
}
// deriveServiceAccountToImpersonate determines the service account to be used for impersonation for the sync operation.
// The returned service account will be fully qualified including namespace and the service account name in the format system:serviceaccount:<namespace>:<service_account>
func deriveServiceAccountToImpersonate(project *v1alpha1.AppProject, application *v1alpha1.Application) (string, error) {
// spec.Destination.Namespace is optional. If not specified, use the Application's
// namespace
serviceAccountNamespace := application.Spec.Destination.Namespace
if serviceAccountNamespace == "" {
serviceAccountNamespace = application.Namespace
}
// Loop through the destinationServiceAccounts and see if there is any destination that is a candidate.
// if so, return the service account specified for that destination.
for _, item := range project.Spec.DestinationServiceAccounts {
dstServerMatched, err := glob.MatchWithError(item.Server, application.Spec.Destination.Server)
if err != nil {
return "", fmt.Errorf("invalid glob pattern for destination server: %w", err)
}
dstNamespaceMatched, err := glob.MatchWithError(item.Namespace, application.Spec.Destination.Namespace)
if err != nil {
return "", fmt.Errorf("invalid glob pattern for destination namespace: %w", err)
}
if dstServerMatched && dstNamespaceMatched {
if strings.Trim(item.DefaultServiceAccount, " ") == "" || strings.ContainsAny(item.DefaultServiceAccount, serviceAccountDisallowedCharSet) {
return "", fmt.Errorf("default service account contains invalid chars '%s'", item.DefaultServiceAccount)
} else if strings.Contains(item.DefaultServiceAccount, ":") {
// service account is specified along with its namespace.
return fmt.Sprintf("system:serviceaccount:%s", item.DefaultServiceAccount), nil
} else {
// service account needs to be prefixed with a namespace
return fmt.Sprintf("system:serviceaccount:%s:%s", serviceAccountNamespace, item.DefaultServiceAccount), nil
}
}
}
// if there is no match found in the AppProject.Spec.DestinationServiceAccounts, return an error instead of falling back to a default service account.
return "", fmt.Errorf("no matching service account found for destination server %s and namespace %s", application.Spec.Destination.Server, serviceAccountNamespace)
}
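Put differently, once a destination entry matches, the configured DefaultServiceAccount is qualified in one of two ways; a minimal sketch of just that formatting step (the helper name and values are illustrative, not part of the change):

package main

import (
    "fmt"
    "strings"
)

// qualify mirrors the two branches above: a DefaultServiceAccount that already
// carries a namespace ("myns:test-sa") is used as-is, otherwise the destination
// (or application) namespace is prepended.
func qualify(defaultSA, fallbackNamespace string) string {
    if strings.Contains(defaultSA, ":") {
        return fmt.Sprintf("system:serviceaccount:%s", defaultSA)
    }
    return fmt.Sprintf("system:serviceaccount:%s:%s", fallbackNamespace, defaultSA)
}

func main() {
    fmt.Println(qualify("test-sa", "testns"))      // system:serviceaccount:testns:test-sa
    fmt.Println(qualify("myns:test-sa", "testns")) // system:serviceaccount:myns:test-sa
}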

View File

@@ -2,6 +2,7 @@ package controller
import (
"context"
"strconv"
"testing"
"github.com/argoproj/gitops-engine/pkg/sync"
@@ -9,6 +10,7 @@ import (
"github.com/argoproj/gitops-engine/pkg/utils/kube"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
corev1 "k8s.io/api/core/v1"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
@@ -644,6 +646,771 @@ func TestNormalizeTargetResourcesWithList(t *testing.T) {
})
}
func TestDeriveServiceAccountMatchingNamespaces(t *testing.T) {
type fixture struct {
project *v1alpha1.AppProject
application *v1alpha1.Application
}
setup := func(destinationServiceAccounts []v1alpha1.ApplicationDestinationServiceAccount, destinationNamespace, destinationServerURL, applicationNamespace string) *fixture {
project := &v1alpha1.AppProject{
ObjectMeta: v1.ObjectMeta{
Namespace: "argocd-ns",
Name: "testProj",
},
Spec: v1alpha1.AppProjectSpec{
DestinationServiceAccounts: destinationServiceAccounts,
},
}
app := &v1alpha1.Application{
ObjectMeta: v1.ObjectMeta{
Namespace: applicationNamespace,
Name: "testApp",
},
Spec: v1alpha1.ApplicationSpec{
Project: "testProj",
Destination: v1alpha1.ApplicationDestination{
Server: destinationServerURL,
Namespace: destinationNamespace,
},
},
}
return &fixture{
project: project,
application: app,
}
}
t.Run("empty destination service accounts", func(t *testing.T) {
// given an application referring a project with no destination service accounts
t.Parallel()
destinationServiceAccounts := []v1alpha1.ApplicationDestinationServiceAccount{}
destinationNamespace := "testns"
destinationServerURL := "https://kubernetes.svc.local"
applicationNamespace := "argocd-ns"
expectedSA := ""
expectedErrMsg := "no matching service account found for destination server https://kubernetes.svc.local and namespace testns"
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := deriveServiceAccountToImpersonate(f.project, f.application)
assert.Equal(t, expectedSA, sa)
// then, there should be an error saying no valid match was found
assert.EqualError(t, err, expectedErrMsg)
})
t.Run("exact match of destination namespace", func(t *testing.T) {
// given an application referring a project with exactly one destination service account that matches the application destination,
t.Parallel()
destinationServiceAccounts := []v1alpha1.ApplicationDestinationServiceAccount{
{
Server: "https://kubernetes.svc.local",
Namespace: "testns",
DefaultServiceAccount: "test-sa",
},
}
destinationNamespace := "testns"
destinationServerURL := "https://kubernetes.svc.local"
applicationNamespace := "argocd-ns"
expectedSA := "system:serviceaccount:testns:test-sa"
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := deriveServiceAccountToImpersonate(f.project, f.application)
// then, there should be no error and should use the right service account for impersonation
require.NoError(t, err)
assert.Equal(t, expectedSA, sa)
})
t.Run("exact one match with multiple destination service accounts", func(t *testing.T) {
// given an application referring a project with multiple destination service accounts having one exact match for application destination
t.Parallel()
destinationServiceAccounts := []v1alpha1.ApplicationDestinationServiceAccount{
{
Server: "https://kubernetes.svc.local",
Namespace: "guestbook",
DefaultServiceAccount: "guestbook-sa",
},
{
Server: "https://kubernetes.svc.local",
Namespace: "guestbook-test",
DefaultServiceAccount: "guestbook-test-sa",
},
{
Server: "https://kubernetes.svc.local",
Namespace: "default",
DefaultServiceAccount: "default-sa",
},
{
Server: "https://kubernetes.svc.local",
Namespace: "testns",
DefaultServiceAccount: "test-sa",
},
}
destinationNamespace := "testns"
destinationServerURL := "https://kubernetes.svc.local"
applicationNamespace := "argocd-ns"
expectedSA := "system:serviceaccount:testns:test-sa"
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := deriveServiceAccountToImpersonate(f.project, f.application)
// then, there should be no error and should use the right service account for impersonation
require.NoError(t, err)
assert.Equal(t, expectedSA, sa)
})
t.Run("first match to be used when multiple matches are available", func(t *testing.T) {
// given an application referring a project with multiple destination service accounts having multiple match for application destination
t.Parallel()
destinationServiceAccounts := []v1alpha1.ApplicationDestinationServiceAccount{
{
Server: "https://kubernetes.svc.local",
Namespace: "testns",
DefaultServiceAccount: "test-sa",
},
{
Server: "https://kubernetes.svc.local",
Namespace: "testns",
DefaultServiceAccount: "test-sa-2",
},
{
Server: "https://kubernetes.svc.local",
Namespace: "testns",
DefaultServiceAccount: "test-sa-3",
},
{
Server: "https://kubernetes.svc.local",
Namespace: "guestbook",
DefaultServiceAccount: "guestbook-sa",
},
}
destinationNamespace := "testns"
destinationServerURL := "https://kubernetes.svc.local"
applicationNamespace := "argocd-ns"
expectedSA := "system:serviceaccount:testns:test-sa"
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := deriveServiceAccountToImpersonate(f.project, f.application)
// then, there should be no error and it should use the first matching service account for impersonation
require.NoError(t, err)
assert.Equal(t, expectedSA, sa)
})
t.Run("first match to be used when glob pattern is used", func(t *testing.T) {
// given an application referring a project with multiple destination service accounts with glob patterns matching the application destination
t.Parallel()
destinationServiceAccounts := []v1alpha1.ApplicationDestinationServiceAccount{
{
Server: "https://kubernetes.svc.local",
Namespace: "test*",
DefaultServiceAccount: "test-sa",
},
{
Server: "https://kubernetes.svc.local",
Namespace: "testns",
DefaultServiceAccount: "test-sa-2",
},
{
Server: "https://kubernetes.svc.local",
Namespace: "default",
DefaultServiceAccount: "default-sa",
},
}
destinationNamespace := "testns"
destinationServerURL := "https://kubernetes.svc.local"
applicationNamespace := "argocd-ns"
expectedSA := "system:serviceaccount:testns:test-sa"
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := deriveServiceAccountToImpersonate(f.project, f.application)
// then, there should not be any error and should use the first matching glob pattern service account for impersonation
require.NoError(t, err)
assert.Equal(t, expectedSA, sa)
})
t.Run("no match among a valid list", func(t *testing.T) {
// given an application referring a project with multiple destination service accounts with no matches for application destination
t.Parallel()
destinationServiceAccounts := []v1alpha1.ApplicationDestinationServiceAccount{
{
Server: "https://kubernetes.svc.local",
Namespace: "test1",
DefaultServiceAccount: "test-sa",
},
{
Server: "https://kubernetes.svc.local",
Namespace: "test2",
DefaultServiceAccount: "test-sa-2",
},
{
Server: "https://kubernetes.svc.local",
Namespace: "default",
DefaultServiceAccount: "default-sa",
},
}
destinationNamespace := "testns"
destinationServerURL := "https://kubernetes.svc.local"
applicationNamespace := "argocd-ns"
expectedSA := ""
expectedErrMsg := "no matching service account found for destination server https://kubernetes.svc.local and namespace testns"
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := deriveServiceAccountToImpersonate(f.project, f.application)
// then, there should be an error saying no match was found
require.EqualError(t, err, expectedErrMsg)
assert.Equal(t, expectedSA, sa)
})
t.Run("app destination namespace is empty", func(t *testing.T) {
// given an application referring a project with multiple destination service accounts with empty application destination namespace
t.Parallel()
destinationServiceAccounts := []v1alpha1.ApplicationDestinationServiceAccount{
{
Server: "https://kubernetes.svc.local",
DefaultServiceAccount: "test-sa",
},
{
Server: "https://kubernetes.svc.local",
Namespace: "*",
DefaultServiceAccount: "test-sa-2",
},
}
destinationNamespace := ""
destinationServerURL := "https://kubernetes.svc.local"
applicationNamespace := "argocd-ns"
expectedSA := "system:serviceaccount:argocd-ns:test-sa"
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := deriveServiceAccountToImpersonate(f.project, f.application)
// then, there should not be any error and the service account configured with an empty namespace should be used.
require.NoError(t, err)
assert.Equal(t, expectedSA, sa)
})
t.Run("match done via catch all glob pattern", func(t *testing.T) {
// given an application referring a project with multiple destination service accounts having a catch all glob pattern
t.Parallel()
destinationServiceAccounts := []v1alpha1.ApplicationDestinationServiceAccount{
{
Server: "https://kubernetes.svc.local",
Namespace: "testns1",
DefaultServiceAccount: "test-sa-2",
},
{
Server: "https://kubernetes.svc.local",
Namespace: "default",
DefaultServiceAccount: "default-sa",
},
{
Server: "https://kubernetes.svc.local",
Namespace: "*",
DefaultServiceAccount: "test-sa",
},
}
destinationNamespace := "testns"
destinationServerURL := "https://kubernetes.svc.local"
applicationNamespace := "argocd-ns"
expectedSA := "system:serviceaccount:testns:test-sa"
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := deriveServiceAccountToImpersonate(f.project, f.application)
// then, there should not be any error and the catch all service account should be returned
require.NoError(t, err)
assert.Equal(t, expectedSA, sa)
})
t.Run("match done via invalid glob pattern", func(t *testing.T) {
// given an application referring a project with a destination service account having an invalid glob pattern for namespace
t.Parallel()
destinationServiceAccounts := []v1alpha1.ApplicationDestinationServiceAccount{
{
Server: "https://kubernetes.svc.local",
Namespace: "e[[a*",
DefaultServiceAccount: "test-sa",
},
}
destinationNamespace := "testns"
destinationServerURL := "https://kubernetes.svc.local"
applicationNamespace := "argocd-ns"
expectedSA := ""
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := deriveServiceAccountToImpersonate(f.project, f.application)
// then, there must be an error as the glob pattern is invalid.
require.ErrorContains(t, err, "invalid glob pattern for destination namespace")
assert.Equal(t, expectedSA, sa)
})
t.Run("sa specified with a namespace", func(t *testing.T) {
// given an application referring a project with multiple destination service accounts having a matching service account specified with its namespace
t.Parallel()
destinationServiceAccounts := []v1alpha1.ApplicationDestinationServiceAccount{
{
Server: "https://kubernetes.svc.local",
Namespace: "testns",
DefaultServiceAccount: "myns:test-sa",
},
{
Server: "https://kubernetes.svc.local",
Namespace: "default",
DefaultServiceAccount: "default-sa",
},
{
Server: "https://kubernetes.svc.local",
Namespace: "*",
DefaultServiceAccount: "test-sa",
},
}
destinationNamespace := "testns"
destinationServerURL := "https://kubernetes.svc.local"
applicationNamespace := "argocd-ns"
expectedSA := "system:serviceaccount:myns:test-sa"
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := deriveServiceAccountToImpersonate(f.project, f.application)
// then, there should not be any error and the service account with its namespace should be returned.
require.NoError(t, err)
assert.Equal(t, expectedSA, sa)
})
}
func TestDeriveServiceAccountMatchingServers(t *testing.T) {
type fixture struct {
project *v1alpha1.AppProject
application *v1alpha1.Application
}
setup := func(destinationServiceAccounts []v1alpha1.ApplicationDestinationServiceAccount, destinationNamespace, destinationServerURL, applicationNamespace string) *fixture {
project := &v1alpha1.AppProject{
ObjectMeta: v1.ObjectMeta{
Namespace: "argocd-ns",
Name: "testProj",
},
Spec: v1alpha1.AppProjectSpec{
DestinationServiceAccounts: destinationServiceAccounts,
},
}
app := &v1alpha1.Application{
ObjectMeta: v1.ObjectMeta{
Namespace: applicationNamespace,
Name: "testApp",
},
Spec: v1alpha1.ApplicationSpec{
Project: "testProj",
Destination: v1alpha1.ApplicationDestination{
Server: destinationServerURL,
Namespace: destinationNamespace,
},
},
}
return &fixture{
project: project,
application: app,
}
}
t.Run("exact one match with multiple destination service accounts", func(t *testing.T) {
// given an application referring to a project with multiple destination service accounts and one exact match for the application destination
t.Parallel()
destinationServiceAccounts := []v1alpha1.ApplicationDestinationServiceAccount{
{
Server: "https://kubernetes.svc.local",
Namespace: "guestbook",
DefaultServiceAccount: "guestbook-sa",
},
{
Server: "https://abc.svc.local",
Namespace: "guestbook",
DefaultServiceAccount: "guestbook-test-sa",
},
{
Server: "https://cde.svc.local",
Namespace: "guestbook",
DefaultServiceAccount: "default-sa",
},
{
Server: "https://kubernetes.svc.local",
Namespace: "testns",
DefaultServiceAccount: "test-sa",
},
}
destinationNamespace := "testns"
destinationServerURL := "https://kubernetes.svc.local"
applicationNamespace := "argocd-ns"
expectedSA := "system:serviceaccount:testns:test-sa"
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := deriveServiceAccountToImpersonate(f.project, f.application)
// then, there should not be any error and the right service account must be returned.
require.NoError(t, err)
assert.Equal(t, expectedSA, sa)
})
t.Run("first match to be used when multiple matches are available", func(t *testing.T) {
// given an application referring to a project with multiple destination service accounts and multiple matches for the application destination
t.Parallel()
destinationServiceAccounts := []v1alpha1.ApplicationDestinationServiceAccount{
{
Server: "https://kubernetes.svc.local",
Namespace: "testns",
DefaultServiceAccount: "test-sa",
},
{
Server: "https://kubernetes.svc.local",
Namespace: "testns",
DefaultServiceAccount: "test-sa-2",
},
{
Server: "https://kubernetes.svc.local",
Namespace: "default",
DefaultServiceAccount: "default-sa",
},
{
Server: "https://kubernetes.svc.local",
Namespace: "guestbook",
DefaultServiceAccount: "guestbook-sa",
},
}
destinationNamespace := "testns"
destinationServerURL := "https://kubernetes.svc.local"
applicationNamespace := "argocd-ns"
expectedSA := "system:serviceaccount:testns:test-sa"
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := deriveServiceAccountToImpersonate(f.project, f.application)
// then, there should not be any error and first matching service account should be used
require.NoError(t, err)
assert.Equal(t, expectedSA, sa)
})
t.Run("first match to be used when glob pattern is used", func(t *testing.T) {
// given an application referring to a project with multiple destination service accounts with a matching glob pattern and an exact match
t.Parallel()
destinationServiceAccounts := []v1alpha1.ApplicationDestinationServiceAccount{
{
Server: "https://kubernetes.svc.local",
Namespace: "test*",
DefaultServiceAccount: "test-sa",
},
{
Server: "https://kubernetes.svc.local",
Namespace: "testns",
DefaultServiceAccount: "test-sa-2",
},
{
Server: "https://kubernetes.svc.local",
Namespace: "default",
DefaultServiceAccount: "default-sa",
},
}
destinationNamespace := "testns"
destinationServerURL := "https://kubernetes.svc.local"
applicationNamespace := "argocd-ns"
expectedSA := "system:serviceaccount:testns:test-sa"
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := deriveServiceAccountToImpersonate(f.project, f.application)
// then, there should not be any error and the service account of the glob pattern, being the first match, should be returned.
require.NoError(t, err)
assert.Equal(t, expectedSA, sa)
})
t.Run("no match among a valid list", func(t *testing.T) {
// given an application referring to a project with multiple destination service accounts with no match
t.Parallel()
destinationServiceAccounts := []v1alpha1.ApplicationDestinationServiceAccount{
{
Server: "https://kubernetes.svc.local",
Namespace: "testns",
DefaultServiceAccount: "test-sa",
},
{
Server: "https://abc.svc.local",
Namespace: "testns",
DefaultServiceAccount: "test-sa-2",
},
{
Server: "https://cde.svc.local",
Namespace: "default",
DefaultServiceAccount: "default-sa",
},
}
destinationNamespace := "testns"
destinationServerURL := "https://xyz.svc.local"
applicationNamespace := "argocd-ns"
expectedSA := ""
expectedErr := "no matching service account found for destination server https://xyz.svc.local and namespace testns"
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := deriveServiceAccountToImpersonate(f.project, f.application)
// then, an error with an appropriate message must be returned
require.EqualError(t, err, expectedErr)
assert.Equal(t, expectedSA, sa)
})
t.Run("match done via catch all glob pattern", func(t *testing.T) {
// given an application referring to a project with multiple destination service accounts with a matching catch-all glob pattern
t.Parallel()
destinationServiceAccounts := []v1alpha1.ApplicationDestinationServiceAccount{
{
Server: "https://kubernetes.svc.local",
Namespace: "testns1",
DefaultServiceAccount: "test-sa-2",
},
{
Server: "https://kubernetes.svc.local",
Namespace: "default",
DefaultServiceAccount: "default-sa",
},
{
Server: "*",
Namespace: "*",
DefaultServiceAccount: "test-sa",
},
}
destinationNamespace := "testns"
destinationServerURL := "https://localhost:6443"
applicationNamespace := "argocd-ns"
expectedSA := "system:serviceaccount:testns:test-sa"
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := deriveServiceAccountToImpersonate(f.project, f.application)
// then, there should not be any error and the service account of the glob pattern match must be returned.
require.NoError(t, err)
assert.Equal(t, expectedSA, sa)
})
t.Run("match done via invalid glob pattern", func(t *testing.T) {
// given an application referring to a project with a destination service account having an invalid glob pattern for server
t.Parallel()
destinationServiceAccounts := []v1alpha1.ApplicationDestinationServiceAccount{
{
Server: "e[[a*",
Namespace: "test-ns",
DefaultServiceAccount: "test-sa",
},
}
destinationNamespace := "testns"
destinationServerURL := "https://kubernetes.svc.local"
applicationNamespace := "argocd-ns"
expectedSA := ""
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := deriveServiceAccountToImpersonate(f.project, f.application)
// then, there must be an error as the glob pattern is invalid.
require.ErrorContains(t, err, "invalid glob pattern for destination server")
assert.Equal(t, expectedSA, sa)
})
t.Run("sa specified with a namespace", func(t *testing.T) {
// given app sync impersonation feature is enabled and the matching service account is prefixed with a namespace
t.Parallel()
destinationServiceAccounts := []v1alpha1.ApplicationDestinationServiceAccount{
{
Server: "https://abc.svc.local",
Namespace: "testns",
DefaultServiceAccount: "myns:test-sa",
},
{
Server: "https://kubernetes.svc.local",
Namespace: "default",
DefaultServiceAccount: "default-sa",
},
{
Server: "*",
Namespace: "*",
DefaultServiceAccount: "test-sa",
},
}
destinationNamespace := "testns"
destinationServerURL := "https://abc.svc.local"
applicationNamespace := "argocd-ns"
expectedSA := "system:serviceaccount:myns:test-sa"
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := deriveServiceAccountToImpersonate(f.project, f.application)
// then, there should not be any error and the service account with the given namespace prefix must be returned.
require.NoError(t, err)
assert.Equal(t, expectedSA, sa)
})
}
func TestSyncWithImpersonate(t *testing.T) {
type fixture struct {
project *v1alpha1.AppProject
application *v1alpha1.Application
controller *ApplicationController
}
setup := func(impersonationEnabled bool, destinationNamespace, serviceAccountName string) *fixture {
app := newFakeApp()
app.Status.OperationState = nil
app.Status.History = nil
project := &v1alpha1.AppProject{
ObjectMeta: v1.ObjectMeta{
Namespace: test.FakeArgoCDNamespace,
Name: "default",
},
Spec: v1alpha1.AppProjectSpec{
DestinationServiceAccounts: []v1alpha1.ApplicationDestinationServiceAccount{
{
Server: "https://localhost:6443",
Namespace: destinationNamespace,
DefaultServiceAccount: serviceAccountName,
},
},
},
}
additionalObjs := []runtime.Object{}
if serviceAccountName != "" {
syncServiceAccount := &corev1.ServiceAccount{
ObjectMeta: v1.ObjectMeta{
Name: serviceAccountName,
Namespace: test.FakeDestNamespace,
},
}
additionalObjs = append(additionalObjs, syncServiceAccount)
}
data := fakeData{
apps: []runtime.Object{app, project},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
Namespace: test.FakeDestNamespace,
Server: "https://localhost:6443",
Revision: "abc123",
},
managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{},
configMapData: map[string]string{
"application.sync.impersonation.enabled": strconv.FormatBool(impersonationEnabled),
},
additionalObjs: additionalObjs,
}
ctrl := newFakeController(&data, nil)
return &fixture{
project: project,
application: app,
controller: ctrl,
}
}
t.Run("sync with impersonation and no matching service account", func(t *testing.T) {
// given app sync impersonation feature is enabled with an application referring to a project with no matching service account
f := setup(true, test.FakeArgoCDNamespace, "")
opMessage := "failed to find a matching service account to impersonate: no matching service account found for destination server https://localhost:6443 and namespace fake-dest-ns"
opState := &v1alpha1.OperationState{
Operation: v1alpha1.Operation{
Sync: &v1alpha1.SyncOperation{
Source: &v1alpha1.ApplicationSource{},
},
},
Phase: common.OperationRunning,
}
// when
f.controller.appStateManager.SyncAppState(f.application, opState)
// then, app sync should fail with expected error message in operation state
assert.Equal(t, common.OperationError, opState.Phase)
assert.Contains(t, opState.Message, opMessage)
})
t.Run("sync with impersonation and empty service account match", func(t *testing.T) {
// given app sync impersonation feature is enabled with an application referring to a project with a matching service account that is an empty string
f := setup(true, test.FakeDestNamespace, "")
opMessage := "failed to find a matching service account to impersonate: default service account contains invalid chars ''"
opState := &v1alpha1.OperationState{
Operation: v1alpha1.Operation{
Sync: &v1alpha1.SyncOperation{
Source: &v1alpha1.ApplicationSource{},
},
},
Phase: common.OperationRunning,
}
// when
f.controller.appStateManager.SyncAppState(f.application, opState)
// then app sync should fail with expected error message in operation state
assert.Equal(t, common.OperationError, opState.Phase)
assert.Contains(t, opState.Message, opMessage)
})
t.Run("sync with impersonation and matching sa", func(t *testing.T) {
// given app sync impersonation feature is enabled with an application referring to a project with a matching service account
f := setup(true, test.FakeDestNamespace, "test-sa")
opMessage := "successfully synced (no more tasks)"
opState := &v1alpha1.OperationState{
Operation: v1alpha1.Operation{
Sync: &v1alpha1.SyncOperation{
Source: &v1alpha1.ApplicationSource{},
},
},
Phase: common.OperationRunning,
}
// when
f.controller.appStateManager.SyncAppState(f.application, opState)
// then app sync should not fail
assert.Equal(t, common.OperationSucceeded, opState.Phase)
assert.Contains(t, opState.Message, opMessage)
})
t.Run("sync without impersonation", func(t *testing.T) {
// given app sync impersonation feature is disabled with an application referring to a project with a matching service account
f := setup(false, test.FakeDestNamespace, "")
opMessage := "successfully synced (no more tasks)"
opState := &v1alpha1.OperationState{
Operation: v1alpha1.Operation{
Sync: &v1alpha1.SyncOperation{
Source: &v1alpha1.ApplicationSource{},
},
},
Phase: common.OperationRunning,
}
// when
f.controller.appStateManager.SyncAppState(f.application, opState)
// then application sync should pass using the control plane service account
assert.Equal(t, common.OperationSucceeded, opState.Phase)
assert.Contains(t, opState.Message, opMessage)
})
}
func dig[T any](obj interface{}, path []interface{}) T {
i := obj

View File

@@ -6,7 +6,7 @@ metadata:
deployment.kubernetes.io/revision: '9'
iksm-version: '2.0'
kubectl.kubernetes.io/last-applied-configuration: >
{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{"argocd.argoproj.io/tracking-id":"guestbook:apps/Deployment:default/kustomize-guestbook-ui","iksm-version":"2.0"},"name":"kustomize-guestbook-ui","namespace":"default"},"spec":{"replicas":4,"revisionHistoryLimit":3,"selector":{"matchLabels":{"app":"guestbook-ui"}},"template":{"metadata":{"labels":{"app":"guestbook-ui"}},"spec":{"containers":[{"env":[{"name":"SOME_ENV_VAR","value":"some_value"}],"image":"gcr.io/heptio-images/ks-guestbook-demo:0.1","name":"guestbook-ui","ports":[{"containerPort":80}],"resources":{"requests":{"cpu":"50m","memory":"100Mi"}}}]}}}}
{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{"argocd.argoproj.io/tracking-id":"guestbook:apps/Deployment:default/kustomize-guestbook-ui","iksm-version":"2.0"},"name":"kustomize-guestbook-ui","namespace":"default"},"spec":{"replicas":4,"revisionHistoryLimit":3,"selector":{"matchLabels":{"app":"guestbook-ui"}},"template":{"metadata":{"labels":{"app":"guestbook-ui"}},"spec":{"containers":[{"env":[{"name":"SOME_ENV_VAR","value":"some_value"}],"image":"quay.io/argoprojlabs/argocd-e2e-container:0.1","name":"guestbook-ui","ports":[{"containerPort":80}],"resources":{"requests":{"cpu":"50m","memory":"100Mi"}}}]}}}}
creationTimestamp: '2022-01-05T15:45:21Z'
generation: 119
managedFields:
@@ -137,7 +137,7 @@ spec:
- env:
- name: SOME_ENV_VAR
value: some_value
image: 'gcr.io/heptio-images/ks-guestbook-demo:0.1'
image: 'quay.io/argoprojlabs/argocd-e2e-container:0.1'
imagePullPolicy: IfNotPresent
name: guestbook-ui
ports:

View File

@@ -6,7 +6,7 @@ metadata:
deployment.kubernetes.io/revision: '9'
iksm-version: '2.0'
kubectl.kubernetes.io/last-applied-configuration: >
{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{"argocd.argoproj.io/tracking-id":"guestbook:apps/Deployment:default/kustomize-guestbook-ui","iksm-version":"2.0"},"name":"kustomize-guestbook-ui","namespace":"default"},"spec":{"replicas":4,"revisionHistoryLimit":3,"selector":{"matchLabels":{"app":"guestbook-ui"}},"template":{"metadata":{"labels":{"app":"guestbook-ui"}},"spec":{"containers":[{"env":[{"name":"SOME_ENV_VAR","value":"some_value"}],"image":"gcr.io/heptio-images/ks-guestbook-demo:0.1","name":"guestbook-ui","ports":[{"containerPort":80}],"resources":{"requests":{"cpu":"50m","memory":"100Mi"}}}]}}}}
{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{"argocd.argoproj.io/tracking-id":"guestbook:apps/Deployment:default/kustomize-guestbook-ui","iksm-version":"2.0"},"name":"kustomize-guestbook-ui","namespace":"default"},"spec":{"replicas":4,"revisionHistoryLimit":3,"selector":{"matchLabels":{"app":"guestbook-ui"}},"template":{"metadata":{"labels":{"app":"guestbook-ui"}},"spec":{"containers":[{"env":[{"name":"SOME_ENV_VAR","value":"some_value"}],"image":"quay.io/argoprojlabs/argocd-e2e-container:0.1","name":"guestbook-ui","ports":[{"containerPort":80}],"resources":{"requests":{"cpu":"50m","memory":"100Mi"}}}]}}}}
creationTimestamp: '2022-01-05T15:45:21Z'
generation: 119
managedFields:
@@ -137,7 +137,7 @@ spec:
- env:
- name: SOME_ENV_VAR
value: some_value
image: 'gcr.io/heptio-images/ks-guestbook-demo:0.1'
image: 'quay.io/argoprojlabs/argocd-e2e-container:0.1'
imagePullPolicy: IfNotPresent
name: guestbook-ui
ports:

View File

@@ -25,7 +25,7 @@ spec:
value: yet_another_value
- name: SOME_ENV_VAR
value: different_value!
image: 'gcr.io/heptio-images/ks-guestbook-demo:0.1'
image: 'quay.io/argoprojlabs/argocd-e2e-container:0.1'
name: guestbook-ui
ports:
- containerPort: 80

View File

@@ -19,7 +19,7 @@ spec:
spec:
containers:
- name: guestbook-ui
image: 'gcr.io/heptio-images/ks-guestbook-demo:0.1'
image: 'quay.io/argoprojlabs/argocd-e2e-container:0.1'
env:
- name: SOME_ENV_VAR
value: some_value

View File

@@ -21,7 +21,7 @@ spec:
- env:
- name: SOME_ENV_VAR
value: some_value
image: 'gcr.io/heptio-images/ks-guestbook-demo:0.1'
image: 'quay.io/argoprojlabs/argocd-e2e-container:0.1'
name: guestbook-ui
ports:
- containerPort: 80

Binary file not shown (image changed; before: 117 KiB, after: 163 KiB).

View File

@@ -32,23 +32,41 @@ function initializeVersionDropdown() {
window[callbackName] = function(response) {
const div = document.createElement('div');
div.innerHTML = response.html;
document.querySelector(".md-header__inner > .md-header__title").appendChild(div);
const headerTitle = document.querySelector(".md-header__inner > .md-header__title");
if (headerTitle) {
headerTitle.appendChild(div);
}
const container = div.querySelector('.rst-versions');
if (!container) return; // Exit if container not found
// Add caret icon
var caret = document.createElement('div');
caret.innerHTML = "<i class='fa fa-caret-down dropdown-caret'></i>";
caret.classList.add('dropdown-caret');
div.querySelector('.rst-current-version').appendChild(caret);
const currentVersionElem = div.querySelector('.rst-current-version');
if (currentVersionElem) {
currentVersionElem.appendChild(caret);
}
div.querySelector('.rst-current-version').addEventListener('click', function() {
container.classList.toggle('shift-up');
});
// Add click listener to toggle dropdown
if (currentVersionElem && container) {
currentVersionElem.addEventListener('click', function() {
container.classList.toggle('shift-up');
});
}
// Sorting Logic
sortVersionLinks(container);
};
// Load CSS
var CSSLink = document.createElement('link');
CSSLink.rel = 'stylesheet';
CSSLink.href = '/assets/versions.css';
document.getElementsByTagName('head')[0].appendChild(CSSLink);
// Load JSONP Script
var script = document.createElement('script');
const currentVersion = getCurrentVersion();
script.src = 'https://argo-cd.readthedocs.io/_/api/v2/footer_html/?' +
@@ -56,6 +74,58 @@ function initializeVersionDropdown() {
document.getElementsByTagName('head')[0].appendChild(script);
}
// Function to sort version links
function sortVersionLinks(container) {
// Find all <dl> elements within the container
const dlElements = container.querySelectorAll('dl');
dlElements.forEach(dl => {
const dt = dl.querySelector('dt');
if (dt && dt.textContent.trim().toLowerCase() === 'versions') {
// Found the Versions <dl>
const ddElements = Array.from(dl.querySelectorAll('dd'));
// Define sorting criteria
ddElements.sort((a, b) => {
const aText = a.textContent.trim().toLowerCase();
const bText = b.textContent.trim().toLowerCase();
// Prioritize 'latest' and 'stable'
if (aText === 'latest') return -1;
if (bText === 'latest') return 1;
if (aText === 'stable') return -1;
if (bText === 'stable') return 1;
// Extract version numbers (e.g., release-2.9)
const aVersionMatch = aText.match(/release-(\d+(\.\d+)*)/);
const bVersionMatch = bText.match(/release-(\d+(\.\d+)*)/);
if (aVersionMatch && bVersionMatch) {
const aVersion = aVersionMatch[1].split('.').map(Number);
const bVersion = bVersionMatch[1].split('.').map(Number);
for (let i = 0; i < Math.max(aVersion.length, bVersion.length); i++) {
const aNum = aVersion[i] || 0;
const bNum = bVersion[i] || 0;
if (aNum > bNum) return -1;
if (aNum < bNum) return 1;
}
return 0;
}
// Fallback to alphabetical order
return aText.localeCompare(bText);
});
// Remove existing <dd> elements
ddElements.forEach(dd => dl.removeChild(dd));
// Append sorted <dd> elements
ddElements.forEach(dd => dl.appendChild(dd));
}
});
}
// VERSION WARNINGS
window.addEventListener("DOMContentLoaded", function() {
var margin = 30;

View File

@@ -1,8 +1,8 @@
# Site
# Documentation Site
## Developing And Testing
The website is built using `mkdocs` and `mkdocs-material`.
The [documentation website](https://argo-cd.readthedocs.io/) is built using `mkdocs` and `mkdocs-material`.
To test:
@@ -10,7 +10,7 @@ To test:
make serve-docs
```
Once running, you can view your locally built documentation at [http://0.0.0.0:8000/](http://0.0.0.0:8000/).
Make a change to documentation and the website will rebuild and refresh the view.
Making changes to documentation will automatically rebuild and refresh the view.
Before submitting a PR, build the website to verify that there are no errors building the site
```bash

View File

@@ -60,7 +60,38 @@ data:
server: https://some-cluster
```
Note: There is no need to restart Argo CD Server after modifiying the
Proxy extensions can also be provided individually using dedicated
Argo CD configmap keys for better GitOps operations. The example below
demonstrates how to configure the same hypothetical httpbin config
above using a dedicated key:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-cm
namespace: argocd
data:
extension.config.httpbin: |
connectionTimeout: 2s
keepAlive: 15s
idleConnectionTimeout: 60s
maxIdleConnections: 30
services:
- url: http://httpbin.org
headers:
- name: some-header
value: '$some.argocd.secret.key'
cluster:
name: some-cluster
server: https://some-cluster
```
Attention: Extension names must be unique in the Argo CD configmap. If
duplicated keys are found, the Argo CD API server will log an error
message and no proxy extension will be registered.
Note: There is no need to restart Argo CD Server after modifying the
`extension.config` entry in Argo CD configmap. Changes will be
automatically applied. A new proxy registry will be built making
all new incoming extensions requests (`<argocd-host>/extensions/*`) to
@@ -150,12 +181,11 @@ the argocd-secret with key 'some.argocd.secret.key'.
If provided, and multiple services are configured, will have to match
the application destination name or server to have requests properly
forwarded to this service URL. If there are multiple backends for the
same extension this field is required. In this case at least one of
the two will be required: name or server. It is better to provide both
values to avoid problems with applications unable to send requests to
the proper backend service. If only one backend service is
configured, this field is ignored, and all requests are forwarded to
the configured one.
same extension this field is required. In this case, it is necessary
to provide both values to avoid problems with applications unable to
send requests to the proper backend service. If only one backend
service is configured, this field is ignored, and all requests are
forwarded to the configured one.
#### `extensions.backend.services.cluster.name` (*string*)
(optional)
@@ -268,6 +298,10 @@ section for more details.
Will be populated with the username logged in Argo CD.
#### `Argocd-User-Groups`
Will be populated with the 'groups' claim from the user logged in Argo CD.
### Multi Backend Use-Case
In some cases when Argo CD is configured to sync with multiple remote

View File

@@ -1,10 +1,26 @@
# Overview
!!! warning "You probably don't want to be reading this section of the docs."
This part of the manual is aimed at people wanting to develop third-party applications that interact with Argo CD, e.g.
This part of the manual is aimed at helping people contribute to Argo CD or its documentation, or develop third-party applications that interact with Argo CD, e.g.
* A chat bot
* A Slack integration
!!! note
Please make sure you've completed the [getting started guide](../getting_started.md).
## Contributing to Argo CD
* [Code Contribution Guide](code-contributions/)
* [Contributors Quickstart](contributors-quickstart/)
* [Running Argo CD Locally](running-locally/)
Need help? Start with the [Contributors FAQ](faq/)
## Contributing to the Documentation
* [Building and Running Documentation Site Locally](docs-site/)
## Extensions and Third-Party Applications
* [UI Extensions](ui-extensions/)
* [Proxy Extensions](proxy-extensions/)
* [Config Management Plugins](../operator-manual/config-management-plugins/)
## Contributing to Argo Website
The Argo website is maintained in the [argo-site](https://github.com/argoproj/argo-site) repository.

View File

@@ -0,0 +1,131 @@
# Application Sync using impersonation
!!! warning "Alpha Feature"
This is an experimental, alpha-quality feature that allows you to control the service account used for the sync operation. The configured service account can have only the lesser privileges needed to create the synced resources, rather than the highly privileged access required for control plane operations.
!!! warning
Please read this documentation carefully before you enable this feature. Misconfiguration could lead to potential security issues.
## Introduction
Argo CD normally syncs `Application` resources using the same service account that it uses for its control plane operations. This feature enables users to decouple the service account used for application sync from the service account used for control plane operations.
By default, application syncs in Argo CD have the same privileges as the Argo CD control plane. As a consequence, in a multi-tenant setup, the Argo CD control plane privileges need to match the tenant that needs the highest privileges. As an example, if an Argo CD instance has 10 Applications and only one of them requires admin privileges, then the Argo CD control plane must have admin privileges in order to be able to sync that one Application. This provides an opportunity for malicious tenants to gain admin-level access. Argo CD provides a multi-tenancy model to restrict what each `Application` is authorized to do using `AppProjects`; however, this alone is not secure enough, and if Argo CD is compromised, attackers can easily gain `cluster-admin` access to the cluster.
Some manual steps will need to be performed by the Argo CD administrator in order to enable this feature, as it is disabled by default.
!!! note
This feature is considered alpha as of now. Some of the implementation details may change over the course of time until it is promoted to a stable status. We will be happy if early adopters use this feature and provide us with bug reports and feedback.
### What is Impersonation
Impersonation is a Kubernetes feature, also available through the `kubectl` CLI client, that lets a user act as another user via impersonation headers. For example, an admin could use this feature to debug an authorization policy by temporarily impersonating another user and seeing if a request was denied.
Impersonation requests first authenticate as the requesting user, then switch to the impersonated user info.
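For illustration, `kubectl` exposes impersonation through its `--as` (and `--as-group`) flags. A quick way to observe the effect, assuming a hypothetical `tenant-1` namespace and `guestbook-deployer` service account, is:
```shell
# Check what the impersonated service account is allowed to do
kubectl auth can-i create deployments --namespace tenant-1 \
  --as system:serviceaccount:tenant-1:guestbook-deployer
```
The caller only needs the `impersonate` permission on the target service account; the request is then authorized as if it came from that account.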
## Prerequisites
In a multi-team/multi-tenant environment, a team/tenant is typically granted access to a target namespace to self-manage their Kubernetes resources in a declarative way.
A typical tenant onboarding process looks like below:
1. The platform admin creates a tenant namespace, and the service account to be used for creating the resources is created in the same tenant namespace.
2. The platform admin creates one or more Role(s) to manage Kubernetes resources in the tenant namespace
3. The platform admin creates one or more RoleBinding(s) to map the service account to the role(s) created in the previous steps (a minimal sketch of steps 1-3 is shown after this list).
4. The platform admin can choose to use either the [apps-in-any-namespace](./app-any-namespace.md) feature or provide tenants access to create applications in the Argo CD control plane namespace.
5. If the platform admin chooses the apps-in-any-namespace feature, tenants can self-service their Argo CD applications in their respective tenant namespaces and no additional access needs to be provided for the control plane namespace.
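A minimal sketch of the resources created in steps 1-3 above could look like the following; the `tenant-1` namespace, the `guestbook-deployer` service account, and the rule list are illustrative only and should be adapted to the tenant's actual needs:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: guestbook-deployer
  namespace: tenant-1
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: guestbook-deployer
  namespace: tenant-1
rules:
  # Grant only what the tenant's manifests actually need
  - apiGroups: ["", "apps"]
    resources: ["services", "deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: guestbook-deployer
  namespace: tenant-1
subjects:
  - kind: ServiceAccount
    name: guestbook-deployer
    namespace: tenant-1
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: guestbook-deployer
```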
## Implementation details
### Overview
In order for an application to use a different service account for the application sync operation, the following steps need to be performed:
1. The impersonation feature flag should be enabled. Please refer to the steps provided in [Enable application sync with impersonation feature](#enable-application-sync-with-impersonation-feature)
2. The `AppProject` referenced by the `.spec.project` field of the `Application` must have `DestinationServiceAccounts` mapping the destination server and namespace to a service account to be used for the sync operation. Please refer to the steps provided in [Configuring destination service accounts](#configuring-destination-service-accounts)
### Enable application sync with impersonation feature
In order to enable this feature, the Argo CD administrator must reconfigure the `application.sync.impersonation.enabled` setting in the `argocd-cm` ConfigMap as below:
```yaml
data:
application.sync.impersonation.enabled: "true"
```
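The same change can also be applied imperatively; a minimal sketch using `kubectl patch`, assuming Argo CD is installed in the `argocd` namespace:
```shell
kubectl patch configmap argocd-cm -n argocd \
  --type merge \
  -p '{"data":{"application.sync.impersonation.enabled":"true"}}'
```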
### Disable application sync with impersonation feature
In order to disable this feature, the Argo CD administrator must reconfigure the `application.sync.impersonation.enabled` setting in the `argocd-cm` ConfigMap as below:
```yaml
data:
application.sync.impersonation.enabled: "false"
```
!!! note
This feature is disabled by default.
!!! note
This feature can be enabled or disabled only at the system level and, once enabled or disabled, it applies to all Applications managed by Argo CD.
## Configuring destination service accounts
Destination service accounts can be added to the `AppProject` under `.spec.destinationServiceAccounts`. Specify the target destination `server` and `namespace` and provide the service account to be used for the sync operation using the `defaultServiceAccount` field. Applications that refer to this `AppProject` will use the corresponding service account configured for their destination.
During the application sync operation, the controller loops through the available `destinationServiceAccounts` in the mapped `AppProject` and tries to find a matching candidate. If there are multiple matches for a destination server and namespace combination, then the first valid match will be considered. If there are no matches, then an error is reported during the sync operation. In order to avoid such sync errors, it is highly recommended to configure a valid service account as a catch-all entry for all target destinations and keep it at the lowest order of priority.
It is possible to specify a service account along with its namespace, e.g. `tenant1-ns:guestbook-deployer`. If no namespace is provided for the service account, then the Application's `spec.destination.namespace` will be used. If no namespace is provided for the service account and the optional `spec.destination.namespace` field is also not provided in the `Application`, then the Application's own namespace will be used.
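For example, a destination entry whose default service account is qualified with a namespace might look like this (the server, namespace, and service account names are illustrative):
```yaml
destinationServiceAccounts:
  - server: https://kubernetes.default.svc
    namespace: guestbook
    defaultServiceAccount: tenant1-ns:guestbook-deployer
```
With such an entry, the sync operation would impersonate `system:serviceaccount:tenant1-ns:guestbook-deployer` rather than deriving the namespace from the Application's destination.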
`DestinationServiceAccounts` associated with an `AppProject` can be created and managed either declaratively or through the Argo CD API (e.g. using the CLI, the web UI, the REST API, etc.).
### Using declarative yaml
To configure destination service accounts declaratively, create a YAML file for the `AppProject` as below and apply the changes using the `kubectl apply` command.
```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
name: my-project
namespace: argocd
spec:
description: Example Project
# Allow manifests to deploy from any Git repos
sourceRepos:
- '*'
destinations:
- '*'
destinationServiceAccounts:
- server: https://kubernetes.default.svc
namespace: guestbook
defaultServiceAccount: guestbook-deployer
- server: https://kubernetes.default.svc
namespace: guestbook-dev
defaultServiceAccount: guestbook-dev-deployer
- server: https://kubernetes.default.svc
namespace: guestbook-stage
defaultServiceAccount: guestbook-stage-deployer
- server: https://kubernetes.default.svc # catch-all configuration
namespace: '*'
defaultServiceAccount: default
```
### Using the CLI
Destination service accounts can be added to an `AppProject` using the ArgoCD CLI.
For example, to add a destination service account for `in-cluster` and `guestbook` namespace, you can use the following CLI command:
```shell
argocd proj add-destination-service-account my-project https://kubernetes.default.svc guestbook guestbook-sa
```
Likewise, to remove the destination service account from an `AppProject`, you can use the following CLI command:
```shell
argocd proj remove-destination-service-account my-project https://kubernetes.default.svc guestbook
```
### Using the UI
Similar to the CLI, you can add a destination service account when creating or updating an `AppProject` from the UI.

View File

@@ -119,7 +119,7 @@ spec:
forceCommonLabels: false
forceCommonAnnotations: false
images:
- gcr.io/heptio-images/ks-guestbook-demo:0.2
- quay.io/argoprojlabs/argocd-e2e-container:0.2
- my-app=gcr.io/my-repo/my-app:0.1
namespace: custom-namespace
replicas:

View File

@@ -22,8 +22,8 @@ As an example, imagine that we have two clusters:
And our application YAMLs are defined in a Git repository:
- Argo Workflows controller (examples/git-generator-directory/cluster-addons/argo-workflows)
- Prometheus operator (/examples/git-generator-directory/cluster-addons/prometheus-operator)
- [Argo Workflows controller](https://github.com/argoproj/argo-cd/tree/master/applicationset/examples/git-generator-directory/cluster-addons/argo-workflows)
- [Prometheus operator](https://github.com/argoproj/argo-cd/tree/master/applicationset/examples/git-generator-directory/cluster-addons/prometheus-operator)
Our goal is to deploy both applications onto both clusters, and, more generally, in the future to automatically deploy new applications in the Git repository, and to new clusters defined within Argo CD, as well.

View File

@@ -42,6 +42,7 @@ When the ApplicationSet changes, the changes will be applied to each group of Ap
* Sync operations are triggered the same way as if they were triggered by the UI or CLI (by directly setting the `operation` status field on the Application resource). This means that a RollingSync will respect sync windows just as if a user had clicked the "Sync" button in the Argo UI.
* When a sync is triggered, the sync is performed with the same syncPolicy configured for the Application. For example, this preserves the Application's retry settings.
* If an Application is considered "Pending" for `applicationsetcontroller.default.application.progressing.timeout` seconds, the Application is automatically moved to Healthy status (default 300).
* If an Application is not selected in any step, it will be excluded from the rolling sync and needs to be manually synced through the CLI or UI.
#### Example
The following example illustrates how to stage a progressive sync over Applications with explicitly configured environment labels.

View File

@@ -283,6 +283,9 @@ data:
# - annotation+label : Also uses an annotation for tracking, but additionally labels the resource with the application name
application.resourceTrackingMethod: annotation
# Optional installation id. Allows to have multiple installations of Argo CD in the same cluster.
installationID: "my-unique-id"
# disables admin user. Admin is enabled by default
admin.enabled: "false"
# add an additional local user with apiKey and login capabilities
@@ -329,14 +332,14 @@ data:
# spread out the refreshes and give time to the repo-server to catch up. The jitter is the maximum duration that can be
# added to the sync timeout. So, if the sync timeout is 3 minutes and the jitter is 1 minute, then the actual timeout will
# be between 3 and 4 minutes. Disabled when the value is 0, defaults to 0.
timeout.reconciliation.jitter: 0
timeout.reconciliation.jitter: "0"
# cluster.inClusterEnabled indicates whether to allow in-cluster server address. This is enabled by default.
cluster.inClusterEnabled: "true"
# The maximum number of pod logs to render in UI. If the application has more than this number of pods, the logs will not be rendered.
# This is to prevent the UI from becoming unresponsive when rendering a large number of logs. Default is 10.
server.maxPodLogsToRender: 10
server.maxPodLogsToRender: "10"
# Application pod logs RBAC enforcement enables control over who can and who can't view application pod logs.
# When you enable the switch, pod logs will be visible only to admin role by default. Other roles/users will not be able to view them via cli and UI.
@@ -425,4 +428,7 @@ data:
name: some-cluster
server: https://some-cluster
# The maximum size of the payload that can be sent to the webhook server.
webhook.maxPayloadSizeMB: 1024
webhook.maxPayloadSizeMB: "1024"
# application.sync.impersonation.enabled enables application sync to use a custom service account, via impersonation. This allows decoupling sync from control-plane service account.
application.sync.impersonation.enabled: "false"

View File

@@ -47,8 +47,11 @@ data:
controller.log.level: "info"
# Prometheus metrics cache expiration (disabled by default. e.g. 24h0m0s)
controller.metrics.cache.expiration: "24h0m0s"
# Specifies timeout between application self heal attempts (default 5)
controller.self.heal.timeout.seconds: "5"
# Specifies exponential backoff timeout parameters between application self heal attempts
controller.self.heal.timeout.seconds: "2"
controller.self.heal.backoff.factor: "3"
controller.self.heal.backoff.cap.seconds: "300"
# Cache expiration for app state (default 1h0m0s)
controller.app.state.cache.expiration: "1h0m0s"
# Specifies if resource health should be persisted in app CRD (default true)

View File

@@ -818,9 +818,9 @@ stringData:
}
}
```
This will instruct ArgoCD to read the file at the provided path and use the credentials defined within to authenticate to
AWS. The profile must be mounted in order for this to work. For example, the following values can be defined in a Helm
based ArgoCD deployment:
This will instruct Argo CD to read the file at the provided path and use the credentials defined within to authenticate to AWS.
The profile must be mounted in both the `argocd-server` and `argocd-application-controller` components in order for this to work.
For example, the following values can be defined in a Helm-based Argo CD deployment:
```yaml
controller:

View File

@@ -98,20 +98,27 @@ data:
return hs
```
In order to prevent duplication of the custom health check for potentially multiple resources, it is also possible to specify a wildcard in the resource kind, and anywhere in the resource group, like this:
In order to prevent duplication of custom health checks for potentially multiple resources, it is also possible to
specify a wildcard in the resource kind, and anywhere in the resource group, like this:
```yaml
resource.customizations.health.ec2.aws.crossplane.io_*: |
...
resource.customizations: |
ec2.aws.crossplane.io/*:
health.lua: |
...
```
```yaml
resource.customizations.health.*.aws.crossplane.io_*: |
...
# If a key _begins_ with a wildcard, please ensure that the GVK key is quoted.
resource.customizations: |
"*.aws.crossplane.io/*":
health.lua: |
...
```
!!!important
Please, note that there can be ambiguous resolution of wildcards, see [#16905](https://github.com/argoproj/argo-cd/issues/16905)
Please note that wildcards are only supported when using the `resource.customizations` key; the `resource.customizations.health.<group>_<kind>`
style keys do not work, since wildcards (`*`) are not supported in Kubernetes ConfigMap keys.
The `obj` is a global variable which contains the resource. The script must return an object with status and optional message field.
The custom health check might return one of the following health statuses:
@@ -121,7 +128,7 @@ The custom health check might return one of the following health statuses:
* `Degraded` - the resource is degraded
* `Suspended` - the resource is suspended and waiting for some external event to resume (e.g. suspended CronJob or paused Deployment)
By default health typically returns `Progressing` status.
By default, health typically returns a `Progressing` status.
NOTE: As a security measure, access to the standard Lua libraries will be disabled by default. Admins can control access by
setting `resource.customizations.useOpenLibs.<group>_<kind>`. In the following example, standard libraries are enabled for health check of `cert-manager.io/Certificate`.

View File

@@ -21,6 +21,9 @@ Not recommended for production use. This type of installation is typically used
in (i.e. kubernetes.svc.default). It will still be able to deploy to external clusters with inputted
credentials.
> Note: The ClusterRoleBinding in the installation manifest is bound to a ServiceAccount in the argocd namespace.
> Be cautious when modifying the namespace, as changing it may cause permission-related errors unless the ClusterRoleBinding is correctly adjusted to reflect the new namespace.
* [namespace-install.yaml](https://github.com/argoproj/argo-cd/blob/master/manifests/namespace-install.yaml) - Installation of Argo CD which requires only
namespace level privileges (does not need cluster roles). Use this manifest set if you do not
need Argo CD to deploy applications in the same cluster that Argo CD runs in, and will rely solely
@@ -78,6 +81,29 @@ resources:
For an example of this, see the [kustomization.yaml](https://github.com/argoproj/argoproj-deployments/blob/master/argocd/kustomization.yaml)
used to deploy the [Argoproj CI/CD infrastructure](https://github.com/argoproj/argoproj-deployments#argoproj-deployments).
#### Installing Argo CD in a Custom Namespace
If you want to install Argo CD in a namespace other than the default argocd, you can use Kustomize to apply a patch that updates the ClusterRoleBinding to reference the correct namespace for the ServiceAccount. This ensures that the necessary permissions are correctly set in your custom namespace.
Below is an example of how to configure your kustomization.yaml to install Argo CD in a custom namespace:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: <your-custom-namespace>
resources:
- https://raw.githubusercontent.com/argoproj/argo-cd/v2.7.2/manifests/install.yaml
patches:
- patch: |-
- op: replace
path: /subjects/0/namespace
value: <your-custom-namespace>
target:
kind: ClusterRoleBinding
```
This patch ensures that the ClusterRoleBinding correctly maps to the ServiceAccount in your custom namespace, preventing any permission-related issues during the deployment.
## Helm
The Argo CD can be installed using [Helm](https://helm.sh/). The Helm chart is currently community maintained and available at

View File

@@ -8,6 +8,7 @@ Metrics about applications. Scraped at the `argocd-metrics:8082/metrics` endpoin
| Metric | Type | Description |
|--------|:----:|-------------|
| `argocd_app_info` | gauge | Information about Applications. It contains labels such as `sync_status` and `health_status` that reflect the application state in Argo CD. |
| `argocd_app_condition` | gauge | Report Applications conditions. It contains the conditions currently present in the application status. |
| `argocd_app_k8s_request_total` | counter | Number of Kubernetes requests executed during application reconciliation |
| `argocd_app_labels` | gauge | Argo Application labels converted to Prometheus labels. Disabled by default. See section below about how to enable it. |
| `argocd_app_reconcile` | histogram | Application reconciliation performance in seconds. |
@@ -30,6 +31,8 @@ to deleted resources, you can schedule a metrics reset to clean the
history with an application controller flag. Example:
`--metrics-cache-expiration="24h0m0s"`.
### Exposing Application labels as Prometheus metrics
There are use-cases where Argo CD Applications contain labels that are desired to be exposed as Prometheus metrics.
@@ -60,6 +63,45 @@ argocd_app_labels{label_business_unit="bu-id-1",label_team_name="my-team",name="
argocd_app_labels{label_business_unit="bu-id-2",label_team_name="another-team",name="my-app-3",namespace="argocd",project="important-project"} 1
```
### Exposing Application conditions as Prometheus metrics
There are use-cases where Argo CD Applications contain conditions that are desired to be exposed as Prometheus metrics.
Some examples are:
* Hunting orphaned resources across all deployed applications
* Knowing which resources are excluded from ArgoCD
As the Application conditions are specific to each company, this feature is disabled by default. To enable it, add the
`--metrics-application-conditions` flag to the Argo CD application controller.
The example below will expose the Argo CD Application condition `OrphanedResourceWarning` and `ExcludedResourceWarning` to Prometheus:
```yaml
containers:
- command:
- argocd-application-controller
- --metrics-application-conditions
- OrphanedResourceWarning
- --metrics-application-conditions
- ExcludedResourceWarning
```
## Application Set Controller metrics
The Application Set controller exposes the following metrics for application sets.
| Metric | Type | Description |
|--------|:----:|-------------|
| `argocd_appset_info` | gauge | Information about Application Sets. It contains labels for the name and namespace of an application set as well as `Resource_update_status` that reflects the `ResourcesUpToDate` property |
| `argocd_appset_reconcile` | histogram | Application reconciliation performance in seconds. It contains labels for the name and namespace of an applicationset |
| `argocd_appset_labels` | gauge | Applicationset labels translated to Prometheus labels. Disabled by default |
| `argocd_appset_owned_applications` | gauge | Number of applications owned by the applicationset. It contains labels for the name and namespace of an applicationset. |
Similar to the same metric in application controller (`argocd_app_labels`) the metric `argocd_appset_labels` is disabled by default. You can enable it by providing the `metrics-applicationset-labels` argument to the applicationset controller.
Once enabled it works exactly the same as application controller metrics (label_ appended to normalized label name).
Available labels include Name, Namespace + all labels enabled by the command line options and their value (exactly like application controller metrics described in the previous section).
## API Server Metrics
Metrics about API Server API request and response activity (request totals, response codes, etc...).
Scraped at the `argocd-server-metrics:8083/metrics` endpoint.

View File

@@ -122,9 +122,19 @@ To do so, when the action if performed on an application's resource, the `<actio
For instance, to grant access to `example-user` to only delete Pods in the `prod-app` Application, the policy could be:
```csv
p, example-user, applications, delete/*/Pod/*, default/prod-app, allow
p, example-user, applications, delete/*/Pod/*/*, default/prod-app, allow
```
!!!warning "Understand glob pattern behavior"
Argo CD RBAC does not use `/` as a separator when evaluating glob patterns. So the pattern `delete/*/kind/*`
will match `delete/<group>/kind/<namespace>/<name>` but also `delete/<group>/<kind>/kind/<name>`.
The fact that both of these match will generally not be a problem, because resource kinds generally contain capital
letters, and namespaces cannot contain capital letters. However, it is possible for a resource kind to be lowercase.
So it is better to just always include all the parts of the resource in the pattern (in other words, always use four
slashes).
If we want to grant access to the user to update all resources of an application, but not the application itself:
```csv
@@ -135,7 +145,7 @@ If we want to explicitly deny delete of the application, but allow the user to d
```csv
p, example-user, applications, delete, default/prod-app, deny
p, example-user, applications, delete/*/Pod/*, default/prod-app, allow
p, example-user, applications, delete/*/Pod/*/*, default/prod-app, allow
```
!!! note
@@ -145,7 +155,7 @@ p, example-user, applications, delete/*/Pod/*, default/prod-app, allow
```csv
p, example-user, applications, delete, default/prod-app, allow
p, example-user, applications, delete/*/Pod/*, default/prod-app, deny
p, example-user, applications, delete/*/Pod/*/*, default/prod-app, deny
```
#### The `action` action

View File

@@ -80,6 +80,20 @@ The `discovery.lua` script must return a table where the key name represents the
Each action name must be represented in the list of `definitions` with an accompanying `action.lua` script to control the resource modifications. The `obj` is a global variable which contains the resource. Each action script returns an optionally modified version of the resource. In this example, we are simply setting `.spec.suspend` to either `true` or `false`.
By default, defining a resource action customization will override any built-in action for this resource kind. If you want to retain the built-in actions, you can set the `mergeBuiltinActions` key to `true`. Your custom actions will have precedence over the built-in actions.
```yaml
resource.customizations.actions.argoproj.io_Rollout: |
mergeBuiltinActions: true
discovery.lua: |
actions = {}
actions["do-things"] = {}
return actions
definitions:
- name: do-things
action.lua: |
return obj
```
#### Creating new resources with a custom action
!!! important

View File

@@ -31,6 +31,7 @@ argocd-application-controller [flags]
--default-cache-expiration duration Cache expiration default (default 24h0m0s)
--disable-compression If true, opt-out of response compression for all requests to the server
--dynamic-cluster-distribution-enabled Enables dynamic cluster distribution.
--enable-k8s-event none Enable ArgoCD to use k8s event. For disabling all events, set the value as none. (e.g --enable-k8s-event=none), For enabling specific events, set the value as `event reason`. (e.g --enable-k8s-event=StatusRefreshed,ResourceCreated) (default [all])
--gloglevel int Set the glog logging level
-h, --help help for argocd-application-controller
--ignore-normalizer-jq-execution-timeout-seconds duration Set ignore normalizer JQ execution timeout
@@ -39,6 +40,7 @@ argocd-application-controller [flags]
--kubectl-parallelism-limit int Number of allowed concurrent kubectl fork/execs. Any value less than 1 means no limit. (default 20)
--logformat string Set the logging format. One of: text|json (default "text")
--loglevel string Set the logging level. One of: debug|info|warn|error (default "info")
--metrics-application-conditions strings List of Application conditions that will be added to the argocd_application_conditions metric
--metrics-application-labels strings List of Application labels that will be added to the argocd_application_labels metric
--metrics-cache-expiration duration Prometheus metrics cache expiration (disabled by default. e.g. 24h0m0s)
--metrics-port int Start metrics server on given port (default 8082)
@@ -65,7 +67,10 @@ argocd-application-controller [flags]
--repo-server-strict-tls Whether to use strict validation of the TLS cert presented by the repo server
--repo-server-timeout-seconds int Repo server RPC call timeout seconds. (default 60)
--request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
--self-heal-timeout-seconds int Specifies timeout between application self heal attempts (default 5)
--self-heal-backoff-cap-seconds int Specifies max timeout of exponential backoff between application self heal attempts (default 300)
--self-heal-backoff-factor int Specifies factor of exponential timeout between application self heal attempts (default 3)
--self-heal-backoff-timeout-seconds int Specifies initial timeout of exponential backoff between self heal attempts (default 2)
--self-heal-timeout-seconds int Specifies timeout between application self heal attempts
--sentinel stringArray Redis sentinel hostname and port (e.g. argocd-redis-ha-announce-0:6379).
--sentinelmaster string Redis sentinel master group name. (default "master")
--server string The address and port of the Kubernetes API server

View File

@@ -51,6 +51,7 @@ argocd-server [flags]
--disable-auth Disable client authentication
--disable-compression If true, opt-out of response compression for all requests to the server
--enable-gzip Enable GZIP compression (default true)
--enable-k8s-event none Enable ArgoCD to use k8s event. For disabling all events, set the value as none. (e.g --enable-k8s-event=none), For enabling specific events, set the value as `event reason`. (e.g --enable-k8s-event=StatusRefreshed,ResourceCreated) (default [all])
--enable-proxy-extension Enable Proxy Extension feature
--gloglevel int Set the glog logging level
-h, --help help for argocd-server

View File

@@ -1,2 +1,5 @@
This page is populated for released Argo CD versions. Use the version selector to view this table for a specific
version.
| Argo CD version | Kubernetes versions |
|-----------------|---------------------|
| 2.13 | v1.30, v1.29, v1.28, v1.27 |
| 2.12 | v1.29, v1.28, v1.27, v1.26 |
| 2.11 | v1.29, v1.28, v1.27, v1.26, v1.25 |

Some files were not shown because too many files have changed in this diff.