Compare commits


153 Commits

Author SHA1 Message Date
argo-bot
903db5fe46 Bump version to 2.1.16 2022-06-21 16:19:26 +00:00
argo-bot
1d26f44f53 Bump version to 2.1.16 2022-06-21 16:19:15 +00:00
Michael Crenshaw
e577e25405 chore: fix docs gen
Signed-off-by: Michael Crenshaw <michael@crenshaw.dev>
2022-06-21 10:49:34 -04:00
Michael Crenshaw
a92a153a49 Merge pull request from GHSA-jhqp-vf4w-rpwq
Signed-off-by: Michael Crenshaw <michael@crenshaw.dev>

defer instead of multiple close calls

Signed-off-by: Michael Crenshaw <michael@crenshaw.dev>

oops

Signed-off-by: Michael Crenshaw <michael@crenshaw.dev>

don't count jsonnet against max

Signed-off-by: Michael Crenshaw <michael@crenshaw.dev>

fix codegen

Signed-off-by: Michael Crenshaw <michael@crenshaw.dev>

add caveat about 300x ratio

Signed-off-by: Michael Crenshaw <michael@crenshaw.dev>

fix versions

Signed-off-by: Michael Crenshaw <michael@crenshaw.dev>

fix tests/lint

Signed-off-by: Michael Crenshaw <michael@crenshaw.dev>
2022-06-21 09:40:36 -04:00
Michael Crenshaw
45ddd05cef Merge pull request from GHSA-q4w5-4gq2-98vm
Signed-off-by: Michael Crenshaw <michael@crenshaw.dev>
2022-06-21 09:39:56 -04:00
Michael Crenshaw
947bdd9efb Merge pull request from GHSA-2m7h-86qq-fp4v
Signed-off-by: Michael Crenshaw <michael@crenshaw.dev>

fix references

Signed-off-by: Michael Crenshaw <michael@crenshaw.dev>

use long enough state param for oauth2

Signed-off-by: Michael Crenshaw <michael@crenshaw.dev>

typo

Signed-off-by: Michael Crenshaw <michael@crenshaw.dev>

more entropy

Signed-off-by: Michael Crenshaw <michael@crenshaw.dev>

fix tests/lint

Signed-off-by: Michael Crenshaw <michael@crenshaw.dev>
2022-06-21 09:39:01 -04:00
Michael Crenshaw
4fd50ce8bd Merge pull request from GHSA-h4w9-6x78-8vrj
Signed-off-by: Michael Crenshaw <michael@crenshaw.dev>
2022-06-21 09:36:37 -04:00
Michael Crenshaw
3fab7def3e fix: missing Helm params (#9565) (#9566)
* fix: missing Helm params

Signed-off-by: Michael Crenshaw <michael@crenshaw.dev>

* use absolute paths, fix tests

Signed-off-by: Michael Crenshaw <michael@crenshaw.dev>

* fix race in test

Signed-off-by: Michael Crenshaw <michael@crenshaw.dev>
2022-06-14 22:32:42 -04:00
Michael Crenshaw
5e6b788da9 test: directory app manifest generation (#9503)
* test: directory app manifest generation

Signed-off-by: Michael Crenshaw <michael@crenshaw.dev>

* git doesn't support empty dirs

Signed-off-by: Michael Crenshaw <michael@crenshaw.dev>

fix bad cherry-pick

Signed-off-by: Michael Crenshaw <michael@crenshaw.dev>
2022-06-14 16:27:40 -04:00
Michael Crenshaw
26ac321f03 test: fix erroneous test change
Signed-off-by: Michael Crenshaw <michael@crenshaw.dev>
2022-06-14 15:36:34 -04:00
Michael Crenshaw
ea79ca4029 chore: eliminate go-mpatch dependency (#9045)
* chore: eliminate go-mpatch dependency

Signed-off-by: Michael Crenshaw <michael@crenshaw.dev>

* chore: abstract out resource list function

Signed-off-by: Michael Crenshaw <michael@crenshaw.dev>

* chore: don't exit the program in anything but the main function

Signed-off-by: Michael Crenshaw <michael@crenshaw.dev>

* chore: better error messages

Signed-off-by: Michael Crenshaw <michael@crenshaw.dev>

* chore: better error messages

Signed-off-by: Michael Crenshaw <michael@crenshaw.dev>
2022-06-14 15:34:32 -04:00
jannfis
e7ca57b361 chore: Make unit tests run on platforms other than amd64 (#8995)
Signed-off-by: jannfis <jann@mistrust.net>

Co-authored-by: Michael Crenshaw <michael@crenshaw.dev>
Signed-off-by: Michael Crenshaw <michael@crenshaw.dev>
2022-06-14 15:33:25 -04:00
Alexander Matyushentsev
f3e7fbada8 chore: remove obsolete repo-server unit test (#9559)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2022-06-14 15:33:07 -04:00
Michael Crenshaw
01f069c1da chore: remove unnecessary import
Signed-off-by: Michael Crenshaw <michael@crenshaw.dev>
2022-06-14 15:32:01 -04:00
Tommaso Sardelli
4c8ca4f41e chore: upgrade golangci-lint to v1.46.2 (#9448)
* chore: upgrade golangci-lint to v1.46.2

Because:

* Installation of golangci-lint v1.45.2 is currently broken and fails
  silently due to a redacted dependency
  (https://github.com/blizzy78/varnamelen/issues/13)

This commit:

* Upgrades golangci-lint to v1.46.2

Signed-off-by: Tommaso Sardelli <lacapannadelloziotom@gmail.com>

* fix: lint

Signed-off-by: Michael Crenshaw <michael@crenshaw.dev>

* fix: lint

Signed-off-by: Tommaso Sardelli <lacapannadelloziotom@gmail.com>

Co-authored-by: Michael Crenshaw <michael@crenshaw.dev>
Signed-off-by: Michael Crenshaw <michael@crenshaw.dev>
2022-06-14 15:31:01 -04:00
Michael Crenshaw
e19c351f10 chore: update golangci-lint (#8988)
* chore: update golangci-lint

Signed-off-by: Michael Crenshaw <michael@crenshaw.dev>
2022-06-14 15:30:16 -04:00
Michael Crenshaw
0d74b6859d test: fix ErrorContains (#9445)
Signed-off-by: Michael Crenshaw <michael@crenshaw.dev>
2022-06-14 14:41:00 -04:00
argo-bot
52f917a181 Bump version to 2.1.15 2022-05-18 12:34:22 +00:00
argo-bot
f4c22f5958 Bump version to 2.1.15 2022-05-18 12:34:11 +00:00
jannfis
10491767cf Merge pull request from GHSA-r642-gv9p-2wjj
Signed-off-by: jannfis <jann@mistrust.net>

Co-authored-by: Michael Crenshaw <michael@crenshaw.dev>

Co-authored-by: Michael Crenshaw <michael@crenshaw.dev>
2022-05-18 13:16:21 +02:00
Michael Crenshaw
7357cfdb58 Merge pull request from GHSA-6gcg-hp2x-q54h
* fix: do not allow symlinks from directory-type applications

Signed-off-by: Michael Crenshaw <michael@crenshaw.dev>

* chore: fix merge

Signed-off-by: Michael Crenshaw <michael@crenshaw.dev>

* chore: lint

Signed-off-by: Michael Crenshaw <michael@crenshaw.dev>

* chore: use t.TempDir for simpler tests

Signed-off-by: Michael Crenshaw <michael@crenshaw.dev>

* address comments

Signed-off-by: Michael Crenshaw <michael@crenshaw.dev>
2022-05-18 13:13:41 +02:00
jannfis
2fe88150d6 Merge pull request from GHSA-xmg8-99r8-jc2j
Signed-off-by: Michael Crenshaw <michael@crenshaw.dev>

Co-authored-by: Michael Crenshaw <michael@crenshaw.dev>
2022-05-18 13:06:31 +02:00
argo-bot
836cde06ba Bump version to 2.1.14 2022-03-23 00:09:36 +00:00
argo-bot
db00e40b16 Bump version to 2.1.14 2022-03-23 00:09:23 +00:00
argo-bot
4803dfac1d Bump version to 2.1.13 2022-03-22 22:48:03 +00:00
argo-bot
5dbdaa4fe2 Bump version to 2.1.13 2022-03-22 22:47:48 +00:00
Alexander Matyushentsev
e13e887de8 fix: fix broken e2e test (#8861)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2022-03-22 14:59:38 -07:00
Alexander Matyushentsev
b043629979 Merge pull request from GHSA-2f5v-8r3f-8pww
* fix: application resource APIs must enforce project restrictions

Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>

* Fix unit tests

Signed-off-by: jannfis <jann@mistrust.net>

Co-authored-by: jannfis <jann@mistrust.net>
2022-03-22 10:57:31 -07:00
argo-bot
273a952e6c Bump version to 2.1.12 2022-03-09 00:45:50 +00:00
argo-bot
2600f52a66 Bump version to 2.1.12 2022-03-09 00:45:37 +00:00
Alexander Matyushentsev
2cefc00855 fix: correct jsonnet paths resolution (#8721)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2022-03-08 16:03:03 -08:00
argo-bot
e25d3b5435 Bump version to 2.1.11 2022-03-06 05:30:33 +00:00
argo-bot
b921433112 Bump version to 2.1.11 2022-03-06 05:30:19 +00:00
Alexander Matyushentsev
96f63c3e2b fix: prevent file traversal using helm file values param and application details api (#8606)
* fix: prevent file traversal using helm file values param and application details api

Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>

* apply reviewer notes: move resolve.go into separate package; use uuid to generate random file

Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2022-03-03 21:22:16 -08:00
Jesse Suen
d04dc9baed fix!: enforce app create/update privileges when getting repo details (#8558)
Signed-off-by: Jesse Suen <jesse@akuity.io>
2022-03-03 20:39:29 -08:00
Alexander Matyushentsev
0ef556e0f5 feat: support custom helm values file schemes (#8535)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2022-03-03 17:13:28 -08:00
Jesse Suen
d54361937b docs: add security documentation related to git repositories (#8463)
Signed-off-by: Jesse Suen <jesse@akuity.io>
2022-02-11 15:53:02 -08:00
argo-bot
de6735c386 Bump version to 2.1.10 2022-02-05 01:10:59 +00:00
argo-bot
df2149bbac Bump version to 2.1.10 2022-02-05 01:10:46 +00:00
jannfis
09529ee1ae fix: Resolve symlinked value files correctly (#8387)
* fix: Resolve symlinked value files correctly

Signed-off-by: jannfis <jann@mistrust.net>

* fix: Resolve symlinked value files correctly

Signed-off-by: jannfis <jann@mistrust.net>
2022-02-04 15:12:32 -08:00
argo-bot
5c51d5dae0 Bump version to 2.1.9 2022-02-03 20:22:27 +00:00
argo-bot
ec9b6f1689 Bump version to 2.1.9 2022-02-03 20:22:14 +00:00
jannfis
b7d9f0071b Merge pull request from GHSA-63qx-x74g-jcr7
Signed-off-by: jannfis <jann@mistrust.net>
2022-02-03 20:37:46 +01:00
argo-bot
2fdaf7a9ad Bump version to 2.1.8 2021-12-13 23:10:19 +00:00
argo-bot
5e64458c6b Bump version to 2.1.8 2021-12-13 23:10:03 +00:00
pasha-codefresh
2475403af7 fix: issue with keepalive (#7861)
* fix issue with keepalive

Signed-off-by: pashavictorovich <pavel@codefresh.io>

* empty commit

Signed-off-by: pashavictorovich <pavel@codefresh.io>
2021-12-11 11:18:48 -08:00
jomenxiao
a1e14d48ab fix nil point (#7905)
Signed-off-by: jomenxiao <jomenxiao@gmail.com>
2021-12-10 21:24:31 -08:00
Jesse Suen
425d35c477 fix: env vars to tune cluster cache were broken (#7779)
Signed-off-by: Jesse Suen <jesse@akuity.io>
2021-11-24 18:19:19 -08:00
Alexander Matyushentsev
0d7c4cbe83 fix: upgraded gitops engine to v0.4.2 (fixes #7561)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-11-19 13:07:41 -08:00
argo-bot
a408e299ff Bump version to 2.1.7 2021-11-17 22:02:58 +00:00
argo-bot
1acd1af8ef Bump version to 2.1.7 2021-11-17 22:02:45 +00:00
Mark Sarcevicz
5679e4060e Fix: Kuberenetes manifest to have new Github.com ssh known host keys for ArgoCD deployments (#7722)
* Kuberenetes manifest to have new ssh known host keys for ArgoCD deployments

https://github.blog/2021-09-01-improving-git-protocol-security-github/
Signed-off-by: smark88 <msarcevicz@influxdata.com>

* added to docs

Signed-off-by: smark88 <msarcevicz@influxdata.com>

* fix: regenerate manifests using 'make manifests'

Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>

Co-authored-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-11-17 13:33:59 -08:00
argo-bot
a346cf933e Bump version to 2.1.6 2021-10-28 19:51:48 +00:00
argo-bot
f249d530b5 Bump version to 2.1.6 2021-10-28 19:51:34 +00:00
Alexander Matyushentsev
46c1ef7a16 fix: don't use revision caching during app creation (#7508)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-10-20 20:40:18 -07:00
Mohammad Yosefpor
b4565fd7b2 fix: supporting OCI dependencies. Fixes #6062 (#6994)
* fix: supporting OCI dependencies

Signed-off-by: Mohammad Yosefpor <myusefpur@gmail.com>

* chore: add org to USERS.md

Signed-off-by: Mohammad Yosefpor <myusefpur@gmail.com>

* fix(tests): remove invalid TestRepoPermission e2e test

Signed-off-by: Mohammad Yosefpor <myusefpur@gmail.com>
2021-10-20 18:43:03 -07:00
argo-bot
a8a6fc8dda Bump version to 2.1.5 2021-10-20 15:09:22 +00:00
argo-bot
81024f8a89 Bump version to 2.1.5 2021-10-20 15:09:12 +00:00
Alexander Matyushentsev
f0201c3a99 fix: Invalid memory address or nil pointer dereference in processRequestedAppOperation (#7501)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-10-20 08:04:38 -07:00
argo-bot
d5c6608827 Bump version to 2.1.4 2021-10-20 00:27:32 +00:00
argo-bot
0564de77e6 Bump version to 2.1.4 2021-10-20 00:27:18 +00:00
Alexander Matyushentsev
e1eec8a9dc fix: Operation has completed with phase: Running (#7482)
* fix: Operation has completed with phase: Running

Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-10-19 17:17:34 -07:00
Alexander Matyushentsev
3d8d03f0a4 fix: Application status panel shows Syncing instead of Deleting (#7486)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-10-19 10:36:23 -07:00
pasha-codefresh
64f5c6aa85 fix: remove not existing repo (#7280)
* remove not existing repo

Signed-off-by: pashavictorovich <pavel@codefresh.io>

* fix test

Signed-off-by: pashavictorovich <pavel@codefresh.io>
2021-10-12 09:59:51 -07:00
Alexander Matyushentsev
f9e2fc9210 docs: update v2.3+ roadmap (#7353)
* docs: update v2.3+ roadmap

Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>

* Address reviewer notes: Add 'Merge Argo CD Image Updater into Argo CD' and 'Multi-tenancy improvements'

Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-10-12 08:55:51 -07:00
Jan-Otto Kröpke
f9eac82928 docs: Kustomize load_restrictor -> load-restrictor (#7358)
Signed-off-by: Jan-Otto Kröpke <joe@adorsys.de>
2021-10-12 08:55:32 -07:00
Remington Breeze
bfbc19a583 fix(ui): Add Error Boundary around Extensions and comply with new Extensions API (#7215)
* fix: Add error boundary around Extensions and change path where UI looks for extensions

Signed-off-by: Remington Breeze <remington@breeze.software>

* Add error message to error boundary

Signed-off-by: Remington Breeze <remington@breeze.software>
2021-10-04 17:38:06 -07:00
argo-bot
d855831540 Bump version to 2.1.3 2021-09-29 21:44:26 +00:00
argo-bot
6536fd9fb4 Bump version to 2.1.3 2021-09-29 21:44:11 +00:00
Alexander Matyushentsev
053bfbe845 fix: core-install.yaml always refers to latest argocd image (#7321)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-09-29 14:05:46 -07:00
Chetan Banavikalmutt
7b771061e1 fix: handle applicationset backup forbidden error (#7306)
Signed-off-by: Chetan Banavikalmutt <chetanrns1997@gmail.com>
2021-09-29 12:35:01 -07:00
Alexander Matyushentsev
f8c6bcba65 fix: Argo CD should not use cached git/helm revision during app creation/update validation (#7244)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-09-16 18:35:52 -07:00
Remington Breeze
6e9b18ea4b fix(ui): More tab was displayed for resources that did not have extensions installed (#7209)
Signed-off-by: Remington Breeze <remington@breeze.software>
2021-09-14 08:47:49 -07:00
jannfis
7a72b6f2d2 chore: Update haproxy for redis-ha to 2.0.25 (#7194)
Signed-off-by: jannfis <jann@mistrust.net>
2021-09-10 09:14:23 -07:00
Thomas
51db9bdf79 fix: use selected helm-values (#7166)
Signed-off-by: Thomas Münzl <thomasmuenzl@icloud.com>
2021-09-09 21:13:18 -07:00
irizzant
b2c5f5b63c 7144 fix: add custom volume as Helm working dir (#7162)
Signed-off-by: irizzant <i.rizzante@gmail.com>
2021-09-09 21:12:49 -07:00
argo-bot
7af9dfb352 Bump version to 2.1.2 2021-09-02 17:58:03 +00:00
argo-bot
8a39759eb3 Bump version to 2.1.2 2021-09-02 17:57:49 +00:00
Alexander Matyushentsev
3981432899 fix: cluster filter popping out of box (#7135)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-09-02 10:55:50 -07:00
Alexander Matyushentsev
af2e16fcaf fix: gracefully shutdown metrics server when dex config changes (#7138)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-09-01 18:58:30 -07:00
Alexander Matyushentsev
d34bf2cf14 fix: upgrade gitops engine version to v0.4.1 (#7088)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-09-01 16:57:50 -07:00
May Zhang
194b2894ef fix: repository name already exists when multiple helm dependencies f… (#7096)
* fix: repository name already exists when multiple helm dependencies from same private repo server

Signed-off-by: May Zhang <may_zhang@intuit.com>

* fix: add test cases

Signed-off-by: May Zhang <may_zhang@intuit.com>

* fix: clean up

Signed-off-by: May Zhang <may_zhang@intuit.com>
2021-09-01 14:56:38 -07:00
argo-bot
aab9542f8b Bump version to 2.1.1 2021-08-25 15:05:12 +00:00
argo-bot
a85ab6586d Bump version to 2.1.1 2021-08-25 15:04:57 +00:00
May Zhang
57abbf95ed fix: password reset requirements (#7071)
* fix: password reset meet requirement

Signed-off-by: May Zhang <may_zhang@intuit.com>
2021-08-24 23:22:23 -07:00
Alexander Matyushentsev
6a69d737da fix: Custom Styles feature is broken (#7067)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-08-24 15:11:52 -07:00
Remington Breeze
7c98813bb8 fix(ui): Add State to props passed to Extensions (#7045)
Signed-off-by: Remington Breeze <remington@breeze.software>
Co-authored-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-08-24 15:11:43 -07:00
Alexander Matyushentsev
6868bd4213 fix: fix building remote container (#7062)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-08-24 15:11:32 -07:00
pasha-codefresh
c7c08426ac fix: make codegen (#7059)
fix: make codegen

Signed-off-by: pashavictorovich <pavel@codefresh.io>
2021-08-24 15:11:24 -07:00
Alexander Matyushentsev
86d21721a8 fix: keep uid_entrypoint.sh for backward compatibility (#7047)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-08-23 06:45:35 +00:00
argo-bot
d0b2d55e3f Bump version to 2.1.0 2021-08-20 05:22:27 +00:00
argo-bot
6f03da218f Bump version to 2.1.0 2021-08-20 05:22:12 +00:00
Alexander Matyushentsev
5b5182f83a fix: reload extension when selected resource node changes (#7034)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-08-19 22:18:14 -07:00
Remington Breeze
d6c2cdf4a7 fix(ui): Resource details crashed due to extensions (#7025)
Signed-off-by: Remington Breeze <remington@breeze.software>
2021-08-19 21:13:12 -07:00
Alexander Matyushentsev
aeb9e9b383 feat: support loading extensions in Argo CD UI (#7019)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-08-19 21:13:05 -07:00
Philipp Dallig
cd470736f6 fix: Running on Openshift 4.x with readOnlyRootFilesystem (#6998)
Signed-off-by: Philipp Dallig <philipp.dallig@gmail.com>
2021-08-18 14:42:16 +00:00
Alexander Matyushentsev
7470dc3359 fix: resouce health filter should include node if node or node's root health matches filter (#7002)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-08-17 14:35:00 -07:00
Alexander Matyushentsev
6d3688ddd9 fix: upgrade argo-ui version (#7010)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-08-17 12:51:15 -07:00
Yuan Tang
20b3d56ba1 docs: Fix links to subsections in roadmap (#6986)
* docs: Fix links to completed subsections in roadmap

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>

* One more link

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
2021-08-17 12:26:30 -07:00
Alexander Matyushentsev
d10778f431 docs: update roadmap and release checklist (#6980)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-08-17 12:26:24 -07:00
Alexander Matyushentsev
9d81d36a2c docs: update obsolete resource customization docs (#7000)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-08-17 12:26:12 -07:00
Remington Breeze
03bf5051e6 chore(ui): Relocate CheckboxRow component to Filters instead of argo-ui (#6990)
Signed-off-by: Remington Breeze <remington@breeze.software>
2021-08-17 12:25:42 -07:00
Remington Breeze
a502982d2d fix(ui): Migrate to keyhook helpers in argo-ui, update keybindings accordingly (#6953)
Signed-off-by: Remington Breeze <remington@breeze.software>
2021-08-17 12:25:34 -07:00
David Collom
fb3fe6ed42 fix: Disable generating of CA Certificates (#6987)
Signed-off-by: David Collom <david.collom@jetstack.io>
2021-08-16 06:43:23 +00:00
Alexander Matyushentsev
f3e28d3131 fix: argocd core commands should not drop existing persistent flags (#6981)
* fix: argocd core commands should not drop existing persistent flags

Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>

* run cli codegen

Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-08-12 23:11:29 -07:00
pasha-codefresh
491ab32214 feat(ui): add top value for large breakpoint Closes #6944 (#6975)
feat(ui): add `top` value for `large` breakpoint Closes #6944 (#6975)

Signed-off-by: andrii-codefresh <andrii@codefresh.io>
2021-08-12 12:40:46 -07:00
Jesse Suen
f4331324cc fix: rollout v1.0 'Paused' status.phase should map to Argo CD 'Suspended' (#6967)
Signed-off-by: Jesse Suen <jessesuen@gmail.com>
2021-08-12 11:38:06 -07:00
Alexander Matyushentsev
a8e61cc13c fix: basehref not set correctly (#6962)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-08-11 15:23:06 -07:00
argo-bot
5700faf0d1 Bump version to 2.1.0-rc3 2021-08-11 19:36:11 +00:00
argo-bot
829f0285b9 Bump version to 2.1.0-rc3 2021-08-11 19:35:56 +00:00
Alexander Matyushentsev
d6bb869468 fix: update deprecated helm2 installation URL (#6960)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-08-11 12:35:01 -07:00
Alexander Matyushentsev
b3abdb1323 fix: make sure repo server discard cached empty response (#6948)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-08-11 08:54:33 -07:00
Alexander Matyushentsev
e2bca9f9ef fix: applications/resources filter improvement and bug fixes (#6931)
* fix: applications/resources filter improvement and bug fixes

Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-08-11 08:54:26 -07:00
May Zhang
bdc53c804b fix: use secure way to generate initial password (#6938)
* fix: use secure way to generate initial password

Signed-off-by: May Zhang <may_zhang@intuit.com>

* fix: use secure way to generate initial password

Signed-off-by: May Zhang <may_zhang@intuit.com>
2021-08-10 15:01:15 -07:00
Daisuke Taniwaki
13e10a7c2b fix: Set header to OIDC requests (#6869)
Signed-off-by: Daisuke Taniwaki <daisuketaniwaki@gmail.com>
2021-08-10 15:01:05 -07:00
jannfis
f77d35a3d2 docs: Update contributor docs (#6615)
* Sync

Signed-off-by: jannfis <jann@mistrust.net>

* docs: Update contributor docs

Signed-off-by: jannfis <jann@mistrust.net>

* New paragraph about code submission

Signed-off-by: jannfis <jann@mistrust.net>

* Update

Signed-off-by: jannfis <jann@mistrust.net>

* Update code submissions before triage paragraph

Signed-off-by: jannfis <jann@mistrust.net>
2021-08-09 10:41:53 +00:00
jannfis
0818a48348 docs: Update security considerations (#6930)
* docs: Update security considerations

Signed-off-by: jannfis <jann@mistrust.net>

* docs: Update security considerations

Signed-off-by: jannfis <jann@mistrust.net>
2021-08-09 10:41:33 +00:00
Alexander Matyushentsev
faf7bff322 fix: application sync panel crashes if app has no sync options (#6914)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-08-04 16:57:09 -07:00
Yi Cai
96fba7c67b fix: Reword Generate new token dialog (#6913)
Signed-off-by: ciiay <yicai@redhat.com>
2021-08-04 16:57:05 -07:00
Alexander Matyushentsev
8020261a7d fix: assume ARGOCD_APPLICATION_CONTROLLER_REPO_SERVER_TIMEOUT_SECONDS, ARGOCD_SERVER_REPO_SERVER_TIMEOUT_SECONDS env vars have seconds (#6912)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-08-04 13:52:37 -07:00
Yi Cai
e7e93b2e7a fix: UI 6337 cluster filter improvement for issue #6337 (#6856)
* fix: when cluster field is bge-dev1, the applications with cluster dev1 is selected #6337

Signed-off-by: ciiay <yicai@redhat.com>
2021-08-04 13:52:34 -07:00
Noam Gal
8d78f0a604 fixed max to use MaxInt64 value (#6911)
Signed-off-by: Noam Gal <noam.gal@codefresh.io>
2021-08-04 11:21:57 -07:00
Chetan Banavikalmutt
27e9f6398c fix: unset command should remove env vars when there's no error (#6908)
Signed-off-by: Chetan Banavikalmutt <chetanrns1997@gmail.com>
2021-08-04 11:21:54 -07:00
pasha-codefresh
455d0f1be1 feat: rollback should work without id passed #6825. (#6877)
feat: rollback should work without id passed #6825. (#6877)

Signed-off-by: pashavictorovich <pavel@codefresh.io>
2021-08-04 11:21:50 -07:00
Remington Breeze
a87660adee fix: Add https prefix to ingress URLs if hosts field is present (#6901)
Signed-off-by: Remington Breeze <remington@breeze.software>
2021-08-04 11:21:47 -07:00
woshicai
31093cd359 fix: client input arguments with equal sign (#6885)
Signed-off-by: Charles Cai <charles.cai@sap.com>

Co-authored-by: Charles Cai <charles.cai@sap.com>
2021-08-04 11:21:41 -07:00
Bob Claerhout
cad6016ac6 docs: Add replace description on individual resource level (#6905)
Signed-off-by: Bob Claerhout <claerhout.bob@gmail.com>
2021-08-04 11:21:34 -07:00
May Zhang
a2f1993c06 fix: logout redirect URL (#6903)
Signed-off-by: May Zhang <may_zhang@intuit.com>
2021-08-03 17:07:37 -07:00
Alexander Matyushentsev
9118e7a7ec refactor: update resources install order according to helm implementation (#6902)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-08-03 17:07:33 -07:00
Alexander Matyushentsev
cd10754574 fix: controller should not create orphaned resources warning by default (#6898)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-08-03 17:07:28 -07:00
Alexander Matyushentsev
f0b91cccf3 feat: Improve Replace sync option description in UI (#6899)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-08-03 17:07:23 -07:00
dependabot[bot]
1354538269 chore(deps): bump tar from 6.1.0 to 6.1.3 in /ui (#6900)
Bumps [tar](https://github.com/npm/node-tar) from 6.1.0 to 6.1.3.
- [Release notes](https://github.com/npm/node-tar/releases)
- [Changelog](https://github.com/npm/node-tar/blob/main/CHANGELOG.md)
- [Commits](https://github.com/npm/node-tar/compare/v6.1.0...v6.1.3)

---
updated-dependencies:
- dependency-name: tar
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-08-03 17:07:20 -07:00
Yi Cai
5c408cc5d5 fix: menu for application loses text when title area buttons collapse #6474 (#6887)
Signed-off-by: ciiay <yicai@redhat.com>
2021-08-03 17:07:10 -07:00
argo-bot
b067879892 Bump version to 2.1.0-rc2 2021-08-03 17:00:32 +00:00
argo-bot
ab2bbc2201 Bump version to 2.1.0-rc2 2021-08-03 17:00:06 +00:00
Remington Breeze
a93649419b fix(ui): Bump argo-ui to hide filter suggestions on enter and show on typing (#6891)
* fix(ui): Bump argo-ui to hide filter suggestions on enter and show on typing

Signed-off-by: Remington Breeze <remington@breeze.software>

* remove unneccessary yarn.lock changes

Signed-off-by: Remington Breeze <remington@breeze.software>
2021-08-03 09:45:26 -07:00
Remington Breeze
15c361e525 fix(ui): Add View Details option to resource actions menu (#6893)
Signed-off-by: Remington Breeze <remington@breeze.software>
2021-08-03 09:45:21 -07:00
Jan-Otto Kröpke
a5ba98ff61 fix: upgrade to kustomize 4.2.0 (#6861)
Signed-off-by: Jan-Otto Kröpke <joe@adorsys.de>
2021-08-03 09:45:16 -07:00
May Zhang
18ddf1f839 fix: add documentation for using argocd repocreds with --enable-coi and --type helm (#6890)
Signed-off-by: May Zhang <may_zhang@intuit.com>
2021-08-03 09:45:12 -07:00
Jun
4e8b9b85b8 feat: Rollback command support omit history id (#6863)
Signed-off-by: junnplus <junnplus@gmail.com>
2021-08-03 09:45:03 -07:00
Remington Breeze
34b3139309 fix(ui): Incorrect path for non-namespaced resources (#6895)
Signed-off-by: Remington Breeze <remington@breeze.software>
2021-08-03 09:44:58 -07:00
Remington Breeze
4410803b11 fix(ui): Page navigation no longer visible with status bar (#6888)
Signed-off-by: Remington Breeze <remington@breeze.software>
2021-08-03 09:44:50 -07:00
Alexander Matyushentsev
83b272e125 fix: make sure orphaned filter checkbox is clickable (#6886)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-08-03 09:44:44 -07:00
woshicai
4f967eaa5a fix: docs about custom image user #6851 (#6872)
* fix: docs about custom image user, change it from argocd to 999

Signed-off-by: Charles Cai <charles.cai@sap.com>

* update: docs for upgrading version

Signed-off-by: Charles Cai <charles.cai@sap.com>

Co-authored-by: Charles Cai <charles.cai@sap.com>
2021-08-03 09:44:38 -07:00
Joe Bowbeer
13cdf01506 docs: installation.md (#6860)
Signed-off-by: Joe Bowbeer <joe.bowbeer@gmail.com>
2021-08-03 09:44:32 -07:00
Yi Cai
d84822ea88 fix: Project filter selector does not get unset upon clear filters #6750 (#6866)
Signed-off-by: ciiay <yicai@redhat.com>
2021-08-03 09:44:25 -07:00
Remington Breeze
a524a3b4d9 fix(ui): Prevent UI crash if app status or resources is empty (#6858) 2021-07-30 10:03:55 -07:00
Alexander Matyushentsev
9419c11c1d fix: util.cli.SetLogLevel should update global log level (#6852)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-07-29 13:02:07 -07:00
Alexander Matyushentsev
12a4475176 fix: include cluster level RBAC into argocd-core manifests (#6854)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-07-29 12:49:53 -07:00
Klaus Dorninger
662567b8fd fix: #6844 multiple global projects can be configured (#6845)
Signed-off-by: Klaus Dorninger <github@dornimaug.org>
2021-07-29 11:08:01 -07:00
Alexander Matyushentsev
6b14f909e9 fix: core installation must include CRD definitions (#6841)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-07-28 16:30:42 -07:00
argo-bot
a92c24094e Bump version to 2.1.0-rc1 2021-07-28 22:24:09 +00:00
argo-bot
307de9555d Bump version to 2.1.0-rc1 2021-07-28 22:23:55 +00:00
953 changed files with 8008 additions and 77991 deletions


@@ -6,7 +6,7 @@ labels: 'bug'
assignees: ''
---
<!-- If you are trying to resolve an environment-specific issue or have a one-off question about the edge case that does not require a feature then please consider asking a question in argocd slack [channel](https://argoproj.github.io/community/join-slack). -->
If you are trying to resolve an environment-specific issue or have a one-off question about the edge case that does not require a feature then please consider asking a question in argocd slack [channel](https://argoproj.github.io/community/join-slack).
Checklist:
@@ -16,19 +16,19 @@ Checklist:
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
A clear and concise description of what the bug is.
**To Reproduce**
<!-- A list of the steps required to reproduce the issue. Best of all, give us the URL to a repository that exhibits this issue. -->
A list of the steps required to reproduce the issue. Best of all, give us the URL to a repository that exhibits this issue.
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
A clear and concise description of what you expected to happen.
**Screenshots**
<!-- If applicable, add screenshots to help explain your problem. -->
If applicable, add screenshots to help explain your problem.
**Version**


@@ -9,12 +9,23 @@ on:
pull_request:
branches:
- 'master'
- 'release-*'
env:
# Golang version to use across CI steps
GOLANG_VERSION: '1.17.6'
GOLANG_VERSION: '1.16.5'
jobs:
build-docker:
name: Build Docker image
runs-on: ubuntu-latest
if: github.head_ref != ''
steps:
- name: Checkout code
uses: actions/checkout@v2
- name: Build Docker image
run: |
make image
check-go:
name: Ensure Go modules synchronicity
runs-on: ubuntu-latest
@@ -61,10 +72,10 @@ jobs:
- name: Checkout code
uses: actions/checkout@v2
- name: Run golangci-lint
uses: golangci/golangci-lint-action@v2
uses: golangci/golangci-lint-action@v3
with:
version: v1.38.0
args: --timeout 10m --exclude SA5011
version: v1.46.2
args: --timeout 10m --exclude SA5011 --verbose
test-go:
name: Run unit tests for Go packages
@@ -254,7 +265,6 @@ jobs:
env:
NODE_ENV: production
NODE_ONLINE_ENV: online
HOST_ARCH: amd64
working-directory: ui/
- name: Run ESLint
run: yarn lint
@@ -331,7 +341,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
k3s-version: [v1.23.3, v1.22.6, v1.21.2]
k3s-version: [v1.21.2, v1.20.2, v1.19.2, v1.18.9, v1.17.11]
needs:
- build-go
env:
@@ -376,13 +386,10 @@ jobs:
- name: Add /usr/local/bin to PATH
run: |
echo "/usr/local/bin" >> $GITHUB_PATH
- name: Add ./dist to PATH
run: |
echo "$(pwd)/dist" >> $GITHUB_PATH
- name: Download Go dependencies
run: |
go mod download
go install github.com/mattn/goreman@latest
go get github.com/mattn/goreman
- name: Install all tools required for building & testing
run: |
make install-test-tools-local
@@ -394,7 +401,7 @@ jobs:
run: |
docker pull quay.io/dexidp/dex:v2.25.0
docker pull argoproj/argo-cd-ci-builder:v1.0.0
docker pull redis:6.2.6-alpine
docker pull redis:6.2.4-alpine
- name: Create target directory for binaries in the build-process
run: |
mkdir -p dist


@@ -8,7 +8,6 @@ on:
jobs:
CodeQL-Build:
if: github.repository == 'argoproj/argo-cd'
# CodeQL runs on ubuntu-latest and windows-latest
runs-on: ubuntu-latest

.github/workflows/gh-pages.yaml (new file)

@@ -0,0 +1,23 @@
name: Deploy
on:
push:
branches:
- master
pull_request:
branches:
- 'master'
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v1
- name: Setup Python
uses: actions/setup-python@v1
with:
python-version: 3.x
- name: build
run: |
pip install -r docs/requirements.txt
mkdocs build


@@ -4,17 +4,12 @@ on:
push:
branches:
- master
pull_request:
branches:
- master
types: [ labeled, unlabeled, opened, synchronize, reopened ]
env:
GOLANG_VERSION: '1.17.6'
GOLANG_VERSION: '1.16.5'
jobs:
publish:
if: github.repository == 'argoproj/argo-cd'
runs-on: ubuntu-latest
env:
GOPATH: /home/runner/work/argo-cd/argo-cd
@@ -31,43 +26,34 @@ jobs:
working-directory: ./src/github.com/argoproj/argo-cd
id: image
# login
# build
- run: |
docker images -a --format "{{.ID}}" | xargs -I {} docker rmi {}
make image DEV_IMAGE=true DOCKER_PUSH=false IMAGE_NAMESPACE=ghcr.io/argoproj IMAGE_TAG=${{ steps.image.outputs.tag }}
working-directory: ./src/github.com/argoproj/argo-cd
# publish
- run: |
docker login ghcr.io --username $USERNAME --password $PASSWORD
docker login quay.io --username "${DOCKER_USERNAME}" --password "${DOCKER_TOKEN}"
if: github.event_name == 'push'
docker push ghcr.io/argoproj/argocd:${{ steps.image.outputs.tag }}
docker login --username "${DOCKER_USERNAME}" --password "${DOCKER_TOKEN}"
docker tag ghcr.io/argoproj/argocd:${{ steps.image.outputs.tag }} argoproj/argocd:latest
docker push argoproj/argocd:latest
env:
USERNAME: ${{ secrets.USERNAME }}
PASSWORD: ${{ secrets.TOKEN }}
DOCKER_USERNAME: ${{ secrets.RELEASE_QUAY_USERNAME }}
DOCKER_TOKEN: ${{ secrets.RELEASE_QUAY_TOKEN }}
# build
- uses: docker/setup-qemu-action@v1
- uses: docker/setup-buildx-action@v1
- run: |
IMAGE_PLATFORMS=linux/amd64
if [[ "${{ github.event_name }}" == "push" || "${{ contains(github.event.pull_request.labels.*.name, 'test-arm-image') }}" == "true" ]]
then
IMAGE_PLATFORMS=linux/amd64,linux/arm64
fi
echo "Building image for platforms: $IMAGE_PLATFORMS"
docker buildx build --platform $IMAGE_PLATFORMS --push="${{ github.event_name == 'push' }}" \
-t ghcr.io/argoproj/argocd:${{ steps.image.outputs.tag }} \
-t quay.io/argoproj/argocd:latest .
working-directory: ./src/github.com/argoproj/argo-cd
DOCKER_USERNAME: ${{ secrets.RELEASE_DOCKERHUB_USERNAME }}
DOCKER_TOKEN: ${{ secrets.RELEASE_DOCKERHUB_TOKEN }}
# deploy
- run: git clone "https://$TOKEN@github.com/argoproj/argoproj-deployments"
if: github.event_name == 'push'
env:
TOKEN: ${{ secrets.TOKEN }}
- run: |
docker run -u $(id -u):$(id -g) -v $(pwd):/src -w /src --rm -t ghcr.io/argoproj/argocd:${{ steps.image.outputs.tag }} kustomize edit set image quay.io/argoproj/argocd=ghcr.io/argoproj/argocd:${{ steps.image.outputs.tag }}
docker run -v $(pwd):/src -w /src --rm -t lyft/kustomizer:v3.3.0 kustomize edit set image quay.io/argoproj/argocd=ghcr.io/argoproj/argocd:${{ steps.image.outputs.tag }}
git config --global user.email 'ci@argoproj.com'
git config --global user.name 'CI'
git diff --exit-code && echo 'Already deployed' || (git commit -am 'Upgrade argocd to ${{ steps.image.outputs.tag }}' && git push)
if: github.event_name == 'push'
working-directory: argoproj-deployments/argocd
# TODO: clean up old images once github supports it: https://github.community/t5/How-to-use-Git-and-GitHub/Deleting-images-from-GitHub-Package-Registry/m-p/41202/thread-id/9811


@@ -12,12 +12,11 @@ on:
- '!release-v0*'
env:
GOLANG_VERSION: '1.17.6'
GOLANG_VERSION: '1.16.5'
jobs:
prepare-release:
name: Perform automatic release on trigger ${{ github.ref }}
if: github.repository == 'argoproj/argo-cd'
runs-on: ubuntu-latest
env:
# The name of the tag as supplied by the GitHub event
@@ -95,7 +94,7 @@ jobs:
echo "=========== BEGIN COMMIT MESSAGE ============="
git show ${SOURCE_TAG}
echo "============ END COMMIT MESSAGE =============="
# Quite dirty hack to get the release notes from the annotated tag
# into a temporary file.
RELEASE_NOTES=$(mktemp -p /tmp release-notes.XXXXXX)
@@ -142,7 +141,7 @@ jobs:
echo "RELEASE_NOTES=${RELEASE_NOTES}" >> $GITHUB_ENV
- name: Setup Golang
uses: actions/setup-go@v2
uses: actions/setup-go@v1
with:
go-version: ${{ env.GOLANG_VERSION }}
@@ -183,7 +182,18 @@ jobs:
echo "Creating release ${RELEASE_TAG}"
git tag ${RELEASE_TAG}
- name: Login to docker repositories
- name: Build Docker image for release
run: |
set -ue
git clean -fd
mkdir -p dist/
make image IMAGE_TAG="${TARGET_VERSION}" DOCKER_PUSH=false
make release-cli
chmod +x ./dist/argocd-linux-amd64
./dist/argocd-linux-amd64 version --client
if: ${{ env.DRY_RUN != 'true' }}
- name: Push docker image to repository
env:
DOCKER_USERNAME: ${{ secrets.RELEASE_DOCKERHUB_USERNAME }}
DOCKER_TOKEN: ${{ secrets.RELEASE_DOCKERHUB_TOKEN }}
@@ -192,22 +202,11 @@ jobs:
run: |
set -ue
docker login quay.io --username "${QUAY_USERNAME}" --password "${QUAY_TOKEN}"
docker push ${IMAGE_NAMESPACE}/argocd:v${TARGET_VERSION}
# Remove the following when Docker Hub is gone
docker login --username "${DOCKER_USERNAME}" --password "${DOCKER_TOKEN}"
if: ${{ env.DRY_RUN != 'true' }}
- uses: docker/setup-qemu-action@v1
- uses: docker/setup-buildx-action@v1
- name: Build and push Docker image for release
run: |
set -ue
git clean -fd
mkdir -p dist/
docker buildx build --platform linux/amd64,linux/arm64 --push -t ${IMAGE_NAMESPACE}/argocd:v${TARGET_VERSION} -t argoproj/argocd:v${TARGET_VERSION} .
make release-cli
chmod +x ./dist/argocd-linux-amd64
./dist/argocd-linux-amd64 version --client
docker tag ${IMAGE_NAMESPACE}/argocd:v${TARGET_VERSION} argoproj/argocd:v${TARGET_VERSION}
docker push argoproj/argocd:v${TARGET_VERSION}
if: ${{ env.DRY_RUN != 'true' }}
- name: Read release notes file
@@ -245,17 +244,6 @@ jobs:
asset_content_type: application/octet-stream
if: ${{ env.DRY_RUN != 'true' }}
- name: Upload argocd-linux-arm64 binary to release assets
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./dist/argocd-linux-arm64
asset_name: argocd-linux-arm64
asset_content_type: application/octet-stream
if: ${{ env.DRY_RUN != 'true' }}
- name: Upload argocd-darwin-amd64 binary to release assets
uses: actions/upload-release-asset@v1
env:
@@ -267,17 +255,6 @@ jobs:
asset_content_type: application/octet-stream
if: ${{ env.DRY_RUN != 'true' }}
- name: Upload argocd-darwin-arm64 binary to release assets
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./dist/argocd-darwin-arm64
asset_name: argocd-darwin-arm64
asset_content_type: application/octet-stream
if: ${{ env.DRY_RUN != 'true' }}
- name: Upload argocd-windows-amd64 binary to release assets
uses: actions/upload-release-asset@v1
env:
@@ -289,48 +266,6 @@ jobs:
asset_content_type: application/octet-stream
if: ${{ env.DRY_RUN != 'true' }}
- name: Generate SBOM (spdx)
id: spdx-builder
env:
# defines the spdx/spdx-sbom-generator version to use.
SPDX_GEN_VERSION: v0.0.13
# defines the sigs.k8s.io/bom version to use.
SIGS_BOM_VERSION: v0.2.1
# comma delimited list of project relative folders to inspect for package
# managers (gomod, yarn, npm).
PROJECT_FOLDERS: ".,./ui"
# full qualified name of the docker image to be inspected
DOCKER_IMAGE: ${{env.IMAGE_NAMESPACE}}/argocd:v${{env.TARGET_VERSION}}
run: |
yarn install --cwd ./ui
go install github.com/spdx/spdx-sbom-generator/cmd/generator@$SPDX_GEN_VERSION
go install sigs.k8s.io/bom/cmd/bom@$SIGS_BOM_VERSION
# Generate SPDX for project dependencies analyzing package managers
for folder in $(echo $PROJECT_FOLDERS | sed "s/,/ /g")
do
generator -p $folder -o /tmp
done
# Generate SPDX for binaries analyzing the docker image
if [[ ! -z $DOCKER_IMAGE ]]; then
bom generate -o /tmp/bom-docker-image.spdx -i $DOCKER_IMAGE
fi
cd /tmp && tar -zcf sbom.tar.gz *.spdx
if: ${{ env.DRY_RUN != 'true' }}
- name: Upload SBOM to release assets
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: /tmp/sbom.tar.gz
asset_name: sbom.tar.gz
asset_content_type: application/octet-stream
if: ${{ env.DRY_RUN != 'true' }}
- name: Update homebrew formula
env:
HOMEBREW_TOKEN: ${{ secrets.RELEASE_HOMEBREW_TOKEN }}
@@ -345,4 +280,3 @@ jobs:
set -ue
git push --delete origin ${SOURCE_TAG}
if: ${{ always() }}

.gitpod.Dockerfile

@@ -9,7 +9,7 @@ RUN curl -L https://go.kubebuilder.io/dl/2.3.1/$(go env GOOS)/$(go env GOARCH) |
tar -xz -C /tmp/ && mv /tmp/kubebuilder_2.3.1_$(go env GOOS)_$(go env GOARCH) /usr/local/kubebuilder
RUN apt-get install redis-server -y
RUN go install github.com/mattn/goreman@latest
RUN go get github.com/mattn/goreman
USER gitpod


@@ -2,5 +2,5 @@ image:
file: .gitpod.Dockerfile
tasks:
- init: make mod-download-local dep-ui-local && GO111MODULE=off go install github.com/mattn/goreman@latest
- init: make mod-download-local dep-ui-local && GO111MODULE=off go get github.com/mattn/goreman
command: make start-test-k8s


@@ -1,22 +0,0 @@
run:
timeout: 2m
skip-files:
- ".*\\.pb\\.go"
skip-dirs:
- pkg/client/
- vendor/
linters:
enable:
- vet
- deadcode
- goimports
- varcheck
- structcheck
- ineffassign
- unconvert
- unparam
linters-settings:
goimports:
local-prefixes: github.com/argoproj/argo-cd
service:
golangci-lint-version: 1.21.0


@@ -1,234 +1,6 @@
# Changelog
## v2.3.0 (Unreleased)
### Argo CD ApplicationSet and Notifications are now part of Argo CD
Two popular [Argoproj Labs](https://github.com/argoproj-labs) projects [Argo CD ApplicationSet](https://github.com/argoproj/applicationset) and
[Argo CD Notifications](https://github.com/argoproj-labs/argocd-notifications) are now part of Argo CD! The default Argo CD installation manifests now
bundle both projects out of the box. Going forward you can expect more tightened integration of these projects into Argo CD.
### New sync and diff strategies
Users can now configure the Application resource to instruct Argo CD to respect the ignore-differences configuration during the sync process. To do so, add the new sync option RespectIgnoreDifferences=true to the Application resource. Once the sync option is added,
Argo CD won't change ignored fields during the syncing process.
Configuring ignored fields is also easier now. Instead of listing fields one by one, users can now leverage the
managedFields metadata to instruct Argo CD about trusted managers and automatically ignore any fields owned by them. A new diff customization
(managedFieldsManagers) is now available, allowing users to specify managers the application should trust and to ignore all fields owned by those managers.
Read more about these changes in the [New sync and diff strategies in ArgoCD](https://blog.argoproj.io/new-sync-and-diff-strategies-in-argocd-44195d3f8b8c) blog post.
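For illustration, a minimal Application sketch combining the new sync option with the managedFieldsManagers diff customization might look like the following (the application name, repository URL, and manager name are placeholders, and the field layout assumes the standard Application spec):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook                 # placeholder application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/repo.git   # placeholder repository
    path: guestbook
    targetRevision: HEAD
  destination:
    server: https://kubernetes.default.svc
    namespace: guestbook
  # Trust a manager and ignore any fields it owns instead of listing them one by one.
  ignoreDifferences:
    - group: apps
      kind: Deployment
      managedFieldsManagers:
        - kube-controller-manager
  syncPolicy:
    syncOptions:
      # Respect the ignoreDifferences configuration during sync as well,
      # so Argo CD does not overwrite the ignored fields.
      - RespectIgnoreDifferences=true
```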
### ARM Images
An officially supported ARM64 image is now available. Enjoy running Argo CD on your Raspberry Pi! Additionally, the image size was reduced by nearly 50%
and is now only 200MB. The ARM version of the `argocd` CLI is also available and published as a GitHub release artifact.
### Compact Tree View And Click Application Navigation
The application details page now supports a compact visualization of the application resources tree. Using the "Group Nodes" button, you can collapse similar resources
into a single group node to remove clutter and make it easier to understand the state of application resources. You can still get detailed information about the collapsed resources by clicking on the group node; the list of collapsed resources is shown in a sliding panel. Is the compact resource tree still too big?
You can use the zoom in and zoom out feature to make it smaller, or even larger!
You also no longer need to move back and forth between the application details page and the application list page. Instead, you can navigate directly to the required application by clicking the search icon in the application details page title.
### Upgraded Config Management Tools
Both bundled Helm and Kustomize binaries have been upgraded to the latest versions. Kustomize has been upgraded from 4.2.0 to 4.4.1 and Helm has been upgraded from 3.7.1 to 3.8.0.
### Bug Fixes and Performance Enhancements
* Config management tools enhancements:
* The skipCrds flag and ability to ignore missing values files for Helm (#8012, #8003)
* Additional environment variables for Kustomize (#8096)
* Argo CD CLI follows the XDG Base directory standard (#7638)
* Redis is no longer used during SSO login (#8241)
### Features
- feat: Add app list and details page views to navigation history (#7776) (#7937)
- feat: Add skipCrds flag for helm charts (#8012)
- feat: Add visual indicator for newly created pods (#8006)
- feat: Added a new Helm option ignoreMissingValueFiles (#7767) (#8003)
- feat: Allow configuring system wide ignore differences for all resources (#8224)
- feat: Allow escaping dollar in Envsubst (#7961)
- feat: Allow external links on Application (#3487) (#8231)
- feat: Allow selecting application on detail page (#8176)
- feat: Bundle applicationset-controller with argocd (#8148)
- feat: Enable specifying root ca for oidc (#6712)
- feat: Expose ARGOCD_APP_NAME to the `kustomize build` command (#8096)
- feat: Ignore differences owned by trusted managers from managedFields (#7869)
- feat: New sync option to use ignore diff configs during sync (#8078)
- feat: Provide address flag for admin dashboard command (#8095)
- feat: Store "Group Nodes" button state in application details preferences (#8036)
- feat: Support specifying cluster by name in addition to API server URL in Cluster API (#8077)
- feat: Support XDG Base directory standard (#7638) (#7791)
- feat: Use encrypted cookie to store OAuth2 state nonce (instead of redis) (#8241)
- feat: Build images on PR and conditionally build arm64 image on push (#8108)
### Bug Fixes
- fix: Add "Restarting MinIO" status to MiniO Tenant health check (#8191)
- fix: Add all resources in list view (#7295)
- fix: Adding pagination to grouped nodes sliding panel#7837 (#7915)
- fix: Allow all resources to add external links (#7923)
- fix: Always call ValidateDestination (#7976)
- fix: Application exist panic when execute api call (#8188)
- fix: Application-icons-alignment (#8054)
- fix: Controller panics if resource manifest has incorrect annotation (#8022)
- fix: Correctly handle project field during partial cluster update (#7994)
- fix: Default value for retry validation #8055 (#8064)
- fix: Fix a possible crash when parsing RBAC (#8165)
- fix: Grouped node list missing resources on Compact resources view #8014 (#8018)
- fix: Issue with headless installation (#7958)
- fix: Issue with project scoped resources (#8048)
- fix: Kubernetes labels normalization for Prometheus (#7925)
- fix: Nested Refresh dropdown does not work on Application Details page #1524 (#7950)
- fix: Network line colors and menu icon alignment (#8059)
- fix: Opening app details shows UI error on some apps (#8016) (#8019)
- fix: Parse to correct uint32 type (#8177)
- fix: Prevent possible nil-pointer deref in normalizer (#8185)
- fix: Prevent possible out-of-bounds access when loading policies (#8186)
- fix: Provide a semantic version parsed version for KUBE_VERSION (#8250)
- fix: Refreshing label toast (#7979)
- fix: Resource details page crashes when resource is not deployed and hide managed fields is selected (#7971)
- fix: Retry disabled text (#8004)
- fix: Route health check stuck in 'Progressing' (#8170)
- fix: Sync window panel is crashed if resource name not contain letters (#8053)
- fix: Targetervision compatible without prefix refs/heads or refs/tags (#7939)
- fix: Trailing line in Filter Dropdown Menus #7821 (#8001)
- fix: Webhook URL matching edge cases (#7981)
- fix(ui): Use consistent case for diff modes (#7945)
- fix: Use gRPC timeout for sidecar CMPs (#8131) (#8236)
### Other
- chore: Bump go-jsonnet to v0.18.0 (#8011)
- chore: Escape proj in regex (#7985)
- chore: Exclude argocd-server rbac for core-install (#8234)
- chore: Log out the resource triggering reconciliation (#8192)
- chore: Migrate to use golang-jwt/jwt v4.2.0 (#8136)
- chore: Move resolveRevision from api-server to repo-server (#7966)
- chore: Update notifications version (#8267)
- chore: Update slack version (#8299)
- chore: Update to Redis 6.2.4 (#8157)
- chore: Upgrade awscli to 2.4.6 and remove python deps (#7947)
- chore: Upgrade base image to ubuntu:21.10 (#8230)
- chore: Upgrade dex to v2.30.2 (https://github.com/dexidp/dex/issues/2326) (#8237)
- chore: Upgrade gitops engine (#8288)
- chore: Upgrade golang to 1.17.6 (#8229)
- chore: Upgrade helm to most recent version (v3.7.2) (#8226)
- chore: Upgrade k8s client to v1.23 (#8213)
- chore: Upgrade kustomize to most recent version (v4.4.1) (#8227)
- refactor: Introduce 'byClusterName' secret index to speedup cluster server URL lookup (#8133)
- refactor: Move project filtering to server side (#8102)
## v2.2.3 (2022-01-18)
- fix: Application exist panic when execute api call (#8188)
- fix: Route health check stuck in 'Progressing' (#8170)
- refactor: Introduce 'byClusterName' secret index to speedup cluster server URL lookup (#8133)
- chore: Update to Redis 6.2.4 (#8157) (#8158)
## v2.2.2 (2021-12-31)
- fix: Issue with project scoped resources (#8048)
- fix: Escape proj in regex (#7985)
- fix: Default value for retry validation #8055 (#8064)
- fix: Sync window panel is crashed if resource name not contain letters (#8053)
- fix: Upgrade github.com/argoproj/gitops-engine to v0.5.2
- fix: Retry disabled text (#8004)
- fix: Opening app details shows UI error on some apps (#8016) (#8019)
- fix: Correctly handle project field during partial cluster update (#7994)
- fix: Cluster API does not support updating labels and annotations (#7901)
## v2.2.1 (2021-12-16)
- fix: Resource details page crashes when resource is not deployed and hide managed fields is selected (#7971)
- fix: Issue with headless installation (#7958)
- fix: Nil pointer (#7905)
## v2.2.0 (2021-12-14)
> [Upgrade instructions](./docs/operator-manual/upgrading/2.1-2.2.md)
### Project Scoped repositories and clusters
Project scoped repositories and clusters simplify registering repository and cluster credentials.
Instead of requiring operators to set up all usable clusters and git repositories in advance, developers can now do
this on their own in a self-service manner.
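As an illustrative sketch, a project-scoped repository can be registered declaratively by adding a `project` field to the usual repository Secret (the secret name, repository URL, project name, and credentials below are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: example-private-repo      # placeholder secret name
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://github.com/example/private-repo.git   # placeholder repository URL
  # Scoping the credential to a single project lets developers register it
  # without needing cluster-wide repository permissions.
  project: my-project             # placeholder project name
  username: example-user
  password: example-token
```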
### Config Management Plugins V2
Config Management Plugins V2 is a set of enhancements to the existing config management plugins feature.
The list includes an improved installation experience, the ability to package a plugin into a separate image, and
improved plugin manifest discovery.
### Resource tracking
Argo CD has traditionally tracked the resources it manages by the well-known "app.kubernetes.io/instance" property.
While using this property works well in simple scenarios, it also has several limitations. Argo CD now allows you to use
a new annotation (argocd.argoproj.io/tracking-id) for tracking your resources. Using this annotation is a much more flexible approach,
as there are no conflicts with other Kubernetes tools, and you can easily install multiple Argo CD instances on the same clusters.
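As a sketch of how this looks in practice (resource names are placeholders; the configuration key and annotation format are assumptions based on Argo CD's resource tracking documentation), annotation-based tracking is enabled in `argocd-cm`, after which managed resources carry the tracking annotation:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  # Switch from the app.kubernetes.io/instance label to the tracking annotation.
  application.resourceTrackingMethod: annotation
---
# A ConfigMap managed by an application named "guestbook" would then be stamped with:
apiVersion: v1
kind: ConfigMap
metadata:
  name: guestbook-config          # placeholder resource name
  namespace: default
  annotations:
    argocd.argoproj.io/tracking-id: guestbook:/ConfigMap:default/guestbook-config
```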
### Bug Fixes and Performance Enhancements
* Argo CD API server caches RBAC checks that significantly improves the GET /api/v1/applications API performance (#7587)
* Argo CD RBAC supports regex matches (#7165)
* Health check support for KubeVirt (#7176), Cassandra (#7017), Openshift Route (#7112), DeploymentConfig (#7114), Confluent (#6957) and SparkApplication (#7434) CRDs.
* Persistent banner (#7312) with custom positioning (#7462)
* Cluster name support in project destinations (#7198)
* around 30 more features and a total of 84 bug fixes
## v2.1.7 (2021-12-14)
- fix: issue with keepalive (#7861)
- fix nil pointer dereference error (#7905)
- fix: env vars to tune cluster cache were broken (#7779)
- fix: upgraded gitops engine to v0.4.2 (fixes #7561)
## v2.1.6 (2021-11-16)
- fix: don't use revision caching during app creation (#7508)
- fix: supporting OCI dependencies. Fixes #6062 (#6994)
## v2.1.5 (2021-11-16)
- fix: Invalid memory address or nil pointer dereference in processRequestedAppOperation (#7501)
## v2.1.4 (2021-11-15)
- fix: Operation has completed with phase: Running (#7482)
- fix: Application status panel shows Syncing instead of Deleting (#7486)
- fix(ui): Add Error Boundary around Extensions and comply with new Extensions API (#7215)
## v2.1.3 (2021-10-29)
- fix: core-install.yaml always refers to latest argocd image (#7321)
- fix: handle applicationset backup forbidden error (#7306)
- fix: Argo CD should not use cached git/helm revision during app creation/update validation (#7244)
## v2.1.2 (2021-10-02)
- fix: cluster filter popping out of box (#7135)
- fix: gracefully shutdown metrics server when dex config changes (#7138)
- fix: upgrade gitops engine version to v0.4.1 (#7088)
- fix: repository name already exists when multiple helm dependencies (#7096)
## v2.1.1 (2021-08-25)
### Bug Fixes
- fix: password reset requirements (#7071)
- fix: Custom Styles feature is broken (#7067)
- fix(ui): Add State to props passed to Extensions (#7045)
- fix: keep uid_entrypoint.sh for backward compatibility (#7047)
## v2.1.0 (2021-08-20)
## v2.1.0 (Unreleased)
> [Upgrade instructions](./docs/operator-manual/upgrading/2.0-2.1.md)
@@ -972,7 +744,7 @@ More documentation and tools are coming in patch releases.
Argo CD deletes all **in-flight** hooks if you terminate a running sync operation. The hook state assessment change implemented in this release enables Argo CD to detect
an in-flight state for all Kubernetes resources including `Deployment`, `PVC`, `StatefulSet`, `ReplicaSet`, etc. So if you terminate a sync operation that has, for example, a
`StatefulSet` hook that is `Progressing`, that hook will be deleted. Long-running jobs are not supposed to be used as sync hooks, and you should consider using
[Sync Waves](https://argo-cd.readthedocs.io/en/stable/user-guide/sync-waves/) instead.
[Sync Waves](https://argoproj.github.io/argo-cd/user-guide/sync-waves/) instead.
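For instance, instead of registering a long-running Job as a sync hook, it can be ordered with a sync wave annotation; a minimal sketch (the job name, image, and wave value are placeholders):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migration              # placeholder job name
  annotations:
    # Run this Job in wave 1, after wave 0 resources are synced and healthy,
    # instead of declaring it as a sync hook.
    argocd.argoproj.io/sync-wave: "1"
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: example/migrate:latest   # placeholder image
          command: ["./migrate"]
```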
#### Enhancements
* feat: Add custom health checks for cert-manager v0.11.0 (#2689)


@@ -1,10 +1,10 @@
ARG BASE_IMAGE=docker.io/library/ubuntu:21.10
ARG BASE_IMAGE=docker.io/library/ubuntu:21.04
####################################################################################################
# Builder image
# Initial stage which pulls prepares build dependencies and CLI tooling we need for our final image
# Also used as the image in CI jobs so needs all dependencies
####################################################################################################
FROM docker.io/library/golang:1.17.6 as builder
FROM docker.io/library/golang:1.16.5 as builder
RUN echo 'deb http://deb.debian.org/debian buster-backports main' >> /etc/apt/sources.list
@@ -32,7 +32,6 @@ RUN ./install.sh ksonnet-linux
RUN ./install.sh helm2-linux
RUN ./install.sh helm-linux
RUN ./install.sh kustomize-linux
RUN ./install.sh awscli-linux
####################################################################################################
# Argo CD Base - used as the base for both the release and dev argocd images
@@ -50,21 +49,21 @@ RUN groupadd -g 999 argocd && \
chmod g=u /home/argocd && \
apt-get update && \
apt-get dist-upgrade -y && \
apt-get install -y git git-lfs tini gpg tzdata && \
apt-get install -y git git-lfs python3-pip tini gpg tzdata && \
apt-get clean && \
pip3 install awscli==1.18.80 && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
COPY hack/git-ask-pass.sh /usr/local/bin/git-ask-pass.sh
COPY hack/gpg-wrapper.sh /usr/local/bin/gpg-wrapper.sh
COPY hack/git-verify-wrapper.sh /usr/local/bin/git-verify-wrapper.sh
COPY --from=builder /usr/local/bin/ks /usr/local/bin/ks
COPY --from=builder /usr/local/bin/helm2 /usr/local/bin/helm2
COPY --from=builder /usr/local/bin/helm /usr/local/bin/helm
COPY --from=builder /usr/local/bin/kustomize /usr/local/bin/kustomize
COPY --from=builder /usr/local/aws-cli/v2/current/dist /usr/local/aws-cli/v2/current/dist
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
# keep uid_entrypoint.sh for backward compatibility
RUN ln -s /usr/local/bin/entrypoint.sh /usr/local/bin/uid_entrypoint.sh
RUN ln -s /usr/local/aws-cli/v2/current/dist/aws /usr/local/bin/aws
# support for mounting configuration from a configmap
RUN mkdir -p /app/config/ssh && \
@@ -91,18 +90,18 @@ FROM docker.io/library/node:12.18.4 as argocd-ui
WORKDIR /src
ADD ["ui/package.json", "ui/yarn.lock", "./"]
RUN yarn install --network-timeout 200000
RUN yarn install
ADD ["ui/", "."]
ARG ARGO_VERSION=latest
ENV ARGO_VERSION=$ARGO_VERSION
RUN HOST_ARCH='amd64' NODE_ENV='production' NODE_ONLINE_ENV='online' NODE_OPTIONS=--max_old_space_size=8192 yarn build
RUN NODE_ENV='production' NODE_ONLINE_ENV='online' yarn build
####################################################################################################
# Argo CD Build stage which performs the actual build of Argo CD binaries
####################################################################################################
FROM docker.io/library/golang:1.17.6 as argocd-build
FROM golang:1.16.5 as argocd-build
WORKDIR /go/src/github.com/argoproj/argo-cd
@@ -116,6 +115,12 @@ COPY . .
COPY --from=argocd-ui /src/dist/app /go/src/github.com/argoproj/argo-cd/ui/dist/app
RUN make argocd-all
ARG BUILD_ALL_CLIS=true
RUN if [ "$BUILD_ALL_CLIS" = "true" ] ; then \
make BIN_NAME=argocd-darwin-amd64 GOOS=darwin argocd-all && \
make BIN_NAME=argocd-windows-amd64.exe GOOS=windows argocd-all \
; fi
####################################################################################################
# Final image
####################################################################################################
@@ -125,9 +130,7 @@ COPY --from=argocd-build /go/src/github.com/argoproj/argo-cd/dist/argocd* /usr/l
USER root
RUN ln -s /usr/local/bin/argocd /usr/local/bin/argocd-server
RUN ln -s /usr/local/bin/argocd /usr/local/bin/argocd-repo-server
RUN ln -s /usr/local/bin/argocd /usr/local/bin/argocd-cmp-server
RUN ln -s /usr/local/bin/argocd /usr/local/bin/argocd-application-controller
RUN ln -s /usr/local/bin/argocd /usr/local/bin/argocd-dex
RUN ln -s /usr/local/bin/argocd /usr/local/bin/argocd-notifications
USER 999

View File

@@ -3,11 +3,12 @@
####################################################################################################
FROM argocd-base
COPY argocd /usr/local/bin/
COPY argocd-darwin-amd64 /usr/local/bin/
COPY argocd-windows-amd64.exe /usr/local/bin/
USER root
RUN ln -s /usr/local/bin/argocd /usr/local/bin/argocd-server
RUN ln -s /usr/local/bin/argocd /usr/local/bin/argocd-repo-server
RUN ln -s /usr/local/bin/argocd /usr/local/bin/argocd-application-controller
RUN ln -s /usr/local/bin/argocd /usr/local/bin/argocd-dex
RUN ln -s /usr/local/bin/argocd /usr/local/bin/argocd-notifications
USER 999

View File

@@ -4,8 +4,6 @@ DIST_DIR=${CURRENT_DIR}/dist
CLI_NAME=argocd
BIN_NAME=argocd
GEN_RESOURCES_CLI_NAME=argocd-resources-gen
HOST_OS:=$(shell go env GOOS)
HOST_ARCH:=$(shell go env GOARCH)
@@ -15,7 +13,7 @@ GIT_COMMIT=$(shell git rev-parse HEAD)
GIT_TAG=$(shell if [ -z "`git status --porcelain`" ]; then git describe --exact-match --tags HEAD 2>/dev/null; fi)
GIT_TREE_STATE=$(shell if [ -z "`git status --porcelain`" ]; then echo "clean" ; else echo "dirty"; fi)
VOLUME_MOUNT=$(shell if test "$(go env GOOS)" = "darwin"; then echo ":delegated"; elif test selinuxenabled; then echo ":delegated"; else echo ""; fi)
KUBECTL_VERSION=$(shell go list -m k8s.io/client-go | head -n 1 | rev | cut -d' ' -f1 | rev)
KUBECTL_VERSION=$(shell go list -m all | grep k8s.io/client-go | cut -d ' ' -f5)
GOPATH?=$(shell if test -x `which go`; then go env GOPATH; else echo "$(HOME)/go"; fi)
GOCACHE?=$(HOME)/.cache/go-build
@@ -47,7 +45,7 @@ ARGOCD_E2E_DEX_PORT?=5556
ARGOCD_E2E_YARN_HOST?=localhost
ARGOCD_E2E_DISABLE_AUTH?=
ARGOCD_E2E_TEST_TIMEOUT?=30m
ARGOCD_E2E_TEST_TIMEOUT?=20m
ARGOCD_IN_CI?=false
ARGOCD_TEST_E2E?=true
@@ -179,7 +177,7 @@ gogen: ensure-gopath
go generate ./util/argo/...
.PHONY: protogen
protogen: ensure-gopath mod-vendor-local
protogen: ensure-gopath
export GO111MODULE=off
./hack/generate-proto.sh
@@ -188,16 +186,6 @@ openapigen: ensure-gopath
export GO111MODULE=off
./hack/update-openapi.sh
.PHONY: notification-catalog
notification-catalog:
go run ./hack/gen-catalog catalog
.PHONY: notification-docs
notification-docs:
go run ./hack/gen-docs
go run ./hack/gen-catalog docs
.PHONY: clientgen
clientgen: ensure-gopath
export GO111MODULE=off
@@ -207,9 +195,8 @@ clientgen: ensure-gopath
clidocsgen: ensure-gopath
go run tools/cmd-docs/main.go
.PHONY: codegen-local
codegen-local: ensure-gopath mod-vendor-local notification-docs notification-catalog gogen protogen clientgen openapigen clidocsgen manifests-local
codegen-local: ensure-gopath mod-vendor-local gogen protogen clientgen openapigen clidocsgen manifests-local
rm -rf vendor/
.PHONY: codegen
@@ -224,17 +211,13 @@ cli: test-tools-image
cli-local: clean-debug
CGO_ENABLED=0 go build -v -ldflags '${LDFLAGS}' -o ${DIST_DIR}/${CLI_NAME} ./cmd
.PHONY: gen-resources-cli-local
gen-resources-cli-local: clean-debug
CGO_ENABLED=0 go build -v -ldflags '${LDFLAGS}' -o ${DIST_DIR}/${GEN_RESOURCES_CLI_NAME} ./hack/gen-resources/cmd
.PHONY: release-cli
release-cli: clean-debug build-ui
make BIN_NAME=argocd-darwin-amd64 GOOS=darwin argocd-all
make BIN_NAME=argocd-darwin-arm64 GOOS=darwin GOARCH=arm64 argocd-all
make BIN_NAME=argocd-linux-amd64 GOOS=linux argocd-all
make BIN_NAME=argocd-linux-arm64 GOOS=linux GOARCH=arm64 argocd-all
make BIN_NAME=argocd-windows-amd64.exe GOOS=windows argocd-all
release-cli: clean-debug image
docker create --name tmp-argocd-linux $(IMAGE_PREFIX)argocd:$(IMAGE_TAG)
docker cp tmp-argocd-linux:/usr/local/bin/argocd ${DIST_DIR}/argocd-linux-amd64
docker cp tmp-argocd-linux:/usr/local/bin/argocd-darwin-amd64 ${DIST_DIR}/argocd-darwin-amd64
docker cp tmp-argocd-linux:/usr/local/bin/argocd-windows-amd64.exe ${DIST_DIR}/argocd-windows-amd64.exe
docker rm tmp-argocd-linux
.PHONY: test-tools-image
test-tools-image:
@@ -252,7 +235,7 @@ manifests: test-tools-image
# consolidated binary for cli, util, server, repo-server, controller
.PHONY: argocd-all
argocd-all: clean-debug
CGO_ENABLED=0 GOOS=${GOOS} GOARCH=${GOARCH} go build -v -ldflags '${LDFLAGS}' -o ${DIST_DIR}/${BIN_NAME} ./cmd
CGO_ENABLED=0 go build -v -ldflags '${LDFLAGS}' -o ${DIST_DIR}/${BIN_NAME} ./cmd
.PHONY: server
server: clean-debug
@@ -266,25 +249,23 @@ repo-server:
controller:
CGO_ENABLED=0 go build -v -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-application-controller ./cmd
.PHONY: build-ui
build-ui:
docker build -t argocd-ui --target argocd-ui .
find ./ui/dist -type f -not -name gitkeep -delete
docker run -v ${CURRENT_DIR}/ui/dist/app:/tmp/app --rm -t argocd-ui sh -c 'cp -r ./dist/app/* /tmp/app/'
.PHONY: image
ifeq ($(DEV_IMAGE), true)
# The "dev" image builds the binaries from the user's desktop environment (instead of in Docker),
# which speeds up builds. Dockerfile.dev needs to be copied into dist to perform the build, since
# the dist directory is under .dockerignore. An illustrative invocation follows the recipe below.
IMAGE_TAG="dev-$(shell git describe --always --dirty)"
image: build-ui
image:
docker build -t argocd-base --target argocd-base .
docker build -t argocd-ui --target argocd-ui .
find ./ui/dist -type f -not -name gitkeep -delete
docker run -v ${CURRENT_DIR}/ui/dist/app:/tmp/app --rm -t argocd-ui sh -c 'cp -r ./dist/app/* /tmp/app/'
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -v -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd ./cmd
CGO_ENABLED=0 GOOS=darwin GOARCH=amd64 go build -v -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-darwin-amd64 ./cmd
CGO_ENABLED=0 GOOS=windows GOARCH=amd64 go build -v -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-windows-amd64.exe ./cmd
ln -sfn ${DIST_DIR}/argocd ${DIST_DIR}/argocd-server
ln -sfn ${DIST_DIR}/argocd ${DIST_DIR}/argocd-application-controller
ln -sfn ${DIST_DIR}/argocd ${DIST_DIR}/argocd-repo-server
ln -sfn ${DIST_DIR}/argocd ${DIST_DIR}/argocd-cmp-server
ln -sfn ${DIST_DIR}/argocd ${DIST_DIR}/argocd-dex
cp Dockerfile.dev dist
docker build -t $(IMAGE_PREFIX)argocd:$(IMAGE_TAG) -f dist/Dockerfile.dev dist
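# Illustrative invocation (an editorial example, not part of this diff): with a local Go
# toolchain available, the dev image can be built as follows; "example/" is a placeholder
# registry prefix, and the tag defaults to "dev-<git describe>" as set above.
#   make image DEV_IMAGE=true IMAGE_PREFIX=example/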
@@ -295,8 +276,10 @@ endif
@if [ "$(DOCKER_PUSH)" = "true" ] ; then docker push $(IMAGE_PREFIX)argocd:$(IMAGE_TAG) ; fi
.PHONY: armimage
# The "BUILD_ALL_CLIS" argument is to skip building the CLIs for darwin and windows
# which would take a really long time.
armimage:
docker build -t $(IMAGE_PREFIX)argocd:$(IMAGE_TAG)-arm .
docker build -t $(IMAGE_PREFIX)argocd:$(IMAGE_TAG)-arm . --build-arg BUILD_ALL_CLIS="false"
.PHONY: builder-image
builder-image:
@@ -426,14 +409,12 @@ start-e2e-local:
if test -d /tmp/argo-e2e/app/config/gpg; then rm -rf /tmp/argo-e2e/app/config/gpg/*; fi
mkdir -p /tmp/argo-e2e/app/config/gpg/keys && chmod 0700 /tmp/argo-e2e/app/config/gpg/keys
mkdir -p /tmp/argo-e2e/app/config/gpg/source && chmod 0700 /tmp/argo-e2e/app/config/gpg/source
mkdir -p /tmp/argo-e2e/app/config/plugin && chmod 0700 /tmp/argo-e2e/app/config/plugin
# set paths for locally managed ssh known hosts and tls certs data
ARGOCD_SSH_DATA_PATH=/tmp/argo-e2e/app/config/ssh \
ARGOCD_TLS_DATA_PATH=/tmp/argo-e2e/app/config/tls \
ARGOCD_GPG_DATA_PATH=/tmp/argo-e2e/app/config/gpg/source \
ARGOCD_GNUPGHOME=/tmp/argo-e2e/app/config/gpg/keys \
ARGOCD_GPG_ENABLED=$(ARGOCD_GPG_ENABLED) \
ARGOCD_PLUGINCONFIGFILEPATH=/tmp/argo-e2e/app/config/plugin \
ARGOCD_E2E_DISABLE_AUTH=false \
ARGOCD_ZJWT_FEATURE_FLAG=always \
ARGOCD_IN_CI=$(ARGOCD_IN_CI) \
@@ -470,12 +451,6 @@ start-local: mod-vendor-local dep-ui-local
ARGOCD_E2E_TEST=false \
goreman -f $(ARGOCD_PROCFILE) start ${ARGOCD_START}
# Run goreman start with the exclude option; provide the exclude env variable with a list of services
.PHONY: run
run:
bash ./hack/goreman-start.sh
# Runs pre-commit validation with the virtualized toolchain
.PHONY: pre-commit
pre-commit: codegen build lint test
@@ -509,6 +484,10 @@ serve-docs-local:
serve-docs:
docker run ${MKDOCS_RUN_ARGS} --rm -it -p 8000:8000 -v ${CURRENT_DIR}:/docs ${MKDOCS_DOCKER_IMAGE} serve -a 0.0.0.0:8000
.PHONY: lint-docs
lint-docs:
# https://github.com/dkhamsing/awesome_bot
find docs -name '*.md' -exec grep -l http {} + | xargs docker run --rm -v $(PWD):/mnt:ro dkhamsing/awesome_bot -t 3 --allow-dupe --allow-redirect --white-list `cat white-list | grep -v "#" | tr "\n" ','` --skip-save-results --
# Verify that kubectl can connect to your K8s cluster from Docker
.PHONY: verify-kube-connect
@@ -554,7 +533,3 @@ dep-ui-local:
start-test-k8s:
go run ./hack/k8s
.PHONY: list
list:
@LC_ALL=C $(MAKE) -pRrq -f $(lastword $(MAKEFILE_LIST)) : 2>/dev/null | awk -v RS= -F: '/^# File/,/^# Finished Make data base/ {if ($$1 !~ "^[#.]") {print $$1}}' | sort | egrep -v -e '^[^[:alnum:]]' -e '^$@$$'

OWNERS
View File

@@ -9,7 +9,6 @@ approvers:
- jessesuen
- jgwest
- mayzhang2000
- rbreeze
reviewers:
- dthomson25
@@ -19,9 +18,3 @@ reviewers:
- reginapizza
- hblixt
- chetan-rns
- wanghong230
- pasha-codefresh
- ciiay
- leoluz
- crenshaw-dev
- saumeya

View File

@@ -1,8 +1,8 @@
controller: [ "$BIN_MODE" == 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-application-controller $COMMAND --loglevel debug --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081}"
api-server: [ "$BIN_MODE" == 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-server $COMMAND --loglevel debug --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --disable-auth=${ARGOCD_E2E_DISABLE_AUTH:-'true'} --insecure --dex-server http://localhost:${ARGOCD_E2E_DEX_PORT:-5556} --repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081} --port ${ARGOCD_E2E_APISERVER_PORT:-8080} "
dex: sh -c "ARGOCD_BINARY_NAME=argocd-dex go run github.com/argoproj/argo-cd/v2/cmd gendexcfg -o `pwd`/dist/dex.yaml && docker run --rm -p ${ARGOCD_E2E_DEX_PORT:-5556}:${ARGOCD_E2E_DEX_PORT:-5556} -v `pwd`/dist/dex.yaml:/dex.yaml ghcr.io/dexidp/dex:v2.30.2 dex serve /dex.yaml"
redis: bash -c "if [ \"$ARGOCD_REDIS_LOCAL\" == 'true' ]; then redis-server --save '' --appendonly no --port ${ARGOCD_E2E_REDIS_PORT:-6379}; else docker run --rm --name argocd-redis -i -p ${ARGOCD_E2E_REDIS_PORT:-6379}:${ARGOCD_E2E_REDIS_PORT:-6379} redis:6.2.6-alpine --save '' --appendonly no --port ${ARGOCD_E2E_REDIS_PORT:-6379}; fi"
repo-server: [ "$BIN_MODE" == 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_GNUPGHOME=${ARGOCD_GNUPGHOME:-/tmp/argocd-local/gpg/keys} ARGOCD_PLUGINSOCKFILEPATH=${ARGOCD_PLUGINSOCKFILEPATH:-/tmp/argo-e2e/app/config/plugin} ARGOCD_GPG_DATA_PATH=${ARGOCD_GPG_DATA_PATH:-/tmp/argocd-local/gpg/source} ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-repo-server ARGOCD_GPG_ENABLED=${ARGOCD_GPG_ENABLED:-false} $COMMAND --loglevel debug --port ${ARGOCD_E2E_REPOSERVER_PORT:-8081} --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379}"
controller: sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-application-controller go run ./cmd/main.go --loglevel debug --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081}"
api-server: sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-server go run ./cmd/main.go --loglevel debug --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --disable-auth=${ARGOCD_E2E_DISABLE_AUTH:-'true'} --insecure --dex-server http://localhost:${ARGOCD_E2E_DEX_PORT:-5556} --repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081} --port ${ARGOCD_E2E_APISERVER_PORT:-8080} "
dex: sh -c "ARGOCD_BINARY_NAME=argocd-dex go run github.com/argoproj/argo-cd/v2/cmd gendexcfg -o `pwd`/dist/dex.yaml && docker run --rm -p ${ARGOCD_E2E_DEX_PORT:-5556}:${ARGOCD_E2E_DEX_PORT:-5556} -v `pwd`/dist/dex.yaml:/dex.yaml ghcr.io/dexidp/dex:v2.27.0 serve /dex.yaml"
redis: bash -c "if [ $ARGOCD_REDIS_LOCAL == 'true' ]; then redis-server --save '' --appendonly no --port ${ARGOCD_E2E_REDIS_PORT:-6379}; else docker run --rm --name argocd-redis -i -p ${ARGOCD_E2E_REDIS_PORT:-6379}:${ARGOCD_E2E_REDIS_PORT:-6379} redis:6.2.4-alpine --save '' --appendonly no --port ${ARGOCD_E2E_REDIS_PORT:-6379}; fi"
repo-server: sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_GNUPGHOME=${ARGOCD_GNUPGHOME:-/tmp/argocd-local/gpg/keys} ARGOCD_GPG_DATA_PATH=${ARGOCD_GPG_DATA_PATH:-/tmp/argocd-local/gpg/source} ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-repo-server go run ./cmd/main.go --loglevel debug --port ${ARGOCD_E2E_REPOSERVER_PORT:-8081} --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379}"
ui: sh -c 'cd ui && ${ARGOCD_E2E_YARN_CMD:-yarn} start'
git-server: test/fixture/testrepos/start-git.sh
helm-registry: test/fixture/testrepos/start-helm-registry.sh

View File

@@ -45,9 +45,6 @@ Participation in the Argo CD project is governed by the [CNCF Code of Conduct](h
### Blogs and Presentations
1. [Awesome-Argo: A Curated List of Awesome Projects and Resources Related to Argo](https://github.com/terrytangyuan/awesome-argo)
1. [Unveil the Secret Ingredients of Continuous Delivery at Enterprise Scale with Argo CD](https://blog.akuity.io/unveil-the-secret-ingredients-of-continuous-delivery-at-enterprise-scale-with-argo-cd-7c5b4057ee49)
1. [GitOps Without Pipelines With ArgoCD Image Updater](https://youtu.be/avPUQin9kzU)
1. [Combining Argo CD (GitOps), Crossplane (Control Plane), And KubeVela (OAM)](https://youtu.be/eEcgn_gU3SM)
1. [How to Apply GitOps to Everything - Combining Argo CD and Crossplane](https://youtu.be/yrj4lmScKHQ)
1. [Couchbase - How To Run a Database Cluster in Kubernetes Using Argo CD](https://youtu.be/nkPoPaVzExY)
@@ -72,4 +69,3 @@ Participation in the Argo CD project is governed by the [CNCF Code of Conduct](h
1. [Applied GitOps with Argo CD](https://thenewstack.io/applied-gitops-with-argocd/)
1. [Solving configuration drift using GitOps with Argo CD](https://www.cncf.io/blog/2020/12/17/solving-configuration-drift-using-gitops-with-argo-cd/)
1. [Decentralized GitOps over environments](https://blogs.sap.com/2021/05/06/decentralized-gitops-over-environments/)
1. [How GitOps and Operators mark the rise of Infrastructure-As-Software](https://paytmlabs.com/blog/2021/10/how-to-improve-operational-work-with-operators-and-gitops/)

View File

@@ -1,6 +1,6 @@
# Security Policy for Argo CD
Version: **v1.4 (2022-01-23)**
Version: **v1.1 (2020-06-29)**
## Preface
@@ -26,12 +26,8 @@ are well aware of the issues that may affect Argo CD and are constantly
working on the remediation of those that affect Argo CD and our users.
If you believe that we might have missed an issue that we should take a look
at (that can happen), then please discuss it with us. If there is a CVE
assigned to the issue, please do open an issue on our GitHub tracker instead
of writing to the security contact e-mail, since things reported by scanners
are public already and the discussion that might emerge is of benefit to the
general community. However, please at least roughly validate your scanner results and
their impact on Argo CD before opening an issue.
at (that can happen), then please discuss it with us. But please, do at least roughly
validate that assumption beforehand.
## Supported Versions
@@ -60,17 +56,13 @@ We will do our best to react quickly on your inquiry, and to coordinate a fix
and disclosure with you. Sometimes, it might take a little longer for us to
react (e.g. out of office conditions), so please bear with us in these cases.
We will publish security advisories using the
[GitHub Security Advisories](https://github.com/argoproj/argo-cd/security/advisories)
feature to keep our community well informed, and will credit you for your
findings (unless you prefer to stay anonymous, of course).
We will publish security advisories using the GitHub Security Advisories feature to keep our
community well informed, and will credit you for your findings (unless you
prefer to stay anonymous, of course).
Please report vulnerabilities by e-mail to the following address:
Please report vulnerabilities by e-mail to all of the following people:
* cncf-argo-security@lists.cncf.io
## Securing your Argo CD Instance
See the [operator manual security page](docs/operator-manual/security.md) for
additional information about Argo CD's security features and how to make your
Argo CD production ready.
* jfischer@redhat.com
* Jesse_Suen@intuit.com
* Alexander_Matyushentsev@intuit.com
* Edward_Lee@intuit.com

View File

@@ -9,9 +9,7 @@ Currently, the following organizations are **officially** using Argo CD:
1. [7shifts](https://www.7shifts.com/)
1. [Adevinta](https://www.adevinta.com/)
1. [Adventure](https://jp.adventurekk.com/)
1. [Akuity](https://akuity.io/)
1. [Alibaba Group](https://www.alibabagroup.com/)
1. [Allianz Direct](https://www.allianzdirect.de/)
1. [Ambassador Labs](https://www.getambassador.io/)
1. [Ant Group](https://www.antgroup.com/)
1. [ANSTO - Australian Synchrotron](https://www.synchrotron.org.au/)
@@ -24,24 +22,20 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Beat](https://thebeat.co/en/)
1. [Beez Innovation Labs](https://www.beezlabs.com/)
1. [BioBox Analytics](https://biobox.io)
1. [BigPanda](https://bigpanda.io)
1. [BMW Group](https://www.bmwgroup.com/)
1. [Camptocamp](https://camptocamp.com)
1. [Capital One](https://www.capitalone.com)
1. [CARFAX](https://www.carfax.com)
1. [Celonis](https://www.celonis.com/)
1. [Chime](https://www.chime.com)
1. [Cisco ET&I](https://eti.cisco.com/)
1. [Codefresh](https://www.codefresh.io/)
1. [Codility](https://www.codility.com/)
1. [Commonbond](https://commonbond.co/)
1. [Crédit Agricole CIB](https://www.ca-cib.com)
1. [Crédit Agricole](https://www.ca-cib.com)
1. [CROZ d.o.o.](https://croz.net/)
1. [CyberAgent](https://www.cyberagent.co.jp/en/)
1. [Cybozu](https://cybozu-global.com)
1. [Chargetrip](https://chargetrip.com)
1. [D2iQ](https://www.d2iq.com)
1. [Deloitte](https://www.deloitte.com/)
1. [Devtron Labs](https://github.com/devtron-labs/devtron)
1. [EDF Renewables](https://www.edf-re.com/)
1. [edX](https://edx.org)
@@ -50,34 +44,26 @@ Currently, the following organizations are **officially** using Argo CD:
1. [END.](https://www.endclothing.com/)
1. [Energisme](https://energisme.com/)
1. [Fave](https://myfave.com)
1. [Flip](https://flip.id)
1. [Fonoa](https://www.fonoa.com/)
1. [Future PLC](https://www.futureplc.com/)
1. [Garner](https://www.garnercorp.com)
1. [G DATA CyberDefense AG](https://www.gdata-software.com/)
1. [Generali Deutschland AG](https://www.generali.de/)
1. [Gitpod](https://www.gitpod.io)
1. [Glovo](https://www.glovoapp.com)
1. [Gllue](https://gllue.com)
1. [GMETRI](https://gmetri.com/)
1. [Gojek](https://www.gojek.io/)
1. [Greenpass](https://www.greenpass.com.br/)
1. [Handelsbanken](https://www.handelsbanken.se)
1. [Healy](https://www.healyworld.net)
1. [Helio](https://helio.exchange)
1. [hipages](https://hipages.com.au/)
1. [Hiya](https://hiya.com)
1. [Honestbank](https://honestbank.com)
1. [IBM](https://www.ibm.com/)
1. [Ibotta](https://home.ibotta.com)
1. [IITS-Consulting](https://iits-consulting.de)
1. [Index Exchange](https://www.indexexchange.com/)
1. [InsideBoard](https://www.insideboard.com)
1. [Intuit](https://www.intuit.com/)
1. [Joblift](https://joblift.com/)
1. [JovianX](https://www.jovianx.com/)
1. [Karrot](https://www.daangn.com/)
1. [KarrotPay](https://www.daangnpay.com/)
1. [Kasa](https://kasa.co.kr/)
1. [Keptn](https://keptn.sh)
1. [Kinguin](https://www.kinguin.net/)
@@ -88,11 +74,9 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Lytt](https://www.lytt.co/)
1. [Major League Baseball](https://mlb.com)
1. [Mambu](https://www.mambu.com/)
1. [Mattermost](https://www.mattermost.com)
1. [Max Kelsen](https://www.maxkelsen.com/)
1. [MindSpore](https://mindspore.cn)
1. [Mirantis](https://mirantis.com/)
1. [mixi Group](https://mixi.co.jp/)
1. [Moengage](https://www.moengage.com/)
1. [Money Forward](https://corp.moneyforward.com/en/)
1. [MOO Print](https://www.moo.com/)
@@ -101,9 +85,7 @@ Currently, the following organizations are **officially** using Argo CD:
1. [New Relic](https://newrelic.com/)
1. [Nextdoor](https://nextdoor.com/)
1. [Nikkei](https://www.nikkei.co.jp/nikkeiinfo/en/)
1. [Nitro](https://gonitro.com)
1. [Octadesk](https://octadesk.com)
1. [omegaUp](https://omegaUp.com)
1. [openEuler](https://openeuler.org)
1. [openGauss](https://opengauss.org/)
1. [openLooKeng](https://openlookeng.io)
@@ -111,7 +93,6 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Opensurvey](https://www.opensurvey.co.kr/)
1. [Optoro](https://www.optoro.com/)
1. [Orbital Insight](https://orbitalinsight.com/)
1. [Packlink](https://www.packlink.com/)
1. [PayPay](https://paypay.ne.jp/)
1. [Peloton Interactive](https://www.onepeloton.com/)
1. [Pipefy](https://www.pipefy.com/)
@@ -129,13 +110,9 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Saildrone](https://www.saildrone.com/)
1. [Saloodo! GmbH](https://www.saloodo.com)
1. [Schwarz IT](https://jobs.schwarz/it-mission)
1. [Skit](https://skit.ai/)
1. [Snyk](https://snyk.io/)
1. [Speee](https://speee.jp/)
1. [Spendesk](https://spendesk.com/)
1. [Sumo Logic](https://sumologic.com/)
1. [Sutpc](http://www.sutpc.com/)
1. [Swiss Post](https://github.com/swisspost)
1. [Swisscom](https://www.swisscom.ch)
1. [Swissquote](https://github.com/swissquote)
1. [Syncier](https://syncier.com/)
@@ -145,12 +122,10 @@ Currently, the following organizations are **officially** using Argo CD:
1. [ThousandEyes](https://www.thousandeyes.com/)
1. [Ticketmaster](https://ticketmaster.com)
1. [Tiger Analytics](https://www.tigeranalytics.com/)
1. [Tigera](https://www.tigera.io/)
1. [Toss](https://toss.im/en)
1. [tru.ID](https://tru.id)
1. [Twilio SendGrid](https://sendgrid.com)
1. [tZERO](https://www.tzero.com/)
1. [ungleich.ch](https://ungleich.ch/)
1. [UBIO](https://ub.io/)
1. [UFirstGroup](https://www.ufirstgroup.com/en/)
1. [Universidad Mesoamericana](https://www.umes.edu.gt/)
@@ -160,7 +135,6 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Volvo Cars](https://www.volvocars.com/)
1. [VSHN - The DevOps Company](https://vshn.ch/)
1. [Walkbase](https://www.walkbase.com/)
1. [Wehkamp](https://www.wehkamp.nl/)
1. [WeMo Scooter](https://www.wemoscooter.com/)
1. [Webstores](https://www.webstores.nl)
1. [Whitehat Berlin](https://whitehat.berlin) by Guido Maria Serra +Fenaroli
@@ -180,18 +154,4 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Beleza Na Web](https://www.belezanaweb.com.br/)
1. [MariaDB](https://mariadb.com)
1. [Lightricks](https://www.lightricks.com/)
1. [RightRev](https://rightrev.com/)
1. [MeDirect](https://medirect.com.mt/)
1. [Snapp](https://snapp.ir/)
1. [Technacy](https://www.technacy.it/)
1. [freee](https://corp.freee.co.jp/en/company/)
1. [Youverify](https://youverify.co/)
1. [Keeeb](https://www.keeeb.com/)
1. [p3r](https://www.p3r.one/)
1. [Faro](https://www.faro.com/)
1. [Rise](https://www.risecard.eu/)
1. [Devopsi - Poland Software/DevOps Consulting](https://devopsi.pl/)
1. [Skyscanner](https://www.skyscanner.net/)
1. [Casavo](https://casavo.com)
1. [Majid Al Futtaim](https://www.majidalfuttaim.com/)
1. [ZOZO](https://corp.zozo.com/)
1. [Snapp](https://snapp.ir/)

View File

@@ -1 +1 @@
2.3.2
2.1.16

View File

@@ -11,4 +11,4 @@ g = _, _
e = some(where (p.eft == allow)) && !some(where (p.eft == deny))
[matchers]
m = g(r.sub, p.sub) && globOrRegexMatch(r.res, p.res) && globOrRegexMatch(r.act, p.act) && globOrRegexMatch(r.obj, p.obj)
m = g(r.sub, p.sub) && globMatch(r.res, p.res) && globMatch(r.act, p.act) && globMatch(r.obj, p.obj)

View File

@@ -256,7 +256,7 @@
},
{
"type": "string",
"description": "the selector to restrict returned list to applications only with matched labels.",
"description": "the selector to to restrict returned list to applications only with matched labels.",
"name": "selector",
"in": "query"
},
@@ -520,7 +520,7 @@
},
{
"type": "string",
"description": "the selector to restrict returned list to applications only with matched labels.",
"description": "the selector to to restrict returned list to applications only with matched labels.",
"name": "selector",
"in": "query"
},
@@ -753,11 +753,6 @@
"type": "string",
"name": "resourceName",
"in": "query"
},
{
"type": "boolean",
"name": "previous",
"in": "query"
}
],
"responses": {
@@ -937,11 +932,6 @@
"type": "string",
"name": "resourceName",
"in": "query"
},
{
"type": "boolean",
"name": "previous",
"in": "query"
}
],
"responses": {
@@ -1605,18 +1595,6 @@
"type": "string",
"name": "name",
"in": "query"
},
{
"type": "string",
"description": "type is the type of the specified cluster identifier ( \"server\" - default, \"name\" ).",
"name": "id.type",
"in": "query"
},
{
"type": "string",
"description": "value holds the cluster server URL or cluster name.",
"name": "id.value",
"in": "query"
}
],
"responses": {
@@ -1671,53 +1649,7 @@
}
}
},
"/api/v1/clusters/{id.value}": {
"get": {
"tags": [
"ClusterService"
],
"summary": "Get returns a cluster by server address",
"operationId": "ClusterService_Get",
"parameters": [
{
"type": "string",
"description": "value holds the cluster server URL or cluster name",
"name": "id.value",
"in": "path",
"required": true
},
{
"type": "string",
"name": "server",
"in": "query"
},
{
"type": "string",
"name": "name",
"in": "query"
},
{
"type": "string",
"description": "type is the type of the specified cluster identifier ( \"server\" - default, \"name\" ).",
"name": "id.type",
"in": "query"
}
],
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1alpha1Cluster"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/runtimeError"
}
}
}
},
"/api/v1/clusters/{cluster.server}": {
"put": {
"tags": [
"ClusterService"
@@ -1727,8 +1659,8 @@
"parameters": [
{
"type": "string",
"description": "value holds the cluster server URL or cluster name",
"name": "id.value",
"description": "Server is the API server URL of the Kubernetes cluster",
"name": "cluster.server",
"in": "path",
"required": true
},
@@ -1748,11 +1680,41 @@
"collectionFormat": "multi",
"name": "updatedFields",
"in": "query"
}
],
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1alpha1Cluster"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/runtimeError"
}
}
}
}
},
"/api/v1/clusters/{server}": {
"get": {
"tags": [
"ClusterService"
],
"summary": "Get returns a cluster by server address",
"operationId": "ClusterService_Get",
"parameters": [
{
"type": "string",
"name": "server",
"in": "path",
"required": true
},
{
"type": "string",
"description": "type is the type of the specified cluster identifier ( \"server\" - default, \"name\" ).",
"name": "id.type",
"name": "name",
"in": "query"
}
],
@@ -1780,26 +1742,14 @@
"parameters": [
{
"type": "string",
"description": "value holds the cluster server URL or cluster name",
"name": "id.value",
"name": "server",
"in": "path",
"required": true
},
{
"type": "string",
"name": "server",
"in": "query"
},
{
"type": "string",
"name": "name",
"in": "query"
},
{
"type": "string",
"description": "type is the type of the specified cluster identifier ( \"server\" - default, \"name\" ).",
"name": "id.type",
"in": "query"
}
],
"responses": {
@@ -1818,7 +1768,7 @@
}
}
},
"/api/v1/clusters/{id.value}/invalidate-cache": {
"/api/v1/clusters/{server}/invalidate-cache": {
"post": {
"tags": [
"ClusterService"
@@ -1828,8 +1778,7 @@
"parameters": [
{
"type": "string",
"description": "value holds the cluster server URL or cluster name",
"name": "id.value",
"name": "server",
"in": "path",
"required": true
}
@@ -1850,7 +1799,7 @@
}
}
},
"/api/v1/clusters/{id.value}/rotate-auth": {
"/api/v1/clusters/{server}/rotate-auth": {
"post": {
"tags": [
"ClusterService"
@@ -1860,8 +1809,7 @@
"parameters": [
{
"type": "string",
"description": "value holds the cluster server URL or cluster name",
"name": "id.value",
"name": "server",
"in": "path",
"required": true
}
@@ -2133,37 +2081,6 @@
}
}
},
"/api/v1/projects/{name}/detailed": {
"get": {
"tags": [
"ProjectService"
],
"summary": "GetDetailedProject returns a project that include project, global project and scoped resources by name",
"operationId": "ProjectService_GetDetailedProject",
"parameters": [
{
"type": "string",
"name": "name",
"in": "path",
"required": true
}
],
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/projectDetailedProjectsResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/runtimeError"
}
}
}
}
},
"/api/v1/projects/{name}/events": {
"get": {
"tags": [
@@ -2956,12 +2873,6 @@
"description": "HTTP/HTTPS proxy to access the repository.",
"name": "proxy",
"in": "query"
},
{
"type": "string",
"description": "Reference between project and repository that allow you automatically to be added as item inside SourceRepos project entity.",
"name": "project",
"in": "query"
}
],
"responses": {
@@ -3158,7 +3069,7 @@
},
{
"type": "string",
"description": "the selector to restrict returned list to applications only with matched labels.",
"description": "the selector to to restrict returned list to applications only with matched labels.",
"name": "selector",
"in": "query"
},
@@ -3582,20 +3493,6 @@
}
}
},
"clusterClusterID": {
"type": "object",
"title": "ClusterID holds a cluster server URL or cluster name",
"properties": {
"type": {
"type": "string",
"title": "type is the type of the specified cluster identifier ( \"server\" - default, \"name\" )"
},
"value": {
"type": "string",
"title": "value holds the cluster server URL or cluster name"
}
}
},
"clusterClusterResponse": {
"type": "object"
},
@@ -3636,13 +3533,6 @@
"type": "object",
"title": "Help settings",
"properties": {
"binaryUrls": {
"type": "object",
"title": "the URLs for downloading argocd binaries",
"additionalProperties": {
"type": "string"
}
},
"chatText": {
"type": "string",
"title": "the text for getting chat help, defaults to \"Chat now!\""
@@ -3743,18 +3633,9 @@
"statusBadgeEnabled": {
"type": "boolean"
},
"trackingMethod": {
"type": "string"
},
"uiBannerContent": {
"type": "string"
},
"uiBannerPermanent": {
"type": "boolean"
},
"uiBannerPosition": {
"type": "string"
},
"uiBannerURL": {
"type": "string"
},
@@ -3806,32 +3687,6 @@
}
}
},
"projectDetailedProjectsResponse": {
"type": "object",
"properties": {
"clusters": {
"type": "array",
"items": {
"$ref": "#/definitions/v1alpha1Cluster"
}
},
"globalProjects": {
"type": "array",
"items": {
"$ref": "#/definitions/v1alpha1AppProject"
}
},
"project": {
"$ref": "#/definitions/v1alpha1AppProject"
},
"repositories": {
"type": "array",
"items": {
"$ref": "#/definitions/v1alpha1Repository"
}
}
}
},
"projectEmptyResponse": {
"type": "object"
},
@@ -4444,10 +4299,6 @@
"description": "Operation is the type of operation which lead to this ManagedFieldsEntry being created.\nThe only valid values for this field are 'Apply' and 'Update'.",
"type": "string"
},
"subresource": {
"description": "Subresource is the name of the subresource used to update that object, or\nempty string if the object was updated through the main resource. The\nvalue of this field is used to distinguish between managers, even if they\nshare the same name. For example, a status update will be distinct from a\nregular update using the same manager name.\nNote that the APIVersion field is not related to the Subresource field and\nit always corresponds to the version of the main resource.",
"type": "string"
},
"time": {
"$ref": "#/definitions/v1Time"
}
@@ -4602,7 +4453,7 @@
},
"v1ObjectReference": {
"type": "object",
"title": "ObjectReference contains enough information to let you inspect or modify the referred object.\n---\nNew uses of this type are discouraged because of difficulty describing its usage when embedded in APIs.\n 1. Ignored fields. It includes many fields which are not generally honored. For instance, ResourceVersion and FieldPath are both very rarely valid in actual usage.\n 2. Invalid usage help. It is impossible to add specific help for individual usage. In most embedded usages, there are particular\n restrictions like, \"must refer only to types A and B\" or \"UID not honored\" or \"name must be restricted\".\n Those cannot be well described when embedded.\n 3. Inconsistent validation. Because the usages are different, the validation rules are different by usage, which makes it hard for users to predict what will happen.\n 4. The fields are both imprecise and overly precise. Kind is not a precise mapping to a URL. This can produce ambiguity\n during interpretation and require a REST mapping. In most cases, the dependency is on the group,resource tuple\n and the version of the actual struct is irrelevant.\n 5. We cannot easily change it. Because this type is embedded in many locations, updates to this type\n will affect numerous schemas. Don't make new APIs embed an underspecified API type they do not control.\nInstead of using this type, create a locally provided and used type that is well-focused on your reference.\nFor example, ServiceReferences for admission registration: https://github.com/kubernetes/api/blob/release-1.17/admissionregistration/v1/types.go#L533 .\n+k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object\n+structType=atomic",
"title": "ObjectReference contains enough information to let you inspect or modify the referred object.\n---\nNew uses of this type are discouraged because of difficulty describing its usage when embedded in APIs.\n 1. Ignored fields. It includes many fields which are not generally honored. For instance, ResourceVersion and FieldPath are both very rarely valid in actual usage.\n 2. Invalid usage help. It is impossible to add specific help for individual usage. In most embedded usages, there are particular\n restrictions like, \"must refer only to types A and B\" or \"UID not honored\" or \"name must be restricted\".\n Those cannot be well described when embedded.\n 3. Inconsistent validation. Because the usages are different, the validation rules are different by usage, which makes it hard for users to predict what will happen.\n 4. The fields are both imprecise and overly precise. Kind is not a precise mapping to a URL. This can produce ambiguity\n during interpretation and require a REST mapping. In most cases, the dependency is on the group,resource tuple\n and the version of the actual struct is irrelevant.\n 5. We cannot easily change it. Because this type is embedded in many locations, updates to this type\n will affect numerous schemas. Don't make new APIs embed an underspecified API type they do not control.\nInstead of using this type, create a locally provided and used type that is well-focused on your reference.\nFor example, ServiceReferences for admission registration: https://github.com/kubernetes/api/blob/release-1.17/admissionregistration/v1/types.go#L533 .\n+k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object",
"properties": {
"apiVersion": {
"type": "string",
@@ -4635,8 +4486,8 @@
}
},
"v1OwnerReference": {
"description": "OwnerReference contains enough information to let you identify an owning\nobject. An owning object must be in the same namespace as the dependent, or\nbe cluster-scoped, so there is no namespace field.",
"type": "object",
"title": "OwnerReference contains enough information to let you identify an owning\nobject. An owning object must be in the same namespace as the dependent, or\nbe cluster-scoped, so there is no namespace field.\n+structType=atomic",
"properties": {
"apiVersion": {
"description": "API version of the referent.",
@@ -4968,10 +4819,6 @@
"$ref": "#/definitions/v1alpha1HelmFileParameter"
}
},
"ignoreMissingValueFiles": {
"type": "boolean",
"title": "IgnoreMissingValueFiles prevents helm template from failing when valueFiles do not exist locally by not appending them to helm template --values"
},
"parameters": {
"type": "array",
"title": "Parameters is a list of Helm parameters which are passed to the helm template command upon manifest generation",
@@ -4979,18 +4826,10 @@
"$ref": "#/definitions/v1alpha1HelmParameter"
}
},
"passCredentials": {
"type": "boolean",
"title": "PassCredentials pass credentials to all domains (Helm's --pass-credentials)"
},
"releaseName": {
"type": "string",
"title": "ReleaseName is the Helm release name to use. If omitted it will use the application name"
},
"skipCrds": {
"type": "boolean",
"title": "SkipCrds skips custom resource definition installation step (Helm's --skip-crds)"
},
"valueFiles": {
"type": "array",
"title": "ValuesFiles is a list of Helm value files to use when generating a template",
@@ -5283,13 +5122,6 @@
"type": "object",
"title": "Cluster is the definition of a cluster resource",
"properties": {
"annotations": {
"type": "object",
"title": "Annotations for cluster secret metadata",
"additionalProperties": {
"type": "string"
}
},
"clusterResources": {
"description": "Indicates if cluster level resources should be managed. This setting is used only if cluster is connected in a namespaced mode.",
"type": "boolean"
@@ -5303,13 +5135,6 @@
"info": {
"$ref": "#/definitions/v1alpha1ClusterInfo"
},
"labels": {
"type": "object",
"title": "Labels for cluster secret metadata",
"additionalProperties": {
"type": "string"
}
},
"name": {
"type": "string",
"title": "Name of the cluster. If omitted, will use the server address"
@@ -5321,10 +5146,6 @@
"type": "string"
}
},
"project": {
"type": "string",
"title": "Reference between project and cluster that allow you automatically to be added as item inside Destinations project entity"
},
"refreshRequestedAt": {
"$ref": "#/definitions/v1Time"
},
@@ -5471,9 +5292,6 @@
"init": {
"$ref": "#/definitions/v1alpha1Command"
},
"lockRepo": {
"type": "boolean"
},
"name": {
"type": "string"
}
@@ -5869,25 +5687,16 @@
},
"v1alpha1OverrideIgnoreDiff": {
"type": "object",
"title": "OverrideIgnoreDiff contains configurations about how fields should be ignored during diffs between\nthe desired state and live state",
"title": "TODO: describe this type",
"properties": {
"jSONPointers": {
"type": "array",
"title": "JSONPointers is a JSON path list following the format defined in RFC4627 (https://datatracker.ietf.org/doc/html/rfc6902#section-3)",
"items": {
"type": "string"
}
},
"jqPathExpressions": {
"type": "array",
"title": "JQPathExpressions is a JQ path list that will be evaludated during the diff process",
"items": {
"type": "string"
}
},
"managedFieldsManagers": {
"type": "array",
"title": "ManagedFieldsManagers is a list of trusted managers. Fields mutated by those managers will take precedence over the\ndesired state defined in the SCM and won't be displayed in diffs",
"items": {
"type": "string"
}
@@ -6053,10 +5862,6 @@
"type": "string",
"title": "Password contains the password or PAT used for authenticating at the remote repository"
},
"project": {
"type": "string",
"title": "Reference between project and repository that allow you automatically to be added as item inside SourceRepos project entity"
},
"proxy": {
"type": "string",
"title": "Proxy specifies the HTTP/HTTPS proxy used to access the repo"
@@ -6250,13 +6055,6 @@
"kind": {
"type": "string"
},
"managedFieldsManagers": {
"type": "array",
"title": "ManagedFieldsManagers is a list of trusted managers. Fields mutated by those managers will take precedence over the\ndesired state defined in the SCM and won't be displayed in diffs",
"items": {
"type": "string"
}
},
"name": {
"type": "string"
},
@@ -6752,10 +6550,6 @@
"schedule": {
"type": "string",
"title": "Schedule is the time the window will begin, specified in cron format"
},
"timeZone": {
"type": "string",
"title": "TimeZone of the sync that will be applied to the schedule"
}
}
},

View File

@@ -49,7 +49,6 @@ func NewCommand() *cobra.Command {
glogLevel int
metricsPort int
metricsCacheExpiration time.Duration
metricsAplicationLabels []string
kubectlParallelismLimit int64
cacheSrc func() (*appstatecache.Cache, error)
redisClient *redis.Client
@@ -111,14 +110,10 @@ func NewCommand() *cobra.Command {
errors.CheckError(err)
cache.Cache.SetClient(cacheutil.NewTwoLevelClient(cache.Cache.GetClient(), 10*time.Minute))
var appController *controller.ApplicationController
settingsMgr := settings.NewSettingsManager(ctx, kubeClient, namespace, settings.WithRepoOrClusterChangedHandler(func() {
appController.InvalidateProjectsCache()
}))
settingsMgr := settings.NewSettingsManager(ctx, kubeClient, namespace)
kubectl := kubeutil.NewKubectl()
clusterFilter := getClusterFilter()
appController, err = controller.NewApplicationController(
appController, err := controller.NewApplicationController(
namespace,
settingsMgr,
kubeClient,
@@ -130,7 +125,6 @@ func NewCommand() *cobra.Command {
time.Duration(selfHealTimeoutSeconds)*time.Second,
metricsPort,
metricsCacheExpiration,
metricsAplicationLabels,
kubectlParallelismLimit,
clusterFilter)
errors.CheckError(err)
@@ -164,7 +158,6 @@ func NewCommand() *cobra.Command {
command.Flags().Int64Var(&kubectlParallelismLimit, "kubectl-parallelism-limit", 20, "Number of allowed concurrent kubectl fork/execs. Any value less the 1 means no limit.")
command.Flags().BoolVar(&repoServerPlaintext, "repo-server-plaintext", env.ParseBoolFromEnv("ARGOCD_APPLICATION_CONTROLLER_REPO_SERVER_PLAINTEXT", false), "Disable TLS on connections to repo server")
command.Flags().BoolVar(&repoServerStrictTLS, "repo-server-strict-tls", env.ParseBoolFromEnv("ARGOCD_APPLICATION_CONTROLLER_REPO_SERVER_STRICT_TLS", false), "Whether to use strict validation of the TLS cert presented by the repo server")
command.Flags().StringSliceVar(&metricsAplicationLabels, "metrics-application-labels", []string{}, "List of Application labels that will be added to the argocd_application_labels metric")
cacheSrc = appstatecache.AddCacheFlagsToCmd(&command, func(client *redis.Client) {
redisClient = client
})

View File

@@ -1,58 +0,0 @@
package commands
import (
"time"
"github.com/argoproj/pkg/stats"
"github.com/spf13/cobra"
cmdutil "github.com/argoproj/argo-cd/v2/cmd/util"
"github.com/argoproj/argo-cd/v2/cmpserver"
"github.com/argoproj/argo-cd/v2/cmpserver/plugin"
"github.com/argoproj/argo-cd/v2/common"
"github.com/argoproj/argo-cd/v2/util/cli"
"github.com/argoproj/argo-cd/v2/util/errors"
)
const (
// CLIName is the name of the CLI
cliName = "argocd-cmp-server"
)
func NewCommand() *cobra.Command {
var (
configFilePath string
)
var command = cobra.Command{
Use: cliName,
Short: "Run ArgoCD ConfigManagementPlugin Server",
Long: "ArgoCD ConfigManagementPlugin Server is an internal service which runs as sidecar container in reposerver deployment. It can be configured by following options.",
DisableAutoGenTag: true,
RunE: func(c *cobra.Command, args []string) error {
cli.SetLogFormat(cmdutil.LogFormat)
cli.SetLogLevel(cmdutil.LogLevel)
config, err := plugin.ReadPluginConfig(configFilePath)
errors.CheckError(err)
server, err := cmpserver.NewServer(plugin.CMPServerInitConstants{
PluginConfig: *config,
})
errors.CheckError(err)
// register dumper
stats.RegisterStackDumper()
stats.StartStatsTicker(10 * time.Minute)
stats.RegisterHeapDumper("memprofile")
// run argocd-cmp-server server
server.Run()
return nil
},
}
command.Flags().StringVar(&cmdutil.LogFormat, "logformat", "text", "Set the logging format. One of: text|json")
command.Flags().StringVar(&cmdutil.LogLevel, "loglevel", "info", "Set the logging level. One of: debug|info|warn|error")
command.Flags().StringVar(&configFilePath, "config-dir-path", common.DefaultPluginConfigFilePath, "Config management plugin configuration file location, Default is '/home/argocd/cmp-server/config/'")
return &command
}

View File

@@ -1,57 +0,0 @@
package commands
import (
"context"
"fmt"
"os"
"strings"
"github.com/argoproj/argo-cd/v2/util/git"
"github.com/spf13/cobra"
"google.golang.org/grpc"
"github.com/argoproj/argo-cd/v2/reposerver/askpass"
"github.com/argoproj/argo-cd/v2/util/errors"
grpc_util "github.com/argoproj/argo-cd/v2/util/grpc"
"github.com/argoproj/argo-cd/v2/util/io"
)
const (
// cliName is the name of the CLI
cliName = "argocd-git-ask-pass"
)
func NewCommand() *cobra.Command {
var command = cobra.Command{
Use: cliName,
Short: "Argo CD git credential helper",
DisableAutoGenTag: true,
Run: func(c *cobra.Command, args []string) {
if len(os.Args) != 2 {
errors.CheckError(fmt.Errorf("expected 1 argument, got %d", len(os.Args)-1))
}
nonce := os.Getenv(git.ASKPASS_NONCE_ENV)
if nonce == "" {
errors.CheckError(fmt.Errorf("%s is not set", git.ASKPASS_NONCE_ENV))
}
conn, err := grpc_util.BlockingDial(context.Background(), "unix", askpass.SocketPath, nil, grpc.WithInsecure())
errors.CheckError(err)
defer io.Close(conn)
client := askpass.NewAskPassServiceClient(conn)
creds, err := client.GetCredentials(context.Background(), &askpass.CredentialsRequest{Nonce: nonce})
errors.CheckError(err)
switch {
case strings.HasPrefix(os.Args[1], "Username"):
fmt.Println(creds.Username)
case strings.HasPrefix(os.Args[1], "Password"):
fmt.Println(creds.Password)
default:
errors.CheckError(fmt.Errorf("unknown credential type '%s'", os.Args[1]))
}
},
}
return &command
}

View File

@@ -1,132 +0,0 @@
package commands
import (
"context"
"fmt"
"net/http"
"os"
"strings"
service "github.com/argoproj/argo-cd/v2/util/notification/argocd"
notificationscontroller "github.com/argoproj/argo-cd/v2/notification_controller/controller"
controller "github.com/argoproj/notifications-engine/pkg/controller"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"k8s.io/client-go/dynamic"
"k8s.io/client-go/kubernetes"
_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
"k8s.io/client-go/tools/clientcmd"
)
const (
defaultMetricsPort = 9001
)
func addK8SFlagsToCmd(cmd *cobra.Command) clientcmd.ClientConfig {
loadingRules := clientcmd.NewDefaultClientConfigLoadingRules()
loadingRules.DefaultClientConfig = &clientcmd.DefaultClientConfig
overrides := clientcmd.ConfigOverrides{}
kflags := clientcmd.RecommendedConfigOverrideFlags("")
cmd.PersistentFlags().StringVar(&loadingRules.ExplicitPath, "kubeconfig", "", "Path to a kube config. Only required if out-of-cluster")
clientcmd.BindOverrideFlags(&overrides, cmd.PersistentFlags(), kflags)
return clientcmd.NewInteractiveDeferredLoadingClientConfig(loadingRules, &overrides, os.Stdin)
}
func NewCommand() *cobra.Command {
var (
clientConfig clientcmd.ClientConfig
processorsCount int
namespace string
appLabelSelector string
logLevel string
logFormat string
metricsPort int
argocdRepoServer string
argocdRepoServerPlaintext bool
argocdRepoServerStrictTLS bool
configMapName string
secretName string
)
var command = cobra.Command{
Use: "controller",
Short: "Starts Argo CD Notifications controller",
RunE: func(c *cobra.Command, args []string) error {
restConfig, err := clientConfig.ClientConfig()
if err != nil {
return err
}
dynamicClient, err := dynamic.NewForConfig(restConfig)
if err != nil {
return err
}
k8sClient, err := kubernetes.NewForConfig(restConfig)
if err != nil {
return err
}
if namespace == "" {
namespace, _, err = clientConfig.Namespace()
if err != nil {
return err
}
}
level, err := log.ParseLevel(logLevel)
if err != nil {
return err
}
log.SetLevel(level)
switch strings.ToLower(logFormat) {
case "json":
log.SetFormatter(&log.JSONFormatter{})
case "text":
if os.Getenv("FORCE_LOG_COLORS") == "1" {
log.SetFormatter(&log.TextFormatter{ForceColors: true})
}
default:
return fmt.Errorf("Unknown log format '%s'", logFormat)
}
argocdService, err := service.NewArgoCDService(k8sClient, namespace, argocdRepoServer, argocdRepoServerPlaintext, argocdRepoServerStrictTLS)
if err != nil {
return err
}
defer argocdService.Close()
registry := controller.NewMetricsRegistry("argocd")
http.Handle("/metrics", promhttp.HandlerFor(prometheus.Gatherers{registry, prometheus.DefaultGatherer}, promhttp.HandlerOpts{}))
go func() {
log.Fatal(http.ListenAndServe(fmt.Sprintf("0.0.0.0:%d", metricsPort), http.DefaultServeMux))
}()
log.Infof("serving metrics on port %d", metricsPort)
log.Infof("loading configuration %d", metricsPort)
ctrl := notificationscontroller.NewController(k8sClient, dynamicClient, argocdService, namespace, appLabelSelector, registry, secretName, configMapName)
err = ctrl.Init(context.Background())
if err != nil {
return err
}
go ctrl.Run(context.Background(), processorsCount)
<-context.Background().Done()
return nil
},
}
clientConfig = addK8SFlagsToCmd(&command)
command.Flags().IntVar(&processorsCount, "processors-count", 1, "Processors count.")
command.Flags().StringVar(&appLabelSelector, "app-label-selector", "", "App label selector.")
command.Flags().StringVar(&namespace, "namespace", "", "Namespace which controller handles. Current namespace if empty.")
command.Flags().StringVar(&logLevel, "loglevel", "info", "Set the logging level. One of: debug|info|warn|error")
command.Flags().StringVar(&logFormat, "logformat", "text", "Set the logging format. One of: text|json")
command.Flags().IntVar(&metricsPort, "metrics-port", defaultMetricsPort, "Metrics port")
command.Flags().StringVar(&argocdRepoServer, "argocd-repo-server", "argocd-repo-server:8081", "Argo CD repo server address")
command.Flags().BoolVar(&argocdRepoServerPlaintext, "argocd-repo-server-plaintext", false, "Use a plaintext client (non-TLS) to connect to repository server")
command.Flags().BoolVar(&argocdRepoServerStrictTLS, "argocd-repo-server-strict-tls", false, "Perform strict validation of TLS certificates when connecting to repo server")
command.Flags().StringVar(&configMapName, "config-map-name", "argocd-notifications-cm", "Set notifications ConfigMap name")
command.Flags().StringVar(&secretName, "secret-name", "argocd-notifications-secret", "Set notifications Secret name")
return &command
}

View File

@@ -13,12 +13,12 @@ import (
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"google.golang.org/grpc/health/grpc_health_v1"
"k8s.io/apimachinery/pkg/api/resource"
cmdutil "github.com/argoproj/argo-cd/v2/cmd/util"
"github.com/argoproj/argo-cd/v2/common"
"github.com/argoproj/argo-cd/v2/reposerver"
"github.com/argoproj/argo-cd/v2/reposerver/apiclient"
"github.com/argoproj/argo-cd/v2/reposerver/askpass"
reposervercache "github.com/argoproj/argo-cd/v2/reposerver/cache"
"github.com/argoproj/argo-cd/v2/reposerver/metrics"
"github.com/argoproj/argo-cd/v2/reposerver/repository"
@@ -62,20 +62,17 @@ func getPauseGenerationOnFailureForRequests() int {
return env.ParseNumFromEnv(common.EnvPauseGenerationRequests, defaultPauseGenerationOnFailureForRequests, 0, math.MaxInt32)
}
func getSubmoduleEnabled() bool {
return env.ParseBoolFromEnv(common.EnvGitSubmoduleEnabled, true)
}
func NewCommand() *cobra.Command {
var (
parallelismLimit int64
listenPort int
metricsPort int
cacheSrc func() (*reposervercache.Cache, error)
tlsConfigCustomizer tls.ConfigCustomizer
tlsConfigCustomizerSrc func() (tls.ConfigCustomizer, error)
redisClient *redis.Client
disableTLS bool
parallelismLimit int64
listenPort int
metricsPort int
cacheSrc func() (*reposervercache.Cache, error)
tlsConfigCustomizer tls.ConfigCustomizer
tlsConfigCustomizerSrc func() (tls.ConfigCustomizer, error)
redisClient *redis.Client
disableTLS bool
maxCombinedDirectoryManifestsSize string
)
var command = cobra.Command{
Use: cliName,
@@ -95,16 +92,18 @@ func NewCommand() *cobra.Command {
cache, err := cacheSrc()
errors.CheckError(err)
askPassServer := askpass.NewServer()
maxCombinedDirectoryManifestsQuantity, err := resource.ParseQuantity(maxCombinedDirectoryManifestsSize)
errors.CheckError(err)
metricsServer := metrics.NewMetricsServer()
cacheutil.CollectMetrics(redisClient, metricsServer)
server, err := reposerver.NewServer(metricsServer, cache, tlsConfigCustomizer, repository.RepoServerInitConstants{
ParallelismLimit: parallelismLimit,
ParallelismLimit: parallelismLimit,
PauseGenerationAfterFailedGenerationAttempts: getPauseGenerationAfterFailedGenerationAttempts(),
PauseGenerationOnFailureForMinutes: getPauseGenerationOnFailureForMinutes(),
PauseGenerationOnFailureForRequests: getPauseGenerationOnFailureForRequests(),
SubmoduleEnabled: getSubmoduleEnabled(),
}, askPassServer)
MaxCombinedDirectoryManifestsSize: maxCombinedDirectoryManifestsQuantity,
})
errors.CheckError(err)
grpc := server.CreateGRPC()
@@ -135,7 +134,6 @@ func NewCommand() *cobra.Command {
})
http.Handle("/metrics", metricsServer.GetHandler())
go func() { errors.CheckError(http.ListenAndServe(fmt.Sprintf(":%d", metricsPort), nil)) }()
go func() { errors.CheckError(askPassServer.Run(askpass.SocketPath)) }()
if gpg.IsGPGEnabled() {
log.Infof("Initializing GnuPG keyring at %s", common.GetGnuPGHomePath())
@@ -168,6 +166,7 @@ func NewCommand() *cobra.Command {
command.Flags().IntVar(&listenPort, "port", common.DefaultPortRepoServer, "Listen on given port for incoming connections")
command.Flags().IntVar(&metricsPort, "metrics-port", common.DefaultPortRepoServerMetrics, "Start metrics server on given port")
command.Flags().BoolVar(&disableTLS, "disable-tls", env.ParseBoolFromEnv("ARGOCD_REPO_SERVER_DISABLE_TLS", false), "Disable TLS on the gRPC endpoint")
command.Flags().StringVar(&maxCombinedDirectoryManifestsSize, "max-combined-directory-manifests-size", env.StringFromEnv("ARGOCD_REPO_SERVER_MAX_COMBINED_DIRECTORY_MANIFESTS_SIZE", "10M"), "Max combined size of manifest files in a directory-type Application")
tlsConfigCustomizerSrc = tls.AddTLSFlagsToCmd(&command)
cacheSrc = reposervercache.AddCacheFlagsToCmd(&command, func(client *redis.Client) {

View File

@@ -54,19 +54,7 @@ func NewAccountUpdatePasswordCommand(clientOpts *argocdclient.ClientOptions) *co
)
var command = &cobra.Command{
Use: "update-password",
Short: "Update an account's password",
Long: `
This command can be used to update the password of the currently logged on
user, or an arbitrary local user account when the currently logged on user
has appropriate RBAC permissions to change other accounts.
`,
Example: `
# Update the current user's password
argocd account update-password
# Update the password for user foobar
argocd account update-password --account foobar
`,
Short: "Update password",
Run: func(c *cobra.Command, args []string) {
if len(args) != 0 {
c.HelpFunc()(c, args)
@@ -79,20 +67,16 @@ has appropriate RBAC permissions to change other accounts.
userInfo := getCurrentAccount(acdClient)
if userInfo.Iss == sessionutil.SessionManagerClaimsIssuer && currentPassword == "" {
fmt.Printf("*** Enter password of currently logged in user (%s): ", userInfo.Username)
fmt.Print("*** Enter current password: ")
password, err := term.ReadPassword(int(os.Stdin.Fd()))
errors.CheckError(err)
currentPassword = string(password)
fmt.Print("\n")
}
if account == "" {
account = userInfo.Username
}
if newPassword == "" {
var err error
newPassword, err = cli.ReadAndConfirmPassword(account)
newPassword, err = cli.ReadAndConfirmPassword()
errors.CheckError(err)
}
@@ -127,7 +111,7 @@ has appropriate RBAC permissions to change other accounts.
},
}
command.Flags().StringVar(&currentPassword, "current-password", "", "password of the currently logged on user")
command.Flags().StringVar(&currentPassword, "current-password", "", "current password you wish to change")
command.Flags().StringVar(&newPassword, "new-password", "", "new password you want to update to")
command.Flags().StringVar(&account, "account", "", "an account name that should be updated. Defaults to current user account")
return command


@@ -55,7 +55,6 @@ func NewAdminCommand() *cobra.Command {
command.AddCommand(NewImportCommand())
command.AddCommand(NewExportCommand())
command.AddCommand(NewDashboardCommand())
command.AddCommand(NewNotificationsCommand())
command.Flags().StringVar(&cmdutil.LogFormat, "logformat", "text", "Set the logging format. One of: text|json")
command.Flags().StringVar(&cmdutil.LogLevel, "loglevel", "info", "Set the logging level. One of: debug|info|warn|error")


@@ -10,8 +10,6 @@ import (
"sort"
"time"
"github.com/argoproj/argo-cd/v2/util/argo"
"github.com/ghodss/yaml"
"github.com/spf13/cobra"
apiv1 "k8s.io/api/core/v1"
@@ -90,12 +88,9 @@ func NewGenAppSpecCommand() *cobra.Command {
argocd admin app generate-spec ksane --repo https://github.com/argoproj/argocd-example-apps.git --path plugins/kasane --dest-namespace default --dest-server https://kubernetes.default.svc --config-management-plugin kasane
`,
Run: func(c *cobra.Command, args []string) {
apps, err := cmdutil.ConstructApps(fileURL, appName, labels, annotations, args, appOpts, c.Flags())
app, err := cmdutil.ConstructApp(fileURL, appName, labels, annotations, args, appOpts, c.Flags())
errors.CheckError(err)
if len(apps) > 1 {
errors.CheckError(fmt.Errorf("failed to generate spec, more than one application is not supported"))
}
app := apps[0]
if app.Name == "" {
c.HelpFunc()(c, args)
os.Exit(1)
@@ -334,11 +329,11 @@ func reconcileApplications(
settingsMgr := settings.NewSettingsManager(context.Background(), kubeClientset, namespace)
argoDB := db.NewDB(namespace, settingsMgr, kubeClientset)
appInformerFactory := appinformers.NewSharedInformerFactoryWithOptions(
appInformerFactory := appinformers.NewFilteredSharedInformerFactory(
appClientset,
1*time.Hour,
appinformers.WithNamespace(namespace),
appinformers.WithTweakListOptions(func(options *v1.ListOptions) {}),
namespace,
func(options *v1.ListOptions) {},
)
appInformer := appInformerFactory.Argoproj().V1alpha1().Applications().Informer()
@@ -355,7 +350,7 @@ func reconcileApplications(
return true
}, func(r *http.Request) error {
return nil
}, []string{})
})
if err != nil {
return nil, err
@@ -371,7 +366,7 @@ func reconcileApplications(
)
appStateManager := controller.NewAppStateManager(
argoDB, appClientset, repoServerClient, namespace, kubeutil.NewKubectl(), settingsMgr, stateCache, projInformer, server, cache, time.Second, argo.NewResourceTracking())
argoDB, appClientset, repoServerClient, namespace, kubeutil.NewKubectl(), settingsMgr, stateCache, projInformer, server, cache, time.Second)
appsList, err := appClientset.ArgoprojV1alpha1().Applications(namespace).List(context.Background(), v1.ListOptions{LabelSelector: selector})
if err != nil {
@@ -413,5 +408,5 @@ func reconcileApplications(
}
func newLiveStateCache(argoDB db.ArgoDB, appInformer kubecache.SharedIndexInformer, settingsMgr *settings.SettingsManager, server *metrics.MetricsServer) cache.LiveStateCache {
return cache.NewLiveStateCache(argoDB, appInformer, settingsMgr, kubeutil.NewKubectl(), server, func(managedByApp map[string]bool, ref apiv1.ObjectReference) {}, nil, argo.NewResourceTracking())
return cache.NewLiveStateCache(argoDB, appInformer, settingsMgr, kubeutil.NewKubectl(), server, func(managedByApp map[string]bool, ref apiv1.ObjectReference) {}, nil)
}


@@ -111,11 +111,10 @@ func NewExportCommand() *cobra.Command {
// NewImportCommand defines a new command for importing Kubernetes and Argo CD resources.
func NewImportCommand() *cobra.Command {
var (
clientConfig clientcmd.ClientConfig
prune bool
dryRun bool
verbose bool
stopOperation bool
clientConfig clientcmd.ClientConfig
prune bool
dryRun bool
verbose bool
)
var command = cobra.Command{
Use: "import SOURCE",
@@ -229,14 +228,14 @@ func NewImportCommand() *cobra.Command {
fmt.Printf("%s/%s %s created%s\n", gvk.Group, gvk.Kind, bakObj.GetName(), dryRunMsg)
}
} else if specsEqual(*bakObj, liveObj) && checkAppHasNoNeedToStopOperation(liveObj, stopOperation) {
} else if specsEqual(*bakObj, liveObj) {
if verbose {
fmt.Printf("%s/%s %s unchanged%s\n", gvk.Group, gvk.Kind, bakObj.GetName(), dryRunMsg)
}
} else {
isForbidden := false
if !dryRun {
newLive := updateLive(bakObj, &liveObj, stopOperation)
newLive := updateLive(bakObj, &liveObj)
_, err = dynClient.Update(context.Background(), newLive, v1.UpdateOptions{})
if apierr.IsForbidden(err) || apierr.IsNotFound(err) {
isForbidden = true
@@ -301,23 +300,10 @@ func NewImportCommand() *cobra.Command {
command.Flags().BoolVar(&dryRun, "dry-run", false, "Print what will be performed")
command.Flags().BoolVar(&prune, "prune", false, "Prune secrets, applications and projects which do not appear in the backup")
command.Flags().BoolVar(&verbose, "verbose", false, "Verbose output (versus only changed output)")
command.Flags().BoolVar(&stopOperation, "stop-operation", false, "Stop any existing operations")
return &command
}
// checkAppHasNoNeedToStopOperation reports whether the live Application has no pending operation that needs to be stopped.
func checkAppHasNoNeedToStopOperation(liveObj unstructured.Unstructured, stopOperation bool) bool {
if !stopOperation {
return true
}
switch liveObj.GetKind() {
case "Application":
return liveObj.Object["operation"] == nil
}
return true
}
// export writes the unstructured object and removes extraneous cruft from output before writing
func export(w io.Writer, un unstructured.Unstructured) {
name := un.GetName()
@@ -343,7 +329,7 @@ func export(w io.Writer, un unstructured.Unstructured) {
// updateLive replaces the live object's finalizers, spec, annotations, labels, and data from the
// backup object but leaves all other fields intact (status, other metadata, etc...)
func updateLive(bak, live *unstructured.Unstructured, stopOperation bool) *unstructured.Unstructured {
func updateLive(bak, live *unstructured.Unstructured) *unstructured.Unstructured {
newLive := live.DeepCopy()
newLive.SetAnnotations(bak.GetAnnotations())
newLive.SetLabels(bak.GetLabels())
@@ -358,10 +344,6 @@ func updateLive(bak, live *unstructured.Unstructured, stopOperation bool) *unstr
if _, ok := bak.Object["status"]; ok {
newLive.Object["status"] = bak.Object["status"]
}
if stopOperation {
newLive.Object["operation"] = nil
}
case "ApplicationSet":
newLive.Object["spec"] = bak.Object["spec"]
}


@@ -35,7 +35,6 @@ import (
"github.com/argoproj/argo-cd/v2/util/glob"
kubeutil "github.com/argoproj/argo-cd/v2/util/kube"
"github.com/argoproj/argo-cd/v2/util/settings"
"github.com/argoproj/argo-cd/v2/util/text/label"
)
func NewClusterCommand(pathOpts *clientcmd.PathOptions) *cobra.Command {
@@ -509,8 +508,6 @@ func NewGenClusterConfigCommand(pathOpts *clientcmd.PathOptions) *cobra.Command
bearerToken string
generateToken bool
outputFormat string
labels []string
annotations []string
)
var command = &cobra.Command{
Use: "generate-spec CONTEXT",
@@ -564,13 +561,7 @@ func NewGenClusterConfigCommand(pathOpts *clientcmd.PathOptions) *cobra.Command
if clusterOpts.Name != "" {
contextName = clusterOpts.Name
}
labelsMap, err := label.Parse(labels)
errors.CheckError(err)
annotationsMap, err := label.Parse(annotations)
errors.CheckError(err)
clst := cmdutil.NewCluster(contextName, clusterOpts.Namespaces, clusterOpts.ClusterResources, conf, bearerToken, awsAuthConf, execProviderConf, labelsMap, annotationsMap)
clst := cmdutil.NewCluster(contextName, clusterOpts.Namespaces, clusterOpts.ClusterResources, conf, bearerToken, awsAuthConf, execProviderConf)
if clusterOpts.InCluster {
clst.Server = argoappv1.KubernetesInternalAPIServerAddr
}
@@ -599,8 +590,6 @@ func NewGenClusterConfigCommand(pathOpts *clientcmd.PathOptions) *cobra.Command
command.Flags().StringVar(&clusterOpts.ServiceAccount, "service-account", "argocd-manager", fmt.Sprintf("System namespace service account to use for kubernetes resource management. If not set then default \"%s\" SA will be used", clusterauth.ArgoCDManagerServiceAccount))
command.Flags().StringVar(&clusterOpts.SystemNamespace, "system-namespace", common.DefaultSystemNamespace, "Use different system namespace")
command.Flags().StringVarP(&outputFormat, "output", "o", "yaml", "Output format. One of: json|yaml")
command.Flags().StringArrayVar(&labels, "label", nil, "Set metadata labels (e.g. --label key=value)")
command.Flags().StringArrayVar(&annotations, "annotation", nil, "Set metadata annotations (e.g. --annotation key=value)")
cmdutil.AddClusterFlags(command, &clusterOpts)
return command
}


@@ -13,20 +13,18 @@ import (
func NewDashboardCommand() *cobra.Command {
var (
port int
address string
port int
)
cmd := &cobra.Command{
Use: "dashboard",
Short: "Starts Argo CD Web UI locally",
Run: func(cmd *cobra.Command, args []string) {
println(fmt.Sprintf("Argo CD UI is available at http://%s:%d", address, port))
println(fmt.Sprintf("Argo CD UI is available at http://localhost:%d", port))
<-context.Background().Done()
},
}
clientOpts := &apiclient.ClientOptions{Core: true}
headless.InitCommand(cmd, clientOpts, &port, &address)
headless.InitCommand(cmd, clientOpts, &port)
cmd.Flags().IntVar(&port, "port", common.DefaultPortAPIServer, "Listen on given port")
cmd.Flags().StringVar(&address, "address", common.DefaultAddressAPIServer, "Listen on given address")
return cmd
}


@@ -1,51 +0,0 @@
package admin
import (
"log"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
service "github.com/argoproj/argo-cd/v2/util/notification/argocd"
settings "github.com/argoproj/argo-cd/v2/util/notification/settings"
"github.com/argoproj/notifications-engine/pkg/cmd"
"github.com/spf13/cobra"
)
var (
applications = schema.GroupVersionResource{Group: "argoproj.io", Version: "v1alpha1", Resource: "applications"}
)
func NewNotificationsCommand() *cobra.Command {
var (
argocdRepoServer string
argocdRepoServerPlaintext bool
argocdRepoServerStrictTLS bool
)
var argocdService service.Service
toolsCommand := cmd.NewToolsCommand(
"notifications",
"notifications",
applications,
settings.GetFactorySettings(argocdService, "argocd-notifications-secret", "argocd-notifications-cm"), func(clientConfig clientcmd.ClientConfig) {
k8sCfg, err := clientConfig.ClientConfig()
if err != nil {
log.Fatalf("Failed to parse k8s config: %v", err)
}
ns, _, err := clientConfig.Namespace()
if err != nil {
log.Fatalf("Failed to parse k8s config: %v", err)
}
argocdService, err = service.NewArgoCDService(kubernetes.NewForConfigOrDie(k8sCfg), ns, argocdRepoServer, argocdRepoServerPlaintext, argocdRepoServerStrictTLS)
if err != nil {
log.Fatalf("Failed to initalize Argo CD service: %v", err)
}
})
toolsCommand.PersistentFlags().StringVar(&argocdRepoServer, "argocd-repo-server", "argocd-repo-server:8081", "Argo CD repo server address")
toolsCommand.PersistentFlags().BoolVar(&argocdRepoServerPlaintext, "argocd-repo-server-plaintext", false, "Use a plaintext client (non-TLS) to connect to repository server")
toolsCommand.PersistentFlags().BoolVar(&argocdRepoServerStrictTLS, "argocd-repo-server-strict-tls", false, "Perform strict validation of TLS certificates when connecting to repo server")
return toolsCommand
}


@@ -2,6 +2,7 @@ package admin
import (
"bufio"
"fmt"
"io"
"io/ioutil"
"os"
@@ -63,7 +64,10 @@ func NewProjectAllowListGenCommand() *cobra.Command {
}()
}
globalProj := generateProjectAllowList(clientConfig, clusterRoleFileName, projName)
resourceList, err := getResourceList(clientConfig)
errors.CheckError(err)
globalProj, err := generateProjectAllowList(resourceList, clusterRoleFileName, projName)
errors.CheckError(err)
yamlBytes, err := yaml.Marshal(globalProj)
errors.CheckError(err)
@@ -78,23 +82,38 @@ func NewProjectAllowListGenCommand() *cobra.Command {
return command
}
func generateProjectAllowList(clientConfig clientcmd.ClientConfig, clusterRoleFileName string, projName string) v1alpha1.AppProject {
func getResourceList(clientConfig clientcmd.ClientConfig) ([]*metav1.APIResourceList, error) {
config, err := clientConfig.ClientConfig()
if err != nil {
return nil, fmt.Errorf("error while creating client config: %s", err)
}
disco, err := discovery.NewDiscoveryClientForConfig(config)
if err != nil {
return nil, fmt.Errorf("error while creating discovery client: %s", err)
}
serverResources, err := disco.ServerPreferredResources()
if err != nil {
return nil, fmt.Errorf("error while getting server resources: %s", err)
}
return serverResources, nil
}
func generateProjectAllowList(serverResources []*metav1.APIResourceList, clusterRoleFileName string, projName string) (*v1alpha1.AppProject, error) {
yamlBytes, err := ioutil.ReadFile(clusterRoleFileName)
errors.CheckError(err)
if err != nil {
return nil, fmt.Errorf("error reading cluster role file: %s", err)
}
var obj unstructured.Unstructured
err = yaml.Unmarshal(yamlBytes, &obj)
errors.CheckError(err)
if err != nil {
return nil, fmt.Errorf("error unmarshalling cluster role file yaml: %s", err)
}
clusterRole := &rbacv1.ClusterRole{}
err = scheme.Scheme.Convert(&obj, clusterRole, nil)
errors.CheckError(err)
config, err := clientConfig.ClientConfig()
errors.CheckError(err)
disco, err := discovery.NewDiscoveryClientForConfig(config)
errors.CheckError(err)
serverResources, err := disco.ServerPreferredResources()
errors.CheckError(err)
if err != nil {
return nil, fmt.Errorf("error converting cluster role yaml into ClusterRole struct: %s", err)
}
resourceList := make([]metav1.GroupKind, 0)
for _, rule := range clusterRole.Rules {
@@ -140,5 +159,5 @@ func generateProjectAllowList(clientConfig clientcmd.ClientConfig, clusterRoleFi
Spec: v1alpha1.AppProjectSpec{},
}
globalProj.Spec.NamespaceResourceWhitelist = resourceList
return globalProj
return &globalProj, nil
}


@@ -1,57 +1,20 @@
package admin
import (
"reflect"
"testing"
"github.com/stretchr/testify/assert"
"github.com/undefinedlabs/go-mpatch"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/discovery"
restclient "k8s.io/client-go/rest"
"k8s.io/client-go/tools/clientcmd"
)
func TestProjectAllowListGen(t *testing.T) {
useMock := true
rules := clientcmd.NewDefaultClientConfigLoadingRules()
overrides := &clientcmd.ConfigOverrides{}
clientConfig := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides)
if useMock {
var patchClientConfig *mpatch.Patch
patchClientConfig, err := mpatch.PatchInstanceMethodByName(reflect.TypeOf(clientConfig), "ClientConfig", func(*clientcmd.DeferredLoadingClientConfig) (*restclient.Config, error) {
return nil, nil
})
assert.NoError(t, err)
patch, err := mpatch.PatchMethod(discovery.NewDiscoveryClientForConfig, func(c *restclient.Config) (*discovery.DiscoveryClient, error) {
return &discovery.DiscoveryClient{LegacyPrefix: "/api"}, nil
})
assert.NoError(t, err)
var patchSeverPreferredResources *mpatch.Patch
discoClient := &discovery.DiscoveryClient{}
patchSeverPreferredResources, err = mpatch.PatchInstanceMethodByName(reflect.TypeOf(discoClient), "ServerPreferredResources", func(*discovery.DiscoveryClient) ([]*metav1.APIResourceList, error) {
res := metav1.APIResource{
Name: "services",
Kind: "Service",
}
resourceList := []*metav1.APIResourceList{{APIResources: []metav1.APIResource{res}}}
return resourceList, nil
})
assert.NoError(t, err)
defer func() {
err = patchClientConfig.Unpatch()
assert.NoError(t, err)
err = patch.Unpatch()
assert.NoError(t, err)
err = patchSeverPreferredResources.Unpatch()
err = patch.Unpatch()
}()
res := metav1.APIResource{
Name: "services",
Kind: "Service",
}
resourceList := []*metav1.APIResourceList{{APIResources: []metav1.APIResource{res}}}
globalProj := generateProjectAllowList(clientConfig, "testdata/test_clusterrole.yaml", "testproj")
globalProj, err := generateProjectAllowList(resourceList, "testdata/test_clusterrole.yaml", "testproj")
assert.NoError(t, err)
assert.True(t, len(globalProj.Spec.NamespaceResourceWhitelist) > 0)
}


@@ -401,15 +401,11 @@ argocd admin settings resource-overrides ignore-differences ./deploy.yaml --argo
executeResourceOverrideCommand(cmdCtx, args, func(res unstructured.Unstructured, override v1alpha1.ResourceOverride, overrides map[string]v1alpha1.ResourceOverride) {
gvk := res.GroupVersionKind()
if len(override.IgnoreDifferences.JSONPointers) == 0 && len(override.IgnoreDifferences.JQPathExpressions) == 0 {
if len(override.IgnoreDifferences.JSONPointers) == 0 {
_, _ = fmt.Printf("Ignore differences are not configured for '%s/%s'\n", gvk.Group, gvk.Kind)
return
}
// This normalizer won't verify 'managedFieldsManagers' ignore difference
// configurations. This requires access to live resources which is not the
// purpose of this command. This will just apply jsonPointers and
// jqPathExpressions configurations.
normalizer, err := normalizers.NewIgnoreNormalizer(nil, overrides)
errors.CheckError(err)
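As the comment above explains, this normalizer only applies the jsonPointers and jqPathExpressions configurations. As a rough illustration of what applying a jsonPointer ignore amounts to (an assumed simplification, not the Argo CD implementation), the addressed path is simply dropped from the object before diffing:

package main

import (
	"fmt"
	"strings"
)

// dropPointer removes the field addressed by a simple JSON pointer such as
// "/spec/replicas" from a nested map. Escaping and array indices are omitted
// for brevity; this only illustrates the ignore-differences idea.
func dropPointer(obj map[string]interface{}, pointer string) {
	parts := strings.Split(strings.TrimPrefix(pointer, "/"), "/")
	cur := obj
	for i, part := range parts {
		if i == len(parts)-1 {
			delete(cur, part)
			return
		}
		next, ok := cur[part].(map[string]interface{})
		if !ok {
			return
		}
		cur = next
	}
}

func main() {
	live := map[string]interface{}{
		"spec": map[string]interface{}{"replicas": 3, "paused": false},
	}
	dropPointer(live, "/spec/replicas")
	fmt.Println(live) // map[spec:map[paused:false]]
}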


@@ -14,8 +14,7 @@ import (
"time"
"unicode/utf8"
"github.com/argoproj/gitops-engine/pkg/sync/common"
"github.com/argoproj/gitops-engine/pkg/diff"
"github.com/argoproj/gitops-engine/pkg/health"
"github.com/argoproj/gitops-engine/pkg/sync/hook"
"github.com/argoproj/gitops-engine/pkg/sync/ignore"
@@ -26,6 +25,7 @@ import (
"github.com/spf13/cobra"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
"k8s.io/apimachinery/pkg/api/resource"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/types"
@@ -45,11 +45,11 @@ import (
repoapiclient "github.com/argoproj/argo-cd/v2/reposerver/apiclient"
"github.com/argoproj/argo-cd/v2/reposerver/repository"
"github.com/argoproj/argo-cd/v2/util/argo"
argodiff "github.com/argoproj/argo-cd/v2/util/argo/diff"
"github.com/argoproj/argo-cd/v2/util/cli"
"github.com/argoproj/argo-cd/v2/util/errors"
"github.com/argoproj/argo-cd/v2/util/git"
argoio "github.com/argoproj/argo-cd/v2/util/io"
argokube "github.com/argoproj/argo-cd/v2/util/kube"
"github.com/argoproj/argo-cd/v2/util/templates"
"github.com/argoproj/argo-cd/v2/util/text/label"
)
@@ -93,7 +93,6 @@ func NewApplicationCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comman
command.AddCommand(NewApplicationEditCommand(clientOpts))
command.AddCommand(NewApplicationPatchCommand(clientOpts))
command.AddCommand(NewApplicationPatchResourceCommand(clientOpts))
command.AddCommand(NewApplicationDeleteResourceCommand(clientOpts))
command.AddCommand(NewApplicationResourceActionsCommand(clientOpts))
command.AddCommand(NewApplicationListResourcesCommand(clientOpts))
command.AddCommand(NewApplicationLogsCommand(clientOpts))
@@ -135,27 +134,24 @@ func NewApplicationCreateCommand(clientOpts *argocdclient.ClientOptions) *cobra.
Run: func(c *cobra.Command, args []string) {
argocdClient := argocdclient.NewClientOrDie(clientOpts)
apps, err := cmdutil.ConstructApps(fileURL, appName, labels, annotations, args, appOpts, c.Flags())
app, err := cmdutil.ConstructApp(fileURL, appName, labels, annotations, args, appOpts, c.Flags())
errors.CheckError(err)
for _, app := range apps {
if app.Name == "" {
c.HelpFunc()(c, args)
os.Exit(1)
}
conn, appIf := argocdClient.NewApplicationClientOrDie()
defer argoio.Close(conn)
appCreateRequest := applicationpkg.ApplicationCreateRequest{
Application: *app,
Upsert: &upsert,
Validate: &appOpts.Validate,
}
created, err := appIf.Create(context.Background(), &appCreateRequest)
errors.CheckError(err)
fmt.Printf("application '%s' created\n", created.ObjectMeta.Name)
if app.Name == "" {
c.HelpFunc()(c, args)
os.Exit(1)
}
conn, appIf := argocdClient.NewApplicationClientOrDie()
defer argoio.Close(conn)
appCreateRequest := applicationpkg.ApplicationCreateRequest{
Application: *app,
Upsert: &upsert,
Validate: &appOpts.Validate,
}
created, err := appIf.Create(context.Background(), &appCreateRequest)
errors.CheckError(err)
fmt.Printf("application '%s' created\n", created.ObjectMeta.Name)
},
}
command.Flags().StringVar(&appName, "name", "", "A name for the app, ignored if a file is set (DEPRECATED)")
@@ -283,7 +279,6 @@ func NewApplicationLogsCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
untilTime string
filter string
container string
previous bool
)
var command = &cobra.Command{
Use: "logs APPNAME",
@@ -313,7 +308,6 @@ func NewApplicationLogsCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
UntilTime: &untilTime,
Filter: &filter,
Container: container,
Previous: previous,
})
if err != nil {
log.Fatalf("failed to get pod logs: %v", err)
@@ -355,7 +349,6 @@ func NewApplicationLogsCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
command.Flags().StringVar(&untilTime, "until-time", "", "Show logs until this time")
command.Flags().StringVar(&filter, "filter", "", "Show logs contain this string")
command.Flags().StringVar(&container, "container", "", "Optional container name")
command.Flags().BoolVarP(&previous, "previous", "p", false, "Specify if the previously terminated container logs should be returned")
return command
}
@@ -561,16 +554,15 @@ func NewApplicationSetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
// NewApplicationUnsetCommand returns a new instance of an `argocd app unset` command
func NewApplicationUnsetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
parameters []string
valuesLiteral bool
valuesFiles []string
ignoreMissingValueFiles bool
nameSuffix bool
namePrefix bool
kustomizeVersion bool
kustomizeImages []string
pluginEnvs []string
appOpts cmdutil.AppOptions
parameters []string
valuesLiteral bool
valuesFiles []string
nameSuffix bool
namePrefix bool
kustomizeVersion bool
kustomizeImages []string
pluginEnvs []string
appOpts cmdutil.AppOptions
)
var command = &cobra.Command{
Use: "unset APPNAME parameters",
@@ -647,7 +639,7 @@ func NewApplicationUnsetCommand(clientOpts *argocdclient.ClientOptions) *cobra.C
}
}
if app.Spec.Source.Helm != nil {
if len(parameters) == 0 && len(valuesFiles) == 0 && !valuesLiteral && !ignoreMissingValueFiles {
if len(parameters) == 0 && len(valuesFiles) == 0 && !valuesLiteral {
c.HelpFunc()(c, args)
os.Exit(1)
}
@@ -675,14 +667,6 @@ func NewApplicationUnsetCommand(clientOpts *argocdclient.ClientOptions) *cobra.C
}
}
}
if ignoreMissingValueFiles {
app.Spec.Source.Helm.IgnoreMissingValueFiles = false
updated = true
}
if app.Spec.Source.Helm.PassCredentials {
app.Spec.Source.Helm.PassCredentials = false
updated = true
}
}
if app.Spec.Source.Plugin != nil {
@@ -714,7 +698,6 @@ func NewApplicationUnsetCommand(clientOpts *argocdclient.ClientOptions) *cobra.C
command.Flags().StringArrayVarP(&parameters, "parameter", "p", []string{}, "Unset a parameter override (e.g. -p guestbook=image)")
command.Flags().StringArrayVar(&valuesFiles, "values", []string{}, "Unset one or more Helm values files")
command.Flags().BoolVar(&valuesLiteral, "values-literal", false, "Unset literal Helm values block")
command.Flags().BoolVar(&ignoreMissingValueFiles, "ignore-missing-value-files", false, "Unset the helm ignore-missing-value-files option (revert to false)")
command.Flags().BoolVar(&nameSuffix, "namesuffix", false, "Kustomize namesuffix")
command.Flags().BoolVar(&namePrefix, "nameprefix", false, "Kustomize nameprefix")
command.Flags().BoolVar(&kustomizeVersion, "kustomize-version", false, "Kustomize version")
@@ -749,9 +732,9 @@ func liveObjects(resources []*argoappv1.ResourceDiff) ([]*unstructured.Unstructu
return objs, nil
}
func getLocalObjects(app *argoappv1.Application, local, localRepoRoot, appLabelKey, kubeVersion string, apiVersions []string, kustomizeOptions *argoappv1.KustomizeOptions,
configManagementPlugins []*argoappv1.ConfigManagementPlugin, trackingMethod string) []*unstructured.Unstructured {
manifestStrings := getLocalObjectsString(app, local, localRepoRoot, appLabelKey, kubeVersion, apiVersions, kustomizeOptions, configManagementPlugins, trackingMethod)
func getLocalObjects(app *argoappv1.Application, local, localRepoRoot, appLabelKey, kubeVersion string, kustomizeOptions *argoappv1.KustomizeOptions,
configManagementPlugins []*argoappv1.ConfigManagementPlugin) []*unstructured.Unstructured {
manifestStrings := getLocalObjectsString(app, local, localRepoRoot, appLabelKey, kubeVersion, kustomizeOptions, configManagementPlugins)
objs := make([]*unstructured.Unstructured, len(manifestStrings))
for i := range manifestStrings {
obj := unstructured.Unstructured{}
@@ -762,10 +745,10 @@ func getLocalObjects(app *argoappv1.Application, local, localRepoRoot, appLabelK
return objs
}
func getLocalObjectsString(app *argoappv1.Application, local, localRepoRoot, appLabelKey, kubeVersion string, apiVersions []string, kustomizeOptions *argoappv1.KustomizeOptions,
configManagementPlugins []*argoappv1.ConfigManagementPlugin, trackingMethod string) []string {
func getLocalObjectsString(app *argoappv1.Application, local, localRepoRoot, appLabelKey, kubeVersion string, kustomizeOptions *argoappv1.KustomizeOptions,
configManagementPlugins []*argoappv1.ConfigManagementPlugin) []string {
res, err := repository.GenerateManifests(context.Background(), local, localRepoRoot, app.Spec.Source.TargetRevision, &repoapiclient.ManifestRequest{
res, err := repository.GenerateManifests(local, localRepoRoot, app.Spec.Source.TargetRevision, &repoapiclient.ManifestRequest{
Repo: &argoappv1.Repository{Repo: app.Spec.Source.RepoURL},
AppLabelKey: appLabelKey,
AppName: app.Name,
@@ -773,10 +756,8 @@ func getLocalObjectsString(app *argoappv1.Application, local, localRepoRoot, app
ApplicationSource: &app.Spec.Source,
KustomizeOptions: kustomizeOptions,
KubeVersion: kubeVersion,
ApiVersions: apiVersions,
Plugins: configManagementPlugins,
TrackingMethod: trackingMethod,
}, true, &git.NoopCredsStore{})
}, true, resource.MustParse("0"))
errors.CheckError(err)
return res.Manifests
@@ -861,7 +842,7 @@ func NewApplicationDiffCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
defer argoio.Close(conn)
cluster, err := clusterIf.Get(context.Background(), &clusterpkg.ClusterQuery{Name: app.Spec.Destination.Name, Server: app.Spec.Destination.Server})
errors.CheckError(err)
localObjs := groupObjsByKey(getLocalObjects(app, local, localRepoRoot, argoSettings.AppLabelKey, cluster.Info.ServerVersion, cluster.Info.APIVersions, argoSettings.KustomizeOptions, argoSettings.ConfigManagementPlugins, argoSettings.TrackingMethod), liveObjs, app.Spec.Destination.Namespace)
localObjs := groupObjsByKey(getLocalObjects(app, local, localRepoRoot, argoSettings.AppLabelKey, cluster.ServerVersion, argoSettings.KustomizeOptions, argoSettings.ConfigManagementPlugins), liveObjs, app.Spec.Destination.Namespace)
items = groupObjsForDiff(resources, localObjs, items, argoSettings, appName)
} else if revision != "" {
var unstructureds []*unstructured.Unstructured
@@ -903,18 +884,10 @@ func NewApplicationDiffCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
val := argoSettings.ResourceOverrides[k]
overrides[k] = *val
}
// TODO remove hardcoded IgnoreAggregatedRoles and retrieve the
// compareOptions in the protobuf
ignoreAggregatedRoles := false
diffConfig, err := argodiff.NewDiffConfigBuilder().
WithDiffSettings(app.Spec.IgnoreDifferences, overrides, ignoreAggregatedRoles).
WithTracking(argoSettings.AppLabelKey, argoSettings.TrackingMethod).
WithNoCache().
Build()
normalizer, err := argo.NewDiffNormalizer(app.Spec.IgnoreDifferences, overrides)
errors.CheckError(err)
diffRes, err := argodiff.StateDiff(item.live, item.target, diffConfig)
diffRes, err := diff.Diff(item.target, item.live, diff.WithNormalizer(normalizer))
errors.CheckError(err)
if diffRes.Modified || item.target == nil || item.live == nil {
@@ -951,7 +924,6 @@ func NewApplicationDiffCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
}
func groupObjsForDiff(resources *application.ManagedResourcesResponse, objs map[kube.ResourceKey]*unstructured.Unstructured, items []objKeyLiveTarget, argoSettings *settings.Settings, appName string) []objKeyLiveTarget {
resourceTracking := argo.NewResourceTracking()
for _, res := range resources.Items {
var live = &unstructured.Unstructured{}
err := json.Unmarshal([]byte(res.NormalizedLiveState), &live)
@@ -965,7 +937,7 @@ func groupObjsForDiff(resources *application.ManagedResourcesResponse, objs map[
}
if local, ok := objs[key]; ok || live != nil {
if local != nil && !kube.IsCRD(local) {
err = resourceTracking.SetAppInstance(local, argoSettings.AppLabelKey, appName, "", argoappv1.TrackingMethod(argoSettings.GetTrackingMethod()))
err = argokube.SetAppInstanceLabel(local, argoSettings.AppLabelKey, appName)
errors.CheckError(err)
}
@@ -1298,7 +1270,6 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
timeout uint
strategy string
force bool
replace bool
async bool
retryLimit int64
retryBackoffDuration time.Duration
@@ -1406,35 +1377,18 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
cluster, err := clusterIf.Get(context.Background(), &clusterpkg.ClusterQuery{Name: app.Spec.Destination.Name, Server: app.Spec.Destination.Server})
errors.CheckError(err)
argoio.Close(conn)
localObjsStrings = getLocalObjectsString(app, local, localRepoRoot, argoSettings.AppLabelKey, cluster.Info.ServerVersion, cluster.Info.APIVersions, argoSettings.KustomizeOptions, argoSettings.ConfigManagementPlugins, argoSettings.TrackingMethod)
}
syncOptionsFactory := func() *applicationpkg.SyncOptions {
syncOptions := applicationpkg.SyncOptions{}
items := make([]string, 0)
if replace {
items = append(items, common.SyncOptionReplace)
}
if len(items) == 0 {
// avoid sending an empty array when no sync options are set
return nil
}
syncOptions.Items = items
return &syncOptions
localObjsStrings = getLocalObjectsString(app, local, localRepoRoot, argoSettings.AppLabelKey, cluster.ServerVersion, argoSettings.KustomizeOptions, argoSettings.ConfigManagementPlugins)
}
syncReq := applicationpkg.ApplicationSyncRequest{
Name: &appName,
DryRun: dryRun,
Revision: revision,
Resources: selectedResources,
Prune: prune,
Manifests: localObjsStrings,
Infos: getInfos(infos),
SyncOptions: syncOptionsFactory(),
Name: &appName,
DryRun: dryRun,
Revision: revision,
Resources: selectedResources,
Prune: prune,
Manifests: localObjsStrings,
Infos: getInfos(infos),
}
switch strategy {
case "apply":
syncReq.Strategy = &argoappv1.SyncStrategy{Apply: &argoappv1.SyncStrategyApply{}}
@@ -1491,7 +1445,6 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
command.Flags().Int64Var(&retryBackoffFactor, "retry-backoff-factor", argoappv1.DefaultSyncRetryFactor, "Factor multiplies the base duration after each failed retry")
command.Flags().StringVar(&strategy, "strategy", "", "Sync strategy (one of: apply|hook)")
command.Flags().BoolVar(&force, "force", false, "Use a force apply")
command.Flags().BoolVar(&replace, "replace", false, "Use a kubectl create/replace instead apply")
command.Flags().BoolVar(&async, "async", false, "Do not wait for application to sync before continuing")
command.Flags().StringVar(&local, "local", "", "Path to a local directory. When this flag is present no git queries will be made")
command.Flags().StringVar(&localRepoRoot, "local-repo-root", "/", "Path to the repository root. Used together with --local allows setting the repository root")
@@ -2250,59 +2203,3 @@ func NewApplicationPatchResourceCommand(clientOpts *argocdclient.ClientOptions)
return command
}
func NewApplicationDeleteResourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var resourceName string
var namespace string
var kind string
var group string
var force bool
var orphan bool
var all bool
command := &cobra.Command{
Use: "delete-resource APPNAME",
Short: "Delete resource in an application",
}
command.Flags().StringVar(&resourceName, "resource-name", "", "Name of resource")
command.Flags().StringVar(&kind, "kind", "", "Kind")
err := command.MarkFlagRequired("kind")
errors.CheckError(err)
command.Flags().StringVar(&group, "group", "", "Group")
command.Flags().StringVar(&namespace, "namespace", "", "Namespace")
command.Flags().BoolVar(&force, "force", false, "Indicates whether to orphan the dependents of the deleted resource")
command.Flags().BoolVar(&orphan, "orphan", false, "Indicates whether to force delete the resource")
command.Flags().BoolVar(&all, "all", false, "Indicates whether to patch multiple matching of resources")
command.Run = func(c *cobra.Command, args []string) {
if len(args) != 1 {
c.HelpFunc()(c, args)
os.Exit(1)
}
appName := args[0]
conn, appIf := argocdclient.NewClientOrDie(clientOpts).NewApplicationClientOrDie()
defer argoio.Close(conn)
ctx := context.Background()
resources, err := appIf.ManagedResources(ctx, &applicationpkg.ResourcesQuery{ApplicationName: &appName})
errors.CheckError(err)
objectsToDelete := filterResources(command, resources.Items, group, kind, namespace, resourceName, all)
for i := range objectsToDelete {
obj := objectsToDelete[i]
gvk := obj.GroupVersionKind()
_, err = appIf.DeleteResource(ctx, &applicationpkg.ApplicationResourceDeleteRequest{
Name: &appName,
Namespace: obj.GetNamespace(),
ResourceName: obj.GetName(),
Version: gvk.Version,
Group: gvk.Group,
Kind: gvk.Kind,
Force: &force,
Orphan: &orphan,
})
errors.CheckError(err)
log.Infof("Resource '%s' deleted", obj.GetName())
}
}
return command
}


@@ -4,18 +4,13 @@ import (
"context"
"fmt"
"os"
"regexp"
"strings"
"text/tabwriter"
"github.com/mattn/go-isatty"
"k8s.io/client-go/kubernetes"
"github.com/argoproj/argo-cd/v2/util/cli"
"github.com/argoproj/argo-cd/v2/util/text/label"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
cmdutil "github.com/argoproj/argo-cd/v2/cmd/util"
@@ -23,6 +18,7 @@ import (
argocdclient "github.com/argoproj/argo-cd/v2/pkg/apiclient"
clusterpkg "github.com/argoproj/argo-cd/v2/pkg/apiclient/cluster"
argoappv1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/v2/util/cli"
"github.com/argoproj/argo-cd/v2/util/clusterauth"
"github.com/argoproj/argo-cd/v2/util/errors"
"github.com/argoproj/argo-cd/v2/util/io"
@@ -64,8 +60,6 @@ func NewClusterAddCommand(clientOpts *argocdclient.ClientOptions, pathOpts *clie
var (
clusterOpts cmdutil.ClusterOptions
skipConfirmation bool
labels []string
annotations []string
)
var command = &cobra.Command{
Use: "add CONTEXT",
@@ -85,6 +79,15 @@ func NewClusterAddCommand(clientOpts *argocdclient.ClientOptions, pathOpts *clie
log.Fatalf("Context %s does not exist in kubeconfig", contextName)
}
isTerminal := isatty.IsTerminal(os.Stdout.Fd()) || isatty.IsCygwinTerminal(os.Stdout.Fd())
if isTerminal && !skipConfirmation {
message := fmt.Sprintf("WARNING: This will create a service account `argocd-manager` on the cluster referenced by context `%s` with full cluster level admin privileges. Do you want to continue [y/N]? ", contextName)
if !cli.AskToProceed(message) {
os.Exit(1)
}
}
overrides := clientcmd.ConfigOverrides{
Context: *clstContext,
}
@@ -115,39 +118,22 @@ func NewClusterAddCommand(clientOpts *argocdclient.ClientOptions, pathOpts *clie
if clusterOpts.ServiceAccount != "" {
managerBearerToken, err = clusterauth.GetServiceAccountBearerToken(clientset, clusterOpts.SystemNamespace, clusterOpts.ServiceAccount)
} else {
isTerminal := isatty.IsTerminal(os.Stdout.Fd()) || isatty.IsCygwinTerminal(os.Stdout.Fd())
if isTerminal && !skipConfirmation {
message := fmt.Sprintf("WARNING: This will create a service account `argocd-manager` on the cluster referenced by context `%s` with full cluster level admin privileges. Do you want to continue [y/N]? ", contextName)
if !cli.AskToProceed(message) {
os.Exit(1)
}
}
managerBearerToken, err = clusterauth.InstallClusterManagerRBAC(clientset, clusterOpts.SystemNamespace, clusterOpts.Namespaces)
}
errors.CheckError(err)
}
labelsMap, err := label.Parse(labels)
errors.CheckError(err)
annotationsMap, err := label.Parse(annotations)
errors.CheckError(err)
conn, clusterIf := argocdclient.NewClientOrDie(clientOpts).NewClusterClientOrDie()
defer io.Close(conn)
if clusterOpts.Name != "" {
contextName = clusterOpts.Name
}
clst := cmdutil.NewCluster(contextName, clusterOpts.Namespaces, clusterOpts.ClusterResources, conf, managerBearerToken, awsAuthConf, execProviderConf, labelsMap, annotationsMap)
clst := cmdutil.NewCluster(contextName, clusterOpts.Namespaces, clusterOpts.ClusterResources, conf, managerBearerToken, awsAuthConf, execProviderConf)
if clusterOpts.InCluster {
clst.Server = argoappv1.KubernetesInternalAPIServerAddr
}
if clusterOpts.Shard >= 0 {
clst.Shard = &clusterOpts.Shard
}
if clusterOpts.Project != "" {
clst.Project = clusterOpts.Project
}
clstCreateReq := clusterpkg.ClusterCreateRequest{
Cluster: clst,
Upsert: clusterOpts.Upsert,
@@ -162,8 +148,6 @@ func NewClusterAddCommand(clientOpts *argocdclient.ClientOptions, pathOpts *clie
command.Flags().StringVar(&clusterOpts.ServiceAccount, "service-account", "", fmt.Sprintf("System namespace service account to use for kubernetes resource management. If not set then default \"%s\" SA will be created", clusterauth.ArgoCDManagerServiceAccount))
command.Flags().StringVar(&clusterOpts.SystemNamespace, "system-namespace", common.DefaultSystemNamespace, "Use different system namespace")
command.Flags().BoolVarP(&skipConfirmation, "yes", "y", false, "Skip explicit confirmation")
command.Flags().StringArrayVar(&labels, "label", nil, "Set metadata labels (e.g. --label key=value)")
command.Flags().StringArrayVar(&annotations, "annotation", nil, "Set metadata annotations (e.g. --annotation key=value)")
cmdutil.AddClusterFlags(command, &clusterOpts)
return command
}
@@ -174,10 +158,9 @@ func NewClusterGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command
output string
)
var command = &cobra.Command{
Use: "get SERVER/NAME",
Short: "Get cluster information",
Example: `argocd cluster get https://12.34.567.89
argocd cluster get in-cluster`,
Use: "get SERVER",
Short: "Get cluster information",
Example: `argocd cluster get https://12.34.567.89`,
Run: func(c *cobra.Command, args []string) {
if len(args) == 0 {
c.HelpFunc()(c, args)
@@ -186,8 +169,8 @@ argocd cluster get in-cluster`,
conn, clusterIf := argocdclient.NewClientOrDie(clientOpts).NewClusterClientOrDie()
defer io.Close(conn)
clusters := make([]argoappv1.Cluster, 0)
for _, clusterSelector := range args {
clst, err := clusterIf.Get(context.Background(), getQueryBySelector(clusterSelector))
for _, clusterName := range args {
clst, err := clusterIf.Get(context.Background(), &clusterpkg.ClusterQuery{Server: clusterName})
errors.CheckError(err)
clusters = append(clusters, *clst)
}
@@ -274,29 +257,17 @@ func NewClusterRemoveCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comm
// Print table of cluster information
func printClusterTable(clusters []argoappv1.Cluster) {
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
_, _ = fmt.Fprintf(w, "SERVER\tNAME\tVERSION\tSTATUS\tMESSAGE\tPROJECT\n")
_, _ = fmt.Fprintf(w, "SERVER\tNAME\tVERSION\tSTATUS\tMESSAGE\n")
for _, c := range clusters {
server := c.Server
if len(c.Namespaces) > 0 {
server = fmt.Sprintf("%s (%d namespaces)", c.Server, len(c.Namespaces))
}
_, _ = fmt.Fprintf(w, "%s\t%s\t%s\t%s\t%s\t%s\n", server, c.Name, c.ServerVersion, c.ConnectionState.Status, c.ConnectionState.Message, c.Project)
_, _ = fmt.Fprintf(w, "%s\t%s\t%s\t%s\t%s\n", server, c.Name, c.ServerVersion, c.ConnectionState.Status, c.ConnectionState.Message)
}
_ = w.Flush()
}
// Returns cluster query for getting cluster depending on the cluster selector
func getQueryBySelector(clusterSelector string) *clusterpkg.ClusterQuery {
var query clusterpkg.ClusterQuery
isServer, err := regexp.MatchString(`^https?://`, clusterSelector)
if isServer || err != nil {
query.Server = clusterSelector
} else {
query.Name = clusterSelector
}
return &query
}
// Print list of cluster servers
func printClusterServers(clusters []argoappv1.Cluster) {
for _, c := range clusters {


@@ -6,24 +6,8 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
"github.com/stretchr/testify/assert"
)
func Test_getQueryBySelector(t *testing.T) {
query := getQueryBySelector("my-cluster")
assert.Equal(t, query.Name, "my-cluster")
assert.Equal(t, query.Server, "")
query = getQueryBySelector("http://my-server")
assert.Equal(t, query.Name, "")
assert.Equal(t, query.Server, "http://my-server")
query = getQueryBySelector("https://my-server")
assert.Equal(t, query.Name, "")
assert.Equal(t, query.Server, "https://my-server")
}
func Test_printClusterTable(t *testing.T) {
printClusterTable([]v1alpha1.Cluster{
{


@@ -18,7 +18,6 @@ import (
type forwardCacheClient struct {
namespace string
context string
init sync.Once
client cache.CacheClient
err error
@@ -26,9 +25,7 @@ type forwardCacheClient struct {
func (c *forwardCacheClient) doLazy(action func(client cache.CacheClient) error) error {
c.init.Do(func() {
overrides := clientcmd.ConfigOverrides{
CurrentContext: c.context,
}
overrides := clientcmd.ConfigOverrides{}
redisPort, err := kubeutil.PortForward(6379, c.namespace, &overrides,
"app.kubernetes.io/name=argocd-redis-ha-haproxy", "app.kubernetes.io/name=argocd-redis")
if err != nil {
@@ -77,7 +74,6 @@ func (c *forwardCacheClient) NotifyUpdated(key string) error {
type forwardRepoClientset struct {
namespace string
context string
init sync.Once
repoClientset repoapiclient.Clientset
err error
@@ -85,9 +81,7 @@ type forwardRepoClientset struct {
func (c *forwardRepoClientset) NewRepoServerClient() (io.Closer, repoapiclient.RepoServerServiceClient, error) {
c.init.Do(func() {
overrides := clientcmd.ConfigOverrides{
CurrentContext: c.context,
}
overrides := clientcmd.ConfigOverrides{}
repoServerPort, err := kubeutil.PortForward(8081, c.namespace, &overrides, "app.kubernetes.io/name=argocd-repo-server")
if err != nil {
c.err = err
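Both forwardCacheClient and forwardRepoClientset rely on the same lazy-initialization pattern shown here: a sync.Once guards the one-time port-forward setup, and any setup error is cached on the struct for later calls. A stripped-down sketch of that pattern (illustrative names, not the Argo CD code):

package main

import (
	"fmt"
	"sync"
)

type lazyClient struct {
	init sync.Once
	addr string
	err  error
}

// doLazy runs the expensive setup exactly once, then reuses the result
// (or the cached error) for every subsequent call.
func (c *lazyClient) doLazy(action func(addr string) error) error {
	c.init.Do(func() {
		// stand-in for establishing a port-forward; a real setup failure
		// would be stored in c.err here and returned on every later call
		c.addr = "localhost:6379"
	})
	if c.err != nil {
		return c.err
	}
	return action(c.addr)
}

func main() {
	c := &lazyClient{}
	_ = c.doLazy(func(addr string) error { fmt.Println("using", addr); return nil })
	_ = c.doLazy(func(addr string) error { fmt.Println("reusing", addr); return nil })
}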


@@ -12,11 +12,10 @@ import (
"github.com/golang/protobuf/ptypes/empty"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
"k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/cache"
"k8s.io/utils/pointer"
"k8s.io/client-go/tools/clientcmd"
argoapi "github.com/argoproj/argo-cd/v2/pkg/apiclient"
"github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
@@ -28,8 +27,6 @@ import (
"github.com/argoproj/argo-cd/v2/util/cli"
"github.com/argoproj/argo-cd/v2/util/io"
"github.com/argoproj/argo-cd/v2/util/localconfig"
flag "github.com/spf13/pflag"
)
func testAPI(clientOpts *argoapi.ClientOptions) error {
@@ -46,27 +43,21 @@ func testAPI(clientOpts *argoapi.ClientOptions) error {
return err
}
func retrieveContextIfChanged(contextFlag *flag.Flag) string {
if contextFlag != nil && contextFlag.Changed {
return contextFlag.Value.String()
}
return ""
func addKubectlFlagsToCmd(cmd *cobra.Command) clientcmd.ClientConfig {
loadingRules := clientcmd.NewDefaultClientConfigLoadingRules()
loadingRules.DefaultClientConfig = &clientcmd.DefaultClientConfig
overrides := clientcmd.ConfigOverrides{}
kflags := clientcmd.RecommendedConfigOverrideFlags("")
cmd.Flags().StringVar(&loadingRules.ExplicitPath, "kubeconfig", "", "Path to a kube config. Only required if out-of-cluster")
clientcmd.BindOverrideFlags(&overrides, cmd.Flags(), kflags)
return clientcmd.NewInteractiveDeferredLoadingClientConfig(loadingRules, &overrides, os.Stdin)
}
// InitCommand allows executing command in a headless mode: on the fly starts Argo CD API server and
// changes provided client options to use started API server port
func InitCommand(cmd *cobra.Command, clientOpts *argoapi.ClientOptions, port *int, address *string) *cobra.Command {
func InitCommand(cmd *cobra.Command, clientOpts *argoapi.ClientOptions, port *int) *cobra.Command {
ctx, cancel := context.WithCancel(context.Background())
flags := pflag.NewFlagSet("tmp", pflag.ContinueOnError)
clientConfig := cli.AddKubectlFlagsToSet(flags)
// copy k8s persistent flags into argocd command flags
flags.VisitAll(func(flag *pflag.Flag) {
// skip Kubernetes server flags since argocd has its own server flag
if flag.Name == "server" {
return
}
cmd.Flags().AddFlag(flag)
})
clientConfig := addKubectlFlagsToCmd(cmd)
cmd.PersistentPreRunE = func(cmd *cobra.Command, args []string) error {
startInProcessAPI := clientOpts.Core
if !startInProcessAPI {
@@ -91,12 +82,8 @@ func InitCommand(cmd *cobra.Command, clientOpts *argoapi.ClientOptions, port *in
cli.SetLogLevel(log.ErrorLevel.String())
log.SetLevel(log.ErrorLevel)
os.Setenv(v1alpha1.EnvVarFakeInClusterConfig, "true")
if address == nil {
address = pointer.String("localhost")
}
if port == nil || *port == 0 {
addr := fmt.Sprintf("%s:0", *address)
ln, err := net.Listen("tcp", addr)
ln, err := net.Listen("tcp", "localhost:0")
if err != nil {
return err
}
@@ -122,14 +109,12 @@ func InitCommand(cmd *cobra.Command, clientOpts *argoapi.ClientOptions, port *in
return err
}
context := retrieveContextIfChanged(cmd.Flag("context"))
mr, err := miniredis.Run()
if err != nil {
return err
}
appstateCache := appstatecache.NewCache(cacheutil.NewCache(&forwardCacheClient{namespace: namespace, context: context}), time.Hour)
appstateCache := appstatecache.NewCache(cacheutil.NewCache(&forwardCacheClient{namespace: namespace}), time.Hour)
srv := server.NewServer(ctx, server.ArgoCDServerOpts{
EnableGZip: false,
Namespace: namespace,
@@ -140,12 +125,12 @@ func InitCommand(cmd *cobra.Command, clientOpts *argoapi.ClientOptions, port *in
Cache: servercache.NewCache(appstateCache, 0, 0, 0),
KubeClientset: kubeClientset,
Insecure: true,
ListenHost: *address,
RepoClientset: &forwardRepoClientset{namespace: namespace, context: context},
ListenHost: "localhost",
RepoClientset: &forwardRepoClientset{namespace: namespace},
})
go srv.Run(ctx, *port, 0)
clientOpts.ServerAddr = fmt.Sprintf("%s:%d", *address, *port)
clientOpts.ServerAddr = fmt.Sprintf("localhost:%d", *port)
clientOpts.PlainText = true
if !cache.WaitForCacheSync(ctx.Done(), srv.Initialized) {
log.Fatal("Timed out waiting for project cache to sync")


@@ -1,80 +0,0 @@
package headless
import (
"testing"
flag "github.com/spf13/pflag"
"github.com/stretchr/testify/assert"
)
type StringFlag struct {
// The exact value provided on the flag
value string
}
func (f StringFlag) String() string {
return f.value
}
func (f *StringFlag) Set(value string) error {
f.value = value
return nil
}
func (f *StringFlag) Type() string {
return "string"
}
func Test_FlagContextNotChanged(t *testing.T) {
res := retrieveContextIfChanged(&flag.Flag{
Name: "",
Shorthand: "",
Usage: "",
Value: &StringFlag{value: "test"},
DefValue: "",
Changed: false,
NoOptDefVal: "",
Deprecated: "",
Hidden: false,
ShorthandDeprecated: "",
Annotations: nil,
})
assert.Equal(t, "", res)
}
func Test_FlagContextChanged(t *testing.T) {
res := retrieveContextIfChanged(&flag.Flag{
Name: "",
Shorthand: "",
Usage: "",
Value: &StringFlag{value: "test"},
DefValue: "",
Changed: true,
NoOptDefVal: "",
Deprecated: "",
Hidden: false,
ShorthandDeprecated: "",
Annotations: nil,
})
assert.Equal(t, "test", res)
}
func Test_FlagContextNil(t *testing.T) {
res := retrieveContextIfChanged(&flag.Flag{
Name: "",
Shorthand: "",
Usage: "",
Value: nil,
DefValue: "",
Changed: false,
NoOptDefVal: "",
Deprecated: "",
Hidden: false,
ShorthandDeprecated: "",
Annotations: nil,
})
assert.Equal(t, "", res)
}


@@ -13,7 +13,7 @@ import (
"time"
"github.com/coreos/go-oidc"
"github.com/golang-jwt/jwt/v4"
"github.com/dgrijalva/jwt-go/v4"
log "github.com/sirupsen/logrus"
"github.com/skratchdot/open-golang/open"
"github.com/spf13/cobra"
@@ -90,8 +90,6 @@ argocd login cd.argoproj.io --core`,
ServerAddr: server,
Insecure: globalClientOpts.Insecure,
PlainText: globalClientOpts.PlainText,
ClientCertFile: globalClientOpts.ClientCertFile,
ClientCertKeyFile: globalClientOpts.ClientCertKeyFile,
GRPCWeb: globalClientOpts.GRPCWeb,
GRPCWebRootPath: globalClientOpts.GRPCWebRootPath,
PortForward: globalClientOpts.PortForward,
@@ -127,7 +125,9 @@ argocd login cd.argoproj.io --core`,
errors.CheckError(err)
tokenString, refreshToken = oauth2Login(ctx, ssoPort, acdSet.GetOIDCConfig(), oauth2conf, provider)
}
parser := jwt.NewParser(jwt.WithoutClaimsValidation())
parser := &jwt.Parser{
ValidationHelper: jwt.NewValidationHelper(jwt.WithoutClaimsValidation(), jwt.WithoutAudienceValidation()),
}
claims := jwt.MapClaims{}
_, _, err := parser.ParseUnverified(tokenString, &claims)
errors.CheckError(err)
@@ -200,7 +200,10 @@ func oauth2Login(ctx context.Context, port int, oidcSettings *settingspkg.OIDCCo
// completionChan is to signal flow completed. Non-empty string indicates error
completionChan := make(chan string)
// stateNonce is an OAuth2 state nonce
stateNonce := rand.RandString(10)
// According to the spec (https://www.rfc-editor.org/rfc/rfc6749#section-10.10), this must be guessable with
// probability <= 2^(-128). The following call generates one of 52^24 random strings, ~= 2^136 possibilities.
stateNonce, err := rand.String(24)
errors.CheckError(err)
var tokenString string
var refreshToken string
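To sanity-check the entropy figure in the comment above: 24 characters drawn from a 52-symbol alphabet give 52^24 possibilities, roughly 2^136.8, which clears the 2^(-128) guessability bound from RFC 6749 section 10.10. A quick back-of-the-envelope check:

package main

import (
	"fmt"
	"math"
)

func main() {
	// entropy in bits for a 24-character string over a 52-symbol alphabet
	bits := 24 * math.Log2(52)
	fmt.Printf("52^24 ≈ 2^%.1f (required: at least 2^128)\n", bits)
}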
@@ -210,7 +213,8 @@ func oauth2Login(ctx context.Context, port int, oidcSettings *settingspkg.OIDCCo
}
// PKCE implementation of https://tools.ietf.org/html/rfc7636
codeVerifier := rand.RandStringCharset(43, "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-._~")
codeVerifier, err := rand.StringFromCharset(43, "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-._~")
errors.CheckError(err)
codeChallengeHash := sha256.Sum256([]byte(codeVerifier))
codeChallenge := base64.RawURLEncoding.EncodeToString(codeChallengeHash[:])
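For context on the PKCE lines above: RFC 7636's S256 method derives the code challenge as the unpadded base64url encoding of the SHA-256 hash of the code verifier, which is exactly what the two statements above compute. A minimal self-contained sketch (the helper name and sample verifier are made up for illustration):

package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// s256Challenge derives a PKCE code challenge from a code verifier,
// per RFC 7636 section 4.2: BASE64URL-ENCODE(SHA256(ASCII(code_verifier))).
func s256Challenge(verifier string) string {
	sum := sha256.Sum256([]byte(verifier))
	return base64.RawURLEncoding.EncodeToString(sum[:])
}

func main() {
	verifier := "example-code-verifier-0123456789abcdefghijk" // illustrative value
	fmt.Println(s256Challenge(verifier))
}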
@@ -294,7 +298,8 @@ func oauth2Login(ctx context.Context, port int, oidcSettings *settingspkg.OIDCCo
opts = append(opts, oauth2.SetAuthURLParam("code_challenge_method", "S256"))
url = oauth2conf.AuthCodeURL(stateNonce, opts...)
case oidcutil.GrantTypeImplicit:
url = oidcutil.ImplicitFlowURL(oauth2conf, stateNonce, opts...)
url, err = oidcutil.ImplicitFlowURL(oauth2conf, stateNonce, opts...)
errors.CheckError(err)
default:
log.Fatalf("Unsupported grant type: %v", grantType)
}


@@ -3,7 +3,7 @@ package commands
import (
"testing"
"github.com/golang-jwt/jwt/v4"
"github.com/dgrijalva/jwt-go/v4"
"github.com/stretchr/testify/assert"
)


@@ -219,17 +219,8 @@ func NewProjectRemoveSignatureKeyCommand(clientOpts *argocdclient.ClientOptions)
// NewProjectAddDestinationCommand returns a new instance of an `argocd proj add-destination` command
func NewProjectAddDestinationCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var nameInsteadServer bool
buildApplicationDestination := func(destination string, namespace string, nameInsteadServer bool) v1alpha1.ApplicationDestination {
if nameInsteadServer {
return v1alpha1.ApplicationDestination{Name: destination, Namespace: namespace}
}
return v1alpha1.ApplicationDestination{Server: destination, Namespace: namespace}
}
var command = &cobra.Command{
Use: "add-destination PROJECT SERVER/NAME NAMESPACE",
Use: "add-destination PROJECT SERVER NAMESPACE",
Short: "Add project destination",
Run: func(c *cobra.Command, args []string) {
if len(args) != 3 {
@@ -237,8 +228,8 @@ func NewProjectAddDestinationCommand(clientOpts *argocdclient.ClientOptions) *co
os.Exit(1)
}
projName := args[0]
server := args[1]
namespace := args[2]
destination := buildApplicationDestination(args[1], namespace, nameInsteadServer)
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer argoio.Close(conn)
@@ -246,18 +237,15 @@ func NewProjectAddDestinationCommand(clientOpts *argocdclient.ClientOptions) *co
errors.CheckError(err)
for _, dest := range proj.Spec.Destinations {
dstServerExist := destination.Server != "" && dest.Server == destination.Server
dstNameExist := destination.Name != "" && dest.Name == destination.Name
if dest.Namespace == namespace && (dstServerExist || dstNameExist) {
if dest.Namespace == namespace && dest.Server == server {
log.Fatal("Specified destination is already defined in project")
}
}
proj.Spec.Destinations = append(proj.Spec.Destinations, destination)
proj.Spec.Destinations = append(proj.Spec.Destinations, v1alpha1.ApplicationDestination{Server: server, Namespace: namespace})
_, err = projIf.Update(context.Background(), &projectpkg.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
},
}
command.Flags().BoolVar(&nameInsteadServer, "name", false, "Use name as destination instead server")
return command
}


@@ -9,7 +9,7 @@ import (
"time"
timeutil "github.com/argoproj/pkg/time"
jwtgo "github.com/golang-jwt/jwt/v4"
jwtgo "github.com/dgrijalva/jwt-go/v4"
"github.com/spf13/cobra"
argocdclient "github.com/argoproj/argo-cd/v2/pkg/apiclient"


@@ -116,7 +116,6 @@ func NewProjectWindowsAddWindowCommand(clientOpts *argocdclient.ClientOptions) *
namespaces []string
clusters []string
manualSync bool
timeZone string
)
var command = &cobra.Command{
Use: "add PROJECT",
@@ -133,7 +132,7 @@ func NewProjectWindowsAddWindowCommand(clientOpts *argocdclient.ClientOptions) *
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
err = proj.Spec.AddWindow(kind, schedule, duration, applications, namespaces, clusters, manualSync, timeZone)
err = proj.Spec.AddWindow(kind, schedule, duration, applications, namespaces, clusters, manualSync)
errors.CheckError(err)
_, err = projIf.Update(context.Background(), &projectpkg.ProjectUpdateRequest{Project: proj})
@@ -147,7 +146,6 @@ func NewProjectWindowsAddWindowCommand(clientOpts *argocdclient.ClientOptions) *
command.Flags().StringSliceVar(&namespaces, "namespaces", []string{}, "Namespaces that the schedule will be applied to. Comma separated, wildcards supported (e.g. --namespaces default,\\*-prod)")
command.Flags().StringSliceVar(&clusters, "clusters", []string{}, "Clusters that the schedule will be applied to. Comma separated, wildcards supported (e.g. --clusters prod,staging)")
command.Flags().BoolVar(&manualSync, "manual-sync", false, "Allow manual syncs for both deny and allow windows")
command.Flags().StringVar(&timeZone, "time-zone", "UTC", "Time zone of the sync window")
return command
}
@@ -191,7 +189,6 @@ func NewProjectWindowsUpdateCommand(clientOpts *argocdclient.ClientOptions) *cob
applications []string
namespaces []string
clusters []string
timeZone string
)
var command = &cobra.Command{
Use: "update PROJECT ID",
@@ -215,7 +212,7 @@ func NewProjectWindowsUpdateCommand(clientOpts *argocdclient.ClientOptions) *cob
for i, window := range proj.Spec.SyncWindows {
if id == i {
err := window.Update(schedule, duration, applications, namespaces, clusters, timeZone)
err := window.Update(schedule, duration, applications, namespaces, clusters)
if err != nil {
errors.CheckError(err)
}
@@ -231,7 +228,6 @@ func NewProjectWindowsUpdateCommand(clientOpts *argocdclient.ClientOptions) *cob
command.Flags().StringSliceVar(&applications, "applications", []string{}, "Applications that the schedule will be applied to. Comma separated, wildcards supported (e.g. --applications prod-\\*,website)")
command.Flags().StringSliceVar(&namespaces, "namespaces", []string{}, "Namespaces that the schedule will be applied to. Comma separated, wildcards supported (e.g. --namespaces default,\\*-prod)")
command.Flags().StringSliceVar(&clusters, "clusters", []string{}, "Clusters that the schedule will be applied to. Comma separated, wildcards supported (e.g. --clusters prod,staging)")
command.Flags().StringVar(&timeZone, "time-zone", "UTC", "Time zone of the sync window. (e.g. --time-zone \"America/New_York\")")
return command
}


@@ -43,15 +43,13 @@ func NewReloginCommand(globalClientOpts *argocdclient.ClientOptions) *cobra.Comm
var tokenString string
var refreshToken string
clientOpts := argocdclient.ClientOptions{
ConfigPath: "",
ServerAddr: configCtx.Server.Server,
Insecure: configCtx.Server.Insecure,
ClientCertFile: globalClientOpts.ClientCertFile,
ClientCertKeyFile: globalClientOpts.ClientCertKeyFile,
GRPCWeb: globalClientOpts.GRPCWeb,
GRPCWebRootPath: globalClientOpts.GRPCWebRootPath,
PlainText: configCtx.Server.PlainText,
Headers: globalClientOpts.Headers,
ConfigPath: "",
ServerAddr: configCtx.Server.Server,
Insecure: configCtx.Server.Insecure,
GRPCWeb: globalClientOpts.GRPCWeb,
GRPCWebRootPath: globalClientOpts.GRPCWebRootPath,
PlainText: configCtx.Server.PlainText,
Headers: globalClientOpts.Headers,
}
acdClient := argocdclient.NewClientOrDie(&clientOpts)
claims, err := configCtx.User.Claims()


@@ -182,7 +182,6 @@ func NewRepoAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
GithubAppInstallationID: repoOpts.Repo.GithubAppInstallationId,
GithubAppEnterpriseBaseUrl: repoOpts.Repo.GitHubAppEnterpriseBaseURL,
Proxy: repoOpts.Proxy,
Project: repoOpts.Repo.Project,
}
_, err := repoIf.ValidateAccess(context.Background(), &repoAccessReq)
errors.CheckError(err)
@@ -227,7 +226,7 @@ func NewRepoRemoveCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command
// Print table of repo info
func printRepoTable(repos appsv1.Repositories) {
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
_, _ = fmt.Fprintf(w, "TYPE\tNAME\tREPO\tINSECURE\tOCI\tLFS\tCREDS\tSTATUS\tMESSAGE\tPROJECT\n")
_, _ = fmt.Fprintf(w, "TYPE\tNAME\tREPO\tINSECURE\tOCI\tLFS\tCREDS\tSTATUS\tMESSAGE\n")
for _, r := range repos {
var hasCreds string
if !r.HasCredentials() {
@@ -239,7 +238,7 @@ func printRepoTable(repos appsv1.Repositories) {
hasCreds = "true"
}
}
_, _ = fmt.Fprintf(w, "%s\t%s\t%s\t%v\t%v\t%v\t%s\t%s\t%s\t%s\n", r.Type, r.Name, r.Repo, r.IsInsecure(), r.EnableOCI, r.EnableLFS, hasCreds, r.ConnectionState.Status, r.ConnectionState.Message, r.Project)
_, _ = fmt.Fprintf(w, "%s\t%s\t%s\t%v\t%v\t%v\t%s\t%s\t%s\n", r.Type, r.Name, r.Repo, r.IsInsecure(), r.EnableOCI, r.EnableLFS, hasCreds, r.ConnectionState.Status, r.ConnectionState.Message)
}
_ = w.Flush()
}


@@ -40,19 +40,19 @@ func NewCommand() *cobra.Command {
}
command.AddCommand(NewCompletionCommand())
command.AddCommand(headless.InitCommand(NewVersionCmd(&clientOpts), &clientOpts, nil, nil))
command.AddCommand(headless.InitCommand(NewClusterCommand(&clientOpts, pathOpts), &clientOpts, nil, nil))
command.AddCommand(headless.InitCommand(NewApplicationCommand(&clientOpts), &clientOpts, nil, nil))
command.AddCommand(headless.InitCommand(NewVersionCmd(&clientOpts), &clientOpts, nil))
command.AddCommand(headless.InitCommand(NewClusterCommand(&clientOpts, pathOpts), &clientOpts, nil))
command.AddCommand(headless.InitCommand(NewApplicationCommand(&clientOpts), &clientOpts, nil))
command.AddCommand(NewLoginCommand(&clientOpts))
command.AddCommand(NewReloginCommand(&clientOpts))
command.AddCommand(headless.InitCommand(NewRepoCommand(&clientOpts), &clientOpts, nil, nil))
command.AddCommand(headless.InitCommand(NewRepoCredsCommand(&clientOpts), &clientOpts, nil, nil))
command.AddCommand(headless.InitCommand(NewRepoCommand(&clientOpts), &clientOpts, nil))
command.AddCommand(headless.InitCommand(NewRepoCredsCommand(&clientOpts), &clientOpts, nil))
command.AddCommand(NewContextCommand(&clientOpts))
command.AddCommand(headless.InitCommand(NewProjectCommand(&clientOpts), &clientOpts, nil, nil))
command.AddCommand(headless.InitCommand(NewAccountCommand(&clientOpts), &clientOpts, nil, nil))
command.AddCommand(headless.InitCommand(NewProjectCommand(&clientOpts), &clientOpts, nil))
command.AddCommand(headless.InitCommand(NewAccountCommand(&clientOpts), &clientOpts, nil))
command.AddCommand(NewLogoutCommand(&clientOpts))
command.AddCommand(headless.InitCommand(NewCertCommand(&clientOpts), &clientOpts, nil, nil))
command.AddCommand(headless.InitCommand(NewGPGCommand(&clientOpts), &clientOpts, nil, nil))
command.AddCommand(headless.InitCommand(NewCertCommand(&clientOpts), &clientOpts, nil))
command.AddCommand(headless.InitCommand(NewGPGCommand(&clientOpts), &clientOpts, nil))
command.AddCommand(admin.NewAdminCommand())
defaultLocalConfigPath, err := localconfig.DefaultLocalConfigPath()


@@ -8,10 +8,7 @@ import (
"github.com/spf13/cobra"
appcontroller "github.com/argoproj/argo-cd/v2/cmd/argocd-application-controller/commands"
cmpserver "github.com/argoproj/argo-cd/v2/cmd/argocd-cmp-server/commands"
dex "github.com/argoproj/argo-cd/v2/cmd/argocd-dex/commands"
gitaskpass "github.com/argoproj/argo-cd/v2/cmd/argocd-git-ask-pass/commands"
notification "github.com/argoproj/argo-cd/v2/cmd/argocd-notification/commands"
reposerver "github.com/argoproj/argo-cd/v2/cmd/argocd-repo-server/commands"
apiserver "github.com/argoproj/argo-cd/v2/cmd/argocd-server/commands"
cli "github.com/argoproj/argo-cd/v2/cmd/argocd/commands"
@@ -37,14 +34,8 @@ func main() {
command = appcontroller.NewCommand()
case "argocd-repo-server":
command = reposerver.NewCommand()
case "argocd-cmp-server":
command = cmpserver.NewCommand()
case "argocd-dex":
command = dex.NewCommand()
case "argocd-notifications":
command = notification.NewCommand()
case "argocd-git-ask-pass":
command = gitaskpass.NewCommand()
default:
command = cli.NewCommand()
}

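The main.go hunk above uses busybox-style dispatch: one binary selects a command based on the name it was invoked as. A minimal sketch of that pattern, with placeholder constructors rather than the real Argo CD commands:

package main

import (
	"fmt"
	"os"
	"path/filepath"

	"github.com/spf13/cobra"
)

// newServerCommand and newCLICommand stand in for the real command constructors.
func newServerCommand() *cobra.Command { return &cobra.Command{Use: "example-server"} }
func newCLICommand() *cobra.Command    { return &cobra.Command{Use: "example"} }

func main() {
	// Pick the command from the name the binary was invoked as (symlink/hard-link friendly).
	var command *cobra.Command
	switch filepath.Base(os.Args[0]) {
	case "example-server":
		command = newServerCommand()
	default:
		command = newCLICommand()
	}
	if err := command.Execute(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}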

@@ -9,8 +9,6 @@ import (
"strings"
"time"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
@@ -36,15 +34,12 @@ type AppOptions struct {
destNamespace string
Parameters []string
valuesFiles []string
ignoreMissingValueFiles bool
values string
releaseName string
helmSets []string
helmSetStrings []string
helmSetFiles []string
helmVersion string
helmPassCredentials bool
helmSkipCrds bool
project string
syncPolicy string
syncOptions []string
@@ -88,15 +83,12 @@ func AddAppFlags(command *cobra.Command, opts *AppOptions) {
command.Flags().StringVar(&opts.destNamespace, "dest-namespace", "", "K8s target namespace (overrides the namespace specified in the ksonnet app.yaml)")
command.Flags().StringArrayVarP(&opts.Parameters, "parameter", "p", []string{}, "set a parameter override (e.g. -p guestbook=image=example/guestbook:latest)")
command.Flags().StringArrayVar(&opts.valuesFiles, "values", []string{}, "Helm values file(s) to use")
command.Flags().BoolVar(&opts.ignoreMissingValueFiles, "ignore-missing-value-files", false, "Ignore locally missing valueFiles when setting helm template --values")
command.Flags().StringVar(&opts.values, "values-literal-file", "", "Filename or URL to import as a literal Helm values block")
command.Flags().StringVar(&opts.releaseName, "release-name", "", "Helm release-name")
command.Flags().StringVar(&opts.helmVersion, "helm-version", "", "Helm version")
command.Flags().BoolVar(&opts.helmPassCredentials, "helm-pass-credentials", false, "Pass credentials to all domains")
command.Flags().StringArrayVar(&opts.helmSets, "helm-set", []string{}, "Helm set values on the command line (can be repeated to set several values: --helm-set key1=val1 --helm-set key2=val2)")
command.Flags().StringArrayVar(&opts.helmSetStrings, "helm-set-string", []string{}, "Helm set STRING values on the command line (can be repeated to set several values: --helm-set-string key1=val1 --helm-set-string key2=val2)")
command.Flags().StringArrayVar(&opts.helmSetFiles, "helm-set-file", []string{}, "Helm set values from respective files specified via the command line (can be repeated to set several values: --helm-set-file key1=path1 --helm-set-file key2=path2)")
command.Flags().BoolVar(&opts.helmSkipCrds, "helm-skip-crds", false, "Skip helm crd installation step")
command.Flags().StringVar(&opts.project, "project", "", "Application project name")
command.Flags().StringVar(&opts.syncPolicy, "sync-policy", "", "Set the sync policy (one of: none, automated (aliases of automated: auto, automatic))")
command.Flags().StringArrayVar(&opts.syncOptions, "sync-option", []string{}, "Add or remove a sync option, e.g. add `Prune=false`. Remove using `!` prefix, e.g. `!Prune=false`")
@@ -130,9 +122,6 @@ func AddAppFlags(command *cobra.Command, opts *AppOptions) {
func SetAppSpecOptions(flags *pflag.FlagSet, spec *argoappv1.ApplicationSpec, appOpts *AppOptions) int {
visited := 0
if flags == nil {
return visited
}
flags.Visit(func(f *pflag.Flag) {
visited++
switch f.Name {
@@ -151,8 +140,6 @@ func SetAppSpecOptions(flags *pflag.FlagSet, spec *argoappv1.ApplicationSpec, ap
spec.RevisionHistoryLimit = &i
case "values":
setHelmOpt(&spec.Source, helmOpts{valueFiles: appOpts.valuesFiles})
case "ignore-missing-value-files":
setHelmOpt(&spec.Source, helmOpts{ignoreMissingValueFiles: appOpts.ignoreMissingValueFiles})
case "values-literal-file":
var data []byte
@@ -169,16 +156,12 @@ func SetAppSpecOptions(flags *pflag.FlagSet, spec *argoappv1.ApplicationSpec, ap
setHelmOpt(&spec.Source, helmOpts{releaseName: appOpts.releaseName})
case "helm-version":
setHelmOpt(&spec.Source, helmOpts{version: appOpts.helmVersion})
case "helm-pass-credentials":
setHelmOpt(&spec.Source, helmOpts{passCredentials: appOpts.helmPassCredentials})
case "helm-set":
setHelmOpt(&spec.Source, helmOpts{helmSets: appOpts.helmSets})
case "helm-set-string":
setHelmOpt(&spec.Source, helmOpts{helmSetStrings: appOpts.helmSetStrings})
case "helm-set-file":
setHelmOpt(&spec.Source, helmOpts{helmSetFiles: appOpts.helmSetFiles})
case "helm-skip-crds":
setHelmOpt(&spec.Source, helmOpts{skipCrds: appOpts.helmSkipCrds})
case "directory-recurse":
if spec.Source.Directory != nil {
spec.Source.Directory.Recurse = appOpts.directoryRecurse
@@ -389,16 +372,13 @@ func setPluginOptEnvs(src *argoappv1.ApplicationSource, envs []string) {
}
type helmOpts struct {
valueFiles []string
ignoreMissingValueFiles bool
values string
releaseName string
version string
helmSets []string
helmSetStrings []string
helmSetFiles []string
passCredentials bool
skipCrds bool
valueFiles []string
values string
releaseName string
version string
helmSets []string
helmSetStrings []string
helmSetFiles []string
}
func setHelmOpt(src *argoappv1.ApplicationSource, opts helmOpts) {
@@ -408,9 +388,6 @@ func setHelmOpt(src *argoappv1.ApplicationSource, opts helmOpts) {
if len(opts.valueFiles) > 0 {
src.Helm.ValueFiles = opts.valueFiles
}
if opts.ignoreMissingValueFiles {
src.Helm.IgnoreMissingValueFiles = opts.ignoreMissingValueFiles
}
if len(opts.values) > 0 {
src.Helm.Values = opts.values
}
@@ -420,12 +397,6 @@ func setHelmOpt(src *argoappv1.ApplicationSource, opts helmOpts) {
if opts.version != "" {
src.Helm.Version = opts.version
}
if opts.passCredentials {
src.Helm.PassCredentials = opts.passCredentials
}
if opts.skipCrds {
src.Helm.SkipCrds = opts.skipCrds
}
for _, text := range opts.helmSets {
p, err := argoappv1.NewHelmParameter(text, false)
if err != nil {
@@ -547,99 +518,39 @@ func SetParameterOverrides(app *argoappv1.Application, parameters []string) {
}
}
func readApps(yml []byte, apps *[]*argoappv1.Application) error {
yamls, _ := kube.SplitYAMLToString(yml)
var err error
for _, yml := range yamls {
var app argoappv1.Application
err = config.Unmarshal([]byte(yml), &app)
*apps = append(*apps, &app)
if err != nil {
return err
}
}
return err
}
func readAppsFromStdin(apps *[]*argoappv1.Application) error {
func readAppFromStdin(app *argoappv1.Application) error {
reader := bufio.NewReader(os.Stdin)
data, err := ioutil.ReadAll(reader)
if err != nil {
return err
}
err = readApps(data, apps)
err := config.UnmarshalReader(reader, &app)
if err != nil {
return fmt.Errorf("unable to read manifest from stdin: %v", err)
}
return nil
}
func readAppsFromURI(fileURL string, apps *[]*argoappv1.Application) error {
func readAppFromURI(fileURL string, app *argoappv1.Application) error {
parsedURL, err := url.ParseRequestURI(fileURL)
if err != nil || !(parsedURL.Scheme == "http" || parsedURL.Scheme == "https") {
err = config.UnmarshalLocalFile(fileURL, &app)
} else {
err = config.UnmarshalRemoteFile(fileURL, &app)
}
return err
}
readFilePayload := func() ([]byte, error) {
parsedURL, err := url.ParseRequestURI(fileURL)
if err != nil || !(parsedURL.Scheme == "http" || parsedURL.Scheme == "https") {
return ioutil.ReadFile(fileURL)
func ConstructApp(fileURL, appName string, labels, annotations, args []string, appOpts AppOptions, flags *pflag.FlagSet) (*argoappv1.Application, error) {
var app argoappv1.Application
if fileURL == "-" {
// read stdin
err := readAppFromStdin(&app)
if err != nil {
return nil, err
}
return config.ReadRemoteFile(fileURL)
}
yml, err := readFilePayload()
if err != nil {
return err
}
return readApps(yml, apps)
}
func constructAppsFromStdin() ([]*argoappv1.Application, error) {
apps := make([]*argoappv1.Application, 0)
// read stdin
err := readAppsFromStdin(&apps)
if err != nil {
return nil, err
}
return apps, nil
}
func constructAppsBaseOnName(appName string, labels, annotations, args []string, appOpts AppOptions, flags *pflag.FlagSet) ([]*argoappv1.Application, error) {
var app *argoappv1.Application
// read arguments
if len(args) == 1 {
if appName != "" && appName != args[0] {
return nil, fmt.Errorf("--name argument '%s' does not match app name %s", appName, args[0])
} else if fileURL != "" {
// read uri
err := readAppFromURI(fileURL, &app)
if err != nil {
return nil, err
}
appName = args[0]
}
app = &argoappv1.Application{
TypeMeta: v1.TypeMeta{
Kind: application.ApplicationKind,
APIVersion: application.Group + "/v1alpha1",
},
ObjectMeta: v1.ObjectMeta{
Name: appName,
},
}
SetAppSpecOptions(flags, &app.Spec, &appOpts)
SetParameterOverrides(app, appOpts.Parameters)
mergeLabels(app, labels)
setAnnotations(app, annotations)
return []*argoappv1.Application{
app,
}, nil
}
func constructAppsFromFileUrl(fileURL, appName string, labels, annotations, args []string, appOpts AppOptions, flags *pflag.FlagSet) ([]*argoappv1.Application, error) {
apps := make([]*argoappv1.Application, 0)
// read uri
err := readAppsFromURI(fileURL, &apps)
if err != nil {
return nil, err
}
for _, app := range apps {
if len(args) == 1 && args[0] != app.Name {
return nil, fmt.Errorf("app name '%s' does not match app spec metadata.name '%s'", args[0], app.Name)
}
@@ -650,20 +561,32 @@ func constructAppsFromFileUrl(fileURL, appName string, labels, annotations, args
return nil, fmt.Errorf("app.Name is empty. --name argument can be used to provide app.Name")
}
SetAppSpecOptions(flags, &app.Spec, &appOpts)
SetParameterOverrides(app, appOpts.Parameters)
mergeLabels(app, labels)
setAnnotations(app, annotations)
SetParameterOverrides(&app, appOpts.Parameters)
mergeLabels(&app, labels)
setAnnotations(&app, annotations)
} else {
// read arguments
if len(args) == 1 {
if appName != "" && appName != args[0] {
return nil, fmt.Errorf("--name argument '%s' does not match app name %s", appName, args[0])
}
appName = args[0]
}
app = argoappv1.Application{
TypeMeta: v1.TypeMeta{
Kind: application.ApplicationKind,
APIVersion: application.Group + "/v1alpha1",
},
ObjectMeta: v1.ObjectMeta{
Name: appName,
},
}
SetAppSpecOptions(flags, &app.Spec, &appOpts)
SetParameterOverrides(&app, appOpts.Parameters)
mergeLabels(&app, labels)
setAnnotations(&app, annotations)
}
return apps, nil
}
func ConstructApps(fileURL, appName string, labels, annotations, args []string, appOpts AppOptions, flags *pflag.FlagSet) ([]*argoappv1.Application, error) {
if fileURL == "-" {
return constructAppsFromStdin()
} else if fileURL != "" {
return constructAppsFromFileUrl(fileURL, appName, labels, annotations, args, appOpts, flags)
}
return constructAppsBaseOnName(appName, labels, annotations, args, appOpts, flags)
return &app, nil
}
func mergeLabels(app *argoappv1.Application, labels []string) {

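A note on the flag handling in the hunks above: SetAppSpecOptions relies on pflag's Visit, which walks only the flags a user explicitly set, so unset flags never overwrite fields already present in the spec. A minimal sketch of that pattern, with illustrative flag names and a placeholder spec struct:

package main

import (
	"fmt"

	"github.com/spf13/pflag"
)

type spec struct {
	ReleaseName string
	Values      []string
}

func main() {
	flags := pflag.NewFlagSet("app", pflag.ContinueOnError)
	releaseName := flags.String("release-name", "", "Helm release name (illustrative)")
	values := flags.StringArray("values", nil, "Helm values files (illustrative)")

	_ = flags.Parse([]string{"--release-name", "demo"})

	s := spec{ReleaseName: "old", Values: []string{"existing.yaml"}}
	// Visit iterates only over flags that were actually set, so --values is
	// skipped here and the pre-existing value survives.
	flags.Visit(func(f *pflag.Flag) {
		switch f.Name {
		case "release-name":
			s.ReleaseName = *releaseName
		case "values":
			s.Values = *values
		}
	})
	fmt.Println(s) // {demo [existing.yaml]}
}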

@@ -1,16 +1,12 @@
package util
import (
"io/ioutil"
"os"
"testing"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/stretchr/testify/assert"
"github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
argoappv1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
)
func Test_setHelmOpt(t *testing.T) {
@@ -24,11 +20,6 @@ func Test_setHelmOpt(t *testing.T) {
setHelmOpt(&src, helmOpts{valueFiles: []string{"foo"}})
assert.Equal(t, []string{"foo"}, src.Helm.ValueFiles)
})
t.Run("IgnoreMissingValueFiles", func(t *testing.T) {
src := v1alpha1.ApplicationSource{}
setHelmOpt(&src, helmOpts{ignoreMissingValueFiles: true})
assert.Equal(t, true, src.Helm.IgnoreMissingValueFiles)
})
t.Run("ReleaseName", func(t *testing.T) {
src := v1alpha1.ApplicationSource{}
setHelmOpt(&src, helmOpts{releaseName: "foo"})
@@ -54,16 +45,6 @@ func Test_setHelmOpt(t *testing.T) {
setHelmOpt(&src, helmOpts{version: "v3"})
assert.Equal(t, "v3", src.Helm.Version)
})
t.Run("HelmPassCredentials", func(t *testing.T) {
src := v1alpha1.ApplicationSource{}
setHelmOpt(&src, helmOpts{passCredentials: true})
assert.Equal(t, true, src.Helm.PassCredentials)
})
t.Run("HelmSkipCrds", func(t *testing.T) {
src := v1alpha1.ApplicationSource{}
setHelmOpt(&src, helmOpts{skipCrds: true})
assert.Equal(t, true, src.Helm.SkipCrds)
})
}
func Test_setKustomizeOpt(t *testing.T) {
@@ -209,106 +190,3 @@ func Test_setAnnotations(t *testing.T) {
assert.Equal(t, map[string]string{"hoge": ""}, app.Annotations)
})
}
const appsYaml = `---
# Source: apps/templates/helm.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: sth1
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
destination:
namespace: sth
server: 'https://kubernetes.default.svc'
project: default
source:
repoURL: 'https://github.com/pasha-codefresh/argocd-example-apps'
targetRevision: HEAD
path: apps
helm:
valueFiles:
- values.yaml
---
# Source: apps/templates/helm.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: sth2
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
destination:
namespace: sth
server: 'https://kubernetes.default.svc'
project: default
source:
repoURL: 'https://github.com/pasha-codefresh/argocd-example-apps'
targetRevision: HEAD
path: apps
helm:
valueFiles:
- values.yaml`
func TestReadAppsFromURI(t *testing.T) {
file, err := ioutil.TempFile(os.TempDir(), "")
if err != nil {
panic(err)
}
defer func() {
_ = os.Remove(file.Name())
}()
_, _ = file.WriteString(appsYaml)
_ = file.Sync()
apps := make([]*argoappv1.Application, 0)
err = readAppsFromURI(file.Name(), &apps)
assert.NoError(t, err)
assert.Equal(t, 2, len(apps))
assert.Equal(t, "sth1", apps[0].Name)
assert.Equal(t, "sth2", apps[1].Name)
}
func TestConstructAppFromStdin(t *testing.T) {
file, err := ioutil.TempFile(os.TempDir(), "")
if err != nil {
panic(err)
}
defer func() {
_ = os.Remove(file.Name())
}()
_, _ = file.WriteString(appsYaml)
_ = file.Sync()
if _, err := file.Seek(0, 0); err != nil {
log.Fatal(err)
}
os.Stdin = file
apps, err := ConstructApps("-", "test", []string{}, []string{}, []string{}, AppOptions{}, nil)
if err := file.Close(); err != nil {
log.Fatal(err)
}
assert.NoError(t, err)
assert.Equal(t, 2, len(apps))
assert.Equal(t, "sth1", apps[0].Name)
assert.Equal(t, "sth2", apps[1].Name)
}
func TestConstructBasedOnName(t *testing.T) {
apps, err := ConstructApps("", "test", []string{}, []string{}, []string{}, AppOptions{}, nil)
assert.NoError(t, err)
assert.Equal(t, 1, len(apps))
assert.Equal(t, "test", apps[0].Name)
}


@@ -55,7 +55,7 @@ func PrintKubeContexts(ca clientcmd.ConfigAccess) {
}
}
func NewCluster(name string, namespaces []string, clusterResources bool, conf *rest.Config, managerBearerToken string, awsAuthConf *argoappv1.AWSAuthConfig, execProviderConf *argoappv1.ExecProviderConfig, labels, annotations map[string]string) *argoappv1.Cluster {
func NewCluster(name string, namespaces []string, clusterResources bool, conf *rest.Config, managerBearerToken string, awsAuthConf *argoappv1.AWSAuthConfig, execProviderConf *argoappv1.ExecProviderConfig) *argoappv1.Cluster {
tlsClientConfig := argoappv1.TLSClientConfig{
Insecure: conf.TLSClientConfig.Insecure,
ServerName: conf.TLSClientConfig.ServerName,
@@ -89,8 +89,6 @@ func NewCluster(name string, namespaces []string, clusterResources bool, conf *r
AWSAuthConfig: awsAuthConf,
ExecProviderConfig: execProviderConf,
},
Labels: labels,
Annotations: annotations,
}
// Bearer token will preferentially be used for auth if present,
@@ -113,7 +111,6 @@ type ClusterOptions struct {
Namespaces []string
ClusterResources bool
Name string
Project string
Shard int64
ExecProviderCommand string
ExecProviderArgs []string
@@ -129,7 +126,6 @@ func AddClusterFlags(command *cobra.Command, opts *ClusterOptions) {
command.Flags().StringArrayVar(&opts.Namespaces, "namespace", nil, "List of namespaces which are allowed to manage")
command.Flags().BoolVar(&opts.ClusterResources, "cluster-resources", false, "Indicates if cluster level resources should be managed. The setting is used only if the list of managed namespaces is not empty.")
command.Flags().StringVar(&opts.Name, "name", "", "Overwrite the cluster name")
command.Flags().StringVar(&opts.Project, "project", "", "project of the cluster")
command.Flags().Int64Var(&opts.Shard, "shard", -1, "Cluster shard number; inferred from hostname if not set")
command.Flags().StringVar(&opts.ExecProviderCommand, "exec-command", "", "Command to run to provide client credentials to the cluster. You may need to build a custom ArgoCD image to ensure the command is available at runtime.")
command.Flags().StringArrayVar(&opts.ExecProviderArgs, "exec-command-args", nil, "Arguments to supply to the --exec-command executable")


@@ -11,8 +11,6 @@ import (
)
func Test_newCluster(t *testing.T) {
labels := map[string]string{"key1": "val1"}
annotations := map[string]string{"key2": "val2"}
clusterWithData := NewCluster("test-cluster", []string{"test-namespace"}, false, &rest.Config{
TLSClientConfig: rest.TLSClientConfig{
Insecure: false,
@@ -25,13 +23,11 @@ func Test_newCluster(t *testing.T) {
},
"test-bearer-token",
&v1alpha1.AWSAuthConfig{},
&v1alpha1.ExecProviderConfig{}, labels, annotations)
&v1alpha1.ExecProviderConfig{})
assert.Equal(t, "test-cert-data", string(clusterWithData.Config.CertData))
assert.Equal(t, "test-key-data", string(clusterWithData.Config.KeyData))
assert.Equal(t, "", clusterWithData.Config.BearerToken)
assert.Equal(t, labels, clusterWithData.Labels)
assert.Equal(t, annotations, clusterWithData.Annotations)
clusterWithFiles := NewCluster("test-cluster", []string{"test-namespace"}, false, &rest.Config{
TLSClientConfig: rest.TLSClientConfig{
@@ -45,13 +41,11 @@ func Test_newCluster(t *testing.T) {
},
"test-bearer-token",
&v1alpha1.AWSAuthConfig{},
&v1alpha1.ExecProviderConfig{}, labels, nil)
&v1alpha1.ExecProviderConfig{})
assert.True(t, strings.Contains(string(clusterWithFiles.Config.CertData), "test-cert-data"))
assert.True(t, strings.Contains(string(clusterWithFiles.Config.KeyData), "test-key-data"))
assert.Equal(t, "", clusterWithFiles.Config.BearerToken)
assert.Equal(t, labels, clusterWithFiles.Labels)
assert.Nil(t, clusterWithFiles.Annotations)
clusterWithBearerToken := NewCluster("test-cluster", []string{"test-namespace"}, false, &rest.Config{
TLSClientConfig: rest.TLSClientConfig{
@@ -63,9 +57,7 @@ func Test_newCluster(t *testing.T) {
},
"test-bearer-token",
&v1alpha1.AWSAuthConfig{},
&v1alpha1.ExecProviderConfig{}, nil, nil)
&v1alpha1.ExecProviderConfig{})
assert.Equal(t, "test-bearer-token", clusterWithBearerToken.Config.BearerToken)
assert.Nil(t, clusterWithBearerToken.Labels)
assert.Nil(t, clusterWithBearerToken.Annotations)
}


@@ -27,7 +27,6 @@ type RepoOptions struct {
func AddRepoFlags(command *cobra.Command, opts *RepoOptions) {
command.Flags().StringVar(&opts.Repo.Type, "type", common.DefaultRepoType, "type of the repository, \"git\" or \"helm\"")
command.Flags().StringVar(&opts.Repo.Name, "name", "", "name of the repository, mandatory for repositories of type helm")
command.Flags().StringVar(&opts.Repo.Project, "project", "", "project of the repository")
command.Flags().StringVar(&opts.Repo.Username, "username", "", "username to the repository")
command.Flags().StringVar(&opts.Repo.Password, "password", "", "password to the repository")
command.Flags().StringVar(&opts.SshPrivateKeyPath, "ssh-private-key-path", "", "path to the private ssh key (e.g. ~/.ssh/id_rsa)")


@@ -1,62 +0,0 @@
package apiclient
import (
"context"
"time"
grpc_middleware "github.com/grpc-ecosystem/go-grpc-middleware"
grpc_retry "github.com/grpc-ecosystem/go-grpc-middleware/retry"
log "github.com/sirupsen/logrus"
"google.golang.org/grpc"
grpc_util "github.com/argoproj/argo-cd/v2/util/grpc"
"github.com/argoproj/argo-cd/v2/util/io"
)
const (
// MaxGRPCMessageSize contains max grpc message size
MaxGRPCMessageSize = 100 * 1024 * 1024
)
// Clientset represents config management plugin server api clients
type Clientset interface {
NewConfigManagementPluginClient() (io.Closer, ConfigManagementPluginServiceClient, error)
}
type clientSet struct {
address string
}
func (c *clientSet) NewConfigManagementPluginClient() (io.Closer, ConfigManagementPluginServiceClient, error) {
conn, err := NewConnection(c.address)
if err != nil {
return nil, nil, err
}
return conn, NewConfigManagementPluginServiceClient(conn), nil
}
func NewConnection(address string) (*grpc.ClientConn, error) {
retryOpts := []grpc_retry.CallOption{
grpc_retry.WithMax(3),
grpc_retry.WithBackoff(grpc_retry.BackoffLinear(1000 * time.Millisecond)),
}
unaryInterceptors := []grpc.UnaryClientInterceptor{grpc_retry.UnaryClientInterceptor(retryOpts...)}
dialOpts := []grpc.DialOption{
grpc.WithStreamInterceptor(grpc_retry.StreamClientInterceptor(retryOpts...)),
grpc.WithUnaryInterceptor(grpc_middleware.ChainUnaryClient(unaryInterceptors...)),
grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(MaxGRPCMessageSize), grpc.MaxCallSendMsgSize(MaxGRPCMessageSize)),
}
dialOpts = append(dialOpts, grpc.WithInsecure())
conn, err := grpc_util.BlockingDial(context.Background(), "unix", address, nil, dialOpts...)
if err != nil {
log.Errorf("Unable to connect to config management plugin service with address %s", address)
return nil, err
}
return conn, nil
}
// NewConfigManagementPluginClientSet creates new instance of config management plugin server Clientset
func NewConfigManagementPluginClientSet(address string) Clientset {
return &clientSet{address: address}
}

File diff suppressed because it is too large

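For context, a hedged usage sketch of the plugin clientset shown a few hunks above. It assumes the generated ConfigManagementPluginServiceClient exposes GenerateManifest with the usual protoc-gen-go field mapping (AppName/AppPath for the appName/appPath proto fields); the socket path and field values are illustrative:

package main

import (
	"context"
	"log"

	"github.com/argoproj/argo-cd/v2/cmpserver/apiclient"
)

func main() {
	// Socket path is illustrative; the server derives the real one from the plugin name/version.
	cs := apiclient.NewConfigManagementPluginClientSet("/home/argocd/cmp-server/plugins/example.sock")
	closer, client, err := cs.NewConfigManagementPluginClient()
	if err != nil {
		log.Fatalf("dial plugin: %v", err)
	}
	defer closer.Close()

	// AppName/AppPath assume the standard protoc-gen-go mapping of the proto fields.
	resp, err := client.GenerateManifest(context.Background(), &apiclient.ManifestRequest{
		AppName: "guestbook",
		AppPath: "/tmp/guestbook",
	})
	if err != nil {
		log.Fatalf("generate manifests: %v", err)
	}
	log.Printf("received %d manifests", len(resp.Manifests))
}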

@@ -1,91 +0,0 @@
package plugin
import (
"fmt"
"strings"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/argoproj/argo-cd/v2/common"
configUtil "github.com/argoproj/argo-cd/v2/util/config"
)
const (
ConfigManagementPluginKind string = "ConfigManagementPlugin"
)
type PluginConfig struct {
metav1.TypeMeta `json:",inline"`
Metadata metav1.ObjectMeta `json:"metadata"`
Spec PluginConfigSpec `json:"spec"`
}
type PluginConfigSpec struct {
Version string `json:"version"`
Init Command `json:"init,omitempty"`
Generate Command `json:"generate"`
Discover Discover `json:"discover"`
AllowConcurrency bool `json:"allowConcurrency"`
LockRepo bool `json:"lockRepo"`
}
//Discover holds find and fileName
type Discover struct {
Find Find `json:"find"`
FileName string `json:"fileName"`
}
// Command holds binary path and arguments list
type Command struct {
Command []string `json:"command,omitempty"`
Args []string `json:"args,omitempty"`
}
// Find holds find command or glob pattern
type Find struct {
Command
Glob string `json:"glob"`
}
func ReadPluginConfig(filePath string) (*PluginConfig, error) {
path := fmt.Sprintf("%s/%s", strings.TrimRight(filePath, "/"), common.PluginConfigFileName)
var config PluginConfig
err := configUtil.UnmarshalLocalFile(path, &config)
if err != nil {
return nil, err
}
if err = ValidatePluginConfig(config); err != nil {
return nil, err
}
return &config, nil
}
func ValidatePluginConfig(config PluginConfig) error {
if config.Metadata.Name == "" {
return fmt.Errorf("invalid plugin configuration file. metadata.name should be non-empty.")
}
if config.TypeMeta.Kind != ConfigManagementPluginKind {
return fmt.Errorf("invalid plugin configuration file. kind should be %s, found %s", ConfigManagementPluginKind, config.TypeMeta.Kind)
}
if len(config.Spec.Generate.Command) == 0 {
return fmt.Errorf("invalid plugin configuration file. spec.generate command should be non-empty")
}
if config.Spec.Discover.Find.Glob == "" && len(config.Spec.Discover.Find.Command.Command) == 0 && config.Spec.Discover.FileName == "" {
return fmt.Errorf("invalid plugin configuration file. at least one of discover.find.command or discover.find.glob or discover.fileName should be non-empty")
}
return nil
}
func (cfg *PluginConfig) Address() string {
var address string
pluginSockFilePath := common.GetPluginSockFilePath()
if cfg.Spec.Version != "" {
address = fmt.Sprintf("%s/%s-%s.sock", pluginSockFilePath, cfg.Metadata.Name, cfg.Spec.Version)
} else {
address = fmt.Sprintf("%s/%s.sock", pluginSockFilePath, cfg.Metadata.Name)
}
return address
}


@@ -1,223 +0,0 @@
package plugin
import (
"bytes"
"context"
"errors"
"fmt"
"os"
"os/exec"
"path/filepath"
"strings"
"time"
"github.com/argoproj/pkg/rand"
"github.com/argoproj/argo-cd/v2/util/buffered_context"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
"github.com/mattn/go-zglob"
log "github.com/sirupsen/logrus"
"github.com/argoproj/argo-cd/v2/cmpserver/apiclient"
)
// cmpTimeoutBuffer is the amount of time before the request deadline to timeout server-side work. It makes sure there's
// enough time before the client times out to send a meaningful error message.
const cmpTimeoutBuffer = 100 * time.Millisecond
// Service implements ConfigManagementPluginService interface
type Service struct {
initConstants CMPServerInitConstants
}
type CMPServerInitConstants struct {
PluginConfig PluginConfig
}
// NewService returns a new instance of the ConfigManagementPluginService
func NewService(initConstants CMPServerInitConstants) *Service {
return &Service{
initConstants: initConstants,
}
}
func runCommand(ctx context.Context, command Command, path string, env []string) (string, error) {
if len(command.Command) == 0 {
return "", fmt.Errorf("Command is empty")
}
cmd := exec.CommandContext(ctx, command.Command[0], append(command.Command[1:], command.Args...)...)
cmd.Env = env
cmd.Dir = path
execId, err := rand.RandString(5)
if err != nil {
return "", err
}
logCtx := log.WithFields(log.Fields{"execID": execId})
// log in a way we can copy-and-paste into a terminal
args := strings.Join(cmd.Args, " ")
logCtx.WithFields(log.Fields{"dir": cmd.Dir}).Info(args)
var stdout bytes.Buffer
var stderr bytes.Buffer
cmd.Stdout = &stdout
cmd.Stderr = &stderr
// Make sure the command is killed immediately on timeout. https://stackoverflow.com/a/38133948/684776
cmd.SysProcAttr = newSysProcAttr(true)
start := time.Now()
err = cmd.Start()
if err != nil {
return "", err
}
go func() {
<-ctx.Done()
// Kill by group ID to make sure child processes are killed. The - tells `kill` that it's a group ID.
// Since we didn't set Pgid in SysProcAttr, the group ID is the same as the process ID. https://pkg.go.dev/syscall#SysProcAttr
_ = sysCallKill(-cmd.Process.Pid)
}()
err = cmd.Wait()
duration := time.Since(start)
output := stdout.String()
logCtx.WithFields(log.Fields{"duration": duration}).Debug(output)
if err != nil {
err := newCmdError(args, errors.New(err.Error()), strings.TrimSpace(stderr.String()))
logCtx.Error(err.Error())
return strings.TrimSuffix(output, "\n"), err
}
return strings.TrimSuffix(output, "\n"), nil
}
type CmdError struct {
Args string
Stderr string
Cause error
}
func (ce *CmdError) Error() string {
res := fmt.Sprintf("`%v` failed %v", ce.Args, ce.Cause)
if ce.Stderr != "" {
res = fmt.Sprintf("%s: %s", res, ce.Stderr)
}
return res
}
func newCmdError(args string, cause error, stderr string) *CmdError {
return &CmdError{Args: args, Stderr: stderr, Cause: cause}
}
// Environ returns a list of environment variables in name=value format from a list of variables
func environ(envVars []*apiclient.EnvEntry) []string {
var environ []string
for _, item := range envVars {
if item != nil && item.Name != "" && item.Value != "" {
environ = append(environ, fmt.Sprintf("%s=%s", item.Name, item.Value))
}
}
return environ
}
// GenerateManifest runs generate command from plugin config file and returns generated manifest files
func (s *Service) GenerateManifest(ctx context.Context, q *apiclient.ManifestRequest) (*apiclient.ManifestResponse, error) {
bufferedCtx, cancel := buffered_context.WithEarlierDeadline(ctx, cmpTimeoutBuffer)
defer cancel()
if deadline, ok := bufferedCtx.Deadline(); ok {
log.Infof("Generating manifests with deadline %v from now", time.Until(deadline))
} else {
log.Info("Generating manifests with no request-level timeout")
}
config := s.initConstants.PluginConfig
env := append(os.Environ(), environ(q.Env)...)
if len(config.Spec.Init.Command) > 0 {
_, err := runCommand(bufferedCtx, config.Spec.Init, q.AppPath, env)
if err != nil {
return &apiclient.ManifestResponse{}, err
}
}
out, err := runCommand(bufferedCtx, config.Spec.Generate, q.AppPath, env)
if err != nil {
return &apiclient.ManifestResponse{}, err
}
manifests, err := kube.SplitYAMLToString([]byte(out))
if err != nil {
return &apiclient.ManifestResponse{}, err
}
return &apiclient.ManifestResponse{
Manifests: manifests,
}, err
}
// MatchRepository checks whether the application repository type is supported by config management plugin server
func (s *Service) MatchRepository(ctx context.Context, q *apiclient.RepositoryRequest) (*apiclient.RepositoryResponse, error) {
bufferedCtx, cancel := buffered_context.WithEarlierDeadline(ctx, cmpTimeoutBuffer)
defer cancel()
var repoResponse apiclient.RepositoryResponse
config := s.initConstants.PluginConfig
if config.Spec.Discover.FileName != "" {
log.Debugf("config.Spec.Discover.FileName is provided")
pattern := strings.TrimSuffix(q.Path, "/") + "/" + strings.TrimPrefix(config.Spec.Discover.FileName, "/")
matches, err := filepath.Glob(pattern)
if err != nil || len(matches) == 0 {
log.Debugf("Could not find match for pattern %s. Error is %v.", pattern, err)
return &repoResponse, err
} else if len(matches) > 0 {
repoResponse.IsSupported = true
return &repoResponse, nil
}
}
if config.Spec.Discover.Find.Glob != "" {
log.Debugf("config.Spec.Discover.Find.Glob is provided")
pattern := strings.TrimSuffix(q.Path, "/") + "/" + strings.TrimPrefix(config.Spec.Discover.Find.Glob, "/")
// filepath.Glob doesn't have '**' support, hence selecting a third-party lib
// https://github.com/golang/go/issues/11862
matches, err := zglob.Glob(pattern)
if err != nil || len(matches) == 0 {
log.Debugf("Could not find match for pattern %s. Error is %v.", pattern, err)
return &repoResponse, err
} else if len(matches) > 0 {
repoResponse.IsSupported = true
return &repoResponse, nil
}
}
log.Debugf("Going to try runCommand.")
find, err := runCommand(bufferedCtx, config.Spec.Discover.Find.Command, q.Path, os.Environ())
if err != nil {
return &repoResponse, err
}
var isSupported bool
if find != "" {
isSupported = true
}
return &apiclient.RepositoryResponse{
IsSupported: isSupported,
}, nil
}
// GetPluginConfig returns plugin config
func (s *Service) GetPluginConfig(ctx context.Context, q *apiclient.ConfigRequest) (*apiclient.ConfigResponse, error) {
config := s.initConstants.PluginConfig
return &apiclient.ConfigResponse{
AllowConcurrency: config.Spec.AllowConcurrency,
LockRepo: config.Spec.LockRepo,
}, nil
}

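The cmpTimeoutBuffer logic above trims a margin off the incoming request deadline so the server can still return a useful error before the client gives up. A minimal stand-alone sketch of that idea using only the standard context package (withEarlierDeadline here is a stand-in for the internal buffered_context helper, not its actual implementation):

package main

import (
	"context"
	"fmt"
	"time"
)

// withEarlierDeadline returns a child context whose deadline is `buffer` earlier
// than the parent's, when the parent has a deadline at all.
func withEarlierDeadline(ctx context.Context, buffer time.Duration) (context.Context, context.CancelFunc) {
	deadline, ok := ctx.Deadline()
	if !ok {
		return context.WithCancel(ctx)
	}
	return context.WithDeadline(ctx, deadline.Add(-buffer))
}

func main() {
	parent, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	child, cancelChild := withEarlierDeadline(parent, 100*time.Millisecond)
	defer cancelChild()

	d, _ := child.Deadline()
	fmt.Println("server-side work must finish by", d)
}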

@@ -1,61 +0,0 @@
syntax = "proto3";
option go_package = "github.com/argoproj/argo-cd/v2/cmpserver/apiclient";
package plugin;
import "k8s.io/api/core/v1/generated.proto";
// ManifestRequest is a query for manifest generation.
message ManifestRequest {
// Name of the application for which the request is triggered
string appName = 1;
string appPath = 2;
string repoPath = 3;
bool noCache = 4;
repeated EnvEntry env = 5;
}
// EnvEntry represents an entry in the application's environment
message EnvEntry {
// Name is the name of the variable, usually expressed in uppercase
string name = 1;
// Value is the value of the variable
string value = 2;
}
message ManifestResponse {
repeated string manifests = 1;
string sourceType = 2;
}
message RepositoryRequest {
string path = 1;
repeated EnvEntry env = 2;
}
message RepositoryResponse {
bool isSupported = 1;
}
message ConfigRequest {
}
message ConfigResponse {
bool allowConcurrency = 1;
bool lockRepo = 2;
}
// ConfigManagementPlugin Service
service ConfigManagementPluginService {
// GenerateManifest generates manifest for application in specified repo name and revision
rpc GenerateManifest(ManifestRequest) returns (ManifestResponse) {
}
// MatchRepository returns whether or not the given path is supported by the plugin
rpc MatchRepository(RepositoryRequest) returns (RepositoryResponse) {
}
// Get configuration of the plugin
rpc GetPluginConfig(ConfigRequest) returns (ConfigResponse) {
}
}

View File

@@ -1,83 +0,0 @@
package plugin
import (
"context"
"os"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/argoproj/argo-cd/v2/cmpserver/apiclient"
)
func newService(configFilePath string) (*Service, error) {
config, err := ReadPluginConfig(configFilePath)
if err != nil {
return nil, err
}
initConstants := CMPServerInitConstants{
PluginConfig: *config,
}
service := &Service{
initConstants: initConstants,
}
return service, nil
}
func TestMatchRepository(t *testing.T) {
configFilePath := "./testdata/ksonnet/config"
service, err := newService(configFilePath)
require.NoError(t, err)
q := apiclient.RepositoryRequest{}
path, err := os.Getwd()
require.NoError(t, err)
q.Path = path
res1, err := service.MatchRepository(context.Background(), &q)
require.NoError(t, err)
require.True(t, res1.IsSupported)
}
func Test_Negative_ConfigFile_DoesnotExist(t *testing.T) {
configFilePath := "./testdata/kustomize-neg/config"
service, err := newService(configFilePath)
require.Error(t, err)
require.Nil(t, service)
}
func TestGenerateManifest(t *testing.T) {
configFilePath := "./testdata/kustomize/config"
service, err := newService(configFilePath)
require.NoError(t, err)
q := apiclient.ManifestRequest{}
res1, err := service.GenerateManifest(context.Background(), &q)
require.NoError(t, err)
require.NotNil(t, res1)
expectedOutput := "{\"apiVersion\":\"v1\",\"data\":{\"foo\":\"bar\"},\"kind\":\"ConfigMap\",\"metadata\":{\"name\":\"my-map\"}}"
if res1 != nil {
require.Equal(t, expectedOutput, res1.Manifests[0])
}
}
// TestRunCommandContextTimeout makes sure the command dies at timeout rather than sleeping past the timeout.
func TestRunCommandContextTimeout(t *testing.T) {
ctx, cancel := context.WithTimeout(context.Background(), 990*time.Millisecond)
defer cancel()
// Use a subshell so there's a child command.
command := Command{
Command: []string{"sh", "-c"},
Args: []string{"sleep 5"},
}
before := time.Now()
_, err := runCommand(ctx, command, "", []string{})
after := time.Now()
assert.Error(t, err) // The command should time out, causing an error.
assert.Less(t, after.Sub(before), 1*time.Second)
}


@@ -1,16 +0,0 @@
//go:build !windows
// +build !windows
package plugin
import (
"syscall"
)
func newSysProcAttr(setpgid bool) *syscall.SysProcAttr {
return &syscall.SysProcAttr{Setpgid: setpgid}
}
func sysCallKill(pid int) error {
return syscall.Kill(pid, syscall.SIGKILL)
}

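The unix build of newSysProcAttr/sysCallKill above exists so runCommand can kill the whole process group, not just the immediate child, when the context expires. A hedged, unix-only sketch of that pattern using the standard library:

//go:build !windows

package main

import (
	"context"
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond)
	defer cancel()

	// Start the child in its own process group so a later kill reaches grandchildren too.
	cmd := exec.CommandContext(ctx, "sh", "-c", "sleep 5")
	cmd.SysProcAttr = &syscall.SysProcAttr{Setpgid: true}
	if err := cmd.Start(); err != nil {
		fmt.Println("start:", err)
		return
	}
	go func() {
		<-ctx.Done()
		// A negative pid addresses the process group; with Setpgid the group id equals the child pid.
		_ = syscall.Kill(-cmd.Process.Pid, syscall.SIGKILL)
	}()
	fmt.Println("wait:", cmd.Wait()) // returns once the group is killed at the timeout
}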

@@ -1,16 +0,0 @@
//go:build windows
// +build windows
package plugin
import (
"syscall"
)
func newSysProcAttr(setpgid bool) *syscall.SysProcAttr {
return &syscall.SysProcAttr{}
}
func sysCallKill(pid int) error {
return nil
}


@@ -1,15 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: ConfigManagementPlugin
metadata:
name: ksonnet
spec:
version: v1.0
init:
command: [ks, version]
generate:
command: [sh, -c, "ks show $ARGOCD_APP_ENV"]
discover:
find:
glob: "**/*/main.jsonnet"
allowConcurrency: false
lockRepo: false


@@ -1,16 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: ConfigManagementPlugin
metadata:
name: kustomize
spec:
version: v1.0
init:
command: [kustomize, version]
generate:
command: [sh, -c, "cd testdata/kustomize && kustomize build"]
discover:
find:
command: [sh, -c, find . -name kustomization.yaml]
glob: "**/*/kustomization.yaml"
allowConcurrency: true
lockRepo: false


@@ -1,6 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: my-map
data:
foo: bar


@@ -1,16 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: ConfigManagementPlugin
metadata:
name: kustomize
spec:
version: v1.0
init:
command: [kustomize, version]
generate:
command: [sh, -c, "cd testdata/kustomize && kustomize build"]
discover:
find:
command: [sh, -c, find . -name kustomization.yaml]
glob: "**/*/kustomization.yaml"
allowConcurrency: true
lockRepo: false


@@ -1,5 +0,0 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ./cm.yaml


@@ -1,107 +0,0 @@
package cmpserver
import (
"net"
"os"
"os/signal"
"syscall"
grpc_middleware "github.com/grpc-ecosystem/go-grpc-middleware"
grpc_logrus "github.com/grpc-ecosystem/go-grpc-middleware/logging/logrus"
grpc_prometheus "github.com/grpc-ecosystem/go-grpc-prometheus"
log "github.com/sirupsen/logrus"
"google.golang.org/grpc"
"google.golang.org/grpc/health"
"google.golang.org/grpc/health/grpc_health_v1"
"google.golang.org/grpc/reflection"
"github.com/argoproj/argo-cd/v2/cmpserver/apiclient"
"github.com/argoproj/argo-cd/v2/cmpserver/plugin"
"github.com/argoproj/argo-cd/v2/common"
versionpkg "github.com/argoproj/argo-cd/v2/pkg/apiclient/version"
"github.com/argoproj/argo-cd/v2/server/version"
"github.com/argoproj/argo-cd/v2/util/errors"
grpc_util "github.com/argoproj/argo-cd/v2/util/grpc"
)
// ArgoCDCMPServer is the config management plugin server implementation
type ArgoCDCMPServer struct {
log *log.Entry
opts []grpc.ServerOption
initConstants plugin.CMPServerInitConstants
stopCh chan os.Signal
doneCh chan interface{}
sig os.Signal
}
// NewServer returns a new instance of the Argo CD config management plugin server
func NewServer(initConstants plugin.CMPServerInitConstants) (*ArgoCDCMPServer, error) {
if os.Getenv(common.EnvEnableGRPCTimeHistogramEnv) == "true" {
grpc_prometheus.EnableHandlingTimeHistogram()
}
serverLog := log.NewEntry(log.StandardLogger())
streamInterceptors := []grpc.StreamServerInterceptor{grpc_logrus.StreamServerInterceptor(serverLog), grpc_prometheus.StreamServerInterceptor, grpc_util.PanicLoggerStreamServerInterceptor(serverLog)}
unaryInterceptors := []grpc.UnaryServerInterceptor{grpc_logrus.UnaryServerInterceptor(serverLog), grpc_prometheus.UnaryServerInterceptor, grpc_util.PanicLoggerUnaryServerInterceptor(serverLog)}
serverOpts := []grpc.ServerOption{
grpc.UnaryInterceptor(grpc_middleware.ChainUnaryServer(unaryInterceptors...)),
grpc.StreamInterceptor(grpc_middleware.ChainStreamServer(streamInterceptors...)),
grpc.MaxRecvMsgSize(apiclient.MaxGRPCMessageSize),
grpc.MaxSendMsgSize(apiclient.MaxGRPCMessageSize),
}
return &ArgoCDCMPServer{
log: serverLog,
opts: serverOpts,
stopCh: make(chan os.Signal),
doneCh: make(chan interface{}),
initConstants: initConstants,
}, nil
}
func (a *ArgoCDCMPServer) Run() {
config := a.initConstants.PluginConfig
// Listen on the socket address
_ = os.Remove(config.Address())
listener, err := net.Listen("unix", config.Address())
errors.CheckError(err)
log.Infof("argocd-cmp-server %s serving on %s", common.GetVersion(), listener.Addr())
signal.Notify(a.stopCh, syscall.SIGINT, syscall.SIGTERM)
go a.Shutdown(config.Address())
grpcServer := a.CreateGRPC()
err = grpcServer.Serve(listener)
errors.CheckError(err)
if a.sig != nil {
<-a.doneCh
}
}
// CreateGRPC creates new configured grpc server
func (a *ArgoCDCMPServer) CreateGRPC() *grpc.Server {
server := grpc.NewServer(a.opts...)
versionpkg.RegisterVersionServiceServer(server, version.NewServer(nil, func() (bool, error) {
return true, nil
}))
pluginService := plugin.NewService(a.initConstants)
apiclient.RegisterConfigManagementPluginServiceServer(server, pluginService)
healthService := health.NewServer()
grpc_health_v1.RegisterHealthServer(server, healthService)
// Register reflection service on gRPC server.
reflection.Register(server)
return server
}
func (a *ArgoCDCMPServer) Shutdown(address string) {
defer signal.Stop(a.stopCh)
a.sig = <-a.stopCh
_ = os.Remove(address)
close(a.doneCh)
}

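The CMP server above listens on a unix socket, registers health and plugin services, and removes the socket file again when it receives SIGINT/SIGTERM. A stripped-down sketch of that lifecycle with the plugin service registration omitted; the socket path is an illustrative choice:

package main

import (
	"log"
	"net"
	"os"
	"os/signal"
	"syscall"

	"google.golang.org/grpc"
	"google.golang.org/grpc/health"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	const addr = "/tmp/example-plugin.sock" // illustrative socket path

	_ = os.Remove(addr) // clear a stale socket left by a previous run
	listener, err := net.Listen("unix", addr)
	if err != nil {
		log.Fatal(err)
	}

	server := grpc.NewServer()
	healthpb.RegisterHealthServer(server, health.NewServer())

	// Stop serving and clean up the socket when the process is asked to exit.
	sigCh := make(chan os.Signal, 1)
	signal.Notify(sigCh, syscall.SIGINT, syscall.SIGTERM)
	go func() {
		<-sigCh
		server.GracefulStop()
		_ = os.Remove(addr)
	}()

	log.Printf("serving on %s", listener.Addr())
	if err := server.Serve(listener); err != nil {
		log.Fatal(err)
	}
}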

@@ -42,11 +42,6 @@ const (
DefaultPortRepoServerMetrics = 8084
)
// Default listener address for ArgoCD components
const (
DefaultAddressAPIServer = "localhost"
)
// Default paths on the pod's file system
const (
// The default path where TLS certificates for repositories are located
@@ -59,12 +54,6 @@ const (
DefaultGnuPgHomePath = "/app/config/gpg/keys"
// Default path to repo server TLS endpoint config
DefaultAppConfigPath = "/app/config"
// Default path to cmp server plugin socket file
DefaultPluginSockFilePath = "/home/argocd/cmp-server/plugins"
// Default path to cmp server plugin configuration file
DefaultPluginConfigFilePath = "/home/argocd/cmp-server/config"
// Plugin Config File is a ConfigManagementPlugin manifest located inside the plugin container
PluginConfigFileName = "plugin.yaml"
)
// Argo CD application related constants
@@ -76,10 +65,6 @@ const (
ArgoCDUserAgentName = "argocd-client"
// AuthCookieName is the HTTP cookie name where we store our auth token
AuthCookieName = "argocd.token"
// StateCookieName is the HTTP cookie name that holds temporary nonce tokens for CSRF protection
StateCookieName = "argocd.oauthstate"
// StateCookieMaxAge is the maximum age of the oauth state cookie
StateCookieMaxAge = time.Minute * 5
// ChangePasswordSSOTokenMaxAge is the max token age for password change operation
ChangePasswordSSOTokenMaxAge = time.Minute * 5
@@ -128,9 +113,6 @@ const (
// LabelValueSecretTypeRepoCreds indicates a secret type of repository credentials
LabelValueSecretTypeRepoCreds = "repo-creds"
// The Argo CD application name is used as the instance name
AnnotationKeyAppInstance = "argocd.argoproj.io/tracking-id"
// AnnotationCompareOptions is a comma-separated list of options for comparison
AnnotationCompareOptions = "argocd.argoproj.io/compare-options"
@@ -162,12 +144,6 @@ const (
EnvVarTLSDataPath = "ARGOCD_TLS_DATA_PATH"
// Specifies the number of git remote operation attempts
EnvGitAttemptsCount = "ARGOCD_GIT_ATTEMPTS_COUNT"
// Specifies max duration of git remote operation retry
EnvGitRetryMaxDuration = "ARGOCD_GIT_RETRY_MAX_DURATION"
// Specifies duration of git remote operation retry
EnvGitRetryDuration = "ARGOCD_GIT_RETRY_DURATION"
// Specifies the factor of git remote operation retry
EnvGitRetryFactor = "ARGOCD_GIT_RETRY_FACTOR"
// Overrides git submodule support, true by default
EnvGitSubmoduleEnabled = "ARGOCD_GIT_MODULES_ENABLED"
// EnvGnuPGHome is the path to ArgoCD's GnuPG keyring for signature verification
@@ -196,10 +172,6 @@ const (
EnvLogFormat = "ARGOCD_LOG_FORMAT"
// EnvLogLevel log level that is defined by `--loglevel` option
EnvLogLevel = "ARGOCD_LOG_LEVEL"
// EnvMaxCookieNumber max number of chunks a cookie can be broken into
EnvMaxCookieNumber = "ARGOCD_MAX_COOKIE_NUMBER"
// EnvPluginSockFilePath allows to override the pluginSockFilePath for repo server and cmp server
EnvPluginSockFilePath = "ARGOCD_PLUGINSOCKFILEPATH"
)
const (
@@ -212,12 +184,6 @@ const (
CacheVersion = "1.8.3"
)
const (
DefaultGitRetryMaxDuration time.Duration = time.Second * 5 // 5s
DefaultGitRetryDuration time.Duration = time.Millisecond * 250 // 0.25s
DefaultGitRetryFactor = int64(2)
)
// GetGnuPGHomePath retrieves the path to use for GnuPG home directory, which is either taken from GNUPGHOME environment or a default value
func GetGnuPGHomePath() string {
if gnuPgHome := os.Getenv(EnvGnuPGHome); gnuPgHome == "" {
@@ -226,12 +192,3 @@ func GetGnuPGHomePath() string {
return gnuPgHome
}
}
// GetPluginSockFilePath retrieves the path of plugin sock file, which is either taken from PluginSockFilePath environment or a default value
func GetPluginSockFilePath() string {
if pluginSockFilePath := os.Getenv(EnvPluginSockFilePath); pluginSockFilePath == "" {
return DefaultPluginSockFilePath
} else {
return pluginSockFilePath
}
}

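The StateCookieName/StateCookieMaxAge constants in the hunk above back a short-lived nonce that ties an OAuth2 login to the browser that started it. A hedged sketch of producing such a state value with the standard library; the byte length, cookie name, and max age here are illustrative choices, not the project's exact parameters:

package main

import (
	"crypto/rand"
	"encoding/base64"
	"fmt"
	"net/http"
	"time"
)

// newStateNonce returns a URL-safe random string suitable for an OAuth2 "state" parameter.
func newStateNonce(n int) (string, error) {
	buf := make([]byte, n)
	if _, err := rand.Read(buf); err != nil {
		return "", err
	}
	return base64.RawURLEncoding.EncodeToString(buf), nil
}

func main() {
	state, err := newStateNonce(32) // 32 random bytes; the exact size is an assumption
	if err != nil {
		panic(err)
	}
	cookie := &http.Cookie{
		Name:     "example.oauthstate", // illustrative cookie name
		Value:    state,
		MaxAge:   int((5 * time.Minute).Seconds()),
		HttpOnly: true,
	}
	fmt.Println(cookie.String())
}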

@@ -37,6 +37,9 @@ import (
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/util/workqueue"
// make sure to register workqueue prometheus metrics
_ "k8s.io/component-base/metrics/prometheus/workqueue"
statecache "github.com/argoproj/argo-cd/v2/controller/cache"
"github.com/argoproj/argo-cd/v2/controller/metrics"
"github.com/argoproj/argo-cd/v2/pkg/apis/application"
@@ -46,7 +49,6 @@ import (
applisters "github.com/argoproj/argo-cd/v2/pkg/client/listers/application/v1alpha1"
"github.com/argoproj/argo-cd/v2/reposerver/apiclient"
"github.com/argoproj/argo-cd/v2/util/argo"
argodiff "github.com/argoproj/argo-cd/v2/util/argo/diff"
appstatecache "github.com/argoproj/argo-cd/v2/util/cache/appstate"
"github.com/argoproj/argo-cd/v2/util/db"
"github.com/argoproj/argo-cd/v2/util/errors"
@@ -111,7 +113,6 @@ type ApplicationController struct {
metricsServer *metrics.MetricsServer
kubectlSemaphore *semaphore.Weighted
clusterFilter func(cluster *appv1.Cluster) bool
projByNameCache sync.Map
}
// NewApplicationController creates new instance of ApplicationController.
@@ -127,7 +128,6 @@ func NewApplicationController(
selfHealTimeout time.Duration,
metricsPort int,
metricsCacheExpiration time.Duration,
metricsApplicationLabels []string,
kubectlParallelismLimit int64,
clusterFilter func(cluster *appv1.Cluster) bool,
) (*ApplicationController, error) {
@@ -152,7 +152,6 @@ func NewApplicationController(
settingsMgr: settingsMgr,
selfHealTimeout: selfHealTimeout,
clusterFilter: clusterFilter,
projByNameCache: sync.Map{},
}
if kubectlParallelismLimit > 0 {
ctrl.kubectlSemaphore = semaphore.NewWeighted(kubectlParallelismLimit)
@@ -165,26 +164,16 @@ func NewApplicationController(
AddFunc: func(obj interface{}) {
if key, err := cache.MetaNamespaceKeyFunc(obj); err == nil {
ctrl.projectRefreshQueue.Add(key)
if projMeta, ok := obj.(metav1.Object); ok {
ctrl.InvalidateProjectsCache(projMeta.GetName())
}
}
},
UpdateFunc: func(old, new interface{}) {
if key, err := cache.MetaNamespaceKeyFunc(new); err == nil {
ctrl.projectRefreshQueue.Add(key)
if projMeta, ok := new.(metav1.Object); ok {
ctrl.InvalidateProjectsCache(projMeta.GetName())
}
}
},
DeleteFunc: func(obj interface{}) {
if key, err := cache.DeletionHandlingMetaNamespaceKeyFunc(obj); err == nil {
ctrl.projectRefreshQueue.Add(key)
if projMeta, ok := obj.(metav1.Object); ok {
ctrl.InvalidateProjectsCache(projMeta.GetName())
}
}
},
})
@@ -192,7 +181,7 @@ func NewApplicationController(
var err error
ctrl.metricsServer, err = metrics.NewMetricsServer(metricsAddr, appLister, ctrl.canProcessApp, func(r *http.Request) error {
return nil
}, metricsApplicationLabels)
})
if err != nil {
return nil, err
}
@@ -202,8 +191,8 @@ func NewApplicationController(
return nil, err
}
}
stateCache := statecache.NewLiveStateCache(db, appInformer, ctrl.settingsMgr, kubectl, ctrl.metricsServer, ctrl.handleObjectUpdated, clusterFilter, argo.NewResourceTracking())
appStateManager := NewAppStateManager(db, applicationClientset, repoClientset, namespace, kubectl, ctrl.settingsMgr, stateCache, projInformer, ctrl.metricsServer, argoCache, ctrl.statusRefreshTimeout, argo.NewResourceTracking())
stateCache := statecache.NewLiveStateCache(db, appInformer, ctrl.settingsMgr, kubectl, ctrl.metricsServer, ctrl.handleObjectUpdated, clusterFilter)
appStateManager := NewAppStateManager(db, applicationClientset, repoClientset, namespace, kubectl, ctrl.settingsMgr, stateCache, projInformer, ctrl.metricsServer, argoCache, ctrl.statusRefreshTimeout)
ctrl.appInformer = appInformer
ctrl.appLister = appLister
ctrl.projInformer = projInformer
@@ -213,19 +202,6 @@ func NewApplicationController(
return &ctrl, nil
}
func (ctrl *ApplicationController) InvalidateProjectsCache(names ...string) {
if len(names) > 0 {
for _, name := range names {
ctrl.projByNameCache.Delete(name)
}
} else {
ctrl.projByNameCache.Range(func(key, _ interface{}) bool {
ctrl.projByNameCache.Delete(key)
return true
})
}
}
func (ctrl *ApplicationController) GetMetricsServer() *metrics.MetricsServer {
return ctrl.metricsServer
}
@@ -255,35 +231,8 @@ func isSelfReferencedApp(app *appv1.Application, ref v1.ObjectReference) bool {
gvk.Kind == application.ApplicationKind
}
func (ctrl *ApplicationController) newAppProjCache(name string) *appProjCache {
return &appProjCache{name: name, ctrl: ctrl}
}
type appProjCache struct {
name string
ctrl *ApplicationController
lock sync.Mutex
appProj *appv1.AppProject
}
func (projCache *appProjCache) GetAppProject(ctx context.Context) (*appv1.AppProject, error) {
projCache.lock.Lock()
defer projCache.lock.Unlock()
if projCache.appProj != nil {
return projCache.appProj, nil
}
proj, err := argo.GetAppProjectByName(projCache.name, applisters.NewAppProjectLister(projCache.ctrl.projInformer.GetIndexer()), projCache.ctrl.namespace, projCache.ctrl.settingsMgr, projCache.ctrl.db, ctx)
if err != nil {
return nil, err
}
projCache.appProj = proj
return projCache.appProj, nil
}
func (ctrl *ApplicationController) getAppProj(app *appv1.Application) (*appv1.AppProject, error) {
projCache, _ := ctrl.projByNameCache.LoadOrStore(app.Spec.GetProject(), ctrl.newAppProjCache(app.Spec.GetProject()))
return projCache.(*appProjCache).GetAppProject(context.TODO())
return argo.GetAppProject(&app.Spec, applisters.NewAppProjectLister(ctrl.projInformer.GetIndexer()), ctrl.namespace, ctrl.settingsMgr)
}
func (ctrl *ApplicationController) handleObjectUpdated(managedByApp map[string]bool, ref v1.ObjectReference) {
@@ -296,8 +245,12 @@ func (ctrl *ApplicationController) handleObjectUpdated(managedByApp map[string]b
if !ok {
continue
}
// exclude resource unless it is permitted in the app project. If project is not permitted then it is not controlled by the user and there is no point showing the warning.
if proj, err := ctrl.getAppProj(app); err == nil && proj.IsGroupKindPermitted(ref.GroupVersionKind().GroupKind(), true) &&
!isKnownOrphanedResourceExclusion(kube.NewResourceKey(ref.GroupVersionKind().Group, ref.GroupVersionKind().Kind, ref.Namespace, ref.Name), proj) {
managedByApp[app.Name] = true
managedByApp[app.Name] = false
}
}
}
}
@@ -317,43 +270,24 @@ func (ctrl *ApplicationController) handleObjectUpdated(managedByApp map[string]b
if isManagedResource {
level = CompareWithRecent
}
// Additional check for debug level so we don't need to evaluate the
// format string in case of non-debug scenarios
if log.GetLevel() >= log.DebugLevel {
var resKey string
if ref.Namespace != "" {
resKey = ref.Namespace + "/" + ref.Name
} else {
resKey = "(cluster-scoped)/" + ref.Name
}
log.Debugf("Refreshing app %s for change in cluster of object %s of type %s/%s", appName, resKey, ref.APIVersion, ref.Kind)
}
ctrl.requestAppRefresh(appName, &level, nil)
}
}
// setAppManagedResources will build a list of ResourceDiff based on the provided comparisonResult
// and persist app resources related data in the cache. Will return the persisted ApplicationTree.
func (ctrl *ApplicationController) setAppManagedResources(a *appv1.Application, comparisonResult *comparisonResult) (*appv1.ApplicationTree, error) {
managedResources, err := ctrl.hideSecretData(a, comparisonResult)
managedResources, err := ctrl.managedResources(comparisonResult)
if err != nil {
return nil, fmt.Errorf("error getting managed resources: %s", err)
return nil, err
}
tree, err := ctrl.getResourceTree(a, managedResources)
if err != nil {
return nil, fmt.Errorf("error getting resource tree: %s", err)
return nil, err
}
err = ctrl.cache.SetAppResourcesTree(a.Name, tree)
if err != nil {
return nil, fmt.Errorf("error setting app resource tree: %s", err)
return nil, err
}
err = ctrl.cache.SetAppManagedResources(a.Name, managedResources)
if err != nil {
return nil, fmt.Errorf("error setting app managed resources: %s", err)
}
return tree, nil
return tree, ctrl.cache.SetAppManagedResources(a.Name, managedResources)
}
// returns true if the given resources exist in the namespace by default and are not managed by the user
@@ -383,7 +317,7 @@ func isKnownOrphanedResourceExclusion(key kube.ResourceKey, proj *appv1.AppProje
func (ctrl *ApplicationController) getResourceTree(a *appv1.Application, managedResources []*appv1.ResourceDiff) (*appv1.ApplicationTree, error) {
nodes := make([]appv1.ResourceNode, 0)
proj, err := ctrl.getAppProj(a)
proj, err := argo.GetAppProject(&a.Spec, applisters.NewAppProjectLister(ctrl.projInformer.GetIndexer()), ctrl.namespace, ctrl.settingsMgr)
if err != nil {
return nil, err
}
@@ -563,7 +497,7 @@ func (ctrl *ApplicationController) getAppHosts(a *appv1.Application, appNodes []
return hosts, nil
}
func (ctrl *ApplicationController) hideSecretData(app *appv1.Application, comparisonResult *comparisonResult) ([]*appv1.ResourceDiff, error) {
func (ctrl *ApplicationController) managedResources(comparisonResult *comparisonResult) ([]*appv1.ResourceDiff, error) {
items := make([]*appv1.ResourceDiff, len(comparisonResult.managedResources))
for i := range comparisonResult.managedResources {
res := comparisonResult.managedResources[i]
@@ -583,46 +517,26 @@ func (ctrl *ApplicationController) hideSecretData(app *appv1.Application, compar
var err error
target, live, err = diff.HideSecretData(res.Target, res.Live)
if err != nil {
return nil, fmt.Errorf("error hiding secret data: %s", err)
return nil, err
}
compareOptions, err := ctrl.settingsMgr.GetResourceCompareOptions()
if err != nil {
return nil, fmt.Errorf("error getting resource compare options: %s", err)
return nil, err
}
resourceOverrides, err := ctrl.settingsMgr.GetResourceOverrides()
resDiffPtr, err := diff.Diff(target, live,
diff.WithNormalizer(comparisonResult.diffNormalizer),
diff.WithLogr(logutils.NewLogrusLogger(logutils.NewWithCurrentConfig())),
diff.IgnoreAggregatedRoles(compareOptions.IgnoreAggregatedRoles))
if err != nil {
return nil, fmt.Errorf("error getting resource overrides: %s", err)
return nil, err
}
appLabelKey, err := ctrl.settingsMgr.GetAppInstanceLabelKey()
if err != nil {
return nil, fmt.Errorf("error getting app instance label key: %s", err)
}
trackingMethod, err := ctrl.settingsMgr.GetTrackingMethod()
if err != nil {
return nil, fmt.Errorf("error getting tracking method: %s", err)
}
diffConfig, err := argodiff.NewDiffConfigBuilder().
WithDiffSettings(app.Spec.IgnoreDifferences, resourceOverrides, compareOptions.IgnoreAggregatedRoles).
WithTracking(appLabelKey, trackingMethod).
WithNoCache().
WithLogger(logutils.NewLogrusLogger(logutils.NewWithCurrentConfig())).
Build()
if err != nil {
return nil, fmt.Errorf("appcontroller error building diff config: %s", err)
}
diffResult, err := argodiff.StateDiff(live, target, diffConfig)
if err != nil {
return nil, fmt.Errorf("error applying diff: %s", err)
}
resDiff = diffResult
resDiff = *resDiffPtr
}
if live != nil {
data, err := json.Marshal(live)
if err != nil {
return nil, fmt.Errorf("error marshaling live json: %s", err)
return nil, err
}
item.LiveState = string(data)
} else {
@@ -632,7 +546,7 @@ func (ctrl *ApplicationController) hideSecretData(app *appv1.Application, compar
if target != nil {
data, err := json.Marshal(target)
if err != nil {
return nil, fmt.Errorf("error marshaling target json: %s", err)
return nil, err
}
item.TargetState = string(data)
} else {
@@ -894,7 +808,7 @@ func (ctrl *ApplicationController) getPermittedAppLiveObjects(app *appv1.Applica
}
// Don't delete live resources which are not permitted in the app project
for k, v := range objsMap {
if !proj.IsLiveResourcePermitted(v, app.Spec.Destination.Server, app.Spec.Destination.Name) {
if !proj.IsLiveResourcePermitted(v, app.Spec.Destination.Server) {
delete(objsMap, k)
}
}
@@ -1694,7 +1608,7 @@ func (ctrl *ApplicationController) newApplicationInformerAndLister() (cache.Shar
return nil, nil
}
proj, err := applisters.NewAppProjectLister(ctrl.projInformer.GetIndexer()).AppProjects(ctrl.namespace).Get(app.Spec.GetProject())
proj, err := ctrl.getAppProj(app)
if err != nil {
return nil, nil
}

View File

@@ -106,7 +106,6 @@ func newFakeController(data *fakeData) *ApplicationController {
time.Minute,
common.DefaultPortArgoCDMetrics,
data.metricsCacheExpiration,
[]string{},
0,
nil,
)
@@ -785,11 +784,11 @@ func TestHandleOrphanedResourceUpdated(t *testing.T) {
isRequested, level := ctrl.isRefreshRequested(app1.Name)
assert.True(t, isRequested)
assert.Equal(t, CompareWithRecent, level)
assert.Equal(t, ComparisonWithNothing, level)
isRequested, level = ctrl.isRefreshRequested(app2.Name)
assert.True(t, isRequested)
assert.Equal(t, CompareWithRecent, level)
assert.Equal(t, ComparisonWithNothing, level)
}
func TestGetResourceTree_HasOrphanedResources(t *testing.T) {

View File

@@ -2,12 +2,10 @@ package cache
import (
"context"
"errors"
"fmt"
"math"
"os"
"reflect"
"sync"
"syscall"
"time"
clustercache "github.com/argoproj/gitops-engine/pkg/cache"
@@ -16,7 +14,6 @@ import (
log "github.com/sirupsen/logrus"
"golang.org/x/sync/semaphore"
v1 "k8s.io/api/core/v1"
kerrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
@@ -27,7 +24,6 @@ import (
appv1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/v2/util/argo"
"github.com/argoproj/argo-cd/v2/util/db"
"github.com/argoproj/argo-cd/v2/util/env"
logutils "github.com/argoproj/argo-cd/v2/util/log"
"github.com/argoproj/argo-cd/v2/util/lua"
"github.com/argoproj/argo-cd/v2/util/settings"
@@ -36,69 +32,25 @@ import (
const (
// EnvClusterCacheResyncDuration is the env variable that holds cluster cache re-sync duration
EnvClusterCacheResyncDuration = "ARGOCD_CLUSTER_CACHE_RESYNC_DURATION"
// EnvClusterCacheWatchResyncDuration is the env variable that holds cluster cache watch re-sync duration
EnvClusterCacheWatchResyncDuration = "ARGOCD_CLUSTER_CACHE_WATCH_RESYNC_DURATION"
// EnvClusterSyncRetryTimeoutDuration is the env variable that holds the cluster retry duration used when a sync error happens
EnvClusterSyncRetryTimeoutDuration = "ARGOCD_CLUSTER_SYNC_RETRY_TIMEOUT_DURATION"
// EnvClusterCacheListPageSize is the env variable to control size of the list page size when making K8s queries
EnvClusterCacheListPageSize = "ARGOCD_CLUSTER_CACHE_LIST_PAGE_SIZE"
// EnvClusterCacheListSemaphore is the env variable to control size of the list semaphore
// This is used to limit the number of concurrent memory-consuming operations on the
// k8s list query results across all clusters, to avoid memory spikes during cache initialization.
EnvClusterCacheListSemaphore = "ARGOCD_CLUSTER_CACHE_LIST_SEMAPHORE"
// EnvClusterCacheAttemptLimit is the env variable to control the attempt limit for listing resources during cluster cache sync
EnvClusterCacheAttemptLimit = "ARGOCD_CLUSTER_CACHE_ATTEMPT_LIMIT"
// EnvClusterCacheRetryUseBackoff is the env variable to control whether to use a backoff strategy with the retry during cluster cache sync
EnvClusterCacheRetryUseBackoff = "ARGOCD_CLUSTER_CACHE_RETRY_USE_BACKOFF"
)
// GitOps engine cluster cache tuning options
var (
// clusterCacheResyncDuration controls the duration of cluster cache refresh.
// NOTE: this differs from gitops-engine default of 24h
clusterCacheResyncDuration = 12 * time.Hour
// clusterCacheWatchResyncDuration controls the maximum duration that group/kind watches are allowed to run
// for before relisting & restarting the watch
clusterCacheWatchResyncDuration = 10 * time.Minute
// clusterSyncRetryTimeoutDuration controls the sync retry duration when cluster sync error happens
clusterSyncRetryTimeoutDuration = 10 * time.Second
// The default limit of 50 is chosen based on experiments.
clusterCacheListSemaphoreSize int64 = 50
// clusterCacheListPageSize is the page size when performing K8s list requests.
// 500 is equal to kubectl's size
clusterCacheListPageSize int64 = 500
// clusterCacheAttemptLimit sets the attempt limit for failed requests during cluster cache sync.
// If set to 1, retries are disabled.
clusterCacheAttemptLimit int32 = 1
// clusterCacheRetryUseBackoff specifies whether to use a backoff strategy on cluster cache sync, if retry is enabled
clusterCacheRetryUseBackoff bool = false
// K8SClusterResyncDuration controls the duration of cluster cache refresh
K8SClusterResyncDuration = 12 * time.Hour
)
func init() {
clusterCacheResyncDuration = env.ParseDurationFromEnv(EnvClusterCacheResyncDuration, clusterCacheResyncDuration, 0, math.MaxInt64)
clusterCacheWatchResyncDuration = env.ParseDurationFromEnv(EnvClusterCacheWatchResyncDuration, clusterCacheWatchResyncDuration, 0, math.MaxInt64)
clusterSyncRetryTimeoutDuration = env.ParseDurationFromEnv(EnvClusterSyncRetryTimeoutDuration, clusterSyncRetryTimeoutDuration, 0, math.MaxInt64)
clusterCacheListPageSize = env.ParseInt64FromEnv(EnvClusterCacheListPageSize, clusterCacheListPageSize, 0, math.MaxInt64)
clusterCacheListSemaphoreSize = env.ParseInt64FromEnv(EnvClusterCacheListSemaphore, clusterCacheListSemaphoreSize, 0, math.MaxInt64)
clusterCacheAttemptLimit = int32(env.ParseInt64FromEnv(EnvClusterCacheAttemptLimit, 1, 1, math.MaxInt32))
clusterCacheRetryUseBackoff = env.ParseBoolFromEnv(EnvClusterCacheRetryUseBackoff, false)
if clusterResyncDurationStr := os.Getenv(EnvClusterCacheResyncDuration); clusterResyncDurationStr != "" {
if duration, err := time.ParseDuration(clusterResyncDurationStr); err == nil {
K8SClusterResyncDuration = duration
}
}
}
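The init() above shows the two generations of tuning side by side: one parses every knob through the env helpers with explicit bounds, the other only honours ARGOCD_CLUSTER_CACHE_RESYNC_DURATION via os.Getenv and time.ParseDuration, keeping the 12h default otherwise. A minimal stdlib-only sketch of that fallback behaviour (the function name is illustrative):

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// clusterCacheResyncFromEnv mirrors the fallback shown above: read
// ARGOCD_CLUSTER_CACHE_RESYNC_DURATION and, if it parses as a Go duration,
// use it; otherwise keep the 12h default from the variable block.
func clusterCacheResyncFromEnv() time.Duration {
	resync := 12 * time.Hour // default, matching K8SClusterResyncDuration above
	if s := os.Getenv("ARGOCD_CLUSTER_CACHE_RESYNC_DURATION"); s != "" {
		if d, err := time.ParseDuration(s); err == nil {
			resync = d
		}
	}
	return resync
}

func main() {
	// e.g. ARGOCD_CLUSTER_CACHE_RESYNC_DURATION=6h ./controller
	fmt.Println("cluster cache resync:", clusterCacheResyncFromEnv())
}
```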
type LiveStateCache interface {
// Returns k8s server version
GetVersionsInfo(serverURL string) (string, []kube.APIResourceInfo, error)
GetVersionsInfo(serverURL string) (string, []metav1.APIGroup, error)
// Returns true if the given group kind is a namespaced resource
IsNamespaced(server string, gk schema.GroupKind) (bool, error)
// Returns synced cluster cache
@@ -153,19 +105,19 @@ func NewLiveStateCache(
kubectl kube.Kubectl,
metricsServer *metrics.MetricsServer,
onObjectUpdated ObjectUpdatedHandler,
clusterFilter func(cluster *appv1.Cluster) bool,
resourceTracking argo.ResourceTracking) LiveStateCache {
clusterFilter func(cluster *appv1.Cluster) bool) LiveStateCache {
return &liveStateCache{
appInformer: appInformer,
db: db,
clusters: make(map[string]clustercache.ClusterCache),
onObjectUpdated: onObjectUpdated,
kubectl: kubectl,
settingsMgr: settingsMgr,
metricsServer: metricsServer,
clusterFilter: clusterFilter,
resourceTracking: resourceTracking,
appInformer: appInformer,
db: db,
clusters: make(map[string]clustercache.ClusterCache),
onObjectUpdated: onObjectUpdated,
kubectl: kubectl,
settingsMgr: settingsMgr,
metricsServer: metricsServer,
// The default limit of 50 is chosen based on experiments.
listSemaphore: semaphore.NewWeighted(50),
clusterFilter: clusterFilter,
}
}
@@ -175,14 +127,17 @@ type cacheSettings struct {
}
type liveStateCache struct {
db db.ArgoDB
appInformer cache.SharedIndexInformer
onObjectUpdated ObjectUpdatedHandler
kubectl kube.Kubectl
settingsMgr *settings.SettingsManager
metricsServer *metrics.MetricsServer
clusterFilter func(cluster *appv1.Cluster) bool
resourceTracking argo.ResourceTracking
db db.ArgoDB
appInformer cache.SharedIndexInformer
onObjectUpdated ObjectUpdatedHandler
kubectl kube.Kubectl
settingsMgr *settings.SettingsManager
metricsServer *metrics.MetricsServer
clusterFilter func(cluster *appv1.Cluster) bool
// listSemaphore is used to limit the number of concurrent memory-consuming operations on the
// k8s list query results across all clusters, to avoid memory spikes during cache initialization.
listSemaphore *semaphore.Weighted
clusters map[string]clustercache.ClusterCache
cacheSettings cacheSettings
@@ -303,19 +258,6 @@ func skipAppRequeuing(key kube.ResourceKey) bool {
return ignoredRefreshResources[key.Group+"/"+key.Kind]
}
// isRetryableError is a helper method to see whether an error
// returned from the dynamic client is potentially retryable.
func isRetryableError(err error) bool {
return kerrors.IsInternalError(err) ||
kerrors.IsInvalid(err) ||
kerrors.IsServerTimeout(err) ||
kerrors.IsServiceUnavailable(err) ||
kerrors.IsTimeout(err) ||
kerrors.IsUnexpectedObjectError(err) ||
kerrors.IsUnexpectedServerError(err) ||
errors.Is(err, syscall.ECONNRESET)
}
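isRetryableError is only a predicate; the retry itself happens inside the cluster cache sync, wired in further down via clustercache.SetRetryOptions(clusterCacheAttemptLimit, clusterCacheRetryUseBackoff, isRetryableError). A generic sketch, not the gitops-engine implementation, of how such a predicate, an attempt limit, and an optional backoff flag are typically combined; listWithRetry and the linear backoff are assumptions for illustration only:

```go
package main

import (
	"errors"
	"fmt"
	"syscall"
	"time"
)

// retryable is a stand-in for isRetryableError above; only the
// connection-reset case is checked so the sketch stays stdlib-only.
func retryable(err error) bool {
	return errors.Is(err, syscall.ECONNRESET)
}

// listWithRetry retries the list call while the predicate says the error
// is transient, up to the attempt limit, optionally sleeping between tries.
func listWithRetry(list func() error, attempts int, useBackoff bool) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = list(); err == nil || !retryable(err) {
			return err
		}
		if useBackoff {
			time.Sleep(time.Duration(i+1) * time.Second) // linear backoff, for the sketch only
		}
	}
	return err
}

func main() {
	calls := 0
	err := listWithRetry(func() error {
		calls++
		if calls < 3 {
			return syscall.ECONNRESET // transient failure on the first two calls
		}
		return nil
	}, 5, false)
	fmt.Println("calls:", calls, "err:", err)
}
```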
func (c *liveStateCache) getCluster(server string) (clustercache.ClusterCache, error) {
c.lock.RLock()
clusterCache, ok := c.clusters[server]
@@ -343,13 +285,9 @@ func (c *liveStateCache) getCluster(server string) (clustercache.ClusterCache, e
return nil, fmt.Errorf("controller is configured to ignore cluster %s", cluster.Server)
}
trackingMethod := argo.GetTrackingMethod(c.settingsMgr)
clusterCacheOpts := []clustercache.UpdateSettingsFunc{
clustercache.SetListSemaphore(semaphore.NewWeighted(clusterCacheListSemaphoreSize)),
clustercache.SetListPageSize(clusterCacheListPageSize),
clustercache.SetWatchResyncTimeout(clusterCacheWatchResyncDuration),
clustercache.SetClusterSyncRetryTimeout(clusterSyncRetryTimeoutDuration),
clustercache.SetResyncTimeout(clusterCacheResyncDuration),
clusterCache = clustercache.NewClusterCache(cluster.RESTConfig(),
clustercache.SetListSemaphore(c.listSemaphore),
clustercache.SetResyncTimeout(K8SClusterResyncDuration),
clustercache.SetSettings(cacheSettings.clusterSettings),
clustercache.SetNamespaces(cluster.Namespaces),
clustercache.SetClusterResources(cluster.ClusterResources),
@@ -357,8 +295,7 @@ func (c *liveStateCache) getCluster(server string) (clustercache.ClusterCache, e
res := &ResourceInfo{}
populateNodeInfo(un, res)
res.Health, _ = health.GetResourceHealth(un, cacheSettings.clusterSettings.ResourceHealthOverride)
appName := c.resourceTracking.GetAppName(un, cacheSettings.appInstanceLabelKey, trackingMethod)
appName := kube.GetAppInstanceLabel(un, cacheSettings.appInstanceLabelKey)
if isRoot && appName != "" {
res.AppName = appName
}
@@ -369,10 +306,7 @@ func (c *liveStateCache) getCluster(server string) (clustercache.ClusterCache, e
return res, res.AppName != "" || gvk.Kind == kube.CustomResourceDefinitionKind
}),
clustercache.SetLogr(logutils.NewLogrusLogger(log.WithField("server", cluster.Server))),
clustercache.SetRetryOptions(clusterCacheAttemptLimit, clusterCacheRetryUseBackoff, isRetryableError),
}
clusterCache = clustercache.NewClusterCache(cluster.RESTConfig(), clusterCacheOpts...)
)
_ = clusterCache.OnResourceUpdated(func(newRes *clustercache.Resource, oldRes *clustercache.Resource, namespaceResources map[kube.ResourceKey]*clustercache.Resource) {
toNotify := make(map[string]bool)
@@ -485,12 +419,12 @@ func (c *liveStateCache) GetManagedLiveObjs(a *appv1.Application, targetObjs []*
})
}
func (c *liveStateCache) GetVersionsInfo(serverURL string) (string, []kube.APIResourceInfo, error) {
func (c *liveStateCache) GetVersionsInfo(serverURL string) (string, []metav1.APIGroup, error) {
clusterInfo, err := c.getSyncedCluster(serverURL)
if err != nil {
return "", nil, err
}
return clusterInfo.GetServerVersion(), clusterInfo.GetAPIResources(), nil
return clusterInfo.GetServerVersion(), clusterInfo.GetAPIGroups(), nil
}
func (c *liveStateCache) isClusterHasApps(apps []interface{}, cluster *appv1.Cluster) bool {

View File

@@ -1,12 +1,9 @@
package cache
import (
"errors"
"fmt"
"strings"
"k8s.io/apimachinery/pkg/runtime/schema"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
"github.com/argoproj/gitops-engine/pkg/utils/text"
v1 "k8s.io/api/core/v1"
@@ -30,20 +27,25 @@ func populateNodeInfo(un *unstructured.Unstructured, res *ResourceInfo) {
switch gvk.Kind {
case kube.PodKind:
populatePodInfo(un, res)
return
case kube.ServiceKind:
populateServiceInfo(un, res)
return
case "Node":
populateHostNodeInfo(un, res)
return
}
case "extensions", "networking.k8s.io":
switch gvk.Kind {
case kube.IngressKind:
populateIngressInfo(un, res)
return
}
case "networking.istio.io":
switch gvk.Kind {
case "VirtualService":
populateIstioVirtualServiceInfo(un, res)
return
}
}
@@ -84,36 +86,16 @@ func populateServiceInfo(un *unstructured.Unstructured, res *ResourceInfo) {
res.NetworkingInfo = &v1alpha1.ResourceNetworkingInfo{TargetLabels: targetLabels, Ingress: ingress}
}
func getServiceName(backend map[string]interface{}, gvk schema.GroupVersionKind) (string, error) {
switch gvk.Group {
case "extensions":
return fmt.Sprintf("%s", backend["serviceName"]), nil
case "networking.k8s.io":
switch gvk.Version {
case "v1beta1":
return fmt.Sprintf("%s", backend["serviceName"]), nil
case "v1":
if service, ok, err := unstructured.NestedMap(backend, "service"); ok && err == nil {
return fmt.Sprintf("%s", service["name"]), nil
}
}
}
return "", errors.New("unable to resolve string")
}
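getServiceName exists because the Ingress flavours exercised by the test fixtures below store the backend service differently: extensions/v1beta1 and networking.k8s.io/v1beta1 use a flat serviceName field, while networking.k8s.io/v1 nests it under service.name. A standalone sketch of the same resolution against hand-built backend maps; resolveBackendService is an illustrative copy of the logic above, not an exported helper:

```go
package main

import (
	"errors"
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

// resolveBackendService mirrors getServiceName above: flat serviceName for
// extensions and networking.k8s.io/v1beta1, nested service.name for
// networking.k8s.io/v1.
func resolveBackendService(backend map[string]interface{}, gvk schema.GroupVersionKind) (string, error) {
	switch gvk.Group {
	case "extensions":
		return fmt.Sprintf("%s", backend["serviceName"]), nil
	case "networking.k8s.io":
		switch gvk.Version {
		case "v1beta1":
			return fmt.Sprintf("%s", backend["serviceName"]), nil
		case "v1":
			if service, ok, err := unstructured.NestedMap(backend, "service"); ok && err == nil {
				return fmt.Sprintf("%s", service["name"]), nil
			}
		}
	}
	return "", errors.New("unable to resolve string")
}

func main() {
	legacy := map[string]interface{}{"serviceName": "helm-guestbook", "servicePort": int64(443)}
	v1 := map[string]interface{}{"service": map[string]interface{}{"name": "helm-guestbook"}}

	name, _ := resolveBackendService(legacy, schema.GroupVersionKind{Group: "extensions", Version: "v1beta1", Kind: "Ingress"})
	fmt.Println(name) // helm-guestbook

	name, _ = resolveBackendService(v1, schema.GroupVersionKind{Group: "networking.k8s.io", Version: "v1", Kind: "Ingress"})
	fmt.Println(name) // helm-guestbook
}
```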
func populateIngressInfo(un *unstructured.Unstructured, res *ResourceInfo) {
ingress := getIngress(un)
targetsMap := make(map[v1alpha1.ResourceRef]bool)
gvk := un.GroupVersionKind()
if backend, ok, err := unstructured.NestedMap(un.Object, "spec", "backend"); ok && err == nil {
if serviceName, err := getServiceName(backend, gvk); err == nil {
targetsMap[v1alpha1.ResourceRef{
Group: "",
Kind: kube.ServiceKind,
Namespace: un.GetNamespace(),
Name: serviceName,
}] = true
}
targetsMap[v1alpha1.ResourceRef{
Group: "",
Kind: kube.ServiceKind,
Namespace: un.GetNamespace(),
Name: fmt.Sprintf("%s", backend["serviceName"]),
}] = true
}
urlsSet := make(map[string]bool)
if rules, ok, err := unstructured.NestedSlice(un.Object, "spec", "rules"); ok && err == nil {
@@ -141,15 +123,13 @@ func populateIngressInfo(un *unstructured.Unstructured, res *ResourceInfo) {
continue
}
if backend, ok, err := unstructured.NestedMap(path, "backend"); ok && err == nil {
if serviceName, err := getServiceName(backend, gvk); err == nil {
targetsMap[v1alpha1.ResourceRef{
Group: "",
Kind: kube.ServiceKind,
Namespace: un.GetNamespace(),
Name: serviceName,
}] = true
}
if serviceName, ok, err := unstructured.NestedString(path, "backend", "serviceName"); ok && err == nil {
targetsMap[v1alpha1.ResourceRef{
Group: "",
Kind: kube.ServiceKind,
Namespace: un.GetNamespace(),
Name: serviceName,
}] = true
}
if host == nil || host == "" {

View File

@@ -43,25 +43,6 @@ var (
ingress:
- hostname: localhost`)
testLinkAnnotatedService = strToUnstructured(`
apiVersion: v1
kind: Service
metadata:
name: helm-guestbook
namespace: default
resourceVersion: "123"
uid: "4"
annotations:
link.argocd.argoproj.io/external-link: http://my-grafana.com/pre-generated-link
spec:
selector:
app: guestbook
type: LoadBalancer
status:
loadBalancer:
ingress:
- hostname: localhost`)
testIngress = strToUnstructured(`
apiVersion: extensions/v1beta1
kind: Ingress
@@ -93,39 +74,6 @@ var (
ingress:
- ip: 107.178.210.11`)
testLinkAnnotatedIngress = strToUnstructured(`
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: helm-guestbook
namespace: default
uid: "4"
annotations:
link.argocd.argoproj.io/external-link: http://my-grafana.com/ingress-link
spec:
backend:
serviceName: not-found-service
servicePort: 443
rules:
- host: helm-guestbook.com
http:
paths:
- backend:
serviceName: helm-guestbook
servicePort: 443
path: /
- backend:
serviceName: helm-guestbook
servicePort: https
path: /
tls:
- host: helm-guestbook.com
secretName: my-tls-secret
status:
loadBalancer:
ingress:
- ip: 107.178.210.11`)
testIngressWildCardPath = strToUnstructured(`
apiVersion: extensions/v1beta1
kind: Ingress
@@ -185,43 +133,6 @@ var (
ingress:
- ip: 107.178.210.11`)
testIngressNetworkingV1 = strToUnstructured(`
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: helm-guestbook
namespace: default
uid: "4"
spec:
backend:
service:
name: not-found-service
port:
number: 443
rules:
- host: helm-guestbook.com
http:
paths:
- backend:
service:
name: helm-guestbook
port:
number: 443
path: /
- backend:
service:
name: helm-guestbook
port:
name: https
path: /
tls:
- host: helm-guestbook.com
secretName: my-tls-secret
status:
loadBalancer:
ingress:
- ip: 107.178.210.11`)
testIstioVirtualService = strToUnstructured(`
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
@@ -320,17 +231,6 @@ func TestGetServiceInfo(t *testing.T) {
}, info.NetworkingInfo)
}
func TestGetLinkAnnotatedServiceInfo(t *testing.T) {
info := &ResourceInfo{}
populateNodeInfo(testLinkAnnotatedService, info)
assert.Equal(t, 0, len(info.Info))
assert.Equal(t, &v1alpha1.ResourceNetworkingInfo{
TargetLabels: map[string]string{"app": "guestbook"},
Ingress: []v1.LoadBalancerIngress{{Hostname: "localhost"}},
ExternalURLs: []string{"http://my-grafana.com/pre-generated-link"},
}, info.NetworkingInfo)
}
func TestGetIstioVirtualServiceInfo(t *testing.T) {
info := &ResourceInfo{}
populateNodeInfo(testIstioVirtualService, info)
@@ -355,40 +255,8 @@ func TestGetIstioVirtualServiceInfo(t *testing.T) {
}
func TestGetIngressInfo(t *testing.T) {
var tests = []struct {
Ingress *unstructured.Unstructured
}{
{testIngress},
{testIngressNetworkingV1},
}
for _, tc := range tests {
info := &ResourceInfo{}
populateNodeInfo(tc.Ingress, info)
assert.Equal(t, 0, len(info.Info))
sort.Slice(info.NetworkingInfo.TargetRefs, func(i, j int) bool {
return strings.Compare(info.NetworkingInfo.TargetRefs[j].Name, info.NetworkingInfo.TargetRefs[i].Name) < 0
})
assert.Equal(t, &v1alpha1.ResourceNetworkingInfo{
Ingress: []v1.LoadBalancerIngress{{IP: "107.178.210.11"}},
TargetRefs: []v1alpha1.ResourceRef{{
Namespace: "default",
Group: "",
Kind: kube.ServiceKind,
Name: "not-found-service",
}, {
Namespace: "default",
Group: "",
Kind: kube.ServiceKind,
Name: "helm-guestbook",
}},
ExternalURLs: []string{"https://helm-guestbook.com/"},
}, info.NetworkingInfo)
}
}
func TestGetLinkAnnotatedIngressInfo(t *testing.T) {
info := &ResourceInfo{}
populateNodeInfo(testLinkAnnotatedIngress, info)
populateNodeInfo(testIngress, info)
assert.Equal(t, 0, len(info.Info))
sort.Slice(info.NetworkingInfo.TargetRefs, func(i, j int) bool {
return strings.Compare(info.NetworkingInfo.TargetRefs[j].Name, info.NetworkingInfo.TargetRefs[i].Name) < 0
@@ -406,7 +274,7 @@ func TestGetLinkAnnotatedIngressInfo(t *testing.T) {
Kind: kube.ServiceKind,
Name: "helm-guestbook",
}},
ExternalURLs: []string{"https://helm-guestbook.com/", "http://my-grafana.com/ingress-link"},
ExternalURLs: []string{"https://helm-guestbook.com/"},
}, info.NetworkingInfo)
}
@@ -528,7 +396,7 @@ func TestGetIngressInfoNoHost(t *testing.T) {
}
func TestExternalUrlWithSubPath(t *testing.T) {
ingress := strToUnstructured(`
apiVersion: networking.k8s.io/v1
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: helm-guestbook
@@ -556,7 +424,7 @@ func TestExternalUrlWithSubPath(t *testing.T) {
}
func TestExternalUrlWithMultipleSubPaths(t *testing.T) {
ingress := strToUnstructured(`
apiVersion: networking.k8s.io/v1
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: helm-guestbook
@@ -595,7 +463,7 @@ func TestExternalUrlWithMultipleSubPaths(t *testing.T) {
}
func TestExternalUrlWithNoSubPath(t *testing.T) {
ingress := strToUnstructured(`
apiVersion: networking.k8s.io/v1
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: helm-guestbook

View File

@@ -17,6 +17,8 @@ import (
unstructured "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
v1alpha1 "github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
)
@@ -111,7 +113,7 @@ func (_m *LiveStateCache) GetNamespaceTopLevelResources(server string, namespace
}
// GetVersionsInfo provides a mock function with given fields: serverURL
func (_m *LiveStateCache) GetVersionsInfo(serverURL string) (string, []kube.APIResourceInfo, error) {
func (_m *LiveStateCache) GetVersionsInfo(serverURL string) (string, []v1.APIGroup, error) {
ret := _m.Called(serverURL)
var r0 string
@@ -121,12 +123,12 @@ func (_m *LiveStateCache) GetVersionsInfo(serverURL string) (string, []kube.APIR
r0 = ret.Get(0).(string)
}
var r1 []kube.APIResourceInfo
if rf, ok := ret.Get(1).(func(string) []kube.APIResourceInfo); ok {
var r1 []v1.APIGroup
if rf, ok := ret.Get(1).(func(string) []v1.APIGroup); ok {
r1 = rf(serverURL)
} else {
if ret.Get(1) != nil {
r1 = ret.Get(1).([]kube.APIResourceInfo)
r1 = ret.Get(1).([]v1.APIGroup)
}
}

View File

@@ -64,7 +64,6 @@ func (c *clusterInfoUpdater) updateClusters() {
clusters, err := c.db.ListClusters(context.Background())
if err != nil {
log.Warnf("Failed to save clusters info: %v", err)
return
}
var clustersFiltered []appv1.Cluster
if c.clusterFilter == nil {
@@ -107,7 +106,7 @@ func (c *clusterInfoUpdater) updateClusterInfo(cluster appv1.Cluster, info *cach
}
if info != nil {
clusterInfo.ServerVersion = info.K8SVersion
clusterInfo.APIVersions = argo.APIResourcesToStrings(info.APIResources, false)
clusterInfo.APIVersions = argo.APIGroupsToVersions(info.APIGroups)
if info.LastCacheSyncTime == nil {
clusterInfo.ConnectionState.Status = appv1.ConnectionStatusUnknown
} else if info.SyncError == nil {

View File

@@ -41,12 +41,6 @@ var (
descClusterDefaultLabels,
nil,
)
descClusterConnectionStatus = prometheus.NewDesc(
"argocd_cluster_connection_status",
"The k8s cluster current connection status.",
append(descClusterDefaultLabels, "k8s_version"),
nil,
)
)
type HasClustersInfo interface {
@@ -83,11 +77,9 @@ func (c *clusterCollector) Describe(ch chan<- *prometheus.Desc) {
ch <- descClusterCacheResources
ch <- descClusterAPIs
ch <- descClusterCacheAgeSeconds
ch <- descClusterConnectionStatus
}
func (c *clusterCollector) Collect(ch chan<- prometheus.Metric) {
now := time.Now()
for _, c := range c.info {
defaultValues := []string{c.Server}
@@ -99,6 +91,5 @@ func (c *clusterCollector) Collect(ch chan<- prometheus.Metric) {
cacheAgeSeconds = int(now.Sub(*c.LastCacheSyncTime).Seconds())
}
ch <- prometheus.MustNewConstMetric(descClusterCacheAgeSeconds, prometheus.GaugeValue, float64(cacheAgeSeconds), defaultValues...)
ch <- prometheus.MustNewConstMetric(descClusterConnectionStatus, prometheus.GaugeValue, boolFloat64(c.SyncError == nil), append(defaultValues, c.K8SVersion)...)
}
}

View File

@@ -1,104 +0,0 @@
package metrics
import (
"errors"
"testing"
gitopsCache "github.com/argoproj/gitops-engine/pkg/cache"
)
func TestMetricClusterConnectivity(t *testing.T) {
type testCases struct {
testCombination
skip bool
description string
metricLabels []string
clustersInfo []gitopsCache.ClusterInfo
}
cases := []testCases{
{
description: "metric will have value 1 if connected with the cluster",
skip: false,
metricLabels: []string{"non-existing"},
testCombination: testCombination{
applications: []string{fakeApp},
responseContains: `
# TYPE argocd_cluster_connection_status gauge
argocd_cluster_connection_status{k8s_version="1.21",server="server1"} 1
`,
},
clustersInfo: []gitopsCache.ClusterInfo{
{
Server: "server1",
K8SVersion: "1.21",
SyncError: nil,
},
},
},
{
description: "metric will have value 0 if not connected with the cluster",
skip: false,
metricLabels: []string{"non-existing"},
testCombination: testCombination{
applications: []string{fakeApp},
responseContains: `
# TYPE argocd_cluster_connection_status gauge
argocd_cluster_connection_status{k8s_version="1.21",server="server1"} 0
`,
},
clustersInfo: []gitopsCache.ClusterInfo{
{
Server: "server1",
K8SVersion: "1.21",
SyncError: errors.New("error connecting with cluster"),
},
},
},
{
description: "will have one metric per cluster",
skip: false,
metricLabels: []string{"non-existing"},
testCombination: testCombination{
applications: []string{fakeApp},
responseContains: `
# TYPE argocd_cluster_connection_status gauge
argocd_cluster_connection_status{k8s_version="1.21",server="server1"} 1
argocd_cluster_connection_status{k8s_version="1.21",server="server2"} 1
argocd_cluster_connection_status{k8s_version="1.21",server="server3"} 1
`,
},
clustersInfo: []gitopsCache.ClusterInfo{
{
Server: "server1",
K8SVersion: "1.21",
SyncError: nil,
},
{
Server: "server2",
K8SVersion: "1.21",
SyncError: nil,
},
{
Server: "server3",
K8SVersion: "1.21",
SyncError: nil,
},
},
},
}
for _, c := range cases {
c := c
t.Run(c.description, func(t *testing.T) {
if !c.skip {
cfg := TestMetricServerConfig{
FakeAppYAMLs: c.applications,
ExpectedResponse: c.responseContains,
AppLabels: c.metricLabels,
ClustersInfo: c.clustersInfo,
}
runTest(t, cfg)
}
})
}
}

View File

@@ -6,7 +6,6 @@ import (
"fmt"
"net/http"
"os"
"regexp"
"strconv"
"time"
@@ -21,7 +20,6 @@ import (
applister "github.com/argoproj/argo-cd/v2/pkg/client/listers/application/v1alpha1"
"github.com/argoproj/argo-cd/v2/util/git"
"github.com/argoproj/argo-cd/v2/util/healthz"
"github.com/argoproj/argo-cd/v2/util/profile"
)
type MetricsServer struct {
@@ -51,8 +49,6 @@ const (
var (
descAppDefaultLabels = []string{"namespace", "name", "project"}
descAppLabels *prometheus.Desc
descAppInfo = prometheus.NewDesc(
"argocd_app_info",
"Information about application.",
@@ -125,7 +121,7 @@ var (
redisRequestCounter = prometheus.NewCounterVec(
prometheus.CounterOpts{
Name: "argocd_redis_request_total",
Help: "Number of redis requests executed during application reconciliation.",
Help: "Number of kubernetes requests executed during application reconciliation.",
},
[]string{"hostname", "initiator", "failed"},
)
@@ -141,32 +137,19 @@ var (
)
// NewMetricsServer returns a new prometheus server which collects application metrics
func NewMetricsServer(addr string, appLister applister.ApplicationLister, appFilter func(obj interface{}) bool, healthCheck func(r *http.Request) error, appLabels []string) (*MetricsServer, error) {
func NewMetricsServer(addr string, appLister applister.ApplicationLister, appFilter func(obj interface{}) bool, healthCheck func(r *http.Request) error) (*MetricsServer, error) {
hostname, err := os.Hostname()
if err != nil {
return nil, err
}
if len(appLabels) > 0 {
normalizedLabels := normalizeLabels("label", appLabels)
descAppLabels = prometheus.NewDesc(
"argocd_app_labels",
"Argo Application labels converted to Prometheus labels",
append(descAppDefaultLabels, normalizedLabels...),
nil,
)
}
mux := http.NewServeMux()
registry := NewAppRegistry(appLister, appFilter, appLabels)
registry.MustRegister(depth, adds, latency, workDuration, unfinished, longestRunningProcessor, retries)
registry := NewAppRegistry(appLister, appFilter)
mux.Handle(MetricsPath, promhttp.HandlerFor(prometheus.Gatherers{
// contains app controller specific metrics
registry,
// contains process, golang and controller workqueues metrics
prometheus.DefaultGatherer,
}, promhttp.HandlerOpts{}))
profile.RegisterProfiler(mux)
healthz.ServeHealthCheck(mux, healthCheck)
registry.MustRegister(syncCounter)
@@ -197,20 +180,6 @@ func NewMetricsServer(addr string, appLister applister.ApplicationLister, appFil
}, nil
}
// Prometheus invalid labels, more info: https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels.
var invalidPromLabelChars = regexp.MustCompile(`[^a-zA-Z0-9_]`)
func normalizeLabels(prefix string, appLabels []string) []string {
results := []string{}
for _, label := range appLabels {
// prometheus labels don't accept dashes in their names
curr := invalidPromLabelChars.ReplaceAllString(label, "_")
result := fmt.Sprintf("%s_%s", prefix, curr)
results = append(results, result)
}
return results
}
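normalizeLabels is what turns the configured application labels into valid Prometheus label names: every character outside [a-zA-Z0-9_] becomes an underscore and the prefix is prepended, which is how team-name and argoproj.io/cluster show up as label_team_name and label_argoproj_io_cluster in the argocd_app_labels samples in the tests below. A self-contained sketch of the same transformation:

```go
package main

import (
	"fmt"
	"regexp"
)

// invalidPromLabelChars matches anything Prometheus does not allow in a
// label name, exactly as in the controller code above.
var invalidPromLabelChars = regexp.MustCompile(`[^a-zA-Z0-9_]`)

// normalizeLabels prefixes each app label and replaces invalid characters
// (dashes, dots, slashes, ...) with underscores.
func normalizeLabels(prefix string, appLabels []string) []string {
	results := make([]string, 0, len(appLabels))
	for _, label := range appLabels {
		results = append(results, fmt.Sprintf("%s_%s", prefix, invalidPromLabelChars.ReplaceAllString(label, "_")))
	}
	return results
}

func main() {
	fmt.Println(normalizeLabels("label", []string{"team-name", "team-bu", "argoproj.io/cluster"}))
	// [label_team_name label_team_bu label_argoproj_io_cluster]
}
```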
func (m *MetricsServer) RegisterClustersInfoSource(ctx context.Context, source HasClustersInfo) {
collector := &clusterCollector{infoSource: source}
go collector.Run(ctx)
@@ -303,30 +272,25 @@ func (m *MetricsServer) SetExpiration(cacheExpiration time.Duration) error {
type appCollector struct {
store applister.ApplicationLister
appFilter func(obj interface{}) bool
appLabels []string
}
// NewAppCollector returns a prometheus collector for application metrics
func NewAppCollector(appLister applister.ApplicationLister, appFilter func(obj interface{}) bool, appLabels []string) prometheus.Collector {
func NewAppCollector(appLister applister.ApplicationLister, appFilter func(obj interface{}) bool) prometheus.Collector {
return &appCollector{
store: appLister,
appFilter: appFilter,
appLabels: appLabels,
}
}
// NewAppRegistry creates a new prometheus registry that collects applications
func NewAppRegistry(appLister applister.ApplicationLister, appFilter func(obj interface{}) bool, appLabels []string) *prometheus.Registry {
func NewAppRegistry(appLister applister.ApplicationLister, appFilter func(obj interface{}) bool) *prometheus.Registry {
registry := prometheus.NewRegistry()
registry.MustRegister(NewAppCollector(appLister, appFilter, appLabels))
registry.MustRegister(NewAppCollector(appLister, appFilter))
return registry
}
// Describe implements the prometheus.Collector interface
func (c *appCollector) Describe(ch chan<- *prometheus.Desc) {
if len(c.appLabels) > 0 {
ch <- descAppLabels
}
ch <- descAppInfo
ch <- descAppSyncStatusCode
ch <- descAppHealthStatus
@@ -341,7 +305,7 @@ func (c *appCollector) Collect(ch chan<- prometheus.Metric) {
}
for _, app := range apps {
if c.appFilter(app) {
c.collectApps(ch, app)
collectApps(ch, app)
}
}
}
@@ -353,7 +317,7 @@ func boolFloat64(b bool) float64 {
return 0
}
func (c *appCollector) collectApps(ch chan<- prometheus.Metric, app *argoappv1.Application) {
func collectApps(ch chan<- prometheus.Metric, app *argoappv1.Application) {
addConstMetric := func(desc *prometheus.Desc, t prometheus.ValueType, v float64, lv ...string) {
project := app.Spec.GetProject()
lv = append([]string{app.Namespace, app.Name, project}, lv...)
@@ -380,15 +344,6 @@ func (c *appCollector) collectApps(ch chan<- prometheus.Metric, app *argoappv1.A
addGauge(descAppInfo, 1, git.NormalizeGitURL(app.Spec.Source.RepoURL), app.Spec.Destination.Server, app.Spec.Destination.Namespace, string(syncStatus), string(healthStatus), operation)
if len(c.appLabels) > 0 {
labelValues := []string{}
for _, desiredLabel := range c.appLabels {
value := app.GetLabels()[desiredLabel]
labelValues = append(labelValues, value)
}
addGauge(descAppLabels, 1, labelValues...)
}
// Deprecated controller metrics
if os.Getenv(EnvVarLegacyControllerMetrics) == "true" {
addGauge(descAppCreated, float64(app.CreationTimestamp.Unix()))

View File

@@ -10,7 +10,6 @@ import (
"testing"
"time"
gitopsCache "github.com/argoproj/gitops-engine/pkg/cache"
"github.com/argoproj/gitops-engine/pkg/sync/common"
"github.com/ghodss/yaml"
"github.com/stretchr/testify/assert"
@@ -30,10 +29,6 @@ kind: Application
metadata:
name: my-app
namespace: argocd
labels:
team-name: my-team
team-bu: bu-id
argoproj.io/cluster: test-cluster
spec:
destination:
namespace: dummy-namespace
@@ -55,10 +50,6 @@ kind: Application
metadata:
name: my-app-2
namespace: argocd
labels:
team-name: my-team
team-bu: bu-id
argoproj.io/cluster: test-cluster
spec:
destination:
namespace: dummy-namespace
@@ -86,10 +77,6 @@ metadata:
name: my-app-3
namespace: argocd
deletionTimestamp: "2020-03-16T09:17:45Z"
labels:
team-name: my-team
team-bu: bu-id
argoproj.io/cluster: test-cluster
spec:
destination:
namespace: dummy-namespace
@@ -151,7 +138,7 @@ func newFakeLister(fakeAppYAMLs ...string) (context.CancelFunc, applister.Applic
fakeApps = append(fakeApps, a)
}
appClientset := appclientset.NewSimpleClientset(fakeApps...)
factory := appinformer.NewSharedInformerFactoryWithOptions(appClientset, 0, appinformer.WithNamespace("argocd"), appinformer.WithTweakListOptions(func(options *metav1.ListOptions) {}))
factory := appinformer.NewFilteredSharedInformerFactory(appClientset, 0, "argocd", func(options *metav1.ListOptions) {})
appInformer := factory.Argoproj().V1alpha1().Applications().Informer()
go appInformer.Run(ctx.Done())
if !cache.WaitForCacheSync(ctx.Done(), appInformer.HasSynced) {
@@ -161,71 +148,30 @@ func newFakeLister(fakeAppYAMLs ...string) (context.CancelFunc, applister.Applic
}
func testApp(t *testing.T, fakeAppYAMLs []string, expectedResponse string) {
t.Helper()
testMetricServer(t, fakeAppYAMLs, expectedResponse, []string{})
}
type fakeClusterInfo struct {
clustersInfo []gitopsCache.ClusterInfo
}
func (f *fakeClusterInfo) GetClustersInfo() []gitopsCache.ClusterInfo {
return f.clustersInfo
}
type TestMetricServerConfig struct {
FakeAppYAMLs []string
ExpectedResponse string
AppLabels []string
ClustersInfo []gitopsCache.ClusterInfo
}
func testMetricServer(t *testing.T, fakeAppYAMLs []string, expectedResponse string, appLabels []string) {
t.Helper()
cfg := TestMetricServerConfig{
FakeAppYAMLs: fakeAppYAMLs,
ExpectedResponse: expectedResponse,
AppLabels: appLabels,
ClustersInfo: []gitopsCache.ClusterInfo{},
}
runTest(t, cfg)
}
func runTest(t *testing.T, cfg TestMetricServerConfig) {
t.Helper()
cancel, appLister := newFakeLister(cfg.FakeAppYAMLs...)
cancel, appLister := newFakeLister(fakeAppYAMLs...)
defer cancel()
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, cfg.AppLabels)
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck)
assert.NoError(t, err)
if len(cfg.ClustersInfo) > 0 {
ci := &fakeClusterInfo{clustersInfo: cfg.ClustersInfo}
collector := &clusterCollector{
infoSource: ci,
info: ci.GetClustersInfo(),
}
metricsServ.registry.MustRegister(collector)
}
req, err := http.NewRequest("GET", "/metrics", nil)
assert.NoError(t, err)
rr := httptest.NewRecorder()
metricsServ.Handler.ServeHTTP(rr, req)
assert.Equal(t, rr.Code, http.StatusOK)
body := rr.Body.String()
assertMetricsPrinted(t, cfg.ExpectedResponse, body)
log.Println(body)
assertMetricsPrinted(t, expectedResponse, body)
}
type testCombination struct {
applications []string
responseContains string
expectedResponse string
}
func TestMetrics(t *testing.T) {
combinations := []testCombination{
{
applications: []string{fakeApp, fakeApp2, fakeApp3},
responseContains: `
expectedResponse: `
# HELP argocd_app_info Information about application.
# TYPE argocd_app_info gauge
argocd_app_info{dest_namespace="dummy-namespace",dest_server="https://localhost:6443",health_status="Degraded",name="my-app-3",namespace="argocd",operation="delete",project="important-project",repo="https://github.com/argoproj/argocd-example-apps",sync_status="OutOfSync"} 1
@@ -235,7 +181,7 @@ argocd_app_info{dest_namespace="dummy-namespace",dest_server="https://localhost:
},
{
applications: []string{fakeDefaultApp},
responseContains: `
expectedResponse: `
# HELP argocd_app_info Information about application.
# TYPE argocd_app_info gauge
argocd_app_info{dest_namespace="dummy-namespace",dest_server="https://localhost:6443",health_status="Healthy",name="my-app",namespace="argocd",operation="",project="default",repo="https://github.com/argoproj/argocd-example-apps",sync_status="Synced"} 1
@@ -244,50 +190,7 @@ argocd_app_info{dest_namespace="dummy-namespace",dest_server="https://localhost:
}
for _, combination := range combinations {
testApp(t, combination.applications, combination.responseContains)
}
}
func TestMetricLabels(t *testing.T) {
type testCases struct {
testCombination
description string
metricLabels []string
}
cases := []testCases{
{
description: "will return the labels metrics successfully",
metricLabels: []string{"team-name", "team-bu", "argoproj.io/cluster"},
testCombination: testCombination{
applications: []string{fakeApp, fakeApp2, fakeApp3},
responseContains: `
# TYPE argocd_app_labels gauge
argocd_app_labels{label_argoproj_io_cluster="test-cluster",label_team_bu="bu-id",label_team_name="my-team",name="my-app",namespace="argocd",project="important-project"} 1
argocd_app_labels{label_argoproj_io_cluster="test-cluster",label_team_bu="bu-id",label_team_name="my-team",name="my-app-2",namespace="argocd",project="important-project"} 1
argocd_app_labels{label_argoproj_io_cluster="test-cluster",label_team_bu="bu-id",label_team_name="my-team",name="my-app-3",namespace="argocd",project="important-project"} 1
`,
},
},
{
description: "metric will have empty label value if not present in the application",
metricLabels: []string{"non-existing"},
testCombination: testCombination{
applications: []string{fakeApp, fakeApp2, fakeApp3},
responseContains: `
# TYPE argocd_app_labels gauge
argocd_app_labels{label_non_existing="",name="my-app",namespace="argocd",project="important-project"} 1
argocd_app_labels{label_non_existing="",name="my-app-2",namespace="argocd",project="important-project"} 1
argocd_app_labels{label_non_existing="",name="my-app-3",namespace="argocd",project="important-project"} 1
`,
},
},
}
for _, c := range cases {
c := c
t.Run(c.description, func(t *testing.T) {
testMetricServer(t, c.applications, c.responseContains, c.metricLabels)
})
testApp(t, combination.applications, combination.expectedResponse)
}
}
@@ -319,7 +222,7 @@ argocd_app_sync_status{name="my-app",namespace="argocd",project="important-proje
func TestMetricsSyncCounter(t *testing.T) {
cancel, appLister := newFakeLister()
defer cancel()
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{})
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck)
assert.NoError(t, err)
appSyncTotal := `
@@ -349,12 +252,11 @@ argocd_app_sync_total{dest_server="https://localhost:6443",name="my-app",namespa
// assertMetricsPrinted asserts every line in the expected lines appears in the body
func assertMetricsPrinted(t *testing.T, expectedLines, body string) {
t.Helper()
for _, line := range strings.Split(expectedLines, "\n") {
if line == "" {
continue
}
assert.Contains(t, body, line, "expected metrics mismatch")
assert.Contains(t, body, line)
}
}
@@ -371,7 +273,7 @@ func assertMetricsNotPrinted(t *testing.T, expectedLines, body string) {
func TestReconcileMetrics(t *testing.T) {
cancel, appLister := newFakeLister()
defer cancel()
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{})
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck)
assert.NoError(t, err)
appReconcileMetrics := `
@@ -404,7 +306,7 @@ argocd_app_reconcile_count{dest_server="https://localhost:6443",namespace="argoc
func TestMetricsReset(t *testing.T) {
cancel, appLister := newFakeLister()
defer cancel()
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck, []string{})
metricsServ, err := NewMetricsServer("localhost:8082", appLister, appFilter, noOpHealthCheck)
assert.NoError(t, err)
appSyncTotal := `

View File

@@ -1,101 +0,0 @@
package metrics
import (
"github.com/prometheus/client_golang/prometheus"
"k8s.io/client-go/util/workqueue"
)
const (
WorkQueueSubsystem = "workqueue"
DepthKey = "depth"
AddsKey = "adds_total"
QueueLatencyKey = "queue_duration_seconds"
WorkDurationKey = "work_duration_seconds"
UnfinishedWorkKey = "unfinished_work_seconds"
LongestRunningProcessorKey = "longest_running_processor_seconds"
RetriesKey = "retries_total"
)
var (
depth = prometheus.NewGaugeVec(prometheus.GaugeOpts{
Subsystem: WorkQueueSubsystem,
Name: DepthKey,
Help: "Current depth of workqueue",
}, []string{"name"})
adds = prometheus.NewCounterVec(prometheus.CounterOpts{
Subsystem: WorkQueueSubsystem,
Name: AddsKey,
Help: "Total number of adds handled by workqueue",
}, []string{"name"})
latency = prometheus.NewHistogramVec(prometheus.HistogramOpts{
Subsystem: WorkQueueSubsystem,
Name: QueueLatencyKey,
Help: "How long in seconds an item stays in workqueue before being requested",
Buckets: []float64{1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1, 5, 10, 15, 30, 60, 120, 180},
}, []string{"name"})
workDuration = prometheus.NewHistogramVec(prometheus.HistogramOpts{
Subsystem: WorkQueueSubsystem,
Name: WorkDurationKey,
Help: "How long in seconds processing an item from workqueue takes.",
Buckets: []float64{1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1, 5, 10, 15, 30, 60, 120, 180},
}, []string{"name"})
unfinished = prometheus.NewGaugeVec(prometheus.GaugeOpts{
Subsystem: WorkQueueSubsystem,
Name: UnfinishedWorkKey,
Help: "How many seconds of work has been done that " +
"is in progress and hasn't been observed by work_duration. Large " +
"values indicate stuck threads. One can deduce the number of stuck " +
"threads by observing the rate at which this increases.",
}, []string{"name"})
longestRunningProcessor = prometheus.NewGaugeVec(prometheus.GaugeOpts{
Subsystem: WorkQueueSubsystem,
Name: LongestRunningProcessorKey,
Help: "How many seconds has the longest running " +
"processor for workqueue been running.",
}, []string{"name"})
retries = prometheus.NewCounterVec(prometheus.CounterOpts{
Subsystem: WorkQueueSubsystem,
Name: RetriesKey,
Help: "Total number of retries handled by workqueue",
}, []string{"name"})
)
func init() {
workqueue.SetProvider(workqueueMetricsProvider{})
}
type workqueueMetricsProvider struct{}
func (workqueueMetricsProvider) NewDepthMetric(name string) workqueue.GaugeMetric {
return depth.WithLabelValues(name)
}
func (workqueueMetricsProvider) NewAddsMetric(name string) workqueue.CounterMetric {
return adds.WithLabelValues(name)
}
func (workqueueMetricsProvider) NewLatencyMetric(name string) workqueue.HistogramMetric {
return latency.WithLabelValues(name)
}
func (workqueueMetricsProvider) NewWorkDurationMetric(name string) workqueue.HistogramMetric {
return workDuration.WithLabelValues(name)
}
func (workqueueMetricsProvider) NewUnfinishedWorkSecondsMetric(name string) workqueue.SettableGaugeMetric {
return unfinished.WithLabelValues(name)
}
func (workqueueMetricsProvider) NewLongestRunningProcessorSecondsMetric(name string) workqueue.SettableGaugeMetric {
return longestRunningProcessor.WithLabelValues(name)
}
func (workqueueMetricsProvider) NewRetriesMetric(name string) workqueue.CounterMetric {
return retries.WithLabelValues(name)
}
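Once the provider above is registered via workqueue.SetProvider, any named workqueue created by the controller reports depth, adds, latency and the other series under its own name label; no per-queue wiring is needed. A minimal sketch, assuming client-go's workqueue package; the queue name is illustrative:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/util/workqueue"
)

func main() {
	// With a metrics provider registered (as in the init() above), every named
	// workqueue exposes workqueue_depth{name="..."} and related series automatically.
	q := workqueue.NewNamedRateLimitingQueue(workqueue.DefaultControllerRateLimiter(), "app_reconciliation_queue")
	q.Add("argocd/my-app")
	fmt.Println("queue depth:", q.Len())
	q.ShutDown()
}
```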

View File

@@ -13,6 +13,7 @@ import (
hookutil "github.com/argoproj/gitops-engine/pkg/sync/hook"
"github.com/argoproj/gitops-engine/pkg/sync/ignore"
resourceutil "github.com/argoproj/gitops-engine/pkg/sync/resource"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
kubeutil "github.com/argoproj/gitops-engine/pkg/utils/kube"
log "github.com/sirupsen/logrus"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -29,7 +30,6 @@ import (
appclientset "github.com/argoproj/argo-cd/v2/pkg/client/clientset/versioned"
"github.com/argoproj/argo-cd/v2/reposerver/apiclient"
"github.com/argoproj/argo-cd/v2/util/argo"
argodiff "github.com/argoproj/argo-cd/v2/util/argo/diff"
appstatecache "github.com/argoproj/argo-cd/v2/util/cache/appstate"
"github.com/argoproj/argo-cd/v2/util/db"
"github.com/argoproj/argo-cd/v2/util/gpg"
@@ -64,14 +64,13 @@ type AppStateManager interface {
SyncAppState(app *v1alpha1.Application, state *v1alpha1.OperationState)
}
// comparisonResult holds the state of an application after the reconciliation
type comparisonResult struct {
syncStatus *v1alpha1.SyncStatus
healthStatus *v1alpha1.HealthStatus
resources []v1alpha1.ResourceStatus
managedResources []managedResource
reconciliationResult sync.ReconciliationResult
diffConfig argodiff.DiffConfig
diffNormalizer diff.Normalizer
appSourceType v1alpha1.ApplicationSourceType
// timings maps phases of comparison to the duration it took to complete (for statistical purposes)
timings map[string]time.Duration
@@ -99,7 +98,6 @@ type appStateManager struct {
cache *appstatecache.Cache
namespace string
statusRefreshTimeout time.Duration
resourceTracking argo.ResourceTracking
}
func (m *appStateManager) getRepoObjs(app *v1alpha1.Application, source v1alpha1.ApplicationSource, appLabelKey, revision string, noCache, noRevisionCache, verifySignature bool, proj *v1alpha1.AppProject) ([]*unstructured.Unstructured, *apiclient.ManifestResponse, error) {
@@ -140,10 +138,6 @@ func (m *appStateManager) getRepoObjs(app *v1alpha1.Application, source v1alpha1
if err != nil {
return nil, nil, err
}
enabledSourceTypes, err := m.settingsMgr.GetEnabledSourceTypes()
if err != nil {
return nil, nil, err
}
ts.AddCheckpoint("plugins_ms")
tools := make([]*appv1.ConfigManagementPlugin, len(plugins))
for i := range plugins {
@@ -154,7 +148,6 @@ func (m *appStateManager) getRepoObjs(app *v1alpha1.Application, source v1alpha1
if err != nil {
return nil, nil, err
}
kustomizeOptions, err := kustomizeSettings.GetOptions(app.Spec.Source)
if err != nil {
return nil, nil, err
@@ -165,30 +158,28 @@ func (m *appStateManager) getRepoObjs(app *v1alpha1.Application, source v1alpha1
return nil, nil, err
}
ts.AddCheckpoint("build_options_ms")
serverVersion, apiResources, err := m.liveStateCache.GetVersionsInfo(app.Spec.Destination.Server)
serverVersion, apiGroups, err := m.liveStateCache.GetVersionsInfo(app.Spec.Destination.Server)
if err != nil {
return nil, nil, err
}
ts.AddCheckpoint("version_ms")
manifestInfo, err := repoClient.GenerateManifest(context.Background(), &apiclient.ManifestRequest{
Repo: repo,
Repos: permittedHelmRepos,
Revision: revision,
NoCache: noCache,
NoRevisionCache: noRevisionCache,
AppLabelKey: appLabelKey,
AppName: app.Name,
Namespace: app.Spec.Destination.Namespace,
ApplicationSource: &source,
Plugins: tools,
KustomizeOptions: kustomizeOptions,
KubeVersion: serverVersion,
ApiVersions: argo.APIResourcesToStrings(apiResources, true),
VerifySignature: verifySignature,
HelmRepoCreds: permittedHelmCredentials,
TrackingMethod: string(argo.GetTrackingMethod(m.settingsMgr)),
EnabledSourceTypes: enabledSourceTypes,
HelmOptions: helmOptions,
Repo: repo,
Repos: permittedHelmRepos,
Revision: revision,
NoCache: noCache,
NoRevisionCache: noRevisionCache,
AppLabelKey: appLabelKey,
AppName: app.Name,
Namespace: app.Spec.Destination.Namespace,
ApplicationSource: &source,
Plugins: tools,
KustomizeOptions: kustomizeOptions,
KubeVersion: serverVersion,
ApiVersions: argo.APIGroupsToVersions(apiGroups),
VerifySignature: verifySignature,
HelmRepoCreds: permittedHelmCredentials,
HelmOptions: helmOptions,
})
if err != nil {
return nil, nil, err
@@ -262,22 +253,24 @@ func DeduplicateTargetObjects(
return result, conditions, nil
}
// getComparisonSettings will return the system level settings related to the
// diff/normalization process.
func (m *appStateManager) getComparisonSettings() (string, map[string]v1alpha1.ResourceOverride, *settings.ResourcesFilter, error) {
func (m *appStateManager) getComparisonSettings(app *appv1.Application) (string, map[string]v1alpha1.ResourceOverride, diff.Normalizer, *settings.ResourcesFilter, error) {
resourceOverrides, err := m.settingsMgr.GetResourceOverrides()
if err != nil {
return "", nil, nil, err
return "", nil, nil, nil, err
}
appLabelKey, err := m.settingsMgr.GetAppInstanceLabelKey()
if err != nil {
return "", nil, nil, err
return "", nil, nil, nil, err
}
diffNormalizer, err := argo.NewDiffNormalizer(app.Spec.IgnoreDifferences, resourceOverrides)
if err != nil {
return "", nil, nil, nil, err
}
resFilter, err := m.settingsMgr.GetResourcesFilter()
if err != nil {
return "", nil, nil, err
return "", nil, nil, nil, err
}
return appLabelKey, resourceOverrides, resFilter, nil
return appLabelKey, resourceOverrides, diffNormalizer, resFilter, nil
}
// verifyGnuPGSignature verifies the result of a GnuPG operation for a given git
@@ -319,13 +312,64 @@ func verifyGnuPGSignature(revision string, project *appv1.AppProject, manifestIn
return conditions
}
func (m *appStateManager) diffArrayCached(configArray []*unstructured.Unstructured, liveArray []*unstructured.Unstructured, cachedDiff []*appv1.ResourceDiff, opts ...diff.Option) (*diff.DiffResultList, error) {
numItems := len(configArray)
if len(liveArray) != numItems {
return nil, fmt.Errorf("left and right arrays have mismatched lengths")
}
diffByKey := map[kube.ResourceKey]*appv1.ResourceDiff{}
for i := range cachedDiff {
res := cachedDiff[i]
diffByKey[kube.NewResourceKey(res.Group, res.Kind, res.Namespace, res.Name)] = cachedDiff[i]
}
diffResultList := diff.DiffResultList{
Diffs: make([]diff.DiffResult, numItems),
}
for i := 0; i < numItems; i++ {
config := configArray[i]
live := liveArray[i]
resourceVersion := ""
var key kube.ResourceKey
if live != nil {
key = kube.GetResourceKey(live)
resourceVersion = live.GetResourceVersion()
} else {
key = kube.GetResourceKey(config)
}
var dr *diff.DiffResult
if cachedDiff, ok := diffByKey[key]; ok && cachedDiff.ResourceVersion == resourceVersion {
dr = &diff.DiffResult{
NormalizedLive: []byte(cachedDiff.NormalizedLiveState),
PredictedLive: []byte(cachedDiff.PredictedLiveState),
Modified: cachedDiff.Modified,
}
} else {
res, err := diff.Diff(configArray[i], liveArray[i], opts...)
if err != nil {
return nil, err
}
dr = res
}
if dr != nil {
diffResultList.Diffs[i] = *dr
if dr.Modified {
diffResultList.Modified = true
}
}
}
return &diffResultList, nil
}
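diffArrayCached only recomputes a diff when the cached entry's ResourceVersion no longer matches the live object; everything else is served straight from the cached ResourceDiff. A toy sketch of that resourceVersion-keyed lookup, with resourceKey and cachedDiff standing in for kube.ResourceKey and appv1.ResourceDiff:

```go
package main

import "fmt"

// resourceKey mirrors kube.ResourceKey: group/kind/namespace/name.
type resourceKey struct{ group, kind, namespace, name string }

// cachedDiff is a stand-in for appv1.ResourceDiff: the stored diff result
// plus the live resourceVersion it was computed against.
type cachedDiff struct {
	resourceVersion string
	modified        bool
}

// lookupOrDiff reuses the cached result only when the live object's
// resourceVersion still matches; otherwise it recomputes (simulated here).
func lookupOrDiff(cache map[resourceKey]cachedDiff, key resourceKey, liveResourceVersion string) (bool, string) {
	if entry, ok := cache[key]; ok && entry.resourceVersion == liveResourceVersion {
		return entry.modified, "cache"
	}
	return true, "recomputed" // pretend the full diff found a change
}

func main() {
	key := resourceKey{"apps", "Deployment", "default", "guestbook"}
	cache := map[resourceKey]cachedDiff{key: {resourceVersion: "42", modified: false}}

	fmt.Println(lookupOrDiff(cache, key, "42")) // false cache
	fmt.Println(lookupOrDiff(cache, key, "43")) // true recomputed
}
```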
// CompareAppState compares application git state to the live app state, using the specified
// revision and supplied source. If revision or overrides are empty, then compares against
// revision and overrides in the app spec.
func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *appv1.AppProject, revision string, source v1alpha1.ApplicationSource, noCache bool, noRevisionCache bool, localManifests []string) *comparisonResult {
ts := stats.NewTimingStats()
appLabelKey, resourceOverrides, resFilter, err := m.getComparisonSettings()
appLabelKey, resourceOverrides, diffNormalizer, resFilter, err := m.getComparisonSettings(app)
ts.AddCheckpoint("settings_ms")
// return unknown comparison result if basic comparison settings cannot be loaded
@@ -417,16 +461,14 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *ap
// filter out all resources which are not permitted in the application project
for k, v := range liveObjByKey {
if !project.IsLiveResourcePermitted(v, app.Spec.Destination.Server, app.Spec.Destination.Name) {
if !project.IsLiveResourcePermitted(v, app.Spec.Destination.Server) {
delete(liveObjByKey, k)
}
}
trackingMethod := argo.GetTrackingMethod(m.settingsMgr)
for _, liveObj := range liveObjByKey {
if liveObj != nil {
appInstanceName := m.resourceTracking.GetAppName(liveObj, appLabelKey, trackingMethod)
appInstanceName := kubeutil.GetAppInstanceLabel(liveObj, appLabelKey)
if appInstanceName != "" && appInstanceName != app.Name {
conditions = append(conditions, v1alpha1.ApplicationCondition{
Type: v1alpha1.ApplicationConditionSharedResourceWarning,
@@ -446,26 +488,28 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *ap
compareOptions = settings.GetDefaultDiffOptions()
}
logCtx.Debugf("built managed objects list")
var diffResults *diff.DiffResultList
diffOpts := []diff.Option{
diff.WithNormalizer(diffNormalizer),
diff.IgnoreAggregatedRoles(compareOptions.IgnoreAggregatedRoles),
}
cachedDiff := make([]*appv1.ResourceDiff, 0)
// restore comparison using cached diff result if previous comparison was performed for the same revision
revisionChanged := manifestInfo == nil || app.Status.Sync.Revision != manifestInfo.Revision
specChanged := !reflect.DeepEqual(app.Status.Sync.ComparedTo, appv1.ComparedTo{Source: app.Spec.Source, Destination: app.Spec.Destination})
_, refreshRequested := app.IsRefreshRequested()
noCache = noCache || refreshRequested || app.Status.Expired(m.statusRefreshTimeout) || specChanged || revisionChanged
noCache = noCache || refreshRequested || app.Status.Expired(m.statusRefreshTimeout)
diffConfigBuilder := argodiff.NewDiffConfigBuilder().
WithDiffSettings(app.Spec.IgnoreDifferences, resourceOverrides, compareOptions.IgnoreAggregatedRoles).
WithTracking(appLabelKey, string(trackingMethod))
if noCache {
diffConfigBuilder.WithNoCache()
if noCache || specChanged || revisionChanged || m.cache.GetAppManagedResources(app.Name, &cachedDiff) != nil {
// (rare) cache miss
diffResults, err = diff.DiffArray(reconciliation.Target, reconciliation.Live, diffOpts...)
} else {
diffConfigBuilder.WithCache(m.cache, app.GetName())
diffResults, err = m.diffArrayCached(reconciliation.Target, reconciliation.Live, cachedDiff, diffOpts...)
}
// it is necessary to ignore the error at this point to avoid creating duplicated
// application conditions, as argo.StateDiffs will validate this diffConfig again.
diffConfig, _ := diffConfigBuilder.Build()
diffResults, err := argodiff.StateDiffs(reconciliation.Live, reconciliation.Target, diffConfig)
if err != nil {
diffResults = &diff.DiffResultList{}
failedToLoadObjs = true
@@ -586,7 +630,7 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *ap
resources: resourceSummaries,
managedResources: managedResources,
reconciliationResult: reconciliation,
diffConfig: diffConfig,
diffNormalizer: diffNormalizer,
diffResultList: diffResults,
}
if manifestInfo != nil {
@@ -643,7 +687,6 @@ func NewAppStateManager(
metricsServer *metrics.MetricsServer,
cache *appstatecache.Cache,
statusRefreshTimeout time.Duration,
resourceTracking argo.ResourceTracking,
) AppStateManager {
return &appStateManager{
liveStateCache: liveStateCache,
@@ -657,6 +700,5 @@ func NewAppStateManager(
projInformer: projInformer,
metricsServer: metricsServer,
statusRefreshTimeout: statusRefreshTimeout,
resourceTracking: resourceTracking,
}
}

View File

@@ -2,7 +2,6 @@ package controller
import (
"context"
"encoding/json"
"fmt"
"os"
"strconv"
@@ -12,7 +11,6 @@ import (
"github.com/argoproj/gitops-engine/pkg/sync"
"github.com/argoproj/gitops-engine/pkg/sync/common"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
jsonpatch "github.com/evanphx/json-patch"
log "github.com/sirupsen/logrus"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
@@ -24,7 +22,6 @@ import (
"github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
listersv1alpha1 "github.com/argoproj/argo-cd/v2/pkg/client/listers/application/v1alpha1"
"github.com/argoproj/argo-cd/v2/util/argo"
"github.com/argoproj/argo-cd/v2/util/argo/diff"
logutils "github.com/argoproj/argo-cd/v2/util/log"
"github.com/argoproj/argo-cd/v2/util/lua"
"github.com/argoproj/argo-cd/v2/util/rand"
@@ -63,16 +60,6 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, state *v1alpha
return
}
syncOp = *state.Operation.Sync
// fail the sync if the FailOnSharedResource option is set and shared resources are found
hasSharedResource, sharedResourceMessage := hasSharedResourceCondition(app)
if syncOp.SyncOptions.HasOption("FailOnSharedResource=true") &&
hasSharedResource {
state.Phase = common.OperationFailed
state.Message = fmt.Sprintf("Shared resouce found: %s", sharedResourceMessage)
return
}
if syncOp.Source == nil {
// normal sync case (where source is taken from app.spec.source)
source = app.Spec.Source
@@ -100,7 +87,7 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, state *v1alpha
revision = syncOp.Revision
}
proj, err := argo.GetAppProject(&app.Spec, listersv1alpha1.NewAppProjectLister(m.projInformer.GetIndexer()), m.namespace, m.settingsMgr, m.db, context.TODO())
proj, err := argo.GetAppProject(&app.Spec, listersv1alpha1.NewAppProjectLister(m.projInformer.GetIndexer()), m.namespace, m.settingsMgr)
if err != nil {
state.Phase = common.OperationError
state.Message = fmt.Sprintf("Failed to load application project: %v", err)
@@ -140,7 +127,13 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, state *v1alpha
}
atomic.AddUint64(&syncIdPrefix, 1)
syncId := fmt.Sprintf("%05d-%s", syncIdPrefix, rand.RandString(5))
randSuffix, err := rand.String(5)
if err != nil {
state.Phase = common.OperationError
state.Message = fmt.Sprintf("Failed generate random sync ID: %v", err)
return
}
syncId := fmt.Sprintf("%05d-%s", syncIdPrefix, randSuffix)
logEntry := log.WithFields(log.Fields{"application": app.Name, "syncId": syncId})
initialResourcesRes := make([]common.ResourceSyncResult, 0)
@@ -175,24 +168,9 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, state *v1alpha
return
}
reconciliationResult := compareResult.reconciliationResult
// if RespectIgnoreDifferences is enabled, normalize the target resources,
// which in this case means applying the live values of the configured
// ignore-differences fields.
if syncOp.SyncOptions.HasOption("RespectIgnoreDifferences=true") {
patchedTargets, err := normalizeTargetResources(compareResult)
if err != nil {
state.Phase = common.OperationError
state.Message = fmt.Sprintf("Failed to normalize target resources: %s", err)
return
}
reconciliationResult.Target = patchedTargets
}
syncCtx, cleanup, err := sync.NewSyncContext(
compareResult.syncStatus.Revision,
reconciliationResult,
compareResult.reconciliationResult,
restConfig,
rawConfig,
m.kubectl,
@@ -204,7 +182,7 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, state *v1alpha
if !proj.IsGroupKindPermitted(un.GroupVersionKind().GroupKind(), res.Namespaced) {
return fmt.Errorf("Resource %s:%s is not permitted in project %s.", un.GroupVersionKind().Group, un.GroupVersionKind().Kind, proj.Name)
}
if res.Namespaced && !proj.IsDestinationPermitted(v1alpha1.ApplicationDestination{Namespace: un.GetNamespace(), Server: app.Spec.Destination.Server, Name: app.Spec.Destination.Name}) {
if res.Namespaced && !proj.IsDestinationPermitted(v1alpha1.ApplicationDestination{Namespace: un.GetNamespace(), Server: app.Spec.Destination.Server}) {
return fmt.Errorf("namespace %v is not permitted in project '%s'", un.GetNamespace(), proj.Name)
}
return nil
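
A consolidated sketch of the per-resource project checks shown in the hunk above (the new version additionally passes the destination Name). The helper name `validateResource` is hypothetical; the `v1alpha1`, `unstructured`, and `fmt` imports are assumed:

```go
// Sketch only: checks that a resource's group/kind and target namespace are
// permitted by the AppProject before it is synced; mirrors the callback above.
func validateResource(proj *v1alpha1.AppProject, app *v1alpha1.Application, un *unstructured.Unstructured, namespaced bool) error {
	gk := un.GroupVersionKind().GroupKind()
	if !proj.IsGroupKindPermitted(gk, namespaced) {
		return fmt.Errorf("Resource %s:%s is not permitted in project %s.", gk.Group, gk.Kind, proj.Name)
	}
	if namespaced {
		dest := v1alpha1.ApplicationDestination{
			Namespace: un.GetNamespace(),
			Server:    app.Spec.Destination.Server,
			Name:      app.Spec.Destination.Name,
		}
		if !proj.IsDestinationPermitted(dest) {
			return fmt.Errorf("namespace %v is not permitted in project '%s'", un.GetNamespace(), proj.Name)
		}
	}
	return nil
}
```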
@@ -273,159 +251,6 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, state *v1alpha
}
}
// normalizeTargetResources applies the diff normalization to all live and target resources.
// It then calculates the merge patch between the normalized live and the current live resources.
// Finally, it applies that merge patch to the normalized target resources. This ensures that the
// target resources carry the same values as the live resources in the ignored diff fields, so
// those values are not applied in the cluster. Returns the list of normalized target resources.
func normalizeTargetResources(cr *comparisonResult) ([]*unstructured.Unstructured, error) {
// normalize live and target resources
normalized, err := diff.Normalize(cr.reconciliationResult.Live, cr.reconciliationResult.Target, cr.diffConfig)
if err != nil {
return nil, err
}
patchedTargets := []*unstructured.Unstructured{}
for idx, live := range cr.reconciliationResult.Live {
normalizedTarget := normalized.Targets[idx]
if normalizedTarget == nil {
patchedTargets = append(patchedTargets, nil)
continue
}
originalTarget := cr.reconciliationResult.Target[idx]
if live == nil {
patchedTargets = append(patchedTargets, originalTarget)
continue
}
// calculate targetPatch between normalized and target resource
targetPatch, err := getMergePatch(normalizedTarget, originalTarget)
if err != nil {
return nil, err
}
// check if there is a patch to apply. An empty patch is identified by a '{}' string.
if len(targetPatch) > 2 {
livePatch, err := getMergePatch(normalized.Lives[idx], live)
if err != nil {
return nil, err
}
// generate a minimal patch that uses the fields from targetPatch (template)
// with livePatch values
patch, err := compilePatch(targetPatch, livePatch)
if err != nil {
return nil, err
}
normalizedTarget, err = applyMergePatch(normalizedTarget, patch)
if err != nil {
return nil, err
}
} else {
// if there is no patch just use the original target
normalizedTarget = originalTarget
}
patchedTargets = append(patchedTargets, normalizedTarget)
}
return patchedTargets, nil
}
// compilePatch will generate a patch using the fields from templatePatch with
// the values from valuePatch.
func compilePatch(templatePatch, valuePatch []byte) ([]byte, error) {
templateMap := make(map[string]interface{})
err := json.Unmarshal(templatePatch, &templateMap)
if err != nil {
return nil, err
}
valueMap := make(map[string]interface{})
err = json.Unmarshal(valuePatch, &valueMap)
if err != nil {
return nil, err
}
resultMap := intersectMap(templateMap, valueMap)
return json.Marshal(resultMap)
}
// intersectMap returns a map containing the intersection of the fields from the two
// provided maps, populated with the values from valueMap.
func intersectMap(templateMap, valueMap map[string]interface{}) map[string]interface{} {
result := make(map[string]interface{})
for k, v := range templateMap {
if innerTMap, ok := v.(map[string]interface{}); ok {
if innerVMap, ok := valueMap[k].(map[string]interface{}); ok {
result[k] = intersectMap(innerTMap, innerVMap)
}
} else if innerTSlice, ok := v.([]interface{}); ok {
if innerVSlice, ok := valueMap[k].([]interface{}); ok {
items := []interface{}{}
for idx, innerTSliceValue := range innerTSlice {
if idx < len(innerVSlice) {
if tSliceValueMap, ok := innerTSliceValue.(map[string]interface{}); ok {
if vSliceValueMap, ok := innerVSlice[idx].(map[string]interface{}); ok {
item := intersectMap(tSliceValueMap, vSliceValueMap)
items = append(items, item)
}
} else {
items = append(items, innerVSlice[idx])
}
}
}
if len(items) > 0 {
result[k] = items
}
}
} else {
if _, ok := valueMap[k]; ok {
result[k] = valueMap[k]
}
}
}
return result
}
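
Since the recursive field intersection is the least obvious part of this flow, here is an illustrative in-package test with hypothetical inputs: template fields are kept only when they also exist in the value map, and the value map's values always win.

```go
// Illustration only: an in-package (package controller) test showing what
// intersectMap returns for a small, made-up template/value pair.
// Assumes the file's imports include "reflect" and "testing".
func TestIntersectMapExample(t *testing.T) {
	template := map[string]interface{}{
		"metadata": map[string]interface{}{
			"annotations": map[string]interface{}{"iksm-version": "1.0"},
		},
		"spec": map[string]interface{}{"replicas": float64(1)},
	}
	values := map[string]interface{}{
		"metadata": map[string]interface{}{
			// the value map wins for fields present in both maps;
			// "extra" is ignored because it is absent from the template.
			"annotations": map[string]interface{}{"iksm-version": "2.0", "extra": "x"},
		},
		// "spec" is dropped because it is missing from the value map.
	}
	want := map[string]interface{}{
		"metadata": map[string]interface{}{
			"annotations": map[string]interface{}{"iksm-version": "2.0"},
		},
	}
	if got := intersectMap(template, values); !reflect.DeepEqual(got, want) {
		t.Fatalf("unexpected intersection: %#v", got)
	}
}
```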
// getMergePatch calculates and returns the merge patch between the original and the
// modified unstructured objects.
func getMergePatch(original, modified *unstructured.Unstructured) ([]byte, error) {
originalJSON, err := original.MarshalJSON()
if err != nil {
return nil, err
}
modifiedJSON, err := modified.MarshalJSON()
if err != nil {
return nil, err
}
return jsonpatch.CreateMergePatch(originalJSON, modifiedJSON)
}
// applyMergePatch applies the given patch to obj and returns the patched
// unstructured object.
func applyMergePatch(obj *unstructured.Unstructured, patch []byte) (*unstructured.Unstructured, error) {
originalJSON, err := obj.MarshalJSON()
if err != nil {
return nil, err
}
patchedJSON, err := jsonpatch.MergePatch(originalJSON, patch)
if err != nil {
return nil, err
}
patchedObj := &unstructured.Unstructured{}
_, _, err = unstructured.UnstructuredJSONScheme.Decode(patchedJSON, nil, patchedObj)
if err != nil {
return nil, err
}
return patchedObj, nil
}
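
getMergePatch and applyMergePatch are thin wrappers around evanphx/json-patch's RFC 7386 merge-patch helpers. A small standalone sketch of that round trip on plain JSON; the documents and values are made up for illustration:

```go
// Standalone sketch of the merge-patch round trip used above, on plain JSON
// documents instead of unstructured objects.
package main

import (
	"fmt"

	jsonpatch "github.com/evanphx/json-patch"
)

func main() {
	live := []byte(`{"metadata":{"annotations":{"iksm-version":"2.0"}},"spec":{"replicas":4}}`)
	target := []byte(`{"metadata":{"annotations":{"iksm-version":"1.0"}},"spec":{"replicas":1}}`)

	// CreateMergePatch answers: "what would change if we moved from live to target?"
	// Here the patch is roughly {"metadata":{"annotations":{"iksm-version":"1.0"}},"spec":{"replicas":1}}.
	patch, err := jsonpatch.CreateMergePatch(live, target)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(patch))

	// MergePatch applies that patch back onto the original document,
	// yielding the target state again.
	patched, err := jsonpatch.MergePatch(live, patch)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(patched))
}
```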
// hasSharedResourceCondition will check if the Application has any resource that has already
// been synced by another Application. If the resource is found in another Application it returns
// true along with a human readable message of which specific resource has this condition.
func hasSharedResourceCondition(app *v1alpha1.Application) (bool, string) {
for _, condition := range app.Status.Conditions {
if condition.Type == v1alpha1.ApplicationConditionSharedResourceWarning {
return true, condition.Message
}
}
return false, ""
}
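
For context, the condition consumed by hasSharedResourceCondition is the SharedResourceWarning that the controller attaches when a resource is already tracked by another Application. A brief in-package illustration; the message text is made up:

```go
// Illustration only (same package): attach a shared-resource warning to an
// Application and read it back through hasSharedResourceCondition.
func exampleSharedResourceCheck() {
	app := &v1alpha1.Application{}
	app.Status.Conditions = append(app.Status.Conditions, v1alpha1.ApplicationCondition{
		Type:    v1alpha1.ApplicationConditionSharedResourceWarning,
		Message: "Deployment default/guestbook-ui is also managed by application 'other-app'", // hypothetical message
	})
	if shared, msg := hasSharedResourceCondition(app); shared {
		fmt.Printf("refusing to sync: %s\n", msg)
	}
}
```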
// delayBetweenSyncWaves is a gitops-engine SyncWaveHook which introduces an artificial delay
// between each sync wave. We introduce an artificial delay in order to give other controllers a
// _chance_ to react to the spec change that we just applied. This is important because without

View File

@@ -5,20 +5,15 @@ import (
"os"
"testing"
"github.com/argoproj/gitops-engine/pkg/sync"
"github.com/argoproj/gitops-engine/pkg/sync/common"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"github.com/argoproj/argo-cd/v2/controller/testdata"
"github.com/argoproj/argo-cd/v2/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/v2/reposerver/apiclient"
"github.com/argoproj/argo-cd/v2/test"
"github.com/argoproj/argo-cd/v2/util/argo/diff"
)
func TestPersistRevisionHistory(t *testing.T) {
@@ -147,204 +142,3 @@ func TestSyncComparisonError(t *testing.T) {
assert.NotEmpty(t, conditions)
assert.Equal(t, "abc123", opState.SyncResult.Revision)
}
func TestAppStateManager_SyncAppState(t *testing.T) {
type fixture struct {
project *v1alpha1.AppProject
application *v1alpha1.Application
controller *ApplicationController
}
setup := func() *fixture {
app := newFakeApp()
app.Status.OperationState = nil
app.Status.History = nil
project := &v1alpha1.AppProject{
ObjectMeta: v1.ObjectMeta{
Namespace: test.FakeArgoCDNamespace,
Name: "default",
},
Spec: v1alpha1.AppProjectSpec{
SignatureKeys: []v1alpha1.SignatureKey{{KeyID: "test"}},
},
}
data := fakeData{
apps: []runtime.Object{app, project},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
Revision: "abc123",
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data)
return &fixture{
project: project,
application: app,
controller: ctrl,
}
}
t.Run("will fail the sync if finds shared resources", func(t *testing.T) {
// given
t.Parallel()
f := setup()
syncErrorMsg := "deployment already applied by another application"
condition := v1alpha1.ApplicationCondition{
Type: v1alpha1.ApplicationConditionSharedResourceWarning,
Message: syncErrorMsg,
}
f.application.Status.Conditions = append(f.application.Status.Conditions, condition)
// Sync with source unspecified
opState := &v1alpha1.OperationState{Operation: v1alpha1.Operation{
Sync: &v1alpha1.SyncOperation{
Source: &v1alpha1.ApplicationSource{},
SyncOptions: []string{"FailOnSharedResource=true"},
},
}}
// when
f.controller.appStateManager.SyncAppState(f.application, opState)
// then
assert.Equal(t, common.OperationFailed, opState.Phase)
assert.Contains(t, opState.Message, syncErrorMsg)
})
}
func TestNormalizeTargetResources(t *testing.T) {
type fixture struct {
comparisonResult *comparisonResult
}
setup := func(t *testing.T, ignores []v1alpha1.ResourceIgnoreDifferences) *fixture {
t.Helper()
dc, err := diff.NewDiffConfigBuilder().
WithDiffSettings(ignores, nil, true).
WithNoCache().
Build()
require.NoError(t, err)
live := test.YamlToUnstructured(testdata.LiveDeploymentYaml)
target := test.YamlToUnstructured(testdata.TargetDeploymentYaml)
return &fixture{
&comparisonResult{
reconciliationResult: sync.ReconciliationResult{
Live: []*unstructured.Unstructured{live},
Target: []*unstructured.Unstructured{target},
},
diffConfig: dc,
},
}
}
t.Run("will modify target resource adding live state in fields it should ignore", func(t *testing.T) {
// given
ignore := v1alpha1.ResourceIgnoreDifferences{
Group: "*",
Kind: "*",
ManagedFieldsManagers: []string{"janitor"},
}
ignores := []v1alpha1.ResourceIgnoreDifferences{ignore}
f := setup(t, ignores)
// when
targets, err := normalizeTargetResources(f.comparisonResult)
// then
require.NoError(t, err)
require.Equal(t, 1, len(targets))
iksmVersion := targets[0].GetAnnotations()["iksm-version"]
assert.Equal(t, "2.0", iksmVersion)
})
t.Run("will not modify target resource if ignore difference is not configured", func(t *testing.T) {
// given
f := setup(t, []v1alpha1.ResourceIgnoreDifferences{})
// when
targets, err := normalizeTargetResources(f.comparisonResult)
// then
require.NoError(t, err)
require.Equal(t, 1, len(targets))
iksmVersion := targets[0].GetAnnotations()["iksm-version"]
assert.Equal(t, "1.0", iksmVersion)
})
t.Run("will remove fields from target if not present in live", func(t *testing.T) {
ignore := v1alpha1.ResourceIgnoreDifferences{
Group: "apps",
Kind: "Deployment",
JSONPointers: []string{"/metadata/annotations/iksm-version"},
}
ignores := []v1alpha1.ResourceIgnoreDifferences{ignore}
f := setup(t, ignores)
live := f.comparisonResult.reconciliationResult.Live[0]
unstructured.RemoveNestedField(live.Object, "metadata", "annotations", "iksm-version")
// when
targets, err := normalizeTargetResources(f.comparisonResult)
// then
require.NoError(t, err)
require.Equal(t, 1, len(targets))
_, ok := targets[0].GetAnnotations()["iksm-version"]
assert.False(t, ok)
})
t.Run("will correctly normalize with multiple ignore configurations", func(t *testing.T) {
// given
ignores := []v1alpha1.ResourceIgnoreDifferences{
{
Group: "apps",
Kind: "Deployment",
JSONPointers: []string{"/spec/replicas"},
},
{
Group: "*",
Kind: "*",
ManagedFieldsManagers: []string{"janitor"},
},
}
f := setup(t, ignores)
// when
targets, err := normalizeTargetResources(f.comparisonResult)
// then
require.NoError(t, err)
require.Equal(t, 1, len(targets))
normalized := targets[0]
iksmVersion, ok := normalized.GetAnnotations()["iksm-version"]
require.True(t, ok)
assert.Equal(t, "2.0", iksmVersion)
replicas, ok, err := unstructured.NestedInt64(normalized.Object, "spec", "replicas")
require.NoError(t, err)
require.True(t, ok)
assert.Equal(t, int64(4), replicas)
})
t.Run("will keep new array entries not found in live state if not ignored", func(t *testing.T) {
t.Skip("limitation in the current implementation")
// given
ignores := []v1alpha1.ResourceIgnoreDifferences{
{
Group: "apps",
Kind: "Deployment",
JQPathExpressions: []string{".spec.template.spec.containers[] | select(.name == \"guestbook-ui\")"},
},
}
f := setup(t, ignores)
target := test.YamlToUnstructured(testdata.TargetDeploymentNewEntries)
f.comparisonResult.reconciliationResult.Target = []*unstructured.Unstructured{target}
// when
targets, err := normalizeTargetResources(f.comparisonResult)
// then
require.NoError(t, err)
require.Equal(t, 1, len(targets))
containers, ok, err := unstructured.NestedSlice(targets[0].Object, "spec", "template", "spec", "containers")
require.NoError(t, err)
require.True(t, ok)
assert.Equal(t, 2, len(containers))
})
}

View File

@@ -1,14 +0,0 @@
package testdata
import _ "embed"
var (
//go:embed live-deployment.yaml
LiveDeploymentYaml string
//go:embed target-deployment.yaml
TargetDeploymentYaml string
//go:embed target-deployment-new-entries.yaml
TargetDeploymentNewEntries string
)

View File

@@ -1,177 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
argocd.argoproj.io/tracking-id: 'guestbook:apps/Deployment:default/kustomize-guestbook-ui'
deployment.kubernetes.io/revision: '9'
iksm-version: '2.0'
kubectl.kubernetes.io/last-applied-configuration: >
{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{"argocd.argoproj.io/tracking-id":"guestbook:apps/Deployment:default/kustomize-guestbook-ui","iksm-version":"2.0"},"name":"kustomize-guestbook-ui","namespace":"default"},"spec":{"replicas":4,"revisionHistoryLimit":3,"selector":{"matchLabels":{"app":"guestbook-ui"}},"template":{"metadata":{"labels":{"app":"guestbook-ui"}},"spec":{"containers":[{"env":[{"name":"SOME_ENV_VAR","value":"some_value"}],"image":"gcr.io/heptio-images/ks-guestbook-demo:0.1","name":"guestbook-ui","ports":[{"containerPort":80}],"resources":{"requests":{"cpu":"50m","memory":"100Mi"}}}]}}}}
creationTimestamp: '2022-01-05T15:45:21Z'
generation: 119
managedFields:
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
'f:metadata':
'f:annotations':
'f:iksm-version': {}
manager: janitor
operation: Apply
time: '2022-01-06T18:21:04Z'
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
'f:metadata':
'f:annotations':
.: {}
'f:argocd.argoproj.io/tracking-id': {}
'f:kubectl.kubernetes.io/last-applied-configuration': {}
'f:spec':
'f:progressDeadlineSeconds': {}
'f:replicas': {}
'f:revisionHistoryLimit': {}
'f:selector': {}
'f:strategy':
'f:rollingUpdate':
.: {}
'f:maxSurge': {}
'f:maxUnavailable': {}
'f:type': {}
'f:template':
'f:metadata':
'f:labels':
.: {}
'f:app': {}
'f:spec':
'f:containers':
'k:{"name":"guestbook-ui"}':
.: {}
'f:env':
.: {}
'k:{"name":"SOME_ENV_VAR"}':
.: {}
'f:name': {}
'f:value': {}
'f:image': {}
'f:imagePullPolicy': {}
'f:name': {}
'f:ports':
.: {}
'k:{"containerPort":80,"protocol":"TCP"}':
.: {}
'f:containerPort': {}
'f:protocol': {}
'f:resources':
.: {}
'f:requests':
.: {}
'f:cpu': {}
'f:memory': {}
'f:terminationMessagePath': {}
'f:terminationMessagePolicy': {}
'f:dnsPolicy': {}
'f:restartPolicy': {}
'f:schedulerName': {}
'f:securityContext': {}
'f:terminationGracePeriodSeconds': {}
manager: argocd
operation: Update
time: '2022-01-06T15:04:15Z'
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
'f:metadata':
'f:annotations':
'f:deployment.kubernetes.io/revision': {}
'f:status':
'f:availableReplicas': {}
'f:conditions':
.: {}
'k:{"type":"Available"}':
.: {}
'f:lastTransitionTime': {}
'f:lastUpdateTime': {}
'f:message': {}
'f:reason': {}
'f:status': {}
'f:type': {}
'k:{"type":"Progressing"}':
.: {}
'f:lastTransitionTime': {}
'f:lastUpdateTime': {}
'f:message': {}
'f:reason': {}
'f:status': {}
'f:type': {}
'f:observedGeneration': {}
'f:readyReplicas': {}
'f:replicas': {}
'f:updatedReplicas': {}
manager: kube-controller-manager
operation: Update
time: '2022-01-06T18:15:14Z'
name: kustomize-guestbook-ui
namespace: default
resourceVersion: '8289211'
uid: ef253575-ce44-4c5e-84ad-16e81d0df6eb
spec:
progressDeadlineSeconds: 600
replicas: 4
revisionHistoryLimit: 3
selector:
matchLabels:
app: guestbook-ui
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: guestbook-ui
spec:
containers:
- env:
- name: SOME_ENV_VAR
value: some_value
image: 'gcr.io/heptio-images/ks-guestbook-demo:0.1'
imagePullPolicy: IfNotPresent
name: guestbook-ui
ports:
- containerPort: 80
protocol: TCP
resources:
requests:
cpu: 50m
memory: 100Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
availableReplicas: 4
conditions:
- lastTransitionTime: '2022-01-05T22:20:37Z'
lastUpdateTime: '2022-01-05T22:43:47Z'
message: >-
ReplicaSet "kustomize-guestbook-ui-6549d54677" has successfully
progressed.
reason: NewReplicaSetAvailable
status: 'True'
type: Progressing
- lastTransitionTime: '2022-01-06T18:15:14Z'
lastUpdateTime: '2022-01-06T18:15:14Z'
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: 'True'
type: Available
observedGeneration: 119
readyReplicas: 4
replicas: 4
updatedReplicas: 4

View File

@@ -1,36 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
argocd.argoproj.io/tracking-id: 'guestbook:apps/Deployment:default/kustomize-guestbook-ui'
iksm-version: '1.0'
name: kustomize-guestbook-ui
namespace: default
spec:
replicas: 1
revisionHistoryLimit: 3
selector:
matchLabels:
app: guestbook-ui
template:
metadata:
labels:
app: guestbook-ui
spec:
containers:
- name: guestbook-ui
image: 'gcr.io/heptio-images/ks-guestbook-demo:0.1'
env:
- name: SOME_ENV_VAR
value: some_value
- name: NEW_ENV_VAR
value: new_value
ports:
- containerPort: 80
- grpcPort: 8081
resources:
requests:
cpu: 50m
memory: 100Mi
- name: new-container
image: 'new-image:1.0'

View File

@@ -1,31 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
argocd.argoproj.io/tracking-id: 'guestbook:apps/Deployment:default/kustomize-guestbook-ui'
iksm-version: '1.0'
name: kustomize-guestbook-ui
namespace: default
spec:
replicas: 1
revisionHistoryLimit: 3
selector:
matchLabels:
app: guestbook-ui
template:
metadata:
labels:
app: guestbook-ui
spec:
containers:
- env:
- name: SOME_ENV_VAR
value: some_value
image: 'gcr.io/heptio-images/ks-guestbook-demo:0.1'
name: guestbook-ui
ports:
- containerPort: 80
resources:
requests:
cpu: 50m
memory: 100Mi

docs/assets/azure-enterprise-claims.png Normal file → Executable file (content unchanged, 7.2 KiB)

docs/assets/azure-enterprise-saml-urls.png Normal file → Executable file (content unchanged, 52 KiB)

docs/assets/azure-enterprise-users.png Normal file → Executable file (content unchanged, 31 KiB)

Three binary image files deleted (not shown; previously 31 KiB, 19 KiB, and 35 KiB)

View File

@@ -4,12 +4,6 @@ You can download the latest Argo CD version from [the latest release page of thi
## Linux and WSL
### ArchLinux User Repository ([AUR](https://aur.archlinux.org/packages/))
```bash
yay -Sy argocd-bin
```
### Homebrew
```bash

View File

@@ -15,7 +15,7 @@ Since the CI pipeline is triggered on Git commits, there is currently no (known)
If you are absolutely sure that the failure was due to a failure in the pipeline, and not an error within the changes you committed, you can push an empty commit to your branch, thus retriggering the pipeline without any code changes. To do so, issue
```bash
git commit -s --allow-empty -m "Retrigger CI pipeline"
git commit --allow-empty -m "Retrigger CI pipeline"
git push origin <yourbranch>
```
@@ -23,9 +23,7 @@ git push origin <yourbranch>
First, make sure the failing build step succeeds on your machine. Remember the containerized build toolchain is available, too.
If the build is failing at the `Ensure Go modules synchronicity` step, you need to first download all Go dependent modules locally via `go mod download` and then run `go mod tidy` to make sure the dependent Go modules are tidied up. Finally commit and push your changes to `go.mod` and `go.sum` to your branch.
If the build is failing at the `Build & cache Go code`, you need to make sure `make build-local` runs successfully on your local machine.
If the build is failing at the `Ensuring Gopkg.lock is up-to-date` step, you need to update the dependencies before you push your commits. Run `make dep-ensure` and `make dep` and commit the changes to `Gopkg.lock` to your branch.
### Why does the codegen step fail?

View File

@@ -45,7 +45,7 @@ If you make changes to the Argo UI component, and your Argo CD changes depend on
1. Make changes to Argo UI and submit the PR request.
2. Also, prepare your Argo CD changes, but don't create the PR just yet.
3. **After** the Argo UI PR has been merged to master, then as part of your Argo CD changes:
- Run `yarn add git+https://github.com/argoproj/argo-ui.git` in the `ui/` directory, and then,
- Run `yarn add git+https://github.com/argoproj/argo-ui.git`, and then,
- Check in the regenerated yarn.lock file as part of your Argo CD commit
4. Create the Argo CD PR when you are ready. The PR build and test checks should pass.

View File

@@ -29,7 +29,7 @@ The Docker version must be fairly recent, and support multi-stage builds. You sh
* Obviously, you will need a `git` client for pulling source code and pushing back your changes.
* Last but not least, you will need a Go SDK and related tools (such as GNU `make`) installed and working on your development environment. The minimum required Go version for building and testing Argo CD is **v1.17**.
* Last but not least, you will need a Go SDK and related tools (such as GNU `make`) installed and working on your development environment. The minimum required Go version for building and testing Argo CD is **v1.16**.
* We will assume that your Go workspace is at `~/go`.
@@ -269,7 +269,7 @@ and others. Although you can make changes to these files and run them locally, i
6. Commit changes and open a PR to [Argo UI](https://github.com/argoproj/argo-ui).
7. Once your PR has been merged in Argo UI, `cd` into your `argo-cd/ui` folder and run `yarn add git+https://github.com/argoproj/argo-ui.git`. This will update the commit SHA in the `ui/yarn.lock` file to use the latest master commit for argo-ui.
7. Once your PR has been merged in Argo UI, `cd` into your `argo-cd` folder and run `yarn add https://github.com/argoproj/argo-ui.git`. This will update the commit SHA in the `ui/yarn.lock` file to use the latest master commit for argo-ui.
8. Submit changes to `ui/yarn.lock`in a PR to Argo CD.
@@ -304,7 +304,7 @@ You need to pull in all required Go dependencies. To do so, run
### Test your build toolchain
The first thing you can do to test whether your build toolchain is setup correctly is by generating the glue code for the API and after that, run a normal build:
The first thing you can do whether your build toolchain is setup correctly is by generating the glue code for the API and after that, run a normal build:
* `make codegen-local`
* `make build-local`

Some files were not shown because too many files have changed in this diff.