Compare commits

...

226 Commits

Author SHA1 Message Date
Isaac Gaskin
c6cd7c5e32 chore: helm2 version bump (#4724)
* chore: helm2 version bump
2021-02-26 09:00:53 -08:00
jannfis
dd26b73e40 chore: Replace deprecated GH actions directives for integration tests (#4589)
* chore: Replace deprecated set-env directives

* revert lint version change

* Revert go.mod and go.sum changes

* Fix typo

* Update golangci-lint-action to v2

* Fix golangci-lint version

* Skip new lint complaints in test

* Skip more new lint complaints in test

* Exclude new SA5011 check in lint
2021-02-25 22:42:24 -08:00
jannfis
b8910f943d chore: Replace deprecated commands for release action (#4593) 2021-02-25 22:22:01 -08:00
Alexander Matyushentsev
fe80a4072e fix: API server should not print resource body when resource update fails (#5617)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2021-02-25 19:26:52 -08:00
jannfis
d4d767d831 chore: Update golang to v1.14.12 [backport to release-1.6] (#4835)
* chore: Update golang to v1.14.12 [backport to release-1.6]

Signed-off-by: jannfis <jann@mistrust.net>
2020-11-16 09:10:15 -08:00
Alexander Matyushentsev
f282a33e59 fix: kustomize-version flag is not working on 'argocd app set' command (#4040) 2020-08-04 15:05:22 -07:00
argo-bot
3d1f37b0c5 Bump version to 1.6.2 2020-07-31 23:33:25 +00:00
argo-bot
1582ca665a Bump version to 1.6.2 2020-07-31 23:33:15 +00:00
May Zhang
345b77ecda feat: adding validate for app create and app set (#4016)
* feat: adding disable-validation for app create and app set

* feat: adding disable-validation for app create and app set

* feat: change test func name

* feat: added support of app unset and app edit in addition to app create and app set.

* feat: remove extra space.
2020-07-31 16:01:57 -07:00
Alexander Matyushentsev
9d1d605e56 fix: use glob matcher in casbin built-in model (#3966) 2020-07-29 11:13:24 -07:00
jannfis
47507930da fix: Normalize Helm chart path when chart name contains a slash (#3987)
* fix: Normalize Helm chart path when chart name contains a slash

* handle special cases and add unit tests
2020-07-28 13:11:27 -07:00
Simon Clifford
a066c97fd9 fix: allow duplicates when using generateName (#3878)
* The generateName field produces names at creation time, so it should not cause a duplication warning
* Updating existing test case to check no additional conditions are added
2020-07-28 13:11:22 -07:00
Alexander Matyushentsev
0d8f996b85 fix: nil pointer dereference while syncing an app (#3915) 2020-07-28 09:39:05 -07:00
argo-bot
159674ee84 Bump version to 1.6.1 2020-06-19 00:30:22 +00:00
argo-bot
73d176bda3 Bump version to 1.6.1 2020-06-19 00:30:12 +00:00
Alexander Matyushentsev
bc5ad35238 chore: trigger-release should not fail if commentChar not configured (#3740) 2020-06-18 17:29:32 -07:00
Alexander Matyushentsev
cbc0b2fcd2 chore: fix tag matching in trigger-release.sh script (#3785) 2020-06-18 12:58:23 -07:00
Alexander Matyushentsev
18628acae9 fix: return default scopes if custom scopes are not configured (#3805) 2020-06-18 12:36:57 -07:00
argo-bot
c10ae246ab Bump version to 1.6.0 2020-06-16 22:30:38 +00:00
argo-bot
41e72d1247 Bump version to 1.6.0 2020-06-16 22:30:28 +00:00
Timothy Vandenbrande
6393d925bf fix: use uid instead of named user in Dockerfile (#3108) 2020-06-16 15:06:52 -07:00
Alexander Matyushentsev
faffea357e feat: upgrade redis to 5.0.8-alpine (#3783) 2020-06-16 14:04:49 -07:00
Alexander Matyushentsev
623b4757f4 fix: upgrade awscli version (#3774) 2020-06-16 10:11:19 -07:00
Alexander Matyushentsev
a68f483fa3 fix: html encode login error/description before rendering it (#3773) 2020-06-15 16:34:01 -07:00
Alexander Matyushentsev
a17217d9e8 fix: avoid panic in badge handler (#3741) 2020-06-15 12:02:14 -07:00
Alexander Matyushentsev
a7c776d356 fix: cluster state cache should be initialized before using (#3752) (#3763) 2020-06-12 20:18:02 -07:00
May Zhang
247879b424 feat: Support additional metadata in Application sync operation (#3747)
* feat: Support additional metadata in Application sync operation

* regenerated generated.pb.go
2020-06-11 14:39:33 -07:00
Alexander Matyushentsev
2058cb6374 fix: ensure cache settings read/writes are protected by mutex (#3753) 2020-06-11 13:13:48 -07:00
argo-bot
f5bcc1dbcc Bump version to 1.6.0-rc2 2020-06-09 22:12:40 +00:00
argo-bot
4d7e6945bd Bump version to 1.6.0-rc2 2020-06-09 22:12:30 +00:00
jannfis
df65d017a2 fix: Reap orphaned ("zombie") processes in argocd-repo-server pod (#3611) (#3721)
* fix: Reap orphaned ("zombie") processes in argocd-repo-server pod
2020-06-09 13:59:30 -07:00
jannfis
283645e63e chore: Introduce release automation (#3711)
* chore: Introduce release automation
2020-06-09 12:14:14 -07:00
Alexander Matyushentsev
0e016a368d fix: delete api should return 404 error if app does not exist (#3739) 2020-06-09 11:54:22 -07:00
jannfis
26d711a6db fix: Fix possible nil pointer deref on resource deduplication (#3725) 2020-06-08 15:35:44 -07:00
Alexander Matyushentsev
1f12aac601 fix: upgrade gitops-engine dependency to v0.1.2 (#3729) 2020-06-08 15:13:49 -07:00
Alexander Matyushentsev
b39e93a4d2 Update manifests to v1.6.0-rc1 2020-06-02 13:30:08 -07:00
Alexander Matyushentsev
97aa28a687 Update manifests to v1.6.0-rc1 2020-06-02 12:15:02 -07:00
Alexander Matyushentsev
e775b8fce8 chore: run integration tests in release branches (#3697) 2020-06-02 11:07:06 -07:00
David Maciel
84bece53a3 docs: Use official curl image instead of appropriate/curl (#3695)
The appropriate/curl image was last updated two years ago:

```
$ docker run -it --entrypoint /bin/ash appropriate/curl:latest
$ curl --version
curl 7.59.0 (x86_64-alpine-linux-musl) libcurl/7.59.0 LibreSSL/2.6.3 zlib/1.2.11 libssh2/1.8.0
Release-Date: 2018-03-14
Protocols: dict file ftp ftps gopher http https imap imaps pop3 pop3s rtsp scp sftp smb smbs smtp smtps telnet tftp
Features: AsynchDNS IPv6 Largefile NTLM NTLM_WB SSL libz UnixSockets HTTPS-proxy
```

There is already an official curl image at https://hub.docker.com/r/curlimages/curl (The link to the docker hub can be found in the official curl download page: https://curl.haxx.se/download.html)
```
$ docker run -it --entrypoint /bin/ash curlimages/curl:latest
$ curl --version
curl 7.70.0-DEV (x86_64-pc-linux-musl) libcurl/7.70.0-DEV OpenSSL/1.1.1d zlib/1.2.11 libssh2/1.9.0 nghttp2/1.40.0
Release-Date: [unreleased]
Protocols: dict file ftp ftps gopher http https imap imaps pop3 pop3s rtsp scp sftp smb smbs smtp smtps telnet tftp
Features: AsynchDNS HTTP2 HTTPS-proxy IPv6 Largefile libz NTLM NTLM_WB SSL TLS-SRP UnixSockets
```
2020-06-02 18:45:15 +02:00
Alexander Matyushentsev
4a6fe4cd31 feat: upgrade kustomize to v3.6.1 version (#3696) 2020-06-02 09:37:10 -07:00
Isaac Gaskin
45cb1d5967 fix: correcting arg positions for Sprintf example message (#3692) 2020-06-02 09:29:36 +02:00
jannfis
415d5e569f chore: Add GH actions status badge to README (#3690) 2020-06-01 21:52:23 +02:00
Alexander Matyushentsev
03a0a192ec refactor: upgrade gitops engine (#3687) 2020-06-01 20:23:07 +02:00
jannfis
a40f3689b3 chore: Add missing asset to Dockerfile (#3678)
* chore: Fix complaints of golang-ci lint v1.26.0

* chore: Fix Dockerfile
2020-05-31 19:28:16 -07:00
jannfis
80515a3b57 chore: Migrate CI toolchain to GitHub actions (#3677)
* chore: Migrate CI to GitHub actions

* Do not install golangci-lint, we use the action

* Integrate codecov.io upload

* Use some better names for analyze job & steps

* go mod tidy

* Update tools

* Disable CircleCI completely

* Satisfy CircleCI with a dummy step until it's disabled
2020-05-31 23:11:28 +02:00
Alin Balutoiu
00f99edf1a feat: Add build support for ARM images (#3554)
Signed-off-by: Alin Balutoiu <alinbalutoiu@gmail.com>
2020-05-31 19:31:29 +02:00
jannfis
bc83719037 chore: Fix complaints of golang-ci lint v1.26.0 (#3673) 2020-05-30 18:54:14 -07:00
Alexander Matyushentsev
2926f2bd60 fix: settings manager should invalidate cache after updating repositories/repository credentials (#3672) 2020-05-30 10:00:11 +02:00
Henno Schooljan
7b2a95e83c fix: Allow unsetting the last remaining values file (#3644) (#3645)
* fix: Allow unsetting the last values file (#3644)

Because the `setHelmOpt()` function does not act on empty inputs, it 
would do nothing when removing the last values file using `argocd app 
unset`.
The parameter overrides are actually being unset correctly, so this has 
been changed to work the same way by manipulating `app.Spec.Source.Helm` 
directly.

This fixes #3644.

* fix: Allow unsetting the last values file, add tests

* Retrigger CI pipeline
2020-05-29 22:27:40 -07:00
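A minimal shell session exercising this fix (the app name `guestbook` and the values file name are hypothetical):

```
# Before this fix, unsetting the only remaining values file silently did nothing;
# now the entry is removed from .spec.source.helm directly
$ argocd app unset guestbook --values values-production.yaml
```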
AGloberman
d89b7d8a41 fix: Read cert data from kubeconfig during cluster addition and use if present (#3655) (#3667)
* Read cert data from kubeconfig and use if present

* gofmt test updates
2020-05-29 11:33:56 -07:00
Henno Schooljan
36da074344 feat: CLI: Allow setting Helm values literal (#3601) (#3646)
* feat: CLI: Allow setting Helm values literal (#3601)

While you could already set values files using the `--values` flag with external data using the CLI with `argocd app create` and `argocd app set`, this was not yet possible for managing the literal `.spec.source.helm.values` value in an Application without resorting to a complicated `argocd app patch` escaped parameter or by generating the entire application YAML manifest by yourself.

Therefore, this PR adds a `--values-literal-file` flag to the `argocd app create` and `argocd app set` commands, which accepts a local file name or URL to a values file, which will be read and included as a multiline string in the application manifest. This is different from the `--helm-set-file` flag which expects the file in the chart itself.

The `argocd app unset` command is expanded with a `--values-literal` flag, so we can also unset this field again.

I hope I chose nice enough names for the flags, I wanted to make clear it expects a file name, but also distinguish it enough from the existing `--values` flag which actually points to values files.

Because the current `setHelmOpt()` functionality would not work for unsetting things to an empty value and it was difficult to do these changes independently, this PR also contains the fix for issue #3644. A separate PR has still been created for that one because I think it should end up as a separate issue in the release notes.

* feat: CLI: Allow setting Helm values literal, add tests
2020-05-29 09:00:28 -07:00
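A sketch of the CLI surface described above (app and file names are hypothetical):

```
# Embed a local values file verbatim into .spec.source.helm.values
$ argocd app set guestbook --values-literal-file ./values-override.yaml

# A URL is also accepted as the source of the literal values
$ argocd app set guestbook --values-literal-file https://example.com/values.yaml

# Unset the literal values field again
$ argocd app unset guestbook --values-literal
```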
Hyungrok Kim
2277af2f32 docs: Add PUBG to USERS.md (#3669) 2020-05-29 08:14:32 -07:00
Alexander Matyushentsev
ee64a4d9ca fix: upgrade gitops engine dependency (#3668) 2020-05-28 18:42:01 -07:00
Fred Dubois
d9bae2f83f Add AppDirect to USERS.md (#3666) 2020-05-28 11:43:57 -07:00
Josh Soref
a724574ede chore: Spelling (#3647)
chore: Spelling (#3647)
2020-05-27 10:22:13 -07:00
Buvanesh Kumar
1c5fc0076e Dynamically get the account details from gcloud (#3662) 2020-05-27 10:10:56 -07:00
Alexander Matyushentsev
f826b0e397 chore: fix image dev build command (#3659) 2020-05-27 08:04:09 +02:00
jannfis
196eb16244 chore: Migrate GH image build action to go modules (#3654) 2020-05-26 14:10:39 -07:00
Josh Soref
09f3b45e39 chore: Spelling - md files (#3651)
chore: Spelling - md files (#3651)
2020-05-26 13:58:29 -07:00
jannfis
c914ea0218 chore: Update Dockerfile to reflect switch to go modules (#3652)
* chore: Update Dockerfile to reflect switch to go modules
2020-05-26 12:42:58 -07:00
jannfis
5ad46b025a docs: Adapt contribution guide for Go modules (#3653) 2020-05-26 21:30:40 +02:00
Alexander Matyushentsev
22dcc1e87d fix: prevent possible null pointer dereference error in TestAddHelmRepoInsecureSkipVerify test (#3650) 2020-05-26 10:52:12 -07:00
Alexander Matyushentsev
91d5d7e37b fix: sort output of 'argocd-util resource-overrides action list' command (#3649) 2020-05-26 10:49:28 -07:00
jannfis
23bf07d206 chore: Migrate to Go modules (#3639)
* chore: Migrate to Go modules

* Update CircleCI config

* Fix path

* Attach vendor for test step correctly

* restore_vendor -> attach_vendor

* Update cache path

* Checkout code before attaching vendor

* Move checkout to even earlier in job

* Don't restore cache for e2e step

* .

* Explicitly set GOPATH

* Restore Build cache

* Fix permissions

* Set correct environment for docker env

* Uncache everything

* Fix permissions

* Use workspace for caching Go code

* .

* go mod tidy

* Try to speed up builds

* Make mod target implicit dependencies

* Do not call make mod-download or mod-vendor

* Fix permissions

* Don't have module dependencies on test-e2e-local

* Fix config

* Bye bye

* Remove test parallelism

* Get max test parallelism back in, but with lower value
2020-05-26 18:15:38 +02:00
Im, Juno
6637ee5c24 fix: oidc should set samesite cookie (#3632)
* fix: oidc should set samesite cookie

* edit: adapt the path value for cookies from root path
2020-05-26 09:13:50 +02:00
jannfis
867282f726 fix: Allow underscores in hostnames in certificate module (#3596) 2020-05-25 21:34:06 +02:00
Phil Gore
8aadc310c9 fix: apply scopes from argocd-rbac-cm to project jwt group searches (#3508)
* merging changes

* apply scopes from argocd-rbac-cm to projects

* fixing server merge conflict

* passing tests
2020-05-25 09:05:56 +02:00
May Zhang
ac097f143c Fixing how to compare two objects. (#3636) 2020-05-22 15:10:25 -07:00
Alexander Matyushentsev
4a12cbb231 fix: fix nil pointer dereference error after cluster deletion (#3634) 2020-05-22 09:35:04 -07:00
Chance Zibolski
40eb8c79ab feat: Upgrade kustomize to 3.5.5 (#3619)
* feat: Upgrade kustomize to 3.5.5
2020-05-20 19:24:25 -07:00
Florian
710d06c800 docs: minor typo fix (#3625)
Just a quick typo fix
2020-05-20 20:46:07 +02:00
Alexander Matyushentsev
2f2f39c8a6 feat: upgrade gitops engine version (#3624) 2020-05-20 11:15:23 -07:00
Andy Feller
7c831ad781 docs: Fix minor typos in code examples (#3622) 2020-05-20 18:37:27 +02:00
Alexander Matyushentsev
51998e0846 feat: argocd-util settings resource-overrides list-actions (#3616) 2020-05-19 14:46:46 -07:00
jannfis
313de86941 fix: Prevent possible nil pointer dereference when getting Helm client (#3613) 2020-05-19 18:40:50 +02:00
dthomson25
0a4ce77bce Add Rollout Restart to Discovery.lua script (#3607)
* Add Rollout Restart to Discovery.lua script

* Fix tests
2020-05-19 08:16:08 -07:00
Shuhei Kitagawa
f95004c428 fix: Allow CLI version command to succeed without server connection (#3049) (#3550) 2020-05-19 14:21:34 +02:00
Drew Boswell
f59391161e docs: Update USERS.md with Swissquote (#3605)
* Update USERS.md

* Update USERS.md
2020-05-19 14:19:38 +02:00
Alexander Matyushentsev
27e95df536 feat: update gitops engine version to get access to sync error (#3609) 2020-05-18 12:26:34 -07:00
May Zhang
ec23d917eb feat: adding failure retry (#3548)
* Initial Try of failure retry

* Get failureRetryCount and failureRetryPeriodSecond from command line args.

* Get failureRetryCount and failureRetryPeriodSecond from command line args.

* Get failureRetryCount and failureRetryPeriodSecond from command line args.

* Get failureRetryCount and failureRetryPeriodSecond from command line args.

* Update logic to find out if we should retry.

* 1. add retry wrapper to only argo client.
2. change to env variables instead of command line arguments.

* keep imports grouped

* resolve merge conflict
2020-05-18 09:51:03 -07:00
Micke Lisinge
991ee9b771 feat: Implement GKE ManagedCertificate CRD health checks (#3600)
* Implement GKE ManagedCertificate CRD health checks

* Retrigger CI pipeline

* Match against FailedNotVisible instead of Failed
2020-05-17 17:23:54 +02:00
Alexander Matyushentsev
c21a6eae7d docs: update 1.5.4 changelog (#3585)
* docs: update 1.5.4 changelog

* Update CHANGELOG.md
2020-05-16 17:26:02 +02:00
jannfis
173a0f011d chore: Fix Docker mounts for good (#3597) 2020-05-15 14:43:12 -07:00
Alexander Matyushentsev
fe8d47e0ea feat: move engine code to argoproj/gitops-engine repo (#3599) 2020-05-15 14:39:29 -07:00
Alexander Matyushentsev
192ee93fc4 feat: Gitops engine (#3066)
* Move utils packages that are required for gitops engine under engine/pkg/utils package.
The following changes were implemented:
* util/health package is split into two parts: resource health assessment & resource health assessment and moved into engine/pkg/utils
* utils packages moved: Closer and Close method of util package moved into engine/pkg/utils/io package
* packages diff, errors, exec, json, kube and tracing moved into engine/pkg/utils

* Move single cluster caching into engine/kube/cache package

* move sync functionality to engine/kube/sync package

* remove dependency on metrics package from engine/pkg/utils/kube/cache

* move annotation label definitions into engine/pkg/utils/kube/sync

* make sure engine/pkg has no dependencies on other argo-cd packages

* allow importing engine as a go module

* implement a high-level interface that might be consumed by flux

* fix deadlock caused by cluster cache event handler

* ClusterCache should return error if requested group kind not found

* remove obsolete tests

* apply reviewer notes
2020-05-15 10:01:18 -07:00
Jai Govindani
490d4004b1 docs: typo/punctuation in tracking_strategies.md (#3449)
* Fix typo/punctuation

* Add Honestbank to USERS.md

* fix: typo in image caption

Signed-off-by: Jai Govindani <jai@honestbank.com>
2020-05-15 18:25:23 +02:00
Shuhei Kitagawa
c2ff86e8b2 chore: Change cli log to logrus (#3566) 2020-05-15 17:49:35 +02:00
chroju
a32f70f207 Fix small typo in user management docs (#3438) 2020-05-14 07:42:08 -07:00
jannfis
02b3c61fd9 feat: Introduce diff normalizer knobs and allow for ignoring aggregated cluster roles (#2382) (#3076)
* Add the ability to ignore rules added by aggregated cluster roles
2020-05-13 13:34:43 -07:00
Riccardo M. Cefala
f822d098c3 Add External Secrets Operator to secret management list (#3579) 2020-05-13 11:36:12 -07:00
Alexander Matyushentsev
dc5ac89f36 docs: add more details about redis issue during 1.4 -> 1.5 upgrade (#3584) 2020-05-13 11:08:13 -07:00
Christine Banek
046406e7d3 fix: Fix login with port forwarding (#3574)
The argocd command line supports the --port-forward options
to connect to an argocd instance without an ingress rule. This
is especially useful for command-line automation of new environments.

But login doesn't respect port-forward, making it impossible to log in
to argocd and then create apps. App creation works fine
with port-forwarding once you are able to log in.
2020-05-13 10:28:42 -07:00
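A hedged example of the workflow this fix enables, assuming the global port-forward flags and an `argocd` namespace:

```
# Log in via a port-forward to the argocd-server service, no ingress needed
$ argocd login --port-forward --port-forward-namespace argocd --plaintext

# Later commands can reuse the same connection style
$ argocd app create guestbook --port-forward --port-forward-namespace argocd \
    --repo https://github.com/argoproj/argocd-example-apps.git \
    --path guestbook --dest-namespace default --dest-server https://kubernetes.default.svc
```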
Alexander Matyushentsev
4f49168c1a fix: use 'git show-ref' to both retrieve and store generated manifests (#3578)
* fix: use 'git show-ref' to both retrieve and store generated manifests

* print warning message when an incorrect repo tag is detected
2020-05-13 10:19:59 -07:00
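For reference, `git show-ref` resolves refs to the commit SHAs that the generated-manifest cache is keyed on; a minimal illustration (the tag and SHA shown are made up):

```
# List the SHA a ref points at; tags and branches with the same name resolve differently
$ git show-ref v1.6.0
0123456789abcdef0123456789abcdef01234567 refs/tags/v1.6.0
```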
Simon Rüegg
22bb1dd40f feat: Implement Crossplane CRD health checks (#3581)
* Implement Crossplane CRD health checks

A health check for the ClusterStackInstall CRD to help Argo CD to wait
for a successful install.

Signed-off-by: Simon Rüegg <simon@rueggs.ch>

* Add VSHN to USERS.md

Signed-off-by: Simon Rüegg <simon@rueggs.ch>
2020-05-13 09:45:27 -07:00
Alexander Matyushentsev
cd1de6e680 fix: redis request failed with nil error should not be counted as failed (#3576) 2020-05-12 18:02:22 -07:00
Alexander Matyushentsev
24fa758444 fix: enable redis retries; add redis request duration metric (#3575) 2020-05-12 14:39:18 -07:00
jannfis
9208176e86 feat: Allow selecting TLS ciphers on server (#3524) 2020-05-12 12:32:06 -07:00
rachelwang20
66e1fb78f7 feat: Adding deploy time and duration label (#3563)
* Adding deploy time and duration label

* Update application-deployment-history.tsx

* Adding deploy time and duration label

* Formatting
2020-05-11 13:24:29 -07:00
May Zhang
e42102a67e fix: when --rootpath is on, 404 is returned when URL contains encoded URI (#3564)
* Fix when --rootpath is on, 404 is returned when URL contains encoded URI

* Update doc

* Update doc
2020-05-11 08:39:27 -07:00
dthomson25
887adffcc8 Add Rollout restart action (#3557) 2020-05-08 14:08:34 -07:00
Shuwei Hao
c66919f9ff support delete cluster from UI (#3555)
Signed-off-by: haoshuwei <haoshuwei24@gmail.com>
2020-05-08 09:22:11 -07:00
jqlu
301d18820a feat: add button loading status for time-consuming operations (#3559) 2020-05-08 00:21:20 -07:00
jannfis
d2d37583af docs: Add note about required ConfigMap annotation to docs (#3540)
* Add note about required ConfigMap annotation to docs

* More clarifications
2020-05-07 22:41:21 -07:00
iamsudip
e0e995a944 Add moengage to users list (#3542) 2020-05-07 22:40:36 -07:00
jannfis
1f87950d48 Update security docs to reflect recent changes (#3558) 2020-05-07 22:31:15 -07:00
Shuhei Kitagawa
9a17103830 feat: Add --logformat switch to API server, repository server and controller (#3408) 2020-05-05 14:03:48 +02:00
Shuhei Kitagawa
fd49a4e74f doc: Fix Contribution Guide link in running-locally.md (#3545) 2020-05-05 12:32:47 +02:00
May Zhang
e78f61ea37 Fix version (#3544) 2020-05-04 14:48:16 -07:00
SB
e5d4673eac feat: Add a Get Repo command to see if Argo CD has a repo (#3523)
* fix: Updating to jsonnet v1.15.0 fix issue #3277

* feat: Changes from codegen, adding a repository get service

* feat: Adding a get repository command

* Retrigger CI pipeline

* refactor: delete deprecated option on Get
refactor printing of the get command result
GetRepository() dependent on RBAC enforcement

* fix: setting Get repo command to get
2020-05-04 09:20:48 +02:00
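A sketch of the resulting command (the repository URL is an example):

```
# Show a single configured repository instead of listing all of them
$ argocd repo get https://github.com/argoproj/argocd-example-apps.git
```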
jannfis
3df4850418 fix: Disable keep-alive for HTTPS connection to Git (#3531) 2020-05-01 10:56:25 -07:00
Chris Pappas
1b0421c3aa docs: Adding OneLogin integration docs (#3529) 2020-04-30 21:05:03 -07:00
Alexander Matyushentsev
20d56730ff Revert "fix: Text overflow when the application status panel item was too big (#3460)" (#3530)
This reverts commit ffc99354d2.
2020-04-30 17:47:03 -07:00
Martin Jaime
ffc99354d2 fix: Text overflow when the application status panel item was too big (#3460) 2020-04-30 14:23:12 -07:00
Alexander Matyushentsev
e845478a96 docs: v1.5.3 changelog (#3526)
* docs: v1.5.3 changelog

* address reviewer notes
2020-04-30 13:17:44 -07:00
Alexander Matyushentsev
28dd7167d0 docs: describe upgrading process and version breaking changes (#3512)
* docs: describe upgrading process and version breaking changes

* add upgrading instructions link to overview doc
2020-04-30 12:28:39 -07:00
May Zhang
faee66888e docs: update document for --rootpath support for argoCD server and UI (#3525)
* Update document for rootpath support
2020-04-30 12:01:54 -07:00
Moses Bett
fc753ac489 added MTN Group as a user of ArgoCD (#3497) 2020-04-30 11:19:46 -07:00
jqlu
88a76022c1 fix: remove extra backtick(`) in JSX (#3518) 2020-04-30 09:42:04 -07:00
Alexander Matyushentsev
31df9d11a9 feat: upgrade helm 3 to v3.2.0; use --insecure-verify-flag (#3514) 2020-04-29 21:04:25 -07:00
May Zhang
7ed6c18762 fix: to support --rootpath (#3503)
* added path to cookie

* additional changes to support rootpath:
1. when using https, redirect to the right URL.
2. when rootpath is set, handle healthz, swagger, etc.

* additional changes to support rootpath:
1. when using https, redirect to the right URL.
2. when rootpath is set, handle healthz only.

* additional changes to support rootpath:
1. when using https, redirect to the right URL.
2. when rootpath is set, handle healthz only.

* Fixed for swagger-ui with rootpath

* Fixed for swagger test

* Fixed for redirect path

* Fixed for redirect path
2020-04-29 14:40:03 -07:00
Alexander Matyushentsev
81d5b13083 fix: api incorrectly verifies if auto-sync is enabled and rejects local sync request (#3506) 2020-04-29 00:09:27 -07:00
Alexander Matyushentsev
02624bf3a6 fix: redis timeout should be more than client timeout (#3505) 2020-04-29 08:49:10 +02:00
Alexander Matyushentsev
842a3d12f6 feat: add redis metrics to application controller and api server (#3500)
* add redis metrics to application controller and api server

* fix failed test
2020-04-28 12:52:03 -07:00
May Zhang
14f1725f53 added path to cookie (#3501) 2020-04-28 10:26:47 -07:00
Alexander Matyushentsev
9b142c799a fix: 'argocd sync' does not take into account IgnoreExtraneous annotation (#3486) 2020-04-28 08:49:07 -07:00
May Zhang
d77072b534 fix: Set root path (#3475)
* Set root path

* updated http mux if --rootpath is set during server startup.
updated baseHRef if --rootpath is set.
added --grpc-web-root-path for CLI.

* added rootpath as part of config context name

* clean up unused variables.
2020-04-27 08:35:58 -07:00
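A hedged illustration of the flags this change wires together (host and sub-path are hypothetical):

```
# Serve the API server and UI under a sub-path behind a reverse proxy
$ argocd-server --rootpath /argo-cd

# Point the CLI at the server under that sub-path
$ argocd login argocd.example.com --grpc-web-root-path /argo-cd
```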
Alexander Matyushentsev
8a77075cff fix: CLI renders flipped diff results (#3480) 2020-04-24 10:18:15 -07:00
Alexander Matyushentsev
f5b600d4af feat: limit the maximum number of concurrent login attempts (#3467)
* feat: limit the maximum number of concurrent login attempts

* unit test rate limiter

* address reviewer questions
2020-04-23 12:33:17 -07:00
Alexander Matyushentsev
4ae70139d9 feat: upgrade kustomize version to 3.5.4 (#3472) 2020-04-23 10:29:15 -07:00
rumstead
7660e40fdc Update README.md (#3470)
Fix spelling/spacing
2020-04-23 07:49:01 -07:00
Alexander Matyushentsev
310b40aa20 feat: downgrade dex to 2.22.0 and revert bug workaround (#3468) 2020-04-22 17:34:58 -07:00
Alexander Matyushentsev
0b0c72a80b docs: update incorrect sync options example (#3457) 2020-04-22 09:41:57 +02:00
Alexander Matyushentsev
14188656c8 fix: GetApplicationSyncWindows API should not validate project permissions (#3456) 2020-04-21 11:26:17 -07:00
jannfis
76bacfdea4 fix: Add initial implementation for rate limiting failed logins (#3404)
* fix: Add initial implementation  for rate limiting failed logins

* Trigger test build

* Remove deprecated code and fix new project tests

* move cache related code from sessionmanager to cache access wrapper

* avoid using sleep in sessionmanager tests

* mention SECONDS in session manager environment variables to make it easier to understand meaning of each variable

* Login button should be disabled while user is waiting for login result

* prevent timing-based user enumeration attack

* reject too many failed attempts; always compute hash and introduce random delay

* remove unused constants

* fix linter errors

Co-authored-by: Alexander Matyushentsev <amatyushentsev@gmail.com>
2020-04-21 11:10:25 -07:00
Alexander Matyushentsev
ee44e489b5 fix: javascript error on accounts list page (#3453) 2020-04-21 10:08:36 -07:00
Alexander Matyushentsev
89774fef17 fix: argocd-util kubeconfig should use RawRestConfig to export config (#3447)
* fix: argocd-util kubeconfig should use RawRestConfig to export config

* document 'argocd-util kubeconfig' command
2020-04-21 13:45:20 +02:00
Alexander Matyushentsev
acc2369dc7 feat: upgrade dex to v2.23.0 (#3448)
* feat: upgrade dex to v2.23.0

* workaround for https://github.com/dexidp/dex/issues/1695
2020-04-20 22:22:19 -07:00
Alexander Matyushentsev
9de06e35eb Clean docker images before building Argo CD image to avoid no disk space left failure (#3446) 2020-04-20 15:57:36 -07:00
jannfis
4575adca86 chore: Disable lint in CI due to OOM fails (#3444) 2020-04-20 23:01:05 +02:00
jannfis
ca42a375c2 Revert "feat: metrics, argocd_app_info adding syncpolicy info, argocd_cluster_info adding clustername (#3411)" (#3443)
This reverts commit 0214eb8d92.
2020-04-20 08:55:38 -07:00
wecger
0214eb8d92 feat: metrics, argocd_app_info adding syncpolicy info, argocd_cluster_info adding clustername (#3411)
* extending metrics with syncpolicies and clustername

* extending metrics with syncpolicies and clustername: fixing tests

* extending metrics with syncpolicies and clustername: fixing order in labels

* extending metrics with syncpolicies and clustername: fixing lint issues
2020-04-20 11:32:20 +02:00
Alexander Matyushentsev
949518e680 fix: sort imports in knowntypes_normalizer.go (#3440)
* fix: sort imports in knowntypes_normalizer.go

* drafT
2020-04-19 09:05:26 +02:00
Ed Lee
81e4bb1fef Update OWNERS (#3439) 2020-04-18 19:54:44 -07:00
Alexander Matyushentsev
35b40cdb22 fix: support both <group>/<kind> as well as <kind> as a resource override key (#3433) 2020-04-17 09:23:19 -07:00
Alexander Matyushentsev
fa47fe00a2 fix: Grafana dashboard should dynamically load list of clusters (#3435) 2020-04-17 00:13:34 -07:00
Alexander Matyushentsev
743371ed4f fix: use $datasource variable as a source for all dashboard panels (#3434) 2020-04-16 18:13:52 -07:00
jannfis
75d9f23adb chore: Migrate golangci-lint into CircleCI tests (#3427) 2020-04-16 08:55:13 -07:00
SB
e05ebc4990 fix: Updating to jsonnet v1.15.0 fix issue #3277 (#3431) 2020-04-16 08:44:51 -07:00
May Zhang
6ffd34dcf9 fix for helm repo add with flag --insecure-skip-server-verification (#3420) 2020-04-15 16:56:27 -07:00
Alexander Matyushentsev
16c6eaf9ae feat: support user specified account token ids (#3425) 2020-04-15 15:19:25 -07:00
Alexander Matyushentsev
e67a55463f fix: update min CLI version to 1.4.0 (#3413) 2020-04-15 14:17:24 -07:00
jannfis
476cb655b7 fix: Make CLI downwards compatible using old repository API (#3418) 2020-04-15 23:05:17 +02:00
jannfis
6c1ccf4d60 docs: Update documentation for CVE-2020-5260 (#3421) 2020-04-15 12:16:29 -07:00
Alexander Matyushentsev
05f5a79923 feat: support separate Kustomize version per application (#3414) 2020-04-15 12:04:31 -07:00
May Zhang
3dbc330cf0 fix: app diff --local support for helm repo. #3151 (#3407)
* Fixing argocd app diff when using helm repo

* adding test code

* get rid of optional parameter

* get rid of optional parameter

* Added test case

* Fix failed tests
2020-04-14 08:13:38 -07:00
jannfis
8ad928330f chore: Fix a bunch of lint issues (#3412)
* chore: Fix linter complaints
2020-04-14 08:01:43 -07:00
Bruno Clermont
355e77e56f feat: add support for dex prometheus metrics (#3249) 2020-04-14 07:53:59 -07:00
Alexander Matyushentsev
376d79a454 feat: add settings troubleshooting commands to the 'argocd-util' binary (#3398)
* feat: add settings troubleshooting commands to the 'argocd-util' binary
2020-04-14 07:51:16 -07:00
peteski
b74a9461ed docs: Updated best practices to remove typos. (#3410)
Fixed typos in best practices section for immutability.
2020-04-14 12:26:09 +02:00
rachelwang20
b4236e1dc7 feat: Let user define meaningful unique JWT token name (#3388)
* feat: Let user define meaningful unique JWT token name

* Update sessionmanager.go

* Update server_test.go

* Update sessionmanager_test.go

* Adding get JWTToken by id if not then by issued time

* Adding relate tests

* Adding relate tests

* Retrigger the build

* feat: Let user define meaningful unique JWT token name

* Update sessionmanager.go

* Update server_test.go

* Update sessionmanager_test.go

* Adding get JWTToken by id if not then by issued time

* Adding relate tests

* Retrigger the build

* feat: Let user define meaningful unique JWT token name

* Adding get JWTToken by id if not then by issued time

* Adding relate tests

* Adding UI change

* add yarn lint
2020-04-13 14:13:05 -07:00
jannfis
6ecc25edbd docs: Beautify & update security considerations doc (#3400) 2020-04-13 21:51:40 +02:00
jannfis
56ca1fb4ea chore: Fix some Sonarcloud related quirks (#3399)
* chore: Fix some Sonarcloud related quirks
2020-04-13 11:45:54 -07:00
jannfis
3629346085 chore: Workaround for CircleCI bug (#3397)
* chore: workaround circleci bug

* Oops
2020-04-10 15:47:49 -07:00
jannfis
092072a281 chore: Add sonarqube configuration for CI (#3392) 2020-04-10 22:18:31 +02:00
Alexey Osheychik
e1142f9759 docs: Fix azure AD integration doc (#3396)
* docs: Add permissions configuration process for Azure to docs

* docs: Update configuration for Azure SSO integration
2020-04-10 13:03:11 +02:00
jannfis
fbd3fe69ff docs: Add bug triage process proposal to docs (#3394) 2020-04-09 11:35:37 -07:00
ramz
f05f84979c Fix for #3286 updated with the expected display message. (#3385) 2020-04-09 11:21:38 -07:00
Devan Goodwin
3d6ff9e903 Add a fake owner reference for ClusterServiceVersion. (#3390)
For anyone installing an Operator Lifecycle Manager operator, the ArgoCD
UI would show your OperatorGroup and Subscription, but would not detect
the resulting ClusterServiceVersion, and subsequent pods etc, limiting
the value of the UI in viewing overall status of your operator.

The CSV should not technically have an owner reference, so we add a fake
one in similar fashion to the pre-existing code above for endpoints. The
CSV is then linked to its OperatorGroup via the olm.operatorGroup
annotation. The CSV has no link to its Subscription or InstallPlan that
I can see. Adding an annotation to this might be something we could
pursue with OLM folks.
2020-04-09 11:20:37 -07:00
Alexander Matyushentsev
4c812576c1 chore: minor dev tools fixing for mac (#3330) 2020-04-09 15:01:45 +02:00
jannfis
6753fc9743 Add Dex and missing url field to FAQ (#3380) 2020-04-08 13:23:23 -07:00
Andreas Kappler
8d082cc46e feat: Introduce sync-option SkipDryRunOnMissingResource=true (#2873) (#3247)
* feat: Introduce sync-option SkipDryRunOnMissingResource=true
2020-04-08 10:53:18 -07:00
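The sync option is declared as an annotation on the resource manifest in Git; a minimal sketch (the manifest file name is hypothetical, and `kubectl annotate --local` is shown only for illustration):

```
# Skip the dry run for a resource whose CRD is created within the same sync,
# so validation does not fail while the CRD is still missing
$ kubectl annotate --local -f my-custom-resource.yaml \
    argocd.argoproj.io/sync-options=SkipDryRunOnMissingResource=true -o yaml
```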
Denis Jajčević
f586385c8b docs: Fix diffing_known_types.txt link (#3381)
Update documentation to have valid link to `diffing_known_types.txt`.
2020-04-08 19:03:07 +02:00
jannfis
466c73fa3b chore: Code coverage offensive 05: util/clusterauth (#3371)
* Add first batch of clusterauth tests

* More tests
2020-04-07 12:39:24 +02:00
May Zhang
9e6c78d55c Fix for jsonnet when it is located in a nested subdirectory and uses import (#3372) 2020-04-06 15:20:07 -07:00
jannfis
c6af4cca10 docs: Clarify RBAC requirement for local users (#3361)
* Clarify RBAC requirement for local users

* Update docs/operator-manual/user-management/index.md

Co-Authored-By: Alexander Matyushentsev <AMatyushentsev@gmail.com>

Co-authored-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2020-04-06 21:41:10 +02:00
Alexander Matyushentsev
5448466ddc feat: support normalizing CRD fields that use known built-in K8S types (#3357)
* feat: support normalizing CRD fields that use known built-in K8S types

* apply reviewers notes

* fix codegen
2020-04-06 21:13:50 +02:00
yutachaos
0eec2fee71 fix: Update 4.5.3 redis-ha helm manifest (#3370)
Signed-off-by: yutachaos <18604471+yutachaos@users.noreply.github.com>
2020-04-06 12:02:40 -07:00
Alexander Matyushentsev
e5452ff70e fix: return 401 error code if username does not exist (#3369) 2020-04-06 11:15:15 +02:00
jannfis
9fdd782854 fix: Do not panic while running hooks with short revision (#3368) 2020-04-05 23:54:09 -07:00
jannfis
053ae28ed5 Add note about .git suffixes for GitLab repository URLs (#3363) 2020-04-05 21:09:54 -07:00
jannfis
6eb4f41343 chore: Keep autogenerated message in upstream.yaml for HA manifests (#3367)
* fix: Keep autogenerated message in upstream.yaml
2020-04-05 21:08:59 -07:00
Alexander Matyushentsev
d9072d8200 docs: mention issue #3358 in 1.5 changelog (#3359) 2020-04-04 08:43:38 +02:00
Alexander Matyushentsev
238abbf771 docs: document built-in user limitations and workaround (#3341)
* document security limitations

* minor issue description revisions + formatting

* Update security.md

* move CVEs description into separate document

Co-authored-by: Matt Hamilton <matt@soluble.ai>
Co-authored-by: Ed Lee <edlee2121@users.noreply.github.com>
2020-04-04 08:43:21 +02:00
jannfis
aa4fb9ab4a fix: Increase HAProxy check interval to prevent intermittent failures (#3356)
* Increase HAProxy check interval to prevent intermittent failures/state flapping

* Restore original namespace
2020-04-03 17:10:37 -07:00
May Zhang
55bc144410 fix: Helm v3 CRD are not deployed (#3345)
* Fixing could not find plugin issue when app sync and app diff

* Fixing codegen error

* Revert "Fixing codegen error"

This reverts commit b2dcfb81

* Fixing codegen error

* If user is logged in, settings API would return ConfigManagementPlugins

* For helm3, add flag --include-crds when calling helm template to support helm3 crd

* Fixing typo.

* Added further assertion of ResourceSyncStatusIs for CRD resources.
2020-04-03 13:41:17 -07:00
jannfis
c428e091ab chore: Fix flaky test TestWatchCacheUpdated (#3350) 2020-04-03 18:26:37 +02:00
Sara Jarjoura
c13bf422f8 docs: Added detail on how to use the caData field with Okta SAML (#3351) 2020-04-03 16:49:11 +02:00
jannfis
b4bbe60b8c chore: Code coverage offensive 02: util/argo (#3329) 2020-04-03 09:53:47 +02:00
jannfis
4fdf573fd1 chore: More tests (#3340) 2020-04-02 23:38:23 +02:00
rachelwang20
d326daef62 Rebase (#3333) 2020-04-02 11:08:24 -07:00
May Zhang
98337065ae fix: Fixing could not find plugin issue when app sync and app diff (#3326)
* Fixing could not find plugin issue when app sync and app diff

* Fixing codegen error

* Revert "Fixing codegen error"

This reverts commit b2dcfb81

* Fixing codegen error

* If user is logged in, settings API would return ConfigManagementPlugins
2020-04-02 09:50:42 -07:00
jannfis
1b1df76ef2 chore: Code coverage offensive 03: util/cache (#3335)
* Add dependency for miniredis, used for unit testing Redis cache

* Add more tests
2020-04-02 09:16:42 -07:00
Yurii Komar
7f40739b97 Changed GRPC Message Size for apiclient (#3337) 2020-04-02 09:14:02 -07:00
dthomson25
9db879f68f Add V0.8 changes to Rollouts healthcheck (#3331) 2020-04-02 09:00:30 -07:00
Alexander Matyushentsev
e4235dabb8 docs: improved Grafana dashboard (#3327) 2020-04-01 11:58:43 -07:00
May Zhang
ff07b112b1 fix: update documentation for adding environment variable KUBE_VERSION and KUBE_API_VERSION (#3323)
* ArgoCD plugin: add environment variable KUBEVERSION and KUBE_API_VERSIONS.

* Added test verification of KUBE_API_VERSION

* Using assert.EqualValues for assertion.

* update build-environment.md to add KUBE_VERSION and KUBE_API_VERSIONS
2020-03-31 20:21:39 -07:00
Alexander Matyushentsev
53e618a4c0 docs: mention metrics changes and add legacy grafana dashboard (#3324) 2020-03-31 17:54:57 -07:00
Alexander Matyushentsev
eae0527839 fix: argocd fails to connect clusters with IAM authentication configuration (#3325) 2020-03-31 17:44:00 -07:00
jannfis
2d79dbb0bb chore: Update argocd-test-tools to Go v1.14.1 (#3306)
* Update test-tools-image to v0.2.0 and optimize layers in Dockerfile

* Also adapt CircleCI config for new image

* Retrigger CI on possible flaky test
2020-03-31 19:33:59 +02:00
Dai Kurosawa
a501cdbb56 chore: Upgrade golang version from v1.14.0 to v1.14.1 (#3304)
* Upgrade golang version from v1.14.0 to v1.14.1

* use argocd-test-tools version v0.2.0
2020-03-31 19:13:51 +02:00
May Zhang
d2c1821148 feat: ArgoCD plugin: add environment variable KUBEVERSION and KUBE_API_VERS… (#3318)
* ArgoCD plugin: add environment variable KUBEVERSION and KUBE_API_VERSIONS.

* Added test verification of KUBE_API_VERSION

* Using assert.EqualValues for assertion.
2020-03-31 09:13:15 -07:00
rachelwang20
00d44910b8 feat: Whitelisted namespace in UI (#3314)
* Including namespace whitelisted resources support

* regenerate CRD definition and related go code

* Redo make codegen

* revert pkg/apiclient/repository/repository.pb.go

* Whitelisted namespace in UI

* Reflect the whitelist description

* Break one long line to two lines

* Break lines

* Adding line break

* Formatting

Co-authored-by: Alexander Matyushentsev <amatyushentsev@gmail.com>
2020-03-30 16:13:40 -07:00
Alexander Matyushentsev
7ae204d426 fix: avoid nil pointer dereference in badge handler (#3316) 2020-03-30 14:27:43 -07:00
Alexander Matyushentsev
6411958be5 fix: pass APIVersions value to manifest generation request during app validation and during app manifests loading (#3312)
* fix: pass APIVersions value to manifest generation request during app validation and during app manifests loading
2020-03-30 13:36:46 -07:00
jannfis
a7f6866344 Add UI lint instructions and accompanying targets to Makefile (#3315) 2020-03-30 22:21:27 +02:00
Shuwei Hao
c71bfc62ba fix: update help info about argocd account can-i (#3310)
Signed-off-by: Shuwei Hao <haoshuwei24@gmail.com>
2020-03-30 11:49:03 +02:00
jannfis
c4d6fde1c4 chore: Code coverage offensive #1: util/dex (#3305)
* Fix possible panic when generating Dex config from malformed YAML

* Add first batch of tests for util/dex

* More tests

* More tests

* More tests

* Use constants
2020-03-29 14:02:04 -07:00
jannfis
306a84193a Fix possible panic when generating Dex config from malformed YAML (#3303) 2020-03-29 11:42:17 +02:00
jannfis
b02f7f14a7 chore: Run "dep check" in CircleCI pipeline to detect for changes in Gopkg.lock (#3301)
* Run "dep check" in CircleCI pipeline to detect for changes in Gopkg.lock

* Run dep check after restoring vendor cache

* Use -skip-vendor on dep check
2020-03-29 11:10:02 +02:00
Alexander Matyushentsev
ac8ac14545 fix: SSO user unable to change local account password (#3297) (#3298)
* fix: SSO user unable to change local account password (#3297)

* apply code review notes
2020-03-29 10:35:25 +02:00
Alexander Matyushentsev
cdb8758b34 fix: use pagination while loading initial cluster state to avoid memory spikes (#3299) 2020-03-27 22:31:36 -07:00
Alexander Matyushentsev
521f87fe5f fix: convert 'namesuffix', 'nameprefix' string flags to boolean flags in 'argocd app unset' (#3300) 2020-03-27 20:26:22 -07:00
jannfis
27141ff083 chore: Containerize complete build & test toolchain (#3245)
chore: Containerize complete build & test toolchain
2020-03-27 11:36:20 -07:00
Alexander Matyushentsev
7599516f68 fix: fix Cannot read property 'length' of undefined error (#3296) 2020-03-27 10:48:49 +01:00
rachelwang20
e3a18b9cd7 feat: Including namespace whitelisted resources support (#3292)
feat: Including namespace whitelisted resources support (#3292)
2020-03-26 16:13:31 -07:00
May Zhang
eef35a32ab feat: Argocd App Unset Kustomize Override (#3289)
feat: Argocd App Unset Kustomize Override (#3289)
2020-03-26 15:35:55 -07:00
jannfis
e26dace64d Fix unparam errors from linter (#3283) 2020-03-26 09:31:22 -07:00
Adriaan Knapen
1520346369 docs: document Traefik 2 ingress requires disabling TLS (#3284) 2020-03-26 09:30:35 -07:00
Therianthropie
702d4358d1 Add Healy to USERS.md (#3250)
Moved Healy entry to the right position
2020-03-26 09:03:30 -07:00
Alexander Matyushentsev
0162971ea0 fix: implement workaround for helm/helm#6870 bug (#3290)
* fix: implement workaround for  helm/helm#6870 bug

* Update app_management_test.go
2020-03-25 21:07:02 -07:00
Alexander Matyushentsev
7fd7999e49 fix: increase max connections count to support clusters with very large number of CRDs (#3278) 2020-03-25 01:02:33 -07:00
Alexander Matyushentsev
03f773d0ff docs: add slack link to issue template (#3273) 2020-03-24 09:08:35 +01:00
Alexander Matyushentsev
c4bc740fb7 docs: fix resource hooks docs layout (#3266) (#3272) 2020-03-24 08:55:14 +01:00
Jesse Suen
5934bc4699 improvement: remove app name and project labels from reconciliation histogram to reduce cardinality (#3271) 2020-03-23 16:07:37 -07:00
Josef Meier
3f0d26ec17 Refine docs: How to explicitly select a tool (#3261)
It's easy to misconfigure the Application if you use the Application creation wizard, because the default is 'Directory'. If you choose that, your Kustomize repo won't work.
2020-03-23 09:54:09 -07:00
Alexander Matyushentsev
7665e58613 chore: update example dashboard to use updated metrics (#3264) 2020-03-23 09:53:04 -07:00
484 changed files with 21784 additions and 17531 deletions

.circleci/config.yml

@@ -1,217 +1,16 @@
 version: 2.1
-commands:
-  configure_git:
-    steps:
-      - run:
-          name: Configure Git
-          command: |
-            set -x
-            # must be configured for tests to run
-            git config --global user.email you@example.com
-            git config --global user.name "Your Name"
-            echo "export PATH=/home/circleci/.go_workspace/src/github.com/argoproj/argo-cd/hack:\$PATH" | tee -a $BASH_ENV
-            echo "export GIT_ASKPASS=git-ask-pass.sh" | tee -a $BASH_ENV
-  restore_vendor:
-    steps:
-      - restore_cache:
-          keys:
-            - vendor-v1-{{ checksum "Gopkg.lock" }}-{{ .Environment.CIRCLE_JOB }}
-  save_vendor:
-    steps:
-      - save_cache:
-          key: vendor-v1-{{ checksum "Gopkg.lock" }}-{{ .Environment.CIRCLE_JOB }}
-          paths:
-            - vendor
-  install_golang:
-    steps:
-      - run:
-          name: Install Golang v1.14
-          command: |
-            go get golang.org/dl/go1.14
-            [ -e /home/circleci/sdk/go1.14 ] || go1.14 download
-            go env
-            echo "export GOPATH=/home/circleci/.go_workspace" | tee -a $BASH_ENV
-            echo "export PATH=/home/circleci/sdk/go1.14/bin:\$PATH" | tee -a $BASH_ENV
-  save_go_cache:
-    steps:
-      - save_cache:
-          key: go-v1-{{ .Branch }}-{{ .Environment.CIRCLE_JOB }}
-          # https://circleci.com/docs/2.0/language-go/
-          paths:
-            - /home/circleci/.cache/go-build
-            - /home/circleci/sdk/go1.14
-  restore_go_cache:
-    steps:
-      - restore_cache:
-          keys:
-            - go-v1-{{ .Branch }}-{{ .Environment.CIRCLE_JOB }}
-            - go-v1-master-{{ .Environment.CIRCLE_JOB }}
-jobs:
-  codegen:
-    docker:
-      - image: circleci/golang:1.12
-    working_directory: /go/src/github.com/argoproj/argo-cd
-    steps:
-      - checkout
-      - restore_cache:
-          keys:
-            - codegen-v1-{{ checksum "Gopkg.lock" }}-{{ checksum "hack/installers/install-codegen-go-tools.sh" }}
-      - run: ./hack/install.sh codegen-go-tools
-      - run: sudo ./hack/install.sh codegen-tools
-      - run: dep ensure -v
-      - save_cache:
-          key: codegen-v1-{{ checksum "Gopkg.lock" }}-{{ checksum "hack/installers/install-codegen-go-tools.sh" }}
-          paths: [vendor, /tmp/dl, /go/pkg]
-      - run: helm2 init --client-only
-      - run: make codegen-local
-      - run:
-          name: Check nothing has changed
-          command: |
-            set -xo pipefail
-            # This makes sure you ran `make pre-commit` before you pushed.
-            # We exclude the Swagger resources; CircleCI doesn't generate them correctly.
-            # When this fails, it will, create a patch file you can apply locally to fix it.
-            # To troubleshoot builds: https://argoproj.github.io/argo-cd/developer-guide/ci/
-            git diff --exit-code -- . ':!Gopkg.lock' ':!assets/swagger.json' | tee codegen.patch
-      - store_artifacts:
-          path: codegen.patch
-          destination: .
-  test:
-    working_directory: /home/circleci/.go_workspace/src/github.com/argoproj/argo-cd
-    machine:
-      image: circleci/classic:201808-01
-    steps:
-      - restore_go_cache
-      - install_golang
-      - checkout
-      - restore_cache:
-          key: test-dl-v2
-      - run: sudo ./hack/install.sh kubectl-linux kubectx-linux dep-linux ksonnet-linux helm-linux helm2-linux kustomize-linux
-      - save_cache:
-          key: test-dl-v2
-          paths: [/tmp/dl]
-      - configure_git
-      - run: go get github.com/jstemmer/go-junit-report
-      - restore_vendor
-      - run: dep ensure -v
-      - run: make test
-      - save_vendor
-      - save_go_cache
-      - run:
-          name: Uploading code coverage
-          command: bash <(curl -s https://codecov.io/bash) -f coverage.out
-      - store_test_results:
-          path: test-results
-      - store_artifacts:
-          path: test-results
-          destination: .
-  e2e:
-    working_directory: /home/circleci/.go_workspace/src/github.com/argoproj/argo-cd
-    machine:
-      image: circleci/classic:201808-01
-    environment:
-      ARGOCD_FAKE_IN_CLUSTER: "true"
-      ARGOCD_SSH_DATA_PATH: "/tmp/argo-e2e/app/config/ssh"
-      ARGOCD_TLS_DATA_PATH: "/tmp/argo-e2e/app/config/tls"
-    steps:
-      - run:
-          name: Install and start K3S v0.5.0
-          command: |
-            curl -sfL https://get.k3s.io | sh -
-            sudo chmod -R a+rw /etc/rancher/k3s
-            kubectl version
-          background: true
-          environment:
-            INSTALL_K3S_EXEC: --docker
-            INSTALL_K3S_VERSION: v0.5.0
-      - restore_go_cache
-      - install_golang
-      - checkout
-      - restore_cache:
-          keys: [e2e-dl-v2]
-      - run: sudo ./hack/install.sh kubectx-linux dep-linux ksonnet-linux helm-linux helm2-linux kustomize-linux
-      - run: go get github.com/jstemmer/go-junit-report
-      - save_cache:
-          key: e2e-dl-v2
-          paths: [/tmp/dl]
-      - restore_vendor
-      - run: dep ensure -v
-      - configure_git
-      - run: make cli
-      - run:
-          name: Create namespace
-          command: |
-            set -x
-            kubectl create ns argocd-e2e
-            kubens argocd-e2e
-            # install the certificates (not 100% sure we need this)
-            sudo cp /var/lib/rancher/k3s/server/tls/token-ca.crt /usr/local/share/ca-certificates/k3s.crt
-            sudo update-ca-certificates
-            # create the kubecfg, again - not sure we need this
-            cat /etc/rancher/k3s/k3s.yaml | sed "s/localhost/`hostname`/" | tee ~/.kube/config
-            echo "127.0.0.1 `hostname`" | sudo tee -a /etc/hosts
-      - run:
-          name: Apply manifests
-          command: kustomize build test/manifests/base | kubectl apply -f -
-      - run:
-          name: Start Redis
-          command: docker run --rm --name argocd-redis -i -p 6379:6379 redis:5.0.3-alpine --save "" --appendonly no
-          background: true
-      - run:
-          name: Start repo server
-          command: go run ./cmd/argocd-repo-server/main.go --loglevel debug --redis localhost:6379
-          background: true
-      - run:
-          name: Start API server
-          command: go run ./cmd/argocd-server/main.go --loglevel debug --redis localhost:6379 --insecure --dex-server http://localhost:5556 --repo-server localhost:8081 --staticassets ../argo-cd-ui/dist/app
-          background: true
-      - run:
-          name: Start Test Git
-          command: |
-            test/fixture/testrepos/start-git.sh
-          background: true
-      - run: until curl -v http://localhost:8080/healthz; do sleep 10; done
-      - run:
-          name: Start controller
-          command: go run ./cmd/argocd-application-controller/main.go --loglevel debug --redis localhost:6379 --repo-server localhost:8081 --kubeconfig ~/.kube/config
-          background: true
-      - run:
-          command: PATH=dist:$PATH make test-e2e
-          environment:
-            ARGOCD_OPTS: "--server localhost:8080 --plaintext"
-            ARGOCD_E2E_K3S: "true"
-      - save_vendor
-      - save_go_cache
-      - store_test_results:
-          path: test-results
-      - store_artifacts:
-          path: test-results
-          destination: .
-  ui:
-    docker:
-      - image: node:11.15.0
-    working_directory: ~/argo-cd/ui
-    steps:
-      - checkout:
-          path: ~/argo-cd/
-      - restore_cache:
-          keys:
-            - yarn-packages-v4-{{ checksum "yarn.lock" }}
-      - run: yarn install --frozen-lockfile --ignore-optional --non-interactive
-      - save_cache:
-          key: yarn-packages-v4-{{ checksum "yarn.lock" }}
-          paths: [~/.cache/yarn, node_modules]
-      - run: yarn test
-      - run: ./node_modules/.bin/codecov -p ..
-      - run: NODE_ENV='production' yarn build
-      - run: yarn lint
-workflows:
-  version: 2
-  workflow:
-    jobs:
-      - test
-      - codegen
-      - ui:
-          requires:
-            - codegen
-      - e2e
+jobs:
+  dummy:
+    docker:
+      - image: cimg/base:2020.01
+    steps:
+      - run:
+          name: Dummy step
+          command: |
+            echo "This is a dummy step to satisfy CircleCI"
+workflows:
+  version: 2
+  workflow:
+    jobs:
+      - dummy

.circleci/config.yml.off Normal file (324 lines)

@@ -0,0 +1,324 @@
# CircleCI currently disabled in favor of GH actions
version: 2.1
commands:
prepare_environment:
steps:
- run:
name: Configure environment
command: |
set -x
echo "export GOCACHE=/tmp/go-build-cache" | tee -a $BASH_ENV
echo "export ARGOCD_TEST_VERBOSE=true" | tee -a $BASH_ENV
echo "export ARGOCD_TEST_PARALLELISM=4" | tee -a $BASH_ENV
echo "export ARGOCD_SONAR_VERSION=4.2.0.1873" | tee -a $BASH_ENV
configure_git:
steps:
- run:
name: Configure Git
command: |
set -x
# must be configured for tests to run
git config --global user.email you@example.com
git config --global user.name "Your Name"
echo "export PATH=/home/circleci/.go_workspace/src/github.com/argoproj/argo-cd/hack:\$PATH" | tee -a $BASH_ENV
echo "export GIT_ASKPASS=git-ask-pass.sh" | tee -a $BASH_ENV
setup_go_modules:
steps:
- run:
name: Run go mod download and populate vendor
command: |
go mod download
go mod vendor
save_coverage_info:
steps:
- persist_to_workspace:
root: .
paths:
- coverage.out
save_node_modules:
steps:
- persist_to_workspace:
root: ~/argo-cd
paths:
- ui/node_modules
save_go_cache:
steps:
- persist_to_workspace:
root: /tmp
paths:
- go-build-cache
attach_go_cache:
steps:
- attach_workspace:
at: /tmp
install_golang:
steps:
- run:
name: Install Golang v1.14.1
command: |
go get golang.org/dl/go1.14.1
[ -e /home/circleci/sdk/go1.14.1 ] || go1.14.1 download
go env
echo "export GOPATH=/home/circleci/.go_workspace" | tee -a $BASH_ENV
echo "export PATH=/home/circleci/sdk/go1.14.1/bin:\$PATH" | tee -a $BASH_ENV
jobs:
build:
docker:
- image: argoproj/argocd-test-tools:v0.5.0
working_directory: /go/src/github.com/argoproj/argo-cd
steps:
- prepare_environment
- checkout
- run: make build-local
- run: chmod -R 777 vendor
- run: chmod -R 777 ${GOCACHE}
- save_go_cache
codegen:
docker:
- image: argoproj/argocd-test-tools:v0.5.0
working_directory: /go/src/github.com/argoproj/argo-cd
steps:
- prepare_environment
- checkout
- attach_go_cache
- run: helm2 init --client-only
- run: make codegen-local
- run:
name: Check nothing has changed
command: |
set -xo pipefail
# This makes sure you ran `make pre-commit` before you pushed.
# We exclude the Swagger resources; CircleCI doesn't generate them correctly.
# When this fails, it will, create a patch file you can apply locally to fix it.
# To troubleshoot builds: https://argoproj.github.io/argo-cd/developer-guide/ci/
git diff --exit-code -- . ':!Gopkg.lock' ':!assets/swagger.json' | tee codegen.patch
- store_artifacts:
path: codegen.patch
destination: .
test:
working_directory: /go/src/github.com/argoproj/argo-cd
docker:
- image: argoproj/argocd-test-tools:v0.5.0
steps:
- prepare_environment
- checkout
- configure_git
- attach_go_cache
- run: make test-local
- run:
name: Uploading code coverage
command: bash <(curl -s https://codecov.io/bash) -f coverage.out
- run:
name: Output of test-results
command: |
ls -l test-results || true
cat test-results/junit.xml || true
- save_coverage_info
- store_test_results:
path: test-results
- store_artifacts:
path: test-results
destination: .
lint:
working_directory: /go/src/github.com/argoproj/argo-cd
docker:
- image: argoproj/argocd-test-tools:v0.5.0
steps:
- prepare_environment
- checkout
- configure_git
- attach_vendor
- store_go_cache_docker
- run:
name: Run golangci-lint
command: ARGOCD_LINT_GOGC=10 make lint-local
- run:
name: Check that nothing has changed
command: |
gDiff=$(git diff)
if test "$gDiff" != ""; then
echo
echo "###############################################################################"
echo "golangci-lint has made automatic corrections to your code. Please check below"
echo "diff output and commit this to your local branch, or run make lint locally."
echo "###############################################################################"
echo
git diff
exit 1
fi
sonarcloud:
working_directory: /go/src/github.com/argoproj/argo-cd
docker:
- image: argoproj/argocd-test-tools:v0.5.0
environment:
NODE_MODULES: /go/src/github.com/argoproj/argo-cd/ui/node_modules
steps:
- prepare_environment
- checkout
- attach_workspace:
at: .
- run:
command: mkdir -p /tmp/cache/scanner
name: Create cache directory if it doesn't exist
- restore_cache:
keys:
- v1-sonarcloud-scanner-4.2.0.1873
- run:
command: |
set -e
VERSION=4.2.0.1873
SONAR_TOKEN=$SONAR_TOKEN
SCANNER_DIRECTORY=/tmp/cache/scanner
export SONAR_USER_HOME=$SCANNER_DIRECTORY/.sonar
OS="linux"
echo $SONAR_USER_HOME
if [[ ! -x "$SCANNER_DIRECTORY/sonar-scanner-$VERSION-$OS/bin/sonar-scanner" ]]; then
curl -Ol https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-$VERSION-$OS.zip
unzip -qq -o sonar-scanner-cli-$VERSION-$OS.zip -d $SCANNER_DIRECTORY
fi
chmod +x $SCANNER_DIRECTORY/sonar-scanner-$VERSION-$OS/bin/sonar-scanner
chmod +x $SCANNER_DIRECTORY/sonar-scanner-$VERSION-$OS/jre/bin/java
# Workaround for a possible bug in CircleCI
if ! echo $CIRCLE_PULL_REQUEST | grep https://github.com/argoproj; then
unset CIRCLE_PULL_REQUEST
unset CIRCLE_PULL_REQUESTS
fi
# Explicitly set NODE_MODULES
export NODE_MODULES=/go/src/github.com/argoproj/argo-cd/ui/node_modules
export NODE_PATH=/go/src/github.com/argoproj/argo-cd/ui/node_modules
$SCANNER_DIRECTORY/sonar-scanner-$VERSION-$OS/bin/sonar-scanner
name: SonarCloud
- save_cache:
key: v1-sonarcloud-scanner-4.2.0.1873
paths:
- /tmp/cache/scanner
e2e:
working_directory: /home/circleci/.go_workspace/src/github.com/argoproj/argo-cd
machine:
image: ubuntu-1604:201903-01
environment:
ARGOCD_FAKE_IN_CLUSTER: "true"
ARGOCD_SSH_DATA_PATH: "/tmp/argo-e2e/app/config/ssh"
ARGOCD_TLS_DATA_PATH: "/tmp/argo-e2e/app/config/tls"
ARGOCD_E2E_K3S: "true"
steps:
- run:
name: Install and start K3S v0.5.0
command: |
curl -sfL https://get.k3s.io | sh -
sudo chmod -R a+rw /etc/rancher/k3s
kubectl version
environment:
INSTALL_K3S_EXEC: --docker
INSTALL_K3S_VERSION: v0.5.0
- prepare_environment
- checkout
- run:
name: Fix permissions on filesystem
command: |
mkdir -p /home/circleci/.go_workspace/pkg/mod
chmod -R 777 /home/circleci/.go_workspace/pkg/mod
mkdir -p /tmp/go-build-cache
chmod -R 777 /tmp/go-build-cache
- attach_go_cache
- run:
name: Update kubectl configuration for container
command: |
ipaddr=$(ifconfig $IFACE |grep "inet " | awk '{print $2}')
if echo $ipaddr | grep -q 'addr:'; then
ipaddr=$(echo $ipaddr | awk -F ':' '{print $2}')
fi
test -d $HOME/.kube || mkdir -p $HOME/.kube
kubectl config view --raw | sed -e "s/127.0.0.1:6443/${ipaddr}:6443/g" -e "s/localhost:6443/${ipaddr}:6443/g" > $HOME/.kube/config
environment:
IFACE: ens4
- run:
name: Start E2E test server
command: make start-e2e
background: true
environment:
DOCKER_SRCDIR: /home/circleci/.go_workspace/src
ARGOCD_E2E_TEST: "true"
ARGOCD_IN_CI: "true"
GOPATH: /home/circleci/.go_workspace
- run:
name: Wait for API server to become available
command: |
count=1
until curl -v http://localhost:8080/healthz; do
sleep 10;
if test $count -ge 60; then
echo "Timeout"
exit 1
fi
count=$((count+1))
done
- run:
name: Run E2E tests
command: |
make test-e2e
environment:
ARGOCD_OPTS: "--plaintext"
ARGOCD_E2E_K3S: "true"
IFACE: ens4
DOCKER_SRCDIR: /home/circleci/.go_workspace/src
GOPATH: /home/circleci/.go_workspace
- store_test_results:
path: test-results
- store_artifacts:
path: test-results
destination: .
ui:
docker:
- image: node:11.15.0
working_directory: ~/argo-cd/ui
steps:
- checkout:
path: ~/argo-cd/
- restore_cache:
keys:
- yarn-packages-v4-{{ checksum "yarn.lock" }}
- run: yarn install --frozen-lockfile --ignore-optional --non-interactive
- save_cache:
key: yarn-packages-v4-{{ checksum "yarn.lock" }}
paths: [~/.cache/yarn, node_modules]
- run: yarn test
- run: ./node_modules/.bin/codecov -p ..
- run: NODE_ENV='production' yarn build
- run: yarn lint
- save_node_modules
orbs:
sonarcloud: sonarsource/sonarcloud@1.0.1
workflows:
version: 2
workflow:
jobs:
- build
- test:
requires:
- build
- codegen:
requires:
- build
- ui:
requires:
- build
- sonarcloud:
context: SonarCloud
requires:
- test
- ui
- e2e:
requires:
- build


@@ -5,6 +5,10 @@ title: ''
labels: 'bug'
assignees: ''
---
If you are trying to resolve an environment-specific issue or have a one-off question about an edge case that does not require a feature, then please consider asking a
question in the argocd slack [channel](https://argoproj.github.io/community/join-slack).
Checklist:
* [ ] I've searched in the docs and FAQ for my answer: http://bit.ly/argocd-faq.

.github/workflows/ci-build.yaml vendored Normal file

@@ -0,0 +1,349 @@
name: Integration tests
on:
push:
branches:
- 'master'
- 'release-*'
- '!release-1.4'
- '!release-1.5'
pull_request:
branches:
- 'master'
- 'release-1.6'
jobs:
check-go:
name: Ensure Go modules synchronicity
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v2
- name: Setup Golang
uses: actions/setup-go@v1
with:
go-version: '1.14.12'
- name: Download all Go modules
run: |
go mod download
- name: Check for tidiness of go.mod and go.sum
run: |
go mod tidy
git diff --exit-code -- .
build-go:
name: Build & cache Go code
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v2
- name: Setup Golang
uses: actions/setup-go@v1
with:
go-version: '1.14.12'
- name: Restore go build cache
uses: actions/cache@v1
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
- name: Download all Go modules
run: |
go mod download
- name: Compile all packages
run: make build-local
lint-go:
name: Lint Go code
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v2
- name: Run golangci-lint
uses: golangci/golangci-lint-action@v2
with:
version: v1.29
args: --timeout 5m --exclude SA5011
test-go:
name: Run unit tests for Go packages
runs-on: ubuntu-latest
needs:
- build-go
steps:
- name: Create checkout directory
run: mkdir -p ~/go/src/github.com/argoproj
- name: Checkout code
uses: actions/checkout@v2
- name: Create symlink in GOPATH
run: ln -s $(pwd) ~/go/src/github.com/argoproj/argo-cd
- name: Setup Golang
uses: actions/setup-go@v1
with:
go-version: '1.14.12'
- name: Install required packages
run: |
sudo apt-get install git -y
- name: Switch to temporary branch so we re-attach HEAD
run: |
git switch -c temporal-pr-branch
git status
- name: Fetch complete history for blame information
run: |
git fetch --prune --no-tags --depth=1 origin +refs/heads/*:refs/remotes/origin/*
- name: Add ~/go/bin to PATH
run: |
echo "/home/runner/go/bin" >> $GITHUB_PATH
- name: Add /usr/local/bin to PATH
run: |
echo "/usr/local/bin" >> $GITHUB_PATH
- name: Restore go build cache
uses: actions/cache@v1
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
- name: Install all tools required for building & testing
run: |
make install-test-tools-local
- name: Setup git username and email
run: |
git config --global user.name "John Doe"
git config --global user.email "john.doe@example.com"
- name: Download and vendor all required packages
run: |
go mod download
- name: Run all unit tests
run: make test-local
- name: Generate code coverage artifacts
uses: actions/upload-artifact@v2
with:
name: code-coverage
path: coverage.out
- name: Generate test results artifacts
uses: actions/upload-artifact@v2
with:
name: test-results
path: test-results/
codegen:
name: Check changes to generated code
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v2
- name: Setup Golang
uses: actions/setup-go@v1
with:
go-version: '1.14.12'
- name: Create symlink in GOPATH
run: |
mkdir -p ~/go/src/github.com/argoproj
cp -a ../argo-cd ~/go/src/github.com/argoproj
- name: Add ~/go/bin to PATH
run: |
echo "/home/runner/go/bin" >> $GITHUB_PATH
- name: Add /usr/local/bin to PATH
run: |
echo "/usr/local/bin" >> $GITHUB_PATH
- name: Download & vendor dependencies
run: |
# We still need to vendor go modules for codegen
go mod download
go mod vendor -v
working-directory: /home/runner/go/src/github.com/argoproj/argo-cd
- name: Install toolchain for codegen
run: |
make install-codegen-tools-local
make install-go-tools-local
working-directory: /home/runner/go/src/github.com/argoproj/argo-cd
- name: Initialize local Helm
run: |
helm2 init --client-only
- name: Run codegen
run: |
set -x
export GOPATH=$(go env GOPATH)
make codegen-local
working-directory: /home/runner/go/src/github.com/argoproj/argo-cd
- name: Check nothing has changed
run: |
set -xo pipefail
git diff --exit-code -- . ':!go.sum' ':!go.mod' ':!assets/swagger.json' | tee codegen.patch
working-directory: /home/runner/go/src/github.com/argoproj/argo-cd
build-ui:
name: Build, test & lint UI code
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v2
- name: Setup NodeJS
uses: actions/setup-node@v1
with:
node-version: '11.15.0'
- name: Restore node dependency cache
id: cache-dependencies
uses: actions/cache@v1
with:
path: ui/node_modules
key: ${{ runner.os }}-node-dep-v2-${{ hashFiles('**/yarn.lock') }}
- name: Install node dependencies
run: |
cd ui && yarn install --frozen-lockfile --ignore-optional --non-interactive
- name: Build UI code
run: |
yarn test
yarn build
env:
NODE_ENV: production
working-directory: ui/
- name: Run ESLint
run: yarn lint
working-directory: ui/
analyze:
name: Process & analyze test artifacts
runs-on: ubuntu-latest
needs:
- test-go
- build-ui
env:
sonar_secret: ${{ secrets.SONAR_TOKEN }}
steps:
- name: Checkout code
uses: actions/checkout@v2
with:
fetch-depth: 0
- name: Restore node dependency cache
id: cache-dependencies
uses: actions/cache@v1
with:
path: ui/node_modules
key: ${{ runner.os }}-node-dep-v2-${{ hashFiles('**/yarn.lock') }}
- name: Remove other node_modules directory
run: |
rm -rf ui/node_modules/argo-ui/node_modules
- name: Create test-results directory
run: |
mkdir -p test-results
- name: Get code coverage artifact
uses: actions/download-artifact@v2
with:
name: code-coverage
- name: Get test result artifact
uses: actions/download-artifact@v2
with:
name: test-results
path: test-results
- name: Upload code coverage information to codecov.io
uses: codecov/codecov-action@v1
with:
file: coverage.out
- name: Perform static code analysis using SonarCloud
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
SCANNER_VERSION: 4.2.0.1873
SCANNER_PATH: /tmp/cache/scanner
OS: linux
run: |
# We do not use the provided action, because it contains an old
# version of the scanner and also takes time to build.
set -e
mkdir -p ${SCANNER_PATH}
export SONAR_USER_HOME=${SCANNER_PATH}/.sonar
if [[ ! -x "${SCANNER_PATH}/sonar-scanner-${SCANNER_VERSION}-${OS}/bin/sonar-scanner" ]]; then
curl -Ol https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-${SCANNER_VERSION}-${OS}.zip
unzip -qq -o sonar-scanner-cli-${SCANNER_VERSION}-${OS}.zip -d ${SCANNER_PATH}
fi
chmod +x ${SCANNER_PATH}/sonar-scanner-${SCANNER_VERSION}-${OS}/bin/sonar-scanner
chmod +x ${SCANNER_PATH}/sonar-scanner-${SCANNER_VERSION}-${OS}/jre/bin/java
# Explicitly set NODE_MODULES
export NODE_MODULES=${PWD}/ui/node_modules
export NODE_PATH=${PWD}/ui/node_modules
${SCANNER_PATH}/sonar-scanner-${SCANNER_VERSION}-${OS}/bin/sonar-scanner
if: env.sonar_secret != ''
test-e2e:
name: Run end-to-end tests
runs-on: ubuntu-latest
needs:
- build-go
env:
GOPATH: /home/runner/go
ARGOCD_FAKE_IN_CLUSTER: "true"
ARGOCD_SSH_DATA_PATH: "/tmp/argo-e2e/app/config/ssh"
ARGOCD_TLS_DATA_PATH: "/tmp/argo-e2e/app/config/tls"
ARGOCD_E2E_SSH_KNOWN_HOSTS: "../fixture/certs/ssh_known_hosts"
ARGOCD_E2E_K3S: "true"
ARGOCD_IN_CI: "true"
ARGOCD_E2E_APISERVER_PORT: "8088"
ARGOCD_SERVER: "127.0.0.1:8088"
steps:
- name: Checkout code
uses: actions/checkout@v2
- name: Setup Golang
uses: actions/setup-go@v1
with:
go-version: '1.14.12'
- name: Install K3S
env:
INSTALL_K3S_VERSION: v0.5.0
run: |
set -x
curl -sfL https://get.k3s.io | sh -
sudo chmod -R a+rw /etc/rancher/k3s
sudo mkdir -p $HOME/.kube && sudo chown -R runner $HOME/.kube
sudo k3s kubectl config view --raw > $HOME/.kube/config
sudo chown runner $HOME/.kube/config
kubectl version
- name: Restore go build cache
uses: actions/cache@v1
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
- name: Add ~/go/bin to PATH
run: |
echo "/home/runner/go/bin" >> $GITHUB_PATH
- name: Add /usr/local/bin to PATH
run: |
echo "/usr/local/bin" >> $GITHUB_PATH
- name: Download Go dependencies
run: |
go mod download
go get github.com/mattn/goreman
- name: Install all tools required for building & testing
run: |
make install-test-tools-local
- name: Setup git username and email
run: |
git config --global user.name "John Doe"
git config --global user.email "john.doe@example.com"
- name: Pull Docker image required for tests
run: |
docker pull quay.io/dexidp/dex:v2.22.0
docker pull argoproj/argo-cd-ci-builder:v1.0.0
docker pull redis:5.0.3-alpine
- name: Run E2E server and wait for it to become available
timeout-minutes: 30
run: |
set -x
# Something is weird in GH runners -- there's a phantom listener for
# port 8080 which is not visible in netstat -tulpen, but still serves
# HTTP. We have the API server listening on port 8088 instead.
make start-e2e-local &
count=1
until curl -f http://127.0.0.1:8088/healthz; do
sleep 10;
if test $count -ge 60; then
echo "Timeout"
exit 1
fi
count=$((count+1))
done
- name: Run E2E testsuite
run: |
set -x
make test-e2e-local


@@ -13,19 +13,10 @@ jobs:
steps:
- uses: actions/setup-go@v1
with:
go-version: '1.14'
go-version: '1.14.12'
- uses: actions/checkout@master
with:
path: src/github.com/argoproj/argo-cd
- uses: actions/cache@v1
with:
path: src/github.com/argoproj/argo-cd/vendor
key: ${{ runner.os }}-go-dep-${{ hashFiles('**/Gopkg.lock') }}
# download dependencies
- run: mkdir -p $GITHUB_WORKSPACE/bin && curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh
working-directory: src/github.com/argoproj/argo-cd
- run: $GOPATH/bin/dep ensure -v
working-directory: ./src/github.com/argoproj/argo-cd
# get image tag
- run: echo ::set-output name=tag::$(cat ./VERSION)-${GITHUB_SHA::8}
@@ -33,7 +24,9 @@ jobs:
id: image
# build
- run: make image DEV_IMAGE=true DOCKER_PUSH=false IMAGE_NAMESPACE=docker.pkg.github.com/argoproj/argo-cd IMAGE_TAG=${{ steps.image.outputs.tag }}
- run: |
docker images -a --format "{{.ID}}" | xargs -I {} docker rmi {}
make image DEV_IMAGE=true DOCKER_PUSH=false IMAGE_NAMESPACE=docker.pkg.github.com/argoproj/argo-cd IMAGE_TAG=${{ steps.image.outputs.tag }}
working-directory: ./src/github.com/argoproj/argo-cd
# publish
@@ -54,4 +47,4 @@ jobs:
git config --global user.name 'CI'
git diff --exit-code && echo 'Already deployed' || (git commit -am 'Upgrade argocd to ${{ steps.image.outputs.tag }}' && git push)
working-directory: argoproj-deployments/argocd
# TODO: clean up old images once github supports it: https://github.community/t5/How-to-use-Git-and-GitHub/Deleting-images-from-Github-Package-Registry/m-p/41202/thread-id/9811
# TODO: clean up old images once github supports it: https://github.community/t5/How-to-use-Git-and-GitHub/Deleting-images-from-Github-Package-Registry/m-p/41202/thread-id/9811

.github/workflows/release.yaml vendored Normal file

@@ -0,0 +1,289 @@
name: Create ArgoCD release
on:
push:
tags:
- 'release-v*'
- '!release-v1.5*'
- '!release-v1.4*'
- '!release-v1.3*'
- '!release-v1.2*'
- '!release-v1.1*'
- '!release-v1.0*'
- '!release-v0*'
jobs:
prepare-release:
name: Perform automatic release on trigger ${{ github.ref }}
runs-on: ubuntu-latest
env:
# The name of the tag as supplied by the GitHub event
SOURCE_TAG: ${{ github.ref }}
# The image namespace where the Docker image will be published
IMAGE_NAMESPACE: argoproj
# Whether to create & push image and release assets
DRY_RUN: false
# Whether a draft release should be created, instead of a public one
DRAFT_RELEASE: false
# The name of the repository containing tap formulae
TAP_REPOSITORY: argoproj/homebrew-tap
# Whether to update homebrew with this release as well
# Set RELEASE_HOMEBREW_TOKEN secret in repository for this to work - needs
# access to public repositories (or homebrew-tap repo specifically)
UPDATE_HOMEBREW: false
# Name of the GitHub user for Git config
GIT_USERNAME: argo-bot
# E-Mail of the GitHub user for Git config
GIT_EMAIL: argoproj@gmail.com
steps:
- name: Checkout code
uses: actions/checkout@v2
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
- name: Check if the published tag is well-formed and set up vars
run: |
set -xue
# Target version must match major.minor.patch and optional -rcX suffix
# where X must be a number.
TARGET_VERSION=${SOURCE_TAG#*release-v}
if ! echo ${TARGET_VERSION} | egrep '^[0-9]+\.[0-9]+\.[0-9]+(-rc[0-9]+)*$'; then
echo "::error::Target version '${TARGET_VERSION}' is malformed, refusing to continue." >&2
exit 1
fi
# Target branch is the release branch we're going to operate on
# Its name is 'release-<major>.<minor>'
TARGET_BRANCH="release-${TARGET_VERSION%\.[0-9]*}"
# The release tag is the source tag, minus the release- prefix
RELEASE_TAG="${SOURCE_TAG#*release-}"
# Whether this is a pre-release (indicated by -rc suffix)
PRE_RELEASE=false
if echo "${RELEASE_TAG}" | egrep -- '-rc[0-9]+$'; then
PRE_RELEASE=true
fi
# We must not have a release trigger within the same release branch,
# because that means a release for this branch is already running.
if git tag -l | grep "release-v${TARGET_VERSION%\.[0-9]*}" | grep -v "release-v${TARGET_VERSION}"; then
echo "::error::Another release for branch ${TARGET_BRANCH} is currently in progress."
exit 1
fi
# Ensure that the release does not yet exist
if git rev-parse ${RELEASE_TAG}; then
echo "::error::Release tag ${RELEASE_TAG} already exists in repository. Refusing to continue."
exit 1
fi
# Make the variables available in follow-up steps
echo "TARGET_VERSION=${TARGET_VERSION}" >> $GITHUB_ENV
echo "TARGET_BRANCH=${TARGET_BRANCH}" >> $GITHUB_ENV
echo "RELEASE_TAG=${RELEASE_TAG}" >> $GITHUB_ENV
echo "PRE_RELEASE=${PRE_RELEASE}" >> $GITHUB_ENV
- name: Check if our release tag has a correct annotation
run: |
set -ue
# Fetch all tag information as well
git fetch --prune --tags --force
echo "=========== BEGIN COMMIT MESSAGE ============="
git show ${SOURCE_TAG}
echo "============ END COMMIT MESSAGE =============="
# Quite dirty hack to get the release notes from the annotated tag
# into a temporary file.
RELEASE_NOTES=$(mktemp -p /tmp release-notes.XXXXXX)
prefix=true
begin=false
git show ${SOURCE_TAG} | while read line; do
# Whatever is in the commit history for the tag, we only want the
# annotation from our tag. We discard everything else.
if test "$begin" = "false"; then
if echo $line | grep -q "tag ${SOURCE_TAG#refs/tags/}"; then begin="true"; fi
continue
fi
if test "$prefix" = "true"; then
if test -z "$line"; then prefix=false; fi
else
if echo $line | egrep -q '^commit [0-9a-f]+'; then
break
fi
echo $line >> ${RELEASE_NOTES}
fi
done
# For debug purposes
echo "============BEGIN RELEASE NOTES================="
cat ${RELEASE_NOTES}
echo "=============END RELEASE NOTES=================="
# Too short release notes are suspicious. We need at least 100 bytes.
relNoteLen=$(stat -c '%s' $RELEASE_NOTES)
if test $relNoteLen -lt 100; then
echo "::error::No release notes provided in tag annotation (or tag is not annotated)"
exit 1
fi
# Check for magic string '## Quick Start' in head of release notes
if ! head -2 ${RELEASE_NOTES} | grep -iq '## Quick Start'; then
echo "::error::Release notes seem invalid, quick start section not found."
exit 1
fi
# We store the path to the temporary release notes file for later reading; we
# need it when creating the release.
echo "RELEASE_NOTES=${RELEASE_NOTES}" >> $GITHUB_ENV
- name: Setup Golang
uses: actions/setup-go@v1
with:
go-version: '1.14.12'
- name: Setup Git author information
run: |
set -ue
git config --global user.email "${GIT_EMAIL}"
git config --global user.name "${GIT_USERNAME}"
- name: Checkout corresponding release branch
run: |
set -ue
echo "Switching to release branch '${TARGET_BRANCH}'"
if ! git checkout ${TARGET_BRANCH}; then
echo "::error::Checking out release branch '${TARGET_BRANCH}' for target version '${TARGET_VERSION}' (tagged '${RELEASE_TAG}') failed. Does it exist in repo?"
exit 1
fi
- name: Create VERSION information
run: |
set -ue
echo "Bumping version from $(cat VERSION) to ${TARGET_VERSION}"
echo "${TARGET_VERSION}" > VERSION
git commit -m "Bump version to ${TARGET_VERSION}" VERSION
- name: Generate new set of manifests
run: |
set -ue
make install-codegen-tools-local
helm2 init --client-only
make manifests-local VERSION=${TARGET_VERSION}
git diff
git commit manifests/ -m "Bump version to ${TARGET_VERSION}"
- name: Create the release tag
run: |
set -ue
echo "Creating release ${RELEASE_TAG}"
git tag ${RELEASE_TAG}
- name: Build Docker image for release
run: |
set -ue
git clean -fd
mkdir -p dist/
make image IMAGE_TAG="${TARGET_VERSION}" DOCKER_PUSH=false
make release-cli
chmod +x ./dist/argocd-linux-amd64
./dist/argocd-linux-amd64 version --client
if: ${{ env.DRY_RUN != 'true' }}
- name: Push docker image to repository
env:
DOCKER_USERNAME: ${{ secrets.RELEASE_DOCKERHUB_USERNAME }}
DOCKER_TOKEN: ${{ secrets.RELEASE_DOCKERHUB_TOKEN }}
run: |
set -ue
docker login --username "${DOCKER_USERNAME}" --password "${DOCKER_TOKEN}"
docker push ${IMAGE_NAMESPACE}/argocd:v${TARGET_VERSION}
if: ${{ env.DRY_RUN != 'true' }}
- name: Read release notes file
id: release-notes
uses: juliangruber/read-file-action@v1
with:
path: ${{ env.RELEASE_NOTES }}
- name: Push changes to release branch
run: |
set -ue
git push origin ${TARGET_BRANCH}
git push origin ${RELEASE_TAG}
- name: Create GitHub release
uses: actions/create-release@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
id: create_release
with:
tag_name: ${{ env.RELEASE_TAG }}
release_name: ${{ env.RELEASE_TAG }}
draft: ${{ env.DRAFT_RELEASE }}
prerelease: ${{ env.PRE_RELEASE }}
body: ${{ steps.release-notes.outputs.content }}
- name: Upload argocd-linux-amd64 binary to release assets
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./dist/argocd-linux-amd64
asset_name: argocd-linux-amd64
asset_content_type: application/octet-stream
if: ${{ env.DRY_RUN != 'true' }}
- name: Upload argocd-darwin-amd64 binary to release assets
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./dist/argocd-darwin-amd64
asset_name: argocd-darwin-amd64
asset_content_type: application/octet-stream
if: ${{ env.DRY_RUN != 'true' }}
- name: Upload argocd-windows-amd64 binary to release assets
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./dist/argocd-windows-amd64.exe
asset_name: argocd-windows-amd64.exe
asset_content_type: application/octet-stream
if: ${{ env.DRY_RUN != 'true' }}
- name: Check out homebrew tap repository
uses: actions/checkout@v2
env:
HOMEBREW_TOKEN: ${{ secrets.RELEASE_HOMEBREW_TOKEN }}
with:
repository: ${{ env.TAP_REPOSITORY }}
path: homebrew-tap
fetch-depth: 0
token: ${{ env.HOMEBREW_TOKEN }}
if: ${{ env.HOMEBREW_TOKEN != '' && env.UPDATE_HOMEBREW == 'true' && env.PRE_RELEASE != 'true' }}
- name: Update homebrew tap formula
env:
HOMEBREW_TOKEN: ${{ secrets.RELEASE_HOMEBREW_TOKEN }}
run: |
set -ue
cd homebrew-tap
./update.sh argocd ${TARGET_VERSION}
git commit -am "Update argocd to ${TARGET_VERSION}"
git push
cd ..
rm -rf homebrew-tap
if: ${{ env.HOMEBREW_TOKEN != '' && env.UPDATE_HOMEBREW == 'true' && env.PRE_RELEASE != 'true' }}
- name: Delete original request tag from repository
run: |
set -ue
git push --delete origin ${SOURCE_TAG}
if: ${{ always() }}

.gitignore vendored

@@ -9,4 +9,6 @@ site/
cmd/**/debug
debug.test
coverage.out
test-results
test-results
.scannerwork
.scratch


@@ -1,6 +1,112 @@
# Changelog
## v1.5.0 (Not released)
## v1.5.5 (2020-05-16)
- feat: add Rollout restart action (#3557)
- fix: enable redis retries; add redis request duration metric (#3547)
- fix: when --rootpath is on, 404 is returned when URL contains encoded URI (#3564)
## v1.5.4 (2020-05-05)
- fix: CLI commands with --grpc-web
## v1.5.3 (2020-05-01)
This patch release introduces a set of enhancements and bug fixes. Here are the most notable changes:
#### Multiple Kustomize Versions
The bundled Kustomize version has been upgraded to v3.5.4. Argo CD allows changing the bundled version using
a [custom image or init container](https://argoproj.github.io/argo-cd/operator-manual/custom_tools/).
This [feature](https://argoproj.github.io/argo-cd/user-guide/kustomize/#custom-kustomize-versions)
enables bundling multiple Kustomize versions at the same time and allows end-users to specify the required version per application.
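As a minimal sketch (the application name `my-app` is hypothetical; the flag follows the linked feature docs):

    argocd app set my-app --kustomize-version v3.5.4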
#### Custom Root Path
The feature allows accessing the Argo CD UI and API using a custom root path (for example, https://myhostname/argocd).
This enables running Argo CD behind a proxy that takes care of user authentication (such as Ambassador), or hosting
multiple Argo CD instances on the same hostname. A set of bug fixes and enhancements has been implemented to make this easier.
Use the new `--rootpath` [flag](https://argoproj.github.io/argo-cd/operator-manual/ingress/#argocd-server-and-ui-root-path-v153) to enable the feature.
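A minimal sketch of enabling it on the API server binary (the `/argocd` prefix is only an example):

    argocd-server --rootpath /argocd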
#### Login Rate Limiting
The feature prevents brute-force attacks against the built-in user's password and addresses the known
[vulnerability](https://argoproj.github.io/argo-cd/security_considerations/#cve-2020-8827-insufficient-anti-automationanti-brute-force).
#### Settings Management Tools
A new set of [CLI commands](https://argoproj.github.io/argo-cd/operator-manual/troubleshooting/) simplifies configuring Argo CD.
Using the CLI, you can test settings changes offline without affecting a running Argo CD instance, and you can troubleshoot diffing
customizations, custom resource health checks, and more.
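A sketch of offline validation, assuming the subcommand layout from the linked troubleshooting docs and a locally exported `argocd-cm.yaml`:

    # validate settings against a local ConfigMap dump, not the live instance
    argocd-util settings validate --argocd-cm-path ./argocd-cm.yaml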
#### Other
* New Project and Application CRD settings ([#2900](https://github.com/argoproj/argo-cd/issues/2900), [#2873](https://github.com/argoproj/argo-cd/issues/2873)) that allow customizing Argo CD behavior.
* Upgraded Dex (v2.22.0) enables seamless [SSO integration](https://www.openshift.com/blog/openshift-authentication-integration-with-argocd) with OpenShift.
#### Enhancements
* feat: added --grpc-web-root-path for CLI. (#3483)
* feat: limit the maximum number of concurrent login attempts (#3467)
* feat: upgrade kustomize version to 3.5.4 (#3472)
* feat: upgrade dex to 2.22.0 (#3468)
* feat: support user specified account token ids (#3425)
* feat: support separate Kustomize version per application (#3414)
* feat: add support for dex prometheus metrics (#3249)
* feat: add settings troubleshooting commands to the 'argocd-util' binary (#3398)
* feat: Let user to define meaningful unique JWT token name (#3388)
* feat: Display link between OLM ClusterServiceVersion and its OperatorGroup (#3390)
* feat: Introduce sync-option SkipDryRunOnMissingResource=true (#2873) (#3247)
* feat: support normalizing CRD fields that use known built-in K8S types (#3357)
* feat: Whitelisted namespace resources (#2900)
#### Bug Fixes
* fix: added path to cookie (#3501)
* fix: 'argocd sync' does not take into account IgnoreExtraneous annotation (#3486)
* fix: CLI renders flipped diff results (#3480)
* fix: GetApplicationSyncWindows API should not validate project permissions (#3456)
* fix: argocd-util kubeconfig should use RawRestConfig to export config (#3447)
* fix: javascript error on accounts list page (#3453)
* fix: support both <group>/<kind> as well as <kind> as a resource override key (#3433)
* fix: Updating to jsonnet v1.15.0 fix issue #3277 (#3431)
* fix for helm repo add with flag --insecure-skip-server-verification (#3420)
* fix: app diff --local support for helm repo. #3151 (#3407)
* fix: Syncing apps incorrectly states "app synced", but this is not true (#3286)
* fix: for jsonnet when it is located in a nested subdirectory and uses import (#3372)
* fix: Update 4.5.3 redis-ha helm manifest (#3370)
* fix: return 401 error code if username does not exist (#3369)
* fix: Do not panic while running hooks with short revision (#3368)
## v1.5.2 (2020-04-20)
#### Critical security fix
This release contains a critical security fix. Please refer to the
[security document](https://argoproj.github.io/argo-cd/security_considerations/#CVE-2020-5260-possible-git-credential-leak)
for more information.
**Upgrading is strongly recommended**
## v1.4.3 (2020-04-20)
#### Critical security fix
This release contains a critical security fix. Please refer to the
[security document](https://argoproj.github.io/argo-cd/security_considerations/#CVE-2020-5260-possible-git-credential-leak)
for more information.
## v1.5.1 (2020-04-06)
#### Bug Fixes
* fix: return 401 error code if username does not exist (#3369)
* fix: Do not panic while running hooks with short revision (#3368)
* fix: Increase HAProxy check interval to prevent intermittent failures (#3356)
* fix: Helm v3 CRD are not deployed (#3345)
## v1.5.0 (2020-04-02)
#### Helm Integration Enhancements - Helm 3 Support And More
@@ -12,12 +118,49 @@ Following enhancement were implemented in addition to Helm 3:
* The `--set-file` flag can be specified in the application specification.
* Fixed a bug that prevented automatically updating a Helm chart when a new version is published (#3193)
#### Better Performance and Improved Metrics
If you are running an Argo CD instance with several hundred applications on it, you should see a
huge performance boost and significantly less Kubernetes API server load.
The Argo CD controller Prometheus metrics have been reworked to enable a richer Grafana dashboard.
The improved dashboard is available at [examples/dashboard.json](https://github.com/argoproj/argo-cd/blob/master/examples/dashboard.json).
You can set the `ARGOCD_LEGACY_CONTROLLER_METRICS=true` environment variable and use [examples/dashboard-legacy.json](https://github.com/argoproj/argo-cd/blob/master/examples/dashboard-legacy.json)
to keep using the old dashboard.
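For example, one way to opt back in, assuming the controller runs as a Deployment named `argocd-application-controller`:

    kubectl set env deployment/argocd-application-controller ARGOCD_LEGACY_CONTROLLER_METRICS=true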
#### Local accounts
Local accounts have been introduced in addition to the `admin` user and SSO integration. The feature is useful for creating authentication
tokens with limited permissions to automate Argo CD management. Local accounts can also be used by small teams when SSO integration is overkill.
This enhancement also makes it possible to disable the admin user and enforce SSO-only logins.
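A short sketch (the account name `ci-bot` is hypothetical; the ConfigMap entry format follows the feature docs):

    # declare the account in the argocd-cm ConfigMap data section: accounts.ci-bot: apiKey
    argocd account generate-token --account ci-bot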
#### Redis HA Proxy mode
As part of this release, the bundled Redis was upgraded to version 4.3.4 with HAProxy enabled.
HAProxy replaces the sentinel and provides a more reliable Redis connection.
> After publishing 1.5.0 release we've discovered that default HAProxy settings might cause intermittent failures.
> See [argo-cd#3358](https://github.com/argoproj/argo-cd/issues/3358)
#### Windows CLI
Windows users deploy to Kubernetes too! Now you can use the Argo CD CLI on Linux, Mac OS, and Windows. The Windows-compatible binary is available
on the release details page as well as on the Argo CD Help page.
#### Breaking Changes
The `argocd_app_sync_status`, `argocd_app_health_status` and `argocd_app_created_time` prometheus metrics are deprecated in favor of additional labels
on the `argocd_app_info` metric. The deprecated metrics are still available and can be re-enabled using the `ARGOCD_LEGACY_CONTROLLER_METRICS=true` environment variable.
The legacy example Grafana dashboard is available at [examples/dashboard-legacy.json](https://github.com/argoproj/argo-cd/blob/master/examples/dashboard-legacy.json).
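To inspect the new labels, the controller metrics endpoint can be scraped directly (port 8082 is an assumption; check your manifests):

    curl -s http://localhost:8082/metrics | grep argocd_app_info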
#### Known issues
Last-minute bugs that will be addressed in 1.5.1 shortly:
* https://github.com/argoproj/argo-cd/issues/3336
* https://github.com/argoproj/argo-cd/issues/3319
* https://github.com/argoproj/argo-cd/issues/3339
* https://github.com/argoproj/argo-cd/issues/3358
#### Enhancements
* feat: support helm3 (#2383) (#3178)
* feat: Argo CD Service Account / Local Users #3185
@@ -299,7 +442,7 @@ https://youtu.be/GP7xtrnNznw
##### Orphan Resources
Some users would like to make sure that resources in a namespace are managed only by Argo CD. So we've introduced the concept of an "orphan resource" - any resource that is in namespace associated with an app, but not managed by Argo CD. This is enabled in the project settings. Once enabled, Argo CD will show in the app view any resources in the app's namepspace that is not mananged by Argo CD.
Some users would like to make sure that resources in a namespace are managed only by Argo CD. So we've introduced the concept of an "orphan resource" - any resource that is in namespace associated with an app, but not managed by Argo CD. This is enabled in the project settings. Once enabled, Argo CD will show in the app view any resources in the app's namespace that is not managed by Argo CD.
https://youtu.be/9ZoTevVQf5I
@@ -342,7 +485,7 @@ There may be instances when you want to control the times during which an Argo C
#### Bug Fixes
- failed parsing on parameters with comma (#1660)
- Statefuleset with OnDelete Update Strategy stuck progressing (#1881)
- Statefulset with OnDelete Update Strategy stuck progressing (#1881)
- Warning during secret diffing (#1923)
- Error message "Unable to load data: key is missing" is confusing (#1944)
- OIDC group bindings are truncated (#2006)
@@ -381,7 +524,7 @@ There may be instances when you want to control the times during which an Argo C
- Creating an application from Helm repository should select "Helm" as source type (#2378)
- The parameters of ValidateAccess GRPC method should not be logged (#2386)
- Maintenance window meaning is confusing (#2398)
- UI bug when targetRevision is ommited (#2407)
- UI bug when targetRevision is omitted (#2407)
- Too many vulnerabilities in Docker image (#2425)
- proj windows commands not consistent with other commands (#2443)
- Custom resource actions cannot be executed from the UI (#2448)
@@ -475,7 +618,7 @@ Support for Git LFS enabled repositories - now you can store Helm charts as tar
+ Added 'SyncFail' to possible HookTypes in UI (#2147)
+ Support for Git LFS enabled repositories (#1853)
+ Server certificate and known hosts management (#1514)
+ Client HTTPS certifcates for private git repositories (#1945)
+ Client HTTPS certificates for private git repositories (#1945)
+ Badge for application status (#1435)
+ Make the health check for APIService a built in (#1841)
+ Bitbucket Server and Gogs webhook providers (#1269)
@@ -515,7 +658,7 @@ Support for Git LFS enabled repositories - now you can store Helm charts as tar
- Fix history api fallback implementation to support app names with dots (#2114)
- Fixes some code issues related to Kustomize build options. (#2146)
- Adds checks around valid paths for apps (#2133)
- Enpoint incorrectly considered top level managed resource (#2060)
- Endpoint incorrectly considered top level managed resource (#2060)
- Allow adding certs for hostnames ending on a dot (#2116)
#### Other
@@ -838,7 +981,7 @@ Argo CD introduces some additional CLI commands:
#### Label selector changes, dex-server rename
The label selectors for deployments were been renamed to use kubernetes common labels
(`app.kuberentes.io/name=NAME` instead of `app=NAME`). Since K8s deployment label selectors are
(`app.kubernetes.io/name=NAME` instead of `app=NAME`). Since K8s deployment label selectors are
immutable, during an upgrade from v0.11 to v0.12, the old deployments should be deleted using
`--cascade=false` which allows the new deployments to be created without introducing downtime.
Once the new deployments are ready, the older replicasets can be deleted. Use the following
@@ -935,7 +1078,7 @@ has a minimum client version of v0.12.0. Older CLI clients will be rejected.
- Fix CRD creation/deletion handling (#1249)
- Git cloning via SSH was not verifying host public key (#1276)
- Fixed multiple goroutine leaks in controller and api-server
- Fix isssue where `argocd app set -p` required repo privileges. (#1280)
- Fix issue where `argocd app set -p` required repo privileges. (#1280)
- Fix local diff of non-namespaced resources. Also handle duplicates in local diff (#1289)
- Deprecated resource kinds from 'extensions' groups are not reconciled correctly (#1232)
- Fix issue where CLI would panic after timeout when cli did not have get permissions (#1209)
@@ -1113,7 +1256,7 @@ which have a dependency to external helm repositories.
+ Allow more fine-grained sync (issue #508)
+ Display init container logs (issue #681)
+ Redirect to /auth/login instead of /login when SSO token is used for authenticaion (issue #348)
+ Redirect to /auth/login instead of /login when SSO token is used for authentication (issue #348)
+ Support ability to use a helm values files from a URL (issue #624)
+ Support public not-connected repo in app creation UI (issue #426)
+ Use ksonnet CLI instead of ksonnet libs (issue #626)
@@ -1388,7 +1531,7 @@ RBAC policy rules, need to be rewritten to include one extra column with the eff
+ Sync/Rollback/Delete is asynchronously handled by controller
* Refactor CRUD operation on clusters and repos
* Sync will always perform kubectl apply
* Synced Status considers last-applied-configuration annotatoin
* Synced Status considers last-applied-configuration annotation
* Server & namespace are mandatory fields (still inferred from app.yaml)
* Manifests are memoized in repo server
- Fix connection timeouts to SSH repos


@@ -4,7 +4,7 @@ ARG BASE_IMAGE=debian:10-slim
# Initial stage which prepares build dependencies and CLI tooling we need for our final image
# Also used as the image in CI jobs so needs all dependencies
####################################################################################################
FROM golang:1.14.0 as builder
FROM golang:1.14.12 as builder
RUN echo 'deb http://deb.debian.org/debian buster-backports main' >> /etc/apt/sources.list
@@ -25,8 +25,8 @@ WORKDIR /tmp
ADD hack/install.sh .
ADD hack/installers installers
ADD hack/tool-versions.sh .
RUN ./install.sh dep-linux
RUN ./install.sh packr-linux
RUN ./install.sh kubectl-linux
RUN ./install.sh ksonnet-linux
@@ -50,9 +50,9 @@ RUN groupadd -g 999 argocd && \
chmod g=u /home/argocd && \
chmod g=u /etc/passwd && \
apt-get update && \
apt-get install -y git git-lfs python3-pip && \
apt-get install -y git git-lfs python3-pip tini && \
apt-get clean && \
pip3 install awscli==1.17.7 && \
pip3 install awscli==1.18.80 && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
COPY hack/git-ask-pass.sh /usr/local/bin/git-ask-pass.sh
@@ -75,7 +75,7 @@ RUN mkdir -p /app/config/tls
# workaround ksonnet issue https://github.com/ksonnet/ksonnet/issues/298
ENV USER=argocd
USER argocd
USER 999
WORKDIR /home/argocd
####################################################################################################
@@ -97,28 +97,26 @@ RUN NODE_ENV='production' yarn build
####################################################################################################
# Argo CD Build stage which performs the actual build of Argo CD binaries
####################################################################################################
FROM golang:1.14.0 as argocd-build
FROM golang:1.14.12 as argocd-build
COPY --from=builder /usr/local/bin/dep /usr/local/bin/dep
COPY --from=builder /usr/local/bin/packr /usr/local/bin/packr
# A dummy directory is created under $GOPATH/src/dummy so we are able to use dep
# to install all the packages of our dep lock file
COPY Gopkg.toml ${GOPATH}/src/dummy/Gopkg.toml
COPY Gopkg.lock ${GOPATH}/src/dummy/Gopkg.lock
WORKDIR /go/src/github.com/argoproj/argo-cd
RUN cd ${GOPATH}/src/dummy && \
dep ensure -vendor-only && \
mv vendor/* ${GOPATH}/src/ && \
rmdir vendor
COPY go.mod go.mod
COPY go.sum go.sum
RUN go mod download
# Perform the build
WORKDIR /go/src/github.com/argoproj/argo-cd
COPY . .
RUN make cli server controller repo-server argocd-util && \
make CLI_NAME=argocd-darwin-amd64 GOOS=darwin cli && \
make CLI_NAME=argocd-windows-amd64.exe GOOS=windows cli
RUN make cli server controller repo-server argocd-util
ARG BUILD_ALL_CLIS=true
RUN if [ "$BUILD_ALL_CLIS" = "true" ] ; then \
make CLI_NAME=argocd-darwin-amd64 GOOS=darwin cli && \
make CLI_NAME=argocd-windows-amd64.exe GOOS=windows cli \
; fi
####################################################################################################
# Final image

Gopkg.lock generated

File diff suppressed because it is too large


@@ -1,121 +0,0 @@
# Packages should only be added to the following list when we use them *outside* of our go code.
# (e.g. we want to build the binary to invoke as part of the build process, such as in
# generate-proto.sh). Normal use of golang packages should be added via `dep ensure`, and pinned
# with a [[constraint]] or [[override]] when version is important.
required = [
"github.com/golang/protobuf/protoc-gen-go",
"github.com/gogo/protobuf/protoc-gen-gofast",
"github.com/gogo/protobuf/protoc-gen-gogofast",
"k8s.io/code-generator/cmd/go-to-protobuf",
"k8s.io/kube-openapi/cmd/openapi-gen",
"github.com/grpc-ecosystem/grpc-gateway/protoc-gen-grpc-gateway",
"github.com/grpc-ecosystem/grpc-gateway/protoc-gen-swagger",
"golang.org/x/sync/errgroup",
]
[[constraint]]
name = "google.golang.org/grpc"
version = "1.15.0"
[[constraint]]
name = "github.com/gogo/protobuf"
version = "1.3.1"
# override github.com/grpc-ecosystem/go-grpc-middleware's constraint on master
[[override]]
name = "github.com/golang/protobuf"
version = "1.2.0"
[[constraint]]
name = "github.com/grpc-ecosystem/grpc-gateway"
version = "v1.3.1"
# prometheus does not believe in semversioning yet
[[constraint]]
name = "github.com/prometheus/client_golang"
revision = "7858729281ec582767b20e0d696b6041d995d5e0"
[[override]]
branch = "release-1.16"
name = "k8s.io/api"
[[override]]
branch = "release-1.16"
name = "k8s.io/kubernetes"
[[override]]
branch = "release-1.16"
name = "k8s.io/code-generator"
[[override]]
branch = "release-1.16"
name = "k8s.io/apimachinery"
[[override]]
branch = "release-1.16"
name = "k8s.io/apiextensions-apiserver"
[[override]]
branch = "release-1.16"
name = "k8s.io/apiserver"
[[override]]
branch = "release-1.16"
name = "k8s.io/kubectl"
[[override]]
branch = "release-1.16"
name = "k8s.io/cli-runtime"
[[override]]
version = "2.0.3"
name = "sigs.k8s.io/kustomize"
# ASCIIRenderer does not implement blackfriday.Renderer
[[override]]
name = "github.com/russross/blackfriday"
version = "1.5.2"
[[override]]
branch = "release-13.0"
name = "k8s.io/client-go"
[[override]]
name = "github.com/casbin/casbin"
version = "1.9.1"
[[constraint]]
name = "github.com/stretchr/testify"
version = "1.5.1"
[[constraint]]
name = "github.com/gobuffalo/packr"
version = "v1.11.0"
[[constraint]]
branch = "master"
name = "github.com/argoproj/pkg"
[[constraint]]
branch = "master"
name = "github.com/yudai/gojsondiff"
# Fixes: Could not introduce sigs.k8s.io/kustomize@v2.0.3, as it has a dependency on github.com/spf13/cobra with constraint ^0.0.2, which has no overlap with existing constraint 0.0.5 from (root)
[[override]]
name = "github.com/spf13/cobra"
revision = "0.0.5"
# TODO: move off of k8s.io/kube-openapi and use controller-tools for CRD spec generation
# (override argoproj/argo contraint on master)
[[override]]
revision = "30be4d16710ac61bce31eb28a01054596fe6a9f1"
name = "k8s.io/kube-openapi"
# jsonpatch replace operation does not apply: doc is missing key: /metadata/annotations
[[override]]
name = "github.com/evanphx/json-patch"
version = "v4.1.0"
[[constraint]]
name = "github.com/google/uuid"
version = "1.1.1"

Makefile

@@ -8,11 +8,85 @@ BUILD_DATE=$(shell date -u +'%Y-%m-%dT%H:%M:%SZ')
GIT_COMMIT=$(shell git rev-parse HEAD)
GIT_TAG=$(shell if [ -z "`git status --porcelain`" ]; then git describe --exact-match --tags HEAD 2>/dev/null; fi)
GIT_TREE_STATE=$(shell if [ -z "`git status --porcelain`" ]; then echo "clean" ; else echo "dirty"; fi)
PACKR_CMD=$(shell if [ "`which packr`" ]; then echo "packr"; else echo "go run vendor/github.com/gobuffalo/packr/packr/main.go"; fi)
VOLUME_MOUNT=$(shell if [[ selinuxenabled -eq 0 ]]; then echo ":Z"; elif [[ $(go env GOOS)=="darwin" ]]; then echo ":delegated"; else echo ""; fi)
PACKR_CMD=$(shell if [ "`which packr`" ]; then echo "packr"; else echo "go run github.com/gobuffalo/packr/packr"; fi)
VOLUME_MOUNT=$(shell if test "$(go env GOOS)" = "darwin"; then echo ":delegated"; elif test selinuxenabled; then echo ":Z"; else echo ""; fi)
define run-in-dev-tool
docker run --rm -it -u $(shell id -u) -e HOME=/home/user -v ${CURRENT_DIR}:/go/src/github.com/argoproj/argo-cd${VOLUME_MOUNT} -w /go/src/github.com/argoproj/argo-cd argocd-dev-tools bash -c "GOPATH=/go $(1)"
GOPATH?=$(shell if test -x `which go`; then go env GOPATH; else echo "$(HOME)/go"; fi)
GOCACHE?=$(HOME)/.cache/go-build
DOCKER_SRCDIR?=$(GOPATH)/src
DOCKER_WORKDIR?=/go/src/github.com/argoproj/argo-cd
ARGOCD_PROCFILE?=Procfile
# Configuration for building argocd-test-tools image
TEST_TOOLS_NAMESPACE?=argoproj
TEST_TOOLS_IMAGE=argocd-test-tools
TEST_TOOLS_TAG?=v0.5.0
ifdef TEST_TOOLS_NAMESPACE
TEST_TOOLS_PREFIX=${TEST_TOOLS_NAMESPACE}/
endif
# You can change the ports where ArgoCD components will be listening on by
# setting the appropriate environment variables before running make.
ARGOCD_E2E_APISERVER_PORT?=8080
ARGOCD_E2E_REPOSERVER_PORT?=8081
ARGOCD_E2E_REDIS_PORT?=6379
ARGOCD_E2E_DEX_PORT?=5556
ARGOCD_E2E_YARN_HOST?=localhost
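# Example (hypothetical values): run the E2E API server on a different port:
#   make start-e2e ARGOCD_E2E_APISERVER_PORT=8888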
ARGOCD_IN_CI?=false
ARGOCD_TEST_E2E?=true
ARGOCD_LINT_GOGC?=20
# Runs any command in the argocd-test-tools container in server mode
# The server mode container will start with uid 0 and drop privileges during runtime
define run-in-test-server
docker run --rm -it \
--name argocd-test-server \
-e USER_ID=$(shell id -u) \
-e HOME=/home/user \
-e GOPATH=/go \
-e GOCACHE=/tmp/go-build-cache \
-e ARGOCD_IN_CI=$(ARGOCD_IN_CI) \
-e ARGOCD_E2E_TEST=$(ARGOCD_E2E_TEST) \
-e ARGOCD_E2E_YARN_HOST=$(ARGOCD_E2E_YARN_HOST) \
-v ${DOCKER_SRCDIR}:/go/src${VOLUME_MOUNT} \
-v ${GOPATH}/pkg/mod:/go/pkg/mod${VOLUME_MOUNT} \
-v ${GOCACHE}:/tmp/go-build-cache${VOLUME_MOUNT} \
-v ${HOME}/.kube:/home/user/.kube${VOLUME_MOUNT} \
-v /tmp:/tmp${VOLUME_MOUNT} \
-w ${DOCKER_WORKDIR} \
-p ${ARGOCD_E2E_APISERVER_PORT}:8080 \
-p 4000:4000 \
$(TEST_TOOLS_PREFIX)$(TEST_TOOLS_IMAGE):$(TEST_TOOLS_TAG) \
bash -c "$(1)"
endef
# Runs any command in the argocd-test-tools container in client mode
define run-in-test-client
docker run --rm -it \
--name argocd-test-client \
-u $(shell id -u) \
-e HOME=/home/user \
-e GOPATH=/go \
-e ARGOCD_E2E_K3S=$(ARGOCD_E2E_K3S) \
-e GOCACHE=/tmp/go-build-cache \
-e ARGOCD_LINT_GOGC=$(ARGOCD_LINT_GOGC) \
-v ${DOCKER_SRCDIR}:/go/src${VOLUME_MOUNT} \
-v ${GOPATH}/pkg/mod:/go/pkg/mod${VOLUME_MOUNT} \
-v ${GOCACHE}:/tmp/go-build-cache${VOLUME_MOUNT} \
-v ${HOME}/.kube:/home/user/.kube${VOLUME_MOUNT} \
-v /tmp:/tmp${VOLUME_MOUNT} \
-w ${DOCKER_WORKDIR} \
$(TEST_TOOLS_NAMESPACE)/$(TEST_TOOLS_IMAGE):$(TEST_TOOLS_TAG) \
bash -c "$(1)"
endef
#
define exec-in-test-server
docker exec -it -u $(shell id -u) -e ARGOCD_E2E_K3S=$(ARGOCD_E2E_K3S) argocd-test-server $(1)
endef
PATH:=$(PATH):$(PWD)/hack
@@ -55,29 +129,41 @@ endif
.PHONY: all
all: cli image argocd-util
.PHONY: gogen
gogen:
export GO111MODULE=off
go generate ./util/argo/...
.PHONY: protogen
protogen:
export GO111MODULE=off
./hack/generate-proto.sh
.PHONY: openapigen
openapigen:
export GO111MODULE=off
./hack/update-openapi.sh
.PHONY: clientgen
clientgen:
export GO111MODULE=off
./hack/update-codegen.sh
.PHONY: codegen-local
codegen-local: protogen clientgen openapigen manifests-local
codegen-local: mod-vendor-local gogen protogen clientgen openapigen manifests-local
rm -rf vendor/
.PHONY: codegen
codegen: dev-tools-image
$(call run-in-dev-tool,make codegen-local)
codegen:
$(call run-in-test-client,make codegen-local)
.PHONY: cli
cli: clean-debug
CGO_ENABLED=0 ${PACKR_CMD} build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/${CLI_NAME} ./cmd/argocd
.PHONY: cli-docker
go build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/${CLI_NAME} ./cmd/argocd
.PHONY: release-cli
release-cli: clean-debug image
docker create --name tmp-argocd-linux $(IMAGE_PREFIX)argocd:$(IMAGE_TAG)
@@ -91,17 +177,23 @@ argocd-util: clean-debug
# Build argocd-util as a statically linked binary, so it could run within the alpine-based dex container (argoproj/argo-cd#844)
CGO_ENABLED=0 ${PACKR_CMD} build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-util ./cmd/argocd-util
.PHONY: dev-tools-image
dev-tools-image:
cd hack && docker build -t argocd-dev-tools . -f Dockerfile.dev-tools
# .PHONY: dev-tools-image
# dev-tools-image:
# docker build -t $(DEV_TOOLS_PREFIX)$(DEV_TOOLS_IMAGE) . -f hack/Dockerfile.dev-tools
# docker tag $(DEV_TOOLS_PREFIX)$(DEV_TOOLS_IMAGE) $(DEV_TOOLS_PREFIX)$(DEV_TOOLS_IMAGE):$(DEV_TOOLS_VERSION)
.PHONY: test-tools-image
test-tools-image:
docker build -t $(TEST_TOOLS_PREFIX)$(TEST_TOOLS_IMAGE) -f test/container/Dockerfile .
docker tag $(TEST_TOOLS_PREFIX)$(TEST_TOOLS_IMAGE) $(TEST_TOOLS_PREFIX)$(TEST_TOOLS_IMAGE):$(TEST_TOOLS_TAG)
.PHONY: manifests-local
manifests-local:
./hack/update-manifests.sh
.PHONY: manifests
manifests: dev-tools-image
$(call run-in-dev-tool,make manifests-local IMAGE_TAG='${IMAGE_TAG}')
manifests: test-tools-image
$(call run-in-test-client,make manifests-local IMAGE_TAG='${IMAGE_TAG}')
# NOTE: we use packr to do the build instead of go, since we embed swagger files and policy.csv
@@ -120,7 +212,7 @@ controller:
.PHONY: packr
packr:
go build -o ${DIST_DIR}/packr ./vendor/github.com/gobuffalo/packr/packr/
go build -o ${DIST_DIR}/packr github.com/gobuffalo/packr/packr/
.PHONY: image
ifeq ($(DEV_IMAGE), true)
@@ -146,48 +238,119 @@ image:
endif
@if [ "$(DOCKER_PUSH)" = "true" ] ; then docker push $(IMAGE_PREFIX)argocd:$(IMAGE_TAG) ; fi
.PHONY: armimage
# The "BUILD_ALL_CLIS" argument is to skip building the CLIs for darwin and windows
# which would take a really long time.
armimage:
docker build -t $(IMAGE_PREFIX)argocd:$(IMAGE_TAG)-arm . --build-arg BUILD_ALL_CLIS="false"
.PHONY: builder-image
builder-image:
docker build -t $(IMAGE_PREFIX)argo-cd-ci-builder:$(IMAGE_TAG) --target builder .
@if [ "$(DOCKER_PUSH)" = "true" ] ; then docker push $(IMAGE_PREFIX)argo-cd-ci-builder:$(IMAGE_TAG) ; fi
.PHONY: dep
dep:
dep ensure -v
.PHONY: mod-download
mod-download:
$(call run-in-test-client,go mod download)
.PHONY: dep-ensure
dep-ensure:
dep ensure -no-vendor
.PHONY: mod-download-local
mod-download-local:
go mod download
.PHONY: mod-vendor
mod-vendor:
$(call run-in-test-client,go mod vendor)
.PHONY: mod-vendor-local
mod-vendor-local: mod-download-local
go mod vendor
# Deprecated - replaced by install-local-tools
.PHONY: install-lint-tools
install-lint-tools:
./hack/install.sh lint-tools
# Run linter on the code
.PHONY: lint
lint:
$(call run-in-test-client,make lint-local)
# Run linter on the code (local version)
.PHONY: lint-local
lint-local:
golangci-lint --version
# NOTE: If you get a "Killed" OOM message, try reducing the value of GOGC
# See https://github.com/golangci/golangci-lint#memory-usage-of-golangci-lint
GOGC=100 golangci-lint run --fix --verbose
GOGC=$(ARGOCD_LINT_GOGC) GOMAXPROCS=2 golangci-lint run --fix --verbose --timeout 300s
.PHONY: lint-ui
lint-ui:
$(call run-in-test-client,make lint-ui-local)
.PHONY: lint-ui-local
lint-ui-local:
cd ui && yarn lint
# Build all Go code
.PHONY: build
build:
mkdir -p $(GOCACHE)
$(call run-in-test-client, make build-local)
# Build all Go code (local version)
.PHONY: build-local
build-local:
go build -v `go list ./... | grep -v 'resource_customizations\|test/e2e'`
# Run all unit tests
#
# If TEST_MODULE is set (to fully qualified module name), only this specific
# module will be tested.
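# Example (hypothetical module path):
#   make test TEST_MODULE=github.com/argoproj/argo-cd/util/argo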
.PHONY: test
test:
./hack/test.sh -coverprofile=coverage.out `go list ./... | grep -v 'test/e2e'`
mkdir -p $(GOCACHE)
$(call run-in-test-client,make TEST_MODULE=$(TEST_MODULE) test-local)
# Run all unit tests (local version)
.PHONY: test-local
test-local:
if test "$(TEST_MODULE)" = ""; then \
./hack/test.sh -coverprofile=coverage.out `go list ./... | grep -v 'test/e2e'`; \
else \
./hack/test.sh -coverprofile=coverage.out "$(TEST_MODULE)"; \
fi
# Run the E2E test suite. E2E test servers (see the start-e2e target) must be
# started first.
.PHONY: test-e2e
test-e2e:
test-e2e:
$(call exec-in-test-server,make test-e2e-local)
# Run the E2E test suite (local version)
.PHONY: test-e2e-local
test-e2e-local: cli
# NO_PROXY ensures all tests don't go out through a proxy if one is configured on the test system
export GO111MODULE=off
NO_PROXY=* ./hack/test.sh -timeout 15m -v ./test/e2e
# Spawns a shell in the test server container for debugging purposes
debug-test-server:
$(call run-in-test-server,/bin/bash)
# Spawns a shell in the test client container for debugging purposes
debug-test-client:
$(call run-in-test-client,/bin/bash)
# Starts e2e server in a container
.PHONY: start-e2e
start-e2e: cli
killall goreman || true
# check we can connect to Docker to start Redis
start-e2e:
docker version
mkdir -p ${GOCACHE}
$(call run-in-test-server,make ARGOCD_PROCFILE=test/container/Procfile start-e2e-local)
# Starts e2e server locally (or within a container)
.PHONY: start-e2e-local
start-e2e-local:
kubectl create ns argocd-e2e || true
kubectl config set-context --current --namespace=argocd-e2e
kustomize build test/manifests/base | kubectl apply -f -
@@ -196,7 +359,9 @@ start-e2e: cli
ARGOCD_TLS_DATA_PATH=/tmp/argo-e2e/app/config/tls \
ARGOCD_E2E_DISABLE_AUTH=false \
ARGOCD_ZJWT_FEATURE_FLAG=always \
goreman start ${ARGOCD_START}
ARGOCD_IN_CI=$(ARGOCD_IN_CI) \
ARGOCD_E2E_TEST=true \
goreman -f $(ARGOCD_PROCFILE) start
# Cleans VSCode debug.test files from sub-dirs to prevent them from being included in packr boxes
.PHONY: clean-debug
@@ -209,17 +374,28 @@ clean: clean-debug
.PHONY: start
start:
killall goreman || true
# check we can connect to Docker to start Redis
docker version
kubectl create ns argocd || true
kubens argocd
ARGOCD_ZJWT_FEATURE_FLAG=always \
goreman start ${ARGOCD_START}
$(call run-in-test-server,make ARGOCD_PROCFILE=test/container/Procfile start-local ARGOCD_START=${ARGOCD_START})
# Starts a local instance of ArgoCD
.PHONY: start-local
start-local: mod-vendor-local
# check we can connect to Docker to start Redis
killall goreman || true
kubectl create ns argocd || true
ARGOCD_ZJWT_FEATURE_FLAG=always \
ARGOCD_IN_CI=false \
ARGOCD_E2E_TEST=false \
goreman -f $(ARGOCD_PROCFILE) start ${ARGOCD_START}
# Runs pre-commit validation with the virtualized toolchain
.PHONY: pre-commit
pre-commit: dep-ensure codegen build lint test
# Runs pre-commit validation with the local toolchain
.PHONY: pre-commit-local
pre-commit-local: dep-ensure-local codegen-local build-local lint-local test-local
.PHONY: release-precheck
release-precheck: manifests
@if [ "$(GIT_TREE_STATE)" != "clean" ]; then echo 'git tree state is $(GIT_TREE_STATE)' ; exit 1; fi
@@ -245,3 +421,47 @@ lint-docs:
.PHONY: publish-docs
publish-docs: lint-docs
mkdocs gh-deploy
# Verify that kubectl can connect to your K8s cluster from Docker
.PHONY: verify-kube-connect
verify-kube-connect:
$(call run-in-test-client,kubectl version)
# Show the Go version of local and virtualized environments
.PHONY: show-go-version
show-go-version:
@echo -n "Local Go version: "
@go version
@echo -n "Docker Go version: "
$(call run-in-test-client,go version)
# Installs all tools required to build and test ArgoCD locally
.PHONY: install-tools-local
install-tools-local: install-test-tools-local install-codegen-tools-local install-go-tools-local
# Installs all tools required for running unit & end-to-end tests (Linux packages)
.PHONY: install-test-tools-local
install-test-tools-local:
sudo ./hack/install.sh packr-linux
sudo ./hack/install.sh kubectl-linux
sudo ./hack/install.sh kustomize-linux
sudo ./hack/install.sh ksonnet-linux
sudo ./hack/install.sh helm2-linux
sudo ./hack/install.sh helm-linux
# Installs all tools required for running codegen (Linux packages)
.PHONY: install-codegen-tools-local
install-codegen-tools-local:
sudo ./hack/install.sh codegen-tools
# Installs all tools required for running codegen (Go packages)
.PHONY: install-go-tools-local
install-go-tools-local:
./hack/install.sh codegen-go-tools
.PHONY: dep-ui
dep-ui:
$(call run-in-test-client,make dep-ui-local)
dep-ui-local:
cd ui && yarn install

OWNERS

@@ -1,10 +1,12 @@
owners:
- alexec
- alexmt
- jessesuen
approvers:
- alexec
- alexmt
- dthomson25
- jannfis
- jessesuen
- mayzhang2000
- rachelwang20


@@ -1,6 +1,6 @@
controller: sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true go run ./cmd/argocd-application-controller/main.go --loglevel debug --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081}"
api-server: sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true go run ./cmd/argocd-server/main.go --loglevel debug --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --disable-auth=${ARGOCD_E2E_DISABLE_AUTH:-'true'} --insecure --dex-server http://localhost:${ARGOCD_E2E_DEX_PORT:-5556} --repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081} --port ${ARGOCD_E2E_APISERVER_PORT:-8080} --staticassets ui/dist/app"
dex: sh -c "go run github.com/argoproj/argo-cd/cmd/argocd-util gendexcfg -o `pwd`/dist/dex.yaml && docker run --rm -p ${ARGOCD_E2E_DEX_PORT:-5556}:${ARGOCD_E2E_DEX_PORT:-5556} -v `pwd`/dist/dex.yaml:/dex.yaml quay.io/dexidp/dex:v2.21.0 serve /dex.yaml"
dex: sh -c "go run github.com/argoproj/argo-cd/cmd/argocd-util gendexcfg -o `pwd`/dist/dex.yaml && docker run --rm -p ${ARGOCD_E2E_DEX_PORT:-5556}:${ARGOCD_E2E_DEX_PORT:-5556} -v `pwd`/dist/dex.yaml:/dex.yaml quay.io/dexidp/dex:v2.22.0 serve /dex.yaml"
redis: docker run --rm --name argocd-redis -i -p ${ARGOCD_E2E_REDIS_PORT:-6379}:${ARGOCD_E2E_REDIS_PORT:-6379} redis:5.0.3-alpine --save "" --appendonly no --port ${ARGOCD_E2E_REDIS_PORT:-6379}
repo-server: sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true go run ./cmd/argocd-repo-server/main.go --loglevel debug --port ${ARGOCD_E2E_REPOSERVER_PORT:-8081} --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379}"
ui: sh -c 'cd ui && ${ARGOCD_E2E_YARN_CMD:-yarn} start'


@@ -1,3 +1,4 @@
[![Integration tests](https://github.com/argoproj/argo-cd/workflows/Integration%20tests/badge.svg?branch=master)](https://github.com/argoproj/argo-cd/actions?query=workflow%3A%22Integration+tests%22)
[![slack](https://img.shields.io/badge/slack-argoproj-brightgreen.svg?logo=slack)](https://argoproj.github.io/community/join-slack)
[![codecov](https://codecov.io/gh/argoproj/argo-cd/branch/master/graph/badge.svg)](https://codecov.io/gh/argoproj/argo-cd)
[![Release Version](https://img.shields.io/github/v/release/argoproj/argo-cd?label=argo-cd)](https://github.com/argoproj/argo-cd/releases/latest)

SECURITY_CONTACTS Normal file

@@ -0,0 +1,8 @@
# Defined below are the security contacts for this repo.
#
# DO NOT REPORT SECURITY VULNERABILITIES DIRECTLY TO THESE NAMES, FOLLOW THE
# INSTRUCTIONS AT https://argoproj.github.io/argo-cd/security_considerations/#reporting-vulnerabilities
alexmt
edlee2121
jessesuen


@@ -6,6 +6,7 @@ Currently, the following organizations are **officially** using Argo CD:
1. [127Labs](https://127labs.com/)
1. [Adevinta](https://www.adevinta.com/)
1. [AppDirect](https://www.appdirect.com)
1. [ANSTO - Australian Synchrotron](https://www.synchrotron.org.au/)
1. [ARZ Allgemeines Rechenzentrum GmbH ](https://www.arz.at/)
1. [Baloise](https://www.baloise.com)
@@ -22,7 +23,9 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Fave](https://myfave.com)
1. [Future PLC](https://www.futureplc.com/)
1. [GMETRI](https://gmetri.com/)
1. [Healy](https://www.healyworld.net)
1. [hipages](https://hipages.com.au/)
1. [Honestbank](https://honestbank.com)
1. [Intuit](https://www.intuit.com/)
1. [KintoHub](https://www.kintohub.com/)
1. [KompiTech GmbH](https://www.kompitech.com/)
@@ -37,11 +40,13 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Peloton Interactive](https://www.onepeloton.com/)
1. [Pipefy](https://www.pipefy.com/)
1. [Prudential](https://prudential.com.sg)
1. [PUBG](https://www.pubg.com)
1. [Red Hat](https://www.redhat.com/)
1. [Robotinfra](https://www.robotinfra.com)
1. [Riskified](https://www.riskified.com/)
1. [Saildrone](https://www.saildrone.com/)
1. [Saloodo! GmbH](https://www.saloodo.com)
1. [Swissquote](https://github.com/swissquote)
1. [Syncier](https://syncier.com/)
1. [Tesla](https://tesla.com/)
1. [ThousandEyes](https://www.thousandeyes.com/)
@@ -53,6 +58,9 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Universidad Mesoamericana](https://www.umes.edu.gt/)
1. [Viaduct](https://www.viaduct.ai/)
1. [Volvo Cars](https://www.volvocars.com/)
1. [VSHN - The DevOps Company](https://vshn.ch/)
1. [Walkbase](https://www.walkbase.com/)
1. [Whitehat Berlin](https://whitehat.berlin) by Guido Maria Serra +Fenaroli
1. [Yieldlab](https://www.yieldlab.de/)
1. [MTN Group](https://www.mtn.com/)
1. [Moengage](https://www.moengage.com/)

View File

@@ -1 +1 @@
1.5.0
1.6.2

View File

@@ -11,4 +11,4 @@ g = _, _
e = some(where (p.eft == allow)) && !some(where (p.eft == deny))
[matchers]
m = g(r.sub, p.sub) && keyMatch(r.res, p.res) && keyMatch(r.act, p.act) && keyMatch(r.obj, p.obj)
m = g(r.sub, p.sub) && globMatch(r.res, p.res) && globMatch(r.act, p.act) && globMatch(r.obj, p.obj)
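
The matcher change above swaps `keyMatch` (URL-style prefix matching) for `globMatch` (shell-style glob patterns) in the built-in casbin model. A rough illustration of glob semantics using only Go's standard library — `path.Match` is a stand-in here, not casbin's actual matcher, whose pattern dialect may differ in detail:

package main

import (
	"fmt"
	"path"
)

func main() {
	// glob-style matching: '*' matches any run of non-separator characters
	ok, _ := path.Match("applications/*", "applications/guestbook")
	fmt.Println(ok) // true

	// unlike a plain prefix match, a glob wildcard does not cross '/'
	// boundaries, so each segment needs its own wildcard
	ok, _ = path.Match("*/guestbook", "default/guestbook")
	fmt.Println(ok) // true
}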

View File

@@ -1689,6 +1689,36 @@
}
},
"/api/v1/repositories/{repo}": {
"get": {
"tags": [
"RepositoryService"
],
"summary": "Get returns a repository or its credentials",
"operationId": "GetMixin3",
"parameters": [
{
"type": "string",
"name": "repo",
"in": "path",
"required": true
},
{
"type": "boolean",
"format": "boolean",
"description": "Whether to force a cache refresh on repo's connection state.",
"name": "forceRefresh",
"in": "query"
}
],
"responses": {
"200": {
"description": "(empty)",
"schema": {
"$ref": "#/definitions/v1alpha1Repository"
}
}
}
},
"delete": {
"tags": [
"RepositoryService"
@@ -2038,6 +2068,9 @@
"format": "int64",
"title": "expiresIn represents a duration in seconds"
},
"id": {
"type": "string"
},
"name": {
"type": "string"
}
@@ -2141,6 +2174,12 @@
"type": "boolean",
"format": "boolean"
},
"infos": {
"type": "array",
"items": {
"$ref": "#/definitions/v1alpha1Info"
}
},
"manifests": {
"type": "array",
"items": {
@@ -2339,6 +2378,12 @@
"appLabelKey": {
"type": "string"
},
"configManagementPlugins": {
"type": "array",
"items": {
"$ref": "#/definitions/v1alpha1ConfigManagementPlugin"
}
},
"dexConfig": {
"$ref": "#/definitions/clusterDexConfig"
},
@@ -2351,6 +2396,12 @@
"kustomizeOptions": {
"$ref": "#/definitions/v1alpha1KustomizeOptions"
},
"kustomizeVersions": {
"type": "array",
"items": {
"type": "string"
}
},
"oidcConfig": {
"$ref": "#/definitions/clusterOIDCConfig"
},
@@ -2425,6 +2476,9 @@
"format": "int64",
"title": "expiresIn represents a duration in seconds"
},
"id": {
"type": "string"
},
"project": {
"type": "string"
},
@@ -2463,7 +2517,7 @@
},
"repocredsRepoCredsResponse": {
"type": "object",
"title": "RepoCredsResponse is a resonse to most repository credentials requests"
"title": "RepoCredsResponse is a response to most repository credentials requests"
},
"repositoryAppInfo": {
"type": "object",
@@ -2567,7 +2621,7 @@
"$ref": "#/definitions/repositoryKsonnetEnvironmentDestination"
},
"k8sVersion": {
"description": "KubernetesVersion is the kubernetes version the targetted cluster is running on.",
"description": "KubernetesVersion is the kubernetes version the targeted cluster is running on.",
"type": "string"
},
"name": {
@@ -3166,6 +3220,13 @@
"$ref": "#/definitions/v1GroupKind"
}
},
"namespaceResourceWhitelist": {
"type": "array",
"title": "NamespaceResourceWhitelist contains list of whitelisted namespace level resources",
"items": {
"$ref": "#/definitions/v1GroupKind"
}
},
"orphanedResources": {
"$ref": "#/definitions/v1alpha1OrphanedResourcesMonitorSettings"
},
@@ -3402,6 +3463,10 @@
"nameSuffix": {
"type": "string",
"title": "NameSuffix is a suffix appended to resources for kustomize apps"
},
"version": {
"type": "string",
"title": "Version contains optional Kustomize version"
}
}
},
@@ -3623,6 +3688,24 @@
}
}
},
"v1alpha1Command": {
"type": "object",
"title": "Command holds binary path and arguments list",
"properties": {
"args": {
"type": "array",
"items": {
"type": "string"
}
},
"command": {
"type": "array",
"items": {
"type": "string"
}
}
}
},
"v1alpha1ComparedTo": {
"type": "object",
"title": "ComparedTo contains application source and target which was used for resources comparison",
@@ -3635,6 +3718,21 @@
}
}
},
"v1alpha1ConfigManagementPlugin": {
"type": "object",
"title": "ConfigManagementPlugin contains config management plugin configuration",
"properties": {
"generate": {
"$ref": "#/definitions/v1alpha1Command"
},
"init": {
"$ref": "#/definitions/v1alpha1Command"
},
"name": {
"type": "string"
}
}
},
"v1alpha1ConnectionState": {
"type": "object",
"title": "ConnectionState contains information about remote resource connection state",
@@ -3743,6 +3841,9 @@
"iat": {
"type": "string",
"format": "int64"
},
"id": {
"type": "string"
}
}
},
@@ -3762,6 +3863,18 @@
}
}
},
"v1alpha1KnownTypeField": {
"type": "object",
"title": "KnownTypeField contains mapping between CRD field and known Kubernetes type",
"properties": {
"field": {
"type": "string"
},
"type": {
"type": "string"
}
}
},
"v1alpha1KsonnetParameter": {
"type": "object",
"title": "KsonnetParameter is a ksonnet component parameter",
@@ -3781,6 +3894,10 @@
"type": "object",
"title": "KustomizeOptions are options for kustomize to use when building manifests",
"properties": {
"binaryPath": {
"type": "string",
"title": "BinaryPath holds optional path to kustomize binary"
},
"buildOptions": {
"type": "string",
"title": "BuildOptions is a string of build parameters to use when calling `kustomize build`"
@@ -3791,6 +3908,12 @@
"description": "Operation contains requested operation parameters.",
"type": "object",
"properties": {
"info": {
"type": "array",
"items": {
"$ref": "#/definitions/v1alpha1Info"
}
},
"initiatedBy": {
"$ref": "#/definitions/v1alpha1OperationInitiator"
},
@@ -4236,6 +4359,12 @@
},
"ignoreDifferences": {
"type": "string"
},
"knownTypeFields": {
"type": "array",
"items": {
"$ref": "#/definitions/v1alpha1KnownTypeField"
}
}
}
},
@@ -4473,7 +4602,7 @@
},
"syncOptions": {
"type": "array",
"title": "Options allow youe to specify whole app sync-options",
"title": "Options allow you to specify whole app sync-options",
"items": {
"type": "string"
}
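
The new `GET /api/v1/repositories/{repo}` operation above accepts an optional `forceRefresh` query parameter. A hedged sketch of calling it from Go — the path and parameter come from the swagger above, while the server address and bearer-token auth are assumptions:

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"net/url"
)

func main() {
	// repository URLs travel as path parameters, so they must be escaped
	repo := url.PathEscape("https://github.com/argoproj/argocd-example-apps")
	endpoint := fmt.Sprintf("https://argocd.example.com/api/v1/repositories/%s?forceRefresh=true", repo)

	req, _ := http.NewRequest(http.MethodGet, endpoint, nil)
	req.Header.Set("Authorization", "Bearer <token>") // auth scheme is an assumption

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body)) // 200 with a v1alpha1Repository document
}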

View File

@@ -6,7 +6,10 @@ import (
"os"
"time"
"github.com/argoproj/gitops-engine/pkg/utils/errors"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
"github.com/argoproj/pkg/stats"
"github.com/go-redis/redis"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"k8s.io/client-go/kubernetes"
@@ -19,12 +22,12 @@ import (
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/controller"
"github.com/argoproj/argo-cd/errors"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned"
"github.com/argoproj/argo-cd/reposerver/apiclient"
cacheutil "github.com/argoproj/argo-cd/util/cache"
appstatecache "github.com/argoproj/argo-cd/util/cache/appstate"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/settings"
)
@@ -44,23 +47,25 @@ func newCommand() *cobra.Command {
selfHealTimeoutSeconds int
statusProcessors int
operationProcessors int
logFormat string
logLevel string
glogLevel int
metricsPort int
kubectlParallelismLimit int64
cacheSrc func() (*appstatecache.Cache, error)
redisClient *redis.Client
)
var command = cobra.Command{
Use: cliName,
Short: "application-controller is a controller to operate on applications CRD",
RunE: func(c *cobra.Command, args []string) error {
cli.SetLogFormat(logFormat)
cli.SetLogLevel(logLevel)
cli.SetGLogLevel(glogLevel)
config, err := clientConfig.ClientConfig()
errors.CheckError(err)
config.QPS = common.K8sClientConfigQPS
config.Burst = common.K8sClientConfigBurst
errors.CheckError(v1alpha1.SetK8SConfigDefaults(config))
kubeClient := kubernetes.NewForConfigOrDie(config)
appClient := appclientset.NewForConfigOrDie(config)
@@ -91,6 +96,7 @@ func newCommand() *cobra.Command {
metricsPort,
kubectlParallelismLimit)
errors.CheckError(err)
cacheutil.CollectMetrics(redisClient, appController.GetMetricsServer())
vers := common.GetVersion()
log.Infof("Application Controller (version: %s, built: %s) starting (namespace: %s)", vers.Version, vers.BuildDate, namespace)
@@ -111,13 +117,15 @@ func newCommand() *cobra.Command {
command.Flags().IntVar(&repoServerTimeoutSeconds, "repo-server-timeout-seconds", 60, "Repo server RPC call timeout seconds.")
command.Flags().IntVar(&statusProcessors, "status-processors", 1, "Number of application status processors")
command.Flags().IntVar(&operationProcessors, "operation-processors", 1, "Number of application operation processors")
command.Flags().StringVar(&logFormat, "logformat", "text", "Set the logging format. One of: text|json")
command.Flags().StringVar(&logLevel, "loglevel", "info", "Set the logging level. One of: debug|info|warn|error")
command.Flags().IntVar(&glogLevel, "gloglevel", 0, "Set the glog logging level")
command.Flags().IntVar(&metricsPort, "metrics-port", common.DefaultPortArgoCDMetrics, "Start metrics server on given port")
command.Flags().IntVar(&selfHealTimeoutSeconds, "self-heal-timeout-seconds", 5, "Specifies timeout between application self heal attempts")
command.Flags().Int64Var(&kubectlParallelismLimit, "kubectl-parallelism-limit", 20, "Number of allowed concurrent kubectl fork/execs. Any value less than 1 means no limit.")
cacheSrc = appstatecache.AddCacheFlagsToCmd(&command)
cacheSrc = appstatecache.AddCacheFlagsToCmd(&command, func(client *redis.Client) {
redisClient = client
})
return &command
}
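
The recurring change across these commands is that `AddCacheFlagsToCmd` now takes a callback that hands back the `*redis.Client` it constructs, so the command can pass the client to `cacheutil.CollectMetrics`. A self-contained sketch of that capture-by-callback pattern; the helper below is illustrative, not the actual argo-cd function:

package main

import (
	"fmt"

	"github.com/go-redis/redis"
	"github.com/spf13/cobra"
)

// addCacheFlags registers cache-related flags and returns a factory; the
// callback lets the caller capture the redis client the factory creates.
func addCacheFlags(cmd *cobra.Command, onCreated func(*redis.Client)) func() *redis.Client {
	addr := cmd.Flags().String("redis", "localhost:6379", "Redis server address")
	return func() *redis.Client {
		client := redis.NewClient(&redis.Options{Addr: *addr})
		onCreated(client)
		return client
	}
}

func main() {
	var (
		redisClient  *redis.Client // captured for metrics registration
		cacheFactory func() *redis.Client
	)
	cmd := &cobra.Command{
		Use: "demo",
		RunE: func(c *cobra.Command, args []string) error {
			_ = cacheFactory() // builds the client and fires the callback
			fmt.Println("captured client:", redisClient != nil)
			return nil
		},
	}
	cacheFactory = addCacheFlags(cmd, func(c *redis.Client) { redisClient = c })
	_ = cmd.Execute()
}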

View File

@@ -7,15 +7,17 @@ import (
"os"
"time"
"github.com/argoproj/gitops-engine/pkg/utils/errors"
"github.com/argoproj/pkg/stats"
"github.com/go-redis/redis"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/errors"
"github.com/argoproj/argo-cd/reposerver"
reposervercache "github.com/argoproj/argo-cd/reposerver/cache"
"github.com/argoproj/argo-cd/reposerver/metrics"
cacheutil "github.com/argoproj/argo-cd/util/cache"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/tls"
)
@@ -27,17 +29,20 @@ const (
func newCommand() *cobra.Command {
var (
logFormat string
logLevel string
parallelismLimit int64
listenPort int
metricsPort int
cacheSrc func() (*reposervercache.Cache, error)
tlsConfigCustomizerSrc func() (tls.ConfigCustomizer, error)
redisClient *redis.Client
)
var command = cobra.Command{
Use: cliName,
Short: "Run argocd-repo-server",
RunE: func(c *cobra.Command, args []string) error {
cli.SetLogFormat(logFormat)
cli.SetLogLevel(logLevel)
tlsConfigCustomizer, err := tlsConfigCustomizerSrc()
@@ -47,6 +52,7 @@ func newCommand() *cobra.Command {
errors.CheckError(err)
metricsServer := metrics.NewMetricsServer()
cacheutil.CollectMetrics(redisClient, metricsServer)
server, err := reposerver.NewServer(metricsServer, cache, tlsConfigCustomizer, parallelismLimit)
errors.CheckError(err)
@@ -67,12 +73,15 @@ func newCommand() *cobra.Command {
},
}
command.Flags().StringVar(&logFormat, "logformat", "text", "Set the logging format. One of: text|json")
command.Flags().StringVar(&logLevel, "loglevel", "info", "Set the logging level. One of: debug|info|warn|error")
command.Flags().Int64Var(&parallelismLimit, "parallelismlimit", 0, "Limit on number of concurrent manifests generate requests. Any value less than 1 means no limit.")
command.Flags().IntVar(&listenPort, "port", common.DefaultPortRepoServer, "Listen on given port for incoming connections")
command.Flags().IntVar(&metricsPort, "metrics-port", common.DefaultPortRepoServerMetrics, "Start metrics server on given port")
tlsConfigCustomizerSrc = tls.AddTLSFlagsToCmd(&command)
cacheSrc = reposervercache.AddCacheFlagsToCmd(&command)
cacheSrc = reposervercache.AddCacheFlagsToCmd(&command, func(client *redis.Client) {
redisClient = client
})
return &command
}

View File

@@ -4,33 +4,57 @@ import (
"context"
"time"
"github.com/argoproj/gitops-engine/pkg/utils/errors"
"github.com/argoproj/pkg/stats"
"github.com/go-redis/redis"
"github.com/spf13/cobra"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
log "github.com/sirupsen/logrus"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/errors"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned"
"github.com/argoproj/argo-cd/reposerver/apiclient"
"github.com/argoproj/argo-cd/server"
servercache "github.com/argoproj/argo-cd/server/cache"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/env"
"github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/tls"
)
const (
failureRetryCountEnv = "ARGOCD_K8S_RETRY_COUNT"
failureRetryPeriodMilliSecondsEnv = "ARGOCD_K8S_RETRY_DURATION_MILLISECONDS"
)
var (
failureRetryCount = 0
failureRetryPeriodMilliSeconds = 100
)
func init() {
failureRetryCount = env.ParseNumFromEnv(failureRetryCountEnv, failureRetryCount, 0, 10)
failureRetryPeriodMilliSeconds = env.ParseNumFromEnv(failureRetryPeriodMilliSecondsEnv, failureRetryPeriodMilliSeconds, 0, 1000)
}
// NewCommand returns a new instance of an argocd command
func NewCommand() *cobra.Command {
var (
redisClient *redis.Client
insecure bool
listenPort int
metricsPort int
logFormat string
logLevel string
glogLevel int
clientConfig clientcmd.ClientConfig
repoServerTimeoutSeconds int
staticAssetsDir string
baseHRef string
rootPath string
repoServerAddress string
dexServerAddress string
disableAuth bool
@@ -43,13 +67,13 @@ func NewCommand() *cobra.Command {
Short: "Run the argocd API server",
Long: "Run the argocd API server",
Run: func(c *cobra.Command, args []string) {
cli.SetLogFormat(logFormat)
cli.SetLogLevel(logLevel)
cli.SetGLogLevel(glogLevel)
config, err := clientConfig.ClientConfig()
errors.CheckError(err)
config.QPS = common.K8sClientConfigQPS
config.Burst = common.K8sClientConfigBurst
errors.CheckError(v1alpha1.SetK8SConfigDefaults(config))
namespace, _, err := clientConfig.Namespace()
errors.CheckError(err)
@@ -60,9 +84,24 @@ func NewCommand() *cobra.Command {
errors.CheckError(err)
kubeclientset := kubernetes.NewForConfigOrDie(config)
appclientset := appclientset.NewForConfigOrDie(config)
appclientsetConfig, err := clientConfig.ClientConfig()
errors.CheckError(err)
errors.CheckError(v1alpha1.SetK8SConfigDefaults(appclientsetConfig))
if failureRetryCount > 0 {
appclientsetConfig = kube.AddFailureRetryWrapper(appclientsetConfig, failureRetryCount, failureRetryPeriodMilliSeconds)
}
appclientset := appclientset.NewForConfigOrDie(appclientsetConfig)
repoclientset := apiclient.NewRepoServerClientset(repoServerAddress, repoServerTimeoutSeconds)
if rootPath != "" {
if baseHRef != "" && baseHRef != rootPath {
log.Warnf("--basehref and --rootpath had conflict: basehref: %s rootpath: %s", baseHRef, rootPath)
}
baseHRef = rootPath
}
argoCDOpts := server.ArgoCDServerOpts{
Insecure: insecure,
ListenPort: listenPort,
@@ -70,6 +109,7 @@ func NewCommand() *cobra.Command {
Namespace: namespace,
StaticAssetsDir: staticAssetsDir,
BaseHRef: baseHRef,
RootPath: rootPath,
KubeClientset: kubeclientset,
AppClientset: appclientset,
RepoClientset: repoclientset,
@@ -78,6 +118,7 @@ func NewCommand() *cobra.Command {
TLSConfigCustomizer: tlsConfigCustomizer,
Cache: cache,
XFrameOptions: frameOptions,
RedisClient: redisClient,
}
stats.RegisterStackDumper()
@@ -98,6 +139,8 @@ func NewCommand() *cobra.Command {
command.Flags().BoolVar(&insecure, "insecure", false, "Run server without TLS")
command.Flags().StringVar(&staticAssetsDir, "staticassets", "", "Static assets directory path")
command.Flags().StringVar(&baseHRef, "basehref", "/", "Value for base href in index.html. Used if Argo CD is running behind reverse proxy under subpath different from /")
command.Flags().StringVar(&rootPath, "rootpath", "", "Used if Argo CD is running behind reverse proxy under subpath different from /")
command.Flags().StringVar(&logFormat, "logformat", "text", "Set the logging format. One of: text|json")
command.Flags().StringVar(&logLevel, "loglevel", "info", "Set the logging level. One of: debug|info|warn|error")
command.Flags().IntVar(&glogLevel, "gloglevel", 0, "Set the glog logging level")
command.Flags().StringVar(&repoServerAddress, "repo-server", common.DefaultRepoServerAddr, "Repo server address")
@@ -109,6 +152,8 @@ func NewCommand() *cobra.Command {
command.Flags().IntVar(&repoServerTimeoutSeconds, "repo-server-timeout-seconds", 60, "Repo server RPC call timeout seconds.")
command.Flags().StringVar(&frameOptions, "x-frame-options", "sameorigin", "Set X-Frame-Options header in HTTP responses to `value`. To disable, set to \"\".")
tlsConfigCustomizerSrc = tls.AddTLSFlagsToCmd(command)
cacheSrc = servercache.AddCacheFlagsToCmd(command)
cacheSrc = servercache.AddCacheFlagsToCmd(command, func(client *redis.Client) {
redisClient = client
})
return command
}
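
The API server now reads its Kubernetes retry knobs from `ARGOCD_K8S_RETRY_COUNT` and `ARGOCD_K8S_RETRY_DURATION_MILLISECONDS` via `env.ParseNumFromEnv`, which clamps values to a [min, max] range. A minimal stdlib approximation of that bounded parse — the real helper lives in `util/env`:

package main

import (
	"fmt"
	"os"
	"strconv"
)

// parseNumFromEnv approximates env.ParseNumFromEnv: it returns the default
// when the variable is unset, malformed, or outside [min, max].
func parseNumFromEnv(name string, def, min, max int) int {
	raw := os.Getenv(name)
	if raw == "" {
		return def
	}
	n, err := strconv.Atoi(raw)
	if err != nil || n < min || n > max {
		return def
	}
	return n
}

func main() {
	retries := parseNumFromEnv("ARGOCD_K8S_RETRY_COUNT", 0, 0, 10)
	fmt.Println("failure retry count:", retries)
}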

View File

@@ -1,8 +1,9 @@
package main
import (
"github.com/argoproj/gitops-engine/pkg/utils/errors"
commands "github.com/argoproj/argo-cd/cmd/argocd-server/commands"
"github.com/argoproj/argo-cd/errors"
// load the gcp plugin (required to authenticate against GKE clusters).
_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"

View File

@@ -1,4 +1,4 @@
package main
package commands
import (
"fmt"
@@ -6,14 +6,14 @@ import (
"path/filepath"
"strings"
"github.com/argoproj/argo-cd/errors"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned"
appclient "github.com/argoproj/argo-cd/pkg/client/clientset/versioned/typed/application/v1alpha1"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/diff"
"github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/gitops-engine/pkg/diff"
"github.com/argoproj/gitops-engine/pkg/utils/errors"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
"github.com/spf13/cobra"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/tools/clientcmd"

View File

@@ -1,4 +1,4 @@
package main
package commands
import (
"testing"

View File

@@ -0,0 +1,546 @@
package commands
import (
"bytes"
"context"
"fmt"
"io/ioutil"
"os"
"reflect"
"sort"
"strconv"
"strings"
"text/tabwriter"
"github.com/argoproj/gitops-engine/pkg/diff"
healthutil "github.com/argoproj/gitops-engine/pkg/health"
"github.com/argoproj/gitops-engine/pkg/utils/errors"
"github.com/ghodss/yaml"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
corev1 "k8s.io/api/core/v1"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/kubernetes/fake"
"k8s.io/client-go/tools/clientcmd"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util/argo/normalizers"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/lua"
"github.com/argoproj/argo-cd/util/settings"
)
type settingsOpts struct {
argocdCMPath string
argocdSecretPath string
loadClusterSettings bool
clientConfig clientcmd.ClientConfig
}
type commandContext interface {
createSettingsManager() (*settings.SettingsManager, error)
}
func collectLogs(callback func()) string {
log.SetLevel(log.DebugLevel)
out := bytes.Buffer{}
log.SetOutput(&out)
defer log.SetLevel(log.FatalLevel)
callback()
return out.String()
}
func setSettingsMeta(obj v1.Object) {
obj.SetNamespace("default")
labels := obj.GetLabels()
if labels == nil {
labels = make(map[string]string)
}
labels["app.kubernetes.io/part-of"] = "argocd"
obj.SetLabels(labels)
}
func (opts *settingsOpts) createSettingsManager() (*settings.SettingsManager, error) {
var argocdCM *corev1.ConfigMap
if opts.argocdCMPath == "" && !opts.loadClusterSettings {
return nil, fmt.Errorf("either --argocd-cm-path must be provided or --load-cluster-settings must be set to true")
} else if opts.argocdCMPath == "" {
realClientset, ns, err := opts.getK8sClient()
if err != nil {
return nil, err
}
argocdCM, err = realClientset.CoreV1().ConfigMaps(ns).Get(common.ArgoCDConfigMapName, v1.GetOptions{})
if err != nil {
return nil, err
}
} else {
data, err := ioutil.ReadFile(opts.argocdCMPath)
if err != nil {
return nil, err
}
err = yaml.Unmarshal(data, &argocdCM)
if err != nil {
return nil, err
}
}
setSettingsMeta(argocdCM)
var argocdSecret *corev1.Secret
if opts.argocdSecretPath != "" {
data, err := ioutil.ReadFile(opts.argocdSecretPath)
if err != nil {
return nil, err
}
err = yaml.Unmarshal(data, &argocdSecret)
if err != nil {
return nil, err
}
setSettingsMeta(argocdSecret)
} else if opts.loadClusterSettings {
realClientset, ns, err := opts.getK8sClient()
if err != nil {
return nil, err
}
argocdSecret, err = realClientset.CoreV1().Secrets(ns).Get(common.ArgoCDSecretName, v1.GetOptions{})
if err != nil {
return nil, err
}
} else {
argocdSecret = &corev1.Secret{
ObjectMeta: v1.ObjectMeta{
Name: common.ArgoCDSecretName,
},
Data: map[string][]byte{
"admin.password": []byte("test"),
"server.secretkey": []byte("test"),
},
}
}
setSettingsMeta(argocdSecret)
clientset := fake.NewSimpleClientset(argocdSecret, argocdCM)
manager := settings.NewSettingsManager(context.Background(), clientset, "default")
errors.CheckError(manager.ResyncInformers())
return manager, nil
}
func (opts *settingsOpts) getK8sClient() (*kubernetes.Clientset, string, error) {
namespace, _, err := opts.clientConfig.Namespace()
if err != nil {
return nil, "", err
}
restConfig, err := opts.clientConfig.ClientConfig()
if err != nil {
return nil, "", err
}
realClientset, err := kubernetes.NewForConfig(restConfig)
if err != nil {
return nil, "", err
}
return realClientset, namespace, nil
}
func NewSettingsCommand() *cobra.Command {
var (
opts settingsOpts
)
var command = &cobra.Command{
Use: "settings",
Short: "Provides set of commands for settings validation and troubleshooting",
Run: func(c *cobra.Command, args []string) {
c.HelpFunc()(c, args)
},
}
log.SetLevel(log.FatalLevel)
command.AddCommand(NewValidateSettingsCommand(&opts))
command.AddCommand(NewResourceOverridesCommand(&opts))
opts.clientConfig = cli.AddKubectlFlagsToCmd(command)
command.PersistentFlags().StringVar(&opts.argocdCMPath, "argocd-cm-path", "", "Path to local argocd-cm.yaml file")
command.PersistentFlags().StringVar(&opts.argocdSecretPath, "argocd-secret-path", "", "Path to local argocd-secret.yaml file")
command.PersistentFlags().BoolVar(&opts.loadClusterSettings, "load-cluster-settings", false,
"Indicates that config map and secret should be loaded from cluster unless local file path is provided")
return command
}
type settingValidator func(manager *settings.SettingsManager) (string, error)
func joinValidators(validators ...settingValidator) settingValidator {
return func(manager *settings.SettingsManager) (string, error) {
var errorStrs []string
var summaries []string
for i := range validators {
summary, err := validators[i](manager)
if err != nil {
errorStrs = append(errorStrs, err.Error())
}
if summary != "" {
summaries = append(summaries, summary)
}
}
if len(errorStrs) > 0 {
return "", fmt.Errorf("%s", strings.Join(errorStrs, "\n"))
}
return strings.Join(summaries, "\n"), nil
}
}
var validatorsByGroup = map[string]settingValidator{
"general": joinValidators(func(manager *settings.SettingsManager) (string, error) {
general, err := manager.GetSettings()
if err != nil {
return "", err
}
ssoProvider := ""
if general.DexConfig != "" {
if _, err := settings.UnmarshalDexConfig(general.DexConfig); err != nil {
return "", fmt.Errorf("invalid dex.config: %v", err)
}
ssoProvider = "Dex"
} else if general.OIDCConfigRAW != "" {
if _, err := settings.UnmarshalOIDCConfig(general.OIDCConfigRAW); err != nil {
return "", fmt.Errorf("invalid oidc.config: %v", err)
}
ssoProvider = "OIDC"
}
var summary string
if ssoProvider != "" {
summary = fmt.Sprintf("%s is configured", ssoProvider)
if general.URL == "" {
summary = summary + " ('url' field is missing)"
}
} else {
summary = "SSO is not configured"
}
return summary, nil
}, func(manager *settings.SettingsManager) (string, error) {
_, err := manager.GetAppInstanceLabelKey()
return "", err
}, func(manager *settings.SettingsManager) (string, error) {
_, err := manager.GetHelp()
return "", err
}, func(manager *settings.SettingsManager) (string, error) {
_, err := manager.GetGoogleAnalytics()
return "", err
}),
"plugins": func(manager *settings.SettingsManager) (string, error) {
plugins, err := manager.GetConfigManagementPlugins()
if err != nil {
return "", err
}
return fmt.Sprintf("%d plugins", len(plugins)), nil
},
"kustomize": func(manager *settings.SettingsManager) (string, error) {
opts, err := manager.GetKustomizeSettings()
if err != nil {
return "", err
}
summary := "default options"
if opts.BuildOptions != "" {
summary = opts.BuildOptions
}
if len(opts.Versions) > 0 {
summary = fmt.Sprintf("%s (%d versions)", summary, len(opts.Versions))
}
return summary, err
},
"repositories": joinValidators(func(manager *settings.SettingsManager) (string, error) {
repos, err := manager.GetRepositories()
if err != nil {
return "", err
}
return fmt.Sprintf("%d repositories", len(repos)), nil
}, func(manager *settings.SettingsManager) (string, error) {
creds, err := manager.GetRepositoryCredentials()
if err != nil {
return "", err
}
return fmt.Sprintf("%d repository credentials", len(creds)), nil
}),
"accounts": func(manager *settings.SettingsManager) (string, error) {
accounts, err := manager.GetAccounts()
if err != nil {
return "", err
}
return fmt.Sprintf("%d accounts", len(accounts)), nil
},
"resource-overrides": func(manager *settings.SettingsManager) (string, error) {
overrides, err := manager.GetResourceOverrides()
if err != nil {
return "", err
}
return fmt.Sprintf("%d resource overrides", len(overrides)), nil
},
}
func NewValidateSettingsCommand(cmdCtx commandContext) *cobra.Command {
var (
groups []string
)
var allGroups []string
for k := range validatorsByGroup {
allGroups = append(allGroups, k)
}
sort.Slice(allGroups, func(i, j int) bool {
return allGroups[i] < allGroups[j]
})
var command = &cobra.Command{
Use: "validate",
Short: "Validate settings",
Long: "Validates settings specified in 'argocd-cm' ConfigMap and 'argocd-secret' Secret",
Example: `
# Validates all settings in the specified YAML file
argocd-util settings validate --argocd-cm-path ./argocd-cm.yaml
# Validates accounts and plugins settings in the Kubernetes cluster of the current kubeconfig context
argocd-util settings validate --group accounts --group plugins --load-cluster-settings`,
Run: func(c *cobra.Command, args []string) {
settingsManager, err := cmdCtx.createSettingsManager()
errors.CheckError(err)
if len(groups) == 0 {
groups = allGroups
}
for i, group := range groups {
validator := validatorsByGroup[group]
logs := collectLogs(func() {
summary, err := validator(settingsManager)
if err != nil {
_, _ = fmt.Fprintf(os.Stdout, "❌ %s\n", group)
_, _ = fmt.Fprintf(os.Stdout, "%s\n", err.Error())
} else {
_, _ = fmt.Fprintf(os.Stdout, "✅ %s\n", group)
if summary != "" {
_, _ = fmt.Fprintf(os.Stdout, "%s\n", summary)
}
}
})
if logs != "" {
_, _ = fmt.Fprintf(os.Stdout, "%s\n", logs)
}
if i != len(groups)-1 {
_, _ = fmt.Fprintf(os.Stdout, "\n")
}
}
},
}
command.Flags().StringArrayVar(&groups, "group", nil, fmt.Sprintf(
"Optional list of setting groups that have to be validated ( one of: %s)", strings.Join(allGroups, ", ")))
return command
}
func NewResourceOverridesCommand(cmdCtx commandContext) *cobra.Command {
var command = &cobra.Command{
Use: "resource-overrides",
Short: "Troubleshoot resource overrides",
Run: func(c *cobra.Command, args []string) {
c.HelpFunc()(c, args)
},
}
command.AddCommand(NewResourceIgnoreDifferencesCommand(cmdCtx))
command.AddCommand(NewResourceActionListCommand(cmdCtx))
command.AddCommand(NewResourceActionRunCommand(cmdCtx))
command.AddCommand(NewResourceHealthCommand(cmdCtx))
return command
}
func executeResourceOverrideCommand(cmdCtx commandContext, args []string, callback func(res unstructured.Unstructured, override v1alpha1.ResourceOverride, overrides map[string]v1alpha1.ResourceOverride)) {
data, err := ioutil.ReadFile(args[0])
errors.CheckError(err)
res := unstructured.Unstructured{}
errors.CheckError(yaml.Unmarshal(data, &res))
settingsManager, err := cmdCtx.createSettingsManager()
errors.CheckError(err)
overrides, err := settingsManager.GetResourceOverrides()
errors.CheckError(err)
gvk := res.GroupVersionKind()
key := gvk.Kind
if gvk.Group != "" {
key = fmt.Sprintf("%s/%s", gvk.Group, gvk.Kind)
}
override, hasOverride := overrides[key]
if !hasOverride {
_, _ = fmt.Printf("No overrides configured for '%s/%s'\n", gvk.Group, gvk.Kind)
return
}
callback(res, override, overrides)
}
func NewResourceIgnoreDifferencesCommand(cmdCtx commandContext) *cobra.Command {
var command = &cobra.Command{
Use: "ignore-differences RESOURCE_YAML_PATH",
Short: "Renders fields excluded from diffing",
Long: "Renders ignored fields using the 'ignoreDifferences' setting specified in the 'resource.customizations' field of 'argocd-cm' ConfigMap",
Example: `
argocd-util settings resource-overrides ignore-differences ./deploy.yaml --argocd-cm-path ./argocd-cm.yaml`,
Run: func(c *cobra.Command, args []string) {
if len(args) < 1 {
c.HelpFunc()(c, args)
os.Exit(1)
}
executeResourceOverrideCommand(cmdCtx, args, func(res unstructured.Unstructured, override v1alpha1.ResourceOverride, overrides map[string]v1alpha1.ResourceOverride) {
gvk := res.GroupVersionKind()
if override.IgnoreDifferences == "" {
_, _ = fmt.Printf("Ignore differences are not configured for '%s/%s'\n", gvk.Group, gvk.Kind)
return
}
normalizer, err := normalizers.NewIgnoreNormalizer(nil, overrides)
errors.CheckError(err)
normalizedRes := res.DeepCopy()
logs := collectLogs(func() {
errors.CheckError(normalizer.Normalize(normalizedRes))
})
if logs != "" {
_, _ = fmt.Println(logs)
}
if reflect.DeepEqual(&res, normalizedRes) {
_, _ = fmt.Printf("No fields are ignored by ignoreDifferences settings: \n%s\n", override.IgnoreDifferences)
return
}
_, _ = fmt.Printf("Following fields are ignored:\n\n")
_ = diff.PrintDiff(res.GetName(), &res, normalizedRes)
})
},
}
return command
}
func NewResourceHealthCommand(cmdCtx commandContext) *cobra.Command {
var command = &cobra.Command{
Use: "health RESOURCE_YAML_PATH",
Short: "Assess resource health",
Long: "Assess resource health using the lua script configured in the 'resource.customizations' field of 'argocd-cm' ConfigMap",
Example: `
argocd-util settings resource-overrides health ./deploy.yaml --argocd-cm-path ./argocd-cm.yaml`,
Run: func(c *cobra.Command, args []string) {
if len(args) < 1 {
c.HelpFunc()(c, args)
os.Exit(1)
}
executeResourceOverrideCommand(cmdCtx, args, func(res unstructured.Unstructured, override v1alpha1.ResourceOverride, overrides map[string]v1alpha1.ResourceOverride) {
gvk := res.GroupVersionKind()
if override.HealthLua == "" {
_, _ = fmt.Printf("Health script is not configured for '%s/%s'\n", gvk.Group, gvk.Kind)
return
}
resHealth, err := healthutil.GetResourceHealth(&res, lua.ResourceHealthOverrides(overrides))
errors.CheckError(err)
_, _ = fmt.Printf("STATUS: %s\n", resHealth.Status)
_, _ = fmt.Printf("MESSAGE: %s\n", resHealth.Message)
})
},
}
return command
}
func NewResourceActionListCommand(cmdCtx commandContext) *cobra.Command {
var command = &cobra.Command{
Use: "list-actions RESOURCE_YAML_PATH",
Short: "List available resource actions",
Long: "Lists actions available for the given resource using the Lua discovery script configured in the 'resource.customizations' field of 'argocd-cm' ConfigMap",
Example: `
argocd-util settings resource-overrides list-actions /tmp/deploy.yaml --argocd-cm-path ./argocd-cm.yaml`,
Run: func(c *cobra.Command, args []string) {
if len(args) < 1 {
c.HelpFunc()(c, args)
os.Exit(1)
}
executeResourceOverrideCommand(cmdCtx, args, func(res unstructured.Unstructured, override v1alpha1.ResourceOverride, overrides map[string]v1alpha1.ResourceOverride) {
gvk := res.GroupVersionKind()
if override.Actions == "" {
_, _ = fmt.Printf("Actions are not configured for '%s/%s'\n", gvk.Group, gvk.Kind)
return
}
luaVM := lua.VM{ResourceOverrides: overrides}
discoveryScript, err := luaVM.GetResourceActionDiscovery(&res)
errors.CheckError(err)
availableActions, err := luaVM.ExecuteResourceActionDiscovery(&res, discoveryScript)
errors.CheckError(err)
sort.Slice(availableActions, func(i, j int) bool {
return availableActions[i].Name < availableActions[j].Name
})
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
_, _ = fmt.Fprintf(w, "NAME\tENABLED\n")
for _, action := range availableActions {
_, _ = fmt.Fprintf(w, "%s\t%s\n", action.Name, strconv.FormatBool(action.Disabled))
}
_ = w.Flush()
})
},
}
return command
}
func NewResourceActionRunCommand(cmdCtx commandContext) *cobra.Command {
var command = &cobra.Command{
Use: "run-action RESOURCE_YAML_PATH ACTION",
Aliases: []string{"action"},
Short: "Executes resource action",
Long: "Executes resource action using the lua script configured in the 'resource.customizations' field of 'argocd-cm' ConfigMap and outputs updated fields",
Example: `
argocd-util settings resource-overrides run-action /tmp/deploy.yaml restart --argocd-cm-path ./argocd-cm.yaml`,
Run: func(c *cobra.Command, args []string) {
if len(args) < 2 {
c.HelpFunc()(c, args)
os.Exit(1)
}
action := args[1]
executeResourceOverrideCommand(cmdCtx, args, func(res unstructured.Unstructured, override v1alpha1.ResourceOverride, overrides map[string]v1alpha1.ResourceOverride) {
gvk := res.GroupVersionKind()
if override.Actions == "" {
_, _ = fmt.Printf("Actions are not configured for '%s/%s'\n", gvk.Group, gvk.Kind)
return
}
luaVM := lua.VM{ResourceOverrides: overrides}
action, err := luaVM.GetResourceAction(&res, action)
errors.CheckError(err)
modifiedRes, err := luaVM.ExecuteResourceAction(&res, action.ActionLua)
errors.CheckError(err)
if reflect.DeepEqual(&res, modifiedRes) {
_, _ = fmt.Printf("No fields had been changed by action: \n%s\n", action.Name)
return
}
_, _ = fmt.Printf("Following fields have been changed:\n\n")
_ = diff.PrintDiff(res.GetName(), &res, modifiedRes)
})
},
}
return command
}
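
`joinValidators` above folds several validators into one: summaries are concatenated, and errors from all validators are collected and reported together instead of short-circuiting on the first failure. A stripped-down version of the same combinator, runnable on its own:

package main

import (
	"fmt"
	"strings"
)

// validator mirrors the settingValidator shape above: a summary or an error.
type validator func() (string, error)

// join runs every validator, concatenating their summaries and folding any
// errors into a single combined error, as joinValidators does in the file above.
func join(vs ...validator) validator {
	return func() (string, error) {
		var summaries, errs []string
		for _, v := range vs {
			s, err := v()
			if err != nil {
				errs = append(errs, err.Error())
			}
			if s != "" {
				summaries = append(summaries, s)
			}
		}
		if len(errs) > 0 {
			return "", fmt.Errorf("%s", strings.Join(errs, "\n"))
		}
		return strings.Join(summaries, "\n"), nil
	}
}

func main() {
	v := join(
		func() (string, error) { return "2 plugins", nil },
		func() (string, error) { return "", fmt.Errorf("invalid dex.config") },
	)
	if _, err := v(); err != nil {
		fmt.Println("❌", err) // all validator errors surface in one report
	}
}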

View File

@@ -0,0 +1,383 @@
package commands
import (
"bytes"
"context"
"fmt"
"io"
"io/ioutil"
"os"
"testing"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/util/settings"
utils "github.com/argoproj/gitops-engine/pkg/utils/io"
"github.com/stretchr/testify/assert"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes/fake"
)
func captureStdout(callback func()) (string, error) {
oldStdout := os.Stdout
oldStderr := os.Stderr
r, w, err := os.Pipe()
if err != nil {
return "", err
}
os.Stdout = w
defer func() {
os.Stdout = oldStdout
os.Stderr = oldStderr
}()
callback()
utils.Close(w)
data, err := ioutil.ReadAll(r)
if err != nil {
return "", err
}
return string(data), err
}
func newSettingsManager(data map[string]string) *settings.SettingsManager {
clientset := fake.NewSimpleClientset(&v1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Namespace: "default",
Name: common.ArgoCDConfigMapName,
Labels: map[string]string{
"app.kubernetes.io/part-of": "argocd",
},
},
Data: data,
}, &v1.Secret{
ObjectMeta: metav1.ObjectMeta{
Namespace: "default",
Name: common.ArgoCDSecretName,
},
Data: map[string][]byte{
"admin.password": []byte("test"),
"server.secretkey": []byte("test"),
},
})
return settings.NewSettingsManager(context.Background(), clientset, "default")
}
type fakeCmdContext struct {
mgr *settings.SettingsManager
// nolint:unused,structcheck
out bytes.Buffer
}
func newCmdContext(data map[string]string) *fakeCmdContext {
return &fakeCmdContext{mgr: newSettingsManager(data)}
}
func (ctx *fakeCmdContext) createSettingsManager() (*settings.SettingsManager, error) {
return ctx.mgr, nil
}
type validatorTestCase struct {
validator string
data map[string]string
containsSummary string
containsError string
}
func TestCreateSettingsManager(t *testing.T) {
f, closer, err := tempFile(`apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-cm
data:
url: https://myargocd.com`)
if !assert.NoError(t, err) {
return
}
defer utils.Close(closer)
opts := settingsOpts{argocdCMPath: f}
settingsManager, err := opts.createSettingsManager()
if !assert.NoError(t, err) {
return
}
argoCDSettings, err := settingsManager.GetSettings()
if !assert.NoError(t, err) {
return
}
assert.Equal(t, "https://myargocd.com", argoCDSettings.URL)
}
func TestValidator(t *testing.T) {
testCases := map[string]validatorTestCase{
"General_SSOIsNotConfigured": {
validator: "general", containsSummary: "SSO is not configured",
},
"General_DexInvalidConfig": {
validator: "general",
data: map[string]string{"dex.config": "abcdefg"},
containsError: "invalid dex.config",
},
"General_OIDCConfigured": {
validator: "general",
data: map[string]string{
"url": "https://myargocd.com",
"oidc.config": `
name: Okta
issuer: https://dev-123456.oktapreview.com
clientID: aaaabbbbccccddddeee
clientSecret: aaaabbbbccccddddeee`,
},
containsSummary: "OIDC is configured",
},
"General_DexConfiguredMissingURL": {
validator: "general",
data: map[string]string{
"dex.config": `connectors:
- type: github
name: GitHub
config:
clientID: aabbccddeeff00112233
clientSecret: aabbccddeeff00112233`,
},
containsSummary: "Dex is configured ('url' field is missing)",
},
"Plugins_ValidConfig": {
validator: "plugins",
data: map[string]string{
"configManagementPlugins": `[{"name": "test1"}, {"name": "test2"}]`,
},
containsSummary: "2 plugins",
},
"Kustomize_ModifiedOptions": {
validator: "kustomize",
containsSummary: "default options",
},
"Kustomize_DefaultOptions": {
validator: "kustomize",
data: map[string]string{
"kustomize.buildOptions": "updated-options (2 versions)",
"kustomize.versions.v123": "binary-123",
"kustomize.versions.v321": "binary-321",
},
containsSummary: "updated-options",
},
"Repositories": {
validator: "repositories",
data: map[string]string{
"repositories": `
- url: https://github.com/argoproj/my-private-repository1
- url: https://github.com/argoproj/my-private-repository2`,
},
containsSummary: "2 repositories",
},
"Accounts": {
validator: "accounts",
data: map[string]string{
"accounts.user1": "apiKey, login",
"accounts.user2": "login",
"accounts.user3": "apiKey",
},
containsSummary: "4 accounts",
},
"ResourceOverrides": {
validator: "resource-overrides",
data: map[string]string{
"resource.customizations": `
admissionregistration.k8s.io/MutatingWebhookConfiguration:
ignoreDifferences: |
jsonPointers:
- /webhooks/0/clientConfig/caBundle`,
},
containsSummary: "1 resource overrides",
},
}
for name := range testCases {
tc := testCases[name]
t.Run(name, func(t *testing.T) {
validator, ok := validatorsByGroup[tc.validator]
if !assert.True(t, ok) {
return
}
summary, err := validator(newSettingsManager(tc.data))
if tc.containsSummary != "" {
assert.NoError(t, err)
assert.Contains(t, summary, tc.containsSummary)
} else if tc.containsError != "" {
if assert.Error(t, err) {
assert.Contains(t, err.Error(), tc.containsError)
}
}
})
}
}
const (
testDeploymentYAML = `apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 0`
)
func tempFile(content string) (string, io.Closer, error) {
f, err := ioutil.TempFile("", "*.yaml")
if err != nil {
return "", nil, err
}
_, err = f.Write([]byte(content))
if err != nil {
_ = os.Remove(f.Name())
return "", nil, err
}
return f.Name(), utils.NewCloser(func() error {
return os.Remove(f.Name())
}), nil
}
func TestValidateSettingsCommand_NoErrors(t *testing.T) {
cmd := NewValidateSettingsCommand(newCmdContext(map[string]string{}))
out, err := captureStdout(func() {
err := cmd.Execute()
assert.NoError(t, err)
})
assert.NoError(t, err)
for k := range validatorsByGroup {
assert.Contains(t, out, fmt.Sprintf("✅ %s", k))
}
}
func TestResourceOverrideIgnoreDifferences(t *testing.T) {
f, closer, err := tempFile(testDeploymentYAML)
if !assert.NoError(t, err) {
return
}
defer utils.Close(closer)
t.Run("NoOverridesConfigured", func(t *testing.T) {
cmd := NewResourceOverridesCommand(newCmdContext(map[string]string{}))
out, err := captureStdout(func() {
cmd.SetArgs([]string{"ignore-differences", f})
err := cmd.Execute()
assert.NoError(t, err)
})
assert.NoError(t, err)
assert.Contains(t, out, "No overrides configured")
})
t.Run("DataIgnored", func(t *testing.T) {
cmd := NewResourceOverridesCommand(newCmdContext(map[string]string{
"resource.customizations": `apps/Deployment:
ignoreDifferences: |
jsonPointers:
- /spec`}))
out, err := captureStdout(func() {
cmd.SetArgs([]string{"ignore-differences", f})
err := cmd.Execute()
assert.NoError(t, err)
})
assert.NoError(t, err)
assert.Contains(t, out, "< spec:")
})
}
func TestResourceOverrideHealth(t *testing.T) {
f, closer, err := tempFile(testDeploymentYAML)
if !assert.NoError(t, err) {
return
}
defer utils.Close(closer)
t.Run("NoHealthAssessment", func(t *testing.T) {
cmd := NewResourceOverridesCommand(newCmdContext(map[string]string{
"resource.customizations": `apps/Deployment: {}`}))
out, err := captureStdout(func() {
cmd.SetArgs([]string{"health", f})
err := cmd.Execute()
assert.NoError(t, err)
})
assert.NoError(t, err)
assert.Contains(t, out, "Health script is not configured")
})
t.Run("HealthAssessmentConfigured", func(t *testing.T) {
cmd := NewResourceOverridesCommand(newCmdContext(map[string]string{
"resource.customizations": `apps/Deployment:
health.lua: |
return { status = "Progressing" }
`}))
out, err := captureStdout(func() {
cmd.SetArgs([]string{"health", f})
err := cmd.Execute()
assert.NoError(t, err)
})
assert.NoError(t, err)
assert.Contains(t, out, "Progressing")
})
}
func TestResourceOverrideAction(t *testing.T) {
f, closer, err := tempFile(testDeploymentYAML)
if !assert.NoError(t, err) {
return
}
defer utils.Close(closer)
t.Run("NoActions", func(t *testing.T) {
cmd := NewResourceOverridesCommand(newCmdContext(map[string]string{
"resource.customizations": `apps/Deployment: {}`}))
out, err := captureStdout(func() {
cmd.SetArgs([]string{"run-action", f, "test"})
err := cmd.Execute()
assert.NoError(t, err)
})
assert.NoError(t, err)
assert.Contains(t, out, "Actions are not configured")
})
t.Run("ActionConfigured", func(t *testing.T) {
cmd := NewResourceOverridesCommand(newCmdContext(map[string]string{
"resource.customizations": `apps/Deployment:
actions: |
discovery.lua: |
actions = {}
actions["resume"] = {["disabled"] = false}
actions["restart"] = {["disabled"] = false}
return actions
definitions:
- name: test
action.lua: |
obj.metadata.labels["test"] = 'updated'
return obj
`}))
out, err := captureStdout(func() {
cmd.SetArgs([]string{"run-action", f, "test"})
err := cmd.Execute()
assert.NoError(t, err)
})
assert.NoError(t, err)
assert.Contains(t, out, "test: updated")
out, err = captureStdout(func() {
cmd.SetArgs([]string{"list-actions", f})
err := cmd.Execute()
assert.NoError(t, err)
})
assert.NoError(t, err)
assert.Contains(t, out, `NAME ENABLED
restart false
resume false
`)
})
}

View File

@@ -11,6 +11,8 @@ import (
"reflect"
"syscall"
"github.com/argoproj/gitops-engine/pkg/utils/errors"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
"github.com/ghodss/yaml"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
@@ -24,12 +26,11 @@ import (
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/clientcmd"
"github.com/argoproj/argo-cd/cmd/argocd-util/commands"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/errors"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/db"
"github.com/argoproj/argo-cd/util/dex"
"github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/settings"
// load the gcp plugin (required to authenticate against GKE clusters).
@@ -55,7 +56,8 @@ var (
// NewCommand returns a new instance of an argocd command
func NewCommand() *cobra.Command {
var (
logLevel string
logFormat string
logLevel string
)
var command = &cobra.Command{
@@ -72,8 +74,10 @@ func NewCommand() *cobra.Command {
command.AddCommand(NewImportCommand())
command.AddCommand(NewExportCommand())
command.AddCommand(NewClusterConfig())
command.AddCommand(NewProjectsCommand())
command.AddCommand(commands.NewProjectsCommand())
command.AddCommand(commands.NewSettingsCommand())
command.Flags().StringVar(&logFormat, "logformat", "text", "Set the logging format. One of: text|json")
command.Flags().StringVar(&logLevel, "loglevel", "info", "Set the logging level. One of: debug|info|warn|error")
return command
}
@@ -632,7 +636,7 @@ func NewClusterConfig() *cobra.Command {
cluster, err := db.NewDB(namespace, settings.NewSettingsManager(context.Background(), kubeclientset, namespace), kubeclientset).GetCluster(context.Background(), serverUrl)
errors.CheckError(err)
err = kube.WriteKubeConfig(cluster.RESTConfig(), namespace, output)
err = kube.WriteKubeConfig(cluster.RawRestConfig(), namespace, output)
errors.CheckError(err)
},
}

View File

@@ -27,6 +27,8 @@ connectors:
type: ldap
grpc:
addr: 0.0.0.0:5557
telemetry:
http: 0.0.0.0:5558
issuer: https://argocd.example.com/api/dex
oauth2:
skipApprovalScreen: true
@@ -81,6 +83,8 @@ staticClients:
- http://localhost
storage:
type: memory
telemetry:
http: 0.0.0.0:5558
web:
http: 0.0.0.0:5556
`
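
Both dex configurations now enable the telemetry listener on port 5558. A quick Go probe of that listener — the `/healthz` path is an assumption based on dex's documented telemetry endpoints, not something this diff guarantees:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// hit the telemetry listener enabled by the config above
	resp, err := http.Get("http://localhost:5558/healthz") // path is an assumption
	if err != nil {
		fmt.Println("dex telemetry unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("dex healthz:", resp.Status)
}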

View File

@@ -10,20 +10,21 @@ import (
"text/tabwriter"
"time"
"github.com/argoproj/gitops-engine/pkg/utils/errors"
"github.com/argoproj/gitops-engine/pkg/utils/io"
timeutil "github.com/argoproj/pkg/time"
"github.com/ghodss/yaml"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"golang.org/x/crypto/ssh/terminal"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
accountpkg "github.com/argoproj/argo-cd/pkg/apiclient/account"
"github.com/argoproj/argo-cd/pkg/apiclient/session"
"github.com/argoproj/argo-cd/server/rbacpolicy"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/localconfig"
sessionutil "github.com/argoproj/argo-cd/util/session"
)
func NewAccountCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
@@ -59,14 +60,20 @@ func NewAccountUpdatePasswordCommand(clientOpts *argocdclient.ClientOptions) *co
c.HelpFunc()(c, args)
os.Exit(1)
}
acdClient := argocdclient.NewClientOrDie(clientOpts)
conn, usrIf := acdClient.NewAccountClientOrDie()
defer io.Close(conn)
if currentPassword == "" {
userInfo := getCurrentAccount(acdClient)
if userInfo.Iss == sessionutil.SessionManagerClaimsIssuer && currentPassword == "" {
fmt.Print("*** Enter current password: ")
password, err := terminal.ReadPassword(int(os.Stdin.Fd()))
errors.CheckError(err)
currentPassword = string(password)
fmt.Print("\n")
}
if newPassword == "" {
var err error
newPassword, err = cli.ReadAndConfirmPassword()
@@ -79,16 +86,12 @@ func NewAccountUpdatePasswordCommand(clientOpts *argocdclient.ClientOptions) *co
Name: account,
}
acdClient := argocdclient.NewClientOrDie(clientOpts)
conn, usrIf := acdClient.NewAccountClientOrDie()
defer util.Close(conn)
ctx := context.Background()
_, err := usrIf.UpdatePassword(ctx, &updatePasswordRequest)
errors.CheckError(err)
fmt.Printf("Password updated\n")
if account == "" || account == getCurrentAccount(acdClient) {
if account == "" || account == userInfo.Username {
// Get a new JWT token after updating the password
localCfg, err := localconfig.ReadLocalConfig(clientOpts.ConfigPath)
errors.CheckError(err)
@@ -128,7 +131,7 @@ func NewAccountGetUserInfoCommand(clientOpts *argocdclient.ClientOptions) *cobra
}
conn, client := argocdclient.NewClientOrDie(clientOpts).NewSessionClientOrDie()
defer util.Close(conn)
defer io.Close(conn)
ctx := context.Background()
response, err := client.GetUserInfo(ctx, &session.GetUserInfoRequest{})
@@ -171,11 +174,11 @@ argocd account can-i sync applications '*'
argocd account can-i update projects 'default'
# Can I create a cluster?
argocd account can-i create cluster '*'
argocd account can-i create clusters '*'
Actions: %v
Resources: %v
`, rbacpolicy.Resources, rbacpolicy.Actions),
`, rbacpolicy.Actions, rbacpolicy.Resources),
Run: func(c *cobra.Command, args []string) {
if len(args) != 3 {
c.HelpFunc()(c, args)
@@ -183,7 +186,7 @@ Resources: %v
}
conn, client := argocdclient.NewClientOrDie(clientOpts).NewAccountClientOrDie()
defer util.Close(conn)
defer io.Close(conn)
ctx := context.Background()
response, err := client.CanI(ctx, &accountpkg.CanIRequest{
@@ -223,7 +226,7 @@ func NewAccountListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comman
Run: func(c *cobra.Command, args []string) {
conn, client := argocdclient.NewClientOrDie(clientOpts).NewAccountClientOrDie()
defer util.Close(conn)
defer io.Close(conn)
ctx := context.Background()
response, err := client.ListAccounts(ctx, &accountpkg.ListAccountRequest{})
@@ -246,12 +249,12 @@ func NewAccountListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comman
return cmd
}
func getCurrentAccount(clientset argocdclient.Client) string {
func getCurrentAccount(clientset argocdclient.Client) session.GetUserInfoResponse {
conn, client := clientset.NewSessionClientOrDie()
defer util.Close(conn)
defer io.Close(conn)
userInfo, err := client.GetUserInfo(context.Background(), &session.GetUserInfoRequest{})
errors.CheckError(err)
return userInfo.Username
return *userInfo
}
func NewAccountGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
@@ -271,11 +274,11 @@ argocd account get --account <account-name>`,
clientset := argocdclient.NewClientOrDie(clientOpts)
if account == "" {
account = getCurrentAccount(clientset)
account = getCurrentAccount(clientset).Username
}
conn, client := clientset.NewAccountClientOrDie()
defer util.Close(conn)
defer io.Close(conn)
acc, err := client.GetAccount(context.Background(), &accountpkg.GetAccountRequest{Name: account})
@@ -328,6 +331,7 @@ func NewAccountGenerateTokenCommand(clientOpts *argocdclient.ClientOptions) *cob
var (
account string
expiresIn string
id string
)
cmd := &cobra.Command{
Use: "generate-token",
@@ -341,15 +345,16 @@ argocd account generate-token --account <account-name>`,
clientset := argocdclient.NewClientOrDie(clientOpts)
conn, client := clientset.NewAccountClientOrDie()
defer util.Close(conn)
defer io.Close(conn)
if account == "" {
account = getCurrentAccount(clientset)
account = getCurrentAccount(clientset).Username
}
expiresIn, err := timeutil.ParseDuration(expiresIn)
errors.CheckError(err)
response, err := client.CreateToken(context.Background(), &accountpkg.CreateTokenRequest{
Name: account,
ExpiresIn: int64(expiresIn.Seconds()),
Id: id,
})
errors.CheckError(err)
fmt.Println(response.Token)
@@ -357,6 +362,7 @@ argocd account generate-token --account <account-name>`,
}
cmd.Flags().StringVarP(&account, "account", "a", "", "Account name. Defaults to the current account.")
cmd.Flags().StringVarP(&expiresIn, "expires-in", "e", "0s", "Duration before the token will expire. (Default: No expiration)")
cmd.Flags().StringVar(&id, "id", "", "Optional token id. Falls back to a generated uuid if no value is specified.")
return cmd
}
@@ -381,9 +387,9 @@ argocd account generate-token --account <account-name>`,
clientset := argocdclient.NewClientOrDie(clientOpts)
conn, client := clientset.NewAccountClientOrDie()
defer util.Close(conn)
defer io.Close(conn)
if account == "" {
account = getCurrentAccount(clientset)
account = getCurrentAccount(clientset).Username
}
_, err := client.DeleteToken(context.Background(), &accountpkg.DeleteTokenRequest{Name: account, Id: id})
errors.CheckError(err)

View File

@@ -6,6 +6,7 @@ import (
"encoding/json"
"fmt"
"io"
"io/ioutil"
"net/url"
"os"
"reflect"
@@ -15,6 +16,13 @@ import (
"text/tabwriter"
"time"
"github.com/argoproj/gitops-engine/pkg/diff"
"github.com/argoproj/gitops-engine/pkg/health"
"github.com/argoproj/gitops-engine/pkg/sync/hook"
"github.com/argoproj/gitops-engine/pkg/sync/ignore"
"github.com/argoproj/gitops-engine/pkg/utils/errors"
argoio "github.com/argoproj/gitops-engine/pkg/utils/io"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
"github.com/ghodss/yaml"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
@@ -26,7 +34,6 @@ import (
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/controller"
"github.com/argoproj/argo-cd/errors"
"github.com/argoproj/argo-cd/pkg/apiclient"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
applicationpkg "github.com/argoproj/argo-cd/pkg/apiclient/application"
@@ -36,15 +43,11 @@ import (
argoappv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
repoapiclient "github.com/argoproj/argo-cd/reposerver/apiclient"
"github.com/argoproj/argo-cd/reposerver/repository"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/argo"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/config"
"github.com/argoproj/argo-cd/util/diff"
"github.com/argoproj/argo-cd/util/git"
"github.com/argoproj/argo-cd/util/hook"
"github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/resource/ignore"
argokube "github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/templates"
"github.com/argoproj/argo-cd/util/text/label"
)
@@ -171,10 +174,11 @@ func NewApplicationCreateCommand(clientOpts *argocdclient.ClientOptions) *cobra.
}
conn, appIf := argocdClient.NewApplicationClientOrDie()
defer util.Close(conn)
defer argoio.Close(conn)
appCreateRequest := applicationpkg.ApplicationCreateRequest{
Application: app,
Upsert: &upsert,
Validate: &appOpts.validate,
}
created, err := appIf.Create(context.Background(), &appCreateRequest)
errors.CheckError(err)
@@ -200,6 +204,18 @@ func setLabels(app *argoappv1.Application, labels []string) {
app.SetLabels(mapLabels)
}
func getInfos(infos []string) []*argoappv1.Info {
mapInfos, err := label.Parse(infos)
errors.CheckError(err)
sliceInfos := make([]*argoappv1.Info, len(mapInfos))
i := 0
for key, element := range mapInfos {
sliceInfos[i] = &argoappv1.Info{Name: key, Value: element}
i++
}
return sliceInfos
}
func getRefreshType(refresh bool, hardRefresh bool) *string {
if hardRefresh {
refreshType := string(argoappv1.RefreshTypeHard)
@@ -233,13 +249,13 @@ func NewApplicationGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
}
acdClient := argocdclient.NewClientOrDie(clientOpts)
conn, appIf := acdClient.NewApplicationClientOrDie()
defer util.Close(conn)
defer argoio.Close(conn)
appName := args[0]
app, err := appIf.Get(context.Background(), &applicationpkg.ApplicationQuery{Name: &appName, Refresh: getRefreshType(refresh, hardRefresh)})
errors.CheckError(err)
pConn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(pConn)
defer argoio.Close(pConn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: app.Spec.Project})
errors.CheckError(err)
@@ -358,7 +374,7 @@ func printAppSummaryTable(app *argoappv1.Application, appURL string, windows *ar
syncStatusStr += fmt.Sprintf(" (%s)", app.Status.Sync.Revision[0:7])
}
fmt.Printf(printOpFmtStr, "Sync Status:", syncStatusStr)
healthStr := app.Status.Health.Status
healthStr := string(app.Status.Health.Status)
if app.Status.Health.Message != "" {
healthStr = fmt.Sprintf("%s (%s)", app.Status.Health.Status, app.Status.Health.Message)
}
@@ -449,7 +465,7 @@ func NewApplicationSetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
appName := args[0]
argocdClient := argocdclient.NewClientOrDie(clientOpts)
conn, appIf := argocdClient.NewApplicationClientOrDie()
defer util.Close(conn)
defer argoio.Close(conn)
app, err := appIf.Get(ctx, &applicationpkg.ApplicationQuery{Name: &appName})
errors.CheckError(err)
visited := setAppSpecOptions(c.Flags(), &app.Spec, &appOpts)
@@ -460,8 +476,9 @@ func NewApplicationSetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
}
setParameterOverrides(app, appOpts.parameters)
_, err = appIf.UpdateSpec(ctx, &applicationpkg.ApplicationUpdateSpecRequest{
Name: &app.Name,
Spec: app.Spec,
Name: &app.Name,
Spec: app.Spec,
Validate: &appOpts.validate,
})
errors.CheckError(err)
},
@@ -490,6 +507,18 @@ func setAppSpecOptions(flags *pflag.FlagSet, spec *argoappv1.ApplicationSpec, ap
spec.RevisionHistoryLimit = &i
case "values":
setHelmOpt(&spec.Source, helmOpts{valueFiles: appOpts.valuesFiles})
case "values-literal-file":
var data []byte
// read uri
parsedURL, err := url.ParseRequestURI(appOpts.values)
if err != nil || !(parsedURL.Scheme == "http" || parsedURL.Scheme == "https") {
data, err = ioutil.ReadFile(appOpts.values)
} else {
data, err = config.ReadRemoteFile(appOpts.values)
}
errors.CheckError(err)
setHelmOpt(&spec.Source, helmOpts{values: string(data)})
case "release-name":
setHelmOpt(&spec.Source, helmOpts{releaseName: appOpts.releaseName})
case "helm-set":
@@ -514,6 +543,8 @@ func setAppSpecOptions(flags *pflag.FlagSet, spec *argoappv1.ApplicationSpec, ap
setKustomizeOpt(&spec.Source, kustomizeOpts{nameSuffix: appOpts.nameSuffix})
case "kustomize-image":
setKustomizeOpt(&spec.Source, kustomizeOpts{images: appOpts.kustomizeImages})
case "kustomize-version":
setKustomizeOpt(&spec.Source, kustomizeOpts{version: appOpts.kustomizeVersion})
case "jsonnet-tla-str":
setJsonnetOpt(&spec.Source, appOpts.jsonnetTlaStr, false)
case "jsonnet-tla-code":
@@ -565,7 +596,7 @@ func setAppSpecOptions(flags *pflag.FlagSet, spec *argoappv1.ApplicationSpec, ap
}
if flags.Changed("self-heal") {
if spec.SyncPolicy == nil || spec.SyncPolicy.Automated == nil {
log.Fatal("Cannot set --self-helf: application not configured with automatic sync")
log.Fatal("Cannot set --self-heal: application not configured with automatic sync")
}
spec.SyncPolicy.Automated.SelfHeal = appOpts.selfHeal
}
@@ -589,12 +620,14 @@ type kustomizeOpts struct {
namePrefix string
nameSuffix string
images []string
version string
}
func setKustomizeOpt(src *argoappv1.ApplicationSource, opts kustomizeOpts) {
if src.Kustomize == nil {
src.Kustomize = &argoappv1.ApplicationSourceKustomize{}
}
src.Kustomize.Version = opts.version
src.Kustomize.NamePrefix = opts.namePrefix
src.Kustomize.NameSuffix = opts.nameSuffix
for _, image := range opts.images {
@@ -607,6 +640,7 @@ func setKustomizeOpt(src *argoappv1.ApplicationSource, opts kustomizeOpts) {
type helmOpts struct {
valueFiles []string
values string
releaseName string
helmSets []string
helmSetStrings []string
@@ -620,6 +654,9 @@ func setHelmOpt(src *argoappv1.ApplicationSource, opts helmOpts) {
if len(opts.valueFiles) > 0 {
src.Helm.ValueFiles = opts.valueFiles
}
if len(opts.values) > 0 {
src.Helm.Values = opts.values
}
if opts.releaseName != "" {
src.Helm.ReleaseName = opts.releaseName
}
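
For illustration, how the new values field flows through the setter above; a package-local sketch (the variable names are hypothetical):

src := argoappv1.ApplicationSource{}
setHelmOpt(&src, helmOpts{values: "replicaCount: 2"})
// src.Helm is allocated on first use; src.Helm.Values == "replicaCount: 2"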
@@ -678,6 +715,7 @@ type appOptions struct {
destNamespace string
parameters []string
valuesFiles []string
values string
releaseName string
helmSets []string
helmSetStrings []string
@@ -696,6 +734,8 @@ type appOptions struct {
jsonnetExtVarStr []string
jsonnetExtVarCode []string
kustomizeImages []string
kustomizeVersion string
validate bool
}
func addAppFlags(command *cobra.Command, opts *appOptions) {
@@ -709,6 +749,7 @@ func addAppFlags(command *cobra.Command, opts *appOptions) {
command.Flags().StringVar(&opts.destNamespace, "dest-namespace", "", "K8s target namespace (overrides the namespace specified in the ksonnet app.yaml)")
command.Flags().StringArrayVarP(&opts.parameters, "parameter", "p", []string{}, "set a parameter override (e.g. -p guestbook=image=example/guestbook:latest)")
command.Flags().StringArrayVar(&opts.valuesFiles, "values", []string{}, "Helm values file(s) to use")
command.Flags().StringVar(&opts.values, "values-literal-file", "", "Filename or URL to import as a literal Helm values block")
command.Flags().StringVar(&opts.releaseName, "release-name", "", "Helm release-name")
command.Flags().StringArrayVar(&opts.helmSets, "helm-set", []string{}, "Helm set values on the command line (can be repeated to set several values: --helm-set key1=val1 --helm-set key2=val2)")
command.Flags().StringArrayVar(&opts.helmSetStrings, "helm-set-string", []string{}, "Helm set STRING values on the command line (can be repeated to set several values: --helm-set-string key1=val1 --helm-set-string key2=val2)")
@@ -720,6 +761,7 @@ func addAppFlags(command *cobra.Command, opts *appOptions) {
command.Flags().BoolVar(&opts.selfHeal, "self-heal", false, "Set self healing when sync is automated")
command.Flags().StringVar(&opts.namePrefix, "nameprefix", "", "Kustomize nameprefix")
command.Flags().StringVar(&opts.nameSuffix, "namesuffix", "", "Kustomize namesuffix")
command.Flags().StringVar(&opts.kustomizeVersion, "kustomize-version", "", "Kustomize version")
command.Flags().BoolVar(&opts.directoryRecurse, "directory-recurse", false, "Recurse directory")
command.Flags().StringVar(&opts.configManagementPlugin, "config-management-plugin", "", "Config management plugin name")
command.Flags().StringArrayVar(&opts.jsonnetTlaStr, "jsonnet-tla-str", []string{}, "Jsonnet top level string arguments")
@@ -727,30 +769,80 @@ func addAppFlags(command *cobra.Command, opts *appOptions) {
command.Flags().StringArrayVar(&opts.jsonnetExtVarStr, "jsonnet-ext-var-str", []string{}, "Jsonnet string ext var")
command.Flags().StringArrayVar(&opts.jsonnetExtVarCode, "jsonnet-ext-var-code", []string{}, "Jsonnet ext var")
command.Flags().StringArrayVar(&opts.kustomizeImages, "kustomize-image", []string{}, "Kustomize images (e.g. --kustomize-image node:8.15.0 --kustomize-image mysql=mariadb,alpine@sha256:24a0c4b4a4c0eb97a1aabb8e29f18e917d05abfe1b7a7c07857230879ce7d3d)")
command.Flags().BoolVar(&opts.validate, "validate", true, "Validation of repo and cluster")
}
// NewApplicationUnsetCommand returns a new instance of an `argocd app unset` command
func NewApplicationUnsetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
parameters []string
valuesFiles []string
parameters []string
valuesLiteral bool
valuesFiles []string
nameSuffix bool
namePrefix bool
kustomizeVersion bool
kustomizeImages []string
appOpts appOptions
)
var command = &cobra.Command{
Use: "unset APPNAME -p COMPONENT=PARAM",
Use: "unset APPNAME parameters",
Short: "Unset application parameters",
Example: `  # Unset the Kustomize image override
argocd app unset my-app --kustomize-image=alpine
# Unset the Kustomize name suffix
argocd app unset my-app --namesuffix
# Unset parameter override
argocd app unset my-app -p COMPONENT=PARAM`,
Run: func(c *cobra.Command, args []string) {
if len(args) != 1 || (len(parameters) == 0 && len(valuesFiles) == 0) {
if len(args) != 1 {
c.HelpFunc()(c, args)
os.Exit(1)
}
appName := args[0]
conn, appIf := argocdclient.NewClientOrDie(clientOpts).NewApplicationClientOrDie()
defer util.Close(conn)
defer argoio.Close(conn)
app, err := appIf.Get(context.Background(), &applicationpkg.ApplicationQuery{Name: &appName})
errors.CheckError(err)
updated := false
if app.Spec.Source.Kustomize != nil {
if namePrefix {
updated = true
app.Spec.Source.Kustomize.NamePrefix = ""
}
if nameSuffix {
updated = true
app.Spec.Source.Kustomize.NameSuffix = ""
}
if kustomizeVersion {
updated = true
app.Spec.Source.Kustomize.Version = ""
}
for _, kustomizeImage := range kustomizeImages {
for i, item := range app.Spec.Source.Kustomize.Images {
if argoappv1.KustomizeImage(kustomizeImage).Match(item) {
updated = true
// remove the matched image at index i
a := app.Spec.Source.Kustomize.Images
copy(a[i:], a[i+1:]) // Shift a[i+1:] left one index.
a[len(a)-1] = "" // Erase last element (write zero value).
a = a[:len(a)-1] // Truncate slice.
app.Spec.Source.Kustomize.Images = a
}
}
}
}
if app.Spec.Source.Ksonnet != nil {
if len(parameters) == 0 && len(valuesFiles) == 0 {
c.HelpFunc()(c, args)
os.Exit(1)
}
for _, paramStr := range parameters {
parts := strings.SplitN(paramStr, "=", 2)
if len(parts) != 2 {
@@ -767,6 +859,10 @@ func NewApplicationUnsetCommand(clientOpts *argocdclient.ClientOptions) *cobra.C
}
}
if app.Spec.Source.Helm != nil {
if len(parameters) == 0 && len(valuesFiles) == 0 && !valuesLiteral {
c.HelpFunc()(c, args)
os.Exit(1)
}
for _, paramStr := range parameters {
helmParams := app.Spec.Source.Helm.Parameters
for i, p := range helmParams {
@@ -777,31 +873,41 @@ func NewApplicationUnsetCommand(clientOpts *argocdclient.ClientOptions) *cobra.C
}
}
}
specValueFiles := app.Spec.Source.Helm.ValueFiles
if valuesLiteral {
app.Spec.Source.Helm.Values = ""
updated = true
}
for _, valuesFile := range valuesFiles {
specValueFiles := app.Spec.Source.Helm.ValueFiles
for i, vf := range specValueFiles {
if vf == valuesFile {
specValueFiles = append(specValueFiles[0:i], specValueFiles[i+1:]...)
app.Spec.Source.Helm.ValueFiles = append(specValueFiles[0:i], specValueFiles[i+1:]...)
updated = true
break
}
}
}
setHelmOpt(&app.Spec.Source, helmOpts{valueFiles: specValueFiles})
if !updated {
return
}
}
setAppSpecOptions(c.Flags(), &app.Spec, &appOpts)
_, err = appIf.UpdateSpec(context.Background(), &applicationpkg.ApplicationUpdateSpecRequest{
Name: &app.Name,
Spec: app.Spec,
Name: &app.Name,
Spec: app.Spec,
Validate: &appOpts.validate,
})
errors.CheckError(err)
},
}
command.Flags().StringArrayVarP(&parameters, "parameter", "p", []string{}, "unset a parameter override (e.g. -p guestbook=image)")
command.Flags().StringArrayVar(&valuesFiles, "values", []string{}, "unset one or more helm values files")
command.Flags().StringArrayVarP(&parameters, "parameter", "p", []string{}, "Unset a parameter override (e.g. -p guestbook=image)")
command.Flags().StringArrayVar(&valuesFiles, "values", []string{}, "Unset one or more Helm values files")
command.Flags().BoolVar(&valuesLiteral, "values-literal", false, "Unset literal Helm values block")
command.Flags().BoolVar(&nameSuffix, "namesuffix", false, "Kustomize namesuffix")
command.Flags().BoolVar(&namePrefix, "nameprefix", false, "Kustomize nameprefix")
command.Flags().BoolVar(&kustomizeVersion, "kustomize-version", false, "Kustomize version")
command.Flags().StringArrayVar(&kustomizeImages, "kustomize-image", []string{}, "Kustomize image names (e.g. --kustomize-image node --kustomize-image mysql)")
return command
}
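
Both removal loops above (Kustomize images and Helm values files) delete an element from a slice in place. A standalone sketch of the copy-and-truncate idiom used for the image list:

package main

import "fmt"

// removeAt deletes the element at index i while preserving order.
func removeAt(a []string, i int) []string {
	copy(a[i:], a[i+1:]) // shift the tail left by one
	a[len(a)-1] = ""     // clear the duplicated last element
	return a[:len(a)-1]  // truncate
}

func main() {
	images := []string{"node:8.15.0", "alpine", "mysql"}
	fmt.Println(removeAt(images, 1)) // [node:8.15.0 mysql]
}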
@@ -831,8 +937,9 @@ func liveObjects(resources []*argoappv1.ResourceDiff) ([]*unstructured.Unstructu
return objs, nil
}
func getLocalObjects(app *argoappv1.Application, local, appLabelKey, kubeVersion string, kustomizeOptions *argoappv1.KustomizeOptions) []*unstructured.Unstructured {
manifestStrings := getLocalObjectsString(app, local, appLabelKey, kubeVersion, kustomizeOptions)
func getLocalObjects(app *argoappv1.Application, local, appLabelKey, kubeVersion string, kustomizeOptions *argoappv1.KustomizeOptions,
configManagementPlugins []*argoappv1.ConfigManagementPlugin) []*unstructured.Unstructured {
manifestStrings := getLocalObjectsString(app, local, appLabelKey, kubeVersion, kustomizeOptions, configManagementPlugins)
objs := make([]*unstructured.Unstructured, len(manifestStrings))
for i := range manifestStrings {
obj := unstructured.Unstructured{}
@@ -843,7 +950,8 @@ func getLocalObjects(app *argoappv1.Application, local, appLabelKey, kubeVersion
return objs
}
func getLocalObjectsString(app *argoappv1.Application, local, appLabelKey, kubeVersion string, kustomizeOptions *argoappv1.KustomizeOptions) []string {
func getLocalObjectsString(app *argoappv1.Application, local, appLabelKey, kubeVersion string, kustomizeOptions *argoappv1.KustomizeOptions,
configManagementPlugins []*argoappv1.ConfigManagementPlugin) []string {
res, err := repository.GenerateManifests(local, "/", app.Spec.Source.TargetRevision, &repoapiclient.ManifestRequest{
Repo: &argoappv1.Repository{Repo: app.Spec.Source.RepoURL},
AppLabelKey: appLabelKey,
@@ -852,7 +960,8 @@ func getLocalObjectsString(app *argoappv1.Application, local, appLabelKey, kubeV
ApplicationSource: &app.Spec.Source,
KustomizeOptions: kustomizeOptions,
KubeVersion: kubeVersion,
})
Plugins: configManagementPlugins,
}, true)
errors.CheckError(err)
return res.Manifests
@@ -864,7 +973,7 @@ type resourceInfoProvider struct {
// Infer whether an object is namespaced from the corresponding live objects list: if the corresponding live object has a namespace, the target object is namespaced too.
// If the live object is missing, it does not matter whether the target is namespaced.
func (p *resourceInfoProvider) IsNamespaced(server string, gk schema.GroupKind) (bool, error) {
func (p *resourceInfoProvider) IsNamespaced(gk schema.GroupKind) (bool, error) {
return p.namespacedByGk[gk], nil
}
@@ -876,7 +985,7 @@ func groupLocalObjs(localObs []*unstructured.Unstructured, liveObjs []*unstructu
namespacedByGk[schema.GroupKind{Group: key.Group, Kind: key.Kind}] = key.Namespace != ""
}
}
localObs, _, err := controller.DeduplicateTargetObjects("", appNamespace, localObs, &resourceInfoProvider{namespacedByGk: namespacedByGk})
localObs, _, err := controller.DeduplicateTargetObjects(appNamespace, localObs, &resourceInfoProvider{namespacedByGk: namespacedByGk})
errors.CheckError(err)
objByKey := make(map[kube.ResourceKey]*unstructured.Unstructured)
for i := range localObs {
@@ -908,7 +1017,7 @@ func NewApplicationDiffCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
clientset := argocdclient.NewClientOrDie(clientOpts)
conn, appIf := clientset.NewApplicationClientOrDie()
defer util.Close(conn)
defer argoio.Close(conn)
appName := args[0]
app, err := appIf.Get(context.Background(), &applicationpkg.ApplicationQuery{Name: &appName, Refresh: getRefreshType(refresh, hardRefresh)})
errors.CheckError(err)
@@ -923,16 +1032,16 @@ func NewApplicationDiffCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
}, 0)
conn, settingsIf := clientset.NewSettingsClientOrDie()
defer util.Close(conn)
defer argoio.Close(conn)
argoSettings, err := settingsIf.Get(context.Background(), &settingspkg.SettingsQuery{})
errors.CheckError(err)
if local != "" {
conn, clusterIf := clientset.NewClusterClientOrDie()
defer util.Close(conn)
defer argoio.Close(conn)
cluster, err := clusterIf.Get(context.Background(), &clusterpkg.ClusterQuery{Server: app.Spec.Destination.Server})
errors.CheckError(err)
localObjs := groupLocalObjs(getLocalObjects(app, local, argoSettings.AppLabelKey, cluster.ServerVersion, argoSettings.KustomizeOptions), liveObjs, app.Spec.Destination.Namespace)
localObjs := groupLocalObjs(getLocalObjects(app, local, argoSettings.AppLabelKey, cluster.ServerVersion, argoSettings.KustomizeOptions, argoSettings.ConfigManagementPlugins), liveObjs, app.Spec.Destination.Namespace)
for _, res := range resources.Items {
var live = &unstructured.Unstructured{}
err := json.Unmarshal([]byte(res.NormalizedLiveState), &live)
@@ -946,7 +1055,7 @@ func NewApplicationDiffCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
}
if local, ok := localObjs[key]; ok || live != nil {
if local != nil && !kube.IsCRD(local) {
err = kube.SetAppInstanceLabel(local, argoSettings.AppLabelKey, appName)
err = argokube.SetAppInstanceLabel(local, argoSettings.AppLabelKey, appName)
errors.CheckError(err)
}
@@ -1009,7 +1118,7 @@ func NewApplicationDiffCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
normalizer, err := argo.NewDiffNormalizer(app.Spec.IgnoreDifferences, overrides)
errors.CheckError(err)
diffRes, err := diff.Diff(item.target, item.live, normalizer)
diffRes, err := diff.Diff(item.target, item.live, normalizer, diff.GetDefaultDiffOptions())
errors.CheckError(err)
if diffRes.Modified || item.target == nil || item.live == nil {
@@ -1017,9 +1126,9 @@ func NewApplicationDiffCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
var live *unstructured.Unstructured
var target *unstructured.Unstructured
if item.target != nil && item.live != nil {
target = item.live
live = &unstructured.Unstructured{}
err = json.Unmarshal(diffRes.PredictedLive, live)
target = &unstructured.Unstructured{}
live = item.live
err = json.Unmarshal(diffRes.PredictedLive, target)
errors.CheckError(err)
} else {
live = item.live
@@ -1027,7 +1136,7 @@ func NewApplicationDiffCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
}
foundDiffs = true
_ = diff.PrintDiff(item.key.Name, target, live)
_ = diff.PrintDiff(item.key.Name, live, target)
}
}
if foundDiffs {
@@ -1056,7 +1165,7 @@ func NewApplicationDeleteCommand(clientOpts *argocdclient.ClientOptions) *cobra.
os.Exit(1)
}
conn, appIf := argocdclient.NewClientOrDie(clientOpts).NewApplicationClientOrDie()
defer util.Close(conn)
defer argoio.Close(conn)
for _, appName := range args {
appDeleteReq := applicationpkg.ApplicationDeleteRequest{
Name: &appName,
@@ -1128,7 +1237,7 @@ func NewApplicationListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
argocd app list -l app.kubernetes.io/instance=my-app`,
Run: func(c *cobra.Command, args []string) {
conn, appIf := argocdclient.NewClientOrDie(clientOpts).NewApplicationClientOrDie()
defer util.Close(conn)
defer argoio.Close(conn)
apps, err := appIf.List(context.Background(), &applicationpkg.ApplicationQuery{Selector: selector})
errors.CheckError(err)
appList := apps.Items
@@ -1252,7 +1361,7 @@ func NewApplicationWaitCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
appNames := args
acdClient := argocdclient.NewClientOrDie(clientOpts)
closer, appIf := acdClient.NewApplicationClientOrDie()
defer util.Close(closer)
defer argoio.Close(closer)
if selector != "" {
list, err := appIf.List(context.Background(), &applicationpkg.ApplicationQuery{Selector: selector})
errors.CheckError(err)
@@ -1298,6 +1407,7 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
force bool
async bool
local string
infos []string
)
var command = &cobra.Command{
Use: "sync [APPNAME... | -l selector]",
@@ -1322,7 +1432,7 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
}
acdClient := argocdclient.NewClientOrDie(clientOpts)
conn, appIf := acdClient.NewApplicationClientOrDie()
defer util.Close(conn)
defer argoio.Close(conn)
selectedLabels, err := label.Parse(labels)
errors.CheckError(err)
@@ -1388,14 +1498,14 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
conn, settingsIf := acdClient.NewSettingsClientOrDie()
argoSettings, err := settingsIf.Get(context.Background(), &settingspkg.SettingsQuery{})
errors.CheckError(err)
util.Close(conn)
argoio.Close(conn)
conn, clusterIf := acdClient.NewClusterClientOrDie()
defer util.Close(conn)
defer argoio.Close(conn)
cluster, err := clusterIf.Get(context.Background(), &clusterpkg.ClusterQuery{Server: app.Spec.Destination.Server})
errors.CheckError(err)
util.Close(conn)
localObjsStrings = getLocalObjectsString(app, local, argoSettings.AppLabelKey, cluster.ServerVersion, argoSettings.KustomizeOptions)
argoio.Close(conn)
localObjsStrings = getLocalObjectsString(app, local, argoSettings.AppLabelKey, cluster.ServerVersion, argoSettings.KustomizeOptions, argoSettings.ConfigManagementPlugins)
}
syncReq := applicationpkg.ApplicationSyncRequest{
@@ -1405,6 +1515,7 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
Resources: selectedResources,
Prune: prune,
Manifests: localObjsStrings,
Infos: getInfos(infos),
}
switch strategy {
case "apply":
@@ -1424,15 +1535,15 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
app, err := waitOnApplicationStatus(acdClient, appName, timeout, false, false, true, false, selectedResources)
errors.CheckError(err)
// Only get resources to be pruned if sync was application-wide
if len(selectedResources) == 0 {
pruningRequired := app.Status.OperationState.SyncResult.Resources.PruningRequired()
if pruningRequired > 0 {
log.Fatalf("%d resources require pruning", pruningRequired)
}
if !app.Status.OperationState.Phase.Successful() && !dryRun {
os.Exit(1)
if !dryRun {
if !app.Status.OperationState.Phase.Successful() {
log.Fatalf("Operation has completed with phase: %s", app.Status.OperationState.Phase)
} else if len(selectedResources) == 0 && app.Status.Sync.Status != argoappv1.SyncStatusCodeSynced {
// Only get resources to be pruned if sync was application-wide and final status is not synced
pruningRequired := app.Status.OperationState.SyncResult.Resources.PruningRequired()
if pruningRequired > 0 {
log.Fatalf("%d resources require pruning", pruningRequired)
}
}
}
}
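
Condensed, the reworked exit logic above reads: skip all checks on a dry run; fail when the operation phase is unsuccessful; and only when the sync was application-wide and the final status is still not synced, fail if any resources require pruning. A hedged restatement as a package-local helper (the name is hypothetical):

func checkSyncOutcome(dryRun, phaseOK, appWide, synced bool, pruningRequired int) error {
	if dryRun {
		return nil // dry runs never fail these checks
	}
	if !phaseOK {
		return fmt.Errorf("operation has completed with a failed phase")
	}
	if appWide && !synced && pruningRequired > 0 {
		return fmt.Errorf("%d resources require pruning", pruningRequired)
	}
	return nil
}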
@@ -1444,12 +1555,13 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
command.Flags().StringVar(&revision, "revision", "", "Sync to a specific revision. Preserves parameter overrides")
command.Flags().StringArrayVar(&resources, "resource", []string{}, fmt.Sprintf("Sync only specific resources as GROUP%sKIND%sNAME. Fields may be blank. This option may be specified repeatedly", resourceFieldDelimiter, resourceFieldDelimiter))
command.Flags().StringVarP(&selector, "selector", "l", "", "Sync apps that match this label")
command.Flags().StringArrayVar(&labels, "label", []string{}, fmt.Sprintf("Sync only specific resources with a label. This option may be specified repeatedly."))
command.Flags().StringArrayVar(&labels, "label", []string{}, "Sync only specific resources with a label. This option may be specified repeatedly.")
command.Flags().UintVar(&timeout, "timeout", defaultCheckTimeoutSeconds, "Time out after this many seconds")
command.Flags().StringVar(&strategy, "strategy", "", "Sync strategy (one of: apply|hook)")
command.Flags().BoolVar(&force, "force", false, "Use a force apply")
command.Flags().BoolVar(&async, "async", false, "Do not wait for application to sync before continuing")
command.Flags().StringVar(&local, "local", "", "Path to a local directory. When this flag is present no git queries will be made")
command.Flags().StringArrayVar(&infos, "info", []string{}, "A list of key-value pairs to record for the sync process. These infos will be persisted in the app.")
return command
}
@@ -1510,7 +1622,7 @@ func getResourceStates(app *argoappv1.Application, selectedResources []argoappv1
if resource, ok := resourceByKey[key]; ok && res.HookType == "" {
health = ""
if resource.Health != nil {
health = resource.Health.Status
health = string(resource.Health.Status)
}
sync = string(resource.Status)
}
@@ -1531,7 +1643,7 @@ func getResourceStates(app *argoappv1.Application, selectedResources []argoappv1
res := resourceByKey[resKey]
health := ""
if res.Health != nil {
health = res.Health.Status
health = string(res.Health.Status)
}
states = append(states, &resourceState{
Group: res.Group, Kind: res.Kind, Namespace: res.Namespace, Name: res.Name, Status: string(res.Status), Health: health, Hook: "", Message: ""})
@@ -1564,12 +1676,12 @@ func groupResourceStates(app *argoappv1.Application, selectedResources []argoapp
func checkResourceStatus(watchSync bool, watchHealth bool, watchOperation bool, watchSuspended bool, healthStatus string, syncStatus string, operationStatus *argoappv1.Operation) bool {
healthCheckPassed := true
if watchSuspended && watchHealth {
healthCheckPassed = healthStatus == argoappv1.HealthStatusHealthy ||
healthStatus == argoappv1.HealthStatusSuspended
healthCheckPassed = healthStatus == string(health.HealthStatusHealthy) ||
healthStatus == string(health.HealthStatusSuspended)
} else if watchSuspended {
healthCheckPassed = healthStatus == argoappv1.HealthStatusSuspended
healthCheckPassed = healthStatus == string(health.HealthStatusSuspended)
} else if watchHealth {
healthCheckPassed = healthStatus == argoappv1.HealthStatusHealthy
healthCheckPassed = healthStatus == string(health.HealthStatusHealthy)
}
synced := !watchSync || syncStatus == string(argoappv1.SyncStatusCodeSynced)
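
The branch above gates health as follows: with both --suspended and --health, Healthy or Suspended passes; with --suspended alone, only Suspended; with --health alone, only Healthy. A condensed sketch of that gate (assumed equivalent; health is the gitops-engine package used above):

func healthGate(watchHealth, watchSuspended bool, status string) bool {
	switch {
	case watchSuspended && watchHealth:
		return status == string(health.HealthStatusHealthy) || status == string(health.HealthStatusSuspended)
	case watchSuspended:
		return status == string(health.HealthStatusSuspended)
	case watchHealth:
		return status == string(health.HealthStatusHealthy)
	default:
		return true // health is not being watched
	}
}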
@@ -1588,7 +1700,7 @@ func waitOnApplicationStatus(acdClient apiclient.Client, appName string, timeout
// time when the sync status lags behind when an operation completes
refresh := false
printFinalStatus := func(app *argoappv1.Application) {
printFinalStatus := func(app *argoappv1.Application) *argoappv1.Application {
var err error
if refresh {
conn, appClient := acdClient.NewApplicationClientOrDie()
@@ -1611,6 +1723,7 @@ func waitOnApplicationStatus(acdClient apiclient.Client, appName string, timeout
printAppResources(w, app)
_ = w.Flush()
}
return app
}
if timeout != 0 {
@@ -1625,14 +1738,27 @@ func waitOnApplicationStatus(acdClient apiclient.Client, appName string, timeout
prevStates := make(map[string]*resourceState)
appEventCh := acdClient.WatchApplicationWithRetry(ctx, appName)
conn, appClient := acdClient.NewApplicationClientOrDie()
defer util.Close(conn)
defer argoio.Close(conn)
app, err := appClient.Get(ctx, &applicationpkg.ApplicationQuery{Name: &appName})
errors.CheckError(err)
for appEvent := range appEventCh {
app = &appEvent.Application
operationInProgress := false
// consider the operation to be in progress
if app.Operation != nil {
// if it just got requested
operationInProgress = true
refresh = true
} else if app.Status.OperationState != nil {
if app.Status.OperationState.FinishedAt == nil {
// if it is not finished yet
operationInProgress = true
} else if app.Status.ReconciledAt == nil || app.Status.ReconciledAt.Before(app.Status.OperationState.FinishedAt) {
// if it just finished, and we need to wait for the controller to reconcile the app once after syncing
operationInProgress = true
}
}
var selectedResourcesAreReady bool
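
The new in-progress heuristic above can be read as a single predicate: an operation is in progress if it was just requested, has not finished, or finished but the controller has not reconciled the app since. A hedged restatement as a package-local helper (the name is hypothetical):

func isOperationInProgress(app *argoappv1.Application) bool {
	if app.Operation != nil {
		return true // just requested
	}
	state := app.Status.OperationState
	if state == nil {
		return false
	}
	if state.FinishedAt == nil {
		return true // still running
	}
	// finished, but the controller has not reconciled the app since
	return app.Status.ReconciledAt == nil || app.Status.ReconciledAt.Before(state.FinishedAt)
}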
@@ -1649,11 +1775,11 @@ func waitOnApplicationStatus(acdClient apiclient.Client, appName string, timeout
}
} else {
// Wait on the application as a whole
selectedResourcesAreReady = checkResourceStatus(watchSync, watchHealth, watchOperation, watchSuspended, app.Status.Health.Status, string(app.Status.Sync.Status), appEvent.Application.Operation)
selectedResourcesAreReady = checkResourceStatus(watchSync, watchHealth, watchOperation, watchSuspended, string(app.Status.Health.Status), string(app.Status.Sync.Status), appEvent.Application.Operation)
}
if selectedResourcesAreReady {
printFinalStatus(app)
if selectedResourcesAreReady && !operationInProgress {
app = printFinalStatus(app)
return app, nil
}
@@ -1662,8 +1788,8 @@ func waitOnApplicationStatus(acdClient apiclient.Client, appName string, timeout
var doPrint bool
stateKey := newState.Key()
if prevState, found := prevStates[stateKey]; found {
if watchHealth && prevState.Health != argoappv1.HealthStatusUnknown && prevState.Health != argoappv1.HealthStatusDegraded && newState.Health == argoappv1.HealthStatusDegraded {
printFinalStatus(app)
if watchHealth && prevState.Health != string(health.HealthStatusUnknown) && prevState.Health != string(health.HealthStatusDegraded) && newState.Health == string(health.HealthStatusDegraded) {
_ = printFinalStatus(app)
return nil, fmt.Errorf("application '%s' health state has transitioned from %s to %s", appName, prevState.Health, newState.Health)
}
doPrint = prevState.Merge(newState)
@@ -1677,7 +1803,7 @@ func waitOnApplicationStatus(acdClient apiclient.Client, appName string, timeout
}
_ = w.Flush()
}
printFinalStatus(app)
_ = printFinalStatus(app)
return nil, fmt.Errorf("timed out (%ds) waiting for app %q match desired state", timeout, appName)
}
@@ -1786,7 +1912,7 @@ func NewApplicationHistoryCommand(clientOpts *argocdclient.ClientOptions) *cobra
os.Exit(1)
}
conn, appIf := argocdclient.NewClientOrDie(clientOpts).NewApplicationClientOrDie()
defer util.Close(conn)
defer argoio.Close(conn)
appName := args[0]
app, err := appIf.Get(context.Background(), &applicationpkg.ApplicationQuery{Name: &appName})
errors.CheckError(err)
@@ -1820,7 +1946,7 @@ func NewApplicationRollbackCommand(clientOpts *argocdclient.ClientOptions) *cobr
errors.CheckError(err)
acdClient := argocdclient.NewClientOrDie(clientOpts)
conn, appIf := acdClient.NewApplicationClientOrDie()
defer util.Close(conn)
defer argoio.Close(conn)
ctx := context.Background()
app, err := appIf.Get(ctx, &applicationpkg.ApplicationQuery{Name: &appName})
errors.CheckError(err)
@@ -1893,7 +2019,7 @@ func NewApplicationManifestsCommand(clientOpts *argocdclient.ClientOptions) *cob
}
appName := args[0]
conn, appIf := argocdclient.NewClientOrDie(clientOpts).NewApplicationClientOrDie()
defer util.Close(conn)
defer argoio.Close(conn)
ctx := context.Background()
resources, err := appIf.ManagedResources(context.Background(), &applicationpkg.ResourcesQuery{ApplicationName: &appName})
errors.CheckError(err)
@@ -1951,7 +2077,7 @@ func NewApplicationTerminateOpCommand(clientOpts *argocdclient.ClientOptions) *c
}
appName := args[0]
conn, appIf := argocdclient.NewClientOrDie(clientOpts).NewApplicationClientOrDie()
defer util.Close(conn)
defer argoio.Close(conn)
ctx := context.Background()
_, err := appIf.TerminateOperation(ctx, &applicationpkg.OperationTerminateRequest{Name: &appName})
errors.CheckError(err)
@@ -1972,7 +2098,7 @@ func NewApplicationEditCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
}
appName := args[0]
conn, appIf := argocdclient.NewClientOrDie(clientOpts).NewApplicationClientOrDie()
defer util.Close(conn)
defer argoio.Close(conn)
app, err := appIf.Get(context.Background(), &applicationpkg.ApplicationQuery{Name: &appName})
errors.CheckError(err)
appData, err := json.Marshal(app.Spec)
@@ -1990,7 +2116,10 @@ func NewApplicationEditCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
if err != nil {
return err
}
_, err = appIf.UpdateSpec(context.Background(), &applicationpkg.ApplicationUpdateSpecRequest{Name: &app.Name, Spec: updatedSpec})
var appOpts appOptions
setAppSpecOptions(c.Flags(), &app.Spec, &appOpts)
_, err = appIf.UpdateSpec(context.Background(), &applicationpkg.ApplicationUpdateSpecRequest{Name: &app.Name, Spec: updatedSpec, Validate: &appOpts.validate})
if err != nil {
return fmt.Errorf("Failed to update application spec:\n%v", err)
}
@@ -2021,7 +2150,7 @@ func NewApplicationPatchCommand(clientOpts *argocdclient.ClientOptions) *cobra.C
}
appName := args[0]
conn, appIf := argocdclient.NewClientOrDie(clientOpts).NewApplicationClientOrDie()
defer util.Close(conn)
defer argoio.Close(conn)
patchedApp, err := appIf.Patch(context.Background(), &applicationpkg.ApplicationPatchRequest{
Name: &appName,
@@ -2108,7 +2237,7 @@ func NewApplicationPatchResourceCommand(clientOpts *argocdclient.ClientOptions)
appName := args[0]
conn, appIf := argocdclient.NewClientOrDie(clientOpts).NewApplicationClientOrDie()
defer util.Close(conn)
defer argoio.Close(conn)
ctx := context.Background()
resources, err := appIf.ManagedResources(ctx, &applicationpkg.ResourcesQuery{ApplicationName: &appName})
errors.CheckError(err)

View File

@@ -4,18 +4,18 @@ import (
"context"
"encoding/json"
"fmt"
"log"
"os"
"strconv"
"text/tabwriter"
"github.com/argoproj/gitops-engine/pkg/utils/errors"
"github.com/argoproj/gitops-engine/pkg/utils/io"
"github.com/ghodss/yaml"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
applicationpkg "github.com/argoproj/argo-cd/pkg/apiclient/application"
"github.com/argoproj/argo-cd/util"
)
type DisplayedAction struct {
@@ -59,7 +59,7 @@ func NewApplicationResourceActionsListCommand(clientOpts *argocdclient.ClientOpt
}
appName := args[0]
conn, appIf := argocdclient.NewClientOrDie(clientOpts).NewApplicationClientOrDie()
defer util.Close(conn)
defer io.Close(conn)
ctx := context.Background()
resources, err := appIf.ManagedResources(ctx, &applicationpkg.ResourcesQuery{ApplicationName: &appName})
errors.CheckError(err)
@@ -144,7 +144,7 @@ func NewApplicationResourceActionsRunCommand(clientOpts *argocdclient.ClientOpti
actionName := args[1]
conn, appIf := argocdclient.NewClientOrDie(clientOpts).NewApplicationClientOrDie()
defer util.Close(conn)
defer io.Close(conn)
ctx := context.Background()
resources, err := appIf.ManagedResources(ctx, &applicationpkg.ResourcesQuery{ApplicationName: &appName})
errors.CheckError(err)

View File

@@ -2,22 +2,21 @@ package commands
import (
"context"
"crypto/x509"
"fmt"
"os"
"sort"
"strings"
"text/tabwriter"
"github.com/argoproj/gitops-engine/pkg/utils/errors"
"github.com/argoproj/gitops-engine/pkg/utils/io"
"github.com/spf13/cobra"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
certificatepkg "github.com/argoproj/argo-cd/pkg/apiclient/certificate"
appsv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util"
certutil "github.com/argoproj/argo-cd/util/cert"
"crypto/x509"
)
// NewCertCommand returns a new instance of an `argocd repo` command
@@ -66,7 +65,7 @@ func NewCertAddTLSCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command
Short: "Add TLS certificate data for connecting to repository server SERVERNAME",
Run: func(c *cobra.Command, args []string) {
conn, certIf := argocdclient.NewClientOrDie(clientOpts).NewCertClientOrDie()
defer util.Close(conn)
defer io.Close(conn)
if len(args) != 1 {
c.HelpFunc()(c, args)
@@ -149,7 +148,7 @@ func NewCertAddSSHCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command
Run: func(c *cobra.Command, args []string) {
conn, certIf := argocdclient.NewClientOrDie(clientOpts).NewCertClientOrDie()
defer util.Close(conn)
defer io.Close(conn)
var sshKnownHostsLists []string
var err error
@@ -221,7 +220,7 @@ func NewCertRemoveCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command
os.Exit(1)
}
conn, certIf := argocdclient.NewClientOrDie(clientOpts).NewCertClientOrDie()
defer util.Close(conn)
defer io.Close(conn)
hostNamePattern := args[0]
// Prevent the user from specifying a wildcard as hostname as precaution
@@ -276,7 +275,7 @@ func NewCertListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
}
conn, certIf := argocdclient.NewClientOrDie(clientOpts).NewCertClientOrDie()
defer util.Close(conn)
defer io.Close(conn)
certificates, err := certIf.ListCertificates(context.Background(), &certificatepkg.RepositoryCertificateQuery{HostNamePattern: hostNamePattern, CertType: certType})
errors.CheckError(err)

View File

@@ -9,6 +9,8 @@ import (
"strings"
"text/tabwriter"
"github.com/argoproj/gitops-engine/pkg/utils/errors"
"github.com/argoproj/gitops-engine/pkg/utils/io"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"k8s.io/client-go/kubernetes"
@@ -16,11 +18,9 @@ import (
"k8s.io/client-go/tools/clientcmd"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
clusterpkg "github.com/argoproj/argo-cd/pkg/apiclient/cluster"
argoappv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/clusterauth"
)
@@ -110,7 +110,7 @@ func NewClusterAddCommand(clientOpts *argocdclient.ClientOptions, pathOpts *clie
errors.CheckError(err)
}
conn, clusterIf := argocdclient.NewClientOrDie(clientOpts).NewClusterClientOrDie()
defer util.Close(conn)
defer io.Close(conn)
clst := newCluster(contextName, namespaces, conf, managerBearerToken, awsAuthConf)
if inCluster {
clst.Server = common.KubernetesInternalAPIServerAddr
@@ -179,22 +179,42 @@ func newCluster(name string, namespaces []string, conf *rest.Config, managerBear
Insecure: conf.TLSClientConfig.Insecure,
ServerName: conf.TLSClientConfig.ServerName,
CAData: conf.TLSClientConfig.CAData,
CertData: conf.TLSClientConfig.CertData,
KeyData: conf.TLSClientConfig.KeyData,
}
if len(conf.TLSClientConfig.CAData) == 0 && conf.TLSClientConfig.CAFile != "" {
data, err := ioutil.ReadFile(conf.TLSClientConfig.CAFile)
errors.CheckError(err)
tlsClientConfig.CAData = data
}
if len(conf.TLSClientConfig.CertData) == 0 && conf.TLSClientConfig.CertFile != "" {
data, err := ioutil.ReadFile(conf.TLSClientConfig.CertFile)
errors.CheckError(err)
tlsClientConfig.CertData = data
}
if len(conf.TLSClientConfig.KeyData) == 0 && conf.TLSClientConfig.KeyFile != "" {
data, err := ioutil.ReadFile(conf.TLSClientConfig.KeyFile)
errors.CheckError(err)
tlsClientConfig.KeyData = data
}
clst := argoappv1.Cluster{
Server: conf.Host,
Name: name,
Namespaces: namespaces,
Config: argoappv1.ClusterConfig{
BearerToken: managerBearerToken,
TLSClientConfig: tlsClientConfig,
AWSAuthConfig: awsAuthConf,
},
}
// The bearer token is preferred for auth when present, even when key/cert
// credentials are also supplied, so set the bearer token only when the
// key/cert data is absent.
if len(tlsClientConfig.CertData) == 0 || len(tlsClientConfig.KeyData) == 0 {
clst.Config.BearerToken = managerBearerToken
}
return &clst
}
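
newCluster above falls back from inline TLS data to on-disk files for the CA, cert, and key. The three blocks share one shape, sketched here as a standalone helper (dataOrFile is a hypothetical name):

package main

import (
	"fmt"
	"io/ioutil"
)

// dataOrFile returns the inline bytes when present, otherwise the file contents.
func dataOrFile(data []byte, file string) ([]byte, error) {
	if len(data) > 0 || file == "" {
		return data, nil
	}
	return ioutil.ReadFile(file)
}

func main() {
	ca, err := dataOrFile(nil, "./testdata/test.cert.pem")
	fmt.Println(len(ca), err)
}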
@@ -213,7 +233,7 @@ func NewClusterGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command
os.Exit(1)
}
conn, clusterIf := argocdclient.NewClientOrDie(clientOpts).NewClusterClientOrDie()
defer util.Close(conn)
defer io.Close(conn)
clusters := make([]argoappv1.Cluster, 0)
for _, clusterName := range args {
clst, err := clusterIf.Get(context.Background(), &clusterpkg.ClusterQuery{Server: clusterName})
@@ -282,7 +302,7 @@ func NewClusterRemoveCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comm
os.Exit(1)
}
conn, clusterIf := argocdclient.NewClientOrDie(clientOpts).NewClusterClientOrDie()
defer util.Close(conn)
defer io.Close(conn)
// clientset, err := kubernetes.NewForConfig(conf)
// errors.CheckError(err)
@@ -330,7 +350,7 @@ func NewClusterListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comman
Short: "List configured clusters",
Run: func(c *cobra.Command, args []string) {
conn, clusterIf := argocdclient.NewClientOrDie(clientOpts).NewClusterClientOrDie()
defer util.Close(conn)
defer io.Close(conn)
clusters, err := clusterIf.List(context.Background(), &clusterpkg.ClusterQuery{})
errors.CheckError(err)
switch output {
@@ -362,7 +382,7 @@ func NewClusterRotateAuthCommand(clientOpts *argocdclient.ClientOptions) *cobra.
os.Exit(1)
}
conn, clusterIf := argocdclient.NewClientOrDie(clientOpts).NewClusterClientOrDie()
defer util.Close(conn)
defer io.Close(conn)
clusterQuery := clusterpkg.ClusterQuery{
Server: args[0],
}

View File

@@ -1,9 +1,12 @@
package commands
import (
"strings"
"testing"
"github.com/stretchr/testify/assert"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/rest"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
)
@@ -29,3 +32,52 @@ func Test_printClusterTable(t *testing.T) {
},
})
}
func Test_newCluster(t *testing.T) {
clusterWithData := newCluster("test-cluster", []string{"test-namespace"}, &rest.Config{
TLSClientConfig: rest.TLSClientConfig{
Insecure: false,
ServerName: "test-endpoint.example.com",
CAData: []byte("test-ca-data"),
CertData: []byte("test-cert-data"),
KeyData: []byte("test-key-data"),
},
Host: "test-endpoint.example.com",
},
"test-bearer-token",
&v1alpha1.AWSAuthConfig{})
assert.Equal(t, "test-cert-data", string(clusterWithData.Config.CertData))
assert.Equal(t, "test-key-data", string(clusterWithData.Config.KeyData))
assert.Equal(t, "", clusterWithData.Config.BearerToken)
clusterWithFiles := newCluster("test-cluster", []string{"test-namespace"}, &rest.Config{
TLSClientConfig: rest.TLSClientConfig{
Insecure: false,
ServerName: "test-endpoint.example.com",
CAData: []byte("test-ca-data"),
CertFile: "./testdata/test.cert.pem",
KeyFile: "./testdata/test.key.pem",
},
Host: "test-endpoint.example.com",
},
"test-bearer-token",
&v1alpha1.AWSAuthConfig{})
assert.True(t, strings.Contains(string(clusterWithFiles.Config.CertData), "test-cert-data"))
assert.True(t, strings.Contains(string(clusterWithFiles.Config.KeyData), "test-key-data"))
assert.Equal(t, "", clusterWithFiles.Config.BearerToken)
clusterWithBearerToken := newCluster("test-cluster", []string{"test-namespace"}, &rest.Config{
TLSClientConfig: rest.TLSClientConfig{
Insecure: false,
ServerName: "test-endpoint.example.com",
CAData: []byte("test-ca-data"),
},
Host: "test-endpoint.example.com",
},
"test-bearer-token",
&v1alpha1.AWSAuthConfig{})
assert.Equal(t, "test-bearer-token", clusterWithBearerToken.Config.BearerToken)
}

View File

@@ -3,9 +3,9 @@ package commands
import (
"fmt"
"io"
"log"
"os"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
)

View File

@@ -8,10 +8,10 @@ import (
"strings"
"text/tabwriter"
"github.com/argoproj/gitops-engine/pkg/utils/errors"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
"github.com/argoproj/argo-cd/util/localconfig"
)

View File

@@ -6,8 +6,11 @@ import (
"net/http"
"os"
"strconv"
"strings"
"time"
"github.com/argoproj/gitops-engine/pkg/utils/errors"
"github.com/argoproj/gitops-engine/pkg/utils/io"
"github.com/coreos/go-oidc"
"github.com/dgrijalva/jwt-go"
log "github.com/sirupsen/logrus"
@@ -15,11 +18,9 @@ import (
"github.com/spf13/cobra"
"golang.org/x/oauth2"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
sessionpkg "github.com/argoproj/argo-cd/pkg/apiclient/session"
settingspkg "github.com/argoproj/argo-cd/pkg/apiclient/settings"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/cli"
grpc_util "github.com/argoproj/argo-cd/util/grpc"
"github.com/argoproj/argo-cd/util/localconfig"
@@ -41,41 +42,55 @@ func NewLoginCommand(globalClientOpts *argocdclient.ClientOptions) *cobra.Comman
Short: "Log in to Argo CD",
Long: "Log in to Argo CD",
Run: func(c *cobra.Command, args []string) {
if len(args) == 0 {
var server string
if len(args) != 1 && !globalClientOpts.PortForward {
c.HelpFunc()(c, args)
os.Exit(1)
}
server := args[0]
tlsTestResult, err := grpc_util.TestTLS(server)
errors.CheckError(err)
if !tlsTestResult.TLS {
if !globalClientOpts.PlainText {
if !cli.AskToProceed("WARNING: server is not configured with TLS. Proceed (y/n)? ") {
os.Exit(1)
if globalClientOpts.PortForward {
server = "port-forward"
} else {
server = args[0]
tlsTestResult, err := grpc_util.TestTLS(server)
errors.CheckError(err)
if !tlsTestResult.TLS {
if !globalClientOpts.PlainText {
if !cli.AskToProceed("WARNING: server is not configured with TLS. Proceed (y/n)? ") {
os.Exit(1)
}
globalClientOpts.PlainText = true
}
globalClientOpts.PlainText = true
}
} else if tlsTestResult.InsecureErr != nil {
if !globalClientOpts.Insecure {
if !cli.AskToProceed(fmt.Sprintf("WARNING: server certificate had error: %s. Proceed insecurely (y/n)? ", tlsTestResult.InsecureErr)) {
os.Exit(1)
} else if tlsTestResult.InsecureErr != nil {
if !globalClientOpts.Insecure {
if !cli.AskToProceed(fmt.Sprintf("WARNING: server certificate had error: %s. Proceed insecurely (y/n)? ", tlsTestResult.InsecureErr)) {
os.Exit(1)
}
globalClientOpts.Insecure = true
}
globalClientOpts.Insecure = true
}
}
clientOpts := argocdclient.ClientOptions{
ConfigPath: "",
ServerAddr: server,
Insecure: globalClientOpts.Insecure,
PlainText: globalClientOpts.PlainText,
GRPCWeb: globalClientOpts.GRPCWeb,
ConfigPath: "",
ServerAddr: server,
Insecure: globalClientOpts.Insecure,
PlainText: globalClientOpts.PlainText,
GRPCWeb: globalClientOpts.GRPCWeb,
GRPCWebRootPath: globalClientOpts.GRPCWebRootPath,
PortForward: globalClientOpts.PortForward,
PortForwardNamespace: globalClientOpts.PortForwardNamespace,
}
acdClient := argocdclient.NewClientOrDie(&clientOpts)
setConn, setIf := acdClient.NewSettingsClientOrDie()
defer util.Close(setConn)
defer io.Close(setConn)
if ctxName == "" {
ctxName = server
if globalClientOpts.GRPCWebRootPath != "" {
rootPath := strings.TrimRight(strings.TrimLeft(globalClientOpts.GRPCWebRootPath, "/"), "/")
ctxName = fmt.Sprintf("%s/%s", server, rootPath)
}
}
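
With the change above, the server argument becomes optional when port-forwarding is enabled, and the context is recorded as "port-forward". A usage example with the existing global flags:

argocd login --port-forward --port-forward-namespace argocd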
// Perform the login
@@ -99,7 +114,7 @@ func NewLoginCommand(globalClientOpts *argocdclient.ClientOptions) *cobra.Comman
SkipClaimsValidation: true,
}
claims := jwt.MapClaims{}
_, _, err = parser.ParseUnverified(tokenString, &claims)
_, _, err := parser.ParseUnverified(tokenString, &claims)
errors.CheckError(err)
fmt.Printf("'%s' logged in successfully\n", userDisplayName(claims))
@@ -110,10 +125,11 @@ func NewLoginCommand(globalClientOpts *argocdclient.ClientOptions) *cobra.Comman
localCfg = &localconfig.LocalConfig{}
}
localCfg.UpsertServer(localconfig.Server{
Server: server,
PlainText: globalClientOpts.PlainText,
Insecure: globalClientOpts.Insecure,
GRPCWeb: globalClientOpts.GRPCWeb,
Server: server,
PlainText: globalClientOpts.PlainText,
Insecure: globalClientOpts.Insecure,
GRPCWeb: globalClientOpts.GRPCWeb,
GRPCWebRootPath: globalClientOpts.GRPCWebRootPath,
})
localCfg.UpsertUser(localconfig.User{
Name: ctxName,
@@ -283,7 +299,7 @@ func oauth2Login(ctx context.Context, port int, oidcSettings *settingspkg.OIDCCo
func passwordLogin(acdClient argocdclient.Client, username, password string) string {
username, password = cli.PromptCredentials(username, password)
sessConn, sessionIf := acdClient.NewSessionClientOrDie()
defer util.Close(sessConn)
defer io.Close(sessConn)
sessionRequest := sessionpkg.SessionCreateRequest{
Username: username,
Password: password,

View File

@@ -4,10 +4,10 @@ import (
"fmt"
"os"
"github.com/argoproj/gitops-engine/pkg/utils/errors"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
"github.com/argoproj/argo-cd/util/localconfig"
)

View File

@@ -12,6 +12,8 @@ import (
"text/tabwriter"
"time"
"github.com/argoproj/gitops-engine/pkg/utils/errors"
argoio "github.com/argoproj/gitops-engine/pkg/utils/io"
"github.com/dustin/go-humanize"
"github.com/ghodss/yaml"
log "github.com/sirupsen/logrus"
@@ -20,11 +22,9 @@ import (
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/utils/pointer"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
projectpkg "github.com/argoproj/argo-cd/pkg/apiclient/project"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/config"
"github.com/argoproj/argo-cd/util/git"
@@ -170,7 +170,7 @@ func NewProjectCreateCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comm
}
}
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
defer argoio.Close(conn)
_, err := projIf.Create(context.Background(), &projectpkg.ProjectCreateRequest{Project: &proj, Upsert: upsert})
errors.CheckError(err)
},
@@ -200,7 +200,7 @@ func NewProjectSetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command
}
projName := args[0]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
defer argoio.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
@@ -247,7 +247,7 @@ func NewProjectAddDestinationCommand(clientOpts *argocdclient.ClientOptions) *co
server := args[1]
namespace := args[2]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
defer argoio.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
@@ -279,7 +279,7 @@ func NewProjectRemoveDestinationCommand(clientOpts *argocdclient.ClientOptions)
server := args[1]
namespace := args[2]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
defer argoio.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
@@ -317,7 +317,7 @@ func NewProjectAddSourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.C
projName := args[0]
url := args[1]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
defer argoio.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
@@ -340,18 +340,19 @@ func NewProjectAddSourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.C
return command
}
func modifyProjectResourceCmd(cmdUse, cmdDesc string, clientOpts *argocdclient.ClientOptions, action func(proj *v1alpha1.AppProject, group string, kind string) bool) *cobra.Command {
func modifyClusterResourceCmd(cmdUse, cmdDesc string, clientOpts *argocdclient.ClientOptions, action func(proj *v1alpha1.AppProject, group string, kind string) bool) *cobra.Command {
return &cobra.Command{
Use: cmdUse,
Short: cmdDesc,
Run: func(c *cobra.Command, args []string) {
if len(args) != 3 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName, group, kind := args[0], args[1], args[2]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
defer argoio.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
@@ -364,11 +365,55 @@ func modifyProjectResourceCmd(cmdUse, cmdDesc string, clientOpts *argocdclient.C
}
}
func modifyNamespaceResourceCmd(cmdUse, cmdDesc string, clientOpts *argocdclient.ClientOptions, action func(proj *v1alpha1.AppProject, group string, kind string, useWhitelist bool) bool) *cobra.Command {
var (
list string
)
var command = &cobra.Command{
Use: cmdUse,
Short: cmdDesc,
Run: func(c *cobra.Command, args []string) {
if len(args) != 3 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName, group, kind := args[0], args[1], args[2]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer argoio.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
var useWhitelist = false
if list == "white" {
useWhitelist = true
}
if action(proj, group, kind, useWhitelist) {
_, err = projIf.Update(context.Background(), &projectpkg.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
}
},
}
command.Flags().StringVarP(&list, "list", "l", "black", "Use the blacklist or the whitelist. Valid values are 'white' or 'black'")
return command
}
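
Usage of the new --list switch (project and resource names are illustrative):

# remove apps/Deployment from the project's namespaced blacklist (the default list)
argocd proj allow-namespace-resource my-proj apps Deployment

# add apps/Deployment to the project's namespaced whitelist
argocd proj allow-namespace-resource my-proj apps Deployment --list white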
// NewProjectAllowNamespaceResourceCommand returns a new instance of an `argocd proj allow-namespace-resource` command
func NewProjectAllowNamespaceResourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
use := "allow-namespace-resource PROJECT GROUP KIND"
desc := "Removes a namespaced API resource from the blacklist"
return modifyProjectResourceCmd(use, desc, clientOpts, func(proj *v1alpha1.AppProject, group string, kind string) bool {
desc := "Removes a namespaced API resource from the blacklist or add a namespaced API resource to the whitelist"
return modifyNamespaceResourceCmd(use, desc, clientOpts, func(proj *v1alpha1.AppProject, group string, kind string, useWhitelist bool) bool {
if useWhitelist {
for _, item := range proj.Spec.NamespaceResourceWhitelist {
if item.Group == group && item.Kind == kind {
fmt.Printf("Group '%s' and kind '%s' already present in whitelisted namespaced resources\n", group, kind)
return false
}
}
proj.Spec.NamespaceResourceWhitelist = append(proj.Spec.NamespaceResourceWhitelist, v1.GroupKind{Group: group, Kind: kind})
fmt.Printf("Group '%s' and kind '%s' is added to whitelisted namespaced resources\n", group, kind)
return true
}
index := -1
for i, item := range proj.Spec.NamespaceResourceBlacklist {
if item.Group == group && item.Kind == kind {
@@ -381,6 +426,7 @@ func NewProjectAllowNamespaceResourceCommand(clientOpts *argocdclient.ClientOpti
return false
}
proj.Spec.NamespaceResourceBlacklist = append(proj.Spec.NamespaceResourceBlacklist[:index], proj.Spec.NamespaceResourceBlacklist[index+1:]...)
fmt.Printf("Group '%s' and kind '%s' is removed from blacklisted namespaced resources\n", group, kind)
return true
})
}
@@ -388,8 +434,25 @@ func NewProjectAllowNamespaceResourceCommand(clientOpts *argocdclient.ClientOpti
// NewProjectDenyNamespaceResourceCommand returns a new instance of an `argocd proj deny-namespace-resource` command
func NewProjectDenyNamespaceResourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
use := "deny-namespace-resource PROJECT GROUP KIND"
desc := "Adds a namespaced API resource to the blacklist"
return modifyProjectResourceCmd(use, desc, clientOpts, func(proj *v1alpha1.AppProject, group string, kind string) bool {
desc := "Adds a namespaced API resource to the blacklist or removes a namespaced API resource from the whitelist"
return modifyNamespaceResourceCmd(use, desc, clientOpts, func(proj *v1alpha1.AppProject, group string, kind string, useWhitelist bool) bool {
if useWhitelist {
index := -1
for i, item := range proj.Spec.NamespaceResourceWhitelist {
if item.Group == group && item.Kind == kind {
index = i
break
}
}
if index == -1 {
fmt.Printf("Group '%s' and kind '%s' not in whitelisted namespaced resources\n", group, kind)
return false
}
proj.Spec.NamespaceResourceWhitelist = append(proj.Spec.NamespaceResourceWhitelist[:index], proj.Spec.NamespaceResourceWhitelist[index+1:]...)
fmt.Printf("Group '%s' and kind '%s' is removed from whitelisted namespaced resources\n", group, kind)
return true
}
for _, item := range proj.Spec.NamespaceResourceBlacklist {
if item.Group == group && item.Kind == kind {
fmt.Printf("Group '%s' and kind '%s' already present in blacklisted namespaced resources\n", group, kind)
@@ -397,6 +460,7 @@ func NewProjectDenyNamespaceResourceCommand(clientOpts *argocdclient.ClientOptio
}
}
proj.Spec.NamespaceResourceBlacklist = append(proj.Spec.NamespaceResourceBlacklist, v1.GroupKind{Group: group, Kind: kind})
fmt.Printf("Group '%s' and kind '%s' is added to blacklisted namespaced resources\n", group, kind)
return true
})
}
@@ -405,7 +469,7 @@ func NewProjectDenyNamespaceResourceCommand(clientOpts *argocdclient.ClientOptio
func NewProjectDenyClusterResourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
use := "deny-cluster-resource PROJECT GROUP KIND"
desc := "Removes a cluster-scoped API resource from the whitelist"
return modifyProjectResourceCmd(use, desc, clientOpts, func(proj *v1alpha1.AppProject, group string, kind string) bool {
return modifyClusterResourceCmd(use, desc, clientOpts, func(proj *v1alpha1.AppProject, group string, kind string) bool {
index := -1
for i, item := range proj.Spec.ClusterResourceWhitelist {
if item.Group == group && item.Kind == kind {
@@ -426,7 +490,7 @@ func NewProjectDenyClusterResourceCommand(clientOpts *argocdclient.ClientOptions
func NewProjectAllowClusterResourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
use := "allow-cluster-resource PROJECT GROUP KIND"
desc := "Adds a cluster-scoped API resource to the whitelist"
return modifyProjectResourceCmd(use, desc, clientOpts, func(proj *v1alpha1.AppProject, group string, kind string) bool {
return modifyClusterResourceCmd(use, desc, clientOpts, func(proj *v1alpha1.AppProject, group string, kind string) bool {
for _, item := range proj.Spec.ClusterResourceWhitelist {
if item.Group == group && item.Kind == kind {
fmt.Printf("Group '%s' and kind '%s' already present in whitelisted cluster resources\n", group, kind)
@@ -451,7 +515,7 @@ func NewProjectRemoveSourceCommand(clientOpts *argocdclient.ClientOptions) *cobr
projName := args[0]
url := args[1]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
defer argoio.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
@@ -487,7 +551,7 @@ func NewProjectDeleteCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comm
os.Exit(1)
}
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
defer argoio.Close(conn)
for _, name := range args {
_, err := projIf.Delete(context.Background(), &projectpkg.ProjectQuery{Name: name})
errors.CheckError(err)
@@ -524,7 +588,7 @@ func NewProjectListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comman
Short: "List projects",
Run: func(c *cobra.Command, args []string) {
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
defer argoio.Close(conn)
projects, err := projIf.List(context.Background(), &projectpkg.ProjectQuery{})
errors.CheckError(err)
switch output {
@@ -650,7 +714,7 @@ func NewProjectGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command
}
projName := args[0]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
defer argoio.Close(conn)
p, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
@@ -680,7 +744,7 @@ func NewProjectEditCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comman
}
projName := args[0]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
defer argoio.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
projData, err := json.Marshal(proj.Spec)

View File

@@ -7,14 +7,14 @@ import (
"strconv"
"text/tabwriter"
"github.com/argoproj/gitops-engine/pkg/utils/errors"
"github.com/argoproj/gitops-engine/pkg/utils/io"
timeutil "github.com/argoproj/pkg/time"
"github.com/spf13/cobra"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
projectpkg "github.com/argoproj/argo-cd/pkg/apiclient/project"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util"
)
const (
@@ -60,7 +60,7 @@ func NewProjectRoleAddPolicyCommand(clientOpts *argocdclient.ClientOptions) *cob
projName := args[0]
roleName := args[1]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
defer io.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
@@ -95,7 +95,7 @@ func NewProjectRoleRemovePolicyCommand(clientOpts *argocdclient.ClientOptions) *
projName := args[0]
roleName := args[1]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
defer io.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
@@ -140,7 +140,7 @@ func NewProjectRoleCreateCommand(clientOpts *argocdclient.ClientOptions) *cobra.
projName := args[0]
roleName := args[1]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
defer io.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
@@ -174,7 +174,7 @@ func NewProjectRoleDeleteCommand(clientOpts *argocdclient.ClientOptions) *cobra.
projName := args[0]
roleName := args[1]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
defer io.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
@@ -211,7 +211,7 @@ func NewProjectRoleCreateTokenCommand(clientOpts *argocdclient.ClientOptions) *c
projName := args[0]
roleName := args[1]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
defer io.Close(conn)
duration, err := timeutil.ParseDuration(expiresIn)
errors.CheckError(err)
token, err := projIf.CreateToken(context.Background(), &projectpkg.ProjectTokenCreateRequest{Project: projName, Role: roleName, ExpiresIn: int64(duration.Seconds())})
@@ -240,7 +240,7 @@ func NewProjectRoleDeleteTokenCommand(clientOpts *argocdclient.ClientOptions) *c
errors.CheckError(err)
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
defer io.Close(conn)
_, err = projIf.DeleteToken(context.Background(), &projectpkg.ProjectTokenDeleteRequest{Project: projName, Role: roleName, Iat: issuedAt})
errors.CheckError(err)
@@ -281,7 +281,7 @@ func NewProjectRoleListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
}
projName := args[0]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
defer io.Close(conn)
project, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
@@ -315,7 +315,7 @@ func NewProjectRoleGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Com
projName := args[0]
roleName := args[1]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
defer io.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
@@ -357,7 +357,7 @@ func NewProjectRoleAddGroupCommand(clientOpts *argocdclient.ClientOptions) *cobr
}
projName, roleName, groupName := args[0], args[1], args[2]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
defer io.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
updated, err := proj.AddGroupToRole(roleName, groupName)
@@ -386,7 +386,7 @@ func NewProjectRoleRemoveGroupCommand(clientOpts *argocdclient.ClientOptions) *c
}
projName, roleName, groupName := args[0], args[1], args[2]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
defer io.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
updated, err := proj.RemoveGroupFromRole(roleName, groupName)


@@ -2,21 +2,19 @@ package commands
import (
"context"
"os"
"github.com/spf13/cobra"
"fmt"
"os"
"strconv"
"strings"
"text/tabwriter"
"strconv"
"github.com/argoproj/gitops-engine/pkg/utils/errors"
"github.com/argoproj/gitops-engine/pkg/utils/io"
"github.com/spf13/cobra"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
projectpkg "github.com/argoproj/argo-cd/pkg/apiclient/project"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util"
)
// NewProjectWindowsCommand returns a new instance of the `argocd proj windows` command
@@ -55,7 +53,7 @@ func NewProjectWindowsDisableManualSyncCommand(clientOpts *argocdclient.ClientOp
errors.CheckError(err)
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
defer io.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
@@ -90,7 +88,7 @@ func NewProjectWindowsEnableManualSyncCommand(clientOpts *argocdclient.ClientOpt
errors.CheckError(err)
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
defer io.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
@@ -129,7 +127,7 @@ func NewProjectWindowsAddWindowCommand(clientOpts *argocdclient.ClientOptions) *
}
projName := args[0]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
defer io.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
@@ -168,7 +166,7 @@ func NewProjectWindowsDeleteCommand(clientOpts *argocdclient.ClientOptions) *cob
errors.CheckError(err)
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
defer io.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
@@ -207,7 +205,7 @@ func NewProjectWindowsUpdateCommand(clientOpts *argocdclient.ClientOptions) *cob
errors.CheckError(err)
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
defer io.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
@@ -248,7 +246,7 @@ func NewProjectWindowsListCommand(clientOpts *argocdclient.ClientOptions) *cobra
}
projName := args[0]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
defer io.Close(conn)
proj, err := projIf.Get(context.Background(), &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)


@@ -5,14 +5,14 @@ import (
"fmt"
"os"
"github.com/argoproj/gitops-engine/pkg/utils/errors"
argoio "github.com/argoproj/gitops-engine/pkg/utils/io"
"github.com/coreos/go-oidc"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
settingspkg "github.com/argoproj/argo-cd/pkg/apiclient/settings"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/localconfig"
"github.com/argoproj/argo-cd/util/session"
)
@@ -43,11 +43,12 @@ func NewReloginCommand(globalClientOpts *argocdclient.ClientOptions) *cobra.Comm
var tokenString string
var refreshToken string
clientOpts := argocdclient.ClientOptions{
ConfigPath: "",
ServerAddr: configCtx.Server.Server,
Insecure: configCtx.Server.Insecure,
GRPCWeb: globalClientOpts.GRPCWeb,
PlainText: configCtx.Server.PlainText,
ConfigPath: "",
ServerAddr: configCtx.Server.Server,
Insecure: configCtx.Server.Insecure,
GRPCWeb: globalClientOpts.GRPCWeb,
GRPCWebRootPath: globalClientOpts.GRPCWebRootPath,
PlainText: configCtx.Server.PlainText,
}
acdClient := argocdclient.NewClientOrDie(&clientOpts)
claims, err := configCtx.User.Claims()
@@ -58,7 +59,7 @@ func NewReloginCommand(globalClientOpts *argocdclient.ClientOptions) *cobra.Comm
} else {
fmt.Println("Reinitiating SSO login")
setConn, setIf := acdClient.NewSettingsClientOrDie()
defer util.Close(setConn)
defer argoio.Close(setConn)
ctx := context.Background()
httpClient, err := acdClient.HTTPClient()
errors.CheckError(err)


@@ -7,15 +7,15 @@ import (
"os"
"text/tabwriter"
"github.com/argoproj/gitops-engine/pkg/utils/errors"
"github.com/argoproj/gitops-engine/pkg/utils/io"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
repositorypkg "github.com/argoproj/argo-cd/pkg/apiclient/repository"
appsv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/git"
)
@@ -32,6 +32,7 @@ func NewRepoCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
}
command.AddCommand(NewRepoAddCommand(clientOpts))
command.AddCommand(NewRepoGetCommand(clientOpts))
command.AddCommand(NewRepoListCommand(clientOpts))
command.AddCommand(NewRepoRemoveCommand(clientOpts))
return command
@@ -131,7 +132,7 @@ func NewRepoAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
}
conn, repoIf := argocdclient.NewClientOrDie(clientOpts).NewRepoClientOrDie()
defer util.Close(conn)
defer io.Close(conn)
// If the user set a username, but didn't supply password via --password,
// then we prompt for it
@@ -165,7 +166,7 @@ func NewRepoAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
Upsert: upsert,
}
createdRepo, err := repoIf.CreateRepository(context.Background(), &repoCreateReq)
createdRepo, err := repoIf.Create(context.Background(), &repoCreateReq)
errors.CheckError(err)
fmt.Printf("repository '%s' added\n", createdRepo.Repo)
},
@@ -195,9 +196,9 @@ func NewRepoRemoveCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command
os.Exit(1)
}
conn, repoIf := argocdclient.NewClientOrDie(clientOpts).NewRepoClientOrDie()
defer util.Close(conn)
defer io.Close(conn)
for _, repoURL := range args {
_, err := repoIf.DeleteRepository(context.Background(), &repositorypkg.RepoQuery{Repo: repoURL})
_, err := repoIf.Delete(context.Background(), &repositorypkg.RepoQuery{Repo: repoURL})
errors.CheckError(err)
}
},
@@ -243,7 +244,7 @@ func NewRepoListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
Short: "List configured repositories",
Run: func(c *cobra.Command, args []string) {
conn, repoIf := argocdclient.NewClientOrDie(clientOpts).NewRepoClientOrDie()
defer util.Close(conn)
defer io.Close(conn)
forceRefresh := false
switch refresh {
case "":
@@ -253,7 +254,7 @@ func NewRepoListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
err := fmt.Errorf("--refresh must be one of: 'hard'")
errors.CheckError(err)
}
repos, err := repoIf.ListRepositories(context.Background(), &repositorypkg.RepoQuery{ForceRefresh: forceRefresh})
repos, err := repoIf.List(context.Background(), &repositorypkg.RepoQuery{ForceRefresh: forceRefresh})
errors.CheckError(err)
switch output {
case "yaml", "json":
@@ -273,3 +274,52 @@ func NewRepoListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
command.Flags().StringVar(&refresh, "refresh", "", "Force a cache refresh on connection status")
return command
}
// NewRepoGetCommand returns a new instance of an `argocd repo get` command
func NewRepoGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
output string
refresh string
)
var command = &cobra.Command{
Use: "get",
Short: "Get a configured repository by URL",
Run: func(c *cobra.Command, args []string) {
if len(args) != 1 {
c.HelpFunc()(c, args)
os.Exit(1)
}
// Repository URL
repoURL := args[0]
conn, repoIf := argocdclient.NewClientOrDie(clientOpts).NewRepoClientOrDie()
defer io.Close(conn)
forceRefresh := false
switch refresh {
case "":
case "hard":
forceRefresh = true
default:
err := fmt.Errorf("--refresh must be one of: 'hard'")
errors.CheckError(err)
}
repo, err := repoIf.Get(context.Background(), &repositorypkg.RepoQuery{Repo: repoURL, ForceRefresh: forceRefresh})
errors.CheckError(err)
switch output {
case "yaml", "json":
err := PrintResource(repo, output)
errors.CheckError(err)
case "url":
fmt.Println(repo.Repo)
// wide is the default
case "wide", "":
printRepoTable(appsv1.Repositories{repo})
default:
errors.CheckError(fmt.Errorf("unknown output format: %s", output))
}
},
}
command.Flags().StringVarP(&output, "output", "o", "wide", "Output format. One of: json|yaml|wide|url")
command.Flags().StringVar(&refresh, "refresh", "", "Force a cache refresh on connection status")
return command
}
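For reference, the new subcommand can be exercised like this (repository URL illustrative):

argocd repo get https://github.com/argoproj/argocd-example-apps.git
argocd repo get https://github.com/argoproj/argocd-example-apps.git -o url --refresh hard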


@@ -7,14 +7,14 @@ import (
"os"
"text/tabwriter"
"github.com/argoproj/gitops-engine/pkg/utils/errors"
"github.com/argoproj/gitops-engine/pkg/utils/io"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
repocredspkg "github.com/argoproj/argo-cd/pkg/apiclient/repocreds"
appsv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/git"
)
@@ -104,7 +104,7 @@ func NewRepoCredsAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comma
}
conn, repoIf := argocdclient.NewClientOrDie(clientOpts).NewRepoCredsClientOrDie()
defer util.Close(conn)
defer io.Close(conn)
// If the user set a username, but didn't supply password via --password,
// then we prompt for it
@@ -142,7 +142,7 @@ func NewRepoCredsRemoveCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
os.Exit(1)
}
conn, repoIf := argocdclient.NewClientOrDie(clientOpts).NewRepoCredsClientOrDie()
defer util.Close(conn)
defer io.Close(conn)
for _, repoURL := range args {
_, err := repoIf.DeleteRepositoryCredentials(context.Background(), &repocredspkg.RepoCredsDeleteRequest{Url: repoURL})
errors.CheckError(err)
@@ -182,7 +182,7 @@ func NewRepoCredsListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comm
Short: "List configured repository credentials",
Run: func(c *cobra.Command, args []string) {
conn, repoIf := argocdclient.NewClientOrDie(clientOpts).NewRepoCredsClientOrDie()
defer util.Close(conn)
defer io.Close(conn)
repos, err := repoIf.ListRepositoryCredentials(context.Background(), &repocredspkg.RepoCredsQuery{})
errors.CheckError(err)
switch output {


@@ -1,10 +1,10 @@
package commands
import (
"github.com/argoproj/gitops-engine/pkg/utils/errors"
"github.com/spf13/cobra"
"k8s.io/client-go/tools/clientcmd"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/config"
@@ -15,9 +15,13 @@ func init() {
cobra.OnInitialize(initConfig)
}
var logLevel string
var (
logFormat string
logLevel string
)
func initConfig() {
cli.SetLogFormat(logFormat)
cli.SetLogLevel(logLevel)
}
@@ -59,6 +63,8 @@ func NewCommand() *cobra.Command {
command.PersistentFlags().StringVar(&clientOpts.CertFile, "server-crt", config.GetFlag("server-crt", ""), "Server certificate file")
command.PersistentFlags().StringVar(&clientOpts.AuthToken, "auth-token", config.GetFlag("auth-token", ""), "Authentication token")
command.PersistentFlags().BoolVar(&clientOpts.GRPCWeb, "grpc-web", config.GetBoolFlag("grpc-web"), "Enables gRPC-web protocol. Useful if Argo CD server is behind proxy which does not support HTTP2.")
command.PersistentFlags().StringVar(&clientOpts.GRPCWebRootPath, "grpc-web-root-path", config.GetFlag("grpc-web-root-path", ""), "Enables gRPC-web protocol. Useful if Argo CD server is behind proxy which does not support HTTP2. Set web root.")
command.PersistentFlags().StringVar(&logFormat, "logformat", config.GetFlag("logformat", "text"), "Set the logging format. One of: text|json")
command.PersistentFlags().StringVar(&logLevel, "loglevel", config.GetFlag("loglevel", "info"), "Set the logging level. One of: debug|info|warn|error")
command.PersistentFlags().StringSliceVarP(&clientOpts.Headers, "header", "H", []string{}, "Sets additional header to all requests made by Argo CD CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers)")
command.PersistentFlags().BoolVar(&clientOpts.PortForward, "port-forward", config.GetBoolFlag("port-forward"), "Connect to a random argocd-server port using port forwarding")
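With the new persistent flags wired in above, invocations such as the following become possible (values illustrative):

argocd --grpc-web --grpc-web-root-path argocd app list
argocd --logformat json --loglevel debug version --client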


@@ -0,0 +1,3 @@
-----BEGIN CERTIFICATE-----
test-cert-data
-----END CERTIFICATE-----


@@ -0,0 +1,3 @@
-----BEGIN RSA PRIVATE KEY-----
test-key-data
-----END RSA PRIVATE KEY-----


@@ -3,16 +3,17 @@ package commands
import (
"context"
"fmt"
"io"
"github.com/golang/protobuf/ptypes/empty"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/argoproj/gitops-engine/pkg/utils/errors"
argoio "github.com/argoproj/gitops-engine/pkg/utils/io"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
"github.com/argoproj/argo-cd/pkg/apiclient/version"
"github.com/argoproj/argo-cd/util"
)
// NewVersionCmd returns a new `version` command to be used as a sub-command to root
@@ -25,7 +26,7 @@ func NewVersionCmd(clientOpts *argocdclient.ClientOptions) *cobra.Command {
versionCmd := cobra.Command{
Use: "version",
Short: fmt.Sprintf("Print version information"),
Short: "Print version information",
Example: ` # Print the full version of client and server to stdout
argocd version
@@ -39,44 +40,39 @@ func NewVersionCmd(clientOpts *argocdclient.ClientOptions) *cobra.Command {
argocd version --short -o yaml
`,
Run: func(cmd *cobra.Command, args []string) {
var (
versionIf version.VersionServiceClient
serverVers *version.VersionMessage
conn io.Closer
err error
)
if !client {
// Get Server version
conn, versionIf = argocdclient.NewClientOrDie(clientOpts).NewVersionClientOrDie()
defer util.Close(conn)
serverVers, err = versionIf.Version(context.Background(), &empty.Empty{})
errors.CheckError(err)
}
cv := common.GetVersion()
switch output {
case "yaml", "json":
clientVers := common.GetVersion()
version := make(map[string]interface{})
if !short {
version["client"] = clientVers
v := make(map[string]interface{})
if short {
v["client"] = map[string]string{cliName: cv.Version}
} else {
version["client"] = map[string]string{cliName: clientVers.Version}
v["client"] = cv
}
if !client {
if !short {
version["server"] = serverVers
sv := getServerVersion(clientOpts)
if short {
v["server"] = map[string]string{"argocd-server": sv.Version}
} else {
version["server"] = map[string]string{"argocd-server": serverVers.Version}
v["server"] = sv
}
}
err := PrintResource(version, output)
err := PrintResource(v, output)
errors.CheckError(err)
case "short":
printVersion(serverVers, client, true)
case "wide", "":
// we use value of short for backward compatibility
printVersion(serverVers, client, short)
case "wide", "short", "":
printClientVersion(&cv, short || (output == "short"))
if !client {
sv := getServerVersion(clientOpts)
printServerVersion(sv, short || (output == "short"))
}
default:
errors.CheckError(fmt.Errorf("unknown output format: %s", output))
log.Fatalf("unknown output format: %s", output)
}
},
}
@@ -86,38 +82,52 @@ func NewVersionCmd(clientOpts *argocdclient.ClientOptions) *cobra.Command {
return &versionCmd
}
func printVersion(serverVers *version.VersionMessage, client bool, short bool) {
version := common.GetVersion()
func getServerVersion(options *argocdclient.ClientOptions) *version.VersionMessage {
conn, versionIf := argocdclient.NewClientOrDie(options).NewVersionClientOrDie()
defer argoio.Close(conn)
v, err := versionIf.Version(context.Background(), &empty.Empty{})
errors.CheckError(err)
return v
}
func printClientVersion(version *common.Version, short bool) {
fmt.Printf("%s: %s\n", cliName, version)
if !short {
fmt.Printf(" BuildDate: %s\n", version.BuildDate)
fmt.Printf(" GitCommit: %s\n", version.GitCommit)
fmt.Printf(" GitTreeState: %s\n", version.GitTreeState)
if version.GitTag != "" {
fmt.Printf(" GitTag: %s\n", version.GitTag)
}
fmt.Printf(" GoVersion: %s\n", version.GoVersion)
fmt.Printf(" Compiler: %s\n", version.Compiler)
fmt.Printf(" Platform: %s\n", version.Platform)
}
if client {
if short {
return
}
fmt.Printf("%s: %s\n", "argocd-server", serverVers.Version)
if !short {
fmt.Printf(" BuildDate: %s\n", serverVers.BuildDate)
fmt.Printf(" GitCommit: %s\n", serverVers.GitCommit)
fmt.Printf(" GitTreeState: %s\n", serverVers.GitTreeState)
if version.GitTag != "" {
fmt.Printf(" GitTag: %s\n", serverVers.GitTag)
}
fmt.Printf(" GoVersion: %s\n", serverVers.GoVersion)
fmt.Printf(" Compiler: %s\n", serverVers.Compiler)
fmt.Printf(" Platform: %s\n", serverVers.Platform)
fmt.Printf(" Ksonnet Version: %s\n", serverVers.KsonnetVersion)
fmt.Printf(" Kustomize Version: %s\n", serverVers.KustomizeVersion)
fmt.Printf(" Helm Version: %s\n", serverVers.HelmVersion)
fmt.Printf(" Kubectl Version: %s\n", serverVers.KubectlVersion)
fmt.Printf(" BuildDate: %s\n", version.BuildDate)
fmt.Printf(" GitCommit: %s\n", version.GitCommit)
fmt.Printf(" GitTreeState: %s\n", version.GitTreeState)
if version.GitTag != "" {
fmt.Printf(" GitTag: %s\n", version.GitTag)
}
fmt.Printf(" GoVersion: %s\n", version.GoVersion)
fmt.Printf(" Compiler: %s\n", version.Compiler)
fmt.Printf(" Platform: %s\n", version.Platform)
}
func printServerVersion(version *version.VersionMessage, short bool) {
fmt.Printf("%s: %s\n", "argocd-server", version.Version)
if short {
return
}
fmt.Printf(" BuildDate: %s\n", version.BuildDate)
fmt.Printf(" GitCommit: %s\n", version.GitCommit)
fmt.Printf(" GitTreeState: %s\n", version.GitTreeState)
if version.GitTag != "" {
fmt.Printf(" GitTag: %s\n", version.GitTag)
}
fmt.Printf(" GoVersion: %s\n", version.GoVersion)
fmt.Printf(" Compiler: %s\n", version.Compiler)
fmt.Printf(" Platform: %s\n", version.Platform)
fmt.Printf(" Ksonnet Version: %s\n", version.KsonnetVersion)
fmt.Printf(" Kustomize Version: %s\n", version.KustomizeVersion)
fmt.Printf(" Helm Version: %s\n", version.HelmVersion)
fmt.Printf(" Kubectl Version: %s\n", version.KubectlVersion)
}
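The net effect of this refactor: the server is contacted at most once, via getServerVersion, and only when --client is not set, while the client and server printing paths become symmetrical. With these code paths, `argocd version --short` is expected to print output of roughly this shape (version strings illustrative):

argocd: v1.5.0+aaaaaaa
argocd-server: v1.5.0+bbbbbbb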


@@ -1,8 +1,9 @@
package main
import (
"github.com/argoproj/gitops-engine/pkg/utils/errors"
commands "github.com/argoproj/argo-cd/cmd/argocd/commands"
"github.com/argoproj/argo-cd/errors"
// load the gcp plugin (required to authenticate against GKE clusters).
_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"


@@ -3,6 +3,7 @@ package common
import (
"os"
"strconv"
"time"
)
// Default service addresses and URLS of Argo CD internal services
@@ -65,6 +66,8 @@ const (
AuthCookieName = "argocd.token"
// RevisionHistoryLimit is the max number of successful sync to keep in history
RevisionHistoryLimit = 10
// ChangePasswordSSOTokenMaxAge is the max token age for password change operation
ChangePasswordSSOTokenMaxAge = time.Minute * 5
)
// Dex related constants
@@ -101,14 +104,7 @@ const (
// AnnotationCompareOptions is a comma-separated list of options for comparison
AnnotationCompareOptions = "argocd.argoproj.io/compare-options"
// AnnotationSyncOptions is a comma-separated list of options for syncing
AnnotationSyncOptions = "argocd.argoproj.io/sync-options"
// AnnotationSyncWave indicates which wave of the sync the resource or hook should be in
AnnotationSyncWave = "argocd.argoproj.io/sync-wave"
// AnnotationKeyHook contains the hook type of a resource
AnnotationKeyHook = "argocd.argoproj.io/hook"
// AnnotationKeyHookDeletePolicy is the policy of deleting a hook
AnnotationKeyHookDeletePolicy = "argocd.argoproj.io/hook-delete-policy"
// AnnotationKeyRefresh is the annotation key which indicates that app needs to be refreshed. Removed by application controller after app is refreshed.
// Might take values 'normal'/'hard'. Value 'hard' means manifest cache and target cluster state cache should be invalidated before refresh.
AnnotationKeyRefresh = "argocd.argoproj.io/refresh"
@@ -141,13 +137,15 @@ const (
EnvK8sClientQPS = "ARGOCD_K8S_CLIENT_QPS"
// EnvK8sClientBurst is the burst value used for the kubernetes client (default: twice the client QPS)
EnvK8sClientBurst = "ARGOCD_K8S_CLIENT_BURST"
// EnvK8sClientMaxIdleConnections is the number of max idle connections in K8s REST client HTTP transport (default: 500)
EnvK8sClientMaxIdleConnections = "ARGOCD_K8S_CLIENT_MAX_IDLE_CONNECTIONS"
)
const (
// MinClientVersion is the minimum client version that can interface with this API server.
// When introducing breaking changes to the API or datastructures, this number should be bumped.
// The value here may be lower than the current value in VERSION
MinClientVersion = "1.3.0"
MinClientVersion = "1.4.0"
// CacheVersion is a objects version cached using util/cache/cache.go.
// Number should be bumped in case of backward incompatible change to make sure cache is invalidated after upgrade.
CacheVersion = "1.0.0"
@@ -158,6 +156,8 @@ var (
K8sClientConfigQPS float32 = 50
// K8sClientConfigBurst controls the burst to be used in K8s REST client configs
K8sClientConfigBurst int = 100
// K8sMaxIdleConnections controls the number of max idle connections in K8s REST client HTTP transport
K8sMaxIdleConnections = 500
)
func init() {
@@ -173,4 +173,10 @@ func init() {
} else {
K8sClientConfigBurst = 2 * int(K8sClientConfigQPS)
}
if envMaxConn := os.Getenv(EnvK8sClientMaxIdleConnections); envMaxConn != "" {
if maxConn, err := strconv.Atoi(envMaxConn); err == nil { // only override the default when the value parses
K8sMaxIdleConnections = maxConn
}
}
}
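The new knob is read once at process start-up; for example (value illustrative), exporting

ARGOCD_K8S_CLIENT_MAX_IDLE_CONNECTIONS=1000

before launching a component raises the idle-connection cap from its default of 500, while a value that fails to parse leaves the default untouched.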


@@ -12,6 +12,12 @@ import (
"sync"
"time"
"github.com/argoproj/gitops-engine/pkg/diff"
"github.com/argoproj/gitops-engine/pkg/health"
synccommon "github.com/argoproj/gitops-engine/pkg/sync/common"
"github.com/argoproj/gitops-engine/pkg/utils/errors"
"github.com/argoproj/gitops-engine/pkg/utils/io"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
log "github.com/sirupsen/logrus"
"golang.org/x/sync/semaphore"
v1 "k8s.io/api/core/v1"
@@ -31,7 +37,6 @@ import (
"github.com/argoproj/argo-cd/common"
statecache "github.com/argoproj/argo-cd/controller/cache"
"github.com/argoproj/argo-cd/controller/metrics"
"github.com/argoproj/argo-cd/errors"
"github.com/argoproj/argo-cd/pkg/apis/application"
appv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned"
@@ -39,12 +44,9 @@ import (
"github.com/argoproj/argo-cd/pkg/client/informers/externalversions/application/v1alpha1"
applisters "github.com/argoproj/argo-cd/pkg/client/listers/application/v1alpha1"
"github.com/argoproj/argo-cd/reposerver/apiclient"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/argo"
appstatecache "github.com/argoproj/argo-cd/util/cache/appstate"
"github.com/argoproj/argo-cd/util/db"
"github.com/argoproj/argo-cd/util/diff"
"github.com/argoproj/argo-cd/util/kube"
settings_util "github.com/argoproj/argo-cd/util/settings"
)
@@ -167,7 +169,11 @@ func NewApplicationController(
return &ctrl, nil
}
func (ctrl *ApplicationController) onKubectlRun(command string) (util.Closer, error) {
func (ctrl *ApplicationController) GetMetricsServer() *metrics.MetricsServer {
return ctrl.metricsServer
}
func (ctrl *ApplicationController) onKubectlRun(command string) (io.Closer, error) {
ctrl.metricsServer.IncKubectlExec(command)
if ctrl.kubectlSemaphore != nil {
if err := ctrl.kubectlSemaphore.Acquire(context.Background(), 1); err != nil {
@@ -175,7 +181,7 @@ func (ctrl *ApplicationController) onKubectlRun(command string) (util.Closer, er
}
ctrl.metricsServer.IncKubectlExecPending(command)
}
return util.NewCloser(func() error {
return io.NewCloser(func() error {
if ctrl.kubectlSemaphore != nil {
ctrl.kubectlSemaphore.Release(1)
ctrl.metricsServer.DecKubectlExecPending(command)
@@ -359,7 +365,11 @@ func (ctrl *ApplicationController) managedResources(comparisonResult *comparison
if err != nil {
return nil, err
}
resDiffPtr, err := diff.Diff(target, live, comparisonResult.diffNormalizer)
compareOptions, err := ctrl.settingsMgr.GetResourceCompareOptions()
if err != nil {
return nil, err
}
resDiffPtr, err := diff.Diff(target, live, comparisonResult.diffNormalizer, compareOptions)
if err != nil {
return nil, err
}
@@ -385,11 +395,6 @@ func (ctrl *ApplicationController) managedResources(comparisonResult *comparison
} else {
item.TargetState = "null"
}
jsonDiff, err := resDiff.JSONFormat()
if err != nil {
return nil, err
}
item.Diff = jsonDiff
item.PredictedLiveState = string(resDiff.PredictedLive)
item.NormalizedLiveState = string(resDiff.NormalizedLive)
@@ -409,6 +414,8 @@ func (ctrl *ApplicationController) Run(ctx context.Context, statusProcessors int
go ctrl.appInformer.Run(ctx.Done())
go ctrl.projInformer.Run(ctx.Done())
errors.CheckError(ctrl.stateCache.Init())
if !cache.WaitForCacheSync(ctx.Done(), ctrl.appInformer.HasSynced, ctrl.projInformer.HasSynced) {
log.Error("Timed out waiting for caches to sync")
return
@@ -539,7 +546,7 @@ func (ctrl *ApplicationController) processAppComparisonTypeQueueItem() (processN
return
}
// shouldbeDeleted returns whether a given resource obj should be deleted on cascade delete of application app
// shouldBeDeleted returns whether a given resource obj should be deleted on cascade delete of application app
func (ctrl *ApplicationController) shouldBeDeleted(app *appv1.Application, obj *unstructured.Unstructured) bool {
return !kube.IsCRD(obj) && !isSelfReferencedApp(app, kube.GetObjectRef(obj))
}
@@ -592,7 +599,7 @@ func (ctrl *ApplicationController) finalizeApplicationDeletion(app *appv1.Applic
}
config := metrics.AddMetricsTransportWrapper(ctrl.metricsServer, app, cluster.RESTConfig())
err = util.RunAllAsync(len(objs), func(i int) error {
err = kube.RunAllAsync(len(objs), func(i int) error {
obj := objs[i]
return ctrl.kubectl.DeleteResource(config, obj.GroupVersionKind(), obj.GetName(), obj.GetNamespace(), false)
})
@@ -662,7 +669,7 @@ func (ctrl *ApplicationController) processRequestedAppOperation(app *appv1.Appli
defer func() {
if r := recover(); r != nil {
logCtx.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
state.Phase = appv1.OperationError
state.Phase = synccommon.OperationError
if rerr, ok := r.(error); ok {
state.Message = rerr.Error()
} else {
@@ -689,20 +696,20 @@ func (ctrl *ApplicationController) processRequestedAppOperation(app *appv1.Appli
state = app.Status.OperationState.DeepCopy()
logCtx.Infof("Resuming in-progress operation. phase: %s, message: %s", state.Phase, state.Message)
} else {
state = &appv1.OperationState{Phase: appv1.OperationRunning, Operation: *app.Operation, StartedAt: metav1.Now()}
state = &appv1.OperationState{Phase: synccommon.OperationRunning, Operation: *app.Operation, StartedAt: metav1.Now()}
ctrl.setOperationState(app, state)
logCtx.Infof("Initialized new operation: %v", *app.Operation)
}
ctrl.appStateManager.SyncAppState(app, state)
if state.Phase == appv1.OperationRunning {
if state.Phase == synccommon.OperationRunning {
// It's possible for an app to be terminated while we were operating on it. We do not want
// to clobber the Terminated state with Running. Get the latest app state to check for this.
freshApp, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(ctrl.namespace).Get(app.ObjectMeta.Name, metav1.GetOptions{})
if err == nil {
if freshApp.Status.OperationState != nil && freshApp.Status.OperationState.Phase == appv1.OperationTerminating {
state.Phase = appv1.OperationTerminating
if freshApp.Status.OperationState != nil && freshApp.Status.OperationState.Phase == synccommon.OperationTerminating {
state.Phase = synccommon.OperationTerminating
state.Message = "operation is terminating"
// after this, we will get requeued to the workqueue, but next time the
// SyncAppState will operate in a Terminating phase, allowing the worker to perform
@@ -725,7 +732,7 @@ func (ctrl *ApplicationController) processRequestedAppOperation(app *appv1.Appli
}
func (ctrl *ApplicationController) setOperationState(app *appv1.Application, state *appv1.OperationState) {
util.RetryUntilSucceed(func() error {
kube.RetryUntilSucceed(func() error {
if state.Phase == "" {
// expose any bugs where we neglect to set phase
panic("no phase was set")
@@ -869,7 +876,7 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
project, hasErrors := ctrl.refreshAppConditions(app)
if hasErrors {
app.Status.Sync.Status = appv1.SyncStatusCodeUnknown
app.Status.Health.Status = appv1.HealthStatusUnknown
app.Status.Health.Status = health.HealthStatusUnknown
ctrl.persistAppStatus(origApp, &app.Status)
return
}
@@ -958,7 +965,7 @@ func (ctrl *ApplicationController) needRefreshAppStatus(app *appv1.Application,
reason = "spec.destination differs"
} else if requested, level := ctrl.isRefreshRequested(app.Name); requested {
compareWith = level
reason = fmt.Sprintf("controller refresh requested")
reason = "controller refresh requested"
}
if reason != "" {
@@ -1151,7 +1158,7 @@ func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *
// alreadyAttemptedSync returns whether or not the most recent sync was performed against the
// commitSHA and with the same app source config which are currently set in the app
func alreadyAttemptedSync(app *appv1.Application, commitSHA string) (bool, appv1.OperationPhase) {
func alreadyAttemptedSync(app *appv1.Application, commitSHA string) (bool, synccommon.OperationPhase) {
if app.Status.OperationState == nil || app.Status.OperationState.Operation.Sync == nil || app.Status.OperationState.SyncResult == nil {
return false, ""
}
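The phase constants used throughout this file now come from gitops-engine's sync/common package instead of the application API types. A minimal sketch of the substitution, assuming only the identifiers visible in this diff:

package main

import (
	"fmt"

	synccommon "github.com/argoproj/gitops-engine/pkg/sync/common"
)

func main() {
	// Replaces the former appv1.OperationRunning / appv1.OperationFailed.
	// OperationPhase is assumed to be string-backed, so it prints directly.
	var phase synccommon.OperationPhase = synccommon.OperationRunning
	fmt.Println(phase, synccommon.OperationFailed)
}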


@@ -6,6 +6,10 @@ import (
"testing"
"time"
"github.com/argoproj/gitops-engine/pkg/cache/mocks"
synccommon "github.com/argoproj/gitops-engine/pkg/sync/common"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
"github.com/argoproj/gitops-engine/pkg/utils/kube/kubetest"
"github.com/ghodss/yaml"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
@@ -25,12 +29,9 @@ import (
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned/fake"
"github.com/argoproj/argo-cd/reposerver/apiclient"
mockrepoclient "github.com/argoproj/argo-cd/reposerver/apiclient/mocks"
mockreposerver "github.com/argoproj/argo-cd/reposerver/mocks"
"github.com/argoproj/argo-cd/test"
cacheutil "github.com/argoproj/argo-cd/util/cache"
appstatecache "github.com/argoproj/argo-cd/util/cache/appstate"
"github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/kube/kubetest"
"github.com/argoproj/argo-cd/util/settings"
)
@@ -57,7 +58,7 @@ func newFakeController(data *fakeData) *ApplicationController {
// Mock out call to GenerateManifest
mockRepoClient := mockrepoclient.RepoServerServiceClient{}
mockRepoClient.On("GenerateManifest", mock.Anything, mock.Anything).Return(data.manifestResponse, nil)
mockRepoClientset := mockreposerver.Clientset{}
mockRepoClientset := mockrepoclient.Clientset{}
mockRepoClientset.On("NewRepoServerClient").Return(&fakeCloser{}, &mockRepoClient, nil)
secret := corev1.Secret{
@@ -106,6 +107,9 @@ func newFakeController(data *fakeData) *ApplicationController {
defer cancelProj()
cancelApp := test.StartInformer(ctrl.appInformer)
defer cancelApp()
clusterCacheMock := mocks.ClusterCache{}
clusterCacheMock.On("IsNamespaced", mock.Anything).Return(true, nil)
mockStateCache := mockstatecache.LiveStateCache{}
ctrl.appStateManager.(*appStateManager).liveStateCache = &mockStateCache
ctrl.stateCache = &mockStateCache
@@ -117,6 +121,7 @@ func newFakeController(data *fakeData) *ApplicationController {
response[k] = v.ResourceNode
}
mockStateCache.On("GetNamespaceTopLevelResources", mock.Anything, mock.Anything).Return(response, nil)
mockStateCache.On("GetClusterCache", mock.Anything).Return(&clusterCacheMock, nil)
mockStateCache.On("IterateHierarchy", mock.Anything, mock.Anything, mock.Anything).Run(func(args mock.Arguments) {
key := args[1].(kube.ResourceKey)
action := args[2].(func(child argoappv1.ResourceNode, appName string))
@@ -310,7 +315,7 @@ func TestSkipAutoSync(t *testing.T) {
Operation: argoappv1.Operation{
Sync: &argoappv1.SyncOperation{},
},
Phase: argoappv1.OperationFailed,
Phase: synccommon.OperationFailed,
SyncResult: &argoappv1.SyncOperationResult{
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
Source: *app.Spec.Source.DeepCopy(),
@@ -367,7 +372,7 @@ func TestAutoSyncIndicateError(t *testing.T) {
Source: app.Spec.Source.DeepCopy(),
},
},
Phase: argoappv1.OperationFailed,
Phase: synccommon.OperationFailed,
SyncResult: &argoappv1.SyncOperationResult{
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
Source: *app.Spec.Source.DeepCopy(),
@@ -411,7 +416,7 @@ func TestAutoSyncParameterOverrides(t *testing.T) {
},
},
},
Phase: argoappv1.OperationFailed,
Phase: synccommon.OperationFailed,
SyncResult: &argoappv1.SyncOperationResult{
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
},
@@ -465,7 +470,7 @@ func TestFinalizeAppDeletion(t *testing.T) {
assert.True(t, patched)
}
// Ensure any stray resources irregulary labeled with instance label of app are not deleted upon deleting,
// Ensure any stray resources irregularly labeled with instance label of app are not deleted upon deleting,
// when app project restriction is in place
{
defaultProj := argoappv1.AppProject{
@@ -666,7 +671,7 @@ func TestSetOperationStateOnDeletedApp(t *testing.T) {
patched = true
return true, nil, apierr.NewNotFound(schema.GroupResource{}, "my-app")
})
ctrl.setOperationState(newFakeApp(), &argoappv1.OperationState{Phase: argoappv1.OperationSucceeded})
ctrl.setOperationState(newFakeApp(), &argoappv1.OperationState{Phase: synccommon.OperationSucceeded})
assert.True(t, patched)
}


@@ -5,6 +5,9 @@ import (
"reflect"
"sync"
clustercache "github.com/argoproj/gitops-engine/pkg/cache"
"github.com/argoproj/gitops-engine/pkg/health"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
log "github.com/sirupsen/logrus"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -15,23 +18,18 @@ import (
"github.com/argoproj/argo-cd/controller/metrics"
appv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/db"
"github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/lua"
"github.com/argoproj/argo-cd/util/settings"
)
type cacheSettings struct {
ResourceOverrides map[string]appv1.ResourceOverride
AppInstanceLabelKey string
ResourcesFilter *settings.ResourcesFilter
}
type LiveStateCache interface {
// Returns k8s server version
GetVersionsInfo(serverURL string) (string, []metav1.APIGroup, error)
// Returns true if the given group/kind is a namespaced resource
IsNamespaced(server string, gk schema.GroupKind) (bool, error)
// Returns synced cluster cache
GetClusterCache(server string) (clustercache.ClusterCache, error)
// Executes the given callback against the resource specified by the key and all its children
IterateHierarchy(server string, key kube.ResourceKey, action func(child appv1.ResourceNode, appName string)) error
// Returns the state of live nodes which correspond to the target nodes of the specified application.
@@ -40,23 +38,21 @@ type LiveStateCache interface {
GetNamespaceTopLevelResources(server string, namespace string) (map[kube.ResourceKey]appv1.ResourceNode, error)
// Starts watching resources of each controlled cluster.
Run(ctx context.Context) error
// Invalidate invalidates the entire cluster state cache
Invalidate()
// Returns information about monitored clusters
GetClustersInfo() []metrics.ClusterInfo
GetClustersInfo() []clustercache.ClusterInfo
// Init must be executed before cache can be used
Init() error
}
type ObjectUpdatedHandler = func(managedByApp map[string]bool, ref v1.ObjectReference)
func GetTargetObjKey(a *appv1.Application, un *unstructured.Unstructured, isNamespaced bool) kube.ResourceKey {
key := kube.GetResourceKey(un)
if !isNamespaced {
key.Namespace = ""
} else if isNamespaced && key.Namespace == "" {
key.Namespace = a.Spec.Destination.Namespace
}
return key
type ResourceInfo struct {
Info []appv1.InfoItem
AppName string
// networkingInfo are available only for known types involved into networking: Ingress, Service, Pod
NetworkingInfo *appv1.ResourceNetworkingInfo
Images []string
Health *health.HealthStatus
}
func NewLiveStateCache(
@@ -68,29 +64,32 @@ func NewLiveStateCache(
onObjectUpdated ObjectUpdatedHandler) LiveStateCache {
return &liveStateCache{
appInformer: appInformer,
db: db,
clusters: make(map[string]*clusterInfo),
lock: &sync.RWMutex{},
onObjectUpdated: onObjectUpdated,
kubectl: kubectl,
settingsMgr: settingsMgr,
metricsServer: metricsServer,
cacheSettingsLock: &sync.Mutex{},
appInformer: appInformer,
db: db,
clusters: make(map[string]clustercache.ClusterCache),
onObjectUpdated: onObjectUpdated,
kubectl: kubectl,
settingsMgr: settingsMgr,
metricsServer: metricsServer,
}
}
type cacheSettings struct {
clusterSettings clustercache.Settings
appInstanceLabelKey string
}
type liveStateCache struct {
db db.ArgoDB
clusters map[string]*clusterInfo
lock *sync.RWMutex
appInformer cache.SharedIndexInformer
onObjectUpdated ObjectUpdatedHandler
kubectl kube.Kubectl
settingsMgr *settings.SettingsManager
metricsServer *metrics.MetricsServer
cacheSettingsLock *sync.Mutex
cacheSettings *cacheSettings
db db.ArgoDB
appInformer cache.SharedIndexInformer
onObjectUpdated ObjectUpdatedHandler
kubectl kube.Kubectl
settingsMgr *settings.SettingsManager
metricsServer *metrics.MetricsServer
clusters map[string]clustercache.ClusterCache
cacheSettings cacheSettings
lock sync.RWMutex
}
func (c *liveStateCache) loadCacheSettings() (*cacheSettings, error) {
@@ -106,72 +105,198 @@ func (c *liveStateCache) loadCacheSettings() (*cacheSettings, error) {
if err != nil {
return nil, err
}
return &cacheSettings{AppInstanceLabelKey: appInstanceLabelKey, ResourceOverrides: resourceOverrides, ResourcesFilter: resourcesFilter}, nil
clusterSettings := clustercache.Settings{
ResourceHealthOverride: lua.ResourceHealthOverrides(resourceOverrides),
ResourcesFilter: resourcesFilter,
}
return &cacheSettings{clusterSettings, appInstanceLabelKey}, nil
}
func (c *liveStateCache) getCluster(server string) (*clusterInfo, error) {
func asResourceNode(r *clustercache.Resource) appv1.ResourceNode {
gv, err := schema.ParseGroupVersion(r.Ref.APIVersion)
if err != nil {
gv = schema.GroupVersion{}
}
parentRefs := make([]appv1.ResourceRef, len(r.OwnerRefs))
for i, ownerRef := range r.OwnerRefs {
ownerGvk := schema.FromAPIVersionAndKind(ownerRef.APIVersion, ownerRef.Kind)
ownerKey := kube.NewResourceKey(ownerGvk.Group, ownerRef.Kind, r.Ref.Namespace, ownerRef.Name)
// index with i so every owner reference is recorded rather than overwriting slot 0
parentRefs[i] = appv1.ResourceRef{Name: ownerRef.Name, Kind: ownerKey.Kind, Namespace: r.Ref.Namespace, Group: ownerKey.Group, UID: string(ownerRef.UID)}
}
var resHealth *appv1.HealthStatus
resourceInfo := resInfo(r)
if resourceInfo.Health != nil {
resHealth = &appv1.HealthStatus{Status: resourceInfo.Health.Status, Message: resourceInfo.Health.Message}
}
return appv1.ResourceNode{
ResourceRef: appv1.ResourceRef{
UID: string(r.Ref.UID),
Name: r.Ref.Name,
Group: gv.Group,
Version: gv.Version,
Kind: r.Ref.Kind,
Namespace: r.Ref.Namespace,
},
ParentRefs: parentRefs,
Info: resourceInfo.Info,
ResourceVersion: r.ResourceVersion,
NetworkingInfo: resourceInfo.NetworkingInfo,
Images: resourceInfo.Images,
Health: resHealth,
}
}
func resInfo(r *clustercache.Resource) *ResourceInfo {
info, ok := r.Info.(*ResourceInfo)
if !ok || info == nil {
info = &ResourceInfo{}
}
return info
}
func isRootAppNode(r *clustercache.Resource) bool {
return resInfo(r).AppName != "" && len(r.OwnerRefs) == 0
}
func getApp(r *clustercache.Resource, ns map[kube.ResourceKey]*clustercache.Resource) string {
return getAppRecursive(r, ns, map[kube.ResourceKey]bool{})
}
func ownerRefGV(ownerRef metav1.OwnerReference) schema.GroupVersion {
gv, err := schema.ParseGroupVersion(ownerRef.APIVersion)
if err != nil {
gv = schema.GroupVersion{}
}
return gv
}
func getAppRecursive(r *clustercache.Resource, ns map[kube.ResourceKey]*clustercache.Resource, visited map[kube.ResourceKey]bool) string {
if !visited[r.ResourceKey()] {
visited[r.ResourceKey()] = true
} else {
log.Warnf("Circular dependency detected: %v.", visited)
return resInfo(r).AppName
}
if resInfo(r).AppName != "" {
return resInfo(r).AppName
}
for _, ownerRef := range r.OwnerRefs {
gv := ownerRefGV(ownerRef)
if parent, ok := ns[kube.NewResourceKey(gv.Group, ownerRef.Kind, r.Ref.Namespace, ownerRef.Name)]; ok {
app := getAppRecursive(parent, ns, visited)
if app != "" {
return app
}
}
}
return ""
}
var (
ignoredRefreshResources = map[string]bool{
"/" + kube.EndpointsKind: true,
}
)
// skipAppRequeuing checks if the object is an API type which we want to skip requeuing against.
// We ignore API types which have a high churn rate, and/or whose updates are irrelevant to the app
func skipAppRequeuing(key kube.ResourceKey) bool {
return ignoredRefreshResources[key.Group+"/"+key.Kind]
}
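Given the map above, only core-group Endpoints updates are dropped; a quick illustration using the constructors visible in this diff (calls assume this package's scope):

// true: "" + "/" + "Endpoints" hits the ignored map
skipAppRequeuing(kube.NewResourceKey("", kube.EndpointsKind, "default", "my-svc"))
// false: any other group/kind still requeues its app
skipAppRequeuing(kube.NewResourceKey("apps", "Deployment", "default", "my-app"))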
func (c *liveStateCache) getCluster(server string) (clustercache.ClusterCache, error) {
c.lock.RLock()
info, ok := c.clusters[server]
clusterCache, ok := c.clusters[server]
cacheSettings := c.cacheSettings
c.lock.RUnlock()
if ok {
return info, nil
return clusterCache, nil
}
c.lock.Lock()
defer c.lock.Unlock()
info, ok = c.clusters[server]
clusterCache, ok = c.clusters[server]
if ok {
return info, nil
return clusterCache, nil
}
logCtx := log.WithField("server", server)
logCtx.Info("initializing cluster")
cluster, err := c.db.GetCluster(context.Background(), server)
if err != nil {
return nil, err
}
info = &clusterInfo{
apisMeta: make(map[schema.GroupKind]*apiMeta),
lock: &sync.RWMutex{},
nodes: make(map[kube.ResourceKey]*node),
nsIndex: make(map[string]map[kube.ResourceKey]*node),
onObjectUpdated: c.onObjectUpdated,
kubectl: c.kubectl,
cluster: cluster,
syncTime: nil,
log: logCtx,
cacheSettingsSrc: c.getCacheSettings,
onEventReceived: func(event watch.EventType, un *unstructured.Unstructured) {
gvk := un.GroupVersionKind()
c.metricsServer.IncClusterEventsCount(cluster.Server, gvk.Group, gvk.Kind)
},
metricsServer: c.metricsServer,
}
c.clusters[cluster.Server] = info
return info, nil
clusterCache = clustercache.NewClusterCache(cluster.RESTConfig(),
clustercache.SetSettings(cacheSettings.clusterSettings),
clustercache.SetNamespaces(cluster.Namespaces),
clustercache.SetPopulateResourceInfoHandler(func(un *unstructured.Unstructured, isRoot bool) (interface{}, bool) {
res := &ResourceInfo{}
populateNodeInfo(un, res)
res.Health, _ = health.GetResourceHealth(un, cacheSettings.clusterSettings.ResourceHealthOverride)
appName := kube.GetAppInstanceLabel(un, cacheSettings.appInstanceLabelKey)
if isRoot && appName != "" {
res.AppName = appName
}
// edge case. we do not label CRDs, so they miss the tracking label we inject. But we still
// want the full resource to be available in our cache (to diff), so we store all CRDs
return res, res.AppName != "" || un.GroupVersionKind().Kind == kube.CustomResourceDefinitionKind
}),
)
_ = clusterCache.OnResourceUpdated(func(newRes *clustercache.Resource, oldRes *clustercache.Resource, namespaceResources map[kube.ResourceKey]*clustercache.Resource) {
toNotify := make(map[string]bool)
var ref v1.ObjectReference
if newRes != nil {
ref = newRes.Ref
} else {
ref = oldRes.Ref
}
for _, r := range []*clustercache.Resource{newRes, oldRes} {
if r == nil {
continue
}
app := getApp(r, namespaceResources)
if app == "" || skipAppRequeuing(r.ResourceKey()) {
continue
}
toNotify[app] = isRootAppNode(r) || toNotify[app]
}
c.onObjectUpdated(toNotify, ref)
})
_ = clusterCache.OnEvent(func(event watch.EventType, un *unstructured.Unstructured) {
gvk := un.GroupVersionKind()
c.metricsServer.IncClusterEventsCount(cluster.Server, gvk.Group, gvk.Kind)
})
c.clusters[cluster.Server] = clusterCache
return clusterCache, nil
}
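For orientation, a minimal sketch of the gitops-engine cache lifecycle that getCluster and getSyncedCluster drive; option and method names are taken from this diff, everything else (function name, logging, namespace semantics) is an assumption:

package main

import (
	"log"

	clustercache "github.com/argoproj/gitops-engine/pkg/cache"
	"k8s.io/client-go/rest"
)

// newSyncedCache builds a cache for one cluster and blocks until the initial
// list/watch has completed, mirroring getSyncedCluster above.
func newSyncedCache(cfg *rest.Config) (clustercache.ClusterCache, error) {
	cc := clustercache.NewClusterCache(cfg,
		clustercache.SetNamespaces(nil), // nil/empty is assumed to mean cluster-wide
	)
	if err := cc.EnsureSynced(); err != nil {
		return nil, err
	}
	log.Printf("cluster synced: %+v", cc.GetClusterInfo())
	return cc, nil
}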
func (c *liveStateCache) getSyncedCluster(server string) (*clusterInfo, error) {
info, err := c.getCluster(server)
func (c *liveStateCache) getSyncedCluster(server string) (clustercache.ClusterCache, error) {
clusterCache, err := c.getCluster(server)
if err != nil {
return nil, err
}
err = info.ensureSynced()
err = clusterCache.EnsureSynced()
if err != nil {
return nil, err
}
return info, nil
return clusterCache, nil
}
func (c *liveStateCache) Invalidate() {
func (c *liveStateCache) invalidate(cacheSettings cacheSettings) {
log.Info("invalidating live state cache")
c.lock.RLock()
defer c.lock.RLock()
c.lock.Lock()
defer c.lock.Unlock()
c.cacheSettings = cacheSettings
for _, clust := range c.clusters {
clust.invalidate()
clust.Invalidate(clustercache.SetSettings(cacheSettings.clusterSettings))
}
log.Info("live state cache invalidated")
}
@@ -181,7 +306,7 @@ func (c *liveStateCache) IsNamespaced(server string, gk schema.GroupKind) (bool,
if err != nil {
return false, err
}
return clusterInfo.isNamespaced(gk), nil
return clusterInfo.IsNamespaced(gk)
}
func (c *liveStateCache) IterateHierarchy(server string, key kube.ResourceKey, action func(child appv1.ResourceNode, appName string)) error {
@@ -189,7 +314,9 @@ func (c *liveStateCache) IterateHierarchy(server string, key kube.ResourceKey, a
if err != nil {
return err
}
clusterInfo.iterateHierarchy(key, action)
clusterInfo.IterateHierarchy(key, func(resource *clustercache.Resource, namespaceResources map[kube.ResourceKey]*clustercache.Resource) {
action(asResourceNode(resource), getApp(resource, namespaceResources))
})
return nil
}
@@ -198,7 +325,12 @@ func (c *liveStateCache) GetNamespaceTopLevelResources(server string, namespace
if err != nil {
return nil, err
}
return clusterInfo.getNamespaceTopLevelResources(namespace), nil
resources := clusterInfo.GetNamespaceTopLevelResources(namespace)
res := make(map[kube.ResourceKey]appv1.ResourceNode)
for k, r := range resources {
res[k] = asResourceNode(r)
}
return res, nil
}
func (c *liveStateCache) GetManagedLiveObjs(a *appv1.Application, targetObjs []*unstructured.Unstructured) (map[kube.ResourceKey]*unstructured.Unstructured, error) {
@@ -206,7 +338,9 @@ func (c *liveStateCache) GetManagedLiveObjs(a *appv1.Application, targetObjs []*
if err != nil {
return nil, err
}
return clusterInfo.getManagedLiveObjs(a, targetObjs)
return clusterInfo.GetManagedLiveObjs(targetObjs, func(r *clustercache.Resource) bool {
return resInfo(r).AppName == a.Name
})
}
func (c *liveStateCache) GetVersionsInfo(serverURL string) (string, []metav1.APIGroup, error) {
@@ -214,7 +348,7 @@ func (c *liveStateCache) GetVersionsInfo(serverURL string) (string, []metav1.API
if err != nil {
return "", nil, err
}
return clusterInfo.serverVersion, clusterInfo.apiGroups, nil
return clusterInfo.GetServerVersion(), clusterInfo.GetAPIGroups(), nil
}
func isClusterHasApps(apps []interface{}, cluster *appv1.Cluster) bool {
@@ -226,10 +360,6 @@ func isClusterHasApps(apps []interface{}, cluster *appv1.Cluster) bool {
return false
}
func (c *liveStateCache) getCacheSettings() *cacheSettings {
return c.cacheSettings
}
func (c *liveStateCache) watchSettings(ctx context.Context) {
updateCh := make(chan *settings.ArgoCDSettings, 1)
c.settingsMgr.Subscribe(updateCh)
@@ -244,15 +374,15 @@ func (c *liveStateCache) watchSettings(ctx context.Context) {
continue
}
c.cacheSettingsLock.Lock()
c.lock.Lock()
needInvalidate := false
if !reflect.DeepEqual(c.cacheSettings, nextCacheSettings) {
c.cacheSettings = nextCacheSettings
if !reflect.DeepEqual(c.cacheSettings, *nextCacheSettings) {
c.cacheSettings = *nextCacheSettings
needInvalidate = true
}
c.cacheSettingsLock.Unlock()
c.lock.Unlock()
if needInvalidate {
c.Invalidate()
c.invalidate(*nextCacheSettings)
}
case <-ctx.Done():
done = true
@@ -263,28 +393,30 @@ func (c *liveStateCache) watchSettings(ctx context.Context) {
close(updateCh)
}
// Run watches for resource changes annotated with application label on all registered clusters and schedule corresponding app refresh.
func (c *liveStateCache) Run(ctx context.Context) error {
func (c *liveStateCache) Init() error {
cacheSettings, err := c.loadCacheSettings()
if err != nil {
return err
}
c.cacheSettings = cacheSettings
c.cacheSettings = *cacheSettings
return nil
}
// Run watches for resource changes annotated with application label on all registered clusters and schedule corresponding app refresh.
func (c *liveStateCache) Run(ctx context.Context) error {
go c.watchSettings(ctx)
util.RetryUntilSucceed(func() error {
kube.RetryUntilSucceed(func() error {
clusterEventCallback := func(event *db.ClusterEvent) {
c.lock.Lock()
cluster, ok := c.clusters[event.Cluster.Server]
if ok {
defer c.lock.Unlock()
if event.Type == watch.Deleted {
cluster.invalidate()
cluster.Invalidate()
delete(c.clusters, event.Cluster.Server)
} else if event.Type == watch.Modified {
cluster.cluster = event.Cluster
cluster.invalidate()
cluster.Invalidate(clustercache.SetConfig(event.Cluster.RESTConfig()))
}
} else {
c.lock.Unlock()
@@ -299,18 +431,23 @@ func (c *liveStateCache) Run(ctx context.Context) error {
return c.db.WatchClusters(ctx, clusterEventCallback)
}, "watch clusters", ctx, clusterRetryTimeout)
}, "watch clusters", ctx, clustercache.ClusterRetryTimeout)
<-ctx.Done()
c.invalidate(c.cacheSettings)
return nil
}
func (c *liveStateCache) GetClustersInfo() []metrics.ClusterInfo {
func (c *liveStateCache) GetClustersInfo() []clustercache.ClusterInfo {
c.lock.RLock()
defer c.lock.RUnlock()
res := make([]metrics.ClusterInfo, 0)
res := make([]clustercache.ClusterInfo, 0)
for _, info := range c.clusters {
res = append(res, info.getClusterInfo())
res = append(res, info.GetClusterInfo())
}
return res
}
func (c *liveStateCache) GetClusterCache(server string) (clustercache.ClusterCache, error) {
return c.getSyncedCluster(server)
}


@@ -1,26 +0,0 @@
package cache
import (
"sync"
"testing"
"time"
"github.com/stretchr/testify/assert"
)
func TestGetServerVersion(t *testing.T) {
now := time.Now()
cache := &liveStateCache{
lock: &sync.RWMutex{},
clusters: map[string]*clusterInfo{
"http://localhost": {
syncTime: &now,
lock: &sync.RWMutex{},
serverVersion: "123",
},
}}
version, _, err := cache.GetVersionsInfo("http://localhost")
assert.NoError(t, err)
assert.Equal(t, "123", version)
}


@@ -1,631 +0,0 @@
package cache
import (
"context"
"fmt"
"runtime/debug"
"sort"
"strings"
"sync"
"time"
log "github.com/sirupsen/logrus"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/watch"
"k8s.io/client-go/dynamic"
"github.com/argoproj/argo-cd/controller/metrics"
appv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/health"
"github.com/argoproj/argo-cd/util/kube"
)
const (
clusterSyncTimeout = 24 * time.Hour
clusterRetryTimeout = 10 * time.Second
watchResourcesRetryTimeout = 1 * time.Second
)
type apiMeta struct {
namespaced bool
resourceVersion string
watchCancel context.CancelFunc
}
type clusterInfo struct {
syncTime *time.Time
syncError error
apisMeta map[schema.GroupKind]*apiMeta
serverVersion string
apiGroups []metav1.APIGroup
// namespacedResources is a simple map which indicates a groupKind is namespaced
namespacedResources map[schema.GroupKind]bool
// lock is a rw lock which protects the fields of clusterInfo
lock *sync.RWMutex
nodes map[kube.ResourceKey]*node
nsIndex map[string]map[kube.ResourceKey]*node
onObjectUpdated ObjectUpdatedHandler
onEventReceived func(event watch.EventType, un *unstructured.Unstructured)
kubectl kube.Kubectl
cluster *appv1.Cluster
log *log.Entry
cacheSettingsSrc func() *cacheSettings
metricsServer *metrics.MetricsServer
}
func (c *clusterInfo) replaceResourceCache(gk schema.GroupKind, resourceVersion string, objs []unstructured.Unstructured, ns string) {
info, ok := c.apisMeta[gk]
if ok {
objByKey := make(map[kube.ResourceKey]*unstructured.Unstructured)
for i := range objs {
objByKey[kube.GetResourceKey(&objs[i])] = &objs[i]
}
// update existing nodes
for i := range objs {
obj := &objs[i]
key := kube.GetResourceKey(&objs[i])
existingNode, exists := c.nodes[key]
c.onNodeUpdated(exists, existingNode, obj, key)
}
// remove existing nodes that no longer exist
for key, existingNode := range c.nodes {
if key.Kind != gk.Kind || key.Group != gk.Group || ns != "" && key.Namespace != ns {
continue
}
if _, ok := objByKey[key]; !ok {
c.onNodeRemoved(key, existingNode)
}
}
info.resourceVersion = resourceVersion
}
}
func isServiceAccountTokenSecret(un *unstructured.Unstructured) (bool, metav1.OwnerReference) {
ref := metav1.OwnerReference{
APIVersion: "v1",
Kind: kube.ServiceAccountKind,
}
if un.GetKind() != kube.SecretKind || un.GroupVersionKind().Group != "" {
return false, ref
}
if typeVal, ok, err := unstructured.NestedString(un.Object, "type"); !ok || err != nil || typeVal != "kubernetes.io/service-account-token" {
return false, ref
}
annotations := un.GetAnnotations()
if annotations == nil {
return false, ref
}
id, okId := annotations["kubernetes.io/service-account.uid"]
name, okName := annotations["kubernetes.io/service-account.name"]
if okId && okName {
ref.Name = name
ref.UID = types.UID(id)
}
return ref.Name != "" && ref.UID != "", ref
}
func (c *clusterInfo) createObjInfo(un *unstructured.Unstructured, appInstanceLabel string) *node {
ownerRefs := un.GetOwnerReferences()
gvk := un.GroupVersionKind()
// Special case for endpoint. Remove after https://github.com/kubernetes/kubernetes/issues/28483 is fixed
if gvk.Group == "" && gvk.Kind == kube.EndpointsKind && len(un.GetOwnerReferences()) == 0 {
ownerRefs = append(ownerRefs, metav1.OwnerReference{
Name: un.GetName(),
Kind: kube.ServiceKind,
APIVersion: "v1",
})
}
// edge case. Consider auto-created service account tokens as children of service account objects
if yes, ref := isServiceAccountTokenSecret(un); yes {
ownerRefs = append(ownerRefs, ref)
}
nodeInfo := &node{
resourceVersion: un.GetResourceVersion(),
ref: kube.GetObjectRef(un),
ownerRefs: ownerRefs,
}
populateNodeInfo(un, nodeInfo)
appName := kube.GetAppInstanceLabel(un, appInstanceLabel)
if len(ownerRefs) == 0 && appName != "" {
nodeInfo.appName = appName
nodeInfo.resource = un
} else {
// edge case: we do not label CRDs, so they miss the tracking label we inject. But we still
// want the full resource to be available in our cache (to diff), so we store all CRDs
switch gvk.Kind {
case kube.CustomResourceDefinitionKind:
nodeInfo.resource = un
}
}
nodeInfo.health, _ = health.GetResourceHealth(un, c.cacheSettingsSrc().ResourceOverrides)
return nodeInfo
}
func (c *clusterInfo) setNode(n *node) {
key := n.resourceKey()
c.nodes[key] = n
ns, ok := c.nsIndex[key.Namespace]
if !ok {
ns = make(map[kube.ResourceKey]*node)
c.nsIndex[key.Namespace] = ns
}
ns[key] = n
}
func (c *clusterInfo) removeNode(key kube.ResourceKey) {
delete(c.nodes, key)
if ns, ok := c.nsIndex[key.Namespace]; ok {
delete(ns, key)
if len(ns) == 0 {
delete(c.nsIndex, key.Namespace)
}
}
}
func (c *clusterInfo) invalidate() {
c.lock.Lock()
defer c.lock.Unlock()
c.syncTime = nil
for i := range c.apisMeta {
c.apisMeta[i].watchCancel()
}
c.apisMeta = nil
c.namespacedResources = nil
c.log.Warnf("invalidated cluster")
}
func (c *clusterInfo) synced() bool {
syncTime := c.syncTime
if syncTime == nil {
return false
}
if c.syncError != nil {
return time.Now().Before(syncTime.Add(clusterRetryTimeout))
}
return time.Now().Before(syncTime.Add(clusterSyncTimeout))
}
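synced encodes two freshness windows: a successful sync is trusted for clusterSyncTimeout (24 hours), while a sync that ended in error is trusted only for clusterRetryTimeout (10 seconds), so the next ensureSynced call retries quickly. A worked example inside package cache (values illustrative):
thirtySecondsAgo := time.Now().Add(-30 * time.Second)
healthy := &clusterInfo{syncTime: &thirtySecondsAgo}
failing := &clusterInfo{syncTime: &thirtySecondsAgo, syncError: fmt.Errorf("connection refused")}
fmt.Println(healthy.synced(), failing.synced()) // true false: 30s is inside the 24h window but past the 10s retry window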
func (c *clusterInfo) stopWatching(gk schema.GroupKind, ns string) {
c.lock.Lock()
defer c.lock.Unlock()
if info, ok := c.apisMeta[gk]; ok {
info.watchCancel()
delete(c.apisMeta, gk)
c.replaceResourceCache(gk, "", []unstructured.Unstructured{}, ns)
c.log.Warnf("Stop watching: %s not found", gk)
}
}
// startMissingWatches lists supported cluster resources and starts watching for changes unless a watch is already running
func (c *clusterInfo) startMissingWatches() error {
config := c.cluster.RESTConfig()
apis, err := c.kubectl.GetAPIResources(config, c.cacheSettingsSrc().ResourcesFilter)
if err != nil {
return err
}
client, err := c.kubectl.NewDynamicClient(config)
if err != nil {
return err
}
namespacedResources := make(map[schema.GroupKind]bool)
for i := range apis {
api := apis[i]
namespacedResources[api.GroupKind] = api.Meta.Namespaced
if _, ok := c.apisMeta[api.GroupKind]; !ok {
ctx, cancel := context.WithCancel(context.Background())
info := &apiMeta{namespaced: api.Meta.Namespaced, watchCancel: cancel}
c.apisMeta[api.GroupKind] = info
err = c.processApi(client, api, func(resClient dynamic.ResourceInterface, ns string) error {
go c.watchEvents(ctx, api, info, resClient, ns)
return nil
})
if err != nil {
return err
}
}
}
c.namespacedResources = namespacedResources
return nil
}
func runSynced(lock sync.Locker, action func() error) error {
lock.Lock()
defer lock.Unlock()
return action()
}
func (c *clusterInfo) watchEvents(ctx context.Context, api kube.APIResourceInfo, info *apiMeta, resClient dynamic.ResourceInterface, ns string) {
util.RetryUntilSucceed(func() (err error) {
defer func() {
if r := recover(); r != nil {
err = fmt.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
}
}()
err = runSynced(c.lock, func() error {
if info.resourceVersion == "" {
list, err := resClient.List(metav1.ListOptions{})
if err != nil {
return err
}
c.replaceResourceCache(api.GroupKind, list.GetResourceVersion(), list.Items, ns)
}
return nil
})
if err != nil {
return err
}
w, err := resClient.Watch(metav1.ListOptions{ResourceVersion: info.resourceVersion})
if errors.IsNotFound(err) {
c.stopWatching(api.GroupKind, ns)
return nil
}
if errors.IsGone(err) {
info.resourceVersion = ""
c.log.Warnf("Resource version of %s is too old", api.GroupKind)
}
if err != nil {
return err
}
defer w.Stop()
for {
select {
case <-ctx.Done():
return nil
case event, ok := <-w.ResultChan():
if ok {
obj := event.Object.(*unstructured.Unstructured)
info.resourceVersion = obj.GetResourceVersion()
c.processEvent(event.Type, obj)
if kube.IsCRD(obj) {
if event.Type == watch.Deleted {
group, groupOk, groupErr := unstructured.NestedString(obj.Object, "spec", "group")
kind, kindOk, kindErr := unstructured.NestedString(obj.Object, "spec", "names", "kind")
if groupOk && groupErr == nil && kindOk && kindErr == nil {
gk := schema.GroupKind{Group: group, Kind: kind}
c.stopWatching(gk, ns)
}
} else {
err = runSynced(c.lock, func() error {
return c.startMissingWatches()
})
}
}
if err != nil {
c.log.Warnf("Failed to start missing watch: %v", err)
}
} else {
return fmt.Errorf("Watch %s on %s has closed", api.GroupKind, c.cluster.Server)
}
}
}
}, fmt.Sprintf("watch %s on %s", api.GroupKind, c.cluster.Server), ctx, watchResourcesRetryTimeout)
}
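The loop above follows the standard Kubernetes list-then-watch contract. Condensed, the error handling around Watch is (a sketch mirroring the code, not a replacement for it):
w, err := resClient.Watch(metav1.ListOptions{ResourceVersion: rv})
switch {
case errors.IsNotFound(err):
	// the resource type itself is gone (e.g. its CRD was deleted): stop watching it
case errors.IsGone(err):
	rv = "" // 410 Gone: stored resourceVersion is too old; relist on the next retry
case err != nil:
	// transient failure: return the error so RetryUntilSucceed backs off and retries
}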
func (c *clusterInfo) processApi(client dynamic.Interface, api kube.APIResourceInfo, callback func(resClient dynamic.ResourceInterface, ns string) error) error {
resClient := client.Resource(api.GroupVersionResource)
if len(c.cluster.Namespaces) == 0 {
return callback(resClient, "")
}
if !api.Meta.Namespaced {
return nil
}
for _, ns := range c.cluster.Namespaces {
err := callback(resClient.Namespace(ns), ns)
if err != nil {
return err
}
}
return nil
}
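processApi therefore produces exactly one of three shapes, summarized below (the API names are illustrative):
// c.cluster.Namespaces empty         -> callback(resClient, "") once, cluster-wide
// namespaces set, API namespaced     -> callback(resClient.Namespace(ns), ns) per configured namespace
// namespaces set, API cluster-scoped -> no callback at all; e.g. clusterroles are skipped entirely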
func (c *clusterInfo) sync() (err error) {
c.log.Info("Start syncing cluster")
for i := range c.apisMeta {
c.apisMeta[i].watchCancel()
}
c.apisMeta = make(map[schema.GroupKind]*apiMeta)
c.nodes = make(map[kube.ResourceKey]*node)
config := c.cluster.RESTConfig()
version, err := c.kubectl.GetServerVersion(config)
if err != nil {
return err
}
c.serverVersion = version
groups, err := c.kubectl.GetAPIGroups(config)
if err != nil {
return err
}
c.apiGroups = groups
apis, err := c.kubectl.GetAPIResources(config, c.cacheSettingsSrc().ResourcesFilter)
if err != nil {
return err
}
client, err := c.kubectl.NewDynamicClient(config)
if err != nil {
return err
}
lock := sync.Mutex{}
err = util.RunAllAsync(len(apis), func(i int) error {
return c.processApi(client, apis[i], func(resClient dynamic.ResourceInterface, _ string) error {
list, err := resClient.List(metav1.ListOptions{})
if err != nil {
return err
}
lock.Lock()
for i := range list.Items {
c.setNode(c.createObjInfo(&list.Items[i], c.cacheSettingsSrc().AppInstanceLabelKey))
}
lock.Unlock()
return nil
})
})
if err == nil {
err = c.startMissingWatches()
}
if err != nil {
log.Errorf("Failed to sync cluster %s: %v", c.cluster.Server, err)
return err
}
c.log.Info("Cluster successfully synced")
return nil
}
func (c *clusterInfo) ensureSynced() error {
// first check if cluster is synced *without lock*
if c.synced() {
return c.syncError
}
c.lock.Lock()
defer c.lock.Unlock()
// before doing any work, check once again now that we have the lock, to see if it got
// synced between the first check and now
if c.synced() {
return c.syncError
}
err := c.sync()
syncTime := time.Now()
c.syncTime = &syncTime
c.syncError = err
return c.syncError
}
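ensureSynced is a double-checked pattern: a cheap check first, then the same check again under the write lock before paying for a full sync. The shape of the pattern in isolation (a generic sketch; the names are illustrative):
func doubleChecked(mu sync.Locker, ready func() bool, build func() error) error {
	if ready() { // fast path; worst case it races into a redundant lock acquisition
		return nil
	}
	mu.Lock()
	defer mu.Unlock()
	if ready() { // re-check: another goroutine may have synced while we waited
		return nil
	}
	return build()
}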
func (c *clusterInfo) getNamespaceTopLevelResources(namespace string) map[kube.ResourceKey]appv1.ResourceNode {
c.lock.RLock()
defer c.lock.RUnlock()
nodes := make(map[kube.ResourceKey]appv1.ResourceNode)
for _, node := range c.nsIndex[namespace] {
if len(node.ownerRefs) == 0 {
nodes[node.resourceKey()] = node.asResourceNode()
}
}
return nodes
}
func (c *clusterInfo) iterateHierarchy(key kube.ResourceKey, action func(child appv1.ResourceNode, appName string)) {
c.lock.Lock()
defer c.lock.Unlock()
if objInfo, ok := c.nodes[key]; ok {
nsNodes := c.nsIndex[key.Namespace]
action(objInfo.asResourceNode(), objInfo.getApp(nsNodes))
childrenByUID := make(map[types.UID][]*node)
for _, child := range nsNodes {
if objInfo.isParentOf(child) {
childrenByUID[child.ref.UID] = append(childrenByUID[child.ref.UID], child)
}
}
// make sure the list of children has no duplicates
for _, children := range childrenByUID {
if len(children) > 0 {
// The object might have multiple children with the same UID (e.g. replicaset from apps and extensions group). It is ok to pick any object but we need to make sure
// we pick the same child after every refresh.
sort.Slice(children, func(i, j int) bool {
key1 := children[i].resourceKey()
key2 := children[j].resourceKey()
return strings.Compare(key1.String(), key2.String()) < 0
})
child := children[0]
action(child.asResourceNode(), child.getApp(nsNodes))
child.iterateChildren(nsNodes, map[kube.ResourceKey]bool{objInfo.resourceKey(): true}, action)
}
}
}
}
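The duplicate handling above reduces to: group candidate children by UID, then deterministically keep the lexicographically smallest resource key so every refresh traverses the same object. Stated standalone (types are illustrative):
// pickStable keeps one representative per UID, e.g. a ReplicaSet that shows up
// under both the apps and extensions groups, always choosing the same one.
func pickStable(byUID map[string][]string) []string {
	var picked []string
	for _, keys := range byUID {
		sort.Strings(keys) // resource-key strings; the smallest wins every time
		picked = append(picked, keys[0])
	}
	return picked
}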
func (c *clusterInfo) isNamespaced(gk schema.GroupKind) bool {
// this is safe to access without a lock since we always replace the entire map instead of mutating keys
if isNamespaced, ok := c.namespacedResources[gk]; ok {
return isNamespaced
}
log.Warnf("group/kind %s scope is unknown (known objects: %d). assuming namespaced object", gk, len(c.namespacedResources))
return true
}
func (c *clusterInfo) getManagedLiveObjs(a *appv1.Application, targetObjs []*unstructured.Unstructured) (map[kube.ResourceKey]*unstructured.Unstructured, error) {
c.lock.RLock()
defer c.lock.RUnlock()
managedObjs := make(map[kube.ResourceKey]*unstructured.Unstructured)
// iterate all objects in live state cache to find ones associated with app
for key, o := range c.nodes {
if o.appName == a.Name && o.resource != nil && len(o.ownerRefs) == 0 {
managedObjs[key] = o.resource
}
}
config := metrics.AddMetricsTransportWrapper(c.metricsServer, a, c.cluster.RESTConfig())
// iterate target objects and identify ones that already exist in the cluster,
// but are simply missing our label
lock := &sync.Mutex{}
err := util.RunAllAsync(len(targetObjs), func(i int) error {
targetObj := targetObjs[i]
key := GetTargetObjKey(a, targetObj, c.isNamespaced(targetObj.GroupVersionKind().GroupKind()))
lock.Lock()
managedObj := managedObjs[key]
lock.Unlock()
if managedObj == nil {
if existingObj, exists := c.nodes[key]; exists {
if existingObj.resource != nil {
managedObj = existingObj.resource
} else {
var err error
managedObj, err = c.kubectl.GetResource(config, targetObj.GroupVersionKind(), existingObj.ref.Name, existingObj.ref.Namespace)
if err != nil {
if errors.IsNotFound(err) {
return nil
}
return err
}
}
} else if _, watched := c.apisMeta[key.GroupKind()]; !watched {
var err error
managedObj, err = c.kubectl.GetResource(config, targetObj.GroupVersionKind(), targetObj.GetName(), targetObj.GetNamespace())
if err != nil {
if errors.IsNotFound(err) {
return nil
}
return err
}
}
}
if managedObj != nil {
converted, err := c.kubectl.ConvertToVersion(managedObj, targetObj.GroupVersionKind().Group, targetObj.GroupVersionKind().Version)
if err != nil {
// fallback to loading resource from kubernetes if conversion fails
log.Warnf("Failed to convert resource: %v", err)
managedObj, err = c.kubectl.GetResource(config, targetObj.GroupVersionKind(), managedObj.GetName(), managedObj.GetNamespace())
if err != nil {
if errors.IsNotFound(err) {
return nil
}
return err
}
} else {
managedObj = converted
}
lock.Lock()
managedObjs[key] = managedObj
lock.Unlock()
}
return nil
})
if err != nil {
return nil, err
}
return managedObjs, nil
}
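In short, each target object resolves through three tiers before being reported as managed: already labeled in the cache, cached but unlabeled (fetched live when the full body is not stored), or in an unwatched API group (always fetched live); a failed version conversion falls back to a fresh GET. A compressed restatement with hypothetical helper names:
// labeled, cached and fetchLive stand in for the app-label index, the node
// cache and the kubectl GetResource call used by the real implementation.
func resolveLive(key string, labeled, cached func(string) *unstructured.Unstructured,
	fetchLive func(string) (*unstructured.Unstructured, error)) (*unstructured.Unstructured, error) {
	if obj := labeled(key); obj != nil {
		return obj, nil // tier 1: tracked via the app instance label
	}
	if obj := cached(key); obj != nil {
		return obj, nil // tier 2: cached body of an unlabeled object
	}
	return fetchLive(key) // tier 3: unwatched group, ask the API server
}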
func (c *clusterInfo) processEvent(event watch.EventType, un *unstructured.Unstructured) {
if c.onEventReceived != nil {
c.onEventReceived(event, un)
}
key := kube.GetResourceKey(un)
if event == watch.Modified && skipAppRequeing(key) {
return
}
c.lock.Lock()
defer c.lock.Unlock()
existingNode, exists := c.nodes[key]
if event == watch.Deleted {
if exists {
c.onNodeRemoved(key, existingNode)
}
} else {
c.onNodeUpdated(exists, existingNode, un, key)
}
}
func (c *clusterInfo) onNodeUpdated(exists bool, existingNode *node, un *unstructured.Unstructured, key kube.ResourceKey) {
nodes := make([]*node, 0)
if exists {
nodes = append(nodes, existingNode)
}
newObj := c.createObjInfo(un, c.cacheSettingsSrc().AppInstanceLabelKey)
c.setNode(newObj)
nodes = append(nodes, newObj)
toNotify := make(map[string]bool)
for i := range nodes {
n := nodes[i]
if ns, ok := c.nsIndex[n.ref.Namespace]; ok {
app := n.getApp(ns)
if app == "" {
continue
}
toNotify[app] = n.isRootAppNode() || toNotify[app]
}
}
c.onObjectUpdated(toNotify, newObj.ref)
}
func (c *clusterInfo) onNodeRemoved(key kube.ResourceKey, n *node) {
appName := n.appName
if ns, ok := c.nsIndex[key.Namespace]; ok {
appName = n.getApp(ns)
}
c.removeNode(key)
managedByApp := make(map[string]bool)
if appName != "" {
managedByApp[appName] = n.isRootAppNode()
}
c.onObjectUpdated(managedByApp, n.ref)
}
var (
ignoredRefreshResources = map[string]bool{
"/" + kube.EndpointsKind: true,
}
)
func (c *clusterInfo) getClusterInfo() metrics.ClusterInfo {
c.lock.RLock()
defer c.lock.RUnlock()
return metrics.ClusterInfo{
APIsCount: len(c.apisMeta),
K8SVersion: c.serverVersion,
ResourcesCount: len(c.nodes),
Server: c.cluster.Server,
LastCacheSyncTime: c.syncTime,
}
}
// skipAppRequeing checks if the object is an API type which we want to skip requeuing against.
// We ignore API types which have a high churn rate, and/or whose updates are irrelevant to the app
func skipAppRequeing(key kube.ResourceKey) bool {
return ignoredRefreshResources[key.Group+"/"+key.Kind]
}
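The lookup key joins the resource group and kind with a slash, so core-group Endpoints resolve to "/Endpoints". A quick illustration (object names hypothetical):
skipAppRequeing(kube.NewResourceKey("", "Endpoints", "default", "kube-dns"))       // true: high-churn core Endpoints
skipAppRequeing(kube.NewResourceKey("apps", "Deployment", "default", "guestbook")) // false: "apps/Deployment" is not ignored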


@@ -1,558 +0,0 @@
package cache
import (
"fmt"
"sort"
"strings"
"sync"
"testing"
"github.com/ghodss/yaml"
log "github.com/sirupsen/logrus"
"github.com/stretchr/testify/assert"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/watch"
"k8s.io/client-go/dynamic/fake"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/errors"
appv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/kube/kubetest"
)
func strToUnstructured(jsonStr string) *unstructured.Unstructured {
obj := make(map[string]interface{})
err := yaml.Unmarshal([]byte(jsonStr), &obj)
errors.CheckError(err)
return &unstructured.Unstructured{Object: obj}
}
func mustToUnstructured(obj interface{}) *unstructured.Unstructured {
un, err := kube.ToUnstructured(obj)
errors.CheckError(err)
return un
}
var (
testPod = strToUnstructured(`
apiVersion: v1
kind: Pod
metadata:
uid: "1"
name: helm-guestbook-pod
namespace: default
ownerReferences:
- apiVersion: apps/v1
kind: ReplicaSet
name: helm-guestbook-rs
uid: "2"
resourceVersion: "123"`)
testRS = strToUnstructured(`
apiVersion: apps/v1
kind: ReplicaSet
metadata:
uid: "2"
name: helm-guestbook-rs
namespace: default
annotations:
deployment.kubernetes.io/revision: "2"
ownerReferences:
- apiVersion: apps/v1beta1
kind: Deployment
name: helm-guestbook
uid: "3"
resourceVersion: "123"`)
testDeploy = strToUnstructured(`
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.kubernetes.io/instance: helm-guestbook
uid: "3"
name: helm-guestbook
namespace: default
resourceVersion: "123"`)
testService = strToUnstructured(`
apiVersion: v1
kind: Service
metadata:
name: helm-guestbook
namespace: default
resourceVersion: "123"
uid: "4"
spec:
selector:
app: guestbook
type: LoadBalancer
status:
loadBalancer:
ingress:
- hostname: localhost`)
testIngress = strToUnstructured(`
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: helm-guestbook
namespace: default
uid: "4"
spec:
backend:
serviceName: not-found-service
servicePort: 443
rules:
- host: helm-guestbook.com
http:
paths:
- backend:
serviceName: helm-guestbook
servicePort: 443
path: /
- backend:
serviceName: helm-guestbook
servicePort: https
path: /
status:
loadBalancer:
ingress:
- ip: 107.178.210.11`)
)
func newCluster(objs ...*unstructured.Unstructured) *clusterInfo {
runtimeObjs := make([]runtime.Object, len(objs))
for i := range objs {
runtimeObjs[i] = objs[i]
}
scheme := runtime.NewScheme()
client := fake.NewSimpleDynamicClient(scheme, runtimeObjs...)
apiResources := []kube.APIResourceInfo{{
GroupKind: schema.GroupKind{Group: "", Kind: "Pod"},
GroupVersionResource: schema.GroupVersionResource{Group: "", Version: "v1", Resource: "pods"},
Meta: metav1.APIResource{Namespaced: true},
}, {
GroupKind: schema.GroupKind{Group: "apps", Kind: "ReplicaSet"},
GroupVersionResource: schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "replicasets"},
Meta: metav1.APIResource{Namespaced: true},
}, {
GroupKind: schema.GroupKind{Group: "apps", Kind: "Deployment"},
GroupVersionResource: schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "deployments"},
Meta: metav1.APIResource{Namespaced: true},
}}
return newClusterExt(&kubetest.MockKubectlCmd{APIResources: apiResources, DynamicClient: client})
}
func newClusterExt(kubectl kube.Kubectl) *clusterInfo {
return &clusterInfo{
lock: &sync.RWMutex{},
nodes: make(map[kube.ResourceKey]*node),
onObjectUpdated: func(managedByApp map[string]bool, reference corev1.ObjectReference) {},
kubectl: kubectl,
nsIndex: make(map[string]map[kube.ResourceKey]*node),
cluster: &appv1.Cluster{},
syncTime: nil,
apisMeta: make(map[schema.GroupKind]*apiMeta),
log: log.WithField("cluster", "test"),
cacheSettingsSrc: func() *cacheSettings {
return &cacheSettings{AppInstanceLabelKey: common.LabelKeyAppInstance}
},
}
}
func getChildren(cluster *clusterInfo, un *unstructured.Unstructured) []appv1.ResourceNode {
hierarchy := make([]appv1.ResourceNode, 0)
cluster.iterateHierarchy(kube.GetResourceKey(un), func(child appv1.ResourceNode, app string) {
hierarchy = append(hierarchy, child)
})
return hierarchy[1:]
}
func TestEnsureSynced(t *testing.T) {
obj1 := strToUnstructured(`
apiVersion: apps/v1
kind: Deployment
metadata: {"name": "helm-guestbook1", "namespace": "default1"}
`)
obj2 := strToUnstructured(`
apiVersion: apps/v1
kind: Deployment
metadata: {"name": "helm-guestbook2", "namespace": "default2"}
`)
cluster := newCluster(obj1, obj2)
err := cluster.ensureSynced()
assert.Nil(t, err)
assert.Len(t, cluster.nodes, 2)
var names []string
for k := range cluster.nodes {
names = append(names, k.Name)
}
assert.ElementsMatch(t, []string{"helm-guestbook1", "helm-guestbook2"}, names)
}
func TestEnsureSyncedSingleNamespace(t *testing.T) {
obj1 := strToUnstructured(`
apiVersion: apps/v1
kind: Deployment
metadata: {"name": "helm-guestbook1", "namespace": "default1"}
`)
obj2 := strToUnstructured(`
apiVersion: apps/v1
kind: Deployment
metadata: {"name": "helm-guestbook2", "namespace": "default2"}
`)
cluster := newCluster(obj1, obj2)
cluster.cluster.Namespaces = []string{"default1"}
err := cluster.ensureSynced()
assert.Nil(t, err)
assert.Len(t, cluster.nodes, 1)
var names []string
for k := range cluster.nodes {
names = append(names, k.Name)
}
assert.ElementsMatch(t, []string{"helm-guestbook1"}, names)
}
func TestGetNamespaceResources(t *testing.T) {
defaultNamespaceTopLevel1 := strToUnstructured(`
apiVersion: apps/v1
kind: Deployment
metadata: {"name": "helm-guestbook1", "namespace": "default"}
`)
defaultNamespaceTopLevel2 := strToUnstructured(`
apiVersion: apps/v1
kind: Deployment
metadata: {"name": "helm-guestbook2", "namespace": "default"}
`)
kubesystemNamespaceTopLevel2 := strToUnstructured(`
apiVersion: apps/v1
kind: Deployment
metadata: {"name": "helm-guestbook3", "namespace": "kube-system"}
`)
cluster := newCluster(defaultNamespaceTopLevel1, defaultNamespaceTopLevel2, kubesystemNamespaceTopLevel2)
err := cluster.ensureSynced()
assert.Nil(t, err)
resources := cluster.getNamespaceTopLevelResources("default")
assert.Len(t, resources, 2)
assert.Equal(t, resources[kube.GetResourceKey(defaultNamespaceTopLevel1)].Name, "helm-guestbook1")
assert.Equal(t, resources[kube.GetResourceKey(defaultNamespaceTopLevel2)].Name, "helm-guestbook2")
resources = cluster.getNamespaceTopLevelResources("kube-system")
assert.Len(t, resources, 1)
assert.Equal(t, resources[kube.GetResourceKey(kubesystemNamespaceTopLevel2)].Name, "helm-guestbook3")
}
func TestGetChildren(t *testing.T) {
cluster := newCluster(testPod, testRS, testDeploy)
err := cluster.ensureSynced()
assert.Nil(t, err)
rsChildren := getChildren(cluster, testRS)
assert.Equal(t, []appv1.ResourceNode{{
ResourceRef: appv1.ResourceRef{
Kind: "Pod",
Namespace: "default",
Name: "helm-guestbook-pod",
Group: "",
Version: "v1",
UID: "1",
},
ParentRefs: []appv1.ResourceRef{{
Group: "apps",
Version: "",
Kind: "ReplicaSet",
Namespace: "default",
Name: "helm-guestbook-rs",
UID: "2",
}},
Health: &appv1.HealthStatus{Status: appv1.HealthStatusUnknown},
NetworkingInfo: &appv1.ResourceNetworkingInfo{Labels: testPod.GetLabels()},
ResourceVersion: "123",
Info: []appv1.InfoItem{{Name: "Containers", Value: "0/0"}},
}}, rsChildren)
deployChildren := getChildren(cluster, testDeploy)
assert.Equal(t, append([]appv1.ResourceNode{{
ResourceRef: appv1.ResourceRef{
Kind: "ReplicaSet",
Namespace: "default",
Name: "helm-guestbook-rs",
Group: "apps",
Version: "v1",
UID: "2",
},
ResourceVersion: "123",
Health: &appv1.HealthStatus{Status: appv1.HealthStatusHealthy},
Info: []appv1.InfoItem{{Name: "Revision", Value: "Rev:2"}},
ParentRefs: []appv1.ResourceRef{{Group: "apps", Version: "", Kind: "Deployment", Namespace: "default", Name: "helm-guestbook", UID: "3"}},
}}, rsChildren...), deployChildren)
}
func TestGetManagedLiveObjs(t *testing.T) {
cluster := newCluster(testPod, testRS, testDeploy)
err := cluster.ensureSynced()
assert.Nil(t, err)
targetDeploy := strToUnstructured(`
apiVersion: apps/v1
kind: Deployment
metadata:
name: helm-guestbook
labels:
app: helm-guestbook`)
managedObjs, err := cluster.getManagedLiveObjs(&appv1.Application{
ObjectMeta: metav1.ObjectMeta{Name: "helm-guestbook"},
Spec: appv1.ApplicationSpec{
Destination: appv1.ApplicationDestination{
Namespace: "default",
},
},
}, []*unstructured.Unstructured{targetDeploy})
assert.Nil(t, err)
assert.Equal(t, managedObjs, map[kube.ResourceKey]*unstructured.Unstructured{
kube.NewResourceKey("apps", "Deployment", "default", "helm-guestbook"): testDeploy,
})
}
func TestChildDeletedEvent(t *testing.T) {
cluster := newCluster(testPod, testRS, testDeploy)
err := cluster.ensureSynced()
assert.Nil(t, err)
cluster.processEvent(watch.Deleted, testPod)
rsChildren := getChildren(cluster, testRS)
assert.Equal(t, []appv1.ResourceNode{}, rsChildren)
}
func TestProcessNewChildEvent(t *testing.T) {
cluster := newCluster(testPod, testRS, testDeploy)
err := cluster.ensureSynced()
assert.Nil(t, err)
newPod := strToUnstructured(`
apiVersion: v1
kind: Pod
metadata:
uid: "4"
name: helm-guestbook-pod2
namespace: default
ownerReferences:
- apiVersion: apps/v1
kind: ReplicaSet
name: helm-guestbook-rs
uid: "2"
resourceVersion: "123"`)
cluster.processEvent(watch.Added, newPod)
rsChildren := getChildren(cluster, testRS)
sort.Slice(rsChildren, func(i, j int) bool {
return strings.Compare(rsChildren[i].Name, rsChildren[j].Name) < 0
})
assert.Equal(t, []appv1.ResourceNode{{
ResourceRef: appv1.ResourceRef{
Kind: "Pod",
Namespace: "default",
Name: "helm-guestbook-pod",
Group: "",
Version: "v1",
UID: "1",
},
Info: []appv1.InfoItem{{Name: "Containers", Value: "0/0"}},
Health: &appv1.HealthStatus{Status: appv1.HealthStatusUnknown},
NetworkingInfo: &appv1.ResourceNetworkingInfo{Labels: testPod.GetLabels()},
ParentRefs: []appv1.ResourceRef{{
Group: "apps",
Version: "",
Kind: "ReplicaSet",
Namespace: "default",
Name: "helm-guestbook-rs",
UID: "2",
}},
ResourceVersion: "123",
}, {
ResourceRef: appv1.ResourceRef{
Kind: "Pod",
Namespace: "default",
Name: "helm-guestbook-pod2",
Group: "",
Version: "v1",
UID: "4",
},
NetworkingInfo: &appv1.ResourceNetworkingInfo{Labels: testPod.GetLabels()},
Info: []appv1.InfoItem{{Name: "Containers", Value: "0/0"}},
Health: &appv1.HealthStatus{Status: appv1.HealthStatusUnknown},
ParentRefs: []appv1.ResourceRef{{
Group: "apps",
Version: "",
Kind: "ReplicaSet",
Namespace: "default",
Name: "helm-guestbook-rs",
UID: "2",
}},
ResourceVersion: "123",
}}, rsChildren)
}
func TestUpdateResourceTags(t *testing.T) {
pod := &corev1.Pod{
TypeMeta: metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
ObjectMeta: metav1.ObjectMeta{Name: "testPod", Namespace: "default"},
Spec: corev1.PodSpec{
Containers: []corev1.Container{{
Name: "test",
Image: "test",
}},
},
}
cluster := newCluster(mustToUnstructured(pod))
err := cluster.ensureSynced()
assert.Nil(t, err)
podNode := cluster.nodes[kube.GetResourceKey(mustToUnstructured(pod))]
assert.NotNil(t, podNode)
assert.Equal(t, []appv1.InfoItem{{Name: "Containers", Value: "0/1"}}, podNode.info)
pod.Status = corev1.PodStatus{
ContainerStatuses: []corev1.ContainerStatus{{
State: corev1.ContainerState{
Terminated: &corev1.ContainerStateTerminated{
ExitCode: -1,
},
},
}},
}
cluster.processEvent(watch.Modified, mustToUnstructured(pod))
podNode = cluster.nodes[kube.GetResourceKey(mustToUnstructured(pod))]
assert.NotNil(t, podNode)
assert.Equal(t, []appv1.InfoItem{{Name: "Status Reason", Value: "ExitCode:-1"}, {Name: "Containers", Value: "0/1"}}, podNode.info)
}
func TestUpdateAppResource(t *testing.T) {
updatesReceived := make([]string, 0)
cluster := newCluster(testPod, testRS, testDeploy)
cluster.onObjectUpdated = func(managedByApp map[string]bool, _ corev1.ObjectReference) {
for appName, fullRefresh := range managedByApp {
updatesReceived = append(updatesReceived, fmt.Sprintf("%s: %v", appName, fullRefresh))
}
}
err := cluster.ensureSynced()
assert.Nil(t, err)
cluster.processEvent(watch.Modified, mustToUnstructured(testPod))
assert.Contains(t, updatesReceived, "helm-guestbook: false")
}
func TestCircularReference(t *testing.T) {
dep := testDeploy.DeepCopy()
dep.SetOwnerReferences([]metav1.OwnerReference{{
Name: testPod.GetName(),
Kind: testPod.GetKind(),
APIVersion: testPod.GetAPIVersion(),
}})
cluster := newCluster(testPod, testRS, dep)
err := cluster.ensureSynced()
assert.Nil(t, err)
children := getChildren(cluster, dep)
assert.Len(t, children, 2)
node := cluster.nodes[kube.GetResourceKey(dep)]
assert.NotNil(t, node)
app := node.getApp(cluster.nodes)
assert.Equal(t, "", app)
}
func TestWatchCacheUpdated(t *testing.T) {
removed := testPod.DeepCopy()
removed.SetName(testPod.GetName() + "-removed-pod")
updated := testPod.DeepCopy()
updated.SetName(testPod.GetName() + "-updated-pod")
updated.SetResourceVersion("updated-pod-version")
cluster := newCluster(removed, updated)
err := cluster.ensureSynced()
assert.Nil(t, err)
added := testPod.DeepCopy()
added.SetName(testPod.GetName() + "-new-pod")
podGroupKind := testPod.GroupVersionKind().GroupKind()
cluster.replaceResourceCache(podGroupKind, "updated-list-version", []unstructured.Unstructured{*updated, *added}, "")
_, ok := cluster.nodes[kube.GetResourceKey(removed)]
assert.False(t, ok)
updatedNode, ok := cluster.nodes[kube.GetResourceKey(updated)]
assert.True(t, ok)
assert.Equal(t, updatedNode.resourceVersion, "updated-pod-version")
_, ok = cluster.nodes[kube.GetResourceKey(added)]
assert.True(t, ok)
}
func TestNamespaceModeReplace(t *testing.T) {
ns1Pod := testPod.DeepCopy()
ns1Pod.SetNamespace("ns1")
ns1Pod.SetName("pod1")
ns2Pod := testPod.DeepCopy()
ns2Pod.SetNamespace("ns2")
podGroupKind := testPod.GroupVersionKind().GroupKind()
cluster := newCluster(ns1Pod, ns2Pod)
err := cluster.ensureSynced()
assert.Nil(t, err)
cluster.replaceResourceCache(podGroupKind, "", nil, "ns1")
_, ok := cluster.nodes[kube.GetResourceKey(ns1Pod)]
assert.False(t, ok)
_, ok = cluster.nodes[kube.GetResourceKey(ns2Pod)]
assert.True(t, ok)
}
func TestGetDuplicatedChildren(t *testing.T) {
extensionsRS := testRS.DeepCopy()
extensionsRS.SetGroupVersionKind(schema.GroupVersionKind{Group: "extensions", Kind: kube.ReplicaSetKind, Version: "v1beta1"})
cluster := newCluster(testDeploy, testRS, extensionsRS)
err := cluster.ensureSynced()
assert.Nil(t, err)
// Get children multiple times to make sure the right child is picked up every time.
for i := 0; i < 5; i++ {
children := getChildren(cluster, testDeploy)
assert.Len(t, children, 1)
assert.Equal(t, "apps", children[0].Group)
assert.Equal(t, kube.ReplicaSetKind, children[0].Kind)
assert.Equal(t, testRS.GetName(), children[0].Name)
}
}


@@ -3,38 +3,37 @@ package cache
import (
"fmt"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
"github.com/argoproj/gitops-engine/pkg/utils/text"
v1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
k8snode "k8s.io/kubernetes/pkg/util/node"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/resource"
)
func populateNodeInfo(un *unstructured.Unstructured, node *node) {
func populateNodeInfo(un *unstructured.Unstructured, res *ResourceInfo) {
gvk := un.GroupVersionKind()
revision := resource.GetRevision(un)
if revision > 0 {
node.info = append(node.info, v1alpha1.InfoItem{Name: "Revision", Value: fmt.Sprintf("Rev:%v", revision)})
res.Info = append(res.Info, v1alpha1.InfoItem{Name: "Revision", Value: fmt.Sprintf("Rev:%v", revision)})
}
switch gvk.Group {
case "":
switch gvk.Kind {
case kube.PodKind:
populatePodInfo(un, node)
populatePodInfo(un, res)
return
case kube.ServiceKind:
populateServiceInfo(un, node)
populateServiceInfo(un, res)
return
}
case "extensions", "networking.k8s.io":
switch gvk.Kind {
case kube.IngressKind:
populateIngressInfo(un, node)
populateIngressInfo(un, res)
return
}
}
@@ -58,16 +57,16 @@ func getIngress(un *unstructured.Unstructured) []v1.LoadBalancerIngress {
return res
}
func populateServiceInfo(un *unstructured.Unstructured, node *node) {
func populateServiceInfo(un *unstructured.Unstructured, res *ResourceInfo) {
targetLabels, _, _ := unstructured.NestedStringMap(un.Object, "spec", "selector")
ingress := make([]v1.LoadBalancerIngress, 0)
if serviceType, ok, err := unstructured.NestedString(un.Object, "spec", "type"); ok && err == nil && serviceType == string(v1.ServiceTypeLoadBalancer) {
ingress = getIngress(un)
}
node.networkingInfo = &v1alpha1.ResourceNetworkingInfo{TargetLabels: targetLabels, Ingress: ingress}
res.NetworkingInfo = &v1alpha1.ResourceNetworkingInfo{TargetLabels: targetLabels, Ingress: ingress}
}
func populateIngressInfo(un *unstructured.Unstructured, node *node) {
func populateIngressInfo(un *unstructured.Unstructured, res *ResourceInfo) {
ingress := getIngress(un)
targetsMap := make(map[v1alpha1.ResourceRef]bool)
if backend, ok, err := unstructured.NestedMap(un.Object, "spec", "backend"); ok && err == nil {
@@ -88,7 +87,7 @@ func populateIngressInfo(un *unstructured.Unstructured, node *node) {
host := rule["host"]
if host == nil || host == "" {
for i := range ingress {
host = util.FirstNonEmpty(ingress[i].Hostname, ingress[i].IP)
host = text.FirstNonEmpty(ingress[i].Hostname, ingress[i].IP)
if host != "" {
break
}
@@ -155,10 +154,10 @@ func populateIngressInfo(un *unstructured.Unstructured, node *node) {
for url := range urlsSet {
urls = append(urls, url)
}
node.networkingInfo = &v1alpha1.ResourceNetworkingInfo{TargetRefs: targets, Ingress: ingress, ExternalURLs: urls}
res.NetworkingInfo = &v1alpha1.ResourceNetworkingInfo{TargetRefs: targets, Ingress: ingress, ExternalURLs: urls}
}
func populatePodInfo(un *unstructured.Unstructured, node *node) {
func populatePodInfo(un *unstructured.Unstructured, res *ResourceInfo) {
pod := v1.Pod{}
err := runtime.DefaultUnstructuredConverter.FromUnstructured(un.Object, &pod)
if err != nil {
@@ -181,9 +180,9 @@ func populatePodInfo(un *unstructured.Unstructured, node *node) {
imagesSet[container.Image] = true
}
node.images = nil
res.Images = nil
for image := range imagesSet {
node.images = append(node.images, image)
res.Images = append(res.Images, image)
}
initializing := false
@@ -250,8 +249,8 @@ func populatePodInfo(un *unstructured.Unstructured, node *node) {
}
if reason != "" {
node.info = append(node.info, v1alpha1.InfoItem{Name: "Status Reason", Value: reason})
res.Info = append(res.Info, v1alpha1.InfoItem{Name: "Status Reason", Value: reason})
}
node.info = append(node.info, v1alpha1.InfoItem{Name: "Containers", Value: fmt.Sprintf("%d/%d", readyContainers, totalContainers)})
node.networkingInfo = &v1alpha1.ResourceNetworkingInfo{Labels: un.GetLabels()}
res.Info = append(res.Info, v1alpha1.InfoItem{Name: "Containers", Value: fmt.Sprintf("%d/%d", readyContainers, totalContainers)})
res.NetworkingInfo = &v1alpha1.ResourceNetworkingInfo{Labels: un.GetLabels()}
}


@@ -5,12 +5,68 @@ import (
"strings"
"testing"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
"github.com/argoproj/pkg/errors"
"github.com/ghodss/yaml"
"github.com/stretchr/testify/assert"
v1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util/kube"
)
"github.com/stretchr/testify/assert"
func strToUnstructured(jsonStr string) *unstructured.Unstructured {
obj := make(map[string]interface{})
err := yaml.Unmarshal([]byte(jsonStr), &obj)
errors.CheckError(err)
return &unstructured.Unstructured{Object: obj}
}
var (
testService = strToUnstructured(`
apiVersion: v1
kind: Service
metadata:
name: helm-guestbook
namespace: default
resourceVersion: "123"
uid: "4"
spec:
selector:
app: guestbook
type: LoadBalancer
status:
loadBalancer:
ingress:
- hostname: localhost`)
testIngress = strToUnstructured(`
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: helm-guestbook
namespace: default
uid: "4"
spec:
backend:
serviceName: not-found-service
servicePort: 443
rules:
- host: helm-guestbook.com
http:
paths:
- backend:
serviceName: helm-guestbook
servicePort: 443
path: /
- backend:
serviceName: helm-guestbook
servicePort: https
path: /
status:
loadBalancer:
ingress:
- ip: 107.178.210.11`)
)
func TestGetPodInfo(t *testing.T) {
@@ -31,29 +87,29 @@ func TestGetPodInfo(t *testing.T) {
containers:
- image: bar`)
node := &node{}
populateNodeInfo(pod, node)
assert.Equal(t, []v1alpha1.InfoItem{{Name: "Containers", Value: "0/1"}}, node.info)
assert.Equal(t, []string{"bar"}, node.images)
assert.Equal(t, &v1alpha1.ResourceNetworkingInfo{Labels: map[string]string{"app": "guestbook"}}, node.networkingInfo)
info := &ResourceInfo{}
populateNodeInfo(pod, info)
assert.Equal(t, []v1alpha1.InfoItem{{Name: "Containers", Value: "0/1"}}, info.Info)
assert.Equal(t, []string{"bar"}, info.Images)
assert.Equal(t, &v1alpha1.ResourceNetworkingInfo{Labels: map[string]string{"app": "guestbook"}}, info.NetworkingInfo)
}
func TestGetServiceInfo(t *testing.T) {
node := &node{}
populateNodeInfo(testService, node)
assert.Equal(t, 0, len(node.info))
info := &ResourceInfo{}
populateNodeInfo(testService, info)
assert.Equal(t, 0, len(info.Info))
assert.Equal(t, &v1alpha1.ResourceNetworkingInfo{
TargetLabels: map[string]string{"app": "guestbook"},
Ingress: []v1.LoadBalancerIngress{{Hostname: "localhost"}},
}, node.networkingInfo)
}, info.NetworkingInfo)
}
func TestGetIngressInfo(t *testing.T) {
node := &node{}
populateNodeInfo(testIngress, node)
assert.Equal(t, 0, len(node.info))
sort.Slice(node.networkingInfo.TargetRefs, func(i, j int) bool {
return strings.Compare(node.networkingInfo.TargetRefs[j].Name, node.networkingInfo.TargetRefs[i].Name) < 0
info := &ResourceInfo{}
populateNodeInfo(testIngress, info)
assert.Equal(t, 0, len(info.Info))
sort.Slice(info.NetworkingInfo.TargetRefs, func(i, j int) bool {
return strings.Compare(info.NetworkingInfo.TargetRefs[j].Name, info.NetworkingInfo.TargetRefs[i].Name) < 0
})
assert.Equal(t, &v1alpha1.ResourceNetworkingInfo{
Ingress: []v1.LoadBalancerIngress{{IP: "107.178.210.11"}},
@@ -69,7 +125,7 @@ func TestGetIngressInfo(t *testing.T) {
Name: "helm-guestbook",
}},
ExternalURLs: []string{"https://helm-guestbook.com/"},
}, node.networkingInfo)
}, info.NetworkingInfo)
}
func TestGetIngressInfoNoHost(t *testing.T) {
@@ -92,8 +148,8 @@ func TestGetIngressInfoNoHost(t *testing.T) {
ingress:
- ip: 107.178.210.11`)
node := &node{}
populateNodeInfo(ingress, node)
info := &ResourceInfo{}
populateNodeInfo(ingress, info)
assert.Equal(t, &v1alpha1.ResourceNetworkingInfo{
Ingress: []v1.LoadBalancerIngress{{IP: "107.178.210.11"}},
@@ -104,7 +160,7 @@ func TestGetIngressInfoNoHost(t *testing.T) {
Name: "helm-guestbook",
}},
ExternalURLs: []string{"https://107.178.210.11/"},
}, node.networkingInfo)
}, info.NetworkingInfo)
}
func TestExternalUrlWithSubPath(t *testing.T) {
ingress := strToUnstructured(`
@@ -126,11 +182,11 @@ func TestExternalUrlWithSubPath(t *testing.T) {
ingress:
- ip: 107.178.210.11`)
node := &node{}
populateNodeInfo(ingress, node)
info := &ResourceInfo{}
populateNodeInfo(ingress, info)
expectedExternalUrls := []string{"https://107.178.210.11/my/sub/path/"}
assert.Equal(t, expectedExternalUrls, node.networkingInfo.ExternalURLs)
assert.Equal(t, expectedExternalUrls, info.NetworkingInfo.ExternalURLs)
}
func TestExternalUrlWithMultipleSubPaths(t *testing.T) {
ingress := strToUnstructured(`
@@ -160,11 +216,11 @@ func TestExternalUrlWithMultipleSubPaths(t *testing.T) {
ingress:
- ip: 107.178.210.11`)
node := &node{}
populateNodeInfo(ingress, node)
info := &ResourceInfo{}
populateNodeInfo(ingress, info)
expectedExternalUrls := []string{"https://helm-guestbook.com/my/sub/path/", "https://helm-guestbook.com/my/sub/path/2", "https://helm-guestbook.com"}
actualURLs := node.networkingInfo.ExternalURLs
actualURLs := info.NetworkingInfo.ExternalURLs
sort.Strings(expectedExternalUrls)
sort.Strings(actualURLs)
assert.Equal(t, expectedExternalUrls, actualURLs)
@@ -188,11 +244,11 @@ func TestExternalUrlWithNoSubPath(t *testing.T) {
ingress:
- ip: 107.178.210.11`)
node := &node{}
populateNodeInfo(ingress, node)
info := &ResourceInfo{}
populateNodeInfo(ingress, info)
expectedExternalUrls := []string{"https://107.178.210.11"}
assert.Equal(t, expectedExternalUrls, node.networkingInfo.ExternalURLs)
assert.Equal(t, expectedExternalUrls, info.NetworkingInfo.ExternalURLs)
}
func TestExternalUrlWithNetworkingApi(t *testing.T) {
@@ -214,9 +270,9 @@ func TestExternalUrlWithNetworkingApi(t *testing.T) {
ingress:
- ip: 107.178.210.11`)
node := &node{}
populateNodeInfo(ingress, node)
info := &ResourceInfo{}
populateNodeInfo(ingress, info)
expectedExternalUrls := []string{"https://107.178.210.11"}
assert.Equal(t, expectedExternalUrls, node.networkingInfo.ExternalURLs)
assert.Equal(t, expectedExternalUrls, info.NetworkingInfo.ExternalURLs)
}


@@ -5,8 +5,9 @@ package mocks
import (
context "context"
metrics "github.com/argoproj/argo-cd/controller/metrics"
kube "github.com/argoproj/argo-cd/util/kube"
cache "github.com/argoproj/gitops-engine/pkg/cache"
kube "github.com/argoproj/gitops-engine/pkg/utils/kube"
mock "github.com/stretchr/testify/mock"
@@ -24,16 +25,39 @@ type LiveStateCache struct {
mock.Mock
}
// GetClusterCache provides a mock function with given fields: server
func (_m *LiveStateCache) GetClusterCache(server string) (cache.ClusterCache, error) {
ret := _m.Called(server)
var r0 cache.ClusterCache
if rf, ok := ret.Get(0).(func(string) cache.ClusterCache); ok {
r0 = rf(server)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(cache.ClusterCache)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(string) error); ok {
r1 = rf(server)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// GetClustersInfo provides a mock function with given fields:
func (_m *LiveStateCache) GetClustersInfo() []metrics.ClusterInfo {
func (_m *LiveStateCache) GetClustersInfo() []cache.ClusterInfo {
ret := _m.Called()
var r0 []metrics.ClusterInfo
if rf, ok := ret.Get(0).(func() []metrics.ClusterInfo); ok {
var r0 []cache.ClusterInfo
if rf, ok := ret.Get(0).(func() []cache.ClusterInfo); ok {
r0 = rf()
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).([]metrics.ClusterInfo)
r0 = ret.Get(0).([]cache.ClusterInfo)
}
}
@@ -116,9 +140,18 @@ func (_m *LiveStateCache) GetVersionsInfo(serverURL string) (string, []v1.APIGro
return r0, r1, r2
}
// Invalidate provides a mock function with given fields:
func (_m *LiveStateCache) Invalidate() {
_m.Called()
// Init provides a mock function with given fields:
func (_m *LiveStateCache) Init() error {
ret := _m.Called()
var r0 error
if rf, ok := ret.Get(0).(func() error); ok {
r0 = rf()
} else {
r0 = ret.Error(0)
}
return r0
}
// IsNamespaced provides a mock function with given fields: server, gk


@@ -1,142 +0,0 @@
package cache
import (
log "github.com/sirupsen/logrus"
appv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util/kube"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
)
type node struct {
resourceVersion string
ref v1.ObjectReference
ownerRefs []metav1.OwnerReference
info []appv1.InfoItem
appName string
// available only for root application nodes
resource *unstructured.Unstructured
// networkingInfo is available only for known types involved in networking: Ingress, Service, Pod
networkingInfo *appv1.ResourceNetworkingInfo
images []string
health *appv1.HealthStatus
}
func (n *node) isRootAppNode() bool {
return n.appName != "" && len(n.ownerRefs) == 0
}
func (n *node) resourceKey() kube.ResourceKey {
return kube.NewResourceKey(n.ref.GroupVersionKind().Group, n.ref.Kind, n.ref.Namespace, n.ref.Name)
}
func (n *node) isParentOf(child *node) bool {
for i, ownerRef := range child.ownerRefs {
// backfill the UID of inferred owner references on the child
if ownerRef.UID == "" && n.ref.Kind == ownerRef.Kind && n.ref.APIVersion == ownerRef.APIVersion && n.ref.Name == ownerRef.Name {
ownerRef.UID = n.ref.UID
child.ownerRefs[i] = ownerRef
return true
}
if n.ref.UID == ownerRef.UID {
return true
}
}
return false
}
func ownerRefGV(ownerRef metav1.OwnerReference) schema.GroupVersion {
gv, err := schema.ParseGroupVersion(ownerRef.APIVersion)
if err != nil {
gv = schema.GroupVersion{}
}
return gv
}
func (n *node) getApp(ns map[kube.ResourceKey]*node) string {
return n.getAppRecursive(ns, map[kube.ResourceKey]bool{})
}
func (n *node) getAppRecursive(ns map[kube.ResourceKey]*node, visited map[kube.ResourceKey]bool) string {
if !visited[n.resourceKey()] {
visited[n.resourceKey()] = true
} else {
log.Warnf("Circular dependency detected: %v.", visited)
return n.appName
}
if n.appName != "" {
return n.appName
}
for _, ownerRef := range n.ownerRefs {
gv := ownerRefGV(ownerRef)
if parent, ok := ns[kube.NewResourceKey(gv.Group, ownerRef.Kind, n.ref.Namespace, ownerRef.Name)]; ok {
app := parent.getAppRecursive(ns, visited)
if app != "" {
return app
}
}
}
return ""
}
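The visited set is what keeps this upward walk safe on malformed ownership graphs: every key is marked before recursing, so a cycle terminates after one lap instead of recursing forever. The guard in isolation (an illustrative sketch, not the original code):
func walkUp(key string, parent map[string]string, visited map[string]bool) string {
	if visited[key] {
		return "" // already seen: ownership cycle, stop here
	}
	visited[key] = true
	if p, ok := parent[key]; ok {
		return walkUp(p, parent, visited)
	}
	return key // reached a root
}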
func newResourceKeySet(set map[kube.ResourceKey]bool, keys ...kube.ResourceKey) map[kube.ResourceKey]bool {
newSet := make(map[kube.ResourceKey]bool)
for k, v := range set {
newSet[k] = v
}
for i := range keys {
newSet[keys[i]] = true
}
return newSet
}
func (n *node) asResourceNode() appv1.ResourceNode {
gv, err := schema.ParseGroupVersion(n.ref.APIVersion)
if err != nil {
gv = schema.GroupVersion{}
}
parentRefs := make([]appv1.ResourceRef, len(n.ownerRefs))
for i, ownerRef := range n.ownerRefs {
ownerGvk := schema.FromAPIVersionAndKind(ownerRef.APIVersion, ownerRef.Kind)
ownerKey := kube.NewResourceKey(ownerGvk.Group, ownerRef.Kind, n.ref.Namespace, ownerRef.Name)
// index by i so that every owner reference is captured, not only the first
parentRefs[i] = appv1.ResourceRef{Name: ownerRef.Name, Kind: ownerKey.Kind, Namespace: n.ref.Namespace, Group: ownerKey.Group, UID: string(ownerRef.UID)}
}
return appv1.ResourceNode{
ResourceRef: appv1.ResourceRef{
UID: string(n.ref.UID),
Name: n.ref.Name,
Group: gv.Group,
Version: gv.Version,
Kind: n.ref.Kind,
Namespace: n.ref.Namespace,
},
ParentRefs: parentRefs,
Info: n.info,
ResourceVersion: n.resourceVersion,
NetworkingInfo: n.networkingInfo,
Images: n.images,
Health: n.health,
}
}
func (n *node) iterateChildren(ns map[kube.ResourceKey]*node, parents map[kube.ResourceKey]bool, action func(child appv1.ResourceNode, appName string)) {
for childKey, child := range ns {
if n.isParentOf(ns[childKey]) {
if parents[childKey] {
key := n.resourceKey()
log.Warnf("Circular dependency detected. %s is child and parent of %s", childKey.String(), key.String())
} else {
action(child.asResourceNode(), child.getApp(ns))
child.iterateChildren(ns, newResourceKeySet(parents, n.resourceKey()), action)
}
}
}
}


@@ -1,83 +0,0 @@
package cache
import (
"testing"
"github.com/argoproj/argo-cd/common"
"github.com/stretchr/testify/assert"
)
var c = &clusterInfo{cacheSettingsSrc: func() *cacheSettings {
return &cacheSettings{AppInstanceLabelKey: common.LabelKeyAppInstance}
}}
func TestIsParentOf(t *testing.T) {
child := c.createObjInfo(testPod, "")
parent := c.createObjInfo(testRS, "")
grandParent := c.createObjInfo(testDeploy, "")
assert.True(t, parent.isParentOf(child))
assert.False(t, grandParent.isParentOf(child))
}
func TestIsParentOfSameKindDifferentGroupAndUID(t *testing.T) {
rs := testRS.DeepCopy()
rs.SetAPIVersion("somecrd.io/v1")
rs.SetUID("123")
child := c.createObjInfo(testPod, "")
invalidParent := c.createObjInfo(rs, "")
assert.False(t, invalidParent.isParentOf(child))
}
func TestIsServiceParentOfEndPointWithTheSameName(t *testing.T) {
nonMatchingNameEndPoint := c.createObjInfo(strToUnstructured(`
apiVersion: v1
kind: Endpoints
metadata:
name: not-matching-name
namespace: default
`), "")
matchingNameEndPoint := c.createObjInfo(strToUnstructured(`
apiVersion: v1
kind: Endpoints
metadata:
name: helm-guestbook
namespace: default
`), "")
parent := c.createObjInfo(testService, "")
assert.True(t, parent.isParentOf(matchingNameEndPoint))
assert.Equal(t, parent.ref.UID, matchingNameEndPoint.ownerRefs[0].UID)
assert.False(t, parent.isParentOf(nonMatchingNameEndPoint))
}
func TestIsServiceAccountParentOfSecret(t *testing.T) {
serviceAccount := c.createObjInfo(strToUnstructured(`
apiVersion: v1
kind: ServiceAccount
metadata:
name: default
namespace: default
uid: '123'
secrets:
- name: default-token-123
`), "")
tokenSecret := c.createObjInfo(strToUnstructured(`
apiVersion: v1
kind: Secret
metadata:
annotations:
kubernetes.io/service-account.name: default
kubernetes.io/service-account.uid: '123'
name: default-token-123
namespace: default
uid: '345'
type: kubernetes.io/service-account-token
`), "")
assert.True(t, serviceAccount.isParentOf(tokenSecret))
}


@@ -5,6 +5,8 @@ import (
"sync"
"time"
"github.com/argoproj/gitops-engine/pkg/cache"
"github.com/prometheus/client_golang/prometheus"
)
@@ -41,25 +43,19 @@ var (
)
)
type ClusterInfo struct {
Server string
K8SVersion string
ResourcesCount int
APIsCount int
LastCacheSyncTime *time.Time
}
type HasClustersInfo interface {
GetClustersInfo() []ClusterInfo
GetClustersInfo() []cache.ClusterInfo
}
type clusterCollector struct {
infoSource HasClustersInfo
info []ClusterInfo
info []cache.ClusterInfo
lock sync.Mutex
}
func (c *clusterCollector) Run(ctx context.Context) {
// FIXME: complains about SA1015
// nolint:staticcheck
tick := time.Tick(metricsCollectionInterval)
for {
select {


@@ -4,8 +4,10 @@ import (
"context"
"net/http"
"os"
"strconv"
"time"
"github.com/argoproj/gitops-engine/pkg/health"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"
log "github.com/sirupsen/logrus"
@@ -24,7 +26,9 @@ type MetricsServer struct {
kubectlExecPendingGauge *prometheus.GaugeVec
k8sRequestCounter *prometheus.CounterVec
clusterEventsCounter *prometheus.CounterVec
redisRequestCounter *prometheus.CounterVec
reconcileHistogram *prometheus.HistogramVec
redisRequestHistogram *prometheus.HistogramVec
registry *prometheus.Registry
}
@@ -101,13 +105,30 @@ var (
// Buckets chosen after observing a ~2100ms mean reconcile time
Buckets: []float64{0.25, .5, 1, 2, 4, 8, 16},
},
append(descAppDefaultLabels, "dest_server"),
[]string{"namespace", "dest_server"},
)
clusterEventsCounter = prometheus.NewCounterVec(prometheus.CounterOpts{
Name: "argocd_cluster_events_total",
Help: "Number of processes k8s resource events.",
}, append(descClusterDefaultLabels, "group", "kind"))
redisRequestCounter = prometheus.NewCounterVec(
prometheus.CounterOpts{
Name: "argocd_redis_request_total",
Help: "Number of kubernetes requests executed during application reconciliation.",
},
[]string{"initiator", "failed"},
)
redisRequestHistogram = prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Name: "argocd_redis_request_duration",
Help: "Redis requests duration.",
Buckets: []float64{0.01, 0.05, 0.10, 0.25, .5, 1},
},
[]string{"initiator"},
)
)
// NewMetricsServer returns a new prometheus server which collects application metrics
@@ -128,6 +149,8 @@ func NewMetricsServer(addr string, appLister applister.ApplicationLister, health
registry.MustRegister(kubectlExecPendingGauge)
registry.MustRegister(reconcileHistogram)
registry.MustRegister(clusterEventsCounter)
registry.MustRegister(redisRequestCounter)
registry.MustRegister(redisRequestHistogram)
return &MetricsServer{
registry: registry,
@@ -141,6 +164,8 @@ func NewMetricsServer(addr string, appLister applister.ApplicationLister, health
kubectlExecPendingGauge: kubectlExecPendingGauge,
reconcileHistogram: reconcileHistogram,
clusterEventsCounter: clusterEventsCounter,
redisRequestCounter: redisRequestCounter,
redisRequestHistogram: redisRequestHistogram,
}
}
@@ -189,9 +214,18 @@ func (m *MetricsServer) IncKubernetesRequest(app *argoappv1.Application, server,
).Inc()
}
func (m *MetricsServer) IncRedisRequest(failed bool) {
m.redisRequestCounter.WithLabelValues("argocd-application-controller", strconv.FormatBool(failed)).Inc()
}
// ObserveRedisRequestDuration observes redis request duration
func (m *MetricsServer) ObserveRedisRequestDuration(duration time.Duration) {
m.redisRequestHistogram.WithLabelValues("argocd-application-controller").Observe(duration.Seconds())
}
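A hedged sketch of the intended caller-side pattern for these two metrics (doRedisCall is a placeholder, not an API introduced by this change):
start := time.Now()
err := doRedisCall() // the actual redis operation being measured
m.IncRedisRequest(err != nil)
m.ObserveRedisRequestDuration(time.Since(start))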
// IncReconcile increments the reconcile counter for an application
func (m *MetricsServer) IncReconcile(app *argoappv1.Application, duration time.Duration) {
m.reconcileHistogram.WithLabelValues(app.Namespace, app.Name, app.Spec.GetProject(), app.Spec.Destination.Server).Observe(duration.Seconds())
m.reconcileHistogram.WithLabelValues(app.Namespace, app.Spec.Destination.Server).Observe(duration.Seconds())
}
type appCollector struct {
@@ -260,10 +294,10 @@ func collectApps(ch chan<- prometheus.Metric, app *argoappv1.Application) {
}
healthStatus := app.Status.Health.Status
if healthStatus == "" {
healthStatus = argoappv1.HealthStatusUnknown
healthStatus = health.HealthStatusUnknown
}
addGauge(descAppInfo, 1, git.NormalizeGitURL(app.Spec.Source.RepoURL), app.Spec.Destination.Server, app.Spec.Destination.Namespace, string(syncStatus), healthStatus, operation)
addGauge(descAppInfo, 1, git.NormalizeGitURL(app.Spec.Source.RepoURL), app.Spec.Destination.Server, app.Spec.Destination.Namespace, string(syncStatus), string(healthStatus), operation)
// Deprecated controller metrics
if os.Getenv(EnvVarLegacyControllerMetrics) == "true" {
@@ -273,11 +307,12 @@ func collectApps(ch chan<- prometheus.Metric, app *argoappv1.Application) {
addGauge(descAppSyncStatusCode, boolFloat64(syncStatus == argoappv1.SyncStatusCodeOutOfSync), string(argoappv1.SyncStatusCodeOutOfSync))
addGauge(descAppSyncStatusCode, boolFloat64(syncStatus == argoappv1.SyncStatusCodeUnknown || syncStatus == ""), string(argoappv1.SyncStatusCodeUnknown))
addGauge(descAppHealthStatus, boolFloat64(healthStatus == argoappv1.HealthStatusUnknown || healthStatus == ""), argoappv1.HealthStatusUnknown)
addGauge(descAppHealthStatus, boolFloat64(healthStatus == argoappv1.HealthStatusProgressing), argoappv1.HealthStatusProgressing)
addGauge(descAppHealthStatus, boolFloat64(healthStatus == argoappv1.HealthStatusSuspended), argoappv1.HealthStatusSuspended)
addGauge(descAppHealthStatus, boolFloat64(healthStatus == argoappv1.HealthStatusHealthy), argoappv1.HealthStatusHealthy)
addGauge(descAppHealthStatus, boolFloat64(healthStatus == argoappv1.HealthStatusDegraded), argoappv1.HealthStatusDegraded)
addGauge(descAppHealthStatus, boolFloat64(healthStatus == argoappv1.HealthStatusMissing), argoappv1.HealthStatusMissing)
healthStatus := app.Status.Health.Status
addGauge(descAppHealthStatus, boolFloat64(healthStatus == health.HealthStatusUnknown || healthStatus == ""), string(health.HealthStatusUnknown))
addGauge(descAppHealthStatus, boolFloat64(healthStatus == health.HealthStatusProgressing), string(health.HealthStatusProgressing))
addGauge(descAppHealthStatus, boolFloat64(healthStatus == health.HealthStatusSuspended), string(health.HealthStatusSuspended))
addGauge(descAppHealthStatus, boolFloat64(healthStatus == health.HealthStatusHealthy), string(health.HealthStatusHealthy))
addGauge(descAppHealthStatus, boolFloat64(healthStatus == health.HealthStatusDegraded), string(health.HealthStatusDegraded))
addGauge(descAppHealthStatus, boolFloat64(healthStatus == health.HealthStatusMissing), string(health.HealthStatusMissing))
}
}


@@ -10,6 +10,7 @@ import (
"testing"
"time"
"github.com/argoproj/gitops-engine/pkg/sync/common"
"github.com/ghodss/yaml"
"github.com/stretchr/testify/assert"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -227,11 +228,11 @@ argocd_app_sync_total{dest_server="https://localhost:6443",name="my-app",namespa
`
fakeApp := newFakeApp(fakeApp)
metricsServ.IncSync(fakeApp, &argoappv1.OperationState{Phase: argoappv1.OperationRunning})
metricsServ.IncSync(fakeApp, &argoappv1.OperationState{Phase: argoappv1.OperationFailed})
metricsServ.IncSync(fakeApp, &argoappv1.OperationState{Phase: argoappv1.OperationError})
metricsServ.IncSync(fakeApp, &argoappv1.OperationState{Phase: argoappv1.OperationSucceeded})
metricsServ.IncSync(fakeApp, &argoappv1.OperationState{Phase: argoappv1.OperationSucceeded})
metricsServ.IncSync(fakeApp, &argoappv1.OperationState{Phase: common.OperationRunning})
metricsServ.IncSync(fakeApp, &argoappv1.OperationState{Phase: common.OperationFailed})
metricsServ.IncSync(fakeApp, &argoappv1.OperationState{Phase: common.OperationError})
metricsServ.IncSync(fakeApp, &argoappv1.OperationState{Phase: common.OperationSucceeded})
metricsServ.IncSync(fakeApp, &argoappv1.OperationState{Phase: common.OperationSucceeded})
req, err := http.NewRequest("GET", "/metrics", nil)
assert.NoError(t, err)
@@ -260,16 +261,16 @@ func TestReconcileMetrics(t *testing.T) {
appReconcileMetrics := `
# HELP argocd_app_reconcile Application reconciliation performance.
# TYPE argocd_app_reconcile histogram
argocd_app_reconcile_bucket{dest_server="https://localhost:6443",name="my-app",namespace="argocd",project="important-project",le="0.25"} 0
argocd_app_reconcile_bucket{dest_server="https://localhost:6443",name="my-app",namespace="argocd",project="important-project",le="0.5"} 0
argocd_app_reconcile_bucket{dest_server="https://localhost:6443",name="my-app",namespace="argocd",project="important-project",le="1"} 0
argocd_app_reconcile_bucket{dest_server="https://localhost:6443",name="my-app",namespace="argocd",project="important-project",le="2"} 0
argocd_app_reconcile_bucket{dest_server="https://localhost:6443",name="my-app",namespace="argocd",project="important-project",le="4"} 0
argocd_app_reconcile_bucket{dest_server="https://localhost:6443",name="my-app",namespace="argocd",project="important-project",le="8"} 1
argocd_app_reconcile_bucket{dest_server="https://localhost:6443",name="my-app",namespace="argocd",project="important-project",le="16"} 1
argocd_app_reconcile_bucket{dest_server="https://localhost:6443",name="my-app",namespace="argocd",project="important-project",le="+Inf"} 1
argocd_app_reconcile_sum{dest_server="https://localhost:6443",name="my-app",namespace="argocd",project="important-project"} 5
argocd_app_reconcile_count{dest_server="https://localhost:6443",name="my-app",namespace="argocd",project="important-project"} 1
argocd_app_reconcile_bucket{dest_server="https://localhost:6443",namespace="argocd",le="0.25"} 0
argocd_app_reconcile_bucket{dest_server="https://localhost:6443",namespace="argocd",le="0.5"} 0
argocd_app_reconcile_bucket{dest_server="https://localhost:6443",namespace="argocd",le="1"} 0
argocd_app_reconcile_bucket{dest_server="https://localhost:6443",namespace="argocd",le="2"} 0
argocd_app_reconcile_bucket{dest_server="https://localhost:6443",namespace="argocd",le="4"} 0
argocd_app_reconcile_bucket{dest_server="https://localhost:6443",namespace="argocd",le="8"} 1
argocd_app_reconcile_bucket{dest_server="https://localhost:6443",namespace="argocd",le="16"} 1
argocd_app_reconcile_bucket{dest_server="https://localhost:6443",namespace="argocd",le="+Inf"} 1
argocd_app_reconcile_sum{dest_server="https://localhost:6443",namespace="argocd"} 5
argocd_app_reconcile_count{dest_server="https://localhost:6443",namespace="argocd"} 1
`
fakeApp := newFakeApp(fakeApp)
metricsServ.IncReconcile(fakeApp, 5*time.Second)


@@ -6,8 +6,15 @@ import (
"fmt"
"time"
"github.com/argoproj/gitops-engine/pkg/diff"
"github.com/argoproj/gitops-engine/pkg/health"
"github.com/argoproj/gitops-engine/pkg/sync"
hookutil "github.com/argoproj/gitops-engine/pkg/sync/hook"
"github.com/argoproj/gitops-engine/pkg/sync/ignore"
resourceutil "github.com/argoproj/gitops-engine/pkg/sync/resource"
"github.com/argoproj/gitops-engine/pkg/utils/io"
kubeutil "github.com/argoproj/gitops-engine/pkg/utils/kube"
log "github.com/sirupsen/logrus"
"github.com/yudai/gojsondiff"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
@@ -21,19 +28,20 @@ import (
appv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned"
"github.com/argoproj/argo-cd/reposerver/apiclient"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/argo"
"github.com/argoproj/argo-cd/util/db"
"github.com/argoproj/argo-cd/util/diff"
"github.com/argoproj/argo-cd/util/health"
hookutil "github.com/argoproj/argo-cd/util/hook"
kubeutil "github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/resource"
"github.com/argoproj/argo-cd/util/resource/ignore"
argohealth "github.com/argoproj/argo-cd/util/health"
"github.com/argoproj/argo-cd/util/settings"
"github.com/argoproj/argo-cd/util/stats"
)
type resourceInfoProviderStub struct {
}
func (r *resourceInfoProviderStub) IsNamespaced(_ schema.GroupKind) (bool, error) {
return false, nil
}
type managedResource struct {
Target *unstructured.Unstructured
Live *unstructured.Unstructured
@@ -54,10 +62,6 @@ func GetLiveObjs(res []managedResource) []*unstructured.Unstructured {
return objs
}
type ResourceInfoProvider interface {
IsNamespaced(server string, gk schema.GroupKind) (bool, error)
}
// AppStateManager defines methods which allow comparing the application spec to the actual application state.
type AppStateManager interface {
CompareAppState(app *v1alpha1.Application, project *appv1.AppProject, revision string, source v1alpha1.ApplicationSource, noCache bool, localObjects []string) *comparisonResult
@@ -65,27 +69,17 @@ type AppStateManager interface {
}
type comparisonResult struct {
syncStatus *v1alpha1.SyncStatus
healthStatus *v1alpha1.HealthStatus
resources []v1alpha1.ResourceStatus
managedResources []managedResource
hooks []*unstructured.Unstructured
diffNormalizer diff.Normalizer
appSourceType v1alpha1.ApplicationSourceType
syncStatus *v1alpha1.SyncStatus
healthStatus *v1alpha1.HealthStatus
resources []v1alpha1.ResourceStatus
managedResources []managedResource
reconciliationResult sync.ReconciliationResult
diffNormalizer diff.Normalizer
appSourceType v1alpha1.ApplicationSourceType
// timings maps phases of comparison to the duration it took to complete (for statistical purposes)
timings map[string]time.Duration
}
func (cr *comparisonResult) targetObjs() []*unstructured.Unstructured {
objs := cr.hooks
for _, r := range cr.managedResources {
if r.Target != nil {
objs = append(objs, r.Target)
}
}
return objs
}
// appStateManager allows comparing applications to git
type appStateManager struct {
metricsServer *metrics.MetricsServer
@@ -99,23 +93,23 @@ type appStateManager struct {
namespace string
}
func (m *appStateManager) getRepoObjs(app *v1alpha1.Application, source v1alpha1.ApplicationSource, appLabelKey, revision string, noCache bool) ([]*unstructured.Unstructured, []*unstructured.Unstructured, *apiclient.ManifestResponse, error) {
func (m *appStateManager) getRepoObjs(app *v1alpha1.Application, source v1alpha1.ApplicationSource, appLabelKey, revision string, noCache bool) ([]*unstructured.Unstructured, *apiclient.ManifestResponse, error) {
ts := stats.NewTimingStats()
helmRepos, err := m.db.ListHelmRepositories(context.Background())
if err != nil {
return nil, nil, nil, err
return nil, nil, err
}
ts.AddCheckpoint("helm_ms")
repo, err := m.db.GetRepository(context.Background(), source.RepoURL)
if err != nil {
return nil, nil, nil, err
return nil, nil, err
}
ts.AddCheckpoint("repo_ms")
conn, repoClient, err := m.repoClientset.NewRepoServerClient()
if err != nil {
return nil, nil, nil, err
return nil, nil, err
}
defer util.Close(conn)
defer io.Close(conn)
if revision == "" {
revision = source.TargetRevision
@@ -123,7 +117,7 @@ func (m *appStateManager) getRepoObjs(app *v1alpha1.Application, source v1alpha1
plugins, err := m.settingsMgr.GetConfigManagementPlugins()
if err != nil {
return nil, nil, nil, err
return nil, nil, err
}
ts.AddCheckpoint("plugins_ms")
tools := make([]*appv1.ConfigManagementPlugin, len(plugins))
@@ -131,20 +125,18 @@ func (m *appStateManager) getRepoObjs(app *v1alpha1.Application, source v1alpha1
tools[i] = &plugins[i]
}
buildOptions, err := m.settingsMgr.GetKustomizeBuildOptions()
kustomizeSettings, err := m.settingsMgr.GetKustomizeSettings()
if err != nil {
return nil, nil, nil, err
return nil, nil, err
}
kustomizeOptions, err := kustomizeSettings.GetOptions(app.Spec.Source)
if err != nil {
return nil, nil, err
}
ts.AddCheckpoint("build_options_ms")
serverVersion, apiGroups, err := m.liveStateCache.GetVersionsInfo(app.Spec.Destination.Server)
if err != nil {
return nil, nil, nil, err
}
var apiVersions []string
for _, g := range apiGroups {
for _, v := range g.Versions {
apiVersions = append(apiVersions, v.GroupVersion)
}
return nil, nil, err
}
ts.AddCheckpoint("version_ms")
manifestInfo, err := repoClient.GenerateManifest(context.Background(), &apiclient.ManifestRequest{
@@ -157,20 +149,19 @@ func (m *appStateManager) getRepoObjs(app *v1alpha1.Application, source v1alpha1
Namespace: app.Spec.Destination.Namespace,
ApplicationSource: &source,
Plugins: tools,
KustomizeOptions: &appv1.KustomizeOptions{
BuildOptions: buildOptions,
},
KubeVersion: serverVersion,
ApiVersions: apiVersions,
KustomizeOptions: kustomizeOptions,
KubeVersion: serverVersion,
ApiVersions: argo.APIGroupsToVersions(apiGroups),
})
if err != nil {
return nil, nil, nil, err
return nil, nil, err
}
ts.AddCheckpoint("manifests_ms")
targetObjs, hooks, err := unmarshalManifests(manifestInfo.Manifests)
targetObjs, err := unmarshalManifests(manifestInfo.Manifests)
if err != nil {
return nil, nil, nil, err
return nil, nil, err
}
ts.AddCheckpoint("unmarshal_ms")
logCtx := log.WithField("application", app.Name)
for k, v := range ts.Timings() {
@@ -178,49 +169,43 @@ func (m *appStateManager) getRepoObjs(app *v1alpha1.Application, source v1alpha1
}
logCtx = logCtx.WithField("time_ms", time.Since(ts.StartTime).Milliseconds())
logCtx.Info("getRepoObjs stats")
return targetObjs, hooks, manifestInfo, nil
return targetObjs, manifestInfo, nil
}
func unmarshalManifests(manifests []string) ([]*unstructured.Unstructured, []*unstructured.Unstructured, error) {
func unmarshalManifests(manifests []string) ([]*unstructured.Unstructured, error) {
targetObjs := make([]*unstructured.Unstructured, 0)
hooks := make([]*unstructured.Unstructured, 0)
for _, manifest := range manifests {
obj, err := v1alpha1.UnmarshalToUnstructured(manifest)
if err != nil {
return nil, nil, err
}
if obj == nil || ignore.Ignore(obj) {
continue
}
if hookutil.IsHook(obj) {
hooks = append(hooks, obj)
} else {
targetObjs = append(targetObjs, obj)
return nil, err
}
targetObjs = append(targetObjs, obj)
}
return targetObjs, hooks, nil
return targetObjs, nil
}
func DeduplicateTargetObjects(
server string,
namespace string,
objs []*unstructured.Unstructured,
infoProvider ResourceInfoProvider,
infoProvider kubeutil.ResourceInfoProvider,
) ([]*unstructured.Unstructured, []v1alpha1.ApplicationCondition, error) {
targetByKey := make(map[kubeutil.ResourceKey][]*unstructured.Unstructured)
for i := range objs {
obj := objs[i]
isNamespaced, err := infoProvider.IsNamespaced(server, obj.GroupVersionKind().GroupKind())
if err != nil {
return objs, nil, err
if obj == nil {
continue
}
isNamespaced := kubeutil.IsNamespacedOrUnknown(infoProvider, obj.GroupVersionKind().GroupKind())
if !isNamespaced {
obj.SetNamespace("")
} else if obj.GetNamespace() == "" {
obj.SetNamespace(namespace)
}
key := kubeutil.GetResourceKey(obj)
if key.Name == "" && obj.GetGenerateName() != "" {
key.Name = fmt.Sprintf("%s%d", obj.GetGenerateName(), i)
}
targetByKey[key] = append(targetByKey[key], obj)
}
conditions := make([]v1alpha1.ApplicationCondition, 0)
@@ -240,42 +225,6 @@ func DeduplicateTargetObjects(
return result, conditions, nil
}
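One detail worth calling out in the new DeduplicateTargetObjects logic above: manifests that set only metadata.generateName have an empty metadata.name, so they would all collapse onto a single resource key and trip the repeated-resource warning. Appending the slice index keeps them apart. A minimal sketch of just that keying step, assuming gitops-engine's kube utilities (the helper name makeKeys is ours, not Argo CD's):

package controller

import (
	"fmt"

	kubeutil "github.com/argoproj/gitops-engine/pkg/utils/kube"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// makeKeys mirrors the generateName handling above: two Pods that both set only
// metadata.generateName "my-pod" get the distinct key names "my-pod0" and
// "my-pod1" instead of colliding on an empty name. Sketch, not production code.
func makeKeys(objs []*unstructured.Unstructured) []kubeutil.ResourceKey {
	keys := make([]kubeutil.ResourceKey, 0, len(objs))
	for i, obj := range objs {
		key := kubeutil.GetResourceKey(obj)
		if key.Name == "" && obj.GetGenerateName() != "" {
			key.Name = fmt.Sprintf("%s%d", obj.GetGenerateName(), i)
		}
		keys = append(keys, key)
	}
	return keys
}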
// dedupLiveResources removes live resource duplicates that share the same UID. Duplicates are created in separate resource groups.
// E.g. apps/Deployment produces a duplicate in extensions/Deployment, authorization.openshift.io/ClusterRole produces a duplicate in rbac.authorization.k8s.io/ClusterRole, etc.
// The method removes such duplicates unless the duplicate is defined in git (exists in the target resources list). At least one duplicate stays.
// If none of the duplicates are in git, a random one stays.
func dedupLiveResources(targetObjs []*unstructured.Unstructured, liveObjsByKey map[kubeutil.ResourceKey]*unstructured.Unstructured) {
targetObjByKey := make(map[kubeutil.ResourceKey]*unstructured.Unstructured)
for i := range targetObjs {
targetObjByKey[kubeutil.GetResourceKey(targetObjs[i])] = targetObjs[i]
}
liveObjsById := make(map[types.UID][]*unstructured.Unstructured)
for k := range liveObjsByKey {
obj := liveObjsByKey[k]
if obj != nil {
liveObjsById[obj.GetUID()] = append(liveObjsById[obj.GetUID()], obj)
}
}
for id := range liveObjsById {
objs := liveObjsById[id]
if len(objs) > 1 {
duplicatesLeft := len(objs)
for i := range objs {
obj := objs[i]
resourceKey := kubeutil.GetResourceKey(obj)
if _, ok := targetObjByKey[resourceKey]; !ok {
delete(liveObjsByKey, resourceKey)
duplicatesLeft--
if duplicatesLeft == 1 {
break
}
}
}
}
}
}
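To make the comment above concrete, here is the duplicate-UID scenario as a small hypothetical test (the newDeploy helper and test name are ours, not part of the Argo CD suite):

package controller

import (
	"testing"

	"github.com/stretchr/testify/assert"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"

	kubeutil "github.com/argoproj/argo-cd/util/kube"
)

// The API server reports the same Deployment (one UID) under both apps/ and
// extensions/; only the git-declared apps/ entry should survive deduplication.
func TestDedupLiveResourcesSketch(t *testing.T) {
	newDeploy := func(apiVersion string) *unstructured.Unstructured {
		obj := &unstructured.Unstructured{Object: map[string]interface{}{
			"apiVersion": apiVersion,
			"kind":       "Deployment",
			"metadata":   map[string]interface{}{"name": "web", "namespace": "default"},
		}}
		obj.SetUID("uid-1") // same UID for every copy, as the API server would report
		return obj
	}
	target := newDeploy("apps/v1")             // declared in git
	liveApps := newDeploy("apps/v1")           // live copy of the git resource
	liveExt := newDeploy("extensions/v1beta1") // API-server duplicate, not in git

	liveByKey := map[kubeutil.ResourceKey]*unstructured.Unstructured{
		kubeutil.GetResourceKey(liveApps): liveApps,
		kubeutil.GetResourceKey(liveExt):  liveExt,
	}
	dedupLiveResources([]*unstructured.Unstructured{target}, liveByKey)

	assert.Len(t, liveByKey, 1) // the extensions/ duplicate was dropped
}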
func (m *appStateManager) getComparisonSettings(app *appv1.Application) (string, map[string]v1alpha1.ResourceOverride, diff.Normalizer, *settings.ResourcesFilter, error) {
resourceOverrides, err := m.settingsMgr.GetResourceOverrides()
if err != nil {
@@ -311,7 +260,7 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *ap
ComparedTo: appv1.ComparedTo{Source: source, Destination: app.Spec.Destination},
Status: appv1.SyncStatusCodeUnknown,
},
healthStatus: &appv1.HealthStatus{Status: appv1.HealthStatusUnknown},
healthStatus: &appv1.HealthStatus{Status: health.HealthStatusUnknown},
}
}
@@ -323,19 +272,18 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *ap
logCtx.Infof("Comparing app state (cluster: %s, namespace: %s)", app.Spec.Destination.Server, app.Spec.Destination.Namespace)
var targetObjs []*unstructured.Unstructured
var hooks []*unstructured.Unstructured
var manifestInfo *apiclient.ManifestResponse
now := metav1.Now()
if len(localManifests) == 0 {
targetObjs, hooks, manifestInfo, err = m.getRepoObjs(app, source, appLabelKey, revision, noCache)
targetObjs, manifestInfo, err = m.getRepoObjs(app, source, appLabelKey, revision, noCache)
if err != nil {
targetObjs = make([]*unstructured.Unstructured, 0)
conditions = append(conditions, v1alpha1.ApplicationCondition{Type: v1alpha1.ApplicationConditionComparisonError, Message: err.Error(), LastTransitionTime: &now})
failedToLoadObjs = true
}
} else {
targetObjs, hooks, err = unmarshalManifests(localManifests)
targetObjs, err = unmarshalManifests(localManifests)
if err != nil {
targetObjs = make([]*unstructured.Unstructured, 0)
conditions = append(conditions, v1alpha1.ApplicationCondition{Type: v1alpha1.ApplicationConditionComparisonError, Message: err.Error(), LastTransitionTime: &now})
@@ -345,7 +293,12 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *ap
}
ts.AddCheckpoint("git_ms")
targetObjs, dedupConditions, err := DeduplicateTargetObjects(app.Spec.Destination.Server, app.Spec.Destination.Namespace, targetObjs, m.liveStateCache)
var infoProvider kubeutil.ResourceInfoProvider
infoProvider, err = m.liveStateCache.GetClusterCache(app.Spec.Destination.Server)
if err != nil {
infoProvider = &resourceInfoProviderStub{}
}
targetObjs, dedupConditions, err := DeduplicateTargetObjects(app.Spec.Destination.Namespace, targetObjs, infoProvider)
if err != nil {
conditions = append(conditions, v1alpha1.ApplicationCondition{Type: v1alpha1.ApplicationConditionComparisonError, Message: err.Error(), LastTransitionTime: &now})
}
@@ -370,7 +323,8 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *ap
conditions = append(conditions, v1alpha1.ApplicationCondition{Type: v1alpha1.ApplicationConditionComparisonError, Message: err.Error(), LastTransitionTime: &now})
failedToLoadObjs = true
}
dedupLiveResources(targetObjs, liveObjByKey)
logCtx.Debugf("Retrieved lived manifests")
// filter out all resources which are not permitted in the application project
for k, v := range liveObjByKey {
if !project.IsLiveResourcePermitted(v, app.Spec.Destination.Server) {
@@ -391,33 +345,18 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *ap
}
}
managedLiveObj := make([]*unstructured.Unstructured, len(targetObjs))
for i, obj := range targetObjs {
gvk := obj.GroupVersionKind()
ns := util.FirstNonEmpty(obj.GetNamespace(), app.Spec.Destination.Namespace)
if namespaced, err := m.liveStateCache.IsNamespaced(app.Spec.Destination.Server, obj.GroupVersionKind().GroupKind()); err == nil && !namespaced {
ns = ""
}
key := kubeutil.NewResourceKey(gvk.Group, gvk.Kind, ns, obj.GetName())
if liveObj, ok := liveObjByKey[key]; ok {
managedLiveObj[i] = liveObj
delete(liveObjByKey, key)
} else {
managedLiveObj[i] = nil
}
}
reconciliation := sync.Reconcile(targetObjs, liveObjByKey, app.Spec.Destination.Namespace, infoProvider)
ts.AddCheckpoint("live_ms")
// Everything remaining in liveObjByKey are "extra" resources that aren't tracked in git.
// The following adds all the extras to the managedLiveObj list and backfills the targetObj
// list with nils, so that the lists are of equal lengths for comparison purposes.
for _, obj := range liveObjByKey {
targetObjs = append(targetObjs, nil)
managedLiveObj = append(managedLiveObj, obj)
compareOptions, err := m.settingsMgr.GetResourceCompareOptions()
if err != nil {
log.Warnf("Could not get compare options from ConfigMap (assuming defaults): %v", err)
compareOptions = diff.GetDefaultDiffOptions()
}
logCtx.Debugf("built managed objects list")
// Do the actual comparison
diffResults, err := diff.DiffArray(targetObjs, managedLiveObj, diffNormalizer)
diffResults, err := diff.DiffArray(reconciliation.Target, reconciliation.Live, diffNormalizer, compareOptions)
if err != nil {
diffResults = &diff.DiffResultList{}
failedToLoadObjs = true
@@ -426,10 +365,10 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *ap
ts.AddCheckpoint("diff_ms")
syncCode := v1alpha1.SyncStatusCodeSynced
managedResources := make([]managedResource, len(targetObjs))
resourceSummaries := make([]v1alpha1.ResourceStatus, len(targetObjs))
for i, targetObj := range targetObjs {
liveObj := managedLiveObj[i]
managedResources := make([]managedResource, len(reconciliation.Target))
resourceSummaries := make([]v1alpha1.ResourceStatus, len(reconciliation.Target))
for i, targetObj := range reconciliation.Target {
liveObj := reconciliation.Live[i]
obj := liveObj
if obj == nil {
obj = targetObj
@@ -453,12 +392,7 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *ap
if i < len(diffResults.Diffs) {
diffResult = diffResults.Diffs[i]
} else {
diffResult = diff.DiffResult{
Diff: gojsondiff.New().CompareObjects(map[string]interface{}{}, map[string]interface{}{}),
Modified: false,
NormalizedLive: []byte("{}"),
PredictedLive: []byte("{}"),
}
diffResult = diff.DiffResult{Modified: false, NormalizedLive: []byte("{}"), PredictedLive: []byte("{}")}
}
if resState.Hook || ignore.Ignore(obj) {
// For resource hooks, don't store sync status, and do not affect overall sync status
@@ -470,7 +404,7 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *ap
resState.Status = v1alpha1.SyncStatusCodeOutOfSync
// we ignore the status if the obj needs pruning AND we have the annotation
needsPruning := targetObj == nil && liveObj != nil
if !(needsPruning && resource.HasAnnotationOption(obj, common.AnnotationCompareOptions, "IgnoreExtraneous")) {
if !(needsPruning && resourceutil.HasAnnotationOption(obj, common.AnnotationCompareOptions, "IgnoreExtraneous")) {
syncCode = v1alpha1.SyncStatusCodeOutOfSync
}
} else {
@@ -515,7 +449,7 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *ap
}
ts.AddCheckpoint("sync_ms")
healthStatus, err := health.SetApplicationHealth(resourceSummaries, GetLiveObjs(managedResources), resourceOverrides, func(obj *unstructured.Unstructured) bool {
healthStatus, err := argohealth.SetApplicationHealth(resourceSummaries, GetLiveObjs(managedResources), resourceOverrides, func(obj *unstructured.Unstructured) bool {
return !isSelfReferencedApp(app, kubeutil.GetObjectRef(obj))
})
@@ -524,12 +458,12 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *ap
}
compRes := comparisonResult{
syncStatus: &syncStatus,
healthStatus: healthStatus,
resources: resourceSummaries,
managedResources: managedResources,
hooks: hooks,
diffNormalizer: diffNormalizer,
syncStatus: &syncStatus,
healthStatus: healthStatus,
resources: resourceSummaries,
managedResources: managedResources,
reconciliationResult: reconciliation,
diffNormalizer: diffNormalizer,
}
if manifestInfo != nil {
compRes.appSourceType = v1alpha1.ApplicationSourceType(manifestInfo.SourceType)
@@ -571,7 +505,7 @@ func (m *appStateManager) persistRevisionHistory(app *v1alpha1.Application, revi
return err
}
// NewAppStateManager creates new instance of Ksonnet app comparator
// NewAppStateManager creates new instance of AppStateManager
func NewAppStateManager(
db db.ArgoDB,
appclientset appclientset.Interface,

View File

@@ -4,6 +4,10 @@ import (
"encoding/json"
"testing"
"github.com/argoproj/gitops-engine/pkg/health"
synccommon "github.com/argoproj/gitops-engine/pkg/sync/common"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
. "github.com/argoproj/gitops-engine/pkg/utils/testing"
"github.com/stretchr/testify/assert"
v1 "k8s.io/api/apps/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -14,7 +18,6 @@ import (
argoappv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/reposerver/apiclient"
"github.com/argoproj/argo-cd/test"
"github.com/argoproj/argo-cd/util/kube"
)
// TestCompareAppStateEmpty tests comparison when both git and live have no objects
@@ -45,7 +48,7 @@ func TestCompareAppStateMissing(t *testing.T) {
data := fakeData{
apps: []runtime.Object{app},
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{test.PodManifest},
Manifests: []string{PodManifest},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
Revision: "abc123",
@@ -64,7 +67,7 @@ func TestCompareAppStateMissing(t *testing.T) {
// TestCompareAppStateExtra tests when there is an extra object in live but not defined in git
func TestCompareAppStateExtra(t *testing.T) {
pod := test.NewPod()
pod := NewPod()
pod.SetNamespace(test.FakeDestNamespace)
app := newFakeApp()
key := kube.ResourceKey{Group: "", Kind: "Pod", Namespace: test.FakeDestNamespace, Name: app.Name}
@@ -91,8 +94,8 @@ func TestCompareAppStateExtra(t *testing.T) {
// TestCompareAppStateHook checks that hooks are detected during manifest generation, and not
// considered as part of resources when assessing Synced status
func TestCompareAppStateHook(t *testing.T) {
pod := test.NewPod()
pod.SetAnnotations(map[string]string{common.AnnotationKeyHook: "PreSync"})
pod := NewPod()
pod.SetAnnotations(map[string]string{synccommon.AnnotationKeyHook: "PreSync"})
podBytes, _ := json.Marshal(pod)
app := newFakeApp()
data := fakeData{
@@ -111,13 +114,13 @@ func TestCompareAppStateHook(t *testing.T) {
assert.Equal(t, argoappv1.SyncStatusCodeSynced, compRes.syncStatus.Status)
assert.Equal(t, 0, len(compRes.resources))
assert.Equal(t, 0, len(compRes.managedResources))
assert.Equal(t, 1, len(compRes.hooks))
assert.Equal(t, 1, len(compRes.reconciliationResult.Hooks))
assert.Equal(t, 0, len(app.Status.Conditions))
}
// checks that ignored resources are detected, but excluded from status
func TestCompareAppStateCompareOptionIgnoreExtraneous(t *testing.T) {
pod := test.NewPod()
pod := NewPod()
pod.SetAnnotations(map[string]string{common.AnnotationCompareOptions: "IgnoreExtraneous"})
app := newFakeApp()
data := fakeData{
@@ -143,8 +146,8 @@ func TestCompareAppStateCompareOptionIgnoreExtraneous(t *testing.T) {
// TestCompareAppStateExtraHook tests when there is an extra _hook_ object in live but not defined in git
func TestCompareAppStateExtraHook(t *testing.T) {
pod := test.NewPod()
pod.SetAnnotations(map[string]string{common.AnnotationKeyHook: "PreSync"})
pod := NewPod()
pod.SetAnnotations(map[string]string{synccommon.AnnotationKeyHook: "PreSync"})
pod.SetNamespace(test.FakeDestNamespace)
app := newFakeApp()
key := kube.ResourceKey{Group: "", Kind: "Pod", Namespace: test.FakeDestNamespace, Name: app.Name}
@@ -166,7 +169,7 @@ func TestCompareAppStateExtraHook(t *testing.T) {
assert.Equal(t, argoappv1.SyncStatusCodeSynced, compRes.syncStatus.Status)
assert.Equal(t, 1, len(compRes.resources))
assert.Equal(t, 1, len(compRes.managedResources))
assert.Equal(t, 0, len(compRes.hooks))
assert.Equal(t, 0, len(compRes.reconciliationResult.Hooks))
assert.Equal(t, 0, len(app.Status.Conditions))
}
@@ -177,16 +180,22 @@ func toJSON(t *testing.T, obj *unstructured.Unstructured) string {
}
func TestCompareAppStateDuplicatedNamespacedResources(t *testing.T) {
obj1 := test.NewPod()
obj1 := NewPod()
obj1.SetNamespace(test.FakeDestNamespace)
obj2 := test.NewPod()
obj3 := test.NewPod()
obj2 := NewPod()
obj3 := NewPod()
obj3.SetNamespace("kube-system")
obj4 := NewPod()
obj4.SetGenerateName("my-pod")
obj4.SetName("")
obj5 := NewPod()
obj5.SetName("")
obj5.SetGenerateName("my-pod")
app := newFakeApp()
data := fakeData{
manifestResponse: &apiclient.ManifestResponse{
Manifests: []string{toJSON(t, obj1), toJSON(t, obj2), toJSON(t, obj3)},
Manifests: []string{toJSON(t, obj1), toJSON(t, obj2), toJSON(t, obj3), toJSON(t, obj4), toJSON(t, obj5)},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
Revision: "abc123",
@@ -204,7 +213,7 @@ func TestCompareAppStateDuplicatedNamespacedResources(t *testing.T) {
assert.NotNil(t, app.Status.Conditions[0].LastTransitionTime)
assert.Equal(t, argoappv1.ApplicationConditionRepeatedResourceWarning, app.Status.Conditions[0].Type)
assert.Equal(t, "Resource /Pod/fake-dest-ns/my-pod appeared 2 times among application resources.", app.Status.Conditions[0].Message)
assert.Equal(t, 2, len(compRes.resources))
assert.Equal(t, 4, len(compRes.resources))
}
var defaultProj = argoappv1.AppProject{
@@ -250,7 +259,7 @@ func TestSetHealth(t *testing.T) {
compRes := ctrl.appStateManager.CompareAppState(app, &defaultProj, "", app.Spec.Source, false, nil)
assert.Equal(t, compRes.healthStatus.Status, argoappv1.HealthStatusHealthy)
assert.Equal(t, compRes.healthStatus.Status, health.HealthStatusHealthy)
}
func TestSetHealthSelfReferencedApp(t *testing.T) {
@@ -282,7 +291,7 @@ func TestSetHealthSelfReferencedApp(t *testing.T) {
compRes := ctrl.appStateManager.CompareAppState(app, &defaultProj, "", app.Spec.Source, false, nil)
assert.Equal(t, compRes.healthStatus.Status, argoappv1.HealthStatusHealthy)
assert.Equal(t, compRes.healthStatus.Status, health.HealthStatusHealthy)
}
func TestSetManagedResourcesWithOrphanedResources(t *testing.T) {
@@ -352,7 +361,7 @@ func TestReturnUnknownComparisonStateOnSettingLoadError(t *testing.T) {
compRes := ctrl.appStateManager.CompareAppState(app, &defaultProj, "", app.Spec.Source, false, nil)
assert.Equal(t, argoappv1.HealthStatusUnknown, compRes.healthStatus.Status)
assert.Equal(t, health.HealthStatusUnknown, compRes.healthStatus.Status)
assert.Equal(t, argoappv1.SyncStatusCodeUnknown, compRes.syncStatus.Status)
}
@@ -385,13 +394,6 @@ func TestSetManagedResourcesKnownOrphanedResourceExceptions(t *testing.T) {
assert.Equal(t, "guestbook", tree.OrphanedNodes[0].Name)
}
func Test_comparisonResult_obs(t *testing.T) {
assert.Len(t, (&comparisonResult{}).targetObjs(), 0)
assert.Len(t, (&comparisonResult{managedResources: []managedResource{{}}}).targetObjs(), 0)
assert.Len(t, (&comparisonResult{managedResources: []managedResource{{Target: test.NewPod()}}}).targetObjs(), 1)
assert.Len(t, (&comparisonResult{hooks: []*unstructured.Unstructured{{}}}).targetObjs(), 1)
}
func Test_appStateManager_persistRevisionHistory(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(&fakeData{

View File

@@ -3,65 +3,27 @@ package controller
import (
"context"
"fmt"
"reflect"
"sort"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/argoproj/gitops-engine/pkg/sync"
"github.com/argoproj/gitops-engine/pkg/sync/common"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
log "github.com/sirupsen/logrus"
"k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
"k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
"k8s.io/apimachinery/pkg/api/errors"
apierr "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/discovery"
"k8s.io/client-go/dynamic"
"k8s.io/client-go/rest"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/controller/metrics"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
listersv1alpha1 "github.com/argoproj/argo-cd/pkg/client/listers/application/v1alpha1"
"github.com/argoproj/argo-cd/util/argo"
"github.com/argoproj/argo-cd/util/health"
"github.com/argoproj/argo-cd/util/hook"
"github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/lua"
"github.com/argoproj/argo-cd/util/rand"
"github.com/argoproj/argo-cd/util/resource"
)
const (
crdReadinessTimeout = time.Duration(3) * time.Second
)
var syncIdPrefix uint64 = 0
type syncContext struct {
resourceOverrides map[string]v1alpha1.ResourceOverride
appName string
proj *v1alpha1.AppProject
compareResult *comparisonResult
config *rest.Config
dynamicIf dynamic.Interface
disco discovery.DiscoveryInterface
extensionsclientset *clientset.Clientset
kubectl kube.Kubectl
namespace string
server string
syncOp *v1alpha1.SyncOperation
syncRes *v1alpha1.SyncOperationResult
syncResources []v1alpha1.SyncOperationResource
opState *v1alpha1.OperationState
log *log.Entry
// lock to protect concurrent updates of the result list
lock sync.Mutex
}
func (m *appStateManager) SyncAppState(app *v1alpha1.Application, state *v1alpha1.OperationState) {
// Sync requests might be made with ambiguous revisions (e.g. master, HEAD, v1.2.3).
// This can change meaning when resuming operations (e.g. a hook sync). After calculating a
@@ -71,11 +33,10 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, state *v1alpha
var revision string
var syncOp v1alpha1.SyncOperation
var syncRes *v1alpha1.SyncOperationResult
var syncResources []v1alpha1.SyncOperationResource
var source v1alpha1.ApplicationSource
if state.Operation.Sync == nil {
state.Phase = v1alpha1.OperationFailed
state.Phase = common.OperationFailed
state.Message = "Invalid operation request: no operation specified"
return
}
@@ -87,7 +48,7 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, state *v1alpha
// rollback case
source = *state.Operation.Sync.Source
}
syncResources = syncOp.Resources
if state.SyncResult != nil {
syncRes = state.SyncResult
revision = state.SyncResult.Revision
@@ -109,7 +70,7 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, state *v1alpha
proj, err := argo.GetAppProject(&app.Spec, listersv1alpha1.NewAppProjectLister(m.projInformer.GetIndexer()), m.namespace)
if err != nil {
state.Phase = v1alpha1.OperationError
state.Phase = common.OperationError
state.Message = fmt.Sprintf("Failed to load application project: %v", err)
return
}
@@ -121,7 +82,7 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, state *v1alpha
v1alpha1.ApplicationConditionComparisonError: true,
v1alpha1.ApplicationConditionInvalidSpecError: true,
}); len(errConditions) > 0 {
state.Phase = v1alpha1.OperationError
state.Phase = common.OperationError
state.Message = argo.FormatAppConditions(errConditions)
return
}
@@ -132,694 +93,95 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, state *v1alpha
clst, err := m.db.GetCluster(context.Background(), app.Spec.Destination.Server)
if err != nil {
state.Phase = v1alpha1.OperationError
state.Phase = common.OperationError
state.Message = err.Error()
return
}
rawConfig := clst.RawRestConfig()
restConfig := metrics.AddMetricsTransportWrapper(m.metricsServer, app, clst.RESTConfig())
dynamicIf, err := dynamic.NewForConfig(restConfig)
if err != nil {
state.Phase = v1alpha1.OperationError
state.Message = fmt.Sprintf("Failed to initialize dynamic client: %v", err)
return
}
disco, err := discovery.NewDiscoveryClientForConfig(restConfig)
if err != nil {
state.Phase = v1alpha1.OperationError
state.Message = fmt.Sprintf("Failed to initialize discovery client: %v", err)
return
}
extensionsclientset, err := clientset.NewForConfig(restConfig)
if err != nil {
state.Phase = v1alpha1.OperationError
state.Message = fmt.Sprintf("Failed to initialize extensions client: %v", err)
return
}
resourceOverrides, err := m.settingsMgr.GetResourceOverrides()
if err != nil {
state.Phase = v1alpha1.OperationError
state.Phase = common.OperationError
state.Message = fmt.Sprintf("Failed to load resource overrides: %v", err)
return
}
atomic.AddUint64(&syncIdPrefix, 1)
syncId := fmt.Sprintf("%05d-%s", syncIdPrefix, rand.RandString(5))
syncCtx := syncContext{
resourceOverrides: resourceOverrides,
appName: app.Name,
proj: proj,
compareResult: compareResult,
config: restConfig,
dynamicIf: dynamicIf,
disco: disco,
extensionsclientset: extensionsclientset,
kubectl: m.kubectl,
namespace: app.Spec.Destination.Namespace,
server: app.Spec.Destination.Server,
syncOp: &syncOp,
syncRes: syncRes,
syncResources: syncResources,
opState: state,
log: log.WithFields(log.Fields{"application": app.Name, "syncId": syncId}),
logEntry := log.WithFields(log.Fields{"application": app.Name, "syncId": syncId})
initialResourcesRes := make([]common.ResourceSyncResult, 0)
for i, res := range syncRes.Resources {
key := kube.ResourceKey{Group: res.Group, Kind: res.Kind, Namespace: res.Namespace, Name: res.Name}
initialResourcesRes = append(initialResourcesRes, common.ResourceSyncResult{
ResourceKey: key,
Message: res.Message,
Status: res.Status,
HookPhase: res.HookPhase,
HookType: res.HookType,
SyncPhase: res.SyncPhase,
Version: res.Version,
Order: i + 1,
})
}
syncCtx, err := sync.NewSyncContext(compareResult.syncStatus.Revision, compareResult.reconciliationResult, restConfig, rawConfig, m.kubectl, app.Spec.Destination.Namespace, logEntry,
sync.WithHealthOverride(lua.ResourceHealthOverrides(resourceOverrides)),
sync.WithPermissionValidator(func(un *unstructured.Unstructured, res *v1.APIResource) error {
if !proj.IsGroupKindPermitted(un.GroupVersionKind().GroupKind(), res.Namespaced) {
return fmt.Errorf("Resource %s:%s is not permitted in project %s.", un.GroupVersionKind().Group, un.GroupVersionKind().Kind, proj.Name)
}
if res.Namespaced && !proj.IsDestinationPermitted(v1alpha1.ApplicationDestination{Namespace: un.GetNamespace(), Server: app.Spec.Destination.Server}) {
return fmt.Errorf("namespace %v is not permitted in project '%s'", un.GetNamespace(), proj.Name)
}
return nil
}),
sync.WithOperationSettings(syncOp.DryRun, syncOp.Prune, syncOp.SyncStrategy.Force(), syncOp.IsApplyStrategy() || len(syncOp.Resources) > 0),
sync.WithInitialState(state.Phase, state.Message, initialResourcesRes),
sync.WithResourcesFilter(func(key kube.ResourceKey, target *unstructured.Unstructured, live *unstructured.Unstructured) bool {
return len(syncOp.Resources) == 0 || argo.ContainsSyncResource(key.Name, schema.GroupVersionKind{Kind: key.Kind, Group: key.Group}, syncOp.Resources)
}),
sync.WithManifestValidation(!syncOp.SyncOptions.HasOption("Validate=false")),
)
if err != nil {
state.Phase = common.OperationError
state.Message = fmt.Sprintf("failed to record sync to history: %v", err)
}
start := time.Now()
if state.Phase == v1alpha1.OperationTerminating {
syncCtx.terminate()
if state.Phase == common.OperationTerminating {
syncCtx.Terminate()
} else {
syncCtx.sync()
syncCtx.Sync()
}
var resState []common.ResourceSyncResult
state.Phase, state.Message, resState = syncCtx.GetState()
state.SyncResult.Resources = nil
for _, res := range resState {
state.SyncResult.Resources = append(state.SyncResult.Resources, &v1alpha1.ResourceResult{
HookType: res.HookType,
Group: res.ResourceKey.Group,
Kind: res.ResourceKey.Kind,
Namespace: res.ResourceKey.Namespace,
Name: res.ResourceKey.Name,
Version: res.Version,
SyncPhase: res.SyncPhase,
HookPhase: res.HookPhase,
Status: res.Status,
Message: res.Message,
})
}
syncCtx.log.WithField("duration", time.Since(start)).Info("sync/terminate complete")
logEntry.WithField("duration", time.Since(start)).Info("sync/terminate complete")
if !syncOp.DryRun && !syncCtx.isSelectiveSync() && syncCtx.opState.Phase.Successful() {
if !syncOp.DryRun && len(syncOp.Resources) == 0 && state.Phase.Successful() {
err := m.persistRevisionHistory(app, compareResult.syncStatus.Revision, source)
if err != nil {
syncCtx.setOperationPhase(v1alpha1.OperationError, fmt.Sprintf("failed to record sync to history: %v", err))
state.Phase = common.OperationError
state.Message = fmt.Sprintf("failed to record sync to history: %v", err)
}
}
}
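Because the hunk above interleaves the removed syncContext machinery with the new gitops-engine calls, the added control flow is easier to see in one piece. A condensed sketch using only the calls that appear in this diff (Sync, Terminate, GetState), with error handling and the NewSyncContext options trimmed; it assumes state.SyncResult is already initialized:

package controller

import (
	"github.com/argoproj/gitops-engine/pkg/sync"
	"github.com/argoproj/gitops-engine/pkg/sync/common"

	"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
)

// applyEngineState condenses the tail of the new SyncAppState: run (or terminate)
// the engine-managed sync, then copy its phase, message, and per-resource
// results back onto the Application's operation state.
func applyEngineState(syncCtx sync.SyncContext, state *v1alpha1.OperationState) {
	if state.Phase == common.OperationTerminating {
		syncCtx.Terminate()
	} else {
		syncCtx.Sync()
	}
	var resState []common.ResourceSyncResult
	state.Phase, state.Message, resState = syncCtx.GetState()
	state.SyncResult.Resources = nil
	for _, res := range resState {
		state.SyncResult.Resources = append(state.SyncResult.Resources, &v1alpha1.ResourceResult{
			Group:     res.ResourceKey.Group,
			Kind:      res.ResourceKey.Kind,
			Namespace: res.ResourceKey.Namespace,
			Name:      res.ResourceKey.Name,
			Version:   res.Version,
			SyncPhase: res.SyncPhase,
			HookType:  res.HookType,
			HookPhase: res.HookPhase,
			Status:    res.Status,
			Message:   res.Message,
		})
	}
}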
// sync performs the actual apply or hook-based sync
func (sc *syncContext) sync() {
sc.log.WithFields(log.Fields{"isSelectiveSync": sc.isSelectiveSync(), "skipHooks": sc.skipHooks(), "started": sc.started()}).Info("syncing")
tasks, ok := sc.getSyncTasks()
if !ok {
sc.setOperationPhase(v1alpha1.OperationFailed, "one or more synchronization tasks are not valid")
return
}
sc.log.WithFields(log.Fields{"tasks": tasks, "isSelectiveSync": sc.isSelectiveSync()}).Info("tasks")
// Perform a `kubectl apply --dry-run` against all the manifests. This will detect most (but
// not all) validation issues with the user's manifests (e.g. it will detect syntax issues, but
// will not detect if they are mutating immutable fields). If anything fails, we will refuse
// to perform the sync. We only wish to do this once per operation; performing additional dry-runs
// is harmless, but redundant. The indicator we use to detect whether we have already performed
// the dry-run for this operation is whether the resource or hook result list is still empty.
if !sc.started() {
sc.log.Debug("dry-run")
if sc.runTasks(tasks, true) == failed {
sc.setOperationPhase(v1alpha1.OperationFailed, "one or more objects failed to apply (dry run)")
return
}
}
// update status of any tasks that are running, note that this must exclude pruning tasks
for _, task := range tasks.Filter(func(t *syncTask) bool {
// just occasionally, you can be running yet not have a live resource
return t.running() && t.liveObj != nil
}) {
if task.isHook() {
// update the hook's result
operationState, message, err := sc.getOperationPhase(task.liveObj)
if err != nil {
sc.setResourceResult(task, "", v1alpha1.OperationError, fmt.Sprintf("failed to get resource health: %v", err))
} else {
sc.setResourceResult(task, "", operationState, message)
// maybe delete the hook
if task.needsDeleting() {
err := sc.deleteResource(task)
if err != nil && !errors.IsNotFound(err) {
sc.setResourceResult(task, "", v1alpha1.OperationError, fmt.Sprintf("failed to delete resource: %v", err))
}
}
}
} else {
// this must be calculated on the live object
healthStatus, err := health.GetResourceHealth(task.liveObj, sc.resourceOverrides)
if err == nil {
log.WithFields(log.Fields{"task": task, "healthStatus": healthStatus}).Debug("attempting to update health of running task")
if healthStatus == nil {
// some objects (e.g. secret) do not have health, so they automatically succeed
sc.setResourceResult(task, task.syncStatus, v1alpha1.OperationSucceeded, task.message)
} else {
switch healthStatus.Status {
case v1alpha1.HealthStatusHealthy:
sc.setResourceResult(task, task.syncStatus, v1alpha1.OperationSucceeded, healthStatus.Message)
case v1alpha1.HealthStatusDegraded:
sc.setResourceResult(task, task.syncStatus, v1alpha1.OperationFailed, healthStatus.Message)
}
}
}
}
}
// if (a) we are multi-step and we have any running tasks,
// or (b) there are any running hooks,
// then wait...
multiStep := tasks.multiStep()
if tasks.Any(func(t *syncTask) bool { return (multiStep || t.isHook()) && t.running() }) {
sc.setOperationPhase(v1alpha1.OperationRunning, "one or more tasks are running")
return
}
// syncFailTasks only run during failure, so separate them from regular tasks
syncFailTasks, tasks := tasks.Split(func(t *syncTask) bool { return t.phase == v1alpha1.SyncPhaseSyncFail })
// if there are any completed but unsuccessful tasks, sync is a failure.
if tasks.Any(func(t *syncTask) bool { return t.completed() && !t.successful() }) {
sc.setOperationFailed(syncFailTasks, "one or more synchronization tasks completed unsuccessfully")
return
}
sc.log.WithFields(log.Fields{"tasks": tasks}).Debug("filtering out non-pending tasks")
// remove tasks that are completed, we can assume that there are no running tasks
tasks = tasks.Filter(func(t *syncTask) bool { return t.pending() })
// If no sync tasks were generated (e.g., in case all application manifests have been removed),
// the sync operation is successful.
if len(tasks) == 0 {
sc.setOperationPhase(v1alpha1.OperationSucceeded, "successfully synced (no more tasks)")
return
}
// remove any tasks not in this wave
phase := tasks.phase()
wave := tasks.wave()
// if it is the last phase/wave and the only remaining tasks are non-hooks, then we are successful
// EVEN if those objects subsequently degraded
// This handles the common case where neither hooks nor waves are used and a sync equates to simply an (asynchronous) kubectl apply of manifests, which succeeds immediately.
complete := !tasks.Any(func(t *syncTask) bool { return t.phase != phase || wave != t.wave() || t.isHook() })
sc.log.WithFields(log.Fields{"phase": phase, "wave": wave, "tasks": tasks, "syncFailTasks": syncFailTasks}).Debug("filtering tasks in correct phase and wave")
tasks = tasks.Filter(func(t *syncTask) bool { return t.phase == phase && t.wave() == wave })
sc.setOperationPhase(v1alpha1.OperationRunning, "one or more tasks are running")
sc.log.WithFields(log.Fields{"tasks": tasks}).Debug("wet-run")
runState := sc.runTasks(tasks, false)
switch runState {
case failed:
sc.setOperationFailed(syncFailTasks, "one or more objects failed to apply")
case successful:
if complete {
sc.setOperationPhase(v1alpha1.OperationSucceeded, "successfully synced (all tasks run)")
}
}
}
func (sc *syncContext) setOperationFailed(syncFailTasks syncTasks, message string) {
if len(syncFailTasks) > 0 {
// if all the failure hooks are completed, don't run them again, and mark the sync as failed
if syncFailTasks.All(func(task *syncTask) bool { return task.completed() }) {
sc.setOperationPhase(v1alpha1.OperationFailed, message)
return
}
// otherwise, we need to start the failure hooks, and then return without setting
// the phase, so we make sure we have at least one more sync
sc.log.WithFields(log.Fields{"syncFailTasks": syncFailTasks}).Debug("running sync fail tasks")
if sc.runTasks(syncFailTasks, false) == failed {
sc.setOperationPhase(v1alpha1.OperationFailed, message)
}
} else {
sc.setOperationPhase(v1alpha1.OperationFailed, message)
}
}
func (sc *syncContext) started() bool {
return len(sc.syncRes.Resources) > 0
}
func (sc *syncContext) isSelectiveSync() bool {
// we've selected no resources
if sc.syncResources == nil {
return false
}
// map both lists into string
var a []string
for _, r := range sc.compareResult.resources {
if !r.Hook {
a = append(a, fmt.Sprintf("%s:%s:%s", r.Group, r.Kind, r.Name))
}
}
sort.Strings(a)
var b []string
for _, r := range sc.syncResources {
b = append(b, fmt.Sprintf("%s:%s:%s", r.Group, r.Kind, r.Name))
}
sort.Strings(b)
return !reflect.DeepEqual(a, b)
}
// this essentially enforces the old "apply" behaviour
func (sc *syncContext) skipHooks() bool {
// All objects passed a `kubectl apply --dry-run`, so we are now ready to actually perform the sync.
// default sync strategy to hook if no strategy
return sc.syncOp.IsApplyStrategy() || sc.isSelectiveSync()
}
func (sc *syncContext) containsResource(resourceState managedResource) bool {
return !sc.isSelectiveSync() ||
(resourceState.Live != nil && argo.ContainsSyncResource(resourceState.Live.GetName(), resourceState.Live.GroupVersionKind(), sc.syncResources)) ||
(resourceState.Target != nil && argo.ContainsSyncResource(resourceState.Target.GetName(), resourceState.Target.GroupVersionKind(), sc.syncResources))
}
// generates the list of sync tasks we will be performing during this sync.
func (sc *syncContext) getSyncTasks() (_ syncTasks, successful bool) {
resourceTasks := syncTasks{}
successful = true
for _, resource := range sc.compareResult.managedResources {
if !sc.containsResource(resource) {
sc.log.WithFields(log.Fields{"group": resource.Group, "kind": resource.Kind, "name": resource.Name}).
Debug("skipping")
continue
}
obj := obj(resource.Target, resource.Live)
// this creates garbage tasks
if hook.IsHook(obj) {
sc.log.WithFields(log.Fields{"group": obj.GroupVersionKind().Group, "kind": obj.GetKind(), "namespace": obj.GetNamespace(), "name": obj.GetName()}).
Debug("skipping hook")
continue
}
for _, phase := range syncPhases(obj) {
resourceTasks = append(resourceTasks, &syncTask{phase: phase, targetObj: resource.Target, liveObj: resource.Live})
}
}
sc.log.WithFields(log.Fields{"resourceTasks": resourceTasks}).Debug("tasks from managed resources")
hookTasks := syncTasks{}
if !sc.skipHooks() {
for _, obj := range sc.compareResult.hooks {
for _, phase := range syncPhases(obj) {
// Hook resource names are deterministic, whether they are defined by the user (metadata.name),
// or formulated at the time of the operation (metadata.generateName). If the user specifies
// metadata.generateName, then we will generate a formulated metadata.name before submission.
targetObj := obj.DeepCopy()
if targetObj.GetName() == "" {
postfix := strings.ToLower(fmt.Sprintf("%s-%s-%d", sc.syncRes.Revision[0:7], phase, sc.opState.StartedAt.UTC().Unix()))
generateName := obj.GetGenerateName()
targetObj.SetName(fmt.Sprintf("%s%s", generateName, postfix))
}
hookTasks = append(hookTasks, &syncTask{phase: phase, targetObj: targetObj})
}
}
}
sc.log.WithFields(log.Fields{"hookTasks": hookTasks}).Debug("tasks from hooks")
tasks := resourceTasks
tasks = append(tasks, hookTasks...)
// enrich target objects with the namespace
for _, task := range tasks {
if task.targetObj == nil {
continue
}
if task.targetObj.GetNamespace() == "" {
// If the target object's namespace is empty, we set the namespace on the object. We do
// this even though it might be a cluster-scoped resource. This prevents any
// possibility of the resource being unintentionally created in the default
// namespace during the `kubectl apply`
task.targetObj = task.targetObj.DeepCopy()
task.targetObj.SetNamespace(sc.namespace)
}
}
// enrich task with live obj
for _, task := range tasks {
if task.targetObj == nil || task.liveObj != nil {
continue
}
task.liveObj = sc.liveObj(task.targetObj)
}
// enrich tasks with the result
for _, task := range tasks {
_, result := sc.syncRes.Resources.Find(task.group(), task.kind(), task.namespace(), task.name(), task.phase)
if result != nil {
task.syncStatus = result.Status
task.operationState = result.HookPhase
task.message = result.Message
}
}
// check permissions
for _, task := range tasks {
serverRes, err := kube.ServerResourceForGroupVersionKind(sc.disco, task.groupVersionKind())
if err != nil {
// Special case for custom resources: if CRD is not yet known by the K8s API server,
// skip verification during `kubectl apply --dry-run` since we expect the CRD
// to be created during app synchronization.
if apierr.IsNotFound(err) && sc.hasCRDOfGroupKind(task.group(), task.kind()) {
sc.log.WithFields(log.Fields{"task": task}).Debug("skip dry-run for custom resource")
task.skipDryRun = true
} else {
sc.setResourceResult(task, v1alpha1.ResultCodeSyncFailed, "", err.Error())
successful = false
}
} else {
if !sc.proj.IsGroupKindPermitted(schema.GroupKind{Group: task.group(), Kind: task.kind()}, serverRes.Namespaced) {
sc.setResourceResult(task, v1alpha1.ResultCodeSyncFailed, "", fmt.Sprintf("Resource %s:%s is not permitted in project %s.", task.group(), task.kind(), sc.proj.Name))
successful = false
}
if serverRes.Namespaced && !sc.proj.IsDestinationPermitted(v1alpha1.ApplicationDestination{Namespace: task.namespace(), Server: sc.server}) {
sc.setResourceResult(task, v1alpha1.ResultCodeSyncFailed, "", fmt.Sprintf("namespace %v is not permitted in project '%s'", task.namespace(), sc.proj.Name))
successful = false
}
}
}
sort.Sort(tasks)
return tasks, successful
}
func obj(a, b *unstructured.Unstructured) *unstructured.Unstructured {
if a != nil {
return a
} else {
return b
}
}
func (sc *syncContext) liveObj(obj *unstructured.Unstructured) *unstructured.Unstructured {
for _, resource := range sc.compareResult.managedResources {
if resource.Group == obj.GroupVersionKind().Group &&
resource.Kind == obj.GetKind() &&
// cluster scoped objects will not have a namespace, even if the user has defined it
(resource.Namespace == "" || resource.Namespace == obj.GetNamespace()) &&
resource.Name == obj.GetName() {
return resource.Live
}
}
return nil
}
func (sc *syncContext) setOperationPhase(phase v1alpha1.OperationPhase, message string) {
if sc.opState.Phase != phase || sc.opState.Message != message {
sc.log.Infof("Updating operation state. phase: %s -> %s, message: '%s' -> '%s'", sc.opState.Phase, phase, sc.opState.Message, message)
}
sc.opState.Phase = phase
sc.opState.Message = message
}
// ensureCRDReady waits until the specified CRD is ready (established condition is true). The method is best effort: it does not fail even if the CRD is not ready within the timeout.
func (sc *syncContext) ensureCRDReady(name string) {
_ = wait.PollImmediate(time.Duration(100)*time.Millisecond, crdReadinessTimeout, func() (bool, error) {
crd, err := sc.extensionsclientset.ApiextensionsV1beta1().CustomResourceDefinitions().Get(name, metav1.GetOptions{})
if err != nil {
return false, err
}
for _, condition := range crd.Status.Conditions {
if condition.Type == v1beta1.Established {
return condition.Status == v1beta1.ConditionTrue, nil
}
}
return false, nil
})
}
// applyObject performs a `kubectl apply` of a single resource
func (sc *syncContext) applyObject(targetObj *unstructured.Unstructured, dryRun, force, validate bool) (v1alpha1.ResultCode, string) {
message, err := sc.kubectl.ApplyResource(sc.config, targetObj, targetObj.GetNamespace(), dryRun, force, validate)
if err != nil {
return v1alpha1.ResultCodeSyncFailed, err.Error()
}
if kube.IsCRD(targetObj) && !dryRun {
sc.ensureCRDReady(targetObj.GetName())
}
return v1alpha1.ResultCodeSynced, message
}
// pruneObject deletes the object if prune is true and dryRun is false; otherwise it returns an appropriate message about why pruning was skipped
func (sc *syncContext) pruneObject(liveObj *unstructured.Unstructured, prune, dryRun bool) (v1alpha1.ResultCode, string) {
if !prune {
return v1alpha1.ResultCodePruneSkipped, "ignored (requires pruning)"
} else if resource.HasAnnotationOption(liveObj, common.AnnotationSyncOptions, "Prune=false") {
return v1alpha1.ResultCodePruneSkipped, "ignored (no prune)"
} else {
if dryRun {
return v1alpha1.ResultCodePruned, "pruned (dry run)"
} else {
// Skip deletion if object is already marked for deletion, so we don't cause a resource update hotloop
deletionTimestamp := liveObj.GetDeletionTimestamp()
if deletionTimestamp == nil || deletionTimestamp.IsZero() {
err := sc.kubectl.DeleteResource(sc.config, liveObj.GroupVersionKind(), liveObj.GetName(), liveObj.GetNamespace(), false)
if err != nil {
return v1alpha1.ResultCodeSyncFailed, err.Error()
}
}
return v1alpha1.ResultCodePruned, "pruned"
}
}
}
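The prune opt-out above hinges on a single annotation check. A tiny sketch of that predicate, using the same helpers this (pre-refactor) file imports; the wrapper name pruneSkipped is ours:

package controller

import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"

	"github.com/argoproj/argo-cd/common"
	"github.com/argoproj/argo-cd/util/resource"
)

// pruneSkipped reports whether a live object opted out of pruning via the
// argocd.argoproj.io/sync-options annotation, the check pruneObject performs
// before deciding between ResultCodePruned and ResultCodePruneSkipped.
func pruneSkipped(liveObj *unstructured.Unstructured) bool {
	return resource.HasAnnotationOption(liveObj, common.AnnotationSyncOptions, "Prune=false")
}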
func (sc *syncContext) hasCRDOfGroupKind(group string, kind string) bool {
for _, obj := range sc.compareResult.targetObjs() {
if kube.IsCRD(obj) {
crdGroup, ok, err := unstructured.NestedString(obj.Object, "spec", "group")
if err != nil || !ok {
continue
}
crdKind, ok, err := unstructured.NestedString(obj.Object, "spec", "names", "kind")
if err != nil || !ok {
continue
}
if group == crdGroup && crdKind == kind {
return true
}
}
}
return false
}
// terminate looks for any running jobs/workflow hooks and deletes the resource
func (sc *syncContext) terminate() {
terminateSuccessful := true
sc.log.Debug("terminating")
tasks, _ := sc.getSyncTasks()
for _, task := range tasks {
if !task.isHook() || task.liveObj == nil {
continue
}
phase, msg, err := sc.getOperationPhase(task.liveObj)
if err != nil {
sc.setOperationPhase(v1alpha1.OperationError, fmt.Sprintf("Failed to get hook health: %v", err))
return
}
if phase == v1alpha1.OperationRunning {
err := sc.deleteResource(task)
if err != nil {
sc.setResourceResult(task, "", v1alpha1.OperationFailed, fmt.Sprintf("Failed to delete: %v", err))
terminateSuccessful = false
} else {
sc.setResourceResult(task, "", v1alpha1.OperationSucceeded, fmt.Sprintf("Deleted"))
}
} else {
sc.setResourceResult(task, "", phase, msg)
}
}
if terminateSuccessful {
sc.setOperationPhase(v1alpha1.OperationFailed, "Operation terminated")
} else {
sc.setOperationPhase(v1alpha1.OperationError, "Operation termination had errors")
}
}
func (sc *syncContext) deleteResource(task *syncTask) error {
sc.log.WithFields(log.Fields{"task": task}).Debug("deleting resource")
resIf, err := sc.getResourceIf(task)
if err != nil {
return err
}
propagationPolicy := metav1.DeletePropagationForeground
return resIf.Delete(task.name(), &metav1.DeleteOptions{PropagationPolicy: &propagationPolicy})
}
func (sc *syncContext) getResourceIf(task *syncTask) (dynamic.ResourceInterface, error) {
apiResource, err := kube.ServerResourceForGroupVersionKind(sc.disco, task.groupVersionKind())
if err != nil {
return nil, err
}
res := kube.ToGroupVersionResource(task.groupVersionKind().GroupVersion().String(), apiResource)
resIf := kube.ToResourceInterface(sc.dynamicIf, apiResource, res, task.namespace())
return resIf, err
}
var operationPhases = map[v1alpha1.ResultCode]v1alpha1.OperationPhase{
v1alpha1.ResultCodeSynced: v1alpha1.OperationRunning,
v1alpha1.ResultCodeSyncFailed: v1alpha1.OperationFailed,
v1alpha1.ResultCodePruned: v1alpha1.OperationSucceeded,
v1alpha1.ResultCodePruneSkipped: v1alpha1.OperationSucceeded,
}
// tri-state
type runState = int
const (
successful = iota
pending
failed
)
func (sc *syncContext) runTasks(tasks syncTasks, dryRun bool) runState {
dryRun = dryRun || sc.syncOp.DryRun
sc.log.WithFields(log.Fields{"numTasks": len(tasks), "dryRun": dryRun}).Debug("running tasks")
runState := successful
var createTasks syncTasks
var pruneTasks syncTasks
for _, task := range tasks {
if task.isPrune() {
pruneTasks = append(pruneTasks, task)
} else {
createTasks = append(createTasks, task)
}
}
// prune first
{
var wg sync.WaitGroup
for _, task := range pruneTasks {
wg.Add(1)
go func(t *syncTask) {
defer wg.Done()
logCtx := sc.log.WithFields(log.Fields{"dryRun": dryRun, "task": t})
logCtx.Debug("pruning")
result, message := sc.pruneObject(t.liveObj, sc.syncOp.Prune, dryRun)
if result == v1alpha1.ResultCodeSyncFailed {
runState = failed
logCtx.WithField("message", message).Info("pruning failed")
}
if !dryRun || sc.syncOp.DryRun || result == v1alpha1.ResultCodeSyncFailed {
sc.setResourceResult(t, result, operationPhases[result], message)
}
}(task)
}
wg.Wait()
}
// delete anything that needs deleting
if runState == successful && createTasks.Any(func(t *syncTask) bool { return t.needsDeleting() }) {
var wg sync.WaitGroup
for _, task := range createTasks.Filter(func(t *syncTask) bool { return t.needsDeleting() }) {
wg.Add(1)
go func(t *syncTask) {
defer wg.Done()
sc.log.WithFields(log.Fields{"dryRun": dryRun, "task": t}).Debug("deleting")
if !dryRun {
err := sc.deleteResource(t)
if err != nil {
// it is possible to get a race condition here, such that the resource does not exist when
// delete is requested; we treat this as a no-op
if !apierr.IsNotFound(err) {
runState = failed
sc.setResourceResult(t, "", v1alpha1.OperationError, fmt.Sprintf("failed to delete resource: %v", err))
}
} else {
// if there is anything that needs deleting, we are at best now in pending and
// want to return and wait for sync to be invoked again
runState = pending
}
}
}(task)
}
wg.Wait()
}
// finally create resources
if runState == successful {
processCreateTasks := func(tasks syncTasks) {
var createWg sync.WaitGroup
for _, task := range tasks {
if dryRun && task.skipDryRun {
continue
}
createWg.Add(1)
go func(t *syncTask) {
defer createWg.Done()
logCtx := sc.log.WithFields(log.Fields{"dryRun": dryRun, "task": t})
logCtx.Debug("applying")
validate := !(sc.syncOp.SyncOptions.HasOption("Validate=false") || resource.HasAnnotationOption(t.targetObj, common.AnnotationSyncOptions, "Validate=false"))
result, message := sc.applyObject(t.targetObj, dryRun, sc.syncOp.SyncStrategy.Force(), validate)
if result == v1alpha1.ResultCodeSyncFailed {
logCtx.WithField("message", message).Info("apply failed")
runState = failed
}
if !dryRun || sc.syncOp.DryRun || result == v1alpha1.ResultCodeSyncFailed {
sc.setResourceResult(t, result, operationPhases[result], message)
}
}(task)
}
createWg.Wait()
}
var tasksGroup syncTasks
for _, task := range createTasks {
// Only wait if the type of the next task is different from the previous type
if len(tasksGroup) > 0 && tasksGroup[0].targetObj.GetKind() != task.kind() {
processCreateTasks(tasksGroup)
tasksGroup = syncTasks{task}
} else {
tasksGroup = append(tasksGroup, task)
}
}
if len(tasksGroup) > 0 {
processCreateTasks(tasksGroup)
}
}
return runState
}
// setResourceResult sets a resource's details in the SyncResult.Resources list
func (sc *syncContext) setResourceResult(task *syncTask, syncStatus v1alpha1.ResultCode, operationState v1alpha1.OperationPhase, message string) {
task.syncStatus = syncStatus
task.operationState = operationState
// we always want to keep the latest message
if message != "" {
task.message = message
}
sc.lock.Lock()
defer sc.lock.Unlock()
i, existing := sc.syncRes.Resources.Find(task.group(), task.kind(), task.namespace(), task.name(), task.phase)
res := v1alpha1.ResourceResult{
Group: task.group(),
Version: task.version(),
Kind: task.kind(),
Namespace: task.namespace(),
Name: task.name(),
Status: task.syncStatus,
Message: task.message,
HookType: task.hookType(),
HookPhase: task.operationState,
SyncPhase: task.phase,
}
logCtx := sc.log.WithFields(log.Fields{"namespace": task.namespace(), "kind": task.kind(), "name": task.name(), "phase": task.phase})
if existing != nil {
// update existing value
if res.Status != existing.Status || res.HookPhase != existing.HookPhase || res.Message != existing.Message {
logCtx.Infof("updating resource result, status: '%s' -> '%s', phase '%s' -> '%s', message '%s' -> '%s'",
existing.Status, res.Status,
existing.HookPhase, res.HookPhase,
existing.Message, res.Message)
}
sc.syncRes.Resources[i] = &res
} else {
logCtx.Infof("adding resource result, status: '%s', phase: '%s', message: '%s'", res.Status, res.HookPhase, res.Message)
sc.syncRes.Resources = append(sc.syncRes.Resources, &res)
}
}

View File

@@ -1,35 +0,0 @@
package controller
import (
"fmt"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util/health"
)
// getOperationPhase returns a hook status from a _live_ unstructured object
func (sc *syncContext) getOperationPhase(hook *unstructured.Unstructured) (v1alpha1.OperationPhase, string, error) {
phase := v1alpha1.OperationSucceeded
message := fmt.Sprintf("%s created", hook.GetName())
resHealth, err := health.GetResourceHealth(hook, sc.resourceOverrides)
if err != nil {
return "", "", err
}
if resHealth != nil {
switch resHealth.Status {
case v1alpha1.HealthStatusUnknown, v1alpha1.HealthStatusDegraded:
phase = v1alpha1.OperationFailed
message = resHealth.Message
case v1alpha1.HealthStatusProgressing, v1alpha1.HealthStatusSuspended:
phase = v1alpha1.OperationRunning
message = resHealth.Message
case v1alpha1.HealthStatusHealthy:
phase = v1alpha1.OperationSucceeded
message = resHealth.Message
}
}
return phase, message, nil
}

View File

@@ -1,29 +0,0 @@
package controller
import (
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util/hook"
)
func syncPhases(obj *unstructured.Unstructured) []v1alpha1.SyncPhase {
if hook.Skip(obj) {
return nil
} else if hook.IsHook(obj) {
phasesMap := make(map[v1alpha1.SyncPhase]bool)
for _, hookType := range hook.Types(obj) {
switch hookType {
case v1alpha1.HookTypePreSync, v1alpha1.HookTypeSync, v1alpha1.HookTypePostSync, v1alpha1.HookTypeSyncFail:
phasesMap[v1alpha1.SyncPhase(hookType)] = true
}
}
var phases []v1alpha1.SyncPhase
for phase := range phasesMap {
phases = append(phases, phase)
}
return phases
} else {
return []v1alpha1.SyncPhase{v1alpha1.SyncPhaseSync}
}
}


@@ -1,57 +0,0 @@
package controller
import (
"testing"
"github.com/stretchr/testify/assert"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
. "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/test"
)
func TestSyncPhaseNone(t *testing.T) {
assert.Equal(t, []SyncPhase{SyncPhaseSync}, syncPhases(&unstructured.Unstructured{}))
}
func TestSyncPhasePreSync(t *testing.T) {
assert.Equal(t, []SyncPhase{SyncPhasePreSync}, syncPhases(pod("PreSync")))
}
func TestSyncPhaseSync(t *testing.T) {
assert.Equal(t, []SyncPhase{SyncPhaseSync}, syncPhases(pod("Sync")))
}
func TestSyncPhaseSkip(t *testing.T) {
assert.Nil(t, syncPhases(pod("Skip")))
}
// garbage hooks are still hooks, but have no phases, because the user misspelled the hook type
func TestSyncPhaseGarbage(t *testing.T) {
assert.Nil(t, syncPhases(pod("Garbage")))
}
func TestSyncPhasePost(t *testing.T) {
assert.Equal(t, []SyncPhase{SyncPhasePostSync}, syncPhases(pod("PostSync")))
}
func TestSyncPhaseFail(t *testing.T) {
assert.Equal(t, []SyncPhase{SyncPhaseSyncFail}, syncPhases(pod("SyncFail")))
}
func TestSyncPhaseTwoPhases(t *testing.T) {
assert.ElementsMatch(t, []SyncPhase{SyncPhasePreSync, SyncPhasePostSync}, syncPhases(pod("PreSync,PostSync")))
}
func TestSyncDuplicatedPhases(t *testing.T) {
assert.ElementsMatch(t, []SyncPhase{SyncPhasePreSync}, syncPhases(pod("PreSync,PreSync")))
assert.ElementsMatch(t, []SyncPhase{SyncPhasePreSync}, syncPhases(podWithHelmHook("pre-install,pre-upgrade")))
}
func pod(hookType string) *unstructured.Unstructured {
return test.Annotate(test.NewPod(), "argocd.argoproj.io/hook", hookType)
}
func podWithHelmHook(hookType string) *unstructured.Unstructured {
return test.Annotate(test.NewPod(), "helm.sh/hook", hookType)
}


@@ -1,130 +0,0 @@
package controller
import (
"fmt"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util/hook"
"github.com/argoproj/argo-cd/util/resource/syncwaves"
)
// syncTask holds the live and target object. At least one should be non-nil. A targetObj of nil
// indicates the live object needs to be pruned. A liveObj of nil indicates the object has yet to
// be deployed
type syncTask struct {
phase v1alpha1.SyncPhase
liveObj *unstructured.Unstructured
targetObj *unstructured.Unstructured
skipDryRun bool
syncStatus v1alpha1.ResultCode
operationState v1alpha1.OperationPhase
message string
}
func ternary(val bool, a, b string) string {
if val {
return a
} else {
return b
}
}
func (t *syncTask) String() string {
return fmt.Sprintf("%s/%d %s %s/%s:%s/%s %s->%s (%s,%s,%s)",
t.phase, t.wave(),
ternary(t.isHook(), "hook", "resource"), t.group(), t.kind(), t.namespace(), t.name(),
ternary(t.liveObj != nil, "obj", "nil"), ternary(t.targetObj != nil, "obj", "nil"),
t.syncStatus, t.operationState, t.message,
)
}
func (t *syncTask) isPrune() bool {
return t.targetObj == nil
}
// obj returns the target object (if it exists), otherwise the live object
// use with caution - often you explicitly want the live object, not the target object
func (t *syncTask) obj() *unstructured.Unstructured {
return obj(t.targetObj, t.liveObj)
}
func (t *syncTask) wave() int {
return syncwaves.Wave(t.obj())
}
func (t *syncTask) isHook() bool {
return hook.IsHook(t.obj())
}
func (t *syncTask) group() string {
return t.groupVersionKind().Group
}
func (t *syncTask) kind() string {
return t.groupVersionKind().Kind
}
func (t *syncTask) version() string {
return t.groupVersionKind().Version
}
func (t *syncTask) groupVersionKind() schema.GroupVersionKind {
return t.obj().GroupVersionKind()
}
func (t *syncTask) name() string {
return t.obj().GetName()
}
func (t *syncTask) namespace() string {
return t.obj().GetNamespace()
}
func (t *syncTask) pending() bool {
return t.operationState == ""
}
func (t *syncTask) running() bool {
return t.operationState.Running()
}
func (t *syncTask) completed() bool {
return t.operationState.Completed()
}
func (t *syncTask) successful() bool {
return t.operationState.Successful()
}
func (t *syncTask) failed() bool {
return t.operationState.Failed()
}
func (t *syncTask) hookType() v1alpha1.HookType {
if t.isHook() {
return v1alpha1.HookType(t.phase)
} else {
return ""
}
}
func (t *syncTask) hasHookDeletePolicy(policy v1alpha1.HookDeletePolicy) bool {
// a delete policy is meaningless if the object is not a hook
if !t.isHook() {
return false
}
for _, p := range hook.DeletePolicies(t.obj()) {
if p == policy {
return true
}
}
return false
}
func (t *syncTask) needsDeleting() bool {
return t.liveObj != nil && (t.pending() && t.hasHookDeletePolicy(v1alpha1.HookDeletePolicyBeforeHookCreation) ||
t.successful() && t.hasHookDeletePolicy(v1alpha1.HookDeletePolicyHookSucceeded) ||
t.failed() && t.hasHookDeletePolicy(v1alpha1.HookDeletePolicyHookFailed))
}


@@ -1,66 +0,0 @@
package controller
import (
"testing"
"github.com/stretchr/testify/assert"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
. "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
. "github.com/argoproj/argo-cd/test"
)
func Test_syncTask_hookType(t *testing.T) {
type fields struct {
phase SyncPhase
liveObj *unstructured.Unstructured
}
tests := []struct {
name string
fields fields
want HookType
}{
{"Empty", fields{SyncPhaseSync, NewPod()}, ""},
{"PreSyncHook", fields{SyncPhasePreSync, NewHook(HookTypePreSync)}, HookTypePreSync},
{"SyncHook", fields{SyncPhaseSync, NewHook(HookTypeSync)}, HookTypeSync},
{"PostSyncHook", fields{SyncPhasePostSync, NewHook(HookTypePostSync)}, HookTypePostSync},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
task := &syncTask{
phase: tt.fields.phase,
liveObj: tt.fields.liveObj,
}
hookType := task.hookType()
assert.EqualValues(t, tt.want, hookType)
})
}
}
func Test_syncTask_hasHookDeletePolicy(t *testing.T) {
assert.False(t, (&syncTask{targetObj: NewPod()}).hasHookDeletePolicy(HookDeletePolicyBeforeHookCreation))
assert.False(t, (&syncTask{targetObj: NewPod()}).hasHookDeletePolicy(HookDeletePolicyHookSucceeded))
assert.False(t, (&syncTask{targetObj: NewPod()}).hasHookDeletePolicy(HookDeletePolicyHookFailed))
// must be hook
assert.False(t, (&syncTask{targetObj: Annotate(NewPod(), "argocd.argoproj.io/hook-delete-policy", "BeforeHookCreation")}).hasHookDeletePolicy(HookDeletePolicyBeforeHookCreation))
assert.True(t, (&syncTask{targetObj: Annotate(Annotate(NewPod(), "argocd.argoproj.io/hook", "Sync"), "argocd.argoproj.io/hook-delete-policy", "BeforeHookCreation")}).hasHookDeletePolicy(HookDeletePolicyBeforeHookCreation))
assert.True(t, (&syncTask{targetObj: Annotate(Annotate(NewPod(), "argocd.argoproj.io/hook", "Sync"), "argocd.argoproj.io/hook-delete-policy", "HookSucceeded")}).hasHookDeletePolicy(HookDeletePolicyHookSucceeded))
assert.True(t, (&syncTask{targetObj: Annotate(Annotate(NewPod(), "argocd.argoproj.io/hook", "Sync"), "argocd.argoproj.io/hook-delete-policy", "HookFailed")}).hasHookDeletePolicy(HookDeletePolicyHookFailed))
}
func Test_syncTask_needsDeleting(t *testing.T) {
assert.False(t, (&syncTask{liveObj: NewPod()}).needsDeleting())
// must be hook
assert.False(t, (&syncTask{liveObj: Annotate(NewPod(), "argocd.argoproj.io/hook-delete-policy", "BeforeHookCreation")}).needsDeleting())
// no need to delete if no live obj
assert.False(t, (&syncTask{targetObj: Annotate(Annotate(NewPod(), "argocd.argoproj.io/hook", "Sync"), "argocd.argoproj.io/hook-delete-policy", "BeforeHookCreation")}).needsDeleting())
assert.True(t, (&syncTask{liveObj: Annotate(Annotate(NewPod(), "argocd.argoproj.io/hook", "Sync"), "argocd.argoproj.io/hook-delete-policy", "BeforeHookCreation")}).needsDeleting())
assert.True(t, (&syncTask{operationState: OperationSucceeded, liveObj: Annotate(Annotate(NewPod(), "argocd.argoproj.io/hook", "Sync"), "argocd.argoproj.io/hook-delete-policy", "HookSucceeded")}).needsDeleting())
assert.True(t, (&syncTask{operationState: OperationFailed, liveObj: Annotate(Annotate(NewPod(), "argocd.argoproj.io/hook", "Sync"), "argocd.argoproj.io/hook-delete-policy", "HookFailed")}).needsDeleting())
}
func Test_syncTask_wave(t *testing.T) {
assert.Equal(t, 0, (&syncTask{targetObj: NewPod()}).wave())
assert.Equal(t, 1, (&syncTask{targetObj: Annotate(NewPod(), "argocd.argoproj.io/sync-wave", "1")}).wave())
}


@@ -1,185 +0,0 @@
package controller
import (
"strings"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
)
// syncPhaseOrder represents the order in which the sync phases are executed
var syncPhaseOrder = map[v1alpha1.SyncPhase]int{
v1alpha1.SyncPhasePreSync: -1,
v1alpha1.SyncPhaseSync: 0,
v1alpha1.SyncPhasePostSync: 1,
v1alpha1.SyncPhaseSyncFail: 2,
}
// kindOrder represents the correct order of Kubernetes resources within a manifest
// https://github.com/helm/helm/blob/master/pkg/tiller/kind_sorter.go
var kindOrder = map[string]int{}
func init() {
kinds := []string{
"Namespace",
"ResourceQuota",
"LimitRange",
"PodSecurityPolicy",
"PodDisruptionBudget",
"Secret",
"ConfigMap",
"StorageClass",
"PersistentVolume",
"PersistentVolumeClaim",
"ServiceAccount",
"CustomResourceDefinition",
"ClusterRole",
"ClusterRoleBinding",
"Role",
"RoleBinding",
"Service",
"DaemonSet",
"Pod",
"ReplicationController",
"ReplicaSet",
"Deployment",
"StatefulSet",
"Job",
"CronJob",
"Ingress",
"APIService",
}
for i, kind := range kinds {
// offset the indices so none of the known kinds map to zero; unknown kinds (e.g. custom resources) default to zero and therefore sort last
kindOrder[kind] = i - len(kinds)
}
}
type syncTasks []*syncTask
func (s syncTasks) Len() int {
return len(s)
}
func (s syncTasks) Swap(i, j int) {
s[i], s[j] = s[j], s[i]
}
// order is
// 1. phase
// 2. wave
// 3. kind
// 4. name
func (s syncTasks) Less(i, j int) bool {
tA := s[i]
tB := s[j]
d := syncPhaseOrder[tA.phase] - syncPhaseOrder[tB.phase]
if d != 0 {
return d < 0
}
d = tA.wave() - tB.wave()
if d != 0 {
return d < 0
}
a := tA.obj()
b := tB.obj()
// we take advantage of the fact that if the kind is not in the kindOrder map,
// then it will return the default int value of zero, which is the highest value
d = kindOrder[a.GetKind()] - kindOrder[b.GetKind()]
if d != 0 {
return d < 0
}
return a.GetName() < b.GetName()
}
func (s syncTasks) Filter(predicate func(task *syncTask) bool) (tasks syncTasks) {
for _, task := range s {
if predicate(task) {
tasks = append(tasks, task)
}
}
return tasks
}
func (s syncTasks) Split(predicate func(task *syncTask) bool) (trueTasks, falseTasks syncTasks) {
for _, task := range s {
if predicate(task) {
trueTasks = append(trueTasks, task)
} else {
falseTasks = append(falseTasks, task)
}
}
return trueTasks, falseTasks
}
func (s syncTasks) All(predicate func(task *syncTask) bool) bool {
for _, task := range s {
if !predicate(task) {
return false
}
}
return true
}
func (s syncTasks) Any(predicate func(task *syncTask) bool) bool {
for _, task := range s {
if predicate(task) {
return true
}
}
return false
}
func (s syncTasks) Find(predicate func(task *syncTask) bool) *syncTask {
for _, task := range s {
if predicate(task) {
return task
}
}
return nil
}
func (s syncTasks) String() string {
var values []string
for _, task := range s {
values = append(values, task.String())
}
return "[" + strings.Join(values, ", ") + "]"
}
func (s syncTasks) phase() v1alpha1.SyncPhase {
if len(s) > 0 {
return s[0].phase
}
return ""
}
func (s syncTasks) wave() int {
if len(s) > 0 {
return s[0].wave()
}
return 0
}
func (s syncTasks) lastPhase() v1alpha1.SyncPhase {
if len(s) > 0 {
return s[len(s)-1].phase
}
return ""
}
func (s syncTasks) lastWave() int {
if len(s) > 0 {
return s[len(s)-1].wave()
}
return 0
}
func (s syncTasks) multiStep() bool {
return s.wave() != s.lastWave() || s.phase() != s.lastPhase()
}


@@ -1,392 +0,0 @@
package controller
import (
"sort"
"testing"
"github.com/stretchr/testify/assert"
apiv1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"github.com/argoproj/argo-cd/common"
. "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
. "github.com/argoproj/argo-cd/test"
)
func Test_syncTasks_kindOrder(t *testing.T) {
assert.Equal(t, -27, kindOrder["Namespace"])
assert.Equal(t, -1, kindOrder["APIService"])
assert.Equal(t, 0, kindOrder["MyCRD"])
}
func TestSortSyncTask(t *testing.T) {
sort.Sort(unsortedTasks)
assert.Equal(t, sortedTasks, unsortedTasks)
}
func TestAnySyncTasks(t *testing.T) {
res := unsortedTasks.Any(func(task *syncTask) bool {
return task.name() == "a"
})
assert.True(t, res)
res = unsortedTasks.Any(func(task *syncTask) bool {
return task.name() == "does-not-exist"
})
assert.False(t, res)
}
func TestAllSyncTasks(t *testing.T) {
res := unsortedTasks.All(func(task *syncTask) bool {
return task.name() != ""
})
assert.False(t, res)
res = unsortedTasks.All(func(task *syncTask) bool {
return task.name() == "a"
})
assert.False(t, res)
}
func TestSplitSyncTasks(t *testing.T) {
named, unnamed := sortedTasks.Split(func(task *syncTask) bool {
return task.name() != ""
})
assert.Equal(t, named, namedObjTasks)
assert.Equal(t, unnamed, unnamedTasks)
}
var unsortedTasks = syncTasks{
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Pod",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Service",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "PersistentVolume",
},
},
},
{
phase: SyncPhaseSyncFail, targetObj: &unstructured.Unstructured{},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"annotations": map[string]interface{}{
"argocd.argoproj.io/sync-wave": "1",
},
},
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"name": "b",
},
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"name": "a",
},
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"annotations": map[string]interface{}{
"argocd.argoproj.io/sync-wave": "-1",
},
},
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
},
},
},
{
phase: SyncPhasePreSync,
targetObj: &unstructured.Unstructured{},
},
{
phase: SyncPhasePostSync, targetObj: &unstructured.Unstructured{},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "ConfigMap",
},
},
},
}
var sortedTasks = syncTasks{
{
phase: SyncPhasePreSync,
targetObj: &unstructured.Unstructured{},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"annotations": map[string]interface{}{
"argocd.argoproj.io/sync-wave": "-1",
},
},
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "ConfigMap",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "PersistentVolume",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Service",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Pod",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"name": "a",
},
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"name": "b",
},
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"annotations": map[string]interface{}{
"argocd.argoproj.io/sync-wave": "1",
},
},
},
},
},
{
phase: SyncPhasePostSync,
targetObj: &unstructured.Unstructured{},
},
{
phase: SyncPhaseSyncFail,
targetObj: &unstructured.Unstructured{},
},
}
var namedObjTasks = syncTasks{
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"name": "a",
},
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"name": "b",
},
},
},
},
}
var unnamedTasks = syncTasks{
{
phase: SyncPhasePreSync,
targetObj: &unstructured.Unstructured{},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"annotations": map[string]interface{}{
"argocd.argoproj.io/sync-wave": "-1",
},
},
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "ConfigMap",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "PersistentVolume",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Service",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Pod",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"annotations": map[string]interface{}{
"argocd.argoproj.io/sync-wave": "1",
},
},
},
},
},
{
phase: SyncPhasePostSync,
targetObj: &unstructured.Unstructured{},
},
{
phase: SyncPhaseSyncFail,
targetObj: &unstructured.Unstructured{},
},
}
func Test_syncTasks_Filter(t *testing.T) {
tasks := syncTasks{{phase: SyncPhaseSync}, {phase: SyncPhasePostSync}}
assert.Equal(t, syncTasks{{phase: SyncPhaseSync}}, tasks.Filter(func(t *syncTask) bool {
return t.phase == SyncPhaseSync
}))
}
func TestSyncNamespaceAgainstCRD(t *testing.T) {
crd := &syncTask{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"kind": "Workflow",
},
}}
namespace := &syncTask{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"kind": "Namespace",
},
},
}
unsorted := syncTasks{crd, namespace}
sort.Sort(unsorted)
assert.Equal(t, syncTasks{namespace, crd}, unsorted)
}
func Test_syncTasks_multiStep(t *testing.T) {
t.Run("Single", func(t *testing.T) {
tasks := syncTasks{{liveObj: Annotate(NewPod(), common.AnnotationSyncWave, "-1"), phase: SyncPhaseSync}}
assert.Equal(t, SyncPhaseSync, tasks.phase())
assert.Equal(t, -1, tasks.wave())
assert.Equal(t, SyncPhaseSync, tasks.lastPhase())
assert.Equal(t, -1, tasks.lastWave())
assert.False(t, tasks.multiStep())
})
t.Run("Double", func(t *testing.T) {
tasks := syncTasks{
{liveObj: Annotate(NewPod(), common.AnnotationSyncWave, "-1"), phase: SyncPhasePreSync},
{liveObj: Annotate(NewPod(), common.AnnotationSyncWave, "1"), phase: SyncPhasePostSync},
}
assert.Equal(t, SyncPhasePreSync, tasks.phase())
assert.Equal(t, -1, tasks.wave())
assert.Equal(t, SyncPhasePostSync, tasks.lastPhase())
assert.Equal(t, 1, tasks.lastWave())
assert.True(t, tasks.multiStep())
})
}


@@ -1,462 +1,19 @@
package controller
import (
"fmt"
"reflect"
"testing"
log "github.com/sirupsen/logrus"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
"github.com/stretchr/testify/assert"
rbacv1 "k8s.io/api/rbac/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
fakedisco "k8s.io/client-go/discovery/fake"
"k8s.io/client-go/dynamic/fake"
"k8s.io/client-go/rest"
testcore "k8s.io/client-go/testing"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
. "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/reposerver/apiclient"
"github.com/argoproj/argo-cd/test"
"github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/kube/kubetest"
)
func newTestSyncCtx(resources ...*v1.APIResourceList) *syncContext {
fakeDisco := &fakedisco.FakeDiscovery{Fake: &testcore.Fake{}}
fakeDisco.Resources = append(resources,
&v1.APIResourceList{
GroupVersion: "v1",
APIResources: []v1.APIResource{
{Kind: "Pod", Group: "", Version: "v1", Namespaced: true},
{Kind: "Service", Group: "", Version: "v1", Namespaced: true},
},
},
&v1.APIResourceList{
GroupVersion: "apps/v1",
APIResources: []v1.APIResource{
{Kind: "Deployment", Group: "apps", Version: "v1", Namespaced: true},
},
})
sc := syncContext{
config: &rest.Config{},
namespace: test.FakeArgoCDNamespace,
server: test.FakeClusterURL,
syncRes: &v1alpha1.SyncOperationResult{
Revision: "FooBarBaz",
},
syncOp: &v1alpha1.SyncOperation{
Prune: true,
SyncStrategy: &v1alpha1.SyncStrategy{
Apply: &v1alpha1.SyncStrategyApply{},
},
},
proj: &v1alpha1.AppProject{
ObjectMeta: metav1.ObjectMeta{
Name: "test",
},
Spec: v1alpha1.AppProjectSpec{
Destinations: []v1alpha1.ApplicationDestination{{
Server: test.FakeClusterURL,
Namespace: test.FakeArgoCDNamespace,
}},
ClusterResourceWhitelist: []v1.GroupKind{
{Group: "*", Kind: "*"},
},
},
},
opState: &v1alpha1.OperationState{},
disco: fakeDisco,
log: log.WithFields(log.Fields{"application": "fake-app"}),
}
sc.kubectl = &kubetest.MockKubectlCmd{}
return &sc
}
func newManagedResource(live *unstructured.Unstructured) managedResource {
return managedResource{
Live: live,
Group: live.GroupVersionKind().Group,
Version: live.GroupVersionKind().Version,
Kind: live.GroupVersionKind().Kind,
Namespace: live.GetNamespace(),
Name: live.GetName(),
}
}
func TestSyncNotPermittedNamespace(t *testing.T) {
syncCtx := newTestSyncCtx()
targetPod := test.NewPod()
targetPod.SetNamespace("kube-system")
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{
Live: nil,
Target: targetPod,
}, {
Live: nil,
Target: test.NewService(),
}},
}
syncCtx.sync()
assert.Equal(t, v1alpha1.OperationFailed, syncCtx.opState.Phase)
assert.Contains(t, syncCtx.syncRes.Resources[0].Message, "not permitted in project")
}
func TestSyncCreateInSortedOrder(t *testing.T) {
syncCtx := newTestSyncCtx()
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{
Live: nil,
Target: test.NewPod(),
}, {
Live: nil,
Target: test.NewService(),
}},
}
syncCtx.sync()
assert.Equal(t, v1alpha1.OperationSucceeded, syncCtx.opState.Phase)
assert.Len(t, syncCtx.syncRes.Resources, 2)
for i := range syncCtx.syncRes.Resources {
result := syncCtx.syncRes.Resources[i]
if result.Kind == "Pod" {
assert.Equal(t, v1alpha1.ResultCodeSynced, result.Status)
assert.Equal(t, "", result.Message)
} else if result.Kind == "Service" {
assert.Equal(t, "", result.Message)
} else {
t.Error("Resource isn't a pod or a service")
}
}
}
func TestSyncCreateNotWhitelistedClusterResources(t *testing.T) {
syncCtx := newTestSyncCtx(&v1.APIResourceList{
GroupVersion: v1alpha1.SchemeGroupVersion.String(),
APIResources: []v1.APIResource{
{Name: "workflows", Namespaced: false, Kind: "Workflow", Group: "argoproj.io"},
{Name: "application", Namespaced: false, Kind: "Application", Group: "argoproj.io"},
},
}, &v1.APIResourceList{
GroupVersion: "rbac.authorization.k8s.io/v1",
APIResources: []v1.APIResource{
{Name: "clusterroles", Namespaced: false, Kind: "ClusterRole", Group: "rbac.authorization.k8s.io"},
},
})
syncCtx.proj.Spec.ClusterResourceWhitelist = []v1.GroupKind{
{Group: "argoproj.io", Kind: "*"},
}
syncCtx.kubectl = &kubetest.MockKubectlCmd{}
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{
Live: nil,
Target: kube.MustToUnstructured(&rbacv1.ClusterRole{
TypeMeta: metav1.TypeMeta{Kind: "ClusterRole", APIVersion: "rbac.authorization.k8s.io/v1"},
ObjectMeta: metav1.ObjectMeta{Name: "argo-ui-cluster-role"}}),
}},
}
syncCtx.sync()
assert.Len(t, syncCtx.syncRes.Resources, 1)
result := syncCtx.syncRes.Resources[0]
assert.Equal(t, v1alpha1.ResultCodeSyncFailed, result.Status)
assert.Contains(t, result.Message, "not permitted in project")
}
func TestSyncBlacklistedNamespacedResources(t *testing.T) {
syncCtx := newTestSyncCtx()
syncCtx.proj.Spec.NamespaceResourceBlacklist = []v1.GroupKind{
{Group: "*", Kind: "Deployment"},
}
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{
Live: nil,
Target: test.NewDeployment(),
}},
}
syncCtx.sync()
assert.Len(t, syncCtx.syncRes.Resources, 1)
result := syncCtx.syncRes.Resources[0]
assert.Equal(t, v1alpha1.ResultCodeSyncFailed, result.Status)
assert.Contains(t, result.Message, "not permitted in project")
}
func TestSyncSuccessfully(t *testing.T) {
syncCtx := newTestSyncCtx()
pod := test.NewPod()
pod.SetNamespace(test.FakeArgoCDNamespace)
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{
Live: nil,
Target: test.NewService(),
}, {
Live: pod,
Target: nil,
}},
}
syncCtx.sync()
assert.Equal(t, v1alpha1.OperationSucceeded, syncCtx.opState.Phase)
assert.Len(t, syncCtx.syncRes.Resources, 2)
for i := range syncCtx.syncRes.Resources {
result := syncCtx.syncRes.Resources[i]
if result.Kind == "Pod" {
assert.Equal(t, v1alpha1.ResultCodePruned, result.Status)
assert.Equal(t, "pruned", result.Message)
} else if result.Kind == "Service" {
assert.Equal(t, v1alpha1.ResultCodeSynced, result.Status)
assert.Equal(t, "", result.Message)
} else {
t.Error("Resource isn't a pod or a service")
}
}
}
func TestSyncDeleteSuccessfully(t *testing.T) {
syncCtx := newTestSyncCtx()
svc := test.NewService()
svc.SetNamespace(test.FakeArgoCDNamespace)
pod := test.NewPod()
pod.SetNamespace(test.FakeArgoCDNamespace)
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{
Live: svc,
Target: nil,
}, {
Live: pod,
Target: nil,
}},
}
syncCtx.sync()
assert.Equal(t, v1alpha1.OperationSucceeded, syncCtx.opState.Phase)
for i := range syncCtx.syncRes.Resources {
result := syncCtx.syncRes.Resources[i]
if result.Kind == "Pod" {
assert.Equal(t, v1alpha1.ResultCodePruned, result.Status)
assert.Equal(t, "pruned", result.Message)
} else if result.Kind == "Service" {
assert.Equal(t, v1alpha1.ResultCodePruned, result.Status)
assert.Equal(t, "pruned", result.Message)
} else {
t.Error("Resource isn't a pod or a service")
}
}
}
func TestSyncCreateFailure(t *testing.T) {
syncCtx := newTestSyncCtx()
testSvc := test.NewService()
syncCtx.kubectl = &kubetest.MockKubectlCmd{
Commands: map[string]kubetest.KubectlOutput{
testSvc.GetName(): {
Output: "",
Err: fmt.Errorf("foo"),
},
},
}
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{
Live: nil,
Target: testSvc,
}},
}
syncCtx.sync()
assert.Len(t, syncCtx.syncRes.Resources, 1)
result := syncCtx.syncRes.Resources[0]
assert.Equal(t, v1alpha1.ResultCodeSyncFailed, result.Status)
assert.Equal(t, "foo", result.Message)
}
func TestSyncPruneFailure(t *testing.T) {
syncCtx := newTestSyncCtx()
syncCtx.kubectl = &kubetest.MockKubectlCmd{
Commands: map[string]kubetest.KubectlOutput{
"test-service": {
Output: "",
Err: fmt.Errorf("foo"),
},
},
}
testSvc := test.NewService()
testSvc.SetName("test-service")
testSvc.SetNamespace(test.FakeArgoCDNamespace)
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{
Live: testSvc,
Target: nil,
}},
}
syncCtx.sync()
assert.Equal(t, v1alpha1.OperationFailed, syncCtx.opState.Phase)
assert.Len(t, syncCtx.syncRes.Resources, 1)
result := syncCtx.syncRes.Resources[0]
assert.Equal(t, v1alpha1.ResultCodeSyncFailed, result.Status)
assert.Equal(t, "foo", result.Message)
}
func TestDontSyncOrPruneHooks(t *testing.T) {
syncCtx := newTestSyncCtx()
targetPod := test.NewPod()
targetPod.SetName("dont-create-me")
targetPod.SetAnnotations(map[string]string{common.AnnotationKeyHook: "PreSync"})
liveSvc := test.NewService()
liveSvc.SetName("dont-prune-me")
liveSvc.SetNamespace(test.FakeArgoCDNamespace)
liveSvc.SetAnnotations(map[string]string{common.AnnotationKeyHook: "PreSync"})
syncCtx.compareResult = &comparisonResult{
hooks: []*unstructured.Unstructured{targetPod, liveSvc},
}
syncCtx.sync()
assert.Len(t, syncCtx.syncRes.Resources, 0)
syncCtx.sync()
assert.Equal(t, v1alpha1.OperationSucceeded, syncCtx.opState.Phase)
}
// make sure that we do not prune resources with Prune=false
func TestDontPrunePruneFalse(t *testing.T) {
syncCtx := newTestSyncCtx()
pod := test.NewPod()
pod.SetAnnotations(map[string]string{common.AnnotationSyncOptions: "Prune=false"})
pod.SetNamespace(test.FakeArgoCDNamespace)
syncCtx.compareResult = &comparisonResult{managedResources: []managedResource{{Live: pod}}}
syncCtx.sync()
assert.Equal(t, v1alpha1.OperationSucceeded, syncCtx.opState.Phase)
assert.Len(t, syncCtx.syncRes.Resources, 1)
assert.Equal(t, v1alpha1.ResultCodePruneSkipped, syncCtx.syncRes.Resources[0].Status)
assert.Equal(t, "ignored (no prune)", syncCtx.syncRes.Resources[0].Message)
syncCtx.sync()
assert.Equal(t, v1alpha1.OperationSucceeded, syncCtx.opState.Phase)
}
// make sure Validate=false means we don't validate
func TestSyncOptionValidate(t *testing.T) {
tests := []struct {
name string
annotationVal string
want bool
}{
{"Empty", "", true},
{"True", "Validate=true", true},
{"False", "Validate=false", false},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
syncCtx := newTestSyncCtx()
pod := test.NewPod()
pod.SetAnnotations(map[string]string{common.AnnotationSyncOptions: tt.annotationVal})
pod.SetNamespace(test.FakeArgoCDNamespace)
syncCtx.compareResult = &comparisonResult{managedResources: []managedResource{{Target: pod, Live: pod}}}
syncCtx.sync()
kubectl, _ := syncCtx.kubectl.(*kubetest.MockKubectlCmd)
assert.Equal(t, tt.want, kubectl.LastValidate)
})
}
}
// make sure the Validate=false sync option means we don't validate
func TestSyncValidate(t *testing.T) {
syncCtx := newTestSyncCtx()
pod := test.NewPod()
pod.SetNamespace(test.FakeArgoCDNamespace)
syncCtx.compareResult = &comparisonResult{managedResources: []managedResource{{Target: pod, Live: pod}}}
syncCtx.syncOp.SyncOptions = SyncOptions{"Validate=false"}
syncCtx.sync()
kubectl := syncCtx.kubectl.(*kubetest.MockKubectlCmd)
assert.False(t, kubectl.LastValidate)
}
func TestSelectiveSyncOnly(t *testing.T) {
syncCtx := newTestSyncCtx()
pod1 := test.NewPod()
pod1.SetName("pod-1")
pod2 := test.NewPod()
pod2.SetName("pod-2")
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{Target: pod1}},
}
syncCtx.syncResources = []v1alpha1.SyncOperationResource{{Kind: "Pod", Name: "pod-1"}}
tasks, successful := syncCtx.getSyncTasks()
assert.True(t, successful)
assert.Len(t, tasks, 1)
assert.Equal(t, "pod-1", tasks[0].name())
}
func TestUnnamedHooksGetUniqueNames(t *testing.T) {
syncCtx := newTestSyncCtx()
syncCtx.syncOp.SyncStrategy.Apply = nil
pod := test.NewPod()
pod.SetName("")
pod.SetAnnotations(map[string]string{common.AnnotationKeyHook: "PreSync,PostSync"})
syncCtx.compareResult = &comparisonResult{hooks: []*unstructured.Unstructured{pod}}
tasks, successful := syncCtx.getSyncTasks()
assert.True(t, successful)
assert.Len(t, tasks, 2)
assert.Contains(t, tasks[0].name(), "foobarb-presync-")
assert.Contains(t, tasks[1].name(), "foobarb-postsync-")
assert.Equal(t, "", pod.GetName())
}
func TestManagedResourceAreNotNamed(t *testing.T) {
syncCtx := newTestSyncCtx()
pod := test.NewPod()
pod.SetName("")
syncCtx.compareResult = &comparisonResult{managedResources: []managedResource{{Target: pod}}}
tasks, successful := syncCtx.getSyncTasks()
assert.True(t, successful)
assert.Len(t, tasks, 1)
assert.Equal(t, "", tasks[0].name())
assert.Equal(t, "", pod.GetName())
}
func TestDeDupingTasks(t *testing.T) {
syncCtx := newTestSyncCtx()
syncCtx.syncOp.SyncStrategy.Apply = nil
pod := test.NewPod()
pod.SetAnnotations(map[string]string{common.AnnotationKeyHook: "Sync"})
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{Target: pod}},
hooks: []*unstructured.Unstructured{pod},
}
tasks, successful := syncCtx.getSyncTasks()
assert.True(t, successful)
assert.Len(t, tasks, 1)
}
func TestObjectsGetANamespace(t *testing.T) {
syncCtx := newTestSyncCtx()
pod := test.NewPod()
syncCtx.compareResult = &comparisonResult{managedResources: []managedResource{{Target: pod}}}
tasks, successful := syncCtx.getSyncTasks()
assert.True(t, successful)
assert.Len(t, tasks, 1)
assert.Equal(t, test.FakeArgoCDNamespace, tasks[0].namespace())
assert.Equal(t, "", pod.GetNamespace())
}
func TestPersistRevisionHistory(t *testing.T) {
app := newFakeApp()
app.Status.OperationState = nil
@@ -543,171 +100,3 @@ func TestPersistRevisionHistoryRollback(t *testing.T) {
assert.Equal(t, source, updatedApp.Status.History[0].Source)
assert.Equal(t, "abc123", updatedApp.Status.History[0].Revision)
}
func TestSyncFailureHookWithSuccessfulSync(t *testing.T) {
syncCtx := newTestSyncCtx()
syncCtx.syncOp.SyncStrategy.Apply = nil
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{Target: test.NewPod()}},
hooks: []*unstructured.Unstructured{test.NewHook(HookTypeSyncFail)},
}
syncCtx.sync()
assert.Equal(t, OperationSucceeded, syncCtx.opState.Phase)
// only one result, we did not run the SyncFail hook
assert.Len(t, syncCtx.syncRes.Resources, 1)
}
func TestSyncFailureHookWithFailedSync(t *testing.T) {
syncCtx := newTestSyncCtx()
syncCtx.syncOp.SyncStrategy.Apply = nil
pod := test.NewPod()
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{Target: pod}},
hooks: []*unstructured.Unstructured{test.NewHook(HookTypeSyncFail)},
}
syncCtx.kubectl = &kubetest.MockKubectlCmd{
Commands: map[string]kubetest.KubectlOutput{pod.GetName(): {Err: fmt.Errorf("")}},
}
syncCtx.sync()
syncCtx.sync()
assert.Equal(t, OperationFailed, syncCtx.opState.Phase)
assert.Len(t, syncCtx.syncRes.Resources, 2)
}
func TestBeforeHookCreation(t *testing.T) {
syncCtx := newTestSyncCtx()
syncCtx.syncOp.SyncStrategy.Apply = nil
hook := test.Annotate(test.Annotate(test.NewPod(), common.AnnotationKeyHook, "Sync"), common.AnnotationKeyHookDeletePolicy, "BeforeHookCreation")
hook.SetNamespace(test.FakeArgoCDNamespace)
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{newManagedResource(hook)},
hooks: []*unstructured.Unstructured{hook},
}
syncCtx.dynamicIf = fake.NewSimpleDynamicClient(runtime.NewScheme())
syncCtx.sync()
assert.Len(t, syncCtx.syncRes.Resources, 1)
assert.Empty(t, syncCtx.syncRes.Resources[0].Message)
}
func TestRunSyncFailHooksFailed(t *testing.T) {
// Tests that other SyncFail Hooks run even if one of them fail.
syncCtx := newTestSyncCtx()
syncCtx.syncOp.SyncStrategy.Apply = nil
pod := test.NewPod()
successfulSyncFailHook := test.NewHook(HookTypeSyncFail)
successfulSyncFailHook.SetName("successful-sync-fail-hook")
failedSyncFailHook := test.NewHook(HookTypeSyncFail)
failedSyncFailHook.SetName("failed-sync-fail-hook")
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{Target: pod}},
hooks: []*unstructured.Unstructured{successfulSyncFailHook, failedSyncFailHook},
}
syncCtx.kubectl = &kubetest.MockKubectlCmd{
Commands: map[string]kubetest.KubectlOutput{
// Fail operation
pod.GetName(): {Err: fmt.Errorf("")},
// Fail a single SyncFail hook
failedSyncFailHook.GetName(): {Err: fmt.Errorf("")}},
}
syncCtx.sync()
syncCtx.sync()
fmt.Println(syncCtx.syncRes.Resources)
fmt.Println(syncCtx.opState.Phase)
// Operation as a whole should fail
assert.Equal(t, OperationFailed, syncCtx.opState.Phase)
// failedSyncFailHook should fail
assert.Equal(t, OperationFailed, syncCtx.syncRes.Resources[1].HookPhase)
assert.Equal(t, ResultCodeSyncFailed, syncCtx.syncRes.Resources[1].Status)
// successfulSyncFailHook should be synced and still running (it is an nginx pod)
assert.Equal(t, OperationRunning, syncCtx.syncRes.Resources[2].HookPhase)
assert.Equal(t, ResultCodeSynced, syncCtx.syncRes.Resources[2].Status)
}
func Test_syncContext_isSelectiveSync(t *testing.T) {
type fields struct {
compareResult *comparisonResult
syncResources []SyncOperationResource
}
oneSyncResource := []SyncOperationResource{{}}
oneResource := func(group, kind, name string, hook bool) *comparisonResult {
return &comparisonResult{resources: []v1alpha1.ResourceStatus{{Group: group, Kind: kind, Name: name, Hook: hook}}}
}
tests := []struct {
name string
fields fields
want bool
}{
{"Empty", fields{}, false},
{"OneCompareResult", fields{oneResource("", "", "", false), []SyncOperationResource{}}, true},
{"OneSyncResource", fields{&comparisonResult{}, oneSyncResource}, true},
{"Equal", fields{oneResource("", "", "", false), oneSyncResource}, false},
{"EqualOutOfOrder", fields{&comparisonResult{resources: []v1alpha1.ResourceStatus{{Group: "a"}, {Group: "b"}}}, []SyncOperationResource{{Group: "b"}, {Group: "a"}}}, false},
{"KindDifferent", fields{oneResource("foo", "", "", false), oneSyncResource}, true},
{"GroupDifferent", fields{oneResource("", "foo", "", false), oneSyncResource}, true},
{"NameDifferent", fields{oneResource("", "", "foo", false), oneSyncResource}, true},
{"HookIgnored", fields{oneResource("", "", "", true), []SyncOperationResource{}}, false},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
sc := &syncContext{
compareResult: tt.fields.compareResult,
syncResources: tt.fields.syncResources,
}
if got := sc.isSelectiveSync(); got != tt.want {
t.Errorf("syncContext.isSelectiveSync() = %v, want %v", got, tt.want)
}
})
}
}
func Test_syncContext_liveObj(t *testing.T) {
type fields struct {
compareResult *comparisonResult
}
type args struct {
obj *unstructured.Unstructured
}
obj := test.NewPod()
obj.SetNamespace("my-ns")
found := test.NewPod()
tests := []struct {
name string
fields fields
args args
want *unstructured.Unstructured
}{
{"None", fields{compareResult: &comparisonResult{managedResources: []managedResource{}}}, args{obj: &unstructured.Unstructured{}}, nil},
{"Found", fields{compareResult: &comparisonResult{managedResources: []managedResource{{Group: obj.GroupVersionKind().Group, Kind: obj.GetKind(), Namespace: obj.GetNamespace(), Name: obj.GetName(), Live: found}}}}, args{obj: obj}, found},
{"EmptyNamespace", fields{compareResult: &comparisonResult{managedResources: []managedResource{{Group: obj.GroupVersionKind().Group, Kind: obj.GetKind(), Name: obj.GetName(), Live: found}}}}, args{obj: obj}, found},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
sc := &syncContext{
compareResult: tt.fields.compareResult,
}
if got := sc.liveObj(tt.args.obj); !reflect.DeepEqual(got, tt.want) {
t.Errorf("syncContext.liveObj() = %v, want %v", got, tt.want)
}
})
}
}
func Test_syncContext_hasCRDOfGroupKind(t *testing.T) {
// target
assert.False(t, (&syncContext{compareResult: &comparisonResult{managedResources: []managedResource{{Target: test.NewCRD()}}}}).hasCRDOfGroupKind("", ""))
assert.True(t, (&syncContext{compareResult: &comparisonResult{managedResources: []managedResource{{Target: test.NewCRD()}}}}).hasCRDOfGroupKind("argoproj.io", "TestCrd"))
// hook
assert.False(t, (&syncContext{compareResult: &comparisonResult{hooks: []*unstructured.Unstructured{test.NewCRD()}}}).hasCRDOfGroupKind("", ""))
assert.True(t, (&syncContext{compareResult: &comparisonResult{hooks: []*unstructured.Unstructured{test.NewCRD()}}}).hasCRDOfGroupKind("argoproj.io", "TestCrd"))
}


@@ -1,167 +1 @@
# Contributing
## Before You Start
You must install and run ArgoCD on a local Kubernetes cluster (e.g. Docker for Desktop or Minikube) first. This will help you understand the application, and also get your local environment set up.
Then, to get a good grounding in Go, try out [the tutorial](https://tour.golang.org/).
## Pre-requisites
Install:
* [docker](https://docs.docker.com/install/#supported-platforms)
* [git](https://git-scm.com/) and [git-lfs](https://git-lfs.github.com/)
* [golang](https://golang.org/)
* [dep](https://github.com/golang/dep)
* [ksonnet](https://github.com/ksonnet/ksonnet#install)
* [helm](https://github.com/helm/helm/releases)
* [kustomize](https://github.com/kubernetes-sigs/kustomize/releases)
* [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
* [kubectx](https://kubectx.dev)
* [minikube](https://kubernetes.io/docs/setup/minikube/) or Docker for Desktop
Brew users can quickly install the lot:
```bash
brew install go git-lfs kubectl kubectx dep ksonnet/tap/ks helm@2 kustomize
```
Check the versions:
```bash
go version ;# must be v1.12.x
helm version ;# must be v2.13.x
kustomize version ;# must be v3.1.x
```
Set up environment variables (e.g. in `~/.bashrc`):
```bash
export GOPATH=~/go
export PATH=$PATH:$GOPATH/bin
```
Check out the code:
```bash
go get -u github.com/argoproj/argo-cd
cd ~/go/src/github.com/argoproj/argo-cd
```
## Building
Ensure dependencies are up to date first:
```shell
dep ensure
make dev-tools-image
make install-lint-tools
go get github.com/mattn/goreman
go get github.com/jstemmer/go-junit-report
```
Common make targets:
* `make codegen` - Run code generation
* `make lint` - Lint code
* `make test` - Run unit tests
* `make cli` - Make the `argocd` CLI tool
Check out the following [documentation](https://github.com/argoproj/argo-cd/blob/master/docs/developer-guide/test-e2e.md) for instructions on running the e2e tests.
## Running Locally
It is much easier to run and debug if you run ArgoCD on your local machine than in the Kubernetes cluster.
You should scale the deployments to zero:
```bash
kubectl -n argocd scale deployment/argocd-application-controller --replicas 0
kubectl -n argocd scale deployment/argocd-dex-server --replicas 0
kubectl -n argocd scale deployment/argocd-repo-server --replicas 0
kubectl -n argocd scale deployment/argocd-server --replicas 0
kubectl -n argocd scale deployment/argocd-redis --replicas 0
```
Download Yarn dependencies and compile:
```bash
cd ~/go/src/github.com/argoproj/argo-cd/ui
yarn install
yarn build
```
Then start the services:
```bash
cd ~/go/src/github.com/argoproj/argo-cd
make start
```
You can now execute `argocd` command against your locally running ArgoCD by appending `--server localhost:8080 --plaintext --insecure`, e.g.:
```bash
argocd app create guestbook --path guestbook --repo https://github.com/argoproj/argocd-example-apps.git --dest-server https://kubernetes.default.svc --dest-namespace default --server localhost:8080 --plaintext --insecure
```
You can open the UI: [http://localhost:4000](http://localhost:4000)
As an alternative to using the above command line parameters each time you call the `argocd` CLI, you can set the following environment variables:
```bash
export ARGOCD_SERVER=127.0.0.1:8080
export ARGOCD_OPTS="--plaintext --insecure"
```
## Running Local Containers
You may need to run containers locally, so here's how:
Create a Docker Hub account, then log in:
```bash
docker login
```
Set your Docker Hub username as an environment variable, e.g. in your `~/.bash_profile`:
```bash
export IMAGE_NAMESPACE=alexcollinsintuit
```
If you don't want to use `latest` as the image's tag (the default), you can set it from the environment too:
```bash
export IMAGE_TAG=yourtag
```
Build the image:
```bash
DOCKER_PUSH=true make image
```
Update the manifests (be sure to do this from a shell that has the above environment variables set):
```bash
make manifests
```
Install the manifests:
```bash
kubectl -n argocd apply --force -f manifests/install.yaml
```
Scale your deployments up:
```bash
kubectl -n argocd scale deployment/argocd-application-controller --replicas 1
kubectl -n argocd scale deployment/argocd-dex-server --replicas 1
kubectl -n argocd scale deployment/argocd-repo-server --replicas 1
kubectl -n argocd scale deployment/argocd-server --replicas 1
kubectl -n argocd scale deployment/argocd-redis --replicas 1
```
Now you can set up port forwarding and open the UI or CLI.
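For example, a minimal port-forward to the API server might look like this (assuming the default `argocd-server` service in the `argocd` namespace):
```bash
kubectl port-forward svc/argocd-server -n argocd 8080:443
```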
Please refer to [the Contribution Guide](https://argoproj.github.io/argo-cd/developer-guide/contributing/)


docs/bug_triage.md Normal file

@@ -0,0 +1,175 @@
# Bug triage proposal for ArgoCD
## Situation
There are lots of issues on our issue tracker. Many of them are not bugs, but
questions or very environment-specific problems. It's easy to lose the overview.
Also, it's not obvious which bugs are important. Which bugs should be fixed
first? Can we make a new release with the current inventory of open bugs?
Is there still a bug that should make it to the new release?
## Proposal
We should agree upon a common issue triage process. The process must be lean
and efficient, and should help us and the community, when looking at the GH
issue tracker, to make the following decisions:
* Is it even a real bug?
* If it is a real bug, what is the current status of the bug (next to "open" or "closed")?
* How important is it to fix the bug?
* How urgent is it to fix the bug?
* Who will be working to fix the bug?
We need new methods to classify our bugs, at least into these categories:
* validity: Does the issue indeed represent a true bug?
* severity: Denotes what impact the bug has
* priority: Denotes the urgency of the fix
## Triage process
The GH issue tracker provides us with the possibility to label issues. Using these
labels is not perfect, but it should give us a good start. Each issue created in
our issue tracker should be correctly labeled throughout its lifecycle, so that
keeping an overview is simplified by the ability to filter for labels.
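For example, open bugs that still await triage could be listed with a standard GitHub issue search query such as the following (the label names are the ones proposed below):
```
is:issue is:open label:bug label:bug/in-triage
```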
The triage process could be as follows:
1. A new bug issue is created by someone on the tracker
1. The first person of the core team to see it will start the triage by classifying
the issue (see below). This will indicate to the creator that we have noticed the
issue, and that it's not a "fire & forget" tracker.
1. Initial classification should be possible even when much of the information is
still missing. In this case, the issue would be classified as such (see below).
Again, this indicates that someone has noticed the issue, and there is activity
in progress to get the required information.
1. Classification of the issue can change over its life-cycle. However, once the
issue has been initially classified correctly (that is, with something else than
the "placeholder" classification discussed above), changes to the classification
should be discussed first with the person who initially classified the issue.
## Classification
We have introduced some new labels in the GH issue tracker for classifying the
bug issues. These labels are prefixed with the string `bug/`, and should be
applied to all new issues in our tracker.
### Classification requires more information
If it is not yet possible to classify the bug, i.e. because more information is
required to correctly classify the bug, you should always set the label
`bug/in-triage` to make it clear that the triage process has started but could not
yet be completed.
### Issue type
If it's clear that a bug issue is not a bug, but a question or a request for support,
it should be marked as such:
* Remove any of the labels prefixed `bug/` that might be attached to the issue
* Remove the label `bug` from the issue
* Add the label `inquiry` to the issue
If the inquiry turns out to be something that should be covered by the docs, but
is not, the following actions should be taken:
* The title of the issue should be adapted so that it is clear that the bug
affects the docs, not the code
* The label `documentation` should be attached to the issue
If the issue is too confusing (it can happen), another possibility is to close the
issue and create a new one as described above (with a meaningful title and
the label `documentation` attached to it).
### Validity
Some reported bugs may be invalid. It could be a user error, a misconfiguration
or something along these lines. If it is clear that the bug falls into one of
these categories:
* Remove any of the labels prefixed `bug/` that might be attached to the issue
* Add the label `invalid` to the issue
* Retain the `bug` label on the issue
* Close the issue
When closing the issue, it is important to let the requester know why the issue
has been closed. The optimum would be to provide a solution to their request
in the comments of the issue, or at least pointers to possible solutions.
### Regressions
Sometimes it happens that something that worked in a previous release does
not work now when it should still work. If this is the case, the following
actions should be taken:
* Add the label `regression` to the issue
* Continue with triage
### Severity
It is important to find out how severe the impact of a bug is, and to label
the bug with this information. For this purpose, the following labels exist
in our tracker:
* `bug/severity:minor`: Bug has limited impact and maybe affects only an
edge-case. Core functionality is not affected, and there is no data loss
involved. Something might not work as expected. Examples of this kind of
bug could be a CLI command that is not working as expected, a glitch in
the UI, wrong documentation, etc.
* `bug/severity:major`: Malfunction in one of the core components, impacting
a majority of users or one of the core functionalities in ArgoCD. There is
no data loss involved, but for example a sync is not working due to a bug
in ArgoCD (and not due to user error), manifests fail to render, etc.
* `bug/severity:critical`: A critical bug in ArgoCD, possibly resulting in
data loss, integrity breach or severe degraded overall functionality.
### Priority
The priority of an issue indicates how quickly the issue should be fixed and
released. This information should help us in deciding the target release for
the fix, and whether a bug would even justify a dedicated patch release. The
following labels can be used to classify bugs into their priority:
* `bug/priority:low`: Will be fixed without any specific target release.
* `bug/priority:medium`: Should be fixed in the next minor or major release,
whichever comes first.
* `bug/priority:high`: Should be fixed with the next patch release.
* `bug/priority:urgent`: Should be fixed immediately and might even justify a
dedicated patch release.
The priority should be set according to the value of the fix and the attached
severity. This means a bug with a severity of `minor` could still be classified
with priority `high`, when it is a *low-hanging fruit* (i.e. the bug is easy to
fix with low effort) and contributes to the overall user experience of ArgoCD.
Likewise, a bug classified with a severity of `major` could still have a
priority of `medium`, if there is a workaround available for the bug which
mitigates the effects of the bug to a bearable extent.
Bugs classified with a severity of `critical` most likely belong to either
the `urgent` priority, or to the `high` category when there is a workaround
available.
Bugs that have a `regression` label attached (see Regressions above) should
usually be handled with higher priority, so those kinds of issues will most
likely have a priority of `high` or `urgent` attached to them.
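As an illustration, applying such a classification with the GitHub CLI might look like this (a sketch only; the issue number is hypothetical, and `gh` must be installed):
```bash
# label a triaged regression with major severity and high priority
gh issue edit 1234 --add-label "regression,bug/severity:major,bug/priority:high"
```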
## Summary
Applying a little discipline when working with our issue tracker could greatly
help us in making informed decisions about which bugs to fix when. Also, it
would give us a clear view of whether we can do, for example, a new minor
release without having forgotten any outstanding issues that should make it into
that release.
If we are able to work with classification of bug issues, we might want to
extend the triage process to enhancement proposals and PRs as well.


@@ -1,5 +1,9 @@
# CI
!!!warning
This documentation is out-of-date. Please bear with us while we work to
update the documentation to reflect reality!
## Troubleshooting Builds
### "Check nothing has changed" step fails


@@ -0,0 +1,261 @@
# Contribution guide
## Preface
We want to make contributing to ArgoCD as simple and smooth as possible.
This guide will help you set up your build & test environment, so that you can start developing and testing bug fixes and feature enhancements without having to spend too much effort on setting up a local toolchain.
If you want to submit a PR, please read this document carefully, as it contains important information guiding you through our PR quality gates.
As is the case with the development process, this document is under constant change. If you notice any errors, or if you think this document is out-of-date, or if you think it is missing something: feel free to submit a PR or file a bug in our GitHub issue tracker.
If you need guidance with submitting a PR, or have any other questions regarding development of ArgoCD, do not hesitate to [join our Slack](https://argoproj.github.io/community/join-slack) and get in touch with us in the `#argo-dev` channel!
## Before you start
You will need at least the following things in your toolchain in order to develop and test ArgoCD locally:
* A Kubernetes cluster. You won't need a full-blown multi-master, multi-node cluster, but you will need something like K3S, Minikube or microk8s. You will also need a working Kubernetes client (`kubectl`) configuration in your development environment. The configuration must reside in `~/.kube/config` and the API server URL must point to the IP address of your local machine (or VM), and **not** to `localhost` or `127.0.0.1` if you are using the virtualized development toolchain (see below)
* You will also need a working Docker runtime environment, to be able to build and run images.
The Docker version must be fairly recent, and support multi-stage builds. You should not work as root. Make your local user a member of the `docker` group to be able to control the Docker service on your machine.
* Obviously, you will need a `git` client for pulling source code and pushing back your changes.
* Last but not least, you will need a Go SDK and related tools (such as GNU `make`) installed and working on your development environment. The minimum required Go version for building ArgoCD is **v1.14.0**.
* We will assume that your Go workspace is at `~/go`
!!! note
**Attention minikube users**: By default, minikube will create Kubernetes client configuration that uses authentication data from files. This is incompatible with the virtualized toolchain. So if you intend to use the virtualized toolchain, you have to embed this authentication data into the client configuration. To do so, issue `minikube config set embed-certs true` and restart your minikube. Please also note that minikube using the Docker driver is currently not supported with the virtualized toolchain, because the Docker driver exposes the API server on 127.0.0.1 hard-coded. If in doubt, run `make verify-kube-connect` to find out.
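As a quick sanity check (a sketch using standard `kubectl` and `minikube` commands plus the `make` target mentioned above; adjust for your environment):
```bash
# print the API server URL of the current context - it must not point to localhost/127.0.0.1
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'

# minikube only: embed the certificate data into the kubeconfig, then restart
minikube config set embed-certs true
minikube stop && minikube start

# verify that the toolchain can connect to your cluster
make verify-kube-connect
```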
## Submitting PRs
When you submit a PR against ArgoCD's GitHub repository, a couple of CI checks will be run automatically to ensure your changes will build fine and meet certain quality standards. Your contribution needs to pass those checks in order to be merged into the repository.
In general, it might be beneficial to only submit a PR for an existing issue. Especially for larger changes, an Enhancement Proposal should exist beforehand.
!!!note
Please make sure that you always create PRs from a branch that is up-to-date with the latest changes from ArgoCD's master branch. Depending on how long it takes for the maintainers to review and merge your PR, it might be necessary to pull in latest changes into your branch again.
Please understand that we, as an Open Source project, have limited capacity for reviewing and merging PRs to ArgoCD. We will do our best to review your PR and give you feedback as soon as possible, but please bear with us if it takes a little longer than expected.
The following sections will help you to submit a PR that meets the standards of our CI tests:
### Title of the PR
Please use a meaningful and concise title for your PR. This will help us to pick PRs for review quickly, and the PR title will also end up in the Changelog.
We use the [Semantic PR title checker](https://github.com/zeke/semantic-pull-requests) to categorize your PR into one of the following categories:
* `fix` - Your PR contains one or more code bug fixes
* `feat` - Your PR contains a new feature
* `docs` - Your PR improves the documentation
* `chore` - Your PR improves any internals of ArgoCD, such as the build process, unit tests, etc.
Please prefix the title of your PR with one of the valid categories. For example, if you chose `Add documentation for GitHub SSO integration` as the title of your PR, please use `docs: Add documentation for GitHub SSO integration` instead.
### Contributor License Agreement
Every contributor to ArgoCD must have signed the current Contributor License Agreement (CLA). You only have to sign the CLA when you are a first-time contributor, or when the agreement has changed since your last signature. The main purpose of the CLA is to ensure that you hold the required rights for your contribution. The CLA signing is an automated process.
You can read the current version of the CLA [here](https://cla-assistant.io/argoproj/argo-cd).
### PR template checklist
Upon opening a PR, its description will contain a checklist from a template. Please read the checklist, and tick the boxes that apply to you.
### Automated builds & tests
After you have submitted your PR, and whenever you push new commits to that branch, GitHub will run a number of Continuous Integration checks against your code. It will execute the following actions, and each of them has to pass:
* Build the Go code (`make build`)
* Generate API glue code and manifests (`make codegen`)
* Run a Go linter on the code (`make lint`)
* Run the unit tests (`make test`)
* Run the End-to-End tests (`make test-e2e`)
* Build and lint the UI code (`make ui`)
* Build the `argocd` CLI (`make cli`)
If any of these checks in the CI pipeline fails, it means that part of your contribution is considered faulty (or a test might be flaky, see below).
### Code test coverage
We use [CodeCov](https://codecov.io) in our CI pipeline to check for test coverage, and once you submit your PR, it will run and report on the coverage difference as a comment within your PR. If your submission introduces a significant drop in code coverage, the CI check will fail.
Whenever you develop a new feature or submit a bug fix, please also write appropriate unit tests for it. If you write a completely new module, please aim for at least 80% coverage.
If you want to see how much coverage just a specific module (i.e. your new one) has, you can set `TEST_MODULE` to the (fully qualified) name of that module when running `make test`, e.g.
```bash
make test TEST_MODULE=github.com/argoproj/argo-cd/server/cache
...
ok github.com/argoproj/argo-cd/server/cache 0.029s coverage: 89.3% of statements
```
## Local vs Virtualized toolchain
ArgoCD provides a fully virtualized development and testing toolchain using Docker images. It is recommended to use those images, as they provide the same runtime environment as the final product, and it is much easier to keep up-to-date with changes to the toolchain and dependencies. But as using Docker comes with a slight performance penalty, you might want to set up a local toolchain.
Most relevant targets for the build & test cycles in the `Makefile` come in two variants, one of them suffixed with `-local`. For example, `make test` will run the unit tests in the Docker container, while `make test-local` will run them natively on your local system.
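To illustrate the convention (the two targets below are from the `Makefile`; the comments are ours):
```shell
make test        # runs the unit tests inside the Docker toolchain container
make test-local  # runs the same unit tests natively (requires the local toolchain, see below)
```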
If you are going to use the virtualized toolchain, please bear in mind the following things:
* Your Kubernetes API server must listen on the interface of your local machine or VM, and not on `127.0.0.1` only.
* Your Kubernetes client configuration (`~/.kube/config`) must not use an API URL that points to `localhost` or `127.0.0.1`.
You can test whether the virtualized toolchain has access to your Kubernetes cluster by running `make verify-kube-connect` (*after* you have set up your development environment, as described below), which will run `kubectl version` inside the Docker container used for running all tests.
The Docker container for the virtualized toolchain will use the following local mounts from your workstation, and possibly modify its contents:
* `~/go/src` - Your Go workspace's source directory (modifications expected)
* `~/.cache/go-build` - Your Go build cache (modifications expected)
* `~/.kube` - Your Kubernetes client configuration (no modifications)
* `/tmp` - Your system's temp directory (modifications expected)
## Setting up your development environment
The following steps are required no matter whether you chose to use a virtualized or a local toolchain.
### Clone the ArgoCD repository from your personal fork on GitHub
* `mkdir -p ~/go/src/github.com/argoproj`
* `cd ~/go/src/github.com/argoproj`
* `git clone https://github.com/yourghuser/argo-cd`
* `cd argo-cd`
### Optional: Setup an additional Git remote
While everyone has their own Git workflow, the author of this document recommends creating a remote called `upstream` in your local copy, pointing to the original ArgoCD repository. This way, you can easily keep your local branches up-to-date by merging in the latest changes from the ArgoCD repository, i.e. by doing a `git pull upstream master` in your locally checked out branch. To create the remote, run `git remote add upstream https://github.com/argoproj/argo-cd`
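Put together, a typical update cycle with such an `upstream` remote might look like this sketch:
```shell
git remote add upstream https://github.com/argoproj/argo-cd  # one-time setup
git fetch upstream
git checkout master
git pull upstream master   # merge the latest upstream changes into your local master
```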
### Install the must-have requirements
Make sure you fulfill the pre-requisites above and run some preliminary tests. None of them should report an error.
* Run `kubectl version`
* Run `docker version`
* Run `go version`
### Build (or pull) the required Docker image
Build the required Docker image by running `make test-tools-image` or pull the latest version by issuing `docker pull argoproj/argocd-test-tools`.
The `Dockerfile` used to build these images can be found at `test/container/Dockerfile`.
### Test connection from build container to your K8s cluster
Run `make verify-kube-connect`; it should execute without error.
If you receive an error similar to the following:
```
The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?
make: *** [Makefile:386: verify-kube-connect] Error 1
```
you should edit your `~/.kube/config` and modify the `server` option to point to your correct K8s API (as described above).
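If you prefer the command line over editing the file by hand, `kubectl` can rewrite the server URL for you. A sketch, assuming a cluster entry named `minikube` and a machine IP of `192.168.1.100` (adapt both to your setup):
```shell
# Point the cluster entry at your machine's real IP instead of 127.0.0.1
kubectl config set-cluster minikube --server=https://192.168.1.100:6443
```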
## The development cycle
When you have developed and possibly manually tested the code you want to contribute, you should ensure that everything will build correctly. Commit your changes to the local copy of your Git branch and perform the following steps:
### Pull in all build dependencies
As build dependencies change over time, you have to synchronize your development environment with the current specification. In order to pull in all required dependencies, issue:
* `make dep-ui`
ArgoCD recently migrated to Go modules. Usually, dependencies will be downloaded at build time, but the Makefile provides two targets to download and vendor all dependencies:
* `make mod-download` will download all required Go modules and
* `make mod-vendor` will vendor those dependencies into the ArgoCD source tree
### Generate API glue code and other assets
ArgoCD relies on Google's [Protocol Buffers](https://developers.google.com/protocol-buffers) for its API, and makes heavy use of auto-generated glue code and stubs. Whenever you touch the API code, you must re-generate the auto-generated code.
* Run `make codegen`; this might take a while
* Check if something has changed by running `git status` or `git diff`
* Commit any resulting changes to your local Git branch; an appropriate commit message would be, for example, `Changes from codegen`.
!!!note
There are a few non-obvious assets that are auto-generated. You should not change the autogenerated assets, as they will be overwritten by a subsequent run of `make codegen`. Instead, change their source files. Prominent examples of non-obvious auto-generated code are `swagger.json` or the installation manifest YAMLs.
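Put together, a typical codegen cycle might look like this (the commit message is just an example):
```shell
make codegen                          # regenerate API glue code, manifests, swagger.json, ...
git status                            # inspect what has been regenerated
git add -A
git commit -m "Changes from codegen"  # commit the regenerated assets
```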
### Build your code and run unit tests
After the glue code has been generated, your code should build and the unit tests should run without any errors. Execute the following statements:
* `make build`
* `make test`
These steps are non-modifying, so there's no need to check for changes afterwards.
### Lint your code base
In order to keep a consistent code style in our source tree, your code must be well-formed in accordance with some widely accepted rules, which are applied by a Linter.
The Linter might make some automatic changes to your code, such as indentation fixes. Other errors reported by the Linter have to be fixed manually.
* Run `make lint` and observe any errors reported by the Linter
* Fix any of the errors reported and commit to your local branch
* Finally, once the Linter no longer reports any errors, run `git status` or `git diff` to check for any changes made automatically by the Linter
* If there were automatic changes, commit them to your local branch
If you touched UI code, you should also run the Yarn linter on it:
* Run `make lint-ui`
* Fix any of the errors reported by it
## Setting up a local toolchain
For development, you can either use the fully virtualized toolchain provided as Docker images, or you can set up the toolchain on your local development machine. Due to the dynamic nature of requirements, you might want to stay with the virtualized environment.
### Install required dependencies and build-tools
!!!note
The installation instructions are valid for Linux hosts only. Mac instructions will follow shortly.
For installing the tools required to build and test ArgoCD on your local system, we provide convenient installer scripts. By default, they will install binaries to `/usr/local/bin` on your system, which might require `root` privileges.
You can change the target location by setting the `BIN` environment variable before running the installer scripts. For example, you can install the binaries into `~/go/bin` (which should then be the first component in your `PATH` environment variable, i.e. `export PATH=~/go/bin:$PATH`):
```shell
make BIN=~/go/bin install-tools-local
```
Additionally, you have to install at least the following tools via your OS's package manager (this list might not always be up-to-date; an example installation follows the list):
* Git LFS plugin
* GnuPG version 2
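For example, on a Debian or Ubuntu host the two tools above could be installed as follows (package names may differ on other distributions):
```shell
sudo apt-get update
sudo apt-get install git-lfs gnupg2
```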
### Install Go dependencies
You need to pull in all required Go dependencies. To do so, run
* `make mod-download-local`
* `make mod-vendor-local`
### Test your build toolchain
The first check of whether your build toolchain is set up correctly is to generate the glue code for the API and, after that, run a normal build:
* `make codegen-local`
* `make build-local`
This should return without any error.
### Run unit-tests
The next thing is to make sure that unit tests are running correctly on your system. These require that all dependencies, such as Helm, Kustomize, Git, GnuPG, etc., are correctly installed and fully functional:
* `make test-local`
### Run end-to-end tests
The final step is running the End-to-End test suite, which makes sure that your Kubernetes dependencies are working properly. This will involve starting all of the ArgoCD components locally on your computer. The end-to-end tests consist of two parts: a server component and a client component.
* First, start the End-to-End server: `make start-e2e-local`. This will spawn a number of processes and services on your system.
* When all components have started, run `make test-e2e-local` to run the end-to-end tests against your local services.
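Assuming two separate terminal sessions, a typical local e2e run might look like this:
```shell
# Terminal 1: start the ArgoCD components and supporting services
make start-e2e-local

# Terminal 2: once everything is up, run the tests against the local services
make test-e2e-local
```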
For more information about End-to-End tests, refer to the [End-to-End test documentation](test-e2e.md).


@@ -0,0 +1,65 @@
# Contribution FAQ
## General
### Can I discuss my contribution ideas somewhere?
Sure thing! You can either open an Enhancement Proposal in our GitHub issue tracker or you can [join us on Slack](https://argoproj.github.io/community/join-slack) in channel #argo-dev to discuss your ideas and get guidance for submitting a PR.
### No one has looked at my PR yet. Why?
As we have limited manpower, it can sometimes take a while for someone to respond to your PR, especially when it contains complex or non-obvious changes. Please bear with us; we try to look at every PR that we receive.
### Why has my PR been declined? I put a lot of work into it!
We appreciate that you have put your valuable time and know-how into a contribution. Alas, some changes do not fit into the overall ArgoCD philosophy, and therefore can't be merged into the official ArgoCD source tree.
To be on the safe side, make sure that you have created an Enhancement Proposal for your change before starting to work on your PR and have gathered enough feedback from the community and the maintainers.
## Failing CI checks
### One of the CI checks failed. Why?
You can click on the "Details" link next to the failed step to get more details about the failure. This will take you to the CircleCI website.
![CircleCI pipeline](ci-pipeline-failed.png)
### Can I retrigger the checks without pushing a new commit?
Since the CI pipeline is triggered by Git commits, there is currently no (known) way to retrigger the CI checks without pushing a new commit to your branch.
If you are absolutely sure that the failure was due to a fault in the pipeline, and not an error within the changes you committed, you can push an empty commit to your branch, thus retriggering the pipeline without any code changes. To do so, issue
```bash
git commit --allow-empty -m "Retrigger CI pipeline"
git push origin <yourbranch>
```
### Why does the build step fail?
Chances are that it fails for one of the following reasons in CI while running fine on your machine:
* Sometimes, CircleCI kills the build step due to excessive memory usage. This happens rarely, but it has happened in the past. If you see a message like "killed" in the log output of CircleCI, you should retrigger the pipeline as described above. If the issue persists, please let us know.
* If the build is failing at the `Ensuring Gopkg.lock is up-to-date` step, you need to update the dependencies before you push your commits. Run `make dep-ensure` and `make dep` and commit the changes to `Gopkg.lock` to your branch.
### Why does the codegen step fail?
If the codegen step fails with "Check nothing has changed...", chances are high that you did not run `make codegen`, or did not commit the changes it made. You should double check by running `make codegen` followed by `git status` in the local working copy of your branch. Commit any changes and push them to your GH branch to have the CI check it again.
A second common cause is that you modified some of the auto-generated assets, as these will be overwritten by `make codegen`.
Generally, this step runs `codegen` and compares the outcome against the Git branch it has checked out. If there are differences, the step will fail.
### Why does the lint step fail?
The lint step is most likely to fail for two reasons:
* The `golangci-lint` process was OOM-killed by CircleCI. This happens sometimes and is annoying. It is indicated by a `Killed.` message in the CircleCI output.
If this is the case, please re-trigger the CI process as described above and see if it runs through.
* Your code failed to lint correctly, or modifications were performed by the `golangci-lint` process. You should run `make lint` on your local branch and fix all the issues.
### Why does the test or e2e steps fail?
You should check for the cause of the failure on the CircleCI website, as described above. This will give you the name of the test that has failed, and details about why. If your tests are passing locally (using the virtualized toolchain), chances are that the test is flaky and will pass the next time it is run. Please retrigger the CI pipeline as described above and see if the test step now passes.


@@ -1,5 +1,155 @@
# Releasing
## Automated release procedure
Starting from the `release-1.6` branch, ArgoCD can be released in an automated fashion
using GitHub Actions. The release process takes about 20 minutes, sometimes a
little less, depending on the performance of GitHub Actions runners.
The target release branch must already exist in the GitHub repository. If you,
for example, want to create release `v1.7.0`, the corresponding release branch
`release-1.7` needs to exist, otherwise the release cannot be built. Also,
the trigger tag should always be created on the release branch, checked out
in your local repository clone.
Before triggering the release automation, `CHANGELOG.md` should be updated
with the latest information, and this change should be committed and pushed to
the release branch in the GitHub repository. Afterwards, the automation can be
triggered.
**Manual steps before release creation:**
* Update `CHANGELOG.md` with changes for this release
* Commit & push changes to `CHANGELOG.md`
* Prepare release notes (save to some file, or copy from Changelog)
**The automation will perform the following steps:**
* Update `VERSION` file in release branch
* Update manifests with image tags of new version in release branch
* Build the Docker image and push to Docker Hub
* Create release tag in the GitHub repository
* Create GitHub release and attach the required assets to it (CLI binaries, ...)
Finally, it will remove the trigger tag from the repository again.
The automation supports both GA and pre-releases, and is triggered by
pushing a tag to the repository. The tag must be in one of the following formats
to trigger the GH workflow:
* GA: `release-v<MAJOR>.<MINOR>.<PATCH>`
* Pre-release: `release-v<MAJOR>.<MINOR>.<PATCH>-rc<RC#>`
The tag must be an annotated tag, and it must contain the release notes in its
message. Please note that Markdown uses the `#` character for formatting, but
Git uses it as its comment character. To solve this, temporarily switch Git's
comment character to something else; the `;` character is recommended.
For example, assuming you have configured the Git remote for the
`github.com/argoproj/argo-cd` repository to be named `upstream` and are in your
locally checked out repo:
```shell
git config core.commentChar ';'
git tag -a -F /path/to/release-notes.txt release-v1.6.0-rc2
git push upstream release-v1.6.0-rc2
git tag -d release-v1.6.0-rc2
git config core.commentChar '#'
```
For convenience, there is a shell script in the tree that ensures all the
pre-requisites are met and that the trigger tag is well-formed before pushing
it to the GitHub repo.
In summary, the steps it performs are:
* Create annotated trigger tag in your local repository
* Push tag to GitHub repository to trigger workflow
* Remove trigger tag from your local repository
The script can be found at `hacks/trigger-release.sh` and is used as follows:
```shell
./hacks/trigger-release.sh <version> <remote name> [<release notes path>]
```
The `<version>` identifier needs to be specified **without** the `release-`
prefix, so just specify it as `v1.6.0-rc2` for example. The `<remote name>`
specifies the name of the remote used to push to the GitHub repository.
If you omit the `<release notes path>`, an editor will pop up asking you to
enter the tag's annotation, so you can paste the release notes, save, and exit.
It will also take care of temporarily configuring the `core.commentChar` and
setting it back to its original state.
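For example, a complete invocation for the `v1.6.0-rc2` release used above
might look like this (remote name and notes path are examples):
```shell
./hacks/trigger-release.sh v1.6.0-rc2 upstream /path/to/release-notes.txt
```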
!!!note
It is strongly recommended to use this script to trigger the workflow
instead of manually pushing a tag to the repository.
Once the trigger tag is pushed to the repo, the GitHub workflow will start
executing. You can follow its progress under the `Actions` tab; the name of the
action is `Create release`. Don't get confused by the name of the running
workflow: it will be the commit message of the latest commit to the `master`
branch. This is a limitation of GH actions.
The workflow performs the necessary checks before the build actually starts, so
that the release can be built successfully. It will error when one of the
prerequisites is not met, or if the release cannot be built (e.g. it already
exists, the release notes are invalid, etc.). You can see a summary of what has
failed on the job's overview page, and more detailed errors in the output
of the step that failed.
!!!note
You cannot perform more than one release on the same release branch at the
same time. For example, both `v1.6.0` and `v1.6.1` would operate on the
`release-1.6` branch. If you submit `v1.6.1` while `v1.6.0` is still
executing, the release automation will not execute. You have to either
cancel `v1.6.0` before submitting `v1.6.1` or wait until it has finished.
You can execute releases on different release branches simultaneously, for
example `v1.6.0` and `v1.7.0-rc1`, without problems.
### Verifying automated release
After the automatic release creation has finished, you should perform manual
checks to see if the release came out correctly:
* Check status & output of the GitHub action
* Check [https://github.com/argoproj/argo-cd/releases](https://github.com/argoproj/argo-cd/releases)
  to see if the release has been created correctly, and if all required assets
  are attached.
* Check whether the image has been published on DockerHub correctly
### If something went wrong
If something went wrong, damage should be limited. Depending on the steps that
have been performed, you will need to manually clean up.
* Delete the release tag (i.e. `v1.6.0-rc2`) created in the GitHub repository. This
  will immediately set the release (if created) to `draft` status, making it invisible
  to the general public.
* Delete the draft release (if created) from the `Releases` page on GitHub
* If the Docker image has been pushed to DockerHub, delete it
* If commits have been performed to the release branch, revert them (see the sketch after this list). Paths that could have been committed to are:
* `VERSION`
* `manifests/*`
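Sticking with the `v1.6.0-rc2` example, a manual cleanup might look like the
following sketch (remote name, tag and commit SHA are placeholders to adapt):
```shell
# Remove the release tag from GitHub and from the local clone
git push upstream :refs/tags/v1.6.0-rc2
git tag -d v1.6.0-rc2

# Revert any release commits (VERSION, manifests/*) on the release branch
git checkout release-1.6
git revert <SHA of the offending commit>
git push upstream release-1.6
```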
### Post-process manual steps
For now, the only manual steps left are to
* update the brew formulae for the ArgoCD CLI on Mac if the release is GA
* update the `stable` tag in the GitHub repository to point to the new release (if appropriate)
These will be automated as well in the future.
## Manual releasing
The automated release process does not interfere with the manual release process, since
the trigger tag does not match a normal release tag. If you prefer to perform a
manual release, or if the automated release is broken for some reason, these are the
steps:
Make sure you are logged into Docker Hub:
```bash
@@ -42,18 +192,14 @@ git push $REPO $BRANCH
git push $REPO $VERSION
```
If GA, update `stable` tag:
```bash
git tag stable --force && git push $REPO stable --force
```
Update [GitHub releases](https://github.com/argoproj/argo-cd/releases) with:
* Getting started (copy from previous release)
* Changelog
* Binaries (e.g. `dist/argocd-darwin-amd64`).
## Update brew formulae (manual)
If GA, update Brew formula:
```bash
@@ -64,7 +210,15 @@ git commit -am "Update argocd to $VERSION"
git push
```
### Verify
## Update stable tag (manual)
If GA, update `stable` tag:
```bash
git tag stable --force && git push $REPO stable --force
```
## Verify release
Locally:


@@ -0,0 +1,108 @@
# Running ArgoCD locally
## Run ArgoCD outside of Kubernetes
During development, it can be more convenient to run ArgoCD outside of a Kubernetes cluster. This will greatly speed up development, as you don't have to constantly build, push and install new ArgoCD Docker images with your latest changes.
You will still need a working Kubernetes cluster, as described in the [Contribution Guide](contributing.md), where ArgoCD will store all of its resources.
If you followed the [Contribution Guide](contributing.md) in setting up your toolchain, you can run ArgoCD locally with these simple steps:
### Scale down any ArgoCD instance in your cluster
First make sure that ArgoCD is not running in your development cluster by scaling down the deployments:
```shell
kubectl -n argocd scale deployment/argocd-application-controller --replicas 0
kubectl -n argocd scale deployment/argocd-dex-server --replicas 0
kubectl -n argocd scale deployment/argocd-repo-server --replicas 0
kubectl -n argocd scale deployment/argocd-server --replicas 0
kubectl -n argocd scale deployment/argocd-redis --replicas 0
```
### Start local services
When you use the virtualized toolchain, starting local services is as simple as running
```bash
make start
```
This will start all ArgoCD services and the UI in a Docker container and expose the following ports to your host:
* The ArgoCD API server on port 8080
* The ArgoCD UI server on port 4000
You can now use either the web UI by pointing your browser to `http://localhost:4000` or use the CLI against the API at `http://localhost:8080`. Be sure to use the `--insecure` and `--plaintext` options to the CLI.
As an alternative to passing the above command line parameters each time you call the `argocd` CLI, you can set the following environment variables:
```bash
export ARGOCD_SERVER=127.0.0.1:8080
export ARGOCD_OPTS="--plaintext --insecure"
```
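With these variables set, subsequent CLI calls no longer need the extra flags. A hypothetical session against the local services might look like this:
```bash
argocd login 127.0.0.1:8080 --username admin   # picks up --plaintext/--insecure from ARGOCD_OPTS
argocd app list
```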
### Scale up ArgoCD in your cluster
Once you have finished testing your changes locally and want to bring back ArgoCD in your development cluster, simply scale the deployments up again:
```bash
kubectl -n argocd scale deployment/argocd-application-controller --replicas 1
kubectl -n argocd scale deployment/argocd-dex-server --replicas 1
kubectl -n argocd scale deployment/argocd-repo-server --replicas 1
kubectl -n argocd scale deployment/argocd-server --replicas 1
kubectl -n argocd scale deployment/argocd-redis --replicas 1
```
## Run your own ArgoCD images on your cluster
For your final tests, it might be necessary to build your own images and run them in your development cluster.
### Create Docker account and login
You might need to create an account on [Docker Hub](https://hub.docker.com) if you don't have one already. Once you have created your account, log in from your development environment:
```bash
docker login
```
### Create and push Docker images
You will need to push the built images to your own Docker namespace:
```bash
export IMAGE_NAMESPACE=youraccount
```
If you don't set `IMAGE_TAG` in your environment, the default of `:latest` will be used. To change the tag, export the variable in the environment:
```bash
export IMAGE_TAG=1.5.0-myrc
```
Then you can build & push the image in one step:
```bash
DOCKER_PUSH=true make image
```
### Configure manifests for your image
With `IMAGE_NAMESPACE` and `IMAGE_TAG` still set, run
```bash
make manifests
```
to build a new set of installation manifests which include your specific image reference.
!!!note
Do not commit these manifests to your repository. If you want to revert the changes, the easiest way is to unset `IMAGE_NAMESPACE` and `IMAGE_TAG` from your environment and run `make manifests` again. This will re-create the default manifests.
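Reverting to the default manifests, as described in the note above, is then simply:
```bash
unset IMAGE_NAMESPACE IMAGE_TAG
make manifests
```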
### Configure your cluster with custom manifests
The final step is to apply the manifests to your cluster, so it will pull and run your image:
```bash
kubectl apply -n argocd --force -f manifests/install.yaml
```


@@ -1,5 +1,9 @@
# E2E Tests
!!!warning
This documentation is out-of-date. Please bear with us while we work to
update the documentation to reflect reality!
The directory contains E2E tests and test applications. The tests assume that Argo CD services are installed in the `argocd-e2e` namespace of the cluster in the current context. One throw-away
namespace, `argocd-e2e***`, is created prior to test execution. The throw-away namespace is used as a target namespace for test applications.


@@ -115,11 +115,9 @@ E.g.
* `'3072Mi'` normalized to `'3Gi'`
* `3072` normalized to `'3072'` (quotes added)
To fix this - replace your values with the normalized values.
To fix this use diffing customizations [settings](./user-guide/diffing.md#known-kubernetes-types-in-crds-resource-limits-volume-mounts-etc).
See [#1615](https://github.com/argoproj/argo-cd/issues/1615)
# How Do I Fix "invalid cookie, longer than max length 4093"?
## How Do I Fix "invalid cookie, longer than max length 4093"?
Argo CD uses a JWT as the auth token. You likely are part of many groups and have gone over the 4KB limit which is set for cookies.
You can get the list of groups by opening "developer tools -> network"
@@ -150,3 +148,8 @@ argocd ... --insecure
```
!!! warning "Do not use `--insecure` in production"
## I have configured Dex via `dex.config` in `argocd-cm`, it still says Dex is unconfigured. Why?
Most likely you forgot to set the `url` in `argocd-cm` to point to your ArgoCD as well. See also
[the docs](/operator-manual/user-management/#2-configure-argo-cd-for-sso)


@@ -20,7 +20,7 @@ This will create a new namespace, `argocd`, where Argo CD services and applicati
On GKE, you will need to grant your account the ability to create new cluster roles:
```bash
kubectl create clusterrolebinding YOURNAME-cluster-admin-binding --clusterrole=cluster-admin --user=YOUREMAIL@gmail.com
kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user="$(gcloud config get-value account)"
```
!!! note


@@ -25,7 +25,8 @@ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/st
```
Follow our [getting started guide](getting_started.md). Further user oriented [documentation](user_guide/)
is provided for additional features. Developer oriented [documentation](developer-guide/) is available for people interested in building third-party integrations.
is provided for additional features. If you are looking to upgrade ArgoCD, see the [upgrade guide](./operator-manual/upgrading/overview.md).
Developer oriented [documentation](developer-guide/) is available for people interested in building third-party integrations.
## How it works


@@ -50,6 +50,8 @@ spec:
# kustomize specific config
kustomize:
# Optional kustomize version. Note: version must be configured in argocd-cm ConfigMap
version: v3.5.4
# Optional image name prefix
namePrefix: prod-
# Optional images passed to "kustomize edit set image".
@@ -92,7 +94,8 @@ spec:
automated:
prune: true # Specifies if resources should be pruned during auto-syncing ( false by default ).
selfHeal: true # Specifies if partial app sync should be executed when resources are changed only in target Kubernetes cluster and no git change detected ( false by default ).
validate: true # Validate resources before applying to k8s, defaults to true.
syncOptions: # Sync options which modifies sync behavior
- Validate=false # disables resource validation (equivalent to 'kubectl apply --validate=false')
# Ignore differences at the specified json pointers
ignoreDifferences:


@@ -204,6 +204,10 @@ data:
# Build options/parameters to use with `kustomize build` (optional)
kustomize.buildOptions: --load_restrictor none
# Additional Kustomize versions and corresponding binary paths
kustomize.version.v3.5.1: /custom-tools/kustomize_3_5_1
kustomize.version.v3.5.4: /custom-tools/kustomize_3_5_4
# The metadata.label key name where Argo CD injects the app name as a tracking label (optional).
# Tracking labels are used to determine which resources need to be deleted when pruning.
# If omitted, Argo CD injects the app name into the label: 'app.kubernetes.io/instance'


@@ -9,7 +9,7 @@ metadata:
type: Opaque
data:
# TLS certificate and private key for API server (required).
# Autogenerated with a self-signed ceritificate when keys are missing or invalid.
# Autogenerated with a self-signed certificate when keys are missing or invalid.
tls.crt:
tls.key:


@@ -4,15 +4,20 @@ Argo CD applications, projects and settings can be defined declaratively using K
## Quick Reference
| Name | Kind | Description |
|------|------|-------------|
| [`argocd-cm.yaml`](argocd-cm.yaml) | ConfigMap | General Argo CD configuration |
| [`argocd-secret.yaml`](argocd-secret.yaml) | Secret | Password, Certificates, Signing Key |
| [`argocd-rbac-cm.yaml`](argocd-rbac-cm.yaml) | ConfigMap | RBAC Configuration |
| [`argocd-tls-certs-cm.yaml`](argocd-tls-certs-cm.yaml) | ConfigMap | Custom TLS certificates for connecting Git repositories via HTTPS (v1.2 and later) |
| [`argocd-ssh-known-hosts-cm.yaml`](argocd-ssh-known-hosts-cm.yaml) | ConfigMap | SSH known hosts data for connecting Git repositories via SSH (v1.2 and later) |
| [`application.yaml`](application.yaml) | Application | Example application spec |
| [`project.yaml`](project.yaml) | AppProject | Example project spec |
| File Name | Resource Name | Kind | Description |
|-----------|---------------|------|-------------|
| [`argocd-cm.yaml`](argocd-cm.yaml) | argocd-cm | ConfigMap | General Argo CD configuration |
| [`argocd-secret.yaml`](argocd-secret.yaml) | argocd-secret | Secret | Password, Certificates, Signing Key |
| [`argocd-rbac-cm.yaml`](argocd-rbac-cm.yaml) | argocd-rbac-cm | ConfigMap | RBAC Configuration |
| [`argocd-tls-certs-cm.yaml`](argocd-tls-certs-cm.yaml) | argocd-tls-certs-cm | ConfigMap | Custom TLS certificates for connecting Git repositories via HTTPS (v1.2 and later) |
| [`argocd-ssh-known-hosts-cm.yaml`](argocd-ssh-known-hosts-cm.yaml) | argocd-ssh-known-hosts-cm | ConfigMap | SSH known hosts data for connecting Git repositories via SSH (v1.2 and later) |
| [`application.yaml`](application.yaml) | - | Application | Example application spec |
| [`project.yaml`](project.yaml) | - | AppProject | Example project spec |
All resources, including `Application` and `AppProject` specs, have to be installed in the ArgoCD namespace (by default `argocd`). Also, ConfigMap and Secret resources need to be named as shown in the table above. For `Application` and `AppProject` resources, the name of the resource equals the name of the application or project within ArgoCD. This also means that application and project names are unique within the same ArgoCD installation - you cannot, for example, have the same application name for two different applications.
!!!warning "A note about ConfigMap resources"
Be sure to label your ConfigMap resources with `app.kubernetes.io/part-of: argocd`, otherwise ArgoCD will not be able to use them.
## Applications
@@ -100,6 +105,12 @@ spec:
kind: LimitRange
- group: ''
kind: NetworkPolicy
# Deny all namespace-scoped resources from being created, except for Deployment and StatefulSet
namespaceResourceWhitelist:
- group: 'apps'
kind: Deployment
- group: 'apps'
kind: StatefulSet
roles:
# A role which provides read-only access to all applications in the project
- name: read-only
@@ -122,6 +133,12 @@ spec:
## Repositories
!!!note
Some Git hosters - notably GitLab and possibly on-premise GitLab instances as well - require you to
specify the `.git` suffix in the repository URL, otherwise they will send an HTTP 301 redirect to the
repository URL suffixed with `.git`. ArgoCD will **not** follow these redirects, so you have to
adapt your repository URL to be suffixed with `.git`.
Repository credentials are stored in secrets. Use the following steps to configure a repo:
1. Create a secret which contains the repository credentials. Consider using [bitnami-labs/sealed-secrets](https://github.com/bitnami-labs/sealed-secrets) to store an encrypted secret
@@ -430,7 +447,7 @@ tlsClientConfig:
# PEM-encoded bytes (typically read from a client certificate key file).
keyData: string
# ServerName is passed to the server for SNI and is used in the client to check server
# ceritificates against. If ServerName is empty, the hostname used to contact the
# certificates against. If ServerName is empty, the hostname used to contact the
# server is used.
serverName: string
```


@@ -3,7 +3,7 @@
## Overview
Argo CD provides built-in health assessment for several standard Kubernetes types, which is then
surfaced to the overall Application health status as a whole. The following checks are made for
specific types of kuberentes resources:
specific types of kubernetes resources:
### Deployment, ReplicaSet, StatefulSet DaemonSet
* Observed generation is equal to desired generation.


@@ -158,6 +158,8 @@ Traefik can be used as an edge router and provide [TLS](https://docs.traefik.io/
It currently has an advantage over NGINX in that it can terminate both TCP and HTTP connections _on the same port_, meaning you do not require multiple ingress objects and hosts.
The API server should be run with TLS disabled. Edit the `argocd-server` deployment to add the `--insecure` flag to the argocd-server command.
```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
@@ -196,6 +198,59 @@ ArgoCD endpoints may be protected by one or more reverse proxies layers, in that
$ argocd login <host>:<port> --header 'x-token1:foo' --header 'x-token2:bar' # can be repeated multiple times
$ argocd login <host>:<port> --header 'x-token1:foo,x-token2:bar' # headers can also be comma separated
```
## ArgoCD Server and UI Root Path (v1.5.3)
ArgoCD server and UI can be configured to be available under a non-root path (e.g. `/argo-cd`).
To do this, add the `--rootpath` flag into the `argocd-server` deployment command:
```yaml
spec:
template:
spec:
name: argocd-server
containers:
- command:
- /argocd-server
- --staticassets
- /shared/app
- --repo-server
- argocd-repo-server:8081
- --rootpath
- /argo-cd
```
NOTE: The flag `--rootpath` changes both API Server and UI base URL.
Example nginx.conf:
```
worker_processes 1;
events { worker_connections 1024; }
http {
sendfile on;
server {
listen 443;
location /argo-cd/ {
proxy_pass https://localhost:8080/argo-cd/;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
# buffering should be disabled for api/v1/stream/applications to support chunked response
proxy_buffering off;
}
}
}
```
The `--grpc-web-root-path` flag is used to provide a non-root path (e.g. `/argo-cd`):
```shell
$ argocd login <host>:<port> --grpc-web-root-path /argo-cd
```
## UI Base Path


@@ -30,6 +30,13 @@ spec:
- group: ''
kind: NetworkPolicy
# Deny all namespace-scoped resources from being created, except for Deployment and StatefulSet
namespaceResourceWhitelist:
- group: 'apps'
kind: Deployment
- group: 'apps'
kind: StatefulSet
# Enables namespace orphaned resource monitoring.
orphanedResources:
warn: false


@@ -2,8 +2,8 @@
The RBAC feature enables restriction of access to Argo CD resources. Argo CD does not have its own
user management system and has only one built-in user `admin`. The `admin` user is a superuser and
it has unrestricted access to the system. RBAC requires [SSO configuration](user-management/index.md). Once SSO is
configured, additional RBAC roles can be defined, and SSO groups can then be mapped to roles.
it has unrestricted access to the system. RBAC requires [SSO configuration](user-management/index.md) or [one or more local users setup](user-management/index.md).
Once SSO or local users are configured, additional RBAC roles can be defined, and SSO groups or local users can then be mapped to roles.
## Basic Built-in Roles


@@ -4,6 +4,7 @@ Argo CD is un-opinionated about how secrets are managed. There's many ways to do
* [Bitnami Sealed Secrets](https://github.com/bitnami-labs/sealed-secrets)
* [Godaddy Kubernetes External Secrets](https://github.com/godaddy/kubernetes-external-secrets)
* [External Secrets Operator](https://github.com/ContainerSolutions/externalsecret-operator)
* [Hashicorp Vault](https://www.vaultproject.io)
* [Banzai Cloud Bank-Vaults](https://github.com/banzaicloud/bank-vaults)
* [Helm Secrets](https://github.com/futuresimple/helm-secrets)
