Compare commits

...

223 Commits

Author SHA1 Message Date
Alexander Matyushentsev
1b0e3e7b80 Update manifests to v0.9.2 2018-09-28 09:36:21 -07:00
Jesse Suen
2faf45af94 Add version check during release to ensure compiled version is accurate (#646) 2018-09-27 18:57:44 -07:00
Jesse Suen
b8c4436870 Update generated files 2018-09-27 18:09:50 -07:00
Jesse Suen
f8b2576f19 Fix issue where argocd-server logged credentials in plain text during repo add (issue #653) 2018-09-27 18:04:10 -07:00
Jesse Suen
61f4513e52 Switch to go-git for all remote git interactions including auth (issue #651) 2018-09-27 18:03:56 -07:00
Jesse Suen
4d210c887b Do not append .git extension during normalization for Azure hosted git (issue #643) (#645) 2018-09-27 18:03:33 -07:00
Alexander Matyushentsev
ab6b2969ac Issue #650 - Temporarily ignore service catalog resources (#661) 2018-09-27 18:00:24 -07:00
Andrew Merenbach
447dd30768 Update generated files (#660) 2018-09-27 13:53:56 -07:00
dthomson25
ab98589e1a Normalize policies by always adding space after comma (#659) 2018-09-27 13:30:49 -07:00
Stephen Haynes
2fd6c11338 update to kustomize 1.0.8 (#644) 2018-09-26 21:28:08 -07:00
Alexander Matyushentsev
b90102bc68 Update argocd VERSION file 2018-09-24 15:28:17 -07:00
Alexander Matyushentsev
1aa7146d3e Update manifests to v0.9.1 2018-09-24 14:26:20 -07:00
Alexander Matyushentsev
b366977265 Issue #639 - Repo server unable to execute ls-remote for private repos (#640) 2018-09-24 14:22:23 -07:00
Alexander Matyushentsev
b6439dc545 Update manifests to v0.9.0 2018-09-24 13:16:09 -07:00
Alexander Matyushentsev
bed82d68df Update changelog and fix release command dependency (#638) 2018-09-24 12:58:17 -07:00
Jesse Suen
359271dfa8 Update manifests to support in-cluster installations (#634) 2018-09-24 10:14:31 -07:00
Jesse Suen
0af77a0706 Add more event sources and provide better detail in event messages (issue #635) (#637)
* Expand SyncOperation to also store parameter overrides
Fix auto-sync when used with parameter overrides

* Add more event sources and provide better detail in event messages (issue #635)
2018-09-24 08:52:43 -07:00
Jesse Suen
e6efd79ad8 Support ability to use a helm values files from a URL (issue #624) 2018-09-21 16:05:42 -07:00
Jesse Suen
c953934d2e Simplify the RBAC resources to remove unnecessary sub-resources (issue #629) 2018-09-21 15:25:08 -07:00
Alexander Matyushentsev
5b4742d42b Issue #613 - Don't delete CRD (#630) 2018-09-21 10:29:32 -07:00
Jesse Suen
269f70df51 Trim git url during normalization (issue #614) (#623) 2018-09-20 16:26:17 -07:00
Jesse Suen
67177f933b Fix false OutOfSync condition when an explicit namespace is set in the config (#622) 2018-09-20 14:52:16 -07:00
Jesse Suen
606fdcded7 Rename server.crt/server.key to tls.crt/tls.key to integrate with Ingress (issue #617) 2018-09-20 12:49:23 -07:00
Alexander Matyushentsev
70b9db68b4 Issue #599 - Lazy enforcement of unknown cluster/namespace restricted resources (#612) 2018-09-20 09:48:54 -07:00
Jesse Suen
dc8a2f5d62 Support for exporting prometheus metrics about ArgoCD applications (#608) 2018-09-17 14:05:11 -07:00
Alexander Matyushentsev
8830cf9556 Issue #609 - Support restricting TLS version (#610) 2018-09-17 13:14:00 -07:00
Jesse Suen
bfb558eb92 Fix issue where helm hooks were being deployed as part of sync (issue #605) 2018-09-17 11:29:44 -07:00
Jesse Suen
505866a4c6 Support helm charts with dependencies and namespace sensitivity (issue #582) 2018-09-17 11:29:44 -07:00
Yuki Kodama
acd2de80fb Update getting started to point to v0.8.2 (#607) 2018-09-15 23:45:06 -07:00
Alexander Matyushentsev
0b08bf4537 Issue #523 - Use 'kubectl auth reconcile' for RBAC resources (#600) 2018-09-14 20:38:35 -07:00
Jesse Suen
223091482c Improve three-way diff to provide more accurate Sync status and diff result (issue #597) (#604) 2018-09-14 19:10:11 -07:00
Andrew Merenbach
4699946e1b Derive dedicated Dex deployment (#564)
Put Dex into its own deployment and service to decouple API server stability from auth token processing
2018-09-14 17:08:12 -07:00
Jesse Suen
097f87fd52 Improve remarshalling to use reflection/schema builders to handle all k8s core types (#603) 2018-09-14 16:17:20 -07:00
Alexander Matyushentsev
66b4f3a685 Issue #515 - handle concurrent settings initialization by api servers (#602) 2018-09-14 15:09:12 -07:00
Jesse Suen
02116d4bfc Fix comparison failure when app contains unregistered custom resource (issue #583) (#596) 2018-09-13 14:02:04 -07:00
Jesse Suen
fb17589af6 Fix race conditions in kube.GetResourcesWithLabel and DeleteResourceWithLabel (issue #587) (#593) 2018-09-13 13:58:47 -07:00
Alexander Matyushentsev
15ce7ea880 Issue #584 - ArgoCD fails to deploy resources list (#598) 2018-09-13 13:52:30 -07:00
Alexander Matyushentsev
57a3123a55 Issue #482 - Support IAM Authentication for managing external K8s clusters (#588) 2018-09-13 00:09:23 -07:00
Jesse Suen
32e96e4bb2 Fix app sync / wait panic in CLI 2018-09-12 23:41:42 -07:00
Jesse Suen
47ee26a77a Downgrade ksonnet from v0.12.0 to v0.11.0 due to quote unescape regression 2018-09-12 23:41:42 -07:00
dthomson25
9cd5d52fbc Add iat as path param for delete token http call (#586) 2018-09-12 19:49:20 -07:00
Alexander Matyushentsev
aa2afcd47b Issue #330 - Projects need controls on cluster-scoped resources (#558)
* Issue #330 - Projects need controls on cluster-scoped resources

* Issue #330 - Introduce namespace resources black-list
2018-09-11 15:10:47 -07:00
Jesse Suen
fd510e7933 Support an automated sync policy upon detection of OutOfSync status from git (#571) 2018-09-11 14:28:53 -07:00
Jesse Suen
e29d5b9634 In-memory implementation of ls-remote using go-git to reduce repo lock contention (#574) 2018-09-11 13:53:51 -07:00
Conor Fennell
2f9891b15b Issue #577 - Add rbac non resource url policy for argocd-manager-role (#578)
* Add rbac non resource url policy for argocd-manager-role
* allow all non resource urls to be added through rbac
2018-09-11 13:23:10 -07:00
Jesse Suen
c3ecd615ff Update getting started and docs to point to v0.8.1 (#575) 2018-09-10 19:05:47 -07:00
Jesse Suen
4e22a3cb21 Add link to SigApps video and update CHANGELOG for v0.8.1 (#572) 2018-09-10 16:08:08 -07:00
Jesse Suen
bc98b65190 Fix controller hot loop when app source contains bad manifests (issue #568) (#570) 2018-09-10 10:58:13 -07:00
Jesse Suen
02b756ef40 Fix issue where branch checkout did not have accurate git tree state (issue #567) (#569) 2018-09-10 10:55:12 -07:00
dthomson25
954706570c Reorder K8s resources to correct creation order (#551) 2018-09-10 10:14:14 -07:00
Alexander Matyushentsev
e2faf6970f Issue #527 - Support --in-cluster authentication without providing a kubeconfig (#559)
* Issue #527 - Support --in-cluster authentication without providing a kubeconfig

* Issue #527 - make sure resources are watched for 'local' cluster
2018-09-10 08:20:17 -07:00
Alexander Matyushentsev
a528ae9c12 Issue #553 - Turn on TLS for repo server (#563) 2018-09-08 00:17:29 +03:00
Alexander Matyushentsev
0a5871eba4 Issue #470 - K8s secrets need to be redacted in API server (#560) 2018-09-07 23:51:32 +03:00
Alexander Matyushentsev
27471d5249 Issue #540 - Support raw jsonnet as an application source (#561) 2018-09-07 21:15:19 +03:00
Alexander Matyushentsev
ed484c00db Issue 499 - valueFiles path should be relative to app directory (#552) 2018-09-05 23:37:26 +03:00
Jesse Suen
b868f26ca4 Update documentation for v0.8.0 (#550) 2018-09-04 22:31:21 -07:00
Jesse Suen
d7c04ae24c Update manifests to use v0.8.0
Make manifests friendly to `kubectl apply` semantics by omitting `data:` field
RBAC docs improvements
2018-09-04 17:58:50 -07:00
Jesse Suen
5bcf8c40e0 Minor improvements to token CLI (#549) 2018-09-04 17:57:31 -07:00
dthomson25
b8e30ed953 Add documentation on project roles and JWT tokens (#533)
* Add documentation on project roles and JWT tokens
* Add AppProject CRD to architecture.md
2018-09-04 17:57:00 -07:00
Jesse Suen
e3adb30ca7 Run all containers as an unprivileged user (resolves #528) (#546) 2018-09-04 13:47:00 -07:00
Jesse Suen
1e8c570c8a Fix argocd app wait printing incorrect Sync output (resolves #542) (#543)
Timeout condition was not printing final status.
2018-08-31 11:25:15 -07:00
Jesse Suen
40f2220f1d Fix issue where argocd could not sync to a tag (#541) 2018-08-31 11:24:14 -07:00
dthomson25
4da779c44c Add PVC healthcheck to controller (#501) (#537) 2018-08-28 16:40:32 -07:00
Jesse Suen
b54a5a3e25 Refactor Makefile/build to use a single Dockerfile. Update kustomize to v1.0.7 (#538) 2018-08-28 16:00:14 -07:00
dthomson25
f572bcff58 Create default project on startup (#535)
Implemented to resolve issue #514
2018-08-28 10:06:01 -07:00
Andrew Merenbach
d47b7e6128 Use gRPC error codes instead of fmt.Errorf (#532) 2018-08-27 17:54:29 -07:00
Andrew Merenbach
8d9e4faae9 Add health check on API server (#522)
* Add app health endpoints

* Update generated files

* Revert "Update generated files"

This reverts commit 40f490797645ed0f30d05785748e3919dea31b7f.

* Revert "Add app health endpoints"

This reverts commit 650688dd2ee4a533e29b7df69e0bbb2436eead6b.

* Add dedicated health endpoint

* Update generated files

* Slim down basic server

* Update generated files

* Update health server creation

* Fix import, endpoint casing

* Flesh out basic health check

* Add additional endpoints, fix check, thanks @jessesuen

* Fix errors

* Update generated files

* Simplify health check, update endpoint

* Update generated files

* Factor out health check code

* Update generated files

* Rm health endpoint

* Add healthz utility

* Log error instead of printing it

* Update comment

* Add liveness, readiness probes to manifest for API server

* Add health check test

* Tweak timeouts, endpoints in probes, thanks @jessesuen

* Tweak probes, thanks @jessesuen
2018-08-23 09:24:21 -07:00
Jesse Suen
cf630055b0 API discovery becomes best effort when partial resource list is returned (resolves #524) (#526) 2018-08-21 13:53:42 -07:00
dthomson25
a5870c894f Fix typo in sso.md (#518) 2018-08-21 09:10:31 -07:00
Andrew Merenbach
c236ee99d4 Use named FIFO so we can exit with non-zero status (#516) 2018-08-16 13:22:29 -07:00
Jesse Suen
130e242aa9 Fix build breakage (#517) 2018-08-15 18:31:43 -07:00
Jesse Suen
39f0a17d0d Add ability to delete a single application resource (issue #262) (#511) 2018-08-15 15:01:29 -07:00
Jesse Suen
da0682afa7 Support for kustomize app directories (#510) 2018-08-15 14:54:56 -07:00
Andrew Merenbach
3c755a2002 Upgrade Ksonnet (#506) 2018-08-15 12:56:41 -07:00
dthomson25
66f64fbf15 Add Project JWT tokens (#498)
Implemented Project JWT Tokens (#472) using #228 as the overall design
2018-08-15 12:54:24 -07:00
Alexander Matyushentsev
4c0a0e09e2 Issue 435 - pump ci logs to s3 (#509) 2018-08-14 01:59:43 +03:00
Alexander Matyushentsev
f8de6084ed Issue #458 - Add api to load project events (#504) 2018-08-10 01:55:43 +03:00
Andrew Merenbach
36624f9d89 Enable code coverage (#500)
* Update Gopkg.toml

* Update Gopkg.lock

* Add new test-coverage command

* Update .gitignore to ignore coverage.out

* Test injection of COVERALLS_TOKEN variable

* Add draft of .travis.yml

* Rm recursive coveralls token

* Ensure that goveralls gets installed

* Rm second Go version

* Update workflow with coverage testing

* Change service from argo to argo-ci

* Rm .travis.yml

* Try setting coveralls token more explicitly

* Try file-based instead of env-based token

* Try both methods of providing token

* Go back to just env-based token

* Update with another printout test

* Try using container, thanks @alexmt

* Simplify for now, take 2

* Rm quotes

* Move env to ci-builder template

* Rm coveralls token

* Add coverage badge for current branch, take 2

* Add else statement for output in case of missing token

* Ensure we use the race detector

* Don't install goveralls with dep ensure

* Update generated files

* Try ignoring intermediate files

* Don't use race detector for now

* Try new pattern to ignore

* Try different pattern now

* Try different ignore path

* Try a different ignore style

* Ignore generated protobuf files properly now

* Rm standalone test since we have test-coverage
2018-08-09 15:54:15 -07:00
Alexander Matyushentsev
cbf1e3419b Issue #489 - Static assets are being browser cached between upgrades (#502) 2018-08-08 21:10:40 +03:00
Andrew Merenbach
7c8cc41d4c Support explicit deny (#497)
* Honor deny in RBAC model

* Add explicit allow for roles

* Update tests

* Test explicit deny

* Test deny,allow=>deny and allow,deny=>deny
2018-08-08 09:33:44 -07:00
Andrew Merenbach
5dbbd0a76f Support UI cluster creation (#469)
* Add kubeconfig string to ClusterCreateRequest

* Update generated files

* Copy and adapt cluster management logic into db

* Add service account deletion to db

* Return errors from new DB methods

* Adapt InstallClusterManagerRBAC for db

* Update errors in db

* Return error if it occurs from db

* Integrate code to (un)install cluster manager

* Use invalid argument instead of failed precondition

* Set bearer token if error is nil

* Rm cluster RBAC install from CLI

* Rm cluster mgmt install from e2e

* Rm common/install.go

* Move install components into server/cluster, thanks @jessesuen

* Rm unneeded ctx arg

* Restore common/installer.go

* Replace all quoted percent-s with percent-q

* Refactor common/installer.go with error returns

* Return errors rather than exiting fatally

* Return proper number of args

* Slim down cluster methods again

* Simplify, simplify, simplify

* Return gRPC error if RBAC could not be installed

* Issue log entries, not print statements

* Fix log import

* Update generated files

* Refactor

* Major cleanup

* Unmarshal context now

* Put claims check after bearer token insertion

* Initial work to use Kubernetes manifest to create a cluster

* Pass context name now

* Wire up prototype

* Add missing parameter for e2e test

* Just read file directly

* Change how we read cluster server

* Support more attributes from localconfig

* Update generated files

* Support incluster flag

* Comment out unused field for now

* Rm previous NewCluster function

* Unmarshal kube config successfully

* Handle insecure clusters, too

* Use existing logic to get config, thanks @jessesuen

* Revert cluster.go to master version

* Update invocations of RBAC installation

* Fix e2e invocation

* Don't remove management account, thanks @jessesuen

* Fix missing error check in e2e test

* Fix missing clientset arg in e2e fixture

* Create kubeclientset for kubeconfig, thanks @jessesuen
2018-08-06 11:27:48 -07:00
Jesse Suen
da7be2e3ca Update manifests and install instructions for v0.7.1 (#496) 2018-08-03 13:39:00 -07:00
Alexander Matyushentsev
d138c10eb6 Fix 404 error in repo API (#495) 2018-08-03 12:54:33 -07:00
Alexander Matyushentsev
4af13eba60 Issue #474 - ListApps API does not scale (#494) 2018-08-03 20:10:38 +03:00
Alexander Matyushentsev
e998e499db Issue #476 - AppProjectSpec SourceRepos mislabeled (#490) 2018-08-02 20:59:05 +03:00
Alexander Matyushentsev
53cdced69b Issue #491 - Failed e2e test does not fail CI workflow (#492)
* Issue #491 - Fix broken e2e test

* Issue #491 - return e2e test exit code
2018-08-02 20:50:36 +03:00
ChocoPowwwa
e726da46a5 Fix linux download link in getting_started.md (#487)
fix typo
2018-08-02 00:04:21 -07:00
Jesse Suen
b0d6a7092e Fix failure in identifying app source type when path was '.' (#486) 2018-07-31 23:46:16 -07:00
Alexander Matyushentsev
1fe870c0d7 Issue #463 - Surface helm parameters to the application level (#485)
* Issue #463 - Surface helm parameters to the application level

* Move get helm params functionality to separate function

* Use github.com/ghodss/yaml in helm GetParameters
2018-08-01 09:44:56 +03:00
Jesse Suen
469cf1d164 Fix issue where application server was retrieving events from incorrect cluster (resolves #478) (#484) 2018-07-31 14:15:53 -07:00
Jesse Suen
00299707e5 Expand RBAC role to be able to create application events. Fix username claims extraction. (#479) 2018-07-31 11:15:44 -07:00
Jesse Suen
9f5a718323 Infer username from claims during an argocd relogin (resolves #475) (#483) 2018-07-31 10:05:52 -07:00
Jesse Suen
231d86e249 Create update-manifests.sh script to support manifest generation for personal images (#477)
Tweaks to README and getting_started.md
2018-07-30 16:13:54 -07:00
Jesse Suen
5fb8b3f73c Update getting_started.md with relogin command during password change (#473) 2018-07-27 18:35:52 -07:00
Jesse Suen
6fc345f555 Bump VERSION to v0.7.0. Update CHANGELOG.md. Tweak install/getting_started.md instructions (#471) 2018-07-27 18:06:21 -07:00
Jesse Suen
c9d5f2ec9e Add documentation for helm, application sources, and parameter overrides (#466) 2018-07-27 16:51:44 -07:00
Jesse Suen
b0a71612b7 Add argocd relogin command as a convenience around login to current context (#468) 2018-07-26 16:42:54 -07:00
Alexander Matyushentsev
88ff4b28b9 Issue #376 - Fix saving default connection status for repos and clusters (#467) 2018-07-26 19:16:38 +03:00
Alexander Matyushentsev
6905029c17 Issue #461 - Fix broken e2e tests (#464) 2018-07-26 01:46:13 +03:00
Andrew Merenbach
5d75dc02b1 Add verbose flag to tests (#462) 2018-07-25 13:35:51 -07:00
Jesse Suen
0e78172665 Make use of dex refresh tokens and store them into local config. (#456)
* Make use of dex refresh tokens and store them into local config
* API client will automatically redeem OIDC refresh token if auth token expired.
* Stop the practice of reissuing/resigning non-expiring dex claims in API server.
2018-07-25 12:57:31 -07:00
Alexander Matyushentsev
3ad036aacb Issue #443 - API to list helm apps (#460) 2018-07-25 21:03:00 +03:00
Andrew Merenbach
a364f8ab49 Expire local superuser tokens when their password changes (#450)
* Add ExpiresAt seconds

Per NumericDate having resolution of seconds at https://tools.ietf.org/html/rfc7519#page-6

* Rename expires for clarity; update comments

* Don't use different possible values for now

* Use intermediate variable for expires value

* Add pseudocode comments to session manager

* Update password storage

* Factor out LocalUsers

* Fix compile errors

* Add claim checks

* Support expiry on ReissueClaims tokens

* Set location to UTC for tokens

* Add logging for username

* Fix issuedAt type assertion

* Set mtime to UTC location

* Set second param on mgr.Create
2018-07-25 09:02:15 -07:00
Andrew Merenbach
7b6b945cbf Show CLI progress for sync and rollback (#393)
* Add ExpiresAt seconds

Per NumericDate having resolution of seconds at https://tools.ietf.org/html/rfc7519#page-6

* Rename expires for clarity; update comments

* Don't use different possible values for now

* Use intermediate variable for expires value

* Add pseudocode comments to session manager

* Update password storage

* Factor out LocalUsers

* Fix compile errors

* Add claim checks

* Support expiry on ReissueClaims tokens

* Set location to UTC for tokens

* Add logging for username

* Fix issuedAt type assertion

* Set mtime to UTC location

* Set second param on mgr.Create

* Update output for sync

* Major refactor

* Reduce verbosity

* Reduce duplicated code some more, thanks @jessesuen

* Move printout

* Move printout to success, not failure

* Revert "Move printout to success, not failure"

This reverts commit 3a6863d8f497c02bd381cf9ed6ff4a642c8bdcb5.

* Print final status on success _or_ failure

* Adjust printouts with frankenparameters

* Major refactor of data pipelining, thanks @jessesuen

* Refactor app state change printouts

* Fix number of Sprintf args

* Use previous format for keys, rather than hash

* Rename res => hook for clarity

* Don't print app resources initially, thanks @jessesuen

* Refactor Fprintf call to Fprintln

* Rename waitUntilOperationCompleted, thanks @jessesuen

* Refactor to merge data on update

* Default to updated for new resource states

* Use map for fields that actually change

* Don't let flapping lead to duplicate printouts

* Simplify caching mechanism
2018-07-25 09:01:50 -07:00
Alexander Matyushentsev
6a7df88cf4 Issue 419 - Clean watch resources (#448)
* Issue 419 - Clean watch resources

* Issue 419 - Fix hanging goroutines
2018-07-25 03:23:13 +03:00
Jesse Suen
a7c7523a8c Support helm charts and yaml directories as an application source (#432)
* Support helm charts and yaml directories as an application source
* Run e2e test in parallel and increase timeout
2018-07-24 16:37:12 -07:00
Jesse Suen
36589f75f4 Update install manifests to v0.6.2 (#452) 2018-07-24 10:24:15 -07:00
Alexander Matyushentsev
b3af671803 Issue #340 - create application/project events for audit (#440)
* Issue #340 - create application/project events for audit

* Issue 340 - move username to message field

* Reviewer notes: fix possible panic
2018-07-24 18:48:13 +03:00
Alexander Matyushentsev
5dde0f6bd8 Issue #438 - audit logging interceptor is logging passwords in the clear (#441)
* Issue #438 - audit logging interceptor is logging passwords in the clear

* Issue #445 - remove request logging from repo-server
2018-07-24 07:07:18 +03:00
JazminGonzalez-Rivero
2343818ab5 Resolves 398 -> errors set status to unknown, status details set to error message (#437)
* if healthcheck fails, return unknown health status

* pass error through

* revert error pass

* fix error pass

* save first error
2018-07-23 11:32:35 -07:00
Jesse Suen
11bb1e3e56 Health check was using wrong converter for statefulsets, daemonset, replicasets (#439) 2018-07-20 13:42:33 -07:00
dthomson25
3dbb6f3002 Add ksonnet version to version endpoint (#433)
* Add ksonnet version to version endpoint

I needed to move config.go out of the cli package to fix a circular dependency.

* Remove ksonnetVersion field from the ArgoCD version struct
2018-07-20 09:13:16 -07:00
Alexander Matyushentsev
0591f2bcc5 Issue #340 - add gRPC payload logging interceptor (#434) 2018-07-19 23:17:29 +03:00
Jesse Suen
a48151f07a Fix regression where deployment health check incorrectly reported Healthy
Bump version to v0.6.1
2018-07-18 00:08:03 -07:00
Jesse Suen
82fda1c7af Add UI GIF, docs for application health, resource hooks, tweaks to README.md (#429) 2018-07-17 18:03:54 -07:00
Alexander Matyushentsev
d108129972 Issue #428 - Add GKE specific installation instructions (#430) 2018-07-18 04:03:17 +03:00
Alexander Matyushentsev
6124ab1b3e Issue #351 - forward dex error message to login page (#425)
* Issue #351 - forward dex error message to login page

* Address reviewer notes: sort imports, move regex to package level var
2018-07-18 00:48:38 +03:00
Alexander Matyushentsev
e294a315fc Add RBAC documentation (#423) 2018-07-17 22:03:27 +03:00
Alexander Matyushentsev
3baed5295e Fix app creation command in getting_started.md (#422) 2018-07-17 20:05:50 +03:00
Jesse Suen
a5334ecde7 Update getting_started.md with new install instructions.
Generate install.yaml from IMAGE_NAMESPACE IMAGE_TAG values
Fix panic in initializeSettings()
2018-07-17 01:58:18 -07:00
Jesse Suen
9f35bad93f Rework installation process to apply from install.yaml (#421)
* update getting started to work for post 0.6
* create central install manifest from individual manifests
* point e2e tests to correct manifests dir
* Update roles required by api-server and application-controller to include CRUD on appproject CRD.
* Added back explanations of keys in the secret manifests

NOTE: install.yaml will need to be changed to use a hard-wired version (e.g. v0.6.0) in a subsequent check-in.
2018-07-16 18:37:41 -07:00
Alexander Matyushentsev
972f639051 Issue #419 - Add ability to dump heap profile by sending SIGUSR2 (#420)
* Issue #419 - Add ability to dump heap profile by sending SIGUSR2

* Regenerate Gopkg.lock
2018-07-17 03:48:30 +03:00
Jesse Suen
99cc4f8d39 Clean up RBAC policy rule format for non-project based resources (#418) 2018-07-16 15:00:14 -07:00
Alexander Matyushentsev
273f99b293 Issue #414 - fix nil pointer in 'argocd cluster add' (#416)
* Issue #414 - fix nil pointer in 'argocd cluster add'

* Add missing nil check
2018-07-16 23:54:12 +03:00
Jesse Suen
48ef2e919e Rename 'users' service into 'account' service (#415) 2018-07-16 13:52:40 -07:00
Jesse Suen
65b1b083ee Switch repo-server to use in-memory cache in lieu of redis. Periodically dump stats (#413) 2018-07-16 10:20:16 -07:00
Jesse Suen
c0367ed595 Add support for hook deletion policies (OnSuccess, OnFailure) (resolves #374) (#412) 2018-07-16 10:15:53 -07:00
JazminGonzalez-Rivero
062b13e92a Create User Service to support password management (#411)
* create User Api

* update UsersPasswordRequest to UpdatePasswordRequest

* update UserResponse to UpdatePasswordResponse

* current password only needs to be entered once
2018-07-16 02:34:35 -07:00
Jesse Suen
39b9f4d31a Add ability to terminate a running operation (resolves #379) (#409) 2018-07-13 17:13:31 -07:00
Alexander Matyushentsev
39701d0455 Issue #404 - Unset application parameters (#405) 2018-07-14 00:59:50 +03:00
Alexander Matyushentsev
6fbf78ef52 Issue #407 - fix nil pointer dereference in GetSpecErrors (#408) 2018-07-14 00:56:56 +03:00
Andrew Merenbach
7e0cd01758 Fix production swagger (#403)
* Factor out filepath

* Use packr to serve swagger.json

* Pass packr.Box instead of string, thanks @alexmt

* Fix test for Swagger UI server
2018-07-13 11:11:37 -07:00
JazminGonzalez-Rivero
76bf77eded (resolves 375) add admin password util. (#394) 2018-07-13 13:45:49 -04:00
* add ability to update password when needed

* refactor structure so settings verifier is in settings util

* consolidate more

* move MakeSignature test

* re-kick off build

* re-trigger build
2018-07-13 13:45:49 -04:00
Jesse Suen
97189f300e Fix issue where ingress incorrectly was converting to v1 instead of extensions/v1beta1 (#397) 2018-07-13 10:05:44 -07:00
Jesse Suen
078b2339bd Apply logic was ignoring kubectl apply failures (#395) 2018-07-13 10:03:56 -07:00
Andrew Merenbach
4915490cbb Change default connection state (#388) 2018-07-12 14:25:34 -07:00
Jesse Suen
f65859bc25 Label hooks so that the cluster resource watch will be notified about completions (#387) 2018-07-12 14:01:54 -07:00
Alexander Matyushentsev
543ae3cf13 Issue #364 - Assess health of StatefulSets, DaemonSets, ReplicaSets (#391) 2018-07-13 00:01:28 +03:00
Alexander Matyushentsev
610a4510cf Issue #385 - Sync and health status should be unknown if controller unable to load target/live state (#386) 2018-07-12 22:40:21 +03:00
Jesse Suen
7b92977889 Improve logging. Prevent some unnecessary patches to app (#383) 2018-07-12 12:39:46 -07:00
Alexander Matyushentsev
a17806c37c Issue #349 - argocd wait passes when an ingress object failed (#384) 2018-07-12 22:38:25 +03:00
Jesse Suen
d6b87b2047 If manifest query is a commit sha, check cache first to prevent locking git repo (#382) 2018-07-12 04:15:12 -07:00
Jesse Suen
9d921f65f3 Various fixes to sync logic (#370)
* move hook resource state into SyncResult (from operation state)
* fix rollback to use apply based sync
* re-assess sync/health status between each sync phases
* PostSync hook should wait until application is Healthy (resolves #363)
2018-07-11 19:12:30 -07:00
Alexander Matyushentsev
32ba7f468c Issue #304 - Print information about app conditions (#371)
* Issue #304 - Print information about app conditions

* Reviewer notes: Remove unnecessary space

* Reviewer notes: use comma to separate conditions
2018-07-12 00:03:20 +03:00
Andrew Merenbach
9cddd4c368 Force app refresh after sync (#373) 2018-07-11 13:45:41 -07:00
Alexander Matyushentsev
8db465c699 Issue #277 - Warning message if controller detects resources which belong to multiple applications (#372)
* Issue #277 - Warning message if controller detects resources which belong to multiple applications

* Reviewer notes: add missing nil check
2018-07-11 23:00:48 +03:00
Jesse Suen
eb1caf2231 Use hook strategy as the default when performing a sync (#368) 2018-07-10 16:02:55 -07:00
Alexander Matyushentsev
adb84f3d03 Issue #210 - Application controller should not erase previously collected info about resources if repo or cluster is not available (#367) 2018-07-11 00:45:18 +03:00
JazminGonzalez-Rivero
67acd29541 move install functionality from custom to kubectl apply (#327)
* check for keys on server startup

* move manifest to its own folder

* revert Gopkg.lock changes

* add default password warnings

* update getting started docs

* remove install dependency from e2e test

* fix test pathing

* re-adding 02* manifests

* set url to blank as default

* update sso docs

* update getting_started to include namespace

* make defaultSetting internal

* remove extra check, should be caught by settingsMgr.GetSettings() error check

* fix manifests path

* error if configmap is missing, but replace if secret missing

* fix getting started

* set password to hostname

* update comment for initializeSettings

* remove unneeded bitbucket.webhook.uuid

* Gopkg.lock modified
2018-07-10 17:26:07 -04:00
Jesse Suen
41976122d5 app-name label was inadvertently injected into spec.selector if selector was omitted from v1beta1 specs (resolves #335) (#366)
Bump version to v0.6.0
2018-07-10 10:51:58 -07:00
Jesse Suen
d5b973c15a Fix git authentication implementation when using SSH key (resolves #339) (#362) 2018-07-10 10:50:56 -07:00
Jesse Suen
3a6892a011 Cascade deletion is decided during app deletion, instead of app creation (resolves #301) (#361) 2018-07-10 10:50:29 -07:00
Jesse Suen
b092e1bc7d Remove unnecessary role privileges from api server (resolves #319). Fix linting issues (#359) 2018-07-09 13:53:45 -07:00
Jesse Suen
9d7d1989e9 Consolidate printing of hook resources with application resources (#358) 2018-07-09 13:46:23 -07:00
Alexander Matyushentsev
7d4dd0fdd5 Issue #304 - Add spec validation condition to application CRD (#360) 2018-07-09 20:45:03 +03:00
Jesse Suen
364415f83a Move k8s health assessment into a stand-alone library. Use kubectl convert to statically assess health (#356) 2018-07-09 10:13:56 -07:00
Jesse Suen
d633f6b299 Support for PreSync, Sync, PostSync resource hooks (#350)
* Rewrite controller sync logic to support workflow-based sync

* Redesign hook implementation to support generic resources as hooks
2018-07-07 00:54:06 -07:00
Andrew Merenbach
bf2fe9d33f Add missing health status (#354)
* Add missing health status

* Update swagger.json

* Default app and service health to Missing

* Fill in Missing status if status field is blank

* Simplify missing injection
2018-07-06 16:33:25 -07:00
Andrew Merenbach
c2fde1ddc0 Retry sync and rollback (#347)
* Add timeout to sync and rollback commands

* Make defaultCheckTimeoutSeconds package-global

* Add cancel to context for rollback, sync

* Add cancel after timeout to rollback/sync

* Assign appName earlier in sync

* Don't unnecessarily reassign ctx

* Use full func for timeout after all

* Try applying wait logic

* Add sync/health flags to rollback/sync

* Slight cleanup to realign with spec

* Clean up, still broken

* Adapt waitUntilOperationCompleted, thanks @jessesuen

* Log fatal immediately after timeout

* Move fatal log back

* Reduce diff further

* Rm two blank lines
2018-07-03 15:35:58 -07:00
Jesse Suen
ebb47580a0 Refactor to enable application server unit testing. Add app create unit test. 2018-06-30 02:25:32 -07:00
Jesse Suen
4e533c90c2 Fix issue where --disable-auth did not disable RBAC properly (resolves #332) (#344) 2018-06-29 22:08:02 -07:00
Alexander Matyushentsev
9a14134f65 Issue #343 - Getting permission Denied application destination is not permitted in project (#345) 2018-06-29 21:56:24 -07:00
Andrew Merenbach
e58bdab492 Remove namespace from app URL (#334) 2018-06-28 13:57:52 -07:00
Jesse Suen
279b01a180 Refresh flag to sync should be optional, not required 2018-06-27 16:31:41 -07:00
Alexander Matyushentsev
f046884ae0 Issue #295 - Add repositories to project spec (#331) 2018-06-27 16:15:26 -07:00
Andrew Merenbach
f4ce59650d Idempotent cluster create (#328)
* Add upsert field to cluster create request

* Update generated files

* Add idempotence check to cluster

* Add command-line flag for upsert cluster

* Fix error logging in repository, cluster, application create

* Check Server instead of Name
2018-06-27 13:40:32 -07:00
Andrew Merenbach
bc49af56c0 RBAC import/export (#325)
* Simplify export with functional approach

* Add comment

* Export/import RBAC config map now
2018-06-27 13:10:39 -07:00
Andrew Merenbach
3e7116c427 Enable CGO on Linux builds (#320)
* Take first shot at enable CGO on Linux

* Simplify CGO_ENABLED flag check

* Use curly braces for consistency

* Build CLI with CGO if possible, thanks @jessesuen
2018-06-26 14:29:11 -07:00
Alexander Matyushentsev
12b3ec6989 Issue #295 - Allow editing project destinations using CLI (#317)
* Issue #295 - Allow editing project destinations using CLI

* Apply reviewer notes

* Fix error message in 'argocd project' cli

* Convert project name to positional argument
2018-06-26 13:49:31 -07:00
Andrew Merenbach
e536cc183a Idempotent repo add (#321)
* Add upsert field for repo creation

* Update generated files

* Update repo specs

* Redact existing repo

* Use one exit point to reduce chance of redaction error

* Set error to nil when appropriate

* Handle claims more properly

* Process error better

* Fix comparison, rm unneeded assignment

* Use pointers for repo RBAC name

* Apply repoRBACName in two other places

* DRY

* Don't nest unnecessarily

* Revert repoRBACName change to simplify diff

* Rearrange repo upsert check, thanks @jessesuen
2018-06-26 11:51:51 -07:00
Andrew Merenbach
63348fa903 Rm swagger.json from reposerver (#318) 2018-06-25 13:54:09 -07:00
Andrew Merenbach
ab00aef75e Generate swagger files (#278)
* Generate swagger files

* Add basic Swagger definitions

* Add reposerver swagger file

* Consolidate swagger files

* Move swagger files to swagger-ui directory instead

* Put swagger files in swagger-ui

* Fix order of operations

* Move back to swagger directory

* Serve API server swagger files raw for now

* Serve reposerver swagger files from API server

* Move back to subdirectories, thanks @alexmt

* Fix comment on application Rollback

* Update two more comments

* Fix comment in session.proto

* Update generated code

* Update generated swagger docs

* Fix comment for delete actions in cluster and repository swagger

* Set expected collisions and invoke mixins

* Update generated code

* Create swagger mixins from codegen

* Move swagger.json location, thanks @jazminGonzalez-Rivero

* Add ref cleanup for swagger combined

* Make fewer temp files when generating swagger

* Delete intermediate swagger files

* Serve new file at /swagger.json

* Set up UI server

* Update package lock

* Commit generated swagger.json files

* Add install commands for swagger

* Use ReDoc server instead of Swagger UI server

* Update lockfile

* Make URL paths more consistent

* Update package lock

* Separate out handlers for Swagger UI, JSON

* Rm unnecessary CORS headers

...since we're serving from the app server

* Simplify serving

* Further simplify serving code

* Update package lock

* Factor out swagger serving into util

* Add test for Swagger server

* Use ServeSwaggerUI method to run tests

* Update package lock

* Don't generate swagger for reposerver

* Reset to master Gopkg.lock and server/server.go

* Merge in prev change to server/server.go

* Redo changes to Gopkg.lock

* Fix number of conflicts

* Update generated swagger.json for server

* Fix issue with project feature error
2018-06-25 13:49:38 -07:00
Jesse Suen
653f9d3913 Remove local git credential test to prevent clobbering of OSX keychain credentials (resolves #315) 2018-06-24 03:39:05 -07:00
Alexander Matyushentsev
21c3fb905b Projects bug fixes: GET /api/v1/projects should return default project, (#313)
prevent creating/updating default project, make sure user cannot move
app to project unless it is permitted
2018-06-23 11:38:35 -07:00
JazminGonzalez-Rivero
4353f736d4 add validation to argocd app set -p (#309)
* Add CheckValidParam function

* fix typo
2018-06-22 14:22:30 -07:00
Jesse Suen
f3712313ba Update webhook.md to remove unnecessary step to restart server 2018-06-22 13:52:50 -07:00
Jesse Suen
b2d8fcb36e Update sso.md to remove unnecessary step to restart server 2018-06-22 13:50:52 -07:00
Alexander Matyushentsev
81021839d5 Issue #295 - implement app destination permissions validation (#310)
* Issue #295 - implement app destination permissions validation

* Apply reviewer notes. Use project to check application access. Update project access checks

* Use GetProject() instead of project to make sure default value is inferred

* Apply reviewer notes
2018-06-22 10:05:57 -07:00
Edward Lee
7c36dd6c56 Fix LICENSE copyright text 2018-06-21 01:03:50 -07:00
Jesse Suen
687e0b0dea Update getting_started.md with example repo and new UI steps 2018-06-20 18:40:49 -07:00
Jesse Suen
834e22d7b1 Support cluster management using the internal k8s API address https://kubernetes.default.svc (#307) 2018-06-20 16:50:15 -07:00
Alexander Matyushentsev
1e29f98924 Issue #295 - add project CRD, basic API and CLI implementation (#299) 2018-06-20 14:48:31 -07:00
Jesse Suen
933f3da538 Support diffing a local ksonnet app to the live application state (resolves #239) (#298) 2018-06-20 13:57:55 -07:00
Jesse Suen
a7fa2fd256 Add ability to show last operation result in app get. Show path in app list -o wide (#297) 2018-06-19 02:04:45 -07:00
Jesse Suen
0de1a3b20a Update dependencies: ksonnet v0.11, golang v1.10, debian v9.4 (#296) 2018-06-18 14:34:10 -07:00
Jesse Suen
bcc114ec60 Add ability to force a refresh of an app during get (resolves #269) (#293) 2018-06-18 10:22:58 -07:00
Jesse Suen
1148fae419 Add clean-debug make target to prevent packr from boxing debug artifacts into binaries 2018-06-15 14:31:26 -07:00
Jesse Suen
d7188c29f8 Remove redundant 'argocd' namespace from manifests 2018-06-15 14:20:28 -07:00
Jesse Suen
4b97732659 Automatically restart API server upon certificate changes (#292) 2018-06-15 14:16:50 -07:00
Jesse Suen
8ff98cc6e1 Add RBAC unit test for wildcards with sub-resources 2018-06-14 12:50:16 -07:00
Jesse Suen
cf0c324a74 Add unit test for using resource & action wildcards in a RBAC policy. Bump version to v0.5.2 2018-06-14 12:41:26 -07:00
Jesse Suen
69119a21cd Update getting_started.md to point to v0.5.1 2018-06-14 11:13:05 -07:00
Alexander Matyushentsev
16fa41d25b Issue #275 - Application controller fails to get app state if app has resource without name (#285) 2018-06-14 09:08:22 -07:00
Alexander Matyushentsev
4e170c2033 Update version to v0.5.1 2018-06-13 14:30:35 -07:00
Alexander Matyushentsev
3fbbe940a1 Issue #283 - API server incorrectly composes application fully qualified name for RBAC check (#284) 2018-06-13 13:05:39 -07:00
Alexander Matyushentsev
271b57e5c5 Issue #260 - Rate limiter is preventing force refreshes (e.g. webhook) from functioning (#282) 2018-06-13 11:34:33 -07:00
Andrew Merenbach
df0e2e4015 Fail app sync if prune flag is required (#276)
* Add status field to resource details

* Update generated code

* Set up const message responses

* Check number of resources requiring pruning

* Fix imports

* Use string, thanks @alexmt

* Update generated code
2018-06-12 10:54:11 -07:00
Alexander Matyushentsev
9fa622d63b Issue #280 - It is impossible to restrict application access by repository URL (#281)
* Issue #280 - It is impossible to restrict application access by repository URL

* Apply reviewer note
2018-06-12 10:43:16 -07:00
Alexander Matyushentsev
fed2149174 Add progressing deadline to test app to fix e2e test slowness 2018-06-12 08:54:47 -07:00
Alexander Matyushentsev
aa4291183b Take into account number of unavailable replicas to decide whether the deployment is healthy or not (#270)
* Take into account number of unavailable replicas to decide whether the deployment is healthy or not

* Run one controller for all e2e tests to reduce tests duration

* Apply reviewer notes: use logic from kubectl/rollout_status.go to check deployment health
2018-06-07 11:05:46 -07:00
Alexander Matyushentsev
0d3fc9648f Issue #271 - perform three way diff only if resource has expected state and live state with last-applied-configuration annotation (#274) 2018-06-07 10:29:36 -07:00
Jesse Suen
339138b576 Remove hard requirement of initializing OIDC app during server startup (resolves #272) 2018-06-07 02:07:53 -07:00
Jesse Suen
666769f9d9 Fix issue preventing proper parsing of claims subject in RBAC enforcement 2018-06-07 00:17:00 -07:00
Jesse Suen
8fc594bd2b Add missing list, patch, update verbs to application-controller-role 2018-06-06 17:51:18 -07:00
Andrew Merenbach
8cf8ad7e24 Tweak flags for import/export, thanks @jessesuen (#268) 2018-06-06 16:32:30 -07:00
Andrew Merenbach
0818f698e6 Support resource import/export (#255)
* Add initial prototype for export

* Add client opts to argocd-util

* Make flags local

* Support output to file without piping

* Add comment to NewExportCommand

* Vastly clean up output, thanks @alexmt @jessesuen

* Nullify operation, thanks @alexmt

* Add additional error check

* Rm extraneous fmt.Sprint

* Clone export command to import

* Flesh out import feature

* Use const string for YAML separator

* Don't export enclosing lists

* Almost finished prototyping import

* Create settings now, too

* Create all resources now

* Nullify certificate before export

* Add JSON annotations, update comment

* Warn, don't fail, if cluster/repo already exist, thanks @alexmt

* Use minus instead of stdin/stdout, thanks @jessesuen
2018-06-06 14:53:14 -07:00
Jesse Suen
44a33b0a5f Repo names containing underscores were not being accepted (resolves #258) 2018-06-06 14:26:43 -07:00
wanghong230
85078bdb66 fix #120 refactor the rbac code to support customizable claims enforcement function (#265) 2018-06-06 14:20:34 -07:00
Jesse Suen
30a3dba7ad argocd-server needs to be built using packr to bundle RBAC policy files. Update packr (resolves #266) 2018-06-06 12:35:25 -07:00
Jesse Suen
0afc671723 Retry argocd app wait connection errors from EOF watch. Show detailed state changes 2018-06-06 11:24:49 -07:00
Jesse Suen
12e7447e9f Implement RBAC support (issue #120) (#263)
* introduce rbac library around casbin
* supports claims enforcement by iteration through user's groups
* supports filtering of resources by level of access
* policy loader and automatic updates from configmap
* support for builtin and userdefined policies
2018-06-05 21:44:13 -07:00
Alexander Matyushentsev
b675e79b89 Add path to API /application/{repo}/ksonnet response (#264)
* Add path to API /application/{repo}/ksonnet response

* Fix indentation
2018-06-05 14:37:26 -07:00
Jesse Suen
febdccfb58 Introduce argocd app manifests for printing the application manifests from git or live (#261) 2018-06-05 12:59:29 -07:00
Alexander Matyushentsev
54835a0d93 Implement workaround for https://github.com/golang/go/issues/21955 (#256) 2018-06-04 13:52:07 -07:00
Jesse Suen
423fe3487c REST payload of create/update for repos and cluster should be actual object 2018-05-31 18:15:33 -07:00
Jesse Suen
98cb3f7950 Bump version to v0.5.0 2018-05-31 17:56:23 -07:00
Jesse Suen
371492bf5c Handle case where upsert could be nil. Use proper error codes. More RESTful endpoints 2018-05-31 17:54:27 -07:00
Jesse Suen
7df831e96d Clean up .proto definitions for consistency and reduction of pointer usage (#253) 2018-05-31 17:21:09 -07:00
Alexander Matyushentsev
f0be1bd251 Fix bug in secret controller which is causing update loop (#251) 2018-05-31 16:06:41 -07:00
Alexander Matyushentsev
948341a885 ListDir should not fail if Redis is down (#252) 2018-05-31 16:06:30 -07:00
Alexander Matyushentsev
1b2bf8ce0e GET /cluster/<clustername> API should not panic if invalid cluster url is provided (#250) 2018-05-31 15:13:07 -07:00
Andrew Merenbach
4f68a0f634 Wrap method signatures (#249)
* Update application create to use upsert attribute

* Update CLI interface

* Use pointer to upsert

* Rename DeleteApplicationRequest for parity

* Add new ApplicationUpdateRequest wrapper

* Rename RepoUpdateRequest => RepoRESTUpdateRequest

* Add new RepositoryUpdateRequest

* Rename ClusterUpdateRequest -> ClusterRESTUpdateRequest

* Fix var names

* Update var use

* Use intermediate vars for clarity

* Update generated code

* Update mocks

* Update e2e cluster creation
2018-05-31 14:21:08 -07:00
Alexander Matyushentsev
e785abeb8f Issue #244 - Cluster/Repository connection status (#248) 2018-05-31 13:44:19 -07:00
302 changed files with 39426 additions and 5873 deletions


@@ -29,7 +29,19 @@ spec:
           - name: cmd
             value: "{{item}}"
         withItems:
-        - dep ensure && make cli lint test test-e2e
+        - dep ensure && make cli lint
+      - name: test-coverage
+        template: ci-builder
+        arguments:
+          parameters:
+          - name: cmd
+            value: "dep ensure && go get github.com/mattn/goveralls && make test-coverage"
+      - name: test-e2e
+        template: ci-builder
+        arguments:
+          parameters:
+          - name: cmd
+            value: "dep ensure && make test-e2e"
   - name: ci-builder
     inputs:
@@ -44,12 +56,22 @@ spec:
     container:
       image: argoproj/argo-cd-ci-builder:latest
       command: [sh, -c]
-      args: ["{{inputs.parameters.cmd}}"]
+      args: ["mkfifo pipe; tee /tmp/logs.txt < pipe & {{inputs.parameters.cmd}} > pipe"]
       workingDir: /go/src/github.com/argoproj/argo-cd
       env:
+      - name: COVERALLS_TOKEN
+        valueFrom:
+          secretKeyRef:
+            name: coverall-token
+            key: coverall-token
       resources:
         requests:
           memory: 1024Mi
           cpu: 200m
+    outputs:
+      artifacts:
+      - name: logs
+        path: /tmp/logs.txt
   - name: ci-dind
     inputs:
@@ -64,7 +86,7 @@
     container:
       image: argoproj/argo-cd-ci-builder:latest
       command: [sh, -c]
-      args: ["until docker ps; do sleep 3; done && {{inputs.parameters.cmd}}"]
+      args: ["mkfifo pipe; tee /tmp/logs.txt < pipe & until docker ps; do sleep 3; done && {{inputs.parameters.cmd}} > pipe"]
       workingDir: /go/src/github.com/argoproj/argo-cd
       env:
       - name: DOCKER_HOST
@@ -79,4 +101,7 @@
     securityContext:
       privileged: true
     mirrorVolumeMounts: true
+    outputs:
+      artifacts:
+      - name: logs
+        path: /tmp/logs.txt
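The `mkfifo`/`tee` pattern in the `args` above (introduced by "Use named FIFO so we can exit with non-zero status", c236ee99d4) works around the fact that piping a command straight into `tee` makes the step report `tee`'s exit status rather than the build's. A minimal shell sketch of the idea, with `make test` standing in for the templated CI command:
```
# Naive piping: the pipeline's exit status is tee's (almost always 0),
# so a failing build would not fail the CI step.
make test | tee /tmp/logs.txt

# Named FIFO: tee consumes the pipe in the background while the build
# runs as the last command, so its non-zero exit status propagates.
mkfifo pipe
tee /tmp/logs.txt < pipe &
make test > pipe
```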

.gitignore

@@ -7,3 +7,4 @@ dist/
 # delve debug binaries
 cmd/**/debug
 debug.test
+coverage.out

CHANGELOG.md

@@ -1,5 +1,215 @@
# Changelog
## v0.9.0
### Notes about upgrading from v0.8
* The `server.crt` and `server.key` fields of `argocd-secret` have been renamed to `tls.crt` and `tls.key` for
better integration with cert manager (issue #617). An existing `argocd-secret` should be updated accordingly to
preserve the existing TLS certificate.
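One way to carry out the rename in place, sketched here assuming `jq` is available and the old fields are still present in the secret:
```
$ kubectl -n argocd get secret argocd-secret -o json | \
    jq '.data["tls.crt"] = .data["server.crt"] | .data["tls.key"] = .data["server.key"]' | \
    kubectl replace -f -
```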
* Cluster-wide resources should be allowed in the default project (due to issue #330):
```
argocd project allow-cluster-resource default '*' '*'
```
### Changes since v0.8:
+ Auto-sync option in application CRD instance (issue #79)
+ Support raw jsonnet as an application source (issue #540)
+ Reorder K8s resources to correct creation order (issue #102)
+ Redact K8s secrets from API server payloads (issue #470)
+ Support --in-cluster authentication without providing a kubeconfig (issue #527)
+ Special handling of CustomResourceDefinitions (issue #613)
+ ArgoCD should download helm chart dependencies (issue #582)
+ Export ArgoCD stats as prometheus style metrics (issue #513)
+ Support restricting TLS version (issue #609)
+ Use 'kubectl auth reconcile' before 'kubectl apply' (issue #523)
+ Projects need controls on cluster-scoped resources (issue #330)
+ Support IAM Authentication for managing external K8s clusters (issue #482)
+ Compatibility with cert manager (issue #617)
* Enable TLS for repo server (issue #553)
* Split out dex into its own deployment (instead of sidecar) (issue #555)
+ [UI] Support selection of helm values files in App creation wizard (issue #499)
+ [UI] Support specifying source revision in App creation wizard (issue #503)
+ [UI] Improve resource diff rendering (issue #457)
+ [UI] Indicate number of ready containers in pod (issue #539)
+ [UI] Indicate when app is overriding parameters (issue #503)
+ [UI] Provide a YAML view of resources (issue #396)
+ [UI] Project Role/Token management from UI (issue #548)
+ [UI] App creation wizard should allow specifying source revision (issue #562)
+ [UI] Ability to modify application from UI (issue #615)
+ [UI] indicate when operation is in progress or has failed (issue #566)
- Fix issue where changes were not pulled when tracking a branch (issue #567)
- Lazy enforcement of unknown cluster/namespace restricted resources (issue #599)
- Fix controller hot loop when app source contains bad manifests (issue #568)
- [UI] Fix issue where projects filter does not work when an application is changed
- [UI] Creating apps from directories is not obvious (issue #565)
- Helm hooks are being deployed as resources (issue #605)
- Disagreement in three way diff calculation (issue #597)
- SIGSEGV in kube.GetResourcesWithLabel (issue #587)
- ArgoCD fails to deploy resources list (issue #584)
- Branch tracking not working properly (issue #567)
- Controller hot loop when application source has bad manifests (issue #568)
## v0.8.2 (2018-09-12)
- Downgrade ksonnet from v0.12.0 to v0.11.0 due to quote unescape regression
- Fix CLI panic when performing an initial `argocd sync/wait`
## v0.8.1 (2018-09-10)
+ [UI] Support selection of helm values files in App creation wizard (issue #499)
+ [UI] Support specifying source revision in App creation wizard (issue #503)
+ [UI] Improve resource diff rendering (issue #457)
+ [UI] Indicate number of ready containers in pod (issue #539)
+ [UI] Indicate when app is overriding parameters (issue #503)
+ [UI] Provide a YAML view of resources (issue #396)
- Fix issue where changes were not pulled when tracking a branch (issue #567)
- Fix controller hot loop when app source contains bad manifests (issue #568)
- [UI] Fix issue where projects filter does not work when an application is changed
## v0.8.0 (2018-09-04)
### Notes about upgrading from v0.7
* The RBAC model has been improved to support explicit denies. This means that any previous
RBAC policy rules need to be rewritten to include one extra column with the effect:
`allow` or `deny`. For example, if a rule was written like this:
```
p, my-org:my-team, applications, get, */*
```
It should be rewritten to look like this:
```
p, my-org:my-team, applications, get, */*, allow
```
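The new effect column also accepts `deny`, and the tests for this release exercise deny taking precedence over a matching allow (#497). A hypothetical rule revoking delete access for the same subject would look like:
```
p, my-org:my-team, applications, delete, */*, deny
```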
### Changes since v0.7:
+ Support kustomize as an application source (issue #510)
+ Introduce project tokens for automation access (issue #498)
+ Add ability to delete a single application resource to support immutable updates (issue #262)
+ Update RBAC model to support explicit denies (issue #497)
+ Ability to view Kubernetes events related to application projects for auditing
+ Add PVC healthcheck to controller (issue #501)
+ Run all containers as an unprivileged user (issue #528)
* Upgrade ksonnet to v0.12.0
* Add readiness probes to API server (issue #522)
* Use gRPC error codes instead of fmt.Errorf (#532)
- API discovery becomes best effort when partial resource list is returned (issue #524)
- Fix `argocd app wait` printing incorrect Sync output (issue #542)
- Fix issue where argocd could not sync to a tag (#541)
- Fix issue where static assets were browser cached between upgrades (issue #489)
## v0.7.2 (2018-08-21)
- API discovery becomes best effort when partial resource list is returned (issue #524)
## v0.7.1 (2018-08-03)
+ Surface helm parameters to the application level (#485)
+ [UI] Improve application creation wizard (#459)
+ [UI] Show indicator when refresh is still in progress (#493)
* [UI] Improve data loading error notification (#446)
* Infer username from claims during an `argocd relogin` (#475)
* Expand RBAC role to be able to create application events. Fix username claims extraction
- Fix scalability issues with the ListApps API (#494)
- Fix issue where application server was retrieving events from incorrect cluster (#478)
- Fix failure in identifying app source type when path was '.'
- AppProjectSpec SourceRepos mislabeled (#490)
- Failed e2e test was not failing CI workflow
* Fix linux download link in getting_started.md (#487) (@chocopowwwa)
## v0.7.0 (2018-07-27)
+ Support helm charts and yaml directories as an application source
+ Audit trails in the form of API call logs
+ Generate kubernetes events for application state changes
+ Add ksonnet version to version endpoint (#433)
+ Show CLI progress for sync and rollback
+ Make use of dex refresh tokens and store them into local config
+ Expire local superuser tokens when their password changes
+ Add `argocd relogin` command as a convenience around login to current context
- Fix saving default connection status for repos and clusters
- Fix undesired fail-fast behavior of health check
- Fix memory leak in the cluster resource watch
- Health checks for StatefulSets, DaemonSets, and ReplicaSets were failing due to use of wrong converters
## v0.6.2 (2018-07-23)
- Health checks for StatefulSets, DaemonSets, and ReplicaSets were failing due to use of wrong converters
## v0.6.1 (2018-07-18)
- Fix regression where deployment health check incorrectly reported Healthy
+ Intercept dex SSO errors and present them in Argo login page
## v0.6.0 (2018-07-16)
+ Support PreSync, Sync, PostSync resource hooks
+ Introduce Application Projects for finer grain RBAC controls
+ Swagger Docs & UI
+ Support in-cluster deployments using the internal kubernetes service name
+ Refactoring & Improvements
* Improved error handling, status and condition reporting
* Remove installer in favor of kubectl apply instructions
* Add validation when setting application parameters
* Cascade deletion is decided during app deletion, instead of app creation
- Fix git authentication implementation when using SSH key
- app-name label was inadvertently injected into spec.selector if selector was omitted from v1beta1 specs
## v0.5.4 (2018-06-27)
- Refresh flag to sync should be optional, not required
## v0.5.3 (2018-06-20)
+ Support cluster management using the internal k8s API address https://kubernetes.default.svc (#307)
+ Support diffing a local ksonnet app to the live application state (resolves #239) (#298)
+ Add ability to show last operation result in app get. Show path in app list -o wide (#297)
+ Update dependencies: ksonnet v0.11, golang v1.10, debian v9.4 (#296)
+ Add ability to force a refresh of an app during get (resolves #269) (#293)
+ Automatically restart API server upon certificate changes (#292)
## v0.5.2 (2018-06-14)
+ Resource events tab on application details page (#286)
+ Display pod status on application details page (#231)
## v0.5.1 (2018-06-13)
- API server incorrectly composes application fully qualified name for RBAC check (#283)
- UI crash while rendering application operation info if operation failed
## v0.5.0 (2018-06-12)
+ RBAC access control
+ Repository/Cluster state monitoring
+ ArgoCD settings import/export
+ Application creation UI wizard
+ argocd app manifests for printing the application manifests
+ argocd app unset command to unset parameter overrides
+ Fail app sync if prune flag is required (#276)
+ Take into account number of unavailable replicas to decide whether the deployment is healthy or not (#270)
+ Add ability to show parameters and overrides in CLI (resolves #240)
- Repo names containing underscores were not being accepted (#258)
- Cookie token was not parsed properly when mixed with other site cookies
## v0.4.7 (2018-06-07)
- Fix argocd app wait health checking logic
## v0.4.6 (2018-06-06)
- Retry argocd app wait connection errors from EOF watch. Show detailed state changes
## v0.4.5 (2018-05-31)
+ Add argocd app unset command to unset parameter overrides
- Cookie token was not parsed properly when mixed with other site cookies
## v0.4.4 (2018-05-30)
+ Add ability to show parameters and overrides in CLI (resolves #240)
+ Add Events API endpoint
+ Issue #238 - add upsert flag to 'argocd app create' command
+ Add repo browsing endpoint (#229)
+ Support subscribing to settings updates and auto-restart of dex and API server
- Issue #233 - Controller does not persist rollback operation result
- App sync frequently fails due to concurrent app modification
## v0.4.3 (2018-05-21)
- Move local branch deletion as part of git Reset() (resolves #185) (#222)
- Fix exit code for app wait (#219)
## v0.4.2 (2018-05-21)
+ Show URL in argocd app get
- Remove interactive context name prompt during login which broke login automation
* Rename force flag to cascade in argocd app delete
## v0.4.1 (2018-05-18)
+ Implemented argocd app wait command
## v0.4.0 (2018-05-17)
+ SSO Integration
+ GitHub Webhook
@@ -12,3 +222,36 @@
* Manifests are memoized in repo server
- Fix connection timeouts to SSH repos
## v0.3.2 (2018-05-03)
+ Application sync should delete 'unexpected' resources #139
+ Update ksonnet to v0.10.1
+ Detect unexpected resources
- Fix: App sync frequently fails due to concurrent app modification #147
- Fix: improve app state comparator: #136, #132
## v0.3.1 (2018-04-24)
+ Add new rollback RPC with numeric identifiers
+ New argo app history and argo app rollback command
+ Switch to gogo/protobuf for golang code generation
- Fix: create .argocd directory during argo login (issue #123)
- Fix: Allow overriding server or namespace separately (issue #110)
## v0.3.0 (2018-04-23)
+ Auth support
+ TLS support
+ DAG-based application view
+ Bulk watch
+ ksonnet v0.10.0-alpha.3
+ kubectl apply deployment strategy
+ CLI improvements for app management
## v0.2.0 (2018-04-03)
+ Rollback UI
+ Override parameters
## v0.1.0 (2018-03-12)
+ Define an app in GitHub with dev and preprod environments using ksonnet
+ Add a cluster, diff an app against a cluster, and deploy an app into a cluster
+ Deploy a new version of the app in the cluster
+ App sync based on GitHub app config changes (polling only)
+ Basic UI: app diff between git and the k8s cluster for all environments

View File

@@ -1,10 +1,23 @@
## Requirements
Make sure you have the following tools installed: [golang](https://golang.org/), [dep](https://github.com/golang/dep), [protobuf](https://developers.google.com/protocol-buffers/),
[kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/).
Make sure you have the following tools installed:
* [docker](https://docs.docker.com/install/#supported-platforms)
* [golang](https://golang.org/)
* [dep](https://github.com/golang/dep)
* [protobuf](https://developers.google.com/protocol-buffers/)
* [ksonnet](https://github.com/ksonnet/ksonnet#install)
* [helm](https://github.com/helm/helm/releases)
* [kustomize](https://github.com/kubernetes-sigs/kustomize/releases)
* [go-swagger](https://github.com/go-swagger/go-swagger/blob/master/docs/install.md)
* [jq](https://stedolan.github.io/jq/)
* [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
```
$ brew install go dep protobuf kubectl
$ brew tap go-swagger/go-swagger
$ brew install go dep protobuf kubectl ksonnet/tap/ks kubernetes-helm jq go-swagger
$ go get -u github.com/golang/protobuf/protoc-gen-go
$ go get -u github.com/go-swagger/go-swagger/cmd/swagger
$ go get -u github.com/grpc-ecosystem/grpc-gateway/protoc-gen-grpc-gateway
$ go get -u github.com/grpc-ecosystem/grpc-gateway/protoc-gen-swagger
```
Nice to have [gometalinter](https://github.com/alecthomas/gometalinter) and [goreman](https://github.com/mattn/goreman):
@@ -20,6 +33,23 @@ $ go get -u github.com/argoproj/argo-cd
$ dep ensure
$ make
```
NOTE: The `make` command can take a while; we recommend building only the specific component you are working on (see the example after this list):
* `make cli` - Make the argocd CLI tool
* `make server` - Make the API/repo/controller server
* `make codegen` - Builds protobuf and swagger files
* `make argocd-util` - Make the administrator's utility, used for certain tasks such as import/export
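For example, a quick rebuild-and-check loop for CLI changes might look like the following minimal sketch (the `version` subcommand is assumed to be available in your build):
```
$ make cli
$ ./dist/argocd version
```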
## Generating ArgoCD manifests for a specific image repository/tag
During development, the `update-manifests.sh` script can be used to conveniently regenerate the
ArgoCD installation manifests with a customized image namespace and tag. This lets developers
easily apply manifests that use the images they pushed to their personal container
repository.
```
$ IMAGE_NAMESPACE=jessesuen IMAGE_TAG=latest ./hack/update-manifests.sh
$ kubectl apply -n argocd -f ./manifests/install.yaml
```
## Running locally
@@ -38,3 +68,10 @@ $ kubectl create -f install/manifests/01_application-crd.yaml
```
$ goreman start
```
## Troubleshooting
* Ensure argocd is installed: `./dist/argocd install`
* Ensure you're logged in: `./dist/argocd login --username admin --password <whatever password you set at install> localhost:8080`
* Ensure that roles are configured: `kubectl create -f install/manifests/02c_argocd-rbac-cm.yaml`
* Ensure minikube is running: `minikube stop && minikube start`
* Ensure Argo CD is aware of minikube: `./dist/argocd cluster add minikube`

View File

@@ -1,13 +0,0 @@
Copyright 2017-2018 The Argo Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

Dockerfile Normal file
View File

@@ -0,0 +1,136 @@
####################################################################################################
# Builder image
# Initial stage which pulls and prepares build dependencies and CLI tooling we need for our final image
# Also used as the image in CI jobs, so it needs all dependencies
####################################################################################################
FROM golang:1.10.3 as builder
RUN apt-get update && apt-get install -y \
git \
make \
wget \
gcc \
zip && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
WORKDIR /tmp
# Install docker
ENV DOCKER_VERSION=18.06.0
RUN curl -O https://download.docker.com/linux/static/stable/x86_64/docker-${DOCKER_VERSION}-ce.tgz && \
tar -xzf docker-${DOCKER_VERSION}-ce.tgz && \
mv docker/docker /usr/local/bin/docker && \
rm -rf ./docker
# Install dep
ENV DEP_VERSION=0.5.0
RUN wget https://github.com/golang/dep/releases/download/v${DEP_VERSION}/dep-linux-amd64 -O /usr/local/bin/dep && \
chmod +x /usr/local/bin/dep
# Install gometalinter
RUN curl -sLo- https://github.com/alecthomas/gometalinter/releases/download/v2.0.5/gometalinter-2.0.5-linux-amd64.tar.gz | \
tar -xzC "$GOPATH/bin" --exclude COPYING --exclude README.md --strip-components 1 -f- && \
ln -s $GOPATH/bin/gometalinter $GOPATH/bin/gometalinter.v2
# Install packr
ENV PACKR_VERSION=1.13.2
RUN wget https://github.com/gobuffalo/packr/releases/download/v${PACKR_VERSION}/packr_${PACKR_VERSION}_linux_amd64.tar.gz && \
tar -vxf packr*.tar.gz -C /tmp/ && \
mv /tmp/packr /usr/local/bin/packr
# Install kubectl
RUN curl -L -o /usr/local/bin/kubectl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && \
chmod +x /usr/local/bin/kubectl
# Install ksonnet
# NOTE: we frequently switch between tip of master ksonnet vs. official builds. Comment/uncomment
# the corresponding section to switch between the two options:
# Option 1: build ksonnet ourselves
#RUN go get -v -u github.com/ksonnet/ksonnet && mv ${GOPATH}/bin/ksonnet /usr/local/bin/ks
# Option 2: use official tagged ksonnet release
ENV KSONNET_VERSION=0.11.0
RUN wget https://github.com/ksonnet/ksonnet/releases/download/v${KSONNET_VERSION}/ks_${KSONNET_VERSION}_linux_amd64.tar.gz && \
tar -C /tmp/ -xf ks_${KSONNET_VERSION}_linux_amd64.tar.gz && \
mv /tmp/ks_${KSONNET_VERSION}_linux_amd64/ks /usr/local/bin/ks
# Install helm
ENV HELM_VERSION=2.9.1
RUN wget https://storage.googleapis.com/kubernetes-helm/helm-v${HELM_VERSION}-linux-amd64.tar.gz && \
tar -C /tmp/ -xf helm-v${HELM_VERSION}-linux-amd64.tar.gz && \
mv /tmp/linux-amd64/helm /usr/local/bin/helm
# Install kustomize
ENV KUSTOMIZE_VERSION=1.0.8
RUN curl -L -o /usr/local/bin/kustomize https://github.com/kubernetes-sigs/kustomize/releases/download/v${KUSTOMIZE_VERSION}/kustomize_${KUSTOMIZE_VERSION}_linux_amd64 && \
chmod +x /usr/local/bin/kustomize
ENV AWS_IAM_AUTHENTICATOR_VERSION=0.3.0
RUN curl -L -o /usr/local/bin/aws-iam-authenticator https://github.com/kubernetes-sigs/aws-iam-authenticator/releases/download/v0.3.0/heptio-authenticator-aws_${AWS_IAM_AUTHENTICATOR_VERSION}_linux_amd64 && \
chmod +x /usr/local/bin/aws-iam-authenticator
####################################################################################################
# ArgoCD Build stage which performs the actual build of ArgoCD binaries
####################################################################################################
FROM golang:1.10.3 as argocd-build
COPY --from=builder /usr/local/bin/dep /usr/local/bin/dep
COPY --from=builder /usr/local/bin/packr /usr/local/bin/packr
# A dummy directory is created under $GOPATH/src/dummy so we are able to use dep
# to install all the packages of our dep lock file
COPY Gopkg.toml ${GOPATH}/src/dummy/Gopkg.toml
COPY Gopkg.lock ${GOPATH}/src/dummy/Gopkg.lock
RUN cd ${GOPATH}/src/dummy && \
dep ensure -vendor-only && \
mv vendor/* ${GOPATH}/src/ && \
rmdir vendor
# Perform the build
WORKDIR /go/src/github.com/argoproj/argo-cd
COPY . .
ARG MAKE_TARGET="cli server controller repo-server argocd-util"
RUN make ${MAKE_TARGET}
####################################################################################################
# Final image
####################################################################################################
FROM debian:9.5-slim
RUN groupadd -g 999 argocd && \
useradd -r -u 999 -g argocd argocd && \
mkdir -p /home/argocd && \
chown argocd:argocd /home/argocd && \
apt-get update && \
apt-get install -y git && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
COPY --from=builder /usr/local/bin/ks /usr/local/bin/ks
COPY --from=builder /usr/local/bin/helm /usr/local/bin/helm
COPY --from=builder /usr/local/bin/kubectl /usr/local/bin/kubectl
COPY --from=builder /usr/local/bin/kustomize /usr/local/bin/kustomize
COPY --from=builder /usr/local/bin/aws-iam-authenticator /usr/local/bin/aws-iam-authenticator
# workaround ksonnet issue https://github.com/ksonnet/ksonnet/issues/298
ENV USER=argocd
COPY --from=argocd-build /go/src/github.com/argoproj/argo-cd/dist/* /usr/local/bin/
# Symlink argocd binaries under / for backwards compatibility with callers that expect them there
RUN ln -s /usr/local/bin/argocd /argocd && \
ln -s /usr/local/bin/argocd-server /argocd-server && \
ln -s /usr/local/bin/argocd-util /argocd-util && \
ln -s /usr/local/bin/argocd-application-controller /argocd-application-controller && \
ln -s /usr/local/bin/argocd-repo-server /argocd-repo-server
USER argocd
RUN helm init --client-only
WORKDIR /home/argocd
ARG BINARY
CMD ${BINARY}
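As a rough sketch of how this multi-stage Dockerfile is driven (the Makefile targets later in this diff pass `BINARY` the same way; the tag name here is illustrative):
```
$ docker build --build-arg BINARY=argocd-server -t argocd-server:latest .
$ docker run --rm argocd-server:latest
```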

View File

@@ -1,83 +0,0 @@
FROM debian:9.3 as builder
RUN apt-get update && apt-get install -y \
git \
make \
wget \
gcc \
zip && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Install go
ENV GO_VERSION 1.9.3
ENV GO_ARCH amd64
ENV GOPATH /root/go
ENV PATH ${GOPATH}/bin:/usr/local/go/bin:${PATH}
RUN wget https://storage.googleapis.com/golang/go${GO_VERSION}.linux-${GO_ARCH}.tar.gz && \
tar -C /usr/local/ -xf /go${GO_VERSION}.linux-${GO_ARCH}.tar.gz && \
rm /go${GO_VERSION}.linux-${GO_ARCH}.tar.gz
# Install protoc, dep, packr
ENV PROTOBUF_VERSION 3.5.1
RUN cd /usr/local && \
wget https://github.com/google/protobuf/releases/download/v${PROTOBUF_VERSION}/protoc-${PROTOBUF_VERSION}-linux-x86_64.zip && \
unzip protoc-*.zip && \
wget https://github.com/golang/dep/releases/download/v0.4.1/dep-linux-amd64 -O /usr/local/bin/dep && \
chmod +x /usr/local/bin/dep && \
wget https://github.com/gobuffalo/packr/releases/download/v1.10.4/packr_1.10.4_linux_amd64.tar.gz && \
tar -vxf packr*.tar.gz -C /tmp/ && \
mv /tmp/packr /usr/local/bin/packr
# A dummy directory is created under $GOPATH/src/dummy so we are able to use dep
# to install all the packages of our dep lock file
COPY Gopkg.toml ${GOPATH}/src/dummy/Gopkg.toml
COPY Gopkg.lock ${GOPATH}/src/dummy/Gopkg.lock
RUN cd ${GOPATH}/src/dummy && \
dep ensure -vendor-only && \
mv vendor/* ${GOPATH}/src/ && \
rmdir vendor
# Perform the build
WORKDIR /root/go/src/github.com/argoproj/argo-cd
COPY . .
ARG MAKE_TARGET="cli server controller repo-server argocd-util"
RUN make ${MAKE_TARGET}
##############################################################
# This stage will pull in or build any CLI tooling we need for our final image
FROM golang:1.10 as cli-tooling
# NOTE: we frequently switch between tip of master ksonnet vs. official builds. Comment/uncomment
# the corresponding section to switch between the two options:
# Option 1: build ksonnet ourselves
#RUN go get -v -u github.com/ksonnet/ksonnet && mv ${GOPATH}/bin/ksonnet /ks
# Option 2: use official tagged ksonnet release
env KSONNET_VERSION=0.10.2
RUN wget https://github.com/ksonnet/ksonnet/releases/download/v${KSONNET_VERSION}/ks_${KSONNET_VERSION}_linux_amd64.tar.gz && \
tar -C /tmp/ -xf ks_${KSONNET_VERSION}_linux_amd64.tar.gz && \
mv /tmp/ks_${KSONNET_VERSION}_linux_amd64/ks /ks
RUN curl -o /kubectl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && \
chmod +x /kubectl
##############################################################
FROM debian:9.3
RUN apt-get update && apt-get install -y git && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
COPY --from=cli-tooling /ks /usr/local/bin/ks
COPY --from=cli-tooling /kubectl /usr/local/bin/kubectl
# workaround ksonnet issue https://github.com/ksonnet/ksonnet/issues/298
ENV USER=root
COPY --from=builder /root/go/src/github.com/argoproj/argo-cd/dist/* /
ARG BINARY
CMD /${BINARY}

View File

@@ -1,22 +0,0 @@
FROM golang:1.9.2
WORKDIR /tmp
RUN curl -O https://get.docker.com/builds/Linux/x86_64/docker-1.13.1.tgz && \
tar -xzf docker-1.13.1.tgz && \
mv docker/docker /usr/local/bin/docker && \
rm -rf ./docker && \
go get -u github.com/golang/dep/cmd/dep && \
go get -u gopkg.in/alecthomas/gometalinter.v2 && \
gometalinter.v2 --install
# Install kubectl
RUN curl -o /kubectl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && \
chmod +x /kubectl && mv /kubectl /usr/local/bin/kubectl
# Install ksonnet
env KSONNET_VERSION=0.10.2
RUN wget https://github.com/ksonnet/ksonnet/releases/download/v${KSONNET_VERSION}/ks_${KSONNET_VERSION}_linux_amd64.tar.gz && \
tar -C /tmp/ -xf ks_${KSONNET_VERSION}_linux_amd64.tar.gz && \
mv /tmp/ks_${KSONNET_VERSION}_linux_amd64/ks /usr/local/bin/ks && \
rm -rf /tmp/ks_${KSONNET_VERSION}

Gopkg.lock generated

File diff suppressed because it is too large

View File

@@ -1,37 +1,57 @@
# Packages should only be added to the following list when we use them *outside* of our go code.
# (e.g. we want to build the binary to invoke as part of the build process, such as in
# generate-proto.sh). Normal use of golang packages should be added via `dep ensure`, and pinned
# with a [[constraint]] or [[override]] when version is important.
required = [
"github.com/golang/protobuf/protoc-gen-go",
"github.com/gogo/protobuf/protoc-gen-gofast",
"github.com/gogo/protobuf/protoc-gen-gogofast",
"golang.org/x/sync/errgroup",
"k8s.io/code-generator/cmd/go-to-protobuf",
"github.com/grpc-ecosystem/grpc-gateway/protoc-gen-grpc-gateway",
"github.com/grpc-ecosystem/grpc-gateway/protoc-gen-swagger",
]
[[constraint]]
name = "google.golang.org/grpc"
version = "1.9.2"
version = "1.15.0"
[[constraint]]
name = "github.com/gogo/protobuf"
version = "1.1.1"
# override github.com/grpc-ecosystem/go-grpc-middleware's constraint on master
[[override]]
name = "github.com/golang/protobuf"
version = "1.2.0"
[[constraint]]
name = "github.com/grpc-ecosystem/grpc-gateway"
version = "v1.3.1"
# override ksonnet's release-1.8 dependency
# prometheus does not believe in semversioning yet
[[constraint]]
name = "github.com/prometheus/client_golang"
revision = "7858729281ec582767b20e0d696b6041d995d5e0"
[[constraint]]
branch = "release-1.10"
name = "k8s.io/api"
# override ksonnet dependency
[[override]]
branch = "release-1.9"
branch = "release-1.10"
name = "k8s.io/apimachinery"
[[constraint]]
branch = "release-1.9"
name = "k8s.io/api"
[[constraint]]
name = "k8s.io/apiextensions-apiserver"
branch = "release-1.9"
branch = "release-1.10"
[[constraint]]
branch = "release-1.9"
branch = "release-1.10"
name = "k8s.io/code-generator"
[[constraint]]
branch = "release-6.0"
branch = "release-7.0"
name = "k8s.io/client-go"
[[constraint]]
@@ -40,9 +60,17 @@ required = [
[[constraint]]
name = "github.com/ksonnet/ksonnet"
version = "v0.10.1"
version = "v0.11.0"
[[constraint]]
name = "github.com/gobuffalo/packr"
version = "v1.11.0"
# override ksonnet's logrus dependency
[[override]]
name = "github.com/sirupsen/logrus"
version = "v1.0.3"
revision = "ea8897e79973357ba785ac2533559a6297e83c44"
[[constraint]]
branch = "master"
name = "github.com/argoproj/pkg"

View File

@@ -187,7 +187,7 @@
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Copyright 2017-2018 The Argo Authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.

View File

@@ -57,34 +57,38 @@ codegen: protogen clientgen
# NOTE: we use packr to do the build instead of go, since we embed .yaml files into the go binary.
# This enables ease of maintenance of the yaml files.
.PHONY: cli
cli:
CGO_ENABLED=0 ${PACKR_CMD} build -v -i -ldflags '${LDFLAGS} -extldflags "-static"' -o ${DIST_DIR}/${CLI_NAME} ./cmd/argocd
cli: clean-debug
${PACKR_CMD} build -v -i -ldflags '${LDFLAGS} -extldflags "-static"' -o ${DIST_DIR}/${CLI_NAME} ./cmd/argocd
.PHONY: cli-linux
cli-linux:
docker build --iidfile /tmp/argocd-linux-id --target builder --build-arg MAKE_TARGET="cli IMAGE_TAG=$(IMAGE_TAG) IMAGE_NAMESPACE=$(IMAGE_NAMESPACE) CLI_NAME=argocd-linux-amd64" -f Dockerfile-argocd .
cli-linux: clean-debug
docker build --iidfile /tmp/argocd-linux-id --target argocd-build --build-arg MAKE_TARGET="cli IMAGE_TAG=$(IMAGE_TAG) IMAGE_NAMESPACE=$(IMAGE_NAMESPACE) CLI_NAME=argocd-linux-amd64" .
docker create --name tmp-argocd-linux `cat /tmp/argocd-linux-id`
docker cp tmp-argocd-linux:/root/go/src/github.com/argoproj/argo-cd/dist/argocd-linux-amd64 dist/
docker cp tmp-argocd-linux:/go/src/github.com/argoproj/argo-cd/dist/argocd-linux-amd64 dist/
docker rm tmp-argocd-linux
.PHONY: cli-darwin
cli-darwin:
docker build --iidfile /tmp/argocd-darwin-id --target builder --build-arg MAKE_TARGET="cli GOOS=darwin IMAGE_TAG=$(IMAGE_TAG) IMAGE_NAMESPACE=$(IMAGE_NAMESPACE) CLI_NAME=argocd-darwin-amd64" -f Dockerfile-argocd .
cli-darwin: clean-debug
docker build --iidfile /tmp/argocd-darwin-id --target argocd-build --build-arg MAKE_TARGET="cli GOOS=darwin IMAGE_TAG=$(IMAGE_TAG) IMAGE_NAMESPACE=$(IMAGE_NAMESPACE) CLI_NAME=argocd-darwin-amd64" .
docker create --name tmp-argocd-darwin `cat /tmp/argocd-darwin-id`
docker cp tmp-argocd-darwin:/root/go/src/github.com/argoproj/argo-cd/dist/argocd-darwin-amd64 dist/
docker cp tmp-argocd-darwin:/go/src/github.com/argoproj/argo-cd/dist/argocd-darwin-amd64 dist/
docker rm tmp-argocd-darwin
.PHONY: argocd-util
argocd-util:
argocd-util: clean-debug
CGO_ENABLED=0 go build -v -i -ldflags '${LDFLAGS} -extldflags "-static"' -o ${DIST_DIR}/argocd-util ./cmd/argocd-util
.PHONY: server
server:
CGO_ENABLED=0 go build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-server ./cmd/argocd-server
.PHONY: manifests
manifests:
./hack/update-manifests.sh
.PHONY: server
server: clean-debug
CGO_ENABLED=0 ${PACKR_CMD} build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-server ./cmd/argocd-server
.PHONY: server-image
server-image:
docker build --build-arg BINARY=argocd-server -t $(IMAGE_PREFIX)argocd-server:$(IMAGE_TAG) -f Dockerfile-argocd .
docker build --build-arg BINARY=argocd-server -t $(IMAGE_PREFIX)argocd-server:$(IMAGE_TAG) .
@if [ "$(DOCKER_PUSH)" = "true" ] ; then docker push $(IMAGE_PREFIX)argocd-server:$(IMAGE_TAG) ; fi
.PHONY: repo-server
@@ -93,7 +97,7 @@ repo-server:
.PHONY: repo-server-image
repo-server-image:
docker build --build-arg BINARY=argocd-repo-server -t $(IMAGE_PREFIX)argocd-repo-server:$(IMAGE_TAG) -f Dockerfile-argocd .
docker build --build-arg BINARY=argocd-repo-server -t $(IMAGE_PREFIX)argocd-repo-server:$(IMAGE_TAG) .
@if [ "$(DOCKER_PUSH)" = "true" ] ; then docker push $(IMAGE_PREFIX)argocd-repo-server:$(IMAGE_TAG) ; fi
.PHONY: controller
@@ -102,17 +106,17 @@ controller:
.PHONY: controller-image
controller-image:
docker build --build-arg BINARY=argocd-application-controller -t $(IMAGE_PREFIX)argocd-application-controller:$(IMAGE_TAG) -f Dockerfile-argocd .
docker build --build-arg BINARY=argocd-application-controller -t $(IMAGE_PREFIX)argocd-application-controller:$(IMAGE_TAG) .
@if [ "$(DOCKER_PUSH)" = "true" ] ; then docker push $(IMAGE_PREFIX)argocd-application-controller:$(IMAGE_TAG) ; fi
.PHONY: cli-image
cli-image:
docker build --build-arg BINARY=argocd -t $(IMAGE_PREFIX)argocd-cli:$(IMAGE_TAG) -f Dockerfile-argocd .
docker build --build-arg BINARY=argocd -t $(IMAGE_PREFIX)argocd-cli:$(IMAGE_TAG) .
@if [ "$(DOCKER_PUSH)" = "true" ] ; then docker push $(IMAGE_PREFIX)argocd-cli:$(IMAGE_TAG) ; fi
.PHONY: builder-image
builder-image:
docker build -t $(IMAGE_PREFIX)argo-cd-ci-builder:$(IMAGE_TAG) -f Dockerfile-ci-builder .
docker build -t $(IMAGE_PREFIX)argo-cd-ci-builder:$(IMAGE_TAG) --target builder .
.PHONY: lint
lint:
@@ -120,23 +124,34 @@ lint:
.PHONY: test
test:
go test `go list ./... | grep -v "github.com/argoproj/argo-cd/test/e2e"`
go test -v `go list ./... | grep -v "github.com/argoproj/argo-cd/test/e2e"`
.PHONY: test-coverage
test-coverage:
go test -v -covermode=count -coverprofile=coverage.out `go list ./... | grep -v "github.com/argoproj/argo-cd/test/e2e"`
@if [ "$(COVERALLS_TOKEN)" != "" ] ; then goveralls -ignore `find . -name '*.pb*.go' | grep -v vendor/ | sed 's!^./!!' | paste -d, -s -` -coverprofile=coverage.out -service=argo-ci -repotoken "$(COVERALLS_TOKEN)"; else echo 'No COVERALLS_TOKEN env var specified. Skipping submission to Coveralls.io'; fi
.PHONY: test-e2e
test-e2e:
go test ./test/e2e
go test -v -failfast -timeout 20m ./test/e2e
# Cleans VSCode debug.test files from sub-dirs to prevent them from being included in packr boxes
.PHONY: clean-debug
clean-debug:
-find ${CURRENT_DIR} -name debug.test | xargs rm -f
.PHONY: clean
clean:
clean: clean-debug
-rm -rf ${CURRENT_DIR}/dist
.PHONY: precheckin
precheckin: test lint
.PHONY: release-precheck
release-precheck:
release-precheck: manifests
@if [ "$(GIT_TREE_STATE)" != "clean" ]; then echo 'git tree state is $(GIT_TREE_STATE)' ; exit 1; fi
@if [ -z "$(GIT_TAG)" ]; then echo 'commit must be tagged to perform release' ; exit 1; fi
@if [ "$(GIT_TAG)" != "v`cat VERSION`" ]; then echo 'VERSION does not match git tag'; exit 1; fi
.PHONY: release
release: release-precheck precheckin cli-darwin cli-linux server-image controller-image repo-server-image cli-image
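Putting the release targets together, an invocation might look like the following sketch (the variable values are illustrative; `release-precheck` aborts unless the tree is clean and the commit is tagged to match `VERSION`):
```
$ git tag v0.9.2
$ make release IMAGE_NAMESPACE=example IMAGE_TAG=v0.9.2 DOCKER_PUSH=true
```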

View File

@@ -1,5 +1,4 @@
controller: go run ./cmd/argocd-application-controller/main.go --app-resync 10
controller: go run ./cmd/argocd-application-controller/main.go
api-server: go run ./cmd/argocd-server/main.go --insecure --disable-auth
repo-server: go run ./cmd/argocd-repo-server/main.go --loglevel debug
dex: sh -c "go run ./cmd/argocd-util/main.go gendexcfg -o `pwd`/dist/dex.yaml && docker run --rm -p 5556:5556 -p 5557:5557 -v `pwd`/dist/dex.yaml:/dex.yaml quay.io/coreos/dex:v2.10.0 serve /dex.yaml"
redis: docker run --rm -p 6379:6379 redis:3.2.11

View File

@@ -1,9 +1,12 @@
[![Coverage Status](https://coveralls.io/repos/github/argoproj/argo-cd/badge.svg?branch=master)](https://coveralls.io/github/argoproj/argo-cd?branch=master)
# Argo CD - GitOps Continuous Delivery for Kubernetes
# Argo CD - Declarative Continuous Delivery for Kubernetes
## What is Argo CD?
Argo CD is a declarative, continuous delivery service based on ksonnet for Kubernetes.
Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes.
![Argo CD UI](docs/argocd-ui.gif)
## Why Argo CD?
@@ -12,25 +15,37 @@ Application deployment and lifecycle management should be automated, auditable,
## Getting Started
Follow our [getting started guide](docs/getting_started.md).
Follow our [getting started guide](docs/getting_started.md). Further [documentation](docs/)
is provided for additional features.
## How it works
Argo CD uses git repositories as the source of truth for defining the desired application state as
well as the target deployment environments. Kubernetes manifests are specified as
[ksonnet](https://ksonnet.io) applications. Argo CD automates the deployment of the desired
application states in the specified target environments.
![Argo CD Architecture](docs/argocd_architecture.png)
Argo CD follows the **GitOps** pattern of using git repositories as the source of truth for defining
the desired application state. Kubernetes manifests can be specified in several ways:
* [ksonnet](https://ksonnet.io) applications
* [kustomize](https://kustomize.io) applications
* [helm](https://helm.sh) charts
* Plain directory of YAML/JSON manifests
Argo CD automates the deployment of the desired application states in the specified target environments.
Application deployments can track updates to branches or tags, or be pinned to a specific version of
manifests at a git commit. See [tracking strategies](docs/tracking_strategies.md) for additional
details about the different tracking strategies available.
For a quick 10-minute overview of Argo CD, check out the demo presented to the SIG Apps community
meeting:
[![Alt text](https://img.youtube.com/vi/aWDIQMbp1cc/0.jpg)](https://youtu.be/aWDIQMbp1cc?t=1m4s)
## Architecture
![Argo CD Architecture](docs/argocd_architecture.png)
Argo CD is implemented as a kubernetes controller which continuously monitors running applications
and compares the current, live state against the desired target state (as specified in the git repo).
A deployed application whose live state deviates from the target state is considered out-of-sync.
Argo CD reports & visualizes the differences as well as providing facilities to automatically or
A deployed application whose live state deviates from the target state is considered `OutOfSync`.
Argo CD reports & visualizes the differences, while providing facilities to automatically or
manually sync the live state back to the desired target state. Any modifications made to the desired
target state in the git repo can be automatically applied and reflected in the specified target
environments.
@@ -40,52 +55,23 @@ For additional details, see [architecture overview](docs/architecture.md).
## Features
* Automated deployment of applications to specified target environments
* Flexibility in support for multiple config management tools (Ksonnet, Kustomize, Helm, plain-YAML)
* Continuous monitoring of deployed applications
* Automated or manual syncing of applications to its target state
* Web and CLI based visualization of applications and differences between live vs. target state
* Automated or manual syncing of applications to their desired state
* Web and CLI based visualization of applications and differences between live vs. desired state
* Rollback/Roll-anywhere to any application state committed in the git repository
* SSO Integration (OIDC, LDAP, SAML 2.0, GitLab, Microsoft, LinkedIn)
* Health assessment statuses on all components of the application
* SSO Integration (OIDC, OAuth2, LDAP, SAML 2.0, GitLab, Microsoft, LinkedIn)
* Webhook Integration (GitHub, BitBucket, GitLab)
## What is ksonnet?
* [Jsonnet](http://jsonnet.org), the basis for ksonnet, is a domain specific configuration language,
which provides extreme flexibility for composing and manipulating JSON/YAML specifications.
* [Ksonnet](http://ksonnet.io) goes one step further by applying Jsonnet principles to Kubernetes
manifests. It provides an opinionated file & directory structure to organize applications into
reusable components, parameters, and environments. Environments can be hierarchical, which promotes
both re-use and granular customization of application and environment specifications.
## Why ksonnet?
Application configuration management is a hard problem and grows rapidly in complexity as you deploy
more applications against more and more environments. Current templating systems, such as Jinja
and Golang templating, are unnatural ways to maintain kubernetes manifests, and are not well suited to
capture subtle configuration differences between environments. Their ability to compose and re-use
application and environment configurations is also very limited.
Imagine we have a single guestbook application deployed in following environments:
| Environment | K8s Version | Application Image | DB Connection String | Environment Vars | Sidecars |
|---------------|-------------|------------------------|-----------------------|------------------|---------------|
| minikube      | 1.10.0      | jesse/guestbook:latest | sql://localhost/db    | DEBUG=true       |               |
| dev | 1.9.0 | app/guestbook:latest | sql://dev-test/db | DEBUG=true | |
| staging | 1.8.0 | app/guestbook:e3c0263 | sql://staging/db | | istio,dnsmasq |
| us-west-1 | 1.8.0 | app/guestbook:abc1234 | sql://prod/db | FOO_FEATURE=true | istio,dnsmasq |
| us-west-2 | 1.8.0 | app/guestbook:abc1234 | sql://prod/db | | istio,dnsmasq |
| us-east-1 | 1.9.0 | app/guestbook:abc1234 | sql://prod/db | BAR_FEATURE=true | istio,dnsmasq |
Ksonnet:
* Enables composition and re-use of common YAML specifications
* Allows overrides, additions, and subtractions of YAML sub-components specific to each environment
* Guarantees proper generation of K8s manifests suitable for the corresponding Kubernetes API version
* Provides [kubernetes-specific jsonnet libraries](https://github.com/ksonnet/ksonnet-lib) to enable
concise definition of kubernetes manifests
* PreSync, Sync, PostSync hooks to support complex application rollouts (e.g. blue/green & canary upgrades)
* Audit trails for application events and API calls
* Parameter overrides for overriding ksonnet/helm parameters in git
* Service account/access key management for CI pipelines
## Development Status
* Argo CD is in early development
* Argo CD is being used in production to deploy SaaS services at Intuit
## Roadmap
* PreSync, PostSync, OutOfSync hooks
* Customized application actions as Argo workflows
* Blue/Green & canary upgrades
* Auto-sync toggle to directly apply git state changes to live state
* Revamped UI, and feature parity with CLI
* Customizable application actions

View File

@@ -1 +1 @@
0.4.5
0.9.2

View File

@@ -8,12 +8,6 @@ import (
"strconv"
"time"
"github.com/argoproj/argo-cd"
"github.com/argoproj/argo-cd/controller"
"github.com/argoproj/argo-cd/errors"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/db"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"k8s.io/client-go/kubernetes"
@@ -22,8 +16,15 @@ import (
// load the gcp plugin (required to authenticate against GKE clusters).
_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
// load the oidc plugin (required to authenticate with OpenID Connect).
"github.com/argoproj/argo-cd/reposerver"
_ "k8s.io/client-go/plugin/pkg/client/auth/oidc"
"github.com/argoproj/argo-cd"
"github.com/argoproj/argo-cd/controller"
"github.com/argoproj/argo-cd/errors"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned"
"github.com/argoproj/argo-cd/reposerver"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/stats"
)
const (
@@ -65,30 +66,25 @@ func newCommand() *cobra.Command {
namespace, _, err := clientConfig.Namespace()
errors.CheckError(err)
// TODO (amatyushentsev): Use config map to store controller configuration
controllerConfig := controller.ApplicationControllerConfig{
Namespace: namespace,
InstanceID: "",
}
db := db.NewDB(namespace, kubeClient)
resyncDuration := time.Duration(appResyncPeriod) * time.Second
appStateManager := controller.NewAppStateManager(db, appClient, reposerver.NewRepositoryServerClientset(repoServerAddress), namespace)
appHealthManager := controller.NewAppHealthManager(db, namespace)
repoClientset := reposerver.NewRepositoryServerClientset(repoServerAddress)
appController := controller.NewApplicationController(
namespace,
kubeClient,
appClient,
db,
appStateManager,
appHealthManager,
resyncDuration,
&controllerConfig)
repoClientset,
resyncDuration)
secretController := controller.NewSecretController(kubeClient, repoClientset, resyncDuration, namespace)
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
log.Infof("Application Controller (version: %s) starting (namespace: %s)", argocd.GetVersion(), namespace)
stats.RegisterStackDumper()
stats.StartStatsTicker(10 * time.Minute)
stats.RegisterHeapDumper("memprofile")
go secretController.Run(ctx)
go appController.Run(ctx, statusProcessors, operationProcessors)
// Wait forever
select {}
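For local development the controller can be run directly, as the pre-change Procfile in this diff did; a minimal sketch, assuming the `--app-resync` flag backing `appResyncPeriod` above is still registered:
```
$ go run ./cmd/argocd-application-controller/main.go --app-resync 10
```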

View File

@@ -4,6 +4,10 @@ import (
"fmt"
"net"
"os"
"time"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/argoproj/argo-cd"
"github.com/argoproj/argo-cd/errors"
@@ -12,9 +16,8 @@ import (
"github.com/argoproj/argo-cd/util/cache"
"github.com/argoproj/argo-cd/util/git"
"github.com/argoproj/argo-cd/util/ksonnet"
"github.com/go-redis/redis"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/argoproj/argo-cd/util/stats"
"github.com/argoproj/argo-cd/util/tls"
)
const (
@@ -25,7 +28,8 @@ const (
func newCommand() *cobra.Command {
var (
logLevel string
logLevel string
tlsConfigCustomizerSrc func() (tls.ConfigCustomizer, error)
)
var command = cobra.Command{
Use: cliName,
@@ -35,7 +39,11 @@ func newCommand() *cobra.Command {
errors.CheckError(err)
log.SetLevel(level)
server := reposerver.NewServer(git.NewFactory(), newCache())
tlsConfigCustomizer, err := tlsConfigCustomizerSrc()
errors.CheckError(err)
server, err := reposerver.NewServer(git.NewFactory(), newCache(), tlsConfigCustomizer)
errors.CheckError(err)
grpc := server.CreateGRPC()
listener, err := net.Listen("tcp", fmt.Sprintf(":%d", port))
errors.CheckError(err)
@@ -45,6 +53,9 @@ func newCommand() *cobra.Command {
log.Infof("argocd-repo-server %s serving on %s", argocd.GetVersion(), listener.Addr())
log.Infof("ksonnet version: %s", ksVers)
stats.RegisterStackDumper()
stats.StartStatsTicker(10 * time.Minute)
stats.RegisterHeapDumper("memprofile")
err = grpc.Serve(listener)
errors.CheckError(err)
return nil
@@ -52,17 +63,18 @@ func newCommand() *cobra.Command {
}
command.Flags().StringVar(&logLevel, "loglevel", "info", "Set the logging level. One of: debug|info|warn|error")
tlsConfigCustomizerSrc = tls.AddTLSFlagsToCmd(&command)
return &command
}
func newCache() cache.Cache {
//return cache.NewInMemoryCache(repository.DefaultRepoCacheExpiration)
client := redis.NewClient(&redis.Options{
Addr: "localhost:6379",
Password: "",
DB: 0,
})
return cache.NewRedisCache(client, repository.DefaultRepoCacheExpiration)
return cache.NewInMemoryCache(repository.DefaultRepoCacheExpiration)
// client := redis.NewClient(&redis.Options{
// Addr: "localhost:6379",
// Password: "",
// DB: 0,
// })
// return cache.NewRedisCache(client, repository.DefaultRepoCacheExpiration)
}
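The repo server can likewise be run standalone for development, mirroring the Procfile entry elsewhere in this diff:
```
$ go run ./cmd/argocd-repo-server/main.go --loglevel debug
```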
func main() {

View File

@@ -2,27 +2,35 @@ package commands
import (
"context"
"flag"
"strconv"
"time"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
"github.com/argoproj/argo-cd/errors"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned"
"github.com/argoproj/argo-cd/reposerver"
"github.com/argoproj/argo-cd/server"
"github.com/argoproj/argo-cd/util/cli"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
"github.com/argoproj/argo-cd/util/stats"
"github.com/argoproj/argo-cd/util/tls"
)
// NewCommand returns a new instance of an argocd command
func NewCommand() *cobra.Command {
var (
insecure bool
logLevel string
clientConfig clientcmd.ClientConfig
staticAssetsDir string
repoServerAddress string
disableAuth bool
insecure bool
logLevel string
glogLevel int
clientConfig clientcmd.ClientConfig
staticAssetsDir string
repoServerAddress string
disableAuth bool
tlsConfigCustomizerSrc func() (tls.ConfigCustomizer, error)
)
var command = &cobra.Command{
Use: cliName,
@@ -33,28 +41,41 @@ func NewCommand() *cobra.Command {
errors.CheckError(err)
log.SetLevel(level)
// Set the glog level for the k8s go-client
_ = flag.CommandLine.Parse([]string{})
_ = flag.Lookup("logtostderr").Value.Set("true")
_ = flag.Lookup("v").Value.Set(strconv.Itoa(glogLevel))
config, err := clientConfig.ClientConfig()
errors.CheckError(err)
namespace, _, err := clientConfig.Namespace()
errors.CheckError(err)
tlsConfigCustomizer, err := tlsConfigCustomizerSrc()
errors.CheckError(err)
kubeclientset := kubernetes.NewForConfigOrDie(config)
appclientset := appclientset.NewForConfigOrDie(config)
repoclientset := reposerver.NewRepositoryServerClientset(repoServerAddress)
argoCDOpts := server.ArgoCDServerOpts{
Insecure: insecure,
Namespace: namespace,
StaticAssetsDir: staticAssetsDir,
KubeClientset: kubeclientset,
AppClientset: appclientset,
RepoClientset: repoclientset,
DisableAuth: disableAuth,
Insecure: insecure,
Namespace: namespace,
StaticAssetsDir: staticAssetsDir,
KubeClientset: kubeclientset,
AppClientset: appclientset,
RepoClientset: repoclientset,
DisableAuth: disableAuth,
TLSConfigCustomizer: tlsConfigCustomizer,
}
argocd := server.NewServer(argoCDOpts)
stats.RegisterStackDumper()
stats.StartStatsTicker(10 * time.Minute)
stats.RegisterHeapDumper("memprofile")
for {
argocd := server.NewServer(argoCDOpts)
ctx := context.Background()
ctx, cancel := context.WithCancel(ctx)
argocd.Run(ctx, 8080)
@@ -67,8 +88,10 @@ func NewCommand() *cobra.Command {
command.Flags().BoolVar(&insecure, "insecure", false, "Run server without TLS")
command.Flags().StringVar(&staticAssetsDir, "staticassets", "", "Static assets directory path")
command.Flags().StringVar(&logLevel, "loglevel", "info", "Set the logging level. One of: debug|info|warn|error")
command.Flags().IntVar(&glogLevel, "gloglevel", 0, "Set the glog logging level")
command.Flags().StringVar(&repoServerAddress, "repo-server", "localhost:8081", "Repo server address.")
command.Flags().BoolVar(&disableAuth, "disable-auth", false, "Disable client authentication")
command.AddCommand(cli.NewVersionCmd(cliName))
tlsConfigCustomizerSrc = tls.AddTLSFlagsToCmd(command)
return command
}

View File

@@ -6,14 +6,22 @@ import (
"io/ioutil"
"os"
"os/exec"
"strings"
"syscall"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/errors"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/db"
"github.com/argoproj/argo-cd/util/dex"
"github.com/argoproj/argo-cd/util/settings"
"github.com/ghodss/yaml"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
apiv1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
@@ -26,6 +34,9 @@ import (
const (
// CLIName is the name of the CLI
cliName = "argocd-util"
// yamlSeparator separates sections of a YAML file
yamlSeparator = "\n---\n"
)
// NewCommand returns a new instance of an argocd command
@@ -45,6 +56,9 @@ func NewCommand() *cobra.Command {
command.AddCommand(cli.NewVersionCmd(cliName))
command.AddCommand(NewRunDexCommand())
command.AddCommand(NewGenDexConfigCommand())
command.AddCommand(NewImportCommand())
command.AddCommand(NewExportCommand())
command.AddCommand(NewSettingsCommand())
command.Flags().StringVar(&logLevel, "loglevel", "info", "Set the logging level. One of: debug|info|warn|error")
return command
@@ -154,6 +168,215 @@ func NewGenDexConfigCommand() *cobra.Command {
return &command
}
// NewImportCommand defines a new command for importing Kubernetes and Argo CD resources.
func NewImportCommand() *cobra.Command {
var (
clientConfig clientcmd.ClientConfig
)
var command = cobra.Command{
Use: "import SOURCE",
Short: "Import Argo CD data from stdin (specify `-') or a file",
RunE: func(c *cobra.Command, args []string) error {
if len(args) != 1 {
c.HelpFunc()(c, args)
os.Exit(1)
}
var (
input []byte
err error
newSettings *settings.ArgoCDSettings
newRepos []*v1alpha1.Repository
newClusters []*v1alpha1.Cluster
newApps []*v1alpha1.Application
newRBACCM *apiv1.ConfigMap
)
if in := args[0]; in == "-" {
input, err = ioutil.ReadAll(os.Stdin)
errors.CheckError(err)
} else {
input, err = ioutil.ReadFile(in)
errors.CheckError(err)
}
inputStrings := strings.Split(string(input), yamlSeparator)
err = yaml.Unmarshal([]byte(inputStrings[0]), &newSettings)
errors.CheckError(err)
err = yaml.Unmarshal([]byte(inputStrings[1]), &newRepos)
errors.CheckError(err)
err = yaml.Unmarshal([]byte(inputStrings[2]), &newClusters)
errors.CheckError(err)
err = yaml.Unmarshal([]byte(inputStrings[3]), &newApps)
errors.CheckError(err)
err = yaml.Unmarshal([]byte(inputStrings[4]), &newRBACCM)
errors.CheckError(err)
config, err := clientConfig.ClientConfig()
errors.CheckError(err)
namespace, _, err := clientConfig.Namespace()
errors.CheckError(err)
kubeClientset := kubernetes.NewForConfigOrDie(config)
settingsMgr := settings.NewSettingsManager(kubeClientset, namespace)
err = settingsMgr.SaveSettings(newSettings)
errors.CheckError(err)
db := db.NewDB(namespace, kubeClientset)
_, err = kubeClientset.CoreV1().ConfigMaps(namespace).Create(newRBACCM)
errors.CheckError(err)
for _, repo := range newRepos {
_, err := db.CreateRepository(context.Background(), repo)
if err != nil {
log.Warn(err)
}
}
for _, cluster := range newClusters {
_, err := db.CreateCluster(context.Background(), cluster)
if err != nil {
log.Warn(err)
}
}
appClientset := appclientset.NewForConfigOrDie(config)
for _, app := range newApps {
out, err := appClientset.ArgoprojV1alpha1().Applications(namespace).Create(app)
errors.CheckError(err)
log.Println(out)
}
return nil
},
}
clientConfig = cli.AddKubectlFlagsToCmd(&command)
return &command
}
// NewExportCommand defines a new command for exporting Kubernetes and Argo CD resources.
func NewExportCommand() *cobra.Command {
var (
clientConfig clientcmd.ClientConfig
out string
)
var command = cobra.Command{
Use: "export",
Short: "Export all Argo CD data to stdout (default) or a file",
RunE: func(c *cobra.Command, args []string) error {
config, err := clientConfig.ClientConfig()
errors.CheckError(err)
namespace, _, err := clientConfig.Namespace()
errors.CheckError(err)
kubeClientset := kubernetes.NewForConfigOrDie(config)
settingsMgr := settings.NewSettingsManager(kubeClientset, namespace)
settings, err := settingsMgr.GetSettings()
errors.CheckError(err)
// certificate data is included in secrets that are exported alongside
settings.Certificate = nil
db := db.NewDB(namespace, kubeClientset)
clusters, err := db.ListClusters(context.Background())
errors.CheckError(err)
repos, err := db.ListRepositories(context.Background())
errors.CheckError(err)
appClientset := appclientset.NewForConfigOrDie(config)
apps, err := appClientset.ArgoprojV1alpha1().Applications(namespace).List(metav1.ListOptions{})
errors.CheckError(err)
rbacCM, err := kubeClientset.CoreV1().ConfigMaps(namespace).Get(common.ArgoCDRBACConfigMapName, metav1.GetOptions{})
errors.CheckError(err)
// remove extraneous cruft from output
rbacCM.ObjectMeta = metav1.ObjectMeta{
Name: rbacCM.ObjectMeta.Name,
}
// remove extraneous cruft from output
for idx, app := range apps.Items {
apps.Items[idx].ObjectMeta = metav1.ObjectMeta{
Name: app.ObjectMeta.Name,
Finalizers: app.ObjectMeta.Finalizers,
}
apps.Items[idx].Status = v1alpha1.ApplicationStatus{
History: app.Status.History,
}
apps.Items[idx].Operation = nil
}
// take a list of exportable objects, marshal them to YAML,
// and return a string joined by a delimiter
output := func(delimiter string, oo ...interface{}) string {
out := make([]string, 0)
for _, o := range oo {
data, err := yaml.Marshal(o)
errors.CheckError(err)
out = append(out, string(data))
}
return strings.Join(out, delimiter)
}(yamlSeparator, settings, repos.Items, clusters.Items, apps.Items, rbacCM)
if out == "-" {
fmt.Println(output)
} else {
err = ioutil.WriteFile(out, []byte(output), 0644)
errors.CheckError(err)
}
return nil
},
}
clientConfig = cli.AddKubectlFlagsToCmd(&command)
command.Flags().StringVarP(&out, "out", "o", "-", "Output to the specified file instead of stdout")
return &command
}
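Taken together, the export and import commands support a simple backup/restore round trip; a minimal sketch based on the flags shown above (`-o -` and a `-` source mean stdout and stdin, respectively):
```
$ argocd-util export -o backup.yaml   # or: argocd-util export > backup.yaml
$ argocd-util import backup.yaml      # or: argocd-util import -   (read from stdin)
```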
// NewSettingsCommand returns a new instance of `argocd-util settings` command
func NewSettingsCommand() *cobra.Command {
var (
clientConfig clientcmd.ClientConfig
updateSuperuser bool
superuserPassword string
updateSignature bool
)
var command = &cobra.Command{
Use: "settings",
Short: "Creates or updates ArgoCD settings",
Long: "Creates or updates ArgoCD settings",
Run: func(c *cobra.Command, args []string) {
conf, err := clientConfig.ClientConfig()
errors.CheckError(err)
namespace, wasSpecified, err := clientConfig.Namespace()
errors.CheckError(err)
if !(wasSpecified) {
namespace = "argocd"
}
kubeclientset, err := kubernetes.NewForConfig(conf)
errors.CheckError(err)
settingsMgr := settings.NewSettingsManager(kubeclientset, namespace)
_, err = settings.UpdateSettings(superuserPassword, settingsMgr, updateSignature, updateSuperuser, namespace)
errors.CheckError(err)
},
}
command.Flags().BoolVar(&updateSuperuser, "update-superuser", false, "force updating the superuser password")
command.Flags().StringVar(&superuserPassword, "superuser-password", "", "password for super user")
command.Flags().BoolVar(&updateSignature, "update-signature", false, "force updating the server-side token signing signature")
clientConfig = cli.AddKubectlFlagsToCmd(command)
return command
}
func main() {
if err := NewCommand().Execute(); err != nil {
fmt.Println(err)

View File

@@ -0,0 +1,74 @@
package commands
import (
"context"
"fmt"
"os"
"syscall"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
"github.com/argoproj/argo-cd/server/account"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/settings"
"github.com/spf13/cobra"
"golang.org/x/crypto/ssh/terminal"
)
func NewAccountCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "account",
Short: "Manage account settings",
Run: func(c *cobra.Command, args []string) {
c.HelpFunc()(c, args)
os.Exit(1)
},
}
command.AddCommand(NewAccountUpdatePasswordCommand(clientOpts))
return command
}
func NewAccountUpdatePasswordCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
currentPassword string
newPassword string
)
var command = &cobra.Command{
Use: "update-password",
Short: "Update password",
Run: func(c *cobra.Command, args []string) {
if len(args) != 0 {
c.HelpFunc()(c, args)
os.Exit(1)
}
if currentPassword == "" {
fmt.Print("*** Enter current password: ")
password, err := terminal.ReadPassword(syscall.Stdin)
errors.CheckError(err)
currentPassword = string(password)
fmt.Print("\n")
}
if newPassword == "" {
var err error
newPassword, err = settings.ReadAndConfirmPassword()
errors.CheckError(err)
}
updatePasswordRequest := account.UpdatePasswordRequest{
NewPassword: newPassword,
CurrentPassword: currentPassword,
}
conn, usrIf := argocdclient.NewClientOrDie(clientOpts).NewAccountClientOrDie()
defer util.Close(conn)
_, err := usrIf.UpdatePassword(context.Background(), &updatePasswordRequest)
errors.CheckError(err)
fmt.Printf("Password updated\n")
},
}
command.Flags().StringVar(&currentPassword, "current-password", "", "current password you wish to change")
command.Flags().StringVar(&newPassword, "new-password", "", "new password you want to update to")
return command
}
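Usage of the new command is straightforward; omitting the flags triggers the interactive prompts shown above:
```
$ argocd account update-password --current-password <old> --new-password <new>
```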

File diff suppressed because it is too large

View File

@@ -18,6 +18,7 @@ import (
"github.com/ghodss/yaml"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/clientcmd"
)
@@ -42,6 +43,12 @@ func NewClusterCommand(clientOpts *argocdclient.ClientOptions, pathOpts *clientc
// NewClusterAddCommand returns a new instance of an `argocd cluster add` command
func NewClusterAddCommand(clientOpts *argocdclient.ClientOptions, pathOpts *clientcmd.PathOptions) *cobra.Command {
var (
inCluster bool
upsert bool
awsRoleArn string
awsClusterName string
)
var command = &cobra.Command{
Use: "add",
Short: fmt.Sprintf("%s cluster add CONTEXT", cliName),
@@ -58,6 +65,7 @@ func NewClusterAddCommand(clientOpts *argocdclient.ClientOptions, pathOpts *clie
if clstContext == nil {
log.Fatalf("Context %s does not exist in kubeconfig", args[0])
}
overrides := clientcmd.ConfigOverrides{
Context: *clstContext,
}
@@ -65,18 +73,40 @@ func NewClusterAddCommand(clientOpts *argocdclient.ClientOptions, pathOpts *clie
conf, err := clientConfig.ClientConfig()
errors.CheckError(err)
// Install RBAC resources for managing the cluster
managerBearerToken := common.InstallClusterManagerRBAC(conf)
managerBearerToken := ""
var awsAuthConf *argoappv1.AWSAuthConfig
if awsClusterName != "" {
awsAuthConf = &argoappv1.AWSAuthConfig{
ClusterName: awsClusterName,
RoleARN: awsRoleArn,
}
} else {
// Install RBAC resources for managing the cluster
clientset, err := kubernetes.NewForConfig(conf)
errors.CheckError(err)
managerBearerToken, err = common.InstallClusterManagerRBAC(clientset)
errors.CheckError(err)
}
conn, clusterIf := argocdclient.NewClientOrDie(clientOpts).NewClusterClientOrDie()
defer util.Close(conn)
clst := NewCluster(args[0], conf, managerBearerToken)
clst, err = clusterIf.Create(context.Background(), clst)
clst := NewCluster(args[0], conf, managerBearerToken, awsAuthConf)
if inCluster {
clst.Server = common.KubernetesInternalAPIServerAddr
}
clstCreateReq := cluster.ClusterCreateRequest{
Cluster: clst,
Upsert: upsert,
}
clst, err = clusterIf.Create(context.Background(), &clstCreateReq)
errors.CheckError(err)
fmt.Printf("Cluster '%s' added\n", clst.Name)
},
}
command.PersistentFlags().StringVar(&pathOpts.LoadingRules.ExplicitPath, pathOpts.ExplicitFileFlag, pathOpts.LoadingRules.ExplicitPath, "use a particular kubeconfig file")
command.Flags().BoolVar(&inCluster, "in-cluster", false, "Indicates ArgoCD resides inside this cluster and should connect using the internal k8s hostname (kubernetes.default.svc)")
command.Flags().BoolVar(&upsert, "upsert", false, "Override an existing cluster with the same name even if the spec differs")
command.Flags().StringVar(&awsClusterName, "aws-cluster-name", "", "AWS Cluster name if set then aws-iam-authenticator will be used to access cluster")
command.Flags().StringVar(&awsRoleArn, "aws-role-arn", "", "Optional AWS role arn. If set then AWS IAM Authenticator assume a role to perform cluster operations instead of the default AWS credential provider chain.")
return command
}
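Two illustrative invocations of the extended `cluster add` command (the context names and role ARN are placeholders):
```
$ argocd cluster add minikube --upsert
$ argocd cluster add eks-context --aws-cluster-name my-eks --aws-role-arn arn:aws:iam::123456789012:role/argocd-manager
```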
@@ -96,9 +126,20 @@ func printKubeContexts(ca clientcmd.ConfigAccess) {
}
sort.Strings(contextNames)
if config.Clusters == nil {
return
}
for _, name := range contextNames {
// ignore malformed kube config entries
context := config.Contexts[name]
if context == nil {
continue
}
cluster := config.Clusters[context.Cluster]
if cluster == nil {
continue
}
prefix := " "
if config.CurrentContext == name {
prefix = "*"
@@ -108,7 +149,7 @@ func printKubeContexts(ca clientcmd.ConfigAccess) {
}
}
func NewCluster(name string, conf *rest.Config, managerBearerToken string) *argoappv1.Cluster {
func NewCluster(name string, conf *rest.Config, managerBearerToken string, awsAuthConf *argoappv1.AWSAuthConfig) *argoappv1.Cluster {
tlsClientConfig := argoappv1.TLSClientConfig{
Insecure: conf.TLSClientConfig.Insecure,
ServerName: conf.TLSClientConfig.ServerName,
@@ -137,6 +178,7 @@ func NewCluster(name string, conf *rest.Config, managerBearerToken string) *argo
Config: argoappv1.ClusterConfig{
BearerToken: managerBearerToken,
TLSClientConfig: tlsClientConfig,
AWSAuthConfig: awsAuthConf,
},
}
return &clst
@@ -178,9 +220,14 @@ func NewClusterRemoveCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comm
}
conn, clusterIf := argocdclient.NewClientOrDie(clientOpts).NewClusterClientOrDie()
defer util.Close(conn)
// clientset, err := kubernetes.NewForConfig(conf)
// errors.CheckError(err)
for _, clusterName := range args {
// TODO(jessesuen): find the right context and remove manager RBAC artifacts
// common.UninstallClusterManagerRBAC(conf)
// err := common.UninstallClusterManagerRBAC(clientset)
// errors.CheckError(err)
_, err := clusterIf.Delete(context.Background(), &cluster.ClusterQuery{Server: clusterName})
errors.CheckError(err)
}
@@ -200,9 +247,9 @@ func NewClusterListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comman
clusters, err := clusterIf.List(context.Background(), &cluster.ClusterQuery{})
errors.CheckError(err)
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "SERVER\tNAME\tMESSAGE\n")
fmt.Fprintf(w, "SERVER\tNAME\tSTATUS\tMESSAGE\n")
for _, c := range clusters.Items {
fmt.Fprintf(w, "%s\t%s\t%s\n", c.Server, c.Name, c.Message)
fmt.Fprintf(w, "%s\t%s\t%s\t%s\n", c.Server, c.Name, c.ConnectionState.Status, c.ConnectionState.Message)
}
_ = w.Flush()
},

View File

@@ -1,75 +0,0 @@
package commands
import (
"github.com/argoproj/argo-cd/errors"
"github.com/argoproj/argo-cd/install"
"github.com/argoproj/argo-cd/util/cli"
"github.com/spf13/cobra"
"k8s.io/client-go/tools/clientcmd"
)
// NewInstallCommand returns a new instance of `argocd install` command
func NewInstallCommand() *cobra.Command {
var (
clientConfig clientcmd.ClientConfig
installOpts install.InstallOptions
)
var command = &cobra.Command{
Use: "install",
Short: "Install Argo CD",
Long: "Install Argo CD",
Run: func(c *cobra.Command, args []string) {
conf, err := clientConfig.ClientConfig()
errors.CheckError(err)
namespace, wasSpecified, err := clientConfig.Namespace()
errors.CheckError(err)
if wasSpecified {
installOpts.Namespace = namespace
}
installer, err := install.NewInstaller(conf, installOpts)
errors.CheckError(err)
installer.Install()
},
}
command.Flags().BoolVar(&installOpts.Upgrade, "upgrade", false, "upgrade controller/ui deployments and configmap if already installed")
command.Flags().BoolVar(&installOpts.DryRun, "dry-run", false, "print the kubernetes manifests to stdout instead of installing")
command.Flags().StringVar(&installOpts.SuperuserPassword, "superuser-password", "", "password for super user")
command.Flags().StringVar(&installOpts.ControllerImage, "controller-image", install.DefaultControllerImage, "use a specified controller image")
command.Flags().StringVar(&installOpts.ServerImage, "server-image", install.DefaultServerImage, "use a specified api server image")
command.Flags().StringVar(&installOpts.UIImage, "ui-image", install.DefaultUIImage, "use a specified ui image")
command.Flags().StringVar(&installOpts.RepoServerImage, "repo-server-image", install.DefaultRepoServerImage, "use a specified repo server image")
command.Flags().StringVar(&installOpts.ImagePullPolicy, "image-pull-policy", "", "set the image pull policy of the pod specs")
clientConfig = cli.AddKubectlFlagsToCmd(command)
command.AddCommand(newSettingsCommand())
return command
}
// newSettingsCommand returns a new instance of `argocd install settings` command
func newSettingsCommand() *cobra.Command {
var (
clientConfig clientcmd.ClientConfig
installOpts install.InstallOptions
)
var command = &cobra.Command{
Use: "settings",
Short: "Creates or updates ArgoCD settings",
Long: "Creates or updates ArgoCD settings",
Run: func(c *cobra.Command, args []string) {
conf, err := clientConfig.ClientConfig()
errors.CheckError(err)
namespace, wasSpecified, err := clientConfig.Namespace()
errors.CheckError(err)
if wasSpecified {
installOpts.Namespace = namespace
}
installer, err := install.NewInstaller(conf, installOpts)
errors.CheckError(err)
installer.InstallSettings()
},
}
command.Flags().BoolVar(&installOpts.UpdateSuperuser, "update-superuser", false, "force updating the superuser password")
command.Flags().StringVar(&installOpts.SuperuserPassword, "superuser-password", "", "password for super user")
command.Flags().BoolVar(&installOpts.UpdateSignature, "update-signature", false, "force updating the server-side token signing signature")
clientConfig = cli.AddKubectlFlagsToCmd(command)
return command
}

View File

@@ -77,6 +77,7 @@ func NewLoginCommand(globalClientOpts *argocdclient.ClientOptions) *cobra.Comman
// Perform the login
var tokenString string
var refreshToken string
if !sso {
tokenString = passwordLogin(acdClient, username, password)
} else {
@@ -85,15 +86,7 @@ func NewLoginCommand(globalClientOpts *argocdclient.ClientOptions) *cobra.Comman
if !ssoConfigured(acdSet) {
log.Fatalf("ArgoCD instance is not configured with SSO")
}
tokenString = oauth2Login(server, clientOpts.PlainText)
// The token we just received from the OAuth2 flow was issued by dex. ArgoCD
// currently does not back dex with any kind of persistent storage (it is run
// in-memory). As a result, this token cannot be used in any permanent capacity.
// Restarts of dex will result in a different signing key, and sessions becoming
// invalid. Instead, we turn around and ask ArgoCD to re-sign the token (ArgoCD
// *does* persist its signing keys), and that re-signed token is what we store in
// the config. Should we ever decide to have a database layer for dex, the next
// line can be removed.
tokenString = tokenLogin(acdClient, tokenString)
tokenString, refreshToken = oauth2Login(server, clientOpts.PlainText)
}
parser := &jwt.Parser{
@@ -116,8 +109,9 @@ func NewLoginCommand(globalClientOpts *argocdclient.ClientOptions) *cobra.Comman
Insecure: globalClientOpts.Insecure,
})
localCfg.UpsertUser(localconfig.User{
Name: ctxName,
AuthToken: tokenString,
Name: ctxName,
AuthToken: tokenString,
RefreshToken: refreshToken,
})
if ctxName == "" {
ctxName = server
@@ -163,8 +157,9 @@ func getFreePort() (int, error) {
return ln.Addr().(*net.TCPAddr).Port, ln.Close()
}
// oauth2Login opens a browser, runs a temporary HTTP server to delegate OAuth2 login flow and returns the JWT token
func oauth2Login(host string, plaintext bool) string {
// oauth2Login opens a browser, runs a temporary HTTP server to delegate the OAuth2 login flow and
// returns the JWT token and a refresh token (if supported)
func oauth2Login(host string, plaintext bool) (string, string) {
ctx := context.Background()
port, err := getFreePort()
errors.CheckError(err)
@@ -183,6 +178,7 @@ func oauth2Login(host string, plaintext bool) string {
}
srv := &http.Server{Addr: ":" + strconv.Itoa(port)}
var tokenString string
var refreshToken string
loginCompleted := make(chan struct{})
callbackHandler := func(w http.ResponseWriter, r *http.Request) {
@@ -215,8 +211,9 @@ func oauth2Login(host string, plaintext bool) string {
log.Fatal(errMsg)
return
}
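// Token.Extra returns an interface{}; the type assertion below yields the empty
// string when the provider did not include a refresh_token in its response.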
refreshToken, _ = tok.Extra("refresh_token").(string)
log.Debugf("Token: %s", tokenString)
log.Debugf("Refresh Token: %s", tokenString)
successPage := `
<div style="height:100px; width:100%!; display:flex; flex-direction: column; justify-content: center; align-items:center; background-color:#2ecc71; color:white; font-size:22"><div>Authentication successful!</div></div>
<p style="margin-top:20px; font-size:18; text-align:center">Authentication was successful, you can now return to CLI. This page will close automatically</p>
@@ -248,7 +245,7 @@ func oauth2Login(host string, plaintext bool) string {
}()
<-loginCompleted
_ = srv.Shutdown(ctx)
return tokenString
return tokenString, refreshToken
}
func passwordLogin(acdClient argocdclient.Client, username, password string) string {
@@ -263,14 +260,3 @@ func passwordLogin(acdClient argocdclient.Client, username, password string) str
errors.CheckError(err)
return createdSession.Token
}
func tokenLogin(acdClient argocdclient.Client, token string) string {
sessConn, sessionIf := acdClient.NewSessionClientOrDie()
defer util.Close(sessConn)
sessionRequest := session.SessionCreateRequest{
Token: token,
}
createdSession, err := sessionIf.Create(context.Background(), &sessionRequest)
errors.CheckError(err)
return createdSession.Token
}

View File

@@ -0,0 +1,790 @@
package commands
import (
"context"
"fmt"
"os"
"strconv"
"strings"
"text/tabwriter"
"time"
timeutil "github.com/argoproj/pkg/time"
"github.com/dustin/go-humanize"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
"k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/server/project"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/git"
projectutil "github.com/argoproj/argo-cd/util/project"
)
const (
policyTemplate = "p, proj:%s:%s, applications, %s, %s/%s, %s"
)
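// For illustration, with hypothetical values project "default", role "ci", action
// "sync", object "*" and permission "allow", policyTemplate renders as:
//   p, proj:default:ci, applications, sync, default/*, allow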
type projectOpts struct {
description string
destinations []string
sources []string
}
type policyOpts struct {
action string
permission string
object string
}
func (opts *projectOpts) GetDestinations() []v1alpha1.ApplicationDestination {
destinations := make([]v1alpha1.ApplicationDestination, 0)
for _, destStr := range opts.destinations {
parts := strings.Split(destStr, ",")
if len(parts) != 2 {
log.Fatalf("Expected destination of the form: server,namespace. Received: %s", destStr)
} else {
destinations = append(destinations, v1alpha1.ApplicationDestination{
Server: parts[0],
Namespace: parts[1],
})
}
}
return destinations
}
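// For example, the hypothetical flag value "--dest https://192.168.99.100:8443,default"
// parses into ApplicationDestination{Server: "https://192.168.99.100:8443", Namespace: "default"}.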
// NewProjectCommand returns a new instance of an `argocd proj` command
func NewProjectCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "proj",
Short: "Manage projects",
Run: func(c *cobra.Command, args []string) {
c.HelpFunc()(c, args)
os.Exit(1)
},
}
command.AddCommand(NewProjectRoleCommand(clientOpts))
command.AddCommand(NewProjectCreateCommand(clientOpts))
command.AddCommand(NewProjectDeleteCommand(clientOpts))
command.AddCommand(NewProjectListCommand(clientOpts))
command.AddCommand(NewProjectSetCommand(clientOpts))
command.AddCommand(NewProjectAddDestinationCommand(clientOpts))
command.AddCommand(NewProjectRemoveDestinationCommand(clientOpts))
command.AddCommand(NewProjectAddSourceCommand(clientOpts))
command.AddCommand(NewProjectRemoveSourceCommand(clientOpts))
command.AddCommand(NewProjectAllowClusterResourceCommand(clientOpts))
command.AddCommand(NewProjectDenyClusterResourceCommand(clientOpts))
command.AddCommand(NewProjectAllowNamespaceResourceCommand(clientOpts))
command.AddCommand(NewProjectDenyNamespaceResourceCommand(clientOpts))
return command
}
func addProjFlags(command *cobra.Command, opts *projectOpts) {
command.Flags().StringVarP(&opts.description, "description", "", "", "Project description")
command.Flags().StringArrayVarP(&opts.destinations, "dest", "d", []string{},
"Permitted destination server and namespace (e.g. https://192.168.99.100:8443,default)")
command.Flags().StringArrayVarP(&opts.sources, "src", "s", []string{}, "Permitted git source repository URL")
}
func addPolicyFlags(command *cobra.Command, opts *policyOpts) {
command.Flags().StringVarP(&opts.action, "action", "a", "", "Action to grant/deny permission on (e.g. get, create, list, update, delete)")
command.Flags().StringVarP(&opts.permission, "permission", "p", "allow", "Whether to allow or deny access to object with the action. This can only be 'allow' or 'deny'")
command.Flags().StringVarP(&opts.object, "object", "o", "", "Object within the project to grant/deny access to. Use '*' for a wildcard. Access is granted/denied to '<project>/<object>'")
}
// NewProjectRoleCommand returns a new instance of the `argocd proj role` command
func NewProjectRoleCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
roleCommand := &cobra.Command{
Use: "role",
Short: "Manage a project's roles",
Run: func(c *cobra.Command, args []string) {
c.HelpFunc()(c, args)
os.Exit(1)
},
}
roleCommand.AddCommand(NewProjectRoleListCommand(clientOpts))
roleCommand.AddCommand(NewProjectRoleGetCommand(clientOpts))
roleCommand.AddCommand(NewProjectRoleCreateCommand(clientOpts))
roleCommand.AddCommand(NewProjectRoleDeleteCommand(clientOpts))
roleCommand.AddCommand(NewProjectRoleCreateTokenCommand(clientOpts))
roleCommand.AddCommand(NewProjectRoleDeleteTokenCommand(clientOpts))
roleCommand.AddCommand(NewProjectRoleAddPolicyCommand(clientOpts))
roleCommand.AddCommand(NewProjectRoleRemovePolicyCommand(clientOpts))
return roleCommand
}
// NewProjectRoleAddPolicyCommand returns a new instance of an `argocd proj role add-policy` command
func NewProjectRoleAddPolicyCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
opts policyOpts
)
var command = &cobra.Command{
Use: "add-policy PROJECT ROLE-NAME",
Short: "Add a policy to a project role",
Run: func(c *cobra.Command, args []string) {
if len(args) != 2 {
c.HelpFunc()(c, args)
os.Exit(1)
}
if len(opts.action) <= 0 {
log.Fatal("Action needs to longer than 0 characters")
}
if len(opts.object) <= 0 {
log.Fatal("Objects needs to longer than 0 characters")
}
projName := args[0]
roleName := args[1]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
roleIndex, err := projectutil.GetRoleIndexByName(proj, roleName)
if err != nil {
log.Fatal(err)
}
role := proj.Spec.Roles[roleIndex]
policy := fmt.Sprintf(policyTemplate, proj.Name, role.Name, opts.action, proj.Name, opts.object, opts.permission)
proj.Spec.Roles[roleIndex].Policies = append(role.Policies, policy)
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
},
}
addPolicyFlags(command, &opts)
return command
}
// NewProjectRoleRemovePolicyCommand returns a new instance of an `argocd proj role remove-policy` command
func NewProjectRoleRemovePolicyCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
opts policyOpts
)
var command = &cobra.Command{
Use: "remove-policy PROJECT ROLE-NAME",
Short: "Remove a policy from a role within a project",
Run: func(c *cobra.Command, args []string) {
if len(args) != 2 {
c.HelpFunc()(c, args)
os.Exit(1)
}
if opts.permission != "allow" && opts.permission != "deny" {
log.Fatal("Permission flag can only have the values 'allow' or 'deny'")
}
if len(opts.action) <= 0 {
log.Fatal("Action needs to longer than 0 characters")
}
if len(opts.object) <= 0 {
log.Fatal("Objects needs to longer than 0 characters")
}
projName := args[0]
roleName := args[1]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
roleIndex, err := projectutil.GetRoleIndexByName(proj, roleName)
if err != nil {
log.Fatal(err)
}
role := proj.Spec.Roles[roleIndex]
policyToRemove := fmt.Sprintf(policyTemplate, proj.Name, role.Name, opts.action, proj.Name, opts.object, opts.permission)
duplicateIndex := -1
for i, policy := range role.Policies {
if policy == policyToRemove {
duplicateIndex = i
break
}
}
if duplicateIndex < 0 {
return
}
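// Delete with the swap-with-last idiom: the matched policy is overwritten by the
// last element and the slice is truncated, so policy order is not preserved.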
role.Policies[duplicateIndex] = role.Policies[len(role.Policies)-1]
proj.Spec.Roles[roleIndex].Policies = role.Policies[:len(role.Policies)-1]
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
},
}
addPolicyFlags(command, &opts)
return command
}
// NewProjectRoleCreateCommand returns a new instance of an `argocd proj role create` command
func NewProjectRoleCreateCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
description string
)
var command = &cobra.Command{
Use: "create PROJECT ROLE-NAME",
Short: "Create a project role",
Run: func(c *cobra.Command, args []string) {
if len(args) != 2 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
roleName := args[1]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
_, err = projectutil.GetRoleIndexByName(proj, roleName)
if err == nil {
return
}
proj.Spec.Roles = append(proj.Spec.Roles, v1alpha1.ProjectRole{Name: roleName, Description: description})
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
},
}
command.Flags().StringVarP(&description, "description", "", "", "Project description")
return command
}
// NewProjectRoleDeleteCommand returns a new instance of an `argocd proj role delete` command
func NewProjectRoleDeleteCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "delete PROJECT ROLE-NAME",
Short: "Delete a project role",
Run: func(c *cobra.Command, args []string) {
if len(args) != 2 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
roleName := args[1]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
index, err := projectutil.GetRoleIndexByName(proj, roleName)
if err != nil {
return
}
proj.Spec.Roles[index] = proj.Spec.Roles[len(proj.Spec.Roles)-1]
proj.Spec.Roles = proj.Spec.Roles[:len(proj.Spec.Roles)-1]
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
},
}
return command
}
// NewProjectRoleCreateTokenCommand returns a new instance of an `argocd proj role create-token` command
func NewProjectRoleCreateTokenCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
expiresIn string
)
var command = &cobra.Command{
Use: "create-token PROJECT ROLE-NAME",
Short: "Create a project token",
Run: func(c *cobra.Command, args []string) {
if len(args) != 2 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
roleName := args[1]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
duration, err := timeutil.ParseDuration(expiresIn)
errors.CheckError(err)
token, err := projIf.CreateToken(context.Background(), &project.ProjectTokenCreateRequest{Project: projName, Role: roleName, ExpiresIn: int64(duration.Seconds())})
errors.CheckError(err)
fmt.Println(token.Token)
},
}
command.Flags().StringVarP(&expiresIn, "expires-in", "e", "0s", "Duration before the token will expire. (Default: No expiration)")
return command
}
// NewProjectRoleDeleteTokenCommand returns a new instance of an `argocd proj role delete-token` command
func NewProjectRoleDeleteTokenCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "delete-token PROJECT ROLE-NAME ISSUED-AT",
Short: "Delete a project token",
Run: func(c *cobra.Command, args []string) {
if len(args) != 3 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
roleName := args[1]
issuedAt, err := strconv.ParseInt(args[2], 10, 64)
if err != nil {
log.Fatal(err)
}
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
_, err = projIf.DeleteToken(context.Background(), &project.ProjectTokenDeleteRequest{Project: projName, Role: roleName, Iat: issuedAt})
errors.CheckError(err)
},
}
return command
}
// NewProjectRoleListCommand returns a new instance of an `argocd proj role list` command
func NewProjectRoleListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "list PROJECT",
Short: "List all the roles in a project",
Run: func(c *cobra.Command, args []string) {
if len(args) != 1 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
project, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "ROLE-NAME\tDESCRIPTION\n")
for _, role := range project.Spec.Roles {
fmt.Fprintf(w, "%s\t%s\n", role.Name, role.Description)
}
_ = w.Flush()
},
}
return command
}
// NewProjectRoleGetCommand returns a new instance of an `argocd proj role get` command
func NewProjectRoleGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "get PROJECT ROLE-NAME",
Short: "Get the details of a specific role",
Run: func(c *cobra.Command, args []string) {
if len(args) != 2 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
roleName := args[1]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
project, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
index, err := projectutil.GetRoleIndexByName(project, roleName)
errors.CheckError(err)
role := project.Spec.Roles[index]
printRoleFmtStr := "%-15s%s\n"
fmt.Printf(printRoleFmtStr, "Role Name:", roleName)
fmt.Printf(printRoleFmtStr, "Description:", role.Description)
fmt.Printf("Policies:\n")
fmt.Printf("%s\n", project.ProjectPoliciesString())
fmt.Printf("JWT Tokens:\n")
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "ID\tISSUED-AT\tEXPIRES-AT\n")
for _, token := range role.JWTTokens {
expiresAt := "<none>"
if token.ExpiresAt > 0 {
expiresAt = humanizeTimestamp(token.ExpiresAt)
}
fmt.Fprintf(w, "%d\t%s\t%s\n", token.IssuedAt, humanizeTimestamp(token.IssuedAt), expiresAt)
}
_ = w.Flush()
},
}
return command
}
func humanizeTimestamp(epoch int64) string {
ts := time.Unix(epoch, 0)
return fmt.Sprintf("%s (%s)", ts.Format(time.RFC3339), humanize.Time(ts))
}
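// For example, with a hypothetical epoch, humanizeTimestamp(1537964181) prints
// something like "2018-09-26T12:16:21Z (2 days ago)"; the exact output depends on
// the local timezone and the current time.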
// NewProjectCreateCommand returns a new instance of an `argocd proj create` command
func NewProjectCreateCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
opts projectOpts
)
var command = &cobra.Command{
Use: "create PROJECT",
Short: "Create a project",
Run: func(c *cobra.Command, args []string) {
if len(args) == 0 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
proj := v1alpha1.AppProject{
ObjectMeta: v1.ObjectMeta{Name: projName},
Spec: v1alpha1.AppProjectSpec{
Description: opts.description,
Destinations: opts.GetDestinations(),
SourceRepos: opts.sources,
},
}
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
_, err := projIf.Create(context.Background(), &project.ProjectCreateRequest{Project: &proj})
errors.CheckError(err)
},
}
addProjFlags(command, &opts)
return command
}
// NewProjectSetCommand returns a new instance of an `argocd proj set` command
func NewProjectSetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
opts projectOpts
)
var command = &cobra.Command{
Use: "set PROJECT",
Short: "Set project parameters",
Run: func(c *cobra.Command, args []string) {
if len(args) == 0 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
visited := 0
c.Flags().Visit(func(f *pflag.Flag) {
visited++
switch f.Name {
case "description":
proj.Spec.Description = opts.description
case "dest":
proj.Spec.Destinations = opts.GetDestinations()
case "src":
proj.Spec.SourceRepos = opts.sources
}
})
if visited == 0 {
log.Error("Please set at least one option to update")
c.HelpFunc()(c, args)
os.Exit(1)
}
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
},
}
addProjFlags(command, &opts)
return command
}
// NewProjectAddDestinationCommand returns a new instance of an `argocd proj add-destination` command
func NewProjectAddDestinationCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "add-destination PROJECT SERVER NAMESPACE",
Short: "Add project destination",
Run: func(c *cobra.Command, args []string) {
if len(args) != 3 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
server := args[1]
namespace := args[2]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
for _, dest := range proj.Spec.Destinations {
if dest.Namespace == namespace && dest.Server == server {
log.Fatal("Specified destination is already defined in project")
}
}
proj.Spec.Destinations = append(proj.Spec.Destinations, v1alpha1.ApplicationDestination{Server: server, Namespace: namespace})
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
},
}
return command
}
// NewProjectRemoveDestinationCommand returns a new instance of an `argocd proj remove-destination` command
func NewProjectRemoveDestinationCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "remove-destination PROJECT SERVER NAMESPACE",
Short: "Remove project destination",
Run: func(c *cobra.Command, args []string) {
if len(args) != 3 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
server := args[1]
namespace := args[2]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
index := -1
for i, dest := range proj.Spec.Destinations {
if dest.Namespace == namespace && dest.Server == server {
index = i
break
}
}
if index == -1 {
log.Fatal("Specified destination does not exist in project")
} else {
proj.Spec.Destinations = append(proj.Spec.Destinations[:index], proj.Spec.Destinations[index+1:]...)
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
}
},
}
return command
}
// NewProjectAddSourceCommand returns a new instance of an `argocd proj add-src` command
func NewProjectAddSourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "add-source PROJECT URL",
Short: "Add project source repository",
Run: func(c *cobra.Command, args []string) {
if len(args) != 2 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
url := args[1]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
for _, item := range proj.Spec.SourceRepos {
if item == "*" && item == url {
log.Info("Wildcard source repository is already defined in project")
return
}
if item == git.NormalizeGitURL(url) {
log.Info("Specified source repository is already defined in project")
return
}
}
proj.Spec.SourceRepos = append(proj.Spec.SourceRepos, url)
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
},
}
return command
}
func modifyProjectResourceCmd(cmdUse, cmdDesc string, clientOpts *argocdclient.ClientOptions, action func(proj *v1alpha1.AppProject, group string, kind string) bool) *cobra.Command {
return &cobra.Command{
Use: cmdUse,
Short: cmdDesc,
Run: func(c *cobra.Command, args []string) {
if len(args) != 3 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName, group, kind := args[0], args[1], args[2]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
if action(proj, group, kind) {
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
}
},
}
}
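// The action callback passed to modifyProjectResourceCmd mutates the project in
// place and returns true only when the project actually changed and an Update call
// should be issued.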
// NewProjectAllowNamespaceResourceCommand returns a new instance of an `argocd proj allow-namespace-resource` command
func NewProjectAllowNamespaceResourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
use := "allow-namespace-resource PROJECT group kind"
desc := "Removes namespaced resource from black list"
return modifyProjectResourceCmd(use, desc, clientOpts, func(proj *v1alpha1.AppProject, group string, kind string) bool {
index := -1
for i, item := range proj.Spec.NamespaceResourceBlacklist {
if item.Group == group && item.Kind == kind {
index = i
break
}
}
if index == -1 {
log.Info("Specified cluster resource is not blacklisted")
return false
}
proj.Spec.NamespaceResourceBlacklist = append(proj.Spec.NamespaceResourceBlacklist[:index], proj.Spec.NamespaceResourceBlacklist[index+1:]...)
return true
})
}
// NewProjectDenyNamespaceResourceCommand returns a new instance of an `argocd proj deny-namespace-resource` command
func NewProjectDenyNamespaceResourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
use := "deny-namespace-resource PROJECT group kind"
desc := "Adds namespaced resource to black list"
return modifyProjectResourceCmd(use, desc, clientOpts, func(proj *v1alpha1.AppProject, group string, kind string) bool {
for _, item := range proj.Spec.NamespaceResourceBlacklist {
if item.Group == group && item.Kind == kind {
log.Infof("Group '%s' and kind '%s' are already blacklisted in project", item.Group, item.Kind)
return false
}
}
proj.Spec.NamespaceResourceBlacklist = append(proj.Spec.NamespaceResourceBlacklist, v1.GroupKind{Group: group, Kind: kind})
return true
})
}
// NewProjectDenyClusterResourceCommand returns a new instance of an `argocd proj deny-cluster-resource` command
func NewProjectDenyClusterResourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
use := "deny-cluster-resource PROJECT group kind"
desc := "Adds cluster wide resource to white list"
return modifyProjectResourceCmd(use, desc, clientOpts, func(proj *v1alpha1.AppProject, group string, kind string) bool {
index := -1
for i, item := range proj.Spec.ClusterResourceWhitelist {
if item.Group == group && item.Kind == kind {
index = i
break
}
}
if index == -1 {
log.Info("Specified cluster resource already denied in project")
return false
}
proj.Spec.ClusterResourceWhitelist = append(proj.Spec.ClusterResourceWhitelist[:index], proj.Spec.ClusterResourceWhitelist[index+1:]...)
return true
})
}
// NewProjectAllowClusterResourceCommand returns a new instance of an `argocd proj allow-cluster-resource` command
func NewProjectAllowClusterResourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
use := "allow-cluster-resource PROJECT group kind"
desc := "Removed cluster wide resource from white list"
return modifyProjectResourceCmd(use, desc, clientOpts, func(proj *v1alpha1.AppProject, group string, kind string) bool {
for _, item := range proj.Spec.ClusterResourceWhitelist {
if item.Group == group && item.Kind == kind {
log.Infof("Group '%s' and kind '%s' are already whitelisted in project", item.Group, item.Kind)
return false
}
}
proj.Spec.ClusterResourceWhitelist = append(proj.Spec.ClusterResourceWhitelist, v1.GroupKind{Group: group, Kind: kind})
return true
})
}
// NewProjectRemoveSourceCommand returns a new instance of an `argocd proj remove-src` command
func NewProjectRemoveSourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "remove-source PROJECT URL",
Short: "Remove project source repository",
Run: func(c *cobra.Command, args []string) {
if len(args) != 2 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
url := args[1]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
index := -1
for i, item := range proj.Spec.SourceRepos {
if item == "*" && item == url {
index = i
break
}
if item == git.NormalizeGitURL(url) {
index = i
break
}
}
if index == -1 {
log.Info("Specified source repository does not exist in project")
} else {
proj.Spec.SourceRepos = append(proj.Spec.SourceRepos[:index], proj.Spec.SourceRepos[index+1:]...)
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
}
},
}
return command
}
// NewProjectDeleteCommand returns a new instance of an `argocd proj delete` command
func NewProjectDeleteCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "delete PROJECT",
Short: "Delete project",
Run: func(c *cobra.Command, args []string) {
if len(args) == 0 {
c.HelpFunc()(c, args)
os.Exit(1)
}
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
for _, name := range args {
_, err := projIf.Delete(context.Background(), &project.ProjectQuery{Name: name})
errors.CheckError(err)
}
},
}
return command
}
// NewProjectListCommand returns a new instance of an `argocd proj list` command
func NewProjectListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "list",
Short: "List projects",
Run: func(c *cobra.Command, args []string) {
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
projects, err := projIf.List(context.Background(), &project.ProjectQuery{})
errors.CheckError(err)
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "NAME\tDESCRIPTION\tDESTINATIONS\tSOURCES\tCLUSTER-RESOURCE-WHITELIST\tNAMESPACE-RESOURCE-BLACKLIST\n")
for _, p := range projects.Items {
fmt.Fprintf(w, "%s\t%s\t%v\t%v\t%v\t%v\n", p.Name, p.Spec.Description, p.Spec.Destinations, p.Spec.SourceRepos, p.Spec.ClusterResourceWhitelist, p.Spec.NamespaceResourceBlacklist)
}
_ = w.Flush()
},
}
return command
}

View File

@@ -0,0 +1,75 @@
package commands
import (
"fmt"
"os"
jwt "github.com/dgrijalva/jwt-go"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
"github.com/argoproj/argo-cd/util/localconfig"
"github.com/argoproj/argo-cd/util/session"
)
// NewReloginCommand returns a new instance of `argocd relogin` command
func NewReloginCommand(globalClientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
password string
)
var command = &cobra.Command{
Use: "relogin",
Short: "Refresh an expired authenticate token",
Long: "Refresh an expired authenticate token",
Run: func(c *cobra.Command, args []string) {
if len(args) != 0 {
c.HelpFunc()(c, args)
os.Exit(1)
}
localCfg, err := localconfig.ReadLocalConfig(globalClientOpts.ConfigPath)
errors.CheckError(err)
if localCfg == nil {
log.Fatalf("No context found. Login using `argocd login`")
}
configCtx, err := localCfg.ResolveContext(localCfg.CurrentContext)
errors.CheckError(err)
parser := &jwt.Parser{
SkipClaimsValidation: true,
}
claims := jwt.StandardClaims{}
_, _, err = parser.ParseUnverified(configCtx.User.AuthToken, &claims)
errors.CheckError(err)
var tokenString string
var refreshToken string
if claims.Issuer == session.SessionManagerClaimsIssuer {
clientOpts := argocdclient.ClientOptions{
ConfigPath: "",
ServerAddr: configCtx.Server.Server,
Insecure: configCtx.Server.Insecure,
PlainText: configCtx.Server.PlainText,
}
acdClient := argocdclient.NewClientOrDie(&clientOpts)
fmt.Printf("Relogging in as '%s'\n", claims.Subject)
tokenString = passwordLogin(acdClient, claims.Subject, password)
} else {
fmt.Println("Reinitiating SSO login")
tokenString, refreshToken = oauth2Login(configCtx.Server.Server, configCtx.Server.PlainText)
}
localCfg.UpsertUser(localconfig.User{
Name: localCfg.CurrentContext,
AuthToken: tokenString,
RefreshToken: refreshToken,
})
err = localconfig.WriteLocalConfig(*localCfg, globalClientOpts.ConfigPath)
errors.CheckError(err)
fmt.Printf("Context '%s' updated\n", localCfg.CurrentContext)
},
}
command.Flags().StringVar(&password, "password", "", "the password of an account to authenticate")
return command
}

View File

@@ -7,6 +7,9 @@ import (
"os"
"text/tabwriter"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
appsv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
@@ -14,8 +17,6 @@ import (
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/git"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
)
// NewRepoCommand returns a new instance of an `argocd repo` command
@@ -39,6 +40,7 @@ func NewRepoCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
func NewRepoAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
repo appsv1.Repository
upsert bool
sshPrivateKeyPath string
)
var command = &cobra.Command{
@@ -57,27 +59,36 @@ func NewRepoAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
}
repo.SSHPrivateKey = string(keyData)
}
err := git.TestRepo(repo.Repo, repo.Username, repo.Password, repo.SSHPrivateKey)
// First test the repo *without* username/password. This gives us a hint on whether this
// is a private repo.
// NOTE: it is important not to run git commands to test git credentials on the user's
// system since it may mess with their git credential store (e.g. osx keychain).
// See issue #315
err := git.TestRepo(repo.Repo, "", "", repo.SSHPrivateKey)
if err != nil {
if repo.Username != "" && repo.Password != "" || git.IsSSHURL(repo.Repo) {
// if everything was supplied or repo URL is SSH url, one of the inputs was definitely bad
if git.IsSSHURL(repo.Repo) {
// If we failed using git SSH credentials, then the repo is automatically bad
log.Fatal(err)
}
// If we can't test the repo, it's probably private. Prompt for credentials and try again.
// If we can't test the repo, it's probably private. Prompt for credentials and
// let the server test it.
repo.Username, repo.Password = cli.PromptCredentials(repo.Username, repo.Password)
err = git.TestRepo(repo.Repo, repo.Username, repo.Password, repo.SSHPrivateKey)
}
errors.CheckError(err)
conn, repoIf := argocdclient.NewClientOrDie(clientOpts).NewRepoClientOrDie()
defer util.Close(conn)
createdRepo, err := repoIf.Create(context.Background(), &repo)
repoCreateReq := repository.RepoCreateRequest{
Repo: &repo,
Upsert: upsert,
}
createdRepo, err := repoIf.Create(context.Background(), &repoCreateReq)
errors.CheckError(err)
fmt.Printf("repository '%s' added\n", createdRepo.Repo)
},
}
command.Flags().StringVar(&repo.Username, "username", "", "username to the repository")
command.Flags().StringVar(&repo.Password, "password", "", "password to the repository")
command.Flags().StringVar(&sshPrivateKeyPath, "sshPrivateKeyPath", "", "path to the private ssh key (e.g. ~/.ssh/id_rsa)")
command.Flags().StringVar(&sshPrivateKeyPath, "ssh-private-key-path", "", "path to the private ssh key (e.g. ~/.ssh/id_rsa)")
command.Flags().BoolVar(&upsert, "upsert", false, "Override an existing repository with the same name even if the spec differs")
return command
}
@@ -113,9 +124,9 @@ func NewRepoListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
repos, err := repoIf.List(context.Background(), &repository.RepoQuery{})
errors.CheckError(err)
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "REPO\tUSER\tMESSAGE\n")
fmt.Fprintf(w, "REPO\tUSER\tSTATUS\tMESSAGE\n")
for _, r := range repos.Items {
fmt.Fprintf(w, "%s\t%s\t%s\n", r.Repo, r.Username, r.Message)
fmt.Fprintf(w, "%s\t%s\t%s\t%s\n", r.Repo, r.Username, r.ConnectionState.Status, r.ConnectionState.Message)
}
_ = w.Flush()
},

View File

@@ -27,10 +27,11 @@ func NewCommand() *cobra.Command {
command.AddCommand(NewClusterCommand(&clientOpts, pathOpts))
command.AddCommand(NewApplicationCommand(&clientOpts))
command.AddCommand(NewLoginCommand(&clientOpts))
command.AddCommand(NewReloginCommand(&clientOpts))
command.AddCommand(NewRepoCommand(&clientOpts))
command.AddCommand(NewInstallCommand())
command.AddCommand(NewUninstallCommand())
command.AddCommand(NewContextCommand(&clientOpts))
command.AddCommand(NewProjectCommand(&clientOpts))
command.AddCommand(NewAccountCommand(&clientOpts))
defaultLocalConfigPath, err := localconfig.DefaultLocalConfigPath()
errors.CheckError(err)

View File

@@ -1,40 +0,0 @@
package commands
import (
"github.com/argoproj/argo-cd/errors"
"github.com/argoproj/argo-cd/install"
"github.com/argoproj/argo-cd/util/cli"
"github.com/spf13/cobra"
"k8s.io/client-go/tools/clientcmd"
)
// NewUninstallCommand returns a new instance of the `argocd uninstall` command
func NewUninstallCommand() *cobra.Command {
var (
clientConfig clientcmd.ClientConfig
installOpts install.InstallOptions
deleteNamespace bool
deleteCRD bool
)
var command = &cobra.Command{
Use: "uninstall",
Short: "Uninstall Argo CD",
Long: "Uninstall Argo CD",
Run: func(c *cobra.Command, args []string) {
conf, err := clientConfig.ClientConfig()
errors.CheckError(err)
namespace, wasSpecified, err := clientConfig.Namespace()
errors.CheckError(err)
if wasSpecified {
installOpts.Namespace = namespace
}
installer, err := install.NewInstaller(conf, installOpts)
errors.CheckError(err)
installer.Uninstall(deleteNamespace, deleteCRD)
},
}
clientConfig = cli.AddKubectlFlagsToCmd(command)
command.Flags().BoolVar(&deleteNamespace, "delete-namespace", false, "Also delete the namespace during uninstall")
command.Flags().BoolVar(&deleteCRD, "delete-crd", false, "Also delete the Application CRD during uninstall")
return command
}

View File

@@ -54,6 +54,7 @@ func NewVersionCmd(clientOpts *argocdclient.ClientOptions) *cobra.Command {
fmt.Printf(" GoVersion: %s\n", serverVers.GoVersion)
fmt.Printf(" Compiler: %s\n", serverVers.Compiler)
fmt.Printf(" Platform: %s\n", serverVers.Platform)
fmt.Printf(" Ksonnet Version: %s\n", serverVers.KsonnetVersion)
}
},

View File

@@ -1,8 +1,9 @@
package common
import (
"github.com/argoproj/argo-cd/pkg/apis/application"
rbacv1 "k8s.io/api/rbac/v1"
"github.com/argoproj/argo-cd/pkg/apis/application"
)
const (
@@ -19,12 +20,16 @@ const (
AuthCookieName = "argocd.token"
// ResourcesFinalizerName is the name of the application CRD finalizer
ResourcesFinalizerName = "resources-finalizer." + MetadataPrefix
// KubernetesInternalAPIServerAddr is the address of the k8s API server when accessed from within the cluster
KubernetesInternalAPIServerAddr = "https://kubernetes.default.svc"
)
const (
ArgoCDAdminUsername = "admin"
ArgoCDSecretName = "argocd-secret"
ArgoCDConfigMapName = "argocd-cm"
ArgoCDAdminUsername = "admin"
ArgoCDSecretName = "argocd-secret"
ArgoCDConfigMapName = "argocd-cm"
ArgoCDRBACConfigMapName = "argocd-rbac-cm"
)
const (
@@ -44,6 +49,10 @@ const (
ArgoCDCLIClientAppID = "argo-cd-cli"
// EnvVarSSODebug is an environment variable to enable additional OAuth debugging in the API server
EnvVarSSODebug = "ARGOCD_SSO_DEBUG"
// EnvVarRBACDebug is an environment variable to enable additional RBAC debugging in the API server
EnvVarRBACDebug = "ARGOCD_RBAC_DEBUG"
// DefaultAppProjectName contains the name of the default app project. The default app project allows deploying applications to any cluster.
DefaultAppProjectName = "default"
)
var (
@@ -53,6 +62,20 @@ var (
// LabelKeySecretType contains the type of argocd secret (either 'cluster' or 'repo')
LabelKeySecretType = MetadataPrefix + "/secret-type"
// AnnotationConnectionStatus contains the connection status
AnnotationConnectionStatus = MetadataPrefix + "/connection-status"
// AnnotationConnectionMessage contains additional information about connection status
AnnotationConnectionMessage = MetadataPrefix + "/connection-message"
// AnnotationConnectionModifiedAt contains the timestamp when the connection state was last modified
AnnotationConnectionModifiedAt = MetadataPrefix + "/connection-modified-at"
// AnnotationHook contains the hook type of a resource
AnnotationHook = MetadataPrefix + "/hook"
// AnnotationHookDeletePolicy is the policy of deleting a hook
AnnotationHookDeletePolicy = MetadataPrefix + "/hook-delete-policy"
// AnnotationHelmHook is the helm hook annotation
AnnotationHelmHook = "helm.sh/hook"
// LabelKeyApplicationControllerInstanceID is the label which allows separating applications among multiple running application controllers.
LabelKeyApplicationControllerInstanceID = application.ApplicationFullName + "/controller-instanceid"
@@ -79,4 +102,8 @@ var ArgoCDManagerPolicyRules = []rbacv1.PolicyRule{
Resources: []string{"*"},
Verbs: []string{"*"},
},
{
NonResourceURLs: []string{"*"},
Verbs: []string{"*"},
},
}

View File

@@ -4,7 +4,6 @@ import (
"fmt"
"time"
"github.com/argoproj/argo-cd/errors"
log "github.com/sirupsen/logrus"
apiv1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
@@ -12,7 +11,6 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
)
// CreateServiceAccount creates a service account
@@ -20,7 +18,7 @@ func CreateServiceAccount(
clientset kubernetes.Interface,
serviceAccountName string,
namespace string,
) {
) error {
serviceAccount := apiv1.ServiceAccount{
TypeMeta: metav1.TypeMeta{
APIVersion: "v1",
@@ -34,12 +32,13 @@ func CreateServiceAccount(
_, err := clientset.CoreV1().ServiceAccounts(namespace).Create(&serviceAccount)
if err != nil {
if !apierr.IsAlreadyExists(err) {
log.Fatalf("Failed to create service account '%s': %v\n", serviceAccountName, err)
return fmt.Errorf("Failed to create service account %q: %v", serviceAccountName, err)
}
fmt.Printf("ServiceAccount '%s' already exists\n", serviceAccountName)
return
log.Infof("ServiceAccount %q already exists", serviceAccountName)
return nil
}
fmt.Printf("ServiceAccount '%s' created\n", serviceAccountName)
log.Infof("ServiceAccount %q created", serviceAccountName)
return nil
}
// CreateClusterRole creates a cluster role
@@ -47,7 +46,7 @@ func CreateClusterRole(
clientset kubernetes.Interface,
clusterRoleName string,
rules []rbacv1.PolicyRule,
) {
) error {
clusterRole := rbacv1.ClusterRole{
TypeMeta: metav1.TypeMeta{
APIVersion: "rbac.authorization.k8s.io/v1",
@@ -62,16 +61,17 @@ func CreateClusterRole(
_, err := crclient.Create(&clusterRole)
if err != nil {
if !apierr.IsAlreadyExists(err) {
log.Fatalf("Failed to create ClusterRole '%s': %v\n", clusterRoleName, err)
return fmt.Errorf("Failed to create ClusterRole %q: %v", clusterRoleName, err)
}
_, err = crclient.Update(&clusterRole)
if err != nil {
log.Fatalf("Failed to update ClusterRole '%s': %v\n", clusterRoleName, err)
return fmt.Errorf("Failed to update ClusterRole %q: %v", clusterRoleName, err)
}
fmt.Printf("ClusterRole '%s' updated\n", clusterRoleName)
log.Infof("ClusterRole %q updated", clusterRoleName)
} else {
fmt.Printf("ClusterRole '%s' created\n", clusterRoleName)
log.Infof("ClusterRole %q created", clusterRoleName)
}
return nil
}
// CreateClusterRoleBinding create a ClusterRoleBinding
@@ -81,7 +81,7 @@ func CreateClusterRoleBinding(
serviceAccountName,
clusterRoleName string,
namespace string,
) {
) error {
roleBinding := rbacv1.ClusterRoleBinding{
TypeMeta: metav1.TypeMeta{
APIVersion: "rbac.authorization.k8s.io/v1",
@@ -106,22 +106,34 @@ func CreateClusterRoleBinding(
_, err := clientset.RbacV1().ClusterRoleBindings().Create(&roleBinding)
if err != nil {
if !apierr.IsAlreadyExists(err) {
log.Fatalf("Failed to create ClusterRoleBinding %s: %v\n", clusterBindingRoleName, err)
return fmt.Errorf("Failed to create ClusterRoleBinding %s: %v", clusterBindingRoleName, err)
}
fmt.Printf("ClusterRoleBinding '%s' already exists\n", clusterBindingRoleName)
return
log.Infof("ClusterRoleBinding %q already exists", clusterBindingRoleName)
return nil
}
fmt.Printf("ClusterRoleBinding '%s' created, bound '%s' to '%s'\n", clusterBindingRoleName, serviceAccountName, clusterRoleName)
log.Infof("ClusterRoleBinding %q created, bound %q to %q", clusterBindingRoleName, serviceAccountName, clusterRoleName)
return nil
}
// InstallClusterManagerRBAC installs RBAC resources for a cluster manager to operate a cluster. Returns a token
func InstallClusterManagerRBAC(conf *rest.Config) string {
func InstallClusterManagerRBAC(clientset kubernetes.Interface) (string, error) {
const ns = "kube-system"
clientset, err := kubernetes.NewForConfig(conf)
errors.CheckError(err)
CreateServiceAccount(clientset, ArgoCDManagerServiceAccount, ns)
CreateClusterRole(clientset, ArgoCDManagerClusterRole, ArgoCDManagerPolicyRules)
CreateClusterRoleBinding(clientset, ArgoCDManagerClusterRoleBinding, ArgoCDManagerServiceAccount, ArgoCDManagerClusterRole, ns)
var err error
err = CreateServiceAccount(clientset, ArgoCDManagerServiceAccount, ns)
if err != nil {
return "", err
}
err = CreateClusterRole(clientset, ArgoCDManagerClusterRole, ArgoCDManagerPolicyRules)
if err != nil {
return "", err
}
err = CreateClusterRoleBinding(clientset, ArgoCDManagerClusterRoleBinding, ArgoCDManagerServiceAccount, ArgoCDManagerClusterRole, ns)
if err != nil {
return "", err
}
var serviceAccount *apiv1.ServiceAccount
var secretName string
@@ -137,52 +149,51 @@ func InstallClusterManagerRBAC(conf *rest.Config) string {
return true, nil
})
if err != nil {
log.Fatalf("Failed to wait for service account secret: %v", err)
return "", fmt.Errorf("Failed to wait for service account secret: %v", err)
}
secret, err := clientset.CoreV1().Secrets(ns).Get(secretName, metav1.GetOptions{})
if err != nil {
log.Fatalf("Failed to retrieve secret '%s': %v", secretName, err)
return "", fmt.Errorf("Failed to retrieve secret %q: %v", secretName, err)
}
token, ok := secret.Data["token"]
if !ok {
log.Fatalf("Secret '%s' for service account '%s' did not have a token", secretName, serviceAccount)
return "", fmt.Errorf("Secret %q for service account %q did not have a token", secretName, serviceAccount)
}
return string(token)
return string(token), nil
}
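// Since InstallClusterManagerRBAC now takes a kubernetes.Interface instead of a
// *rest.Config, callers are expected to construct the clientset themselves, e.g.
// (sketch):
//   clientset, err := kubernetes.NewForConfig(conf)
//   errors.CheckError(err)
//   token, err := InstallClusterManagerRBAC(clientset)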
// UninstallClusterManagerRBAC removes RBAC resources for a cluster manager to operate a cluster
func UninstallClusterManagerRBAC(conf *rest.Config) {
clientset, err := kubernetes.NewForConfig(conf)
errors.CheckError(err)
UninstallRBAC(clientset, "kube-system", ArgoCDManagerClusterRoleBinding, ArgoCDManagerClusterRole, ArgoCDManagerServiceAccount)
func UninstallClusterManagerRBAC(clientset kubernetes.Interface) error {
return UninstallRBAC(clientset, "kube-system", ArgoCDManagerClusterRoleBinding, ArgoCDManagerClusterRole, ArgoCDManagerServiceAccount)
}
// UninstallRBAC uninstalls RBAC related resources for a binding, role, and service account
func UninstallRBAC(clientset kubernetes.Interface, namespace, bindingName, roleName, serviceAccount string) {
func UninstallRBAC(clientset kubernetes.Interface, namespace, bindingName, roleName, serviceAccount string) error {
if err := clientset.RbacV1().ClusterRoleBindings().Delete(bindingName, &metav1.DeleteOptions{}); err != nil {
if !apierr.IsNotFound(err) {
log.Fatalf("Failed to delete ClusterRoleBinding: %v\n", err)
return fmt.Errorf("Failed to delete ClusterRoleBinding: %v", err)
}
fmt.Printf("ClusterRoleBinding '%s' not found\n", bindingName)
log.Infof("ClusterRoleBinding %q not found", bindingName)
} else {
fmt.Printf("ClusterRoleBinding '%s' deleted\n", bindingName)
log.Infof("ClusterRoleBinding %q deleted", bindingName)
}
if err := clientset.RbacV1().ClusterRoles().Delete(roleName, &metav1.DeleteOptions{}); err != nil {
if !apierr.IsNotFound(err) {
log.Fatalf("Failed to delete ClusterRole: %v\n", err)
return fmt.Errorf("Failed to delete ClusterRole: %v", err)
}
fmt.Printf("ClusterRole '%s' not found\n", roleName)
log.Infof("ClusterRole %q not found", roleName)
} else {
fmt.Printf("ClusterRole '%s' deleted\n", roleName)
log.Infof("ClusterRole %q deleted", roleName)
}
if err := clientset.CoreV1().ServiceAccounts(namespace).Delete(serviceAccount, &metav1.DeleteOptions{}); err != nil {
if !apierr.IsNotFound(err) {
log.Fatalf("Failed to delete ServiceAccount: %v\n", err)
return fmt.Errorf("Failed to delete ServiceAccount: %v", err)
}
fmt.Printf("ServiceAccount '%s' in namespace '%s' not found\n", serviceAccount, namespace)
log.Infof("ServiceAccount %q in namespace %q not found", serviceAccount, namespace)
} else {
fmt.Printf("ServiceAccount '%s' deleted\n", serviceAccount)
log.Infof("ServiceAccount %q deleted", serviceAccount)
}
return nil
}

controller/appcontroller.go
View File

@@ -0,0 +1,847 @@
package controller
import (
"context"
"encoding/json"
"fmt"
"reflect"
"runtime/debug"
"sync"
"time"
log "github.com/sirupsen/logrus"
"k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/apimachinery/pkg/util/strategicpatch"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/apimachinery/pkg/watch"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/util/workqueue"
"github.com/argoproj/argo-cd/common"
appv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned"
appinformers "github.com/argoproj/argo-cd/pkg/client/informers/externalversions"
"github.com/argoproj/argo-cd/reposerver"
"github.com/argoproj/argo-cd/util/argo"
"github.com/argoproj/argo-cd/util/db"
"github.com/argoproj/argo-cd/util/health"
"github.com/argoproj/argo-cd/util/kube"
)
const (
watchResourcesRetryTimeout = 10 * time.Second
updateOperationStateTimeout = 1 * time.Second
)
// ApplicationController is the controller for application resources.
type ApplicationController struct {
namespace string
kubeClientset kubernetes.Interface
kubectl kube.Kubectl
applicationClientset appclientset.Interface
auditLogger *argo.AuditLogger
appRefreshQueue workqueue.RateLimitingInterface
appOperationQueue workqueue.RateLimitingInterface
appInformer cache.SharedIndexInformer
appStateManager AppStateManager
statusRefreshTimeout time.Duration
repoClientset reposerver.Clientset
db db.ArgoDB
forceRefreshApps map[string]bool
forceRefreshAppsMutex *sync.Mutex
}
type ApplicationControllerConfig struct {
InstanceID string
Namespace string
}
// NewApplicationController creates a new instance of ApplicationController.
func NewApplicationController(
namespace string,
kubeClientset kubernetes.Interface,
applicationClientset appclientset.Interface,
repoClientset reposerver.Clientset,
appResyncPeriod time.Duration,
) *ApplicationController {
db := db.NewDB(namespace, kubeClientset)
kubectlCmd := kube.KubectlCmd{}
appStateManager := NewAppStateManager(db, applicationClientset, repoClientset, namespace, kubectlCmd)
ctrl := ApplicationController{
namespace: namespace,
kubeClientset: kubeClientset,
kubectl: kubectlCmd,
applicationClientset: applicationClientset,
repoClientset: repoClientset,
appRefreshQueue: workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter()),
appOperationQueue: workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter()),
appStateManager: appStateManager,
db: db,
statusRefreshTimeout: appResyncPeriod,
forceRefreshApps: make(map[string]bool),
forceRefreshAppsMutex: &sync.Mutex{},
auditLogger: argo.NewAuditLogger(namespace, kubeClientset, "application-controller"),
}
ctrl.appInformer = ctrl.newApplicationInformer()
return &ctrl
}
// Run starts the Application CRD controller.
func (ctrl *ApplicationController) Run(ctx context.Context, statusProcessors int, operationProcessors int) {
defer runtime.HandleCrash()
defer ctrl.appRefreshQueue.ShutDown()
go ctrl.appInformer.Run(ctx.Done())
if !cache.WaitForCacheSync(ctx.Done(), ctrl.appInformer.HasSynced) {
log.Error("Timed out waiting for caches to sync")
return
}
go ctrl.watchAppsResources()
for i := 0; i < statusProcessors; i++ {
go wait.Until(func() {
for ctrl.processAppRefreshQueueItem() {
}
}, time.Second, ctx.Done())
}
for i := 0; i < operationProcessors; i++ {
go wait.Until(func() {
for ctrl.processAppOperationQueueItem() {
}
}, time.Second, ctx.Done())
}
<-ctx.Done()
}
func (ctrl *ApplicationController) forceAppRefresh(appName string) {
ctrl.forceRefreshAppsMutex.Lock()
defer ctrl.forceRefreshAppsMutex.Unlock()
ctrl.forceRefreshApps[appName] = true
}
func (ctrl *ApplicationController) isRefreshForced(appName string) bool {
ctrl.forceRefreshAppsMutex.Lock()
defer ctrl.forceRefreshAppsMutex.Unlock()
_, ok := ctrl.forceRefreshApps[appName]
if ok {
delete(ctrl.forceRefreshApps, appName)
}
return ok
}
// watchClusterResources watches for resource changes annotated with the application label on the specified cluster and schedules a corresponding app refresh.
func (ctrl *ApplicationController) watchClusterResources(ctx context.Context, item appv1.Cluster) {
retryUntilSucceed(func() (err error) {
defer func() {
if r := recover(); r != nil {
err = fmt.Errorf("Recovered from panic: %v\n", r)
}
}()
config := item.RESTConfig()
ch, err := kube.WatchResourcesWithLabel(ctx, config, "", common.LabelApplicationName)
if err != nil {
return err
}
for event := range ch {
eventObj := event.Object.(*unstructured.Unstructured)
objLabels := eventObj.GetLabels()
if objLabels == nil {
objLabels = make(map[string]string)
}
if appName, ok := objLabels[common.LabelApplicationName]; ok {
ctrl.forceAppRefresh(appName)
ctrl.appRefreshQueue.Add(ctrl.namespace + "/" + appName)
}
}
return fmt.Errorf("resource updates channel has closed")
}, fmt.Sprintf("watch app resources on %s", item.Server), ctx, watchResourcesRetryTimeout)
}
func isClusterHasApps(apps []interface{}, cluster *appv1.Cluster) bool {
for _, obj := range apps {
if app, ok := obj.(*appv1.Application); ok && app.Spec.Destination.Server == cluster.Server {
return true
}
}
return false
}
// watchAppsResources watches for resource changes annotated with the application label on all registered clusters and schedules corresponding app refreshes.
func (ctrl *ApplicationController) watchAppsResources() {
watchingClusters := make(map[string]struct {
cancel context.CancelFunc
cluster *appv1.Cluster
})
retryUntilSucceed(func() error {
clusterEventCallback := func(event *db.ClusterEvent) {
info, ok := watchingClusters[event.Cluster.Server]
hasApps := isClusterHasApps(ctrl.appInformer.GetStore().List(), event.Cluster)
// cluster resources must be watched only if the cluster has at least one app
if (event.Type == watch.Deleted || !hasApps) && ok {
info.cancel()
delete(watchingClusters, event.Cluster.Server)
} else if event.Type != watch.Deleted && !ok && hasApps {
ctx, cancel := context.WithCancel(context.Background())
watchingClusters[event.Cluster.Server] = struct {
cancel context.CancelFunc
cluster *appv1.Cluster
}{
cancel: cancel,
cluster: event.Cluster,
}
go ctrl.watchClusterResources(ctx, *event.Cluster)
}
}
onAppModified := func(obj interface{}) {
if app, ok := obj.(*appv1.Application); ok {
var cluster *appv1.Cluster
info, infoOk := watchingClusters[app.Spec.Destination.Server]
if infoOk {
cluster = info.cluster
} else {
cluster, _ = ctrl.db.GetCluster(context.Background(), app.Spec.Destination.Server)
}
if cluster != nil {
// trigger a cluster event every time an app is created/deleted to either start or stop watching resources
clusterEventCallback(&db.ClusterEvent{Cluster: cluster, Type: watch.Modified})
}
}
}
ctrl.appInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{AddFunc: onAppModified, DeleteFunc: onAppModified})
return ctrl.db.WatchClusters(context.Background(), clusterEventCallback)
}, "watch clusters", context.Background(), watchResourcesRetryTimeout)
<-context.Background().Done()
}
// retryUntilSucceed keeps retrying the given action with the specified timeout until the action succeeds or the specified context is done.
func retryUntilSucceed(action func() error, desc string, ctx context.Context, timeout time.Duration) {
ctxCompleted := false
go func() {
select {
case <-ctx.Done():
ctxCompleted = true
}
}()
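// ctxCompleted is flipped by the goroutine above; the loop below only checks it
// between attempts, so cancellation is observed after the current attempt (and its
// sleep) finishes rather than immediately.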
for {
err := action()
if err == nil {
return
}
if err != nil {
if ctxCompleted {
log.Infof("Stop retrying %s", desc)
return
} else {
log.Warnf("Failed to %s: %+v, retrying in %v", desc, err, timeout)
time.Sleep(timeout)
}
}
}
}
func (ctrl *ApplicationController) processAppOperationQueueItem() (processNext bool) {
appKey, shutdown := ctrl.appOperationQueue.Get()
if shutdown {
processNext = false
return
} else {
processNext = true
}
defer func() {
if r := recover(); r != nil {
log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
}
ctrl.appOperationQueue.Done(appKey)
}()
obj, exists, err := ctrl.appInformer.GetIndexer().GetByKey(appKey.(string))
if err != nil {
log.Errorf("Failed to get application '%s' from informer index: %+v", appKey, err)
return
}
if !exists {
// This happens after app was deleted, but the work queue still had an entry for it.
return
}
app, ok := obj.(*appv1.Application)
if !ok {
log.Warnf("Key '%s' in index is not an application", appKey)
return
}
if app.Operation != nil {
ctrl.processRequestedAppOperation(app)
} else if app.DeletionTimestamp != nil && app.CascadedDeletion() {
ctrl.finalizeApplicationDeletion(app)
}
return
}
func (ctrl *ApplicationController) finalizeApplicationDeletion(app *appv1.Application) {
logCtx := log.WithField("application", app.Name)
logCtx.Infof("Deleting resources")
// Get refreshed application info, since informer app copy might be stale
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.Namespace).Get(app.Name, metav1.GetOptions{})
if err != nil {
if !errors.IsNotFound(err) {
logCtx.Errorf("Unable to get refreshed application info prior deleting resources: %v", err)
}
return
}
clst, err := ctrl.db.GetCluster(context.Background(), app.Spec.Destination.Server)
if err == nil {
config := clst.RESTConfig()
err = kube.DeleteResourceWithLabel(config, app.Spec.Destination.Namespace, common.LabelApplicationName, app.Name)
if err == nil {
app.SetCascadedDeletion(false)
var patch []byte
patch, err = json.Marshal(map[string]interface{}{
"metadata": map[string]interface{}{
"finalizers": app.Finalizers,
},
})
if err == nil {
_, err = ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.Namespace).Patch(app.Name, types.MergePatchType, patch)
}
}
}
if err != nil {
ctrl.setAppCondition(app, appv1.ApplicationCondition{
Type: appv1.ApplicationConditionDeletionError,
Message: err.Error(),
})
message := fmt.Sprintf("Unable to delete application resources: %v", err)
ctrl.auditLogger.LogAppEvent(app, argo.EventInfo{Reason: argo.EventReasonStatusRefreshed, Type: v1.EventTypeWarning}, message)
} else {
logCtx.Info("Successfully deleted resources")
}
}
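// A sketch of the finalizer-clearing merge patch produced above, using a
// hypothetical finalizer name; in the real flow app.SetCascadedDeletion(false)
// is what removes the cascade finalizer from app.Finalizers.
func exampleFinalizerPatch() ([]byte, error) {
	finalizers := []string{"resources-finalizer.example.io"} // hypothetical name
	finalizers = finalizers[:0]                              // cascade finalizer removed
	// Marshals to {"metadata":{"finalizers":[]}}; applied with
	// types.MergePatchType, this overwrites the server-side finalizer list and
	// lets Kubernetes finish deleting the Application object.
	return json.Marshal(map[string]interface{}{
		"metadata": map[string]interface{}{"finalizers": finalizers},
	})
}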
func (ctrl *ApplicationController) setAppCondition(app *appv1.Application, condition appv1.ApplicationCondition) {
index := -1
for i, existing := range app.Status.Conditions {
if existing.Type == condition.Type {
index = i
break
}
}
if index > -1 {
app.Status.Conditions[index] = condition
} else {
app.Status.Conditions = append(app.Status.Conditions, condition)
}
var patch []byte
patch, err := json.Marshal(map[string]interface{}{
"status": map[string]interface{}{
"conditions": app.Status.Conditions,
},
})
if err == nil {
_, err = ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.Namespace).Patch(app.Name, types.MergePatchType, patch)
}
if err != nil {
log.Errorf("Unable to set application condition: %v", err)
}
}
func (ctrl *ApplicationController) processRequestedAppOperation(app *appv1.Application) {
logCtx := log.WithField("application", app.Name)
var state *appv1.OperationState
// Recover from any unexpected panics and automatically set the status to be failed
defer func() {
if r := recover(); r != nil {
logCtx.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
state.Phase = appv1.OperationError
if rerr, ok := r.(error); ok {
state.Message = rerr.Error()
} else {
state.Message = fmt.Sprintf("%v", r)
}
ctrl.setOperationState(app, state)
}
}()
if isOperationInProgress(app) {
// If we get here, we are about to process an operation, but we notice it is already in progress.
// We need to detect if the app object we pulled off the informer is stale and doesn't
// reflect the fact that the operation is completed. We don't want to perform the operation
// again. To detect this, always retrieve the latest version to ensure it is not stale.
freshApp, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(ctrl.namespace).Get(app.ObjectMeta.Name, metav1.GetOptions{})
if err != nil {
logCtx.Errorf("Failed to retrieve latest application state: %v", err)
return
}
if !isOperationInProgress(freshApp) {
logCtx.Infof("Skipping operation on stale application state")
return
}
app = freshApp
state = app.Status.OperationState.DeepCopy()
logCtx.Infof("Resuming in-progress operation. phase: %s, message: %s", state.Phase, state.Message)
} else {
state = &appv1.OperationState{Phase: appv1.OperationRunning, Operation: *app.Operation, StartedAt: metav1.Now()}
ctrl.setOperationState(app, state)
logCtx.Infof("Initialized new operation: %v", *app.Operation)
}
ctrl.appStateManager.SyncAppState(app, state)
if state.Phase == appv1.OperationRunning {
// It's possible for an app to be terminated while we were operating on it. We do not want
// to clobber the Terminated state with Running. Get the latest app state to check for this.
freshApp, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(ctrl.namespace).Get(app.ObjectMeta.Name, metav1.GetOptions{})
if err == nil {
if freshApp.Status.OperationState != nil && freshApp.Status.OperationState.Phase == appv1.OperationTerminating {
state.Phase = appv1.OperationTerminating
state.Message = "operation is terminating"
// after this, we will get requeued to the workqueue, but next time the
// SyncAppState will operate in a Terminating phase, allowing the worker to perform
// cleanup (e.g. delete jobs, workflows, etc...)
}
}
}
ctrl.setOperationState(app, state)
if state.Phase.Completed() {
// if we just completed an operation, force a refresh so that the UI reports
// up-to-date sync and health information
ctrl.forceAppRefresh(app.ObjectMeta.Name)
}
}
func (ctrl *ApplicationController) setOperationState(app *appv1.Application, state *appv1.OperationState) {
retryUntilSucceed(func() error {
if state.Phase == "" {
// expose any bugs where we neglect to set phase
panic("no phase was set")
}
if state.Phase.Completed() {
now := metav1.Now()
state.FinishedAt = &now
}
patch := map[string]interface{}{
"status": map[string]interface{}{
"operationState": state,
},
}
if state.Phase.Completed() {
// If operation is completed, clear the operation field to indicate no operation is
// in progress.
patch["operation"] = nil
}
if reflect.DeepEqual(app.Status.OperationState, state) {
log.Infof("No operation updates necessary to '%s'. Skipping patch", app.Name)
return nil
}
patchJSON, err := json.Marshal(patch)
if err != nil {
return err
}
appClient := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(ctrl.namespace)
_, err = appClient.Patch(app.Name, types.MergePatchType, patchJSON)
if err != nil {
return err
}
log.Infof("updated '%s' operation (phase: %s)", app.Name, state.Phase)
if state.Phase.Completed() {
eventInfo := argo.EventInfo{Reason: argo.EventReasonOperationCompleted}
var message string
if state.Phase.Successful() {
eventInfo.Type = v1.EventTypeNormal
message = "Operation succeeded"
} else {
eventInfo.Type = v1.EventTypeWarning
message = fmt.Sprintf("Operation failed: %v", state.Message)
}
ctrl.auditLogger.LogAppEvent(app, eventInfo, message)
}
return nil
}, "Update application operation state", context.Background(), updateOperationStateTimeout)
}
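// The patch above relies on JSON merge patch semantics (types.MergePatchType,
// RFC 7386): a field set to null is deleted server-side, which is how the
// completed operation is cleared. A minimal sketch:
func exampleOperationClearingPatch() ([]byte, error) {
	patch := map[string]interface{}{
		"status":    map[string]interface{}{"operationState": map[string]string{"phase": "Succeeded"}},
		"operation": nil, // marshals to JSON null; the merge patch deletes the "operation" field
	}
	// Yields {"operation":null,"status":{"operationState":{"phase":"Succeeded"}}}
	return json.Marshal(patch)
}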
func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext bool) {
appKey, shutdown := ctrl.appRefreshQueue.Get()
if shutdown {
processNext = false
return
}
processNext = true
defer func() {
if r := recover(); r != nil {
log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
}
ctrl.appRefreshQueue.Done(appKey)
}()
obj, exists, err := ctrl.appInformer.GetIndexer().GetByKey(appKey.(string))
if err != nil {
log.Errorf("Failed to get application '%s' from informer index: %+v", appKey, err)
return
}
if !exists {
// This happens after the app was deleted, but the work queue still had an entry for it.
return
}
app, ok := obj.(*appv1.Application)
if !ok {
log.Warnf("Key '%s' in index is not an application", appKey)
return
}
if !ctrl.needRefreshAppStatus(app, ctrl.statusRefreshTimeout) {
return
}
app = app.DeepCopy()
conditions, hasErrors := ctrl.refreshAppConditions(app)
if hasErrors {
comparisonResult := app.Status.ComparisonResult.DeepCopy()
comparisonResult.Status = appv1.ComparisonStatusUnknown
health := app.Status.Health.DeepCopy()
health.Status = appv1.HealthStatusUnknown
ctrl.updateAppStatus(app, comparisonResult, health, nil, conditions)
return
}
comparisonResult, manifestInfo, compConditions, err := ctrl.appStateManager.CompareAppState(app, "", nil)
if err != nil {
conditions = append(conditions, appv1.ApplicationCondition{Type: appv1.ApplicationConditionComparisonError, Message: err.Error()})
} else {
conditions = append(conditions, compConditions...)
}
var parameters []*appv1.ComponentParameter
if manifestInfo != nil {
parameters = manifestInfo.Params
}
healthState, err := setApplicationHealth(ctrl.kubectl, comparisonResult)
if err != nil {
conditions = append(conditions, appv1.ApplicationCondition{Type: appv1.ApplicationConditionComparisonError, Message: err.Error()})
}
syncErrCond := ctrl.autoSync(app, comparisonResult)
if syncErrCond != nil {
conditions = append(conditions, *syncErrCond)
}
ctrl.updateAppStatus(app, comparisonResult, healthState, parameters, conditions)
return
}
// needRefreshAppStatus answers whether the application status needs to be refreshed.
// Returns true if the application has never been compared, has changed, or the comparison result has expired.
func (ctrl *ApplicationController) needRefreshAppStatus(app *appv1.Application, statusRefreshTimeout time.Duration) bool {
logCtx := log.WithFields(log.Fields{"application": app.Name})
var reason string
expired := app.Status.ComparisonResult.ComparedAt.Add(statusRefreshTimeout).Before(time.Now().UTC())
if ctrl.isRefreshForced(app.Name) {
reason = "force refresh"
} else if app.Status.ComparisonResult.Status == appv1.ComparisonStatusUnknown && expired {
reason = "comparison status unknown"
} else if !app.Spec.Source.Equals(app.Status.ComparisonResult.ComparedTo) {
reason = "spec.source differs"
} else if expired {
reason = fmt.Sprintf("comparison expired. comparedAt: %v, expiry: %v", app.Status.ComparisonResult.ComparedAt, statusRefreshTimeout)
}
if reason != "" {
logCtx.Infof("Refreshing app status (%s)", reason)
return true
}
return false
}
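// The expiry test above is plain time arithmetic; a sketch with illustrative values:
func exampleComparisonExpiry() bool {
	comparedAt := time.Now().UTC().Add(-10 * time.Minute) // last comparison (illustrative)
	statusRefreshTimeout := 3 * time.Minute
	// true here: more than the refresh timeout has elapsed since comparedAt
	return comparedAt.Add(statusRefreshTimeout).Before(time.Now().UTC())
}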
func (ctrl *ApplicationController) refreshAppConditions(app *appv1.Application) ([]appv1.ApplicationCondition, bool) {
conditions := make([]appv1.ApplicationCondition, 0)
proj, err := argo.GetAppProject(&app.Spec, ctrl.applicationClientset, ctrl.namespace)
if err != nil {
if errors.IsNotFound(err) {
conditions = append(conditions, appv1.ApplicationCondition{
Type: appv1.ApplicationConditionInvalidSpecError,
Message: fmt.Sprintf("Application referencing project %s which does not exist", app.Spec.Project),
})
} else {
conditions = append(conditions, appv1.ApplicationCondition{
Type: appv1.ApplicationConditionUnknownError,
Message: err.Error(),
})
}
} else {
specConditions, err := argo.GetSpecErrors(context.Background(), &app.Spec, proj, ctrl.repoClientset, ctrl.db)
if err != nil {
conditions = append(conditions, appv1.ApplicationCondition{
Type: appv1.ApplicationConditionUnknownError,
Message: err.Error(),
})
} else {
conditions = append(conditions, specConditions...)
}
}
// List of condition types which have to be reevaluated by the controller; all remaining conditions should stay as-is.
reevaluateTypes := map[appv1.ApplicationConditionType]bool{
appv1.ApplicationConditionInvalidSpecError: true,
appv1.ApplicationConditionUnknownError: true,
appv1.ApplicationConditionComparisonError: true,
appv1.ApplicationConditionSharedResourceWarning: true,
appv1.ApplicationConditionSyncError: true,
}
appConditions := make([]appv1.ApplicationCondition, 0)
for i := 0; i < len(app.Status.Conditions); i++ {
condition := app.Status.Conditions[i]
if _, ok := reevaluateTypes[condition.Type]; !ok {
appConditions = append(appConditions, condition)
}
}
hasErrors := false
for i := range conditions {
condition := conditions[i]
appConditions = append(appConditions, condition)
if condition.IsError() {
hasErrors = true
}
}
return appConditions, hasErrors
}
// setApplicationHealth updates the health statuses of all resources examined in the comparison and returns the aggregated application health
func setApplicationHealth(kubectl kube.Kubectl, comparisonResult *appv1.ComparisonResult) (*appv1.HealthStatus, error) {
var savedErr error
appHealth := appv1.HealthStatus{Status: appv1.HealthStatusHealthy}
if comparisonResult.Status == appv1.ComparisonStatusUnknown {
appHealth.Status = appv1.HealthStatusUnknown
}
for i, resource := range comparisonResult.Resources {
if resource.LiveState == "null" {
resource.Health = appv1.HealthStatus{Status: appv1.HealthStatusMissing}
} else {
var obj unstructured.Unstructured
err := json.Unmarshal([]byte(resource.LiveState), &obj)
if err != nil {
return nil, err
}
healthState, err := health.GetAppHealth(kubectl, &obj)
if err != nil && savedErr == nil {
savedErr = err
}
if healthState == nil {
// guard against a nil dereference when the health assessment failed
healthState = &appv1.HealthStatus{Status: appv1.HealthStatusUnknown}
}
resource.Health = *healthState
}
comparisonResult.Resources[i] = resource
if health.IsWorse(appHealth.Status, resource.Health.Status) {
appHealth.Status = resource.Health.Status
}
}
return &appHealth, savedErr
}
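// A hypothetical sketch of the worst-status aggregation the loop above relies
// on; the actual ordering is defined by health.IsWorse in the health package.
func exampleWorstStatus() string {
	severity := map[string]int{"Healthy": 0, "Progressing": 1, "Degraded": 2, "Unknown": 3}
	isWorse := func(current, candidate string) bool {
		return severity[candidate] > severity[current]
	}
	appStatus := "Healthy"
	for _, s := range []string{"Healthy", "Progressing", "Degraded"} {
		if isWorse(appStatus, s) {
			appStatus = s // ratchet up to the worst status seen so far
		}
	}
	return appStatus // "Degraded"
}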
// updateAppStatus persists updates to the application status, and skips the API call entirely when the computed patch is empty.
func (ctrl *ApplicationController) updateAppStatus(
app *appv1.Application,
comparisonResult *appv1.ComparisonResult,
healthState *appv1.HealthStatus,
parameters []*appv1.ComponentParameter,
conditions []appv1.ApplicationCondition,
) {
logCtx := log.WithFields(log.Fields{"application": app.Name})
modifiedApp := app.DeepCopy()
if comparisonResult != nil {
modifiedApp.Status.ComparisonResult = *comparisonResult
if app.Status.ComparisonResult.Status != comparisonResult.Status {
message := fmt.Sprintf("Updated sync status: %s -> %s", app.Status.ComparisonResult.Status, comparisonResult.Status)
ctrl.auditLogger.LogAppEvent(app, argo.EventInfo{Reason: argo.EventReasonResourceUpdated, Type: v1.EventTypeNormal}, message)
}
logCtx.Infof("Comparison result: prev: %s. current: %s", app.Status.ComparisonResult.Status, comparisonResult.Status)
}
if healthState != nil {
if modifiedApp.Status.Health.Status != healthState.Status {
message := fmt.Sprintf("Updated health status: %s -> %s", modifiedApp.Status.Health.Status, healthState.Status)
ctrl.auditLogger.LogAppEvent(app, argo.EventInfo{Reason: argo.EventReasonResourceUpdated, Type: v1.EventTypeNormal}, message)
}
modifiedApp.Status.Health = *healthState
}
if parameters != nil {
modifiedApp.Status.Parameters = make([]appv1.ComponentParameter, len(parameters))
for i := range parameters {
modifiedApp.Status.Parameters[i] = *parameters[i]
}
}
if conditions != nil {
modifiedApp.Status.Conditions = conditions
}
origBytes, err := json.Marshal(app)
if err != nil {
logCtx.Errorf("Error updating (marshal orig app): %v", err)
return
}
modifiedBytes, err := json.Marshal(modifiedApp)
if err != nil {
logCtx.Errorf("Error updating (marshal modified app): %v", err)
return
}
patch, err := strategicpatch.CreateTwoWayMergePatch(origBytes, modifiedBytes, appv1.Application{})
if err != nil {
logCtx.Errorf("Error calculating patch for update: %v", err)
return
}
if string(patch) == "{}" {
logCtx.Infof("No status changes. Skipping patch")
return
}
appClient := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.Namespace)
_, err = appClient.Patch(app.Name, types.MergePatchType, patch)
if err != nil {
logCtx.Warnf("Error updating application: %v", err)
} else {
logCtx.Infof("Update successful")
}
}
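// The diff-then-patch flow above can be reproduced in isolation; a minimal
// sketch with a hypothetical struct (CreateTwoWayMergePatch reads the struct's
// JSON tags to compute the patch):
func exampleTwoWayMergePatch() ([]byte, error) {
	type demo struct {
		A string `json:"a,omitempty"`
		B string `json:"b,omitempty"`
	}
	orig, err := json.Marshal(demo{A: "1", B: "2"})
	if err != nil {
		return nil, err
	}
	modified, err := json.Marshal(demo{A: "1", B: "3"})
	if err != nil {
		return nil, err
	}
	// Yields {"b":"3"}; an empty "{}" patch means nothing changed, which
	// updateAppStatus uses to skip the API call entirely.
	return strategicpatch.CreateTwoWayMergePatch(orig, modified, demo{})
}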
// autoSync will initiate a sync operation for an application configured with automated sync
func (ctrl *ApplicationController) autoSync(app *appv1.Application, comparisonResult *appv1.ComparisonResult) *appv1.ApplicationCondition {
if app.Spec.SyncPolicy == nil || app.Spec.SyncPolicy.Automated == nil {
return nil
}
logCtx := log.WithFields(log.Fields{"application": app.Name})
if app.Operation != nil {
logCtx.Infof("Skipping auto-sync: another operation is in progress")
return nil
}
// Only perform auto-sync if we detect OutOfSync status. This is to prevent us from attempting
// a sync when the application is already in a Synced or Unknown state
if comparisonResult.Status != appv1.ComparisonStatusOutOfSync {
logCtx.Infof("Skipping auto-sync: application status is %s", comparisonResult.Status)
return nil
}
desiredCommitSHA := comparisonResult.Revision
// It is possible for manifests to remain OutOfSync even after a sync/kubectl apply (e.g.
// auto-sync with pruning disabled). We need to ensure that we do not keep Syncing an
// application in an infinite loop. To detect this, we only attempt the Sync if the revision
// and parameter overrides are different from our most recent sync operation.
if alreadyAttemptedSync(app, desiredCommitSHA) {
if app.Status.OperationState.Phase != appv1.OperationSucceeded {
logCtx.Warnf("Skipping auto-sync: failed previous sync attempt to %s", desiredCommitSHA)
message := fmt.Sprintf("Failed sync attempt to %s: %s", desiredCommitSHA, app.Status.OperationState.Message)
return &appv1.ApplicationCondition{Type: appv1.ApplicationConditionSyncError, Message: message}
}
logCtx.Infof("Skipping auto-sync: most recent sync already to %s", desiredCommitSHA)
return nil
}
op := appv1.Operation{
Sync: &appv1.SyncOperation{
Revision: desiredCommitSHA,
Prune: app.Spec.SyncPolicy.Automated.Prune,
ParameterOverrides: app.Spec.Source.ComponentParameterOverrides,
},
}
appIf := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.Namespace)
_, err := argo.SetAppOperation(context.Background(), appIf, ctrl.auditLogger, app.Name, &op)
if err != nil {
logCtx.Errorf("Failed to initiate auto-sync to %s: %v", desiredCommitSHA, err)
return &appv1.ApplicationCondition{Type: appv1.ApplicationConditionSyncError, Message: err.Error()}
}
message := fmt.Sprintf("Initiated automated sync to '%s'", desiredCommitSHA)
ctrl.auditLogger.LogAppEvent(app, argo.EventInfo{Reason: argo.EventReasonOperationStarted, Type: v1.EventTypeNormal}, message)
logCtx.Info(message)
return nil
}
// alreadyAttemptedSync returns whether the most recent sync was performed against the given
// commit SHA and with the same parameter overrides that are currently set in the app
func alreadyAttemptedSync(app *appv1.Application, commitSHA string) bool {
if app.Status.OperationState == nil || app.Status.OperationState.Operation.Sync == nil || app.Status.OperationState.SyncResult == nil {
return false
}
if app.Status.OperationState.SyncResult.Revision != commitSHA {
return false
}
if !reflect.DeepEqual(appv1.ParameterOverrides(app.Spec.Source.ComponentParameterOverrides), app.Status.OperationState.Operation.Sync.ParameterOverrides) {
return false
}
return true
}
func (ctrl *ApplicationController) newApplicationInformer() cache.SharedIndexInformer {
appInformerFactory := appinformers.NewFilteredSharedInformerFactory(
ctrl.applicationClientset,
ctrl.statusRefreshTimeout,
ctrl.namespace,
func(options *metav1.ListOptions) {},
)
informer := appInformerFactory.Argoproj().V1alpha1().Applications().Informer()
informer.AddEventHandler(
cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
key, err := cache.MetaNamespaceKeyFunc(obj)
if err == nil {
ctrl.appRefreshQueue.Add(key)
ctrl.appOperationQueue.Add(key)
}
},
UpdateFunc: func(old, new interface{}) {
key, err := cache.MetaNamespaceKeyFunc(new)
if err != nil {
return
}
oldApp, oldOK := old.(*appv1.Application)
newApp, newOK := new.(*appv1.Application)
if oldOK && newOK {
if toggledAutomatedSync(oldApp, newApp) {
log.WithField("application", newApp.Name).Info("Enabled automated sync")
ctrl.forceAppRefresh(newApp.Name)
}
}
ctrl.appRefreshQueue.Add(key)
ctrl.appOperationQueue.Add(key)
},
DeleteFunc: func(obj interface{}) {
// IndexerInformer uses a delta queue, therefore for deletes we have to use this
// key function.
key, err := cache.DeletionHandlingMetaNamespaceKeyFunc(obj)
if err == nil {
ctrl.appRefreshQueue.Add(key)
}
},
},
)
return informer
}
func isOperationInProgress(app *appv1.Application) bool {
return app.Status.OperationState != nil && !app.Status.OperationState.Phase.Completed()
}
// toggledAutomatedSync tests if an app went from auto-sync disabled to enabled.
// If it was toggled to be enabled, the informer handler will force a refresh.
func toggledAutomatedSync(old *appv1.Application, new *appv1.Application) bool {
if new.Spec.SyncPolicy == nil || new.Spec.SyncPolicy.Automated == nil {
return false
}
// auto-sync is enabled. check if it was previously disabled
if old.Spec.SyncPolicy == nil || old.Spec.SyncPolicy.Automated == nil {
return true
}
// nothing changed
return false
}
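// The toggle detection above reduces to a small truth table:
//
//   old auto-sync | new auto-sync | toggledAutomatedSync
//   --------------+---------------+---------------------
//   disabled      | enabled       | true  (informer forces a refresh)
//   enabled       | enabled       | false (nothing changed)
//   any           | disabled      | false (disabling never forces a refresh)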


@@ -0,0 +1,231 @@
package controller
import (
"testing"
"time"
"github.com/ghodss/yaml"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/kubernetes/fake"
argoappv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned/fake"
reposerver "github.com/argoproj/argo-cd/reposerver/mocks"
"github.com/stretchr/testify/assert"
)
func newFakeController(apps ...runtime.Object) *ApplicationController {
kubeClientset := fake.NewSimpleClientset()
appClientset := appclientset.NewSimpleClientset(apps...)
repoClientset := reposerver.Clientset{}
return NewApplicationController(
"argocd",
kubeClientset,
appClientset,
&repoClientset,
time.Minute,
)
}
var fakeApp = `
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: my-app
namespace: argocd
spec:
destination:
namespace: dummy-namespace
server: https://localhost:6443
project: default
source:
path: some/path
repoURL: https://github.com/argoproj/argocd-example-apps.git
syncPolicy:
automated: {}
status:
operationState:
finishedAt: 2018-09-21T23:50:29Z
message: successfully synced
operation:
sync:
revision: HEAD
phase: Succeeded
startedAt: 2018-09-21T23:50:25Z
syncResult:
resources:
- kind: RoleBinding
message: |-
rolebinding.rbac.authorization.k8s.io/always-outofsync reconciled
rolebinding.rbac.authorization.k8s.io/always-outofsync configured
name: always-outofsync
namespace: default
status: Synced
revision: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
`
func newFakeApp() *argoappv1.Application {
var app argoappv1.Application
err := yaml.Unmarshal([]byte(fakeApp), &app)
if err != nil {
panic(err)
}
return &app
}
func TestAutoSync(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(app)
compRes := argoappv1.ComparisonResult{
Status: argoappv1.ComparisonStatusOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond := ctrl.autoSync(app, &compRes)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications("argocd").Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
assert.NotNil(t, app.Operation)
assert.NotNil(t, app.Operation.Sync)
assert.False(t, app.Operation.Sync.Prune)
}
func TestSkipAutoSync(t *testing.T) {
// Verify we skip when we already synced to the desired revision in our most recent history.
// Set both current and desired to 'aaaaa...' and mark the app OutOfSync
app := newFakeApp()
ctrl := newFakeController(app)
compRes := argoappv1.ComparisonResult{
Status: argoappv1.ComparisonStatusOutOfSync,
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
}
cond := ctrl.autoSync(app, &compRes)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications("argocd").Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
assert.Nil(t, app.Operation)
// Verify we skip when we are already Synced (even if revision is different)
app = newFakeApp()
ctrl = newFakeController(app)
compRes = argoappv1.ComparisonResult{
Status: argoappv1.ComparisonStatusSynced,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond = ctrl.autoSync(app, &compRes)
assert.Nil(t, cond)
app, err = ctrl.applicationClientset.ArgoprojV1alpha1().Applications("argocd").Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
assert.Nil(t, app.Operation)
// Verify we skip when auto-sync is disabled
app = newFakeApp()
app.Spec.SyncPolicy = nil
ctrl = newFakeController(app)
compRes = argoappv1.ComparisonResult{
Status: argoappv1.ComparisonStatusOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond = ctrl.autoSync(app, &compRes)
assert.Nil(t, cond)
app, err = ctrl.applicationClientset.ArgoprojV1alpha1().Applications("argocd").Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
assert.Nil(t, app.Operation)
// Verify we skip when previous sync attempt failed and return error condition
// Set current to 'aaaaa', desired to 'bbbbb' and add 'bbbbb' to failure history
app = newFakeApp()
app.Status.OperationState = &argoappv1.OperationState{
Operation: argoappv1.Operation{
Sync: &argoappv1.SyncOperation{},
},
Phase: argoappv1.OperationFailed,
SyncResult: &argoappv1.SyncOperationResult{
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
},
}
ctrl = newFakeController(app)
compRes = argoappv1.ComparisonResult{
Status: argoappv1.ComparisonStatusOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond = ctrl.autoSync(app, &compRes)
assert.NotNil(t, cond)
app, err = ctrl.applicationClientset.ArgoprojV1alpha1().Applications("argocd").Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
assert.Nil(t, app.Operation)
}
// TestAutoSyncIndicateError verifies we skip auto-sync and return error condition if previous sync failed
func TestAutoSyncIndicateError(t *testing.T) {
app := newFakeApp()
app.Spec.Source.ComponentParameterOverrides = []argoappv1.ComponentParameter{
{
Name: "a",
Value: "1",
},
}
ctrl := newFakeController(app)
compRes := argoappv1.ComparisonResult{
Status: argoappv1.ComparisonStatusOutOfSync,
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
}
app.Status.OperationState = &argoappv1.OperationState{
Operation: argoappv1.Operation{
Sync: &argoappv1.SyncOperation{
ParameterOverrides: argoappv1.ParameterOverrides{
{
Name: "a",
Value: "1",
},
},
},
},
Phase: argoappv1.OperationFailed,
SyncResult: &argoappv1.SyncOperationResult{
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
},
}
cond := ctrl.autoSync(app, &compRes)
assert.NotNil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications("argocd").Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
assert.Nil(t, app.Operation)
}
// TestAutoSyncParameterOverrides verifies we auto-sync if revision is same but parameter overrides are different
func TestAutoSyncParameterOverrides(t *testing.T) {
app := newFakeApp()
app.Spec.Source.ComponentParameterOverrides = []argoappv1.ComponentParameter{
{
Name: "a",
Value: "1",
},
}
ctrl := newFakeController(app)
compRes := argoappv1.ComparisonResult{
Status: argoappv1.ComparisonStatusOutOfSync,
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
}
app.Status.OperationState = &argoappv1.OperationState{
Operation: argoappv1.Operation{
Sync: &argoappv1.SyncOperation{
ParameterOverrides: argoappv1.ParameterOverrides{
{
Name: "a",
Value: "2", // this value changed
},
},
},
},
Phase: argoappv1.OperationFailed,
SyncResult: &argoappv1.SyncOperationResult{
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
},
}
cond := ctrl.autoSync(app, &compRes)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications("argocd").Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
assert.NotNil(t, app.Operation)
}


@@ -1,560 +0,0 @@
package controller
import (
"context"
"encoding/json"
"fmt"
"runtime/debug"
"sync"
"time"
"github.com/argoproj/argo-cd/common"
appv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned"
appinformers "github.com/argoproj/argo-cd/pkg/client/informers/externalversions"
"github.com/argoproj/argo-cd/util/db"
"github.com/argoproj/argo-cd/util/kube"
log "github.com/sirupsen/logrus"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/fields"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/selection"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/apimachinery/pkg/watch"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/util/workqueue"
)
const (
watchResourcesRetryTimeout = 10 * time.Second
updateOperationStateTimeout = 1 * time.Second
)
// ApplicationController is the controller for application resources.
type ApplicationController struct {
namespace string
kubeClientset kubernetes.Interface
applicationClientset appclientset.Interface
appRefreshQueue workqueue.RateLimitingInterface
appOperationQueue workqueue.RateLimitingInterface
appInformer cache.SharedIndexInformer
appStateManager AppStateManager
appHealthManager AppHealthManager
statusRefreshTimeout time.Duration
db db.ArgoDB
forceRefreshApps map[string]bool
forceRefreshAppsMutex *sync.Mutex
}
type ApplicationControllerConfig struct {
InstanceID string
Namespace string
}
// NewApplicationController creates new instance of ApplicationController.
func NewApplicationController(
namespace string,
kubeClientset kubernetes.Interface,
applicationClientset appclientset.Interface,
db db.ArgoDB,
appStateManager AppStateManager,
appHealthManager AppHealthManager,
appResyncPeriod time.Duration,
config *ApplicationControllerConfig,
) *ApplicationController {
appRefreshQueue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
appOperationQueue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
return &ApplicationController{
namespace: namespace,
kubeClientset: kubeClientset,
applicationClientset: applicationClientset,
appRefreshQueue: appRefreshQueue,
appOperationQueue: appOperationQueue,
appStateManager: appStateManager,
appHealthManager: appHealthManager,
appInformer: newApplicationInformer(applicationClientset, appRefreshQueue, appOperationQueue, appResyncPeriod, config),
db: db,
statusRefreshTimeout: appResyncPeriod,
forceRefreshApps: make(map[string]bool),
forceRefreshAppsMutex: &sync.Mutex{},
}
}
// Run starts the Application CRD controller.
func (ctrl *ApplicationController) Run(ctx context.Context, statusProcessors int, operationProcessors int) {
defer runtime.HandleCrash()
defer ctrl.appRefreshQueue.ShutDown()
go ctrl.appInformer.Run(ctx.Done())
go ctrl.watchAppsResources()
if !cache.WaitForCacheSync(ctx.Done(), ctrl.appInformer.HasSynced) {
log.Error("Timed out waiting for caches to sync")
return
}
for i := 0; i < statusProcessors; i++ {
go wait.Until(func() {
for ctrl.processAppRefreshQueueItem() {
}
}, time.Second, ctx.Done())
}
for i := 0; i < operationProcessors; i++ {
go wait.Until(func() {
for ctrl.processAppOperationQueueItem() {
}
}, time.Second, ctx.Done())
}
<-ctx.Done()
}
func (ctrl *ApplicationController) forceAppRefresh(appName string) {
ctrl.forceRefreshAppsMutex.Lock()
defer ctrl.forceRefreshAppsMutex.Unlock()
ctrl.forceRefreshApps[appName] = true
}
func (ctrl *ApplicationController) isRefreshForced(appName string) bool {
ctrl.forceRefreshAppsMutex.Lock()
defer ctrl.forceRefreshAppsMutex.Unlock()
_, ok := ctrl.forceRefreshApps[appName]
if ok {
delete(ctrl.forceRefreshApps, appName)
}
return ok
}
// watchClusterResources watches for changes to resources labeled with the application label on the specified cluster and schedules a corresponding app refresh.
func (ctrl *ApplicationController) watchClusterResources(ctx context.Context, item appv1.Cluster) {
config := item.RESTConfig()
retryUntilSucceed(func() error {
ch, err := kube.WatchResourcesWithLabel(ctx, config, "", common.LabelApplicationName)
if err != nil {
return err
}
for event := range ch {
eventObj := event.Object.(*unstructured.Unstructured)
objLabels := eventObj.GetLabels()
if objLabels == nil {
objLabels = make(map[string]string)
}
if appName, ok := objLabels[common.LabelApplicationName]; ok {
ctrl.forceAppRefresh(appName)
ctrl.appRefreshQueue.Add(ctrl.namespace + "/" + appName)
}
}
return fmt.Errorf("resource updates channel has closed")
}, fmt.Sprintf("watch app resources on %s", config.Host), ctx, watchResourcesRetryTimeout)
}
// watchAppsResources watches for changes to resources labeled with the application label on all registered clusters and schedules corresponding app refreshes.
func (ctrl *ApplicationController) watchAppsResources() {
watchingClusters := make(map[string]context.CancelFunc)
retryUntilSucceed(func() error {
return ctrl.db.WatchClusters(context.Background(), func(event *db.ClusterEvent) {
cancel, ok := watchingClusters[event.Cluster.Server]
if event.Type == watch.Deleted && ok {
cancel()
delete(watchingClusters, event.Cluster.Server)
} else if event.Type != watch.Deleted && !ok {
ctx, cancel := context.WithCancel(context.Background())
watchingClusters[event.Cluster.Server] = cancel
go ctrl.watchClusterResources(ctx, *event.Cluster)
}
})
}, "watch clusters", context.Background(), watchResourcesRetryTimeout)
<-context.Background().Done()
}
// retryUntilSucceed keeps retrying the given action, sleeping for the specified timeout between attempts, until the action succeeds or the given context is done.
func retryUntilSucceed(action func() error, desc string, ctx context.Context, timeout time.Duration) {
ctxCompleted := false
go func() {
<-ctx.Done()
ctxCompleted = true
}()
for {
err := action()
if err == nil {
return
}
// err is non-nil here; stop if the context is done, otherwise wait and retry
if ctxCompleted {
log.Infof("Stop retrying %s", desc)
return
}
log.Warnf("Failed to %s: %v, retrying in %v", desc, err, timeout)
time.Sleep(timeout)
}
}
func (ctrl *ApplicationController) processAppOperationQueueItem() (processNext bool) {
appKey, shutdown := ctrl.appOperationQueue.Get()
if shutdown {
processNext = false
return
} else {
processNext = true
}
defer func() {
if r := recover(); r != nil {
log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
}
ctrl.appOperationQueue.Done(appKey)
}()
obj, exists, err := ctrl.appInformer.GetIndexer().GetByKey(appKey.(string))
if err != nil {
log.Errorf("Failed to get application '%s' from informer index: %+v", appKey, err)
return
}
if !exists {
// This happens after the app was deleted, but the work queue still had an entry for it.
return
}
app, ok := obj.(*appv1.Application)
if !ok {
log.Warnf("Key '%s' in index is not an application", appKey)
return
}
if app.Operation != nil {
ctrl.processRequestedAppOperation(app)
} else if app.DeletionTimestamp != nil && app.CascadedDeletion() {
ctrl.finalizeApplicationDeletion(app)
}
return
}
func (ctrl *ApplicationController) finalizeApplicationDeletion(app *appv1.Application) {
log.Infof("Deleting resources for application %s", app.Name)
// Get refreshed application info, since informer app copy might be stale
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.Namespace).Get(app.Name, metav1.GetOptions{})
if err != nil {
if !errors.IsNotFound(err) {
log.Errorf("Unable to get refreshed application info prior deleting resources: %v", err)
}
return
}
clst, err := ctrl.db.GetCluster(context.Background(), app.Spec.Destination.Server)
if err == nil {
config := clst.RESTConfig()
err = kube.DeleteResourceWithLabel(config, app.Spec.Destination.Namespace, common.LabelApplicationName, app.Name)
if err == nil {
app.SetCascadedDeletion(false)
var patch []byte
patch, err = json.Marshal(map[string]interface{}{
"metadata": map[string]interface{}{
"finalizers": app.Finalizers,
},
})
if err == nil {
_, err = ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.Namespace).Patch(app.Name, types.MergePatchType, patch)
}
}
}
if err != nil {
log.Errorf("Unable to delete application resources: %v", err)
ctrl.setAppCondition(app, appv1.ApplicationCondition{
Type: appv1.ApplicationConditionDeletionError,
Message: err.Error(),
})
} else {
log.Infof("Successfully deleted resources for application %s", app.Name)
}
}
func (ctrl *ApplicationController) setAppCondition(app *appv1.Application, condition appv1.ApplicationCondition) {
index := -1
for i, exiting := range app.Status.Conditions {
if exiting.Type == condition.Type {
index = i
break
}
}
if index > -1 {
app.Status.Conditions[index] = condition
} else {
app.Status.Conditions = append(app.Status.Conditions, condition)
}
var patch []byte
patch, err := json.Marshal(map[string]interface{}{
"status": map[string]interface{}{
"conditions": app.Status.Conditions,
},
})
if err == nil {
_, err = ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.Namespace).Patch(app.Name, types.MergePatchType, patch)
}
if err != nil {
log.Errorf("Unable to set application condition: %v", err)
}
}
func (ctrl *ApplicationController) processRequestedAppOperation(app *appv1.Application) {
state := appv1.OperationState{Phase: appv1.OperationRunning, Operation: *app.Operation, StartedAt: metav1.Now()}
// Recover from any unexpected panics and automatically set the status to be failed
defer func() {
if r := recover(); r != nil {
log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
// TODO: consider adding Error OperationStatus in addition to Failed
state.Phase = appv1.OperationError
if rerr, ok := r.(error); ok {
state.Message = rerr.Error()
} else {
state.Message = fmt.Sprintf("%v", r)
}
ctrl.setOperationState(app.Name, state, app.Operation)
}
}()
if app.Status.OperationState != nil && !app.Status.OperationState.Phase.Completed() {
// If we get here, we are about to process an operation, but we notice it is already Running.
// We need to detect if the controller crashed before completing the operation, or if the
// app object we pulled off the informer is simply stale and doesn't reflect the fact
// that the operation is completed. We don't want to perform the operation again. To detect
// this, always retrieve the latest version to ensure it is not stale.
freshApp, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(ctrl.namespace).Get(app.ObjectMeta.Name, metav1.GetOptions{})
if err != nil {
log.Errorf("Failed to retrieve latest application state: %v", err)
return
}
if freshApp.Status.OperationState == nil || freshApp.Status.OperationState.Phase.Completed() {
log.Infof("Skipping operation on stale application state (%s)", app.ObjectMeta.Name)
return
}
log.Warnf("Found interrupted application operation %s %v", app.ObjectMeta.Name, app.Status.OperationState)
} else {
ctrl.setOperationState(app.Name, state, app.Operation)
}
if app.Operation.Sync != nil {
opRes := ctrl.appStateManager.SyncAppState(app, app.Operation.Sync.Revision, nil, app.Operation.Sync.DryRun, app.Operation.Sync.Prune)
state.Phase = opRes.Phase
state.Message = opRes.Message
state.SyncResult = opRes.SyncResult
} else if app.Operation.Rollback != nil {
var deploymentInfo *appv1.DeploymentInfo
for _, info := range app.Status.History {
if info.ID == app.Operation.Rollback.ID {
deploymentInfo = &info
break
}
}
if deploymentInfo == nil {
state.Phase = appv1.OperationFailed
state.Message = fmt.Sprintf("application %s does not have deployment with id %v", app.Name, app.Operation.Rollback.ID)
} else {
opRes := ctrl.appStateManager.SyncAppState(app, deploymentInfo.Revision, &deploymentInfo.ComponentParameterOverrides, app.Operation.Rollback.DryRun, app.Operation.Rollback.Prune)
state.Phase = opRes.Phase
state.Message = opRes.Message
state.RollbackResult = opRes.SyncResult
}
} else {
state.Phase = appv1.OperationFailed
state.Message = "Invalid operation request"
}
ctrl.setOperationState(app.Name, state, app.Operation)
}
func (ctrl *ApplicationController) setOperationState(appName string, state appv1.OperationState, operation *appv1.Operation) {
retryUntilSucceed(func() error {
var inProgressOpValue *appv1.Operation
if state.Phase == "" {
// expose any bugs where we neglect to set phase
panic("no phase was set")
}
if !state.Phase.Completed() {
// If operation is still running, we populate the app.operation field, which prevents
// any other operation from running at the same time. Otherwise, it is cleared by setting
// it to nil which indicates no operation is in progress.
inProgressOpValue = operation
} else {
nowTime := metav1.Now()
state.FinishedAt = &nowTime
}
patch, err := json.Marshal(map[string]interface{}{
"status": map[string]interface{}{
"operationState": state,
},
"operation": inProgressOpValue,
})
if err != nil {
return err
}
appClient := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(ctrl.namespace)
_, err = appClient.Patch(appName, types.MergePatchType, patch)
if err != nil {
return err
}
log.Infof("updated '%s' operation (phase: %s)", appName, state.Phase)
return nil
}, "Update application operation state", context.Background(), updateOperationStateTimeout)
}
func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext bool) {
appKey, shutdown := ctrl.appRefreshQueue.Get()
if shutdown {
processNext = false
return
} else {
processNext = true
}
defer func() {
if r := recover(); r != nil {
log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
}
ctrl.appRefreshQueue.Done(appKey)
}()
obj, exists, err := ctrl.appInformer.GetIndexer().GetByKey(appKey.(string))
if err != nil {
log.Errorf("Failed to get application '%s' from informer index: %+v", appKey, err)
return
}
if !exists {
// This happens after the app was deleted, but the work queue still had an entry for it.
return
}
app, ok := obj.(*appv1.Application)
if !ok {
log.Warnf("Key '%s' in index is not an application", appKey)
return
}
isForceRefreshed := ctrl.isRefreshForced(app.Name)
if isForceRefreshed || app.NeedRefreshAppStatus(ctrl.statusRefreshTimeout) {
log.Infof("Refreshing application '%s' status (force refreshed: %v)", app.Name, isForceRefreshed)
comparisonResult, parameters, healthState, err := ctrl.tryRefreshAppStatus(app.DeepCopy())
if err != nil {
comparisonResult = &appv1.ComparisonResult{
Status: appv1.ComparisonStatusError,
Error: fmt.Sprintf("Failed to get application status for application '%s': %v", app.Name, err),
ComparedTo: app.Spec.Source,
ComparedAt: metav1.Time{Time: time.Now().UTC()},
}
parameters = nil
healthState = &appv1.HealthStatus{Status: appv1.HealthStatusUnknown}
}
ctrl.updateAppStatus(app.Name, app.Namespace, comparisonResult, parameters, *healthState)
}
return
}
func (ctrl *ApplicationController) tryRefreshAppStatus(app *appv1.Application) (*appv1.ComparisonResult, *[]appv1.ComponentParameter, *appv1.HealthStatus, error) {
comparisonResult, manifestInfo, err := ctrl.appStateManager.CompareAppState(app)
if err != nil {
return nil, nil, nil, err
}
log.Infof("App %s comparison result: prev: %s. current: %s", app.Name, app.Status.ComparisonResult.Status, comparisonResult.Status)
parameters := make([]appv1.ComponentParameter, len(manifestInfo.Params))
for i := range manifestInfo.Params {
parameters[i] = *manifestInfo.Params[i]
}
healthState, err := ctrl.appHealthManager.GetAppHealth(app.Spec.Destination.Server, app.Spec.Destination.Namespace, comparisonResult)
if err != nil {
return nil, nil, nil, err
}
return comparisonResult, &parameters, healthState, nil
}
func (ctrl *ApplicationController) updateAppStatus(
appName string, namespace string, comparisonResult *appv1.ComparisonResult, parameters *[]appv1.ComponentParameter, healthState appv1.HealthStatus) {
statusPatch := make(map[string]interface{})
statusPatch["comparisonResult"] = comparisonResult
statusPatch["parameters"] = parameters
statusPatch["health"] = healthState
patch, err := json.Marshal(map[string]interface{}{
"status": statusPatch,
})
if err == nil {
appClient := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(namespace)
_, err = appClient.Patch(appName, types.MergePatchType, patch)
}
if err != nil {
log.Warnf("Error updating application: %v", err)
} else {
log.Info("Application update successful")
}
}
func newApplicationInformer(
appClientset appclientset.Interface,
appQueue workqueue.RateLimitingInterface,
appOperationQueue workqueue.RateLimitingInterface,
appResyncPeriod time.Duration,
config *ApplicationControllerConfig) cache.SharedIndexInformer {
appInformerFactory := appinformers.NewFilteredSharedInformerFactory(
appClientset,
appResyncPeriod,
config.Namespace,
func(options *metav1.ListOptions) {
var instanceIDReq *labels.Requirement
var err error
if config.InstanceID != "" {
instanceIDReq, err = labels.NewRequirement(common.LabelKeyApplicationControllerInstanceID, selection.Equals, []string{config.InstanceID})
} else {
instanceIDReq, err = labels.NewRequirement(common.LabelKeyApplicationControllerInstanceID, selection.DoesNotExist, nil)
}
if err != nil {
panic(err)
}
options.FieldSelector = fields.Everything().String()
labelSelector := labels.NewSelector().Add(*instanceIDReq)
options.LabelSelector = labelSelector.String()
},
)
informer := appInformerFactory.Argoproj().V1alpha1().Applications().Informer()
informer.AddEventHandler(
cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
key, err := cache.MetaNamespaceKeyFunc(obj)
if err == nil {
appQueue.Add(key)
appOperationQueue.Add(key)
}
},
UpdateFunc: func(old, new interface{}) {
key, err := cache.MetaNamespaceKeyFunc(new)
if err == nil {
appQueue.Add(key)
appOperationQueue.Add(key)
}
},
DeleteFunc: func(obj interface{}) {
// IndexerInformer uses a delta queue, therefore for deletes we have to use this
// key function.
key, err := cache.DeletionHandlingMetaNamespaceKeyFunc(obj)
if err == nil {
appQueue.Add(key)
}
},
},
)
return informer
}


@@ -1,137 +0,0 @@
package controller
import (
"context"
"encoding/json"
"fmt"
appv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util/db"
"github.com/argoproj/argo-cd/util/kube"
"k8s.io/api/apps/v1"
coreV1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
)
const (
maxHistoryCnt = 5
)
type AppHealthManager interface {
GetAppHealth(server string, namespace string, comparisonResult *appv1.ComparisonResult) (*appv1.HealthStatus, error)
}
type kubeAppHealthManager struct {
db db.ArgoDB
namespace string
}
func NewAppHealthManager(db db.ArgoDB, namespace string) AppHealthManager {
return &kubeAppHealthManager{db: db, namespace: namespace}
}
func (ctrl *kubeAppHealthManager) getServiceHealth(config *rest.Config, namespace string, name string) (*appv1.HealthStatus, error) {
clientSet, err := kubernetes.NewForConfig(config)
if err != nil {
return nil, err
}
service, err := clientSet.CoreV1().Services(namespace).Get(name, metav1.GetOptions{})
if err != nil {
return nil, err
}
health := appv1.HealthStatus{Status: appv1.HealthStatusHealthy}
if service.Spec.Type == coreV1.ServiceTypeLoadBalancer {
health.Status = appv1.HealthStatusProgressing
for _, ingress := range service.Status.LoadBalancer.Ingress {
if ingress.Hostname != "" || ingress.IP != "" {
health.Status = appv1.HealthStatusHealthy
break
}
}
}
return &health, nil
}
func (ctrl *kubeAppHealthManager) getDeploymentHealth(config *rest.Config, namespace string, name string) (*appv1.HealthStatus, error) {
clientSet, err := kubernetes.NewForConfig(config)
if err != nil {
return nil, err
}
deploy, err := clientSet.AppsV1().Deployments(namespace).Get(name, metav1.GetOptions{})
if err != nil {
return nil, err
}
health := appv1.HealthStatus{
Status: appv1.HealthStatusUnknown,
}
for _, condition := range deploy.Status.Conditions {
// deployment is healthy if it successfully progressed
if condition.Type == v1.DeploymentProgressing && condition.Status == "True" {
health.Status = appv1.HealthStatusHealthy
} else if condition.Type == v1.DeploymentReplicaFailure && condition.Status == "True" {
health.Status = appv1.HealthStatusDegraded
} else if condition.Type == v1.DeploymentProgressing && condition.Status == "False" {
health.Status = appv1.HealthStatusDegraded
} else if condition.Type == v1.DeploymentAvailable && condition.Status == "False" {
health.Status = appv1.HealthStatusDegraded
}
if health.Status != appv1.HealthStatusUnknown {
health.StatusDetails = fmt.Sprintf("%s:%s", condition.Reason, condition.Message)
break
}
}
return &health, nil
}
func (ctrl *kubeAppHealthManager) GetAppHealth(server string, namespace string, comparisonResult *appv1.ComparisonResult) (*appv1.HealthStatus, error) {
clst, err := ctrl.db.GetCluster(context.Background(), server)
if err != nil {
return nil, err
}
restConfig := clst.RESTConfig()
appHealth := appv1.HealthStatus{Status: appv1.HealthStatusHealthy}
for i := range comparisonResult.Resources {
resource := comparisonResult.Resources[i]
if resource.LiveState == "null" {
resource.Health = appv1.HealthStatus{Status: appv1.HealthStatusUnknown}
} else {
var obj unstructured.Unstructured
err := json.Unmarshal([]byte(resource.LiveState), &obj)
if err != nil {
return nil, err
}
switch obj.GetKind() {
case kube.DeploymentKind:
state, err := ctrl.getDeploymentHealth(restConfig, namespace, obj.GetName())
if err != nil {
return nil, err
}
resource.Health = *state
case kube.ServiceKind:
state, err := ctrl.getServiceHealth(restConfig, namespace, obj.GetName())
if err != nil {
return nil, err
}
resource.Health = *state
default:
resource.Health = appv1.HealthStatus{Status: appv1.HealthStatusHealthy}
}
if resource.Health.Status == appv1.HealthStatusProgressing {
if appHealth.Status == appv1.HealthStatusHealthy {
appHealth.Status = appv1.HealthStatusProgressing
}
} else if resource.Health.Status == appv1.HealthStatusDegraded {
if appHealth.Status == appv1.HealthStatusHealthy || appHealth.Status == appv1.HealthStatusProgressing {
appHealth.Status = appv1.HealthStatusDegraded
}
}
}
comparisonResult.Resources[i] = resource
}
return &appHealth, nil
}


@@ -0,0 +1,195 @@
package controller
import (
"context"
"encoding/json"
"runtime/debug"
"time"
log "github.com/sirupsen/logrus"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/fields"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/selection"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/informers"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/util/workqueue"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/reposerver"
"github.com/argoproj/argo-cd/util/db"
"github.com/argoproj/argo-cd/util/git"
)
type SecretController struct {
kubeClient kubernetes.Interface
secretQueue workqueue.RateLimitingInterface
secretInformer cache.SharedIndexInformer
repoClientset reposerver.Clientset
namespace string
}
func (ctrl *SecretController) Run(ctx context.Context) {
go ctrl.secretInformer.Run(ctx.Done())
if !cache.WaitForCacheSync(ctx.Done(), ctrl.secretInformer.HasSynced) {
log.Error("Timed out waiting for caches to sync")
return
}
go wait.Until(func() {
for ctrl.processSecret() {
}
}, time.Second, ctx.Done())
}
func (ctrl *SecretController) processSecret() (processNext bool) {
secretKey, shutdown := ctrl.secretQueue.Get()
if shutdown {
processNext = false
return
} else {
processNext = true
}
defer func() {
if r := recover(); r != nil {
log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
}
ctrl.secretQueue.Done(secretKey)
}()
obj, exists, err := ctrl.secretInformer.GetIndexer().GetByKey(secretKey.(string))
if err != nil {
log.Errorf("Failed to get secret '%s' from informer index: %+v", secretKey, err)
return
}
if !exists {
// This happens after the secret was deleted, but the work queue still had an entry for it.
return
}
secret, ok := obj.(*corev1.Secret)
if !ok {
log.Warnf("Key '%s' in index is not an secret", secretKey)
return
}
if secret.Labels[common.LabelKeySecretType] == common.SecretTypeCluster {
cluster := db.SecretToCluster(secret)
ctrl.updateState(secret, ctrl.getClusterState(cluster))
} else if secret.Labels[common.LabelKeySecretType] == common.SecretTypeRepository {
repo := db.SecretToRepo(secret)
ctrl.updateState(secret, ctrl.getRepoConnectionState(repo))
}
return
}
func (ctrl *SecretController) getRepoConnectionState(repo *v1alpha1.Repository) v1alpha1.ConnectionState {
state := v1alpha1.ConnectionState{
ModifiedAt: repo.ConnectionState.ModifiedAt,
Status: v1alpha1.ConnectionStatusUnknown,
}
err := git.TestRepo(repo.Repo, repo.Username, repo.Password, repo.SSHPrivateKey)
if err == nil {
state.Status = v1alpha1.ConnectionStatusSuccessful
} else {
state.Status = v1alpha1.ConnectionStatusFailed
state.Message = err.Error()
}
return state
}
func (ctrl *SecretController) getClusterState(cluster *v1alpha1.Cluster) v1alpha1.ConnectionState {
state := v1alpha1.ConnectionState{
ModifiedAt: cluster.ConnectionState.ModifiedAt,
Status: v1alpha1.ConnectionStatusUnknown,
}
kubeClientset, err := kubernetes.NewForConfig(cluster.RESTConfig())
if err == nil {
_, err = kubeClientset.Discovery().ServerVersion()
}
if err == nil {
state.Status = v1alpha1.ConnectionStatusSuccessful
} else {
state.Status = v1alpha1.ConnectionStatusFailed
state.Message = err.Error()
}
return state
}
func (ctrl *SecretController) updateState(secret *corev1.Secret, state v1alpha1.ConnectionState) {
annotationsPatch := make(map[string]string)
for key, value := range db.AnnotationsFromConnectionState(&state) {
if secret.Annotations[key] != value {
annotationsPatch[key] = value
}
}
if len(annotationsPatch) > 0 {
annotationsPatch[common.AnnotationConnectionModifiedAt] = metav1.Now().Format(time.RFC3339)
patchData, err := json.Marshal(map[string]interface{}{
"metadata": map[string]interface{}{
"annotations": annotationsPatch,
},
})
if err != nil {
log.Warnf("Unable to prepare secret state annotation patch: %v", err)
} else {
_, err := ctrl.kubeClient.CoreV1().Secrets(secret.Namespace).Patch(secret.Name, types.MergePatchType, patchData)
if err != nil {
log.Warnf("Unable to patch secret state annotation: %v", err)
}
}
}
}
func newSecretInformer(client kubernetes.Interface, resyncPeriod time.Duration, namespace string, secretQueue workqueue.RateLimitingInterface) cache.SharedIndexInformer {
informerFactory := informers.NewFilteredSharedInformerFactory(
client,
resyncPeriod,
namespace,
func(options *metav1.ListOptions) {
req, err := labels.NewRequirement(common.LabelKeySecretType, selection.In, []string{common.SecretTypeCluster, common.SecretTypeRepository})
if err != nil {
panic(err)
}
options.FieldSelector = fields.Everything().String()
labelSelector := labels.NewSelector().Add(*req)
options.LabelSelector = labelSelector.String()
},
)
informer := informerFactory.Core().V1().Secrets().Informer()
informer.AddEventHandler(
cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
key, err := cache.MetaNamespaceKeyFunc(obj)
if err == nil {
secretQueue.Add(key)
}
},
UpdateFunc: func(old, new interface{}) {
key, err := cache.MetaNamespaceKeyFunc(new)
if err == nil {
secretQueue.Add(key)
}
},
},
)
return informer
}
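// A standalone sketch of the selector construction above, using a
// hypothetical label key and values:
func exampleSecretSelector() string {
	req, err := labels.NewRequirement("secret-type", selection.In, []string{"cluster", "repository"})
	if err != nil {
		panic(err)
	}
	// Returns "secret-type in (cluster,repository)"
	return labels.NewSelector().Add(*req).String()
}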
func NewSecretController(kubeClient kubernetes.Interface, repoClientset reposerver.Clientset, resyncPeriod time.Duration, namespace string) *SecretController {
secretQueue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
return &SecretController{
kubeClient: kubeClient,
secretQueue: secretQueue,
secretInformer: newSecretInformer(kubeClient, resyncPeriod, namespace, secretQueue),
namespace: namespace,
repoClientset: repoClientset,
}
}


@@ -6,6 +6,14 @@ import (
"fmt"
"time"
log "github.com/sirupsen/logrus"
apierr "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/discovery"
"k8s.io/client-go/dynamic"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned"
@@ -14,34 +22,32 @@ import (
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/db"
"github.com/argoproj/argo-cd/util/diff"
"github.com/argoproj/argo-cd/util/kube"
kubeutil "github.com/argoproj/argo-cd/util/kube"
log "github.com/sirupsen/logrus"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/discovery"
"k8s.io/client-go/dynamic"
"k8s.io/client-go/rest"
)
const (
maxHistoryCnt = 5
)
// AppStateManager defines methods which allow comparing the application spec with the actual application state.
type AppStateManager interface {
CompareAppState(app *v1alpha1.Application) (*v1alpha1.ComparisonResult, *repository.ManifestResponse, error)
SyncAppState(app *v1alpha1.Application, revision string, overrides *[]v1alpha1.ComponentParameter, dryRun bool, prune bool) *v1alpha1.OperationState
CompareAppState(app *v1alpha1.Application, revision string, overrides []v1alpha1.ComponentParameter) (
*v1alpha1.ComparisonResult, *repository.ManifestResponse, []v1alpha1.ApplicationCondition, error)
SyncAppState(app *v1alpha1.Application, state *v1alpha1.OperationState)
}
// KsonnetAppStateManager allows comparing an application using the KSonnet CLI
type KsonnetAppStateManager struct {
// appStateManager allows comparing an application using the KSonnet CLI
type appStateManager struct {
db db.ArgoDB
appclientset appclientset.Interface
kubectl kubeutil.Kubectl
repoClientset reposerver.Clientset
namespace string
}
// groupLiveObjects deduplicates the list of Kubernetes resources and chooses the correct version of each resource: if a resource has a corresponding expected application resource, the method picks the
// Kubernetes resource with the matching version; otherwise it chooses a single Kubernetes resource of any version
func (ks *KsonnetAppStateManager) groupLiveObjects(liveObjs []*unstructured.Unstructured, targetObjs []*unstructured.Unstructured) map[string]*unstructured.Unstructured {
func groupLiveObjects(liveObjs []*unstructured.Unstructured, targetObjs []*unstructured.Unstructured) map[string]*unstructured.Unstructured {
targetByFullName := make(map[string]*unstructured.Unstructured)
for _, obj := range targetObjs {
targetByFullName[getResourceFullName(obj)] = obj
@@ -79,19 +85,33 @@ func (ks *KsonnetAppStateManager) groupLiveObjects(liveObjs []*unstructured.Unst
return liveByFullName
}
// CompareAppState compares application spec and real app state using KSonnet
func (ks *KsonnetAppStateManager) CompareAppState(app *v1alpha1.Application) (*v1alpha1.ComparisonResult, *repository.ManifestResponse, error) {
repo := ks.getRepo(app.Spec.Source.RepoURL)
conn, repoClient, err := ks.repoClientset.NewRepositoryClient()
func (s *appStateManager) getTargetObjs(app *v1alpha1.Application, revision string, overrides []v1alpha1.ComponentParameter) ([]*unstructured.Unstructured, *repository.ManifestResponse, error) {
repo := s.getRepo(app.Spec.Source.RepoURL)
conn, repoClient, err := s.repoClientset.NewRepositoryClient()
if err != nil {
return nil, nil, err
}
defer util.Close(conn)
overrides := make([]*v1alpha1.ComponentParameter, len(app.Spec.Source.ComponentParameterOverrides))
if app.Spec.Source.ComponentParameterOverrides != nil {
if revision == "" {
revision = app.Spec.Source.TargetRevision
}
// Decide what overrides to compare with.
var mfReqOverrides []*v1alpha1.ComponentParameter
if overrides != nil {
// If overrides is supplied, use that
mfReqOverrides = make([]*v1alpha1.ComponentParameter, len(overrides))
for i := range overrides {
item := overrides[i]
mfReqOverrides[i] = &item
}
} else {
// Otherwise, use the overrides in the app spec
mfReqOverrides = make([]*v1alpha1.ComponentParameter, len(app.Spec.Source.ComponentParameterOverrides))
for i := range app.Spec.Source.ComponentParameterOverrides {
item := app.Spec.Source.ComponentParameterOverrides[i]
overrides[i] = &item
mfReqOverrides[i] = &item
}
}
@@ -99,40 +119,54 @@ func (ks *KsonnetAppStateManager) CompareAppState(app *v1alpha1.Application) (*v
Repo: repo,
Environment: app.Spec.Source.Environment,
Path: app.Spec.Source.Path,
Revision: app.Spec.Source.TargetRevision,
ComponentParameterOverrides: overrides,
Revision: revision,
ComponentParameterOverrides: mfReqOverrides,
AppLabel: app.Name,
ValueFiles: app.Spec.Source.ValuesFiles,
Namespace: app.Spec.Destination.Namespace,
})
if err != nil {
return nil, nil, err
}
targetObjs := make([]*unstructured.Unstructured, len(manifestInfo.Manifests))
for i, manifest := range manifestInfo.Manifests {
targetObjs := make([]*unstructured.Unstructured, 0)
for _, manifest := range manifestInfo.Manifests {
obj, err := v1alpha1.UnmarshalToUnstructured(manifest)
if err != nil {
return nil, nil, err
}
targetObjs[i] = obj
if isHook(obj) {
continue
}
targetObjs = append(targetObjs, obj)
}
return targetObjs, manifestInfo, nil
}
server, namespace := app.Spec.Destination.Server, app.Spec.Destination.Namespace
func (s *appStateManager) getLiveObjs(app *v1alpha1.Application, targetObjs []*unstructured.Unstructured) (
[]*unstructured.Unstructured, map[string]*unstructured.Unstructured, error) {
log.Infof("Comparing app %s state in cluster %s (namespace: %s)", app.ObjectMeta.Name, server, namespace)
// Get the REST config for the cluster corresponding to the environment
clst, err := ks.db.GetCluster(context.Background(), server)
clst, err := s.db.GetCluster(context.Background(), app.Spec.Destination.Server)
if err != nil {
return nil, nil, err
}
restConfig := clst.RESTConfig()
// Retrieve the live versions of the objects
liveObjs, err := kubeutil.GetResourcesWithLabel(restConfig, namespace, common.LabelApplicationName, app.Name)
// Retrieve the live versions of the objects, excluding any hook objects
labeledObjs, err := kubeutil.GetResourcesWithLabel(restConfig, app.Spec.Destination.Namespace, common.LabelApplicationName, app.Name)
if err != nil {
return nil, nil, err
}
liveObjs := make([]*unstructured.Unstructured, 0)
for _, obj := range labeledObjs {
if isHook(obj) {
continue
}
liveObjs = append(liveObjs, obj)
}
liveObjByFullName := ks.groupLiveObjects(liveObjs, targetObjs)
liveObjByFullName := groupLiveObjects(liveObjs, targetObjs)
controlledLiveObj := make([]*unstructured.Unstructured, len(targetObjs))
@@ -145,7 +179,7 @@ func (ks *KsonnetAppStateManager) CompareAppState(app *v1alpha1.Application) (*v
for i, targetObj := range targetObjs {
fullName := getResourceFullName(targetObj)
liveObj := liveObjByFullName[fullName]
if liveObj == nil {
if liveObj == nil && targetObj.GetName() != "" {
// If we get here, it indicates we did not find the live resource when querying using
// our app label. However, it is possible that the resource was created/modified outside
// of ArgoCD. In order to determine that it is truly missing, we fall back to perform a
@@ -157,16 +191,56 @@ func (ks *KsonnetAppStateManager) CompareAppState(app *v1alpha1.Application) (*v
}
apiResource, err := kubeutil.ServerResourceForGroupVersionKind(disco, gvk)
if err != nil {
return nil, nil, err
}
liveObj, err = kubeutil.GetLiveResource(dclient, targetObj, apiResource, namespace)
if err != nil {
return nil, nil, err
if !apierr.IsNotFound(err) {
return nil, nil, err
}
// If we get here, the app is comprised of a custom resource which has yet to be registered
} else {
liveObj, err = kubeutil.GetLiveResource(dclient, targetObj, apiResource, app.Spec.Destination.Namespace)
if err != nil {
return nil, nil, err
}
}
}
controlledLiveObj[i] = liveObj
delete(liveObjByFullName, fullName)
}
return controlledLiveObj, liveObjByFullName, nil
}
// CompareAppState compares application git state to the live app state, using the specified
// revision and supplied overrides. If revision or overrides are empty, then compares against
// revision and overrides in the app spec.
func (s *appStateManager) CompareAppState(app *v1alpha1.Application, revision string, overrides []v1alpha1.ComponentParameter) (
*v1alpha1.ComparisonResult, *repository.ManifestResponse, []v1alpha1.ApplicationCondition, error) {
failedToLoadObjs := false
conditions := make([]v1alpha1.ApplicationCondition, 0)
targetObjs, manifestInfo, err := s.getTargetObjs(app, revision, overrides)
if err != nil {
targetObjs = make([]*unstructured.Unstructured, 0)
conditions = append(conditions, v1alpha1.ApplicationCondition{Type: v1alpha1.ApplicationConditionComparisonError, Message: err.Error()})
failedToLoadObjs = true
}
controlledLiveObj, liveObjByFullName, err := s.getLiveObjs(app, targetObjs)
if err != nil {
controlledLiveObj = make([]*unstructured.Unstructured, len(targetObjs))
liveObjByFullName = make(map[string]*unstructured.Unstructured)
conditions = append(conditions, v1alpha1.ApplicationCondition{Type: v1alpha1.ApplicationConditionComparisonError, Message: err.Error()})
failedToLoadObjs = true
}
for _, liveObj := range controlledLiveObj {
if liveObj != nil && liveObj.GetLabels() != nil {
if appLabelVal, ok := liveObj.GetLabels()[common.LabelApplicationName]; ok && appLabelVal != "" && appLabelVal != app.Name {
conditions = append(conditions, v1alpha1.ApplicationCondition{
Type: v1alpha1.ApplicationConditionSharedResourceWarning,
Message: fmt.Sprintf("Resource %s/%s is controller by applications '%s' and '%s'", liveObj.GetKind(), liveObj.GetName(), app.Name, appLabelVal),
})
}
}
}
// Move root level live resources to controlledLiveObj and add nil to targetObjs to indicate that target object is missing
for fullName := range liveObjByFullName {
@@ -177,11 +251,14 @@ func (ks *KsonnetAppStateManager) CompareAppState(app *v1alpha1.Application) (*v
}
}
log.Infof("Comparing app %s state in cluster %s (namespace: %s)", app.ObjectMeta.Name, app.Spec.Destination.Server, app.Spec.Destination.Namespace)
// Do the actual comparison
diffResults, err := diff.DiffArray(targetObjs, controlledLiveObj)
if err != nil {
return nil, nil, err
return nil, nil, nil, err
}
comparisonStatus := v1alpha1.ComparisonStatusSynced
resources := make([]v1alpha1.ResourceState, len(targetObjs))
@@ -206,7 +283,7 @@ func (ks *KsonnetAppStateManager) CompareAppState(app *v1alpha1.Application) (*v
} else {
targetObjBytes, err := json.Marshal(targetObjs[i].Object)
if err != nil {
return nil, nil, err
return nil, nil, nil, err
}
resState.TargetState = string(targetObjBytes)
}
@@ -219,7 +296,7 @@ func (ks *KsonnetAppStateManager) CompareAppState(app *v1alpha1.Application) (*v
} else {
liveObjBytes, err := json.Marshal(controlledLiveObj[i].Object)
if err != nil {
return nil, nil, err
return nil, nil, nil, err
}
resState.LiveState = string(liveObjBytes)
}
@@ -232,19 +309,26 @@ func (ks *KsonnetAppStateManager) CompareAppState(app *v1alpha1.Application) (*v
if liveResource != nil {
childResources, err := getChildren(liveResource, liveObjByFullName)
if err != nil {
return nil, nil, err
return nil, nil, nil, err
}
resource.ChildLiveResources = childResources
resources[i] = resource
}
}
if failedToLoadObjs {
comparisonStatus = v1alpha1.ComparisonStatusUnknown
}
compResult := v1alpha1.ComparisonResult{
ComparedTo: app.Spec.Source,
ComparedAt: metav1.Time{Time: time.Now().UTC()},
Resources: resources,
Status: comparisonStatus,
}
return &compResult, manifestInfo, nil
if manifestInfo != nil {
compResult.Revision = manifestInfo.Revision
}
return &compResult, manifestInfo, conditions, nil
}
func hasParent(obj *unstructured.Unstructured) bool {
@@ -286,29 +370,7 @@ func getResourceFullName(obj *unstructured.Unstructured) string {
return fmt.Sprintf("%s:%s", obj.GetKind(), obj.GetName())
}
func (s *KsonnetAppStateManager) SyncAppState(
app *v1alpha1.Application, revision string, overrides *[]v1alpha1.ComponentParameter, dryRun bool, prune bool) *v1alpha1.OperationState {
if revision != "" {
app.Spec.Source.TargetRevision = revision
}
if overrides != nil {
app.Spec.Source.ComponentParameterOverrides = *overrides
}
opRes, manifest := s.syncAppResources(app, dryRun, prune)
if !dryRun && opRes.Phase.Successful() {
err := s.persistDeploymentInfo(app, manifest.Revision, manifest.Params, nil)
if err != nil {
opRes.Phase = v1alpha1.OperationError
opRes.Message = fmt.Sprintf("failed to record sync to history: %v", err)
}
}
return opRes
}
func (s *KsonnetAppStateManager) getRepo(repoURL string) *v1alpha1.Repository {
func (s *appStateManager) getRepo(repoURL string) *v1alpha1.Repository {
repo, err := s.db.GetRepository(context.Background(), repoURL)
if err != nil {
// If we couldn't retrieve from the repo service, assume public repositories
@@ -317,7 +379,7 @@ func (s *KsonnetAppStateManager) getRepo(repoURL string) *v1alpha1.Repository {
return repo
}
func (s *KsonnetAppStateManager) persistDeploymentInfo(
func (s *appStateManager) persistDeploymentInfo(
app *v1alpha1.Application, revision string, envParams []*v1alpha1.ComponentParameter, overrides *[]v1alpha1.ComponentParameter) error {
params := make([]v1alpha1.ComponentParameter, len(envParams))
@@ -325,16 +387,16 @@ func (s *KsonnetAppStateManager) persistDeploymentInfo(
param := *envParams[i]
params[i] = param
}
var nextId int64 = 0
var nextID int64 = 0
if len(app.Status.History) > 0 {
nextId = app.Status.History[len(app.Status.History)-1].ID + 1
nextID = app.Status.History[len(app.Status.History)-1].ID + 1
}
history := append(app.Status.History, v1alpha1.DeploymentInfo{
ComponentParameterOverrides: app.Spec.Source.ComponentParameterOverrides,
Revision: revision,
Params: params,
DeployedAt: metav1.NewTime(time.Now()),
ID: nextId,
DeployedAt: metav1.NewTime(time.Now().UTC()),
ID: nextID,
})
if len(history) > maxHistoryCnt {
@@ -353,141 +415,18 @@ func (s *KsonnetAppStateManager) persistDeploymentInfo(
return err
}
func (s *KsonnetAppStateManager) syncAppResources(
app *v1alpha1.Application,
dryRun bool,
prune bool) (*v1alpha1.OperationState, *repository.ManifestResponse) {
opRes := v1alpha1.OperationState{
SyncResult: &v1alpha1.SyncOperationResult{},
}
comparison, manifestInfo, err := s.CompareAppState(app)
if err != nil {
opRes.Phase = v1alpha1.OperationError
opRes.Message = err.Error()
return &opRes, manifestInfo
}
clst, err := s.db.GetCluster(context.Background(), app.Spec.Destination.Server)
if err != nil {
opRes.Phase = v1alpha1.OperationError
opRes.Message = err.Error()
return &opRes, manifestInfo
}
config := clst.RESTConfig()
opRes.SyncResult.Resources = make([]*v1alpha1.ResourceDetails, len(comparison.Resources))
liveObjs := make([]*unstructured.Unstructured, len(comparison.Resources))
targetObjs := make([]*unstructured.Unstructured, len(comparison.Resources))
// First perform a `kubectl apply --dry-run` against all the manifests. This will detect most
// (but not all) validation issues with the users' manifests (e.g. will detect syntax issues,
// but will not detect if they are mutating immutable fields). If anything fails, we will
// refuse to perform the sync.
dryRunSuccessful := true
for i, resourceState := range comparison.Resources {
liveObj, err := resourceState.LiveObject()
if err != nil {
opRes.Phase = v1alpha1.OperationError
opRes.Message = fmt.Sprintf("Failed to unmarshal live object: %v", err)
return &opRes, manifestInfo
}
targetObj, err := resourceState.TargetObject()
if err != nil {
opRes.Phase = v1alpha1.OperationError
opRes.Message = fmt.Sprintf("Failed to unmarshal target object: %v", err)
return &opRes, manifestInfo
}
liveObjs[i] = liveObj
targetObjs[i] = targetObj
resDetails, successful := syncObject(config, app.Spec.Destination.Namespace, targetObj, liveObj, prune, true)
if !successful {
dryRunSuccessful = false
}
opRes.SyncResult.Resources[i] = &resDetails
}
if !dryRunSuccessful {
opRes.Phase = v1alpha1.OperationFailed
opRes.Message = "one or more objects failed to apply (dry run)"
return &opRes, manifestInfo
}
if dryRun {
opRes.Phase = v1alpha1.OperationSucceeded
opRes.Message = "successfully synced (dry run)"
return &opRes, manifestInfo
}
// If we get here, all objects passed their dry-run, so we are now ready to actually perform the
// `kubectl apply`. Loop through the resources again, this time without dry-run.
syncSuccessful := true
for i := range comparison.Resources {
resDetails, successful := syncObject(config, app.Spec.Destination.Namespace, targetObjs[i], liveObjs[i], prune, false)
if !successful {
syncSuccessful = false
}
opRes.SyncResult.Resources[i] = &resDetails
}
if !syncSuccessful {
opRes.Message = "one or more objects failed to apply"
opRes.Phase = v1alpha1.OperationFailed
} else {
opRes.Message = "successfully synced"
opRes.Phase = v1alpha1.OperationSucceeded
}
return &opRes, manifestInfo
}
// syncObject performs a sync of a single resource
func syncObject(config *rest.Config, namespace string, targetObj, liveObj *unstructured.Unstructured, prune, dryRun bool) (v1alpha1.ResourceDetails, bool) {
obj := targetObj
if obj == nil {
obj = liveObj
}
resDetails := v1alpha1.ResourceDetails{
Name: obj.GetName(),
Kind: obj.GetKind(),
Namespace: namespace,
}
needsDelete := targetObj == nil
successful := true
if needsDelete {
if prune {
if dryRun {
resDetails.Message = "pruned (dry run)"
} else {
err := kubeutil.DeleteResource(config, liveObj, namespace)
if err != nil {
resDetails.Message = err.Error()
successful = false
} else {
resDetails.Message = "pruned"
}
}
} else {
resDetails.Message = "ignored (requires pruning)"
}
} else {
message, err := kube.ApplyResource(config, targetObj, namespace, dryRun)
if err != nil {
resDetails.Message = err.Error()
successful = false
} else {
resDetails.Message = message
}
}
return resDetails, successful
}
// NewAppStateManager creates a new instance of the Ksonnet app comparator
func NewAppStateManager(
db db.ArgoDB,
appclientset appclientset.Interface,
repoClientset reposerver.Clientset,
namespace string,
kubectl kubeutil.Kubectl,
) AppStateManager {
return &KsonnetAppStateManager{
return &appStateManager{
db: db,
appclientset: appclientset,
kubectl: kubectl,
repoClientset: repoClientset,
namespace: namespace,
}

52
controller/state_test.go Normal file

@@ -0,0 +1,52 @@
package controller
import (
"testing"
"github.com/ghodss/yaml"
"github.com/stretchr/testify/assert"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)
var podManifest = []byte(`
apiVersion: v1
kind: Pod
metadata:
name: my-pod
spec:
containers:
- image: nginx:1.7.9
name: nginx
resources:
requests:
cpu: 0.2
`)
func newPod() *unstructured.Unstructured {
var un unstructured.Unstructured
err := yaml.Unmarshal(podManifest, &un)
if err != nil {
panic(err)
}
return &un
}
func TestIsHook(t *testing.T) {
pod := newPod()
assert.False(t, isHook(pod))
pod.SetAnnotations(map[string]string{"helm.sh/hook": "post-install"})
assert.True(t, isHook(pod))
pod = newPod()
pod.SetAnnotations(map[string]string{"argocd.argoproj.io/hook": "PreSync"})
assert.True(t, isHook(pod))
pod = newPod()
pod.SetAnnotations(map[string]string{"argocd.argoproj.io/hook": "Skip"})
assert.False(t, isHook(pod))
pod = newPod()
pod.SetAnnotations(map[string]string{"argocd.argoproj.io/hook": "Unknown"})
assert.False(t, isHook(pod))
}

1051
controller/sync.go Normal file

File diff suppressed because it is too large

391
controller/sync_test.go Normal file

@@ -0,0 +1,391 @@
package controller
import (
"fmt"
"sort"
"testing"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util/kube"
log "github.com/sirupsen/logrus"
"github.com/stretchr/testify/assert"
apiv1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
fakedisco "k8s.io/client-go/discovery/fake"
"k8s.io/client-go/rest"
testcore "k8s.io/client-go/testing"
)
type kubectlOutput struct {
output string
err error
}
type mockKubectlCmd struct {
commands map[string]kubectlOutput
}
func (k mockKubectlCmd) DeleteResource(config *rest.Config, obj *unstructured.Unstructured, namespace string) error {
command, ok := k.commands[obj.GetName()]
if !ok {
return nil
}
return command.err
}
func (k mockKubectlCmd) ApplyResource(config *rest.Config, obj *unstructured.Unstructured, namespace string, dryRun, force bool) (string, error) {
command, ok := k.commands[obj.GetName()]
if !ok {
return "", nil
}
return command.output, command.err
}
// ConvertToVersion converts an unstructured object into the specified group/version
func (k mockKubectlCmd) ConvertToVersion(obj *unstructured.Unstructured, group, version string) (*unstructured.Unstructured, error) {
return obj, nil
}
func newTestSyncCtx(resources ...*v1.APIResourceList) *syncContext {
fakeDisco := &fakedisco.FakeDiscovery{Fake: &testcore.Fake{}}
fakeDisco.Resources = append(resources, &v1.APIResourceList{
APIResources: []v1.APIResource{
{Kind: "pod", Namespaced: true},
{Kind: "deployment", Namespaced: true},
{Kind: "service", Namespaced: true},
},
})
kube.FlushServerResourcesCache()
return &syncContext{
comparison: &v1alpha1.ComparisonResult{},
config: &rest.Config{},
namespace: "test-namespace",
syncRes: &v1alpha1.SyncOperationResult{},
syncOp: &v1alpha1.SyncOperation{
Prune: true,
SyncStrategy: &v1alpha1.SyncStrategy{
Apply: &v1alpha1.SyncStrategyApply{},
},
},
proj: &v1alpha1.AppProject{
ObjectMeta: v1.ObjectMeta{
Name: "test",
},
Spec: v1alpha1.AppProjectSpec{
ClusterResourceWhitelist: []v1.GroupKind{
{Group: "*", Kind: "*"},
},
},
},
opState: &v1alpha1.OperationState{},
disco: fakeDisco,
log: log.WithFields(log.Fields{"application": "fake-app"}),
}
}
func TestSyncCreateInSortedOrder(t *testing.T) {
syncCtx := newTestSyncCtx()
syncCtx.kubectl = mockKubectlCmd{}
syncCtx.comparison = &v1alpha1.ComparisonResult{
Resources: []v1alpha1.ResourceState{{
LiveState: "",
TargetState: "{\"kind\":\"pod\"}",
}, {
LiveState: "",
TargetState: "{\"kind\":\"service\"}",
},
},
}
syncCtx.sync()
assert.Len(t, syncCtx.syncRes.Resources, 2)
for i := range syncCtx.syncRes.Resources {
if syncCtx.syncRes.Resources[i].Kind == "pod" {
assert.Equal(t, v1alpha1.ResourceDetailsSynced, syncCtx.syncRes.Resources[i].Status)
} else if syncCtx.syncRes.Resources[i].Kind == "service" {
assert.Equal(t, v1alpha1.ResourceDetailsSynced, syncCtx.syncRes.Resources[i].Status)
} else {
t.Error("Resource isn't a pod or a service")
}
}
syncCtx.sync()
assert.Equal(t, syncCtx.opState.Phase, v1alpha1.OperationSucceeded)
}
func TestSyncCreateNotWhitelistedClusterResources(t *testing.T) {
syncCtx := newTestSyncCtx(&v1.APIResourceList{
GroupVersion: v1alpha1.SchemeGroupVersion.String(),
APIResources: []v1.APIResource{
{Name: "workflows", Namespaced: false, Kind: "Workflow", Group: "argoproj.io"},
{Name: "application", Namespaced: false, Kind: "Application", Group: "argoproj.io"},
},
}, &v1.APIResourceList{
GroupVersion: "rbac.authorization.k8s.io/v1",
APIResources: []v1.APIResource{
{Name: "clusterroles", Namespaced: false, Kind: "ClusterRole", Group: "rbac.authorization.k8s.io"},
},
})
syncCtx.proj.Spec.ClusterResourceWhitelist = []v1.GroupKind{
{Group: "argoproj.io", Kind: "*"},
}
syncCtx.kubectl = mockKubectlCmd{}
syncCtx.comparison = &v1alpha1.ComparisonResult{
Resources: []v1alpha1.ResourceState{{
LiveState: "",
TargetState: `{"apiVersion": "rbac.authorization.k8s.io/v1", "kind": "ClusterRole", "metadata": {"name": "argo-ui-cluster-role" }}`,
}},
}
syncCtx.sync()
assert.Len(t, syncCtx.syncRes.Resources, 1)
assert.Equal(t, v1alpha1.ResourceDetailsSyncFailed, syncCtx.syncRes.Resources[0].Status)
assert.Contains(t, syncCtx.syncRes.Resources[0].Message, "not permitted in project")
}
func TestSyncBlacklistedNamespacedResources(t *testing.T) {
syncCtx := newTestSyncCtx()
syncCtx.proj.Spec.NamespaceResourceBlacklist = []v1.GroupKind{
{Group: "*", Kind: "deployment"},
}
syncCtx.kubectl = mockKubectlCmd{}
syncCtx.comparison = &v1alpha1.ComparisonResult{
Resources: []v1alpha1.ResourceState{{
LiveState: "",
TargetState: "{\"kind\":\"deployment\"}",
}},
}
syncCtx.sync()
assert.Len(t, syncCtx.syncRes.Resources, 1)
assert.Equal(t, v1alpha1.ResourceDetailsSyncFailed, syncCtx.syncRes.Resources[0].Status)
assert.Contains(t, syncCtx.syncRes.Resources[0].Message, "not permitted in project")
}
func TestSyncSuccessfully(t *testing.T) {
syncCtx := newTestSyncCtx()
syncCtx.kubectl = mockKubectlCmd{}
syncCtx.comparison = &v1alpha1.ComparisonResult{
Resources: []v1alpha1.ResourceState{{
LiveState: "",
TargetState: "{\"kind\":\"service\"}",
}, {
LiveState: "{\"kind\":\"pod\"}",
TargetState: "",
},
},
}
syncCtx.sync()
assert.Len(t, syncCtx.syncRes.Resources, 2)
for i := range syncCtx.syncRes.Resources {
if syncCtx.syncRes.Resources[i].Kind == "pod" {
assert.Equal(t, v1alpha1.ResourceDetailsSyncedAndPruned, syncCtx.syncRes.Resources[i].Status)
} else if syncCtx.syncRes.Resources[i].Kind == "service" {
assert.Equal(t, v1alpha1.ResourceDetailsSynced, syncCtx.syncRes.Resources[i].Status)
} else {
t.Error("Resource isn't a pod or a service")
}
}
syncCtx.sync()
assert.Equal(t, syncCtx.opState.Phase, v1alpha1.OperationSucceeded)
}
func TestSyncDeleteSuccessfully(t *testing.T) {
syncCtx := newTestSyncCtx()
syncCtx.kubectl = mockKubectlCmd{}
syncCtx.comparison = &v1alpha1.ComparisonResult{
Resources: []v1alpha1.ResourceState{{
LiveState: "{\"kind\":\"service\"}",
TargetState: "",
}, {
LiveState: "{\"kind\":\"pod\"}",
TargetState: "",
},
},
}
syncCtx.sync()
for i := range syncCtx.syncRes.Resources {
if syncCtx.syncRes.Resources[i].Kind == "pod" {
assert.Equal(t, v1alpha1.ResourceDetailsSyncedAndPruned, syncCtx.syncRes.Resources[i].Status)
} else if syncCtx.syncRes.Resources[i].Kind == "service" {
assert.Equal(t, v1alpha1.ResourceDetailsSyncedAndPruned, syncCtx.syncRes.Resources[i].Status)
} else {
t.Error("Resource isn't a pod or a service")
}
}
syncCtx.sync()
assert.Equal(t, syncCtx.opState.Phase, v1alpha1.OperationSucceeded)
}
func TestSyncCreateFailure(t *testing.T) {
syncCtx := newTestSyncCtx()
syncCtx.kubectl = mockKubectlCmd{
commands: map[string]kubectlOutput{
"test-service": {
output: "",
err: fmt.Errorf("error: error validating \"test.yaml\": error validating data: apiVersion not set; if you choose to ignore these errors, turn validation off with --validate=false"),
},
},
}
syncCtx.comparison = &v1alpha1.ComparisonResult{
Resources: []v1alpha1.ResourceState{{
LiveState: "",
TargetState: "{\"kind\":\"service\", \"metadata\":{\"name\":\"test-service\"}}",
},
},
}
syncCtx.sync()
assert.Len(t, syncCtx.syncRes.Resources, 1)
assert.Equal(t, v1alpha1.ResourceDetailsSyncFailed, syncCtx.syncRes.Resources[0].Status)
}
func TestSyncPruneFailure(t *testing.T) {
syncCtx := newTestSyncCtx()
syncCtx.kubectl = mockKubectlCmd{
commands: map[string]kubectlOutput{
"test-service": {
output: "",
err: fmt.Errorf(" error: timed out waiting for \"test-service\" to be synced"),
},
},
}
syncCtx.comparison = &v1alpha1.ComparisonResult{
Resources: []v1alpha1.ResourceState{{
LiveState: "{\"kind\":\"service\", \"metadata\":{\"name\":\"test-service\"}}",
TargetState: "",
},
},
}
syncCtx.sync()
assert.Len(t, syncCtx.syncRes.Resources, 1)
assert.Equal(t, v1alpha1.ResourceDetailsSyncFailed, syncCtx.syncRes.Resources[0].Status)
}
func TestRunWorkflows(t *testing.T) {
// syncCtx := newTestSyncCtx()
// syncCtx.doWorkflowSync(nil, nil)
}
func unsortedManifest() []syncTask {
return []syncTask{
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Pod",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Service",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "PersistentVolume",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "ConfigMap",
},
},
},
}
}
func sortedManifest() []syncTask {
return []syncTask{
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "ConfigMap",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "PersistentVolume",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Service",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Pod",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
},
},
},
}
}
func TestSortKubernetesResourcesSuccessfully(t *testing.T) {
unsorted := unsortedManifest()
ks := newKindSorter(unsorted, resourceOrder)
sort.Sort(ks)
expectedOrder := sortedManifest()
assert.Equal(t, len(unsorted), len(expectedOrder))
for i, sorted := range unsorted {
assert.Equal(t, expectedOrder[i], sorted)
}
}
func TestSortManifestHandleNil(t *testing.T) {
task := syncTask{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Service",
},
},
}
manifest := []syncTask{
{},
task,
}
ks := newKindSorter(manifest, resourceOrder)
sort.Sort(ks)
assert.Equal(t, task, manifest[0])
assert.Nil(t, manifest[1].targetObj)
}
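The `newKindSorter` and `resourceOrder` referenced by these tests are defined in `controller/sync.go`, whose diff is suppressed above. As an editorial aside, the following self-contained sketch reproduces the ordering the tests expect (ConfigMap before PersistentVolume before Service before Pod, with unknown kinds next and nil objects last); the names here are illustrative, not the actual implementation:

```go
package main

import (
	"fmt"
	"sort"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// kindRank mirrors the ordering the tests above assert on.
var kindRank = map[string]int{
	"ConfigMap":        0,
	"PersistentVolume": 1,
	"Service":          2,
	"Pod":              3,
}

// rank places known kinds in order, unknown (or empty) kinds after them,
// and nil objects last of all, matching TestSortManifestHandleNil.
func rank(obj *unstructured.Unstructured) int {
	if obj == nil {
		return len(kindRank) + 1
	}
	if r, ok := kindRank[obj.GetKind()]; ok {
		return r
	}
	return len(kindRank)
}

func main() {
	objs := []*unstructured.Unstructured{
		{Object: map[string]interface{}{"kind": "Pod"}},
		nil,
		{Object: map[string]interface{}{"kind": "ConfigMap"}},
	}
	sort.SliceStable(objs, func(i, j int) bool { return rank(objs[i]) < rank(objs[j]) })
	for _, o := range objs {
		if o == nil {
			fmt.Println("<nil>")
		} else {
			fmt.Println(o.GetKind())
		}
	}
	// Prints: ConfigMap, Pod, <nil>
}
```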

16
docs/README.md Normal file

@@ -0,0 +1,16 @@
# ArgoCD Documentation
## [Getting Started](getting_started.md)
## Concepts
* [Architecture](architecture.md)
* [Tracking Strategies](tracking_strategies.md)
## Features
* [Application Sources](application_sources.md)
* [Application Parameters](parameters.md)
* [Resource Health](health.md)
* [Resource Hooks](resource_hooks.md)
* [Single Sign On](sso.md)
* [Webhooks](webhook.md)
* [RBAC](rbac.md)

103
docs/application_sources.md Normal file

@@ -0,0 +1,103 @@
# Application Source Types
ArgoCD supports several different ways in which kubernetes manifests can be defined:
* [ksonnet](https://ksonnet.io) applications
* [kustomize](https://kustomize.io) applications
* [helm](https://helm.sh) charts
* Simple directory of YAML/JSON manifests
Some additional considerations should be made when deploying apps of a particular type:
## Ksonnet
### Environments
Ksonnet has a first class concept of an "environment." To create an application from a ksonnet
app directory, an environment must be specified. For example, the following command creates the
"guestbook-default" app, which points to the `default` environment:
```
argocd app create guestbook-default --repo https://github.com/argoproj/argocd-example-apps.git --path guestbook --env default
```
### Parameters
Ksonnet parameters all belong to a component. For example, the following are the parameters
available in the guestbook app, all of which belong to the `guestbook-ui` component:
```
$ ks param list
COMPONENT PARAM VALUE
========= ===== =====
guestbook-ui containerPort 80
guestbook-ui image "gcr.io/heptio-images/ks-guestbook-demo:0.1"
guestbook-ui name "guestbook-ui"
guestbook-ui replicas 1
guestbook-ui servicePort 80
guestbook-ui type "LoadBalancer"
```
When overriding ksonnet parameters in ArgoCD, the component name should also be specified in the
`argocd app set` command, in the form of `-p COMPONENT=PARAM=VALUE`. For example:
```
argocd app set guestbook-default -p guestbook-ui=image=gcr.io/heptio-images/ks-guestbook-demo:0.1
```
## Helm
### Values Files
Helm has the ability to use a different, or even multiple, "values.yaml" files to derive its
parameters from. Alternate or multiple values file(s) can be specified using the `--values`
flag. The flag can be repeated to support multiple values files:
```
argocd app set helm-guestbook --values values-production.yaml
```
### Helm Parameters
Helm has the ability to set parameter values, which override any values in
a `values.yaml`. For example, `service.type` is a common parameter which is exposed in a Helm chart:
```
helm template . --set service.type=LoadBalancer
```
Similarly, ArgoCD can override values in the `values.yaml` parameters using the `argocd app set` command,
in the form of `-p PARAM=VALUE`. For example:
```
argocd app set helm-guestbook -p service.type=LoadBalancer
```
### Helm Hooks
Helm hooks are equivalent in concept to [ArgoCD resource hooks](resource_hooks.md). In helm, a hook
is any normal kubernetes resource annotated with the `helm.sh/hook` annotation. When ArgoCD deploys
a helm application which contains helm hooks, all helm hook resources are currently ignored during
the `kubectl apply` of the manifests. There is an
[open issue](https://github.com/argoproj/argo-cd/issues/355) to map Helm hooks to ArgoCD's concept
of Pre/Post/Sync hooks.
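As a rough illustration of how hook resources can be detected, the following is a minimal sketch consistent with the `TestIsHook` cases in `controller/state_test.go` shown earlier in this diff; the actual `isHook` implementation lives in the suppressed `controller/sync.go` and may differ, and the function name here is illustrative:

```go
package controller

import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// isHookSketch is an illustrative stand-in for isHook: a resource counts as a
// hook if it carries the helm hook annotation, or an ArgoCD hook annotation
// with a recognized lifecycle value. "Skip" and unknown values do not count,
// matching the TestIsHook expectations above.
func isHookSketch(obj *unstructured.Unstructured) bool {
	annotations := obj.GetAnnotations()
	if annotations == nil {
		return false
	}
	// Any resource annotated as a helm hook is treated as a hook.
	if _, ok := annotations["helm.sh/hook"]; ok {
		return true
	}
	// For ArgoCD hooks, only recognized lifecycle values count.
	switch annotations["argocd.argoproj.io/hook"] {
	case "PreSync", "Sync", "PostSync":
		return true
	}
	return false
}
```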
### Random Data
Helm templating has the ability to generate random data during chart rendering via the
`randAlphaNum` function. Many helm charts from the [charts repository](https://github.com/helm/charts)
make use of this feature. For example, the following is the secret for the
[redis helm chart](https://github.com/helm/charts/blob/master/stable/redis/templates/secrets.yaml):
```
data:
{{- if .Values.password }}
redis-password: {{ .Values.password | b64enc | quote }}
{{- else }}
redis-password: {{ randAlphaNum 10 | b64enc | quote }}
{{- end }}
```
The ArgoCD application controller periodically compares git state against the live state, running
the `helm template <CHART>` command to generate the helm manifests. Because the random value is
regenerated every time the comparison is made, any application which makes use of the `randAlphaNum`
function will always be in an `OutOfSync` state. This can be mitigated by explicitly setting a
value in the values.yaml, such that the value is stable between each comparison. For example:
```
argocd app set redis -p password=abc123
```

View File

@@ -22,15 +22,57 @@ manifests when provided the following inputs:
* repository URL
* git revision (commit, tag, branch)
* application path
* application environment
* template specific settings: parameters, ksonnet environments, helm values.yaml
### Application Controller
The application controller is a Kubernetes controller which continuously monitors running
applications and compares the current, live state against the desired target state (as specified in
the git repo). It detects out-of-sync application state and optionally takes corrective action. It
is responsible for invoking any user-defined handlers (argo workflows) for Sync, OutOfSync events
the git repo). It detects `OutOfSync` application state and optionally takes corrective action. It
is responsible for invoking any user-defined hooks for lifecycle events (PreSync, Sync, PostSync).
### Application CRD (Custom Resource Definition)
The Application CRD is the Kubernetes resource object representing a deployed application instance
in an environment. It holds a reference to the desired target state (repo, revision, app, environment)
of which the application controller will enforce state against.
in an environment. It is defined by two key pieces of information:
* `source` reference to the desired state in git (repository, revision, path, environment)
* `destination` reference to the target cluster and namespace.
An example spec is as follows:
```
spec:
project: default
source:
repoURL: https://github.com/argoproj/argocd-example-apps.git
targetRevision: HEAD
path: guestbook
environment: default
destination:
server: https://kubernetes.default.svc
namespace: default
```
### AppProject CRD (Custom Resource Definition)
The AppProject CRD is the Kubernetes resource object representing a grouping of applications. It is defined by three key pieces of information:
* `sourceRepos` reference to the repositories that applications within the project can pull manifests from.
* `destinations` reference to clusters and namespaces that applications within the project can deploy into.
* `roles` list of entities with definitions of their access to resources within the project.
An example spec is as follows:
```
spec:
description: Description of the project
destinations:
- namespace: default
server: https://kubernetes.default.svc
roles:
- description: Description of the role
jwtTokens:
- iat: 1535390316
name: role-name
policies:
- p, proj:proj-name:role-name, applications, get, proj-name/*, allow
- p, proj:proj-name:role-name, applications, sync, proj-name/*, deny
sourceRepos:
- https://github.com/argoproj/argocd-example-apps.git
```

BIN docs/argocd-ui.gif Normal file, binary not shown (after: 3.5 MiB)
Binary file not shown (before: 109 KiB)
Binary file not shown (after: 76 KiB)
BIN docs/assets/create_app.png Normal file, binary not shown (after: 99 KiB)
Binary file not shown (after: 95 KiB)
Binary file not shown (after: 113 KiB)
BIN docs/assets/select_app.png Normal file, binary not shown (after: 71 KiB)
BIN docs/assets/select_env.png Normal file, binary not shown (after: 70 KiB)
BIN docs/assets/select_repo.png Normal file, binary not shown (after: 73 KiB)

View File

@@ -1,81 +1,185 @@
# Argo CD Getting Started
# ArgoCD Getting Started
An example Ksonnet guestbook application is provided to demonstrates how Argo CD works.
An example guestbook application is provided to demonstrate how ArgoCD works.
## Requirements
* Installed [minikube](https://github.com/kubernetes/minikube#installation)
* Installed the [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) command-line tool
* Installed [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) command-line tool
* Have a [kubeconfig](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) file (default location is `~/.kube/config`).
## 1. Download Argo CD
Download the latest Argo CD version
## 1. Install ArgoCD
```bash
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v0.8.2/manifests/install.yaml
```
curl -sSL -o /usr/local/bin/argocd https://github.com/argoproj/argo-cd/releases/download/v0.4.3/argocd-darwin-amd64
This will create a new namespace, `argocd`, where ArgoCD services and application resources will live.
NOTE:
* On GKE with RBAC enabled, you may need to grant your account the ability to create new cluster roles
```bash
kubectl create clusterrolebinding YOURNAME-cluster-admin-binding --clusterrole=cluster-admin --user=YOUREMAIL@gmail.com
```
## 2. Download ArgoCD CLI
Download the latest ArgoCD version:
On Mac:
```bash
brew install argoproj/tap/argocd
```
On Linux:
```bash
curl -sSL -o /usr/local/bin/argocd https://github.com/argoproj/argo-cd/releases/download/v0.8.2/argocd-linux-amd64
chmod +x /usr/local/bin/argocd
```
## 3. Open access to ArgoCD API server
## 2. Install Argo CD
```
argocd install
```
This will create a new namespace, `argocd`, where Argo CD services and application resources will live.
By default, the ArgoCD API server is not exposed with an external IP. To expose the API server,
change the service type to `LoadBalancer`:
## 3. Open access to Argo CD API server
By default, the Argo CD API server is not exposed with an external IP. To expose the API server,
change service type to `LoadBalancer`:
```
```bash
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
```
### Notes about Ingress and AWS Load Balancers
* If using Ingress objects without TLS from the ingress-controller to ArgoCD API server, you will
need to add the `--insecure` command line flag to the argocd-server deployment.
* AWS Classic ELB (in HTTP mode) and ALB do not have full support for HTTP/2 and gRPC, which is the
protocol used by the `argocd` CLI. When using an AWS load balancer, either a Classic ELB in
passthrough mode or an NLB is needed.
## 4. Login to the server from the CLI
```
argocd login $(minikube service argocd-server -n argocd --url | cut -d'/' -f 3)
Login using the `admin` user. The initial password is autogenerated to be the pod name of the
ArgoCD API server. This can be retrieved with the command:
```bash
kubectl get pods -n argocd -l app=argocd-server -o name | cut -d'/' -f 2
```
Now, the Argo CD cli is configured to talk to API server and you can deploy your first application.
## 5. Connect and deploy the Guestbook application
1. Register the minikube cluster to Argo CD:
Using the above password, login to ArgoCD's external IP:
On Minikube:
```bash
argocd login $(minikube service argocd-server -n argocd --url | cut -d'/' -f 3) --name minikube
```
argocd cluster add minikube
```
The `argocd cluster add CONTEXT` command installs an `argocd-manager` ServiceAccount and ClusterRole into
the cluster associated with the supplied kubectl context. Argo CD then uses the associated service account
token to perform its required management tasks (i.e. deploy/monitoring).
2. Add the guestbook application and github repository containing the Guestbook application
```
argocd app create --name guestbook --repo https://github.com/argoproj/argo-cd.git --path examples/guestbook --env minikube --dest-server https://$(minikube ip):8443
Other clusters:
```bash
kubectl get svc -n argocd argocd-server
argocd login <EXTERNAL-IP>
```
Once the application is added, you can now see its status:
After logging in, change the password using the command:
```bash
argocd account update-password
argocd relogin
```
argocd app list
argocd app get guestbook
## 5. Register a cluster to deploy apps to
We will now register a cluster to deploy applications to. First list all cluster contexts in your
kubeconfig:
```bash
argocd cluster add
```
Choose a context name from the list and supply it to `argocd cluster add CONTEXTNAME`. For example,
for the minikube context, run:
```bash
argocd cluster add minikube --in-cluster
```
The above command installs an `argocd-manager` ServiceAccount and ClusterRole into the cluster
associated with the supplied kubectl context. ArgoCD uses the service account token to perform its
management tasks (i.e. deploy/monitoring).
The `--in-cluster` option indicates that the cluster we are registering is the same cluster that
ArgoCD is running in. This allows ArgoCD to connect to the cluster using the internal kubernetes
hostname (kubernetes.default.svc). When registering a cluster external to ArgoCD, the `--in-cluster`
flag should be omitted.
## 6. Create the application from a git repository
### Creating apps via UI
Open a browser to the ArgoCD external UI, and login using the credentials set in step 4.
On Minikube:
```bash
minikube service argocd-server -n argocd
```
Connect a git repository containing your apps. An example repository containing a sample
guestbook application is available at https://github.com/argoproj/argocd-example-apps.git.
![connect repo](assets/connect_repo.png)
After connecting a git repository, select the guestbook application for creation:
![select repo](assets/select_repo.png)
![select app](assets/select_app.png)
![select env](assets/select_env.png)
![create app](assets/create_app.png)
### Creating apps via CLI
Applications can also be created using the ArgoCD CLI:
```bash
argocd app create guestbook-default --repo https://github.com/argoproj/argocd-example-apps.git --path guestbook --env default
```
## 7. Sync (deploy) the application
Once the guestbook application is created, you can now view its status:
From UI:
![create app](assets/guestbook-app.png)
From CLI:
```bash
$ argocd app get guestbook-default
Name: guestbook-default
Server: https://kubernetes.default.svc
Namespace: default
URL: https://192.168.64.36:31880/applications/argocd/guestbook-default
Environment: default
Repo: https://github.com/argoproj/argocd-example-apps.git
Path: guestbook
Target: HEAD
KIND NAME STATUS HEALTH
Service guestbook-ui OutOfSync
Deployment guestbook-ui OutOfSync
```
The application status is initially in an `OutOfSync` state, since the application has yet to be
deployed, and no Kubernetes resources have been created. To sync (deploy) the application, run:
```
argocd app sync guestbook
```bash
$ argocd app sync guestbook-default
Application: guestbook-default
Operation: Sync
Phase: Succeeded
Message: successfully synced
KIND NAME MESSAGE
Service guestbook-ui service "guestbook-ui" created
Deployment guestbook-ui deployment.apps "guestbook-ui" created
```
[![asciicast](https://asciinema.org/a/uYnbFMy5WI2rc9S49oEAyGLb0.png)](https://asciinema.org/a/uYnbFMy5WI2rc9S49oEAyGLb0)
This command retrieves the manifests from the git repository and performs a `kubectl apply` of the
manifests. The guestbook app is now running and you can view its resource
components, logs, events, and assessed health:
Argo CD also allows to view and manager applications using web UI. Get the web UI URL by running:
![view app](assets/guestbook-tree.png)
```
minikube service argocd-server -n argocd --url
```
## 8. Next Steps
![argo cd ui](argocd-ui.png)
ArgoCD supports additional features such as SSO, WebHooks, RBAC, Projects. See the rest of
the [documentation](./) for details.

18
docs/health.md Normal file

@@ -0,0 +1,18 @@
# Resource Health
## Overview
ArgoCD provides built-in health assessment for several standard Kubernetes types, which is then
surfaced to the overall Application health status as a whole. The following checks are made for
specific types of kubernetes resources:
### Deployment, ReplicaSet, StatefulSet, DaemonSet
* Observed generation is equal to desired generation.
* Number of **updated** replicas equals the number of desired replicas.
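A minimal sketch of this Deployment check, assuming the standard `k8s.io/api/apps/v1` types (illustrative only, not ArgoCD's actual health-assessment code):

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
)

// deploymentHealthy applies the two rules above: the controller must have
// observed the latest spec generation, and the number of updated replicas
// must equal the number of desired replicas.
func deploymentHealthy(deploy *appsv1.Deployment) bool {
	if deploy.Status.ObservedGeneration != deploy.Generation {
		return false // the deployment controller has not yet seen the latest spec
	}
	desired := int32(1) // kubernetes defaults spec.replicas to 1 when unset
	if deploy.Spec.Replicas != nil {
		desired = *deploy.Spec.Replicas
	}
	return deploy.Status.UpdatedReplicas == desired
}

func main() {
	var d appsv1.Deployment
	d.Generation = 2
	d.Status.ObservedGeneration = 2
	d.Spec.Replicas = new(int32)
	*d.Spec.Replicas = 1
	d.Status.UpdatedReplicas = 1
	fmt.Println(deploymentHealthy(&d)) // true
}
```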
### Service
* If the service is of type `LoadBalancer`, the `status.loadBalancer.ingress` list is non-empty,
with at least one value for `hostname` or `IP`.
### Ingress
* The `status.loadBalancer.ingress` list is non-empty, with at least one value for `hostname` or `IP`.

View File

@@ -0,0 +1,45 @@
# ArgoCD Release Instructions
1. Tag, build, and push argo-cd-ui
```bash
cd argo-cd-ui
git tag vX.Y.Z
git push upstream vX.Y.Z
IMAGE_NAMESPACE=argoproj IMAGE_TAG=vX.Y.Z DOCKER_PUSH=true yarn docker
```
2. Create release-X.Y branch (if creating initial X.Y release)
```bash
git checkout -b release-X.Y
git push upstream release-X.Y
```
3. Update manifests with new version
```bash
vi manifests/base/kustomization.yaml # update with new image tags
make manifests
git commit -a -m "Update manifests to vX.Y.Z"
git push upstream master
```
4. Tag, build, and push release to docker hub
```bash
git tag vX.Y.Z
make release IMAGE_NAMESPACE=argoproj IMAGE_TAG=vX.Y.Z DOCKER_PUSH=true
git push upstream vX.Y.Z
```
5. Update argocd brew formula
```bash
git clone https://github.com/argoproj/homebrew-tap
cd homebrew-tap
shasum -a 256 ~/go/src/github.com/argoproj/argo-cd/dist/argocd-darwin-amd64
# edit argocd.rb with version and checksum
git commit -a -m "Update argocd to vX.Y.Z"
git push
```
6. Update documentation:
* Edit CHANGELOG.md with release notes
* Update getting_started.md with new version
* Create GitHub release from new tag and upload binaries (e.g. dist/argocd-darwin-amd64)

39
docs/parameters.md Normal file

@@ -0,0 +1,39 @@
# Parameter Overrides
ArgoCD provides a mechanism to override the parameters of a ksonnet/helm app. This gives some extra
flexibility in having most of the application manifests defined in git, while leaving room for
*some* parts of the k8s manifests to be determined dynamically, or outside of git. It also serves as an
alternative way of redeploying an application by changing application parameters via ArgoCD, instead
of making the changes to the manifests in git.
**NOTE:** many consider this mode of operation an anti-pattern to GitOps, since the source of
truth becomes a union of the git repository and the application overrides. The ArgoCD parameter
overrides feature is provided mainly as a convenience to developers and is intended to be used more
for dev/test environments than for production environments.
To use parameter overrides, run the `argocd app set -p (COMPONENT=)PARAM=VALUE` command:
```
argocd app set guestbook -p guestbook=image=example/guestbook:abcd123
argocd app sync guestbook
```
The following are situations where parameter overrides would be useful:
1. A team maintains a "dev" environment, which needs to be continually updated with the latest
version of their guestbook application after every build at the tip of master. To address this use
case, the application would expose a parameter named `image`, whose value used in the `dev`
environment contains a placeholder value (e.g. `example/guestbook:replaceme`). The placeholder value
would be determined externally (outside of git), such as by a build system. Then, as part of the build
pipeline, the parameter value of `image` would be continually updated to the freshly built image
(e.g. `argocd app set guestbook -p guestbook=image=example/guestbook:abcd123`). A sync operation
would result in the application being redeployed with the new image.
2. A repository of helm manifests is already publicly available (e.g. https://github.com/helm/charts).
Since commit access to the repository is unavailable, it is useful to be able to install charts from
the public repository, customizing the deployment with different parameters, without resorting to
forking the repository to make the changes. For example, to install redis from the helm chart
repository and customize the database password, you would run:
```
argocd app create redis --repo https://github.com/helm/charts.git --path stable/redis --dest-server https://kubernetes.default.svc --dest-namespace default -p password=abc123
```

162
docs/rbac.md Normal file

@@ -0,0 +1,162 @@
# RBAC
## Overview
The RBAC feature enables restriction of access to ArgoCD resources. ArgoCD does not have its own
user management system and has only one built-in user `admin`. The `admin` user is a superuser and
it has unrestricted access to the system. RBAC requires [SSO configuration](./sso.md). Once SSO is
configured, additional RBAC roles can be defined, and SSO groups can then be mapped to roles.
## Configure RBAC
RBAC configuration allows defining roles and groups. ArgoCD has two pre-defined roles:
* `role:readonly` - read-only access to all resources
* `role:admin` - unrestricted access to all resources
These role definitions can be seen in [builtin-policy.csv](../util/rbac/builtin-policy.csv)
Additional roles and groups can be configured in the `argocd-rbac-cm` ConfigMap. The example below defines a custom role, `org-admin`. The role is assigned to any user who belongs to the
`your-github-org:your-team` group. All other users get `role:readonly` and cannot modify ArgoCD settings.
*ConfigMap `argocd-rbac-cm` example:*
```yaml
apiVersion: v1
data:
policy.default: role:readonly
policy.csv: |
p, role:org-admin, applications, *, */*, allow
p, role:org-admin, applications/*, *, */*, allow
p, role:org-admin, clusters, get, *, allow
p, role:org-admin, repositories, get, *, allow
p, role:org-admin, repositories/apps, get, *, allow
p, role:org-admin, repositories, create, *, allow
p, role:org-admin, repositories, update, *, allow
p, role:org-admin, repositories, delete, *, allow
g, your-github-org:your-team, role:org-admin
kind: ConfigMap
metadata:
name: argocd-rbac-cm
```
## Configure Projects
Argo projects allow grouping applications, which is useful when ArgoCD is used by multiple teams. Additionally, projects restrict the source repositories and destination
Kubernetes clusters which can be used by applications belonging to the project.
### 1. Create new project
The following command creates the project `myproject`, which can deploy applications to the `default` namespace of the `https://kubernetes.default.svc` cluster. The valid application source is the `https://github.com/argoproj/argocd-example-apps.git` repository.
```
argocd proj create myproject -d https://kubernetes.default.svc,default -s https://github.com/argoproj/argocd-example-apps.git
```
Project sources and destinations can be managed using the commands:
```
argocd project add-destination
argocd project remove-destination
argocd project add-source
argocd project remove-source
```
### 2. Assign application to a project
Each application belongs to a project. By default, all applications belong to the `default` project, which provides access to any source repo/cluster. The application's project can be
changed using the `app set` command:
```
argocd app set guestbook-default --project myproject
```
### 3. Update RBAC rules
The following example configures admin access for two teams. Each team has access only to the applications of one project (`team1` can access the `default` project and `team2` can access the
`myproject` project).
*ConfigMap `argocd-rbac-cm` example:*
```yaml
apiVersion: v1
data:
policy.default: ""
policy.csv: |
p, role:team1-admin, applications, *, default/*, allow
p, role:team1-admin, applications/*, *, default/*, allow
p, role:team2-admin, applications, *, myproject/*, allow
p, role:team2-admin, applications/*, *, myproject/*, allow
p, role:org-admin, clusters, get, *, allow
p, role:org-admin, repositories, get, *, allow
p, role:org-admin, repositories/apps, get, *, allow
p, role:org-admin, repositories, create, *, allow
p, role:org-admin, repositories, update, *, allow
p, role:org-admin, repositories, delete, *, allow
g, role:team1-admin, role:org-admin
g, role:team2-admin, role:org-admin
g, your-github-org:your-team1, role:team1-admin
g, your-github-org:your-team2, role:team2-admin
kind: ConfigMap
metadata:
name: argocd-rbac-cm
```
## Project Roles
Projects include a feature called roles, which allow users to define access to a project's applications. A project can have multiple roles, and those roles can have different access granted to them. These permissions are called policies, and they are stored within the role as a list of casbin strings. A role's policies can only grant access to that role and are limited to applications within the role's project. However, a policy can grant wildcard access to any application within the project.
In order to create roles in a project and add policies to a role, a user will need permission to update a project. The following commands can be used to manage a role.
```
argocd proj role list
argocd proj role get
argocd proj role create
argocd proj role delete
argocd proj role add-policy
argocd proj role remove-policy
```
Project roles cannot be used unless a user creates an entity that is associated with that project role. ArgoCD supports creating JWT tokens with a role associated with them. Since a JWT token is associated with a role's policies, any changes to the role's policies will immediately take effect for that JWT token.
A user will need permission to update a project in order to create a JWT token for a role, and they can use the following commands to manage the JWT tokens.
```
argocd proj role create-token
argocd proj role delete-token
```
Since the JWT tokens aren't stored in ArgoCD, they can only be retrieved when they are created. A user can leverage them in the CLI by either passing them in using the `--auth-token` flag or setting the ARGOCD_AUTH_TOKEN environment variable. The JWT tokens can be used until they expire or are revoked. They can be created with or without an expiration, but the default on the CLI is to create them without an expiration date. Even if a token has not expired, it cannot be used if it has been revoked.
Below is an example of leveraging a JWT token to access the guestbook application. It makes the assumption that the user already has a project named myproject and an application called guestbook-default.
```
PROJ=myproject
APP=guestbook-default
ROLE=get-role
argocd proj role create $PROJ $ROLE
argocd proj role create-token $PROJ $ROLE -e 10m
JWT=<value from command above>
argocd proj role list $PROJ
argocd proj role get $PROJ $ROLE
#This command will fail because the JWT Token associated with the project role does not have a policy to allow access to the application
argocd app get $APP --auth-token $JWT
# Adding a policy to grant access to the application for the new role
argocd proj role add-policy $PROJ $ROLE --action get --permission allow --object $APP
argocd app get $APP --auth-token $JWT
# Removing the policy we added and adding one with a wildcard.
argocd proj role remove-policy $PROJ $ROLE -a get -o $APP
argocd proj role add-policy $PROJ $ROLE --action get --permission allow --object '*'
# The wildcard policy allows us to access the application.
argocd app get $APP --auth-token $JWT
argocd proj role get $PROJ
argocd proj role get $PROJ $ROLE
# Revoking the JWT token
argocd proj role delete-token $PROJ $ROLE <id field from the last command>
# This will fail since the JWT Token was deleted for the project role.
argocd app get $APP --auth-token $JWT
```

61
docs/resource_hooks.md Normal file

@@ -0,0 +1,61 @@
# Resource Hooks
## Overview
Hooks are ways to interject custom logic before, during, and after a Sync operation. Some use cases
for hooks are:
* Using a `PreSync` hook to perform a database schema migration before deploying a new version of the app.
* Using a `Sync` hook to orchestrate a complex deployment requiring more sophistication than the
kubernetes rolling update strategy (e.g. a blue/green deployment).
* Using a `PostSync` hook to run integration and health checks after a deployment.
## Usage
Hooks are simply kubernetes manifests annotated with the `argocd.argoproj.io/hook` annotation. To
make use of hooks, add the annotation to any resource:
```yaml
apiVersion: batch/v1
kind: Job
metadata:
generateName: schema-migrate-
annotations:
argocd.argoproj.io/hook: PreSync
```
During a Sync operation, ArgoCD will create the resource during the appropriate stage of the
deployment. Hooks can be any type of Kubernetes resource kind, but tend to be most useful as
[Kubernetes Jobs](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/)
or [Argo Workflows](https://github.com/argoproj/argo). Multiple hooks can be specified as a comma
separated list.
## Available Hooks
The following hooks are defined:
| Hook | Description |
|------|-------------|
| `PreSync` | Executes prior to the apply of the manifests. |
| `Sync` | Executes after all `PreSync` hooks completed and were successful. Occurs in conjunction with the apply of the manifests. |
| `Skip` | Indicates to ArgoCD to skip the apply of the manifest. This is typically used in conjunction with a `Sync` hook which is presumably handling the deployment in an alternate way (e.g. blue-green deployment) |
| `PostSync` | Executes after all `Sync` hooks completed and were successful, a successful apply, and all resources in a `Healthy` state. |
## Hook Deletion Policies
Hooks can be deleted in an automatic fashion using the annotation: `argocd.argoproj.io/hook-delete-policy`.
```yaml
apiVersion: batch/v1
kind: Job
metadata:
generateName: integration-test-
annotations:
argocd.argoproj.io/hook: PostSync
argocd.argoproj.io/hook-delete-policy: OnSuccess
```
The following policies define when the hook will be deleted.
| Policy | Description |
|--------|-------------|
| `OnSuccess` | The hook resource is deleted after the hook succeeded (e.g. Job/Workflow completed successfully). |
| `OnFailure` | The hook resource is deleted after the hook failed. |

View File

@@ -3,9 +3,9 @@
## Overview
ArgoCD embeds and bundles [Dex](https://github.com/coreos/dex) as part of its installation, for the
purposes of delegating authentication to an external identity provider. Multiple types of identity
purpose of delegating authentication to an external identity provider. Multiple types of identity
providers are supported (OIDC, SAML, LDAP, GitHub, etc...). SSO configuration of ArgoCD requires
editing the `argocd-cm` ConfigMap with a
editing the `argocd-cm` ConfigMap with
[Dex connector](https://github.com/coreos/dex/tree/master/Documentation/connectors) settings.
This document describes how to configure ArgoCD SSO using GitHub (OAuth2) as an example, but the
@@ -35,7 +35,7 @@ kubectl edit configmap argocd-cm
[GitHub connector](https://github.com/coreos/dex/blob/master/Documentation/connectors/github.md)
documentation for an explanation of the fields. A minimal config should populate the clientID and
clientSecret generated in Step 1.
* You will very likely want to restrict logins to one ore more GitHub organization. In the
* You will very likely want to restrict logins to one or more GitHub organization. In the
`connectors.config.orgs` list, add one or more GitHub organizations. Any member of the org will
then be able to login to ArgoCD to perform management tasks.
@@ -45,16 +45,39 @@ data:
dex.config: |
connectors:
- type: github
id: github
name: GitHub
config:
clientID: 5aae0fcec2c11634be8c
clientSecret: c6fcb18177869174bd09be2c51259fb049c9d4e5
orgs:
- name: your-github-org
# GitHub example
- type: github
id: github
name: GitHub
config:
clientID: aabbccddeeff00112233
clientSecret: $dex.github.clientSecret
orgs:
- name: your-github-org
# GitHub enterprise example
- type: github
id: acme-github
name: Acme GitHub
config:
hostName: github.acme.com
clientID: abcdefghijklmnopqrst
clientSecret: $dex.acme.clientSecret
orgs:
- name: your-github-org
# OIDC example (e.g. Okta)
- type: oidc
id: okta
name: Okta
config:
issuer: https://dev-123456.oktapreview.com
clientID: aaaabbbbccccddddeee
clientSecret: $dex.okta.clientSecret
```
After saving, the changes should take effect automatically.
NOTES:
* Any value which starts with '$' will be resolved by looking up a key of the same name (minus the $)
in argocd-secret, to obtain the actual value. This allows you to store the `clientSecret` as a kubernetes secret.
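For example, the `$dex.github.clientSecret` reference above would resolve against a secret like the following sketch (key names must match the `$` references exactly; the value is a placeholder):
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: argocd-secret
type: Opaque
stringData:
  # looked up when the dex config references $dex.github.clientSecret
  dex.github.clientSecret: nv1vx8w4gw5byrflujfkxww6ykh85yq818aorvwy
```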
@@ -62,11 +85,3 @@ NOTES:
ArgoCD will automatically use the correct `redirectURI` for any OAuth2 connectors, to match the
correct external callback URL (e.g. https://argocd.example.com/api/dex/callback)
### 3. Restart ArgoCD for changes to take effect
Any changes to the `argocd-cm` ConfigMap or `argocd-secret` Secret, currently require a restart of
the ArgoCD API server for the settings to take effect. Delete the `argocd-server` pod to force a
restart. [Issue #174](https://github.com/argoproj/argo-cd/issues/174) will address this limitation.
```
kubectl delete pod -l app=argocd-server
```

View File

@@ -1,41 +1,45 @@
# Tracking and Deployment Strategies
An ArgoCD application spec provides several different ways of track kubernetes resource manifests in git. This document describes the different techniques and the means of deploying those manifests to the target environment.
An ArgoCD application spec provides several different ways of tracking kubernetes resource manifests in
git. This document describes the different techniques and the means of deploying those manifests to
the target environment.
## Branch Tracking
If a branch name is specified, ArgoCD will continually compare live state against the resource manifests defined at the tip of the specified branch.
If a branch name is specified, ArgoCD will continually compare live state against the resource
manifests defined at the tip of the specified branch.
To redeploy an application, a user makes changes to the manifests, and commit/pushes those the changes to the tracked branch, which will then be detected by ArgoCD controller.
To redeploy an application, a user makes changes to the manifests, and commits/pushes those
changes to the tracked branch, which will then be detected by the ArgoCD controller.
## Tag Tracking
If a tag is specified, the manifests at the specified git tag will be used to perform the sync comparison. This provides some advantages over branch tracking in that a tag is generally considered more stable, and less frequently updated, with some manual judgement of what constitutes a tag.
If a tag is specified, the manifests at the specified git tag will be used to perform the sync
comparison. This provides some advantages over branch tracking in that a tag is generally considered
more stable, and less frequently updated, with some manual judgement of what constitutes a tag.
To redeploy an application, the user uses git to change the meaning of a tag by retagging it to a different commit SHA. ArgoCD will detect the new meaning of the tag when performing the comparison/sync.
To redeploy an application, the user uses git to change the meaning of a tag by retagging it to a
different commit SHA. ArgoCD will detect the new meaning of the tag when performing the
comparison/sync.
## Commit Pinning
If a git commit SHA is specified, the application is effectively pinned to the manifests defined at the specified commit. This is the most restrictive of the techniques and is typically used to control production environments.
If a git commit SHA is specified, the application is effectively pinned to the manifests defined at
the specified commit. This is the most restrictive of the techniques and is typically used to
control production environments.
Since commit SHAs cannot change meaning, the only way to change the live state of an application which is pinned to a commit, is by updating the tracking revision in the application to a different commit containing the new manifests.
Note that parameter overrides can still be made against a application which is pinned to a revision.
Since commit SHAs cannot change meaning, the only way to change the live state of an application
which is pinned to a commit is by updating the tracking revision in the application to a different
commit containing the new manifests. Note that [parameter overrides](parameters.md) can still be set
on an application which is pinned to a revision.
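As a sketch of what commit pinning looks like in an Application spec (the repo URL and SHA below are placeholders, and exact field layout may vary by version), the tracked revision is simply set to a full commit SHA:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
spec:
  source:
    repoURL: https://github.com/your-org/your-manifests.git
    path: guestbook
    # a branch, a tag, or -- for commit pinning -- a specific commit SHA
    targetRevision: abcd1234abcd1234abcd1234abcd1234abcd1234
```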
## Auto-Sync [(Not Yet Implemented)](https://github.com/argoproj/argo-cd/issues/79)
In all tracking strategies, the application will have the option to sync automatically. If auto-sync
is configured, the new resource manifests will be applied automatically -- as soon as a difference
is detected between the target state (git) and live state. If auto-sync is disabled, a manual sync
will be needed using the Argo UI, CLI, or API.
## Parameter Overrides
ArgoCD provides means to override the parameters of a ksonnet app. This gives some extra flexibility in having *some* parts of the k8s manifests determined dynamically. It also serves as an alternative way of redeploying an application by changing application parameters via ArgoCD, instead of making the changes to the manifests in git.
The following is an example of where this would be useful: A team maintains a "dev" environment, which needs to be continually updated with the latest version of their guestbook application after every build in the tip of master. To address this use case, the ksonnet application should expose a parameter named `image`, whose value used in the `dev` environment contains a placeholder value (e.g. `example/guestbook:replaceme`), intended to be set externally (outside of git) such as by a build system. As part of the build pipeline, the parameter value of the `image` would be continually updated to the freshly built image (e.g. `example/guestbook:abcd123`). A sync operation would result in the application being redeployed with the new image.
ArgoCD provides these operations conveniently via the CLI, or alternatively via the gRPC/REST API.
```
$ argocd app set guestbook -p guestbook=image=example/guestbook:abcd123
$ argocd app sync guestbook
```
Note that in all tracking strategies, any parameter overrides set in the application instance will be honored.
## [Auto-Sync](https://github.com/argoproj/argo-cd/issues/79) (Not Yet Implemented)
In all tracking strategies, the application will have the option to sync automatically. If auto-sync is configured, the new resources manifests will be applied automatically -- as soon as a difference is detected between the target state (git) and live state. If auto-sync is disabled, a manual sync will be needed using the Argo UI, CLI, or API.
Note that in all tracking strategies, any [parameter overrides](parameters.md) set in the
application instance take precedence over the git state.

View File

@@ -60,11 +60,4 @@ stringData:
```
### 3. Restart ArgoCD for changes to take effect
Any changes to the `argocd-cm` ConfigMap or `argocd-secret` Secret, currently require a restart of
the ArgoCD API server for the settings to take effect. Delete the `argocd-server` pod to force a
restart. [Issue #174](https://github.com/argoproj/argo-cd/issues/174) will address this limitation.
```
kubectl delete pod -l app=argocd-server
```
After saving, the changes should take effect automatically.

View File

@@ -23,7 +23,7 @@ local appDeployment = deployment
params.replicas,
container
.new(params.name, params.image)
.withPorts(containerPort.new(targetPort)),
labels);
.withPorts(containerPort.new(targetPort)) + if params.command != null then { command: [ params.command ] } else {},
labels).withProgressDeadlineSeconds(5);
k.core.v1.list.new([appService, appDeployment])

View File

@@ -12,7 +12,8 @@
name: "guestbook-ui",
replicas: 1,
servicePort: 80,
type: "LoadBalancer",
type: "ClusterIP",
command: null,
},
},
}

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#! /usr/bin/env bash
# This script auto-generates protobuf related files. It is intended to be run manually when either
# API types are added/modified, or server gRPC calls are added. The generated files should then
@@ -55,6 +55,8 @@ GOPROTOBINARY=gogofast
# protoc-gen-grpc-gateway is used to build <service>.pb.gw.go files from .proto files
go build -i -o dist/protoc-gen-grpc-gateway ./vendor/github.com/grpc-ecosystem/grpc-gateway/protoc-gen-grpc-gateway
# protoc-gen-swagger is used to build swagger.json
go build -i -o dist/protoc-gen-swagger ./vendor/github.com/grpc-ecosystem/grpc-gateway/protoc-gen-swagger
# Generate server/<service>/(<service>.pb.go|<service>.pb.gw.go)
PROTO_FILES=$(find $PROJECT_ROOT \( -name "*.proto" -and -path '*/server/*' -or -path '*/reposerver/*' -and -name "*.proto" \))
@@ -62,8 +64,8 @@ for i in ${PROTO_FILES}; do
# Path to the google API gateway annotations.proto will be different depending if we are
# building natively (e.g. from workspace) vs. part of a docker build.
if [ -f /.dockerenv ]; then
GOOGLE_PROTO_API_PATH=/root/go/src/github.com/grpc-ecosystem/grpc-gateway/third_party/googleapis
GOGO_PROTOBUF_PATH=/root/go/src/github.com/gogo/protobuf
GOOGLE_PROTO_API_PATH=$GOPATH/src/github.com/grpc-ecosystem/grpc-gateway/third_party/googleapis
GOGO_PROTOBUF_PATH=$GOPATH/src/github.com/gogo/protobuf
else
GOOGLE_PROTO_API_PATH=${PROJECT_ROOT}/vendor/github.com/grpc-ecosystem/grpc-gateway/third_party/googleapis
GOGO_PROTOBUF_PATH=${PROJECT_ROOT}/vendor/github.com/gogo/protobuf
@@ -77,5 +79,44 @@ for i in ${PROTO_FILES}; do
-I${GOGO_PROTOBUF_PATH} \
--${GOPROTOBINARY}_out=plugins=grpc:$GOPATH/src \
--grpc-gateway_out=logtostderr=true:$GOPATH/src \
--swagger_out=logtostderr=true:. \
$i
done
# collect_swagger gathers swagger files into a subdirectory
collect_swagger() {
SWAGGER_ROOT="$1"
EXPECTED_COLLISIONS="$2"
SWAGGER_OUT="${SWAGGER_ROOT}/swagger.json"
PRIMARY_SWAGGER=`mktemp`
COMBINED_SWAGGER=`mktemp`
cat <<EOF > "${PRIMARY_SWAGGER}"
{
"swagger": "2.0",
"info": {
"title": "Consolidate Services",
"description": "Description of all APIs",
"version": "version not set"
},
"paths": {}
}
EOF
/bin/rm -f "${SWAGGER_OUT}"
/usr/bin/find "${SWAGGER_ROOT}" -name '*.swagger.json' -exec /usr/local/bin/swagger mixin -c "${EXPECTED_COLLISIONS}" "${PRIMARY_SWAGGER}" '{}' \+ > "${COMBINED_SWAGGER}"
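# Remove 'description' and 'title' from any property that also carries a '$ref':
# sibling keys of $ref are ignored by JSON Schema tooling, so dropping them keeps
# the combined swagger consistent.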
/usr/local/bin/jq -r 'del(.definitions[].properties[]? | select(."$ref"!=null and .description!=null).description) | del(.definitions[].properties[]? | select(."$ref"!=null and .title!=null).title)' "${COMBINED_SWAGGER}" > "${SWAGGER_OUT}"
/bin/rm "${PRIMARY_SWAGGER}" "${COMBINED_SWAGGER}"
}
# clean up generated swagger files (should come after collect_swagger)
clean_swagger() {
SWAGGER_ROOT="$1"
/usr/bin/find "${SWAGGER_ROOT}" -name '*.swagger.json' -delete
}
collect_swagger server 21
clean_swagger server
clean_swagger reposerver

View File

@@ -1,168 +0,0 @@
package main
import (
"context"
"fmt"
"hash/fnv"
"log"
"os"
"strings"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
"github.com/argoproj/argo-cd/server/repository"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/git"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
)
// origRepoURLToSecretName hashes repo URL to the secret name using a formula.
// Part of the original repo name is incorporated for debugging purposes
func origRepoURLToSecretName(repo string) string {
repo = git.NormalizeGitURL(repo)
h := fnv.New32a()
_, _ = h.Write([]byte(repo))
parts := strings.Split(strings.TrimSuffix(repo, ".git"), "/")
return fmt.Sprintf("repo-%s-%v", strings.ToLower(parts[len(parts)-1]), h.Sum32())
}
// repoURLToSecretName hashes repo URL to the secret name using a formula.
// Part of the original repo name is incorporated for debugging purposes
func repoURLToSecretName(repo string) string {
repo = strings.ToLower(git.NormalizeGitURL(repo))
h := fnv.New32a()
_, _ = h.Write([]byte(repo))
parts := strings.Split(strings.TrimSuffix(repo, ".git"), "/")
return fmt.Sprintf("repo-%s-%v", parts[len(parts)-1], h.Sum32())
}
// RenameSecret renames a Kubernetes secret in a given namespace.
func renameSecret(namespace, oldName, newName string) {
loadingRules := clientcmd.NewDefaultClientConfigLoadingRules()
loadingRules.DefaultClientConfig = &clientcmd.DefaultClientConfig
overrides := clientcmd.ConfigOverrides{}
clientConfig := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(loadingRules, &overrides)
log.Printf("Renaming secret %q to %q in namespace %q\n", oldName, newName, namespace)
config, err := clientConfig.ClientConfig()
if err != nil {
log.Println("Could not retrieve client config: ", err)
return
}
kubeclientset := kubernetes.NewForConfigOrDie(config)
repoSecret, err := kubeclientset.CoreV1().Secrets(namespace).Get(oldName, metav1.GetOptions{})
if err != nil {
log.Println("Could not retrieve old secret: ", err)
return
}
repoSecret.ObjectMeta.Name = newName
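// Clear resourceVersion so the API server accepts the copied object as a brand new Create.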
repoSecret.ObjectMeta.ResourceVersion = ""
repoSecret, err = kubeclientset.CoreV1().Secrets(namespace).Create(repoSecret)
if err != nil {
log.Println("Could not create new secret: ", err)
return
}
err = kubeclientset.CoreV1().Secrets(namespace).Delete(oldName, &metav1.DeleteOptions{})
if err != nil {
log.Println("Could not remove old secret: ", err)
}
}
// RenameRepositorySecrets ensures that repository secrets use the new naming format.
func renameRepositorySecrets(clientOpts argocdclient.ClientOptions, namespace string) {
conn, repoIf := argocdclient.NewClientOrDie(&clientOpts).NewRepoClientOrDie()
defer util.Close(conn)
repos, err := repoIf.List(context.Background(), &repository.RepoQuery{})
if err != nil {
log.Println("An error occurred, so skipping secret renaming: ", err)
return
}
log.Println("Renaming repository secrets...")
for _, repo := range repos.Items {
oldSecretName := origRepoURLToSecretName(repo.Repo)
newSecretName := repoURLToSecretName(repo.Repo)
if oldSecretName != newSecretName {
log.Printf("Repo %q had its secret name change, so updating\n", repo.Repo)
renameSecret(namespace, oldSecretName, newSecretName)
}
}
}
/*
// PopulateAppDestinations ensures that apps have a Server and Namespace set explicitly.
func populateAppDestinations(clientOpts argocdclient.ClientOptions) {
conn, appIf := argocdclient.NewClientOrDie(&clientOpts).NewApplicationClientOrDie()
defer util.Close(conn)
apps, err := appIf.List(context.Background(), &application.ApplicationQuery{})
if err != nil {
log.Println("An error occurred, so skipping destination population: ", err)
return
}
log.Println("Populating app Destination fields")
for _, app := range apps.Items {
changed := false
log.Printf("Ensuring destination field is populated on app %q\n", app.ObjectMeta.Name)
if app.Spec.Destination.Server == "" {
if app.Status.ComparisonResult.Status == appv1.ComparisonStatusUnknown || app.Status.ComparisonResult.Status == appv1.ComparisonStatusError {
log.Printf("App %q was missing Destination.Server, but could not fill it in: %s", app.ObjectMeta.Name, app.Status.ComparisonResult.Status)
} else {
log.Printf("App %q was missing Destination.Server, so setting to %q\n", app.ObjectMeta.Name, app.Status.ComparisonResult.Server)
app.Spec.Destination.Server = app.Status.ComparisonResult.Server
changed = true
}
}
if app.Spec.Destination.Namespace == "" {
if app.Status.ComparisonResult.Status == appv1.ComparisonStatusUnknown || app.Status.ComparisonResult.Status == appv1.ComparisonStatusError {
log.Printf("App %q was missing Destination.Namespace, but could not fill it in: %s", app.ObjectMeta.Name, app.Status.ComparisonResult.Status)
} else {
log.Printf("App %q was missing Destination.Namespace, so setting to %q\n", app.ObjectMeta.Name, app.Status.ComparisonResult.Namespace)
app.Spec.Destination.Namespace = app.Status.ComparisonResult.Namespace
changed = true
}
}
if changed {
_, err = appIf.UpdateSpec(context.Background(), &application.ApplicationSpecRequest{
AppName: app.Name,
Spec: &app.Spec,
})
if err != nil {
log.Println("An error occurred (but continuing anyway): ", err)
}
}
}
}
*/
func main() {
if len(os.Args) < 3 {
log.Fatalf("USAGE: %s SERVER NAMESPACE\n", os.Args[0])
}
server, namespace := os.Args[1], os.Args[2]
log.Printf("Using argocd server %q and namespace %q\n", server, namespace)
isLocalhost := false
switch {
case strings.HasPrefix(server, "localhost:"):
isLocalhost = true
case strings.HasPrefix(server, "127.0.0.1:"):
isLocalhost = true
}
clientOpts := argocdclient.ClientOptions{
ServerAddr: server,
Insecure: true,
PlainText: isLocalhost,
}
renameRepositorySecrets(clientOpts, namespace)
//populateAppDestinations(clientOpts)
}

22
hack/update-manifests.sh Executable file
View File

@@ -0,0 +1,22 @@
#!/bin/sh
SRCROOT="$( cd "$(dirname "$0")/.." ; pwd -P )"
AUTOGENMSG="# This is an auto-generated file. DO NOT EDIT"
update_image () {
if [ ! -z "${IMAGE_NAMESPACE}" ]; then
sed -i '' 's| image: \(.*\)/\(argocd-.*\)| image: '"${IMAGE_NAMESPACE}"'/\2|g' ${1}
fi
if [ ! -z "${IMAGE_TAG}" ]; then
sed -i '' 's|\( image: .*/argocd-.*\)\:.*|\1:'"${IMAGE_TAG}"'|g' ${1}
fi
}
echo "${AUTOGENMSG}" > ${SRCROOT}/manifests/install.yaml
kustomize build ${SRCROOT}/manifests/cluster-install >> ${SRCROOT}/manifests/install.yaml
update_image ${SRCROOT}/manifests/install.yaml
echo "${AUTOGENMSG}" > ${SRCROOT}/manifests/namespace-install.yaml
kustomize build ${SRCROOT}/manifests/base >> ${SRCROOT}/manifests/namespace-install.yaml
update_image ${SRCROOT}/manifests/namespace-install.yaml
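# Hypothetical usage: override the image registry and tag when regenerating manifests:
#   IMAGE_NAMESPACE=myregistry IMAGE_TAG=v0.9.2 ./hack/update-manifests.sh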

View File

@@ -1,379 +0,0 @@
package install
import (
"fmt"
"strings"
"syscall"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/errors"
"github.com/argoproj/argo-cd/util/diff"
"github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/password"
"github.com/argoproj/argo-cd/util/session"
"github.com/argoproj/argo-cd/util/settings"
tlsutil "github.com/argoproj/argo-cd/util/tls"
"github.com/ghodss/yaml"
"github.com/gobuffalo/packr"
log "github.com/sirupsen/logrus"
"github.com/yudai/gojsondiff/formatter"
"golang.org/x/crypto/ssh/terminal"
appsv1beta2 "k8s.io/api/apps/v1beta2"
apiv1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
apiextensionsv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
apierr "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/client-go/discovery"
"k8s.io/client-go/dynamic"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
)
var (
// These values will be overridden by the link flags during build
// (e.g. imageTag will use the official release tag on tagged builds)
imageNamespace = "argoproj"
imageTag = "latest"
// Default namespace and image names which `argocd install` uses during install
DefaultInstallNamespace = "argocd"
DefaultControllerImage = imageNamespace + "/argocd-application-controller:" + imageTag
DefaultUIImage = imageNamespace + "/argocd-ui:" + imageTag
DefaultServerImage = imageNamespace + "/argocd-server:" + imageTag
DefaultRepoServerImage = imageNamespace + "/argocd-repo-server:" + imageTag
)
// InstallOptions stores a collection of installation settings.
type InstallOptions struct {
DryRun bool
Upgrade bool
UpdateSuperuser bool
UpdateSignature bool
SuperuserPassword string
Namespace string
ControllerImage string
UIImage string
ServerImage string
RepoServerImage string
ImagePullPolicy string
}
type Installer struct {
InstallOptions
box packr.Box
config *rest.Config
dynClientPool dynamic.ClientPool
disco discovery.DiscoveryInterface
}
func NewInstaller(config *rest.Config, opts InstallOptions) (*Installer, error) {
shallowCopy := *config
inst := Installer{
InstallOptions: opts,
box: packr.NewBox("./manifests"),
config: &shallowCopy,
}
if opts.Namespace == "" {
inst.Namespace = DefaultInstallNamespace
}
var err error
inst.dynClientPool = dynamic.NewDynamicClientPool(inst.config)
inst.disco, err = discovery.NewDiscoveryClientForConfig(inst.config)
if err != nil {
return nil, err
}
return &inst, nil
}
func (i *Installer) Install() {
i.InstallNamespace()
i.InstallApplicationCRD()
i.InstallSettings()
i.InstallApplicationController()
i.InstallArgoCDServer()
i.InstallArgoCDRepoServer()
}
func (i *Installer) Uninstall(deleteNamespace, deleteCRD bool) {
manifests := i.box.List()
for _, manifestPath := range manifests {
if strings.HasSuffix(manifestPath, ".yaml") || strings.HasSuffix(manifestPath, ".yml") {
var obj unstructured.Unstructured
i.unmarshalManifest(manifestPath, &obj)
switch strings.ToLower(obj.GetKind()) {
case "namespace":
if !deleteNamespace {
log.Infof("Skipped deletion of Namespace: '%s'", obj.GetName())
continue
}
case "customresourcedefinition":
if !deleteCRD {
log.Infof("Skipped deletion of CustomResourceDefinition: '%s'", obj.GetName())
continue
}
}
i.MustUninstallResource(&obj)
}
}
}
func (i *Installer) InstallNamespace() {
if i.Namespace != DefaultInstallNamespace {
// don't create namespace if a different one was supplied
return
}
var namespace apiv1.Namespace
i.unmarshalManifest("00_namespace.yaml", &namespace)
namespace.ObjectMeta.Name = i.Namespace
i.MustInstallResource(kube.MustToUnstructured(&namespace))
}
func (i *Installer) InstallApplicationCRD() {
var applicationCRD apiextensionsv1beta1.CustomResourceDefinition
i.unmarshalManifest("01_application-crd.yaml", &applicationCRD)
i.MustInstallResource(kube.MustToUnstructured(&applicationCRD))
}
func (i *Installer) InstallSettings() {
kubeclientset, err := kubernetes.NewForConfig(i.config)
errors.CheckError(err)
settingsMgr := settings.NewSettingsManager(kubeclientset, i.Namespace)
cdSettings, err := settingsMgr.GetSettings()
if err != nil {
if apierr.IsNotFound(err) {
cdSettings = &settings.ArgoCDSettings{}
} else {
log.Fatal(err)
}
}
if cdSettings.ServerSignature == nil || i.UpdateSignature {
// set JWT signature
signature, err := session.MakeSignature(32)
errors.CheckError(err)
cdSettings.ServerSignature = signature
}
if cdSettings.LocalUsers == nil {
cdSettings.LocalUsers = make(map[string]string)
}
if _, ok := cdSettings.LocalUsers[common.ArgoCDAdminUsername]; !ok || i.UpdateSuperuser {
passwordRaw := i.SuperuserPassword
if passwordRaw == "" {
passwordRaw = readAndConfirmPassword()
}
hashedPassword, err := password.HashPassword(passwordRaw)
errors.CheckError(err)
cdSettings.LocalUsers = map[string]string{
common.ArgoCDAdminUsername: hashedPassword,
}
}
if cdSettings.Certificate == nil {
// generate TLS cert
hosts := []string{
"localhost",
"argocd-server",
fmt.Sprintf("argocd-server.%s", i.Namespace),
fmt.Sprintf("argocd-server.%s.svc", i.Namespace),
fmt.Sprintf("argocd-server.%s.svc.cluster.local", i.Namespace),
}
certOpts := tlsutil.CertOptions{
Hosts: hosts,
Organization: "Argo CD",
IsCA: true,
}
cert, err := tlsutil.GenerateX509KeyPair(certOpts)
errors.CheckError(err)
cdSettings.Certificate = cert
}
err = settingsMgr.SaveSettings(cdSettings)
errors.CheckError(err)
}
func readAndConfirmPassword() string {
for {
fmt.Print("*** Enter an admin password: ")
password, err := terminal.ReadPassword(syscall.Stdin)
errors.CheckError(err)
fmt.Print("\n")
fmt.Print("*** Confirm the admin password: ")
confirmPassword, err := terminal.ReadPassword(syscall.Stdin)
errors.CheckError(err)
fmt.Print("\n")
if string(password) == string(confirmPassword) {
return string(password)
}
log.Error("Passwords do not match")
}
}
func (i *Installer) InstallApplicationController() {
var applicationControllerServiceAccount apiv1.ServiceAccount
var applicationControllerRole rbacv1.Role
var applicationControllerRoleBinding rbacv1.RoleBinding
var applicationControllerDeployment appsv1beta2.Deployment
i.unmarshalManifest("03a_application-controller-sa.yaml", &applicationControllerServiceAccount)
i.unmarshalManifest("03b_application-controller-role.yaml", &applicationControllerRole)
i.unmarshalManifest("03c_application-controller-rolebinding.yaml", &applicationControllerRoleBinding)
i.unmarshalManifest("03d_application-controller-deployment.yaml", &applicationControllerDeployment)
applicationControllerDeployment.Spec.Template.Spec.Containers[0].Image = i.ControllerImage
applicationControllerDeployment.Spec.Template.Spec.Containers[0].ImagePullPolicy = apiv1.PullPolicy(i.ImagePullPolicy)
i.MustInstallResource(kube.MustToUnstructured(&applicationControllerServiceAccount))
i.MustInstallResource(kube.MustToUnstructured(&applicationControllerRole))
i.MustInstallResource(kube.MustToUnstructured(&applicationControllerRoleBinding))
i.MustInstallResource(kube.MustToUnstructured(&applicationControllerDeployment))
}
func (i *Installer) InstallArgoCDServer() {
var argoCDServerServiceAccount apiv1.ServiceAccount
var argoCDServerControllerRole rbacv1.Role
var argoCDServerControllerRoleBinding rbacv1.RoleBinding
var argoCDServerControllerDeployment appsv1beta2.Deployment
var argoCDServerService apiv1.Service
i.unmarshalManifest("04a_argocd-server-sa.yaml", &argoCDServerServiceAccount)
i.unmarshalManifest("04b_argocd-server-role.yaml", &argoCDServerControllerRole)
i.unmarshalManifest("04c_argocd-server-rolebinding.yaml", &argoCDServerControllerRoleBinding)
i.unmarshalManifest("04d_argocd-server-deployment.yaml", &argoCDServerControllerDeployment)
i.unmarshalManifest("04e_argocd-server-service.yaml", &argoCDServerService)
argoCDServerControllerDeployment.Spec.Template.Spec.InitContainers[0].Image = i.ServerImage
argoCDServerControllerDeployment.Spec.Template.Spec.InitContainers[0].ImagePullPolicy = apiv1.PullPolicy(i.ImagePullPolicy)
argoCDServerControllerDeployment.Spec.Template.Spec.InitContainers[1].Image = i.UIImage
argoCDServerControllerDeployment.Spec.Template.Spec.InitContainers[1].ImagePullPolicy = apiv1.PullPolicy(i.ImagePullPolicy)
argoCDServerControllerDeployment.Spec.Template.Spec.Containers[0].Image = i.ServerImage
argoCDServerControllerDeployment.Spec.Template.Spec.Containers[0].ImagePullPolicy = apiv1.PullPolicy(i.ImagePullPolicy)
i.MustInstallResource(kube.MustToUnstructured(&argoCDServerServiceAccount))
i.MustInstallResource(kube.MustToUnstructured(&argoCDServerControllerRole))
i.MustInstallResource(kube.MustToUnstructured(&argoCDServerControllerRoleBinding))
i.MustInstallResource(kube.MustToUnstructured(&argoCDServerControllerDeployment))
i.MustInstallResource(kube.MustToUnstructured(&argoCDServerService))
}
func (i *Installer) InstallArgoCDRepoServer() {
var argoCDRepoServerControllerDeployment appsv1beta2.Deployment
var argoCDRepoServerService apiv1.Service
i.unmarshalManifest("05a_argocd-repo-server-deployment.yaml", &argoCDRepoServerControllerDeployment)
i.unmarshalManifest("05b_argocd-repo-server-service.yaml", &argoCDRepoServerService)
argoCDRepoServerControllerDeployment.Spec.Template.Spec.Containers[0].Image = i.RepoServerImage
argoCDRepoServerControllerDeployment.Spec.Template.Spec.Containers[0].ImagePullPolicy = apiv1.PullPolicy(i.ImagePullPolicy)
i.MustInstallResource(kube.MustToUnstructured(&argoCDRepoServerControllerDeployment))
i.MustInstallResource(kube.MustToUnstructured(&argoCDRepoServerService))
}
func (i *Installer) unmarshalManifest(fileName string, obj interface{}) {
yamlBytes, err := i.box.MustBytes(fileName)
errors.CheckError(err)
err = yaml.Unmarshal(yamlBytes, obj)
errors.CheckError(err)
}
func (i *Installer) MustInstallResource(obj *unstructured.Unstructured) *unstructured.Unstructured {
obj, err := i.InstallResource(obj)
errors.CheckError(err)
return obj
}
func isNamespaced(obj *unstructured.Unstructured) bool {
switch obj.GetKind() {
case "Namespace", "ClusterRole", "ClusterRoleBinding", "CustomResourceDefinition":
return false
}
return true
}
// InstallResource creates or updates a resource. If the installed resource is already up-to-date, it does nothing
func (i *Installer) InstallResource(obj *unstructured.Unstructured) (*unstructured.Unstructured, error) {
if isNamespaced(obj) {
obj.SetNamespace(i.Namespace)
}
// remove 'creationTimestamp' and 'status' fields from the object so that they do not affect the diff
obj.SetCreationTimestamp(metav1.Time{})
delete(obj.Object, "status")
if i.DryRun {
printYAML(obj)
return nil, nil
}
gvk := obj.GroupVersionKind()
dclient, err := i.dynClientPool.ClientForGroupVersionKind(gvk)
if err != nil {
return nil, err
}
apiResource, err := kube.ServerResourceForGroupVersionKind(i.disco, gvk)
if err != nil {
return nil, err
}
reIf := dclient.Resource(apiResource, i.Namespace)
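// Optimistically attempt a Create first; on AlreadyExists, fall back to get/diff and optionally update.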
liveObj, err := reIf.Create(obj)
if err == nil {
fmt.Printf("%s '%s' created\n", liveObj.GetKind(), liveObj.GetName())
return liveObj, nil
}
if !apierr.IsAlreadyExists(err) {
return nil, err
}
liveObj, err = reIf.Get(obj.GetName(), metav1.GetOptions{})
if err != nil {
return nil, err
}
diffRes := diff.Diff(obj, liveObj)
if !diffRes.Modified {
fmt.Printf("%s '%s' up-to-date\n", liveObj.GetKind(), liveObj.GetName())
return liveObj, nil
}
if !i.Upgrade {
log.Println(diffRes.ASCIIFormat(obj, formatter.AsciiFormatterConfig{}))
return nil, fmt.Errorf("%s '%s' already exists. Rerun with --upgrade to update", obj.GetKind(), obj.GetName())
}
liveObj, err = reIf.Update(obj)
if err != nil {
return nil, err
}
fmt.Printf("%s '%s' updated\n", liveObj.GetKind(), liveObj.GetName())
return liveObj, nil
}
func (i *Installer) MustUninstallResource(obj *unstructured.Unstructured) {
err := i.UninstallResource(obj)
errors.CheckError(err)
}
// UninstallResource deletes a resource from the cluster
func (i *Installer) UninstallResource(obj *unstructured.Unstructured) error {
if isNamespaced(obj) {
obj.SetNamespace(i.Namespace)
}
gvk := obj.GroupVersionKind()
dclient, err := i.dynClientPool.ClientForGroupVersionKind(gvk)
if err != nil {
return err
}
apiResource, err := kube.ServerResourceForGroupVersionKind(i.disco, gvk)
if err != nil {
return err
}
reIf := dclient.Resource(apiResource, i.Namespace)
deletePolicy := metav1.DeletePropagationForeground
err = reIf.Delete(obj.GetName(), &metav1.DeleteOptions{PropagationPolicy: &deletePolicy})
if err != nil {
if apierr.IsNotFound(err) {
fmt.Printf("%s '%s' not found\n", obj.GetKind(), obj.GetName())
return nil
}
return err
}
fmt.Printf("%s '%s' deleted\n", obj.GetKind(), obj.GetName())
return nil
}
func printYAML(obj interface{}) {
objBytes, err := yaml.Marshal(obj)
if err != nil {
log.Fatalf("Failed to marshal %v", obj)
}
fmt.Printf("---\n%s\n", string(objBytes))
}

View File

@@ -1,4 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
name: argocd

View File

@@ -1,57 +0,0 @@
# NOTE: the values here are just an example and are not the values used during an install.
apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-cm
namespace: argocd
data:
# url is the externally facing base URL of ArgoCD.
# This field is required when configuring SSO, which ArgoCD uses as part of the redirectURI for the
# dex connectors. When configuring the application in the SSO provider (e.g. github, okta), the
# authorization callback URL will be url + /api/dex/callback. For example, if ArgoCD's url is
# https://example.com, then the auth callback to set in the SSO provider should be:
# https://example.com/api/dex/callback
url: http://localhost:8080
# dex.config holds the contents of the configuration yaml for the dex OIDC/Oauth2 provider sidecar.
# Only a subset of a full dex config is required, namely the connectors list. ArgoCD will generate
# the complete dex config based on the configured URL, and the known callback endpoint which the
# ArgoCD API server exposes (i.e. /api/dex/callback).
dex.config: |
# connectors is a list of dex connector configurations. For details on available connectors and
# how to configure them, see: https://github.com/coreos/dex/tree/master/Documentation/connectors
# NOTE:
# * Any values which start with '$' will look to a key in argocd-secret of the same name, to
# obtain the actual value.
# * ArgoCD will automatically set the 'redirectURI' field in any OAuth2 connectors, to match the
# external callback URL (e.g. https://example.com/api/dex/callback)
connectors:
# GitHub example
- type: github
id: github
name: GitHub
config:
clientID: aabbccddeeff00112233
clientSecret: $dex.github.clientSecret
orgs:
- name: your-github-org
# GitHub enterprise example
- type: github
id: acme-github
name: Acme GitHub
config:
hostName: github.acme.com
clientID: abcdefghijklmnopqrst
clientSecret: $dex.acme.clientSecret
orgs:
- name: your-github-org
# OIDC example (e.g. Okta)
- type: oidc
id: okta
name: Okta
config:
issuer: https://dev-123456.oktapreview.com
clientID: aaaabbbbccccddddeee
clientSecret: $dex.okta.clientSecret

View File

@@ -1,27 +0,0 @@
# NOTE: the values in this secret are provided as a working manifest example and are not the values
# used during an install. New values will be generated as part of `argocd install`
apiVersion: v1
kind: Secret
metadata:
name: argocd-secret
namespace: argocd
type: Opaque
stringData:
# bcrypt hash of the string "password"
admin.password: $2a$10$2b2cU8CPhOTaGrs1HRQuAueS7JTT5ZHsHSzYiFPm1leZck7Mc8T4W
# random server signature key for session validation
server.secretkey: aEDvv73vv70F77+9CRBSNu+/vTYQ77+9EUFh77+9LzFyJ++/vXfLsO+/vWRbeu+/ve+/vQ==
# The following keys hold the shared secret for authenticating GitHub/GitLab/BitBucket webhook
# events. To enable webhooks, configure one or more of the following keys with the shared git
# provider webhook secret. The payload URL configured in the git provider should use the
# /api/webhook endpoint of your ArgoCD instance (e.g. https://argocd.example.com/api/webhook)
github.webhook.secret: shhhh! it's a github secret
gitlab.webhook.secret: shhhh! it's a gitlab secret
bitbucket.webhook.uuid: your-bitbucket-uuid
# the following are user-defined keys which are referenced in the example argocd-cm configmap
# as part of SSO configuration.
dex.github.clientSecret: nv1vx8w4gw5byrflujfkxww6ykh85yq818aorvwy
dex.acme.clientSecret: 5pp7dyre3d5nyk0ree1tr0gd68k18xn94x8lfae9
dex.okta.clientSecret: x41ztv6ufyf07oyoopc6f62p222c00mox2ciquvt

16
manifests/README.md Normal file
View File

@@ -0,0 +1,16 @@
# ArgoCD Installation Manifests
Two sets of installation manifests are provided:
* [install.yaml](install.yaml) - Standard ArgoCD installation with cluster-admin access. Use this
manifest set if you plan to use ArgoCD to deploy applications in the same cluster that ArgoCD runs
in (i.e. kubernetes.default.svc). ArgoCD will still be able to deploy to external clusters with
provided credentials.
* [namespace-install.yaml](namespace-install.yaml) - Installation of ArgoCD which requires only
namespace level privileges (does not need cluster roles). Use this manifest set if you do not
need ArgoCD to deploy applications in the same cluster that ArgoCD runs in, and will rely solely
on provided cluster credentials. An example of using this set of manifests is if you run several
ArgoCD instances for different teams, where each instance will be deploying applications to
external clusters. It will still be possible to deploy to the same cluster (kubernetes.default.svc)
with provided credentials (i.e. `argocd cluster add <CONTEXT> --in-cluster`).

View File

@@ -1,8 +1,7 @@
apiVersion: apps/v1beta2
apiVersion: apps/v1
kind: Deployment
metadata:
name: application-controller
namespace: argocd
spec:
selector:
matchLabels:

View File

@@ -2,7 +2,6 @@ apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: application-controller-role
namespace: argocd
rules:
- apiGroups:
- ""
@@ -11,10 +10,14 @@ rules:
verbs:
- get
- watch
- list
- patch
- update
- apiGroups:
- argoproj.io
resources:
- applications
- appprojects
verbs:
- create
- get
@@ -23,3 +26,11 @@ rules:
- update
- patch
- delete
- apiGroups:
- ""
resources:
- events
verbs:
- create
- list

View File

@@ -2,7 +2,6 @@ apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: application-controller-role-binding
namespace: argocd
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role

View File

@@ -2,4 +2,3 @@ apiVersion: v1
kind: ServiceAccount
metadata:
name: application-controller
namespace: argocd

View File

@@ -0,0 +1,14 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: appprojects.argoproj.io
spec:
group: argoproj.io
names:
kind: AppProject
plural: appprojects
shortNames:
- appproj
- appprojs
scope: Namespaced
version: v1alpha1

View File

@@ -0,0 +1,23 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-cm
# data:
# # ArgoCD's externally facing base URL. Required for configuring SSO
# # url: https://argo-cd-demo.argoproj.io
#
# # A dex connector configuration. See documentation on how to configure SSO:
# # https://github.com/argoproj/argo-cd/blob/master/docs/sso.md#2-configure-argocd-for-sso
# dex.config: |
# connectors:
# # GitHub example
# - type: github
# id: github
# name: GitHub
# config:
# clientID: aabbccddeeff00112233
# clientSecret: $dex.github.clientSecret
# orgs:
# - name: your-github-org
# teams:
# - red-team

View File

@@ -0,0 +1,12 @@
apiVersion: v1
kind: Service
metadata:
name: argocd-metrics
spec:
ports:
- name: http
protocol: TCP
port: 8082
targetPort: 8082
selector:
app: argocd-server

View File

@@ -0,0 +1,15 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-rbac-cm
# data:
# # An RBAC policy .csv file containing additional policy and role definitions.
# # See https://github.com/argoproj/argo-cd/blob/master/docs/rbac.md on how to write RBAC policies.
# policy.csv: |
# # Give all members of "my-org:team-alpha" the ability to sync apps in "my-project"
# p, my-org:team-alpha, applications, sync, my-project/*, allow
# # Make all members of "my-org:team-beta" admins
# g, my-org:team-beta, role:admin
#
# # The default role ArgoCD will fall back to, when authorizing API requests
# policy.default: role:readonly

View File

@@ -1,8 +1,7 @@
apiVersion: apps/v1beta2
apiVersion: apps/v1
kind: Deployment
metadata:
name: argocd-repo-server
namespace: argocd
spec:
selector:
matchLabels:
@@ -12,13 +11,15 @@ spec:
labels:
app: argocd-repo-server
spec:
automountServiceAccountToken: false
containers:
- name: argocd-repo-server
image: argoproj/argocd-repo-server:latest
command: [/argocd-repo-server]
ports:
- containerPort: 8081
- name: redis
image: redis:3.2.11
ports:
- containerPort: 6379
- containerPort: 8081
readinessProbe:
tcpSocket:
port: 8081
initialDelaySeconds: 5
periodSeconds: 10

View File

@@ -2,7 +2,6 @@ apiVersion: v1
kind: Service
metadata:
name: argocd-repo-server
namespace: argocd
spec:
ports:
- port: 8081

View File

@@ -0,0 +1,26 @@
apiVersion: v1
kind: Secret
metadata:
name: argocd-secret
type: Opaque
# data:
# # TLS certificate and private key for API server.
# # Autogenerated with a self-signed certificate if keys are missing.
# tls.crt:
# tls.key:
#
# # bcrypt hash of the admin password and its last modified time. Autogenerated on initial
# # startup. To reset a forgotten password, delete both keys and restart argocd-server.
# admin.password:
# admin.passwordMtime:
#
# # random server signature key for session validation. Autogenerated on initial startup
# server.secretkey:
#
# # The following keys hold the shared secret for authenticating GitHub/GitLab/BitBucket webhook
# # events. To enable webhooks, configure one or more of the following keys with the shared git
# # provider webhook secret. The payload URL configured in the git provider should use the
# # /api/webhook endpoint of your ArgoCD instance (e.g. https://argocd.example.com/api/webhook)
# github.webhook.secret:
# gitlab.webhook.secret:
# bitbucket.webhook.uuid:

View File

@@ -1,8 +1,7 @@
apiVersion: apps/v1beta2
apiVersion: apps/v1
kind: Deployment
metadata:
name: argocd-server
namespace: argocd
spec:
selector:
matchLabels:
@@ -14,12 +13,6 @@ spec:
spec:
serviceAccountName: argocd-server
initContainers:
- name: copyutil
image: argoproj/argocd-server:latest
command: [cp, /argocd-util, /shared]
volumeMounts:
- mountPath: /shared
name: static-files
- name: ui
image: argoproj/argocd-ui:latest
command: [cp, -r, /app, /shared]
@@ -33,12 +26,12 @@ spec:
volumeMounts:
- mountPath: /shared
name: static-files
- name: dex
image: quay.io/coreos/dex:v2.10.0
command: [/shared/argocd-util, rundex]
volumeMounts:
- mountPath: /shared
name: static-files
readinessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 3
periodSeconds: 30
volumes:
- emptyDir: {}
name: static-files

View File

@@ -2,33 +2,11 @@ apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: argocd-server-role
namespace: argocd
rules:
- apiGroups:
- ""
resources:
- pods
- pods/exec
- pods/log
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- secrets
verbs:
- create
- get
- list
- watch
- update
- patch
- delete
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
@@ -42,6 +20,7 @@ rules:
- argoproj.io
resources:
- applications
- appprojects
verbs:
- create
- get
@@ -50,3 +29,10 @@ rules:
- update
- delete
- patch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- list

View File

@@ -2,7 +2,6 @@ apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: argocd-server-role-binding
namespace: argocd
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role

View File

@@ -2,4 +2,3 @@ apiVersion: v1
kind: ServiceAccount
metadata:
name: argocd-server
namespace: argocd

View File

@@ -2,7 +2,6 @@ apiVersion: v1
kind: Service
metadata:
name: argocd-server
namespace: argocd
spec:
ports:
- name: http

View File

@@ -0,0 +1,34 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: dex-server
spec:
selector:
matchLabels:
app: dex-server
template:
metadata:
labels:
app: dex-server
spec:
serviceAccountName: dex-server
initContainers:
- name: copyutil
image: argoproj/argocd-server:latest
command: [cp, /argocd-util, /shared]
volumeMounts:
- mountPath: /shared
name: static-files
containers:
- name: dex
image: quay.io/dexidp/dex:v2.11.0
command: [/shared/argocd-util, rundex]
ports:
- containerPort: 5556
- containerPort: 5557
volumeMounts:
- mountPath: /shared
name: static-files
volumes:
- emptyDir: {}
name: static-files

View File

@@ -0,0 +1,14 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: dex-server-role
rules:
- apiGroups:
- ""
resources:
- secrets
- configmaps
verbs:
- get
- list
- watch

View File

@@ -0,0 +1,11 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: dex-server-role-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: dex-server-role
subjects:
- kind: ServiceAccount
name: dex-server

View File

@@ -0,0 +1,4 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: dex-server

View File

@@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
name: dex-server
spec:
ports:
- name: http
protocol: TCP
port: 5556
targetPort: 5556
- name: grpc
protocol: TCP
port: 5557
targetPort: 5557
selector:
app: dex-server

View File

@@ -0,0 +1,31 @@
resources:
- application-crd.yaml
- appproject-crd.yaml
- argocd-cm.yaml
- argocd-secret.yaml
- argocd-rbac-cm.yaml
- application-controller-sa.yaml
- application-controller-role.yaml
- application-controller-rolebinding.yaml
- application-controller-deployment.yaml
- argocd-server-sa.yaml
- argocd-server-role.yaml
- argocd-server-rolebinding.yaml
- argocd-server-deployment.yaml
- argocd-server-service.yaml
- argocd-metrics-service.yaml
- argocd-repo-server-deployment.yaml
- argocd-repo-server-service.yaml
- dex-server-sa.yaml
- dex-server-role.yaml
- dex-server-rolebinding.yaml
- dex-server-deployment.yaml
- dex-server-service.yaml
imageTags:
- name: argoproj/argocd-server
newTag: v0.9.2
- name: argoproj/argocd-repo-server
newTag: v0.9.2
- name: argoproj/application-controller
newTag: v0.9.2

View File

@@ -0,0 +1,15 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: application-controller-clusterrole
rules:
- apiGroups:
- '*'
resources:
- '*'
verbs:
- '*'
- nonResourceURLs:
- '*'
verbs:
- '*'

View File

@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: application-controller-clusterrolebinding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: application-controller-clusterrole
subjects:
- kind: ServiceAccount
name: application-controller
namespace: argocd

View File

@@ -0,0 +1,11 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: argocd-server-clusterrole
rules:
- apiGroups:
- '*'
resources:
- '*'
verbs:
- delete

View File

@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: argocd-server-clusterrolebinding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: argocd-server-clusterrole
subjects:
- kind: ServiceAccount
name: argocd-server
namespace: argocd

View File

@@ -0,0 +1,8 @@
bases:
- ../base
resources:
- application-controller-clusterrole.yaml
- application-controller-clusterrolebinding.yaml
- argocd-server-clusterrole.yaml
- argocd-server-clusterrolebinding.yaml

Some files were not shown because too many files have changed in this diff.