Compare commits

513 Commits

Alex Collins
e0bd546a07 Update manifests to v1.0.2 2019-06-14 09:46:05 -07:00
Alex Collins
984829fcc8 Merge branch 'release-1.0' of github.com:argoproj/argo-cd into release-1.0 2019-06-14 09:38:08 -07:00
Jesse Suen
c48e27f265 Cluster registration was unintentionally persisting client-cert auth credentials (#1742)
Remove unused CreateClusterFromKubeConfig server method
2019-06-14 04:03:01 -07:00
Alex Collins
5fe1447b72 Update manifests to v1.0.1 2019-05-28 10:08:34 -07:00
Alex Collins
539516bd43 Update manifests to v1.0.1 2019-05-28 10:07:32 -07:00
Alex Collins
8a57d544ff Update manifests to v1.0.1 2019-05-28 09:01:21 -07:00
Alex Collins
cd77e2a048 Update manifests to v1.0.1 2019-05-28 08:58:01 -07:00
Alex Collins
a52f766815 removes file which cannot be compiled 2019-05-24 15:03:49 -07:00
Alex Collins
646fd37e16 Public git creds (#1633) 2019-05-24 15:01:03 -07:00
Alexander Matyushentsev
c74ca22023 Update manifests to v1.0.0 2019-05-16 13:00:40 -07:00
Alexander Matyushentsev
2d170be242 Issue #1471 - Support configuring requested OIDC provider scopes and enforced RBAC scopes (#1585)
* Issue #1471 - Support configuring requested OIDC provider scopes and enforced RBAC scopes

* Apply reviewer notes
2019-05-16 07:35:44 -07:00
Alexander Matyushentsev
079101522d Issue #1533 - Prevent reconciliation loop for self-managed apps (#1608) 2019-05-14 08:05:28 -07:00
Jesse Suen
1bea98e01b Supply resourceVersion to watch request to prevent reading of stale cache (#1612) 2019-05-13 15:01:15 -07:00
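For context on the pattern behind this fix: a watch that starts without a resourceVersion can observe state older than what a preceding list returned. A minimal list-then-watch sketch in client-go follows (v1.12-era signatures; the clientset wiring and namespace are illustrative, not Argo CD's actual code):

```go
// A minimal list-then-watch sketch; illustrative, not Argo CD's code.
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// List first to learn the current resourceVersion.
	pods, err := clientset.CoreV1().Pods("default").List(metav1.ListOptions{})
	if err != nil {
		panic(err)
	}

	// Watch from that version instead of from "now": the API server then
	// replays exactly the events after the list, so nothing stale is read
	// and no event in between is lost.
	w, err := clientset.CoreV1().Pods("default").Watch(metav1.ListOptions{
		ResourceVersion: pods.ResourceVersion,
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Println(ev.Type)
	}
}
```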
Alexander Matyushentsev
7c09221f7c Update manifests to v1.0.0-rc3 2019-05-09 09:52:56 -07:00
Alexander Matyushentsev
6355e910d4 Fix flaky TestGetIngressInfo unit test (#1529) 2019-05-09 09:52:16 -07:00
Alexander Matyushentsev
891e0320d7 Issue #1586 - Ignore patch errors during diffing normalization (#1599) 2019-05-09 09:30:48 -07:00
Alexander Matyushentsev
486323ae58 Issue #1596 - SSH URLs support is partially broken (#1597) 2019-05-09 09:30:42 -07:00
Alexander Matyushentsev
4ef875aa0b Issue #1552 - Improve rendering app image information (#1584) 2019-05-09 09:30:31 -07:00
Alexander Matyushentsev
e756b8db7a Fix ingress browsable url formatting if port is not string (#1576) 2019-05-09 09:28:49 -07:00
Alexander Matyushentsev
8023f8ac8d Issue #1579 - Impossible to sync to HEAD from UI if auto-sync is enabled (#1580) 2019-05-09 09:28:42 -07:00
Alexander Matyushentsev
803408904a Issue #1570 - Application controller is unable to delete self-referenced app (#1574) 2019-05-09 09:28:35 -07:00
Alexander Matyushentsev
702f9095da Issue #1546 - Add liveness probe to repo server/api servers (#1560) 2019-05-09 09:28:30 -07:00
Alexander Matyushentsev
0b9ee1ae6d Issue #1557 - Controller incorrectly reports health state of self-managed application (#1558) 2019-05-09 09:28:19 -07:00
Alexander Matyushentsev
2f003e08ff Issue #1540 - Fix kustomize manifest generation crash if manifest has image without version (#1559) 2019-05-09 09:28:14 -07:00
Paul Brit
e090857d6b Fix hardcoded 'git' user in util/git.NewClient (#1556)
Closes #1555
2019-05-09 09:28:09 -07:00
dthomson25
4e29fff5a3 Improve Rollout health.lua (#1554) 2019-05-09 09:28:03 -07:00
Alexander Matyushentsev
e279377696 Fix invalid URL for ingress without hostname (#1553) 2019-05-01 15:38:40 -07:00
Alexander Matyushentsev
5e52839ce3 Issue #1533 - Prevent reconciliation loop for self-managed apps (#1547) 2019-05-01 10:22:09 -07:00
Alexander Matyushentsev
3ca632a552 Update manifests to v1.0.0-rc2 2019-04-30 13:19:50 -07:00
Alexander Matyushentsev
cfe55357ac Rollout health checks/actions should support v0.2 and v0.2+ versions (#1543) 2019-04-30 13:18:15 -07:00
Alex Collins
0f6d768eca Fixes bug in normalizer (#1542) 2019-04-30 11:32:54 -07:00
Alexander Matyushentsev
75330da328 Ingress resource might get invalid ExternalURL (#1522) (#1523) 2019-04-30 11:14:18 -07:00
Alexander Matyushentsev
a1bcbab0e5 Issue 1476 - Avoid validating repository in application controller (#1535) 2019-04-30 11:10:06 -07:00
Alexander Matyushentsev
db9272032a Issue #1414 - Load target resource using K8S if conversion fails (#1527) 2019-04-30 11:10:02 -07:00
Alexander Matyushentsev
d6d6c655ff Issue #1476 - Add repo server grpc call timeout (#1528) 2019-04-30 11:09:58 -07:00
Alex Collins
58acc92790 Adds support for configuring repo creds at a domain/org level. Closes… (#1496) 2019-04-30 11:09:53 -07:00
Simon Behar
c3074c0977 Whitelisting of resources (#1509)
* Added whitelisting of resources
2019-04-30 11:09:26 -07:00
Simon Behar
af254f3047 Added ability to sync specific labels from the command line (#1501)
* Finished initial implementation

* Added tests and fix a few bugs
2019-04-30 11:09:16 -07:00
Alex Collins
c140976eeb Updates Makefile 2019-04-24 10:52:53 -07:00
Alex Collins
05c22d4ddc Updates VERSION 2019-04-24 10:51:50 -07:00
Alex Collins
3e08938a20 Updates VERSION 2019-04-24 10:49:30 -07:00
Alex Collins
5937bb574d Update manifests to v1.0.0-rc1 2019-04-24 10:48:24 -07:00
Alex Collins
ded55b26d1 Update manifests to v1.0.0-rc1 2019-04-24 10:46:23 -07:00
Alex Collins
d79ed65de0 Updated CHANGELOG.md 2019-04-24 10:35:04 -07:00
Alexander Matyushentsev
3b71bd05a4 Issue #1411 - Document private repository configuration (#1515) 2019-04-24 10:26:07 -07:00
Alexander Matyushentsev
e75a7a5dea Update min client version and cache version to 1.0.0 (#1517) 2019-04-24 10:15:02 -07:00
Alexander Matyushentsev
60273ba84f Issue #1499 - Use ingress host information to populate application external URL (#1511) 2019-04-23 10:34:53 -07:00
Alexander Matyushentsev
c33604f2ef v0.12.2 Change log (#1508) 2019-04-22 15:26:55 -07:00
Alex Collins
eea804b3f6 Allow empty. Close #1504 (#1506) 2019-04-22 14:07:12 -07:00
Alex Collins
25edf8ac3f Update CHANGELOG.md (#1500) 2019-04-22 13:28:29 -07:00
Omer Kahani
3ed6dc91dd Add riskified to organizations using ArgoCD (#1497) 2019-04-21 07:25:26 -07:00
Alex Collins
13e5348177 Updates CHANGELOG for v1.0.0 (#1469) 2019-04-19 11:44:42 -07:00
Alexander Matyushentsev
90e44c092a Issue #86 - Custom actions bug fixing (#1494) 2019-04-19 10:27:12 -07:00
Simon Behar
8027882c1c Added --resource flag to argocd app wait (#1453) 2019-04-19 09:59:06 -07:00
Alexander Matyushentsev
ad9ed33f8d Fix flaky e2e test. Again (#1489) 2019-04-19 09:05:42 -07:00
Alex Collins
ddf5f0cf46 Introduces new RBAC permissions that are required for changing cluste… (#1440) 2019-04-19 08:54:30 -07:00
Alexander Matyushentsev
76811a992e Change logging level in util function to Debug (#1488) 2019-04-18 11:58:30 -07:00
Alexander Matyushentsev
2eac7bf457 Issue #1476 - Fix race condition in controller cache (#1485) 2019-04-18 08:12:18 -07:00
Alex Collins
53cbcd362d Adds a faster way to run e2e locally (#1475) 2019-04-17 10:53:37 -07:00
Alexander Matyushentsev
11c878b847 Change version to 1.0.0 (#1473) 2019-04-17 08:35:07 -07:00
Alexander Matyushentsev
25d5333894 Fix flaky e2e test (#1474) 2019-04-17 08:21:18 -07:00
dthomson25
4541ca664a Initial Custom Actions Implementation (#1369) 2019-04-16 14:50:44 -07:00
Alexander Matyushentsev
97422b4148 Improve e2e tests for app with secrets (#1466) 2019-04-16 13:04:54 -07:00
Alex Collins
4df07a278d Adds label to Github issue templates (#1468) 2019-04-16 11:54:17 -07:00
Alexander Matyushentsev
efa418c58b Document steps to troubleshoot cluster configuration (#1467) 2019-04-16 11:41:44 -07:00
Alexander Matyushentsev
be40dbc8cc Issue #1326 - Rollback UI is not showing correct ksonnet parameters in preview (#1464) 2019-04-16 08:52:48 -07:00
dthomson25
0bd7023b66 Add link to e2e testing on contributing guide (#1456) 2019-04-15 13:50:46 -07:00
Marc
db82456dde don't compare secrets, since argo-cd doesn't have access to their data (#1459) 2019-04-15 13:46:03 -07:00
Alex Collins
a51441546c more-information-needed (#1463) 2019-04-15 13:39:13 -07:00
Alex Collins
0bd323140d Docs (#1441) 2019-04-15 13:39:04 -07:00
Ryan Fernandes
ad22949925 grammar change. added an 'if' (#1465) 2019-04-15 13:19:11 -07:00
Alex Collins
0726ee8995 Fixes goroutine leak. Closes #1381 (#1457) 2019-04-15 10:50:05 -07:00
Alexander Matyushentsev
c120004084 Fix e2e test flakyness (#1462) 2019-04-15 09:55:30 -07:00
Alexander Matyushentsev
e15b97ee08 Document how to use helm without internet access (#1448) 2019-04-12 15:22:38 -07:00
Alexander Matyushentsev
bbc7d39928 Regenerate manifests (#1454) 2019-04-12 14:38:08 -07:00
Alexander Matyushentsev
b53c34c3f7 Generate random name for grpc proxy unix socket file instead of time stamp (#1455) 2019-04-12 14:25:01 -07:00
Alex Collins
d2928d5b31 Shows the health of the application. Closes #1433 (#1434) 2019-04-12 11:52:37 -07:00
Karsten Siemer
7e76d6de33 Overlay selector of argocd-redis-ha service (#1436)
* The selector of the argocd-redis-ha service wasn't being overlaid, so the service never had any endpoints

* Generated install.yaml and namespace-install.yaml using make manifests
2019-04-12 09:17:01 -07:00
Alexander Matyushentsev
3eac376a41 Revert "Redis mastergroup name should be resolvable and argocd-redis-ha is (#1450)" (#1452)
This reverts commit 7084e3af5c.
2019-04-12 07:44:45 -07:00
Karsten Siemer
7084e3af5c Redis mastergroup name should be resolvable and argocd-redis-ha is (#1450)
The Redis master group name was set to argocd, which is not resolvable because no service has that name; it should be renamed to the service which selects all Redis pods
2019-04-12 07:26:52 -07:00
Alexander Matyushentsev
ac3d12c746 Issue #1446 - Delete helm temp directories (#1449) 2019-04-12 05:24:38 -07:00
Jonah Back
41a3352516 Fix github reference to use mainline instead of fork (#1445) 2019-04-11 18:32:12 -07:00
Alexander Matyushentsev
311ff8caed Issue #1389 - Fix null pointer exception in secret normalization function (#1443) 2019-04-11 11:46:42 -07:00
Alexander Matyushentsev
197bbda02e Issue #1425 - Argo CD should not delete CRDs (#1428) 2019-04-11 09:07:14 -07:00
Alexander Matyushentsev
56cd8fcc95 Fix invalid ignoreDifferences config example (#1437) 2019-04-11 07:53:56 -07:00
Alex Collins
3c4b42de75 Displays resources that are being deleted as "Progressing". Closes #1410 (#1426) 2019-04-11 07:47:59 -07:00
Le Van Nghia
e4b8a9d895 Added CyberAgent and OpenSaaS Studio to organizations using ArgoCD (#1427) 2019-04-10 07:59:24 -07:00
Alex Collins
76d25d3795 Perform health assessments on all resource nodes in the tree. Closes #1382 (#1422) 2019-04-09 18:15:24 -07:00
Alex Collins
97a59ca753 Enables Probot stale and no-response plugins. Closes #1418 (#1419) 2019-04-09 17:35:44 -07:00
Alex Collins
544bd47e94 Nils health if the resource does not provide it. Closes #1383 (#1408) 2019-04-09 15:05:14 -07:00
Alexander Matyushentsev
56916a0321 Add v0.12.1 release notes (#1423) 2019-04-09 14:57:18 -07:00
Michael Goodness
eff83a45cd Add Ticketmaster to "Who uses" section of README (#1424)
Signed-off-by: Michael Goodness <mike.goodness@ticketmaster.com>
2019-04-09 14:42:04 -07:00
Alex Collins
9df1e27191 Fixes doc bugs. Closes #1395 (#1403) 2019-04-09 11:01:04 -07:00
Alexander Matyushentsev
abe25f62d0 Run 'go fmt' for application.go and server.go (#1417) 2019-04-09 09:43:53 -07:00
dthomson25
ad5d26f08a Add patch audit (#1416)
* Add auditing to patching commands

* Omit Patch Resource logs to prevent secret leaks
2019-04-09 08:57:22 -07:00
Alexander Matyushentsev
dea731a6b2 Add networking test app (#1409) 2019-04-08 16:29:08 -07:00
Isaac Gaskin
1d19447e8e issue #1202: docs(help examples): adding template and first examples for the app command (#1398)
shameless ripoff of kubectl example templating
2019-04-08 15:47:02 -07:00
Alexander Matyushentsev
ac938c8738 Issue #1406 - Don't try deleting application resource if it already has (#1407) 2019-04-08 15:08:48 -07:00
Alex Collins
88a1c2a593 Pod health (#1365) 2019-04-08 14:49:57 -07:00
Petr Jediný
1e8db87320 Add KompiTech GmbH to organizations using Argo CD (#1402) 2019-04-08 12:45:40 -07:00
Alexander Matyushentsev
7382ebce27 Issue #1404 - App controller unnecessary set namespace to cluster level resources (#1405) 2019-04-08 12:02:06 -07:00
Alex Collins
9988b3d8e6 Mkdocs2 (#1393) 2019-04-08 09:20:36 -07:00
Jesse Suen
6b69449175 Add OpenAPI validation in CRD schema (#1256) 2019-04-06 17:18:00 -07:00
dthomson25
85a5fb5a41 Allow wait to return on health or suspended (#1392) 2019-04-06 10:31:07 -06:00
Alex Collins
f5bc901dd7 Create docs website (#1387) Closes #1390 2019-04-05 15:12:27 -07:00
Alex Collins
4ac062d09e Removes componentParameterOverrides. Closes #1372 (#1378) 2019-04-05 08:26:37 -07:00
Marcin Jasion
a15ca7259c Fix project.yaml link in README.md (#1384) 2019-04-05 07:34:52 -07:00
brushmate
d4ee7972ca Add Yieldlab to organzations using Argo CD (#1385) 2019-04-05 07:34:09 -07:00
Alexander Matyushentsev
86f6b657e2 Issue #1374 - Add k8s objects circular dependency protection to getApp method (#1379) 2019-04-04 17:52:30 -07:00
Alexander Matyushentsev
ac7906fdea Issue #1366 - Fix null pointer dereference error in 'argocd app wait' (#1380) 2019-04-04 17:49:34 -07:00
dthomson25
4d494f3a1b Magically increase the code coverage!!! (#1370) 2019-04-04 10:06:10 -06:00
Alexander Matyushentsev
57ff5b25e4 Issue #1012 - kubectl v1.13 fails to convert extensions/NetworkPolicy (#1360) 2019-04-04 08:30:35 -07:00
Jesse Suen
28fa4a7571 MAGA: Make ArgoCD Golang Again! (#1279) 2019-04-04 02:35:13 -07:00
Alex Collins
723228598e Adds images to resource tree (#1351) 2019-04-03 15:11:48 -07:00
Alexander Matyushentsev
790cdd1d45 Add 'Who uses Argo CD?' section (#1361) 2019-04-02 22:27:54 -07:00
Tom Wieczorek
81e21a551d Add mapping to new canonical Ingress API group (#1348)
Since Kubernetes 1.14, networking.k8s.io/v1beta1 is the canonical API group for Ingress resources; the extensions group is deprecated.
2019-04-02 21:25:09 -07:00
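As a hedged sketch of what such a mapping implies (a hypothetical data structure, not necessarily Argo CD's real one): the deprecated extensions group and the canonical networking.k8s.io group name the same Ingress kind and must be treated as equivalent during reconciliation:

```go
// Hypothetical equivalence table; Argo CD's actual structure may differ.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

// canonicalGroupKind maps a deprecated GroupKind to its canonical form so
// the two spellings of the same resource are reconciled as one.
var canonicalGroupKind = map[schema.GroupKind]schema.GroupKind{
	{Group: "extensions", Kind: "Ingress"}: {Group: "networking.k8s.io", Kind: "Ingress"},
}

func canonicalize(gk schema.GroupKind) schema.GroupKind {
	if canonical, ok := canonicalGroupKind[gk]; ok {
		return canonical
	}
	return gk
}

func main() {
	fmt.Println(canonicalize(schema.GroupKind{Group: "extensions", Kind: "Ingress"}))
	// Ingress.networking.k8s.io
}
```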
dthomson25
7cf3f6cd19 Fix Failing Linter (#1350) 2019-04-02 17:39:04 -06:00
Alexander Matyushentsev
506d95da10 Issue #1294 - CLI diff should take into account resource customizations (#1337)
* Issue #1294 - CLI diff should take into account resource customizations

* Apply reviewer notes: add comments to type definition and e2e test
2019-04-02 13:59:55 -07:00
Alexander Matyushentsev
36b4683e84 Issue #908 - Surface Service/Ingress external IPs, hostname to application (#1347) 2019-04-02 08:48:34 -07:00
Noah Kantrowitz
2becacd48d Copy-paste error: clusterResourceWhitelist -> namespaceResourceBlacklist (#1343)
Same fix as #1312 but in another file.
2019-04-01 09:17:09 -07:00
Alex Collins
ae41425c77 gotestsum (#1341) 2019-03-30 22:14:35 -07:00
Alexander Matyushentsev
59837cb513 Issue #1218 - Allow using any name for secrets which store cluster credentials (#1336) 2019-03-29 22:09:36 -07:00
Alexander Matyushentsev
66e5d51329 Issue #733 - 'argocd app wait' should fail sooner if app transitioned to (#1339)
Issue #733 - 'argocd app wait' should fail sooner if app transitioned to Degraded state
2019-03-29 21:00:50 -07:00
Alexander Matyushentsev
15dfa79708 Issue #357 - Expose application nodes networking information (#1333) 2019-03-29 20:59:25 -07:00
Alexander Matyushentsev
896d46525e Don't run lint after running codegen (#1338) 2019-03-29 13:27:22 -07:00
Daniel van den Berg
a8b70b411c Declarative setup doc update (#1334)
This change updates the documentation around declarative setups. The
docs did not explicitly distinguish between adding an HTTPS repository
or an SSH repository, and this PR clarifies that.
2019-03-29 08:13:32 -07:00
Alex Collins
cd25c4b3c9 Enables default lint checks, fixes lint and bugs (#1330) 2019-03-28 13:37:53 -07:00
Alex Collins
b28d8361f5 Adds "make build" target, and running lint,build,test (#1331) 2019-03-28 11:20:51 -07:00
Jesse Suen
b40ba175a3 Update argocd-util import/export to support proper backup and restore (#1328) 2019-03-27 17:05:59 -07:00
Alex Collins
dfa91d87cf Adds support for kustomize edit set image. Closes #1275 (#1324) 2019-03-27 12:54:23 -07:00
Alex Collins
102c24cc29 Fixes deps (#1325) 2019-03-26 15:35:32 -07:00
Alex Collins
7d3b6cc8e0 Force color logging locally (#1316) 2019-03-26 13:59:03 -07:00
Alexander Matyushentsev
9ef7064cc4 Use paused field in rollout health check (#1321) 2019-03-26 11:07:06 -07:00
Alexander Matyushentsev
56f0ff204e Issue #1319 - Fix invalid group filtering in 'patch-resource' command (#1320) 2019-03-26 08:01:35 -07:00
Alexander Matyushentsev
af896533df Issue #1135 - Run e2e tests in throw-away kubernetes cluster (#1318)
* Issue #1135 - Run e2e tests in throw-away kubernetes cluster
2019-03-24 07:35:57 -07:00
Jesse Suen
aa099f3fc0 Update CHANGELOG.md for v0.12 release (#1317) 2019-03-22 21:02:20 -07:00
Jesse Suen
e07a877e73 Use Recreate deployment strategy for controller (#1315) 2019-03-22 11:50:15 -07:00
Jesse Suen
1f675f4bb9 Fix goroutine leak in RetryUntilSucceed (#1314) 2019-03-22 11:50:00 -07:00
Jesse Suen
e482d74d19 Support a separate OAuth2 CLI clientID different from server (#1307) 2019-03-22 03:23:51 -07:00
Tom Wieczorek
50bff3e540 Copy-paste error: clusterResourceWhitelist -> namespaceResourceBlacklist (#1312) 2019-03-22 02:35:39 -07:00
Andre Krueger
0d7c42ba54 Honor os environment variables for helm commands (#1306) 2019-03-21 16:51:04 -07:00
Alexander Matyushentsev
ec7cbf8e15 Issue #1308 - argo diff --local fails if live object does not exist (#1309) 2019-03-21 15:32:44 -07:00
Alexander Matyushentsev
d60fb2b449 Unavailable cache should not prevent reconciling/syncing application (#1303) 2019-03-20 14:02:54 -07:00
Jesse Suen
dc989dbebc Update redis-ha chart to resolve redis failover issues (#1301) 2019-03-20 12:06:18 -07:00
Marc
09164cae6c only print to stdout, if there is a diff + exit code (#1288) 2019-03-19 18:58:52 -07:00
Alexander Matyushentsev
80f0f779db Fix sample dashboard link in metrics doc (#1299) 2019-03-19 14:26:26 -07:00
Alexander Matyushentsev
c605e892b6 Issue #1258 - Disable CGO_ENABLED for server/controller binaries (#1286) 2019-03-19 14:25:19 -07:00
Alexander Matyushentsev
80fe3e1877 Controller doesn't stop running watches on cluster resync (#1298) 2019-03-19 13:25:01 -07:00
Jesse Suen
8f7a7ef6a4 Update dashboard to have controller/repo-server stats. Collapsible rows (#1295) 2019-03-19 10:41:51 -07:00
hartman17
2aad4d0ab5 Sample Grafana dashboard (#1277) 2019-03-19 01:12:21 -07:00
Alexander Matyushentsev
df7b0c6682 Issue #1290 - Fix concurrent read/write error in state cache (#1293) 2019-03-18 23:38:10 -07:00
Jesse Suen
ea1519de82 Fix a goroutine leak in api-server application.PodLogs and application.Watch (#1292) 2019-03-18 21:50:11 -07:00
Alexander Matyushentsev
b60067af97 Issue #1287 - Fix local diff of non-namespaced resources. Also handle duplicates in local diff (#1289) 2019-03-18 21:19:08 -07:00
Jesse Suen
22ddd53ea5 Fix issue where argocd app set -p required repo privileges. (#1280)
Grant patch privileges to argocd-server
2019-03-18 14:39:32 -07:00
Alexander Matyushentsev
cafe24da86 Issue #1070 - Handle duplicated resource definitions (#1284) 2019-03-18 13:21:03 -07:00
Yann Soubeyrand
c33acf749c Fix documentation on diffing customization (#1285) 2019-03-18 13:20:44 -07:00
Jesse Suen
dab3b688f0 Add golang prometheus metrics to controller and repo-server (#1281) 2019-03-18 11:32:20 -07:00
dthomson25
a34d2c750b Add note about Kustomize1 (#1263) 2019-03-17 22:25:20 -07:00
Jesse Suen
5210c678b9 Git cloning via SSH was not verifying host public key (#1276) 2019-03-15 14:29:10 -07:00
Alexander Matyushentsev
baf157901c Rename Application observedAt to reconciledAt and use observedAt to notify about partial app refresh (#1270) 2019-03-14 16:42:36 -07:00
Alexander Matyushentsev
e457dd6f6c Bug fix: set 'Version' field while saving application resources tree (#1268) 2019-03-14 15:52:50 -07:00
Alexander Matyushentsev
2724aeef32 Avoid doing full reconciliation unless application 'managed' resource has changed (#1267) 2019-03-14 14:54:34 -07:00
Jesse Suen
1d3ec93ec7 Support kustomize apps with remote bases in private repos in the same host (#1264) 2019-03-14 14:25:05 -07:00
Omer Kahani
fea3899f26 Fix project.yaml link location (#1257) 2019-03-12 10:38:22 -07:00
Alex Collins
f016acdade Enable debug logging for local development (#1260)
* Enable debug logging for local development

* Update Procfile
2019-03-12 10:31:51 -07:00
Alex Collins
0c4d5009a2 Tweak lint (#1259) 2019-03-12 10:31:35 -07:00
Alexander Matyushentsev
815ba879e6 Issue #1252 - Application controller incorrectly builds application objects tree (#1253) 2019-03-11 11:31:46 -07:00
Alexander Matyushentsev
3df86a7918 Issue #1247 - Fix CRD creation/deletion handling (#1249) 2019-03-11 08:50:00 -07:00
Alex Collins
5e7b48c9a2 Migrates from gometalinter to golangci-lint. Closes #1225 (#1226) 2019-03-08 16:22:04 -08:00
Jesse Suen
0f248e9149 Replace git fetch implementation with git CLI (from go-git) (#1244) 2019-03-08 14:08:02 -08:00
Alexander Matyushentsev
461d8c980f Fix nil pointer dereference in CompareAppState (#1234) (#1240) 2019-03-07 19:24:47 -08:00
Alexander Matyushentsev
9a7fecef06 Issue #1231 - Deprecated resource kinds from 'extensions' groups are not reconciled correctly (#1232) 2019-03-06 01:42:26 -08:00
Alexander Matyushentsev
39c63371bf Update link to config management plugins in custom_tools.md (#1228) 2019-03-06 01:16:19 -08:00
Jesse Suen
80b0e1138c Update documentation for v0.12.0 (#1227)
* Sort kustomize params in GetAppDetails
2019-03-06 00:09:01 -08:00
Alexander Matyushentsev
3acc0b3af2 Issue #1229 - App creation failed for public repository (#1230) 2019-03-06 00:02:27 -08:00
Jesse Suen
39174ab969 Move parameters listing from GenerateManifests to GetAppDetails (#1221)
* Move parameters listing from GenerateManifests to GetAppDetails
* Fix logging to use standard logger to honor CLI loglevel
2019-03-05 14:56:47 -08:00
Alexander Matyushentsev
0578b6cf36 Issue 1065 - The '--grpc-web' flag is ignored by login command (#1215) 2019-03-04 11:40:59 -08:00
Jesse Suen
cc7b283f23 Deprecate componentParameterOverrides in favor of source specific config (#1207)
* Deprecate componentParameterOverrides in favor of source specific config
* Support rollback when application source changes
* Removes the legacy spec.source.environment and spec.source.valuesFiles which were deprecated in v0.11
* Fix issue where argocd app create APPNAME --file didn't fail when there were name conflicts
* Fix issue where auto-sync and app deletion would cause infighting
2019-03-04 00:56:36 -08:00
Jesse Suen
4adca869c8 Support talking to Dex using local cluster address instead of public address (#1211) 2019-03-03 23:46:19 -08:00
Alexander Matyushentsev
80af8ce93c Issue #701 - Add config management plugins documentation (#1175)
* Issue #701 - Add config management plugins documentation
2019-03-03 23:19:09 -08:00
Alexander Matyushentsev
c90b020a83 Issue #1065 - Correctly read chunked http response in GRPC proxy (#1214) 2019-03-03 22:36:04 -08:00
dthomson25
052bd6808b Fix broken test for rollout health lua (#1213) 2019-03-03 20:26:19 -08:00
Alexander Matyushentsev
87507d50c6 Issue #1161 - no need to maintain map of existing CRDs since it is handled by resourceVersion usage (#1194) 2019-03-01 12:09:47 -08:00
Jesse Suen
058d32ec0b Fix issue where CLI would panic after timeout when cli did not have get permissions (#1209) 2019-03-01 12:06:30 -08:00
dthomson25
e830f5d4e7 Add Suspended status to Rollout health script (#1203) 2019-03-01 10:54:57 -08:00
dthomson25
a848090014 Add cli command to patch resources (#1200) 2019-02-28 14:43:38 -08:00
Jesse Suen
ecb2601164 Include resource, action, object in permission denied errors (#1188) 2019-02-28 13:11:47 -08:00
Alex Collins
31e801425f Lints local imports. Closes #1197 (#1198) 2019-02-27 18:05:55 -08:00
Jesse Suen
d386e24df4 Rename excludedResources config key to resource.exclusions. Support hot reload (#1189) 2019-02-27 16:44:17 -08:00
dthomson25
a9d352efb2 Add suspended status (#1187) 2019-02-27 14:36:25 -08:00
Alexander Matyushentsev
80fb0c90ea 'argocd app diff --local' is broken if application.instanceLabelKey setting is not configured (#1191) 2019-02-27 00:38:12 -08:00
Alexander Matyushentsev
f3172d6727 Issue #1161 - Update resource version on every watch event (#1192) 2019-02-26 23:24:30 -08:00
Jesse Suen
af0f6e578b Introduce prometheus histogram for app reconcile performance (#1184) 2019-02-26 23:09:46 -08:00
Alexander Matyushentsev
863e66ecde Support patching resource using REST API (#1186) 2019-02-26 15:57:47 -08:00
Alex Collins
0b07ce6d79 Makes the fields of excluded resources optional. Closes #1183 (#1185) 2019-02-26 14:39:49 -08:00
Alex Collins
02319bcfd7 Adds support for Jsonnet External Variables and Top-Level Arguments. … (#1165) 2019-02-26 11:50:13 -08:00
Jesse Suen
6654601bb1 Switch to kustomize v2.0.2 (from v2.0.1) (#1178) 2019-02-26 09:34:01 -08:00
narg95
caacba907c invalidate repo cache on delete (#1182)
Signed-off-by: Nestor <nesterran@gmail.com>
2019-02-26 07:12:29 -08:00
Alexander Matyushentsev
2f62d1b763 Issue #1161 - Use correct resource version in K8S watch API calls to avoid lost update events (#1173) 2019-02-26 07:09:14 -08:00
Takahiro Tsuruda
ee617d13d5 Fix typo to link from readme (#1179) 2019-02-26 00:53:28 -08:00
Alex Collins
03e0076eec Make test more tolerant (#1177) 2019-02-25 20:46:10 -08:00
Jesse Suen
33953954a2 Switch to kustomize2 as default. Add argocd-ha install manifests (#1169) 2019-02-25 15:25:57 -08:00
Alex Collins
e0594aa9b5 Adds support for patching applications. Closes #1162 (#1166) 2019-02-25 14:25:25 -08:00
Alexander Matyushentsev
99bb3cff43 Issue #701 - Rename config management pluging command 'template' to 'generate' (#1174) 2019-02-25 14:13:28 -08:00
Jesse Suen
8295a5cc6b Fix reconcile hotloop due to incorrect app source equality check (#1170) 2019-02-25 10:51:24 -08:00
Liviu Costea
f704cd07e6 Use kubernetes recommended labels (#1168)
The recommended labels are the ones described here:
https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/
The manifest labels should be in sync with the Helm chart.
This is also a follow-up to the discussion in:
https://github.com/argoproj/argo-cd/pull/1035
2019-02-24 01:42:23 -08:00
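The recommended labels referenced here are the well-known app.kubernetes.io/* keys from the Kubernetes docs linked above. A small illustrative Go helper (names and values are examples, not the manifests' actual values):

```go
// Illustrative only: the recommended label keys with example values.
package main

import "fmt"

func recommendedLabels(name, instance, component, partOf, managedBy string) map[string]string {
	return map[string]string{
		"app.kubernetes.io/name":       name,      // the application's name
		"app.kubernetes.io/instance":   instance,  // a unique name for this installation
		"app.kubernetes.io/component":  component, // role within the architecture
		"app.kubernetes.io/part-of":    partOf,    // the higher-level application
		"app.kubernetes.io/managed-by": managedBy, // the tool managing the manifests
	}
}

func main() {
	fmt.Println(recommendedLabels("argocd-server", "argocd", "server", "argocd", "kustomize"))
}
```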
Alexander Matyushentsev
d2df9dbdfb Let config management plugin inherit system env variables (#1163) 2019-02-22 15:23:39 -08:00
Jesse Suen
eb431308de Add application sync counters as new prometheus metric. Add API-server metrics (#1156) 2019-02-22 15:20:34 -08:00
Alexander Matyushentsev
418676ffab Issue #701 - Support for custom templaters (#1151)
* Issue #701 - Support for custom templaters
2019-02-22 14:56:11 -08:00
Alexander Matyushentsev
87b327f52d Issue #1075 - Ability to selectively ignore differences to support fuzzy diff comparisons (#1130)
* Issue #1075 - Ability to selectively ignore differences to support fuzzy diff comparisons
2019-02-22 13:19:10 -08:00
Jesse Suen
21be93548c Fix issue where argocd app diff reversed the left/right comparison (#1158) 2019-02-22 11:43:23 -08:00
Jesse Suen
c2a000c605 Fix issue where YAML file did not split correctly when encoded in UTF-16 (#1155) 2019-02-22 11:42:50 -08:00
Alexander Matyushentsev
943f3364e1 Document custom health checks and diffing customization (#1140) 2019-02-21 09:40:50 -08:00
Alex Collins
06c55b348a Allows you to exclude resources based on API group, kind, and cluster. Fixes #1010 (#1147) 2019-02-21 08:30:13 -08:00
Alex Collins
c630e19d0d Display a warning if the JWT cookie is too large. Fixes #1103 (#1146)
* Display a warning if the JWT cookie is too large. Fixes #1103

* Removes double message
2019-02-21 08:27:18 -08:00
Alex Collins
5222d47b5d Adds some instructions on how to run images locally. (#1121) 2019-02-20 14:19:07 -08:00
Alex Collins
1c1d9c95ef Adds support for Kustomize 2.0.1. Fixes #1085 (#1138) 2019-02-19 10:31:52 -08:00
Jesse Suen
7d21318c64 Update CHANGELOG.md for v0.11.2 (#1144) 2019-02-19 10:00:55 -08:00
Lev Aminov
1770fb250b Switch to correct Redis port (#1143) 2019-02-19 09:23:05 -08:00
Asier Marruedo
d555245fd5 Fix EncodeX509KeyPair function so it takes in account chained certificates (#1137) 2019-02-19 01:18:59 -08:00
Alexander Matyushentsev
665b80c048 Issue #1132 - Interactive application/project edit (#1133) 2019-02-15 14:59:52 -08:00
Alexander Matyushentsev
ce6ee88721 Issue #911 - Implement cert-manager CRD health checks (#1139) 2019-02-15 14:51:13 -08:00
Stuart Harris
8d5c15b8f4 Add service manifest for redis (#1134)
* add service manifest for redis
2019-02-15 11:02:04 -08:00
Alexander Matyushentsev
ecfb009f66 Disable authentication in dev setup (#1136) 2019-02-15 09:16:33 -08:00
Alex Collins
cbe862765f Adds support for ARGOCD_OPTS envvar for global variables. Fixes #1081 (#1131) 2019-02-14 15:04:06 -08:00
Jesse Suen
173bf63617 Exclude metrics.k8s.io from watch (#1128) 2019-02-14 13:40:01 -08:00
Ed Lee
15ef5b425f added community blog to readme (#1129) 2019-02-14 09:42:00 -08:00
Alexander Matyushentsev
d2ca6715c6 Issue #1087 - Exclude hooks from local diff (#1123) 2019-02-13 21:04:38 -08:00
Jesse Suen
19906ded5b Fix issue where dex restart could cause login failures (#1114) 2019-02-13 18:07:47 -08:00
Alex Collins
4283b8ad0d Adds client retry. Fixes #959 (#1119) 2019-02-13 15:32:07 -08:00
Alexander Matyushentsev
cb9eb0a9bb Issue #937 - Use redis as a shared throwaway cache (#1120) 2019-02-13 15:20:40 -08:00
Jesse Suen
3e4e4f30ec Prevent deletion hotloop. Improve reconciliation log for easier log queries (#1115) 2019-02-13 10:11:01 -08:00
Jesse Suen
bc32e7472f Revert broken fix for azure repos which broke private repositories (#1108) 2019-02-13 10:10:04 -08:00
Jesse Suen
70b0194218 Nil out application sources if source spec is equal to their zero value (#1109)
Fix ability to set directory-recurse=false from CLI
2019-02-12 23:40:18 -08:00
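A minimal sketch of the "nil out when zero" idea, with hypothetical types standing in for Argo CD's spec structs: if an optional source section equals its zero value, the pointer is dropped so the spec serializes without an empty, misleading stanza:

```go
// Hypothetical types for illustration; not Argo CD's actual spec structs.
package main

import "fmt"

type DirectorySource struct {
	Recurse bool `json:"recurse,omitempty"`
}

type Source struct {
	Directory *DirectorySource `json:"directory,omitempty"`
}

// normalize drops an optional section that equals its zero value.
func normalize(s *Source) {
	if s.Directory != nil && *s.Directory == (DirectorySource{}) {
		s.Directory = nil
	}
}

func main() {
	s := Source{Directory: &DirectorySource{}}
	normalize(&s)
	fmt.Println(s.Directory == nil) // true
}
```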
dthomson25
b1edc18faa Stop logging /cluster.ClusterService/Create (#1069)
* Stop logging /cluster.ClusterService/Create and other places
2019-02-11 17:12:00 -08:00
Alexander Matyushentsev
b5e733d29b Issue #1076 - support wildcard globs for project sources & destinations (#1106) 2019-02-11 15:14:08 -08:00
Alex Collins
c6dd3727fb Added a recurse option for directories. Fixes 1083 (#1096)
* Added a recurse option for directories. Fixes 1083

* Linting

* Makes manifests minimally valid

* Adds tests for VerifyOneSourceType

* Adds recurse field to ManifestRequest

* Fixes spelling

* Adds --directory-recurse option to local diff

* Adds omitempty to recurse flag

* Uses paths to join path

* Use filepath.Walk

* Supports --directory-recurse when adding/setting apps from the CLI

* Fixes problem with recurse

* Updates API to surface directory.recurse

* Nil directory if recurse is false
2019-02-11 13:56:30 -08:00
Lev Aminov
802da56e14 Fix rollback command help (#1104) 2019-02-11 11:08:29 -08:00
Alex Collins
eeed50e6c0 Update CONTRIBUTING.md (#1098)
* Update CONTRIBUTING.md

* Update CONTRIBUTING.md

* Update CONTRIBUTING.md

* Update CONTRIBUTING.md

* Update CONTRIBUTING.md

* Update CONTRIBUTING.md

* Update CONTRIBUTING.md

* Update CONTRIBUTING.md
2019-02-11 10:38:43 -08:00
Jecho
bc38c06f37 fixed minor typo in docs (#1102) 2019-02-08 17:15:33 -08:00
Alexander Matyushentsev
c49e8da3d6 Mention brew tap argoproj/tap in getting started (#1097) 2019-02-07 11:25:38 -08:00
Jesse Suen
b2b5eea343 Add security docs and how to build custom repo-server from Dockerfile (#1078) 2019-02-02 01:42:48 -08:00
Alexander Matyushentsev
e68fe35a55 Issue #1065 - Support using grpc-web in argocd cli (#1077) 2019-02-01 13:37:39 -08:00
Jesse Suen
297a91fde4 Refactor packr box usage into new assets library. Add faster DEV_IMAGE build (#1073) 2019-02-01 13:12:52 -08:00
Michael Goodness
d6c88cd77a Split manifests into components (#1035) 2019-01-31 12:54:46 -08:00
Danny Thomson
00421bb46e Add custom resource health through lua 2019-01-31 11:46:09 -08:00
Jesse Suen
cefa9d9ba4 Switch to CLI git fetch from go-git to support fetching Azure DevOps repos (#1071) 2019-01-31 01:02:22 -08:00
Danny Thomson
1295d71988 Add Rollout resource progress to cli 2019-01-29 11:57:49 -08:00
Danny Thomson
204428cf0a Add Patch resource to API server 2019-01-29 11:57:49 -08:00
Alexander Matyushentsev
f205c1380f Document v0.11.1 changes (#1049) 2019-01-24 15:39:10 -08:00
Jesse Suen
8875ebc9f8 Enable docker buildkit in ci builds (#1060) 2019-01-24 15:34:45 -08:00
Jesse Suen
ccbf80312e Relax ingress/service health check to accept non-empty ingress list (#1053) 2019-01-22 16:40:26 -08:00
Alexander Matyushentsev
838d77050b Fix test compile error (#1052) 2019-01-18 15:15:56 -08:00
Jesse Suen
711e271583 Handle case where manifests contain a null items list (#1051) 2019-01-18 15:04:52 -08:00
Jesse Suen
d40bbb23cb Fix controller deadlock when checking for stale cache (#1046)
* Controller cache was susceptible to clock skew in managed cluster

* Fix controller deadlock when checking for stale cache
2019-01-18 10:38:51 -08:00
Jesse Suen
130a5ee0d9 Controller cache was susceptible to clock skew in managed cluster (#1043) 2019-01-18 10:38:21 -08:00
Alexander Matyushentsev
074c812592 Fix sync operation sorting (#1042) 2019-01-18 07:32:50 -08:00
Jesse Suen
ea428eb722 Fix ability to unset ApplicationSource specific parameters (#1041) 2019-01-17 19:38:35 -08:00
Alexander Matyushentsev
8da5fd9bb7 Issue #1039 - Correct redirect to login page if dex authentication is not successful (#1040) 2019-01-17 18:42:57 -08:00
Alexander Matyushentsev
cf486a480e Hooks result should have Running phase by default (given we don't have Pending state) (#1037) 2019-01-17 13:15:48 -08:00
Alexander Matyushentsev
a36af99dcd Issue #1033 - Fix force resource delete API (#1034) 2019-01-17 10:36:18 -08:00
Jesse Suen
421e4ff058 Fix PermissionDenied issue during app creation with project roles. Fix custom casbin adapter (#1030) 2019-01-17 10:30:31 -08:00
Alexander Matyushentsev
4140c9867d Replace grpc repo-server parallelism limit interceptor with semaphore (#1029) 2019-01-16 15:56:26 -08:00
Jesse Suen
2988cebaa9 Downgrade kubectl to v1.12 to regain kubectl convert functionality (#1023) 2019-01-16 11:44:33 -08:00
Alexander Matyushentsev
6a4d84d42c Issue #1025 - Fix /v1/applications/<appName>/manifests for app with helm dependencies (#1026) 2019-01-16 11:30:42 -08:00
Alexander Matyushentsev
07effbd950 Issue #937 - Allow using redis as a cache in repo-server (#1020)
* Issue #937 - Allow using redis as a cache in repo-server

* Support repo server grpc methods throttling

* Upgrade redis
2019-01-16 09:12:48 -08:00
Christopher Adigun
4e36713c8b Correct "basehref" in the sample UI base path (#1024)
The UI path example in the YAML should be "- --basehref" instead of "- --base-href" in the ingress documentation.
2019-01-16 07:59:49 -08:00
Jesse Suen
2939907c48 Do not allow metadata.creationTimestamp to affect sync status (#1021) 2019-01-15 22:51:46 -08:00
Jesse Suen
0b64a813da Switch to a custom casbin adapter for RBAC enforcement (#1022) 2019-01-15 22:50:50 -08:00
Jesse Suen
5f06be724c Graceful handling of clusters where API resource discovery is partially successful (#1018) 2019-01-15 10:22:38 -08:00
Alexander Matyushentsev
2200a27a5f Issue #1013 - handle k8s resources circular dependency (#1016) 2019-01-15 03:24:28 -08:00
Ed Lee
4e2700b5d4 Update README (#1014) 2019-01-15 00:02:02 -08:00
Jesse Suen
e56d3a0412 Fix app diff --local command (#1008) 2019-01-14 23:38:54 -08:00
Saradhi Sreegiriraju
1996d191c0 Update parameters.md (#1007) 2019-01-12 15:50:38 -08:00
Jesse Suen
bf5cf3256f Update CHANGELOG, docs to use stable tag, and tweak getting started guide (#1005) 2019-01-10 20:56:04 -08:00
Jesse Suen
f732e23488 Moving apps between projects requires create/update in new project (#1002) 2019-01-10 11:37:32 -08:00
Jesse Suen
a10b0af718 Update docs to use v0.11.0-rc6 (#1001) 2019-01-09 16:17:32 -08:00
Jesse Suen
09067585fa Settings were getting re-initialized when incomplete. Session manager now uses settings manager (#1000) 2019-01-09 15:46:38 -08:00
Alexander Matyushentsev
bc4c5d83ce Log manifest with debug log level (#999) 2019-01-09 15:19:20 -08:00
Jesse Suen
7750c359b9 Update docs to use v0.11.0-rc5 (#994) 2019-01-09 08:15:06 -08:00
Jesse Suen
f87d16f90f Add better project policy rule validation (#990) 2019-01-08 15:43:12 -08:00
Alexander Matyushentsev
fa4b761612 Use informers to load ArgoCD settings (#989)
* Use informers to load Argo CD settings
2019-01-08 14:53:45 -08:00
Jesse Suen
3884b07295 Eliminate reconcile hotloop by prevent Endpoint updates from requeuing apps (#986) 2019-01-07 14:35:16 -08:00
Jesse Suen
3379585847 Increase QPS and Burst used in K8s client configs to 25/50 (#984) 2019-01-07 14:25:07 -08:00
Jesse Suen
5490766972 Fix issue where custom resource objects might get synced to incorrect namespace during initial sync (#982) 2019-01-07 13:46:11 -08:00
Alexander Matyushentsev
0d3fe64c08 Fix loading cluster connection status (#980) 2019-01-04 17:57:30 -08:00
Jesse Suen
2745bab613 Update golang to v1.11.4 (#977) 2019-01-04 10:37:50 -08:00
Alexander Matyushentsev
df0591a831 Issue #978 - Fix application rollback to deployment without overrides (#979)
* Issue #978 - Fix application rollback to deployment without overrides

* Fix imports sorting
2019-01-04 10:16:41 -08:00
Paul van Staden
88dbcee873 Improving documentation regarding params (#974) (#975) 2019-01-04 07:41:17 -08:00
Jesse Suen
053875f47f Update versions for kubectl (v1.13.1), helm (v2.12.1), ksonnet (v0.13.1) (#973) 2019-01-03 15:16:08 -08:00
Alexander Matyushentsev
24063e7a7b Reduce timeout for checking cluster health (#972) 2019-01-03 11:11:01 -08:00
Alexander Matyushentsev
4d1e65c6b0 Update sample commands in project management doc (#971) 2019-01-03 10:57:23 -08:00
Jesse Suen
8c4a7a9b39 Update docs to use v0.11.0-rc2 version (#964) 2018-12-27 21:05:59 -08:00
Alexander Matyushentsev
229d4c167a Use --refresh --hard-refresh flags in 'app get' 'app diff' commands (#963) 2018-12-27 16:08:52 -08:00
Alexander Matyushentsev
3f362be146 Issue #916 - Use 'diff' to render actual vs target state difference (#962) 2018-12-27 08:30:20 -08:00
Jesse Suen
57e979c24c Show sync policy in app list view (#961) 2018-12-27 04:00:59 -08:00
Jesse Suen
1582ddf3c4 Handle diff corner case where Role/ClusterRole rules are null (#960) 2018-12-27 04:00:16 -08:00
Alexander Matyushentsev
0d8121710f Load repo/cluster status in parallel to improve /repos /clusters API performance (#958) 2018-12-26 14:20:26 -08:00
Alexander Matyushentsev
8d020fa694 Issue #956 - Slow comparison if cluster is down (#957) 2018-12-26 13:38:35 -08:00
Jesse Suen
04564add01 Make injected application instance label configurable from default (#944)
* Make injected application instance label configurable from default
Stop removing ksonnet.io/component label, unless using legacy label

* Fix applying of resources when namespace is empty
2018-12-23 22:25:04 -08:00
Zvi Cahana
881d052f0d Prefix controller resource names with 'argocd-' (#917)
* Prefix controller resource names with 'argocd-'

* Regenerate installation manifests

* Rename some additional application-controller occurrences

* Rename [cluster]role[binding] resources

* Regenerate installation manifests
2018-12-20 13:16:01 -08:00
Alexander Matyushentsev
14ec5f6e33 Issue #950 - Application controller doesn't refresh app after destination update (#951) 2018-12-20 12:48:42 -08:00
lbrictson
6389fc5c6d Update aws-iam-authenticator to new version, fix url (#948) 2018-12-19 16:29:48 -08:00
Alexander Matyushentsev
eee33e336d Correctly drop cluster cache after CRD creation/deletion (#947) 2018-12-19 11:43:21 -08:00
Jesse Suen
1570736631 Diff library handles case where live object has null secret data (#945) 2018-12-19 10:05:04 -08:00
Alexander Matyushentsev
d42fabaa7a Issue #939 - Fix nil dereference error in Diff function (#940) 2018-12-18 15:06:58 -08:00
Alexander Matyushentsev
c904fa9092 Issue 914 - Allow invalidating application related cache (#931) 2018-12-17 18:23:35 -08:00
Alexander Matyushentsev
761033ccef Issue 906 - Support setting different base href in UI (#930) 2018-12-14 14:00:43 -08:00
Alexander Matyushentsev
5166344d33 Issue #912 - Make ResourceNode 'tags' into a more generic 'info' struct (#926)
* Issue #912 - Make ResourceNode 'tags' into a more generic 'info' struct
2018-12-12 13:16:55 -08:00
Alexander Matyushentsev
a1cc18c285 Issue #927 - Add missing handlings for deprecated extensions group kinds (#928) 2018-12-12 13:16:32 -08:00
Alexander Matyushentsev
34a080faa9 Issue #922 - Fix nil dereference error in 'argocd app diff' command (#925) 2018-12-12 09:46:53 -08:00
Alexander Matyushentsev
1deeada249 Issue #910 - Reconstruct tree structure on the fly to avoid inconsistent state (#921) 2018-12-10 14:48:31 -08:00
Alexander Matyushentsev
84863acac9 Issue #915 - Local 'argocd app diff' fails (#920) 2018-12-10 13:56:04 -08:00
Alexander Matyushentsev
ec23932203 Add v0.11.0-rc1 to getting_started.md (#919) 2018-12-10 10:26:36 -08:00
Jesse Suen
483872a190 Fix issue preventing kustomize apps being multi-namespaced (#913) 2018-12-10 02:30:21 -08:00
Alexander Matyushentsev
457e137dda Enforces loose user claims if default role is set (#907) 2018-12-07 17:04:17 -08:00
Alexander Matyushentsev
4a06693994 Server should accept clients with pre-release version (#905) 2018-12-07 16:44:23 -08:00
Alexander Matyushentsev
0f39f1032a Issue #897 - Secret data not redacted in last-applied-configuration (#902) 2018-12-07 15:40:55 -08:00
Alexander Matyushentsev
4f12660c1b Fix discovering cluster wide resources with namespace (#904) 2018-12-07 15:40:41 -08:00
Jesse Suen
832564864f Give 'get' access to the argocd-server cluster role (#903) 2018-12-07 15:26:47 -08:00
Jesse Suen
51638bc6f5 API client watch helper to retry disconnections from API server (#896) 2018-12-07 11:33:58 -08:00
dthomson25
8ea474a3c9 Remove gracePeriod seconds option from API (#900) 2018-12-07 10:50:40 -08:00
Jesse Suen
689c498549 Add protection from malformed project policies being sent to casbin (#888)
Group and role name validation (resolves #843)
2018-12-06 16:11:59 -08:00
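A hedged sketch of the validation idea: reject role and group names whose characters could inject extra casbin policy fields. The exact pattern is an assumption for illustration, not Argo CD's actual rule:

```go
// The pattern below is an assumption, not Argo CD's real validation rule.
package main

import (
	"fmt"
	"regexp"
)

// Casbin policies are comma-separated lines, so names containing commas or
// whitespace could inject extra policy fields; restrict to a safe charset.
var roleNameRegexp = regexp.MustCompile(`^[a-zA-Z0-9_-]+$`)

func validateRoleName(name string) error {
	if !roleNameRegexp.MatchString(name) {
		return fmt.Errorf("invalid role name %q: must match %s", name, roleNameRegexp)
	}
	return nil
}

func main() {
	fmt.Println(validateRoleName("read-only"))      // <nil>
	fmt.Println(validateRoleName("evil, *, allow")) // error: commas would inject fields
}
```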
Tom Wieczorek
eb2a716661 Update to kustomize 1.0.11 (#889) 2018-12-06 16:00:29 -08:00
dthomson25
0af3836f74 Add force delete option to API (#891) 2018-12-06 16:00:10 -08:00
Alexander Matyushentsev
11b0c2848c Issue #770 - Support loading app details by directory (#893) 2018-12-06 15:32:43 -08:00
Michael Goodness
636968d381 Add initContainer volumeMount to custom tooling docs (#892) 2018-12-06 15:32:34 -08:00
Alexander Matyushentsev
7eb211eb94 Issue #760 - Properly read watch events to avoid nil pointer errors (#890) 2018-12-06 15:20:37 -08:00
Jesse Suen
5cedfb8ead CLI support for multi-namespaced applications (#886)
Make `argocd relogin` unnecessary when updating password
Print application details as part of `app wait`, `app sync` (issue #737)
2018-12-05 22:23:13 -08:00
Stephen Haynes
c20a49c531 Enable --auto-prune for app create if --sync-policy is automated (#887) 2018-12-05 21:59:16 -08:00
Alexander Matyushentsev
874fe69683 Issue #887 - OIDC config needs to be able to reference .keys (#885) 2018-12-05 18:38:01 -08:00
Alexander Matyushentsev
cfefec06a9 Add declarative argocd setup docs (#813)
* Add declarative argocd setup docs

* Add Helm repositories documentation
2018-12-05 17:48:19 -08:00
Jesse Suen
0a7d14040d Update release notes for v0.11 and add more documentation (#883) 2018-12-05 17:07:13 -08:00
Alexander Matyushentsev
974ab11b76 Issue #874 - Helm repositories config missing username/password (#882) 2018-12-05 11:36:43 -08:00
Stephen Haynes
7d40d228da build cli with packr (#875) 2018-12-04 22:42:37 -08:00
Alexander Matyushentsev
c73ea2de1d Remove parameters field from ApplicationStatus (#872)
* Remove parameters field from ApplicationStatus

* Remove unnecessary override parameters validation
2018-12-04 22:28:58 -08:00
Alexander Matyushentsev
c308165816 Application controller does not save application parameters in app crd (#871) 2018-12-04 13:50:07 -08:00
Alexander Matyushentsev
38720b2e46 Fix flaky e2e test (#870) 2018-12-04 10:38:57 -08:00
Alexander Matyushentsev
e7b2e9f639 Issue #868 - Filter out extensions group resources which are mirrored in apps group (#869) 2018-12-04 10:06:42 -08:00
Jesse Suen
cbaf8a0bc8 Promote resources field in ComparisonStatus to application.status
Fix pruning/syncing when changing application namespace
Rename DeploymentInfo to RevisionHistory to be consistent with k8s
2018-12-04 10:03:01 -08:00
Jesse Suen
f5861aa708 Refactor, consolidate and rename resource type datastructures 2018-12-04 10:03:01 -08:00
dthomson25
2fba6abc8d Add local diff back (#863) 2018-12-04 09:45:42 -08:00
Stephen Haynes
f762188b89 build application-controller with packr (#866) 2018-12-03 17:29:45 -08:00
Alexander Matyushentsev
d987416c9b Issue #747 - Declaratively add helm repositories (#864) 2018-12-03 15:15:37 -08:00
Alexander Matyushentsev
246392f0f6 Issue #858 - Support loading resource events for multi-network apps (#865) 2018-12-03 14:53:11 -08:00
Jesse Suen
0693a6dd70 Use standard Scheme Convert function instead of the kubectl based converter (#860) 2018-12-03 10:36:50 -08:00
Jesse Suen
4fa33f300b Proper treatment of resource lifecycle hooks: (#859)
* do not allow hooks to affect Synced or Health status
* do not delete hooks during a sync --prune
* add health statuses for jobs and pods
2018-12-03 10:27:43 -08:00
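A hedged sketch of the first bullet above, using hypothetical types (only the hook annotation key is Argo CD's): resources carrying the hook annotation are skipped when aggregating the application's Synced status:

```go
// Hypothetical types; the hook annotation key is Argo CD's, the rest is
// illustrative.
package main

import "fmt"

type Resource struct {
	Name        string
	Annotations map[string]string
	Synced      bool
}

func isHook(r Resource) bool {
	_, ok := r.Annotations["argocd.argoproj.io/hook"]
	return ok
}

// appSynced aggregates sync status while ignoring hooks, so a transient
// PreSync job can never flip the application to OutOfSync.
func appSynced(resources []Resource) bool {
	for _, r := range resources {
		if isHook(r) {
			continue
		}
		if !r.Synced {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(appSynced([]Resource{
		{Name: "deploy", Synced: true},
		{Name: "migrate", Annotations: map[string]string{"argocd.argoproj.io/hook": "PreSync"}, Synced: false},
	})) // true: the hook's state is ignored
}
```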
Jesse Suen
2c8e9fa9ac Switch to k8s recommended app.kubernetes.io/instance label (#857)
Remove ability to set helm release name
Reorganize Argo CD constants
2018-11-30 23:54:01 -08:00
Alexander Matyushentsev
6ac3e8ec45 Issue #853 - pod logs does not work in multi namespaced apps (#855) 2018-11-30 15:40:01 -08:00
Alexander Matyushentsev
c66b444213 Fix app diff command (#854) 2018-11-30 15:39:49 -08:00
Jesse Suen
64be0913ad Only run helm dependency build when necessary (issue #786) (#851) 2018-11-30 13:51:31 -08:00
Jesse Suen
ec70110ab2 Normalize app spec during controller reconciliation and API server create/update (#848) 2018-11-30 13:50:27 -08:00
Alexander Matyushentsev
93ad11095a Resource events streaming bug fixes: panic (#699), stale cache detection, restarting bad watchers (#852) 2018-11-30 11:29:12 -08:00
Jesse Suen
26af75061e Remove git URL normalization in favor of fuzzy equivalence (issue #838) (#849) 2018-11-30 10:41:47 -08:00
Jesse Suen
83fb5fb388 Rename 'controlled resources' to 'managed resources' (#850)
Rename 'resources tree' to 'resource tree'
2018-11-30 10:32:31 -08:00
Alexander Matyushentsev
6dede28f72 Issue #696 - Support apps with static namespaces in resources (#842) 2018-11-29 15:34:46 -08:00
Zvi Cahana
800c4b1d48 Build argocd-util as a statically linked binary (#845) 2018-11-29 12:57:31 -08:00
Jesse Suen
3a9196ce18 gRPC API client and gateway now supply user-agent. Require client min version as v0.11 (#841)
With this change, the gRPC API client and grpc-gateway now supply a user-agent, `argocd-client/X.Y.Z`, with all their requests. This enables us to discern various versions of the CLI as the requestor, and to reject requests from incompatible clients. We treat clients that supply only the single user-agent grpc-go/1.15.0 as legacy clients.
2018-11-28 14:06:02 -08:00
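A sketch of how such version gating could look as a gRPC unary interceptor, assuming the argocd-client/X.Y.Z user-agent described above; the version comparison is a naive string compare for brevity (real code would parse semver):

```go
// Sketch of user-agent version gating; names and threshold are illustrative.
package main

import (
	"context"
	"strings"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/metadata"
	"google.golang.org/grpc/status"
)

const minClientVersion = "0.11"

func clientVersionInterceptor(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
	md, _ := metadata.FromIncomingContext(ctx)
	for _, ua := range md.Get("user-agent") {
		// The header may hold several space-separated products, e.g.
		// "argocd-client/0.11.0 grpc-go/1.15.0".
		for _, product := range strings.Fields(ua) {
			if !strings.HasPrefix(product, "argocd-client/") {
				continue // legacy clients supply only grpc-go/1.15.0
			}
			version := strings.TrimPrefix(product, "argocd-client/")
			if strings.Compare(version, minClientVersion) < 0 { // naive compare
				return nil, status.Errorf(codes.FailedPrecondition,
					"client version %s is incompatible; need >= %s", version, minClientVersion)
			}
		}
	}
	return handler(ctx, req)
}

func main() {
	_ = grpc.NewServer(grpc.UnaryInterceptor(clientVersionInterceptor))
}
```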
Jesse Suen
76c5df087a Update kustomize base when setting image tags (#833) 2018-11-28 13:55:38 -08:00
Alexander Matyushentsev
6368a3e548 Refactor application controller (#840)
* Refactor application controller
2018-11-28 13:38:02 -08:00
Jesse Suen
cde040e10f Serve CLI binaries directly from API server (#837) 2018-11-27 13:39:06 -08:00
Jesse Suen
477ca61da5 Resolve ambiguous revisions in API server when initiating syncs (issue #818) (#834)
Incorporate revision information into event messages
2018-11-27 13:38:00 -08:00
Zvi Cahana
2b23812e6e Relax validation to permit app with no manifests (#832) 2018-11-27 02:52:46 -08:00
Tom Wieczorek
b53ad60b48 Split up CRD manifests into their own folder (#674)
This way, CRD and app deployments can be separated, e.g. if someone wishes to install Argo CD multiple times.

Also: Make update-manifests.sh fail-fast and more resilient against spaces in paths and user's CDPATH settings.
2018-11-26 14:40:15 -08:00
Jesse Suen
db92f0b569 Explicitly check for namespace before running auth reconcile (#826) 2018-11-24 11:31:09 -08:00
Jesse Suen
456ae3ab84 Support the ability to map OIDC groups to project roles (issue #742) (#817)
Fix CLI usability of `argocd proj list` (issue #769)
Use constants for RBAC resources and actions (issue #453)
Introduce RBAC policy enforcer backed by project informer cache
Introduce `argocd proj get PROJECT`
2018-11-23 11:31:20 -08:00
Jesse Suen
9347f4157e Special case secrets to base64 encode stringData before performing diff (issue #763)
When performing a diff, and the group kind is a v1.Secret, we will base64 encode the stringData field (if present) to the data field before sending to the diffing library. This way we will prevent false positives of OutOfSync secrets which are really Synced when factoring in stringData.
2018-11-21 12:00:44 -08:00
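The normalization described above, sketched over an unstructured object (simplified; assumes stringData values are plain strings):

```go
// Simplified sketch of folding stringData into data before diffing.
package main

import (
	"encoding/base64"
	"fmt"
)

// normalizeSecret folds stringData into data (base64-encoded) so that a
// config using stringData diffs cleanly against a live object holding data.
func normalizeSecret(obj map[string]interface{}) {
	stringData, ok := obj["stringData"].(map[string]interface{})
	if !ok {
		return
	}
	data, _ := obj["data"].(map[string]interface{})
	if data == nil {
		data = map[string]interface{}{}
	}
	for key, val := range stringData {
		if s, ok := val.(string); ok {
			data[key] = base64.StdEncoding.EncodeToString([]byte(s))
		}
	}
	obj["data"] = data
	delete(obj, "stringData")
}

func main() {
	secret := map[string]interface{}{
		"apiVersion": "v1",
		"kind":       "Secret",
		"stringData": map[string]interface{}{"password": "hunter2"},
	}
	normalizeSecret(secret)
	fmt.Println(secret["data"]) // map[password:aHVudGVyMg==]
}
```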
Jesse Suen
3b447a19ea Support for Pods as a sync hook (#801)
Add app label to pod metadata in job.spec.template.labels
Fix issue where hooks could bypass a project whitelist/blacklist (issue #794)
Fix issue where deletion of hooks did not perform a cascaded deletion (issue #797)
2018-11-21 11:58:08 -08:00
Alexander Matyushentsev
a70cab7ac1 Fix repository settings deserialization (#812) 2018-11-19 13:00:07 -08:00
Jesse Suen
4b2aecc9bc Ignore metadata.namespace in config when performing two-way diff (issue #784) (#810) 2018-11-19 12:47:10 -08:00
Jesse Suen
9cb445a71a Diff view shows incorrect base/value comparison (issue #725) (#809) 2018-11-19 12:41:53 -08:00
Jesse Suen
bcd9cd6cd7 Reorder auth reconcile after apply to prevent namespace creation (#808) 2018-11-19 12:40:37 -08:00
Jesse Suen
16ff90c070 Defer deletion of app object until all resources have been deleted (issue #636) (#807)
Purge app cache after deletion is successful (issue #802)
2018-11-19 12:25:45 -08:00
Jesse Suen
925f9486e3 Restructure application sources to separate types (#799) 2018-11-17 16:20:25 -08:00
Jesse Suen
b439424cef Use default server addresses. Use an imagePullPolicy of Always for manifests (#796) 2018-11-17 16:00:55 -08:00
Alexander Matyushentsev
12423e9358 Issue #621 - Fix child resource deletion (#800) 2018-11-17 11:42:33 -08:00
Jesse Suen
bb82919131 Fix make all target and use archiveLogs workflow feature (#795) 2018-11-16 18:14:52 -08:00
Alexander Matyushentsev
be4f9d85ae Issue #782 - Application type is incorrectly inferred as 'directory' if app source path starts with '.' (#789) 2018-11-16 17:12:44 -08:00
Alexander Matyushentsev
275b9e194d Issue #355 - Treat 'crd-install' hooks as normal k8s resource (#792) 2018-11-16 17:12:21 -08:00
Alexander Matyushentsev
44087fa1b5 Issue #621 - Remove resources state from application CRD (#758) 2018-11-16 17:10:04 -08:00
Will Medlar
d3043461a6 Fix typo in documentation for hook delete policy (#793) 2018-11-16 12:20:28 -08:00
Alexander Matyushentsev
3f7a0d3c97 Issue #790 - Fix application controller panic (#791) 2018-11-16 10:48:49 -08:00
Jesse Suen
361931f104 Move to single master image for all argocd services (issue #762) 2018-11-15 18:11:10 -08:00
Jesse Suen
641b7fd060 Bump up the default status/operation processors to 20/10 respectively 2018-11-15 18:11:10 -08:00
Jesse Suen
17bcb3bdbf Update docs to describe how to customize repo-server (issue #772) (#778) 2018-11-15 14:34:39 -08:00
Jesse Suen
7520a8a4f8 Update CHANGELOG and docs to point to v0.10.6 (#777) 2018-11-14 19:08:36 -08:00
Jesse Suen
8734f287eb Fix issue preventing in-cluster app sync due to go-client changes (issue #774) (#775) 2018-11-14 18:13:54 -08:00
Conor Fennell
7552a3610f add metrics label for service monitor discovery (#765) 2018-11-14 11:55:47 -08:00
Jesse Suen
f4387d5394 Update CHANGELOG and docs to point to v0.10.5 install manifests (#771) 2018-11-13 19:41:51 -08:00
Conor Fennell
ffa0c340b8 add argo cluster permission to view logs (#766) 2018-11-13 08:46:46 -08:00
Conor Fennell
34aad3f0df add project label to all metrics (#767)
* add project label to all metrics

* fix metrics files formatting
2018-11-13 08:41:38 -08:00
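What "a project label on all metrics" means in Prometheus terms: the label becomes part of every series the metric emits, so dashboards can filter and aggregate per project. An illustrative Go client sketch (metric name and labels are examples, not necessarily the ones added here):

```go
// Metric name and labels are examples only.
package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// Declaring "project" as a label makes it part of every series this metric
// emits.
var syncCounter = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "argocd_app_sync_total",
		Help: "Number of application syncs.",
	},
	[]string{"name", "project"},
)

func main() {
	prometheus.MustRegister(syncCounter)
	syncCounter.WithLabelValues("guestbook", "default").Inc()
	http.Handle("/metrics", promhttp.Handler())
	_ = http.ListenAndServe(":8082", nil)
}
```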
Niclas Mietz
49e022a2c0 Update getting_started to latest argo-cd version (#761)
Signed-off-by: solidnerd <niclas@mietz.io>
2018-11-10 10:14:59 -08:00
Alexander Matyushentsev
317d2a8aa8 Issue #536 - Declarative setup and configuration of ArgoCD (#745)
* Issue #536 - Declarative setup and configuration of ArgoCD

* Add missing rules to application-controller role

* Fix broken test; update install manifests
2018-11-09 09:58:07 -08:00
Will Medlar
f52589c128 Return partial settings from configmap if the argocd secret is not found (#755) 2018-11-08 08:31:39 -08:00
Jesse Suen
75d2c57688 Health check is not discerning apiVersion when assessing CRDs (issue #753) (#754) 2018-11-07 17:21:22 -08:00
Alessandro Marrella
92d0df1412 Updated helm (#749) 2018-11-07 11:29:01 -08:00
Taylor D. Edmiston
34bb60f064 Make Argo CD naming consistent (#694)
* Make Argo CD naming consistent

* Change ArgoCD to Argo CD on new lines
2018-11-05 11:29:01 -08:00
Benoit Sigoure
1206926ac2 Update version to v0.10.3 in getting started guide. (#739) 2018-11-01 13:13:49 -07:00
Alexander Matyushentsev
958096aaa8 Issue #697 - Ability to perform field selection in API (#736) 2018-10-30 10:20:29 -07:00
dthomson25
2c8c2fcd64 Support adding name prefix in helm and kustomize (#735) 2018-10-29 16:05:22 -07:00
Andrew Merenbach
d7b3a5c6eb Use presence of components dir in ksonnet app validation (#734) 2018-10-29 14:42:49 -07:00
Jesse Suen
5c7a3329f3 Support for external OIDC providers and implicit login flows (#727) 2018-10-29 01:36:53 -07:00
Andrew Merenbach
a0b5af0dae Revert "Validate Ksonnet apps through component dir presence (#708)" (#730)
This reverts commit 1844be638b.
2018-10-29 00:13:51 -07:00
Alexander Matyushentsev
37c25383b7 Fix applying TLS version settings (#731) 2018-10-28 23:28:39 -07:00
Jesse Suen
2498f60c57 Update dependencies to k8s v1.12 and client-go v9.0 (#729)
Update dependencies to k8s v1.12 and client-go v9.0 (resolves #353)
Fix issue where applications could not be deleted on k8s v1.12 (resolves #718)
Refactor k8s dynamic resource libraries to promote code reuse
2018-10-28 22:46:13 -07:00
Tom Wieczorek
7e390e76d0 Update to kustomize 1.0.10 (#728)
See also kubernetes-sigs/kustomize#514
2018-10-26 17:33:38 -07:00
Tom Wieczorek
ce7d02c94a Update to kustomize 1.0.9 (#722) 2018-10-25 11:48:34 -07:00
dthomson25
b578de77f7 Fix app refresh err when k8s patch is too slow (#724) 2018-10-25 10:55:57 -07:00
Mario Duarte
d055efba3c Fix nil pointer dereference in util/health (#723) 2018-10-25 10:19:38 -07:00
Alexander Matyushentsev
93bc108a24 Issue #670 - Allow setting the value of different fields in the kustomization file (#720) 2018-10-25 09:13:22 -07:00
Alexander Matyushentsev
3fd528de2b Changelog for v0.10.1 release (#719) 2018-10-24 13:56:39 -07:00
Andrew Merenbach
1844be638b Validate Ksonnet apps through component dir presence (#708) 2018-10-24 10:59:22 -07:00
Alexander Matyushentsev
dca1996640 Issue #657 - Use codecov to collect test coverage (#717) 2018-10-23 14:42:41 -07:00
Jesse Suen
cb3656b45c Handle case where OIDC settings become invalid after dex server restart (issue #710) (#715) 2018-10-23 13:45:27 -07:00
Jesse Suen
5c2eaf202e Update getting_started to use v0.10.0 (#714) 2018-10-21 00:33:27 -07:00
Jesse Suen
5abba4f85b git clean also needs to clean files under gitignore (issue #711) (#712) 2018-10-19 22:10:37 -07:00
Jesse Suen
22b77f5b34 RBAC for cluster wide install was missing permissions to list events across namespaces (resolves #704) (#705) 2018-10-19 00:04:03 -07:00
Alexander Matyushentsev
c76db90437 Issue #628 - Remove RollbackOperation in favor of Sync with ParameterOverrides (#706) 2018-10-18 20:23:22 -07:00
Alexander Matyushentsev
080f7ff4e0 Add 0.10 changelog (#700) 2018-10-18 18:21:49 -07:00
Alexander Matyushentsev
c5730c8f5f Issue #672 - Metrics endpoint not reachable through the metrics kubernetes service (#692) 2018-10-17 09:44:12 -07:00
Alexander Matyushentsev
d46f284d9f Issue #690 - Increase GRPC message limit (#691) 2018-10-16 16:34:26 -07:00
Alexander Matyushentsev
221f19ae15 Add argocd-util cluster-kubeconfig command (#689) 2018-10-16 16:17:58 -07:00
Alexander Matyushentsev
550cb277df Issue #686 - Resource is always out of sync if it has only 'ksonnet.io/component' label (#688) 2018-10-15 12:57:30 -07:00
Alexander Matyushentsev
9c79af9340 Issue #682 - Operation stuck in 'in progress' state if application has no resources (#684) 2018-10-11 11:18:17 -04:00
Andrew Merenbach
1ba52c8880 Allow more fine-grained sync (closes #508) (#666) 2018-10-10 10:12:20 -07:00
Andrew Merenbach
92629067f7 Upgrade testify (#667)
* Update Gopkg.toml

* Update Gopkg.lock
2018-10-08 11:01:00 -07:00
Alexander Matyushentsev
93a808e65a Issue #627 - Cluster watch needs to be restarted when CRDs get created (#678) 2018-10-05 13:18:12 -04:00
Alexander Matyushentsev
bf99b251f8 Issue #679 - Default project is created without permission to deploy cluster level resources (#680) 2018-10-05 11:42:54 -04:00
Alexander Matyushentsev
f491540636 Issue #426 - Support public not-connected repo in app creation UI (#675) 2018-10-04 12:46:39 -04:00
dthomson25
7f84f7d541 Add project get permission automatically to roles (#665) 2018-10-01 12:44:06 -07:00
Alexander Matyushentsev
42b01f7126 Add v0.9.2 changelog (#662) 2018-09-28 13:14:06 -04:00
Andrew Merenbach
7e5c17939b Add errgroup dependency for Packr (#648) 2018-09-27 18:27:09 -07:00
Alexander Matyushentsev
d6937ec629 Issue #650 - Temporary ignore service catalog resources (#661) 2018-09-27 20:58:45 -04:00
Andrew Merenbach
f5a32f47d3 Update generated files (#660) 2018-09-27 13:07:16 -07:00
Jesse Suen
316fcc6126 Fix issue where argocd-server logged credentials in plain text during repo add (issue #653) 2018-09-27 12:48:23 -07:00
Jesse Suen
e163177a12 Switch to go-git for all remote git interactions including auth (issue #651) 2018-09-27 12:48:23 -07:00
Jesse Suen
1fe257c71e Do not append .git extension during normalization for Azure hosted git (issue #643) (#645) 2018-09-27 11:54:04 -07:00
Andrew Merenbach
1eaa813f28 Use ksonnet CLI instead of ksonnet libs (#590) (#626) 2018-09-27 11:52:08 -07:00
dthomson25
924dad8980 Normalize policies by always adding space after comma (#659) 2018-09-27 11:24:48 -07:00
dthomson25
1ba10a1a20 Remove default params from app history (#649) 2018-09-27 11:24:25 -07:00
Stephen Haynes
ab02e10791 update to kustomize 1.0.8 (#644) 2018-09-26 14:24:59 -07:00
Jesse Suen
dd94e5e5c3 Add version check during release to ensure compiled version is accurate (#646) 2018-09-26 07:40:42 -07:00
Jesse Suen
1fcb90c4d9 Documentation clarifications and fixes (#642) 2018-09-25 08:01:41 -07:00
Alexander Matyushentsev
523c7ddf82 Update getting_started.md with new version; update releasing steps (#641) 2018-09-24 15:43:06 -07:00
Jesse Suen
3577a68d2d Update documentation with auto-sync and projects (issue #521) (#616) 2018-09-24 15:27:30 -07:00
Alexander Matyushentsev
d963f5fcc5 Issue #639 - Repo server unable to execute ls-remote for private repos (#640) 2018-09-24 14:20:52 -07:00
Alexander Matyushentsev
bed82d68df Update changelog and fix release command dependency (#638) 2018-09-24 12:58:17 -07:00
Jesse Suen
359271dfa8 Update manifests to support in-cluster installations (#634) 2018-09-24 10:14:31 -07:00
Jesse Suen
0af77a0706 Add more event sources and provide better detail in event messages (issue #635) (#637)
* Expand SyncOperation to also store parameter overrides
Fix auto-sync when used with parameter overrides

* Add more event sources and provide better detail in event messages (issue #635)
2018-09-24 08:52:43 -07:00
Jesse Suen
e6efd79ad8 Support ability to use a helm values files from a URL (issue #624) 2018-09-21 16:05:42 -07:00
Jesse Suen
c953934d2e Simplify the RBAC resources to remove unnecessary sub-resources (issue #629) 2018-09-21 15:25:08 -07:00
Alexander Matyushentsev
5b4742d42b Issue #613 - Don't delete CRD (#630) 2018-09-21 10:29:32 -07:00
Jesse Suen
269f70df51 Trim git url during normalization (issue #614) (#623) 2018-09-20 16:26:17 -07:00
Jesse Suen
67177f933b Fix false OutOfSync condition when an explicit namespace is set in the config (#622) 2018-09-20 14:52:16 -07:00
Jesse Suen
606fdcded7 Rename server.crt/server.key to tls.crt/tls.key to integrate with Ingress (issue #617) 2018-09-20 12:49:23 -07:00
Alexander Matyushentsev
70b9db68b4 Issue #599 - Lazy enforcement of unknown cluster/namespace restricted resources (#612) 2018-09-20 09:48:54 -07:00
Jesse Suen
dc8a2f5d62 Support for exporting prometheus metrics about ArgoCD applications (#608) 2018-09-17 14:05:11 -07:00
Alexander Matyushentsev
8830cf9556 609 - Support restricting TLS version (#610) 2018-09-17 13:14:00 -07:00
Jesse Suen
bfb558eb92 Fix issue where helm hooks were being deployed as part of sync (issue #605) 2018-09-17 11:29:44 -07:00
Jesse Suen
505866a4c6 Support helm charts with dependencies and namespace sensitivity (issue #582) 2018-09-17 11:29:44 -07:00
Yuki Kodama
acd2de80fb Update getting started to point to v0.8.2 (#607) 2018-09-15 23:45:06 -07:00
Alexander Matyushentsev
0b08bf4537 Issue #523 - Use 'kubectl auth reconcile' for RBAC resources (#600) 2018-09-14 20:38:35 -07:00
Jesse Suen
223091482c Improve three-way diff to provide more accurate Sync status and diff result (issue #597) (#604) 2018-09-14 19:10:11 -07:00
Andrew Merenbach
4699946e1b Derive dedicated Dex deployment (#564)
Put Dex into its own deployment and service to decouple API server stability from auth token processing
2018-09-14 17:08:12 -07:00
Jesse Suen
097f87fd52 Improve remarshalling to use reflection/schema builders to handle all k8s core types (#603) 2018-09-14 16:17:20 -07:00
Alexander Matyushentsev
66b4f3a685 Issue #515 - handle concurrent settings initialization by api servers (#602) 2018-09-14 15:09:12 -07:00
Jesse Suen
02116d4bfc Fix comparison failure when app contains unregistered custom resource (issue #583) (#596) 2018-09-13 14:02:04 -07:00
Jesse Suen
fb17589af6 Fix race conditions in kube.GetResourcesWithLabel and DeleteResourceWithLabel (issue #587) (#593) 2018-09-13 13:58:47 -07:00
Alexander Matyushentsev
15ce7ea880 Issue #584 - ArgoCD fails to deploy resources list (#598) 2018-09-13 13:52:30 -07:00
Alexander Matyushentsev
57a3123a55 Issue #482 - Support IAM Authentication for managing external K8s clusters (#588) 2018-09-13 00:09:23 -07:00
Jesse Suen
32e96e4bb2 Fix app sync / wait panic in CLI 2018-09-12 23:41:42 -07:00
Jesse Suen
47ee26a77a Downgrade ksonnet from v0.12.0 to v0.11.0 due to quote unescape regression 2018-09-12 23:41:42 -07:00
dthomson25
9cd5d52fbc Add iat as path param for delete token http call (#586) 2018-09-12 19:49:20 -07:00
Alexander Matyushentsev
aa2afcd47b Issue #330 - Projects need controls on cluster-scoped resources (#558)
* Issue #330 - Projects need controls on cluster-scoped resources

* Issue #330 - Introduce namespace resources black-list
2018-09-11 15:10:47 -07:00
Jesse Suen
fd510e7933 Support an automated sync policy upon detection of OutOfSync status from git (#571) 2018-09-11 14:28:53 -07:00
Jesse Suen
e29d5b9634 In-memory implementation of ls-remote using go-git to reduce repo lock contention (#574) 2018-09-11 13:53:51 -07:00
Conor Fennell
2f9891b15b Issue #577 - Add rbac non resource url policy for argocd-manager-role (#578)
* Add rbac non resource url policy for argocd-manager-role
* allow all non resource urls to be added through rbac
2018-09-11 13:23:10 -07:00
Jesse Suen
c3ecd615ff Update getting started and docs to point to v0.8.1 (#575) 2018-09-10 19:05:47 -07:00
Jesse Suen
4e22a3cb21 Add link to SigApps video and update CHANGELOG for v0.8.1 (#572) 2018-09-10 16:08:08 -07:00
Jesse Suen
bc98b65190 Fix controller hot loop when app source contains bad manifests (issue #568) (#570) 2018-09-10 10:58:13 -07:00
Jesse Suen
02b756ef40 Fix issue where branch checkout did not have accurate git tree state (issue #567) (#569) 2018-09-10 10:55:12 -07:00
dthomson25
954706570c Reorder K8s resources to correct creation order (#551) 2018-09-10 10:14:14 -07:00
Alexander Matyushentsev
e2faf6970f Issue #527 - Support --in-cluster authentication without providing a kubeconfig (#559)
* Issue #527 - Support --in-cluster authentication without providing a kubeconfig

* Issue #527 - make sure resources are watched for 'local' cluster
2018-09-10 08:20:17 -07:00
Alexander Matyushentsev
a528ae9c12 Issue #553 - Turn on TLS for repo server (#563) 2018-09-08 00:17:29 +03:00
Alexander Matyushentsev
0a5871eba4 Issue #470 - K8s secrets need to be redacted in API server (#560) 2018-09-07 23:51:32 +03:00
Alexander Matyushentsev
27471d5249 Issue #540 - Support raw jsonnet as an application source (#561) 2018-09-07 21:15:19 +03:00
Alexander Matyushentsev
ed484c00db Issue 499 - valueFiles path should be relative to app directory (#552) 2018-09-05 23:37:26 +03:00
Jesse Suen
b868f26ca4 Update documentation for v0.8.0 (#550) 2018-09-04 22:31:21 -07:00
Jesse Suen
d7c04ae24c Update manifests to use v0.8.0
Make manifests friendly to `kubectl apply` semantics by omitting `data:` field
RBAC docs improvements
2018-09-04 17:58:50 -07:00
Jesse Suen
5bcf8c40e0 Minor improvements to token CLI (#549) 2018-09-04 17:57:31 -07:00
dthomson25
b8e30ed953 Add documentation on project roles and JWT tokens (#533)
* Add documentation on project roles and JWT tokens
* Add AppProject CRD to architecture.md
2018-09-04 17:57:00 -07:00
Jesse Suen
e3adb30ca7 Run all containers as an unprivileged user (resolves #528) (#546) 2018-09-04 13:47:00 -07:00
Jesse Suen
1e8c570c8a Fix argocd app wait printing incorrect Sync output (resolves #542) (#543)
Timeout condition was not printing final status.
2018-08-31 11:25:15 -07:00
Jesse Suen
40f2220f1d Fix issue where argocd could not sync to a tag (#541) 2018-08-31 11:24:14 -07:00
dthomson25
4da779c44c Add PVC healthcheck to controller (#501) (#537) 2018-08-28 16:40:32 -07:00
Jesse Suen
b54a5a3e25 Refactor Makefile/build to use a single Dockerfile. Update kustomize to v1.0.7 (#538) 2018-08-28 16:00:14 -07:00
dthomson25
f572bcff58 Create default project on startup (#535)
Implemented to solve issue #514
2018-08-28 10:06:01 -07:00
Andrew Merenbach
d47b7e6128 Use gRPC error codes instead of fmt.Errorf (#532) 2018-08-27 17:54:29 -07:00
Andrew Merenbach
8d9e4faae9 Add health check on API server (#522)
* Add app health endpoints

* Update generated files

* Revert "Update generated files"

This reverts commit 40f490797645ed0f30d05785748e3919dea31b7f.

* Revert "Add app health endpoints"

This reverts commit 650688dd2ee4a533e29b7df69e0bbb2436eead6b.

* Add dedicated health endpoint

* Update generated files

* Slim down basic server

* Update generated files

* Update health server creation

* Fix import, endpoint casing

* Flesh out basic health check

* Add additional endpoints, fix check, thanks @jessesuen

* Fix errors

* Update generated files

* Simplify health check, update endpoint

* Update generated files

* Factor out health check code

* Update generated files

* Rm health endpoint

* Add healthz utility

* Log error instead of printing it

* Update comment

* Add liveness, readiness probes to manifest for API server

* Add health check test

* Tweak timeouts, endpoints in probes, thanks @jessesuen

* Tweak probes, thanks @jessesuen
2018-08-23 09:24:21 -07:00
Jesse Suen
cf630055b0 API discovery becomes best effort when partial resource list is returned (resolves #524) (#526) 2018-08-21 13:53:42 -07:00
dthomson25
a5870c894f Fix typo in sso.md (#518) 2018-08-21 09:10:31 -07:00
Andrew Merenbach
c236ee99d4 Use named FIFO so we can exit with non-zero status (#516) 2018-08-16 13:22:29 -07:00
Jesse Suen
130e242aa9 Fix build breakage (#517) 2018-08-15 18:31:43 -07:00
Jesse Suen
39f0a17d0d Add ability to delete a single application resource (issue #262) (#511) 2018-08-15 15:01:29 -07:00
Jesse Suen
da0682afa7 Support for kustomize app directories (#510) 2018-08-15 14:54:56 -07:00
Andrew Merenbach
3c755a2002 Upgrade Ksonnet (#506) 2018-08-15 12:56:41 -07:00
dthomson25
66f64fbf15 Add Project JWT tokens (#498)
Implemented Project JWT Tokens (#472) using #228 as the overall design
2018-08-15 12:54:24 -07:00
Alexander Matyushentsev
4c0a0e09e2 Issue 435 - pump ci logs to s3 (#509) 2018-08-14 01:59:43 +03:00
Alexander Matyushentsev
f8de6084ed Issue #458 - Add api to load project events (#504) 2018-08-10 01:55:43 +03:00
Andrew Merenbach
36624f9d89 Enable code coverage (#500)
* Update Gopkg.toml

* Update Gopkg.lock

* Add new test-coverage command

* Update .gitignore to ignore coverage.out

* Test injection of COVERALLS_TOKEN variable

* Add draft of .travis.yml

* Rm recursive coveralls token

* Ensure that goveralls gets installed

* Rm second Go version

* Update workflow with coverage testing

* Change service from argo to argo-ci

* Rm .travis.yml

* Try setting coveralls token more explicitly

* Try file-based instead of env-based token

* Try both methods of providing token

* Go back to just env-based token

* Update with another printout test

* Try using container, thanks @alexmt

* Simplify for now, take 2

* Rm quotes

* Move env to ci-builder template

* Rm coveralls token

* Add coverage badge for current branch, take 2

* Add else statement for output in case of missing token

* Ensure we use the race detector

* Don't install goveralls with dep ensure

* Update generated files

* Try ignoring intermediate files

* Don't use race detector for now

* Try new pattern to ignore

* Try different pattern now

* Try different ignore path

* Try a different ignore style

* Ignore generated protobuf files properly now

* Rm standalone test since we have test-coverage
2018-08-09 15:54:15 -07:00
Alexander Matyushentsev
cbf1e3419b Issue #489 - Static assets are being browser cached between upgrades (#502) 2018-08-08 21:10:40 +03:00
Andrew Merenbach
7c8cc41d4c Support explicit deny (#497)
* Honor deny in RBAC model

* Add explicit allow for roles

* Update tests

* Test explicit deny

* Test deny,allow=>deny and allow,deny=>deny
2018-08-08 09:33:44 -07:00
Andrew Merenbach
5dbbd0a76f Support UI cluster creation (#469)
* Add kubeconfig string to ClusterCreateRequest

* Update generated files

* Copy and adapt cluster management logic into db

* Add service account deletion to db

* Return errors from new DB methods

* Adapt InstallClusterManagerRBAC for db

* Update errors in db

* Return error if it occurs from db

* Integrate code to (un)install cluster manager

* Use invalid argument instead of failed precondition

* Set bearer token if error is nil

* Rm cluster RBAC install from CLI

* Rm cluster mgmt install from e2e

* Rm common/install.go

* Move install components into server/cluster, thanks @jessesuen

* Rm unneeded ctx arg

* Restore common/installer.go

* Replace all quoted percent-s with percent-q

* Refactor common/installer.go with error returns

* Return errors rather than exiting fatally

* Return proper number of args

* Slim down cluster methods again

* Simplify, simplify, simplify

* Return gRPC error if RBAC could not be installed

* Issue log entries, not print statements

* Fix log import

* Update generated files

* Refactor

* Major cleanup

* Unmarshal context now

* Put claims check after bearer token insertion

* Initial work to use Kubernetes manifest to create a cluster

* Pass context name now

* Wire up prototype

* Add missing parameter for e2e test

* Just read file directly

* Change how we read cluster server

* Support more attributes from localconfig

* Update generated files

* Support incluster flag

* Comment out unused field for now

* Rm previous NewCluster function

* Unmarshal kube config successfully

* Handle insecure clusters, too

* Use existing logic to get config, thanks @jessesuen

* Revert cluster.go to master version

* Update invocations of RBAC installation

* Fix e2e invocation

* Don't remove management account, thanks @jessesuen

* Fix missing error check in e2e test

* Fix missing clientset arg in e2e fixture

* Create kubeclientset for kubeconfig, thanks @jessesuen
2018-08-06 11:27:48 -07:00
578 changed files with 76825 additions and 287424 deletions


@@ -10,32 +10,94 @@ spec:
value: master
- name: repo
value: https://github.com/argoproj/argo-cd.git
volumes:
- name: k3setc
emptyDir: {}
- name: k3svar
emptyDir: {}
- name: tmp
emptyDir: {}
templates:
- name: argo-cd-ci
steps:
- - name: build
template: ci-dind
arguments:
parameters:
- name: cmd
value: "{{item}}"
withItems:
- make controller-image server-image repo-server-image
- - name: build-e2e
template: build-e2e
- name: test
template: ci-builder
arguments:
parameters:
- name: cmd
value: "{{item}}"
withItems:
- dep ensure && make cli lint test
- name: test-e2e
template: ci-builder
arguments:
parameters:
- name: cmd
value: "dep ensure && make test-e2e"
value: "dep ensure && make lint test && bash <(curl -s https://codecov.io/bash) -f coverage.out"
# This step builds the Argo CD image, deploys the Argo CD components into a throw-away kubernetes cluster provisioned using k3s, and runs the e2e tests against it.
- name: build-e2e
inputs:
artifacts:
- name: code
path: /go/src/github.com/argoproj/argo-cd
git:
repo: "{{workflow.parameters.repo}}"
revision: "{{workflow.parameters.revision}}"
container:
image: argoproj/argo-cd-ci-builder:v0.13.1
imagePullPolicy: Always
command: [sh, -c]
# The main container builds the argocd image. The image is saved into the k3s agent images directory so it can be preloaded by the k3s cluster.
args: ["
dep ensure && until docker ps; do sleep 3; done && \
make image DEV_IMAGE=true && mkdir -p /var/lib/rancher/k3s/agent/images && \
docker save argocd:latest > /var/lib/rancher/k3s/agent/images/argocd.tar && \
touch /var/lib/rancher/k3s/ready && until ls /etc/rancher/k3s/k3s.yaml; do sleep 3; done && \
kubectl create ns argocd-e2e && kustomize build ./test/manifests/ci | kubectl apply -n argocd-e2e -f - && \
kubectl rollout status deployment -n argocd-e2e argocd-application-controller && kubectl rollout status deployment -n argocd-e2e argocd-server && \
git config --global user.email \"test@example.com\" && \
export ARGOCD_SERVER=$(kubectl get service argocd-server -o=jsonpath={.spec.clusterIP} -n argocd-e2e):443 && make test-e2e"
]
workingDir: /go/src/github.com/argoproj/argo-cd
env:
- name: USER
value: argocd
- name: DOCKER_HOST
value: 127.0.0.1
- name: DOCKER_BUILDKIT
value: "1"
- name: KUBECONFIG
value: /etc/rancher/k3s/k3s.yaml
volumeMounts:
- name: tmp
mountPath: /tmp
- name: k3setc
mountPath: /etc/rancher/k3s
- name: k3svar
mountPath: /var/lib/rancher/k3s
sidecars:
- name: dind
image: docker:18.09-dind
securityContext:
privileged: true
resources:
requests:
memory: 2048Mi
cpu: 500m
mirrorVolumeMounts: true
# This step waits for the file /var/lib/rancher/k3s/ready, which indicates that all required images are ready, then starts the cluster.
- name: k3s
image: rancher/k3s:v0.3.0-rc1
imagePullPolicy: Always
command: [sh, -c]
args: ["until ls /var/lib/rancher/k3s/ready; do sleep 3; done && k3s server || true"]
securityContext:
privileged: true
volumeMounts:
- name: tmp
mountPath: /tmp
- name: k3setc
mountPath: /etc/rancher/k3s
- name: k3svar
mountPath: /var/lib/rancher/k3s
- name: ci-builder
inputs:
@@ -48,14 +110,23 @@ spec:
repo: "{{workflow.parameters.repo}}"
revision: "{{workflow.parameters.revision}}"
container:
image: argoproj/argo-cd-ci-builder:latest
command: [sh, -c]
image: argoproj/argo-cd-ci-builder:v0.13.1
imagePullPolicy: Always
command: [bash, -c]
args: ["{{inputs.parameters.cmd}}"]
workingDir: /go/src/github.com/argoproj/argo-cd
env:
- name: CODECOV_TOKEN
valueFrom:
secretKeyRef:
name: codecov-token
key: codecov-token
resources:
requests:
memory: 1024Mi
cpu: 200m
archiveLocation:
archiveLogs: true
- name: ci-dind
inputs:
@@ -68,21 +139,25 @@ spec:
repo: "{{workflow.parameters.repo}}"
revision: "{{workflow.parameters.revision}}"
container:
image: argoproj/argo-cd-ci-builder:latest
image: argoproj/argo-cd-ci-builder:v0.13.1
imagePullPolicy: Always
command: [sh, -c]
args: ["until docker ps; do sleep 3; done && {{inputs.parameters.cmd}}"]
workingDir: /go/src/github.com/argoproj/argo-cd
env:
- name: DOCKER_HOST
value: 127.0.0.1
- name: DOCKER_BUILDKIT
value: "1"
resources:
requests:
memory: 1024Mi
cpu: 200m
sidecars:
- name: dind
image: docker:17.10-dind
image: docker:18.09-dind
securityContext:
privileged: true
mirrorVolumeMounts: true
archiveLocation:
archiveLogs: true

.codecov.yml Normal file

@@ -0,0 +1,7 @@
ignore:
- "**/*.pb.go"
- "**/*.pb.gw.go"
- "**/*_test.go"
- "pkg/apis/.*"
- "pkg/client/.*"
- "test/.*"

.dockerignore

@@ -1,4 +1,12 @@
# Prevent vendor directory from being copied to ensure we are not pulling unexpected cruft from
# a user's workspace, and are only building off of what is locked by dep.
vendor
dist
.vscode/
.idea/
.DS_Store
vendor/
dist/
*.iml
# delve debug binaries
cmd/**/debug
debug.test
coverage.out

.github/ISSUE_TEMPLATE/bug_report.md vendored Normal file

@@ -0,0 +1,27 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: 'bug'
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Additional context**
Add any other context about the problem here.

.github/ISSUE_TEMPLATE/feature_request.md vendored Normal file

@@ -0,0 +1,20 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: 'enhancement'
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.

.github/no-response.yml vendored Normal file

@@ -0,0 +1 @@
# See https://github.com/probot/no-response

.github/stale.yml vendored Normal file

@@ -0,0 +1 @@
# See https://github.com/probot/stale

.gitignore vendored

@@ -3,7 +3,9 @@
.DS_Store
vendor/
dist/
site/
*.iml
# delve debug binaries
cmd/**/debug
debug.test
coverage.out

.golangci.yml Normal file

@@ -0,0 +1,21 @@
run:
deadline: 8m
skip-files:
- ".*\\.pb\\.go"
skip-dirs:
- pkg/client
- vendor
linter-settings:
goimports:
local-prefixes: github.com/argoproj/argo-cd
linters:
enable:
- vet
- gofmt
- goimports
- deadcode
- varcheck
- structcheck
- ineffassign
- unconvert
- misspell

CHANGELOG.md

@@ -1,5 +1,620 @@
# Changelog
## v1.0.0
### New Features
#### Network View
TODO
#### Custom Actions
Argo CD introduces Custom Resource Actions, which allow users to provide their own Lua scripts to modify existing Kubernetes resources in their applications. These actions are exposed in the UI to allow easy, safe, and reliable changes to resources. They can be used, for example, to suspend or resume a Kubernetes CronJob, continue a BlueGreen deployment with Argo Rollouts, or scale a Deployment.
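For illustration only, a rough sketch of how such an action might be declared in the `argocd-cm` ConfigMap; the exact schema evolved during development, so treat the keys, the `batch/CronJob` group/kind, and the Lua bodies as assumptions rather than the canonical format:
```yaml
data:
  resource.customizations: |
    batch/CronJob:
      actions: |
        # discovery.lua decides which actions to offer for a given live object
        discovery.lua: |
          actions = {}
          actions["suspend"] = {["disabled"] = obj.spec.suspend == true}
          return actions
        definitions:
        - name: suspend
          # action.lua mutates the live object; the result is patched back to the cluster
          action.lua: |
            obj.spec.suspend = true
            return obj
```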
#### UI Enhancements
* New color palette intended to highlight unhealthy and out-of-sync resources more clearly.
* The health of more resources is displayed, so it is easier to quickly zoom to unhealthy pods, replica-sets, etc.
* Resources that do not have health no longer appear to be healthy.
### Breaking Changes
* Remove deprecated componentParameterOverrides field #1372
### Changes since v0.12.2
#### Enhancements
* `argocd app wait` should have `--resource` flag like sync #1206
* Adds support for `kustomize edit set image`. Closes #1275 (#1324)
* Allow wait to return on health or suspended (#1392)
* Application warning when a manifest is defined twice #1070
* Create new documentation website #1390
* Default view should be resource view instead of diff view #1354
* Display number of errors on resource tab #1477
* Displays resources that are being deleted as "Progressing". Closes #1410 (#1426)
* Generate random name for grpc proxy unix socket file instead of time stamp (#1455)
* Issue #357 - Expose application nodes networking information (#1333)
* Issue #1404 - App controller unnecessary set namespace to cluster level resources (#1405)
* Nils health if the resource does not provide it. Closes #1383 (#1408)
* Perform health assessments on all resource nodes in the tree. Closes #1382 (#1422)
* Remove deprecated componentParameterOverrides field #1372
* Shows the health of the application. Closes #1433 (#1434)
* Surface Service/Ingress external IPs, hostname to application #908
* Surface pod status to tree view #1358
* Support for customizable resource actions as Lua scripts #86
* UI / API Errors Truncated, Time Out #1386
* UI Enhancement Proposals Quick Wins #1274
* Update argocd-util import/export to support proper backup and restore (#1328)
* Whitelisting repos/clusters in projects should consider repo/cluster permissions #1432
#### Bug Fixes
- Don't compare secrets in the CLI, since argo-cd doesn't have access to their data (#1459)
- Dropdown menu should not have sync item for unmanaged resources #1357
- Fixes goroutine leak. Closes #1381 (#1457)
- Improve input style #1217
- Issue #908 - Surface Service/Ingress external IPs, hostname to application (#1347)
- kustomization fields are all mandatory #1504
- Resource node details is crashing if live resource is missing #1505
- Rollback UI is not showing correct ksonnet parameters in preview #1326
- See details of applications fails with "r.nodes is undefined" #1371
- UI fails to load custom actions if resource is not deployed #1502
- Unable to create app from private repo: x509: certificate signed by unknown authority #1171
## v0.12.2 (2019-04-22)
### Changes since v0.12.1
- Fix race condition in controller cache (#1498)
- "bind: address already in use" after switching to gRPC-Web (#1451)
- Annoying warning while using --grpc-web flag (#1420)
- Delete helm temp directories (#1446)
- Fix null pointer exception in secret normalization function (#1389)
- Argo CD should not delete CRDs (#1425)
- UI is unable to load cluster level resource manifest (#1429)
## v0.12.1 (2019-04-09)
### Changes since v0.12.0
- [UI] applications view blows up when user does not have permissions (#1368)
- Add k8s objects circular dependency protection to getApp method (#1374)
- App controller unnecessary set namespace to cluster level resources (#1404)
- Changing SSO login URL to be a relative link so it's affected by basehref (#101) (@arnarg)
- CLI diff should take into account resource customizations (#1294)
- Don't try deleting application resource if it already has `deletionTimestamp` (#1406)
- Fix invalid group filtering in 'patch-resource' command (#1319)
- Fix null pointer dereference error in 'argocd app wait' (#1366)
- kubectl v1.13 fails to convert extensions/NetworkPolicy (#1012)
- Patch APIs are not audited (#1397)
+ 'argocd app wait' should fail sooner if app transitioned to Degraded state (#733)
+ Add mapping to new canonical Ingress API group - kubernetes 1.14 support (#1348) (@twz123)
+ Adds support for `kustomize edit set image`. (#1275)
+ Allow using any name for secrets which store cluster credentials (#1218)
+ Update argocd-util import/export to support proper backup and restore (#1048)
## v0.12.0 (2019-03-20)
### New Features
#### Improved UI
Many improvements to the UI were made, including:
* Table view when viewing applications
* Filters on applications
* Table view when viewing application resources
* YAML editor in UI
* Switch to text-based diff instead of json diff
* Ability to edit application specs
#### Custom Health Assessments (CRD Health)
Argo CD has long been able to perform health assessments on resources; however, this could only
assess the health for a few native kubernetes types (deployments, statefulsets, daemonsets, etc...).
Now, Argo CD can be extended to gain understanding of any CRD health, in the form of Lua scripts.
For example, using this feature, Argo CD now understands the CertManager Certificate CRD and will
report a Degraded status when there are issues with the cert.
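As a minimal sketch of the shape this takes, a health script is registered per group/kind in the `argocd-cm` ConfigMap; the `certmanager.k8s.io/Certificate` key mirrors the CertManager example, and the condition fields assumed below depend on the CRD's actual status schema:
```yaml
data:
  resource.customizations: |
    certmanager.k8s.io/Certificate:
      health.lua: |
        hs = {}
        if obj.status ~= nil and obj.status.conditions ~= nil then
          for i, condition in ipairs(obj.status.conditions) do
            if condition.type == "Ready" and condition.status == "False" then
              hs.status = "Degraded"
              hs.message = condition.message
              return hs
            end
            if condition.type == "Ready" and condition.status == "True" then
              hs.status = "Healthy"
              hs.message = condition.message
              return hs
            end
          end
        end
        -- no Ready condition yet: report the resource as still progressing
        hs.status = "Progressing"
        hs.message = "Waiting for certificate"
        return hs
```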
#### Configuration Management Plugins
Argo CD introduces Config Management Plugins to support custom configuration management tools other
than the set that Argo CD provides out-of-the-box (Helm, Kustomize, Ksonnet, Jsonnet). Using config
management plugins, Argo CD can be configured to run specified commands to render manifests. This
makes it possible for Argo CD to support other config management tools (kubecfg, kapitan, shell
scripts, etc...).
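A minimal sketch of registering such a plugin in the `argocd-cm` ConfigMap; the `kasane` commands are placeholders, and any `init`/`generate` pair whose generate step prints YAML manifests to stdout should work the same way:
```yaml
data:
  configManagementPlugins: |
    - name: kasane
      init:                       # optional step, runs before manifest generation
        command: [kasane, update]
      generate:                   # must emit the rendered manifests on stdout
        command: [kasane, show]
```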
#### High Availability
Argo CD is now fully HA. A set of HA manifests is provided for users who wish to run Argo CD in
a highly available manner. NOTE: The HA installation will require at least three different nodes due
to pod anti-affinity rules in the specs.
#### Improved Application Source
* Support for Kustomize 2
* YAML/JSON/Jsonnet Directories can now be recursed
* Support for Jsonnet external variables and top-level arguments
#### Additional Prometheus Metrics
Argo CD provides the following additional prometheus metrics:
* Sync counter to track sync activity and results over time
* Application reconciliation (refresh) performance to track Argo CD performance and controller activity
* Argo CD API Server metrics for monitoring HTTP/gRPC requests
#### Fuzzy Diff Logic
Argo CD can now be configured to ignore known differences for resource types by specifying a json
pointer to the field path to ignore. This helps prevent OutOfSync conditions when a user has no
control over the manifests. Ignored differences can be configured either at an application level,
or a system level, based on a group/kind.
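At the application level this looks roughly like the following sketch; the `apps/Deployment` and `/spec/replicas` values are an illustrative choice (e.g. replica counts managed by an autoscaler), and the same group/kind-to-pointers mapping can be supplied system-wide in the Argo CD settings:
```yaml
spec:
  ignoreDifferences:
  - group: apps
    kind: Deployment
    jsonPointers:
    - /spec/replicas   # ignore drift in replica count when diffing
```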
#### Resource Exclusions
Argo CD can now be configured to completely ignore entire classes of resources group/kinds.
Excluding high-volume resources improves performance and memory usage, and reduces load and
bandwidth to the Kubernetes API server. It also allows users to fine-tune the permissions that
Argo CD needs to a cluster by preventing Argo CD from attempting to watch resources of that
group/kind.
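A minimal sketch of the corresponding `argocd-cm` setting; excluding the high-volume `Event` kind everywhere is just an example:
```yaml
data:
  resource.exclusions: |
    - apiGroups:
      - "*"
      kinds:
      - Event        # high-churn resource; rarely useful to track
      clusters:
      - "*"
```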
#### gRPC-Web Support
The argocd CLI can be now configured to communicate to the Argo CD API server using gRPC-Web
(HTTP1.1) using a new CLI flag `--grpc-web`. This resolves some compatibility issues users were
experiencing with ingresses and gRPC (HTTP2), and should enable argocd CLI to work with virtually
any load balancer, ingress controller, or API gateway.
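For example (the hostname is a placeholder):
```bash
argocd login argocd.example.com --grpc-web
argocd app list --grpc-web
```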
#### CLI features
Argo CD introduces some additional CLI commands:
* `argocd app edit APPNAME` - to edit an application spec using preferred EDITOR
* `argocd proj edit PROJNAME` - to edit an project spec using preferred EDITOR
* `argocd app patch APPNAME` - to patch an application spec
* `argocd app patch-resource APPNAME` - to patch a specific resource which is part of an application
### Breaking Changes
#### Label selector changes, dex-server rename
The label selectors for deployments have been renamed to use kubernetes common labels
(`app.kubernetes.io/name=NAME` instead of `app=NAME`). Since K8s deployment label selectors are
immutable, during an upgrade from v0.11 to v0.12, the old deployments should be deleted using
`--cascade=false` which allows the new deployments to be created without introducing downtime.
Once the new deployments are ready, the older replicasets can be deleted. Use the following
instructions to upgrade from v0.11 to v0.12 without introducing downtime:
```
# delete the deployments with cascade=false. this orphans the replicasets, but leaves the pods running
kubectl delete deploy --cascade=false argocd-server argocd-repo-server argocd-application-controller
# apply the new manifests and wait for them to finish rolling out
kubectl apply <new install manifests>
kubectl rollout status deploy/argocd-server
kubectl rollout status deploy/argocd-repo-server
kubectl rollout status deploy/argocd-application-controller
# delete old replicasets which are using the legacy label
kubectl delete rs -l app=argocd-server
kubectl delete rs -l app=argocd-repo-server
kubectl delete rs -l app=argocd-application-controller
# delete the legacy dex-server which was renamed
kubectl delete deploy dex-server
```
#### Deprecation of spec.source.componentParameterOverrides
For declarative application specs, the `spec.source.componentParameterOverrides` field is now
deprecated in favor of application source specific config. They are replaced with new fields
specific to their respective config management. For example, a Helm application spec using the
legacy field:
```yaml
spec:
source:
componentParameterOverrides:
- name: image.tag
value: v1.2
```
should move to:
```yaml
spec:
source:
helm:
parameters:
- name: image.tag
value: v1.2
```
Argo CD will automatically duplicate the legacy field values to the new locations (and vice versa)
as part of automatic migration. The legacy `spec.source.componentParameterOverrides` field will be
kept around for the v0.12 release (for migration purposes) and will be removed in the next Argo CD
release.
#### Removal of spec.source.environment and spec.source.valuesFiles
The `spec.source.environment` and `spec.source.valuesFiles` fields, which were deprecated in v0.11,
are now completely removed from the Application spec.
#### API/CLI compatibility
Due to API spec changes related to the deprecation of componentParameterOverrides, Argo CD v0.12
has a minimum client version of v0.12.0. Older CLI clients will be rejected.
### Changes since v0.11:
+ Improved UI
+ Custom Health Assessments (CRD Health)
+ Configuration Management Plugins
+ High Availability
+ Fuzzy Diff Logic
+ Resource Exclusions
+ gRPC-Web Support
+ CLI features
+ Additional prometheus metrics
+ Sample Grafana dashboard (#1277) (@hartman17)
+ Support for Kustomize 2
+ YAML/JSON/Jsonnet Directories can now be recursed
+ Support for Jsonnet external variables and top-level arguments
+ Optimized reconciliation performance for applications with very active resources (#1267)
+ Support a separate OAuth2 CLI clientID different from server (#1307)
+ argocd diff: only print to stdout, if there is a diff + exit code (#1288) (@marcb1)
+ Detection and handling of duplicated resource definitions (#1284)
+ Support kustomize apps with remote bases in private repos in the same host (#1264)
+ Support patching resource using REST API (#1186)
* Deprecate componentParameterOverrides in favor of source specific config (#1207)
* Support talking to Dex using local cluster address instead of public address (#1211)
* Use Recreate deployment strategy for controller (#1315)
* Honor os environment variables for helm commands (#1306) (@1337andre)
* Disable CGO_ENABLED for server/controller binaries (#1286)
* Documentation fixes and improvements (@twz123, @yann-soubeyrand, @OmerKahani, @dulltz)
- Fix CRD creation/deletion handling (#1249)
- Git cloning via SSH was not verifying host public key (#1276)
- Fixed multiple goroutine leaks in controller and api-server
- Fix issue where `argocd app set -p` required repo privileges. (#1280)
- Fix local diff of non-namespaced resources. Also handle duplicates in local diff (#1289)
- Deprecated resource kinds from 'extensions' groups are not reconciled correctly (#1232)
- Fix issue where CLI would panic after timeout when cli did not have get permissions (#1209)
- invalidate repo cache on delete (#1182) (@narg95)
## v0.11.2 (2019-02-19)
+ Adds client retry. Fixes #959 (#1119)
- Prevent deletion hotloop (#1115)
- Fix EncodeX509KeyPair function so it takes in account chained certificates (#1137) (@amarruedo)
- Exclude metrics.k8s.io from watch (#1128)
- Fix issue where dex restart could cause login failures (#1114)
- Relax ingress/service health check to accept non-empty ingress list (#1053)
- [UI] Correctly handle empty response from repository/<repo>/apps API
## v0.11.1 (2019-01-18)
+ Allow using redis as a cache in repo-server (#1020)
- Fix controller deadlock when checking for stale cache (#1044)
- Namespaces are not being sorted during apply (#1038)
- Controller cache was susceptible to clock skew in managed cluster
- Fix ability to unset ApplicationSource specific parameters
- Fix force resource delete API (#1033)
- Incorrect PermissionDenied error during app creation when using project roles + user-defined RBAC (#1019)
- Fix `kubectl convert` issue preventing deployment of extensions/NetworkPolicy (#1012)
- Do not allow metadata.creationTimestamp to affect sync status (#1021)
- Graceful handling of clusters where API resource discovery is partially successful (#1018)
- Handle k8s resources circular dependency (#1016)
- Fix `app diff --local` command (#1008)
## v0.11.0 (2019-01-10)
This is Argo CD's biggest release ever and introduces a completely redesigned controller architecture.
### New Features
#### Performance & Scalability
The application controller has a completely redesigned architecture which improved performance and
scalability during application reconciliation.
This was achieved by introducing an in-memory, live state cache of lightweight Kubernetes object
metadata. During reconciliation, the controller no longer performs expensive, in-line queries of app
related resources in K8s API server, instead relying on the metadata available in the live state
cache. This dramatically improves performance and responsiveness, and is less burdensome to the K8s
API server.
#### Object relationship visualization for CRDs
With the new controller design, Argo CD is now able to understand ownership relationship between
*all* Kubernetes objects, not just the built-in types. This enables Argo CD to visualize
parent/child relationships between all kubernetes objects, including CRDs.
#### Multi-namespaced applications
During sync, Argo CD will now honor any explicitly set namespace in a manifest. Manifests without a
namespace will continue to deploy to the "preferred" namespace, as specified in the app's
`spec.destination.namespace`. This enables support for a class of applications which install to
multiple namespaces. For example, Argo CD can now install the
[prometheus-operator](https://github.com/helm/charts/tree/master/stable/prometheus-operator)
helm chart, which deploys some resources into `kube-system`, and others into the
`prometheus-operator` namespace.
#### Large application support
Full resource objects are no longer stored in the Application CRD object status. Instead, only
lightweight metadata is stored in the status, such as a resource's sync and health status.
This change enabled Argo CD to support applications with a very large number of resources
(e.g. istio), and reduces the bandwidth requirements when listing applications in the UI.
#### Resource lifecycle hook improvements
Resource lifecycle hooks (e.g. PreSync, PostSync) are now visible/manageable from the UI.
Additionally, bare Pods with a restart policy of Never can now be used as a resource hook, as an
alternative to Jobs, Workflows.
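As a sketch, a bare Pod becomes a hook via an annotation; the name, image, and command below are placeholders:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: before-sync
  annotations:
    argocd.argoproj.io/hook: PreSync   # run this Pod before the main sync
spec:
  restartPolicy: Never                 # a Never restart policy is what allows a bare Pod as a hook
  containers:
  - name: task
    image: alpine:3.8
    command: [sh, -c, "echo preparing for sync"]
```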
#### K8s recommended application labels
The tracking label for resources has been changed to use `app.kubernetes.io/instance`, as
recommended in [Kubernetes recommended labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/),
(changed from `applications.argoproj.io/app-name`). This will enable applications managed by Argo CD
to interoperate with other tooling which are also converging on this labeling, such as the
Kubernetes dashboard. Additionally, Argo CD no longer injects any tracking labels at the
`spec.template.metadata` level.
#### External OIDC provider support
Argo CD now supports auth delegation to an existing, external OIDC provider without the need for
running Dex (e.g. Okta, OneLogin, Auth0, Microsoft, etc...)
The optional, [Dex IDP OIDC provider](https://github.com/dexidp/dex) is still bundled as part of the
default installation, in order to provide a seamless out-of-box experience, enabling Argo CD to
integrate with non-OIDC providers, and to benefit from Dex's full range of
[connectors](https://github.com/dexidp/dex/tree/master/Documentation/connectors).
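A minimal sketch of the `argocd-cm` configuration for an external provider; the issuer, client ID, and secret values are placeholders:
```yaml
data:
  url: https://argocd.example.com      # externally facing Argo CD URL
  oidc.config: |
    name: Okta
    issuer: https://example.okta.com
    clientID: <client-id>
    clientSecret: <client-secret>
```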
#### OIDC group bindings to Project Roles
OIDC group claims from an OAuth2 provider can now be bound to Argo CD project roles. Previously,
group claims could only be managed in the centralized ConfigMap, `argocd-rbac-cm`. They can now be
managed at a project level. This enables project admins to self-service access to applications
within a project.
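A minimal sketch of a project role bound to an OIDC group claim; the project, role, and group names are placeholders:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: my-project
  namespace: argocd
spec:
  roles:
  - name: admins
    policies:
    - p, proj:my-project:admins, applications, *, my-project/*, allow
    groups:
    - my-org:team-admins   # members of this OIDC group receive the role
```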
#### Declarative Argo CD configuration
Argo CD settings can now be configured either declaratively, or imperatively. The `argocd-cm`
ConfigMap now has a `repositories` field, which can reference credentials in a normal Kubernetes
secret which you can create declaratively, outside of Argo CD.
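For example, a sketch of a declaratively configured private repository whose credentials live in an ordinary Secret (`repo-creds` is a placeholder name):
```yaml
data:
  repositories: |
    - url: https://github.com/example/private-repo
      usernameSecret:              # references a plain Kubernetes Secret
        name: repo-creds
        key: username
      passwordSecret:
        name: repo-creds
        key: password
```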
#### Helm repository support
Helm repositories can be configured at the system level, enabling the deployment of helm charts
which have a dependency to external helm repositories.
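A minimal sketch of the corresponding `argocd-cm` entry; the URL and name are placeholders:
```yaml
data:
  helm.repositories: |
    - url: https://charts.example.com
      name: example-charts
```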
### Breaking changes:
* Argo CD's resource names were renamed for consistency. For example, the application-controller
deployment was renamed to argocd-application-controller. When upgrading from v0.10 to v0.11,
the older resources should be pruned to avoid inconsistent state and controller in-fighting.
* As a consequence of moving to recommended kubernetes labels, when upgrading from v0.10 to v0.11,
all applications will immediately be OutOfSync due to the change in tracking labels. This will
correct itself with another sync of the application. However, since Pods will be recreated, please
take this into consideration, especially if your applications are configured with auto-sync.
* There was significant reworking of the `app.status` fields to reduce the payload size, simplify
the data structure, and remove fields which were no longer used by the controller. No breaking
changes were made in `app.spec`.
* An older Argo CD CLI (v0.10 and below) will not be compatible with Argo CD v0.11. To keep
CI pipelines in sync with the API server, it is recommended to have pipelines download the CLI
directly from the API server https://${ARGOCD_SERVER}/download/argocd-linux-amd64 during the CI
pipeline.
### Changes since v0.10:
* Improve Application state reconciliation performance (#806)
* Refactor, consolidate and rename resource type data structures
+ Declarative setup and configuration of ArgoCD (#536)
+ Declaratively add helm repositories (#747)
+ Switch to k8s recommended app.kubernetes.io/instance label (#857)
+ Ability for a single application to deploy into multiple namespaces (#696)
+ Self service group access to project applications (#742)
+ Support for Pods as a sync hook (#801)
+ Support 'crd-install' helm hook (#355)
+ Use external 'diff' utility to render actual vs target state difference
+ Show sync policy in app list view
* Remove resources state from application CRD (#758)
* API server & UI should serve argocd binaries instead of linking to GitHub (#716)
* Update versions for kubectl (v1.13.1), helm (v2.12.1), ksonnet (v0.13.1)
* Update version of aws-iam-authenticator (0.4.0-alpha.1)
* Ability to force refresh of application manifests from git
* Improve diff assessment for Secrets, ClusterRoles, Roles
- Failed to deploy helm chart with local dependencies and no internet access (#786)
- Out of sync reported if Secrets with stringData are used (#763)
- Unable to delete application in K8s v1.12 (#718)
## v0.10.6 (2018-11-14)
- Fix issue preventing in-cluster app sync due to go-client changes (issue #774)
## v0.10.5 (2018-11-13)
+ Increase concurrency of application controller
* Update dependencies to k8s v1.12 and client-go v9.0 (#729)
- add argo cluster permission to view logs (#766) (@conorfennell)
- Fix issue where applications could not be deleted on k8s v1.12
- Allow 'syncApplication' action to reference target revision rather than hard-coding to 'HEAD' (#69) (@chrisgarland)
- Issue #768 - Fix application wizard crash
## v0.10.4 (2018-11-07)
* Upgrade to Helm v2.11.0 (@amarrella)
- Health check is not discerning apiVersion when assessing CRDs (issue #753)
- Fix nil pointer dereference in util/health (@mduarte)
## v0.10.3 (2018-10-28)
* Fix applying TLS version settings
* Update to kustomize 1.0.10 (@twz123)
## v0.10.2 (2018-10-25)
* Update to kustomize 1.0.9 (@twz123)
- Fix app refresh err when k8s patch is too slow
## v0.10.1 (2018-10-24)
- Handle case where OIDC settings become invalid after dex server restart (issue #710)
- git clean also needs to clean files under gitignore (issue #711)
## v0.10.0 (2018-10-19)
### Changes since v0.9:
+ Allow more fine-grained sync (issue #508)
+ Display init container logs (issue #681)
+ Redirect to /auth/login instead of /login when SSO token is used for authentication (issue #348)
+ Support ability to use a helm values files from a URL (issue #624)
+ Support public not-connected repo in app creation UI (issue #426)
+ Use ksonnet CLI instead of ksonnet libs (issue #626)
+ We should be able to select the order of the `yaml` files while creating a Helm App (#664)
* Remove default params from app history (issue #556)
* Update to ksonnet v0.13.0
* Update to kustomize 1.0.8
- API Server fails to return apps due to grpc max message size limit (issue #690)
- App Creation UI for Helm Apps shows only files prefixed with `values-` (issue #663)
- App creation UI should allow specifying values files outside of helm app directory bug (issue #658)
- argocd-server logs credentials in plain text when adding git repositories (issue #653)
- Azure Repos do not work as a repository (issue #643)
- Better update conflict error handing during app editing (issue #685)
- Cluster watch needs to be restarted when CRDs get created (issue #627)
- Credentials not being accepted for Google Source Repositories (issue #651)
- Default project is created without permission to deploy cluster level resources (issue #679)
- Generate role token click resets policy changes (issue #655)
- Input type text instead of password on Connect repo panel (issue #693)
- Metrics endpoint not reachable through the metrics kubernetes service (issue #672)
- Operation stuck in 'in progress' state if application has no resources (issue #682)
- Project should influence options for cluster and namespace during app creation (issue #592)
- Repo server unable to execute ls-remote for private repos (issue #639)
- Resource is always out of sync if it has only 'ksonnet.io/component' label (issue #686)
- Resource nodes are 'jumping' on app details page (issue #683)
- Sync always suggest using latest revision instead of target UI bug (issue #669)
- Temporary ignore service catalog resources (issue #650)
## v0.9.2 (2018-09-28)
* Update to kustomize 1.0.8
- Fix issue where argocd-server logged credentials in plain text during repo add (issue #653)
- Credentials not being accepted for Google Source Repositories (issue #651)
- Azure Repos do not work as a repository (issue #643)
- Temporary ignore service catalog resources (issue #650)
- Normalize policies by always adding space after comma
## v0.9.1 (2018-09-24)
- Repo server unable to execute ls-remote for private repos (issue #639)
## v0.9.0 (2018-09-24)
### Notes about upgrading from v0.8
* Cluster wide resources should be allowed in default project (due to issue #330):
```
argocd project allow-cluster-resource default '*' '*'
```
* Projects now provide the ability to allow or deny deployments of cluster-scoped resources
(e.g. Namespaces, ClusterRoles, CustomResourceDefinitions). When upgrading from v0.8 to v0.9, to
match the behavior of v0.8 (which did not have restrictions on deploying resources) and continue to
allow deployment of cluster-scoped resources, an additional command should be run:
```bash
argocd proj allow-cluster-resource default '*' '*'
```
The above command allows the `default` project to deploy any cluster-scoped resources which matches
the behavior of v0.8.
* The secret keys in the argocd-secret containing the TLS certificate and key have been renamed from
`server.crt` and `server.key` to the standard `tls.crt` and `tls.key` keys. This enables Argo CD
to integrate better with Ingress and cert-manager. When upgrading to v0.9, the `server.crt` and
`server.key` keys in argocd-secret should be renamed to the new keys.
### Changes since v0.8:
+ Auto-sync option in application CRD instance (issue #79)
+ Support raw jsonnet as an application source (issue #540)
+ Reorder K8s resources to correct creation order (issue #102)
+ Redact K8s secrets from API server payloads (issue #470)
+ Support --in-cluster authentication without providing a kubeconfig (issue #527)
+ Special handling of CustomResourceDefinitions (issue #613)
+ Argo CD should download helm chart dependencies (issue #582)
+ Export Argo CD stats as prometheus style metrics (issue #513)
+ Support restricting TLS version (issue #609)
+ Use 'kubectl auth reconcile' before 'kubectl apply' (issue #523)
+ Projects need controls on cluster-scoped resources (issue #330)
+ Support IAM Authentication for managing external K8s clusters (issue #482)
+ Compatibility with cert manager (issue #617)
* Enable TLS for repo server (issue #553)
* Split out dex into its own deployment (instead of sidecar) (issue #555)
+ [UI] Support selection of helm values files in App creation wizard (issue #499)
+ [UI] Support specifying source revision in App creation wizard (issue #503)
+ [UI] Improve resource diff rendering (issue #457)
+ [UI] Indicate number of ready containers in pod (issue #539)
+ [UI] Indicate when app is overriding parameters (issue #503)
+ [UI] Provide a YAML view of resources (issue #396)
+ [UI] Project Role/Token management from UI (issue #548)
+ [UI] App creation wizard should allow specifying source revision (issue #562)
+ [UI] Ability to modify application from UI (issue #615)
+ [UI] indicate when operation is in progress or has failed (issue #566)
- Fix issue where changes were not pulled when tracking a branch (issue #567)
- Lazy enforcement of unknown cluster/namespace restricted resources (issue #599)
- Fix controller hot loop when app source contains bad manifests (issue #568)
- Fix issue where Argo CD fails to deploy when resources are in a K8s list format (issue #584)
- Fix comparison failure when app contains unregistered custom resource (issue #583)
- Fix issue where helm hooks were being deployed as part of sync (issue #605)
- Fix race conditions in kube.GetResourcesWithLabel and DeleteResourceWithLabel (issue #587)
- [UI] Fix issue where projects filter does not work when application got changed
- [UI] Creating apps from directories is not obvious (issue #565)
- Helm hooks are being deployed as resources (issue #605)
- Disagreement in three way diff calculation (issue #597)
- SIGSEGV in kube.GetResourcesWithLabel (issue #587)
- Argo CD fails to deploy resources list (issue #584)
- Branch tracking not working properly (issue #567)
- Controller hot loop when application source has bad manifests (issue #568)
## v0.8.2 (2018-09-12)
- Downgrade ksonnet from v0.12.0 to v0.11.0 due to quote unescape regression
- Fix CLI panic when performing an initial `argocd sync/wait`
## v0.8.1 (2018-09-10)
+ [UI] Support selection of helm values files in App creation wizard (issue #499)
+ [UI] Support specifying source revision in App creation wizard (issue #503)
+ [UI] Improve resource diff rendering (issue #457)
+ [UI] Indicate number of ready containers in pod (issue #539)
+ [UI] Indicate when app is overriding parameters (issue #503)
+ [UI] Provide a YAML view of resources (issue #396)
- Fix issue where changes were not pulled when tracking a branch (issue #567)
- Fix controller hot loop when app source contains bad manifests (issue #568)
- [UI] Fix issue where projects filter does not work when application got changed
## v0.8.0 (2018-09-04)
### Notes about upgrading from v0.7
* The RBAC model has been improved to support explicit denies. What this means is that any previous
RBAC policy rules, need to be rewritten to include one extra column with the effect:
`allow` or `deny`. For example, if a rule was written like this:
```
p, my-org:my-team, applications, get, */*
```
It should be rewritten to look like this:
```
p, my-org:my-team, applications, get, */*, allow
```
### Changes since v0.7:
+ Support kustomize as an application source (issue #510)
+ Introduce project tokens for automation access (issue #498)
+ Add ability to delete a single application resource to support immutable updates (issue #262)
+ Update RBAC model to support explicit denies (issue #497)
+ Ability to view Kubernetes events related to application projects for auditing
+ Add PVC healthcheck to controller (issue #501)
+ Run all containers as an unprivileged user (issue #528)
* Upgrade ksonnet to v0.12.0
* Add readiness probes to API server (issue #522)
* Use gRPC error codes instead of fmt.Errorf (#532)
- API discovery becomes best effort when partial resource list is returned (issue #524)
- Fix `argocd app wait` printing incorrect Sync output (issue #542)
- Fix issue where argocd could not sync to a tag (#541)
- Fix issue where static assets were browser cached between upgrades (issue #489)
## v0.7.2 (2018-08-21)
- API discovery becomes best effort when partial resource list is returned (issue #524)
## v0.7.1 (2018-08-03)
+ Surface helm parameters to the application level (#485)
+ [UI] Improve application creation wizard (#459)
@@ -70,7 +685,7 @@
## v0.5.0 (2018-06-12)
+ RBAC access control
+ Repository/Cluster state monitoring
+ ArgoCD settings import/export
+ Argo CD settings import/export
+ Application creation UI wizard
+ argocd app manifests for printing the application manifests
+ argocd app unset command to unset parameter overrides


@@ -1,76 +0,0 @@
## Requirements
Make sure you have the following tools installed
* [docker](https://docs.docker.com/install/#supported-platforms)
* [golang](https://golang.org/)
* [dep](https://github.com/golang/dep)
* [protobuf](https://developers.google.com/protocol-buffers/)
* [ksonnet](https://github.com/ksonnet/ksonnet#install)
* [helm](https://github.com/helm/helm/releases)
* [go-swagger](https://github.com/go-swagger/go-swagger/blob/master/docs/install.md)
* [jq](https://stedolan.github.io/jq/)
* [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/).
```
$ brew tap go-swagger/go-swagger
$ brew install go dep protobuf kubectl ksonnet/tap/ks kubernetes-helm jq go-swagger
$ go get -u github.com/golang/protobuf/protoc-gen-go
$ go get -u github.com/go-swagger/go-swagger/cmd/swagger
$ go get -u github.com/grpc-ecosystem/grpc-gateway/protoc-gen-grpc-gateway
$ go get -u github.com/grpc-ecosystem/grpc-gateway/protoc-gen-swagger
```
Nice to have [gometalinter](https://github.com/alecthomas/gometalinter) and [goreman](https://github.com/mattn/goreman):
```
$ go get -u gopkg.in/alecthomas/gometalinter.v2 github.com/mattn/goreman && gometalinter.v2 --install
```
## Building
```
$ go get -u github.com/argoproj/argo-cd
$ dep ensure
$ make
```
NOTE: The make command can take a while, and we recommend building the specific component you are working on
* `make cli` - Make the argocd CLI tool
* `make server` - Make the API/repo/controller server
* `make codegen` - Builds protobuf and swagger files
* `make argocd-util` - Make the administrator's utility, used for certain tasks such as import/export
## Generating ArgoCD manifests for a specific image repository/tag
During development, the `update-manifests.sh` script can be used to conveniently regenerate the
ArgoCD installation manifests with a customized image namespace and tag. This enables developers
to easily apply manifests which are using the images that they pushed into their personal container
repository.
```
$ IMAGE_NAMESPACE=jessesuen IMAGE_TAG=latest ./hack/update-manifests.sh
$ kubectl apply -n argocd -f ./manifests/install.yaml
```
## Running locally
You need to have access to a Kubernetes cluster (such as [minikube](https://kubernetes.io/docs/tasks/tools/install-minikube/) or [docker edge](https://docs.docker.com/docker-for-mac/install/)) in order to run Argo CD on your laptop:
* install kubectl: `brew install kubectl`
* make sure `kubectl` is connected to your cluster (e.g. `kubectl get pods` should work).
* install application CRD using following command:
```
$ kubectl create -f install/manifests/01_application-crd.yaml
```
* start Argo CD services using [goreman](https://github.com/mattn/goreman):
```
$ goreman start
```
## Troubleshooting
* Ensure argocd is installed: `./dist/argocd install`
* Ensure you're logged in: `./dist/argocd login --username admin --password <whatever password you set at install> localhost:8080`
* Ensure that roles are configured: `kubectl create -f install/manifests/02c_argocd-rbac-cm.yaml`
* Ensure minikube is running: `minikube stop && minikube start`
* Ensure Argo CD is aware of minikube: `./dist/argocd cluster add minikube`

Dockerfile Normal file

@@ -0,0 +1,152 @@
####################################################################################################
# Builder image
# Initial stage which pulls and prepares build dependencies and CLI tooling we need for our final image
# Also used as the image in CI jobs so needs all dependencies
####################################################################################################
FROM golang:1.11.4 as builder
RUN apt-get update && apt-get install -y \
git \
make \
wget \
gcc \
zip && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
WORKDIR /tmp
# Install docker
ENV DOCKER_CHANNEL stable
ENV DOCKER_VERSION 18.09.1
RUN wget -O docker.tgz "https://download.docker.com/linux/static/${DOCKER_CHANNEL}/x86_64/docker-${DOCKER_VERSION}.tgz" && \
tar --extract --file docker.tgz --strip-components 1 --directory /usr/local/bin/ && \
rm docker.tgz
# Install dep
ENV DEP_VERSION=0.5.0
RUN wget https://github.com/golang/dep/releases/download/v${DEP_VERSION}/dep-linux-amd64 -O /usr/local/bin/dep && \
chmod +x /usr/local/bin/dep
# Install gometalinter
ENV GOMETALINTER_VERSION=2.0.12
RUN curl -sLo- https://github.com/alecthomas/gometalinter/releases/download/v${GOMETALINTER_VERSION}/gometalinter-${GOMETALINTER_VERSION}-linux-amd64.tar.gz | \
tar -xzC "$GOPATH/bin" --exclude COPYING --exclude README.md --strip-components 1 -f- && \
ln -s $GOPATH/bin/gometalinter $GOPATH/bin/gometalinter.v2
# Install packr
ENV PACKR_VERSION=1.21.9
RUN wget https://github.com/gobuffalo/packr/releases/download/v${PACKR_VERSION}/packr_${PACKR_VERSION}_linux_amd64.tar.gz && \
tar -vxf packr*.tar.gz -C /tmp/ && \
mv /tmp/packr /usr/local/bin/packr
# Install kubectl
# NOTE: keep the version synced with https://storage.googleapis.com/kubernetes-release/release/stable.txt
ENV KUBECTL_VERSION=1.14.0
RUN curl -L -o /usr/local/bin/kubectl https://storage.googleapis.com/kubernetes-release/release/v${KUBECTL_VERSION}/bin/linux/amd64/kubectl && \
chmod +x /usr/local/bin/kubectl && \
kubectl version --client
# Install ksonnet
ENV KSONNET_VERSION=0.13.1
RUN wget https://github.com/ksonnet/ksonnet/releases/download/v${KSONNET_VERSION}/ks_${KSONNET_VERSION}_linux_amd64.tar.gz && \
tar -C /tmp/ -xf ks_${KSONNET_VERSION}_linux_amd64.tar.gz && \
mv /tmp/ks_${KSONNET_VERSION}_linux_amd64/ks /usr/local/bin/ks && \
ks version
# Install helm
ENV HELM_VERSION=2.12.1
RUN wget https://storage.googleapis.com/kubernetes-helm/helm-v${HELM_VERSION}-linux-amd64.tar.gz && \
tar -C /tmp/ -xf helm-v${HELM_VERSION}-linux-amd64.tar.gz && \
mv /tmp/linux-amd64/helm /usr/local/bin/helm && \
helm version --client
# Install kustomize
ENV KUSTOMIZE1_VERSION=1.0.11
RUN curl -L -o /usr/local/bin/kustomize1 https://github.com/kubernetes-sigs/kustomize/releases/download/v${KUSTOMIZE1_VERSION}/kustomize_${KUSTOMIZE1_VERSION}_linux_amd64 && \
chmod +x /usr/local/bin/kustomize1 && \
kustomize1 version
ENV KUSTOMIZE_VERSION=2.0.3
RUN curl -L -o /usr/local/bin/kustomize https://github.com/kubernetes-sigs/kustomize/releases/download/v${KUSTOMIZE_VERSION}/kustomize_${KUSTOMIZE_VERSION}_linux_amd64 && \
chmod +x /usr/local/bin/kustomize && \
kustomize version
# Install AWS IAM Authenticator
ENV AWS_IAM_AUTHENTICATOR_VERSION=0.4.0-alpha.1
RUN curl -L -o /usr/local/bin/aws-iam-authenticator https://github.com/kubernetes-sigs/aws-iam-authenticator/releases/download/${AWS_IAM_AUTHENTICATOR_VERSION}/aws-iam-authenticator_${AWS_IAM_AUTHENTICATOR_VERSION}_linux_amd64 && \
chmod +x /usr/local/bin/aws-iam-authenticator
# Install golangci-lint
RUN wget https://install.goreleaser.com/github.com/golangci/golangci-lint.sh && \
chmod +x ./golangci-lint.sh && \
./golangci-lint.sh -b $GOPATH/bin && \
golangci-lint linters
COPY .golangci.yml ${GOPATH}/src/dummy/.golangci.yml
RUN cd ${GOPATH}/src/dummy && \
touch dummy.go && \
golangci-lint run
####################################################################################################
# Argo CD Base - used as the base for both the release and dev argocd images
####################################################################################################
FROM debian:9.5-slim as argocd-base
RUN groupadd -g 999 argocd && \
useradd -r -u 999 -g argocd argocd && \
mkdir -p /home/argocd && \
chown argocd:argocd /home/argocd && \
apt-get update && \
apt-get install -y git && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
COPY hack/ssh_known_hosts /etc/ssh/ssh_known_hosts
COPY hack/git-ask-pass.sh /usr/local/bin/git-ask-pass.sh
COPY --from=builder /usr/local/bin/ks /usr/local/bin/ks
COPY --from=builder /usr/local/bin/helm /usr/local/bin/helm
COPY --from=builder /usr/local/bin/kubectl /usr/local/bin/kubectl
COPY --from=builder /usr/local/bin/kustomize1 /usr/local/bin/kustomize1
COPY --from=builder /usr/local/bin/kustomize /usr/local/bin/kustomize
COPY --from=builder /usr/local/bin/aws-iam-authenticator /usr/local/bin/aws-iam-authenticator
# workaround ksonnet issue https://github.com/ksonnet/ksonnet/issues/298
ENV USER=argocd
USER argocd
WORKDIR /home/argocd
####################################################################################################
# Argo CD Build stage which performs the actual build of Argo CD binaries
####################################################################################################
FROM golang:1.11.4 as argocd-build
COPY --from=builder /usr/local/bin/dep /usr/local/bin/dep
COPY --from=builder /usr/local/bin/packr /usr/local/bin/packr
# A dummy directory is created under $GOPATH/src/dummy so we are able to use dep
# to install all the packages of our dep lock file
COPY Gopkg.toml ${GOPATH}/src/dummy/Gopkg.toml
COPY Gopkg.lock ${GOPATH}/src/dummy/Gopkg.lock
RUN cd ${GOPATH}/src/dummy && \
dep ensure -vendor-only && \
mv vendor/* ${GOPATH}/src/ && \
rmdir vendor
# Perform the build
WORKDIR /go/src/github.com/argoproj/argo-cd
COPY . .
RUN make cli server controller repo-server argocd-util && \
make CLI_NAME=argocd-darwin-amd64 GOOS=darwin cli
####################################################################################################
# Final image
####################################################################################################
FROM argocd-base
COPY --from=argocd-build /go/src/github.com/argoproj/argo-cd/dist/argocd* /usr/local/bin/


@@ -1,88 +0,0 @@
FROM debian:9.4 as builder
RUN apt-get update && apt-get install -y \
git \
make \
wget \
gcc \
zip && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Install go
ENV GO_VERSION 1.10.3
ENV GO_ARCH amd64
ENV GOPATH /root/go
ENV PATH ${GOPATH}/bin:/usr/local/go/bin:${PATH}
RUN wget https://storage.googleapis.com/golang/go${GO_VERSION}.linux-${GO_ARCH}.tar.gz && \
tar -C /usr/local/ -xf /go${GO_VERSION}.linux-${GO_ARCH}.tar.gz && \
rm /go${GO_VERSION}.linux-${GO_ARCH}.tar.gz
# Install protoc, dep, packr
ENV PROTOBUF_VERSION 3.5.1
RUN cd /usr/local && \
wget https://github.com/google/protobuf/releases/download/v${PROTOBUF_VERSION}/protoc-${PROTOBUF_VERSION}-linux-x86_64.zip && \
unzip protoc-*.zip && \
wget https://github.com/golang/dep/releases/download/v0.4.1/dep-linux-amd64 -O /usr/local/bin/dep && \
chmod +x /usr/local/bin/dep && \
wget https://github.com/gobuffalo/packr/releases/download/v1.11.0/packr_1.11.0_linux_amd64.tar.gz && \
tar -vxf packr*.tar.gz -C /tmp/ && \
mv /tmp/packr /usr/local/bin/packr
# A dummy directory is created under $GOPATH/src/dummy so we are able to use dep
# to install all the packages of our dep lock file
COPY Gopkg.toml ${GOPATH}/src/dummy/Gopkg.toml
COPY Gopkg.lock ${GOPATH}/src/dummy/Gopkg.lock
RUN cd ${GOPATH}/src/dummy && \
dep ensure -vendor-only && \
mv vendor/* ${GOPATH}/src/ && \
rmdir vendor
# Perform the build
WORKDIR /root/go/src/github.com/argoproj/argo-cd
COPY . .
ARG MAKE_TARGET="cli server controller repo-server argocd-util"
RUN make ${MAKE_TARGET}
##############################################################
# This stage will pull in or build any CLI tooling we need for our final image
FROM golang:1.10 as cli-tooling
# NOTE: we frequently switch between tip of master ksonnet vs. official builds. Comment/uncomment
# the corresponding section to switch between the two options:
# Option 1: build ksonnet ourselves
#RUN go get -v -u github.com/ksonnet/ksonnet && mv ${GOPATH}/bin/ksonnet /ks
# Option 2: use official tagged ksonnet release
env KSONNET_VERSION=0.11.0
RUN wget https://github.com/ksonnet/ksonnet/releases/download/v${KSONNET_VERSION}/ks_${KSONNET_VERSION}_linux_amd64.tar.gz && \
tar -C /tmp/ -xf ks_${KSONNET_VERSION}_linux_amd64.tar.gz && \
mv /tmp/ks_${KSONNET_VERSION}_linux_amd64/ks /ks
RUN curl -o /kubectl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && \
chmod +x /kubectl
env HELM_VERSION=2.9.1
RUN wget https://storage.googleapis.com/kubernetes-helm/helm-v${HELM_VERSION}-linux-amd64.tar.gz && \
tar -C /tmp/ -xf helm-v${HELM_VERSION}-linux-amd64.tar.gz && \
mv /tmp/linux-amd64/helm /helm
##############################################################
FROM debian:9.3
RUN apt-get update && apt-get install -y git && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
COPY --from=cli-tooling /ks /usr/local/bin/ks
COPY --from=cli-tooling /helm /usr/local/bin/helm
COPY --from=cli-tooling /kubectl /usr/local/bin/kubectl
# workaround ksonnet issue https://github.com/ksonnet/ksonnet/issues/298
ENV USER=root
COPY --from=builder /root/go/src/github.com/argoproj/argo-cd/dist/* /
ARG BINARY
CMD /${BINARY}


@@ -1,28 +0,0 @@
FROM golang:1.10.3
WORKDIR /tmp
RUN curl -O https://get.docker.com/builds/Linux/x86_64/docker-1.13.1.tgz && \
tar -xzf docker-1.13.1.tgz && \
mv docker/docker /usr/local/bin/docker && \
rm -rf ./docker && \
go get -u github.com/golang/dep/cmd/dep && \
go get -u gopkg.in/alecthomas/gometalinter.v2 && \
gometalinter.v2 --install
# Install kubectl
RUN curl -o /kubectl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && \
chmod +x /kubectl && mv /kubectl /usr/local/bin/kubectl
# Install ksonnet
env KSONNET_VERSION=0.11.0
RUN wget https://github.com/ksonnet/ksonnet/releases/download/v${KSONNET_VERSION}/ks_${KSONNET_VERSION}_linux_amd64.tar.gz && \
tar -C /tmp/ -xf ks_${KSONNET_VERSION}_linux_amd64.tar.gz && \
mv /tmp/ks_${KSONNET_VERSION}_linux_amd64/ks /usr/local/bin/ks && \
rm -rf /tmp/ks_${KSONNET_VERSION}
# Install helm
env HELM_VERSION=2.9.1
RUN wget https://storage.googleapis.com/kubernetes-helm/helm-v${HELM_VERSION}-linux-amd64.tar.gz && \
tar -C /tmp/ -xf helm-v${HELM_VERSION}-linux-amd64.tar.gz && \
mv /tmp/linux-amd64/helm /usr/local/bin/helm

Dockerfile.dev (new file, 5 lines)

@@ -0,0 +1,5 @@
####################################################################################################
# argocd-dev
####################################################################################################
FROM argocd-base
COPY argocd* /usr/local/bin/
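Dockerfile.dev pairs with the Makefile's `image` target when `DEV_IMAGE=true` (see the Makefile diff later in this compare): the binaries are built on the host into `dist/` and then layered onto `argocd-base`. A plausible invocation, where `yourname` stands in for your personal image namespace:
```
$ make image DEV_IMAGE=true IMAGE_NAMESPACE=yourname IMAGE_TAG=latest DOCKER_PUSH=true
```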

Gopkg.lock (generated, 1097 lines changed)
File diff suppressed because it is too large


@@ -1,55 +1,68 @@
# Packages should only be added to the following list when we use them *outside* of our go code.
# (e.g. we want to build the binary to invoke as part of the build process, such as in
# generate-proto.sh). Normal use of golang packages should be added via `dep ensure`, and pinned
# with a [[constraint]] or [[override]] when version is important.
required = [
"github.com/golang/protobuf/protoc-gen-go",
"github.com/gogo/protobuf/protoc-gen-gofast",
"github.com/gogo/protobuf/protoc-gen-gogofast",
"golang.org/x/sync/errgroup",
"k8s.io/code-generator/cmd/go-to-protobuf",
"k8s.io/kube-openapi/cmd/openapi-gen",
"github.com/grpc-ecosystem/grpc-gateway/protoc-gen-grpc-gateway",
"github.com/grpc-ecosystem/grpc-gateway/protoc-gen-swagger",
"github.com/golang/protobuf/protoc-gen-go",
"golang.org/x/sync/errgroup",
]
[[constraint]]
name = "google.golang.org/grpc"
version = "1.9.2"
version = "1.15.0"
[[constraint]]
name = "github.com/gogo/protobuf"
version = "1.1.1"
# override github.com/grpc-ecosystem/go-grpc-middleware's constraint on master
[[override]]
name = "github.com/golang/protobuf"
version = "1.2.0"
[[constraint]]
name = "github.com/grpc-ecosystem/grpc-gateway"
version = "v1.3.1"
# override argo outdated dependency
[[override]]
branch = "release-1.10"
# prometheus does not believe in semversioning yet
[[constraint]]
name = "github.com/prometheus/client_golang"
revision = "7858729281ec582767b20e0d696b6041d995d5e0"
[[constraint]]
branch = "release-1.12"
name = "k8s.io/api"
[[override]]
branch = "release-1.10"
name = "k8s.io/apimachinery"
[[constraint]]
name = "k8s.io/apiextensions-apiserver"
branch = "release-1.10"
[[constraint]]
branch = "release-1.10"
branch = "release-1.12"
name = "k8s.io/code-generator"
[[override]]
branch = "release-7.0"
[[constraint]]
branch = "release-9.0"
name = "k8s.io/client-go"
[[constraint]]
name = "github.com/stretchr/testify"
version = "1.2.1"
[[constraint]]
name = "github.com/ksonnet/ksonnet"
version = "v0.11.0"
version = "1.2.2"
[[constraint]]
name = "github.com/gobuffalo/packr"
version = "v1.11.0"
# override ksonnet's logrus dependency
[[constraint]]
branch = "master"
name = "github.com/argoproj/pkg"
[[constraint]]
branch = "master"
name = "github.com/yudai/gojsondiff"
[[override]]
name = "github.com/sirupsen/logrus"
revision = "ea8897e79973357ba785ac2533559a6297e83c44"
revision = "master"
name = "k8s.io/kube-openapi"

Makefile (148 lines changed)

@@ -9,6 +9,17 @@ GIT_COMMIT=$(shell git rev-parse HEAD)
GIT_TAG=$(shell if [ -z "`git status --porcelain`" ]; then git describe --exact-match --tags HEAD 2>/dev/null; fi)
GIT_TREE_STATE=$(shell if [ -z "`git status --porcelain`" ]; then echo "clean" ; else echo "dirty"; fi)
PACKR_CMD=$(shell if [ "`which packr`" ]; then echo "packr"; else echo "go run vendor/github.com/gobuffalo/packr/packr/main.go"; fi)
TEST_CMD=$(shell [ "`which gotestsum`" != "" ] && echo gotestsum -- || echo go test)
PATH:=$(PATH):$(PWD)/hack
# docker image publishing options
DOCKER_PUSH=false
IMAGE_TAG=latest
# perform static compilation
STATIC_BUILD=true
# build development images
DEV_IMAGE=false
override LDFLAGS += \
-X ${PACKAGE}.version=${VERSION} \
@@ -16,19 +27,14 @@ override LDFLAGS += \
-X ${PACKAGE}.gitCommit=${GIT_COMMIT} \
-X ${PACKAGE}.gitTreeState=${GIT_TREE_STATE}
# docker image publishing options
DOCKER_PUSH=false
IMAGE_TAG=latest
ifeq (${STATIC_BUILD}, true)
override LDFLAGS += -extldflags "-static"
endif
ifneq (${GIT_TAG},)
IMAGE_TAG=${GIT_TAG}
LDFLAGS += -X ${PACKAGE}.gitTag=${GIT_TAG}
endif
ifneq (${IMAGE_NAMESPACE},)
override LDFLAGS += -X ${PACKAGE}/install.imageNamespace=${IMAGE_NAMESPACE}
endif
ifneq (${IMAGE_TAG},)
override LDFLAGS += -X ${PACKAGE}/install.imageTag=${IMAGE_TAG}
endif
ifeq (${DOCKER_PUSH},true)
ifndef IMAGE_NAMESPACE
@@ -41,95 +47,114 @@ IMAGE_PREFIX=${IMAGE_NAMESPACE}/
endif
.PHONY: all
all: cli server-image controller-image repo-server-image argocd-util
all: cli image argocd-util
.PHONY: protogen
protogen:
./hack/generate-proto.sh
.PHONY: openapigen
openapigen:
./hack/update-openapi.sh
.PHONY: clientgen
clientgen:
./hack/update-codegen.sh
.PHONY: codegen
codegen: protogen clientgen
codegen: protogen clientgen openapigen
# NOTE: we use packr to do the build instead of go, since we embed .yaml files into the go binary.
# This enables ease of maintenance of the yaml files.
.PHONY: cli
cli: clean-debug
${PACKR_CMD} build -v -i -ldflags '${LDFLAGS} -extldflags "-static"' -o ${DIST_DIR}/${CLI_NAME} ./cmd/argocd
CGO_ENABLED=0 ${PACKR_CMD} build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/${CLI_NAME} ./cmd/argocd
.PHONY: cli-linux
cli-linux: clean-debug
docker build --iidfile /tmp/argocd-linux-id --target builder --build-arg MAKE_TARGET="cli IMAGE_TAG=$(IMAGE_TAG) IMAGE_NAMESPACE=$(IMAGE_NAMESPACE) CLI_NAME=argocd-linux-amd64" -f Dockerfile-argocd .
docker create --name tmp-argocd-linux `cat /tmp/argocd-linux-id`
docker cp tmp-argocd-linux:/root/go/src/github.com/argoproj/argo-cd/dist/argocd-linux-amd64 dist/
.PHONY: release-cli
release-cli: clean-debug image
docker create --name tmp-argocd-linux $(IMAGE_PREFIX)argocd:$(IMAGE_TAG)
docker cp tmp-argocd-linux:/usr/local/bin/argocd ${DIST_DIR}/argocd-linux-amd64
docker cp tmp-argocd-linux:/usr/local/bin/argocd-darwin-amd64 ${DIST_DIR}/argocd-darwin-amd64
docker rm tmp-argocd-linux
.PHONY: cli-darwin
cli-darwin: clean-debug
docker build --iidfile /tmp/argocd-darwin-id --target builder --build-arg MAKE_TARGET="cli GOOS=darwin IMAGE_TAG=$(IMAGE_TAG) IMAGE_NAMESPACE=$(IMAGE_NAMESPACE) CLI_NAME=argocd-darwin-amd64" -f Dockerfile-argocd .
docker create --name tmp-argocd-darwin `cat /tmp/argocd-darwin-id`
docker cp tmp-argocd-darwin:/root/go/src/github.com/argoproj/argo-cd/dist/argocd-darwin-amd64 dist/
docker rm tmp-argocd-darwin
.PHONY: argocd-util
argocd-util: clean-debug
CGO_ENABLED=0 go build -v -i -ldflags '${LDFLAGS} -extldflags "-static"' -o ${DIST_DIR}/argocd-util ./cmd/argocd-util
# Build argocd-util as a statically linked binary, so it could run within the alpine-based dex container (argoproj/argo-cd#844)
CGO_ENABLED=0 go build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-util ./cmd/argocd-util
.PHONY: install-manifest
install-manifest:
if [ "${IMAGE_NAMESPACE}" = "" ] ; then echo "IMAGE_NAMESPACE must be set to build install manifest" ; exit 1 ; fi
.PHONY: manifests
manifests:
./hack/update-manifests.sh
# NOTE: we use packr to do the build instead of go, since we embed swagger files and policy.csv
# files into the go binary
.PHONY: server
server: clean-debug
CGO_ENABLED=0 ${PACKR_CMD} build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-server ./cmd/argocd-server
.PHONY: server-image
server-image:
docker build --build-arg BINARY=argocd-server -t $(IMAGE_PREFIX)argocd-server:$(IMAGE_TAG) -f Dockerfile-argocd .
@if [ "$(DOCKER_PUSH)" = "true" ] ; then docker push $(IMAGE_PREFIX)argocd-server:$(IMAGE_TAG) ; fi
.PHONY: repo-server
repo-server:
CGO_ENABLED=0 go build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-repo-server ./cmd/argocd-repo-server
.PHONY: repo-server-image
repo-server-image:
docker build --build-arg BINARY=argocd-repo-server -t $(IMAGE_PREFIX)argocd-repo-server:$(IMAGE_TAG) -f Dockerfile-argocd .
@if [ "$(DOCKER_PUSH)" = "true" ] ; then docker push $(IMAGE_PREFIX)argocd-repo-server:$(IMAGE_TAG) ; fi
.PHONY: controller
controller:
CGO_ENABLED=0 go build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-application-controller ./cmd/argocd-application-controller
CGO_ENABLED=0 ${PACKR_CMD} build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-application-controller ./cmd/argocd-application-controller
.PHONY: controller-image
controller-image:
docker build --build-arg BINARY=argocd-application-controller -t $(IMAGE_PREFIX)argocd-application-controller:$(IMAGE_TAG) -f Dockerfile-argocd .
@if [ "$(DOCKER_PUSH)" = "true" ] ; then docker push $(IMAGE_PREFIX)argocd-application-controller:$(IMAGE_TAG) ; fi
.PHONY: packr
packr:
go build -o ${DIST_DIR}/packr ./vendor/github.com/gobuffalo/packr/packr/
.PHONY: cli-image
cli-image:
docker build --build-arg BINARY=argocd -t $(IMAGE_PREFIX)argocd-cli:$(IMAGE_TAG) -f Dockerfile-argocd .
@if [ "$(DOCKER_PUSH)" = "true" ] ; then docker push $(IMAGE_PREFIX)argocd-cli:$(IMAGE_TAG) ; fi
.PHONY: image
ifeq ($(DEV_IMAGE), true)
# The "dev" image builds the binaries from the users desktop environment (instead of in Docker)
# which speeds up builds. Dockerfile.dev needs to be copied into dist to perform the build, since
# the dist directory is under .dockerignore.
image: packr
docker build -t argocd-base --target argocd-base .
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 dist/packr build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-server ./cmd/argocd-server
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 dist/packr build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-application-controller ./cmd/argocd-application-controller
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 dist/packr build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-repo-server ./cmd/argocd-repo-server
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 dist/packr build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-util ./cmd/argocd-util
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 dist/packr build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd ./cmd/argocd
CGO_ENABLED=0 GOOS=darwin GOARCH=amd64 dist/packr build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argocd-darwin-amd64 ./cmd/argocd
cp Dockerfile.dev dist
docker build -t $(IMAGE_PREFIX)argocd:$(IMAGE_TAG) -f dist/Dockerfile.dev dist
else
image:
docker build -t $(IMAGE_PREFIX)argocd:$(IMAGE_TAG) .
endif
@if [ "$(DOCKER_PUSH)" = "true" ] ; then docker push $(IMAGE_PREFIX)argocd:$(IMAGE_TAG) ; fi
.PHONY: builder-image
builder-image:
docker build -t $(IMAGE_PREFIX)argo-cd-ci-builder:$(IMAGE_TAG) -f Dockerfile-ci-builder .
docker build -t $(IMAGE_PREFIX)argo-cd-ci-builder:$(IMAGE_TAG) --target builder .
docker push $(IMAGE_PREFIX)argo-cd-ci-builder:$(IMAGE_TAG)
.PHONY: dep-ensure
dep-ensure:
dep ensure -no-vendor
.PHONY: lint
lint:
gometalinter.v2 --config gometalinter.json ./...
golangci-lint run --fix
.PHONY: build
build:
go build `go list ./... | grep -v resource_customizations`
.PHONY: test
test:
go test -v `go list ./... | grep -v "github.com/argoproj/argo-cd/test/e2e"`
$(TEST_CMD) -covermode=count -coverprofile=coverage.out `go list ./... | grep -v "github.com/argoproj/argo-cd/test/e2e"`
.PHONY: test-e2e
test-e2e:
go test -v -failfast -timeout 20m ./test/e2e
test-e2e: cli
$(TEST_CMD) -v -failfast -timeout 20m ./test/e2e
.PHONY: start-e2e
start-e2e: cli
killall goreman || true
kubectl create ns argocd-e2e || true
kubens argocd-e2e
kustomize build test/manifests/base | kubectl apply -f -
make start
# Cleans VSCode debug.test files from sub-dirs to prevent them from being included in packr boxes
.PHONY: clean-debug
@@ -140,13 +165,20 @@ clean-debug:
clean: clean-debug
-rm -rf ${CURRENT_DIR}/dist
.PHONY: precheckin
precheckin: test lint
.PHONY: start
start:
killall goreman || true
kubens argocd
goreman start
.PHONY: pre-commit
pre-commit: dep-ensure codegen build lint test
.PHONY: release-precheck
release-precheck: install-manifest
release-precheck: manifests
@if [ "$(GIT_TREE_STATE)" != "clean" ]; then echo 'git tree state is $(GIT_TREE_STATE)' ; exit 1; fi
@if [ -z "$(GIT_TAG)" ]; then echo 'commit must be tagged to perform release' ; exit 1; fi
@if [ "$(GIT_TAG)" != "v`cat VERSION`" ]; then echo 'VERSION does not match git tag'; exit 1; fi
.PHONY: release
release: release-precheck precheckin cli-darwin cli-linux server-image controller-image repo-server-image cli-image
release: release-precheck pre-commit image release-cli


@@ -1,4 +1,5 @@
controller: go run ./cmd/argocd-application-controller/main.go
api-server: go run ./cmd/argocd-server/main.go --insecure --disable-auth
repo-server: go run ./cmd/argocd-repo-server/main.go --loglevel debug
dex: sh -c "go run ./cmd/argocd-util/main.go gendexcfg -o `pwd`/dist/dex.yaml && docker run --rm -p 5556:5556 -p 5557:5557 -v `pwd`/dist/dex.yaml:/dex.yaml quay.io/coreos/dex:v2.10.0 serve /dex.yaml"
controller: sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true go run ./cmd/argocd-application-controller/main.go --loglevel debug --redis localhost:6379 --repo-server localhost:8081"
api-server: sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true go run ./cmd/argocd-server/main.go --loglevel debug --redis localhost:6379 --disable-auth --insecure --dex-server http://localhost:5556 --repo-server localhost:8081 --staticassets ../argo-cd-ui/dist/app"
repo-server: sh -c "FORCE_LOG_COLORS=1 go run ./cmd/argocd-repo-server/main.go --loglevel debug --redis localhost:6379"
dex: sh -c "go run ./cmd/argocd-util/main.go gendexcfg -o `pwd`/dist/dex.yaml && docker run --rm -p 5556:5556 -v `pwd`/dist/dex.yaml:/dex.yaml quay.io/dexidp/dex:v2.14.0 serve /dex.yaml"
redis: docker run --rm --name argocd-redis -i -p 6379:6379 redis:5.0.3-alpine --save "" --appendonly no
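With this Procfile, `goreman start` brings up all of the processes above (controller, api-server, repo-server, dex, redis). Per the goreman README, individual processes can also be controlled at runtime, for example:
```
$ goreman run restart repo-server
```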


@@ -1,3 +1,5 @@
[![slack](https://img.shields.io/badge/slack-argoproj-brightgreen.svg?logo=slack)](https://argoproj.github.io/community/join-slack)
[![codecov](https://codecov.io/gh/argoproj/argo-cd/branch/master/graph/badge.svg)](https://codecov.io/gh/argoproj/argo-cd)
# Argo CD - Declarative Continuous Delivery for Kubernetes
@@ -5,65 +7,26 @@
Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes.
![Argo CD UI](docs/argocd-ui.gif)
![Argo CD UI](docs/assets/argocd-ui.gif)
## Why Argo CD?
Application definitions, configurations, and environments should be declarative and version controlled.
Application deployment and lifecycle management should be automated, auditable, and easy to understand.
## Getting Started
Follow our [getting started guide](docs/getting_started.md). Further [documentation](docs/)
is provided for additional features.
## Who uses Argo CD?
## How it works
Organizations below are **officially** using Argo CD. Please send a PR with your organization name if you are using Argo CD.
Argo CD follows the **GitOps** pattern of using git repositories as the source of truth for defining
the desired application state. Kubernetes manifests can be specified in several ways:
* [ksonnet](https://ksonnet.io) applications
* [helm](https://helm.sh) charts
* Simple directory of YAML/json manifests
1. [Intuit](https://www.intuit.com/)
2. [KompiTech GmbH](https://www.kompitech.com/)
3. [Yieldlab](https://www.yieldlab.de/)
4. [Ticketmaster](https://ticketmaster.com)
5. [CyberAgent](https://www.cyberagent.co.jp/en/)
6. [OpenSaaS Studio](https://opensaas.studio)
7. [Riskified](https://www.riskified.com/)
Argo CD automates the deployment of the desired application states in the specified target environments.
Application deployments can track updates to branches, tags, or pinned to a specific version of
manifests at a git commit. See [tracking strategies](docs/tracking_strategies.md) for additional
details about the different tracking strategies available.
## Documentation
## Architecture
![Argo CD Architecture](docs/argocd_architecture.png)
Argo CD is implemented as a kubernetes controller which continuously monitors running applications
and compares the current, live state against the desired target state (as specified in the git repo).
A deployed application whose live state deviates from the target state is considered `OutOfSync`.
Argo CD reports & visualizes the differences, while providing facilities to automatically or
manually sync the live state back to the desired target state. Any modifications made to the desired
target state in the git repo can be automatically applied and reflected in the specified target
environments.
For additional details, see [architecture overview](docs/architecture.md).
## Features
* Automated deployment of applications to specified target environments
* Continuous monitoring of deployed applications
* Automated or manual syncing of applications to its desired state
* Web and CLI based visualization of applications and differences between live vs. desired state
* Rollback/Roll-anywhere to any application state committed in the git repository
* Health assessment statuses on all components of the application
* SSO Integration (OIDC, OAuth2, LDAP, SAML 2.0, GitLab, Microsoft, LinkedIn)
* Webhook Integration (GitHub, BitBucket, GitLab)
* PreSync, Sync, PostSync hooks to support complex application rollouts (e.g. blue/green & canary upgrades)
* Audit trails for application events and API calls
* Parameter overrides for overriding ksonnet/helm parameters in git
## Development Status
* Argo CD is being used in production to deploy SaaS services at Intuit
## Roadmap
* Auto-sync toggle to directly apply git state changes to live state
* Service account/access key management for CI pipelines
* Support for additional config management tools (Kustomize?)
* Revamped UI, and feature parity with CLI
* Customizable application actions
To learn more about Argo CD [go to the complete documentation](https://argoproj.github.io/argo-cd/).


@@ -1 +1 @@
0.7.1
1.0.2

assets/builtin-policy.csv (new file, 29 lines)

@@ -0,0 +1,29 @@
# Built-in policy which defines two roles: role:readonly and role:admin,
# and additionally assigns the admin user to the role:admin role.
# There are two policy formats:
# 1. Applications (which belong to a project):
# p, <user/group>, <resource>, <action>, <project>/<object>
# 2. All other resources:
# p, <user/group>, <resource>, <action>, <object>
p, role:readonly, applications, get, */*, allow
p, role:readonly, clusters, get, *, allow
p, role:readonly, repositories, get, *, allow
p, role:readonly, projects, get, *, allow
p, role:admin, applications, create, */*, allow
p, role:admin, applications, update, */*, allow
p, role:admin, applications, delete, */*, allow
p, role:admin, applications, sync, */*, allow
p, role:admin, clusters, create, *, allow
p, role:admin, clusters, update, *, allow
p, role:admin, clusters, delete, *, allow
p, role:admin, repositories, create, *, allow
p, role:admin, repositories, update, *, allow
p, role:admin, repositories, delete, *, allow
p, role:admin, projects, create, *, allow
p, role:admin, projects, update, *, allow
p, role:admin, projects, delete, *, allow
g, role:admin, role:readonly
g, admin, role:admin


@@ -2,13 +2,13 @@
r = sub, res, act, obj
[policy_definition]
p = sub, res, act, obj
p = sub, res, act, obj, eft
[role_definition]
g = _, _
[policy_effect]
e = some(where (p.eft == allow))
e = some(where (p.eft == allow)) && !some(where (p.eft == deny))
[matchers]
m = g(r.sub, p.sub) && keyMatch(r.res, p.res) && keyMatch(r.act, p.act) && keyMatch(r.obj, p.obj)
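With the added `eft` column and the deny-aware policy effect, an explicit `deny` entry now overrides a matching `allow`. A hypothetical policy pair (not part of the built-in policy) illustrating the new semantics:
```
p, role:readonly, applications, get, */*, allow
p, role:readonly, applications, get, restricted/*, deny
```
A request by `role:readonly` to get an application in the `restricted` project matches both rules, and the `!some(where (p.eft == deny))` clause makes the deny win.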

File diff suppressed because it is too large


@@ -2,28 +2,29 @@ package main
import (
"context"
"flag"
"fmt"
"os"
"strconv"
"time"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
// load the gcp plugin (required to authenticate against GKE clusters).
_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
// load the oidc plugin (required to authenticate with OpenID Connect).
_ "k8s.io/client-go/plugin/pkg/client/auth/oidc"
"github.com/argoproj/argo-cd"
argocd "github.com/argoproj/argo-cd"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/controller"
"github.com/argoproj/argo-cd/errors"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned"
"github.com/argoproj/argo-cd/reposerver"
"github.com/argoproj/argo-cd/util/cache"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/db"
"github.com/argoproj/argo-cd/util/settings"
"github.com/argoproj/argo-cd/util/stats"
)
@@ -36,29 +37,27 @@ const (
func newCommand() *cobra.Command {
var (
clientConfig clientcmd.ClientConfig
appResyncPeriod int64
repoServerAddress string
statusProcessors int
operationProcessors int
logLevel string
glogLevel int
clientConfig clientcmd.ClientConfig
appResyncPeriod int64
repoServerAddress string
repoServerTimeoutSeconds int
statusProcessors int
operationProcessors int
logLevel string
glogLevel int
cacheSrc func() (*cache.Cache, error)
)
var command = cobra.Command{
Use: cliName,
Short: "application-controller is a controller to operate on applications CRD",
RunE: func(c *cobra.Command, args []string) error {
level, err := log.ParseLevel(logLevel)
errors.CheckError(err)
log.SetLevel(level)
// Set the glog level for the k8s go-client
_ = flag.CommandLine.Parse([]string{})
_ = flag.Lookup("logtostderr").Value.Set("true")
_ = flag.Lookup("v").Value.Set(strconv.Itoa(glogLevel))
cli.SetLogLevel(logLevel)
cli.SetGLogLevel(glogLevel)
config, err := clientConfig.ClientConfig()
errors.CheckError(err)
config.QPS = common.K8sClientConfigQPS
config.Burst = common.K8sClientConfigBurst
kubeClient := kubernetes.NewForConfigOrDie(config)
appClient := appclientset.NewForConfigOrDie(config)
@@ -66,37 +65,32 @@ func newCommand() *cobra.Command {
namespace, _, err := clientConfig.Namespace()
errors.CheckError(err)
// TODO (amatyushentsev): Use config map to store controller configuration
controllerConfig := controller.ApplicationControllerConfig{
Namespace: namespace,
InstanceID: "",
}
db := db.NewDB(namespace, kubeClient)
resyncDuration := time.Duration(appResyncPeriod) * time.Second
repoClientset := reposerver.NewRepositoryServerClientset(repoServerAddress)
appStateManager := controller.NewAppStateManager(db, appClient, repoClientset, namespace)
repoClientset := reposerver.NewRepoServerClientset(repoServerAddress, repoServerTimeoutSeconds)
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
appController := controller.NewApplicationController(
cache, err := cacheSrc()
errors.CheckError(err)
settingsMgr := settings.NewSettingsManager(ctx, kubeClient, namespace)
appController, err := controller.NewApplicationController(
namespace,
settingsMgr,
kubeClient,
appClient,
repoClientset,
db,
appStateManager,
resyncDuration,
&controllerConfig)
secretController := controller.NewSecretController(kubeClient, repoClientset, resyncDuration, namespace)
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
cache,
resyncDuration)
errors.CheckError(err)
log.Infof("Application Controller (version: %s) starting (namespace: %s)", argocd.GetVersion(), namespace)
stats.RegisterStackDumper()
stats.StartStatsTicker(10 * time.Minute)
stats.RegisterHeapDumper("memprofile")
go secretController.Run(ctx)
go appController.Run(ctx, statusProcessors, operationProcessors)
// Wait forever
select {}
},
@@ -104,11 +98,13 @@ func newCommand() *cobra.Command {
clientConfig = cli.AddKubectlFlagsToCmd(&command)
command.Flags().Int64Var(&appResyncPeriod, "app-resync", defaultAppResyncPeriod, "Time period in seconds for application resync.")
command.Flags().StringVar(&repoServerAddress, "repo-server", "localhost:8081", "Repo server address.")
command.Flags().StringVar(&repoServerAddress, "repo-server", common.DefaultRepoServerAddr, "Repo server address.")
command.Flags().IntVar(&repoServerTimeoutSeconds, "repo-server-timeout-seconds", 60, "Repo server RPC call timeout seconds.")
command.Flags().IntVar(&statusProcessors, "status-processors", 1, "Number of application status processors")
command.Flags().IntVar(&operationProcessors, "operation-processors", 1, "Number of application operation processors")
command.Flags().StringVar(&logLevel, "loglevel", "info", "Set the logging level. One of: debug|info|warn|error")
command.Flags().IntVar(&glogLevel, "gloglevel", 0, "Set the glog logging level")
cacheSrc = cache.AddCacheFlagsToCmd(&command)
return &command
}


@@ -3,50 +3,59 @@ package main
import (
"fmt"
"net"
"net/http"
"os"
"time"
"github.com/prometheus/client_golang/prometheus/promhttp"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/argoproj/argo-cd"
argocd "github.com/argoproj/argo-cd"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/errors"
"github.com/argoproj/argo-cd/reposerver"
"github.com/argoproj/argo-cd/reposerver/repository"
"github.com/argoproj/argo-cd/util/cache"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/git"
"github.com/argoproj/argo-cd/util/ksonnet"
"github.com/argoproj/argo-cd/util/stats"
"github.com/argoproj/argo-cd/util/tls"
)
const (
// CLIName is the name of the CLI
cliName = "argocd-repo-server"
port = 8081
)
func newCommand() *cobra.Command {
var (
logLevel string
logLevel string
parallelismLimit int64
cacheSrc func() (*cache.Cache, error)
tlsConfigCustomizerSrc func() (tls.ConfigCustomizer, error)
)
var command = cobra.Command{
Use: cliName,
Short: "Run argocd-repo-server",
RunE: func(c *cobra.Command, args []string) error {
level, err := log.ParseLevel(logLevel)
errors.CheckError(err)
log.SetLevel(level)
cli.SetLogLevel(logLevel)
server := reposerver.NewServer(git.NewFactory(), newCache())
tlsConfigCustomizer, err := tlsConfigCustomizerSrc()
errors.CheckError(err)
cache, err := cacheSrc()
errors.CheckError(err)
server, err := reposerver.NewServer(git.NewFactory(), cache, tlsConfigCustomizer, parallelismLimit)
errors.CheckError(err)
grpc := server.CreateGRPC()
listener, err := net.Listen("tcp", fmt.Sprintf(":%d", port))
listener, err := net.Listen("tcp", fmt.Sprintf(":%d", common.PortRepoServer))
errors.CheckError(err)
ksVers, err := ksonnet.KsonnetVersion()
errors.CheckError(err)
http.Handle("/metrics", promhttp.Handler())
go func() { errors.CheckError(http.ListenAndServe(fmt.Sprintf(":%d", common.PortRepoServerMetrics), nil)) }()
log.Infof("argocd-repo-server %s serving on %s", argocd.GetVersion(), listener.Addr())
log.Infof("ksonnet version: %s", ksVers)
stats.RegisterStackDumper()
stats.StartStatsTicker(10 * time.Minute)
stats.RegisterHeapDumper("memprofile")
@@ -57,19 +66,12 @@ func newCommand() *cobra.Command {
}
command.Flags().StringVar(&logLevel, "loglevel", "info", "Set the logging level. One of: debug|info|warn|error")
command.Flags().Int64Var(&parallelismLimit, "parallelismlimit", 0, "Limit on number of concurrent manifest generation requests. Any value less than 1 means no limit.")
tlsConfigCustomizerSrc = tls.AddTLSFlagsToCmd(&command)
cacheSrc = cache.AddCacheFlagsToCmd(&command)
return &command
}
func newCache() cache.Cache {
return cache.NewInMemoryCache(repository.DefaultRepoCacheExpiration)
// client := redis.NewClient(&redis.Options{
// Addr: "localhost:6379",
// Password: "",
// DB: 0,
// })
// return cache.NewRedisCache(client, repository.DefaultRepoCacheExpiration)
}
func main() {
if err := newCommand().Execute(); err != nil {
fmt.Println(err)


@@ -2,66 +2,75 @@ package commands
import (
"context"
"flag"
"strconv"
"time"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/errors"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned"
"github.com/argoproj/argo-cd/reposerver"
"github.com/argoproj/argo-cd/server"
"github.com/argoproj/argo-cd/util/cache"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/stats"
"github.com/argoproj/argo-cd/util/tls"
)
// NewCommand returns a new instance of an argocd command
func NewCommand() *cobra.Command {
var (
insecure bool
logLevel string
glogLevel int
clientConfig clientcmd.ClientConfig
staticAssetsDir string
repoServerAddress string
disableAuth bool
insecure bool
logLevel string
glogLevel int
clientConfig clientcmd.ClientConfig
staticAssetsDir string
baseHRef string
repoServerAddress string
dexServerAddress string
disableAuth bool
tlsConfigCustomizerSrc func() (tls.ConfigCustomizer, error)
cacheSrc func() (*cache.Cache, error)
)
var command = &cobra.Command{
Use: cliName,
Short: "Run the argocd API server",
Long: "Run the argocd API server",
Run: func(c *cobra.Command, args []string) {
level, err := log.ParseLevel(logLevel)
errors.CheckError(err)
log.SetLevel(level)
// Set the glog level for the k8s go-client
_ = flag.CommandLine.Parse([]string{})
_ = flag.Lookup("logtostderr").Value.Set("true")
_ = flag.Lookup("v").Value.Set(strconv.Itoa(glogLevel))
cli.SetLogLevel(logLevel)
cli.SetGLogLevel(glogLevel)
config, err := clientConfig.ClientConfig()
errors.CheckError(err)
config.QPS = common.K8sClientConfigQPS
config.Burst = common.K8sClientConfigBurst
namespace, _, err := clientConfig.Namespace()
errors.CheckError(err)
tlsConfigCustomizer, err := tlsConfigCustomizerSrc()
errors.CheckError(err)
cache, err := cacheSrc()
errors.CheckError(err)
kubeclientset := kubernetes.NewForConfigOrDie(config)
appclientset := appclientset.NewForConfigOrDie(config)
repoclientset := reposerver.NewRepositoryServerClientset(repoServerAddress)
repoclientset := reposerver.NewRepoServerClientset(repoServerAddress, 0)
argoCDOpts := server.ArgoCDServerOpts{
Insecure: insecure,
Namespace: namespace,
StaticAssetsDir: staticAssetsDir,
KubeClientset: kubeclientset,
AppClientset: appclientset,
RepoClientset: repoclientset,
DisableAuth: disableAuth,
Insecure: insecure,
Namespace: namespace,
StaticAssetsDir: staticAssetsDir,
BaseHRef: baseHRef,
KubeClientset: kubeclientset,
AppClientset: appclientset,
RepoClientset: repoclientset,
DexServerAddr: dexServerAddress,
DisableAuth: disableAuth,
TLSConfigCustomizer: tlsConfigCustomizer,
Cache: cache,
}
stats.RegisterStackDumper()
@@ -69,10 +78,10 @@ func NewCommand() *cobra.Command {
stats.RegisterHeapDumper("memprofile")
for {
argocd := server.NewServer(argoCDOpts)
ctx := context.Background()
ctx, cancel := context.WithCancel(ctx)
argocd.Run(ctx, 8080)
argocd := server.NewServer(ctx, argoCDOpts)
argocd.Run(ctx, common.PortAPIServer)
cancel()
}
},
@@ -81,10 +90,14 @@ func NewCommand() *cobra.Command {
clientConfig = cli.AddKubectlFlagsToCmd(command)
command.Flags().BoolVar(&insecure, "insecure", false, "Run server without TLS")
command.Flags().StringVar(&staticAssetsDir, "staticassets", "", "Static assets directory path")
command.Flags().StringVar(&baseHRef, "basehref", "/", "Value for base href in index.html. Used if Argo CD is running behind reverse proxy under subpath different from /")
command.Flags().StringVar(&logLevel, "loglevel", "info", "Set the logging level. One of: debug|info|warn|error")
command.Flags().IntVar(&glogLevel, "gloglevel", 0, "Set the glog logging level")
command.Flags().StringVar(&repoServerAddress, "repo-server", "localhost:8081", "Repo server address.")
command.Flags().StringVar(&repoServerAddress, "repo-server", common.DefaultRepoServerAddr, "Repo server address")
command.Flags().StringVar(&dexServerAddress, "dex-server", common.DefaultDexServerAddr, "Dex server address")
command.Flags().BoolVar(&disableAuth, "disable-auth", false, "Disable client authentication")
command.AddCommand(cli.NewVersionCmd(cliName))
tlsConfigCustomizerSrc = tls.AddTLSFlagsToCmd(command)
cacheSrc = cache.AddCacheFlagsToCmd(command)
return command
}


@@ -1,30 +1,38 @@
package main
import (
"bufio"
"context"
"fmt"
"io"
"io/ioutil"
"os"
"os/exec"
"strings"
"syscall"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/errors"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/db"
"github.com/argoproj/argo-cd/util/dex"
"github.com/argoproj/argo-cd/util/settings"
"github.com/ghodss/yaml"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
apiv1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/client-go/dynamic"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/clientcmd"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/errors"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/db"
"github.com/argoproj/argo-cd/util/dex"
"github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/settings"
// load the gcp plugin (required to authenticate against GKE clusters).
_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
// load the oidc plugin (required to authenticate with OpenID Connect).
@@ -34,9 +42,15 @@ import (
const (
// CLIName is the name of the CLI
cliName = "argocd-util"
// YamlSeparator separates sections of a YAML file
yamlSeparator = "\n---\n"
yamlSeparator = "---\n"
)
var (
configMapResource = schema.GroupVersionResource{Group: "", Version: "v1", Resource: "configmaps"}
secretResource = schema.GroupVersionResource{Group: "", Version: "v1", Resource: "secrets"}
applicationsResource = schema.GroupVersionResource{Group: "argoproj.io", Version: "v1alpha1", Resource: "applications"}
appprojectsResource = schema.GroupVersionResource{Group: "argoproj.io", Version: "v1alpha1", Resource: "appprojects"}
)
// NewCommand returns a new instance of an argocd command
@@ -47,7 +61,7 @@ func NewCommand() *cobra.Command {
var command = &cobra.Command{
Use: cliName,
Short: "argocd-util has internal tools used by ArgoCD",
Short: "argocd-util has internal tools used by Argo CD",
Run: func(c *cobra.Command, args []string) {
c.HelpFunc()(c, args)
},
@@ -58,7 +72,7 @@ func NewCommand() *cobra.Command {
command.AddCommand(NewGenDexConfigCommand())
command.AddCommand(NewImportCommand())
command.AddCommand(NewExportCommand())
command.AddCommand(NewSettingsCommand())
command.AddCommand(NewClusterConfig())
command.Flags().StringVar(&logLevel, "loglevel", "info", "Set the logging level. One of: debug|info|warn|error")
return command
@@ -70,7 +84,7 @@ func NewRunDexCommand() *cobra.Command {
)
var command = cobra.Command{
Use: "rundex",
Short: "Runs dex generating a config using settings from the ArgoCD configmap and secret",
Short: "Runs dex generating a config using settings from the Argo CD configmap and secret",
RunE: func(c *cobra.Command, args []string) error {
_, err := exec.LookPath("dex")
errors.CheckError(err)
@@ -79,17 +93,15 @@ func NewRunDexCommand() *cobra.Command {
namespace, _, err := clientConfig.Namespace()
errors.CheckError(err)
kubeClientset := kubernetes.NewForConfigOrDie(config)
settingsMgr := settings.NewSettingsManager(kubeClientset, namespace)
settings, err := settingsMgr.GetSettings()
settingsMgr := settings.NewSettingsManager(context.Background(), kubeClientset, namespace)
prevSettings, err := settingsMgr.GetSettings()
errors.CheckError(err)
ctx := context.Background()
settingsMgr.StartNotifier(ctx, settings)
updateCh := make(chan struct{}, 1)
updateCh := make(chan *settings.ArgoCDSettings, 1)
settingsMgr.Subscribe(updateCh)
for {
var cmd *exec.Cmd
dexCfgBytes, err := dex.GenerateDexConfigYAML(settings)
dexCfgBytes, err := dex.GenerateDexConfigYAML(prevSettings)
errors.CheckError(err)
if len(dexCfgBytes) == 0 {
log.Infof("dex is not configured")
@@ -106,10 +118,11 @@ func NewRunDexCommand() *cobra.Command {
// loop until the dex config changes
for {
<-updateCh
newDexCfgBytes, err := dex.GenerateDexConfigYAML(settings)
newSettings := <-updateCh
newDexCfgBytes, err := dex.GenerateDexConfigYAML(newSettings)
errors.CheckError(err)
if string(newDexCfgBytes) != string(dexCfgBytes) {
prevSettings = newSettings
log.Infof("dex config modified. restarting dex")
if cmd != nil && cmd.Process != nil {
err = cmd.Process.Signal(syscall.SIGTERM)
@@ -137,14 +150,14 @@ func NewGenDexConfigCommand() *cobra.Command {
)
var command = cobra.Command{
Use: "gendexcfg",
Short: "Generates a dex config from ArgoCD settings",
Short: "Generates a dex config from Argo CD settings",
RunE: func(c *cobra.Command, args []string) error {
config, err := clientConfig.ClientConfig()
errors.CheckError(err)
namespace, _, err := clientConfig.Namespace()
errors.CheckError(err)
kubeClientset := kubernetes.NewForConfigOrDie(config)
settingsMgr := settings.NewSettingsManager(kubeClientset, namespace)
settingsMgr := settings.NewSettingsManager(context.Background(), kubeClientset, namespace)
settings, err := settingsMgr.GetSettings()
errors.CheckError(err)
dexCfgBytes, err := dex.GenerateDexConfigYAML(settings)
@@ -154,7 +167,29 @@ func NewGenDexConfigCommand() *cobra.Command {
return nil
}
if out == "" {
fmt.Printf(string(dexCfgBytes))
dexCfg := make(map[string]interface{})
err := yaml.Unmarshal(dexCfgBytes, &dexCfg)
errors.CheckError(err)
if staticClientsInterface, ok := dexCfg["staticClients"]; ok {
if staticClients, ok := staticClientsInterface.([]interface{}); ok {
for i := range staticClients {
staticClient := staticClients[i]
if mappings, ok := staticClient.(map[string]interface{}); ok {
for key := range mappings {
if key == "secret" {
mappings[key] = "******"
}
}
staticClients[i] = mappings
}
}
dexCfg["staticClients"] = staticClients
}
}
errors.CheckError(err)
maskedDexCfgBytes, err := yaml.Marshal(dexCfg)
errors.CheckError(err)
fmt.Print(string(maskedDexCfgBytes))
} else {
err = ioutil.WriteFile(out, dexCfgBytes, 0644)
errors.CheckError(err)
@@ -172,94 +207,153 @@ func NewGenDexConfigCommand() *cobra.Command {
func NewImportCommand() *cobra.Command {
var (
clientConfig clientcmd.ClientConfig
prune bool
dryRun bool
)
var command = cobra.Command{
Use: "import SOURCE",
Short: "Import Argo CD data from stdin (specify `-') or a file",
RunE: func(c *cobra.Command, args []string) error {
Run: func(c *cobra.Command, args []string) {
if len(args) != 1 {
c.HelpFunc()(c, args)
os.Exit(1)
}
var (
input []byte
err error
newSettings *settings.ArgoCDSettings
newRepos []*v1alpha1.Repository
newClusters []*v1alpha1.Cluster
newApps []*v1alpha1.Application
newRBACCM *apiv1.ConfigMap
)
if in := args[0]; in == "-" {
input, err = ioutil.ReadAll(os.Stdin)
errors.CheckError(err)
} else {
input, err = ioutil.ReadFile(in)
errors.CheckError(err)
}
inputStrings := strings.Split(string(input), yamlSeparator)
err = yaml.Unmarshal([]byte(inputStrings[0]), &newSettings)
errors.CheckError(err)
err = yaml.Unmarshal([]byte(inputStrings[1]), &newRepos)
errors.CheckError(err)
err = yaml.Unmarshal([]byte(inputStrings[2]), &newClusters)
errors.CheckError(err)
err = yaml.Unmarshal([]byte(inputStrings[3]), &newApps)
errors.CheckError(err)
err = yaml.Unmarshal([]byte(inputStrings[4]), &newRBACCM)
errors.CheckError(err)
config, err := clientConfig.ClientConfig()
config.QPS = 100
config.Burst = 50
errors.CheckError(err)
namespace, _, err := clientConfig.Namespace()
errors.CheckError(err)
kubeClientset := kubernetes.NewForConfigOrDie(config)
acdClients := newArgoCDClientsets(config, namespace)
settingsMgr := settings.NewSettingsManager(kubeClientset, namespace)
err = settingsMgr.SaveSettings(newSettings)
var input []byte
if in := args[0]; in == "-" {
input, err = ioutil.ReadAll(os.Stdin)
} else {
input, err = ioutil.ReadFile(in)
}
errors.CheckError(err)
db := db.NewDB(namespace, kubeClientset)
var dryRunMsg string
if dryRun {
dryRunMsg = " (dry run)"
}
_, err = kubeClientset.CoreV1().ConfigMaps(namespace).Create(newRBACCM)
// pruneObjects tracks live objects and their current resource versions. Any remaining
// items in this map indicate resources that should be pruned, since they no longer appear
// in the backup
pruneObjects := make(map[kube.ResourceKey]string)
configMaps, err := acdClients.configMaps.List(metav1.ListOptions{})
errors.CheckError(err)
for _, cm := range configMaps.Items {
cmName := cm.GetName()
if cmName == common.ArgoCDConfigMapName || cmName == common.ArgoCDRBACConfigMapName {
pruneObjects[kube.ResourceKey{Group: "", Kind: "ConfigMap", Name: cm.GetName()}] = cm.GetResourceVersion()
}
}
secrets, err := acdClients.secrets.List(metav1.ListOptions{})
errors.CheckError(err)
for _, secret := range secrets.Items {
if isArgoCDSecret(nil, secret) {
pruneObjects[kube.ResourceKey{Group: "", Kind: "Secret", Name: secret.GetName()}] = secret.GetResourceVersion()
}
}
applications, err := acdClients.applications.List(metav1.ListOptions{})
errors.CheckError(err)
for _, app := range applications.Items {
pruneObjects[kube.ResourceKey{Group: "argoproj.io", Kind: "Application", Name: app.GetName()}] = app.GetResourceVersion()
}
projects, err := acdClients.projects.List(metav1.ListOptions{})
errors.CheckError(err)
for _, proj := range projects.Items {
pruneObjects[kube.ResourceKey{Group: "argoproj.io", Kind: "AppProject", Name: proj.GetName()}] = proj.GetResourceVersion()
}
for _, repo := range newRepos {
_, err := db.CreateRepository(context.Background(), repo)
if err != nil {
log.Warn(err)
// Create or replace existing object
objs, err := kube.SplitYAML(string(input))
errors.CheckError(err)
for _, obj := range objs {
gvk := obj.GroupVersionKind()
key := kube.ResourceKey{Group: gvk.Group, Kind: gvk.Kind, Name: obj.GetName()}
resourceVersion, exists := pruneObjects[key]
delete(pruneObjects, key)
var dynClient dynamic.ResourceInterface
switch obj.GetKind() {
case "Secret":
dynClient = acdClients.secrets
case "ConfigMap":
dynClient = acdClients.configMaps
case "AppProject":
dynClient = acdClients.projects
case "Application":
dynClient = acdClients.applications
}
if !exists {
if !dryRun {
_, err = dynClient.Create(obj, metav1.CreateOptions{})
errors.CheckError(err)
}
fmt.Printf("%s/%s %s created%s\n", gvk.Group, gvk.Kind, obj.GetName(), dryRunMsg)
} else {
if !dryRun {
obj.SetResourceVersion(resourceVersion)
_, err = dynClient.Update(obj, metav1.UpdateOptions{})
errors.CheckError(err)
}
fmt.Printf("%s/%s %s replaced%s\n", gvk.Group, gvk.Kind, obj.GetName(), dryRunMsg)
}
}
for _, cluster := range newClusters {
_, err := db.CreateCluster(context.Background(), cluster)
if err != nil {
log.Warn(err)
// Delete objects not in backup
for key := range pruneObjects {
if prune {
var dynClient dynamic.ResourceInterface
switch key.Kind {
case "Secret":
dynClient = acdClients.secrets
case "AppProject":
dynClient = acdClients.projects
case "Application":
dynClient = acdClients.applications
default:
log.Fatalf("Unexpected kind '%s' in prune list", key.Kind)
}
if !dryRun {
err = dynClient.Delete(key.Name, &metav1.DeleteOptions{})
errors.CheckError(err)
}
fmt.Printf("%s/%s %s pruned%s\n", key.Group, key.Kind, key.Name, dryRunMsg)
} else {
fmt.Printf("%s/%s %s needs pruning\n", key.Group, key.Kind, key.Name)
}
}
appClientset := appclientset.NewForConfigOrDie(config)
for _, app := range newApps {
out, err := appClientset.ArgoprojV1alpha1().Applications(namespace).Create(app)
errors.CheckError(err)
log.Println(out)
}
return nil
},
}
clientConfig = cli.AddKubectlFlagsToCmd(&command)
command.Flags().BoolVar(&dryRun, "dry-run", false, "Print what will be performed")
command.Flags().BoolVar(&prune, "prune", false, "Prune secrets, applications and projects which do not appear in the backup")
return &command
}
type argoCDClientsets struct {
configMaps dynamic.ResourceInterface
secrets dynamic.ResourceInterface
applications dynamic.ResourceInterface
projects dynamic.ResourceInterface
}
func newArgoCDClientsets(config *rest.Config, namespace string) *argoCDClientsets {
dynamicIf, err := dynamic.NewForConfig(config)
errors.CheckError(err)
return &argoCDClientsets{
configMaps: dynamicIf.Resource(configMapResource).Namespace(namespace),
secrets: dynamicIf.Resource(secretResource).Namespace(namespace),
applications: dynamicIf.Resource(applicationsResource).Namespace(namespace),
projects: dynamicIf.Resource(appprojectsResource).Namespace(namespace),
}
}
// NewExportCommand defines a new command for exporting Kubernetes and Argo CD resources.
func NewExportCommand() *cobra.Command {
var (
@@ -269,69 +363,48 @@ func NewExportCommand() *cobra.Command {
var command = cobra.Command{
Use: "export",
Short: "Export all Argo CD data to stdout (default) or a file",
RunE: func(c *cobra.Command, args []string) error {
Run: func(c *cobra.Command, args []string) {
config, err := clientConfig.ClientConfig()
errors.CheckError(err)
namespace, _, err := clientConfig.Namespace()
errors.CheckError(err)
kubeClientset := kubernetes.NewForConfigOrDie(config)
settingsMgr := settings.NewSettingsManager(kubeClientset, namespace)
settings, err := settingsMgr.GetSettings()
errors.CheckError(err)
// certificate data is included in secrets that are exported alongside
settings.Certificate = nil
db := db.NewDB(namespace, kubeClientset)
clusters, err := db.ListClusters(context.Background())
errors.CheckError(err)
repos, err := db.ListRepositories(context.Background())
errors.CheckError(err)
appClientset := appclientset.NewForConfigOrDie(config)
apps, err := appClientset.ArgoprojV1alpha1().Applications(namespace).List(metav1.ListOptions{})
errors.CheckError(err)
rbacCM, err := kubeClientset.CoreV1().ConfigMaps(namespace).Get(common.ArgoCDRBACConfigMapName, metav1.GetOptions{})
errors.CheckError(err)
// remove extraneous cruft from output
rbacCM.ObjectMeta = metav1.ObjectMeta{
Name: rbacCM.ObjectMeta.Name,
}
// remove extraneous cruft from output
for idx, app := range apps.Items {
apps.Items[idx].ObjectMeta = metav1.ObjectMeta{
Name: app.ObjectMeta.Name,
Finalizers: app.ObjectMeta.Finalizers,
}
apps.Items[idx].Status = v1alpha1.ApplicationStatus{
History: app.Status.History,
}
apps.Items[idx].Operation = nil
}
// take a list of exportable objects, marshal them to YAML,
// and return a string joined by a delimiter
output := func(delimiter string, oo ...interface{}) string {
out := make([]string, 0)
for _, o := range oo {
data, err := yaml.Marshal(o)
errors.CheckError(err)
out = append(out, string(data))
}
return strings.Join(out, delimiter)
}(yamlSeparator, settings, repos.Items, clusters.Items, apps.Items, rbacCM)
var writer io.Writer
if out == "-" {
fmt.Println(output)
writer = os.Stdout
} else {
err = ioutil.WriteFile(out, []byte(output), 0644)
f, err := os.Create(out)
errors.CheckError(err)
defer util.Close(f)
writer = bufio.NewWriter(f)
}
acdClients := newArgoCDClientsets(config, namespace)
acdConfigMap, err := acdClients.configMaps.Get(common.ArgoCDConfigMapName, metav1.GetOptions{})
errors.CheckError(err)
export(writer, *acdConfigMap)
acdRBACConfigMap, err := acdClients.configMaps.Get(common.ArgoCDRBACConfigMapName, metav1.GetOptions{})
errors.CheckError(err)
export(writer, *acdRBACConfigMap)
referencedSecrets := getReferencedSecrets(*acdConfigMap)
secrets, err := acdClients.secrets.List(metav1.ListOptions{})
errors.CheckError(err)
for _, secret := range secrets.Items {
if isArgoCDSecret(referencedSecrets, secret) {
export(writer, secret)
}
}
projects, err := acdClients.projects.List(metav1.ListOptions{})
errors.CheckError(err)
for _, proj := range projects.Items {
export(writer, proj)
}
applications, err := acdClients.applications.List(metav1.ListOptions{})
errors.CheckError(err)
for _, app := range applications.Items {
export(writer, app)
}
return nil
},
}
@@ -341,37 +414,130 @@ func NewExportCommand() *cobra.Command {
return &command
}
// NewSettingsCommand returns a new instance of `argocd-util settings` command
func NewSettingsCommand() *cobra.Command {
// getReferencedSecrets examines the argocd-cm config for any referenced repo secrets and returns a
// map of all referenced secrets.
func getReferencedSecrets(un unstructured.Unstructured) map[string]bool {
var cm apiv1.ConfigMap
err := runtime.DefaultUnstructuredConverter.FromUnstructured(un.Object, &cm)
errors.CheckError(err)
referencedSecrets := make(map[string]bool)
if reposRAW, ok := cm.Data["repositories"]; ok {
repoCreds := make([]settings.RepoCredentials, 0)
err := yaml.Unmarshal([]byte(reposRAW), &repoCreds)
errors.CheckError(err)
for _, cred := range repoCreds {
if cred.PasswordSecret != nil {
referencedSecrets[cred.PasswordSecret.Name] = true
}
if cred.SSHPrivateKeySecret != nil {
referencedSecrets[cred.SSHPrivateKeySecret.Name] = true
}
if cred.UsernameSecret != nil {
referencedSecrets[cred.UsernameSecret.Name] = true
}
}
}
if helmReposRAW, ok := cm.Data["helm.repositories"]; ok {
helmRepoCreds := make([]settings.HelmRepoCredentials, 0)
err := yaml.Unmarshal([]byte(helmReposRAW), &helmRepoCreds)
errors.CheckError(err)
for _, cred := range helmRepoCreds {
if cred.CASecret != nil {
referencedSecrets[cred.CASecret.Name] = true
}
if cred.CertSecret != nil {
referencedSecrets[cred.CertSecret.Name] = true
}
if cred.KeySecret != nil {
referencedSecrets[cred.KeySecret.Name] = true
}
if cred.UsernameSecret != nil {
referencedSecrets[cred.UsernameSecret.Name] = true
}
if cred.PasswordSecret != nil {
referencedSecrets[cred.PasswordSecret.Name] = true
}
}
}
return referencedSecrets
}
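For context on what getReferencedSecrets parses: the argocd-cm "repositories" value is a YAML list whose entries may point at secrets by name. A self-contained sketch with made-up names, using a local mirror of just the fields read above (the real settings.RepoCredentials struct has more):

package main

import (
	"fmt"

	"github.com/ghodss/yaml"
)

type secretRef struct {
	Name string `json:"name"`
}

// repoCreds mirrors only the fields getReferencedSecrets inspects.
type repoCreds struct {
	URL                 string     `json:"url"`
	UsernameSecret      *secretRef `json:"usernameSecret"`
	PasswordSecret      *secretRef `json:"passwordSecret"`
	SSHPrivateKeySecret *secretRef `json:"sshPrivateKeySecret"`
}

func main() {
	// Hypothetical argocd-cm "repositories" value.
	raw := `
- url: https://github.com/example/private-repo
  usernameSecret:
    name: repo-creds
  passwordSecret:
    name: repo-creds
`
	var creds []repoCreds
	if err := yaml.Unmarshal([]byte(raw), &creds); err != nil {
		panic(err)
	}
	// Both references name the same secret, so the map records it once
	// and the export includes it exactly once.
	fmt.Println(creds[0].PasswordSecret.Name) // repo-creds
}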
// isArgoCDSecret returns whether or not the given secret is a part of Argo CD configuration
// (e.g. argocd-secret, repo credentials, or cluster credentials)
func isArgoCDSecret(repoSecretRefs map[string]bool, un unstructured.Unstructured) bool {
secretName := un.GetName()
if secretName == common.ArgoCDSecretName {
return true
}
if repoSecretRefs != nil {
if _, ok := repoSecretRefs[secretName]; ok {
return true
}
}
if labels := un.GetLabels(); labels != nil {
if _, ok := labels[common.LabelKeySecretType]; ok {
return true
}
}
if annotations := un.GetAnnotations(); annotations != nil {
if annotations[common.AnnotationKeyManagedBy] == common.AnnotationValueManagedByArgoCD {
return true
}
}
return false
}
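As a quick illustration of the label-based match: a cluster-credential secret is picked up even though nothing in argocd-cm names it. This snippet assumes fmt and the unstructured package are in scope, and that common.LabelKeySecretType is "argocd.argoproj.io/secret-type" (an assumption; only the key's presence matters to the check above):

sec := unstructured.Unstructured{Object: map[string]interface{}{
	"apiVersion": "v1",
	"kind":       "Secret",
	"metadata": map[string]interface{}{
		"name": "cluster-prod-creds",
		// assumed value of common.LabelKeySecretType
		"labels": map[string]interface{}{"argocd.argoproj.io/secret-type": "cluster"},
	},
}}
fmt.Println(isArgoCDSecret(nil, sec)) // true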
// export writes the unstructured object and removes extraneous cruft from output before writing
func export(w io.Writer, un unstructured.Unstructured) {
name := un.GetName()
finalizers := un.GetFinalizers()
apiVersion := un.GetAPIVersion()
kind := un.GetKind()
labels := un.GetLabels()
annotations := un.GetAnnotations()
unstructured.RemoveNestedField(un.Object, "metadata")
un.SetName(name)
un.SetFinalizers(finalizers)
un.SetAPIVersion(apiVersion)
un.SetKind(kind)
un.SetLabels(labels)
un.SetAnnotations(annotations)
data, err := yaml.Marshal(un.Object)
errors.CheckError(err)
_, err = w.Write(data)
errors.CheckError(err)
_, err = w.Write([]byte(yamlSeparator))
errors.CheckError(err)
}
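The import command (its diff is truncated at the top of this compare view) has to undo this framing: split the backup stream on the document separator and decode each chunk back into an unstructured object before routing it to the matching dynamic client. A rough sketch of that inverse, assuming yamlSeparator is the usual "---\n" delimiter and that strings, ghodss/yaml, and unstructured are imported in this file:

// parseBackup is a sketch only; the real import command does the
// equivalent inline.
func parseBackup(data []byte) ([]unstructured.Unstructured, error) {
	var objs []unstructured.Unstructured
	for _, doc := range strings.Split(string(data), yamlSeparator) {
		if strings.TrimSpace(doc) == "" {
			continue
		}
		var obj unstructured.Unstructured
		if err := yaml.Unmarshal([]byte(doc), &obj.Object); err != nil {
			return nil, err
		}
		objs = append(objs, obj)
	}
	return objs, nil
}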
// NewClusterConfig returns a new instance of `argocd-util kubeconfig` command
func NewClusterConfig() *cobra.Command {
var (
clientConfig clientcmd.ClientConfig
updateSuperuser bool
superuserPassword string
updateSignature bool
clientConfig clientcmd.ClientConfig
)
var command = &cobra.Command{
Use: "settings",
Short: "Creates or updates ArgoCD settings",
Long: "Creates or updates ArgoCD settings",
Use: "kubeconfig CLUSTER_URL OUTPUT_PATH",
Short: "Generates kubeconfig for the specified cluster",
Run: func(c *cobra.Command, args []string) {
if len(args) != 2 {
c.HelpFunc()(c, args)
os.Exit(1)
}
serverUrl := args[0]
output := args[1]
conf, err := clientConfig.ClientConfig()
errors.CheckError(err)
namespace, wasSpecified, err := clientConfig.Namespace()
namespace, _, err := clientConfig.Namespace()
errors.CheckError(err)
if !wasSpecified {
namespace = "argocd"
}
kubeclientset, err := kubernetes.NewForConfig(conf)
errors.CheckError(err)
settingsMgr := settings.NewSettingsManager(kubeclientset, namespace)
_ = settings.UpdateSettings(superuserPassword, settingsMgr, updateSignature, updateSuperuser, namespace)
cluster, err := db.NewDB(namespace, settings.NewSettingsManager(context.Background(), kubeclientset, namespace), kubeclientset).GetCluster(context.Background(), serverUrl)
errors.CheckError(err)
err = kube.WriteKubeConfig(cluster.RESTConfig(), namespace, output)
errors.CheckError(err)
},
}
command.Flags().BoolVar(&updateSuperuser, "update-superuser", false, "force updating the superuser password")
command.Flags().StringVar(&superuserPassword, "superuser-password", "", "password for super user")
command.Flags().BoolVar(&updateSignature, "update-signature", false, "force updating the server-side token signing signature")
clientConfig = cli.AddKubectlFlagsToCmd(command)
return command
}


@@ -6,13 +6,15 @@ import (
"os"
"syscall"
"github.com/spf13/cobra"
"golang.org/x/crypto/ssh/terminal"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
"github.com/argoproj/argo-cd/server/account"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/settings"
"github.com/spf13/cobra"
"golang.org/x/crypto/ssh/terminal"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/localconfig"
)
func NewAccountCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
@@ -50,7 +52,9 @@ func NewAccountUpdatePasswordCommand(clientOpts *argocdclient.ClientOptions) *co
fmt.Print("\n")
}
if newPassword == "" {
newPassword = settings.ReadAndConfirmPassword()
var err error
newPassword, err = cli.ReadAndConfirmPassword()
errors.CheckError(err)
}
updatePasswordRequest := account.UpdatePasswordRequest{
@@ -58,11 +62,30 @@ func NewAccountUpdatePasswordCommand(clientOpts *argocdclient.ClientOptions) *co
CurrentPassword: currentPassword,
}
conn, usrIf := argocdclient.NewClientOrDie(clientOpts).NewAccountClientOrDie()
acdClient := argocdclient.NewClientOrDie(clientOpts)
conn, usrIf := acdClient.NewAccountClientOrDie()
defer util.Close(conn)
_, err := usrIf.UpdatePassword(context.Background(), &updatePasswordRequest)
ctx := context.Background()
_, err := usrIf.UpdatePassword(ctx, &updatePasswordRequest)
errors.CheckError(err)
fmt.Printf("Password updated\n")
// Get a new JWT token after updating the password
localCfg, err := localconfig.ReadLocalConfig(clientOpts.ConfigPath)
errors.CheckError(err)
configCtx, err := localCfg.ResolveContext(clientOpts.Context)
errors.CheckError(err)
claims, err := configCtx.User.Claims()
errors.CheckError(err)
tokenString := passwordLogin(acdClient, claims.Subject, newPassword)
localCfg.UpsertUser(localconfig.User{
Name: localCfg.CurrentContext,
AuthToken: tokenString,
})
err = localconfig.WriteLocalConfig(*localCfg, clientOpts.ConfigPath)
errors.CheckError(err)
fmt.Printf("Context '%s' updated\n", localCfg.CurrentContext)
},
}

File diff suppressed because it is too large


@@ -0,0 +1,150 @@
package commands
import (
"context"
"fmt"
"os"
"sort"
"text/tabwriter"
"github.com/spf13/cobra"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
argoappv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/server/application"
"github.com/argoproj/argo-cd/util"
)
// NewApplicationResourceActionsCommand returns a new instance of an `argocd app actions` command
func NewApplicationResourceActionsCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "actions",
Short: "Manage Resource actions",
Run: func(c *cobra.Command, args []string) {
c.HelpFunc()(c, args)
os.Exit(1)
},
}
command.AddCommand(NewApplicationResourceActionsListCommand(clientOpts))
command.AddCommand(NewApplicationResourceActionsRunCommand(clientOpts))
return command
}
// NewApplicationResourceActionsListCommand returns a new instance of an `argocd app actions list` command
func NewApplicationResourceActionsListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var namespace string
var kind string
var group string
var resourceName string
var all bool
var command = &cobra.Command{
Use: "list APPNAME",
Short: "Lists available actions on a resource",
}
command.Run = func(c *cobra.Command, args []string) {
if len(args) != 1 {
c.HelpFunc()(c, args)
os.Exit(1)
}
appName := args[0]
conn, appIf := argocdclient.NewClientOrDie(clientOpts).NewApplicationClientOrDie()
defer util.Close(conn)
ctx := context.Background()
resources, err := appIf.ManagedResources(ctx, &application.ResourcesQuery{ApplicationName: &appName})
errors.CheckError(err)
filteredObjects := filterResources(command, resources.Items, group, kind, namespace, resourceName, all)
availableActions := make(map[string][]argoappv1.ResourceAction)
for i := range filteredObjects {
obj := filteredObjects[i]
gvk := obj.GroupVersionKind()
availActionsForResource, err := appIf.ListResourceActions(ctx, &application.ApplicationResourceRequest{
Name: &appName,
Namespace: obj.GetNamespace(),
ResourceName: obj.GetName(),
Group: gvk.Group,
Kind: gvk.Kind,
})
errors.CheckError(err)
availableActions[obj.GetName()] = availActionsForResource.Actions
}
var keys []string
for key := range availableActions {
keys = append(keys, key)
}
sort.Strings(keys)
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "RESOURCE\tACTION\n")
fmt.Println()
for _, key := range keys {
for i := range availableActions[key] {
action := availableActions[key][i]
fmt.Fprintf(w, "%s\t%s\n", key, action.Name)
}
}
_ = w.Flush()
}
command.Flags().StringVar(&resourceName, "resource-name", "", "Name of resource")
command.Flags().StringVar(&kind, "kind", "", "Kind")
err := command.MarkFlagRequired("kind")
errors.CheckError(err)
command.Flags().StringVar(&group, "group", "", "Group")
command.Flags().StringVar(&namespace, "namespace", "", "Namespace")
command.Flags().BoolVar(&all, "all", false, "Indicates whether to list actions on multiple matching resources")
return command
}
// NewApplicationResourceActionsRunCommand returns a new instance of an `argocd app actions run` command
func NewApplicationResourceActionsRunCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var namespace string
var kind string
var group string
var resourceName string
var all bool
var command = &cobra.Command{
Use: "run APPNAME ACTION",
Short: "Runs an available action on resource(s)",
}
command.Flags().StringVar(&resourceName, "resource-name", "", "Name of resource")
command.Flags().StringVar(&kind, "kind", "", "Kind")
err := command.MarkFlagRequired("kind")
errors.CheckError(err)
command.Flags().StringVar(&group, "group", "", "Group")
command.Flags().StringVar(&namespace, "namespace", "", "Namespace")
command.Flags().BoolVar(&all, "all", false, "Indicates whether to run the action on multiple matching resources")
command.Run = func(c *cobra.Command, args []string) {
if len(args) != 2 {
c.HelpFunc()(c, args)
os.Exit(1)
}
appName := args[0]
actionName := args[1]
conn, appIf := argocdclient.NewClientOrDie(clientOpts).NewApplicationClientOrDie()
defer util.Close(conn)
ctx := context.Background()
resources, err := appIf.ManagedResources(ctx, &application.ResourcesQuery{ApplicationName: &appName})
errors.CheckError(err)
filteredObjects := filterResources(command, resources.Items, group, kind, namespace, resourceName, all)
for i := range filteredObjects {
obj := filteredObjects[i]
gvk := obj.GroupVersionKind()
objResourceName := obj.GetName()
_, err := appIf.RunResourceAction(context.Background(), &application.ResourceActionRunRequest{
Name: &appName,
Namespace: obj.GetNamespace(),
ResourceName: objResourceName,
Group: gvk.Group,
Kind: gvk.Kind,
Action: actionName,
})
errors.CheckError(err)
}
}
return command
}


@@ -0,0 +1,25 @@
package commands
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestParseLabels(t *testing.T) {
validLabels := []string{"key=value", "foo=bar", "intuit=inc"}
result, err := parseLabels(validLabels)
assert.NoError(t, err)
assert.Len(t, result, 3)
invalidLabels := []string{"key=value", "too=many=equals"}
_, err = parseLabels(invalidLabels)
assert.Error(t, err)
emptyLabels := []string{}
result, err = parseLabels(emptyLabels)
assert.NoError(t, err)
assert.Len(t, result, 0)
}
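The parseLabels implementation under test lives in a diff suppressed above as too large. A minimal version consistent with these assertions (a sketch, not necessarily the committed implementation):

package commands

import (
	"fmt"
	"strings"
)

// parseLabels accepts strictly "key=value" pairs and rejects anything
// with extra '=' separators, matching the assertions in TestParseLabels.
func parseLabels(labels []string) (map[string]string, error) {
	result := make(map[string]string, len(labels))
	for _, label := range labels {
		parts := strings.Split(label, "=")
		if len(parts) != 2 {
			return nil, fmt.Errorf("expected key=value, got %q", label)
		}
		result[parts[0]] = parts[1]
	}
	return result, nil
}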


@@ -9,17 +9,19 @@ import (
"strings"
"text/tabwriter"
"github.com/ghodss/yaml"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/clientcmd"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
argoappv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/server/cluster"
"github.com/argoproj/argo-cd/util"
"github.com/ghodss/yaml"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/clientcmd"
)
// NewClusterCommand returns a new instance of an `argocd cluster` command
@@ -43,8 +45,10 @@ func NewClusterCommand(clientOpts *argocdclient.ClientOptions, pathOpts *clientc
// NewClusterAddCommand returns a new instance of an `argocd cluster add` command
func NewClusterAddCommand(clientOpts *argocdclient.ClientOptions, pathOpts *clientcmd.PathOptions) *cobra.Command {
var (
inCluster bool
upsert bool
inCluster bool
upsert bool
awsRoleArn string
awsClusterName string
)
var command = &cobra.Command{
Use: "add",
@@ -62,6 +66,7 @@ func NewClusterAddCommand(clientOpts *argocdclient.ClientOptions, pathOpts *clie
if clstContext == nil {
log.Fatalf("Context %s does not exist in kubeconfig", args[0])
}
overrides := clientcmd.ConfigOverrides{
Context: *clstContext,
}
@@ -69,12 +74,23 @@ func NewClusterAddCommand(clientOpts *argocdclient.ClientOptions, pathOpts *clie
conf, err := clientConfig.ClientConfig()
errors.CheckError(err)
// Install RBAC resources for managing the cluster
managerBearerToken := common.InstallClusterManagerRBAC(conf)
managerBearerToken := ""
var awsAuthConf *argoappv1.AWSAuthConfig
if awsClusterName != "" {
awsAuthConf = &argoappv1.AWSAuthConfig{
ClusterName: awsClusterName,
RoleARN: awsRoleArn,
}
} else {
// Install RBAC resources for managing the cluster
clientset, err := kubernetes.NewForConfig(conf)
errors.CheckError(err)
managerBearerToken, err = common.InstallClusterManagerRBAC(clientset)
errors.CheckError(err)
}
conn, clusterIf := argocdclient.NewClientOrDie(clientOpts).NewClusterClientOrDie()
defer util.Close(conn)
clst := NewCluster(args[0], conf, managerBearerToken)
clst := NewCluster(args[0], conf, managerBearerToken, awsAuthConf)
if inCluster {
clst.Server = common.KubernetesInternalAPIServerAddr
}
@@ -88,8 +104,10 @@ func NewClusterAddCommand(clientOpts *argocdclient.ClientOptions, pathOpts *clie
},
}
command.PersistentFlags().StringVar(&pathOpts.LoadingRules.ExplicitPath, pathOpts.ExplicitFileFlag, pathOpts.LoadingRules.ExplicitPath, "use a particular kubeconfig file")
command.Flags().BoolVar(&inCluster, "in-cluster", false, "Indicates ArgoCD resides inside this cluster and should connect using the internal k8s hostname (kubernetes.default.svc)")
command.Flags().BoolVar(&inCluster, "in-cluster", false, "Indicates Argo CD resides inside this cluster and should connect using the internal k8s hostname (kubernetes.default.svc)")
command.Flags().BoolVar(&upsert, "upsert", false, "Override an existing cluster with the same name even if the spec differs")
command.Flags().StringVar(&awsClusterName, "aws-cluster-name", "", "AWS cluster name. If set, aws-iam-authenticator is used to access the cluster")
command.Flags().StringVar(&awsRoleArn, "aws-role-arn", "", "Optional AWS role ARN. If set, AWS IAM Authenticator assumes the role to perform cluster operations instead of the default AWS credential provider chain.")
return command
}
@@ -132,24 +150,12 @@ func printKubeContexts(ca clientcmd.ConfigAccess) {
}
}
func NewCluster(name string, conf *rest.Config, managerBearerToken string) *argoappv1.Cluster {
func NewCluster(name string, conf *rest.Config, managerBearerToken string, awsAuthConf *argoappv1.AWSAuthConfig) *argoappv1.Cluster {
tlsClientConfig := argoappv1.TLSClientConfig{
Insecure: conf.TLSClientConfig.Insecure,
ServerName: conf.TLSClientConfig.ServerName,
CertData: conf.TLSClientConfig.CertData,
KeyData: conf.TLSClientConfig.KeyData,
CAData: conf.TLSClientConfig.CAData,
}
if len(conf.TLSClientConfig.CertData) == 0 && conf.TLSClientConfig.CertFile != "" {
data, err := ioutil.ReadFile(conf.TLSClientConfig.CertFile)
errors.CheckError(err)
tlsClientConfig.CertData = data
}
if len(conf.TLSClientConfig.KeyData) == 0 && conf.TLSClientConfig.KeyFile != "" {
data, err := ioutil.ReadFile(conf.TLSClientConfig.KeyFile)
errors.CheckError(err)
tlsClientConfig.KeyData = data
}
if len(conf.TLSClientConfig.CAData) == 0 && conf.TLSClientConfig.CAFile != "" {
data, err := ioutil.ReadFile(conf.TLSClientConfig.CAFile)
errors.CheckError(err)
@@ -161,6 +167,7 @@ func NewCluster(name string, conf *rest.Config, managerBearerToken string) *argo
Config: argoappv1.ClusterConfig{
BearerToken: managerBearerToken,
TLSClientConfig: tlsClientConfig,
AWSAuthConfig: awsAuthConf,
},
}
return &clst
@@ -202,9 +209,14 @@ func NewClusterRemoveCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comm
}
conn, clusterIf := argocdclient.NewClientOrDie(clientOpts).NewClusterClientOrDie()
defer util.Close(conn)
// clientset, err := kubernetes.NewForConfig(conf)
// errors.CheckError(err)
for _, clusterName := range args {
// TODO(jessesuen): find the right context and remove manager RBAC artifacts
// common.UninstallClusterManagerRBAC(conf)
// err := common.UninstallClusterManagerRBAC(clientset)
// errors.CheckError(err)
_, err := clusterIf.Delete(context.Background(), &cluster.ClusterQuery{Server: clusterName})
errors.CheckError(err)
}


@@ -2,4 +2,8 @@ package commands
const (
cliName = "argocd"
// DefaultSSOLocalPort is the localhost port to listen on for the temporary web server performing
// the OAuth2 login flow.
DefaultSSOLocalPort = 8085
)


@@ -8,11 +8,12 @@ import (
"strings"
"text/tabwriter"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
"github.com/argoproj/argo-cd/util/localconfig"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
)
// NewContextCommand returns a new instance of an `argocd ctx` command


@@ -2,15 +2,19 @@ package commands
import (
"context"
"crypto/tls"
"fmt"
"net"
"net/http"
"os"
"strconv"
"time"
"github.com/argoproj/argo-cd/common"
oidc "github.com/coreos/go-oidc"
jwt "github.com/dgrijalva/jwt-go"
log "github.com/sirupsen/logrus"
"github.com/skratchdot/open-golang/open"
"github.com/spf13/cobra"
"golang.org/x/oauth2"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
"github.com/argoproj/argo-cd/server/session"
@@ -19,11 +23,8 @@ import (
"github.com/argoproj/argo-cd/util/cli"
grpc_util "github.com/argoproj/argo-cd/util/grpc"
"github.com/argoproj/argo-cd/util/localconfig"
jwt "github.com/dgrijalva/jwt-go"
log "github.com/sirupsen/logrus"
"github.com/skratchdot/open-golang/open"
"github.com/spf13/cobra"
"golang.org/x/oauth2"
oidcutil "github.com/argoproj/argo-cd/util/oidc"
"github.com/argoproj/argo-cd/util/rand"
)
// NewLoginCommand returns a new instance of `argocd login` command
@@ -33,6 +34,7 @@ func NewLoginCommand(globalClientOpts *argocdclient.ClientOptions) *cobra.Comman
username string
password string
sso bool
ssoPort int
)
var command = &cobra.Command{
Use: "login SERVER",
@@ -66,6 +68,7 @@ func NewLoginCommand(globalClientOpts *argocdclient.ClientOptions) *cobra.Comman
ServerAddr: server,
Insecure: globalClientOpts.Insecure,
PlainText: globalClientOpts.PlainText,
GRPCWeb: globalClientOpts.GRPCWeb,
}
acdClient := argocdclient.NewClientOrDie(&clientOpts)
setConn, setIf := acdClient.NewSettingsClientOrDie()
@@ -81,12 +84,15 @@ func NewLoginCommand(globalClientOpts *argocdclient.ClientOptions) *cobra.Comman
if !sso {
tokenString = passwordLogin(acdClient, username, password)
} else {
acdSet, err := setIf.Get(context.Background(), &settings.SettingsQuery{})
ctx := context.Background()
httpClient, err := acdClient.HTTPClient()
errors.CheckError(err)
if !ssoConfigured(acdSet) {
log.Fatalf("ArgoCD instance is not configured with SSO")
}
tokenString, refreshToken = oauth2Login(server, clientOpts.PlainText)
ctx = oidc.ClientContext(ctx, httpClient)
acdSet, err := setIf.Get(ctx, &settings.SettingsQuery{})
errors.CheckError(err)
oauth2conf, provider, err := acdClient.OIDCConfig(ctx, acdSet)
errors.CheckError(err)
tokenString, refreshToken = oauth2Login(ctx, ssoPort, oauth2conf, provider)
}
parser := &jwt.Parser{
@@ -107,6 +113,7 @@ func NewLoginCommand(globalClientOpts *argocdclient.ClientOptions) *cobra.Comman
Server: server,
PlainText: globalClientOpts.PlainText,
Insecure: globalClientOpts.Insecure,
GRPCWeb: globalClientOpts.GRPCWeb,
})
localCfg.UpsertUser(localconfig.User{
Name: ctxName,
@@ -130,7 +137,8 @@ func NewLoginCommand(globalClientOpts *argocdclient.ClientOptions) *cobra.Comman
command.Flags().StringVar(&ctxName, "name", "", "name to use for the context")
command.Flags().StringVar(&username, "username", "", "the username of an account to authenticate")
command.Flags().StringVar(&password, "password", "", "the password of an account to authenticate")
command.Flags().BoolVar(&sso, "sso", false, "Perform SSO login")
command.Flags().BoolVar(&sso, "sso", false, "perform SSO login")
command.Flags().IntVar(&ssoPort, "sso-port", DefaultSSOLocalPort, "port to run local OAuth2 login application")
return command
}
@@ -144,97 +152,107 @@ func userDisplayName(claims jwt.MapClaims) string {
return claims["sub"].(string)
}
func ssoConfigured(set *settings.Settings) bool {
return set.DexConfig != nil && len(set.DexConfig.Connectors) > 0
}
// getFreePort asks the kernel for a free open port that is ready to use.
func getFreePort() (int, error) {
ln, err := net.Listen("tcp", "[::]:0")
if err != nil {
return 0, err
}
return ln.Addr().(*net.TCPAddr).Port, ln.Close()
}
// oauth2Login opens a browser, runs a temporary HTTP server to delegate OAuth2 login flow and
// returns the JWT token and a refresh token (if supported)
func oauth2Login(host string, plaintext bool) (string, string) {
ctx := context.Background()
port, err := getFreePort()
func oauth2Login(ctx context.Context, port int, oauth2conf *oauth2.Config, provider *oidc.Provider) (string, string) {
oauth2conf.RedirectURL = fmt.Sprintf("http://localhost:%d/auth/callback", port)
oidcConf, err := oidcutil.ParseConfig(provider)
errors.CheckError(err)
var scheme = "https"
if plaintext {
scheme = "http"
}
conf := &oauth2.Config{
ClientID: common.ArgoCDCLIClientAppID,
Scopes: []string{"openid", "profile", "email", "groups", "offline_access"},
Endpoint: oauth2.Endpoint{
AuthURL: fmt.Sprintf("%s://%s%s/auth", scheme, host, common.DexAPIEndpoint),
TokenURL: fmt.Sprintf("%s://%s%s/token", scheme, host, common.DexAPIEndpoint),
},
RedirectURL: fmt.Sprintf("http://localhost:%d/auth/callback", port),
}
srv := &http.Server{Addr: ":" + strconv.Itoa(port)}
log.Debug("OIDC Configuration:")
log.Debugf(" supported_scopes: %v", oidcConf.ScopesSupported)
log.Debugf(" response_types_supported: %v", oidcConf.ResponseTypesSupported)
// handledRequests ensures we do not handle more requests than necessary
handledRequests := 0
// completionChan signals that the login flow has completed; a non-empty string indicates an error
completionChan := make(chan string)
// stateNonce is an OAuth2 state nonce
stateNonce := rand.RandString(10)
var tokenString string
var refreshToken string
loginCompleted := make(chan struct{})
handleErr := func(w http.ResponseWriter, errMsg string) {
http.Error(w, errMsg, http.StatusBadRequest)
completionChan <- errMsg
}
// Authorization redirect callback from OAuth2 auth flow.
// Handles both implicit and authorization code flow
callbackHandler := func(w http.ResponseWriter, r *http.Request) {
defer func() {
loginCompleted <- struct{}{}
}()
log.Debugf("Callback: %s", r.URL)
// Authorization redirect callback from OAuth2 auth flow.
if errMsg := r.FormValue("error"); errMsg != "" {
http.Error(w, errMsg+": "+r.FormValue("error_description"), http.StatusBadRequest)
log.Fatal(errMsg)
if formErr := r.FormValue("error"); formErr != "" {
handleErr(w, formErr+": "+r.FormValue("error_description"))
return
}
code := r.FormValue("code")
if code == "" {
errMsg := fmt.Sprintf("no code in request: %q", r.Form)
http.Error(w, errMsg, http.StatusBadRequest)
log.Fatal(errMsg)
return
}
tok, err := conf.Exchange(ctx, code)
errors.CheckError(err)
log.Info("Authentication successful")
var ok bool
tokenString, ok = tok.Extra("id_token").(string)
if !ok {
errMsg := "no id_token in token response"
http.Error(w, errMsg, http.StatusInternalServerError)
log.Fatal(errMsg)
handledRequests++
if handledRequests > 2 {
// Since implicit flow will redirect back to ourselves, this counter ensures we do not
// fallinto a redirect loop (e.g. user visits the page by hand)
handleErr(w, "Unable to complete login flow: too many redirects")
return
}
refreshToken, _ = tok.Extra("refresh_token").(string)
log.Debugf("Token: %s", tokenString)
log.Debugf("Refresh Token: %s", tokenString)
if len(r.Form) == 0 {
// If we get here, no form data was set. We presume to be performing an implicit login
// flow where the id_token is contained in a URL fragment, making it inaccessible to be
// read from the request. This javascript will redirect the browser to send the
// fragments as query parameters so our callback handler can read and return token.
fmt.Fprintf(w, `<script>window.location.search = window.location.hash.substring(1)</script>`)
return
}
if state := r.FormValue("state"); state != stateNonce {
handleErr(w, "Unknown state nonce")
return
}
tokenString = r.FormValue("id_token")
if tokenString == "" {
code := r.FormValue("code")
if code == "" {
handleErr(w, fmt.Sprintf("no code in request: %q", r.Form))
return
}
tok, err := oauth2conf.Exchange(ctx, code)
if err != nil {
handleErr(w, err.Error())
return
}
var ok bool
tokenString, ok = tok.Extra("id_token").(string)
if !ok {
handleErr(w, "no id_token in token response")
return
}
refreshToken, _ = tok.Extra("refresh_token").(string)
}
successPage := `
<div style="height:100px; width:100%; display:flex; flex-direction: column; justify-content: center; align-items:center; background-color:#2ecc71; color:white; font-size:22px"><div>Authentication successful!</div></div>
<p style="margin-top:20px; font-size:18px; text-align:center">Authentication was successful, you can now return to the CLI. This page will close automatically</p>
<script>window.onload=function(){setTimeout(this.close, 4000)}</script>
`
fmt.Fprintf(w, successPage)
fmt.Fprint(w, successPage)
completionChan <- ""
}
srv := &http.Server{Addr: "localhost:" + strconv.Itoa(port)}
http.HandleFunc("/auth/callback", callbackHandler)
// add transport for self-signed certificate to context
sslcli := &http.Client{
Transport: &http.Transport{
TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
},
}
ctx = context.WithValue(ctx, oauth2.HTTPClient, sslcli)
// Redirect user to login & consent page to ask for permission for the scopes specified above.
log.Info("Opening browser for authentication")
url := conf.AuthCodeURL("state", oauth2.AccessTypeOffline)
log.Infof("Authentication URL: %s", url)
fmt.Printf("Opening browser for authentication\n")
var url string
grantType := oidcutil.InferGrantType(oauth2conf, oidcConf)
switch grantType {
case oidcutil.GrantTypeAuthorizationCode:
url = oauth2conf.AuthCodeURL(stateNonce, oauth2.AccessTypeOffline)
case oidcutil.GrantTypeImplicit:
url = oidcutil.ImplicitFlowURL(oauth2conf, stateNonce, oauth2.AccessTypeOffline)
default:
log.Fatalf("Unsupported grant type: %v", grantType)
}
fmt.Printf("Performing %s flow login: %s\n", grantType, url)
time.Sleep(1 * time.Second)
err = open.Run(url)
errors.CheckError(err)
@@ -243,8 +261,16 @@ func oauth2Login(host string, plaintext bool) (string, string) {
log.Fatalf("listen: %s\n", err)
}
}()
<-loginCompleted
errMsg := <-completionChan
if errMsg != "" {
log.Fatal(errMsg)
}
fmt.Printf("Authentication successful\n")
ctx, cancel := context.WithTimeout(ctx, 1*time.Second)
defer cancel()
_ = srv.Shutdown(ctx)
log.Debugf("Token: %s", tokenString)
log.Debugf("Refresh Token: %s", refreshToken)
return tokenString, refreshToken
}


@@ -1,26 +1,29 @@
package commands
import (
"context"
"encoding/json"
"fmt"
"io"
"os"
"strings"
"text/tabwriter"
"time"
"github.com/dustin/go-humanize"
"github.com/ghodss/yaml"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"strings"
"context"
"fmt"
"text/tabwriter"
"github.com/spf13/pflag"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/server/project"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/git"
"github.com/spf13/pflag"
"k8s.io/apimachinery/pkg/apis/meta/v1"
)
type projectOpts struct {
@@ -29,6 +32,12 @@ type projectOpts struct {
sources []string
}
type policyOpts struct {
action string
permission string
object string
}
func (opts *projectOpts) GetDestinations() []v1alpha1.ApplicationDestination {
destinations := make([]v1alpha1.ApplicationDestination, 0)
for _, destStr := range opts.destinations {
@@ -55,22 +64,40 @@ func NewProjectCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
os.Exit(1)
},
}
command.AddCommand(NewProjectRoleCommand(clientOpts))
command.AddCommand(NewProjectCreateCommand(clientOpts))
command.AddCommand(NewProjectGetCommand(clientOpts))
command.AddCommand(NewProjectDeleteCommand(clientOpts))
command.AddCommand(NewProjectListCommand(clientOpts))
command.AddCommand(NewProjectSetCommand(clientOpts))
command.AddCommand(NewProjectEditCommand(clientOpts))
command.AddCommand(NewProjectAddDestinationCommand(clientOpts))
command.AddCommand(NewProjectRemoveDestinationCommand(clientOpts))
command.AddCommand(NewProjectAddSourceCommand(clientOpts))
command.AddCommand(NewProjectRemoveSourceCommand(clientOpts))
command.AddCommand(NewProjectAllowClusterResourceCommand(clientOpts))
command.AddCommand(NewProjectDenyClusterResourceCommand(clientOpts))
command.AddCommand(NewProjectAllowNamespaceResourceCommand(clientOpts))
command.AddCommand(NewProjectDenyNamespaceResourceCommand(clientOpts))
return command
}
func addProjFlags(command *cobra.Command, opts *projectOpts) {
command.Flags().StringVarP(&opts.description, "description", "", "desc", "Project description")
command.Flags().StringVarP(&opts.description, "description", "", "", "Project description")
command.Flags().StringArrayVarP(&opts.destinations, "dest", "d", []string{},
"Allowed deployment destination. Includes comma separated server url and namespace (e.g. https://192.168.99.100:8443,default")
command.Flags().StringArrayVarP(&opts.sources, "src", "s", []string{}, "Allowed deployment source repository URL.")
"Permitted destination server and namespace (e.g. https://192.168.99.100:8443,default)")
command.Flags().StringArrayVarP(&opts.sources, "src", "s", []string{}, "Permitted git source repository URL")
}
func addPolicyFlags(command *cobra.Command, opts *policyOpts) {
command.Flags().StringVarP(&opts.action, "action", "a", "", "Action to grant/deny permission on (e.g. get, create, list, update, delete)")
command.Flags().StringVarP(&opts.permission, "permission", "p", "allow", "Whether to allow or deny access to the object with the action. This can only be 'allow' or 'deny'")
command.Flags().StringVarP(&opts.object, "object", "o", "", "Object within the project to grant/deny access to. Use '*' for a wildcard. The policy is applied to '<project>/<object>'")
}
func humanizeTimestamp(epoch int64) string {
ts := time.Unix(epoch, 0)
return fmt.Sprintf("%s (%s)", ts.Format(time.RFC3339), humanize.Time(ts))
}
// NewProjectCreateCommand returns a new instance of an `argocd proj create` command
@@ -242,8 +269,13 @@ func NewProjectAddSourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.C
errors.CheckError(err)
for _, item := range proj.Spec.SourceRepos {
if item == git.NormalizeGitURL(url) {
log.Fatal("Specified source repository is already defined in project")
if item == "*" && item == url {
fmt.Printf("Source repository '*' already allowed in project\n")
return
}
if git.SameURL(item, url) {
fmt.Printf("Source repository '%s' already allowed in project\n", item)
return
}
}
proj.Spec.SourceRepos = append(proj.Spec.SourceRepos, url)
@@ -254,6 +286,104 @@ func NewProjectAddSourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.C
return command
}
func modifyProjectResourceCmd(cmdUse, cmdDesc string, clientOpts *argocdclient.ClientOptions, action func(proj *v1alpha1.AppProject, group string, kind string) bool) *cobra.Command {
return &cobra.Command{
Use: cmdUse,
Short: cmdDesc,
Run: func(c *cobra.Command, args []string) {
if len(args) != 3 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName, group, kind := args[0], args[1], args[2]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
if action(proj, group, kind) {
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
}
},
}
}
// NewProjectAllowNamespaceResourceCommand returns a new instance of an `argocd proj allow-namespace-resource` command
func NewProjectAllowNamespaceResourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
use := "allow-namespace-resource PROJECT GROUP KIND"
desc := "Removes a namespaced API resource from the blacklist"
return modifyProjectResourceCmd(use, desc, clientOpts, func(proj *v1alpha1.AppProject, group string, kind string) bool {
index := -1
for i, item := range proj.Spec.NamespaceResourceBlacklist {
if item.Group == group && item.Kind == kind {
index = i
break
}
}
if index == -1 {
fmt.Printf("Group '%s' and kind '%s' not in blacklisted namespaced resources\n", group, kind)
return false
}
proj.Spec.NamespaceResourceBlacklist = append(proj.Spec.NamespaceResourceBlacklist[:index], proj.Spec.NamespaceResourceBlacklist[index+1:]...)
return true
})
}
// NewProjectDenyNamespaceResourceCommand returns a new instance of an `argocd proj deny-namespace-resource` command
func NewProjectDenyNamespaceResourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
use := "deny-namespace-resource PROJECT GROUP KIND"
desc := "Adds a namespaced API resource to the blacklist"
return modifyProjectResourceCmd(use, desc, clientOpts, func(proj *v1alpha1.AppProject, group string, kind string) bool {
for _, item := range proj.Spec.NamespaceResourceBlacklist {
if item.Group == group && item.Kind == kind {
fmt.Printf("Group '%s' and kind '%s' already present in blacklisted namespaced resources\n", group, kind)
return false
}
}
proj.Spec.NamespaceResourceBlacklist = append(proj.Spec.NamespaceResourceBlacklist, v1.GroupKind{Group: group, Kind: kind})
return true
})
}
// NewProjectDenyClusterResourceCommand returns a new instance of an `argocd proj deny-cluster-resource` command
func NewProjectDenyClusterResourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
use := "deny-cluster-resource PROJECT GROUP KIND"
desc := "Removes a cluster-scoped API resource from the whitelist"
return modifyProjectResourceCmd(use, desc, clientOpts, func(proj *v1alpha1.AppProject, group string, kind string) bool {
index := -1
for i, item := range proj.Spec.ClusterResourceWhitelist {
if item.Group == group && item.Kind == kind {
index = i
break
}
}
if index == -1 {
fmt.Printf("Group '%s' and kind '%s' not in whitelisted cluster resources\n", group, kind)
return false
}
proj.Spec.ClusterResourceWhitelist = append(proj.Spec.ClusterResourceWhitelist[:index], proj.Spec.ClusterResourceWhitelist[index+1:]...)
return true
})
}
// NewProjectAllowClusterResourceCommand returns a new instance of an `argocd proj allow-cluster-resource` command
func NewProjectAllowClusterResourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
use := "allow-cluster-resource PROJECT GROUP KIND"
desc := "Adds a cluster-scoped API resource to the whitelist"
return modifyProjectResourceCmd(use, desc, clientOpts, func(proj *v1alpha1.AppProject, group string, kind string) bool {
for _, item := range proj.Spec.ClusterResourceWhitelist {
if item.Group == group && item.Kind == kind {
fmt.Printf("Group '%s' and kind '%s' already present in whitelisted cluster resources\n", group, kind)
return false
}
}
proj.Spec.ClusterResourceWhitelist = append(proj.Spec.ClusterResourceWhitelist, v1.GroupKind{Group: group, Kind: kind})
return true
})
}
// NewProjectRemoveSourceCommand returns a new instance of an `argocd proj remove-src` command
func NewProjectRemoveSourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
@@ -274,13 +404,13 @@ func NewProjectRemoveSourceCommand(clientOpts *argocdclient.ClientOptions) *cobr
index := -1
for i, item := range proj.Spec.SourceRepos {
if item == git.NormalizeGitURL(url) {
if item == url {
index = i
break
}
}
if index == -1 {
log.Fatal("Specified source repository does not exist in project")
fmt.Printf("Source repository '%s' does not exist in project\n", url)
} else {
proj.Spec.SourceRepos = append(proj.Spec.SourceRepos[:index], proj.Spec.SourceRepos[index+1:]...)
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
@@ -324,12 +454,155 @@ func NewProjectListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comman
projects, err := projIf.List(context.Background(), &project.ProjectQuery{})
errors.CheckError(err)
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "NAME\tDESCRIPTION\tDESTINATIONS\n")
fmt.Fprintf(w, "NAME\tDESCRIPTION\tDESTINATIONS\tSOURCES\tCLUSTER-RESOURCE-WHITELIST\tNAMESPACE-RESOURCE-BLACKLIST\n")
for _, p := range projects.Items {
fmt.Fprintf(w, "%s\t%s\t%v\n", p.Name, p.Spec.Description, p.Spec.Destinations)
printProjectLine(w, &p)
}
_ = w.Flush()
},
}
return command
}
func printProjectLine(w io.Writer, p *v1alpha1.AppProject) {
var destinations, sourceRepos, clusterWhitelist, namespaceBlacklist string
switch len(p.Spec.Destinations) {
case 0:
destinations = "<none>"
case 1:
destinations = fmt.Sprintf("%s,%s", p.Spec.Destinations[0].Server, p.Spec.Destinations[0].Namespace)
default:
destinations = fmt.Sprintf("%d destinations", len(p.Spec.Destinations))
}
switch len(p.Spec.SourceRepos) {
case 0:
sourceRepos = "<none>"
case 1:
sourceRepos = p.Spec.SourceRepos[0]
default:
sourceRepos = fmt.Sprintf("%d repos", len(p.Spec.SourceRepos))
}
switch len(p.Spec.ClusterResourceWhitelist) {
case 0:
clusterWhitelist = "<none>"
case 1:
clusterWhitelist = fmt.Sprintf("%s/%s", p.Spec.ClusterResourceWhitelist[0].Group, p.Spec.ClusterResourceWhitelist[0].Kind)
default:
clusterWhitelist = fmt.Sprintf("%d resources", len(p.Spec.ClusterResourceWhitelist))
}
switch len(p.Spec.NamespaceResourceBlacklist) {
case 0:
namespaceBlacklist = "<none>"
default:
namespaceBlacklist = fmt.Sprintf("%d resources", len(p.Spec.NamespaceResourceBlacklist))
}
fmt.Fprintf(w, "%s\t%s\t%v\t%v\t%v\t%v\n", p.Name, p.Spec.Description, destinations, sourceRepos, clusterWhitelist, namespaceBlacklist)
}
// NewProjectGetCommand returns a new instance of an `argocd proj get` command
func NewProjectGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
const printProjFmtStr = "%-34s%s\n"
var command = &cobra.Command{
Use: "get PROJECT",
Short: "Get project details",
Run: func(c *cobra.Command, args []string) {
if len(args) != 1 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
p, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
fmt.Printf(printProjFmtStr, "Name:", p.Name)
fmt.Printf(printProjFmtStr, "Description:", p.Spec.Description)
// Print destinations
dest0 := "<none>"
if len(p.Spec.Destinations) > 0 {
dest0 = fmt.Sprintf("%s,%s", p.Spec.Destinations[0].Server, p.Spec.Destinations[0].Namespace)
}
fmt.Printf(printProjFmtStr, "Destinations:", dest0)
for i := 1; i < len(p.Spec.Destinations); i++ {
fmt.Printf(printProjFmtStr, "", fmt.Sprintf("%s,%s", p.Spec.Destinations[i].Server, p.Spec.Destinations[i].Namespace))
}
// Print sources
src0 := "<none>"
if len(p.Spec.SourceRepos) > 0 {
src0 = p.Spec.SourceRepos[0]
}
fmt.Printf(printProjFmtStr, "Repositories:", src0)
for i := 1; i < len(p.Spec.SourceRepos); i++ {
fmt.Printf(printProjFmtStr, "", p.Spec.SourceRepos[i])
}
// Print whitelisted cluster resources
cwl0 := "<none>"
if len(p.Spec.ClusterResourceWhitelist) > 0 {
cwl0 = fmt.Sprintf("%s/%s", p.Spec.ClusterResourceWhitelist[0].Group, p.Spec.ClusterResourceWhitelist[0].Kind)
}
fmt.Printf(printProjFmtStr, "Whitelisted Cluster Resources:", cwl0)
for i := 1; i < len(p.Spec.ClusterResourceWhitelist); i++ {
fmt.Printf(printProjFmtStr, "", fmt.Sprintf("%s/%s", p.Spec.ClusterResourceWhitelist[i].Group, p.Spec.ClusterResourceWhitelist[i].Kind))
}
// Print blacklisted namespaced resources
rbl0 := "<none>"
if len(p.Spec.NamespaceResourceBlacklist) > 0 {
rbl0 = fmt.Sprintf("%s/%s", p.Spec.NamespaceResourceBlacklist[0].Group, p.Spec.NamespaceResourceBlacklist[0].Kind)
}
fmt.Printf(printProjFmtStr, "Blacklisted Namespaced Resources:", rbl0)
for i := 1; i < len(p.Spec.NamespaceResourceBlacklist); i++ {
fmt.Printf(printProjFmtStr, "", fmt.Sprintf("%s/%s", p.Spec.NamespaceResourceBlacklist[i].Group, p.Spec.NamespaceResourceBlacklist[i].Kind))
}
},
}
return command
}
func NewProjectEditCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "edit PROJECT",
Short: "Edit project",
Run: func(c *cobra.Command, args []string) {
if len(args) != 1 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
projData, err := json.Marshal(proj.Spec)
errors.CheckError(err)
projData, err = yaml.JSONToYAML(projData)
errors.CheckError(err)
cli.InteractiveEdit(fmt.Sprintf("%s-*-edit.yaml", projName), projData, func(input []byte) error {
input, err = yaml.YAMLToJSON(input)
if err != nil {
return err
}
updatedSpec := v1alpha1.AppProjectSpec{}
err = json.Unmarshal(input, &updatedSpec)
if err != nil {
return err
}
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
if err != nil {
return err
}
proj.Spec = updatedSpec
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
if err != nil {
return fmt.Errorf("Failed to update project:\n%v", err)
}
return err
})
},
}
return command
}


@@ -0,0 +1,377 @@
package commands
import (
"context"
"fmt"
"os"
"strconv"
"text/tabwriter"
timeutil "github.com/argoproj/pkg/time"
"github.com/spf13/cobra"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/server/project"
"github.com/argoproj/argo-cd/util"
projectutil "github.com/argoproj/argo-cd/util/project"
)
const (
policyTemplate = "p, proj:%s:%s, applications, %s, %s/%s, %s"
)
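Expanded, policyTemplate yields a casbin-style policy line. For example (the project, role, action, and object values below are made up):

policy := fmt.Sprintf(policyTemplate, "default", "ci-role", "sync", "default", "*", "allow")
// policy == "p, proj:default:ci-role, applications, sync, default/*, allow"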
// NewProjectRoleCommand returns a new instance of the `argocd proj role` command
func NewProjectRoleCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
roleCommand := &cobra.Command{
Use: "role",
Short: "Manage a project's roles",
Run: func(c *cobra.Command, args []string) {
c.HelpFunc()(c, args)
os.Exit(1)
},
}
roleCommand.AddCommand(NewProjectRoleListCommand(clientOpts))
roleCommand.AddCommand(NewProjectRoleGetCommand(clientOpts))
roleCommand.AddCommand(NewProjectRoleCreateCommand(clientOpts))
roleCommand.AddCommand(NewProjectRoleDeleteCommand(clientOpts))
roleCommand.AddCommand(NewProjectRoleCreateTokenCommand(clientOpts))
roleCommand.AddCommand(NewProjectRoleDeleteTokenCommand(clientOpts))
roleCommand.AddCommand(NewProjectRoleAddPolicyCommand(clientOpts))
roleCommand.AddCommand(NewProjectRoleRemovePolicyCommand(clientOpts))
return roleCommand
}
// NewProjectRoleAddPolicyCommand returns a new instance of an `argocd proj role add-policy` command
func NewProjectRoleAddPolicyCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
opts policyOpts
)
var command = &cobra.Command{
Use: "add-policy PROJECT ROLE-NAME",
Short: "Add a policy to a project role",
Run: func(c *cobra.Command, args []string) {
if len(args) != 2 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
roleName := args[1]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
role, roleIndex, err := projectutil.GetRoleByName(proj, roleName)
errors.CheckError(err)
policy := fmt.Sprintf(policyTemplate, proj.Name, role.Name, opts.action, proj.Name, opts.object, opts.permission)
proj.Spec.Roles[roleIndex].Policies = append(role.Policies, policy)
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
},
}
addPolicyFlags(command, &opts)
return command
}
// NewProjectRoleRemovePolicyCommand returns a new instance of an `argocd proj role remove-policy` command
func NewProjectRoleRemovePolicyCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
opts policyOpts
)
var command = &cobra.Command{
Use: "remove-policy PROJECT ROLE-NAME",
Short: "Remove a policy from a role within a project",
Run: func(c *cobra.Command, args []string) {
if len(args) != 2 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
roleName := args[1]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
role, roleIndex, err := projectutil.GetRoleByName(proj, roleName)
errors.CheckError(err)
policyToRemove := fmt.Sprintf(policyTemplate, proj.Name, role.Name, opts.action, proj.Name, opts.object, opts.permission)
duplicateIndex := -1
for i, policy := range role.Policies {
if policy == policyToRemove {
duplicateIndex = i
break
}
}
if duplicateIndex < 0 {
return
}
role.Policies[duplicateIndex] = role.Policies[len(role.Policies)-1]
proj.Spec.Roles[roleIndex].Policies = role.Policies[:len(role.Policies)-1]
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
},
}
addPolicyFlags(command, &opts)
return command
}
// NewProjectRoleCreateCommand returns a new instance of an `argocd proj role create` command
func NewProjectRoleCreateCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
description string
)
var command = &cobra.Command{
Use: "create PROJECT ROLE-NAME",
Short: "Create a project role",
Run: func(c *cobra.Command, args []string) {
if len(args) != 2 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
roleName := args[1]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
_, _, err = projectutil.GetRoleByName(proj, roleName)
if err == nil {
fmt.Printf("Role '%s' already exists\n", roleName)
return
}
proj.Spec.Roles = append(proj.Spec.Roles, v1alpha1.ProjectRole{Name: roleName, Description: description})
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
fmt.Printf("Role '%s' created\n", roleName)
},
}
command.Flags().StringVarP(&description, "description", "", "", "Project description")
return command
}
// NewProjectRoleDeleteCommand returns a new instance of an `argocd proj role delete` command
func NewProjectRoleDeleteCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "delete PROJECT ROLE-NAME",
Short: "Delete a project role",
Run: func(c *cobra.Command, args []string) {
if len(args) != 2 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
roleName := args[1]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
_, index, err := projectutil.GetRoleByName(proj, roleName)
if err != nil {
fmt.Printf("Role '%s' does not exist in project\n", roleName)
return
}
proj.Spec.Roles[index] = proj.Spec.Roles[len(proj.Spec.Roles)-1]
proj.Spec.Roles = proj.Spec.Roles[:len(proj.Spec.Roles)-1]
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
fmt.Printf("Role '%s' deleted\n", roleName)
},
}
return command
}
// NewProjectRoleCreateTokenCommand returns a new instance of an `argocd proj role create-token` command
func NewProjectRoleCreateTokenCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
expiresIn string
)
var command = &cobra.Command{
Use: "create-token PROJECT ROLE-NAME",
Short: "Create a project token",
Run: func(c *cobra.Command, args []string) {
if len(args) != 2 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
roleName := args[1]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
duration, err := timeutil.ParseDuration(expiresIn)
errors.CheckError(err)
token, err := projIf.CreateToken(context.Background(), &project.ProjectTokenCreateRequest{Project: projName, Role: roleName, ExpiresIn: int64(duration.Seconds())})
errors.CheckError(err)
fmt.Println(token.Token)
},
}
command.Flags().StringVarP(&expiresIn, "expires-in", "e", "0s", "Duration before the token will expire. (Default: No expiration)")
return command
}
// NewProjectRoleDeleteTokenCommand returns a new instance of an `argocd proj role delete-token` command
func NewProjectRoleDeleteTokenCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "delete-token PROJECT ROLE-NAME ISSUED-AT",
Short: "Delete a project token",
Run: func(c *cobra.Command, args []string) {
if len(args) != 3 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
roleName := args[1]
issuedAt, err := strconv.ParseInt(args[2], 10, 64)
errors.CheckError(err)
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
_, err = projIf.DeleteToken(context.Background(), &project.ProjectTokenDeleteRequest{Project: projName, Role: roleName, Iat: issuedAt})
errors.CheckError(err)
},
}
return command
}
// NewProjectRoleListCommand returns a new instance of an `argocd proj role list` command
func NewProjectRoleListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "list PROJECT",
Short: "List all the roles in a project",
Run: func(c *cobra.Command, args []string) {
if len(args) != 1 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
project, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "ROLE-NAME\tDESCRIPTION\n")
for _, role := range project.Spec.Roles {
fmt.Fprintf(w, "%s\t%s\n", role.Name, role.Description)
}
_ = w.Flush()
},
}
return command
}
// NewProjectRoleGetCommand returns a new instance of an `argocd proj role get` command
func NewProjectRoleGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "get PROJECT ROLE-NAME",
Short: "Get the details of a specific role",
Run: func(c *cobra.Command, args []string) {
if len(args) != 2 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
roleName := args[1]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
role, _, err := projectutil.GetRoleByName(proj, roleName)
errors.CheckError(err)
printRoleFmtStr := "%-15s%s\n"
fmt.Printf(printRoleFmtStr, "Role Name:", roleName)
fmt.Printf(printRoleFmtStr, "Description:", role.Description)
fmt.Printf("Policies:\n")
fmt.Printf("%s\n", proj.ProjectPoliciesString())
fmt.Printf("JWT Tokens:\n")
// TODO(jessesuen): print groups
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "ID\tISSUED-AT\tEXPIRES-AT\n")
for _, token := range role.JWTTokens {
expiresAt := "<none>"
if token.ExpiresAt > 0 {
expiresAt = humanizeTimestamp(token.ExpiresAt)
}
fmt.Fprintf(w, "%d\t%s\t%s\n", token.IssuedAt, humanizeTimestamp(token.IssuedAt), expiresAt)
}
_ = w.Flush()
},
}
return command
}
// NewProjectRoleAddGroupCommand returns a new instance of an `argocd proj role add-group` command
func NewProjectRoleAddGroupCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "add-group PROJECT ROLE-NAME GROUP-CLAIM",
Short: "Add a policy to a project role",
Run: func(c *cobra.Command, args []string) {
if len(args) != 3 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName, roleName, groupName := args[0], args[1], args[2]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
updated, err := projectutil.AddGroupToRole(proj, roleName, groupName)
errors.CheckError(err)
if !updated {
fmt.Printf("Group '%s' already present in role '%s'\n", groupName, roleName)
return
}
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
fmt.Printf("Group '%s' added to role '%s'\n", groupName, roleName)
},
}
return command
}
// NewProjectRoleRemoveGroupCommand returns a new instance of an `argocd proj role remove-group` command
func NewProjectRoleRemoveGroupCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var command = &cobra.Command{
Use: "remove-group PROJECT ROLE-NAME GROUP-CLAIM",
Short: "Remove a group claim from a role within a project",
Run: func(c *cobra.Command, args []string) {
if len(args) != 3 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName, roleName, groupName := args[0], args[1], args[2]
conn, projIf := argocdclient.NewClientOrDie(clientOpts).NewProjectClientOrDie()
defer util.Close(conn)
proj, err := projIf.Get(context.Background(), &project.ProjectQuery{Name: projName})
errors.CheckError(err)
updated, err := projectutil.RemoveGroupFromRole(proj, roleName, groupName)
errors.CheckError(err)
if !updated {
fmt.Printf("Group '%s' not present in role '%s'\n", groupName, roleName)
return
}
_, err = projIf.Update(context.Background(), &project.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
fmt.Printf("Group '%s' removed from role '%s'\n", groupName, roleName)
},
}
return command
}
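Both group commands depend on a contract of the projectutil helpers: the returned boolean reports whether the project spec actually changed, so the CLI can print an informative message and skip a redundant Update call. The helpers themselves are not part of this diff; the following is a minimal sketch of that contract with stand-in types (the real AddGroupToRole also returns an error, e.g. for an unknown role):

    package main

    import "fmt"

    // Minimal stand-ins for the real AppProject/Role types in
    // pkg/apis/application/v1alpha1 (illustrative only).
    type role struct {
        name   string
        groups []string
    }

    type project struct {
        roles []role
    }

    // addGroupToRole sketches the contract the CLI relies on: return true only
    // if the project was actually modified, so callers can avoid a no-op Update.
    func addGroupToRole(p *project, roleName, group string) bool {
        for i := range p.roles {
            if p.roles[i].name != roleName {
                continue
            }
            for _, g := range p.roles[i].groups {
                if g == group {
                    return false // already present; nothing to update
                }
            }
            p.roles[i].groups = append(p.roles[i].groups, group)
            return true
        }
        return false // role not found (the real helper reports this as an error)
    }

    func main() {
        p := &project{roles: []role{{name: "ci"}}}
        fmt.Println(addGroupToRole(p, "ci", "my-org:team-a")) // true: added
        fmt.Println(addGroupToRole(p, "ci", "my-org:team-a")) // false: already present
    }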


@@ -1,15 +1,18 @@
package commands
import (
"context"
"fmt"
"os"
jwt "github.com/dgrijalva/jwt-go"
oidc "github.com/coreos/go-oidc"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
"github.com/argoproj/argo-cd/server/settings"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/localconfig"
"github.com/argoproj/argo-cd/util/session"
)
@@ -18,6 +21,7 @@ import (
func NewReloginCommand(globalClientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
password string
ssoPort int
)
var command = &cobra.Command{
Use: "relogin",
@@ -36,28 +40,34 @@ func NewReloginCommand(globalClientOpts *argocdclient.ClientOptions) *cobra.Comm
configCtx, err := localCfg.ResolveContext(localCfg.CurrentContext)
errors.CheckError(err)
parser := &jwt.Parser{
SkipClaimsValidation: true,
}
claims := jwt.StandardClaims{}
_, _, err = parser.ParseUnverified(configCtx.User.AuthToken, &claims)
errors.CheckError(err)
var tokenString string
var refreshToken string
clientOpts := argocdclient.ClientOptions{
ConfigPath: "",
ServerAddr: configCtx.Server.Server,
Insecure: configCtx.Server.Insecure,
GRPCWeb: globalClientOpts.GRPCWeb,
PlainText: configCtx.Server.PlainText,
}
acdClient := argocdclient.NewClientOrDie(&clientOpts)
claims, err := configCtx.User.Claims()
errors.CheckError(err)
if claims.Issuer == session.SessionManagerClaimsIssuer {
clientOpts := argocdclient.ClientOptions{
ConfigPath: "",
ServerAddr: configCtx.Server.Server,
Insecure: configCtx.Server.Insecure,
PlainText: configCtx.Server.PlainText,
}
acdClient := argocdclient.NewClientOrDie(&clientOpts)
fmt.Printf("Relogging in as '%s'\n", claims.Subject)
tokenString = passwordLogin(acdClient, claims.Subject, password)
} else {
fmt.Println("Reinitiating SSO login")
tokenString, refreshToken = oauth2Login(configCtx.Server.Server, configCtx.Server.PlainText)
setConn, setIf := acdClient.NewSettingsClientOrDie()
defer util.Close(setConn)
ctx := context.Background()
httpClient, err := acdClient.HTTPClient()
errors.CheckError(err)
ctx = oidc.ClientContext(ctx, httpClient)
acdSet, err := setIf.Get(ctx, &settings.SettingsQuery{})
errors.CheckError(err)
oauth2conf, provider, err := acdClient.OIDCConfig(ctx, acdSet)
errors.CheckError(err)
tokenString, refreshToken = oauth2Login(ctx, ssoPort, oauth2conf, provider)
}
localCfg.UpsertUser(localconfig.User{
@@ -71,5 +81,6 @@ func NewReloginCommand(globalClientOpts *argocdclient.ClientOptions) *cobra.Comm
},
}
command.Flags().StringVar(&password, "password", "", "the password of an account to authenticate")
command.Flags().IntVar(&ssoPort, "sso-port", DefaultSSOLocalPort, "port to run local OAuth2 login application")
return command
}
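The relogin command above picks between password and SSO login by decoding the cached token's claims without verifying its signature, then comparing the issuer. A self-contained sketch of that inspection using the same dgrijalva/jwt-go parser (the token string and issuer value here are placeholders, not real Argo CD values):

    package main

    import (
        "fmt"

        jwt "github.com/dgrijalva/jwt-go"
    )

    func main() {
        // In the CLI this comes from the local config context's AuthToken.
        tokenString := "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..." // placeholder, truncated

        // SkipClaimsValidation lets us read claims from an expired token,
        // which is exactly the relogin situation.
        parser := &jwt.Parser{SkipClaimsValidation: true}
        claims := jwt.StandardClaims{}
        // ParseUnverified decodes the claims without checking the signature;
        // we only need the issuer to pick a login flow, not to trust the token.
        _, _, err := parser.ParseUnverified(tokenString, &claims)
        if err != nil {
            fmt.Println("cannot decode token:", err)
            return
        }
        if claims.Issuer == "argocd" { // stand-in for session.SessionManagerClaimsIssuer
            fmt.Println("local account:", claims.Subject, "-> password login")
        } else {
            fmt.Println("issuer:", claims.Issuer, "-> SSO login")
        }
    }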


@@ -39,9 +39,10 @@ func NewRepoCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
// NewRepoAddCommand returns a new instance of an `argocd repo add` command
func NewRepoAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
repo appsv1.Repository
upsert bool
sshPrivateKeyPath string
repo appsv1.Repository
upsert bool
sshPrivateKeyPath string
insecureIgnoreHostKey bool
)
var command = &cobra.Command{
Use: "add REPO",
@@ -59,14 +60,15 @@ func NewRepoAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
}
repo.SSHPrivateKey = string(keyData)
}
repo.InsecureIgnoreHostKey = insecureIgnoreHostKey
// First test the repo *without* username/password. This gives us a hint on whether this
// is a private repo.
// NOTE: it is important not to run git commands to test git credentials on the user's
// system since it may mess with their git credential store (e.g. osx keychain).
// See issue #315
err := git.TestRepo(repo.Repo, "", "", repo.SSHPrivateKey)
err := git.TestRepo(repo.Repo, "", "", repo.SSHPrivateKey, repo.InsecureIgnoreHostKey)
if err != nil {
if git.IsSSHURL(repo.Repo) {
if yes, _ := git.IsSSHURL(repo.Repo); yes {
// If we failed using git SSH credentials, then the repo is automatically bad
log.Fatal(err)
}
@@ -87,7 +89,8 @@ func NewRepoAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
}
command.Flags().StringVar(&repo.Username, "username", "", "username to the repository")
command.Flags().StringVar(&repo.Password, "password", "", "password to the repository")
command.Flags().StringVar(&sshPrivateKeyPath, "sshPrivateKeyPath", "", "path to the private ssh key (e.g. ~/.ssh/id_rsa)")
command.Flags().StringVar(&sshPrivateKeyPath, "ssh-private-key-path", "", "path to the private ssh key (e.g. ~/.ssh/id_rsa)")
command.Flags().BoolVar(&insecureIgnoreHostKey, "insecure-ignore-host-key", false, "disables SSH strict host key checking")
command.Flags().BoolVar(&upsert, "upsert", false, "Override an existing repository with the same name even if the spec differs")
return command
}
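The rename from sshPrivateKeyPath to ssh-private-key-path reflects the convention that cobra flag names are user-facing and therefore kebab-case, while the backing Go variable stays camelCase. A minimal standalone illustration:

    package main

    import (
        "fmt"

        "github.com/spf13/cobra"
    )

    func main() {
        var sshPrivateKeyPath string
        cmd := &cobra.Command{
            Use: "add REPO",
            Run: func(c *cobra.Command, args []string) {
                // The Go variable stays camelCase; only the CLI-visible
                // flag name uses kebab-case.
                fmt.Println("key path:", sshPrivateKeyPath)
            },
        }
        cmd.Flags().StringVar(&sshPrivateKeyPath, "ssh-private-key-path", "", "path to the private ssh key (e.g. ~/.ssh/id_rsa)")
        if err := cmd.Execute(); err != nil {
            fmt.Println(err)
        }
    }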


@@ -1,13 +1,26 @@
package commands
import (
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
"github.com/argoproj/argo-cd/util/localconfig"
"github.com/spf13/cobra"
"k8s.io/client-go/tools/clientcmd"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
"github.com/argoproj/argo-cd/util/cli"
"github.com/argoproj/argo-cd/util/config"
"github.com/argoproj/argo-cd/util/localconfig"
)
func init() {
cobra.OnInitialize(initConfig)
}
var logLevel string
func initConfig() {
cli.SetLogLevel(logLevel)
}
// NewCommand returns a new instance of an argocd command
func NewCommand() *cobra.Command {
var (
@@ -17,7 +30,7 @@ func NewCommand() *cobra.Command {
var command = &cobra.Command{
Use: cliName,
Short: "argocd controls a ArgoCD server",
Short: "argocd controls a Argo CD server",
Run: func(c *cobra.Command, args []string) {
c.HelpFunc()(c, args)
},
@@ -35,12 +48,13 @@ func NewCommand() *cobra.Command {
defaultLocalConfigPath, err := localconfig.DefaultLocalConfigPath()
errors.CheckError(err)
command.PersistentFlags().StringVar(&clientOpts.ConfigPath, "config", defaultLocalConfigPath, "Path to ArgoCD config")
command.PersistentFlags().StringVar(&clientOpts.ServerAddr, "server", "", "ArgoCD server address")
command.PersistentFlags().BoolVar(&clientOpts.PlainText, "plaintext", false, "Disable TLS")
command.PersistentFlags().BoolVar(&clientOpts.Insecure, "insecure", false, "Skip server certificate and domain verification")
command.PersistentFlags().StringVar(&clientOpts.CertFile, "server-crt", "", "Server certificate file")
command.PersistentFlags().StringVar(&clientOpts.AuthToken, "auth-token", "", "Authentication token")
command.PersistentFlags().StringVar(&clientOpts.ConfigPath, "config", config.GetFlag("config", defaultLocalConfigPath), "Path to Argo CD config")
command.PersistentFlags().StringVar(&clientOpts.ServerAddr, "server", config.GetFlag("server", ""), "Argo CD server address")
command.PersistentFlags().BoolVar(&clientOpts.PlainText, "plaintext", config.GetBoolFlag("plaintext"), "Disable TLS")
command.PersistentFlags().BoolVar(&clientOpts.Insecure, "insecure", config.GetBoolFlag("insecure"), "Skip server certificate and domain verification")
command.PersistentFlags().StringVar(&clientOpts.CertFile, "server-crt", config.GetFlag("server-crt", ""), "Server certificate file")
command.PersistentFlags().StringVar(&clientOpts.AuthToken, "auth-token", config.GetFlag("auth-token", ""), "Authentication token")
command.PersistentFlags().BoolVar(&clientOpts.GRPCWeb, "grpc-web", config.GetBoolFlag("grpc-web"), "Enables gRPC-web protocol. Useful if Argo CD server is behind a proxy which does not support HTTP2.")
command.PersistentFlags().StringVar(&logLevel, "loglevel", config.GetFlag("loglevel", "info"), "Set the logging level. One of: debug|info|warn|error")
return command
}


@@ -4,12 +4,13 @@ import (
"context"
"fmt"
"github.com/golang/protobuf/ptypes/empty"
"github.com/spf13/cobra"
argocd "github.com/argoproj/argo-cd"
"github.com/argoproj/argo-cd/errors"
argocdclient "github.com/argoproj/argo-cd/pkg/apiclient"
"github.com/argoproj/argo-cd/util"
"github.com/golang/protobuf/ptypes/empty"
"github.com/spf13/cobra"
)
// NewVersionCmd returns a new `version` command to be used as a sub-command to root


@@ -1,105 +1,116 @@
package common
import (
rbacv1 "k8s.io/api/rbac/v1"
"github.com/argoproj/argo-cd/pkg/apis/application"
// Default service addresses and URLs of Argo CD internal services
const (
// DefaultRepoServerAddr is the gRPC address of the Argo CD repo server
DefaultRepoServerAddr = "argocd-repo-server:8081"
// DefaultDexServerAddr is the HTTP address of the Dex OIDC server, which we run a reverse proxy against
DefaultDexServerAddr = "http://argocd-dex-server:5556"
// DefaultRedisAddr is the default redis address
DefaultRedisAddr = "argocd-redis:6379"
)
// Kubernetes ConfigMap and Secret resource names which hold Argo CD settings
const (
// MetadataPrefix is the prefix used for our labels and annotations
MetadataPrefix = "argocd.argoproj.io"
// SecretTypeRepository indicates a secret type of repository
SecretTypeRepository = "repository"
// SecretTypeCluster indicates a secret type of cluster
SecretTypeCluster = "cluster"
// AuthCookieName is the HTTP cookie name where we store our auth token
AuthCookieName = "argocd.token"
// ResourcesFinalizerName is the name of the application CRD finalizer
ResourcesFinalizerName = "resources-finalizer." + MetadataPrefix
// KubernetesInternalAPIServerAddr is the address of the k8s API server when accessed from inside the cluster
KubernetesInternalAPIServerAddr = "https://kubernetes.default.svc"
)
const (
ArgoCDAdminUsername = "admin"
ArgoCDSecretName = "argocd-secret"
ArgoCDConfigMapName = "argocd-cm"
ArgoCDSecretName = "argocd-secret"
ArgoCDRBACConfigMapName = "argocd-rbac-cm"
)
const (
PortAPIServer = 8080
PortRepoServer = 8081
PortArgoCDMetrics = 8082
PortArgoCDAPIServerMetrics = 8083
PortRepoServerMetrics = 8084
)
// Argo CD application related constants
const (
// KubernetesInternalAPIServerAddr is the address of the k8s API server when accessed from inside the cluster
KubernetesInternalAPIServerAddr = "https://kubernetes.default.svc"
// DefaultAppProjectName contains name of 'default' app project, which is available in every Argo CD installation
DefaultAppProjectName = "default"
// ArgoCDAdminUsername is the username of the 'admin' user
ArgoCDAdminUsername = "admin"
// ArgoCDUserAgentName is the default user-agent name used by the gRPC API client library and grpc-gateway
ArgoCDUserAgentName = "argocd-client"
// AuthCookieName is the HTTP cookie name where we store our auth token
AuthCookieName = "argocd.token"
// RevisionHistoryLimit is the max number of successful syncs to keep in history
RevisionHistoryLimit = 10
// K8sClientConfigQPS controls the QPS to be used in K8s REST client configs
K8sClientConfigQPS = 25
// K8sClientConfigBurst controls the burst to be used in K8s REST client configs
K8sClientConfigBurst = 50
)
// Dex related constants
const (
// DexAPIEndpoint is the endpoint where we serve the Dex API server
DexAPIEndpoint = "/api/dex"
// LoginEndpoint is ArgoCD's shorthand login endpoint which redirects to dex's OAuth 2.0 provider's consent page
// LoginEndpoint is Argo CD's shorthand login endpoint which redirects to dex's OAuth 2.0 provider's consent page
LoginEndpoint = "/auth/login"
// CallbackEndpoint is ArgoCD's final callback endpoint we reach after OAuth 2.0 login flow has been completed
// CallbackEndpoint is Argo CD's final callback endpoint we reach after OAuth 2.0 login flow has been completed
CallbackEndpoint = "/auth/callback"
// ArgoCDClientAppName is name of the Oauth client app used when registering our web app to dex
ArgoCDClientAppName = "ArgoCD"
ArgoCDClientAppName = "Argo CD"
// ArgoCDClientAppID is the Oauth client ID we will use when registering our app to dex
ArgoCDClientAppID = "argo-cd"
// ArgoCDCLIClientAppName is name of the Oauth client app used when registering our CLI to dex
ArgoCDCLIClientAppName = "ArgoCD CLI"
ArgoCDCLIClientAppName = "Argo CD CLI"
// ArgoCDCLIClientAppID is the Oauth client ID we will use when registering our CLI to dex
ArgoCDCLIClientAppID = "argo-cd-cli"
)
// Resource metadata labels and annotations (keys and values) used by Argo CD components
const (
// LabelKeyAppInstance is the label key to use to uniquely identify the instance of an application
// The Argo CD application name is used as the instance name
LabelKeyAppInstance = "app.kubernetes.io/instance"
// LabelKeyLegacyApplicationName is the legacy label (v0.10 and below), superseded by 'app.kubernetes.io/instance'
LabelKeyLegacyApplicationName = "applications.argoproj.io/app-name"
// LabelKeySecretType contains the type of argocd secret (currently: 'cluster')
LabelKeySecretType = "argocd.argoproj.io/secret-type"
// LabelValueSecretTypeCluster indicates a secret type of cluster
LabelValueSecretTypeCluster = "cluster"
// AnnotationKeyHook contains the hook type of a resource
AnnotationKeyHook = "argocd.argoproj.io/hook"
// AnnotationKeyHookDeletePolicy is the policy of deleting a hook
AnnotationKeyHookDeletePolicy = "argocd.argoproj.io/hook-delete-policy"
// AnnotationKeyRefresh is the annotation key which indicates that app needs to be refreshed. Removed by application controller after app is refreshed.
// Might take values 'normal'/'hard'. Value 'hard' means manifest cache and target cluster state cache should be invalidated before refresh.
AnnotationKeyRefresh = "argocd.argoproj.io/refresh"
// AnnotationKeyManagedBy is the annotation name which indicates that a k8s resource is managed by an application.
AnnotationKeyManagedBy = "managed-by"
// AnnotationValueManagedByArgoCD is a 'managed-by' annotation value for resources managed by Argo CD
AnnotationValueManagedByArgoCD = "argocd.argoproj.io"
// AnnotationKeyHelmHook is the helm hook annotation
AnnotationKeyHelmHook = "helm.sh/hook"
// AnnotationValueHelmHookCRDInstall is a value of crd helm hook
AnnotationValueHelmHookCRDInstall = "crd-install"
// ResourcesFinalizerName the finalizer value which we inject to finalize deletion of an application
ResourcesFinalizerName = "resources-finalizer.argocd.argoproj.io"
)
// Environment variables for tuning and debugging Argo CD
const (
// EnvVarSSODebug is an environment variable to enable additional OAuth debugging in the API server
EnvVarSSODebug = "ARGOCD_SSO_DEBUG"
// EnvVarRBACDebug is an environment variable to enable additional RBAC debugging in the API server
EnvVarRBACDebug = "ARGOCD_RBAC_DEBUG"
// DefaultAppProjectName contains name of default app project. The default app project allows deploying application to any cluster.
DefaultAppProjectName = "default"
// EnvVarFakeInClusterConfig is an environment variable to fake an in-cluster RESTConfig using
// the current kubectl context (for development purposes)
EnvVarFakeInClusterConfig = "ARGOCD_FAKE_IN_CLUSTER"
)
var (
// LabelKeyAppInstance refers to the application instance resource name
LabelKeyAppInstance = MetadataPrefix + "/app-instance"
// LabelKeySecretType contains the type of argocd secret (either 'cluster' or 'repo')
LabelKeySecretType = MetadataPrefix + "/secret-type"
// AnnotationConnectionStatus contains connection state status
AnnotationConnectionStatus = MetadataPrefix + "/connection-status"
// AnnotationConnectionMessage contains additional information about connection status
AnnotationConnectionMessage = MetadataPrefix + "/connection-message"
// AnnotationConnectionModifiedAt contains timestamp when connection state had been modified
AnnotationConnectionModifiedAt = MetadataPrefix + "/connection-modified-at"
// AnnotationHook contains the hook type of a resource
AnnotationHook = MetadataPrefix + "/hook"
// AnnotationHookDeletePolicy is the policy of deleting a hook
AnnotationHookDeletePolicy = MetadataPrefix + "/hook-delete-policy"
// AnnotationHelmHook is the helm hook annotation
AnnotationHelmHook = "helm.sh/hook"
// LabelKeyApplicationControllerInstanceID is the label which allows separating applications among multiple running application controllers.
LabelKeyApplicationControllerInstanceID = application.ApplicationFullName + "/controller-instanceid"
// LabelApplicationName is the label which indicates that a resource belongs to the application with the specified name
LabelApplicationName = application.ApplicationFullName + "/app-name"
// AnnotationKeyRefresh is the annotation key in the application which is updated with an
// arbitrary value (i.e. timestamp) on a git event, to force the controller to wake up and
// re-evaluate the application
AnnotationKeyRefresh = application.ApplicationFullName + "/refresh"
)
// ArgoCDManagerServiceAccount is the name of the service account for managing a cluster
const (
ArgoCDManagerServiceAccount = "argocd-manager"
ArgoCDManagerClusterRole = "argocd-manager-role"
ArgoCDManagerClusterRoleBinding = "argocd-manager-role-binding"
// MinClientVersion is the minimum client version that can interface with this API server.
// When introducing breaking changes to the API or datastructures, this number should be bumped.
// The value here may be lower than the current value in VERSION
MinClientVersion = "1.0.0"
// CacheVersion is the version of objects cached using util/cache/cache.go.
// Number should be bumped in case of backward incompatible change to make sure cache is invalidated after upgrade.
CacheVersion = "1.0.0"
)
// ArgoCDManagerPolicyRules are the policies to give argocd-manager
var ArgoCDManagerPolicyRules = []rbacv1.PolicyRule{
{
APIGroups: []string{"*"},
Resources: []string{"*"},
Verbs: []string{"*"},
},
}


@@ -4,7 +4,6 @@ import (
"fmt"
"time"
"github.com/argoproj/argo-cd/errors"
log "github.com/sirupsen/logrus"
apiv1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
@@ -12,15 +11,34 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
)
// ArgoCDManagerServiceAccount is the name of the service account for managing a cluster
const (
ArgoCDManagerServiceAccount = "argocd-manager"
ArgoCDManagerClusterRole = "argocd-manager-role"
ArgoCDManagerClusterRoleBinding = "argocd-manager-role-binding"
)
// ArgoCDManagerPolicyRules are the policies to give argocd-manager
var ArgoCDManagerPolicyRules = []rbacv1.PolicyRule{
{
APIGroups: []string{"*"},
Resources: []string{"*"},
Verbs: []string{"*"},
},
{
NonResourceURLs: []string{"*"},
Verbs: []string{"*"},
},
}
// CreateServiceAccount creates a service account
func CreateServiceAccount(
clientset kubernetes.Interface,
serviceAccountName string,
namespace string,
) {
) error {
serviceAccount := apiv1.ServiceAccount{
TypeMeta: metav1.TypeMeta{
APIVersion: "v1",
@@ -34,12 +52,13 @@ func CreateServiceAccount(
_, err := clientset.CoreV1().ServiceAccounts(namespace).Create(&serviceAccount)
if err != nil {
if !apierr.IsAlreadyExists(err) {
log.Fatalf("Failed to create service account '%s': %v\n", serviceAccountName, err)
return fmt.Errorf("Failed to create service account %q: %v", serviceAccountName, err)
}
fmt.Printf("ServiceAccount '%s' already exists\n", serviceAccountName)
return
log.Infof("ServiceAccount %q already exists", serviceAccountName)
return nil
}
fmt.Printf("ServiceAccount '%s' created\n", serviceAccountName)
log.Infof("ServiceAccount %q created", serviceAccountName)
return nil
}
// CreateClusterRole creates a cluster role
@@ -47,7 +66,7 @@ func CreateClusterRole(
clientset kubernetes.Interface,
clusterRoleName string,
rules []rbacv1.PolicyRule,
) {
) error {
clusterRole := rbacv1.ClusterRole{
TypeMeta: metav1.TypeMeta{
APIVersion: "rbac.authorization.k8s.io/v1",
@@ -62,16 +81,17 @@ func CreateClusterRole(
_, err := crclient.Create(&clusterRole)
if err != nil {
if !apierr.IsAlreadyExists(err) {
log.Fatalf("Failed to create ClusterRole '%s': %v\n", clusterRoleName, err)
return fmt.Errorf("Failed to create ClusterRole %q: %v", clusterRoleName, err)
}
_, err = crclient.Update(&clusterRole)
if err != nil {
log.Fatalf("Failed to update ClusterRole '%s': %v\n", clusterRoleName, err)
return fmt.Errorf("Failed to update ClusterRole %q: %v", clusterRoleName, err)
}
fmt.Printf("ClusterRole '%s' updated\n", clusterRoleName)
log.Infof("ClusterRole %q updated", clusterRoleName)
} else {
fmt.Printf("ClusterRole '%s' created\n", clusterRoleName)
log.Infof("ClusterRole %q created", clusterRoleName)
}
return nil
}
// CreateClusterRoleBinding creates a ClusterRoleBinding
@@ -81,7 +101,7 @@ func CreateClusterRoleBinding(
serviceAccountName,
clusterRoleName string,
namespace string,
) {
) error {
roleBinding := rbacv1.ClusterRoleBinding{
TypeMeta: metav1.TypeMeta{
APIVersion: "rbac.authorization.k8s.io/v1",
@@ -106,22 +126,33 @@ func CreateClusterRoleBinding(
_, err := clientset.RbacV1().ClusterRoleBindings().Create(&roleBinding)
if err != nil {
if !apierr.IsAlreadyExists(err) {
log.Fatalf("Failed to create ClusterRoleBinding %s: %v\n", clusterBindingRoleName, err)
return fmt.Errorf("Failed to create ClusterRoleBinding %s: %v", clusterBindingRoleName, err)
}
fmt.Printf("ClusterRoleBinding '%s' already exists\n", clusterBindingRoleName)
return
log.Infof("ClusterRoleBinding %q already exists", clusterBindingRoleName)
return nil
}
fmt.Printf("ClusterRoleBinding '%s' created, bound '%s' to '%s'\n", clusterBindingRoleName, serviceAccountName, clusterRoleName)
log.Infof("ClusterRoleBinding %q created, bound %q to %q", clusterBindingRoleName, serviceAccountName, clusterRoleName)
return nil
}
// InstallClusterManagerRBAC installs RBAC resources for a cluster manager to operate a cluster. Returns a token
func InstallClusterManagerRBAC(conf *rest.Config) string {
func InstallClusterManagerRBAC(clientset kubernetes.Interface) (string, error) {
const ns = "kube-system"
clientset, err := kubernetes.NewForConfig(conf)
errors.CheckError(err)
CreateServiceAccount(clientset, ArgoCDManagerServiceAccount, ns)
CreateClusterRole(clientset, ArgoCDManagerClusterRole, ArgoCDManagerPolicyRules)
CreateClusterRoleBinding(clientset, ArgoCDManagerClusterRoleBinding, ArgoCDManagerServiceAccount, ArgoCDManagerClusterRole, ns)
err := CreateServiceAccount(clientset, ArgoCDManagerServiceAccount, ns)
if err != nil {
return "", err
}
err = CreateClusterRole(clientset, ArgoCDManagerClusterRole, ArgoCDManagerPolicyRules)
if err != nil {
return "", err
}
err = CreateClusterRoleBinding(clientset, ArgoCDManagerClusterRoleBinding, ArgoCDManagerServiceAccount, ArgoCDManagerClusterRole, ns)
if err != nil {
return "", err
}
var serviceAccount *apiv1.ServiceAccount
var secretName string
@@ -137,52 +168,51 @@ func InstallClusterManagerRBAC(conf *rest.Config) string {
return true, nil
})
if err != nil {
log.Fatalf("Failed to wait for service account secret: %v", err)
return "", fmt.Errorf("Failed to wait for service account secret: %v", err)
}
secret, err := clientset.CoreV1().Secrets(ns).Get(secretName, metav1.GetOptions{})
if err != nil {
log.Fatalf("Failed to retrieve secret '%s': %v", secretName, err)
return "", fmt.Errorf("Failed to retrieve secret %q: %v", secretName, err)
}
token, ok := secret.Data["token"]
if !ok {
log.Fatalf("Secret '%s' for service account '%s' did not have a token", secretName, serviceAccount)
return "", fmt.Errorf("Secret %q for service account %q did not have a token", secretName, serviceAccount)
}
return string(token)
return string(token), nil
}
// UninstallClusterManagerRBAC removes RBAC resources for a cluster manager to operate a cluster
func UninstallClusterManagerRBAC(conf *rest.Config) {
clientset, err := kubernetes.NewForConfig(conf)
errors.CheckError(err)
UninstallRBAC(clientset, "kube-system", ArgoCDManagerClusterRoleBinding, ArgoCDManagerClusterRole, ArgoCDManagerServiceAccount)
func UninstallClusterManagerRBAC(clientset kubernetes.Interface) error {
return UninstallRBAC(clientset, "kube-system", ArgoCDManagerClusterRoleBinding, ArgoCDManagerClusterRole, ArgoCDManagerServiceAccount)
}
// UninstallRBAC uninstalls RBAC related resources for a binding, role, and service account
func UninstallRBAC(clientset kubernetes.Interface, namespace, bindingName, roleName, serviceAccount string) {
func UninstallRBAC(clientset kubernetes.Interface, namespace, bindingName, roleName, serviceAccount string) error {
if err := clientset.RbacV1().ClusterRoleBindings().Delete(bindingName, &metav1.DeleteOptions{}); err != nil {
if !apierr.IsNotFound(err) {
log.Fatalf("Failed to delete ClusterRoleBinding: %v\n", err)
return fmt.Errorf("Failed to delete ClusterRoleBinding: %v", err)
}
fmt.Printf("ClusterRoleBinding '%s' not found\n", bindingName)
log.Infof("ClusterRoleBinding %q not found", bindingName)
} else {
fmt.Printf("ClusterRoleBinding '%s' deleted\n", bindingName)
log.Infof("ClusterRoleBinding %q deleted", bindingName)
}
if err := clientset.RbacV1().ClusterRoles().Delete(roleName, &metav1.DeleteOptions{}); err != nil {
if !apierr.IsNotFound(err) {
log.Fatalf("Failed to delete ClusterRole: %v\n", err)
return fmt.Errorf("Failed to delete ClusterRole: %v", err)
}
fmt.Printf("ClusterRole '%s' not found\n", roleName)
log.Infof("ClusterRole %q not found", roleName)
} else {
fmt.Printf("ClusterRole '%s' deleted\n", roleName)
log.Infof("ClusterRole %q deleted", roleName)
}
if err := clientset.CoreV1().ServiceAccounts(namespace).Delete(serviceAccount, &metav1.DeleteOptions{}); err != nil {
if !apierr.IsNotFound(err) {
log.Fatalf("Failed to delete ServiceAccount: %v\n", err)
return fmt.Errorf("Failed to delete ServiceAccount: %v", err)
}
fmt.Printf("ServiceAccount '%s' in namespace '%s' not found\n", serviceAccount, namespace)
log.Infof("ServiceAccount %q in namespace %q not found", serviceAccount, namespace)
} else {
fmt.Printf("ServiceAccount '%s' deleted\n", serviceAccount)
log.Infof("ServiceAccount %q deleted", serviceAccount)
}
return nil
}
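Because the RBAC helpers now take a kubernetes.Interface and return errors instead of calling log.Fatal, they can be exercised against client-go's fake clientset. A minimal sketch of the create-or-tolerate-AlreadyExists pattern used by CreateServiceAccount, restated here so the example is self-contained (signatures match the 2019-era client-go used by this diff):

    package main

    import (
        "fmt"

        apiv1 "k8s.io/api/core/v1"
        apierr "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/kubernetes/fake"
    )

    // createServiceAccount mirrors the refactored helper: create the object,
    // tolerate AlreadyExists, propagate any other error to the caller.
    func createServiceAccount(clientset kubernetes.Interface, name, namespace string) error {
        sa := apiv1.ServiceAccount{
            ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace},
        }
        _, err := clientset.CoreV1().ServiceAccounts(namespace).Create(&sa)
        if err != nil && !apierr.IsAlreadyExists(err) {
            return fmt.Errorf("failed to create service account %q: %v", name, err)
        }
        return nil
    }

    func main() {
        // The fake clientset satisfies kubernetes.Interface, so the
        // error-returning helpers run without a real cluster.
        clientset := fake.NewSimpleClientset()
        for i := 0; i < 2; i++ { // second call hits the AlreadyExists branch
            if err := createServiceAccount(clientset, "argocd-manager", "kube-system"); err != nil {
                fmt.Println("unexpected error:", err)
            }
        }
        fmt.Println("ok")
    }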

File diff suppressed because it is too large


@@ -0,0 +1,468 @@
package controller
import (
"context"
"testing"
"time"
"github.com/argoproj/argo-cd/common"
"github.com/ghodss/yaml"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/kubernetes/fake"
kubetesting "k8s.io/client-go/testing"
"k8s.io/client-go/tools/cache"
mockstatecache "github.com/argoproj/argo-cd/controller/cache/mocks"
argoappv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned/fake"
mockreposerver "github.com/argoproj/argo-cd/reposerver/mocks"
"github.com/argoproj/argo-cd/reposerver/repository"
mockrepoclient "github.com/argoproj/argo-cd/reposerver/repository/mocks"
"github.com/argoproj/argo-cd/test"
utilcache "github.com/argoproj/argo-cd/util/cache"
"github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/settings"
)
type fakeData struct {
apps []runtime.Object
manifestResponse *repository.ManifestResponse
managedLiveObjs map[kube.ResourceKey]*unstructured.Unstructured
}
func newFakeController(data *fakeData) *ApplicationController {
var clust corev1.Secret
err := yaml.Unmarshal([]byte(fakeCluster), &clust)
if err != nil {
panic(err)
}
// Mock out call to GenerateManifest
mockRepoClient := mockrepoclient.RepoServerServiceClient{}
mockRepoClient.On("GenerateManifest", mock.Anything, mock.Anything).Return(data.manifestResponse, nil)
mockRepoClientset := mockreposerver.Clientset{}
mockRepoClientset.On("NewRepoServerClient").Return(&fakeCloser{}, &mockRepoClient, nil)
secret := corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: "argocd-secret",
Namespace: test.FakeArgoCDNamespace,
},
Data: map[string][]byte{
"admin.password": []byte("test"),
"server.secretkey": []byte("test"),
},
}
cm := corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Name: "argocd-cm",
Namespace: test.FakeArgoCDNamespace,
},
Data: nil,
}
kubeClient := fake.NewSimpleClientset(&clust, &cm, &secret)
settingsMgr := settings.NewSettingsManager(context.Background(), kubeClient, test.FakeArgoCDNamespace)
ctrl, err := NewApplicationController(
test.FakeArgoCDNamespace,
settingsMgr,
kubeClient,
appclientset.NewSimpleClientset(data.apps...),
&mockRepoClientset,
utilcache.NewCache(utilcache.NewInMemoryCache(1*time.Hour)),
time.Minute,
)
if err != nil {
panic(err)
}
cancelProj := test.StartInformer(ctrl.projInformer)
defer cancelProj()
cancelApp := test.StartInformer(ctrl.appInformer)
defer cancelApp()
// Mock out call to GetManagedLiveObjs if fake data supplied
if data.managedLiveObjs != nil {
mockStateCache := mockstatecache.LiveStateCache{}
mockStateCache.On("GetManagedLiveObjs", mock.Anything, mock.Anything).Return(data.managedLiveObjs, nil)
mockStateCache.On("IsNamespaced", mock.Anything, mock.Anything).Return(true, nil)
ctrl.stateCache = &mockStateCache
ctrl.appStateManager.(*appStateManager).liveStateCache = &mockStateCache
}
return ctrl
}
type fakeCloser struct{}
func (f *fakeCloser) Close() error { return nil }
var fakeCluster = `
apiVersion: v1
data:
# {"bearerToken":"fake","tlsClientConfig":{"insecure":true},"awsAuthConfig":null}
config: eyJiZWFyZXJUb2tlbiI6ImZha2UiLCJ0bHNDbGllbnRDb25maWciOnsiaW5zZWN1cmUiOnRydWV9LCJhd3NBdXRoQ29uZmlnIjpudWxsfQ==
# minikube
name: aHR0cHM6Ly9sb2NhbGhvc3Q6NjQ0Mw==
# https://localhost:6443
server: aHR0cHM6Ly9sb2NhbGhvc3Q6NjQ0Mw==
kind: Secret
metadata:
labels:
argocd.argoproj.io/secret-type: cluster
name: some-secret
namespace: ` + test.FakeArgoCDNamespace + `
type: Opaque
`
var fakeApp = `
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
uid: "123"
name: my-app
namespace: ` + test.FakeArgoCDNamespace + `
spec:
destination:
namespace: ` + test.FakeDestNamespace + `
server: https://localhost:6443
project: default
source:
path: some/path
repoURL: https://github.com/argoproj/argocd-example-apps.git
syncPolicy:
automated: {}
status:
operationState:
finishedAt: 2018-09-21T23:50:29Z
message: successfully synced
operation:
sync:
revision: HEAD
phase: Succeeded
startedAt: 2018-09-21T23:50:25Z
syncResult:
resources:
- kind: RoleBinding
message: |-
rolebinding.rbac.authorization.k8s.io/always-outofsync reconciled
rolebinding.rbac.authorization.k8s.io/always-outofsync configured
name: always-outofsync
namespace: default
status: Synced
revision: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
source:
path: some/path
repoURL: https://github.com/argoproj/argocd-example-apps.git
`
func newFakeApp() *argoappv1.Application {
var app argoappv1.Application
err := yaml.Unmarshal([]byte(fakeApp), &app)
if err != nil {
panic(err)
}
return &app
}
func TestAutoSync(t *testing.T) {
app := newFakeApp()
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}})
syncStatus := argoappv1.SyncStatus{
Status: argoappv1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond := ctrl.autoSync(app, &syncStatus)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
assert.NotNil(t, app.Operation)
assert.NotNil(t, app.Operation.Sync)
assert.False(t, app.Operation.Sync.Prune)
}
func TestSkipAutoSync(t *testing.T) {
// Verify we skip when we previously synced to it in our most recent history
// Set current to 'aaaaa', desired to 'aaaa' and mark system OutOfSync
{
app := newFakeApp()
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}})
syncStatus := argoappv1.SyncStatus{
Status: argoappv1.SyncStatusCodeOutOfSync,
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
}
cond := ctrl.autoSync(app, &syncStatus)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
assert.Nil(t, app.Operation)
}
// Verify we skip when we are already Synced (even if revision is different)
{
app := newFakeApp()
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}})
syncStatus := argoappv1.SyncStatus{
Status: argoappv1.SyncStatusCodeSynced,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond := ctrl.autoSync(app, &syncStatus)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
assert.Nil(t, app.Operation)
}
// Verify we skip when auto-sync is disabled
{
app := newFakeApp()
app.Spec.SyncPolicy = nil
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}})
syncStatus := argoappv1.SyncStatus{
Status: argoappv1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond := ctrl.autoSync(app, &syncStatus)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
assert.Nil(t, app.Operation)
}
// Verify we skip when application is marked for deletion
{
app := newFakeApp()
now := metav1.Now()
app.DeletionTimestamp = &now
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}})
syncStatus := argoappv1.SyncStatus{
Status: argoappv1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond := ctrl.autoSync(app, &syncStatus)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
assert.Nil(t, app.Operation)
}
// Verify we skip when previous sync attempt failed and return error condition
// Set current to 'aaaaa', desired to 'bbbbb' and add 'bbbbb' to failure history
{
app := newFakeApp()
app.Status.OperationState = &argoappv1.OperationState{
Operation: argoappv1.Operation{
Sync: &argoappv1.SyncOperation{},
},
Phase: argoappv1.OperationFailed,
SyncResult: &argoappv1.SyncOperationResult{
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
Source: *app.Spec.Source.DeepCopy(),
},
}
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}})
syncStatus := argoappv1.SyncStatus{
Status: argoappv1.SyncStatusCodeOutOfSync,
Revision: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}
cond := ctrl.autoSync(app, &syncStatus)
assert.NotNil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
assert.Nil(t, app.Operation)
}
}
// TestAutoSyncIndicateError verifies we skip auto-sync and return error condition if previous sync failed
func TestAutoSyncIndicateError(t *testing.T) {
app := newFakeApp()
app.Spec.Source.Helm = &argoappv1.ApplicationSourceHelm{
Parameters: []argoappv1.HelmParameter{
{
Name: "a",
Value: "1",
},
},
}
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}})
syncStatus := argoappv1.SyncStatus{
Status: argoappv1.SyncStatusCodeOutOfSync,
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
}
app.Status.OperationState = &argoappv1.OperationState{
Operation: argoappv1.Operation{
Sync: &argoappv1.SyncOperation{
Source: app.Spec.Source.DeepCopy(),
},
},
Phase: argoappv1.OperationFailed,
SyncResult: &argoappv1.SyncOperationResult{
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
Source: *app.Spec.Source.DeepCopy(),
},
}
cond := ctrl.autoSync(app, &syncStatus)
assert.NotNil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
assert.Nil(t, app.Operation)
}
// TestAutoSyncParameterOverrides verifies we auto-sync if revision is same but parameter overrides are different
func TestAutoSyncParameterOverrides(t *testing.T) {
app := newFakeApp()
app.Spec.Source.Helm = &argoappv1.ApplicationSourceHelm{
Parameters: []argoappv1.HelmParameter{
{
Name: "a",
Value: "1",
},
},
}
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}})
syncStatus := argoappv1.SyncStatus{
Status: argoappv1.SyncStatusCodeOutOfSync,
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
}
app.Status.OperationState = &argoappv1.OperationState{
Operation: argoappv1.Operation{
Sync: &argoappv1.SyncOperation{
Source: &argoappv1.ApplicationSource{
Helm: &argoappv1.ApplicationSourceHelm{
Parameters: []argoappv1.HelmParameter{
{
Name: "a",
Value: "2", // this value changed
},
},
},
},
},
},
Phase: argoappv1.OperationFailed,
SyncResult: &argoappv1.SyncOperationResult{
Revision: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
},
}
cond := ctrl.autoSync(app, &syncStatus)
assert.Nil(t, cond)
app, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(test.FakeArgoCDNamespace).Get("my-app", metav1.GetOptions{})
assert.NoError(t, err)
assert.NotNil(t, app.Operation)
}
// TestFinalizeAppDeletion verifies application deletion
func TestFinalizeAppDeletion(t *testing.T) {
app := newFakeApp()
app.Spec.Destination.Namespace = test.FakeArgoCDNamespace
appObj := kube.MustToUnstructured(&app)
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}, managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{
kube.GetResourceKey(appObj): appObj,
}})
patched := false
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
defaultReactor := fakeAppCs.ReactionChain[0]
fakeAppCs.ReactionChain = nil
fakeAppCs.AddReactor("get", "*", func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
return defaultReactor.React(action)
})
fakeAppCs.AddReactor("patch", "*", func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
patched = true
return true, nil, nil
})
err := ctrl.finalizeApplicationDeletion(app)
assert.NoError(t, err)
assert.True(t, patched)
}
// TestNormalizeApplication verifies we normalize an application during reconciliation
func TestNormalizeApplication(t *testing.T) {
defaultProj := argoappv1.AppProject{
ObjectMeta: metav1.ObjectMeta{
Name: "default",
Namespace: test.FakeArgoCDNamespace,
},
Spec: argoappv1.AppProjectSpec{
SourceRepos: []string{"*"},
Destinations: []argoappv1.ApplicationDestination{
{
Server: "*",
Namespace: "*",
},
},
},
}
app := newFakeApp()
app.Spec.Project = ""
app.Spec.Source.Kustomize = &argoappv1.ApplicationSourceKustomize{NamePrefix: "foo-"}
data := fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &repository.ManifestResponse{
Manifests: []string{},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
Revision: "abc123",
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
{
// Verify we normalize the app because project is missing
ctrl := newFakeController(&data)
key, _ := cache.MetaNamespaceKeyFunc(app)
ctrl.appRefreshQueue.Add(key)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
fakeAppCs.ReactionChain = nil
normalized := false
fakeAppCs.AddReactor("patch", "*", func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
if patchAction, ok := action.(kubetesting.PatchAction); ok {
if string(patchAction.GetPatch()) == `{"spec":{"project":"default"}}` {
normalized = true
}
}
return true, nil, nil
})
ctrl.processAppRefreshQueueItem()
assert.True(t, normalized)
}
{
// Verify we don't unnecessarily normalize app when project is set
app.Spec.Project = "default"
data.apps[0] = app
ctrl := newFakeController(&data)
key, _ := cache.MetaNamespaceKeyFunc(app)
ctrl.appRefreshQueue.Add(key)
fakeAppCs := ctrl.applicationClientset.(*appclientset.Clientset)
fakeAppCs.ReactionChain = nil
normalized := false
fakeAppCs.AddReactor("patch", "*", func(action kubetesting.Action) (handled bool, ret runtime.Object, err error) {
if patchAction, ok := action.(kubetesting.PatchAction); ok {
if string(patchAction.GetPatch()) == `{"spec":{"project":"default"}}` {
normalized = true
}
}
return true, nil, nil
})
ctrl.processAppRefreshQueueItem()
assert.False(t, normalized)
}
}
func TestHandleAppUpdated(t *testing.T) {
app := newFakeApp()
app.Spec.Destination.Namespace = test.FakeArgoCDNamespace
app.Spec.Destination.Server = common.KubernetesInternalAPIServerAddr
ctrl := newFakeController(&fakeData{apps: []runtime.Object{app}})
ctrl.handleAppUpdated(app.Name, true, kube.GetObjectRef(kube.MustToUnstructured(app)))
isRequested, _ := ctrl.isRefreshRequested(app.Name)
assert.False(t, isRequested)
ctrl.handleAppUpdated(app.Name, true, corev1.ObjectReference{UID: "test", Kind: kube.DeploymentKind, Name: "test", Namespace: "default"})
isRequested, _ = ctrl.isRefreshRequested(app.Name)
assert.True(t, isRequested)
}
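TestNormalizeApplication drives the controller by pushing the application's namespace/name key onto appRefreshQueue. The sketch below isolates that pattern (cache.MetaNamespaceKeyFunc feeding a client-go workqueue) outside the controller; the Pod object is only a stand-in for any object with standard metadata:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/util/workqueue"
    )

    func main() {
        pod := &corev1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "my-app", Namespace: "argocd"}}

        // MetaNamespaceKeyFunc renders "namespace/name", the same key format
        // the controller's appRefreshQueue consumes.
        key, err := cache.MetaNamespaceKeyFunc(pod)
        if err != nil {
            panic(err)
        }

        queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
        queue.Add(key)

        item, shutdown := queue.Get()
        if !shutdown {
            fmt.Println("processing", item) // "processing argocd/my-app"
            queue.Done(item)
        }
    }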

controller/cache/cache.go

@@ -0,0 +1,191 @@
package cache
import (
"context"
"sync"
log "github.com/sirupsen/logrus"
v1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/watch"
"k8s.io/client-go/tools/cache"
"github.com/argoproj/argo-cd/controller/metrics"
appv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/db"
"github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/settings"
)
type LiveStateCache interface {
IsNamespaced(server string, obj *unstructured.Unstructured) (bool, error)
// Executes the given callback against the resource specified by the key and all its children
IterateHierarchy(server string, key kube.ResourceKey, action func(child appv1.ResourceNode)) error
// Returns the state of live nodes which correspond to the target nodes of the specified application.
GetManagedLiveObjs(a *appv1.Application, targetObjs []*unstructured.Unstructured) (map[kube.ResourceKey]*unstructured.Unstructured, error)
// Starts watching resources of each controlled cluster.
Run(ctx context.Context)
// Invalidate invalidates the entire cluster state cache
Invalidate()
}
type AppUpdatedHandler = func(appName string, fullRefresh bool, ref v1.ObjectReference)
func GetTargetObjKey(a *appv1.Application, un *unstructured.Unstructured, isNamespaced bool) kube.ResourceKey {
key := kube.GetResourceKey(un)
if !isNamespaced {
key.Namespace = ""
} else if isNamespaced && key.Namespace == "" {
key.Namespace = a.Spec.Destination.Namespace
}
return key
}
func NewLiveStateCache(
db db.ArgoDB,
appInformer cache.SharedIndexInformer,
settings *settings.ArgoCDSettings,
kubectl kube.Kubectl,
metricsServer *metrics.MetricsServer,
onAppUpdated AppUpdatedHandler) LiveStateCache {
return &liveStateCache{
appInformer: appInformer,
db: db,
clusters: make(map[string]*clusterInfo),
lock: &sync.Mutex{},
onAppUpdated: onAppUpdated,
kubectl: kubectl,
settings: settings,
metricsServer: metricsServer,
}
}
type liveStateCache struct {
db db.ArgoDB
clusters map[string]*clusterInfo
lock *sync.Mutex
appInformer cache.SharedIndexInformer
onAppUpdated AppUpdatedHandler
kubectl kube.Kubectl
settings *settings.ArgoCDSettings
metricsServer *metrics.MetricsServer
}
func (c *liveStateCache) getCluster(server string) (*clusterInfo, error) {
c.lock.Lock()
defer c.lock.Unlock()
info, ok := c.clusters[server]
if !ok {
cluster, err := c.db.GetCluster(context.Background(), server)
if err != nil {
return nil, err
}
info = &clusterInfo{
apisMeta: make(map[schema.GroupKind]*apiMeta),
lock: &sync.Mutex{},
nodes: make(map[kube.ResourceKey]*node),
nsIndex: make(map[string]map[kube.ResourceKey]*node),
onAppUpdated: c.onAppUpdated,
kubectl: c.kubectl,
cluster: cluster,
syncTime: nil,
syncLock: &sync.Mutex{},
log: log.WithField("server", cluster.Server),
settings: c.settings,
}
c.clusters[cluster.Server] = info
}
return info, nil
}
func (c *liveStateCache) getSyncedCluster(server string) (*clusterInfo, error) {
info, err := c.getCluster(server)
if err != nil {
return nil, err
}
err = info.ensureSynced()
if err != nil {
return nil, err
}
return info, nil
}
func (c *liveStateCache) Invalidate() {
log.Info("invalidating live state cache")
c.lock.Lock()
defer c.lock.Unlock()
for _, clust := range c.clusters {
clust.lock.Lock()
clust.invalidate()
clust.lock.Unlock()
}
log.Info("live state cache invalidated")
}
func (c *liveStateCache) IsNamespaced(server string, obj *unstructured.Unstructured) (bool, error) {
clusterInfo, err := c.getSyncedCluster(server)
if err != nil {
return false, err
}
return clusterInfo.isNamespaced(obj), nil
}
func (c *liveStateCache) IterateHierarchy(server string, key kube.ResourceKey, action func(child appv1.ResourceNode)) error {
clusterInfo, err := c.getSyncedCluster(server)
if err != nil {
return err
}
clusterInfo.iterateHierarchy(key, action)
return nil
}
func (c *liveStateCache) GetManagedLiveObjs(a *appv1.Application, targetObjs []*unstructured.Unstructured) (map[kube.ResourceKey]*unstructured.Unstructured, error) {
clusterInfo, err := c.getSyncedCluster(a.Spec.Destination.Server)
if err != nil {
return nil, err
}
return clusterInfo.getManagedLiveObjs(a, targetObjs, c.metricsServer)
}
func isClusterHasApps(apps []interface{}, cluster *appv1.Cluster) bool {
for _, obj := range apps {
if app, ok := obj.(*appv1.Application); ok && app.Spec.Destination.Server == cluster.Server {
return true
}
}
return false
}
// Run watches for resource changes annotated with the application label on all registered clusters and schedules a corresponding app refresh.
func (c *liveStateCache) Run(ctx context.Context) {
util.RetryUntilSucceed(func() error {
clusterEventCallback := func(event *db.ClusterEvent) {
c.lock.Lock()
defer c.lock.Unlock()
if cluster, ok := c.clusters[event.Cluster.Server]; ok {
if event.Type == watch.Deleted {
cluster.invalidate()
delete(c.clusters, event.Cluster.Server)
} else if event.Type == watch.Modified {
cluster.cluster = event.Cluster
cluster.invalidate()
}
} else if event.Type == watch.Added && isClusterHasApps(c.appInformer.GetStore().List(), event.Cluster) {
go func() {
// warm up cache for cluster with apps
_, _ = c.getSyncedCluster(event.Cluster.Server)
}()
}
}
return c.db.WatchClusters(ctx, clusterEventCallback)
}, "watch clusters", ctx, clusterRetryTimeout)
<-ctx.Done()
}
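getCluster is a mutex-guarded lazy initializer: the first lookup for a server constructs and caches its clusterInfo, later lookups reuse it, and the cluster event callback in Run evicts or invalidates entries when clusters change. A stripped-down sketch of that cache shape with generic stand-in types:

    package main

    import (
        "fmt"
        "sync"
    )

    type clusterEntry struct{ server string }

    type clusterCache struct {
        mu       sync.Mutex
        clusters map[string]*clusterEntry
    }

    // get returns the cached entry for a server, constructing it on first use.
    // The whole lookup-or-create runs under one lock, as in getCluster above.
    func (c *clusterCache) get(server string) *clusterEntry {
        c.mu.Lock()
        defer c.mu.Unlock()
        entry, ok := c.clusters[server]
        if !ok {
            entry = &clusterEntry{server: server}
            c.clusters[server] = entry
        }
        return entry
    }

    // evict mirrors the watch.Deleted branch of the cluster event callback.
    func (c *clusterCache) evict(server string) {
        c.mu.Lock()
        defer c.mu.Unlock()
        delete(c.clusters, server)
    }

    func main() {
        cache := &clusterCache{clusters: map[string]*clusterEntry{}}
        a := cache.get("https://localhost:6443")
        b := cache.get("https://localhost:6443")
        fmt.Println(a == b) // true: second lookup reuses the cached entry
        cache.evict("https://localhost:6443")
    }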

controller/cache/cluster.go

@@ -0,0 +1,479 @@
package cache
import (
"context"
"fmt"
"runtime/debug"
"sync"
"time"
"github.com/argoproj/argo-cd/controller/metrics"
log "github.com/sirupsen/logrus"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/watch"
appv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/health"
"github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/settings"
)
const (
clusterSyncTimeout = 24 * time.Hour
clusterRetryTimeout = 10 * time.Second
watchResourcesRetryTimeout = 1 * time.Second
)
type apiMeta struct {
namespaced bool
resourceVersion string
watchCancel context.CancelFunc
}
type clusterInfo struct {
syncLock *sync.Mutex
syncTime *time.Time
syncError error
apisMeta map[schema.GroupKind]*apiMeta
lock *sync.Mutex
nodes map[kube.ResourceKey]*node
nsIndex map[string]map[kube.ResourceKey]*node
onAppUpdated AppUpdatedHandler
kubectl kube.Kubectl
cluster *appv1.Cluster
log *log.Entry
settings *settings.ArgoCDSettings
}
func (c *clusterInfo) replaceResourceCache(gk schema.GroupKind, resourceVersion string, objs []unstructured.Unstructured) {
c.lock.Lock()
defer c.lock.Unlock()
info, ok := c.apisMeta[gk]
if ok {
objByKind := make(map[kube.ResourceKey]*unstructured.Unstructured)
for i := range objs {
objByKind[kube.GetResourceKey(&objs[i])] = &objs[i]
}
for i := range objs {
obj := &objs[i]
key := kube.GetResourceKey(&objs[i])
existingNode, exists := c.nodes[key]
c.onNodeUpdated(exists, existingNode, obj, key)
}
for key, existingNode := range c.nodes {
if key.Kind != gk.Kind || key.Group != gk.Group {
continue
}
if _, ok := objByKind[key]; !ok {
c.onNodeRemoved(key, existingNode)
}
}
info.resourceVersion = resourceVersion
}
}
func (c *clusterInfo) createObjInfo(un *unstructured.Unstructured, appInstanceLabel string) *node {
ownerRefs := un.GetOwnerReferences()
// Special case for endpoint. Remove after https://github.com/kubernetes/kubernetes/issues/28483 is fixed
if un.GroupVersionKind().Group == "" && un.GetKind() == kube.EndpointsKind && len(un.GetOwnerReferences()) == 0 {
ownerRefs = append(ownerRefs, metav1.OwnerReference{
Name: un.GetName(),
Kind: kube.ServiceKind,
APIVersion: "",
})
}
nodeInfo := &node{
resourceVersion: un.GetResourceVersion(),
ref: kube.GetObjectRef(un),
ownerRefs: ownerRefs,
}
populateNodeInfo(un, nodeInfo)
appName := kube.GetAppInstanceLabel(un, appInstanceLabel)
if len(ownerRefs) == 0 && appName != "" {
nodeInfo.appName = appName
nodeInfo.resource = un
}
nodeInfo.health, _ = health.GetResourceHealth(un, c.settings.ResourceOverrides)
return nodeInfo
}
func (c *clusterInfo) setNode(n *node) {
key := n.resourceKey()
c.nodes[key] = n
ns, ok := c.nsIndex[key.Namespace]
if !ok {
ns = make(map[kube.ResourceKey]*node)
c.nsIndex[key.Namespace] = ns
}
ns[key] = n
}
func (c *clusterInfo) removeNode(key kube.ResourceKey) {
delete(c.nodes, key)
if ns, ok := c.nsIndex[key.Namespace]; ok {
delete(ns, key)
if len(ns) == 0 {
delete(c.nsIndex, key.Namespace)
}
}
}
func (c *clusterInfo) invalidate() {
c.syncLock.Lock()
defer c.syncLock.Unlock()
c.syncTime = nil
for i := range c.apisMeta {
c.apisMeta[i].watchCancel()
}
c.apisMeta = nil
}
func (c *clusterInfo) synced() bool {
if c.syncTime == nil {
return false
}
if c.syncError != nil {
return time.Now().Before(c.syncTime.Add(clusterRetryTimeout))
}
return time.Now().Before(c.syncTime.Add(clusterSyncTimeout))
}
func (c *clusterInfo) stopWatching(gk schema.GroupKind) {
c.syncLock.Lock()
defer c.syncLock.Unlock()
if info, ok := c.apisMeta[gk]; ok {
info.watchCancel()
delete(c.apisMeta, gk)
c.replaceResourceCache(gk, "", []unstructured.Unstructured{})
log.Warnf("Stop watching %s not found on %s.", gk, c.cluster.Server)
}
}
// startMissingWatches lists supported cluster resources and starts watching for changes unless a watch is already running
func (c *clusterInfo) startMissingWatches() error {
apis, err := c.kubectl.GetAPIResources(c.cluster.RESTConfig(), c.settings)
if err != nil {
return err
}
for i := range apis {
api := apis[i]
if _, ok := c.apisMeta[api.GroupKind]; !ok {
ctx, cancel := context.WithCancel(context.Background())
info := &apiMeta{namespaced: api.Meta.Namespaced, watchCancel: cancel}
c.apisMeta[api.GroupKind] = info
go c.watchEvents(ctx, api, info)
}
}
return nil
}
func runSynced(lock *sync.Mutex, action func() error) error {
lock.Lock()
defer lock.Unlock()
return action()
}
func (c *clusterInfo) watchEvents(ctx context.Context, api kube.APIResourceInfo, info *apiMeta) {
util.RetryUntilSucceed(func() (err error) {
defer func() {
if r := recover(); r != nil {
err = fmt.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
}
}()
err = runSynced(c.syncLock, func() error {
if info.resourceVersion == "" {
list, err := api.Interface.List(metav1.ListOptions{})
if err != nil {
return err
}
c.replaceResourceCache(api.GroupKind, list.GetResourceVersion(), list.Items)
}
return nil
})
if err != nil {
return err
}
w, err := api.Interface.Watch(metav1.ListOptions{ResourceVersion: info.resourceVersion})
if errors.IsNotFound(err) {
c.stopWatching(api.GroupKind)
return nil
}
err = runSynced(c.syncLock, func() error {
if errors.IsGone(err) {
info.resourceVersion = ""
log.Warnf("Resource version of %s on %s is too old.", api.GroupKind, c.cluster.Server)
}
return err
})
if err != nil {
return err
}
defer w.Stop()
for {
select {
case <-ctx.Done():
return nil
case event, ok := <-w.ResultChan():
if ok {
obj := event.Object.(*unstructured.Unstructured)
info.resourceVersion = obj.GetResourceVersion()
err = c.processEvent(event.Type, obj)
if err != nil {
log.Warnf("Failed to process event %s %s/%s/%s: %v", event.Type, obj.GroupVersionKind(), obj.GetNamespace(), obj.GetName(), err)
continue
}
if kube.IsCRD(obj) {
if event.Type == watch.Deleted {
group, groupOk, groupErr := unstructured.NestedString(obj.Object, "spec", "group")
kind, kindOk, kindErr := unstructured.NestedString(obj.Object, "spec", "names", "kind")
if groupOk && groupErr == nil && kindOk && kindErr == nil {
gk := schema.GroupKind{Group: group, Kind: kind}
c.stopWatching(gk)
}
} else {
err = runSynced(c.syncLock, func() error {
return c.startMissingWatches()
})
}
}
if err != nil {
log.Warnf("Failed to start missing watch: %v", err)
}
} else {
return fmt.Errorf("Watch %s on %s has closed", api.GroupKind, c.cluster.Server)
}
}
}
}, fmt.Sprintf("watch %s on %s", api.GroupKind, c.cluster.Server), ctx, watchResourcesRetryTimeout)
}
func (c *clusterInfo) sync() (err error) {
c.log.Info("Start syncing cluster")
for i := range c.apisMeta {
c.apisMeta[i].watchCancel()
}
c.apisMeta = make(map[schema.GroupKind]*apiMeta)
c.nodes = make(map[kube.ResourceKey]*node)
apis, err := c.kubectl.GetAPIResources(c.cluster.RESTConfig(), c.settings)
if err != nil {
return err
}
lock := sync.Mutex{}
err = util.RunAllAsync(len(apis), func(i int) error {
api := apis[i]
list, err := api.Interface.List(metav1.ListOptions{})
if err != nil {
return err
}
lock.Lock()
for i := range list.Items {
c.setNode(c.createObjInfo(&list.Items[i], c.settings.GetAppInstanceLabelKey()))
}
lock.Unlock()
return nil
})
if err == nil {
err = c.startMissingWatches()
}
if err != nil {
log.Errorf("Failed to sync cluster %s: %v", c.cluster.Server, err)
return err
}
c.log.Info("Cluster successfully synced")
return nil
}
func (c *clusterInfo) ensureSynced() error {
c.syncLock.Lock()
defer c.syncLock.Unlock()
if c.synced() {
return c.syncError
}
err := c.sync()
syncTime := time.Now()
c.syncTime = &syncTime
c.syncError = err
return c.syncError
}
func (c *clusterInfo) iterateHierarchy(key kube.ResourceKey, action func(child appv1.ResourceNode)) {
c.lock.Lock()
defer c.lock.Unlock()
if objInfo, ok := c.nodes[key]; ok {
action(objInfo.asResourceNode())
nsNodes := c.nsIndex[key.Namespace]
for _, child := range nsNodes {
if objInfo.isParentOf(child) {
action(child.asResourceNode())
child.iterateChildren(nsNodes, map[kube.ResourceKey]bool{objInfo.resourceKey(): true}, action)
}
}
}
}
func (c *clusterInfo) isNamespaced(obj *unstructured.Unstructured) bool {
if api, ok := c.apisMeta[kube.GetResourceKey(obj).GroupKind()]; ok && !api.namespaced {
return false
}
return true
}
func (c *clusterInfo) getManagedLiveObjs(a *appv1.Application, targetObjs []*unstructured.Unstructured, metricsServer *metrics.MetricsServer) (map[kube.ResourceKey]*unstructured.Unstructured, error) {
c.lock.Lock()
defer c.lock.Unlock()
managedObjs := make(map[kube.ResourceKey]*unstructured.Unstructured)
// iterate all objects in live state cache to find ones associated with app
for key, o := range c.nodes {
if o.appName == a.Name && o.resource != nil && len(o.ownerRefs) == 0 {
managedObjs[key] = o.resource
}
}
config := metrics.AddMetricsTransportWrapper(metricsServer, a, c.cluster.RESTConfig())
// iterate target objects and identify ones that already exist in the cluster,
// but are simply missing our label
lock := &sync.Mutex{}
err := util.RunAllAsync(len(targetObjs), func(i int) error {
targetObj := targetObjs[i]
key := GetTargetObjKey(a, targetObj, c.isNamespaced(targetObj))
lock.Lock()
managedObj := managedObjs[key]
lock.Unlock()
if managedObj == nil {
if existingObj, exists := c.nodes[key]; exists {
if existingObj.resource != nil {
managedObj = existingObj.resource
} else {
var err error
managedObj, err = c.kubectl.GetResource(config, targetObj.GroupVersionKind(), existingObj.ref.Name, existingObj.ref.Namespace)
if err != nil {
if errors.IsNotFound(err) {
return nil
}
return err
}
}
}
}
if managedObj != nil {
converted, err := c.kubectl.ConvertToVersion(managedObj, targetObj.GroupVersionKind().Group, targetObj.GroupVersionKind().Version)
if err != nil {
// fallback to loading resource from kubernetes if conversion fails
log.Warnf("Failed to convert resource: %v", err)
managedObj, err = c.kubectl.GetResource(config, targetObj.GroupVersionKind(), managedObj.GetName(), managedObj.GetNamespace())
if err != nil {
if errors.IsNotFound(err) {
return nil
}
return err
}
} else {
managedObj = converted
}
lock.Lock()
managedObjs[key] = managedObj
lock.Unlock()
}
return nil
})
if err != nil {
return nil, err
}
return managedObjs, nil
}
func (c *clusterInfo) processEvent(event watch.EventType, un *unstructured.Unstructured) error {
c.lock.Lock()
defer c.lock.Unlock()
key := kube.GetResourceKey(un)
existingNode, exists := c.nodes[key]
if event == watch.Deleted {
if exists {
c.onNodeRemoved(key, existingNode)
}
} else if event != watch.Deleted {
c.onNodeUpdated(exists, existingNode, un, key)
}
return nil
}
func (c *clusterInfo) onNodeUpdated(exists bool, existingNode *node, un *unstructured.Unstructured, key kube.ResourceKey) {
nodes := make([]*node, 0)
if exists {
nodes = append(nodes, existingNode)
}
newObj := c.createObjInfo(un, c.settings.GetAppInstanceLabelKey())
c.setNode(newObj)
nodes = append(nodes, newObj)
toNotify := make(map[string]bool)
for i := range nodes {
n := nodes[i]
if ns, ok := c.nsIndex[n.ref.Namespace]; ok {
app := n.getApp(ns)
if app == "" || skipAppRequeing(key) {
continue
}
toNotify[app] = n.isRootAppNode() || toNotify[app]
}
}
for name, full := range toNotify {
c.onAppUpdated(name, full, newObj.ref)
}
}
func (c *clusterInfo) onNodeRemoved(key kube.ResourceKey, n *node) {
appName := n.appName
if ns, ok := c.nsIndex[key.Namespace]; ok {
appName = n.getApp(ns)
}
c.removeNode(key)
if appName != "" {
c.onAppUpdated(appName, n.isRootAppNode(), n.ref)
}
}
var (
ignoredRefreshResources = map[string]bool{
"/" + kube.EndpointsKind: true,
}
)
// skipAppRequeing checks if the object is an API type which we want to skip requeuing against.
// We ignore API types which have a high churn rate, and/or whose updates are irrelevant to the app
func skipAppRequeing(key kube.ResourceKey) bool {
return ignoredRefreshResources[key.Group+"/"+key.Kind]
}
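
For reference, ignoredRefreshResources above keys entries as "<group>/<kind>", so the core-group Endpoints kind is stored with an empty group prefix ("/Endpoints"). A minimal, self-contained sketch of that lookup (resourceKey is a hypothetical stand-in for kube.ResourceKey):

package main

import "fmt"

// resourceKey stands in for kube.ResourceKey; only Group and Kind matter here.
type resourceKey struct{ Group, Kind string }

// Endpoints churn on every pod readiness change, so they never trigger an app refresh.
var ignoredRefreshResources = map[string]bool{"/Endpoints": true}

func skipAppRequeuing(key resourceKey) bool {
	return ignoredRefreshResources[key.Group+"/"+key.Kind]
}

func main() {
	fmt.Println(skipAppRequeuing(resourceKey{Group: "", Kind: "Endpoints"}))      // true
	fmt.Println(skipAppRequeuing(resourceKey{Group: "apps", Kind: "Deployment"})) // false
}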

controller/cache/cluster_test.go (new file)
@@ -0,0 +1,421 @@
package cache
import (
"fmt"
"sort"
"strings"
"sync"
"testing"
"github.com/ghodss/yaml"
log "github.com/sirupsen/logrus"
"github.com/stretchr/testify/assert"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/watch"
"k8s.io/client-go/dynamic/fake"
"github.com/argoproj/argo-cd/errors"
appv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/kube/kubetest"
"github.com/argoproj/argo-cd/util/settings"
)
func strToUnstructured(jsonStr string) *unstructured.Unstructured {
obj := make(map[string]interface{})
err := yaml.Unmarshal([]byte(jsonStr), &obj)
errors.CheckError(err)
return &unstructured.Unstructured{Object: obj}
}
func mustToUnstructured(obj interface{}) *unstructured.Unstructured {
un, err := kube.ToUnstructured(obj)
errors.CheckError(err)
return un
}
var (
testPod = strToUnstructured(`
apiVersion: v1
kind: Pod
metadata:
name: helm-guestbook-pod
namespace: default
ownerReferences:
- apiVersion: extensions/v1beta1
kind: ReplicaSet
name: helm-guestbook-rs
resourceVersion: "123"`)
testRS = strToUnstructured(`
apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: helm-guestbook-rs
namespace: default
ownerReferences:
- apiVersion: extensions/v1beta1
kind: Deployment
name: helm-guestbook
resourceVersion: "123"`)
testDeploy = strToUnstructured(`
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.kubernetes.io/instance: helm-guestbook
name: helm-guestbook
namespace: default
resourceVersion: "123"`)
testService = strToUnstructured(`
apiVersion: v1
kind: Service
metadata:
name: helm-guestbook
namespace: default
resourceVersion: "123"
spec:
selector:
app: guestbook
type: LoadBalancer
status:
loadBalancer:
ingress:
- hostname: localhost`)
testIngress = strToUnstructured(`
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: helm-guestbook
namespace: default
spec:
backend:
serviceName: not-found-service
servicePort: 443
rules:
- host: helm-guestbook.com
http:
paths:
- backend:
serviceName: helm-guestbook
servicePort: 443
path: /
- backend:
serviceName: helm-guestbook
servicePort: https
path: /
status:
loadBalancer:
ingress:
- ip: 107.178.210.11`)
)
func newCluster(objs ...*unstructured.Unstructured) *clusterInfo {
runtimeObjs := make([]runtime.Object, len(objs))
for i := range objs {
runtimeObjs[i] = objs[i]
}
scheme := runtime.NewScheme()
client := fake.NewSimpleDynamicClient(scheme, runtimeObjs...)
apiResources := []kube.APIResourceInfo{{
GroupKind: schema.GroupKind{Group: "", Kind: "Pod"},
Interface: client.Resource(schema.GroupVersionResource{Group: "", Version: "v1", Resource: "pods"}),
Meta: metav1.APIResource{Namespaced: true},
}, {
GroupKind: schema.GroupKind{Group: "apps", Kind: "ReplicaSet"},
Interface: client.Resource(schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "replicasets"}),
Meta: metav1.APIResource{Namespaced: true},
}, {
GroupKind: schema.GroupKind{Group: "apps", Kind: "Deployment"},
Interface: client.Resource(schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "deployments"}),
Meta: metav1.APIResource{Namespaced: true},
}}
return newClusterExt(kubetest.MockKubectlCmd{APIResources: apiResources})
}
func newClusterExt(kubectl kube.Kubectl) *clusterInfo {
return &clusterInfo{
lock: &sync.Mutex{},
nodes: make(map[kube.ResourceKey]*node),
onAppUpdated: func(appName string, fullRefresh bool, reference corev1.ObjectReference) {},
kubectl: kubectl,
nsIndex: make(map[string]map[kube.ResourceKey]*node),
cluster: &appv1.Cluster{},
syncTime: nil,
syncLock: &sync.Mutex{},
apisMeta: make(map[schema.GroupKind]*apiMeta),
log: log.WithField("cluster", "test"),
settings: &settings.ArgoCDSettings{},
}
}
func getChildren(cluster *clusterInfo, un *unstructured.Unstructured) []appv1.ResourceNode {
hierarchy := make([]appv1.ResourceNode, 0)
cluster.iterateHierarchy(kube.GetResourceKey(un), func(child appv1.ResourceNode) {
hierarchy = append(hierarchy, child)
})
return hierarchy[1:]
}
func TestGetChildren(t *testing.T) {
cluster := newCluster(testPod, testRS, testDeploy)
err := cluster.ensureSynced()
assert.Nil(t, err)
rsChildren := getChildren(cluster, testRS)
assert.Equal(t, []appv1.ResourceNode{{
ResourceRef: appv1.ResourceRef{
Kind: "Pod",
Namespace: "default",
Name: "helm-guestbook-pod",
Group: "",
Version: "v1",
},
ParentRefs: []appv1.ResourceRef{{
Group: "apps",
Version: "",
Kind: "ReplicaSet",
Namespace: "default",
Name: "helm-guestbook-rs",
}},
Health: &appv1.HealthStatus{Status: appv1.HealthStatusUnknown},
NetworkingInfo: &appv1.ResourceNetworkingInfo{Labels: testPod.GetLabels()},
ResourceVersion: "123",
Info: []appv1.InfoItem{{Name: "Containers", Value: "0/0"}},
}}, rsChildren)
deployChildren := getChildren(cluster, testDeploy)
assert.Equal(t, append([]appv1.ResourceNode{{
ResourceRef: appv1.ResourceRef{
Kind: "ReplicaSet",
Namespace: "default",
Name: "helm-guestbook-rs",
Group: "apps",
Version: "v1",
},
ResourceVersion: "123",
Health: &appv1.HealthStatus{Status: appv1.HealthStatusHealthy},
Info: []appv1.InfoItem{},
ParentRefs: []appv1.ResourceRef{{Group: "apps", Version: "", Kind: "Deployment", Namespace: "default", Name: "helm-guestbook"}},
}}, rsChildren...), deployChildren)
}
func TestGetManagedLiveObjs(t *testing.T) {
cluster := newCluster(testPod, testRS, testDeploy)
err := cluster.ensureSynced()
assert.Nil(t, err)
targetDeploy := strToUnstructured(`
apiVersion: apps/v1
kind: Deployment
metadata:
name: helm-guestbook
labels:
app: helm-guestbook`)
managedObjs, err := cluster.getManagedLiveObjs(&appv1.Application{
ObjectMeta: metav1.ObjectMeta{Name: "helm-guestbook"},
Spec: appv1.ApplicationSpec{
Destination: appv1.ApplicationDestination{
Namespace: "default",
},
},
}, []*unstructured.Unstructured{targetDeploy}, nil)
assert.Nil(t, err)
assert.Equal(t, managedObjs, map[kube.ResourceKey]*unstructured.Unstructured{
kube.NewResourceKey("apps", "Deployment", "default", "helm-guestbook"): testDeploy,
})
}
func TestChildDeletedEvent(t *testing.T) {
cluster := newCluster(testPod, testRS, testDeploy)
err := cluster.ensureSynced()
assert.Nil(t, err)
err = cluster.processEvent(watch.Deleted, testPod)
assert.Nil(t, err)
rsChildren := getChildren(cluster, testRS)
assert.Equal(t, []appv1.ResourceNode{}, rsChildren)
}
func TestProcessNewChildEvent(t *testing.T) {
cluster := newCluster(testPod, testRS, testDeploy)
err := cluster.ensureSynced()
assert.Nil(t, err)
newPod := strToUnstructured(`
apiVersion: v1
kind: Pod
metadata:
name: helm-guestbook-pod2
namespace: default
ownerReferences:
- apiVersion: extensions/v1beta1
kind: ReplicaSet
name: helm-guestbook-rs
resourceVersion: "123"`)
err = cluster.processEvent(watch.Added, newPod)
assert.Nil(t, err)
rsChildren := getChildren(cluster, testRS)
sort.Slice(rsChildren, func(i, j int) bool {
return strings.Compare(rsChildren[i].Name, rsChildren[j].Name) < 0
})
assert.Equal(t, []appv1.ResourceNode{{
ResourceRef: appv1.ResourceRef{
Kind: "Pod",
Namespace: "default",
Name: "helm-guestbook-pod",
Group: "",
Version: "v1",
},
Info: []appv1.InfoItem{{Name: "Containers", Value: "0/0"}},
Health: &appv1.HealthStatus{Status: appv1.HealthStatusUnknown},
NetworkingInfo: &appv1.ResourceNetworkingInfo{Labels: testPod.GetLabels()},
ParentRefs: []appv1.ResourceRef{{
Group: "apps",
Version: "",
Kind: "ReplicaSet",
Namespace: "default",
Name: "helm-guestbook-rs",
}},
ResourceVersion: "123",
}, {
ResourceRef: appv1.ResourceRef{
Kind: "Pod",
Namespace: "default",
Name: "helm-guestbook-pod2",
Group: "",
Version: "v1",
},
NetworkingInfo: &appv1.ResourceNetworkingInfo{Labels: testPod.GetLabels()},
Info: []appv1.InfoItem{{Name: "Containers", Value: "0/0"}},
Health: &appv1.HealthStatus{Status: appv1.HealthStatusUnknown},
ParentRefs: []appv1.ResourceRef{{
Group: "apps",
Version: "",
Kind: "ReplicaSet",
Namespace: "default",
Name: "helm-guestbook-rs",
}},
ResourceVersion: "123",
}}, rsChildren)
}
func TestUpdateResourceTags(t *testing.T) {
pod := &corev1.Pod{
TypeMeta: metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
ObjectMeta: metav1.ObjectMeta{Name: "testPod", Namespace: "default"},
Spec: corev1.PodSpec{
Containers: []corev1.Container{{
Name: "test",
Image: "test",
}},
},
}
cluster := newCluster(mustToUnstructured(pod))
err := cluster.ensureSynced()
assert.Nil(t, err)
podNode := cluster.nodes[kube.GetResourceKey(mustToUnstructured(pod))]
assert.NotNil(t, podNode)
assert.Equal(t, []appv1.InfoItem{{Name: "Containers", Value: "0/1"}}, podNode.info)
pod.Status = corev1.PodStatus{
ContainerStatuses: []corev1.ContainerStatus{{
State: corev1.ContainerState{
Terminated: &corev1.ContainerStateTerminated{
ExitCode: -1,
},
},
}},
}
err = cluster.processEvent(watch.Modified, mustToUnstructured(pod))
assert.Nil(t, err)
podNode = cluster.nodes[kube.GetResourceKey(mustToUnstructured(pod))]
assert.NotNil(t, podNode)
assert.Equal(t, []appv1.InfoItem{{Name: "Status Reason", Value: "ExitCode:-1"}, {Name: "Containers", Value: "0/1"}}, podNode.info)
}
func TestUpdateAppResource(t *testing.T) {
updatesReceived := make([]string, 0)
cluster := newCluster(testPod, testRS, testDeploy)
cluster.onAppUpdated = func(appName string, fullRefresh bool, _ corev1.ObjectReference) {
updatesReceived = append(updatesReceived, fmt.Sprintf("%s: %v", appName, fullRefresh))
}
err := cluster.ensureSynced()
assert.Nil(t, err)
err = cluster.processEvent(watch.Modified, mustToUnstructured(testPod))
assert.Nil(t, err)
assert.Contains(t, updatesReceived, "helm-guestbook: false")
}
func TestCircularReference(t *testing.T) {
dep := testDeploy.DeepCopy()
dep.SetOwnerReferences([]metav1.OwnerReference{{
Name: testPod.GetName(),
Kind: testPod.GetKind(),
APIVersion: testPod.GetAPIVersion(),
}})
cluster := newCluster(testPod, testRS, dep)
err := cluster.ensureSynced()
assert.Nil(t, err)
children := getChildren(cluster, dep)
assert.Len(t, children, 2)
node := cluster.nodes[kube.GetResourceKey(dep)]
assert.NotNil(t, node)
app := node.getApp(cluster.nodes)
assert.Equal(t, "", app)
}
func TestWatchCacheUpdated(t *testing.T) {
removed := testPod.DeepCopy()
removed.SetName(testPod.GetName() + "-removed-pod")
updated := testPod.DeepCopy()
updated.SetName(testPod.GetName() + "-updated-pod")
updated.SetResourceVersion("updated-pod-version")
cluster := newCluster(removed, updated)
err := cluster.ensureSynced()
assert.Nil(t, err)
added := testPod.DeepCopy()
added.SetName(testPod.GetName() + "-new-pod")
podGroupKind := testPod.GroupVersionKind().GroupKind()
cluster.replaceResourceCache(podGroupKind, "updated-list-version", []unstructured.Unstructured{*updated, *added})
_, ok := cluster.nodes[kube.GetResourceKey(removed)]
assert.False(t, ok)
updatedNode, ok := cluster.nodes[kube.GetResourceKey(updated)]
assert.True(t, ok)
assert.Equal(t, updatedNode.resourceVersion, "updated-pod-version")
_, ok = cluster.nodes[kube.GetResourceKey(added)]
assert.True(t, ok)
}

controller/cache/info.go (new file)
@@ -0,0 +1,246 @@
package cache
import (
"fmt"
v1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
k8snode "k8s.io/kubernetes/pkg/util/node"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/kube"
)
func populateNodeInfo(un *unstructured.Unstructured, node *node) {
gvk := un.GroupVersionKind()
switch gvk.Group {
case "":
switch gvk.Kind {
case kube.PodKind:
populatePodInfo(un, node)
return
case kube.ServiceKind:
populateServiceInfo(un, node)
return
}
case "extensions":
switch gvk.Kind {
case kube.IngressKind:
populateIngressInfo(un, node)
return
}
}
node.info = []v1alpha1.InfoItem{}
}
func getIngress(un *unstructured.Unstructured) []v1.LoadBalancerIngress {
ingress, ok, err := unstructured.NestedSlice(un.Object, "status", "loadBalancer", "ingress")
if !ok || err != nil {
return nil
}
res := make([]v1.LoadBalancerIngress, 0)
for _, item := range ingress {
if lbIngress, ok := item.(map[string]interface{}); ok {
if hostname := lbIngress["hostname"]; hostname != nil {
res = append(res, v1.LoadBalancerIngress{Hostname: fmt.Sprintf("%s", hostname)})
} else if ip := lbIngress["ip"]; ip != nil {
res = append(res, v1.LoadBalancerIngress{IP: fmt.Sprintf("%s", ip)})
}
}
}
return res
}
func populateServiceInfo(un *unstructured.Unstructured, node *node) {
targetLabels, _, _ := unstructured.NestedStringMap(un.Object, "spec", "selector")
ingress := make([]v1.LoadBalancerIngress, 0)
if serviceType, ok, err := unstructured.NestedString(un.Object, "spec", "type"); ok && err == nil && serviceType == string(v1.ServiceTypeLoadBalancer) {
ingress = getIngress(un)
}
node.networkingInfo = &v1alpha1.ResourceNetworkingInfo{TargetLabels: targetLabels, Ingress: ingress}
}
func populateIngressInfo(un *unstructured.Unstructured, node *node) {
ingress := getIngress(un)
targetsMap := make(map[v1alpha1.ResourceRef]bool)
if backend, ok, err := unstructured.NestedMap(un.Object, "spec", "backend"); ok && err == nil {
targetsMap[v1alpha1.ResourceRef{
Group: "",
Kind: kube.ServiceKind,
Namespace: un.GetNamespace(),
Name: fmt.Sprintf("%s", backend["serviceName"]),
}] = true
}
urlsSet := make(map[string]bool)
if rules, ok, err := unstructured.NestedSlice(un.Object, "spec", "rules"); ok && err == nil {
for i := range rules {
rule, ok := rules[i].(map[string]interface{})
if !ok {
continue
}
host := rule["host"]
if host == nil || host == "" {
for i := range ingress {
host = util.FirstNonEmpty(ingress[i].Hostname, ingress[i].IP)
if host != "" {
break
}
}
}
paths, ok, err := unstructured.NestedSlice(rule, "http", "paths")
if !ok || err != nil {
continue
}
for i := range paths {
path, ok := paths[i].(map[string]interface{})
if !ok {
continue
}
if serviceName, ok, err := unstructured.NestedString(path, "backend", "serviceName"); ok && err == nil {
targetsMap[v1alpha1.ResourceRef{
Group: "",
Kind: kube.ServiceKind,
Namespace: un.GetNamespace(),
Name: serviceName,
}] = true
}
if port, ok, err := unstructured.NestedFieldNoCopy(path, "backend", "servicePort"); ok && err == nil && host != "" && host != nil {
stringPort := ""
// servicePort may be a number or a named port; normalize it to a string
switch typedPort := port.(type) {
case int64:
stringPort = fmt.Sprintf("%d", typedPort)
case float64:
stringPort = fmt.Sprintf("%d", int64(typedPort))
case string:
stringPort = typedPort
default:
stringPort = fmt.Sprintf("%v", port)
}
switch stringPort {
case "80", "http":
urlsSet[fmt.Sprintf("http://%s", host)] = true
case "443", "https":
urlsSet[fmt.Sprintf("https://%s", host)] = true
default:
urlsSet[fmt.Sprintf("http://%s:%s", host, stringPort)] = true
}
}
}
}
}
targets := make([]v1alpha1.ResourceRef, 0)
for target := range targetsMap {
targets = append(targets, target)
}
urls := make([]string, 0)
for url := range urlsSet {
urls = append(urls, url)
}
node.networkingInfo = &v1alpha1.ResourceNetworkingInfo{TargetRefs: targets, Ingress: ingress, ExternalURLs: urls}
}
func populatePodInfo(un *unstructured.Unstructured, node *node) {
pod := v1.Pod{}
err := runtime.DefaultUnstructuredConverter.FromUnstructured(un.Object, &pod)
if err != nil {
node.info = []v1alpha1.InfoItem{}
return
}
restarts := 0
totalContainers := len(pod.Spec.Containers)
readyContainers := 0
reason := string(pod.Status.Phase)
if pod.Status.Reason != "" {
reason = pod.Status.Reason
}
imagesSet := make(map[string]bool)
for _, container := range pod.Spec.InitContainers {
imagesSet[container.Image] = true
}
for _, container := range pod.Spec.Containers {
imagesSet[container.Image] = true
}
node.images = nil
for image := range imagesSet {
node.images = append(node.images, image)
}
initializing := false
for i := range pod.Status.InitContainerStatuses {
container := pod.Status.InitContainerStatuses[i]
restarts += int(container.RestartCount)
switch {
case container.State.Terminated != nil && container.State.Terminated.ExitCode == 0:
continue
case container.State.Terminated != nil:
// initialization failed
if len(container.State.Terminated.Reason) == 0 {
if container.State.Terminated.Signal != 0 {
reason = fmt.Sprintf("Init:Signal:%d", container.State.Terminated.Signal)
} else {
reason = fmt.Sprintf("Init:ExitCode:%d", container.State.Terminated.ExitCode)
}
} else {
reason = "Init:" + container.State.Terminated.Reason
}
initializing = true
case container.State.Waiting != nil && len(container.State.Waiting.Reason) > 0 && container.State.Waiting.Reason != "PodInitializing":
reason = "Init:" + container.State.Waiting.Reason
initializing = true
default:
reason = fmt.Sprintf("Init:%d/%d", i, len(pod.Spec.InitContainers))
initializing = true
}
break
}
if !initializing {
restarts = 0
hasRunning := false
for i := len(pod.Status.ContainerStatuses) - 1; i >= 0; i-- {
container := pod.Status.ContainerStatuses[i]
restarts += int(container.RestartCount)
if container.State.Waiting != nil && container.State.Waiting.Reason != "" {
reason = container.State.Waiting.Reason
} else if container.State.Terminated != nil && container.State.Terminated.Reason != "" {
reason = container.State.Terminated.Reason
} else if container.State.Terminated != nil && container.State.Terminated.Reason == "" {
if container.State.Terminated.Signal != 0 {
reason = fmt.Sprintf("Signal:%d", container.State.Terminated.Signal)
} else {
reason = fmt.Sprintf("ExitCode:%d", container.State.Terminated.ExitCode)
}
} else if container.Ready && container.State.Running != nil {
hasRunning = true
readyContainers++
}
}
// change pod status back to "Running" if at least one container is still reporting as running
if reason == "Completed" && hasRunning {
reason = "Running"
}
}
if pod.DeletionTimestamp != nil && pod.Status.Reason == k8snode.NodeUnreachablePodReason {
reason = "Unknown"
} else if pod.DeletionTimestamp != nil {
reason = "Terminating"
}
node.info = make([]v1alpha1.InfoItem, 0)
if reason != "" {
node.info = append(node.info, v1alpha1.InfoItem{Name: "Status Reason", Value: reason})
}
node.info = append(node.info, v1alpha1.InfoItem{Name: "Containers", Value: fmt.Sprintf("%d/%d", readyContainers, totalContainers)})
node.networkingInfo = &v1alpha1.ResourceNetworkingInfo{Labels: un.GetLabels()}
}
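
populateIngressInfo above maps the backend servicePort onto a URL scheme: 80 or "http" yield http://<host>, 443 or "https" yield https://<host>, and any other port stays explicit in the URL. When a rule has no host, the load balancer IP or hostname from the ingress status is substituted, which is why TestGetIngressInfoNoHost below expects https://107.178.210.11. A self-contained sketch of just the port mapping (plain strings instead of the unstructured spec):

package main

import "fmt"

// externalURL mirrors the servicePort switch in populateIngressInfo.
func externalURL(host, port string) string {
	switch port {
	case "80", "http":
		return fmt.Sprintf("http://%s", host)
	case "443", "https":
		return fmt.Sprintf("https://%s", host)
	default:
		// Other ports are assumed to serve plain HTTP and stay visible in the URL.
		return fmt.Sprintf("http://%s:%s", host, port)
	}
}

func main() {
	fmt.Println(externalURL("helm-guestbook.com", "443"))  // https://helm-guestbook.com
	fmt.Println(externalURL("helm-guestbook.com", "8080")) // http://helm-guestbook.com:8080
}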

controller/cache/info_test.go (new file)
@@ -0,0 +1,108 @@
package cache
import (
"sort"
"strings"
"testing"
v1 "k8s.io/api/core/v1"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util/kube"
"github.com/stretchr/testify/assert"
)
func TestGetPodInfo(t *testing.T) {
pod := strToUnstructured(`
apiVersion: v1
kind: Pod
metadata:
name: helm-guestbook-pod
namespace: default
ownerReferences:
- apiVersion: extensions/v1beta1
kind: ReplicaSet
name: helm-guestbook-rs
resourceVersion: "123"
labels:
app: guestbook
spec:
containers:
- image: bar`)
node := &node{}
populateNodeInfo(pod, node)
assert.Equal(t, []v1alpha1.InfoItem{{Name: "Containers", Value: "0/1"}}, node.info)
assert.Equal(t, []string{"bar"}, node.images)
assert.Equal(t, &v1alpha1.ResourceNetworkingInfo{Labels: map[string]string{"app": "guestbook"}}, node.networkingInfo)
}
func TestGetServiceInfo(t *testing.T) {
node := &node{}
populateNodeInfo(testService, node)
assert.Equal(t, 0, len(node.info))
assert.Equal(t, &v1alpha1.ResourceNetworkingInfo{
TargetLabels: map[string]string{"app": "guestbook"},
Ingress: []v1.LoadBalancerIngress{{Hostname: "localhost"}},
}, node.networkingInfo)
}
func TestGetIngressInfo(t *testing.T) {
node := &node{}
populateNodeInfo(testIngress, node)
assert.Equal(t, 0, len(node.info))
sort.Slice(node.networkingInfo.TargetRefs, func(i, j int) bool {
return strings.Compare(node.networkingInfo.TargetRefs[j].Name, node.networkingInfo.TargetRefs[i].Name) < 0
})
assert.Equal(t, &v1alpha1.ResourceNetworkingInfo{
Ingress: []v1.LoadBalancerIngress{{IP: "107.178.210.11"}},
TargetRefs: []v1alpha1.ResourceRef{{
Namespace: "default",
Group: "",
Kind: kube.ServiceKind,
Name: "not-found-service",
}, {
Namespace: "default",
Group: "",
Kind: kube.ServiceKind,
Name: "helm-guestbook",
}},
ExternalURLs: []string{"https://helm-guestbook.com"},
}, node.networkingInfo)
}
func TestGetIngressInfoNoHost(t *testing.T) {
ingress := strToUnstructured(`
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: helm-guestbook
namespace: default
spec:
rules:
- http:
paths:
- backend:
serviceName: helm-guestbook
servicePort: 443
path: /
status:
loadBalancer:
ingress:
- ip: 107.178.210.11`)
node := &node{}
populateNodeInfo(ingress, node)
assert.Equal(t, &v1alpha1.ResourceNetworkingInfo{
Ingress: []v1.LoadBalancerIngress{{IP: "107.178.210.11"}},
TargetRefs: []v1alpha1.ResourceRef{{
Namespace: "default",
Group: "",
Kind: kube.ServiceKind,
Name: "helm-guestbook",
}},
ExternalURLs: []string{"https://107.178.210.11"},
}, node.networkingInfo)
}


@@ -0,0 +1,96 @@
// Code generated by mockery v1.0.0. DO NOT EDIT.
package mocks
import context "context"
import kube "github.com/argoproj/argo-cd/util/kube"
import mock "github.com/stretchr/testify/mock"
import unstructured "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
import v1alpha1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
// LiveStateCache is an autogenerated mock type for the LiveStateCache type
type LiveStateCache struct {
mock.Mock
}
// Delete provides a mock function with given fields: server, obj
func (_m *LiveStateCache) Delete(server string, obj *unstructured.Unstructured) error {
ret := _m.Called(server, obj)
var r0 error
if rf, ok := ret.Get(0).(func(string, *unstructured.Unstructured) error); ok {
r0 = rf(server, obj)
} else {
r0 = ret.Error(0)
}
return r0
}
// GetManagedLiveObjs provides a mock function with given fields: a, targetObjs
func (_m *LiveStateCache) GetManagedLiveObjs(a *v1alpha1.Application, targetObjs []*unstructured.Unstructured) (map[kube.ResourceKey]*unstructured.Unstructured, error) {
ret := _m.Called(a, targetObjs)
var r0 map[kube.ResourceKey]*unstructured.Unstructured
if rf, ok := ret.Get(0).(func(*v1alpha1.Application, []*unstructured.Unstructured) map[kube.ResourceKey]*unstructured.Unstructured); ok {
r0 = rf(a, targetObjs)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(map[kube.ResourceKey]*unstructured.Unstructured)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(*v1alpha1.Application, []*unstructured.Unstructured) error); ok {
r1 = rf(a, targetObjs)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// Invalidate provides a mock function with given fields:
func (_m *LiveStateCache) Invalidate() {
_m.Called()
}
// IsNamespaced provides a mock function with given fields: server, obj
func (_m *LiveStateCache) IsNamespaced(server string, obj *unstructured.Unstructured) (bool, error) {
ret := _m.Called(server, obj)
var r0 bool
if rf, ok := ret.Get(0).(func(string, *unstructured.Unstructured) bool); ok {
r0 = rf(server, obj)
} else {
r0 = ret.Get(0).(bool)
}
var r1 error
if rf, ok := ret.Get(1).(func(string, *unstructured.Unstructured) error); ok {
r1 = rf(server, obj)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// IterateHierarchy provides a mock function with given fields: server, key, action
func (_m *LiveStateCache) IterateHierarchy(server string, key kube.ResourceKey, action func(v1alpha1.ResourceNode)) error {
ret := _m.Called(server, key, action)
var r0 error
if rf, ok := ret.Get(0).(func(string, kube.ResourceKey, func(v1alpha1.ResourceNode)) error); ok {
r0 = rf(server, key, action)
} else {
r0 = ret.Error(0)
}
return r0
}
// Run provides a mock function with given fields: ctx
func (_m *LiveStateCache) Run(ctx context.Context) {
_m.Called(ctx)
}

controller/cache/node.go (new file)
@@ -0,0 +1,134 @@
package cache
import (
log "github.com/sirupsen/logrus"
appv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util/kube"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
)
type node struct {
resourceVersion string
ref v1.ObjectReference
ownerRefs []metav1.OwnerReference
info []appv1.InfoItem
appName string
// available only for root application nodes
resource *unstructured.Unstructured
// networkingInfo is available only for known types involved in networking: Ingress, Service, Pod
networkingInfo *appv1.ResourceNetworkingInfo
images []string
health *appv1.HealthStatus
}
func (n *node) isRootAppNode() bool {
return n.appName != "" && len(n.ownerRefs) == 0
}
func (n *node) resourceKey() kube.ResourceKey {
return kube.NewResourceKey(n.ref.GroupVersionKind().Group, n.ref.Kind, n.ref.Namespace, n.ref.Name)
}
func (n *node) isParentOf(child *node) bool {
for _, ownerRef := range child.ownerRefs {
ownerGvk := schema.FromAPIVersionAndKind(ownerRef.APIVersion, ownerRef.Kind)
if kube.NewResourceKey(ownerGvk.Group, ownerRef.Kind, n.ref.Namespace, ownerRef.Name) == n.resourceKey() {
return true
}
}
return false
}
func ownerRefGV(ownerRef metav1.OwnerReference) schema.GroupVersion {
gv, err := schema.ParseGroupVersion(ownerRef.APIVersion)
if err != nil {
gv = schema.GroupVersion{}
}
return gv
}
func (n *node) getApp(ns map[kube.ResourceKey]*node) string {
return n.getAppRecursive(ns, map[kube.ResourceKey]bool{})
}
func (n *node) getAppRecursive(ns map[kube.ResourceKey]*node, visited map[kube.ResourceKey]bool) string {
if !visited[n.resourceKey()] {
visited[n.resourceKey()] = true
} else {
log.Warnf("Circular dependency detected: %v.", visited)
return n.appName
}
if n.appName != "" {
return n.appName
}
for _, ownerRef := range n.ownerRefs {
gv := ownerRefGV(ownerRef)
if parent, ok := ns[kube.NewResourceKey(gv.Group, ownerRef.Kind, n.ref.Namespace, ownerRef.Name)]; ok {
app := parent.getAppRecursive(ns, visited)
if app != "" {
return app
}
}
}
return ""
}
func newResourceKeySet(set map[kube.ResourceKey]bool, keys ...kube.ResourceKey) map[kube.ResourceKey]bool {
newSet := make(map[kube.ResourceKey]bool)
for k, v := range set {
newSet[k] = v
}
for i := range keys {
newSet[keys[i]] = true
}
return newSet
}
func (n *node) asResourceNode() appv1.ResourceNode {
gv, err := schema.ParseGroupVersion(n.ref.APIVersion)
if err != nil {
gv = schema.GroupVersion{}
}
parentRefs := make([]appv1.ResourceRef, len(n.ownerRefs))
for i, ownerRef := range n.ownerRefs {
ownerGvk := schema.FromAPIVersionAndKind(ownerRef.APIVersion, ownerRef.Kind)
ownerKey := kube.NewResourceKey(ownerGvk.Group, ownerRef.Kind, n.ref.Namespace, ownerRef.Name)
// index by i so every owner reference is reported, not just the first
parentRefs[i] = appv1.ResourceRef{Name: ownerRef.Name, Kind: ownerKey.Kind, Namespace: n.ref.Namespace, Group: ownerKey.Group}
}
return appv1.ResourceNode{
ResourceRef: appv1.ResourceRef{
Name: n.ref.Name,
Group: gv.Group,
Version: gv.Version,
Kind: n.ref.Kind,
Namespace: n.ref.Namespace,
},
ParentRefs: parentRefs,
Info: n.info,
ResourceVersion: n.resourceVersion,
NetworkingInfo: n.networkingInfo,
Images: n.images,
Health: n.health,
}
}
func (n *node) iterateChildren(ns map[kube.ResourceKey]*node, parents map[kube.ResourceKey]bool, action func(child appv1.ResourceNode)) {
for childKey, child := range ns {
if n.isParentOf(ns[childKey]) {
if parents[childKey] {
key := n.resourceKey()
log.Warnf("Circular dependency detected. %s is child and parent of %s", childKey.String(), key.String())
} else {
action(child.asResourceNode())
child.iterateChildren(ns, newResourceKeySet(parents, n.resourceKey()), action)
}
}
}
}
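
isParentOf above builds the comparison key from the ownerReference's group, kind, namespace, and name but not its version, so a parent still matches when the child recorded a different API version of the same group/kind (the real kube.NewResourceKey also appears to normalize legacy group aliases such as extensions, which TestIsParentOf below relies on). A self-contained sketch of the version-insensitive key (the key type here is hypothetical):

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

// key stands in for kube.ResourceKey; note there is no Version field.
type key struct{ Group, Kind, Namespace, Name string }

func ownerKey(apiVersion, kind, namespace, name string) key {
	gvk := schema.FromAPIVersionAndKind(apiVersion, kind)
	return key{Group: gvk.Group, Kind: kind, Namespace: namespace, Name: name}
}

func main() {
	parent := key{Group: "apps", Kind: "ReplicaSet", Namespace: "default", Name: "helm-guestbook-rs"}
	// The child's ownerReference records an older version of the same group/kind.
	ref := ownerKey("apps/v1beta2", "ReplicaSet", "default", "helm-guestbook-rs")
	fmt.Println(ref == parent) // true: version is dropped from the key
}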

controller/cache/node_test.go (new file)
@@ -0,0 +1,29 @@
package cache
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/argoproj/argo-cd/util/settings"
)
var c = &clusterInfo{settings: &settings.ArgoCDSettings{}}
func TestIsParentOf(t *testing.T) {
child := c.createObjInfo(testPod, "")
parent := c.createObjInfo(testRS, "")
grandParent := c.createObjInfo(testDeploy, "")
assert.True(t, parent.isParentOf(child))
assert.False(t, grandParent.isParentOf(child))
}
func TestIsParentOfSameKindDifferentGroup(t *testing.T) {
rs := testRS.DeepCopy()
rs.SetAPIVersion("somecrd.io/v1")
child := c.createObjInfo(testPod, "")
invalidParent := c.createObjInfo(rs, "")
assert.False(t, invalidParent.isParentOf(child))
}


@@ -0,0 +1,198 @@
package metrics
import (
"net/http"
"strconv"
"time"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"
log "github.com/sirupsen/logrus"
"k8s.io/apimachinery/pkg/labels"
argoappv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
applister "github.com/argoproj/argo-cd/pkg/client/listers/application/v1alpha1"
"github.com/argoproj/argo-cd/util/git"
)
type MetricsServer struct {
*http.Server
syncCounter *prometheus.CounterVec
k8sRequestCounter *prometheus.CounterVec
reconcileHistogram *prometheus.HistogramVec
}
const (
// MetricsPath is the endpoint to collect application metrics
MetricsPath = "/metrics"
)
// Follow Prometheus naming practices
// https://prometheus.io/docs/practices/naming/
var (
descAppDefaultLabels = []string{"namespace", "name", "project"}
descAppInfo = prometheus.NewDesc(
"argocd_app_info",
"Information about application.",
append(descAppDefaultLabels, "repo", "dest_server", "dest_namespace"),
nil,
)
descAppCreated = prometheus.NewDesc(
"argocd_app_created_time",
"Creation time in unix timestamp for an application.",
descAppDefaultLabels,
nil,
)
descAppSyncStatusCode = prometheus.NewDesc(
"argocd_app_sync_status",
"The application current sync status.",
append(descAppDefaultLabels, "sync_status"),
nil,
)
descAppHealthStatus = prometheus.NewDesc(
"argocd_app_health_status",
"The application current health status.",
append(descAppDefaultLabels, "health_status"),
nil,
)
)
// NewMetricsServer returns a new prometheus server which collects application metrics
func NewMetricsServer(addr string, appLister applister.ApplicationLister) *MetricsServer {
mux := http.NewServeMux()
appRegistry := NewAppRegistry(appLister)
appRegistry.MustRegister(prometheus.NewProcessCollector(prometheus.ProcessCollectorOpts{}))
appRegistry.MustRegister(prometheus.NewGoCollector())
mux.Handle(MetricsPath, promhttp.HandlerFor(appRegistry, promhttp.HandlerOpts{}))
syncCounter := prometheus.NewCounterVec(
prometheus.CounterOpts{
Name: "argocd_app_sync_total",
Help: "Number of application syncs.",
},
append(descAppDefaultLabels, "phase"),
)
appRegistry.MustRegister(syncCounter)
k8sRequestCounter := prometheus.NewCounterVec(
prometheus.CounterOpts{
Name: "argocd_app_k8s_request_total",
Help: "Number of kubernetes requests executed during application reconciliation.",
},
append(descAppDefaultLabels, "response_code"),
)
appRegistry.MustRegister(k8sRequestCounter)
reconcileHistogram := prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Name: "argocd_app_reconcile",
Help: "Application reconciliation performance.",
// Buckets chosen after observing a ~2100ms mean reconcile time
Buckets: []float64{0.25, .5, 1, 2, 4, 8, 16},
},
descAppDefaultLabels,
)
appRegistry.MustRegister(reconcileHistogram)
return &MetricsServer{
Server: &http.Server{
Addr: addr,
Handler: mux,
},
syncCounter: syncCounter,
k8sRequestCounter: k8sRequestCounter,
reconcileHistogram: reconcileHistogram,
}
}
// IncSync increments the sync counter for an application
func (m *MetricsServer) IncSync(app *argoappv1.Application, state *argoappv1.OperationState) {
if !state.Phase.Completed() {
return
}
m.syncCounter.WithLabelValues(app.Namespace, app.Name, app.Spec.GetProject(), string(state.Phase)).Inc()
}
// IncKubernetesRequest increments the kubernetes requests counter for an application
func (m *MetricsServer) IncKubernetesRequest(app *argoappv1.Application, statusCode int) {
m.k8sRequestCounter.WithLabelValues(app.Namespace, app.Name, app.Spec.GetProject(), strconv.Itoa(statusCode)).Inc()
}
// IncReconcile increments the reconcile counter for an application
func (m *MetricsServer) IncReconcile(app *argoappv1.Application, duration time.Duration) {
m.reconcileHistogram.WithLabelValues(app.Namespace, app.Name, app.Spec.GetProject()).Observe(duration.Seconds())
}
type appCollector struct {
store applister.ApplicationLister
}
// NewAppCollector returns a prometheus collector for application metrics
func NewAppCollector(appLister applister.ApplicationLister) prometheus.Collector {
return &appCollector{
store: appLister,
}
}
// NewAppRegistry creates a new prometheus registry that collects applications
func NewAppRegistry(appLister applister.ApplicationLister) *prometheus.Registry {
registry := prometheus.NewRegistry()
registry.MustRegister(NewAppCollector(appLister))
return registry
}
// Describe implements the prometheus.Collector interface
func (c *appCollector) Describe(ch chan<- *prometheus.Desc) {
ch <- descAppInfo
ch <- descAppCreated
ch <- descAppSyncStatusCode
ch <- descAppHealthStatus
}
// Collect implements the prometheus.Collector interface
func (c *appCollector) Collect(ch chan<- prometheus.Metric) {
apps, err := c.store.List(labels.NewSelector())
if err != nil {
log.Warnf("Failed to collect applications: %v", err)
return
}
for _, app := range apps {
collectApps(ch, app)
}
}
func boolFloat64(b bool) float64 {
if b {
return 1
}
return 0
}
func collectApps(ch chan<- prometheus.Metric, app *argoappv1.Application) {
addConstMetric := func(desc *prometheus.Desc, t prometheus.ValueType, v float64, lv ...string) {
project := app.Spec.GetProject()
lv = append([]string{app.Namespace, app.Name, project}, lv...)
ch <- prometheus.MustNewConstMetric(desc, t, v, lv...)
}
addGauge := func(desc *prometheus.Desc, v float64, lv ...string) {
addConstMetric(desc, prometheus.GaugeValue, v, lv...)
}
addGauge(descAppInfo, 1, git.NormalizeGitURL(app.Spec.Source.RepoURL), app.Spec.Destination.Server, app.Spec.Destination.Namespace)
addGauge(descAppCreated, float64(app.CreationTimestamp.Unix()))
syncStatus := app.Status.Sync.Status
addGauge(descAppSyncStatusCode, boolFloat64(syncStatus == argoappv1.SyncStatusCodeSynced), string(argoappv1.SyncStatusCodeSynced))
addGauge(descAppSyncStatusCode, boolFloat64(syncStatus == argoappv1.SyncStatusCodeOutOfSync), string(argoappv1.SyncStatusCodeOutOfSync))
addGauge(descAppSyncStatusCode, boolFloat64(syncStatus == argoappv1.SyncStatusCodeUnknown || syncStatus == ""), string(argoappv1.SyncStatusCodeUnknown))
healthStatus := app.Status.Health.Status
addGauge(descAppHealthStatus, boolFloat64(healthStatus == argoappv1.HealthStatusUnknown || healthStatus == ""), argoappv1.HealthStatusUnknown)
addGauge(descAppHealthStatus, boolFloat64(healthStatus == argoappv1.HealthStatusProgressing), argoappv1.HealthStatusProgressing)
addGauge(descAppHealthStatus, boolFloat64(healthStatus == argoappv1.HealthStatusSuspended), argoappv1.HealthStatusSuspended)
addGauge(descAppHealthStatus, boolFloat64(healthStatus == argoappv1.HealthStatusHealthy), argoappv1.HealthStatusHealthy)
addGauge(descAppHealthStatus, boolFloat64(healthStatus == argoappv1.HealthStatusDegraded), argoappv1.HealthStatusDegraded)
addGauge(descAppHealthStatus, boolFloat64(healthStatus == argoappv1.HealthStatusMissing), argoappv1.HealthStatusMissing)
}
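
collectApps reports sync and health as a family of 0/1 gauges, one series per possible status, with only the current status set to 1. This one-hot encoding makes PromQL selections such as argocd_app_health_status{health_status="Degraded"} == 1 trivial. A minimal, self-contained sketch of the pattern:

package main

import "fmt"

func boolFloat64(b bool) float64 {
	if b {
		return 1
	}
	return 0
}

func main() {
	current := "Healthy"
	for _, status := range []string{"Unknown", "Progressing", "Suspended", "Healthy", "Degraded", "Missing"} {
		// Exactly one of these series is 1 at any given time.
		fmt.Printf("argocd_app_health_status{health_status=%q} %v\n", status, boolFloat64(current == status))
	}
}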


@@ -0,0 +1,233 @@
package metrics
import (
"context"
"log"
"net/http"
"net/http/httptest"
"strings"
"testing"
"time"
"github.com/ghodss/yaml"
"github.com/stretchr/testify/assert"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/tools/cache"
argoappv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned/fake"
appinformer "github.com/argoproj/argo-cd/pkg/client/informers/externalversions"
applister "github.com/argoproj/argo-cd/pkg/client/listers/application/v1alpha1"
)
const fakeApp = `
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: my-app
namespace: argocd
spec:
destination:
namespace: dummy-namespace
server: https://localhost:6443
project: important-project
source:
path: some/path
repoURL: https://github.com/argoproj/argocd-example-apps.git
status:
sync:
status: Synced
health:
status: Healthy
`
const expectedResponse = `# HELP argocd_app_created_time Creation time in unix timestamp for an application.
# TYPE argocd_app_created_time gauge
argocd_app_created_time{name="my-app",namespace="argocd",project="important-project"} -6.21355968e+10
# HELP argocd_app_health_status The application current health status.
# TYPE argocd_app_health_status gauge
argocd_app_health_status{health_status="Degraded",name="my-app",namespace="argocd",project="important-project"} 0
argocd_app_health_status{health_status="Healthy",name="my-app",namespace="argocd",project="important-project"} 1
argocd_app_health_status{health_status="Missing",name="my-app",namespace="argocd",project="important-project"} 0
argocd_app_health_status{health_status="Progressing",name="my-app",namespace="argocd",project="important-project"} 0
argocd_app_health_status{health_status="Suspended",name="my-app",namespace="argocd",project="important-project"} 0
argocd_app_health_status{health_status="Unknown",name="my-app",namespace="argocd",project="important-project"} 0
# HELP argocd_app_info Information about application.
# TYPE argocd_app_info gauge
argocd_app_info{dest_namespace="dummy-namespace",dest_server="https://localhost:6443",name="my-app",namespace="argocd",project="important-project",repo="https://github.com/argoproj/argocd-example-apps"} 1
# HELP argocd_app_sync_status The application current sync status.
# TYPE argocd_app_sync_status gauge
argocd_app_sync_status{name="my-app",namespace="argocd",project="important-project",sync_status="OutOfSync"} 0
argocd_app_sync_status{name="my-app",namespace="argocd",project="important-project",sync_status="Synced"} 1
argocd_app_sync_status{name="my-app",namespace="argocd",project="important-project",sync_status="Unknown"} 0
`
const fakeDefaultApp = `
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: my-app
namespace: argocd
spec:
destination:
namespace: dummy-namespace
server: https://localhost:6443
source:
path: some/path
repoURL: https://github.com/argoproj/argocd-example-apps.git
status:
sync:
status: Synced
health:
status: Healthy
`
const expectedDefaultResponse = `# HELP argocd_app_created_time Creation time in unix timestamp for an application.
# TYPE argocd_app_created_time gauge
argocd_app_created_time{name="my-app",namespace="argocd",project="default"} -6.21355968e+10
# HELP argocd_app_health_status The application current health status.
# TYPE argocd_app_health_status gauge
argocd_app_health_status{health_status="Degraded",name="my-app",namespace="argocd",project="default"} 0
argocd_app_health_status{health_status="Healthy",name="my-app",namespace="argocd",project="default"} 1
argocd_app_health_status{health_status="Missing",name="my-app",namespace="argocd",project="default"} 0
argocd_app_health_status{health_status="Progressing",name="my-app",namespace="argocd",project="default"} 0
argocd_app_health_status{health_status="Suspended",name="my-app",namespace="argocd",project="default"} 0
argocd_app_health_status{health_status="Unknown",name="my-app",namespace="argocd",project="default"} 0
# HELP argocd_app_info Information about application.
# TYPE argocd_app_info gauge
argocd_app_info{dest_namespace="dummy-namespace",dest_server="https://localhost:6443",name="my-app",namespace="argocd",project="default",repo="https://github.com/argoproj/argocd-example-apps"} 1
# HELP argocd_app_sync_status The application current sync status.
# TYPE argocd_app_sync_status gauge
argocd_app_sync_status{name="my-app",namespace="argocd",project="default",sync_status="OutOfSync"} 0
argocd_app_sync_status{name="my-app",namespace="argocd",project="default",sync_status="Synced"} 1
argocd_app_sync_status{name="my-app",namespace="argocd",project="default",sync_status="Unknown"} 0
`
func newFakeApp(fakeApp string) *argoappv1.Application {
var app argoappv1.Application
err := yaml.Unmarshal([]byte(fakeApp), &app)
if err != nil {
panic(err)
}
return &app
}
func newFakeLister(fakeApp ...string) (context.CancelFunc, applister.ApplicationLister) {
ctx, cancel := context.WithCancel(context.Background())
var fakeApps []runtime.Object
for _, name := range fakeApp {
fakeApps = append(fakeApps, newFakeApp(name))
}
appClientset := appclientset.NewSimpleClientset(fakeApps...)
factory := appinformer.NewFilteredSharedInformerFactory(appClientset, 0, "argocd", func(options *metav1.ListOptions) {})
appInformer := factory.Argoproj().V1alpha1().Applications().Informer()
go appInformer.Run(ctx.Done())
if !cache.WaitForCacheSync(ctx.Done(), appInformer.HasSynced) {
log.Fatal("Timed out waiting for caches to sync")
}
return cancel, factory.Argoproj().V1alpha1().Applications().Lister()
}
func testApp(t *testing.T, fakeApp string, expectedResponse string) {
cancel, appLister := newFakeLister(fakeApp)
defer cancel()
metricsServ := NewMetricsServer("localhost:8082", appLister)
req, err := http.NewRequest("GET", "/metrics", nil)
assert.NoError(t, err)
rr := httptest.NewRecorder()
metricsServ.Handler.ServeHTTP(rr, req)
assert.Equal(t, rr.Code, http.StatusOK)
body := rr.Body.String()
log.Println(body)
assertMetricsPrinted(t, expectedResponse, body)
}
type testCombination struct {
application string
expectedResponse string
}
func TestMetrics(t *testing.T) {
combinations := []testCombination{
{
application: fakeApp,
expectedResponse: expectedResponse,
},
{
application: fakeDefaultApp,
expectedResponse: expectedDefaultResponse,
},
}
for _, combination := range combinations {
testApp(t, combination.application, combination.expectedResponse)
}
}
const appSyncTotal = `# HELP argocd_app_sync_total Number of application syncs.
# TYPE argocd_app_sync_total counter
argocd_app_sync_total{name="my-app",namespace="argocd",phase="Error",project="important-project"} 1
argocd_app_sync_total{name="my-app",namespace="argocd",phase="Failed",project="important-project"} 1
argocd_app_sync_total{name="my-app",namespace="argocd",phase="Succeeded",project="important-project"} 2
`
func TestMetricsSyncCounter(t *testing.T) {
cancel, appLister := newFakeLister()
defer cancel()
metricsServ := NewMetricsServer("localhost:8082", appLister)
fakeApp := newFakeApp(fakeApp)
metricsServ.IncSync(fakeApp, &argoappv1.OperationState{Phase: argoappv1.OperationRunning})
metricsServ.IncSync(fakeApp, &argoappv1.OperationState{Phase: argoappv1.OperationFailed})
metricsServ.IncSync(fakeApp, &argoappv1.OperationState{Phase: argoappv1.OperationError})
metricsServ.IncSync(fakeApp, &argoappv1.OperationState{Phase: argoappv1.OperationSucceeded})
metricsServ.IncSync(fakeApp, &argoappv1.OperationState{Phase: argoappv1.OperationSucceeded})
req, err := http.NewRequest("GET", "/metrics", nil)
assert.NoError(t, err)
rr := httptest.NewRecorder()
metricsServ.Handler.ServeHTTP(rr, req)
assert.Equal(t, rr.Code, http.StatusOK)
body := rr.Body.String()
log.Println(body)
assertMetricsPrinted(t, appSyncTotal, body)
}
// assertMetricsPrinted asserts every line in the expected lines appears in the body
func assertMetricsPrinted(t *testing.T, expectedLines, body string) {
for _, line := range strings.Split(expectedLines, "\n") {
assert.Contains(t, body, line)
}
}
const appReconcileMetrics = `argocd_app_reconcile_bucket{name="my-app",namespace="argocd",project="important-project",le="0.25"} 0
argocd_app_reconcile_bucket{name="my-app",namespace="argocd",project="important-project",le="0.5"} 0
argocd_app_reconcile_bucket{name="my-app",namespace="argocd",project="important-project",le="1"} 0
argocd_app_reconcile_bucket{name="my-app",namespace="argocd",project="important-project",le="2"} 0
argocd_app_reconcile_bucket{name="my-app",namespace="argocd",project="important-project",le="4"} 0
argocd_app_reconcile_bucket{name="my-app",namespace="argocd",project="important-project",le="8"} 1
argocd_app_reconcile_bucket{name="my-app",namespace="argocd",project="important-project",le="16"} 1
argocd_app_reconcile_bucket{name="my-app",namespace="argocd",project="important-project",le="+Inf"} 1
argocd_app_reconcile_sum{name="my-app",namespace="argocd",project="important-project"} 5
argocd_app_reconcile_count{name="my-app",namespace="argocd",project="important-project"} 1
`
func TestReconcileMetrics(t *testing.T) {
cancel, appLister := newFakeLister()
defer cancel()
metricsServ := NewMetricsServer("localhost:8082", appLister)
fakeApp := newFakeApp(fakeApp)
metricsServ.IncReconcile(fakeApp, 5*time.Second)
req, err := http.NewRequest("GET", "/metrics", nil)
assert.NoError(t, err)
rr := httptest.NewRecorder()
metricsServ.Handler.ServeHTTP(rr, req)
assert.Equal(t, rr.Code, http.StatusOK)
body := rr.Body.String()
log.Println(body)
assertMetricsPrinted(t, appReconcileMetrics, body)
}
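
Note on the expected output above: Prometheus histogram buckets are cumulative, so the single 5-second observation recorded by IncReconcile lands in every bucket with le >= 8 while the smaller buckets stay at 0, and the _sum and _count series record 5 and 1 respectively.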


@@ -0,0 +1,37 @@
package metrics
import (
"net/http"
"k8s.io/client-go/rest"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
)
type metricsRoundTripper struct {
roundTripper http.RoundTripper
app *v1alpha1.Application
metricsServer *MetricsServer
}
func (mrt *metricsRoundTripper) RoundTrip(r *http.Request) (*http.Response, error) {
resp, err := mrt.roundTripper.RoundTrip(r)
statusCode := 0
if resp != nil {
statusCode = resp.StatusCode
}
mrt.metricsServer.IncKubernetesRequest(mrt.app, statusCode)
return resp, err
}
// AddMetricsTransportWrapper adds a transport wrapper which increments 'argocd_app_k8s_request_total' counter on each kubernetes request
func AddMetricsTransportWrapper(server *MetricsServer, app *v1alpha1.Application, config *rest.Config) *rest.Config {
wrap := config.WrapTransport
config.WrapTransport = func(rt http.RoundTripper) http.RoundTripper {
if wrap != nil {
rt = wrap(rt)
}
return &metricsRoundTripper{roundTripper: rt, metricsServer: server, app: app}
}
return config
}
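
AddMetricsTransportWrapper chains onto any WrapTransport already installed on the rest.Config instead of replacing it. A self-contained sketch of the same decorator pattern, with a hypothetical countingRoundTripper standing in for metricsRoundTripper (the real one reports resp.StatusCode via IncKubernetesRequest):

package main

import (
	"net/http"

	"k8s.io/client-go/rest"
)

// countingRoundTripper decorates an existing RoundTripper and counts requests.
type countingRoundTripper struct {
	next  http.RoundTripper
	count *int
}

func (c *countingRoundTripper) RoundTrip(r *http.Request) (*http.Response, error) {
	resp, err := c.next.RoundTrip(r)
	*c.count++ // a real wrapper would also inspect resp.StatusCode here
	return resp, err
}

func main() {
	requests := 0
	config := &rest.Config{Host: "https://localhost:6443"}
	// Preserve any wrapper that is already installed, exactly as AddMetricsTransportWrapper does.
	prev := config.WrapTransport
	config.WrapTransport = func(rt http.RoundTripper) http.RoundTripper {
		if prev != nil {
			rt = prev(rt)
		}
		return &countingRoundTripper{next: rt, count: &requests}
	}
	_ = config // pass config to kubernetes.NewForConfig in real code
}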


@@ -1,205 +0,0 @@
package controller
import (
"context"
"encoding/json"
"time"
"runtime/debug"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/reposerver"
"github.com/argoproj/argo-cd/reposerver/repository"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/db"
log "github.com/sirupsen/logrus"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/fields"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/selection"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/informers"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/util/workqueue"
)
type SecretController struct {
kubeClient kubernetes.Interface
secretQueue workqueue.RateLimitingInterface
secretInformer cache.SharedIndexInformer
repoClientset reposerver.Clientset
namespace string
}
func (ctrl *SecretController) Run(ctx context.Context) {
go ctrl.secretInformer.Run(ctx.Done())
if !cache.WaitForCacheSync(ctx.Done(), ctrl.secretInformer.HasSynced) {
log.Error("Timed out waiting for caches to sync")
return
}
go wait.Until(func() {
for ctrl.processSecret() {
}
}, time.Second, ctx.Done())
}
func (ctrl *SecretController) processSecret() (processNext bool) {
secretKey, shutdown := ctrl.secretQueue.Get()
if shutdown {
processNext = false
return
} else {
processNext = true
}
defer func() {
if r := recover(); r != nil {
log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
}
ctrl.secretQueue.Done(secretKey)
}()
obj, exists, err := ctrl.secretInformer.GetIndexer().GetByKey(secretKey.(string))
if err != nil {
log.Errorf("Failed to get secret '%s' from informer index: %+v", secretKey, err)
return
}
if !exists {
// This happens after secret was deleted, but the work queue still had an entry for it.
return
}
secret, ok := obj.(*corev1.Secret)
if !ok {
log.Warnf("Key '%s' in index is not an secret", secretKey)
return
}
if secret.Labels[common.LabelKeySecretType] == common.SecretTypeCluster {
cluster := db.SecretToCluster(secret)
ctrl.updateState(secret, ctrl.getClusterState(cluster))
} else if secret.Labels[common.LabelKeySecretType] == common.SecretTypeRepository {
repo := db.SecretToRepo(secret)
ctrl.updateState(secret, ctrl.getRepoConnectionState(repo))
}
return
}
func (ctrl *SecretController) getRepoConnectionState(repo *v1alpha1.Repository) v1alpha1.ConnectionState {
state := v1alpha1.ConnectionState{
ModifiedAt: repo.ConnectionState.ModifiedAt,
Status: v1alpha1.ConnectionStatusUnknown,
}
closer, client, err := ctrl.repoClientset.NewRepositoryClient()
if err != nil {
log.Errorf("Unable to create repository client: %v", err)
return state
}
defer util.Close(closer)
_, err = client.ListDir(context.Background(), &repository.ListDirRequest{Repo: repo, Path: ".gitignore"})
if err == nil {
state.Status = v1alpha1.ConnectionStatusSuccessful
} else {
state.Status = v1alpha1.ConnectionStatusFailed
state.Message = err.Error()
}
return state
}
func (ctrl *SecretController) getClusterState(cluster *v1alpha1.Cluster) v1alpha1.ConnectionState {
state := v1alpha1.ConnectionState{
ModifiedAt: cluster.ConnectionState.ModifiedAt,
Status: v1alpha1.ConnectionStatusUnknown,
}
kubeClientset, err := kubernetes.NewForConfig(cluster.RESTConfig())
if err == nil {
_, err = kubeClientset.Discovery().ServerVersion()
}
if err == nil {
state.Status = v1alpha1.ConnectionStatusSuccessful
} else {
state.Status = v1alpha1.ConnectionStatusFailed
state.Message = err.Error()
}
return state
}
func (ctrl *SecretController) updateState(secret *corev1.Secret, state v1alpha1.ConnectionState) {
annotationsPatch := make(map[string]string)
for key, value := range db.AnnotationsFromConnectionState(&state) {
if secret.Annotations[key] != value {
annotationsPatch[key] = value
}
}
if len(annotationsPatch) > 0 {
annotationsPatch[common.AnnotationConnectionModifiedAt] = metav1.Now().Format(time.RFC3339)
patchData, err := json.Marshal(map[string]interface{}{
"metadata": map[string]interface{}{
"annotations": annotationsPatch,
},
})
if err != nil {
log.Warnf("Unable to prepare secret state annotation patch: %v", err)
} else {
_, err := ctrl.kubeClient.CoreV1().Secrets(secret.Namespace).Patch(secret.Name, types.MergePatchType, patchData)
if err != nil {
log.Warnf("Unable to patch secret state annotation: %v", err)
}
}
}
}
func newSecretInformer(client kubernetes.Interface, resyncPeriod time.Duration, namespace string, secretQueue workqueue.RateLimitingInterface) cache.SharedIndexInformer {
informerFactory := informers.NewFilteredSharedInformerFactory(
client,
resyncPeriod,
namespace,
func(options *metav1.ListOptions) {
req, err := labels.NewRequirement(common.LabelKeySecretType, selection.In, []string{common.SecretTypeCluster, common.SecretTypeRepository})
if err != nil {
panic(err)
}
options.FieldSelector = fields.Everything().String()
labelSelector := labels.NewSelector().Add(*req)
options.LabelSelector = labelSelector.String()
},
)
informer := informerFactory.Core().V1().Secrets().Informer()
informer.AddEventHandler(
cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
key, err := cache.MetaNamespaceKeyFunc(obj)
if err == nil {
secretQueue.Add(key)
}
},
UpdateFunc: func(old, new interface{}) {
key, err := cache.MetaNamespaceKeyFunc(new)
if err == nil {
secretQueue.Add(key)
}
},
},
)
return informer
}
func NewSecretController(kubeClient kubernetes.Interface, repoClientset reposerver.Clientset, resyncPeriod time.Duration, namespace string) *SecretController {
secretQueue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
return &SecretController{
kubeClient: kubeClient,
secretQueue: secretQueue,
secretInformer: newSecretInformer(kubeClient, resyncPeriod, namespace, secretQueue),
namespace: namespace,
repoClientset: repoClientset,
}
}


@@ -10,390 +10,353 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/discovery"
"k8s.io/client-go/dynamic"
"k8s.io/client-go/tools/cache"
"github.com/argoproj/argo-cd/common"
statecache "github.com/argoproj/argo-cd/controller/cache"
"github.com/argoproj/argo-cd/controller/metrics"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
appv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
appclientset "github.com/argoproj/argo-cd/pkg/client/clientset/versioned"
"github.com/argoproj/argo-cd/reposerver"
"github.com/argoproj/argo-cd/reposerver/repository"
"github.com/argoproj/argo-cd/util"
"github.com/argoproj/argo-cd/util/argo"
"github.com/argoproj/argo-cd/util/db"
"github.com/argoproj/argo-cd/util/diff"
"github.com/argoproj/argo-cd/util/health"
hookutil "github.com/argoproj/argo-cd/util/hook"
kubeutil "github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/settings"
)
const (
maxHistoryCnt = 5
)
type managedResource struct {
Target *unstructured.Unstructured
Live *unstructured.Unstructured
Diff diff.DiffResult
Group string
Version string
Kind string
Namespace string
Name string
Hook bool
}
func GetLiveObjs(res []managedResource) []*unstructured.Unstructured {
objs := make([]*unstructured.Unstructured, len(res))
for i := range res {
objs[i] = res[i].Live
}
return objs
}
type ResourceInfoProvider interface {
IsNamespaced(server string, obj *unstructured.Unstructured) (bool, error)
}
// AppStateManager defines methods which allow to compare application spec and actual application state.
type AppStateManager interface {
CompareAppState(app *v1alpha1.Application, revision string, overrides []v1alpha1.ComponentParameter) (
*v1alpha1.ComparisonResult, *repository.ManifestResponse, []v1alpha1.ApplicationCondition, error)
CompareAppState(app *v1alpha1.Application, revision string, source v1alpha1.ApplicationSource, noCache bool) (*comparisonResult, error)
SyncAppState(app *v1alpha1.Application, state *v1alpha1.OperationState)
}
// ksonnetAppStateManager allows to compare application using KSonnet CLI
type ksonnetAppStateManager struct {
db db.ArgoDB
appclientset appclientset.Interface
repoClientset reposerver.Clientset
namespace string
type comparisonResult struct {
reconciledAt metav1.Time
syncStatus *v1alpha1.SyncStatus
healthStatus *v1alpha1.HealthStatus
resources []v1alpha1.ResourceStatus
managedResources []managedResource
conditions []v1alpha1.ApplicationCondition
hooks []*unstructured.Unstructured
diffNormalizer diff.Normalizer
appSourceType v1alpha1.ApplicationSourceType
}
-// groupLiveObjects deduplicate list of kubernetes resources and choose correct version of resource: if resource has corresponding expected application resource then method pick
-// kubernetes resource with matching version, otherwise chooses single kubernetes resource with any version
-func groupLiveObjects(liveObjs []*unstructured.Unstructured, targetObjs []*unstructured.Unstructured) map[string]*unstructured.Unstructured {
-targetByFullName := make(map[string]*unstructured.Unstructured)
-for _, obj := range targetObjs {
-targetByFullName[getResourceFullName(obj)] = obj
-}
-liveListByFullName := make(map[string][]*unstructured.Unstructured)
-for _, obj := range liveObjs {
-list := liveListByFullName[getResourceFullName(obj)]
-if list == nil {
-list = make([]*unstructured.Unstructured, 0)
-}
-list = append(list, obj)
-liveListByFullName[getResourceFullName(obj)] = list
-}
-liveByFullName := make(map[string]*unstructured.Unstructured)
-for fullName, list := range liveListByFullName {
-targetObj := targetByFullName[fullName]
-var liveObj *unstructured.Unstructured
-if targetObj != nil {
-for i := range list {
-if list[i].GetAPIVersion() == targetObj.GetAPIVersion() {
-liveObj = list[i]
-break
-}
-}
-} else {
-liveObj = list[0]
-}
-if liveObj != nil {
-liveByFullName[getResourceFullName(liveObj)] = liveObj
-}
-}
-return liveByFullName
-}
+// appStateManager allows to compare applications to git
+type appStateManager struct {
+metricsServer *metrics.MetricsServer
+db db.ArgoDB
+settings *settings.ArgoCDSettings
+appclientset appclientset.Interface
+projInformer cache.SharedIndexInformer
+kubectl kubeutil.Kubectl
+repoClientset reposerver.Clientset
+liveStateCache statecache.LiveStateCache
+namespace string
+}
-func (s *ksonnetAppStateManager) getTargetObjs(app *v1alpha1.Application, revision string, overrides []v1alpha1.ComponentParameter) ([]*unstructured.Unstructured, *repository.ManifestResponse, error) {
-repo := s.getRepo(app.Spec.Source.RepoURL)
-conn, repoClient, err := s.repoClientset.NewRepositoryClient()
-if err != nil {
-return nil, nil, err
-}
+func (m *appStateManager) getRepoObjs(app *v1alpha1.Application, source v1alpha1.ApplicationSource, appLabelKey, revision string, noCache bool) ([]*unstructured.Unstructured, []*unstructured.Unstructured, *repository.ManifestResponse, error) {
+helmRepos, err := m.db.ListHelmRepos(context.Background())
+if err != nil {
+return nil, nil, nil, err
+}
+repo, err := m.db.GetRepository(context.Background(), source.RepoURL)
+if err != nil {
+return nil, nil, nil, err
+}
+conn, repoClient, err := m.repoClientset.NewRepoServerClient()
+if err != nil {
+return nil, nil, nil, err
+}
defer util.Close(conn)
if revision == "" {
-revision = app.Spec.Source.TargetRevision
+revision = source.TargetRevision
}
-// Decide what overrides to compare with.
-var mfReqOverrides []*v1alpha1.ComponentParameter
-if overrides != nil {
-// If overrides is supplied, use that
-mfReqOverrides = make([]*v1alpha1.ComponentParameter, len(overrides))
-for i := range overrides {
-item := overrides[i]
-mfReqOverrides[i] = &item
-}
-} else {
-// Otherwise, use the overrides in the app spec
-mfReqOverrides = make([]*v1alpha1.ComponentParameter, len(app.Spec.Source.ComponentParameterOverrides))
-for i := range app.Spec.Source.ComponentParameterOverrides {
-item := app.Spec.Source.ComponentParameterOverrides[i]
-mfReqOverrides[i] = &item
-}
-}
+tools := make([]*appv1.ConfigManagementPlugin, len(m.settings.ConfigManagementPlugins))
+for i := range m.settings.ConfigManagementPlugins {
+tools[i] = &m.settings.ConfigManagementPlugins[i]
+}
manifestInfo, err := repoClient.GenerateManifest(context.Background(), &repository.ManifestRequest{
-Repo: repo,
-Environment: app.Spec.Source.Environment,
-Path: app.Spec.Source.Path,
-Revision: revision,
-ComponentParameterOverrides: mfReqOverrides,
-AppLabel: app.Name,
-ValueFiles: app.Spec.Source.ValuesFiles,
+Repo: repo,
+HelmRepos: helmRepos,
+Revision: revision,
+NoCache: noCache,
+AppLabelKey: appLabelKey,
+AppLabelValue: app.Name,
+Namespace: app.Spec.Destination.Namespace,
+ApplicationSource: &source,
+Plugins: tools,
})
if err != nil {
-return nil, nil, err
+return nil, nil, nil, err
}
targetObjs := make([]*unstructured.Unstructured, 0)
+hooks := make([]*unstructured.Unstructured, 0)
for _, manifest := range manifestInfo.Manifests {
obj, err := v1alpha1.UnmarshalToUnstructured(manifest)
if err != nil {
-return nil, nil, err
+return nil, nil, nil, err
}
-if isHook(obj) {
-continue
-}
-targetObjs = append(targetObjs, obj)
+if hookutil.IsHook(obj) {
+hooks = append(hooks, obj)
+} else {
+targetObjs = append(targetObjs, obj)
+}
}
-return targetObjs, manifestInfo, nil
+return targetObjs, hooks, manifestInfo, nil
}
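// Illustrative note (not part of the original file): manifests annotated as hooks are
// returned in the separate hooks slice, so they never enter the target/live diff in
// CompareAppState; TestCompareAppStateHook below exercises exactly this split.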
-func (s *ksonnetAppStateManager) getLiveObjs(app *v1alpha1.Application, targetObjs []*unstructured.Unstructured) (
-[]*unstructured.Unstructured, map[string]*unstructured.Unstructured, error) {
-// Get the REST config for the cluster corresponding to the environment
-clst, err := s.db.GetCluster(context.Background(), app.Spec.Destination.Server)
-if err != nil {
-return nil, nil, err
-}
-restConfig := clst.RESTConfig()
-// Retrieve the live versions of the objects. exclude any hook objects
-labeledObjs, err := kubeutil.GetResourcesWithLabel(restConfig, app.Spec.Destination.Namespace, common.LabelApplicationName, app.Name)
-if err != nil {
-return nil, nil, err
-}
-liveObjs := make([]*unstructured.Unstructured, 0)
-for _, obj := range labeledObjs {
-if isHook(obj) {
-continue
-}
-liveObjs = append(liveObjs, obj)
-}
-liveObjByFullName := groupLiveObjects(liveObjs, targetObjs)
-controlledLiveObj := make([]*unstructured.Unstructured, len(targetObjs))
-// Move live resources which have corresponding target object to controlledLiveObj
-dynClientPool := dynamic.NewDynamicClientPool(restConfig)
-disco, err := discovery.NewDiscoveryClientForConfig(restConfig)
-if err != nil {
-return nil, nil, err
-}
-for i, targetObj := range targetObjs {
-fullName := getResourceFullName(targetObj)
-liveObj := liveObjByFullName[fullName]
-if liveObj == nil && targetObj.GetName() != "" {
-// If we get here, it indicates we did not find the live resource when querying using
-// our app label. However, it is possible that the resource was created/modified outside
-// of ArgoCD. In order to determine that it is truly missing, we fall back to perform a
-// direct lookup of the resource by name. See issue #141
-gvk := targetObj.GroupVersionKind()
-dclient, err := dynClientPool.ClientForGroupVersionKind(gvk)
-if err != nil {
-return nil, nil, err
-}
-apiResource, err := kubeutil.ServerResourceForGroupVersionKind(disco, gvk)
-if err != nil {
-return nil, nil, err
-}
-liveObj, err = kubeutil.GetLiveResource(dclient, targetObj, apiResource, app.Spec.Destination.Namespace)
-if err != nil {
-return nil, nil, err
-}
-}
-controlledLiveObj[i] = liveObj
-delete(liveObjByFullName, fullName)
-}
-return controlledLiveObj, liveObjByFullName, nil
-}
+func DeduplicateTargetObjects(
+server string,
+namespace string,
+objs []*unstructured.Unstructured,
+infoProvider ResourceInfoProvider,
+) ([]*unstructured.Unstructured, []v1alpha1.ApplicationCondition, error) {
+targetByKey := make(map[kubeutil.ResourceKey][]*unstructured.Unstructured)
+for i := range objs {
+obj := objs[i]
+isNamespaced, err := infoProvider.IsNamespaced(server, obj)
+if err != nil {
+return objs, nil, err
+}
+if !isNamespaced {
+obj.SetNamespace("")
+} else if obj.GetNamespace() == "" {
+obj.SetNamespace(namespace)
+}
+key := kubeutil.GetResourceKey(obj)
+targetByKey[key] = append(targetByKey[key], obj)
+}
+conditions := make([]v1alpha1.ApplicationCondition, 0)
+result := make([]*unstructured.Unstructured, 0)
+for key, targets := range targetByKey {
+if len(targets) > 1 {
+conditions = append(conditions, appv1.ApplicationCondition{
+Type: appv1.ApplicationConditionRepeatedResourceWarning,
+Message: fmt.Sprintf("Resource %s appeared %d times among application resources.", key.String(), len(targets)),
+})
+}
+result = append(result, targets[len(targets)-1])
+}
+return result, conditions, nil
+}
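// Illustrative sketch (not part of this diff): how duplicate target manifests collapse
// to a single object plus a RepeatedResourceWarning condition. The fakeInfoProvider
// stub below is assumed for the example only.
type fakeInfoProvider struct{}

func (fakeInfoProvider) IsNamespaced(server string, obj *unstructured.Unstructured) (bool, error) {
	return true, nil // pretend every kind is namespaced
}

func exampleDeduplicate() {
	pod := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "v1",
		"kind":       "Pod",
		"metadata":   map[string]interface{}{"name": "my-pod"},
	}}
	// Same group/kind/namespace/name twice: the last occurrence wins.
	result, conditions, _ := DeduplicateTargetObjects("https://cluster", "default",
		[]*unstructured.Unstructured{pod, pod.DeepCopy()}, fakeInfoProvider{})
	fmt.Println(len(result), len(conditions)) // prints: 1 1
}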
// CompareAppState compares application git state to the live app state, using the specified
-// revision and supplied overrides. If revision or overrides are empty, then compares against
+// revision and supplied source. If revision or overrides are empty, then compares against
// revision and overrides in the app spec.
-func (s *ksonnetAppStateManager) CompareAppState(app *v1alpha1.Application, revision string, overrides []v1alpha1.ComponentParameter) (
-*v1alpha1.ComparisonResult, *repository.ManifestResponse, []v1alpha1.ApplicationCondition, error) {
+func (m *appStateManager) CompareAppState(app *v1alpha1.Application, revision string, source v1alpha1.ApplicationSource, noCache bool) (*comparisonResult, error) {
+diffNormalizer, err := argo.NewDiffNormalizer(app.Spec.IgnoreDifferences, m.settings.ResourceOverrides)
+if err != nil {
+return nil, err
+}
+logCtx := log.WithField("application", app.Name)
+logCtx.Infof("Comparing app state (cluster: %s, namespace: %s)", app.Spec.Destination.Server, app.Spec.Destination.Namespace)
+observedAt := metav1.Now()
failedToLoadObjs := false
conditions := make([]v1alpha1.ApplicationCondition, 0)
-targetObjs, manifestInfo, err := s.getTargetObjs(app, revision, overrides)
+appLabelKey := m.settings.GetAppInstanceLabelKey()
+targetObjs, hooks, manifestInfo, err := m.getRepoObjs(app, source, appLabelKey, revision, noCache)
if err != nil {
targetObjs = make([]*unstructured.Unstructured, 0)
conditions = append(conditions, v1alpha1.ApplicationCondition{Type: v1alpha1.ApplicationConditionComparisonError, Message: err.Error()})
failedToLoadObjs = true
}
-controlledLiveObj, liveObjByFullName, err := s.getLiveObjs(app, targetObjs)
+targetObjs, dedupConditions, err := DeduplicateTargetObjects(app.Spec.Destination.Server, app.Spec.Destination.Namespace, targetObjs, m.liveStateCache)
if err != nil {
-controlledLiveObj = make([]*unstructured.Unstructured, len(targetObjs))
-liveObjByFullName = make(map[string]*unstructured.Unstructured)
conditions = append(conditions, v1alpha1.ApplicationCondition{Type: v1alpha1.ApplicationConditionComparisonError, Message: err.Error()})
}
+conditions = append(conditions, dedupConditions...)
+logCtx.Debugf("Generated config manifests")
+liveObjByKey, err := m.liveStateCache.GetManagedLiveObjs(app, targetObjs)
+if err != nil {
+liveObjByKey = make(map[kubeutil.ResourceKey]*unstructured.Unstructured)
+conditions = append(conditions, v1alpha1.ApplicationCondition{Type: v1alpha1.ApplicationConditionComparisonError, Message: err.Error()})
+failedToLoadObjs = true
+}
-for _, liveObj := range controlledLiveObj {
-if liveObj != nil && liveObj.GetLabels() != nil {
-if appLabelVal, ok := liveObj.GetLabels()[common.LabelApplicationName]; ok && appLabelVal != "" && appLabelVal != app.Name {
+logCtx.Debugf("Retrieved lived manifests")
+for _, liveObj := range liveObjByKey {
+if liveObj != nil {
+appInstanceName := kubeutil.GetAppInstanceLabel(liveObj, appLabelKey)
+if appInstanceName != "" && appInstanceName != app.Name {
conditions = append(conditions, v1alpha1.ApplicationCondition{
Type: v1alpha1.ApplicationConditionSharedResourceWarning,
-Message: fmt.Sprintf("Resource %s/%s is controller by applications '%s' and '%s'", liveObj.GetKind(), liveObj.GetName(), app.Name, appLabelVal),
+Message: fmt.Sprintf("%s/%s is part of a different application: %s", liveObj.GetKind(), liveObj.GetName(), appInstanceName),
})
}
}
}
-// Move root level live resources to controlledLiveObj and add nil to targetObjs to indicate that target object is missing
-for fullName := range liveObjByFullName {
-liveObj := liveObjByFullName[fullName]
-if !hasParent(liveObj) {
-targetObjs = append(targetObjs, nil)
-controlledLiveObj = append(controlledLiveObj, liveObj)
-}
-}
-log.Infof("Comparing app %s state in cluster %s (namespace: %s)", app.ObjectMeta.Name, app.Spec.Destination.Server, app.Spec.Destination.Namespace)
+managedLiveObj := make([]*unstructured.Unstructured, len(targetObjs))
+for i, obj := range targetObjs {
+gvk := obj.GroupVersionKind()
+ns := util.FirstNonEmpty(obj.GetNamespace(), app.Spec.Destination.Namespace)
+if namespaced, err := m.liveStateCache.IsNamespaced(app.Spec.Destination.Server, obj); err == nil && !namespaced {
+ns = ""
+}
+key := kubeutil.NewResourceKey(gvk.Group, gvk.Kind, ns, obj.GetName())
+if liveObj, ok := liveObjByKey[key]; ok {
+managedLiveObj[i] = liveObj
+delete(liveObjByKey, key)
+} else {
+managedLiveObj[i] = nil
+}
+}
+logCtx.Debugf("built managed objects list")
+// Everything remaining in liveObjByKey are "extra" resources that aren't tracked in git.
+// The following adds all the extras to the managedLiveObj list and backfills the targetObj
+// list with nils, so that the lists are of equal lengths for comparison purposes.
+for _, obj := range liveObjByKey {
+targetObjs = append(targetObjs, nil)
+managedLiveObj = append(managedLiveObj, obj)
+}
// Do the actual comparison
-diffResults, err := diff.DiffArray(targetObjs, controlledLiveObj)
+diffResults, err := diff.DiffArray(targetObjs, managedLiveObj, diffNormalizer)
if err != nil {
-return nil, nil, nil, err
+return nil, err
}
-comparisonStatus := v1alpha1.ComparisonStatusSynced
-resources := make([]v1alpha1.ResourceState, len(targetObjs))
-for i := 0; i < len(targetObjs); i++ {
-resState := v1alpha1.ResourceState{
-ChildLiveResources: make([]v1alpha1.ResourceNode, 0),
-}
-diffResult := diffResults.Diffs[i]
-if diffResult.Modified {
-// Set resource state to 'OutOfSync' since target and corresponding live resource are different
-resState.Status = v1alpha1.ComparisonStatusOutOfSync
-comparisonStatus = v1alpha1.ComparisonStatusOutOfSync
-} else {
-resState.Status = v1alpha1.ComparisonStatusSynced
-}
-if targetObjs[i] == nil {
-resState.TargetState = "null"
-// Set resource state to 'OutOfSync' since target resource is missing and live resource is unexpected
-resState.Status = v1alpha1.ComparisonStatusOutOfSync
-comparisonStatus = v1alpha1.ComparisonStatusOutOfSync
-} else {
-targetObjBytes, err := json.Marshal(targetObjs[i].Object)
-if err != nil {
-return nil, nil, nil, err
-}
-resState.TargetState = string(targetObjBytes)
-}
-if controlledLiveObj[i] == nil {
-resState.LiveState = "null"
-// Set resource state to 'OutOfSync' since target resource present but corresponding live resource is missing
-resState.Status = v1alpha1.ComparisonStatusOutOfSync
-comparisonStatus = v1alpha1.ComparisonStatusOutOfSync
-} else {
-liveObjBytes, err := json.Marshal(controlledLiveObj[i].Object)
-if err != nil {
-return nil, nil, nil, err
-}
-resState.LiveState = string(liveObjBytes)
-}
-resources[i] = resState
-}
+syncCode := v1alpha1.SyncStatusCodeSynced
+managedResources := make([]managedResource, len(targetObjs))
+resourceSummaries := make([]v1alpha1.ResourceStatus, len(targetObjs))
+for i := 0; i < len(targetObjs); i++ {
+obj := managedLiveObj[i]
+if obj == nil {
+obj = targetObjs[i]
+}
+if obj == nil {
+continue
+}
+gvk := obj.GroupVersionKind()
+resState := v1alpha1.ResourceStatus{
+Namespace: obj.GetNamespace(),
+Name: obj.GetName(),
+Kind: gvk.Kind,
+Version: gvk.Version,
+Group: gvk.Group,
+Hook: hookutil.IsHook(obj),
+}
+diffResult := diffResults.Diffs[i]
+if resState.Hook {
+// For resource hooks, don't store sync status, and do not affect overall sync status
+} else if diffResult.Modified || targetObjs[i] == nil || managedLiveObj[i] == nil {
+// Set resource state to OutOfSync since one of the following is true:
+// * target and live resource are different
+// * target resource not defined and live resource is extra
+// * target resource present but live resource is missing
+resState.Status = v1alpha1.SyncStatusCodeOutOfSync
+syncCode = v1alpha1.SyncStatusCodeOutOfSync
+} else {
+resState.Status = v1alpha1.SyncStatusCodeSynced
+}
+managedResources[i] = managedResource{
+Name: resState.Name,
+Namespace: resState.Namespace,
+Group: resState.Group,
+Kind: resState.Kind,
+Version: resState.Version,
+Live: managedLiveObj[i],
+Target: targetObjs[i],
+Diff: diffResult,
+Hook: resState.Hook,
+}
+resourceSummaries[i] = resState
+}
-for i, resource := range resources {
-liveResource := controlledLiveObj[i]
-if liveResource != nil {
-childResources, err := getChildren(liveResource, liveObjByFullName)
-if err != nil {
-return nil, nil, nil, err
-}
-resource.ChildLiveResources = childResources
-resources[i] = resource
-}
-}
if failedToLoadObjs {
-comparisonStatus = v1alpha1.ComparisonStatusUnknown
+syncCode = v1alpha1.SyncStatusCodeUnknown
}
-compResult := v1alpha1.ComparisonResult{
-ComparedTo: app.Spec.Source,
-ComparedAt: metav1.Time{Time: time.Now().UTC()},
-Resources: resources,
-Status: comparisonStatus,
-}
-return &compResult, manifestInfo, conditions, nil
-}
+syncStatus := v1alpha1.SyncStatus{
+ComparedTo: appv1.ComparedTo{
+Source: source,
+Destination: app.Spec.Destination,
+},
+Status: syncCode,
+}
+if manifestInfo != nil {
+syncStatus.Revision = manifestInfo.Revision
+}
+healthStatus, err := health.SetApplicationHealth(resourceSummaries, GetLiveObjs(managedResources), m.settings.ResourceOverrides, func(obj *unstructured.Unstructured) bool {
+return !isSelfReferencedApp(app, kubeutil.GetObjectRef(obj))
+})
+if err != nil {
+conditions = append(conditions, appv1.ApplicationCondition{Type: v1alpha1.ApplicationConditionComparisonError, Message: err.Error()})
+}
+compRes := comparisonResult{
+reconciledAt: observedAt,
+syncStatus: &syncStatus,
+healthStatus: healthStatus,
+resources: resourceSummaries,
+managedResources: managedResources,
+conditions: conditions,
+hooks: hooks,
+diffNormalizer: diffNormalizer,
+}
+if manifestInfo != nil {
+compRes.appSourceType = v1alpha1.ApplicationSourceType(manifestInfo.SourceType)
+}
+return &compRes, nil
+}
-func hasParent(obj *unstructured.Unstructured) bool {
-// TODO: remove special case after Service and Endpoint get explicit relationship ( https://github.com/kubernetes/kubernetes/issues/28483 )
-return obj.GetKind() == kubeutil.EndpointsKind || metav1.GetControllerOf(obj) != nil
-}
-func isControlledBy(obj *unstructured.Unstructured, parent *unstructured.Unstructured) bool {
-// TODO: remove special case after Service and Endpoint get explicit relationship ( https://github.com/kubernetes/kubernetes/issues/28483 )
-if obj.GetKind() == kubeutil.EndpointsKind && parent.GetKind() == kubeutil.ServiceKind {
-return obj.GetName() == parent.GetName()
-}
-return metav1.IsControlledBy(obj, parent)
-}
-func getChildren(parent *unstructured.Unstructured, liveObjByFullName map[string]*unstructured.Unstructured) ([]v1alpha1.ResourceNode, error) {
-children := make([]v1alpha1.ResourceNode, 0)
-for fullName, obj := range liveObjByFullName {
-if isControlledBy(obj, parent) {
-delete(liveObjByFullName, fullName)
-childResource := v1alpha1.ResourceNode{}
-json, err := json.Marshal(obj)
-if err != nil {
-return nil, err
-}
-childResource.State = string(json)
-childResourceChildren, err := getChildren(obj, liveObjByFullName)
-if err != nil {
-return nil, err
-}
-childResource.Children = childResourceChildren
-children = append(children, childResource)
-}
-}
-return children, nil
-}
-func getResourceFullName(obj *unstructured.Unstructured) string {
-return fmt.Sprintf("%s:%s", obj.GetKind(), obj.GetName())
-}
-func (s *ksonnetAppStateManager) getRepo(repoURL string) *v1alpha1.Repository {
-repo, err := s.db.GetRepository(context.Background(), repoURL)
-if err != nil {
-// If we couldn't retrieve from the repo service, assume public repositories
-repo = &v1alpha1.Repository{Repo: repoURL}
-}
-return repo
-}
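// Illustrative note (not part of the original file): callers typically read
// compRes.syncStatus.Status, compRes.healthStatus.Status, and compRes.managedResources,
// as the tests in controller/state_test.go below do.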
-func (s *ksonnetAppStateManager) persistDeploymentInfo(
-app *v1alpha1.Application, revision string, envParams []*v1alpha1.ComponentParameter, overrides *[]v1alpha1.ComponentParameter) error {
-params := make([]v1alpha1.ComponentParameter, len(envParams))
-for i := range envParams {
-param := *envParams[i]
-params[i] = param
-}
-var nextID int64 = 0
+func (m *appStateManager) persistRevisionHistory(app *v1alpha1.Application, revision string, source v1alpha1.ApplicationSource) error {
+var nextID int64
if len(app.Status.History) > 0 {
nextID = app.Status.History[len(app.Status.History)-1].ID + 1
}
-history := append(app.Status.History, v1alpha1.DeploymentInfo{
-ComponentParameterOverrides: app.Spec.Source.ComponentParameterOverrides,
-Revision: revision,
-Params: params,
-DeployedAt: metav1.NewTime(time.Now().UTC()),
-ID: nextID,
-})
+history := append(app.Status.History, v1alpha1.RevisionHistory{
+Revision: revision,
+DeployedAt: metav1.NewTime(time.Now().UTC()),
+ID: nextID,
+Source: source,
+})
-if len(history) > maxHistoryCnt {
-history = history[1 : maxHistoryCnt+1]
-}
+if len(history) > common.RevisionHistoryLimit {
+history = history[1 : common.RevisionHistoryLimit+1]
+}
-patch, err := json.Marshal(map[string]map[string][]v1alpha1.DeploymentInfo{
+patch, err := json.Marshal(map[string]map[string][]v1alpha1.RevisionHistory{
"status": {
"history": history,
},
@@ -401,7 +364,7 @@ func (s *ksonnetAppStateManager) persistDeploymentInfo(
if err != nil {
return err
}
-_, err = s.appclientset.ArgoprojV1alpha1().Applications(s.namespace).Patch(app.Name, types.MergePatchType, patch)
+_, err = m.appclientset.ArgoprojV1alpha1().Applications(m.namespace).Patch(app.Name, types.MergePatchType, patch)
return err
}
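// Illustrative note (not part of the original file): the patch marshaled above is a
// JSON merge patch that replaces only status.history and leaves the rest of the
// Application object untouched, e.g. (field names and values assumed for illustration):
//   {"status":{"history":[{"id":3,"revision":"abc123","deployedAt":"...","source":{...}}]}}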
@@ -411,11 +374,21 @@ func NewAppStateManager(
appclientset appclientset.Interface,
repoClientset reposerver.Clientset,
namespace string,
+kubectl kubeutil.Kubectl,
+settings *settings.ArgoCDSettings,
+liveStateCache statecache.LiveStateCache,
+projInformer cache.SharedIndexInformer,
+metricsServer *metrics.MetricsServer,
) AppStateManager {
-return &ksonnetAppStateManager{
-db: db,
-appclientset: appclientset,
-repoClientset: repoClientset,
-namespace: namespace,
-}
+return &appStateManager{
+liveStateCache: liveStateCache,
+db: db,
+appclientset: appclientset,
+kubectl: kubectl,
+repoClientset: repoClientset,
+namespace: namespace,
+settings: settings,
+projInformer: projInformer,
+metricsServer: metricsServer,
+}
}

controller/state_test.go (new file)

@@ -0,0 +1,263 @@
package controller
import (
"encoding/json"
"testing"
v1 "k8s.io/api/apps/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/stretchr/testify/assert"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"github.com/argoproj/argo-cd/common"
argoappv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/reposerver/repository"
"github.com/argoproj/argo-cd/test"
"github.com/argoproj/argo-cd/util/kube"
)
// TestCompareAppStateEmpty tests comparison when both git and live have no objects
func TestCompareAppStateEmpty(t *testing.T) {
app := newFakeApp()
data := fakeData{
manifestResponse: &repository.ManifestResponse{
Manifests: []string{},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
Revision: "abc123",
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data)
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false)
assert.NoError(t, err)
assert.NotNil(t, compRes)
assert.Equal(t, argoappv1.SyncStatusCodeSynced, compRes.syncStatus.Status)
assert.Equal(t, 0, len(compRes.resources))
assert.Equal(t, 0, len(compRes.managedResources))
assert.Equal(t, 0, len(compRes.conditions))
}
// TestCompareAppStateMissing tests when there is a manifest defined in git which doesn't exist in live
func TestCompareAppStateMissing(t *testing.T) {
app := newFakeApp()
data := fakeData{
apps: []runtime.Object{app},
manifestResponse: &repository.ManifestResponse{
Manifests: []string{string(test.PodManifest)},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
Revision: "abc123",
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data)
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false)
assert.NoError(t, err)
assert.NotNil(t, compRes)
assert.Equal(t, argoappv1.SyncStatusCodeOutOfSync, compRes.syncStatus.Status)
assert.Equal(t, 1, len(compRes.resources))
assert.Equal(t, 1, len(compRes.managedResources))
assert.Equal(t, 0, len(compRes.conditions))
}
// TestCompareAppStateExtra tests when there is an extra object in live but not defined in git
func TestCompareAppStateExtra(t *testing.T) {
pod := test.NewPod()
pod.SetNamespace(test.FakeDestNamespace)
app := newFakeApp()
key := kube.ResourceKey{Group: "", Kind: "Pod", Namespace: test.FakeDestNamespace, Name: app.Name}
data := fakeData{
manifestResponse: &repository.ManifestResponse{
Manifests: []string{},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
Revision: "abc123",
},
managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{
key: pod,
},
}
ctrl := newFakeController(&data)
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false)
assert.NoError(t, err)
assert.NotNil(t, compRes)
assert.Equal(t, argoappv1.SyncStatusCodeOutOfSync, compRes.syncStatus.Status)
assert.Equal(t, 1, len(compRes.resources))
assert.Equal(t, 1, len(compRes.managedResources))
assert.Equal(t, 0, len(compRes.conditions))
}
// TestCompareAppStateHook checks that hooks are detected during manifest generation, and not
// considered as part of resources when assessing Synced status
func TestCompareAppStateHook(t *testing.T) {
pod := test.NewPod()
pod.SetAnnotations(map[string]string{common.AnnotationKeyHook: "PreSync"})
podBytes, _ := json.Marshal(pod)
app := newFakeApp()
data := fakeData{
apps: []runtime.Object{app},
manifestResponse: &repository.ManifestResponse{
Manifests: []string{string(podBytes)},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
Revision: "abc123",
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data)
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false)
assert.NoError(t, err)
assert.NotNil(t, compRes)
assert.Equal(t, argoappv1.SyncStatusCodeSynced, compRes.syncStatus.Status)
assert.Equal(t, 0, len(compRes.resources))
assert.Equal(t, 0, len(compRes.managedResources))
assert.Equal(t, 0, len(compRes.conditions))
}
// TestCompareAppStateExtraHook tests when there is an extra _hook_ object in live but not defined in git
func TestCompareAppStateExtraHook(t *testing.T) {
pod := test.NewPod()
pod.SetAnnotations(map[string]string{common.AnnotationKeyHook: "PreSync"})
pod.SetNamespace(test.FakeDestNamespace)
app := newFakeApp()
key := kube.ResourceKey{Group: "", Kind: "Pod", Namespace: test.FakeDestNamespace, Name: app.Name}
data := fakeData{
manifestResponse: &repository.ManifestResponse{
Manifests: []string{},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
Revision: "abc123",
},
managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{
key: pod,
},
}
ctrl := newFakeController(&data)
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false)
assert.NoError(t, err)
assert.NotNil(t, compRes)
assert.Equal(t, argoappv1.SyncStatusCodeSynced, compRes.syncStatus.Status)
assert.Equal(t, 1, len(compRes.resources))
assert.Equal(t, 1, len(compRes.managedResources))
assert.Equal(t, 0, len(compRes.conditions))
}
func toJSON(t *testing.T, obj *unstructured.Unstructured) string {
data, err := json.Marshal(obj)
assert.NoError(t, err)
return string(data)
}
func TestCompareAppStateDuplicatedNamespacedResources(t *testing.T) {
obj1 := test.NewPod()
obj1.SetNamespace(test.FakeDestNamespace)
obj2 := test.NewPod()
obj3 := test.NewPod()
obj3.SetNamespace("kube-system")
app := newFakeApp()
data := fakeData{
manifestResponse: &repository.ManifestResponse{
Manifests: []string{toJSON(t, obj1), toJSON(t, obj2), toJSON(t, obj3)},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
Revision: "abc123",
},
managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{
kube.GetResourceKey(obj1): obj1,
kube.GetResourceKey(obj3): obj3,
},
}
ctrl := newFakeController(&data)
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false)
assert.NoError(t, err)
assert.NotNil(t, compRes)
assert.Contains(t, compRes.conditions, argoappv1.ApplicationCondition{
Message: "Resource /Pod/fake-dest-ns/my-pod appeared 2 times among application resources.",
Type: argoappv1.ApplicationConditionRepeatedResourceWarning,
})
assert.Equal(t, 2, len(compRes.resources))
}
var defaultProj = argoappv1.AppProject{
ObjectMeta: metav1.ObjectMeta{
Name: "default",
Namespace: test.FakeArgoCDNamespace,
},
Spec: argoappv1.AppProjectSpec{
SourceRepos: []string{"*"},
Destinations: []argoappv1.ApplicationDestination{
{
Server: "*",
Namespace: "*",
},
},
},
}
func TestSetHealth(t *testing.T) {
app := newFakeApp()
deployment := kube.MustToUnstructured(&v1.Deployment{
TypeMeta: metav1.TypeMeta{
APIVersion: "apps/v1beta1",
Kind: "Deployment",
},
ObjectMeta: metav1.ObjectMeta{
Name: "demo",
Namespace: "default",
},
})
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &repository.ManifestResponse{
Manifests: []string{},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
Revision: "abc123",
},
managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{
kube.GetResourceKey(deployment): deployment,
},
})
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false)
assert.NoError(t, err)
assert.Equal(t, compRes.healthStatus.Status, argoappv1.HealthStatusHealthy)
}
func TestSetHealthSelfReferencedApp(t *testing.T) {
app := newFakeApp()
unstructuredApp := kube.MustToUnstructured(app)
deployment := kube.MustToUnstructured(&v1.Deployment{
TypeMeta: metav1.TypeMeta{
APIVersion: "apps/v1beta1",
Kind: "Deployment",
},
ObjectMeta: metav1.ObjectMeta{
Name: "demo",
Namespace: "default",
},
})
ctrl := newFakeController(&fakeData{
apps: []runtime.Object{app, &defaultProj},
manifestResponse: &repository.ManifestResponse{
Manifests: []string{},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
Revision: "abc123",
},
managedLiveObjs: map[kube.ResourceKey]*unstructured.Unstructured{
kube.GetResourceKey(deployment): deployment,
kube.GetResourceKey(unstructuredApp): unstructuredApp,
},
})
compRes, err := ctrl.appStateManager.CompareAppState(app, "", app.Spec.Source, false)
assert.NoError(t, err)
assert.Equal(t, compRes.healthStatus.Status, argoappv1.HealthStatusHealthy)
}

(file diff suppressed because it is too large)

@@ -0,0 +1,66 @@
package controller
import (
"testing"
"github.com/stretchr/testify/assert"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/test"
"github.com/argoproj/argo-cd/util/kube/kubetest"
)
var clusterRoleHook = `
{
"apiVersion": "rbac.authorization.k8s.io/v1",
"kind": "ClusterRole",
"metadata": {
"name": "cluster-role-hook",
"annotations": {
"argocd.argoproj.io/hook": "PostSync"
}
}
}`
func TestSyncHookProjectPermissions(t *testing.T) {
syncCtx := newTestSyncCtx(&v1.APIResourceList{
GroupVersion: "v1",
APIResources: []v1.APIResource{
{Name: "pod", Namespaced: true, Kind: "Pod", Group: "v1"},
},
}, &v1.APIResourceList{
GroupVersion: "rbac.authorization.k8s.io/v1",
APIResources: []v1.APIResource{
{Name: "clusterroles", Namespaced: false, Kind: "ClusterRole", Group: "rbac.authorization.k8s.io"},
},
})
syncCtx.kubectl = kubetest.MockKubectlCmd{}
crHook, _ := v1alpha1.UnmarshalToUnstructured(clusterRoleHook)
syncCtx.compareResult = &comparisonResult{
hooks: []*unstructured.Unstructured{
crHook,
},
managedResources: []managedResource{{
Target: test.NewPod(),
}},
}
syncCtx.proj.Spec.ClusterResourceWhitelist = []v1.GroupKind{}
syncCtx.syncOp.SyncStrategy = nil
syncCtx.sync()
assert.Equal(t, v1alpha1.OperationFailed, syncCtx.opState.Phase)
assert.Len(t, syncCtx.syncRes.Resources, 0)
assert.Contains(t, syncCtx.opState.Message, "not permitted in project")
// Now add the resource to the whitelist and try again. Resource should be created
syncCtx.proj.Spec.ClusterResourceWhitelist = []v1.GroupKind{
{Group: "rbac.authorization.k8s.io", Kind: "ClusterRole"},
}
syncCtx.syncOp.SyncStrategy = nil
syncCtx.sync()
assert.Len(t, syncCtx.syncRes.Resources, 1)
assert.Equal(t, v1alpha1.ResultCodeSynced, syncCtx.syncRes.Resources[0].Status)
}

controller/sync_hooks.go (new file)

@@ -0,0 +1,560 @@
package controller
import (
"fmt"
"reflect"
"strings"
wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1"
apiv1 "k8s.io/api/core/v1"
apierr "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/kubernetes/pkg/apis/batch"
"github.com/argoproj/argo-cd/common"
appv1 "github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/util"
hookutil "github.com/argoproj/argo-cd/util/hook"
"github.com/argoproj/argo-cd/util/kube"
)
// doHookSync initiates (or continues) a hook-based sync. This method will be invoked when there may
// already be in-flight (potentially incomplete) jobs/workflows, and should be idempotent.
func (sc *syncContext) doHookSync(syncTasks []syncTask, hooks []*unstructured.Unstructured) {
if !sc.startedPreSyncPhase() {
if !sc.verifyPermittedHooks(hooks) {
return
}
}
// 1. Run PreSync hooks
if !sc.runHooks(hooks, appv1.HookTypePreSync) {
return
}
// 2. Run Sync hooks (e.g. blue-green sync workflow)
// Before performing Sync hooks, apply any normal manifests which aren't annotated with a hook.
// We only want to do this once per operation.
shouldContinue := true
if !sc.startedSyncPhase() {
if !sc.syncNonHookTasks(syncTasks) {
sc.setOperationPhase(appv1.OperationFailed, "one or more objects failed to apply")
return
}
shouldContinue = false
}
if !sc.runHooks(hooks, appv1.HookTypeSync) {
shouldContinue = false
}
if !shouldContinue {
return
}
// 3. Run PostSync hooks
// Before running PostSync hooks, we want to make sure the rollout is complete (app is healthy). If we
// already started the post-sync phase, then we do not need to perform the health check.
postSyncHooks, _ := sc.getHooks(appv1.HookTypePostSync)
if len(postSyncHooks) > 0 && !sc.startedPostSyncPhase() {
sc.log.Infof("PostSync application health check: %s", sc.compareResult.healthStatus.Status)
if sc.compareResult.healthStatus.Status != appv1.HealthStatusHealthy {
sc.setOperationPhase(appv1.OperationRunning, fmt.Sprintf("waiting for %s state to run %s hooks (current health: %s)",
appv1.HealthStatusHealthy, appv1.HookTypePostSync, sc.compareResult.healthStatus.Status))
return
}
}
if !sc.runHooks(hooks, appv1.HookTypePostSync) {
return
}
// if we get here, all hooks successfully completed
sc.setOperationPhase(appv1.OperationSucceeded, "successfully synced")
}
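// Illustrative sketch (not part of the original file): a minimal PostSync hook of the
// kind doHookSync runs last; it is only created once the app reports Healthy. The
// Job name and image are assumed example values.
var examplePostSyncHook = `
{
  "apiVersion": "batch/v1",
  "kind": "Job",
  "metadata": {
    "generateName": "smoke-test-",
    "annotations": {
      "argocd.argoproj.io/hook": "PostSync"
    }
  }
}`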
// verifyPermittedHooks verifies all hooks are permitted in the project
func (sc *syncContext) verifyPermittedHooks(hooks []*unstructured.Unstructured) bool {
for _, hook := range hooks {
gvk := hook.GroupVersionKind()
serverRes, err := kube.ServerResourceForGroupVersionKind(sc.disco, gvk)
if err != nil {
sc.setOperationPhase(appv1.OperationError, fmt.Sprintf("unable to identify api resource type: %v", gvk))
return false
}
if !sc.proj.IsResourcePermitted(metav1.GroupKind{Group: gvk.Group, Kind: gvk.Kind}, serverRes.Namespaced) {
sc.setOperationPhase(appv1.OperationFailed, fmt.Sprintf("Hook resource %s:%s is not permitted in project %s", gvk.Group, gvk.Kind, sc.proj.Name))
return false
}
if serverRes.Namespaced && !sc.proj.IsDestinationPermitted(appv1.ApplicationDestination{Namespace: hook.GetNamespace(), Server: sc.server}) {
gvk := hook.GroupVersionKind()
sc.setResourceDetails(&appv1.ResourceResult{
Name: hook.GetName(),
Group: gvk.Group,
Version: gvk.Version,
Kind: hook.GetKind(),
Namespace: hook.GetNamespace(),
Message: fmt.Sprintf("namespace %v is not permitted in project '%s'", hook.GetNamespace(), sc.proj.Name),
Status: appv1.ResultCodeSyncFailed,
})
return false
}
}
return true
}
// getHooks returns all Argo CD hooks, optionally filtered to the specified type(s)
func (sc *syncContext) getHooks(hookTypes ...appv1.HookType) ([]*unstructured.Unstructured, error) {
var hooks []*unstructured.Unstructured
for _, hook := range sc.compareResult.hooks {
if hook.GetNamespace() == "" {
hook.SetNamespace(sc.namespace)
}
if !hookutil.IsArgoHook(hook) {
// TODO: in the future, if we want to map helm hooks to Argo CD lifecycles, we should
// include helm hooks in the returned list
continue
}
if len(hookTypes) > 0 {
match := false
for _, desiredType := range hookTypes {
if isHookType(hook, desiredType) {
match = true
break
}
}
if !match {
continue
}
}
hooks = append(hooks, hook)
}
return hooks, nil
}
// runHooks iterates and filters the target manifests for resources of the specified hook
// type, then creates them. It updates sc.syncRes.Resources with the current status and
// returns whether or not we should continue to the next hook phase.
func (sc *syncContext) runHooks(hooks []*unstructured.Unstructured, hookType appv1.HookType) bool {
shouldContinue := true
for _, hook := range hooks {
if hookType == appv1.HookTypeSync && isHookType(hook, appv1.HookTypeSkip) {
// If we get here, we are invoking all sync hooks and reached a resource that is
// annotated with the Skip hook. This will update the resource details to indicate it
// was skipped due to annotation
gvk := hook.GroupVersionKind()
sc.setResourceDetails(&appv1.ResourceResult{
Name: hook.GetName(),
Group: gvk.Group,
Version: gvk.Version,
Kind: hook.GetKind(),
Namespace: hook.GetNamespace(),
Message: "Skipped",
})
continue
}
if !isHookType(hook, hookType) {
continue
}
updated, err := sc.runHook(hook, hookType)
if err != nil {
sc.setOperationPhase(appv1.OperationError, fmt.Sprintf("%s hook error: %v", hookType, err))
return false
}
if updated {
// If the result of running a hook, caused us to modify hook resource state, we should
// not proceed to the next hook phase. This is because before proceeding to the next
// phase, we want a full health assessment to happen. By returning early, we allow
// the application to get requeued into the controller workqueue, and on the next
// process iteration, a new CompareAppState() will be performed to get the most
// up-to-date live state. This enables us to accurately wait for an application to
// become Healthy before proceeding to run PostSync tasks.
shouldContinue = false
}
}
if !shouldContinue {
sc.log.Infof("Stopping after %s phase due to modifications to hook resource state", hookType)
return false
}
completed, successful := areHooksCompletedSuccessful(hookType, sc.syncRes.Resources)
if !completed {
return false
}
if !successful {
sc.setOperationPhase(appv1.OperationFailed, fmt.Sprintf("%s hook failed", hookType))
return false
}
return true
}
// syncNonHookTasks syncs or prunes the objects that are not handled by hooks using an apply sync.
// Returns true if the sync was successful.
func (sc *syncContext) syncNonHookTasks(syncTasks []syncTask) bool {
var nonHookTasks []syncTask
for _, task := range syncTasks {
if task.targetObj == nil {
nonHookTasks = append(nonHookTasks, task)
} else {
annotations := task.targetObj.GetAnnotations()
if annotations != nil && annotations[common.AnnotationKeyHook] != "" {
// we are doing a hook sync and this resource is annotated with a hook annotation
continue
}
// if we get here, this resource does not have any hook annotation so we
// should perform a `kubectl apply`
nonHookTasks = append(nonHookTasks, task)
}
}
return sc.doApplySync(nonHookTasks, false, sc.syncOp.SyncStrategy.Hook.Force, true)
}
// runHook runs the supplied hook and updates the hook status. Returns true if the result of
// invoking this method resulted in changes to any hook status
func (sc *syncContext) runHook(hook *unstructured.Unstructured, hookType appv1.HookType) (bool, error) {
// Hook resource names are deterministic, whether they are defined by the user (metadata.name),
// or formulated at the time of the operation (metadata.generateName). If user specifies
// metadata.generateName, then we will generate a formulated metadata.name before submission.
if hook.GetName() == "" {
postfix := strings.ToLower(fmt.Sprintf("%s-%s-%d", sc.syncRes.Revision[0:7], hookType, sc.opState.StartedAt.UTC().Unix()))
generatedName := hook.GetGenerateName()
hook = hook.DeepCopy()
hook.SetName(fmt.Sprintf("%s%s", generatedName, postfix))
}
// Check our hook statuses to see if we already completed this hook.
// If so, this method is a noop
prevStatus := sc.getHookStatus(hook, hookType)
if prevStatus != nil && prevStatus.HookPhase.Completed() {
return false, nil
}
gvk := hook.GroupVersionKind()
apiResource, err := kube.ServerResourceForGroupVersionKind(sc.disco, gvk)
if err != nil {
return false, err
}
resource := kube.ToGroupVersionResource(gvk.GroupVersion().String(), apiResource)
resIf := kube.ToResourceInterface(sc.dynamicIf, apiResource, resource, hook.GetNamespace())
var liveObj *unstructured.Unstructured
existing, err := resIf.Get(hook.GetName(), metav1.GetOptions{})
if err != nil {
if !apierr.IsNotFound(err) {
return false, fmt.Errorf("Failed to get status of %s hook %s '%s': %v", hookType, gvk, hook.GetName(), err)
}
_, err := sc.kubectl.ApplyResource(sc.config, hook, hook.GetNamespace(), false, false)
if err != nil {
return false, fmt.Errorf("Failed to create %s hook %s '%s': %v", hookType, gvk, hook.GetName(), err)
}
created, err := resIf.Get(hook.GetName(), metav1.GetOptions{})
if err != nil {
return true, fmt.Errorf("Failed to get status of %s hook %s '%s': %v", hookType, gvk, hook.GetName(), err)
}
sc.log.Infof("%s hook %s '%s' created", hookType, gvk, created.GetName())
sc.setOperationPhase(appv1.OperationRunning, fmt.Sprintf("running %s hooks", hookType))
liveObj = created
} else {
liveObj = existing
}
hookStatus := newHookStatus(liveObj, hookType)
if hookStatus.HookPhase.Completed() {
if enforceHookDeletePolicy(hook, hookStatus.HookPhase) {
err = sc.deleteHook(hook.GetName(), hook.GetNamespace(), hook.GroupVersionKind())
if err != nil {
hookStatus.HookPhase = appv1.OperationFailed
hookStatus.Message = fmt.Sprintf("failed to delete %s hook: %v", hookStatus.HookPhase, err)
}
}
}
return sc.updateHookStatus(hookStatus), nil
}
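// Illustrative note (not part of the original file): with generateName "db-migrate-",
// revision "86d2f3bd...", hook type PreSync, and operation start time 1557766891 (all
// assumed example values), the formulated name would be:
//   "db-migrate-" + "86d2f3b" + "-" + "presync" + "-" + "1557766891"
//   => "db-migrate-86d2f3b-presync-1557766891"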
// enforceHookDeletePolicy examines the hook deletion policy of an object and returns whether it should be deleted for the given operation phase
func enforceHookDeletePolicy(hook *unstructured.Unstructured, phase appv1.OperationPhase) bool {
annotations := hook.GetAnnotations()
if annotations == nil {
return false
}
deletePolicies := strings.Split(annotations[common.AnnotationKeyHookDeletePolicy], ",")
for _, dp := range deletePolicies {
policy := appv1.HookDeletePolicy(strings.TrimSpace(dp))
if policy == appv1.HookDeletePolicyHookSucceeded && phase == appv1.OperationSucceeded {
return true
}
if policy == appv1.HookDeletePolicyHookFailed && phase == appv1.OperationFailed {
return true
}
}
return false
}
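// Illustrative sketch (not part of the original file): a hook annotated as below is
// deleted automatically on success but kept on failure for debugging (policies are
// comma-separated, matching the parsing above):
//
//   "annotations": {
//     "argocd.argoproj.io/hook": "Sync",
//     "argocd.argoproj.io/hook-delete-policy": "HookSucceeded"
//   }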
// isHookType tells whether or not the supplied object is a hook of the specified type
func isHookType(hook *unstructured.Unstructured, hookType appv1.HookType) bool {
annotations := hook.GetAnnotations()
if annotations == nil {
return false
}
resHookTypes := strings.Split(annotations[common.AnnotationKeyHook], ",")
for _, ht := range resHookTypes {
if string(hookType) == strings.TrimSpace(ht) {
return true
}
}
return false
}
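// Illustrative note (not part of the original file): a resource may participate in
// several phases; isHookType matches any entry of the comma-separated list, e.g. with
//   "argocd.argoproj.io/hook": "PreSync, PostSync"
// isHookType(hook, appv1.HookTypePreSync) and isHookType(hook, appv1.HookTypePostSync)
// are true, while isHookType(hook, appv1.HookTypeSync) is false.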
// newHookStatus returns a hook status from a live unstructured object
func newHookStatus(hook *unstructured.Unstructured, hookType appv1.HookType) appv1.ResourceResult {
gvk := hook.GroupVersionKind()
hookStatus := appv1.ResourceResult{
Name: hook.GetName(),
Kind: hook.GetKind(),
Group: gvk.Group,
Version: gvk.Version,
HookType: hookType,
HookPhase: appv1.OperationRunning,
Namespace: hook.GetNamespace(),
}
if isBatchJob(gvk) {
updateStatusFromBatchJob(hook, &hookStatus)
} else if isArgoWorkflow(gvk) {
updateStatusFromArgoWorkflow(hook, &hookStatus)
} else if isPod(gvk) {
updateStatusFromPod(hook, &hookStatus)
} else {
hookStatus.HookPhase = appv1.OperationSucceeded
hookStatus.Message = fmt.Sprintf("%s created", hook.GetName())
}
return hookStatus
}
// isRunnable returns whether the resource object is a runnable type that needs to be terminated
func isRunnable(res *appv1.ResourceResult) bool {
gvk := res.GroupVersionKind()
return isBatchJob(gvk) || isArgoWorkflow(gvk) || isPod(gvk)
}
func isBatchJob(gvk schema.GroupVersionKind) bool {
return gvk.Group == "batch" && gvk.Kind == "Job"
}
func updateStatusFromBatchJob(hook *unstructured.Unstructured, hookStatus *appv1.ResourceResult) {
var job batch.Job
err := runtime.DefaultUnstructuredConverter.FromUnstructured(hook.Object, &job)
if err != nil {
hookStatus.HookPhase = appv1.OperationError
hookStatus.Message = err.Error()
return
}
failed := false
var failMsg string
complete := false
var message string
for _, condition := range job.Status.Conditions {
switch condition.Type {
case batch.JobFailed:
failed = true
complete = true
failMsg = condition.Message
case batch.JobComplete:
complete = true
message = condition.Message
}
}
if !complete {
hookStatus.HookPhase = appv1.OperationRunning
hookStatus.Message = message
} else if failed {
hookStatus.HookPhase = appv1.OperationFailed
hookStatus.Message = failMsg
} else {
hookStatus.HookPhase = appv1.OperationSucceeded
hookStatus.Message = message
}
}
func isArgoWorkflow(gvk schema.GroupVersionKind) bool {
return gvk.Group == "argoproj.io" && gvk.Kind == "Workflow"
}
func updateStatusFromArgoWorkflow(hook *unstructured.Unstructured, hookStatus *appv1.ResourceResult) {
var wf wfv1.Workflow
err := runtime.DefaultUnstructuredConverter.FromUnstructured(hook.Object, &wf)
if err != nil {
hookStatus.HookPhase = appv1.OperationError
hookStatus.Message = err.Error()
return
}
switch wf.Status.Phase {
case wfv1.NodePending, wfv1.NodeRunning:
hookStatus.HookPhase = appv1.OperationRunning
case wfv1.NodeSucceeded:
hookStatus.HookPhase = appv1.OperationSucceeded
case wfv1.NodeFailed:
hookStatus.HookPhase = appv1.OperationFailed
case wfv1.NodeError:
hookStatus.HookPhase = appv1.OperationError
}
hookStatus.Message = wf.Status.Message
}
func isPod(gvk schema.GroupVersionKind) bool {
return gvk.Group == "" && gvk.Kind == "Pod"
}
func updateStatusFromPod(hook *unstructured.Unstructured, hookStatus *appv1.ResourceResult) {
var pod apiv1.Pod
err := runtime.DefaultUnstructuredConverter.FromUnstructured(hook.Object, &pod)
if err != nil {
hookStatus.HookPhase = appv1.OperationError
hookStatus.Message = err.Error()
return
}
getFailMessage := func(ctr *apiv1.ContainerStatus) string {
if ctr.State.Terminated != nil {
if ctr.State.Terminated.Message != "" {
return ctr.State.Terminated.Message
}
if ctr.State.Terminated.Reason == "OOMKilled" {
return ctr.State.Terminated.Reason
}
if ctr.State.Terminated.ExitCode != 0 {
return fmt.Sprintf("container %q failed with exit code %d", ctr.Name, ctr.State.Terminated.ExitCode)
}
}
return ""
}
switch pod.Status.Phase {
case apiv1.PodPending, apiv1.PodRunning:
hookStatus.HookPhase = appv1.OperationRunning
case apiv1.PodSucceeded:
hookStatus.HookPhase = appv1.OperationSucceeded
case apiv1.PodFailed:
hookStatus.HookPhase = appv1.OperationFailed
if pod.Status.Message != "" {
// Pod has a nice error message. Use that.
hookStatus.Message = pod.Status.Message
return
}
for _, ctr := range append(pod.Status.InitContainerStatuses, pod.Status.ContainerStatuses...) {
if msg := getFailMessage(&ctr); msg != "" {
hookStatus.Message = msg
return
}
}
case apiv1.PodUnknown:
hookStatus.HookPhase = appv1.OperationError
}
}
func (sc *syncContext) getHookStatus(hookObj *unstructured.Unstructured, hookType appv1.HookType) *appv1.ResourceResult {
for _, hr := range sc.syncRes.Resources {
if !hr.IsHook() {
continue
}
ns := util.FirstNonEmpty(hookObj.GetNamespace(), sc.namespace)
if hookEqual(hr, hookObj.GroupVersionKind().Group, hookObj.GetKind(), ns, hookObj.GetName(), hookType) {
return hr
}
}
return nil
}
func hookEqual(hr *appv1.ResourceResult, group, kind, namespace, name string, hookType appv1.HookType) bool {
return bool(
hr.Group == group &&
hr.Kind == kind &&
hr.Namespace == namespace &&
hr.Name == name &&
hr.HookType == hookType)
}
// updateHookStatus updates the status of a hook. Returns true if the hook was modified
func (sc *syncContext) updateHookStatus(hookStatus appv1.ResourceResult) bool {
sc.lock.Lock()
defer sc.lock.Unlock()
for i, prev := range sc.syncRes.Resources {
if !prev.IsHook() {
continue
}
if hookEqual(prev, hookStatus.Group, hookStatus.Kind, hookStatus.Namespace, hookStatus.Name, hookStatus.HookType) {
if reflect.DeepEqual(prev, hookStatus) {
return false
}
if prev.HookPhase != hookStatus.HookPhase {
sc.log.Infof("Hook %s %s/%s hookPhase: %s -> %s", hookStatus.HookType, prev.Kind, prev.Name, prev.HookPhase, hookStatus.HookPhase)
}
if prev.Status != hookStatus.Status {
sc.log.Infof("Hook %s %s/%s status: %s -> %s", hookStatus.HookType, prev.Kind, prev.Name, prev.Status, hookStatus.Status)
}
if prev.Message != hookStatus.Message {
sc.log.Infof("Hook %s %s/%s message: '%s' -> '%s'", hookStatus.HookType, prev.Kind, prev.Name, prev.Message, hookStatus.Message)
}
sc.syncRes.Resources[i] = &hookStatus
return true
}
}
sc.syncRes.Resources = append(sc.syncRes.Resources, &hookStatus)
sc.log.Infof("Set new hook %s %s/%s. phase: %s, message: %s", hookStatus.HookType, hookStatus.Kind, hookStatus.Name, hookStatus.HookPhase, hookStatus.Message)
return true
}
// areHooksCompletedSuccessful checks if all the hooks of the specified type are completed and successful
func areHooksCompletedSuccessful(hookType appv1.HookType, hookStatuses []*appv1.ResourceResult) (bool, bool) {
isSuccessful := true
for _, hookStatus := range hookStatuses {
if !hookStatus.IsHook() {
continue
}
if hookStatus.HookType != hookType {
continue
}
if !hookStatus.HookPhase.Completed() {
return false, false
}
if !hookStatus.HookPhase.Successful() {
isSuccessful = false
}
}
return true, isSuccessful
}
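// Illustrative note (not part of the original file): runHooks consumes the pair as
//   completed, successful := areHooksCompletedSuccessful(hookType, sc.syncRes.Resources)
// !completed  -> hooks still in flight; stop and retry on the next reconciliation
// !successful -> all finished but at least one failed; the operation phase becomes Failed
// otherwise   -> advance to the next hook phase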
// terminate looks for any running jobs/workflow hooks and deletes the resource
func (sc *syncContext) terminate() {
terminateSuccessful := true
for _, hookStatus := range sc.syncRes.Resources {
if !hookStatus.IsHook() {
continue
}
if hookStatus.HookPhase.Completed() {
continue
}
if isRunnable(hookStatus) {
hookStatus.HookPhase = appv1.OperationFailed
err := sc.deleteHook(hookStatus.Name, hookStatus.Namespace, hookStatus.GroupVersionKind())
if err != nil {
hookStatus.Message = fmt.Sprintf("Failed to delete %s hook %s/%s: %v", hookStatus.HookType, hookStatus.Kind, hookStatus.Name, err)
terminateSuccessful = false
} else {
hookStatus.Message = fmt.Sprintf("Deleted %s hook %s/%s", hookStatus.HookType, hookStatus.Kind, hookStatus.Name)
}
sc.updateHookStatus(*hookStatus)
}
}
if terminateSuccessful {
sc.setOperationPhase(appv1.OperationFailed, "Operation terminated")
} else {
sc.setOperationPhase(appv1.OperationError, "Operation termination had errors")
}
}
func (sc *syncContext) deleteHook(name, namespace string, gvk schema.GroupVersionKind) error {
apiResource, err := kube.ServerResourceForGroupVersionKind(sc.disco, gvk)
if err != nil {
return err
}
resource := kube.ToGroupVersionResource(gvk.GroupVersion().String(), apiResource)
resIf := kube.ToResourceInterface(sc.dynamicIf, apiResource, resource, namespace)
propagationPolicy := metav1.DeletePropagationForeground
return resIf.Delete(name, &metav1.DeleteOptions{PropagationPolicy: &propagationPolicy})
}


@@ -1,26 +1,528 @@
package controller
import (
"fmt"
"sort"
"testing"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
log "github.com/sirupsen/logrus"
"github.com/stretchr/testify/assert"
apiv1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
fakedisco "k8s.io/client-go/discovery/fake"
"k8s.io/client-go/rest"
testcore "k8s.io/client-go/testing"
"github.com/argoproj/argo-cd/common"
"github.com/argoproj/argo-cd/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/reposerver/repository"
"github.com/argoproj/argo-cd/test"
"github.com/argoproj/argo-cd/util/kube"
"github.com/argoproj/argo-cd/util/kube/kubetest"
)
-func newTestSyncCtx() *syncContext {
-return &syncContext{
-comparison: &v1alpha1.ComparisonResult{},
-config: &rest.Config{},
-namespace: "test-namespace",
-syncOp: &v1alpha1.SyncOperation{},
-opState: &v1alpha1.OperationState{},
-log: log.WithFields(log.Fields{"application": "fake-app"}),
-}
-}
+func newTestSyncCtx(resources ...*v1.APIResourceList) *syncContext {
+fakeDisco := &fakedisco.FakeDiscovery{Fake: &testcore.Fake{}}
+fakeDisco.Resources = append(resources,
+&v1.APIResourceList{
+GroupVersion: "v1",
+APIResources: []v1.APIResource{
+{Kind: "Pod", Group: "", Version: "v1", Namespaced: true},
+{Kind: "Service", Group: "", Version: "v1", Namespaced: true},
+},
+},
+&v1.APIResourceList{
+GroupVersion: "apps/v1",
+APIResources: []v1.APIResource{
+{Kind: "Deployment", Group: "apps", Version: "v1", Namespaced: true},
+},
+})
+sc := syncContext{
+config: &rest.Config{},
+namespace: test.FakeArgoCDNamespace,
+server: test.FakeClusterURL,
+syncRes: &v1alpha1.SyncOperationResult{},
+syncOp: &v1alpha1.SyncOperation{
+Prune: true,
+SyncStrategy: &v1alpha1.SyncStrategy{
+Apply: &v1alpha1.SyncStrategyApply{},
+},
+},
+proj: &v1alpha1.AppProject{
+ObjectMeta: metav1.ObjectMeta{
+Name: "test",
+},
+Spec: v1alpha1.AppProjectSpec{
+Destinations: []v1alpha1.ApplicationDestination{{
+Server: test.FakeClusterURL,
+Namespace: test.FakeArgoCDNamespace,
+}},
+ClusterResourceWhitelist: []v1.GroupKind{
+{Group: "*", Kind: "*"},
+},
+},
+},
+opState: &v1alpha1.OperationState{},
+disco: fakeDisco,
+log: log.WithFields(log.Fields{"application": "fake-app"}),
+}
+sc.kubectl = kubetest.MockKubectlCmd{}
+return &sc
+}
func TestSyncNotPermittedNamespace(t *testing.T) {
syncCtx := newTestSyncCtx()
targetPod := test.NewPod()
targetPod.SetNamespace("kube-system")
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{
Live: nil,
Target: targetPod,
}, {
Live: nil,
Target: test.NewService(),
}},
}
syncCtx.sync()
assert.Equal(t, v1alpha1.OperationFailed, syncCtx.opState.Phase)
assert.Contains(t, syncCtx.syncRes.Resources[0].Message, "not permitted in project")
}
func TestSyncCreateInSortedOrder(t *testing.T) {
syncCtx := newTestSyncCtx()
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{
Live: nil,
Target: test.NewPod(),
}, {
Live: nil,
Target: test.NewService(),
}},
}
syncCtx.sync()
assert.Len(t, syncCtx.syncRes.Resources, 2)
for i := range syncCtx.syncRes.Resources {
if syncCtx.syncRes.Resources[i].Kind == "Pod" {
assert.Equal(t, v1alpha1.ResultCodeSynced, syncCtx.syncRes.Resources[i].Status)
} else if syncCtx.syncRes.Resources[i].Kind == "Service" {
assert.Equal(t, v1alpha1.ResultCodeSynced, syncCtx.syncRes.Resources[i].Status)
} else {
t.Error("Resource isn't a pod or a service")
}
}
syncCtx.sync()
assert.Equal(t, syncCtx.opState.Phase, v1alpha1.OperationSucceeded)
}
func TestSyncCreateNotWhitelistedClusterResources(t *testing.T) {
syncCtx := newTestSyncCtx(&v1.APIResourceList{
GroupVersion: v1alpha1.SchemeGroupVersion.String(),
APIResources: []v1.APIResource{
{Name: "workflows", Namespaced: false, Kind: "Workflow", Group: "argoproj.io"},
{Name: "application", Namespaced: false, Kind: "Application", Group: "argoproj.io"},
},
}, &v1.APIResourceList{
GroupVersion: "rbac.authorization.k8s.io/v1",
APIResources: []v1.APIResource{
{Name: "clusterroles", Namespaced: false, Kind: "ClusterRole", Group: "rbac.authorization.k8s.io"},
},
})
syncCtx.proj.Spec.ClusterResourceWhitelist = []v1.GroupKind{
{Group: "argoproj.io", Kind: "*"},
}
syncCtx.kubectl = kubetest.MockKubectlCmd{}
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{
Live: nil,
Target: kube.MustToUnstructured(&rbacv1.ClusterRole{
TypeMeta: metav1.TypeMeta{Kind: "ClusterRole", APIVersion: "rbac.authorization.k8s.io/v1"},
ObjectMeta: metav1.ObjectMeta{Name: "argo-ui-cluster-role"}}),
}},
}
syncCtx.sync()
assert.Len(t, syncCtx.syncRes.Resources, 1)
assert.Equal(t, v1alpha1.ResultCodeSyncFailed, syncCtx.syncRes.Resources[0].Status)
assert.Contains(t, syncCtx.syncRes.Resources[0].Message, "not permitted in project")
}
func TestSyncBlacklistedNamespacedResources(t *testing.T) {
syncCtx := newTestSyncCtx()
syncCtx.proj.Spec.NamespaceResourceBlacklist = []v1.GroupKind{
{Group: "*", Kind: "Deployment"},
}
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{
Live: nil,
Target: test.NewDeployment(),
}},
}
syncCtx.sync()
assert.Len(t, syncCtx.syncRes.Resources, 1)
assert.Equal(t, v1alpha1.ResultCodeSyncFailed, syncCtx.syncRes.Resources[0].Status)
assert.Contains(t, syncCtx.syncRes.Resources[0].Message, "not permitted in project")
}
func TestSyncSuccessfully(t *testing.T) {
syncCtx := newTestSyncCtx()
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{
Live: nil,
Target: test.NewService(),
}, {
Live: test.NewPod(),
Target: nil,
}},
}
syncCtx.sync()
assert.Len(t, syncCtx.syncRes.Resources, 2)
for i := range syncCtx.syncRes.Resources {
if syncCtx.syncRes.Resources[i].Kind == "Pod" {
assert.Equal(t, v1alpha1.ResultCodePruned, syncCtx.syncRes.Resources[i].Status)
} else if syncCtx.syncRes.Resources[i].Kind == "Service" {
assert.Equal(t, v1alpha1.ResultCodeSynced, syncCtx.syncRes.Resources[i].Status)
} else {
t.Error("Resource isn't a pod or a service")
}
}
syncCtx.sync()
assert.Equal(t, syncCtx.opState.Phase, v1alpha1.OperationSucceeded)
}
func TestSyncDeleteSuccessfully(t *testing.T) {
syncCtx := newTestSyncCtx()
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{
Live: test.NewService(),
Target: nil,
}, {
Live: test.NewPod(),
Target: nil,
}},
}
syncCtx.sync()
for i := range syncCtx.syncRes.Resources {
if syncCtx.syncRes.Resources[i].Kind == "Pod" {
assert.Equal(t, v1alpha1.ResultCodePruned, syncCtx.syncRes.Resources[i].Status)
} else if syncCtx.syncRes.Resources[i].Kind == "Service" {
assert.Equal(t, v1alpha1.ResultCodePruned, syncCtx.syncRes.Resources[i].Status)
} else {
t.Error("Resource isn't a pod or a service")
}
}
syncCtx.sync()
assert.Equal(t, syncCtx.opState.Phase, v1alpha1.OperationSucceeded)
}
func TestSyncCreateFailure(t *testing.T) {
syncCtx := newTestSyncCtx()
syncCtx.kubectl = kubetest.MockKubectlCmd{
Commands: map[string]kubetest.KubectlOutput{
"test-service": {
Output: "",
Err: fmt.Errorf("error: error validating \"test.yaml\": error validating data: apiVersion not set; if you choose to ignore these errors, turn validation off with --validate=false"),
},
},
}
testSvc := test.NewService()
testSvc.SetAPIVersion("")
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{
Live: nil,
Target: testSvc,
}},
}
syncCtx.sync()
assert.Len(t, syncCtx.syncRes.Resources, 1)
assert.Equal(t, v1alpha1.ResultCodeSyncFailed, syncCtx.syncRes.Resources[0].Status)
}
func TestSyncPruneFailure(t *testing.T) {
syncCtx := newTestSyncCtx()
syncCtx.kubectl = kubetest.MockKubectlCmd{
Commands: map[string]kubetest.KubectlOutput{
"test-service": {
Output: "",
Err: fmt.Errorf(" error: timed out waiting for \"test-service\" to be synced"),
},
},
}
testSvc := test.NewService()
testSvc.SetName("test-service")
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{
Live: testSvc,
Target: nil,
}},
}
syncCtx.sync()
assert.Len(t, syncCtx.syncRes.Resources, 1)
assert.Equal(t, v1alpha1.ResultCodeSyncFailed, syncCtx.syncRes.Resources[0].Status)
}
func unsortedManifest() []syncTask {
return []syncTask{
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Pod",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Service",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "PersistentVolume",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "ConfigMap",
},
},
},
}
}
func TestRunWorkflows(t *testing.T) {
// syncCtx := newTestSyncCtx()
// syncCtx.doWorkflowSync(nil, nil)
}
func sortedManifest() []syncTask {
return []syncTask{
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "ConfigMap",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "PersistentVolume",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Service",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Pod",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
},
},
},
}
}
func TestSortKubernetesResourcesSuccessfully(t *testing.T) {
unsorted := unsortedManifest()
ks := newKindSorter(unsorted, resourceOrder)
sort.Sort(ks)
expectedOrder := sortedManifest()
assert.Equal(t, len(unsorted), len(expectedOrder))
for i, sorted := range unsorted {
assert.Equal(t, expectedOrder[i], sorted)
}
}
func TestSortManifestHandleNil(t *testing.T) {
task := syncTask{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Service",
},
},
}
manifest := []syncTask{
{},
task,
}
ks := newKindSorter(manifest, resourceOrder)
sort.Sort(ks)
assert.Equal(t, task, manifest[0])
assert.Nil(t, manifest[1].targetObj)
}
func TestSyncNamespaceAgainstCRD(t *testing.T) {
crd := syncTask{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": "argoproj.io/alpha1",
"kind": "Workflow",
},
}}
namespace := syncTask{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
"kind": "Namespace",
},
},
}
unsorted := []syncTask{crd, namespace}
ks := newKindSorter(unsorted, resourceOrder)
sort.Sort(ks)
expectedOrder := []syncTask{namespace, crd}
assert.Equal(t, len(unsorted), len(expectedOrder))
for i, sorted := range unsorted {
assert.Equal(t, expectedOrder[i], sorted)
}
}
func TestDontSyncOrPruneHooks(t *testing.T) {
syncCtx := newTestSyncCtx()
targetPod := test.NewPod()
targetPod.SetName("dont-create-me")
targetPod.SetAnnotations(map[string]string{common.AnnotationKeyHook: "PreSync"})
liveSvc := test.NewService()
liveSvc.SetName("dont-prune-me")
liveSvc.SetAnnotations(map[string]string{common.AnnotationKeyHook: "PreSync"})
syncCtx.compareResult = &comparisonResult{
managedResources: []managedResource{{
Live: nil,
Target: targetPod,
Hook: true,
}, {
Live: liveSvc,
Target: nil,
Hook: true,
}},
}
syncCtx.sync()
assert.Len(t, syncCtx.syncRes.Resources, 0)
syncCtx.sync()
assert.Equal(t, syncCtx.opState.Phase, v1alpha1.OperationSucceeded)
}
func TestPersistRevisionHistory(t *testing.T) {
app := newFakeApp()
app.Status.OperationState = nil
app.Status.History = nil
defaultProject := &v1alpha1.AppProject{
ObjectMeta: v1.ObjectMeta{
Namespace: test.FakeArgoCDNamespace,
Name: "default",
},
}
data := fakeData{
apps: []runtime.Object{app, defaultProject},
manifestResponse: &repository.ManifestResponse{
Manifests: []string{},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
Revision: "abc123",
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data)
// Sync with source unspecified
opState := &v1alpha1.OperationState{Operation: v1alpha1.Operation{
Sync: &v1alpha1.SyncOperation{},
}}
ctrl.appStateManager.SyncAppState(app, opState)
// Ensure we record spec.source into sync result
assert.Equal(t, app.Spec.Source, opState.SyncResult.Source)
updatedApp, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.Namespace).Get(app.Name, v1.GetOptions{})
assert.Nil(t, err)
assert.Equal(t, 1, len(updatedApp.Status.History))
assert.Equal(t, app.Spec.Source, updatedApp.Status.History[0].Source)
assert.Equal(t, "abc123", updatedApp.Status.History[0].Revision)
}
func TestPersistRevisionHistoryRollback(t *testing.T) {
app := newFakeApp()
app.Status.OperationState = nil
app.Status.History = nil
defaultProject := &v1alpha1.AppProject{
ObjectMeta: v1.ObjectMeta{
Namespace: test.FakeArgoCDNamespace,
Name: "default",
},
}
data := fakeData{
apps: []runtime.Object{app, defaultProject},
manifestResponse: &repository.ManifestResponse{
Manifests: []string{},
Namespace: test.FakeDestNamespace,
Server: test.FakeClusterURL,
Revision: "abc123",
},
managedLiveObjs: make(map[kube.ResourceKey]*unstructured.Unstructured),
}
ctrl := newFakeController(&data)
// Sync with source specified
source := v1alpha1.ApplicationSource{
Helm: &v1alpha1.ApplicationSourceHelm{
Parameters: []v1alpha1.HelmParameter{
{
Name: "test",
Value: "123",
},
},
},
}
opState := &v1alpha1.OperationState{Operation: v1alpha1.Operation{
Sync: &v1alpha1.SyncOperation{
Source: &source,
},
}}
ctrl.appStateManager.SyncAppState(app, opState)
// Ensure we record opState's source into sync result
assert.Equal(t, source, opState.SyncResult.Source)
updatedApp, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.Namespace).Get(app.Name, v1.GetOptions{})
assert.Nil(t, err)
assert.Equal(t, 1, len(updatedApp.Status.History))
assert.Equal(t, source, updatedApp.Status.History[0].Source)
assert.Equal(t, "abc123", updatedApp.Status.History[0].Revision)
}

186
docs/CONTRIBUTING.md Normal file
View File

@@ -0,0 +1,186 @@
# Contributing
## Before You Start
You must install and run Argo CD using a local Kubernetes cluster (e.g. Docker for Desktop or Minikube) first. This will help you understand the application and also get your local environment set up.
Then, to get a good grounding in Go, try out [the tutorial](https://tour.golang.org/).
## Pre-requisites
Install:
* [docker](https://docs.docker.com/install/#supported-platforms)
* [golang](https://golang.org/)
* [dep](https://github.com/golang/dep)
* [protobuf](https://developers.google.com/protocol-buffers/)
* [ksonnet](https://github.com/ksonnet/ksonnet#install)
* [helm](https://github.com/helm/helm/releases)
* [kustomize](https://github.com/kubernetes-sigs/kustomize/releases)
* [go-swagger](https://github.com/go-swagger/go-swagger/blob/master/docs/install.md)
* [jq](https://stedolan.github.io/jq/)
* [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
* [kubectx](https://kubectx.dev)
* [minikube](https://kubernetes.io/docs/setup/minikube/) or Docker for Desktop
Brew users can quickly install the lot:
```bash
brew tap go-swagger/go-swagger
brew install go dep protobuf kubectl kubectx ksonnet/tap/ks kubernetes-helm jq go-swagger
```
!!! note "Kustomize"
Since Argo CD supports both Kustomize v1.0 and v2.0, you will need to install both versions in order for the unit tests to run. The Kustomize 1 unit test expects to find a `kustomize1` binary in the path. You can use this [link](https://github.com/argoproj/argo-cd/blob/master/Dockerfile#L66-L69) to find the Kustomize 1 version currently used by Argo CD and modify the curl command to download the binary for your OS.
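As a rough sketch, downloading Kustomize 1 on Linux might look like this (the version and release asset name below are illustrative; take the real ones from the Dockerfile linked above):
```bash
# Download a Kustomize 1.x binary as `kustomize1` (version/asset name are assumptions).
curl -sSL -o /usr/local/bin/kustomize1 \
  https://github.com/kubernetes-sigs/kustomize/releases/download/v1.0.11/kustomize_1.0.11_linux_amd64
chmod +x /usr/local/bin/kustomize1
kustomize1 version
```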
Set up environment variables (e.g. in `~/.bashrc`):
```bash
export GOPATH=~/go
export PATH=$PATH:$GOPATH/bin
```
Install go dependencies:
```bash
go get -u github.com/golang/protobuf/protoc-gen-go
go get -u github.com/go-swagger/go-swagger/cmd/swagger
go get -u github.com/grpc-ecosystem/grpc-gateway/protoc-gen-grpc-gateway
go get -u github.com/grpc-ecosystem/grpc-gateway/protoc-gen-swagger
go get -u github.com/golangci/golangci-lint/cmd/golangci-lint
go get -u github.com/mattn/goreman
go get -u gotest.tools/gotestsum
```
## Building
```bash
go get -u github.com/argoproj/argo-cd
dep ensure
make
```
The make command can take a while, and we recommend building the specific component you are working on:
* `make codegen` - Builds protobuf and swagger files
* `make cli` - Make the argocd CLI tool
* `make server` - Make the API/repo/controller server
* `make argocd-util` - Make the administrator's utility, used for certain tasks such as import/export
## Running Tests
To run unit tests:
```bash
make test
```
Check out the following [documentation](https://github.com/argoproj/argo-cd/blob/master/docs/developer-guide/test-e2e.md) for instructions on running the e2e tests.
## Running Locally
It is much easier to run and debug Argo CD on your local machine than in a Kubernetes cluster.
You should scale the deployments to zero:
```bash
kubectl -n argocd scale deployment.extensions/argocd-application-controller --replicas 0
kubectl -n argocd scale deployment.extensions/argocd-dex-server --replicas 0
kubectl -n argocd scale deployment.extensions/argocd-repo-server --replicas 0
kubectl -n argocd scale deployment.extensions/argocd-server --replicas 0
kubectl -n argocd scale deployment.extensions/argocd-redis --replicas 0
```
Then check out and build the UI next to your code:
```
cd ~/go/src/github.com/argoproj
git clone git@github.com:argoproj/argo-cd-ui.git
```
Follow the UI's [README](https://github.com/argoproj/argo-cd-ui/blob/master/README.md) to build it.
Note: you'll need to use the https://localhost:6443 cluster now.
Then start the services:
```bash
cd ~/go/src/github.com/argoproj/argo-cd
make start
```
You can now execute the `argocd` command against your locally running Argo CD by appending `--server localhost:8080 --plaintext --insecure`, e.g.:
```bash
argocd app set guestbook --path guestbook --repo https://github.com/argoproj/argocd-example-apps.git --dest-server https://localhost:6443 --dest-namespace default --server localhost:8080 --plaintext --insecure
```
You can open the UI: http://localhost:8080
Note: you'll need to use the https://kubernetes.default.svc cluster now.
## Running Local Containers
You may need to run containers locally, so here's how:
Create a Docker Hub login if you don't have one, then log in:
```bash
docker login
```
Set your username in an environment variable, e.g. in your `~/.bash_profile`:
```bash
export IMAGE_NAMESPACE=alexcollinsintuit
```
If you have not built the UI image (see [the UI README](https://github.com/argoproj/argo-cd-ui/blob/master/README.md)), then do the following:
```bash
docker pull argoproj/argocd-ui:latest
docker tag argoproj/argocd-ui:latest $IMAGE_NAMESPACE/argocd-ui:latest
docker push $IMAGE_NAMESPACE/argocd-ui:latest
```
Build the images:
```bash
DOCKER_PUSH=true make image
```
Update the manifests:
```bash
make manifests
```
Install the manifests:
```bash
kubectl -n argocd apply --force -f manifests/install.yaml
```
Scale your deployments up:
```bash
kubectl -n argocd scale deployment.extensions/argocd-application-controller --replicas 1
kubectl -n argocd scale deployment.extensions/argocd-dex-server --replicas 1
kubectl -n argocd scale deployment.extensions/argocd-repo-server --replicas 1
kubectl -n argocd scale deployment.extensions/argocd-server --replicas 1
kubectl -n argocd scale deployment.extensions/argocd-redis --replicas 1
```
Now you can set up the port-forwarding and open the UI or CLI.
## Pre-commit Checks
Before you commit, make sure you've formatted and linted your code, or your PR will fail CI:
```bash
STAGED_GO_FILES=$(git diff --cached --name-only | grep ".go$")
gofmt -w $STAGED_GO_FILES
make codegen
make precommit # lint and test
```

View File

@@ -1,16 +0,0 @@
# ArgoCD Documentation
## [Getting Started](getting_started.md)
## Concepts
* [Architecture](architecture.md)
* [Tracking Strategies](tracking_strategies.md)
## Features
* [Application Sources](application_sources.md)
* [Application Parameters](parameters.md)
* [Resource Health](health.md)
* [Resource Hooks](resource_hooks.md)
* [Single Sign On](sso.md)
* [Webhooks](webhook.md)
* [RBAC](rbac.md)

6
docs/SUPPORT.md Normal file
View File

@@ -0,0 +1,6 @@
# Support
1. Make sure you've read [understanding the basics](understand_the_basics.md) and the [getting started guide](getting_started.md).
2. Look for an answer in [the frequently asked questions](faq.md).
3. Ask a question in [the Argo CD Slack channel ⧉](https://argoproj.slack.com/messages/CASHNF6MS).
4. [Read issues, report a bug, or request a feature ⧉](https://github.com/argoproj/argo-cd/issues)

BIN
docs/assets/dashboard.jpg Normal file (new binary image, 321 KiB; file not shown)

BIN
docs/assets/logo.png Normal file (new binary image, 27 KiB; file not shown)

BIN
Binary image updated (73 KiB before, 58 KiB after; file not shown)

16
docs/core_concepts.md Normal file
View File

@@ -0,0 +1,16 @@
# Core Concepts
Let's assume you're familiar with core Git, Docker, Kubernetes, Continuous Delivery, and GitOps concepts.
* **Application** A group of Kubernetes resources as defined by a manifest. This is a Custom Resource Definition (CRD).
* **Application source type** Which **Tool** is used to build the application.
* **Target state** The desired state of an application, as represented by files in a Git repository.
* **Live state** The live state of that application: what pods etc. are deployed.
* **Sync status** Whether or not the live state matches the target state. Is the deployed application the same as Git says it should be?
* **Sync** The process of making an application move to its target state. E.g. by applying changes to a Kubernetes cluster.
* **Sync operation status** Whether or not a sync succeeded.
* **Refresh** Compare the latest code in Git with the live state. Figure out what is different.
* **Health** The health of the application: is it running correctly? Can it serve requests?
* **Tool** A tool to create manifests from a directory of files. E.g. Kustomize or Ksonnet. See **Application Source Type**.
* **Configuration management tool** See **Tool**.
* **Configuration management plugin** A custom tool.

View File

@@ -0,0 +1,3 @@
# API Docs
You can find the Swagger docs by appending `/swagger-ui` to the base URL of your Argo CD UI, e.g. [http://localhost:8080/swagger-ui](http://localhost:8080/swagger-ui).

View File

@@ -0,0 +1,10 @@
# Overview
!!! warning "You probably don't want to be reading this section of the docs."
This part of the manual is aimed at people wanting to develop third-party applications that interact with Argo CD, e.g.
* A chat bot
* A Slack integration
!!! note
Please make sure you've completed the [getting started guide](../getting_started.md).

View File

@@ -0,0 +1,48 @@
# Releasing
1. Tag, build, and push argo-cd-ui
```bash
cd argo-cd-ui
git checkout -b release-X.Y
git tag vX.Y.Z
git push upstream release-X.Y --tags
IMAGE_NAMESPACE=argoproj IMAGE_TAG=vX.Y.Z DOCKER_PUSH=true yarn docker
```
2. Create release-X.Y branch (if creating initial X.Y release)
```bash
git checkout -b release-X.Y
git push upstream release-X.Y
```
3. Update VERSION and manifests with new version
```bash
vi VERSION # ensure value is desired X.Y.Z semantic version
make manifests IMAGE_TAG=vX.Y.Z
git commit -a -m "Update manifests to vX.Y.Z"
git push upstream release-X.Y
```
4. Tag, build, and push release to docker hub
```bash
git tag vX.Y.Z
make release IMAGE_NAMESPACE=argoproj IMAGE_TAG=vX.Y.Z DOCKER_PUSH=true
git push upstream vX.Y.Z
```
5. Update argocd brew formula
```bash
git clone https://github.com/argoproj/homebrew-tap
cd homebrew-tap
./update.sh ~/go/src/github.com/argoproj/argo-cd/dist/argocd-darwin-amd64
git commit -a -m "Update argocd to vX.Y.Z"
git push
```
6. Update documentation:
* Edit CHANGELOG.md with release notes
* Update `stable` tag
```
git tag stable --force && git push upstream stable --force
```
* Create GitHub release from new tag and upload binaries (e.g. dist/argocd-darwin-amd64)

View File

@@ -0,0 +1,30 @@
# Site
## Developing And Testing
The web site is built using `mkdocs` and `mkdocs-material`.
To test:
```bash
mkdocs serve
```
Check for broken external links:
```bash
find docs -name '*.md' -exec grep -l http {} + | xargs awesome_bot -t 3 --allow-dupe --allow-redirect -w argocd.example.com:443,argocd.example.com,kubernetes.default.svc:443,kubernetes.default.svc,mycluster.com,https://github.com/argoproj/my-private-repository,192.168.0.20,storage.googleapis.com,localhost:8080,localhost:6443,your-kubernetes-cluster-addr,10.97.164.88 --skip-save-results --
```
## Deploying
```bash
mkdocs gh-deploy
```
## Analytics
!!! tip
Don't forget to disable your ad-blocker when testing.
We collect [Google Analytics](https://analytics.google.com/analytics/web/#/report-home/a105170809w198079555p192782995).

View File

@@ -0,0 +1,19 @@
# E2E Tests
The directory contains E2E tests and test applications. The tests assume that the Argo CD services are installed in the `argocd-e2e` namespace of the cluster in the current context. One throw-away
namespace `argocd-e2e***` is created prior to test execution. The throw-away namespace is used as a target namespace for test applications.
The `test/e2e/testdata` directory contains various Argo CD applications. Before test execution the directory is copied into a `/tmp/argocd-e2e***` temp directory and used in tests as a
Git repository via a file URL: `file:///tmp/argocd-e2e***`.
## Running Tests Locally
1. Start the e2e version: `make start-e2e`
1. Run the tests: `make test-e2e`
You can observe the tests by using the UI [http://localhost:8080/applications](http://localhost:8080/applications).
## CI Set-up
The tests are executed by the Argo Workflow defined in `.argo-ci/ci.yaml`. The CI job builds an Argo CD image, deploys the Argo CD components into a throw-away Kubernetes cluster provisioned
using k3s, and runs the e2e tests against it.

58
docs/faq.md Normal file
View File

@@ -0,0 +1,58 @@
# FAQ
## Why is my application still `OutOfSync` immediately after a successful Sync?
See [Diffing](user-guide/diffing.md) documentation for reasons resources can be OutOfSync, and ways to configure
Argo CD to ignore fields when differences are expected.
## Why is my application stuck in `Progressing` state?
Argo CD provides health assessment for several standard Kubernetes types. The `Ingress` and `StatefulSet` types have known issues which might cause the health check
to return `Progressing` instead of `Healthy`.
* `Ingress` is considered healthy if the `status.loadBalancer.ingress` list is non-empty, with at least one value for `hostname` or `IP`. Some ingress controllers
([contour](https://github.com/heptio/contour/issues/403), [traefik](https://github.com/argoproj/argo-cd/issues/968#issuecomment-451082913)) don't update the
`status.loadBalancer.ingress` field, which causes the `Ingress` to be stuck in the `Progressing` state forever.
* `StatefulSet` is considered healthy if the value of the `status.updatedReplicas` field matches the `spec.replicas` field. Due to the Kubernetes bug
[kubernetes/kubernetes#68573](https://github.com/kubernetes/kubernetes/issues/68573), `status.updatedReplicas` is not populated. So unless you run a Kubernetes version which
includes the fix [kubernetes/kubernetes#67570](https://github.com/kubernetes/kubernetes/pull/67570), a `StatefulSet` might stay in the `Progressing` state.
As a workaround, Argo CD allows providing a [health check](operator-manual/health.md) customization which overrides the default behavior.
## I forgot the admin password, how do I reset it?
Edit the `argocd-secret` secret and update the `admin.password` field with a new bcrypt hash. You
can use a site like https://www.browserling.com/tools/bcrypt to generate a new hash. Another option
is to delete both the `admin.password` and `admin.passwordMtime` keys and restart argocd-server.
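For example, a minimal sketch of the first option (assumes Python with the `bcrypt` package is available locally, Argo CD runs in the `argocd` namespace, and `newpassword` is a placeholder):
```bash
# Generate a bcrypt hash of the new password (requires the python bcrypt package).
HASH=$(python -c "import bcrypt; print(bcrypt.hashpw(b'newpassword', bcrypt.gensalt()).decode())")
# Patch the secret with the new hash and a current modification time.
kubectl -n argocd patch secret argocd-secret \
  -p '{"stringData": {"admin.password": "'"$HASH"'", "admin.passwordMtime": "'"$(date +%FT%T%Z)"'"}}'
```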
## Argo CD cannot deploy Helm Chart based applications without internet access, how can I solve it?
Argo CD might fail to generate Helm chart manifests if the chart has dependencies located in external repositories. To solve the problem, make sure that `requirements.yaml`
uses only internally available Helm repositories. Even if the chart uses only dependencies from internal repos, Helm might still decide to refresh the `stable` repo. As a workaround, override the
`stable` repo URL in the `argocd-cm` config map:
```yaml
data:
helm.repositories: |
- url: http://<internal-helm-repo-host>:8080
name: stable
```
## I've configured [cluster secret](./operator-manual/declarative-setup.md#clusters) but it does not show up in CLI/UI, how do I fix it?
Check if the cluster secret has the `argocd.argoproj.io/secret-type: cluster` label. If the secret has the label but the cluster is still not visible, it might be a
permission issue. Try to list the clusters using the `admin` user (e.g. `argocd login --username admin && argocd cluster list`).
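To quickly check the label, something like this (assuming Argo CD is installed in the `argocd` namespace) lists the registered cluster secrets:
```bash
# Secrets carrying the cluster marker label should appear here.
kubectl get secrets -n argocd -l argocd.argoproj.io/secret-type=cluster
```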
## Argo CD is unable to connect to my cluster, how do I troubleshoot it?
Use the following steps to reconstruct the configured cluster config and connect to your cluster manually using kubectl:
```bash
kubectl exec -it <argocd-pod-name> bash # exec into any argocd server pod
argocd-util kubeconfig https://<cluster-url> /tmp/config --namespace argocd # generate your cluster config
KUBECONFIG=/tmp/config kubectl get pods # test connection manually
```
Now you can manually verify that the cluster is accessible from the Argo CD pod.

View File

@@ -1,177 +1,175 @@
# ArgoCD Getting Started
# Getting Started
An example guestbook application is provided to demonstrate how ArgoCD works.
!!! tip
This guide assumes you have a grounding in the tools that Argo CD is based on. Please read [understanding the basics](understand_the_basics.md) first.
## Requirements
* Installed [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) command-line tool
* Have a [kubeconfig](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) file (default location is `~/.kube/config`).
## 1. Install ArgoCD
```
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v0.7.1/manifests/install.yaml
```
This will create a new namespace, `argocd`, where ArgoCD services and application resources will live.
## 1. Install Argo CD
NOTE:
* On GKE with RBAC enabled, you may need to grant your account the ability to create new cluster roles
```bash
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```
This will create a new namespace, `argocd`, where Argo CD services and application resources will live.
On GKE, you will need to grant your account the ability to create new cluster roles:
```bash
kubectl create clusterrolebinding YOURNAME-cluster-admin-binding --clusterrole=cluster-admin --user=YOUREMAIL@gmail.com
```
## 2. Download ArgoCD CLI
## 2. Download Argo CD CLI
Download the latest ArgoCD version:
Download the latest Argo CD version from <https://github.com/argoproj/argo-cd/releases/latest>.
On Mac:
```
Also available in Mac Homebrew:
```bash
brew tap argoproj/tap
brew install argoproj/tap/argocd
```
On Linux:
```
curl -sSL -o /usr/local/bin/argocd https://github.com/argoproj/argo-cd/releases/download/v0.7.1/argocd-linux-amd64
chmod +x /usr/local/bin/argocd
```
## 3. Access The Argo CD API Server
## 3. Open access to ArgoCD API server
By default, the Argo CD API server is not exposed with an external IP. To access the API server,
choose one of the following techniques to expose the Argo CD API server:
By default, the ArgoCD API server is not exposed with an external IP. To expose the API server,
change the service type to `LoadBalancer`:
### Service Type Load Balancer
Change the argocd-server service type to `LoadBalancer`:
```
```bash
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
```
## 4. Login to the server from the CLI
### Ingress
Follow the [ingress documentation](operator-manual/ingress.md) on how to configure Argo CD with ingress.
Login using the `admin` user. The initial password is autogenerated to be the pod name of the
ArgoCD API server. This can be retrieved with the command:
```
kubectl get pods -n argocd -l app=argocd-server -o name | cut -d'/' -f 2
### Port Forwarding
Kubectl port-forwarding can also be used to connect to the API server without exposing the service.
```bash
kubectl port-forward svc/argocd-server -n argocd 8080:443
```
Using the above password, login to ArgoCD's external IP:
The API server can then be accessed at localhost:8080.
On Minikube:
```
argocd login $(minikube service argocd-server -n argocd --url | cut -d'/' -f 3) --name minikube
```
Other clusters:
```
kubectl get svc -n argocd argocd-server
argocd login <EXTERNAL-IP>
## 4. Login Using The CLI
Login as the `admin` user. The initial password is autogenerated to be the pod name of the
Argo CD API server. This can be retrieved with the command:
```bash
kubectl get pods -n argocd -l app.kubernetes.io/name=argocd-server -o name | cut -d'/' -f 2
```
After logging in, change the password using the command:
Using the above password, login to Argo CD's IP or hostname:
```bash
argocd login <ARGOCD_SERVER>
```
Change the password using the command:
```bash
argocd account update-password
argocd relogin
```
## 5. Register a cluster to deploy apps to
## 5. Register A Cluster To Deploy Apps To (Optional)
We will now register a cluster to deploy applications to. First list all cluster contexts in your
kubeconfig:
```
This step registers a cluster's credentials to Argo CD, and is only necessary when deploying to
an external cluster. When deploying internally (to the same cluster that Argo CD is running in),
https://kubernetes.default.svc should be used as the application's K8s API server address.
First list all cluster contexts in your current kubeconfig:
```bash
argocd cluster add
```
Choose a context name from the list and supply it to `argocd cluster add CONTEXTNAME`. For example,
for minikube context, run:
```
argocd cluster add minikube --in-cluster
for docker-for-desktop context, run:
```bash
argocd cluster add docker-for-desktop
```
The above command installs an `argocd-manager` ServiceAccount and ClusterRole into the cluster
associated with the supplied kubectl context. ArgoCD uses the service account token to perform its
management tasks (i.e. deploy/monitoring).
The above command installs a ServiceAccount (`argocd-manager`) into the kube-system namespace of
that kubectl context, and binds the service account to an admin-level ClusterRole. Argo CD uses this
service account token to perform its management tasks (i.e. deploy/monitoring).
The `--in-cluster` option indicates that the cluster we are registering, is the same cluster that
ArgoCD is running in. This allows ArgoCD to connect to the cluster using the internal kubernetes
hostname (kubernetes.default.svc). When registering a cluster external to ArgoCD, the `--in-cluster`
flag should be omitted.
!!! note
The rules of the `argocd-manager-role` role can be modified such that it only has `create`, `update`, `patch`, `delete` privileges to a limited set of namespaces, groups, kinds.
However `get`, `list`, `watch` privileges are required at the cluster-scope for Argo CD to function.
## 6. Create the application from a git repository
## 6. Create An Application From A Git Repository
### Creating apps via UI
An example repository containing a guestbook application is available at
https://github.com/argoproj/argocd-example-apps.git to demonstrate how Argo CD works.
Open a browser to the ArgoCD external UI, and login using the credentials set in step 4.
### Creating Apps Via CLI
On Minikube:
```
minikube service argocd-server -n argocd
```
~~~bash
argocd app create guestbook \
--repo https://github.com/argoproj/argocd-example-apps.git \
--path guestbook \
--dest-server https://kubernetes.default.svc \
--dest-namespace default
~~~
Connect a git repository containing your apps. An example repository containing a sample
guestbook application is available at https://github.com/argoproj/argocd-example-apps.git.
### Creating Apps Via UI
Open a browser to the Argo CD external UI, and login using the credentials and IP/hostname set in step 4.
Connect the https://github.com/argoproj/argocd-example-apps.git repo to Argo CD:
![connect repo](assets/connect_repo.png)
After connecting a git repository, select the guestbook application for creation:
After connecting a repository, select the guestbook application for creation:
![select repo](assets/select_repo.png)
![select app](assets/select_app.png)
![select env](assets/select_env.png)
![create app](assets/create_app.png)
### Creating apps via CLI
Applications can also be created using the ArgoCD CLI:
```
argocd app create guestbook-default --repo https://github.com/argoproj/argocd-example-apps.git --path guestbook --env default
```
## 7. Sync (deploy) the application
## 7. Sync (Deploy) The Application
Once the guestbook application is created, you can now view its status:
From UI:
![create app](assets/guestbook-app.png)
```bash
$ argocd app get guestbook
Name: guestbook
Server: https://kubernetes.default.svc
Namespace: default
URL: https://10.97.164.88/applications/guestbook
Repo: https://github.com/argoproj/argocd-example-apps.git
Target:
Path: guestbook
Sync Policy: <none>
Sync Status: OutOfSync from (1ff8a67)
Health Status: Missing
From CLI:
```
$ argocd app get guestbook-default
Name: guestbook-default
Server: https://kubernetes.default.svc
Namespace: default
URL: https://192.168.64.36:31880/applications/argocd/guestbook-default
Environment: default
Repo: https://github.com/argoproj/argocd-example-apps.git
Path: guestbook
Target: HEAD
KIND NAME STATUS HEALTH
Service guestbook-ui OutOfSync
Deployment guestbook-ui OutOfSync
GROUP KIND NAMESPACE NAME STATUS HEALTH
apps Deployment default guestbook-ui OutOfSync Missing
Service default guestbook-ui OutOfSync Missing
```
The application status is initially in an `OutOfSync` state, since the application has yet to be
The application status is initially in `OutOfSync` state, since the application has yet to be
deployed, and no Kubernetes resources have been created. To sync (deploy) the application, run:
```
$ argocd app sync guestbook-default
Application: guestbook-default
Operation: Sync
Phase: Succeeded
Message: successfully synced
KIND NAME MESSAGE
Service guestbook-ui service "guestbook-ui" created
Deployment guestbook-ui deployment.apps "guestbook-ui" created
```bash
argocd app sync guestbook
```
This command retrieves the manifests from git repository and performs a `kubectl apply` of the
manifests. The guestbook app is now running and you can now view its resource
components, logs, events, and assessed health:
This command retrieves the manifests from the repository and performs a `kubectl apply` of the
manifests. The guestbook app is now running and you can now view its resource components, logs,
events, and assessed health status:
### From UI:
![guestbook app](assets/guestbook-app.png)
![view app](assets/guestbook-tree.png)
## 8. Next Steps
ArgoCD supports additional features such as SSO, WebHooks, RBAC, Projects. See the rest of
the [documentation](./) for details.

View File

@@ -1,18 +0,0 @@
# Resource Health
## Overview
ArgoCD provides built-in health assessment for several standard Kubernetes types, which is then
surfaced to the overall Application health status as a whole. The following checks are made for
specific types of kuberentes resources:
### Deployment, ReplicaSet, StatefulSet DaemonSet
* Observed generation is equal to desired generation.
* Number of **updated** replicas equals the number of desired replicas.
### Service
* If service type is of type `LoadBalancer`, the `status.loadBalancer.ingress` list is non-empty,
with at least one value for `hostname` or `IP`.
### Ingress
* The `status.loadBalancer.ingress` list is non-empty, with at least one value for `hostname` or `IP`.

93
docs/index.md Normal file
View File

@@ -0,0 +1,93 @@
# Overview
## What Is Argo CD?
Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes.
![Argo CD UI](assets/argocd-ui.gif)
## Why Argo CD?
Application definitions, configurations, and environments should be declarative and version controlled.
Application deployment and lifecycle management should be automated, auditable, and easy to understand.
## Getting Started
### Quick Start
```bash
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```
Follow our [getting started guide](getting_started.md). Further [documentation](docs/)
is provided for additional features.
## How it works
Argo CD follows the **GitOps** pattern of using Git repositories as the source of truth for defining
the desired application state. Kubernetes manifests can be specified in several ways:
* [kustomize](https://kustomize.io) applications
* [helm](https://helm.sh) charts
* [ksonnet](https://ksonnet.io) applications
* [jsonnet](https://jsonnet.org) files
* Plain directory of YAML/JSON manifests
* Any custom config management tool configured as a config management plugin
Argo CD automates the deployment of the desired application states in the specified target environments.
Application deployments can track updates to branches, tags, or pinned to a specific version of
manifests at a Git commit. See [tracking strategies](user-guide/tracking_strategies.md) for additional
details about the different tracking strategies available.
For a quick 10 minute overview of Argo CD, check out the demo presented to the Sig Apps community
meeting:
[![Alt text](https://img.youtube.com/vi/aWDIQMbp1cc/0.jpg)](https://youtu.be/aWDIQMbp1cc?t=1m4s)
## Architecture
![Argo CD Architecture](assets/argocd_architecture.png)
Argo CD is implemented as a kubernetes controller which continuously monitors running applications
and compares the current, live state against the desired target state (as specified in the Git repo).
A deployed application whose live state deviates from the target state is considered `OutOfSync`.
Argo CD reports & visualizes the differences, while providing facilities to automatically or
manually sync the live state back to the desired target state. Any modifications made to the desired
target state in the Git repo can be automatically applied and reflected in the specified target
environments.
For additional details, see [architecture overview](operator-manual/architecture.md).
## Features
* Automated deployment of applications to specified target environments
* Support for multiple config management/templating tools (Kustomize, Helm, Ksonnet, Jsonnet, plain-YAML)
* Ability to manage and deploy to multiple clusters
* SSO Integration (OIDC, OAuth2, LDAP, SAML 2.0, GitHub, GitLab, Microsoft, LinkedIn)
* Multi-tenancy and RBAC policies for authorization
* Rollback/Roll-anywhere to any application configuration committed in Git repository
* Health status analysis of application resources
* Automated configuration drift detection and visualization
* Automated or manual syncing of applications to its desired state
* Web UI which provides real-time view of application activity
* CLI for automation and CI integration
* Webhook integration (GitHub, BitBucket, GitLab)
* Access tokens for automation
* PreSync, Sync, PostSync hooks to support complex application rollouts (e.g. blue/green & canary upgrades)
* Audit trails for application events and API calls
* Prometheus metrics
* Parameter overrides for overriding ksonnet/helm parameters in Git
## Community Blogs And Presentations
* GitOps with Argo CD: [Simplify and Automate Deployments Using GitOps with IBM Multicloud Manager](https://www.ibm.com/blogs/bluemix/2019/02/simplify-and-automate-deployments-using-gitops-with-ibm-multicloud-manager-3-1-2/)
* KubeCon talk: [CI/CD in Light Speed with K8s and Argo CD](https://www.youtube.com/watch?v=OdzH82VpMwI&feature=youtu.be)
* KubeCon talk: [Machine Learning as Code](https://www.youtube.com/watch?v=VXrGp5er1ZE&t=0s&index=135&list=PLj6h78yzYM2PZf9eA7bhWnIh_mK1vyOfU)
* Among other things, describes how Kubeflow uses Argo CD to implement GitOps for ML
* SIG Apps demo: [Argo CD - GitOps Continuous Delivery for Kubernetes](https://www.youtube.com/watch?v=aWDIQMbp1cc&feature=youtu.be&t=1m4s)
## Development Status
Argo CD is actively developed and is being used in production to deploy SaaS services at Intuit.

View File

@@ -0,0 +1,75 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: guestbook
# You'll usually want to add your resources to the argocd namespace.
namespace: argocd
# Add this finalizer ONLY if you want these resources to cascade delete.
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
# The project the application belongs to.
project: default
# Source of the application manifests
source:
repoURL: https://github.com/argoproj/argocd-example-apps.git
targetRevision: HEAD
path: guestbook
# helm specific config
helm:
valueFiles:
- values-prod.yaml
# kustomize specific config
kustomize:
# Optional image name prefix
namePrefix: prod-
# Optional image tags passed to "kustomize edit set imagetag" (Kustomize 1 only).
imageTags:
- name: gcr.io/heptio-images/ks-guestbook-demo
value: "0.2"
# Optional images passed to "kustomize edit set image" (Kustomize 2 only).
images:
- gcr.io/heptio-images/ks-guestbook-demo:0.2
# directory
directory:
recurse: true
jsonnet:
# A list of Jsonnet External Variables
extVars:
- name: foo
value: bar
# You can use "code to determine if the value is either string (false, the default) or Jsonnet code (if code is true).
- code: true
name: baz
value: "true"
# A list of Jsonnet Top-level Arguments
tlas:
- code: false
name: foo
value: bar
# plugin specific config
plugin:
name: mypluginname
# Destination cluster and namespace to deploy the application
destination:
server: https://kubernetes.default.svc
namespace: guestbook
# Sync policy
syncPolicy:
automated:
prune: true
# Ignore differences at the specified json pointers
ignoreDifferences:
- group: apps
kind: Deployment
jsonPointers:
- /spec/replicas

View File

@@ -1,52 +1,34 @@
# Argo CD - Architectural Overview
# Architectural Overview
![Argo CD Architecture](argocd_architecture.png)
![Argo CD Architecture](../assets/argocd_architecture.png)
## Components
### API Server
The API server is a gRPC/REST server which exposes the API consumed by the Web UI, CLI, and CI/CD
systems. It has the following responsibilities:
* application management and status reporting
* invoking of application operations (e.g. sync, rollback, user-defined actions)
* repository and cluster credential management (stored as K8s secrets)
* authentication and auth delegation to external identity providers
* RBAC enforcement
* listener/forwarder for git webhook events
* listener/forwarder for Git webhook events
### Repository Server
The repository server is an internal service which maintains a local cache of the git repository
The repository server is an internal service which maintains a local cache of the Git repository
holding the application manifests. It is responsible for generating and returning the Kubernetes
manifests when provided the following inputs:
* repository URL
* git revision (commit, tag, branch)
* revision (commit, tag, branch)
* application path
* template specific settings: parameters, ksonnet environments, helm values.yaml
### Application Controller
The application controller is a Kubernetes controller which continuously monitors running
applications and compares the current, live state against the desired target state (as specified in
the git repo). It detects `OutOfSync` application state and optionally takes corrective action. It
the repo). It detects `OutOfSync` application state and optionally takes corrective action. It
is responsible for invoking any user-defined hooks for lifecycle events (PreSync, Sync, PostSync).
### Application CRD (Custom Resource Definition)
The Application CRD is the Kubernetes resource object representing a deployed application instance
in an environment. It is defined by two key pieces of information:
* `source` reference to the desired state in git (repository, revision, path, environment)
* `destination` reference to the target cluster and namespace.
An example spec is as follows:
```
spec:
project: default
source:
repoURL: https://github.com/argoproj/argocd-example-apps.git
targetRevision: HEAD
path: guestbook
environment: default
destination:
server: https://kubernetes.default.svc
namespace: default
```

View File

@@ -0,0 +1,118 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-cm
data:
# Argo CD's externally facing base URL (optional). Required when configuring SSO
url: https://argo-cd-demo.argoproj.io
# A dex connector configuration (optional). See SSO configuration documentation:
# https://github.com/argoproj/argo-cd/blob/master/docs/sso.md
# https://github.com/dexidp/dex/tree/master/Documentation/connectors
dex.config: |
connectors:
# GitHub example
- type: github
id: github
name: GitHub
config:
clientID: aabbccddeeff00112233
clientSecret: $dex.github.clientSecret
orgs:
- name: your-github-org
teams:
- red-team
# OIDC configuration as an alternative to dex (optional).
oidc.config: |
name: Okta
issuer: https://dev-123456.oktapreview.com
clientID: aaaabbbbccccddddeee
clientSecret: $oidc.okta.clientSecret
# Optional set of OIDC scopes to request. If omitted, defaults to: ["openid", "profile", "email", "groups"]
requestedScopes: ["openid", "profile", "email"]
# Git repositories to configure Argo CD with (optional).
# This list is updated when configuring/removing repos from the UI/CLI
repositories: |
- url: https://github.com/argoproj/my-private-repository
passwordSecret:
name: my-secret
key: password
usernameSecret:
name: my-secret
key: username
sshPrivateKeySecret:
name: my-secret
key: sshPrivateKey
# Non-standard and private Helm repositories (optional).
helm.repositories: |
- url: https://storage.googleapis.com/istio-prerelease/daily-build/master-latest-daily/charts
name: istio.io
- url: https://my-private-chart-repo.internal
name: private-repo
usernameSecret:
name: my-secret
key: username
passwordSecret:
name: my-secret
key: password
# Configuration to customize resource behavior (optional). Keys are in the form: group/Kind.
resource.customizations: |
admissionregistration.k8s.io/MutatingWebhookConfiguration:
# List of json pointers in the object to ignore differences
ignoreDifferences: |
jsonPointers:
- webhooks/0/clientConfig/caBundle
certmanager.k8s.io/Certificate:
# Lua script for customizing the health status assessment
health.lua: |
hs = {}
if obj.status ~= nil then
if obj.status.conditions ~= nil then
for i, condition in ipairs(obj.status.conditions) do
if condition.type == "Ready" and condition.status == "False" then
hs.status = "Degraded"
hs.message = condition.message
return hs
end
if condition.type == "Ready" and condition.status == "True" then
hs.status = "Healthy"
hs.message = condition.message
return hs
end
end
end
end
hs.status = "Progressing"
hs.message = "Waiting for certificate"
return hs
# Configuration to completely ignore entire classes of resource group/kinds (optional).
# Excluding high-volume resources improves performance and memory usage, and reduces load and
# bandwidth to the Kubernetes API server.
# These are globs, so a "*" will match all values.
# If you omit groups/kinds/clusters then they will match all groups/kinds/clusters.
# NOTE: events.k8s.io and metrics.k8s.io are excluded by default
resource.exclusions: |
- apiGroups:
- repositories.stash.appscode.com
kinds:
- Snapshot
clusters:
- "*.local"
# Configuration to add a config management plugin.
configManagementPlugins: |
- name: kasane
init:
command: [kasane, update]
generate:
command: [kasane, show]
# The metadata.label key name where Argo CD injects the app name as a tracking label (optional).
# Tracking labels are used to determine which resources need to be deleted when pruning.
# If omitted, Argo CD injects the app name into the label: 'app.kubernetes.io/instance'
application.instanceLabelKey: mycompany.com/appname

View File

@@ -0,0 +1,26 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-rbac-cm
data:
# policy.csv is a file containing user-defined RBAC policies and role definitions (optional).
# Policy rules are in the form:
# p, subject, resource, action, object, effect
# Role definitions and bindings are in the form:
# g, subject, inherited-subject
# See https://github.com/argoproj/argo-cd/blob/master/docs/rbac.md for additional information.
policy.csv: |
# Grant all members of the group 'my-org:team-alpha' the ability to sync apps in 'my-project'
p, my-org:team-alpha, applications, sync, my-project/*, allow
# Grant all members of 'my-org:team-beta' admin privileges
g, my-org:team-beta, role:admin
# policy.default is the name of the default role which Argo CD falls back to when
# authorizing API requests (optional). If omitted or empty, users may still be able to log in,
# but will see no apps, projects, etc...
policy.default: role:readonly
# scopes controls which OIDC scopes to examine during RBAC enforcement (in addition to the `sub` scope).
# If omitted, defaults to: `[groups]`. The scope value can be a string, or a list of strings.
scopes: [cognito:groups, email]

View File

@@ -0,0 +1,25 @@
apiVersion: v1
kind: Secret
metadata:
name: argocd-secret
type: Opaque
data:
# TLS certificate and private key for API server (required).
# Autogenerated with a self-signed certificate when keys are missing or invalid.
tls.crt:
tls.key:
# bcrypt hash of the admin password and its last modified time (required).
# Autogenerated to be the name of the argocd-server pod when missing.
admin.password:
admin.passwordMtime:
# random server signature key for session validation (required).
# Autogenerated when missing.
server.secretkey:
# Shared secrets for authenticating GitHub, GitLab, BitBucket webhook events (optional).
# See https://github.com/argoproj/argo-cd/blob/master/docs/webhook.md for additional details.
github.webhook.secret:
gitlab.webhook.secret:
bitbucket.webhook.uuid:

View File

@@ -0,0 +1,73 @@
# Custom Tooling
Argo CD bundles preferred versions of its supported templating tools (helm, kustomize, ks, jsonnet)
as part of its container images. Sometimes, it may be desired to use a specific version of a tool
other than what Argo CD bundles. Some reasons to do this might be:
* To upgrade/downgrade to a specific version of a tool due to bugs or bug fixes.
* To install additional dependencies to be used by kustomize's configmap/secret generators
(e.g. curl, vault, gpg, AWS CLI)
* To install a [config management plugin](../user-guide/application_sources.md#config-management-plugins)
As the Argo CD repo-server is the single service responsible for generating Kubernetes manifests, it
can be customized to use the alternative toolchain required by your environment.
## Adding Tools Via Volume Mounts
The first technique is to use an `init` container and a `volumeMount` to copy a different version of
a tool into the repo-server container. In the following example, an init container is overwriting
the helm binary with a different version than what is bundled in Argo CD:
```yaml
spec:
# 1. Define an emptyDir volume which will hold the custom binaries
volumes:
- name: custom-tools
emptyDir: {}
# 2. Use an init container to download/copy custom binaries into the emptyDir
initContainers:
- name: download-tools
image: alpine:3.8
command: [sh, -c]
args:
- wget -qO- https://storage.googleapis.com/kubernetes-helm/helm-v2.12.3-linux-amd64.tar.gz | tar -xvzf - &&
mv linux-amd64/helm /custom-tools/
volumeMounts:
- mountPath: /custom-tools
name: custom-tools
# 3. Volume mount the custom binary to the bin directory (overriding the existing version)
containers:
- name: argocd-repo-server
volumeMounts:
- mountPath: /usr/local/bin/helm
name: custom-tools
subPath: helm
```
## BYOI (Build Your Own Image)
Sometimes replacing a binary isn't sufficient and you need to install other dependencies. The
following example builds an entirely customized repo-server from a Dockerfile, installing extra
dependencies that may be needed for generating manifests.
```Dockerfile
FROM argoproj/argocd:latest
# Switch to root for the ability to perform install
USER root
# Install tools needed for your repo-server to retrieve & decrypt secrets, render manifests
# (e.g. curl, awscli, gpg, sops)
RUN apt-get update && \
apt-get install -y \
curl \
awscli \
gpg && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* && \
curl -o /usr/local/bin/sops -L https://github.com/mozilla/sops/releases/download/3.2.0/sops-3.2.0.linux && \
chmod +x /usr/local/bin/sops
# Switch back to non-root user
USER argocd
```

View File

@@ -0,0 +1,406 @@
# Declarative Setup
Argo CD applications, projects and settings can be defined declaratively using Kubernetes manifests.
## Quick Reference
| Name | Kind | Description |
|------|------|-------------|
| [`argocd-cm.yaml`](argocd-cm.yaml) | ConfigMap | General Argo CD configuration |
| [`argocd-secret.yaml`](argocd-secret.yaml) | Secret | Password, Certificates, Signing Key |
| [`argocd-rbac-cm.yaml`](argocd-rbac-cm.yaml) | ConfigMap | RBAC Configuration |
| [`application.yaml`](application.yaml) | Application | Example application spec |
| [`project.yaml`](project.yaml) | AppProject | Example project spec |
## Applications
The Application CRD is the Kubernetes resource object representing a deployed application instance
in an environment. It is defined by two key pieces of information:
* `source` reference to the desired state in Git (repository, revision, path, environment)
* `destination` reference to the target cluster and namespace.
A minimal Application spec is as follows:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: guestbook
spec:
project: default
source:
repoURL: https://github.com/argoproj/argocd-example-apps.git
targetRevision: HEAD
path: guestbook
destination:
server: https://kubernetes.default.svc
namespace: guestbook
```
See [application.yaml](application.yaml) for additional fields.
!!! warning
By default, deleting an application will not perform a cascade delete, which would delete its resources. You must add the finalizer if you want this behaviour - which you may well not want.
```yaml
metadata:
finalizers:
- resources-finalizer.argocd.argoproj.io
```
### App of Apps of Apps
You can create an application that creates other applications, which in turn can create other applications.
This allows you to declaratively manage a group of applications that can be deployed and configured in concert.
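For instance, a minimal sketch of a parent application (the repo URL and `apps` path here are hypothetical; that directory would itself contain Application manifests):
```bash
# Create a parent app whose manifests are themselves Application resources (app of apps).
kubectl apply -n argocd -f - <<EOF
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: apps
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/my-apps.git  # hypothetical repo
    targetRevision: HEAD
    path: apps  # directory of Application manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
EOF
```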
## Projects
The AppProject CRD is the Kubernetes resource object representing a logical grouping of applications.
It is defined by the following key pieces of information:
* `sourceRepos` reference to the repositories that applications within the project can pull manifests from.
* `destinations` reference to clusters and namespaces that applications within the project can deploy into.
* `roles` list of entities with definitions of their access to resources within the project.
An example spec is as follows:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
name: my-project
spec:
description: Example Project
# Allow manifests to deploy from any Git repos
sourceRepos:
- '*'
# Only permit applications to deploy to the guestbook namespace in the same cluster
destinations:
- namespace: guestbook
server: https://kubernetes.default.svc
# Deny all cluster-scoped resources from being created, except for Namespace
clusterResourceWhitelist:
- group: ''
kind: Namespace
# Allow all namespaced-scoped resources to be created, except for ResourceQuota, LimitRange, NetworkPolicy
namespaceResourceBlacklist:
- group: ''
kind: ResourceQuota
- group: ''
kind: LimitRange
- group: ''
kind: NetworkPolicy
roles:
# A role which provides read-only access to all applications in the project
- name: read-only
description: Read-only privileges to my-project
policies:
- p, proj:my-project:read-only, applications, get, my-project/*, allow
groups:
- my-oidc-group
# A role which provides sync privileges to only the guestbook-dev application, e.g. to provide
# sync privileges to a CI system
- name: ci-role
description: Sync privileges for guestbook-dev
policies:
- p, proj:my-project:ci-role, applications, sync, my-project/guestbook-dev, allow
# NOTE: JWT tokens can only be generated by the API server and the token is not persisted
# anywhere by Argo CD. It can be prematurely revoked by removing the entry from this list.
jwtTokens:
- iat: 1535390316
```
## Repositories
Repository credentials are stored in secrets. Use the following steps to configure a repo:
1. Create a secret which contains the repository credentials. Consider using [bitnami-labs/sealed-secrets](https://github.com/bitnami-labs/sealed-secrets) to store an encrypted secret
definition as a Kubernetes manifest.
2. Register the repository in the `argocd-cm` config map. Each repository must have a `url` field and, depending on whether you connect using HTTPS or SSH, `usernameSecret` and `passwordSecret` (for HTTPS) or `sshPrivateKeySecret` (for SSH).
Example for HTTPS:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-cm
data:
repositories: |
- url: https://github.com/argoproj/my-private-repository
passwordSecret:
name: my-secret
key: password
usernameSecret:
name: my-secret
key: username
```
Example for SSH:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-cm
data:
repositories: |
- url: git@github.com:argoproj/my-private-repository
sshPrivateKeySecret:
name: my-secret
key: sshPrivateKey
```
!!! tip
The Kubernetes documentation has [instructions for creating a secret containing a private key](https://kubernetes.io/docs/concepts/configuration/secret/#use-case-pod-with-ssh-keys).
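A sketch of creating such a secret for SSH access (the secret name and key path are illustrative):
```bash
# Store an SSH private key under the `sshPrivateKey` key of a generic secret.
kubectl -n argocd create secret generic my-secret \
  --from-file=sshPrivateKey=$HOME/.ssh/id_rsa
```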
### Repository Credentials (v1.1+)
If you want to use the same credentials for multiple repositories, you can use `repository.credentials`:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-cm
data:
repositories: |
- url: https://github.com/argoproj/private-repo
- url: https://github.com/argoproj/other-private-repo
repository.credentials: |
- url: https://github.com/argoproj
passwordSecret:
name: my-secret
key: password
usernameSecret:
name: my-secret
key: username
```
Argo CD will only use the credentials if you omit `usernameSecret`, `passwordSecret`, and `sshPrivateKeySecret` fields (`insecureIgnoreHostKey` is ignored).
A credential matches if its URL is a prefix of the repository's URL. This means that multiple credentials may match; e.g. in the above example both [https://github.com/argoproj](https://github.com/argoproj) and [https://github.com](https://github.com) would match. Argo CD selects the first one that matches.
!!! tip
Order your credentials with the most specific at the top and the least specific at the bottom.
A complete example:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-cm
data:
repositories: |
# this has its own credentials
- url: https://github.com/argoproj/private-repo
passwordSecret:
name: private-repo-secret
key: password
usernameSecret:
name: private-repo-secret
key: username
sshPrivateKeySecret:
name: private-repo-secret
key: sshPrivateKey
- url: https://github.com/argoproj/other-private-repo
- url: https://github.com/otherproj/another-private-repo
repository.credentials: |
# this will be used for the second repo
- url: https://github.com/argoproj
passwordSecret:
name: other-private-repo-secret
key: password
usernameSecret:
name: other-private-repo-secret
key: username
sshPrivateKeySecret:
name: other-private-repo-secret
key: sshPrivateKey
# this will be used for the third repo
- url: https://github.com
passwordSecret:
name: another-private-repo-secret
key: password
usernameSecret:
name: another-private-repo-secret
key: username
sshPrivateKeySecret:
name: another-private-repo-secret
key: sshPrivateKey
```
## Clusters
Cluster credentials are stored in secrets, like repository credentials, but do not require an entry in the `argocd-cm` config map. Each secret must have the label
`argocd.argoproj.io/secret-type: cluster`.
The secret data must include the following fields:
* `name` - cluster name
* `server` - cluster api server url
* `config` - JSON representation of the following data structure:
```yaml
# Basic authentication settings
username: string
password: string
# Bearer authentication settings
bearerToken: string
# IAM authentication configuration
awsAuthConfig:
clusterName: string
roleARN: string
# Transport layer security configuration settings
tlsClientConfig:
# PEM-encoded bytes (typically read from a root certificates bundle).
caData: string
# PEM-encoded bytes (typically read from a client certificate file).
certData: string
# Server should be accessed without verifying the TLS certificate
insecure: boolean
# PEM-encoded bytes (typically read from a client certificate key file).
keyData: string
# ServerName is passed to the server for SNI and is used in the client to check server
# certificates against. If ServerName is empty, the hostname used to contact the
# server is used.
serverName: string
```
Cluster secret example:
```yaml
apiVersion: v1
kind: Secret
metadata:
name: mycluster-secret
labels:
argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
name: mycluster.com
server: https://mycluster.com
config: |
{
"bearerToken": "<authentication token>",
"tlsClientConfig": {
"insecure": false,
"caData": "<base64 encoded certificate>"
}
}
```
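In practice these cluster secrets are usually created by the CLI rather than written by hand: `argocd cluster add CONTEXTNAME` installs an `argocd-manager` ServiceAccount into the target cluster and registers a secret of this shape, e.g. (context name assumed):
```bash
argocd cluster add my-cluster-context
```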
## Helm Chart Repositories
Non-standard Helm chart repositories have to be registered under the `helm.repositories` key in the
`argocd-cm` ConfigMap. Each repository must have `url` and `name` fields. For private Helm repos you
may need to configure access credentials and HTTPS settings using `usernameSecret`, `passwordSecret`,
`caSecret`, `certSecret` and `keySecret` fields.
Example:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-cm
data:
helm.repositories: |
- url: https://storage.googleapis.com/istio-prerelease/daily-build/master-latest-daily/charts
name: istio.io
- url: https://argoproj.github.io/argo-helm
name: argo
usernameSecret:
name: my-secret
key: username
passwordSecret:
name: my-secret
key: password
caSecret:
name: my-secret
key: ca
certSecret:
name: my-secret
key: cert
keySecret:
name: my-secret
key: key
```
## Resource Exclusion
Resources can be excluded from discovery and sync so that ArgoCD is unaware of them. For example, `events.k8s.io` and `metrics.k8s.io` are always excluded. Use cases:
* You have temporary issues and want to exclude problematic resources.
* There are many resources of a kind and they impact ArgoCD's performance.
* You want to restrict ArgoCD's access to certain kinds of resources, e.g. secrets. See [security.md#cluster-rbac](security.md#cluster-rbac).
To configure this, edit the `argocd-cm` config map:
```
kubectl edit configmap argocd-cm -n argocd
```
Add `resource.exclusions`, e.g.:
```yaml
apiVersion: v1
data:
resource.exclusions: |
- apiGroups:
- "*"
kinds:
- "*"
clusters:
- https://192.168.0.20
kind: ConfigMap
```
The `resource.exclusions` node is a list of objects. Each object can have:
- `apiGroups` A list of globs to match the API group.
- `kinds` A list of kinds to match. Can be "*" to match all.
- `clusters` A list of globs to match the cluster.
If all three match, then the resource is ignored.
Notes:
* Quote globs in your YAML to avoid parsing errors.
* Invalid globs result in the whole rule being ignored.
* If you add a rule that matches existing resources, these will appear in the interface as `OutOfSync`.
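As another sketch, a rule that hides one noisy kind across every cluster might look like this (the group and kind here are illustrative):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
data:
  resource.exclusions: |
    - apiGroups:
      - "coordination.k8s.io"
      kinds:
      - "Lease"
      clusters:
      - "*"
```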
## SSO & RBAC
* SSO configuration details: [SSO](sso.md)
* RBAC configuration details: [RBAC](rbac.md)
## Manage Argo CD Using Argo CD
Argo CD is able to manage itself since all settings are represented by Kubernetes manifests. The suggested way is to create a [Kustomize](https://github.com/kubernetes-sigs/kustomize)
based application which uses the base Argo CD manifests from [https://github.com/argoproj/argo-cd](https://github.com/argoproj/argo-cd) and applies the required changes on top.
Example of `kustomization.yaml`:
```yaml
bases:
- github.com/argoproj/argo-cd//manifests/cluster-install?ref=v0.10.6
# additional resources like ingress rules, cluster and repository secrets.
resources:
- clusters-secrets.yaml
- repos-secrets.yaml
# changes to config maps
patchesStrategicMerge:
- overlays/argo-cd-cm.yaml
```
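The `overlays/argo-cd-cm.yaml` file referenced above would be an ordinary strategic merge patch of the config map; a minimal sketch (contents illustrative):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
data:
  repositories: |
    - url: https://github.com/argoproj/my-private-repository
```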
The live example of self managed Argo CD config is available at https://cd.apps.argoproj.io and with configuration
stored at [argoproj/argoproj-deployments](https://github.com/argoproj/argoproj-deployments/tree/master/argocd).
!!! note
You will need to sign in using your GitHub account to get access to https://cd.apps.argoproj.io
@@ -0,0 +1,90 @@
# Resource Health
## Overview
Argo CD provides built-in health assessment for several standard Kubernetes types, which is then
surfaced to the overall Application health status. The following checks are made for
specific types of Kubernetes resources:
### Deployment, ReplicaSet, StatefulSet, DaemonSet
* Observed generation is equal to desired generation.
* Number of **updated** replicas equals the number of desired replicas.
### Service
* If the service is of type `LoadBalancer`, the `status.loadBalancer.ingress` list is non-empty,
with at least one value for `hostname` or `IP`.
### Ingress
* The `status.loadBalancer.ingress` list is non-empty, with at least one value for `hostname` or `IP`.
### PersistentVolumeClaim
* The `status.phase` is `Bound`.
## Custom Health Checks
Argo CD supports custom health checks written in [Lua](https://www.lua.org/). This is useful if you:
* Are affected by known issues where your `Ingress` or `StatefulSet` resources are stuck in a `Progressing` state because of a bug in your resource controller.
* Have a custom resource for which Argo CD does not have a built-in health check.
There are two ways to configure a custom health check, described in the next two sections.
### Way 1. Define a Custom Health Check in `argocd-cm` ConfigMap
Custom health checks can be defined in the `resource.customizations` field of the `argocd-cm` ConfigMap. The following example demonstrates a health check for `certmanager.k8s.io/Certificate`.
```yaml
data:
resource.customizations: |
certmanager.k8s.io/Certificate:
health.lua: |
hs = {}
if obj.status ~= nil then
if obj.status.conditions ~= nil then
for i, condition in ipairs(obj.status.conditions) do
if condition.type == "Ready" and condition.status == "False" then
hs.status = "Degraded"
hs.message = condition.message
return hs
end
if condition.type == "Ready" and condition.status == "True" then
hs.status = "Healthy"
hs.message = condition.message
return hs
end
end
end
end
hs.status = "Progressing"
hs.message = "Waiting for certificate"
return hs
```
`obj` is a global variable which contains the resource. The script must return an object with a `status` field and an optional `message` field.
NOTE: as a security measure, you do not have access to most of the standard Lua libraries.
### Way 2. Contribute a Custom Health Check
A health check can be bundled into Argo CD. Custom health check scripts are located in the `resource_customizations` directory of [https://github.com/argoproj/argo-cd](https://github.com/argoproj/argo-cd) and must follow the directory structure below:
```
argo-cd
|-- resource_customizations
| |-- your.crd.group.io # CRD group
| | |-- MyKind # Resource kind
| | | |-- health.lua # Health check
| | | |-- health_test.yaml # Test inputs and expected results
| | | +-- testdata # Directory with test resource YAML definitions
```
Each health check must have tests defined in a `health_test.yaml` file. The `health_test.yaml` is a YAML file with the following structure:
```yaml
tests:
- healthStatus:
status: ExpectedStatus
message: Expected message
inputPath: testdata/test-resource-definition.yaml
```
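As a concrete sketch, a test for the `Progressing` branch of the Certificate check shown earlier could look like this (the `inputPath` file name is hypothetical):
```yaml
tests:
- healthStatus:
    status: Progressing
    message: Waiting for certificate
  inputPath: testdata/progressing.yaml
```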
[PR#1139](https://github.com/argoproj/argo-cd/pull/1139) is an example of a custom health check for the Cert Manager CRDs.
@@ -0,0 +1,6 @@
# Overview
This guide is for administrators and operators wanting to install and configure Argo CD for other developers.
!!! note
Please make sure you've completed the [getting started guide](../getting_started.md).
@@ -0,0 +1,177 @@
# Ingress Configuration
Argo CD runs both a gRPC server (used by the CLI) and an HTTP/HTTPS server (used by the UI).
Both protocols are exposed by the argocd-server service object on the following ports:
* 443 - gRPC/HTTPS
* 80 - HTTP (redirects to HTTPS)
There are several ways Ingress can be configured.
## [kubernetes/ingress-nginx](https://github.com/kubernetes/ingress-nginx)
### Option 1: SSL-Passthrough
Because Argo CD serves multiple protocols (gRPC/HTTPS) on the same port (443), this provides a
challenge when attempting to define a single nginx ingress object and rule for the argocd-service,
since the `nginx.ingress.kubernetes.io/backend-protocol` [annotation](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#backend-protocol)
accepts only a single value for the backend protocol (e.g. HTTP, HTTPS, GRPC, GRPCS).
In order to expose the Argo CD API server with a single ingress rule and hostname, the
`nginx.ingress.kubernetes.io/ssl-passthrough` [annotation](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#ssl-passthrough)
must be used to pass TLS connections through and terminate TLS at the Argo CD API server.
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: argocd-server-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
rules:
- host: argocd.example.com
http:
paths:
- backend:
serviceName: argocd-server
servicePort: https
```
The above rule terminates TLS at the Argo CD API server, which detects the protocol being used,
and responds appropriately. Note that the `nginx.ingress.kubernetes.io/ssl-passthrough` annotation
requires that the `--enable-ssl-passthrough` flag be added to the command line arguments to
`nginx-ingress-controller`.
### Option 2: Multiple Ingress Objects And Hosts
Since ingress-nginx supports only a single protocol per Ingress object, an alternative
way is to define two Ingress objects: one for HTTP/HTTPS, and the other for gRPC.
HTTP/HTTPS Ingress:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: argocd-server-http-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
spec:
rules:
- http:
paths:
- backend:
serviceName: argocd-server
servicePort: http
host: argocd.example.com
tls:
- hosts:
- argocd.example.com
secretName: argocd-secret
```
gRPC Ingress:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: argocd-server-grpc-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
rules:
- http:
paths:
- backend:
serviceName: argocd-server
servicePort: https
host: grpc.argocd.example.com
tls:
- hosts:
- grpc.argocd.example.com
secretName: argocd-secret
```
The API server should then be run with TLS disabled. Edit the `argocd-server` deployment to add the
`--insecure` flag to the argocd-server command:
```yaml
spec:
template:
spec:
name: argocd-server
containers:
- command:
- /argocd-server
- --staticassets
- /shared/app
- --repo-server
- argocd-repo-server:8081
- --insecure
```
The obvious disadvantage of this approach is that it requires two separate hostnames for
the API server -- one for gRPC and the other for HTTP/HTTPS. However, it allows TLS termination to
happen at the ingress controller.
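With this setup, the CLI is pointed at the gRPC hostname rather than the UI hostname, e.g. (hostname taken from the example above):
```bash
argocd login grpc.argocd.example.com
```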
## AWS Application Load Balancers (ALBs) And Classic ELB (HTTP Mode)
Neither ALBs nor Classic ELBs in HTTP mode have full support for HTTP/2 and gRPC, which the
`argocd` CLI requires. Thus, when using an AWS load balancer, either a Classic ELB in
passthrough mode or an NLB is needed.
## UI Base Path
If the Argo CD UI is available under a non-root path (e.g. `/argo-cd` instead of `/`), then the UI path should be configured in the API server.
To configure the UI path, add the `--basehref` flag to the `argocd-server` deployment command:
```yaml
spec:
template:
spec:
name: argocd-server
containers:
- command:
- /argocd-server
- --staticassets
- /shared/app
- --repo-server
- argocd-repo-server:8081
- --basehref
- /argo-cd
```
NOTE: the `--basehref` flag only changes the UI base URL. The API server keeps serving under the `/` path, so you need to add a URL rewrite rule to the proxy config.
Example nginx.conf with URL rewrite:
```
worker_processes 1;
events { worker_connections 1024; }
http {
sendfile on;
server {
listen 443;
location /argo-cd {
rewrite /argo-cd/(.*) /$1 break;
proxy_pass https://localhost:8080;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
}
}
}
```
@@ -0,0 +1,70 @@
# Metrics
Argo CD exposes two sets of Prometheus metrics.
## Application Metrics
Metrics about applications. Scraped at the `argocd-metrics:8082/metrics` endpoint.
* Gauge for application health status
* Gauge for application sync status
* Counter for application sync history
## API Server Metrics
Metrics about API Server API request and response activity (request totals, response codes, etc...).
Scraped at the `argocd-server-metrics:8083/metrics` endpoint.
## Prometheus Operator
If using Prometheus Operator, the following ServiceMonitor example manifests can be used.
Change `metadata.labels.release` to the name of the label selected by your Prometheus.
```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: argocd-metrics
labels:
release: prometheus-operator
spec:
selector:
matchLabels:
app.kubernetes.io/name: argocd-metrics
endpoints:
- port: metrics
```
```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: argocd-server-metrics
labels:
release: prometheus-operator
spec:
selector:
matchLabels:
app.kubernetes.io/name: argocd-server-metrics
endpoints:
- port: metrics
```
```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: argocd-repo-server-metrics
labels:
release: prometheus-operator
spec:
selector:
matchLabels:
app.kubernetes.io/name: argocd-repo-server
endpoints:
- port: metrics
```
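If you are not using the Prometheus Operator, a plain scrape configuration against the two endpoints named above might look like the following sketch (job names assumed; targets assume in-cluster DNS resolution):
```yaml
scrape_configs:
- job_name: argocd-application-metrics
  static_configs:
  - targets:
    - argocd-metrics:8082
- job_name: argocd-server-metrics
  static_configs:
  - targets:
    - argocd-server-metrics:8083
```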
## Dashboards
You can find an example Grafana dashboard [here](https://github.com/argoproj/argo-cd/blob/master/examples/dashboard.json).
![dashboard](../assets/dashboard.jpg)
@@ -0,0 +1,51 @@
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
name: my-project
spec:
# Project description
description: Example Project
# Allow manifests to deploy from any Git repos
sourceRepos:
- '*'
# Only permit applications to deploy to the guestbook namespace in the same cluster
destinations:
- namespace: guestbook
server: https://kubernetes.default.svc
# Deny all cluster-scoped resources from being created, except for Namespace
clusterResourceWhitelist:
- group: ''
kind: Namespace
# Allow all namespaced-scoped resources to be created, except for ResourceQuota, LimitRange, NetworkPolicy
namespaceResourceBlacklist:
- group: ''
kind: ResourceQuota
- group: ''
kind: LimitRange
- group: ''
kind: NetworkPolicy
roles:
# A role which provides read-only access to all applications in the project
- name: read-only
description: Read-only privileges to my-project
policies:
- p, proj:my-project:read-only, applications, get, my-project/*, allow
groups:
- my-oidc-group
# A role which provides sync privileges to only the guestbook-dev application, e.g. to provide
# sync privileges to a CI system
- name: ci-role
description: Sync privileges for guestbook-dev
policies:
- p, proj:my-project:ci-role, applications, sync, my-project/guestbook-dev, allow
# NOTE: JWT tokens can only be generated by the API server and the token is not persisted
# anywhere by Argo CD. It can be prematurely revoked by removing the entry from this list.
jwtTokens:
- iat: 1535390316
@@ -0,0 +1,42 @@
# RBAC
## Overview
The RBAC feature enables restriction of access to Argo CD resources. Argo CD does not have its own
user management system and has only one built-in user `admin`. The `admin` user is a superuser and
it has unrestricted access to the system. RBAC requires [SSO configuration](./sso.md). Once SSO is
configured, additional RBAC roles can be defined, and SSO groups can be mapped to roles.
## Configure RBAC
RBAC configuration allows defining roles and groups. Argo CD has two pre-defined roles:
* `role:readonly` - read-only access to all resources
* `role:admin` - unrestricted access to all resources
These role definitions can be seen in [builtin-policy.csv](https://github.com/argoproj/argo-cd/blob/master/assets/builtin-policy.csv)
Additional roles and groups can be configured in `argocd-rbac-cm` ConfigMap. The example below
configures a custom role, named `org-admin`. The role is assigned to any user which belongs to
`your-github-org:your-team` group. All other users get the default policy of `role:readonly`,
which cannot modify Argo CD settings.
*ConfigMap `argocd-rbac-cm` example:*
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-rbac-cm
data:
policy.default: role:readonly
policy.csv: |
p, role:org-admin, applications, *, */*, allow
p, role:org-admin, clusters, get, *, allow
p, role:org-admin, repositories, get, *, allow
p, role:org-admin, repositories, create, *, allow
p, role:org-admin, repositories, update, *, allow
p, role:org-admin, repositories, delete, *, allow
g, your-github-org:your-team, role:org-admin
```
@@ -0,0 +1,168 @@
# Security
Argo CD has undergone rigorous internal security reviews and penetration testing to satisfy [PCI
compliance](https://www.pcisecuritystandards.org) requirements. The following are some security
topics and implementation details of Argo CD.
## Authentication
Authentication to Argo CD API server is performed exclusively using [JSON Web Tokens](https://jwt.io)
(JWTs). Username/password bearer tokens are not used for authentication. The JWT is obtained/managed
in one of the following ways:
1. For the local `admin` user, a username/password is exchanged for a JWT using the `/api/v1/session`
endpoint. This token is signed & issued by the Argo CD API server itself, and has no expiration.
When the admin password is updated, all existing admin JWT tokens are immediately revoked.
The password is stored as a bcrypt hash in the [`argocd-secret`](https://github.com/argoproj/argo-cd/blob/master/manifests/base/config/argocd-secret.yaml) Secret.
2. For Single Sign-On users, the user completes an OAuth2 login flow to the configured OIDC identity
provider (either delegated through the bundled Dex provider, or directly to a self-managed OIDC
   provider). This JWT is signed & issued by the IDP, and expiration and revocation are handled by
the provider. Dex tokens expire after 24 hours.
3. Automation tokens are generated for a project using the `/api/v1/projects/{project}/roles/{role}/token`
endpoint, and are signed & issued by Argo CD. These tokens are limited in scope and privilege,
   and can only be used to manage application resources in the project to which they belong. Project
JWTs have a configurable expiration and can be immediately revoked by deleting the JWT reference
ID from the project role.
## Authorization
Authorization is performed by iterating over the list of group memberships in a user's JWT groups claim,
and comparing each group against the roles/rules in the [RBAC](rbac.md) policy. Any matched rule
permits access to the API request.
## TLS
All network communication is performed over TLS including service-to-service communication between
the three components (argocd-server, argocd-repo-server, argocd-application-controller). The Argo CD
API server can enforce the use of TLS 1.2 using the flag: `--tlsminversion 1.2`.
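A sketch of wiring that flag into the `argocd-server` deployment command, following the same pattern used elsewhere in these docs:
```yaml
spec:
  template:
    spec:
      containers:
      - name: argocd-server
        command:
        - /argocd-server
        - --staticassets
        - /shared/app
        - --repo-server
        - argocd-repo-server:8081
        - --tlsminversion
        - "1.2"
```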
## Sensitive Information
### Secrets
Argo CD never returns sensitive data from its API, and redacts all sensitive data in API payloads
and logs. This includes:
* cluster credentials
* Git credentials
* OAuth2 client secrets
* Kubernetes Secret values
### External Cluster Credentials
To manage external clusters, Argo CD stores the credentials of the external cluster as a Kubernetes
Secret in the argocd namespace. This secret contains the K8s API bearer token associated with the
`argocd-manager` ServiceAccount created during `argocd cluster add`, along with connection options
to that API server (TLS configuration/certs, aws-iam-authenticator RoleARN, etc...).
The information is used to reconstruct a REST config and kubeconfig to the cluster used by Argo CD
services.
To rotate the bearer token used by Argo CD, the token secret can be deleted (e.g. using kubectl), which
causes Kubernetes to generate a new secret with a new bearer token. The new token can be re-inputted
to Argo CD by re-running `argocd cluster add`. Run the following commands against the *_managed_*
cluster:
```bash
# run using a kubeconfig for the externally managed cluster
kubectl delete secret argocd-manager-token-XXXXXX -n kube-system
argocd cluster add CONTEXTNAME
```
To revoke Argo CD's access to a managed cluster, delete the RBAC artifacts against the *_managed_*
cluster, and remove the cluster entry from Argo CD:
```bash
# run using a kubeconfig for the externally managed cluster
kubectl delete sa argocd-manager -n kube-system
kubectl delete clusterrole argocd-manager-role
kubectl delete clusterrolebinding argocd-manager-role-binding
argocd cluster rm https://your-kubernetes-cluster-addr
```
> NOTE: for AWS EKS clusters, [aws-iam-authenticator](https://github.com/kubernetes-sigs/aws-iam-authenticator)
is used to authenticate to the external cluster, which uses IAM roles in lieu of locally stored
tokens, so token rotation is not needed, and revocation is handled through IAM.
## Cluster RBAC
By default, Argo CD uses a [cluster-admin level role](https://github.com/argoproj/argo-cd/blob/master/manifests/base/application-controller/argocd-application-controller-role.yaml)
in order to:
1. watch & operate on cluster state
2. deploy resources to the cluster
Although Argo CD requires cluster-wide **_read_** privileges to resources in the managed cluster to
function properly, it does not necessarily need full **_write_** privileges to the cluster. The
ClusterRole used by argocd-server and argocd-application-controller can be modified such
that write privileges are limited to only the namespaces and resources that you wish Argo CD to
manage.
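A minimal sketch of what such a split might look like (these rules are illustrative, not the shipped defaults): cluster-wide read-only access, paired with full write access only in a namespace Argo CD should manage.
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: argocd-application-controller
rules:
# cluster-wide read access so Argo CD can watch and cache cluster state
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["get", "list", "watch"]
---
# write access granted only in a namespace Argo CD should manage
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: argocd-application-controller
  namespace: guestbook
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
```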
To fine-tune privileges for externally managed clusters, edit the `argocd-manager-role` ClusterRole:
```bash
# run using a kubeconfig for the externally managed cluster
kubectl edit clusterrole argocd-manager-role
```
To fine-tune privileges which Argo CD has against its own cluster (i.e. https://kubernetes.default.svc),
edit the following cluster roles in the cluster Argo CD is running in:
```bash
# run using a kubeconfig to the cluster Argo CD is running in
kubectl edit clusterrole argocd-server
kubectl edit clusterrole argocd-application-controller
```
!!! tip
If you want to deny ArgoCD access to a kind of resource then add it as an [excluded resource](declarative-setup.md#resource-exclusion).
## Auditing
As a GitOps deployment tool, the Git commit history provides a natural audit log of what changes
were made to application configuration, when they were made, and by whom. However, this audit log
only applies to what happened in Git and does not necessarily correlate one-to-one with events
that happen in a cluster. For example, User A could have made multiple commits to application
manifests, but User B may have only synced those changes to the cluster sometime later.
To complement the Git revision history, Argo CD emits Kubernetes Events of application activity,
indicating the responsible actor when applicable. For example:
```bash
$ kubectl get events
LAST SEEN FIRST SEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE
1m 1m 1 guestbook.157f7c5edd33aeac Application Normal ResourceCreated argocd-server admin created application
1m 1m 1 guestbook.157f7c5f0f747acf Application Normal ResourceUpdated argocd-application-controller Updated sync status: -> OutOfSync
1m 1m 1 guestbook.157f7c5f0fbebbff Application Normal ResourceUpdated argocd-application-controller Updated health status: -> Missing
1m 1m 1 guestbook.157f7c6069e14f4d Application Normal OperationStarted argocd-server admin initiated sync to HEAD (8a1cb4a02d3538e54907c827352f66f20c3d7b0d)
1m 1m 1 guestbook.157f7c60a55a81a8 Application Normal OperationCompleted argocd-application-controller Sync operation to 8a1cb4a02d3538e54907c827352f66f20c3d7b0d succeeded
1m 1m 1 guestbook.157f7c60af1ccae2 Application Normal ResourceUpdated argocd-application-controller Updated sync status: OutOfSync -> Synced
1m 1m 1 guestbook.157f7c60af5bc4f0 Application Normal ResourceUpdated argocd-application-controller Updated health status: Missing -> Progressing
1m 1m 1 guestbook.157f7c651990e848 Application Normal ResourceUpdated argocd-application-controller Updated health status: Progressing -> Healthy
```
These events can then be persisted for longer periods of time using tools such as
[Event Exporter](https://github.com/GoogleCloudPlatform/k8s-stackdriver/tree/master/event-exporter) or
[Event Router](https://github.com/heptiolabs/eventrouter).
## WebHook Payloads
Payloads from webhook events are considered untrusted. Argo CD only examines the payload to infer
the involved applications of the webhook event (e.g. which repo was modified), then refreshes
the related application for reconciliation. This refresh is the same refresh which occurs regularly
at three minute intervals, just fast-tracked by the webhook event.
## Reporting Vulnerabilities
Please report security vulnerabilities by e-mailing:
* [Jesse_Suen@intuit.com](mailto:Jesse_Suen@intuit.com)
* [Alexander_Matyushentsev@intuit.com](mailto:Alexander_Matyushentsev@intuit.com)
* [Edward_Lee@intuit.com](mailto:Edward_Lee@intuit.com)
@@ -0,0 +1,114 @@
# SSO Configuration
## Overview
Argo CD does not have any local users other than the built-in `admin` user. All other users are
expected to log in via SSO. There are two ways that SSO can be configured:
* Bundled Dex OIDC provider - use this option if your current provider does not support OIDC (e.g. SAML,
LDAP) or if you wish to leverage any of Dex's connector features (e.g. the ability to map GitHub
organizations and teams to OIDC groups claims).
* Existing OIDC provider - use this if you already have an OIDC provider which you are using (e.g.
Okta, OneLogin, Auth0, Microsoft), where you manage your users, groups, and memberships.
## Dex
Argo CD embeds and bundles [Dex](https://github.com/coreos/dex) as part of its installation, for the
purpose of delegating authentication to an external identity provider. Multiple types of identity
providers are supported (OIDC, SAML, LDAP, GitHub, etc...). SSO configuration of Argo CD requires
editing the `argocd-cm` ConfigMap with
[Dex connector](https://github.com/coreos/dex/tree/master/Documentation/connectors) settings.
This document describes how to configure Argo CD SSO using GitHub (OAuth2) as an example, but the
steps should be similar for other identity providers.
### 1. Register the application in the identity provider
In GitHub, register a new application. The callback address should be the `/api/dex/callback`
endpoint of your Argo CD URL (e.g. https://argocd.example.com/api/dex/callback).
![Register OAuth App](../assets/register-app.png "Register OAuth App")
After registering the app, you will receive an OAuth2 client ID and secret. These values will be
inputted into the Argo CD configmap.
![OAuth2 Client Config](../assets/oauth2-config.png "OAuth2 Client Config")
### 2. Configure Argo CD for SSO
Edit the argocd-cm configmap:
```bash
kubectl edit configmap argocd-cm -n argocd
```
* In the `url` key, input the base URL of Argo CD. In this example, it is https://argocd.example.com
* In the `dex.config` key, add the `github` connector to the `connectors` sub field. See Dex's
[GitHub connector](https://github.com/coreos/dex/blob/master/Documentation/connectors/github.md)
documentation for an explanation of the fields. A minimal config should populate the `clientID` and
`clientSecret` generated in Step 1.
* You will very likely want to restrict logins to one or more GitHub organizations. In the
`connectors.config.orgs` list, add one or more GitHub organizations. Any member of the org will
then be able to log in to Argo CD to perform management tasks.
```yaml
data:
url: https://argocd.example.com
dex.config: |
connectors:
# GitHub example
- type: github
id: github
name: GitHub
config:
clientID: aabbccddeeff00112233
clientSecret: $dex.github.clientSecret
orgs:
- name: your-github-org
# GitHub enterprise example
- type: github
id: acme-github
name: Acme GitHub
config:
hostName: github.acme.com
clientID: abcdefghijklmnopqrst
clientSecret: $dex.acme.clientSecret
orgs:
- name: your-github-org
```
After saving, the changes should take effect automatically.
NOTES:
* Any values which start with '$' will be looked up as a key of the same name (minus the $) in argocd-secret
to obtain the actual value. This allows you to store the `clientSecret` as a Kubernetes secret (see the sketch after these notes).
* There is no need to set `redirectURI` in the `connectors.config` as shown in the dex documentation.
Argo CD will automatically use the correct `redirectURI` for any OAuth2 connectors, to match the
correct external callback URL (e.g. https://argocd.example.com/api/dex/callback)
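For example, a sketch of storing the GitHub client secret referenced as `$dex.github.clientSecret` above (the key name must match what follows the `$`):
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: argocd-secret
  namespace: argocd
type: Opaque
stringData:
  dex.github.clientSecret: your-github-oauth2-client-secret
```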
## Existing OIDC Provider
To configure Argo CD to delegate authentication to your existing OIDC provider, add the OAuth2
configuration to the `argocd-cm` ConfigMap under the `oidc.config` key:
```yaml
data:
url: https://argocd.example.com
oidc.config: |
name: Okta
issuer: https://dev-123456.oktapreview.com
clientID: aaaabbbbccccddddeee
clientSecret: $oidc.okta.clientSecret
# Some OIDC providers require a separate clientID for different callback URLs.
# For example, if configuring Argo CD with self-hosted Dex, you will need a separate client ID
# for the 'localhost' (CLI) client to Dex. This field is optional. If omitted, the CLI will
# use the same clientID as the Argo CD server
cliClientID: vvvvwwwwxxxxyyyyzzzz
```
@@ -0,0 +1,68 @@
# Git Webhook Configuration
## Overview
Argo CD polls Git repositories every three minutes to detect changes to the manifests. To eliminate
this delay from polling, the API server can be configured to receive webhook events. Argo CD supports
Git webhook notifications from GitHub, GitLab, and BitBucket. The following explains how to configure
a Git webhook for GitHub, but the same process should be applicable to other providers.
### 1. Create The WebHook In The Git Provider
In your Git provider, navigate to the settings page where webhooks can be configured. The payload
URL configured in the Git provider should use the `/api/webhook` endpoint of your Argo CD instance
(e.g. [https://argocd.example.com/api/webhook]). If you wish to use a shared secret, input an
arbitrary value in the secret. This value will be used when configuring the webhook in the next step.
![Add Webhook](../assets/webhook-config.png "Add Webhook")
### 2. Configure Argo CD With The WebHook Secret (Optional)
Configuring a webhook shared secret is optional, since Argo CD will still refresh applications
related to the Git repository, even with unauthenticated webhook events. This is safe to do since
the contents of webhook payloads are considered untrusted, and will only result in a refresh of the
application (a process which already occurs at three-minute intervals). If Argo CD is publicly
accessible, then configuring a webhook secret is recommended to prevent a DDoS attack.
In the `argocd-secret` kubernetes secret, configure one of the following keys with the Git
provider's webhook secret configured in step 1.
| Provider | K8s Secret Key |
|---------- | ------------------------ |
| GitHub | `github.webhook.secret` |
| GitLab | `gitlab.webhook.secret` |
| BitBucket | `bitbucket.webhook.uuid` |
Edit the Argo CD kubernetes secret:
```bash
kubectl edit secret argocd-secret -n argocd
```
TIP: for ease of entering secrets, Kubernetes supports inputting secrets in the `stringData` field,
which saves you the trouble of base64 encoding the values and copying them to the `data` field.
Simply copy the shared webhook secret created in step 1 to the corresponding
GitHub/GitLab/BitBucket key under the `stringData` field:
```yaml
apiVersion: v1
kind: Secret
metadata:
name: argocd-secret
namespace: argocd
type: Opaque
data:
...
stringData:
# github webhook secret
github.webhook.secret: shhhh! it's a github secret
# gitlab webhook secret
gitlab.webhook.secret: shhhh! it's a gitlab secret
# bitbucket webhook secret
bitbucket.webhook.uuid: your-bitbucket-uuid
```
After saving, the changes should take effect automatically.
@@ -1,39 +0,0 @@
# Parameter Overrides
ArgoCD provides a mechanism to override the parameters of a ksonnet/helm app. This gives some extra
flexibility in having most of the application manifests defined in git, while leaving room for
*some* parts of the k8s manifests to be determined dynamically, or outside of git. It also serves as an
alternative way of redeploying an application by changing application parameters via ArgoCD, instead
of making the changes to the manifests in git.
**NOTE:** many consider this mode of operation an anti-pattern to GitOps, since the source of
truth becomes a union of the git repository and the application overrides. The ArgoCD parameter
overrides feature is provided mainly as a convenience to developers and is intended to be used more for
dev/test environments, rather than production environments.
To use parameter overrides, run the `argocd app set -p (COMPONENT=)PARAM=VALUE` command:
```
argocd app set guestbook -p guestbook=image=example/guestbook:abcd123
argocd app sync guestbook
```
The following are situations where parameter overrides would be useful:
1. A team maintains a "dev" environment, which needs to be continually updated with the latest
version of their guestbook application after every build at the tip of master. To address this use
case, the application would expose a parameter named `image`, whose value used in the `dev`
environment contains a placeholder value (e.g. `example/guestbook:replaceme`). The placeholder value
would be determined externally (outside of git), such as by a build system. Then, as part of the build
pipeline, the parameter value of `image` would be continually updated to the freshly built image
(e.g. `argocd app set guestbook -p guestbook=image=example/guestbook:abcd123`). A sync operation
would result in the application being redeployed with the new image.
2. A repository of helm manifests is already publicly available (e.g. https://github.com/helm/charts).
Since commit access to the repository is unavailable, it is useful to be able to install charts from
the public repository, customizing the deployment with different parameters, without resorting to
forking the repository to make the changes. For example, to install redis from the helm chart
repository and customize the database password, you would run:
```
argocd app create redis --repo https://github.com/helm/charts.git --path stable/redis --dest-server https://kubernetes.default.svc --dest-namespace default -p password=abc123
```
@@ -1,103 +0,0 @@
# RBAC
## Overview
The RBAC feature allows restricting access to ArgoCD resources. ArgoCD does not have its own user management system and has only one built-in user `admin`. The `admin` user is a
superuser and has full access. RBAC requires configuring [SSO](./sso.md) integration. Once [SSO](./sso.md) is connected, you can define RBAC roles and map roles to groups.
## Configure RBAC
RBAC configuration allows defining roles and groups. ArgoCD has two pre-defined roles: `role:readonly`, which provides read-only access to all resources, and `role:admin`,
which provides full access. Role definitions are available in the [builtin-policy.csv](../util/rbac/builtin-policy.csv) file.
Additional roles and groups can be configured in the `argocd-rbac-cm` ConfigMap. The example below configures a custom role `org-admin`. The role is assigned to any user which belongs to the
`your-github-org:your-team` group. All other users get `role:readonly` and cannot modify ArgoCD settings.
*ConfigMap `argocd-rbac-cm` example:*
```yaml
apiVersion: v1
data:
policy.default: role:readonly
policy.csv: |
p, role:org-admin, applications, *, */*
p, role:org-admin, applications/*, *, */*
p, role:org-admin, clusters, get, *
p, role:org-admin, repositories, get, *
p, role:org-admin, repositories/apps, get, *
p, role:org-admin, repositories, create, *
p, role:org-admin, repositories, update, *
p, role:org-admin, repositories, delete, *
g, your-github-org:your-team, role:org-admin
kind: ConfigMap
metadata:
name: argocd-rbac-cm
```
## Configure Projects
Argo projects allow grouping applications, which is useful if ArgoCD is used by multiple teams. Additionally, projects restrict the source repositories and destination
Kubernetes clusters which can be used by applications belonging to the project.
### 1. Create new project
The following command creates project `myproject`, which can deploy applications to namespace `default` of cluster `https://kubernetes.default.svc`. The only valid application source is the `https://github.com/argoproj/argocd-example-apps.git` repository.
```
argocd proj create myproject -d https://kubernetes.default.svc,default -s https://github.com/argoproj/argocd-example-apps.git
```
Project sources and destinations can be managed using the following commands:
```
argocd project add-destination
argocd project remove-destination
argocd project add-source
argocd project remove-source
```
### 2. Assign application to a project
Each application belongs to a project. By default, all applications belong to the `default` project, which provides access to any source repo/cluster. The application's project can be
changed using the `app set` command:
```
argocd app set guestbook-default --project myproject
```
### 3. Update RBAC rules
The following example configures admin access for two teams. Each team has access only to applications of one project (`team1` can access the `default` project and `team2` can access the
`myproject` project).
*ConfigMap `argocd-rbac-cm` example:*
```yaml
apiVersion: v1
data:
policy.default: ""
policy.csv: |
p, role:team1-admin, applications, *, default/*
p, role:team1-admin, applications/*, *, default/*
p, role:team1-admin, applications, *, myproject/*
p, role:team1-admin, applications/*, *, myproject/*
p, role:org-admin, clusters, get, *
p, role:org-admin, repositories, get, *
p, role:org-admin, repositories/apps, get, *
p, role:org-admin, repositories, create, *
p, role:org-admin, repositories, update, *
p, role:org-admin, repositories, delete, *
g, role:team1-admin, org-admin
g, role:team2-admin, org-admin
g, your-github-org:your-team1, role:team1-admin
g, your-github-org:your-team2, role:team2-admin
kind: ConfigMap
metadata:
name: argocd-rbac-cm
```